title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
Strange output when writing to stdout in console | 37,047,368 | 4 | 2016-05-05T09:46:28Z | 37,047,458 | 7 | 2016-05-05T09:51:04Z | [
"python",
"python-3.x",
"stdout",
"sys"
] | I was just playing around with `sys.stdout.write()` in a Python console when I noticed that this gives some strange output.
For every `write()` call, the number of characters written gets appended to the output.
`>>> sys.stdout.write('foo bar')`
for example results in
`foo bar7` being printed out.
Even passing an empty string results in an output of `0`.
This really only happens in a Python console, but not when executing a file with the same statements. More interestingly, it only happens for Python 3, but not for Python 2.
Although this isn't really an issue for me as it only occurs in a console, I really wonder why it behaves like this.
My Python version is 3.5.1 under Ubuntu 15.10. | Apart from writing out the given string, `write` also returns the number of characters written (actually bytes; try `sys.stdout.write('øllö')`). As the Python console prints the return value of each expression to stdout, the return value is appended to the actual printed value.
Because `write` doesn't append any newlines, it looks like the same string.
You can verify this with a script containing this:
```
#!/usr/bin/python
import sys
ret = sys.stdout.write("Greetings, human!\n")
print("return value: <{}>".format(ret))
```
When executed, this script should output:
```
Greetings, human!
return value: <18>
```
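Back in the interactive console, you can suppress the echoed return value by assigning it to a name, since the console only echoes the value of a bare expression:

```python
import sys

# Assigning the return value means the console has nothing to echo;
# n holds the number of characters written.
n = sys.stdout.write('foo bar\n')
```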
This behaviour is mentioned in the docs [here](https://docs.python.org/3.3/tutorial/inputoutput.html). |
What is the difference between a number n and (n) in python | 37,047,988 | 2 | 2016-05-05T10:17:30Z | 37,048,024 | 13 | 2016-05-05T10:19:45Z | [
"python",
"python-3.x",
"int"
] | ```
print(type(1))
print(type((1)))
```
gives me
```
<class 'int'>
```
Also
```
print(id(1))
print(id((1)))
```
gives me
```
1555424112
1555424112
```
However, `(1)` is recognized as an instance whereas `1` is not?
For example, on typing `(1).` in an editor I get a lot of methods such as `bit_length`, `conjugate`, `__add__()` etc., but not on typing `1.`.
What is the reason for the distinction? | `(..)` merely *groups* an expression. For integers, it also has the side-effect that the `.` character for *floating point decimals* can be disambiguated from the `.` attribute access operator.
So
```
1.bit_length()
```
is a syntax error, because `bit_length` is not a valid non-integer portion for a real number. But
```
(1).bit_length()
```
is valid Python syntax, because now the parser won't see the `.` token as part of the number literal.
Alternatively, add a space:
```
1 .bit_length()
``` |
Python 2.7.9 Print statement with list strange output | 37,048,372 | 2 | 2016-05-05T10:37:05Z | 37,048,433 | 8 | 2016-05-05T10:40:31Z | [
"python"
] | Why is the whole argument to the print function, parentheses included, printed when only the string should have been?
This is Python 2.7.9
```
import os
alist = [ 'A' ,'B']
print('Hello there')
print('The first item is ',alist[0])
print('Good Evening')
root@justin:/python# python hello.py
Hello there
('The first item is ', 'A')
Good Evening
``` | In Python 2, `print` isn't a function; it's a statement. When you write
```
print('The first item is ',alist[0])
```
it actually means "print me a tuple of 2 elements: `'The first item is '` and `alist[0]`".
It is equivalent to:
```
a = ('The first item is ',alist[0])
print a
```
if you want to print only strings you should remove the parentheses, like this:
```
print 'The first item is ',alist[0]
```
EDIT:
As pointed out in the comments, you can also add
```
from __future__ import print_function
```
This will make `print` a function like in Python 3, and your examples will work as you expected without any changes.
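For example (a minimal sketch; the `__future__` import has to be the first statement in the module):

```python
from __future__ import print_function

alist = ['A', 'B']
# print is now a function, so this prints the two values,
# not the repr of a tuple:
print('The first item is', alist[0])
```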
But I think it's useful to understand what is going on in both cases. |
Times two faster than bit shift? | 37,053,379 | 120 | 2016-05-05T14:35:16Z | 37,054,723 | 124 | 2016-05-05T15:42:39Z | [
"python",
"performance",
"python-3.x",
"bit-shift"
] | I was looking at the source of [sorted\_containers](https://github.com/grantjenks/sorted_containers/blob/master/sortedcontainers/sortedlist.py) and was surprised to see [this line](https://github.com/grantjenks/sorted_containers/blob/master/sortedcontainers/sortedlist.py#L72):
```
self._load, self._twice, self._half = load, load * 2, load >> 1
```
Here `load` is an integer. Why use bit shift in one place, and multiplication in another? It seems reasonable that bit shifting may be faster than integral division by 2, but why not replace the multiplication by a shift as well? I benchmarked the following cases:
1. (times, divide)
2. (shift, shift)
3. (times, shift)
4. (shift, divide)
and found that #3 is consistently faster than other alternatives:
```
# self._load, self._twice, self._half = load, load * 2, load >> 1
import random
import timeit
import pandas as pd
x = random.randint(10 ** 3, 10 ** 6)
def test_naive():
a, b, c = x, 2 * x, x // 2
def test_shift():
a, b, c = x, x << 1, x >> 1
def test_mixed():
a, b, c = x, x * 2, x >> 1
def test_mixed_swaped():
a, b, c = x, x << 1, x // 2
def observe(k):
print(k)
return {
'naive': timeit.timeit(test_naive),
'shift': timeit.timeit(test_shift),
'mixed': timeit.timeit(test_mixed),
'mixed_swapped': timeit.timeit(test_mixed_swaped),
}
def get_observations():
return pd.DataFrame([observe(k) for k in range(100)])
```
[](http://i.stack.imgur.com/mrSuI.png)
[](http://i.stack.imgur.com/Yo4wL.png)
The question:
Is my test valid? If so, why is (multiply, shift) faster than (shift, shift)?
I run Python 3.5 on Ubuntu 14.04.
**Edit**
Above is the original statement of the question. Dan Getz provides an excellent explanation in his answer.
For the sake of completeness, here are sample illustrations for larger `x` when multiplication optimizations do not apply.
[](http://i.stack.imgur.com/2btSH.png)
[](http://i.stack.imgur.com/ItTVG.png) | This seems to be because multiplication of small numbers is optimized in CPython 3.5, in a way that left shifts by small numbers are not. Positive left shifts always create a larger integer object to store the result, as part of the calculation, while for multiplications of the sort you used in your test, a special optimization avoids this and creates an integer object of the correct size. This can be seen in [the source code of Python's integer implementation](https://hg.python.org/cpython/file/580ddeccd689/Objects/longobject.c).
Because integers in Python are arbitrary-precision, they are stored as arrays of integer "digits", with a limit on the number of bits per integer digit. So in the general case, operations involving integers are not single operations, but instead need to handle the case of multiple "digits". In *pyport.h*, this bit limit [is defined as](https://hg.python.org/cpython/file/580ddeccd689/Include/pyport.h#l134) 30 bits on 64-bit platforms, or 15 bits otherwise. (I'll just call this 30 from here on to keep the explanation simple. But note that if you were using Python compiled for 32-bit, your benchmark's result would depend on whether `x` was less than 32,768 or not.)
When an operation's inputs and outputs stay within this 30-bit limit, the operation can be handled in an optimized way instead of the general way. The beginning of the [integer multiplication implementation](https://hg.python.org/cpython/file/580ddeccd689/Objects/longobject.c#l3401) is as follows:
```
static PyObject *
long_mul(PyLongObject *a, PyLongObject *b)
{
PyLongObject *z;
CHECK_BINOP(a, b);
/* fast path for single-digit multiplication */
if (Py_ABS(Py_SIZE(a)) <= 1 && Py_ABS(Py_SIZE(b)) <= 1) {
stwodigits v = (stwodigits)(MEDIUM_VALUE(a)) * MEDIUM_VALUE(b);
#ifdef HAVE_LONG_LONG
return PyLong_FromLongLong((PY_LONG_LONG)v);
#else
/* if we don't have long long then we're almost certainly
using 15-bit digits, so v will fit in a long. In the
unlikely event that we're using 30-bit digits on a platform
without long long, a large v will just cause us to fall
through to the general multiplication code below. */
if (v >= LONG_MIN && v <= LONG_MAX)
return PyLong_FromLong((long)v);
#endif
}
```
So when multiplying two integers where each fits in a 30-bit digit, this is done as a direct multiplication by the CPython interpreter, instead of working with the integers as arrays. ([`MEDIUM_VALUE()`](https://hg.python.org/cpython/file/580ddeccd689/Objects/longobject.c#l19) called on a positive integer object simply gets its first 30-bit digit.) If the result fits in a single 30-bit digit, [`PyLong_FromLongLong()`](https://hg.python.org/cpython/file/580ddeccd689/Objects/longobject.c#l1048) will notice this in a relatively small number of operations, and create a single-digit integer object to store it.
In contrast, left shifts are not optimized this way, and every left shift deals with the integer being shifted as an array. In particular, if you look at the source code for [`long_lshift()`](https://hg.python.org/cpython/file/580ddeccd689/Objects/longobject.c#l4101), in the case of a small but positive left shift, a 2-digit integer object is always created, if only to have its length truncated to 1 later: *(my comments in `/*** ***/`)*
```
static PyObject *
long_lshift(PyObject *v, PyObject *w)
{
/*** ... ***/
wordshift = shiftby / PyLong_SHIFT; /*** zero for small w ***/
remshift = shiftby - wordshift * PyLong_SHIFT; /*** w for small w ***/
oldsize = Py_ABS(Py_SIZE(a)); /*** 1 for small v > 0 ***/
newsize = oldsize + wordshift;
if (remshift)
++newsize; /*** here newsize becomes at least 2 for w > 0, v > 0 ***/
z = _PyLong_New(newsize);
/*** ... ***/
}
```
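You don't need the C source to observe the effect; a rough pure-Python check (illustrative only, exact numbers depend on your CPython build) compares the two operations for a single-digit and a multi-digit operand:

```python
import timeit

for x in (1000, 2 ** 40):  # one 30-bit digit vs. two digits
    # with a multi-digit x, the fast multiplication path no longer applies
    t_mul = timeit.timeit('x * 2', globals={'x': x}, number=100000)
    t_shl = timeit.timeit('x << 1', globals={'x': x}, number=100000)
    print('x = %d  mul: %.4fs  shift: %.4fs' % (x, t_mul, t_shl))
```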
---
# Integer division
You didn't ask about the worse performance of integer floor division compared to right shifts, because that fit your (and my) expectations. But dividing a small positive number by another small positive number is not as optimized as small multiplications, either. Every `//` computes both the quotient *and* the remainder using the function [`long_divrem()`](https://hg.python.org/cpython/file/580ddeccd689/Objects/longobject.c#l2376). This remainder is computed for a small divisor with [a multiplication](https://hg.python.org/cpython/file/580ddeccd689/Objects/longobject.c#l1537), and [is stored in a newly-allocated integer object](https://hg.python.org/cpython/file/580ddeccd689/Objects/longobject.c#l2406), which in this situation is immediately discarded. |
Is slicing really slower in Python 3.4? | 37,056,654 | 11 | 2016-05-05T17:26:04Z | 37,056,826 | 10 | 2016-05-05T17:34:50Z | [
"python",
"performance",
"python-2.7",
"python-3.4"
] | This [question](http://stackoverflow.com/q/37052139/6292850) and my [answer](http://stackoverflow.com/a/37055026/6292850) got me thinking about this peculiar difference between Python 2.7 and Python 3.4. Take the simple example code:
```
import timeit
import dis
c = 1000000
r = range(c)
def slow():
for pos in range(c):
r[pos:pos+3]
dis.dis(slow)
time = timeit.Timer(lambda: slow()).timeit(number=1)
print('%3.3f' % time)
```
In Python 2.7, I consistently get `0.165~` and for Python 3.4 I consistently get `0.554~`. The only significant difference between the disassemblies is that Python 2.7 emits the [`SLICE+3`](https://docs.python.org/2/library/dis.html#opcode-SLICE+3) byte code while Python 3.4 emits [`BUILD_SLICE`](https://docs.python.org/3/library/dis.html#opcode-BUILD_SLICE) followed by [`BINARY_SUBSCR`](https://docs.python.org/3/library/dis.html#opcode-BUILD_SLICE). Note that I've eliminated the candidates for potential slowdown from the other question, namely strings and the fact that `xrange` doesn't exist in Python 3.4 (which is supposed to be similar to the latter's `range` class anyways).
Using `itertools`' `islice` yields nearly identical timings between the two, so I highly suspect that it's the slicing that's the cause of the difference here.
Why is this happening and is there a link to an authoritative source documenting change in behavior?
EDIT: In response to the answer, I have wrapped the `range` objects in `list`, which did give a noticeable speedup. However as I increased the number of iterations in `timeit` I noticed that the timing differences became larger and larger. As a sanity check, I replaced the slicing with `None` to see what would happen.
500 iterations in `timeit`.
```
c = 1000000
r = list(range(c))
def slow():
for pos in r:
None
```
yields `10.688` and `9.915` respectively. Replacing the for loop with `for pos in islice(r, 0, c, 3)` yields `7.626` and `6.270` respectively. Replacing `None` with `r[pos]` yielded `20~` and `28~` respectively. `r[pos:pos+3]` yields `67.531` and `106.784` respectively.
As you can see, the timing differences are huge. Again, I'm still convinced the issue is not directly related to `range`. | On Python 2.7, you're iterating over a list and slicing a list. On Python 3.4, you're iterating over a `range` and **slicing a `range`**.
When I run a test with a list on both Python versions:
```
from __future__ import print_function
import timeit
print(timeit.timeit('x[5:8]', setup='x = list(range(10))'))
```
I get [0.243554830551 seconds on Python 2.7](http://ideone.com/S8Th1n) and [0.29082867689430714 seconds on Python 3.4](http://ideone.com/Pf9Nst), a much smaller difference.
---
The performance difference you see after eliminating the `range` object is much smaller. It comes primarily from two factors: addition is a bit slower on Python 3, and Python 3 needs to go through `__getitem__` with a slice object for slicing, while Python 2 has `__getslice__`.
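The `__getitem__`-with-a-slice-object path is easy to see from pure Python:

```python
x = list(range(10))
s = slice(5, 8)                       # what BUILD_SLICE constructs on Python 3
# x[5:8] is just sugar for __getitem__ with a slice object:
assert x[5:8] == x.__getitem__(s) == [5, 6, 7]
```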
I wasn't able to replicate the timing difference you saw with `r[pos]`; you may have had some confounding factor in that test. |
Stop at exception in my, not library code | 37,069,323 | 8 | 2016-05-06T09:51:07Z | 37,398,488 | 7 | 2016-05-23T18:56:23Z | [
"python",
"exception",
"exception-handling",
"ipython"
] | I'm developing an app using the Python library `urllib`, and it sometimes raises exceptions due to not being able to access a URL.
However, the exception is raised almost 6 levels into the standard library stack:
```
/home/user/Workspace/application/main.py in call(path)
11 headers={'content-type': 'application/json'},
12 data=b'')
---> 13 resp = urllib.request.urlopen(req) ####### THIS IS MY CODE
14 return json.loads(resp.read().decode('utf-8'))
/usr/lib/python3.4/urllib/request.py in urlopen(url, data, timeout, cafile, capath, cadefault, context)
159 else:
160 opener = _opener
--> 161 return opener.open(url, data, timeout)
162
163 def install_opener(opener):
/usr/lib/python3.4/urllib/request.py in open(self, fullurl, data, timeout)
461 req = meth(req)
462
--> 463 response = self._open(req, data)
464
465 # post-process response
/usr/lib/python3.4/urllib/request.py in _open(self, req, data)
479 protocol = req.type
480 result = self._call_chain(self.handle_open, protocol, protocol +
--> 481 '_open', req)
482 if result:
483 return result
/usr/lib/python3.4/urllib/request.py in _call_chain(self, chain, kind, meth_name, *args)
439 for handler in handlers:
440 func = getattr(handler, meth_name)
--> 441 result = func(*args)
442 if result is not None:
443 return result
/usr/lib/python3.4/urllib/request.py in http_open(self, req)
1208
1209 def http_open(self, req):
-> 1210 return self.do_open(http.client.HTTPConnection, req)
1211
1212 http_request = AbstractHTTPHandler.do_request_
/usr/lib/python3.4/urllib/request.py in do_open(self, http_class, req, **http_conn_args)
1182 h.request(req.get_method(), req.selector, req.data, headers)
1183 except OSError as err: # timeout error
-> 1184 raise URLError(err)
1185 r = h.getresponse()
1186 except:
URLError: <urlopen error [Errno 111] Connection refused>
```
I usually run the code in `ipython3` with the `%pdb` magic turned on so in case there is an exception I can inspect it immediately. However for this I have to go down the stack 6 levels to get to my code.
Is there a way to make my app crash pointing at my code directly? | I would go with modifying the code:
```
try:
resp = urllib.request.urlopen(req)
except Exception as e:
raise RuntimeError(e)
```
That way:
* %pdb moves you to your code,
* the original exception is preserved as argument of the "secondary" exception.
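In Python 3 you can also chain the exceptions explicitly with `raise ... from ...`; %pdb still stops in your frame, and the original exception is preserved as `__cause__`. A sketch, using a hypothetical `call()` wrapper like the one in the question:

```python
import urllib.request
import urllib.error

def call(url):
    try:
        return urllib.request.urlopen(url)
    except urllib.error.URLError as e:
        # explicit chaining: pdb stops here, the original URLError
        # stays reachable as the new exception's __cause__
        raise RuntimeError("request to {} failed".format(url)) from e
```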
You may also monkeypatch `urllib.request.urlopen()` function:
```
class MonkeyPatchUrllib(object):
def __enter__(self):
self.__urlopen = urllib.request.urlopen
urllib.request.urlopen = self
def __exit__(self, exception_type, exception_value, traceback):
urllib.request.urlopen = self.__urlopen
def __call__(self, *args, **kwargs):
try:
return self.__urlopen(*args, **kwargs)
except Exception as e:
raise RuntimeError(e)
```
Any time an exception is raised in a `urlopen()` call within the context manager's scope:
```
with MonkeyPatchUrllib():
#your code here
```
%pdb will move you only 1 level away from your code. |
Programmatically searching google in Python using custom search | 37,083,058 | 4 | 2016-05-07T00:04:53Z | 37,084,643 | 10 | 2016-05-07T04:55:24Z | [
"python",
"google-custom-search"
] | I have a snippet of code using the pygoogle Python module that allows me to programmatically search for a term on Google succinctly:
```
g = pygoogle(search_term)
g.pages = 1
results = g.get_urls()[0:10]
```
I just found out that this has been discontinued unfortunately and replaced by something called the google custom search. I looked at the other related questions on SO but didn't find anything I could use. I have two questions:
1) Does google custom search allow me to do exactly what I am doing in the three lines above?
2) If yes, where can I find example code to do exactly what I am doing above? If not, what is the alternative to doing what I did using pygoogle? | It is possible to do this. The setup is... not very straightforward, but the end result is that you can search the entire web from Python with a few lines of code.
There are 3 main steps in total.
# 1st step: get Google API key
The [pygoogle](http://pygoogle.sourceforge.net/)'s page states:
> Unfortunately, Google no longer supports the SOAP API for search, nor
> do they provide new license keys. In a nutshell, PyGoogle is pretty
> much dead at this point.
>
> You can use their AJAX API instead. Take a look here for sample code:
> <http://dcortesi.com/2008/05/28/google-ajax-search-api-example-python-code/>
... but you actually can't use AJAX API either. You have to get a Google API key. <https://developers.google.com/api-client-library/python/guide/aaa_apikeys> For simple experimental use I suggest "server key".
# 2nd step: setup Custom Search Engine so that you can search the entire web
Indeed, the old API is not available. The best new API that is available is Custom Search. It seems to support only searching within specific domains, however, after following [this SO answer](http://stackoverflow.com/a/11206266/4973698) you can search the whole web:
> 1. From the Google Custom Search homepage ( <http://www.google.com/cse/> ), click Create a Custom Search Engine.
> 2. Type a name and description for your search engine.
> 3. Under Define your search engine, in the Sites to Search box, enter at least one valid URL (For now, just put www.anyurl.com to get
> past this screen. More on this later ).
> 4. Select the CSE edition you want and accept the Terms of Service, then click Next. Select the layout option you want, and then click
> Next.
> 5. Click any of the links under the Next steps section to navigate to your Control panel.
> 6. In the left-hand menu, under Control Panel, click Basics.
> 7. In the Search Preferences section, select Search the entire web but emphasize included sites.
> 8. Click Save Changes.
> 9. In the left-hand menu, under Control Panel, click Sites.
> 10. Delete the site you entered during the initial setup process.
This approach is also recommended by Google: <https://support.google.com/customsearch/answer/2631040>
# 3rd step: install Google API client for Python
`pip install google-api-python-client`, more info here:
* repo: <https://github.com/google/google-api-python-client>
* more info: <https://developers.google.com/api-client-library/python/apis/customsearch/v1>
* complete docs: <https://api-python-client-doc.appspot.com/>
# 4th step (bonus): do the search
So, after setting this up, you can follow the code samples from few places:
* simple example: <https://github.com/google/google-api-python-client/blob/master/samples/customsearch/main.py>
* `cse()` function docs: <https://google-api-client-libraries.appspot.com/documentation/customsearch/v1/python/latest/customsearch_v1.cse.html>
and end up with this:
```
from googleapiclient.discovery import build
import pprint
my_api_key = "Google API key"
my_cse_id = "Custom Search Engine ID"
def google_search(search_term, api_key, cse_id, **kwargs):
service = build("customsearch", "v1", developerKey=api_key)
res = service.cse().list(q=search_term, cx=cse_id, **kwargs).execute()
return res['items']
results = google_search(
'stackoverflow site:en.wikipedia.org', my_api_key, my_cse_id, num=10)
for result in results:
pprint.pprint(result)
```
After some tweaking you could write some functions that behave exactly like your snippet, but I'll skip this step here. |
tf.shape() get wrong shape in tensorflow | 37,085,430 | 6 | 2016-05-07T06:47:17Z | 37,085,824 | 12 | 2016-05-07T07:33:11Z | [
"python",
"tensorflow"
] | I define a tensor like this:
`x = tf.get_variable("x", [100])`
But when I try to print shape of tensor :
`print( tf.shape(x) )`
I get **Tensor("Shape:0", shape=(1,), dtype=int32)**. Why isn't the result shape=(100)? | [tf.shape(input, name=None)](https://www.tensorflow.org/versions/r0.8/api_docs/python/array_ops.html#shape) returns a 1-D integer tensor representing the shape of `input`.
You're looking for: `x.get_shape()`, which returns the `TensorShape` of the `x` variable. |
The difference between np.random.seed(int) and np.random.seed(array_like)? | 37,085,669 | 7 | 2016-05-07T07:15:30Z | 37,086,802 | 8 | 2016-05-07T09:27:43Z | [
"python",
"numpy",
"random"
] | In Python's numpy library, the `np.random.seed` method can accept two different types of parameters: `int` and `array_like[int]`. What's the difference between them? For example: `np.random.seed(2)` and `np.random.seed([2013, 1, 4])`. | The state of the underlying [Mersenne Twister](https://en.wikipedia.org/wiki/Mersenne_Twister) PRNG is very large, 624 32-bit integers, to be exact. If given an integer seed, the initialization routine will run a smaller PRNG to expand that single 32-bit integer out to the full 624-element state. This does mean that you can't access the vast majority of the possible states.
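Either way, seeding is deterministic: re-seeding with the same integer reproduces the same stream. A quick check (assuming NumPy is installed):

```python
import numpy as np

np.random.seed(2)
first = np.random.randint(0, 100, size=5)
np.random.seed(2)
second = np.random.randint(0, 100, size=5)
# the same 32-bit seed expands to the same 624-element state
assert (first == second).all()
```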
Similarly, if given a sequence of integers as the seed, then a different smaller PRNG will use that to expand out to 624 elements, but one that can use the whole array that you pass it. This lets you access the whole space of initial states, if such a thing matters to you. This algorithm is shared between the standard library's `random` module and `numpy.random`. |
Difference between defining typing.Dict and dict? | 37,087,457 | 9 | 2016-05-07T10:36:37Z | 37,087,556 | 12 | 2016-05-07T10:45:42Z | [
"python",
"dictionary",
"type-hinting"
] | I am practicing using type hints in Python 3.5. One of my colleague uses `typing.Dict`:
```
import typing
def change_bandwidths(new_bandwidths: typing.Dict,
user_id: int,
user_name: str) -> bool:
print(new_bandwidths, user_id, user_name)
return False
def my_change_bandwidths(new_bandwidths: dict,
user_id: int,
user_name: str) ->bool:
print(new_bandwidths, user_id, user_name)
return True
def main():
my_id, my_name = 23, "Tiras"
simple_dict = {"Hello": "Moon"}
change_bandwidths(simple_dict, my_id, my_name)
new_dict = {"new": "energy source"}
my_change_bandwidths(new_dict, my_id, my_name)
if __name__ == "__main__":
main()
```
Both of them work just fine; there doesn't appear to be a difference.
I have read the [`typing` module documentation](https://docs.python.org/3/library/typing.html).
Between `typing.Dict` and `dict`, which one should I use in the program?
However, `typing.Dict` is a [*Generic type*](https://docs.python.org/3/library/typing.html#generics) that lets you specify the type of the keys and values *too*, making it more flexible:
```
def change_bandwidths(new_bandwidths: typing.Dict[str, str],
user_id: int,
user_name: str) -> bool:
```
As such, it could well be that at some point in your project lifetime you want to define the dictionary argument a little more precisely, at which point expanding `typing.Dict` to `typing.Dict[key_type, value_type]` is a 'smaller' change than replacing `dict`.
You can make this even more generic by using [`Mapping`](https://docs.python.org/3/library/typing.html#typing.Mapping) or [`MutableMapping`](https://docs.python.org/3/library/typing.html#typing.MutableMapping) types here; since your function doesn't need to *alter* the mapping, I'd stick with `Mapping`. A `dict` is one mapping, but you could create other objects that also satisfy the mapping interface, and your function might well still work with those:
```
def change_bandwidths(new_bandwidths: typing.Mapping[str, str],
user_id: int,
user_name: str) -> bool:
```
Now you are clearly telling other users of this function that your code won't actually *alter* the `new_bandwidths` mapping passed in.
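For instance, a read-only `types.MappingProxyType` is not a `dict`, but it satisfies the mapping interface, so the `Mapping`-annotated version accepts it naturally (a small sketch reusing the function from the question):

```python
import types
import typing

def change_bandwidths(new_bandwidths: typing.Mapping[str, str],
                      user_id: int,
                      user_name: str) -> bool:
    print(new_bandwidths, user_id, user_name)
    return False

# not a dict, but a perfectly valid Mapping:
result = change_bandwidths(types.MappingProxyType({"Hello": "Moon"}), 23, "Tiras")
```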
Your actual implementation is merely expecting an object that is printable. That may be a test implementation, but as it stands your code would continue to work if you used `new_bandwidths: typing.Any`, because any object in Python is printable. |
Difference between Variable and get_variable in TensorFlow | 37,098,546 | 7 | 2016-05-08T09:57:28Z | 37,102,908 | 7 | 2016-05-08T17:44:49Z | [
"python",
"tensorflow"
] | As far as I know, `Variable` is the default operation for making a variable, and `get_variable` is mainly used for weight sharing.
On the one hand, there are some people suggesting using `get_variable` instead of the primitive `Variable` operation whenever you need a variable. On the other hand, I rarely see any use of `get_variable` in TensorFlow's official documents and demos.
Thus I want to know some rules of thumbs on how to correctly use these two mechanisms. Are there any "standard" principles? Thanks! | I'd recommend to always use `tf.get_variable(...)` -- it will make it way easier to refactor your code if you need to share variables at any time, e.g. in a multi-gpu setting (see the multi-gpu CIFAR example). There is no downside to it.
Pure `tf.Variable` is lower-level; at some point `tf.get_variable()` did not exist so some code still uses the low-level way. |
Nested for in Python | 37,102,022 | 2 | 2016-05-08T16:17:01Z | 37,102,053 | 7 | 2016-05-08T16:20:22Z | [
"python",
"python-2.7",
"nested"
] | I have a list and I want to **iterate** over it, and then, in a nested `for` inside the first one, iterate over the same list starting from the position after the one the outer loop is currently reading.
In a language like Java it would be:
```
int[10] array;
for (int i=0; i < array.length(); i++)
for (int j=i+1; j < array.length(); j ++)
//do something with the array comparing a[i] and a[j]
```
How could I do that in Python?
I tried this:
```
for a in array:
del array[0]
for a2 in array:
# do something with the array comparing a and a2
```
But it only works in the first iteration... any help? | ```
for i in range(0,len(array)):
for j in range(i+1,len(array)):
#do something with array[i] and array[j]
``` |
Detect the first unique rows in multiple numpy 2-d arrays | 37,104,013 | 4 | 2016-05-08T19:33:13Z | 37,104,973 | 7 | 2016-05-08T21:24:34Z | [
"python",
"arrays",
"numpy",
"scipy"
] | I have multiple numpy 2-d arrays which I want to compare rowwise. The output of my function should be a numpy 2-d array representing all rows of the three inputs arrays. I want to be able to detect the first time that a row occurs, every second or third duplicate row should be flagged as False in the output. It is not possible to have duplicate rows within a single array.
If it is possible I would like to avoid the use of loops, as they slow down the calculation speed.
Example:
```
array1 = array([[444, 427],
[444, 428],
[444, 429],
[444, 430],
[445, 421]], dtype=uint64)
array2 = array([[446, 427],
[446, 440],
[444, 429],
[444, 432],
[445, 421]], dtype=uint64)
array3 = array([[447, 427],
[446, 441],
[444, 429],
[444, 432],
[445, 421]], dtype=uint64)
# output
array([[True, True, True, True, True],
[ True, True, False, True, False],
[ True, True, False, False, False]], dtype=bool)
```
Any ideas? | Here's a fast vectorized approach:
```
def find_dupe_rows(*arrays):
A = np.vstack(arrays)
rtype = np.dtype((np.void, A.dtype.itemsize*A.shape[1]))
_, first_idx = np.unique(A.view(rtype), return_index=True)
out = np.zeros(A.shape[0], np.bool)
out[first_idx] = True
return out.reshape(len(arrays), -1)
```
Example usage:
```
print(find_dupe_rows(array1, array2, array3))
# [[ True True True True True]
# [ True True False True False]
# [ True True False False False]]
```
---
To break that down a bit:
1. Stack the three subarrays to produce a `(15, 2)` array:
```
A = np.vstack((array1, array2, array3))
```
2. Use [`np.unique`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.unique.html) together with [this trick](http://stackoverflow.com/a/16973510/1461210) to efficiently find the indices where each unique row first occurs within `A`:
```
rtype = np.dtype((np.void, A.dtype.itemsize * A.shape[1]))
_, first_idx = np.unique(A.view(rtype), return_index=True)
```
3. Every row that isn't the first occurrence of a unique row can be treated as a duplicate:
```
out = np.zeros(A.shape[0], np.bool) # output is False by default
out[first_idx] = True # set first occurrences to True
```
4. Finally, reshape this boolean vector to `(narrays, nrows)`, as per your example output:
```
return out.reshape(len(arrays), -1)
``` |
How to add regularizations in TensorFlow? | 37,107,223 | 11 | 2016-05-09T03:04:56Z | 37,143,333 | 8 | 2016-05-10T15:47:09Z | [
"python",
"neural-network",
"tensorflow",
"deep-learning"
] | I found in many available neural network code implemented using TensorFlow that regularization terms are often implemented by manually adding an additional term to loss value.
My questions are:
1. Is there a more elegant or recommended way of regularization than doing it manually?
2. I also find that `get_variable` has an argument `regularizer`. How should it be used? According to my observation, if we pass a regularizer to it (such as `tf.contrib.layers.l2_regularizer`), a tensor representing the regularization term will be computed and added to a graph collection named `tf.GraphKeys.REGULARIZATION_LOSSES`. Will that collection be automatically used by TensorFlow (e.g. by optimizers when training), or is it expected that I should use that collection myself? | As you say in the second point, using the `regularizer` argument is the recommended way. You can use it in `get_variable`, or set it once in your `variable_scope` and have all your variables regularized.
The losses are collected in the graph, and you need to manually add them to your cost function like this.
```
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
reg_constant = 0.01 # Choose an appropriate one.
loss = my_normal_loss + reg_constant * sum(reg_losses)
```
Hope that helps! |
Return tuple with smallest y value from list of tuples | 37,110,652 | 10 | 2016-05-09T08:10:03Z | 37,110,813 | 15 | 2016-05-09T08:19:07Z | [
"python",
"list",
"tuples",
"min"
] | I am trying to return the tuple with the smallest second-index value (y value) from a list of tuples. If there are two tuples with the lowest y value, then select the tuple with the largest x value (i.e. first index).
For example, suppose I have the tuple:
```
x = [(2, 3), (4, 3), (6, 9)]
```
The value returned should be `(4, 3)`. `(2, 3)` is a candidate, as `x[0][1]` is `3` (the same as `x[1][1]`); however, `x[0][0]` is smaller than `x[1][0]`.
So far I have tried:
```
start_point = min(x, key = lambda t: t[1])
```
However, this only checks the second index, and does not compare two tuples' first indexes when their second indexes are equal. | Include the `x` value in a tuple returned from the key; this second element in the key will then be used when there is a tie for the `y` value. To invert the comparison (from smallest to largest), just negate that value:
```
min(x, key=lambda t: (t[1], -t[0]))
```
After all, `-4` is smaller than `-2`.
Demo:
```
>>> x = [(2, 3), (4, 3), (6, 9)]
>>> min(x, key=lambda t: (t[1], -t[0]))
(4, 3)
``` |
Where does Anaconda Python install on Windows? | 37,117,571 | 5 | 2016-05-09T13:52:44Z | 37,117,572 | 17 | 2016-05-09T13:52:44Z | [
"python",
"pydev",
"anaconda"
] | I installed Anaconda for Python 2.7 on my Windows machine and wanted to add the Anaconda interpreter to PyDev, but quick googling couldn't find the default place where Anaconda installs, and searching SO didn't turn up anything useful.
Where does Anaconda 4.0 install on Windows 7?
***Suggestions this is a duplicate question***
There have been a couple of users (out of the hundreds that have viewed this question) that have suggested that this is a duplicate of the following question [Is there an equivalent of 'which' on the Windows command line?](http://stackoverflow.com/questions/304319/is-there-an-equivalent-of-which-on-the-windows-command-line) , and SO has a nifty banner that now asks me to edit the question and explain the difference.
I had no idea that "which" was even a command in 'nix until the duplicate question suggestion came up, or that "where" was a Windows command until I started researching this question. The only reason I know now is that I didn't want to wait hours for a search of my non-indexed C: drive and went looking.
I think it is a stretch to say that someone who is searching for the install location of a Windows program should think first to search for a 'nix command, or that we should expect Windows users to know extensive amounts of 'nix. That leaves me having to deal with the "well if two questions share the same answer then they are duplicates" argument.
The idea that questions and answers share some version of the transitive property of equality is one of the silliest insinuations I've seen here on SO. That's like telling someone who forgot to close a for-loop bracket in C++ code that their question is a duplicate of a Java-tagged question where somebody forgot to close their while-loop brackets. The answer to both would be to add an extra "}" at the end of the loop, but I would hardly consider them the same question just because they shared that answer. | To find where Anaconda was installed I used the "where" command on the command line in Windows.
```
C:\>where anaconda
```
which for me returned:
> C:\Users\User-Name\AppData\Local\Continuum\Anaconda2\Scripts\anaconda.exe
Which allowed me to find the Anaconda Python interpreter at
> C:\Users\User-Name\AppData\Local\Continuum\Anaconda2\python.exe
to update PyDev |
installing cPickle with python 3.5 | 37,132,899 | 3 | 2016-05-10T08:20:38Z | 37,138,791 | 8 | 2016-05-10T12:37:19Z | [
"python",
"docker",
"pickle",
"python-3.5"
] | This might be silly, but I am unable to install `cPickle` with the Python 3.5 Docker image.
**Dockerfile**
```
FROM python:3.5-onbuild
```
**requirements.txt**
```
cpickle
```
When I try to build the image
```
$ docker build -t sample .
Sending build context to Docker daemon 3.072 kB
Step 1 : FROM python:3.5-onbuild
# Executing 3 build triggers...
Step 1 : COPY requirements.txt /usr/src/app/
Step 1 : RUN pip install --no-cache-dir -r requirements.txt
---> Running in 016c35a032ee
Collecting cpickle (from -r requirements.txt (line 1))
Could not find a version that satisfies the requirement cpickle (from -r requirements.txt (line 1)) (from versions: )
No matching distribution found for cpickle (from -r requirements.txt (line 1))
You are using pip version 7.1.2, however version 8.1.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
The command '/bin/sh -c pip install --no-cache-dir -r requirements.txt' returned a non-zero code: 1
``` | `cPickle` comes with the standard library… in Python 2.x. You are on Python 3.x, so if you want `cPickle`, you can do this:
```
>>> import _pickle as cPickle
```
However, in 3.x, it's easier just to use `pickle`.
No need to install anything. If something requires `cPickle` in python 3.x, then that's probably a bug. |
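In Python 3 the C accelerator is used automatically behind the plain `pickle` module, so a round trip needs no `cPickle` import at all (a minimal sketch):
```
import pickle

data = {'a': [1, 2, 3], 'b': ('x', 'y')}
blob = pickle.dumps(data)       # bytes; uses the C implementation automatically in 3.x
restored = pickle.loads(blob)
print(restored == data)  # True
```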
What is the correct way to report an error in a Python unittest in the setUp method? | 37,134,320 | 8 | 2016-05-10T09:24:47Z | 37,374,945 | 8 | 2016-05-22T13:23:33Z | [
"python",
"unit-testing",
"testing",
"assert",
"arrange-act-assert"
] | I've read some conflicting advice on the use of `assert` in the `setUp` method of a Python unit test. I can't see the harm in failing a test if a precondition that test relies on fails.
For example:
```
import unittest
class MyProcessor():
"""
This is the class under test
"""
def __init__(self):
pass
def ProcessData(self, content):
return ['some','processed','data','from','content'] # Imagine this could actually pass
class Test_test2(unittest.TestCase):
def LoadContentFromTestFile(self):
return None # Imagine this is actually doing something that could pass.
def setUp(self):
self.content = self.LoadContentFromTestFile()
self.assertIsNotNone(self.content, "Failed to load test data")
self.processor = MyProcessor()
def test_ProcessData(self):
results = self.processor.ProcessData(self.content)
self.assertGreater(results, 0, "No results returned")
if __name__ == '__main__':
unittest.main()
```
This seems like a reasonable thing to do to me, i.e. make sure the test is able to run. When this fails because of the setup condition, we get:
```
F
======================================================================
FAIL: test_ProcessData (__main__.Test_test2)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Projects\Experiments\test2.py", line 21, in setUp
self.assertIsNotNone(self.content, "Failed to load test data")
AssertionError: unexpectedly None : Failed to load test data
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (failures=1)
The purpose of `setUp` is to reduce the [boilerplate code](https://en.wikipedia.org/wiki/Boilerplate_code) that would otherwise be repeated between the tests in the test class during the Arrange phase.
In the Arrange phase you set up everything needed for running the tested code. This includes any initialization of dependencies, mocks and data needed for the test to run.
Based on the above paragraphs you should not assert anything in your `setUp` method.
So as mentioned earlier: **If you can't create the test precondition, then your test is broken.** To avoid situations like this, Roy Osherove wrote a great book called [The Art Of Unit Testing](http://rads.stackoverflow.com/amzn/click/1617290890) (for full disclosure, Lior Friedman (he was Roy's boss) is a friend of mine and I worked closely with them for more than 2 years, so I am a little bit biased...)
Basically there are only a few reasons to have an interaction with external resources during the Arrange phase (or with things which may cause an exception); most of them (if not all of them) are related to integration tests.
Back to your example: there is a pattern for structuring tests where
you need to load an external resource (for all/most of them). Just a side note: before you decide to apply this pattern, make sure that you can't keep this content as a static resource in your UT's class; if other test classes need to use this resource, extract it into a module.
The following pattern decreases the possibility of failure, since you make fewer calls to the external resource:
```
class TestClass(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # setUpClass must be a classmethod; it runs once for the whole class.
        # Since external resources such as other servers can provide bad content,
        # you can verify here that the content is valid and prevent the tests
        # from running; however, in most cases you shouldn't.
        cls.externalResourceContent = loadContentFromExternalResource()

    def setUp(self):
        self.content = self.copyContentForTest()
```
Pros:
1. less chances to failure
2. prevent inconsistency behavior (1. something/one has edited the external resource. 2. you failed to load the external resource in some of your tests)
3. faster execution
Cons:
1. the code is more complex |
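A runnable version of that pattern might look like this (the loader and its data are hypothetical stand-ins for a real external resource):
```
import copy
import unittest

def load_content_from_external_resource():
    # Hypothetical loader standing in for a slow network/file call
    return {'rows': [1, 2, 3]}

class TestWithSharedResource(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Fetch the expensive external resource once per test class
        cls.external_resource_content = load_content_from_external_resource()

    def setUp(self):
        # Give each test its own copy so mutations cannot leak between tests
        self.content = copy.deepcopy(self.external_resource_content)

    def test_mutation_does_not_leak(self):
        self.content['rows'].append(99)
        self.assertEqual(self.external_resource_content['rows'], [1, 2, 3])

    def test_rows_present(self):
        self.assertEqual(self.content['rows'], [1, 2, 3])
```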
List unhashable, but tuple hashable? | 37,136,878 | 3 | 2016-05-10T11:12:22Z | 37,136,961 | 8 | 2016-05-10T11:16:15Z | [
"python",
"list",
"python-2.7",
"hash",
"tuples"
] | In [How to hash lists?](https://stackoverflow.com/questions/37125539/how-to-hash-lists) I was told that I should convert to a tuple first, e.g. `[1,2,3,4,5]` to `(1,2,3,4,5)`.
So the first cannot be hashed, but the second can. Why\*?
---
\*I am not really looking for a detailed technical explanation, but rather for an intuition | Mainly, because tuples are immutable. Assume the following works:
```
>>> l = [1, 2, 3]
>>> t = (1, 2, 3)
>>> x = {l: 'a list', t: 'a tuple'}
```
Now, what happens when you do `l.append(4)`? You've modified the key in your dictionary! From afar! If you're familiar with how hashing algorithms work, this should frighten you. Tuples, on the other hand, are absolutely immutable. `t += (1,)` might look like it's modifying the tuple, but really it's not: it is simply creating a *new* tuple, leaving your dictionary key unchanged.
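A quick demonstration of the difference (an interpreter session rewritten as a script):
```
t = (1, 2, 3)
d = {t: 'a tuple'}
print(d[(1, 2, 3)])      # 'a tuple' -- equal tuples hash alike

try:
    d[[1, 2, 3]] = 'a list'
except TypeError as err:
    print(err)           # unhashable type: 'list'
```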
Custom chained comparisons | 37,140,933 | 15 | 2016-05-10T14:07:17Z | 37,141,205 | 8 | 2016-05-10T14:18:33Z | [
"python"
] | Python allows expressions like `x > y > z`, which, according to the docs, is equivalent to `(x > y) and (y > z)` except `y` is only evaluated once. (<https://docs.python.org/3/reference/expressions.html>)
However, this seems to break if I customize comparison functions. E.g. suppose I have the following class: (Apologies for the large block, but once you read the `__eq__` method, the rest is trivial.)
```
class CompareList(list):
def __repr__(self):
return "CompareList([" + ",".join(str(x) for x in self) + "])"
def __eq__(self, other):
if isinstance(other, list):
return CompareList(self[idx] == other[idx] for idx in xrange(len(self)))
else:
return CompareList(x == other for x in self)
def __ne__(self, other):
if isinstance(other, list):
return CompareList(self[idx] != other[idx] for idx in xrange(len(self)))
else:
return CompareList(x != other for x in self)
def __gt__(self, other):
if isinstance(other, list):
return CompareList(self[idx] > other[idx] for idx in xrange(len(self)))
else:
return CompareList(x > other for x in self)
def __ge__(self, other):
if isinstance(other, list):
return CompareList(self[idx] >= other[idx] for idx in xrange(len(self)))
else:
return CompareList(x >= other for x in self)
def __lt__(self, other):
if isinstance(other, list):
return CompareList(self[idx] < other[idx] for idx in xrange(len(self)))
else:
return CompareList(x < other for x in self)
def __le__(self, other):
if isinstance(other, list):
return CompareList(self[idx] <= other[idx] for idx in xrange(len(self)))
else:
return CompareList(x <= other for x in self)
```
Now I can do fun stuff like `CompareList([10, 5]) > CompareList([5, 10])` and it will correctly return `CompareList([True,False])`
However, chaining these operations doesn't work nicely:
```
low = CompareList([1])
high = CompareList([2])
print(low > high > low) # returns CompareList([True])
```
Why not? What happens under the hood here? I know it isn't equivalent to `(low > high) > low` = `(False > low)` (because that would return False). It could be `low > (high > low)` but that wouldn't make sense in terms of operator precedence (normally left-to-right). | > Python allows expressions like `x > y > z`, which, according to the docs, is equivalent to `(x > y) and (y > z)` except `y` is only evaluated once.
According to this, `low > high > low` will be equivalent to `(low > high) and (high > low)`.
```
>>> x = low > high # CompareList([False])
>>> y = high > low # CompareList([True])
>>> x and y
CompareList([True])
```
More from the documentation on [x and y](https://docs.python.org/3/library/stdtypes.html#boolean-operations-and-or-not):
> `x and y`: if `x` is false, then `x`, else `y`
In the above case:
```
>>> x is False
False
>>> x if x is False else y # x and y
CompareList([True])
```
so when you do `x and y` it returns the `y` which is `CompareList([True])`. |
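The surprising part is that `CompareList([False])` is truthy, because it is a non-empty list; plain lists standing in for the custom class show the same behavior:
```
# Stand-ins for the CompareList results of the two comparisons:
x = [False]          # low > high, i.e. CompareList([False])
y = [True]           # high > low, i.e. CompareList([True])

print(bool(x))       # True -- non-empty list, regardless of its contents
print(x and y)       # [True] -- since x is truthy, `and` returns y
```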
Mocking boto3 S3 client method Python | 37,143,597 | 4 | 2016-05-10T15:59:35Z | 37,144,161 | 8 | 2016-05-10T16:29:30Z | [
"python",
"mocking",
"boto",
"boto3",
"botocore"
] | I'm trying to mock a singular method on the boto3 S3 client object to throw an exception, but I need all other methods of this class to work as normal.
This is so I can test a single exception case when an error occurs performing an [upload\_part\_copy](http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.upload_part_copy)
**1st Attempt**
```
import boto3
from mock import patch
with patch('botocore.client.S3.upload_part_copy', side_effect=Exception('Error Uploading')) as mock:
client = boto3.client('s3')
# Should return actual result
o = client.get_object(Bucket='my-bucket', Key='my-key')
# Should return mocked exception
e = client.upload_part_copy()
```
However this gives the following error:
```
ImportError: No module named S3
```
**2nd Attempt**
After looking at the botocore.client.py source code I found that it is doing something clever and the method `upload_part_copy` does not exist. I found that it seems to call `BaseClient._make_api_call` instead so I tried to mock that
```
import boto3
from mock import patch
with patch('botocore.client.BaseClient._make_api_call', side_effect=Exception('Error Uploading')) as mock:
client = boto3.client('s3')
# Should return actual result
o = client.get_object(Bucket='my-bucket', Key='my-key')
# Should return mocked exception
e = client.upload_part_copy()
```
This throws an exception, but on the `get_object` call, which I want to avoid.
Any ideas about how I can only throw the exception on the `upload_part_copy` method? | Botocore has a client stubber you can use for just this purpose: [docs](http://botocore.readthedocs.io/en/latest/reference/stubber.html).
Here's an example of putting an error in:
```
import boto3
from botocore.stub import Stubber
client = boto3.client('s3')
stubber = Stubber(client)
stubber.add_client_error('upload_part_copy')
stubber.activate()
# Will raise a ClientError
client.upload_part_copy()
```
Here's an example of putting a normal response in. Additionally, the stubber can now be used in a context. It's important to note that the stubber will verify, so far as it is able, that your provided response matches what the service will actually return. This isn't perfect, but it will protect you from inserting total nonsense responses.
```
import boto3
from botocore.stub import Stubber
client = boto3.client('s3')
stubber = Stubber(client)
list_buckets_response = {
"Owner": {
"DisplayName": "name",
"ID": "EXAMPLE123"
},
"Buckets": [{
"CreationDate": "2016-05-25T16:55:48.000Z",
"Name": "foo"
}]
}
expected_params = {}
stubber.add_response('list_buckets', list_buckets_response, expected_params)
with stubber:
response = client.list_buckets()
assert response == list_buckets_response
``` |
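If you still want the "patch one method, leave the rest real" idea from the second attempt, it can be sketched with `unittest.mock` on a stand-in client (the class below is hypothetical, not the real boto3 client):
```
from unittest import mock

class FakeClient:
    # Hypothetical stand-in for the boto3 S3 client
    def get_object(self):
        return 'real object'

    def upload_part_copy(self):
        return 'real upload'

def failing_upload(self):
    raise Exception('Error Uploading')

client = FakeClient()
with mock.patch.object(FakeClient, 'upload_part_copy', failing_upload):
    print(client.get_object())        # 'real object' -- untouched
    try:
        client.upload_part_copy()
    except Exception as err:
        print(err)                    # Error Uploading
```
Outside the `with` block both methods behave normally again.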
Why is it faster to break rather than to raise an exception? | 37,154,381 | 19 | 2016-05-11T06:12:57Z | 37,155,232 | 26 | 2016-05-11T06:57:08Z | [
"python",
"python-3.x"
] | After checking a few simple tests, it seems as if it might be faster to break from a loop to end a generator rather than to raise a StopIteration exception. Why is this the case if the standard and accepted method of stopping a generator is using the exception. [source](https://wiki.python.org/moin/Generators)
```
In [1]: def f():
....: for i in range(1024):
....: yield None
....: break
....:
In [2]: def g():
....: for i in range(1024):
....: yield None
....: raise StopIteration
....:
In [3]: %timeit for i in f(): pass
1000000 loops, best of 3: 1.22 µs per loop
In [4]: %timeit for i in g(): pass
100000 loops, best of 3: 5.9 µs per loop
In [5]: %timeit for i in f(): pass
1000000 loops, best of 3: 1.22 µs per loop
In [6]: %timeit for i in g(): pass
100000 loops, best of 3: 5.82 µs per loop
``` | > Why is this the case if the standard and accepted method of stopping a generator is using the exception.
The exception `StopIteration` is raised only when the generator has nothing to produce any more. And, it is not a standard way of stopping a generator midway.
Here are two statements from the documentation on generators about how to stop them properly:
1. [PEP 479 -- Change StopIteration handling inside generators](https://www.python.org/dev/peps/pep-0479/):
> ... the proposal also clears up the confusion about how to terminate a
> generator: the proper way is `return` , not `raise StopIteration`.
2. [PEP 255 -- Simple Generators](https://www.python.org/dev/peps/pep-0255/)
> Q. Why allow `"return"` at all? Why not force termination to be spelled
> `"raise StopIteration"`?
>
> A. The mechanics of `StopIteration` are low-level details, much like the
> mechanics of IndexError in Python 2.1: the implementation needs to
> do *something* well-defined under the covers, and Python exposes
> these mechanisms for advanced users. That's not an argument for
> forcing everyone to work at that level, though. `"return"` means "I'm
> done" in any kind of function, and that's easy to explain and to use.
> Note that `"return"` isn't always equivalent to `"raise StopIteration"`
> in try/except construct, either (see the "Specification: Return"
> section).
So the correct way would be to use a `return` statement instead of using `break` or `raise StopIteration`.
---
> it seems as if it might be faster to `break` from a loop to end a generator rather than to raise a `StopIteration` exception.
Indeed it is, because raising the exception means there is more work to do. You can use the [`dis`](https://docs.python.org/2/library/dis.html) module to take a look at the bytecode:
```
In [37]: dis.dis(f)
2 0 SETUP_LOOP 26 (to 29)
3 LOAD_GLOBAL 0 (range)
6 LOAD_CONST 1 (1024)
9 CALL_FUNCTION 1
12 GET_ITER
>> 13 FOR_ITER 12 (to 28)
16 STORE_FAST 0 (i)
3 19 LOAD_CONST 0 (None)
22 YIELD_VALUE
23 POP_TOP
4 24 BREAK_LOOP
25 JUMP_ABSOLUTE 13
>> 28 POP_BLOCK
>> 29 LOAD_CONST 0 (None)
32 RETURN_VALUE
In [38]: dis.dis(g)
2 0 SETUP_LOOP 31 (to 34)
3 LOAD_GLOBAL 0 (range)
6 LOAD_CONST 1 (1024)
9 CALL_FUNCTION 1
12 GET_ITER
>> 13 FOR_ITER 17 (to 33)
16 STORE_FAST 0 (i)
3 19 LOAD_CONST 0 (None)
22 YIELD_VALUE
23 POP_TOP
4 24 LOAD_GLOBAL 2 (StopIteration)
27 RAISE_VARARGS 1
30 JUMP_ABSOLUTE 13
>> 33 POP_BLOCK
>> 34 LOAD_CONST 0 (None)
37 RETURN_VALUE
```
You can see that almost everything is same but for raising the exception, it has to execute some extra instructions:
```
24 LOAD_GLOBAL 2 (StopIteration)
27 RAISE_VARARGS 1
``` |
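For completeness, the `return`-based termination recommended above looks like this:
```
def squares_up_to(n, limit):
    for i in range(n):
        if i >= limit:
            return            # the proper way to end a generator early
        yield i ** 2

print(list(squares_up_to(1024, 4)))  # [0, 1, 4, 9]
```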
Why does a generator using `()` need a lot of memory? | 37,156,574 | 27 | 2016-05-11T08:06:40Z | 37,156,765 | 48 | 2016-05-11T08:14:56Z | [
"python",
"python-2.7",
"generator"
] | ## Problem
Let's assume that I want to find `n**2` for all numbers smaller than `20000000`.
### General setup for all three variants that I test:
```
import time, psutil, gc
gc.collect()
mem_before = psutil.virtual_memory()[3]
time1 = time.time()
# (comprehension, generator, function)-code comes here
time2 = time.time()
mem_after = psutil.virtual_memory()[3]
print "Used Mem = ", (mem_after - mem_before)/(1024**2) # convert Byte to Megabyte
print "Calculation time = ", time2 - time1
```
### Three options to calculate these numbers:
**1. Creating a list of via comprehension:**
```
x = [i**2 for i in range(20000000)]
```
It is really slow and time consuming:
```
Used Mem = 1270 # Megabytes
Calculation time = 33.9309999943 # Seconds
```
**2. Creating a generator using `'()'`:**
```
x = (i**2 for i in range(20000000))
```
It is much faster than option 1, but still uses a lot of memory:
```
Used Mem = 611
Calculation time = 0.278000116348
```
**3. Defining a generator function (most efficient):**
```
def f(n):
i = 0
while i < n:
yield i**2
i += 1
x = f(20000000)
```
Its consumption:
```
Used Mem = 0
Calculation time = 0.0
```
### The questions are:
1. What's the difference between the first and second solutions? Using `()` creates a generator, so why does it need a lot of memory?
2. Is there any built-in function equivalent to my third option? | 1. As others have pointed out in the comments, `range` creates a `list` in Python 2. Hence, it is not the generator per se that uses up the memory, but the `range` that the generator uses:
```
x = (i**2 for i in range(20000000))
# builds a 2*10**7 element list, not for the squares , but for the bases
>>> sys.getsizeof(range(100))
872
>>> sys.getsizeof(xrange(100))
40
>>> sys.getsizeof(range(1000))
8720
>>> sys.getsizeof(xrange(1000))
40
>>> sys.getsizeof(range(20000000))
160000072
>>> sys.getsizeof(xrange(20000000))
40
```
This also explains why your second version (the generator expression) uses around half the memory of the first version (the list comprehension) as the first one builds two lists (for the bases and the squares) while the second only builds one list for the bases.
2. `xrange(20000000)` thus greatly improves memory usage, as it returns a lazy iterable. This is essentially the built-in, memory-efficient way to iterate over a range of numbers that mirrors your third version (with the added flexibility of `start`, `stop` and `step`):
```
x = (i**2 for i in xrange(20000000))
```
In Python 3, `range` is essentially what `xrange` used to be in Python 2.
However, the Python 3 `range` object has some nice features that Python 2's `xrange` doesn't have, like `O(1)` slicing, contains, etc.
## Some references:
* [Python2 xrange docs](https://docs.python.org/2/library/functions.html#xrange)
* [Python3 range docs](https://docs.python.org/3/library/stdtypes.html#ranges)
* [Stack Overflow - "Should you always favor xrange() over range()?"](/q/135041)
* [Martijn Pieters excellent answer to "Why is 1000000000000000 in range(1000000000000001) so fast in Python 3?"](/a/30081318/4850040) |
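For reference, in Python 3 the equivalent generator expression is fully lazy, since `range` no longer materializes a list:
```
import sys

squares = (i**2 for i in range(20000000))  # no 2*10**7 element list is built
print(sys.getsizeof(squares) < 1024)       # True: the generator object is tiny
print(next(squares), next(squares), next(squares))  # 0 1 4
```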
Python: converting iterable to list: 'list(x)' vs. full slice 'x[:]' | 37,158,448 | 3 | 2016-05-11T09:30:04Z | 37,158,481 | 9 | 2016-05-11T09:31:28Z | [
"python",
"list",
"slice"
] | I wonder if there is any difference in these two ways to convert an iterable to a list in Python:
* using the `list()` constructor:
```
my_list = list(my_iterable)
```
* using a full slice:
```
my_list = my_iterable[:]
```
Are there differences in the implementation? If so, what about performance? Any changes in Python 2 vs. Python 3?
Of course I realize that the constructor version is much more readable, though. | `list(thing)` gives you a list.
`thing[:]` could give you any type it wants.
In other words, the second option only works with specific types (and you haven't mentioned which types you're actually working with).
Edit: A useful feature of `thing[:]` is that when it is supported it usually results in a reference to "all the elements of the thing" which can be modified without changing which object `thing` points to. For example:
```
thing[:] = [1,2,3]
```
Will assign `[1,2,3]` over top of whatever was in `thing`, even if that's not a list per se. Whereas:
```
list(thing) = [1,2,3]
```
Is nonsense, and:
```
thing = [1,2,3]
```
Makes `thing` refer to a new list, regardless of its previous type. |
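A small demonstration of that difference (`thing[:] = ...` mutates in place, plain assignment rebinds):
```
thing = [1, 2, 3]
alias = thing          # second reference to the same list

thing[:] = [9, 8]      # slice assignment mutates the existing list
print(alias)           # [9, 8] -- the alias sees the change

thing = [0]            # plain assignment rebinds the name
print(alias)           # [9, 8] -- the alias still points at the old list
```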
Python: converting iterable to list: 'list(x)' vs. full slice 'x[:]' | 37,158,448 | 3 | 2016-05-11T09:30:04Z | 37,158,494 | 11 | 2016-05-11T09:32:01Z | [
"python",
"list",
"slice"
] | I wonder if there is any difference in these two ways to convert an iterable to a list in Python:
* using the `list()` constructor:
```
my_list = list(my_iterable)
```
* using a full slice:
```
my_list = my_iterable[:]
```
Are there differences in the implementation? If so, what about performance? Any changes in Python 2 vs. Python 3?
Of course I realize that the constructor version is much more readable, though. | Not everything supports slicing, e.g. generators:
```
(x for x in range(5))[:] # raises an error
```
So `list` is more general. `[:]` is mostly just a way to copy a list.
The main reason `[:]` is possible is because it's what you get when you combine leaving out the index before the colon (e.g. `[:3]`) and leaving out the index after, both of which are useful by themselves. It's also useful if you add a step, e.g. `[::-1]` is a common way to reverse a string. `[:]` is not very useful by itself for a list but it is slightly faster for certain situations as I've commented on Padriac's answer, although readability is usually more important than such micro-optimisations.
For non-lists, `[:]` could theoretically do anything depending on what you tell it to:
```
class A(object):
def __getitem__(self, item):
return 3
print A()[:] # prints 3
```
But you should probably avoid such definitions. |
Choice made by Python 3.5 to choose the keys when comparing them in a dictionary | 37,164,127 | 60 | 2016-05-11T13:26:35Z | 37,164,377 | 21 | 2016-05-11T13:37:02Z | [
"python",
"dictionary"
] | When constructing a dictionary as follows:
```
dict = { True: 'yes', 1: 'No'}
```
When I run it in the interactive Python interpreter the dict is represented this way:
```
dict = {True: 'No'}
```
I understand that the values `True` and `1` are equal due to the type coercion because when comparing numeric types, the narrowed type is widened to the other type (boolean is a child of integer). So as I understood from the documentation, when we enter `True == 1` Python converts `True` to `1` and compares them.
What I don't understand is why the `True` is selected as a key instead of `1`.
Am I missing something? | `True` and `1` are different objects, but they both have the same value:
```
>>> True is 1
False
>>> True == 1
True
```
This is similar to two strings that may have the same value, but are stored in different memory locations:
```
>>> x = str(12345)
>>> y = str(12345)
>>> x == y
True
>>> x is y
False
```
First one item is added to the dictionary; then when the other is added, *that value already exists as a key*. So the existing key is kept and only its value is updated; keys are unique.
```
>>> {x: 1, y: 2}
{"12345": 2}
``` |
Choice made by Python 3.5 to choose the keys when comparing them in a dictionary | 37,164,127 | 60 | 2016-05-11T13:26:35Z | 37,164,422 | 41 | 2016-05-11T13:38:39Z | [
"python",
"dictionary"
] | When constructing a dictionary as follows:
```
dict = { True: 'yes', 1: 'No'}
```
When I run it in the interactive Python interpreter the dict is represented this way:
```
dict = {True: 'No'}
```
I understand that the values `True` and `1` are equal due to the type coercion because when comparing numeric types, the narrowed type is widened to the other type (boolean is a child of integer). So as I understood from the documentation, when we enter `True == 1` Python converts `True` to `1` and compares them.
What I don't understand is why the `True` is selected as a key instead of `1`.
Am I missing something? | Dictionaries are implemented as hash tables and there are two important concepts when adding keys/values here: *hashing* and *equality*.
To insert a particular key/value, Python first computes the *hash* value of the key. This hash value is used to determine the row of the table where Python should first attempt to put the key/value.
If the row of the hash table is empty, great: the new key/value can inserted into the dictionary, filling the empty row.
However, if there's already something in that row, Python needs to test the keys for equality. If the keys are equal (using `==`) then they're deemed to be the same key and Python just needs to update the corresponding value on that row.
(If the keys are not equal Python looks at other rows in the table until it finds the key or reaches an empty row, but that's not relevant for this question.)
---
When you write `{True: 'yes', 1: 'No'}`, you are telling Python to create a new dictionary and then fill it with two key/value pairs. These are processed left to right: `True: 'yes'` then `1: 'No'`.
We have `hash(True)` equals 1. The key `True` goes in at row 1 in the hash table and the string `'yes'` is its value.
For the next pair, Python sees that `hash(1)` is also 1 and so looks at row 1 of the table. There's something already there, so now Python checks the keys for equality. We have `1 == True` so `1` is deemed to be the same key as `True` and so its corresponding value is changed to the string `'No'`.
This results in a dictionary with one entry: `{True: 'No'}`.
---
If you want to peer at the guts of CPython 3.5 to see what creating a dictionary looks below the surface-Python level, here's more detail.
* The Python code `{True: 'yes', 1: 'No'}` is parsed into tokens and given to the compiler. Given the syntax, Python knows that a dictionary must be created using the values inside the braces. Byte code to load the four values onto the virtual machine's stack (`LOAD_CONST`) and then build the dictionary (`BUILD_MAP`) is queued up.
* The four constant values are pushed onto the top of the stack in the order that they're seen:
```
'No'
1
'yes'
True
```
* The opcode `BUILD_MAP` is then called with the argument `2` (Python counted two key/value pairs). This opcode is responsible for actually creating dictionary from the items on the stack. It looks like [this](https://hg.python.org/cpython/file/3.5/Python/ceval.c#l2582):
```
TARGET(BUILD_MAP) {
int i;
PyObject *map = _PyDict_NewPresized((Py_ssize_t)oparg);
if (map == NULL)
goto error;
for (i = oparg; i > 0; i--) {
int err;
PyObject *key = PEEK(2*i);
PyObject *value = PEEK(2*i - 1);
err = PyDict_SetItem(map, key, value);
if (err != 0) {
Py_DECREF(map);
goto error;
}
}
while (oparg--) {
Py_DECREF(POP());
Py_DECREF(POP());
}
PUSH(map);
DISPATCH();
}
```
The three key steps here are as follows:
1. An empty hashtable is created using `_PyDict_NewPresized`. Small dictionaries (of just a few items, like 2 in this case) need a table with eight rows.
2. The `for` loop is entered, starting at 2 (in this case) and counting down to 0. [`PEEK(n)`](https://hg.python.org/cpython/file/3.5/Python/ceval.c#l1052) is a macro that points to the nth item down the stack. Therefore on the first iteration of the loop, we'll have
```
PyObject *key = PEEK(2*2); /* item 4 down the stack */
PyObject *value = PEEK(2*2 - 1); /* item 3 down the stack */
```
This means that `*key` will be `True` and `*value` will be `'yes'` on the first loop through. On the second it will be `1` and `'No'`.
3. [`PyDict_SetItem`](https://hg.python.org/cpython/file/3.5/Objects/dictobject.c#l1208) is called in each loop to put the current `*key` and `*value` into the dictionary. This is the same function that is called when you write `dictionary[key] = value`. It computes the hash of the key to work out where to look first in the hash table and then, if needed, compare the key to any existing key on that row (as discussed above). |
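The two-insertion sequence described above can be reproduced step by step at the Python level:
```
d = {}
d[True] = 'yes'   # hash(True) == 1; row empty, so True becomes the key
d[1] = 'No'       # hash(1) == 1 and 1 == True: same key, value replaced

print(d)                          # {True: 'No'}
print(list(d.keys())[0] is True)  # True -- the original key object survives
```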
Choice made by Python 3.5 to choose the keys when comparing them in a dictionary | 37,164,127 | 60 | 2016-05-11T13:26:35Z | 37,164,428 | 8 | 2016-05-11T13:38:53Z | [
"python",
"dictionary"
] | When constructing a dictionary as follows:
```
dict = { True: 'yes', 1: 'No'}
```
When I run it in the interactive Python interpreter the dict is represented this way:
```
dict = {True: 'No'}
```
I understand that the values `True` and `1` are equal due to the type coercion because when comparing numeric types, the narrowed type is widened to the other type (boolean is a child of integer). So as I understood from the documentation, when we enter `True == 1` Python converts `True` to `1` and compares them.
What I don't understand is why the `True` is selected as a key instead of `1`.
Am I missing something? | If the key is already present in the dictionary, it does not override the key, only the associated value.
I believe that writing `x = {True:"a", 1:"b"}` is along the lines of:
```
x = {}
x[True] = "a"
x[1] = "b"
```
and by the time it reaches `x[1] = "b"` the key `True` is already in the dict so why change it? why not just override the associated value. |
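That equivalence can be checked directly (an illustrative sketch):

```python
x = {}
x[True] = "a"
x[1] = "b"            # key True is already present; only the value is replaced
assert x == {True: "b"}
assert type(next(iter(x))) is bool   # the original bool key survives
```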
Choice made by Python 3.5 to choose the keys when comparing them in a dictionary | 37,164,127 | 60 | 2016-05-11T13:26:35Z | 37,164,447 | 31 | 2016-05-11T13:39:34Z | [
"python",
"dictionary"
] | When constructing a dictionary as follows:
```
dict = { True: 'yes', 1: 'No'}
```
When I run it in the interactive Python interpreter the dict is represented this way:
```
dict = {True: 'No'}
```
I understand that the values `True` and `1` are equal due to the type coercion because when comparing numeric types, the narrowed type is widened to the other type (boolean is a child of integer). So as I understood from the documentation, when we enter `True == 1` Python converts `True` to `1` and compares them.
What I don't understand is why the `True` is selected as a key instead of `1`.
Am I missing something? | The basic premise is that `True` and `1` have the same hash and compare equal to each other - that's why they cannot be separate keys in the hash table (technically, unequal objects with the same hash can coexist, but hash collisions decrease performance).
```
>>> True == 1
True
>>> hash(1)
1
>>> hash(True)
1
```
Now, let's consider a bytecode:
```
import dis
dis.dis("Dic = { True: 'yes', 1: 'No'}")
```
This prints:
```
0 LOAD_CONST 0 (True)
3 LOAD_CONST 1 ('yes')
6 LOAD_CONST 2 (1)
9 LOAD_CONST 3 ('No')
12 BUILD_MAP 2
15 STORE_NAME 0 (Dic)
18 LOAD_CONST 4 (None)
21 RETURN_VALUE
```
Basically what happens is that the dict literal is tokenized into keys and values, which are pushed onto the stack. After that, `BUILD_MAP 2` converts two (key, value) pairs into a dictionary.
Most likely order of data on stack (which seems to be determined by order of keys in dict literal) and implementation details of `BUILD_MAP` decides on resulting dict keys and values.
It seems like key-value assignments are done in order defined in dict literal.
Assignment behaves same as `d[key] = value` operation, so it's basically:
* if `key` is not in the dict (by equality): add `key` to the dict
* store `value` under `key`
Given `{True: 'yes', 1: 'No'}`:
1. Start with `{}`
2. Add `(True, 'yes')`
1. True is not in dict -> `(True, hash(True))` == `(True, 1)` is new key in dict
2. Update value for key equal to `1` to `'yes'`
3. Add `(1, 'No')`
	1. `1` is in dict (`1 == True`) -> there is no need for a new key in the dictionary
	2. Update value for key equal to `1` (`True`) with value `'No'`
Result: `{True: 'No'}`
As I commented, I do not know if this is guaranteed by the Python specification or whether it is just CPython implementation-defined behavior; it may differ in other interpreter implementations. |
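Whether the "keep the first key, overwrite the value" behaviour holds can be probed with a small hand-rolled class (an illustrative sketch; the `Key` class is my own invention):

```python
class Key:
    def __init__(self, label):
        self.label = label
    def __hash__(self):
        return 1                         # all Keys share a hash...
    def __eq__(self, other):
        return isinstance(other, Key)    # ...and compare equal to each other

a, b = Key('first'), Key('second')
d = {a: 1}
d[b] = 2                                 # equal key: value replaced, key object kept
assert next(iter(d)) is a                # the original key object is retained
assert d[b] == 2                         # the value was updated
```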
Why does date + timedelta become date, not datetime? | 37,165,952 | 17 | 2016-05-11T14:38:58Z | 37,166,707 | 7 | 2016-05-11T15:08:50Z | [
"python",
"datetime"
In Python, in an operation of numbers of mixed type, the narrower type is [widened to that of the other](https://docs.python.org/3.5/library/stdtypes.html#numeric-types-int-float-complex), such as `int` + `float` → `float`:
```
In [57]: 3 + 0.1
Out[57]: 3.1
```
But for `datetime.date`, we have `datetime.date` + `datetime.timedelta` → `datetime.date`, *not* `datetime.datetime`:
```
In [58]: datetime.date(2013, 1, 1) + datetime.timedelta(seconds=42)
Out[58]: datetime.date(2013, 1, 1)
```
Why is the widening reasoning applied to numbers, but not to `date`/`datetime`/`timedelta`?
(Background: I'm writing a reading routine for a file format where one field is year, one field is day-of-year, one field is milliseconds-since-midnight. Of course, the simple and explicit solution is `datetime.datetime(2013, 1, 1, 0, 0, 0) + datetime.timedelta(seconds=42)`, but one could equally reason that one should rewrite `3 + 0.1` as `3.0 + 0.1`) | The behaviour is [documented](https://docs.python.org/3/library/datetime.html):
> *date2* is moved forward in time if `timedelta.days > 0`, or backward if `timedelta.days < 0`. Afterward `date2 - date1 == timedelta.days`. **`timedelta.seconds` and `timedelta.microseconds` are ignored.**
(My emphasis. This behaviour has [remained unchanged](https://docs.python.org/2.3/lib/datetime-date.html) since `date` objects were added in Python 2.3.)
I haven't been able to find any evidence as to why the module is designed like this. Certainly there are use cases like yours where you want to represent the point in time corresponding to the midnight at the start of a day. In these cases it is annoying to have to convert back and forth. But there are other use cases in which you want to represent a whole day (and not just some point in time on that day), in which case you don't want to accidentally end up with partial days when you add timedeltas.
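The two behaviours side by side (a quick sketch):

```python
from datetime import date, datetime, timedelta

d = date(2013, 1, 1)
assert d + timedelta(seconds=42) == d          # sub-day parts are silently ignored
dt = datetime.combine(d, datetime.min.time())  # promote to datetime first
assert dt + timedelta(seconds=42) == datetime(2013, 1, 1, 0, 0, 42)
```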
Chris Withers suggested that the behaviour be changed, in [issue 3249](http://bugs.python.org/issue3249), but Tim Peters noted that:
> an incompatible change to documented always-worked-this-way behavior is unlikely to be accepted.
If you want an object that behaves like a `datetime.date`, but where arithmetic operations return `datetime.datetime` objects, then it shouldn't be too hard to write one:
```
from datetime import date, datetime, time, timedelta
def _part_day(t):
"""Return True if t is a timedelta object that does not consist of
whole days.
"""
return isinstance(t, timedelta) and (t.seconds or t.microseconds)
class mydate(date):
"""Subclass of datetime.date where arithmetic operations with a
timedelta object return a datetime.datetime object unless the
timedelta object consists of whole days.
"""
def datetime(self):
"""Return datetime corresponding to the midnight at start of this
date.
"""
return datetime.combine(self, time())
def __add__(self, other):
if _part_day(other):
return self.datetime() + other
else:
return super().__add__(other)
__radd__ = __add__
def __sub__(self, other):
if _part_day(other):
return self.datetime() - other
else:
return super().__sub__(other)
```
(This is untested, but it shouldn't be hard to get it working from here.) |
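Since the sketch above is untested, here is a compact standalone version with a quick check (same logic, trimmed to the `__add__` path):

```python
from datetime import date, datetime, time, timedelta

def _part_day(t):
    # True if t is a timedelta that does not consist of whole days
    return isinstance(t, timedelta) and (t.seconds or t.microseconds)

class mydate(date):
    def datetime(self):
        return datetime.combine(self, time())
    def __add__(self, other):
        if _part_day(other):
            return self.datetime() + other
        return super().__add__(other)
    __radd__ = __add__

d = mydate(2013, 1, 1)
assert d + timedelta(days=1) == date(2013, 1, 2)                    # whole days: still a date
assert d + timedelta(seconds=42) == datetime(2013, 1, 1, 0, 0, 42)  # part day: a datetime
```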
tflearn / tensorflow does not learn xor | 37,166,268 | 8 | 2016-05-11T14:52:32Z | 37,167,223 | 8 | 2016-05-11T15:33:39Z | [
"python",
"machine-learning",
"tensorflow",
"deep-learning"
] | Following code was written to learn the XOR function, but about half of the time the network does not learn and the loss after each epoch stays the same.
```
train_f = [[0, 0], [0, 1], [1, 0], [1, 1]]
train_c = [[0], [1], [1], [0]]
test_f = train_f
test_c = train_c
import tensorflow as tf
import tflearn
X = [[0., 0.], [0., 1.], [1., 0.], [1., 1.]]
Y_xor = [[0.], [1.], [1.], [0.]]
# Graph definition
with tf.Graph().as_default():
# Building a network with 2 optimizers
net = tflearn.input_data(shape=[None, 2])
# Nand operator definition
net = tflearn.fully_connected(net, 2, activation='relu')
net = tflearn.fully_connected(net, 2, activation='relu')
net = tflearn.fully_connected(net, 1, activation='sigmoid')
regressor = tflearn.regression(net, optimizer='adam', learning_rate=0.005, loss="mean_square",)
# Training
m = tflearn.DNN(regressor)
m.fit(X, Y_xor, n_epoch=256, snapshot_epoch=False)
# Testing
print("Testing XOR operator")
print("0 xor 0:", m.predict([[0., 0.]]))
print("0 xor 1:", m.predict([[0., 1.]]))
print("1 xor 0:", m.predict([[1., 0.]]))
print("1 xor 1:", m.predict([[1., 1.]]))
```
Sometimes I get correct results like this:
```
Testing XOR operator
0 xor 0: [[0.1487255096435547]]
0 xor 1: [[0.9297153949737549]]
1 xor 0: [[0.9354135394096375]]
1 xor 1: [[0.1487255096435547]]
```
But often this:
```
Testing XOR operator
0 xor 0: [[0.4999997615814209]]
0 xor 1: [[0.5000002384185791]]
1 xor 0: [[0.4999997615814209]]
1 xor 1: [[0.5000001788139343]]
```
My 2x2x1 network should be able to perform XOR, and there is even some evidence that suggests that this network should always converge <http://www.ncbi.nlm.nih.gov/pubmed/12662805>
I have also tried to change the relu layers to sigmoid, to perform 2048 iterations, and to make a 4x4x1 and 6x6x1 networks, but the same problem still occurs sometimes.
Could there be something wrong with how the weights are initialized?
How do I use tflearn to have a neural net learn the xor function? | The network with `relu`s (as it is written in the code snippet) is expected to often fail to train. The reason for that is that if the input to relu is less than zero, the output is zero, and therefore the gradient going back is also zero.
Since you have two layers, each having only two relu units, with random initialization each of these two layers has a 25% chance of having all its neurons return zero, and therefore zero gradient flowing back => the neural network will not learn at all. In such a network the output of the last layer (before the final sigmoid) will be zero, whose sigmoid is 0.5 -- exactly what you observe on the attempts where your network didn't converge.
Since each layer has 25% chance of doing this damage, the entire network has a total chance of around 45% (`1 - (1 - 0.25)^2`) of failing to train from the get go. There's also a non-zero chance that the network is not in such a state at the beginning, but happens to bring itself to such a state during training, further increasing the chance of divergence.
With four neurons the chance will be significantly lower, but still not zero.
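The zero-gradient property of relu and the rough failure probability can be illustrated in plain Python (the 25% per-layer figure is the heuristic estimate from above, not an exact value):

```python
z = [-0.3, -1.2]                      # both pre-activations negative
relu = [max(v, 0.0) for v in z]
grad = [1.0 if v > 0 else 0.0 for v in z]
assert relu == [0.0, 0.0] and grad == [0.0, 0.0]   # no output, no gradient

p_layer = 0.25                        # heuristic chance a 2-unit relu layer is dead
p_fail = 1 - (1 - p_layer) ** 2       # chance at least one of two layers is dead
assert abs(p_fail - 0.4375) < 1e-12   # roughly the 45% figure quoted above
```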
Now, the only thing I cannot answer is why your network doesn't converge when you replace `relu` with `sigmoid` -- such a network should be always able to learn "xor". My only hypothesis is that you replaced only one `relu` with `sigmoid`, not both of them.
Can you replace both `relu`s with `sigmoid`s and confirm you still observe divergence? |
Optimization of arithmetic expressions - what is this technique called? | 37,179,184 | 13 | 2016-05-12T06:45:28Z | 37,179,266 | 17 | 2016-05-12T06:49:46Z | [
"python",
"optimization",
"arithmetic-expressions"
] | A discussion with a friend led to the following realization:
```
>>> import dis
>>> i = lambda n: n*24*60*60
>>> dis.dis(i)
1 0 LOAD_FAST 0 (n)
3 LOAD_CONST 1 (24)
6 BINARY_MULTIPLY
7 LOAD_CONST 2 (60)
10 BINARY_MULTIPLY
11 LOAD_CONST 2 (60)
14 BINARY_MULTIPLY
15 RETURN_VALUE
>>> k = lambda n: 24*60*60*n
>>> dis.dis(k)
1 0 LOAD_CONST 4 (86400)
3 LOAD_FAST 0 (n)
6 BINARY_MULTIPLY
7 RETURN_VALUE
```
The second example is clearly more efficient simply by reducing the number of instructions.
My question is, is there a name for this optimization, and why doesn't it happen in the first example?
Also, I'm not sure if this is a duplicate of [Why doesn't GCC optimize a\*a\*a\*a\*a\*a to (a\*a\*a)\*(a\*a\*a)?](http://stackoverflow.com/questions/6430448/why-doesnt-gcc-optimize-aaaaaa-to-aaaaaa) ; if it is please explain a bit further as it applies to Python. | This optimization technique is called [constant folding](https://en.wikipedia.org/wiki/Constant_folding).
The reason for constant folding occurring in the latter code but not in the former is that Python has dynamic typing, and while in mathematics a product of real numbers is *commutative* and freely *associative*, it is not so in Python in the general case, because neither do all variables contain real numbers, nor can one know the types beforehand.
---
Multiplication in Python is [left-associative](https://en.wikipedia.org/wiki/Operator_associativity) - `24 * 60 * 60 * n` behaves like `(((24 * 60) * 60) * n)`, which in turn implicitly executes like
```
(24).__mul__(60).__mul__(60).__mul__(n)
```
or
```
(n).__rmul__((24).__mul__(60).__mul__(60))
```
whereas `n * 24 * 60 * 60` which is `(((n * 24) * 60) * 60)` *can* behave like
```
n.__mul__(24).__mul__(60).__mul__(60)
```
or
```
(24).__rmul__(n).__mul__(60).__mul__(60)
```
Since we cannot know the behaviour of `n.__mul__` beforehand, we cannot fold a constant in the latter case. Consider this example of a funny subclass of `int` that defines `__mul__`/`__rmul__` as returning the sum of the operands instead of the product:
```
class MultiplyAsAdd(int):
def __mul__(self, other):
return MultiplyAsAdd(self + other)
def __rmul__(self, other):
return MultiplyAsAdd(other + self)
```
Then
```
>>> (lambda n: 24*60*60*n)(MultiplyAsAdd(5))
86405
>>> (lambda n: n*24*60*60)(MultiplyAsAdd(5))
149
```
Clearly it'd be wrong for Python to parenthesize the product as `n*(24*60*60)` in the latter case. |
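Running the class confirms the asymmetry directly (reproduced here as a quick standalone check):

```python
class MultiplyAsAdd(int):
    def __mul__(self, other):
        return MultiplyAsAdd(self + other)
    def __rmul__(self, other):
        return MultiplyAsAdd(other + self)

n = MultiplyAsAdd(5)
assert 24 * 60 * 60 * n == 86405   # ((24*60)*60) may fold to 86400; one "multiply" with n
assert n * 24 * 60 * 60 == 149     # ((5+24)+60)+60: every step goes through n's type
```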
Using Deep Learning to Predict Subsequence from Sequence | 37,179,916 | 17 | 2016-05-12T07:20:48Z | 37,243,949 | 10 | 2016-05-15T21:40:54Z | [
"python",
"theano",
"deep-learning",
"keras"
] | I have a data that looks like this:
[](http://i.stack.imgur.com/CNK0K.jpg)
It can be viewed [here](http://dpaste.com/2PZ9WH6) and has been included in the code below.
In actuality I have ~7000 samples (row), [downloadable too](http://www.filedropper.com/test_148).
The task is given antigen, predict the corresponding epitope.
So epitope is always an exact substring of antigen. This is equivalent with
the **[Sequence to Sequence Learning](http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf)**. Here is my code running on Recurrent Neural Network under Keras. It was modeled according the [**example**](https://github.com/fchollet/keras/blob/master/examples/addition_rnn.py).
My question are:
1. Can RNN, LSTM or GRU be used to predict a subsequence as posed above?
2. How can I improve the accuracy of my code?
3. How can I modify my code so that it can run faster?
Here is my running code which gave very bad accuracy score.
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import print_function
import sys
import json
import pandas as pd
from keras.models import Sequential
from keras.engine.training import slice_X
from keras.layers.core import Activation, RepeatVector, Dense
from keras.layers import recurrent, TimeDistributed
import numpy as np
from six.moves import range
class CharacterTable(object):
'''
Given a set of characters:
+ Encode them to a one hot integer representation
+ Decode the one hot integer representation to their character output
+ Decode a vector of probabilties to their character output
'''
def __init__(self, chars, maxlen):
self.chars = sorted(set(chars))
self.char_indices = dict((c, i) for i, c in enumerate(self.chars))
self.indices_char = dict((i, c) for i, c in enumerate(self.chars))
self.maxlen = maxlen
def encode(self, C, maxlen=None):
maxlen = maxlen if maxlen else self.maxlen
X = np.zeros((maxlen, len(self.chars)))
for i, c in enumerate(C):
X[i, self.char_indices[c]] = 1
return X
def decode(self, X, calc_argmax=True):
if calc_argmax:
X = X.argmax(axis=-1)
return ''.join(self.indices_char[x] for x in X)
class colors:
ok = '\033[92m'
fail = '\033[91m'
close = '\033[0m'
INVERT = True
HIDDEN_SIZE = 128
BATCH_SIZE = 64
LAYERS = 3
# Try replacing GRU, or SimpleRNN
RNN = recurrent.LSTM
def main():
"""
Epitope_core = answers
Antigen = questions
"""
epi_antigen_df = pd.io.parsers.read_table("http://dpaste.com/2PZ9WH6.txt")
antigens = epi_antigen_df["Antigen"].tolist()
epitopes = epi_antigen_df["Epitope Core"].tolist()
if INVERT:
antigens = [ x[::-1] for x in antigens]
allchars = "".join(antigens+epitopes)
allchars = list(set(allchars))
aa_chars = "".join(allchars)
sys.stderr.write(aa_chars + "\n")
max_antigen_len = len(max(antigens, key=len))
max_epitope_len = len(max(epitopes, key=len))
X = np.zeros((len(antigens),max_antigen_len, len(aa_chars)),dtype=np.bool)
y = np.zeros((len(epitopes),max_epitope_len, len(aa_chars)),dtype=np.bool)
ctable = CharacterTable(aa_chars, max_antigen_len)
sys.stderr.write("Begin vectorization\n")
for i, antigen in enumerate(antigens):
X[i] = ctable.encode(antigen, maxlen=max_antigen_len)
for i, epitope in enumerate(epitopes):
y[i] = ctable.encode(epitope, maxlen=max_epitope_len)
# Shuffle (X, y) in unison as the later parts of X will almost all be larger digits
indices = np.arange(len(y))
np.random.shuffle(indices)
X = X[indices]
y = y[indices]
# Explicitly set apart 10% for validation data that we never train over
split_at = len(X) - len(X) / 10
(X_train, X_val) = (slice_X(X, 0, split_at), slice_X(X, split_at))
(y_train, y_val) = (y[:split_at], y[split_at:])
sys.stderr.write("Build model\n")
model = Sequential()
# "Encode" the input sequence using an RNN, producing an output of HIDDEN_SIZE
# note: in a situation where your input sequences have a variable length,
# use input_shape=(None, nb_feature).
model.add(RNN(HIDDEN_SIZE, input_shape=(max_antigen_len, len(aa_chars))))
# For the decoder's input, we repeat the encoded input for each time step
model.add(RepeatVector(max_epitope_len))
# The decoder RNN could be multiple layers stacked or a single layer
for _ in range(LAYERS):
model.add(RNN(HIDDEN_SIZE, return_sequences=True))
# For each of step of the output sequence, decide which character should be chosen
model.add(TimeDistributed(Dense(len(aa_chars))))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
# Train the model each generation and show predictions against the validation dataset
for iteration in range(1, 200):
print()
print('-' * 50)
print('Iteration', iteration)
model.fit(X_train, y_train, batch_size=BATCH_SIZE, nb_epoch=5,
validation_data=(X_val, y_val))
###
# Select 10 samples from the validation set at random so we can visualize errors
for i in range(10):
ind = np.random.randint(0, len(X_val))
rowX, rowy = X_val[np.array([ind])], y_val[np.array([ind])]
preds = model.predict_classes(rowX, verbose=0)
q = ctable.decode(rowX[0])
correct = ctable.decode(rowy[0])
guess = ctable.decode(preds[0], calc_argmax=False)
# print('Q', q[::-1] if INVERT else q)
print('T', correct)
        print(colors.ok + '☑' + colors.close if correct == guess else colors.fail + '☒' + colors.close, guess)
print('---')
if __name__ == '__main__':
main()
``` | 1. *Can RNN, LSTM or GRU be used to predict a subsequence as posed above?*
Yes, you can use any of these. LSTMs and GRUs are types of RNNs; if by RNN you mean a [fully-connected RNN](http://keras.io/layers/recurrent/#simplernn), these have fallen out of favor because of the vanishing gradients problem ([1](http://www.dsi.unifi.it/~paolo/ps/tnn-94-gradient.pdf), [2](http://www.jmlr.org/proceedings/papers/v28/pascanu13.pdf)). Because of the relatively small number of examples in your dataset, a GRU might be preferable to an LSTM due to its simpler architecture.
2. *How can I improve the accuracy of my code?*
You mentioned that training and validation error are both bad. In general, this could be due to one of several factors:
* The learning rate is too low (not an issue since you're using Adam, a per-parameter adaptive learning rate algorithm)
* The model is too simple for the data (not at all the issue, since you have a very complex model and a small dataset)
* You have vanishing gradients (probably the issue since you have a 3-layer RNN). Try reducing the number of layers to 1 (in general, it's good to start by getting a simple model working and then increase the complexity), and also consider hyperparameter search (e.g. a 128-dimensional hidden state may be too large - try 30?).
Another option, since your epitope is a substring of your input, is to **predict the start and end indices of the epitope within the antigen sequence** (potentially normalized by the length of the antigen sequence) instead of predicting the substring one character at a time. This would be a regression problem with two tasks. For instance, if the antigen is FSKIAGLTVT (10 letters long) and its epitope is KIAGL (positions 3 to 7, one-based) then the input would be FSKIAGLTVT and the outputs would be 0.3 (first task) and 0.7 (second task).
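A minimal sketch of how such regression targets could be derived from the raw strings (the function and its name are mine, using one-based positions as in the example):

```python
def epitope_targets(antigen, epitope):
    start = antigen.index(epitope) + 1   # one-based start position
    end = start + len(epitope) - 1       # one-based end position
    n = len(antigen)
    return start / n, end / n

assert epitope_targets("FSKIAGLTVT", "KIAGL") == (0.3, 0.7)
```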
Alternatively, if you can make all the antigens be the same length (by removing parts of your dataset with short antigens and/or chopping off the ends of long antigens assuming you know *a priori* that the epitope is not near the ends), you can frame it as a classification problem with two tasks (start and end) and sequence-length classes, where you're trying to assign a probability to the antigen starting and ending at each of the positions.
3. *How can I modify my code so that it can run faster?*
Reducing the number of layers will speed your code up significantly. Also, GRUs will be faster than LSTMs due to their simpler architecture. However, both types of recurrent networks will be slower than, e.g. convolutional networks.
Feel free to send me an email (address in my profile) if you're interested in a collaboration. |
How to have list() consume __iter__ without calling __len__? | 37,189,968 | 13 | 2016-05-12T14:29:29Z | 37,192,530 | 10 | 2016-05-12T16:25:08Z | [
"python"
] | I have a class with both an `__iter__` and a `__len__` methods. The latter uses the former to count all elements.
It works like the following:
```
class A:
def __iter__(self):
print("iter")
for _ in range(5):
yield "something"
def __len__(self):
print("len")
n = 0
for _ in self:
n += 1
return n
```
Now if we take e.g. the length of an instance it prints `len` and `iter`, as expected:
```
>>> len(A())
len
iter
5
```
But if we call `list()` it calls both `__iter__` and `__len__`:
```
>>> list(A())
len
iter
iter
['something', 'something', 'something', 'something', 'something']
```
It works as expected if we make a generator expression:
```
>>> list(x for x in A())
iter
['something', 'something', 'something', 'something', 'something']
```
I would assume `list(A())` and `list(x for x in A())` to work the same but they don't.
Note that it appears to first call `__iter__`, then `__len__`, then loop over the iterator:
```
class B:
def __iter__(self):
print("iter")
def gen():
print("gen")
yield "something"
return gen()
def __len__(self):
print("len")
return 1
print(list(B()))
```
Output:
```
iter
len
gen
['something']
```
---
How can I get `list()` not to call `__len__` so that my instance's iterator is not consumed twice? I could define e.g. a `length` or `size` method and one would then call `A().size()` but that's less pythonic.
I tried to compute the length in `__iter__` and cache it so that subsequent calls to `__len__` don't need to iterate again, but `list()` calls `__len__` without starting to iterate so it doesn't work.
Note that in my case I work on very large data collections so caching all items is not an option. | It's a safe bet that the `list()` constructor is detecting that `len()` is available and calling it in order to pre-allocate storage for the list.
Your implementation is pretty much completely backwards. You are implementing `__len__()` by using `__iter__()`, which is not what Python expects. The expectation is that `len()` is a fast, efficient way to determine the length *in advance*.
I don't think you can convince `list(A())` not to call `len`. As you have already observed, you can create an intermediate step that prevents `len` from being called.
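One such intermediate step: hand `list()` the iterator rather than the instance, since the generator returned by `__iter__` has no `__len__` of its own (a quick sketch along the lines of the question's class):

```python
calls = []

class A:
    def __iter__(self):
        for _ in range(5):
            yield "something"
    def __len__(self):
        calls.append("len")
        return sum(1 for _ in self)

# the generator has no __len__, so list() cannot pre-size via it
assert list(iter(A())) == ["something"] * 5
assert calls == []   # __len__ was never consulted
```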
You should definitely cache the result, if the sequence is immutable. If there are as many items as you speculate, there's no sense computing `len` more than once. |
FFT in numpy vs FFT in MATLAB do not have the same results | 37,190,596 | 5 | 2016-05-12T14:55:42Z | 37,191,173 | 7 | 2016-05-12T15:19:05Z | [
"python",
"matlab",
"numpy",
"fft"
] | I have a vector with complex numbers (can be found [here](https://www.dropbox.com/s/ve0de4ebk41s8y2/data.txt?dl=1)), both in Python and in MATLAB. I am calculating the `ifft`-transformation with
```
ifft(<vector>)
```
in MATLAB and with
```
np.fft.ifft(<vector>)
```
in Python. My problem is that I get two completely different results out of it, i.e. while the vector in Python is complex, it is not in MATLAB. While some components in MATLAB are zero, none are in Python. Why is that? The `fft`-version works as intended. The minimal values are at around `1e-10`, i.e. not too low. | Actually, they are the same but Python is showing the imaginary part with extremely high precision. The imaginary components are being shown with values with a magnitude of around `10^{-12}`.
Here's what I wrote to reconstruct your problem in MATLAB:
```
format long g;
data = importdata('data.txt');
out = ifft(data);
```
`format long g;` is a formatting option that shows more significant digits: here, 15 significant digits including decimal places.
When I show the first 10 elements of the inverse FFT output, this is what I get:
```
>> out(1:10)
ans =
-6.08077329443768
-5.90538963023573
-5.72145198564976
-5.53037208039314
-5.33360059559345
-5.13261402212083
-4.92890104744583
-4.72394865937531
-4.51922820694745
-4.31618153490126
```
For `numpy`, be advised that complex numbers are read in with the `j` letter, not `i`. Therefore when you load in your text, you **must** transform all `i` characters to `j`. Once you do that, you can load in the data as normal:
```
In [15]: import numpy as np
In [16]: with open('data.txt', 'r') as f:
....: lines = map(lambda x: x.replace('i', 'j'), f)
....: data = np.loadtxt(lines, dtype=np.complex)
```
When you open up the file, the call to `map` would thus take the contents of the file and transform each `i` character into `j` and return a list of strings where each element in this list is a complex number in your text file with the `i` replaced as `j`. We would then call `numpy.loadtxt` function to convert these strings into an array of complex numbers.
Now when I take the IFFT and display the first 10 elements of the inversed result as we saw with the MATLAB version, we get:
```
In [20]: out = np.fft.ifft(data)
In [21]: out[:10]
Out[21]:
array([-6.08077329 +0.00000000e+00j, -5.90538963 +8.25472974e-12j,
-5.72145199 +3.56159535e-12j, -5.53037208 -1.21875843e-11j,
-5.33360060 +1.77529105e-11j, -5.13261402 -1.58326676e-11j,
-4.92890105 -6.13731196e-12j, -4.72394866 +5.46673985e-12j,
-4.51922821 -2.59774424e-11j, -4.31618154 -1.77484689e-11j])
```
As you can see the real part is the same but the imaginary part still exists. However, note how small in magnitude the imaginary components are. MATLAB in this case chose to not display the imaginary components because their magnitudes are very small. Actually, the data type returned from the `ifft` call in MATLAB is real so there was probably some post-processing after `ifft` was called to discard these imaginary components. `numpy` does not do the same thing by the way but you might as well consider these components to be very small and insignificant.
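The "numerical noise" nature of those imaginary parts is easy to reproduce with a small round-trip (an illustrative sketch, independent of the question's data file):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])        # a purely real signal
out = np.fft.ifft(np.fft.fft(x))          # round-trip leaves tiny complex residue
assert np.iscomplexobj(out)
assert np.max(np.abs(out.imag)) < 1e-12   # imaginary parts are mere round-off
assert np.allclose(np.real(out), x)       # dropping them recovers the signal
```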
---
All in all, both `ifft` calls in Python and MATLAB are essentially the same but the imaginary components are different in the sense that Python/`numpy` returns those imaginary components even though they are insignificant, whereas the `ifft` call in MATLAB does not. Also take note that you need to ensure that the imaginary variable is replaced with `j`; you can't use `i` as in the original text file you've provided. If you know **for certain** that the output type should be real, you can also drop the imaginary components by giving a call to `numpy.real` on the `ifft` result if you so wish. |
How to enable a virtualenv in a systemd service unit? | 37,211,115 | 4 | 2016-05-13T13:18:13Z | 37,211,676 | 11 | 2016-05-13T13:46:21Z | [
"python",
"environment-variables",
"virtualenv"
] | I want to "activate" a virtualenv in a systemd service file.
I would like avoid to have a shell process between the systemd process and the python interpreter.
My current solution looks like this:
```
[Unit]
Description=fooservice
After=syslog.target network.target
[Service]
Type=simple
User=fooservice
WorkingDirectory={{ venv_home }}
ExecStart={{ venv_home }}/fooservice --serve-in-foreground
Restart=on-abort
EnvironmentFile=/etc/sysconfig/fooservice.env
[Install]
WantedBy=multi-user.target
```
/etc/sysconfig/fooservice.env
```
PATH={{ venv_home }}/bin:/usr/local/bin:/usr/bin:/bin
PYTHONIOENCODING=utf-8
PYTHONPATH={{ venv_home }}/...
VIRTUAL_ENV={{ venv_home }}
```
But I am having trouble. I get ImportErrors since some enties in sys.path are missing. | The virtualenv is "baked into the Python interpreter in the virtualenv". This means you can launch `python` or [`console_scripts`](https://pythonhosted.org/setuptools/setuptools.html#automatic-script-creation) directly in that virtualenv and don't need to manage `PATH` etc yourself.:
```
ExecStart={{ venv_home }}/bin/fooservice --serve-in-foreground
```
or
```
ExecStart={{ venv_home }}/bin/python {{ venv_home }}/fooservice.py --serve-in-foreground
```
and remove the `EnvironmentFile` entry.
To verify that it is indeed correct you can check the path by running
```
{{ venv_home }}/bin/python -c "import sys; print sys.path"
```
and comparing the output to
```
python -c "import sys; print sys.path"
``` |
How to generate random sequence of numbers in python? | 37,212,700 | 3 | 2016-05-13T14:31:01Z | 37,212,767 | 7 | 2016-05-13T14:34:18Z | [
"python",
"random"
] | How to generate random sequence of 1, 2, 3 provided that 80 % of numbers will 1, 15 % will 2 and 5 % will 3? | Use `random` to get a random number in [0,1) and map your outputs to that interval.
```
from random import random
result = []
# Generate 100 results (for instance)
for i in range(100):
res = random()
if res < 0.8:
result.append(1)
elif res < 0.95:
result.append(2)
else:
result.append(3)
print(result)
```
This is a trivial solution. You may want to write a more elegant one that allows you to specify the probabilities for each number in a dedicated structure (list, dict, ...) rather than in an if/else statement.
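For instance, a small table-driven variant (a sketch; `weighted_choice` is my own name for it):

```python
import random

def weighted_choice(weights):
    """weights: mapping of value -> probability (probabilities sum to 1)."""
    r = random.random()
    cumulative = 0.0
    for value, p in weights.items():
        cumulative += p
        if r < cumulative:
            return value
    return value   # guard against floating-point round-off

seq = [weighted_choice({1: 0.8, 2: 0.15, 3: 0.05}) for _ in range(10000)]
assert set(seq) <= {1, 2, 3}
assert seq.count(1) > seq.count(2) > seq.count(3)   # holds with overwhelming probability
```

On Python 3.6+ the standard library's `random.choices(population, weights=...)` covers the same need directly.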
But then you might be better off using a dedicated library, as suggested in [this answer](http://stackoverflow.com/a/4266645/4653485). Here's an example with scipy's [`stats.rv_discrete`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_discrete.html)
```
from scipy import stats
import numpy as np

xk = np.arange(1, 4)
pk = (0.8, 0.15, 0.05)
custm = stats.rv_discrete(name='custm', values=(xk, pk))
# Generate 100 results (for instance)
print(custm.rvs(size=100))
``` |
pandas .at versus .loc | 37,216,485 | 4 | 2016-05-13T17:57:21Z | 37,216,587 | 8 | 2016-05-13T18:04:00Z | [
"python",
"pandas"
] | I've been exploring how to optimize my code and ran across `pandas` `.at` method. Per the [documentation](http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.at.html#pandas.DataFrame.at)
> Fast label-based scalar accessor
>
> Similarly to loc, at provides label based scalar lookups. You can also set using these indexers.
So I ran some samples:
# Setup
```
import pandas as pd
import numpy as np
from string import letters, lowercase, uppercase
lt = list(letters)
lc = list(lowercase)
uc = list(uppercase)
def gdf(rows, cols, seed=None):
"""rows and cols are what you'd pass
to pd.MultiIndex.from_product()"""
gmi = pd.MultiIndex.from_product
df = pd.DataFrame(index=gmi(rows), columns=gmi(cols))
np.random.seed(seed)
df.iloc[:, :] = np.random.rand(*df.shape)
return df
seed = [3, 1415]
df = gdf([lc, uc], [lc, uc], seed)
print df.head().T.head().T
```
`df` looks like:
```
a
A B C D E
a A 0.444939 0.407554 0.460148 0.465239 0.462691
B 0.032746 0.485650 0.503892 0.351520 0.061569
C 0.777350 0.047677 0.250667 0.602878 0.570528
D 0.927783 0.653868 0.381103 0.959544 0.033253
E 0.191985 0.304597 0.195106 0.370921 0.631576
```
Lets use `.at` and `.loc` and ensure I get the same thing
```
print "using .loc", df.loc[('a', 'A'), ('c', 'C')]
print "using .at ", df.at[('a', 'A'), ('c', 'C')]
using .loc 0.37374090276
using .at 0.37374090276
```
Test speed using `.loc`
```
%%timeit
df.loc[('a', 'A'), ('c', 'C')]
10000 loops, best of 3: 180 µs per loop
```
Test speed using `.at`
```
%%timeit
df.at[('a', 'A'), ('c', 'C')]
The slowest run took 6.11 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 8 µs per loop
```
This looks to be a huge speed increase. Even at the caching stage `6.11 * 8` is a lot faster than `180`
# Question
What are the limitations of `.at`? I'm motivated to use it. The documentation says it's similar to `.loc` but it doesn't behave similarly. Example:
```
# small df
sdf = gdf([lc[:2]], [uc[:2]], seed)
print sdf.loc[:, :]
A B
a 0.444939 0.407554
b 0.460148 0.465239
```
whereas `print sdf.at[:, :]` results in `TypeError: unhashable type`
So obviously not the same even if the intent is to be similar.
That said, who can provide guidance on what can and cannot be done with the `.at` method? | `df.at` can only access a single value at a time.
`df.loc` can select multiple rows and/or columns.
Note that there is also [`df.get_value`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.get_value.html), which may be even quicker at accessing single values:
```
In [25]: %timeit df.loc[('a', 'A'), ('c', 'C')]
10000 loops, best of 3: 187 µs per loop
In [26]: %timeit df.at[('a', 'A'), ('c', 'C')]
100000 loops, best of 3: 8.33 µs per loop
In [35]: %timeit df.get_value(('a', 'A'), ('c', 'C'))
100000 loops, best of 3: 3.62 µs per loop
```
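A minimal illustration of the single-value restriction (a hypothetical toy frame, not the MultiIndex one above):

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]}, index=['x', 'y'])

print(df.at['x', 'B'])                # scalar lookup works: 3
print(df.loc['x':'y', 'B'].tolist())  # .loc can slice: [3, 4]

try:
    df.at['x':'y', 'B']               # .at cannot: a slice is not a valid label
except Exception as exc:
    print('df.at rejected the slice:', type(exc).__name__)
```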
---
Under the hood, `df.at[...]` [calls `df.get_value`](https://github.com/pydata/pandas/blob/master/pandas/core/indexing.py#L1649), but it also does [some type checking](https://github.com/pydata/pandas/blob/master/pandas/core/indexing.py#L1678) on the keys. |
IPython console can't locate "backports.shutil_get_terminal_size" and won't load | 37,232,446 | 9 | 2016-05-14T22:13:53Z | 37,665,563 | 10 | 2016-06-06T19:38:55Z | [
"python",
"python-2.7",
"terminal",
"ipython",
"anaconda"
] | I'm running Python 2.7 on Windows 10, doing env and most package management with Anaconda. After upgrading a number of packages, my IPython console now fails to start in any IDE or at the console. When I attempt to run it at the console I get this error:
```
Traceback (most recent call last):
File "C:\Anaconda3\Scripts\ipython-script.py", line 3, in <module>
import IPython
File "C:\Anaconda3\lib\site-packages\IPython\__init__.py", line 48, in <module>
from .core.application import Application
File "C:\Anaconda3\lib\site-packages\IPython\core\application.py", line 24, in <module>
from IPython.core import release, crashhandler
File "C:\Anaconda3\lib\site-packages\IPython\core\crashhandler.py", line 28, in <module>
from IPython.core import ultratb
File "C:\Anaconda3\lib\site-packages\IPython\core\ultratb.py", line 121, in <module>
from IPython.utils.terminal import get_terminal_size
File "C:\Anaconda3\lib\site-packages\IPython\utils\terminal.py", line 27, in <module>
import backports.shutil_get_terminal_size
ImportError: No module named backports.shutil_get_terminal_size
```
The first thing I tried to do was:
```
pip install --upgrade backports.shutil_get_terminal_size
```
output:
```
Requirement already up-to-date: backports.shutil_get_terminal_size in c:\anaconda3\lib\site-packages
```
I've uninstalled and reinstalled ipython with both
```
conda uninstall ipython
conda install ipython
```
and
```
pip uninstall ipython
pip install ipython
```
Still won't work. Help please! | Try this
```
conda config --add channels conda-forge
conda install backports.shutil_get_terminal_size
``` |
In Python: How to remove an object from a list if it is only referenced in that list? | 37,232,884 | 16 | 2016-05-14T23:21:08Z | 37,233,282 | 7 | 2016-05-15T00:38:49Z | [
"python",
"list",
"object",
"reference",
"garbage-collection"
] | I want to keep track of objects of a certain type that are currently in use. For example: Keep track of all instances of a class or all classes that have been created by a metaclass.
It is easy to keep track of instances like this:
```
class A():
instances = []
def __init__(self):
self.instances.append(self)
```
But if an instance is not referenced anywhere outside of that list it will not be needed anymore and I do not want to process that instance in a potentially time consuming loop.
I tried to remove objects that are only referenced in the list using sys.getrefcount.
```
for i in A.instances:
if sys.getrefcount(i) <=3: # in the list, in the loop and in getrefcount
# collect and remove after the loop
```
The problem I have is that the reference count is very obscure.
Opening a new shell and creating a dummy class with no content returns 5 for
```
sys.getrefcount(DummyClass)
```
Another idea is to copy the objects then deleting the list and checking which objects have been scheduled for garbage collecting and in the last step removing those objects. Something like:
```
Copy = copy(A.instances)
del A.instances
A.instances = [i for i in Copy if not copy_of_i_is_in_GC(i)]
```
The objects don't have to be removed immediately when the reference count goes to 0. I just don't want to waste too much ressources on objects that are not used anymore. | This answer is the same as Kevin's but I was working up an example implementation with weak references and am posting it here. Using weak references solves the problem where an object is referenced by the `self.instance` list, so it will never be deleted.
One of the things about creating a weak reference for an object is that you can include a callback when the object is deleted. There are issues such as the callback not happening when the program exits... but that may be what you want anyway.
```
import threading
import weakref
class A(object):
instances = []
lock = threading.RLock()
@classmethod
def _cleanup_ref(cls, ref):
print('cleanup') # debug
with cls.lock:
try:
cls.instances.remove(ref)
except ValueError:
pass
def __init__(self):
with self.lock:
self.instances.append(weakref.ref(self, self._cleanup_ref))
# test
test = [A() for _ in range(3)]
for i in range(3,-1,-1):
assert len(A.instances) == i
if test:
test.pop()
print("see if 3 are removed at exit")
test = [A() for _ in range(3)]
``` |
Difficult Dataframe Reshape in Python Pandas | 37,233,840 | 3 | 2016-05-15T02:13:30Z | 37,233,930 | 7 | 2016-05-15T02:29:37Z | [
"python",
"pandas"
] | If I have a dataframe that looks like this:
```
DATE1 DATE2 DATE3 AMOUNT1 AMOUNT2 AMOUNT3
1 1/1/15 5/22/14 7/12/13 5 6 3
.. .. .. .. .. .. ..
```
and I want to get it in the form:
```
DATE AMOUNT
1 1/1/15 5
2 5/22/14 6
3 7/12/13 3
.. .. ..
```
What is the most efficient code to do this? From what I can tell, melting or grouping won't work because of the difference in column names (DATE1, DATE2, etc). Is the best thing to subset the "1" columns, "2" columns, and "3" columns into smaller dataframes, rename the columns, and concat? Or is there a better way to do it that I'm missing?
Thanks. | You could use `pd.lreshape`:
```
import pandas as pd
df = pd.DataFrame([['1/1/15', '5/22/14', '7/12/13', 5, 6, 3]],
columns=['DATE1', 'DATE2', 'DATE3', 'AMOUNT1', 'AMOUNT2', 'AMOUNT3'])
result = pd.lreshape(df, {'AMOUNT': ['AMOUNT1', 'AMOUNT2', 'AMOUNT3'],
'DATE': ['DATE1', 'DATE2', 'DATE3']})
print(result)
```
yields
```
DATE AMOUNT
0 1/1/15 5
1 5/22/14 6
2 7/12/13 3
```
The second argument to `pd.lreshape` is a dict of key/value pairs. Each key is
the name of a desired column, and each value is a list of columns from `df`
which you wish to coalesce into one column.
See the docstring, `help(pd.lreshape)`, for a little more on `pd.lreshape`.
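For a fully documented route to the same reshape, `pd.wide_to_long` also works; note it requires an identifier column, so this sketch exposes the index as one:

```python
import pandas as pd

df = pd.DataFrame([['1/1/15', '5/22/14', '7/12/13', 5, 6, 3]],
                  columns=['DATE1', 'DATE2', 'DATE3', 'AMOUNT1', 'AMOUNT2', 'AMOUNT3'])

# wide_to_long needs an id column (the `i` argument); reset_index() provides one.
tidy = (pd.wide_to_long(df.reset_index(), stubnames=['DATE', 'AMOUNT'],
                        i='index', j='num')
        .reset_index(drop=True))
print(tidy)
```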
---
Alternatively, you could use `pd.melt` to coalesce all the columns into one column, and use `str.extract` to separate the text-part from the numeric-part of the column names. Then use `pivot` to obtain the desired result:
```
result = pd.melt(df)
result[['variable', 'num']] = result['variable'].str.extract('(\D+)(\d+)', expand=True)
result = result.pivot(index='num', columns='variable', values='value')
print(result)
```
yields
```
variable AMOUNT DATE
num
1 5 1/1/15
2 6 5/22/14
3 3 7/12/13
``` |
What is the x = [m]*n syntax in Python? | 37,234,887 | 9 | 2016-05-15T05:28:14Z | 37,234,914 | 17 | 2016-05-15T05:32:17Z | [
"python",
"terminology"
] | I stumbled on 'x = [m]\*n' and, running it in the interpreter, I can see that the code allocates an n-element list initialized with m. But I can't find a description of this type of code online. What is this called?
```
>>> x = [0]*7
>>> x
[0, 0, 0, 0, 0, 0, 0]
``` | `*` is just multiplication - as `+` for lists intuitively means "concatenate both operands", the natural next step is multiplication by a scalar, with `[0] * N` meaning "concatenate this list with itself N times"!
In other words: `*` is an operator defined in Python for its primitive sequence types and an integer to concatenate the sequence with itself that number of times. It will work with lists, tuples and even strings.
That is somewhat natural in Python also because the language allows for operator overloading - so Python programmers do expect operators to do meaningful things with objects.
One should take some care that the objects that compose the resulting list are not copies of the objects on the original list - but references to the same object. Thus, if the contents of the original list were just numbers or some other immutable object, there are no surprises - but if it contains mutable objects, such as inner lists, one could be struck by severe side effects when changing them - like in this snippet:
```
In [167]: a = [[0]] * 7
In [168]: a
Out[168]: [[0], [0], [0], [0], [0], [0], [0]]
In [169]: a[0].append(1)
In [170]: a
Out[170]: [[0, 1], [0, 1], [0, 1], [0, 1], [0, 1], [0, 1], [0, 1]]
``` |
What is the x = [m]*n syntax in Python? | 37,234,887 | 9 | 2016-05-15T05:28:14Z | 37,234,928 | 11 | 2016-05-15T05:35:19Z | [
"python",
"terminology"
] | I stumbled on 'x = [m]\*n' and, running it in the interpreter, I can see that the code allocates an n-element list initialized with m. But I can't find a description of this type of code online. What is this called?
```
>>> x = [0]*7
>>> x
[0, 0, 0, 0, 0, 0, 0]
``` | From [the Python docs' description](https://docs.python.org/3/reference/expressions.html#binary-arithmetic-operations), the multiplication operator `*` used between an integer `n` and a primitive sequence type performs sequence repetition of the items in the sequence `n` times. So I suppose the term you are looking for is **sequence repetition**. Note that this is not "sequence copying", as no copies of the items are created - you have `n` references to the very same sequence. |
Extract using Beautiful Soup | 37,234,995 | 4 | 2016-05-15T05:46:40Z | 37,235,213 | 7 | 2016-05-15T06:22:26Z | [
"python",
"beautifulsoup"
] | I want to fetch the stock price from web site: <http://www.bseindia.com/>
For example the stock price appears as "S&P BSE :25,489.57". I want to fetch the numeric part of it as "25489.57".
This is the code I have written as of now. It is fetching the entire div in which this amount appears, but not the amount.
Below is the code:
```
from bs4 import BeautifulSoup
from urllib.request import urlopen
page = "http://www.bseindia.com"
html_page = urlopen(page)
html_text = html_page.read()
soup = BeautifulSoup(html_text,"html.parser")
divtag = soup.find_all("div",{"class":"sensexquotearea"})
for oye in divtag:
tdidTags = oye.find_all("div", {"class": "sensexvalue2"})
for tag in tdidTags:
tdTags = tag.find_all("div",{"class":"newsensexvaluearea"})
for newtag in tdTags:
tdnewtags = newtag.find_all("div",{"class":"sensextext"})
for rakesh in tdnewtags:
tdtdsp1 = rakesh.find_all("div",{"id":"tdsp"})
for texts in tdtdsp1:
print(texts)
``` | I had a look at what is going on when that page loads the information, and I was able to simulate what the JavaScript is doing in Python.
I found out it is referencing a page called `IndexMovers.aspx?ln=en` [check it out here](http://www.bseindia.com/Msource/IndexMovers.aspx?ln=en)
[](http://i.stack.imgur.com/lNv7M.png)
It looks like this page is a comma separated list of things. First comes the name, next comes the price, and then a couple other things you don't care about.
To simulate this in python, we request the page, split it by the commas, then read through every 6th value in the list, adding that value and the value one after that to a new list called stockInformation.
Now we can just loop through stock information and get the name using `item[0]` and price with `item[1]`
```
import requests
newUrl = "http://www.bseindia.com/Msource/IndexMovers.aspx?ln=en"
response = requests.get(newUrl).text
commaItems = response.split(",")
#create list of stocks, each one containing information
#index 0 is the name, index 1 is the price
#the last item is not included because for some reason it has no price info on indexMovers page
stockInformation = []
for i, item in enumerate(commaItems[:-1]):
if i % 6 == 0:
newList = [item, commaItems[i+1]]
stockInformation.append(newList)
#print each item and its price from your list
for item in stockInformation:
print(item[0], "has a price of", item[1])
```
This prints out:
```
S&P BSE SENSEX has a price of 25489.57
SENSEX#S&P BSE 100 has a price of 7944.50
BSE-100#S&P BSE 200 has a price of 3315.87
BSE-200#S&P BSE MidCap has a price of 11156.07
MIDCAP#S&P BSE SmallCap has a price of 11113.30
SMLCAP#S&P BSE 500 has a price of 10399.54
BSE-500#S&P BSE GREENEX has a price of 2234.30
GREENX#S&P BSE CARBONEX has a price of 1283.85
CARBON#S&P BSE India Infrastructure Index has a price of 152.35
INFRA#S&P BSE CPSE has a price of 1190.25
CPSE#S&P BSE IPO has a price of 3038.32
#and many more... (total of 40 items)
```
Which is clearly equivalent to the values shown on the page
[](http://i.stack.imgur.com/p4k3G.png)
So there you have it: you can simulate exactly what the JavaScript on that page is doing to load the information. In fact, you now have even more information than was shown on the page, and the request is going to be faster because we are downloading just the data, not all that extraneous HTML. |
Special Matrix in Numpy | 37,241,995 | 3 | 2016-05-15T18:17:52Z | 37,242,171 | 9 | 2016-05-15T18:35:08Z | [
"python",
"numpy"
] | I want to make a numpy array that looks like this:
```
m = [1, 1, 1, 0, 0, 0, 0, 0, 0
0, 0, 0, 1, 1, 1, 0, 0, 0
0, 0, 0, 0, 0, 0, 1, 1, 1]
```
I have seen this answer [Make special diagonal matrix in Numpy](https://stackoverflow.com/questions/18026541/make-special-diagonal-matrix-in-numpy) and I have this:
```
a = np.zeros((3, 9))
a[0, 0] = 1
a[0, 1] = 1
a[0, 2] = 1
a[1, 3] = 1
a[1, 4] = 1
a[1, 5] = 1
a[2, 6] = 1
a[2, 7] = 1
a[2, 8] = 1
```
But I want to use a 'for' loop. How can I fill the diagonal efficiently? | One way is to simply stretch an identity array horizontally:
```
> np.repeat(np.identity(3, dtype=int), 3, axis=1)
array([[1, 1, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 1, 1]])
``` |
ImportError: cannot import name NUMPY_MKL | 37,267,399 | 11 | 2016-05-17T04:52:01Z | 37,281,256 | 25 | 2016-05-17T16:03:27Z | [
"python",
"windows",
"python-2.7",
"numpy",
"scipy"
] | I am trying to run the following simple code
```
import scipy
scipy.test()
```
But I am getting the following error
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 586, in runfile
execfile(filename, namespace)
File "C:/Users/Mustafa/Documents/My Python Code/SpectralGraphAnalysis/main.py", line 8, in <module>
import scipy
File "C:\Python27\lib\site-packages\scipy\__init__.py", line 61, in <module>
from numpy._distributor_init import NUMPY_MKL # requires numpy+mkl
ImportError: cannot import name NUMPY_MKL
```
I am using python 2.7 under windows 10.
I have installed `scipy` but that does not seem to solve the problem
Any help is appreciated. | If you look at the line which is causing the error, you'll see this:
```
from numpy._distributor_init import NUMPY_MKL # requires numpy+mkl
```
This line comment states the dependency as `numpy+mkl` (`numpy` with [**Intel Math Kernel Library**](http://www.intel.com/software/products/mkl/)). This means that you've installed the `numpy` by `pip`, but the `scipy` was installed by precompiled archive, which expects `numpy+mkl`.
This problem can be easily solved by installing `numpy+mkl` from the whl file available [here](http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy). |
getting Forbidden by robots.txt: scrapy | 37,274,835 | 6 | 2016-05-17T11:28:33Z | 37,278,895 | 12 | 2016-05-17T14:24:08Z | [
"python",
"scrapy",
"web-crawler"
] | While crawling a website like <https://www.netflix.com>, I'm getting Forbidden by robots.txt: <https://www.netflix.com/>
ERROR: No response downloaded for: <https://www.netflix.com/> | In the new version (Scrapy 1.1), released 2016-05-11, the crawler first downloads robots.txt before crawling. To change this behavior, set [ROBOTSTXT_OBEY](http://doc.scrapy.org/en/1.1/topics/settings.html#robotstxt-obey) in your `settings.py`:
```
ROBOTSTXT_OBEY=False
```
Here are the [release notes](http://doc.scrapy.org/en/1.1/news.html#id1) |
"Fire and forget" python async/await | 37,278,647 | 18 | 2016-05-17T14:13:00Z | 37,345,564 | 14 | 2016-05-20T11:30:16Z | [
"python",
"python-3.5",
"python-asyncio"
] | Sometimes there is some non-critical asynchronous operation that needs to happen but I don't want to wait for it to complete. In Tornado's coroutine implementation you can "fire & forget" an asynchronous function by simply ommitting the `yield` key-word.
I've been trying to figure out how to "fire & forget" with the new `async`/`await` syntax released in Python 3.5. E.g., a simplified code snippet:
```
async def async_foo():
print("Do some stuff asynchronously here...")
def bar():
async_foo() # fire and forget "async_foo()"
bar()
```
What happens though is that `bar()` never executes and instead we get a runtime warning:
```
RuntimeWarning: coroutine 'async_foo' was never awaited
async_foo() # fire and forget "async_foo()"
``` | ## asyncio.Task to "fire and forget"
`asyncio.Task` [is a way](https://docs.python.org/3/library/asyncio-task.html#asyncio.Task) to start a coroutine executing "in the background". A task created with the `asyncio.ensure_future` [function](https://docs.python.org/3/library/asyncio-task.html#asyncio.ensure_future) won't block execution (the function always returns immediately). It looks like the "fire and forget" you're searching for.
```
import asyncio
async def async_foo():
print("async_foo started")
await asyncio.sleep(1)
print("async_foo done")
async def main():
asyncio.ensure_future(async_foo()) # fire and forget async_foo()
# btw, you can also create tasks inside non-async funcs
print('Do some actions 1')
await asyncio.sleep(1)
print('Do some actions 2')
await asyncio.sleep(1)
print('Do some actions 3')
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
```
Output:
```
Do some actions 1
async_foo started
Do some actions 2
async_foo done
Do some actions 3
```
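As an aside, on Python 3.7+ this pattern is usually spelled with `asyncio.create_task` and `asyncio.run` (a sketch, not part of the original answer):

```python
import asyncio

async def async_foo():
    print("async_foo started")
    await asyncio.sleep(0.1)
    print("async_foo done")

async def main():
    task = asyncio.create_task(async_foo())  # fire and forget (but keep a reference)
    print("Do some actions")
    await asyncio.sleep(0.2)                 # give async_foo time to finish

asyncio.run(main())
```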
## What if tasks are still executing after the event loop completes?
Note that asyncio expects tasks to be completed by the time the event loop completes. So if you change `main()` to:
```
async def main():
asyncio.ensure_future(async_foo()) # fire and forget
print('Do some actions 1')
await asyncio.sleep(0.1)
print('Do some actions 2')
```
you'll get a warning after the program finishes:
```
Task was destroyed but it is pending!
task: <Task pending coro=<async_foo() running at [...]
```
To prevent that you can just [await all pending tasks](http://stackoverflow.com/a/27910822/1113207) after event loop completed:
```
async def main():
asyncio.ensure_future(async_foo()) # fire and forget
print('Do some actions 1')
await asyncio.sleep(0.1)
print('Do some actions 2')
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
# Let's also finish all running tasks:
pending = asyncio.Task.all_tasks()
loop.run_until_complete(asyncio.gather(*pending))
```
## Kill tasks instead of awaiting them
Sometimes you don't want to wait for tasks to finish (for example, some tasks may be created to run forever). In that case, you can just `cancel()` them instead of awaiting them:
```
import asyncio
from contextlib import suppress
async def echo_forever():
while True:
print("echo")
await asyncio.sleep(1)
async def main():
asyncio.ensure_future(echo_forever()) # fire and forget
print('Do some actions 1')
await asyncio.sleep(1)
print('Do some actions 2')
await asyncio.sleep(1)
print('Do some actions 3')
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
# Let's also cancel all running tasks:
pending = asyncio.Task.all_tasks()
for task in pending:
task.cancel()
# Now we should await the task so its cancellation can run.
# Cancelled task raises asyncio.CancelledError that we can suppress:
with suppress(asyncio.CancelledError):
loop.run_until_complete(task)
```
Output:
```
Do some actions 1
echo
Do some actions 2
echo
Do some actions 3
echo
``` |
ScrapyRT vs Scrapyd | 37,283,531 | 15 | 2016-05-17T18:16:06Z | 37,285,578 | 9 | 2016-05-17T20:22:07Z | [
"python",
"web-scraping",
"scrapy",
"scrapyd"
] | We've been using [`Scrapyd` service](https://github.com/scrapy/scrapyd) for a while up until now. It provides a nice wrapper around a scrapy project and its spiders letting to control the spiders via an HTTP API:
> Scrapyd is a service for running Scrapy spiders.
>
> It allows you to deploy your Scrapy projects and control their spiders
> using a HTTP JSON API.
But, recently, I've noticed another "fresh" package - [`ScrapyRT`](https://github.com/scrapinghub/scrapyrt) that, according to the project description, sounds very promising and similar to `Scrapyd`:
> HTTP server which provides API for scheduling Scrapy spiders and making requests with spiders.
Is this package an alternative to `Scrapyd`? If yes, what is the difference between the two? | They don't have thaaat much in common. As you have already seen you have to deploy your spiders to scrapyd and then schedule crawls. scrapyd is a standalone service running on a server where you can deploy and run every project/spider you like.
With ScrapyRT you choose one of your projects and you `cd` to that directory. Then you run e.g. `scrapyrt` and you start crawls for spiders *on that project* through a simple (and very similar to scrapyd's) REST API. Then you get crawled items back as part of the JSON response.
It's a very nice idea and it looks fast, lean and well defined. Scrapyd on the other hand is more mature and more generic.
Here are some key differences:
* Scrapyd supports multiple versions of spiders and multiple projects. As far as I can see if you want to run two different projects (or versions) with ScrapyRT you will have to use different ports for each.
* Scrapyd provides infrastructure for keeping items in the server while ScrapyRT sends them back to you on the response which, for me, means that they should be in the order of a few MBs (instead of potentially GBs.) Similarly, the way logging is handled in scrapyd is more generic when compared to ScrapyRT.
* Scrapyd (potentially persistently) queues jobs and gives you control over the number of Scrapy processes that run in parallel. ScrapyRT does something simple which as far as I can tell is to start a crawl for every request as soon as the request arrives. Blocking code in one of the spiders will block others as well.
* ScrapyRT requires an `url` argument which as far as I can tell overrides any `start_urls`-related logic.
I would say that ScrapyRT and Scrapyd very cleverly don't overlap at this point in time. Of course you never know what future holds. |
How to tell Keras stop training based on loss value? | 37,293,642 | 7 | 2016-05-18T08:02:26Z | 37,296,168 | 11 | 2016-05-18T09:56:36Z | [
"python",
"machine-learning",
"neural-network",
"conv-neural-network",
"keras"
] | Currently I use the following code:
```
callbacks = [
EarlyStopping(monitor='val_loss', patience=2, verbose=0),
ModelCheckpoint(kfold_weights_path, monitor='val_loss', save_best_only=True, verbose=0),
]
model.fit(X_train.astype('float32'), Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
shuffle=True, verbose=1, validation_data=(X_valid, Y_valid),
callbacks=callbacks)
```
It tells Keras to stop training when the loss hasn't improved for 2 epochs. But I want to stop training once the loss becomes smaller than some constant "THR":
```
if val_loss < THR:
break
```
I've seen in the documentation that there is the possibility to make your own callback:
<http://keras.io/callbacks/>
But I found nothing about how to stop the training process. I need some advice. | I found the answer. I looked into the Keras sources and found the code for EarlyStopping. I made my own callback based on it:
```
class EarlyStoppingByLossVal(Callback):
    def __init__(self, monitor='val_loss', value=0.00001, verbose=0):
        super(EarlyStoppingByLossVal, self).__init__()
        self.monitor = monitor
        self.value = value
        self.verbose = verbose

    def on_epoch_end(self, epoch, logs={}):
        current = logs.get(self.monitor)
        if current is None:
            warnings.warn("Early stopping requires %s available!" % self.monitor, RuntimeWarning)
            return
        if current < self.value:
            if self.verbose > 0:
                print("Epoch %05d: early stopping THR" % epoch)
            self.model.stop_training = True
```
And usage:
```
callbacks = [
EarlyStoppingByLossVal(monitor='val_loss', value=0.00001, verbose=1),
# EarlyStopping(monitor='val_loss', patience=2, verbose=0),
ModelCheckpoint(kfold_weights_path, monitor='val_loss', save_best_only=True, verbose=0),
]
model.fit(X_train.astype('float32'), Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
shuffle=True, verbose=1, validation_data=(X_valid, Y_valid),
callbacks=callbacks)
``` |
How to iterate few elements in Python arrays? | 37,298,611 | 2 | 2016-05-18T11:43:48Z | 37,298,672 | 7 | 2016-05-18T11:46:41Z | [
"python",
"python-2.7"
] | For example I have a list of objects like this:
```
[[{1},{2},{3}],[{4},{5}],[{6},{7},{8}]]
```
I need to iterate through them all to get on each iteration objects like:
```
1,4,6
1,4,7
1,4,8
1,5,6
1,5,7
1,5,8
2,4,6
2,4,7
2,4,8
2,5,6
2,5,7
2,5,8
```
Basically each result is like a sub array of the input lists. | You can easily use [`itertools.product`](https://docs.python.org/2/library/itertools.html#itertools.product)
```
>>> import itertools
>>> x = list(itertools.product([1,2,3],[4,5],[6,7,8]))
>>> x
[(1, 4, 6), (1, 4, 7), (1, 4, 8), (1, 5, 6), (1, 5, 7), (1, 5, 8), (2, 4, 6), (2, 4, 7), (2, 4, 8), (2, 5, 6), (2, 5, 7), (2, 5, 8), (3, 4, 6), (3, 4, 7), (3, 4, 8), (3, 5, 6), (3, 5, 7), (3, 5, 8)]
```
Note that the output of every combination you are looking for is called the [Cartesian product](https://en.wikipedia.org/wiki/Cartesian_product) of your input lists. |
Attributes of Python module `this` | 37,301,273 | 16 | 2016-05-18T13:35:19Z | 37,301,341 | 28 | 2016-05-18T13:37:31Z | [
"python"
] | Typing `import this` returns Tim Peters' Zen of Python. But I noticed that there are 4 properties on the module:
```
this.i
this.c
this.d
this.s
```
I can see that the statement
```
print(''.join(this.d.get(el, el) for el in this.s))
```
uses `this.d` to decode `this.s` to print the Zen.
But can someone tell me what the attributes `this.i` and `this.c` are for?
I assume they are there intentionally - answers to [this question](http://stackoverflow.com/q/5855758/115237) seem to suggest there are other jokes to be gleaned from the wording of the Zen. I'm wondering if there is a reference I'm missing with these 2 values.
I noticed that the values differ between Python versions:
```
# In v3.5:
this.c
Out[2]: 97
this.i
Out[3]: 25
# In v2.6
this.c
Out[8]: '!'
this.i
Out[9]: 25
``` | `i` and `c` are simply *loop variables*, used to build the `d` dictionary. From [the module source code](https://hg.python.org/cpython/file/3.5/Lib/this.py):
```
d = {}
for c in (65, 97):
for i in range(26):
d[chr(i+c)] = chr((i+13) % 26 + c)
```
This builds a [ROT-13 mapping](https://en.wikipedia.org/wiki/ROT13); each ASCII letter (codepoints 65 through 90 for uppercase, 97 through 122 for lowercase) is mapped to another ASCII letter 13 spots along the alphabet (looping back to A and onwards). So `A` (ASCII point 65) is mapped to `N` and vice versa (as well as `a` mapped to `n`):
```
>>> c, i = 65, 0
>>> chr(i + c)
'A'
>>> chr((i + 13) % 26 + c)
'N'
```
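As a quick sanity check, rebuilding the table from the loop above and applying it to one encoded line round-trips as expected:

```python
# Same loop as in this.py: map each ASCII letter 13 places along the alphabet.
d = {}
for c in (65, 97):
    for i in range(26):
        d[chr(i + c)] = chr((i + 13) % 26 + c)

decoded = ''.join(d.get(ch, ch) for ch in "Ornhgvshy vf orggre guna htyl.")
print(decoded)  # Beautiful is better than ugly.
```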
Note that if you wanted to ROT-13 text yourself, there is a simpler method; just encode or decode with the `rot13` codec:
```
>>> this.s
"Gur Mra bs Clguba, ol Gvz Crgref\n\nOrnhgvshy vf orggre guna htyl.\nRkcyvpvg vf orggre guna vzcyvpvg.\nFvzcyr vf orggre guna pbzcyrk.\nPbzcyrk vf orggre guna pbzcyvpngrq.\nSyng vf orggre guna arfgrq.\nFcnefr vf orggre guna qrafr.\nErnqnovyvgl pbhagf.\nFcrpvny pnfrf nera'g fcrpvny rabhtu gb oernx gur ehyrf.\nNygubhtu cenpgvpnyvgl orngf chevgl.\nReebef fubhyq arire cnff fvyragyl.\nHayrff rkcyvpvgyl fvyraprq.\nVa gur snpr bs nzovthvgl, ershfr gur grzcgngvba gb thrff.\nGurer fubhyq or bar-- naq cersrenoyl bayl bar --boivbhf jnl gb qb vg.\nNygubhtu gung jnl znl abg or boivbhf ng svefg hayrff lbh'er Qhgpu.\nAbj vf orggre guna arire.\nNygubhtu arire vf bsgra orggre guna *evtug* abj.\nVs gur vzcyrzragngvba vf uneq gb rkcynva, vg'f n onq vqrn.\nVs gur vzcyrzragngvba vf rnfl gb rkcynva, vg znl or n tbbq vqrn.\nAnzrfcnprf ner bar ubaxvat terng vqrn -- yrg'f qb zber bs gubfr!"
>>> import codecs
>>> codecs.decode(this.s, 'rot13')
"The Zen of Python, by Tim Peters\n\nBeautiful is better than ugly.\nExplicit is better than implicit.\nSimple is better than complex.\nComplex is better than complicated.\nFlat is better than nested.\nSparse is better than dense.\nReadability counts.\nSpecial cases aren't special enough to break the rules.\nAlthough practicality beats purity.\nErrors should never pass silently.\nUnless explicitly silenced.\nIn the face of ambiguity, refuse the temptation to guess.\nThere should be one-- and preferably only one --obvious way to do it.\nAlthough that way may not be obvious at first unless you're Dutch.\nNow is better than never.\nAlthough never is often better than *right* now.\nIf the implementation is hard to explain, it's a bad idea.\nIf the implementation is easy to explain, it may be a good idea.\nNamespaces are one honking great idea -- let's do more of those!"
```
As for the difference in Python 2.6 (or Python 2.7 for that matter) versus Python 3.5; the same variable name `c` is also used in the list comprehension in the `str.join()` call:
```
print "".join([d.get(c, c) for c in s])
```
In Python 2, list comprehensions do not get their own scope (unlike generator expressions and dict and set comprehensions). In Python 3 they do, and the `c` value in the list comprehension is no longer part of the module namespace. So the last value assigned to `c` *at the module scope* is `97` in Python 3, and `this.s[-1]` (so a `'!'`) in Python 2. See [Why do list comprehensions write to the loop variable, but generators don't?](http://stackoverflow.com/questions/19848082/why-do-list-comprehensions-write-to-the-loop-variable-but-generators-dont)
There is no joke embedded in these 1-letter variable names. There *are* jokes in the Zen itself. Like the fact that between the source code for the `this` module and the text itself you can find violations for just about all the rules! |
Show hidden option using argparse | 37,303,960 | 7 | 2016-05-18T15:25:44Z | 37,304,287 | 7 | 2016-05-18T15:39:37Z | [
"python",
"argparse"
] | I'm using argparse to create an option, and it's a very specific option to do one specific job. The script currently has roughly 30 knobs, and most aren't used regularly.
I'm creating an option:
```
opt.add_argument('-opt', help=argparse.SUPPRESS)  # instead of help="Some Help"
```
But I want there to be two ways to show the help for the script:
```
my_script -help
my_script -help-long
```
I want the -help-long to also show all the hidden args. I couldn't find a way to do this.
Is there a way to implement this behavior? | I don't think there's a builtin way to support this. You can probably hack around it by checking `sys.argv` directly and using that to modify how you build the parser:
```
import sys
show_hidden_args = '--help-long' in sys.argv
opt = argparse.ArgumentParser()
opt.add_argument('--hidden-arg', help='...' if show_hidden_args else argparse.SUPPRESS)
opt.add_argument('--help-long', help='Show all options.', action='help')
args = opt.parse_args()
```
Of course, if writing this over and over is too inconvenient, you can wrap it in a helper function (or subclass `ArgumentParser`):
```
def add_hidden_argument(*args, **kwargs):
if not show_hidden_args:
kwargs['help'] = argparse.SUPPRESS
opt.add_argument(*args, **kwargs)
```
And you'll probably want to add a non-hidden `--help-long` argument so that users know what it supposedly does... |
Python3: zip in range | 37,308,000 | 3 | 2016-05-18T18:57:02Z | 37,308,050 | 8 | 2016-05-18T18:59:54Z | [
"python",
"python-3.x"
] | I'm new to Python and I'm trying to zip 2 lists together into 1, which I was already able to do. I've got 2 lists with several things in them, but I'm asking the user to input a number, which should determine the range.
So I've got List1: A1, A2, A3, A4, A5, A6 and List2: B1, B2, B3, B4, B5, B6
I know how to display the 2 complete lists, but what I'm trying to do is, if the user enters number "3", the zip should only display: (A1,B1), (A2,B2), (A3,B3) instead of the whole list. So here's what I tried:
```
a = ["A1", "A2", "A3", "A4", "A5", "A6"]
b = ["B1", "B2", "B3", "B4", "B5", "B6"]
c = zip(a,b)
number = int(input("please enter number"))
for x in c:
print(x[:number])
```
But this doesn't work. I tried to look it up, but couldn't find anything. I would be glad if someone could help me out. | You can slice the result of `zip()` with [`itertools.islice()`](https://docs.python.org/3/library/itertools.html#itertools.islice):
```
>>> from itertools import islice
>>> list(islice(c, number))
[('A1', 'B1'), ('A2', 'B2'), ('A3', 'B3')]
``` |
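If building the full list is acceptable (the inputs here are small), a simpler sketch is to materialize the zip and slice it; `islice` merely avoids creating the intermediate list:

```python
a = ["A1", "A2", "A3", "A4", "A5", "A6"]
b = ["B1", "B2", "B3", "B4", "B5", "B6"]
number = 3
# zip() returns an iterator in Python 3, so convert to a list before slicing.
c = list(zip(a, b))[:number]
assert c == [('A1', 'B1'), ('A2', 'B2'), ('A3', 'B3')]
```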
Merge two or more lists with given order of merging | 37,309,556 | 8 | 2016-05-18T20:30:42Z | 37,309,683 | 11 | 2016-05-18T20:38:11Z | [
"python",
"python-2.7"
] | On start I have 2 lists and 1 list that says in what order I should merge those two lists.
For example I have first list equal to `[a, b, c]` and second list equal to `[d, e]` and 'merging' list equal to `[0, 1, 0, 0, 1]`.
That means: to make merged list first I need to take element from first list, then second, then first, then first, then second... And I end up with `[a, d, b, c, e]`.
To solve this I just used for loop and two "pointers", but I was wondering if I can do this task more pythonic... I tried to find some functions that could help me, but no real result. | You could create 2 iterators from those lists, loop through the ordering list, and call [`next`](https://docs.python.org/3/library/functions.html#next) on one of the 2 iterators:
```
i1 = iter(['a', 'b', 'c'])
i2 = iter(['d', 'e'])
# Select the iterator to advance: `i2` if `x` == 1, `i1` otherwise
print([next(i2 if x else i1) for x in [0, 1, 0, 0, 1]]) # ['a', 'd', 'b', 'c', 'e']
```
It's possible to generalize this solution to any number of lists as shown below
```
def ordered_merge(lists, selector):
its = [iter(l) for l in lists]
for i in selector:
yield next(its[i])
```
Here's the usage example:
```
In [9]: list(ordered_merge([[3, 4], [1, 5], [2, 6]], [1, 2, 0, 0, 1, 2]))
Out[9]: [1, 2, 3, 4, 5, 6]
``` |
Mutually exclusive option groups in python Click | 37,310,718 | 5 | 2016-05-18T21:52:56Z | 37,491,504 | 7 | 2016-05-27T20:07:45Z | [
"python",
"python-click"
] | How can I create a mutually exclusive option group in Click? I want to either accept the flag "--all" or take an option with a parameter like "--color red". | I ran into this same use case recently; this is what I came up with. For each option, you can give a list of conflicting options.
```
from click import command, option, Option, UsageError
class MutuallyExclusiveOption(Option):
def __init__(self, *args, **kwargs):
self.mutually_exclusive = set(kwargs.pop('mutually_exclusive', []))
help = kwargs.get('help', '')
if self.mutually_exclusive:
ex_str = ', '.join(self.mutually_exclusive)
kwargs['help'] = help + (
' NOTE: This argument is mutually exclusive with '
' arguments: [' + ex_str + '].'
)
super(MutuallyExclusiveOption, self).__init__(*args, **kwargs)
def handle_parse_result(self, ctx, opts, args):
if self.mutually_exclusive.intersection(opts) and self.name in opts:
raise UsageError(
"Illegal usage: `{}` is mutually exclusive with "
"arguments `{}`.".format(
self.name,
', '.join(self.mutually_exclusive)
)
)
return super(MutuallyExclusiveOption, self).handle_parse_result(
ctx,
opts,
args
)
```
Then use the regular `option` decorator but pass the `cls` argument:
```
@command(help="Run the command.")
@option('--jar-file', cls=MutuallyExclusiveOption,
help="The jar file the topology lives in.",
mutually_exclusive=["other_arg"])
@option('--other-arg',
cls=MutuallyExclusiveOption,
help="The jar file the topology lives in.",
mutually_exclusive=["jar_file"])
def cli(jar_file, other_arg):
print "Running cli."
print "jar-file: {}".format(jar_file)
print "other-arg: {}".format(other_arg)
if __name__ == '__main__':
cli()
```
[Here's a gist](https://gist.github.com/jacobtolar/fb80d5552a9a9dfc32b12a829fa21c0c)
that includes the code above and shows the output from running it.
If that won't work for you, there's also a few (closed) issues mentioning this on the click github page with a couple of ideas that you may be able to use.
* <https://github.com/pallets/click/issues/257>
* <https://github.com/pallets/click/issues/509> |
What's the meaning of "(1,) == 1," in Python? | 37,313,471 | 106 | 2016-05-19T03:29:25Z | 37,313,506 | 137 | 2016-05-19T03:32:47Z | [
"python",
"tuples",
"equals-operator"
] | I'm testing the tuple structure, and I found it's strange when I use the `==` operator like:
```
>>> (1,) == 1,
Out: (False,)
```
When I assign these two expressions to a variable, the result is true:
```
>>> a = (1,)
>>> b = 1,
>>> a==b
Out: True
```
This questions is different from [Python tuple trailing comma syntax rule](http://stackoverflow.com/questions/7992559/python-tuple-trailing-comma-syntax-rule) in my view. I ask the group of expressions between `==` operator. | This is just operator precedence. Your first
```
(1,) == 1,
```
groups like so:
```
((1,) == 1),
```
so it builds a one-element tuple from the result of comparing the tuple `(1,)` to the integer `1` for equality. They're not equal, so you get the 1-tuple `(False,)` as a result.
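The grouping is easy to check directly (a quick sketch):

```python
# The original expression: a 1-tuple whose element is the comparison result.
a = ((1,) == 1),
assert a == (False,)
# With explicit parentheses on both sides, the tuples compare equal.
assert (1,) == (1,)
```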
What's the meaning of "(1,) == 1," in Python? | 37,313,471 | 106 | 2016-05-19T03:29:25Z | 37,313,562 | 11 | 2016-05-19T03:39:37Z | [
"python",
"tuples",
"equals-operator"
] | I'm testing the tuple structure, and I found it's strange when I use the `==` operator like:
```
>>> (1,) == 1,
Out: (False,)
```
When I assign these two expressions to a variable, the result is true:
```
>>> a = (1,)
>>> b = 1,
>>> a==b
Out: True
```
This questions is different from [Python tuple trailing comma syntax rule](http://stackoverflow.com/questions/7992559/python-tuple-trailing-comma-syntax-rule) in my view. I ask the group of expressions between `==` operator. | When you do
```
>>> (1,) == 1,
```
it builds a tuple containing the result of comparing the *tuple* `(1,)` with an *integer*, which is `False`.
Instead when you assign to variables, the two *equal tuples* are compared with each other.
You can try:
```
>>> x = 1,
>>> x
(1,)
``` |
What's the meaning of "(1,) == 1," in Python? | 37,313,471 | 106 | 2016-05-19T03:29:25Z | 37,313,614 | 79 | 2016-05-19T03:45:03Z | [
"python",
"tuples",
"equals-operator"
] | I'm testing the tuple structure, and I found it's strange when I use the `==` operator like:
```
>>> (1,) == 1,
Out: (False,)
```
When I assign these two expressions to a variable, the result is true:
```
>>> a = (1,)
>>> b = 1,
>>> a==b
Out: True
```
This questions is different from [Python tuple trailing comma syntax rule](http://stackoverflow.com/questions/7992559/python-tuple-trailing-comma-syntax-rule) in my view. I ask the group of expressions between `==` operator. | Other answers have already shown you that the behaviour is due to operator precedence, as documented [here](https://docs.python.org/3/reference/expressions.html#operator-precedence).
I'm going to show you how to find the answer yourself next time you have a question similar to this. You can deconstruct how the expression parses using the [`ast`](https://docs.python.org/3/library/ast.html) module:
```
>>> import ast
>>> source_code = '(1,) == 1,'
>>> print(ast.dump(ast.parse(source_code), annotate_fields=False))
Module([Expr(Tuple([Compare(Tuple([Num(1)], Load()), [Eq()], [Num(1)])], Load()))])
```
From this we can see that the code gets parsed [as Tim Peters explained](http://stackoverflow.com/a/37313506/674039):
```
Module([Expr(
Tuple([
Compare(
Tuple([Num(1)], Load()),
[Eq()],
[Num(1)]
)
], Load())
)])
``` |
Transforming Dataframe columns into Dataframe of rows | 37,333,614 | 5 | 2016-05-19T20:40:03Z | 37,333,712 | 8 | 2016-05-19T20:46:07Z | [
"python",
"pandas",
"dataframe"
] | I have following DataFrame:
```
data = {'year': [2010, 2010, 2011, 2012, 2011, 2012, 2010, 2011, 2012, 2013],
'store_number': ['1944', '1945', '1946', '1947', '1948', '1949', '1947', '1948', '1949', '1947'],
'retailer_name': ['Walmart','Walmart', 'CRV', 'CRV', 'CRV', 'Walmart', 'Walmart', 'CRV', 'CRV', 'CRV'],
'product': ['a', 'b', 'a', 'a', 'b', 'a', 'b', 'a', 'a', 'c'],
'amount': [5, 5, 8, 6, 1, 5, 10, 6, 12, 11]}
stores = pd.DataFrame(data, columns=['retailer_name', 'store_number', 'year', 'product', 'amount'])
stores.set_index(['retailer_name', 'store_number', 'year', 'product'], inplace=True)
stores.groupby(level=[0, 1, 2, 3]).sum()
```
I want to transform following Dataframe:
```
amount
retailer_name store_number year product
CRV 1946 2011 a 8
1947 2012 a 6
2013 c 11
1948 2011 a 6
b 1
1949 2012 a 12
Walmart 1944 2010 a 5
1945 2010 b 5
1947 2010 b 10
1949 2012 a 5
```
into dataframe of rows:
```
retailer_name store_number year a b c
CRV 1946 2011 8 0 0
CRV 1947 2012 6 0 0
etc...
```
The products are known ahead.
Any idea how to do so? | Please see below for a solution. Thanks to EdChum for corrections to the original post.
**Without reset\_index()**
```
stores.groupby(level=[0, 1, 2, 3]).sum().unstack().fillna(0)
amount
product a b c
retailer_name store_number year
CRV 1946 2011 8 0 0
1947 2012 6 0 0
2013 0 0 11
1948 2011 6 1 0
1949 2012 12 0 0
Walmart 1944 2010 5 0 0
1945 2010 0 5 0
1947 2010 0 10 0
1949 2012 5 0 0
```
**With reset\_index()**
```
stores.groupby(level=[0, 1, 2, 3]).sum().unstack().reset_index().fillna(0)
retailer_name store_number year amount
product a b c
0 CRV 1946 2011 8 0 0
1 CRV 1947 2012 6 0 0
2 CRV 1947 2013 0 0 11
3 CRV 1948 2011 6 1 0
4 CRV 1949 2012 12 0 0
5 Walmart 1944 2010 5 0 0
6 Walmart 1945 2010 0 5 0
7 Walmart 1947 2010 0 10 0
8 Walmart 1949 2012 5 0 0
``` |
Transforming Dataframe columns into Dataframe of rows | 37,333,614 | 5 | 2016-05-19T20:40:03Z | 37,333,857 | 7 | 2016-05-19T20:56:45Z | [
"python",
"pandas",
"dataframe"
] | I have following DataFrame:
```
data = {'year': [2010, 2010, 2011, 2012, 2011, 2012, 2010, 2011, 2012, 2013],
'store_number': ['1944', '1945', '1946', '1947', '1948', '1949', '1947', '1948', '1949', '1947'],
'retailer_name': ['Walmart','Walmart', 'CRV', 'CRV', 'CRV', 'Walmart', 'Walmart', 'CRV', 'CRV', 'CRV'],
'product': ['a', 'b', 'a', 'a', 'b', 'a', 'b', 'a', 'a', 'c'],
'amount': [5, 5, 8, 6, 1, 5, 10, 6, 12, 11]}
stores = pd.DataFrame(data, columns=['retailer_name', 'store_number', 'year', 'product', 'amount'])
stores.set_index(['retailer_name', 'store_number', 'year', 'product'], inplace=True)
stores.groupby(level=[0, 1, 2, 3]).sum()
```
I want to transform following Dataframe:
```
amount
retailer_name store_number year product
CRV 1946 2011 a 8
1947 2012 a 6
2013 c 11
1948 2011 a 6
b 1
1949 2012 a 12
Walmart 1944 2010 a 5
1945 2010 b 5
1947 2010 b 10
1949 2012 a 5
```
into dataframe of rows:
```
retailer_name store_number year a b c
CRV 1946 2011 8 0 0
CRV 1947 2012 6 0 0
etc...
```
The products are known ahead.
Any idea how to do so ? | Unstack `product` from the index and fill `NaN` values with zero.
```
df = stores.groupby(level=[0, 1, 2, 3]).sum().unstack('product')
mask = pd.IndexSlice['amount', :]
df.loc[:, mask] = df.loc[:, mask].fillna(0)
>>> df
amount
product a b c
retailer_name store_number year
CRV 1946 2011 8 0 0
1947 2012 6 0 0
2013 0 0 11
1948 2011 6 1 0
1949 2012 12 0 0
Walmart 1944 2010 5 0 0
1945 2010 0 5 0
1947 2010 0 10 0
1949 2012 5 0 0
``` |
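If the extra `amount` level on the columns is unwanted, it can be dropped afterwards. A self-contained sketch using a hypothetical two-row subset of the data:

```python
import pandas as pd

data = {'year': [2010, 2011], 'store_number': ['1944', '1946'],
        'retailer_name': ['Walmart', 'CRV'],
        'product': ['a', 'a'], 'amount': [5, 8]}
stores = pd.DataFrame(data).set_index(
    ['retailer_name', 'store_number', 'year', 'product'])
out = stores.groupby(level=[0, 1, 2, 3]).sum().unstack('product').fillna(0)
out.columns = out.columns.droplevel(0)  # drop the 'amount' level
assert list(out.columns) == ['a']
```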
Pythonic way to iterate and/or enumerate with a binary 'switch' | 37,342,396 | 2 | 2016-05-20T09:00:28Z | 37,342,506 | 7 | 2016-05-20T09:05:42Z | [
"python",
"loops",
"iterator"
] | I'm working with a few things at the moment where there will be 2^n possible outcomes that I need to iterate over in a binary manner.
I'd like some kind of binary [enumeration](https://docs.python.org/3/library/functions.html#enumerate) or similar that I could use to switch on and off operators and/or functions in each iteration.
An example where the *sign* (or +/- operator) is changing over 2^3 = 8 iterations may be:
```
loop1: + var1 + var2 + var3
loop2: + var1 + var2 - var3
loop3: + var1 - var2 + var3
loop4: + var1 - var2 - var3
loop5: - var1 + var2 + var3
loop6: - var1 + var2 - var3
loop7: - var1 - var2 + var3
loop8: - var1 - var2 - var3
```
Sort of a binary tree, but as a code structure as opposed to a data structure?
Is there a helpful builtin? | Just produce the product of binary flags; if you need to switch 3 different things, generate the product of `(False, True)` three times:
```
from itertools import product
for first, second, third in product((False, True), repeat=3):
```
You can also produce the product of operators; your sample could use [`operator` module functions](https://docs.python.org/2/library/operator.html):
```
import operator
from itertools import product
unary_op = operator.pos, operator.neg
for ops in product(unary_op, repeat=3):
result = sum(op(var) for op, var in zip(ops, (var1, var2, var3)))
```
Demo:
```
>>> from itertools import product
>>> import operator
>>> var1, var2, var3 = 42, 13, 81
>>> unary_op = operator.pos, operator.neg
>>> for ops in product(unary_op, repeat=3):
... vars = [op(var) for op, var in zip(ops, (var1, var2, var3))]
... print('{:=3d} + {:=3d} + {:=3d} = {sum:=4d}'.format(*vars, sum=sum(vars)))
...
42 + 13 + 81 = 136
42 + 13 + -81 = - 26
42 + -13 + 81 = 110
42 + -13 + -81 = - 52
-42 + 13 + 81 = 52
-42 + 13 + -81 = -110
-42 + -13 + 81 = 26
-42 + -13 + -81 = -136
``` |
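The same idea can be expressed with numeric signs instead of `operator` functions; a sketch:

```python
from itertools import product

values = (42, 13, 81)
totals = [sum(sign * v for sign, v in zip(signs, values))
          for signs in product((1, -1), repeat=3)]
assert len(totals) == 8            # 2**3 sign combinations
assert max(totals) == 136 and min(totals) == -136
```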
ImportError : cannot import name '_win32stdio' | 37,342,603 | 5 | 2016-05-20T09:10:15Z | 37,357,863 | 10 | 2016-05-21T01:00:26Z | [
"python",
"visual-studio",
"scrapy"
] | I am working with the Scrapy framework to scrape data from a website, but I am getting the following error in the command prompt:
> ImportError: cannot import name '\_win32stdio'
Traceback is attached as a screenshot.
Kindly let me know if you require the directory structure of my program.
 | Scrapy can work with Python 3 on windows if you make some minor adjustments:
1. Copy the \_win32stdio and \_pollingfile to the appropriate directory under site-packages. Namely, twisted-dir\internet. Download these from <https://github.com/twisted/twisted/tree/trunk/twisted/internet>
2. `pip install pypiwin32`
Granted, this is based on my personal experience. Because the repository will certainly change in the future, readers should beware the age of this answer. |
How to correctly supply argument to the following Python script | 37,355,964 | 2 | 2016-05-20T21:15:26Z | 37,355,990 | 7 | 2016-05-20T21:17:17Z | [
"python",
"linux",
"command-line",
"arguments"
] | I'm new to Python and would really like to run some files through the following Python script.
<https://github.com/ashutoshkpandey/Variants_call/blob/master/Filter_Pindel_del_vcf.py>
I'm running on Linux server and Python is installed. I saved the script in a directory with the two required files (Del.vcf and Output\_D). Here is what I've typed on the command line but the script can't find the files for some reason.
```
$ python Filter_Pindel_del_vcf.py Del.vcf, Output_D, Outputfile
Traceback (most recent call last):
File "Filter_Pindel_del_vcf.py", line 45, in <module>
for row in fileinput.input([Filepath]):
File "/usr/lib64/python2.7/fileinput.py", line 253, in next
line = self.readline()
File "/usr/lib64/python2.7/fileinput.py", line 345, in readline
self._file = open(self._filename, self._mode)
IOError: [Errno 2] No such file or directory: 'Del.vcf,'
``` | Just remove commas:
```
python Filter_Pindel_del_vcf.py Del.vcf Output_D Outputfile
``` |
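The commas ended up inside the argument strings because the shell splits on whitespace only; a quick simulation of that splitting:

```python
# Whitespace-only splitting leaves the commas glued to the tokens,
# which is why Python looked for a file literally named "Del.vcf,".
args = "Del.vcf, Output_D, Outputfile".split()
assert args == ["Del.vcf,", "Output_D,", "Outputfile"]
```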
How to check if all items in list are string | 37,357,798 | 2 | 2016-05-21T00:49:00Z | 37,357,810 | 8 | 2016-05-21T00:50:57Z | [
"python",
"python-3.x"
] | If I have a list in python, is there a function to tell me if all the items in the list are strings?
For Example:
`["one", "two", 3]` would return `False`, and `["one", "two", "three"]` would return `True`. | Just use `all()` and check for types with `isinstance()`.
```
>>> l = ["one", "two", 3]
>>> all(isinstance(item, str) for item in l)
False
>>> l = ["one", "two", '3']
>>> all(isinstance(item, str) for item in l)
True
``` |
Why are literal formatted strings so slow in Python 3.6 alpha? | 37,365,311 | 19 | 2016-05-21T16:15:09Z | 37,365,521 | 19 | 2016-05-21T16:34:30Z | [
"python",
"performance",
"string-formatting",
"python-internals",
"python-3.6"
] | I've downloaded a Python 3.6 alpha build from the Python Github repository, and one of my favourite new features is literal string formatting. It can be used like so:
```
>>> x = 2
>>> f"x is {x}"
"x is 2"
```
This appears to do the same thing as using the `format` function on a `str` instance. However, one thing that I've noticed is that this literal string formatting is actually very slow compared to just calling `format`. Here's what `timeit` says about each method:
```
>>> x = 2
>>> timeit.timeit(lambda: f"X is {x}")
0.8658502227130764
>>> timeit.timeit(lambda: "X is {}".format(x))
0.5500578542015617
```
If I use a string as `timeit`'s argument, my results are still showing the pattern:
```
>>> timeit.timeit('x = 2; f"X is {x}"')
0.5786435347381484
>>> timeit.timeit('x = 2; "X is {}".format(x)')
0.4145195760771685
```
As you can see, using `format` takes almost half the time. I would expect the literal method to be faster because less syntax is involved. What is going on behind the scenes which causes the literal method to be so much slower? | The `f"..."` syntax is effectively converted to a `str.join()` operation on the literal string parts around the `{...}` expressions, and the results of the expressions themselves passed through the `object.__format__()` method (passing any `:..` format specification in). You can see this when disassembling:
```
>>> import dis
>>> dis.dis(compile('f"X is {x}"', '', 'exec'))
1 0 LOAD_CONST 0 ('')
3 LOAD_ATTR 0 (join)
6 LOAD_CONST 1 ('X is ')
9 LOAD_NAME 1 (x)
12 FORMAT_VALUE 0
15 BUILD_LIST 2
18 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
21 POP_TOP
22 LOAD_CONST 2 (None)
25 RETURN_VALUE
>>> dis.dis(compile('"X is {}".format(x)', '', 'exec'))
1 0 LOAD_CONST 0 ('X is {}')
3 LOAD_ATTR 0 (format)
6 LOAD_NAME 1 (x)
9 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
12 POP_TOP
13 LOAD_CONST 1 (None)
16 RETURN_VALUE
```
Note the `BUILD_LIST` and `LOAD_ATTR .. (join)` op-codes in that result. The new `FORMAT_VALUE` takes the top of the stack plus a format value (parsed out at compile time) to combine these in a `object.__format__()` call.
So your example, `f"X is {x}"`, is translated to:
```
''.join(["X is ", x.__format__('')])
```
Note that this requires Python to create a list object, and call the `str.join()` method.
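That equivalence can be checked directly (Python 3.6+, empty format spec assumed):

```python
x = 2
manual = ''.join(["X is ", x.__format__('')])
assert f"X is {x}" == manual == "X is 2"
```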
The `str.format()` call is also a method call, and after parsing there is still a call to `x.__format__('')` involved, but crucially, there is no *list creation* involved here. It is this difference that makes the `str.format()` method faster.
Note that Python 3.6 has only been released as an alpha build; this implementation can still easily change. See [PEP 494 - *Python 3.6 Release Schedule*](https://www.python.org/dev/peps/pep-0494/) for the time table, as well as [Python issue #27078](http://bugs.python.org/issue27078) (opened in response to this question) for a discussion on how to further improve the performance of formatted string literals.
Cleanest way to filter a Pandas dataframe? | 37,375,158 | 2 | 2016-05-22T13:46:13Z | 37,375,226 | 8 | 2016-05-22T13:52:16Z | [
"python",
"pandas"
] | Python:
```
pd.read_csv("CME-datasets-codes.csv", header=None)
```
Produces:
```
0 1
0 CME/OH2014 Oats Futures, March 2014, OH2014, CBOT
1 CME/HGG2004 Copper Futures, February 2004, HGG2004, COMEX
2 CME/BRH2014 Brazilian Real (BRL/USD) Futures, March 2014, ...
3 CME/F5H2014 PJM PPL Zone Off-Peak Calendar-Month Day-Ahead...
4 CME/PDMU2016 MISO Indiana Hub Day-Ahead Peak Calendar-Month...
```
I want to filter this to show me all rows that start with "CME/C" in column 0.
What's the cleanest way? | ```
df[df[0].str.startswith('CME/C')]
``` |
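A quick check on a hypothetical miniature of the data (the row values here are made up; `header=None` gives the integer column labels):

```python
import pandas as pd

df = pd.DataFrame({0: ["CME/OH2014", "CME/CLF2014", "ICE/BRN2014"],
                   1: ["Oats", "Crude Oil", "Brent"]})
filtered = df[df[0].str.startswith("CME/C")]
assert list(filtered[0]) == ["CME/CLF2014"]
```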
Python - Check if multiple variables have the same value | 37,376,516 | 3 | 2016-05-22T16:01:34Z | 37,376,566 | 8 | 2016-05-22T16:07:37Z | [
"python",
"python-3.x"
] | I have a set of three variables x, y, z and I want to check if they **all share the same value**. In my case, the value will either be 1 or 2, but I only need to know if they are all the same. Currently I'm using
```
if 1 == x and 1 == y and 1 == z:
sameness = True
```
Looking for the answer I've found:
```
if 1 in {x, y, z}:
```
However, this operates as
```
if 1 == x or 1 == y or 1 == z:
atleastOneMatch = True
```
Is it possible to check if 1 is in each: x, y, and z?
Better yet, is there a more concise way of checking simply if x, y, and z are the same value?
(if it matters, I use python 3) | If you have an arbitrary sequence, use the [`all()` function](https://docs.python.org/3/library/functions.html#all) with a [generator expression](https://docs.python.org/3/tutorial/classes.html#generator-expressions):
```
values = [x, y, z] # can contain any number of values
if all(v == 1 for v in values):
```
otherwise, just use `==` on *all three variables*:
```
if x == y == z == 1:
```
If you only needed to know if they are all the same value (regardless of what value that is), use:
```
if all(v == values[0] for v in values):
```
or
```
if x == y == z:
``` |
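For hashable values, a set-based sketch is another option (it raises `TypeError` for unhashable items such as lists):

```python
x = y = z = 1
all_same = len({x, y, z}) == 1  # one distinct value means all equal
assert all_same
```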
Extracting items from a list and assigning them to variables | 37,376,776 | 3 | 2016-05-22T16:28:11Z | 37,376,812 | 8 | 2016-05-22T16:31:15Z | [
"python",
"list"
] | I'm trying to extract the first three items from `numbers`, and assign them to three different variables, as follows:
```
numbers = [1,2,3,4,5,7,8,9,10]
[first_item, second_item, third_item] = numbers
```
Why am I getting this error?
```
Traceback (most recent call last):
File "test.py", line 2, in <module>
[first_item, second_item, third_item] = numbers
ValueError: too many values to unpack
``` | You don't assign to them in a list like that, and you need to handle the rest of the unpacking for your list - your error message is indicating that there are more values to unpack than you have variables to assign to. One way you can amend this is by assigning the remaining elements in your list to a `rest` variable with the `*` unpacking operator
```
numbers = [1,2,3,4,5,7,8,9,10]
first_item, second_item, third_item, *rest = numbers
```
Note that this is only possible since Python 3, see [PEP 3132 - Extended Iterable Unpacking](https://www.python.org/dev/peps/pep-3132/), |
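An alternative that avoids the starred target (and therefore also runs on Python 2): slice first, then unpack.

```python
numbers = [1, 2, 3, 4, 5, 7, 8, 9, 10]
first_item, second_item, third_item = numbers[:3]
assert (first_item, second_item, third_item) == (1, 2, 3)
```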
In Python Dictionaries, how does ( (j*5)+1 ) % 2**i cycle through all 2**i | 37,378,874 | 17 | 2016-05-22T19:43:42Z | 37,379,175 | 13 | 2016-05-22T20:13:13Z | [
"python",
"algorithm",
"dictionary",
"linear-probing"
] | I am researching how Python implements dictionaries. One of the equations in the Python dictionary implementation relates to the pseudo-random probing for an empty dictionary slot using the equation
```
j = ((j*5) + 1) % 2**i
```
which is explained [here](https://hg.python.org/cpython/file/52f68c95e025/Objects/dictobject.c#l89).
I have read this question, [How are Python's Built In Dictionaries Implemented](http://stackoverflow.com/questions/327311/how-are-pythons-built-in-dictionaries-implemented), and basically understand how dictionaries are implemented.
What I don't understand is why/how the equation:
```
j = ((j*5) + 1) % 2**i
```
cycles through all the remainders of `2**i`. For instance, if `i = 3` for a total starting size of 8. `j` goes through the cycle:
```
0
1
6
7
4
5
2
3
0
```
if the starting size is 16, it would go through the cycle:
```
0 1 6 15 12 13 2 11 8 9 14 7 4 5 10 3 0
```
This is very useful for probing all the slots in the dictionary. **But why does it work ?** Why does `j = ((j*5)+1)` work but not `j = ((j*6)+1)` or `j = ((j*3)+1)` both of which get stuck in smaller cycles.
I am hoping to get a more intuitive understanding of this than *the equation just works and that's why they used it*. | This is the same principle that pseudo-random number generators use, as Jasper hinted at, namely [linear congruential generators](https://en.wikipedia.org/wiki/Linear_congruential_generator). A linear congruential generator is a sequence that follows the relationship `X_(n+1) = (a * X_n + c) mod m`. From the wiki page,
> The period of a general LCG is at most m, and for some choices of factor a much less than that. The LCG will have a full period for all seed values if and only if:
>
> 1. `m` and `c` are relatively prime.
> 2. `a - 1` is divisible by all prime factors of `m`.
> 3. `a - 1` is divisible by 4 if `m` is divisible by `4`.
It's clear to see that 5 is the smallest `a` to satisfy these requirements, namely
1. 2^i and 1 are relatively prime.
2. 4 is divisible by 2.
3. 4 is divisible by 4.
Also interestingly, 5 is not the only number that satisfies these conditions. 9 will also work. Taking `m` to be 16, using `j=(9*j+1)%16` yields
```
0 1 10 11 4 5 14 15 8 9 2 3 12 13 6 7
```
The proof for these three conditions can be found in [the original Hull-Dobell paper](http://chagall.med.cornell.edu/BioinfoCourse/PDFs/Lecture4/random_number_generator.pdf) on page 5, along with a bunch of other PRNG-related theorems that also may be of interest. |
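The period claims are easy to verify with a small brute-force sketch:

```python
def cycle_length(a, c, m, start=0):
    """Number of steps until the LCG j = (a*j + c) % m returns to start."""
    j, steps = start, 0
    while True:
        j = (a * j + c) % m
        steps += 1
        if j == start:
            return steps

assert cycle_length(5, 1, 16) == 16  # full period, as listed in the question
assert cycle_length(9, 1, 16) == 16  # 9 satisfies the conditions too
assert cycle_length(3, 1, 16) == 8   # a - 1 = 2 is not divisible by 4
```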
Spark ALS predictAll returns empty | 37,379,751 | 4 | 2016-05-22T21:16:03Z | 37,435,580 | 7 | 2016-05-25T11:21:12Z | [
"python",
"apache-spark",
"machine-learning",
"pyspark",
"rdd"
] | I have the following Python test code (the arguments to ALS.train are defined elsewhere):
```
r1 = (2, 1)
r2 = (3, 1)
test = sc.parallelize([r1, r2])
model = ALS.train(ratings, rank, numIter, lmbda)
predictions = model.predictAll(test)
print test.take(1)
print predictions.count()
print predictions
```
Which works, because it has a count of 1 against the predictions variable and outputs:
```
[(2, 1)]
1
ParallelCollectionRDD[2691] at parallelize at PythonRDD.scala:423
```
However, when I try and use an RDD I created myself using the following code, it doesn't appear to work anymore:
```
model = ALS.train(ratings, rank, numIter, lmbda)
validation_data = validation.map(lambda xs: tuple(int(x) for x in xs))
predictions = model.predictAll(validation_data)
print validation_data.take(1)
print predictions.count()
print validation_data
```
Which outputs:
```
[(61, 3864)]
0
PythonRDD[4018] at RDD at PythonRDD.scala:43
```
As you can see, predictAll comes back empty when passed the mapped RDD. The values going in are both of the same format. The only noticeable difference that I can see is that the first example uses parallelize and produces a `ParallelCollectionRDD` whereas the second example just uses a map which produces a `PythonRDD`. Does predictAll only work if passed a certain type of RDD? If so, is it possible to convert between RDD types? I'm not sure how to get this working. | There are two basic conditions under which `MatrixFactorizationModel.predictAll` may return an RDD with a lower number of items than the input:
* user is missing in the training set.
* product is missing in the training set.
You can easily reproduce this behavior and check that it is not dependent on the way the RDD has been created. First let's use the example data to build a model:
```
from pyspark.mllib.recommendation import ALS, MatrixFactorizationModel, Rating
def parse(s):
x, y, z = s.split(",")
return Rating(int(x), int(y), float(z))
ratings = (sc.textFile("data/mllib/als/test.data")
.map(parse)
.union(sc.parallelize([Rating(1, 5, 4.0)])))
model = ALS.train(ratings, 10, 10)
```
Next let's see which products and users are present in the training data:
```
set(ratings.map(lambda r: r.product).collect())
## {1, 2, 3, 4, 5}
set(ratings.map(lambda r: r.user).collect())
## {1, 2, 3, 4}
```
Now let's create test data and check the predictions:
```
valid_test = sc.parallelize([(2, 5), (1, 4), (3, 5)])
valid_test
## ParallelCollectionRDD[434] at parallelize at PythonRDD.scala:423
model.predictAll(valid_test).count()
## 3
```
So far so good. Next let's map it using the same logic as in your code:
```
valid_test_ = valid_test.map(lambda xs: tuple(int(x) for x in xs))
valid_test_
## PythonRDD[497] at RDD at PythonRDD.scala:43
model.predictAll(valid_test_).count()
## 3
```
Still fine. Next let's create invalid data and repeat the experiment:
```
invalid_test = sc.parallelize([
(2, 6), # No product in the training data
(6, 1) # No user in the training data
])
invalid_test
## ParallelCollectionRDD[500] at parallelize at PythonRDD.scala:423
model.predictAll(invalid_test).count()
## 0
invalid_test_ = invalid_test.map(lambda xs: tuple(int(x) for x in xs))
model.predictAll(invalid_test_).count()
## 0
```
As expected there are no predictions for invalid input.
Finally you can confirm this is really the case by using ML model which is completely independent in training / prediction from Python code:
```
from pyspark.ml.recommendation import ALS as MLALS
model_ml = MLALS(rank=10, maxIter=10).fit(
ratings.toDF(["user", "item", "rating"])
)
model_ml.transform((valid_test + invalid_test).toDF(["user", "item"])).show()
## +----+----+----------+
## |user|item|prediction|
## +----+----+----------+
## | 6| 1| NaN|
## | 1| 4| 1.0184212|
## | 2| 5| 4.0041084|
## | 3| 5|0.40498763|
## | 2| 6| NaN|
## +----+----+----------+
```
As you can see no corresponding user / item in the training data means no prediction. |
Replacing repeated captures | 37,381,249 | 24 | 2016-05-23T01:01:07Z | 37,381,369 | 9 | 2016-05-23T01:21:00Z | [
"python",
"regex"
] | This is sort of a follow-up to [Python regex - Replace single quotes and brackets](http://stackoverflow.com/questions/37375828/python-regex-replace-single-quotes-and-brackets) thread.
**The task:**
Sample input strings:
```
RSQ(name['BAKD DK'], name['A DKJ'])
SMT(name['BAKD DK'], name['A DKJ'], name['S QRT'])
```
Desired outputs:
```
XYZ(BAKD DK, A DKJ)
XYZ(BAKD DK, A DKJ, S QRT)
```
The number of `name['something']`-like items is *variable*.
**The current solution:**
Currently, I'm doing it through *two separate `re.sub()` calls*:
```
>>> import re
>>>
>>> s = "RSQ(name['BAKD DK'], name['A DKJ'])"
>>> s1 = re.sub(r"^(\w+)", "XYZ", s)
>>> re.sub(r"name\['(.*?)'\]", r"\1", s1)
'XYZ(BAKD DK, A DKJ)'
```
**The question:**
Would it be possible to combine these two `re.sub()` calls into a single one?
In other words, I want to replace something at the beginning of the string and then multiple similar things after, all of that in one go.
---
I've looked into the [`regex` module](https://pypi.python.org/pypi/regex) - its ability to [capture repeated patterns](http://stackoverflow.com/questions/9764930/capturing-repeating-subpatterns-in-python-regex) looks very promising, but I tried using `re.subf()` and failed to make it work. | You could do this. Though I don't think it's very readable. And doing it this way could get unruly if you start adding more patterns to replace. It takes advantage of the fact that the replacement argument can also be a function.
```
s = "RSQ(name['BAKD DK'], name['A DKJ'])"
re.sub(r"^(\w+)|name\['(.*?)'\]", lambda m: 'XYZ' if m.group(1) else m.group(2), s)
``` |
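As a quick sanity check (not part of the original answer), the combined pattern reproduces both desired outputs from the question in a single pass:

```python
import re

# Combined pattern from the answer: the first alternative matches the
# leading function name, the second matches each name['...'] item.
pattern = r"^(\w+)|name\['(.*?)'\]"
repl = lambda m: 'XYZ' if m.group(1) else m.group(2)

results = [re.sub(pattern, repl, s) for s in (
    "RSQ(name['BAKD DK'], name['A DKJ'])",
    "SMT(name['BAKD DK'], name['A DKJ'], name['S QRT'])",
)]
print(results)  # ['XYZ(BAKD DK, A DKJ)', 'XYZ(BAKD DK, A DKJ, S QRT)']
```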
Replacing repeated captures | 37,381,249 | 24 | 2016-05-23T01:01:07Z | 37,381,447 | 13 | 2016-05-23T01:33:33Z | [
"python",
"regex"
] | This is sort of a follow-up to [Python regex - Replace single quotes and brackets](http://stackoverflow.com/questions/37375828/python-regex-replace-single-quotes-and-brackets) thread.
**The task:**
Sample input strings:
```
RSQ(name['BAKD DK'], name['A DKJ'])
SMT(name['BAKD DK'], name['A DKJ'], name['S QRT'])
```
Desired outputs:
```
XYZ(BAKD DK, A DKJ)
XYZ(BAKD DK, A DKJ, S QRT)
```
The number of `name['something']`-like items is *variable*.
**The current solution:**
Currently, I'm doing it through *two separate `re.sub()` calls*:
```
>>> import re
>>>
>>> s = "RSQ(name['BAKD DK'], name['A DKJ'])"
>>> s1 = re.sub(r"^(\w+)", "XYZ", s)
>>> re.sub(r"name\['(.*?)'\]", r"\1", s1)
'XYZ(BAKD DK, A DKJ)'
```
**The question:**
Would it be possible to combine these two `re.sub()` calls into a single one?
In other words, I want to replace something at the beginning of the string and then multiple similar things after, all of that in one go.
---
I've looked into [`regex` module](https://pypi.python.org/pypi/regex) - it's ability to [capture repeated patterns](http://stackoverflow.com/questions/9764930/capturing-repeating-subpatterns-in-python-regex) looks very promising, tried using `re.subf()` but failed to make it work. | You can indeed use the regex module and repeated captures. The main interest is that you can check the structure of the matched string:
```
import regex
regO = regex.compile(r'''
\w+ \( (?: name\['([^']*)'] (?: ,[ ] | (?=\)) ) )* \)
''', regex.VERBOSE);
regO.sub(lambda m: 'XYZ(' + (', '.join(m.captures(1))) + ')', s)
```
*(Note that you can replace `"name"` by `\w+` or anything you want without problems.)* |
Replacing repeated captures | 37,381,249 | 24 | 2016-05-23T01:01:07Z | 37,383,496 | 9 | 2016-05-23T05:56:52Z | [
"python",
"regex"
] | This is sort of a follow-up to [Python regex - Replace single quotes and brackets](http://stackoverflow.com/questions/37375828/python-regex-replace-single-quotes-and-brackets) thread.
**The task:**
Sample input strings:
```
RSQ(name['BAKD DK'], name['A DKJ'])
SMT(name['BAKD DK'], name['A DKJ'], name['S QRT'])
```
Desired outputs:
```
XYZ(BAKD DK, A DKJ)
XYZ(BAKD DK, A DKJ, S QRT)
```
The number of `name['something']`-like items is *variable*.
**The current solution:**
Currently, I'm doing it through *two separate `re.sub()` calls*:
```
>>> import re
>>>
>>> s = "RSQ(name['BAKD DK'], name['A DKJ'])"
>>> s1 = re.sub(r"^(\w+)", "XYZ", s)
>>> re.sub(r"name\['(.*?)'\]", r"\1", s1)
'XYZ(BAKD DK, A DKJ)'
```
**The question:**
Would it be possible to combine these two `re.sub()` calls into a single one?
In other words, I want to replace something at the beginning of the string and then multiple similar things after, all of that in one go.
---
I've looked into [`regex` module](https://pypi.python.org/pypi/regex) - it's ability to [capture repeated patterns](http://stackoverflow.com/questions/9764930/capturing-repeating-subpatterns-in-python-regex) looks very promising, tried using `re.subf()` but failed to make it work. | Please do not do this in any code I have to maintain.
You are trying to parse syntactically valid Python. Use [`ast`](https://docs.python.org/3/library/ast.html) for that. It's more readable, easier to extend to new syntax, and won't fall apart on some weird corner case.
Working sample:
```
from ast import parse
l = [
"RSQ(name['BAKD DK'], name['A DKJ'])",
"SMT(name['BAKD DK'], name['A DKJ'], name['S QRT'])"
]
for item in l:
tree = parse(item)
args = [arg.slice.value.s for arg in tree.body[0].value.args]
output = "XYZ({})".format(", ".join(args))
print(output)
```
Prints:
```
XYZ(BAKD DK, A DKJ)
XYZ(BAKD DK, A DKJ, S QRT)
``` |
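On Python 3.9+ the AST for subscripts changed (`ast.Index` was removed), so `arg.slice.value.s` from the sample above no longer works there; a version-tolerant variant might look like this (my own adaptation, not the original answer's code):

```python
from ast import parse, Constant

def rewrite(src):
    # Extract the string literals from each name['...'] subscript argument.
    call = parse(src).body[0].value
    names = []
    for arg in call.args:
        sl = arg.slice
        if not isinstance(sl, Constant):   # pre-3.9: unwrap the ast.Index node
            sl = sl.value
        # 3.8+ parses string literals as Constant; older versions used Str
        names.append(sl.value if isinstance(sl, Constant) else sl.s)
    return "XYZ({})".format(", ".join(names))

print(rewrite("RSQ(name['BAKD DK'], name['A DKJ'])"))  # XYZ(BAKD DK, A DKJ)
```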
TensorFlow, "'module' object has no attribute 'placeholder'" | 37,383,812 | 3 | 2016-05-23T06:20:20Z | 37,386,150 | 7 | 2016-05-23T08:29:39Z | [
"python",
"machine-learning",
"tensorflow"
] | I've been trying to use tensorflow for two days now installing and reinstalling it over and over again in python2.7 and 3.4. No matter what I do, I get this error message when trying to use tensorflow.placeholder()
It's very boilerplate code:
```
tf_in = tf.placeholder("float", [None, A]) # Features
```
No matter what I do I always get the trace back:
```
Traceback (most recent call last):
File "/home/willim/PycharmProjects/tensorflow/tensorflow.py", line 2, in <module>
import tensorflow as tf
File "/home/willim/PycharmProjects/tensorflow/tensorflow.py", line 53, in <module>
tf_in = tf.placeholder("float", [None, A]) # Features
AttributeError: 'module' object has no attribute 'placeholder'
```
Anyone know how I can fix this? | **Solution: Do not use "tensorflow" as your filename.**
Notice that you use tensorflow.py as your filename. And I guess you write code like:
```
import tensorflow as tf
```
Then you are actually importing the script file "tensorflow.py" that is under your current working directory, rather than the "real" tensorflow module from Google.
Here is the order in which a module will be searched when importing:
> 1. The directory containing the input script (or the current directory when no file is specified).
> 2. PYTHONPATH (a list of directory names,
> with the same syntax as the shell variable PATH).
> 3. The installation-dependent default. |
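The shadowing is easy to reproduce with any stdlib name; this throwaway sketch (using `json` rather than `tensorflow`, so no third-party install is needed) shows a script importing itself instead of the standard library:

```python
import os
import subprocess
import sys
import tempfile

# Write a script named json.py whose `import json` picks up the script
# itself instead of the standard library -- the same trap as naming a
# script tensorflow.py.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "json.py")
    with open(path, "w") as f:
        f.write("import json\nprint(json.__file__)\n")
    result = subprocess.run([sys.executable, path],
                            capture_output=True, text=True)

print(result.stdout)  # the path of the shadowing json.py, not the stdlib one
```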
How can I create a type hint that my returned list contains strings? | 37,386,499 | 16 | 2016-05-23T08:47:14Z | 37,386,610 | 19 | 2016-05-23T08:53:08Z | [
"python",
"python-3.5",
"type-hinting"
] | I want to use Type Hints in my Python program. How can I create Type Hints for complex data structures like
* lists with strings
* a generator returning integers?
**Example**
```
def names() -> list:
# I would like to specify that the list contains strings?
return ['Amelie', 'John', 'Carmen']
def numbers():
# Which type should I specify for `numbers()`?
for num in range(100):
yield num
``` | Use the [`typing` module](https://docs.python.org/3/library/typing.html); it contains *generics*, type objects you can use to specify containers with constraints on their contents:
```
import typing
def names() -> typing.List[str]: # list object with strings
return ['Amelie', 'John', 'Carmen']
def numbers() -> typing.Iterator[int]: # iterator yielding integers
for num in range(100):
yield num
```
Depending on how you design your code and how you want to use the return value of `names()`, you could also use the [`typing.Sequence`](https://docs.python.org/3/library/typing.html#typing.Sequence) and [`typing.MutableSequence`](https://docs.python.org/3/library/typing.html#typing.MutableSequence) types here, depending on whether or not you expect to be able to mutate the result.
A generator is a specific type of *iterator*, so `typing.Iterator` is appropriate here. If your generator also accepts `send()` values and uses `return` to set a `StopIteration` value, you can use the [`typing.Generator` object](https://docs.python.org/3/library/typing.html#typing.Generator) too:
```
def filtered_numbers(filter) -> typing.Generator[int, int, float]:
# contrived generator that filters numbers; returns percentage filtered.
# first send a limit!
matched = 0
limit = yield
yield # one more yield to pause after sending
for num in range(limit):
if filter(num):
yield num
matched += 1
return (matched / limit) * 100
```
If you are new to type hinting, then [PEP 483 - *The Theory of Type Hints*](https://www.python.org/dev/peps/pep-0483/) may be helpful. |
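Note that these hints are not enforced at runtime, but they are attached to the function object and can be inspected; a quick check:

```python
import typing

def names() -> typing.List[str]:
    return ['Amelie', 'John', 'Carmen']

# The annotation is stored on the function and resolvable via get_type_hints:
hints = typing.get_type_hints(names)
print(hints['return'])  # typing.List[str]
```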
Spurious newlines added in Django management commands | 37,400,807 | 13 | 2016-05-23T21:23:38Z | 39,931,636 | 10 | 2016-10-08T11:08:05Z | [
"python",
"django",
"python-3.x",
"stdout",
"django-management-command"
] | Running Django v1.10 on Python 3.5.0:
```
from django.core.management.base import BaseCommand
class Command(BaseCommand):
def handle(self, *args, **options):
print('hello ', end='', file=self.stdout)
print('world', file=self.stdout)
```
Expected output:
```
hello world
```
Actual output:
```
hello
world
```
How do I correctly pass the ending character? I currently use a workaround of setting explicitly:
```
self.stdout.ending = ''
```
But this hack means you don't get all the features of the print function, you must use `self.stdout.write` and prepare the bytes manually. | As is mentioned in [Django 1.10's Custom Management Commands](https://docs.djangoproject.com/en/1.10/howto/custom-management-commands/) document:
> When you are using management commands and wish to provide console output, you should write to **self.stdout** and **self.stderr**, instead of printing to **stdout** and **stderr** directly. By using these proxies, it becomes much easier to test your custom command. *Note also that you don't need to end messages with a newline character, it will be added automatically, unless you specify the ending parameter*:
>
> ```
> self.stdout.write("Unterminated line", ending='')
> ```
Hence, in order to print in your `Command` class, you should define your `handle()` function as:
```
from django.core.management.base import BaseCommand
class Command(BaseCommand):
def handle(self, *args, **options):
self.stdout.write("hello ", ending='')
self.stdout.write("world", ending='')
# prints: hello world
```
Also, by explicitly setting `self.stdout.ending = ''`, you are modifying a property of the `self.stdout` object. But you may not want this to be reflected in future calls of `self.stdout.write()`. Hence it is better to use the `ending` parameter within the `self.stdout.write()` function (as demonstrated in the sample code above).
As you mentioned, *"But this hack means you don't get all the features of the print function, you must use self.stdout.write and prepare the bytes manually."* No, you do not have to worry about creating the `bytes` or the other features of `print()`, as the [`self.stdout.write()`](http://fossies.org/dox/Django-1.10.2/classdjango_1_1core_1_1management_1_1base_1_1OutputWrapper.html#a100ad4bd24e087ee23ff3376ba7e9be0) function belonging to the [`OutputWrapper`](http://fossies.org/dox/Django-1.10.2/classdjango_1_1core_1_1management_1_1base_1_1OutputWrapper.html) class expects data to be in `str` format. I would also like to mention that `print()` and `OutputWrapper.write()` behave quite similarly, both acting as wrappers around [`sys.stdout.write()`](https://docs.python.org/3.1/library/sys.html#sys.stdout).
The only difference between `print()` and `OutputWrapper.write()` is:
* `print()` accepts its message strings as `*args`, with a separator parameter to join the multiple strings, whereas
* `OutputWrapper.write()` accepts single message string
But this difference could be easily handled by explicitly joining the strings with *separator* and passing it to `OutputWrapper.write()`.
**Conclusion:** You do not have to worry about the additional features provided by `print()` as there are none, and should go ahead with using `self.stdout.write()` as suggested in this answer's quoted content from [Custom Management Commands](https://docs.djangoproject.com/en/1.10/howto/custom-management-commands/) document.
If you are interested, you may check the source code of `BaseCommand` and `OutputWrapper` classes available at: [Source code for `django.core.management.base`](https://docs.djangoproject.com/en/1.10/_modules/django/core/management/base/). It might help in clearing some of your doubts. You may also check [PEP-3105](https://www.python.org/dev/peps/pep-3105/#specification) related to Python 3's `print()`. |
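If you do miss `print()`'s joining of multiple arguments, a thin helper over an `OutputWrapper`-style `write()` covers it; this sketch uses a minimal stand-in class (my own invention) so it runs without Django:

```python
class FakeOutputWrapper:
    """Minimal stand-in for django.core.management.base.OutputWrapper."""
    def __init__(self):
        self.parts = []

    def write(self, msg='', ending='\n'):
        # OutputWrapper appends `ending` to every message it writes
        self.parts.append(msg + ending)

def write_like_print(stdout, *args, sep=' ', ending=''):
    # Reproduce print()-style argument joining on top of .write()
    stdout.write(sep.join(str(a) for a in args), ending=ending)

out = FakeOutputWrapper()
write_like_print(out, "hello", "world")
print(''.join(out.parts))  # hello world
```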
Smoothing Edges of a Binary Image | 37,409,811 | 6 | 2016-05-24T09:40:28Z | 37,458,312 | 7 | 2016-05-26T10:34:15Z | [
"python",
"opencv",
"image-processing",
"blur",
"edge"
] | How to smooth the edges of this binary image of blood vessels obtained after thresholding.
[](http://i.stack.imgur.com/YyNQV.png)
I tried a method somewhat similar to [this method](http://stackoverflow.com/questions/21795643/image-edge-smoothing-with-opencv) but did not quite get the result I expected.
[](http://i.stack.imgur.com/8IAYc.png)
Here's the code:
```
import cv2
import numpy as np
INPUT = cv2.imread('so-br-in.png',0)
MASK = np.array(INPUT/255.0, dtype='float32')
MASK = cv2.GaussianBlur(MASK, (5,5), 11)
BG = np.ones([INPUT.shape[0], INPUT.shape[1], 1], dtype='uint8')*255
OUT_F = np.ones([INPUT.shape[0], INPUT.shape[1], 1],dtype='uint8')
for r in range(INPUT.shape[0]):
for c in range(INPUT.shape[1]):
OUT_F[r][c] = int(BG[r][c]*(MASK[r][c]) + INPUT[r][c]*(1-MASK[r][c]))
cv2.imwrite('brain-out.png', OUT_F)
```
What can be done to improve the smoothing of these harsh edges?
**EDIT**
I'd like to smooth the edges, something like <http://pscs5.tumblr.com/post/60284570543>. How can I do this in OpenCV? | Here is the result I obtained with your image: [](http://i.stack.imgur.com/frmkx.png)
My method is mostly based on several `cv::medianBlur` calls applied to a scaled-up image.
Here is the code:
```
cv::Mat vesselImage = cv::imread(filename); //the original image
cv::threshold(vesselImage, vesselImage, 125, 255, THRESH_BINARY);
cv::Mat blurredImage; //output of the algorithm
cv::pyrUp(vesselImage, blurredImage);
for (int i = 0; i < 15; i++)
cv::medianBlur(blurredImage, blurredImage, 7);
cv::pyrDown(blurredImage, blurredImage);
cv::threshold(blurredImage, blurredImage, 200, 255, THRESH_BINARY);
```
The jagged edges are due to the thresholding. If you are comfortable with an output image that is non-binary (i.e. with 256 shades of gray), you can just remove it and you get this image: [](http://i.stack.imgur.com/4m2lx.png) |
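For readers without OpenCV at hand, the core idea (median filtering removes small jagged artifacts while preserving larger structures) can be sketched in pure NumPy; `median_blur` below is a naive, slow stand-in for `cv::medianBlur`, for illustration only:

```python
import numpy as np

def median_blur(img, k=3):
    """Naive median filter (edge pixels left unchanged) -- a pure-NumPy
    stand-in for cv2.medianBlur, for illustration only."""
    pad = k // 2
    out = img.copy()
    for r in range(pad, img.shape[0] - pad):
        for c in range(pad, img.shape[1] - pad):
            out[r, c] = np.median(img[r - pad:r + pad + 1, c - pad:c + pad + 1])
    return out

noisy = np.zeros((7, 7), dtype=np.uint8)
noisy[3, 3] = 255                   # a single-pixel speckle
print(median_blur(noisy).max())     # 0 -- the isolated pixel is removed
```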
Efficient Double Sum of Products | 37,416,978 | 7 | 2016-05-24T14:48:51Z | 37,417,313 | 8 | 2016-05-24T15:02:34Z | [
"python",
"numpy"
] | Consider two `ndarrays` of length `n`, `arr1` and `arr2`. I'm computing the following sum of products, and doing it `num_runs` times to benchmark:
```
import numpy as np
import time
num_runs = 1000
n = 100
arr1 = np.random.rand(n)
arr2 = np.random.rand(n)
start_comp = time.clock()
for r in xrange(num_runs):
sum_prods = np.sum( [arr1[i]*arr2[j] for i in xrange(n)
for j in xrange(i+1, n)] )
print "total time for comprehension = ", time.clock() - start_comp
start_loop = time.clock()
for r in xrange(num_runs):
sum_prod = 0.0
for i in xrange(n):
for j in xrange(i+1, n):
sum_prod += arr1[i]*arr2[j]
print "total time for loop = ", time.clock() - start_loop
```
The output is
```
total time for comprehension = 3.23097066953
total time for loop = 3.9045544426
```
so using list comprehension appears faster.
Is there a much more efficient implementation, using Numpy routines perhaps, to calculate such a sum of products? | A vectorized way: `np.sum(np.triu(np.multiply.outer(arr1,arr2),1))`.
for a 30x improvement:
```
In [9]: %timeit np.sum(np.triu(np.multiply.outer(arr1,arr2),1))
1000 loops, best of 3: 272 µs per loop
In [10]: %timeit np.sum( [arr1[i]*arr2[j] for i in range(n)
                          for j in range(i+1, n)] )
100 loops, best of 3: 7.9 ms per loop

In [11]: np.allclose(np.sum(np.triu(np.multiply.outer(arr1,arr2),1)),
                     np.sum( [arr1[i]*arr2[j] for i in range(n)
                              for j in range(i+1, n)] ))
Out[11]: True
```
Another fast approach is to use numba:
```
from numba import jit
@jit
def t(arr1,arr2):
s=0
for i in range(n):
for j in range(i+1,n):
s+= arr1[i]*arr2[j]
return s
```
for a 10x new factor :
```
In [12]: %timeit t(arr1,arr2)
10000 loops, best of 3: 21.1 µs per loop
```
And using @user2357112's minimal answer,
```
@jit
def t2357112(arr1,arr2):
s=0
c=0
for i in range(n-2,-1,-1):
c += arr2[i+1]
s += arr1[i]*c
return s
```
which gives:
```
In [13]: %timeit t2357112(arr1,arr2)
100000 loops, best of 3: 2.33 µs per loop
```
by doing only the necessary operations. |
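A quick correctness check of the vectorized `triu`/outer-product form against the brute-force double loop (using a smaller `n` so the loop stays fast):

```python
import numpy as np

rng = np.random.RandomState(0)
n = 50
arr1, arr2 = rng.rand(n), rng.rand(n)

# Brute-force reference: sum over all pairs with j > i
brute = sum(arr1[i] * arr2[j] for i in range(n) for j in range(i + 1, n))
# Vectorized form: upper triangle (above the diagonal) of the outer product
vectorized = np.sum(np.triu(np.multiply.outer(arr1, arr2), 1))
print(np.allclose(brute, vectorized))  # True
```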
Efficient Double Sum of Products | 37,416,978 | 7 | 2016-05-24T14:48:51Z | 37,417,863 | 12 | 2016-05-24T15:26:41Z | [
"python",
"numpy"
] | Consider two `ndarrays` of length `n`, `arr1` and `arr2`. I'm computing the following sum of products, and doing it `num_runs` times to benchmark:
```
import numpy as np
import time
num_runs = 1000
n = 100
arr1 = np.random.rand(n)
arr2 = np.random.rand(n)
start_comp = time.clock()
for r in xrange(num_runs):
sum_prods = np.sum( [arr1[i]*arr2[j] for i in xrange(n)
for j in xrange(i+1, n)] )
print "total time for comprehension = ", time.clock() - start_comp
start_loop = time.clock()
for r in xrange(num_runs):
sum_prod = 0.0
for i in xrange(n):
for j in xrange(i+1, n):
sum_prod += arr1[i]*arr2[j]
print "total time for loop = ", time.clock() - start_loop
```
The output is
```
total time for comprehension = 3.23097066953
total time for loop = 3.9045544426
```
so using list comprehension appears faster.
Is there a much more efficient implementation, using Numpy routines perhaps, to calculate such a sum of products? | Rearrange the operation into an O(n) runtime algorithm instead of O(n^2), *and* take advantage of NumPy for the products and sums:
```
# arr1_weights[i] is the sum of all terms arr1[i] gets multiplied by in the
# original version
arr1_weights = arr2[::-1].cumsum()[::-1] - arr2
sum_prods = arr1.dot(arr1_weights)
```
Timing shows this to be about 200 times faster than the list comprehension for `n == 100`.
```
In [21]: %%timeit
....: np.sum([arr1[i] * arr2[j] for i in range(n) for j in range(i+1, n)])
....:
100 loops, best of 3: 5.13 ms per loop
In [22]: %%timeit
....: arr1_weights = arr2[::-1].cumsum()[::-1] - arr2
....: sum_prods = arr1.dot(arr1_weights)
....:
10000 loops, best of 3: 22.8 µs per loop
``` |
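The rearrangement relies on the identity that each `arr1[i]` is multiplied by the tail sum of `arr2`; a quick check of that identity against the brute-force sum:

```python
import numpy as np

rng = np.random.RandomState(1)
n = 100
arr1, arr2 = rng.rand(n), rng.rand(n)

# Brute-force reference: sum over all pairs with j > i
brute = sum(arr1[i] * arr2[j] for i in range(n) for j in range(i + 1, n))
# arr1_weights[i] == sum of arr2[j] for all j > i
arr1_weights = arr2[::-1].cumsum()[::-1] - arr2
print(np.allclose(brute, arr1.dot(arr1_weights)))  # True
```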
How many items has been scraped per start_url | 37,417,373 | 4 | 2016-05-24T15:05:19Z | 37,420,626 | 7 | 2016-05-24T17:54:18Z | [
"python",
"scrapy",
"scrapy-spider"
] | I use scrapy to crawl 1000 urls and store the scraped items in a mongodb. I'd like to know how many items have been found for each url. From the scrapy stats I can see `'item_scraped_count': 3500`
However, I need this count for each start\_url separately. There is also a `referer` field for each item that I might use to count each url's items manually:
```
2016-05-24 15:15:10 [scrapy] DEBUG: Crawled (200) <GET https://www.youtube.com/watch?v=6w-_ucPV674> (referer: https://www.youtube.com/results?q=billys&sp=EgQIAhAB)
```
But I wonder if there is a built-in support from scrapy. | challenge accepted!
there isn't anything in `scrapy` that directly supports this, but you can separate it from your spider code with a [`Spider Middleware`](http://doc.scrapy.org/en/latest/topics/spider-middleware.html):
**middlewares.py**
```
from scrapy.http.request import Request
class StartRequestsCountMiddleware(object):
start_urls = {}
def process_start_requests(self, start_requests, spider):
for i, request in enumerate(start_requests):
self.start_urls[i] = request.url
request.meta.update(start_request_index=i)
yield request
def process_spider_output(self, response, result, spider):
for output in result:
if isinstance(output, Request):
output.meta.update(
start_request_index=response.meta['start_request_index'],
)
else:
spider.crawler.stats.inc_value(
'start_requests/item_scraped_count/{}'.format(
self.start_urls[response.meta['start_request_index']],
),
)
yield output
```
Remember to activate it on `settings.py`:
```
SPIDER_MIDDLEWARES = {
...
'myproject.middlewares.StartRequestsCountMiddleware': 200,
}
```
Now you should be able to see something like this on your spider stats:
```
'start_requests/item_scraped_count/START_URL1': ITEMCOUNT1,
'start_requests/item_scraped_count/START_URL2': ITEMCOUNT2,
``` |
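The bookkeeping this middleware does can be modelled without Scrapy at all; this toy version (all names invented) shows how the per-start-URL counters accumulate from the `start_request_index` carried in each item's originating `request.meta`:

```python
from collections import Counter

start_urls = {0: "https://example.com/a", 1: "https://example.com/b"}
# start_request_index of each scraped item, as propagated through meta
scraped_indexes = [0, 0, 1, 0, 1]

stats = Counter(
    'start_requests/item_scraped_count/{}'.format(start_urls[i])
    for i in scraped_indexes
)
print(stats['start_requests/item_scraped_count/https://example.com/a'])  # 3
```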
Does PyPy work with asyncio? | 37,419,442 | 5 | 2016-05-24T16:43:45Z | 37,421,658 | 9 | 2016-05-24T18:51:57Z | [
"python",
"performance",
"asynchronous",
"python-asyncio",
"pypy"
] | Does PyPy support asyncio and Python 3.5?
I need the performance of `PyPy` and the asynchronous code of `asyncio`. Also, I need to use `async`/`await` in my code. Is that possible?
If so, what are the nuances? | Currently PyPy supports Python 3.3. This means that you can [install asyncio](https://pypi.python.org/pypi/asyncio) on PyPy3.3. Note that PyPy's 3.3 support is only alpha / beta quality at the moment. We are, however, actively working on increasing performance and compatibility with CPython.
The `async` / `await` feature was added in Python 3.5. We started a very experimental branch with Python 3.5 support, but it's still got a long way to go. Luckily we have a GSoC student working on it currently, but still it could take several years (depending on how many donations and how much volunteer work we receive).
EDIT 1: Previously there was a feature missing to run asyncio. It was implemented shortly before this edit. The answer was edited accordingly.
EDIT 2: We just released an alpha version of PyPy3.3. We don't recommend anyone to try the old PyPy3 release supporting only Python 3.2. This is why I rewrote most of the answer.
---
Old, now obsolete (as of 2016-05-30) notes:
The PyPy3 version from the website is very old and only implements Python 3.2 - we haven't done a release for over one and a half year. Because Python 3.2 is missing the `yield from` feature, asyncio won't work with this version. |
Comparing 2 dictionaries: same key, mismatching values | 37,425,592 | 4 | 2016-05-24T23:35:30Z | 37,425,724 | 9 | 2016-05-24T23:49:57Z | [
"python"
] | I am currently trying to compare 2 data sets:
```
dict1 = {'a':1, 'b':2, 'c':3}
dict2 = {'a':1, 'b':2, 'c':4}
```
In this case I want the output to be something like:
```
set1 = set([('c', 4), ('c',3)])
```
since their keys match but values do not.
I've tried a variation of comprehensions using the intersection and difference operators but I cannot get the desired output.
Any help is appreciated. | If you are using Python 2:
```
dict1.viewitems() ^ dict2.viewitems()
```
If you are using Python 3:
```
dict1.items() ^ dict2.items()
```
`viewitems` (Python 2) and `items` (Python 3) return a set-like object, on which we can use the caret operator (`^`) to calculate the symmetric difference. |
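Applied to the dictionaries from the question, this yields exactly the desired set:

```python
dict1 = {'a': 1, 'b': 2, 'c': 3}
dict2 = {'a': 1, 'b': 2, 'c': 4}

# Items present in one dict but not the other (key-value pairs compared)
diff = dict1.items() ^ dict2.items()  # Python 3
print(diff == {('c', 3), ('c', 4)})  # True
```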
Dummy variables when not all categories are present | 37,425,961 | 6 | 2016-05-25T00:22:39Z | 37,451,867 | 8 | 2016-05-26T04:53:38Z | [
"python",
"pandas",
"machine-learning",
"dummy-variable"
] | I have a set of dataframes where one of the columns contains a categorical variable. I'd like to convert it to several dummy variables, in which case I'd normally use `get_dummies`.
What happens is that `get_dummies` looks at the data available in each dataframe to find out how many categories there are, and thus creates the appropriate number of dummy variables. However, in the problem I'm working on right now, I actually know in advance what the possible categories are. But when looking at each dataframe individually, not all categories necessarily appear.
My question is: is there a way to pass to `get_dummies` (or an equivalent function) the names of the categories, so that, for the categories that don't appear in a given dataframe, it'd just create a column of 0s?
Something that would make this:
```
categories = ['a', 'b', 'c']
cat
1 a
2 b
3 a
```
Become this:
```
cat_a cat_b cat_c
1 1 0 0
2 0 1 0
3 1 0 0
``` | > is there a way to pass to get\_dummies (or an equivalent function) the names of the categories, so that, for the categories that don't appear in a given dataframe, it'd just create a column of 0s?
Yes, there is! Pandas has a special type of Series just for [categorical data](http://pandas.pydata.org/pandas-docs/stable/categorical.html). One of the attributes of this series is the possible categories, which `get_dummies` takes into account. Here's an example:
```
In [1]: import pandas as pd
In [2]: cat=pd.Series(list('aba'),index=range(1,4))
In [3]: cat=cat.astype('category',categories=list('abc'))
In [4]: cat
Out[4]:
1 a
2 b
3 a
dtype: category
Categories (3, object): [a, b, c]
```
Then, `get_dummies` will do exactly what you want!
```
In [5]: pd.get_dummies(cat)
Out[5]:
a b c
1 1 0 0
2 0 1 0
3 1 0 0
```
There are a bunch of other ways to create a categorical `Series` or `DataFrame`, this is just the one I find most convenient. You can read about all of them in [the pandas documentation](http://pandas.pydata.org/pandas-docs/stable/categorical.html).
**EDIT:**
I haven't followed the exact versioning, but there was a [bug](https://github.com/pydata/pandas/issues/10627) in how pandas treats sparse matrices, at least until version 0.17.0. It was corrected by version 0.18.1.
For version 0.17.0, if you try to do this with the `sparse=True` option with a `DataFrame`, the column of zeros for the missing dummy variable will be a column of `NaN`, and it will be converted to dense. |
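In later pandas releases the `astype('category', categories=...)` form shown above was removed; an equivalent construction that still works builds the series from `pd.Categorical` directly (a sketch, not verified against every pandas version):

```python
import pandas as pd

# Supply the full category list up front, including the unused 'c'
cat = pd.Series(pd.Categorical(list('aba'), categories=list('abc')),
                index=range(1, 4))
dummies = pd.get_dummies(cat)
print(list(dummies.columns))  # ['a', 'b', 'c']
```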
Scrapy: AttributeError: 'list' object has no attribute 'iteritems' | 37,442,907 | 5 | 2016-05-25T16:27:24Z | 37,469,195 | 8 | 2016-05-26T19:10:14Z | [
"python",
"scrapy-spider",
"six"
] | This is my first question on Stack Overflow. Recently I wanted to use [linked-in-scraper](https://github.com/junks/linkedInScraper), so I downloaded it, ran "scrapy crawl linkedin.com" and got the error message below. For your information, I use anaconda 2.3.0 and python 2.7.11. All the related packages, including scrapy and six, were updated by pip before executing the program.
```
Traceback (most recent call last):
File "/Users/byeongsuyu/anaconda/bin/scrapy", line 11, in <module>
sys.exit(execute())
File "/Users/byeongsuyu/anaconda/lib/python2.7/site-packages/scrapy/cmdline.py", line 108, in execute
settings = get_project_settings()
File "/Users/byeongsuyu/anaconda/lib/python2.7/site-packages/scrapy/utils/project.py", line 60, in get_project_settings
settings.setmodule(settings_module_path, priority='project')
File "/Users/byeongsuyu/anaconda/lib/python2.7/site-packages/scrapy/settings/__init__.py", line 285, in setmodule
self.set(key, getattr(module, key), priority)
File "/Users/byeongsuyu/anaconda/lib/python2.7/site-packages/scrapy/settings/__init__.py", line 260, in set
self.attributes[name].set(value, priority)
File "/Users/byeongsuyu/anaconda/lib/python2.7/site-packages/scrapy/settings/__init__.py", line 55, in set
value = BaseSettings(value, priority=priority)
File "/Users/byeongsuyu/anaconda/lib/python2.7/site-packages/scrapy/settings/__init__.py", line 91, in __init__
self.update(values, priority)
File "/Users/byeongsuyu/anaconda/lib/python2.7/site-packages/scrapy/settings/__init__.py", line 317, in update
for name, value in six.iteritems(values):
File "/Users/byeongsuyu/anaconda/lib/python2.7/site-packages/six.py", line 599, in iteritems
return d.iteritems(**kw)
AttributeError: 'list' object has no attribute 'iteritems'
```
I understand that this error stems from `d` being a list rather than a dictionary. And since the error comes from code in scrapy, maybe it is a problem in the scrapy or six package. How can I try to fix this error?
**EDIT:** This is code from scrapy.cfg
```
# Automatically created by: scrapy startproject
#
# For more information about the [deploy] section see:
# http://doc.scrapy.org/topics/scrapyd.html
[settings]
default = linkedIn.settings
[deploy]
#url = http://localhost:6800/
project = linkedIn
``` | This is caused by the linked-in scraper's [settings](https://github.com/junks/linkedInScraper/blob/master/linkedIn/linkedIn/settings.py#L23):
```
ITEM_PIPELINES = ['linkedIn.pipelines.LinkedinPipeline']
```
However, `ITEM_PIPELINES` is supposed to be a dict, [according to the doc](http://doc.scrapy.org/en/latest/topics/item-pipeline.html#activating-an-item-pipeline-component):
> To activate an Item Pipeline component you must add its class to the `ITEM_PIPELINES` setting, like in the following example:
>
> ```
> ITEM_PIPELINES = {
> 'myproject.pipelines.PricePipeline': 300,
> 'myproject.pipelines.JsonWriterPipeline': 800,
> }
> ```
>
> The integer values you assign to classes in this setting determine the order in which they run: items go through from lower valued to higher valued classes. It's customary to define these numbers in the 0-1000 range.
According to [this question](https://stackoverflow.com/q/20881431/539465), it used to be a list, which explains why this scraper uses a list.
So you will either have to ask the developer of the scraper to update their code, or set `ITEM_PIPELINES` yourself. |
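If you can't change the scraper itself, converting the legacy list form into the dict form scrapy now expects is mechanical; a sketch (order values are my own choice, in the customary 0-1000 range):

```python
# Legacy list-style setting, as found in the scraper's settings.py
old_pipelines = ['linkedIn.pipelines.LinkedinPipeline']

# Dict form: assign ascending order values to preserve the original order
new_pipelines = {path: (i + 1) * 100 for i, path in enumerate(old_pipelines)}
print(new_pipelines)  # {'linkedIn.pipelines.LinkedinPipeline': 100}
```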
Multiple exceptions and code coverage when unit testing python | 37,463,561 | 8 | 2016-05-26T14:21:08Z | 37,473,190 | 7 | 2016-05-27T00:53:05Z | [
"python",
"unit-testing",
"testing",
"code-coverage",
"coverage.py"
] | **The Problem:**
Here is an artificial example of the code under test:
```
from datetime import datetime
def f(s):
try:
date = s.split(":")[1]
return datetime.strptime(date, "%Y%m%d")
except (ValueError, IndexError) as e:
# some code here
raise
```
Here is a set of tests I currently have:
```
from datetime import datetime
import unittest
from test_module import f
class MyTestCase(unittest.TestCase):
def test_valid_date(self):
self.assertEqual(f("1:20130101"), datetime(2013, 1, 1))
def test_invalid_date(self):
self.assertRaises(ValueError, f, "1:invalid")
```
The test passes and, if I run the coverage with the `--branch` flag, I would get *100% line and branch coverage*:
```
$ coverage run --branch -m unittest test
..
----------------------------------------------------------------------
Ran 2 tests in 0.003s
OK
$ coverage report
Name Stmts Miss Branch BrPart Cover
--------------------------------------------
test_module.py 7 0 0 0 100%
--------------------------------------------
TOTAL 7 0 0 0 100%
```
However, note that the test currently examines only two cases - when there is no exception thrown, and there is a `ValueError` exception raised.
**The question:**
Is there a way for `coverage` to report that I have not tested a case when `IndexError` is raised? | Coverage.py can only measure which execution paths (statements or branches) were run. It has no means of tracking what values were used, including what exception types were raised.
As I see it, your options are:
1. Separate the exception clauses. In the code you've shown, the two exceptions could be raised by separate lines anyway, though perhaps in your real code they are not so separable.
2. Don't worry about the two exceptions. Your tests for this code will likely consider a number of different inputs, designed to exercise different edge cases. Coverage.py can't help you with distinguishing among them all, or ensuring that you've written enough cases. Use other criteria to decide you've written enough test cases. |
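Option 1 might look like this (a sketch based on the question's `f`, with the handlers split so each exception type gets its own line for coverage to track):

```python
from datetime import datetime

def f(s):
    try:
        date = s.split(":")[1]
        return datetime.strptime(date, "%Y%m%d")
    except ValueError:
        # handler for bad date strings -- its own coverable line
        raise
    except IndexError:
        # handler for inputs without a ':' -- its own coverable line
        raise

print(f("1:20130101"))  # 2013-01-01 00:00:00
```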
Python find index of all array elements in another array | 37,467,515 | 2 | 2016-05-26T17:30:49Z | 37,467,583 | 9 | 2016-05-26T17:34:31Z | [
"python",
"numpy"
] | I am trying to do the following:
```
import numpy as np
A = np.array([1,5,2,7,1])
B = np.sort(A)
print B
>>> [1,1,2,5,7]
```
I want to find the location of all elements in B as in original array A. i.e. I want to create an array C such that
```
print C
>>[0,4,2,1,3]
```
which refers to 1 in B being present in A at 0 and 4th location, 5 in B was present in A at 1st location, etc.
I tried using `np.where( B == A)` but it produces gibberish | ```
import numpy as np
A = np.array([1,5,2,7,1])
print np.argsort(A) #prints [0 4 2 1 3]
``` |
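To confirm that the indices map back to the sorted order from the question (using a stable sort so the tied `1`s keep their original order):

```python
import numpy as np

A = np.array([1, 5, 2, 7, 1])
C = np.argsort(A, kind='stable')  # stable: tied values keep original order
print(list(C))      # [0, 4, 2, 1, 3]
print(list(A[C]))   # [1, 1, 2, 5, 7] -- this is B from the question
```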
How to make statement `import datetime` bind the datetime type instead of the datetime module? | 37,491,110 | 4 | 2016-05-27T19:40:21Z | 37,491,209 | 7 | 2016-05-27T19:47:23Z | [
"python",
"python-import",
"python-module"
] | After one too many times having accidentally typed `import datetime` when what was really needed was `from datetime import datetime`, I wondered whether it was possible to hack around and make the former do the latter.
That is, to recreate this behaviour (in a freshly opened interpreter session):
```
$ python -ic ''
>>> import datetime
>>> datetime(2016, 5, 27)
datetime.datetime(2016, 5, 27, 0, 0)
```
Came pretty close to faking a "callable module" below:
```
>>> import dt
>>> dt(2016, 5, 27)
datetime.datetime(2016, 5, 27, 0, 0)
```
Which was implemented like this:
```
# dt.py
import sys
import datetime
class CallableModule(object):
def __init__(self, thing):
self.thing = thing
def __call__(self, *args, **kwargs):
return self.thing.__call__(*args, **kwargs)
sys.modules['dt'] = CallableModule(datetime.datetime)
```
However it doesn't work if I try to use the filename `datetime.py` for the module, and I could not yet find any hack to get at the built-in datetime module when my own file was also called `datetime.py`.
How can we *temporarily unhide* a built-in or site-packages module, from within the shadowing module itself? Is there any indirect way to get at the core `datetime` under this situation (perhaps similar to how we can still access `sys.__stdout__` even when `sys.stdout` has been redirected)?
**Disclaimer**: In no way suggesting that this is a sane idea - just interested if it's *possible*. | Here we go:
`datetime.py`:
```
import sys
import imp
import os
path = [p for p in sys.path if p != os.path.dirname(__file__)]
f, pathname, desc = imp.find_module('datetime', path)
std_datetime = imp.load_module('datetime', f, pathname, desc)
# if this^ is renamed to datetime, everything breaks!
f.close()
class CallableModule(object):
def __init__(self, module, fn):
self.module = module
self.fn = fn
def __call__(self, *args, **kwargs):
return self.fn(*args, **kwargs)
def __getattr__(self, item):
try:
return getattr(self.fn, item)
except AttributeError:
return getattr(self.module, item)
sys.modules['datetime'] = CallableModule(std_datetime, std_datetime.datetime)
```
`test.py` (lives next to `datetime.py`):
```
import datetime
print(datetime(1, 2, 3))
print(datetime.timedelta(days=1))
print(datetime.now())
```
Run `test.py`, output:
```
0001-02-03 00:00:00
1 day, 0:00:00
2016-05-27 23:16:30.954270
```
It also works with `from datetime import datetime, timedelta` etc.
This is especially hacky and fragile and will depend on your distribution. For example, apparently it doesn't work with IPython. You must import `datetime.py` before the standard library module.
To understand just how weird things get with this, if the variable `std_datetime` (the datetime module object) is renamed to `datetime`, then `datetime.datetime` is no longer the class, but rather `datetime is datetime.datetime is datetime.datetime.datetime ...`. If someone can explain why this happens, I'd love to hear it.
(note that the first comment below is from before I got to the final version) |
Why is dict definition faster in Python 2.7 than in Python 3.x? | 37,502,392 | 24 | 2016-05-28T18:17:18Z | 37,502,529 | 31 | 2016-05-28T18:33:09Z | [
"python",
"python-2.7",
"python-3.x",
"dictionary",
"python-internals"
] | I have encountered a (not very unusual) situation in which I had to either use a `map()` or a list comprehension expression. And then I wondered which one is faster.
[This](http://stackoverflow.com/a/1247490/3120525) StackOverflow answer provided me with the solution, but then I started to test it myself. Basically the results were the same, but I found an unexpected behavior when switching to Python 3 that I got curious about, namely:
```
λ iulian-pc ~ → python --version
Python 2.7.6
λ iulian-pc ~ → python3 --version
Python 3.4.3
λ iulian-pc ~ → python -mtimeit '{}'
10000000 loops, best of 3: 0.0306 usec per loop
λ iulian-pc ~ → python3 -mtimeit '{}'
10000000 loops, best of 3: 0.105 usec per loop
λ iulian-pc ~ → python -mtimeit 'dict()'
10000000 loops, best of 3: 0.103 usec per loop
λ iulian-pc ~ → python3 -mtimeit 'dict()'
10000000 loops, best of 3: 0.165 usec per loop
```
I had the assumption that Python 3 is faster than Python 2, but it turned out in several posts ([1](https://www.reddit.com/r/Python/comments/272bao/python_34_slow_compared_to_27_whats_your_mileage/), [2](https://www.quora.com/How-fast-is-Python-3-x-compared-to-2-7)) that it's not the case. Then I thought that maybe Python 3.5 will perform better at such a simple task, as they state in their `README`:
> The language is mostly the same, but many details, especially how
> built-in objects like dictionaries and strings work, have changed
> considerably, and a lot of deprecated features have finally been
> removed.
But nope, it performed even worse:
```
λ iulian-pc ~ → python3 --version
Python 3.5.0
λ iulian-pc ~ → python3 -mtimeit '{}'
10000000 loops, best of 3: 0.144 usec per loop
λ iulian-pc ~ → python3 -mtimeit 'dict()'
1000000 loops, best of 3: 0.217 usec per loop
```
I've tried to dive into the Python 3.5 source code for `dict`, but my knowledge of C language is not sufficient to find the answer myself (or, maybe I even don't search in the right place).
## So, my question is:
What makes the newer version of Python slower compared to an older version of Python on a relatively simple task such as a `dict` definition, when by common sense it should be vice versa? I'm aware that these differences are so small that in most cases they can be neglected. It was just an observation that made me curious about why the time increased rather than at least staying the same. | **Because nobody cares**
The differences you are citing are on the order of tens or hundreds of nanoseconds. A slight difference in how the C compiler optimizes register use could easily cause such changes (as could any number of other C-level optimization differences). That, in turn, could be caused by any number of things, such as changes in the number and usage of local variables in the C implementation of Python (CPython), or even just switching C compilers.
The fact is, nobody is actively optimizing for these small differences, so nobody is going to be able to give you a specific answer. CPython is not designed to be fast in an absolute sense. It is designed to be *scalable*. So, for example, you can shove hundreds or thousands of items into a dictionary and it will continue to perform well. But the absolute speed of creating a dictionary simply isn't a primary concern of the Python implementors, at least when the differences are this small. |
Why is dict definition faster in Python 2.7 than in Python 3.x? | 37,502,392 | 24 | 2016-05-28T18:17:18Z | 37,502,747 | 20 | 2016-05-28T18:55:41Z | [
"python",
"python-2.7",
"python-3.x",
"dictionary",
"python-internals"
] | I have encountered a (not very unusual) situation in which I had to either use a `map()` or a list comprehension expression. And then I wondered which one is faster.
[This](http://stackoverflow.com/a/1247490/3120525) StackOverflow answer provided me with the solution, but then I started to test it myself. Basically the results were the same, but I found an unexpected behavior when switching to Python 3 that I got curious about, namely:
```
λ iulian-pc ~ → python --version
Python 2.7.6
λ iulian-pc ~ → python3 --version
Python 3.4.3
λ iulian-pc ~ → python -mtimeit '{}'
10000000 loops, best of 3: 0.0306 usec per loop
λ iulian-pc ~ → python3 -mtimeit '{}'
10000000 loops, best of 3: 0.105 usec per loop
λ iulian-pc ~ → python -mtimeit 'dict()'
10000000 loops, best of 3: 0.103 usec per loop
λ iulian-pc ~ → python3 -mtimeit 'dict()'
10000000 loops, best of 3: 0.165 usec per loop
```
I had the assumption that Python 3 is faster than Python 2, but it turned out in several posts ([1](https://www.reddit.com/r/Python/comments/272bao/python_34_slow_compared_to_27_whats_your_mileage/), [2](https://www.quora.com/How-fast-is-Python-3-x-compared-to-2-7)) that it's not the case. Then I thought that maybe Python 3.5 will perform better at such a simple task, as they state in their `README`:
> The language is mostly the same, but many details, especially how
> built-in objects like dictionaries and strings work, have changed
> considerably, and a lot of deprecated features have finally been
> removed.
But nope, it performed even worse:
```
λ iulian-pc ~ → python3 --version
Python 3.5.0
λ iulian-pc ~ → python3 -mtimeit '{}'
10000000 loops, best of 3: 0.144 usec per loop
λ iulian-pc ~ → python3 -mtimeit 'dict()'
1000000 loops, best of 3: 0.217 usec per loop
```
I've tried to dive into the Python 3.5 source code for `dict`, but my knowledge of C language is not sufficient to find the answer myself (or, maybe I even don't search in the right place).
## So, my question is:
What makes the newer version of Python slower compared to an older version of Python on a relatively simple task such as a `dict` definition, when by common sense it should be vice versa? I'm aware that these differences are so small that in most cases they can be neglected. It was just an observation that made me curious about why the time increased rather than at least staying the same. | As @Kevin already stated:
> CPython is not designed to be fast in an absolute sense. It is
> designed to be scalable
Try this instead:
```
$ python -mtimeit "dict([(2,3)]*10000000)"
10 loops, best of 3: 512 msec per loop
$
$ python3 -mtimeit "dict([(2,3)]*10000000)"
10 loops, best of 3: 502 msec per loop
```
And again:
```
$ python -mtimeit "dict([(2,3)]*100000000)"
10 loops, best of 3: 5.19 sec per loop
$
$ python3 -mtimeit "dict([(2,3)]*100000000)"
10 loops, best of 3: 5.07 sec per loop
```
That pretty much shows that you can't claim Python 3 loses against Python 2 on the basis of such an insignificant difference. From the look of things, Python 3 should scale better.
Why is dict definition faster in Python 2.7 than in Python 3.x? | 37,502,392 | 24 | 2016-05-28T18:17:18Z | 37,503,256 | 17 | 2016-05-28T19:56:10Z | [
"python",
"python-2.7",
"python-3.x",
"dictionary",
"python-internals"
] | I have encountered a (not very unusual) situation in which I had to either use a `map()` or a list comprehension expression. And then I wondered which one is faster.
[This](http://stackoverflow.com/a/1247490/3120525) StackOverflow answer provided me with the solution, but then I started to test it myself. Basically the results were the same, but I found an unexpected behavior when switching to Python 3 that I got curious about, namely:
```
λ iulian-pc ~ → python --version
Python 2.7.6
λ iulian-pc ~ → python3 --version
Python 3.4.3
λ iulian-pc ~ → python -mtimeit '{}'
10000000 loops, best of 3: 0.0306 usec per loop
λ iulian-pc ~ → python3 -mtimeit '{}'
10000000 loops, best of 3: 0.105 usec per loop
λ iulian-pc ~ → python -mtimeit 'dict()'
10000000 loops, best of 3: 0.103 usec per loop
λ iulian-pc ~ → python3 -mtimeit 'dict()'
10000000 loops, best of 3: 0.165 usec per loop
```
I had the assumption that Python 3 is faster than Python 2, but it turned out in several posts ([1](https://www.reddit.com/r/Python/comments/272bao/python_34_slow_compared_to_27_whats_your_mileage/), [2](https://www.quora.com/How-fast-is-Python-3-x-compared-to-2-7)) that it's not the case. Then I thought that maybe Python 3.5 will perform better at such a simple task, as they state in their `README`:
> The language is mostly the same, but many details, especially how
> built-in objects like dictionaries and strings work, have changed
> considerably, and a lot of deprecated features have finally been
> removed.
But nope, it performed even worse:
```
λ iulian-pc ~ → python3 --version
Python 3.5.0
λ iulian-pc ~ → python3 -mtimeit '{}'
10000000 loops, best of 3: 0.144 usec per loop
λ iulian-pc ~ → python3 -mtimeit 'dict()'
1000000 loops, best of 3: 0.217 usec per loop
```
I've tried to dive into the Python 3.5 source code for `dict`, but my knowledge of C language is not sufficient to find the answer myself (or, maybe I even don't search in the right place).
## So, my question is:
What makes the newer version of Python slower compared to an older version of Python on a relatively simple task such as a `dict` definition, when by common sense it should be vice versa? I'm aware that these differences are so small that in most cases they can be neglected. It was just an observation that made me curious about why the time increased rather than at least staying the same. | Let's [disassemble](https://docs.python.org/3/library/dis.html#dis.dis) `{}`:
```
>>> from dis import dis
>>> dis(lambda: {})
1 0 BUILD_MAP 0
3 RETURN_VALUE
```
[Python 2.7 implementation of BUILD\_MAP](https://hg.python.org/cpython/file/2.7/Python/ceval.c#l2498)
```
TARGET(BUILD_MAP)
{
x = _PyDict_NewPresized((Py_ssize_t)oparg);
PUSH(x);
if (x != NULL) DISPATCH();
break;
}
```
[Python 3.5 implementation of BUILD\_MAP](https://hg.python.org/cpython/file/3.5/Python/ceval.c#l2582)
```
TARGET(BUILD_MAP) {
int i;
PyObject *map = _PyDict_NewPresized((Py_ssize_t)oparg);
if (map == NULL)
goto error;
for (i = oparg; i > 0; i--) {
int err;
PyObject *key = PEEK(2*i);
PyObject *value = PEEK(2*i - 1);
err = PyDict_SetItem(map, key, value);
if (err != 0) {
Py_DECREF(map);
goto error;
}
}
while (oparg--) {
Py_DECREF(POP());
Py_DECREF(POP());
}
PUSH(map);
DISPATCH();
}
```
It's a little bit more code.
# EDIT:
The Python 3.4 implementation of BUILD\_MAP is exactly the same as in 2.7 (thanks @user2357112). I dug deeper, and it looks like the minimum size of a dict in Python 3 is 8, per the [PyDict\_MINSIZE\_COMBINED constant](https://hg.python.org/cpython/file/3.4/Objects/dictobject.c#l60):
> PyDict\_MINSIZE\_COMBINED is the starting size for any new, non-split dict. 8 allows dicts with no more than 5 active entries; experiments suggested this suffices for the majority of dicts (consisting mostly of usually-small dicts created to pass keyword arguments). Making this 8, rather than 4 reduces the number of resizes for most dictionaries, without any significant extra memory use.
Look at [\_PyDict\_NewPresized in Python 3.4](https://hg.python.org/cpython/file/3.4/Objects/dictobject.c#l1031)
```
PyObject *
_PyDict_NewPresized(Py_ssize_t minused)
{
Py_ssize_t newsize;
PyDictKeysObject *new_keys;
for (newsize = PyDict_MINSIZE_COMBINED;
newsize <= minused && newsize > 0;
newsize <<= 1)
;
new_keys = new_keys_object(newsize);
if (new_keys == NULL)
return NULL;
return new_dict(new_keys, NULL);
}
```
and in [2.7](https://hg.python.org/cpython/file/2.7/Objects/dictobject.c#l685)
```
PyObject *
_PyDict_NewPresized(Py_ssize_t minused)
{
PyObject *op = PyDict_New();
if (minused>5 && op != NULL && dictresize((PyDictObject *)op, minused) == -1) {
Py_DECREF(op);
return NULL;
}
return op;
}
```
In both cases `minused` has value 1.
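To see the effect of that sizing loop, here is a small Python sketch of the 3.4 logic (a hedged re-implementation, assuming `PyDict_MINSIZE_COMBINED` is 8 as in the quoted C source):

```
# Python sketch of _PyDict_NewPresized's sizing loop in 3.4
# (assumption: PyDict_MINSIZE_COMBINED == 8, per the quoted source)
PYDICT_MINSIZE_COMBINED = 8

def presized_slots(minused):
    newsize = PYDICT_MINSIZE_COMBINED
    while newsize <= minused and newsize > 0:
        newsize <<= 1  # double until the requested size fits
    return newsize

print(presized_slots(1))   # 8  -- even an empty {} literal allocates 8 slots
print(presized_slots(10))  # 16
```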
Python 2.7 creates an empty dict, while Python 3.4 creates a presized dict with 8 slots. |
How can I start and stop a Python script from shell | 37,511,265 | 2 | 2016-05-29T14:47:05Z | 37,511,354 | 8 | 2016-05-29T14:54:59Z | [
"python",
"shell"
] | thanks for helping!
I want to start and stop a Python script from a shell script. The start works fine, but I want to stop / terminate the Python script after 10 seconds (it's a counter that keeps counting), but it won't stop. I think it is hanging on the first line.
What is the right way to start, wait for 10 seconds, and stop?
Shell script:
```
python /home/pi/count1.py
sleep 10
kill /home/pi/count1.py
```
It's not working yet. I get the point of running the script in the background. That's working! But I get another error from my Raspberry Pi after doing:
```
python /home/pi/count1.py &
sleep 10; kill /home/pi/count1.py
```
/home/pi/sebastiaan.sh: line 19: kill: /home/pi/count1.py: arguments must be process or job IDs
It's got to be in this part (but what exactly? Thanks for helping out!):
```
sleep 10; kill /home/pi/count1.py
``` | You're right, the shell script "hangs" on the first line until the python script finishes. If it doesn't, the shell script won't continue. Therefore you have to use `&` at the end of the shell command to run it in the background. This way, the python script starts and the shell script continues.
The `kill` command doesn't take a path, it takes a process id. After all, you might run the same program several times, and then try to kill the first, or last one.
The bash shell supports the `$!` variable, which is the pid of the last background process.
Your current example script is wrong, because it doesn't run the python job and the sleep job in parallel. Without adornment, the script will wait for the python job to finish, then sleep 10 seconds, then kill.
What you probably want is something like:
```
python myscript.py & # <-- Note '&' to run in background
LASTPID=$! # Save $! in case you do other background-y stuff
sleep 10; kill $LASTPID # Sleep then kill to set timeout.
``` |
how to make arrow that loops in matplotlib? | 37,512,502 | 9 | 2016-05-29T16:50:23Z | 38,224,201 | 7 | 2016-07-06T12:37:08Z | [
"python",
"matplotlib"
] | What is the right way to draw an arrow that loops back to point to its origin in matplotlib? I tried:
```
plt.figure()
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.annotate("", xy=(0.6, 0.9),
xycoords="figure fraction",
xytext = (0.6, 0.8),
textcoords="figure fraction",
fontsize = 10, \
color = "k",
arrowprops=dict(edgecolor='black',
connectionstyle="angle,angleA=-180,angleB=45",
arrowstyle = '<|-',
facecolor="k",
linewidth=1,
shrinkA = 0,
shrinkB = 0))
plt.show()
```
This doesn't give the right result:
[](http://i.stack.imgur.com/DfHPl.png)
The `connectionstyle` arguments are hard to follow from this page (<http://matplotlib.org/users/annotations_guide.html>).
What I'm looking for is something like [this](http://i.istockimg.com/file_thumbview_approve/25282609/6/stock-illustration-25282609-arrow-sign-refresh-reload-loop-rotation-pictogram-black-icon.jpg) or [this](https://cdn0.iconfinder.com/data/icons/arrows-volume-5/48/280-512.png):
[](http://i.stack.imgur.com/nVPdN.png)
**Update:** the linked answer does not show how to do this with `plt.annotate`, which has other features I want to use. The proposal to use a `$\circlearrowleft$` marker is not a real solution. | I found no way to make a loop using `plt.annotate` only once, but using it four times works:
```
import matplotlib.pyplot as plt
fig,ax = plt.subplots()
# coordinates of the center of the loop
x_center = 0.5
y_center = 0.5
radius = 0.2
# linewidth of the arrow
linewidth = 1
ax.annotate("", (x_center + radius, y_center), (x_center, y_center + radius),
arrowprops=dict(arrowstyle="-",
shrinkA=10, # creates a gap between the start point and end point of the arrow
shrinkB=0,
linewidth=linewidth,
connectionstyle="angle,angleB=-90,angleA=180,rad=10"))
ax.annotate("", (x_center, y_center - radius), (x_center + radius, y_center),
arrowprops=dict(arrowstyle="-",
shrinkA=0,
shrinkB=0,
linewidth=linewidth,
connectionstyle="angle,angleB=180,angleA=-90,rad=10"))
ax.annotate("", (x_center - radius, y_center), (x_center, y_center - radius),
arrowprops=dict(arrowstyle="-",
shrinkA=0,
shrinkB=0,
linewidth=linewidth,
connectionstyle="angle,angleB=-90,angleA=180,rad=10"))
ax.annotate("", (x_center, y_center + radius), (x_center - radius, y_center),
arrowprops=dict(arrowstyle="-|>",
facecolor="k",
linewidth=linewidth,
shrinkA=0,
shrinkB=0,
connectionstyle="angle,angleB=180,angleA=-90,rad=10"))
plt.show()
```
[](http://i.stack.imgur.com/ZKi97.png) |
Python static typing does not work | 37,526,876 | 2 | 2016-05-30T13:26:10Z | 37,526,963 | 9 | 2016-05-30T13:30:47Z | [
"python",
"python-3.x",
"static-typing"
] | I found out that Python intends to support static typing, but it is still in a beta phase.
I tried the following code with python 3.4.3:
```
def somme(a: int, b: int) -> int:
return a + b
```
The syntax is supported, but I did not get the expected result. If I type `somme('1', '3')` I get `'13'`, while I should get a `TypeError` exception saying an `int` variable was expected.
Does anyone have an idea why it's not working? | The function annotations there are *annotations*, nothing more. They're documentation about intended usage. Like it says in [PEP 3107](https://www.python.org/dev/peps/pep-3107/), the mechanism provides a single, standard way to specify function parameters and return values, replacing a variety of ad-hoc tools and libraries.
But as it goes on to say:
> Function annotations are nothing more than a way of associating
> arbitrary Python expressions with various parts of a function at
> compile-time.
>
> By itself, Python does not attach any particular meaning or
> significance to annotations. Left to its own, Python simply makes
> these expressions available as described in Accessing Function
> Annotations below.
[PEP 484](https://www.python.org/dev/peps/pep-0484/) adds more conventions and tools for using these annotations to mark types, adopted in Python 3.5. But still there is no runtime checking in the language itself: it "includes support for off-line type checkers such as mypy".
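For illustration, the annotations on the question's `somme` are merely stored on the function object; nothing inspects them at call time:

```
def somme(a: int, b: int) -> int:
    return a + b

# the annotations are just data on the function object
print(somme.__annotations__)
# {'a': <class 'int'>, 'b': <class 'int'>, 'return': <class 'int'>}
print(somme('1', '3'))  # '13' -- the strings are concatenated, unchecked
```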
That is, after using these annotations, you can run third-party type checkers to check if your code adheres to the annotated types. Building these annotations into the language should make it easier for various IDEs to provide this service. |
What is the advantage of using a lambda:None function? | 37,532,208 | 15 | 2016-05-30T19:02:38Z | 37,532,492 | 11 | 2016-05-30T19:27:35Z | [
"python",
"lambda",
"namespaces",
"names"
] | I saw the following [code](https://github.com/sunqm/pyscf/blob/master/mcscf/mc1step_uhf.py#L531):
```
eris = lambda:None
eris.jkcpp = np.einsum('iipq->ipq', eriaa[:ncore[0],:ncore[0],:,:])
eris.jc_PP = np.einsum('iipq->pq', eriab[:ncore[0],:ncore[0],:,:])
```
Can we define arbitrary attributes for a function defined by `lambda:None`? | This looks like a trick to create a simple object to hold values in one line. Most built-in objects don't allow you to set arbitrary attributes on them:
```
>>> object().x = 0
Traceback (most recent call last):
File "<input>", line 1, in <module>
AttributeError: 'object' object has no attribute 'x'
>>> ''.x = 0
Traceback (most recent call last):
File "<input>", line 1, in <module>
AttributeError: 'str' object has no attribute 'x'
>>> [].x = 0
Traceback (most recent call last):
File "<input>", line 1, in <module>
AttributeError: 'list' object has no attribute 'x'
```
If you make your own class, then you can add whatever attributes you want. In this case you could make a class whose `__init__` method assigns the attributes, but this may not be worth the boilerplate. So you can just make an empty class:
```
>>> class Data(object): pass
>>> d = Data()
>>> d.x = 0
>>> d.x
0
```
Apparently the programmer is either not aware of this or doesn't want that extra line where the class is declared and has come up with their own workaround for storing data. It turns out functions, despite being a built-in type, *do* allow you to add attributes to them:
```
>>> def foo(): pass
>>> foo.x = 0
>>> foo.x
0
```
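For completeness, the standard library (Python 3.3+) also provides a purpose-built one-liner container for this, `types.SimpleNamespace`:

```
from types import SimpleNamespace

d = SimpleNamespace()
d.x = 0
print(d.x)  # 0
```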
Both the above and a lambda let you create such a container in a single statement. I actually think that's a neat idea. |
Why is statistics.mean() so slow? | 37,533,666 | 42 | 2016-05-30T21:05:05Z | 37,533,799 | 65 | 2016-05-30T21:22:30Z | [
"python",
"performance",
"mean"
] | I compared the performance of the `mean` function of the `statistics` module with the simple `sum(l)/len(l)` method and found the `mean` function to be very slow for some reason. I used `timeit` with the two code snippets below to compare them, does anyone know what causes the massive difference in execution speed? I'm using Python 3.5.
```
from timeit import repeat
print(min(repeat('mean(l)',
'''from random import randint; from statistics import mean; \
l=[randint(0, 10000) for i in range(10000)]''', repeat=20, number=10)))
```
The code above executes in about 0.043 seconds on my machine.
```
from timeit import repeat
print(min(repeat('sum(l)/len(l)',
'''from random import randint; from statistics import mean; \
l=[randint(0, 10000) for i in range(10000)]''', repeat=20, number=10)))
```
The code above executes in about 0.000565 seconds on my machine. | Python's `statistics` module is not built for speed, but for precision
In [the specs for this module](https://www.python.org/dev/peps/pep-0450/), it appears that
> The built-in sum can lose accuracy when dealing with floats of wildly
> differing magnitude. Consequently, the above naive mean fails this
> "torture test"
>
> `assert mean([1e30, 1, 3, -1e30]) == 1`
>
> returning 0 instead of 1, a purely computational error of 100%.
>
> Using math.fsum inside mean will make it more accurate with float
> data, but it also has the side-effect of converting any arguments to
> float even when unnecessary. E.g. we should expect the mean of a list
> of Fractions to be a Fraction, not a float.
Conversely, if we take a look at the implementation of `_sum()` in this module, the first lines of the method's docstring [seem to confirm that](https://hg.python.org/cpython/file/3.5/Lib/statistics.py#l119):
```
def _sum(data, start=0):
"""_sum(data [, start]) -> (type, sum, count)
Return a high-precision sum of the given numeric data as a fraction,
together with the type to be converted to and the count of items.
[...] """
```
So yeah, `statistics` implementation of `sum`, instead of being a simple one-liner call to Python's built-in `sum()` function, takes about 20 lines by itself with a nested `for` loop in its body.
This happens because `statistics._sum` chooses to guarantee the maximum precision for all types of number it could encounter (even if they widely differ from one another), instead of simply emphasizing speed.
Hence, it appears normal that the built-in `sum` proves a hundred times faster. The cost is a much lower precision if you happen to call it with exotic numbers.
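The precision difference is easy to demonstrate with the "torture test" quoted above:

```
from statistics import mean

data = [1e30, 1, 3, -1e30]
print(sum(data) / len(data))  # 0.0 -- the naive float sum loses the 1 and 3
print(mean(data))             # 1   -- statistics keeps exact fractions internally
```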
**Other options**
If you need to prioritize speed in your algorithms, you should have a look at [Numpy](http://www.numpy.org/) or [Pandas](http://pandas.pydata.org/) (built on top of Numpy) instead, whose algorithms are implemented in C.
These modules offer both high-speed and high-precision computations. |
addCleanUp vs tearDown | 37,534,021 | 14 | 2016-05-30T21:50:18Z | 37,534,051 | 15 | 2016-05-30T21:53:21Z | [
"python",
"unit-testing",
"python-unittest"
] | Recently, Ned Batchelder during [his talk at PyCon 2016](http://nedbatchelder.com/text/machete.html) noted:
> If you are using `unittest` to write your tests, definitely use
> `addCleanup`, it's much better than `tearDown`.
Up until now, I've never used `addCleanup()` and got used to the `setUp()`/`tearDown()` pair of methods for the test "set up" and "tear down" phases.
Why should I switch to `addCleanup()` instead of `tearDown()`?
---
It was also recently discussed in the [Python unittest with Robert Collins](http://pythontesting.net/podcast/19-python-unittest-robert-collins/) podcast. | Per the [`addCleanup` doc string](https://docs.python.org/3.5/library/unittest.html#unittest.TestCase.addCleanup):
> Cleanup items are called even if setUp fails (unlike tearDown)
`addCleanup` can be used to register multiple functions, so you could use
separate functions for each resource you wish to clean up. That would allow your
code to be a bit more reusable/modular. |
How to map a function to every item in every sublist of a list | 37,537,609 | 3 | 2016-05-31T05:39:53Z | 37,537,652 | 11 | 2016-05-31T05:43:51Z | [
"python",
"list",
"python-2.7"
] | Is there a way to do this without using a regular for loop to iterate through the main list?
```
>>> map(lambda x: x*2, [[1,2,3],[4,5,6]])
[[1, 2, 3, 1, 2, 3], [4, 5, 6, 4, 5, 6]]
# want [[2,4,6],[8,10,12]]
``` | You have nested lists, and `x` represents just one of the lists. To process that, you need to actually map the multiplication function onto the individual elements of `x`, like this:
```
>>> map(lambda x: map(lambda y: y * 2, x), [[1, 2, 3], [4, 5, 6]])
[[2, 4, 6], [8, 10, 12]]
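>>> # (hedged Python 3 aside: there map() returns a lazy iterator, so the
>>> # inner map needs wrapping in list() to materialize each sublist)
>>> [list(map(lambda y: y * 2, x)) for x in [[1, 2, 3], [4, 5, 6]]]
[[2, 4, 6], [8, 10, 12]]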
```
But I would prefer list comprehension over this,
```
>>> [[y * 2 for y in x] for x in [[1, 2, 3], [4, 5, 6]]]
[[2, 4, 6], [8, 10, 12]]
``` |
Pandas: sum all rows | 37,558,572 | 2 | 2016-06-01T02:45:11Z | 37,558,643 | 7 | 2016-06-01T02:54:39Z | [
"python",
"pandas",
"dataframe"
] | I have a `DataFrame` that looks like this:
```
score num_participants
0 20
1 15
2 5
3 10
4 12
5 15
```
I need to find the number of participants with `score` that is greater than or equal to the `score` in the current row:
```
score num_participants num_participants_with_score_greater_or_equal
0 20 77
1 15 57
2 5 42
3 10 37
4 12 27
5 15 15
```
So, I am trying to sum current row and all rows below it. The data has around 5000 rows, so I can't manually set it by indexing. `cumsum` doesn't do the trick and I am not sure if there is a simple way to do this. I have spent quite some time trying to solve this, so any help would be appreciated. | This is a reverse `cumsum`. Reverse the list, `cumsum`, then reverse back.
```
df.iloc[::-1].cumsum().iloc[::-1]
score num_participants
0 15 77
1 15 57
2 14 42
3 12 37
4 9 27
5 5 15
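# (hedged aside: the same pattern works on a single column as a Series,
# leaving the rest of the frame untouched)
df['num_participants'].iloc[::-1].cumsum().iloc[::-1]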
``` |
What do all the distributions available in scipy.stats look like? | 37,559,470 | 4 | 2016-06-01T04:31:48Z | 37,559,471 | 15 | 2016-06-01T04:31:48Z | [
"python",
"matplotlib",
"scipy",
"statistics",
"distribution"
] | # Visualizing [`scipy.stats`](http://docs.scipy.org/doc/scipy/reference/stats.html) distributions
A histogram can be made of [the `scipy.stats` normal random variable](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm) to see what the distribution looks like.
```
% matplotlib inline
import pandas as pd
import scipy.stats as stats
d = stats.norm()
rv = d.rvs(100000)
pd.Series(rv).hist(bins=32, normed=True)
```
[](http://i.stack.imgur.com/tVSpl.png)
**What do the other distributions look like?** | # Visualizing all [`scipy.stats` distributions](http://docs.scipy.org/doc/scipy/reference/stats.html)
Based on the [list of `scipy.stats` distributions](http://docs.scipy.org/doc/scipy/reference/stats.html), plotted below are the [histogram](https://en.wikipedia.org/wiki/Histogram)s and [PDF](https://en.wikipedia.org/wiki/Probability_density_function)s of each [continuous random variable](https://en.wikipedia.org/wiki/Random_variable). The code used to generate each distribution is [at the bottom](http://stackoverflow.com/questions/37559470/what-do-all-the-distributions-available-in-scipy-stats-look-like#new-answer). Note: The shape constants were taken from the examples on the scipy.stats distribution documentation pages.
### [`alpha(a=3.57, loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.alpha.html#scipy.stats.alpha)

### [`anglit(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.anglit.html#scipy.stats.anglit)

### [`arcsine(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.arcsine.html#scipy.stats.arcsine)

### [`beta(a=2.31, loc=0.00, scale=1.00, b=0.63)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.beta.html#scipy.stats.beta)

### [`betaprime(a=5.00, loc=0.00, scale=1.00, b=6.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.betaprime.html#scipy.stats.betaprime)

### [`bradford(loc=0.00, c=0.30, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.bradford.html#scipy.stats.bradford)

### [`burr(loc=0.00, c=10.50, scale=1.00, d=4.30)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.burr.html#scipy.stats.burr)

### [`cauchy(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.cauchy.html#scipy.stats.cauchy)

### [`chi(df=78.00, loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chi.html#scipy.stats.chi)

### [`chi2(df=55.00, loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chi2.html#scipy.stats.chi2)

### [`cosine(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.cosine.html#scipy.stats.cosine)

### [`dgamma(a=1.10, loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.dgamma.html#scipy.stats.dgamma)

### [`dweibull(loc=0.00, c=2.07, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.dweibull.html#scipy.stats.dweibull)

### [`erlang(a=2.00, loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.erlang.html#scipy.stats.erlang)

### [`expon(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.expon.html#scipy.stats.expon)

### [`exponnorm(loc=0.00, K=1.50, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.exponnorm.html#scipy.stats.exponnorm)

### [`exponpow(loc=0.00, scale=1.00, b=2.70)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.exponpow.html#scipy.stats.exponpow)

### [`exponweib(a=2.89, loc=0.00, c=1.95, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.exponweib.html#scipy.stats.exponweib)

### [`f(loc=0.00, dfn=29.00, scale=1.00, dfd=18.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.f.html#scipy.stats.f)

### [`fatiguelife(loc=0.00, c=29.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.fatiguelife.html#scipy.stats.fatiguelife)

### [`fisk(loc=0.00, c=3.09, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.fisk.html#scipy.stats.fisk)

### [`foldcauchy(loc=0.00, c=4.72, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.foldcauchy.html#scipy.stats.foldcauchy)

### [`foldnorm(loc=0.00, c=1.95, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.foldnorm.html#scipy.stats.foldnorm)

### [`frechet_l(loc=0.00, c=3.63, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.frechet_l.html#scipy.stats.frechet_l)

### [`frechet_r(loc=0.00, c=1.89, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.frechet_r.html#scipy.stats.frechet_r)

### [`gamma(a=1.99, loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma)

### [`gausshyper(a=13.80, loc=0.00, c=2.51, scale=1.00, b=3.12, z=5.18)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gausshyper.html#scipy.stats.gausshyper)

### [`genexpon(a=9.13, loc=0.00, c=3.28, scale=1.00, b=16.20)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genexpon.html#scipy.stats.genexpon)

### [`genextreme(loc=0.00, c=-0.10, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genextreme.html#scipy.stats.genextreme)

### [`gengamma(a=4.42, loc=0.00, c=-3.12, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gengamma.html#scipy.stats.gengamma)

### [`genhalflogistic(loc=0.00, c=0.77, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genhalflogistic.html#scipy.stats.genhalflogistic)

### [`genlogistic(loc=0.00, c=0.41, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genlogistic.html#scipy.stats.genlogistic)

### [`gennorm(loc=0.00, beta=1.30, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gennorm.html#scipy.stats.gennorm)

### [`genpareto(loc=0.00, c=0.10, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genpareto.html#scipy.stats.genpareto)

### [`gilbrat(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gilbrat.html#scipy.stats.gilbrat)

### [`gompertz(loc=0.00, c=0.95, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gompertz.html#scipy.stats.gompertz)

### [`gumbel_l(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gumbel_l.html#scipy.stats.gumbel_l)

### [`gumbel_r(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gumbel_r.html#scipy.stats.gumbel_r)

### [`halfcauchy(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.halfcauchy.html#scipy.stats.halfcauchy)

### [`halfgennorm(loc=0.00, beta=0.68, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.halfgennorm.html#scipy.stats.halfgennorm)

### [`halflogistic(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.halflogistic.html#scipy.stats.halflogistic)

### [`halfnorm(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.halfnorm.html#scipy.stats.halfnorm)

### [`hypsecant(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.hypsecant.html#scipy.stats.hypsecant)

### [`invgamma(a=4.07, loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.invgamma.html#scipy.stats.invgamma)

### [`invgauss(mu=0.14, loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.invgauss.html#scipy.stats.invgauss)

### [`invweibull(loc=0.00, c=10.60, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.invweibull.html#scipy.stats.invweibull)

### [`johnsonsb(a=4.32, loc=0.00, scale=1.00, b=3.18)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.johnsonsb.html#scipy.stats.johnsonsb)

### [`johnsonsu(a=2.55, loc=0.00, scale=1.00, b=2.25)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.johnsonsu.html#scipy.stats.johnsonsu)

### [`ksone(loc=0.00, scale=1.00, n=1000.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ksone.html#scipy.stats.ksone)

### [`kstwobign(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.kstwobign.html#scipy.stats.kstwobign)

### [`laplace(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.laplace.html#scipy.stats.laplace)

### [`levy(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.levy.html#scipy.stats.levy)

### [`levy_l(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.levy_l.html#scipy.stats.levy_l)

### [`loggamma(loc=0.00, c=0.41, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.loggamma.html#scipy.stats.loggamma)

### [`logistic(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.logistic.html#scipy.stats.logistic)

### [`loglaplace(loc=0.00, c=3.25, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.loglaplace.html#scipy.stats.loglaplace)

### [`lognorm(loc=0.00, s=0.95, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lognorm.html#scipy.stats.lognorm)

### [`lomax(loc=0.00, c=1.88, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lomax.html#scipy.stats.lomax)

### [`maxwell(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.maxwell.html#scipy.stats.maxwell)

### [`mielke(loc=0.00, s=3.60, scale=1.00, k=10.40)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mielke.html#scipy.stats.mielke)

### [`nakagami(loc=0.00, scale=1.00, nu=4.97)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.nakagami.html#scipy.stats.nakagami)

### [`ncf(loc=0.00, dfn=27.00, nc=0.42, dfd=27.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ncf.html#scipy.stats.ncf)

### [`nct(df=14.00, loc=0.00, scale=1.00, nc=0.24)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.nct.html#scipy.stats.nct)

### [`ncx2(df=21.00, loc=0.00, scale=1.00, nc=1.06)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ncx2.html#scipy.stats.ncx2)

### [`norm(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm)

### [`pareto(loc=0.00, scale=1.00, b=2.62)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pareto.html#scipy.stats.pareto)

### [`pearson3(loc=0.00, skew=0.10, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearson3.html#scipy.stats.pearson3)

### [`powerlaw(a=1.66, loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.powerlaw.html#scipy.stats.powerlaw)

### [`powerlognorm(loc=0.00, s=0.45, scale=1.00, c=2.14)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.powerlognorm.html#scipy.stats.powerlognorm)

### [`powernorm(loc=0.00, c=4.45, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.powernorm.html#scipy.stats.powernorm)

### [`rayleigh(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rayleigh.html#scipy.stats.rayleigh)

### [`rdist(loc=0.00, c=0.90, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rdist.html#scipy.stats.rdist)

### [`recipinvgauss(mu=0.63, loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.recipinvgauss.html#scipy.stats.recipinvgauss)

### [`reciprocal(a=0.01, loc=0.00, scale=1.00, b=1.01)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.reciprocal.html#scipy.stats.reciprocal)

### [`rice(loc=0.00, scale=1.00, b=0.78)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rice.html#scipy.stats.rice)

### [`semicircular(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.semicircular.html#scipy.stats.semicircular)

### [`t(df=2.74, loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.t.html#scipy.stats.t)

### [`triang(loc=0.00, c=0.16, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.triang.html#scipy.stats.triang)

### [`truncexpon(loc=0.00, scale=1.00, b=4.69)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.truncexpon.html#scipy.stats.truncexpon)

### [`truncnorm(a=0.10, loc=0.00, scale=1.00, b=2.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.truncnorm.html#scipy.stats.truncnorm)

### [`tukeylambda(loc=0.00, scale=1.00, lam=3.13)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.tukeylambda.html#scipy.stats.tukeylambda)

### [`uniform(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.uniform.html#scipy.stats.uniform)

### [`vonmises(loc=0.00, scale=1.00, kappa=3.99)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.vonmises.html#scipy.stats.vonmises)

### [`vonmises_line(loc=0.00, scale=1.00, kappa=3.99)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.vonmises_line.html#scipy.stats.vonmises_line)

### [`wald(loc=0.00, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.wald.html#scipy.stats.wald)

### [`weibull_max(loc=0.00, c=2.87, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.weibull_max.html#scipy.stats.weibull_max)

### [`weibull_min(loc=0.00, c=1.79, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.weibull_min.html#scipy.stats.weibull_min)

### [`wrapcauchy(loc=0.00, c=0.03, scale=1.00)`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.wrapcauchy.html#scipy.stats.wrapcauchy)

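Any entry in the gallery above can be reproduced as a "frozen" distribution: calling the constructor with the shape parameters shown in its heading returns an object exposing `pdf`, `cdf`, `ppf`, and `rvs`. A minimal sketch using the `gamma` entry from the list:

```python
import scipy.stats as stats

# Freeze the distribution with the shape parameters from the gallery heading
dist = stats.gamma(a=1.99, loc=0.0, scale=1.0)

# ppf is the inverse of cdf, so round-tripping recovers the quantile
x = dist.ppf(0.5)    # median
print(dist.cdf(x))   # ~0.5

# Draw a reproducible random sample
sample = dist.rvs(size=5, random_state=42)
print(sample.shape)  # (5,)
```

The same pattern works for every distribution listed; only the shape-parameter names change.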
## Generation Code
Here is the [Jupyter Notebook](http://jupyter.org) used to generate the plots.
```
%matplotlib inline
import io
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rcParams['figure.figsize'] = (16.0, 14.0)
matplotlib.style.use('ggplot')
```
---
```
# Distributions to check, shape constants were taken from the examples on the scipy.stats distribution documentation pages.
DISTRIBUTIONS = [
stats.alpha(a=3.57, loc=0.0, scale=1.0), stats.anglit(loc=0.0, scale=1.0),
stats.arcsine(loc=0.0, scale=1.0), stats.beta(a=2.31, b=0.627, loc=0.0, scale=1.0),
stats.betaprime(a=5, b=6, loc=0.0, scale=1.0), stats.bradford(c=0.299, loc=0.0, scale=1.0),
stats.burr(c=10.5, d=4.3, loc=0.0, scale=1.0), stats.cauchy(loc=0.0, scale=1.0),
stats.chi(df=78, loc=0.0, scale=1.0), stats.chi2(df=55, loc=0.0, scale=1.0),
stats.cosine(loc=0.0, scale=1.0), stats.dgamma(a=1.1, loc=0.0, scale=1.0),
stats.dweibull(c=2.07, loc=0.0, scale=1.0), stats.erlang(a=2, loc=0.0, scale=1.0),
stats.expon(loc=0.0, scale=1.0), stats.exponnorm(K=1.5, loc=0.0, scale=1.0),
stats.exponweib(a=2.89, c=1.95, loc=0.0, scale=1.0), stats.exponpow(b=2.7, loc=0.0, scale=1.0),
stats.f(dfn=29, dfd=18, loc=0.0, scale=1.0), stats.fatiguelife(c=29, loc=0.0, scale=1.0),
stats.fisk(c=3.09, loc=0.0, scale=1.0), stats.foldcauchy(c=4.72, loc=0.0, scale=1.0),
stats.foldnorm(c=1.95, loc=0.0, scale=1.0), stats.frechet_r(c=1.89, loc=0.0, scale=1.0),
stats.frechet_l(c=3.63, loc=0.0, scale=1.0), stats.genlogistic(c=0.412, loc=0.0, scale=1.0),
stats.genpareto(c=0.1, loc=0.0, scale=1.0), stats.gennorm(beta=1.3, loc=0.0, scale=1.0),
stats.genexpon(a=9.13, b=16.2, c=3.28, loc=0.0, scale=1.0), stats.genextreme(c=-0.1, loc=0.0, scale=1.0),
stats.gausshyper(a=13.8, b=3.12, c=2.51, z=5.18, loc=0.0, scale=1.0), stats.gamma(a=1.99, loc=0.0, scale=1.0),
stats.gengamma(a=4.42, c=-3.12, loc=0.0, scale=1.0), stats.genhalflogistic(c=0.773, loc=0.0, scale=1.0),
stats.gilbrat(loc=0.0, scale=1.0), stats.gompertz(c=0.947, loc=0.0, scale=1.0),
stats.gumbel_r(loc=0.0, scale=1.0), stats.gumbel_l(loc=0.0, scale=1.0),
stats.halfcauchy(loc=0.0, scale=1.0), stats.halflogistic(loc=0.0, scale=1.0),
stats.halfnorm(loc=0.0, scale=1.0), stats.halfgennorm(beta=0.675, loc=0.0, scale=1.0),
stats.hypsecant(loc=0.0, scale=1.0), stats.invgamma(a=4.07, loc=0.0, scale=1.0),
stats.invgauss(mu=0.145, loc=0.0, scale=1.0), stats.invweibull(c=10.6, loc=0.0, scale=1.0),
stats.johnsonsb(a=4.32, b=3.18, loc=0.0, scale=1.0), stats.johnsonsu(a=2.55, b=2.25, loc=0.0, scale=1.0),
stats.ksone(n=1e+03, loc=0.0, scale=1.0), stats.kstwobign(loc=0.0, scale=1.0),
stats.laplace(loc=0.0, scale=1.0), stats.levy(loc=0.0, scale=1.0),
stats.levy_l(loc=0.0, scale=1.0), stats.levy_stable(alpha=0.357, beta=-0.675, loc=0.0, scale=1.0),
stats.logistic(loc=0.0, scale=1.0), stats.loggamma(c=0.414, loc=0.0, scale=1.0),
stats.loglaplace(c=3.25, loc=0.0, scale=1.0), stats.lognorm(s=0.954, loc=0.0, scale=1.0),
stats.lomax(c=1.88, loc=0.0, scale=1.0), stats.maxwell(loc=0.0, scale=1.0),
stats.mielke(k=10.4, s=3.6, loc=0.0, scale=1.0), stats.nakagami(nu=4.97, loc=0.0, scale=1.0),
stats.ncx2(df=21, nc=1.06, loc=0.0, scale=1.0), stats.ncf(dfn=27, dfd=27, nc=0.416, loc=0.0, scale=1.0),
stats.nct(df=14, nc=0.24, loc=0.0, scale=1.0), stats.norm(loc=0.0, scale=1.0),
stats.pareto(b=2.62, loc=0.0, scale=1.0), stats.pearson3(skew=0.1, loc=0.0, scale=1.0),
stats.powerlaw(a=1.66, loc=0.0, scale=1.0), stats.powerlognorm(c=2.14, s=0.446, loc=0.0, scale=1.0),
stats.powernorm(c=4.45, loc=0.0, scale=1.0), stats.rdist(c=0.9, loc=0.0, scale=1.0),
stats.reciprocal(a=0.00623, b=1.01, loc=0.0, scale=1.0), stats.rayleigh(loc=0.0, scale=1.0),
stats.rice(b=0.775, loc=0.0, scale=1.0), stats.recipinvgauss(mu=0.63, loc=0.0, scale=1.0),
stats.semicircular(loc=0.0, scale=1.0), stats.t(df=2.74, loc=0.0, scale=1.0),
stats.triang(c=0.158, loc=0.0, scale=1.0), stats.truncexpon(b=4.69, loc=0.0, scale=1.0),
stats.truncnorm(a=0.1, b=2, loc=0.0, scale=1.0), stats.tukeylambda(lam=3.13, loc=0.0, scale=1.0),
stats.uniform(loc=0.0, scale=1.0), stats.vonmises(kappa=3.99, loc=0.0, scale=1.0),
stats.vonmises_line(kappa=3.99, loc=0.0, scale=1.0), stats.wald(loc=0.0, scale=1.0),
stats.weibull_min(c=1.79, loc=0.0, scale=1.0), stats.weibull_max(c=2.87, loc=0.0, scale=1.0),
stats.wrapcauchy(c=0.0311, loc=0.0, scale=1.0)
]
```
---
```
bins = 32
size = 16384
plotData = []
for distribution in DISTRIBUTIONS:
    try:
        # Create random data
        rv = pd.Series(distribution.rvs(size=size))
        # Get sane start and end points of distribution
        start = distribution.ppf(0.01)
        end = distribution.ppf(0.99)
        # Build PDF and turn into pandas Series
        x = np.linspace(start, end, size)
        y = distribution.pdf(x)
        pdf = pd.Series(y, x)
        # Get histogram of random data (density=True replaces the removed normed=True)
        b = np.linspace(start, end, bins + 1)
        y, x = np.histogram(rv, bins=b, density=True)
        x = [(a + x[i + 1]) / 2.0 for i, a in enumerate(x[0:-1])]
        hist = pd.Series(y, x)
        # Create distribution name and parameter string
        title = '{}({})'.format(distribution.dist.name, ', '.join(['{}={:0.2f}'.format(k, v) for k, v in distribution.kwds.items()]))
        # Store data for later
        plotData.append({
            'pdf': pdf,
            'hist': hist,
            'title': title
        })
    except Exception:
        print('could not create data', distribution.dist.name)
```
---
```
import os

for data in plotData:
    # Bar width = spacing between adjacent histogram bin centers
    w = data['hist'].index[1] - data['hist'].index[0]
    # Display
    plt.figure(figsize=(10, 6))
    ax = data['pdf'].plot(kind='line', label='Model PDF', legend=True, lw=2)
    ax.bar(data['hist'].index, data['hist'].values, label='Random Sample', width=w, align='center', alpha=0.5)
    ax.set_title(data['title'])
    # Grab figure
    fig = plt.gcf()
    # Output file (savefig does not expand '~', so expand it explicitly)
    fig.savefig(os.path.expanduser('~/Desktop/dist/' + data['title'] + '.png'), format='png', bbox_inches='tight')
    plt.close()
```