title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
How does str(list) work? | 30,109,030 | 7 | 2015-05-07T18:32:28Z | 30,109,108 | 43 | 2015-05-07T18:37:29Z | [
"python",
"string",
"list",
"eval",
"python-internals"
] | **Why does `str(list)` return the list as we see it on the console? How does `str(list)` work? (Any reference to the CPython code for `str(list)`?)**
```
>>> x = ['abc', 'def', 'ghi']
>>> str(x)
"['abc', 'def', 'ghi']"
```
To get the original list back from the `str(list)` I have to:
```
>>> from ast import literal_eval
>>> x = ['abc', 'def', 'ghi']
>>> str(x)
"['abc', 'def', 'ghi']"
>>> list(str(x))
['[', "'", 'a', 'b', 'c', "'", ',', ' ', "'", 'd', 'e', 'f', "'", ',', ' ', "'", 'g', 'h', 'i', "'", ']']
>>> literal_eval(str(x))
['abc', 'def', 'ghi']
```
**Why doesn't `list(str(list))` turn the `str(list)` back to the original list?**
Or I could use:
```
>>> eval(str(x))
['abc', 'def', 'ghi']
```
**Is `literal_eval` the same as `eval`? Is `eval` safe to use?**
**How many times can I do the following? Does the code break if I keep on doing `str(list(str(list)))`?** E.g.
```
>>> x = 'abc'
>>> list(x)
['a', 'b', 'c']
>>> str(list(x))
"['a', 'b', 'c']"
>>> list(str(list(x)))
['[', "'", 'a', "'", ',', ' ', "'", 'b', "'", ',', ' ', "'", 'c', "'", ']']
>>> str(list(str(list(x))))
'[\'[\', "\'", \'a\', "\'", \',\', \' \', "\'", \'b\', "\'", \',\', \' \', "\'", \'c\', "\'", \']\']'
>>> list(str(list(str(list(x)))))
['[', "'", '[', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", 'a', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", 'b', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", 'c', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ']', "'", ']']
>>> str(list(str(list(str(list(x))))))
'[\'[\', "\'", \'[\', "\'", \',\', \' \', \'"\', "\'", \'"\', \',\', \' \', "\'", \'a\', "\'", \',\', \' \', \'"\', "\'", \'"\', \',\', \' \', "\'", \',\', "\'", \',\', \' \', "\'", \' \', "\'", \',\', \' \', \'"\', "\'", \'"\', \',\', \' \', "\'", \'b\', "\'", \',\', \' \', \'"\', "\'", \'"\', \',\', \' \', "\'", \',\', "\'", \',\', \' \', "\'", \' \', "\'", \',\', \' \', \'"\', "\'", \'"\', \',\', \' \', "\'", \'c\', "\'", \',\', \' \', \'"\', "\'", \'"\', \',\', \' \', "\'", \']\', "\'", \']\']'
>>> list(str(list(str(list(str(list(x)))))))
['[', "'", '[', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", '[', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', "'", '"', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", '"', "'", ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", 'a', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', "'", '"', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", '"', "'", ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', "'", '"', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", '"', "'", ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", 'b', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', "'", '"', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", '"', "'", ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', "'", '"', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", '"', "'", ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", 'c', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', "'", '"', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", '"', "'", ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ']', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ']', "'", ']']
``` | Well, you have a total of 4 questions; let us go through them one by one.
> **1. Why does `str(list)` returns how we see `list` on the console? How does `str(list)` work?**
# What is [`str()`](https://docs.python.org/3/library/functions.html#str) and [`__str__()`](https://docs.python.org/3/reference/datamodel.html#object.__str__)?
The `str()` callable returns a *printable* form of the object only! From the [docs](https://docs.python.org/3/library/functions.html#str)
> `str(object)` does not always attempt to return a string that is
> acceptable to `eval()`; its goal is to return a printable string.
The `__str__()` function in a class is called whenever you call `str()` on an object. Again from the [documentation](https://docs.python.org/3/reference/datamodel.html#object.__str__)
> `object.__str__(self)`
>
> Called by the `str()` built-in function and by the `print` statement to compute the "informal" string representation of an object.
# What is the [`list`](https://docs.python.org/3/library/functions.html#list) callable?
The `list()` callable creates a list from an iterable passed as an argument. Again from the [docs](https://docs.python.org/3/library/functions.html#list)
> Return a `list` whose items are the same and in the same order as
> iterable's items
Thus, `str(list)` gives you a printable form and `list(str(list))` will iterate over the string. That is `list(str(list))` will give you a list of the individual characters of the printable form of the argument passed.
A small walk-through between the nested calls,
Given list, `l = ['a','b']` (Apologies for taking a smaller example than that in your question).
When you call `str(l)`, it returns a printable form of the list `l`, that is `"['a', 'b']"`.
Now you can see clearly that `"['a', 'b']"` is a string and is indeed an *iterable*. Now when you call `list` on it, i.e. `list("['a', 'b']")`, you get a weird list like `['[', "'", 'a', "'", ',', ' ', "'", 'b', "'", ']']`. *Why does this happen?* This happens because iterating over a string yields its individual characters; you can test this by using a dummy string,
```
>>> 'dummy'
'dummy'
>>> list('dummy')
['d', 'u', 'm', 'm', 'y']
```
Thus when you call `list` on a string you get a list of characters. Note again that here, when you call `str()` on `list('dummy')`, you will not get back your original string `'dummy'`; you will have to use [`join`](https://docs.python.org/3/library/stdtypes.html#str.join) instead! Thus calling the same function again will **NOT** get you back your original object!
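For completeness, the `join` round-trip looks like:

```python
chars = list('dummy')
print(chars)            # ['d', 'u', 'm', 'm', 'y']
print(''.join(chars))   # dummy
```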
**So, calling `str()` over a list calls the builtin `__str__()` method of the list?**
***The answer is NO!***
## What happens internally when you call `str()` on a list?
Whenever you call `str()` on a list object, the steps followed are
1. Call the `repr()` of each list element.
2. Add a fancy `[` at the front and another `]` at the end of the list.
3. Join all of them with a comma.
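As a rough Python-level sketch (not the actual C implementation, which lives in `listobject.c`), those three steps amount to:

```python
def list_repr(lst):
    # Step 1: repr() each element; step 3: join them with ", ";
    # step 2: wrap the result in "[" and "]".
    return '[' + ', '.join(repr(item) for item in lst) + ']'

x = ['abc', 'def', 'ghi']
print(list_repr(x))            # ['abc', 'def', 'ghi']
print(list_repr(x) == str(x))  # True
```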
~~As you can see from the source code of the list object in [cpython on github](https://github.com/python/cpython/blob/master/Objects/listobject.c).~~ Going through the source code of cpython in [hg.python](https://hg.python.org/cpython/file/e8783c581928/Objects/listobject.c#l362), which is more clear, you can see the following three comments. (Thanks to Ashwini for the link on that particular [code](http://stackoverflow.com/questions/30109030/how-does-strlist-work#comment48330380_30109030))
> ```
> /* Do repr() on each element. Note that this may mutate the list,
> so must refetch the list size on each iteration. */ line (382)
>
> /* Add "[]" decorations to the first and last items. */ line (398)
>
> /* Paste them all together with ", " between. */ line (418)
> ```
These correspond to the points I mentioned above.
# Now what is [`repr()`](https://docs.python.org/3/library/functions.html#repr)?
`repr()` gives the string representation of an object. Again from the [documentation](https://docs.python.org/3/library/functions.html#repr)
> Return a string containing a printable representation of an object.
and also note this sentence!
> For many types, this function makes an attempt to return a string
> that would yield an object with the same value when passed to `eval()`,
> otherwise the representation is a string enclosed in angle brackets
> that contains the name of the type of the object together with
> additional information often including the name and address of the
> object.
And now your second question here,
> **2. Why doesn't `list(str(list))` turn the `str(list)` back to the original list?**
Internally, `str(list)` actually creates the `repr()` representation of the list object. So to get back the list after calling `str` on the list, you actually need to do [`eval`](https://docs.python.org/3/library/functions.html#eval) on it and not a `list` call.
# Workarounds
But we all know that [`eval` is *evil*](http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html), so what is/are the workaround(s)?
## 1. Using [`literal_eval`](https://docs.python.org/3/library/ast.html#ast.literal_eval)
The first work-around would be to use [`ast.literal_eval`](https://docs.python.org/3/library/ast.html#ast.literal_eval). That brings us to your 3rd question,
> **3. Is `literal_eval()` the same as `eval()`? Is `eval()` safe to use?**
[`ast.literal_eval()`](https://docs.python.org/3/library/ast.html#ast.literal_eval) is safe [unlike](http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html) the `eval()` function. The docs themselves mention that it is safe --
> *Safely* evaluate an expression node or a string containing a Python literal or container display
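A quick demonstration of the difference: `literal_eval` round-trips the list, but refuses anything that is not a plain literal or container, so a malicious string raises an error instead of being executed:

```python
from ast import literal_eval

x = ['abc', 'def', 'ghi']
print(literal_eval(str(x)) == x)  # True: safe round-trip

try:
    literal_eval("__import__('os').system('echo pwned')")
except ValueError:
    print('rejected')  # the expression is never executed
```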
## 2. Using string functions and builtins
Another workaround can be done using [`str.split()`](https://docs.python.org/3/library/stdtypes.html#str.split)
```
>>> x = ['abc', 'def', 'ghi']
>>> a = str(x)
>>> a[2:-2].split("', '")
['abc', 'def', 'ghi']
```
This is just a simple way to do that for a list of strings. For a list of integers you will need [`map`](https://docs.python.org/3/library/functions.html#map).
```
>>> x = [1,2,3]
>>> a = str(x)
>>> list(map(int,a[1:-1].split(', '))) # No need for list call in Py2
[1, 2, 3]
```
Thus, unlike `literal_eval`, these are simple hacks that work given that you know the types of the list's elements. If they are heterogeneous in nature, like `[1, "a", True]`, then you will have to loop through the split list, discover each element's type, convert it, and append the converted element to a final list.
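A rough sketch of such a loop for that example (the `convert` helper and its type-probing order are illustrative assumptions, not a general-purpose parser):

```python
def convert(token):
    # Try numeric literals first, then booleans, else strip quotes.
    for cast in (int, float):
        try:
            return cast(token)
        except ValueError:
            pass
    if token in ('True', 'False'):
        return token == 'True'
    return token.strip("'\"")

s = str([1, "a", True])             # "[1, 'a', True]"
parts = s[1:-1].split(', ')
print([convert(p) for p in parts])  # [1, 'a', True]
```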
And for your final question,
> **4. Does the code break if you do `str(list(str(list))))` again and again?**
Not really. The output will grow longer and longer, as each time you are creating a `list` of a `str` and then again getting the printable version of it. The only limitation is your physical machine's memory (which will be reached soon, since at each step the string length is multiplied by 5). |
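You can watch that fivefold growth directly: every character `c` becomes `repr(c)` (3 characters) plus a `', '` separator, so a string of length n becomes one of length 5n:

```python
x = 'abc'
s = str(list(x))
lengths = [len(s)]
for _ in range(4):
    s = str(list(s))
    lengths.append(len(s))
print(lengths)  # [15, 75, 375, 1875, 9375]
```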
Implement MATLAB's im2col 'sliding' in Python | 30,109,068 | 7 | 2015-05-07T18:34:25Z | 30,110,497 | 10 | 2015-05-07T19:54:33Z | [
"python",
"performance",
"python-2.7",
"image-processing",
"numpy"
] | Q: How to speed this up?
Below is my implementation of Matlab's [im2col](http://www.mathworks.com/help/images/ref/im2col.html) 'sliding' with the additional feature of returning every n'th column. The function takes an image (or any 2-dimensional array) and slides from left to right, top to bottom, picking off every overlapping sub-image of a given size, and returning an array whose columns are the sub-images.
```
import numpy as np
def im2col_sliding(image, block_size, skip=1):
rows, cols = image.shape
horz_blocks = cols - block_size[1] + 1
vert_blocks = rows - block_size[0] + 1
output_vectors = np.zeros((block_size[0] * block_size[1], horz_blocks * vert_blocks))
itr = 0
for v_b in xrange(vert_blocks):
for h_b in xrange(horz_blocks):
output_vectors[:, itr] = image[v_b: v_b + block_size[0], h_b: h_b + block_size[1]].ravel()
itr += 1
return output_vectors[:, ::skip]
```
example:
```
a = np.arange(16).reshape(4, 4)
print a
print im2col_sliding(a, (2, 2)) # return every overlapping 2x2 patch
print im2col_sliding(a, (2, 2), 4) # return every 4th vector
```
returns:
```
[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]]
[[ 0. 1. 2. 4. 5. 6. 8. 9. 10.]
[ 1. 2. 3. 5. 6. 7. 9. 10. 11.]
[ 4. 5. 6. 8. 9. 10. 12. 13. 14.]
[ 5. 6. 7. 9. 10. 11. 13. 14. 15.]]
[[ 0. 5. 10.]
[ 1. 6. 11.]
[ 4. 9. 14.]
[ 5. 10. 15.]]
```
The performance isn't great, especially considering whether I call `im2col_sliding(big_matrix, (8, 8))` (62001 columns) or `im2col_sliding(big_matrix, (8, 8), 10)` (6201 columns; keeping only every 10th vector) it will take the same amount of time [where big\_matrix is of size 256 x 256].
I'm looking for any ideas to speed this up. | You could use some [`broadcasting`](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) here to get all the indices of all those sliding windows in one go and thus with indexing achieve a `vectorized solution`. This is inspired by [`Efficient Implementation of im2col and col2im`](http://stackoverflow.com/q/25449279/3293881). Here's the implementation in `Python/Numpy` -
```
import numpy as np
A = np.random.randint(0,9,(8,5)) # Sample input array
B = [2,4] # Sample blocksize (rows x columns)
# Parameters
M,N = A.shape
col_extent = N - B[1] + 1
row_extent = M - B[0] + 1
# Get Starting block indices
start_idx = np.arange(B[0])[:,None]*N + np.arange(B[1])
# Get offsetted indices across the height and width of input array
offset_idx = np.arange(row_extent)[:,None]*N + np.arange(col_extent)
# Get all actual indices & index into input array for final output
out = np.take(A, start_idx.ravel()[:,None] + offset_idx.ravel())
```
Note that you can introduce the additional skipping facility at the last line with `offset_idx.ravel()[::skip]` instead of `offset_idx.ravel()`.
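Wrapped up as a function with the optional skip (the function name here is just for illustration), the approach reproduces the `skip=4` example from the question:

```python
import numpy as np

def im2col_broadcasting(A, B, skip=1):
    M, N = A.shape
    row_extent = M - B[0] + 1
    col_extent = N - B[1] + 1
    # Linear indices of the first block, then offsets for every window position
    start_idx = np.arange(B[0])[:, None] * N + np.arange(B[1])
    offset_idx = np.arange(row_extent)[:, None] * N + np.arange(col_extent)
    return np.take(A, start_idx.ravel()[:, None] + offset_idx.ravel()[::skip])

a = np.arange(16).reshape(4, 4)
print(im2col_broadcasting(a, (2, 2), skip=4))
# [[ 0  5 10]
#  [ 1  6 11]
#  [ 4  9 14]
#  [ 5 10 15]]
```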
Sample run -
```
In [440]: A # Input array
Out[440]:
array([[2, 2, 1, 0, 5],
[6, 8, 6, 0, 2],
[0, 1, 3, 4, 8],
[3, 0, 3, 8, 1],
[7, 2, 0, 1, 5],
[8, 6, 5, 7, 1],
[1, 0, 1, 0, 5],
[4, 7, 0, 5, 0]])
In [441]: B # Block size
Out[441]: [2, 4]
In [442]: out # Output array
Out[442]:
array([[2, 2, 6, 8, 0, 1, 3, 0, 7, 2, 8, 6, 1, 0],
[2, 1, 8, 6, 1, 3, 0, 3, 2, 0, 6, 5, 0, 1],
[1, 0, 6, 0, 3, 4, 3, 8, 0, 1, 5, 7, 1, 0],
[0, 5, 0, 2, 4, 8, 8, 1, 1, 5, 7, 1, 0, 5],
[6, 8, 0, 1, 3, 0, 7, 2, 8, 6, 1, 0, 4, 7],
[8, 6, 1, 3, 0, 3, 2, 0, 6, 5, 0, 1, 7, 0],
[6, 0, 3, 4, 3, 8, 0, 1, 5, 7, 1, 0, 0, 5],
[0, 2, 4, 8, 8, 1, 1, 5, 7, 1, 0, 5, 5, 0]])
```
---
## Runtimes tests
Here are a few runtime tests comparing the original loopy approach against the proposed broadcasting-based vectorized approach for various input data sizes -
**Case #1: `A` as `1000 x 1000` and Blocksize, `B` as `100 x 100`**
```
In [461]: %timeit loop_based(A,B)
1 loops, best of 3: 1.86 s per loop
In [462]: %timeit broadcasting_based(A,B)
1 loops, best of 3: 569 ms per loop
```
**Case #2: `A` as `1000 x 1000` and Blocksize, `B` as `10 x 10`**
```
In [464]: %timeit loop_based(A,B)
100 loops, best of 3: 15.3 ms per loop
In [465]: %timeit broadcasting_based(A,B)
100 loops, best of 3: 4.51 ms per loop
```
**Case #3: `A` as `100 x 100` and Blocksize, `B` as `2 x 2`**
```
In [467]: %timeit loop_based(A,B)
100 loops, best of 3: 14.3 ms per loop
In [468]: %timeit broadcasting_based(A,B)
10000 loops, best of 3: 89.1 µs per loop
``` |
How to skip `if __name__ == "__main__"` in interactive mode? | 30,109,178 | 2 | 2015-05-07T18:40:41Z | 30,109,210 | 8 | 2015-05-07T18:42:18Z | [
"python",
"python-3.x"
] | Given a simple script like:
```
#!/usr/bin/env python3
if __name__ == "__main__":
print("Hello World")
```
How can I load that into the interactive interpreter without executing the `if __name__ == "__main__":` block? By default it gets executed:
```
$ python3 -i simple-script.py
Hello World
>>>
``` | Don't pass it as an argument, import it into the interpreter.
```
$ python3
>>> import simple_script
>>>
``` |
What does "SSLError: [SSL] PEM lib (_ssl.c:2532)" mean using the Python ssl library? | 30,109,449 | 5 | 2015-05-07T18:55:31Z | 30,109,730 | 8 | 2015-05-07T19:11:55Z | [
"python",
"python-3.x",
"ssl",
"ssl-certificate",
"python-asyncio"
**In case someone had a similar issue:** This question was based on a great misconception of how SSL works. Though the variables are named poorly, the actual problem was that the certs that were being used had expired. Generating new ones fixed the problem.
I am trying to connect to another party using the Python 3 asyncio module and get this error:
```
36 sslcontext = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
---> 37 sslcontext.load_cert_chain(cert, keyfile=ca_cert)
38
SSLError: [SSL] PEM lib (_ssl.c:2532)
```
The question is just what the error means. My certificate is correct; *the keyfile (CA certificate) might not be*. | Assuming that version 3.4 is being used:
See: <https://github.com/python/cpython/blob/3.4/Modules/_ssl.c#L2529-L2535>
```
PySSL_BEGIN_ALLOW_THREADS_S(pw_info.thread_state);
r = SSL_CTX_check_private_key(self->ctx);
PySSL_END_ALLOW_THREADS_S(pw_info.thread_state);
if (r != 1) {
_setSSLError(NULL, 0, __FILE__, __LINE__);
goto error;
}
```
What it is saying is that `SSL_CTX_check_private_key` failed; thus, the private key is not correct. |
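Since the root cause here turned out to be expired certificates, note that the `notAfter` timestamps you get from `ssl.getpeercert()` can be sanity-checked with the standard library; a sketch, assuming the usual `'%b %d %H:%M:%S %Y GMT'` format:

```python
import ssl
import time

# 'notAfter' as returned in the dict from ssl.getpeercert()
not_after = 'May  7 18:55:31 2015 GMT'
expiry = ssl.cert_time_to_seconds(not_after)
if expiry < time.time():
    print('certificate expired')
```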
Make a new array selectively copying, with varying multiplicity, rows of old array in numpy | 30,111,771 | 2 | 2015-05-07T21:13:45Z | 30,111,862 | 7 | 2015-05-07T21:19:31Z | [
"python",
"numpy"
] | I have a 2D array with n rows in numpy, and an accompanying 1-D array of n elements in which the ith element specifies how many times the ith row of the original array should go into a new array. For example if my 2D array is:
```
array([[1, 1, 1],
[2, 2, 2],
[3, 3, 3],
[4, 4, 4],
[5, 5, 5]])
```
And my 1D array is
```
array([2, 0, 1, 0, 3])
```
Then I'd like the new array to be:
```
array([[1, 1, 1],
[1, 1, 1],
[3, 3, 3],
[5, 5, 5],
[5, 5, 5],
[5, 5, 5]])
```
I can't figure out how to do this efficiently, does anyone have any ideas? | You could use [np.repeat()](http://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html) to repeat elements of an array.
```
In [174]: x.repeat(np.array([2, 0, 1, 0, 3]), axis=0)
Out[174]:
array([[1, 1, 1],
[1, 1, 1],
[3, 3, 3],
[5, 5, 5],
[5, 5, 5],
[5, 5, 5]])
```
---
Details:
```
In [175]: x
Out[175]:
array([[1, 1, 1],
[2, 2, 2],
[3, 3, 3],
[4, 4, 4],
[5, 5, 5]])
In [176]: repeat_on = np.array([2, 0, 1, 0, 3])
In [177]: x.repeat(repeat_on, axis=0)
Out[177]:
array([[1, 1, 1],
[1, 1, 1],
[3, 3, 3],
[5, 5, 5],
[5, 5, 5],
[5, 5, 5]])
``` |
TypeError: descriptor 'strftime' requires a 'datetime.date' object but received a 'Text' | 30,112,357 | 5 | 2015-05-07T21:55:44Z | 30,112,469 | 10 | 2015-05-07T22:05:09Z | [
"python",
"datetime"
] | I have a variable `testeddate` which has a date in text format like 4/25/2015. I am trying convert it to `%Y-%m-%d %H:%M:%S` as follows:
```
dt_str = datetime.strftime(testeddate,'%Y-%m-%d %H:%M:%S')
```
but I am running into this error:
```
TypeError: descriptor 'strftime' requires a 'datetime.date' object but received a 'Text'
```
How do I resolve this? | You have a `Text` object. The [`strftime`](https://docs.python.org/2/library/time.html#time.strftime) function requires a datetime object. The code below takes an intermediate step of converting your `Text` to a `datetime` using [`strptime`](https://docs.python.org/2/library/time.html#time.strptime)
```
import datetime
testeddate = '4/25/2015'
dt_obj = datetime.datetime.strptime(testeddate,'%m/%d/%Y')
```
At this point, the `dt_obj` is a datetime object. This means we can easily convert it to a string with any format. In your particular case:
```
dt_str = datetime.datetime.strftime(dt_obj,'%Y-%m-%d %H:%M:%S')
```
The `dt_str` now is:
```
'2015-04-25 00:00:00'
``` |
Continue until all iterators are done Python | 30,113,214 | 7 | 2015-05-07T23:16:36Z | 30,113,331 | 18 | 2015-05-07T23:28:02Z | [
"python",
"generator",
"iterable"
] | **I cannot use itertools**
So the coding seems pretty simple, but I'm having trouble thinking of the algorithm to keep a generator running until all iterations have been processed fully.
The idea of the function is to take 2 iterables as parameters like this ...
`(['a', 'b', 'c', 'd', 'e'], [1,2,5])`
And what it does is yield these values ...
`a, b, b, c, c, c, c, c`
However, in the event that the second iterable runs out of elements first, the function simply yields each remaining value one time ...
So the remaining values would be iterated like this:
`d, e`
```
def iteration(letters, numbers):
times = 0
for x,y in zip(letters, numbers):
try:
for z in range(y):
yield x
except:
continue
[print(x) for x in iteration(['a', 'b', 'c', 'd'], [1,2,3])]
```
I'm having difficulty ignoring the first StopIteration and continuing to completion. | Use a default value of `1` for `next` so you yield the letters at least once:
```
def iteration(letters, numbers):
# create iterator from numbers
it = iter(numbers)
# get every letter
for x in letters:
        # yield for the range passed, or a default range of 1
for z in range(next(it, 1)):
yield x
```
Output:
```
In [60]: for s in iteration(['a', 'b', 'c', 'd', 'e'], [1,2,5]):
....: print(s)
....:
a
b
b
c
c
c
c
c
d
e
``` |
How come Python does not include a function to load a pickle from a file name? | 30,113,346 | 6 | 2015-05-07T23:29:35Z | 30,113,466 | 9 | 2015-05-07T23:41:57Z | [
"python",
"pickle"
] | I often include this, or something close to it, in Python scripts and IPython notebooks.
```
import cPickle
def unpickle(filename):
with open(filename) as f:
obj = cPickle.load(f)
return obj
```
This seems like a common enough use case that the standard library should provide a function that does the same thing. Is there such a function? If there isn't, how come? | Most of the serialization libraries in the stdlib and on PyPI have a similar API. I'm pretty sure it was [`marshal`](https://docs.python.org/3/library/marshal.html) that set the standard,\* and `pickle`, `json`, `PyYAML`, etc. have just followed in its footsteps.
So, the question is, why was `marshal` designed that way?
Well, you obviously need `loads`/`dumps`; you couldn't build those on top of a filename-based function, and to build them on top of a file-object-based function you'd need `StringIO`, which didn't come until later.
You don't necessarily *need* `load`/`dump`, because those could be built on top of `loads`/`dumps`, but doing so could have major performance implications: you can't save anything to the file until you've built the whole thing in memory, and vice-versa, which could be a problem for huge objects.
You definitely don't need a `loadf`/`dumpf` function based on filenames, because those can be built trivially on top of `load`/`dump`, with no performance implications, and no tricky considerations that a user is likely to get wrong.
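For illustration, such filename-based wrappers over `pickle` take only a few lines (the names `loadf`/`dumpf` follow the hypothetical naming used in this answer):

```python
import os
import pickle
import tempfile

def dumpf(obj, filename):
    with open(filename, 'wb') as f:
        pickle.dump(obj, f)

def loadf(filename):
    with open(filename, 'rb') as f:
        return pickle.load(f)

path = os.path.join(tempfile.gettempdir(), 'demo.pkl')
dumpf({'a': 1}, path)
print(loadf(path))  # {'a': 1}
```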
On the one hand, it would be convenient to have them anyway, and there are some libraries, like `ElementTree`, that do have analogous functions. It may only save a few seconds and a few lines per project, but multiply that by thousands of projects…
On the other hand, it would make Python larger. Not so much the extra 1K to download and install it if you added these two functions to every module (although that did mean a lot more back in the 1.x days…), but more to document, more to learn, more to remember. And of course more code to maintain: every time you need to fix a bug in `marshal.dumpf` you have to remember to go check `pickle.dumpf` and `json.dumpf` to make sure they don't need the change, and sometimes you won't remember.
Balancing those two considerations is really a judgment call. One someone made decades ago and probably nobody has really discussed since. If you think there's a good case for changing it today, you can always post a feature request on [the issue tracker](http://bugs.python.org/) or start a thread on [`python-ideas`](https://mail.python.org/mailman/listinfo/python-ideas).
---
\* Not in [the original 1991 version of `marshal.c`](https://hg.python.org/cpython/file/457661e47c7b/Python/marshal.c); that just had `load` and `dump`. Guido added `loads` and `dumps` [in 1993](https://hg.python.org/cpython/file/38f0054f44a7/Python/marshal.c) as part of a change whose main description was "Add separate main program for the Mac: macmain.c". Presumably because something inside the Python interpreter needed to dump and load to strings.\*\*
\*\* `marshal` is used as the underpinnings for things like importing `.pyc` files. This also means (at least in CPython) it's not just implemented in C, but statically built into the core of the interpreter itself. I think it actually *could* be turned into a regular module since the 3.4 `import` changes, but it definitely couldn't have been back in the early days. So, that's extra motivation to keep it small and simple. |
Using `split` on columns too slow - how can I get better performance? | 30,113,499 | 2 | 2015-05-07T23:46:01Z | 30,113,715 | 7 | 2015-05-08T00:12:06Z | [
"python",
"performance",
"pandas",
"split"
] | I've a dataset (around 10Gb) of call records. There's column with ip addresses that I want to split into four new columns. I'm trying to use:
```
df['ip'].fillna('0.0.0.0', inplace=True)
df = df.join(df['ip'].apply(lambda x: Series(x.split('.'))))
```
but it's tooooo slow... the `fillna` is fast, like 10s, but then it stays in the split for like 5 hours...
Is there any better way to do it? | ~~It turns out that the `str.split` in Pandas (in `core/strings.py` as `str_split`) is actually very slow; it isn't any more efficient, and still iterates through using Python, offering no speedup whatsoever.~~
Actually, see below. Pandas performance on this is simply miserable; it's not just Python vs C iteration, as doing the same thing with Python lists is the fastest method!
Interestingly, though, there's a trick solution that's much faster: writing the Series out to text, and then reading it in again, with '.' as the separator:
```
df[['ip0', 'ip1', 'ip2', 'ip3']] = \
pd.read_table(StringIO(df['ip'].to_csv(None, index=None)), sep='.', header=None)
```
To compare, I use Marius' code and generate 20,000 ips:
```
import pandas as pd
import random
import numpy as np
from StringIO import StringIO
def make_ip():
return '.'.join(str(random.randint(0, 255)) for n in range(4))
df = pd.DataFrame({'ip': [make_ip() for i in range(20000)]})
%timeit df[['ip0', 'ip1', 'ip2', 'ip3']] = df.ip.str.split('.', return_type='frame')
# 1 loops, best of 3: 3.06 s per loop
%timeit df[['ip0', 'ip1', 'ip2', 'ip3']] = df['ip'].apply(lambda x: pd.Series(x.split('.')))
# 1 loops, best of 3: 3.1 s per loop
%timeit df[['ip0', 'ip1', 'ip2', 'ip3']] = \
pd.read_table(StringIO(df['ip'].to_csv(None,index=None)),sep='.',header=None)
# 10 loops, best of 3: 46.4 ms per loop
```
---
Ok, so I wanted to compare all of this to just using a Python list and the Python split, which should be slower than using the more efficient Pandas:
```
iplist = list(df['ip'])
%timeit [ x.split('.') for x in iplist ]
100 loops, best of 3: 10 ms per loop
```
What!? Apparently, the best way to do a simple string operation on a large number of strings is to *throw out Pandas entirely*. **Using Pandas makes the process 400 times slower.** If you want to use Pandas, though, you may as well just convert to a Python list and back:
```
%timeit df[['ip0', 'ip1', 'ip2', 'ip3']] = \
pd.DataFrame([ x.split('.') for x in list(df['ip']) ])
# 100 loops, best of 3: 18.4 ms per loop
```
There's something *very* wrong here. |
Get the current function name in the Flask template context | 30,114,205 | 3 | 2015-05-08T01:09:43Z | 30,114,380 | 7 | 2015-05-08T01:33:05Z | [
"python",
"flask"
] | What I basically need is something like the following:
```
{{ url_for(current_view) }}
```
in a Jinja2 template, such that if the response came from:
```
@app.route('/')
def index():
return render('index.html')
```
then `current_view` equals to `"index"`, i.e., the actual view name, such that `url_for` works.
To provide some context, this problem actually came up when using `flask-babel`. I want to have a "switch language button" that basically redirects to the same url, but with a different language code, as in:
```
{{ url_for(current_view, lang_code="es") }}
```
if you are in the English page, and `lang_code="en"` if you are on the Spanish page. | You can use `request.endpoint`:
```
{{ url_for(request.endpoint) }}
``` |
How to use a WTForms FieldList of FormFields? | 30,121,763 | 5 | 2015-05-08T10:38:49Z | 30,179,649 | 9 | 2015-05-12T00:10:51Z | [
"python",
"forms",
"flask",
"wtforms",
"fieldlist"
] | I'm building a website using [Flask](http://flask.pocoo.org/) in which I use [WTForms](http://wtforms.readthedocs.org/). In a Form I now want to use a FieldList of FormFields as follows:
```
class LocationForm(Form):
location_id = StringField('location_id')
city = StringField('city')
class CompanyForm(Form):
company_name = StringField('company_name')
locations = FieldList(FormField(LocationForm))
```
so to give people the ability to enter a company with two locations (dynamic adding of locations comes later) I do this on the front side:
```
<form action="" method="post" role="form">
{{ companyForm.hidden_tag() }}
{{ companyForm.company_name() }}
{{ locationForm.location_id() }}
{{ locationForm.city() }}
{{ locationForm.location_id() }}
{{ locationForm.city() }}
<input type="submit" value="Submit!" />
</form>
```
So on submit I print the locations:
```
print companyForm.locations.data
```
but I get
```
[{'location_id': u'', 'city': u''}]
```
I can print the values of the first location using the locationForm (see below), but I still don't know how to get the data of the second location.
```
print locationForm.location_id.data
print locationForm.city.data
```
So the list of locations does have one dict with empty values, but:
1. Why does the list of locations have only one, and not two dicts?
2. And why are the values in the location dict empty?
Does anybody know what I'm doing wrong here? All tips are welcome! | For starters, there's an argument for the [FieldList](http://wtforms.readthedocs.org/en/latest/fields.html#wtforms.fields.FieldList) called `min_entries`, that will make space for your data:
```
class CompanyForm(Form):
company_name = StringField('company_name')
locations = FieldList(FormField(LocationForm), min_entries=2)
```
This will set up the list the way you need. Next you should render the fields directly from the `locations` property, so names are generated correctly:
```
<form action="" method="post" role="form">
{{ companyForm.hidden_tag() }}
{{ companyForm.company_name() }}
{{ companyForm.locations() }}
<input type="submit" value="Submit!" />
</form>
```
Look at the rendered html, the inputs should have names like `locations-0-city`, this way WTForms will know which is which.
Alternatively, for custom rendering of elements do
```
{% for l in companyForm.locations %}
{{ l.form.city }}
{% endfor %}
```
(in wtforms alone `l.city` is shorthand for `l.form.city`. However, that syntax seems to clash with Jinja, and there it is necessary to use the explicit `l.form.city` in the template.)
Now to ready the submitted data, just create the `CompanyForm` and iterate over the locations:
```
for entry in form.locations.entries:
print entry.data['location_id']
print entry.data['city']
``` |
Change Python if input to be case insensitive | 30,123,112 | 3 | 2015-05-08T11:49:18Z | 30,123,141 | 9 | 2015-05-08T11:50:55Z | [
"python"
] | I'm in school, and since we are rather young students, some of my 'colleagues' do not grasp what case sensitivity is. We are making a quiz in Python. Here is the code:
```
score = 0 #this defines variable score, and sets it as zero
print("What is the capital of the UK?")
answer = input ()
if answer == "London":
print("Well done")
score = score + 1 #this increases score by one
else:
print("Sorry the answer was London")
print("What is the capital of France?")
answer = input ()
if answer == "Paris":
print("Well done")
score = score + 1 #this increases score by one
else:
print("Sorry the answer was Paris")
print("Your score was ",score)
```
They are inputting 'london' as an answer instead of 'London' and still getting the answer wrong. Any workaround? | You can use `.upper()` or `.lower()`
```
if answer.lower() == 'london':
``` |
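To avoid repeating the normalization at every question, the comparison can live in a small helper. This is just an illustrative sketch; the `is_correct` and `ask` names are my own, not part of the original quiz:

```python
def is_correct(reply, correct):
    # Strip stray whitespace and fold case on both sides
    return reply.strip().lower() == correct.strip().lower()

def ask(question, correct, score):
    # input() only runs at quiz time; the comparison logic lives in is_correct()
    answer = input(question + " ")
    if is_correct(answer, correct):
        print("Well done")
        return score + 1
    print("Sorry the answer was " + correct)
    return score
```

With this, each question in the quiz becomes a single `score = ask(...)` line, and students typing `london`, `LONDON` or `  London ` all get the point.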
How can I make a script to recover my Grooveshark playlists now that the service has been shut down? | 30,124,541 | 4 | 2015-05-08T13:00:14Z | 30,124,542 | 17 | 2015-05-08T13:00:14Z | [
"python",
"backup",
"playlist",
"recovery",
"grooveshark"
] | The Grooveshark music streaming service has been shut down without prior notice. I had many playlists that I would like to recover (playlists I made over several years).
Is there any way I could recover them? A script or something automated would be awesome. | I made a script that will try to find all the playlists made by the user and download them in an output directory as CSV files. It is made in Python.
* You must just pass your username as parameter to the script (i.e. `python pysharkbackup.py "my_user_name"`). Your email address should work as well (the one you used for registering in Grooveshark).
* The output directory is set by default to `./pysharkbackup_$USERNAME`.
Here is the script:
```
#!/bin/python
import os
import sys
import csv
import argparse
import requests
URI = 'http://playlist.fish/api'
description = 'Download your Grooveshark playlists as CSV.'
parser = argparse.ArgumentParser(description = description)
parser.add_argument('USER', type=str, help='Grooveshark user name')
args = parser.parse_args()
user = args.USER
with requests.Session() as session:
# Login as user
data = {'method': 'login', 'params': {'username': user}}
response = session.post(URI, json=data).json()
if not response['success']:
print('Could not login as user "%s"! (%s)' %
(user, response['result']))
sys.exit()
# Get user playlists
data = {'method': 'getplaylists'}
response = session.post(URI, json=data).json()
if not response['success']:
print('Could not get "%s" playlists! (%s)' %
(user, response['result']))
sys.exit()
# Save to CSV
playlists = response['result']
if not playlists:
print('No playlists found for user %s!' % user)
sys.exit()
path = './pysharkbackup_%s' % user
if not os.path.exists(path):
os.makedirs(path)
for p in playlists:
plid = p['id']
name = p['n']
data = {'method': 'getPlaylistSongs', 'params': {'playlistID': plid}}
response = session.post(URI, json=data).json()
if not response['success']:
print('Could not get "%s" songs! (%s)' %
(name, response['result']))
continue
playlist = response['result']
f = csv.writer(open(path + '/%s.csv' % name, 'w'))
f.writerow(['Artist', 'Album', 'Name'])
for song in playlist:
f.writerow([song['Artist'], song['Album'], song['Name']])
``` |
How to emit websocket message from outside a websocket endpoint? | 30,124,701 | 2 | 2015-05-08T13:09:12Z | 30,125,944 | 8 | 2015-05-08T14:11:50Z | [
"python",
"flask",
"websocket",
"socket.io",
"flask-socketio"
] | I'm building a website using [Flask](http://flask.pocoo.org/) in which I also use Websockets using [Flask-socketIO](https://flask-socketio.readthedocs.org/en/latest/), but there's one thing I don't understand.
I built a chat-functionality. When one user sends a message I use websockets to send that message to the server, after which I emit the message to the other user from within that same call:
```
@socketio.on('newPM', namespace='/test')
@login_required_with_global
def io_newMessage(theJson):
emit('message', {'message': theJson['message']}, room=str(theJson['toUserId']))
```
But let's say that I want to emit a message to a user when a file was saved. This means that I need to emit a message from within the view in which the file is POSTed. So according to the [flask\_socketio docs](https://flask-socketio.readthedocs.org/en/latest/#sending-messages) I can add a namespace in the emit. So I wrote this:
```
@app.route('/doc', methods=['POST'])
@login_required
def postDoc():
saveDocument(request.files['file'], g.user.id)
emit('my response', {'data': 'A NEW FILE WAS POSTED'}, room=current_user.id, namespace='/test')
return jsonify({'id': str(doc.id)})
```
But seeing the stacktrace below there still is a problem with the namespace; werkzeug has an `AttributeError: 'Request' object has no attribute 'namespace'`.
Does anybody know what I'm doing wrong here? Or is this a bug in flask\_socketio? All tips are welcome!
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1820, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1403, in handle_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1817, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1477, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1381, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1475, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1461, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python2.7/dist-packages/flask_login.py", line 758, in decorated_view
return func(*args, **kwargs)
File "/home/vg/app/views.py", line 768, in emitNotificationCount
emit('menuInit', emitJson, room=current_user.id, namespace='/test')
File "/usr/local/lib/python2.7/dist-packages/flask_socketio/__init__.py", line 444, in emit
return request.namespace.emit(event, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/werkzeug/local.py", line 338, in __getattr__
return getattr(self._get_current_object(), name)
AttributeError: 'Request' object has no attribute 'namespace'
``` | Quoting from Miguel Grinberg's response on [an open issue page on the Flask-SocketIO GitHub](https://github.com/miguelgrinberg/Flask-SocketIO/issues/40):
> When you want to emit from a regular route you have to use
> socketio.emit(), only socket handlers have the socketio context
> necessary to call the plain emit().
So as an example:
```
from flask_socketio import SocketIO
app = Flask(__name__)
app.config.from_object('config')
socketio = SocketIO(app)
@app.route('/doc', methods=['POST'])
def postDoc():
saveDocument(request.files['file'], g.user.id)
socketio.emit('my response', {'data': 'A NEW FILE WAS POSTED'}, room=current_user.id)
return jsonify({'id': str(doc.id)})
``` |
Is it Pythonic to use objects wherever possible? | 30,126,475 | 2 | 2015-05-08T14:37:59Z | 30,126,539 | 7 | 2015-05-08T14:41:49Z | [
"python",
"oop"
] | Newbie Python question here - I am writing a little utility in Python to do disk space calculations when given the attributes of 2 different files.
Should I create a 'file' class with methods appropriate to the conversion and then create each file as an instance of that class? I'm pretty new to Python, but ok with Perl, and I believe that in Perl (I may be wrong, being self-taught), from the examples that I have seen, that most Perl is not OO.
Background info - These are IBM z/OS (mainframe) data sets, and when given the allocation attributes for a file on a specific disk type and file organisation (it's block size) and then given the allocation parameters for a different disk type & organisation, the space requirements can vary enormously. | *Definition nitpicking preface: Everything in Python is technically an object, even functions and numbers. I'm going to assume you mean classes vs. functions in your question.*
Actually I think one of the great things about Python is that it doesn't embrace classes for absolutely everything as some other languages do (e.g., Java and C#).
It's perfectly acceptable in Python (and the built-in modules do this a lot) to define module level functions rather than encapsulating all logic in objects.
That said, classes do have their place, for example when you perform multiple actions on a single piece of data, and especially when these actions change the data and you want to keep its state encapsulated. |
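As a sketch of what that split looks like in practice (the names and block-size numbers here are invented for illustration, not real z/OS allocation rules): a plain module-level function is enough when there is no state to carry, while a class earns its keep once several computations share the same attributes.

```python
# Module-level function: fine when there's no state to keep between calls.
def blocks_needed(record_count, records_per_block):
    # Ceiling division: how many blocks are needed to hold all the records
    return -(-record_count // records_per_block)

# A class makes sense when several computations share the same data.
class DataSet:
    def __init__(self, record_length, block_size):
        self.record_length = record_length
        self.block_size = block_size

    def records_per_block(self):
        return self.block_size // self.record_length

    def blocks_needed(self, record_count):
        return blocks_needed(record_count, self.records_per_block())
```

The point is only the division of labor: stateless logic as functions, related state plus behavior as a class; neither style is "more Pythonic" in itself.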
Flask-Admin vs Flask-AppBuilder | 30,126,607 | 3 | 2015-05-08T14:44:31Z | 30,292,629 | 7 | 2015-05-17T22:04:04Z | [
"python",
"flask",
"flask-admin"
] | I am new to Flask and have noticed that there are two plugins that enable CRUD views and authorized login, **Flask-Admin** and **Flask-AppBuilder**.
These two features interest me along with nice **Master-Detail** views for my model, where I can see both the rows of the master table and the relevant details on the same screen.
Any idea which one to prefer? I see that Flask-AppBuilder has far more commits in Github, while Flask-Admin many more stars.
How to tell the difference, without spending too much time with the wrong choice? | I am the developer of Flask-AppBuilder, so maybe a strong bias here. I will try to give you my most honest view. I do not know Flask-Admin that much, so i will probably make some mistakes.
Flask-Admin and Flask-AppBuilder:
* Will both give you an admin interface for Flask with bootstrap.
* Will both make their best to get out of your way.
* Will both help you develop Flask more Object Oriented style.
* Will both let you override almost everything on the admin templates.
* Will both support Babel.
* Both inspired on Django-Admin.
Pros for Flask-AppBuilder:
* Has a nicer look and feel (bias? maybe...).
* Security has been taken care of for you, and supports out of the box, database, LDAP, OpenID, Web server integrated (REMOTE\_USER), and in the near future OAuth too. Will let you extend the user model and security views.
* Granular permissions, creates one permission for each web exposed method and action (you have to try it).
* You can easily render Google graphs.
* Smaller project, it's easier to request new features, and get your pull requests merged.
* MasterDetail views and multiple views can be setup easily.
* Backends: supports SQLAlchemy, MongoEngine, GenericInterface (you can integrate your own built-in data; still a bit beta).
Pros for Flask-Admin:
* You have to assemble your own security (models, views, auth etc), it's ready though to integrate nicely with flask-security. This can be a pro or a con depending on what you want.
* Builtin File Admin.
* Bigger project with bigger community.
* Backends: supports SQLAlchemy, GeoAlchemy, MongoEngine, Pewee and PyMongo.
* Better support for MongoEngine (EmbeddedDocument, ListFields etc..).
Overall, I think Flask-Admin makes no assumptions at all, much like Flask: you have to code more, but this will leave you more free. Flask-AppBuilder makes some assumptions (on security), so you will have to code much less, but some things can get in your way if you're building very specific security models.
Hope this helps you and others; I tried my best to keep the bias out.
Best way to install psycopg2 on ubuntu 14.04 | 30,127,224 | 3 | 2015-05-08T15:14:20Z | 30,127,304 | 12 | 2015-05-08T15:18:34Z | [
"python",
"virtualenv"
] | I am having trouble installing a Django app ([Mezzanine](http://mezzanine.jupo.org)) on Ubuntu 14.04. I've installed most necessities using apt-get (except for django-compressor and south -used pip), including psycopg2 for Postgres. However when I go to run python manage.py createdb it gives this error:
```
Error loading psycopg2 module: No module named psycopg2
```
This is the command I'm using to install psycopg2:
```
sudo apt-get install python-psycopg2
```
What am I doing wrong? Should I use pip to install psycopg2. I went to the website and it recommends installing through your OS package manager instead of pip.
I am working in a virtualenv except for when I am installing the psycopg2 elements.... | If you need psycopg2 for a system installed program, then install it with the system package manager. If you need it for a program in a virtualenv, install it in that virtualenv.
```
. env/bin/activate
pip install psycopg2
```
Note that on many distros, the development headers needed for compiling against libraries are not installed by default. For psycopg2 on Ubuntu you'll need the python and postgresql headers.
```
sudo aptitude install python-dev libpq-dev
``` |
how find all groups of subsets of set A? Set partitions in Python | 30,130,053 | 3 | 2015-05-08T18:00:26Z | 30,133,157 | 7 | 2015-05-08T21:30:50Z | [
"python",
"algorithm",
"python-3.x",
"set",
"subset"
] | I want to find an algorithm that given a set `A` to find all groups of subsets that satisfy the following condition:
> x ⪠y ⪠.... z = A, where x, y, ... z â Group
and
> â x,y â Group: x â A, y â A, x â© y = â
= {}
and
> â x â Group: x != â
Note: I hope to define it well, I'm not good with math symbols
I made the following approach to search groups of two subsets only:
```
from itertools import product, combinations
def my_combos(A):
subsets = []
for i in xrange(1, len(A)):
subsets.append(list(combinations(A,i)))
combos = []
for i in xrange(1, 1+len(subsets)/2):
combos.extend(list(product(subsets[i-1], subsets[-i])))
if not len(A) % 2:
combos.extend(list(combinations(subsets[len(A)/2-1], 2)))
return [combo for combo in combos if not set(combo[0]) & set(combo[1])]
my_combos({1,2,3,4})
```
I get the following output, these are all groups composed of two subsets
```
[
((1,), (2, 3, 4)),
((2,), (1, 3, 4)),
((3,), (1, 2, 4)),
((4,), (1, 2, 3)),
((1, 2), (3, 4)),
((1, 3), (2, 4)),
((1, 4), (2, 3))
]
```
..... but, groups composed of one, three, four subsets ....
**Question:**
How may I find a general solution?
For example, the following expected output:
```
my_combos({1,2,3,4})
[
((1,2,3,4)),
((1,2,3),(4,)),
((1,2,4),(3,)),
((1,3,4),(2,)),
((2,3,4),(1,)),
((1,2),(3,4)),
((1,3),(2,4)),
((1,4),(2,3)),
((1,2),(3,),(4,)),
((1,3),(2,),(4,)),
((1,4),(2,),(3,)),
((1,),(2,),(3,4)),
((1,),(3,),(2,4)),
((1,),(4,),(2,3)),
((1,),(4,),(2,),(3,))
]
``` | Solution:
```
def partitions(A):
if not A:
yield []
else:
a, *R = A
for partition in partitions(R):
yield partition + [[a]]
for i, subset in enumerate(partition):
yield partition[:i] + [subset + [a]] + partition[i+1:]
```
Explanation:
* The empty set only has the empty partition.
* For a non-empty set, take out one element and then for each partition of the remaining elements, add that element as its own subset or add it to one of the partition's subsets.
* Note that a partition is really a set of sets. I only represent it as list of lists because that's faster and because I don't want to use frozensets, which don't print nicely. Tuples are faster and the question asked for them, but I can't stand the commas for single-element tuples.
Test with output:
```
for partition in partitions({1, 2, 3, 4}):
print(partition)
[[4], [3], [2], [1]]
[[4, 1], [3], [2]]
[[4], [3, 1], [2]]
[[4], [3], [2, 1]]
[[4, 2], [3], [1]]
[[4, 2, 1], [3]]
[[4, 2], [3, 1]]
[[4], [3, 2], [1]]
[[4, 1], [3, 2]]
[[4], [3, 2, 1]]
[[4, 3], [2], [1]]
[[4, 3, 1], [2]]
[[4, 3], [2, 1]]
[[4, 3, 2], [1]]
[[4, 3, 2, 1]]
```
Speed test with output (on a relatively weak laptop):
```
from time import time
print('elements partitions seconds')
for n in range(14):
t0 = time()
number = sum(1 for partition in partitions(range(n)))
print('{:5}{:10}{:11.2f}'.format(n, number, time() - t0))
elements partitions seconds
0 1 0.00
1 1 0.00
2 2 0.00
3 5 0.00
4 15 0.00
5 52 0.00
6 203 0.00
7 877 0.00
8 4140 0.06
9 21147 0.07
10 115975 0.36
11 678570 2.20
12 4213597 13.56
13 27644437 87.59
```
I confirmed those partition numbers with the [OEIS page](https://oeis.org/A000110). |
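Those counts are the Bell numbers. As an extra sanity check (my own addition, not part of the timing code above), the Bell triangle recurrence reproduces them without enumerating a single partition:

```python
def bell_numbers(n):
    """First n Bell numbers via the Bell triangle recurrence."""
    bells = [1]
    row = [1]
    for _ in range(n - 1):
        # Each new row starts with the previous row's last element;
        # every following entry adds the entry directly above it.
        new_row = [row[-1]]
        for above in row:
            new_row.append(new_row[-1] + above)
        row = new_row
        bells.append(row[0])
    return bells

print(bell_numbers(10))  # [1, 1, 2, 5, 15, 52, 203, 877, 4140, 21147]
```

The values match the `partitions` counts in the table, element for element.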
datetime to string with series in python pandas | 30,132,282 | 5 | 2015-05-08T20:23:11Z | 30,132,313 | 9 | 2015-05-08T20:25:12Z | [
"python",
"datetime",
"pandas"
] | I need to make this simple thing:
```
dates = p.to_datetime(p.Series(['20010101', '20010331']), format = '%Y%m%d')
dates.str
```
But I get an error. How should I transform from datetime to string?
Thanks in advance | There is no `str` accessor for datetimes and you can't do `dates.astype(str)` either, you can call `apply` and use `datetime.strftime`:
```
In [73]:
dates = pd.to_datetime(pd.Series(['20010101', '20010331']), format = '%Y%m%d')
dates.apply(lambda x: x.strftime('%Y-%m-%d'))
Out[73]:
0 2001-01-01
1 2001-03-31
dtype: object
```
You can change the format of your date strings using whatever you like: <https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior>
**update**
As of version `0.17.0` you can do this using [`dt.strftime`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.strftime.html#pandas.Series.dt.strftime)
```
dates.dt.strftime('%Y-%m-%d')
```
will now work |
How can a pointer be passed between Rust and Python? | 30,133,293 | 9 | 2015-05-08T21:41:49Z | 30,134,107 | 9 | 2015-05-08T23:05:39Z | [
"python",
"rust",
"ctypes"
] | I am experimenting with writing a library in Rust that I can call from Python code. I would like to be able to pass a void pointer back to Python so that I can hold state between calls into Rust. However, I get a segfault in Rust when trying to access the pointer again.
Full code samples and crash report: <https://gist.github.com/robyoung/3644f13a05c95cb1b947>
## The code
```
#![feature(libc)]
#![feature(alloc)]
extern crate libc;
use std::boxed;
pub struct Point {
x: i64,
y: i32,
}
#[no_mangle]
pub extern "C" fn start_state() -> *mut Point {
let point = Box::new(Point{x: 0, y: 10});
let raw = unsafe { boxed::into_raw(point) };
println!("{:?}", raw);
raw
}
#[no_mangle]
pub extern "C" fn continue_state(point: *mut Point) -> i32 {
println!("{:?}", point);
let p = unsafe { Box::from_raw(point) };
println!("{} {}", p.x, p.y);
0
}
```
```
import ctypes
lib = ctypes.cdll.LoadLibrary('target/libpytesttype.so')
lib.start_state.restype = ctypes.c_void_p
pointer = lib.start_state()
print("{:x}".format(pointer))
lib.continue_state(pointer)
```
## The output
```
0xdc24000
10dc24000
0xdc24000
[1] 64006 segmentation fault python src/main.py
```
What am I doing wrong? | [eryksun nailed it](http://stackoverflow.com/questions/30133293/how-can-a-pointer-be-passed-between-rust-and-python#comment48377553_30133293):
> On the Python side, you're missing lib.continue\_state.argtypes = (ctypes.c\_void\_p,). Without defining the parameter as a pointer, ctypes uses the default conversion for a Python integer, which truncates the value to 32-bit, e.g. 0x0dc24000. If you're lucky accessing that address triggers a segfault immediately.
My output (with my own padding) was:
```
0x103424000
103424000
0x 3424000
```
So the `Debug` formatter for pointers should be fine. Not sure why your output differs.
After adding
```
lib.continue_state.argtypes = (ctypes.c_void_p,)
```
The program ran just fine. |
BeautifulSoup invalid syntax in Python 3.4 (after 2to3.py) | 30,140,034 | 3 | 2015-05-09T12:39:41Z | 30,140,054 | 7 | 2015-05-09T12:41:24Z | [
"python",
"python-3.x",
"beautifulsoup",
"python-3.4"
] | I am trying to install Beautiful Soup 4 in Python 3.4. I installed it from the command line, (got the invalid syntax error because I had not converted it), ran the `2to3.py` conversion script to `bs4` and now I get a new invalid syntax error.
```
>>> from bs4 import BeautifulSoup
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
from bs4 import BeautifulSoup
File "C:\Python34\bs4\__init__.py", line 30, in <module>
from .builder import builder_registry, ParserRejectedMarkup
File "C:\Python34\bs4\builder\__init__.py", line 4, in <module>
from bs4.element import (
File "C:\Python34\bs4\element.py", line 1213
print 'Running CSS selector "%s"' % selector
^
SyntaxError: Missing parentheses in call to 'print'
```
Any ideas? | BeautifulSoup 4 does **not** need manual converting to run on Python 3. You are trying to run code only compatible with Python 2 instead; it appears you failed to correctly convert the codebase.
From the [BeautifulSoup 4 homepage](http://www.crummy.com/software/BeautifulSoup/):
> Beautiful Soup 4 works on both Python 2 (2.6+) and Python 3.
The line now throwing the exception *should* read:
```
print('Running CSS selector "%s"' % selector)
```
The codebase does use Python 2 syntax, but the `setup.py` installer converts this for you to compatible Python 3 syntax. Make sure to install the project with `pip`:
```
pip install beautifulsoup4
```
or using the `pip` version bundled with Python 3.4:
```
python3.4 -m pip install beautifulsoup4
```
or using `easy_install`:
```
easy_install beautifulsoup4
```
If you downloaded just the tarball, *at the very least* run
```
python3.4 setup.py install
```
to have the installer correctly convert the codebase for you; the converted code is *copied into your Python setup*. You can discard the downloaded source directory after running the command, see [*How installation works*](https://docs.python.org/3/install/index.html#how-installation-works).
Alternatively, run:
```
python3.4 setup.py build
```
and copy across the `build/lib` directory. Again, do **not** use the original source directory as it is left untouched. |
Sort a list to form the largest possible number | 30,140,796 | 3 | 2015-05-09T13:59:34Z | 30,141,885 | 13 | 2015-05-09T15:40:01Z | [
"python",
"python-2.7"
] | I am trying to write a function that given a list of non negative integers, arranges them such that they form the largest possible number.
For example, given `[50, 2, 1, 9]`, the largest formed number is `95021`.
Here is the code that I have tried to solve the problem:
```
a = [50, 2, 1, 9]
a.sort()
ans = []
for i in range(len(a)-1,-1,-1):
ans.append(a[i])
print ''.join(map(str,ans))
```
However, I get `50921` , as `50` is largest, but it should show `9` first. | In Python 2 you can do this with an appropriate comparison function passed to `sort`.
```
#!/usr/bin/env python
''' Sort a list of non-negative integers so that
if the integers were converted to string, concatenated
and converted back to int, the resulting int is the highest
possible for that list
From http://stackoverflow.com/q/30140796/4014959
Written by PM 2Ring 2015.05.10
Python 2 version
'''
data = [
[50, 2, 1, 9],
[10, 1],
[2, 23, 21],
]
def mycmp(a, b):
a, b = str(a), str(b)
ab, ba = a + b, b + a
if ab == ba:
return 0
if ab < ba:
return -1
return 1
for a in data:
print 'In: ', a
a.sort(cmp=mycmp, reverse=True)
print 'Out:', a
print
```
**output**
```
In: [50, 2, 1, 9]
Out: [9, 50, 2, 1]
In: [10, 1]
Out: [1, 10]
In: [2, 23, 21]
Out: [23, 2, 21]
```
---
In Python 3, `sort` no longer takes a custom comparison function. scpio's answer shows how to use `functools` to convert a comparison function into a key function, but it's not that hard to do "by hand".
```
#!/usr/bin/env python
''' Sort a list of non-negative integers so that
if the integers were converted to string, concatenated
and converted back to int, the resulting int is the highest
possible for that list
From http://stackoverflow.com/q/30140796/4014959
Written by PM 2Ring 2015.05.10
Python 3 compatible version
'''
from __future__ import print_function
class cmpclass(object):
def __init__(self, n):
self.n = str(n)
def __str__(self):
return self.n
def _cmp(self, other):
a, b = self.n, str(other)
ab, ba = a + b, b + a
if ab == ba:
return 0
if ab < ba:
return -1
return 1
def __lt__(self, other): return self._cmp(other) == -1
def __le__(self, other): return self._cmp(other) <= 0
def __eq__(self, other): return self._cmp(other) == 0
def __ne__(self, other): return self._cmp(other) != 0
def __gt__(self, other): return self._cmp(other) == 1
def __ge__(self, other): return self._cmp(other) >= 0
data = [
[50, 2, 1, 9],
[10, 1],
[2, 23, 21],
]
for a in data:
print('In: ', a)
a.sort(key=cmpclass, reverse=True)
print('Out:', a)
print('')
```
**output**
```
In: [50, 2, 1, 9]
Out: [9, 50, 2, 1]
In: [10, 1]
Out: [1, 10]
In: [2, 23, 21]
Out: [23, 2, 21]
```
The previous Python 3 compatible version I posted doesn't actually work on Python 3 :oops:! That's because the `__cmp__` method is no longer supported in Python 3. So I've changed my old `__cmp__` method to `_cmp` and used it to implement all 6 of the [rich comparison methods](https://docs.python.org/3/reference/datamodel.html#object.__lt__).
**Important note**
I have to mention that this comparison function is a bit weird: it's non-transitive, in other words, a>b and b>c doesn't *necessarily* imply a>c. And that means that the results of using it in `.sort()` are *unpredictable*. It does appear to do the right thing for the data I've tested it with, eg, it returns the correct result for all permutations of `[1, 5, 10]`, but I guess it really shouldn't be trusted to do so for all input.
An alternative strategy that's *guaranteed* to work is brute force: generate all permutations of the input list & find the permutation that yields the maximum result. But hopefully there's a more efficient algorithm, since generating all permutations of a large list is rather slow.
---
As Antti Haapala points out in the comments, my old comparison functions were unstable when comparing different numbers that consist of the same sequences of repeating digits, eg 123123 and 123123123. Such sequences should compare equal, my old functions didn't do that. The latest modification addresses that problem.
---
**Update**
It turns out that `mycmp() / _cmp()` actually *is* transitive. It's also stable, now that it handles the `ab == ba` case properly, so it's safe to use with TimSort (or any other sorting algorithm). And it can be shown that it gives the same result as Antti Haapala's `fractionalize()` key function.
In what follows I'll use uppercase letters to represent integers in the list and I'll use the lowercase version of a letter to represent the number of digits in that integer. Eg, `a` is the number of digits in `A`. I'll use `_` as an infix operator to represent digit concatenation. Eg, `A_B` is `int(str(A)+str(B)`; note that `A_B` has `a+b` digits. Arithmetically,
`A_B = A * 10**b + B`.
For the sake of brevity, I'll use `f()` to represent Antti Haapala's `fractionalize()` key function. Note that `f(A) = A / (10**a - 1)`.
Now for some algebra. I'll put it in a code block to keep the formatting simple.
```
Let A_B = B_A
A * 10**b + B = B * 10**a + A
A * 10**b - A = B * 10**a - B
A * (10**b - 1) = B * (10**a - 1)
A / (10**a - 1) = B / (10**b - 1)
f(A) = f(B)
So A_B = B_A if & only if f(A) = f(B)
Similarly,
A_B > B_A if & only if f(A) > f(B)
This proves that using mycmp() / _cmp() as the sort comparison function
is equivalent to using fractionalize() as the sort key function.
Note that
f(A_B) = (A * 10**b + B) / (10**(a+b)-1)
and
f(B_A) = (B * 10**a + A) / (10**(a+b)-1)
So f(A_B) = f(B_A) iff A_B = B_A, and f(A_B) > f(B_A) iff A_B > B_A
Let's see what happens with 3 integers.
f(A), f(B), f(C) are just real numbers, so comparing them is
transitive.
And so if f(A) > f(B) and f(B) > f(C) then f(A) > f(C).
This proves that mycmp() / _cmp() is also transitive.
Clearly, if f(A) > f(B) > f(C) then
A_B > B_A, B_C > C_B, A_C > C_A
Let B_C > C_B
For any A,
A * 10**(b+c) + B_C > A * 10**(b+c) + C_B
So A_B_C > A_C_B
i.e. adding the same integer to the beginning of B_C and C_B preserves
the inequality.
Let A_B > B_A
For any C,
(A_B) * 10**c + C > (B_A) * 10**c + C
So A_B_C > B_A_C,
i.e. adding the same integer to the end of A_B and B_A preserves the
inequality.
Using these results, we can show that
if f(A) > f(B) > f(C) then
A_B_C > A_C_B > C_A_B > C_B_A and
A_B_C > B_A_C > B_C_A > C_B_A.
This covers all 6 permutations of [A, B, C] and shows that A_B_C is the
largest possible integer for that list.
```
A mathematical induction-style argument shows that sorting a list of any
finite length using pairwise comparisons with `mycmp()` / `_cmp()` as the
comparison function or with `fractionalize()` as the key function suffices
to find the permutation that yields the largest possible integer
produced by digit concatenation. The details of this argument will be
left as an exercise for the reader. :) |
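Since the algebra shows the comparison is equivalent to ordering by `f(A) = A / (10**a - 1)`, a Python 3 friendly version can drop the comparison function entirely and use that value as a sort key, with exact `Fraction` arithmetic so repeating-digit ties compare equal rather than suffering float rounding. (The `largest_number` wrapper is my own sketch based on the equivalence derived above.)

```python
from fractions import Fraction

def largest_number(nums):
    # Key: n / (10**digits - 1), i.e. the value of n's digits repeated forever
    key = lambda n: Fraction(n, 10 ** len(str(n)) - 1)
    return ''.join(str(n) for n in sorted(nums, key=key, reverse=True))

print(largest_number([50, 2, 1, 9]))  # 95021
print(largest_number([10, 1]))        # 110
print(largest_number([2, 23, 21]))    # 23221
```

Because it is a plain key function, this works unchanged on Python 3, where `sort(cmp=...)` is gone.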
Computing the correlation coefficient between two multi-dimensional arrays | 30,143,417 | 2 | 2015-05-09T18:12:37Z | 30,143,754 | 7 | 2015-05-09T18:49:00Z | [
"python",
"arrays",
"numpy",
"scipy",
"correlation"
] | I have two arrays that have the shapes `N X T` and `M X T`. I'd like to compute the correlation coefficient across `T` between every possible pair of rows `n` and `m` (from `N` and `M`, respectively).
What's the fastest, most pythonic way to do this? (Looping over `N` and `M` would seem to me to be neither fast nor pythonic.) I'm expecting the answer to involve `numpy` and/or `scipy`. Right now my arrays are `numpy` `array`s, but I'm open to converting them to a different type.
I'm expecting my output to be an array with the shape `N X M`.
N.B. When I say "correlation coefficient," I mean the [Pearson product-moment correlation coefficient](http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient).
Here are some things to note:
* The `numpy` function `correlate` requires input arrays to be one-dimensional.
* The `numpy` function `corrcoef` accepts two-dimensional arrays, but they must have the same shape.
* The `scipy.stats` function `pearsonr` requires input arrays to be one-dimensional. | **Correlation (default 'valid' case) between two 2D arrays:**
You can simply use matrix-multiplication [`np.dot`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) like so -
```
out = np.dot(arr_one,arr_two.T)
```
Correlation with the default `"valid"` case between each pairwise row combinations (row1,row2) of the two input arrays would correspond to multiplication result at each (row1,row2) position.
---
**Row-wise Correlation Coefficient calculation for two 2D arrays:**
```
def corr2_coeff(A,B):
# Rowwise mean of input arrays & subtract from input arrays themeselves
A_mA = A - A.mean(1)[:,None]
B_mB = B - B.mean(1)[:,None]
# Sum of squares across rows
ssA = (A_mA**2).sum(1);
ssB = (B_mB**2).sum(1);
# Finally get corr coeff
return np.dot(A_mA,B_mB.T)/np.sqrt(np.dot(ssA[:,None],ssB[None]))
```
This is based upon this solution to [`How to apply corr2 functions in Multidimentional arrays in MATLAB`](http://stackoverflow.com/a/26526798/3293881)
**Benchmarking**
This section compares runtime performance with the proposed approach against `generate_correlation_map` & loopy `pearsonr` based approach listed in the [other answer.](http://stackoverflow.com/a/30145770/3293881)(taken from the function `test_generate_correlation_map()` without the value correctness verification code at the end of it). Please note the timings for the proposed approach also include a check at the start to check for equal number of columns in the two input arrays, as also done in that other answer. The runtimes are listed next.
Case #1:
```
In [106]: A = np.random.rand(1000,100)
In [107]: B = np.random.rand(1000,100)
In [108]: %timeit corr2_coeff(A,B)
100 loops, best of 3: 15 ms per loop
In [109]: %timeit generate_correlation_map(A, B)
100 loops, best of 3: 19.6 ms per loop
```
Case #2:
```
In [110]: A = np.random.rand(5000,100)
In [111]: B = np.random.rand(5000,100)
In [112]: %timeit corr2_coeff(A,B)
1 loops, best of 3: 368 ms per loop
In [113]: %timeit generate_correlation_map(A, B)
1 loops, best of 3: 493 ms per loop
```
Case #3:
```
In [114]: A = np.random.rand(10000,10)
In [115]: B = np.random.rand(10000,10)
In [116]: %timeit corr2_coeff(A,B)
1 loops, best of 3: 1.29 s per loop
In [117]: %timeit generate_correlation_map(A, B)
1 loops, best of 3: 1.83 s per loop
```
The other loopy `pearsonr`-based approach seemed too slow, but here are the runtimes for one small data size -
```
In [118]: A = np.random.rand(1000,100)
In [119]: B = np.random.rand(1000,100)
In [120]: %timeit corr2_coeff(A,B)
100 loops, best of 3: 15.3 ms per loop
In [121]: %timeit generate_correlation_map(A, B)
100 loops, best of 3: 19.7 ms per loop
In [122]: %timeit pearsonr_based(A,B)
1 loops, best of 3: 33 s per loop
``` |
Difference between bytearray and list | 30,145,490 | 4 | 2015-05-09T22:01:50Z | 30,145,560 | 9 | 2015-05-09T22:10:32Z | [
"python",
"python-2.7"
] | What is the difference between `bytearray` and for example, a `list` or `tuple`?
As the name suggests, `bytearray` must be an `array` that carries `byte` objects.
In Python, it seems that `bytes` and `str` are treated equally
```
>>> bytes
<type 'str'>
```
So, what is the difference?
Also, if you print a `bytearray`, the result is pretty weird
```
>>> v = bytearray([200, 201])
>>> print v
ÈÉ
```
It seems that it transforms the integer in `chr(integer)`, is that right? What is the use of a `bytearray` then? | You are correct in some way: In Python 2, `bytes` is synonymous with the `str` type. This is because originally, there was no `bytes` object; there was only `str` and `unicode` (the latter being for unicode strings, i.e. having multi-byte capabilities). When Python 3 came, they changed the whole string handling and made unicode the default Python 3 `str` type, and they added `bytes` as the type for raw byte sequences (making it equivalent to Python 2's `str` object).
So while in Python 3 you differ between `str` and `bytes`, the corresponding types in Python 2 are `unicode` and `str`.
Now what makes the `bytearray` type interesting is that it's mutable. All string and byte sequences above are immutable, so with every change, you are creating a new object. But you can modify `bytearray` objects, making them interesting for various purposes where you need to modify individual bytes in a sequence.
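A small sketch of that difference (written for Python 3, where the same distinction holds; in Python 2, `bytes` is just `str`):

```python
b = bytearray(b"spam")
b[0] = ord("S")          # in-place modification -- no new object is created
print(b)                 # bytearray(b'Spam')

s = b"spam"
try:
    s[0] = ord("S")      # bytes (and str) objects are immutable
except TypeError as exc:
    print("immutable:", exc)
```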
Plotting 2D Kernel Density Estimation with Python | 30,145,957 | 8 | 2015-05-09T23:04:40Z | 30,146,280 | 14 | 2015-05-09T23:53:02Z | [
"python",
"matplotlib",
"plot",
"kernel",
"seaborn"
] | I would like to plot a 2D kernel density estimation. I find the seaborn package very useful here. However, after searching for a long time, I couldn't figure out how to make the y-axis and x-axis non-transparent. Also, how to show the values of the density on the contour? I would be very appreciated if someone could help me out. Below please see my code and graph. 
```
import numpy as np
import seaborn as sns
import matplotlib.pyplot as pl
Y = np.random.multivariate_normal((0, 0), [[0.8, 0.05], [0.05, 0.7]], 100)
ax = sns.kdeplot(Y, shade = True, cmap = "PuBu")
ax.patch.set_facecolor('white')
ax.collections[0].set_alpha(0)
ax.set_xlabel('$Y_1$', fontsize = 15)
ax.set_ylabel('$Y_0$', fontsize = 15)
pl.xlim(-3, 3)
pl.ylim(-3, 3)
pl.plot([-3, 3], [-3, 3], color = "black", linewidth = 1)
pl.show()
``` | Here is a solution using `scipy` and `matplotlib` only :
```
import numpy as np
import matplotlib.pyplot as pl
import scipy.stats as st
data = np.random.multivariate_normal((0, 0), [[0.8, 0.05], [0.05, 0.7]], 100)
x = data[:, 0]
y = data[:, 1]
xmin, xmax = -3, 3
ymin, ymax = -3, 3
# Peform the kernel density estimate
xx, yy = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
positions = np.vstack([xx.ravel(), yy.ravel()])
values = np.vstack([x, y])
kernel = st.gaussian_kde(values)
f = np.reshape(kernel(positions).T, xx.shape)
fig = pl.figure()
ax = fig.gca()
ax.set_xlim(xmin, xmax)
ax.set_ylim(ymin, ymax)
# Contourf plot
cfset = ax.contourf(xx, yy, f, cmap='Blues')
## Or kernel density estimate plot instead of the contourf plot
#ax.imshow(np.rot90(f), cmap='Blues', extent=[xmin, xmax, ymin, ymax])
# Contour plot
cset = ax.contour(xx, yy, f, colors='k')
# Label plot
ax.clabel(cset, inline=1, fontsize=10)
ax.set_xlabel('Y1')
ax.set_ylabel('Y0')
pl.show()
```
The previous code gives the following result :

which has a non-transparent x-axis, a non-transparent y-axis and values of the density on the contour. Is this the expected result ? |
Why do numbers in a string become "x0n" when a backslash precedes them? | 30,146,891 | 4 | 2015-05-10T01:53:10Z | 30,146,915 | 8 | 2015-05-10T01:59:17Z | [
"python",
"string",
"python-3.x"
] | I was doing a few experiments with escape backslashes in the Python 3.4 shell and noticed something quite strange.
```
>>> string = "\test\test\1\2\3"
>>> string
'\test\test\x01\x02\x03'
>>> string = "5"
>>> string
'5'
>>> string = "5\6\7"
>>> string
'5\x06\x07'
```
As you can see in the above code, I defined a variable string as `"\test\test\1\2\3"`. However, when I entered `string` in the console, instead of printing `"\test\test\1\2\3"`, it printed `"\test\test\x01\x02\x03"`. Why does this occur, and what is it used for? | In Python string literals, the `\` character starts escape sequences. `\n` translates to a newline character, `\t` to a tab, etc. `\xhh` hex sequences let you produce codepoints with hex values instead, `\uhhhh` produce codepoints with 4-digit hex values, and `\Uhhhhhhhh` produce codepoints with 8-digit hex values.
See the [*String and Bytes Literals* documentation](https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals), which contains a table of all the possible escape sequences.
When Python echoes a string object in the interpreter (or you use the [`repr()` function](https://docs.python.org/3/library/functions.html#repr) on a string object), then Python creates a *representation* of the string value. That representation happens to use the exact same Python string literal syntax, to make it easier to debug your values, as you can use the representation to *recreate* the exact same value.
To keep non-printable characters from either causing havoc or not be shown at all, Python uses the same escape sequence syntax to represent those characters. Thus bytes that are not printable are represented using suitable `\xhh` sequences, or if possible, one of the `\c` single letter escapes (so newlines are shown as `\n`).
In your example, you created non-printable bytes using the `\ooo` *octal value* escape sequence syntax. The digits are interpreted as an octal number to create a corresponding codepoint. When echoing that string value back, the default `\xhh` syntax is used to represent the exact same value in hexadecimal:
```
>>> '\20' # Octal for 16
'\x10'
```
while your `\t` became a *tab* character:
```
>>> print('\test')
est
```
Note how there is no letter `t` there; instead, the remaining `est` is indented by whitespace, a horizontal tab.
If you need to include *literal* `\` backslash characters you need to *double* the character:
```
>>> '\\test\\1\\2\\3'
'\\test\\1\\2\\3'
>>> print('\\test\\1\\2\\3')
\test\1\2\3
>>> len('\\test\\1\\2\\3')
11
```
Note that the representation *uses doubled backslashes*! If it didn't, you'd not be able to copy the string and paste it back into Python to recreate the value. Using `print()` to write the value to the terminal as actual characters (and not as a string representation) shows that there are single backslashes there, and taking the length shows we have just 11 characters in the string, not 15.
You can also use a *raw* string literal. That's just a different syntax, the string objects that are created from the syntax are the exact same type, with the same value. It is just a different way of spelling out string values. In a raw string literal, backslashes are just backslashes, as long as they are not the last character in the string; most escape sequences do not work in a raw string literal:
```
>>> r'\test\1\2\3'
'\\test\\1\\2\\3'
```
Last but not least, if you are creating strings that represent filenames on your Windows system, you could also use *forward* slashes; most APIs in Windows don't mind and accept both types of slash as separators in the filename:
```
>>> 'C:/This/is/a/valid/path'
'C:/This/is/a/valid/path'
``` |
Why does assigning to an empty list (e.g. [] = "") raise no error? | 30,147,165 | 100 | 2015-05-10T02:41:15Z | 30,147,177 | 33 | 2015-05-10T02:43:22Z | [
"python",
"string",
"list",
"tuples"
] | In python 3.4, I am typing
```
[] = ""
```
and it works fine, no Exception is raised. Though of course `[]` is not equal to `""` afterwards.
```
[] = ()
```
also works fine.
```
"" = []
```
raises an exception as expected, and
```
() = ""
```
also raises an exception, as expected. So, what's going on?
> ```
> assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)
> ```
>
> If the `target list` is a comma-separated list of targets: **The object must be an iterable with the same number of items as there are targets in the target list, and the items are assigned, from left to right, to the corresponding targets.**
So, when you say
```
[] = ""
```
`""` is an iterable (any valid Python string is an iterable) and it is being unpacked over the elements of the list.
For example,
```
>>> [a, b, c] = "123"
>>> a, b, c
('1', '2', '3')
```
Since you have an empty string, and an empty list, there is nothing to unpack. So, no error.
But, try this
```
>>> [] = "1"
Traceback (most recent call last):
File "<input>", line 1, in <module>
ValueError: too many values to unpack (expected 0)
>>> [a] = ""
Traceback (most recent call last):
File "<input>", line 1, in <module>
ValueError: need more than 0 values to unpack
```
In the `[] = "1"` case, you are trying to unpack the string `"1"` over an empty list of variables. So it complains with "too many values to unpack (expected 0)".
Same way, in `[a] = ""` case, you have an empty string, so nothing to unpack really, but you are unpacking it over one variable, which is, again, not possible. That is why it complains "need more than 0 values to unpack".
Apart from that, as you noticed,
```
>>> [] = ()
```
also throws no error, because `()` is an empty tuple.
```
>>> ()
()
>>> type(())
<class 'tuple'>
```
and when it is unpacked over an empty list, there is nothing to unpack. So no error.
---
But, when you do
```
>>> "" = []
File "<input>", line 1
SyntaxError: can't assign to literal
>>> "" = ()
File "<input>", line 1
SyntaxError: can't assign to literal
```
as the error message says, you are trying to assign to a string literal. Which is not possible. That is why you are getting the errors. It is like saying
```
>>> 1 = "one"
File "<input>", line 1
SyntaxError: can't assign to literal
```
---
**Internals**
Internally, this assignment operation will be translated to [`UNPACK_SEQUENCE`](https://docs.python.org/3/library/dis.html#opcode-UNPACK_SEQUENCE) op code,
```
>>> dis(compile('[] = ""', "string", "exec"))
1 0 LOAD_CONST 0 ('')
3 UNPACK_SEQUENCE 0
              6 LOAD_CONST               1 (None)
              9 RETURN_VALUE
```
Here, since the string is empty, `UNPACK_SEQUENCE` unpacks `0` times. But when you have something like this
```
>>> dis(compile('[a, b, c] = "123"', "string", "exec"))
1 0 LOAD_CONST 0 ('123')
3 UNPACK_SEQUENCE 3
6 STORE_NAME 0 (a)
9 STORE_NAME 1 (b)
12 STORE_NAME 2 (c)
15 LOAD_CONST 1 (None)
18 RETURN_VALUE
```
the sequence `123` is unpacked onto the stack, from right to left. So, the top of the stack would be `1`, the next would be `2`, and the last would be `3`. Then it assigns from the top of the stack to the variables of the left-hand side expression one by one.
---
BTW, in Python, this is how you can do multiple assignments in the same expression. For example,
```
a, b, c, d, e, f = u, v, w, x, y, z
```
this works because the right-hand values are used to construct a tuple, which is then unpacked over the left-hand side values.
```
>>> dis(compile('a, b, c, d, e, f = u, v, w, x, y, z', "string", "exec"))
1 0 LOAD_NAME 0 (u)
3 LOAD_NAME 1 (v)
6 LOAD_NAME 2 (w)
9 LOAD_NAME 3 (x)
12 LOAD_NAME 4 (y)
15 LOAD_NAME 5 (z)
18 BUILD_TUPLE 6
21 UNPACK_SEQUENCE 6
24 STORE_NAME 6 (a)
27 STORE_NAME 7 (b)
30 STORE_NAME 8 (c)
33 STORE_NAME 9 (d)
36 STORE_NAME 10 (e)
39 STORE_NAME 11 (f)
42 LOAD_CONST 0 (None)
45 RETURN_VALUE
```
but the classic swapping technique `a, b = b, a` uses rotation of elements in the top of the stack. If you have only two or three elements then they are treated with special [`ROT_TWO`](https://docs.python.org/3/library/dis.html#opcode-ROT_TWO) and [`ROT_THREE`](https://docs.python.org/3/library/dis.html#opcode-ROT_THREE) instructions instead of constructing the tuple and unpacking.
```
>>> dis(compile('a, b = b, a', "string", "exec"))
1 0 LOAD_NAME 0 (b)
3 LOAD_NAME 1 (a)
6 ROT_TWO
7 STORE_NAME 1 (a)
10 STORE_NAME 0 (b)
13 LOAD_CONST 0 (None)
16 RETURN_VALUE
``` |
Why does assigning to an empty list (e.g. [] = "") raise no error? | 30,147,165 | 100 | 2015-05-10T02:41:15Z | 30,147,179 | 116 | 2015-05-10T02:43:37Z | [
"python",
"string",
"list",
"tuples"
] | In python 3.4, I am typing
```
[] = ""
```
and it works fine, no Exception is raised. Though of course `[]` is not equal to `""` afterwards.
```
[] = ()
```
also works fine.
```
"" = []
```
raises an exception as expected, and
```
() = ""
```
also raises an exception, as expected. So, what's going on?
Python allows you to assign to multiple targets:
```
foo, bar = 1, 2
```
assigns the two values to `foo` and `bar`, respectively. All you need is a *sequence* or *iterable* on the right-hand side, and a list or tuple of names on the left.
When you do:
```
[] = ""
```
you assigned an *empty* sequence (empty strings are sequences still) to an empty list of names.
It is essentially the same thing as doing:
```
[foo, bar, baz] = "abc"
```
where you end up with `foo = "a"`, `bar = "b"` and `baz = "c"`, but with fewer characters.
You cannot, however, assign to a string, so `""` on the left-hand side of an assignment never works and is always a syntax error.
See the [*Assignment statements* documentation](https://docs.python.org/3/reference/simple_stmts.html#assignment-statements):
> An assignment statement evaluates the expression list (remember that this can be a single expression or a comma-separated list, the latter yielding a tuple) and assigns the single resulting object to each of the target lists, from left to right.
and
> Assignment of an object to a target list, **optionally enclosed in parentheses or square brackets**, is recursively defined as follows.
*Emphasis mine*.
That Python doesn't throw a syntax error for the empty list is actually a bit of a bug! The officially documented grammar doesn't allow for an empty target list, and for the empty `()` you do get an error. See [bug 23275](http://bugs.python.org/issue23275); it is considered a harmless bug:
> The starting point is recognizing that this has been around for very long time and is harmless.
Also see [Why is it valid to assign to an empty list but not to an empty tuple?](http://stackoverflow.com/questions/29870019/why-is-it-valid-to-assign-to-an-empty-list-but-not-to-an-empty-tuple) |
scoping and typing rules in python | 30,148,453 | 3 | 2015-05-10T06:35:29Z | 30,148,460 | 9 | 2015-05-10T06:37:26Z | [
"python"
] | I am a beginner at python programming and I have written the following program in an attempt to understand scoping and typing rules.
```
a = 5
print(a)
a = "Hello World"
print(a)
```
I get the following output.
```
5
Hello World
```
I understand that variables are dynamically typed in Python. The interpreter understands that `a` is an integer when the `a = 5` assignment happens.
Why doesn't it give an error when the same variable is assigned a string value?
---
This is part of a much larger difference between variables in Python and whatever language you're probably coming from.
In C++, a variable actually defines a location in memory where values can live. For example, an `int` variable has 4 bytes of memory, and when you write `a = 42`, it copies the number 42 into those 4 bytes.
In Python, a variable just defines a reference to a value that has its own life off somewhere on the heap. The value's storage still has to have a type, of course, but the variable isn't the value's storage.
(If C++ really is your most familiar language, it may help to think of every Python variable as being of type `std::shared_ptr<boost::any>`.)
---
There are some programs where being able to reuse variable names for different types is useful, but there are others where it's more likely to be a bug than something intentional. There are static analyzers like [`mypy`](http://mypy-lang.org/) that you can use to check for such bugs. In most cases, automated type inference is sufficient to figure out what type you wanted a variable to be and make sure you used that type consistently. When it's not, [PEP 484](https://www.python.org/dev/peps/pep-0484/) (which will probably be part of Python 3.5, and is already mostly handled by `mypy`) standardizes a way to help these analyzers by giving them explicit type hints.
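A minimal sketch of what such a type hint looks like (PEP 484 annotations are ignored at runtime; only external checkers like `mypy` act on them):

```python
def greet(name: str) -> str:
    # The annotations say: name should be a str, and so is the return value.
    return "Hello, " + name

print(greet("World"))   # Hello, World

# Python itself won't complain about greet(42) until the "+" fails at runtime;
# a static checker flags the bad argument type before the program ever runs.
```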
---
One last thing: If you're curious how this works under the covers, in Python, every scope's namespace is just a dictionary. When you write `a = 5` at the module-global level, it's just like doing `g['a'] = 5`, where `g` is the module's dictionary. There are a few tricky bits with locals and closures, but basically, this is the key to how variables work. |
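That claim is easy to observe directly: at module level, `globals()` returns the very dictionary that assignments write into.

```python
a = 5
g = globals()
print(g["a"])            # 5

g["a"] = "Hello World"   # rebinding the name through the namespace dict
print(a)                 # Hello World
```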
How do I do exponentiation in python? | 30,148,740 | 3 | 2015-05-10T07:20:26Z | 30,148,765 | 12 | 2015-05-10T07:23:29Z | [
"python",
"math",
"operators"
] | ```
def cube(number):
return number^3
print cube(2)
```
I would expect `cube(2) = 8`, but instead I'm getting `cube(2) = 1`
What am I doing wrong? | `^` is the [xor](http://en.wikipedia.org/wiki/Exclusive_or) operator.
`**` is exponentiation.
`2**3 = 8` |
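So the fix is a one-character change to the original function; the built-in `pow` does the same thing:

```python
def cube(number):
    return number ** 3   # exponentiation, not XOR

print(cube(2))     # 8
print(pow(2, 3))   # 8
print(2 ^ 3)       # 1 -- XOR of binary 10 and 11 is 01
```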
Redis AUTH command in Python | 30,149,493 | 4 | 2015-05-10T08:51:50Z | 30,153,367 | 7 | 2015-05-10T15:47:25Z | [
"python",
"redis"
] | I'm using [redis-py](https://github.com/andymccurdy/redis-py) binding in Python 2 to connect to my Redis server. The server requires a password. I don't know how to `AUTH` after making the connection in Python.
The following code does not work:
```
import redis
r = redis.StrictRedis()
r.auth('pass')
```
It says:
> 'StrictRedis' object has no attribute 'auth'
Also,
```
r = redis.StrictRedis(auth='pass')
```
does not work either. No such keyword argument.
I've used Redis binding in other languages before, and usually the method name coincides with the Redis command. So I would guess `r.auth` will send `AUTH`, but unfortunately it does not have this method.
So what is the standard way of `AUTH`? Also, why call this `StrictRedis`? What does `Strict` mean here? | Thanks to the hints from the comments. I found the answer from <https://redis-py.readthedocs.org/en/latest/>.
It says
```
class redis.StrictRedis(host='localhost', port=6379, db=0, password=None, socket_timeout=None, connection_pool=None, charset='utf-8', errors='strict', unix_socket_path=None)
```
So `AUTH` is in fact `password` passed by keyword argument. |
IOError: [Errno 2] No such file or directory writing in a file in home directory | 30,151,355 | 2 | 2015-05-10T12:24:06Z | 30,151,425 | 9 | 2015-05-10T12:29:26Z | [
"python"
] | I have this code below to store some text in a file ~/.boto that is in home directory.
But Im getting this error:
```
IOError: [Errno 2] No such file or directory: '~/.boto'
```
This is the code:
```
file = open("~/.boto")
file.write("test")
file.close()
You need to use `os.path.expanduser` and open the file for writing with `w`:
```
import os
# with will automatically close your file
with open(os.path.expanduser("~/.boto"),"w") as f:
f.write("test") # write to file
```
**os.path.expanduser(path)**
> On Unix and Windows, return the argument with an initial component of ~ or ~user replaced by that user's home directory.
>
> On Unix, an initial ~ is replaced by the environment variable HOME if it is set; otherwise the current user's home directory is looked up in the password directory through the built-in module pwd. An initial ~user is looked up directly in the password directory.
>
> On Windows, HOME and USERPROFILE will be used if set, otherwise a combination of HOMEPATH and HOMEDRIVE will be used. An initial ~user is handled by stripping the last directory component from the created user path derived above.
>
> If the expansion fails or if the path does not begin with a tilde, the path is returned unchanged.
```
In [17]: open("~/foo.py")
---------------------------------------------------------------------------
IOError Traceback (most recent call last)
<ipython-input-17-e9eb7789ac68> in <module>()
----> 1 open("~/foo.py")
IOError: [Errno 2] No such file or directory: '~/foo.py'
In [18]: open(os.path.expanduser("~/foo.py"))
Out[18]: <open file '/home/padraic/foo.py', mode 'r' at 0x7f452d16e4b0>
```
By default a file is opened only for reading; if you want to open it for writing you need to use `w`, if you want to open it for reading and writing use `r+`, and to append use `a`.
If the file already has content then `w` is going to **overwrite** it; if you are trying to add to the file then use `a`.
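A quick sketch of the mode differences, using a temporary file:

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "mode_demo.txt")

with open(path, "w") as f:    # 'w' truncates any existing content
    f.write("first\n")
with open(path, "a") as f:    # 'a' appends to the end instead
    f.write("second\n")
with open(path) as f:         # default mode 'r' is read-only
    content = f.read()

print(content)                # first and second on two lines
os.remove(path)
```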
json.dumps messes up order | 30,152,688 | 4 | 2015-05-10T14:41:00Z | 30,152,809 | 7 | 2015-05-10T14:51:11Z | [
"python",
"json"
] | I'm working with the [json module](https://docs.python.org/2/library/json.html) creating a `json` file containing entries of the like
```
json.dumps({"fields": { "name": "%s", "city": "%s", "status": "%s", "country": "%s" }})
```
However, in the `json`-file created the fields are in the wrong order
```
{"fields": {"status": "%s", "city": "%s", "name": "%s", "country": "%s"}}
```
which is a problem because the substitutions for the `%s`-strings are now incorrect.
How can I force the `dumps` function to keep the given order? | Like the other answers correctly state, Python dictionaries are *unordered*.
That said, [JSON is also supposed to have *unordered* mappings](http://stackoverflow.com/a/4920304/42973), so in principle it does not make much sense to store ordered dictionaries in JSON. Concretely, this means that upon reading a JSON object, the order of the returned keys can be arbitrary.
A good way of preserving the order of a mapping (like a Python OrderedDict) in JSON is therefore to output an array of (key, value) pairs that you convert back to an ordered mapping upon reading:
```
>>> from collections import OrderedDict
>>> import json
>>> d = OrderedDict([(1, 10), (2, 20)])
>>> print d[2]
20
>>> json_format = json.dumps(d.items())
>>> print json_format # Order maintained
[[1, 10], [2, 20]]
>>> OrderedDict(json.loads(json_format)) # Reading from JSON: works!
OrderedDict([(1, 10), (2, 20)])
>>> _[2] # This works!
20
```
(Note the way the ordered dictionary is constructed from a *list* of (key, value) pairs: `OrderedDict({1: 10, 2: 20})` would not work: its keys are not necessarily ordered as in the dictionary literal, since the literal creates a Python dictionary whose keys are unordered.) |
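If you would rather keep emitting a plain JSON object, `json.loads` also accepts an `object_pairs_hook` argument, which rebuilds an `OrderedDict` directly from the decoded (key, value) pairs:

```python
import json
from collections import OrderedDict

d = OrderedDict([("name", "n"), ("city", "c"), ("status", "s")])
s = json.dumps(d)   # an OrderedDict serializes in insertion order
print(s)            # {"name": "n", "city": "c", "status": "s"}

back = json.loads(s, object_pairs_hook=OrderedDict)
print(list(back.keys()))   # ['name', 'city', 'status']
```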
Django, uwsgi and nginx - Internal Server Error | 30,154,364 | 3 | 2015-05-10T17:21:30Z | 30,155,016 | 7 | 2015-05-10T18:20:39Z | [
"python",
"django",
"nginx",
"uwsgi"
] | I am quite new to python and am working on my first django project. I followed [this](http://uwsgi-docs.readthedocs.org/en/latest/tutorials/Django_and_nginx.html) tutorial. I managed to configure all the things but django app itself. It works if I run django server alone, however it does not work when started by uwsgi.
This is my uwsgi conf:
```
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "api.settings")
application = get_wsgi_application()
```
And error from uwsgi log:
```
--- no python application found, check your startup logs for errors ---
```
So I looked up for startup errors:
```
Traceback (most recent call last):
File "./wsgi.py", line 16, in <module>
application = get_wsgi_application()
File "/var/www/api.partycon.net/virtualpy/local/lib/python2.7/site-packages/django/core/wsgi.py", line 14, in get_wsgi_application
django.setup()
File "/var/www/api.partycon.net/virtualpy/local/lib/python2.7/site-packages/django/__init__.py", line 17, in setup
configure_logging(settings.LOGGING_CONFIG, settings.LOGGING)
File "/var/www/api.partycon.net/virtualpy/local/lib/python2.7/site-packages/django/conf/__init__.py", line 48, in __getattr__
self._setup(name)
File "/var/www/api.partycon.net/virtualpy/local/lib/python2.7/site-packages/django/conf/__init__.py", line 44, in _setup
self._wrapped = Settings(settings_module)
File "/var/www/api.partycon.net/virtualpy/local/lib/python2.7/site-packages/django/conf/__init__.py", line 92, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named api.settings
```
My project's dirtree:
```
.
|-- api
| |-- __init__.py
| |-- __init__.pyc
| |-- media
| | `-- sample-media.jpg
| |-- settings.py
| |-- settings.pyc
| |-- urls.py
| |-- urls.pyc
| |-- wsgi.py
| `-- wsgi.pyc
|-- db.sqlite3
|-- manage.py
|-- static
| `-- admin
| ... ... ... ...
|-- uwsgi_conf.ini
`-- uwsgi_params
```
I hope I provided enough information, however I can tell u more - the problem is that I actually have no idea where to look.
Thanks in advance :) | You are probably lacking a `chdir` line in your `uwsgi_conf.ini`. Or, probably, you have a `chdir` line, but it is wrong.
This is confirmed by your traceback:
```
File "./wsgi.py", line 16, in <module>
```
Here you should see `./api/wsgi.py`, not `./wsgi.py`.
Clearly, the working directory of uWSGI is the `api/` directory, while it should be the parent directory.
In general, your uWSGI configuration file should look like this:
```
[uwsgi]
chdir=/path/to/your/project
module=mysite.wsgi:application
...
```
See also the [Django documentation on uWSGI](https://docs.djangoproject.com/en/1.8/howto/deployment/wsgi/uwsgi/). |
How can I write asyncio coroutines that optionally act as regular functions? | 30,155,138 | 5 | 2015-05-10T18:31:03Z | 30,163,654 | 10 | 2015-05-11T09:10:53Z | [
"python",
"python-3.x",
"python-asyncio"
] | I'm writing a library that I'd like end-users to be able to optionally use as if its methods and functions were not coroutines.
For example, given this function:
```
@asyncio.coroutine
def blah_getter():
return (yield from http_client.get('http://blahblahblah'))
```
An end user who doesn't care to use any asynchronous features in their own code, still has to import asyncio and run this:
```
>>> response = asyncio.get_event_loop().run_until_complete(blah_getter())
```
It would be cool if I could, inside of `blah_getter` determine if I was being called as a coroutine or not and react accordingly.
So something like:
```
@asyncio.coroutine
def blah_getter():
if magically_determine_if_being_yielded_from():
return (yield from http_client.get('http://blahblahblah'))
else:
el = asyncio.get_event_loop()
return el.run_until_complete(http_client.get('http://blahblahblah'))
``` | You need two functions -- asynchronous coroutine and synchronous regular function:
```
@asyncio.coroutine
def async_getter():
    return (yield from http_client.get('http://example.com'))

def sync_getter():
    return asyncio.get_event_loop().run_until_complete(async_getter())
```
`magically_determine_if_being_yielded_from()` is actually `event_loop.is_running()`, but I strongly recommend not mixing sync and async code in the same function.
How can I write asyncio coroutines that optionally act as regular functions? | 30,155,138 | 5 | 2015-05-10T18:31:03Z | 30,171,640 | 7 | 2015-05-11T15:24:05Z | [
"python",
"python-3.x",
"python-asyncio"
] | I'm writing a library that I'd like end-users to be able to optionally use as if its methods and functions were not coroutines.
For example, given this function:
```
@asyncio.coroutine
def blah_getter():
return (yield from http_client.get('http://blahblahblah'))
```
An end user who doesn't care to use any asynchronous features in their own code, still has to import asyncio and run this:
```
>>> response = asyncio.get_event_loop().run_until_complete(blah_getter())
```
It would be cool if I could, inside of `blah_getter` determine if I was being called as a coroutine or not and react accordingly.
So something like:
```
@asyncio.coroutine
def blah_getter():
if magically_determine_if_being_yielded_from():
return (yield from http_client.get('http://blahblahblah'))
else:
el = asyncio.get_event_loop()
return el.run_until_complete(http_client.get('http://blahblahblah'))
``` | I agree with Andrew's answer, I just want to add that if you're dealing with objects, rather than top-level functions, you can use a metaclass to add synchronous versions of your asynchronous methods automatically. See this example:
```
import asyncio
import aiohttp
class SyncAdder(type):
""" A metaclass which adds synchronous version of coroutines.
This metaclass finds all coroutine functions defined on a class
and adds a synchronous version with a '_s' suffix appended to the
original function name.
"""
def __new__(cls, clsname, bases, dct, **kwargs):
new_dct = {}
for name,val in dct.items():
# Make a sync version of all coroutine functions
if asyncio.iscoroutinefunction(val):
meth = cls.sync_maker(name)
syncname = '{}_s'.format(name)
meth.__name__ = syncname
meth.__qualname__ = '{}.{}'.format(clsname, syncname)
new_dct[syncname] = meth
dct.update(new_dct)
return super().__new__(cls, clsname, bases, dct)
@staticmethod
def sync_maker(func):
def sync_func(self, *args, **kwargs):
meth = getattr(self, func)
return asyncio.get_event_loop().run_until_complete(meth(*args, **kwargs))
return sync_func
class Stuff(metaclass=SyncAdder):
@asyncio.coroutine
def getter(self, url):
return (yield from aiohttp.request('GET', url))
```
Usage:
```
>>> import aio, asyncio
>>> aio.Stuff.getter_s
<function Stuff.getter_s at 0x7f90459c2bf8>
>>> aio.Stuff.getter
<function Stuff.getter at 0x7f90459c2b70>
>>> s = aio.Stuff()
>>> s.getter_s('http://example.com')
<ClientResponse(http://example.com) [200 OK]>
<CIMultiDictProxy {'ACCEPT-RANGES': 'bytes', 'CACHE-CONTROL': 'max-age=604800', 'DATE': 'Mon, 11 May 2015 15:13:21 GMT', 'ETAG': '"359670651"', 'EXPIRES': 'Mon, 18 May 2015 15:13:21 GMT', 'SERVER': 'ECS (ewr/15BD)', 'X-CACHE': 'HIT', 'X-EC-CUSTOM-ERROR': '1', 'CONTENT-LENGTH': '1270', 'CONTENT-TYPE': 'text/html', 'LAST-MODIFIED': 'Fri, 09 Aug 2013 23:54:35 GMT', 'VIA': '1.1 xyz.com:80', 'CONNECTION': 'keep-alive'}>
>>> asyncio.get_event_loop().run_until_complete(s.getter('http://example.com'))
<ClientResponse(http://example.com) [200 OK]>
<CIMultiDictProxy {'ACCEPT-RANGES': 'bytes', 'CACHE-CONTROL': 'max-age=604800', 'DATE': 'Mon, 11 May 2015 15:25:09 GMT', 'ETAG': '"359670651"', 'EXPIRES': 'Mon, 18 May 2015 15:25:09 GMT', 'SERVER': 'ECS (ewr/15BD)', 'X-CACHE': 'HIT', 'X-EC-CUSTOM-ERROR': '1', 'CONTENT-LENGTH': '1270', 'CONTENT-TYPE': 'text/html', 'LAST-MODIFIED': 'Fri, 09 Aug 2013 23:54:35 GMT', 'VIA': '1.1 xys.com:80', 'CONNECTION': 'keep-alive'}>
``` |
Python in Browser: How to choose between Brython, PyPy.js, Skulpt and Transcrypt? | 30,155,551 | 18 | 2015-05-10T19:08:35Z | 30,155,677 | 8 | 2015-05-10T19:20:10Z | [
"python",
"browser",
"brython",
"skulpt",
"transcrypt"
] | *EDIT: Please note I'm **NOT** asking for a (subjective) product recommendation. I am asking for **objective** information -- that I can then use to make my own decision.*
I'm very excited to see that it is now possible to code Python inside a browser page. The main candidates appear to be:
<http://www.brython.info/>
<http://www.skulpt.org/>
<http://pypyjs.org/>
<http://transcrypt.org/> <-- Added July 2016
*(If there is another viable candidate I'm missing, please put me right!)*
But how to choose between them?
*(EDIT: Please note, I'm **not** asking for a candidate nomination. I'm seeking information that will allow me to make an educated choice.)*
The only obvious difference I can see is that Skulpt emulates Python 2.x whereas Brython emulates Python 3.x. | <https://brythonista.wordpress.com/2015/03/28/comparing-the-speed-of-cpython-brython-skulpt-and-pypy-js/>
This page benchmarks the three candidates. Brython emerges as a clear winner.
Despite the 'help' explaining that S.O. is not good for this kind of question, it appears that a concise answer is in this case possible.
Maybe people are being too hasty? |
Python in Browser: How to choose between Brython, PyPy.js, Skulpt and Transcrypt? | 30,155,551 | 18 | 2015-05-10T19:08:35Z | 30,517,812 | 9 | 2015-05-28T22:01:19Z | [
"python",
"browser",
"brython",
"skulpt",
"transcrypt"
] | *EDIT: Please note I'm **NOT** asking for a (subjective) product recommendation. I am asking for **objective** information -- that I can then use to make my own decision.*
I'm very excited to see that it is now possible to code Python inside a browser page. The main candidates appear to be:
<http://www.brython.info/>
<http://www.skulpt.org/>
<http://pypyjs.org/>
<http://transcrypt.org/> <-- Added July 2016
*(If there is another viable candidate I'm missing, please put me right!)*
But how to choose between them?
*(EDIT: Please note, I'm **not** asking for a candidate nomination. I'm seeking information that will allow me to make an educated choice.)*
The only obvious difference I can see is that Skulpt emulates Python 2.x whereas Brython emulates Python 3.x. | This might be helpful too:
<http://stromberg.dnsalias.org/~strombrg/pybrowser/python-browser.html>
It compares several Python-in-the-browser technologies. |
Python in Browser: How to choose between Brython, PyPy.js, Skulpt and Transcrypt? | 30,155,551 | 18 | 2015-05-10T19:08:35Z | 38,564,424 | 9 | 2016-07-25T09:43:56Z | [
"python",
"browser",
"brython",
"skulpt",
"transcrypt"
] | *EDIT: Please note I'm **NOT** asking for a (subjective) product recommendation. I am asking for **objective** information -- that I can then use to make my own decision.*
I'm very excited to see that it is now possible to code Python inside a browser page. The main candidates appear to be:
<http://www.brython.info/>
<http://www.skulpt.org/>
<http://pypyjs.org/>
<http://transcrypt.org/> <-- Added July 2016
*(If there is another viable candidate I'm missing, please put me right!)*
But how to choose between them?
*(EDIT: Please note, I'm **not** asking for a candidate nomination. I'm seeking information that will allow me to make an educated choice.)*
The only obvious difference I can see is that Skulpt emulates Python 2.x whereas Brython emulates Python 3.x. | Here's some info on Brython vs Transcrypt (July 2016, since Transcrypt was added as an option on this question by the OP), gleaned by starting off a project with Brython a few months ago and moving to Transcrypt (completed moving last week). I like Brython and Transcrypt and can see uses for both of them.
For people that are new to this, Brython and Transcrypt are both 'transpilers', which means in this case that they take python input and convert it to javascript. Some might just call that a 'compiler'. Both require Python 3 syntax. Brython includes a substantial number of Python standard libraries and some of its own for dealing with web related things, whereas Transcrypt avoids that for the most part and suggests using Javascript libraries instead.
[Brython](http://brython.info/) ([Github](https://github.com/brython-dev/brython)) can do the conversion in the browser. So you write in python and the brython.js engine converts it to javascript on the fly when the page is loaded. This is really convenient, and is much faster than you might think. However, the brython.js engine that you need to include in your pages is about 500Kb. Also, there's the matter of importing standard libraries, which Brython handles by fetching separate .js files with XHR requests. Some libs are already compiled into brython.js, so not every import will pull in new files, but if you use many imports, things can get slow. However, there are ways around this. What i did was to check the network tab in browser dev tools to see what files were being pulled in when the page was loaded, then delete all the files my project wasn't using in a copy of the Brython src folder, and run the script included with Brython (i think it's at Brython/www/scripts/make\_VFS.py) that compiles all of the available libs into one file called py\_VFS.js that you need to also link to from your html. Normally, it will make one huge 2MB+ file, but if you delete the things you aren't using, it can be quite tiny. Doing it this way, means you only need to pull in brython.js, py\_VFS.js and your python code, and no additional XHR requests will be needed.
[Transcrypt](http://transcrypt.org/) ([Github](https://github.com/JdeH/Transcrypt)) on the other hand, is distributed as a [python 3 package](https://pypi.python.org/pypi/Transcrypt) that you can use manually, or hook into your toolchain, to compile python to javascript in advance. So with Transcrypt, you write in python, run transcrypt against the python and it spits out javascript that you can link to in your project. It is also more like a traditional compiler in that it offers some control over the output. For example, you can choose to compile to ES6 or ES5, or ask it to output sourcemaps (which during debugging let the browser take you directly to the corresponding python code, instead of the generated javascript code). Transcrypt's javascript output is pretty terse (or put another way, it's pretty and terse). In my case 150kB of python is converted to 165kB of unminified ES5 javascript. By way of comparison, the Brython version of my project used about 800Kb after conversion.
However, getting the benefits of Transcrypt's terseness requires reading the docs a bit (really just a bit). For example, with Transcrypt, Python's 'truthiness' for data structures like dict, set and list isn't enabled by default, and globally enabling it is discouraged because of potential performance issues related to typechecking. For clarity: under CPython, an empty dict, set or list has truth value False, whereas in Javascript it's considered 'true'. Example:
```
myList = []
if myList:  # falsy in CPython because it's empty; truthy in javascript because it exists
    # do some things.
```
There are at least three ways to address this:
* Use the -t flag when converting python to javascript e.g.: $ transcrypt -t python.py (not recommended, but probably isn't a problem unless you check for truthiness many times in inner loops of performance sensitive code..)
* Use `__pragma__(tconv)` or `__pragma__(notconv)` within your code to tell the transcrypt compiler to switch on automatic conversion to python-like truth values locally.
* Instead of checking for the truth value, avoid the problem altogether by just checking len(myList) > 0... Maybe that will be fine for most situations; it does the job for my light use.
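A tiny sketch of that third option (checking the length). The explicit `len()` test behaves the same under CPython and, as far as I can tell, in Transcrypt's generated JavaScript, so no pragma is needed:

```python
myList = []

# len() is unambiguous in both CPython and the generated javascript,
# so this check needs no truthiness pragma at all.
if len(myList) > 0:
    status = "non-empty"
else:
    status = "empty"
print(status)  # empty
```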
Right, so my project was getting bigger and i wanted to pre-compile for a performance gain but found it hard to do so with Brython (though it's technically possible, an easy way being to use the [online editor](http://brython.info/tests/editor.html?lang=en) and click the javascript button to see the output). I did that and linked to the generated javascript from project.html but it didn't work for some reason. Also, I find it hard to understand error messages from Brython so i didn't know where to start after this step failed. Also, the big size of the outputted code and the size of the brython engine was beginning to bug me. So i decided to have a closer look at Transcrypt, which had at first seemed to be higher grade because i prefer dumbed down instructions that tell me how to get started immediately (these have since been added).
The main thing getting it set up after installing Python3.5 was 1) use venv (it's like a new built-in version of virtualenv that uses less space for each project) to set up a python3.5 project folder (just type: python3.5 -m venv foldername - [workaround for ubuntu with package issues for 3.5](https://gist.github.com/denilsonsa/21e50a357f2d4920091e)). This makes 'foldername' with a bin subfolder among other things. 2) install Transcrypt python package with pip ('foldername/bin/pip install transcrypt') which installs it to foldername/lib/python3.5/site-packages/transcrypt. 3) 'activate' the current terminal if you don't want to have to type the full path to foldername/bin/python3.5 every time. Activate by typing: 'source foldername/bin/activate' 4) begin writing code and compiling it to javascript for testing. Compile from within the folder you write your code in. For example, i used foldername/www/project. So CD into that folder and run: 'transcrypt -b your\_python\_script.py'. That puts the output in a subfolder called `__javascript__`. You can then link to the outputted javascript from your html.
**Main issues moving across**
I have rather simple needs, so your mileage may vary.
* You need to replace brython or python standard libs with javascript libs.
So for example 'import json' is provided by Brython, but under Transcrypt you could use a javascript lib or just use JSON.parse / JSON.stringify directly in your Python code. To include a minified version of a javascript library directly in your python code use this format (note the triple quotes):
```
__pragma__ ('js', '{}', '''
// javascript code
''')
```
* Brython's html specific functions don't work with Transcrypt obviously. Just use the normal javascript ways. Examples: 1) under Brython, you might have referred to a specific HTML tag using 'document['id']', but with Transcrypt you'd use 'document.getElementById('id')' (which is the same way you do it from javascript). 2) You can't delete a node with 'del nodeName' (because that's a brython function). Use something like 'node.parentNode.removeChild(node)'. 3) replace all of brython's DOM functions with the javascript alternatives. e.g. class\_name = className; text = textContent; html = innerHTML; parent = parentNode; children = childNodes etc. I guess if you need something that contains alternatives required by some older browsers then there are javascript libraries for that. 4) Brython's set\_timeout is replaced with javascript's setTimeout. 5) Brython's html tags such as BR() need to be replaced using the normal javascript ways as well as redoing any places you used its `<=` DOM manipulation syntax. Either inject plain text markup as innerHTML or make the elements using javascript syntax and then attach them using normal javascript DOM syntax. I also noticed that for checkboxes brython uses "if checkbox = 'checked':" but Transcrypt is happy with "if checkbox:".
* I finished moving a 2700 line project over last week at which time Transcrypt didn't have support for a few minor things (though they were easy enough to replace with fillers), these were 1) str.lower, str.split (str.split is present, but seems to be the javascript split, which works differently to the python version, the behavior of which i was relying on), 2) round (this seems to be supported in the dev version now) and 3) isinstance didn't work on str, int and float, only on dict, list and set. 4) Another difference from Brython i noticed is that if i pull in a JSON representation of a dict, i need to do so using 'myDict = dict(data)', whereas brython was happy with 'myDict = data'. But that might be related to something in Brython's json.loads, which i replaced directly with JSON.parse. 5) Also, without specifically enabling Transcrypt's operator overloading (using the -o switch for global, or `__pragma__('opov')` for local), you can't do things like set operations using the overloaded format, but need to use the corresponding functions. E.g.
```
a = set([1, 2, 3])
b = set([3, 4, 5])
a.difference(b) # is used instead of a - b
a.union(b) # used instead of a | b
a.intersection(b) # used instead of a & b
a.symmetric_difference(b) # used instead of a ^ b
```
6) Also, you can't iterate dicts by default using 'for i in dict:' without enabling that (cmd line -i or `__pragma__('iconv')`), but you can avoid having to enable it by just using the items() member, e.g.:
```
for key, value in dict.items():
# do things for each key and value..
```
**To summarise**
* I like Brython because it's easy to get going with it and to test your code (just F5). It's closer to true python because most of the standard lib is there. I dislike having to include the transpilation engine in the browser and the large outputted javascript size. If i had to do things over (but still using Brython), i would have used javascript methods to manipulate the DOM from brython (which you can do..), instead of leaning so much on the brython methods because that wasted time moving across to another transpiler when my needs changed.
* I like Transcrypt because the outputted javascript is really 'lean and mean' and because the only thing you load browser side is your generated javascript code which is similar in size to your python code. Also because it supports sourcemaps and because it gives me a measure of control over the outputted javascript. And using it taught me quite a bit about optimization.
Hope that helps someone see which of these might be good for their particular project. |
Virtualenv in source control | 30,156,152 | 5 | 2015-05-10T20:10:09Z | 30,156,162 | 8 | 2015-05-10T20:10:55Z | [
"python",
"virtualenv"
] | Perhaps this is more an opinion-based question, but I was wondering whether the contents of a `virtualenv` should be included in a GitHub repository. Why should it or should it not be included? | No, anything that can be generated should not be included.
Dependencies should be managed with something like pip, and the requirements.txt file can be included.
The only files under source control should be files you absolutely need to get your development environment going. So it can include bootstrapping of some sort, i.e., you can script the creation of the virtual environment, and that would be the first thing you run once you have cloned.
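As a rough sketch of such a bootstrap, using Python 3's built-in `venv` module (the folder name `env` and the `requirements.txt` step are illustrative conventions, not requirements):

```python
# Hypothetical bootstrap script: recreate the environment locally instead of
# committing it. "env" is an illustrative folder name; add it to .gitignore.
import os
import subprocess
import sys

# --without-pip keeps this sketch self-contained; in a real project drop it
# and then run: env/bin/pip install -r requirements.txt
subprocess.check_call([sys.executable, "-m", "venv", "--without-pip", "env"])
created = os.path.isdir("env")
print(created)  # True
```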
Also consider that your virtual environment contains binary files. You absolutely do not want those in your repository.
As pointed out by @JeremyBank below, your virtual environment can also differ from system to system meaning that your virtual environment will not be portable. |
matplotlib installation issue python 3 | 30,156,965 | 5 | 2015-05-10T21:41:45Z | 31,403,604 | 7 | 2015-07-14T10:06:24Z | [
"python",
"ubuntu",
"matplotlib",
"install"
I am trying to install matplotlib on ubuntu 14.04 within pycharm and get the following error:
> TypeError: unorderable types: str() < int()
ubuntu 14.04 64bits
pycharm running python 3
The traceback is:
```
DEPRECATION: --no-install, --no-download, --build, and --no-clean are deprecated.
Downloading/unpacking matplotlib
  Running setup.py (path:/tmp/pycharm-packaging7.tmp/matplotlib/setup.py) egg_info for package matplotlib
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "/tmp/pycharm-packaging7.tmp/matplotlib/setup.py", line 155, in <module>
result = package.check()
File "/tmp/pycharm-packaging7.tmp/matplotlib/setupext.py", line 961, in check
min_version='2.3', version=version)
File "/tmp/pycharm-packaging7.tmp/matplotlib/setupext.py", line 445, in _check_for_pkg_config
if (not is_min_version(version, min_version)):
File "/tmp/pycharm-packaging7.tmp/matplotlib/setupext.py", line 173, in is_min_version
return found_version >= expected_version
File "/usr/lib/python3.4/distutils/version.py", line 76, in __ge__
c = self._cmp(other)
File "/usr/lib/python3.4/distutils/version.py", line 343, in _cmp
if self.version < other.version:
TypeError: unorderable types: str() < int()
============================================================================
Edit setup.cfg to change the build options
BUILDING MATPLOTLIB
matplotlib: yes [1.4.3]
python: yes [3.4.0 (default, Apr 11 2014, 13:05:11) [GCC
4.8.2]]
platform: yes [linux]
REQUIRED DEPENDENCIES AND EXTENSIONS
numpy: yes [version 1.9.2]
six: yes [using six version 1.5.2]
dateutil: yes [using dateutil version 2.4.2]
pytz: yes [using pytz version 2015.2]
tornado: yes [using tornado version 4.1]
pyparsing: yes [using pyparsing version 2.0.1]
pycxx: yes [Official versions of PyCXX are not compatible
with matplotlib on Python 3.x, since they lack
support for the buffer object. Using local copy]
libagg: yes [pkg-config information for 'libagg' could not
be found. Using local copy.]
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "/tmp/pycharm-packaging7.tmp/matplotlib/setup.py", line 155, in <module>
result = package.check()
File "/tmp/pycharm-packaging7.tmp/matplotlib/setupext.py", line 961, in check
min_version='2.3', version=version)
File "/tmp/pycharm-packaging7.tmp/matplotlib/setupext.py", line 445, in _check_for_pkg_config
if (not is_min_version(version, min_version)):
File "/tmp/pycharm-packaging7.tmp/matplotlib/setupext.py", line 173, in is_min_version
return found_version >= expected_version
File "/usr/lib/python3.4/distutils/version.py", line 76, in __ge__
c = self._cmp(other)
File "/usr/lib/python3.4/distutils/version.py", line 343, in _cmp
if self.version < other.version:
TypeError: unorderable types: str() < int()
============================================================================
Edit setup.cfg to change the build options
```
Can someone please advise?
Thanks | I was getting the same error and was able to solve it by installing libfreetype6-dev and libpng12-dev.
`sudo apt-get install libfreetype6-dev libpng12-dev`
Thanks to @koukouviou for pointing the upstream bug report, which considerably sped up the solution finding process. |
Does python os.fork uses the same python interpreter? | 30,157,895 | 7 | 2015-05-11T00:01:32Z | 30,157,929 | 8 | 2015-05-11T00:06:56Z | [
"python",
"multiprocessing"
] | I understand that threads in Python use the same instance of Python interpreter. My question is it the same with process created by `os.fork`? Or does each process created by `os.fork` has its own interpreter? | `os.fork()` is equivalent to the `fork()` syscall in many UNIC(es). So **yes** your sub-process(es) will be separate from the parent and have a different interpreter (*as such*).
[`man fork`](http://linux.die.net/man/2/fork):
> FORK(2)
>
> NAME
> fork - create a child process
>
> SYNOPSIS
> `#include <unistd.h>`
>
> ```
> pid_t fork(void);
> ```
>
> DESCRIPTION
> fork() creates a new process by duplicating the calling process. The new process, referred to as the child,
> is an exact duplicate of the calling process, referred to as the parent, except for the following points:
[`pydoc os.fork()`](https://docs.python.org/2/library/os.html#os.fork):
> `os.fork()` Fork a child process. Return 0 in the child and the
> child's process id in the parent. If an error occurs OSError is
> raised.
>
> Note that some platforms including FreeBSD <= 6.3, Cygwin and OS/2 EMX
> have known issues when using fork() from a thread.
See also: Martin Konecny's response as to the why's and advantages of "forking" :)
For brevity, other approaches to concurrency which don't involve a separate process (and therefore a separate Python interpreter) include:
* Green or Lightweight threads; ala [greenlet](https://pypi.python.org/pypi/greenlet)
* Coroutines ala Python generators and the new Python 3+ [`yield from`](https://www.python.org/dev/peps/pep-0380/)
* Async I/O ala [asyncio](https://docs.python.org/3/library/asyncio.html), [Twisted](https://pypi.python.org/pypi/Twisted), [circuits](https://pypi.python.org/pypi/circuits), etc. |
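For instance, a minimal sketch of the generator/coroutine style mentioned above, using PEP 380's `yield from`, which involves no extra process or interpreter at all:

```python
def inner():
    yield 1
    yield 2

def outer():
    yield 0
    yield from inner()  # delegate to the sub-generator (Python 3.3+)

result = list(outer())
print(result)  # [0, 1, 2]
```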
Does python os.fork uses the same python interpreter? | 30,157,895 | 7 | 2015-05-11T00:01:32Z | 30,157,933 | 9 | 2015-05-11T00:07:39Z | [
"python",
"multiprocessing"
] | I understand that threads in Python use the same instance of Python interpreter. My question is it the same with process created by `os.fork`? Or does each process created by `os.fork` has its own interpreter? | Whenever you fork, the entire Python process is duplicated in memory (*including* the Python interpreter, your code and any libraries, current stack etc.) to create a second process - one reason why forking a process is much more expensive than creating a thread.
This creates a **new copy** of the python interpreter.
One advantage of having two python interpreters running is that you now have two GIL's (Global Interpreter Locks), and therefore can have true multi-processing on a multi-core system.
Threads in one process share the same GIL, meaning only one runs at a given moment, giving only the illusion of parallelism. |
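A small sketch (POSIX-only) showing that the forked child works on its own copy of the duplicated memory, so mutations in the child never reach the parent:

```python
import os

value = "parent"
pid = os.fork()          # duplicate the whole process, interpreter included
if pid == 0:
    value = "child"      # rebinding only changes the child's copy
    os._exit(0)          # leave the child without running cleanup handlers
os.waitpid(pid, 0)       # reap the child in the parent
print(value)             # parent
```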
Does python os.fork uses the same python interpreter? | 30,157,895 | 7 | 2015-05-11T00:01:32Z | 30,158,020 | 9 | 2015-05-11T00:21:05Z | [
"python",
"multiprocessing"
] | I understand that threads in Python use the same instance of Python interpreter. My question is it the same with process created by `os.fork`? Or does each process created by `os.fork` has its own interpreter? | While `fork` does indeed create a copy of the current Python interpreter rather than running with the same one, it usually isn't what you want, at least not on its own. Among other problems:
* There can be problems forking multi-threaded processes on some platforms. And some libraries (most famously Apple's Cocoa/CoreFoundation) may start threads for you in the background, or use thread-local APIs even though you've only got one thread, etc., without your knowledge.
* Some libraries assume that every process will be initialized properly, but if you `fork` after initialization that isn't true. Most infamously, if you let `ssl` seed its PRNG in the main process, then fork, you now have potentially predictable random numbers, which is a big hole in your security.
* Open file descriptors are inherited (as dups) by the children, with details that vary in annoying ways between platforms.
* POSIX only requires platforms to implement a very specific set of syscalls between a `fork` and an `exec`. If you never call `exec`, you can only use those syscalls. Which basically means you can't do *anything* portably.
* Anything to do with signals is *especially* annoying and nonportable after `fork`.
See [POSIX `fork`](http://pubs.opengroup.org/onlinepubs/009695399/functions/fork.html) or your platform's manpage for details on these issues.
The right answer is almost always to use [`multiprocessing`](https://docs.python.org/3/library/multiprocessing.html), or [`concurrent.futures`](https://docs.python.org/3/library/concurrent.futures.html) (which wraps up `multiprocessing`), or a similar third-party library.
With 3.4+, you can even specify a [start method](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods). The `fork` method basically just calls `fork`. The `forkserver` method runs a single "clean" process (no threads, signal handlers, SSL initialization, etc.) and forks off new children from that. The `spawn` method calls `fork` then `exec`, or an equivalent like `posix_spawn`, to get you a brand-new interpreter instead of a copy. So you can start off with `fork`, but then if there are any problems, switch to `forkserver` or `spawn` and nothing else in your code has to change. Which is pretty nice.
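A minimal sketch of that advice with `multiprocessing` on 3.4+, explicitly requesting the `spawn` start method so each worker gets a brand-new interpreter (the `__main__` guard is required because spawned children re-import the module):

```python
import multiprocessing as mp

def square(x):
    return x * x

results = None
if __name__ == "__main__":
    ctx = mp.get_context("spawn")   # fresh interpreter per worker (3.4+)
    with ctx.Pool(2) as pool:
        results = pool.map(square, [1, 2, 3])
    print(results)  # [1, 4, 9]
```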
Removing backslashes from string | 30,158,002 | 9 | 2015-05-11T00:18:29Z | 30,158,037 | 11 | 2015-05-11T00:23:10Z | [
"python"
] | I answered a question earlier where the OP asked how he can remove backslashes from a string. This is how the backslashes looked like in the OP's string:
```
"I don\'t know why I don\'t have the right answer"
```
This was my answer:
```
a = "I don\'t know why I don\'t have the right answer"
b = a.strip("/")
print b
```
This removed the backslashes from the string but my answer was down-voted and I received a comment that said "*there are so many things wrong with my answer that it's hard to count*" I totally believe that my answer is probably wrong but I would like to know why so I can learn from it. However, that question was deleted by the author so I couldn't reply to the comment there to ask this question. | Well, there are no slashes, nor backslashes, in the string. The backslashes escape `'`, although they don't have to because the string is delimited with `""`.
```
print("I don\'t know why I don\'t have the right answer")
print("I don't know why I don't have the right answer")
```
Produces:
```
I don't know why I don't have the right answer
I don't know why I don't have the right answer
```
Moreover, you are using wrong character and `strip` only removes characters from the ends of the string:
```
Python 2.7.9 (default, Mar 1 2015, 12:57:24)
>>> print("///I don't know why ///I don't have the right answer///".strip("/"))
I don't know why ///I don't have the right answer
```
To put a backslash into a string you need to escape it too (or use raw string literals).
```
>>> print("\\I don't know why ///I don't have the right answer\\".strip("/"))
\I don't know why ///I don't have the right answer\
```
As you can see even though the backslashes were at the beginning and the end of the string they didn't get removed.
Finally, to answer the original question. One way is to use [`replace`](http://stackoverflow.com/questions/3559559/how-to-delete-a-character-from-a-string-using-python) method on string:
```
>>> print("\\I don't know why \\\I don't have the right answer\\".replace("\\",""))
I don't know why I don't have the right answer
```
Also, props for reaching out for a good answer after you screwed your own one =). |
Removing backslashes from string | 30,158,002 | 9 | 2015-05-11T00:18:29Z | 30,158,063 | 15 | 2015-05-11T00:28:43Z | [
"python"
] | I answered a question earlier where the OP asked how he can remove backslashes from a string. This is how the backslashes looked like in the OP's string:
```
"I don\'t know why I don\'t have the right answer"
```
This was my answer:
```
a = "I don\'t know why I don\'t have the right answer"
b = a.strip("/")
print b
```
This removed the backslashes from the string but my answer was down-voted and I received a comment that said "*there are so many things wrong with my answer that it's hard to count*" I totally believe that my answer is probably wrong but I would like to know why so I can learn from it. However, that question was deleted by the author so I couldn't reply to the comment there to ask this question. | ```
a = "I don\'t know why I don\'t have the right answer"
b = a.strip("/")
print b
```
1. Slash (`/`) and backslash (`\`) are not the same character. There are no slashes anywhere in the string, so what you're doing will have no effect.
2. Even if you used a backslash, there are no backslashes in `a` anyway; `\'` in a non-raw string literal is just a `'` character.
3. [`strip`](https://docs.python.org/2/library/stdtypes.html#str.strip) only removes "the leading and trailing characters". Since you're trying to remove a character in the middle of the string, it won't help.
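To make point 3 concrete with an illustrative string: `strip` only touches the ends, while `replace` works anywhere in the string:

```python
s = "xxhelloxworldxx"
stripped = s.strip("x")        # only leading/trailing characters go
replaced = s.replace("x", "")  # every occurrence goes
print(stripped)   # helloxworld
print(replaced)   # helloworld
```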
And maybe some meta-problems:
1. The OP's example string actually had just backslashes, not backslash-escaped apostrophes, even though his question title said the latter. So you were solving a different problem. Which may be more the OP's fault than yours, but it's still not a good answer.
2. The OP's code already did what you were trying to do. He obviously had some *other* problem that he couldn't explain. (Given that he stopped answering comments and deleted his question, my first bet would be that it was a silly typo somewhere…) So, even if you wrote your code correctly, it wouldn't have been that helpful.
At any rate, that's not too many to count.
Then again, the comment didn't say it was too many to count, just hard to count. Some people have trouble counting to ~~four~~ three. Even [Kings of the Britons have to be reminded by their clerics how to do it](http://en.wikiquote.org/wiki/Monty_Python_and_the_Holy_Grail).
display an octal value as its string representation | 30,160,270 | 2 | 2015-05-11T05:34:14Z | 30,160,301 | 13 | 2015-05-11T05:36:56Z | [
"python",
"format",
"octal"
] | I've got a problem when convert an octal to string
```
p = 01212
k = str(p)
print k
```
The result is `650` but I need `01212`. How can I do this? Thanks in advance | Your number `p` is the actual *value* rather than the *representation* of that value. So it's actually `65010`, `12128` and `28a16`, *all at the same time.*
If you want to see it as octal, just use:
```
print oct(p)
```
as per the following transcript:
```
>>> p = 01212
>>> print p
650
>>> print oct(p)
01212
```
That's for Python 2 (which you appear to be using since you use the `0NNN` variant of the octal literal rather than `0oNNN`).
Python 3 has a slightly different representation:
```
>>> p = 0o1212
>>> print (p)
650
>>> print (oct(p))
0o1212
``` |
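In Python 3 you can also render the octal form through the `format` built-in or the format-spec mini-language, with or without the prefix (a small sketch):

```python
p = 0o1212
with_prefix = oct(p)              # '0o1212'
bare_digits = format(p, 'o')      # '1212' -- just the octal digits
hash_prefix = '{:#o}'.format(p)   # '0o1212' -- '#' adds the prefix
print(with_prefix, bare_digits, hash_prefix)
```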
How to move files between two Amazon S3 Buckets using boto? | 30,161,700 | 3 | 2015-05-11T07:17:31Z | 30,162,265 | 7 | 2015-05-11T07:52:17Z | [
"python",
"amazon-s3",
"boto"
I have to move files between one bucket and another with the Python Boto API. (I need it to "Cut" the file from the first Bucket and "Paste" it in the second one.)
What is the best way to do that?
\*\* Note: Does it matter if I have two different ACCESS KEYS and SECRET KEYS? | I think the boto S3 documentation answers your question.
<https://github.com/boto/boto/blob/develop/docs/source/s3_tut.rst>
Moving files from one bucket to another via boto is effectively a copy of the keys from source to destination followed by removing the key from the source.
You can get access to the buckets:
```
import boto
c = boto.connect_s3()
src = c.get_bucket('my_source_bucket')
dst = c.get_bucket('my_destination_bucket')
```
and iterate the keys:
```
for k in src.list():
    # copy each key to the destination bucket
    # (copy_key takes the *name* of the source bucket, not the bucket object)
    dst.copy_key(k.name, src.name, k.name)
    # then delete the source key
    k.delete()
```
See also: [Is it possible to copy all files from one S3 bucket to another with s3cmd?](http://stackoverflow.com/questions/5194552/is-it-possible-to-copy-all-files-from-one-s3-bucket-to-another-with-s3cmd) |
Is there a faster way to clean out control characters in a file? | 30,169,613 | 13 | 2015-05-11T13:56:41Z | 30,347,354 | 7 | 2015-05-20T10:44:24Z | [
"python",
"regex",
"unicode",
"io",
"control-characters"
] | [Previously](http://www.uni-koeln.de/~mzampier/papers/bucc2014.pdf), I had been cleaning out data using the [code snippet](http://pastebin.com/1aR1ivaR) below
```
import unicodedata, re, io
all_chars = (unichr(i) for i in xrange(0x110000))
control_chars = ''.join(c for c in all_chars if unicodedata.category(c)[0] == 'C')
cc_re = re.compile('[%s]' % re.escape(control_chars))
def rm_control_chars(s): # see http://www.unicode.org/reports/tr44/#General_Category_Values
return cc_re.sub('', s)
cleanfile = []
with io.open('filename.txt', 'r', encoding='utf8') as fin:
for line in fin:
line =rm_control_chars(line)
cleanfile.append(line)
```
There are newline characters in the file that i want to keep.
The following records the time taken for `cc_re.sub('', s)` to substitute the first few lines (1st column is the time taken and 2nd column is `len(s)`):
```
0.275146961212 251
0.672796010971 614
0.178567171097 163
0.200030088425 180
0.236430883408 215
0.343492984772 313
0.317672967911 290
0.160616159439 142
0.0732028484344 65
0.533437013626 468
0.260229110718 236
0.231380939484 204
0.197766065598 181
0.283867120743 258
0.229172945023 208
```
As @ashwinichaudhary suggested, using `s.translate(dict.fromkeys(control_chars))` and the same time taken log outputs:
```
0.464188098907 252
0.366552114487 615
0.407374858856 164
0.322507858276 181
0.35142993927 216
0.319973945618 314
0.324357032776 291
0.371646165848 143
0.354818105698 66
0.351796150208 469
0.388131856918 237
0.374715805054 205
0.363368988037 182
0.425950050354 259
0.382766962051 209
```
But the code is really slow for my 1GB of text. Is there any other way to clean out control characters? | I found a solution working character by character; I benchmarked it using a 100K file:
```
import unicodedata, re, io
from time import time
# This is to generate randomly a file to test the script
from string import lowercase
from random import random
all_chars = (unichr(i) for i in xrange(0x110000))
control_chars = [c for c in all_chars if unicodedata.category(c)[0] == 'C']
chars = (list(u'%s' % lowercase) * 115117) + control_chars
fnam = 'filename.txt'
out=io.open(fnam, 'w')
for line in range(1000000):
out.write(u''.join(chars[int(random()*len(chars))] for _ in range(600)) + u'\n')
out.close()
# version proposed by alvas
all_chars = (unichr(i) for i in xrange(0x110000))
control_chars = ''.join(c for c in all_chars if unicodedata.category(c)[0] == 'C')
cc_re = re.compile('[%s]' % re.escape(control_chars))
def rm_control_chars(s):
return cc_re.sub('', s)
t0 = time()
cleanfile = []
with io.open(fnam, 'r', encoding='utf8') as fin:
for line in fin:
line =rm_control_chars(line)
cleanfile.append(line)
out=io.open(fnam + '_out1.txt', 'w')
out.write(''.join(cleanfile))
out.close()
print time() - t0
# using a set and checking character by character
all_chars = (unichr(i) for i in xrange(0x110000))
control_chars = set(c for c in all_chars if unicodedata.category(c)[0] == 'C')
def rm_control_chars_1(s):
return ''.join(c for c in s if not c in control_chars)
t0 = time()
cleanfile = []
with io.open(fnam, 'r', encoding='utf8') as fin:
for line in fin:
line = rm_control_chars_1(line)
cleanfile.append(line)
out=io.open(fnam + '_out2.txt', 'w')
out.write(''.join(cleanfile))
out.close()
print time() - t0
```
the output is:
```
114.625444174
0.0149750709534
```
I tried it on a 1 GB file (only the second version) and it took 186 s.
I also wrote this other version of the same script, slightly faster (176s), and more memory efficient (for very large files not fitting in RAM):
```
t0 = time()
out=io.open(fnam + '_out5.txt', 'w')
with io.open(fnam, 'r', encoding='utf8') as fin:
for line in fin:
out.write(rm_control_chars_1(line))
out.close()
print time() - t0
``` |
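For reference, here is how the winning set-based approach might read on Python 3, where `unichr` and `xrange` no longer exist. The `keep` parameter is my addition, to preserve the newlines the question wants to keep:

```
import unicodedata

# All code points whose Unicode general category starts with "C"
# (control, format, surrogate, private use, unassigned).
control_chars = {c for c in map(chr, range(0x110000))
                 if unicodedata.category(c)[0] == 'C'}

def rm_control_chars(s, keep='\n'):
    # Set membership is O(1), which is what made the character-by-character
    # version so much faster than the regex substitution.
    return ''.join(c for c in s if c not in control_chars or c in keep)
```

Building the set takes a second or two, but it only happens once; each line is then cleaned in a single pass.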
How to run Spyder in virtual environment? | 30,170,468 | 9 | 2015-05-11T14:33:47Z | 30,170,469 | 10 | 2015-05-11T14:33:47Z | [
"python",
"python-3.x",
"virtualenv",
"anaconda",
"spyder"
] | I have been using Spyder, installed with the Anaconda distribution, which uses Python 2.7 by default. Currently I need to set up a development virtual environment with Python 3.4.
Top two suggestions after research online are:
1. to set up the virtual environment first and then change the preferences of Spyder, e.g. [here](http://stackoverflow.com/questions/28190500/virtualenv-ipython-in-spyder-not-working);
2. to install all Spyder dependencies, like PyQt4, in the virtual environment itself, e.g. [here](http://stackoverflow.com/q/28190500/3052217);
Both recommendations are cumbersome and do not look like smart options for development.
Is there a solution that would allow running Spyder with the required Python version automatically after activating the corresponding virtual environment? | There is an option to [create virtual environments in Anaconda](http://conda.pydata.org/docs/faq.html#creating-new-environments) with the required Python version.
```
conda create -n myenv python=3.4 anaconda
```
This will create a virtual environment named `myenv` with Python 3.4.
To activate it:
```
source activate myenv # (in linux)
activate myenv # (in windows - note that you should be in your c:\anaconda2 directory)
```
To check that the current environment has changed:
```
conda info -e
```
And now to run Spyder with Python 3.4 just type:
```
spyder
``` |
Python asyncio task got bad yield | 30,172,821 | 8 | 2015-05-11T16:23:30Z | 30,174,350 | 7 | 2015-05-11T17:50:31Z | [
"python",
"asynchronous",
"python-asyncio"
] | I am confused about how to play around with the `asyncio` module in Python 3.4. I have a `searching` API for a search engine, and want each search request to run either in parallel or asynchronously, so that I don't have to wait for one search to finish before starting another.
Here is my high-level searching API to build some objects with the raw search results. The search engine itself is using some kind of asyncio mechanism, so I won't bother with that.
```
# No asyncio module used here now
class search(object):
...
self.s = some_search_engine()
...
def searching(self, *args, **kwargs):
ret = {}
# do some raw searching according to args and kwargs and build the wrapped results
...
return ret
```
To try to make the requests asynchronous, I wrote the following test case to see how my code can interact with the `asyncio` module.
```
# Here is my testing script
@asyncio.coroutine
def handle(f, *args, **kwargs):
r = yield from f(*args, **kwargs)
return r
s = search()
loop = asyncio.get_event_loop()
loop.run_until_complete(handle(s.searching, arg1, arg2, ...))
loop.close()
```
By running with pytest, it will return a `RuntimeError: Task got bad yield : {results from searching...}`, when it hits the line `r = yield from ...`.
I also tried another way.
```
# same handle as above
def handle(..):
....
s = search()
loop = asyncio.get_event_loop()
tasks = [
asyncio.async(handle(s.searching, arg11, arg12, ...)),
asyncio.async(handle(s.searching, arg21, arg22, ...)),
...
]
loop.run_until_complete(asyncio.wait(tasks))
loop.close()
```
By running this test case with pytest, it passes, but a weird exception from the search engine is raised. It says `Future/Task exception was never retrieved`.
Things I wish to ask:
1. For my 1st try, is that the right way to use `yield from`, by returning the actual result from a function call?
2. I think I need to add some sleep to my 2nd test case to wait for the tasks to finish, but how should I do that? And how can I get the return values of my function calls in my 2nd test case?
3. Is that a good way to implement asyncio with an existing module, by creating an async handler to handle requests?
4. If the answer to question 2 is NO, does every client calls to the class `search` needs to include `loop = get_event_loop()` this kind of stuffs to async the requests? | The problem is that you can't just call existing synchronous code as if it was an `asyncio.coroutine` and get asynchronous behavior. When you call `yield from searching(...)`, you're only going to get asynchronous behavior if `searching` itself is actually an `asyncio.coroutine`, or at least returns an `asyncio.Future`. Right now, `searching` is just a regular synchronous function, so calling `yield from searching(...)` is just going to throw an error, because it doesn't return a `Future` or coroutine.
To get the behavior you want, you'll need to have an asynchronous version of `searching` in addition to a `synchronous` version (or just drop the synchronous version altogether if you don't need it). You have a few options to support both:
1. Rewrite `searching` as an `asyncio.coroutine` so that it uses `asyncio`-compatible calls to do its I/O, rather than blocking I/O. This will make it work in an `asyncio` context, but it means you won't be able to call it directly in a synchronous context anymore. Instead, you'd need to also provide an alternative synchronous `searching` method that starts an `asyncio` event loop and calls `return loop.run_until_complete(self.searching(...))`. See [this question](http://stackoverflow.com/questions/30155138/how-can-i-write-asyncio-coroutines-that-optionally-act-as-regular-functions) for more details on that.
2. Keep your synchronous implementation of `searching`, and provide an alternative asynchronous API that uses [`BaseEventLoop.run_in_executor`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.BaseEventLoop.run_in_executor) to run your `searching` method in a background thread:
```
class search(object):
...
self.s = some_search_engine()
...
def searching(self, *args, **kwargs):
ret = {}
...
return ret
@asyncio.coroutine
def searching_async(self, *args, **kwargs):
loop = kwargs.get('loop', asyncio.get_event_loop())
try:
del kwargs['loop'] # assuming searching doesn't take loop as an arg
except KeyError:
pass
r = yield from loop.run_in_executor(None, self.searching, *args) # Passing None tells asyncio to use the default ThreadPoolExecutor
return r
```
Testing script:
```
s = search()
loop = asyncio.get_event_loop()
loop.run_until_complete(s.searching_async(arg1, arg2, ...))
loop.close()
```
This way, you can keep your synchronous code as is, and at least provide methods that can be used in `asyncio` code without blocking the event loop. It's not as clean a solution as it would be if you actually used asynchronous I/O in your code, but it's better than nothing.
3. Provide two completely separate versions of `searching`, one that uses blocking I/O, and one that's `asyncio`-compatible. This gives ideal implementations for both contexts, but requires twice the work. |
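On modern Python (3.7+), the `run_in_executor` pattern from option 2 can be written with `async`/`await`. This is a standalone sketch using a stand-in blocking function rather than a real search engine:

```
import asyncio
import time

def blocking_search(query):
    # Stand-in for the synchronous searching method (any blocking
    # callable works the same way here).
    time.sleep(0.05)
    return {'query': query}

async def search_async(query):
    loop = asyncio.get_running_loop()
    # None means "use the default ThreadPoolExecutor", as above.
    return await loop.run_in_executor(None, blocking_search, query)

async def main():
    # The two blocking calls now overlap instead of running back to back.
    return await asyncio.gather(search_async('a'), search_async('b'))

results = asyncio.run(main())
print(results)
```

`asyncio.gather` preserves argument order in its result list, so the output is deterministic even though the calls overlap.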
String vs list membership check | 30,175,674 | 10 | 2015-05-11T19:05:43Z | 30,175,729 | 9 | 2015-05-11T19:09:09Z | [
"python",
"string",
"list"
] | So I'm wondering why this:
```
'alpha' in 'alphanumeric'
```
is `True`, but
```
list('alpha') in list('alphanumeric')
```
is `False`.
Why does `x in s` succeed when `x` is a substring of `s`, but `x in l` doesn't when `x` is a sub*list* of `l`? | When you use the `list` function with any iterable, a new list object is created with all the elements from the iterable as individual elements of the list.
In your case, strings are valid Python iterables, so
```
>>> list('alpha')
['a', 'l', 'p', 'h', 'a']
>>> list('alphanumeric')
['a', 'l', 'p', 'h', 'a', 'n', 'u', 'm', 'e', 'r', 'i', 'c']
```
So, you are effectively checking if one list is a sublist of another list.
In Python, only strings support using the `in` operator to check whether one string is part of another; for all other collections, membership tests only match individual elements. Quoting the [documentation](https://docs.python.org/2/reference/expressions.html#in),
> The operators `in` and `not in` test for collection membership. `x in s` evaluates to true if `x` is a member of the collection `s`, and false otherwise. `x not in s` returns the negation of `x in s`. The collection membership test has traditionally been bound to sequences; an object is a member of a collection if the collection is a sequence and contains an element equal to that object. However, it makes sense for many other object types to support membership tests without being a sequence. In particular, dictionaries (for keys) and sets support membership testing.
>
> **For the list and tuple types, `x in y` is true if and only if there exists an index `i` such that `x == y[i]` is true.**
>
> **For the Unicode and string types, `x in y` is true if and only if `x` is a substring of `y`.** An equivalent test is `y.find(x) != -1`. Note, `x` and `y` need not be the same type; consequently, `u'ab'` in `'abc'` will return `True`. Empty strings are always considered to be a substring of any other string, so `""` in `"abc"` will return `True`. |
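If sublist membership is what you actually need, it is easy to write by hand. This is a sketch of a contiguous-subsequence check (the helper name is mine):

```
def is_sublist(needle, haystack):
    # True if `needle` occurs as a contiguous run inside `haystack`,
    # mirroring what substring search does for strings.
    n = len(needle)
    return any(haystack[i:i + n] == needle
               for i in range(len(haystack) - n + 1))
```

For example, `is_sublist(list('alpha'), list('alphanumeric'))` is `True`, matching `'alpha' in 'alphanumeric'`.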
How to pass tuple1 if ... else tuple2 to str.format? | 30,185,316 | 10 | 2015-05-12T08:15:25Z | 30,185,360 | 8 | 2015-05-12T08:17:43Z | [
"python",
"tuples",
"string-formatting"
] | Put simply, why do I get the following error?
```
>>> yes = True
>>> 'no [{0}] yes [{1}]'.format((" ", "x") if yes else ("x", " "))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: tuple index out of range
```
I use python 2.6. | Use the `*` operator, which takes an iterable of parameters and supplies each one as a positional argument to the function:
```
In [3]: 'no [{0}] yes [{1}]'.format(*(" ", "x") if yes else ("x", " "))
Out[3]: 'no [ ] yes [x]'
``` |
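One readability note: the conditional expression is evaluated as a whole before `*` unpacks it, but naming the tuple first makes that precedence explicit. A stylistic sketch, not a requirement:

```
yes = True
# Evaluate the conditional first, then unpack the chosen tuple.
pair = (" ", "x") if yes else ("x", " ")
result = 'no [{0}] yes [{1}]'.format(*pair)
print(result)  # no [ ] yes [x]
```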
How to pass tuple1 if ... else tuple2 to str.format? | 30,185,316 | 10 | 2015-05-12T08:15:25Z | 30,185,365 | 15 | 2015-05-12T08:17:57Z | [
"python",
"tuples",
"string-formatting"
] | Put simply, why do I get the following error?
```
>>> yes = True
>>> 'no [{0}] yes [{1}]'.format((" ", "x") if yes else ("x", " "))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: tuple index out of range
```
I use python 2.6. | â Indexing option:
When accessing argumentsâ items in format string, you should use index to call the value:
```
yes = True
print 'no [{0[0]}] yes [{0[1]}]'.format((" ", "x") if yes else ("x", " "))
```
`{0[0]}` in the format string is equivalent to `(" ", "x")[0]`, indexing into the tuple.
`{0[1]}` in the format string is equivalent to `(" ", "x")[1]`, indexing into the tuple.
---
• `*` operator option:
Or you can use the `*` operator to unpack the argument tuple.
```
yes = True
print 'no [{0}] yes [{1}]'.format(*(" ", "x") if yes else ("x", " "))
```
When the `*` operator is applied, `'no [{0}] yes [{1}]'.format(*(" ", "x") if yes else ("x", " "))` is equivalent to `'no [{0}] yes [{1}]'.format(" ", "x")` when the condition is `True`.
---
• `**` operator option (an extra method for when your variable is a dict):
```
yes = True
print 'no [{no}] yes [{yes}]'.format(**{"no":" ", "yes":"x"} if yes else {"no":"x", "yes":" "})
``` |
Obtain a list containing string elements excluding elements prefixed with any other element from initial list | 30,186,869 | 13 | 2015-05-12T09:30:55Z | 30,188,818 | 8 | 2015-05-12T10:57:40Z | [
"python",
"string",
"list",
"python-2.7",
"filtering"
] | I have some trouble with filtering a list of strings. I found a similar question [here](http://stackoverflow.com/questions/22221878/python-delete-substrings-from-list-of-strings) but it is not what I need.
The input list is:
```
l = ['ab', 'xc', 'abb', 'abed', 'sdfdg', 'abfdsdg', 'xccc']
```
and the expected result is
```
['ab', 'xc', 'sdfdg']
```
The order of the items in the result is not important
The filter function must be fast because the size of the list is big
My current solution is
```
l = ['ab', 'xc', 'abb', 'abed', 'sdfdg', 'abfdsdg', 'xccc']
for i in range(0, len(l) - 1):
for j in range(i + 1, len(l)):
if l[j].startswith(l[i]):
l[j] = l[i]
else:
if l[i].startswith(l[j]):
l[i] = l[j]
print list(set(l))
```
**EDIT**
After multiple tests with big input data (a list of 1,500,000 strings), my best solution for this is:
```
def filter(l):
if l==[]:
return []
l2=[]
l2.append(l[0])
llen = len(l)
k=0
itter = 0
while k<llen:
addkelem = ''
j=0
l2len = len(l2)
while j<l2len:
if (l2[j].startswith(l[k]) and l[k]!= l2[j]):
l2[j]=l[k]
l.remove(l[k])
llen-=1
j-=1
addkelem = ''
continue
if (l[k].startswith(l2[j])):
addkelem = ''
break
elif(l[k] not in l2):
addkelem = l[k]
j+=1
if addkelem != '':
l2.append(addkelem)
addkelem = ''
k+=1
return l2
```
for which the execution time is around 213 seconds
[Sample input data](http://soft2u.ro/out.7z) - each line is a string in the list | You can group the items by first letter and just search the sublists; no string can have another as a prefix unless they share the same first letter:
```
from collections import defaultdict
def find(l):
d = defaultdict(list)
# group by first letter
for ele in l:
d[ele[0]].append(ele)
for val in d.values():
for v in val:
# check each substring in the sublist
if not any(v.startswith(s) and v != s for s in val):
yield v
print(list(find(l)))
['sdfdg', 'xc', 'ab']
```
This correctly filters the words; as you can see from the code below, the reduce function does not (`'tool'` should not be in the output):
```
In [56]: l = ["tool",'ab',"too", 'xc', 'abb',"toot", 'abed',"abel", 'sdfdg', 'abfdsdg', 'xccc',"xcew","xrew"]
In [57]: reduce(r,l)
Out[57]: ['tool', 'ab', 'too', 'xc', 'sdfdg', 'xrew']
In [58]: list(find(l))
Out[58]: ['sdfdg', 'too', 'xc', 'xrew', 'ab']
```
It also does it efficiently:
```
In [59]: l = ["".join(sample(ascii_lowercase, randint(2,25))) for _ in range(5000)]
In [60]: timeit reduce(r,l)
1 loops, best of 3: 2.12 s per loop
In [61]: timeit list(find(l))
1 loops, best of 3: 203 ms per loop
In [66]: %%timeit
..... result = []
....: for element in lst:
....: is_prefixed = False
....: for possible_prefix in lst:
....: if element is not possible_prefix and element.startswith(possible_prefix):
....: is_prefixed = True
....: break
....: if not is_prefixed:
....: result.append(element)
....:
1 loops, best of 3: 4.39 s per loop
In [92]: timeit list(my_filter(l))
1 loops, best of 3: 2.94 s per loop
```
If you know the minimum string length is always > 1 you can optimise further: if the minimum length is `2`, then two words must share at least their first two letters to be grouped together:
```
def find(l):
d = defaultdict(list)
# find shortest length string to use as key length
mn = len(min(l, key=len))
for ele in l:
d[ele[:mn]].append(ele)
for val in d.values():
for v in val:
if not any(v.startswith(s) and v != s for s in val):
yield v
In [84]: timeit list(find(l))
100 loops, best of 3: 14.6 ms per loop
```
Lastly, if you have dupes, you may want to filter out the duplicated words from your list, but you need to keep them for comparison:
```
from collections import defaultdict,Counter
def find(l):
d = defaultdict(list)
mn = len(min(l, key=len))
cn = Counter(l)
for ele in l:
d[ele[:mn]].append(ele)
for val in d.values():
for v in val:
if not any(v.startswith(s) and v != s for s in val):
# make sure v is not a dupe
if cn[v] == 1:
yield v
```
So if speed is important, an implementation using some variation of the code above is going to be significantly faster than your naive approach. There is also more data stored in memory, so you should take that into account.
To save memory we can create a counter for each val/sublist so we only store a single counter dict of words at a time:
```
def find(l):
d = defaultdict(list)
mn = len(min(l, key=len))
for ele in l:
d[ele[:mn]].append(ele)
for val in d.values():
# we only need check each grouping of words for dupes
cn = Counter(val)
for v in val:
if not any(v.startswith(s) and v != s for s in val):
if cn[v] == 1:
yield v
```
Creating a dict on each loop adds 5 ms, so it is still < 20 ms for 5k words.
The reduce method should work if the data is sorted:
```
reduce(r,sorted(l)) # -> ['ab', 'sdfdg', 'too', 'xc', 'xrew']
```
To make the difference in behaviour clear:
```
In [202]: l = ["tool",'ab',"too", 'xc', 'abb',"toot", 'abed',
"abel", 'sdfdg', 'abfdsdg', 'xccc',"xcew","xrew","ab"]
In [203]: list(filter_list(l))
Out[203]: ['ab', 'too', 'xc', 'sdfdg', 'xrew', 'ab']
In [204]: list(find(l))
Out[204]: ['sdfdg', 'too', 'xc', 'xrew', 'ab', 'ab']
In [205]: reduce(r,sorted(l))
Out[205]: ['ab', 'sdfdg', 'too', 'xc', 'xrew']
In [206]: list(find_dupe(l))
Out[206]: ['too', 'xrew', 'xc', 'sdfdg']
In [207]: list(my_filter(l))
Out[207]: ['sdfdg', 'xrew', 'too', 'xc']
In [208]: "ab".startswith("ab")
Out[208]: True
```
So `ab` is repeated twice; using a set or a dict without keeping track of how many times `ab` appeared would lead us to conclude that no other element satisfied the condition `"ab".startswith(other) == True`, which we can see is incorrect.
You can also use itertools.groupby to group based on the min index size:
```
def find_dupe(l):
l.sort()
mn = len(min(l, key=len))
for k, val in groupby(l, key=lambda x: x[:mn]):
val = list(val)
for v in val:
cn = Counter(val)
if not any(v.startswith(s) and v != s for s in val) and cn[v] == 1:
yield v
```
Based on your comments, we can adjust my first code if you don't think `"dd".startswith("dd")` should be `True` for repeated elements:
```
l = ['abbb', 'xc', 'abb', 'abed', 'sdfdg', 'xc','abfdsdg', 'xccc', 'd','dd','sdfdg', 'xc','abfdsdg', 'xccc', 'd','dd']
def find_with_dupe(l):
d = defaultdict(list)
# group by first letter
srt = sorted(set(l))
ind = len(srt[0])
for ele in srt:
d[ele[:ind]].append(ele)
for val in d.values():
for v in val:
# check each substring in the sublist
if not any(v.startswith(s) and v != s for s in val):
yield v
print(list(find_with_dupe(l)))
['abfdsdg', 'abed', 'abb', 'd', 'sdfdg', 'xc']
```
Run on a random sample of text, it takes a fraction of the time your own code does:
```
In [15]: l = open("/home/padraic/Downloads/sample.txt").read().split()
In [16]: timeit list(find(l))
100 loops, best of 3: 19 ms per loop
In [17]: %%timeit
....: l = open("/home/padraic/Downloads/sample.txt").read().split()
....: for i in range(0, len(l) - 1):
....: for j in range(i + 1, len(l)):
....: if l[j].startswith(l[i]):
....: l[j] = l[i]
....: else:
....: if l[i].startswith(l[j]):
....: l[i] = l[j]
....:
1 loops, best of 3: 4.92 s per loop
```
Both returning identical output:
```
In [41]: l = open("/home/padraic/Downloads/sample.txt").read().split()
In [42]:
for i in range(0, len(l) - 1):
for j in range(i + 1, len(l)):
if l[j].startswith(l[i]):
l[j] = l[i]
else:
if l[i].startswith(l[j]):
l[i] = l[j]
....:
In [43]:
In [43]: l2 = open("/home/padraic/Downloads/sample.txt").read().split()
In [44]: sorted(set(l)) == sorted(find(l2))
Out[44]: True
``` |
Obtain a list containing string elements excluding elements prefixed with any other element from initial list | 30,186,869 | 13 | 2015-05-12T09:30:55Z | 30,250,865 | 12 | 2015-05-15T03:03:46Z | [
"python",
"string",
"list",
"python-2.7",
"filtering"
] | I have some trouble with filtering a list of strings. I found a similar question [here](http://stackoverflow.com/questions/22221878/python-delete-substrings-from-list-of-strings) but it is not what I need.
The input list is:
```
l = ['ab', 'xc', 'abb', 'abed', 'sdfdg', 'abfdsdg', 'xccc']
```
and the expected result is
```
['ab', 'xc', 'sdfdg']
```
The order of the items in the result is not important
The filter function must be fast because the size of list is big
My current solution is
```
l = ['ab', 'xc', 'abb', 'abed', 'sdfdg', 'abfdsdg', 'xccc']
for i in range(0, len(l) - 1):
for j in range(i + 1, len(l)):
if l[j].startswith(l[i]):
l[j] = l[i]
else:
if l[i].startswith(l[j]):
l[i] = l[j]
print list(set(l))
```
**EDIT**
After multiple tests with big input data (a list of 1,500,000 strings), my best solution for this is:
```
def filter(l):
if l==[]:
return []
l2=[]
l2.append(l[0])
llen = len(l)
k=0
itter = 0
while k<llen:
addkelem = ''
j=0
l2len = len(l2)
while j<l2len:
if (l2[j].startswith(l[k]) and l[k]!= l2[j]):
l2[j]=l[k]
l.remove(l[k])
llen-=1
j-=1
addkelem = ''
continue
if (l[k].startswith(l2[j])):
addkelem = ''
break
elif(l[k] not in l2):
addkelem = l[k]
j+=1
if addkelem != '':
l2.append(addkelem)
addkelem = ''
k+=1
return l2
```
for which the execution time is around 213 seconds
[Sample input data](http://soft2u.ro/out.7z) - each line is a string in the list | This algorithm completes the task in 0.97 seconds on my computer, with [the input file submitted by the author (154MB)](http://soft2u.ro/out.txt):
```
l.sort()
last_str = l[0]
filtered = [last_str]
app = filtered.append
for str in l:
if not str.startswith(last_str):
last_str = str
app(str)
# Commented because of the massive amount of data to print.
# print filtered
```
The algorithm is simple: first sort the list lexicographically, then search for the first string which isn't prefixed by the very first one of the list, then search the next one which isn't prefixed by the last unprefixed one, etc.
If the list is already sorted (your example file seems to be), you can remove the `l.sort()` line, which will result in O(n) complexity in both time and memory.
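The algorithm described above, restated as a self-contained function (the names are mine):

```
def filter_prefixes(strings):
    # After a lexicographic sort, every string prefixed by a kept string
    # sorts directly after it, so one linear pass is enough.
    out = []
    for s in sorted(strings):
        if not out or not s.startswith(out[-1]):
            out.append(s)
    return out

print(filter_prefixes(['ab', 'xc', 'abb', 'abed', 'sdfdg', 'abfdsdg', 'xccc']))
# ['ab', 'sdfdg', 'xc']
```

Note that it also collapses exact duplicates, since a string is a prefix of itself.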
"yield from iterable" vs "return iter(iterable)" | 30,187,598 | 12 | 2015-05-12T10:03:43Z | 30,188,263 | 9 | 2015-05-12T10:32:29Z | [
"python",
"python-3.x",
"iterable",
"yield-from"
] | When wrapping an (internal) iterator one often has to reroute the `__iter__` method to the underlying iterable. Consider the following example:
```
class FancyNewClass(collections.Iterable):
def __init__(self):
self._internal_iterable = [1,2,3,4,5]
# ...
# variant A
def __iter__(self):
return iter(self._internal_iterable)
# variant B
def __iter__(self):
yield from self._internal_iterable
```
Is there any significant difference between variant A and B?
Variant A returns an iterator object that has been queried via `iter()` from the internal iterable. Variant B returns a generator object that returns values from the internal iterable. Is one or the other preferable for some reason? In `collections.abc` the `yield from` version is used. The `return iter()` variant is the pattern that I have used until now. | The only significant difference is what happens when an exception is raised from within the iterable. Using `return iter()` your `FancyNewClass` will not appear on the exception traceback, whereas with `yield from` it will. It is generally a good thing to have as much information on the traceback as possible, although there could be situations where you want to hide your wrapper.
Other differences:
* `return iter` has to load the name `iter` from globals - this is potentially slow (although unlikely to significantly affect performance) and could be messed with (although anyone who overwrites globals like that deserves what they get).
* With `yield from` you can insert other `yield` expressions before and after (although you could equally use `itertools.chain`).
* As presented, the `yield from` form discards any generator return value (i.e. the value carried by `raise StopIteration(value)`). You can fix this by writing `return (yield from iterator)` instead.
Here's a test comparing the disassembly of the two approaches and also showing exception tracebacks: <http://ideone.com/1YVcSe>
Using `return iter()`:
```
3 0 LOAD_GLOBAL 0 (iter)
3 LOAD_FAST 0 (it)
6 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
9 RETURN_VALUE
Traceback (most recent call last):
File "./prog.py", line 12, in test
File "./prog.py", line 10, in i
RuntimeError
```
Using `return (yield from)`:
```
5 0 LOAD_FAST 0 (it)
3 GET_ITER
4 LOAD_CONST 0 (None)
7 YIELD_FROM
8 RETURN_VALUE
Traceback (most recent call last):
File "./prog.py", line 12, in test
File "./prog.py", line 5, in bar
File "./prog.py", line 10, in i
RuntimeError
``` |
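The third bullet, the discarded generator return value, can be seen in a small sketch:

```
def worker():
    yield 1
    return 'done'  # in a generator, this becomes StopIteration('done')

def plain_delegator(it):
    yield from it           # the return value is silently discarded

def capturing_delegator(it):
    return (yield from it)  # the return value is forwarded

def run(gen):
    next(gen)  # consume the single yielded value
    try:
        next(gen)
    except StopIteration as exc:
        return exc.value

print(run(plain_delegator(worker())))      # None
print(run(capturing_delegator(worker())))  # done
```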
How can I use super() with one argument in python | 30,190,185 | 13 | 2015-05-12T11:57:02Z | 30,190,341 | 11 | 2015-05-12T12:04:08Z | [
"python",
"super"
] | While reading about the [`super()` object](https://docs.python.org/2/library/functions.html#super) in Python, I read the following statement:
> If the second argument is omitted, the super object returned is unbound
What does this exactly mean and how do I use `super()` with one argument in code? | Python function objects are [descriptors](https://docs.python.org/2/howto/descriptor.html), and Python uses the descriptor protocol to *bind* functions to an instance. This process produces a *bound method*.
Binding is what makes the 'magic' `self` argument appear when you call a method, and what makes a `property` object automatically call methods when you try to use the property as an attribute on instances.
`super()` with two arguments invokes the same descriptor protocol when you try to use it to look up methods on parent classes; `super(Foo, self).bar()` will traverse the `Foo` parent classes until an attribute `bar` is found, and if that is an object that is a descriptor, it'll be bound to `self`. Calling `bar` then calls the bound method, which in turn calls the function passing in the `self` argument as `bar(self)`.
To do this, the `super()` object stores both the class (first argument) the `self` argument to bind with, *and* the type of the `self` object as attributes:
```
>>> class Foo(object):
... def bar(self):
... return 'bar on Foo'
...
>>> class Spam(Foo):
... def bar(self):
... return 'bar on Spam'
...
>>> spam = Spam()
>>> super(Spam, spam)
<super: <class 'Spam'>, <Spam object>>
>>> super(Spam, spam).__thisclass__
<class '__main__.Spam'>
>>> super(Spam, spam).__self_class__
<class '__main__.Spam'>
>>> super(Spam, spam).__self__
<__main__.Spam object at 0x107195c10>
```
When looking up attributes, the `__mro__` attribute of the `__self_class__` attribute is searched, starting one position past the position of `__thisclass__`, and the results are bound.
`super()` with just *one* argument will miss the `__self_class__` and `__self__` attributes, and cannot do lookups *yet*:
```
>>> super(Spam)
<super: <class 'Spam'>, NULL>
>>> super(Spam).__self_class__ is None
True
>>> super(Spam).__self__ is None
True
>>> super(Spam).bar
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'super' object has no attribute 'bar'
```
The object *does* support the descriptor protocol, so you can *bind* it just like you can bind a method:
```
>>> super(Spam).__get__(spam, Spam)
<super: <class 'Spam'>, <Spam object>>
>>> super(Spam).__get__(spam, Spam).bar()
'bar on Foo'
```
This means you can store such an object *on a class* and use it to to traverse to parent methods:
```
>>> class Eggs(Spam):
... pass
...
>>> Eggs.parent = super(Eggs)
>>> eggs = Eggs()
>>> eggs.parent
<super: <class 'Eggs'>, <Eggs object>>
>>> eggs.parent.bar()
'bar on Spam'
``` |
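Putting the pieces together as one runnable sketch (reusing the `Foo`/`Spam` names from above):

```
class Foo:
    def bar(self):
        return 'bar on Foo'

class Spam(Foo):
    def bar(self):
        return 'bar on Spam'

unbound = super(Spam)                # one argument: no __self__ yet
spam = Spam()
bound = unbound.__get__(spam, Spam)  # the descriptor protocol binds it

print(unbound.__self__)  # None
print(bound.bar())       # bar on Foo
```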
Coroutine in Python between 3.4 and 3.5: how can I keep backwards compatibility? | 30,191,556 | 8 | 2015-05-12T12:56:07Z | 30,195,047 | 11 | 2015-05-12T15:16:46Z | [
"python",
"python-3.x",
"asynchronous",
"python-asyncio"
] | I'm developing a Python chat bot framework with `asyncio`. But I saw [PEP 492](https://www.python.org/dev/peps/pep-0492) with its new `async`/`await` syntax, which was finally accepted.
I like the `async`/`await` syntax and I want to use it, but I worry about 3.4 backwards compatibility.
If I use the new syntax in my code, can someone still use it on 3.4?
For example, I write some code like this,
```
import asyncio
class ChatBot:
def __init__(self, loop):
self.loop = loop
async def connect(self):
        self.reader, self.writer = await asyncio.open_connection(HOST, PORT, loop=self.loop)
    async def read(self):
return await self.reader.read()
async def run(self):
running = True
while running:
try:
await self.connect()
line = await self.read()
if not line:
continue
await self.parse(line)
except BotInternalError as e:
if e.stop:
running = False
break
except:
pass
async def parse(self, msg):
if msg.startswith('PING'):
self.pong()
elif msg.startswith('ERROR'):
self.error()
else:
await self.some_work(msg)
async def some_work(self, msg):
# some looooooooong works
self.send(msg)
def send(self, msg):
self.writer.write(msg)
```
Than, I can use it with this source in py35
```
loop = asyncio.get_event_loop() # I don't know if it's really needed in py35.
bot = ChatBot(loop)
loop.run_until_complete(bot.run())
```
But py34 doesn't have the `await` syntax. If I uploaded the above source to PyPI without a version constraint and someone installed it on py34, would it work fine? How can I keep compatibility? | If you need to support Python 3.4 in your code, you'll need to use the old `@asyncio.coroutine`/`yield from` style syntax. There's no way to support the `async`/`await` syntax without running 3.5; you'll get a `SyntaxError` at compilation time on 3.4 or lower.
The only way you *can* take advantage of the new features in a backwards-compatible manner is to add the various `__a*__` methods to your classes where appropriate (`__aiter__`, `__aenter__`, `__aexit__`, etc.), using the `yield from` coroutine syntax. That way, your objects can support `async with`/`async for` statements, so that users of your library running Python 3.5 could take advantage of the new features.
For example, this class can be used with `async with`, but won't break when run on Python 3.4:
```
import asyncio

class Test:
    def __enter__(self):
        return self

    def __exit__(self, *args):
        print("arg")

    @asyncio.coroutine
    def __aenter__(self):
        yield from self.init_state()
        return self

    @asyncio.coroutine
    def init_state(self):
        yield from asyncio.sleep(2)  # Pretend this is real initialization

    @asyncio.coroutine
    def __aexit__(self, *args):
        return self.__exit__(*args)  # self is already bound here
```
On Python 3.5:
```
import asyncio
from test import Test

async def main():
    print("entering with")
    async with Test() as t:
        print("in here")

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
```
On Python 3.4:
```
import asyncio
from test import Test

@asyncio.coroutine
def oldmain():
    print("entering with")
    with Test() as t:
        yield from t.init_state()
        print("in here")

loop = asyncio.get_event_loop()
loop.run_until_complete(oldmain())
```
This probably isn't useful if you're writing an application that uses `asyncio`, but if you're developing a library or framework intended to be used by other developers, it's worth doing. |
Python: apply list of functions to each element in list | 30,193,522 | 2 | 2015-05-12T14:14:42Z | 30,193,577 | 8 | 2015-05-12T14:17:13Z | [
"python",
"python-2.7",
"lambda"
] | Say I have a list with elements `content = ['121\n', '12\n', '2\n', '322\n']` and a list of functions `fnl = [str.strip, int]`.
I need to apply each function from `fnl` to each element of `content` sequentially.
I can do this with several calls to `map`.
Another way:
```
xl = lambda func, content: map(func, content)
for func in fnl:
    content = xl(func, content)
```
I'm just wondering if there is a more pythonic way to do it.
Without separate function? By single expression? | You could use the [`reduce()` function](https://docs.python.org/2/library/functions.html#reduce) in a list comprehension here:
```
[reduce(lambda v, f: f(v), fnl, element) for element in content]
```
Demo:
```
>>> content = ['121\n', '12\n', '2\n', '322\n']
>>> fnl = [str.strip, int]
>>> [reduce(lambda v, f: f(v), fnl, element) for element in content]
[121, 12, 2, 322]
```
This applies each function in turn to each element, as if you nested the calls; for `fnl = [str.strip, int]` that translates to `int(str.strip(element))`.
In Python 3, `reduce()` was moved to the [`functools` module](https://docs.python.org/3/library/functools.html#functools.reduce); for forwards compatibility, you can import it from that module from Python 2.6 onwards:
```
from functools import reduce
results = [reduce(lambda v, f: f(v), fnl, element) for element in content]
```
Note that for the `int()` function, it doesn't matter if there is extra whitespace around the digits; `int('121\n')` works without stripping the newline.
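If this pattern recurs, the `reduce` call can be factored into a small helper (a sketch; `compose_all` is an invented name):

```python
from functools import reduce

def compose_all(fns):
    """Chain fns left-to-right: compose_all([f, g])(x) == g(f(x))."""
    return lambda value: reduce(lambda v, f: f(v), fns, value)

pipeline = compose_all([str.strip, int])
print([pipeline(s) for s in ['121\n', '12\n', '2\n', '322\n']])  # [121, 12, 2, 322]
```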
How to multiply functions in python? | 30,195,045 | 25 | 2015-05-12T15:16:32Z | 30,196,480 | 16 | 2015-05-12T16:22:09Z | [
"python",
"function",
"monkeypatching",
"function-composition"
] | ```
def sub3(n):
    return n - 3

def square(n):
    return n * n
```
It's dead easy to compose functions in python:
```
>>> my_list
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> [square(sub3(n)) for n in my_list]
[9, 4, 1, 0, 1, 4, 9, 16, 25, 36]
```
Unfortunately, when wanting to use the composition as a *key*, it's kind of lame:
```
>>> sorted(my_list, key=lambda n: square(sub3(n)))
[3, 2, 4, 1, 5, 0, 6, 7, 8, 9]
```
This should really just be `sorted(my_list, key=square*sub3)`, because heck, function `__mul__` isn't used for anything else anyway:
```
>>> square * sub3
TypeError: unsupported operand type(s) for *: 'function' and 'function'
```
Well let's just define it then!
```
>>> type(sub3).__mul__ = 'something'
TypeError: can't set attributes of built-in/extension type 'function'
```
D'oh!
```
>>> class CoolerFunction(types.FunctionType):
...     pass
...
TypeError: Error when calling the metaclass bases
type 'function' is not an acceptable base type
```
D'oh!
```
class Hack(object):
    def __init__(self, function):
        self.function = function

    def __call__(self, *args, **kwargs):
        return self.function(*args, **kwargs)

    def __mul__(self, other):
        def hack(*args, **kwargs):
            return self.function(other(*args, **kwargs))
        return Hack(hack)
```
Hey, now we're getting somewhere..
```
>>> square = Hack(square)
>>> sub3 = Hack(sub3)
>>> [square(sub3(n)) for n in my_list]
[9, 4, 1, 0, 1, 4, 9, 16, 25, 36]
>>> [(square*sub3)(n) for n in my_list]
[9, 4, 1, 0, 1, 4, 9, 16, 25, 36]
>>> sorted(my_list, key=square*sub3)
[3, 2, 4, 1, 5, 0, 6, 7, 8, 9]
```
But I don't want a `Hack` callable class! The scoping rules are totally different in ways I don't fully understand, and this is even uglier than the "lameda" arguably. I want to monkeypatch the *functions*. How can I do that? | You can use your hack class as a decorator pretty much as it's written, though you'd likely want to choose a more appropriate name for the class.
Like this:
```
class Composable(object):
    def __init__(self, function):
        self.function = function

    def __call__(self, *args, **kwargs):
        return self.function(*args, **kwargs)

    def __mul__(self, other):
        @Composable
        def composed(*args, **kwargs):
            return self.function(other(*args, **kwargs))
        return composed

    def __rmul__(self, other):
        @Composable
        def composed(*args, **kwargs):
            return other(self.function(*args, **kwargs))
        return composed
```
You can then decorate your functions like so:
```
@Composable
def sub3(n):
    return n - 3

@Composable
def square(n):
    return n * n
```
And compose them like so:
```
(square * sub3)(n)
```
Basically it's the same thing you've accomplished using your hack class, but using it as a decorator. |
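Putting the pieces together, a self-contained sketch that reproduces the question's `sorted(my_list, key=square*sub3)` goal:

```python
class Composable(object):
    """Wraps a function so that * composes: (f * g)(x) == f(g(x))."""
    def __init__(self, function):
        self.function = function

    def __call__(self, *args, **kwargs):
        return self.function(*args, **kwargs)

    def __mul__(self, other):
        return Composable(lambda *a, **kw: self.function(other(*a, **kw)))

sub3 = Composable(lambda n: n - 3)
square = Composable(lambda n: n * n)

my_list = list(range(10))
print(sorted(my_list, key=square * sub3))  # [3, 2, 4, 1, 5, 0, 6, 7, 8, 9]
```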
How to save Python NLTK alignment models for later use? | 30,195,287 | 13 | 2015-05-12T15:25:51Z | 30,214,539 | 7 | 2015-05-13T12:18:28Z | [
"python",
"io",
"nlp",
"nltk",
"machine-translation"
] | In Python, I'm using [`NLTK's alignment module`](http://www.nltk.org/api/nltk.align.html) to create word alignments between parallel texts. Aligning bitexts can be a time-consuming process, especially when done over considerable corpora. It would be nice to do alignments in batch one day and use those alignments later on.
```
from nltk import IBMModel1 as ibm
biverses = [list of AlignedSent objects]
model = ibm(biverses, 20)
with open(path + "eng-taq_model.txt", 'w') as f:
    f.write(model.train(biverses, 20))  # makes empty file
```
Once I create a model, how can I (1) save it to disk and (2) reuse it later? | The immediate answer is to pickle it, see <https://wiki.python.org/moin/UsingPickle>
But because IBMModel1 returns a lambda function, it's not possible to pickle it with the default `pickle` / `cPickle` (see <https://github.com/nltk/nltk/blob/develop/nltk/align/ibm1.py#L74> and <https://github.com/nltk/nltk/blob/develop/nltk/align/ibm1.py#L104>)
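The root cause is easy to reproduce without NLTK at all; a minimal standalone sketch:

```python
# Standalone illustration (no NLTK required): the stock pickler refuses
# to serialize a lambda, which is the kind of object IBMModel1 holds.
import pickle

double = lambda x: x * 2
try:
    pickle.dumps(double)
    print("pickled fine")
except Exception as e:  # PicklingError / AttributeError depending on version
    print("pickle failed:", type(e).__name__)
```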
So we'll use `dill`. Firstly, install `dill`, see [Can Python pickle lambda functions?](http://stackoverflow.com/questions/25348532/can-python-pickle-lambda-functions)
```
$ pip install dill
$ python
>>> import dill as pickle
```
Then:
```
>>> import dill
>>> import dill as pickle
>>> from nltk.corpus import comtrans
>>> from nltk.align import IBMModel1
>>> bitexts = comtrans.aligned_sents()[:100]
>>> ibm = IBMModel1(bitexts, 20)
>>> with open('model1.pk', 'wb') as fout:
...     pickle.dump(ibm, fout)
...
>>> exit()
```
To use pickled model:
```
>>> import dill as pickle
>>> from nltk.corpus import comtrans
>>> bitexts = comtrans.aligned_sents()[:100]
>>> with open('model1.pk', 'rb') as fin:
...     ibm = pickle.load(fin)
...
>>> aligned_sent = ibm.align(bitexts[0])
>>> aligned_sent.words
['Wiederaufnahme', 'der', 'Sitzungsperiode']
```
---
If you try to pickle the `IBMModel1` object, which is a lambda function, you'll end up with this:
```
>>> import cPickle as pickle
>>> from nltk.corpus import comtrans
>>> from nltk.align import IBMModel1
>>> bitexts = comtrans.aligned_sents()[:100]
>>> ibm = IBMModel1(bitexts, 20)
>>> with open('model1.pk', 'wb') as fout:
...     pickle.dump(ibm, fout)
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/usr/lib/python2.7/copy_reg.py", line 70, in _reduce_ex
raise TypeError, "can't pickle %s objects" % base.__name__
TypeError: can't pickle function objects
```
(Note: the above code snippet comes from NLTK version 3.0.0)
In python3 with NLTK 3.0.0, you will also face the same problem because IBMModel1 returns a lambda function:
```
alvas@ubi:~$ python3
Python 3.4.0 (default, Apr 11 2014, 13:05:11)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pickle
>>> from nltk.corpus import comtrans
>>> from nltk.align import IBMModel1
>>> bitexts = comtrans.aligned_sents()[:100]
>>> ibm = IBMModel1(bitexts, 20)
>>> with open('mode1.pk', 'wb') as fout:
...     pickle.dump(ibm, fout)
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
_pickle.PicklingError: Can't pickle <function IBMModel1.train.<locals>.<lambda> at 0x7fa37cf9d620>: attribute lookup <lambda> on nltk.align.ibm1 failed'
>>> import dill
>>> with open('model1.pk', 'wb') as fout:
...     dill.dump(ibm, fout)
...
>>> exit()
alvas@ubi:~$ python3
Python 3.4.0 (default, Apr 11 2014, 13:05:11)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dill
>>> from nltk.corpus import comtrans
>>> with open('model1.pk', 'rb') as fin:
...     ibm = dill.load(fin)
...
>>> bitexts = comtrans.aligned_sents()[:100]
>>> aligned_sent = ibm.aligned(bitexts[0])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'IBMModel1' object has no attribute 'aligned'
>>> aligned_sent = ibm.align(bitexts[0])
>>> aligned_sent.words
['Wiederaufnahme', 'der', 'Sitzungsperiode']
```
(Note: in Python 3, `pickle` automatically uses the C implementation that Python 2 exposed as `cPickle`, see <http://docs.pythonsprints.com/python3_porting/py-porting.html>)
Why is my Fortran code wrapped with f2py using so much memory? | 30,197,773 | 7 | 2015-05-12T17:32:34Z | 30,198,584 | 7 | 2015-05-12T18:18:39Z | [
"python",
"memory-management",
"fortran",
"out-of-memory",
"f2py"
] | I am trying to calculate all the distances between approximately a hundred thousand points. I have the following code written in Fortran and compiled using `f2py`:
```
C 1 2 3 4 5 6 7
C123456789012345678901234567890123456789012345678901234567890123456789012
      subroutine distances(coor,dist,n)
      double precision coor(n,3),dist(n,n)
      integer n
      double precision x1,y1,z1,x2,y2,z2,diff2
cf2py intent(in) :: coor,dist
cf2py intent(in,out):: dist
cf2py intent(hide)::n
cf2py intent(hide)::x1,y1,z1,x2,y2,z2,diff2
      do 200,i=1,n-1
        x1=coor(i,1)
        y1=coor(i,2)
        z1=coor(i,3)
        do 100,j=i+1,n
          x2=coor(j,1)
          y2=coor(j,2)
          z2=coor(j,3)
          diff2=(x1-x2)*(x1-x2)+(y1-y2)*(y1-y2)+(z1-z2)*(z1-z2)
          dist(i,j)=sqrt(diff2)
  100   continue
  200 continue
      end
```
I am compiling the fortran code using the following python code `setup_collision.py`:
```
# System imports
from distutils.core import *
from distutils import sysconfig
# Third-party modules
import numpy
from numpy.distutils.core import Extension, setup
# Obtain the numpy include directory. This logic works across numpy versions.
try:
    numpy_include = numpy.get_include()
except AttributeError:
    numpy_include = numpy.get_numpy_include()

# simple extension module
collision = Extension(name="collision", sources=['./collision.f'],
                      include_dirs=[numpy_include],
                      )

# NumyTypemapTests setup
setup(name="COLLISION",
      description="Module calculates collision energies",
      author="Stvn66",
      version="0.1",
      ext_modules=[collision]
      )
```
Then running it as follows:
```
import numpy as np
import collision
coor = np.loadtxt('coordinates.txt')
n_atoms = len(coor)
dist = np.zeros((n_atoms, n_atoms), dtype=np.float16) # float16 reduces memory
n_dist = n_atoms*(n_atoms-1)/2
n_GB = n_dist * 2 / float(2**30) # 1 kB = 1024 B
n_Gb = n_dist * 2 / 1E9 # 1 kB = 1000 B
print 'calculating %d distances between %d atoms' % (n_dist, n_atoms)
print 'should use between %f and %f GB of memory' % (n_GB, n_Gb)
dist = collision.distances(coor, dist)
```
Using this code with 30,000 atoms, which should need around 1 GB of memory to store the distances, instead uses 10 GB. At that rate, performing the calculation with 100,000 atoms would require 100 GB instead of 10 GB. I only have 20 GB in my computer.
Am I missing something related to passing the data between Python and Fortran? The huge difference indicates a major flaw in the implementation. | You are feeding double precision arrays to the Fortran subroutine. Each double precision element requires 8 bytes of memory. For `N=30,000` that makes
```
coor(n,3) => 30,000*3*8 ~ 0.7 MB
dist(n,n) => 30,000^2*8 ~ 6.7 GB
```
Since the half precision (`float16`) array is additionally required on the Python side, that accounts for another 1-2 GB. So the overall requirement is 9-10 GB.
The same holds true for `N=100,000`, which will require ~75GB for the Fortran part alone.
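A quick back-of-the-envelope check of these figures (8 bytes per double-precision element):

```python
# Sanity-check the memory estimates quoted above.
n = 30000
print(round(n * 3 * 8 / 2.0**20, 2), "MB for coor")  # ~0.69 MB
print(round(n * n * 8 / 2.0**30, 2), "GB for dist")  # ~6.71 GB

n = 100000
print(round(n * n * 8 / 2.0**30, 1), "GB for dist")  # ~74.5 GB
```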
Instead of `double precision` floats, you should use single precision `real`s - if that is sufficient for your calculations. This will lead to half the memory requirements. [I have no experience with that, but I assume that if both parts use the same precision, Python can operate on the data directly...]
As @VladimirF noted in his comment, *"usual compilers do not support 2 byte reals"*. I checked with `gfortran` and `ifort`, and they both do not. So you need to use at least single precision. |
Nest a flat list based on an arbitrary criterion | 30,198,235 | 10 | 2015-05-12T17:59:02Z | 30,198,305 | 9 | 2015-05-12T18:03:37Z | [
"python",
"list"
] | I have a flat list of unique objects, some of which may share a given attribute with others. I wish to create a nested list-of-lists, with objects grouped by the given attribute. As a minimal example, given the following list:
```
>>> flat = ["Shoes", "pants", "shirt", "tie", "jacket", "hat"]
```
I might want to group it by length, eg:
```
>>> nest_by_length(flat)
[['tie', 'hat'], ['shoes', 'pants', 'shirt'], ['jacket']]
```
I've seen a couple of [similar](http://stackoverflow.com/questions/27620855/unflattening-a-list-in-python) [questions](http://stackoverflow.com/questions/8916209/how-to-build-a-nested-list-from-a-flat-one-in-python) and [suggestions](http://code.activestate.com/recipes/577061-nest-a-flat-list/). However, in all of these cases, the nesting is based on the ordering of the input list. In my case, the ordering of the input list is completely unpredictable, as is the number of sub-lists for the output and the number of items per sub-list.
Is there a standard function or idiomatic way to accomplish this? | A common idiom for an existing list is to use [groupby](https://docs.python.org/2/library/itertools.html#itertools.groupby) in itertools:
```
from itertools import groupby
flat = ["Shoes", "pants", "shirt", "tie", "jacket", "hat"]
result = []
for k, g in groupby(sorted(flat, key=len), key=len):
    result.append(list(g))
print result
```
Or, more tersely:
```
[list(g) for _,g in groupby(sorted(flat, key=len), key=len)]
```
Prints:
```
[['tie', 'hat'], ['Shoes', 'pants', 'shirt'], ['jacket']]
```
The input to `groupby` is split into groups based on the changing output of the key function, in this case `len`. Generally, you need to pre-sort the list by that same key function, which is why `sorted` is called first.
If your source list is not complete yet, or not sortable based on the criteria (or you would just prefer another option), create a dict that maps your criteria to a unique key value:
```
groups = {}
for e in flat:
    groups.setdefault(len(e), []).append(e)
print groups
# {5: ['Shoes', 'pants', 'shirt'], 3: ['tie', 'hat'], 6: ['jacket']}
```
You can also use [defaultdict](https://docs.python.org/2/library/collections.html#collections.defaultdict) rather than setdefault with the arbitrary key value:
```
from collections import defaultdict
groups = defaultdict(list)
for e in flat:
    groups[len(e)].append(e)
# groups=defaultdict(<type 'list'>, {5: ['Shoes', 'pants', 'shirt'], 3: ['tie', 'hat'], 6: ['jacket']})
```
In either case, then you can create the nested list from that:
```
>>> [groups[k] for k in sorted(groups.keys())]
[['tie', 'hat'], ['Shoes', 'pants', 'shirt'], ['jacket']]
``` |
What's the pythonic idiom for making a simple list out of a list of instances? | 30,198,638 | 2 | 2015-05-12T18:21:16Z | 30,198,704 | 8 | 2015-05-12T18:24:21Z | [
"python",
"sqlalchemy",
"list-comprehension"
] | I have a list of record instances returned by SQLAlchemy.
While the instances have many attributes, I want a new list with only one of the attributes. The java coder in me says:
```
my_records = query.all()
names = []
for my_record in my_records:
    names.append(my_record.name)
```
...which works, of course. But What's the Pythonic answer? I know there a one-liner that includes these 4 lines into 1, but finding it is like googling for "for". | You are looking for what is called a [list comprehension](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions):
```
names = [my_record.name for my_record in query.all()]
```
The above is a concise equivalent to the for-loop in your example.
---
In addition, you should be aware that there are [dict comprehensions](https://docs.python.org/3/tutorial/datastructures.html#dictionaries):
```
{key:val for key, val in iterable}
```
as well as [set comprehensions](https://docs.python.org/3/tutorial/datastructures.html#sets):
```
{item for item in iterable}
```
which will construct new dictionaries and sets respectively.
---
Lastly, all of these constructs allow you to add an optional condition to be tested for each item:
```
[item for item in iterable if condition]
{key:val for key, val in iterable if condition}
{item for item in iterable if condition}
```
This is useful if you want to filter the items from the iterable by the condition. |
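For instance, with toy records standing in for SQLAlchemy rows (names invented for illustration):

```python
# Toy records standing in for SQLAlchemy result objects.
from collections import namedtuple

Record = namedtuple("Record", ["name", "age"])
records = [Record("alice", 30), Record("bob", 25), Record("carol", 35)]

names = [r.name for r in records]                  # list comprehension
age_by_name = {r.name: r.age for r in records}     # dict comprehension
ages = {r.age for r in records}                    # set comprehension
over_28 = [r.name for r in records if r.age > 28]  # with a condition
print(names, over_28)
```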
Python how to index multidimensional array with string key, like a dict | 30,198,973 | 5 | 2015-05-12T18:39:41Z | 30,200,037 | 7 | 2015-05-12T19:42:26Z | [
"python",
"numpy",
"dictionary",
"indexing",
"pandas"
] | I would like to combine the functionality of numpy's `array` with native python's `dict`, namely creating a multidimensional array that can be indexed with strings.
For example, I could do this:
```
dict_2d = {'a': {'x': 1, 'y': 2},
           'b': {'x': 3, 'y': 4}}
print dict_2d['a','y'] # returns 2
```
I know I could do `dict_2d['a']['x']`, but long term I'd like to be able to treat them like numpy arrays, including doing matrix multiplication and such, and that's not possible with layered dicts.
It's also not that hard to write a simple class that just converts the strings to int indexes and then uses numpy, but I'd like to use something that already exists if possible.
Edit: I don't need incredible performance. I'll be working with maybe 10x10 arrays. My goal is to make writing the code simple and robust. Working with numpy arrays is not really much different than just writing it in Fortran. I've spent enough of my life tracking down Fortran indexing errors... | You may be looking for [pandas](http://pandas.pydata.org/), which provides handy datatypes that wrap numpy arrays, allowing you to access rows and columns by name instead of just by number.
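For example, a minimal sketch (column and index labels invented for illustration):

```python
# A DataFrame is essentially a 2D numpy array indexed by strings.
import pandas as pd

df = pd.DataFrame({'x': [1, 3], 'y': [2, 4]}, index=['a', 'b'])
print(df.loc['a', 'y'])          # string-keyed 2D lookup -> 2
print(df.values.dot(df.values))  # the plain numpy array is still underneath
```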
Expanding a block of numbers in Python | 30,201,119 | 7 | 2015-05-12T20:47:54Z | 30,201,202 | 9 | 2015-05-12T20:52:36Z | [
"python",
"list",
"python-2.7",
"variable-expansion"
] | Before I asked, I did some googling, and was unable to find an answer.
The scenario I have is this:
A list of numbers are passed to the script, either \n-delimited via a file, or comma-delimited via a command line arg. The numbers can be singular, or in blocks, like so:
File:
```
1
2
3
7-10
15
20-25
```
Command Line Arg:
```
1, 2, 3, 7-10, 15, 20-25
```
Both end up in the same list[]. I would like to expand the 7-10 or 20-25 blocks (obviously in the actual script these numbers will vary) and append them onto a new list with the final list looking like this:
```
['1','2','3','7','8','9','10','15','20','21','22','23','24','25']
```
I understand that something like .append(range(7,10)) could help me here, but I can't seem to find out which elements of the original list[] need expansion.
So, my question is this:
Given a list[]:
```
['1','2','3','7-10','15','20-25'],
```
how can I get a list[]:
```
['1','2','3','7','8','9','10','15','20','21','22','23','24','25']
``` | So let's say you're given the list:
```
L = ['1','2','3','7-10','15','20-25']
```
and you want to expand out all the ranges contained therein:
```
answer = []
for elem in L:
    if '-' not in elem:
        answer.append(elem)
        continue
    start, end = elem.split('-')
    answer.extend(map(str, range(int(start), int(end)+1)))
```
Of course, there's a handy one-liner for this:
```
answer = list(itertools.chain.from_iterable([[e] if '-' not in e else map(str, range(*[int(i) for i in e.split('-')]) + [int(i)]) for e in L]))
```
But this exploits the nature of leaky variables in python2.7, which I don't think will work in python3. Also, it's not exactly the most readable line of code. So I wouldn't really use it in production, if I were you... unless you really hate your manager.
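For the record, a Python-3-friendly (and arguably more readable) middle ground between the loop and the one-liner is a small generator; a sketch:

```python
def expand(items):
    """Yield each item, expanding 'a-b' ranges inclusively."""
    for elem in items:
        if '-' in elem:
            start, end = map(int, elem.split('-'))
            for n in range(start, end + 1):
                yield str(n)
        else:
            yield elem

L = ['1', '2', '3', '7-10', '15', '20-25']
print(list(expand(L)))  # ['1', '2', '3', '7', '8', '9', '10', '15', '20', ..., '25']
```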
References: Â [`append()`](https://docs.python.org/2/library/array.html?#array.array.append) Â [`continue`](https://docs.python.org/2/reference/simple_stmts.html#continue) Â [`split()`](https://docs.python.org/2/library/stdtypes.html#str.split) Â [`extend()`](https://docs.python.org/2/library/array.html?#array.array.extend) Â [`map()`](https://docs.python.org/2/library/functions.html#map) Â [`range()`](https://docs.python.org/2/library/functions.html?#range) Â [`list()`](https://docs.python.org/2/library/functions.html?#list) Â [`itertools.chain.from_iterable()`](https://docs.python.org/2/library/itertools.html#itertools.chain.from_iterable) Â [`int()`](https://docs.python.org/2/library/functions.html?#int) |
Why are the programming languages limits precision of 22/7 equally? | 30,201,855 | 4 | 2015-05-12T21:36:57Z | 30,202,144 | 16 | 2015-05-12T21:58:06Z | [
"python",
"ruby",
"haskell",
"floating-point",
"erlang"
] | I tried
```
Erlang
$ erl
1> Pi = 22/7.
3.142857142857143
Haskell
$ ghci
Prelude> 22/7
3.142857142857143
Python
$ python
>>> 22/7.0
3.142857142857143
Ruby
$ irb
2.1.6 :001 > 22 / 7.0
=> 3.142857142857143
```
The results are the same. Why? | This happens because all the languages are using the same numerical representation for non-integer numbers: [IEEE 754 floating point numbers](http://en.wikipedia.org/wiki/IEEE_floating_point) with, most likely, the same level of precision. (Either 32-bit "floats" or 64-bit "doubles", depending on how your system and languages are configured.)
Floating point numbers are the default choice for this sort of operation, in large part because they're supported directly in hardware. However, fundamentally, nothing stops languages from supporting other kinds of numbers as well. This is easiest to demonstrate in Haskell, which has rational numbers in its standard library:
```
λ> 22/7 :: Rational
22 % 7
```
A rational number is a fraction, so it's stored as a pair of integers and you don't lose any precision when dividing. At the same time, some operations are more difficult and less efficient than with normal floating point numbers.
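Python's standard library offers the same escape hatch via `fractions.Fraction` (a sketch, using Python 3 division):

```python
from fractions import Fraction

print(22 / 7)            # 3.142857142857143 -- nearest IEEE 754 double
print(Fraction(22, 7))   # 22/7 -- stored exactly as a pair of integers
print(Fraction(22 / 7))  # the exact value the double actually holds
```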
Another possible representation are [fixed-point numbers](http://en.wikipedia.org/wiki/Fixed-point_arithmetic) which have a smaller range than floating point numbers but do a better job of maintaining precision *within* that range. (I'm really handwaving a lot of details here.) You can try these out in Haskell too:
```
λ> import Data.Fixed
λ> 22/7 :: Deci
3.1
λ> 22/7 :: Centi
3.14
λ> 22/7 :: Micro
3.142857
``` |
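Python's `decimal` module plays a similar role, with the working precision chosen explicitly (a sketch):

```python
from decimal import Decimal, getcontext

getcontext().prec = 7            # 7 significant digits
print(Decimal(22) / Decimal(7))  # 3.142857
```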
'NoneType' object has no attribute '_app_data' in scrapy\twisted\openssl | 30,202,669 | 8 | 2015-05-12T22:41:08Z | 31,423,708 | 21 | 2015-07-15T07:11:12Z | [
"python",
"openssl",
"scrapy",
"twisted",
"pyopenssl"
] | During the scraping process using scrapy one error appears in my logs from time to time.
It doesnt seem to be anywhere in my code, and looks like it something inside twisted\openssl.
Any ideas what caused this and how to get rid of it?
Stacktrace here:
```
[Launcher,27487/stderr] Error during info_callback
Traceback (most recent call last):
File "/opt/webapps/link_crawler/lib/python2.7/site-packages/twisted/protocols/tls.py", line 415, in dataReceived
self._write(bytes)
File "/opt/webapps/link_crawler/lib/python2.7/site-packages/twisted/protocols/tls.py", line 554, in _write
sent = self._tlsConnection.send(toSend)
File "/opt/webapps/link_crawler/lib/python2.7/site-packages/OpenSSL/SSL.py", line 1270, in send
result = _lib.SSL_write(self._ssl, buf, len(buf))
File "/opt/webapps/link_crawler/lib/python2.7/site-packages/OpenSSL/SSL.py", line 926, in wrapper
callback(Connection._reverse_mapping[ssl], where, return_code)
--- <exception caught here> ---
File "/opt/webapps/link_crawler/lib/python2.7/site-packages/twisted/internet/_sslverify.py", line 1055, in infoCallback
return wrapped(connection, where, ret)
File "/opt/webapps/link_crawler/lib/python2.7/site-packages/twisted/internet/_sslverify.py", line 1157, in _identityVerifyingInfoCallback
transport = connection.get_app_data()
File "/opt/webapps/link_crawler/lib/python2.7/site-packages/OpenSSL/SSL.py", line 1589, in get_app_data
return self._app_data
File "/opt/webapps/link_crawler/lib/python2.7/site-packages/OpenSSL/SSL.py", line 1148, in __getattr__
return getattr(self._socket, name)
exceptions.AttributeError: 'NoneType' object has no attribute '_app_data'
``` | I was able to solve this problem by installing the `service_identity` package:
`pip install service_identity` |
Unprint a line on the console in Python? | 30,203,228 | 2 | 2015-05-12T23:38:32Z | 30,203,416 | 7 | 2015-05-13T00:01:53Z | [
"python",
"printing"
] | Is it possible to manipulate lines of text that have already been printed to the console?
For example,
```
import time
for k in range(1,100):
    print(str(k)+"/"+"100")
    time.sleep(0.03)
    #>> Clear the most recent line printed to the console
print("ready or not here I come!")
```
I've seen some things for using custom [DOS](http://en.wikipedia.org/wiki/DOS) consoles under Windows, but I would really like something that works on the command\_line like does print without any additional canvases.
Does this exist? If it doesnât, why not?
P.S.: I was trying to use **curses**, and it was causing problems with my command line behaviour outside of Python. (After erroring out of a Python script with curses in it, my [Bash](http://en.wikipedia.org/wiki/Bash_%28Unix_shell%29) shell stopped printing newline -*unacceptable*- ). | What you're looking for is:
```
print("{}/100".format(k), "\r", end="")
```
`\r` is carriage return, which returns the cursor to the beginning of the line. In effect, whatever is printed will overwrite the previous printed text. `end=""` is to prevent `\n` after printing (to stay on the same line).
In [Python 2](https://en.wikipedia.org/wiki/Python_%28programming_language%29#History), the same can be achieved with:
```
print "{}/100".format(k), "\r",
``` |
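Applied to the question's countdown loop, a minimal self-contained sketch (Python 3, writing to `sys.stdout` directly):

```python
import sys
import time

for k in range(1, 101):
    # '\r' sends the cursor back to column 0; the next write overwrites the line
    sys.stdout.write("\r{}/100".format(k))
    sys.stdout.flush()
    time.sleep(0.001)
sys.stdout.write("\nready or not here I come!\n")
```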
running a python package after compiling and uploading to pypicloud server | 30,205,298 | 12 | 2015-05-13T03:44:21Z | 30,271,399 | 9 | 2015-05-16T03:01:26Z | [
"python",
"pypi"
] | Folks,
After building and deploying a package called `myShtuff` to a local pypicloud server, I am able to install it into a separate virtual env.
Everything seems to work, except for the path of the executable...
```
(venv)[ec2-user@ip-10-0-1-118 ~]$ pip freeze
Fabric==1.10.1
boto==2.38.0
myShtuff==0.1
ecdsa==0.13
paramiko==1.15.2
pycrypto==2.6.1
wsgiref==0.1.2
```
If I try running the script directly, I get:
```
(venv)[ec2-user@ip-10-0-1-118 ~]$ myShtuff
-bash: myShtuff: command not found
```
However, I can run it via:
```
(venv)[ec2-user@ip-10-0-1-118 ~]$ python /home/ec2-user/venv/lib/python2.7/site-packages/myShtuff/myShtuff.py
..works
```
Am I making a mistake when building the package? Somewhere in setup.cfg or setup.py?
Thanks!!! | You need a `__main__.py` in your package, and an entry point defined in setup.py.
See [here](https://pythonhosted.org/setuptools/setuptools.html#automatic-script-creation) and [here](https://chriswarrick.com/blog/2014/09/15/python-apps-the-right-way-entry_points-and-scripts/), but in short: your `__main__.py` runs your main functionality when the module is invoked with `python -m`, and setuptools can expose arbitrary functions as command-line scripts. You can do either or both. Your `__main__.py` looks like:
```
from .stuff import my_main_func

if __name__ == "__main__":
    my_main_func()
```
and in setup.py:
```
entry_points={
    'console_scripts': [
        'myShtuffscript = myShtuff.stuff:my_main_func'
    ]
}
```
Here, `myShtuffscript` is whatever you want the executable to be called, `myShtuff` the name of your package, `stuff` the name of file in the package (`myShtuff/stuff.py`), and `my_main_func` the name of a function in that file. |
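For completeness, a hypothetical full `setup.py` tying the two pieces together (all names are the example names from above, not from the asker's real project):

```python
# Hypothetical complete setup.py -- package, module, and function names
# are the illustrative ones from this answer.
from setuptools import setup, find_packages

setup(
    name="myShtuff",
    version="0.1",
    packages=find_packages(),
    entry_points={
        "console_scripts": [
            "myShtuffscript = myShtuff.stuff:my_main_func",
        ]
    },
)
```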
Optimize Double loop in python | 30,211,336 | 7 | 2015-05-13T09:53:15Z | 30,215,273 | 7 | 2015-05-13T12:48:12Z | [
"python",
"performance",
"loops",
"numpy",
"numba"
] | I am trying to optimize the following loop :
```
def numpy(nx, nz, c, rho):
    for ix in range(2, nx-3):
        for iz in range(2, nz-3):
            a[ix, iz] = sum(c*rho[ix-1:ix+3, iz])
            b[ix, iz] = sum(c*rho[ix-2:ix+2, iz])
    return a, b
```
I tried different solutions and found using numba to calculate the sum of the product leads to better performances:
```
import numpy as np
import numba as nb
import time

@nb.autojit
def sum_opt(arr1, arr2):
    s = arr1[0]*arr2[0]
    for i in range(1, len(arr1)):
        s += arr1[i]*arr2[i]
    return s

def numba1(nx, nz, c, rho):
    for ix in range(2, nx-3):
        for iz in range(2, nz-3):
            a[ix, iz] = sum_opt(c, rho[ix-1:ix+3, iz])
            b[ix, iz] = sum_opt(c, rho[ix-2:ix+2, iz])
    return a, b

@nb.autojit
def numba2(nx, nz, c, rho):
    for ix in range(2, nx-3):
        for iz in range(2, nz-3):
            a[ix, iz] = sum_opt(c, rho[ix-1:ix+3, iz])
            b[ix, iz] = sum_opt(c, rho[ix-2:ix+2, iz])
    return a, b

nx = 1024
nz = 256
rho = np.random.rand(nx, nz)
c = np.random.rand(4)
a = np.zeros((nx, nz))
b = np.zeros((nx, nz))

ti = time.clock()
a, b = numpy(nx, nz, c, rho)
print 'Time numpy : ' + `round(time.clock() - ti, 4)`

ti = time.clock()
a, b = numba1(nx, nz, c, rho)
print 'Time numba1 : ' + `round(time.clock() - ti, 4)`

ti = time.clock()
a, b = numba2(nx, nz, c, rho)
print 'Time numba2 : ' + `round(time.clock() - ti, 4)`
```
This leads to
> Time numpy : 4.1595
>
> Time numba1 : 0.6993
>
> Time numba2 : 1.0135
Using the numba version of the sum function (sum\_opt) performs very well. But I am wondering why the numba version of the double loop function (numba2) leads to slower execution times. I tried to use jit instead of autojit, specifying the argument types, but it was worse.
I also noticed that looping first on the smallest loop is slower than looping first on the biggest loop. Is there any explanation ?
Whatever the reason, I am sure this double loop function can be improved a lot by vectorizing the problem (like [this](http://stackoverflow.com/questions/8299891/vectorization-of-this-numpy-double-loop)) or using another method (`map`?), but I am a little bit confused about these methods.
In the other parts of my code, I used numba and numpy slicing methods to replace all explicit loops, but in this particular case I don't know how to set it up.
Any ideas ?
**EDIT**
Thanks for all your comments. I worked a little on this problem:
```
import numba as nb
import numpy as np
from scipy import signal
import time

@nb.jit(['float64(float64[:], float64[:])'], nopython=True)
def sum_opt(arr1, arr2):
    s = arr1[0]*arr2[0]
    for i in xrange(1, len(arr1)):
        s += arr1[i]*arr2[i]
    return s

@nb.autojit
def numba1(nx, nz, c, rho, a, b):
    for ix in range(2, nx-3):
        for iz in range(2, nz-3):
            a[ix, iz] = sum_opt(c, rho[ix-1:ix+3, iz])
            b[ix, iz] = sum_opt(c, rho[ix-2:ix+2, iz])
    return a, b

@nb.jit(nopython=True)
def numba2(nx, nz, c, rho, a, b):
    for ix in range(2, nx-3):
        for iz in range(2, nz-3):
            a[ix, iz] = sum_opt(c, rho[ix-1:ix+3, iz])
            b[ix, iz] = sum_opt(c, rho[ix-2:ix+2, iz])
    return a, b

@nb.jit(['float64[:,:](int16, int16, float64[:], float64[:,:], float64[:,:])'], nopython=True)
def numba3a(nx, nz, c, rho, a):
    for ix in range(2, nx-3):
        for iz in range(2, nz-3):
            a[ix, iz] = sum_opt(c, rho[ix-1:ix+3, iz])
    return a

@nb.jit(['float64[:,:](int16, int16, float64[:], float64[:,:], float64[:,:])'], nopython=True)
def numba3b(nx, nz, c, rho, b):
    for ix in range(2, nx-3):
        for iz in range(2, nz-3):
            b[ix, iz] = sum_opt(c, rho[ix-2:ix+2, iz])
    return b

def convol(nx, nz, c, aa, bb):
    s1 = rho[1:nx-1, 2:nz-3]
    s2 = rho[0:nx-2, 2:nz-3]
    kernel = c[:, None][::-1]
    aa[2:nx-3, 2:nz-3] = signal.convolve2d(s1, kernel, boundary='symm', mode='valid')
    bb[2:nx-3, 2:nz-3] = signal.convolve2d(s2, kernel, boundary='symm', mode='valid')
    return aa, bb

nx = 1024
nz = 256
rho = np.random.rand(nx, nz)
c = np.random.rand(4)
a = np.zeros((nx, nz))
b = np.zeros((nx, nz))

ti = time.clock()
for i in range(1000):
    a, b = numba1(nx, nz, c, rho, a, b)
print 'Time numba1 : ' + `round(time.clock() - ti, 4)`

ti = time.clock()
for i in range(1000):
    a, b = numba2(nx, nz, c, rho, a, b)
print 'Time numba2 : ' + `round(time.clock() - ti, 4)`

ti = time.clock()
for i in range(1000):
    a = numba3a(nx, nz, c, rho, a)
    b = numba3b(nx, nz, c, rho, b)
print 'Time numba3 : ' + `round(time.clock() - ti, 4)`

ti = time.clock()
for i in range(1000):
    a, b = convol(nx, nz, c, a, b)
print 'Time convol : ' + `round(time.clock() - ti, 4)`
```
Your solution is very elegant Divakar, but I have to use this function a large number of times in my code. So, for 1000 iterations, this leads to
> Time numba1 : 3.2487
>
> Time numba2 : 3.7012
>
> Time numba3 : 3.2088
>
> Time convol : 22.7696
`autojit` and `jit` are very close.
However, when using `jit`, it seems important to specify all argument types.
I do not know if there is a way to specify argument types in the `jit` decorator when the function has multiple outputs. Someone?
For now I have not found any solution other than using numba. New ideas are welcome! | You are basically performing 2D convolution there, with one small difference: your kernel is not reversed the way it would be in the usual [`convolution`](http://en.wikipedia.org/wiki/Convolution) operation.
So, basically, there are two things we need to do here to use [`signal.convolve2d`](http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.signal.convolve2d.html) to solve our case -
* Slice the input array `rho` to select a portion of it which is used in the original loopy version of your code. This would be the input data to convolution.
* Reverse the kernel, `c`, and feed it along with the sliced data to `signal.convolve2d`.
Please note that these are to be done for calculation of both `a` and `b` separately.
Here's the implementation -
```
import numpy as np
from scipy import signal
# Slices for convolutions to get a and b respectively
s1 = rho[1:nx-1,2:nz-3]
s2 = rho[0:nx-2,2:nz-3]
kernel = c[:,None][::-1] # convolution kernel
# Setup output arrays and fill them with convolution results
a = np.zeros((nx, nz))
b = np.zeros((nx, nz))
a[2:nx-3,2:nz-3] = signal.convolve2d(s1, kernel, boundary='symm', mode='valid')
b[2:nx-3,2:nz-3] = signal.convolve2d(s2, kernel, boundary='symm', mode='valid')
```
If you don't need the extra zeros around the boundaries of the output arrays, you could simply use the outputs from `signal.convolve2d` as they are, which should further boost the performance.
**Runtime tests**
```
In [532]: %timeit loop_based(nx, nz, c, rho)
1 loops, best of 3: 1.52 s per loop
In [533]: %timeit numba1(nx, nz, c, rho)
1 loops, best of 3: 282 ms per loop
In [534]: %timeit numba2(nx, nz, c, rho)
1 loops, best of 3: 509 ms per loop
In [535]: %timeit conv_based(nx, nz, c, rho)
10 loops, best of 3: 15.5 ms per loop
```
So, for the actual input datasize, the proposed convolution based approach is about **`100x`** faster than the loopy code and about **`20x`** better than the fastest `numba` based approach `numba1`. |
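As a side note, the kernel reversal mentioned above is easy to verify in 1D with plain numpy (a small hedged sketch, not part of the original benchmark): a loop computing `sum(c * x[i:i+len(c)])` is a *correlation*, so `np.convolve` needs the reversed kernel to reproduce it.

```python
import numpy as np

c = np.array([1.0, 2.0, 3.0, 4.0])   # 4-tap kernel, like the `c` above
x = np.random.rand(20)

# Loop-based sliding dot product (a correlation, not a convolution)
loop = np.array([np.dot(c, x[i:i + 4]) for i in range(len(x) - 3)])

# np.convolve flips its kernel internally, so pass the reversed c to match
conv = np.convolve(x, c[::-1], mode='valid')

assert np.allclose(loop, conv)
```

The same identity is what makes `c[:,None][::-1]` the right kernel for `signal.convolve2d` above.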
What is the difference between pandas.qcut and pandas.cut? | 30,211,923 | 9 | 2015-05-13T10:18:25Z | 30,214,901 | 13 | 2015-05-13T12:33:00Z | [
"python",
"pandas"
] | The documentation says:
<http://pandas.pydata.org/pandas-docs/dev/basics.html>
"Continuous values can be discretized using the cut (bins based on values) and qcut (bins based on sample quantiles) functions"
Sounds very abstract to me... I can see the differences in the example below but **what does qcut (sample quantile) actually do/mean? When would you use qcut versus cut?**
Thanks.
```
factors = np.random.randn(30)
In [11]:
pd.cut(factors, 5)
Out[11]:
[(-0.411, 0.575], (-0.411, 0.575], (-0.411, 0.575], (-0.411, 0.575], (0.575, 1.561], ..., (-0.411, 0.575], (-1.397, -0.411], (0.575, 1.561], (-2.388, -1.397], (-0.411, 0.575]]
Length: 30
Categories (5, object): [(-2.388, -1.397] < (-1.397, -0.411] < (-0.411, 0.575] < (0.575, 1.561] < (1.561, 2.547]]
In [14]:
pd.qcut(factors, 5)
Out[14]:
[(-0.348, 0.0899], (-0.348, 0.0899], (0.0899, 1.19], (0.0899, 1.19], (0.0899, 1.19], ..., (0.0899, 1.19], (-1.137, -0.348], (1.19, 2.547], [-2.383, -1.137], (-0.348, 0.0899]]
Length: 30
Categories (5, object): [[-2.383, -1.137] < (-1.137, -0.348] < (-0.348, 0.0899] < (0.0899, 1.19] < (1.19, 2.547]]
``` | To begin, note that quantile is just the most general term for things like percentiles, quartiles, and medians. You specified five bins in your example, so you are asking `qcut` for quintiles.
So, when you ask for quintiles with `qcut`, the bins will be chosen so that you have the same number of records in each bin. You have 30 records, so you should have 6 in each bin (your output should look like this, although the breakpoints will differ due to the random draw):
```
pd.qcut(factors, 5).value_counts()
[-2.578, -0.829] 6
(-0.829, -0.36] 6
(-0.36, 0.366] 6
(0.366, 0.868] 6
(0.868, 2.617] 6
```
Conversely, for `cut` you will see something more uneven:
```
pd.cut(factors, 5).value_counts()
(-2.583, -1.539] 5
(-1.539, -0.5] 5
(-0.5, 0.539] 9
(0.539, 1.578] 9
(1.578, 2.617] 2
```
That's because `cut` will choose the bins to be evenly spaced according to the values themselves and not the *frequency* of those values. Hence, because you drew from a random normal, you'll see higher frequencies in the inner bins and fewer in the outer. This is essentially going to be a tabular form of a histogram (which you would expect to be fairly bell shaped with 30 records). |
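To make the contrast reproducible (fixed seed is my addition, not from the original post), pinning the random state shows that `qcut` always hits equal frequencies while `cut` generally does not:

```python
import numpy as np
import pandas as pd

# 30 draws from a standard normal, reproducible
factors = pd.Series(np.random.RandomState(0).randn(30))

q_counts = pd.qcut(factors, 5).value_counts()  # equal-frequency bins (quintiles)
c_counts = pd.cut(factors, 5).value_counts()   # equal-width bins

assert (q_counts == 6).all()   # 30 records / 5 quantile bins = 6 per bin
assert c_counts.sum() == 30    # same records, but uneven per-bin counts
```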
Sort two lists in python? | 30,212,452 | 3 | 2015-05-13T10:40:56Z | 30,212,576 | 7 | 2015-05-13T10:46:17Z | [
"python",
"list",
"sorting"
] | I am counting some occurrences of words in a text, and I have two lists : the first contains the words, the second contains the occurrences.
So at the end of the analysis I have something like
```
listWords : ["go", "make", "do", "some", "lot"]
listOccurrences: [2, 4, 8, 1, 5]
```
And I want to sort those two lists following listOccurrences DESC, so I would have :
```
listWords : ["do", "lot", "make", "go", "some"]
listOccurrences: [8, 5, 4, 2, 1]
```
Is there any way I can do this? Or do you know any other way more "natural" than two lists? (Like a single "list" where every occurrence is referenced by a word) | ```
>>> listWords = ["go", "make", "do", "some", "lot"]
>>> listOccurrences = [2, 4, 8, 1, 5]
>>> listTmp = zip(listOccurrences, listWords)
>>> listTmp
[(2, 'go'), (4, 'make'), (8, 'do'), (1, 'some'), (5, 'lot')]
>>> listTmp.sort(reverse=True)
>>> listTmp
[(8, 'do'), (5, 'lot'), (4, 'make'), (2, 'go'), (1, 'some')]
>>> zip(*listTmp)
[(8, 5, 4, 2, 1), ('do', 'lot', 'make', 'go', 'some')]
>>> listOccurrences, listWord = zip(*listTmp)
```
Note that the obvious data type for key:value pairs (here: word:count) is a `dict`. FWIW you may want to have a look at `collections.Counter`.
Edit: For the sake of completeness, you can also use the builtin `sorted()` function instead of `list.sort()` if you want to cram all this into a single-line statement (which might not be such a good idea with regard to readability, but that's another story):
```
>>> listWords = ["go", "make", "do", "some", "lot"]
>>> listOccurrences = [2, 4, 8, 1, 5]
>>> listOccurrences, listWords = zip(*sorted(zip(listOccurrences, listWords), reverse=True))
>>> listWords
('do', 'lot', 'make', 'go', 'some')
>>> listOccurrences
(8, 5, 4, 2, 1)
``` |
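As a concrete sketch of the `collections.Counter` suggestion above (the word list here is made up to match the counts in the question), counting and sorting collapse into a single step:

```python
from collections import Counter

# Hypothetical text already split into words
words = ["do"] * 8 + ["lot"] * 5 + ["make"] * 4 + ["go"] * 2 + ["some"] * 1

counts = Counter(words)
# most_common() returns (word, count) pairs sorted by count, descending
print(counts.most_common())
# -> [('do', 8), ('lot', 5), ('make', 4), ('go', 2), ('some', 1)]
```

This avoids maintaining two parallel lists altogether.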
How do I compare two Python Pandas Series of different lengths? | 30,214,328 | 6 | 2015-05-13T12:07:34Z | 30,214,389 | 10 | 2015-05-13T12:10:24Z | [
"python",
"pandas"
] | I have two Series of different lengths, and I want to get the indices for which **both the indices *and* the amount** are the same in both series.
Here are the Series:
```
ipdb> s1
s1
000007720 2000.00
group1 -3732.05
group t3 2432.12
group2 -38147.87
FSHLAJ -36711.09
EWkayuwo -3.22
Name: amount, dtype: float64
ipdb> s2
s2
000007720 2000.00
group1 -3732.05
group z 12390.00
group y 68633.43
group x 25.00
group w 3913.00
group v -12750.50
group u -53.49
group t -7500.00
group s -1575.82
group r -10.00
group q 1800.00
group p -4510.34
EWFhjkaQU 455.96
group2 -38147.87
FSHLAJ -36711.09
GEKWJ 5.54
Name: amount, dtype: float64
```
When I try to compare them, I get:
```
ipdb>s1 == s2
*** ValueError: Series lengths must match to compare
```
How can I achieve my objective? | You want to use [`isin`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html#pandas.Series.isin):
```
In [121]:
s2[s2.isin(s1)]
Out[121]:
000007720
group1 -3732.05
group2 -38147.87
FSHLAJ -36711.09
Name: 2000.00, dtype: float64
```
I don't know which way round you wanted to perform the comparison, here is the other way:
```
In [122]:
s1[s1.isin(s2)]
Out[122]:
000007720
group1 -3732.05
group2 -38147.87
FSHLAJ -36711.09
Name: 2000.00, dtype: float64
```
The problem with trying to do `s1 == s2` is that it doesn't make sense comparing Series or arrays of different lengths.
If you want the indices to match also then add this as a condition:
```
In [131]:
s1[(s1.index.isin(s2.index)) & (s1.isin(s2))]
Out[131]:
000007720
group1 -3732.05
group2 -38147.87
FSHLAJ -36711.09
Name: 2000.00, dtype: float64
``` |
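A hedged alternative sketch (assuming unique index labels, with made-up data in the spirit of the question): align one Series onto the other's index with `reindex`, so both the label *and* the value must match in one expression — unmatched labels become NaN, which never compares equal:

```python
import pandas as pd

s1 = pd.Series([2000.00, -3732.05, 2432.12],
               index=['000007720', 'group1', 'group t3'])
s2 = pd.Series([2000.00, -3732.05, 12390.00],
               index=['000007720', 'group1', 'group z'])

# s2 reindexed onto s1's labels; 'group t3' becomes NaN and can't match
matches = s1[s1 == s2.reindex(s1.index)]
print(matches.index.tolist())   # ['000007720', 'group1']
```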
Basics of recursion in Python | 30,214,531 | 13 | 2015-05-13T12:18:00Z | 30,214,677 | 44 | 2015-05-13T12:24:12Z | [
"python",
"list",
"python-2.7",
"recursion"
] | > "Write a recursive function, "listSum" that takes a list of integers and returns the sum of all integers in the list".
Example:
```
>>>> listSum([1,3,4,5,6])
19
```
I know how to do this another way but not in the recursive way.
```
def listSum(ls):
    i = 0
    s = 0
    while i < len(ls):
        s = s + ls[i]
        i = i + 1
    print s
```
I need the basic way to do this since special built-in functions is not allowed. | Whenever you face a problem like this, try to express the result of the function with the same function.
In your case, you can get the result by adding the first number to the result of calling the same function with the rest of the elements in the list.
For example,
```
listSum([1, 3, 4, 5, 6]) = 1 + listSum([3, 4, 5, 6])
                         = 1 + (3 + listSum([4, 5, 6]))
                         = 1 + (3 + (4 + listSum([5, 6])))
                         = 1 + (3 + (4 + (5 + listSum([6]))))
                         = 1 + (3 + (4 + (5 + (6 + listSum([])))))
```
Now, what should be the result of `listSum([])`? It should be 0. That is called the *base condition* of your recursion. When the base condition is met, the recursion comes to an end. Now, let's try to implement it.
The main thing here is, splitting the list. You can use [slicing](http://stackoverflow.com/a/509295/1903116) to do that.
**Simple version**
```
>>> def listSum(ls):
...     # Base condition
...     if not ls:
...         return 0
...
...     # First element + result of calling `listsum` with rest of the elements
...     return ls[0] + listSum(ls[1:])
>>>
>>> listSum([1, 3, 4, 5, 6])
19
```
---
**Tail Call Recursion**
Once you understand how the above recursion works, you can try to make it a little bit better. Now, to find the actual result, we depend on the value of the previous function call too. The `return` statement cannot immediately return the value until the recursive call returns a result. We can avoid this by passing the current result as a function parameter, like this
```
>>> def listSum(ls, result):
...     if not ls:
...         return result
...     return listSum(ls[1:], result + ls[0])
...
>>> listSum([1, 3, 4, 5, 6], 0)
19
```
Here, we pass the initial value of the sum as a parameter, which is zero in `listSum([1, 3, 4, 5, 6], 0)`. Then, when the base condition is met, we are actually accumulating the sum in the `result` parameter, so we return it. Now, the last `return` statement has `listSum(ls[1:], result + ls[0])`, where we add the first element to the current `result` and pass it again to the recursive call.
This might be a good time to understand [Tail Call](http://en.wikipedia.org/wiki/Tail_call). It would not be relevant to Python, as it doesn't do Tail call optimization.
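For instance (my own small demo, not part of the original answer), the tail-recursive version above still blows the stack on a long list, because CPython keeps one frame per call; in Python 3 the exception is `RecursionError` (Python 2 raises a `RuntimeError` instead):

```python
import sys

def listSum(ls, result=0):
    if not ls:
        return result
    return listSum(ls[1:], result + ls[0])

print(sys.getrecursionlimit())     # typically 1000

try:
    listSum(list(range(5000)))     # needs ~5000 stack frames
except RecursionError:             # RuntimeError on Python 2
    print("tail call was not optimized away")
```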
---
**Passing around index version**
Now, you might think that we are creating so many intermediate lists. Can I avoid that?
Of course, you can. You just need the index of the item to be processed next. But now, the base condition will be different. Since we are going to be passing an index, how will we determine when the entire list has been processed? Well, if the index equals the length of the list, then we have processed all the elements in it.
```
>>> def listSum(ls, index, result):
...     # Base condition
...     if index == len(ls):
...         return result
...
...     # Call with next index and add the current element to result
...     return listSum(ls, index + 1, result + ls[index])
...
>>> listSum([1, 3, 4, 5, 6], 0, 0)
19
```
---
**Inner function version**
If you look at the function definition now, you are passing three parameters to it. Let's say you are going to release this function as an API. Will it be convenient for the users to pass three values, when they actually want to find the sum of a list?
Nope. What can we do about it? We can create another function, which is local to the actual `listSum` function, and we can pass all the implementation-related parameters to it, like this
```
>>> def listSum(ls):
...
...     def recursion(index, result):
...         if index == len(ls):
...             return result
...         return recursion(index + 1, result + ls[index])
...
...     return recursion(0, 0)
...
>>> listSum([1, 3, 4, 5, 6])
19
```
Now, when `listSum` is called, it just returns the return value of the inner `recursion` function, which accepts the `index` and `result` parameters. Now we are the ones passing those values, not the users of `listSum`. They just have to pass the list to be processed.
In this case, if you observe the parameters, we are not passing `ls` to `recursion` but we are using it inside it. `ls` is accessible inside `recursion` because of the closure property.
---
**Default parameters version**
Now, if you want to keep it simple, without creating an inner function, you can make use of the default parameters, like this
```
>>> def listSum(ls, index=0, result=0):
...     # Base condition
...     if index == len(ls):
...         return result
...
...     # Call with next index and add the current element to result
...     return listSum(ls, index + 1, result + ls[index])
...
>>> listSum([1, 3, 4, 5, 6])
19
```
Now, if the caller doesn't explicitly pass any value, then `0` will be assigned to both `index` and `result`.
---
**Recursive Power problem**
Now, let's apply the ideas to a different problem. For example, let's try to implement the `power(base, exponent)` function. It would return the value of `base` raised to the power `exponent`.
```
power(2, 5) = 32
power(5, 2) = 25
power(3, 4) = 81
```
Now, how can we do this recursively? Let us try to understand how those results are achieved.
```
power(2, 5) = 2 * 2 * 2 * 2 * 2 = 32
power(5, 2) = 5 * 5 = 25
power(3, 4) = 3 * 3 * 3 * 3 = 81
```
Hmmm, so we get the idea. The `base` multiplied by itself `exponent` times gives the result. Okay, how do we approach it? Let's try to define the solution with the same function.
```
power(2, 5) = 2 * power(2, 4)
            = 2 * (2 * power(2, 3))
            = 2 * (2 * (2 * power(2, 2)))
            = 2 * (2 * (2 * (2 * power(2, 1))))
What should be the result of anything raised to the power 1? The result will be the same number, right? We've got the base condition for our recursion :-)
```
            = 2 * (2 * (2 * (2 * 2)))
            = 2 * (2 * (2 * 4))
            = 2 * (2 * 8)
            = 2 * 16
            = 32
```
Alright, let's implement it.
```
>>> def power(base, exponent):
...     # Base condition, if `exponent` is lesser than or equal to 1, return `base`
...     if exponent <= 1:
...         return base
...
...     return base * power(base, exponent - 1)
...
>>> power(2, 5)
32
>>> power(5, 2)
25
>>> power(3, 4)
81
```
Okay, how do we define the tail-call-optimized version of it? Let's pass the current result as a parameter to the function itself and return the result when the base condition is met. Let's keep it simple and use the default parameter approach directly.
```
>>> def power(base, exponent, result=1):
...     # Since we start with `1`, base condition would be exponent reaching 0
...     if exponent <= 0:
...         return result
...
...     return power(base, exponent - 1, result * base)
...
>>> power(2, 5)
32
>>> power(5, 2)
25
>>> power(3, 4)
81
```
Now, we reduce the `exponent` value in every recursive call and multiply `result` by `base`, passing it to the recursive `power` call. We start with the value `1`, because we are approaching the problem in reverse. The recursion will happen like this
```
power(2, 5, 1) = power(2, 4, 1 * 2)
               = power(2, 4, 2)
               = power(2, 3, 2 * 2)
               = power(2, 3, 4)
               = power(2, 2, 4 * 2)
               = power(2, 2, 8)
               = power(2, 1, 8 * 2)
               = power(2, 1, 16)
               = power(2, 0, 16 * 2)
               = power(2, 0, 32)
```
Since `exponent` becomes zero, the base condition is met and the `result` will be returned, so we get `32` :-) |
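One final variation (my addition, not from the original answer): since Python does not optimize tail calls, a list longer than the recursion limit will crash every version above. Splitting the list in half keeps the recursion depth at O(log n) instead of O(n):

```python
def listSum(ls):
    # Divide and conquer: each call halves the list, so a list of
    # 100,000 elements only needs a recursion depth of about 17
    if not ls:
        return 0
    if len(ls) == 1:
        return ls[0]
    mid = len(ls) // 2
    return listSum(ls[:mid]) + listSum(ls[mid:])

print(listSum([1, 3, 4, 5, 6]))      # 19
print(listSum(list(range(100000))))  # works, no recursion limit hit
```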
Errata (erasures+errors) Berlekamp-Massey for Reed-Solomon decoding | 30,215,337 | 23 | 2015-05-13T12:50:39Z | 30,468,399 | 12 | 2015-05-26T20:35:24Z | [
"python",
"math",
"error-correction",
"galois-field",
"reed-solomon"
] | I am trying to implement a Reed-Solomon encoder-decoder in Python supporting the decoding of both erasures and errors, and that's driving me crazy.
The implementation currently supports decoding only errors or only erasures, but not both at the same time (even if it's below the theoretical bound of 2\*errors+erasures <= (n-k) ).
From Blahut's papers ([here](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.92.600&rep=rep1&type=pdf) and [here](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.2084&rep=rep1&type=pdf)), it seems we only need to initialize the error locator polynomial with the erasure locator polynomial to implicitly compute the errata locator polynomial inside Berlekamp-Massey.
This approach partially works for me: when I have 2\*errors+erasures < (n-k)/2 it works, but in fact after debugging it only works because BM computes an errors locator polynomial that gets the exact same value as the erasure locator polynomial (because we are below the limit for errors-only correction), and thus it is truncated via galois fields and we end up with the correct value of the erasure locator polynomial (at least that's how I understand it, I may be wrong).
However, when we go above (n-k)/2, for example if n = 20 and k = 11, thus we have (n-k)=9 erased symbols we can correct, if we feed in 5 erasures then BM just goes wrong. If we feed in 4 erasures + 1 error (we are still well below the bound since we have 2\*errors+erasures = 2+4 = 6 < 9), the BM still goes wrong.
The exact algorithm of Berlekamp-Massey I implemented can be found in [this presentation](https://www.cs.duke.edu/courses/spring11/cps296.3/decoding_rs.pdf) (pages 15-17), but a very similar description can be found [here](http://web.udl.es/usuaris/carlesm/docencia/xc1/Treballs/ReedSolomon.Treball.pdf) and [here](http://www.ijedr.org/papers/IJEDR1401047.pdf), and here I attach a copy of the mathematical description:

Now, I have an almost exact reproduction of this mathematical algorithm in Python code. What I would like is to extend it to support erasures, which I tried to do by initializing the error locator sigma with the erasure locator:
```
def _berlekamp_massey(self, s, k=None, erasures_loc=None):
    '''Computes and returns the error locator polynomial (sigma) and the
    error evaluator polynomial (omega).
    If the erasures locator is specified, we will return an errors-and-erasures locator polynomial and an errors-and-erasures evaluator polynomial.
    The parameter s is the syndrome polynomial (syndromes encoded in a
    generator function) as returned by _syndromes. Don't be confused with
    the other s = (n-k)/2

    Notes:
    The error polynomial:
        E(x) = E_0 + E_1 x + ... + E_(n-1) x^(n-1)

        j_1, j_2, ..., j_s are the error positions. (There are at most s
        errors)

        Error location X_i is defined: X_i = a^(j_i)
        that is, the power of a corresponding to the error location

        Error magnitude Y_i is defined: E_(j_i)
        that is, the coefficient in the error polynomial at position j_i

        Error locator polynomial:
        sigma(z) = Product( 1 - X_i * z, i=1..s )
        roots are the reciprocals of the error locations
        ( 1/X_1, 1/X_2, ...)

    Error evaluator polynomial omega(z) is here computed at the same time as sigma, but it can also be constructed afterwards using the syndrome and sigma (see _find_error_evaluator() method).
    '''
    # For errors-and-erasures decoding, see: Blahut, Richard E. "Transform techniques for error control codes." IBM Journal of Research and development 23.3 (1979): 299-315. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.92.600&rep=rep1&type=pdf and also a MatLab implementation here: http://www.mathworks.com/matlabcentral/fileexchange/23567-reed-solomon-errors-and-erasures-decoder/content//RS_E_E_DEC.m
    # also see: Blahut, Richard E. "A universal Reed-Solomon decoder." IBM Journal of Research and Development 28.2 (1984): 150-158. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.2084&rep=rep1&type=pdf
    # or alternatively see the reference book by Blahut: Blahut, Richard E. Theory and practice of error control codes. Addison-Wesley, 1983.
    # and another good alternative book with concrete programming examples: Jiang, Yuan. A practical guide to error-control coding using Matlab. Artech House, 2010.
    n = self.n
    if not k: k = self.k

    # Initialize:
    if erasures_loc:
        sigma = [ Polynomial(erasures_loc.coefficients) ] # copy erasures_loc by creating a new Polynomial
        B = [ Polynomial(erasures_loc.coefficients) ]
    else:
        sigma = [ Polynomial([GF256int(1)]) ] # error locator polynomial. Also called Lambda in other notations.
        B = [ Polynomial([GF256int(1)]) ] # this is the error locator support/secondary polynomial, which is a funky way to say that it's just a temporary variable that will help us construct sigma, the error locator polynomial
    omega = [ Polynomial([GF256int(1)]) ] # error evaluator polynomial. We don't need to initialize it with erasures_loc, it will still work, because Delta is computed using sigma, which itself is correctly initialized with erasures if needed.
    A = [ Polynomial([GF256int(0)]) ] # this is the error evaluator support/secondary polynomial, to help us construct omega
    L = [ 0 ] # necessary variable to check bounds (to avoid wrongly eliminating the higher order terms). For more infos, see https://www.cs.duke.edu/courses/spring11/cps296.3/decoding_rs.pdf
    M = [ 0 ] # optional variable to check bounds (so that we do not mistakenly overwrite the higher order terms). This is not necessary, it's only an additional safe check. For more infos, see the presentation decoding_rs.pdf by Andrew Brown in the doc folder.

    # Polynomial constants:
    ONE = Polynomial(z0=GF256int(1))
    ZERO = Polynomial(z0=GF256int(0))
    Z = Polynomial(z1=GF256int(1)) # used to shift polynomials, simply multiply your poly * Z to shift

    s2 = ONE + s

    # Iteratively compute the polynomials 2s times. The last ones will be
    # correct
    for l in xrange(0, n-k):
        K = l+1
        # Goal for each iteration: Compute sigma[K] and omega[K] such that
        # (1 + s)*sigma[l] == omega[l] in mod z^(K)

        # For this particular loop iteration, we have sigma[l] and omega[l],
        # and are computing sigma[K] and omega[K]

        # First find Delta, the non-zero coefficient of z^(K) in
        # (1 + s) * sigma[l]
        # This delta is valid for l (this iteration) only
        Delta = ( s2 * sigma[l] ).get_coefficient(l+1) # Delta is also known as the Discrepancy, and is always a scalar (not a polynomial).
        # Make it a polynomial of degree 0, just for ease of computation with polynomials sigma and omega.
        Delta = Polynomial(x0=Delta)

        # Can now compute sigma[K] and omega[K] from
        # sigma[l], omega[l], B[l], A[l], and Delta
        sigma.append( sigma[l] - Delta * Z * B[l] )
        omega.append( omega[l] - Delta * Z * A[l] )

        # Now compute the next B and A
        # There are two ways to do this
        # This is based on a messy case analysis on the degrees of the four polynomials sigma, omega, A and B in order to minimize the degrees of A and B. For more infos, see https://www.cs.duke.edu/courses/spring10/cps296.3/decoding_rs_scribe.pdf
        # In fact it ensures that the degree of the final polynomials aren't too large.
        if Delta == ZERO or 2*L[l] > K \
                or (2*L[l] == K and M[l] == 0):
            # Rule A
            B.append( Z * B[l] )
            A.append( Z * A[l] )
            L.append( L[l] )
            M.append( M[l] )
        elif (Delta != ZERO and 2*L[l] < K) \
                or (2*L[l] == K and M[l] != 0):
            # Rule B
            B.append( sigma[l] // Delta )
            A.append( omega[l] // Delta )
            L.append( K - L[l] )
            M.append( 1 - M[l] )
        else:
            raise Exception("Code shouldn't have gotten here")

    return sigma[-1], omega[-1]
```
Polynomial and GF256int are generic implementation of, respectively, polynomials and galois fields over 2^8. These classes are unit tested and they are, normally, bug proof. Same goes for the rest of the encoding/decoding methods for Reed-Solomon such as Forney and Chien search. The full code with a quick test case for the issue I am talking here can be found here: <http://codepad.org/l2Qi0y8o>
Here's an example output:
```
Encoded message:
hello world�ê�Ī`>
-------
Erasures decoding:
Erasure locator: 189x^5 + 88x^4 + 222x^3 + 33x^2 + 251x + 1
Syndrome: 149x^9 + 113x^8 + 29x^7 + 231x^6 + 210x^5 + 150x^4 + 192x^3 + 11x^2 + 41x
Sigma: 189x^5 + 88x^4 + 222x^3 + 33x^2 + 251x + 1
Symbols positions that were corrected: [19, 18, 17, 16, 15]
('Decoded message: ', 'hello world', '\xce\xea\x90\x99\x8d\xc4\xaa`>')
Correctly decoded: True
-------
Errors+Erasures decoding for the message with only erasures:
Erasure locator: 189x^5 + 88x^4 + 222x^3 + 33x^2 + 251x + 1
Syndrome: 149x^9 + 113x^8 + 29x^7 + 231x^6 + 210x^5 + 150x^4 + 192x^3 + 11x^2 + 41x
Sigma: 101x^10 + 139x^9 + 5x^8 + 14x^7 + 180x^6 + 148x^5 + 126x^4 + 135x^3 + 68x^2 + 155x + 1
Symbols positions that were corrected: [187, 141, 90, 19, 18, 17, 16, 15]
('Decoded message: ', '\xf4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00.\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00P\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xe3\xe6\xffO> world', '\xce\xea\x90\x99\x8d\xc4\xaa`>')
Correctly decoded: False
-------
Errors+Erasures decoding for the message with erasures and one error:
Erasure locator: 77x^4 + 96x^3 + 6x^2 + 206x + 1
Syndrome: 49x^9 + 107x^8 + x^7 + 109x^6 + 236x^5 + 15x^4 + 8x^3 + 133x^2 + 243x
Sigma: 38x^9 + 98x^8 + 239x^7 + 85x^6 + 32x^5 + 168x^4 + 92x^3 + 225x^2 + 22x + 1
Symbols positions that were corrected: [19, 18, 17, 16]
('Decoded message: ', "\xda\xe1'\xccA world", '\xce\xea\x90\x99\x8d\xc4\xaa`>')
Correctly decoded: False
```
Here, the erasure decoding is always correct since it doesn't use BM at all to compute the erasure locator. Normally, the other two test cases should output the same sigma, but they simply don't.
The fact that the problem comes from BM is blatant here when you compare the first two test cases: the syndrome and the erasure locator are the same, but the resulting sigma is totally different (in the second test, BM is used, while in the first test case with erasures only BM is not called).
Thank you very much for any help or any idea on how I could debug this out. Note that your answers can be mathematical or code, but please explain what has gone wrong with my approach.
**/EDIT:** still didn't find how to correctly implement an errata BM decoder (see my answer below). The bounty is offered to anyone who can fix the issue (or at least guide me to the solution).
**/EDIT2:** silly me, sorry, I just re-read the schema and found that I missed the change in the assignment `L = r - L - erasures_count`... I have updated the code to fix that and re-accepted my answer. | After reading lots and lots of research papers and books, the only place where I have found the answer is in the book ([readable online on Google Books](https://books.google.fr/books?id=eQs2i-R9-oYC&lpg=PR11&ots=atCPQJm3OJ&dq=%22Algebraic%20codes%20for%20data%20transmission%22%2C%20Blahut%2C%20Richard%20E.%2C%202003%2C%20Cambridge%20university%20press.&lr&hl=fr&pg=PA193#v=onepage&q=%22Algebraic%20codes%20for%20data%20transmission%22,%20Blahut,%20Richard%20E.,%202003,%20Cambridge%20university%20press.&f=false), but not available as a PDF):
> "Algebraic codes for data transmission", Blahut, Richard E., 2003, Cambridge university press.
Here are some extracts of this book, which give the exact description (except for the matricial/vectorized representation of polynomial operations) of the Berlekamp-Massey algorithm I implemented:

And here is the errata (errors-and-erasures) Berlekamp-Massey algorithm for Reed-Solomon:

As you can see -- contrary to the usual description that you only **need to initialize Lambda, the errors locator polynomial, with the value of the previously computed erasures locator polynomial** -- you also need to skip the first v iterations, where v is the number of erasures. Note that it's not equivalent to skipping the last v iterations: **you need to skip the first v iterations**, because r (the iteration counter, K in my implementation) is used not only to count iterations but also to generate the correct discrepancy factor Delta.
Here is the resulting code with the modifications to support erasures as well as errors up to `v+2*e <= (n-k)`:
```
def _berlekamp_massey(self, s, k=None, erasures_loc=None, erasures_eval=None, erasures_count=0):
    '''Computes and returns the errata (errors+erasures) locator polynomial (sigma) and the
    error evaluator polynomial (omega) at the same time.
    If the erasures locator is specified, we will return an errors-and-erasures locator polynomial and an errors-and-erasures evaluator polynomial, else it will compute only errors. With erasures in addition to errors, it can simultaneously decode up to v+2e <= (n-k) where v is the number of erasures and e the number of errors.
    Mathematically speaking, this is equivalent to a spectral analysis (see Blahut, "Algebraic Codes for Data Transmission", 2003, chapter 7.6 Decoding in Time Domain).
    The parameter s is the syndrome polynomial (syndromes encoded in a
    generator function) as returned by _syndromes.

    Notes:
    The error polynomial:
        E(x) = E_0 + E_1 x + ... + E_(n-1) x^(n-1)

        j_1, j_2, ..., j_s are the error positions. (There are at most s
        errors)

        Error location X_i is defined: X_i = α^(j_i)
        that is, the power of α (alpha) corresponding to the error location

        Error magnitude Y_i is defined: E_(j_i)
        that is, the coefficient in the error polynomial at position j_i

        Error locator polynomial:
        sigma(z) = Product( 1 - X_i * z, i=1..s )
        roots are the reciprocals of the error locations
        ( 1/X_1, 1/X_2, ...)

    Error evaluator polynomial omega(z) is here computed at the same time as sigma, but it can also be constructed afterwards using the syndrome and sigma (see _find_error_evaluator() method).

    It can be seen that the algorithm tries to iteratively solve for the error locator polynomial by
    solving one equation after another and updating the error locator polynomial. If it turns out that it
    cannot solve the equation at some step, then it computes the error and weights it by the last
    non-zero discriminant found, and delays the weighted result to increase the polynomial degree
    by 1. Ref: "Reed Solomon Decoder: TMS320C64x Implementation" by Jagadeesh Sankaran, December 2000, Application Report SPRA686

    The best paper I found describing the BM algorithm for errata (errors-and-erasures) evaluator computation is in "Algebraic Codes for Data Transmission", Richard E. Blahut, 2003.
    '''
    # For errors-and-erasures decoding, see: "Algebraic Codes for Data Transmission", Richard E. Blahut, 2003 and (but it's less complete): Blahut, Richard E. "Transform techniques for error control codes." IBM Journal of Research and development 23.3 (1979): 299-315. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.92.600&rep=rep1&type=pdf and also a MatLab implementation here: http://www.mathworks.com/matlabcentral/fileexchange/23567-reed-solomon-errors-and-erasures-decoder/content//RS_E_E_DEC.m
    # also see: Blahut, Richard E. "A universal Reed-Solomon decoder." IBM Journal of Research and Development 28.2 (1984): 150-158. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.2084&rep=rep1&type=pdf
    # and another good alternative book with concrete programming examples: Jiang, Yuan. A practical guide to error-control coding using Matlab. Artech House, 2010.
    n = self.n
    if not k: k = self.k

    # Initialize, depending on if we include erasures or not:
    if erasures_loc:
        sigma = [ Polynomial(erasures_loc.coefficients) ] # copy erasures_loc by creating a new Polynomial, so that we initialize the errata locator polynomial with the erasures locator polynomial.
        B = [ Polynomial(erasures_loc.coefficients) ]
        omega = [ Polynomial(erasures_eval.coefficients) ] # to compute omega (the evaluator polynomial) at the same time, we also need to initialize it with the partial erasures evaluator polynomial
        A = [ Polynomial(erasures_eval.coefficients) ] # TODO: fix the initial value of the evaluator support polynomial, because currently the final omega is not correct (it contains higher order terms that should be removed by the end of BM)
    else:
        sigma = [ Polynomial([GF256int(1)]) ] # error locator polynomial. Also called Lambda in other notations.
B = [ Polynomial([GF256int(1)]) ] # this is the error locator support/secondary polynomial, which is a funky way to say that it's just a temporary variable that will help us construct sigma, the error locator polynomial
omega = [ Polynomial([GF256int(1)]) ] # error evaluator polynomial. We don't need to initialize it with erasures_loc, it will still work, because Delta is computed using sigma, which itself is correctly initialized with erasures if needed.
A = [ Polynomial([GF256int(0)]) ] # this is the error evaluator support/secondary polynomial, to help us construct omega
L = [ 0 ] # update flag: necessary variable to check when updating is necessary and to check bounds (to avoid wrongly eliminating the higher order terms). For more infos, see https://www.cs.duke.edu/courses/spring11/cps296.3/decoding_rs.pdf
M = [ 0 ] # optional variable to check bounds (so that we do not mistakenly overwrite the higher order terms). This is not necessary, it's only an additional safe check. For more infos, see the presentation decoding_rs.pdf by Andrew Brown in the doc folder.
# Fix the syndrome shifting: when computing the syndrome, some implementations may prepend a 0 coefficient for the lowest degree term (the constant). This is a case of syndrome shifting, thus the syndrome will be bigger than the number of ecc symbols (I don't know what purpose serves this shifting). If that's the case, then we need to account for the syndrome shifting when we use the syndrome such as inside BM, by skipping those prepended coefficients.
# Another way to detect the shifting is to detect the 0 coefficients: by definition, a syndrome does not contain any 0 coefficient (except if there are no errors/erasures, in this case they are all 0). This however doesn't work with the modified Forney syndrome (that we do not use in this lib but it may be implemented in the future), which set to 0 the coefficients corresponding to erasures, leaving only the coefficients corresponding to errors.
synd_shift = 0
if len(s) > (n-k): synd_shift = len(s) - (n-k)
# Polynomial constants:
ONE = Polynomial(z0=GF256int(1))
ZERO = Polynomial(z0=GF256int(0))
Z = Polynomial(z1=GF256int(1)) # used to shift polynomials, simply multiply your poly * Z to shift
# Precaching
s2 = ONE + s
# Iteratively compute the polynomials n-k-erasures_count times. The last ones will be correct (since the algorithm refines the error/errata locator polynomial iteratively depending on the discrepancy, which is kind of a difference-from-correctness measure).
for l in xrange(0, n-k-erasures_count): # skip the first erasures_count iterations because we already computed the partial errata locator polynomial (by initializing with the erasures locator polynomial)
K = erasures_count+l+synd_shift # skip the FIRST erasures_count iterations (not the last iterations, that's very important!)
# Goal for each iteration: Compute sigma[l+1] and omega[l+1] such that
# (1 + s)*sigma[l] == omega[l] in mod z^(K)
# For this particular loop iteration, we have sigma[l] and omega[l],
# and are computing sigma[l+1] and omega[l+1]
# First find Delta, the non-zero coefficient of z^(K) in
# (1 + s) * sigma[l]
# Note that adding 1 to the syndrome s is not really necessary, you can do as well without.
# This delta is valid for l (this iteration) only
Delta = ( s2 * sigma[l] ).get_coefficient(K) # Delta is also known as the Discrepancy, and is always a scalar (not a polynomial).
# Make it a polynomial of degree 0, just for ease of computation with polynomials sigma and omega.
Delta = Polynomial(x0=Delta)
# Can now compute sigma[l+1] and omega[l+1] from
# sigma[l], omega[l], B[l], A[l], and Delta
sigma.append( sigma[l] - Delta * Z * B[l] )
omega.append( omega[l] - Delta * Z * A[l] )
# Now compute the next support polynomials B and A
# There are two ways to do this
# This is based on a messy case analysis on the degrees of the four polynomials sigma, omega, A and B in order to minimize the degrees of A and B. For more infos, see https://www.cs.duke.edu/courses/spring10/cps296.3/decoding_rs_scribe.pdf
# In fact it ensures that the degree of the final polynomials aren't too large.
if Delta == ZERO or 2*L[l] > K+erasures_count \
or (2*L[l] == K+erasures_count and M[l] == 0):
#if Delta == ZERO or len(sigma[l+1]) <= len(sigma[l]): # another way to compute when to update, and it doesn't require to maintain the update flag L
# Rule A
B.append( Z * B[l] )
A.append( Z * A[l] )
L.append( L[l] )
M.append( M[l] )
elif (Delta != ZERO and 2*L[l] < K+erasures_count) \
or (2*L[l] == K+erasures_count and M[l] != 0):
# elif Delta != ZERO and len(sigma[l+1]) > len(sigma[l]): # another way to compute when to update, and it doesn't require to maintain the update flag L
# Rule B
B.append( sigma[l] // Delta )
A.append( omega[l] // Delta )
L.append( K - L[l] ) # the update flag L is tricky: in Blahut's schema, it's mandatory to use `L = K - L - erasures_count` (and indeed in a previous draft of this function, if you forgot to do `- erasures_count` it would lead to correcting only 2*(errors+erasures) <= (n-k) instead of 2*errors+erasures <= (n-k)), but in this latest draft, this will lead to a wrong decoding in some cases where it should correctly decode! Thus you should try with and without `- erasures_count` to update L on your own implementation and see which one works OK without producing wrong decoding failures.
M.append( 1 - M[l] )
else:
raise Exception("Code shouldn't have gotten here")
# Hack to fix the simultaneous computation of omega, the errata evaluator polynomial: because A (the errata evaluator support polynomial) is not correctly initialized (I could not find any info in academic papers). So at the end, we get the correct errata evaluator polynomial omega + some higher order terms that should not be present, but since we know that sigma is always correct and the maximum degree should be the same as omega, we can fix omega by truncating too high order terms.
if omega[-1].degree > sigma[-1].degree: omega[-1] = Polynomial(omega[-1].coefficients[-(sigma[-1].degree+1):])
# Return the last result of the iterations (since BM compute iteratively, the last iteration being correct - it may already be before, but we're not sure)
return sigma[-1], omega[-1]
def _find_erasures_locator(self, erasures_pos):
'''Compute the erasures locator polynomial from the erasures positions (the positions must be relative to the x coefficient, eg: "hello worldxxxxxxxxx" is tampered to "h_ll_ worldxxxxxxxxx" with xxxxxxxxx being the ecc of length n-k=9, here the string positions are [1, 4], but the coefficients are reversed since the ecc characters are placed as the first coefficients of the polynomial, thus the coefficients of the erased characters are n-1 - [1, 4] = [18, 15] = erasures_loc to be specified as an argument.'''
# See: http://ocw.usu.edu/Electrical_and_Computer_Engineering/Error_Control_Coding/lecture7.pdf and Blahut, Richard E. "Transform techniques for error control codes." IBM Journal of Research and development 23.3 (1979): 299-315. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.92.600&rep=rep1&type=pdf and also a MatLab implementation here: http://www.mathworks.com/matlabcentral/fileexchange/23567-reed-solomon-errors-and-erasures-decoder/content//RS_E_E_DEC.m
erasures_loc = Polynomial([GF256int(1)]) # just to init because we will multiply, so it must be 1 so that the multiplication starts correctly without nulling any term
# erasures_loc is very simple to compute: erasures_loc = prod(1 - x*alpha[j]**i) for i in erasures_pos and where alpha is the alpha chosen to evaluate polynomials (here in this library it's gf(3)). To generate c*x where c is a constant, we simply generate a Polynomial([c, 0]) where 0 is the constant and c is positioned to be the coefficient for x^1. See https://en.wikipedia.org/wiki/Forney_algorithm#Erasures
for i in erasures_pos:
erasures_loc = erasures_loc * (Polynomial([GF256int(1)]) - Polynomial([GF256int(self.generator)**i, 0]))
return erasures_loc
```
*Note*: Sigma, Omega, A, B, L and M are all lists of polynomials (so we keep the whole history of all intermediate polynomials we computed on each iteration). This can of course be optimized because we only really need `Sigma[l]`, `Sigma[l-1]`, `Omega[l]`, `Omega[l-1]`, `A[l]`, `B[l]`, `L[l]` and `M[l]` (so it's just Sigma and Omega that need to keep the previous iteration in memory; the other variables don't).
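The history-trimming described in this note can be sketched generically. In the sketch below, plain integers stand in for the `Polynomial` objects, so only the bookkeeping pattern is shown; this is an illustration of the optimization, not decoder code:

```python
# Full-history version (the pattern the answer's code uses): every
# iteration's value is appended, so memory grows with the loop count.
def iterate_with_history(updates):
    sigma = [1]
    for u in updates:
        sigma.append(sigma[-1] + u)  # sigma[l+1] is derived from sigma[l]
    return sigma[-1]

# Rolling version: only the current and previous values are retained,
# which is all a recurrence like Berlekamp-Massey's actually needs.
def iterate_rolling(updates):
    sigma_prev, sigma_cur = None, 1
    for u in updates:
        sigma_prev, sigma_cur = sigma_cur, sigma_cur + u
    return sigma_cur

assert iterate_with_history([3, -1, 4]) == iterate_rolling([3, -1, 4]) == 7
```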
*Note2*: the update flag L is tricky: in some implementations, doing just like in the Blahut's schema will lead to wrong failures when decoding. In my past implementation, it was mandatory to use `L = K - L - erasures_count` to correctly decode both errors-and-erasures up to the Singleton bound, but in my latest implementation, I had to use `L = K - L` (even when there are erasures) to avoid wrong decoding failures. You should just try both on your own implementation and see which one doesn't produce any wrong decoding failures. See below in the issues for more info.
The only issue with this algorithm is that it does not describe how to simultaneously compute Omega, the errors evaluator polynomial (the book describes how to initialize Omega for errors only, but not when decoding errors-and-erasures). I tried several variations and the above works, but not completely: at the end, Omega will include higher order terms that should have been cancelled. Probably Omega, or A (the errors evaluator support polynomial), is not initialized with the correct value.
However, you can fix that by trimming the too-high-order terms from the Omega polynomial (since it should have the same degree as Lambda/Sigma):
```
if omega[-1].degree > sigma[-1].degree: omega[-1] = Polynomial(omega[-1].coefficients[-(sigma[-1].degree+1):])
```
Or you can totally compute Omega from scratch after BM by using the errata locator Lambda/Sigma, which is always correctly computed:
```
def _find_error_evaluator(self, synd, sigma, k=None):
'''Compute the error (or erasures if you supply sigma=erasures locator polynomial) evaluator polynomial Omega from the syndrome and the error/erasures/errata locator Sigma. Omega is already computed at the same time as Sigma inside the Berlekamp-Massey implemented above, but in case you modify Sigma, you can recompute Omega afterwards using this method, or just ensure that Omega computed by BM is correct given Sigma (as long as syndrome and sigma are correct, omega will be correct).'''
n = self.n
if not k: k = self.k
# Omega(x) = [ Synd(x) * Error_loc(x) ] mod x^(n-k+1) -- From Blahut, Algebraic codes for data transmission, 2003
return (synd * sigma) % Polynomial([GF256int(1)] + [GF256int(0)] * (n-k+1)) # Note that you should NOT do (1+Synd(x)) as can be seen in some books because this won't work with all primitive generators.
```
I am looking for a better solution in the [following question on CSTheory](http://cstheory.stackexchange.com/questions/31606/initialization-of-errata-evaluator-polynomial-for-simultaneous-computation-in-be).
**/EDIT:** I will describe some of the issues I have had and how to fix them:
* don't forget to init the error locator polynomial with the erasures locator polynomial (that you can easily compute from the syndromes and erasures positions).
* if you can decode errors only and erasures only flawlessly, but limited to `2*errors + erasures <= (n-k)/2`, then you forgot to skip the first v iterations.
* if you can decode both erasures-and-errors but only up to `2*(errors+erasures) <= (n-k)`, then you forgot to update the assignment of L: `L = i+1 - L - erasures_count` instead of `L = i+1 - L`. But this may actually make your decoder fail in some cases depending on how you implemented your decoder; see the next point.
* my first decoder was limited to only one generator/prime polynomial/fcr, but when I updated it to be universal and added strict unit tests, the decoder failed when it shouldn't. It seems Blahut's schema above is wrong about L (the update flag): it must be updated using `L = K - L` and not `L = K - L - erasures_count`, because the latter will lead to the decoder sometimes failing even though we are under the Singleton bound (and thus we should be decoding correctly!). This seems to be confirmed by the fact that computing `L = K - L` will not only fix those decoding issues, but it will also give the exact same result as the alternative way to update without using the update flag L (ie, the condition `if Delta == ZERO or len(sigma[l+1]) <= len(sigma[l]):`). But this is weird: in my past implementation, `L = K - L - erasures_count` was mandatory for errors-and-erasures decoding, but now it seems it produces wrong failures. So you should just try both on your own implementation and see whether one or the other produces wrong failures for you.
* note that the condition `2*L[l] > K` changes to `2*L[l] > K+erasures_count`. You may not notice any side effect of omitting `+erasures_count` at first, but in some cases the decoding will fail when it shouldn't.
* if you can fix only exactly one error or erasure, check that your condition is `2*L[l] > K+erasures_count` and not `2*L[l] >= K+erasures_count` (notice the `>` instead of `>=`).
* if you can correct `2*errors + erasures <= (n-k-2)` (just below the limit, eg, if you have 10 ecc symbols, you can correct only 4 errors instead of 5 normally) then check your syndrome and your loop inside the BM algo: if the syndrome starts with a 0 coefficient for the constant term x^0 (which is sometimes advised in books), then your syndrome is shifted, and then your loop inside BM must start at `1` and end at `n-k+1` instead of `0:(n-k)` if not shifted.
* If you can correct every symbol but the last one (the last ecc symbol), then check your ranges, particularly in your Chien search: you should not evaluate the error locator polynomial from alpha^0 to alpha^255 but from alpha^1 to alpha^256. |
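The Chien-search range pitfall from the last bullet can be checked in isolation. The sketch below builds its own GF(256) tables on the 0x11d primitive polynomial, which is an assumption (the answer's `GF256int` class may use a different modulus and generator), so treat it as an illustration of the scan range rather than a drop-in helper:

```python
# Self-contained GF(256) log/antilog tables (the 0x11d polynomial is assumed).
exp = [0] * 512
log = [0] * 256
x = 1
for i in range(255):
    exp[i] = x
    log[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    exp[i] = exp[i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return exp[log[a] + log[b]]

def poly_eval(poly, x):
    # Horner evaluation; poly lists the highest-degree coefficient first.
    y = 0
    for c in poly:
        y = gf_mul(y, x) ^ c
    return y

def chien_search(sigma, n=255):
    # Scan alpha^1 .. alpha^n, per the bullet above (at i == 255 the exponent
    # wraps to alpha^0, matching the "alpha^1 to alpha^256" phrasing mod 255).
    return [i for i in range(1, n + 1) if poly_eval(sigma, exp[i % 255]) == 0]

# One error at location X = alpha^5: sigma(z) = 1 + X*z has its root at
# z = alpha^(-5) = alpha^250, and the alpha^1..alpha^n scan does find it.
sigma = [exp[5], 1]
assert chien_search(sigma) == [250]
```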
Why is [] faster than list()? | 30,216,000 | 472 | 2015-05-13T13:16:22Z | 30,216,145 | 65 | 2015-05-13T13:21:38Z | [
"python",
"performance",
"list",
"instantiation",
"literals"
] | I recently compared the processing speeds of `[]` and `list()` and was surprised to discover that `[]` runs *more than three times faster* than `list()`. I ran the same test with `{}` and `dict()` and the results were practically identical: `[]` and `{}` both took around 0.128sec / million cycles, while `list()` and `dict()` took roughly 0.428sec / million cycles each.
Why is this? Do `[]` and `{}` (probably `()` and `''`, too) immediately pass back copies of some empty stock literal while their explicitly-named counterparts (`list()`, `dict()`, `tuple()`, `str()`) fully go about creating an object, whether or not they actually have elements?
I have no idea how these two methods differ but I'd love to find out.
I couldn't find an answer in the `docs` or on SO, and searching for empty brackets turned out to be more complicated than I'd expected.
I got my timing results by calling `timeit.timeit("[]")` and `timeit.timeit("list()")`, and `timeit.timeit("{}")` and `timeit.timeit("dict()")`, to compare lists and dictionaries, respectively. I'm running Python 2.7.9.
I recently discovered "[Why is if True slower than if 1?](http://stackoverflow.com/questions/18123965/why-if-true-is-slower-than-if-1)" that compares the performance of `if True` to `if 1` and seems to touch on a similar literal-versus-global scenario; perhaps it's worth considering as well. | Because `list` is a [function](https://docs.python.org/2/library/functions.html#list) used to convert, say, a string to a list object, while `[]` is literal syntax that creates a list off the bat. Try this (it might make more sense to you):
```
x = "wham bam"
a = list(x)
>>> a
['w', 'h', 'a', 'm', ' ', 'b', 'a', 'm']
```
While
```
y = ["wham bam"]
>>> y
['wham bam']
```
Gives you an actual list containing whatever you put in it. |
Why is [] faster than list()? | 30,216,000 | 472 | 2015-05-13T13:16:22Z | 30,216,156 | 539 | 2015-05-13T13:21:57Z | [
"python",
"performance",
"list",
"instantiation",
"literals"
] | I recently compared the processing speeds of `[]` and `list()` and was surprised to discover that `[]` runs *more than three times faster* than `list()`. I ran the same test with `{}` and `dict()` and the results were practically identical: `[]` and `{}` both took around 0.128sec / million cycles, while `list()` and `dict()` took roughly 0.428sec / million cycles each.
Why is this? Do `[]` and `{}` (probably `()` and `''`, too) immediately pass back copies of some empty stock literal while their explicitly-named counterparts (`list()`, `dict()`, `tuple()`, `str()`) fully go about creating an object, whether or not they actually have elements?
I have no idea how these two methods differ but I'd love to find out.
I couldn't find an answer in the `docs` or on SO, and searching for empty brackets turned out to be more complicated than I'd expected.
I got my timing results by calling `timeit.timeit("[]")` and `timeit.timeit("list()")`, and `timeit.timeit("{}")` and `timeit.timeit("dict()")`, to compare lists and dictionaries, respectively. I'm running Python 2.7.9.
I recently discovered "[Why is if True slower than if 1?](http://stackoverflow.com/questions/18123965/why-if-true-is-slower-than-if-1)" that compares the performance of `if True` to `if 1` and seems to touch on a similar literal-versus-global scenario; perhaps it's worth considering as well. | Because `[]` and `{}` are *literal syntax*. Python can create bytecode just to create the list or dictionary objects:
```
>>> import dis
>>> dis.dis(compile('[]', '', 'eval'))
1 0 BUILD_LIST 0
3 RETURN_VALUE
>>> dis.dis(compile('{}', '', 'eval'))
1 0 BUILD_MAP 0
3 RETURN_VALUE
```
`list()` and `dict()` are separate objects. Their names need to be resolved, the stack has to be involved to push the arguments, the frame has to be stored to retrieve later, and a call has to be made. That all takes more time.
For the empty case, that means you have at the very least a [`LOAD_NAME`](https://docs.python.org/2/library/dis.html#opcode-LOAD_NAME) (which has to search through the global namespace as well as the [`__builtin__` module](https://docs.python.org/2/library/__builtin__.html)) followed by a [`CALL_FUNCTION`](https://docs.python.org/2/library/dis.html#opcode-CALL_FUNCTION), which has to preserve the current frame:
```
>>> dis.dis(compile('list()', '', 'eval'))
1 0 LOAD_NAME 0 (list)
3 CALL_FUNCTION 0
6 RETURN_VALUE
>>> dis.dis(compile('dict()', '', 'eval'))
1 0 LOAD_NAME 0 (dict)
3 CALL_FUNCTION 0
6 RETURN_VALUE
```
You can time the name lookup separately with `timeit`:
```
>>> import timeit
>>> timeit.timeit('list', number=10**7)
0.30749011039733887
>>> timeit.timeit('dict', number=10**7)
0.4215109348297119
```
The time discrepancy there is probably a dictionary hash collision. Subtract those times from the times for calling those objects, and compare the result against the times for using literals:
```
>>> timeit.timeit('[]', number=10**7)
0.30478692054748535
>>> timeit.timeit('{}', number=10**7)
0.31482696533203125
>>> timeit.timeit('list()', number=10**7)
0.9991960525512695
>>> timeit.timeit('dict()', number=10**7)
1.0200958251953125
```
So having to call the object takes an additional `1.00 - 0.31 - 0.30 == 0.39` seconds per 10 million calls.
You can avoid the global lookup cost by aliasing the global names as locals (using a `timeit` setup, everything you bind to a name is a local):
```
>>> timeit.timeit('_list', '_list = list', number=10**7)
0.1866450309753418
>>> timeit.timeit('_dict', '_dict = dict', number=10**7)
0.19016098976135254
>>> timeit.timeit('_list()', '_list = list', number=10**7)
0.841480016708374
>>> timeit.timeit('_dict()', '_dict = dict', number=10**7)
0.7233691215515137
```
but you can never overcome that `CALL_FUNCTION` cost. |
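You can see that concretely in the bytecode: aliasing turns the lookup into a cheap `LOAD_FAST`, but a call opcode always remains (a small sketch; the opcode names shown are from CPython 3 and vary a bit between versions):

```python
import dis

def make_list():
    _list = list    # bind the global/builtin name to a local
    return _list()  # the call itself still happens

dis.dis(make_list)  # shows LOAD_FAST for the lookup, but still a call opcode

ops = [instr.opname for instr in dis.Bytecode(make_list)]
assert 'LOAD_FAST' in ops
assert any(op.startswith('CALL') for op in ops)  # CALL_FUNCTION on older CPythons
```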
Why is [] faster than list()? | 30,216,000 | 472 | 2015-05-13T13:16:22Z | 30,216,174 | 111 | 2015-05-13T13:22:44Z | [
"python",
"performance",
"list",
"instantiation",
"literals"
] | I recently compared the processing speeds of `[]` and `list()` and was surprised to discover that `[]` runs *more than three times faster* than `list()`. I ran the same test with `{}` and `dict()` and the results were practically identical: `[]` and `{}` both took around 0.128sec / million cycles, while `list()` and `dict()` took roughly 0.428sec / million cycles each.
Why is this? Do `[]` and `{}` (probably `()` and `''`, too) immediately pass back copies of some empty stock literal while their explicitly-named counterparts (`list()`, `dict()`, `tuple()`, `str()`) fully go about creating an object, whether or not they actually have elements?
I have no idea how these two methods differ but I'd love to find out.
I couldn't find an answer in the `docs` or on SO, and searching for empty brackets turned out to be more complicated than I'd expected.
I got my timing results by calling `timeit.timeit("[]")` and `timeit.timeit("list()")`, and `timeit.timeit("{}")` and `timeit.timeit("dict()")`, to compare lists and dictionaries, respectively. I'm running Python 2.7.9.
I recently discovered "[Why is if True slower than if 1?](http://stackoverflow.com/questions/18123965/why-if-true-is-slower-than-if-1)" that compares the performance of `if True` to `if 1` and seems to touch on a similar literal-versus-global scenario; perhaps it's worth considering as well. | `list()` requires a global lookup and a function call but `[]` compiles to a single instruction. See:
```
Python 2.7.3
>>> import dis
>>> print dis.dis(lambda: list())
1 0 LOAD_GLOBAL 0 (list)
3 CALL_FUNCTION 0
6 RETURN_VALUE
None
>>> print dis.dis(lambda: [])
1 0 BUILD_LIST 0
3 RETURN_VALUE
None
``` |
Python argparse, provide different arguments based on parent argument value | 30,216,662 | 3 | 2015-05-13T13:40:57Z | 30,217,387 | 7 | 2015-05-13T14:09:22Z | [
"python",
"dynamic",
"argparse",
"subparsers"
] | Here is what I would like to do:
A command that behaves like the git command: you don't get the same options depending on whether you typed `git commit` or `git checkout`.
But in my case I want to provide different arguments based on an argument value (a file name), like this:
```
>cmd file.a -h
usage: cmd filename [-opt1] [-opt2]
positional arguments:
filename file to process
optional arguments:
-opt1 do something on files of type 'a'
-opt2 do something else on files of type 'a'
>cmd file.b -h
usage: cmd filename [-opt3] [-opt4]
positional arguments:
filename file to process
optional arguments:
-opt3 do something on files of type 'b'
-opt4 do something else on files of type 'b'
```
Is it possible to do this kind of thing using Python and argparse?
What I've tried so far is:
```
parser = argparse.ArgumentParser(prog='cmd')
subparsers = parser.add_subparsers()
parser.add_argument('filename',
help="file or sequence to process")
args = parser.parse_args(args=argv[1:])
sub_parser = subparsers.add_parser(args.filename, help="job type")
base, ext = os.path.splitext(args.filename)
if ext == 'a':
sub_parser.add_argument("-opt1", action='store_true')
sub_parser.add_argument("-opt2", action='store_true')
elif ext == 'b':
sub_parser.add_argument("-opt3", action='store_true')
sub_parser.add_argument("-opt4", action='store_true')
args = parser.parse_args(args=argv[1:])
```
I don't know if I should use subparsers or child parsers or groups; I'm kind of lost in all the possibilities provided by argparse. | When you take a look at the [`parse_args()` implementation](https://github.com/python/cpython/blob/3.2/Lib/argparse.py) you'll notice that it parses all arguments at once (it doesn't use `yield` to continuously generate state), so you have to prepare your structure beforehand, not after half of the arguments have been parsed.
[Taking from official example in the docs](https://docs.python.org/3.2/library/argparse.html#sub-commands) you should add subparser(s) before starting parsing like this:
```
import argparse
parser = argparse.ArgumentParser(prog='PROG')
subparsers = parser.add_subparsers(help='sub-command help')
# create the parser for the "a" command
parser_a = subparsers.add_parser('a', help='a help')
parser_a.add_argument("--opt1", action='store_true')
parser_a.add_argument("--opt2", action='store_true')
# create the parser for the "b" command
parser_b = subparsers.add_parser('b', help='b help')
parser_b.add_argument("--opt3", action='store_true')
parser_b.add_argument("--opt4", action='store_true')
# parse some argument lists
print(parser.parse_args())
```
And the output (on the command line); the help is nicely printed:
```
D:\tmp>s.py -h
usage: PROG [-h] {a,b} ...
positional arguments:
{a,b} sub-command help
a a help
b b help
optional arguments:
-h, --help show this help message and exit
```
*A* arguments are parsed
```
D:\tmp>s.py a --opt1
Namespace(opt1=True, opt2=False)
```
*B* arguments are parsed
```
D:\tmp>s.py b
Namespace(opt3=False, opt4=False)
```
Also with args:
```
D:\tmp>s.py b --opt3
Namespace(opt3=True, opt4=False)
```
Running *A* arguments in *B* causes error:
```
D:\tmp>s.py b --opt2
usage: PROG [-h] {a,b} ...
PROG: error: unrecognized arguments: --opt2
```
---
Also if you need to [identify which subparser](http://stackoverflow.com/a/4575792/1149736) was used you may add `dest=name` to `parser.add_subparsers()` call (*which I think isn't properly stressed in the docs*):
```
subparsers = parser.add_subparsers(help='sub-command help', dest='subparser_name')
```
With the result of:
```
D:\tmp>s.py b --opt3
Namespace(opt3=True, opt4=False, subparser_name='b')
```
---
If you needed to really create arguments dynamically (for example load some argument options from *expensive* resource) you could use [`parse_known_args()`](https://docs.python.org/3.2/library/argparse.html#argparse.ArgumentParser.parse_known_args):
> Sometimes a script may only parse a few of the command-line arguments, passing the remaining arguments on to another script or program. In these cases, the `parse_known_args()` method can be useful. It works much like `parse_args()`except that it does not produce an error when extra arguments are present. Instead, **it returns a two item *tuple* containing the populated namespace and the *list* of remaining argument strings**.
After all, `parse_args()` just checks trailing arguments:
```
def parse_args(self, args=None, namespace=None):
args, argv = self.parse_known_args(args, namespace)
if argv:
msg = _('unrecognized arguments: %s')
self.error(msg % ' '.join(argv))
return args
```
And then you can re-execute another parser on `argv`, but I can imagine a few issues with this approach and I wouldn't recommend it unless *really* necessary. |
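If you do go the `parse_known_args()` route, a minimal two-stage sketch for the question's filename-extension use case could look like this (the option names mirror the question; the argument list is passed explicitly only so the example is self-contained):

```python
import argparse
import os

argv = ['file.a', '-opt1']  # stands in for sys.argv[1:]

# Stage 1: a bare parser that only extracts the filename, ignoring the rest.
pre = argparse.ArgumentParser(add_help=False)
pre.add_argument('filename', help='file or sequence to process')
known, _rest = pre.parse_known_args(argv)

# Stage 2: build the real parser (inheriting the positional) from the extension.
parser = argparse.ArgumentParser(prog='cmd', parents=[pre])
if os.path.splitext(known.filename)[1] == '.a':
    parser.add_argument('-opt1', action='store_true')
    parser.add_argument('-opt2', action='store_true')
else:
    parser.add_argument('-opt3', action='store_true')
    parser.add_argument('-opt4', action='store_true')

args = parser.parse_args(argv)
print(args.opt1)  # True
```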
How to create non-blocking continuous reading from `stdin`? | 30,217,916 | 6 | 2015-05-13T14:30:55Z | 30,218,069 | 7 | 2015-05-13T14:36:50Z | [
"python",
"python-3.x",
"subprocess",
"stdin",
"nonblocking"
] | I have a single process, which has been created like this:
```
p = subprocess.Popen(args = './myapp',
stdin = subprocess.PIPE,
stdout = subprocess.PIPE,
universal_newlines=True)
```
Later on, I'm trying to write to `p`'s `stdin`:
```
p.stdin.write('my message\n')
```
The `myapp` process has the following setup:
```
q = queue.Queue()
def get_input():
for line in iter(sys.stdin.readline, ''):
q.put(line)
sys.stdin.close()
threading.Thread(name = 'input-getter',
target = get_input).start()
```
And it is trying to read new lines continuously, like so:
```
try:
print('input:', q.get_nowait())
except Empty:
print('no input')
```
Unfortunately the subprocess never receives any of my messages. Of course, when I use:
```
p.communicate('my message\n')
```
the subprocess receives the message, but as expected, the `communicate` method closes `p`'s `stdin`, so there are no more communications going on. | ```
p = subprocess.Popen(args = './myapp',
stdin = subprocess.PIPE,
stdout = subprocess.PIPE,
universal_newlines=True)
while p.poll() is None:
data = p.stdout.readline()
```
This will read your process's output line by line until the process exits (note that `readline()` itself still blocks until a full line or EOF arrives, so this loop is line-buffered rather than truly non-blocking).
However, there are some cautions to be aware of here. For instance, if you pipe `stderr` as well but don't read from it, you will most likely fill a buffer or two and hang the program anyway. So always make sure you drain any piped I/O when doing things manually.
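One buffering pitfall worth spelling out (and a likely cause of the question's symptom) is forgetting to flush the child's `stdin` pipe after `write()`. A hedged, self-contained sketch of the question's thread-plus-queue setup, with a trivial echoing child standing in for `./myapp`:

```python
import queue
import subprocess
import sys
import threading

# A stand-in child process that echoes every stdin line back on stdout.
child_code = (
    "import sys\n"
    "for line in sys.stdin:\n"
    "    sys.stdout.write('echo: ' + line)\n"
    "    sys.stdout.flush()\n"
)
p = subprocess.Popen([sys.executable, '-u', '-c', child_code],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     universal_newlines=True)

q = queue.Queue()

def reader():
    # Same pattern as the question: push each stdout line onto a queue.
    for out_line in iter(p.stdout.readline, ''):
        q.put(out_line)

threading.Thread(target=reader, daemon=True).start()

p.stdin.write('my message\n')
p.stdin.flush()  # without this, the line may sit in the pipe buffer forever

line = q.get(timeout=5)
print(line)  # echo: my message
p.stdin.close()
p.wait()
```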
A better alternative would be to use `select.epoll()` if possible; it is only available on Unix systems, but it gives you a hell of a lot better performance and error handling :)
```
epoll = select.epoll()
epoll.register(p.stdout.fileno(), select.EPOLLHUP) # Use select.EPOLLIN for stdin.
for fileno, event in epoll.poll(1):
if fileno == p.stdout.fileno():
# ... Do something ...
```
**NOTE:** Remember that whenever a process expects input, it usually indicates this via `stdout`, so you'll still register `STDOUT` with `select.epoll` in order to check for "waiting for input". You can register `select.EPOLLIN` to check if input was given, but I hardly see the point because, remember, that would be what you choose to input to the process, which you should already be aware is happening.
# Checking if the process expects input
You can use `select.epoll` to check if the process is awaiting input or not without blocking your application execution with the above example. But there are better alternatives.
[Pexpect](https://pexpect.readthedocs.org/en/latest/) is one library that does it really well and works with `SSH` for instance.
It works a little bit differently from subprocess but might be a good alternative.
# Getting subprocess.popen to work with SSH
I'll redirect to another question+answer if this is what you're after (because SSH will spawn `stdin` in a protected manner):
[Python + SSH Password auth (no external libraries or public/private keys)?](http://stackoverflow.com/questions/20472288/python-ssh-password-auth-no-external-libraries-or-public-private-keys) |
Get parent of current directory from Python script | 30,218,802 | 7 | 2015-05-13T15:07:34Z | 30,218,825 | 11 | 2015-05-13T15:08:45Z | [
"python",
"sys",
"sys.path"
] | I want to get the parent of the current directory from a Python script. For example, I launch the script from `/home/kristina/desire-directory/scripts`; the desired path in this case is `/home/kristina/desire-directory`
I know `sys.path[0]` from `sys`. But I don't want to parse the string resulting from `sys.path[0]`. Is there any other way to get the parent of the current directory in Python? | ### Using os.path
To *get the parent directory of the directory containing the script* (regardless of the current working directory), you'll need to use `__file__`.
Inside the script use [`os.path.abspath(__file__)`](https://docs.python.org/3/library/os.path.html#os.path.abspath) to obtain the absolute path of the script, and call [`os.path.dirname`](https://docs.python.org/3/library/os.path.html#os.path.dirname) twice:
```
from os.path import dirname, abspath
d = dirname(dirname(abspath(__file__))) # /home/kristina/desire-directory
```
Basically, you can walk up the directory tree by calling `os.path.dirname` as many times as needed. Example:
```
In [4]: from os.path import dirname
In [5]: dirname('/home/kristina/desire-directory/scripts/script.py')
Out[5]: '/home/kristina/desire-directory/scripts'
In [6]: dirname(dirname('/home/kristina/desire-directory/scripts/script.py'))
Out[6]: '/home/kristina/desire-directory'
```
If you want to *get the parent directory of the current working directory*, use [`os.getcwd`](https://docs.python.org/3/library/os.html#os.getcwd):
```
import os
d = os.path.dirname(os.getcwd())
```
### Using pathlib
You could also use the [`pathlib`](https://docs.python.org/3/library/pathlib.html#module-pathlib) module (available in Python 3.4 or newer).
Each `pathlib.Path` instance has the `parent` attribute referring to the parent directory, as well as the `parents` attribute, which is a sequence of ancestors of the path. [`Path.resolve`](https://docs.python.org/3/library/pathlib.html#pathlib.Path.resolve) may be used to obtain the absolute path. It also resolves all symlinks, but you may use `Path.absolute` instead if that isn't a desired behaviour.
`Path(__file__)` and `Path()` represent the script path and the current working directory respectively, therefore in order to *get the parent directory of the script directory* (regardless of the current working directory) you would use
```
from pathlib import Path
# `path.parents[1]` is the same as `path.parent.parent`
d = Path(__file__).resolve().parents[1] # Path('/home/kristina/desire-directory')
```
and to *get the parent directory of the current working directory*
```
from pathlib import Path
d = Path().resolve().parent
```
Note that `d` is a `Path` instance, which isn't always handy. You can convert it to `str` easily when you need it:
```
In [15]: str(d)
Out[15]: '/home/kristina/desire-directory'
``` |
Different slicing behaviors on left/right hand side of assignment operator | 30,220,736 | 2 | 2015-05-13T16:42:39Z | 30,221,031 | 7 | 2015-05-13T16:58:12Z | [
"python",
"python-3.x",
"operators",
"slice",
"deep-copy"
] | As a Python newbie coming from the C++ background, the slicing operator in Python (3.4.x) looks ridiculous to me. I just don't get the design philosophy behind the "special rule". Let me explain why I say it's "special".
On the one hand, according to the Stack Overflow answer [here](http://stackoverflow.com/questions/323689/python-list-slice-syntax-used-for-no-obvious-reason), **the slicing operator creates a (deep) copy of a list or part of the list**, i.e. a new list. The link may be old (earlier than python 3.4.x), but I just confirmed the behavior with the following simple experiment with python 3.4.2:
```
words = ['cat', 'window', 'defenestrate']
newList = words[:] # new objects are created; a.k.a. deep copy
newList[0] = 'dog'
print(words) # ['cat' ...
print(newList) # ['dog' ...
```
On the other hand, according to the official documentation [here](https://docs.python.org/3.4/tutorial/introduction.html):
```
Assignment to slices is also possible, and this can even change the size of the list or clear it entirely:
>>>
>>> letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
>>> letters
['a', 'b', 'c', 'd', 'e', 'f', 'g']
>>> # replace some values
>>> letters[2:5] = ['C', 'D', 'E']
>>> letters
['a', 'b', 'C', 'D', 'E', 'f', 'g']
>>> # now remove them
>>> letters[2:5] = []
>>> letters
['a', 'b', 'f', 'g']
>>> # clear the list by replacing all the elements with an empty list
>>> letters[:] = []
>>> letters
[]
```
**Clearly, the slicing operator `[:]` does not do a deep copy here.**
From the observation it seems to suggest that the slicing operator produces different behavior when it's on left/right side with respect to the assignment operator. I do not know any language in which an operator could produce similar behavior. After all, an operator is a function, just a syntactically special function, and a function's behavior should be self-contained, purely determined by all of its inputs.
So what can justify this "special rule" in Python design philosophy?
P.S. If my conclusion is not correct, there are really only two possibilities:
1, Python's slicing 'operator' is actually not an operator, so my assumption does not hold --- then what is it (the 'slicing operator' `[:]`)?
2, The difference in behavior is caused by some latent factor not observed. The slicing operator's location (left/right hand side) with respect to the assignment operator accidentally co-exists with the observation of different behavior. They do not have causality relationship --- then what is the latent factor that causes the difference in behavior? | Python operators are best considered as syntactic sugar for *"magic"* methods; for example, `x + y` is evaluated as `x.__add__(y)`. In the same way that:
* `foo = bar.baz` becomes `foo = bar.__getattr__('baz')`; whereas
* `bar.baz = foo` becomes `bar.__setattr__('baz', foo)`;
the Python *"slicing operator"* \* `a[b]` is evaluated as either:
* `a.__getitem__(b)`; or
* `a.__setitem__(b, ...)`;
depending on which side of the assignment it's on; the two *aren't quite* the same (see also [How assignment works with python list slice](http://stackoverflow.com/q/10623302/3001761)). Written out in *"longhand"*, therefore:
```
>>> x = [1, 2, 3]
>>> x.__getitem__(slice(None)) # ... = x[:]
[1, 2, 3]
>>> x.__setitem__(slice(None), (4, 5, 6)) # x[:] = ...
>>> x
[4, 5, 6]
```
The [data model documentation](https://docs.python.org/3/reference/datamodel.html) explains these methods in more detail (e.g. [`__getitem__`](https://docs.python.org/3/reference/datamodel.html#object.__getitem__)), and you can read [the docs on `slice`](https://docs.python.org/3/library/functions.html#slice), too.
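A small toy class (hypothetical, purely for illustration) makes the dispatch visible by recording which magic method each side of the assignment triggers:

```python
class Demo(object):
    def __init__(self):
        self.calls = []

    def __getitem__(self, key):          # right-hand side: a[b]
        self.calls.append(('get', key))
        return []

    def __setitem__(self, key, value):   # left-hand side: a[b] = ...
        self.calls.append(('set', key, value))

d = Demo()
d[:]           # read access
d[:] = [1, 2]  # slice assignment
print(d.calls)
# [('get', slice(None, None, None)), ('set', slice(None, None, None), [1, 2])]
```

The two statements look identical on the left of the brackets, yet they dispatch to different methods depending on which side of `=` they sit.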
---
Note that the slice is a *shallow copy*, not a deep one, as the following demonstrates:
```
>>> foo = [[], []]
>>> bar = foo[:]
>>> bar is foo
False # outer list is new object
>>> bar[0] is foo[0]
True # inner lists are same objects
>>> bar[0].append(1)
>>> foo
[[1], []]
```
---
\* Well, not *strictly* an [operator](https://docs.python.org/3/reference/lexical_analysis.html#operators). |
Create a day-of-week column in a Pandas dataframe using Python | 30,222,533 | 5 | 2015-05-13T18:24:11Z | 30,222,759 | 12 | 2015-05-13T18:36:36Z | [
"python",
"datetime",
"pandas"
] | Create a day-of-week column in a Pandas dataframe using Python
I'd like to read a csv file into a pandas dataframe, parse a column of dates from string format to a date object, and then generate a new column that indicates the day of the week.
This is what I'm trying:
What I'd like to do is something like:
```
import pandas as pd
import csv
df = pd.read_csv('data.csv', parse_dates=['date'])
df['day-of-week'] = df['date'].weekday()
AttributeError: 'Series' object has no attribute 'weekday'
```
---
Thank you for your help.
James | EDIT:
As user jezrael points out below, `dt.weekday_name` was added in version 0.18.1
---
Use this:
<http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.dayofweek.html>
See this:
[Get weekday/day-of-week for Datetime column of DataFrame](http://stackoverflow.com/questions/28009370/get-weekday-for-datetime-in-pandas-dataframe)
If you want a string instead of an integer do something like this:
```
import pandas as pd
df = pd.DataFrame({'my_dates':['2015-01-01','2015-01-02','2015-01-03'],'myvals':[1,2,3]})
df['my_dates'] = pd.to_datetime(df['my_dates'])
df['day_of_week'] = df['my_dates'].dt.dayofweek
days = {0:'Mon',1:'Tues',2:'Weds',3:'Thurs',4:'Fri',5:'Sat',6:'Sun'}
df['day_of_week'] = df['day_of_week'].apply(lambda x: days[x])
```
Output:
```
my_dates myvals day_of_week
0 2015-01-01 1 Thurs
1 2015-01-02 2 Fri
2 2015-01-03 3 Sat
``` |
matplotlib (mplot3d) - how to increase the size of an axis (stretch) in a 3D Plot? | 30,223,161 | 21 | 2015-05-13T18:58:38Z | 30,333,984 | 15 | 2015-05-19T18:59:47Z | [
"python",
"matplotlib"
] | I have this so far:
```
x,y,z = data.nonzero()
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z, zdir='z', c= 'red')
plt.savefig("plot.png")
```
Which creates:

What I'd like to do is stretch this out to make the Z axis 9 times taller and keep X and Y the same. I'd like to keep the same coordinates though.
So far I tried this guy:
```
fig = plt.figure(figsize=(4.,35.))
```
But that just stretches out the plot.png image. | The code example below provides a way to scale each axis relative to the others. However, to do so you need to modify the Axes3D.get\_proj function. Below is an example based on the example provided by matplot lib: <http://matplotlib.org/1.4.0/mpl_toolkits/mplot3d/tutorial.html#line-plots>
(There is a shorter version at the end of this answer)
```
from mpl_toolkits.mplot3d.axes3d import Axes3D
from mpl_toolkits.mplot3d import proj3d
import matplotlib as mpl
import numpy as np
import matplotlib.pyplot as plt
#Make sure these are floating point values:
scale_x = 1.0
scale_y = 2.0
scale_z = 3.0
#Axes are scaled down to fit in scene
max_scale=max(scale_x, scale_y, scale_z)
scale_x=scale_x/max_scale
scale_y=scale_y/max_scale
scale_z=scale_z/max_scale
#Create scaling matrix
scale = np.array([[scale_x,0,0,0],
[0,scale_y,0,0],
[0,0,scale_z,0],
[0,0,0,1]])
print scale
def get_proj_scale(self):
"""
Create the projection matrix from the current viewing position.
elev stores the elevation angle in the z plane
azim stores the azimuth angle in the x,y plane
dist is the distance of the eye viewing point from the object
point.
"""
relev, razim = np.pi * self.elev/180, np.pi * self.azim/180
xmin, xmax = self.get_xlim3d()
ymin, ymax = self.get_ylim3d()
zmin, zmax = self.get_zlim3d()
# transform to uniform world coordinates 0-1.0,0-1.0,0-1.0
worldM = proj3d.world_transformation(
xmin, xmax,
ymin, ymax,
zmin, zmax)
# look into the middle of the new coordinates
R = np.array([0.5, 0.5, 0.5])
xp = R[0] + np.cos(razim) * np.cos(relev) * self.dist
yp = R[1] + np.sin(razim) * np.cos(relev) * self.dist
zp = R[2] + np.sin(relev) * self.dist
E = np.array((xp, yp, zp))
self.eye = E
self.vvec = R - E
self.vvec = self.vvec / proj3d.mod(self.vvec)
if abs(relev) > np.pi/2:
# upside down
V = np.array((0, 0, -1))
else:
V = np.array((0, 0, 1))
zfront, zback = -self.dist, self.dist
viewM = proj3d.view_transformation(E, R, V)
perspM = proj3d.persp_transformation(zfront, zback)
M0 = np.dot(viewM, worldM)
M = np.dot(perspM, M0)
return np.dot(M, scale);
Axes3D.get_proj=get_proj_scale
"""
You need to include all the code above.
From here on you should be able to plot as usual.
"""
mpl.rcParams['legend.fontsize'] = 10
fig = plt.figure(figsize=(5,5))
ax = fig.gca(projection='3d')
theta = np.linspace(-4 * np.pi, 4 * np.pi, 100)
z = np.linspace(-2, 2, 100)
r = z**2 + 1
x = r * np.sin(theta)
y = r * np.cos(theta)
ax.plot(x, y, z, label='parametric curve')
ax.legend()
plt.show()
```
Standard output:

Scaled by (1, 2, 3):

Scaled by (1, 1, 3):

The reason I particularly like this method is that it lets you swap and rescale the axes freely. For example, swap z and x and scale by (3, 1, 1):

Below is a shorter version of the code.
```
from mpl_toolkits.mplot3d.axes3d import Axes3D
from mpl_toolkits.mplot3d import proj3d
import matplotlib as mpl
import numpy as np
import matplotlib.pyplot as plt
mpl.rcParams['legend.fontsize'] = 10
fig = plt.figure(figsize=(5,5))
ax = fig.gca(projection='3d')
theta = np.linspace(-4 * np.pi, 4 * np.pi, 100)
z = np.linspace(-2, 2, 100)
r = z**2 + 1
x = r * np.sin(theta)
y = r * np.cos(theta)
"""
Scaling is done from here...
"""
x_scale=1
y_scale=1
z_scale=2
scale=np.diag([x_scale, y_scale, z_scale, 1.0])
scale=scale*(1.0/scale.max())
scale[3,3]=1.0
def short_proj():
return np.dot(Axes3D.get_proj(ax), scale)
ax.get_proj=short_proj
"""
to here
"""
ax.plot(z, y, x, label='parametric curve')
ax.legend()
plt.show()
``` |
How to speed up the code - searching through a dataframe takes hours | 30,224,143 | 6 | 2015-05-13T20:00:06Z | 30,224,634 | 7 | 2015-05-13T20:30:23Z | [
"python",
"csv",
"python-3.x",
"pandas"
] | I've got a CSV file containing the distance between centroids in a GIS-model in the next format:
```
InputID,TargetID,Distance
1,2,3050.01327866
1,7,3334.99565217
1,5,3390.99115304
1,3,3613.77046864
1,4,4182.29900892
...
...
3330,3322,955927.582933
```
It is sorted on origin (`InputID`) and then on the nearest destination (`TargetID`).
For a specific modelling tool I need this data in a CSV file, formatted as follows (the numbers are the centroid numbers):
```
distance1->1, distance1->2, distance1->3,.....distance1->3330
distance2->1, distance2->2,.....
.....
distance3330->1,distance3330->2....distance3330->3330
```
So no InputID's or TargetID's, just the distances with the origins on the rows and the destinations on the columns:
(example for the first 5 origins/destinations)
```
0,3050.01327866,3613.77046864,4182.29900892,3390.99115304
3050.01327866,0,1326.94611797,1175.10254872,1814.45584129
3613.77046864,1326.94611797,0,1832.209595,3132.78725738
4182.29900892,1175.10254872,1832.209595,0,1935.55056767
3390.99115304,1814.45584129,3132.78725738,1935.55056767,0
```
I've built the next code, and it works. But it is so slow that running it will take days to get the 3330x3330 file. As I am a beginner in Python I think I am overlooking something...
```
import pandas as pd
import numpy as np
file=pd.read_csv('c:\\users\\Niels\\Dropbox\\Python\\centroid_distances.csv')
df=file.sort_index(by=['InputID', 'TargetID'], ascending=[True, True])
number_of_zones=3330
text_file = open("c:\\users\\Niels\\Dropbox\\Python\\Output.csv", "w")
for origin in range(1,number_of_zones):
output_string=''
print(origin)
for destination in range(1,number_of_zones):
if origin==destination:
distance=0
else:
distance_row=df[(df['InputID']==origin) & (df['TargetID'] == destination)]
# I guess this is the time-consuming part
distance=distance_row.iloc[0]['Distance']
output_string=output_string+str(distance)+','
text_file.write(output_string[:-1]+'\n') #strip last ',' of line
text_file.close()
```
Could you give me some hints to speed up this code? | IIUC, all you need is `pivot`. If you start from a frame like this:
```
df = pd.DataFrame(columns="InputID,TargetID,Distance".split(","))
df["InputID"] = np.arange(36)//6 + 1
df["TargetID"] = np.arange(36) % 6 + 1
df["Distance"] = np.random.uniform(0, 100, len(df))
df = df[df.InputID != df.TargetID]
df = df.sort(["InputID", "Distance"])
>>> df.head()
InputID TargetID Distance
2 1 3 6.407198
3 1 4 43.037829
1 1 2 52.121284
4 1 5 86.769620
5 1 6 96.703294
```
and we know the InputID and TargetID are unique, we can simply `pivot`:
```
>>> pv = df.pivot(index="InputID", columns="TargetID", values="Distance").fillna(0)
>>> pv
TargetID 1 2 3 4 5 6
InputID
1 0.000000 52.121284 6.407198 43.037829 86.769620 96.703294
2 53.741611 0.000000 27.555296 85.328607 59.561345 8.895407
3 96.142920 62.532984 0.000000 6.320273 37.809105 69.896308
4 57.835249 49.350647 38.660269 0.000000 7.151053 45.017780
5 72.758342 48.947788 4.212775 98.183169 0.000000 15.702280
6 32.468329 83.979431 23.578347 30.212883 82.580496 0.000000
>>> pv.to_csv("out_dist.csv", index=False, header=False)
>>> !cat out_dist.csv
0.0,52.1212839519,6.40719759732,43.0378290605,86.769620064,96.7032941473
53.7416111725,0.0,27.5552964592,85.3286070586,59.5613449796,8.89540736892
96.1429198049,62.5329836475,0.0,6.32027280686,37.8091052942,69.8963084944
57.8352492462,49.3506467609,38.6602692461,0.0,7.15105257546,45.0177800391
72.7583417281,48.9477878574,4.21277494476,98.183168992,0.0,15.7022798801
32.4683285321,83.9794307564,23.578346756,30.2128827937,82.5804959193,0.0
```
The [reshaping](http://pandas.pydata.org/pandas-docs/stable/reshaping.html) section of the tutorial might be useful. |
Reproduce a C function pointer array in Python | 30,225,067 | 6 | 2015-05-13T20:58:45Z | 30,225,178 | 7 | 2015-05-13T21:05:02Z | [
"python",
"c"
] | I have a Python program asking the user for input like a shell, and if I detect some specific keywords I want to go inside some specific functions.
The thing is that I would like to avoid doing a lot of `if` and `else if`. Usually in C to avoid this situation I use a function pointer array that I travel with a `while` and use `strcmp` to check the input.
I would like to know how to do that in Python if it is even possible. | In Python you use a [***dictionary***](https://docs.python.org/2/tutorial/datastructures.html#dictionaries).
Example:
```
keyword2func = {
"word1": function_word1,
"word2": function_word2,
}
word = input("")
keyword2func[word]()
``` |
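One caveat with the lookup above: an unrecognized keyword raises a `KeyError`. A common refinement, sketched here with made-up handler names, is `dict.get` with a default handler:

```python
def handle_hello():
    return "hello!"

def handle_quit():
    return "bye"

def handle_unknown():
    return "unknown command"

dispatch = {"hello": handle_hello, "quit": handle_quit}

# dict.get falls back to handle_unknown instead of raising KeyError.
print(dispatch.get("hello", handle_unknown)())  # hello!
print(dispatch.get("xyzzy", handle_unknown)())  # unknown command
```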
How to make an Python subclass uncallable | 30,225,477 | 18 | 2015-05-13T21:27:45Z | 30,225,611 | 14 | 2015-05-13T21:37:32Z | [
"python"
] | How do you "disable" the `__call__` method on a subclass so the following would be true:
```
class Parent(object):
def __call__(self):
return
class Child(Parent):
def __init__(self):
super(Child, self).__init__()
object.__setattr__(self, '__call__', None)
>>> c = Child()
>>> callable(c)
False
```
This and other ways of trying to set `__call__` to some non-callable value still result in the child appearing as callable. | You can't. As jonrsharpe points out, there's no way to make `Child` appear to not have the attribute, and that's what `callable(Child())` relies on to produce its answer. Even making it a descriptor that raises `AttributeError` won't work, per this bug report: <https://bugs.python.org/issue23990> . A python 2 example:
```
>>> class Parent(object):
... def __call__(self): pass
...
>>> class Child(Parent):
... __call__ = property()
...
>>> c = Child()
>>> c()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: unreadable attribute
>>> c.__call__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: unreadable attribute
>>> callable(c)
True
```
This is because `callable(...)` doesn't act out the descriptor protocol. Actually calling the object, or accessing a `__call__` attribute, involves retrieving the method even if it's behind a property, through the normal descriptor protocol. But `callable(...)` doesn't bother going that far, if it finds anything at all it is satisfied, and every subclass of Parent will have *something* for `__call__` -- either an attribute in a subclass, or the definition from `Parent`.
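A minimal sketch of that point: any class attribute named `__call__`, even `None`, is enough to make `callable` report `True`, while actually calling the instance still fails:

```python
class NotReally(object):
    __call__ = None  # fills the type's call slot without being usable

obj = NotReally()
print(callable(obj))  # True

try:
    obj()
except TypeError as exc:
    print("calling failed:", exc)
```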
So while you can make actually calling the instance fail with any exception you want, you can't ever make `callable(some_instance_of_parent)` return False. |
Loop for each item in a list | 30,228,015 | 8 | 2015-05-14T01:49:43Z | 30,228,035 | 12 | 2015-05-14T01:52:00Z | [
"python",
"dictionary",
"cartesian-product"
] | I have a dictionary:
```
mydict = {'item1':[1,2,3],'item2':[10,20,30]}
```
I want to create the cartesian product of the two so that I get a tuple of each possible pair.
```
output: [(1,10),(1,20),(1,30),
(2,10),(2,20),(2,30),
(3,10),(3,20),(3,30)]
```
It seems like there would be a simple way to do this so that it extends if I have three items. Kind of like a dynamic number of loops. Feels like I am missing an obvious way to do this... | The [`itertools.product()`](https://docs.python.org/2/library/itertools.html#itertools.product) function will do this:
```
>>> import itertools
>>> mydict = {'item1':[1,2,3],'item2':[10,20,30]}
>>> list(itertools.product(*mydict.values()))
[(10, 1), (10, 2), (10, 3), (20, 1), (20, 2), (20, 3), (30, 1), (30, 2), (30, 3)]
```
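To address the "dynamic number of loops" part of the question: the same call extends to any number of keys. A sketch, sorting the keys so the column order is deterministic:

```python
import itertools

mydict = {'item1': [1, 2], 'item2': [10, 20], 'item3': [100, 200]}
keys = sorted(mydict)  # fix an order; plain .values() order can be arbitrary
combos = list(itertools.product(*(mydict[k] for k in keys)))
print(len(combos))  # 8, i.e. 2 * 2 * 2
print(combos[0])    # (1, 10, 100)
```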
If you need to control the order of the resulting tuples, you can do
```
itertools.product(mydict['item1'], mydict['item2'])
``` |
Django custom command error: unrecognized arguments | 30,230,490 | 10 | 2015-05-14T06:12:15Z | 30,230,675 | 20 | 2015-05-14T06:25:11Z | [
"python",
"django",
"django-admin"
] | I'm trying to create a command similar to `createsuperuser` which will take two arguments (username and password)
Its working fine in django 1.7 but not in 1.8. (I'm also using python3.4)
this is the code I wrote
**myapp/management/commands/createmysuperuser.py**
```
from django.core.management.base import BaseCommand, CommandError
from django.contrib.auth.models import User
class Command(BaseCommand):
help = 'Create a super user'
def handle(self, *args, **options):
if len(args) != 2:
raise CommandError('need exactly two arguments for username and password')
username, password = args
u, created = User.objects.get_or_create(username=username)
if created:
u.is_superuser = True
u.is_staff = True
u.set_password(password)
u.save()
else:
raise CommandError("user '%s' already exist" % username)
return "Password changed successfully for user '%s'" % u.username
```
and when I try to run this command
`$ python manage.py createmysuperuser myuser mypassword`
I get this error
```
usage: manage.py createmysuperuser [-h] [--version] [-v {0,1,2,3}]
[--settings SETTINGS]
[--pythonpath PYTHONPATH] [--traceback]
[--no-color]
manage.py createmysuperuser: error: unrecognized arguments: myuser mypassword
```
but when I dont pass any arguments it raises `CommandError` which is expected.
`CommandError: need exactly two arguments for username and password` | In django 1.8 you should [add arguments](https://docs.djangoproject.com/en/1.8/releases/1.8/#extending-management-command-arguments-through-command-option-list) to you command:
```
class Command(BaseCommand):
...
def add_arguments(self, parser):
parser.add_argument('username')
parser.add_argument('password')
```
`add_argument()` method of `argparse` is [documented here](https://docs.python.org/3/library/argparse.html#the-add-argument-method).
**UPDATE**: By default arguments are passed in the `options` parameter so the `handle()` method should look like this:
```
def handle(self, *args, **options):
username = options['username']
password = options['password']
...
```
And you don't need to check the length of the `args` list - it is already done by `argparse`. This is the recommended method but if you want to use the `args` argument then you have to use the "compatibility mode" and name the added argument as `args`:
```
class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('args')
def handle(self, *args, **options):
if len(args) != 2:
...
```
Read the "Changed in Django 1.8" side note in the [first chapter](https://docs.djangoproject.com/en/1.8/howto/custom-management-commands/#module-django.core.management) of the docs (right after the `closepoll.py` example).
**UPDATE2**: Here is the full working example:
```
from django.core.management.base import BaseCommand
class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('username')
parser.add_argument('password')
def handle(self, *args, **options):
username = options['username']
password = options['password']
return u'Username: %s Password: %s' % (username, password)
``` |
Raise error if a Python dict comprehension overwrites a key | 30,238,783 | 17 | 2015-05-14T13:34:57Z | 30,238,855 | 15 | 2015-05-14T13:38:31Z | [
"python"
] | Is there a way to get a dict comprehension to raise an exception if it would override a key?
For example, I would like the following to error because there are two values for the key `'a'`:
```
>>> {k:v for k, v in ('a1', 'a2', 'b3')}
{'a': '2', 'b': '3'}
```
I realise this can be done with a `for` loop. Is there a way to do it while keeping the comprehension syntax? | You can use a generator with a helper function:
```
class DuplicateKeyError(ValueError): pass
def dict_no_dupl(it):
d = {}
for k, v in it:
if k in d: raise DuplicateKeyError(k)
d[k] = v
return d
dict_no_dupl((k, v) for k, v in ('a1', 'a2', 'b3'))
```
This does add a helper function, but keeps the comprehension syntax (reasonably) intact. |
Raise error if a Python dict comprehension overwrites a key | 30,238,783 | 17 | 2015-05-14T13:34:57Z | 30,238,926 | 9 | 2015-05-14T13:42:14Z | [
"python"
] | Is there a way to get a dict comprehension to raise an exception if it would override a key?
For example, I would like the following to error because there are two values for the key `'a'`:
```
>>> {k:v for k, v in ('a1', 'a2', 'b3')}
{'a': '2', 'b': '3'}
```
I realise this can be done with a `for` loop. Is there a way to do it while keeping the comprehension syntax? | If you don't care about which key caused a collision:
Check that the generated dict has the appropriate size with `len()`. |
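A sketch of that `len()` comparison, materializing the input first so it can be counted:

```python
pairs = [('a', '1'), ('a', '2'), ('b', '3')]
d = {k: v for k, v in pairs}          # the duplicate 'a' key is silently overwritten
duplicate = len(d) != len(pairs)      # sizes differ exactly when a key collided
print(duplicate)  # True
```

Note that this only works when the input can be `len()`-ed (a list or tuple, not a one-shot generator), and it tells you *that* a collision happened, not *which* key caused it.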
Splitting lists by short numbers | 30,241,068 | 4 | 2015-05-14T15:22:00Z | 30,241,768 | 7 | 2015-05-14T15:54:52Z | [
"python",
"python-3.x",
"numpy",
"split"
] | I'm using NumPy to find intersections on a graph, but `np.isclose` returns multiple values per intersection
So, I'm going to try to find their averages. But first, I want to isolate the similar values. This is also a useful skill I feel.
I have a list of the x values for the intersection called `idx` that looks like this
```
[-8.67735471 -8.63727455 -8.59719439 -5.5511022 -5.51102204 -5.47094188
-5.43086172 -2.4248497 -2.38476954 -2.34468938 -2.30460922 0.74148297
0.78156313 0.82164329 3.86773547 3.90781563 3.94789579 3.98797595
7.03406814 7.0741483 7.11422846]
```
and I want to separate it out into lists each comprised of the similar numbers.
this is what I have so far:
```
n = 0
for i in range(len(idx)):
try:
if (idx[n]-idx[n-1])<0.5:
sdx.append(idx[n-1])
else:
print(sdx)
sdx = []
except:
sdx.append(idx[n-1])
n = n+1
```
It works for the most part but it forgets some numbers:
```
[-8.6773547094188377, -8.6372745490981959]
[-5.5511022044088181, -5.5110220440881763, -5.4709418837675354]
[-2.4248496993987976, -2.3847695390781567, -2.3446893787575149]
[0.7414829659318638, 0.78156312625250379]
[3.8677354709418825, 3.9078156312625243, 3.9478957915831661]
```
Theres probably a more efficient way to do this, does anyone know of one? | Considering you have a numpy array, you can use [np.split](http://docs.scipy.org/doc/numpy/reference/generated/numpy.split.html), splitting where the difference is > `.5`:
```
import numpy as np
x = np.array([-8.67735471, -8.63727455, -8.59719439, -5.5511022, -5.51102204, -5.47094188,
-5.43086172, -2.4248497, -2.38476954, -2.34468938, -2.30460922, 0.74148297,
0.78156313, 0.82164329, 3.86773547, 3.90781563, 3.94789579, 3.98797595,
7.03406814, 7.0741483])
print np.split(x, np.where(np.diff(x) > .5)[0] + 1)
[array([-8.67735471, -8.63727455, -8.59719439]), array([-5.5511022 , -5.51102204, -5.47094188, -5.43086172]), array([-2.4248497 , -2.38476954, -2.34468938, -2.30460922]), array([ 0.74148297, 0.78156313, 0.82164329]), array([ 3.86773547, 3.90781563, 3.94789579, 3.98797595]), array([ 7.03406814, 7.0741483 ])]
```
`np.where(np.diff(x) > .5)[0]` returns the index where the following element does not meet the `np.diff(x) > .5)` condition:
```
In [6]: np.where(np.diff(x) > .5)[0]
Out[6]: array([ 2, 6, 10, 13, 17])
```
`+ 1` adds 1 to each index:
```
In [12]: np.where(np.diff(x) > .5)[0] + 1
Out[12]: array([ 3, 7, 11, 14, 18])
```
Then passing `[ 3, 7, 11, 14, 18]` to np.split splits the elements into subarrays, `x[:3], x[3:7],x[7:11] ...` |
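Since the original goal was to average each cluster of intersection values, a short sketch building on the same split (illustrative data, threshold 0.5 as above):

```python
import numpy as np

x = np.array([1.0, 1.1, 1.2, 5.0, 5.1, 9.0])
# Split where consecutive values jump by more than 0.5, then average each group.
groups = np.split(x, np.where(np.diff(x) > .5)[0] + 1)
means = [g.mean() for g in groups]
print(means)  # approximately [1.1, 5.05, 9.0]
```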
boost python threading segmentation fault | 30,241,980 | 3 | 2015-05-14T16:05:53Z | 30,251,481 | 7 | 2015-05-15T04:23:22Z | [
"python",
"c++",
"boost-python"
] | Consider the following straightforward python extension. When `start()-ed`, `Foo` will just add the next sequential integer to a `py::list`, once a second:
```
#include <boost/python.hpp>
#include <thread>
#include <atomic>
namespace py = boost::python;
struct Foo {
Foo() : running(false) { }
~Foo() { stop(); }
void start() {
running = true;
thread = std::thread([this]{
while(running) {
std::cout << py::len(messages) << std::endl;
messages.append(py::len(messages));
std::this_thread::sleep_for(std::chrono::seconds(1));
}
});
}
void stop() {
if (running) {
running = false;
thread.join();
}
}
std::thread thread;
py::list messages;
std::atomic<bool> running;
};
BOOST_PYTHON_MODULE(Foo)
{
PyEval_InitThreads();
py::class_<Foo, boost::noncopyable>("Foo",
py::init<>())
.def("start", &Foo::start)
.def("stop", &Foo::stop)
;
}
```
Given the above, the following simple python script segfaults all the time, never even printing anything:
```
>>> import Foo
>>> f = Foo.Foo()
>>> f.start()
>>> Segmentation fault (core dumped)
```
With the core pointing to:
```
namespace boost { namespace python {
inline ssize_t len(object const& obj)
{
ssize_t result = PyObject_Length(obj.ptr());
if (PyErr_Occurred()) throw_error_already_set(); // <==
return result;
}
}} // namespace boost::python
```
Where:
```
(gdb) inspect obj
$1 = (const boost::python::api::object &) @0x62d368: {<boost::python::api::object_base> = {<boost::python::api::object_operators<boost::python::api::object>> = {<boost::python::def_visitor<boost::python::api::object>> = {<No data fields>}, <No data fields>}, m_ptr = []}, <No data fields>}
(gdb) inspect obj.ptr()
$2 = []
(gdb) inspect result
$3 = 0
```
Why does this fail when run in a thread? `obj` looks fine, `result` gets set correctly. Why does `PyErr_Occurred()` happen? Who sets that? | In short, there is a mutex around the CPython interpreter known as the [Global Interpreter Lock](http://wiki.python.org/moin/GlobalInterpreterLock) (GIL). This mutex prevents parallel operations to be performed on Python objects. Thus, at any point in time, a max of one thread, the one that has acquired the GIL, is allowed to perform operations on Python objects. When multiple threads are present, invoking Python code whilst not holding the GIL results in undefined behavior.
C or C++ threads are sometimes referred to as alien threads in the Python documentation. The Python interpreter has no ability to control the alien thread. Therefore, alien threads are responsible for managing the GIL to permit concurrent or parallel execution with Python threads. With this in mind, lets examine the original code:
```
while (running) {
std::cout << py::len(messages) << std::endl; // Python
messages.append(py::len(messages)); // Python
std::this_thread::sleep_for(std::chrono::seconds(1)); // No Python
}
```
As noted above, only two of the three lines in the thread body need to run whilst the thread owns the GIL. One common way to handle this is to use an [RAII](http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization) classes to help manage the GIL. For example, with the following `gil_lock` class, when a `gil_lock` object is created, the calling thread will acquire the GIL. When the `gil_lock` object is destructed, it releases the GIL.
```
/// @brief RAII class used to lock and unlock the GIL.
class gil_lock
{
public:
gil_lock() { state_ = PyGILState_Ensure(); }
~gil_lock() { PyGILState_Release(state_); }
private:
PyGILState_STATE state_;
};
```
The thread body can then use explicit scope to control the lifetime of the lock.
```
while (running) {
// Acquire GIL while invoking Python code.
{
gil_lock lock;
std::cout << py::len(messages) << std::endl;
messages.append(py::len(messages));
}
// Release GIL, allowing other threads to run Python code while
// this thread sleeps.
std::this_thread::sleep_for(std::chrono::seconds(1));
}
```
---
Here is a complete example based on the original code that [demonstrates](http://coliru.stacked-crooked.com/a/ce1b6cad6c01aaf7) the program working properly once the GIL is explicitly managed:
```
#include <thread>
#include <atomic>
#include <iostream>
#include <boost/python.hpp>
/// @brief RAII class used to lock and unlock the GIL.
class gil_lock
{
public:
gil_lock() { state_ = PyGILState_Ensure(); }
~gil_lock() { PyGILState_Release(state_); }
private:
PyGILState_STATE state_;
};
struct foo
{
foo() : running(false) {}
~foo() { stop(); }
void start()
{
namespace python = boost::python;
running = true;
thread = std::thread([this]
{
while (running)
{
{
gil_lock lock; // Acquire GIL.
std::cout << python::len(messages) << std::endl;
messages.append(python::len(messages));
} // Release GIL.
std::this_thread::sleep_for(std::chrono::seconds(1));
}
});
}
void stop()
{
if (running)
{
running = false;
thread.join();
}
}
std::thread thread;
boost::python::list messages;
std::atomic<bool> running;
};
BOOST_PYTHON_MODULE(example)
{
// Force the GIL to be created and initialized. The current caller will
// own the GIL.
PyEval_InitThreads();
namespace python = boost::python;
python::class_<foo, boost::noncopyable>("Foo", python::init<>())
.def("start", &foo::start)
.def("stop", &foo::stop)
;
}
```
Interactive usage:
```
>>> import example
>>> import time
>>> foo = example.Foo()
>>> foo.start()
>>> time.sleep(3)
0
1
2
>>> foo.stop()
>>>
``` |
NumPy slice notation in a dictionary | 30,244,731 | 7 | 2015-05-14T18:33:02Z | 30,244,830 | 9 | 2015-05-14T18:38:31Z | [
"python",
"python-2.7",
"numpy",
"slice"
] | I wonder if it is possible to store numpy slice notation in a python dictionary. Something like:
```
lookup = {0:[:540],
30:[540:1080],
60:[1080:]}
```
It is possible to use native python slice syntax, e.g. `slice(0,10,2)`, but I have not been able to store more complex slices. For example, something that is multidimensional `[:,:2,:, :540]`.
My current work around is to store the values as tuples and then unpack these into the necessary slices.
Working in Python 2.x. | The syntax `[:, :2, :, :540]` is turned into a tuple of `slice` objects by Python:
```
(slice(None, None, None),
slice(None, 2, None),
slice(None, None, None),
slice(None, 540, None))
```
A convenient way to generate this tuple is to use the special function\* [`np.s_`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.s_.html). You just need to pass it the `[...]` expression. For example:
```
>>> np.s_[:540]
slice(None, 540, None)
>>> np.s_[:, :2, :, :540]
(slice(None, None, None),
slice(None, 2, None),
slice(None, None, None),
slice(None, 540, None))
```
Then your dictionary of slices could be written as:
```
lookup = {0: np.s_[:540],
30: np.s_[540:1080],
60: np.s_[1080:]}
```
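Since `np.s_` returns ordinary `slice` objects (or tuples of them), the stored values can be used directly inside the indexing brackets. A minimal sketch, using a made-up small array and shortened slices for illustration:

```python
import numpy as np

# Hypothetical small array standing in for the real data.
a = np.arange(12).reshape(2, 2, 3)

lookup = {0: np.s_[:1],
          30: np.s_[:, :1, :2]}

# The stored slice works exactly like literal slice notation.
print(a[lookup[0]].shape)   # -> (1, 2, 3), same as a[:1].shape
print(a[lookup[30]].shape)  # -> (2, 1, 2), same as a[:, :1, :2].shape
```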
\* *technically `s_` is an alias for the class [`IndexExpression`](https://github.com/numpy/numpy/blob/master/numpy/lib/index_tricks.py#l603) that implements a special `__getitem__` method.* |
Python Pandas Create New Column with Groupby().Sum() | 30,244,952 | 4 | 2015-05-14T18:44:39Z | 30,244,979 | 20 | 2015-05-14T18:46:07Z | [
"python",
"pandas"
] | Trying to create a new column with the groupby calculation. In the code below, I get the correct calculated values for each date (see group below) but when I try to create a new column (df['Data4']) with it I get NaN. So I am trying to create a new column in the dataframe with the sum of 'Data3' for the all dates and apply that to each date row. For example, 2015-05-08 is in 2 rows (total is 50+5 = 55) and in this new column I would like to have 55 in both of the rows.
```
import pandas as pd
import numpy as np
from pandas import DataFrame
df = pd.DataFrame({'Date': ['2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05', '2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05'], 'Sym': ['aapl', 'aapl', 'aapl', 'aapl', 'aaww', 'aaww', 'aaww', 'aaww'], 'Data2': [11, 8, 10, 15, 110, 60, 100, 40],'Data3': [5, 8, 6, 1, 50, 100, 60, 120]})
group = df['Data3'].groupby(df['Date']).sum()
df['Data4'] = group
``` | You want to use [`transform`](http://pandas.pydata.org/pandas-docs/stable/groupby.html#transformation) this will return a Series with the index aligned to the df so you can then add it as a new column:
```
In [74]:
df = pd.DataFrame({'Date': ['2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05', '2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05'], 'Sym': ['aapl', 'aapl', 'aapl', 'aapl', 'aaww', 'aaww', 'aaww', 'aaww'], 'Data2': [11, 8, 10, 15, 110, 60, 100, 40],'Data3': [5, 8, 6, 1, 50, 100, 60, 120]})
df['Data4'] = df['Data3'].groupby(df['Date']).transform('sum')
df
Out[74]:
Data2 Data3 Date Sym Data4
0 11 5 2015-05-08 aapl 55
1 8 8 2015-05-07 aapl 108
2 10 6 2015-05-06 aapl 66
3 15 1 2015-05-05 aapl 121
4 110 50 2015-05-08 aaww 55
5 60 100 2015-05-07 aaww 108
6 100 60 2015-05-06 aaww 66
7 40 120 2015-05-05 aaww 121
``` |
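An alternative sketch (not part of the accepted answer) computes the per-date sums once and maps them back onto the `Date` column. `transform('sum')` does this in one step, but spelling it out makes the index alignment explicit; the shortened DataFrame here is illustrative:

```python
import pandas as pd

df = pd.DataFrame({'Date': ['2015-05-08', '2015-05-07', '2015-05-08'],
                   'Data3': [5, 8, 50]})

# groupby().sum() yields one value per date; Series.map() broadcasts it
# back onto every row with that date, mirroring transform('sum').
sums = df.groupby('Date')['Data3'].sum()
df['Data4'] = df['Date'].map(sums)
print(df)
```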
Python - efficiently find where something would land in a sorted list? | 30,245,090 | 4 | 2015-05-14T18:52:19Z | 30,245,136 | 9 | 2015-05-14T18:54:58Z | [
"python",
"sorting"
] | I have a list:
```
x = ['c', 'a', 'e']
```
I can sort this list:
```
x_sorted = sorted(x)
```
`x_sorted` is now `['a', 'c', 'e']`
Now let's say I have a new variable `y = 'd'`
I want to find out where in `x_sorted` this new variable would fall. In this example the new variable `y` contains the string `'d'` so it would be placed as `['a', 'c', 'd', 'e']` at index 2 of the list. I desire to find out this index number as efficiently as possible (since I have to repeat this process many times).
Here is a function I wrote which does the task very simply:
```
def f(x_sorted, y):
new_list = x_sorted[:] + [y]
return sorted(new_list).index(y)
```
This gives me the correct answer.
I am wondering if there is a better more efficient way of doing this, as `f` will be called 100,000+ times.
Thanks in advance! | You can use [bisect](https://docs.python.org/2/library/bisect.html)
```
from bisect import bisect
l = ['a', 'c', 'e']
print(bisect(l,"d"))
2
```
To add it to the list:
```
from bisect import insort
l = ['a',"b", 'c', 'e']
insort(l, "d")
print(l)
insort(l, "f")
print(l)
['a', 'b', 'c', 'd', 'e']
['a', 'b', 'c', 'd', 'e', 'f']
```
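Note that `bisect` is an alias for `bisect_right`; when the value already occurs in the list, `bisect_left` and `bisect_right` return different insertion points. A quick illustration:

```python
from bisect import bisect_left, bisect_right

l = ['a', 'c', 'c', 'e']
# bisect_left returns the leftmost valid insertion point,
# bisect_right the rightmost; both keep the list sorted.
print(bisect_left(l, 'c'))   # -> 1
print(bisect_right(l, 'c'))  # -> 3
```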
If you want a faster insert you could use a [blist](https://pypi.python.org/pypi/blist/?) where maintaining a sorted list with insort is:
```
O(log**2 n) vs O(n)
```
```
from bisect import insort
from blist import blist
b = blist(["a", "b", "c", "e"])
insort(b, "f")
insort(b, "d")
print(b)
blist(['a', 'b', 'c', 'd', 'e', 'f'])
```
There is also a [blist.sortedlist](http://stutzbachenterprises.com/blist/sortedlist.html#blist.sortedlist) list where you can use [.add](http://stutzbachenterprises.com/blist/sortedlist.html#blist.sortedlist.L.add):
```
from blist import sortedlist
l = ['b',"a", 'c', 'e']
b = sortedlist(l)
b.add("f")
print(b)
sortedlist(['a', 'b', 'c', 'e', 'f'])
```
There is also a [sortedcontainers](http://www.grantjenks.com/docs/sortedcontainers/) library that has a [sortedlist](http://www.grantjenks.com/docs/sortedcontainers/sortedlist.html) implementation. |
python plot horizontal line for a range of values | 30,246,807 | 4 | 2015-05-14T20:32:37Z | 30,247,026 | 7 | 2015-05-14T20:45:40Z | [
"python",
"matplotlib"
] | I am really new to python and trying to plot speed as a constant value for the distince from its current startpointinmeters to the next startpointinmeters, so speed is constant from the start to the end (next start).
For example, speed should be 13 for distance 0 to 27.82 and 15 from 27.82 to 40.12 and so on.
Any idea?
```
startpointinmeters speed
0.0 13.0
27.82 15.0
40.12 14.0
75.33 14.0
172.77 17.0
208.64 18.0
253.0 21.0
335.21 20.0
351.16 25.0
590.38 22.0
779.37 21.0
968.35 22.0
1220.66 20.0
1299.17 19.0
1318.32 14.0
1352.7 9.0
``` | This can be done with the [step](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.step) function of Matplotlib:
```
import matplotlib.pyplot as plt
x = [0., 27.82, 40.12, 75.33, 172.77, 208.64, 253., 335.21, 351.16,
590.38, 779.37, 968.35, 1220.66, 1299.17, 1318.32, 1352.7]
v = [13., 15., 14., 14., 17., 18., 21., 20., 25., 22., 21., 22., 20.,
19., 14., 9.]
plt.step(x, v, where='post')
plt.xlabel('Position [m]')
plt.ylabel('Speed [m/s]')
plt.show()
```
Result:
*(resulting figure: step plot of speed against position)*
See [this example](http://matplotlib.org/examples/pylab_examples/step_demo.html) for the difference between the different values for the 'where' argument. From your description it seems you want the 'post' option. |
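Since the speeds are piecewise constant, the same data can also be queried at an arbitrary position without plotting, e.g. with `np.searchsorted`. A sketch using the first few rows of the data; `speed_at` is a helper name made up here:

```python
import numpy as np

x = np.array([0., 27.82, 40.12, 75.33])
v = np.array([13., 15., 14., 14.])

def speed_at(position):
    # searchsorted(..., side='right') - 1 finds the segment whose
    # start point is the largest one <= position.
    return v[np.searchsorted(x, position, side='right') - 1]

print(speed_at(10.0))   # -> 13.0
print(speed_at(30.0))   # -> 15.0
```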
Listing contents of a bucket with boto3 | 30,249,069 | 13 | 2015-05-14T23:22:55Z | 30,249,553 | 19 | 2015-05-15T00:17:48Z | [
"python",
"amazon-s3",
"boto",
"boto3"
] | How can I see what's inside a bucket in S3 with `boto3`? (i.e. do an `"ls"`)?
Doing the following:
```
import boto3
s3 = boto3.resource('s3')
my_bucket = s3.Bucket('some/path/')
```
returns:
```
s3.Bucket(name='some/path/')
```
How do I see its contents? | One way to see the contents would be:
```
for obj in my_bucket.objects.all():
    print(obj)
``` |