title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
'Library not loaded: @rpath/libcudart.7.5.dylib' TensorFlow Error on Mac | 38,710,339 | 6 | 2016-08-02T01:12:01Z | 38,712,022 | 10 | 2016-08-02T04:55:05Z | [
"python",
"osx",
"pip",
"tensorflow",
"osx-elcapitan"
] | I'm using OS X El Capitan (10.11.4).
I just downloaded TensorFlow using the pip install instructions [here](https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html).
Everything went pretty smoothly, though I did get a few warning messages like:
`The directory '/Users/myusername/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want the -H flag.`
and
`You are using pip version 6.0.8, however version 8.1.2 is available.` Even though I just installed pip.
Then, when I tested TensorFlow in Python, I got the error:
```
>>> import tensorflow as tf
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/__init__.py", line 23, in <module>
from tensorflow.python import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/__init__.py", line 48, in <module>
from tensorflow.python import pywrap_tensorflow
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow.py", line 28, in <module>
_pywrap_tensorflow = swig_import_helper()
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/pywrap_tensorflow.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow', fp, pathname, description)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/_pywrap_tensorflow.so, 10): Library not loaded: @rpath/libcudart.7.5.dylib
Referenced from: /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/_pywrap_tensorflow.so
Reason: image not found
```
Now, when I try to do `pip uninstall tensorflow-0.10.0rc0` it tells me that it's not installed.
The closest thing I've found to resembling this problem is [this issue](https://github.com/tensorflow/tensorflow/issues/2278) in the TensorFlow GitHub docs (which I have not tried).
How can I uninstall whatever it did install and get TensorFlow up and running correctly? | This error message is displayed if you install the GPU-enabled Mac OS version of TensorFlow (available from release 0.10 onwards) on a machine that does not have CUDA installed.
To fix the error, install the CPU version for Python 2.7 or 3.x, as follows:
```
# Mac OS X, CPU only, Python 2.7:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.10.0rc0-py2-none-any.whl
$ sudo pip install --upgrade $TF_BINARY_URL
# Mac OS X, CPU only, Python 3.4 or 3.5:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.10.0rc0-py3-none-any.whl
$ sudo pip3 install --upgrade $TF_BINARY_URL
``` |
How do I turn a dataframe into a series of lists? | 38,713,200 | 17 | 2016-08-02T06:29:48Z | 38,713,212 | 7 | 2016-08-02T06:30:33Z | [
"python",
"list",
"pandas",
"dataframe"
] | I have had to do this several times and I'm always frustrated. I have a dataframe:
```
df = pd.DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]], ['a', 'b'], ['A', 'B', 'C', 'D'])
print df
A B C D
a 1 2 3 4
b 5 6 7 8
```
I want to turn `df` into:
```
pd.Series([[1, 2, 3, 4], [5, 6, 7, 8]], ['a', 'b'])
a [1, 2, 3, 4]
b [5, 6, 7, 8]
dtype: object
```
I've tried
```
df.apply(list, axis=1)
```
which just gives me back the same `df`.
What is a convenient/effective way to do this? | pandas tries really hard to make creating dataframes convenient. As such, it interprets lists and arrays as things you'd want to split into columns. I'm not going to complain; this is almost always helpful.
I've done this one of two ways.
***Option 1***:
```
# Only works with a non-MultiIndex,
# and it's slow, so don't use it
df.T.apply(tuple).apply(list)
```
***Option 2***:
```
pd.Series(df.T.to_dict('list'))
```
Both give you:
```
a [1, 2, 3, 4]
b [5, 6, 7, 8]
dtype: object
```
However ***Option 2*** scales better.
---
### Timing
**given `df`**
[](http://i.stack.imgur.com/oJ0nk.png)
**much larger `df`**
```
from string import ascii_letters
letters = list(ascii_letters)
df = pd.DataFrame(np.random.choice(range(10), (52 ** 2, 52)),
pd.MultiIndex.from_product([letters, letters]),
letters)
```
Results for `df.T.apply(tuple).apply(list)` are erroneous because that solution doesn't work over a MultiIndex.
[](http://i.stack.imgur.com/X2c18.png) |
How do I turn a dataframe into a series of lists? | 38,713,200 | 17 | 2016-08-02T06:29:48Z | 38,713,387 | 11 | 2016-08-02T06:41:55Z | [
"python",
"list",
"pandas",
"dataframe"
] | I have had to do this several times and I'm always frustrated. I have a dataframe:
```
df = pd.DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]], ['a', 'b'], ['A', 'B', 'C', 'D'])
print df
A B C D
a 1 2 3 4
b 5 6 7 8
```
I want to turn `df` into:
```
pd.Series([[1, 2, 3, 4], [5, 6, 7, 8]], ['a', 'b'])
a [1, 2, 3, 4]
b [5, 6, 7, 8]
dtype: object
```
I've tried
```
df.apply(list, axis=1)
```
which just gives me back the same `df`.
What is a convenient/effective way to do this? | You can first convert the `DataFrame` to a `numpy` array with [`values`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.values.html), then convert it to a list, and finally create a new `Series` with the index from `df`, if you need a faster solution:
```
print (pd.Series(df.values.tolist(), index=df.index))
a [1, 2, 3, 4]
b [5, 6, 7, 8]
dtype: object
```
Timings with small DataFrame:
```
In [76]: %timeit (pd.Series(df.values.tolist(), index=df.index))
1000 loops, best of 3: 295 µs per loop
In [77]: %timeit pd.Series(df.T.to_dict('list'))
1000 loops, best of 3: 685 µs per loop
In [78]: %timeit df.T.apply(tuple).apply(list)
1000 loops, best of 3: 958 µs per loop
```
and with large:
```
from string import ascii_letters
letters = list(ascii_letters)
df = pd.DataFrame(np.random.choice(range(10), (52 ** 2, 52)),
pd.MultiIndex.from_product([letters, letters]),
letters)
In [71]: %timeit (pd.Series(df.values.tolist(), index=df.index))
100 loops, best of 3: 2.06 ms per loop
In [72]: %timeit pd.Series(df.T.to_dict('list'))
1 loop, best of 3: 203 ms per loop
In [73]: %timeit df.T.apply(tuple).apply(list)
1 loop, best of 3: 506 ms per loop
``` |
Understanding Keras LSTMs | 38,714,959 | 15 | 2016-08-02T08:04:13Z | 38,737,941 | 7 | 2016-08-03T08:09:59Z | [
"python",
"deep-learning",
"keras",
"lstm"
] | I am trying to reconcile my understanding of LSTMs, as described here: <http://colah.github.io/posts/2015-08-Understanding-LSTMs/> with the LSTM implemented in Keras. I am following the blog at <http://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/> for the Keras tutorial. What I am mainly confused about is,
1. The reshaping of the data series into `[samples, time steps, features]` and,
2. The stateful LSTMs
Let's concentrate on the above two questions with reference to the code pasted below:
```
# reshape into X=t and Y=t+1
look_back = 3
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
# reshape input to be [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], look_back, 1))
testX = numpy.reshape(testX, (testX.shape[0], look_back, 1))
########################
# The IMPORTANT BIT
##########################
# create and fit the LSTM network
batch_size = 1
model = Sequential()
model.add(LSTM(4, batch_input_shape=(batch_size, look_back, 1), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
for i in range(100):
model.fit(trainX, trainY, nb_epoch=1, batch_size=batch_size, verbose=2, shuffle=False)
model.reset_states()
```
Note: create\_dataset takes a sequence of length N and returns a `N-look_back` array of which each element is a `look_back` length sequence.
# What is Time Steps and Features?
As can be seen TrainX is a 3-D array with Time\_steps and Feature being the last two dimensions respectively (3 and 1 in this particular code). With respect to the image below, does this mean that we are considering the `many to one` case, where the number of pink boxes are 3? Or does it literally mean the chain length is 3 (i.e. only 3 green boxes considered). [](http://i.stack.imgur.com/kwhAP.jpg)
Does the features argument become relevant when we consider multivariate series? e.g. modelling two financial stocks simultaneously?
# Stateful LSTMs
Do stateful LSTMs mean that we save the cell memory values between runs of batches? If this is the case, `batch_size` is one, and the memory is reset between the training runs, so what was the point of saying that it was stateful? I'm guessing this is related to the fact that training data is not shuffled, but I'm not sure how.
Any thoughts?
Image reference: <http://karpathy.github.io/2015/05/21/rnn-effectiveness/>
## Edit 1:
A bit confused about @van's comment about the red and green boxes being equal. So just to confirm, do the following API calls correspond to the unrolled diagrams? Especially noting the second diagram (`batch_size` was arbitrarily chosen.):
[](http://i.stack.imgur.com/sW207.jpg)
[](http://i.stack.imgur.com/15V2C.jpg)
## Edit 2:
For people who have done Udacity's deep learning course and are still confused about the time\_step argument, look at the following discussion: <https://discussions.udacity.com/t/rnn-lstm-use-implementation/163169> | First of all, you chose great tutorials ([1](http://colah.github.io/posts/2015-08-Understanding-LSTMs/),[2](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)) to start with.
**What Time-step means**: `Time-steps==3` in X.shape (describing the data shape) means there are three pink boxes. Since each step in Keras requires an input, the number of green boxes should usually equal the number of red boxes, unless you hack the structure.
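To make the `[samples, time steps, features]` shape concrete, here is a pure-Python sketch of the windowing that the question's `create_dataset` describes (the function body is our own reconstruction, not the tutorial's code); with `look_back=3`, each sample is 3 time steps of 1 feature:

```python
def create_dataset(series, look_back=3):
    # Slide a window of `look_back` steps over the series; each X sample
    # is a (time_steps, features) block, and y is the value that follows it.
    X, y = [], []
    for i in range(len(series) - look_back):
        X.append([[v] for v in series[i:i + look_back]])  # features == 1
        y.append(series[i + look_back])
    return X, y

X, y = create_dataset([10, 20, 30, 40, 50])
# X has shape (samples=2, time_steps=3, features=1):
# [[[10], [20], [30]], [[20], [30], [40]]], and y == [40, 50]
```

For a multivariate series (e.g. two stocks), each innermost list would hold 2 values instead of 1, i.e. `features==2`.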
**many to many vs. many to one**: In Keras, there is a `return_sequences` parameter when you're initializing `LSTM` or `GRU` or `SimpleRNN`. When `return_sequences` is `False` (the default), it is **many to one** as shown in the picture, and its return shape is `(batch_size, hidden_unit_length)`, which represents the last state. When `return_sequences` is `True`, it is **many to many**, and its return shape is `(batch_size, time_step, hidden_unit_length)`.
**Does the features argument become relevant**: The features argument means **"How big is your red box"**, i.e. what the input dimension is at each step. If you want to predict from, say, 8 kinds of market information, then you can generate your data with `feature==8`.
**Stateful**: You can look up [the source code](https://github.com/fchollet/keras/blob/master/keras/layers/recurrent.py#L223). When initializing the state, if `stateful==True`, the state from the last training run is used as the initial state; otherwise a new state is generated. I haven't turned on `stateful` yet. However, I disagree that `batch_size` can only be 1 when `stateful==True`.
Currently, you generate your data from already-collected data. Imagine your stock information coming in as a stream: rather than waiting a day to collect everything sequentially, you would like to generate input data **online** while training/predicting with the network. If you have 400 stocks sharing the same network, then you can set `batch_size==400`. |
How to unpack deep nested iterable structure | 38,720,918 | 4 | 2016-08-02T12:50:13Z | 38,720,943 | 10 | 2016-08-02T12:51:02Z | [
"python",
"python-3.x",
"iterable",
"iterable-unpacking"
] | Say for example I have a structure that contains many sub-elements some of which are structures:
```
v = [1, 2, 3, [4, (5, 6)]]
```
How can I unpack these into a series of names that contain only the contents of the structures and not a structure?
Trying `a, b, c, d, e, f = v` raises a `ValueError` while using the starred expression would assign a structure to the names. How can I unpack them in order to get:
```
print(a, b, c, d, e, f)
```
to print:
```
1 2 3 4 5 6
``` | Assignments are defined recursively, you need to *[use parentheses `()` and/or square brackets `[]` to enclose target names](https://docs.python.org/3/reference/simple_stmts.html#assignment-statements)* and match the nested structure of your iterable. In your case:
```
a, b, c, (d, (e, f)) = v
print(a, b, c, d, e, f)
1 2 3 4 5 6
```
Similarly, with no change in semantics, you could use `[]` to denote the structure:
```
a, b, c, [d, [e, f]] = v
print(a, b, c, d, e, f)
1 2 3 4 5 6
```
or, of course, mix them up.
Python will then unpack `v` and assign the first 3 values normally, then unpack the contents of `(d, (e, f))` and assign `d` and then again unpack `(e, f)` and do the same.
You can see this happening if you import the `dis` module and disassembling the statement with `dis.dis`:
```
dis.dis('a, b, c, (d, (e, f)) = v')
1 0 LOAD_NAME 0 (v)
3 UNPACK_SEQUENCE 4 # <- first unpack
6 STORE_NAME 1 (a)
9 STORE_NAME 2 (b)
12 STORE_NAME 3 (c)
15 UNPACK_SEQUENCE 2 # <- second unpack
18 STORE_NAME 4 (d)
21 UNPACK_SEQUENCE 2 # <- third unpack
24 STORE_NAME 5 (e)
27 STORE_NAME 6 (f)
30 LOAD_CONST 0 (None)
33 RETURN_VALUE
```
In general, to unpack arbitrarily nested structures, match the structure in the left side of the assignment (target-list):
```
v = [1, [2, [3, [4, 5]]]]
[a, [b, [c, [d, e]]]] = v
print(a, b, c, d, e)
1 2 3 4 5
```
the outer `[]` are, of course, unnecessary, just adding them to show that simply matching the structure suffices.
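When the nesting of `v` isn't known in advance, you can't write a matching target-list at all. As an alternative sketch (the `flatten` helper is our own, not part of the assignment machinery), a small recursive generator can flatten the structure first:

```python
from collections.abc import Iterable

def flatten(obj):
    # Yield leaves depth-first; strings/bytes count as leaves, not containers.
    for item in obj:
        if isinstance(item, Iterable) and not isinstance(item, (str, bytes)):
            yield from flatten(item)
        else:
            yield item

a, b, c, d, e, f = flatten([1, 2, 3, [4, (5, 6)]])
print(a, b, c, d, e, f)  # 1 2 3 4 5 6
```

Note this relies on the flattened structure having exactly as many leaves as there are target names.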
A more general (soft) introduction on iterable unpacking can be found on [the doc page for it](http://stackoverflow.com/documentation/python/809/python-3-vs-python-2/2845/unpacking-iterables#t=20160802141126769914), even though the case of nested structures have not yet been discussed there. |
Why does create() in PayPal's batch payments via API return False? | 38,725,877 | 8 | 2016-08-02T16:27:21Z | 39,066,929 | 7 | 2016-08-21T17:28:08Z | [
"python",
"paypal",
"paypal-rest-sdk"
] | <https://developer.paypal.com/docs/api/payments.payouts-batch/#payouts_create>
Sample code:
<https://github.com/paypal/PayPal-Python-SDK/blob/master/samples/payout/create.py>
Why does `create()` return False? How do I get an explanation of why?
Update: I was able to get this info, but it's not helpful either:
```
ForbiddenAccess: Failed. Response status: 403. Response message: Forbidden. Error message: {"name":"AUTHORIZATION_ERROR","message":"Authorization error occurred","debug_id":"60e73559274d3","information_link":"https://developer.paypal.com/webapps/developer/docs/api/#AUTHORIZATION_ERROR"}
``` | PayPal tech/dev support told me the debug ID said I didn't have Mass Pay enabled on my account, so I had to call them and talk to general support. I did, and they said they cannot enable it on Canadian accounts. I'm going to have to change payment processors to someone who offers the Mass Pay feature. I need to send out 500 micro payments to 500 different people.
They told me to open a US PayPal account. They asked if I had a US residence, and I do have a vacation home in the US. Then they asked me if I had a social security number, and I don't. So that option was not available.
Update: I told PayPal technical support that it could not be enabled in Canada. They told me that it works in Canada on the sandbox, so maybe it's coming soon. However, they said there is a feature called [Payouts](https://developer.paypal.com/docs/api/payments.payouts-batch/?mark=payouts) that can work for me. They went ahead and enabled it for me. So I'm going with that instead of mass pay.
Moral of the story: PayPal technical support via email sorted it all out. Their phone support is useless and stubborn. |
Returning string matches between two lists for a given number of elements in a third list | 38,728,204 | 11 | 2016-08-02T18:43:57Z | 38,728,248 | 8 | 2016-08-02T18:46:38Z | [
"python",
"list",
"set-intersection"
] | I've got a feeling that I will be told to go to the 'beginner's guide' or what have you but I have this code here that goes
```
does = ['my','mother','told','me','to','choose','the']
it = ['my','mother','told','me','to','choose','the']
work = []
while 5 > len(work):
for nope in it:
if nope in does:
work.append(nope)
print (work)
```
And I get
```
['my', 'mother', 'told', 'me', 'to', 'choose', 'the']
```
Why is this? And how do I convince it to return
```
['my', 'mother', 'told', 'me']
``` | You could try something like this:
```
for nope in it:
if len(work) < 5 and nope in does:
work.append(nope)
else:
break
```
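Equivalently, here is a lazy one-liner sketch of our own using `itertools.islice` (with a limit of 4 to match the output the question asks for):

```python
from itertools import islice

does = ['my', 'mother', 'told', 'me', 'to', 'choose', 'the']
it = ['my', 'mother', 'told', 'me', 'to', 'choose', 'the']

# The generator yields matches one at a time; islice stops after 4 of them.
work = list(islice((nope for nope in it if nope in does), 4))
print(work)  # ['my', 'mother', 'told', 'me']
```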
The problem with your code is that it checks `work`'s length only after the inner `for` loop has already iterated over all the items of `it` and appended every one that is in `does`. |
Python enumerate reverse index only | 38,738,548 | 5 | 2016-08-03T08:38:41Z | 38,738,652 | 8 | 2016-08-03T08:43:26Z | [
"python",
"python-3.x",
"reverse",
"enumerate"
] | I am trying to reverse the index given by `enumerate` whilst retaining the original order of the list being enumerated.
Assume I have the following:
```
>> range(5)
[0, 1, 2, 3, 4]
```
If I enumerate this I would get the following:
```
>> list(enumerate(range(5)))
[(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
```
However I want to reverse the index provided by enumerate so that I get:
```
[(4, 0), (3, 1), (2, 2), (1, 3), (0, 4)]
```
So far I have the following code:
```
reversed(list(enumerate(reversed(range(5)))))
```
I was just wondering if there was a neater way to do this? | How about using zip instead with a reversed range?
```
>>> zip(range(9, -1, -1), range(10))
[(9, 0), (8, 1), (7, 2), (6, 3), (5, 4), (4, 5), (3, 6), (2, 7), (1, 8), (0, 9)]
>>> def reversedEnumerate(l):
return zip(range(len(l)-1, -1, -1), l)
>>> reversedEnumerate(range(10))
[(9, 0), (8, 1), (7, 2), (6, 3), (5, 4), (4, 5), (3, 6), (2, 7), (1, 8), (0, 9)]
```
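On Python 3, `zip` is already lazy, so a sketch of the same idea (our own) needs no `itertools`:

```python
def reversed_enumerate(seq):
    # Pair a descending index with each item, keeping the item order.
    return zip(reversed(range(len(seq))), seq)

print(list(reversed_enumerate(range(5))))
# [(4, 0), (3, 1), (2, 2), (1, 3), (0, 4)]
```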
As @julienSpronk suggests, on Python 2 use `izip` to get a lazy iterator instead of a list, and `xrange` likewise:
```
>>> import itertools
>>> def reversedEnumerate(l):
... return itertools.izip(xrange(len(l)-1, -1, -1), l)
...
>>> reversedEnumerate(range(10))
<itertools.izip object at 0x03749760>
>>> for i in reversedEnumerate(range(10)):
... print i
...
(9, 0)
(8, 1)
(7, 2)
(6, 3)
(5, 4)
(4, 5)
(3, 6)
(2, 7)
(1, 8)
(0, 9)
``` |
Django error: render_to_response() got an unexpected keyword argument 'context_instance' | 38,739,422 | 7 | 2016-08-03T09:18:34Z | 38,739,423 | 23 | 2016-08-03T09:18:34Z | [
"python",
"django"
] | After upgrading to Django 1.10, I get the error `render_to_response() got an unexpected keyword argument 'context_instance'`.
My view is as follows:
```
from django.shortcuts import render_to_response
from django.template import RequestContext
def my_view(request):
context = {'foo': 'bar'}
return render_to_response('my_template.html', context, context_instance=RequestContext(request))
```
Here is the full traceback:
```
Traceback:
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/handlers/exception.py" in inner
39. response = get_response(request)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/handlers/base.py" in _get_response
187. response = self.process_exception_by_middleware(e, request)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/handlers/base.py" in _get_response
185. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/alasdair/dev/rtr/rtr/urls.py" in my_view
26. return render_to_response('my_template.html', context, context_instance=RequestContext(request))
Exception Type: TypeError at /
Exception Value: render_to_response() got an unexpected keyword argument 'context_instance'
``` | The `context_instance` parameter in `render_to_response` was [deprecated in Django 1.8](https://docs.djangoproject.com/en/1.10/releases/1.8/#dictionary-and-context-instance-arguments-of-rendering-functions), and removed in Django 1.10.
The solution is to switch to the [`render`](https://docs.djangoproject.com/en/1.10/topics/http/shortcuts/#django.shortcuts.render) shortcut, which automatically uses a `RequestContext`.
Update your imports and view as follows. Note that `render` takes the `request` object as its first argument.
```
from django.shortcuts import render
def my_view(request):
context = {'foo': 'bar'}
return render(request, 'my_template.html', context)
```
The `render` shortcut was introduced in Django 1.3, so this change is compatible with older versions of Django. |
How to make this Block of python code short and efficient | 38,742,938 | 31 | 2016-08-03T11:56:13Z | 38,742,979 | 46 | 2016-08-03T11:57:49Z | [
"python",
"performance",
"python-2.7",
"if-statement"
] | I am a total newbie to programming and Python. I was solving a problem. I found a solution, but it seems too slow.
```
if n % 2 == 0 and n % 3 == 0 and\
n % 4 == 0 and n % 5 == 0 and\
n % 6 == 0 and n % 7 == 0 and\
n % 8 == 0 and n % 9 == 0 and\
n % 10 == 0 and n % 11 == 0 and\
n % 12 == 0 and n % 13 == 0 and\
n % 14 == 0 and n % 15 == 0 and\
n % 16 == 0 and n % 17 == 0 and\
n % 18 == 0 and n % 19 == 0 and\
n % 20 == 0:
```
This is the piece of code that checks whether `n` is divisible by all numbers from 2 to 20.
How can I make it short and efficient? | ```
if all(n % i == 0 for i in range(2, 21)):
```
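A quick runnable check (a sketch of our own) against the smallest such number, 232792560, which is the LCM of 1..20:

```python
def divisible_by_all(n):
    # True iff n is divisible by every integer from 2 through 20.
    return all(n % i == 0 for i in range(2, 21))

print(divisible_by_all(232792560))  # True: the LCM of 1..20
print(divisible_by_all(232792559))  # False: it's odd, so 2 already fails
```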
`all` accepts an iterable and returns `True` if all of its elements evaluate to `True`, `False` otherwise. The `n % i == 0 for i in range(2, 21)` part is an iterable of 19 `True` or `False` values, depending on whether `n` is divisible by the corresponding `i` value. |
How to make this Block of python code short and efficient | 38,742,938 | 31 | 2016-08-03T11:56:13Z | 38,743,106 | 79 | 2016-08-03T12:02:46Z | [
"python",
"performance",
"python-2.7",
"if-statement"
] | I am a total newbie to programming and Python. I was solving a problem. I found a solution, but it seems too slow.
```
if n % 2 == 0 and n % 3 == 0 and\
n % 4 == 0 and n % 5 == 0 and\
n % 6 == 0 and n % 7 == 0 and\
n % 8 == 0 and n % 9 == 0 and\
n % 10 == 0 and n % 11 == 0 and\
n % 12 == 0 and n % 13 == 0 and\
n % 14 == 0 and n % 15 == 0 and\
n % 16 == 0 and n % 17 == 0 and\
n % 18 == 0 and n % 19 == 0 and\
n % 20 == 0:
```
This is the piece of code that checks whether `n` is divisible by all numbers from 2 to 20.
How can I make it short and efficient? | There's a trade-off between short and efficient.
The *Short* way is `if all(n % i == 0 for i in range(2, 21)):`
The *Efficient* way is to notice that a check like `n % 20 == 0` already implies `n % f == 0` for every factor `f` of 20, so, for example, you can drop `n % 2 == 0`. You'll end up with fewer comparisons, which will run faster. Following this through, you'll notice a pattern, and that the *entire* statement reduces to `if n % 232792560 == 0`! But that has now deeply embedded the 20 within it, so it will be difficult to unpick if you need a different upper limit.
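The constant doesn't have to be hand-derived; a stdlib-only sketch of our own (Python 3.5+, where `math.gcd` exists) computes it from the upper limit:

```python
from functools import reduce
from math import gcd

def lcm_range(hi):
    # LCM of 1..hi, folding pairwise: lcm(a, b) == a * b // gcd(a, b)
    return reduce(lambda a, b: a * b // gcd(a, b), range(2, hi + 1), 1)

print(lcm_range(20))  # 232792560
```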
So you see that the *efficient* way is not so easy to read and maintain. So pick the one best suited to your requirements. |
How to make this Block of python code short and efficient | 38,742,938 | 31 | 2016-08-03T11:56:13Z | 38,743,446 | 53 | 2016-08-03T12:17:35Z | [
"python",
"performance",
"python-2.7",
"if-statement"
] | I am a total newbie to programming and Python. I was solving a problem. I found a solution, but it seems too slow.
```
if n % 2 == 0 and n % 3 == 0 and\
n % 4 == 0 and n % 5 == 0 and\
n % 6 == 0 and n % 7 == 0 and\
n % 8 == 0 and n % 9 == 0 and\
n % 10 == 0 and n % 11 == 0 and\
n % 12 == 0 and n % 13 == 0 and\
n % 14 == 0 and n % 15 == 0 and\
n % 16 == 0 and n % 17 == 0 and\
n % 18 == 0 and n % 19 == 0 and\
n % 20 == 0:
```
This is the piece of code that checks whether `n` is divisible by all numbers from 2 to 20.
How can I make it short and efficient? | There's a smarter way to do this. If `n` is divisible by every integer in range(1, 21), then it *must* be a multiple of the [least common multiple](https://en.wikipedia.org/wiki/Least_common_multiple) of those integers.
You can calculate the LCM of a set of numbers progressively, using the GCD (greatest common divisor). You can import the gcd function from the `fractions` module, or implement it directly in your code.
```
def gcd(a, b):
''' Greatest Common Divisor '''
while b:
a, b = b, a % b
return a
def lcm(a, b):
''' Least Common Multiple '''
return a * b // gcd(a, b)
# Compute the LCM of range(1, 21)
n = 2
for i in range(3, 21):
n = lcm(n, i)
lcm20 = n
print('LCM =', lcm20)
#test
for i in range(1, 21):
print(i, lcm20 % i)
```
**output**
```
LCM = 232792560
1 0
2 0
3 0
4 0
5 0
6 0
7 0
8 0
9 0
10 0
11 0
12 0
13 0
14 0
15 0
16 0
17 0
18 0
19 0
20 0
```
Now, to test if any number `n` is divisible by all the numbers is range(1, 21) you can just do
```
n % lcm20 == 0
```
or hard-code the constant into your script:
```
# 232792560 is the LCM of 1..20
n % 232792560 == 0
```
---
As Anton Sherwood points out in [his comment](http://stackoverflow.com/questions/38742938/how-to-make-this-block-of-python-code-short-and-efficient/38743446#comment64890813_38743106) we can speed up the process of finding the required LCM by just taking the LCM of the upper half of the range. This works because each number in the lower half of the range is a divisor of a number in the upper half of the range.
We can improve the speed even further by in-lining the GCD and LCM calculations, rather than calling functions to perform those operations. Python function calls are noticeably slower than C function calls due to the extra overheads involved.
Yakk mentions an alternative approach to finding the required LCM: calculate the product of the prime powers in the range. This is quite fast if the range is large enough (around 40 or so), but for small numbers the simple LCM loop is faster.
Below is some `timeit` code that compares the speed of these various approaches. This script runs on Python 2 and 3, I've tested it on Python 2.6 and Python 3.6. It uses a prime list function by Robert William Hanks to implement Yakk's suggestion. I've modified Robert's code slightly to make it compatible with Python 3. I suppose there may be a more efficient way to find the prime powers; if so, I'd like to see it. :)
I mentioned earlier that there's a GCD function in the `fractions` module. I did some time tests with it, but it's noticeably slower than my code. Presumably that's because it does error checking on the arguments.
```
#!/usr/bin/env python3
''' Least Common Multiple of the numbers in range(1, m)
Speed tests
Written by PM 2Ring 2016.08.04
'''
from __future__ import print_function
from timeit import Timer
#from fractions import gcd
def gcd(a, b):
''' Greatest Common Divisor '''
while b:
a, b = b, a % b
return a
def lcm(a, b):
''' Least Common Multiple '''
return a * b // gcd(a, b)
def primes(n):
''' Returns a list of primes < n '''
# By Robert William Hanks, from http://stackoverflow.com/a/3035188/4014959
sieve = [True] * (n//2)
for i in range(3, int(n ** 0.5) + 1, 2):
if sieve[i//2]:
sieve[i*i//2::i] = [False] * ((n - i*i - 1) // (2*i) + 1)
return [2] + [2*i + 1 for i in range(1, n//2) if sieve[i]]
def lcm_range_PM(m):
''' The LCM of range(1, m) '''
n = 1
for i in range(2, m):
n = lcm(n, i)
return n
def lcm_range_AS(m):
''' The LCM of range(1, m) '''
n = m // 2
for i in range(n + 1, m):
n = lcm(n, i)
return n
def lcm_range_fast(m):
''' The LCM of range(1, m) '''
n = m // 2
for i in range(n + 1, m):
a, b = n, i
while b:
a, b = b, a % b
n = n * i // a
return n
def lcm_range_primes(m):
n = 1
for p in primes(m):
a = p
while a < m:
a *= p
n *= a // p
return n
funcs = (
lcm_range_PM,
lcm_range_AS,
lcm_range_fast,
lcm_range_primes
)
def verify(hi):
''' Verify that all the functions give the same result '''
for i in range(2, hi + 1):
a = [func(i) for func in funcs]
a0 = a[0]
assert all(u == a0 for u in a[1:]), (i, a)
print('ok')
def time_test(loops, reps):
''' Print timing stats for all the functions '''
timings = []
for func in funcs:
fname = func.__name__
setup = 'from __main__ import num, ' + fname
cmd = fname + '(num)'
t = Timer(cmd, setup)
result = t.repeat(reps, loops)
result.sort()
timings.append((result, fname))
timings.sort()
for result, fname in timings:
print('{0:16} {1}'.format(fname, result))
verify(500)
reps = 3
loops = 8192
num = 2
for _ in range(10):
print('\nnum = {0}, loops = {1}'.format(num, loops))
time_test(loops, reps)
num *= 2
loops //= 2
print('\n' + '- ' * 40)
funcs = (
lcm_range_fast,
lcm_range_primes
)
loops = 1000
for num in range(30, 60):
print('\nnum = {0}, loops = {1}'.format(num, loops))
time_test(loops, reps)
```
**output**
```
ok
num = 2, loops = 8192
lcm_range_PM [0.013914467999711633, 0.01393848999941838, 0.023966414999449626]
lcm_range_fast [0.01656803699916054, 0.016577592001340236, 0.016578077998929075]
lcm_range_AS [0.01738608899904648, 0.017602848000024096, 0.01770572900022671]
lcm_range_primes [0.0979132459997345, 0.09863009199943917, 0.10133290699923236]
num = 4, loops = 4096
lcm_range_fast [0.01580070299860381, 0.01581421999981103, 0.016406731001552544]
lcm_range_AS [0.020135083001150633, 0.021132826999746612, 0.021589830999801052]
lcm_range_PM [0.02821666900126729, 0.029041511999821523, 0.036708851001094445]
lcm_range_primes [0.06287289499960025, 0.06381634699937422, 0.06406087200048205]
num = 8, loops = 2048
lcm_range_fast [0.015360695999333984, 0.02138442599971313, 0.02630166100061615]
lcm_range_AS [0.02104746699842508, 0.021742354998423252, 0.022648989999652258]
lcm_range_PM [0.03499621999981173, 0.03546843599906424, 0.042924503999529406]
lcm_range_primes [0.03741390599861916, 0.03865244000007806, 0.03959638999913295]
num = 16, loops = 1024
lcm_range_fast [0.015973221999956877, 0.01600381199932599, 0.01603960700049356]
lcm_range_AS [0.023003745000096387, 0.023848425998949097, 0.024875303000953863]
lcm_range_primes [0.028887982000014745, 0.029422679001072538, 0.029940758000520873]
lcm_range_PM [0.03780223299872887, 0.03925949299991771, 0.04462484900068375]
num = 32, loops = 512
lcm_range_fast [0.018606906000059098, 0.02557359899947187, 0.03725786200084258]
lcm_range_primes [0.021675119000065024, 0.022790905999499955, 0.03934840099827852]
lcm_range_AS [0.025330593998660333, 0.02545427500081132, 0.026093265998497372]
lcm_range_PM [0.044320442000753246, 0.044836185001258855, 0.05193238799984101]
num = 64, loops = 256
lcm_range_primes [0.01650579099987226, 0.02443148000020301, 0.033489004999864846]
lcm_range_fast [0.018367127000601613, 0.019002625000211992, 0.01955779200034158]
lcm_range_AS [0.026258470001266687, 0.04113643799973943, 0.0436801750001905]
lcm_range_PM [0.04854909000096086, 0.054864030998942326, 0.0797669980001956]
num = 128, loops = 128
lcm_range_primes [0.013294352000229992, 0.013383581999732996, 0.024317635999977938]
lcm_range_fast [0.02098568399924261, 0.02108044199849246, 0.03272008299973095]
lcm_range_AS [0.028861763999884715, 0.0399744570004259, 0.04660961700028565]
lcm_range_PM [0.05302166500041494, 0.059346372001527925, 0.07757829000001948]
num = 256, loops = 64
lcm_range_primes [0.010487794999789912, 0.010514846000660327, 0.01055656300013652]
lcm_range_fast [0.02619308099929185, 0.02637610199963092, 0.03755473099954543]
lcm_range_AS [0.03422451699952944, 0.03513622399987071, 0.05206341099983547]
lcm_range_PM [0.06851765200008231, 0.073690847000762, 0.07841700100107118]
num = 512, loops = 32
lcm_range_primes [0.009275872000216623, 0.009292663999076467, 0.009309271999882185]
lcm_range_fast [0.03759837500001595, 0.03774761099884927, 0.0383951439998782]
lcm_range_AS [0.04527828100071929, 0.046646228000099654, 0.0569303670017689]
lcm_range_PM [0.11064135100059502, 0.12738902800083451, 0.13843623499997193]
num = 1024, loops = 16
lcm_range_primes [0.009248070000467123, 0.00931658900117327, 0.010279963000357384]
lcm_range_fast [0.05642254200029129, 0.05663530499987246, 0.05796714499956579]
lcm_range_AS [0.06509247900066839, 0.0652738099997805, 0.0658949799999391]
lcm_range_PM [0.11376448099872505, 0.11652833600055601, 0.12083648199950403]
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
num = 30, loops = 1000
lcm_range_fast [0.03275446999941778, 0.033530079999763984, 0.04002811799909978]
lcm_range_primes [0.04062690899991139, 0.040886697999667376, 0.04130547800014028]
num = 31, loops = 1000
lcm_range_fast [0.03423191600086284, 0.039976395999474335, 0.04078094900069118]
lcm_range_primes [0.04053011599899037, 0.04140578700025799, 0.04566663300101936]
num = 32, loops = 1000
lcm_range_fast [0.036124262000157614, 0.036700047998238006, 0.04392546200142533]
lcm_range_primes [0.042666604998885305, 0.04393434200028423, 0.05142524700022477]
num = 33, loops = 1000
lcm_range_fast [0.03875456000059785, 0.03997290300139866, 0.044469664000644116]
lcm_range_primes [0.04280027899949346, 0.0437891679994209, 0.04381238600035431]
num = 34, loops = 1000
lcm_range_fast [0.038203157999305404, 0.03937257799952931, 0.04531203700025799]
lcm_range_primes [0.043273317998682614, 0.043349457999283914, 0.04420187600044301]
num = 35, loops = 1000
lcm_range_fast [0.04228670399970724, 0.04346491300020716, 0.047442203998798504]
lcm_range_primes [0.04332462999991549, 0.0433610400014004, 0.04525857199951133]
num = 36, loops = 1000
lcm_range_fast [0.04175829099949624, 0.04217126499861479, 0.046840714998324984]
lcm_range_primes [0.04339772299863398, 0.04360795700085873, 0.04453475599984813]
num = 37, loops = 1000
lcm_range_fast [0.04231068799890636, 0.04373836499871686, 0.05010528200000408]
lcm_range_primes [0.04371378700125206, 0.04463105400100176, 0.04481986299833807]
num = 38, loops = 1000
lcm_range_fast [0.042841554000915494, 0.043649038998410106, 0.04868016199907288]
lcm_range_primes [0.04571479200058093, 0.04654245399979118, 0.04671720700025617]
num = 39, loops = 1000
lcm_range_fast [0.04469198100014182, 0.04786454099848925, 0.05639159299971652]
lcm_range_primes [0.04572433999965142, 0.04583652600013011, 0.046649005000290344]
num = 40, loops = 1000
lcm_range_fast [0.044788433999201516, 0.046223339000789565, 0.05302252199908253]
lcm_range_primes [0.045482261000870494, 0.04680115900009696, 0.046941823999077315]
num = 41, loops = 1000
lcm_range_fast [0.04650144500010356, 0.04783133000091766, 0.05405569400136301]
lcm_range_primes [0.04678159699869866, 0.046870936999766855, 0.04726529199979268]
num = 42, loops = 1000
lcm_range_fast [0.04772527699969942, 0.04824955299955036, 0.05483534199993301]
lcm_range_primes [0.0478546140002436, 0.048954233001495595, 0.04905354400034412]
num = 43, loops = 1000
lcm_range_primes [0.047872637000182294, 0.048093739000250935, 0.048502418998396024]
lcm_range_fast [0.04906317900167778, 0.05292572700091114, 0.09274570399975346]
num = 44, loops = 1000
lcm_range_primes [0.049750300000596326, 0.050272532000235515, 0.05087747600009607]
lcm_range_fast [0.050906279000628274, 0.05109869400075695, 0.05820328499976313]
num = 45, loops = 1000
lcm_range_primes [0.050158660000306554, 0.050309066000409075, 0.054478109999763547]
lcm_range_fast [0.05236714599959669, 0.0539534259987704, 0.058996140000090236]
num = 46, loops = 1000
lcm_range_primes [0.049894845999006066, 0.0512076260001777, 0.051318084999365965]
lcm_range_fast [0.05081920200063905, 0.051397655999608105, 0.05722950699964713]
num = 47, loops = 1000
lcm_range_primes [0.04971165599999949, 0.05024208400027419, 0.051092388999677496]
lcm_range_fast [0.05388393700013694, 0.05502788499870803, 0.05994341699988581]
num = 48, loops = 1000
lcm_range_primes [0.0517014939996443, 0.05279760400117084, 0.052917389999493025]
lcm_range_fast [0.05402479099939228, 0.055251746000067214, 0.06128628700025729]
num = 49, loops = 1000
lcm_range_primes [0.05412415899991174, 0.05474224499994307, 0.05610057699959725]
lcm_range_fast [0.05757830900074623, 0.0590323519991216, 0.06310263200066402]
num = 50, loops = 1000
lcm_range_primes [0.054892387001018506, 0.05504404100065585, 0.05610281799999939]
lcm_range_fast [0.0588886920013465, 0.0594741389995761, 0.06682244199873821]
num = 51, loops = 1000
lcm_range_primes [0.05582956999933231, 0.055921465000210446, 0.06004790299994056]
lcm_range_fast [0.060586288000195054, 0.061715600999377784, 0.06733965300009004]
num = 52, loops = 1000
lcm_range_primes [0.0557458109997242, 0.05669860099988, 0.056761407999147195]
lcm_range_fast [0.060323355999571504, 0.06177857100010442, 0.06778404599936039]
num = 53, loops = 1000
lcm_range_primes [0.05501838899908762, 0.05541463699955784, 0.0561610999993718]
lcm_range_fast [0.06281833000139159, 0.06334177999997337, 0.06843207200108736]
num = 54, loops = 1000
lcm_range_primes [0.057314272000439814, 0.059501444000488846, 0.060004871998899034]
lcm_range_fast [0.06634221600143064, 0.06662889200015343, 0.07153233899953193]
num = 55, loops = 1000
lcm_range_primes [0.05790564500057371, 0.05824322199987364, 0.05863306900027965]
lcm_range_fast [0.06693624800027465, 0.06784769100158883, 0.07562533499913116]
num = 56, loops = 1000
lcm_range_primes [0.057219010001063, 0.05858367799919506, 0.06246676000046136]
lcm_range_fast [0.06854197999928147, 0.06999059400004626, 0.07505119899906276]
num = 57, loops = 1000
lcm_range_primes [0.05746709300001385, 0.0587476679993415, 0.0606189070003893]
lcm_range_fast [0.07094627400147147, 0.07241532700027165, 0.07868066799892404]
num = 58, loops = 1000
lcm_range_primes [0.0576490580006066, 0.058481812999161775, 0.05857339500107628]
lcm_range_fast [0.07127979200049595, 0.07549924399972952, 0.07849203499972646]
num = 59, loops = 1000
lcm_range_primes [0.057503377998727956, 0.058632499998566345, 0.060360438999850885]
lcm_range_fast [0.07332589399993594, 0.07625177999943844, 0.08087236799838138]
```
This timing info was generated using Python 3.6 running on a Debian derivative of Linux, on an ancient 2GHz Pentium IV machine. |
Django URLs error: view must be a callable or a list/tuple in the case of include() | 38,744,285 | 6 | 2016-08-03T12:54:34Z | 38,744,286 | 20 | 2016-08-03T12:54:34Z | [
"python",
"django",
"django-urls",
"django-1.10"
] | After upgrading to Django 1.10, I get the error:
```
TypeError: view must be a callable or a list/tuple in the case of include().
```
My urls.py is as follows:
```
urlpatterns = [
url(r'^$', 'myapp.views.home'),
url(r'^contact$', 'myapp.views.contact'),
url(r'^login/$', 'django.contrib.auth.views.login'),
]
```
The full traceback is:
```
Traceback (most recent call last):
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 121, in inner_run
self.check(display_num_errors=True)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/management/base.py", line 385, in check
include_deployment_checks=include_deployment_checks,
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/management/base.py", line 372, in _run_checks
return checks.run_checks(**kwargs)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/checks/registry.py", line 81, in run_checks
new_errors = check(app_configs=app_configs)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/checks/urls.py", line 14, in check_url_config
return check_resolver(resolver)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/core/checks/urls.py", line 24, in check_resolver
for pattern in resolver.url_patterns:
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/utils/functional.py", line 35, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/urls/resolvers.py", line 310, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/utils/functional.py", line 35, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/urls/resolvers.py", line 303, in urlconf_module
return import_module(self.urlconf_name)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/Users/alasdair/dev/urlproject/urlproject/urls.py", line 28, in <module>
url(r'^$', 'myapp.views.home'),
File "/Users/alasdair/.virtualenvs/django110/lib/python2.7/site-packages/django/conf/urls/__init__.py", line 85, in url
raise TypeError('view must be a callable or a list/tuple in the case of include().')
TypeError: view must be a callable or a list/tuple in the case of include().
``` | Django 1.10 no longer allows you to specify views as a string (e.g. `'myapp.views.home'`) in your URL patterns.
The solution is to update your `urls.py` to include the view callable. This means that you have to import the view in your `urls.py`. If your URL patterns don't have names, then now is a good time to add one, because reversing with the dotted python path no longer works.
```
from django.contrib.auth.views import login
from myapp.views import home, contact
urlpatterns = [
url(r'^$', home, name='home'),
url(r'^contact$', contact, name='contact'),
url(r'^login/$', login, name='login'),
]
```
If there are many views, then importing them individually can be inconvenient. An alternative is to import the views module from your app.
```
from django.contrib.auth import views as auth_views
from myapp import views as myapp_views
urlpatterns = [
url(r'^$', myapp_views.home, name='home'),
url(r'^contact$', myapp_views.contact, name='contact'),
url(r'^login/$', auth_views.login, name='login'),
]
```
Note that we have used `as myapp_views` and `as auth_views`, which allows us to import the `views.py` from multiple apps without them clashing.
See the [Django URL dispatcher docs](https://docs.djangoproject.com/en/1.10/topics/http/urls/#url-dispatcher) for more information about `urlpatterns`. |
simplest python equivalent to R's grepl | 38,745,710 | 4 | 2016-08-03T13:56:45Z | 38,745,802 | 9 | 2016-08-03T14:00:27Z | [
"python",
"python-2.7"
] | Is there a simple/one-line python equivalent to R's `grepl` function?
```
strings = c("aString", "yetAnotherString", "evenAnotherOne")
grepl(pattern = "String", x = strings) #[1] TRUE TRUE FALSE
``` | You can use list comprehension:
```
strings = ["aString", "yetAnotherString", "evenAnotherOne"]
["String" in i for i in strings]
#Out[76]: [True, True, False]
```
Or use `re` module:
```
import re
[bool(re.search("String", i)) for i in strings]
#Out[77]: [True, True, False]
```
Or with `Pandas` (R user may be interested in this library, using a dataframe "similar" structure):
```
import pandas as pd
pd.Series(strings).str.contains('String').tolist()
#Out[78]: [True, True, False]
``` |
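If the same pattern will be matched against many lists of strings, compiling the regex once avoids repeated recompilation (a small sketch, not part of the original answer):

```python
import re

strings = ["aString", "yetAnotherString", "evenAnotherOne"]

# Compile the pattern once and reuse it across calls;
# equivalent to grepl with a regex pattern
pattern = re.compile("String")
result = [bool(pattern.search(s)) for s in strings]
print(result)  # [True, True, False]
```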
Comparison of collections containing non-reflexive elements | 38,779,705 | 8 | 2016-08-05T01:15:56Z | 38,779,764 | 7 | 2016-08-05T01:27:31Z | [
"python"
] | In python, a value `x` is not always constrained to equal itself. Perhaps the best known example is `NaN`:
```
>>> x = float("NaN")
>>> x == x
False
```
Now consider a list of exactly one item. We might consider two such lists to be *equal* if and only if the items they contain are *equal*. For example:
```
>>> ["hello"] == ["hello"]
True
```
But this does not appear to be the case with `NaN`:
```
>>> x = float("NaN")
>>> x == x
False
>>> [x] == [x]
True
```
So these lists of items that are "not equal", are "equal". But only sometimes ... in particular:
* two lists consisting of the same instance of `NaN` are considered equal; while
* two separate lists consisting of different instances of `NaN` are not equal
Observe:
```
>>> x = float("NaN")
>>> [x] == [x]
True
>>> [x] == [float("NaN")]
False
```
This general behaviour also applies to other collection types such as tuples and sets. Is there a good rationale for this? | Per [the docs](https://docs.python.org/3/reference/expressions.html#value-comparisons),
> In enforcing reflexivity of elements, **the comparison of collections assumes that for a collection element x, x == x is always true**. Based on that assumption, element identity is compared first, and element comparison is performed only for distinct elements. This approach yields the same result as a strict element comparison would, if the compared elements are reflexive. For non-reflexive elements, the result is different than for strict element comparison, and may be surprising: The non-reflexive not-a-number values for example result in the following comparison behavior when used in a list:
>
> ```
> >>> nan = float('NaN')
> >>> nan is nan
> True
> >>> nan == nan
> False <-- the defined non-reflexive behavior of NaN
> >>> [nan] == [nan]
> True <-- list enforces reflexivity and tests identity first
> ``` |
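The same identity-first shortcut shows up in membership tests, which can be sketched as follows (an illustration of the documented behavior, not part of the original answer):

```python
# Containment also compares identity before equality, so a NaN
# "finds itself" in a list even though it is not == to itself
nan = float('nan')
print(nan == nan)        # False: NaN is non-reflexive under ==
print(nan in [nan])      # True: 'in' checks identity first
print([nan].count(nan))  # 1: count uses the same shortcut
```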
Passing a "pointer to a virtual function" as argument in Python | 38,779,876 | 11 | 2016-08-05T01:43:56Z | 38,780,855 | 9 | 2016-08-05T03:56:17Z | [
"python",
"c++",
"reference",
"virtual-functions",
"dispatch"
] | Compare the following code in **C++**:
```
#include <iostream>
#include <vector>
struct A
{
virtual void bar(void) { std::cout << "one" << std::endl; }
};
struct B : public A
{
virtual void bar(void) { std::cout << "two" << std::endl; }
};
void test(std::vector<A*> objs, void (A::*fun)())
{
for (auto o = objs.begin(); o != objs.end(); ++o)
{
A* obj = (*o);
(obj->*fun)();
}
}
int main()
{
std::vector<A*> objs = {new A(), new B()};
test(objs, &A::bar);
}
```
and in **Python**:
```
class A:
def bar(self):
print("one")
class B(A):
def bar(self):
print("two")
def test(objs, fun):
for o in objs:
fun(o)
objs = [A(), B()]
test(objs, A.bar)
```
The **C++** code will print:
```
one
two
```
while the **Python** code will print
```
one
one
```
How can I pass "a pointer to a method" and resolve it to the overridden one, achieving the same behavior in Python as in C++?
To add some context and explain why I initially thought about this pattern. I have a tree consisting of nodes that can be subclassed. I would like to create a generic graph traversal function which takes a node of the graph as well as a function which might be overridden in subclasses of graph nodes. The function calculates some value for a node, given values calculated for adjacent nodes. The goal is to return a value calculated for the given node (which requires traversing the whole graph). | Regarding your edit, one thing you could do is use a little wrapper lambda that calls the method you want to reference. This way the method call looks like "regular python code" instead of being something complicated based on string-based access.
In your example, the only part that would need to change is the call to the `test` function:
```
test(objs, (lambda x: x.bar()))
``` |
Why is this python generator function running correctly only once? | 38,796,024 | 2 | 2016-08-05T18:52:27Z | 38,796,094 | 7 | 2016-08-05T18:57:02Z | [
"python",
"python-3.x",
"generator"
] | This is almost certainly a result of my ignorance of how generators work, but I am completely lost.
If I interactively create the following generator:
```
def neighborhood(iterable):
iterator = iter(iterable)
prev = None
item = next(iterator)
for post in iterator:
yield (prev,item,post)
prev = item
item = post
yield (prev,item,None)
```
and then test it like:
```
for prev,item,next in neighborhood([1,2,3,4,5]):
print(prev, item, next)
```
It produces:
```
None 1 2
1 2 3
2 3 4
3 4 5
4 5 None
```
as expected. If I run it again, or try to redefine it in any way, I get a
> 'NoneType' object is not callable
error. | When you did
```
for prev,item,next in ...
# ^^^^
```
you shadowed the built-in `next` function. The next time you try to use your generator, it fails because it gets your `next` variable instead of the function it needed. |
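A minimal fix is to pick loop-variable names that do not collide with the built-in (a sketch based on the question's generator):

```python
def neighborhood(iterable):
    iterator = iter(iterable)
    prev = None
    item = next(iterator)
    for post in iterator:
        yield (prev, item, post)
        prev = item
        item = post
    yield (prev, item, None)

# 'nxt' instead of 'next' leaves the built-in function untouched,
# so the generator can be created and consumed repeatedly
triples = []
for prev, item, nxt in neighborhood([1, 2, 3]):
    triples.append((prev, item, nxt))
print(triples)  # [(None, 1, 2), (1, 2, 3), (2, 3, None)]
```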
Python Using Keyword and variable number of arguments in same function | 38,796,059 | 3 | 2016-08-05T18:54:23Z | 38,796,124 | 7 | 2016-08-05T18:58:44Z | [
"python",
"python-2.7",
"function",
"arguments"
] | I am wondering if there is a way to do something like this in python 2.7.12
```
def saveValues(file,*data,delim="|"):
buf=""
for d in data:
buf+=str(d) + delim
open(file,"w").write(buf[:-1])
```
So that I have the option to pass delim, or take the default. | It's possible in Python 3. The Python 2 workaround is usually this:
```
def saveValues(file, *data, **kwargs):
delim = kwargs.pop('delim', '|')
...
``` |
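A fuller sketch of that Python 2 pattern, with a guard so misspelled keywords fail loudly instead of being silently ignored. The guard and the string-returning shape are additions for illustration only; the original question writes the joined string to a file:

```python
def save_values(*data, **kwargs):
    delim = kwargs.pop('delim', '|')
    if kwargs:  # anything left over was an unexpected keyword argument
        raise TypeError('unexpected keyword arguments: %r' % sorted(kwargs))
    # str.join avoids building the string with a trailing delimiter
    return delim.join(str(d) for d in data)

print(save_values(1, 2, 3))             # '1|2|3'
print(save_values(1, 2, 3, delim=','))  # '1,2,3'
```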
get first and last values in a groupby | 38,797,271 | 7 | 2016-08-05T20:23:54Z | 38,797,283 | 8 | 2016-08-05T20:24:39Z | [
"python",
"pandas",
"dataframe",
"group-by"
] | I have a dataframe `df`
```
df = pd.DataFrame(np.arange(20).reshape(10, -1),
[['a', 'a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'd'],
['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']],
['X', 'Y'])
```
How do I get the first and last rows, grouped by the first level of the index?
I tried
```
df.groupby(level=0).agg(['first', 'last']).stack()
```
and got
```
X Y
a first 0 1
last 6 7
b first 8 9
last 12 13
c first 14 15
last 16 17
d first 18 19
last 18 19
```
This is so close to what I want. How can I preserve the level 1 index and get this instead:
```
X Y
a a 0 1
d 6 7
b e 8 9
g 12 13
c h 14 15
i 16 17
d j 18 19
j 18 19
``` | ### Option 1
```
def first_last(df):
return df.ix[[0, -1]]
df.groupby(level=0, group_keys=False).apply(first_last)
```
[](http://i.stack.imgur.com/PBecd.png)
---
### Option 2 - only works if index is unique
```
idx = df.index.to_series().groupby(level=0).agg(['first', 'last']).stack()
df.loc[idx]
```
---
### Option 3 - per notes below, this only makes sense when there are no NAs
I also abused the `agg` function. The code below works, but is far uglier.
```
df.reset_index(1).groupby(level=0).agg(['first', 'last']).stack() \
.set_index('level_1', append=True).reset_index(1, drop=True) \
.rename_axis([None, None])
```
---
# Note
per @unutbu: `agg(['first', 'last'])` takes the first non-NA values.
I interpret this to mean that it must then be run column by column. Further, forcing index level=1 to align may not even make sense.
Let's include another test
```
df = pd.DataFrame(np.arange(20).reshape(10, -1),
[list('aaaabbbccd'),
list('abcdefghij')],
list('XY'))
df.loc[tuple('aa'), 'X'] = np.nan
```
---
```
def first_last(df):
return df.ix[[0, -1]]
df.groupby(level=0, group_keys=False).apply(first_last)
```
[](http://i.stack.imgur.com/MQNBt.png)
```
df.reset_index(1).groupby(level=0).agg(['first', 'last']).stack() \
.set_index('level_1', append=True).reset_index(1, drop=True) \
.rename_axis([None, None])
```
[](http://i.stack.imgur.com/34k3s.png)
Sure enough! This second solution is taking the first valid value in column X. It is now nonsensical to have forced that value to align with the index a. |
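A position-based variant sidesteps the NA issue entirely, since `iloc[[0, -1]]` selects rows by integer position regardless of their values (a sketch, not from the original answer):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(20).reshape(10, -1),
                  [list('aaaabbbccd'), list('abcdefghij')],
                  list('XY'))

# Taking rows by position means NA values cannot change which rows
# count as "first" and "last" in each group
out = pd.concat(g.iloc[[0, -1]] for _, g in df.groupby(level=0))
print(out['X'].tolist())  # [0, 6, 8, 12, 14, 16, 18, 18]
```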
Pandas still getting SettingWithCopyWarning even after using .loc | 38,809,796 | 3 | 2016-08-07T00:18:40Z | 38,810,015 | 7 | 2016-08-07T01:08:52Z | [
"python",
"pandas"
] | At first, I tried writing some code that looked like this:
```
import numpy as np
import pandas as pd
np.random.seed(2016)
train = pd.DataFrame(np.random.choice([np.nan, 1, 2], size=(10, 3)),
columns=['Age', 'SibSp', 'Parch'])
complete = train.dropna()
complete['AgeGt15'] = complete['Age'] > 15
```
After getting SettingWithCopyWarning, I tried using.loc:
```
complete.loc[:, 'AgeGt15'] = complete['Age'] > 15
complete.loc[:, 'WithFamily'] = complete['SibSp'] + complete['Parch'] > 0
```
However, I still get the same warning. What gives? | When `complete = train.dropna()` is executed, `dropna` might return a copy, so
out of an abundance of caution, Pandas sets `complete.is_copy` to a Truthy
value:
```
In [220]: complete.is_copy
Out[220]: <weakref at 0x7f7f0b295b38; to 'DataFrame' at 0x7f7eee6fe668>
```
This allows Pandas to warn you later, when `complete['AgeGt15'] = complete['Age'] > 15` is executed that you may be modifying a copy which will have no effect on `train`. For beginners this may be a useful warning. In your case, it appears you have no intention of modifying `train` indirectly by modifying `complete`. Therefore the warning is just a meaningless annoyance in your case.
You can silence the warning by setting
```
complete.is_copy = False
```
This is quicker than making an actual copy, and nips the `SettingWithCopyWarning` in the bud (at the point [where `_check_setitem_copy` is called](https://github.com/pydata/pandas/blob/master/pandas/core/generic.py#L1559)):
```
def _check_setitem_copy(self, stacklevel=4, t='setting', force=False):
if force or self.is_copy:
...
```
---
If you are really confident you know what you are doing, you can shut off the `SettingWithCopyWarning` globally with
```
pd.options.mode.chained_assignment = None # None|'warn'|'raise'
``` |
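A third option, which keeps the warning enabled globally, is to make the copy explicit so Pandas knows `complete` is independent of `train` (a sketch, not part of the original answer):

```python
import numpy as np
import pandas as pd

np.random.seed(2016)
train = pd.DataFrame(np.random.choice([np.nan, 1, 2], size=(10, 3)),
                     columns=['Age', 'SibSp', 'Parch'])

# .copy() severs the link to train, so the assignment below is a plain
# write on an independent frame and raises no SettingWithCopyWarning
complete = train.dropna().copy()
complete['AgeGt15'] = complete['Age'] > 15
```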
Denoise and filter an image | 38,820,743 | 9 | 2016-08-08T03:08:06Z | 38,971,196 | 7 | 2016-08-16T09:27:06Z | [
"python",
"image",
"opencv",
"image-processing",
"scikit-image"
] | I am doing **license-plate recognition**. I have cropped out the plate, but it is very **blurred**. Therefore I cannot split out the digits/characters and recognize them.
Here is my image:
[](http://i.stack.imgur.com/qQ8CF.jpg)
I have tried to **denoise** it through using **scikit image** function.
First, import the libraries:
```
import cv2
from skimage import restoration
from skimage.filters import threshold_otsu, rank
from skimage.morphology import closing, square, disk
```
Then I read the image and convert it to **grayscale**:
```
image = cv2.imread("plate.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
```
I try to **remove the noise**:
```
denoise = restoration.denoise_tv_chambolle(image , weight=0.1)
thresh = threshold_otsu(denoise)
bw = closing(denoise > thresh, square(2))
```
What I got is :
[](http://i.stack.imgur.com/udt2O.jpg)
As you can see, all the digits are **mixed together**. Thus, I **cannot separate** them and recognize the characters one by one.
What I expect is something like this (I draw it):
[](http://i.stack.imgur.com/OHQN3.png)
I am looking for help how can I better filter the image? Thank you.
=====================================================================
**UPDATE**:
After using `skimage.morphology.erosion`, I got:
[](http://i.stack.imgur.com/Z0Pak.jpg) | I concur with the opinion that you should probably try to optimize the input image quality.
Number plate blur is a typical example of motion blur.
How well you can deblur depends on how large the blur radius is.
Generally, the greater the speed of the vehicle, the larger the blur radius, and the more difficult the restoration.
A simple solution that somewhat works is de-interlacing of images.
[](http://i.stack.imgur.com/dwNUD.png)
Note that it is only slightly more readable than your input image.
Here I have dropped every alternate line and resized the image to half its size using PIL/Pillow and this is what I get:
```
from PIL import Image
img=Image.open("license.jpeg")
size=list(img.size)
size[0] //= 2  # integer division so resize() receives int dimensions
size[1] //= 2
smaller_image=img.resize(size, Image.NEAREST)
smaller_image.save("smaller_image.png")
```
The next and more formal approach is **deconvolution**.
Since blurring is achieved using convolution of images, deblurring requires doing the inverse of convolution or deconvolution of the image. There are various kinds of deconvolution algorithms like the Wiener deconvolution,
Richardson-Lucy method, Radon transform and a few types of Bayesian filtering.
You can apply Wiener deconvolution algorithm using this [code](https://github.com/opencv/opencv/blob/master/samples/python/deconvolution.py). Play with the angle, diameter and signal to noise ratio and see if it provides some improvements.
The `skimage.restoration` module also provides implementation of both `unsupervised_wiener` and `richardson_lucy` deconvolution.
In the code below I have shown both the implementations but you will have to modify the psf to see which one suits better.
```
import numpy as np
import matplotlib.pyplot as plt
import cv2
from skimage import color, data, restoration
from scipy.signal import convolve2d as conv2
img = cv2.imread('license.jpg')
licence_grey_scale = color.rgb2gray(img)
psf = np.ones((5, 5)) / 25
# comment/uncomment next two lines one by one to see unsupervised_wiener and richardson_lucy deconvolution
deconvolved, _ = restoration.unsupervised_wiener(licence_grey_scale, psf)
deconvolved = restoration.richardson_lucy(licence_grey_scale, psf)
fig, ax = plt.subplots()
plt.gray()
ax.imshow(deconvolved)
ax.axis('off')
plt.show()
```
Unfortunately, most of these deconvolution algorithms require you to know the blur kernel (aka the Point Spread Function, or PSF) in advance.
Here since you do not know the PSF, so you will have to use blind deconvolution.
Blind deconvolution attempts to estimate the original image without any knowledge of the blur kernel.
I have not tried this with your image but here is a Python implementation of blind deconvolution algorithm:
<https://github.com/alexis-mignon/pydeconv>
Note that an effective general-purpose blind deconvolution algorithm has not yet been found; this is an active field of research. |
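If the blur is dominated by horizontal motion (typical for a moving vehicle), a line-shaped PSF is often a better starting guess for the deconvolution routines above than a uniform box. A sketch; the kernel length is an assumption you would tune:

```python
import numpy as np

def motion_psf(length=9):
    # A horizontal line kernel approximating linear motion blur,
    # normalized so the kernel sums to 1
    psf = np.zeros((length, length))
    psf[length // 2, :] = 1.0
    return psf / psf.sum()

psf = motion_psf(9)  # candidate psf for the deconvolution calls above
```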
Why does next() always display the same value? | 38,822,840 | 3 | 2016-08-08T06:49:37Z | 38,822,868 | 10 | 2016-08-08T06:51:13Z | [
"python",
"python-2.7",
"generator",
"yield"
] | I was practicing the `yield` statement. I have written the following function in Python 2.7:
```
>>> def mygen():
... i = 0
... j = 3
... for k in range(i, j):
... yield k
...
>>> mygen().next()
0
>>> mygen().next()
0
>>> mygen().next()
0
```
Whenever I call `mygen().next()` it always displays output as `0`, instead of `0`, `1`, `2` & `StopIteration`. Can anybody please explain this? | You are *recreating* the generator on every call, so it starts from the beginning each time.
Create the generator **once**:
```
gen = mygen()
gen.next()
gen.next()
gen.next()
```
Generator functions produce a new iterator every time you call them; that way you can produce multiple independent copies. Each independent iterator is an invocation of the function that can be stepped through separately from the others:
```
>>> def mygen():
... i = 0
... j = 3
... for k in range(i, j):
... yield k
...
>>> gen1 = mygen()
>>> gen2 = mygen()
>>> gen1.next()
0
>>> gen1.next()
1
>>> gen2.next()
0
>>> gen2.next()
1
>>> gen1.next()
2
>>> gen1.next()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
```
Note that you probably want to use the [`next()` function](https://docs.python.org/2/library/functions.html#next) instead of calling `generator.next()` directly:
```
next(gen)
```
`generator.next()` is considered a hook (Python 3 renamed it to `generator.__next__()`), and the `next()` function is the official API to invoke it in a cross-version compatible way. |
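`next()` also accepts a default value, which is returned instead of raising `StopIteration` once the generator is exhausted (a small sketch, not part of the original answer):

```python
def mygen():
    for k in range(0, 3):
        yield k

gen = mygen()
values = [next(gen, None) for _ in range(5)]
# the last two calls hit the exhausted generator and return the default
print(values)  # [0, 1, 2, None, None]
```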
'is' operator behaves unexpectedly with floats | 38,834,770 | 7 | 2016-08-08T16:59:39Z | 38,835,030 | 20 | 2016-08-08T17:15:36Z | [
"python",
"python-2.7",
"python-3.x",
"floating-point",
"python-internals"
] | I came across a confusing problem when unit testing a module. The module is actually casting values and I want to compare these values.
There is a difference in comparison with `==` and `is` (I'm partly aware of the difference)
```
>>> 0.0 is 0.0
True # as expected
>>> float(0.0) is 0.0
True # as expected
```
As expected till now, but here is my "problem":
```
>>> float(0) is 0.0
False
>>> float(0) is float(0)
False
```
Why? At least the last one is really confusing to me. The internal representation of `float(0)` and `float(0.0)` should be equal. Comparison with `==` is working as expected. | This has to do with how `is` works. It checks for references instead of value. It returns `True` if either argument is assigned to the same object.
In this case, they are different instances; `float(0)` and `float(0)` have the same value `==`, but are distinct entities as far as Python is concerned. CPython implementation also caches integers as singleton objects in this range -> **[x | x â ⤠⧠-5 ⤠x ⤠256 ]**:
```
>>> 0.0 is 0.0
True
>>> float(0) is float(0) # Not the same reference, unique instances.
False
```
In this example we can demonstrate the integer *caching principle*:
```
>>> a = 256
>>> b = 256
>>> a is b
True
>>> a = 257
>>> b = 257
>>> a is b
False
```
Now, if floats are passed to `float()`, the float literal is simply returned (*short-circuited*), as in the same reference is used, as there's no need to instantiate a new float from an existing float:
```
>>> 0.0 is 0.0
True
>>> float(0.0) is float(0.0)
True
```
This can be demonstrated further by using `int()` also:
```
>>> int(256.0) is int(256.0) # Same reference, cached.
True
>>> int(257.0) is int(257.0) # Different references are returned, not cached.
False
>>> 257 is 257 # Same reference.
True
>>> 257.0 is 257.0 # Same reference. As @Martijn Pieters pointed out.
True
```
However, the results of `is` are also dependent on the scope they are executed in (*beyond the span of this question/explanation*); please refer to user **@[Jim](http://stackoverflow.com/users/4952130/jim)**'s fantastic explanation of [code objects](http://stackoverflow.com/questions/34147515/is-operator-returns-different-results-on-integers/34147516#34147516). Even Python's docs include a section on this behavior:
* [5.9 Comparisons](https://docs.python.org/2/reference/expressions.html#id16)
> **[7]**
> Due to automatic garbage-collection, free lists, and the dynamic nature of descriptors, you may notice seemingly unusual behaviour in certain uses of the `is` operator, like those involving comparisons between instance methods, or constants. Check their documentation for more info. |
'is' operator behaves unexpectedly with floats | 38,834,770 | 7 | 2016-08-08T16:59:39Z | 38,835,101 | 9 | 2016-08-08T17:20:23Z | [
"python",
"python-2.7",
"python-3.x",
"floating-point",
"python-internals"
] | I came across a confusing problem when unit testing a module. The module is actually casting values and I want to compare these values.
There is a difference in comparison with `==` and `is` (I'm partly aware of the difference)
```
>>> 0.0 is 0.0
True # as expected
>>> float(0.0) is 0.0
True # as expected
```
As expected till now, but here is my "problem":
```
>>> float(0) is 0.0
False
>>> float(0) is float(0)
False
```
Why? At least the last one is really confusing to me. The internal representation of `float(0)` and `float(0.0)` should be equal. Comparison with `==` is working as expected. | If a `float` object is supplied to `float()`, *CPython*\* just returns it without making a new object.
This can be seen in [`PyNumber_Float`](https://github.com/python/cpython/blob/954b23e8571dfe4ea94a03fde134f289f9845f2c/Objects/abstract.c#L1348) (which is eventually called from [`float_new`](https://github.com/python/cpython/blob/master/Objects/floatobject.c#L1550)) where the object `o` passed in is checked with [`PyFloat_CheckExact`](https://docs.python.org/3/c-api/float.html#c.PyFloat_CheckExact); if `True`, it just increases its reference count and returns it:
```
if (PyFloat_CheckExact(o)) {
    Py_INCREF(o);
    return o;
}
```
As a result, the `id` of the object stays the same. So the expression
```
>>> float(0.0) is float(0.0)
```
reduces to:
```
>>> 0.0 is 0.0
```
As demonstrated in your first example, `CPython` uses the same object for the two occurrences of `0.0` in your command because they are part of [the same `code` object](http://stackoverflow.com/a/34147516/4952130) (short disclaimer: they're on the same logical line), so the `is` test will succeed.
This can be further corroborated if you execute `float(0.0)` on separate lines of the interactive interpreter (where each line is compiled into its own `code` object, so each gets its own `0.0` constant) and *then* check for identity:
```
>>> a = float(0.0)
>>> b = float(0.0)
>>> a is b
False
```
On the other hand, if an `int` (or a `str`) is supplied, CPython will create a *new* `float` object from it and return that. For this, it uses [`PyFloat_FromDouble`](https://docs.python.org/3/c-api/float.html#c.PyFloat_FromDouble) and [`PyFloat_FromString`](https://docs.python.org/3/c-api/float.html#c.PyFloat_FromString) respectively.
The effect is that the returned objects differ in their `id`s (which is what `is` uses to check identity):
```
# Python passes the same cached int object 0 to both calls to float,
# but float builds and returns a new float object for each int argument.
# Therefore, the result will be False
float(0) is float(0)
```
---
**Note:** All previously mentioned behavior applies to the `C` implementation of Python, i.e. `CPython`. Other implementations might behave differently; in short, *don't depend on it*. |
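As a quick sanity check of the refcount-only path described in the answer above (again, CPython-specific), you can verify that `float()` returns the very same object when given a float, but a fresh object when given an int:

```python
x = 1.5
print(float(x) is x)          # True: PyFloat_CheckExact path returns x itself
print(float(1) is float(1))   # False: ints go through PyFloat_FromDouble
print(id(float(x)) == id(x))  # True: identical id, no new allocation
```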
Convert float to string without scientific notation and false precision | 38,847,690 | 27 | 2016-08-09T10:01:59Z | 38,847,691 | 21 | 2016-08-09T10:01:59Z | [
"python",
"python-3.x",
"floating-point",
"number-formatting",
"python-2.x"
] | I want to print some floating point numbers so that they're always written in decimal form (e.g. `12345000000000000000000.0` or `0.000000000000012345`), not in [scientific notation](https://en.wikipedia.org/wiki/Scientific_notation), yet I'd want to keep the 15.7 decimal digits of precision and no more.
It is well-known that the `repr` of a `float` is written in scientific notation if the exponent is greater than 15, or less than -4:
```
>>> n = 0.000000054321654321
>>> n
5.4321654321e-08 # scientific notation
```
If `str` is used, the resulting string again is in scientific notation:
```
>>> str(n)
'5.4321654321e-08'
```
---
It has been suggested that I can use `format` with `f` flag and sufficient precision to get rid of the scientific notation:
```
>>> format(0.00000005, '.20f')
'0.00000005000000000000'
```
It works for that number, though it has some extra trailing zeroes. But then the same format fails for `.1`, which gives decimal digits beyond the actual machine precision of float:
```
>>> format(0.1, '.20f')
'0.10000000000000000555'
```
And if my number is `4.5678e-20`, using `.20f` would still lose relative precision:
```
>>> format(4.5678e-20, '.20f')
'0.00000000000000000005'
```
Thus **these approaches do not match my requirements**.
---
This leads to the question: what is the easiest and also well-performing way to print arbitrary floating point number in decimal format, having the same digits as in [`repr(n)` (or `str(n)` on Python 3)](http://stackoverflow.com/a/28493269/918959), but always using the decimal format, not the scientific notation.
That is, a function or operation that for example converts the float value `0.00000005` to string `'0.00000005'`; `0.1` to `'0.1'`; `420000000000000000.0` to `'420000000000000000.0'` or `420000000000000000` and formats the float value `-4.5678e-5` as `'-0.000045678'`.
---
After the bounty period: It seems that there are at least 2 viable approaches, as Karin demonstrated that using string manipulation one can achieve significant speed boost compared to my initial algorithm on Python 2.
Thus,
* If performance is important and Python 2 compatibility is required; or if the `decimal` module cannot be used for some reason, then [Karin's approach using string manipulation](http://stackoverflow.com/a/38983595/918959) is the way to do it.
* On Python 3, [my somewhat shorter code will also be faster](http://stackoverflow.com/a/38847691/918959).
Since I am primarily developing on Python 3, I will accept my own answer, and shall award Karin the bounty. | Unfortunately it seems that not even the new-style formatting with `float.__format__` supports this. The default formatting of `float`s is the same as with `repr`; and with `f` flag there are 6 fractional digits by default:
```
>>> format(0.0000000005, 'f')
'0.000000'
```
---
However there is a hack to get the desired result - not the fastest one, but relatively simple:
* first the float is converted to a string using `str()` or `repr()`
* then a new [`Decimal`](https://docs.python.org/3/library/decimal.html#decimal.Decimal) instance is created from that string.
* `Decimal.__format__` supports `f` flag which gives the desired result, and, unlike `float`s it prints the actual precision instead of default precision.
Thus we can make a simple utility function `float_to_str`:
```
import decimal
# create a new context for this task
ctx = decimal.Context()
# 20 digits should be enough for everyone :D
ctx.prec = 20
def float_to_str(f):
    """
    Convert the given float to a string,
    without resorting to scientific notation
    """
    d1 = ctx.create_decimal(repr(f))
    return format(d1, 'f')
```
Care must be taken to not use the global decimal context, so a new context is constructed for this function. This is the fastest way; another way would be to use `decimal.localcontext`, but it would be slower, creating a new thread-local context and a context manager for each conversion.
This function now returns the string with all possible digits from mantissa, rounded to the [shortest equivalent representation](http://stackoverflow.com/a/28493269/918959):
```
>>> float_to_str(0.1)
'0.1'
>>> float_to_str(0.00000005)
'0.00000005'
>>> float_to_str(420000000000000000.0)
'420000000000000000'
>>> float_to_str(0.000000000123123123123123123123)
'0.00000000012312312312312313'
```
The last result is rounded at the last digit.
As @Karin noted, `float_to_str(420000000000000000.0)` does not strictly match the format expected; it returns `420000000000000000` without trailing `.0`. |
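If the trailing `.0` matters for your use case, a small hypothetical variant of the function from the answer above can restore it for integral values (a sketch, not part of the original answer):

```python
import decimal

# create a dedicated context, as in the answer above
ctx = decimal.Context()
ctx.prec = 20

def float_to_str(f):
    # format via Decimal to avoid scientific notation...
    s = format(ctx.create_decimal(repr(f)), 'f')
    # ...then re-attach '.0' when the value formatted as an integer
    if '.' not in s:
        s += '.0'
    return s

print(float_to_str(420000000000000000.0))  # 420000000000000000.0
print(float_to_str(0.1))                   # 0.1
```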
Convert float to string without scientific notation and false precision | 38,847,690 | 27 | 2016-08-09T10:01:59Z | 38,983,595 | 13 | 2016-08-16T20:09:16Z | [
"python",
"python-3.x",
"floating-point",
"number-formatting",
"python-2.x"
] | I want to print some floating point numbers so that they're always written in decimal form (e.g. `12345000000000000000000.0` or `0.000000000000012345`), not in [scientific notation](https://en.wikipedia.org/wiki/Scientific_notation), yet I'd want to keep the 15.7 decimal digits of precision and no more.
It is well-known that the `repr` of a `float` is written in scientific notation if the exponent is greater than 15, or less than -4:
```
>>> n = 0.000000054321654321
>>> n
5.4321654321e-08 # scientific notation
```
If `str` is used, the resulting string again is in scientific notation:
```
>>> str(n)
'5.4321654321e-08'
```
---
It has been suggested that I can use `format` with `f` flag and sufficient precision to get rid of the scientific notation:
```
>>> format(0.00000005, '.20f')
'0.00000005000000000000'
```
It works for that number, though it has some extra trailing zeroes. But then the same format fails for `.1`, which gives decimal digits beyond the actual machine precision of float:
```
>>> format(0.1, '.20f')
'0.10000000000000000555'
```
And if my number is `4.5678e-20`, using `.20f` would still lose relative precision:
```
>>> format(4.5678e-20, '.20f')
'0.00000000000000000005'
```
Thus **these approaches do not match my requirements**.
---
This leads to the question: what is the easiest and also well-performing way to print arbitrary floating point number in decimal format, having the same digits as in [`repr(n)` (or `str(n)` on Python 3)](http://stackoverflow.com/a/28493269/918959), but always using the decimal format, not the scientific notation.
That is, a function or operation that for example converts the float value `0.00000005` to string `'0.00000005'`; `0.1` to `'0.1'`; `420000000000000000.0` to `'420000000000000000.0'` or `420000000000000000` and formats the float value `-4.5678e-5` as `'-0.000045678'`.
---
After the bounty period: It seems that there are at least 2 viable approaches, as Karin demonstrated that using string manipulation one can achieve significant speed boost compared to my initial algorithm on Python 2.
Thus,
* If performance is important and Python 2 compatibility is required; or if the `decimal` module cannot be used for some reason, then [Karin's approach using string manipulation](http://stackoverflow.com/a/38983595/918959) is the way to do it.
* On Python 3, [my somewhat shorter code will also be faster](http://stackoverflow.com/a/38847691/918959).
Since I am primarily developing on Python 3, I will accept my own answer, and shall award Karin the bounty. | If you are satisfied with the precision in scientific notation, then could we just take a simple string manipulation approach? Maybe it's not terribly clever, but it seems to work (passes all of the use cases you've presented), and I think it's fairly understandable:
```
def float_to_str(f):
    float_string = repr(f)
    if 'e' in float_string:  # detect scientific notation
        digits, exp = float_string.split('e')
        digits = digits.replace('.', '').replace('-', '')
        exp = int(exp)
        zero_padding = '0' * (abs(int(exp)) - 1)  # minus 1 for decimal point in the sci notation
        sign = '-' if f < 0 else ''
        if exp > 0:
            float_string = '{}{}{}.0'.format(sign, digits, zero_padding)
        else:
            float_string = '{}0.{}{}'.format(sign, zero_padding, digits)
    return float_string
n = 0.000000054321654321
assert(float_to_str(n) == '0.000000054321654321')
n = 0.00000005
assert(float_to_str(n) == '0.00000005')
n = 420000000000000000.0
assert(float_to_str(n) == '420000000000000000.0')
n = 4.5678e-5
assert(float_to_str(n) == '0.000045678')
n = 1.1
assert(float_to_str(n) == '1.1')
n = -4.5678e-5
assert(float_to_str(n) == '-0.000045678')
```
**Performance**:
I was worried this approach may be too slow, so I ran `timeit` and compared with the OP's solution of decimal contexts. It appears the string manipulation is actually quite a bit faster. **Edit**: It appears to only be much faster in Python 2. In Python 3, the results were similar, but with the decimal approach slightly faster.
**Result**:
* Python 2: using `ctx.create_decimal()`: `2.43655490875`
* Python 2: using string manipulation: `0.305557966232`
* Python 3: using `ctx.create_decimal()`: `0.19519368198234588`
* Python 3: using string manipulation: `0.2661344590014778`
Here is the timing code:
```
from timeit import timeit
CODE_TO_TIME = '''
float_to_str(0.000000054321654321)
float_to_str(0.00000005)
float_to_str(420000000000000000.0)
float_to_str(4.5678e-5)
float_to_str(1.1)
float_to_str(-0.000045678)
'''
SETUP_1 = '''
import decimal
# create a new context for this task
ctx = decimal.Context()
# 20 digits should be enough for everyone :D
ctx.prec = 20
def float_to_str(f):
    """
    Convert the given float to a string,
    without resorting to scientific notation
    """
    d1 = ctx.create_decimal(repr(f))
    return format(d1, 'f')
'''
SETUP_2 = '''
def float_to_str(f):
    float_string = repr(f)
    if 'e' in float_string:  # detect scientific notation
        digits, exp = float_string.split('e')
        digits = digits.replace('.', '').replace('-', '')
        exp = int(exp)
        zero_padding = '0' * (abs(int(exp)) - 1)  # minus 1 for decimal point in the sci notation
        sign = '-' if f < 0 else ''
        if exp > 0:
            float_string = '{}{}{}.0'.format(sign, digits, zero_padding)
        else:
            float_string = '{}0.{}{}'.format(sign, zero_padding, digits)
    return float_string
'''
print(timeit(CODE_TO_TIME, setup=SETUP_1, number=10000))
print(timeit(CODE_TO_TIME, setup=SETUP_2, number=10000))
``` |
Is it possible to create and reference objects in list comprehension? | 38,850,748 | 3 | 2016-08-09T12:27:08Z | 38,850,776 | 7 | 2016-08-09T12:28:25Z | [
"python",
"python-2.7",
"list-comprehension"
] | I have a list of urls that I want the net locations.
```
urls = ["http://server1:53000/cgi-bin/mapserv?map=../maps/Weather.wms.map",
"http://server2:53000/cgi-bin/mapserv?map=../maps/Weather.wms.map"]
```
I would normally just write something like this:
```
servers = []
for url in urls:
    o = urlparse(url)
    servers.append(o.netloc)
```
Then I immediately thought, "I should just put that into a comprehension" and proceeded to write this (which of course doesn't work):
```
servers = [o.netloc() for urlparse(url) as o in urls]
```
Does python have a way to do this type of complex comprehension? (perhaps in 3.x?)
On a more academic level, would doing this type of complex comprehension move too far away from being "pythonic"? It seems relatively intuitive to me, but I've been completely off-base on these things before. | There is no need to assign to an intermediary name, just access the `.netloc` attribute on the return value of `urlparse()` directly:
```
servers = [urlparse(url).netloc for url in urls]
```
It's a perfectly pythonic thing to do it this way. |
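For completeness, here is a runnable sketch of the answer above using the URLs from the question (on Python 2 the import would instead be `from urlparse import urlparse`):

```python
from urllib.parse import urlparse  # Python 3

urls = ["http://server1:53000/cgi-bin/mapserv?map=../maps/Weather.wms.map",
        "http://server2:53000/cgi-bin/mapserv?map=../maps/Weather.wms.map"]

# One expression per URL: parse, then take the netloc attribute directly
servers = [urlparse(url).netloc for url in urls]
print(servers)  # ['server1:53000', 'server2:53000']
```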
What is the point of calling super in custom error classes in python? | 38,856,819 | 7 | 2016-08-09T17:14:36Z | 38,857,736 | 8 | 2016-08-09T18:12:37Z | [
"python",
"python-2.7",
"exception",
"pylint"
] | So I have a simple custom error class in Python that I created based on the Python 2.7 documentation:
```
class InvalidTeamError(Exception):
    def __init__(self, message='This user belongs to a different team'):
        self.message = message
```
This gives me warning `W0231: __init__ method from base class %r is not called` in PyLint, so I go look it up and am given the very helpful description of "explanation needed." I'd normally just ignore this error, but I have noticed that a ton of code online includes a call to super at the beginning of the `__init__` method of custom error classes, so my question is: does doing this actually serve a purpose, or is it just people trying to appease a bogus pylint warning? | This was a valid pylint warning: by not using the superclass `__init__` you can miss out on implementation changes in the parent class. And, indeed, you have - because `BaseException.message` has been deprecated as of Python 2.6.
Here would be an implementation which will avoid your warning W0231 and will also avoid python's deprecation warning about the `message` attribute.
```
class InvalidTeamError(Exception):
    def __init__(self, message='This user belongs to a different team'):
        super(InvalidTeamError, self).__init__(message)
```
This is a better way to do it, because the [implementation for `BaseException.__str__`](https://hg.python.org/cpython/file/6f6e56bb10aa/Objects/exceptions.c#l100) only considers the 'args' tuple, it doesn't look at message at all. With your old implementation, `print InvalidTeamError()` would have only printed an empty string, which is probably not what you wanted! |
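A quick demonstration of the difference described in the answer above (a sketch; the class names are hypothetical, chosen just to contrast the two styles):

```python
class NoSuper(Exception):
    # skips the base __init__, so args stays empty
    def __init__(self, message='This user belongs to a different team'):
        self.message = message

class WithSuper(Exception):
    # passes the message into the base __init__, populating args
    def __init__(self, message='This user belongs to a different team'):
        super(WithSuper, self).__init__(message)

print(str(NoSuper()))    # '' -- BaseException.__str__ only looks at args
print(str(WithSuper()))  # 'This user belongs to a different team'
```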
Pythonic way to replace every second comma of string with space | 38,857,048 | 3 | 2016-08-09T17:28:08Z | 38,857,120 | 12 | 2016-08-09T17:32:37Z | [
"python"
] | I have a string which looks like this:
```
coords = "86.2646484375,23.039297747769726,87.34130859375,22.59372606392931,88.13232421875,24.066528197726857"
```
What I want is to bring it to this format:
```
coords = "86.2646484375,23.039297747769726 87.34130859375,22.59372606392931 88.13232421875,24.066528197726857"
```
So in every second number to replace the comma with a space. Is there a simple, pythonic way to do this.
Right now I am trying to do it with using the split function to create a list and then loop through the list. But it seems rather not straightforward. | First let's import the regular expression module and define your `coords` variable:
```
>>> import re
>>> coords = "86.2646484375,23.039297747769726,87.34130859375,22.59372606392931,88.13232421875,24.066528197726857"
```
Now, let's replace every second comma with a space:
```
>>> re.sub('(,[^,]*),', r'\1 ', coords)
'86.2646484375,23.039297747769726 87.34130859375,22.59372606392931 88.13232421875,24.066528197726857'
```
The regular expression `(,[^,]*),` looks for pairs of commas. The replacement text, `r'\1 '` keeps the first comma but replaces the second with a space. |
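If you'd rather avoid regular expressions, the same result can be had with plain string manipulation (a sketch; it assumes an even number of comma-separated fields, i.e. complete coordinate pairs):

```python
coords = "86.2646484375,23.039297747769726,87.34130859375,22.59372606392931,88.13232421875,24.066528197726857"

parts = coords.split(',')
# Re-join consecutive pairs with a comma, then join the pairs with spaces
result = ' '.join(','.join(parts[i:i + 2]) for i in range(0, len(parts), 2))
print(result)
```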
Splitting a list into uneven groups? | 38,861,457 | 11 | 2016-08-09T22:30:37Z | 38,861,665 | 12 | 2016-08-09T22:51:36Z | [
"python",
"list",
"python-2.7",
"split",
"sublist"
] | I know how to split a list into even groups, but I'm having trouble splitting it into uneven groups.
Essentially here is what I have: some list, let's call it `mylist`, that contains x elements.
I also have another file, lets call it second\_list, that looks something like this:
```
{2, 4, 5, 9, etc.}
```
Now what I want to do is divide `mylist` into uneven groups by the spacing in second\_list. So, I want my first group to be the first 2 elements of `mylist`, the second group to be the next 4 elements of `mylist`, the third group to be the next 5 elements of `mylist`, the fourth group to be the next 9 elements of `mylist`, and so on.
Is there some easy way to do this? I tried doing something similar to if you want to split it into even groups:
```
for j in range(0, len(second_list)):
    for i in range(0, len(mylist), second_list[j]):
        chunk_mylist = mylist[i:i+second_list[j]]
```
However, this doesn't split it like I want it to. I want to end up with my number of sublists being `len(second_list)`, and also split correctly, and this gives a lot more than that (and also splits incorrectly). | You can create an iterator and use [*itertools.islice*](https://docs.python.org/dev/library/itertools.html#itertools.islice):
```
mylist = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
seclist = [2,4,6]
from itertools import islice
it = iter(mylist)
sliced = [list(islice(it, 0, i)) for i in seclist]
```
Which would give you:
```
[[1, 2], [3, 4, 5, 6], [7, 8, 9, 10, 11, 12]]
```
Once *i* elements are consumed they are gone so we keep getting the next *i* elements.
Not sure what should happen with any remaining elements, if you want them added, you could add something like:
```
mylist = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 ,14]
seclist = [2, 4, 6]
from itertools import islice
it = iter(mylist)
slices = [sli for sli in (list(islice(it, 0, i)) for i in seclist)]
remaining = list(it)
if remaining:
    slices.append(remaining)
print(slices)
```
Which would give you:
```
[[1, 2], [3, 4, 5, 6], [7, 8, 9, 10, 11, 12], [13, 14]]
```
Or in contrast if there were not enough, you could use a couple of approaches to remove empty lists, one an inner generator expression:
```
from itertools import islice
it = iter(mylist)
slices = [sli for sli in (list(islice(it, 0, i)) for i in seclist) if sli]
```
Or combine with [itertools.takewhile](https://docs.python.org/dev/library/itertools.html#itertools.takewhile):
```
from itertools import islice, takewhile
it = iter(mylist)
slices = list(takewhile(bool, (list(islice(it, 0, i)) for i in seclist)))
```
Which for:
```
mylist = [1, 2, 3, 4, 5, 6]
seclist = [2, 4, 6,8]
```
would give you:
```
[[1, 2], [3, 4, 5, 6]]
```
As opposed to:
```
[[1, 2], [3, 4, 5, 6], [], []]
```
What you use completely depends on your possible inputs and how you would like to handle the various possibilities.
Regex: How to match words without consecutive vowels? | 38,862,349 | 4 | 2016-08-10T00:15:54Z | 38,862,495 | 7 | 2016-08-10T00:35:16Z | [
"python",
"regex"
] | I'm really new to regex and I've been able to find regex which can match this quite easily, but I am unsure how to only match words without it.
I have a .txt file with words like
```
sheep
fleece
eggs
meat
potato
```
I want to make a regular expression that matches words in which vowels are not repeated consecutively, so it would return `eggs meat potato`.
I'm not very experienced with regex and I've been unable to find anything about how to do this online, so it'd be awesome if someone with more experience could help me out. Thanks!
I'm using python and have been testing my regex with [https://regex101.com](http://regex101.com).
Thanks!
EDIT: provided incorrect examples of results for the regular expression. Fixed. | Note that, since the desired output includes `meat` but not `fleece`, desired words are allowed to have repeated vowels, just not the same vowel repeated.
To select lines with no repeated vowel:
```
>>> [w for w in open('file.txt') if not re.search(r'([aeiou])\1', w)]
['eggs\n', 'meat\n', 'potato\n']
```
The regex `[aeiou]` matches any vowel (you can include `y` if you like). The regex `([aeiou])\1` matches any vowel followed by the same vowel. Thus, `not re.search(r'([aeiou])\1', w)` is true only for strings `w` that contain no repeated vowels.
### Addendum
If we wanted to exclude `meat` because it has two vowels in a row, even though they are not the *same* vowel, then:
```
>>> [w for w in open('file.txt') if not re.search(r'[aeiou]{2}', w)]
['eggs\n', 'potato\n']
``` |
How to optimize multiprocessing in Python | 38,864,711 | 12 | 2016-08-10T05:11:53Z | 38,915,996 | 7 | 2016-08-12T10:40:54Z | [
"python",
"multithreading",
"python-2.7",
"queue",
"multiprocessing"
] | *EDIT:
I've had questions about what the video stream is, so I will offer more clarity. The stream is a live video feed from my webcam, accessed via OpenCV. I get each frame as the camera reads it, and send it to a separate process for processing. The process returns text based on computations done on the image. The text is then displayed onto the image. I need to display the stream in realtime, and it is ok if there is a lag between the text and the video being shown (i.e. if the text was applicable to a previous frame, that's ok).*
*Perhaps an easier way to think of this is that I'm doing image recognition on what the webcam sees. I send one frame at a time to a separate process to do recognition analysis on the frame, and send the text back to be put as a caption on the live feed. Obviously the processing takes more time than simply grabbing frames from the webcam and showing them, so if there is a delay in what the caption is and what the webcam feed shows, that's acceptable and expected.*
*What's happening now is that the live video I'm displaying is lagging due to the other processes (when I don't send frames to the process for computing, there is no lag). I've also ensured only one frame is enqueued at a time so avoid overloading the queue and causing lag. I've updated the code below to reflect this detail.*
I'm using the multiprocessing module in python to help speed up my main program. However I believe I might be doing something incorrectly, as I don't think the computations are happening quite in parallel.
I want my program to read in images from a video stream in the main process, and pass on the frames to two child processes that do computations on them and send text back (containing the results of the computations) to the main process.
However, the main process seems to lag when I use multiprocessing, running about half as fast as without it, leading me to believe that the processes aren't running completely in parallel.
After doing some research, I surmised that the lag may have been due to communicating between the processes using a queue (passing an image from the main to the child, and passing back text from child to main).
However I commented out the computational step and just had the main process pass an image and the child return blank text, and in this case, the main process did not slow down at all. It ran at full speed.
Thus I believe that either
1) I am not optimally using multiprocessing
OR
2) These processes cannot truly be run in parallel (I would understand a little lag, but it's slowing the main process down in half).
Here's a outline of my code. There is only one consumer instead of 2, but both consumers are nearly identical. If anyone could offer guidance, I would appreciate it.
```
class Consumer(multiprocessing.Process):
    def __init__(self, task_queue, result_queue):
        multiprocessing.Process.__init__(self)
        self.task_queue = task_queue
        self.result_queue = result_queue
        # other initialization stuff

    def run(self):
        while True:
            image = self.task_queue.get()
            # Do computations on image
            self.result_queue.put("text")
        return

import cv2

tasks = multiprocessing.Queue()
results = multiprocessing.Queue()
consumer = Consumer(tasks, results)
consumer.start()

# Creating window and starting video capturer from camera
cv2.namedWindow("preview")
vc = cv2.VideoCapture(0)

# Try to get the first frame
if vc.isOpened():
    rval, frame = vc.read()
else:
    rval = False

while rval:
    if tasks.empty():
        tasks.put(frame)
    else:
        text = results.get()
        # Add text to frame
        cv2.putText(frame, text)
    # Showing the frame with all the applied modifications
    cv2.imshow("preview", frame)
    # Getting next frame from camera
    rval, frame = vc.read()
``` | > I want my program to read in images from a video stream in the main process
In producer/consumer implementations, which is what you have above, the producer, what puts tasks into the queue to be executed by the consumers, needs to be separate from the main/controlling process so that it can add tasks in parallel with the main process reading output from results queue.
Try the following. Have added a sleep in the consumer processes to simulate processing and added a second consumer to show they are being run in parallel.
It would also be a good idea to limit the size of the task queue to avoid having it run away with memory usage if processing cannot keep up with the input stream. You can specify a size when calling `Queue(<size>)`; if the queue is at that size, calls to `.put` will block until the queue is no longer full.
```
import time
import multiprocessing

import cv2


class ImageProcessor(multiprocessing.Process):
    def __init__(self, tasks_q, results_q):
        multiprocessing.Process.__init__(self)
        self.tasks_q = tasks_q
        self.results_q = results_q

    def run(self):
        while True:
            image = self.tasks_q.get()
            # Do computations on image
            time.sleep(1)
            # Display the result on stream
            self.results_q.put("text")


# Tasks queue with size 1 - only want one image queued
# for processing.
# Queue size should therefore match number of processes
tasks_q, results_q = multiprocessing.Queue(1), multiprocessing.Queue()
processor = ImageProcessor(tasks_q, results_q)
processor.start()


def capture_display_video(vc):
    rval, frame = vc.read()
    while rval:
        # Queue the raw frame (already an image array) for processing
        if not tasks_q.full():
            tasks_q.put(frame)
        if not results_q.empty():
            text = results_q.get()
            cv2.putText(frame, text, (20, 40),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
        cv2.imshow("preview", frame)
        cv2.waitKey(1)  # give the GUI loop a chance to draw the frame
        rval, frame = vc.read()


cv2.namedWindow("preview")
vc = cv2.VideoCapture(0)
if not vc.isOpened():
    raise Exception("Cannot capture video")

capture_display_video(vc)
processor.terminate()
``` |
Opposite of any() function | 38,869,155 | 56 | 2016-08-10T09:19:40Z | 38,869,220 | 41 | 2016-08-10T09:21:59Z | [
"python",
"python-3.x",
"iterable"
] | The Python built-in function `any(iterable)` can help to quickly check if any `bool(element)` is `True` in an iterable type.
```
>>> l = [None, False, 0]
>>> any(l)
False
>>> l = [None, 1, 0]
>>> any(l)
True
```
But is there an elegant way or function in Python that could achieve the opposite effect of `any(iterable)`? That is, if any `bool(element) is False` then return `True`, like the following example:
```
>>> l = [True, False, True]
>>> any_false(l)
>>> True
``` | Write a generator expression which tests your custom condition. You're not bound to only the default *truthiness* test:
```
any(not i for i in l)
``` |
Opposite of any() function | 38,869,155 | 56 | 2016-08-10T09:19:40Z | 38,869,246 | 102 | 2016-08-10T09:23:14Z | [
"python",
"python-3.x",
"iterable"
] | The Python built-in function `any(iterable)` can help to quickly check if any `bool(element)` is `True` in an iterable type.
```
>>> l = [None, False, 0]
>>> any(l)
False
>>> l = [None, 1, 0]
>>> any(l)
True
```
But is there an elegant way or function in Python that could achieve the opposite effect of `any(iterable)`? That is, if any `bool(element) is False` then return `True`, like the following example:
```
>>> l = [True, False, True]
>>> any_false(l)
>>> True
``` | There is also the `all` function which does the opposite of what you want, it returns `True` if all are `True` and `False` if any are `False`. Therefore you can just do:
```
not all(l)
``` |
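One edge case worth noting for the `not all(l)` answer above: `all([])` is vacuously `True`, so `not all([])` is `False`, which matches the reading "no falsy element exists":

```python
l = [True, False, True]
print(not all(l))             # True: at least one falsy element
print(not all([True, True]))  # False: nothing falsy
print(not all([]))            # False: vacuously, no falsy elements
```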
Opposite of any() function | 38,869,155 | 56 | 2016-08-10T09:19:40Z | 38,870,927 | 14 | 2016-08-10T10:33:00Z | [
"python",
"python-3.x",
"iterable"
] | The Python built-in function `any(iterable)` can help to quickly check if any `bool(element)` is `True` in a iterable type.
```
>>> l = [None, False, 0]
>>> any(l)
False
>>> l = [None, 1, 0]
>>> any(l)
True
```
But is there an elegant way or function in Python that could achieve the opposite effect of `any(iterable)`? That is, if any `bool(element) is False` then return `True`, like the following example:
```
>>> l = [True, False, True]
>>> any_false(l)
>>> True
``` | Well, the implementation of `any` is [*equivalent*](https://docs.python.org/3/library/functions.html#any) to:
```
def any(iterable):
    for element in iterable:
        if element:
            return True
    return False
```
So, just switch the condition from `if element` to `if not element`:
```
def reverse_any(iterable):
    for element in iterable:
        if not element:
            return True
    return False
```
Yes, *of course* this doesn't leverage the speed of the built-ins `any` or `all` like the other answers do, but it's a nice readable alternative. |
Opposite of any() function | 38,869,155 | 56 | 2016-08-10T09:19:40Z | 38,884,821 | 9 | 2016-08-10T22:52:17Z | [
"python",
"python-3.x",
"iterable"
] | The Python built-in function `any(iterable)` can help to quickly check if any `bool(element)` is `True` in an iterable type.
```
>>> l = [None, False, 0]
>>> any(l)
False
>>> l = [None, 1, 0]
>>> any(l)
True
```
But is there an elegant way or function in Python that could achieve the opposite effect of `any(iterable)`? That is, if any `bool(element) is False` then return `True`, like the following example:
```
>>> l = [True, False, True]
>>> any_false(l)
>>> True
``` | You can do:
```
>>> l = [True, False, True]
>>> False in map(bool, l)
True
```
Recall that `map` in Python 3 is a generator. For Python 2, you probably want to use `imap`
---
Mea Culpa: After timing these, the method I offered is hands down **the slowest**
The fastest is `not all(l)` or `not next(filterfalse(bool, it), True)` which is just a silly itertools variant. Use Jack Aidleys [solution](http://stackoverflow.com/a/38869246/298607).
Timing code:
```
from itertools import filterfalse
def af1(it):
    return not all(it)

def af2(it):
    return any(not i for i in it)

def af3(iterable):
    for element in iterable:
        if not element:
            return True
    return False

def af4(it):
    return False in map(bool, it)

def af5(it):
    return not next(filterfalse(bool, it), True)

if __name__ == '__main__':
    import timeit
    for i, l in enumerate([[True]*1000 + [False] + [True]*999,  # False in the middle
                           [False]*2000,  # all False
                           [True]*2000],  # all True
                          start=1):
        print("case:", i)
        for f in (af1, af2, af3, af4, af5):
            print(" ", f.__name__, timeit.timeit("f(l)", setup="from __main__ import f, l", number=100000), f(l))
```
Results:
```
case: 1
af1 0.45357259700540453 True
af2 4.538436588976765 True
af3 1.2491040650056675 True
af4 8.935278153978288 True
af5 0.4685744970047381 True
case: 2
af1 0.016299808979965746 True
af2 0.04787631600629538 True
af3 0.015038023004308343 True
af4 0.03326922300038859 True
af5 0.029870904982089996 True
case: 3
af1 0.8545824179891497 False
af2 8.786235476000002 False
af3 2.448748088994762 False
af4 17.90895140200155 False
af5 0.9152941330103204 False
``` |
Why are my list structure math operations off by a couple billionths? | 38,882,255 | 3 | 2016-08-10T19:46:05Z | 38,882,281 | 7 | 2016-08-10T19:48:26Z | [
"python",
"list"
] | I have some pretty simple function that tries to return a list that is the distance between the inputted list and the average of that list. The code *almost* works. Any thoughts as to why the results are slightly off?
```
def distances_from_average(test_list):
    average = [sum(test_list)/float(len(test_list))]*len(test_list)
    return [x-y for x, y in zip(test_list, average)]
```
Here are my example results:
[-4.200000000000003, 35.8, 2.799999999999997, -23.200000000000003, -11.200000000000003] should equal [4.2, -35.8, -2.8, 23.2, 11.2] | This is due to the way computers represent floating point numbers.
They are not always accurate in the way you expect, and thus should not be used to check equality, or represent things like amounts of money.
How are these numbers being used? If you need that kind of accuracy perhaps there are better ways to use the information, like checking for a range instead of checking equality.
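For example, a minimal sketch of the range-check idea (`math.isclose` is the built-in way to do this in Python 3.5+):

```python
import math

a = 0.1 + 0.2
print(a)                    # 0.30000000000000004 -- not exactly 0.3
print(a == 0.3)             # False: exact equality fails
print(abs(a - 0.3) < 1e-9)  # True: manual tolerance check
print(math.isclose(a, 0.3)) # True: built-in tolerance check (Python 3.5+)
```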
[Here is some good reading material on the subject](https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html) |
Concurrency with subprocess module. How can I do this? | 38,918,337 | 9 | 2016-08-12T12:46:02Z | 38,919,320 | 7 | 2016-08-12T13:35:26Z | [
"python",
"python-3.x",
"python-multithreading",
"python-multiprocessing"
] | The code below works but each time you run a program, for example the notepad on target machine, the prompt is stuck until I quit the program.
How can I run multiple programs at the same time on the target machine? I suppose it can be achieved with either the threading or subprocess modules, but I still cannot put the concept into practice.
How can I do this?
```
import socket
import time
import subprocess  # run OS commands

# creating the reverse connection
IP = '192.168.1.33'  # IP of the Linux netcat client that will be the command center
PORT = 443  # we use the https port to confuse the firewall: the outgoing connection will not be blocked

def connect(IP, PORT):
    # connecting to the control center
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # IP/TCP
        s.connect((IP, PORT))
        s.send('[!] Conexao recebida\n')  # message to check that we connected
        #s.close()
        return s
    except Exception as e:
        print('Erro de conexao', e)
        return None

def listen(s):
    ## when the client is not listening, the connection errors out and closes! I don't want that.
    ## The server has to keep trying until it connects!
    ## version 3!!!!!!!!!!
    # version 4 uses while True
    ########## infinite loop to receive commands
    try:
        while True:
            data = s.recv(1024)  # the control center also sends the "Enter" typed after each command {\n}
            #print(data)
            if data[:-1] == '/exit':  # everything except the last character, which is the \n
                s.close()  # close the connection
                exit(0)  # 0 means normal execution / no errors
            else:  # execute the commands
                cmd(s, data)
    except:
        main(s)

def cmd(s, data):
    try:
        proc = subprocess.Popen(data, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        saida = s.send(proc.stdout.read() + proc.stderr.read())
        s.send(saida)
        #print(proc.stdout.read())
    except:
        main(s)

def main(s):
    if s:
        s.close()
    while True:
        s_connected = connect(IP, PORT)
        if s_connected:
            listen(s_connected)
        else:
            print("deu erro na conexao, tentando de novo!!!")  ## just for debug
            time.sleep(10)
    #return 0  # not needed

s = None
main(s)
``` | Try something like:
```
import socket
import time
import subprocess
import select

IP = '192.168.1.33'
PORT = 443

def connect(IP, PORT):
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((IP, PORT))
        s.send('[!] Conexao recebida\n')
        return s
    except Exception as e:
        print('Erro de conexao', e)
        return None

def listen(s):
    try:
        # Create a polling object and register the socket with it
        socket_poll = select.poll()
        socket_poll.register(s)
        # Create a list of running processes
        processes = []
        while True:
            # If no data is available on the socket, we can use the time to
            # clean up processes which have finished
            if not socket_poll.poll(1):
                for process in tuple(processes):
                    if process.poll() is not None:  # finished (poll() returns the exit code)
                        s.send(process.stdout.read() + process.stderr.read())
                        processes.remove(process)
                continue
            data = s.recv(1024)
            if data[:-1] == '/exit':
                s.close()
                exit(0)
            else:
                cmd(s, data, processes)
    except:
        main(s)

def cmd(s, data, processes):
    try:
        proc = subprocess.Popen(data, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        # Add the new process to the list
        processes.append(proc)
    except:
        main(s)

def main(s):
    if s:
        s.close()
    while True:
        s_connected = connect(IP, PORT)
        if s_connected:
            listen(s_connected)
        else:
            time.sleep(10)

s = None
main(s)
```
Sorry for removing the Portuguese comments ;)
Difference between Python 2 and 3 for shuffle with a given seed | 38,943,038 | 3 | 2016-08-14T14:09:41Z | 38,943,222 | 9 | 2016-08-14T14:31:20Z | [
"python",
"python-2.7",
"python-3.x",
"shuffle",
"random-seed"
] | I am writing a program compatible with both Python 2.7 and 3.5. Some parts of it rely on stochastic process. My unit tests use an arbitrary seed, which leads to the same results across executions and languages... except for the code using `random.shuffle`.
Example in Python 2.7:
```
In[]: import random
random.seed(42)
print(random.random())
l = list(range(20))
random.shuffle(l)
print(l)
Out[]: 0.639426798458
[6, 8, 9, 15, 7, 3, 17, 14, 11, 16, 2, 19, 18, 1, 13, 10, 12, 4, 5, 0]
```
Same input in Python 3.5:
```
In []: import random
random.seed(42)
print(random.random())
l = list(range(20))
random.shuffle(l)
print(l)
Out[]: 0.6394267984578837
[3, 5, 2, 15, 9, 12, 16, 19, 6, 13, 18, 14, 10, 1, 11, 4, 17, 7, 8, 0]
```
Note that the pseudo-random number is the same, but the shuffled lists are different. As expected, reexecuting the cells does not change their respective output.
How could I write the same test code for the two versions of Python? | In Python 3.2 the random module was refactored a little to make the output uniform across architectures (given the same seed), see [issue #7889](http://bugs.python.org/issue7889). The `shuffle()` method was switched to using `Random._randbelow()`.
However, the `_randbelow()` method was *also* adjusted, so simply copying the 3.5 version of `shuffle()` is not enough to fix this.
That said, if you pass in your own `random()` function, the implementation in Python 3.5 is *unchanged from the 2.7* version, and thus lets you bypass this limitation:
```
random.shuffle(l, random.random)
```
Note, however, that now you are subject to the old 32-bit vs 64-bit architecture differences that #7889 tried to solve.
Ignoring several optimisations and special cases, if you include `_randbelow()` the 3.5 version can be backported as:
```
import random
import sys

if sys.version_info >= (3, 2):
    newshuffle = random.shuffle
else:
    try:
        xrange
    except NameError:
        xrange = range

    def newshuffle(x):
        def _randbelow(n):
            "Return a random int in the range [0,n).  Raises ValueError if n==0."
            getrandbits = random.getrandbits
            k = n.bit_length()  # don't use (n-1) here because n can be 1
            r = getrandbits(k)  # 0 <= r < 2**k
            while r >= n:
                r = getrandbits(k)
            return r

        for i in xrange(len(x) - 1, 0, -1):
            # pick an element in x[:i+1] with which to exchange x[i]
            j = _randbelow(i+1)
            x[i], x[j] = x[j], x[i]
```
which gives you the same output on 2.7 as 3.5:
```
>>> random.seed(42)
>>> print(random.random())
0.639426798458
>>> l = list(range(20))
>>> newshuffle(l)
>>> print(l)
[3, 5, 2, 15, 9, 12, 16, 19, 6, 13, 18, 14, 10, 1, 11, 4, 17, 7, 8, 0]
``` |
Conditional comprehension in Julia | 38,947,840 | 6 | 2016-08-15T00:27:35Z | 38,947,888 | 12 | 2016-08-15T00:35:53Z | [
"python",
"if-statement",
"filter",
"list-comprehension",
"julia-lang"
] | In Python, there is the option to provide a condition for whether or not to include a specific item in a comprehension.
```
[x**2 for x in range(10) if x > 5]
# [36, 49, 64, 81]
```
It is possible to apply a function conditionally, but I have not yet found a way to exclude values entirely, other than `filter!`ing them outside of the comprehension.
```
l = collect(0:9)
filter!(x -> x > 5, l)
l = [x^2 for x in l] # alternatively, map!(x -> x^2, l)
# [36, 49, 64, 81]
```
Is this possible in Julia? | It is possible in the latest Julia.
```
julia> [x^2 for x in 0:9 if x > 5]
4-element Array{Int64,1}:
36
49
64
81
```
Otherwise, yes, if you're using pre 0.5 you're stuck with:
```
[x^2 for x in filter((x) -> x > 5, 0:9)]
``` |
How to find and leave only doubles in list python? | 38,969,024 | 5 | 2016-08-16T07:32:54Z | 38,969,122 | 13 | 2016-08-16T07:37:59Z | [
"python"
] | How to find only doubles in list? My version of the algorithm
```
import collections

a = [1,2,3,4,5,2,4,5]
b = []
for x, y in collections.Counter(a).items():
    if y > 1:
        b.append(x)
print(b)  # [2, 4, 5]

c = []
for item in a:
    if item in b:
        c.append(item)
print(c)  # [2, 4, 5, 2, 4, 5]
```
I need to find a result such as `c`.

Defects of this code:

1. three lists (`a`, `b`, `c`) plus one `collections.Counter` (a dict)
2. long code

I need to keep the duplicate values in the list. For example, for `x = [1,2,2,2,3,4,5,6,6,7]`, I need `[2,2,2,6,6]`, not `[2,6]`. | ```
from collections import Counter
a = [1, 2, 3, 4, 5, 2, 4, 5]
counts = Counter(a)
print([num for num in a if counts[num] > 1])
``` |
Better alternative to lots of IF statements? Table of values | 38,973,433 | 7 | 2016-08-16T11:13:22Z | 38,973,575 | 8 | 2016-08-16T11:21:39Z | [
"python",
"algorithm",
"if-statement",
"dictionary"
] | I have a table of moves that decide whether or not the player wins based on their selection against the AI. Think Rock, Paper, Scissors with a lot more moves.
I'll be eventually coding it in Python, but before I begin, I want to know if there is a better way of doing this rather than LOTS and LOTS of IF statements?
The table looks like this:
[](http://i.stack.imgur.com/iVldr.png)
I'm thinking that the moves will need to be assigned numbers, or something like that? I don't know where to start... | You could use a dict? Something like this:
```
# dict of winning outcomes: the outer layer represents the AI's move, and the
# inner layer represents the player's move and the outcome
ai = {
    'punch': {
        'punch': 'tie',
        'kick': 'wins',
    },
    'stab': {
        'punch': 'loses',
        'kick': 'loses'
    }
}

ai_move = 'punch'
player_move = 'kick'
print ai[ai_move][player_move]  # output: wins

ai_move = 'stab'
player_move = 'punch'
print ai[ai_move][player_move]  # output: loses
```
I didn't map out all the moves, but you get the gist |
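As a variation on the same idea (my own sketch, with made-up moves rather than the full table), a flat dict keyed by `(ai_move, player_move)` tuples also works, and `.get()` gives you an easy default for combinations you haven't entered yet:

```python
# Sketch only: outcomes are from the AI's point of view, and the moves are invented
outcomes = {
    ('punch', 'punch'): 'tie',
    ('punch', 'kick'):  'wins',
    ('stab',  'punch'): 'loses',
    ('stab',  'kick'):  'loses',
}

print(outcomes.get(('punch', 'kick'), 'unknown'))  # wins
print(outcomes.get(('kick', 'stab'), 'unknown'))   # unknown -- pair not in the table
```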
How to create unittests for python prompt toolkit? | 38,975,025 | 9 | 2016-08-16T12:30:33Z | 39,049,257 | 7 | 2016-08-20T00:13:07Z | [
"python",
"python-3.x"
] | I want to create unittests for my command line interface
built with the Python `prompt-toolkit` (<https://github.com/jonathanslenders/python-prompt-toolkit>).
* **How can I emulate user interaction with the prompt-toolkit?**
* **Is there a best practice for these unittests?**
Example Code:
```
from os import path
from prompt_toolkit import prompt

def csv():
    csv_path = prompt('\nselect csv> ')
    full_path = path.abspath(csv_path)
    return full_path
``` | You could [mock](https://pypi.python.org/pypi/mock) the prompt calls.
**app\_file**
```
from prompt_toolkit import prompt

def word():
    result = prompt('type a word')
    return result
```
**test\_app\_file**
```
import unittest
from app import word
from mock import patch

class TestAnswer(unittest.TestCase):
    def test_yes(self):
        with patch('app.prompt', return_value='Python') as prompt:
            self.assertEqual(word(), 'Python')
            prompt.assert_called_once_with('type a word')

if __name__ == '__main__':
    unittest.main()
```
Note that you should patch `prompt` in **app.py**, not in **prompt\_toolkit**, because you want to intercept the call where that file looks the name up.
According to the [module docstring](https://github.com/jonathanslenders/python-prompt-toolkit/blob/master/prompt_toolkit/shortcuts.py#L4-L7):
> If you are using this library for retrieving some input from the user (as a
> pure Python replacement for GNU readline), probably for 90% of the use cases,
> the :func:`.prompt` function is all you need.
And as [method docstring](https://github.com/jonathanslenders/python-prompt-toolkit/blob/master/prompt_toolkit/shortcuts.py#L512-L515) says:
> Get input from the user and return it.
> This is a wrapper around a lot of `prompt_toolkit` functionality and can be a replacement for `raw_input`. (or GNU readline.)
Following the [Getting started](https://github.com/jonathanslenders/python-prompt-toolkit#getting-started) from project:
```
>>> from prompt_toolkit import prompt
>>> answer = prompt('Give me some input: ')
Give me some input: Hello World
>>> print(answer)
'Hello World'
>>> type(answer)
<class 'str'>
```
As the `prompt` method returns a string, you could use `mock.return_value` to simulate the user interaction with your app. |
Converting a list into comma-separated string with "and" before the last item - Python 2.7 | 38,981,302 | 8 | 2016-08-16T17:44:46Z | 38,981,316 | 9 | 2016-08-16T17:46:31Z | [
"python",
"python-2.7"
] | I have created this function to parse the list:
```
listy = ['item1', 'item2', 'item3', 'item4', 'item5', 'item6']

def coma(abc):
    for i in abc[0:-1]:
        print i+',',
    print "and " + abc[-1] + '.'

coma(listy)
# item1, item2, item3, item4, item5, and item6.
```
Is there a neater way to achieve this?
This should be applicable to lists of any length. | When there are two or more items in the list (if not, just use the first element):
```
>>> "{} and {}".format(", ".join(listy[:-1]), listy[-1])
'item1, item2, item3, item4, item5 and item6'
```
Edit: If you need an *Oxford comma* (didn't know it even existed!), just use `"{}, and {}"` as the format string instead.
Simple, ugly function to produce an orientation from an angle. | 38,982,644 | 5 | 2016-08-16T19:05:51Z | 38,982,734 | 18 | 2016-08-16T19:11:48Z | [
"python"
] | I wrote a function that takes a degree and returns the orientation as 'N', 'NE', ...etc. Very simple, but it's ugly - is there any way to rewrite this to make it...prettier?
```
def orientation(tn):
    if 23 <= tn <= 67:
        o = 'NE'
    elif 68 <= tn <= 113:
        o = 'E'
    elif 114 <= tn <= 158:
        o = 'SE'
    elif 159 <= tn <= 203:
        o = 'S'
    elif 204 <= tn <= 248:
        o = 'SW'
    elif 249 <= tn <= 293:
        o = 'W'
    elif 294 <= tn <= 338:
        o = 'NW'
    else:
        o = 'N'
    return o
``` | Use [bisection](https://docs.python.org/2/library/bisect.html):
```
from bisect import bisect_left

directions = ['N', 'NE', 'E', 'SE', 'S', 'SW', 'W', 'NW', 'N']
boundaries = [22, 67, 113, 158, 203, 248, 293, 338, 360]

def orientation(tn):
    return directions[bisect_left(boundaries, tn)]
```
`bisect_left()` (very efficiently) finds the index into which you'd insert `tn` into the `boundaries` list; that index is then mapped into the `directions` list to translate to a string.
Bisection only takes up to 4 steps to find the right boundary (`log2(len(boundaries))`).
You could also add 22 and divide the value modulo 360 by 45:
```
directions = ['N', 'NE', 'E', 'SE', 'S', 'SW', 'W', 'NW', 'N']

def orientation(tn):
    index = ((tn + 22) % 360) // 45
    return directions[index]
```
However, your original boundaries were not evenly distributed at 45 degrees each, so this gives a slightly different result (your `N` boundaries span 44 degrees, while `E` is allotted 46 degrees). Bisection doesn't care about such exceptions; you can shift the boundaries all you like. |
Why I can't instantiate a instance in the same module? | 38,989,481 | 2 | 2016-08-17T06:27:00Z | 38,989,614 | 8 | 2016-08-17T06:34:52Z | [
"python"
] | Suppose my module is `myclass.py`, and here is the code:
```
#!/usr/bin/env python
# coding=utf-8
class A(object):
    b = B()
    def __init__(self):
        pass

class B(object):
    pass
```
and import it
```
In [1]: import myclass
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-1-e891426834ac> in <module>()
----> 1 import myclass
/home/python/myclass.py in <module>()
2 # coding=utf-8
3
----> 4 class A(object):
5 b = B()
6 def __init__(self):
/home/python/myclass.py in A()
3
4 class A(object):
----> 5 b = B()
6 def __init__(self):
7 pass
NameError: name 'B' is not defined
```
I know that if I define class B above class A, it is OK and there is no error. But I don't want to do that; are there any other methods to solve this? I know that in C there are forward function declarations. Thank you! | The class definition is a statement. When statement `AA` is executed, statement `BB` has not been executed yet. Therefore, there is no class `B` yet, and you get `NameError: name 'B' is not defined`
```
class A(object):
    b = B()  # <== AA
    def __init__(self):
        pass

class B(object):  # <== BB
    pass
```
To fix it:
---
You can change the order of classes:
```
class B(object):
    pass

class A(object):
    b = B()
    def __init__(self):
        pass
```
---
You can move the statement which uses class `B` into a `classmethod` and call it after the definition of class `B`:
```
class A(object):
    @classmethod
    def init(cls):
        cls.b = B()
    def __init__(self):
        pass

class B(object):
    pass

A.init()
``` |
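A third option, not from the answer above but in the same spirit, is to defer creating the `B` instance until it is first needed, so nothing involving `B` runs at class-definition time:

```python
class A(object):
    _b = None

    @classmethod
    def get_b(cls):
        # Create the shared B instance lazily, after the whole module has run
        if cls._b is None:
            cls._b = B()
        return cls._b

class B(object):
    pass

print(isinstance(A.get_b(), B))  # True
```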
What is the best way to evaluate whether a string is a stringified dictionary or not | 38,990,616 | 3 | 2016-08-17T07:31:23Z | 38,990,684 | 8 | 2016-08-17T07:35:50Z | [
"python",
"dictionary"
] | I could use a regex, but am wondering if there is a better way.
For example, a value might be returned either as:
1.) `'{"username": "joe.soap", "password": "pass@word123"}'`
or
2.) `'https://www.url-example.com'`
In the case of 1.) I want to convert the contents to an actual dictionary. I am happy that I know how to do the conversion. I am stuck on how to identify 1.) without reverting to the use of regex.
**EDIT:** Because I was asked, this is how I plan to make the conversion:
```
import ast

if string_in_question == '{"username": "joe.soap", "password": "pass@word123"}':
    return ast.literal_eval(string_in_question)
else:
    return valid_command_returns
``` | Don't bother with a complex regex, simply try to convert the string to a dictionary.
I'm assuming you are using `json.loads` to do it. If the string doesn't represent a dictionary `json.loads` will raise an exception.
Note that if you do use `json.loads` the conversion will fail if the "keys" are not surrounded with double quotes, i.e. trying to convert the string `"{'username': 'joe.soap', 'password': 'pass@word123'}"` to a dictionary will raise an exception as well.
```
import json

a = '{"username": "joe.soap", "password": "pass@word123"}'
b = 'https://www.url-example.com'

try:
    json.loads(a)
except ValueError:
    print("{} is not a dictionary".format(a))

try:
    json.loads(b)
except ValueError:
    print("{} is not a dictionary".format(b))
```
The output of this program will be
`https://www.url-example.com is not a dictionary`
**UPDATE**:
When using `ast.literal_eval` the concept is the same, but you will have to catch `SyntaxError` instead of `ValueError`. Note that with `literal_eval` both single and double quotes are acceptable.
```
import ast

a = '{"username": "joe.soap", "password": "pass@word123"}'
b = "{'username': 'joe.soap', 'password': 'pass@word123'}"
c = 'https://www.url-example.com'

try:
    ast.literal_eval(a)
except SyntaxError:
    print("{} is not a dictionary".format(a))

try:
    ast.literal_eval(b)
except SyntaxError:
    print("{} is not a dictionary".format(b))

try:
    ast.literal_eval(c)
except SyntaxError:
    print("{} is not a dictionary".format(c))
```
Same as before, output is `https://www.url-example.com is not a dictionary`. |
Remove first encountered elements from a list | 38,991,478 | 12 | 2016-08-17T08:18:46Z | 38,991,663 | 7 | 2016-08-17T08:28:53Z | [
"python",
"list"
] | I have two Python lists with the same number of elements. The elements of the first list are unique, the ones in the second list - not necessarily so. For instance
```
list1 = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7']
list2 = ['h1', 'h2', 'h1', 'h3', 'h1', 'h2', 'h4']
```
I want to remove all the "first encountered" elements from the second list and their corresponding elements from the first list. Basically, this means removing all unique elements *and* the first element of the duplicates. With the above example, the correct result should be
```
>>> list1
['e3', 'e5', 'e6']
>>> list2
['h1', 'h1', 'h2']
```
That is, the element 'e1' was removed because its corresponding 'h1' was encountered for the first time, 'e2' was removed because 'h2' was seen for the first time, 'e3' was left because 'h1' was already seen, 'e4' was removed because 'h3' was seen for the first time, 'e5' was left because 'h1' was already seen, 'e6' was left because 'h2' was already seen, and 'e7' was removed because 'h4' was seen for the first time.
What would be an efficient way to solve this problem? The lists could contain thousands of elements, so I'd rather not make duplicates of them or run multiple loops, if possible. | An efficient way would be to use a `set`, which contains all already seen keys. A `set` will guarantee you an average lookup of `O(1)`.
So something like this should work:
```
s = set()
result1 = []
result2 = []

for x, y in zip(list1, list2):
    if y in s:
        result1.append(x)
        result2.append(y)
    else:
        s.add(y)
```
Notice, this will create a new list. Shouldn't be a big problem though, since Python doesn't actually copy the strings, but only creates a pointer to the original string. |
Remove first encountered elements from a list | 38,991,478 | 12 | 2016-08-17T08:18:46Z | 38,991,667 | 10 | 2016-08-17T08:29:09Z | [
"python",
"list"
] | I have two Python lists with the same number of elements. The elements of the first list are unique, the ones in the second list - not necessarily so. For instance
```
list1 = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7']
list2 = ['h1', 'h2', 'h1', 'h3', 'h1', 'h2', 'h4']
```
I want to remove all the "first encountered" elements from the second list and their corresponding elements from the first list. Basically, this means removing all unique elements *and* the first element of the duplicates. With the above example, the correct result should be
```
>>> list1
['e3', 'e5', 'e6']
>>> list2
['h1', 'h1', 'h2']
```
That is, the element 'e1' was removed because its corresponding 'h1' was encountered for the first time, 'e2' was removed because 'h2' was seen for the first time, 'e3' was left because 'h1' was already seen, 'e4' was removed because 'h3' was seen for the first time, 'e5' was left because 'h1' was already seen, 'e6' was left because 'h2' was already seen, and 'e7' was removed because 'h4' was seen for the first time.
What would be an efficient way to solve this problem? The lists could contain thousands of elements, so I'd rather not make duplicates of them or run multiple loops, if possible. | Just use a `set` object to lookup if the current value is already seen, like this
```
>>> list1 = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7']
>>> list2 = ['h1', 'h2', 'h1', 'h3', 'h1', 'h2', 'h4']
>>>
>>> def filterer(l1, l2):
...     r1 = []
...     r2 = []
...     seen = set()
...     for e1, e2 in zip(l1, l2):
...         if e2 not in seen:
...             seen.add(e2)
...         else:
...             r1.append(e1)
...             r2.append(e2)
...     return r1, r2
...
>>> list1, list2 = filterer(list1, list2)
>>> list1
['e3', 'e5', 'e6']
>>> list2
['h1', 'h1', 'h2']
```
---
If you are going to consume the elements one-by-one and if the input lists are pretty big, then I would recommend making a generator, like this
```
>>> def filterer(l1, l2):
...     seen = set()
...     for e1, e2 in zip(l1, l2):
...         if e2 not in seen:
...             seen.add(e2)
...         else:
...             yield e1, e2
...
>>> list(filterer(list1, list2))
[('e3', 'h1'), ('e5', 'h1'), ('e6', 'h2')]
>>>
>>> zip(*filterer(list1, list2))
[('e3', 'e5', 'e6'), ('h1', 'h1', 'h2')]
``` |
Remove first encountered elements from a list | 38,991,478 | 12 | 2016-08-17T08:18:46Z | 38,991,706 | 7 | 2016-08-17T08:31:03Z | [
"python",
"list"
] | I have two Python lists with the same number of elements. The elements of the first list are unique, the ones in the second list - not necessarily so. For instance
```
list1 = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7']
list2 = ['h1', 'h2', 'h1', 'h3', 'h1', 'h2', 'h4']
```
I want to remove all the "first encountered" elements from the second list and their corresponding elements from the first list. Basically, this means removing all unique elements *and* the first element of the duplicates. With the above example, the correct result should be
```
>>> list1
['e3', 'e5', 'e6']
>>> list2
['h1', 'h1', 'h2']
```
That is, the element 'e1' was removed because its corresponding 'h1' was encountered for the first time, 'e2' was removed because 'h2' was seen for the first time, 'e3' was left because 'h1' was already seen, 'e4' was removed because 'h3' was seen for the first time, 'e5' was left because 'h1' was already seen, 'e6' was left because 'h2' was already seen, and 'e7' was removed because 'h4' was seen for the first time.
What would be an efficient way to solve this problem? The lists could contain thousands of elements, so I'd rather not make duplicates of them or run multiple loops, if possible. | I might be code golfing here, but I find this interesting:
```
list1_new = [x for i, x in enumerate(list1) if list2[i] in list2[:i]]
print(list1_new)
# prints ['e3', 'e5', 'e6']
```
What happens here in case you are not familiar with list comprehensions is the following (reading it from the end):
* I check whether element `i` of `list2` exists in a slice of `list2` that contains all previous elements: `list2[:i]`.
* If it does, I capture the corresponding element from `list1` (`x`) and store it in the new list I am creating, `list1_new`.
Text based data format which supports multiline strings | 38,993,265 | 15 | 2016-08-17T09:50:34Z | 39,037,722 | 21 | 2016-08-19T11:17:49Z | [
"python",
"json",
"format"
] | I am looking for a text-based data format which supports multiline strings.
JSON does not allow multiline strings:
```
>>> import json
>>> json.dumps(dict(text='first line\nsecond line'))
'{"text": "first line\\nsecond line"}'
```
My desired output:
```
{"text": "first line
second line"}
```
This question is about input and output. The data format should be editable with an editor like vi, emacs or notepad.

I don't care whether single quotes `"` or triple quotes (like in Python) `"""` get used.

Is there a human-readable textual data-interchange format which supports this?
# Use case
I want to edit data with multiline strings with `vi`. This is not fun if the data is in JSON format. | I think you should consider the [`YAML`](http://yaml.org/) format. It supports a block notation which is [able to preserve newlines](http://www.yaml.org/spec/1.2/spec.html#id2760844), like this:
```
data: |
  There once was a short man from Ealing
  Who got on a bus to Darjeeling
  It said on the door
  "Please don't spit on the floor"
  So he carefully spat on the ceiling
```
Also, there are plenty of parsers for all kinds of programming languages, including Python *(e.g. [PyYAML](http://pyyaml.org/wiki/PyYAMLDocumentation))*.
There is also the huge advantage that any valid [JSON is YAML](http://yaml.org/spec/1.2/spec.html#id2759572).
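For instance, a round trip with PyYAML keeps the line breaks intact (a sketch assuming the `pyyaml` package is installed):

```python
import yaml  # pip install pyyaml

doc = """\
text: |
  first line
  second line
"""

data = yaml.safe_load(doc)
print(repr(data["text"]))  # 'first line\nsecond line\n'
```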
Reading python variables during a running job | 38,996,051 | 5 | 2016-08-17T12:00:26Z | 38,996,231 | 7 | 2016-08-17T12:08:14Z | [
"python",
"shell"
] | I have launched a python script that takes a long time to finish, and silly me I forgot to print out the values of important variables every now and then in my script, to for example estimate the progress of the computation I'm doing.
So now, I was wondering if there's a way to access the current values of certain set of variables in my code (e.g. a list), as the script is running? (I could of course just stop it and add the changes/prints to the code then relaunch, but since it has been running for a day now, it is a pity to lose the computed values so far)
Alternatively, can I crash it in a certain way (other than usual Ctrl-c keyboard interrupt) such that the variable values at the moment of crash are pasted somewhere given that I didn't plan for this in my script? (I am running Ubuntu, python 2.7 and the script is simply run from a terminal by 'python test.py') | Without editing your program, you're going to have a bad time. What you are looking for is some form of remote debugger, but anything that gives you python specific things will probably have to be at least somehow given a hook into your program. That being said, if you feel like fiddling around in a stack, you can attach gdb to your program (`gdb -p <PID>`) and see what you can find.
Edit: Well. This might actually be possible.
Following [here](https://wiki.python.org/moin/DebuggingWithGdb), with the python extensions for GDB installed, if you pop open a gdb shell with `gdb python <PID>`, you should be able to run `py-print <name of the variable>` to get its value, assuming it's in the scope of the program at that point.
Attempting to do this myself, with the trivial program
```
import time
a = 10
time.sleep(1000)
```
I was able to open a GDB shell by finding the PID of the program (`ps aux | grep python`), running `sudo gdb python <PID>` and then run `py-print a`, which produced "global 'a' = 10". Of course this assumes you are running in a \*nix environment.
Trawling around in the GDB shell for a while, I found you can actually interact with the Python primitives. For example, to get the length of an array:
```
(gdb) python-interactive
>>> frame = Frame.get_selected_python_frame()
>>> pyop_frame = frame.get_pyop()
>>> var, scope = pyop_frame.get_var_by_name('<variable name>')
>>> print(var.field('ob_size'))
```
Note the requirement to use the actual internal field names to get things (The actual values of the list can be found with 'ob\_item', and then an index).
You can dump the array to a file in a similar way:
```
length = int(str(var.field('ob_size')))
output = []
for i in range(length):
    output.append(str(var[i]))

with open('dump', 'w') as f:
    f.write(', '.join(output))
``` |
Most efficient way to turn dictionary into symmetric/distance matrix (Python | Pandas) | 39,004,152 | 2 | 2016-08-17T18:46:29Z | 39,004,744 | 7 | 2016-08-17T19:25:55Z | [
"python",
"pandas",
"numpy",
"matrix",
"distance"
] | I'm doing pairwise distance for something w/ a weird distance metric. I have a dictionary like `{(key_A, key_B):distance_value}` and I want to make a symmetric `pd.DataFrame` like a distance matrix.
What is the most efficient way to do this? I found one way but it doesn't seem like the best way to do this... **Is there anything in `NumPy` or `Pandas` that does this type of operation?** or just a quicker way? My way is `1.46 ms per loop`
```
import itertools
from collections import defaultdict

import numpy as np
import pandas as pd

np.random.seed(0)
D_pair_value = dict()
for pair in itertools.combinations(list("ABCD"), 2):
    D_pair_value[pair] = np.random.randint(0, 5)

D_pair_value
# {('A', 'B'): 4,
#  ('A', 'C'): 0,
#  ('A', 'D'): 3,
#  ('B', 'C'): 3,
#  ('B', 'D'): 3,
#  ('C', 'D'): 1}

D_nested_dict = defaultdict(dict)
for (p, q), value in D_pair_value.items():
    D_nested_dict[p][q] = value
    D_nested_dict[q][p] = value

# Fill diagonal with zeros
DF = pd.DataFrame(D_nested_dict)
np.fill_diagonal(DF.values, 0)
DF
```
[](http://i.stack.imgur.com/ZWrTr.png) | You can use [`scipy.spatial.distance.squareform`](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.spatial.distance.squareform.html#scipy.spatial.distance.squareform), which converts a vector of distance computations, i.e. `[d(A,B), d(A,C), ..., d(C,D)]`, into the distance matrix you're looking for.
**Method 1: Distances Stored in a List**
If you're computing your distances in order, like in your example code and in my example distance vector, I'd avoid using a dictionary and just store the results in a list, and do something like:
```
from scipy.spatial.distance import squareform
df = pd.DataFrame(squareform(dist_list), index=list('ABCD'), columns=list('ABCD'))
```
**Method 2: Distances Stored in a Dictionary**
If you're computing things out of order and a dictionary is required, you just need to get a distance vector that's properly sorted:
```
from scipy.spatial.distance import squareform
dist_list = [dist[1] for dist in sorted(D_pair_value.items())]
df = pd.DataFrame(squareform(dist_list), index=list('ABCD'), columns=list('ABCD'))
```
**Method 3: Distances Stored in a Sorted Dictionary**
If a dictionary is required, note that there's a package called [`sortedcontainers`](https://pypi.python.org/pypi/sortedcontainers) which has a [`SortedDict`](http://www.grantjenks.com/docs/sortedcontainers/sorteddict.html) that essentially would solve the sorting issue for you. To use it, all you'd need to change is initializing `D_pair_value` as a `SortedDict()` instead of a `dict`. Using your example setup:
```
from scipy.spatial.distance import squareform
from sortedcontainers import SortedDict
np.random.seed(0)
D_pair_value = SortedDict()
for pair in itertools.combinations(list("ABCD"),2):
D_pair_value[pair] = np.random.randint(0,5)
df = pd.DataFrame(squareform(D_pair_value.values()), index=list('ABCD'), columns=list('ABCD'))
```
**The Resulting Output for Any Method Above:**
```
A B C D
A 0.0 4.0 0.0 3.0
B 4.0 0.0 3.0 3.0
C 0.0 3.0 0.0 1.0
D 3.0 3.0 1.0 0.0
``` |
Running Jupyter with multiple Python and iPython paths | 39,007,571 | 5 | 2016-08-17T23:06:12Z | 39,022,003 | 16 | 2016-08-18T15:23:59Z | [
"python",
"ipython",
"jupyter",
"jupyter-notebook"
] | I'd like to work with Jupyter notebooks, but have had difficulty doing basic imports (such as import matplotlib). I think this was because I have several user-managed python installations. For instance:
```
> which -a python
/usr/bin/python
/usr/local/bin/python
> which -a ipython
/Library/Frameworks/Python.framework/Versions/3.5/bin/ipython
/usr/local/bin/ipython
> which -a jupyter
/Library/Frameworks/Python.framework/Versions/3.5/bin/jupyter
/usr/local/bin/jupyter
```
I used to have anaconda, but removed it from the ~/anaconda directory. Now, when I start a Jupyter Notebook, I get a Kernel Error:
```
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/subprocess.py",
line 947, in init restore_signals, start_new_session)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/subprocess.py",
line 1551, in _execute_child raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2]
No such file or directory: '/Users/npr1/anaconda/envs/py27/bin/python'
```
What should I do?! | This is fairly straightforward to fix, but it involves understanding three different concepts:
1. How Unix/Linux/OSX use `$PATH` to find executables
2. How Python installs and finds packages
3. How Jupyter knows what Python to use
For the sake of completeness, I'll try to do a quick ELI5 on each of these, so you'll know how to solve this issue in the best way for you.
## 1. Unix/Linux/OSX $PATH
When you type any command at the prompt (say, `python`), the system has a well-defined sequence of places that it looks for the executable. This sequence is defined in a system variable called `PATH`, which the user can specify. To see your `PATH`, you can type `echo $PATH`.
The result is a list of directories on your computer, which will be searched **in order** for the desired executable. From your output above, I assume that it contains this:
```
$ echo $PATH
/usr/bin/:/Library/Frameworks/Python.framework/Versions/3.5/bin/:/usr/local/bin/
```
Probably with some other paths interspersed as well. What this means is that when you type `python`, the system will go to `/usr/bin/python`. When you type `ipython`, the system will go to `/Library/Frameworks/Python.framework/Versions/3.5/bin/ipython`, because there is no `ipython` in `/usr/bin/`.
It's always important to know what executable you're using, particularly when you have so many installations of the same program on your system. Changing the path is not too complicated; see e.g. [How to permanently set $PATH on Linux?](http://stackoverflow.com/questions/14637979/how-to-permanently-set-path-on-linux).
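As a concrete sketch (the anaconda path below is illustrative, not taken from the question): prepending a directory to `$PATH` makes its executables win the lookup for the current shell session:

```shell
# Put the desired Python's bin directory first (path shown is illustrative)
export PATH="$HOME/anaconda/bin:$PATH"

# Confirm which executable now wins the lookup
which python
```

To make the change permanent, the same `export` line goes into your shell startup file.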
## 2. How Python finds packages
When you run python and do something like `import matplotlib`, Python has to play a similar game to find the package you have in mind. Similar to `$PATH` in unix, Python has `sys.path` that specifies these:
```
$ python
>>> import sys
>>> sys.path
['',
'/Users/jakevdp/anaconda/lib/python3.5',
'/Users/jakevdp/anaconda/lib/python3.5/site-packages',
...]
```
Some important things: by default, the first entry in `sys.path` is the current directory. Also, unless you modify this (which you shouldn't do unless you know exactly what you're doing) you'll usually find something called `site-packages` in the path: this is the default place that Python puts packages when you install them using `python setup.py install`, or `pip`, or `conda`, or a similar means.
The important thing to note is that **each python installation has its own site-packages**, where packages are installed **for that specific Python version**. In other words, if you install something for, e.g. `/usr/bin/python`, then `~/anaconda/bin/python` **can't use that package**, because it was installed on a different Python! This is why in our twitter exchange I recommended you focus on one Python installation, and fix your `$PATH` so that you're only using the one you want to use.
There's another component to this: some Python packages come bundled with stand-alone scripts that you can run from the command line (examples are `pip`, `ipython`, `jupyter`, `pep8`, etc.) By default, these executables will be put in the **same directory path** as the Python used to install them, and are designed to work **only with that Python installation**.
That means that, as your system is set up, when you run `python`, you get `/usr/bin/python`, but when you run `ipython`, you get `/Library/Frameworks/Python.framework/Versions/3.5/bin/ipython` which is associated with the Python version at `/Library/Frameworks/Python.framework/Versions/3.5/bin/python`! Further, this means that the packages you can import when running `python` are entirely separate from the packages you can import when running `ipython`: you're using two completely independent Python installations.
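A quick way to see which installation you're in (my own sketch, not from the original answer): run this under each entry point and compare the output.

```python
import sys

# Which interpreter is actually running this code?
print(sys.executable)

# Where will `import somepackage` be searched? (first few entries)
print(sys.path[:3])
```

If `python` and `ipython` print different `sys.executable` paths, they are separate installations with separate site-packages.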
So how to fix this? Well, first make sure your `$PATH` variable is doing what you want it to. You likely have a startup script called something like `~/.bash_profile` or `~/.bashrc` that sets this `$PATH` variable. You can manually modify that if you want your system to search things in a different order. When you first install anaconda/miniconda, there will be an option to do this automatically: say yes to that, and then `python` will always point to `~/anaconda/python`, which is probably what you want.
## 3. How Jupyter knows what Python to use
We're not totally out of the water yet. You mentioned that in the Jupyter notebook, you're getting a kernel error: this indicates that Jupyter is looking for a non-existent Python version.
Jupyter is set up to be able to use a wide range of "kernels", or execution engines for the code. These can be Python 2, Python 3, R, Julia, Ruby... there are dozens of possible kernels to use. But in order for this to happen, Jupyter needs to know *where* to look for the associated executable: that is, it needs to know which path the `python` sits in.
These paths are specified in jupyter's `kernelspec`, and it's possible for the user to adjust them to their desires. For example, here's the list of kernels that I have on my system:
```
$ jupyter kernelspec list
Available kernels:
python2.7 /Users/jakevdp/.ipython/kernels/python2.7
python3.3 /Users/jakevdp/.ipython/kernels/python3.3
python3.4 /Users/jakevdp/.ipython/kernels/python3.4
python3.5 /Users/jakevdp/.ipython/kernels/python3.5
python2 /Users/jakevdp/Library/Jupyter/kernels/python2
python3 /Users/jakevdp/Library/Jupyter/kernels/python3
```
Each of these is a directory containing some metadata that specifies the kernel name, the path to the executable, and other relevant info.
You can adjust kernels manually, editing the metadata inside the directories listed above.
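For reference, each of those kernel directories holds a `kernel.json`; a typical Python one looks roughly like this (the interpreter path shown is illustrative):

```json
{
  "argv": ["/Users/you/anaconda/bin/python", "-m", "ipykernel", "-f", "{connection_file}"],
  "display_name": "Python 3 (anaconda)",
  "language": "python"
}
```

Pointing the first entry of `argv` at a different Python executable is what makes the kernel use that installation.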
The command to install a kernel can change depending on the kernel. IPython relies on the [ipykernel package](https://pypi.python.org/pypi/ipykernel/) which contains a command to install a python kernel: for example
```
$ python -m ipykernel install
```
It will create a kernelspec associated with the Python executable you use to run this command. You can then choose this kernel in the Jupyter notebook to run your code with that Python.
You can see other options that ipykernel provides using the help command:
```
$ python -m ipykernel install --help
usage: ipython-kernel-install [-h] [--user] [--name NAME]
[--display-name DISPLAY_NAME] [--prefix PREFIX]
[--sys-prefix]
Install the IPython kernel spec.
optional arguments:
-h, --help show this help message and exit
--user Install for the current user instead of system-wide
--name NAME Specify a name for the kernelspec. This is needed to
have multiple IPython kernels at the same time.
--display-name DISPLAY_NAME
Specify the display name for the kernelspec. This is
helpful when you have multiple IPython kernels.
--prefix PREFIX Specify an install prefix for the kernelspec. This is
needed to install into a non-default location, such as
a conda/virtual-env.
--sys-prefix Install to Python's sys.prefix. Shorthand for
--prefix='/Users/bussonniermatthias/anaconda'. For use
in conda/virtual-envs.
```
Note: the recent version of *anaconda* ships with an extension for the notebook that should automatically detect your various conda environments if the `ipykernel` package is installed in it.
## Wrap-up: Fixing your Issue
So with that background, your issue is quite easy to fix:
1. Set your `$PATH` so that the desired Python version is first. For example, you could run `export PATH="/path/to/python/bin:$PATH"` to specify (one time) which Python you'd like to use. To do this permanently, add that line to your `.bash_profile`/`.bashrc` (note that anaconda can do this automatically for you when you install it). I'd recommend using the Python that comes with anaconda or miniconda: this will allow you to `conda install` all the tools you need.
2. Make sure the packages you want to use are installed for **that** python. If you're using conda, you can type, e.g. `conda install notebook matplotlib scikit-learn` to install those packages for `anaconda/bin/python`.
3. Make sure that your Jupyter kernels point to the Python versions you want to use. When you `conda install notebook` it should set this up for `anaconda/bin/python` automatically. Otherwise you can use the `jupyter kernelspec` command or `python -m ipykernel install` command to adjust existing kernels or install new ones.
Hopefully that's clear... good luck! |
Efficiently find indices of all values in an array | 39,013,722 | 4 | 2016-08-18T08:48:25Z | 39,013,922 | 7 | 2016-08-18T08:57:50Z | [
"python",
"numpy"
] | I have a very large array, consisting of integers between 0 and N, where each value occurs at least once.
I'd like to know, for each value *k*, all the indices in my array where the array's value equals *k*.
For example:
```
arr = np.array([0,1,2,3,2,1,0])
desired_output = {
0: np.array([0,6]),
1: np.array([1,5]),
2: np.array([2,4]),
3: np.array([3]),
}
```
Right now I am accomplishing this with a loop over `range(N+1)`, and calling `np.where` N times.
```
indices = {}
for value in range(max(arr)+1):
indices[value] = np.where(arr == value)[0]
```
This loop is by far the slowest part of my code. (Both the `arr==value` evaluation and the `np.where` call take up significant chunks of time.) Is there a more efficient way to do this?
I also tried playing around with `np.unique(arr, return_index=True)` but that only tells me the very first index, rather than all of them. | **Approach #1**
Here's a vectorized approach to get those indices as list of arrays -
```
sidx = arr.argsort()
unq, cut_idx = np.unique(arr[sidx],return_index=True)
indices = np.split(sidx,cut_idx)[1:]
```
If you want the final dictionary that corresponds each unique element to their indices, finally we can use a loop-comprehension -
```
dict_out = {unq[i]:iterID for i,iterID in enumerate(indices)}
```
---
**Approach #2**
If you are just interested in the list of arrays, here's an alternative meant for performance -
```
sidx = arr.argsort()
indices = np.split(sidx,np.flatnonzero(np.diff(arr[sidx])>0)+1)
``` |
Address of last value in 1d NumPy array | 39,022,027 | 3 | 2016-08-18T15:24:58Z | 39,022,135 | 8 | 2016-08-18T15:29:43Z | [
"python",
"arrays",
"numpy"
] | I have a 1d array with zeros scattered throughout. Would like to create a second array which contains the position of the last zero, like so:
```
>>> a = np.array([1, 0, 3, 2, 0, 3, 5, 8, 0, 7, 12])
>>> foo(a)
[0, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3]
```
Is there a built-in NumPy function or broadcasting trick to do this without using a for loop or other iterator? | ```
>>> (a == 0).cumsum()
array([0, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3])
``` |
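To spell out what the boolean cumulative sum is doing, here's the same idea in plain Python (an illustrative sketch I'm adding, not part of the original answer): walk the array and carry a running count of zeros seen so far.

```python
a = [1, 0, 3, 2, 0, 3, 5, 8, 0, 7, 12]

count = 0
out = []
for v in a:
    if v == 0:
        count += 1  # a zero here bumps the running total
    out.append(count)

print(out)  # [0, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3]
```

`(a == 0)` produces a boolean array, and `cumsum()` treats `True` as 1, which yields exactly this running count in one vectorized step.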
python list of strings parsing | 39,036,115 | 2 | 2016-08-19T09:57:40Z | 39,036,170 | 7 | 2016-08-19T10:00:12Z | [
"python",
"list-comprehension"
] | I have a list of strings representing dates:
```
>>> dates
['14.9.2016',
'13.9.2016',
'12.9.2016',
'11.9.2016',
'10.9.2016',
'9.9.2016',
'8.9.2016',
'7.9.2016',
'6.9.2016',
'5.9.2016']
```
I need to zero-pad days & months and I cannot use standard calendar methods due to "artificial dates" I need to work with ("29.2.2015" for example)
following seems to be working:
```
>>> parsed_dates = []
>>> for l in [d.split('.') for d in dates]:
>>> parsed_dates.append('.'.join([i.zfill(2) for i in l]))
>>> parsed_dates
['14.09.2016',
'13.09.2016',
'12.09.2016',
'11.09.2016',
'10.09.2016',
'09.09.2016',
'08.09.2016',
'07.09.2016',
'06.09.2016',
'05.09.2016']
```
is it possible to achieve the same result using a single list comprehension? or some other, more elegant way?
I have tried following, but cannot find a way to concatenate single list items to form date strings again...
```
>>> [i.zfill(2) for l in [d.split('.') for d in dates] for i in l]
['14',
'09',
'2016',
'13',
'09',
'2016',
'12',
'09',
.
.
.
``` | Sure, just inline the expression you pass to the `parsed_dates.append()` call, with `l` substituted with `d.split('.')` from your `for` loop:
```
['.'.join([i.zfill(2) for i in d.split('.')]) for d in dates]
```
Demo:
```
>>> ['.'.join([i.zfill(2) for i in d.split('.')]) for d in dates]
['14.09.2016', '13.09.2016', '12.09.2016', '11.09.2016', '10.09.2016', '09.09.2016', '08.09.2016', '07.09.2016', '06.09.2016', '05.09.2016']
``` |
Python - Basic vs extended slicing | 39,037,663 | 8 | 2016-08-19T11:14:23Z | 39,037,810 | 9 | 2016-08-19T11:22:02Z | [
"python",
"python-2.7"
] | When experimenting with slicing I noticed a strange behavior in Python 2.7:
```
class A:
def __getitem__(self, i):
print repr(i)
a=A()
a[:] #Prints slice(0, 9223372036854775807, None)
a[::] #prints slice(None, None, None)
a[:,:] #prints (slice(None, None, None), slice(None, None, None))
```
When using a single colon in the brackets, the slice object has 0 as start and a huge integer as end. However, when I use more than a single colon, start and stop are None if not specified.
Is this behaviour guaranteed or implementation specific?
The [Documentation](https://docs.python.org/2/reference/datamodel.html#types) says that the second and third case are extended slicing, while the first case is not. However, I couldn't find any clear explanation of the difference between basic and extended slicing.
Are there any other "special cases" which I should be aware of when I override `__getitem__` and want to accept extended slicing?? | For Python 2 `[:]` still calls [`__getslice__(self, i, j)`](https://docs.python.org/2/reference/datamodel.html#object.__getslice__) (deprecated) and this is documented to return a slice `slice(0, sys.maxsize, None)` when called with default parameters:
> Note that missing `i` or `j` in the slice expression are replaced by **zero** or **`sys.maxsize`**, ...
(emphasis mine).
New style classes don't implement `__getslice__()` by default, so
> If no `__getslice__()` is found, a slice object is created instead, and passed to `__getitem__()` instead.
Python 3 doesn't support `__getslice__()`, anymore, instead it [constructs a `slice()`](https://docs.python.org/3/reference/datamodel.html#object.__length_hint__) object for all of the above slice expressions. And `slice()` has `None` as default:
> Note: Slicing is done exclusively with the following three methods. A call like
>
> `a[1:2] = b`
>
> is translated to
>
> `a[slice(1, 2, None)] = b`
>
> and so forth. Missing slice items are always filled in with `None`. |
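A quick check of that Python 3 behavior (my own sketch, not from the original answer): a class that just echoes what `__getitem__` receives shows `None` defaults in every case.

```python
class Echo:
    def __getitem__(self, item):
        return item

e = Echo()
# On Python 3 every missing slice item is None, with or without the second colon
assert e[:] == slice(None, None, None)
assert e[::] == slice(None, None, None)
# Multiple slices arrive as a tuple of slice objects
assert e[:, :] == (slice(None, None, None), slice(None, None, None))
print("all missing slice items default to None")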
Function chaining in Python | 39,038,358 | 64 | 2016-08-19T11:48:30Z | 39,038,455 | 75 | 2016-08-19T11:53:03Z | [
"python",
"function",
"python-3.x"
] | On codewars.com I encountered the following task:
> Create a function `add` that adds numbers together when called in succession. So `add(1)` should return `1`, `add(1)(2)` should return `1+2`, ...
While I'm familiar with the basics of Python, I've never encountered a function that is able to be called in such succession, i.e. a function `f(x)` that can be called as `f(x)(y)(z)...`. Thus far, I'm not even sure how to interpret this notation.
As a mathematician, I'd suspect that `f(x)(y)` is a function that assigns to every `x` a function `g_{x}` and then returns `g_{x}(y)` and likewise for `f(x)(y)(z)`.
Should this interpretation be correct, Python would allow me to dynamically create functions which seems very interesting to me. I've searched the web for the past hour, but wasn't able to find a lead in the right direction. Since I don't know how this programming concept is called, however, this may not be too surprising.
How do you call this concept and where can I read more about it? | I don't know whether this is *function* chaining as much as it's *callable* chaining, but, since functions *are* callables I guess there's no harm done. Either way, there's two ways I can think of doing this:
### Sub-classing `int` and defining `__call__`:
The first way would be with a custom `int` subclass that defines [`__call__`](https://docs.python.org/3/reference/datamodel.html#object.__call__) which returns a new instance of itself with the updated value:
```
class CustomInt(int):
def __call__(self, v):
return CustomInt(self + v)
```
Function `add` can now be defined to return a `CustomInt` instance, which, as a callable that returns an updated value of itself, can be called in succession:
```
>>> def add(v):
... return CustomInt(v)
>>> add(1)
1
>>> add(1)(2)
3
>>> add(1)(2)(3)(44) # and so on..
50
```
In addition, as an `int` subclass, the returned value retains the `__repr__` and `__str__` behavior of `int`s. *For more complex operations, though, you should define other dunders appropriately*.
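For instance (a sketch of my own, not part of the original answer), arithmetic between two `CustomInt`s falls back to plain `int` unless you override the relevant dunder; overriding `__add__` keeps the result chainable:

```python
class CustomInt(int):
    def __call__(self, v):
        return CustomInt(self + v)

    def __add__(self, other):
        # keep the subclass type instead of falling back to plain int
        return CustomInt(int(self) + int(other))

x = CustomInt(1)(2) + CustomInt(3)
print(x, type(x).__name__)  # 6 CustomInt
```

The same pattern applies to `__sub__`, `__mul__`, and friends if you need them to stay chainable.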
As @Caridorc noted in a comment, `add` could also be simply written as:
```
add = CustomInt
```
Renaming the class to `add` instead of `CustomInt` also works similarly.
---
### Define a closure, requires extra call to yield value:
The only other way I can think of involves a nested function that requires an extra empty argument call in order to return the result. I'm **not** using `nonlocal` and opt for attaching attributes to the function objects to make it portable between Pythons:
```
def add(v):
def _inner_adder(val=None):
"""
if val is None we return _inner_adder.v
else we increment and return ourselves
"""
if val is None:
return _inner_adder.v
_inner_adder.v += val
return _inner_adder
_inner_adder.v = v # save value
return _inner_adder
```
This continuously returns itself (`_inner_adder`) which, if a `val` is supplied, increments it (`_inner_adder.v += val`) and if not, returns the value as it is. Like I mentioned, it requires an extra `()` call in order to return the incremented value:
```
>>> add(1)(2)()
3
>>> add(1)(2)(3)() # and so on..
6
``` |
Function chaining in Python | 39,038,358 | 64 | 2016-08-19T11:48:30Z | 39,038,540 | 13 | 2016-08-19T11:57:30Z | [
"python",
"function",
"python-3.x"
] | On codewars.com I encountered the following task:
> Create a function `add` that adds numbers together when called in succession. So `add(1)` should return `1`, `add(1)(2)` should return `1+2`, ...
While I'm familiar with the basics of Python, I've never encountered a function that is able to be called in such succession, i.e. a function `f(x)` that can be called as `f(x)(y)(z)...`. Thus far, I'm not even sure how to interpret this notation.
As a mathematician, I'd suspect that `f(x)(y)` is a function that assigns to every `x` a function `g_{x}` and then returns `g_{x}(y)` and likewise for `f(x)(y)(z)`.
Should this interpretation be correct, Python would allow me to dynamically create functions which seems very interesting to me. I've searched the web for the past hour, but wasn't able to find a lead in the right direction. Since I don't know how this programming concept is called, however, this may not be too surprising.
How do you call this concept and where can I read more about it? | If you want to define a function to be called multiple times, first you need to return a callable object each time (for example a function) otherwise you have to create your own object by defining a `__call__` attribute, in order for it to be callable.
The next point is that you need to preserve all the arguments, which in this case means you might want to use [Coroutines](https://docs.python.org/3/library/asyncio-task.html) or a recursive function. But note that **Coroutines are much more optimized/flexible than recursive functions**, specially for such tasks.
Here is a sample function using Coroutines, that preserves the latest state of itself. Note that it can't be called multiple times since the return value is an `integer` which is not callable, but you might think about turning this into your expected object ;-).
```
def add():
current = yield
while True:
value = yield current
current = value + current
it = add()
next(it)
print(it.send(10))  # 10
print(it.send(2))   # 12
print(it.send(4))   # 16
``` |
Function chaining in Python | 39,038,358 | 64 | 2016-08-19T11:48:30Z | 39,039,586 | 16 | 2016-08-19T12:54:13Z | [
"python",
"function",
"python-3.x"
] | On codewars.com I encountered the following task:
> Create a function `add` that adds numbers together when called in succession. So `add(1)` should return `1`, `add(1)(2)` should return `1+2`, ...
While I'm familiar with the basics of Python, I've never encountered a function that is able to be called in such succession, i.e. a function `f(x)` that can be called as `f(x)(y)(z)...`. Thus far, I'm not even sure how to interpret this notation.
As a mathematician, I'd suspect that `f(x)(y)` is a function that assigns to every `x` a function `g_{x}` and then returns `g_{x}(y)` and likewise for `f(x)(y)(z)`.
Should this interpretation be correct, Python would allow me to dynamically create functions which seems very interesting to me. I've searched the web for the past hour, but wasn't able to find a lead in the right direction. Since I don't know how this programming concept is called, however, this may not be too surprising.
How do you call this concept and where can I read more about it? | You can hate me, but here is a one-liner :)
```
add = lambda v: type("", (int,), {"__call__": lambda self, v: self.__class__(self + v)})(v)
```
Edit: Ok, how this works? The code is identical to answer of @Jim, but everything happens on a single line.
1. `type` can be used to construct new types: `type(name, bases, dict) -> a new type`. For `name` we provide empty string, as name is not really needed in this case. For `bases` (tuple) we provide an `(int,)`, which is identical to inheriting `int`. `dict` are the class attributes, where we attach the `__call__` lambda.
2. `self.__class__(self + v)` is identical to `return CustomInt(self + v)`
3. The new type is constructed and returned within the outer lambda. |
Using a single replacement operation replace all leading tabs with spaces | 39,038,569 | 4 | 2016-08-19T11:58:44Z | 39,038,654 | 8 | 2016-08-19T12:03:33Z | [
"python",
"regex"
] | In my text I want to replace all leading tabs with two spaces but leave the non-leading tabs alone.
For example:
```
a
\tb
\t\tc
\td\te
f\t\tg
```
(`"a\n\tb\n\t\tc\n\td\te\nf\t\tg"`)
should turn into:
```
a
b
c
d\te
f\t\tg
```
(`"a\n b\n c\n d\te\nf\t\tg"`)
For my case I could do that with multiple replacement operations, repeating as many times as the maximum nesting level or until nothing changes.
**But wouldn't it also be possible to do in a single run?**
I tried but didn't manage to come up with something, the best I came up yet was with lookarounds:
```
re.sub(r'(^|(?<=\t))\t', ' ', a, flags=re.MULTILINE)
```
Which "only" makes one wrong replacement (second tab between `f` and `g`).
Now it might be that it's simply impossible to do in regex in a single run because the already replaced parts can't be matched again (or rather the replacement does not happen right away) and you can't sort-of "count" in regex, in this case I would love to see some more detailed explanations on why (as long as this won't shift too much into [cs.se] territory).
I am working in Python currently but this could apply to pretty much any similar regex implementation. | You may match the tabs at the start of the lines, and use a lambda inside `re.sub` to replace with the double spaces multiplied by the length of the match:
```
import re
s = "a\n\tb\n\t\tc\n\td\te\nf\t\tg"
print(re.sub(r"^\t+", lambda m: " "*len(m.group()), s, flags=re.M))
```
See the [Python demo](https://ideone.com/AmOECe) |
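To verify the behavior inline (my own check, assuming two spaces per tab as the question asks): only the leading run of tabs is replaced, and interior tabs survive untouched.

```python
import re

s = "a\n\tb\n\t\tc\n\td\te\nf\t\tg"
out = re.sub(r"^\t+", lambda m: "  " * len(m.group()), s, flags=re.M)
print(out.splitlines())
# ['a', '  b', '    c', '  d\te', 'f\t\tg']
```

Because `^\t+` matches the whole leading run at once, a single pass handles any nesting depth.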
Coding style (PEP8) - Module level "dunders" | 39,044,343 | 12 | 2016-08-19T17:01:55Z | 39,044,408 | 13 | 2016-08-19T17:06:14Z | [
"python",
"pycharm",
"pep8",
"pep"
] | *Definition of "Dunder" (**D**ouble **under**score): <http://www.urbandictionary.com/define.php?term=Dunder>*
---
I have a question according the placement of module level "dunders" (like `__all__`, `__version__`, `__author__` etc.) in Python code.
The question came up to me while reading through [PEP8](https://www.python.org/dev/peps/pep-0008) and seeing [this](http://stackoverflow.com/questions/24741141/pycharm-have-author-appear-before-imports) Stack Overflow question.
The accepted answer says:
> `__author__` is a global "variable" and should therefore appear below the imports.
But in the PEP8 section [Module level dunder names](https://www.python.org/dev/peps/pep-0008/#module-level-dunder-names) I read the following:
> Module level "dunders" (i.e. names with two leading and two trailing
> underscores) such as `__all__` , `__author__` , `__version__` , etc. should
> be placed after the module docstring but before any import statements
> except from `__future__` imports. Python mandates that future-imports
> must appear in the module before any other code except docstrings.
The authors also give a code example:
```
"""This is the example module.
This module does stuff.
"""
from __future__ import barry_as_FLUFL
__all__ = ['a', 'b', 'c']
__version__ = '0.1'
__author__ = 'Cardinal Biggles'
import os
import sys
```
But when I put the above into PyCharm, I see this warning (also see the screenshot):
> PEP8: module level import not at top of file
[](http://i.stack.imgur.com/EWkeW.png)
**Question: What is the correct way/place to store these variables with double underscores?** | PEP 8 recently was *updated* to put the location before the imports. See [revision cf8e888b9555](https://hg.python.org/peps/rev/cf8e888b9555), committed on June 7th, 2016:
> Relax `__all__` location.
>
> Put all module level dunders together in the same location, and remove
> the redundant version bookkeeping information.
>
> Closes #27187. Patch by Ian Lee.
The text was [further updated the next day](https://hg.python.org/peps/rev/c451868df657) to address the `from __future__ import ...` caveat.
The patch links to [issue #27187](http://bugs.python.org/issue27187), which in turn references [this `pycodestyle` issue](https://github.com/PyCQA/pycodestyle/issues/394), where it was discovered PEP 8 was unclear.
Before this change, as there was no clear guideline on module-level dunder globals, so PyCharm and the other answer were correct *at the time*. I'm not sure how PyCharm implements their PEP 8 checks; if they use the [pycodestyle project](https://pycodestyle.readthedocs.io/en/latest/) (the defacto Python style checker), then I'm sure it'll be fixed automatically. Otherwise, perhaps file a bug with them to see this fixed. |
List comprehension works but not for loop - why? | 39,044,403 | 5 | 2016-08-19T17:05:46Z | 39,044,516 | 8 | 2016-08-19T17:13:29Z | [
"python",
"pandas",
"for-loop",
"indexing",
"list-comprehension"
] | I'm a bit annoyed with myself because I can't understand why one solution to a problem worked but another didn't. As in, it points to a deficient understanding of (basic) pandas on my part, and that makes me mad!
Anyway, my problem was simple: I had a list of 'bad' values ('bad\_index'); these corresponded to row indexes on a dataframe ('data\_clean1') for which I wanted to delete the corresponding rows. However, as the values will change with each new dataset, I didn't want to plug the bad values directly into the code. Here's what I did first:
```
bad_index = [2, 7, 8, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 24, 29]
for i in bad_index:
dataclean2 = dataclean1.drop([i]).reset_index(level = 0, drop = True)
```
But this didn't work; the data\_clean2 remained the exact same as data\_clean1. My second idea was to use list comprehensions (as below); this worked out fine.
```
bad_index = [2, 7, 8, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 24, 29]
data_clean2 = data_clean1.drop([x for x in bad_index]).reset_index(level = 0, drop = True)
```
Now, why did the list comprehension method work and not the 'for' loop? I've been coding for a few months, and I feel that I shouldn't be making these kinds of errors.
Thanks! | `data_clean1.drop([x for x in bad_index]).reset_index(level = 0, drop = True)` is equivalent to simply passing the `bad_index` list to `drop`:
`data_clean1.drop(bad_index).reset_index(level = 0, drop = True)`
`drop` accepts a list, and drops every index present in the list.
Your explicit `for` loop didn't work because in every iteration you simply dropped a different index from the `dataclean1` dataframe without saving the intermediate dataframes, so by the last iteration `dataclean2` was simply the result of executing
`dataclean2 = dataclean1.drop(29).reset_index(level = 0, drop = True)` |
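The same pitfall is easy to reproduce in plain Python (an illustrative sketch I'm adding, not tied to pandas): rebinding the result from the *unchanged* original on each iteration means only the last iteration survives.

```python
items = ['a', 'b', 'c', 'd', 'e']
bad_index = [1, 3]

# Wrong: each pass starts from the untouched `items` again
for i in bad_index:
    result = [x for j, x in enumerate(items) if j != i]
print(result)  # ['a', 'b', 'c', 'e'] -- only index 3 was dropped

# Right: drop everything in one operation
result = [x for j, x in enumerate(items) if j not in bad_index]
print(result)  # ['a', 'c', 'e']
```

The fix is the same in either setting: either pass the whole list of indices to one drop operation, or reassign the accumulating result back to the variable you loop over.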
python use lazy assignment when passing lists to methods | 39,068,974 | 2 | 2016-08-21T21:20:33Z | 39,069,040 | 7 | 2016-08-21T21:30:10Z | [
"python",
"generator"
] | I have a python class A and its child B. I make an object of class B passing some list-type arguments, which are appended in B and the result stored in class A. I would like to be able to change one of those initial arguments and see the change reflected without further ado. Let me explain with an example:
```
class A():
def __init__(self,data):
self.a=data
class B(A):
def __init__(self,x,y):
self.x=x
self.y=y
A.__init__(self,self.x+self.y)
def change_x(self,x):
self.x = x
```
Now, wen i run
```
test = B([1, 2], [3,4])
print(test.a)
```
I obviously get [1,2,3,4]. If I change part of the list as follows:
```
test.change_x([3,4])
print(test.a)
```
I would like to get [3,4,3,4], instead of course, I receive [1,2,3,4]. Of course test.a is only evaluated during instantiation.
I understand why this is not the case and I read about generators but don't manage to figure out how to implement this. I don't want to end up with an iterable that i can only iterate once.
Could anyone help me? A clean way to solve this? | Is there a reason to have the class `A` at all? You could create a property or method `a` that returns what you want within class `B`.
```
class B():
def __init__(self, x, y):
self.x = x
self.y = y
def change_x(self, x):
self.x = x
@property
def a(self):
return self.x + self.y
test = B([1, 2], [3, 4])
print(test.a) # [1, 2, 3, 4]
test.change_x([3, 4])
print(test.a) # [3, 4, 3, 4]
```
**Small note**: With this implementation, the `change_x` method shouldn't be necessary. It's generally more Pythonic to just access attributes directly (e.g. `test.x = x`) than to use getters and setters. |
What is the python way to iterate two list and compute by position? | 39,069,782 | 3 | 2016-08-21T23:40:29Z | 39,069,789 | 8 | 2016-08-21T23:42:00Z | [
"python"
] | What is the Pythonic way to iterate over two lists and compute?
```
a, b=[1,2,3], [4,5,6]
c=[]
for i in range(3):
c.append(a[i]+b[i])
print(c)
[5,7,9]
```
Is there a one-liner for `c` without a for loop? | One possibility:
```
a, b = [1, 2, 3], [4, 5, 6]
c = list(map(sum, zip(a,b)))
```
Another option (if it's always just two lists):
```
c = [x + y for x, y in zip(a, b)]
```
Or probably my favorite, which is equivalent to my first example but uses list comprehension instead of `map`:
```
c = [sum(numbers) for numbers in zip(a, b)]
```
To generalize to a list of lists (rather than a fixed number of lists in different variables):
```
lists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
c = [sum(numbers) for numbers in zip(*lists)]
``` |
What is the python way to iterate two list and compute by position? | 39,069,782 | 3 | 2016-08-21T23:40:29Z | 39,069,792 | 9 | 2016-08-21T23:42:20Z | [
"python"
] | What is the Pythonic way to iterate over two lists and compute?
```
a, b=[1,2,3], [4,5,6]
c=[]
for i in range(3):
c.append(a[i]+b[i])
print(c)
[5,7,9]
```
Is there a one-liner for `c` without a for loop? | Use `zip` and list comprehension:
```
[x+y for (x,y) in zip(a, b)]
``` |
Accessing entire string in Python using Negative indices | 39,074,498 | 2 | 2016-08-22T08:21:15Z | 39,074,556 | 7 | 2016-08-22T08:24:05Z | [
"python",
"python-2.7"
] | Here is a Python oddity that I discovered while teaching students.
To check how negative indexing works for a string m='string', I tried the following steps.
```
>>> m='string'
>>> m[:-1]
'strin'
>>> m[:0]
''
>>> m[-1]
'g'
>>> m[0:]
'string'
>>> m[:-1]
'strin'
>>> m[:0]
''
>>>
```
I want to know how to access the entire string using the negative index? | ```
>>> m='string'
>>> m[-len(m):]
'string'
```
Just as positive indices count forward from the beginning of a string, negative indices count back from the end. Thus, we have to count back by `len(m)` to get back to the beginning of `m`. |
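For completeness, a few equivalent spellings:

```python
m = 'string'
# all of these produce the whole string
assert m[-len(m):] == m[0:] == m[:] == m
# negative indices also address single characters, counting from the end
assert m[-len(m)] == m[0] == 's'
assert m[-1] == 'g'
print(m[-len(m):])  # string
```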
X and Y or Z - ternary operator | 39,080,416 | 6 | 2016-08-22T13:07:46Z | 39,080,631 | 8 | 2016-08-22T13:16:40Z | [
"python",
"syntax",
"ternary-operator"
] | In Java or C we have `<condition> ? X : Y`, which translates into Python as `X if <condition> else Y`.
But there's also this little trick: `<condition> and X or Y`.
While I understand that it's equivalent to the aforementioned ternary operators, I find it difficult to grasp how `and` and `or` operators are able to produce correct result. What's the logic behind this? | > While I understand that it's equivalent to the aforementioned ternary
> operators
This is incorrect:
```
In [32]: True and 0 or 1
Out[32]: 1
In [33]: True and 2 or 1
Out[33]: 2
```
Why the first expression returns `1` (i.e. `Y`), while the condition is `True` and the "expected" answer is `0` (i.e. `X`)?
According to the docs:
> The expression x and y first evaluates x; if x is false, its value is
> returned; otherwise, y is evaluated and the resulting value is
> returned.
>
> The expression x or y first evaluates x; if x is true, its value is
> returned; otherwise, y is evaluated and the resulting value is
> returned.
So, `True and 0 or 1` evaluates the first argument of the `and` operator, which is `True`. Then it returns the second argument, which is `0`.
Since the `True and 0` returns false value, the `or` operator returns the second argument (i.e. `1`) |
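For what it's worth, the classic pre-2.5 workaround wraps both branches in lists, so the `X` operand is always truthy and the trick behaves like a real ternary:

```python
x, y = 0, 1   # a falsy "X" value that breaks the plain and/or trick
cond = True

broken = cond and x or y         # 1 -- wrong: picks Y because x is falsy
safe = (cond and [x] or [y])[0]  # 0 -- correct: [x] is truthy even when x is 0
print(broken, safe)
```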
FutureWarning when comparing a NumPy object to "None" | 39,088,173 | 5 | 2016-08-22T20:27:41Z | 39,088,194 | 10 | 2016-08-22T20:29:11Z | [
"python",
"arrays",
"numpy"
] | I have a function that receives some arguments, plus some optional arguments. In it, the action taken is dependent upon whether the optional argument `c` was filled:
```
def func(a, b, c = None):
doStuff()
if c != None:
doOtherStuff()
```
If `c` is not passed, then this works fine. However, in my context, if `c` *is* passed, it will always be a `numpy` array. And comparing `numpy` arrays to `None` yields the following warning:
```
FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.
```
So, what is the cleanest and most general way to check whether or not `c` was passed or not without comparing to `None`? | Use `if c is not None` instead. In addition to avoiding the warning, this is [generally considered best-practice](https://www.python.org/dev/peps/pep-0008/#programming-recommendations). |
Time complexity calculation for my algorithm | 39,092,049 | 5 | 2016-08-23T03:50:33Z | 39,092,904 | 9 | 2016-08-23T05:18:34Z | [
"python",
"algorithm"
] | Given a string, find the first non-repeating character in it and return its index. If it doesn't exist, return -1. You may assume the string contain only lowercase letters.
I'm going to define a hash that tracks the occurrence of characters. Traverse the string from left to right, check if the current character is in the hash, continue if yes, otherwise in another loop traverse the rest of the string to see if the current character exists. Return the index if not and update the hash if it exists.
```
def firstUniqChar(s):
track = {}
for index, i in enumerate(s):
if i in track:
continue
elif i in s[index+1:]: # For the last element, i in [] holds False
track[i] = 1
continue
else:
return index
return -1
firstUniqChar('timecomplexity')
```
What's the time complexity (average and worst) of my algorithm? | Your algorithm has time complexity of `O(kn)` where `k` is the number of unique characters in the string. If `k` is a constant then it is `O(n)`. As the problem description clearly bounds the number of alternatives for elements ("assume lower-case (ASCII) letters"), thus `k` is constant and your algorithm runs in `O(n)` time on this problem. Even though n will grow to infinite, you will only make `O(1)` slices of the string and your algorithm will remain `O(n)`. If you removed `track`, then it would be `O(n²)`:
```
In [36]: s = 'abcdefghijklmnopqrstuvwxyz' * 10000
In [37]: %timeit firstUniqChar(s)
100 loops, best of 3: 18.2 ms per loop
In [38]: s = 'abcdefghijklmnopqrstuvwxyz' * 20000
In [37]: %timeit firstUniqChar(s)
10 loops, best of 3: 36.3 ms per loop
In [38]: s = 'timecomplexity' * 40000 + 'a'
In [39]: %timeit firstUniqChar(s)
10 loops, best of 3: 73.3 ms per loop
```
It pretty much holds there that the `T(n)` is still of `O(n)` complexity - it scales exactly linearly with the number of characters in the string, even though this is the worst-case scenario for your algorithm - there is no single character that is unique.
---
I will present a not-that-efficient, but simple and smart method here; count the character histogram first with `collections.Counter`; then iterate over the characters, returning the index of the first one whose count is 1:
```
from collections import Counter
def first_uniq_char_ultra_smart(s):
counts = Counter(s)
for i, c in enumerate(s):
if counts[c] == 1:
return i
return -1
first_uniq_char_ultra_smart('timecomplexity')
```
This has time complexity of `O(n)`; `Counter` counts the histogram in `O(n)` time and we need to enumerate the string again for `O(n)` characters. However in practice I believe my algorithm has low constants, because it uses a standard dictionary for `Counter`.
And lets make a very stupid brute-force algorithm. Since you can assume that the string contains only lower-case letters, then use that assumption:
```
import string
def first_uniq_char_very_stupid(s):
indexes = []
for c in string.ascii_lowercase:
if s.count(c) == 1:
indexes.append(s.find(c))
# default=-1 is Python 3 only
return min(indexes, default=-1)
```
Let's test my algorithm and some algorithms found in the other answers, on Python 3.5. I've chosen a case that is pathologically bad for *my* algorithm:
```
In [30]: s = 'timecomplexity' * 10000 + 'a'
In [31]: %timeit first_uniq_char_ultra_smart(s)
10 loops, best of 3: 35 ms per loop
In [32]: %timeit karin(s)
100 loops, best of 3: 11.7 ms per loop
In [33]: %timeit john(s)
100 loops, best of 3: 9.92 ms per loop
In [34]: %timeit nicholas(s)
100 loops, best of 3: 10.4 ms per loop
In [35]: %timeit first_uniq_char_very_stupid(s)
1000 loops, best of 3: 1.55 ms per loop
```
So, my stupid algorithm is the fastest, because it finds the `a` at the end and bails out early. And my smart algorithm is the slowest. One more reason for the bad performance of my algorithm, besides this being its worst case, is that `OrderedDict` is written in C on Python 3.5 while `Counter` is in Python.
---
Let's make a better test here:
```
In [60]: s = string.ascii_lowercase * 10000
In [61]: %timeit nicholas(s)
100 loops, best of 3: 18.3 ms per loop
In [62]: %timeit karin(s)
100 loops, best of 3: 19.6 ms per loop
In [63]: %timeit john(s)
100 loops, best of 3: 18.2 ms per loop
In [64]: %timeit first_uniq_char_very_stupid(s)
100 loops, best of 3: 2.89 ms per loop
```
So it appears that the "stupid" algorithm of mine isn't all that stupid at all, it exploits the speed of C while minimizing the number of iterations of Python code being run, and wins clearly in this problem. |
Why do 4 different languages give 4 different results here? | 39,100,460 | 5 | 2016-08-23T11:54:05Z | 39,106,941 | 9 | 2016-08-23T17:01:27Z | [
"python",
"perl",
"awk",
"rounding",
"long-integer"
] | Consider this (all commands run on an 64bit Arch Linux system):
* Perl (v5.24.0)
```
$ perl -le 'print 10190150730169267102/1000%10'
6
```
* `awk` (GNU Awk 4.1.3)
```
$ awk 'BEGIN{print 10190150730169267102/1000%10}'
6
```
* R (3.3.1)
```
> (10190150730169267102/1000)%%10
[1] 6
```
* `bc`
```
$ echo 10190150730169267102/1000%10 | bc
7
```
* Python 2 (2.7.12)
```
>>> print(10190150730169267102/1000%10)
7
```
* Python 3 (3.5.2)
```
>>> print(10190150730169267102/1000%10)
8.0
```
So, Perl, `gawk` and `R` agree, as do `bc` and Python 2. Nevertheless, between the 6 tools tested, I got 4 different results. I understand that this has something to do with how very long integers are being rounded, but why do the different tools differ quite so much? I had expected that this would depend on the processor's ability to deal with large numbers, but it seems to depend on internal features (or bugs) of the language.
Could someone explain what is going on behind the scenes here? What are the limitations in each language and why do they behave quite so differently? | You're seeing different results for two reasons:
1. The division step is doing two different things: in some of the languages you tried, it represents *integer* division, which discards the fractional part of the result and just keeps the integer part. In others it represents actual mathematical division (which following Python's terminology I'll call "true division" below), returning a floating-point result close to the true quotient.
2. In some languages (those with support for arbitrary precision), the large numerator value `10190150730169267102` is being represented exactly; in others, it's replaced by the nearest representable floating-point value.
The different combinations of the possibilities in 1. and 2. above give you the different results.
In detail: in Perl, awk, and R, we're working with floating-point values and true division. The value `10190150730169267102` is too large to store in a machine integer, so it's stored in the usual IEEE 754 binary64 floating-point format. That format can't represent that particular value exactly, so what gets stored is the closest value that *is* representable in that format, which is `10190150730169266176.0`. Now we divide that approximation by `1000`, again giving a floating-point result. The exact quotient, `10190150730169266.176`, is again not exactly representable in the binary64 format, and we get the closest representable float, which happens to be `10190150730169266.0`. Taking a remainder modulo `10` gives `6`.
In bc and Python 2, we're working with arbitrary-precision integers and integer division. Both those languages can represent the numerator exactly. The division result is then `10190150730169267` (we're doing *integer division*, not *true division*, so the fractional part is discarded), and the remainder modulo `10` is `7`. (This is oversimplifying a bit: the format that bc is using internally is somewhat closer to Python's `Decimal` type than to an arbitrary-precision integer type, but in this case the effect is the same.)
In Python 3, we're working with arbitrary-precision integers and true division. The numerator is represented exactly, but the result of the division is the nearest floating-point value to the true quotient. In this case the exact quotient is `10190150730169267.102`, and the closest representable floating-point value is `10190150730169268.0`. Taking the remainder of that value modulo `10` gives `8`.
Summary:
* Perl, awk, R: floating-point approximations, true division
* bc, Python 2: arbitrary-precision integers, integer division
* Python 3: arbitrary-precision integers, true division |
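All four combinations can be reproduced from a single Python 3 session; the printed values follow directly from the analysis above:

```python
n = 10190150730169267102

# arbitrary-precision integer, integer division (bc, Python 2):
print(n // 1000 % 10)        # 7

# arbitrary-precision integer, true division (Python 3):
print(n / 1000 % 10)         # 8.0

# float approximation, true division (Perl, awk, R):
print(float(n) / 1000 % 10)  # 6.0

# the float approximation of the numerator itself:
print(float(n))              # 1.0190150730169266e+19
```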
Why make lists unhashable? | 39,104,841 | 3 | 2016-08-23T15:09:57Z | 39,104,966 | 9 | 2016-08-23T15:14:57Z | [
"python",
"list",
"hash"
] | A common issue on SO is [removing duplicates from a list of lists](https://stackoverflow.com/questions/39081807/python-2-d-list-how-to-make-a-set/39081956#39081956). Since lists are unhashable, `set([[1, 2], [3, 4], [1, 2]])` throws `TypeError: unhashable type: 'list'`. Answers to this kind of question usually involve using tuples, which are immutable and therefore hashable.
This answer to [What makes lists unhashable?](http://stackoverflow.com/questions/23268899/what-makes-lists-unhashable) include the following:
> If the hash value changes after it gets stored at a particular slot in the dictionary, it will lead to an inconsistent dictionary. For example, initially the list would have gotten stored at location A, which was determined based on the hash value. If the hash value changes, and if we look for the list we might not find it at location A, or as per the new hash value, we might find some other object.
but I don't quite understand because other types that can be used for dictionary keys can be changed without issue:
```
>>> d = {}
>>> a = 1234
>>> d[a] = 'foo'
>>> a += 1
>>> d[a] = 'bar'
>>> d
{1234: 'foo', 1235: 'bar'}
```
It is obvious that if the value of `a` changes, it will hash to a different location in the dictionary. **Why is the same assumption dangerous for a list?** Why is the following an unsafe method for hashing a list, since it is what we all use when we need to anyway?
```
>>> class my_list(list):
... def __hash__(self):
... return tuple(self).__hash__()
...
>>> a = my_list([1, 2])
>>> b = my_list([3, 4])
>>> c = my_list([1, 2])
>>> foo = [a, b, c]
>>> foo
[[1, 2], [3, 4], [1, 2]]
>>> set(foo)
set([[1, 2], [3, 4]])
```
It seems that this solves the `set()` problem, why is this an issue? Lists may be mutable, but they are ordered which seems like it would be all that's needed for hashing. | You seem to confuse mutability with rebinding. `a += 1` assigns a **new object**, the `int` object with the numeric value 1235, to `a`. Under the hood, for immutable objects like `int`, `a += 1` is just the same as `a = a + 1`.
The original `1234` object is not mutated. The dictionary is still using an `int` object with numeric value 1234 as the key. The dictionary still holds a *reference* to that object, even though `a` now references a different object. The two references are independent.
Try this instead:
```
>>> class BadKey:
... def __init__(self, value):
... self.value = value
... def __eq__(self, other):
... return other == self.value
... def __hash__(self):
... return hash(self.value)
... def __repr__(self):
... return 'BadKey({!r})'.format(self.value)
...
>>> badkey = BadKey('foo')
>>> d = {badkey: 42}
>>> badkey.value = 'bar'
>>> print(d)
{BadKey('bar'): 42}
```
Note that I altered the attribute `value` on the `badkey` instance. I didn't even touch the dictionary. The dictionary reflects the change; the *actual key value itself* was mutated, the object that both the name `badkey` and the dictionary reference.
However, you now **can't access that key anymore**:
```
>>> badkey in d
False
>>> BadKey('bar') in d
False
>>> for key in d:
... print(key, key in d)
...
BadKey('bar') False
```
I have thoroughly broken my dictionary, because I can no longer reliably locate the key.
That's because `BadKey` violates the principles of *hashability*; that the hash value **must** remain stable. You can only do that if you don't change anything about the object that the hash is based on. And the hash must be based on whatever makes two instances equal.
For lists, the *contents* make two list objects equal. And you can change those, so you can't produce a stable hash either. |
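A short demonstration of why the `my_list` approach from the question is unsafe once an element is actually mutated (relying on CPython's usual tuple hashing):

```python
class HashableList(list):
    def __hash__(self):
        return hash(tuple(self))

s = {HashableList([1, 2])}
item = next(iter(s))
item.append(3)        # mutate the element *while it is inside the set*

print(item in s)      # False -- lookup probes the bucket for the new hash
print(HashableList([1, 2]) in s)  # False -- old hash matches, but equality fails
```

The element is still physically in the set, yet no key can find it anymore - exactly the broken-dictionary scenario described above.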
A python regex that matches the regional indicator character class | 39,108,298 | 4 | 2016-08-23T18:26:36Z | 39,108,371 | 7 | 2016-08-23T18:32:00Z | [
"python",
"regex",
"unicode"
] | Flags in emoji are indicated by a pair of [Regional Indicator Symbols](https://en.wikipedia.org/wiki/Regional_Indicator_Symbol). I would like to write a python regex to insert spaces between a string of emoji flags.
For example, this string is two Brazilian flags:
```
u"\U0001F1E7\U0001F1F7\U0001F1E7\U0001F1F7"
```
Which will render like this: 🇧🇷🇧🇷
I'd like to insert spaces between any pair of regional indicator symbols. Something like this:
```
re.sub(re.compile(u"([\U0001F1E6-\U0001F1FF][\U0001F1E6-\U0001F1FF])"),
r"\1 ",
u"\U0001F1E7\U0001F1F7\U0001F1E7\U0001F1F7")
```
Which would result in:
```
u"\U0001F1E7\U0001F1F7 \U0001F1E7\U0001F1F7 "
```
But that code gives me an error:
```
sre_constants.error: bad character range
```
A hint (I think) at what's going wrong is the following, which shows that \U0001F1E7 is turning into two "characters" in the regex:
```
re.search(re.compile(u"([\U0001F1E7])"),
u"\U0001F1E7\U0001F1F7\U0001F1E7\U0001F1F7").group(0)
```
This results in:
```
u'\ud83c'
```
Sadly my understanding of unicode is too weak for me to make further progress.
EDIT: I am using python 2.7.10 on a Mac. | I believe you're using Python 2.7 in Windows or Mac, which has the narrow 16-bit Unicode build - Linux/Glibc usually have 32-bit full unicode, also Python 3.5 has wide Unicode on all platforms.
What you see is one code point being split into a surrogate pair. Unfortunately it also means that you cannot use a single character class easily for this task. However it is still possible. The UTF-16 representation of [U+1F1E6 (🇦)](http://www.fileformat.info/info/unicode/char/1f1e6/index.htm) is `\uD83C\uDDE6`, and that of [U+1F1FF (🇿)](http://www.fileformat.info/info/unicode/char/1f1ff/index.htm) is `\uD83C\uDDFF`.
I do not even have access to such a Python build, but you could try
```
\uD83C[\uDDE6-\uDDFF]
```
as a replacement for single `[\U0001F1E6-\U0001F1FF]`, thus your whole regex would be
```
(\uD83C[\uDDE6-\uDDFF]\uD83C[\uDDE6-\uDDFF])
```
The reason why the character class doesn't work is that it tries to make a range from the second half of the first surrogate pair to the first half of the second surrogate pair - this fails, because the start of the range is lexicographically greater than the end.
However, this regular expression still wouldn't work on Linux, you need to use the original there. Or upgrade to Python 3.5. |
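To make the code portable across both builds, a common idiom is to select the pattern at runtime via `sys.maxunicode` (a sketch - untested on an actual narrow 2.7 build):

```python
import re
import sys

if sys.maxunicode > 0xFFFF:
    # wide build (most Linux Python 2 builds, and any Python 3.3+):
    # astral code points work directly in character classes
    pattern = re.compile(u"([\U0001F1E6-\U0001F1FF]{2})")
else:
    # narrow build (Windows/Mac Python 2): match the surrogate pairs instead
    pattern = re.compile(u"(\uD83C[\uDDE6-\uDDFF]\uD83C[\uDDE6-\uDDFF])")

flags = u"\U0001F1E7\U0001F1F7\U0001F1E7\U0001F1F7"
print(pattern.sub(u"\\1 ", flags))  # a space inserted after each flag pair
```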
List Accumulation with Append | 39,119,300 | 3 | 2016-08-24T09:22:25Z | 39,119,482 | 8 | 2016-08-24T09:32:02Z | [
"python",
"list"
] | I want to generate or return an append-accumulated list from a given list (or iterator). For a list like `[1, 2, 3, 4]`, I would like to get, `[1]`, `[1, 2]`, `[1, 2, 3]` and `[1, 2, 3, 4]`. Like so:
```
>>> def my_accumulate(iterable):
... grow = []
... for each in iterable:
... grow.append(each)
... yield grow
...
>>> for x in my_accumulate(some_list):
... print x # or something more useful
...
[1]
[1, 2]
[1, 2, 3]
[1, 2, 3, 4]
```
This works but is there an operation I could use with [`itertools.accumulate`](https://docs.python.org/3/library/itertools.html#itertools.accumulate) to facilitate this? (I'm on Python2 but the pure-python implementation/equivalent has been provided in the docs.)
Another problem I have with `my_accumulate` is that it doesn't work well with `list()`, it outputs the entire `some_list` for each element in the list:
```
>>> my_accumulate(some_list)
<generator object my_accumulate at 0x0000000002EC3A68>
>>> list(my_accumulate(some_list))
[[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
```
---
Option 1:
I wrote my own appending accumulator function to use with `itertools.accumulate` but considering the LoC and final useful-ness, it seems like a waste of effort, with `my_accumulate` being more useful, *(though may fail in case of empty iterables and consumes more memory since `grow` keeps growing)*:
```
>>> def app_acc(first, second):
... if isinstance(first, list):
... first.append(second)
... else:
... first = [first, second]
... return first
...
>>> for x in accumulate(some_list, app_acc):
... print x
...
1
[1, 2]
[1, 2, 3]
[1, 2, 3, 4]
>>> list(accumulate(some_list, app_acc)) # same problem again with list
[1, [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
```
*(and the first returned elem is not a list, just a single item)*
---
Option 2: Figured it would be easier to just do incremental slicing but using the ugly iterate over list length method:
```
>>> for i in xrange(len(some_list)): # the ugly iterate over list length method
... print some_list[:i+1]
...
[1]
[1, 2]
[1, 2, 3]
[1, 2, 3, 4]
``` | The easiest way to use `accumulate` is to make each item in the iterable a list with a single item and then the default function works as expected:
```
from itertools import accumulate
acc = accumulate([el] for el in range(1, 5))
res = list(acc)
# [[1], [1, 2], [1, 2, 3], [1, 2, 3, 4]]
``` |
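Since the question mentions Python 2 (where `itertools.accumulate` does not exist), the pure-Python recipe from the docs, or simply a slice-based comprehension, gives the same result:

```python
some_list = [1, 2, 3, 4]
res = [some_list[:i] for i in range(1, len(some_list) + 1)]
print(res)  # [[1], [1, 2], [1, 2, 3], [1, 2, 3, 4]]
```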
Apply a list of Python functions in order elegantly | 39,123,375 | 4 | 2016-08-24T12:29:56Z | 39,123,400 | 12 | 2016-08-24T12:31:07Z | [
"python",
"functional-programming"
] | I have an input value `val` and a list of functions to be applied in order:
```
funcs = [f1, f2, f3, ..., fn]
```
How can I apply them elegantly, without writing
```
fn( ... (f3(f2(f1(val))) ... )
```
and also without using a for loop:
```
tmp = val
for f in funcs:
tmp = f(tmp)
```
Thanks Martijn for the awesome answer. There's some reading I found: <https://mathieularose.com/function-composition-in-python/> . | Use the [`reduce()` function](https://docs.python.org/2/library/functions.html#reduce):
```
# forward-compatible import
from functools import reduce
result = reduce(lambda res, f: f(res), funcs, val)
```
`reduce()` applies the first argument, a callable, to each element taken from the second argument, plus the accumulated result so far (as `(result, element)`). The third argument is a starting value (the first element from `funcs` would be used otherwise).
In Python 3, the built-in function was moved to the [`functools.reduce()` location](https://docs.python.org/3/library/functools.html#functools.reduce); for forward compatibility that same reference is available in Python 2.6 and up.
Other languages may call this [folding](https://en.wikipedia.org/wiki/Fold_(higher-order_function)#Folds_in_various_languages).
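For example, with an illustrative pipeline of string functions (any unary callables would do):

```python
from functools import reduce

funcs = [str.strip, str.lower, str.title]  # illustrative pipeline
val = "  hELLo wORLd  "
print(reduce(lambda res, f: f(res), funcs, val))  # Hello World
```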
If you need *intermediate* results for each function too, use [`itertools.accumulate()`](https://docs.python.org/3/library/itertools.html#itertools.accumulate) (only from Python 3.3 onwards for a version that takes a function argument):
```
from itertools import accumulate, chain
running_results = accumulate(chain([val], funcs), lambda res, f: f(res))
``` |
Why does printing a dataframe break python when constructed from numpy empty_like | 39,129,419 | 10 | 2016-08-24T17:17:52Z | 39,131,997 | 7 | 2016-08-24T19:54:46Z | [
"python",
"pandas",
"numpy"
] | ```
import numpy as np
import pandas as pd
```
consider numpy array `a`
```
a = np.array([None, None], dtype=object)
print(a)
[None None]
```
And `dfa`
```
dfa = pd.DataFrame(a)
print(dfa)
0
0 None
1 None
```
Now consider numpy array `b`
```
b = np.empty_like(a)
print(b)
[None None]
```
It appears the same as `a`
```
(a == b).all()
True
```
# ***THIS! CRASHES MY PYTHON!!*** BE CAREFUL!!!
```
dfb = pd.DataFrame(b) # Fine so far
print(dfb.values)
[[None]
[None]]
```
However
```
print(dfb) # BOOM!!!
``` | As reported [here,](https://github.com/pydata/pandas/issues/14082) this is a bug, which is fixed in the master branch of `pandas` / the upcoming `0.19.0` release. |
What is the correct format to upgrade pip3 when the default pip is pip2? | 39,129,450 | 2 | 2016-08-24T17:20:10Z | 39,129,467 | 7 | 2016-08-24T17:21:11Z | [
"python",
"python-3.x",
"cygwin",
"pip",
"python-3.4"
] | I develop for both `Python 2` and `3.`
Thus, I have to use both `pip2` and `pip3.`
When using `pip3 -` I receive this upgrade request (last two lines):
```
$ pip3 install arrow
Requirement already satisfied (use --upgrade to upgrade): arrow in c:\program files (x86)\python3.5.1\lib\site-packages
Requirement already satisfied (use --upgrade to upgrade): python-dateutil in c:\program files (x86)\python3.5.1\lib\site-packages (from arrow)
Requirement already satisfied (use --upgrade to upgrade): six>=1.5 in c:\program files (x86)\python3.5.1\lib\site-packages (from python-dateutil->arrow)
You are using pip version 7.1.2, however version 8.1.2 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.
```
My default `pip` is for `Python 2,` namely:
```
$ python -m pip install --upgrade pip
Requirement already up-to-date: pip in /usr/lib/python2.7/site-packages
```
However, none of the following *explicit* commands succeed in upgrading the `Python 3 pip:`
```
$ python -m pip3 install --upgrade pip3
/bin/python: No module named pip3
$ python -m pip install --upgrade pip3
Collecting pip3
Could not find a version that satisfies the requirement pip3 (from versions: )
No matching distribution found for pip3
$ python -m pip install --upgrade pip3.4
Collecting pip3.4
Could not find a version that satisfies the requirement pip3.4 (from versions: )
No matching distribution found for pip3.4
```
## What is the correct command to upgrade pip3 when it is not the default pip?
Environment:
```
$ python3 -V
Python 3.4.3
$ uname -a
CYGWIN_NT-6.1-WOW 2.5.2(0.297/5/3) 2016-06-23 14:27 i686 Cygwin
``` | Just use the `pip3` command you already have:
```
pip3 install --upgrade pip
```
The installed *project* is called `pip`, always. The `pip3` command is tied to your Python 3 installation and is an alias for `pip`, but the latter is shadowed by the `pip` command in your Python 2 setup.
You can do it with the associated Python binary too; if it executable as `python3`, then use that:
```
python3 -m pip install --upgrade pip
```
Again, the project is called `pip`, and so is the module that is installed into your `site-packages` directory, so stick to that name for the `-m` command-line option and for the `install` command. |
Sort list of mixed strings based on digits | 39,129,846 | 6 | 2016-08-24T17:43:34Z | 39,129,897 | 10 | 2016-08-24T17:47:18Z | [
"python"
] | How do I sort this list via the numerical values? Is a regex required to remove the numbers or is there a more Pythonic way to do this?
```
to_sort
['12-foo',
'1-bar',
'2-bar',
'foo-11',
'bar-3',
'foo-4',
'foobar-5',
'6-foo',
'7-bar']
```
Desired output is as follows:
```
1-bar
2-bar
bar-3
foo-4
foobar-5
6-foo
7-bar
foo-11
12-foo
``` | One solution is the following regex extraction:
```
sorted(l, key=lambda x: int(re.search(r'\d+', x).group(0)))
```
---
```
>>> l
['12-foo', '1-bar', '2-bar', 'foo-11', 'bar-3', 'foo-4', 'foobar-5', '6-foo', '7-bar']
>>> sorted(l, key=lambda x: int(re.search(r'\d+', x).group(0)))
['1-bar', '2-bar', 'bar-3', 'foo-4', 'foobar-5', '6-foo', '7-bar', 'foo-11', '12-foo']
```
The `key` is the extracted digit (converted to `int` to avoid sorting lexographically). |
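Note that `re.search` returns `None` when a string contains no digits, so the `lambda` above would raise `AttributeError` on such input. A slightly more defensive key (sorting digit-less strings last) might look like:

```python
import re

def digit_key(s):
    # assumption: strings may lack digits; sort those after everything else
    match = re.search(r'\d+', s)
    return int(match.group()) if match else float('inf')

l = ['12-foo', '1-bar', 'no-digits-here', 'bar-3']
print(sorted(l, key=digit_key))
# ['1-bar', 'bar-3', '12-foo', 'no-digits-here']
```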
Operations on Large Python Sets | 39,133,758 | 2 | 2016-08-24T22:08:34Z | 39,133,943 | 9 | 2016-08-24T22:27:39Z | [
"python",
"set"
] | **UPDATE 1**
Both sets contain strings of maximum length 20 and can only take values from 'abcdefghijklmnopqrstuvwxyz'
**UPDATE 2**
I constructed the sets by reading 2 files from disk using a library called ujson (similar to simplejson) and then converting the returned lists into sets.
---
I am trying to take the difference of 2 sets containing 100 million elements each.
This code executes in 2 minutes:
```
temp = set() #O(1)
for i in first_100_million_set: #O(N)
temp.add(i) #O(1)
```
This code executes in 6 hours:
```
temp = set() #O(1)
for i in first_100_million_set: #O(N)
if i in second_100_million_set: #O(1)
temp.add(i) #O(1)
```
All I did was add a membership check which, if I am not mistaken, is done in O(1). Where is this massive slowdown coming from?
*I know about set(a) - set(b) , it is practically doing exactly what my second block of code is doing , takes 6 hours to complete as well, I just wanted to write the whole procedure to demonstrate my point of confusion.*
**Do you think there is a better solution for what I am trying to do ?** | When talking about 100 million element sets, I'd worry about data being evicted from RAM (going to swap/pagefile). A 100M element `set` on Python 3.5 built for a 64 bit processor (which you're using, because you couldn't even create such a `set` in a 32 bit build of Python) uses 4 GB of memory just for the `set` overhead (ignoring the memory used by the objects it contains).
Your code that creates a new `set` without membership testing the second `set` accesses this memory sequentially, so the OS can predict the access patterns and it's likely pulling data into the cache before you need it even if most of the `set` is paged out. The only random access occurs in the building of the second `set` (but conveniently, the objects being inserted are already in cache because you pulled them from the original `set`). So you grow from no random access to maybe 4 GB (plus size of contained objects) worth of memory that is being accessed randomly and *must* not be paged out w/o causing performance problems.
In your second case, the `set` being membership tested is accessed randomly on every test, and it has to load every object in the bucket collision chain with a matching hash (admittedly, with good hash generation, there shouldn't be too many of these matches). But it means the size of your randomly accessed memory went from growing from 0 to 4 GB to growing from 4 to as much as 8 GB (depending on how much overlap exists between the `set`s; again, ignoring the access to the stored objects themselves). I wouldn't be surprised if this pushed you from performing mostly RAM accesses to incurring page faults requiring reads from the page file, which is several orders of magnitude slower than RAM access. Not coincidentally, that code is taking a few orders of magnitude longer to execute.
For the record, the `set` overhead is likely to be a fraction of the cost of the objects stored. The smallest useful objects in Python are `float`s (24 bytes a piece on Python 3.5 x64) though they're poor choices for `set`s due to issues with exact equality testing. `int`s that require less than 30 bits of magnitude are conceivably useful, and eat 28 bytes a piece (add 4 bytes for every full 30 bits required to store the value). So a 100M element set might "only" use 4 GB for the data structure itself, but the values are another 2.6 GB or so minimum; if they're not Python built-in types, user-defined objects, even using `__slots__`, would at least double this (quintuple it if not using `__slots__`), before they even pay the RAM for their attributes. I've got 12 GB of RAM on my machine, and your second use case would cause massive page thrashing on it, while your first case would run just fine for a `set` initialized with `range(100000000)` (though it would cause most of the other processes to page out; Python with two `set`s plus the `int`s shared between them use ~11 GB).
Update: Your data (strings from 1-20 ASCII characters) would use 50-69 bytes each on Python 3.5 x64 (probably a little more including allocator overhead), or 4.65-6.43 GB per `set` (assuming none of the strings are shared, that's 9-13 GB for the raw data). Add the three `set`s involved, and you're looking at up to 25 GB of RAM (you don't pay again for the members of the third `set` since they're shared with the first `set`). I wouldn't try to run your code on any machine with less than 32 GB of RAM.
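You can get a feel for these overheads with `sys.getsizeof` (exact numbers vary by Python version and platform, and the container figures exclude the contained objects themselves):

```python
import sys

print(sys.getsizeof(set()))                   # empty-set overhead
print(sys.getsizeof(set(range(1000))))        # container only, not the ints
print(sys.getsizeof('abcdefghijklmnopqrst'))  # one 20-char ASCII key
```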
As for "is there a better solution?" it depends on what you need. If you don't actually need the original `set`s, just the resulting difference, streaming your data would help. For example:
```
with open(file1) as f:
    # Assume one string per line with newlines separating
    myset = set(map(str.rstrip, f))
with open(file2) as f:
    myset.difference_update(map(str.rstrip, f))
```
That would peak at about 10-11 GB of memory, then drop as elements from the second input were removed, leaving you with just the difference `set` and nothing else. Other options include using sorted `list`s of data, which would reduce the overhead from 4 GB per `set` to ~850 MB per `list`, then iterate them both in parallel (but not simultaneously; `zip` is no good here) to find the elements that exist in the first `list` but not the second, removing some of the random access costs too. |
How to use list comprehension with list of variable number of filenames? | 39,139,620 | 4 | 2016-08-25T07:53:54Z | 39,139,727 | 9 | 2016-08-25T07:58:57Z | [
"python",
"python-3.x",
"file-io",
"list-comprehension",
"contextmanager"
] | Given the list of filenames `filenames = [...]`.
Is it possible to rewrite the following list comprehension for I/O safety: `[do_smth(open(filename, 'rb').read()) for filename in filenames]`? Using a `with` statement, the `.close` method, or something else.
Another problem formulation: is it possible to write an I/O-safe list comprehension for the following code?
```
results = []
for filename in filenames:
    with open(filename, 'rb') as file:
        results.append(do_smth(file.read()))
``` | You can put the `with` statement/block into a function and call that in the list comprehension:
```
def slurp_file(filename):
    with open(filename, 'rb') as f:
        return f.read()
results = [do_smth(slurp_file(f)) for f in filenames]
``` |
Imported a Python module; why does reassigning a member in it also affect an import elsewhere? | 39,144,498 | 18 | 2016-08-25T11:49:35Z | 39,144,686 | 26 | 2016-08-25T11:57:29Z | [
"python",
"import"
] | I am seeing Python behavior that I don't understand. Consider this layout:
```
project
| main.py
| test1.py
| test2.py
| config.py
```
main.py:
```
import config as conf
import test1
import test2
print(conf.test_var)
test1.test1()
print(conf.test_var)
test2.test2()
```
test1.py:
```
import config as conf
def test1():
    conf.test_var = 'test1'
```
test2.py:
```
import config as conf
def test2():
    print(conf.test_var)
```
config.py:
```
test_var = 'initial_value'
```
so, `python main.py` produces:
```
initial_value
test1
test1
```
I am confused by the last line. I thought that it would print `initial_value` again because I'm importing `config.py` in `test2.py` again, and I thought that changes that I've made in the previous step would be overwritten. Am I misunderstanding something? | Python caches imported modules (in `sys.modules`). The second `import` statement finds the cached module object and reuses it instead of re-executing the file, so every importer shares the same `config` module. |
Imported a Python module; why does reassigning a member in it also affect an import elsewhere? | 39,144,498 | 18 | 2016-08-25T11:49:35Z | 39,144,726 | 7 | 2016-08-25T11:59:31Z | [
"python",
"import"
] | I am seeing Python behavior that I don't understand. Consider this layout:
```
project
| main.py
| test1.py
| test2.py
| config.py
```
main.py:
```
import config as conf
import test1
import test2
print(conf.test_var)
test1.test1()
print(conf.test_var)
test2.test2()
```
test1.py:
```
import config as conf
def test1():
    conf.test_var = 'test1'
```
test2.py:
```
import config as conf
def test2():
    print(conf.test_var)
```
config.py:
```
test_var = 'initial_value'
```
so, `python main.py` produces:
```
initial_value
test1
test1
```
I am confused by the last line. I thought that it would print `initial_value` again because I'm importing `config.py` in `test2.py` again, and I thought that changes that I've made in the previous step would be overwritten. Am I misunderstanding something? | test2.py
```
import config as conf
def test2():
    print(id(conf.test_var))
    print(conf.test_var)
```
test1.py
```
import config as conf
def test1():
    conf.test_var = 'test1'
    print(id(conf.test_var))
```
Change the code like this,
and run `main.py`:
```
initial_value
140007892404912
test1
140007892404912
test1
```
So, you can see that in both cases you are changing the value of the same variable; the `id`s are the same. |
How to apply bitwise operator to compare a list of objects | 39,153,200 | 2 | 2016-08-25T19:21:03Z | 39,153,234 | 8 | 2016-08-25T19:22:45Z | [
"python",
"bitwise-operators"
] | Suppose I have a long list of objects (say, a list of numpy matrices of bool elements) `foo = [a, b, c]`, that I want to compare with some bitwise operator, to get something like `a | b | c`.
If I could use this bitwise operation as a function, say a `bitwiseor` function, I could simply do this with `bitwiseor(*foo)`.
However, I was not able to find whether the bitwise OR can be written in such a functional form.
Is there some handy way to handle this kind of problem? Or should I just use a loop to compare all the elements cumulatively? | Using the functional method in [`operator`](https://docs.python.org/2/library/operator.html#operator.or_) in combination with [`reduce`](https://docs.python.org/2/library/functions.html#reduce):
```
>>> import operator, functools
>>> functools.reduce(operator.or_, [1,2,3])
3
``` |
Whitespace at the end of line is not ignored by python regex | 39,153,613 | 2 | 2016-08-25T19:48:23Z | 39,153,658 | 8 | 2016-08-25T19:51:11Z | [
"python",
"regex"
] | The leading spaces are ignored but the trailing ones are not in the below regular expression code. It's just a `"Name = Value"` string but with spaces. I thought the `\s*` after the capture would ignore spaces.
```
import re
line = " Name = Peppa Pig "
match = re.search(r"\s*(Name)\s*=\s*(.+)\s*", line)
print(match.groups())
>>>('Name', 'Peppa Pig ') # Why extra spaces after Pig!
```
What am I missing? | You're getting trailing spaces because of the greedy nature of `.+`.
You can use this regex to correctly capture your value:
```
>>> re.search(r"\s*(Name)\s*=\s*(.+?)\s*$", line).groups()
('Name', 'Peppa Pig')
```
`\s*$` ensures we capture the value before the trailing whitespace at the end. |
How to get a python script to invoke "python -i" when called normally? | 39,155,928 | 18 | 2016-08-25T22:51:54Z | 39,155,981 | 19 | 2016-08-25T22:57:40Z | [
"python",
"python-interactive"
] | I have a python script that I like to run with `python -i script.py`, which runs the script and then enters interactive mode so that I can play around with the results.
Is it possible to have the script itself invoke this option, such that I can just run `python script.py` and the script will enter interactive mode after running?
Of course, I can simply add the `-i`, or if that is too much effort, I can write a shell script to invoke this. | From within `script.py`, set the [`PYTHONINSPECT`](https://docs.python.org/3/using/cmdline.html#envvar-PYTHONINSPECT) environment variable to any nonempty string. Python will recheck this environment variable at the end of the program and enter interactive mode.
```
import os
# This can be placed at top or bottom of the script, unlike code.interact
os.environ['PYTHONINSPECT'] = 'TRUE'
``` |
How to get a python script to invoke "python -i" when called normally? | 39,155,928 | 18 | 2016-08-25T22:51:54Z | 39,156,962 | 7 | 2016-08-26T01:10:06Z | [
"python",
"python-interactive"
] | I have a python script that I like to run with `python -i script.py`, which runs the script and then enters interactive mode so that I can play around with the results.
Is it possible to have the script itself invoke this option, such that I can just run `python script.py` and the script will enter interactive mode after running?
Of course, I can simply add the `-i`, or if that is too much effort, I can write a shell script to invoke this. | In addition to all the above answers, you can run the script as simply `./script.py` by making the file executable and setting the shebang line, e.g.
```
#!/usr/bin/python -i
this = "A really boring program"
```
---
If you want to use this with the `env` command in order to get the system default `python`, then you can try using a shebang like [@donkopotamus](http://stackoverflow.com/users/5249307/donkopotamus) suggested in the comments
```
#!/usr/bin/env PYTHONINSPECT=1 python
```
The success of this may depend on the version of `env` installed on your platform however. |
interactive conditional histogram bucket slicing data visualization | 39,156,545 | 12 | 2016-08-26T00:08:52Z | 39,363,715 | 7 | 2016-09-07T07:26:47Z | [
"python",
"pandas",
"data-visualization",
"seaborn",
"bokeh"
] | I have a df that looks like:
```
df.head()
Out[1]:
A B C
city0 40 12 73
city1 65 56 10
city2 77 58 71
city3 89 53 49
city4 33 98 90
```
An example df can be created by the following code:
```
df = pd.DataFrame(np.random.randint(100,size=(1000000,3)), columns=list('ABC'))
indx = ['city'+str(x) for x in range(0,1000000)]
df.index = indx
```
What I want to do is:
a) determine appropriate histogram bucket lengths for column A and assign each city to a bucket for column A
b) determine appropriate histogram bucket lengths for column B and assign each city to a bucket for column B
Maybe the resulting df looks like (or is there a better built in way in pandas?)
```
df.head()
Out[1]:
A B C Abkt Bbkt
city0 40 12 73 2 1
city1 65 56 10 4 3
city2 77 58 71 4 3
city3 89 53 49 5 3
city4 33 98 90 2 5
```
Where Abkt and Bbkt are histogram bucket identifiers:
```
1-20 = 1
21-40 = 2
41-60 = 3
61-80 = 4
81-100 = 5
```
Ultimately, I want to better understand the behavior of each city with respect to columns A, B and C and be able to answer questions like:
a) What does the distribution of Column A (or B) look like - i.e. what buckets are most/least populated.
b) Conditional on a particular slice/bucket of Column A, what does the distribution of Column B look like - i.e. what buckets are most/least populated.
c) Conditional on a particular slice/bucket of Column A and B, what does the behavior of C look like.
Ideally, I want to be able to visualize the data (heat maps, region identifiers etc). I'm a relative pandas/python newbie and don't know what is possible to develop.
If the SO community can kindly provide code examples of how I can do what I want (or a better approach if there are better pandas/numpy/scipy built in methods) I would be grateful.
As well, any pointers to resources that can help me better summarize/slice/dice my data and be able to visualize at intermediate steps as I proceed with my analysis.
**UPDATE:**
I am following some of the suggestions in the comments.
I tried:
1) `df.hist()`
```
ValueError: The first argument of bincount must be non-negative
```
2) `df[['A']].hist(bins=10,range=(0,10))`
```
array([[<matplotlib.axes._subplots.AxesSubplot object at 0x000000A2350615C0>]], dtype=object)
```
Isn't `#2` supposed to show a plot, instead of producing an object that is not rendered? I am using `jupyter notebook`.
Is there something I need to turn on / enable in `Jupyter Notebook` to render the histogram objects?
**UPDATE2:**
I solved the rendering problem by: [in Ipython notebook, Pandas is not displying the graph I try to plot.](http://stackoverflow.com/questions/10511024/in-ipython-notebook-pandas-is-not-displying-the-graph-i-try-to-plot)
**UPDATE3:**
As per suggestions from the comments, I started looking through [pandas visualization](http://pandas.pydata.org/pandas-docs/stable/visualization.html), [bokeh](http://bokeh.pydata.org/en/latest/) and [seaborn](http://stanford.edu/~mwaskom/software/seaborn/). However, I'm not sure how I can create linkages between plots.
Let's say I have 10 variables. I want to explore them but since 10 is a large number to explore at once, let's say I want to explore 5 at any given time (r,s,t,u,v).
If I want an interactive hexbin with marginal distributions plot to examine the relationship between r & s, how do I also see the distribution of t, u and v given interactive region selections/slices of r&s (polygons).
I found hexbin with marginal distribution plot here [hexbin plot](http://stanford.edu/~mwaskom/software/seaborn/examples/hexbin_marginals.html):
**But:**
1) How to make this interactive (allow selections of polygons)
2) How to link region selections of r & s to other plots, for example 3 histogram plots of t,u, and v (or any other type of plot).
This way, I can navigate through the data more rigorously and explore the relationships in depth. | In order to get the interaction effect you're looking for, you must bin all the columns you care about, together.
The cleanest way I can think of doing this is to `stack` into a single `series` then use `pd.cut`
Considering your sample `df`
[](http://i.stack.imgur.com/8IjPO.png)
```
df_ = pd.cut(df[['A', 'B']].stack(), 5, labels=list(range(5))).unstack()
df_.columns = df_.columns.to_series() + 'bkt'
pd.concat([df, df_], axis=1)
```
[](http://i.stack.imgur.com/dlCEm.png)
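To make the `stack` -> `cut` -> `unstack` pipeline concrete before scaling it up, here is a tiny sketch with made-up values (five buckets labelled 0-4). Both columns end up binned against a shared set of edges because `cut` sees the stacked values together:

```python
import pandas as pd

# Toy frame with made-up values; cut() bins A and B against shared edges.
df = pd.DataFrame({'A': [5, 55, 95], 'B': [15, 45, 85]})
df_ = pd.cut(df[['A', 'B']].stack(), 5, labels=list(range(5))).unstack()
df_.columns = df_.columns.to_series() + 'bkt'
print(df_)  # row 0 lands in bucket 0, row 2 in bucket 4, for both columns
```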
---
Let's build a better example and look at a visualization using `seaborn`
```
df = pd.DataFrame(dict(A=(np.random.randn(10000) * 100 + 20).astype(int),
B=(np.random.randn(10000) * 100 - 20).astype(int)))
import seaborn as sns
df.index = df.index.to_series().astype(str).radd('city')
df_ = pd.cut(df[['A', 'B']].stack(), 30, labels=list(range(30))).unstack()
df_.columns = df_.columns.to_series() + 'bkt'
sns.jointplot(x=df_.Abkt, y=df_.Bbkt, kind="scatter", color="k")
```
[](http://i.stack.imgur.com/Q2XnW.png)
---
Or how about some data with some correlation
```
mean, cov = [0, 1], [(1, .5), (.5, 1)]
data = np.random.multivariate_normal(mean, cov, 100000)
df = pd.DataFrame(data, columns=["A", "B"])
df.index = df.index.to_series().astype(str).radd('city')
df_ = pd.cut(df[['A', 'B']].stack(), 30, labels=list(range(30))).unstack()
df_.columns = df_.columns.to_series() + 'bkt'
sns.jointplot(x=df_.Abkt, y=df_.Bbkt, kind="scatter", color="k")
```
[](http://i.stack.imgur.com/YMo9M.png)
---
### Interactive `bokeh`
Without getting too complicated
```
from bokeh.io import show, output_notebook, output_file
from bokeh.plotting import figure
from bokeh.layouts import row, column
from bokeh.models import ColumnDataSource, Select, CustomJS
output_notebook()
# generate random data
flips = np.random.choice((1, -1), (5, 5))
flips = np.tril(flips, -1) + np.triu(flips, 1) + np.eye(flips.shape[0])
half = np.ones((5, 5)) / 2
cov = (half + np.diag(np.diag(half))) * flips
mean = np.zeros(5)
data = np.random.multivariate_normal(mean, cov, 10000)
df = pd.DataFrame(data, columns=list('ABCDE'))
df.index = df.index.to_series().astype(str).radd('city')
# Stack and cut to get dependent relationships
b = 20
df_ = pd.cut(df.stack(), b, labels=list(range(b))).unstack()
# assign default columns x and y. These will be the columns I set bokeh to read
df_[['x', 'y']] = df_.loc[:, ['A', 'B']]
source = ColumnDataSource(data=df_)
tools = 'box_select,pan,box_zoom,wheel_zoom,reset,resize,save'
p = figure(plot_width=600, plot_height=300)
p.circle('x', 'y', source=source, fill_color='olive', line_color='black', alpha=.5)
def gcb(like, n):
    code = """
    var data = source.get('data');
    var f = cb_obj.get('value');
    data['{0}{1}'] = data[f];
    source.trigger('change');
    """
    return CustomJS(args=dict(source=source), code=code.format(like, n))
xcb = CustomJS(
args=dict(source=source),
code="""
var data = source.get('data');
var colm = cb_obj.get('value');
data['x'] = data[colm];
source.trigger('change');
"""
)
ycb = CustomJS(
args=dict(source=source),
code="""
var data = source.get('data');
var colm = cb_obj.get('value');
data['y'] = data[colm];
source.trigger('change');
"""
)
options = list('ABCDE')
x_select = Select(options=options, callback=xcb, value='A')
y_select = Select(options=options, callback=ycb, value='B')
show(column(p, row(x_select, y_select)))
```
[](http://i.stack.imgur.com/RTu1E.png) |
String compression using repeated chars count | 39,157,420 | 2 | 2016-08-26T02:22:17Z | 39,157,493 | 7 | 2016-08-26T02:31:53Z | [
"python"
] | I'm going through Cracking the Code and there is this question where they ask to write a method for string compression, like so:
```
aabbccccaa
```
Would become:
```
a2b1c4a2
```
I came up with this:
```
''.join(y+str.count(y) for y in set(str))
```
But my output was:
```
a5c4b1
```
Could someone point me in the clean direction?
Sorry for bad edits, I'm on a cellphone | You could use [`groupby`](https://docs.python.org/3/library/itertools.html#itertools.groupby) to do the work for you:
```
>>> from itertools import groupby
>>> s = 'aabbccccaa'
>>> ''.join(k + str(sum(1 for x in g)) for k, g in groupby(s))
'a2b2c4a2'
``` |
Simplest example for streaming audio with Alexa | 39,157,599 | 7 | 2016-08-26T02:49:19Z | 39,283,692 | 7 | 2016-09-02T03:03:11Z | [
"python",
"aws-lambda",
"alexa-skills-kit"
] | I'm trying to get the new streaming audio API going. Is the following response valid? I'm getting a "there was a problem with the skill" error when I test it on my device.
Here is the code for my AWS-lambda function:
```
def lambda_handler(event, context):
    return {
        "response": {
            "directives": [
                {
                    "type": "AudioPlayer.Play",
                    "playBehavior": "REPLACE_ALL",
                    "audioItem": {
                        "stream": {
                            "token": "12345",
                            "url": "http://emit-media-production.s3.amazonaws.com/pbs/the-afterglow/2016/08/24/1700/201608241700_the-afterglow_64.m4a",
                            "offsetInMilliseconds": 0
                        }
                    }
                }
            ],
            "shouldEndSession": True
        }
    }
``` | The following code worked for me:
```
def lambda_handler(event, context):
    return {
        "response": {
            "directives": [
                {
                    "type": "AudioPlayer.Play",
                    "playBehavior": "REPLACE_ALL",
                    "audioItem": {
                        "stream": {
                            "token": "12345",
                            "url": "https://emit-media-production.s3.amazonaws.com/pbs/the-afterglow/2016/08/24/1700/201608241700_the-afterglow_64.m4a",
                            "offsetInMilliseconds": 0
                        }
                    }
                }
            ],
            "shouldEndSession": True
        }
    }
```
The only difference is that the URL is https rather than http.
Don't be put off if it doesn't work in the skill simulator. That hasn't been upgraded yet to work with streaming audio. You won't even see your directives there. But it should work when used with your device. |
Is there a direct way to ignore parts of a python datetime object? | 39,159,092 | 3 | 2016-08-26T05:47:26Z | 39,159,198 | 8 | 2016-08-26T05:56:26Z | [
"python",
"datetime"
] | I'm trying to compare two datetime objects, but ignoring the year. For example, given
```
a = datetime.datetime(2015, 7, 4, 1, 1, 1)
b = datetime.datetime(2016, 7, 4, 1, 1, 1)
```
I want a == b to return True by ignoring the year. To do a comparison like this, I imagine I could just create new datetime objects with the same year like:
```
c = datetime.datetime(2014,a.month,a.day,a.hour,a.minute,a.second)
d = datetime.datetime(2014,b.month,b.day,b.hour,b.minute,b.second)
```
However, this doesn't seem very pythonic. Is there a more direct method to do a comparison like what I'm asking?
I'm using python 3.4. | ```
((a.month, a.day, a.hour, a.minute, a.second) ==
 (b.month, b.day, b.hour, b.minute, b.second))
```
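A quick runnable check of the tuple comparison, using the question's example values:

```python
import datetime

a = datetime.datetime(2015, 7, 4, 1, 1, 1)
b = datetime.datetime(2016, 7, 4, 1, 1, 1)

# Compare everything except the year.
same_except_year = ((a.month, a.day, a.hour, a.minute, a.second) ==
                    (b.month, b.day, b.hour, b.minute, b.second))
print(same_except_year)  # True: only the years differ
```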
A less explicit method is to compare the corresponding elements in the time tuples:
```
a.timetuple()[1:6] == b.timetuple()[1:6]
``` |
pandas: how to do multiple groupby-apply operations | 39,159,475 | 6 | 2016-08-26T06:14:58Z | 39,159,561 | 7 | 2016-08-26T06:20:40Z | [
"python",
"pandas",
"dataframe",
"group-by"
] | I have more experience with Râs `data.table`, but am trying to learn `pandas`. In `data.table`, I can do something like this:
```
> head(dt_m)
event_id device_id longitude latitude time_ category
1: 1004583 -100015673884079572 NA NA 1970-01-01 06:34:52 1 free
2: 1004583 -100015673884079572 NA NA 1970-01-01 06:34:52 1 free
3: 1004583 -100015673884079572 NA NA 1970-01-01 06:34:52 1 free
4: 1004583 -100015673884079572 NA NA 1970-01-01 06:34:52 1 free
5: 1004583 -100015673884079572 NA NA 1970-01-01 06:34:52 1 free
6: 1004583 -100015673884079572 NA NA 1970-01-01 06:34:52 1 free
app_id is_active
1: -5305696816021977482 0
2: -7164737313972860089 0
3: -8504475857937456387 0
4: -8807740666788515175 0
5: 5302560163370202064 0
6: 5521284031585796822 0
dt_m_summary <- dt_m[,
.(
mean_active = mean(is_active, na.rm = TRUE)
, median_lat = median(latitude, na.rm = TRUE)
, median_lon = median(longitude, na.rm = TRUE)
, mean_time = mean(time_)
, new_col = your_function(latitude, longitude, time_)
)
, by = list(device_id, category)
]
```
The new columns (`mean_active` through `new_col`), as well as `device_id` and `category`, will appear in `dt_m_summary`. I could also do a similar `by` transformation in the original table if I want a new column that has the results of the groupby-apply:
`dt_m[, mean_active := mean(is_active, na.rm = TRUE), by = list(device_id, category)]`
(in case I wanted, e.g., to select rows where `mean_active` is greater than some threshold, or do something else).
I know there is `groupby` in `pandas`, but I haven't found a way of doing the sort of easy transformations as above. The best I could think of was doing a series of groupby-applys and then merging the results into one `dataframe`, but that seems very clunky. Is there a better way of doing that? | IIUC, use `groupby` and `agg`. See [docs](http://pandas.pydata.org/pandas-docs/stable/groupby.html#applying-multiple-functions-at-once) for more information.
```
df = pd.DataFrame(np.random.rand(10, 2),
pd.MultiIndex.from_product([list('XY'), range(5)]),
list('AB'))
df
```
[](http://i.stack.imgur.com/sqwqv.png)
```
df.groupby(level=0).agg(['sum', 'count', 'std'])
```
[](http://i.stack.imgur.com/NGOGJ.png)
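As a side note on the question's data.table `:=` idiom (my addition, not part of the example above): the usual pandas analogue for writing a grouped aggregate back onto every row is `transform`. A minimal sketch with made-up data:

```python
import pandas as pd

# Toy stand-in for dt_m; transform broadcasts each group's mean
# back onto the rows of that group, like `mean_active :=` in data.table.
d = pd.DataFrame({'device_id': [1, 1, 2, 2],
                  'is_active': [0, 1, 1, 1]})
d['mean_active'] = d.groupby('device_id')['is_active'].transform('mean')
print(d['mean_active'].tolist())  # [0.5, 0.5, 1.0, 1.0]
```

The result can then be filtered directly, e.g. `d[d['mean_active'] > 0.6]`.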
---
A more tailored example would be
```
# level=0 means group by the first level in the index
# if there is a specific column you want to group by
# use groupby('specific column name')
df.groupby(level=0).agg({'A': ['sum', 'std'],
'B': {'my_function': lambda x: x.sum() ** 2}})
```
[](http://i.stack.imgur.com/HPtvm.png)
***Note*** the `dict` passed to the `agg` method has keys `'A'` and `'B'`. This means, run the functions `['sum', 'std']` for `'A'` and `lambda x: x.sum() ** 2` for `'B'` (and label it `'my_function'`)
***Note 2*** pertaining to your `new_column`. `agg` requires that the passed functions reduce columns to scalars. You're better off adding the new column ahead of the `groupby`/`agg` |
Better way to swap elements in a list? | 39,167,057 | 47 | 2016-08-26T13:05:46Z | 39,167,227 | 29 | 2016-08-26T13:13:46Z | [
"python"
] | I have a bunch of lists that look like this one:
```
l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```
I want to swap elements as follows:
```
final_l = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]
```
The size of the lists may vary, but they will always contain an even number of elements.
I'm fairly new to Python and am currently doing it like this:
```
l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
final_l = []
for i in range(0, len(l)/2):
    final_l.append(l[2*i+1])
    final_l.append(l[2*i])
```
I know this isn't really [Pythonic](https://en.wiktionary.org/wiki/Pythonic#Adjective) and would like to use something more efficient. Maybe a list comprehension? | Here is a single list comprehension that does the trick:
```
In [1]: l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
In [2]: [l[i^1] for i in range(len(l))]
Out[2]: [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]
```
The key to understanding it is the following demonstration of how it permutes the list indices:
```
In [3]: [i^1 for i in range(10)]
Out[3]: [1, 0, 3, 2, 5, 4, 7, 6, 9, 8]
```
The `^` is the [exclusive or](https://en.wikipedia.org/wiki/Exclusive_or) operator. All that `i^1` does is flip the least-significant bit of `i`, effectively swapping 0 with 1, 2 with 3 and so on. |
Better way to swap elements in a list? | 39,167,057 | 47 | 2016-08-26T13:05:46Z | 39,167,243 | 19 | 2016-08-26T13:14:37Z | [
"python"
] | I have a bunch of lists that look like this one:
```
l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```
I want to swap elements as follows:
```
final_l = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]
```
The size of the lists may vary, but they will always contain an even number of elements.
I'm fairly new to Python and am currently doing it like this:
```
l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
final_l = []
for i in range(0, len(l)/2):
    final_l.append(l[2*i+1])
    final_l.append(l[2*i])
```
I know this isn't really [Pythonic](https://en.wiktionary.org/wiki/Pythonic#Adjective) and would like to use something more efficient. Maybe a list comprehension? | You can use the [pairwise iteration](http://stackoverflow.com/questions/5389507/iterating-over-every-two-elements-in-a-list) and chaining to [flatten the list](http://stackoverflow.com/questions/952914/making-a-flat-list-out-of-list-of-lists-in-python):
```
>>> from itertools import chain
>>>
>>> list(chain(*zip(l[1::2], l[0::2])))
[2, 1, 4, 3, 6, 5, 8, 7, 10, 9]
```
Or, you can use the [`itertools.chain.from_iterable()`](https://docs.python.org/2/library/itertools.html#itertools.chain.from_iterable) to avoid the extra unpacking:
```
>>> list(chain.from_iterable(zip(l[1::2], l[0::2])))
[2, 1, 4, 3, 6, 5, 8, 7, 10, 9]
``` |
Better way to swap elements in a list? | 39,167,057 | 47 | 2016-08-26T13:05:46Z | 39,167,545 | 93 | 2016-08-26T13:29:52Z | [
"python"
] | I have a bunch of lists that look like this one:
```
l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```
I want to swap elements as follows:
```
final_l = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]
```
The size of the lists may vary, but they will always contain an even number of elements.
I'm fairly new to Python and am currently doing it like this:
```
l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
final_l = []
for i in range(0, len(l)/2):
    final_l.append(l[2*i+1])
    final_l.append(l[2*i])
```
I know this isn't really [Pythonic](https://en.wiktionary.org/wiki/Pythonic#Adjective) and would like to use something more efficient. Maybe a list comprehension? | No need for complicated logic, simply rearrange the list with slicing and step:
```
In [1]: l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
In [2]: l[::2], l[1::2] = l[1::2], l[::2]
In [3]: l
Out[3]: [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]
```
---
## TLDR;
**Edited with explanation**
I believe most viewers are already familiar with list slicing and multiple assignment. In case you aren't, I will try my best to explain what's going on (hope I do not make it worse).
To understand list slicing, [here](http://stackoverflow.com/questions/509211/explain-pythons-slice-notation) already has an excellent answer and explanation of list slice notation.
Simply put:
```
a[start:end] # items start through end-1
a[start:] # items start through the rest of the array
a[:end] # items from the beginning through end-1
a[:] # a copy of the whole array
There is also the step value, which can be used with any of the above:
a[start:end:step] # start through not past end, by step
```
Let's look at OP's requirements:
```
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10] # list l
^ ^ ^ ^ ^ ^ ^ ^ ^ ^
0 1 2 3 4 5 6 7 8 9 # respective index of the elements
l[0] l[2] l[4] l[6] l[8] # first tier : start=0, step=2
l[1] l[3] l[5] l[7] l[9] # second tier: start=1, step=2
-----------------------------------------------------------------------
l[1] l[3] l[5] l[7] l[9]
l[0] l[2] l[4] l[6] l[8] # desired output
```
First tier will be: `l[::2] = [1, 3, 5, 7, 9]`
Second tier will be: `l[1::2] = [2, 4, 6, 8, 10]`
As we want to re-assign `first = second` & `second = first`, we can use multiple assignment, and update the original list in place:
```
first , second = second , first
```
that is:
```
l[::2], l[1::2] = l[1::2], l[::2]
```
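A runnable check of the swap (my addition): the two slices must have equal length, so this in-place trick relies on the question's guarantee that the lists always contain an even number of elements:

```python
l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
l[::2], l[1::2] = l[1::2], l[::2]
print(l)  # [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]

# With an odd-length list the slice lengths no longer match:
odd = [1, 2, 3]
try:
    odd[::2], odd[1::2] = odd[1::2], odd[::2]
    failed = False
except ValueError:
    failed = True
print(failed)  # True: the extended-slice assignment raises ValueError
```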
As a side note, to get a new list without altering the original `l`, we can assign a copy of `l` and perform the above on it, that is:
```
n = l[:] # assign n as a copy of l (without [:], n still points to l)
n[::2], n[1::2] = n[1::2], n[::2]
```
Hopefully I do not confuse any of you with this added explanation. If it does, please help update mine and make it better :-) |
Better way to swap elements in a list? | 39,167,057 | 47 | 2016-08-26T13:05:46Z | 39,168,029 | 7 | 2016-08-26T13:54:30Z | [
"python"
] | I have a bunch of lists that look like this one:
```
l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```
I want to swap elements as follows:
```
final_l = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]
```
The size of the lists may vary, but they will always contain an even number of elements.
I'm fairly new to Python and am currently doing it like this:
```
l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
final_l = []
for i in range(0, len(l)/2):
    final_l.append(l[2*i+1])
    final_l.append(l[2*i])
```
I know this isn't really [Pythonic](https://en.wiktionary.org/wiki/Pythonic#Adjective) and would like to use something more efficient. Maybe a list comprehension? | ## A benchmark between top answers:
Python 2.7:
```
('inp1 ->', 15.302665948867798) # NPE's answer
('inp2a ->', 10.626379013061523) # alecxe's answer with chain
('inp2b ->', 9.739919185638428) # alecxe's answer with chain.from_iterable
('inp3 ->', 2.6654279232025146) # Anzel's answer
```
Python 3.4:
```
inp1 -> 7.913498195000102
inp2a -> 9.680125927000518
inp2b -> 4.728151862000232
inp3 -> 3.1804273489997286
```
If you are curious about the performance differences between Python 2 and 3, here are the reasons:
As you can see, @NPE's answer (`inp1`) performs much better in Python 3.4; the reason is that in Python 3.X `range()` is a smart, lazy object and doesn't keep all the items in that range in memory like a list.
> In many ways the object returned by `range()` behaves as if it is a list, but in fact it isnât. It is an object which returns the successive items of the desired sequence when you iterate over it, but it doesnât really make the list, thus saving space.
And that's why in Python 3 it doesn't return a list when you slice the range object.
```
# python2.7
>>> range(10)[2:5]
[2, 3, 4]
# python 3.X
>>> range(10)[2:5]
range(2, 5)
```
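A quick way to see this laziness for yourself (exact byte counts vary by version, so only the orders of magnitude matter):

```python
import sys

lazy = range(10**6)
lazy_size = sys.getsizeof(lazy)        # small, constant-size object
list_size = sys.getsizeof(list(lazy))  # grows with the number of items
print(lazy_size, list_size)
```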
The second significant change is the performance improvement of the third approach (`inp3`). As you can see, the difference between it and the last solution has decreased to ~2 sec (from ~7 sec). The reason is the `zip()` function, which in Python 3.X returns an iterator that produces the items on demand. Since `chain.from_iterable()` needs to iterate over the items once again anyway, it's completely redundant to materialize them beforehand (which is what `zip` does in Python 2).
Code:
```
from timeit import timeit
inp1 = """
[l[i^1] for i in range(len(l))]
"""
inp2a = """
list(chain(*zip(l[1::2], l[0::2])))
"""
inp2b = """
list(chain.from_iterable(zip(l[1::2], l[0::2])))
"""
inp3 = """
l[::2], l[1::2] = l[1::2], l[::2]
"""
lst = list(range(100000))
print('inp1 ->', timeit(stmt=inp1,
number=1000,
setup="l={}".format(lst)))
print('inp2a ->', timeit(stmt=inp2a,
number=1000,
setup="l={}; from itertools import chain".format(lst)))
print('inp2b ->', timeit(stmt=inp2b,
number=1000,
setup="l={}; from itertools import chain".format(lst)))
print('inp3 ->', timeit(stmt=inp3,
number=1000,
setup="l={}".format(lst)))
``` |
Can a line of Python code know its indentation nesting level? | 39,172,306 | 139 | 2016-08-26T18:07:24Z | 39,172,520 | 9 | 2016-08-26T18:21:24Z | [
"python",
"reflection",
"metaprogramming",
"indentation",
"tokenize"
] | From something like this:
```
print(get_indentation_level())
    print(get_indentation_level())
        print(get_indentation_level())
```
I would like to get something like this:
```
1
2
3
```
Can the code read itself in this way?
All I want is the output from the more nested parts of the code to be more nested. In the same way that this makes code easier to read, it would make the output easier to read.
Of course I could implement this manually, using e.g. `.format()`, but what I had in mind was a custom print function which would `print(i*' ' + string)` where `i` is the indentation level. This would be a quick way to make readable output on my terminal.
Is there a better way to do this which avoids painstaking manual formatting? | You can use the current frame's `f_lineno` attribute (the frame comes from `sys._getframe()`) to get the line number. Then, to find the indentation level, find the previous line with zero indentation; subtracting that line's number from the current line number gives the indentation depth:
```
import sys

current_frame = sys._getframe(0)

def get_ind_num():
    with open(__file__) as f:
        lines = f.readlines()
    current_line_no = current_frame.f_lineno
    to_current = lines[:current_line_no]
    previous_zero_ind = len(to_current) - next(
        i for i, line in enumerate(to_current[::-1]) if not line[0].isspace())
    return current_line_no - previous_zero_ind
```
Demo:
```
if True:
    print(get_ind_num())
    if True:
        print(get_ind_num())
        if True:
            print(get_ind_num())
            if True: print(get_ind_num())
# Output
1
3
5
6
```
If you want the indentation level based on the preceding lines ending with `:`, you can do it with a small change:
```
def get_ind_num():
    with open(__file__) as f:
        lines = f.readlines()
    current_line_no = current_frame.f_lineno
    to_current = lines[:current_line_no]
    previous_zero_ind = len(to_current) - next(
        i for i, line in enumerate(to_current[::-1]) if not line[0].isspace())
    return sum(1 for line in lines[previous_zero_ind - 1:current_line_no]
               if line.strip().endswith(':'))
```
Demo:
```
if True:
    print(get_ind_num())
    if True:
        print(get_ind_num())
        if True:
            print(get_ind_num())
            if True: print(get_ind_num())
# Output
1
2
3
3
```
As an alternative, here is a function that counts the leading whitespace characters of the current line:
```
import sys
from itertools import takewhile

current_frame = sys._getframe(0)

def get_ind_num():
    with open(__file__) as f:
        lines = f.readlines()
    return sum(1 for _ in takewhile(str.isspace, lines[current_frame.f_lineno - 1]))
``` |
Can a line of Python code know its indentation nesting level? | 39,172,306 | 139 | 2016-08-26T18:07:24Z | 39,172,552 | 22 | 2016-08-26T18:23:18Z | [
"python",
"reflection",
"metaprogramming",
"indentation",
"tokenize"
] | From something like this:
```
print(get_indentation_level())
print(get_indentation_level())
print(get_indentation_level())
```
I would like to get something like this:
```
1
2
3
```
Can the code read itself in this way?
All I want is the output from the more nested parts of the code to be more nested. In the same way that this makes code easier to read, it would make the output easier to read.
Of course I could implement this manually, using e.g. `.format()`, but what I had in mind was a custom print function which would `print(i*' ' + string)` where `i` is the indentation level. This would be a quick way to make readable output on my terminal.
Is there a better way to do this which avoids painstaking manual formatting? | Yeah, that's definitely possible, here's a working example:
```
import inspect

def get_indentation_level():
    callerframerecord = inspect.stack()[1]
    frame = callerframerecord[0]
    info = inspect.getframeinfo(frame)
    cc = info.code_context[0]
    return len(cc) - len(cc.lstrip())

if 1:
    print(get_indentation_level())
    if 1:
        print(get_indentation_level())
        if 1:
            print(get_indentation_level())
``` |
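Since `inspect` needs the source on disk to supply `code_context`, a quick way to try the snippet above is to run it from a temporary file; this also shows that it counts leading whitespace characters rather than logical nesting levels (a hedged sketch, not part of the original answer):

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# Write the inspect-based snippet to a real file, because code_context
# is only available when the interpreter can read the source from disk.
src = textwrap.dedent("""\
    import inspect

    def get_indentation_level():
        cc = inspect.getframeinfo(inspect.stack()[1][0]).code_context[0]
        return len(cc) - len(cc.lstrip())

    if 1:
        print(get_indentation_level())
        if 1:
            print(get_indentation_level())
""")
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(src)
    path = f.name
out = subprocess.run([sys.executable, path],
                     capture_output=True, text=True).stdout
os.remove(path)
print(out)  # prints 4 then 8 -- leading spaces, not nesting depth
```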
Can a line of Python code know its indentation nesting level? | 39,172,306 | 139 | 2016-08-26T18:07:24Z | 39,172,845 | 104 | 2016-08-26T18:41:27Z | [
"python",
"reflection",
"metaprogramming",
"indentation",
"tokenize"
] | From something like this:
```
print(get_indentation_level())
print(get_indentation_level())
print(get_indentation_level())
```
I would like to get something like this:
```
1
2
3
```
Can the code read itself in this way?
All I want is the output from the more nested parts of the code to be more nested. In the same way that this makes code easier to read, it would make the output easier to read.
Of course I could implement this manually, using e.g. `.format()`, but what I had in mind was a custom print function which would `print(i*' ' + string)` where `i` is the indentation level. This would be a quick way to make readable output on my terminal.
Is there a better way to do this which avoids painstaking manual formatting? | If you want indentation in terms of nesting level rather than spaces and tabs, things get tricky. For example, in the following code:
```
if True:
    print(
get_nesting_level())
```
the call to `get_nesting_level` is actually nested one level deep, despite the fact that there is no leading whitespace on the line of the `get_nesting_level` call. Meanwhile, in the following code:
```
print(1,
      2,
      get_nesting_level())
```
the call to `get_nesting_level` is nested zero levels deep, despite the presence of leading whitespace on its line.
In the following code:
```
if True:
  if True:
     print(get_nesting_level())

if True:
     print(get_nesting_level())
```
the two calls to `get_nesting_level` are at different nesting levels, despite the fact that the leading whitespace is identical.
In the following code:
```
if True: print(get_nesting_level())
```
is that nested zero levels, or one? In terms of `INDENT` and `DEDENT` tokens in the formal grammar, it's zero levels deep, but you might not feel the same way.
---
If you want to do this, you're going to have to tokenize the whole file up to the point of the call and count `INDENT` and `DEDENT` tokens. The [`tokenize`](https://docs.python.org/2/library/tokenize.html) module would be very useful for such a function:
```
import inspect
import tokenize

def get_nesting_level():
    caller_frame = inspect.currentframe().f_back
    filename, caller_lineno, _, _, _ = inspect.getframeinfo(caller_frame)
    with open(filename) as f:
        indentation_level = 0
        for token_record in tokenize.generate_tokens(f.readline):
            token_type, _, (token_lineno, _), _, _ = token_record
            if token_lineno > caller_lineno:
                break
            elif token_type == tokenize.INDENT:
                indentation_level += 1
            elif token_type == tokenize.DEDENT:
                indentation_level -= 1
        return indentation_level
``` |
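The same `INDENT`/`DEDENT` counting can be tried on an in-memory source string via `io.StringIO`, which avoids needing a file on disk (a minimal sketch; the source string and variable names are illustrative):

```python
import io
import tokenize

src = (
    "if True:\n"
    "    x = 1\n"
    "    if True:\n"
    "        y = 2\n"
)

# Walk the token stream, tracking the nesting level and recording the
# level in effect at the first token of each line.
level = 0
levels_at_line = {}
for tok in tokenize.generate_tokens(io.StringIO(src).readline):
    if tok.type == tokenize.INDENT:
        level += 1
    elif tok.type == tokenize.DEDENT:
        level -= 1
    levels_at_line.setdefault(tok.start[0], level)

print(levels_at_line[2])  # 1 -- "x = 1" is one level deep
print(levels_at_line[4])  # 2 -- "y = 2" is two levels deep
```

Note that the level returns to 0 once the trailing `DEDENT` tokens are consumed, which is why the function above stops tokenizing at the caller's line.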
How `[System.Console]::OutputEncoding/InputEncoding` with Python? | 39,183,614 | 15 | 2016-08-27T16:59:53Z | 39,428,999 | 8 | 2016-09-10T17:58:44Z | [
"python",
".net",
"shell",
"powershell"
] | Under Powershell v5, Windows 8.1, Python 3. Why these fails and how to fix?
```
[system.console]::InputEncoding = [System.Text.Encoding]::UTF8;
[system.console]::OutputEncoding = [System.Text.Encoding]::UTF8;
chcp;
"import sys
print(sys.stdout.encoding)
print(sys.stdin.encoding)
sys.stdout.write(sys.stdin.readline())
" |
sc test.py -Encoding utf8;
[char]0x0422+[char]0x0415+[char]0x0421+[char]0x0422+"`n" | py -3 test.py
```
prints:
```
Active code page: 65001
cp65001
cp1251
п»Ñ????
``` | You are piping data into Python; at that point Python's `stdin` is no longer attached to a TTY (your console) and won't guess at what the encoding might be. Instead, the default system locale is used; on your system that's cp1251 (the Windows Latin-1-based codepage).
Set the [`PYTHONIOENCODING` environment variable](https://docs.python.org/3/using/cmdline.html#envvar-PYTHONIOENCODING) to override:
> `PYTHONIOENCODING`
> If this is set before running the interpreter, it overrides the encoding used for stdin/stdout/stderr, in the syntax `encodingname:errorhandler`. Both the `encodingname` and the `:errorhandler` parts are optional and have the same meaning as in `str.encode()`.
PowerShell doesn't appear to support per-command-line environment variables the way UNIX shells do; the easiest is to just set the variable first:
```
Set-Item Env:PYTHONIOENCODING "UTF-8"
```
or even
```
Set-Item Env:PYTHONIOENCODING "cp65001"
```
as the Windows UTF-8 codepage is apparently not *quite* UTF-8, depending on the Windows version and on whether or not pipe redirection is used.
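You can watch the variable take effect from Python itself; a hedged sketch (the same mechanism applies when the child process is launched from PowerShell):

```python
import os
import subprocess
import sys

# With stdin/stdout attached to pipes (not a TTY), a child Python would
# normally fall back to the locale encoding -- unless PYTHONIOENCODING
# overrides it.
env = dict(os.environ, PYTHONIOENCODING="utf-8")
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.stdin.encoding)"],
    input=b"", capture_output=True, env=env,
).stdout.decode().strip()
print(out)  # utf-8
```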