title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
Refactoring "x if x is not None else y" to "x or y" | 34,993,092 | 3 | 2016-01-25T12:50:58Z | 34,993,404 | 7 | 2016-01-25T13:06:46Z | [
"python",
"refactoring",
"default-value"
] | I came across a piece of Python code which essentially looks like this:
```
def bar(b):
return b*2
class MyClass(object):
def __init__(self, a):
self.a = a
self.b = 10
def foo(self, b=None):
return bar(b if b is not None else self.b)
```
So the class has `b` as a fixed attribute, which is used as the default input for the method `foo` if it is called without an argument.
Please assume that `b` in this case is expected to be a float.
Please ignore the unused `self.a`; it is used elsewhere - I just wanted to make sure you see that there are attributes settable by the constructor.
The usage would look like this:
```
c = MyClass(2)
c.foo(3) # returns 3*2 = 6
c.foo() # returns 10*2 = 20
```
In this case, I was wondering about the `foo`-method. Is this a good way to implement `self.b` as default value for `b`?
Is one of the following two suggestions of mine more or less desirable? If so, why?
```
def foo(self, b=None):
return bar(b or self.b)
```
or
```
def foo(self, b=self.b):
return bar(b)
``` | Stick with the first option:
```
def foo(self, b=None):
return bar(b if b is not None else self.b)
```
If you use the second option:
```
def foo(self, b=None):
return bar(b or self.b)
```
then you will alter the behaviour of `x.foo(0)`, because zero is also falsey.
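The difference is easy to demonstrate in isolation (a small sketch; the helper names are made up here):

```python
def pick_or(b, default):
    # `or` falls back whenever b is falsey -- including 0, '', [] and None
    return b or default

def pick_is_none(b, default):
    # falls back only when b is actually None
    return b if b is not None else default

print(pick_or(0, 10))       # 10 -- the legitimate zero is silently replaced
print(pick_is_none(0, 10))  # 0  -- the zero is kept
```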
The third option is not an option:
```
def foo(self, b=self.b):
return bar(b)
```
`self` does not exist when the method `foo()` is defined, so you cannot use `self.b` as a default argument. The definition will simply fail; or worse, it will try and use whatever is the current value of `self` when the definition is executed. |
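A minimal sketch confirming that default values are evaluated at definition time, when no instance (and hence no `self`) exists yet:

```python
# Defining the method raises immediately, because the default expression
# `self.b` is evaluated while the class body runs -- `self` is not bound there.
try:
    class MyClass(object):
        def foo(self, b=self.b):
            return b
except NameError as e:
    print(e)  # name 'self' is not defined
```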
Fairly Basic String Expansion in Python? | 35,003,123 | 3 | 2016-01-25T21:53:21Z | 35,003,251 | 7 | 2016-01-25T22:01:23Z | [
"python",
"string",
"function",
"python-3.x"
] | I am trying to create a function that will take a 'compressed' string, for example
```
'a12b3c'
```
and return its 'expanded' form, which for this example would be
```
'aaaaaaaaaaaabbbc'
```
Each character is supposed to be repeated as many times as the following number or just one time if no number follows the character.
I have been able to create a function that will do this if the number values in the 'compressed' string are only single-digit numbers, so my attempt works for strings like
```
'a3b2c6'
```
but I can't seem to find a way to take into account the case where the number values are more than a single digit long. | Use a regular expression:
```
>>> import re
>>> compressed = "a12b3c"
>>> expanded = ""
>>> for char, count in re.findall(r'(\w)(\d+)?', compressed):
... count = int(count or 1)
... expanded += char*count
...
>>> expanded
'aaaaaaaaaaaabbbc'
``` |
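For reuse, the loop above can be folded into a small function (a sketch; the name `expand` is made up here):

```python
import re

def expand(compressed):
    # findall yields (char, count) pairs; count is '' when no digits follow
    return ''.join(char * int(count or 1)
                   for char, count in re.findall(r'(\w)(\d+)?', compressed))

print(expand('a12b3c'))  # 'aaaaaaaaaaaabbbc'
print(expand('a3b2c6'))  # 'aaabbcccccc'
```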
Why does range(0) == range(2, 2, 2) equal True in Python 3? | 35,004,162 | 43 | 2016-01-25T23:04:01Z | 35,004,173 | 61 | 2016-01-25T23:04:57Z | [
"python",
"python-3.x",
"range",
"identity"
] | Why do `range` objects which are initialized with different values compare equal to one another in Python 3 (this doesn't happen in Python 2)?
When I execute the following commands in my interpreter:
```
>>> r1 = range(0)
>>> r2 = range(2, 2, 2)
>>> r1 == r2
True
>>>
```
The result is `True`. Why is this so? Why are two different `range` objects with different parameter values treated as equal? | ### The `range` objects are special:
Python will compare **[`range`](https://docs.python.org/3.5/library/stdtypes.html#ranges)** objects as *[Sequences](https://docs.python.org/3.5/library/stdtypes.html#sequence-types-list-tuple-range)*. What that essentially means is that *the comparison doesn't evaluate **how** they represent a given sequence but rather **what** they represent.*
The fact that the `start`, `stop` and `step` parameters are completely different makes no difference here because *they all represent an empty list when expanded*:
For example, the first `range` object:
```
list(range(0)) # []
```
and the second `range` object:
```
list(range(2, 2, 2)) # []
```
*Both represent an empty list* and since two empty lists compare equal (`True`) so will the `range` objects that *represent* them.
As a result, you can have completely different *looking* `range` objects; if they represent the same sequence they will *compare* equal:
```
range(1, 5, 100) == range(1, 30, 100)
```
Both represent a list with a single element `[1]` so these two will also compare equal.
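A few quick checks of this sequence-based equality (Python 3 only):

```python
# Ranges are equal whenever their expanded sequences are equal,
# regardless of the start/stop/step parameters used to build them.
assert range(0) == range(2, 2, 2)              # both empty
assert range(1, 5, 100) == range(1, 30, 100)   # both are just [1]
assert range(0, 10, 2) != range(0, 10, 3)      # different sequences
print("ranges compare as the sequences they represent")
```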
---
### No, `range` objects are *really* special:
Do note, though, that even though the comparison doesn't evaluate *how* they represent a sequence, the result of the comparison *can be computed* using **solely** the values of `start` and `step` along with the `len` of the `range` objects; this has very interesting implications for the speed of comparisons:
```
r0 = range(1, 1000000)
r1 = range(1, 1000000)
l0 = list(r0)
l1 = list(r1)
```
Ranges compare super fast:
```
%timeit r0 == r1
The slowest run took 28.82 times longer than the fastest. This could mean that an intermediate result is being cached
10000000 loops, best of 3: 160 ns per loop
```
on the other hand, the lists..
```
%timeit l0 == l1
10 loops, best of 3: 27.8 ms per loop
```
Yeah..
---
As **[@SuperBiasedMan](http://stackoverflow.com/users/4374739/superbiasedman)** noted, this only applies to the range objects in Python 3. Python 2 `range()` is a plain ol' function that returns a list, while the `2.x` [`xrange`](https://hg.python.org/cpython/file/2.7/Objects/rangeobject.c#l185) object doesn't have the comparison capabilities (*[and not only these..](http://stackoverflow.com/questions/30081275/why-is-1000000000000000-in-range1000000000000001-so-fast-in-python-3?rq=1)*) that `range` objects have in Python 3.
Look at **[@ajcr's answer](http://stackoverflow.com/a/35014301/4952130)** for quotes directly from the source code on Python 3 `range` objects. It's documented in there what the comparison between two different ranges actually entails: Simple quick operations. The `range_equals` function is utilized in the *[`range_richcompare` function](https://hg.python.org/cpython/file/tip/Objects/rangeobject.c#l475)* for `EQ` and `NE` cases and assigned to the *[`tp_richcompare` member for `PyRange_Type`](https://hg.python.org/cpython/file/tip/Objects/rangeobject.c#l730)*. |
Why does range(0) == range(2, 2, 2) equal True in Python 3? | 35,004,162 | 43 | 2016-01-25T23:04:01Z | 35,012,761 | 11 | 2016-01-26T11:27:09Z | [
"python",
"python-3.x",
"range",
"identity"
] | Why do `range` objects which are initialized with different values compare equal to one another in Python 3 (this doesn't happen in Python 2)?
When I execute the following commands in my interpreter:
```
>>> r1 = range(0)
>>> r2 = range(2, 2, 2)
>>> r1 == r2
True
>>>
```
The result is `True`. Why is this so? Why are two different `range` objects with different parameter values treated as equal? | Direct quote from [the docs](https://docs.python.org/3.5/library/stdtypes.html#ranges) (emphasis mine):
> Testing range objects for equality with == and != compares them as
> sequences. That is, **two range objects are considered equal if they
> represent the same sequence of values**. (Note that two range objects
> that compare equal might have different start, stop and step
> attributes, for example range(0) == range(2, 1, 3) or range(0, 3, 2)
> == range(0, 4, 2).)
If you compare `range`s with the "same" list, you'll get inequality, as stated in [the docs](https://docs.python.org/3.5/library/stdtypes.html#comparisons) as well:
> Objects of different types, except different numeric types, never
> compare equal.
Example:
```
>>> type(range(1))
<class 'range'>
>>> type([0])
<class 'list'>
>>> [0] == range(1)
False
>>> [0] == list(range(1))
True
```
Note that this explicitly only applies to Python 3. In Python 2, where `range` just returns a list, `range(1) == [0]` evaluates as `True`. |
Why does range(0) == range(2, 2, 2) equal True in Python 3? | 35,004,162 | 43 | 2016-01-25T23:04:01Z | 35,014,301 | 9 | 2016-01-26T12:52:27Z | [
"python",
"python-3.x",
"range",
"identity"
] | Why do `range` objects which are initialized with different values compare equal to one another in Python 3 (this doesn't happen in Python 2)?
When I execute the following commands in my interpreter:
```
>>> r1 = range(0)
>>> r2 = range(2, 2, 2)
>>> r1 == r2
True
>>>
```
The result is `True`. Why is this so? Why are two different `range` objects with different parameter values treated as equal? | To add a few additional details to the excellent answers on this page, two `range` objects `r0` and `r1` are compared [roughly as follows](https://hg.python.org/cpython/file/d8f48717b74e/Objects/rangeobject.c#l614):
```
if r0 is r1: # True if r0 and r1 are same object in memory
return True
if len(r0) != len(r1): # False if different number of elements in sequences
return False
if not len(r0): # True if r0 has no elements
return True
if r0.start != r1.start: # False if r0 and r1 have different start values
return False
if len(r0) == 1: # True if r0 has just one element
return True
return r0.step == r1.step # if we made it this far, compare step of r0 and r1
```
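As a sanity check, the pseudocode above can be run directly against the built-in comparison (a sketch; `ranges_equal` is a name invented here):

```python
def ranges_equal(r0, r1):
    # mirrors the comparison logic sketched above
    if r0 is r1:                 # same object in memory
        return True
    if len(r0) != len(r1):       # different number of elements
        return False
    if not len(r0):              # both empty
        return True
    if r0.start != r1.start:     # different first element
        return False
    if len(r0) == 1:             # single element, start already matched
        return True
    return r0.step == r1.step

cases = [(range(0), range(2, 2, 2)),
         (range(1, 5, 100), range(1, 30, 100)),
         (range(0, 10, 2), range(0, 10, 3))]
for r0, r1 in cases:
    assert ranges_equal(r0, r1) == (r0 == r1)
print("pseudocode matches the built-in comparison")
```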
The length of a `range` object is easy to calculate using the `start`, `stop` and `step` parameters. In the case where `start == stop`, for example, Python can immediately know that the length is 0. In non-trivial cases, Python can just do a [simple arithmetic calculation](https://hg.python.org/cpython/file/d8f48717b74e/Objects/rangeobject.c#l1072) using the `start`, `stop` and `step` values.
So in the case of `range(0) == range(2, 2, 2)`, Python does the following:
1. sees that `range(0)` and `range(2, 2, 2)` are different objects in memory.
2. computes the length of both objects; both lengths are 0 (because `start == stop` in both objects) so another test is needed.
3. sees that `len(range(0))` is 0. This means that `len(range(2, 2, 2))` is also 0 (the previous test for inequality failed) and so the comparison should return `True`. |
Is there a Pythonic way to iterate over an "expanded" source list? | 35,005,253 | 3 | 2016-01-26T00:49:56Z | 35,005,340 | 7 | 2016-01-26T00:59:27Z | [
"python"
] | I created a generator expression that builds a dictionary with more keys than the source list provides, like so:
```
def moreIter(names):
for name in names:
yield name
yield name + "Bar"
KEYS = ("a", "b")
src = {"a": 1, "aBar": 2, "b": 3, "bBar": 4, "c": 0, "cBar": 1, "d": 10}
d = {key: src[key] for key in moreIter(KEYS)}
```
I was wondering if there is a more "pythonic" way to do something like this. It seems all the standard library functions I've come across that iterate through a list will return something of an equal or smaller length than the original list, but in this case I want to iterate through an expanded result. | You could use a comprehension with nested loops:
```
src = {"a": 1, "aBar": 2, "b": 3, "bBar": 4}
d = {key: src[key] for item in 'ab' for key in (item, item+'Bar')}
``` |
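An equivalent formulation that keeps the key expansion separate from the comprehension uses `itertools.chain.from_iterable` (just a sketch of an alternative):

```python
from itertools import chain

src = {"a": 1, "aBar": 2, "b": 3, "bBar": 4}
# expand each source key into (key, key + 'Bar') and flatten the pairs
keys = chain.from_iterable((k, k + 'Bar') for k in 'ab')
d = {key: src[key] for key in keys}
print(d)  # {'a': 1, 'aBar': 2, 'b': 3, 'bBar': 4}
```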
How to use image_summary to view images from different batches in Tensorflow? | 35,005,691 | 6 | 2016-01-26T01:39:45Z | 35,005,934 | 7 | 2016-01-26T02:06:34Z | [
"python",
"deep-learning",
"tensorflow"
] | I am curious about how image\_summary works. There is a parameter called max\_images, which controls how many images would be shown. However, it seems the summary only displays images from one batch. If we use a bigger value of max\_images, we will just view more images from the batch. Is there a way I can view, for example, one image from each batch? | To view one image from each batch, you need to fetch the result of the [`tf.image_summary()`](https://www.tensorflow.org/versions/master/api_docs/python/train.html#image_summary) op every time you run a step. For example, if you have the following setup:
```
images = ...
loss = ...
optimizer = ...
train_op = optimizer.minimize(loss)
init_op = tf.initialize_all_variables()
image_summary_t = tf.image_summary(images.name, images, max_images=1)
sess = tf.Session()
summary_writer = tf.train.SummaryWriter(...)
sess.run(init_op)
```
...you could set up your training loop to capture one image per iteration as follows:
```
for _ in range(10000):
_, image_summary = sess.run([train_op, image_summary_t])
summary_writer.add_summary(image_summary)
```
Note that capturing summaries on each batch might be inefficient, and you should probably only capture the summary periodically for faster training.
**EDIT:** The above code writes a separate summary for each image, so your log will contain all of the images, but they will not all be visualized in TensorBoard. If you want to combine your summaries to visualize images from multiple batches, you could do the following:
```
combined_summary = tf.Summary()
for i in range(10000):
_, image_summary = sess.run([train_op, image_summary_t])
combined_summary.MergeFromString(image_summary)
if i % 10 == 0:
summary_writer.add_summary(combined_summary)
combined_summary = tf.Summary()
``` |
Tensorflow: How to get all variables from rnn_cell.BasicLSTM & rnn_cell.MultiRNNCell | 35,013,080 | 5 | 2016-01-26T11:43:25Z | 35,018,188 | 7 | 2016-01-26T15:58:52Z | [
"python",
"tensorflow"
] | I have a setup where I need to initialize an LSTM after the main initialization which uses `tf.initialize_all_variables()`. I.e. I want to call `tf.initialize_variables([var_list])`
Is there way to collect all the internal trainable variables for both:
* rnn\_cell.BasicLSTM
* rnn\_cell.MultiRNNCell
so that I can initialize **JUST** these parameters?
The main reason I want this is because I do not want to re-initialize some trained values from earlier. | The easiest way to solve your problem is to use variable scope. The names of the variables within a scope will be prefixed with its name. Here is a short snippet:
```
cell = rnn_cell.BasicLSTMCell(num_nodes)
with tf.variable_scope("LSTM") as vs:
# Execute the LSTM cell here in any way, for example:
for i in range(num_steps):
output[i], state = cell(input_data[i], state)
# Retrieve just the LSTM variables.
lstm_variables = [v for v in tf.all_variables()
if v.name.startswith(vs.name)]
# [..]
# Initialize the LSTM variables.
tf.initialize_variables(lstm_variables)
```
It would work the same way with `MultiRNNCell`.
EDIT: changed `tf.trainable_variables` to `tf.all_variables()` |
TypeError: ufunc 'add' did not contain a loop with signature matching types | 35,013,726 | 5 | 2016-01-26T12:19:58Z | 35,016,330 | 7 | 2016-01-26T14:34:48Z | [
"python",
"python-3.x"
] | I am creating bag of words representation of the sentence. Then taking the words that exist in the sentence to compare to the file "vectors.txt", in order to get their embedding vectors. After getting vectors for each word that exists in the sentence, I am taking average of the vectors of the words in the sentence. This is my code:
```
import nltk
import numpy as np
from nltk import FreqDist
from nltk.corpus import brown
news = brown.words(categories='news')
news_sents = brown.sents(categories='news')
fdist = FreqDist(w.lower() for w in news)
vocabulary = [word for word, _ in fdist.most_common(10)]
num_sents = len(news_sents)
def averageEmbeddings(sentenceTokens, embeddingLookupTable):
listOfEmb=[]
for token in sentenceTokens:
embedding = embeddingLookupTable[token]
listOfEmb.append(embedding)
return sum(np.asarray(listOfEmb)) / float(len(listOfEmb))
embeddingVectors = {}
with open("D:\\Embedding\\vectors.txt") as file:
for line in file:
(key, *val) = line.split()
embeddingVectors[key] = val
for i in range(num_sents):
features = {}
for word in vocabulary:
features[word] = int(word in news_sents[i])
print(features)
print(list(features.values()))
sentenceTokens = []
for key, value in features.items():
if value == 1:
sentenceTokens.append(key)
sentenceTokens.remove(".")
print(sentenceTokens)
print(averageEmbeddings(sentenceTokens, embeddingVectors))
print(features.keys())
```
Not sure why, but I get this error:
```
TypeError Traceback (most recent call last)
<ipython-input-4-643ccd012438> in <module>()
39 sentenceTokens.remove(".")
40 print(sentenceTokens)
---> 41 print(averageEmbeddings(sentenceTokens, embeddingVectors))
42
43 print(features.keys())
<ipython-input-4-643ccd012438> in averageEmbeddings(sentenceTokens, embeddingLookupTable)
18 listOfEmb.append(embedding)
19
---> 20 return sum(np.asarray(listOfEmb)) / float(len(listOfEmb))
21
22 embeddingVectors = {}
TypeError: ufunc 'add' did not contain a loop with signature matching types dtype('<U9') dtype('<U9') dtype('<U9')
```
P.S. Embedding Vector looks like:
```
the 0.011384 0.010512 -0.008450 -0.007628 0.000360 -0.010121 0.004674 -0.000076
of 0.002954 0.004546 0.005513 -0.004026 0.002296 -0.016979 -0.011469 -0.009159
and 0.004691 -0.012989 -0.003122 0.004786 -0.002907 0.000526 -0.006146 -0.003058
one 0.014722 -0.000810 0.003737 -0.001110 -0.011229 0.001577 -0.007403 -0.005355
in -0.001046 -0.008302 0.010973 0.009608 0.009494 -0.008253 0.001744 0.003263
```
After using np.sum I get this error:
```
TypeError Traceback (most recent call last)
<ipython-input-13-8a7edbb9d946> in <module>()
40 sentenceTokens.remove(".")
41 print(sentenceTokens)
---> 42 print(averageEmbeddings(sentenceTokens, embeddingVectors))
43
44 print(features.keys())
<ipython-input-13-8a7edbb9d946> in averageEmbeddings(sentenceTokens, embeddingLookupTable)
18 listOfEmb.append(embedding)
19
---> 20 return np.sum(np.asarray(listOfEmb)) / float(len(listOfEmb))
21
22 embeddingVectors = {}
C:\Anaconda3\lib\site-packages\numpy\core\fromnumeric.py in sum(a, axis, dtype, out, keepdims)
1829 else:
1830 return _methods._sum(a, axis=axis, dtype=dtype,
-> 1831 out=out, keepdims=keepdims)
1832
1833
C:\Anaconda3\lib\site-packages\numpy\core\_methods.py in _sum(a, axis, dtype, out, keepdims)
30
31 def _sum(a, axis=None, dtype=None, out=None, keepdims=False):
---> 32 return umr_sum(a, axis, dtype, out, keepdims)
33
34 def _prod(a, axis=None, dtype=None, out=None, keepdims=False):
TypeError: cannot perform reduce with flexible type
``` | You have a numpy array of strings, not floats. This is what is meant by `dtype('<U9')` -- a little-endian unicode string with up to 9 characters.
try:
```
return sum(np.asarray(listOfEmb, dtype=float)) / float(len(listOfEmb))
```
However, you don't need numpy here at all. You can really just do:
```
return sum(float(embedding) for embedding in listOfEmb) / len(listOfEmb)
```
Or, if you're really set on using numpy:
```
return np.asarray(listOfEmb, dtype=float).mean()
``` |
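A small sketch reproducing the problem and the fix with a couple of values from the question's `vectors.txt`:

```python
import numpy as np

# Values parsed from the file are strings, so the array gets a '<U…' dtype.
strings = np.asarray(['0.011384', '0.010512'])
print(strings.dtype)              # a unicode-string dtype such as <U8

try:
    np.sum(strings)               # reduce is not defined for string dtypes
except TypeError as e:
    print("summing strings fails:", e)

# Converting up front gives a float array that sums and averages normally.
floats = np.asarray(['0.011384', '0.010512'], dtype=float)
print(floats.mean())
```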
Why is max slower than sort in Python? | 35,014,951 | 89 | 2016-01-26T13:30:13Z | 35,015,128 | 121 | 2016-01-26T13:39:22Z | [
"python",
"sorting",
"max",
"python-internals"
] | I've found that `max` is slower than the `sort` function in Python 2 and 3.
Python 2
```
$ python -m timeit -s 'import random;a=range(10000);random.shuffle(a)' 'a.sort();a[-1]'
1000 loops, best of 3: 239 usec per loop
$ python -m timeit -s 'import random;a=range(10000);random.shuffle(a)' 'max(a)'
1000 loops, best of 3: 342 usec per loop
```
Python 3
```
$ python3 -m timeit -s 'import random;a=list(range(10000));random.shuffle(a)' 'a.sort();a[-1]'
1000 loops, best of 3: 252 usec per loop
$ python3 -m timeit -s 'import random;a=list(range(10000));random.shuffle(a)' 'max(a)'
1000 loops, best of 3: 371 usec per loop
```
Why *is* `max` (`O(n)`) slower than the `sort` function (`O(nlogn)`)? | You have to be very careful when using the `timeit` module in Python.
```
python -m timeit -s 'import random;a=range(10000);random.shuffle(a)' 'a.sort();a[-1]'
```
Here the initialisation code runs once to produce a randomised array `a`. Then the rest of the code is run several times. The first time it sorts the array, but every other time you are calling the sort method on an already sorted array. Only the fastest time is returned, so you are actually timing how long it takes Python to sort an already sorted array.
Part of Python's sort algorithm is to detect when the array is already partly or completely sorted. When completely sorted it simply has to scan once through the array to detect this and then it stops.
If instead you tried:
```
python -m timeit -s 'import random;a=range(100000);random.shuffle(a)' 'sorted(a)[-1]'
```
then the sort happens on every timing loop and you can see that the time for sorting an array is indeed much longer than to just find the maximum value.
**Edit:** @skyking's [answer](http://stackoverflow.com/a/35015156/641833) explains the part I left unexplained: `a.sort()` knows it is working on a list so can directly access the elements. `max(a)` works on any arbitrary iterable so has to use generic iteration. |
Why is max slower than sort in Python? | 35,014,951 | 89 | 2016-01-26T13:30:13Z | 35,015,156 | 31 | 2016-01-26T13:40:25Z | [
"python",
"sorting",
"max",
"python-internals"
] | I've found that `max` is slower than the `sort` function in Python 2 and 3.
Python 2
```
$ python -m timeit -s 'import random;a=range(10000);random.shuffle(a)' 'a.sort();a[-1]'
1000 loops, best of 3: 239 usec per loop
$ python -m timeit -s 'import random;a=range(10000);random.shuffle(a)' 'max(a)'
1000 loops, best of 3: 342 usec per loop
```
Python 3
```
$ python3 -m timeit -s 'import random;a=list(range(10000));random.shuffle(a)' 'a.sort();a[-1]'
1000 loops, best of 3: 252 usec per loop
$ python3 -m timeit -s 'import random;a=list(range(10000));random.shuffle(a)' 'max(a)'
1000 loops, best of 3: 371 usec per loop
```
Why *is* `max` (`O(n)`) slower than the `sort` function (`O(nlogn)`)? | This could be because `l.sort` is a member of `list` while `max` is a generic function. This means that `l.sort` can rely on the internal representation of `list` while `max` will have to go through generic iterator protocol.
This means that each element fetch for `l.sort` is faster than each element fetch that `max` does.
I assume that if you instead use `sorted(a)`, the result will be slower than `max(a)`.
Why is max slower than sort in Python? | 35,014,951 | 89 | 2016-01-26T13:30:13Z | 35,015,364 | 87 | 2016-01-26T13:50:54Z | [
"python",
"sorting",
"max",
"python-internals"
] | I've found that `max` is slower than the `sort` function in Python 2 and 3.
Python 2
```
$ python -m timeit -s 'import random;a=range(10000);random.shuffle(a)' 'a.sort();a[-1]'
1000 loops, best of 3: 239 usec per loop
$ python -m timeit -s 'import random;a=range(10000);random.shuffle(a)' 'max(a)'
1000 loops, best of 3: 342 usec per loop
```
Python 3
```
$ python3 -m timeit -s 'import random;a=list(range(10000));random.shuffle(a)' 'a.sort();a[-1]'
1000 loops, best of 3: 252 usec per loop
$ python3 -m timeit -s 'import random;a=list(range(10000));random.shuffle(a)' 'max(a)'
1000 loops, best of 3: 371 usec per loop
```
Why *is* `max` (`O(n)`) slower than the `sort` function (`O(nlogn)`)? | First off, note that [`max()` uses the iterator protocol](https://hg.python.org/cpython/file/fadc4b53b840/Python/bltinmodule.c#l1478), while [`list.sort()` uses ad-hoc code](https://hg.python.org/cpython/file/fadc4b53b840/Objects/listobject.c#l1903). Clearly, using an iterator is an important overhead, that's why you are observing that difference in timings.
However, apart from that, your tests are not fair. You are running `a.sort()` on the same list more than once. The [algorithm used by Python](https://en.wikipedia.org/wiki/Timsort) is specifically designed to be fast for already (partially) sorted data. Your tests are saying that the algorithm is doing its job well.
These are fair tests:
```
$ python3 -m timeit -s 'import random;a=list(range(10000));random.shuffle(a)' 'max(a[:])'
1000 loops, best of 3: 227 usec per loop
$ python3 -m timeit -s 'import random;a=list(range(10000));random.shuffle(a)' 'a[:].sort()'
100 loops, best of 3: 2.28 msec per loop
```
Here I'm creating a copy of the list every time. As you can see, the orders of magnitude of the results are different: micro- vs. milliseconds, as we would expect.
And remember: big-Oh specifies an upper bound! The lower bound for Python's sorting algorithm is Ω(*n*). Being O(*n* log *n*) does not automatically imply that every run takes a time proportional to *n* log *n*. It does not even imply that it needs to be slower than a O(*n*) algorithm, but that's another story. What's important to understand is that in some favorable cases, an O(*n* log *n*) algorithm may run in O(*n*) time or less. |
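The same fair comparison can be scripted with the `timeit` module instead of the command line (a sketch; the absolute numbers will vary by machine, but the ordering should not):

```python
import timeit

setup = "import random; a = list(range(10000)); random.shuffle(a)"
# Copying with a[:] on every run means sort never sees already-sorted data,
# and both statements pay the same copying cost.
t_sort = timeit.timeit("a[:].sort()", setup=setup, number=200)
t_max = timeit.timeit("max(a[:])", setup=setup, number=200)
print(t_sort / t_max)  # sorting the fresh copy is several times slower
```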
How do I transform a multi-level list into a list of strings in Python? | 35,015,693 | 17 | 2016-01-26T14:06:08Z | 35,015,732 | 33 | 2016-01-26T14:07:31Z | [
"python",
"list"
] | I have a list that looks something like this:
```
a = [('A', 'V', 'C'), ('A', 'D', 'D')]
```
And I want to create another list that transforms `a` into:
```
['AVC', 'ADD']
```
How would I go on to do this? | Use [`str.join()`](https://docs.python.org/3.5/library/stdtypes.html#str.join) in a list comprehension (works in both Python 2.x and 3.x):
```
>>> a = [('A', 'V', 'C'), ('A', 'D', 'D')]
>>> [''.join(x) for x in a]
['AVC', 'ADD']
``` |
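If the tuples may contain non-string items, `str.join` raises a `TypeError`, so each element needs converting first (a sketch):

```python
a = [(1, 'V', 'C'), ('A', 2, 'D')]
# convert every element to str before joining
print([''.join(map(str, t)) for t in a])  # ['1VC', 'A2D']
```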
How do I transform a multi-level list into a list of strings in Python? | 35,015,693 | 17 | 2016-01-26T14:06:08Z | 35,016,510 | 22 | 2016-01-26T14:42:25Z | [
"python",
"list"
] | I have a list that looks something like this:
```
a = [('A', 'V', 'C'), ('A', 'D', 'D')]
```
And I want to create another list that transforms `a` into:
```
['AVC', 'ADD']
```
How would I go on to do this? | You could map `str.join` to each `tuple` in `a`:
Python 2:
```
>>> map(''.join, a)
['AVC', 'ADD']
```
In Python 3, `map` is an iterable object so you'd need to materialise it as a `list`:
```
>>> list(map(''.join, a))
['AVC', 'ADD']
``` |
How do I structure this nested for loop? | 35,026,139 | 2 | 2016-01-26T23:39:13Z | 35,026,154 | 7 | 2016-01-26T23:40:57Z | [
"python",
"loops",
"for-loop",
"nested-loops"
] | I am trying to use a nested for loop to print the values 192.168.42.0-255 and 192.168.43.0-255.
```
for i in range(42, 43):
for y in range(0, 255):
ip = "192.168." + str(i) + "." + str(y)
print ip
```
All that is printing is the values 192.168.42.0-255. It doesn't seem to be changing to 192.168.43. | `range(x,y)` does not include y.
```
>>> range(42,43)
[42]
```
Your code can be fixed by changing the first one to `range(42,44)` and the second one to `range(0,256)` (or just `range(256)`).
If you want to get rid of the nested loops altogether, you can use a generator expression:
```
for ip in ('192.168.{}.{}'.format(i,j) for i in (42,43) for j in range(256)):
print(ip)
``` |
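The two loops can also be flattened with `itertools.product`, which pairs every third octet with every fourth octet (a sketch):

```python
from itertools import product

# 2 third-octets x 256 fourth-octets = 512 addresses
ips = ['192.168.{}.{}'.format(i, j) for i, j in product((42, 43), range(256))]
print(len(ips))         # 512
print(ips[0], ips[-1])  # 192.168.42.0 192.168.43.255
```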
Jupyter notebook command does not work on Mac | 35,029,029 | 9 | 2016-01-27T04:41:16Z | 36,410,507 | 10 | 2016-04-04T18:48:23Z | [
"python",
"osx",
"jupyter",
"jupyter-notebook"
] | I installed jupyter using pip on my macbook air. Upon trying to execute the command `jupyter notebook`, I get an error
```
jupyter: 'notebook' is not a Jupyter command
```
I used the --h option to get a listing of all jupyter commands and indeed, 'notebook' is not one of the commands. I am running python 2.7 and it was already installed before I installed jupyter.
I searched on google and I see a similar problem some people have faced with the latest version of jupyter but I don't see any solutions. Can somebody point me in the right direction?
`which -a pip`: /usr/local/bin/pip
`which -a jupyter`: /usr/local/bin/jupyter | I had the same problem. After looking at a gazillion pages online and trying as many solutions, this is the only thing that worked for me:
```
pip uninstall notebook
pip install --upgrade notebook
```
I am not sure that the "upgrade" is necessary, but after that I had
```
jupyter-notebook
```
and
```
jupyter notebook
```
as commands. |
How to set "simple" password in Django 1.9 | 35,032,159 | 4 | 2016-01-27T08:28:48Z | 35,032,185 | 8 | 2016-01-27T08:30:43Z | [
"python",
"django"
] | ```
python manage.py createsuperuser --username admin
```
After that it prompts for password and when I enter "admin" I got the following message:
> This password is too short. It must contain at least 8 characters.
> This password is too common.
I haven't seen this on earlier versions of Django. | Tune your [`AUTH_PASSWORD_VALIDATORS`](https://docs.djangoproject.com/en/1.9/ref/settings/#std:setting-AUTH_PASSWORD_VALIDATORS) setting by removing `django.contrib.auth.password_validation.CommonPasswordValidator` from it.
[Password validation](https://docs.djangoproject.com/en/1.9/topics/auth/passwords/#module-django.contrib.auth.password_validation) is a new feature introduced in Django 1.9. |
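For reference, a trimmed validator list might look like this (a sketch of a `settings.py` fragment built from the other Django 1.9 default validators; keep or drop entries to taste):

```python
# settings.py -- CommonPasswordValidator and MinimumLengthValidator removed,
# so passwords are no longer rejected for being short or common.
AUTH_PASSWORD_VALIDATORS = [
    {
        'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
    },
]
```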
Difference between a -= b and a = a - b in Python | 35,036,126 | 86 | 2016-01-27T11:29:43Z | 35,036,306 | 40 | 2016-01-27T11:37:14Z | [
"python",
"arrays",
"numpy",
"in-place"
] | I have recently applied [this](http://stackoverflow.com/questions/30379311/fast-way-to-take-average-of-every-n-rows-in-a-npy-array) solution for averaging every N rows of matrix.
Although the solution works in general I had problems when applied to a 7x1 array. I have noticed that the problem is when using the `-=` operator.
To make a small example:
```
import numpy as np
a = np.array([1,2,3])
b = np.copy(a)
a[1:] -= a[:-1]
b[1:] = b[1:] - b[:-1]
print a
print b
```
which outputs:
```
[1 1 2]
[1 1 1]
```
So, in the case of an array `a -= b` produces a different result than `a = a - b`. I thought until now that these two ways are exactly the same. What is the difference?
How come the method I am mentioning for summing every N rows in a matrix is working e.g. for a 7x4 matrix but not for a 7x1 array? | Internally, the difference is that this:
```
a[1:] -= a[:-1]
```
is equivalent to this:
```
a[1:] = a[1:].__isub__(a[:-1])
a.__setitem__(slice(1, None, None), a.__getitem__(slice(1, None, None)).__isub__(a.__getitem__(slice(None, -1, None))))
```
while this:
```
b[1:] = b[1:] - b[:-1]
```
maps to this:
```
b[1:] = b[1:].__sub__(b[:-1])
b.__setitem__(slice(1, None, None), b.__getitem__(slice(1, None, None)).__sub__(b.__getitem__(slice(None, -1, None))))
```
In some cases, `__sub__()` and `__isub__()` work in a similar way. But mutable objects should mutate and return themselves when using `__isub__()`, while they should return a new object with `__sub__()`.
Applying slice operations on numpy objects creates views on them, so using them directly accesses the memory of the "original" object. |
Difference between a -= b and a = a - b in Python | 35,036,126 | 86 | 2016-01-27T11:29:43Z | 35,036,528 | 76 | 2016-01-27T11:47:37Z | [
"python",
"arrays",
"numpy",
"in-place"
] | I have recently applied [this](http://stackoverflow.com/questions/30379311/fast-way-to-take-average-of-every-n-rows-in-a-npy-array) solution for averaging every N rows of matrix.
Although the solution works in general I had problems when applied to a 7x1 array. I have noticed that the problem is when using the `-=` operator.
To make a small example:
```
import numpy as np
a = np.array([1,2,3])
b = np.copy(a)
a[1:] -= a[:-1]
b[1:] = b[1:] - b[:-1]
print a
print b
```
which outputs:
```
[1 1 2]
[1 1 1]
```
So, in the case of an array `a -= b` produces a different result than `a = a - b`. I thought until now that these two ways are exactly the same. What is the difference?
How come the method I am mentioning for summing every N rows in a matrix is working e.g. for a 7x4 matrix but not for a 7x1 array? | Mutating arrays while they're being used in computations can lead to unexpected results!
In the example in the question, subtraction with `-=` modifies the second element of `a` and then immediately uses that *modified* second element in the operation on the third element of `a`.
Here is what happens with `a[1:] -= a[:-1]` step by step:
* `a` is the array with the data `[1, 2, 3]`.
* We have two views onto this data: `a[1:]` is `[2, 3]`, and `a[:-1]` is `[1, 2]`.
* The inplace subtraction `-=` begins. The first element of `a[:-1]`, 1, is subtracted from the first element of `a[1:]`. This has modified `a` to be `[1, 1, 3]`. Now we have that `a[1:]` is a view of the data `[1, 3]`, and `a[:-1]` is a view of the data `[1, 1]` (the second element of array `a` has been changed).
* `a[:-1]` is now `[1, 1]` and NumPy must now subtract its second element *which is 1* (not 2 anymore!) from the second element of `a[1:]`. This makes `a[1:]` a view of the values `[1, 2]`.
* `a` is now an array with the values `[1, 1, 2]`.
`b[1:] = b[1:] - b[:-1]` does not have this problem because `b[1:] - b[:-1]` creates a *new* array first and then assigns the values in this array to `b[1:]`. It does not modify `b` itself during the subtraction, so the views `b[1:]` and `b[:-1]` do not change.
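A sketch of one workaround, if you do want the augmented form on overlapping views, is to copy the right-hand side first (this allocates a temporary, assuming the NumPy view semantics described above):

```python
import numpy as np

a = np.array([1, 2, 3])

# Copying the overlapping view breaks the aliasing,
# so -= now matches the out-of-place result.
a[1:] -= a[:-1].copy()

print(a)  # [1 1 1]
```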
---
The general advice is to avoid modifying one view inplace with another if they overlap. This includes the operators `-=`, `*=`, etc. and using the `out` parameter in universal functions (like `np.subtract` and `np.multiply`) to write back to one of the arrays. |
Difference between a -= b and a = a - b in Python | 35,036,126 | 86 | 2016-01-27T11:29:43Z | 35,036,836 | 10 | 2016-01-27T12:02:07Z | [
"python",
"arrays",
"numpy",
"in-place"
] | I have recently applied [this](http://stackoverflow.com/questions/30379311/fast-way-to-take-average-of-every-n-rows-in-a-npy-array) solution for averaging every N rows of matrix.
Although the solution works in general I had problems when applied to a 7x1 array. I have noticed that the problem is when using the `-=` operator.
To make a small example:
```
import numpy as np
a = np.array([1,2,3])
b = np.copy(a)
a[1:] -= a[:-1]
b[1:] = b[1:] - b[:-1]
print a
print b
```
which outputs:
```
[1 1 2]
[1 1 1]
```
So, in the case of an array `a -= b` produces a different result than `a = a - b`. I thought until now that these two ways are exactly the same. What is the difference?
How come the method I am mentioning for summing every N rows in a matrix is working e.g. for a 7x4 matrix but not for a 7x1 array? | [The docs](http://legacy.python.org/dev/peps/pep-0203/) say :
> The idea behind augmented assignment in Python is that it isn't
> just an easier way to write the common practice of storing the
> result of a binary operation in its left-hand operand, but also a
> way for the left-hand operand in question to know that it should
> operate `on itself', rather than creating a modified copy of
> itself.
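The distinction is easy to demonstrate with a mutable built-in like `list` (shown with `+`/`+=`, since lists don't support subtraction):

```python
x = [1, 2]
alias = x

x = x + [3]          # __add__: builds a new list and rebinds x
print(x is alias)    # False; alias is still [1, 2]

y = [1, 2]
alias = y
y += [3]             # __iadd__: mutates the existing list in place
print(y is alias)    # True; alias is now [1, 2, 3]
```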
As a rule of thumb, augmented subtraction (`x -= y`) is `x.__isub__(y)`, an **in**-place operation **if** possible, while normal subtraction (`x = x - y`) is `x = x.__sub__(y)`. On immutable objects like integers the two are equivalent. But for mutable ones like arrays or lists, as in your example, they can be very different things. |
How to deploy and serve prediction using TensorFlow from API? | 35,037,360 | 9 | 2016-01-27T12:26:59Z | 35,440,530 | 8 | 2016-02-16T18:38:30Z | [
"python",
"tensorflow",
"tensorflow-serving"
] | From the Google tutorial we know how to train a model in TensorFlow. But what is the best way to save a trained model, and then serve predictions using a basic, minimal Python API on a production server?
My question is basically about TensorFlow best practices for saving the model and serving predictions on a live server without compromising speed or causing memory issues, since the API server will be running in the background forever.
A small snippet of python code will be appreciated. | [TensorFlow Serving](https://github.com/tensorflow/serving) is a high performance, open source serving system for machine learning models, designed for production environments and optimized for TensorFlow. The initial release contains C++ server and Python client examples based on [gRPC](http://www.grpc.io/). The basic architecture is shown in the diagram below.
[](http://i.stack.imgur.com/dfLtJ.png)
To get started quickly, check out the [tutorial](https://tensorflow.github.io/serving/serving_basic). |
Updating instance attributes of class | 35,040,893 | 3 | 2016-01-27T15:06:08Z | 35,041,004 | 7 | 2016-01-27T15:10:33Z | [
"python"
] | I have been googling a lot on this topic and I did not really find a commonly accepted way of achieving my goal.
Suppose we have the following class:
```
import numpy as np
class MyClass:
def __init__(self, x):
self.x = x
self.length = x.size
def append(self, data):
self.x = np.append(self.x, data)
```
and `x` should be a numpy array! If I run
```
A = MyClass(x=np.arange(10))
print(A.x)
print(A.length)
```
I get
`[0 1 2 3 4 5 6 7 8 9]` and `10`. So far so good. But if I use the append method
```
A.append(np.arange(5))
```
I get `[0 1 2 3 4 5 6 7 8 9 0 1 2 3 4]` and `10`. This is also expected since the instance attribute `length` was set during the instantiation of `A`. Now I am not sure what the most pythonic way of updating instance attributes is. For example I could run `__init__` again:
```
A.__init__(A.x)
```
and then the `length` attribute will have the correct value, but in some other posts here I found that this is somehow frowned upon. Another solution would be to update the `length` attribute in the `append` method directly, but I kind of want to avoid this since I don't want to forget updating an attribute at some point. Is there a more pythonic way of updating the `length` attribute for this class? | Don't *update* it, just *read* it when you need it with [a getter](https://docs.python.org/3.5/library/functions.html#property):
```
class MyClass:
...
@property
def length(self):
return self.x.size
``` |
How to plot multiple Seaborn Jointplot in Subplot | 35,042,255 | 5 | 2016-01-27T16:05:11Z | 35,044,845 | 8 | 2016-01-27T18:06:22Z | [
"python",
"python-3.x",
"pandas",
"matplotlib",
"seaborn"
] | I'm having problem placing Seaborn `Jointplot` inside a multicolumn `subplot`.
```
import pandas as pd
import seaborn as sns
df = pd.DataFrame({'C1': {'a': 1,'b': 15,'c': 9,'d': 7,'e': 2,'f': 2,'g': 6,'h': 5,'k': 5,'l': 8},
'C2': {'a': 6,'b': 18,'c': 13,'d': 8,'e': 6,'f': 6,'g': 8,'h': 9,'k': 13,'l': 15}})
fig = plt.figure();
ax1 = fig.add_subplot(121);
ax2 = fig.add_subplot(122);
sns.jointplot("C1", "C2", data=df, kind='reg', ax=ax1)
sns.jointplot("C1", "C2", data=df, kind='kde', ax=ax2)
```
Notice how only a portion of the `jointplot` is placed inside the subplot and the rest left inside another two plot frames. What I'd want is to have both the `distributions` also inserted inside the `subplots`.
Can anyone help with this? | It can not be easily done without hacking. `jointplot` calls `JointGrid` method, which in turn creates a new `figure` object every time it is called.
Therefore, the hack is to make two jointplots (`JG1` `JG2`), then make a new figure, then migrate the axes objects from `JG1` `JG2` to the new figure created.
Finally, we adjust the sizes and the positions of subplots in the new figure we just created.
```
JG1 = sns.jointplot("C1", "C2", data=df, kind='reg')
JG2 = sns.jointplot("C1", "C2", data=df, kind='kde')
#subplots migration
f = plt.figure()
for J in [JG1, JG2]:
for A in J.fig.axes:
f._axstack.add(f._make_key(A), A)
#subplots size adjustment
f.axes[0].set_position([0.05, 0.05, 0.4, 0.4])
f.axes[1].set_position([0.05, 0.45, 0.4, 0.05])
f.axes[2].set_position([0.45, 0.05, 0.05, 0.4])
f.axes[3].set_position([0.55, 0.05, 0.4, 0.4])
f.axes[4].set_position([0.55, 0.45, 0.4, 0.05])
f.axes[5].set_position([0.95, 0.05, 0.05, 0.4])
```
It is a hack because we are now using the private `_axstack` and `_make_key` methods, which may or may not stay the same in future `matplotlib` versions.
[](http://i.stack.imgur.com/Bm6gQ.png) |
What does "(?u)" do in a regex? | 35,043,085 | 4 | 2016-01-27T16:40:49Z | 35,043,102 | 9 | 2016-01-27T16:41:18Z | [
"python",
"regex"
] | I looked into how tokenization is implemented in scikit-learn and found this regex ([source](https://github.com/scikit-learn/scikit-learn/blob/c957249/sklearn/feature_extraction/text.py#L1215)):
```
token_pattern = r"(?u)\b\w\w+\b"
```
The regex is pretty straightforward but I have never seen the `(?u)` part before. Can someone explain me what this part is doing? | It switches on the [`re.U` (`re.UNICODE`) flag](https://docs.python.org/2/library/re.html#re.U) for this expression.
From the [module documentation](https://docs.python.org/2/library/re.html):
> `(?iLmsux)`
>
> (One or more letters from the set `'i'`, `'L'`, `'m'`, `'s'`, `'u'`, `'x'`.) The group matches the empty string; the letters set the corresponding flags: `re.I` (ignore case), `re.L` (locale dependent), `re.M` (multi-line), `re.S` (dot matches all), `re.U` (Unicode dependent), and `re.X` (verbose), for the entire regular expression. (The flags are described in Module Contents.) This is useful if you wish to include the flags as part of the regular expression, instead of passing a flag argument to the `re.compile()` function. |
In Python 3, what is the most efficient way to change the values in a list? | 35,048,679 | 2 | 2016-01-27T21:46:17Z | 35,049,019 | 7 | 2016-01-27T22:07:47Z | [
"python",
"python-3.x",
"memory-efficient"
] | I'm learning Python, and I was trying to change a list in different ways. For instance, if I have list called names like this:
```
names = ["David", "Jake", "Alex"]
```
and I want to add the name "Carter" into the list, what is the most efficient way to accomplish this? Here's some of the things I can do:
```
names.append("Carter")
names = names + ["Carter"]
names += ["Carter"]
``` | append is the fastest.
Here is how you build a small profile using the timeit module
```
import timeit
a = (timeit.timeit("l.append('Cheese')", setup="l=['Meat', 'Milk']"))
b = (timeit.timeit("l+=['Cheese']", setup="l=['Meat', 'Milk']"))
c = (timeit.timeit("append('Cheese')", setup="l=['Meat', 'Milk'];append = l.append"))
print ('a', a)
print ('b', b)
print ('c', c)
print ("==> " , (c < a < b))
```
As you can see, in Python, pre-binding the method (`append = l.append`) takes roughly half the time of looking up and calling `l.append` on every iteration...
> a 0.08502503100316972
>
> b 0.1582659209962003
>
> c 0.041991976962890476
>
> ==> True |
Is it possible to force exponent or significand of a float to match another float (Python)? | 35,056,475 | 8 | 2016-01-28T08:37:12Z | 35,258,941 | 7 | 2016-02-07T20:48:24Z | [
"python",
"numpy",
"floating-point",
"floating-accuracy"
] | This is an interesting question that I was trying to work through the other day. Is it possible to force the significand or exponent of one `float` to be the same as another `float` in Python?
The question arises because I was trying to rescale some data so that the min and max match another data set. However, my rescaled data was slightly off (after about 6 decimal places) and it was enough to cause problems down the line.
To give an idea, I have `f1` and `f2` (`type(f1) == type(f2) == numpy.ndarray`). I want `np.max(f1) == np.max(f2) and np.min(f1) == np.min(f2)`. To achieve this, I do:
```
import numpy as np
f2 = (f2-np.min(f2))/(np.max(f2)-np.min(f2)) # f2 is now between 0.0 and 1.0
f2 = f2*(np.max(f1)-np.min(f1)) + np.min(f1) # f2 is now between min(f1) and max(f1)
```
The result (just as an example) would be:
```
np.max(f1) # 5.0230593
np.max(f2) # 5.0230602 but I need 5.0230593
```
My initial thought is that forcing the exponent of the `float` would be the correct solution. I couldn't find much on it, so I made a **workaround** for my need:
```
exp = 0
mm = np.max(f1)
# find where the decimal is
while int(10**exp*mm) == 0:
exp += 1
# add 4 digits of precision
exp += 4
scale = 10**exp
f2 = np.round(f2*scale)/scale
f1 = np.round(f1*scale)/scale
```
now `np.max(f2) == np.max(f1)`
However, is there a better way? Did I do something wrong? Is it possible to reshape a `float` to be similar to another `float` (exponent or other means)?
EDIT: as was suggested, I am now using:
```
scale = 10**(-np.floor(np.log10(np.max(f1))) + 4)
```
While my solution above will work (for my application), I'm interested to know if there's a solution that can somehow force the `float` to have the same exponent and/or significand so that the numbers will become identical. | It depends what you mean by "mantissa."
Internally, floats are stored using scientific notation in base 2. So if you mean the *base 2* mantissa, it is actually very easy: Just multiply or divide by powers of two (not powers of 10), and the mantissa will stay the same (provided the exponent doesn't get out of range; if it does, you'll get clamped to infinity or zero, or possibly go into [denormal numbers](https://en.wikipedia.org/wiki/Denormal_number) depending on architectural details). It's important to understand that the decimal expansions will not match up when you rescale on powers of two. It's the binary expansion that's preserved with this method.
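You can verify this with `math.frexp`, which splits a float into its base-2 significand and exponent; the sample value below is just the one from the question:

```python
import math

x = 5.0230593
m1, e1 = math.frexp(x)         # x == m1 * 2**e1, with 0.5 <= m1 < 1
m2, e2 = math.frexp(x * 2**5)  # scaling by a power of two is exact

print(m1 == m2)   # True: the significand is untouched
print(e2 - e1)    # 5: only the exponent moved

m3, _ = math.frexp(x * 10)     # a power of ten changes the significand
print(m1 == m3)   # False
```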
But if you mean the base 10 mantissa, no, it's not possible with floats, because the rescaled value may not be exactly representable. For example, 1.1 cannot be represented exactly in base 2 (with a finite number of digits) in much the same way as 1/3 cannot be represented in base 10 (with a finite number of digits). So rescaling 11 down by 1/10 cannot be done perfectly accurately:
```
>>> print("%1.29f" % (11 * 0.1))
1.10000000000000008881784197001
```
You can, however, do the latter with [`decimal`](https://docs.python.org/2/library/decimal.html)s. Decimals work in base 10, and will behave as expected in terms of base 10 rescaling. They also provide a fairly large amount of specialized functionality to detect and handle various kinds of loss of precision. But decimals [don't benefit from NumPy speedups](http://stackoverflow.com/a/7772386/1340389), so if you have a very large volume of data to work with, they may not be efficient enough for your use case. Since NumPy depends on hardware support for floating point, and most (all?) modern architectures provide no hardware support for base 10, this is not easily remedied. |
Input to LSTM network tensorflow | 35,056,909 | 7 | 2016-01-28T08:59:47Z | 35,163,341 | 10 | 2016-02-02T20:37:32Z | [
"python",
"tensorflow",
"lstm",
"recurrent-neural-network"
] | I have a time series of length t (x_0, ..., x_t); each x_i is a d-dimensional vector, i.e. x_i = (x_i^1, x_i^2, ..., x_i^d). Thus my input X is of shape [batch\_size, d]
The input for the tensorflow LSTM should be of size [batchSize, hidden\_size].
My question is how I should input my time series to the LSTM. One possible solution that I thought of is to have an additional weight matrix, W, of size [d, hidden\_size], and to feed the LSTM with X\*W + B.
Is this correct, or should I input something else to the network?
Thanks | Your intuition is correct; what you need (and what you have described) is an embedding to translate your input vector to the dimension of your LSTM's input. There are three primary ways that I know of to accomplish that.
* You could do this manually with an additional weight matrix `W` and bias vector `b` as you described.
* You could create the weight matrix and bias vectors automatically using the `linear()` function [from TensorFlow's rnn\_cell.py library](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/rnn_cell.py#L668). Then pass the output of that linear layer as the input of your LSTM when you create your LSTM via the `rnn_decoder()` function [in Tensorflow's seq2seq.py library](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/seq2seq.py#L36) or otherwise.
* Or you could have Tensorflow create this embedding and hook it up to the inputs of your LSTM automatically, by creating the LSTM via the `embedding_rnn_decoder()` function at line 141 of the same seq2seq library. (If you trace through the code for this function without any optional arguments, you'll see that it is simply creating a linear embedding layer for the input as well as the LSTM and hooking them together.)
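As a shape-level sketch of the first (manual) option — written here with NumPy standing in for the TensorFlow variables and ops, just to make the dimensions concrete; the `batch_size`/`d`/`hidden_size` values are made up:

```python
import numpy as np

batch_size, d, hidden_size = 32, 7, 128

X = np.random.randn(batch_size, d)    # one timestep of d-dimensional input
W = np.random.randn(d, hidden_size)   # embedding weight matrix
b = np.zeros(hidden_size)             # embedding bias

lstm_input = X @ W + b                # shape [batch_size, hidden_size]
print(lstm_input.shape)               # (32, 128)
```

In a real graph, `W` and `b` would be `tf.Variable`s and the matmul/add would be TensorFlow ops, but the shapes work out exactly the same way.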
Unless you need access to the individual components that you're creating for some reason, I would recommend the third option to keep your code at a high level. |
Django json response error status | 35,059,916 | 6 | 2016-01-28T11:12:33Z | 35,059,936 | 25 | 2016-01-28T11:13:32Z | [
"python",
"django",
"tastypie"
] | My api is returning a json object on error. But the status code is HTTP 200. How can I change the response code to indicate an error
```
response = JsonResponse({'status':'false','message':message})
return response
``` | `JsonResponse` normally returns `HTTP 200`, which is the status code for `'OK'`. In order to indicate an error, you can add an HTTP status code to `JsonResponse` as it is a subclass of `HttpResponse`:
```
response = JsonResponse({'status':'false','message':message}, status=500)
``` |
Python multiprocessing - Why is using functools.partial slower than default arguments? | 35,062,087 | 8 | 2016-01-28T12:54:07Z | 35,062,539 | 7 | 2016-01-28T13:14:12Z | [
"python",
"python-3.x",
"python-multiprocessing",
"functools"
] | Consider the following function:
```
def f(x, dummy=list(range(10000000))):
return x
```
If I use `multiprocessing.Pool.imap`, I get the following timings:
```
import time
import os
from multiprocessing import Pool
def f(x, dummy=list(range(10000000))):
return x
start = time.time()
pool = Pool(2)
for x in pool.imap(f, range(10)):
print("parent process, x=%s, elapsed=%s" % (x, int(time.time() - start)))
parent process, x=0, elapsed=0
parent process, x=1, elapsed=0
parent process, x=2, elapsed=0
parent process, x=3, elapsed=0
parent process, x=4, elapsed=0
parent process, x=5, elapsed=0
parent process, x=6, elapsed=0
parent process, x=7, elapsed=0
parent process, x=8, elapsed=0
parent process, x=9, elapsed=0
```
Now if I use `functools.partial` instead of using a default value:
```
import time
import os
from multiprocessing import Pool
from functools import partial
def f(x, dummy):
return x
start = time.time()
g = partial(f, dummy=list(range(10000000)))
pool = Pool(2)
for x in pool.imap(g, range(10)):
print("parent process, x=%s, elapsed=%s" % (x, int(time.time() - start)))
parent process, x=0, elapsed=1
parent process, x=1, elapsed=2
parent process, x=2, elapsed=5
parent process, x=3, elapsed=7
parent process, x=4, elapsed=8
parent process, x=5, elapsed=9
parent process, x=6, elapsed=10
parent process, x=7, elapsed=10
parent process, x=8, elapsed=11
parent process, x=9, elapsed=11
```
Why is the version using `functools.partial` so much slower? | Using `multiprocessing` requires sending the worker processes information about the function to run, not just the arguments to pass. That information is transferred by [pickling](https://docs.python.org/3/library/pickle.html) that information in the main process, sending it to the worker process, and unpickling it there.
This leads to the primary issue:
**Pickling a function with default arguments is cheap**; it only pickles the name of the function (plus the info to let Python know it's a function); the worker processes just look up the local copy of the name. They already have a named function `f` to find, so it costs almost nothing to pass it.
But **pickling a `partial` function involves pickling the underlying function (cheap) and *all* the default arguments (*expensive* when the default argument is a 10M long `list`)**. So every time a task is dispatched in the `partial` case, it's pickling the bound argument, sending it to the worker process, the worker process unpickles, then finally does the "real" work. On my machine, that pickle is roughly 50 MB in size, which is a huge amount of overhead; in quick timing tests on my machine, pickling and unpickling a 10 million long `list` of `0` takes about 620 ms (and that's ignoring the overhead of actually transferring the 50 MB of data).
`partial`s have to pickle this way, because they don't know their own names; when pickling a function like `f`, `f` (being `def`-ed) knows its qualified name (in an interactive interpreter or from the main module of a program, it's `__main__.f`), so the remote side can just recreate it locally by doing the equivalent of `from __main__ import f`. But the `partial` doesn't know its name; sure, you assigned it to `g`, but neither `pickle` nor the `partial` itself know it available with the qualified name `__main__.g`; it could be named `foo.fred` or a million other things. So it has to `pickle` the info necessary to recreate it entirely from scratch. It's also `pickle`-ing for each call (not just once per worker) because it doesn't know that the callable isn't changing in the parent between work items, and it's always trying to ensure it sends up to date state.
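You can observe the size difference directly with a quick sketch (the 1,000,000-element list here is scaled down from the question's 10,000,000 just to keep it fast; exact byte counts vary by Python version):

```python
import pickle
from functools import partial

def f(x, dummy=list(range(1000000))):
    return x

def f2(x, dummy):
    return x

g = partial(f2, dummy=list(range(1000000)))

# A def-ed function pickles by qualified name only -- a few dozen bytes.
print(len(pickle.dumps(f)))

# A partial pickles its bound arguments too -- megabytes in this case.
print(len(pickle.dumps(g)))
```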
You have other issues (timing creation of the `list` only in the `partial` case and the minor overhead of calling a `partial` wrapped function vs. calling the function directly), but those are chump change relative to the per-call overhead pickling and unpickling the `partial` is adding (the initial creation of the `list` is adding one-time overhead of a little under half what *each* pickle/unpickle cycle costs; the overhead to call through the `partial` is less than a microsecond). |
Working with strings seems more cumbersome than it needs to be in Python 3.x | 35,067,366 | 3 | 2016-01-28T16:54:32Z | 35,067,488 | 10 | 2016-01-28T17:00:20Z | [
"python",
"string",
"python-3.x",
"bytearray"
] | I have a function that takes in a string, sends it via a socket, and prints it to the console. Sending strings to this function yields some warnings that turn into other warnings when attempting to fix them.
Function:
```
def log(socket, sock_message):
sock_message = sock_message.encode()
socket.send(sock_message)
print(sock_message.decode())
```
I'm attempting to call my function this way:
```
log(conn, "BATT " + str(random.randint(1, 100)))
```
And also, for simplicity:
```
log(conn, "SIG: 100%")
```
With both of the `log` calls, I get `Type 'str' doesn't have expected attribute 'decode'`. So instead, I saw you could pass a string as an array of bytes with `bytes("my string", 'utf-8')` but then I get the warning `Type 'str' doesn't have expected attribute 'encode'`.
I'm 100% sure I'm just missing some key bit of information on how to pass strings around in python, so what's the generally accepted way to accomplish this?
**EDIT:**
As explained below, a `str` has `encode` but not `decode`, and I was confusing my IDE by calling both on the same variable. I fixed it by maintaining a separate variable for the `bytes` version, which resolves the issue.
```
def log(sock, msg):
sock_message = msg.encode()
sock.send(sock_message)
print(msg)
``` | In Python 2 you could be very sloppy (and sometimes get away with it) when handling characters (strings) and handling bytes. Python 3 fixes this by making them two separate types: `str` and `bytes`.
You **encode** to convert from `str` *to* `bytes`. Many characters (in particular ones not in English / US-ASCII) require two or more bytes to represent them (in many encodings).
You **decode** to convert from `bytes` *to* `str`.
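A quick round trip (the sample string is arbitrary):

```python
text = "naïve café"             # str: a sequence of characters
data = text.encode("utf-8")     # bytes: what sockets and files deal in

print(len(text))                # 10 characters
print(len(data))                # 12 bytes: ï and é take two bytes each in UTF-8
print(data.decode("utf-8") == text)  # True
```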
Thus you can't *decode* a `str`. You need to *encode* it to print it or to send it anywhere that requires bytes (files, sockets, etc.). You also need to use the correct encoding so that the receiver of the bytes can correctly decode it and receive the correct characters. For some US-ASCII is sufficient. Many prefer using UTF-8, in part because all the characters that can be handled by US-ASCII are the same in UTF-8 but UTF-8 can handle (other) Unicode characters. |
How can I get from 'pyspark.sql.types.Row' all the columns/attributes name? | 35,067,467 | 4 | 2016-01-28T16:59:33Z | 35,067,846 | 7 | 2016-01-28T17:16:32Z | [
"python",
"apache-spark",
"attributes",
"row",
"pyspark"
] | I am using the Python API of Spark version 1.4.1.
My row object looks like this :
```
row_info = Row(name = Tim, age = 5, is_subscribed = false)
```
How can I get as a result, a list of the object attributes ?
Something like : `["name", "age", "is_subscribed"]` | If you don't care about the order you can simply extract these from a `dict`:
```
list(row_info.asDict())
```
otherwise the only option I am aware of is using `__fields__` directly:
```
row_info.__fields__
``` |
Compile Brotli into a DLL .NET can reference | 35,075,492 | 7 | 2016-01-29T01:42:31Z | 37,533,467 | 9 | 2016-05-30T20:46:29Z | [
"c#",
"python",
"compression",
"python.net",
"brotli"
] | So I'd like to take advantage of Brotli but I am not familiar with Python and C++..
I know someone had compiled it into a Windows .exe. But how do I wrap it into a DLL or something that a .NET app can reference? I know there's IronPython, do I just bring in all the source files into an IronPython project and write a .NET adapter that calls into the Brotli API and exposes them? But actually, I'm not even sure if the Brotli API is Python or C++..
Looking at `tools/bro.cc`, it looks like the "entry" methods are defined in `encode.c` and `decode.c` as `BrotliCompress()`, `BrotliDecompressBuffer()`, `BrotliDecompressStream()` methods. So I suppose a DLL can be compiled from the C++ classes. | To avoid the need for Python, I have forked the original brotli source here <https://github.com/smourier/brotli> and created a Windows DLL version of it that you can use with .NET.
I've added a directory that contains a "WinBrotli" Visual Studio 2015 solution with two projects:
* **WinBrotli**: a Windows DLL (x86 and x64) that contains original unchanged C/C++ brotli code.
* **Brotli**: a Windows Console Application (Any Cpu) written in C# that contains P/Invoke interop code for WinBrotli.
To reuse the WinBrotli DLL, just copy WinBrotli.x64.dll and WinBrotli.x86.dll (you can find already built release versions in the *WinBrotli/binaries* folder) alongside your .NET application, and incorporate the *BrotliCompression.cs* file in your C# project (or port it to VB or another language if C# is not your favorite language). The interop code will automatically pick the DLL corresponding to the current process' bitness (X86 or X64).
Once you've done that, using it is fairly simple (input and output can be file paths or standard .NET Streams):
```
// compress
BrotliCompression.Compress(input, output);
// decompress
BrotliCompression.Decompress(input, output);
```
To create WinBrotli, here's what I've done (for others that would want to use other Visual Studio versions)
* Created a standard DLL project, removed the precompiled header
* Included all encoder and decoder original brotli C/C++ files (never changed anything in there, so we can update the original files when needed)
* Configured the project to remove dependencies on MSVCRT (so we don't need to deploy other DLL)
* Disabled the 4146 warning (otherwise we just can't compile)
* Added a very standard dllmain.cpp file that does nothing special
* Added a WinBrotli.cpp file that exposes brotli compression and decompression code to the outside Windows world (with a very thin adaptation layer, so it's easier to interop in .NET)
* Added a WinBrotli.def file that exports 4 functions |
Preprocessing in scikit learn - single sample - Depreciation warning | 35,082,140 | 8 | 2016-01-29T10:25:53Z | 35,082,270 | 9 | 2016-01-29T10:32:16Z | [
"python",
"scikit-learn",
"deprecation-warning"
] | On a fresh installation of Anaconda under Ubuntu... I am preprocessing my data in various ways prior to a classification task using Scikit-Learn.
```
from sklearn import preprocessing
scaler = preprocessing.MinMaxScaler().fit(train)
train = scaler.transform(train)
test = scaler.transform(test)
```
This all works fine but if I have a new sample (temp below) that I want to classify (and thus I want to preprocess in the same way then I get
```
temp = [1,2,3,4,5,5,6,....................,7]
temp = scaler.transform(temp)
```
Then I get a deprecation warning...
```
DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17
and will raise ValueError in 0.19. Reshape your data either using
X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1)
if it contains a single sample.
```
So the question is how should I be rescaling such a single sample?
I suppose an alternative (not very good one) would be...
```
temp = [temp, temp]
temp = scaler.transform(temp)
temp = temp[0]
```
But I'm sure there are better ways. | Well, it actually looks like the warning is telling you what to do.
As part of [`sklearn.pipeline` stages' uniform interfaces](http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html), as a rule of thumb:
* when you see `X`, it should be an `np.array` with two dimensions
* when you see `y`, it should be an `np.array` with a single dimension.
Here, therefore, you should consider the following:
```
temp = [1,2,3,4,5,5,6,....................,7]
# This makes it into a 2d array
temp = np.array(temp).reshape((len(temp), 1))
temp = scaler.transform(temp)
``` |
Subtraction over a list of sets | 35,093,304 | 11 | 2016-01-29T20:18:31Z | 35,093,454 | 7 | 2016-01-29T20:27:42Z | [
"python",
"algorithm",
"list",
"set",
"list-comprehension"
] | Given a list of sets:
```
allsets = [set([1, 2, 4]), set([4, 5, 6]), set([4, 5, 7])]
```
What is a pythonic way to compute the corresponding list of sets of elements having no overlap with other sets?
```
only = [set([1, 2]), set([6]), set([7])]
```
Is there a way to do this with a list comprehension? | Yes it can be done but is hardly pythonic
```
>>> [(i-set.union(*[j for j in allsets if j!= i])) for i in allsets]
[set([1, 2]), set([6]), set([7])]
```
Some reference on sets can be found [in the documentation](https://docs.python.org/3/library/stdtypes.html#set). The `*` operator is called [unpacking operator](https://docs.python.org/3/tutorial/controlflow.html#unpacking-argument-lists). |
Subtraction over a list of sets | 35,093,304 | 11 | 2016-01-29T20:18:31Z | 35,093,513 | 19 | 2016-01-29T20:31:37Z | [
"python",
"algorithm",
"list",
"set",
"list-comprehension"
] | Given a list of sets:
```
allsets = [set([1, 2, 4]), set([4, 5, 6]), set([4, 5, 7])]
```
What is a pythonic way to compute the corresponding list of sets of elements having no overlap with other sets?
```
only = [set([1, 2]), set([6]), set([7])]
```
Is there a way to do this with a list comprehension? | To avoid quadratic runtime, you'd want to make an initial pass to figure out which elements appear in more than one set:
```
import itertools
import collections
element_counts = collections.Counter(itertools.chain.from_iterable(allsets))
```
Then you can simply make a list of sets retaining all elements that only appear once:
```
nondupes = [{elem for elem in original if element_counts[elem] == 1}
for original in allsets]
```
---
Alternatively, instead of constructing `nondupes` from `element_counts` directly, we can make an additional pass to construct a set of all elements that appear in exactly one input. This requires an additional statement, but it allows us to take advantage of the `&` operator for set intersection to make the list comprehension shorter and more efficient:
```
element_counts = collections.Counter(itertools.chain.from_iterable(allsets))
all_uniques = {elem for elem, count in element_counts.items() if count == 1}
# ^ viewitems() in Python 2.7
nondupes = [original & all_uniques for original in allsets]
```
Timing seems to indicate that using an `all_uniques` set produces a substantial speedup for the overall duplicate-elimination process. It's up to about a [3.5x speedup](http://ideone.com/NUApHy) on Python 3 for heavily-duplicate input sets, though only about a [30% speedup](http://ideone.com/8b70l4) for the overall duplicate-elimination process on Python 2 due to more of the runtime being dominated by constructing the Counter. This speedup is fairly substantial, though not nearly as important as avoiding quadratic runtime by using `element_counts` in the first place. If you're on Python 2 and this code is speed-critical, you'd want to use an ordinary `dict` or a `collections.defaultdict` instead of a `Counter`.
Another way would be to construct a `dupes` set from `element_counts` and use `original - dupes` instead of `original & all_uniques` in the list comprehension, as [suggested](http://stackoverflow.com/a/35093930/2357112) by munk. Whether this performs better or worse than using an `all_uniques` set and `&` would depend on the degree of duplication in your input and what Python version you're on, but it [doesn't](http://ideone.com/ugvQdw) [seem](http://ideone.com/7ddxZA) to make much of a difference either way. |
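A minimal sketch of that `dupes`-based variant (variable names are illustrative):

```python
import collections
import itertools

allsets = [{1, 2, 4}, {4, 5, 6}, {4, 5, 7}]

element_counts = collections.Counter(itertools.chain.from_iterable(allsets))
# Collect every element that appears in more than one input set...
dupes = {elem for elem, count in element_counts.items() if count > 1}
# ...and subtract it from each original set.
nondupes = [original - dupes for original in allsets]
print(nondupes)  # [{1, 2}, {6}, {7}]
```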
TensorFlow operator overloading | 35,094,899 | 7 | 2016-01-29T22:06:41Z | 35,095,052 | 15 | 2016-01-29T22:17:44Z | [
"python",
"machine-learning",
"tensorflow"
] | What is the difference between
```
tf.add(x, y)
```
and
```
x + y
```
in TensorFlow? What would be different in your computation graph when you construct your graph with `+` instead of `tf.add()`?
More generally, are `+` or other operations overloaded for tensors? | If at least one of `x` or `y` is a [`tf.Tensor`](https://www.tensorflow.org/versions/0.6.0/api_docs/python/framework.html#Tensor) object, the expressions `tf.add(x, y)` and `x + y` are equivalent. The main reason you might use `tf.add()` is to specify an explicit [`name`](https://www.tensorflow.org/versions/0.6.0/api_docs/python/math_ops.html#add) keyword argument for the created op, which is not possible with the overloaded operator version.
Note that if neither `x` nor `y` is a `tf.Tensor`—for example if they are NumPy arrays—then `x + y` will not create a TensorFlow op. `tf.add()` always creates a TensorFlow op and converts its arguments to `tf.Tensor` objects. Therefore, if you are writing a library function that might accept both tensors and NumPy arrays, you might prefer to use `tf.add()`.
The following operators are overloaded in the TensorFlow Python API:
* `__neg__` (unary `-`)
* `__abs__` (`abs()`)
* `__invert__` (unary `~`)
* `__add__` (binary `+`)
* `__sub__` (binary `-`)
* `__mul__` (binary elementwise `*`)
* `__div__` (binary `/` in Python 2)
* `__floordiv__` (binary `//` in Python 3)
* `__truediv__` (binary `/` in Python 3)
* `__mod__` (binary `%`)
* `__pow__` (binary `**`)
* `__and__` (binary `&`)
* `__or__` (binary `|`)
* `__xor__` (binary `^`)
* `__lt__` (binary `<`)
* `__le__` (binary `<=`)
* `__gt__` (binary `>`)
* `__ge__` (binary `>=`) |
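The overloading mechanism itself is plain Python: a class defines the corresponding dunder methods. A toy sketch (not TensorFlow code) of how a tensor-like class could hook `+`:

```python
class MiniTensor:
    """Toy stand-in for a tensor that records ops instead of computing them."""
    def __init__(self, name):
        self.name = name

    def __add__(self, other):
        # x + y dispatches here when x is a MiniTensor
        return MiniTensor("add(%s, %s)" % (self.name, getattr(other, "name", other)))

    __radd__ = __add__  # so plain_value + tensor also works

x = MiniTensor("x")
y = MiniTensor("y")
print((x + y).name)  # add(x, y)
print((1 + x).name)  # add(x, 1)
```

This mirrors why `x + y` only creates a TensorFlow op when at least one operand is a `tf.Tensor`: if neither operand defines these methods on a tensor type, Python never reaches TensorFlow's overloads.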
Return Subset of List that Matches Condition | 35,101,426 | 3 | 2016-01-30T11:51:22Z | 35,101,441 | 10 | 2016-01-30T11:52:55Z | [
"python",
"list",
"python-2.7",
"set",
"list-comprehension"
] | Let's say I have a list of `int`s:
```
listOfNumbers = range(100)
```
And I want to return a list of the elements that meet a certain condition, say:
```
def meetsCondition(element):
return bool(element != 0 and element % 7 == 0)
```
What's a Pythonic way to return a sub-`list` of elements in a `list` for which `meetsCondition(element)` is `True`?
A naive approach:
```
def subList(inputList):
outputList = []
for element in inputList:
if meetsCondition(element):
outputList.append(element)
return outputList
divisibleBySeven = subList(listOfNumbers)
```
Is there a simple way to do this, perhaps with a list comprehension or `set()` functions, and without the temporary outputList? | Use list comprehension,
```
divisibleBySeven = [num for num in inputList if num != 0 and num % 7 == 0]
```
or you can use the `meetsCondition` also,
```
divisibleBySeven = [num for num in inputList if meetsCondition(num)]
```
you can actually write the same condition with Python's [truthy](https://docs.python.org/3/library/stdtypes.html?highlight=truth#truth-value-testing) semantics, like this
```
divisibleBySeven = [num for num in inputList if num and not num % 7]
```
---
alternatively, you can use `filter` function with your `meetsCondition`, like this
```
divisibleBySeven = filter(meetsCondition, inputList)
``` |
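One caveat worth adding: in Python 3, `filter` returns a lazy iterator rather than a list, so you may need to wrap it — a quick sketch:

```python
def meets_condition(num):
    return num != 0 and num % 7 == 0

nums = range(100)
result = filter(meets_condition, nums)
# In Python 3 this is a filter object, not a list:
divisible_by_seven = list(result)
print(divisible_by_seven[:3])  # [7, 14, 21]
```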
NLTK ViterbiParser fails in parsing words that are not in the PCFG rule | 35,103,191 | 12 | 2016-01-30T14:50:49Z | 35,107,254 | 7 | 2016-01-30T21:20:14Z | [
"python",
"nlp",
"nltk",
"context-free-grammar",
"viterbi"
] | ```
import nltk
from nltk.parse import ViterbiParser
def pcfg_chartparser(grammarfile):
f=open(grammarfile)
grammar=f.read()
f.close()
return nltk.PCFG.fromstring(grammar)
grammarp = pcfg_chartparser("wsjp.cfg")
VP = ViterbiParser(grammarp)
print VP
for w in sent:
for tree in VP.parse(nltk.word_tokenize(w)):
print tree
```
When I run the above code, it produces the following output for the sentence, "turn off the lights"-
> (S
> (VP (VB turn) (PRT (RP off)) (NP (DT the) (NNS lights)))) (p=2.53851e-14)
However, it raises the following error for the sentence, "please turn off the lights"-
> ValueError: Grammar does not cover some of the input words: u"'please'"
I am building a ViterbiParser by supplying it a probabilistic context free grammar. It works well in parsing sentences that have words which are already in the rules of the grammar. It fails to parse sentences in which the Parser has not seen the word in the grammar rules. How to get around this limitation?
I am referring to this [assignment](http://web.mit.edu/6.863/www/fall2012/labs/assignment5.pdf). | Firstly, try to use (i) namespaces and (ii) unequivocal variable names, e.g.:
```
>>> from nltk import PCFG
>>> from nltk.parse import ViterbiParser
>>> import urllib.request
>>> response = urllib.request.urlopen('https://raw.githubusercontent.com/salmanahmad/6.863/master/Labs/Assignment5/Code/wsjp.cfg')
>>> wsjp = response.read().decode('utf8')
>>> grammar = PCFG.fromstring(wsjp)
>>> parser = ViterbiParser(grammar)
>>> list(parser.parse('turn off the lights'.split()))
[ProbabilisticTree('S', [ProbabilisticTree('VP', [ProbabilisticTree('VB', ['turn']) (p=0.002082678), ProbabilisticTree('PRT', [ProbabilisticTree('RP', ['off']) (p=0.1089101771)]) (p=0.10768769667270556), ProbabilisticTree('NP', [ProbabilisticTree('DT', ['the']) (p=0.7396712852), ProbabilisticTree('NNS', ['lights']) (p=4.61672e-05)]) (p=4.4236397464693323e-07)]) (p=1.0999324002161311e-13)]) (p=2.5385077255727538e-14)]
```
If we look at the grammar:
```
>>> grammar.check_coverage('please turn off the lights'.split())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.4/dist-packages/nltk/grammar.py", line 631, in check_coverage
"input words: %r." % missing)
ValueError: Grammar does not cover some of the input words: "'please'".
```
**To resolve the unknown word issues, there're several options**:
* **Use `wildcard` non-terminal nodes to replace the unknown words**. Find some way to replace the words that the grammar doesn't cover from `check_coverage()` with the `wildcard`, then parse the sentence with the wildcard
* + this will usually decrease the parser's accuracy unless you have specifically trained the PCFG with a grammar that handles unknown words and the wildcard is a superset of the unknown words.
* Go back to your grammar production file that you have before creating the learning the PCFG with [`learn_pcfg.py`](https://github.com/salmanahmad/6.863/blob/master/Labs/Assignment5/Code/learn_pcfg.py) and **add all possible words in the terminal productions**.
* **Add the unknown words into your PCFG grammar and then renormalize the weights**, giving very small weights to the unknown words (you can also try smarter smoothing/interpolation techniques)
Since this is a homework question I will not give the answer with the full code. But the hints above should be enough to resolve the problem. |
Delete rest of string after n-th occurence | 35,109,927 | 3 | 2016-01-31T03:14:50Z | 35,109,928 | 10 | 2016-01-31T03:15:54Z | [
"python",
"regex",
"string",
"python-2.7",
"python-2.x"
] | I have the following string:
```
a = "this.is.a.string"
```
I wish to delete everything after the 3rd '.' symbol so that it returns
```
trim(a)
>>> "this.is.a"
```
while a string without the 3rd '.' should return itself.
This answer ([How to remove all characters after a specific character in python?](http://stackoverflow.com/questions/904746/how-to-remove-all-characters-after-a-specific-character-in-python)) was the closest solution I could find, however I don't think `split` would help me this time. | [`.split()`](https://docs.python.org/2/library/stdtypes.html#str.split) by the `dot` and then [`.join()`](https://docs.python.org/2/library/stdtypes.html#str.join):
```
>>> ".".join(a.split(".")[:3])
'this.is.a'
```
You may also specify the `maxsplit` argument since you need only 3 "slices" (note, though, that this variant no longer returns strings with fewer than three dots unchanged, since `[:-1]` always drops the final part):
> If `maxsplit` is given, at most `maxsplit` splits are done (thus, the list will have at most `maxsplit+1` elements).
```
>>> ".".join(a.split(".", 3)[:-1])
'this.is.a'
``` |
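One property of the first form worth spelling out: it degrades gracefully when the string has fewer than three dots, which is exactly what the question asked for — a sketch:

```python
def trim(a):
    # Keep at most the first three dot-separated parts.
    return ".".join(a.split(".")[:3])

print(trim("this.is.a.string"))  # this.is.a
print(trim("no.third-dot"))      # no.third-dot (fewer parts: returned unchanged)
print(trim("plain"))             # plain
```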
How does Python's Twisted Reactor work? | 35,111,265 | 8 | 2016-01-31T07:03:33Z | 35,119,594 | 13 | 2016-01-31T21:19:50Z | [
"python",
"twisted",
"event-loop",
"reactor"
] | Recently, I've been diving into the Twisted docs. From what I gathered, the basis of Twisted's functionality is the result of it's event loop called the "Reactor". The reactor listens for certain events and dispatches them to registered callback functions that have been designed to handle these events. In the book, there is some pseudo code describing what the Reactor does but I'm having trouble understanding it, it just doesn't make any sense to me.
```
while True:
timeout = time_until_next_timed_event()
events = wait_for_events(timeout)
events += timed_events_until(now())
for event in events:
event.process()
```
What does this mean? | In case it's not obvious, It's called the *reactor* because it *reacts to
things*. The loop is *how* it reacts.
One line at a time:
```
while True:
```
It's not *actually* `while True`; it's more like `while not loop.stopped`. You can call `reactor.stop()` to stop the loop, and (after performing some shut-down logic) the loop will in fact exit. But it is portrayed in the example as `while True` because when you're writing a long-lived program (as you often are with Twisted) it's best to assume that your program will either crash or run forever, and that "cleanly exiting" is not really an option.
```
timeout = time_until_next_timed_event()
```
If we were to expand this calculation a bit, it might make more sense:
```
def time_until_next_timed_event():
now = time.time()
timed_events.sort(key=lambda event: event.desired_time)
soonest_event = timed_events[0]
return soonest_event.desired_time - now
```
`timed_events` is the list of events scheduled with `reactor.callLater`; i.e. the functions that the application has asked for Twisted to run at a particular time.
```
events = wait_for_events(timeout)
```
This line here is the "magic" part of Twisted. I can't expand `wait_for_events` in a general way, because its implementation depends on exactly how the operating system makes the desired events available. And, given that operating systems are complex and tricky beasts, I can't expand on it in a specific way while keeping it simple enough for an answer to your question.
What this function is intended to mean is, ask the operating system, or a Python wrapper around it, to block, until one or more of the objects previously registered with it - at a minimum, stuff like listening ports and established connections, but also possibly things like buttons that might get clicked on - is "ready for work". The work might be reading some bytes out of a socket when they arrive from the network. The work might be writing bytes to the network when a buffer empties out sufficiently to do so. It might be accepting a new connection or disposing of a closed one. Each of these possible events are functions that the reactor might call on your objects: `dataReceived`, `buildProtocol`, `resumeProducing`, etc, that you will learn about if you go through the full Twisted tutorial.
Once we've got our list of hypothetical "event" objects, each of which has an imaginary "`process`" method (the exact names of the methods are different in the reactor just due to accidents of history), we then go back to dealing with time:
```
events += timed_events_until(now())
```
First, this is assuming `events` is simply a `list` of an abstract `Event` class, which has a `process` method that each specific type of event needs to fill out.
At this point, the loop has "woken up", because `wait_for_events` stopped blocking. However, we don't know how many timed events we might need to execute based on *how long it was "asleep" for*. We might have slept for the full timeout if nothing was going on, but if lots of connections were active we might have slept for effectively no time at all. So we check the current time ("`now()`"), and we add to the list of events we need to process, every timed event with a `desired_time` that is at, or before, the present time.
Finally,
```
for event in events:
event.process()
```
This just means that Twisted goes through the list of things that it has to do and does them. In reality of course it handles exceptions around each event, and the concrete implementation of the reactor often just calls straight into an event handler rather than creating an `Event`-like object to record the work that needs to be done first, but conceptually this is just what happens. `event.process` here might mean calling `socket.recv()` and then `yourProtocol.dataReceived` with the result, for example.
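The pseudo code can be made concrete with a toy, single-purpose loop that handles only timed events — real reactors delegate the I/O-waiting part to `select()`/`epoll`/`kqueue`, and the names here are illustrative, not Twisted's:

```python
import heapq
import itertools
import time

timed_events = []            # heap of (desired_time, tie_breaker, callback)
counter = itertools.count()  # tie-breaker so callbacks are never compared

def call_later(delay, callback):
    heapq.heappush(timed_events, (time.time() + delay, next(counter), callback))

def run():
    while timed_events:                     # stands in for "while not stopped"
        now = time.time()
        desired_time, _, callback = timed_events[0]
        if desired_time > now:
            time.sleep(desired_time - now)  # stands in for wait_for_events(timeout)
        heapq.heappop(timed_events)
        callback()                          # event.process()

fired = []
call_later(0.02, lambda: fired.append("second"))
call_later(0.01, lambda: fired.append("first"))
run()
print(fired)  # ['first', 'second']
```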
I hope this expanded explanation helps you get your head around it. If you'd like to learn more about Twisted by working on it, I'd encourage you to [join the mailing list](http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python), hop on to the IRC channel, `#twisted` to talk about applications or `#twisted-dev` to work on Twisted itself, both on [Freenode](https://freenode.net). |
Dot notation string manipulation | 35,118,265 | 17 | 2016-01-31T19:20:50Z | 35,118,303 | 19 | 2016-01-31T19:24:24Z | [
"python",
"string"
] | Is there a way to manipulate a string in Python using the following ways?
For any string that is stored in dot notation, for example:
```
s = "classes.students.grades"
```
Is there a way to change the string to the following:
```
"classes.students"
```
Basically, remove everything up to and including the last period. So `"restaurants.spanish.food.salty"` would become `"restaurants.spanish.food"`.
Additionally, is there any way to identify what comes after the last period? The reason I want to do this is I want to use `isDigit()`.
So, if it was `classes.students.grades.0` could I grab the `0` somehow, so I could use an if statement with `isdigit`, and say if the part of the string after the last period (so `0` in this case) is a digit, remove it, otherwise, leave it. | you can use [**`split`**](https://docs.python.org/2/library/string.html#string.split) and [**`join`**](https://docs.python.org/2/library/string.html#string.join) together:
```
s = "classes.students.grades"
print '.'.join(s.split('.')[:-1])
```
You are splitting the string on `.` - it'll give you a list of strings, after that you are joining the list elements back to string separating them by `.`
`[:-1]` will pick all the elements from the list but the last one
To check what comes after the last `.`:
```
s.split('.')[-1]
```
Another way is to use [**`rsplit`**](https://docs.python.org/2/library/string.html#string.rsplit). It works the same way as **split**, but if you provide the **maxsplit** parameter it'll split the string starting from the end:
```
rest, last = s.rsplit('.', 1)
'classes.students'
'grades'
```
You can also use [**`re.sub`**](https://docs.python.org/2/library/re.html#re.sub) to substitute the part after the last `.` with an empty string:
```
re.sub(r'\.[^.]+$', '', s)
```
And the last part of your question to wrap words in `[]` i would recommend to use [**`format`**](https://docs.python.org/2/library/string.html#string.Formatter.format) and [**`list comprehension`**](https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions):
```
''.join("[{}]".format(e) for e in s.split('.'))
```
It'll give you the desired output:
```
[classes][students][grades]
``` |
Dot notation string manipulation | 35,118,265 | 17 | 2016-01-31T19:20:50Z | 35,118,414 | 13 | 2016-01-31T19:32:43Z | [
"python",
"string"
] | Is there a way to manipulate a string in Python using the following ways?
For any string that is stored in dot notation, for example:
```
s = "classes.students.grades"
```
Is there a way to change the string to the following:
```
"classes.students"
```
Basically, remove everything up to and including the last period. So `"restaurants.spanish.food.salty"` would become `"restaurants.spanish.food"`.
Additionally, is there any way to identify what comes after the last period? The reason I want to do this is I want to use `isDigit()`.
So, if it was `classes.students.grades.0` could I grab the `0` somehow, so I could use an if statement with `isdigit`, and say if the part of the string after the last period (so `0` in this case) is a digit, remove it, otherwise, leave it. | if `'.' in s`, `s.rpartition('.')` finds last dot in `s`,
and returns `(before_last_dot, dot, after_last_dot)`:
```
s = "classes.students.grades"
s.rpartition('.')[0]
``` |
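It is worth noting what happens when there is no dot — `rpartition` then returns two empty strings and the original, which is why the answer guards with `'.' in s`:

```python
s = "classes.students.grades"
print(s.rpartition('.'))         # ('classes.students', '.', 'grades')
print("nodots".rpartition('.'))  # ('', '', 'nodots')

# A guard-free variant (falls back to the whole string when no dot is found):
head = s.rpartition('.')[0] or s
print(head)  # classes.students
```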
Dot notation string manipulation | 35,118,265 | 17 | 2016-01-31T19:20:50Z | 35,118,430 | 15 | 2016-01-31T19:34:17Z | [
"python",
"string"
] | Is there a way to manipulate a string in Python using the following ways?
For any string that is stored in dot notation, for example:
```
s = "classes.students.grades"
```
Is there a way to change the string to the following:
```
"classes.students"
```
Basically, remove everything up to and including the last period. So `"restaurants.spanish.food.salty"` would become `"restaurants.spanish.food"`.
Additionally, is there any way to identify what comes after the last period? The reason I want to do this is I want to use `isDigit()`.
So, if it was `classes.students.grades.0` could I grab the `0` somehow, so I could use an if statement with `isdigit`, and say if the part of the string after the last period (so `0` in this case) is a digit, remove it, otherwise, leave it. | The best way to do this is using the [`rsplit`](https://docs.python.org/3.5/library/stdtypes.html?highlight=rpartition#str.rsplit) method and pass in the `maxsplit` argument.
```
>>> s = "classes.students.grades"
>>> before, after = s.rsplit('.', maxsplit=1)  # in Python 2.x use s.rsplit('.', 1); the maxsplit keyword is Python 3 only
>>> before
'classes.students'
>>> after
'grades'
```
You can also use the [`rfind()`](https://docs.python.org/3.5/library/stdtypes.html?highlight=rfind#str.rfind) method with normal slice operation.
To get everything before last `.`:
```
>>> s = "classes.students.grades"
>>> last_index = s.rfind('.')
>>> s[:last_index]
'classes.students'
```
Then everything after last `.`
```
>>> s[last_index + 1:]
'grades'
``` |
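For the second part of the question (dropping a trailing numeric component), the same tools combine with `isdigit()` — a sketch, with an illustrative helper name:

```python
def strip_trailing_digit(s):
    # Split off the part after the last dot; drop it only if it is all digits.
    before, sep, after = s.rpartition('.')
    if sep and after.isdigit():
        return before
    return s

print(strip_trailing_digit("classes.students.grades.0"))  # classes.students.grades
print(strip_trailing_digit("classes.students.grades"))    # classes.students.grades
```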
How to pass a list as an input of a function in Python | 35,120,620 | 10 | 2016-01-31T23:01:57Z | 35,120,654 | 15 | 2016-01-31T23:05:44Z | [
"python",
"arrays",
"list",
"function"
] | I am using Python, and I have a function which takes a list as the argument. For example, I am using the following syntax,
```
def square(x,result= []):
for y in x:
result.append=math.pow(y,2.0)
return result
print(square([1,2,3]))
```
and the output is `[1]` only, whereas I am supposed to get `[1,4,9]`.
What am I doing wrong? | Why not side-step the problem altogether?
```
def square(vals):
return [v*v for v in vals]
```
---
**Edit:** The first problem, as several people have pointed out, is that you are short-circuiting your `for` loop. Your `return` should come *after* the loop, not in it.
The next problem is your use of `list.append` - you need to call it, not assign to it, i.e. `result.append(y*y)`. `result.append = y*y` instead tries to overwrite the method with a numeric value, which raises an `AttributeError` because built-in `list` attributes are read-only.
Once you fix that, you will find another less obvious error occurs if you call your function repeatedly:
```
print(square([1,2,3]))  # => [1, 4, 9]
print(square([1,2,3]))  # => [1, 4, 9, 1, 4, 9]
```
Because you pass a mutable item (a list) as a default, all further use of that default item points back to *the same original list*.
Instead, try
```
def square(vals, result=None):
if result is None:
result = []
result.extend(v*v for v in vals)
return result
``` |
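With the `None` sentinel in place, repeated calls no longer share state — a quick check:

```python
def square(vals, result=None):
    if result is None:
        result = []
    result.extend(v * v for v in vals)
    return result

print(square([1, 2, 3]))  # [1, 4, 9]
print(square([1, 2, 3]))  # [1, 4, 9] -- a fresh list each call, no accumulation
```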
How to pass a list as an input of a function in Python | 35,120,620 | 10 | 2016-01-31T23:01:57Z | 35,120,655 | 13 | 2016-01-31T23:05:54Z | [
"python",
"arrays",
"list",
"function"
] | I am using Python, and I have a function which takes a list as the argument. For example, I am using the following syntax,
```
def square(x,result= []):
for y in x:
result.append=math.pow(y,2.0)
return result
print(square([1,2,3]))
```
and the output is `[1]` only where I am supposed to get `[1,4,9]`.
What am I doing wrong? | You are currently returning a value from your function in the first iteration of your `for` loop. Because of this, the second and third iteration of your `for` loop never take place. You need to move your `return` statement outside of the loop as follows:
```
import math
def square(x):
result = []
for y in x:
result.append(math.pow(y,2.0))
return result
print(square([1,2,3]))
```
**Output**
```
[1.0, 4.0, 9.0]
``` |
Why do 3 backslashes equal 4 in a Python string? | 35,121,288 | 86 | 2016-02-01T00:25:21Z | 35,121,348 | 82 | 2016-02-01T00:33:47Z | [
"python",
"python-2.7"
] | Could you tell me why `'?\\\?'=='?\\\\?'` gives `True`? That drives me crazy and I can't find a reasonable answer...
```
>>> list('?\\\?')
['?', '\\', '\\', '?']
>>> list('?\\\\?')
['?', '\\', '\\', '?']
``` | Basically, because python is slightly lenient in backslash processing. Quoting from <https://docs.python.org/2.0/ref/strings.html> :
> Unlike Standard C, all unrecognized escape sequences are left in the string unchanged, i.e., *the backslash is left in the string*.
(Emphasis in the original)
Therefore, in python, it isn't that three backslashes are equal to four, it's that when you follow backslash with a character like `?`, the two together come through as two characters, because `\?` is not a recognized escape sequence. |
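This is easy to verify by measuring lengths — the `\?` pair (an unrecognized escape, which newer CPython versions flag with a `SyntaxWarning`) comes through as two characters:

```python
s = '?\\\?'   # escaped backslash, then the lone \? pair survives as-is
t = '?\\\\?'  # two explicitly escaped backslashes
print(len(s), len(t))  # 4 4
print(s == t)          # True
print(list(s))         # ['?', '\\', '\\', '?']
```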
Why do 3 backslashes equal 4 in a Python string? | 35,121,288 | 86 | 2016-02-01T00:25:21Z | 35,121,350 | 12 | 2016-02-01T00:33:53Z | [
"python",
"python-2.7"
] | Could you tell me why `'?\\\?'=='?\\\\?'` gives `True`? That drives me crazy and I can't find a reasonable answer...
```
>>> list('?\\\?')
['?', '\\', '\\', '?']
>>> list('?\\\\?')
['?', '\\', '\\', '?']
``` | Because `\x` in a character string, when `x` is not one of the special backslashable characters like `n`, `r`, `t`, `0`, etc, evaluates to a string with a backslash and then an `x`.
```
>>> '\?'
'\\?'
``` |
Why do 3 backslashes equal 4 in a Python string? | 35,121,288 | 86 | 2016-02-01T00:25:21Z | 35,121,689 | 28 | 2016-02-01T01:20:37Z | [
"python",
"python-2.7"
] | Could you tell me why `'?\\\?'=='?\\\\?'` gives `True`? That drives me crazy and I can't find a reasonable answer...
```
>>> list('?\\\?')
['?', '\\', '\\', '?']
>>> list('?\\\\?')
['?', '\\', '\\', '?']
``` | This is because backslash acts as an escape character for the character(s) immediately following it, if the combination represents a valid escape sequence. The dozen or so escape sequences are [listed here](https://docs.python.org/2/reference/lexical_analysis.html?highlight=literal#string-literals). They include the obvious ones such as newline `\n`, horizontal tab `\t`, carriage return `\r` and more obscure ones such as named unicode characters using `\N{...}`, e.g. `\N{WAVY DASH}` which represents unicode character `\u3030`. The key point though is that if the escape sequence is not known, the character sequence is left in the string as is.
Part of the problem might also be that the Python interpreter output is misleading you. This is because the backslashes are escaped when displayed. However, if you *print* those strings, you will see the extra backslashes disappear.
```
>>> '?\\\?'
'?\\\\?'
>>> print('?\\\?')
?\\?
>>> '?\\\?' == '?\\?' # I don't know why you think this is True???
False
>>> '?\\\?' == r'?\\?' # but if you use a raw string for '?\\?'
True
>>> '?\\\\?' == '?\\\?' # this is the same string... see below
True
```
For your specific examples, in the first case `'?\\\?'`, the first `\` escapes the second backslash leaving a single backslash, but the third backslash remains as a backslash because `\?` is not a valid escape sequence. Hence the resulting string is `?\\?`.
For the second case `'?\\\\?'`, the first backslash escapes the second, and the third backslash escapes the fourth which results in the string `?\\?`.
So that's why three backslashes is the same as four:
```
>>> '?\\\?' == '?\\\\?'
True
```
If you want to create a string with 3 backslashes you can escape each backslash:
```
>>> '?\\\\\\?'
'?\\\\\\?'
>>> print('?\\\\\\?')
?\\\?
```
or you might find "raw" strings more understandable:
```
>>> r'?\\\?'
'?\\\\\\?'
>>> print(r'?\\\?')
?\\\?
```
This turns off escape sequence processing for the string literal. See [String Literals](https://docs.python.org/2/reference/lexical_analysis.html?highlight=literal#string-literals) for more details. |
Unable to run odoo properly in Mac OS X | 35,122,765 | 15 | 2016-02-01T03:53:35Z | 35,303,294 | 12 | 2016-02-09T22:17:14Z | [
"python",
"osx",
"postgresql",
"openerp"
] | I have installed Odoo 9 Community version from Git in my Mac OS X El Capitan 10.11.2, all my steps:
```
python --version
Python 2.7.10
git clone https://github.com/odoo/odoo.git
Checking out files: 100% (20501/20501), done.
```
Installed [PostgresApp](http://postgresapp.com/) into `Applications` and added path in `~/.bash_profile`, executed the same.
```
export PATH=$PATH:/Applications/Postgres.app/Contents/Versions/latest/bin
```
Installed pip
```
sudo easy_install pip
Finished processing dependencies for pip
```
I have `nodejs` installed in my system,
```
node -v
v5.0.0
npm -v
3.3.9
```
Installed `less` and `less-plugin-clean-css`
```
sudo npm install -g less less-plugin-clean-css
```
I have latest xcode installed,
```
xcode-select --install
xcode-select: error: command line tools are already installed, use "Software Update" to install updates
```
I have homebrew installed,
```
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
It appears Homebrew is already installed. If your intent is to reinstall you
should do the following before running this installer again:
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/uninstall)"
The current contents of /usr/local are bin Cellar CODEOFCONDUCT.md CONTRIBUTING.md etc include lib Library LICENSE.txt opt README.md sbin share SUPPORTERS.md var .git .gitignore
```
Installed other libs
```
brew install autoconf automake libtool
brew install libxml2 libxslt libevent
```
Installed Python dependencies
```
sudo easy_install -U setuptools
Finished processing dependencies for setuptools
cd odoo/
sudo pip install --user -r requirements.txt
Successfully installed Mako-1.0.1 Pillow-2.7.0 Werkzeug-0.9.6 argparse-1.2.1 lxml-3.4.1 psutil-2.2.0 psycopg2-2.5.4 pyparsing-2.0.1 python-dateutil-1.5 python-ldap-2.4.19 pytz-2013.7 pyusb-1.0.0b2 qrcode-5.1 six-1.4.1
```
Running odoo
```
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
./odoo.py --addons-path=addons --db-filter=mydb
```
It says
```
2016-02-10 16:51:42,351 3389 INFO ? openerp: OpenERP version 9.0c
2016-02-10 16:51:42,351 3389 INFO ? openerp: addons paths: ['/Users/anshad/Library/Application Support/Odoo/addons/9.0', u'/Users/anshad/odoo/addons', '/Users/anshad/odoo/openerp/addons']
2016-02-10 16:51:42,352 3389 INFO ? openerp: database: default@default:default
2016-02-10 16:51:42,444 3389 INFO ? openerp.service.server: HTTP service (werkzeug) running on 0.0.0.0:8069
```
And the browser says `500 500 Internal Server Error`
and in terminal,
```
conn = _connect(dsn, connection_factory=connection_factory, async=async)
OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
```
Started PostgresApp to solve this issue.
Now the database setup window appears without CSS, as in the screenshot below.
Created database `mydbodoo` with password `admin` and navigated to main page `http://localhost:8069/web/`
It shows a blank page with a black header and the Odoo logo, and some errors in the terminal as well.
`ImportError: No module named pyPdf`
```
./odoo.py --addons-path=addons --db-filter=mydb
2016-02-10 17:02:12,220 3589 INFO ? openerp: OpenERP version 9.0c
2016-02-10 17:02:12,220 3589 INFO ? openerp: addons paths: ['/Users/anshad/Library/Application Support/Odoo/addons/9.0', u'/Users/anshad/odoo/addons', '/Users/anshad/odoo/openerp/addons']
2016-02-10 17:02:12,221 3589 INFO ? openerp: database: default@default:default
2016-02-10 17:02:12,314 3589 INFO ? openerp.service.server: HTTP service (werkzeug) running on 0.0.0.0:8069
2016-02-10 17:02:16,855 3589 INFO ? openerp.addons.bus.models.bus: Bus.loop listen imbus on db postgres
2016-02-10 17:02:16,888 3589 INFO ? werkzeug: 127.0.0.1 - - [10/Feb/2016 17:02:16] "GET /web/ HTTP/1.1" 500 -
2016-02-10 17:02:16,895 3589 ERROR ? werkzeug: Error on request:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/werkzeug/serving.py", line 177, in run_wsgi
execute(self.server.app)
File "/Library/Python/2.7/site-packages/werkzeug/serving.py", line 165, in execute
application_iter = app(environ, start_response)
File "/Users/anshad/odoo/openerp/service/server.py", line 245, in app
return self.app(e, s)
File "/Users/anshad/odoo/openerp/service/wsgi_server.py", line 184, in application
return application_unproxied(environ, start_response)
File "/Users/anshad/odoo/openerp/service/wsgi_server.py", line 170, in application_unproxied
result = handler(environ, start_response)
File "/Users/anshad/odoo/openerp/http.py", line 1487, in __call__
self.load_addons()
File "/Users/anshad/odoo/openerp/http.py", line 1508, in load_addons
m = __import__('openerp.addons.' + module)
File "/Users/anshad/odoo/openerp/modules/module.py", line 61, in load_module
mod = imp.load_module('openerp.addons.' + module_part, f, path, descr)
File "/Users/anshad/odoo/addons/document/__init__.py", line 4, in <module>
import models
File "/Users/anshad/odoo/addons/document/models/__init__.py", line 4, in <module>
import ir_attachment
File "/Users/anshad/odoo/addons/document/models/ir_attachment.py", line 8, in <module>
import pyPdf
ImportError: No module named pyPdf
2016-02-10 17:02:17,708 3589 INFO mydbodoo openerp.modules.loading: loading 1 modules...
2016-02-10 17:02:17,716 3589 INFO mydbodoo openerp.modules.loading: 1 modules loaded in 0.01s, 0 queries
2016-02-10 17:02:17,719 3589 INFO mydbodoo openerp.modules.loading: loading 4 modules...
2016-02-10 17:02:17,727 3589 INFO mydbodoo openerp.modules.loading: 4 modules loaded in 0.01s, 0 queries
2016-02-10 17:02:17,899 3589 INFO mydbodoo openerp.modules.loading: Modules loaded.
2016-02-10 17:02:17,900 3589 INFO mydbodoo openerp.addons.base.ir.ir_http: Generating routing map
2016-02-10 17:02:18,249 3589 INFO mydbodoo werkzeug: 127.0.0.1 - - [10/Feb/2016 17:02:18] "GET /web/ HTTP/1.1" 200 -
2016-02-10 17:02:18,308 3589 INFO mydbodoo werkzeug: 127.0.0.1 - - [10/Feb/2016 17:02:18] "GET /web/content/341-42af255/web.assets_common.0.css HTTP/1.1" 304 -
2016-02-10 17:02:18,350 3589 INFO mydbodoo werkzeug: 127.0.0.1 - - [10/Feb/2016 17:02:18] "GET /web/static/src/css/full.css HTTP/1.1" 404 -
2016-02-10 17:02:18,367 3589 INFO mydbodoo werkzeug: 127.0.0.1 - - [10/Feb/2016 17:02:18] "GET /web/content/343-4d5beef/web.assets_backend.0.css HTTP/1.1" 304 -
2016-02-10 17:02:18,411 3589 INFO mydbodoo werkzeug: 127.0.0.1 - - [10/Feb/2016 17:02:18] "GET /web/content/344-4d5beef/web.assets_backend.js HTTP/1.1" 304 -
2016-02-10 17:02:18,428 3589 INFO mydbodoo werkzeug: 127.0.0.1 - - [10/Feb/2016 17:02:18] "GET /web/content/342-42af255/web.assets_common.js HTTP/1.1" 304 -
2016-02-10 17:02:18,663 3589 INFO mydbodoo werkzeug: 127.0.0.1 - - [10/Feb/2016 17:02:18] "GET /web/binary/company_logo HTTP/1.1" 304 -
2016-02-10 17:02:18,838 3589 INFO mydbodoo openerp.service.common: successful login from 'admin' using database 'mydbodoo'
2016-02-10 17:02:18,859 3589 INFO mydbodoo werkzeug: 127.0.0.1 - - [10/Feb/2016 17:02:18] "POST /web/session/get_session_info HTTP/1.1" 200 -
2016-02-10 17:02:18,893 3589 INFO mydbodoo werkzeug: 127.0.0.1 - - [10/Feb/2016 17:02:18] "POST /web/proxy/load HTTP/1.1" 200 -
2016-02-10 17:02:18,915 3589 INFO mydbodoo werkzeug: 127.0.0.1 - - [10/Feb/2016 17:02:18] "POST /web/session/modules HTTP/1.1" 200 -
2016-02-10 17:02:18,945 3589 INFO mydbodoo werkzeug: 127.0.0.1 - - [10/Feb/2016 17:02:18] "POST /web/dataset/search_read HTTP/1.1" 200 -
2016-02-10 17:02:18,945 3589 INFO mydbodoo werkzeug: 127.0.0.1 - - [10/Feb/2016 17:02:18] "POST /web/webclient/translations HTTP/1.1" 200 -
2016-02-10 17:02:18,991 3589 INFO mydbodoo werkzeug: 127.0.0.1 - - [10/Feb/2016 17:02:18] "GET /web/webclient/locale/en_US HTTP/1.1" 500 -
2016-02-10 17:02:18,998 3589 ERROR mydbodoo werkzeug: Error on request:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/werkzeug/serving.py", line 177, in run_wsgi
execute(self.server.app)
File "/Library/Python/2.7/site-packages/werkzeug/serving.py", line 165, in execute
application_iter = app(environ, start_response)
File "/Users/anshad/odoo/openerp/service/server.py", line 245, in app
return self.app(e, s)
File "/Users/anshad/odoo/openerp/service/wsgi_server.py", line 184, in application
return application_unproxied(environ, start_response)
File "/Users/anshad/odoo/openerp/service/wsgi_server.py", line 170, in application_unproxied
result = handler(environ, start_response)
File "/Users/anshad/odoo/openerp/http.py", line 1488, in __call__
return self.dispatch(environ, start_response)
File "/Users/anshad/odoo/openerp/http.py", line 1652, in dispatch
result = ir_http._dispatch()
File "/Users/anshad/odoo/openerp/addons/base/ir/ir_http.py", line 186, in _dispatch
return self._handle_exception(e)
File "/Users/anshad/odoo/openerp/addons/base/ir/ir_http.py", line 157, in _handle_exception
return request._handle_exception(exception)
File "/Users/anshad/odoo/openerp/http.py", line 781, in _handle_exception
return super(HttpRequest, self)._handle_exception(exception)
File "/Users/anshad/odoo/openerp/addons/base/ir/ir_http.py", line 182, in _dispatch
result = request.dispatch()
File "/Users/anshad/odoo/openerp/http.py", line 840, in dispatch
r = self._call_function(**self.params)
File "/Users/anshad/odoo/openerp/http.py", line 316, in _call_function
return checked_call(self.db, *args, **kwargs)
File "/Users/anshad/odoo/openerp/service/model.py", line 118, in wrapper
return f(dbname, *args, **kwargs)
File "/Users/anshad/odoo/openerp/http.py", line 309, in checked_call
result = self.endpoint(*a, **kw)
File "/Users/anshad/odoo/openerp/http.py", line 959, in __call__
return self.method(*args, **kw)
File "/Users/anshad/odoo/openerp/http.py", line 509, in response_wrap
response = f(*args, **kw)
File "/Users/anshad/odoo/addons/web/controllers/main.py", line 505, in load_locale
addons_path = http.addons_manifest['web']['addons_path']
KeyError: 'web'
```
**Screen-shot:Terminal and file system**
[](http://i.stack.imgur.com/6LmTb.png)
**Screen-shot:Database selection window**
[](http://i.stack.imgur.com/WW9nW.png)
**Screen-shot: Main window**
[](http://i.stack.imgur.com/7pqvA.png)
Tried `sudo pip install pyPdf` and it says `Requirement already satisfied (use --upgrade to upgrade): pyPdf in /Users/anshad/Library/Python/2.7/lib/python/site-packages` | I just went through the setup on two systems, one is Mac OS X El Capitan 10.11.2 and another one is my primary OS - Ubuntu 15.04 (where things went much easier, but maybe it is just because I use Ubuntu on daily basis).
Below are installation steps for both systems. Make sure that every command finishes successfully (at least doesn't report any errors).
## Mac OS X El Capitan 10.11.2
Prerequisites: I already had `git` and `python 2.7.10`.
1) Clone odoo repository:
```
git clone https://github.com/odoo/odoo.git
```
2) Download and install `Postgresapp`
* Go to <http://postgresapp.com/>, download
* Open it in Finder, drag to Applications, double click
* Postgres application appears, double click it
* Sorry if these steps are obvious; I am spelling them out since I am not a Mac OS user
Now add to `~/.bash_profile`:
```
export PATH=$PATH:/Applications/Postgres.app/Contents/Versions/latest/bin
```
And execute the command above directly in your current shell if you already have a terminal open.
3) Install `pip`
```
sudo easy_install pip
```
4) Install `nodejs`
* Go to <https://nodejs.org>,
* Download node v4.3.0
* Move to Applications, run and install
* Open terminal and check that `node` and `npm` commands are available
5) Install `less` and `less-plugin-clean-css`
```
sudo npm install -g less less-plugin-clean-css
```
Should show output like this:
```
/usr/local/bin/lessc -> /usr/local/lib/node_modules/less/bin/lessc
[email protected] /usr/local/lib/node_modules/less-plugin-clean-css
└── [email protected] ([email protected], [email protected])
[email protected] /usr/local/lib/node_modules/less
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected] ([email protected])
├── [email protected] ([email protected])
├── [email protected] ([email protected])
├── [email protected] ([email protected])
└── [email protected] ([email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected])
```
6) Install binary dependencies
I think not all the steps below are really necessary, but I performed them, so I include them just in case they were actually needed.
* Run in the terminal `xcode-select --install`, when dialog appears - agree to install
* Go to <http://brew.sh> and follow instructions to install homebrew
Once you have `brew`, run the following in the terminal:
```
brew install autoconf automake libtool
brew install libxml2 libxslt libevent
```
7) Install python dependencies
```
sudo easy_install -U setuptools
pip install --user -r requirements.txt
```
It should show something like this at the end:
```
Successfully installed Babel-1.3 Jinja2-2.7.3 Mako-1.0.1 MarkupSafe-0.23 Pillow-2.7.0 PyYAML-3.11 Python-Chart-1.39 Werkzeug-0.9.6 argparse-1.2.1 beautifulsoup4-4.4.1 decorator-3.4.0 docutils-0.12 feedparser-5.1.3 gdata-2.0.18 gevent-1.0.2 greenlet-0.4.7 jcconv-0.2.3 lxml-3.4.1 mock-1.0.1 ofxparse-0.14 passlib-1.6.2 psutil-2.2.0 psycogreen-1.0 psycopg2-2.5.4 pyPdf-1.13 pydot-1.0.2 pyparsing-2.0.1 pyserial-2.7 python-dateutil-1.5 python-ldap-2.4.19 python-openid-2.2.5 python-stdnum-1.2 pytz-2013.7 pyusb-1.0.0b2 qrcode-5.1 reportlab-3.1.44 requests-2.6.0 six-1.4.1 suds-jurko-0.6 vatnumber-1.2 vobject-0.6.6 xlwt-0.7.5
```
8) Run `odoo`
```
cd odoo # change dir to the folder you cloned odoo to
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
# Re-check parameters, it looks like addons path you used is incorrect
./odoo.py --addons-path=addons --db-filter=mydb
```
Now you should see the output like this:
```
INFO ? openerp: OpenERP version 9.0c
INFO ? openerp: addons paths: ['/Users/dev/Library/Application Support/Odoo/addons/9.0', u'/Users/dev/projects/odoo/addons', '/Users/dev/projects/odoo/openerp/addons']
INFO ? openerp: database: default@default:default
INFO ? openerp.service.server: HTTP service (werkzeug) running on 0.0.0.0:8069
```
9) Open `odoo` in your browser
* Go to <http://localhost:8069>
* The database setup window appears (see the first screenshot below)
* Enter database name = `mydbodoo` (I think the prefix `mydb` is important here, since it has to match the `--db-filter=mydb` option) and password `admin`
* You can also check the checkbox to load the demo data
* Click `Create database`
* Wait and you should be redirected to the `odoo` interface (see the second screenshot)
Done!
[](http://i.stack.imgur.com/0r891.png)
[](http://i.stack.imgur.com/RpcVM.png)
## Update: Mac OS X El Capitan 10.11.2 with virtualenv
Do the same as above, but on step `(7)` do not run `pip install --user -r requirements.txt` and instead do this:
```
pip install virtualenv # not sure here, sudo may be needed
mkdir ~/venv
cd ~/venv
mkdir odoo
virtualenv odoo
source ~/venv/odoo/bin/activate
cd ~/path/to/odoo
pip install -r requirements.txt # no sudo here!
```
Now continue with step `(8)`. Each time, before starting odoo make sure to activate the virtualenv first:
```
source ~/venv/odoo/bin/activate
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
./odoo.py --addons-path=addons --db-filter=mydb
```
## Ubuntu 15.04
Prerequisites: I already had postgresql 9.4.5, nodejs 0.10.25 and python 2.7.8.
Installation:
```
git clone https://github.com/odoo/odoo.git
cd odoo
sudo apt-get install libldap2-dev libsasl2-dev libevent-dev libxslt1-dev libxml2-dev
pip install -r requirements.txt
sudo npm install -g less less-plugin-clean-css
./odoo.py --addons-path=addons --db-filter=mydb
```
That's all. Now set up the database the same way as in step (9) for Mac OS.
Looping through a list that contains dicts and displaying it a certain way | 35,133,736 | 6 | 2016-02-01T15:01:38Z | 35,133,794 | 15 | 2016-02-01T15:05:17Z | [
"python",
"for-loop",
"dictionary"
] | These are 3 dicts I made, each with the same 4 keys but of course different values.
```
lloyd = {
"name": "Lloyd",
"homework": [90.0, 97.0, 75.0, 92.0],
"quizzes": [88.0, 40.0, 94.0],
"tests": [75.0, 90.0]
}
alice = {
"name": "Alice",
"homework": [100.0, 92.0, 98.0, 100.0],
"quizzes": [82.0, 83.0, 91.0],
"tests": [89.0, 97.0]
}
tyler = {
"name": "Tyler",
"homework": [0.0, 87.0, 75.0, 22.0],
"quizzes": [0.0, 75.0, 78.0],
"tests": [100.0, 100.0]
}
```
I stored the dicts in a list.
```
students = [lloyd, alice, tyler]
```
What I'd like to do is loop through the list and display each like so:
```
"""
student's Name: val
student's Homework: val
student's Quizzes: val
student's Tests: val
"""
```
I was thinking a for loop would do the trick `for student in students:` and I could store each in an empty dict `current = {}` but after that is where I get lost. I was going to use `__getitem__` but I didn't think that would work.
Thanks in advance | You can do this:
```
students = [lloyd, alice, tyler]
def print_student(student):
print("""
Student's name: {name}
Student's homework: {homework}
Student's quizzes: {quizzes}
Student's tests: {tests}
""".format(**student)) # unpack the dictionary
for std in students:
print_student(std)
``` |
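The part doing the heavy lifting above is `.format(**student)`: the `**` unpacks the dictionary so each `{name}`-style placeholder is filled from the matching key. A minimal standalone illustration (using a trimmed dict for brevity):

```python
# ** unpacks the dict into keyword arguments for str.format,
# so {name} and {tests} are looked up by key.
lloyd = {"name": "Lloyd", "tests": [75.0, 90.0]}
line = "Student's name: {name}, tests: {tests}".format(**lloyd)
print(line)  # Student's name: Lloyd, tests: [75.0, 90.0]
```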
Table thumbnail_kvstore doesn't exist | 35,136,411 | 8 | 2016-02-01T17:15:18Z | 35,143,733 | 19 | 2016-02-02T01:53:43Z | [
"python",
"django",
"sorl-thumbnail"
] | I can't get the thumbnail displayed in my template. I get this error:
```
django.db.utils.ProgrammingError: (1146, "Table 'ia_website.thumbnail_kvstore' doesn't exist")
```
* Installed sorl\_thumbnail-12.3
* I'm using MariaDB 10.1.11
* I have no migration that are not executed
* I can see the image if I don't use the 'thumbnail' tag
Here is what I did
* In settings.py:
```
INSTALLED_APPS = [
...
'sorl.thumbnail',
]
THUMBNAIL_DEBUG = True
```
* In models.py
```
import sorl
...
image = sorl.thumbnail.ImageField(upload_to='thumbnails', null=True)
```
* In my template
```
{% thumbnail content.image "237x110" as im %}
<img src="{{ im.url }}">
{% endthumbnail %}
``` | So after some research, it looks like the version `12.3` of sorl-thumbnail on PyPI and Github are different!
If you download the source directly from PyPI - <https://pypi.python.org/pypi/sorl-thumbnail/12.3> - you will find that the package doesn't contain any migrations. **This is the reason the table doesn't exist even though you've run all the migrations**.
On Github, the migration file for version `12.3` definitely exists: <https://github.com/mariocesar/sorl-thumbnail/blob/v12.3/sorl/thumbnail/migrations/0001_initial.py>.
You have three options:
1. Create the table using `./manage.py syncdb` (only if you're running Django 1.8 or below)
2. Install directly from Github for version `12.3`
3. Use version `12.4a1` of sorl-thumbnail which includes migrations
You can install from Github directly as follows:
```
pip install git+git://github.com/mariocesar/[email protected]
```
sorl-thumbnail version 12.3 supports up to Django version 1.8, where the syncdb command still exists. If you're running Django 1.8 or lower, you can create the missing table by running
```
python manage.py syncdb
``` |
Table thumbnail_kvstore doesn't exist | 35,136,411 | 8 | 2016-02-01T17:15:18Z | 35,883,910 | 15 | 2016-03-09T05:49:15Z | [
"python",
"django",
"sorl-thumbnail"
] | I can't get the thumbnail displayed in my template. I get this error:
```
django.db.utils.ProgrammingError: (1146, "Table 'ia_website.thumbnail_kvstore' doesn't exist")
```
* Installed sorl\_thumbnail-12.3
* I'm using MariaDB 10.1.11
* I have no migration that are not executed
* I can see the image if I don't use the 'thumbnail' tag
Here is what I did
* In settings.py:
```
INSTALLED_APPS = [
...
'sorl.thumbnail',
]
THUMBNAIL_DEBUG = True
```
* In models.py
```
import sorl
...
image = sorl.thumbnail.ImageField(upload_to='thumbnails', null=True)
```
* In my template
```
{% thumbnail content.image "237x110" as im %}
<img src="{{ im.url }}">
{% endthumbnail %}
``` | If just
```
manage.py makemigrations
```
doesn't create any migrations, try
```
manage.py makemigrations thumbnail
manage.py migrate
```
This will create migrations for thumbnail and then migrate them.
It works for me.
I am using Django 1.9 and sorl.thumbnail 12.3. |
How to install cryptography on ubuntu? | 35,144,550 | 22 | 2016-02-02T03:31:15Z | 35,460,842 | 33 | 2016-02-17T15:25:36Z | [
"python",
"ubuntu",
"cryptography",
"pip"
] | My ubuntu is 14.04 LTS.
When I install cryptography, the error is:
```
Installing egg-scripts.
uses namespace packages but the distribution does not require setuptools.
Getting distribution for 'cryptography==0.2.1'.
no previously-included directories found matching 'documentation/_build'
zip_safe flag not set; analyzing archive contents...
six: module references __path__
Installed /tmp/easy_install-oUz7ei/cryptography-0.2.1/.eggs/six-1.10.0-py2.7.egg
Searching for cffi>=0.8
Reading https://pypi.python.org/simple/cffi/
Best match: cffi 1.5.0
Downloading https://pypi.python.org/packages/source/c/cffi/cffi-1.5.0.tar.gz#md5=dec8441e67880494ee881305059af656
Processing cffi-1.5.0.tar.gz
Writing /tmp/easy_install-oUz7ei/cryptography-0.2.1/temp/easy_install-Yf2Yl3/cffi-1.5.0/setup.cfg
Running cffi-1.5.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-oUz7ei/cryptography-0.2.1/temp/easy_install-Yf2Yl3/cffi-1.5.0/egg-dist-tmp-A2kjMD
c/_cffi_backend.c:15:17: fatal error: ffi.h: No such file or directory
#include <ffi.h>
^
compilation terminated.
error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
An error occurred when trying to install cryptography 0.2.1. Look above this message for any errors that were output by easy_install.
While:
Installing egg-scripts.
Getting distribution for 'cryptography==0.2.1'.
Error: Couldn't install: cryptography 0.2.1
```
I don't know why it failed. What is the reason? Is there something extra needed when installing it on an Ubuntu system? | I had the same problem when pip installing the cryptography module on Ubuntu 14.04. I solved it by installing libffi-dev:
```
apt-get install -y libffi-dev
```
Then I got the following error:
```
build/temp.linux-x86_64-3.4/_openssl.c:431:25: fatal error: openssl/aes.h: No such file or directory
#include <openssl/aes.h>
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
```
Which I resolved by installing libssl-dev:
```
apt-get install -y libssl-dev
``` |
How to install cryptography on ubuntu? | 35,144,550 | 22 | 2016-02-02T03:31:15Z | 36,057,779 | 35 | 2016-03-17T10:27:32Z | [
"python",
"ubuntu",
"cryptography",
"pip"
] | My ubuntu is 14.04 LTS.
When I install cryptography, the error is:
```
Installing egg-scripts.
uses namespace packages but the distribution does not require setuptools.
Getting distribution for 'cryptography==0.2.1'.
no previously-included directories found matching 'documentation/_build'
zip_safe flag not set; analyzing archive contents...
six: module references __path__
Installed /tmp/easy_install-oUz7ei/cryptography-0.2.1/.eggs/six-1.10.0-py2.7.egg
Searching for cffi>=0.8
Reading https://pypi.python.org/simple/cffi/
Best match: cffi 1.5.0
Downloading https://pypi.python.org/packages/source/c/cffi/cffi-1.5.0.tar.gz#md5=dec8441e67880494ee881305059af656
Processing cffi-1.5.0.tar.gz
Writing /tmp/easy_install-oUz7ei/cryptography-0.2.1/temp/easy_install-Yf2Yl3/cffi-1.5.0/setup.cfg
Running cffi-1.5.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-oUz7ei/cryptography-0.2.1/temp/easy_install-Yf2Yl3/cffi-1.5.0/egg-dist-tmp-A2kjMD
c/_cffi_backend.c:15:17: fatal error: ffi.h: No such file or directory
#include <ffi.h>
^
compilation terminated.
error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
An error occurred when trying to install cryptography 0.2.1. Look above this message for any errors that were output by easy_install.
While:
Installing egg-scripts.
Getting distribution for 'cryptography==0.2.1'.
Error: Couldn't install: cryptography 0.2.1
```
I don't know why it failed. What is the reason? Is there something extra needed when installing it on an Ubuntu system? | The answer is in the docs of `cryptography`'s [installation section](https://cryptography.io/en/latest/installation/#building-cryptography-on-linux), which pretty much reflects Angelos' answer:
Quoting it:
> For Debian and **Ubuntu**, the following command will ensure that the
> required dependencies are installed:
>
> ```
> $ sudo apt-get install build-essential libssl-dev libffi-dev python-dev
> ```
>
> For Fedora and RHEL-derivatives, the following command will ensure
> that the required dependencies are installed:
>
> ```
> $ sudo yum install gcc libffi-devel python-devel openssl-devel
> ```
>
> You should now be able to build and install cryptography with the
> usual
>
> ```
> $ pip install cryptography
> ``` |
Tensorflow python : Accessing individual elements in a tensor | 35,146,444 | 12 | 2016-02-02T06:21:56Z | 35,158,370 | 13 | 2016-02-02T16:12:35Z | [
"python",
"python-2.7",
"tensorflow"
] | This question is with respect to accessing individual elements in a tensor, say [[1,2,3]]. I need to access the inner element [1,2,3]. This can be performed using .eval() or sess.run(), but that takes longer when the size of the tensor is huge.
Is there any method to do the same faster?
Thanks in Advance. | There are two main ways to access subsets of the elements in a tensor, either of which should work for your example.
1. Use the indexing operator (based on [`tf.slice()`](https://www.tensorflow.org/versions/0.6.0/api_docs/python/array_ops.html#slice)) to extract a contiguous slice from the tensor.
```
input = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
output = input[0, :]
print sess.run(output) # ==> [1 2 3]
```
The indexing operator supports many of the same slice specifications as NumPy does.
2. Use the [`tf.gather()`](https://www.tensorflow.org/versions/0.6.0/api_docs/python/array_ops.html#gather) op to select a non-contiguous slice from the tensor.
```
input = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
output = tf.gather(input, 0)
print sess.run(output) # ==> [1 2 3]
output = tf.gather(input, [0, 2])
print sess.run(output) # ==> [[1 2 3] [7 8 9]]
```
Note that `tf.gather()` only allows you to select whole slices in the 0th dimension (whole rows in the example of a matrix), so you may need to [`tf.reshape()`](https://www.tensorflow.org/versions/0.6.0/api_docs/python/array_ops.html#reshape) or [`tf.transpose()`](https://www.tensorflow.org/versions/0.6.0/api_docs/python/array_ops.html#transpose) your input to obtain the appropriate elements. |
why is logged_out.html not overriding in django registration? | 35,153,108 | 4 | 2016-02-02T12:08:51Z | 35,157,729 | 7 | 2016-02-02T15:42:23Z | [
"python",
"django",
"django-templates"
] | I am using the built-in Django login and logout. In my Project/urls.py I have added URLs for both login and logout.
```
from django.conf.urls import include, url
from account import views
from django.contrib.auth import views as auth_views
from django.contrib import admin
urlpatterns = [
url(r'^admin/', include(admin.site.urls)),
url(r'^$',views.index,name='Index'),
url(r'^accounts/login/$',auth_views.login,name='login'),
url(r'^accounts/logout/$',auth_views.logout,name='logout'),
url(r'^accounts/register/$',views.register,name='register'),
url(r'^accounts/profile/$',views.profile,name='profile'),
]
```
and I have a templates folder inside my account app folder, with a directory structure like this:
```
account
-templates
-registration
-login.html
-logged_out.html
-register.html
-rest_html_files
-rest files
```
I've read the Django docs, which say that for login() the default template is registration/login.html (which is working fine in my project), and that for logout() the default template is registration/logged_out.html if no argument is supplied. But whenever the Logout button (which has a href={% url 'logout' %}) is clicked, it redirects to the admin logout page rather than my custom logout page.
What could possibly be wrong? | The `django.contrib.admin` app also has a `registration/logged_out.html` template.
To ensure that the template from your 'account' app, is used, make sure it is **above** 'django.contrib.admin' in your `INSTALLED_APPS` setting.
```
INSTALLED_APPS = (
'account',
...
'django.contrib.admin',
...
)
```
The app template loader goes through the apps in `INSTALLED_APPS`, and each app's template directory until it finds a match. Therefore, if admin is above your app, then Django will use the template from the admin instead of from your app. |
Convert tuple-strings to tuple of strings | 35,154,108 | 7 | 2016-02-02T12:53:47Z | 35,154,249 | 11 | 2016-02-02T13:00:37Z | [
"python"
] | My Input is:
```
input = ['(var1, )', '(var2,var3)']
```
Expected Output is:
```
output = [('var1', ), ('var2','var3')]
```
Iterating over input and using `eval`/`literal_eval` on the tuple-strings is not possible:
```
>>> eval('(var1, )')
>>> NameError: name 'var1' is not defined
```
How can I convert an item such as `'(var1, )'` to a tuple where the inner objects are treated as strings instead of variables?
Is there a simpler way than writing a parser or using regex? | For each occurrence of a variable, [`eval`](//docs.python.org/3/library/functions.html#eval) searches the symbol table for the name of the variable. It's possible to provide a custom mapping that will return the key name for every missing key:
```
class FakeNamespace(dict):
def __missing__(self, key):
return key
```
Example:
```
In [38]: eval('(var1,)', FakeNamespace())
Out[38]: ('var1',)
In [39]: eval('(var2, var3)', FakeNamespace())
Out[39]: ('var2', 'var3')
```
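Applied to the full input list from the question, the same trick handles every string in one list comprehension (standalone sketch):

```python
class FakeNamespace(dict):
    def __missing__(self, key):
        # every unknown name evaluates to its own name as a string
        return key

data = ['(var1, )', '(var2,var3)']
output = [eval(s, FakeNamespace()) for s in data]
print(output)  # [('var1',), ('var2', 'var3')]
```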
***Note:*** `eval` copies current globals to the submitted `globals` dictionary, if it doesn't have `__builtins__`. That means that the expression will have access to built-in functions, exceptions and constants, as well as variables in your namespace. You can try to solve this by passing `FakeNamespace(__builtins__=<None or some other value>)` instead of just `FakeNamespace()`, but it won't make `eval` 100% safe ([*Python eval: is it still dangerous if I disable builtins and attribute access?*](//stackoverflow.com/q/35804961/2301450)) |
Fast way of crossing strings in a list | 35,157,442 | 12 | 2016-02-02T15:28:16Z | 35,157,828 | 9 | 2016-02-02T15:46:46Z | [
"python",
"list",
"concatenation"
] | If have a list like so:
```
shops=['A','B','C','D']
```
And would like to create the following new lists (I cross each element with every other and create a string where first part is alphanumerically before the second):
```
['A-B', 'A-C', 'A-D']
['A-B', 'B-C', 'B-D']
['A-C', 'B-C', 'C-D']
['A-D', 'B-D', 'C-D']
```
I have something like this:
```
for a in shops:
cons = []
for b in shops:
if a!=b:
con = [a,b]
con = sorted(con, key=lambda x: float(x))
cons.append(con[0]+'-'+con[1])
print(cons)
```
However, this is pretty slow for large lists (e.g. 1000 where I have 1000\*999\*0.5 outputs). Is there a more efficient way of doing this?
I could have used an if-else clause for the sort e.g.
```
for a in shops:
cons = []
for b in shops:
if a<b:
cons.append(a+"-"+b)
elif a>b:
cons.append(b+"-"+a)
print(cons)
```
Which, I haven't timed yet - however I thought the main slow-down was the double for-loop | You can create a nested list-comprehension with some additional checks:
```
>>> shops=['A','B','C','D']
>>> [["-".join((min(a,b), max(a,b))) for b in shops if b != a] for a in shops]
[['A-B', 'A-C', 'A-D'],
['A-B', 'B-C', 'B-D'],
['A-C', 'B-C', 'C-D'],
['A-D', 'B-D', 'C-D']]
```
Note that this will probably not be much faster than your code, as you still have to generate all those combinations. In practice, you could make it a generator expression, so the elements are not generated all at once but only "as needed":
```
gen = (["-".join((min(a,b), max(a,b))) for b in shops if b != a] for a in shops)
for item in gen:
print(item)
```
---
Update: I did some timing analysis using IPython's `%timeit`. Turns out your second implementation is the fastest. Tested with a list of 100 strings (`map(str, range(100))`) and after turning each of the methods into generators.
```
In [32]: %timeit list(test.f1()) # your first implementation
100 loops, best of 3: 13.5 ms per loop
In [33]: %timeit list(test.f2()) # your second implementation
1000 loops, best of 3: 1.63 ms per loop
In [34]: %timeit list(test.g()) # my implementation
100 loops, best of 3: 3.49 ms per loop
```
You can speed it up by using a simple `if/else` instead of `min/max`, as in your 2nd implementation, then they are about equally fast.
```
(["-".join((a,b) if a < b else (b,a)) for b in shops if b != a] for a in shops)
``` |
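For reference, running that `if/else` variant as a plain list comprehension over the original four-shop list reproduces the expected rows (standalone sketch):

```python
shops = ['A', 'B', 'C', 'D']
# one row per shop; each pair is emitted with its parts in sorted order
rows = [["-".join((a, b) if a < b else (b, a)) for b in shops if b != a]
        for a in shops]
for row in rows:
    print(row)
# ['A-B', 'A-C', 'A-D']
# ['A-B', 'B-C', 'B-D']
# ['A-C', 'B-C', 'C-D']
# ['A-D', 'B-D', 'C-D']
```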
Plone: intercept a workflow transition and change it to another one programmatically | 35,157,543 | 3 | 2016-02-02T15:33:32Z | 35,172,746 | 7 | 2016-02-03T09:13:47Z | [
"python",
"workflow",
"plone"
] | We have a utility ([collective.contentalerts](https://pypi.python.org/pypi/collective.contentalerts)) that checks if the content of an object (say an article) is adequate or not (does not contain bad words).
So far we have been using it together with `plone.app.contentrules` to send emails when that happens.
Now we want to go one step further:
Regular users can still create their articles but whenever they are trying to make them public, if the utility finds fishy content on it, it should put them in another state (i.e. make another workflow transition instead).
**So the question is:** is there a way to intercept a workflow transition, and given some logic (our utility) change the intended workflow transition to another one?
It would be extra nice if this transition to the moderation state were not visible to regular users in the workflow transition drop-down. | I don't think it is necessary to intercept a transition: show users a "publish" transition that sends the object to the "needs_review" state.
Use an automatic transition from state "needs_review" to "public" that is guarded with a view that checks whether the article is OK (does not contain words from your blacklist, etc.).
This way users see the "publish" transition (not "send to moderation").
An example on how to configure a guard expression can be found on the [Poi add-on](https://github.com/collective/Products.Poi/blob/335e2c586bcd8841b14c9c0c281ef6b6b20b4ec4/Products/Poi/profiles/default/workflows/poi_issue_workflow/definition.xml#L246).
Think about something like this:
```
<guard-expression>here/@@myview</guard-expression>
```
Where `myview` can be a public view that performs all of the needed checks and returns True/False.
The trigger type of the transition has to be automatic instead of "initiated by user" (see screenshot)
[](http://i.stack.imgur.com/ioXU6.png)
if you follow the [?] questionmark link next to the expression field you get more information on the available variables. |
Python changing list into a word | 35,159,002 | 3 | 2016-02-02T16:40:39Z | 35,159,161 | 7 | 2016-02-02T16:48:05Z | [
"python",
"python-2.7",
"python-3.x"
] | I have had a problem in Python for a couple of days and can't seem to fix it.
How can I just simply change this code below:
```
['a'], ['b'], ['c']
to this :
['abc']
```
thanks ! | You could use the following:
```
codedletters = ['a'], ['b'], ['c']
answer = [''.join([x[0] for x in codedletters])]
print(answer)
```
**Output**
```
['abc']
``` |
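If the sublists could ever contain more than one character, `x[0]` would silently drop the rest; `itertools.chain.from_iterable` flattens everything instead (an alternative sketch, not part of the original answer):

```python
from itertools import chain

codedletters = ['a'], ['b'], ['c']
answer = [''.join(chain.from_iterable(codedletters))]
print(answer)  # ['abc']

# still correct when a sublist holds several characters
multi = [''.join(chain.from_iterable((['ab'], ['c'])))]
print(multi)  # ['abc']
```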
How do I output lists as a table in Jupyter notebook? | 35,160,256 | 3 | 2016-02-02T17:40:45Z | 35,161,699 | 8 | 2016-02-02T18:59:42Z | [
"python",
"jupyter-notebook"
] | I know that I've seen some example somewhere before but for the life of me I cannot find it when googling around.
I have some rows of data:
```
data = [[1,2,3],
[4,5,6],
[7,8,9],
]
```
And I want to output this data in a table, e.g.
```
+---+---+---+
| 1 | 2 | 3 |
+---+---+---+
| 4 | 5 | 6 |
+---+---+---+
| 7 | 8 | 9 |
+---+---+---+
```
Obviously I could use a library like prettytable or download pandas or something but I'm very disinterested in doing that.
I just want to output my rows as tables in my Jupyter notebook cell. How do I do this? | I finally re-found the [jupyter/IPython documentation](http://nbviewer.jupyter.org/github/ipython/ipython/blob/4.0.x/examples/IPython%20Kernel/Rich%20Output.ipynb#HTML) that I was looking for.
I needed this:
```
from IPython.display import HTML, display
data = [[1,2,3],
[4,5,6],
[7,8,9],
]
display(HTML(
'<table><tr>{}</tr></table>'.format(
'</tr><tr>'.join(
'<td>{}</td>'.format('</td><td>'.join(str(_) for _ in row)) for row in data)
)
))
```
(I may have slightly mucked up the comprehensions, but `display(HTML('some html here'))` is what we needed) |
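If you want to sanity-check those comprehensions, the HTML string can be built and inspected outside the notebook (standalone sketch, without the IPython display call):

```python
# Same comprehension as above, minus display(HTML(...)),
# so the generated markup can be inspected as a plain string.
data = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]

html = '<table><tr>{}</tr></table>'.format(
    '</tr><tr>'.join(
        '<td>{}</td>'.format('</td><td>'.join(str(_) for _ in row))
        for row in data)
)
print(html)
```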
In TensorFlow is there any way to just initialize uninitialised variables? | 35,164,529 | 11 | 2016-02-02T21:55:18Z | 35,164,992 | 8 | 2016-02-02T22:26:12Z | [
"python",
"tensorflow"
] | The standard way of initializing variables in TensorFlow is
```
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
```
After running some learning for a while I create a new set of variables, but once I initialize them it resets all my existing variables. At the moment my way around this is to save all the variables I need and then reapply them after the tf.initialize_all_variables call. This works but is a bit ugly and clunky. I cannot find anything like this in the docs...
Does anyone know of any good way to just initialize the uninitialized variables? | There is no elegant\* way to enumerate the uninitialized variables in a graph. However, if you have access to the new variable objects—let's call them `v_6`, `v_7`, and `v_8`—you can selectively initialize them using [`tf.initialize_variables()`](https://www.tensorflow.org/versions/0.6.0/api_docs/python/state_ops.html#initialize_variables):
```
init_new_vars_op = tf.initialize_variables([v_6, v_7, v_8])
sess.run(init_new_vars_op)
```
---
\* A process of trial and error could be used to identify the uninitialized variables, as follows:
```
uninitialized_vars = []
for var in tf.all_variables():
try:
sess.run(var)
except tf.errors.FailedPreconditionError:
uninitialized_vars.append(var)
init_new_vars_op = tf.initialize_variables(uninitialized_vars)
# ...
```
...however, I would not condone such behavior :-). |
Is there a pythonic way to skip decoration on a subclass' method? | 35,165,669 | 15 | 2016-02-02T23:17:33Z | 35,165,876 | 9 | 2016-02-02T23:35:30Z | [
"python",
"flask",
"decorator",
"flask-restful"
] | I have an class which decorates some methods using a decorator from another library. Specifically, the class subclasses flask-restful resources, decorates the http methods with `httpauth.HTTPBasicAuth().login_required()`, and does some sensible defaults on a model service.
On most subclasses I want the decorator applied; therefore I'd rather remove it than add it in the subclasses.
My thought is to have a private method which does the operations and a public method which is decorated. The effects of decoration can be avoided by overriding the public method to call the private one and not decorating this override. Mocked example below.
I am curious to know if there's a better way to do this. Is there a shortcut for 'cancelling decorators' in python that gives this effect?
Or can you recommend a better approach?
Some other questions have suitable answers for this, e.g. [Is there a way to get the function a decorator has wrapped?](http://stackoverflow.com/questions/1545178/is-there-a-way-to-get-the-function-a-decorator-has-wrapped). But my question is about broader design - I am interested in *any* pythonic way to run the operations in decorated methods without the effects of decoration. E.g. my example is one such way, but there may be others.
```
def auth_required(fn):
def new_fn(*args, **kwargs):
print('Auth required for this resource...')
fn(*args, **kwargs)
return new_fn
class Resource:
name = None
@auth_required
def get(self):
self._get()
def _get(self):
print('Getting %s' %self.name)
class Eggs(Resource):
name = 'Eggs'
class Spam(Resource):
name = 'Spam'
def get(self):
self._get()
# super(Spam, self)._get()
eggs = Eggs()
spam = Spam()
eggs.get()
# Auth required for this resource...
# Getting Eggs
spam.get()
# Getting Spam
``` | [Flask-HTTPAuth](https://github.com/miguelgrinberg/Flask-HTTPAuth/blob/master/flask_httpauth.py#L47) uses [`functools.wraps`](https://docs.python.org/3/library/functools.html#functools.wraps) in the `login_required` decorator:
```
def login_required(self, f):
@wraps(f)
def decorated(*args, **kwargs):
...
```
From Python 3.2, as this calls [`update_wrapper`](https://docs.python.org/3/library/functools.html#functools.update_wrapper), you can access the original function via `__wrapped__`:
> To allow access to the original function for introspection and other
> purposes (e.g. bypassing a caching decorator such as `lru_cache()`),
> this function automatically adds a `__wrapped__` attribute to the
> wrapper that refers to the function being wrapped.
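As a minimal sketch of what that means in practice (the `login_required` stand-in below is a hypothetical simplification, not Flask-HTTPAuth's real implementation):

```python
from functools import wraps

def login_required(f):
    # Hypothetical stand-in for an auth decorator that uses @wraps
    @wraps(f)
    def decorated(*args, **kwargs):
        raise RuntimeError('auth required')
    return decorated

@login_required
def get():
    return 'resource data'

# The wrapper enforces auth, but @wraps exposed the original
# function via __wrapped__, so a subclass (or a test) can bypass it:
print(get.__wrapped__())  # resource data
```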
If you're writing your own decorators, as in your example, you can also use `@wraps` to get the same functionality (as well as keeping the docstrings, etc.).
See also [Is there a way to get the function a decorator has wrapped?](http://stackoverflow.com/q/1545178/3001761) |
How do I multiply each element in a list by a number? | 35,166,633 | 7 | 2016-02-03T00:54:03Z | 35,166,717 | 11 | 2016-02-03T01:02:42Z | [
"python",
"list"
] | I have a list:
```
my_list = [1, 2, 3, 4, 5]
```
How can I multiply each element in `my_list` by 5? The output should be:
```
[5, 10, 15, 20, 25]
``` | You can just use a list comprehension:
```
my_list = [1, 2, 3, 4, 5]
my_new_list = [i * 5 for i in my_list]
>>> print(my_new_list)
[5, 10, 15, 20, 25]
```
Note that a list comprehension is generally a more efficient way to do a `for` loop:
```
my_new_list = []
for i in my_list:
my_new_list.append(i * 5)
>>> print(my_new_list)
[5, 10, 15, 20, 25]
```
As an alternative, here is a solution using the popular Pandas package:
```
import pandas as pd
s = pd.Series(my_list)
>>> s * 5
0 5
1 10
2 15
3 20
4 25
dtype: int64
```
Or, if you just want the list:
```
>>> (s * 5).tolist()
[5, 10, 15, 20, 25]
``` |
Algorithm to find faster the suitable prefix for each phone number in log file? | 35,172,948 | 2 | 2016-02-03T09:23:03Z | 35,173,153 | 9 | 2016-02-03T09:32:45Z | [
"python",
"algorithm",
"performance",
"search",
"logfile"
] | There is a CSV file containing the list of prefixes used to categorise the phone numbers. This is an example of prefixes.csv. This file has nearly 2000 rows.
```
3511891,PORTUGAL-MOBILE (VODAFONE)
3511693,PORTUGAL-MOBILE (OPTIMUS)
3511691,PORTUGAL-MOBILE (VODAFONE)
34,SPAIN-FIXED
3469400,SPAIN-MOBILE (MVNO)
3469310,SPAIN-MOBILE (MVNO)
3469279,SPAIN-MOBILE (MVNO)
3469278,SPAIN-MOBILE (MVNO)
3469277,SPAIN-MOBILE (MVNO)
3469276,SPAIN-MOBILE (MVNO)
34673,SPAIN-MOBILE (VODAFONE)
243820000006,CONGO DEMOCARTIC REPUBLIC-SPECIAL SERVICES
243820000005,CONGO DEMOCARTIC REPUBLIC-SPECIAL SERVICES
243820000004,CONGO DEMOCARTIC REPUBLIC-SPECIAL SERVICES
88213200361,EMSAT-SPECIAL SERVICES
67518497899,PAPUA NEW GUINEA-SPECIAL SERVICES
56751975883,CHILE-SPECIAL SERVICES
56751975334,CHILE-SPECIAL SERVICES
56731974707,CHILE-SPECIAL SERVICES
```
On the other hand, there is a huge log file including thousands of lines. This is the format of the log:
```
2015-11-01T00:00:17.735616+00:00 x1ee energysrvpol[15690]: INFO consume_processor: user:<<"dbdiayhg">> callee_num:<<"34673809195">> sid:<<"A1003unjhjhvhgfgvhbghgujhj02">> credits:-0.5000000000000001 result:ok provider:ooioutisrt.ym.ms
```
So, I have to extract the phone number after `callee_num`, and then compare it with all the prefixes, digit by digit, in order to discover the country code for the number that comes after `callee_num`. In this example the phone number is `34673809195`: extract this number, go to **prefixes.csv** and check it row by row for the suitable prefix.
```
1)first time '3' from 34673xxxx
2)then 4
3) then 6
4) then 7
....
```
and all of this has to be repeated for each row of **prefixes.csv**; finally, in the row `34673,SPAIN-MOBILE (VODAFONE)`, the number matches. Imagine that the number instead of `34673` is `34670`: after checking all the rows there is no match for this number, so it should be possible to keep the last match from **prefixes.csv**, which is `34`, and return `SPAIN-FIXED`.
**I would like to know the best algorithm for doing this in the minimum time. Do I need to sort the prefixes first, or should I use a dictionary? How can I organise everything to get efficient code? How should the search algorithm work? Is a recursive function a good idea here or not? If there is a Python library that implements this well, please recommend it.**
Thank you for providing any solution. | Look at [prefix tree (Trie)](https://en.wikipedia.org/wiki/Trie) data structure.
While scanning the tree, always remember the last best result (remember `34` while checking `34***` nodes).
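A minimal sketch of such a trie in plain Python (the function names here are just illustrative):

```python
def build_trie(prefixes):
    """Build a nested-dict trie mapping digit paths to labels."""
    root = {}
    for prefix, label in prefixes:
        node = root
        for digit in prefix:
            node = node.setdefault(digit, {})
        node['label'] = label  # mark the end of a prefix
    return root

def longest_prefix(trie, number):
    """Walk the trie digit by digit, remembering the last label seen."""
    best = None
    node = trie
    for digit in number:
        if digit not in node:
            break
        node = node[digit]
        best = node.get('label', best)
    return best

trie = build_trie([('34', 'SPAIN-FIXED'), ('34673', 'SPAIN-MOBILE (VODAFONE)')])
print(longest_prefix(trie, '34673809195'))  # SPAIN-MOBILE (VODAFONE)
print(longest_prefix(trie, '34670111222'))  # SPAIN-FIXED
```

Lookup cost is proportional to the length of the phone number, independent of how many prefixes are loaded.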
There are many implementations of tries in Python |
traitlets.traitlets.TraitError in Pycharm | 35,176,604 | 9 | 2016-02-03T12:05:40Z | 35,427,682 | 7 | 2016-02-16T08:44:29Z | [
"python",
"python-2.7",
"ubuntu",
"console",
"pycharm"
] | I am a beginner in python. I am facing the following problem.
Whenever I start PyCharm Community Edition (version 5.0.3), the Python console fails to start and shows the following error:
> ```
> usr/bin/python2.7 /usr/lib/pycharm-community/helpers/pydev/pydevconsole.py 53192 49994
>
> Traceback (most recent call last): File "/usr/lib/pycharm-community/helpers/pydev/pydevconsole.py", line 488, in <module>
>
> pydevconsole.StartServer(pydev_localhost.get_localhost(), int(port), int(client_port))
>
> File "/usr/lib/pycharm-community/helpers/pydev/pydevconsole.py", line 330, in StartServer
> interpreter = InterpreterInterface(host, client_port, threading.currentThread())
>
> File "/usr/lib/pycharm-community/helpers/pydev/pydev_ipython_console.py", line 26, in __init__
> self.interpreter = get_pydev_frontend(host, client_port)
>
> File "/usr/lib/pycharm-community/helpers/pydev/pydev_ipython_console_011.py", line 472, in get_pydev_frontend
> _PyDevFrontEndContainer._instance = _PyDevFrontEnd()
>
> File "/usr/lib/pycharm-community/helpers/pydev/pydev_ipython_console_011.py", line 303, in __init__
> self.ipython = PyDevTerminalInteractiveShell.instance()
>
> File "/usr/lib/python2.7/dist-packages/IPython/config/configurable.py", line 354, in instance
> inst = cls(*args, **kwargs)
>
> File "/usr/lib/python2.7/dist-packages/IPython/terminal/interactiveshell.py", line 328, in __init__
> **kwargs
>
> File "/usr/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 483, in __init__
> self.init_readline()
>
> File "/usr/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 1816, in init_readline
> if self.readline_use:
>
> File "/home/vivekruhela/.local/lib/python2.7/site-packages/traitlets/traitlets.py", line 529, in __get__
> return self.get(obj, cls)
>
> File "/home/vivekruhela/.local/lib/python2.7/site-packages/traitlets/traitlets.py", line 507, in get
> % (self.name, obj))
>
> traitlets.traitlets.TraitError: No default value found for None trait of <pydev_ipython_console_011.PyDevTerminalInteractiveShell object at 0x7f7b0e682cd0>
>
>
> Process finished with exit code 1
>
> Couldn't connect to console process.
> ```
How do I connect the PyCharm console? I have already installed IPython on my system. | I had the same issue.
I solved this by **updating ipython**.
My version was 3.1.0, upgrading it to 4.1.1 solved it.
in my case:
```
workon project
pip install -U ipython
``` |
Why won't my elif statement execute but my if statement will in Python? | 35,182,004 | 2 | 2016-02-03T16:10:06Z | 35,182,094 | 7 | 2016-02-03T16:13:50Z | [
"python",
"tkinter",
"python-2.x"
] | I wrote a simple age verification in Python where, if the user enters a year less than 2000, it will say welcome. However, if the user enters a year greater than 2000, it should redirect them to another site.
My code runs, but it executes only the `if` statement, never the `elif`, no matter what year I input.
Here is my code:
```
from Tkinter import *
import webbrowser
import tkMessageBox
url = 'google.com'
root = Tk()
frame = Frame(root, width=100, height=100)
frame.pack()
L1 = Label(root, text = "Month")
L1.pack(side = LEFT)
E1 = Entry(root,bd=5)
E1.pack(side=LEFT)
L2 = Label(root, text = "Day")
L2.pack(side =LEFT)
E2 = Entry(root, bd= 5)
E2.pack(side = LEFT)
L3 = Label(root, text = "Year")
L3.pack(side = LEFT)
E3 = Entry(root, bd = 5)
E3.pack(side = LEFT)
def getdate():
tkMessageBox.showinfo(title ="Results", message = E1.get() + " "+ E2.get() + " " + E3.get())
getage()
root.destroy()
#tkMessageBox.showinfo(E2.get())
#tkMessageBox.showinfo(E3.get())
def getage():
if E3 < 2000:
tkMessageBox.showinfo(message= "Welcome! ")
elif E3 > 2000:
tkMessageBox.showinfo(message="You will be redirected")
webbrowser.open_new(url)
b1 = Button(root, text = "Submit", width = 25, command = getdate)
b1.pack()
root.mainloop()
``` | You are comparing the *`Entry()` object* to an integer. In Python 2, numbers are always sorted *before* other types, so they always register as *smaller*:
```
>>> object() > 2000
True
```
You want to get the value from the entry box and convert to an integer before testing:
```
entry = int(E3.get())
if entry > 2000:
# ...
else:
# ...
```
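As a self-contained sketch of the corrected logic (`classify_year` is a hypothetical helper name; in the real handler you would pass it `E3.get()` and feed the result to `tkMessageBox`):

```python
def classify_year(entry_text):
    # Entry.get() returns a string, so convert before comparing
    year = int(entry_text)
    if year > 2000:
        return 'You will be redirected'
    else:
        return 'Welcome!'

print(classify_year('1995'))  # Welcome!
print(classify_year('2005'))  # You will be redirected
```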
There is no need to do `elif` here, unless you want the value `2000` to be ignored altogether (your test only works for numbers either greater than, or smaller than, not equal). |
Shortening code with multiple similar "while"-statements | 35,182,162 | 4 | 2016-02-03T16:16:03Z | 35,182,191 | 9 | 2016-02-03T16:17:25Z | [
"python"
] | Probably a very basic question, but I'm writing my first program and did not know what to search for to find the answer.
I have a while statement that looks something like this:
```
while number > 9999 or number < 0 or number == 1111 or number == 2222 or number == 3333...
```
And it goes on until I get to 9999. That's lots of code that probably can be shortened, am I correct? I'm not sure where I could read about the syntax for this, so someone could also link me there!
Would be glad if anyone could help! :) | Use the *modulo* operator:
```
while number > 9999 or number < 0 or (number % 1111 == 0 and number != 0):
``` |
URL Encode in Windows Batch Script | 35,186,560 | 4 | 2016-02-03T20:02:06Z | 35,186,784 | 8 | 2016-02-03T20:13:11Z | [
"python",
"linux",
"windows",
"batch-file"
] | I have a Windows batch script that I use to do quick Google searches. However, I can't figure out how to generically encode special characters. Like if I try to search for C#, the pound sign breaks it. Here is my code:
```
SET q="https://www.google.com/#q=%*"
SET q=%q: =+%
chrm %q%
``` | I don't think there is a good way to do that directly in a Windows bat script. Python is a great solution for some heavier things like that, and it is cross platform which is always nice. Since you are in Windows, you could probably write a powershell script to do it. However, here is a Python 3 script I wrote which I think does what you are looking for.
```
import sys
import subprocess
import urllib.parse
browser = sys.argv[1]
browserParms = sys.argv[2]
queryString = " ".join(sys.argv[3:])
queryString = urllib.parse.quote(queryString)
url = "https://www.google.com/#q=" + queryString
subprocess.Popen([browser, browserParms, url])
sys.exit()
```
Here is a native script for Linux where you can set up your specifics. You could do something very similar in a windows batch file. I named it goog (with no extension because that would be too much to type :) )
```
#!/bin/bash
python3 /home/justin/Dropbox/MyFiles/Programs/CrossPlatform/Python3/GoogleSearch.py "firefox" "-new-window" "$@"
```
Make sure your native script location is in $PATH. Execute like this from Terminal or Run A Command.
```
goog i like turtles
``` |
SSLError using pip install (to install tensorflow) | 35,190,574 | 5 | 2016-02-04T00:46:51Z | 35,277,748 | 10 | 2016-02-08T19:30:54Z | [
"python",
"ssl",
"installation",
"pip",
"ssl-certificate"
] | while installing TF, exception appeared:
> File
> "/usr/local/lib/python2.7/dist-packages/pip/\_vendor/cachecontrol/adapter.py",
> line 46, in send
> resp = super(CacheControlAdapter, self).send(request, \*\*kw) File "/usr/local/lib/python2.7/dist-packages/pip/\_vendor/requests/adapters.py",
> line 447, in send
> raise SSLError(e, request=request) SSLError: ("bad handshake: Error([('SSL routines', 'SSL3\_GET\_SERVER\_CERTIFICATE', 'certificate
> verify failed')],)",)
```
command: pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.6.0-cp27-none-linux_x86_64.whl
```
I have no clue how to solve this. I recently reinstalled pip; could the reinstall have caused it? | *The SSL error can be solved by the steps below. Just download the wheel on your own and pip install.*
**# Ubuntu/Linux 64-bit, CPU only:**
$ wget <https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl>
$ sudo pip install --upgrade tensorflow-0.5.0-cp27-none-linux\_x86\_64.whl
**# Ubuntu/Linux 64-bit, GPU enabled:**
$ wget <https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl>
$ sudo pip install --upgrade tensorflow-0.5.0-cp27-none-linux\_x86\_64.whl |
How do I avoid the "self.x = x; self.y = y; self.z = z" pattern in __init__? | 35,190,877 | 159 | 2016-02-04T01:22:42Z | 35,190,922 | 27 | 2016-02-04T01:27:34Z | [
"python",
"python-decorators",
"namedtuple"
] | I see patterns like
```
def __init__(self, x, y, z):
...
self.x = x
self.y = y
self.z = z
...
```
quite frequently, often with a lot more parameters. Is there a good way to avoid this type of tedious repetitiveness? Should I inherit from `namedtuple`? | explicit is better than implicit ...
so sure you could make it more concise:
```
def __init__(self,a,b,c):
for k,v in locals().items():
if k != "self":
setattr(self,k,v)
```
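For instance, a quick check that the recipe above does what you'd expect (`Point` is just an illustrative class name):

```python
class Point:
    def __init__(self, a, b, c):
        # Copy every argument except self onto the instance
        for k, v in locals().items():
            if k != 'self':
                setattr(self, k, v)

p = Point(1, 2, 3)
print(p.a, p.b, p.c)  # 1 2 3
```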
The better question is should you?
... that said if you want a named tuple I would recommend using a namedtuple (remember tuples have certain conditions attached to them) ... perhaps you want an ordereddict or even just a dict ... |
How do I avoid the "self.x = x; self.y = y; self.z = z" pattern in __init__? | 35,190,877 | 159 | 2016-02-04T01:22:42Z | 35,190,941 | 101 | 2016-02-04T01:29:28Z | [
"python",
"python-decorators",
"namedtuple"
] | I see patterns like
```
def __init__(self, x, y, z):
...
self.x = x
self.y = y
self.z = z
...
```
quite frequently, often with a lot more parameters. Is there a good way to avoid this type of tedious repetitiveness? Should I inherit from `namedtuple`? | **EDIT**
It seems that several people are concerned about presenting this solution, so I will provide a very clear disclaimer. You should not use this solution. I only provide it as information, so you know that the language is capable of this. The rest of the answer is just showing language capabilities, not endorsing using them in this way.
**ORIGINAL ANSWER**
There isn't really anything wrong with explicitly copying parameters into attributes. If you have too many parameters in the ctor, it is sometimes considered a code smell and maybe you should group these params into fewer objects. Other times, it is necessary and there is nothing wrong with it. **Anyway, doing it explicitly is the way to go.**
However, since you are asking HOW it can be done (and not whether it should be done), then one solution is this:
```
class A:
def __init__(self, **kwargs):
for key in kwargs:
setattr(self, key, kwargs[key])
a = A(l=1, d=2)
a.l # will return 1
a.d # will return 2
``` |
How do I avoid the "self.x = x; self.y = y; self.z = z" pattern in __init__? | 35,190,877 | 159 | 2016-02-04T01:22:42Z | 35,190,996 | 10 | 2016-02-04T01:35:38Z | [
"python",
"python-decorators",
"namedtuple"
] | I see patterns like
```
def __init__(self, x, y, z):
...
self.x = x
self.y = y
self.z = z
...
```
quite frequently, often with a lot more parameters. Is there a good way to avoid this type of tedious repetitiveness? Should I inherit from `namedtuple`? | You could also do:
```
locs = locals()
for arg in inspect.getargspec(self.__init__)[0][1:]:
setattr(self, arg, locs[arg])
```
Of course, you would have to import the `inspect` module. |
How do I avoid the "self.x = x; self.y = y; self.z = z" pattern in __init__? | 35,190,877 | 159 | 2016-02-04T01:22:42Z | 35,191,590 | 28 | 2016-02-04T02:48:05Z | [
"python",
"python-decorators",
"namedtuple"
] | I see patterns like
```
def __init__(self, x, y, z):
...
self.x = x
self.y = y
self.z = z
...
```
quite frequently, often with a lot more parameters. Is there a good way to avoid this type of tedious repetitiveness? Should I inherit from `namedtuple`? | As others have mentioned, the repetition isn't bad, but in some cases a namedtuple can be a great fit for this type of issue. This avoids using locals() or kwargs, which are usually a bad idea.
```
from collections import namedtuple
# declare a new object type with three properties; x y z
# the first arg of namedtuple is a typename
# the second arg is comma-separated or space-separated property names
XYZ = namedtuple("XYZ", "x, y, z")
# create an object of type XYZ. properties are in order
abc = XYZ("one", "two", 3)
print abc.x
print abc.y
print abc.z
```
I've found limited use for it, but you can inherit a namedtuple as with any other object (example continued):
```
class MySuperXYZ(XYZ):
""" I add a helper function which returns the original properties """
def properties(self):
return self.x, self.y, self.z
abc2 = MySuperXYZ(4, "five", "six")
print abc2.x
print abc2.y
print abc2.z
print abc2.properties()
``` |
How do I avoid the "self.x = x; self.y = y; self.z = z" pattern in __init__? | 35,190,877 | 159 | 2016-02-04T01:22:42Z | 35,194,904 | 84 | 2016-02-04T07:24:51Z | [
"python",
"python-decorators",
"namedtuple"
] | I see patterns like
```
def __init__(self, x, y, z):
...
self.x = x
self.y = y
self.z = z
...
```
quite frequently, often with a lot more parameters. Is there a good way to avoid this type of tedious repetitiveness? Should I inherit from `namedtuple`? | A decorator solution that keeps the signature:
```
import decorator
import inspect
import sys
@decorator.decorator
def simple_init(func, self, *args, **kws):
"""
@simple_init
def __init__(self,a,b,...,z)
dosomething()
behaves like
def __init__(self,a,b,...,z)
self.a = a
self.b = b
...
self.z = z
dosomething()
"""
#init_argumentnames_without_self = ['a','b',...,'z']
if sys.version_info.major == 2:
init_argumentnames_without_self = inspect.getargspec(func).args[1:]
else:
init_argumentnames_without_self = tuple(inspect.signature(func).parameters.keys())[1:]
positional_values = args
keyword_values_in_correct_order = tuple(kws[key] for key in init_argumentnames_without_self if key in kws)
attribute_values = positional_values + keyword_values_in_correct_order
for attribute_name,attribute_value in zip(init_argumentnames_without_self,attribute_values):
setattr(self,attribute_name,attribute_value)
# call the original __init__
func(self, *args, **kws)
class Test():
@simple_init
def __init__(self,a,b,c,d=4):
print(self.a,self.b,self.c,self.d)
#prints 1 3 2 4
t = Test(1,c=2,b=3)
#keeps signature
#prints ['self', 'a', 'b', 'c', 'd']
if sys.version_info.major == 2:
print(inspect.getargspec(Test.__init__).args)
else:
print(inspect.signature(Test.__init__))
``` |
How do I avoid the "self.x = x; self.y = y; self.z = z" pattern in __init__? | 35,190,877 | 159 | 2016-02-04T01:22:42Z | 35,196,024 | 20 | 2016-02-04T08:31:27Z | [
"python",
"python-decorators",
"namedtuple"
] | I see patterns like
```
def __init__(self, x, y, z):
...
self.x = x
self.y = y
self.z = z
...
```
quite frequently, often with a lot more parameters. Is there a good way to avoid this type of tedious repetitiveness? Should I inherit from `namedtuple`? | To expand on `gruszczy`s answer, I have used a pattern like:
```
class X:
x = None
y = None
z = None
def __init__(self, **kwargs):
for (k, v) in kwargs.items():
if hasattr(self, k):
setattr(self, k, v)
else:
raise TypeError('Unknown keyword argument: {:s}'.format(k))
```
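For example, the typo resistance in action (the class body is repeated here so the snippet runs on its own):

```python
class X:
    x = None
    y = None
    z = None

    def __init__(self, **kwargs):
        for (k, v) in kwargs.items():
            if hasattr(self, k):
                setattr(self, k, v)
            else:
                raise TypeError('Unknown keyword argument: {:s}'.format(k))

obj = X(x=1, y=2)
print(obj.x, obj.y, obj.z)  # 1 2 None

try:
    X(w=3)  # misspelled / unknown attribute
except TypeError as exc:
    print(exc)  # Unknown keyword argument: w
```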
I like this method because it:
* avoids repetition
* is resistant against typos when constructing an object
* works well with subclassing (can just `super().__init__(...)`)
* allows for documentation of the attributes on a class-level (where they belong) rather than in `X.__init__`
It could probably be improved upon a bit, but I'm the only user of my own code so I am not worried about any form of input sanitation. |
How do I avoid the "self.x = x; self.y = y; self.z = z" pattern in __init__? | 35,190,877 | 159 | 2016-02-04T01:22:42Z | 35,239,676 | 9 | 2016-02-06T09:59:31Z | [
"python",
"python-decorators",
"namedtuple"
] | I see patterns like
```
def __init__(self, x, y, z):
...
self.x = x
self.y = y
self.z = z
...
```
quite frequently, often with a lot more parameters. Is there a good way to avoid this type of tedious repetitiveness? Should I inherit from `namedtuple`? | This is a solution without any additional imports.
## Helper function
A small helper function makes it more convenient and re-usable:
```
def auto_init(local_name_space):
"""Set instance attributes from arguments.
"""
self = local_name_space.pop('self')
for name, value in local_name_space.items():
setattr(self, name, value)
```
## Application
You need to call it with `locals()`:
```
class A:
def __init__(self, x, y, z):
auto_init(locals())
```
## Test
```
a = A(1, 2, 3)
print(a.__dict__)
```
Output:
```
{'y': 2, 'z': 3, 'x': 1}
``` |
Python keyword arguments unpack and return dictionary | 35,197,854 | 19 | 2016-02-04T10:00:38Z | 35,197,989 | 13 | 2016-02-04T10:06:01Z | [
"python",
"dictionary",
"argument-unpacking"
] | I have a function definition as below and I am passing keyword arguments. How do I get to return a dictionary with the same name as the keyword arguments?
Manually I can do:
```
def generate_student_dict(first_name=None, last_name=None , birthday=None, gender =None):
return {
'first_name': first_name,
'last_name': last_name,
'birthday': birthday,
'gender': gender
}
```
But I don't want to do that. Is there any way that I can make this work without actually typing the dict?
```
def generate_student_dict(self, first_name=None, last_name=None, birthday=None, gender=None):
return # Packed value from keyword argument.
``` | If that way is suitable for you, use [kwargs](https://docs.python.org/2/tutorial/controlflow.html#unpacking-argument-lists) (see [Understanding kwargs in Python](http://stackoverflow.com/questions/1769403/understanding-kwargs-in-python)) as in code snippet below:
```
def generate_student_dict(self, **kwargs):
return kwargs
```
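For example (dropping `self` here so the snippet runs standalone; note that this variant loses the explicit parameter names and defaults from the original signature):

```python
def generate_student_dict(**kwargs):
    # kwargs is already a dict of whatever keyword arguments were passed
    return kwargs

d = generate_student_dict(first_name='Ada', last_name='Lovelace')
print(d)  # {'first_name': 'Ada', 'last_name': 'Lovelace'}
```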
Otherwise, you can create a copy of params with [`built-in locals()`](https://docs.python.org/2/library/functions.html#locals) at function start and **return** that copy:
```
def generate_student_dict(first_name=None, last_name=None , birthday=None, gender =None):
# It's important to copy locals in first line of code (see @MuhammadTahir comment).
args_passed = locals().copy()
# some code
return args_passed
generate_student_dict()
``` |
Python keyword arguments unpack and return dictionary | 35,197,854 | 19 | 2016-02-04T10:00:38Z | 35,198,006 | 7 | 2016-02-04T10:06:52Z | [
"python",
"dictionary",
"argument-unpacking"
] | I have a function definition as below and I am passing keyword arguments. How do I get to return a dictionary with the same name as the keyword arguments?
Manually I can do:
```
def generate_student_dict(first_name=None, last_name=None , birthday=None, gender =None):
return {
'first_name': first_name,
'last_name': last_name,
'birthday': birthday,
'gender': gender
}
```
But I don't want to do that. Is there any way that I can make this work without actually typing the dict?
```
def generate_student_dict(self, first_name=None, last_name=None, birthday=None, gender=None):
return # Packed value from keyword argument.
``` | If you don't want to pass `**kwargs`, you can simply return [`locals`](https://docs.python.org/3.5/library/functions.html#locals):
```
def generate_student_dict(first_name=None, last_name=None,
birthday=None, gender=None):
return locals()
```
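For instance:

```python
def generate_student_dict(first_name=None, last_name=None,
                          birthday=None, gender=None):
    # locals() at this point holds exactly the named parameters
    return locals()

d = generate_student_dict(first_name='Ada', gender='F')
print(d['first_name'], d['gender'])    # Ada F
print(d['last_name'], d['birthday'])   # None None
```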
Note that you want to remove `self` from the result if you pass it as an argument. |
Return true or false in a list comprehension python | 35,198,090 | 4 | 2016-02-04T10:09:28Z | 35,198,129 | 12 | 2016-02-04T10:11:44Z | [
"python"
] | The goal is to write a function that takes in a list and returns whether the numbers in the list are even resulting in True or False.
Ex. [1, 2, 3, 4] ---> [False, True, False, True]
I've written this portion of code:
```
def even_or_odd(t_f_list):
    return [x for x in t_f_list if x % 2 == 0]
```
I know this code would return [2, 4]. How would I make it so it instead returns a true and false like the above example? | Instead of filtering by predicate, you should map it:
```
def even_or_odd(t_f_list):
return [ x % 2 == 0 for x in t_f_list]
``` |
Why does print(0.3) print 0.3 and not 0.30000000000000004 | 35,199,412 | 4 | 2016-02-04T11:07:04Z | 35,199,663 | 9 | 2016-02-04T11:20:47Z | [
"python",
"python-3.x"
] | So I think I basically understand how floating-point works and why we can't have "precise" results for some operations.
I got confused by [this SO-question](http://stackoverflow.com/questions/35197598/range-with-floating-point-numbers-and-negative-steps), where @MikeMüller suggests rounding.
---
My understanding is the following.
If we write decimal places it would look like this:
`1000 100 10 1 . 1/10 1/100 1/1000`
It would look like this in binary:
`8 4 2 1 . 1/2 1/4 1/8`
So we store 0.5 or 0.25 or 0.125 precisely in memory but not e.g. 0.3
So why does python output the following:
```
print(0.1)
print(0.2)
print(0.3)
print(0.1 + 0.2)
>>>0.1
>>>0.2
>>>0.3
>>>0.30000000000000004
```
I think it should output
```
>>>0.1
>>>0.2
>>>0.30000000000000004
>>>0.30000000000000004
```
Where am I wrong?
---
My Question is NOT a duplicate of [Is floating point math broken?](http://stackoverflow.com/questions/588004)
because OP does not understand why 0.1+0.2 != 0.3. This is not the topic of my question! | Because they're not the same: `0.1` and `0.2` aren't represented exactly to begin with. So:
```
>>>print("%.20f" % (0.1+0.2))
0.30000000000000004441
>>>print("%.20f" % 0.3)
0.29999999999999998890
>>>print(0.29999999999999998890)
0.3
```
So it's all up to the Python rules for printing stuff, especially considering that pure `0.3` representation is much closer to actual `0.3` than `0.1 + 0.2`.
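The exact stored values can be inspected with the standard `decimal` module, which converts a float to its full binary-exact decimal expansion:

```python
from decimal import Decimal

# Exact values of the doubles behind the literals (digits truncated here):
print(Decimal(0.3))        # 0.2999999999999999888977697537...
print(Decimal(0.1 + 0.2))  # 0.3000000000000000444089209850...
```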
Here's the related excerpt from [Python docs](https://docs.python.org/3/tutorial/floatingpoint.html):
> Interestingly, there are many different decimal numbers that share the
> same nearest approximate binary fraction. For example, the numbers `0.1`
> and `0.10000000000000001` and
> `0.1000000000000000055511151231257827021181583404541015625` are all approximated by `3602879701896397 / 2 ** 55`. Since all of these decimal
> values share the same approximation, any one of them could be
> displayed while still preserving the invariant `eval(repr(x)) == x`.
>
> Historically, the Python prompt and built-in `repr()` function would
> choose the one with 17 significant digits, `0.10000000000000001`.
> Starting with Python 3.1, Python (on most systems) is now able to
> choose the shortest of these and simply display `0.1`. |
Why django models make many small queries instead of one large? | 35,199,536 | 3 | 2016-02-04T11:13:47Z | 35,199,624 | 8 | 2016-02-04T11:18:43Z | [
"python",
"django"
] | Sorry for the non-specific problem, but I suppose this might be an interesting question. At least it is to me ;)
I am using Django models to get data from a few related tables at once. When the QuerySet is evaluated I'd expect Django to make a query like this one:
```
SELECT t1.field1, t1.field2, t2.field1, t2.field2 FROM t1
JOIN t2 ON (t1.fk_t2 = t2.pk);
```
But instead of what I expect, I see Django doing something like this:
```
SELECT t1.field1, t1.field2, t1.fk_t2 FROM t1;
```
And then for all t1.fk\_t2
```
SELECT t2.field1, t2.field2 FROM t2 WHERE id = (here comes some single id);
```
Is this Django's default behavior? Why does it act like this? Is it more efficient? What first came to my mind is that a JOIN requires a cross join and then filtering a very large table, while many single selects let you deal with no more data than really required, but this is only an idea.
Anyone can explain? Thanks in advance! | Django gets only the data you ask it to get. You can use [select\_related()](https://docs.djangoproject.com/en/1.9/ref/models/querysets/#select-related) and [prefetch\_related()](https://docs.djangoproject.com/en/1.9/ref/models/querysets/#prefetch-related) to ask it to get all the data in a single query using JOINs.
Quoting docs:
> select\_related(\*fields)
>
> Returns a QuerySet that will "follow"
> foreign-key relationships, selecting additional related-object data
> when it executes its query. This is a performance booster which
> results in a single more complex query but means later use of
> foreign-key relationships won't require database queries.
>
> prefetch\_related(\*lookups)
>
> Returns a QuerySet that will automatically
> retrieve, in a single batch, related objects for each of the specified
> lookups. |
Dot Product in Python without NumPy | 35,208,160 | 2 | 2016-02-04T17:54:33Z | 35,208,273 | 7 | 2016-02-04T18:01:12Z | [
"python",
"numpy",
"operation"
] | Is there a way that you can preform a dot product of two lists that contain values without using NumPy or the Operation module in Python? So that the code is as simple as it could get?
For example:
```
V_1=[1,2,3]
V_2=[4,5,6]
Dot(V_1,V_2)
```
Answer: 32 | Without numpy, you can write yourself a function for the dot product which uses `zip` and `sum`.
```
>>> def dot(v1, v2):
... return sum(x*y for x,y in zip(v1,v2))
...
>>> dot([1,2,3], [4,5,6])
32
``` |
Attempting Python list comprehension with two variable of different ranges | 35,215,024 | 10 | 2016-02-05T01:51:41Z | 35,215,054 | 20 | 2016-02-05T01:55:17Z | [
"python",
"list",
"list-comprehension"
] | I'm trying to generate a list quickly with content from two different arrays of size n and n/2. As an example:
```
A = [70, 60, 50, 40, 30, 20, 10, 0]
B = [1, 2, 3, 4]
```
I wish to generate something like
```
[(A[x], B[y]) for x in range(len(A)) for y in range(len(B))]
```
I understand the second for statement is the nested for loop after the "x" one. I'm trying to get the contents of the new array to be
```
A[0], B[0]
A[1], B[1]
A[2], B[2]
A[3], B[3]
A[4], B[0]
A[5], B[1]
A[6], B[2]
A[7], B[3]
```
Could anyone point me in the right direction? | Don't use nested loops; you are pairing up `A` and `B`, with `B` repeating as needed. What you need is [`zip()`](https://docs.python.org/2/library/functions.html#zip) (to do the pairing), and [`itertools.cycle()`](https://docs.python.org/2/library/itertools.html#itertools.cycle) (to repeat `B`):
```
from itertools import cycle
zip(A, cycle(B))
```
If `B` is always going to be half the size of `A`, you could also just double `B`:
```
zip(A, B + B)
```
Demo:
```
>>> from itertools import cycle
>>> A = [70, 60, 50, 40, 30, 20, 10, 0]
>>> B = [1, 2, 3, 4]
>>> zip(A, cycle(B))
[(70, 1), (60, 2), (50, 3), (40, 4), (30, 1), (20, 2), (10, 3), (0, 4)]
>>> zip(A, B + B)
[(70, 1), (60, 2), (50, 3), (40, 4), (30, 1), (20, 2), (10, 3), (0, 4)]
```
For cases where it is not known which one is the longer list, you could use `min()` and `max()` to pick which one to cycle:
```
zip(max((A, B), key=len), cycle(min((A, B), key=len)))
```
or for an arbitrary number of lists to pair up, cycle them *all* but use [`itertools.islice()`](https://docs.python.org/2/library/itertools.html#itertools.islice) to limit things to the maximum length:
```
inputs = (A, B) # potentially more
max_length = max(len(elem) for elem in inputs)
zip(*(islice(cycle(elem), max_length) for elem in inputs))
```
Demo:
```
>>> from itertools import islice
>>> inputs = (A, B) # potentially more
>>> max_length = max(len(elem) for elem in inputs)
>>> zip(*(islice(cycle(elem), max_length) for elem in inputs))
[(70, 1), (60, 2), (50, 3), (40, 4), (30, 1), (20, 2), (10, 3), (0, 4)]
``` |
Attempting Python list comprehension with two variables of different ranges | 35,215,024 | 10 | 2016-02-05T01:51:41Z | 35,215,064 | 10 | 2016-02-05T01:56:18Z | [
"python",
"list",
"list-comprehension"
] | I'm trying to generate a list quickly with content from two different arrays of size n and n/2. As an example:
```
A = [70, 60, 50, 40, 30, 20, 10, 0]
B = [1, 2, 3, 4]
```
I wish to generate something like
```
[(A[x], B[y]) for x in range(len(A)) for y in range(len(B))]
```
I understand the second for statement is the nested for loop after the "x" one. I'm trying to get the contents of the new array to be
```
A[0], B[0]
A[1], B[1]
A[2], B[2]
A[3], B[3]
A[4], B[0]
A[5], B[1]
A[6], B[2]
A[7], B[3]
```
Could anyone point me in the right direction? | `[(A[x % len(A)], B[x % len(B)]) for x in range(max(len(A), len(B)))]`
This will work whether or not A is the larger list. :) |
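A quick check of the modulo trick against the sample lists from the question (a throwaway sketch):

```python
A = [70, 60, 50, 40, 30, 20, 10, 0]
B = [1, 2, 3, 4]

# Index both lists modulo their own length; the shorter one wraps around.
pairs = [(A[x % len(A)], B[x % len(B)]) for x in range(max(len(A), len(B)))]
print(pairs)
# [(70, 1), (60, 2), (50, 3), (40, 4), (30, 1), (20, 2), (10, 3), (0, 4)]
```

Because the indexing is symmetric, swapping which of `A` and `B` is longer requires no code change.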
Why is calling float() on a number slower than adding 0.0 in Python? | 35,229,877 | 16 | 2016-02-05T17:17:45Z | 35,230,019 | 14 | 2016-02-05T17:26:26Z | [
"python",
"casting",
"floating-point",
"integer"
] | What is the reason that casting an integer to a float is slower than adding 0.0 to that int in Python?
```
import timeit
def add_simple():
for i in range(1000):
a = 1 + 0.0
def cast_simple():
for i in range(1000):
a = float(1)
def add_total():
total = 0
for i in range(1000):
total += 1 + 0.0
def cast_total():
total = 0
for i in range(1000):
total += float(1)
print "Add simple timing: %s" % timeit.timeit(add_simple, number=1)
print "Cast simple timing: %s" % timeit.timeit(cast_simple, number=1)
print "Add total timing: %s" % timeit.timeit(add_total, number=1)
print "Cast total timing: %s" % timeit.timeit(cast_total, number=1)
```
The output of which is:
> Add simple timing: 0.0001220703125
>
> Cast simple timing: 0.000469923019409
>
> Add total timing: 0.000164985656738
>
> Cast total timing: 0.00040078163147 | If you use the `dis` module, you can start to see why:
```
In [11]: dis.dis(add_simple)
2 0 SETUP_LOOP 26 (to 29)
3 LOAD_GLOBAL 0 (range)
6 LOAD_CONST 1 (1000)
9 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
12 GET_ITER
>> 13 FOR_ITER 12 (to 28)
16 STORE_FAST 0 (i)
3 19 LOAD_CONST 4 (1.0)
22 STORE_FAST 1 (a)
25 JUMP_ABSOLUTE 13
>> 28 POP_BLOCK
>> 29 LOAD_CONST 0 (None)
32 RETURN_VALUE
In [12]: dis.dis(cast_simple)
2 0 SETUP_LOOP 32 (to 35)
3 LOAD_GLOBAL 0 (range)
6 LOAD_CONST 1 (1000)
9 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
12 GET_ITER
>> 13 FOR_ITER 18 (to 34)
16 STORE_FAST 0 (i)
3 19 LOAD_GLOBAL 1 (float)
22 LOAD_CONST 2 (1)
25 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
28 STORE_FAST 1 (a)
31 JUMP_ABSOLUTE 13
>> 34 POP_BLOCK
>> 35 LOAD_CONST 0 (None)
38 RETURN_VALUE
```
Note the `CALL_FUNCTION`
Function calls in Python are (relatively) slow. As are `.` lookups. Because casting to `float` requires a function call - that's why it's slower. |
Why is calling float() on a number slower than adding 0.0 in Python? | 35,229,877 | 16 | 2016-02-05T17:17:45Z | 35,230,191 | 9 | 2016-02-05T17:35:20Z | [
"python",
"casting",
"floating-point",
"integer"
] | What is the reason that casting an integer to a float is slower than adding 0.0 to that int in Python?
```
import timeit
def add_simple():
for i in range(1000):
a = 1 + 0.0
def cast_simple():
for i in range(1000):
a = float(1)
def add_total():
total = 0
for i in range(1000):
total += 1 + 0.0
def cast_total():
total = 0
for i in range(1000):
total += float(1)
print "Add simple timing: %s" % timeit.timeit(add_simple, number=1)
print "Cast simple timing: %s" % timeit.timeit(cast_simple, number=1)
print "Add total timing: %s" % timeit.timeit(add_total, number=1)
print "Cast total timing: %s" % timeit.timeit(cast_total, number=1)
```
The output of which is:
> Add simple timing: 0.0001220703125
>
> Cast simple timing: 0.000469923019409
>
> Add total timing: 0.000164985656738
>
> Cast total timing: 0.00040078163147 | If you look at the bytecode for `add_simple`:
```
>>> dis.dis(add_simple)
2 0 SETUP_LOOP 26 (to 29)
3 LOAD_GLOBAL 0 (range)
6 LOAD_CONST 1 (1000)
9 CALL_FUNCTION 1
12 GET_ITER
>> 13 FOR_ITER 12 (to 28)
16 STORE_FAST 0 (i)
3 19 LOAD_CONST 4 (1.0)
22 STORE_FAST 1 (a)
25 JUMP_ABSOLUTE 13
>> 28 POP_BLOCK
>> 29 LOAD_CONST 0 (None)
32 RETURN_VALUE
```
You'll see that `0.0` isn't actually anywhere in there. It just loads the constant `1.0` and stores it to `a`. Python computed the result at compile-time, so you're not actually timing the addition.
If you use a variable for `1`, so Python's primitive peephole optimizer can't do the addition at compile-time, adding `0.0` still has a lead:
```
>>> timeit.timeit('float(a)', 'a=1')
0.22538208961486816
>>> timeit.timeit('a+0.0', 'a=1')
0.13347005844116211
```
Calling `float` requires two dict lookups to figure out what `float` is, one in the module's global namespace and one in the built-ins. It also has Python function call overhead, which is more expensive than a C function call.
Adding `0.0` only requires indexing into the function's code object's `co_consts` to load the constant `0.0`, and then calling the C-level `nb_add` functions of the `int` and `float` types to perform the addition. This is a lower amount of overhead overall. |
Type hinting in Python 2 | 35,230,635 | 8 | 2016-02-05T18:02:05Z | 35,230,792 | 11 | 2016-02-05T18:12:10Z | [
"python",
"python-2.7",
"types",
"type-hinting"
] | In [PEP 484](https://www.python.org/dev/peps/pep-0484/), type hinting was added to Python 3 with the inclusion of the [`typing`](https://docs.python.org/3/library/typing.html) module. Is there any way to do this in Python 2? All I can think of is having a decorator to add to methods to check types, but this would fail at runtime and not be caught earlier like the hinting would allow. | According to [**Suggested syntax for Python 2.7 and straddling code**](https://www.python.org/dev/peps/pep-0484/#suggested-syntax-for-python-2-7-and-straddling-code) in PEP 484 which defined type hinting, there is an alternative syntax for compatibility with Python 2.7. It is however not mandatory so I don't know how well supported it is, but quoting the PEP:
> Some tools may want to support type annotations in code that must be compatible with Python 2.7. For this purpose this PEP has a suggested (but not mandatory) extension where function annotations are placed in a # type: comment. Such a comment must be placed immediately following the function header (before the docstring). An example: the following Python 3 code:
>
> ```
> def embezzle(self, account: str, funds: int = 1000000, *fake_receipts: str) -> None:
> """Embezzle funds from account using fake receipts."""
> <code goes here>
> ```
>
> is equivalent to the following:
>
> ```
> def embezzle(self, account, funds=1000000, *fake_receipts):
> # type: (str, int, *str) -> None
> """Embezzle funds from account using fake receipts."""
> <code goes here>
> ```
For `mypy` support, see [**Type checking Python 2 code**](https://mypy.readthedocs.io/en/latest/python2.html#type-checking-python-2-code). |
Why is a `for` over a Python list faster than over a Numpy array? | 35,232,406 | 5 | 2016-02-05T19:53:50Z | 35,232,645 | 7 | 2016-02-05T20:07:55Z | [
"python",
"arrays",
"performance",
"numpy"
] | So without telling a really long story: I was working on some code where I was reading in some data from a binary file and then looping over every single point using a for loop. I completed the code and it was running ridiculously slow. I was looping over around 60,000 points from around 128 data channels, and this was taking a minute or more to process. This was way slower than I ever expected Python to run. So I made the whole thing more efficient by using NumPy, but in trying to figure out why the original process ran so slowly, we did some type checking and found that I was looping over NumPy arrays instead of Python lists. No big deal; to make the inputs to our test setup the same, I converted the NumPy arrays to lists before looping. Bang: the same slow code that took a minute to run now took 10 seconds. I was floored. The only thing I did was change a NumPy array to a Python list; when I changed it back, it was as slow as mud again. I couldn't believe it, so I went to get more definitive proof:
```
$ python -m timeit -s "import numpy" "for k in numpy.arange(5000): k+1"
100 loops, best of 3: 5.46 msec per loop
$ python -m timeit "for k in range(5000): k+1"
1000 loops, best of 3: 256 usec per loop
```
What is going on? I know that NumPy arrays and Python lists are different, but why is it so much slower to iterate over every point in an array?
I observed this behavior in both Python 2.6 and 2.7, running NumPy 1.10.1 I believe. | We can do a little sleuthing to figure this out:
```
>>> import numpy as np
>>> a = np.arange(32)
>>> a
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31])
>>> a.data
<read-write buffer for 0x107d01e40, size 256, offset 0 at 0x107d199b0>
>>> id(a.data)
4433424176
>>> id(a[0])
4424950096
>>> id(a[1])
4424950096
>>> for item in a:
... print id(item)
...
4424950096
4424950120
4424950096
4424950120
4424950096
4424950120
4424950096
4424950120
4424950096
4424950120
4424950096
4424950120
4424950096
4424950120
4424950096
4424950120
4424950096
4424950120
4424950096
4424950120
4424950096
4424950120
4424950096
4424950120
4424950096
4424950120
4424950096
4424950120
4424950096
4424950120
4424950096
4424950120
```
So what is going on here? First, I took a look at the memory location of the array's memory buffer. It's at `4433424176`. That in itself isn't *too* illuminating. However, numpy stores its data as a contiguous C array, so the first element in the numpy array *should* correspond to the memory address of the array itself, but it doesn't:
```
>>> id(a[0])
4424950096
```
and it's a good thing it doesn't because that would break the invariant in python that 2 objects never have the same `id` during their lifetimes.
So, how does numpy accomplish this? Well, the answer is that numpy has to wrap the returned object with a python type (e.g. `numpy.float64` or `numpy.int64` in this case) which takes time if you're iterating item-by-item¹. Further proof of this is demonstrated when iterating -- we see that we're alternating between 2 separate IDs while iterating over the array. This means that python's memory allocator and garbage collector are working overtime to create new objects and then free them.
A *list* doesn't have this memory allocator/garbage collector overhead. The objects in the list already exist as python objects (and they'll still exist after iteration), so neither plays any role in the iteration over a list.
## Timing methodology:
Also note, your timings are thrown off a little bit by your assumptions. You were assuming that `k + 1` should take about the same amount of time in both cases, but it doesn't. Notice if I repeat your timings *without* doing any addition:
```
mgilson$ python -m timeit -s "import numpy" "for k in numpy.arange(5000): k"
1000 loops, best of 3: 233 usec per loop
mgilson$ python -m timeit "for k in range(5000): k"
10000 loops, best of 3: 114 usec per loop
```
there's only about a factor of 2 difference. Doing the addition however leads to a factor of 5 difference or so:
```
mgilson$ python -m timeit "for k in range(5000): k+1"
10000 loops, best of 3: 179 usec per loop
mgilson$ python -m timeit -s "import numpy" "for k in numpy.arange(5000): k+1"
1000 loops, best of 3: 786 usec per loop
```
For fun, lets just do the addition:
```
$ python -m timeit -s "v = 1" "v + 1"
10000000 loops, best of 3: 0.0261 usec per loop
mgilson$ python -m timeit -s "import numpy; v = numpy.int64(1)" "v + 1"
10000000 loops, best of 3: 0.121 usec per loop
```
And finally, your timeit also includes list/array construction time which isn't ideal:
```
mgilson$ python -m timeit -s "v = range(5000)" "for k in v: k"
10000 loops, best of 3: 80.2 usec per loop
mgilson$ python -m timeit -s "import numpy; v = numpy.arange(5000)" "for k in v: k"
1000 loops, best of 3: 237 usec per loop
```
Notice that numpy actually got further away from the list solution in this case. This shows that *iteration* really *is* slower and you might get some speedups if you convert the numpy types to standard python types.
¹Note, this doesn't take a lot of time when slicing because that only has to allocate O(1) new objects since numpy returns a *view* into the original array. |
How to print with inline if statement? | 35,234,846 | 7 | 2016-02-05T22:47:13Z | 35,234,894 | 9 | 2016-02-05T22:51:23Z | [
"python",
"dynamic",
"printing",
"inline"
] | This dictionary corresponds with numbered nodes:
```
{0: True, 1: True, 2: True, 3: False, 4: False, 5: False, 6: True, 7: True, 8: False, 9: False}
```
Using two print statements, I want to print marked and unmarked nodes as follows:
* Marked nodes: `0 1 2 6 7`
* Unmarked nodes: `3 4 5 8 9`
I want something close to:
```
print("Marked nodes: %d" key in markedDict if markedDict[key] = True)
print("Unmarked nodes: %d" key in markedDict if markedDict[key] = False)
``` | You can use list comprehensions:
```
nodes = {0: True, 1: True, 2: True,
3: False, 4: False, 5: False,
6: True, 7: True, 8: False, 9: False}
print("Marked nodes: ", *[i for i, value in nodes.items() if value])
print("Unmarked nodes: ", *[i for i, value in nodes.items() if not value])
```
**Output:**
```
Marked nodes: 0 1 2 6 7
Unmarked nodes: 3 4 5 8 9
``` |
Why are tuples enclosed in parentheses? | 35,241,105 | 5 | 2016-02-06T12:38:58Z | 35,241,124 | 10 | 2016-02-06T12:40:40Z | [
"python"
] | > a tuple is a comma-separated list of values
so the valid syntax to declare a tuple is:
```
tup = 'a', 'b', 'c', 'd'
```
But what I often see is a declaration like this:
```
tup = ('a', 'b', 'c', 'd')
```
What is the benefit of enclosing tuples in parentheses ? | From the [Python docs](https://docs.python.org/2/tutorial/datastructures.html#tuples-and-sequences):
> ... so that nested tuples are interpreted correctly. Tuples may be
> input with or without surrounding parentheses, although often
> parentheses are necessary anyway (if the tuple is part of a larger
> expression).
Example of nested tuples:
```
t = ('a', ('b', 'c'), 'd')  # renamed to avoid shadowing the built-in `tuple`
``` |
Combining conda environment.yml with pip requirements.txt | 35,245,401 | 7 | 2016-02-06T19:25:40Z | 35,245,610 | 10 | 2016-02-06T19:44:10Z | [
"python",
"pip",
"anaconda",
"conda",
"env"
] | I work with conda environments and need some pip packages as well, e.g. pre-compiled wheels from [~gohlke](http://www.lfd.uci.edu/~gohlke/pythonlibs/).
At the moment I have two files: `environment.yml` for conda with:
```
# run: conda env create --file environment.yml
name: test-env
dependencies:
- python>=3.5
- anaconda
```
and `requirements.txt` for pip which can be used after activating above conda environment:
```
# run: pip install -r requirements.txt
http://www.lfd.uci.edu/~gohlke/pythonlibs/bofhrmxk/opencv_python-3.1.0-cp35-none-win_amd64.whl
```
Is there a possibility to combine them in one file (for conda)? | Ah, I found the solution: I can include extra pip dependencies in the `environment.yml` for conda like this:
```
# run: conda env create --file environment.yml
name: test-env
dependencies:
- python>=3.5
- anaconda
- pip
- pip:
- http://www.lfd.uci.edu/~gohlke/pythonlibs/bofhrmxk/opencv_python-3.1.0-cp35-none-win_amd64.whl
```
That is great! But why is it not mentioned in the conda [docs](http://conda.pydata.org/docs/using/envs.html)? |
Conda command not found | 35,246,386 | 5 | 2016-02-06T20:58:53Z | 35,246,794 | 8 | 2016-02-06T21:39:08Z | [
"python",
"zsh",
"anaconda",
"miniconda"
] | I've installed Miniconda and have added the environment variable `export PATH="/home/username/miniconda3/bin:$PATH"` to my `.bashrc` and `.bash_profile` but still can't run any conda commands in my terminal.
Am I missing another setup? I'm using zsh by the way. | If you're using zsh and it has not been set up to read .bashrc, you need to add the Miniconda directory to the zsh shell PATH environment variable. Add this to your `.zshrc`:
```
export PATH="/home/username/miniconda/bin:$PATH"
```
Make sure to **replace** `/home/username/miniconda` with **your actual path**.
Save, exit the terminal and then reopen it. The `conda` command should now work. |
Raising error if string not in one or more lists | 35,246,467 | 4 | 2016-02-06T21:07:03Z | 35,246,504 | 9 | 2016-02-06T21:10:35Z | [
"python",
"list",
"boolean"
] | I wish to raise an error manually if a *target\_string* does not occur in one or more lists of a list of lists.
```
if False in [False for lst in lst_of_lsts if target_string not in lst]:
raise ValueError('One or more lists does not contain "%s"' % (target_string))
```
Surely there is a more Pythonic solution than the one specified above. | Use [`all()`](https://docs.python.org/2/library/functions.html#all)
```
if not all(target_string in lst for lst in lst_of_lsts):
raise ValueError('One or more lists does not contain "%s"' % (target_string))
```
The generator yields `True` or `False` for each individual test and `all()` checks if all of them are true. Since we are using a generator, the evaluation is lazy, i.e. it stops when the first `False` is found without evaluating the full list.
Or if the double `in` on the same line seems confusing, one might write
```
if not all((target_string in lst) for lst in lst_of_lsts):
raise ValueError('One or more lists does not contain "%s"' % (target_string))
```
but I'm not so sure any more that actually increases readability. |
What is special about the replace() method in Python? | 35,247,549 | 3 | 2016-02-06T23:00:18Z | 35,247,611 | 8 | 2016-02-06T23:08:14Z | [
"python",
"string",
"python-3.x"
] | First of all, I'm a beginner in Python, so I'm sorry if my question seems ridiculous to you. If you have a string value, for example:
```
a = 'Hello 11'
```
if you type:
```
a[-1] = str(int(a[-1]) + 1)
```
the result will be: `'2'`
but if you type:
```
a.replace(a[-1], str(int(a[-1]) + 1))
```
the result will be:
'`Hello 22'` instead of `'Hello 12'`
Why does that happen? | Have a look at the parts:
```
>>> a[-1]
'1'
>>> str(int(a[-1]) + 1)
'2'
```
This means:
```
>>> a.replace(a[-1], str(int(a[-1]) + 1))
```
does this:
```
>>> a.replace('1', '2')
'Hello 22'
```
It replaces the string `1` by the string `2`.
In Python strings are immutable. Therefore, this:
```
>>> a[-1] = str(int(a[-1]) + 1)
```
does not work:
```
TypeError: 'str' object does not support item assignment
``` |
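Since strings are immutable, getting `'Hello 12'` means building a new string rather than assigning to `a[-1]`. For example, with slicing (a minimal sketch):

```python
a = 'Hello 11'

# Keep everything but the last character, then append the incremented digit.
a = a[:-1] + str(int(a[-1]) + 1)
print(a)  # 'Hello 12'
```

Note this only handles a single trailing digit; incrementing `'Hello 19'` this way would give `'Hello 110'`.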
Python, zip multiple lists where one list requires two items each | 35,247,659 | 4 | 2016-02-06T23:13:29Z | 35,247,736 | 8 | 2016-02-06T23:22:16Z | [
"python",
"list"
] | I have the following lists as an example:
```
a = ['#12908069', '#12906115', '#12904949', '#12904654', '#12904288', '#12903553']
b = ['85028,', '83646,', '77015,', '90011,', '91902,', '80203,']
c = ['9.09', '9.09', '1.81', '3.62', '1.81', '1.81', '9.09', '9.09', '1.81', '3.62', '1.81', '1.81']
d = ['Zone 3', 'Zone 3', 'Zone 2']
```
What I'd like to achieve as an output, the first item set zipped as an example:
```
[('#12908069', '85028', (9.09, 9.09), 'Zone 3'), ...]
```
How do I get `zip()` to add an extra item for each tuple from list `c`? | You can use list slices with a step of 2; see [Explain Python's slice notation](http://stackoverflow.com/a/509295/1176601):
```
list(zip(a,b,zip(c[0::2],c[1::2]),d))
``` |
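Applied to the first three rows of the question's sample data, the two slices pair up consecutive items of `c`. Note that `zip` truncates to the shortest input, and the values stay strings unless converted:

```python
a = ['#12908069', '#12906115', '#12904949']
b = ['85028,', '83646,', '77015,']
c = ['9.09', '9.09', '1.81', '3.62', '1.81', '1.81']
d = ['Zone 3', 'Zone 3', 'Zone 2']

# c[0::2] takes the even-indexed items, c[1::2] the odd-indexed ones,
# so zipping them pairs up consecutive elements of c.
result = list(zip(a, b, zip(c[0::2], c[1::2]), d))
print(result[0])  # ('#12908069', '85028,', ('9.09', '9.09'), 'Zone 3')
```

To match the question's desired output exactly, you would additionally strip the trailing commas from `b` and convert the `c` values to floats.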
Why do chat applications have to be asynchronous? | 35,249,741 | 2 | 2016-02-07T04:29:41Z | 35,250,204 | 7 | 2016-02-07T05:48:18Z | [
"python",
"django",
"chat",
"tornado"
] | I need to implement a chat application for my web service (which is written with Django + the REST framework). After doing some Google searching, I found that the available Django chat applications are all deprecated and not supported anymore. And all the DIY (do it yourself) solutions I found use the **Tornado** or **Twisted** frameworks.
So, my question is: is it OK to make a Django-only, synchronous chat application? And do I need to use an asynchronous framework? I have very little experience in backend programming, so I want to keep everything as simple as possible. | Django, like many other web frameworks, is constructed around the concept of receiving an HTTP request from a web client, processing the request and sending a response. Breaking down that flow (simplified for the sake of clarity):
1. The remote client opens [TCP](https://en.wikipedia.org/wiki/Transmission_Control_Protocol) connection with your Django server.
2. The client sends an [HTTP](https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol) request to the server, with a path, some headers and possibly a body.
3. The server sends an HTTP response.
4. The connection is closed.
5. The server goes back to a state where it waits for a new connection.
A chat server, if it needs to be somewhat real-time, needs to be different: it needs to maintain many simultaneous open connections with connected clients, so that when new messages are published on a channel, the appropriate clients are notified accordingly.
A modern way of implementing that is using [WebSockets](https://en.wikipedia.org/wiki/WebSocket). This communication flow between the client and server starts with a HTTP request, like the one described above, but the client sends a special **Upgrade** HTTP request to the server, asking for the session to switch over from a simple request/response paradigm to a persistent, "full-duplex" communication model, where both the client and server can send messages at any time in both direction.
The fact that the connections with multiple simultaneous clients need to be persistent means you can't have a simple execution model where a single request is taken care of by your server at a time, which is usually what happens in what you call *synchronous* servers. **Tornado** and **Twisted** have different models for doing networking, based on event loops and non-blocking I/O rather than one thread per request, so that many connections can be left open and taken care of simultaneously by a server, making a chat service possible.
---
### Synchronous approach nevertheless
Having said that, there are ways to implement a very simple, non-scalable chat service with noticeable latency:
1. Clients perform `POST` requests to your server to send messages to channels.
2. Clients perform periodic `GET` requests to the server to ask for any new messages on the channels they're subscribed to. The rate at which they send these requests is basically the refresh rate of the chat app.
With this approach, your server will work significantly harder than if it had an asynchronous execution model for maintaining persistent connections, but it will work. |
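That request/response polling model can be sketched framework-agnostically. The two functions below are stand-ins for what would be two Django views backed by a database; the in-memory store and all names here are illustrative assumptions, not a real chat API:

```python
import itertools

_messages = []            # in-memory stand-in for a database table
_ids = itertools.count(1)

def post_message(channel, text):
    """What the POST endpoint would do: persist a new message."""
    msg_id = next(_ids)
    _messages.append((msg_id, channel, text))
    return msg_id

def get_messages(channel, since_id=0):
    """What the periodic GET would do: return messages newer than since_id."""
    return [(i, t) for i, ch, t in _messages if ch == channel and i > since_id]

# A client "polls" by remembering the highest message id it has seen.
post_message("general", "hello")
post_message("general", "world")
print(get_messages("general", since_id=0))  # [(1, 'hello'), (2, 'world')]
print(get_messages("general", since_id=1))  # [(2, 'world')]
```

Tracking `since_id` on the client keeps each poll cheap; the polling interval is the trade-off between latency and server load.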
How to check if all values of a dictionary are 0, in Python? | 35,253,971 | 9 | 2016-02-07T13:18:27Z | 35,254,031 | 14 | 2016-02-07T13:24:54Z | [
"python",
"dictionary"
] | I want to check if all the values, i.e. the values corresponding to all keys in a dictionary, are 0. Is there any way to do it without loops? If so, how? | Use `all()`:
```
all(value == 0 for value in your_dict.values())
```
`all` returns `True` if all elements of the given iterable are true. |
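A quick demonstration with a throwaway dict (for numeric values, `not any(d.values())` is an equivalent spelling, since 0 is falsey):

```python
counts = {'a': 0, 'b': 0, 'c': 0}
print(all(value == 0 for value in counts.values()))  # True

counts['b'] = 5
print(all(value == 0 for value in counts.values()))  # False
```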
Mongoengine is very slow on large documents compared to native pymongo usage | 35,257,305 | 4 | 2016-02-07T18:24:58Z | 35,274,930 | 7 | 2016-02-08T16:52:08Z | [
"python",
"mongodb",
"pymongo",
"mongoengine"
] | I have the following mongoengine model:
```
class MyModel(Document):
date = DateTimeField(required = True)
data_dict_1 = DictField(required = False)
data_dict_2 = DictField(required = True)
```
In some cases the document in the DB can be very large (around 5-10MB), and the data\_dict fields contain complex nested documents (dict of lists of dicts, etc...).
I have encountered two (possibly related) issues:
1. When I run a native pymongo find\_one() query, it returns within a second. When I run MyModel.objects.first(), it takes 5-10 seconds.
2. When I query a single large document from the DB, and then access its field, it takes 10-20 seconds just to do the following:
```
m = MyModel.objects.first()
val = m.data_dict_1.get(some_key)
```
The data in the object does not contain any references to any other objects, so it is not an issue of objects dereferencing.
I suspect it is related to some inefficiency of the internal data representation of mongoengine, which affects the document object construction as well as fields access. Is there anything I can do to improve this ? | **TL;DR: mongoengine is spending ages converting all the returned arrays to dicts**
To test this out I built a collection with a document with a `DictField` with a large nested `dict`. The doc being roughly in your 5-10MB range.
We can then use [`timeit.timeit`](https://docs.python.org/3.4/library/timeit.html) to confirm the difference in reads using pymongo and mongoengine.
We can then use [pycallgraph](http://pycallgraph.slowchop.com/en/master/index.html) and [GraphViz](http://www.graphviz.org/) to see what is taking mongoengine so damn long.
Here is the code in full:
```
import datetime
import itertools
import random
import sys
import timeit
from collections import defaultdict
import mongoengine as db
from pycallgraph.output.graphviz import GraphvizOutput
from pycallgraph.pycallgraph import PyCallGraph
db.connect("test-dicts")
class MyModel(db.Document):
date = db.DateTimeField(required=True, default=datetime.date.today)
data_dict_1 = db.DictField(required=False)
MyModel.drop_collection()
data_1 = ['foo', 'bar']
data_2 = ['spam', 'eggs', 'ham']
data_3 = ["subf{}".format(f) for f in range(5)]
m = MyModel()
tree = lambda: defaultdict(tree) # http://stackoverflow.com/a/19189366/3271558
data = tree()
for _d1, _d2, _d3 in itertools.product(data_1, data_2, data_3):
data[_d1][_d2][_d3] = list(random.sample(range(50000), 20000))
m.data_dict_1 = data
m.save()
def pymongo_doc():
return db.connection.get_connection()["test-dicts"]['my_model'].find_one()
def mongoengine_doc():
return MyModel.objects.first()
if __name__ == '__main__':
print("pymongo took {:2.2f}s".format(timeit.timeit(pymongo_doc, number=10)))
print("mongoengine took", timeit.timeit(mongoengine_doc, number=10))
with PyCallGraph(output=GraphvizOutput()):
mongoengine_doc()
```
And the output proves that mongoengine is being very slow compared to pymongo:
```
pymongo took 0.87s
mongoengine took 25.81118331072267
```
The resulting call graph illustrates pretty clearly where the bottleneck is:
[](http://i.stack.imgur.com/qAb0t.png)
[](http://i.stack.imgur.com/5q8Px.png)
Essentially mongoengine will call the to\_python method on every `DictField` that it gets back from the db. `to_python` is pretty slow and in our example it's being called an insane number of times.
Mongoengine is used to elegantly map your document structure to python objects. If you have very large unstructured documents (which mongodb is great for) then mongoengine isn't really the right tool and you should just use pymongo.
However, if you know the structure you can use `EmbeddedDocument` fields to get slightly better performance from mongoengine. I've run a similar but not equivalent test [code in this gist](https://gist.github.com/BeardedSteve/a1484adcf7475f62028e) and the output is:
```
pymongo with dict took 0.12s
pymongo with embed took 0.12s
mongoengine with dict took 4.3059175412661075
mongoengine with embed took 1.1639373211854682
```
So you can make mongoengine faster but pymongo is much faster still.
**UPDATE**
A good shortcut to the pymongo interface here is to use the aggregation framework:
```
def mongoengine_agg_doc():
return list(MyModel.objects.aggregate({"$limit":1}))[0]
``` |
Can you create a Python list from a string, while keeping characters in specific keywords together? | 35,259,465 | 31 | 2016-02-07T21:40:56Z | 35,259,492 | 36 | 2016-02-07T21:43:42Z | [
"python",
"python-2.7"
] | I want to create a list from the characters in a string, but keep specific keywords together.
For example:
keywords: car, bus
INPUT:
```
"xyzcarbusabccar"
```
OUTPUT:
```
["x", "y", "z", "car", "bus", "a", "b", "c", "car"]
``` | With `re.findall`. Alternate between your keywords first.
```
>>> import re
>>> s = "xyzcarbusabccar"
>>> re.findall('car|bus|[a-z]', s)
['x', 'y', 'z', 'car', 'bus', 'a', 'b', 'c', 'car']
```
In case you have overlapping keywords, note that this solution will find the first one you encounter:
```
>>> s = 'abcaratab'
>>> re.findall('car|rat|[a-z]', s)
['a', 'b', 'car', 'a', 't', 'a', 'b']
```
You can make the solution more general by substituting the `[a-z]` part with whatever you like, `\w` for example, or a simple `.` to match any character.
Short explanation why this works and why the regex `'[a-z]|car|bus'` would not work:
The regular expression engine tries the alternating options from left to right and is "*eager*" to return a match. That means it considers the whole alternation to match as soon as one of the options has been fully matched. At this point, it will not try any of the remaining options but stop processing and report a match immediately. With `'[a-z]|car|bus'`, the engine will report a match when it sees any character in the character class [a-z] and never go on to check if 'car' or 'bus' could also be matched. |
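The ordering point is easy to verify by flipping the alternation order:

```python
import re

s = "xyzcarbusabccar"

# Keywords first: the multi-character alternatives get a chance to match.
print(re.findall('car|bus|[a-z]', s))
# ['x', 'y', 'z', 'car', 'bus', 'a', 'b', 'c', 'car']

# Character class first: it matches 'c' before 'car' is ever tried,
# so the keywords can never win.
print(re.findall('[a-z]|car|bus', s))
# ['x', 'y', 'z', 'c', 'a', 'r', 'b', 'u', 's', 'a', 'b', 'c', 'c', 'a', 'r']
```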