Dataset schema (column name, type, and observed range):

- title: string, length 12 to 150
- question_id: int64, 469 to 40.1M
- question_score: int64, 2 to 5.52k
- question_date: date string, 2008-08-02 15:11:16 to 2016-10-18 06:16:31
- answer_id: int64, 536 to 40.1M
- answer_score: int64, 7 to 8.38k
- answer_date: date string, 2008-08-02 18:49:07 to 2016-10-18 06:19:33
- tags: list, length 1 to 5
- question_body_md: string, length 15 to 30.2k
- answer_body_md: string, length 11 to 27.8k
Matlab range in Python
37,571,622
3
2016-06-01T14:31:28Z
37,571,844
7
2016-06-01T14:40:35Z
[ "python", "matlab", "numpy", "range" ]
I must translate some Matlab code into Python 3, and I often come across ranges of the form `start:step:stop`. When these arguments are all integers, I easily translate this command with `np.arange()`, but when some of the arguments are floats, especially the `step` parameter, I don't get the same output in Python. For example:

```
7:8 %In Matlab
7 8
```

If I want to translate it in Python, I simply use:

```
np.arange(7,8+1)
array([7, 8])
```

But if I have, let's say:

```
7:0.3:8 %In Matlab
7.0000 7.3000 7.6000 7.9000
```

I can't translate it using the same logic:

```
np.arange(7, 8+0.3, 0.3)
array([ 7. ,  7.3,  7.6,  7.9,  8.2])
```

In this case, I must not add the step to the stop argument. But then, if I have:

```
7:0.2:8 %In Matlab
7.0000 7.2000 7.4000 7.6000 7.8000 8.0000
```

I can use my first idea:

```
np.arange(7,8+0.2,0.2)
array([ 7. ,  7.2,  7.4,  7.6,  7.8,  8. ])
```

My problem comes from the fact that I am not translating hardcoded lines like these. In fact, each parameter of these ranges can change depending on the inputs of the function I am working on. Thus, I can sometimes have 0.2 or 0.3 as the step parameter. So basically, do you guys know if there is another numpy/scipy (or whatever) function that really acts like the Matlab range, or if I must add a little bit of code by myself to make sure that my Python range ends up at the same number as Matlab's? Thanks!
You don't actually need to add your entire step size to the max limit of `np.arange`; just add a very tiny number to make sure that the max is included. For example, the [machine epsilon](http://stackoverflow.com/a/19141711/1011724):

```
eps = np.finfo(np.float32).eps
```

Adding `eps` will give you the same result as MATLAB does in all three of your scenarios:

```
In [13]: np.arange(7, 8+eps)
Out[13]: array([ 7.,  8.])

In [14]: np.arange(7, 8+eps, 0.3)
Out[14]: array([ 7. ,  7.3,  7.6,  7.9])

In [15]: np.arange(7, 8+eps, 0.2)
Out[15]: array([ 7. ,  7.2,  7.4,  7.6,  7.8,  8. ])
```
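To avoid hand-tuning the nudge at every call site, the same idea can be wrapped in a small helper. This is only a sketch; `mrange` is a hypothetical name, not a numpy function, and the epsilon is scaled by the stop value so the nudge stays meaningful for larger ranges:

```python
import numpy as np

def mrange(start, step, stop):
    """Sketch of a MATLAB-style start:step:stop range.

    A tiny epsilon-sized nudge (scaled by the magnitude of stop) is
    added to the upper limit, so an endpoint that the step lands on
    exactly is included, as MATLAB does.
    """
    eps = np.finfo(np.float64).eps
    return np.arange(start, stop + eps * max(1.0, abs(stop)), step)
```

With this, `mrange(7, 0.3, 8)` stops at 7.9 while `mrange(7, 0.2, 8)` includes 8.0, matching the MATLAB outputs above.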
Python 3.x multi line comment throws syntax error
37,571,974
2
2016-06-01T14:46:16Z
37,572,043
7
2016-06-01T14:48:37Z
[ "python", "indentation" ]
I'm working on a Python project and as of now, my code has over 400 lines. At one point, I had to write a multi-line comment about a small bug which needs a workaround, and the interpreter decided to throw a syntax error. According to the interpreter, the syntax error is occurring at the **elif**. I re-checked my indentation, converted tabs to spaces, etc. Nothing seems to work.

```
if some_condition_1 == True:
    do_something()

""" Sub stage (b):
Refer documentation [1.7A] for ...
....
....
....
"""

elif condition_1 == True:
    if condition_2 == False:
        list.append(item)
```

However, if I remove the multi-line comment, the code executes fine. Any idea what's going wrong? Please note that the code sample I've shown above is at the **very top** of the file, and there's no chance of anything going wrong elsewhere.
This is an indentation error. Your "multi-line comment" (really multi-line string) must be indented under the `if` block just like anything else. `""" These kinds of things """` are not really comments in Python. You're just creating a string and then throwing the value away (not storing it anywhere). Since Python doesn't have true multi-line comments, many people use them this way. However, since they are not true comments (they aren't ignored by the interpreter), they must obey all normal syntax rules, including indentation rules. (Do note that when I say "creating a string" I'm speaking loosely. CPython, at least, has an optimization not to create an object here.)
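To make the difference concrete, here is a minimal runnable sketch (hypothetical function, not from the question) where the string literal is indented correctly and therefore parses, contrasted with plain `#` comments, which the parser ignores entirely:

```python
def branch(flag):
    out = []
    if flag:
        out.append("if-branch")
        """
        This triple-quoted literal is indented to match the `if` body,
        so it parses fine -- but it is still an expression that is
        evaluated and discarded, not a true comment.
        """
    elif not flag:
        # A plain '#' comment is ignored by the parser entirely and
        # can never trigger an IndentationError or SyntaxError.
        out.append("elif-branch")
    return out
```

Moving that triple-quoted string back to column zero, between the `if` body and the `elif`, reproduces the asker's `SyntaxError`.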
Generating random vectors of Euclidean norm <= 1 in Python?
37,577,803
11
2016-06-01T20:04:26Z
37,577,963
9
2016-06-01T20:14:50Z
[ "python", "numpy", "random" ]
More specifically, given a natural number d, how can I generate random vectors in R^d such that each vector x has Euclidean norm <= 1?

Generating random vectors via `numpy.random.rand(1,d)` is no problem, but the likelihood of such a random vector having norm <= 1 is predictably bad for even not-small d. For example, even for d = 10, only about 0.2% of such random vectors have appropriately small norm. So that seems like a silly solution.

EDIT: Re: Walter's comment, yes, I'm looking for a uniform distribution over vectors in the unit ball in R^d.
Based on the Wolfram Mathworld article on [hypersphere point picking](http://mathworld.wolfram.com/HyperspherePointPicking.html) and [Nate Eldredge's answer](http://math.stackexchange.com/questions/87230/picking-random-points-in-the-volume-of-sphere-with-uniform-probability) to a similar question on math.stackexchange.com, you can generate such a vector by generating a vector of `d` independent Gaussian random variables and a random number `U` uniformly distributed over the closed interval `[0, 1]`, then normalizing the vector to norm `U^(1/d)`.
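A minimal numpy sketch of that recipe, assuming a reasonably recent numpy (for `default_rng`); the function name is illustrative, not a library API:

```python
import numpy as np

def random_ball_point(d, rng=None):
    """Draw one point uniformly from the unit ball in R^d."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(d)       # isotropic direction from d Gaussians
    x /= np.linalg.norm(x)           # ... normalized onto the unit sphere
    r = rng.random() ** (1.0 / d)    # radius U^(1/d) makes the volume uniform
    return r * x
```

The `U^(1/d)` radius compensates for the fact that most of a high-dimensional ball's volume lies near its surface; a plain uniform radius would oversample the center.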
How can I set a default for star arguments?
37,587,389
4
2016-06-02T09:09:12Z
37,587,548
11
2016-06-02T09:16:19Z
[ "python", "python-3.x" ]
I have a function that accepts `*args`, but I would like to set a default tuple, in case none are provided. (This is not possible through `def f(*args=(1, 3, 5))`, which raises a `SyntaxError`.) What would be the best way to accomplish this? The intended functionality is shown below.

```
f()          # I received 1, 2, 3!
f(1)         # I received 1!
f(9, 3, 72)  # I received 9, 3, 72!
```

The following function `g` will provide the correct functionality, but I would prefer `*args`.

```
def g(args=(1, 2, 3)):
    return "I received {}!".format(', '.join(str(arg) for arg in args))

g()            # I received 1, 2, 3!
g((1,))        # I received 1!
g((9, 3, 72))  # I received 9, 3, 72!
```
You could check whether `args` is truthy in your function:

```
def g(*args):
    if not args:
        args = (1, 2, 3)
    return "I received {}!".format(', '.join(str(arg) for arg in args))
```

If no `args` are passed to the function, it will result in an empty tuple, which evaluates to `False`.
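An equivalent, slightly terser variant relies on `or` falling through when the tuple is empty:

```python
def g(*args):
    # An empty args tuple is falsy, so `or` substitutes the default.
    args = args or (1, 2, 3)
    return "I received {}!".format(', '.join(str(arg) for arg in args))
```

Both spellings behave identically for this use case; the explicit `if not args:` form is arguably clearer when the fallback logic later grows.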
Drop duplicates in pandas time series dataframe
37,602,233
3
2016-06-02T21:14:49Z
37,602,345
9
2016-06-02T21:22:20Z
[ "python", "pandas", "dataframe", "time-series" ]
I have time series data in a data frame that looks like this:

```
Index  Time  Value_A  Value_B
0      1     A        A
1      2     A        A
2      2     B        A
3      3     A        A
4      5     A        A
```

I want to drop duplicates in the Value_A and Value_B columns such that duplicates are only dropped until a different pattern is encountered. The result for this sample data should be:

```
Index  Time  Value_A  Value_B
0      1     A        A
2      2     B        A
3      3     A        A
```
The usual trick to detect contiguous groups is to compare something with a shifted version of itself. For example:

```
In [137]: cols = ["Value_A", "Value_B"]

In [138]: df[~(df[cols] == df[cols].shift()).all(axis=1)]
Out[138]:
       Time Value_A Value_B
Index
0         1       A       A
2         2       B       A
3         3       A       A
```
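The same shift-and-compare idea works without pandas. Here is a pure-Python sketch (hypothetical helper name) that keeps a row only when it differs from the row directly above it:

```python
def drop_contiguous_duplicates(rows):
    """Keep each row only if it differs from the immediately preceding row."""
    kept = []
    prev = object()  # sentinel that never compares equal to a real row
    for row in rows:
        if row != prev:
            kept.append(row)
        prev = row
    return kept
```

Applied to the (Value_A, Value_B) pairs from the question, it keeps rows 0, 2, and 3 and drops the contiguous repeats, just like the pandas one-liner.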
Replacing python list elements with key
37,603,164
4
2016-06-02T22:28:57Z
37,603,292
9
2016-06-02T22:42:06Z
[ "python", "list", "key" ]
I have a list of non-unique strings:

```
list = ["a", "b", "c", "a", "a", "d", "b"]
```

I would like to replace each element with an integer key which uniquely identifies each string:

```
list = [0, 1, 2, 0, 0, 3, 1]
```

The number does not matter, as long as it is a unique identifier. So far all I can think to do is copy the list to a set, and use the index of the set to reference the list. I'm sure there's a better way, though.
This will guarantee uniqueness and that the ids are contiguous starting from `0`:

```
id_s = {c: i for i, c in enumerate(set(list))}
li = [id_s[c] for c in list]
```

On a different note, you should not use `list` as a variable name because it will shadow the built-in type `list`.
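Note that `enumerate(set(...))` assigns the ids in an arbitrary order. If a deterministic labelling matters, a variant that numbers strings in order of first appearance (a sketch, with a hypothetical name) is:

```python
def label_by_first_seen(items):
    """Assign each distinct item a contiguous id in order of first appearance."""
    ids = {}
    # setdefault inserts len(ids) as the id the first time an item is seen,
    # and returns the already-stored id on every later occurrence.
    return [ids.setdefault(x, len(ids)) for x in items]
```

For the question's input this reproduces exactly `[0, 1, 2, 0, 0, 3, 1]`, and it never shadows the `list` built-in.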
How to convert sort using cmp from python 2 to python 3?
37,603,757
5
2016-06-02T23:32:58Z
37,603,846
8
2016-06-02T23:43:58Z
[ "python", "sorting", "python2to3" ]
I'm trying to convert this code, which is written in Python 2, to Python 3:

```
nums = ["30", "31"]
nums.sort(cmp=lambda x, y: cmp(y + x, x + y))
```

I'm not sure how to do that in Python 3, since `cmp` is removed (I believe). The result should be `["31", "30"]` instead of `["30", "31"]`.
This is one of the rare cases where a comparator is much cleaner than a key function. I'd actually just reimplement `cmp`:

```
try:
    cmp
except NameError:
    def cmp(x, y):
        if x < y:
            return -1
        elif x > y:
            return 1
        else:
            return 0
```

and then use [`functools.cmp_to_key`](https://docs.python.org/3/library/functools.html#functools.cmp_to_key) to convert the comparator to a Python 3 style key function:

```
nums.sort(key=functools.cmp_to_key(lambda x, y: cmp(y+x, x+y)))
```

For anyone wondering what this weird sort actually does: it finds the order in which to concatenate the input strings to produce the lexicographically greatest output string. When all the strings are sequences of digits, the output has the highest possible numeric value.
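Putting the pieces together as one runnable snippet, with `cmp` compressed to the common one-line form:

```python
import functools

def cmp(x, y):
    # Python-2-style three-way comparison: -1, 0, or 1.
    return (x > y) - (x < y)

nums = ["30", "31"]
# Sort so that concatenating the results in order gives the greatest string.
nums.sort(key=functools.cmp_to_key(lambda x, y: cmp(y + x, x + y)))
print(nums)  # ['31', '30'], since "3130" > "3031"
```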
When is hash(n) == n in Python?
37,612,524
94
2016-06-03T10:55:04Z
37,613,404
7
2016-06-03T11:38:23Z
[ "python", "python-2.7", "python-3.x", "hash" ]
I've been playing with Python's [hash function](https://docs.python.org/3/library/functions.html#hash). For small integers, it appears `hash(n) == n` always. However, this does not extend to large numbers:

```
>>> hash(2**100) == 2**100
False
```

I'm not surprised; I understand hash takes a finite range of values. What is that range?

I tried using [binary search](http://codejamhelpers.readthedocs.io/en/latest/#codejamhelpers.binary_search) to find the smallest number with `hash(n) != n`:

```
>>> import codejamhelpers  # pip install codejamhelpers
>>> help(codejamhelpers.binary_search)
Help on function binary_search in module codejamhelpers.binary_search:

binary_search(f, t)
    Given an increasing function :math:`f`, find the greatest non-negative
    integer :math:`n` such that :math:`f(n) \le t`. If :math:`f(n) > t` for
    all :math:`n \ge 0`, return None.

>>> f = lambda n: int(hash(n) != n)
>>> n = codejamhelpers.binary_search(f, 0)
>>> hash(n)
2305843009213693950
>>> hash(n+1)
0
```

What's special about 2305843009213693951? I note it's less than `sys.maxsize == 9223372036854775807`.

Edit: I'm using Python 3. I ran the same binary search on Python 2 and got a different result, 2147483648, which I note is `sys.maxint+1`.

I also played with `[hash(random.random()) for i in range(10**6)]` to estimate the range of the hash function. The max is consistently below n above. Comparing the min, it seems Python 3's hash is always positively valued, whereas Python 2's hash can take negative values.
The hash function returns a **plain int**, which means the returned value is greater than `-sys.maxint` and lower than `sys.maxint`; if you pass `sys.maxint + x` to it, the result will be `-sys.maxint + (x - 2)`:

```
hash(sys.maxint + 1) == sys.maxint + 1    # False
hash(sys.maxint + 1) == -sys.maxint - 1   # True
hash(sys.maxint + sys.maxint) == -sys.maxint + sys.maxint - 2  # True
```

Meanwhile, `2**200` is `n` times greater than `sys.maxint`; my guess is that the hash wraps around the range `-sys.maxint..+sys.maxint` n times until it lands on a plain integer in that range, as in the code snippets above. So generally, for any **n <= sys.maxint**:

```
hash(sys.maxint*n) == -sys.maxint*(n%2) + 2*(n%2)*sys.maxint - n/2 - (n + 1)%2  # True
```

**Note:** this is true for Python 2.
When is hash(n) == n in Python?
37,612,524
94
2016-06-03T10:55:04Z
37,614,051
76
2016-06-03T12:10:07Z
[ "python", "python-2.7", "python-3.x", "hash" ]
I've been playing with Python's [hash function](https://docs.python.org/3/library/functions.html#hash). For small integers, it appears `hash(n) == n` always. However, this does not extend to large numbers:

```
>>> hash(2**100) == 2**100
False
```

I'm not surprised; I understand hash takes a finite range of values. What is that range?

I tried using [binary search](http://codejamhelpers.readthedocs.io/en/latest/#codejamhelpers.binary_search) to find the smallest number with `hash(n) != n`:

```
>>> import codejamhelpers  # pip install codejamhelpers
>>> help(codejamhelpers.binary_search)
Help on function binary_search in module codejamhelpers.binary_search:

binary_search(f, t)
    Given an increasing function :math:`f`, find the greatest non-negative
    integer :math:`n` such that :math:`f(n) \le t`. If :math:`f(n) > t` for
    all :math:`n \ge 0`, return None.

>>> f = lambda n: int(hash(n) != n)
>>> n = codejamhelpers.binary_search(f, 0)
>>> hash(n)
2305843009213693950
>>> hash(n+1)
0
```

What's special about 2305843009213693951? I note it's less than `sys.maxsize == 9223372036854775807`.

Edit: I'm using Python 3. I ran the same binary search on Python 2 and got a different result, 2147483648, which I note is `sys.maxint+1`.

I also played with `[hash(random.random()) for i in range(10**6)]` to estimate the range of the hash function. The max is consistently below n above. Comparing the min, it seems Python 3's hash is always positively valued, whereas Python 2's hash can take negative values.
`2305843009213693951` is `2^61 - 1`. It's the largest Mersenne prime that fits into 64 bits. If you have to make a hash just by taking the value mod some number, then a large Mersenne prime is a good choice -- it's easy to compute and ensures an even distribution of possibilities. (Although I personally would never make a hash this way) It's especially convenient to compute the modulus for floating point numbers. They have an exponential component that multiplies the whole number by `2^x`. Since `2^61 = 1 mod 2^61-1`, you only need to consider the `(exponent) mod 61`. See: <https://en.wikipedia.org/wiki/Mersenne_prime>
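You can confirm the modulus and the wrap-around directly from `sys.hash_info` in CPython 3; the modulus is `2**61 - 1` on 64-bit builds and `2**31 - 1` on 32-bit ones, so the snippet below reads it from the interpreter rather than hard-coding either value:

```python
import sys

P = sys.hash_info.modulus  # 2**61 - 1 on a 64-bit CPython build

# Non-negative ints hash to themselves modulo P, so P wraps to 0,
# P + 1 wraps to 1, and P - 1 is the largest int that is its own hash.
assert hash(P) == 0
assert hash(P + 1) == 1
assert hash(P - 1) == P - 1
```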
When is hash(n) == n in Python?
37,612,524
94
2016-06-03T10:55:04Z
37,614,182
64
2016-06-03T12:16:11Z
[ "python", "python-2.7", "python-3.x", "hash" ]
I've been playing with Python's [hash function](https://docs.python.org/3/library/functions.html#hash). For small integers, it appears `hash(n) == n` always. However, this does not extend to large numbers:

```
>>> hash(2**100) == 2**100
False
```

I'm not surprised; I understand hash takes a finite range of values. What is that range?

I tried using [binary search](http://codejamhelpers.readthedocs.io/en/latest/#codejamhelpers.binary_search) to find the smallest number with `hash(n) != n`:

```
>>> import codejamhelpers  # pip install codejamhelpers
>>> help(codejamhelpers.binary_search)
Help on function binary_search in module codejamhelpers.binary_search:

binary_search(f, t)
    Given an increasing function :math:`f`, find the greatest non-negative
    integer :math:`n` such that :math:`f(n) \le t`. If :math:`f(n) > t` for
    all :math:`n \ge 0`, return None.

>>> f = lambda n: int(hash(n) != n)
>>> n = codejamhelpers.binary_search(f, 0)
>>> hash(n)
2305843009213693950
>>> hash(n+1)
0
```

What's special about 2305843009213693951? I note it's less than `sys.maxsize == 9223372036854775807`.

Edit: I'm using Python 3. I ran the same binary search on Python 2 and got a different result, 2147483648, which I note is `sys.maxint+1`.

I also played with `[hash(random.random()) for i in range(10**6)]` to estimate the range of the hash function. The max is consistently below n above. Comparing the min, it seems Python 3's hash is always positively valued, whereas Python 2's hash can take negative values.
Based on the Python documentation in the [`pyhash.c`](https://github.com/python/cpython/blob/master/Python/pyhash.c) file:

> For numeric types, the hash of a number x is based on the reduction
> of x modulo the prime `P = 2**_PyHASH_BITS - 1`. It's designed so that
> `hash(x) == hash(y)` whenever x and y are numerically equal, even if
> x and y have different types.

So for a 64/32-bit machine, the reduction is modulo `2**_PyHASH_BITS - 1`, but what is `_PyHASH_BITS`? You can find it in the [`pyhash.h`](https://github.com/python/cpython/blob/master/Include/pyhash.h) header file, which for a 64-bit machine defines it as 61 (you can read more explanation in the `pyconfig.h` file):

```
#if SIZEOF_VOID_P >= 8
#  define _PyHASH_BITS 61
#else
#  define _PyHASH_BITS 31
#endif
```

So first of all, it depends on your platform; for example, on my 64-bit Linux platform the reduction is modulo `2**61 - 1`, which is `2305843009213693951`:

```
>>> 2**61 - 1
2305843009213693951
```

Also, you can use `math.frexp` to get the mantissa and exponent of `sys.maxint`, which for a 64-bit machine shows that max int is `2**63`:

```
>>> import math
>>> math.frexp(sys.maxint)
(0.5, 64)
```

And you can see the difference with a simple test:

```
>>> hash(2**62) == 2**62
True
>>> hash(2**63) == 2**63
False
```

Read the complete documentation about the Python hashing algorithm at <https://github.com/python/cpython/blob/master/Python/pyhash.c#L34>.

As mentioned in a comment, you can use `sys.hash_info` (in Python 3.x), which will give you a struct sequence of the parameters used for computing hashes:

```
>>> sys.hash_info
sys.hash_info(width=64, modulus=2305843009213693951, inf=314159, nan=0, imag=1000003, algorithm='siphash24', hash_bits=64, seed_bits=128, cutoff=0)
```

Alongside the modulus that I've described in the preceding lines, you can also get the `inf` value as follows:

```
>>> hash(float('inf'))
314159
>>> sys.hash_info.inf
314159
```
Given a list of n points, how can I generate a matrix that contains the distance from every point to every other point using numpy?
37,616,593
4
2016-06-03T14:08:47Z
37,616,731
7
2016-06-03T14:14:59Z
[ "python", "matlab", "numpy", "matrix" ]
Hey guys, so I'm trying to rewrite the following Matlab code in Python:

```
repmat(points, 1, length(points)) - repmat(points', length(points), 1);
```

`points` is an array that contains the radian value of several points. The above code gives me a matrix output like this:

```
 0  1  2  0  1  2  0  1  2
-1  0  1 -1  0  1 -1  0  1
-2 -1  0 -2 -1  0 -2 -1  0
 0  1  2  0  1  2  0  1  2
-1  0  1 -1  0  1 -1  0  1
-2 -1  0 -2 -1  0 -2 -1  0
 0  1  2  0  1  2  0  1  2
-1  0  1 -1  0  1 -1  0  1
-2 -1  0 -2 -1  0 -2 -1  0
```

which I can easily manipulate to get the distance from each point to every other point. I was just wondering if there's a *one-liner* way to do it using numpy? I tried the following, which didn't work:

```
np.tile(points, (1, len(points))) - np.tile(points.T, (len(points), 1))
```

Does anyone have any ideas?
In MATLAB, you had to use `repmat` because you needed the arrays on the left and right side of the `-` to be the same size. With numpy, this does not have to be the case, thanks to numpy's [automatic broadcasting](https://docs.scipy.org/doc/numpy-dev/user/quickstart.html#broadcasting-rules). Instead, you can simply subtract one from the other, and the automatic broadcasting will create your result of the expected size.

```
# Create some example data
points = np.array([[1, 2, 3, 4]])

# Just subtract the transpose from points
B = points - points.T
# array([[ 0,  1,  2,  3],
#        [-1,  0,  1,  2],
#        [-2, -1,  0,  1],
#        [-3, -2, -1,  0]])
```

If `points` is instead just a 1D array, then @JoshAdel's answer should work for you (which also uses broadcasting), or you could convert it to a 2D array.
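For the 1D case mentioned at the end, inserting a new axis achieves the same broadcasting without converting to a 2D array. A small sketch:

```python
import numpy as np

points = np.array([1, 2, 3, 4])  # 1D array of values

# points[:, None] has shape (4, 1) and points[None, :] has shape (1, 4);
# broadcasting their difference yields the full (4, 4) pairwise matrix,
# the equivalent of the MATLAB repmat expression.
diffs = points[None, :] - points[:, None]
```

Entry `(i, j)` of `diffs` is `points[j] - points[i]`, so taking `np.abs(diffs)` gives the pairwise distances directly.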
Summing 2nd list items in a list of lists of lists
37,619,348
4
2016-06-03T16:28:34Z
37,619,560
7
2016-06-03T16:41:55Z
[ "python", "list", "list-comprehension" ]
My data is a list of lists of lists of varying size:

```
data = [[[1, 3],[2, 5],[3, 7]],[[1,11],[2,15]],.....]]]
```

What I want to do is return a list of lists with the values of the 2nd element of each list of lists summed; so 3+5+7 is a list, so is 11+15, etc.:

```
newdata = [[15],[26],...]
```

Or even just a list of the sums would be fine, as I can take it from there:

```
newdata2 = [15,26,...]
```

I've tried accessing the items in the list through different forms and structures of list comprehensions, but I can't seem to get it to the format I want.
Try this one-line approach using a list comprehension:

```
[sum([x[1] for x in i]) for i in data]
```

Output:

```
data = [[[1, 3],[2, 5],[3, 7]],[[1,11],[2,15]]]

[sum([x[1] for x in i]) for i in data]
Out[19]: [15, 26]
```

If you want the output to be a list of lists, then use:

```
[[sum([x[1] for x in i])] for i in data]
```
How can I make sense of the `else` statement in Python loops?
37,642,573
173
2016-06-05T13:41:04Z
37,643,193
29
2016-06-05T14:46:51Z
[ "python", "loops", "for-loop", "while-loop" ]
Many Python programmers are probably unaware that the syntax of `while` loops and `for` loops includes an optional `else:` clause:

```
for val in iterable:
    do_something(val)
else:
    clean_up()
```

The body of the `else` clause is a good place for certain kinds of clean-up actions, and is executed on normal termination of the loop: i.e., exiting the loop with `return` or `break` skips the `else` clause; exiting after a `continue` executes it. I know this only because I just [looked it up](https://docs.python.org/3/reference/compound_stmts.html#the-for-statement) (yet again), because I can never remember *when* the `else` clause is executed. Always? On "failure" of the loop, as the name suggests? On regular termination? Even if the loop is exited with `return`? I can never be entirely sure without looking it up.

I blame my persisting uncertainty on the choice of keyword: I find `else` incredibly unmnemonic for this semantics. My question is not "why is this keyword used for this purpose" (which I would probably vote to close, though only after reading the answers and comments), but **how can I think about the `else` keyword so that its semantics make sense, and I can therefore remember it?**

I'm sure there was a fair amount of discussion about this, and I can imagine that the choice was made for consistency with the `try` statement's `else:` clause (which I also have to look up), and with the goal of not adding to the list of Python's reserved words. Perhaps the reasons for choosing `else` will clarify its function and make it more memorable, but I'm after connecting name to function, not after historical explanation per se.

The answers to [this question](http://stackoverflow.com/q/9979970/699305), which my question was briefly closed as a duplicate of, contain a lot of interesting back story. My question has a different focus (how to connect the specific semantics of `else` with the keyword choice), but I feel there should be a link to this question somewhere.
When does an `if` execute an `else`? When its condition is false. It is exactly the same for the `while`/`else`. So you can think of `while`/`else` as just an `if` that keeps running its true condition until it evaluates false. A `break` doesn't change that. It just jumps out of the containing loop with no evaluation. The `else` is only executed if *evaluating* the `if`/`while` condition is false.

The `for` is similar, except its false condition is exhausting its iterator.

`continue` and `break` don't execute `else`. That isn't their function. The `break` exits the containing loop. The `continue` goes back to the top of the containing loop, where the loop condition is evaluated. It is the act of evaluating the `if`/`while` condition to false (or `for` having no more items) that executes `else`, and no other way.
How can I make sense of the `else` statement in Python loops?
37,642,573
173
2016-06-05T13:41:04Z
37,643,265
20
2016-06-05T14:53:43Z
[ "python", "loops", "for-loop", "while-loop" ]
Many Python programmers are probably unaware that the syntax of `while` loops and `for` loops includes an optional `else:` clause:

```
for val in iterable:
    do_something(val)
else:
    clean_up()
```

The body of the `else` clause is a good place for certain kinds of clean-up actions, and is executed on normal termination of the loop: i.e., exiting the loop with `return` or `break` skips the `else` clause; exiting after a `continue` executes it. I know this only because I just [looked it up](https://docs.python.org/3/reference/compound_stmts.html#the-for-statement) (yet again), because I can never remember *when* the `else` clause is executed. Always? On "failure" of the loop, as the name suggests? On regular termination? Even if the loop is exited with `return`? I can never be entirely sure without looking it up.

I blame my persisting uncertainty on the choice of keyword: I find `else` incredibly unmnemonic for this semantics. My question is not "why is this keyword used for this purpose" (which I would probably vote to close, though only after reading the answers and comments), but **how can I think about the `else` keyword so that its semantics make sense, and I can therefore remember it?**

I'm sure there was a fair amount of discussion about this, and I can imagine that the choice was made for consistency with the `try` statement's `else:` clause (which I also have to look up), and with the goal of not adding to the list of Python's reserved words. Perhaps the reasons for choosing `else` will clarify its function and make it more memorable, but I'm after connecting name to function, not after historical explanation per se.

The answers to [this question](http://stackoverflow.com/q/9979970/699305), which my question was briefly closed as a duplicate of, contain a lot of interesting back story. My question has a different focus (how to connect the specific semantics of `else` with the keyword choice), but I feel there should be a link to this question somewhere.
This is what it essentially means:

```
for/while ...:
    if ...:
        break

if there was a break:
    pass
else:
    ...
```

It's a nicer way of writing this common pattern:

```
found = False
for/while ...:
    if ...:
        found = True
        break

if not found:
    ...
```

The `else` clause will not be executed if there is a `return`, because `return` leaves the function, as it is meant to. The only exception to that which you may be thinking of is `finally`, whose purpose is to be sure that it is always executed.

`continue` has nothing special to do with this matter. It causes the current iteration of the loop to end, which may happen to end the entire loop, and clearly in that case the loop wasn't ended by a `break`.

`try/else` is similar:

```
try:
    ...
except:
    ...

if there was an exception:
    pass
else:
    ...
```
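The "found" pattern above can be written as a runnable sketch (hypothetical helper name) using the real `for`/`else`:

```python
def find_index(items, target):
    """Return the index of target in items, or -1 if it is absent."""
    for i, x in enumerate(items):
        if x == target:
            break          # skips the else below
    else:
        return -1          # reached only when the loop exhausts items
    return i
```

Compare this with carrying an explicit `found` flag: the `else` clause replaces the flag entirely.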
How can I make sense of the `else` statement in Python loops?
37,642,573
173
2016-06-05T13:41:04Z
37,643,296
19
2016-06-05T14:57:15Z
[ "python", "loops", "for-loop", "while-loop" ]
Many Python programmers are probably unaware that the syntax of `while` loops and `for` loops includes an optional `else:` clause:

```
for val in iterable:
    do_something(val)
else:
    clean_up()
```

The body of the `else` clause is a good place for certain kinds of clean-up actions, and is executed on normal termination of the loop: i.e., exiting the loop with `return` or `break` skips the `else` clause; exiting after a `continue` executes it. I know this only because I just [looked it up](https://docs.python.org/3/reference/compound_stmts.html#the-for-statement) (yet again), because I can never remember *when* the `else` clause is executed. Always? On "failure" of the loop, as the name suggests? On regular termination? Even if the loop is exited with `return`? I can never be entirely sure without looking it up.

I blame my persisting uncertainty on the choice of keyword: I find `else` incredibly unmnemonic for this semantics. My question is not "why is this keyword used for this purpose" (which I would probably vote to close, though only after reading the answers and comments), but **how can I think about the `else` keyword so that its semantics make sense, and I can therefore remember it?**

I'm sure there was a fair amount of discussion about this, and I can imagine that the choice was made for consistency with the `try` statement's `else:` clause (which I also have to look up), and with the goal of not adding to the list of Python's reserved words. Perhaps the reasons for choosing `else` will clarify its function and make it more memorable, but I'm after connecting name to function, not after historical explanation per se.

The answers to [this question](http://stackoverflow.com/q/9979970/699305), which my question was briefly closed as a duplicate of, contain a lot of interesting back story. My question has a different focus (how to connect the specific semantics of `else` with the keyword choice), but I feel there should be a link to this question somewhere.
If you think of your loops as a structure similar to this (somewhat pseudo-code):

```
loop:
    if condition then
        ...  // execute body
        goto loop
    else
        ...
```

it might make a little bit more sense. A loop is essentially just an `if` statement that is repeated until the condition is `false`. And this is the important point. The loop checks its condition and sees that it's `false`, thus executes the `else` (just like a normal `if/else`), and then the loop is done.

So notice that the `else` **only gets executed when the condition is checked**. That means that if you exit the body of the loop in the middle of execution, for example with a `return` or a `break`, then since the condition is not checked again, the `else` case won't be executed.

A `continue` on the other hand stops the current execution and then jumps back to check the condition of the loop again, which is why the `else` can be reached in this scenario.
How can I make sense of the `else` statement in Python loops?
37,642,573
173
2016-06-05T13:41:04Z
37,643,358
31
2016-06-05T15:02:49Z
[ "python", "loops", "for-loop", "while-loop" ]
Many Python programmers are probably unaware that the syntax of `while` loops and `for` loops includes an optional `else:` clause:

```
for val in iterable:
    do_something(val)
else:
    clean_up()
```

The body of the `else` clause is a good place for certain kinds of clean-up actions, and is executed on normal termination of the loop: i.e., exiting the loop with `return` or `break` skips the `else` clause; exiting after a `continue` executes it. I know this only because I just [looked it up](https://docs.python.org/3/reference/compound_stmts.html#the-for-statement) (yet again), because I can never remember *when* the `else` clause is executed. Always? On "failure" of the loop, as the name suggests? On regular termination? Even if the loop is exited with `return`? I can never be entirely sure without looking it up.

I blame my persisting uncertainty on the choice of keyword: I find `else` incredibly unmnemonic for this semantics. My question is not "why is this keyword used for this purpose" (which I would probably vote to close, though only after reading the answers and comments), but **how can I think about the `else` keyword so that its semantics make sense, and I can therefore remember it?**

I'm sure there was a fair amount of discussion about this, and I can imagine that the choice was made for consistency with the `try` statement's `else:` clause (which I also have to look up), and with the goal of not adding to the list of Python's reserved words. Perhaps the reasons for choosing `else` will clarify its function and make it more memorable, but I'm after connecting name to function, not after historical explanation per se.

The answers to [this question](http://stackoverflow.com/q/9979970/699305), which my question was briefly closed as a duplicate of, contain a lot of interesting back story. My question has a different focus (how to connect the specific semantics of `else` with the keyword choice), but I feel there should be a link to this question somewhere.
Better to think of it this way: the `else` block will **always** be executed if everything goes *right* in the preceding `for` block such that it reaches exhaustion.

*Right* in this context means: no `exception`, no `break`, no `return`. Any statement that hijacks control from the `for` will cause the `else` block to be bypassed.

---

A common use case is found when searching for an item in an iterable, where the search is either called off when the item is found, or a `"not found"` flag is raised/printed via the following `else` block:

```
for item in basket:
    if isinstance(item, Egg):
        break
else:
    print("No eggs in basket")
```

A `continue` does not hijack control from the `for`, so control will proceed to the `else` after the `for` is exhausted.
How can I make sense of the `else` statement in Python loops?
37,642,573
173
2016-06-05T13:41:04Z
37,644,460
192
2016-06-05T16:54:53Z
[ "python", "loops", "for-loop", "while-loop" ]
Many Python programmers are probably unaware that the syntax of `while` loops and `for` loops includes an optional `else:` clause: ``` for val in iterable: do_something(val) else: clean_up() ``` The body of the `else` clause is a good place for certain kinds of clean-up actions, and is executed on normal termination of the loop: I.e., exiting the loop with `return` or `break` skips the `else` clause; exiting after a `continue` executes it. I know this only because I just [looked it up](https://docs.python.org/3/reference/compound_stmts.html#the-for-statement) (yet again), because I can never remember *when* the `else` clause is executed. Always? On "failure" of the loop, as the name suggests? On regular termination? Even if the loop is exited with `return`? I can never be entirely sure without looking it up. I blame my persisting uncertainty on the choice of keyword: I find `else` incredibly unmnemonic for this semantics. My question is not "why is this keyword used for this purpose" (which I would probably vote to close, though only after reading the answers and comments), but **how can I think about the `else` keyword so that its semantics make sense, and I can therefore remember it?** I'm sure there was a fair amount of discussion about this, and I can imagine that the choice was made for consistency with the `try` statement's `else:` clause (which I also have to look up), and with the goal of not adding to the list of Python's reserved words. Perhaps the reasons for choosing `else` will clarify its function and make it more memorable, but I'm after connecting name to function, not after historical explanation per se. The answers to [this question](http://stackoverflow.com/q/9979970/699305), which my question was briefly closed as a duplicate of, contain a lot of interesting back story. My question has a different focus (how to connect the specific semantics of `else` with the keyword choice), but I feel there should be a link to this question somewhere.
(This is inspired by @Mark Tolonen's answer.) An `if` statement runs its `else` clause if its condition evaluates to false. Identically, a `while` loop runs the else clause if its condition evaluates to false. This rule matches the behavior you described: * In normal execution, the while loop repeatedly runs until the condition evaluates to false, and therefore naturally exiting the loop runs the else clause. * When you execute a `break` statement, you exit out of the loop without evaluating the condition, so the condition cannot evaluate to false and you never run the else clause. * When you execute a `continue` statement, you evaluate the condition again, and do exactly what you normally would at the beginning of a loop iteration. So, if the condition is true, you keep looping, but if it is false you run the else clause. * Other methods of exiting the loop, such as `return`, do not evaluate the condition and therefore do not run the else clause. `for` loops behave the same way. Just consider the condition as true if the iterator has more elements, or false otherwise.
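To make the condition rule concrete, here is a small runnable sketch (the `countdown` helper is made up for illustration): the `else` fires exactly when the `while` condition itself evaluates to false.

```python
def countdown(n, bail_at=None):
    """Count down from n; optionally break out early at `bail_at`."""
    seen = []
    while n > 0:
        if n == bail_at:
            break                # exits without testing the condition: no else
        seen.append(n)
        n -= 1
    else:
        seen.append("done")      # the condition evaluated to false
    return seen

assert countdown(3) == [3, 2, 1, "done"]   # normal termination runs the else
assert countdown(3, bail_at=2) == [3]      # break skips the else
```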
How can I make sense of the `else` statement in Python loops?
37,642,573
173
2016-06-05T13:41:04Z
37,658,079
13
2016-06-06T12:54:57Z
[ "python", "loops", "for-loop", "while-loop" ]
Many Python programmers are probably unaware that the syntax of `while` loops and `for` loops includes an optional `else:` clause: ``` for val in iterable: do_something(val) else: clean_up() ``` The body of the `else` clause is a good place for certain kinds of clean-up actions, and is executed on normal termination of the loop: I.e., exiting the loop with `return` or `break` skips the `else` clause; exiting after a `continue` executes it. I know this only because I just [looked it up](https://docs.python.org/3/reference/compound_stmts.html#the-for-statement) (yet again), because I can never remember *when* the `else` clause is executed. Always? On "failure" of the loop, as the name suggests? On regular termination? Even if the loop is exited with `return`? I can never be entirely sure without looking it up. I blame my persisting uncertainty on the choice of keyword: I find `else` incredibly unmnemonic for this semantics. My question is not "why is this keyword used for this purpose" (which I would probably vote to close, though only after reading the answers and comments), but **how can I think about the `else` keyword so that its semantics make sense, and I can therefore remember it?** I'm sure there was a fair amount of discussion about this, and I can imagine that the choice was made for consistency with the `try` statement's `else:` clause (which I also have to look up), and with the goal of not adding to the list of Python's reserved words. Perhaps the reasons for choosing `else` will clarify its function and make it more memorable, but I'm after connecting name to function, not after historical explanation per se. The answers to [this question](http://stackoverflow.com/q/9979970/699305), which my question was briefly closed as a duplicate of, contain a lot of interesting back story. My question has a different focus (how to connect the specific semantics of `else` with the keyword choice), but I feel there should be a link to this question somewhere.
My gotcha moment with the loop's `else` clause was when I was watching a talk by [Raymond Hettinger](http://stackoverflow.com/users/1001643/raymond-hettinger), who told a story about how he thought it should have been called `nobreak`. Take a look at the following code, what do you think it would do? ``` for i in range(10): if test(i): break # ... work with i nobreak: print('Loop completed') ``` What would you guess it does? Well, the part that says `nobreak` would only be executed if a `break` statement wasn't hit in the loop.
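In valid Python, that hypothetical `nobreak` keyword is spelled `else`; the same search pattern, runnable (the `first_match` helper is made up for illustration):

```python
def first_match(values, test):
    for i in values:
        if test(i):
            break                     # a match: the "nobreak" block is skipped
    else:
        print('Loop completed')       # reached only when no break was hit
        return None
    return i

assert first_match(range(10), lambda i: i == 4) == 4
assert first_match(range(10), lambda i: i > 99) is None   # prints "Loop completed"
```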
common lisp equivalent of a python idiom
37,645,391
3
2016-06-05T18:32:40Z
37,645,615
7
2016-06-05T18:56:36Z
[ "python", "lisp", "common-lisp" ]
How do I run an equivalent of this Python command in Lisp ``` from lib import func ``` For example, I want to use the `split-sequence` package, and in particular I only want the `split-sequence` method from that package. Currently, I have to use it as `(split-sequence:split-sequence #\Space "this is a string")`. But what I want to do is `(split-sequence #\Space "this is a string")`. How do I get access to the function directly without qualifying it with the package name?
What you want to do is simply: ``` (import 'split-sequence:split-sequence) ``` This works fine in a REPL, but if you want to organize your symbols, you'd better rely on packages. ``` (defpackage #:my-package (:use #:cl) (:import-from #:split-sequence #:split-sequence)) ``` The first `split-sequence` is the package, followed by all the symbols that should be imported. In [`DEFPACKAGE`](http://clhs.lisp.se/Body/m_defpkg.htm) forms, people generally use either keywords or uninterned symbols like above in order to avoid interning symbols in the current package. Alternatively, you could use strings, because only the names of symbols are important: ``` (defpackage "MY-PACKAGE" (:use "CL") (:import-from "SPLIT-SEQUENCE" "SPLIT-SEQUENCE")) ```
How to classify blurry numbers with openCV
37,645,576
19
2016-06-05T18:52:26Z
37,645,912
18
2016-06-05T19:28:26Z
[ "python", "opencv", "edge-detection", "number-recognition" ]
I would like to capture the number from this kind of picture. [![enter image description here](http://i.stack.imgur.com/S8esF.png)](http://i.stack.imgur.com/S8esF.png) I tried multi-scale matching from the following link. <http://www.pyimagesearch.com/2015/01/26/multi-scale-template-matching-using-python-opencv/> All I want to know is the red number. But the problem is, the red number is too blurry for openCV to recognize/match the template. Is there another possible way to detect this red number on the black background?
**Classifying Digits** You clarified in comments that you've already isolated the number part of the image pre-detection, so I'll start under that assumption. Perhaps you can approximate the perspective effects and "blurriness" of the number by treating it as a hand-written number. In this case, there is a famous data-set of handwritten numerals for classification training called mnist. Yann LeCun has enumerated the state of the art on this dataset here [mnist hand-written dataset](http://yann.lecun.com/exdb/mnist/). At the far end of the spectrum, convolutional neural networks yield [outrageously low error rates](http://arxiv.org/abs/1202.2745) (fractions of 1% error). For a simpler solution, k-nearest neighbours using deskewing, noise removal, blurring, and 2 pixel shift, yielded about 1% error, and is significantly faster to implement. [Python opencv has an implementation](http://docs.opencv.org/2.4/modules/ml/doc/k_nearest_neighbors.html). Neural networks and support vector machines with deskewing also have some pretty impressive performance rates. Note that convolutional networks don't have you pick your own features, so the important color-differential information here might just be used for narrowing the region-of-interest. Other approaches, where you define your feature space, might incorporate the known color difference more precisely. Python supports a lot of machine learning techniques in the terrific package sklearn - [here are examples of sklearn applied to mnist](http://scikit-learn.org/stable/auto_examples/classification/plot_digits_classification.html). *If you're looking for a tutorialized explanation of machine learning in python, [sklearn's own tutorial is very verbose](http://scikit-learn.org/stable/supervised_learning.html)* From the sklearn link: [![Classifying mnist](http://i.stack.imgur.com/PHDCw.png)](http://i.stack.imgur.com/PHDCw.png) Those are the kinds of items you're trying to classify if you learn using this approach.
To emphasize how easy it is to start training some of these machine learning-based classifiers, here is an abridged section from the example code in the linked sklearn package: ``` from sklearn import datasets, svm digits = datasets.load_digits() # built-in to sklearn! n_samples = len(digits.images) data = digits.images.reshape((n_samples, -1)) # Create a classifier: a support vector classifier classifier = svm.SVC(gamma=0.001) # We learn the digits on the first half of the digits classifier.fit(data[:n_samples // 2], digits.target[:n_samples // 2]) ``` If you're wedded to openCv (possibly because you want to port to a real-time system in the future), opencv3/python [has a tutorial on this exact topic too](http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_ml/py_knn/py_knn_opencv/py_knn_opencv.html#knn-opencv)! Their demo uses k-nearest-neighbor (listed in the LeCun page), but they also [have svms](http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_ml/py_svm/py_svm_opencv/py_svm_opencv.html#svm-opencv) and many of the other tools in sklearn. Their ocr page using SVMs uses deskewing, which might be useful with the perspective effect in your problem: [![Deskewed digit](http://i.stack.imgur.com/NT190.jpg)](http://i.stack.imgur.com/NT190.jpg) --- **UPDATE:** I used the out-of-the-box sklearn approach outlined above on your image, heavily cropped, and it **correctly classified it**. A **lot** more testing would be required to see if this is robust in practice. [![enter image description here](http://i.stack.imgur.com/Gpgsk.png)](http://i.stack.imgur.com/Gpgsk.png) ^^ That tiny image is the 8x8 crop of the image you embedded in your question. The sklearn digits dataset uses 8x8 images. That's why it trains in less than a second with default arguments in sklearn.
I converted it to the correct format by scaling it up to the training data's range using ``` import scipy.misc number = scipy.misc.imread("cropped_image.png") datum = (number[:,:,0]*15).astype(int).reshape((64,)) classifier.predict(datum) # returns 8 ``` I didn't change anything else from the example; here, I'm only using the first channel for classification, and no smart feature computation. 15 looked about right to me; you'll need to tune it to get within the target range or (ideally) provide your own training and testing set. --- **Object Detection** If you haven't isolated the number in the image you'll need an object detector. The literature space on this problem is gigantic and I won't start down that rabbit hole (google Viola and Jones, maybe?) [This blog](http://www.pyimagesearch.com/2015/03/23/sliding-windows-for-object-detection-with-python-and-opencv/) covers the fundamentals of a "sliding window" detector in python. Adrian Rosebrock looks like he's even a contributor on SO, and that page has some good examples of opencv and python-based object detectors fairly tutorialized (you actually linked to that blog in your question, I didn't realize). In short, classify windows across the image and pick the window of highest confidence. Narrowing down the search space with a region of interest will of course yield huge improvements in all areas of performance.
Python-style variables in C++
37,651,358
3
2016-06-06T07:04:38Z
37,651,398
7
2016-06-06T07:06:53Z
[ "python", "c++", "types", "type-safety", "strong-typing" ]
I'm working on a C++ project and am trying to figure out how to make a "dynamic" variable. In Python, variables can have multiple types: ``` variable = 0 variable = "Hello" ``` In Java, this is also (somewhat) achievable: ``` Object o = 0; o = "Hello"; ``` From what I can find related to C++, there is no `object` type or "dynamic" object at that. The reason I need this is that I'm trying to create an object which takes in any of the following types: `int`, `float`, `char`, `string`, `bool`, and allows me to do operations such as: ``` object o = 0; // currently an int o -= 2.5; // now a float o += "Test"; // now a string ``` Is there any default functionality for this kind of variable? If not, can it be done with macros, `struct`'s, etc.? I've found things like this: ``` template <typename name> ``` But I have no idea how to use it.
You can use the [boost.variant](http://www.boost.org/doc/libs/1_61_0/doc/html/variant.html) library. Basic usage instructions are [here](http://www.boost.org/doc/libs/1_61_0/doc/html/variant/tutorial.html#variant.tutorial.basic). In short, it would be something like ``` using var_t = boost::variant<bool, int, double, std::string, boost::blank>; var_t var = std::string("hello"); // note: a bare "hello" would convert to bool, not std::string std::cout << boost::get<std::string>(var) << '\n'; std::cout << var << '\n'; // if all possible stored types are streamable ``` The somewhat non-straightforward part is accessing the value without knowing the exact type. This requires a [static visitor](http://www.boost.org/doc/libs/1_61_0/doc/html/boost/static_visitor.html). In case you wonder what the difference is between `any` and `variant` - you are not alone, and [here is the comparison chart](http://www.boost.org/doc/libs/1_61_0/doc/html/variant/misc.html#variant.versus-any).
Modifying a dict during iteration
37,667,277
8
2016-06-06T21:38:10Z
37,667,331
7
2016-06-06T21:43:01Z
[ "python", "dictionary" ]
What's going on below ``` >>> d = {0:0} >>> for i in d: ... del d[i] ... d[i+1] = 0 ... print(i) ... 0 1 2 3 4 5 6 7 >>> ``` Why does the iteration stop at 8 without any error? Reproducible on both python2.7 and python 3.5.
The initial size of the key table in a dict is 8 elements. So `0`...`7` sets the 1st to 8th element and 8 sets the 1st element again, ending the loop. ### [Source: Objects/dictobject.c](https://github.com/python/cpython/blob/master/Objects/dictobject.c#L60) > /\* PyDict\_MINSIZE\_COMBINED is the starting size for any new, non-split > dict. 8 allows dicts with no more than 5 active entries; > experiments suggested this suffices for the majority of dicts > (consisting mostly of usually-small dicts created to pass keyword > arguments). Making this 8, rather than 4 reduces the number of > resizes for most dictionaries, without any significant extra memory > use. \*/ > > #define PyDict\_MINSIZE\_COMBINED 8
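Since the number of iterations you observe depends on this CPython implementation detail, code that needs to mutate a dict while looping should iterate over a snapshot of the keys instead; a portable sketch:

```python
d = {0: 0}
steps = 0
for i in list(d):        # list(d) snapshots the current keys
    del d[i]
    d[i + 1] = 0
    steps += 1

# The snapshot held exactly one key, so the body ran exactly once,
# regardless of the interpreter's internal table size.
assert steps == 1
assert d == {1: 0}
```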
Recursively calling a Merge Sort algorithm
37,673,137
4
2016-06-07T07:31:54Z
37,673,197
7
2016-06-07T07:35:09Z
[ "python", "algorithm", "recursion" ]
I'm new to recursion, and currently writing a Merge Sort algorithm that compares the first elements of two lists and determines which one is smaller, then adds the smaller one to a new list. I'm trying to update the three lists after each comparison and have the function call itself again with the updated lists, but I'm getting unresolved reference issues in Pycharm and not sure what I'm doing wrong. Here is my code so far, my desired output is: > new\_list = [4, 8, 15, 16, 23, 42, 50, 75, 108] ``` def merge_Sort(list1, list2, new_list): list1 = [8, 15, 16, 50, 75] list2 = [4, 23, 42, 108] new_list = [] for i in range(len(list1)): if list1[0] < list2[0]: new_list = new_list.append(list1[0]) list1 = list1.remove(list1[0]) break elif list1[0] > list2[0]: new_list = new_list.append(list2[0]) list2 = list2.remove(list2[0]) break merge_Sort(list1, list2, new_list) merge_Sort(list1, list2, new_list) ```
Your code leads to infinite recursion: each call re-creates `list1`, `list2` and `new_list`, so no progress is ever made. You should move the `list1`, `list2` and `new_list` initialization outside of the `merge_Sort` function, and give the recursion a base case so it stops once either list is exhausted (otherwise it eventually raises an `IndexError`): ``` def merge_Sort(list1, list2, new_list): if not list1 or not list2: # base case: one list is exhausted new_list.extend(list1 or list2) # append whatever remains return if list1[0] < list2[0]: new_list.append(list1.pop(0)) else: # also handles equal elements new_list.append(list2.pop(0)) merge_Sort(list1, list2, new_list) list1 = [8, 15, 16, 50, 75] list2 = [4, 23, 42, 108] new_list = [] merge_Sort(list1, list2, new_list) ```
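As an aside: if the goal is the merge step itself rather than practising recursion, the standard library already provides it via `heapq.merge`, which lazily merges pre-sorted inputs:

```python
import heapq

list1 = [8, 15, 16, 50, 75]
list2 = [4, 23, 42, 108]

merged = list(heapq.merge(list1, list2))
assert merged == [4, 8, 15, 16, 23, 42, 50, 75, 108]
```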
How to make Python generators as fast as possible?
37,675,026
4
2016-06-07T09:07:34Z
37,675,802
10
2016-06-07T09:43:13Z
[ "python", "python-3.x", "generator" ]
For writing an event-driven simulator, I'm relying on [simpy](https://simpy.readthedocs.io/en/latest/), which heavily uses Python generators. I'm trying to understand how to make generators as fast as possible, i.e., minimize the state save/restore overhead. I tried three alternatives: 1. All state stored in a class instance 2. All state stored globally 3. All state stored locally and got the following results with Python 3.4.3: ``` class_generator 20.851247710175812 global_generator 12.802394330501556 local_generator 9.067587919533253 ``` The code can be found [here](https://gist.github.com/cristiklein/0111ed20265076aeb02f1a612e496a2c). This feels counter-intuitive to me: storing all state in the class instance means that only `self` needs to be saved/restored, whereas storing all state globally should ensure zero save/restore overhead. Does anybody know why class generators and global generators are slower than local generators?
The generator actually retains the *call frame* when *yield* happens. It doesn't really affect performance whether you have 1 or 100 local variables. The performance difference really comes from how Python (here I am using CPython, a.k.a. the one that you'd download from <http://www.python.org/>, or have on your operating system as `/usr/bin/python`, but most implementations would have similar performance characteristics due to mostly the same reasons) behaves on different kinds of variable lookups: * Local variables are not actually *named* in Python; instead they're referred to by a *number*, and accessed by the `LOAD_FAST` opcode. * Global variables are accessed using the `LOAD_GLOBAL` opcode. They're always referred to by name, and thus every access needs an actual dictionary lookup. * Instance attribute accesses are the slowest, because `self.foobar` first needs to use `LOAD_FAST` to load a reference to `self`, then `LOAD_ATTR` is used to find `foobar` on the referred-to object, and this is a dictionary lookup. Also, if the attribute is on the **instance** itself this is going to be OK, but if it is set on the **class**, the attribute lookup is going to be slower. Since you're also *setting* values on the instance, it is going to be slower still, because now it needs to do `STORE_ATTR` on the loaded instance. Further complicating matters is the fact that the *class* of the instance needs to be consulted as well - if the *class* happens to have a **property descriptor** by the same name, then it can alter the behaviour of reading and setting the attribute. Thus the fastest generator is the one that only refers to local variables. It is a common idiom in Python code to store the value of global read-only variables into local variables to speed things up.
To demonstrate the differences, consider the code generated for these 3 variable accesses `a`, `b` and `self.c`: ``` a = 42 class Foo(object): def __init__(self): self.c = 42 def foo(self): b = 42 yield a yield b yield self.c print(list(Foo().foo())) # prints [42, 42, 42] ``` The relevant part of the disassembly for the `foo` method is: ``` 8 6 LOAD_GLOBAL 0 (a) 9 YIELD_VALUE 10 POP_TOP 9 11 LOAD_FAST 1 (b) 14 YIELD_VALUE 15 POP_TOP 10 16 LOAD_FAST 0 (self) 19 LOAD_ATTR 1 (c) 22 YIELD_VALUE 23 POP_TOP ``` The operands to `LOAD_GLOBAL` and `LOAD_ATTR` are references to the names `a` and `c` respectively; the numbers are indices into a table. The operand of `LOAD_FAST` is the *number of the local variable in the table of local variables*.
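The opcode claims above can also be checked programmatically with the `dis` module; the `probe` function below is a made-up example, not from the question:

```python
import dis

a = 42  # module-level: reads inside a function compile to LOAD_GLOBAL

class Holder:
    def __init__(self):
        self.c = 42

def probe(obj):
    b = 7                    # local: STORE_FAST / LOAD_FAST
    return a + b + obj.c     # global, local, then attribute lookup

opnames = {ins.opname for ins in dis.Bytecode(probe)}
assert "LOAD_GLOBAL" in opnames   # access to `a`
assert "LOAD_FAST" in opnames     # access to `b` and `obj`
assert "LOAD_ATTR" in opnames     # access to `obj.c`
```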
List of tuple to string in Python
37,678,612
3
2016-06-07T11:50:45Z
37,678,648
11
2016-06-07T11:52:51Z
[ "python" ]
I have a list like this: ``` l = ['b', '7', 'a', 'e', 'a', '6', 'a', '7', '9', 'c', '7', 'b', '6', '9', '9', 'd', '7', '5', '2', '4', 'c', '7', '8', 'b', '3', 'f', 'f', '7', 'b', '9', '4', '4'] ``` and I want to make a string from it like this: ``` 7bea6a7ac9b796d957427cb8f37f9b44 ``` I did: ``` l = (zip(l[1:], l)[::2]) s = [] for ll in l: s += ll print ''.join(s) ``` But is there any simpler way? Maybe even in one line?
You can concatenate each pair of letters, then `join` the whole result in a generator expression ``` >>> ''.join(i+j for i,j in zip(l[1::2], l[::2])) '7bea6a7ac9b796d957427cb8f37f9b44' ```
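A quick runnable check with the list from the question - `zip(l[1::2], l[::2])` pairs each odd-indexed element with the even-indexed element before it, which is exactly the adjacent swap the loop in the question performed:

```python
l = ['b', '7', 'a', 'e', 'a', '6', 'a', '7', '9', 'c', '7', 'b',
     '6', '9', '9', 'd', '7', '5', '2', '4', 'c', '7', '8', 'b',
     '3', 'f', 'f', '7', 'b', '9', '4', '4']

s = ''.join(i + j for i, j in zip(l[1::2], l[::2]))
assert s == '7bea6a7ac9b796d957427cb8f37f9b44'
```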
Timing Modular Exponentiation in Python: syntax vs function
37,686,000
4
2016-06-07T17:47:00Z
37,686,188
7
2016-06-07T17:57:11Z
[ "python", "python-2.7", "python-3.x" ]
In Python, if the builtin `pow()` function is used with 3 arguments, the last one is used as the modulus of the exponentiation, resulting in a [Modular exponentiation](https://en.wikipedia.org/wiki/Modular_exponentiation) operation. In other words, `pow(x, y, z)` is equivalent to `(x ** y) % z`, but according to the Python help, `pow()` may be more efficient. When I timed the two versions, I got the opposite result: the `pow()` version seemed slower than the equivalent syntax: Python 2.7: ``` >>> import sys >>> print sys.version 2.7.11 (default, May 2 2016, 12:45:05) [GCC 4.9.3] >>> >>> help(pow) Help on built-in function pow in module __builtin__: pow(...) pow(x, y[, z]) -> number With two arguments, equivalent to x**y. With three arguments, equivalent to (x**y) % z, but may be more efficient (e.g. for longs). >>> >>> import timeit >>> st_expmod = '( 65537 ** 767587 ) % 14971787' >>> st_pow = 'pow(65537, 767587, 14971787)' >>> >>> timeit.timeit(st_expmod) 0.016651153564453125 >>> timeit.timeit(st_expmod) 0.016621112823486328 >>> timeit.timeit(st_expmod) 0.016611099243164062 >>> >>> timeit.timeit(st_pow) 0.8393168449401855 >>> timeit.timeit(st_pow) 0.8449611663818359 >>> timeit.timeit(st_pow) 0.8767969608306885 >>> ``` Python 3.4: ``` >>> import sys >>> print(sys.version) 3.4.3 (default, May 2 2016, 12:47:35) [GCC 4.9.3] >>> >>> help(pow) Help on built-in function pow in module builtins: pow(...) pow(x, y[, z]) -> number With two arguments, equivalent to x**y. With three arguments, equivalent to (x**y) % z, but may be more efficient (e.g. for ints).
>>> >>> import timeit >>> st_expmod = '( 65537 ** 767587 ) % 14971787' >>> st_pow = 'pow(65537, 767587, 14971787)' >>> >>> timeit.timeit(st_expmod) 0.014722830994287506 >>> timeit.timeit(st_expmod) 0.01443593599833548 >>> timeit.timeit(st_expmod) 0.01485627400688827 >>> >>> timeit.timeit(st_pow) 3.3412855619972106 >>> timeit.timeit(st_pow) 3.2800855879904702 >>> timeit.timeit(st_pow) 3.323372773011215 >>> ``` Python 3.5: ``` >>> import sys >>> print(sys.version) 3.5.1 (default, May 2 2016, 14:34:13) [GCC 4.9.3] >>> >>> help(pow) Help on built-in function pow in module builtins: pow(x, y, z=None, /) Equivalent to x**y (with two arguments) or x**y % z (with three arguments) Some types, such as ints, are able to use a more efficient algorithm when invoked using the three argument form. >>> >>> import timeit >>> st_expmod = '( 65537 ** 767587 ) % 14971787' >>> st_pow = 'pow(65537, 767587, 14971787)' >>> >>> timeit.timeit(st_expmod) 0.014827249979134649 >>> timeit.timeit(st_expmod) 0.014763347018742934 >>> timeit.timeit(st_expmod) 0.014756042015505955 >>> >>> timeit.timeit(st_pow) 3.6817933860002086 >>> timeit.timeit(st_pow) 3.6238356370013207 >>> timeit.timeit(st_pow) 3.7061628740048036 >>> ``` What is the explanation for the above numbers? --- **Edit**: After the answers, I see that in the `st_expmod` version the computation was not being executed at runtime: it was done by the compiler, and the expression became a constant. Using the fix suggested by @user2357112 in Python 2: ``` >>> timeit.timeit('(a**b) % c', setup='a=65537; b=767587; c=14971787', number=150) 370.9698350429535 >>> timeit.timeit('pow(a, b, c)', setup='a=65537; b=767587; c=14971787', number=150) 0.00013303756713867188 ```
You're not actually timing the computation with `**` and `%`, because the result gets constant-folded by the bytecode compiler. Avoid that: ``` timeit.timeit('(a**b) % c', setup='a=65537; b=767587; c=14971787') ``` and the `pow` version will win hands down.
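With the constants hoisted into variables so the compiler cannot pre-compute anything, both correctness and the speed difference are easy to check; the exponent here is smaller than the question's so that the slow form finishes quickly:

```python
import timeit

a, b, c = 65537, 50021, 14971787   # smaller exponent than in the question

# Both forms agree on the result...
assert pow(a, b, c) == (a ** b) % c

# ...but three-argument pow() never materialises the huge intermediate a**b.
ns = {'a': a, 'b': b, 'c': c}
t_expmod = timeit.timeit('(a ** b) % c', globals=ns, number=1)
t_pow = timeit.timeit('pow(a, b, c)', globals=ns, number=1)
print(t_pow, '<', t_expmod)
```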
Python and OpenSSL version reference issue on OS X
37,690,054
8
2016-06-07T21:54:04Z
37,712,029
14
2016-06-08T20:11:08Z
[ "python", "osx", "ssl", "openssl", "version" ]
Trying to resolve an OpenSSL version issue I'm having. It seems that I have three different versions of OpenSSL on my Mac. 1. Python 2.7.11 has version 0.9.7m: ``` python -c "import ssl; print ssl.OPENSSL_VERSION" OpenSSL 0.9.7m 23 Feb 2007 ``` 2. At the Terminal: ``` openssl version OpenSSL 1.0.1h 5 Jun 2014 ``` 3. Recently Compiled / Installed: ``` /usr/local/ssl/bin/openssl OpenSSL> version OpenSSL 1.0.2h 3 May 2016 OpenSSL> ``` I recently upgraded my OS X to 10.11.5. In the process, this caused an issue for previously working python scripts. Below is the error message snippet: Python Error Message: ``` You are linking against OpenSSL 0.9.8, which is no longer * RuntimeError: You are linking against OpenSSL 0.9.8, which is no longer support by the OpenSSL project. You need to upgrade to a newer version of OpenSSL. ``` (\* - yes, this is how the error message looks. It's trimmed in the middle of the sentence.) Any recommendations on resolving this issue would be greatly appreciated. What I'd like is to have Python reference the OpenSSL version 1.0.2h vs the outdated version 0.9.7m. I've tried installing Python and OpenSSL many times using various posts / blogs for guidance, without any luck.
Use this as a workaround: ``` export CRYPTOGRAPHY_ALLOW_OPENSSL_098=1 ``` This appears to be a recent check of the hazmat cryptography library. You can see the source code at: <https://github.com/pyca/cryptography/blob/master/src/cryptography/hazmat/bindings/openssl/binding.py#L221> The `CRYPTOGRAPHY_ALLOW_OPENSSL_098` environment variable downgrades the error to a deprecation warning, if you are willing to take the risk. I also ran into this on OS X in just the past day, so something changed recently.
How to create a dictionary from a list with a list of keys only and key-value pairs (Python)?
37,705,101
3
2016-06-08T14:18:22Z
37,705,455
9
2016-06-08T14:31:52Z
[ "python", "string", "list", "dictionary" ]
This is an extension of this question: [How to split a string within a list to create key-value pairs in Python](http://stackoverflow.com/questions/12739911/how-to-separate-string-and-create-a-key-value-pairs-python) The difference from the above question is that the items in my list are not all key-value pairs; some items are bare keys that need to be assigned a default value. I have a list: ``` list = ['abc=ddd', 'ef', 'ghj', 'jkl=yui', 'rty'] ``` I would like to create a dictionary: ``` dict = { 'abc':'ddd', 'ef':1, 'ghj':1, 'jkl':'yui', 'rty':1 } ``` I was thinking something along the lines of: ``` a = {} for item in list: if '=' in item: d = item.split('=') a.append(d) # I don't think I can do this. else: a[item] = 1 # feel like I'm missing something here. ```
For each split "pair", you can append `[1]` and extract the first 2 elements. This way, 1 will be used when there isn't a value: ``` print dict((s.split('=')+[1])[:2] for s in l) ```
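Spelled out as a runnable Python 3 sketch (where `dict()` accepts any iterable of key/value pairs):

```python
items = ['abc=ddd', 'ef', 'ghj', 'jkl=yui', 'rty']

# split('=') yields ['abc', 'ddd'] or just ['ef']; appending [1] pads the
# bare-key case, and [:2] trims every entry back down to a (key, value) pair.
result = dict((s.split('=') + [1])[:2] for s in items)

assert result == {'abc': 'ddd', 'ef': 1, 'ghj': 1, 'jkl': 'yui', 'rty': 1}
```

If a value could itself contain an `=`, use `s.split('=', 1)` so only the first `=` splits the pair.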
pip error while installing Python: "Ignoring ensurepip failure: pip 8.1.1 requires SSL/TLS"
37,723,236
6
2016-06-09T10:21:07Z
37,723,517
11
2016-06-09T10:34:26Z
[ "python", "install" ]
I downloaded the Python 3.5 source code and ran the following: ``` $ tar -xf Python-3.5.2.tar.xz $ ./configure --with-ensurepip=upgrade $ make $ sudo make altinstall ``` Everything proceeded well through `make`, but when `sudo make altinstall` ran, it printed: `Ignoring ensurepip failure: pip 8.1.1 requires SSL/TLS` What went wrong?
You are most likely compiling Python without SSL/TLS support, because the SSL development libraries are not installed on your system. Install the following dependency, and then re-configure and re-compile Python 3.5. **Ubuntu** ``` apt-get install libssl-dev ``` In addition it is recommended to install the following. ``` apt-get install make build-essential libssl-dev zlib1g-dev libbz2-dev libsqlite3-dev ``` **CentOS** ``` yum install openssl-devel ``` In addition it is recommended to install the following. ``` yum install zlib-devel bzip2-devel sqlite sqlite-devel openssl-devel ```
Find 3 letter words
37,728,006
4
2016-06-09T13:55:51Z
37,728,157
8
2016-06-09T14:01:56Z
[ "python" ]
I have the following code in Python: ``` import re string = "what are you doing you i just said hello guys" regexValue = re.compile(r'(\s\w\w\w\s)') mo = regexValue.findall(string) ``` My goal is to find any 3-letter word, but for some reason I seem to be getting only the "are" and not the "you" in my list. I figured this might be because the space between the two words overlaps: since the space is already consumed by the match on "are", it cannot also be part of the match on "you". So, how should I find only three-letter words in a string like this?
It's not regex, but you could do this: ``` words = [word for word in string.split() if len(word) == 3] ```
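If a regex is still preferred, zero-width word boundaries (`\b`) sidestep the consumed-space problem entirely, since they assert a position without consuming the space between adjacent matches:

```python
import re

string = "what are you doing you i just said hello guys"
three_letter = re.findall(r'\b\w{3}\b', string)
assert three_letter == ['are', 'you', 'you']
```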
Python nosetests with coverage no longer shows missing lines
37,733,194
7
2016-06-09T18:05:02Z
37,746,141
10
2016-06-10T10:39:45Z
[ "python", "nose", "coverage.py" ]
I've been using the following command to run tests and evaluate code coverage for a Python project for over a year now. ``` nosetests -v --with-coverage --cover-package=genhub genhub/*.py ``` The coverage report used to include a column on the far right showing the lines missing coverage. ``` Name Stmts Miss Branch BrPart Cover Missing ---------------------------------------------------------------- genhub/cdhit.py 50 0 8 0 100% genhub/exons.py 85 69 8 0 17% 24-40, 48-56, 60-79, 87-107, 129-132, 138-141, 147-150 genhub/fasta.py 76 0 26 0 100% genhub/genomedb.py 205 153 48 0 21% 40-43, 53-60, 64-65, 70, 74, 82, 86, 90, 98-99, 103-104, 108-109, 113-114, 118-119, 123-124, 128-129, 143-144, 152-154, 158-160, 164-166, 175, 180, 240-280, 289, 292, 295, 308-317, 323-330, 351-377, 380-386, 396-413, 419-430, 436-443, 449-456 genhub/iloci.py 112 91 8 0 18% 30-46, 54-64, 73-90, 102-118, 127-142, 165-173, 179-183, 189-193, 199-207, 213-225 genhub/mrnas.py 121 108 24 0 9% 30-63, 79-105, 118-158, 178-197, 203-226 genhub/pdom.py 95 68 24 0 23% 31-32, 35, 39, 43, 47, 50-53, 56-59, 62-64, 67-72, 75-106, 116-119, 126-128, 134-141, 148-156 genhub/proteins.py 20 13 2 0 32% 43-53, 94-97 genhub/refseq.py 237 195 44 0 15% 30-46, 49, 53, 57, 61, 65, 69, 73, 76-86, 89-115, 118-127, 130-178, 189-211, 217-226, 232-242, 248-265, 271-288, 294-297, 303-310, 317-326, 333-374, 380-387 genhub/registry.py 126 90 32 2 24% 48-56, 59-64, 67-69, 72-77, 81-83, 92-94, 103-109, 112-113, 116-117, 142-168, 174-188, 194-201, 207-216, 40->44, 44->48 genhub/stats.py 3 0 0 0 100% genhub/tair.py 128 97 22 0 21% 32-42, 45, 49, 53, 57, 61, 65, 69, 73, 76-79, 82-104, 110-119, 122-154, 165-180, 186-189, 195-203, 210-221 ---------------------------------------------------------------- TOTAL 1258 884 246 2 27% ---------------------------------------------------------------------- Ran 46 tests in 0.033s FAILED (errors=41) ``` However, the `Missing` column no longer shows up for me (nose version 1.3.7, coverage.py 
version 4.1). I'm aware nose is no longer being supported. Is this change related to that, or something in coverage.py, or both?
In coverage.py 4.1, I fixed a problem with the coverage.py API defaulting two parameters to non-None values. One of them was `show_missing`. The best way to fix this in your project is to set `show_missing` in your .coveragerc file: ``` # .coveragerc [report] show_missing = True ```
How to use SyntaxNet output to operate an executive command, for example save a file in a folder, on a Linux system
37,753,980
2
2016-06-10T17:24:03Z
38,444,785
8
2016-07-18T19:43:09Z
[ "python", "xml", "terminal", "semantics", "syntaxnet" ]
Having downloaded and trained [SyntaxNet](https://github.com/tensorflow/models/tree/master/syntaxnet), I am trying to write a program that can open new/existing files, for example AutoCAD files, and save the files in a specific directory by analyzing the text: **open LibreOffice file X**. Consider the output of SyntaxNet as: ``` echo "save AUTOCAD file X in directory Y" | ./test.sh > output.txt Input: save AUTOCAD file X in directory Y Parse: save VB ROOT +-- X NNP dobj | +-- file NN compound | +-- AUTOCAD CD nummod +-- directory NN nmod +-- in IN case +-- Y CD nummod ``` First I thought about changing the parsed text to XML format and then parsing the XML file with semantic analysis (like `SPARQL`) to find ROOT=save, dobj=X, and nummod=Y, and writing a Python program that does the same thing that is said in the text. 1. I don’t know whether, if I change the parsed text to XML and then use semantic analysis with querying, I can match `ROOT` with its counterpart function or script that saves `dobj` in the directory mentioned in `nummod`. 2. I have some ideas about connecting Python to the terminal with the `subprocess` package, but I didn't find anything that can help me save, for example, an AUTOCAD file, or any other file, from the terminal. Or do I need to write a script, `.sh`, with the help of Python? 
I have done extensive research on syntactic and semantic analysis of texts, such as [Christian Chiarcos, 2011](http://svn.aksw.org/papers/2012/LDL/ldl2012_proceedings/public/chiarcos.pdf), [Hunter and Cohen, 2006](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1702322/) and [Verspoor et al., 2015](https://www.semanticscholar.org/paper/Interoperability-of-text-corpus-annotations-with-Verspoor-Kim/65c80f186ff7da0c7e4e847856d9acf57e479061/pdf), and have also studied [Microsoft Cortana](http://windows.microsoft.com/en-us/windows-10/getstarted-what-is-cortana), [Sirius](http://sirius.clarity-lab.org/tutorial/) and [Google Now](https://en.wikipedia.org/wiki/Google_Now), but none of them goes through the details of how they turn a **parsed text** into an **executive command**, which brought me to the conclusion that this work is considered too easy to be talked about; but as I am not a computer science major, I cannot figure out what I can do about it.
I am a beginner in the computer science world and in SyntaxNet. I wrote a simple SyntaxNet–Python algorithm which uses SyntaxNet to analyze a text command a user inserts, "open the file book which I have written with laboratory writer with LibreOffice writer", and then analyzes the SyntaxNet output with a Python algorithm in order to turn it into an executive command, in this case opening a file, with any supported format, with LibreOffice in a Linux (Ubuntu 14.04) environment. You can see [here](https://help.libreoffice.org/Common/Starting_the_Software_With_Parameters) the different command lines defined by LibreOffice for using the different applications in this package. 1. After installing and running SyntaxNet (the installation process is explained [here](https://github.com/JoshData/models/blob/b72274d38f169f77e6a15e54834f463f627dc82a/syntaxnet/build/ubuntu-14.04_x64.sh)), open the shell script [demo.sh](https://github.com/tensorflow/models/blob/master/syntaxnet/syntaxnet/demo.sh) in the `~/models/syntaxnet/syntaxnet/` directory and remove the `conll2tree` call (`lines 54 to 56`) in order to get a `tab-delimited` output from SyntaxNet instead of a tree-format output. 2. Type this command in the terminal window: echo 'open the file book which I have written with the laboratory writer with libreOffice writer' | syntaxnet/demo.sh > output.txt The `output.txt` document is saved in the directory where `demo.sh` exists, and it will look somewhat like the figure below: [![enter image description here](http://i.stack.imgur.com/KrzfB.png)](http://i.stack.imgur.com/KrzfB.png) 3. Use `output.txt` as the input file and run the Python algorithm below, which analyzes the SyntaxNet output and identifies the name of the file you want, the target application from the LibreOffice package, and the command the user wants to execute. 
`#!/bin/sh` ``` import csv import subprocess import sys import os #get SyntaxNet output as the Python algorithm input file filename='/home/username/models/syntaxnet/work/output.txt' #all possible executive commands for opening any file with any format with Libreoffice file commands={ ('open', 'libreoffice', 'writer'): ('libreoffice', '--writer'), ('open', 'libreoffice', 'calculator'): ('libreoffice' ,'--calc'), ('open', 'libreoffice', 'draw'): ('libreoffice' ,'--draw'), ('open', 'libreoffice', 'impress'): ('libreoffice' ,'--impress'), ('open', 'libreoffice', 'math'): ('libreoffice' ,'--math'), ('open', 'libreoffice', 'global'): ('libreoffice' ,'--global'), ('open', 'libreoffice', 'web'): ('libreoffice' ,'--web'), ('open', 'libreoffice', 'show'): ('libreoffice', '--show'), } #all of the possible synonyms of the application from Libreoffice comments={ 'writer': ['word','text','writer'], 'calculator': ['excel','calc','calculator'], 'draw': ['paint','draw','drawing'], 'impress': ['powerpoint','impress'], 'math': ['mathematic','calculator','math'], 'global': ['global'], 'web': ['html','web'], 'show':['presentation','show'] } root ='ROOT' #ROOT of the senctence noun='NOUN' #noun tagger verb='VERB' #verb tagger adjmod='amod' #adjective modifier dirobj='dobj' #direct objective apposmod='appos' # appositional modifier prepos_obj='pobj' # prepositional objective app='libreoffice' # name of the package preposition='prep' # preposition noun_modi='nn' # noun modifier #read from Syntaxnet output tab delimited textfile def readata(filename): file=open(filename,'r') lines=file.readlines() lines=lines[:-1] data=csv.reader(lines,delimiter='\t') lol=list(data) return lol # identifies the action, the name of the file and whether the user mentioned the name of the application implicitely def exe(root,noun,verb,adjmod,dirobj,apposmod,commands,noun_modi): interprete='null' lists=readata(filename) for sublist in lists: if sublist[7]==root and sublist[3]==verb: # when the ROOT is verb the 
dobj is probably the name of the file you want to have action=sublist[1] dep_num=sublist[0] for sublist in lists: if sublist[6]==dep_num and sublist[7]==dirobj: direct_object=sublist[1] dep_num=sublist[0] dep_num_obj=sublist[0] for sublist in lists: if direct_object=='file' and sublist[6]==dep_num_obj and sublist[7]==apposmod: direct_object=sublist[1] elif direct_object=='file' and sublist[6]==dep_num_obj and sublist[7]==adjmod: direct_object=sublist[1] for sublist in lists: if sublist[6]==dep_num_obj and sublist[7]==adjmod: for key, v in comments.iteritems(): if sublist[1] in v: interprete=key for sublist in lists: if sublist[6]==dep_num_obj and sublist[7]==noun_modi: dep_num_nn=sublist[0] for key, v in comments.iteritems(): if sublist[1] in v: interprete=key print interprete if interprete=='null': for sublist in lists: if sublist[6]==dep_num_nn and sublist[7]==noun_modi: for key, v in comments.iteritems(): if sublist[1] in v: interprete=key elif sublist[7]==root and sublist[3]==noun: # you have to find the word which is in a adjective form and depends on the root dep_num=sublist[0] dep_num_obj=sublist[0] direct_object=sublist[1] for sublist in lists: if sublist[6]==dep_num and sublist[7]==adjmod: actionis=any(t1==sublist[1] for (t1, t2, t3) in commands) if actionis==True: action=sublist[1] elif sublist[6]==dep_num and sublist[7]==noun_modi: dep_num=sublist[0] for sublist in lists: if sublist[6]==dep_num and sublist[7]==adjmod: if any(t1==sublist[1] for (t1, t2, t3) in commands): action=sublist[1] for sublist in lists: if direct_object=='file' and sublist[6]==dep_num_obj and sublist[7]==apposmod and sublist[1]!=action: direct_object=sublist[1] if direct_object=='file' and sublist[6]==dep_num_obj and sublist[7]==adjmod and sublist[1]!=action: direct_object=sublist[1] for sublist in lists: if sublist[6]==dep_num_obj and sublist[7]==noun_modi: dep_num_obj=sublist[0] for key, v in comments.iteritems(): if sublist[1] in v: interprete=key else: for sublist in lists: if 
sublist[6]==dep_num_obj and sublist[7]==noun_modi: for key, v in comments.iteritems(): if sublist[1] in v: interprete=key return action, direct_object, interprete action, direct_object, interprete = exe(root,noun,verb,adjmod,dirobj,apposmod,commands,noun_modi) # find the application (we assume we know user want to use libreoffice but we donot know what subapplication should be used) def application(app,prepos_obj,preposition,noun_modi): lists=readata(filename) subapp='not mentioned' for sublist in lists: if sublist[1]==app: dep_num=sublist[6] for sublist in lists: if sublist[0]==dep_num and sublist[7]==prepos_obj: actioni=any(t3==sublist[1] for (t1, t2, t3) in commands) if actioni==True: subapp=sublist[1] else: for sublist in lists: if sublist[6]==dep_num and sublist[7]==noun_modi: actioni=any(t3==sublist[1] for (t1, t2, t3) in commands) if actioni==True: subapp=sublist[1] elif sublist[0]==dep_num and sublist[7]==preposition: sublist[6]=dep_num for subline in lists: if subline[0]==dep_num and subline[7]==prepos_obj: if any(t3==sublist[1] for (t1, t2, t3) in commands): subapp=sublist[1] else: for subline in lists: if subline[0]==dep_num and subline[7]==noun_modi: if any(t3==sublist[1] for (t1, t2, t3) in commands): subapp=sublist[1] return subapp sub_application=application(app,prepos_obj,preposition,noun_modi) if sub_application=='not mentioned' and interprete!='null': sub_application=interprete elif sub_application=='not mentioned' and interprete=='null': sub_application=interprete # the format of file def format_function(sub_application): subapp=sub_application Dobj=exe(root,noun,verb,adjmod,dirobj,apposmod,commands,noun_modi)[1] if subapp!='null': if subapp=='writer': a='.odt' Dobj=Dobj+a elif subapp=='calculator': a='.ods' Dobj=Dobj+a elif subapp=='impress': a='.odp' Dobj=Dobj+a elif subapp=='draw': a='.odg' Dobj=Dobj+a elif subapp=='math': a='.odf' Dobj=Dobj+a elif subapp=='math': a='.odf' Dobj=Dobj+a elif subapp=='web': a='.html' Dobj=Dobj+a else: Dobj='null' 
return Dobj def get_filepaths(directory): myfile=format_function(sub_application) file_paths = [] # List which will store all of the full filepaths. # Walk the tree. for root, directories, files in os.walk(directory): for filename in files: # Join the two strings in order to form the full filepath. if filename==myfile: filepath = os.path.join(root, filename) file_paths.append(filepath) # Add it to the list. return file_paths # Self-explanatory. # Run the above function and store its results in a variable. full_file_paths = get_filepaths("/home/ubuntu/") if full_file_paths==[]: print 'No file with name %s is found' % format_function(sub_application) if full_file_paths!=[]: path=full_file_paths prompt='> ' if len(full_file_paths) >1: print full_file_paths print 'which %s do you mean?'% subapp inputname=raw_input(prompt) if inputname in full_file_paths: path=inputname #the main code structure if sub_application!='null': command= commands[action,app,sub_application] subprocess.call([command[0],command[1],path[0]]) else: print "The sub application is not mentioned clearly" ``` I again say I am a beginner and the code might not seems so tidied up or professional but I just tried to use all my knowledge about this fascinating `SyntaxNet` to a practical algorithm. **This simple algorithm can open the file:** 1. with any format which is supported by `LibreOffice` e.g. `.odt,.odf,.ods,.html,.odp`. 2. it can understand implicit reference of different application in `LibreOffice`, for example: " open the text file book with libreoffice" instead of "open the file book with libreoffice writer" 3. can overcome the problem of SyntaxNet interpreting the name of the files which are referred as an adjective.
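The core pattern in the algorithm above, scanning the tab-delimited SyntaxNet rows for the ROOT verb and its direct object, can be sketched minimally as follows. The sample rows and the exact column layout (token index in column 0, word form in column 1, head index in column 6, dependency label in column 7, matching the `sublist[6]`/`sublist[7]` indexing above) are illustrative assumptions, not verbatim SyntaxNet output:

```python
import csv
import io

# Hypothetical tab-delimited output for "open the file book" in a
# CoNLL-like layout: index, form, lemma, coarse POS, fine POS,
# features, head index, dependency label.
conll = (
    "1\topen\t_\tVERB\tVB\t_\t0\tROOT\n"
    "2\tthe\t_\tDET\tDT\t_\t3\tdet\n"
    "3\tfile\t_\tNOUN\tNN\t_\t1\tdobj\n"
    "4\tbook\t_\tNOUN\tNN\t_\t3\tappos\n"
)

rows = list(csv.reader(io.StringIO(conll), delimiter="\t"))
root = next(r for r in rows if r[7] == "ROOT")  # the main verb
# the direct object is the row whose head points at the ROOT's index
dobj = next(r for r in rows if r[6] == root[0] and r[7] == "dobj")
print(root[1], dobj[1])  # → open file
```

Everything else in the algorithm (appositional modifiers, application synonyms, file extensions) layers further lookups on this same head-index/label scan.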
What does CPython actually do when "=" is performed on primitive type variables?
37,764,401
4
2016-06-11T13:39:27Z
37,764,949
7
2016-06-11T14:38:23Z
[ "python", "cpython", "reference-counting" ]
For instance: ``` a = some_process_that_generates_integer_result() b = a ``` Someone told me that *b* and *a* will point to same chunk of integer object, thus *b* would modify the reference count of that object. The code is executed in function `PyObject* ast2obj_expr(void* _o)` in *Python-ast.c*: ``` static PyObject* ast2obj_object(void *o) { if (!o) o = Py_None; Py_INCREF((PyObject*)o); return (PyObject*)o; } ...... case Num_kind: result = PyType_GenericNew(Num_type, NULL, NULL); if (!result) goto failed; value = ast2obj_object(o->v.Num.n); if (!value) goto failed; if (PyObject_SetAttrString(result, "n", value) == -1) goto failed; Py_DECREF(value); break; ``` However, I think modifying reference count without ownership change is really futile. What I expect is that each variable holding primitive values (floats, integers, etc.) always have their own value, instead of referring to a same object. And in the execution of my simple test code, I found the break point in the above `Num_kind` branch is never reached: ``` def some_function(x, y): return (x+y)*(x-y) a = some_function(666666,66666) print a b = a print a print b b = a + 999999 print a print b b = a print a print b ``` I'm using the python2.7-dbg program provided by Debian. I'm sure the program and the source code matches, because many other break points works properly. So, what does CPython actually do on primitive type objects?
First of all, there are no “primitive objects” in Python. Everything is an object, of the same kind, and they are all handled in the same way on the language level. As such, the following assignments work the same way regardless of the values which are assigned: ``` a = some_process_that_generates_integer_result() b = a ``` In Python, assignments are *always* reference copies. So whatever the function returns, its reference is *copied* into the variable `a`. And then in the second line, the reference is again *copied* into the variable `b`. As such, both variables will refer to the exact same object. You can easily verify this by using the [`id()`](https://docs.python.org/2/library/functions.html#id) function which will tell you the identity of an object: ``` print id(a) print id(b) ``` This will print the same identifying number twice. Note though, that while doing just this, you copied the reference two more times: it’s not variables that are *passed* to functions but copies of references. This is different from other languages where you often differentiate between *“call by value”* and *“call by reference”*. The former means that you create a copy of the value and pass it to a function, which means that new memory is allocated for that value; the latter means that the actual reference is passed and changes to that reference affect the original variable as well. What Python does is often called *“call by assignment”*: every function call where you pass arguments is essentially an assignment into new variables (which are then available to the function). And an assignment copies the reference. When everything is an object, this is actually a very simple strategy. And as I said above, what happens with integers is then no different from what happens to other objects. The only “special” thing about integers is that they are *immutable*, so you cannot change their values. This means that an integer object always refers to the *exact same* value. 
This makes it easy to share the object (in memory) with multiple values. Every operation that yields a new result gives you a different object, so when you do a series of arithmetic operations, you are actually changing what object a variable is pointing to all the time. The same happens with other immutable objects too, for example strings. Every operation that yields a changed string gives you a different string object. Assignments with mutable objects however are the same too. It’s just that changing the value of those objects is possible, so they appear different. Consider this example: ``` a = [1] # creates a new list object b = a # copies the reference to that same list object c = [2] # creates a new list object b = a + c # concats the two lists and creates a new list object d = b # at this point, we have *three* list objects d.append(3) # mutates the list object print(d) print(b) # same result since b and d reference the same list object ``` --- Now coming back to your question and the C code you cite there, you are actually looking at the wrong part of CPython to get an explanation there. AST is the abstract syntax tree that the parser creates when parsing a file. It reflects the syntax structure of a program but says nothing about the actual run-time behavior yet. The code you showed for the `Num_kind` is actually responsible for creating `Num` AST objects. 
You can get an idea of this when using the [`ast` module](https://docs.python.org/2/library/ast.html): ``` >>> import ast >>> doc = ast.parse('foo = 5') # the document contains an assignment >>> doc.body[0] <_ast.Assign object at 0x0000000002322278> # the target of that assignment has the id `foo` >>> doc.body[0].targets[0].id 'foo' # and the value of that assignment is the `Num` object that was # created in that C code, with that `n` property containing the value >>> doc.body[0].value <_ast.Num object at 0x00000000023224E0> >>> doc.body[0].value.n 5 ``` If you want to get an idea of the actual evaluation of Python code, you should first look at the byte code. The byte code is what is being executed at run-time by the virtual machine. You can use the [`dis` module](https://docs.python.org/2/library/dis.html) to see byte code for Python code: ``` >>> def test(): foo = 5 >>> import dis >>> dis.dis(test) 2 0 LOAD_CONST 1 (5) 3 STORE_FAST 0 (foo) 6 LOAD_CONST 0 (None) 9 RETURN_VALUE ``` As you can see, there are two major byte code instructions here: `LOAD_CONST` and `STORE_FAST`. `LOAD_CONST` will just load a constant value onto the evaluation stack. In this example, we just load a constant number, but we could also load the value from a function call instead (try playing with the `dis` module to figure out how it works). The assignment itself is made using `STORE_FAST`. The byte code interpreter does [the following](https://github.com/python/cpython/blob/2.7/Python/ceval.c#L1236) for that instruction: ``` TARGET(STORE_FAST) { v = POP(); SETLOCAL(oparg, v); FAST_DISPATCH(); } ``` So it essentially gets the value (the reference to the integer object) from the stack, and then calls `SETLOCAL` which essentially will just assign the value to local variable. Note though, that this does not increase the reference count of that value. 
That’s what happens with [`LOAD_CONST`](https://github.com/python/cpython/blob/2.7/Python/ceval.c#L1227), or any other byte code instruction that gets a value from somewhere: ``` TARGET(LOAD_CONST) { x = GETITEM(consts, oparg); Py_INCREF(x); PUSH(x); FAST_DISPATCH(); } ``` --- So tl;dr: Assignments in Python are always reference copies. References are also copied whenever a value is used (but in many other situations that copied reference only exists for a short time). The AST is responsible for creating an object representation of parsed programs (only the syntax), while the byte code interpreter runs the previously compiled byte code to do actual stuff at run-time and deal with real objects.
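The reference-copying described above can also be observed directly with `sys.getrefcount` (a CPython-specific counter; the call itself briefly adds one reference, which cancels out when taking a difference). This sketch is not from the original answer, just an illustration:

```python
import sys

a = [1, 2, 3]                # any object works; its kind doesn't matter
before = sys.getrefcount(a)
b = a                        # the assignment copies the reference only
after = sys.getrefcount(a)

print(b is a)                # → True: both names refer to one object
print(after - before)        # → 1: exactly one new reference was added
```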
mac - pip install pymssql error
37,771,434
3
2016-06-12T06:38:34Z
38,002,724
8
2016-06-23T22:10:13Z
[ "python", "osx", "python-2.7", "pymssql" ]
I use mac(Ver. 10.11.5). I want to install module pymssql for python. In terminal, I input `sudo -H pip install pymssql`, `pip install pymssql`, `sudo pip install pymssql` . But error occur. The directory '/Users/janghyunsoo/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. The directory `/Users/janghyunsoo/Library/Caches/pip` or its parent directory is not owned by the current user and caching wheels has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. ``` Collecting pymssql Downloading pymssql-2.1.2.tar.gz (898kB) 100% |████████████████████████████████| 901kB 955kB/s Installing collected packages: pymssql Running setup.py install for pymssql ... error Complete output from command /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python -u -c "import setuptools, tokenize;__file__='/private/tmp/pip-build-KA5ksi/pymssql/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-A3wRBy-record/install-record.txt --single-version-externally-managed --compile: setup.py: platform.system() => 'Darwin' setup.py: platform.architecture() => ('64bit', '') setup.py: platform.libc_ver() => ('', '') setup.py: Detected Darwin/Mac OS X. You can install FreeTDS with Homebrew or MacPorts, or by downloading and compiling it yourself. 
Homebrew (http://brew.sh/) -------------------------- brew install freetds MacPorts (http://www.macports.org/) ----------------------------------- sudo port install freetds setup.py: Not using bundled FreeTDS setup.py: include_dirs = ['/usr/local/include', '/opt/local/include', '/opt/local/include/freetds'] setup.py: library_dirs = ['/usr/local/lib', '/opt/local/lib'] running install running build running build_ext building '_mssql' extension creating build creating build/temp.macosx-10.6-intel-2.7 /usr/bin/clang -fno-strict-aliasing -fno-common -dynamic -arch i386 -arch x86_64 -g -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/usr/local/include -I/opt/local/include -I/opt/local/include/freetds -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _mssql.c -o build/temp.macosx-10.6-intel-2.7/_mssql.o -DMSDBLIB _mssql.c:18924:15: error: use of undeclared identifier 'DBVERSION_80' __pyx_r = DBVERSION_80; ^ 1 error generated. error: command '/usr/bin/clang' failed with exit status 1 ---------------------------------------- Command "/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python -u -c "import setuptools, tokenize;__file__='/private/tmp/pip-build-KA5ksi/pymssql/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-A3wRBy-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/tmp/pip-build-KA5ksi/pymssql/ ```
I was able to work around this by reverting to an older version of FreeTDS through Homebrew before running the pip install. ``` brew unlink freetds; brew install homebrew/versions/freetds091 ``` The solution was found by andrewmwhite at: <https://github.com/pymssql/pymssql/issues/432>
Fast Numpy Loops
37,793,370
11
2016-06-13T15:14:11Z
37,793,744
7
2016-06-13T15:31:42Z
[ "python", "numpy", "vectorization", "cython" ]
How do you optimize this code (***without*** vectorizing, as this leads up to using the semantics of the calculation, which is quite often far from being trivial): ``` slow_lib.py: import numpy as np def foo(): size = 200 np.random.seed(1000031212) bar = np.random.rand(size, size) moo = np.zeros((size,size), dtype = np.float) for i in range(0,size): for j in range(0,size): val = bar[j] moo += np.outer(val, val) ``` The point is that such kinds of loops correspond quite often to operations where you have double sums over some vector operation. This is quite slow: ``` >>t = timeit.timeit('foo()', 'from slow_lib import foo', number = 10) >>print ("took: "+str(t)) took: 41.165681839 ``` Ok, so then let's cythonize it and add type annotations like there is no tomorrow: ``` c_slow_lib.pyx: import numpy as np cimport numpy as np import cython @cython.boundscheck(False) @cython.wraparound(False) def foo(): cdef int size = 200 cdef int i,j np.random.seed(1000031212) cdef np.ndarray[np.double_t, ndim=2] bar = np.random.rand(size, size) cdef np.ndarray[np.double_t, ndim=2] moo = np.zeros((size,size), dtype = np.float) cdef np.ndarray[np.double_t, ndim=1] val for i in xrange(0,size): for j in xrange(0,size): val = bar[j] moo += np.outer(val, val) >>t = timeit.timeit('foo()', 'from c_slow_lib import foo', number = 10) >>print ("took: "+str(t)) took: 42.3104710579 ``` ... ehr... what? Numba to the rescue! ``` numba_slow_lib.py: import numpy as np from numba import jit size = 200 np.random.seed(1000031212) bar = np.random.rand(size, size) @jit def foo(): bar = np.random.rand(size, size) moo = np.zeros((size,size), dtype = np.float) for i in range(0,size): for j in range(0,size): val = bar[j] moo += np.outer(val, val) >>t = timeit.timeit('foo()', 'from numba_slow_lib import foo', number = 10) >>print("took: "+str(t)) took: 40.7327859402 ``` So is there really no way to speed this up? 
The point is: * if I convert the inner loop into a vectorized version (building a larger matrix representing the inner loop and then calling np.outer on the larger matrix) I get *much* faster code. * **if I implement something similar in Matlab (R2016a) this performs quite well due to JIT.**
Memory permitting, you can use [`np.einsum`](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.einsum.html) to perform those heavy calculations in a vectorized manner, like so - ``` moo = size*np.einsum('ij,ik->jk',bar,bar) ``` One can also use [`np.tensordot`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.tensordot.html) - ``` moo = size*np.tensordot(bar,bar,axes=(0,0)) ``` Or simply `np.dot` - ``` moo = size*bar.T.dot(bar) ```
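Since the double loop never actually uses `i`, it adds `np.outer(bar[j], bar[j])` to `moo` exactly `size` times for each `j`, which is what the `size *` factor above accounts for. A quick sanity check of the `einsum` replacement against the original loop (with a smaller `size` for speed):

```python
import numpy as np

size = 20
np.random.seed(1000031212)
bar = np.random.rand(size, size)

# the question's original double loop
moo_loop = np.zeros((size, size))
for i in range(size):
    for j in range(size):
        val = bar[j]
        moo_loop += np.outer(val, val)

# vectorized equivalent: sum the outer products over the row axis,
# then scale by size for the redundant i loop
moo_fast = size * np.einsum('ij,ik->jk', bar, bar)
print(np.allclose(moo_loop, moo_fast))  # → True
```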
Fast Numpy Loops
37,793,370
11
2016-06-13T15:14:11Z
37,794,513
12
2016-06-13T16:11:37Z
[ "python", "numpy", "vectorization", "cython" ]
How do you optimize this code (***without*** vectorizing, as this leads up to using the semantics of the calculation, which is quite often far from being trivial): ``` slow_lib.py: import numpy as np def foo(): size = 200 np.random.seed(1000031212) bar = np.random.rand(size, size) moo = np.zeros((size,size), dtype = np.float) for i in range(0,size): for j in range(0,size): val = bar[j] moo += np.outer(val, val) ``` The point is that such kinds of loops correspond quite often to operations where you have double sums over some vector operation. This is quite slow: ``` >>t = timeit.timeit('foo()', 'from slow_lib import foo', number = 10) >>print ("took: "+str(t)) took: 41.165681839 ``` Ok, so then let's cythonize it and add type annotations like there is no tomorrow: ``` c_slow_lib.pyx: import numpy as np cimport numpy as np import cython @cython.boundscheck(False) @cython.wraparound(False) def foo(): cdef int size = 200 cdef int i,j np.random.seed(1000031212) cdef np.ndarray[np.double_t, ndim=2] bar = np.random.rand(size, size) cdef np.ndarray[np.double_t, ndim=2] moo = np.zeros((size,size), dtype = np.float) cdef np.ndarray[np.double_t, ndim=1] val for i in xrange(0,size): for j in xrange(0,size): val = bar[j] moo += np.outer(val, val) >>t = timeit.timeit('foo()', 'from c_slow_lib import foo', number = 10) >>print ("took: "+str(t)) took: 42.3104710579 ``` ... ehr... what? Numba to the rescue! ``` numba_slow_lib.py: import numpy as np from numba import jit size = 200 np.random.seed(1000031212) bar = np.random.rand(size, size) @jit def foo(): bar = np.random.rand(size, size) moo = np.zeros((size,size), dtype = np.float) for i in range(0,size): for j in range(0,size): val = bar[j] moo += np.outer(val, val) >>t = timeit.timeit('foo()', 'from numba_slow_lib import foo', number = 10) >>print("took: "+str(t)) took: 40.7327859402 ``` So is there really no way to speed this up? 
The point is: * if I convert the inner loop into a vectorized version (building a larger matrix representing the inner loop and then calling np.outer on the larger matrix) I get *much* faster code. * **if I implement something similar in Matlab (R2016a) this performs quite well due to JIT.**
Here's the code for `outer`: ``` def outer(a, b, out=None): a = asarray(a) b = asarray(b) return multiply(a.ravel()[:, newaxis], b.ravel()[newaxis,:], out) ``` So each call to `outer` involves a number of python calls. Those eventually call compiled code to perform the multiplication. But each incurs an overhead that has nothing to do with the size of your arrays. So 200 (200\*\*2?) calls to `outer` will have all that overhead, whereas one call to `outer` with all 200 rows has one overhead set, followed by one fast compiled operation. `cython` and `numba` don't compile or otherwise bypass the Python code in `outer`. All they can do is streamline the iteration code that you wrote - and that isn't consuming much time. Without getting into details, the MATLAB jit must be able to replace the 'outer' with faster code - it rewrites the iteration. But my experience with MATLAB dates from a time before its jit. For real speed improvements with `cython` and `numba` you need to use primitive numpy/python code all the way down. Or better yet focus your effort on slow inner pieces. Replacing your `outer` with a streamlined version cuts run time about in half: ``` def foo1(N): size = N np.random.seed(1000031212) bar = np.random.rand(size, size) moo = np.zeros((size,size), dtype = np.float) for i in range(0,size): for j in range(0,size): val = bar[j] moo += val[:,None]*val return moo ``` With the full `N=200` your function took 17s per loop. If I replace the inner two lines with `pass` (no calculation), time drops to 3ms per loop. In other words, the outer loop mechanism is not a big time consumer, at least not compared to many calls to `outer()`.
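The key line in `foo1` above, `val[:,None]*val`, relies on broadcasting a column vector against a row vector to produce the same result as `np.outer`, without paying `outer`'s Python-level call overhead on every iteration. A small check of that equivalence:

```python
import numpy as np

val = np.arange(4, dtype=float)
# val[:, None] has shape (4, 1); multiplying by val (shape (4,))
# broadcasts to shape (4, 4) with entries val[i] * val[j]
print(np.array_equal(np.outer(val, val), val[:, None] * val))  # → True
```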
Regex - Python matching between string and first occurence
37,797,979
2
2016-06-13T19:46:30Z
37,798,006
7
2016-06-13T19:48:45Z
[ "python", "regex" ]
I'm having a hard time grasping regex no matter how much documentation I read. I'm trying to match everything between a string and the first occurrence of `&`. This is what I have: ``` link = "group.do?sys_id=69adb887157e450051e85118b6ff533c&amp;&" rex = re.compile("group\.do\?sys_id=(.?)&") sysid = rex.search(link).groups()[0] ``` I'm using <https://regex101.com/#python> to help me validate my regex, and I can kinda get `rex = re.compile("user_group.do?sys_id=(.*)&")` to work, but the `.*` is greedy and matches to the last `&`, and I'm looking to match to the first `&`. I thought `.?` matched zero or one time.
You don't necessarily need regular expressions here. Use [`urlparse`](https://docs.python.org/2/library/urlparse.html) instead: ``` >>> from urlparse import urlparse, parse_qs >>> parse_qs(urlparse(link).query)['sys_id'][0] '69adb887157e450051e85118b6ff533c' ``` In case of [Python 3](https://docs.python.org/3/library/urllib.parse.html) change the import to: ``` from urllib.parse import urlparse, parse_qs ```
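For completeness, if a regex is still wanted (this is a sketch, not part of the original answer): `.?` matches zero or one of *any* character, which is why the original pattern fails; a negated character class `[^&]*` (or a non-greedy `.*?`) stops at the first `&`:

```python
import re

link = "group.do?sys_id=69adb887157e450051e85118b6ff533c&amp;&"
# [^&]* is greedy but can never cross an '&', so the match always
# ends at the first ampersand after sys_id=
m = re.search(r"sys_id=([^&]*)", link)
print(m.group(1))  # → 69adb887157e450051e85118b6ff533c
```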
Variable step in a for loop
37,845,872
17
2016-06-15T21:14:00Z
37,846,018
19
2016-06-15T21:23:21Z
[ "python", "for-loop" ]
I am trying to loop between 0.01 and 10, but between 0.01 and 0.1 use 0.01 as the step, then between 0.1 and 1.0 use 0.1 as step, and between 1.0 and 10.0 use 1.0 as step. I have the while loop code written, but want to make it more pythonic. ``` i = 0.01 while i < 10: # do something print i if i < 0.1: i += 0.01 elif i < 1.0: i += 0.1 else: i += 1 ``` This will produce ``` 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1, 2, 3, 4, 5, 6, 7, 8, 9 ```
A special-purpose generator function might be the right way to go. This would effectively separate the boring part (getting the list of numbers right) from the interesting part (the `# do something` in your example). ``` def my_range(): for j in .01, .1, 1.: for i in range(1, 10, 1): yield i * j for x in my_range(): print x ```
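To check the generator against the sequence in the question, here is a small verification sketch in Python 3 syntax (the `print x` above is Python 2):

```python
def my_range():
    for j in (.01, .1, 1.):
        for i in range(1, 10, 1):
            yield i * j

values = list(my_range())
# 9 values per decade, 3 decades: 27 values from 0.01 up to 9.0
print(len(values), values[0], values[-1])
```

Floating-point multiplication (`i * j`) avoids the error accumulation you would get from repeatedly adding `0.01`.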
Why is Django returning stale cache data?
37,849,236
9
2016-06-16T03:08:58Z
37,904,144
9
2016-06-19T05:11:48Z
[ "python", "django", "django-cache", "django-cache-machine" ]
I have two Django models as shown below, `MyModel1` & `MyModel2`: ``` class MyModel1(CachingMixin, MPTTModel): name = models.CharField(null=False, blank=False, max_length=255) objects = CachingManager() def __str__(self): return "; ".join(["ID: %s" % self.pk, "name: %s" % self.name, ] ) class MyModel2(CachingMixin, models.Model): name = models.CharField(null=False, blank=False, max_length=255) model1 = models.ManyToManyField(MyModel1, related_name="MyModel2_MyModel1") objects = CachingManager() def __str__(self): return "; ".join(["ID: %s" % self.pk, "name: %s" % self.name, ] ) ``` `MyModel2` has a ManyToMany field to `MyModel1` entitled `model1` Now look what happens when I add a new entry to this ManyToMany field. According to Django, it has no effect: ``` >>> m1 = MyModel1.objects.all()[0] >>> m2 = MyModel2.objects.all()[0] >>> m2.model1.all() [] >>> m2.model1.add(m1) >>> m2.model1.all() [] ``` Why? It seems definitely like a caching issue because I see that there is a new entry in Database table myapp\_mymodel2\_mymodel1 for this link between `m2` & `m1`. How should I fix it??
## Is django-cache-machine really needed? ``` MyModel1.objects.all()[0] ``` Roughly translates to ``` SELECT * FROM app_mymodel LIMIT 1 ``` Queries like this are always fast. There would not be a significant difference in speeds whether you fetch it from the cache or from the database. When you use cache manager you actually add a bit of overhead here that might make things a bit slower. Most of the time this effort will be wasted because there may not be a cache hit as explained in the next section. ## How django-cache-machine works > Whenever you run a query, `CachingQuerySet` will try to find that query > in the cache. Queries are keyed by `{prefix}:{sql}`. If it’s there, we > return the cached result set and everyone is happy. If the query isn’t > in the cache, the normal codepath to run a database query is executed. > As the objects in the result set are iterated over, they are added to > a list that will get cached once iteration is done. source: <https://cache-machine.readthedocs.io/en/latest/> Accordingly, with the two queries executed in your question being identical, cache manager will fetch the second result set from memcache provided the cache hasn't been invalidated. The same link explains how cache keys are invalidated. > To support easy cache invalidation, we use “flush lists” to mark the > cached queries an object belongs to. That way, all queries where an > object was found will be invalidated when that object changes. Flush > lists map an object key to a list of query keys. > > When an object is saved or deleted, all query keys in its flush list > will be deleted. In addition, the flush lists of its foreign key > relations will be cleared. To avoid stale foreign key relations, any > cached objects will be flushed when the object their foreign key > points to is invalidated. It's clear that saving or deleting an object would result in many objects in the cache having to be invalidated. So you are slowing down these operations by using cache manager.
Also worth noting is that the invalidation documentation does not mention many to many fields at all. There is an [open issue](https://github.com/django-cache-machine/django-cache-machine/issues/23) for this and from your comment on that issue it's clear that you have discovered it too. ## Solution Chuck cache machine. Caching all queries are almost never worth it. It leads to all kind of hard to find bugs and issues. The best approach is to optimize your tables and fine tune your queries. If you find a particular query that is too slow cache it manually.
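A minimal sketch of the "cache it manually" suggestion, using the cache-aside pattern. The `DictCache` class here is only a stand-in with a `django.core.cache`-style `get`/`set` interface, and the function and key names are illustrative; in a real project you would use Django's configured cache instead:

```python
class DictCache:
    """Stand-in for django.core.cache.cache (get/set interface)."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value, timeout=None):
        self._data[key] = value

cache = DictCache()
db_hits = []

def expensive_query(pk):
    db_hits.append(pk)          # pretend this is a slow DB query
    return {"id": pk, "name": "user %d" % pk}

def get_user(pk):
    key = "user:%d" % pk
    result = cache.get(key)
    if result is None:          # cache miss: hit the DB, then populate
        result = expensive_query(pk)
        cache.set(key, result, timeout=300)
    return result

get_user(1)
get_user(1)
print(len(db_hits))  # 1 -- second call served from the cache
```

When an object changes, you invalidate just its key (e.g. `cache.delete(key)`) instead of flushing every query that ever touched it.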
How can I perform two-dimensional interpolation using scipy?
37,872,171
29
2016-06-17T02:27:48Z
37,872,172
35
2016-06-17T02:27:48Z
[ "python", "scipy", "interpolation" ]
> **This Q&A is intended as a canonical(-ish) concerning two-dimensional (and multi-dimensional) interpolation using scipy. There are often questions concerning the basic syntax of various multidimensional interpolation methods, I hope to set these straight too.** I have a set of scattered two-dimensional data points, and I would like to plot them as a nice surface, preferably using something like `contourf` or `plot_surface` in `matplotlib.pyplot`. How can I interpolate my two-dimensional or multidimensional data to a mesh using scipy? I've found the `scipy.interpolate` sub-package, but I keep getting errors when using `interp2d` or `bisplrep` or `griddata` or `rbf`. What is the proper syntax of these methods?
Disclaimer: I'm mostly writing this post with syntactical considerations and general behaviour in mind. I'm not familiar with the memory and CPU aspect of the methods described, and I aim this answer at those who have reasonably small sets of data, such that the quality of the interpolation can be the main aspect to consider. I am aware that when working with very large data sets, the better-performing methods (namely `griddata` and `Rbf`) might not be feasible. I'm going to compare three kinds of multi-dimensional interpolation methods ([`interp2d`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp2d.html#scipy.interpolate.interp2d)/splines, [`griddata`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html#scipy.interpolate.griddata) and [`Rbf`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.Rbf.html#scipy.interpolate.Rbf)). I will subject them to two kinds of interpolation tasks and two kinds of underlying functions (points from which are to be interpolated). The specific examples will demonstrate two-dimensional interpolation, but the viable methods are applicable in arbitrary dimensions. Each method provides various kinds of interpolation; in all cases I will use cubic interpolation (or something close1). It's important to note that whenever you use interpolation you introduce bias compared to your raw data, and the specific methods used affect the artifacts that you will end up with. Always be aware of this, and interpolate responsibly. The two interpolation tasks will be 1. upsampling (input data is on a rectangular grid, output data is on a denser grid) 2. interpolation of scattered data onto a regular grid The two functions (over the domain `[x,y] in [-1,1]x[-1,1]`) will be 1. a smooth and friendly function: `cos(pi*x)*sin(pi*y)`; range in `[-1, 1]` 2. 
an evil (and in particular, non-continuous) function: `x*y/(x^2+y^2)` with a value of 0.5 near the origin; range in `[-0.5, 0.5]` Here's how they look: [![fig1: test functions](http://i.stack.imgur.com/OcDcJ.png)](http://i.stack.imgur.com/OcDcJ.png) I will first demonstrate how the three methods behave under these four tests, then I'll detail the syntax of all three. If you know what you should expect from a method, you might not want to waste your time learning its syntax (looking at you, `interp2d`). # Test data For the sake of explicitness, here is the code with which I generated the input data. While in this specific case I'm obviously aware of the function underlying the data, I will only use this to generate input for the interpolation methods. I use numpy for convenience (and mostly for generating the data), but scipy alone would suffice too. ``` import numpy as np import scipy.interpolate as interp # auxiliary function for mesh generation def gimme_mesh(n): minval = -1 maxval = 1 # produce an asymmetric shape in order to catch issues with transpositions return np.meshgrid(np.linspace(minval,maxval,n), np.linspace(minval,maxval,n+1)) # set up underlying test functions, vectorized def fun_smooth(x, y): return np.cos(np.pi*x)*np.sin(np.pi*y) def fun_evil(x, y): # watch out for singular origin; function has no unique limit there return np.where(x**2+y**2>1e-10, x*y/(x**2+y**2), 0.5) # sparse input mesh, 6x7 in shape N_sparse = 6 x_sparse,y_sparse = gimme_mesh(N_sparse) z_sparse_smooth = fun_smooth(x_sparse, y_sparse) z_sparse_evil = fun_evil(x_sparse, y_sparse) # scattered input points, 10^2 altogether (shape (100,)) N_scattered = 10 x_scattered,y_scattered = np.random.rand(2,N_scattered**2)*2 - 1 z_scattered_smooth = fun_smooth(x_scattered, y_scattered) z_scattered_evil = fun_evil(x_scattered, y_scattered) # dense output mesh, 20x21 in shape N_dense = 20 x_dense,y_dense = gimme_mesh(N_dense) ``` # Smooth function and upsampling Let's start with the easiest 
task. Here's how an upsampling from a mesh of shape `[6,7]` to one of `[20,21]` works out for the smooth test function: [![fig2: smooth upsampling](http://i.stack.imgur.com/tgy94.png)](http://i.stack.imgur.com/tgy94.png) Even though this is a simple task, there are already subtle differences between the outputs. At a first glance all three outputs are reasonable. There are two features to note, based on our prior knowledge of the underlying function: the middle case of `griddata` distorts the data most. Note the `y==-1` boundary of the plot (nearest the `x` label): the function should be strictly zero (since `y==-1` is a nodal line for the smooth function), yet this is not the case for `griddata`. Also note the `x==-1` boundary of the plots (behind, to the left): the underlying function has a local maximum (implying zero gradient near the boundary) at `[-1, -0.5]`, yet the `griddata` output shows clearly non-zero gradient in this region. The effect is subtle, but it's a bias none the less. (The fidelity of `Rbf` is even better with the default choice of radial functions, dubbed `multiquadric`.) # Evil function and upsampling A bit harder task is to perform upsampling on our evil function: [![fig3: evil upsampling](http://i.stack.imgur.com/3Qyw1.png)](http://i.stack.imgur.com/3Qyw1.png) Clear differences are starting to show among the three methods. Looking at the surface plots, there are clear spurious extrema appearing in the output from `interp2d` (note the two humps on the right side of the plotted surface). While `griddata` and `Rbf` seem to produce similar results at first glance, the latter seems to produce a deeper minimum near `[0.4, -0.4]` that is absent from the underlying function. However, there is one crucial aspect in which `Rbf` is far superior: it respects the symmetry of the underlying function (which is of course also made possible by the symmetry of the sample mesh). 
The output from `griddata` breaks the symmetry of the sample points, which is already weakly visible in the smooth case. # Smooth function and scattered data Most often one wants to perform interpolation on scattered data. For this reason I expect these tests to be more important. As shown above, the sample points were chosen pseudo-uniformly in the domain of interest. In realistic scenarios you might have additional noise with each measurement, and you should consider whether it makes sense to interpolate your raw data to begin with. Output for the smooth function: [![fig4: smooth scattered interpolation](http://i.stack.imgur.com/B6LLG.png)](http://i.stack.imgur.com/B6LLG.png) Now there's already a bit of a horror show going on. I clipped the output from `interp2d` to between `[-1, 1]` exclusively for plotting, in order to preserve at least a minimal amount of information. It's clear that while some of the underlying shape is present, there are huge noisy regions where the method completely breaks down. The second case of `griddata` reproduces the shape fairly nicely, but note the white regions at the border of the contour plot. This is due to the fact that `griddata` only works inside the convex hull of the input data points (in other words, it doesn't perform any *extrapolation*). I kept the default NaN value for output points lying outside the convex hull.2 Considering these features, `Rbf` seems to perform best. # Evil function and scattered data And the moment we've all been waiting for: [![fig5: evil scattered interpolation](http://i.stack.imgur.com/Nyneb.png)](http://i.stack.imgur.com/Nyneb.png) It's no huge surprise that `interp2d` gives up. In fact, during the call to `interp2d` you should expect some friendly `RuntimeWarning`s complaining about the impossibility of the spline to be constructed. As for the other two methods, `Rbf` seems to produce the best output, even near the borders of the domain where the result is extrapolated. 
--- So let me say a few words about the three methods, in decreasing order of preference (so that the worst is the least likely to be read by anybody). # `scipy.interpolate.Rbf` The `Rbf` class stands for "radial basis functions". To be honest I've never considered this approach until I started researching for this post, but I'm pretty sure I'll be using these in the future. Just like the spline-based methods (see later), usage comes in two steps: first one creates a callable `Rbf` class instance based on the input data, and then calls this object for a given output mesh to obtain the interpolated result. Example from the smooth upsampling test: ``` import scipy.interpolate as interp zfun_smooth_rbf = interp.Rbf(x_sparse, y_sparse, z_sparse_smooth, function='cubic', smooth=0) # default smooth=0 for interpolation z_dense_smooth_rbf = zfun_smooth_rbf(x_dense, y_dense) # not really a function, but a callable class instance ``` Note that both input and output points were 2d arrays in this case, and the output `z_dense_smooth_rbf` has the same shape as `x_dense` and `y_dense` without any effort. Also note that `Rbf` supports arbitrary dimensions for interpolation. 
So, `scipy.interpolate.Rbf` * produces well-behaved output even for crazy input data * supports interpolation in higher dimensions * extrapolates outside the convex hull of the input points (of course extrapolation is always a gamble, and you should generally not rely on it at all) * creates an interpolator as a first step, so evaluating it in various output points is less additional effort * can have output points of arbitrary shape (as opposed to being constrained to rectangular meshes, see later) * prone to preserving the symmetry of the input data * supports multiple kinds of radial functions for keyword `function`: `multiquadric`, `inverse`, `gaussian`, `linear`, `cubic`, `quintic`, `thin_plate` and user-defined arbitrary # `scipy.interpolate.griddata` My former favourite, `griddata`, is a general workhorse for interpolation in arbitrary dimensions. It doesn't perform extrapolation beyond setting a single preset value for points outside the convex hull of the nodal points, but since extrapolation is a very fickle and dangerous thing, this is not necessarily a con. Usage example: ``` z_dense_smooth_griddata = interp.griddata(np.array([x_sparse.ravel(),y_sparse.ravel()]).T, z_sparse_smooth.ravel(), (x_dense,y_dense), method='cubic') # default method is linear ``` Note the slightly kludgy syntax. The input points have to be specified in an array of shape `[N, D]` in `D` dimensions. For this we first have to flatten our 2d coordinate arrays (using `ravel`), then concatenate the arrays and transpose the result. There are multiple ways to do this, but all of them seem to be bulky. The input `z` data also have to be flattened. We have a bit more freedom when it comes to the output points: for some reason these can also be specified as a tuple of multidimensional arrays. 
Note that the `help` of `griddata` is misleading, as it suggests that the same is true for the *input* points (at least for version 0.17.0): ``` griddata(points, values, xi, method='linear', fill_value=nan, rescale=False) Interpolate unstructured D-dimensional data. Parameters ---------- points : ndarray of floats, shape (n, D) Data point coordinates. Can either be an array of shape (n, D), or a tuple of `ndim` arrays. values : ndarray of float or complex, shape (n,) Data values. xi : ndarray of float, shape (M, D) Points at which to interpolate data. ``` In a nutshell, `scipy.interpolate.griddata` * produces well-behaved output even for crazy input data * supports interpolation in higher dimensions * does not perform extrapolation, a single value can be set for the output outside the convex hull of the input points (see `fill_value`) * computes the interpolated values in a single call, so probing multiple sets of output points starts from scratch * can have output points of arbitrary shape * supports nearest-neighbour and linear interpolation in arbitrary dimensions, cubic in 1d and 2d. Nearest-neighbour and linear interpolation use `NearestNDInterpolator` and `LinearNDInterpolator` under the hood, respectively. 1d cubic interpolation uses a spline, 2d cubic interpolation uses `CloughTocher2DInterpolator` to construct a continuously differentiable piecewise-cubic interpolator. * might violate the symmetry of the input data # `scipy.interpolate.interp2d`/`scipy.interpolate.bisplrep` The only reason I'm discussing `interp2d` and its relatives is that it has a deceptive name, and people are likely to try using it. Spoiler alert: don't use it (as of scipy version 0.17.0). It's already more special than the previous subjects in that it's specifically used for two-dimensional interpolation, but I suspect this is by far the most common case for multivariate interpolation. 
As far as syntax goes, `interp2d` is similar to `Rbf` in that it first needs constructing an interpolation instance, which can be called to provide the actual interpolated values. There's a catch, however: the output points have to be located on a rectangular mesh, so inputs going into the call to the interpolator have to be 1d vectors which span the output grid, as if from `numpy.meshgrid`: ``` # reminder: x_sparse and y_sparse are of shape [6, 7] from numpy.meshgrid zfun_smooth_interp2d = interp.interp2d(x_sparse, y_sparse, z_sparse_smooth, kind='cubic') # default kind is 'linear' # reminder: x_dense and y_dense are of shape [20, 21] from numpy.meshgrid xvec = x_dense[0,:] # 1d array of unique x values, 20 elements yvec = y_dense[:,0] # 1d array of unique y values, 21 elements z_dense_smooth_interp2d = zfun_smooth_interp2d(xvec,yvec) # output is [20, 21]-shaped array ``` One of the most common mistakes when using `interp2d` is putting your full 2d meshes into the interpolation call, which leads to explosive memory consumption, and hopefully to a hasty `MemoryError`. Now, the greatest problem with `interp2d` is that it often doesn't work. In order to understand this, we have to look under the hood. It turns out that `interp2d` is a wrapper for the lower-level functions [`bisplrep`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.bisplrep.html#scipy.interpolate.bisplrep)+[`bisplev`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.bisplev.html#scipy.interpolate.bisplev), which are in turn wrappers for FITPACK routines (written in Fortran). 
The equivalent call to the previous example would be ``` kind = 'cubic' if kind=='linear': kx=ky=1 elif kind=='cubic': kx=ky=3 elif kind=='quintic': kx=ky=5 # bisplrep constructs a spline representation, bisplev evaluates the spline at given points bisp_smooth = interp.bisplrep(x_sparse.ravel(),y_sparse.ravel(),z_sparse_smooth.ravel(),kx=kx,ky=ky,s=0) z_dense_smooth_bisplrep = interp.bisplev(xvec,yvec,bisp_smooth).T # note the transpose ``` Now, here's the thing about `interp2d`: (in scipy version 0.17.0) there is a nice [comment in `interpolate/interpolate.py`](https://github.com/scipy/scipy/blob/b9c3ef6efd9c5034aae0168fb0dce4e43ac351de/scipy/interpolate/interpolate.py#L223) for `interp2d`: ``` if not rectangular_grid: # TODO: surfit is really not meant for interpolation! self.tck = fitpack.bisplrep(x, y, z, kx=kx, ky=ky, s=0.0) ``` and indeed in `interpolate/fitpack.py`, in `bisplrep` there's some setup and ultimately ``` tx, ty, c, o = _fitpack._surfit(x, y, z, w, xb, xe, yb, ye, kx, ky, task, s, eps, tx, ty, nxest, nyest, wrk, lwrk1, lwrk2) ``` And that's it. The routines underlying `interp2d` are not really meant to perform interpolation. They might suffice for sufficiently well-behaved data, but under realistic circumstances you will probably want to use something else. 
Just to conclude, `interpolate.interp2d` * can lead to artifacts even with well-tempered data * is specifically for bivariate problems (although there's the limited `interpn` for input points defined on a grid) * performs extrapolation * creates an interpolator as a first step, so evaluating it in various output points is less additional effort * can only produce output over a rectangular grid, for scattered output you would have to call the interpolator in a loop * supports linear, cubic and quintic interpolation * might violate the symmetry of the input data --- 1I'm fairly certain that the `cubic` and `linear` kind of basis functions of `Rbf` do not exactly correspond to the other interpolators of the same name. 2These NaNs are also the reason for why the surface plot seems so odd: matplotlib historically has difficulties with plotting complex 3d objects with proper depth information. The NaN values in the data confuse the renderer, so parts of the surface that should be in the back are plotted to be in the front. This is an issue with visualization, and not interpolation.
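To close with something runnable: a minimal self-contained `griddata` call on scattered points (a sketch, not from the original post). It interpolates samples of a linear function, which piecewise-linear interpolation reproduces exactly inside the convex hull:

```python
import numpy as np
from scipy.interpolate import griddata

np.random.seed(0)
points = np.random.rand(50, 2)                 # scattered inputs in [0,1]^2
values = 2*points[:, 0] + 3*points[:, 1]       # samples of f(x,y) = 2x + 3y
xi = np.array([[0.5, 0.5]])                    # output point inside the hull
out = griddata(points, values, xi, method='linear')
print(out)  # [2.5], since f(0.5, 0.5) = 2.5
```

Note the input shape `(n, D)` and the output points as an `(M, D)` array, exactly as discussed in the syntax section above.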
How to round a number to a chosen integer
37,877,895
16
2016-06-17T09:26:29Z
37,878,425
21
2016-06-17T09:49:45Z
[ "python", "rounding" ]
In Denmark we have an odd grading system that goes as follows. [-3,00,02,4,7,10,12] Our assignment is to take a vector with different decimal numbers, and round it to the nearest valid grade. Here is our code so far. ``` import numpy as np def roundGrade(grades): if (-5<grades<-1.5): gradesRounded = -3 elif (-1.5<=grades<1.5): gradesRounded = 00 elif (1.5<=grades<3): gradesRounded = 2 elif (3<=grades<5.5): gradesRounded = 4 elif (5.5<=grades<8.5): gradesRounded = 7 elif (8.5<=grades<11): gradesRounded = 10 elif (11<=grades<15): gradesRounded = 12 return gradesRounded print(roundGrade(np.array[-2.1,6.3,8.9,9])) ``` Our console doesn't seem to like this and returns: TypeError: builtin\_function\_or\_method' object is not subscriptable All help is appreciated, and if you have a smarter method you are welcome to put us in our place.
You are getting that error because when you print, you are using incorrect syntax: ``` print(roundGrade(np.array[-2.1,6.3,8.9,9])) ``` needs to be ``` print(roundGrade(np.array([-2.1,6.3,8.9,9]))) ``` Notice the extra parentheses: `np.array(<whatever>)` However, this won't work, since your function expects a single number. Fortunately, numpy provides a function which can fix that for you: ``` In [15]: roundGrade = np.vectorize(roundGrade) In [16]: roundGrade(np.array([-2.1,6.3,8.9,9])) Out[16]: array([-3, 7, 10, 10]) ``` <http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.vectorize.html>
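Putting both fixes together (the extra parentheses and `np.vectorize`) gives a self-contained sketch; the final `else` is a small deviation from the original chain, so out-of-range grades map to 12 instead of raising:

```python
import numpy as np

def roundGrade(grades):
    if -5 < grades < -1.5:
        return -3
    elif -1.5 <= grades < 1.5:
        return 0
    elif 1.5 <= grades < 3:
        return 2
    elif 3 <= grades < 5.5:
        return 4
    elif 5.5 <= grades < 8.5:
        return 7
    elif 8.5 <= grades < 11:
        return 10
    else:
        return 12

roundGrade = np.vectorize(roundGrade)
print(roundGrade(np.array([-2.1, 6.3, 8.9, 9])))  # [-3  7 10 10]
```

`np.vectorize` is a convenience wrapper, not a performance tool: it still calls the scalar function once per element.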
How to round a number to a chosen integer
37,877,895
16
2016-06-17T09:26:29Z
37,878,529
17
2016-06-17T09:55:08Z
[ "python", "rounding" ]
In Denmark we have an odd grading system that goes as follows. [-3,00,02,4,7,10,12] Our assignment is to take a vector with different decimal numbers, and round it to the nearest valid grade. Here is our code so far. ``` import numpy as np def roundGrade(grades): if (-5<grades<-1.5): gradesRounded = -3 elif (-1.5<=grades<1.5): gradesRounded = 00 elif (1.5<=grades<3): gradesRounded = 2 elif (3<=grades<5.5): gradesRounded = 4 elif (5.5<=grades<8.5): gradesRounded = 7 elif (8.5<=grades<11): gradesRounded = 10 elif (11<=grades<15): gradesRounded = 12 return gradesRounded print(roundGrade(np.array[-2.1,6.3,8.9,9])) ``` Our console doesn't seem to like this and returns: TypeError: builtin\_function\_or\_method' object is not subscriptable All help is appreciated, and if you have a smarter method you are welcome to put us in our place.
You could simply take the minimum distance from each grade to each grade group, like so. This assumes you actually want to round to the nearest grade from your grade group, which your current code doesn't do exactly. ``` grade_groups = [-3,0,2,4,7,10,12] sample_grades = [-2.1,6.3,8.9,9] grouped = [min(grade_groups,key=lambda x:abs(grade-x)) for grade in sample_grades] print(grouped) ``` **Outputs**: ``` [-3, 7, 10, 10] ``` Note that even after fixing your error your approach won't yet work because `roundGrade` expects a single number as a parameter. [As shown by juanapa](http://stackoverflow.com/a/37878425/4686625) you could vectorize your function.
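The same nearest-value lookup can also be vectorized with NumPy broadcasting, which avoids the Python-level loop (an equivalent sketch of the `min(..., key=...)` line above):

```python
import numpy as np

grade_groups = np.array([-3, 0, 2, 4, 7, 10, 12])
sample_grades = np.array([-2.1, 6.3, 8.9, 9])

# |grade - group| for every pair, shape (4, 7); argmin picks the closest group
idx = np.abs(sample_grades[:, None] - grade_groups[None, :]).argmin(axis=1)
rounded = grade_groups[idx]
print(rounded)  # [-3  7 10 10]
```

Ties go to the first (smallest) group, since `argmin` returns the first minimum.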
Indexing one array by another in numpy
37,878,946
4
2016-06-17T10:15:22Z
37,879,017
7
2016-06-17T10:18:54Z
[ "python", "numpy" ]
Suppose I have a matrix **A** with some arbitrary values: ``` array([[ 2, 4, 5, 3], [ 1, 6, 8, 9], [ 8, 7, 0, 2]]) ``` And a matrix **B** which contains indices of elements in A: ``` array([[0, 0, 1, 2], [0, 3, 2, 1], [3, 2, 1, 0]]) ``` How do I select values from **A** pointed by **B**, i.e.: ``` A[B] = [[2, 2, 4, 5], [1, 9, 8, 6], [2, 0, 7, 8]] ```
You can use [`NumPy's advanced indexing`](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing) - ``` A[np.arange(A.shape[0])[:,None],B] ``` One can also use `linear indexing` - ``` m,n = A.shape out = np.take(A,B + n*np.arange(m)[:,None]) ``` Sample run - ``` In [40]: A Out[40]: array([[2, 4, 5, 3], [1, 6, 8, 9], [8, 7, 0, 2]]) In [41]: B Out[41]: array([[0, 0, 1, 2], [0, 3, 2, 1], [3, 2, 1, 0]]) In [42]: A[np.arange(A.shape[0])[:,None],B] Out[42]: array([[2, 2, 4, 5], [1, 9, 8, 6], [2, 0, 7, 8]]) In [43]: m,n = A.shape In [44]: np.take(A,B + n*np.arange(m)[:,None]) Out[44]: array([[2, 2, 4, 5], [1, 9, 8, 6], [2, 0, 7, 8]]) ```
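On NumPy 1.15 or newer, `np.take_along_axis` expresses the same row-wise gather directly (noting that this function postdates the original answer):

```python
import numpy as np

A = np.array([[2, 4, 5, 3],
              [1, 6, 8, 9],
              [8, 7, 0, 2]])
B = np.array([[0, 0, 1, 2],
              [0, 3, 2, 1],
              [3, 2, 1, 0]])

# gathers A[i, B[i, j]] for every i, j
out = np.take_along_axis(A, B, axis=1)
print(out)
```

This is equivalent to the advanced-indexing form above, but spares you from building the row-index array by hand.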
using decorators to persist python objects
37,883,015
8
2016-06-17T13:35:20Z
37,914,677
10
2016-06-20T04:34:07Z
[ "python", "ipython-notebook", "jupyter-notebook" ]
The code below, which I got from the following link, can persist data to disk. <http://tohyongcheng.github.io/python/2016/06/07/persisting-a-cache-in-python-to-disk.html> I tried it but the file does not get generated. ``` import atexit import pickle # or import cPickle as pickle def persist_cache_to_disk(filename): def decorator(original_func): try: cache = pickle.load(open(filename, 'r')) except (IOError, ValueError): cache = {} atexit.register(lambda: pickle.dump(cache, open(filename, "w"))) def new_func(*args): if tuple(args) not in cache: cache[tuple(args)] = original_func(*args) return cache[args] return new_func return decorator ``` I tried to use this code as per the example... ``` @persist_cache_to_disk('users.p') def get_all_users(): x = 'some user' return x ``` Update: This is working at the Python command prompt, but does not work in an IPython notebook.
The problem is the example employs `atexit` which runs the dump routine only when python exits. This modified version will dump each time the cache is updated: ``` import atexit import functools import pickle # or import cPickle as pickle def persist_cache_to_disk(filename): def decorator(original_func): try: cache = pickle.load(open(filename, 'r')) except (IOError, ValueError): cache = {} # Your python script has to exit in order to run this line! # atexit.register(lambda: pickle.dump(cache, open(filename, "w"))) # # Let's make a function and call it periodically: # def save_data(): pickle.dump(cache, open(filename, "w")) # You should wrap your func @functools.wraps(original_func) def new_func(*args): if tuple(args) not in cache: cache[tuple(args)] = original_func(*args) # Instead, dump your pickled data after # every call where the cache is changed. # This can be expensive! save_data() return cache[args] return new_func return decorator @persist_cache_to_disk('users.p') def get_all_users(): x = 'some user' return x get_all_users() ``` If you wanted to throttle the saving, you could modify `save_data()` to only save, say, when the `len(cache.keys())` is a multiple of 100. I also added `functools.wraps` to your decorator. From the [docs](https://docs.python.org/2/library/functools.html): > Without the use of this decorator factory, the name of the example function would have been 'wrapper', and the docstring of the original example() would have been lost.
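One portability caveat with the snippet above: `open(filename, 'r')` / `'w'` works for Python 2 text-mode pickles, but Python 3 pickle needs binary files. A Python 3 sketch of the same idea (paths and names here are illustrative):

```python
import functools
import os
import pickle
import tempfile

def persist_cache_to_disk(filename):
    def decorator(original_func):
        try:
            with open(filename, 'rb') as f:   # binary mode for pickle
                cache = pickle.load(f)
        except (OSError, ValueError, EOFError):
            cache = {}

        def save_data():
            with open(filename, 'wb') as f:
                pickle.dump(cache, f)

        @functools.wraps(original_func)
        def new_func(*args):
            if args not in cache:             # args is already a tuple
                cache[args] = original_func(*args)
                save_data()                   # only write on a cache miss
            return cache[args]
        return new_func
    return decorator

path = os.path.join(tempfile.mkdtemp(), 'users.p')
calls = []

@persist_cache_to_disk(path)
def get_user(name):
    calls.append(name)                        # track real invocations
    return name.upper()

get_user('alice')
get_user('alice')
print(len(calls))  # 1 -- second call came from the cache
```

Because the dump happens on every cache miss rather than at exit, this also works in long-lived processes such as notebook kernels.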
Meaning of '>>' in Python byte code
37,900,782
11
2016-06-18T19:19:45Z
37,900,805
12
2016-06-18T19:21:46Z
[ "python", "virtual-machine", "bytecode" ]
I have disassembled the following python code ``` def factorial(n): if n <= 1: return 1 elif n == 2: return 2 elif n ==4: print('hi') return n * 2 ``` and the resulting bytecode ``` 2 0 LOAD_FAST 0 (n) 3 LOAD_CONST 1 (1) 6 COMPARE_OP 1 (<=) 9 POP_JUMP_IF_FALSE 16 3 12 LOAD_CONST 1 (1) 15 RETURN_VALUE 4 >> 16 LOAD_FAST 0 (n) 19 LOAD_CONST 2 (2) 22 COMPARE_OP 2 (==) 25 POP_JUMP_IF_FALSE 32 5 28 LOAD_CONST 2 (2) 31 RETURN_VALUE 6 >> 32 LOAD_FAST 0 (n) 35 LOAD_CONST 3 (4) 38 COMPARE_OP 2 (==) 41 POP_JUMP_IF_FALSE 52 7 44 LOAD_CONST 4 ('hi') 47 PRINT_ITEM 48 PRINT_NEWLINE 49 JUMP_FORWARD 0 (to 52) 8 >> 52 LOAD_FAST 0 (n) 55 LOAD_CONST 2 (2) 58 BINARY_MULTIPLY 59 RETURN_VALUE ``` What do the '>>' symbols in the above bytecode stand for?
They are jump targets: positions that earlier `*JUMP*` bytecode jumps to when the condition is met. The first jump: ``` 9 POP_JUMP_IF_FALSE 16 ``` jumps to offset 16, so at offset 16 the output has a target `>>`: ``` 4 >> 16 LOAD_FAST 0 (n) ``` The [`dis.disassemble()` function docs](https://docs.python.org/2/library/dis.html#dis.disassemble) name each column: > [...] > > 3. a labelled instruction, indicated with `>>` And the [`dis.findlabels()` function](https://docs.python.org/2/library/dis.html#dis.findlabels) docs say: > Detect all offsets in the code object code which are jump targets, and return a list of these offsets.
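Both sides of this are accessible programmatically: `dis.findlabels()` returns the jump-target offsets, and the `>>` markers appear in the disassembly text (a small Python 3 sketch):

```python
import dis
import io

def f(n):
    if n <= 1:        # compiles to a conditional jump
        return 1
    return n * 2

# offsets in f's bytecode that some jump instruction targets
labels = dis.findlabels(f.__code__.co_code)
print(labels)

buf = io.StringIO()
dis.dis(f, file=buf)
listing = buf.getvalue()
print('>>' in listing)  # True: jump targets are marked with '>>'
```

The exact offsets vary between Python versions, since the bytecode format changes, but the `>>` convention has stayed the same.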
Is there a Python constant for Unicode whitespace?
37,903,317
15
2016-06-19T02:05:25Z
37,903,645
9
2016-06-19T03:20:37Z
[ "python", "c", "string", "unicode", "whitespace" ]
The `string` module contains a `whitespace` attribute, which is a string consisting of all the ASCII characters that are considered whitespace. Is there a corresponding constant that includes Unicode spaces too, such as the [no-break space (U+00A0)](http://www.fileformat.info/info/unicode/char/00a0/index.htm)? We can see from the question "[strip() and strip(string.whitespace) give different results](http://stackoverflow.com/questions/22230080/strip-and-stripstring-whitespace-give-different-results-despite-documentatio)" that at least `strip` is aware of additional Unicode whitespace characters. This question was identified as a duplicate of [In Python, how to list all characters matched by POSIX extended regex `[:space:]`?](http://stackoverflow.com/questions/8921365/in-python-how-to-list-all-characters-matched-by-posix-extended-regex-space?lq=1), but the answers to that question identify ways of *searching* for whitespace characters to generate your own list. This is a time-consuming process. My question was specifically about a **constant**.
> # Is there a Python constant for Unicode whitespace? Short answer: **No.** I have personally grepped for these characters (specifically, the numeric code points) in the Python code base, and such a constant is not there. The sections below explain why it is not necessary, and how it is implemented without this information being available as a constant. But having such a constant would also be a really bad idea. If the Unicode Consortium added another character/code-point that is semantically whitespace, the maintainers of Python would have a poor choice between continuing to support semantically incorrect code or changing the constant and possibly breaking pre-existing code that might (inadvisably) make assumptions about the constant not changing. How could it add these character code-points? There are 1,111,998 possible characters in Unicode. But only 120,672 are occupied as of [version 8](http://www.unicode.org/versions/Unicode8.0.0/ch01.pdf). Each new version of Unicode may add additional characters. One of these new characters might be a form of whitespace. ## The information is stored in a dynamically generated C function The code that determines what is whitespace in Unicode is the following dynamically generated [code](https://hg.python.org/cpython/file/tip/Tools/unicode/makeunicodedata.py#l557). 
``` # Generate code for _PyUnicode_IsWhitespace() print("/* Returns 1 for Unicode characters having the bidirectional", file=fp) print(" * type 'WS', 'B' or 'S' or the category 'Zs', 0 otherwise.", file=fp) print(" */", file=fp) print('int _PyUnicode_IsWhitespace(const Py_UCS4 ch)', file=fp) print('{', file=fp) print(' switch (ch) {', file=fp) for codepoint in sorted(spaces): print(' case 0x%04X:' % (codepoint,), file=fp) print(' return 1;', file=fp) print(' }', file=fp) print(' return 0;', file=fp) print('}', file=fp) print(file=fp) ``` This is a switch statement, which is a constant code block, but this information is not available as a module "constant" like the string module has. It is instead buried in the function compiled from C and not directly accessible from Python. This is likely because as more code points are added to Unicode, we would not be able to change constants for backwards compatibility reasons. ### The Generated Code Here's the generated code currently [at the tip](https://hg.python.org/cpython/file/tip/Objects/unicodetype_db.h#l5600): ``` int _PyUnicode_IsWhitespace(const Py_UCS4 ch) { switch (ch) { case 0x0009: case 0x000A: case 0x000B: case 0x000C: case 0x000D: case 0x001C: case 0x001D: case 0x001E: case 0x001F: case 0x0020: case 0x0085: case 0x00A0: case 0x1680: case 0x2000: case 0x2001: case 0x2002: case 0x2003: case 0x2004: case 0x2005: case 0x2006: case 0x2007: case 0x2008: case 0x2009: case 0x200A: case 0x2028: case 0x2029: case 0x202F: case 0x205F: case 0x3000: return 1; } return 0; } ``` ## Making your own constant: The following code (from my answer [here](http://stackoverflow.com/a/37903375/541136)), in Python 3, generates a constant of all whitespace: ``` import re import sys s = ''.join(chr(c) for c in range(sys.maxunicode+1)) ws = ''.join(re.findall(r'\s', s)) ``` As an optimization, you could store this in a code base, instead of auto-generating it every new process, but I would caution against assuming that it would never 
change. ``` >>> ws '\t\n\x0b\x0c\r\x1c\x1d\x1e\x1f \x85\xa0\u1680\u2000\u2001\u2002\u2003\u2004\u2005\u2006\u2007\u2008\u2009\u200a\u2028\u2029\u202f\u205f\u3000' ``` (Other answers to the question linked show how to get that for Python 2.) Remember that at one point, some people probably thought 256 character encodings was all that we'd ever need. ``` >>> import string >>> string.whitespace ' \t\n\r\x0b\x0c' ``` If you're insisting on keeping a constant in your code base, just generate the constant for your version of Python, and store it as a literal: ``` unicode_whitespace = u'\t\n\x0b\x0c\r\x1c\x1d\x1e\x1f \x85\xa0\u1680\u2000\u2001\u2002\u2003\u2004\u2005\u2006\u2007\u2008\u2009\u200a\u2028\u2029\u202f\u205f\u3000' ``` The `u` prefix makes it unicode in Python 2 (2.7 happens to recognize the entire string above as whitespace too), and in Python 3 it is ignored as string literals are unicode by default.
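If you do store the literal, a cheap sanity check at import time keeps it honest. The constant name below is the one from this answer, not anything in the standard library; the check relies on `str.isspace()` consulting the running interpreter's own Unicode tables:

```python
import re
import sys

# The literal from the answer above (Python 3 syntax).
unicode_whitespace = (
    '\t\n\x0b\x0c\r\x1c\x1d\x1e\x1f \x85\xa0\u1680'
    '\u2000\u2001\u2002\u2003\u2004\u2005\u2006\u2007'
    '\u2008\u2009\u200a\u2028\u2029\u202f\u205f\u3000'
)

# str.isspace() reflects the running interpreter's Unicode tables, so this
# fails loudly if a future Python release disagrees with the stored constant.
assert all(c.isspace() for c in unicode_whitespace)

# The reverse direction: regenerate the set with the regex approach from the
# answer and compare both ways.
regenerated = ''.join(
    re.findall(r'\s', ''.join(chr(c) for c in range(sys.maxunicode + 1)))
)
assert set(regenerated) == set(unicode_whitespace)
```

Both assertions are cheap enough to run once at startup, and together they catch drift in either direction between the stored constant and the interpreter.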
How to obtain the right alpha value to perfectly blend two images?
37,911,062
9
2016-06-19T19:34:00Z
37,918,596
8
2016-06-20T09:10:55Z
[ "python", "opencv", "image-processing", "computer-vision", "alphablending" ]
I've been trying to blend two images. The current approach I'm taking is: I obtain the coordinates of the overlapping region of the two images, and only for the overlapping regions, I blend with a hardcoded alpha of 0.5 before adding them. So basically I'm just taking half the value of each pixel from the overlapping regions of both images, and adding them. That doesn't give me a perfect blend because the alpha value is hardcoded to 0.5. Here's the result of blending 3 images: As you can see, the transition from one image to another is still visible. How do I obtain the perfect alpha value that would eliminate this visible transition? Or is there no such thing, and I'm taking a wrong approach? Here's how I'm currently doing the blending:

```
for i in range(3):
    base_img_warp[overlap_coords[0], overlap_coords[1], i] = base_img_warp[overlap_coords[0], overlap_coords[1],i]*0.5
    next_img_warp[overlap_coords[0], overlap_coords[1], i] = next_img_warp[overlap_coords[0], overlap_coords[1],i]*0.5

final_img = cv2.add(base_img_warp, next_img_warp)
```

If anyone would like to give it a shot, here are two warped images, and the mask of their overlapping region: <http://imgur.com/a/9pOsQ>
Here is the way I would do it in general: ``` int main(int argc, char* argv[]) { cv::Mat input1 = cv::imread("C:/StackOverflow/Input/pano1.jpg"); cv::Mat input2 = cv::imread("C:/StackOverflow/Input/pano2.jpg"); // compute the vignetting masks. This is much easier before warping, but I will try... // it can be precomputed, if the size and position of your ROI in the image doesnt change and can be precomputed and aligned, if you can determine the ROI for every image // the compression artifacts make it a little bit worse here, I try to extract all the non-black regions in the images. cv::Mat mask1; cv::inRange(input1, cv::Vec3b(10, 10, 10), cv::Vec3b(255, 255, 255), mask1); cv::Mat mask2; cv::inRange(input2, cv::Vec3b(10, 10, 10), cv::Vec3b(255, 255, 255), mask2); // now compute the distance from the ROI border: cv::Mat dt1; cv::distanceTransform(mask1, dt1, CV_DIST_L1, 3); cv::Mat dt2; cv::distanceTransform(mask2, dt2, CV_DIST_L1, 3); // now you can use the distance values for blending directly. If the distance value is smaller this means that the value is worse (your vignetting becomes worse at the image border) cv::Mat mosaic = cv::Mat(input1.size(), input1.type(), cv::Scalar(0, 0, 0)); for (int j = 0; j < mosaic.rows; ++j) for (int i = 0; i < mosaic.cols; ++i) { float a = dt1.at<float>(j, i); float b = dt2.at<float>(j, i); float alpha = a / (a + b); // distances are not between 0 and 1 but this value is. The "better" a is, compared to b, the higher is alpha. // actual blending: alpha*A + beta*B mosaic.at<cv::Vec3b>(j, i) = alpha*input1.at<cv::Vec3b>(j, i) + (1 - alpha)* input2.at<cv::Vec3b>(j, i); } cv::imshow("mosaic", mosaic); cv::waitKey(0); return 0; } ``` Basically you compute the distance from your ROI border to the center of your objects and compute the alpha from both blending mask values. So if one image has a high distance from the border and other one a low distance from border, you prefer the pixel that is closer to the image center. 
It would be better to normalize those values for cases where the warped images aren't of similar size. But even better and more efficient is to precompute the blending masks and warp them. Best would be to know the vignetting of your optical system and choose an identical blending mask (typically with lower values at the border). From the previous code you'll get these results: ROI masks: [![enter image description here](http://i.stack.imgur.com/2viM8.png)](http://i.stack.imgur.com/2viM8.png) [![enter image description here](http://i.stack.imgur.com/mFWHT.png)](http://i.stack.imgur.com/mFWHT.png) Blending masks (just as an impression, must be float matrices instead): [![enter image description here](http://i.stack.imgur.com/pgsgB.png)](http://i.stack.imgur.com/pgsgB.png) [![enter image description here](http://i.stack.imgur.com/GzeUR.png)](http://i.stack.imgur.com/GzeUR.png) Image mosaic: [![enter image description here](http://i.stack.imgur.com/8TPQg.png)](http://i.stack.imgur.com/8TPQg.png)
Cut within a pattern using Python regex
37,928,771
12
2016-06-20T17:50:25Z
37,929,012
7
2016-06-20T18:04:32Z
[ "python", "regex", "string", "split", "protein-database" ]
**Objective:** I am trying to perform a cut in Python RegEx where split doesn't quite do what I want. I need to cut within a pattern, but between characters.

**What I am looking for:** I need to recognize the pattern below in a string, and split the string at the location of the pipe. The pipe isn't actually in the string; it just shows where I want to split.

Pattern: `CDE|FG`

String: `ABCDEFGHIJKLMNOCDEFGZYPE`

Results: `['ABCDE', 'FGHIJKLMNOCDE', 'FGZYPE']`

**What I have tried:** It seems like using split with parentheses is close, but it doesn't keep the search pattern attached to the results like I need it to.

`re.split('CDE()FG', 'ABCDEFGHIJKLMNOCDEFGZYPE')` gives `['AB', 'HIJKLMNO', 'ZYPE']`, when I actually need `['ABCDE', 'FGHIJKLMNOCDE', 'FGZYPE']`.

**Motivation:** Practicing with RegEx, and wanting to see if I could use RegEx to make a script that would predict the fragments of a protein digestion using specific proteases.
A non-regex way would be to [replace](https://docs.python.org/2/library/stdtypes.html#str.replace) the pattern with the piped value and then [split](https://docs.python.org/2/library/stdtypes.html#str.split) on the pipe:

```
>>> pattern = 'CDE|FG'
>>> s = 'ABCDEFGHIJKLMNOCDEFGZYPE'
>>> s.replace('CDEFG',pattern).split('|')
['ABCDE', 'FGHIJKLMNOCDE', 'FGZYPE']
```
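If you do want a pure-regex cut, the split point can be written as a zero-width position between a lookbehind and a lookahead. Note this relies on `re.split` accepting patterns that can match an empty string, which works on Python 3.7 and later:

```python
import re

s = 'ABCDEFGHIJKLMNOCDEFGZYPE'

# (?<=CDE) asserts that 'CDE' just ended and (?=FG) asserts that 'FG'
# starts next. The match consumes no characters, so both halves keep theirs.
parts = re.split(r'(?<=CDE)(?=FG)', s)
print(parts)  # ['ABCDE', 'FGHIJKLMNOCDE', 'FGZYPE']
```

Unlike the replace-and-split trick, this never inserts a sentinel character into the data, so it is safe even if `|` could legitimately appear in the input.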
Identifying consecutive occurrences of a value
37,934,399
9
2016-06-21T01:56:07Z
37,934,721
8
2016-06-21T02:39:32Z
[ "python", "pandas", "dataframe", "itertools" ]
I have a df like so: ``` Count 1 0 1 1 0 0 1 1 1 0 ``` and I want to return a `1` in a new column if there are two or more consecutive occurrences of `1` in `Count` and a `0` if there is not. So in the new column each row would get a `1` based on this criteria being met in the column `Count`. My desired output would then be: ``` Count New_Value 1 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 1 1 0 0 ``` I am thinking I may need to use `itertools` but I have been reading about it and haven't come across what I need yet. I would like to be able to use this method to count any number of consecutive occurrences, not just 2 as well. For example, sometimes I need to count 10 consecutive occurrences, I just use 2 in the example here.
You could: ``` df['consecutive'] = df.Count.groupby((df.Count != df.Count.shift()).cumsum()).transform('size') * df.Count ``` to get: ``` Count consecutive 0 1 1 1 0 0 2 1 2 3 1 2 4 0 0 5 0 0 6 1 3 7 1 3 8 1 3 9 0 0 ``` From here you can, for any threshold: ``` threshold = 2 df['consecutive'] = (df.consecutive > threshold).astype(int) ``` to get: ``` Count consecutive 0 1 0 1 0 0 2 1 1 3 1 1 4 0 0 5 0 0 6 1 1 7 1 1 8 1 1 9 0 0 ``` or, in a single step: ``` (df.Count.groupby((df.Count != df.Count.shift()).cumsum()).transform('size') * df.Count >= threshold).astype(int) ``` In terms of efficiency, using `pandas` methods provides a significant speedup when the size of the problem grows: ``` df = pd.concat([df for _ in range(1000)]) %timeit (df.Count.groupby((df.Count != df.Count.shift()).cumsum()).transform('size') * df.Count >= threshold).astype(int) 1000 loops, best of 3: 1.47 ms per loop ``` compared to: ``` %%timeit l = [] for k, g in groupby(df.Count): size = sum(1 for _ in g) if k == 1 and size >= 2: l = l + [1]*size else: l = l + [0]*size pd.Series(l) 10 loops, best of 3: 76.7 ms per loop ```
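Putting the pieces above together into one runnable script, using the `Count` column from the question (`New_Value` is the column name from the question's desired output):

```python
import pandas as pd

df = pd.DataFrame({'Count': [1, 0, 1, 1, 0, 0, 1, 1, 1, 0]})
threshold = 2

# Label each run of equal values, broadcast each run's length back onto its
# rows, zero out the runs of 0s, then compare against the threshold.
runs = (df.Count != df.Count.shift()).cumsum()
run_length = df.Count.groupby(runs).transform('size') * df.Count
df['New_Value'] = (run_length >= threshold).astype(int)

print(df)
```

Changing `threshold` to 10 counts runs of ten or more consecutive 1s, as the question asks.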
'WSGIRequest' object has no attribute 'user' Django admin
37,949,198
5
2016-06-21T15:55:52Z
37,950,161
27
2016-06-21T16:45:28Z
[ "python", "django", "admin", "panel", "wsgi" ]
When I trying to access the admin page it gives me the following error: ``` System check identified no issues (0 silenced). June 21, 2016 - 15:26:14 Django version 1.9.7, using settings 'librato_chart_sender_web.settings' Starting development server at http://127.0.0.1:8000/ Quit the server with CONTROL-C. Internal Server Error: /admin/ Traceback (most recent call last): File "/Library/Python/2.7/site-packages/django/core/handlers/base.py", line 149, in get_response response = self.process_exception_by_middleware(e, request) File "/Library/Python/2.7/site-packages/django/core/handlers/base.py", line 147, in get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/Library/Python/2.7/site-packages/django/contrib/admin/sites.py", line 265, in wrapper return self.admin_view(view, cacheable)(*args, **kwargs) File "/Library/Python/2.7/site-packages/django/utils/decorators.py", line 149, in _wrapped_view response = view_func(request, *args, **kwargs) File "/Library/Python/2.7/site-packages/django/views/decorators/cache.py", line 57, in _wrapped_view_func response = view_func(request, *args, **kwargs) File "/Library/Python/2.7/site-packages/django/contrib/admin/sites.py", line 233, in inner if not self.has_permission(request): File "/Library/Python/2.7/site-packages/django/contrib/admin/sites.py", line 173, in has_permission return request.user.is_active and request.user.is_staff AttributeError: 'WSGIRequest' object has no attribute 'user' [21/Jun/2016 15:26:18] "GET /admin/ HTTP/1.1" 500 78473 ``` Im quite new in django ... but i followed this tutorial: <https://docs.djangoproject.com/en/1.9/ref/contrib/admin/> I dont have any custom AdminSites and custom AdminModels. I already googled about this problem but still i cannot solve it for my case in any way. Can you help ? here is my `settings.py`: ``` """ Django settings for librato_chart_sender_web project. Generated by 'django-admin startproject' using Django 1.11.dev20160523235928. 
For more information on this file, see https://docs.djangoproject.com/en/dev/topics/settings/ For the full list of settings and their values, see https://docs.djangoproject.com/en/dev/ref/settings/ """ import os # Build paths inside the project like this: os.path.join(BASE_DIR, ...) BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/dev/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = '*1@+=wzrqx^6$9z&@2@d8r(w$js+ktw45lv2skez(=kz+rwff_' # SECURITY WARNING: don't run with debug turned on in production! DEBUG = True ALLOWED_HOSTS = [] # Application definition INSTALLED_APPS = [ 'django.contrib.admin', 'librato_chart_sender', 'fontawesome', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ] MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] ROOT_URLCONF = 'librato_chart_sender_web.urls' TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [os.path.join(BASE_DIR, 'librato_chart_sender/templates')], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] WSGI_APPLICATION = 'librato_chart_sender_web.wsgi.application' # Database # https://docs.djangoproject.com/en/dev/ref/settings/#databases DATABASES = { 'default': { 'ENGINE': 
'django.db.backends.sqlite3', 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), } } # Password validation # https://docs.djangoproject.com/en/dev/ref/settings/#auth-password-validators AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] # Internationalization # https://docs.djangoproject.com/en/dev/topics/i18n/ LANGUAGE_CODE = 'en-us' TIME_ZONE = 'GMT' USE_I18N = True USE_L10N = True USE_TZ = True # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/dev/howto/static-files/ STATIC_URL = '/static/' STATICFILES_DIRS = [ ('css', 'librato_chart_sender/static/css'), ('js', 'librato_chart_sender/static/js'), ('fonts', 'librato_chart_sender/static/fonts'), ] ``` and `admin.py`: ``` from django.contrib import admin from .models import Configuration # Register your models here. admin.site.register(Configuration) ```
I found the answer. The problem is the variable name in:

```
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
```

It should not be `MIDDLEWARE`; I changed the name to `MIDDLEWARE_CLASSES` and now it works (the `MIDDLEWARE` setting is only read by Django 1.10 and later, while this project runs 1.9). So now the code is:

```
MIDDLEWARE_CLASSES = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
```
'WSGIRequest' object has no attribute 'user' Django admin
37,949,198
5
2016-06-21T15:55:52Z
39,519,162
10
2016-09-15T19:44:03Z
[ "python", "django", "admin", "panel", "wsgi" ]
When I trying to access the admin page it gives me the following error: ``` System check identified no issues (0 silenced). June 21, 2016 - 15:26:14 Django version 1.9.7, using settings 'librato_chart_sender_web.settings' Starting development server at http://127.0.0.1:8000/ Quit the server with CONTROL-C. Internal Server Error: /admin/ Traceback (most recent call last): File "/Library/Python/2.7/site-packages/django/core/handlers/base.py", line 149, in get_response response = self.process_exception_by_middleware(e, request) File "/Library/Python/2.7/site-packages/django/core/handlers/base.py", line 147, in get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/Library/Python/2.7/site-packages/django/contrib/admin/sites.py", line 265, in wrapper return self.admin_view(view, cacheable)(*args, **kwargs) File "/Library/Python/2.7/site-packages/django/utils/decorators.py", line 149, in _wrapped_view response = view_func(request, *args, **kwargs) File "/Library/Python/2.7/site-packages/django/views/decorators/cache.py", line 57, in _wrapped_view_func response = view_func(request, *args, **kwargs) File "/Library/Python/2.7/site-packages/django/contrib/admin/sites.py", line 233, in inner if not self.has_permission(request): File "/Library/Python/2.7/site-packages/django/contrib/admin/sites.py", line 173, in has_permission return request.user.is_active and request.user.is_staff AttributeError: 'WSGIRequest' object has no attribute 'user' [21/Jun/2016 15:26:18] "GET /admin/ HTTP/1.1" 500 78473 ``` Im quite new in django ... but i followed this tutorial: <https://docs.djangoproject.com/en/1.9/ref/contrib/admin/> I dont have any custom AdminSites and custom AdminModels. I already googled about this problem but still i cannot solve it for my case in any way. Can you help ? here is my `settings.py`: ``` """ Django settings for librato_chart_sender_web project. Generated by 'django-admin startproject' using Django 1.11.dev20160523235928. 
For more information on this file, see https://docs.djangoproject.com/en/dev/topics/settings/ For the full list of settings and their values, see https://docs.djangoproject.com/en/dev/ref/settings/ """ import os # Build paths inside the project like this: os.path.join(BASE_DIR, ...) BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/dev/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = '*1@+=wzrqx^6$9z&@2@d8r(w$js+ktw45lv2skez(=kz+rwff_' # SECURITY WARNING: don't run with debug turned on in production! DEBUG = True ALLOWED_HOSTS = [] # Application definition INSTALLED_APPS = [ 'django.contrib.admin', 'librato_chart_sender', 'fontawesome', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ] MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] ROOT_URLCONF = 'librato_chart_sender_web.urls' TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [os.path.join(BASE_DIR, 'librato_chart_sender/templates')], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] WSGI_APPLICATION = 'librato_chart_sender_web.wsgi.application' # Database # https://docs.djangoproject.com/en/dev/ref/settings/#databases DATABASES = { 'default': { 'ENGINE': 
'django.db.backends.sqlite3', 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), } } # Password validation # https://docs.djangoproject.com/en/dev/ref/settings/#auth-password-validators AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] # Internationalization # https://docs.djangoproject.com/en/dev/topics/i18n/ LANGUAGE_CODE = 'en-us' TIME_ZONE = 'GMT' USE_I18N = True USE_L10N = True USE_TZ = True # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/dev/howto/static-files/ STATIC_URL = '/static/' STATICFILES_DIRS = [ ('css', 'librato_chart_sender/static/css'), ('js', 'librato_chart_sender/static/js'), ('fonts', 'librato_chart_sender/static/fonts'), ] ``` and `admin.py`: ``` from django.contrib import admin from .models import Configuration # Register your models here. admin.site.register(Configuration) ```
To resolve this, go to `settings.py` and find the `MIDDLEWARE` setting. Change that name to `MIDDLEWARE_CLASSES`. <https://docs.djangoproject.com/ja/1.9/topics/http/middleware/>
Flask app get "IOError: [Errno 32] Broken pipe"
37,962,925
6
2016-06-22T08:46:48Z
38,628,780
7
2016-07-28T06:38:31Z
[ "python", "web", "flask", "webserver" ]
I'm using Flask to develop a web app. It works well at first, but after using the web page for a while, the Flask back end shows errors like these:

```
File "/usr/lib64/python2.6/BaseHTTPServer.py", line 329, in handle
    self.handle_one_request()
File "/usr/lib/python2.6/site-packages/werkzeug/serving.py", line 251, in handle_one_request
    return self.run_wsgi()
File "/usr/lib/python2.6/site-packages/werkzeug/serving.py", line 193, in run_wsgi
    execute(self.server.app)
File "/usr/lib/python2.6/site-packages/werkzeug/serving.py", line 184, in execute
    write(data)
File "/usr/lib/python2.6/site-packages/werkzeug/serving.py", line 152, in write
    self.send_header(key, value)
File "/usr/lib64/python2.6/BaseHTTPServer.py", line 390, in send_header
    self.wfile.write("%s: %s\r\n" % (keyword, value))
IOError: [Errno 32] Broken pipe
```

My app runs on port 5000 (`app.run(debug=True,port=5000)`), and I use nginx as the web server, with `proxy_pass http://127.0.0.1:5000` in the nginx config file. I really don't know where the problem is. I use `session['email'] = request.form['email']`, and in another file I use `email = session.get('email')`. Is this usage right? How do I set the session's active period? Or is there some other reason for this error? When I set `app.run(debug=False,port=5000)` instead, it shows a new error:

```
File "/usr/lib64/python2.6/SocketServer.py", line 671, in finish
    self.wfile.flush()
File "/usr/lib64/python2.6/socket.py", line 303, in flush
    self._sock.sendall(buffer(data, write_offset, buffer_size))
socket.error: [Errno 32] Broken pipe
```

Why? Please help me, thanks.
The built-in werkzeug server is not capable of handling the remote end closing the connection while the server is still churning its content out. Instead of `app.run(debug=True,port=5000)`, try

```
from gevent.wsgi import WSGIServer
http_server = WSGIServer(('', 5000), app)
http_server.serve_forever()
```

(in recent gevent releases the module was renamed, so the import becomes `from gevent.pywsgi import WSGIServer`), or if you are using nginx, use it with uwsgi as described [here](http://flask.pocoo.org/docs/0.11/deploying/uwsgi/#starting-your-app-with-uwsgi). This is arguably a `werkzeug` issue rather than a Flask one.
What is the pythonic way of instantiating a class, calling one of its methods, and returning it from a lambda function?
37,970,032
2
2016-06-22T13:47:53Z
37,970,134
9
2016-06-22T13:52:10Z
[ "python", "lambda" ]
I am dealing with widgets and signals, and I want to bind a signal to a certain callback. Since I don't really need to create a named callback function in the case of interest, I am defining it as a lambda function. However, the way it integrates with other classes is best described by the following minimal working example:

```
class Foo():
    def parse(self, what):
        self.bar = what

foo = lambda x = Foo(): (x.parse("All set"), x)[-1]
print(foo().bar)
'All set'
```

The lambda function needs to instantiate a class, call one of its members to parse a string and change its internal state, and return the instantiated class. The only way to do this that I can think of at the moment is as shown in the above example: pass the instance as the default argument, create a tuple whose first element is the call to the method and whose second is the instance itself, then select the last element. Is there a more pythonic and elegant way of obtaining the same result?

EDIT: A few caveats: In the actual code the class Foo is defined in other modules, and I'm passing the lambda as an argument to another function, which is why I don't really need to name the callback. Indeed, what I actually have is something that looks like

```
widget.bind(
    'some_signal',
    lambda t, x = Foo(): (x.parse(t), x)[-1]
)
```
The most pythonic solution is to not use a lambda: ``` def foo(): x = Foo() x.parse("All set") return x print(foo().bar) ``` Lambdas in python are a syntactic convenience and are strictly less powerful than named functions.
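Applied to the `widget.bind` case from the edit, the named function drops straight in. Since the real signal API isn't shown in the question, `Widget` below is a hypothetical stand-in that just stores the callback it is given:

```python
class Foo:
    def parse(self, what):
        self.bar = what

def on_signal(t):
    x = Foo()
    x.parse(t)
    return x

class Widget:
    """Hypothetical stand-in for the question's signal API."""
    def bind(self, signal, callback):
        self.callback = callback

widget = Widget()
widget.bind('some_signal', on_signal)
result = widget.callback('All set')
print(result.bar)  # All set
```

One side benefit: the named function builds a fresh `Foo` on every call, whereas the lambda's default argument `x = Foo()` is evaluated once at definition time, so every invocation of the lambda would reuse the same instance.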
Split complicated strings in Python dynamically
37,975,964
4
2016-06-22T18:43:09Z
37,976,086
8
2016-06-22T18:50:50Z
[ "python", "regex", "string", "split" ]
I have been having difficulty organizing a function that will handle strings in the manner I want. I have looked into a handful of previous questions [1](http://stackoverflow.com/questions/4982531/how-do-i-split-a-comma-delimited-string-in-python-except-for-the-commas-that-are), [2](http://stackoverflow.com/questions/32150332/python-complicated-splitting-of-a-string), [3](http://stackoverflow.com/questions/4998629/python-split-string-with-multiple-delimiters), among others that I have sorted through. Here is the setup: I have well-structured but variable data that needs to be split from a string read from a file into an array of strings. The following showcases some examples of the data I am dealing with:

```
('Vdfbr76','gsdf','gsfd','',NULL),
('Vkdfb23l','gsfd','gsfg','[email protected]',NULL),
('4asg0124e','Lead Actor/SFX MUA/Prop designer','John Smith','[email protected]',NULL),
('asdguIux','Director, Camera Operator, Editor, VFX','John Smith','',NULL),
...
(492,'E1asegaZ1ox','Nysdag_5YmD','145872325372620',1,'long, string, with, commas'),
```

I want to split these strings based on commas; however, there are commas occasionally contained within the strings, which causes problems. In addition to this, developing an accurate `re.split(regex, line)` becomes difficult because the number of items in each line changes throughout the read. Here are some solutions that I have tried up to this point:

```
def splitLine(text, fields, delimiter):
    return_line = []
    regex_string = "(.*?),"
    for i in range(0,len(fields)-1):
        regex_string+=("(.*)")
        if i < len(fields)-2:
            regex_string+=delimiter
    return_line = re.split(regex_string, text)
    return return_line
```

This will give a result where we have the following output:

```
regex_string
return_line
```

However, the main problem with this is that it occasionally lumps two fields together, as with the 3rd value in the array below.
```
(.*?),(.*),(.*),(.*),(.*),(.*)
['', '\t(222', "'Vy1asdfnuJkA','Ndfbyz3_YMD'", "'14541242640005471'", '2', "'Hello World!')", '', '\n']
```

Where the ideal result would look like:

```
['', '\t(222', "'Vy1asdfnuJkA'", "'Ndfbyz3_YMD'", "'14541242640005471'", '2', "'Hello World!')", '', '\n']
```

It is a small change, but it has a huge influence on the result. I tried manipulating the regex string to better suit what I was trying to do, but with each case I solved, another broke, unfortunately. Another approach I played around with came from user Aaron Cronin in this post [4](http://stackoverflow.com/questions/20599233/splitting-comma-delimited-strings-in-python), which is shown below:

```
def split_at(text, delimiter, opens='<([', closes='>)]', quotes='"\''):
    result = []
    buff = ""
    level = 0
    is_quoted = False

    for char in text:
        if char in delimiter and level == 0 and not is_quoted:
            result.append(buff)
            buff = ""
        else:
            buff += char
            if char in opens:
                level += 1
            if char in closes:
                level -= 1
            if char in quotes:
                is_quoted = not is_quoted

    if not buff == "":
        result.append(buff)

    return result
```

The results of this look like so:

```
["\t('Vk3NIasef366l','gsdasdf','gsfasfd','',NULL),\n"]
```

The main problem is that it comes out as the same string, which puts me in a feedback loop. The ideal result would look like:

```
[\t('Vk3NIasef366l','gsdasdf','gsfasfd','',NULL),\n]
```

Any help is appreciated; I am not sure what the best approach is in this scenario. I am happy to clarify any questions that arise as well. I tried to be as complete as possible.
Use [`ast`'s `literal_eval`](https://docs.python.org/2/library/ast.html#ast.literal_eval)! ``` from ast import literal_eval s = """('Vdfbr76','gsdf','gsfd','',NULL), ('Vkdfb23l','gsfd','gsfg','[email protected]',NULL), ('4asg0124e','Lead Actor/SFX MUA/Prop designer','John Smith','[email protected]',NULL), ('asdguIux','Director, Camera Operator, Editor, VFX','John Smith','',NULL), (492,'E1asegaZ1ox','Nysdag_5YmD','145872325372620',1,'long, string, with, commas'), """ for line in s.split("\n"): line = line.strip().rstrip(",").replace("NULL", "None") if line: print list(literal_eval(line)) #list(..) is just an example ``` Output: ``` ['Vdfbr76', 'gsdf', 'gsfd', '', None] ['Vkdfb23l', 'gsfd', 'gsfg', '[email protected]', None] ['4asg0124e', 'Lead Actor/SFX MUA/Prop designer', 'John Smith', '[email protected]', None] ['asdguIux', 'Director, Camera Operator, Editor, VFX', 'John Smith', '', None] [492, 'E1asegaZ1ox', 'Nysdag_5YmD', '145872325372620', 1, 'long, string, with, commas'] ```
Empty class size in python
37,990,752
12
2016-06-23T11:48:13Z
37,990,980
12
2016-06-23T11:58:20Z
[ "python" ]
I'm just trying to understand the rationale behind the size of an empty class in Python. In C++, as everyone knows, the size of an empty class is 1 byte (as far as I have seen); this lets the runtime create unique objects. I tried to find out the size of an empty class in Python:

```
class Empty: pass  # I hope this creates an empty class
```

and when I do

```
import sys
print("show", sys.getsizeof(Empty))  # I get 1016
```

I wonder why `Empty` takes this much (1016 bytes)? Also, is the value it returns (1016) a standard value that (mostly) never changes, like in C++? Can we expect any empty-base-class optimization from the Python interpreter? Is there any way we can reduce the size of an `Empty` (just for curiosity's sake)?
I assume you are running a 64 bit version of Python 3. On 32 bit Python 3.6 (on Linux), your code prints `show 508`. However, that's the size of the class object itself, which inherits quite a lot of things from the base `object` class. If you instead get the size of an *instance* of your class the result is much smaller. On my system, ``` import sys class Empty(object): pass print("show", sys.getsizeof(Empty())) ``` **output** ``` show 28 ``` which is a *lot* more compact. :) FWIW, on Python 2.6.6, `sys.getsizeof(Empty)` returns 448 for a new-style class, and a measly 44 for an old-style class (one that doesn't inherit from `object`). `sys.getsizeof(Empty())` returns 28 for a new-style class instance and 32 for an old-style. --- You can reduce the size of an instance by using [`__slots__`](https://docs.python.org/3/reference/datamodel.html#object.__slots__) > This class variable can be assigned a string, iterable, or sequence of > strings with variable names used by instances. `__slots__` reserves > space for the declared variables and prevents the automatic creation > of `__dict__` and `__weakref__` for each instance. ``` import sys class Empty(object): __slots__ = [] print("show", sys.getsizeof(Empty())) ``` **output** ``` show 8 ``` Please read the docs to understand how to use this feature.
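The three sizes discussed here can be compared side by side. Exact byte counts vary across Python versions, builds, and platforms, so only the ordering is asserted:

```python
import sys

class Plain:
    pass

class Slotted:
    __slots__ = ()  # no per-instance __dict__ or __weakref__

class_size = sys.getsizeof(Plain)        # the class object itself (large)
plain_size = sys.getsizeof(Plain())      # an ordinary instance (small)
slotted_size = sys.getsizeof(Slotted())  # a slotted instance (smaller still)

print(class_size, plain_size, slotted_size)
assert class_size > plain_size > slotted_size
```

The class object carries method tables, the MRO, and everything inherited from `object`, which is why it dwarfs any instance; `__slots__` then trims the instance further by dropping the `__dict__` and `__weakref__` machinery.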
type(4) == type(int) is False in Python?
38,006,826
2
2016-06-24T06:23:33Z
38,006,843
10
2016-06-24T06:25:00Z
[ "python", "python-2.7", "types", "integer" ]
I tried `type(4) == type(int)`, which returns `False`, but `print type(4)` returns `<type 'int'>`, so 4 is obviously an `int`. I'm confused about why the first statement returns `False` and not `True`.
The `type` of `int` is *type itself*: ``` >>> type(int) <type 'type'> ``` You'd compare with `int` **directly**; `int` is, after all, a type, as we established above: ``` >>> type(4) == int True ``` or even, since `int` is a singleton, like all types should be: ``` >>> type(4) is int True ``` However, the proper way to test for types is to use the [`isinstance()` function](https://docs.python.org/3/library/functions.html#isinstance): ``` >>> isinstance(4, int) True ``` `isinstance()` also allows for *any subclasses* of `int` to pass this test; a subclass is always considered to be *at least* an `int`. That includes any custom subclasses you could build yourself, which would still work as an `int` everywhere else in your code.
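One concrete illustration of the subclass point: `bool` is a built-in subclass of `int`, so the exact-type check and the `isinstance()` check can disagree.

```python
# bool is a subclass of int, so isinstance() accepts it where an int
# is expected, while an exact type comparison does not.
print(type(True) is int)      # False: the exact type is bool
print(isinstance(True, int))  # True: a bool is an int by inheritance
print(issubclass(bool, int))  # True
```

This is exactly why `isinstance()` is preferred: code checking for an `int` keeps working when handed a subclass instance.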
How to tell if tensorflow is using gpu acceleration from inside python shell?
38,009,682
4
2016-06-24T09:14:23Z
38,019,608
7
2016-06-24T18:07:13Z
[ "python", "ubuntu", "tensorflow" ]
I have installed TensorFlow on my Ubuntu 16.04 machine using the second answer [here](https://devtalk.nvidia.com/default/topic/936429/-solved-tensorflow-with-gpu-in-anaconda-env-ubuntu-16-04-cuda-7-5-cudnn-/) with Ubuntu's built-in apt CUDA installation. Now my question is: how can I test whether TensorFlow is really using the GPU? I have a GTX 960M GPU. When I `import tensorflow` this is the output:

```
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally
```

Is this output enough to check whether TensorFlow is using the GPU?
No, I don't think "open CUDA library" is enough to tell, because different nodes of the graph may be on different devices. To find out which device is used, you can enable log device placement like this: ``` sess = tf.Session(config=tf.ConfigProto(log_device_placement=True)) ```
Implementing an asynchronous iterator
38,031,066
11
2016-06-25T17:24:14Z
38,031,740
14
2016-06-25T18:36:26Z
[ "python", "python-3.x", "asynchronous", "python-asyncio" ]
Per [PEP-492](https://www.python.org/dev/peps/pep-0492/#asynchronous-iterators-and-async-for) I am trying to implement an asynchronous iterator, such that I can do e.g. ``` async for foo in bar: ... ``` Here is a trivial example, similar to the one in the docs, with a very basic test of instantiation and async iteration: ``` import pytest class TestImplementation: def __aiter__(self): return self async def __anext__(self): raise StopAsyncIteration @pytest.mark.asyncio # note use of pytest-asyncio marker async def test_async_for(): async for _ in TestImplementation(): pass ``` However, when I execute my test suite, I see: ``` =================================== FAILURES =================================== ________________________________ test_async_for ________________________________ @pytest.mark.asyncio async def test_async_for(): > async for _ in TestImplementation(): E TypeError: 'async for' received an invalid object from __aiter__: TestImplementation ...: TypeError ===================== 1 failed, ... passed in 2.89 seconds ====================== ``` Why does my `TestImplementation` appear to be invalid? As far as I can tell it meets the protocol: > 1. An object must implement an `__aiter__` method ... returning an asynchronous iterator object. > 2. An asynchronous iterator object must implement an `__anext__` method ... returning an awaitable. > 3. To stop iteration `__anext__` must raise a `StopAsyncIteration` exception. This is failing with the latest released versions of Python (3.5.1), `py.test` (2.9.2) and `pytest-asyncio` (0.4.1).
If you read [a little further down the documentation](https://www.python.org/dev/peps/pep-0492/#why-aiter-does-not-return-an-awaitable) it mentions that (emphasis mine): > PEP 492 was accepted in CPython 3.5.0 with `__aiter__` defined as a > method, that was expected to return **an awaitable resolving to an > asynchronous iterator**. > > In 3.5.2 (as PEP 492 was accepted on a provisional basis) the > `__aiter__` protocol was updated to return asynchronous iterators directly. Therefore for versions prior to 3.5.2 (released 2016/6/27) the documentation is slightly out of step with how to write a working asynchronous iterator. The fixed version for 3.5.0 and 3.5.1 looks like: ``` class TestImplementation: async def __aiter__(self): # ^ note return self async def __anext__(self): raise StopAsyncIteration ``` This was introduced on closing [bug #27243](http://bugs.python.org/issue27243) and is a little clearer in the [data model documentation](https://docs.python.org/3/reference/datamodel.html#asynchronous-iterators), which also suggests a way of writing backwards compatible code.
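For reference, here is a minimal self-contained sketch (my own example, not from the answer) of a finite asynchronous iterator using the 3.5.2+ protocol, where `__aiter__` is a plain method returning the iterator directly; it's driven here with `asyncio.run`, which is available from Python 3.7.

```python
import asyncio

class Countdown:
    """Finite async iterator yielding n, n-1, ..., 1."""
    def __init__(self, n):
        self.n = n

    def __aiter__(self):
        # Plain (non-async) method in 3.5.2+: returns the iterator itself.
        return self

    async def __anext__(self):
        if self.n <= 0:
            raise StopAsyncIteration
        value = self.n
        self.n -= 1
        return value

async def collect():
    result = []
    async for value in Countdown(3):
        result.append(value)
    return result

result = asyncio.run(collect())
print(result)  # [3, 2, 1]
```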
Looping through an interval in either direction
38,036,637
4
2016-06-26T08:32:09Z
38,036,694
7
2016-06-26T08:38:13Z
[ "python", "loops", "intervals" ]
Suppose you want to loop through all integers between two bounds `a` and `b` (inclusive), but don't know in advance how `a` compares to `b`. Expected behavior: ``` def run(a, b): if a < b: for i in range(a, b + 1): print i, elif a > b: for i in range(a, b - 1, -1): print i, else: print a print run(3, 6) run(6, 3) run(5, 5) ``` Result: ``` 3 4 5 6 6 5 4 3 5 ``` Is there a more elegant solution? The following is more concise, but fails when `a == b`: ``` def run(a, b): for i in range(a, b + cmp(b, a), cmp(b, a)): print i, print run(3, 6) run(6, 3) run(5, 5) ``` Result: ``` 3 4 5 6 6 5 4 3 (...) ValueError: range() step argument must not be zero ```
This will work for all cases:

```
def run(a, b):
    """Iterate from a to b (inclusive)."""
    step = -1 if b < a else 1
    for x in xrange(a, b + step, step):
        yield x
```

The insight that led me to this formulation was that `step` and the adjustment to `b` were the same in both of your cases; once you have an inclusive end you don't need to special-case `a == b`. Note that I've written it as a generator so that it doesn't just `print` the results, which makes it more useful when you need to integrate it with other code:

```
>>> list(run(3, 6))
[3, 4, 5, 6]
>>> list(run(6, 3))
[6, 5, 4, 3]
>>> list(run(5, 5))
[5]
```

---

Using generator delegation in Python 3.3+ (see [PEP-380](https://www.python.org/dev/peps/pep-0380/)), this becomes even neater:

```
def run(a, b):
    """Iterate from a to b (inclusive)."""
    step = -1 if b < a else 1
    yield from range(a, b + step, step)
```
Python return several value to add in the middle of a list
38,038,123
2
2016-06-26T11:36:25Z
38,038,151
9
2016-06-26T11:39:23Z
[ "python" ]
Is it possible to make a function that returns several elements like this: ``` def foo(): return 'b', 'c', 'd' print ['a', foo(), 'e'] # ['a', 'b', 'c', 'd', 'e'] ``` I tried this but it doesn't work
You can insert a sequence into a list with a slice assignment: ``` bar = ['a', 'e'] bar[1:1] = foo() print bar ``` Note that the slice is essentially empty; `bar[1:1]` is an empty list between `'a'` and `'e'` here. To do this on one line in Python 2 requires concatenation: ``` ['a'] + list(foo()) + ['e'] ``` If you were to upgrade to Python 3.5, you can use `*` unpacking instead: ``` print(['a', *foo(), 'e']) ``` See [*Additional Unpacking Generalisations*](https://docs.python.org/3/whatsnew/3.5.html#whatsnew-pep-448) in *What's New in Python 3.5*. Demo (using Python 3): ``` >>> def foo(): ... return 'b', 'c', 'd' ... >>> bar = ['a', 'e'] >>> bar[1:1] = foo() >>> bar ['a', 'b', 'c', 'd', 'e'] >>> ['a'] + list(foo()) + ['e'] ['a', 'b', 'c', 'd', 'e'] >>> ['a', *foo(), 'e'] ['a', 'b', 'c', 'd', 'e'] ```
Django Left Outer Join
38,060,232
13
2016-06-27T17:52:43Z
38,108,285
12
2016-06-29T19:16:34Z
[ "python", "django", "django-models", "orm" ]
I have a website where users can see a list of movies, and create reviews for them. The user should be able to see the list of all the movies. Additionally, IF they have reviewed the movie, they should be able to see the score that they gave it. If not, the movie is just displayed without the score. They do not care at all about the scores provided by other users. Consider the following `models.py` ``` from django.contrib.auth.models import User from django.db import models class Topic(models.Model): name = models.TextField() def __str__(self): return self.name class Record(models.Model): user = models.ForeignKey(User) topic = models.ForeignKey(Topic) value = models.TextField() class Meta: unique_together = ("user", "topic") ``` What I essentially want is this ``` select * from bar_topic left join (select topic_id as tid, value from bar_record where user_id = 1) on tid = bar_topic.id ``` Consider the following `test.py` for context: ``` from django.test import TestCase from bar.models import * from django.db.models import Q class TestSuite(TestCase): def setUp(self): t1 = Topic.objects.create(name="A") t2 = Topic.objects.create(name="B") t3 = Topic.objects.create(name="C") # 2 for Johnny johnny = User.objects.create(username="Johnny") johnny.record_set.create(topic=t1, value=1) johnny.record_set.create(topic=t3, value=3) # 3 for Mary mary = User.objects.create(username="Mary") mary.record_set.create(topic=t1, value=4) mary.record_set.create(topic=t2, value=5) mary.record_set.create(topic=t3, value=6) def test_raw(self): print('\nraw\n---') with self.assertNumQueries(1): topics = Topic.objects.raw(''' select * from bar_topic left join (select topic_id as tid, value from bar_record where user_id = 1) on tid = bar_topic.id ''') for topic in topics: print(topic, topic.value) def test_orm(self): print('\norm\n---') with self.assertNumQueries(1): topics = Topic.objects.filter(Q(record__user_id=1)).values_list('name', 'record__value') for topic in topics: print(*topic) 
```

BOTH tests should print the exact same output; however, only the raw version spits out the correct table of results:

```
raw
---
A 1
B None
C 3
```

The ORM instead returns this:

```
orm
---
A 1
C 3
```

Any attempt to join back the rest of the topics, those that have no reviews from the user "johnny", results in the following:

```
orm
---
A 1
A 4
B 5
C 3
C 6
```

How can I accomplish the simple behavior of the raw query with the Django ORM?

Edit: This sort of works, but seems very poor:

```
topics = Topic.objects.filter(record__user_id=1).values_list('name', 'record__value')
noned = Topic.objects.exclude(record__user_id=1).values_list('name')
for topic in chain(topics, noned):
    ...
```

Edit: This works a little better, but is still bad:

```
topics = Topic.objects.filter(record__user_id=1).annotate(value=F('record__value'))
topics |= Topic.objects.exclude(pk__in=topics)
```

```
orm
---
A 1
B 5
C 3
```
First of all, there is no way (atm Django 1.9.7) to have a representation **with Django's ORM** of the raw query you posted *exactly* as you want; however, you can get the same desired result with something like:

```
>>> Topic.objects.annotate(f=Case(When(record__user=johnny, then=F('record__value')), output_field=IntegerField())).order_by('id', 'name', 'f').distinct('id', 'name').values_list('name', 'f')
>>> [(u'A', 1), (u'B', None), (u'C', 3)]
>>> Topic.objects.annotate(f=Case(When(record__user=mary, then=F('record__value')), output_field=IntegerField())).order_by('id', 'name', 'f').distinct('id', 'name').values_list('name', 'f')
>>> [(u'A', 4), (u'B', 5), (u'C', 6)]
```

Here is the SQL generated for the first query:

```
>>> print Topic.objects.annotate(f=Case(When(record__user=johnny, then=F('record__value')), output_field=IntegerField())).order_by('id', 'name', 'f').distinct('id', 'name').values_list('name', 'f').query
>>> SELECT DISTINCT ON ("payments_topic"."id", "payments_topic"."name") "payments_topic"."name", CASE WHEN "payments_record"."user_id" = 1 THEN "payments_record"."value" ELSE NULL END AS "f" FROM "payments_topic" LEFT OUTER JOIN "payments_record" ON ("payments_topic"."id" = "payments_record"."topic_id") ORDER BY "payments_topic"."id" ASC, "payments_topic"."name" ASC, "f" ASC
```

## Some notes

* Don't hesitate to use raw queries, especially when performance is the most important thing. Moreover, sometimes it is a must, since you can't get the same result using Django's ORM; in other cases you can, but once in a while having clean and understandable code is more important than the performance of *this piece* of code.
* `distinct` with positional arguments is used in this answer, which is available for PostgreSQL only, atm.

In the docs you can see more about [conditional expressions](https://docs.djangoproject.com/en/1.9/ref/models/conditional-expressions/).
How to fix Selenium WebDriverException: "The browser appears to have exited"
38,074,031
4
2016-06-28T10:44:26Z
38,617,777
7
2016-07-27T15:49:04Z
[ "python", "selenium", "selenium-webdriver", "selenium-firefoxdriver" ]
I get this exception when I try to use the Firefox WebDriver:

> raise WebDriverException "The browser appears to have exited "
> WebDriverException: Message: The browser appears to have exited before we could connect. If you specified a log\_file in the
> FirefoxBinary constructor, check it for details.

I read [this question](http://stackoverflow.com/questions/27321907/firefox-build-does-not-work-with-selenium) and updated my Selenium, but I still have the same problem. My code:

```
driver = webdriver.Firefox()
time.sleep(5)
driver.get('http://www.ooshop.com')
```

**UPDATE** I read [this question](http://stackoverflow.com/questions/27270337/how-to-fix-webdriverexception-the-browser-appears-to-have-exited-before-we-coul) and now I have this error:

```
OSError: [Errno 20] Not a directory
Exception AttributeError: "'Service' object has no attribute 'process'" in <bound method Service.__del__ of <selenium.webdriver.firefox.service.Service object at 0x407a690>> ignored
```
If you're running Selenium on Firefox 47.0, you need to update to Firefox 47.0.1, which is not released in Ubuntu's main repos, so you have to add this PPA: <https://launchpad.net/~ubuntu-mozilla-security/+archive/ubuntu/ppa> Release notes: <https://www.mozilla.org/en-US/firefox/47.0.1/releasenotes/> "Selenium WebDriver may cause Firefox to crash at startup" Once Firefox 48.0 is out, it will include the fix. I think Ubuntu has skipped this update because it affects very few users. I can confirm the error with FF 47.0 and Selenium 2.53, and I can also confirm that upgrading to FF 47.0.1 fixes the error.
How can I disable ExtDeprecationWarning for external libs in flask
38,079,200
5
2016-06-28T14:37:18Z
38,080,580
9
2016-06-28T15:37:53Z
[ "python", "flask" ]
When I run my script, I get this output: ``` /app/venv/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.sqlalchemy is deprecated, use flask_sqlalchemy instead. .format(x=modname), ExtDeprecationWarning /app/venv/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.marshmallow is deprecated, use flask_marshmallow instead. .format(x=modname), ExtDeprecationWarning /app/venv/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead. .format(x=modname), ExtDeprecationWarning /app/venv/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.restful is deprecated, use flask_restful instead. .format(x=modname), ExtDeprecationWarning /app/venv/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.restful.fields is deprecated, use flask_restful.fields instead. .format(x=modname), ExtDeprecationWarning /app/venv/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.restful.reqparse is deprecated, use flask_restful.reqparse instead. .format(x=modname), ExtDeprecationWarning /app/venv/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.restplus is deprecated, use flask_restplus instead. .format(x=modname), ExtDeprecationWarning /app/venv/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.restful.representations is deprecated, use flask_restful.representations instead. .format(x=modname), ExtDeprecationWarning /app/venv/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.script is deprecated, use flask_script instead. 
.format(x=modname), ExtDeprecationWarning /app/venv/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.migrate is deprecated, use flask_migrate instead. .format(x=modname), ExtDeprecationWarning ``` I don't really care about this, because external libs are causing it. I can't update these libs as I don't own them, and I see that for several of them there are pull requests pending. How can I get some peace and quiet?
First, you *should* care about this because the packages you're using aren't up to date. Report a bug that they should switch to using direct import names, such as `flask_sqlalchemy`, rather than the `flask.ext` import hook. Add a [`warnings.simplefilter`](https://docs.python.org/3.5/library/warnings.html) line to filter out these warnings. You can place it wherever you're configuring your application, before performing any imports that would raise the warning. ``` import warnings from flask.exthook import ExtDeprecationWarning warnings.simplefilter('ignore', ExtDeprecationWarning) ```
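To see the filtering mechanism in isolation, here is a hedged, self-contained sketch using a stand-in warning class (my own, so it runs without Flask installed); `simplefilter('ignore', ...)` suppresses exactly that category.

```python
import warnings

class FakeExtDeprecationWarning(DeprecationWarning):
    """Stand-in for flask.exthook.ExtDeprecationWarning."""

def noisy():
    warnings.warn("importing flask.ext.x is deprecated",
                  FakeExtDeprecationWarning)

# Without an ignore filter the warning is recorded...
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')  # don't let default filters hide it
    noisy()
before = len(caught)

# ...with 'ignore' for the category, it is dropped.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('ignore', FakeExtDeprecationWarning)
    noisy()
after = len(caught)

print(before, after)  # 1 0
```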
Finding an array elements location in a pandas frame column (a.k.a pd.series)
38,083,227
9
2016-06-28T18:01:33Z
38,083,374
9
2016-06-28T18:08:47Z
[ "python", "arrays", "numpy", "pandas", "indexing" ]
I have a pandas frame similar to this one: ``` import pandas as pd import numpy as np data = {'Col1' : [4,5,6,7], 'Col2' : [10,20,30,40], 'Col3' : [100,50,-30,-50], 'Col4' : ['AAA', 'BBB', 'AAA', 'CCC']} df = pd.DataFrame(data=data, index = ['R1','R2','R3','R4']) Col1 Col2 Col3 Col4 R1 4 10 100 AAA R2 5 20 50 BBB R3 6 30 -30 AAA R4 7 40 -50 CCC ``` Given an array of targets: ``` target_array = np.array(['AAA', 'CCC', 'EEE']) ``` I would like to find the cell elements indices in `Col4` which also appear in the `target_array`. I have tried to find a documented answer but it seems beyond my skill... Anyone has any advice? P.S. Incidentally, for this particular case I can input a target array whose elements are the data frame indices names `array(['R1', 'R3', 'R5'])`. Would it be easier that way? Edit 1: Thank you very much for all the great replies. Sadly I can only choose one but everyone seems to point @Divakar as the best. Still you should look at piRSquared and MaxU speed comparisons for all the possibilities available
This should do it:

```
df.loc[df.Col4.isin(target_array)].index
```

---

EDIT: I ran three options from the selected answers: mine, Bruce Pucci's, and Divakar's.

[![enter image description here](http://i.stack.imgur.com/GdHMa.png)](http://i.stack.imgur.com/GdHMa.png)

Divakar's was faster by a large amount. I'd pick his.
Finding an array elements location in a pandas frame column (a.k.a pd.series)
38,083,227
9
2016-06-28T18:01:33Z
38,083,418
10
2016-06-28T18:11:40Z
[ "python", "arrays", "numpy", "pandas", "indexing" ]
I have a pandas frame similar to this one: ``` import pandas as pd import numpy as np data = {'Col1' : [4,5,6,7], 'Col2' : [10,20,30,40], 'Col3' : [100,50,-30,-50], 'Col4' : ['AAA', 'BBB', 'AAA', 'CCC']} df = pd.DataFrame(data=data, index = ['R1','R2','R3','R4']) Col1 Col2 Col3 Col4 R1 4 10 100 AAA R2 5 20 50 BBB R3 6 30 -30 AAA R4 7 40 -50 CCC ``` Given an array of targets: ``` target_array = np.array(['AAA', 'CCC', 'EEE']) ``` I would like to find the cell elements indices in `Col4` which also appear in the `target_array`. I have tried to find a documented answer but it seems beyond my skill... Anyone has any advice? P.S. Incidentally, for this particular case I can input a target array whose elements are the data frame indices names `array(['R1', 'R3', 'R5'])`. Would it be easier that way? Edit 1: Thank you very much for all the great replies. Sadly I can only choose one but everyone seems to point @Divakar as the best. Still you should look at piRSquared and MaxU speed comparisons for all the possibilities available
You can use [`NumPy's in1d`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.in1d.html) - ``` df.index[np.in1d(df['Col4'],target_array)] ``` **Explanation** 1) Create a `1D` mask corresponding to each row telling us whether there is a match between `col4's` element and any element in `target_array` : ``` mask = np.in1d(df['Col4'],target_array) ``` 2) Use the mask to select valid indices from the dataframe as final output : ``` out = df.index[np.in1d(df['Col4'],target_array)] ```
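Putting it together, a runnable end-to-end sketch that rebuilds the question's DataFrame and applies this approach might look like:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {'Col1': [4, 5, 6, 7],
     'Col2': [10, 20, 30, 40],
     'Col3': [100, 50, -30, -50],
     'Col4': ['AAA', 'BBB', 'AAA', 'CCC']},
    index=['R1', 'R2', 'R3', 'R4'])

target_array = np.array(['AAA', 'CCC', 'EEE'])

# Boolean mask: True wherever Col4's value appears in target_array.
mask = np.in1d(df['Col4'], target_array)
matches = df.index[mask]
print(list(matches))  # ['R1', 'R3', 'R4']
```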
Finding an array elements location in a pandas frame column (a.k.a pd.series)
38,083,227
9
2016-06-28T18:01:33Z
38,084,031
7
2016-06-28T18:48:20Z
[ "python", "arrays", "numpy", "pandas", "indexing" ]
I have a pandas frame similar to this one: ``` import pandas as pd import numpy as np data = {'Col1' : [4,5,6,7], 'Col2' : [10,20,30,40], 'Col3' : [100,50,-30,-50], 'Col4' : ['AAA', 'BBB', 'AAA', 'CCC']} df = pd.DataFrame(data=data, index = ['R1','R2','R3','R4']) Col1 Col2 Col3 Col4 R1 4 10 100 AAA R2 5 20 50 BBB R3 6 30 -30 AAA R4 7 40 -50 CCC ``` Given an array of targets: ``` target_array = np.array(['AAA', 'CCC', 'EEE']) ``` I would like to find the cell elements indices in `Col4` which also appear in the `target_array`. I have tried to find a documented answer but it seems beyond my skill... Anyone has any advice? P.S. Incidentally, for this particular case I can input a target array whose elements are the data frame indices names `array(['R1', 'R3', 'R5'])`. Would it be easier that way? Edit 1: Thank you very much for all the great replies. Sadly I can only choose one but everyone seems to point @Divakar as the best. Still you should look at piRSquared and MaxU speed comparisons for all the possibilities available
For the sake of completeness I've added two (`.query()` variants) - my timings against 400K rows df: ``` In [63]: df.shape Out[63]: (400000, 4) In [64]: %timeit df.index[np.in1d(df['Col4'],target_array)] 10 loops, best of 3: 35.1 ms per loop In [65]: %timeit df.index[df.Col4.isin(target_array)] 10 loops, best of 3: 36.7 ms per loop In [66]: %timeit df.loc[df.Col4.isin(target_array)].index 10 loops, best of 3: 47.8 ms per loop In [67]: %timeit df.query('@target_array.tolist() == Col4') 10 loops, best of 3: 45.7 ms per loop In [68]: %timeit df.query('@target_array in Col4') 10 loops, best of 3: 51.9 ms per loop ``` [Here is a similar comparison for (`not in ...`) and for different `dtypes`](http://stackoverflow.com/a/38110564/5741205)
How to remove the Python tools for Visual Studio (June 2016) update notification? It's already installed
38,085,253
7
2016-06-28T19:59:08Z
38,085,524
13
2016-06-28T20:16:16Z
[ "python", "visual-studio-2015", "installation" ]
I have updated VS 2015 Community to Update 3. According to the installer, this includes Python tools 2.2.4. However, Visual Studio still reports that an update is available (from 2.2.3 to 2.2.4), and when I choose to do that, VS Setup starts but the Update button is disabled. It becomes enabled if I uncheck Python tools (due to the fact that in that case it would be removed). VS Update 3 is installed, and in Help / About I can see that Python tools is 2.2.4. How can I remove the notification from VS?
I ran into the same issue. [Downloading the stand-alone installer and running it fixes the issue](https://ptvs.azureedge.net/download/PTVS%202.2.4%20VS%202015.msi).
Assign external function to class variable in Python
38,097,638
5
2016-06-29T10:59:43Z
38,097,916
8
2016-06-29T11:12:23Z
[ "python", "oop", "python-2.x" ]
I am trying to assign a function defined elsewhere to a class variable so I can later call it in one of the methods of the instance, like this:

```
from module import my_func

class Bar(object):
    func = my_func

    def run(self):
        self.func()  # Runs my function
```

The problem is that this fails: when doing `self.func()`, the instance is passed as the first parameter. I've come up with a hack, but it seems ugly to me; does anybody have an alternative?

```
In [1]: class Foo(object):
   ...:     func = lambda *args: args
   ...:     def __init__(self):
   ...:         print(self.func())
   ...:

In [2]: class Foo2(object):
   ...:     funcs = [lambda *args: args]
   ...:     def __init__(self):
   ...:         print(self.funcs[0]())
   ...:

In [3]: f = Foo()
(<__main__.Foo object at 0x00000000044BFB70>,)

In [4]: f2 = Foo2()
()
```

**Edit:** The behavior is different with built-in functions!

```
In [13]: from math import pow

In [14]: def pow_(a, b):
   ....:     return pow(a, b)
   ....:

In [15]: class Foo3(object):
   ....:     func = pow_
   ....:     def __init__(self):
   ....:         print(self.func(2, 3))
   ....:

In [16]: f3 = Foo3()
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-16-c27c8778655e> in <module>()
----> 1 f3 = Foo3()

<ipython-input-15-efeb6adb211c> in __init__(self)
      2     func = pow_
      3     def __init__(self):
----> 4         print(self.func(2, 3))
      5

TypeError: pow_() takes exactly 2 arguments (3 given)

In [17]: class Foo4(object):
   ....:     func = pow
   ....:     def __init__(self):
   ....:         print(self.func(2, 3))
   ....:

In [18]: f4 = Foo4()
8.0
```
Python functions are [*descriptor* objects](https://docs.python.org/2/howto/descriptor.html), and accessing them as class attributes through an instance causes them to be bound as methods. If you want to prevent this, use the [`staticmethod` function](https://docs.python.org/2/library/functions.html#staticmethod) to wrap the function in a different descriptor that doesn't bind to the instance: ``` class Bar(object): func = staticmethod(my_func) def run(self): self.func() ``` Alternatively, access the unbound function via the `__func__` attribute on the method: ``` def run(self): self.func.__func__() ``` or go directly to the class `__dict__` attribute to bypass the descriptor protocol altogether: ``` def run(self): Bar.__dict__['func']() ``` As for `math.pow`, that's not a *Python* function, in that it is written in C code. Most built-in functions are written in C, and most are not descriptors.
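A small self-contained demonstration of the binding behaviour described above (the function and class names are mine): the plain attribute gets bound and receives the instance, while the `staticmethod`-wrapped one does not.

```python
def my_func():
    return "called"

class Bar(object):
    bound = my_func                  # plain assignment: descriptor binds it
    unbound = staticmethod(my_func)  # staticmethod blocks the binding

b = Bar()
print(b.unbound())  # my_func is invoked with no arguments

failed = False
try:
    b.bound()  # the instance is passed as the first argument
except TypeError as exc:
    failed = True
    print("TypeError:", exc)
```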
Why PyPi doesn't show download stats anymore?
38,102,317
3
2016-06-29T14:15:16Z
38,102,521
7
2016-06-29T14:24:42Z
[ "python", "pypi" ]
It was so handy to get an idea of whether a package is popular or not (even if its popularity only comes from another popular package importing it). But now I don't see this info for some reason. An example: <https://pypi.python.org/pypi/blist> Why did they turn off this useful feature?
As can be seen in [this mail.python.org article](https://mail.python.org/pipermail/distutils-sig/2013-May/020855.html), download stats were removed because they weren't updating and would be too difficult to fix. Donald Stufft, the author of the article, listed these reasons: > There are numerous reasons for their removal/deprecation some of which > are: > > * Technically hard to make work with the new CDN > + The CDN is being donated to the PSF, and the donated tier does not offer any form of log access > + The work around for not having log access would greatly reduce the utility of the CDN > * Highly inaccurate > + A number of things prevent the download counts from being accurate, some of which include: > - pip download cache > - Internal or unofficial mirrors > - Packages not hosted on PyPI (for comparisons sake) > - Mirrors or unofficial grab scripts causing inflated counts (Last I looked 25% of the downloads were from a known mirroring > script). > * Not particularly useful > + Just because a project has been downloaded a lot doesn't mean it's good > + Similarly just because a project hasn't been downloaded a lot doesn't mean it's bad
Is there any legitimate use of list[True], list[False] in Python?
38,117,555
37
2016-06-30T08:17:05Z
38,117,588
58
2016-06-30T08:18:59Z
[ "python", "boolean" ]
Since `True` and `False` are instances of `int`, the following is valid in Python: ``` >>> l = [0, 1, 2] >>> l[False] 0 >>> l[True] 1 ``` I understand why this happens. However, I find this behaviour a bit unexpected and can lead to hard-to-debug bugs. It has certainly bitten me a couple of times. Can anyone think of a legit use of indexing lists with `True` or `False`?
In the past, some people have used this behaviour to produce a poor-man's [conditional expression](http://stackoverflow.com/questions/394809/does-python-have-a-ternary-conditional-operator): ``` ['foo', 'bar'][eggs > 5] # produces 'bar' when eggs is 6 or higher, 'foo' otherwise ``` However, with a [proper conditional expression](https://docs.python.org/2/reference/expressions.html#conditional-expressions) having been added to the language in Python 2.5, this is very much frowned upon, for the reasons you state: relying on booleans being a subclass of integers is too 'magical' and unreadable for a maintainer. So, unless you are code-golfing (*deliberately* producing very compact and obscure code), use ``` 'bar' if eggs > 5 else 'foo' ``` instead, which has the added advantage that the two expressions this selects between are *lazily* evaluated; if `eggs > 5` is false, the expression before the `if` is never executed.
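The laziness difference mentioned above can be made concrete with a sketch (my own): the list-index trick evaluates both branches before indexing, while the conditional expression evaluates only the selected one.

```python
calls = []

def branch(name):
    # Record every evaluation so we can see which branches ran.
    calls.append(name)
    return name

eggs = 6

# List-index trick: BOTH branches are evaluated before indexing.
result_index = [branch('foo'), branch('bar')][eggs > 5]
index_calls = list(calls)

calls.clear()

# Conditional expression: only the chosen branch runs.
result_cond = branch('bar') if eggs > 5 else branch('foo')
cond_calls = list(calls)

print(result_index, index_calls)  # bar ['foo', 'bar']
print(result_cond, cond_calls)    # bar ['bar']
```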
Is there any legitimate use of list[True], list[False] in Python?
38,117,555
37
2016-06-30T08:17:05Z
38,123,068
34
2016-06-30T12:27:00Z
[ "python", "boolean" ]
Since `True` and `False` are instances of `int`, the following is valid in Python: ``` >>> l = [0, 1, 2] >>> l[False] 0 >>> l[True] 1 ``` I understand why this happens. However, I find this behaviour a bit unexpected and can lead to hard-to-debug bugs. It has certainly bitten me a couple of times. Can anyone think of a legit use of indexing lists with `True` or `False`?
If you are puzzled why `bool` is a valid index argument: this is simply for **consistency** with the fact that `bool` is a subclass of `int` and in Python it **is** a numerical type. If you are asking why `bool` is a numerical type in the first place then you have to understand that `bool` wasn't present in old releases of Python and people used `int`s instead. I will add a bit of historic arguments. First of all the addition of `bool` in python is shortly described in Guido van Rossum (aka BDFL) blogpost: [The History of Python: The history of `bool`, `True` and `False`](http://python-history.blogspot.co.uk/). The type was added via [PEP 285](https://www.python.org/dev/peps/pep-0285/). The PEP contains the *actual* rationales used for this decisions. I'll quote some of the portions of the PEP below. > 4) Should we strive to eliminate non-Boolean operations on bools > in the future, through suitable warnings, so that for example > `True+1` would eventually (in Python 3000) be illegal? > > => No. > > There's a small but vocal minority that would prefer to see > "textbook" bools that don't support arithmetic operations at > all, but most reviewers agree with me that bools should always > allow arithmetic operations. --- > 6) Should `bool` inherit from `int`? > > => Yes. > > In an ideal world, `bool` might be better implemented as a > separate integer type that knows how to perform mixed-mode > arithmetic. However, **inheriting `bool` from `int` eases the > implementation enormously(in part since all C code that calls > `PyInt_Check()` will continue to work -- this returns true for > subclasses of `int`).** Also, I believe this is right in terms of > substitutability: code that requires an `int` can be fed a `bool` > and it will behave the same as `0` or `1`. Code that requires a > `bool` may not work when it is given an `int`; for example, 3 & 4 > is 0, but both 3 and 4 are true when considered as truth > values. 
--- > Because `bool` inherits from `int`, `True+1` is valid and equals `2`, and > so on. **This is important for backwards compatibility**: because > comparisons and so on currently return integer values, there's no > way of telling what uses existing applications make of these > values. --- > **Because of backwards compatibility**, the bool type lacks many > properties that some would like to see. For example, arithmetic > operations with one or two bool arguments is allowed, treating > False as 0 and True as 1. Also, a bool may be used as a sequence > index. > > **I don't see this as a problem, and I don't want to evolve the > language in this direction either. I don't believe that a > stricter interpretation of "Booleanness" makes the language any > clearer.** --- **Summary**: * Backwards compatibility: there was plenty of code that already used `int`s `0` and `1` to represent `False` and `True` and some of it used those values in numerical computations. * It wasn't seen as a big deal to have a "non-textbook" `bool` type * Plenty of people in the Python community wanted these features * BDFL said so.
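The consistency described above is easy to check, and it also underpins a couple of genuinely useful idioms (a small sketch; whether the indexing idiom is good style is a matter of taste):

```python
# bool is a subclass of int, so True/False behave exactly like 1/0
assert isinstance(True, int)
assert True + 1 == 2
assert True == 1 and False == 0

# A legitimate use: counting how many items satisfy a predicate
data = [3, -1, 4, -1, 5]
positives = sum(x > 0 for x in data)  # each True counts as 1, each False as 0
assert positives == 3

# The indexing behaviour from the question, used as a terse conditional
label = ["even", "odd"][7 % 2]  # 7 % 2 == 1, so this picks "odd"
assert label == "odd"
```
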
Dividing two columns of a different DataFrames
38,128,014
2
2016-06-30T15:51:49Z
38,130,693
13
2016-06-30T18:26:40Z
[ "python", "apache-spark", "pyspark", "apache-spark-sql" ]
I am using Spark to do exploratory data analysis on a user log file. One of the analyses I am doing is the average number of requests per host on a daily basis. So in order to figure out the average, I need to divide the total-requests column of one DataFrame by the unique-hosts count column of the other DataFrame. ``` total_req_per_day_df = logs_df.select('host',dayofmonth('time').alias('day')).groupby('day').count() avg_daily_req_per_host_df = total_req_per_day_df.select("day",(total_req_per_day_df["count"] / daily_hosts_df["count"]).alias("count")) ``` This is what I have written using PySpark to determine the average. And here is the error log that I get ``` AnalysisException: u'resolved attribute(s) count#1993L missing from day#3628,count#3629L in operator !Project [day#3628,(cast(count#3629L as double) / cast(count#1993L as double)) AS count#3630]; ``` Note: daily\_hosts\_df and logs\_df are cached in memory. How do you divide the count column of both data frames?
It is not possible to reference a column from another table. If you want to combine the data, you'll have to `join` first, using something similar to this: ``` from pyspark.sql.functions import col (total_req_per_day_df.alias("total") .join(daily_hosts_df.alias("host"), ["day"]) .select(col("day"), (col("total.count") / col("host.count")).alias("count"))) ```
Groupby in python pandas: Fast Way
38,143,717
6
2016-07-01T11:00:16Z
38,145,104
7
2016-07-01T12:10:26Z
[ "python", "numpy", "pandas", "group" ]
I want to improve the time of a `groupby` in python pandas. I have this code: ``` df["Nbcontrats"] = df.groupby(['Client', 'Month'])['Contrat'].transform(len) ``` The objective is to count how many contracts a client has in a month and add this information in a new column (`Nbcontrats`). * `Client`: client code * `Month`: month of data extraction * `Contrat`: contract number I want to improve the time. Below I am only working with a subset of my real data: ``` %timeit df["Nbcontrats"] = df.groupby(['Client', 'Month'])['Contrat'].transform(len) 1 loops, best of 3: 391 ms per loop df.shape Out[309]: (7464, 61) ``` How can I improve the execution time?
Here's one way to proceed: * Slice out the relevant columns (`['Client', 'Month']`) from the input dataframe into a NumPy array. This is mostly a performance-focused idea, as we would be using NumPy functions later on, which are optimized to work with NumPy arrays. * Convert the two columns' data from `['Client', 'Month']` into a single `1D` array, which would be a linear index equivalent of it, considering elements from the two columns as pairs. Thus, we can assume that the elements from `'Client'` represent the row indices, whereas `'Month'` elements are the column indices. This is like going from `2D` to `1D`. But, the issue would be deciding the shape of the 2D grid to perform such a mapping. To cover all pairs, one safe assumption would be assuming a 2D grid whose dimensions are one more than the max along each column, because of 0-based indexing in Python. Thus, we would get linear indices. * Next up, we tag each linear index based on their uniqueness among others. I think this would correspond to the keys obtained with `groupby` instead. We also need to get counts of each group/unique key along the entire length of that 1D array. Finally, indexing into the counts with those tags should map for each element the respective counts. That's the whole idea about it! Here's the implementation - ``` # Save relevant columns as a NumPy array for performing NumPy operations afterwards arr_slice = df[['Client', 'Month']].values # Get linear indices equivalent of those columns lidx = np.ravel_multi_index(arr_slice.T,arr_slice.max(0)+1) # Get unique IDs corresponding to each linear index (i.e.
group) and grouped counts unq,unqtags,counts = np.unique(lidx,return_inverse=True,return_counts=True) # Index counts with the unique tags to map across all elements with the counts df["Nbcontrats"] = counts[unqtags] ``` **Runtime test** 1) Define functions : ``` def original_app(df): df["Nbcontrats"] = df.groupby(['Client', 'Month'])['Contrat'].transform(len) def vectorized_app(df): arr_slice = df[['Client', 'Month']].values lidx = np.ravel_multi_index(arr_slice.T,arr_slice.max(0)+1) unq,unqtags,counts = np.unique(lidx,return_inverse=True,return_counts=True) df["Nbcontrats"] = counts[unqtags] ``` 2) Verify results : ``` In [143]: # Let's create a dataframe with 100 unique IDs and of length 10000 ...: arr = np.random.randint(0,100,(10000,3)) ...: df = pd.DataFrame(arr,columns=['Client','Month','Contrat']) ...: df1 = df.copy() ...: ...: # Run the function on the inputs ...: original_app(df) ...: vectorized_app(df1) ...: In [144]: np.allclose(df["Nbcontrats"],df1["Nbcontrats"]) Out[144]: True ``` 3) Finally time them : ``` In [145]: # Let's create a dataframe with 100 unique IDs and of length 10000 ...: arr = np.random.randint(0,100,(10000,3)) ...: df = pd.DataFrame(arr,columns=['Client','Month','Contrat']) ...: df1 = df.copy() ...: In [146]: %timeit original_app(df) 1 loops, best of 3: 645 ms per loop In [147]: %timeit vectorized_app(df1) 100 loops, best of 3: 2.62 ms per loop ```
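To make the mechanics of the linear-index trick easy to verify by hand, here is a minimal self-contained sketch (with made-up data), checked against a brute-force row count:

```python
import numpy as np

# Tiny stand-in for the ['Client', 'Month'] slice: 5 rows, 2 columns
arr_slice = np.array([[0, 1],
                      [2, 0],
                      [0, 1],
                      [2, 0],
                      [1, 1]])

# Map each (Client, Month) pair to a single linear index on a grid
# whose dimensions are one more than the per-column maxima
lidx = np.ravel_multi_index(arr_slice.T, arr_slice.max(0) + 1)

# Tag each linear index by uniqueness and count the group sizes
unq, unqtags, counts = np.unique(lidx, return_inverse=True, return_counts=True)
group_sizes = counts[unqtags]

# Brute-force check: count identical rows the slow way
expected = [int((arr_slice == row).all(1).sum()) for row in arr_slice]
assert group_sizes.tolist() == expected  # [2, 2, 2, 2, 1]
```
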
How to work with surrogate pairs in Python?
38,147,259
5
2016-07-01T13:55:31Z
38,147,966
10
2016-07-01T14:28:45Z
[ "python", "python-3.x", "unicode", "surrogate-pairs" ]
This is a follow-up to [Converting to Emoji](http://stackoverflow.com/questions/38106422/converting-to-emoji). In that question, the OP had a `json.dumps()`-encoded file with an emoji represented as a surrogate pair - `\ud83d\ude4f`. S/he was having problems reading the file and translating the emoji correctly, and the correct [answer](http://stackoverflow.com/a/38145581/1426065) was to `json.loads()` each line from the file, and the `json` module would handle the conversion from surrogate pair back to (I'm assuming UTF8-encoded) emoji. So here is my situation: say I have just a regular Python 3 unicode string with a surrogate pair in it: ``` emoji = "This is \ud83d\ude4f, an emoji." ``` How do I process this string to get a representation of the [emoji](http://apps.timwhitlock.info/unicode/inspect?s=%F0%9F%99%8F) out of it? I'm looking to get something like this: ``` "This is 🙏, an emoji." # or "This is \U0001f64f, an emoji." ``` I've tried: ``` print(emoji) print(emoji.encode("utf-8")) # also tried "ascii", "utf-16", and "utf-16-le" json.loads(emoji) # and `.encode()` with various codecs ``` Generally I get an error similar to `UnicodeEncodeError: XXX codec can't encode character '\ud83d' in position 8: surrogates not allowed`. I'm running Python 3.5.1 on Linux, with `$LANG` set to `en_US.UTF-8`. I've run these samples both in the Python interpreter on the command line, and within IPython running in Sublime Text - there don't appear to be any differences.
You've mixed up a literal string `\ud83d` in a json file on disk (six characters: `\ u d 8 3 d`) and a *single* character `u'\ud83d'` (specified using a string literal in Python source code) in memory. It is the difference between `len(r'\ud83d') == 6` and `len('\ud83d') == 1` on Python 3. If you see the `'\ud83d\ude4f'` Python string (**2** characters), then there is a bug upstream. Normally, you shouldn't get such a string. If you get one and you can't fix the upstream code that generates it, you can fix it using the `surrogatepass` error handler: ``` >>> "\ud83d\ude4f".encode('utf-16', 'surrogatepass').decode('utf-16') '🙏' ``` [Python 2 was more permissive](http://bugs.python.org/issue26260). Note: even if your json file contains the literal \ud83d\ude4f (**12** characters), you shouldn't get the surrogate pair: ``` >>> print(ascii(json.loads(r'"\ud83d\ude4f"'))) '\U0001f64f' ``` Notice: the result is **1** character (`'\U0001f64f'`), not the surrogate pair (`'\ud83d\ude4f'`).
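Both points can be checked in a few lines (a sketch using the string from the question):

```python
import json

# Repairing a string that already contains a lone surrogate pair (2 chars)
broken = "This is \ud83d\ude4f, an emoji."
fixed = broken.encode("utf-16", "surrogatepass").decode("utf-16")
assert fixed == "This is \U0001f64f, an emoji."   # one code point, not a pair

# json.loads never produces the pair in the first place: the 12-character
# escape sequence decodes straight to a single code point
s = json.loads(r'"\ud83d\ude4f"')
assert len(s) == 1
assert s == "\U0001f64f"
```
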
Mixing datetime.strptime() arguments
38,147,923
8
2016-07-01T14:26:51Z
38,215,307
17
2016-07-06T01:51:30Z
[ "python", "datetime", "pycharm", "pylint", "static-code-analysis" ]
It is quite a common mistake to mix up the [`datetime.strptime()`](https://docs.python.org/2/library/datetime.html#datetime.datetime.strptime) format string and date string arguments using: ``` datetime.strptime("%B %d, %Y", "January 8, 2014") ``` instead of the other way around: ``` datetime.strptime("January 8, 2014", "%B %d, %Y") ``` Of course, it would fail during the runtime: ``` >>> datetime.strptime("%B %d, %Y", "January 8, 2014") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/_strptime.py", line 325, in _strptime (data_string, format)) ValueError: time data '%B %d, %Y' does not match format 'January 8, 2014' ``` But, is it possible to catch this problem *statically* even before actually running the code? Is it something `pylint` or `flake8` can help with? --- I've tried the PyCharm code inspection, but both snippets don't issue any warnings. Probably, because both arguments have the same type - they both are strings which makes the problem more difficult. We would have to actually analyze if a string is a datetime format string or not. Also, the [Language Injections](https://www.jetbrains.com/help/idea/2016.1/using-language-injections.html) PyCharm/IDEA feature looks relevant.
I claim that this cannot be checked statically *in the general case*. Consider the following snippet: ``` d = datetime.strptime(read_date_from_network(), read_format_from_file()) ``` This code may be completely valid, where both `read_date_from_network` and `read_format_from_file` really do return strings of the proper format -- or they may be total garbage, both returning None or some crap. Regardless, that information can *only* be determined at runtime -- hence, a static checker is powerless. --- What's more, given the current definition of datetime.strptime, even if we *were* using a statically typed language, we wouldn't be able to catch this error (except in very specific cases) -- the reason being that *the signature of this function doomed us from the start*: ``` classmethod datetime.strptime(date_string, format) ``` in this definition, `date_string` and `format` are both *strings*, even though they actually have special meaning. Even if we had something analogous in a statically typed language like this: ``` public DateTime strpTime(String dateString, String format) ``` The compiler (and linter and everyone else) still only sees: ``` public DateTime strpTime(String, String) ``` Which means that none of the following are distinguishable from each other: ``` strpTime("%B %d, %Y", "January 8, 2014") // strpTime(String, String) CHECK strpTime("January 8, 2014", "%B %d, %Y") // strpTime(String, String) CHECK strpTime("cat", "bat") // strpTime(String, String) CHECK ``` --- This isn't to say that it can't be done at all -- there do exist some linters for statically typed languages such as Java/C++/etc. that will inspect string literals when you pass them to some specific functions (like printf, etc.), but this can only be done when you're calling that function directly with a literal format string. The same linters become just as helpless in the first case that I presented, because it's simply not yet known if the strings will be the right format. i.e. 
A linter may be able to warn about this: ``` // Linter regex-es the first argument, sees %B et. al., warns you strpTime("%B %d, %Y", "January 8, 2014") ``` but it would not be able to warn about this: ``` strpTime(scanner.readLine(), scanner.readLine()) ``` Now, the same could be engineered into a python linter, but I don't believe that it would be very useful because functions are first-class, so I could easily defeat the (hypothetical python) linter by writing: ``` f = datetime.strptime d = f("January 8, 2014", "%B %d, %Y") ``` And then we're pretty much hosed again. --- Bonus: What Went Wrong The problem here is that the `datetime.strptime` gives implicit meaning to each of these strings, but it doesn't surface that information to the type system. What could have been done was to give the two strings differing types -- then there could have been more safety, albeit at the expense of some ease-of-use. e.g (using PEP 484 type annotations, [a real thing!](https://www.python.org/dev/peps/pep-0484/)): ``` class DateString(str): pass class FormatString(str): pass class datetime(date): ... def strptime(date_string: DateString, format: FormatString) -> datetime: # etc. etc. ``` Then it would start to be feasible to provide good linting in the general case -- though the DateString and FormatString classes would need to take care of validating their input, because again, the type system can't do anything at that level. --- Afterword: I think the best way to deal with this is to avoid the problem by using the [`strftime`](https://docs.python.org/2/library/datetime.html#datetime.datetime.strftime) method, which is bound to a specific datetime object and takes just a format string argument. That circumvents the entire problem by giving us a function signature that doesn't cut us when we hug it. Yay.
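The Afterword's `strftime` suggestion is easy to demonstrate (a sketch; the exact rendered month name depends on the locale, so the check below only asserts the round-trip):

```python
from datetime import datetime

fmt = "%B %d, %Y"
d = datetime(2014, 1, 8)

# strftime is bound to a datetime instance and takes only the format string,
# so the two-string argument mix-up cannot happen here
s = d.strftime(fmt)  # e.g. "January 08, 2014" in an English locale
assert datetime.strptime(s, fmt) == d  # round-trips regardless of locale

# The swapped-argument bug from the question still only surfaces at runtime
try:
    datetime.strptime(fmt, s)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for swapped arguments")
```
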
How do I reliably split a string in Python?
38,149,470
69
2016-07-01T15:50:23Z
38,149,500
100
2016-07-01T15:52:02Z
[ "python", "string", "python-3.x", "split" ]
In Perl I can do: ``` my ($x, $y) = split /:/, $str; ``` And it will work whether or not the string contains the pattern. In Python, however this won't work: ``` a, b = "foo".split(":") # ValueError: not enough values to unpack ``` What's the canonical way to prevent errors in such cases?
If you're splitting into just two parts (like in your example) you can use [`str.partition()`](https://docs.python.org/3.5/library/stdtypes.html#str.partition) to get a guaranteed argument unpacking size of 3: ``` >>> a, sep, b = "foo".partition(":") >>> a, sep, b ('foo', '', '') ``` `str.partition()` always returns a 3-tuple, whether the separator is found or not. Another alternative for Python 3 is to use extended unpacking, as described in [@cdarke's answer](http://stackoverflow.com/a/38149677/244297): ``` >>> a, *b = "foo".split(":") >>> a, b ('foo', []) ``` This assigns the first split item to `a` and the list of remaining items (if any) to `b`.
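Two further `partition` behaviours are worth knowing when the string may contain more or fewer separators than expected:

```python
text = "key:value:extra"

# partition splits on the FIRST occurrence only, always returning 3 parts
assert text.partition(":") == ("key", ":", "value:extra")

# rpartition splits on the LAST occurrence instead
assert text.rpartition(":") == ("key:value", ":", "extra")

# With no separator present, the whole string lands in the first slot
# (or in the last slot for rpartition) and the other two are empty
assert "foo".partition(":") == ("foo", "", "")
assert "foo".rpartition(":") == ("", "", "foo")
```
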
How do I reliably split a string in Python?
38,149,470
69
2016-07-01T15:50:23Z
38,149,677
56
2016-07-01T16:01:42Z
[ "python", "string", "python-3.x", "split" ]
In Perl I can do: ``` my ($x, $y) = split /:/, $str; ``` And it will work whether or not the string contains the pattern. In Python, however this won't work: ``` a, b = "foo".split(":") # ValueError: not enough values to unpack ``` What's the canonical way to prevent errors in such cases?
Since you are on Python 3, it is easy. PEP 3132 introduced a welcome simplification of the syntax when assigning to tuples - *Extended iterable unpacking*. In the past, if assigning to variables in a tuple, the number of items on the left of the assignment must be exactly equal to that on the right. In Python 3 we can designate any variable on the left as a list by prefixing with an asterisk \*. That will grab as many values as it can, while still populating the variables to its right (so it need not be the rightmost item). This avoids many nasty slices when we don't know the length of a tuple. ``` a, *b = "foo".split(":") print("a:", a, "b:", b) ``` Gives: ``` a: foo b: [] ``` EDIT following comments and discussion: In comparison to the Perl version, this is considerably different, but it is the Python (3) way. In comparison with the Perl version, `re.split()` would be more similar, however invoking the RE engine for splitting around a single character is an unnecessary overhead. With multiple elements in Python: ``` s = 'hello:world:sailor' a, *b = s.split(":") print("a:", a, "b:", b) ``` gives: ``` a: hello b: ['world', 'sailor'] ``` However in Perl: ``` my $s = 'hello:world:sailor'; my ($a, $b) = split /:/, $s; print "a: $a b: $b\n"; ``` gives: ``` a: hello b: world ``` It can be seen that additional elements are ignored, or lost, in Perl. That is fairly easy to replicate in Python if required: ``` s = 'hello:world:sailor' a, *b = s.split(":") b = b[0] print("a:", a, "b:", b) ``` So, `a, *b = s.split(":")` equivalent in Perl would be ``` my ($a, @b) = split /:/, $s; ``` NB: we shouldn't use `$a` and `$b` in general Perl since they have a special meaning when used with `sort`. I have used them here for consistency with the Python example. 
Python does have an extra trick up its sleeve, we can unpack to any element in the tuple on the left: ``` s = "one:two:three:four" a, *b, c = s.split(':') print("a:", a, "b:", b, "c:", c) ``` Gives: ``` a: one b: ['two', 'three'] c: four ``` Whereas in the Perl equivalent, the array (`@b`) is greedy, and the scalar `$c` is `undef`: ``` use strict; use warnings; my $s = 'one:two:three:four'; my ($a, @b, $c) = split /:/, $s; print "a: $a b: @b c: $c\n"; ``` Gives: ``` Use of uninitialized value $c in concatenation (.) or string at gash.pl line 8. a: one b: two three four c: ```
How do I reliably split a string in Python?
38,149,470
69
2016-07-01T15:50:23Z
38,149,856
15
2016-07-01T16:12:12Z
[ "python", "string", "python-3.x", "split" ]
In Perl I can do: ``` my ($x, $y) = split /:/, $str; ``` And it will work whether or not the string contains the pattern. In Python, however this won't work: ``` a, b = "foo".split(":") # ValueError: not enough values to unpack ``` What's the canonical way to prevent errors in such cases?
`split` always returns a list, and `a, b = ...` expects that list to have exactly two elements. You can use something like `l = string.split(':'); a = l[0]; ...`. Here is a one-liner that pads the result with `None` so the unpacking always succeeds: `a, b = (string.split(':') + [None]*2)[:2]`
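Wrapped as a small helper (a sketch; `split2` is just a made-up name), the padding one-liner behaves like this:

```python
def split2(s, sep=":"):
    # Pad with None so unpacking into two names always succeeds
    return (s.split(sep) + [None] * 2)[:2]

a, b = split2("foo")
assert (a, b) == ("foo", None)

a, b = split2("foo:bar")
assert (a, b) == ("foo", "bar")

# Note: fields beyond the first two are silently dropped
a, b = split2("foo:bar:baz")
assert (a, b) == ("foo", "bar")
```
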
How do I reliably split a string in Python?
38,149,470
69
2016-07-01T15:50:23Z
38,150,707
21
2016-07-01T17:09:00Z
[ "python", "string", "python-3.x", "split" ]
In Perl I can do: ``` my ($x, $y) = split /:/, $str; ``` And it will work whether or not the string contains the pattern. In Python, however this won't work: ``` a, b = "foo".split(":") # ValueError: not enough values to unpack ``` What's the canonical way to prevent errors in such cases?
You are always free to catch the exception. For example: ``` some_string = "foo" try: a, b = some_string.split(":") except ValueError: a = some_string b = "" ``` If assigning the whole original string to `a` and an empty string to `b` is the desired behaviour, I would probably use `str.partition()` as eugene y suggests. However, this solution gives you more control over exactly what happens when there is no separator in the string, which might be useful in some cases.
What is a good explanation of how to read the histogram feature of TensorBoard?
38,149,622
13
2016-07-01T15:58:16Z
38,173,831
9
2016-07-03T19:58:09Z
[ "python", "machine-learning", "statistics", "tensorflow" ]
Question is simple, **how do you read those graphs**? I read their explanation and it doesn't make sense to me. I was reading TensorFlow's [newly updated readme file for TensorBoard](https://github.com/tensorflow/tensorflow/blob/r0.9/tensorflow/tensorboard/README.md) and in it it tries to explain what a "histogram" is. First it clarifies that its not really a histogram: > Right now, its name is a bit of a misnomer, as it doesn't show > histograms; instead, it shows some high-level statistics on a > distribution. I am trying to figure out what their description is actually trying to say. Right now I am trying to parse the specific sentence: > Each line on the chart represents a percentile in the distribution > over the data: for example, the bottom line shows how the minimum > value has changed over time, and the line in the middle shows how the > median has changed. The first question I have is, what do they mean by "each line". There are horizontal axis and there are lines that make a square grid on the graph or maybe the plotted lines, themselves. Consider a screen shot from [the TensorBoard example](https://www.tensorflow.org/tensorboard/index.html#events): [![enter image description here](http://i.stack.imgur.com/u7zsf.png)](http://i.stack.imgur.com/u7zsf.png) What are they referring to with "lines"? In the above example what are the lines and percentiles that they are talking about? Then the readme file tries to provide more detail with an example: > Reading from top to bottom, the lines have the following meaning: > [maximum, 93%, 84%, 69%, 50%, 31%, 16%, 7%, minimum] However, its unclear to me what they are talking about. What is lines and what percentiles? It seems that they are trying to replace this in the future, but meanwhile, I am stuck with this. Can someone help me understand how to use this?
The lines that they are talking about are described below: [![enter image description here](http://i.stack.imgur.com/ENhlR.png)](http://i.stack.imgur.com/ENhlR.png) As for the meaning of "percentile", check out the [Wikipedia article](https://en.wikipedia.org/wiki/Percentile). Basically, the 93rd percentile line means that 93% of the values lie below it.
Digit separators in Python code
38,155,177
12
2016-07-01T23:58:08Z
38,155,210
10
2016-07-02T00:03:10Z
[ "python", "literals" ]
Is there any way to group digits in Python code to increase legibility? I've tried `'` and `_`, which are [*digit separators*](https://en.wikipedia.org/wiki/Integer_literal#Digit_separators) in some other languages, but to no avail. A weird operator which concatenates its left-hand side with its right-hand side could also work out.
This is not implemented in python at the present time. You can look at the lexical analysis for strict definitions [python2.7](https://docs.python.org/2/reference/lexical_analysis.html#numeric-literals), [python3.5](https://docs.python.org/3.5/reference/lexical_analysis.html#numeric-literals) ... Supposedly it [will be implemented for python3.6](https://www.python.org/dev/peps/pep-0515/#literal-grammar), but it doesn't look like the [documentation](https://docs.python.org/3.6/reference/lexical_analysis.html#grammar-token-digit) has been updated for that yet, nor is it available in python3.6.0a2: ``` Python 3.6.0a2 (v3.6.0a2:378893423552, Jun 13 2016, 14:44:21) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> 1_000 File "<stdin>", line 1 1_000 ^ SyntaxError: invalid syntax >>> amount = 10_000_000.0 File "<stdin>", line 1 amount = 10_000_000.0 ^ SyntaxError: invalid syntax ``` When it is implemented, you'll be able to use `_` in your integer and float literals...
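For reference, on Python 3.6 and later, where [PEP 515](https://www.python.org/dev/peps/pep-0515/) is implemented, the underscore syntax works as described:

```python
# Requires Python 3.6+ (PEP 515): underscores in numeric literals
assert 1_000_000 == 1000000
assert 10_000_000.0 == 10000000.0
assert 0xFF_FF == 0xFFFF

# The int() constructor accepts underscores too
assert int("1_000") == 1000

# And the `_` format option can emit grouped output
assert format(1000000, "_") == "1_000_000"
```
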
How to know when to use numpy.linalg instead of scipy.linalg?
38,168,016
6
2016-07-03T08:15:01Z
38,168,215
8
2016-07-03T08:41:20Z
[ "python", "arrays", "numpy", "scipy" ]
Received wisdom is to prefer `scipy.linalg` over `numpy.linalg` functions. For doing linear algebra, ideally (and conveniently) I would like to combine the functionalities of `numpy.array` and `scipy.linalg` without ever looking towards `numpy.linalg`. This is not always possible and may become too frustrating. Is there a comparative checklist of equivalent functions from these two modules to quickly determine when to use `numpy.linalg` in case a function is absent in `scipy.linalg`? e.g. There are `scipy.linalg.norm()` and `numpy.linalg.norm()`, but there seem to be no scipy equivalents of `numpy.linalg.matrix_rank()` and `numpy.linalg.cond()`.
So, the normal rule is to just use `scipy.linalg` as it generally supports all of the `numpy.linalg` functionality and more. The [documentation](https://docs.scipy.org/doc/scipy/reference/linalg.html) says this: > **See also** > > `numpy.linalg` for more linear algebra functions. Note that although `scipy.linalg` imports most of them, identically named functions from `scipy.linalg` may offer more or slightly differing functionality. However, [`matrix_rank()`](https://docs.scipy.org/doc/numpy-1.10.4/reference/generated/numpy.linalg.matrix_rank.html) is only in NumPy. Here we can see the differences between the functions provided by both libraries, and how SciPy is more complete: ``` In [2]: from scipy import linalg as scipy_linalg In [3]: from numpy import linalg as numpy_linalg In [4]: dir(scipy_linalg) Out[4]: [ ... 'absolute_import', 'basic', 'bench', 'blas', 'block_diag', 'cho_factor', 'cho_solve', 'cho_solve_banded', 'cholesky', 'cholesky_banded', 'circulant', 'companion', 'coshm', 'cosm', 'cython_blas', 'cython_lapack', 'decomp', 'decomp_cholesky', 'decomp_lu', 'decomp_qr', 'decomp_schur', 'decomp_svd', 'det', 'dft', 'diagsvd', 'division', 'eig', 'eig_banded', 'eigh', 'eigvals', 'eigvals_banded', 'eigvalsh', 'expm', 'expm2', 'expm3', 'expm_cond', 'expm_frechet', 'find_best_blas_type', 'flinalg', 'fractional_matrix_power', 'funm', 'get_blas_funcs', 'get_lapack_funcs', 'hadamard', 'hankel', 'helmert', 'hessenberg', 'hilbert', 'inv', 'invhilbert', 'invpascal', 'kron', 'lapack', 'leslie', 'linalg_version', 'logm', 'lstsq', 'lu', 'lu_factor', 'lu_solve', 'matfuncs', 'misc', 'norm', 'ordqz', 'orth', 'orthogonal_procrustes', 'pascal', 'pinv', 'pinv2', 'pinvh', 'polar', 'print_function', 'qr', 'qr_delete', 'qr_insert', 'qr_multiply', 'qr_update', 'qz', 'rq', 'rsf2csf', 's', 'schur', 'signm', 'sinhm', 'sinm', 'solve', 'solve_banded', 'solve_circulant', 'solve_continuous_are', 'solve_discrete_are', 'solve_discrete_lyapunov', 'solve_lyapunov', 'solve_sylvester', 
'solve_toeplitz', 'solve_triangular', 'solveh_banded', 'special_matrices', 'sqrtm', 'svd', 'svdvals', 'tanhm', 'tanm', 'test', 'toeplitz', 'tri', 'tril', 'triu'] In [5]: dir(numpy_linalg) Out[5]: [ ... 'absolute_import', 'bench', 'cholesky', 'cond', 'det', 'division', 'eig', 'eigh', 'eigvals', 'eigvalsh', 'info', 'inv', 'lapack_lite', 'linalg', 'lstsq', 'matrix_power', 'matrix_rank', 'multi_dot', 'norm', 'pinv', 'print_function', 'qr', 'slogdet', 'solve', 'svd', 'tensorinv', 'tensorsolve', 'test'] In [6]: ``` Note that not all of these are functions. SciPy does provide [`scipy.linalg.expm_cond()`](https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.linalg.expm_cond.html), but this only returns the condition in the Frobenius norm, whereas [`numpy.linalg.cond()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.cond.html) supports multiple norms.
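For the two functions the question singles out as NumPy-only, a quick sketch of what they do:

```python
import numpy as np

# matrix_rank and cond live only in numpy.linalg, not scipy.linalg
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])  # rank-deficient diagonal matrix

assert np.linalg.matrix_rank(A) == 2

# cond supports several norms via its second argument
I = np.eye(3)
assert np.isclose(np.linalg.cond(I), 1.0)          # 2-norm (default)
assert np.isclose(np.linalg.cond(I, np.inf), 1.0)  # infinity norm
```
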
What is faster, `if x` or `if x != 0`?
38,172,499
3
2016-07-03T17:25:06Z
38,172,713
13
2016-07-03T17:48:55Z
[ "python" ]
I was wondering, which code runs faster? For example, we have variable x: ``` if x != 0: return ``` or ``` if x: return ``` I tried to check with timeit, and here are the results: ``` >>> def a(): ... if 0 == 0: return ... >>> def b(): ... if 0: return ... >>> timeit(a) 0.18059834650234943 >>> timeit(b) 0.13115053638194007 >>> ``` I can't quite understand it.
This is too hard to show in a comment: there's more (or less ;-) ) going on here than any of the comments so far noted. With `a()` and `b()` defined as you showed, let's go on: ``` >>> from dis import dis >>> dis(b) 2 0 LOAD_CONST 0 (None) 3 RETURN_VALUE ``` What happens is that when the CPython compiler sees `if 0:` or `if 1:`, it evaluates them *at compile time*, and doesn't generate any code to do the testing at run time. So the code for `b()` just loads `None` and returns it. But the code generated for `a()` is much more involved: ``` >>> dis(a) 2 0 LOAD_CONST 1 (0) 3 LOAD_CONST 1 (0) 6 COMPARE_OP 2 (==) 9 POP_JUMP_IF_FALSE 16 12 LOAD_CONST 0 (None) 15 RETURN_VALUE >> 16 LOAD_CONST 0 (None) 19 RETURN_VALUE ``` Nothing is evaluated at compile time in this case - it's all done at run time. That's why `a()` is much slower. Beyond that, I endorse @Charles Duffy's comment: worrying about micro-optimization is usually counterproductive in Python. But, if you must ;-) , learn how to use `dis.dis` so you're not fooled by *gross* differences in generated code, as happened in this specific case.
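The same inspection with a real variable (so nothing can be folded at compile time) shows why `if x:` is still marginally cheaper. This is a sketch; the exact opcode names vary between CPython versions, but the explicit comparison only appears in the `!= 0` variant:

```python
import dis

def c(x):
    if x != 0: return

def d(x):
    if x: return

ops_c = [i.opname for i in dis.get_instructions(c)]
ops_d = [i.opname for i in dis.get_instructions(d)]

# Both versions test x at run time, but only `x != 0` needs a comparison opcode;
# `if x:` jumps directly on the truthiness of x
assert "COMPARE_OP" in ops_c
assert "COMPARE_OP" not in ops_d
```
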
graph.write_pdf("iris.pdf") AttributeError: 'list' object has no attribute 'write_pdf'
38,176,472
6
2016-07-04T03:13:34Z
38,177,223
12
2016-07-04T04:59:49Z
[ "python", "machine-learning", "graphviz", "pydot" ]
My code follows Google's machine learning class. The two pieces of code are the same, so I don't know why mine shows an error. Maybe the type of a variable is wrong, but Google's code is identical to mine. Has anyone ever had this problem? This is the error ``` [0 1 2] [0 1 2] Traceback (most recent call last): File "/media/joyce/oreo/python/machine_learn/VisualizingADecisionTree.py", line 34, in <module> graph.write_pdf("iris.pdf") AttributeError: 'list' object has no attribute 'write_pdf' [Finished in 0.4s with exit code 1] [shell_cmd: python -u "/media/joyce/oreo/python/machine_learn/VisualizingADecisionTree.py"] [dir: /media/joyce/oreo/python/machine_learn] [path: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games] ``` This is the code ``` import numpy as np from sklearn.datasets import load_iris from sklearn import tree iris = load_iris() test_idx = [0, 50, 100] # training data train_target = np.delete(iris.target, test_idx) train_data = np.delete(iris.data, test_idx, axis=0) # testing data test_target = iris.target[test_idx] test_data = iris.data[test_idx] clf = tree.DecisionTreeClassifier() clf.fit(train_data, train_target) print test_target print clf.predict(test_data) # viz code from sklearn.externals.six import StringIO import pydot dot_data = StringIO() tree.export_graphviz(clf, out_file=dot_data, feature_names=iris.feature_names, class_names=iris.target_names, filled=True, rounded=True, impurity=False) graph = pydot.graph_from_dot_data(dot_data.getvalue()) graph.write_pdf("iris.pdf") ```
I think you are using a newer version of `pydot`, not a newer version of Python: since pydot 1.2, `graph_from_dot_data` returns a *list* of graphs, which is exactly why you get `'list' object has no attribute 'write_pdf'`. One fix is to take the first element: `graph = pydot.graph_from_dot_data(dot_data.getvalue())[0]`. Alternatively, try pydotplus, which keeps the old single-graph return value: ``` import pydotplus ... graph = pydotplus.graph_from_dot_data(dot_data.getvalue()) graph.write_pdf("iris.pdf") ``` This should do it.
Simulate if-in statement in Java
38,182,463
4
2016-07-04T10:28:15Z
38,182,484
12
2016-07-04T10:29:21Z
[ "java", "python", "if-statement" ]
I've coded for several months in Python, and now I have to switch to Java for work-related reasons. My question is, is there a way to simulate this kind of statement ``` if var_name in list_name: # do something ``` without defining an additional `isIn()`-like boolean function that scans `list_name` in order to find `var_name`?
You're looking for [`List#contains`](https://docs.oracle.com/javase/7/docs/api/java/util/List.html#contains(java.lang.Object)) which is inherited from [`Collection#contains`](https://docs.oracle.com/javase/7/docs/api/java/util/Collection.html#contains(java.lang.Object)) (so you can use it with `Set` objects also) ``` if (listName.contains(varName)) { // doSomething } ``` --- ## `List#contains` > Returns true if this list contains the specified element. More > formally, returns true if and only if this list contains at least one > element e such that (o==null ? e==null : **o.equals(e)**). As you see, `List#contains` uses `equals` to return true or false. It is strongly recommended to `@Override` this method in the classes you're creating, along with `hashcode`. * [Why do I need to override the equals and hashcode methods in Java ?](http://stackoverflow.com/questions/2265503/why-do-i-need-to-override-the-equals-and-hashcode-methods-in-java)
Find unique pairs in list of pairs
38,187,286
12
2016-07-04T14:38:41Z
38,187,443
13
2016-07-04T14:48:59Z
[ "python", "list" ]
I have a (large) list of lists of integers, e.g., ``` a = [ [1, 2], [3, 6], [2, 1], [3, 5], [3, 6] ] ``` Most of the pairs will appear twice, where the order of the integers doesn't matter (i.e., `[1, 2]` is equivalent to `[2, 1]`). I'd now like to find the pairs that appear only *once*, and get a Boolean list indicating that. For the above example, ``` b = [False, False, False, True, False] ``` Since `a` is typically large, I'd like to avoid explicit loops. Mapping to `frozenset`s may be advised, but I'm not sure if that's overkill.
``` from collections import Counter ctr = Counter(frozenset(x) for x in a) b = [ctr[frozenset(x)] == 1 for x in a] ``` We can use `Counter` to count each pair (turning each list into a `frozenset` to ignore order) and then, for each pair, check whether it appears only once.
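To sanity-check this on the question's sample data, here is a self-contained run (a quick sketch; it just restates the approach above):

```python
from collections import Counter

a = [[1, 2], [3, 6], [2, 1], [3, 5], [3, 6]]

# Count each pair with order ignored by mapping it to a frozenset.
ctr = Counter(frozenset(x) for x in a)

# A pair is "unique" when its order-insensitive count is exactly 1.
b = [ctr[frozenset(x)] == 1 for x in a]
print(b)  # [False, False, False, True, False]
```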
Two variables in Python have same id, but not lists or tuples
38,189,660
26
2016-07-04T17:21:10Z
38,189,759
30
2016-07-04T17:30:08Z
[ "python", "python-3.x", "tuples", "identity", "python-internals" ]
Two variables in Python have the same `id`: ``` a = 10 b = 10 a is b >>> True ``` If I take two `list`s: ``` a = [1, 2, 3] b = [1, 2, 3] a is b >>> False ``` according to [this link](http://stackoverflow.com/questions/27460234/two-variables-with-the-same-list-have-different-ids-why-is-that?answertab=votes#tab-top) Senderle answered that immutable object references have the same id and mutable objects like lists have different ids. So now according to his answer, tuples should have the same ids - meaning: ``` a = (1, 2, 3) b = (1, 2, 3) a is b >>> False ``` Ideally, as tuples are not mutable, it should return `True`, but it is returning `False`! What is the explanation?
Immutable objects don't necessarily have the same `id`, and as a matter of fact this is not true for any type of object that you define separately. Every time you define an object in Python, you create a new object with a new identity. But there are some exceptions for small integers (between -5 and 256) and small strings (interned strings, usually shorter than 20 characters), which are singletons and share the same `id` (actually one object with multiple references). You can check this fact like the following: ``` >>> 30 is 20 + 10 True >>> >>> 300 is 200 + 100 False >>> 'aa' * 2 is 'a' * 4 True >>> 'aa' * 20 is 'a' * 40 False ``` And for a custom object: ``` >>> class A: ... pass ... >>> A() is A() # Every time you create an instance you get a new instance with a new identity False ``` Also note that the `is` operator checks the object's identity, not its value. If you want to compare values you should use `==`: ``` >>> 300 == 3*100 True ``` And since there is no such rule for tuples (or other types), if you define two equal tuples of any size they'll each get their own id: ``` >>> a = (1,) >>> b = (1,) >>> >>> a is b False ``` And note that the singleton behaviour of small integers and interned strings holds even when they appear inside mutable and immutable containers: ``` >>> a = (100, 700, 400) >>> >>> b = (100, 700, 400) >>> >>> a[0] is b[0] True >>> a[1] is b[1] False ```
Two variables in Python have same id, but not lists or tuples
38,189,660
26
2016-07-04T17:21:10Z
38,189,766
17
2016-07-04T17:30:50Z
[ "python", "python-3.x", "tuples", "identity", "python-internals" ]
Two variables in Python have the same `id`: ``` a = 10 b = 10 a is b >>> True ``` If I take two `list`s: ``` a = [1, 2, 3] b = [1, 2, 3] a is b >>> False ``` according to [this link](http://stackoverflow.com/questions/27460234/two-variables-with-the-same-list-have-different-ids-why-is-that?answertab=votes#tab-top) Senderle answered that immutable object references have the same id and mutable objects like lists have different ids. So now according to his answer, tuples should have the same ids - meaning: ``` a = (1, 2, 3) b = (1, 2, 3) a is b >>> False ``` Ideally, as tuples are not mutable, it should return `True`, but it is returning `False`! What is the explanation?
**Immutable `!=` same object.**\* [An immutable object is simply an object whose state cannot be altered;](https://docs.python.org/3.5/reference/datamodel.html#data-model) and that is all. *When a new object is created, **a new address will be assigned to it**.* As such, checking if the addresses are equal with `is` will return `False`. The fact that `1 is 1` or `"a" is "a"` returns `True` is due to [integer caching](http://stackoverflow.com/questions/306313/is-operator-behaves-unexpectedly-with-integers) and string [interning](http://stackoverflow.com/questions/15541404/python-string-interning) performed by Python, so do not let it confuse you; it is not related to the objects in question being mutable/immutable. --- \*Empty immutable objects [do refer to the same object](http://stackoverflow.com/questions/38328857/why-does-is-return-true-when-is-and-is-return-false/38328858#38328858) and their `is`ness does return true; this is a special, implementation-specific case, though.
Two variables in Python have same id, but not lists or tuples
38,189,660
26
2016-07-04T17:21:10Z
38,189,778
12
2016-07-04T17:31:49Z
[ "python", "python-3.x", "tuples", "identity", "python-internals" ]
Two variables in Python have the same `id`: ``` a = 10 b = 10 a is b >>> True ``` If I take two `list`s: ``` a = [1, 2, 3] b = [1, 2, 3] a is b >>> False ``` according to [this link](http://stackoverflow.com/questions/27460234/two-variables-with-the-same-list-have-different-ids-why-is-that?answertab=votes#tab-top) Senderle answered that immutable object references have the same id and mutable objects like lists have different ids. So now according to his answer, tuples should have the same ids - meaning: ``` a = (1, 2, 3) b = (1, 2, 3) a is b >>> False ``` Ideally, as tuples are not mutable, it should return `True`, but it is returning `False`! What is the explanation?
Take a look at this code: ``` >>> a = (1, 2, 3) >>> b = (1, 2, 3) >>> c = a >>> id(a) 178153080L >>> id(b) 178098040L >>> id(c) 178153080L ``` In order to figure out why `a is c` is evaluated as `True` whereas `a is b` yields `False` I strongly recommend you to run step-by-step the snippet above in the [Online Python Tutor](http://www.pythontutor.com/visualize.html#code=a+%3D+(1,+2,+3%29%0Ab+%3D+(1,+2,+3%29%0Ac+%3D+a&mode=display&origin=opt-frontend.js&cumulative=false&heapPrimitives=false&textReferences=false&py=2&rawInputLstJSON=%5B%5D&curInstr=0). The graphical representation of the objects in memory will provide you with a deeper insight into this issue (I'm attaching a screenshot). [![enter image description here](http://i.stack.imgur.com/mv8rO.png)](http://i.stack.imgur.com/mv8rO.png)
ValueError time data 'Fri Mar 11 15:59:57 EST 2016' does not match format '%a %b %d %H:%M:%S %Z %Y'
38,189,867
4
2016-07-04T17:38:42Z
38,189,982
7
2016-07-04T17:48:22Z
[ "python", "datetime", "strptime" ]
I am trying to simply create a datetime object from the following date: 'Fri Mar 11 15:59:57 EST 2016' using the format: '%a %b %d %H:%M:%S %Z %Y'. Here's the code. ``` from datetime import datetime date = datetime.strptime('Fri Mar 11 15:59:57 EST 2016', '%a %b %d %H:%M:%S %Z %Y') ``` However, this results in a ValueError as shown below. ``` ValueError: time data 'Fri Mar 11 15:59:57 EST 2016' does not match format '%a %b %d %H:%M:%S %Z %Y' ``` Maybe I am missing something obviously wrong with my format string, but I have checked it over and over. Any help would be greatly appreciated, thanks. Edit to reflect comments/questions for more information: The Python version I'm using is 2.7.6. Using the 'locale' command on Ubuntu 14.04 I get this: ``` $ locale LANG=en_US.UTF-8 LANGUAGE= LC_CTYPE="en_US.UTF-8" LC_NUMERIC="en_US.UTF-8" LC_TIME="en_US.UTF-8" LC_COLLATE="en_US.UTF-8" LC_MONETARY="en_US.UTF-8" LC_MESSAGES="en_US.UTF-8" LC_PAPER="en_US.UTF-8" LC_NAME="en_US.UTF-8" LC_ADDRESS="en_US.UTF-8" LC_TELEPHONE="en_US.UTF-8" LC_MEASUREMENT="en_US.UTF-8" LC_IDENTIFICATION="en_US.UTF-8" LC_ALL= ```
For the %Z specifier, `strptime` only recognises "UTC", "GMT" and whatever is in `time.tzname` (so whether it recognises "EST" will depend on the time zone of your computer). This is [issue 22377](http://bugs.python.org/issue22377). See: [Python strptime() and timezones?](http://stackoverflow.com/questions/3305413/python-strptime-and-timezones) and [Parsing date/time string with timezone abbreviated name in Python?](http://stackoverflow.com/questions/1703546/parsing-date-time-string-with-timezone-abbreviated-name-in-python) The best option for parsing dates that include a human-readable time zone is still to use the third-party [python-dateutil](https://labix.org/python-dateutil#head-c0e81a473b647dfa787dc11e8c69557ec2c3ecd2) library: ``` from dateutil import parser date = parser.parse('Fri Mar 11 15:59:57 EST 2016') ``` If you cannot install python-dateutil, you could strip out the timezone and parse it manually, e.g. using a dictionary lookup.
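If python-dateutil is not an option, a minimal sketch of the dictionary-lookup approach could look like this. The `TZINFOS` table below is a hypothetical example, not a complete mapping; abbreviations like "EST" are ambiguous worldwide, so this assumes the US meanings, and it uses Python 3's `datetime.timezone` for brevity:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical, deliberately tiny mapping -- extend with the zones you expect.
TZINFOS = {
    'EST': timezone(timedelta(hours=-5)),
    'EDT': timezone(timedelta(hours=-4)),
    'UTC': timezone.utc,
}

def parse_with_tz_abbrev(text):
    # 'Fri Mar 11 15:59:57 EST 2016' -> pull the abbreviation out by position,
    # parse the remainder as a naive datetime, then attach the looked-up tzinfo.
    parts = text.split()
    tz = TZINFOS[parts[4]]
    naive = datetime.strptime(' '.join(parts[:4] + parts[5:]),
                              '%a %b %d %H:%M:%S %Y')
    return naive.replace(tzinfo=tz)

dt = parse_with_tz_abbrev('Fri Mar 11 15:59:57 EST 2016')
print(dt)  # 2016-03-11 15:59:57-05:00
```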
Is there an one-line python code to replace this nested loop?
38,225,181
2
2016-07-06T13:23:54Z
38,225,216
9
2016-07-06T13:25:57Z
[ "python", "for-loop", "dictionary" ]
variables: ``` rs = { 'results': [ {'addresses': [{'State': 'NY'}, {'State': 'IL'}]}, {'addresses': [{'State': 'NJ'}, {'State': 'IL'}]} ] } ``` I want to get a list of states for each member of results. Currently I use the following code: ``` for y in rs['results']: for x in y['addresses']: phy_states.append(x['State']) ``` I want something like: ``` phy_states = [x['State'] for x in y['addresses'] for y in rs['results']] ``` But I don't know how. The one-line code above does not work because the local variable y is referenced before assignment.
You almost got it, you just got it the other way around: ``` phy_states = [x['State'] for y in rs['results'] for x in y['addresses']] ```
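Run against the question's sample data, the corrected comprehension gives the same result as the explicit nested loop (the `for` clauses read left to right, in the same order as the written-out loops):

```python
rs = {
    'results': [
        {'addresses': [{'State': 'NY'}, {'State': 'IL'}]},
        {'addresses': [{'State': 'NJ'}, {'State': 'IL'}]},
    ]
}

# Outer loop first, inner loop second -- mirroring the nested for statements.
phy_states = [x['State'] for y in rs['results'] for x in y['addresses']]
print(phy_states)  # ['NY', 'IL', 'NJ', 'IL']
```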
Django Rest framework: passing request object around makes it lose POST data in newer version
38,227,476
3
2016-07-06T15:10:13Z
38,339,312
8
2016-07-12T21:39:43Z
[ "python", "django", "django-rest-framework" ]
I'm upgrading a site from Django 1.4 to Django 1.9 I have a view passing control to another view like this: ``` @csrf_protect @api_view(['POST']) @authentication_classes((SessionAuthentication,)) def preview(request, project_id, channel_type, format=None): return build(request, project_id, channel_type, preview=True, format=format) @csrf_protect @api_view(['POST']) @authentication_classes((SessionAuthentication,)) def build(request, project_id, channel_type, preview=True, builder=None, build_after=True, format=None): pass ``` The problem (never occurred before) is that, when passed from `preview()` to `build()`, the request object loses its POST content. How to solve this?
You could just separate out the logic you're storing in the `build` view into a common function used by both endpoints without any decorators e.g. `_build` – this way whatever is happening within the decorators shouldn't occur in the case of the call within `preview`. ``` @csrf_protect @api_view(['POST']) @authentication_classes((SessionAuthentication,)) def preview(request, project_id, channel_type, format=None): return _build(request, project_id, channel_type, preview=True, format=format) @csrf_protect @api_view(['POST']) @authentication_classes((SessionAuthentication,)) def build(request, project_id, channel_type, preview=True, builder=None, build_after=True, format=None): return _build(request, project_id, channel_type, preview=preview, builder=builder, build_after=build_after, format=format) def _build(request, project_id, channel_type, preview=True, builder=None, build_after=True, format=None): pass ```
Understanding this line: list_of_tuples = [(x,y) for x, y, label in data_one]
38,229,059
10
2016-07-06T16:29:11Z
38,229,389
8
2016-07-06T16:47:46Z
[ "python", "numpy", "list-comprehension" ]
As you've already understood I'm a beginner and am trying to understand what the "Pythonic way" of writing this function is built on. I know that other threads might include a partial answer to this, but I don't know what to look for since I don't understand what is happening here. This line is a code that my friend sent me, to improve my code which is: ``` import numpy as np #load_data: def load_data(): data_one = np.load ('/Users/usr/... file_name.npy') list_of_tuples = [] for x, y, label in data_one: list_of_tuples.append( (x,y) ) return list_of_tuples print load_data() ``` The "improved" version: ``` import numpy as np #load_data: def load_data(): data_one = np.load ('/Users/usr.... file_name.npy') list_of_tuples = [(x,y) for x, y, label in data_one] return list_of_tuples print load_data() ``` I wonder: 1. What is happening here? 2. Is it a better or worse way? since it is "Pythonic" I assume it wouldn't work with other languages and so perhaps it's better to get used to the more general way?
Both ways are correct and work. You could probably relate the first way to the way things are done in C and other languages. That is, you basically run a for loop to go through all of the values and then append each one to your list of tuples. The second way is more pythonic but does the same. If you take a look at `[(x,y) for x, y, label in data_one]` (this is a list comprehension) you will see that you are also running a for loop on the same data, but your result will be `(x, y)` and all of those results will form a list. So it achieves the same thing. The third way (added in response to the comments) uses a slice. I've prepared a small example similar to yours: ``` data = [(1, 2, 3), (2, 3, 4), (4, 5, 6)] def load_data(): list_of_tuples = [] for x, y, label in data: list_of_tuples.append((x,y)) return list_of_tuples def load_data_2(): return [(x,y) for x, y, label in data] def load_data_3(): return [t[:2] for t in data] ``` They all do the same thing and return `[(1, 2), (2, 3), (4, 5)]`, but their runtime is different. This is why a list comprehension is a better way to do this. When I run the first method `load_data()` I get: ``` %%timeit load_data() 1000000 loops, best of 3: 1.36 µs per loop ``` When I run the second method `load_data_2()` I get: ``` %%timeit load_data_2() 1000000 loops, best of 3: 969 ns per loop ``` When I run the third method `load_data_3()` I get: ``` %%timeit load_data_3() 1000000 loops, best of 3: 981 ns per loop ``` The **second** way, list comprehension, is faster!
Understanding this line: list_of_tuples = [(x,y) for x, y, label in data_one]
38,229,059
10
2016-07-06T16:29:11Z
38,229,435
13
2016-07-06T16:50:54Z
[ "python", "numpy", "list-comprehension" ]
As you've already understood I'm a beginner and am trying to understand what the "Pythonic way" of writing this function is built on. I know that other threads might include a partial answer to this, but I don't know what to look for since I don't understand what is happening here. This line is a code that my friend sent me, to improve my code which is: ``` import numpy as np #load_data: def load_data(): data_one = np.load ('/Users/usr/... file_name.npy') list_of_tuples = [] for x, y, label in data_one: list_of_tuples.append( (x,y) ) return list_of_tuples print load_data() ``` The "improved" version: ``` import numpy as np #load_data: def load_data(): data_one = np.load ('/Users/usr.... file_name.npy') list_of_tuples = [(x,y) for x, y, label in data_one] return list_of_tuples print load_data() ``` I wonder: 1. What is happening here? 2. Is it a better or worse way? since it is "Pythonic" I assume it wouldn't work with other languages and so perhaps it's better to get used to the more general way?
``` list_of_tuples = [(x,y) for x, y, label in data_one] ``` `(x, y)` is a [`tuple`](http://www.tutorialspoint.com/python/python_tuples.htm) <-- linked tutorial. This is a list [comprehension](http://www.secnetix.de/olli/Python/list_comprehensions.hawk) ``` [(x,y) for x, y, label in data_one] # ^ ^ # | ^comprehension syntax^ | # begin list end list ``` `data_one` is an [`iterable`](http://stackoverflow.com/q/9884132/2336654) and is necessary for a list comprehension. Under the covers they are loops and must iterate over something. `x, y, label in data_one` tells me that I can "unpack" these three items from every element that is delivered by the `data_one` iterable. This is just like a local variable of a for loop, it changes upon each iteration. In total, this says: Make a list of tuples that look like `(x, y)` where I get `x, y, and label` from each item delivered by the iterable `data_one`. Put each `x` and `y` into a tuple inside a list called `list_of_tuples`. Yes I know I "unpacked" `label` and never used it, I don't care.
copying strings to the list in Python
38,231,261
3
2016-07-06T18:30:51Z
38,231,326
7
2016-07-06T18:34:09Z
[ "python", "string", "list", "copy", "append" ]
I have a list `data`, and I am reading lines from it. ``` item = user = channel = time = [] with open('train_likes.csv') as f: data = csv.reader(f) readlines = f.readlines() ``` I want to split each line into single string elements and append each element to one of 2 other lists: `user`, `item` ``` for line in readlines: Type = line.split(",") Copy1 = Type[0] Copy2 = Type[1] user.append(Copy1) item.append(Copy2) ``` But when appending to `item`, `user` gets `Copy2` appended as well, and `item` ends up the same as `user`!!! ![user's and item's value's](http://i.stack.imgur.com/AM6As.png) I have checked all the familiar replies, but that didn't help me (I tried to copy everything in the script; here's only one version).
When you write `item=user=channel=time=[]`, you are creating only one object, with 4 aliases. So appending to `item` is the same as appending to `user`. Instead you should write ``` item, user, channel, time = [], [], [], [] ``` or simply ``` item = [] user = [] channel = [] time = [] ```
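A quick illustration of the aliasing (a small sketch, independent of the CSV code in the question):

```python
# One list object with two names: an append through either name is
# visible through both, because both names refer to the same object.
item = user = []
user.append('42')
assert item is user
print(item)  # ['42']

# Separate assignment creates two distinct list objects.
item2, user2 = [], []
user2.append('42')
assert item2 is not user2
print(item2)  # []
```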
Numpy: Replace every value in the array with the mean of its adjacent elements
38,232,497
4
2016-07-06T19:44:03Z
38,232,755
8
2016-07-06T20:00:42Z
[ "python", "arrays", "numpy", "numpy-broadcasting" ]
I have an ndarray, and I want to replace every value in the array with the mean of its adjacent elements. The code below can do the job, but it is super slow when I have 700 arrays all with shape (7000, 7000) , so I wonder if there are better ways to do it. Thanks! ``` a = np.array(([1,2,3,4,5,6,7,8,9],[4,5,6,7,8,9,10,11,12],[3,4,5,6,7,8,9,10,11])) row,col = a.shape new_arr = np.ndarray(a.shape) for x in xrange(row): for y in xrange(col): min_x = max(0, x-1) min_y = max(0, y-1) new_arr[x][y] = a[min_x:(x+2),min_y:(y+2)].mean() print new_arr ```
Well, that's a [`smoothing operation in image processing`](https://en.wikipedia.org/wiki/Smoothing), which can be achieved with `2D` convolution. You are working a bit differently on the near-boundary elements. So, setting aside exact handling of the boundary elements, you can use [`scipy's convolve2d`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.convolve2d.html) like so - ``` from scipy.signal import convolve2d as conv2 out = conv2(a, np.ones((3,3)), 'same') / 9.0 ``` This specific operation is built into the OpenCV module as [`cv2.blur`](http://docs.opencv.org/2.4/modules/imgproc/doc/filtering.html?highlight=blur#blur) and is very efficient at it. The name basically describes its operation of blurring the input arrays representing images. I believe the efficiency comes from the fact that internally it's implemented entirely in `C` for performance, with a thin Python wrapper to handle NumPy arrays. So, the output could alternatively be calculated with it, like so - ``` import cv2 # Import OpenCV module out = cv2.blur(a.astype(float),(3,3)) ``` Here's a quick show-down on timings on a decently big image/array - ``` In [93]: a = np.random.randint(0,255,(5000,5000)) # Input array In [94]: %timeit conv2(a,np.ones((3,3)),'same')/9.0 1 loops, best of 3: 2.74 s per loop In [95]: %timeit cv2.blur(a.astype(float),(3,3)) 1 loops, best of 3: 627 ms per loop ```
list comprehension where list itself is None
38,234,951
3
2016-07-06T22:40:45Z
38,234,988
8
2016-07-06T22:44:06Z
[ "python", "list-comprehension" ]
Is there a way for me to deal with the case where the list my\_list itself can be None in the list comprehension: ``` [x for x in my_list] ``` I tried this: ``` [x for x in my_list if my_list is not None else ['1']] ``` However, it doesn't seem to work.
I think this does what you want: ``` >>> my_list = None >>> [x for x in my_list] if my_list is not None else ['1'] ['1'] ``` The change here is moving the ternary statement outside of the list comprehension. Alternatively, if we add some parens, we can keep the ternary statement inside the list comprehension: ``` >>> my_list = None >>> [x for x in (my_list if my_list is not None else ['1'])] ['1'] ```
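Both branches of the first form can be checked quickly (a small usage sketch):

```python
def with_default(my_list):
    # Ternary outside the comprehension: fall back to ['1'] when None.
    return [x for x in my_list] if my_list is not None else ['1']

print(with_default(None))    # ['1']
print(with_default([2, 3]))  # [2, 3]
```

Note that an empty list is not `None`, so `with_default([])` returns `[]` rather than the fallback.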
Exception during list comprehension. Are intermediate results kept anywhere?
38,239,281
18
2016-07-07T06:59:15Z
38,239,480
26
2016-07-07T07:10:30Z
[ "python", "list-comprehension" ]
When using try-except in a for loop context, the commands executed so far are obviously done with ``` a = [1, 2, 3, 'text', 5] b = [] try: for k in range(len(a)): b.append(a[k] + 4) except: print('Error!') print(b) ``` results with ``` Error! [5, 6, 7] ``` However the same is not true for list comprehensions ``` c=[] try: c = [a[k] + 4 for k in range(len(a))] except: print('Error!') print(c) ``` And the result is ``` Error! [] ``` Is the intermediate list, built before the exception occurred, kept anywhere? Is it accessible?
The list comprehension intermediate results are kept on an internal CPython stack, and are not accessible from the Python expressions that are part of the list comprehension. Note that Python executes the `[.....]` **first**, which produces a list object, and only **then** assigns that result to the name `c`. If an exception occurs within the `[....]` expression, the expression is terminated and exception handling kicks in instead. Your `print(c)` expression thus can only ever show the *previous* object that `c` was bound to, which here is an empty list object. It could have been anything else: ``` >>> c = 'Anything else' >>> try: ... c = [2 // i for i in (1, 0)] ... except ZeroDivisionError: ... pass ... >>> c 'Anything else' ``` In your first example, no new list object is produced. You instead manipulate (using `b.append()`) an *existing* list object, which is why you can see what all successful `b.append()` calls have done to it.
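The hidden list itself stays out of reach, but if you need to observe the values produced before the exception, a side-effect sketch like the following records them in a list you control (an illustration of a workaround, not access to the internal list):

```python
a = [1, 2, 3, 'text', 5]
seen = []  # our own list, mutated as a side effect of each iteration
try:
    # append() returns None, so `None or (v + 4)` still yields v + 4
    # for the comprehension's own (ultimately discarded) result list.
    c = [seen.append(v + 4) or (v + 4) for v in a]
except TypeError:
    pass
print(seen)  # [5, 6, 7] -- the values computed before 'text' + 4 failed
```

Note `c` is never bound here, since the exception aborts the comprehension before the assignment runs.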
Exception during list comprehension. Are intermediate results kept anywhere?
38,239,281
18
2016-07-07T06:59:15Z
38,240,321
11
2016-07-07T07:59:14Z
[ "python", "list-comprehension" ]
When using try-except in a for loop context, the commands executed so far are obviously done with ``` a = [1, 2, 3, 'text', 5] b = [] try: for k in range(len(a)): b.append(a[k] + 4) except: print('Error!') print(b) ``` results with ``` Error! [5, 6, 7] ``` However the same is not true for list comprehensions ``` c=[] try: c = [a[k] + 4 for k in range(len(a))] except: print('Error!') print(c) ``` And the result is ``` Error! [] ``` Is the intermediate list, built before the exception occurred, kept anywhere? Is it accessible?
Let's look at the bytecode: ``` >>> def example(): ... c=[] ... try: ... c = [a[k] + 4 for k in range(len(a))] ... except: ... print('Error!') ... print(c) ... >>> import dis >>> dis.dis(example) --- removed some instructions 27 GET_ITER >> 28 FOR_ITER 20 (to 51) 31 STORE_FAST 1 (k) 34 LOAD_GLOBAL 2 (a) 37 LOAD_FAST 1 (k) 40 BINARY_SUBSCR 41 LOAD_CONST 1 (4) 44 BINARY_ADD 45 LIST_APPEND 2 48 JUMP_ABSOLUTE 28 >> 51 STORE_FAST 0 (c) --- more instructions... ``` As you can see, the list comprehension is translated to a series of instructions `GET_ITER`...`JUMP_ABSOLUTE`. The next instruction `STORE_FAST` is the one that modifies `c`. If any exception occurs before it, `c` will not have been modified.
Failed to install flask under virtualenv on Windows -- [Error 2] The system cannot find the file specified
38,243,633
6
2016-07-07T10:53:00Z
38,413,695
15
2016-07-16T17:21:32Z
[ "python", "windows", "flask", "virtualenv" ]
I'm using Python 2.7 on a Windows box. I'm able to install flask using pip, as you can see below: ![cool](http://i.stack.imgur.com/2gEWO.png) However, after I created a virtualenv, I got the error below when trying to do the same thing. Scripts: ``` $pip install virtualenv $cd /d d: $mkdir test $cd test $virtualenv flaskEnv $cd flaskEnv/Scripts/ $activate $cd ../../ $pip install flask ``` log file as below: ``` Collecting flask Using cached Flask-0.11.1-py2.py3-none-any.whl Requirement already satisfied (use --upgrade to upgrade): click>=2.0 in c:\projects\flask-react\flsk\lib\site-packages (from flask) Requirement already satisfied (use --upgrade to upgrade): Werkzeug>=0.7 in c:\projects\flask-react\flsk\lib\site-packages (from flask) Collecting Jinja2>=2.4 (from flask) Using cached Jinja2-2.8-py2.py3-none-any.whl Collecting itsdangerous>=0.21 (from flask) Collecting MarkupSafe (from Jinja2>=2.4->flask) Using cached MarkupSafe-0.23.tar.gz Building wheels for collected packages: MarkupSafe Running setup.py bdist_wheel for MarkupSafe: started Running setup.py bdist_wheel for MarkupSafe: finished with status 'error' Complete output from command c:\projects\flask-react\flsk\scripts\python.exe -u -c "import setuptools, tokenize;__file__='c:\\users\\admini~1\\appdata\\local\\temp\\pip-build-3ep417\\MarkupSafe\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" bdist_wheel -d c:\users\admini~1\appdata\local\temp\tmp8mkr70pip-wheel- --python-tag cp27: running bdist_wheel running build running build_py creating build creating build\lib.win32-2.7 creating build\lib.win32-2.7\markupsafe copying markupsafe\tests.py -> build\lib.win32-2.7\markupsafe copying markupsafe\_compat.py -> build\lib.win32-2.7\markupsafe copying markupsafe\_constants.py -> build\lib.win32-2.7\markupsafe copying markupsafe\_native.py -> build\lib.win32-2.7\markupsafe copying markupsafe\__init__.py -> build\lib.win32-2.7\markupsafe
running egg_info writing MarkupSafe.egg-info\PKG-INFO writing top-level names to MarkupSafe.egg-info\top_level.txt writing dependency_links to MarkupSafe.egg-info\dependency_links.txt warning: manifest_maker: standard file '-c' not found reading manifest file 'MarkupSafe.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'MarkupSafe.egg-info\SOURCES.txt' copying markupsafe\_speedups.c -> build\lib.win32-2.7\markupsafe running build_ext building 'markupsafe._speedups' extension error: [Error 2] The system cannot find the file specified ---------------------------------------- Running setup.py clean for MarkupSafe Failed to build MarkupSafe Installing collected packages: MarkupSafe, Jinja2, itsdangerous, flask Running setup.py install for MarkupSafe: started Running setup.py install for MarkupSafe: finished with status 'error' Complete output from command c:\projects\flask-react\flsk\scripts\python.exe -u -c "import setuptools, tokenize;__file__='c:\\users\\admini~1\\appdata\\local\\temp\\pip-build-3ep417\\MarkupSafe\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\admini~1\appdata\local\temp\pip-8v3_ep-record\install-record.txt --single-version-externally-managed --compile --install-headers c:\projects\flask-react\flsk\include\site\python2.7\MarkupSafe: running install running build running build_py creating build creating build\lib.win32-2.7 creating build\lib.win32-2.7\markupsafe copying markupsafe\tests.py -> build\lib.win32-2.7\markupsafe copying markupsafe\_compat.py -> build\lib.win32-2.7\markupsafe copying markupsafe\_constants.py -> build\lib.win32-2.7\markupsafe copying markupsafe\_native.py -> build\lib.win32-2.7\markupsafe copying markupsafe\__init__.py -> build\lib.win32-2.7\markupsafe running egg_info writing MarkupSafe.egg-info\PKG-INFO writing top-level names to MarkupSafe.egg-info\top_level.txt writing dependency_links to 
MarkupSafe.egg-info\dependency_links.txt warning: manifest_maker: standard file '-c' not found reading manifest file 'MarkupSafe.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'MarkupSafe.egg-info\SOURCES.txt' copying markupsafe\_speedups.c -> build\lib.win32-2.7\markupsafe running build_ext building 'markupsafe._speedups' extension error: [Error 2] The system cannot find the file specified ``` Does the error message mean something to anyone? thanks
This issue seems to be related to the setuptools version installed in your virtualenv. Downgrading to an older version fixed it for me. From your virtualenv: ``` pip install setuptools==21.2.1 pip install flask ```