Dataset columns (name: type, observed range):

- title: string, length 12 to 150
- question_id: int64, 469 to 40.1M
- question_score: int64, 2 to 5.52k
- question_date: string (date), 2008-08-02 15:11:16 to 2016-10-18 06:16:31
- answer_id: int64, 536 to 40.1M
- answer_score: int64, 7 to 8.38k
- answer_date: string (date), 2008-08-02 18:49:07 to 2016-10-18 06:19:33
- tags: list, length 1 to 5
- question_body_md: string, length 15 to 30.2k
- answer_body_md: string, length 11 to 27.8k
Title: += vs. =+ in timedelta
Question 34521215 (score 2, 2015-12-30T00:20:45Z) | Answer 34521245 (score 9, 2015-12-30T00:25:37Z)
Tags: ["python", "datetime"]
Question: what's up with this? ``` from datetime import timedelta, date ONE_DAY = timedelta(days=1) date = date(2015,12,12) >>> date -= ONE_DAY >>> date date(2015,12,11) >>> date += ONE_DAY >>> date date(2015,12,12) >>> date =+ ONE_DAY >>> date datetime.timedelta(1) ``` Normally, I would expect an error when attempting to use `=+`, but instead I seem to be assigning the timedelta object. Also: ``` >>> TWO_DAYS = timedelta(days=2) >>> TWO_DAYS datetime.timedelta(2) ``` ... ``` >>> date = date(2015,12,12) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'datetime.timedelta' object is not callable >>> date = False >>> date False >>> date = date(2015,12,12) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'bool' object is not callable >>> dates = date(2015,12,12) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'bool' object is not callable >>> x = date(2015,12,12) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'bool' object is not callable ``` Apparently my ability to assign variables has also been somehow compromised? I was initially getting an error from the issue in the first half, and as I dug deeper I noticed the things in the second half of my post. Any help would be appreciated.
`=+` is not an augmented assignment syntax. You have `=` and a *separate* [unary `+` operator](https://docs.python.org/2/reference/expressions.html#unary-arithmetic-and-bitwise-operations): ``` date = (+ONE_DAY) ``` You rebound `date` to `ONE_DAY` (`+` did nothing to that value). Note that you bound the name `date` first to a `datetime.date()` instance, then to a `datetime.timedelta()` instance with the above `=+` statement. You can't then still expect it to be bound to the `datetime.date` object from the `import` line. Python doesn't separate imported names from other names in your code, importing binds names just the same way assignment does. In other words, assignment hasn't been compromised at all. Quite to the contrary, it is *because* assignment works that you can no longer treat the `date` name as being bound to `datetime.date`. This works just fine: ``` import datetime date = datetime.date(2015, 12, 12) ``` because then you bound the name `datetime`, and can still reach the `datetime.date` callable from there.
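To see for yourself how the two spellings parse, the standard `ast` module makes the difference explicit (a quick sketch of mine, not part of the original answer):

```python
import ast

# "d =+ x" parses as a plain assignment whose right-hand side is a
# unary-plus expression, exactly as the answer describes.
node = ast.parse("d =+ x").body[0]
print(type(node).__name__, type(node.value).__name__)   # Assign UnaryOp

# "d += x", by contrast, parses as a genuine augmented assignment.
node = ast.parse("d += x").body[0]
print(type(node).__name__)                              # AugAssign
```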
Title: Python's MySQLdb can’t find libmysqlclient.dylib with Homebrewed MySQL
Question 34536914 (score 9, 2015-12-30T20:38:50Z) | Answer 34627734 (score 15, 2016-01-06T07:24:49Z)
Tags: ["python", "mysql", "homebrew", "mysql-python"]
# MySQL and Python installed with Homebrew I installed MySQL and Python with Homebrew on OS X 10.10.5 Yosemite. My Python 2.7 is at `python -> ../Cellar/python/2.7.9/bin/python` with a symlink to it at `/usr/local/bin/python`. In `/usr/local/bin` there is a symlink: `mysql -> ../Cellar/mysql/5.7.9/bin/mysql` ## The error In the Python shell: ``` >>> import MySQLdb Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.7/site-packages/MySQLdb/__init__.py", line 19, in <module> import _mysql ImportError: dlopen(/usr/local/lib/python2.7/site-packages/_mysql.so, 2): Library not loaded: /usr/local/lib/libmysqlclient.18.dylib Referenced from: /usr/local/lib/python2.7/site-packages/_mysql.so Reason: image not found ``` So I tried: `$ sudo unlink /usr/local/lib/libmysqlclient.18.dylib` followed by: `DYLD_LIBRARY_PATH=/usr/local/mysql/lib/:$DYLD_LIBRARY_PATH` and then (desperation over reason): `$ export DYLD_LIBRARY_PATH=/usr/local/Cellar/mysql/5.7.9/lib` But in both cases `import MySQLdb` still tried to import `libmysqlclient.18.dylib`. Then I tried: `$ pip install -U MySQL-python` and got: `Requirement already up-to-date: MySQL-python in /usr/local/lib/python2.7/site-packages` ## Existing answers Many [answers](http://stackoverflow.com/questions/4546698/library-not-loaded-libmysqlclient-16-dylib-error-when-trying-to-run-rails-serv) [to this](http://stackoverflow.com/questions/7375199/python-brew-and-mysqldb) [problem](http://stackoverflow.com/questions/4559699/python-mysqldb-and-library-not-loaded-libmysqlclient-16-dylib) [on SO](http://stackoverflow.com/questions/6383310/python-mysqldb-library-not-loaded-libmysqlclient-18-dylib) suggest manually making an explicit symlink to the library with a version number (in my case `libmysqlclient.20.dylib`). 
However, this seems crude and not future-proof, given the existing symlinks: in `/usr/local/lib` there is `libmysqlclient.dylib -> ../Cellar/mysql/5.7.9/lib/libmysqlclient.dylib` and in `/usr/local/Cellar/mysql/5.7.9/lib` we find: `libmysqlclient.20.dylib` with a symlink to it in the same directory: `libmysqlclient.dylib -> libmysqlclient.20.dylib` ## How to make Python forget `libmysqlclient.18.dylib`? So how can I get Python to forget `/usr/local/lib/libmysqlclient.18.dylib` and follow the correct symlink in `/usr/local/lib` to `libmysqlclient.dylib`, without manually adding yet another symlink?
I also encountered this problem. I uninstalled MySQL-python and then reinstalled it: ``` pip uninstall MySQL-python pip install MySQL-python ``` That solved the problem, presumably because reinstalling recompiles the `_mysql` C extension against the MySQL client library currently installed, so it links to the right `libmysqlclient` version.
Title: Python/numpy: Most efficient way to sum n elements of an array, so that each output element is the sum of the previous n input elements?
Question 34537068 (score 7, 2015-12-30T20:52:29Z) | Answer 34537170 (score 8, 2015-12-30T20:59:58Z)
Tags: ["python", "arrays", "algorithm", "performance", "numpy"]
I want to write a function that takes a flattened array as input and returns an array of equal length containing the sums of the previous n elements from the input array, with the initial `n - 1` elements of the output array set to `NaN`. For example if the array has ten `elements = [2, 4, 3, 7, 6, 1, 9, 4, 6, 5]` and `n = 3` then the resulting array should be `[NaN, NaN, 9, 14, 16, 14, 16, 14, 19, 15]`. One way I've come up with to do this: ``` def sum_n_values(flat_array, n): sums = np.full(flat_array.shape, np.NaN) for i in range(n - 1, flat_array.shape[0]): sums[i] = np.sum(flat_array[i - n + 1:i + 1]) return sums ``` Is there a better/more efficient/more "Pythonic" way to do this? Thanks in advance for your help.
You can make use of [`np.cumsum`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.cumsum.html), and take the difference of the `cumsum`ed array and a shifted version of it: ``` n = 3 arr = np.array([2, 4, 3, 7, 6, 1, 9, 4, 6, 5]) sum_arr = arr.cumsum() shifted_sum_arr = np.concatenate([[np.NaN]*(n-1), [0], sum_arr[:-n]]) sum_arr => array([ 2, 6, 9, 16, 22, 23, 32, 36, 42, 47]) shifted_sum_arr => array([ nan, nan, 0., 2., 6., 9., 16., 22., 23., 32.]) sum_arr - shifted_sum_arr => array([ nan, nan, 9., 14., 16., 14., 16., 14., 19., 15.]) ``` IMO, this is a more *numpyish* way to do this, mainly because it avoids the loop. --- **Timings** ``` def cumsum_app(flat_array, n): sum_arr = flat_array.cumsum() shifted_sum_arr = np.concatenate([[np.NaN]*(n-1), [0], sum_arr[:-n]]) return sum_arr - shifted_sum_arr flat_array = np.random.randint(0,9,(100000)) %timeit cumsum_app(flat_array,10) 1000 loops, best of 3: 985 us per loop %timeit cumsum_app(flat_array,100) 1000 loops, best of 3: 963 us per loop ```
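On NumPy 1.20 or newer (my addition, not part of the original answer), `sliding_window_view` expresses the same windowed sum directly, trading the cumsum trick for readability:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def window_sum(flat_array, n):
    # Sum every length-n window, then pad the first n-1 slots with NaN.
    vals = sliding_window_view(flat_array, n).sum(axis=1)
    return np.concatenate([np.full(n - 1, np.nan), vals])

arr = np.array([2, 4, 3, 7, 6, 1, 9, 4, 6, 5])
print(window_sum(arr, 3))   # nan, nan, 9, 14, 16, 14, 16, 14, 19, 15
```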
Title: Python/numpy: Most efficient way to sum n elements of an array, so that each output element is the sum of the previous n input elements?
Question 34537068 (score 7, 2015-12-30T20:52:29Z) | Answer 34537191 (score 7, 2015-12-30T21:02:11Z)
Tags: ["python", "arrays", "algorithm", "performance", "numpy"]
I want to write a function that takes a flattened array as input and returns an array of equal length containing the sums of the previous n elements from the input array, with the initial `n - 1` elements of the output array set to `NaN`. For example if the array has ten `elements = [2, 4, 3, 7, 6, 1, 9, 4, 6, 5]` and `n = 3` then the resulting array should be `[NaN, NaN, 9, 14, 16, 14, 16, 14, 19, 15]`. One way I've come up with to do this: ``` def sum_n_values(flat_array, n): sums = np.full(flat_array.shape, np.NaN) for i in range(n - 1, flat_array.shape[0]): sums[i] = np.sum(flat_array[i - n + 1:i + 1]) return sums ``` Is there a better/more efficient/more "Pythonic" way to do this? Thanks in advance for your help.
You are basically performing [`1D convolution`](https://en.wikipedia.org/wiki/Convolution) there, so you can use [`np.convolve`](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.convolve.html), like so - ``` # Get the valid sliding summations with 1D convolution vals = np.convolve(flat_array,np.ones(n),mode='valid') # Pad with NaNs at the start if needed out = np.pad(vals,(n-1,0),'constant',constant_values=(np.nan)) ``` Sample run - ``` In [110]: flat_array Out[110]: array([2, 4, 3, 7, 6, 1, 9, 4, 6, 5]) In [111]: n = 3 In [112]: vals = np.convolve(flat_array,np.ones(n),mode='valid') ...: out = np.pad(vals,(n-1,0),'constant',constant_values=(np.nan)) ...: In [113]: vals Out[113]: array([ 9., 14., 16., 14., 16., 14., 19., 15.]) In [114]: out Out[114]: array([ nan, nan, 9., 14., 16., 14., 16., 14., 19., 15.]) ``` For 1D convolution, one can also use [`Scipy's implementation`](http://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.signal.convolve.html). The runtimes with the SciPy version seemed better for large window sizes, as the runtime tests listed next investigate. The SciPy version for getting `vals` would be - ``` from scipy import signal vals = signal.convolve(flat_array,np.ones(n),mode='valid') ``` The `NaN` padding operation could be replaced by [`np.hstack`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.hstack.html): `np.hstack(([np.nan]*(n-1),vals))` for better performance. 
--- Runtime tests - ``` In [238]: def original_app(flat_array,n): ...: sums = np.full(flat_array.shape, np.NaN) ...: for i in range(n - 1, flat_array.shape[0]): ...: sums[i] = np.sum(flat_array[i - n + 1:i + 1]) ...: return sums ...: ...: def vectorized_app1(flat_array,n): ...: vals = np.convolve(flat_array,np.ones(n),mode='valid') ...: return np.hstack(([np.nan]*(n-1),vals)) ...: ...: def vectorized_app2(flat_array,n): ...: vals = signal.convolve(flat_array,np.ones(n),mode='valid') ...: return np.hstack(([np.nan]*(n-1),vals)) ...: In [239]: flat_array = np.random.randint(0,9,(100000)) In [240]: %timeit original_app(flat_array,10) 1 loops, best of 3: 833 ms per loop In [241]: %timeit vectorized_app1(flat_array,10) 1000 loops, best of 3: 1.96 ms per loop In [242]: %timeit vectorized_app2(flat_array,10) 100 loops, best of 3: 13.1 ms per loop In [243]: %timeit original_app(flat_array,100) 1 loops, best of 3: 836 ms per loop In [244]: %timeit vectorized_app1(flat_array,100) 100 loops, best of 3: 16.5 ms per loop In [245]: %timeit vectorized_app2(flat_array,100) 100 loops, best of 3: 13.1 ms per loop ```
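Since the question is also tagged performance, it may be worth noting (my addition, not part of this answer) that pandas ships a ready-made rolling sum that produces exactly the requested output, NaN padding included:

```python
import numpy as np
import pandas as pd

arr = np.array([2, 4, 3, 7, 6, 1, 9, 4, 6, 5])
# rolling(window=3).sum() leaves the first n-1 entries as NaN by default.
out = pd.Series(arr).rolling(window=3).sum().to_numpy()
print(out)   # nan, nan, 9, 14, 16, 14, 16, 14, 19, 15
```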
Title: How to speed up Sieve of Eratosthenes python list generator
Question 34540574 (score 7, 2015-12-31T03:22:29Z) | Answer 34541317 (score 8, 2015-12-31T05:07:23Z)
Tags: ["python", "list", "computer-science"]
My problem comes directly from the CS circles site. It's the last problem on the bottom of [this](http://cscircles.cemc.uwaterloo.ca/18-efficiency/) page called 'Primed for Takeoff'. The basic rundown is they want a list of length 1,000,001, where the item at each index is True if the index is prime and False if it is not prime. So, for example, isPrime[13] is True. isPrime[14] is False. The first little bit of the list 'isPrime' would look like: ``` isPrime = [False, False, True, True, False, True, False, True, False, False, ...] ``` My problem is the 7-second time limit they have. I have working code below with a lower number, 101, for debugging purposes. When I bump it up to their required 1,000,001 list length, it takes way too long; I ended up killing the program after a few minutes. This is my working (at length 101), very slow code: ``` n = 101 c = 2 isPrime = [False,False] for i in range(2,n): isPrime.append(i) def ifInt(isPrime): for item in isPrime: if type(item) == int: return item for d in range(2,n): if c != None: for i in range(c,n,c): isPrime[i] = False isPrime[c] = True c = ifInt(isPrime) ``` Then there's this one I found [here](http://stackoverflow.com/questions/3939660/sieve-of-eratosthenes-finding-primes-python). It has a quicker run time, but only outputs a list of primes, not a list of length n with list[n] returning True for primes and False otherwise. I'm not sure if this second bit of code really holds the key to creating a prime list of length 1,000,001 in under 7 seconds, but it was the most relevant thing I could find in my research. ``` def primes_sieve1(limit): limitn = limit+1 primes = dict() for i in range(2, limitn): primes[i] = True for i in primes: factors = range(i,limitn, i) for f in factors[1:]: primes[f] = False return [i for i in primes if primes[i]==True] print primes_sieve1(101) ``` CS circles seems pretty commonly used, but I wasn't able to find a working version of this for Python. 
Hopefully this will be an easy solve for someone. This question differs from [this one](http://stackoverflow.com/questions/2068372/fastest-way-to-list-all-primes-below-n) because I am not trying only to create a list of primes quickly, but to create a list of all positive integers from 0 to n, marked True for prime and False for non-prime.
Edit: I realized that there are a lot of optimizations on SO, but they are rarely explained for the prime sieve algorithm, which makes them difficult to approach for beginners or first-time implementers of the algorithm. All the solutions will be in Python, to be on the same page for speed and optimizations. These solutions will progressively become faster and more complex. :) **Vanilla Solution** ``` def primesVanilla(n): r = [True] * n r[0] = r[1] = False for i in xrange(n): if r[i]: for j in xrange(i+i,n,i): r[j]=False return r ``` This is a very straightforward implementation of the Sieve. Please make sure you understand what is going on above before proceeding. The only slight thing to note is that you start marking not-primes at i+i instead of i, but this is rather obvious. (Since you assume that i itself is a prime.) To make tests fair, all numbers will be for the list up to **25 million**. ``` real 0m7.663s user 0m7.624s sys 0m0.036s ``` **Minor Improvement 1 (Square roots):** I'll try to sort them in terms of straightforward to less straightforward changes. Observe that we do not need to iterate to n, but rather only need to go up to the square root of n. The reason is that any composite number under n must have a prime factor less than or equal to the square root of n. When you sieve by hand, you'll notice that all the "unsieved" numbers over the square root of n are by default primes. Another remark is that you have to be a little bit careful for when the square root turns out to be an integer, so you should add one in this case to cover it. I.e., at n=49, you want to loop until 7 inclusive, or you might conclude that 49 is prime. ``` def primes1(n): r = [True] * n r[0] = r[1] = False for i in xrange(int(n**0.5+1)): if r[i]: for j in xrange(i+i,n,i): r[j]=False return r real 0m4.615s user 0m4.572s sys 0m0.040s ``` Note that it's quite a bit faster. 
When you think about it, you're looping only until the square root, so what would take 25 million top level iterations now is only 5000 top level. **Minor Improvement 2 (Skipping in inner loop):** Observe that in the inner loop, instead of starting from i+i, we can start from i\*i. This follows from a very similar argument to the square root thing, but the big idea is that any composites between i and i\*i have already been marked by smaller primes. ``` def primes2(n): r = [True] * n r[0] = r[1] = False for i in xrange(int(n**0.5+1)): if r[i]: for j in xrange(i*i,n,i): r[j]=False return r real 0m4.559s user 0m4.500s sys 0m0.056s ``` Well that's a bit disappointing. But hey, it's still faster. **Somewhat Major Improvement 3 (Even skips):** The idea here is that we can premark all the even indices, and then skip iterations by 2 in the main loop. After that we can start the outer loop at 3, and the inner loop can skip by 2\*i instead. (Since going by i instead implies that it'll be even, (i+i) (i+i+i+i) etc.) ``` def primes3(n): r = [True] * n r[0] = r[1] = False for i in xrange(4,n,2):r[i]=False for i in xrange(3,int(n**0.5+1),2): if r[i]: for j in xrange(i*i,n,2*i): r[j]=False return r real 0m2.916s user 0m2.872s sys 0m0.040s ``` **Cool Improvements 4 (wim's idea):** This solution is a rather advanced trick. Slice assignment is faster than looping, so this uses python's slice notation. r[begin:end:skip] ``` def primes4(n): r = [True] * n r[0] = r[1] = False r[4::2] = [False] * len(r[4::2]) for i in xrange(3,int(1 + n**0.5),2): if r[i]: r[i*i::2*i] = [False] * len(r[i*i::2*i]) return r 10 loops, best of 3: 1.1 sec per loop ``` **Slight Improvement 5** Note that python reslices the r[4::2] when it calculates the length, so this takes quite a bit of extra time since all we need for it is computing the length. We do use some nasty math to achieve this though. 
``` def primes5(n): r = [True] * n r[0] = r[1] = False r[4::2] = [False] * ((n+1)/2-2) for i in xrange(3,int(1 + n**0.5),2): if r[i]: r[i*i::2*i] = [False] * ((n+2*i-1-i*i)/(2*i)) return r 10 loops, best of 3: 767 msec per loop ``` **Assignment speed-up (Padraic Cunningham)** Note that we assign an array with all True and then set half (the evens) to be False. We can actually just start with a boolean array that's alternating. ``` def primes6(n): r = [False,True] * (n//2)+[True] r[1],r[2]=False, True for i in xrange(3,int(1 + n**0.5),2): if r[i]: r[i*i::2*i] = [False] * ((n+2*i-1-i*i)/(2*i)) return r 10 loops, best of 3: 717 msec per loop ``` Don't quote me on this, but I think without some nasty math methods, there are no obvious improvements to this last version. One cute property that I tried, but which did not turn out to be any faster, is noting that primes other than 2 and 3 must be of the form 6k+1 or 6k-1. (Note that if it's 6k, it's divisible by 6; 6k+2 | 2; 6k+3 | 3; 6k+4 | 2; and 6k+5 is congruent to -1 mod 6.) This suggests that we can skip by 6 each time and check both sides. Either because of a poor implementation on my side or because of Python internals, I was unable to find any meaningful speed increase. :(
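As one further point of comparison (my addition, not part of the original answer), the same slice-assignment trick maps directly onto a NumPy boolean array; broadcasting `False` into a slice removes the `[False] * len(...)` list allocations entirely:

```python
import numpy as np

def primes_numpy(n):
    # r[i] is True iff i is prime, for 0 <= i < n.
    r = np.ones(n, dtype=bool)
    r[:2] = False          # 0 and 1 are not prime
    r[4::2] = False        # even numbers above 2
    for i in range(3, int(n ** 0.5) + 1, 2):
        if r[i]:
            # Slice assignment broadcasts a scalar; no length arithmetic needed.
            r[i * i::2 * i] = False
    return r

r = primes_numpy(101)
print([i for i in range(20) if r[i]])   # [2, 3, 5, 7, 11, 13, 17, 19]
```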
Title: Issue warning for missing comma between list items bug
Question 34540634 (score 29, 2015-12-31T03:30:21Z) | Answer 34540763 (score 18, 2015-12-31T03:53:32Z)
Tags: ["python", "string", "list", "multiline"]
**The Story:** When a list of strings is defined on multiple lines, it is often easy to *forget a comma* between list items, like in this example case: ``` test = [ "item1" "item2" ] ``` The list `test` would now have a single item `"item1item2"`. Quite often the problem appears after rearranging the items in a list. Sample Stack Overflow questions having this issue: * [Why do I get a KeyError?](http://stackoverflow.com/questions/30461384/why-do-i-get-a-keyerror) * [Python - Syntax error on colon in list](http://stackoverflow.com/questions/19344255/python-syntax-error-on-colon-in-list) **The Question:** Is there a way to, preferably using *static code analysis*, issue a warning in cases like this in order to spot the problem as early as possible?
*These are merely probable solutions since I'm not really adept at static analysis*. ### With `tokenize`: I recently fiddled around [with tokenizing python code](https://docs.python.org/2.7/library/tokenize.html#module-tokenize) and I believe it has all the information needed to perform these kinds of checks when sufficient logic is added. For your given list, the tokens generated with `python -m tokenize list1.py` are as follows: ``` python -m tokenize list1.py 1,0-1,4: NAME 'test' 1,5-1,6: OP '=' 1,7-1,8: OP '[' 1,8-1,9: NL '\n' 2,1-2,8: STRING '"item1"' 2,8-2,9: NL '\n' 3,1-3,8: STRING '"item2"' 3,8-3,9: NL '\n' 4,0-4,1: OP ']' 4,1-4,2: NEWLINE '\n' 5,0-5,0: ENDMARKER '' ``` This of course is the '*problematic*' case where the contents are going to get concatenated. In the case where a `,` is present, the output slightly changes to reflect this (I added the tokens only for the list body): ``` 1,7-1,8: OP '[' 1,8-1,9: NL '\n' 2,1-2,8: STRING '"item1"' 2,8-2,9: OP ',' 2,9-2,10: NL '\n' 3,1-3,8: STRING '"item2"' 3,8-3,9: NL '\n' 4,0-4,1: OP ']' ``` Now we have the additional `OP ','` token signifying the presence of a second element separated by a comma. Given this information, we could use the really handy method `generate_tokens` in the `tokenize` module. The method [`tokenize.generate_tokens()`](https://docs.python.org/2.7/library/tokenize.html#tokenize.generate_tokens) (`tokenize.tokenize()` in Py3) has a single argument, `readline`, a method on file-like objects which essentially returns the next line for that file-like object ([relevant answer](http://stackoverflow.com/questions/34511673/extracting-comments-from-python-source-code/34512388#34512388)). It returns a named tuple with 5 elements in total, with information about the token type and the token string, along with the line number and position in the line. 
Using this information, one could theoretically loop through a file, and when an `OP ','` is absent inside a list initialization (whose beginning is detected by checking that the tokens `NAME`, `OP '='` and `OP '['` exist on the same line number), one can issue a warning on the lines on which it was detected. The good thing about this approach is that it is pretty straightforward to generalize. To fit all cases where string literal concatenation takes place (namely, inside the 'grouping' operators `(), {}, []`) you check that the token is of [`type = 51`](https://hg.python.org/cpython/file/2.7/Lib/token.py#l62) (or [53 for Python 3](https://hg.python.org/cpython/file/3.5/Lib/token.py#l66)) or that a value in any of `(, [, {` exists on the same line (these are coarse, top-of-the-head suggestions at the moment). Now, *I'm not really sure how other people go about these sorts of problems* **but** *it seems like it could be something you can look into*. All the information necessary is offered by `tokenize`; the logic to detect it is the only thing missing. **Implementation Note:** These values (for example, for `type`) do differ between versions and are subject to change, so it is something one should be aware of. One could possibly mitigate this [by only working with constants](https://docs.python.org/3.5/library/token.html#module-token) for the tokens, though.
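To make the `tokenize` idea concrete, here is a rough sketch of mine (Python 3 spelling, not from the original answer) that flags any string literal directly following another string literal, which is exactly the implicit-concatenation pattern:

```python
import io
import tokenize

def find_implicit_concat(source):
    """Yield the (line, column) of every string literal that directly
    follows another string literal, i.e. a likely missing comma."""
    prev_was_string = False
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.STRING:
            if prev_was_string:
                yield tok.start
            prev_was_string = True
        elif tok.type in (tokenize.NL, tokenize.COMMENT):
            continue  # trivia between the two strings does not reset the flag
        else:
            prev_was_string = False

src = 'test = [\n    "item1"\n    "item2",\n    "item3"\n]\n'
print(list(find_implicit_concat(src)))   # [(3, 4)]
```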
I don't really want to dump the full output of the methods for `parser` and `ast` that I'm going to mention, but, just to make sure we're on the same page, I'm going to be using the following list initialization statement: ``` l_init = """ test = [ "item1" "item2", "item3" ] """ ``` In order to get the parse tree generated, use [`p = parser.suite(l_init)`](https://docs.python.org/2.7/library/parser.html#parser.suite). After this is done, you can get a view of it with [`p.tolist()`](https://docs.python.org/2.7/library/parser.html#parser.ST.tolist) (output is too large to add it). What you notice is that *there will be three entries for the three different `str` objects `item1`, `item2`, `item3`*. On the other hand, when the AST is created with [`node = ast.parse(l_init)`](https://docs.python.org/2.7/library/ast.html#ast.parse) and viewed with [`ast.dump(node)`](https://docs.python.org/2.7/library/ast.html#ast.dump) *there are only two entries*: one for the concatenated `str`s `item1item2` and one for the other entry `item3`. So, this is another probable way to do it but, as I previously mentioned, it is way more tedious. I'm not sure if line information is available and you deal with two different modules. Just have it as a back thought if you maybe want to play around with internal objects higher in the compiler chain. --- **Closing Comments**: As a closing note, the `tokenize` approach seems to be most logical one in this case. On the contrary though, it seems that `pylint` actually works with [`astroid`](http://www.astroid.org/) a python lib that eases analysis of Abstract Syntax Trees for python code. So, one should ideally look at it and how it is used [inside pylint](https://bitbucket.org/logilab/pylint/src/467e659fd2eb3dcd96f608b2765b9d6e623d4880/pylint/checkers/?at=default). **Note**: *Of course*, I might be completely over-analyzing it and a simpler 'check for white-space or newline' solution as you guys suggested would suffice. :-)
Title: python equality precedence
Question 34543909 (score 19, 2015-12-31T09:08:37Z) | Answer 34543942 (score 18, 2015-12-31T09:11:39Z)
Tags: ["python", "equality", "python-2.x"]
``` class L(object): def __eq__(self, other): print 'invoked L.__eq__' return False class R(object): def __eq__(self, other): print 'invoked R.__eq__' return False left = L() right = R() ``` With this code, left side gets the first shot at comparison, as [documented](https://docs.python.org/2/reference/datamodel.html?highlight=__eq__#object.__eq__) in the data model: ``` >>> left == right invoked L.__eq__ False ``` But if we make a slight modification on line 6 (everything else the same): ``` class R(L): ``` Now the *right* side gets to have the first shot at comparison. ``` >>> left == right invoked R.__eq__ False ``` Why is that? Where is it documented, and what's the reason for the design decision?
This is documented under the [numeric operations](https://docs.python.org/2/reference/datamodel.html#emulating-numeric-types), further down the page, with an explanation for why it works that way: > Note: If the right operand’s type is a subclass of the left operand’s type and that subclass provides the reflected method for the operation, this method will be called before the left operand’s non-reflected method. This behavior allows subclasses to override their ancestors’ operations. The [Python 3 documentation](https://docs.python.org/3/reference/datamodel.html#object.__eq__) additionally mentions it in the section you were looking at: > If the operands are of different types, and right operand’s type is a direct or indirect subclass of the left operand’s type, the reflected method of the right operand has priority, otherwise the left operand’s method has priority. Virtual subclassing is not considered.
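The same subclass-priority rule governs the reflected arithmetic methods; a small demonstration of my own, mirroring the question's setup with `__add__`/`__radd__`:

```python
class Base(object):
    def __add__(self, other):
        return 'Base.__add__'
    def __radd__(self, other):
        return 'Base.__radd__'

class Sub(Base):
    def __radd__(self, other):       # overrides the reflected method
        return 'Sub.__radd__'

# Normally the left operand's method wins...
print(Base() + Base())   # Base.__add__
# ...but a right operand of a subclass type that overrides the
# reflected method gets asked first.
print(Base() + Sub())    # Sub.__radd__
```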
Title: Numba 3x slower than numpy
Question 34544210 (score 7, 2015-12-31T09:33:00Z) | Answer 34547296 (score 10, 2015-12-31T13:40:11Z)
Tags: ["python", "numpy", "numba"]
We have a vectorial numpy **get\_pos\_neg\_bitwise** function that uses a mask=[132 20 192] and a df.shape of (500e3, 4) that we want to accelerate with numba. ``` from numba import jit import numpy as np from time import time def get_pos_neg_bitwise(df, mask): """ In [1]: print mask [132 20 192] In [1]: print df [[ 1 162 97 41] [ 0 136 135 171] ..., [ 0 245 30 73]] """ check = (np.bitwise_and(mask, df[:, 1:]) == mask).all(axis=1) pos = (df[:, 0] == 1) & check neg = (df[:, 0] == 0) & check pos = np.nonzero(pos)[0] neg = np.nonzero(neg)[0] return (pos, neg) ``` Using tips from @morningsun we made this numba version: ``` @jit(nopython=True) def numba_get_pos_neg_bitwise(df, mask): posneg = np.zeros((df.shape[0], 2)) for idx in range(df.shape[0]): vandmask = np.bitwise_and(df[idx, 1:], mask) # numba fail with # if np.all(vandmask == mask): vandm_equal_m = 1 for i, val in enumerate(vandmask): if val != mask[i]: vandm_equal_m = 0 break if vandm_equal_m == 1: if df[idx, 0] == 1: posneg[idx, 0] = 1 else: posneg[idx, 1] = 1 pos = list(np.nonzero(posneg[:, 0])[0]) neg = list(np.nonzero(posneg[:, 1])[0]) return (pos, neg) ``` But it is still 3 times slower than the numpy one (~0.06s vs ~0.02s). ``` if __name__ == '__main__': df = np.array(np.random.randint(256, size=(int(500e3), 4))) df[:, 0] = np.random.randint(2, size=(1, df.shape[0])) # set target to 0 or 1 mask = np.array([132, 20, 192]) start = time() pos, neg = get_pos_neg_bitwise(df, mask) msg = '==> pos, neg made; p={}, n={} in [{:.4} s] numpy' print msg.format(len(pos), len(neg), time() - start) start = time() msg = '==> pos, neg made; p={}, n={} in [{:.4} s] numba' pos, neg = numba_get_pos_neg_bitwise(df, mask) print msg.format(len(pos), len(neg), time() - start) start = time() pos, neg = numba_get_pos_neg_bitwise(df, mask) print msg.format(len(pos), len(neg), time() - start) ``` Am I missing something?
``` In [1]: %run numba_test2.py ==> pos, neg made; p=3852, n=3957 in [0.02306 s] numpy ==> pos, neg made; p=3852, n=3957 in [0.3492 s] numba ==> pos, neg made; p=3852, n=3957 in [0.06425 s] numba In [1]: ```
Try moving the call to `np.bitwise_and` outside of the loop since numba can't do anything to speed it up: ``` @jit(nopython=True) def numba_get_pos_neg_bitwise(df, mask): posneg = np.zeros((df.shape[0], 2)) vandmask = np.bitwise_and(df[:, 1:], mask) for idx in range(df.shape[0]): # numba fail with # if np.all(vandmask == mask): vandm_equal_m = 1 for i, val in enumerate(vandmask[idx]): if val != mask[i]: vandm_equal_m = 0 break if vandm_equal_m == 1: if df[idx, 0] == 1: posneg[idx, 0] = 1 else: posneg[idx, 1] = 1 pos = np.nonzero(posneg[:, 0])[0] neg = np.nonzero(posneg[:, 1])[0] return (pos, neg) ``` Then I get timings of: ``` ==> pos, neg made; p=3920, n=4023 in [0.02352 s] numpy ==> pos, neg made; p=3920, n=4023 in [0.2896 s] numba ==> pos, neg made; p=3920, n=4023 in [0.01539 s] numba ``` So now numba is a bit faster than numpy. Also, it didn't make a huge difference, but in your original function you return numpy arrays, while in the numba version you were converting `pos` and `neg` to lists. In general though, I would guess that the function calls are dominated by numpy functions, which numba can't speed up, and the numpy version of the code is already using fast vectorization routines. **Update:** You can make it faster by removing the `enumerate` call and index directly into the array instead of grabbing a slice. 
Also splitting `pos` and `neg` into separate arrays helps to avoid slicing along a non-contiguous axis in memory: ``` @jit(nopython=True) def numba_get_pos_neg_bitwise(df, mask): pos = np.zeros(df.shape[0]) neg = np.zeros(df.shape[0]) vandmask = np.bitwise_and(df[:, 1:], mask) for idx in range(df.shape[0]): # numba fail with # if np.all(vandmask == mask): vandm_equal_m = 1 for i in xrange(vandmask.shape[1]): if vandmask[idx,i] != mask[i]: vandm_equal_m = 0 break if vandm_equal_m == 1: if df[idx, 0] == 1: pos[idx] = 1 else: neg[idx] = 1 pos = np.nonzero(pos)[0] neg = np.nonzero(neg)[0] return pos, neg ``` And timings in an ipython notebook: ``` %timeit pos1, neg1 = get_pos_neg_bitwise(df, mask) %timeit pos2, neg2 = numba_get_pos_neg_bitwise(df, mask) ​ 100 loops, best of 3: 18.2 ms per loop 100 loops, best of 3: 7.89 ms per loop ```
Title: how to get the value of multiple maximas in an array in python
Question 34551710 (score 6, 2015-12-31T21:26:49Z) | Answer 34551818 (score 8, 2015-12-31T21:40:59Z)
Tags: ["python", "numpy"]
I have an array ``` a = [0, 0, 15, 17, 16, 17, 16, 12, 18, 18] ``` I am trying to find the element value that has the `max` count, and if there is a tie, I would like all of the elements that have the same `max` count. As you can see there are two 0s, two 16s, two 17s, two 18s, one 15 and one 12, so I want something that would return `[0, 16, 17, 18]` (order not important, but I do not want the 15 or the 12). I was doing `np.argmax(np.bincount(a))`, but `argmax` only returns one element (per its documentation), so I only get the first one, which is 0. I tried `np.argpartition(values, -4)[-4:]`; that works, but in practice I would not know that there are 4 elements that have the same count number! (Maybe I am close here!!! The light bulb just went on!!!)
You can use [np.unique](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.unique.html) to get the counts and an array of the unique elements then pull the elements whose count is equal to the max: ``` import numpy as np a = np.array([0, 0, 15, 17, 16, 17, 16, 12, 18, 18]) un, cnt = np.unique(a, return_counts=True) print(un[cnt == cnt.max()]) [ 0 16 17 18] ``` un are the unique elements, cnt is the frequency/count of each: ``` In [11]: a = np.array([0, 0, 15, 17, 16, 17, 16, 12, 18, 18]) In [12]: un, cnt = np.unique(a, return_counts=True) In [13]: un, cnt Out[13]: (array([ 0, 12, 15, 16, 17, 18]), array([2, 1, 1, 2, 2, 2])) ``` `cnt == cnt.max()` will give us the mask to pull the elements that are equal to the max: ``` In [14]: cnt == cnt.max() Out[14]: array([ True, False, False, True, True, True], dtype=bool) ```
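If numpy is not available, the same idea can be sketched with the stdlib `collections.Counter` (an alternative, not part of the np.unique approach above):

```python
from collections import Counter

a = [0, 0, 15, 17, 16, 17, 16, 12, 18, 18]

cnt = Counter(a)                 # maps each element to its count
top = max(cnt.values())          # the highest count seen
most_common = sorted(k for k, v in cnt.items() if v == top)
print(most_common)  # [0, 16, 17, 18]
```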
Trying to set function as parameter
34,555,045
2
2016-01-01T09:40:27Z
34,555,065
8
2016-01-01T09:43:38Z
[ "python" ]
I'm trying to create a function that gets a number and returns a function. For example: ``` >>> const_function(2)(2) 2 >>> const_function(4)(2) 4 ``` How can I return a function as output? I've tried to write this: ``` def const_function(c): def helper(x): return c return helper(x) ``` Why is this not working?
You're returning the result of calling the function. If you want to return the function itself, simply refer to it without calling it: ``` def const_function(c): def helper(x): return c return helper # don't call it ``` Now you can use it with the desired results: ``` >>> const_function(2) <function const_function.<locals>.helper at 0x0000000002B38D90> >>> const_function(2)(2) 2 >>> const_function(4)(2) 4 ```
Pandas: read_html
34,555,135
4
2016-01-01T09:55:22Z
34,555,201
7
2016-01-01T10:05:21Z
[ "python", "pandas" ]
I'm trying to extract US states from a wiki URL, for which I'm using Python Pandas. ``` import pandas as pd import html5lib f_states = pd.read_html('https://simple.wikipedia.org/wiki/List_of_U.S._states') ``` However, the above code is giving me an error: > ImportError Traceback (most recent call last) > in () > 1 import pandas as pd > ----> 2 f\_states = pd.read\_html('<https://simple.wikipedia.org/wiki/List_of_U.S._states>') > > if flavor in ('bs4', 'html5lib'): > 662 if not \_HAS\_HTML5LIB: > --> 663 raise ImportError("html5lib not found, please install it") > 664 if not \_HAS\_BS4: > 665 raise ImportError("BeautifulSoup4 (bs4) not found, please install it") > ImportError: html5lib not found, please install it I installed html5lib and beautifulsoup4 as well, but it is not working. Can someone help please?
Running Python 3.4 on a Mac with a new pyvenv: ``` pip install pandas pip install lxml pip install html5lib pip install BeautifulSoup4 ``` Then ran your example: ``` import pandas as pd import html5lib f_states= pd.read_html('https://simple.wikipedia.org/wiki/List_of_U.S._states') ``` All works...
Is it possible to use functions defined in the shell from python?
34,556,609
4
2016-01-01T13:41:24Z
34,556,639
7
2016-01-01T13:45:48Z
[ "python", "shell", "subprocess" ]
Example: ``` #!/bin/bash function my_test(){ echo this is a test $1 } my_test 1 python -c "from subprocess import check_output; print(check_output('my_test 2', shell=True))" ``` output: ``` this is a test 1 /bin/sh: my_test: command not found Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/lib/python3.5/subprocess.py", line 629, in check_output **kwargs).stdout File "/usr/lib/python3.5/subprocess.py", line 711, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command 'my_test 2' returned non-zero exit status 127 ```
You need to export the shell function, so it will be inherited by child shells (`export -f` and the `function` keyword are bash features, so the shebang should be bash, not sh). ``` #!/bin/bash function my_test(){ echo this is a test $1 } my_test 1 export -f my_test python -c "from subprocess import check_output; print(check_output('my_test 2', shell=True))" ```
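A related gotcha on the Python side: `shell=True` hands the command to `/bin/sh`, and on systems where `sh` is dash (e.g. Debian/Ubuntu) exported functions are not picked up even after `export -f`, because exported functions are a bash feature. Subprocess can be pointed at bash explicitly (a sketch; the exact shell paths depend on the system):

```python
import os
from subprocess import check_output

# With shell=True, the command string is handed to /bin/sh by default.
print(check_output('echo $0', shell=True))  # typically b'/bin/sh\n'

# The executable argument swaps in a different shell, so exported bash
# functions become visible to the child process.
if os.path.exists('/bin/bash'):
    print(check_output('echo $0', shell=True, executable='/bin/bash'))
```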
What's the complexity of this function?
34,560,530
4
2016-01-01T22:08:24Z
34,560,546
7
2016-01-01T22:11:25Z
[ "python", "algorithm", "performance" ]
I'm practising in Codeacademy and I have to make the following function: Define a function called anti\_vowel that takes one string, text, as input and returns the text with all of the vowels removed This is my solution. ``` def anti_vowel(text): md = "" for ch in text: if ch not in "aeiouAEIOU": md = md + ch return md ``` It works well but I'm wondering what the complexity of the function is. I think it's `O(nk)` where n:="length of text" and k:="length of "aeoiuAEIOU"". I take one element of text and compare it with all vowels, that takes O(k) time. But I repeat that n times, so I do it all in `O(nk)`. Is my analysis correct? How could I improve my function? Could it be linear?
Big-O complexity doesn't work like that. `k` (the length of the vowels string) is a constant; it doesn't change depending on the length of the input. So we discount it in calculating the complexity. Your function is just O(n), i.e. linear complexity.
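As for the "how could I improve it" part: the asymptotic class stays O(n), but the constant factors can be reduced. A sketch using a set (average O(1) membership test) and `str.join` (which avoids the quadratic worst case of repeated string concatenation):

```python
VOWELS = set("aeiouAEIOU")  # set membership test is O(1) on average

def anti_vowel(text):
    # join builds the result in one pass instead of copying md repeatedly
    return "".join(ch for ch in text if ch not in VOWELS)

print(anti_vowel("Hey look Words!"))  # Hy lk Wrds!
```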
WebDriver click() vs JavaScript click()
34,562,061
52
2016-01-02T02:36:35Z
34,567,956
13
2016-01-02T16:20:26Z
[ "javascript", "python", "selenium", "selenium-webdriver", "protractor" ]
**The Story:** Here on StackOverflow, I've seen users reporting that they cannot click an element via the selenium WebDriver "click" command and can work around it with a JavaScript click by executing a script. Example in Python: ``` element = driver.find_element_by_id("myid") driver.execute_script("arguments[0].click();", element) ``` Example in WebDriverJS/Protractor: ``` var elm = $("#myid"); browser.executeScript("arguments[0].click();", elm.getWebElement()); ``` **The Question:** Why does clicking "via JavaScript" work when a regular WebDriver click does not? When exactly does this happen, and what is the downside of this workaround (if any)? I personally used this workaround without fully understanding why I have to do it and what problems it can lead to.
**NOTE: let's call an end-user click a 'click', and a click via JS a 'js click'.** > Why does clicking "via JavaScript" work when a regular WebDriver click does not? There are 2 cases where this happens: # I. **If you are using PhantomJS** Then this is the most commonly known behavior of `PhantomJS`. Some elements are sometimes not clickable, for example `<div>`. This is because `PhantomJS` was originally made for simulating the engine of browsers (like initial HTML + CSS -> computing CSS -> rendering). But it was not meant to be interacted with in an end-user's way (viewing, clicking, dragging). Therefore `PhantomJS` only partially supports end-user interaction. **WHY DOES JS CLICK WORK?** As for either click, they both mean click. It is like a gun with **1 barrel** and **2 triggers**. One from the viewport, one from JS. Since `PhantomJS` is great at simulating a browser's engine, a JS click should work perfectly. # **II. The "click" event handler got bound at a bad point in time.** For example, we have a `<div>` * -> We do some calculation * -> then we bind the click event to the `<div>`. * -> Plus some bad angular coding (e.g. not handling the scope's cycle properly) We may end up with the same result: the click won't work, because WebdriverJS is trying to click on the element while it has no click event handler yet. **WHY DOES JS CLICK WORK?** A js click is like injecting JS directly into the browser. This is possible in 2 ways. **First** is through the devtools console (yes, WebdriverJS does communicate with devtools' console). **Second** is injecting a `<script>` tag directly into the HTML. The behavior will differ for each browser. But regardless, these methods are more complicated than clicking the button. A click uses what is already there (the end-user's click); a js click goes through the backdoor. And a js click will appear to be an asynchronous task.
This is related to a somewhat complex topic of '**browser asynchronous tasks and CPU task scheduling**' (I read an article about it a while back but can't find it again). In short, a js click mostly needs to wait for a cycle of the **CPU's task scheduling**, and it runs a bit more slowly after the binding of the click event. *(You will recognize this case when you find the element sometimes clickable, sometimes not.)* > When exactly is this happening and what is the downside of this > workaround (if any)? => As mentioned above, both serve one purpose, but they differ in which entrance they use: * Click: uses what the browser provides by default. * JS click: goes through the backdoor. => For performance, it is hard to say because it depends on the browser. But generally: * Click: isn't necessarily faster, it is only placed higher in the schedule of CPU execution tasks. * JS click: isn't necessarily slower, it is only placed in the last position of the CPU's task schedule. => Downsides: * Click: doesn't seem to have any downside, unless you are using PhantomJS. * JS click: very bad for your tests' health. You may accidentally click on something that isn't actually there in the view. When you use this, make sure the element is there and is visible and clickable from the point of view of an end-user. # P.S. if you are looking for a solution. * Using PhantomJS? I suggest using headless Chrome instead. Yes, you can set up headless Chrome on Ubuntu. It runs just like Chrome, except it does not have a view and is less buggy than PhantomJS. * Not using PhantomJS but still having problems? I suggest using Protractor's ExpectedConditions with `browser.wait()` ([check this for more information](http://stackoverflow.com/questions/34481077/how-to-wait-for-a-event-to-be-processed-in-a-protractor-end-to-end-test/34481764#34481764)) *(I wanted to make this short, but it ended up badly. Anything related to theory is complicated to explain...)*
WebDriver click() vs JavaScript click()
34,562,061
52
2016-01-02T02:36:35Z
34,796,379
43
2016-01-14T17:49:56Z
[ "javascript", "python", "selenium", "selenium-webdriver", "protractor" ]
**The Story:** Here on StackOverflow, I've seen users reporting that they cannot click an element via the selenium WebDriver "click" command and can work around it with a JavaScript click by executing a script. Example in Python: ``` element = driver.find_element_by_id("myid") driver.execute_script("arguments[0].click();", element) ``` Example in WebDriverJS/Protractor: ``` var elm = $("#myid"); browser.executeScript("arguments[0].click();", elm.getWebElement()); ``` **The Question:** Why does clicking "via JavaScript" work when a regular WebDriver click does not? When exactly does this happen, and what is the downside of this workaround (if any)? I personally used this workaround without fully understanding why I have to do it and what problems it can lead to.
Contrary to what the [currently accepted answer](http://stackoverflow.com/a/34567956/1906307) suggests, there's nothing specific to PhantomJS when it comes to the difference between having WebDriver do a click and doing it in JavaScript. ### The Difference The essential difference between the two methods is common to all browsers and can be explained pretty simply: * WebDriver: **When WebDriver does the click, it attempts as best as it can to simulate what happens when a real user uses the browser.** Suppose you have an element A which is a button that says "Click me" and an element B which is a `div` element which is transparent but has its dimensions and `zIndex` set so that it completely covers A. Then you tell WebDriver to click A. WebDriver will simulate the click so that B receives the click *first*. Why? Because B covers A, and if a user were to try to click on A, then B would get the event first. Whether or not A would eventually get the click event depends on how B handles the event. At any rate, the behavior with WebDriver in this case is the same as when a real user tries to click on A. * JavaScript: Now, suppose you use JavaScript to do `A.click()`. **This method of clicking does not reproduce what really happens when the user tries to click A.** JavaScript sends the `click` event directly to A, and B will not get any event. ### Why a JavaScript Click Works When a WebDriver Click Does Not? As I mentioned above, WebDriver will try to simulate as best it can what happens when a real user is using a browser. The fact of the matter is that the DOM can contain elements that a user cannot interact with, and WebDriver won't allow you to click on these elements. Besides the overlapping case I mentioned, this also entails that invisible elements cannot be clicked.
A common case I see in Stack Overflow questions is someone who is trying to interact with a GUI element that already exists in the DOM but becomes visible only when some other element has been manipulated. This sometimes happens with dropdown menus: you have to first click on the button that brings up the dropdown before a menu item can be selected. If someone tries to click the menu item before the menu is visible, WebDriver will balk and say that the element cannot be manipulated. **If the person then tries to do it with JavaScript, it will work because the event is delivered directly to the element, irrespective of visibility.** ### When Should You Use JavaScript for Clicking? If you are using Selenium for *testing an application*, my answer to this question is **"almost never".** By and large, your Selenium test should reproduce what a user would do with the browser. Taking the example of the drop down menu: a test should click on the button that brings up the drop down first, and then click on the menu item. If there is a problem with the GUI because the button is invisible, or the button fails to show the menu items, or something similar, then your test will fail and you'll have detected the bug. **If you use JavaScript to click around, you won't be able to detect these bugs through automated testing.** I say "almost never" because there may be exceptions where it makes sense to use JavaScript. They should be very rare, though. If you are using Selenium for *scraping sites*, then it is not as critical to attempt to reproduce user behavior. So using JavaScript to bypass the GUI is less of an issue.
Pandas: AttributeError: 'module' object has no attribute '__version__'
34,564,249
3
2016-01-02T09:05:21Z
34,568,955
7
2016-01-02T17:59:36Z
[ "python", "pandas", "import" ]
When I try to import pandas into Python I get this error: ``` >>> import pandas Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/__init__.py", line 44, in <module> from pandas.core.api import * File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/core/api.py", line 9, in <module> from pandas.core.groupby import Grouper File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/core/groupby.py", line 17, in <module> from pandas.core.frame import DataFrame File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/core/frame.py", line 41, in <module> from pandas.core.series import Series File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/core/series.py", line 2909, in <module> import pandas.tools.plotting as _gfx File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/tools/plotting.py", line 135, in <module> if _mpl_ge_1_5_0(): File "/Users/robertdefilippi/miniconda2/lib/python2.7/site-packages/pandas/tools/plotting.py", line 130, in _mpl_ge_1_5_0 return (matplotlib.__version__ >= LooseVersion('1.5') AttributeError: 'module' object has no attribute '__version__' ``` But when I check if pandas is installed: ``` me$ conda install pandas Fetching package metadata: .... Solving package specifications: ..................... # All requested packages already installed. # packages in environment at /Users/me/miniconda2: # pandas 0.17.1 np110py27_0 ``` So I don't know what is wrong? What is going on with my pandas? **Edit** ``` $ pip list |grep matplotlib $ conda list matplotlib # packages in environment at /Users/me/miniconda2: # matplotlib 1.5.0 np110py27_0 ``` For some reason there was no output to `pip list |grep matplotlib` **Edit2** I wanted to see if there was a different path to the executables `ipython` and `python`. 
So I ran this: ``` $ python >>> import sys >>> print sys.executable /Users/me/miniconda2/bin/python ``` However in IPython, I get this: ``` $ ipython notebook >>> import sys >>> print sys.executable /usr/local/opt/python/bin/python2.7 ``` Could that be the problem?
Remove (or rename) the file `matplotlib.py` from your current working directory. It shadows the real library with the same name.
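A quick way to confirm this kind of shadowing is to check where the module was actually imported from — if `__file__` points into your working directory instead of site-packages, a local file is shadowing the library. (Shown here with the stdlib `json` module as a stand-in; substitute the module you suspect, e.g. `matplotlib`.)

```python
import json  # stand-in for the suspected module, e.g. matplotlib

# For a healthy install this points into the stdlib or site-packages,
# not into your current working directory.
print(json.__file__)
```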
Why does Python allow function calls with wrong number of arguments?
34,567,770
75
2016-01-02T15:58:12Z
34,567,789
140
2016-01-02T16:00:29Z
[ "python" ]
Python is my first dynamic language. I recently coded a function call incorrectly supplying a wrong number of arguments. This failed with an exception at the time that function was called. I expected that even in a dynamic language, this kind of error can be detected when the source file is parsed. I understand that the **type** of actual arguments is not known until the function is called, because the same variable may contain values of any type at different times. But the **number** of arguments is known as soon as the source file is parsed. It is not going to change while the program is running. ### So that this is not a philosophical question To keep this in scope of Stack Overflow, let me phrase the question like this. Is there some feature, that Python offers, that requires it to delay checking the number of arguments in a function call until the code actually executes?
Python cannot know up-front what object you'll end up calling, because being dynamic, you can *swap out the function object*. At any time. And each of these objects can have a different number of arguments. Here is an extreme example: ``` import random def foo(): pass def bar(arg1): pass def baz(arg1, arg2): pass the_function = random.choice([foo, bar, baz]) print(the_function()) ``` The above code has a 2 in 3 chance of raising an exception. But Python cannot know a-priori if that'll be the case or not! And I haven't even started with dynamic module imports, dynamic function generation, other callable objects (any object with a `__call__` method can be called), or catch-all arguments (`*args` and `**kwargs`). But to make this extra clear, you state in your question: > It is not going to change while the program is running. This is not the case, not in Python, once the module is loaded you can delete, add or replace any object in the module namespace, including function objects.
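That said, if you want to check arity yourself at run time — doing dynamically what a static check cannot do up-front — the `inspect` module can report a function's parameters (a sketch; `inspect.signature` exists since Python 3.3):

```python
import inspect

def bar(arg1):
    pass

sig = inspect.signature(bar)
print(len(sig.parameters))  # 1

# A guarded call: only invoke if the argument count matches.
args = ()
if len(sig.parameters) == len(args):
    bar(*args)
else:
    print("arity mismatch, not calling")  # taken here, since bar wants 1 arg
```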
Why does Python allow function calls with wrong number of arguments?
34,567,770
75
2016-01-02T15:58:12Z
34,567,801
33
2016-01-02T16:02:12Z
[ "python" ]
Python is my first dynamic language. I recently coded a function call incorrectly supplying a wrong number of arguments. This failed with an exception at the time that function was called. I expected that even in a dynamic language, this kind of error can be detected when the source file is parsed. I understand that the **type** of actual arguments is not known until the function is called, because the same variable may contain values of any type at different times. But the **number** of arguments is known as soon as the source file is parsed. It is not going to change while the program is running. ### So that this is not a philosophical question To keep this in scope of Stack Overflow, let me phrase the question like this. Is there some feature, that Python offers, that requires it to delay checking the number of arguments in a function call until the code actually executes?
The number of arguments being passed is known, but not the function which is actually called. See this example: ``` def foo(): print("I take no arguments.") def bar(): print("I call foo") foo() ``` This might seem obvious, but let us put these into a file called "fubar.py". Now, in an interactive Python session, do this: ``` >>> import fubar >>> fubar.foo() I take no arguments. >>> fubar.bar() I call foo I take no arguments. ``` That was obvious. Now for the fun part. We’ll define a function which requires a non-zero amount of arguments: ``` >>> def notfoo(a): ... print("I take arguments!") ... ``` Now we do something which is called *[monkey patching](https://en.wikipedia.org/wiki/Monkey_patch)*. We can in fact *replace* the function `foo` in the `fubar` module: ``` >>> fubar.foo = notfoo ``` Now, when we call `bar`, a `TypeError` will be raised; the name `foo` now refers to the function we defined above instead of the original function formerly-known-as-`foo`. ``` >>> fubar.bar() I call foo Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/horazont/tmp/fubar.py", line 6, in bar foo() TypeError: notfoo() missing 1 required positional argument: 'a' ``` So even in a situation like this, where it might seem very obvious that the called function `foo` takes no arguments, Python can only know that it is actually the `foo` function which is being called when it executes that source line. This is a property of Python which makes it powerful, but it also causes some of its slowness. In fact, [making modules read-only to improve performance has been discussed](https://mail.python.org/pipermail/python-ideas/2014-May/027870.html) on the python-ideas mailinglist some time ago, but it didn't gain any real support.
Improve performance of operation on numpy trigonometric functions
34,569,186
4
2016-01-02T18:22:16Z
34,569,565
7
2016-01-02T18:59:33Z
[ "python", "performance", "numpy", "trigonometry" ]
I have a rather large code which I need to optimize. After some analysis using `time.time()`, I've found that the line that takes up the most processing time (it is executed thousands of times) is this one: ``` A = np.cos(a) * np.cos(b) - np.sin(a) * np.sin(b) * np.sin(c - d) ``` where all the variables can be randomly defined with: ``` N = 5000 a = np.random.uniform(0., 10., N) b = np.random.uniform(0., 50., N) c = np.random.uniform(0., 30., N) d = np.random.uniform(0., 25., N) ``` Is there a way to improve the performance of the calculation of `A`? As I'm already using `numpy`, I'm pretty much out of ideas.
By using the [product-to-sum trig. identities](https://en.wikipedia.org/wiki/List_of_trigonometric_identities#Product-to-sum_and_sum-to-product_identities), you can reduce the number of trig. function calls. In the following, `func1` and `func2` compute the same value, but `func2` makes fewer calls to trig. functions. ``` import numpy as np def func1(a, b, c, d): A = np.cos(a) * np.cos(b) - np.sin(a) * np.sin(b) * np.sin(c - d) return A def func2(a, b, c, d): s = np.sin(c - d) A = 0.5*((1 - s)*np.cos(a - b) + (1 + s)*np.cos(a + b)) return A ``` Here's a timing comparison with `N = 5000`: ``` In [48]: %timeit func1(a, b, c, d) 1000 loops, best of 3: 374 µs per loop In [49]: %timeit func2(a, b, c, d) 1000 loops, best of 3: 241 µs per loop ```
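A quick sanity check of the algebra behind `func2` — using `cos a cos b = (cos(a-b) + cos(a+b))/2` and `sin a sin b = (cos(a-b) - cos(a+b))/2` — written with plain `math` so it runs without numpy:

```python
import math

def orig(a, b, c, d):
    return math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b) * math.sin(c - d)

def rewritten(a, b, c, d):
    s = math.sin(c - d)
    return 0.5 * ((1 - s) * math.cos(a - b) + (1 + s) * math.cos(a + b))

# The two forms should agree to floating-point precision.
for vals in [(0.3, 1.2, 2.5, 0.7), (5.0, 4.1, 0.0, 3.3), (1.0, 1.0, 1.0, 1.0)]:
    assert abs(orig(*vals) - rewritten(*vals)) < 1e-12

print("identity holds")
```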
Python: writing combinations of letters and numbers to a file
34,570,624
2
2016-01-02T20:54:11Z
34,570,733
7
2016-01-02T21:06:23Z
[ "python", "string", "performance", "file-io" ]
I have been looking at other SO posts in order to create a program that will generate a list of combinations (letters + numbers) given certain parameters, I have gotten as far as this: ``` from itertools import product from string import * keywords = [''.join(i) for i in product(ascii_letters + digits, repeat = 3)] file = open("file.txt", "w") for item in keywords: file.write("%s\n" % item) file.close() ``` This program works fine if the **repeat** parameter is kept to 3/4, but if raised to 5 or above then the program does not finish - it doesn't crash, just never seems to finish. I am guessing it is a performance issue, however I am not sure. If someone could give a more efficient program it would be most appreciated. Secondly, I want the program to output both: * aec * cea With this current code, it will only output the first.
`product(ascii_letters + digits, repeat=5)` generates all 916,132,832 possibilities for the strings (`62**5`). Your current code is making a list of all of these string objects in memory before writing to the file. This might be too much for your system since each five-character string object takes roughly 50 bytes (in Python 3, slightly less in Python 2). This means you're making about 44GB of Python strings for your list. Instead, use a *generator* expression for `keywords` to avoid holding all the strings in memory (just use `(...)` rather than `[...]`): ``` keywords = (''.join(i) for i in product(ascii_letters + digits, repeat=5)) ``` You can then iterate and write the strings to a file as before: ``` with open("file.txt", "w") as f: for item in keywords: f.write(item) f.write('\n') ``` (also, `product(ascii_letters + digits, repeat=3)` *will* generate both 'aec' and 'cea'.)
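`file.writelines` also accepts a lazy iterable, so the whole pipeline can stay a generator from end to end (sketched here with `repeat=2` to keep the demo small — the pattern is identical for `repeat=5`):

```python
from itertools import product
from string import ascii_letters, digits

# A generator end to end: nothing is materialized except the current string.
keywords = (''.join(i) for i in product(ascii_letters + digits, repeat=2))

with open("file.txt", "w") as f:
    f.writelines(item + '\n' for item in keywords)  # 62**2 = 3844 lines
```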
Labels for clustermap in seaborn?
34,572,177
15
2016-01-03T00:28:05Z
34,697,479
17
2016-01-09T18:44:41Z
[ "python", "matplotlib", "machine-learning", "artificial-intelligence", "seaborn" ]
I have several questions about labeling for `clustermap` in `seaborn`. First, is it possible to extract the distance values for the hierarchical clustering, and plot the value on the tree structure visualization (maybe only the first three levels)? Here is my example code for creating a clustermap plot: ``` import pandas as pd import numpy as np import seaborn as sns get_ipython().magic(u'matplotlib inline') m = np.random.rand(50, 50) df = pd.DataFrame(m, columns=range(4123, 4173), index=range(4123, 4173)) sns.clustermap(df, metric="correlation") ``` [![enter image description here](http://i.stack.imgur.com/KbaBP.png)](http://i.stack.imgur.com/KbaBP.png) The other two questions are: - How to rotate the y labels since they overlap. - How to move the color bar to the bottom or right. (There was a [question](http://stackoverflow.com/questions/27037241/changing-the-rotation-of-tick-labels-in-seaborn-heatmap) for heatmap, but it does not work for my case. It also does not address the color bar position.)
I had the exact same issue with the labels on the y-axis being rotated and found a solution. The issue is that if you do `plt.yticks(rotation=0)` like suggested in the question you referenced, it will rotate the labels on your colorbar due to the way `ClusterGrid` works. To solve it and rotate the right labels, you need to reference the `Axes` from the underlying `Heatmap` and rotate these: ``` cg = sns.clustermap(df, metric="correlation") plt.setp(cg.ax_heatmap.yaxis.get_majorticklabels(), rotation=0) ``` For your other question about the colorbar placement, I don't think this is supported at the moment, as indicated by [this Github issue](https://github.com/mwaskom/seaborn/issues/589) unfortunately. And finally for the hierarchical clustering distance values, you can access the linkage matrices for rows or columns with: ``` cg = sns.clustermap(df, metric="correlation") cg.dendrogram_col.linkage # linkage matrix for columns cg.dendrogram_row.linkage # linkage matrix for rows ```
Python, assign function to variable, change optional argument's value
34,573,540
2
2016-01-03T05:11:10Z
34,573,570
8
2016-01-03T05:15:38Z
[ "python", "function" ]
Is it possible to assign a function to a variable with modified default arguments? To make it more concrete, I'll give an example. The following obviously doesn't work in the current form and is only meant to show what I need: ``` def power(a, pow=2): ret = 1 for _ in range(pow): ret *= a return ret cube = power(pow=3) ``` And the result of `cube(5)` should be `125`.
[`functools.partial`](https://docs.python.org/2/library/functools.html#functools.partial) to the rescue: > Return a new partial object which when called will behave like func called with the positional arguments args and keyword arguments keywords. If more arguments are supplied to the call, they are appended to args. If additional keyword arguments are supplied, they extend and override keywords. ``` from functools import partial cube = partial(power, pow=3) ``` Demo: ``` >>> from functools import partial >>> >>> def power(a, pow=2): ... ret = 1 ... for _ in range(pow): ... ret *= a ... return ret ... >>> cube = partial(power, pow=3) >>> >>> cube(5) 125 ```
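The same effect can be had without `functools` by returning a closure — a sketch shown for comparison, not as a replacement for `partial`:

```python
def power(a, pow=2):
    ret = 1
    for _ in range(pow):
        ret *= a
    return ret

def fix_pow(pow):
    # wrapper closes over pow, so each call of fix_pow makes a new function
    def wrapper(a):
        return power(a, pow)
    return wrapper

cube = fix_pow(3)
print(cube(5))  # 125
```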
Finding all occurrences of alternating digits using regular expressions
34,573,648
9
2016-01-03T05:28:35Z
34,573,677
7
2016-01-03T05:32:57Z
[ "python", "regex", "string" ]
I would like to find all alternating digits in a string using regular expressions. An alternating digit is defined as two equal digits having a digit in between; for example, 1212 contains 2 alternations (121 and 212) and 1111 contains 2 alternations as well (111 and 111). I have the following regular expression code: ``` s = "1212" re.findall(r'(\d)(?:\d)(\1)+', s) ``` This works for strings like "121656", but not "1212". This is a problem to do with overlapping matches I think. How can I deal with that?
``` (?=((\d)\d\2)) ``` Use [lookahead](http://www.regular-expressions.info/lookaround.html) to get all overlapping matches. Use `re.findall` and get the first element from the tuple. See the demo: <https://regex101.com/r/fM9lY3/54>
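In Python this looks like the following — the lookahead match itself is zero-width, so overlapping triples are all reported; group 1 captures the three-digit window and group 2 the repeated digit:

```python
import re

def alternations(s):
    # findall returns (group1, group2) tuples; keep the full triple (group 1)
    return [m[0] for m in re.findall(r'(?=((\d)\d\2))', s)]

print(alternations("1212"))    # ['121', '212']
print(alternations("1111"))    # ['111', '111']
print(alternations("121656"))  # ['121', '656']
```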
Reverse the order of legend
34,576,059
3
2016-01-03T11:34:42Z
34,576,778
7
2016-01-03T12:58:15Z
[ "python", "matplotlib", "reverse", "legend" ]
I use the following code to plot the bar graph. Need to present a legend in reverse order. How can I do it? ``` colorsArr = plt.cm.BuPu(np.linspace(0, 0.5, len(C2))) p = numpy.empty(len(C2), dtype=object) plt.figure(figsize=(11,11)) prevBar = 0 for index in range(len(C2)): plt.bar(ind, C2[index], width, bottom=prevBar, color=colorsArr[index], label=C0[index]) prevBar = prevBar + C2[index] # positions of the x-axis ticks (center of the bars as bar labels) tick_pos = [i+(width/2) for i in ind] plt.ylabel('Home Category') plt.title('Affinity - Retail Details(Home category)') # set the x ticks with names plt.xticks(tick_pos, C1) plt.yticks(np.arange(0,70000,3000)) plt.legend(title="Line", loc='upper left' ) # Set a buffer around the edge plt.xlim(-width*2, width*2) plt.show() ```
You could call ``` handles, labels = ax.get_legend_handles_labels() ax.legend(handles[::-1], labels[::-1], title='Line', loc='upper left') ``` --- ``` import numpy as np import matplotlib.pyplot as plt np.random.seed(2016) C0 = list('ABCDEF') C2 = np.random.randint(20000, size=(len(C0), 3)) width = 1.0 C1 = ['foo', 'bar', 'baz'] ind = np.linspace(-width, width, len(C1)) colorsArr = plt.cm.BuPu(np.linspace(0, 0.5, len(C2))) fig = plt.figure(figsize=(11,11)) ax = fig.add_subplot(1, 1, 1) prevBar = 0 for height, color, label in zip(C2, colorsArr, C0): h = ax.bar(ind, height, width, bottom=prevBar, color=color, label=label) prevBar = prevBar + height plt.ylabel('Home Category') plt.title('Affinity - Retail Details(Home category)') # positions of the x-axis ticks (center of the bars as bar labels) tick_pos = [i+(width/2.0) for i in ind] # set the x ticks with names plt.xticks(tick_pos, C1) plt.yticks(np.arange(0,70000,3000)) handles, labels = ax.get_legend_handles_labels() ax.legend(handles[::-1], labels[::-1], title='Line', loc='upper left') plt.show() ``` [![enter image description here](http://i.stack.imgur.com/mvQeG.png)](http://i.stack.imgur.com/mvQeG.png)
Keep firstly found duplicate items in a list
34,576,676
7
2016-01-03T12:46:49Z
34,576,721
7
2016-01-03T12:52:26Z
[ "python", "list", "python-2.7" ]
I have a list that looks like this: ``` [(1, 0.3), (3, 0.2), (3, 0.15), (1, 0.07), (1, 0.02), (2, 0.01)] ``` I want to keep the firstly found duplicate items in this list, based on the first item in every tuple: ``` [(1, 0.3), (3, 0.2), (2, 0.01)] ``` Is there an efficient way to do this?
If order of the resulting list does not matter, only that it contains the first entry from the original list for each tuple: reverse the list first, then pass it through `dict` to remove duplicates and keep the *last* entry for each key (the *first* in the original list, since it’s been reversed): ``` >>> items = [(1, 0.3), (3, 0.2), (3, 0.15), (1, 0.07), (1, 0.02), (2, 0.01)] >>> list(dict(reversed(items)).items()) [(1, 0.3), (2, 0.01), (3, 0.2)] ``` If order of the resulting list matters, see Padraic’s answer :)
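If the order of the resulting list *does* matter, a single pass with a set of seen keys keeps the first tuple for each key in its original position (a sketch of the order-preserving approach):

```python
items = [(1, 0.3), (3, 0.2), (3, 0.15), (1, 0.07), (1, 0.02), (2, 0.01)]

seen = set()
result = []
for key, value in items:
    if key not in seen:        # only the first occurrence of each key survives
        seen.add(key)
        result.append((key, value))

print(result)  # [(1, 0.3), (3, 0.2), (2, 0.01)]
```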
How to remove elements from a Python dictionary based on elements in a list?
34,580,845
3
2016-01-03T19:43:05Z
34,580,877
7
2016-01-03T19:45:54Z
[ "python", "list", "dictionary" ]
I have a list of tuples: ``` lst=[(6, 'C'), (6, 'H'), (2, 'C'), (2, 'H')] ``` And a dictionary: ``` dct={'6C': (6, 'C'), '6H': (6, 'H'), '9D': (9, 'D'), '10D': (10, 'D'), '11S': (11, 'S'), '2C': (2, 'C'), '2H': (2, 'H')} ``` How can I remove the elements from the dictionary that are in the list? In this example my desired output would be: ``` dct2={'9D': (9, 'D'), '10D': (10, 'D'), '11S': (11, 'S')} ```
I would use a dictionary comprehension to keep the key/value pairs whose values aren't found in the list: ``` new_dict = {k: v for k, v in old_dict.items() if v not in the_list} # filter from the list ```
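Applied to the data from the question:

```python
lst = [(6, 'C'), (6, 'H'), (2, 'C'), (2, 'H')]
dct = {'6C': (6, 'C'), '6H': (6, 'H'), '9D': (9, 'D'), '10D': (10, 'D'),
       '11S': (11, 'S'), '2C': (2, 'C'), '2H': (2, 'H')}

# keep only the entries whose value is not in the list
dct2 = {k: v for k, v in dct.items() if v not in lst}
print(dct2)  # {'9D': (9, 'D'), '10D': (10, 'D'), '11S': (11, 'S')}
```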
What's the point of Django's collectstatic?
34,586,114
21
2016-01-04T06:57:57Z
34,586,172
14
2016-01-04T07:03:13Z
[ "python", "django", "django-staticfiles", "static-files", "collectstatic" ]
This is probably a stupid question, but it's just not clicking in my head. In Django, the convention is to put all of your static files (i.e css, js) specific to your app into a folder called **static**. So the structure would look like this: ``` mysite/ manage.py mysite/ --> (settings.py, etc) myapp/ --> (models.py, views.py, etc) static/ ``` In mysite/settings.py I have: ``` STATIC_ROOT = 'staticfiles' ``` So when I run the command: ``` python manage.py collectstatic ``` It creates a folder called **staticfiles** at the root level (so same directory as myapp/) What's the point of this? Isn't it just creating a copy of all my static files?
## Collect static files from multiple apps into a single path Well, a single Django *project* may use several *apps*, so while there you only have one `myapp`, it may actually be `myapp1`, `myapp2`, etc. By copying them from inside the individual apps into a single folder, you can point your frontend web server (e.g. nginx) to that single folder `STATIC_ROOT` and serve static files from a single location, rather than configure your web server to serve static files from multiple paths. ## Persistent URLs with [ManifestStaticFilesStorage](https://docs.djangoproject.com/en/1.9/ref/contrib/staticfiles/#manifeststaticfilesstorage) A note about the MD5 hash being appended to the filename for versioning: It's not part of the default behavior of `collectstatic`, as `settings.STATICFILES_STORAGE` defaults to `StaticFilesStorage` (which doesn't do that). The MD5 hash will kick in e.g. if you set it to use `ManifestStaticFilesStorage`, which adds that behavior. > The purpose of this storage is to keep serving the old files in case > some pages still refer to those files, e.g. because they are cached by > you or a 3rd party proxy server. Additionally, it’s very helpful if > you want to apply far future Expires headers to the deployed files to > speed up the load time for subsequent page visits.
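For illustration, a minimal `settings.py` fragment (a sketch, assuming a Django 1.x-era project) enabling both the single collection folder and the hashed filenames described above:

```python
# settings.py (sketch) -- assumes a standard Django 1.x project
STATIC_URL = '/static/'
STATIC_ROOT = 'staticfiles'  # target folder that `collectstatic` fills
# Opt in to MD5-hashed filenames so URLs change whenever file content changes:
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'
```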
What's the point of Django's collectstatic?
34,586,114
21
2016-01-04T06:57:57Z
34,586,179
10
2016-01-04T07:03:39Z
[ "python", "django", "django-staticfiles", "static-files", "collectstatic" ]
This is probably a stupid question, but it's just not clicking in my head. In Django, the convention is to put all of your static files (i.e css, js) specific to your app into a folder called **static**. So the structure would look like this: ``` mysite/ manage.py mysite/ --> (settings.py, etc) myapp/ --> (models.py, views.py, etc) static/ ``` In mysite/settings.py I have: ``` STATIC_ROOT = 'staticfiles' ``` So when I run the command: ``` python manage.py collectstatic ``` It creates a folder called **staticfiles** at the root level (so same directory as myapp/) What's the point of this? Isn't it just creating a copy of all my static files?
In the production installation, you want to have persistent URLs. The URL doesn't change unless the file content changes. This prevents clients from having the wrong version of a CSS or JS file on their computer when opening a web page from Django. Django staticfiles detects file changes and updates URLs accordingly, so that if a CSS or JS file changes the web browser downloads the new version. This is usually achieved by adding an MD5 hash to the filename during the `collectstatic` run. Edit: Also see the related answer about multiple apps.
What's the point of Django's collectstatic?
34,586,114
21
2016-01-04T06:57:57Z
34,586,268
9
2016-01-04T07:11:41Z
[ "python", "django", "django-staticfiles", "static-files", "collectstatic" ]
This is probably a stupid question, but it's just not clicking in my head. In Django, the convention is to put all of your static files (i.e css, js) specific to your app into a folder called **static**. So the structure would look like this: ``` mysite/ manage.py mysite/ --> (settings.py, etc) myapp/ --> (models.py, views.py, etc) static/ ``` In mysite/settings.py I have: ``` STATIC_ROOT = 'staticfiles' ``` So when I run the command: ``` python manage.py collectstatic ``` It creates a folder called **staticfiles** at the root level (so same directory as myapp/) What's the point of this? Isn't it just creating a copy of all my static files?
Django static files can be in many places. A file that is served as `/static/img/icon.png` could [come from many places](https://docs.djangoproject.com/en/1.9/ref/settings/#staticfiles-finders). By default: * `FileSystemFinder` will look for `img/icon.png` in each of `STATICFILES_DIRS`, * `AppDirectoriesFinder` will look for `img/icon.png` in the `static` subfolder in each of your `INSTALLED_APPS`. This allows libraries like Django Admin to add their own static files to your app. Now: this only works if you run `manage.py runserver` with DEBUG=1. When you go live, the Django process will no longer serve the static assets. It would be inefficient to use Django for serving these; there are more specialised tools specifically for that. Instead, you should do something like this: * find all of the static files from every app * build a single directory that contains all of them * upload them somewhere (a `static` directory somewhere on your webserver or a third-party file storage) * configure your webserver (such as nginx) to serve `/static/*` directly from that directory and redirect any other requests to Django. `collectstatic` is a ready-made script that prepares this directory for you, so that you can connect it directly to your deployment script.
List comprehension as substitute for reduce() in Python
34,586,127
15
2016-01-04T06:59:30Z
34,586,159
14
2016-01-04T07:02:26Z
[ "python", "list", "python-3.x", "list-comprehension" ]
The following python tutorial says that: > List comprehension is a complete substitute for the lambda function as well as the functions `map()`, `filter()` and `reduce()`. > > <http://python-course.eu/python3_list_comprehension.php> However, it does not mention an example how a list comprehension can substitute a `reduce()` and I can't think of an example how it should be possible. Can please someone explain how to achieve a reduce-like functionality with list comprehension or confirm that it isn't possible?
A list comprehension is meant to create a new list. Quoting [official documentation](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions), > **List comprehensions provide a concise way to create lists.** Common applications are to make new lists where each element is the result of some operations applied to each member of another sequence or iterable, or to create a subsequence of those elements that satisfy a certain condition. whereas `reduce` is used to reduce an iterable to a single value. Quoting [`functools.reduce`](https://docs.python.org/3/library/functools.html#functools.reduce), > Apply function of two arguments cumulatively to the items of sequence, from left to right, so as to **reduce the sequence to a single value**. So, a list comprehension cannot be used as a drop-in replacement for `reduce`.
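A side-by-side sketch makes the distinction concrete: the comprehension maps a list to a new list, while `reduce` folds the same list into one value:

```python
from functools import reduce

nums = [1, 2, 3, 4]

squares = [n * n for n in nums]               # list -> new list
total = reduce(lambda acc, n: acc + n, nums)  # list -> single value

print(squares)  # [1, 4, 9, 16]
print(total)    # 10
```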
Unexpected result when passing datetime objects to str.format()
34,589,607
2
2016-01-04T10:59:27Z
34,589,747
9
2016-01-04T11:08:09Z
[ "python", "datetime" ]
In Python 2.7, `str.format()` accepts non-string arguments and calls the `__str__` method of the value before formatting output: ``` class Test: def __str__(self): return 'test' t = Test() str(t) # output: 'test' repr(t) # output: '__main__.Test instance at 0x...' '{0: <5}'.format(t) # output: 'test ' in python 2.7 and TypeError in python3 '{0: <5}'.format('a') # output: 'a ' '{0: <5}'.format(None) # output: 'None ' in python 2.7 and TypeError in python3 '{0: <5}'.format([]) # output: '[] ' in python 2.7 and TypeError in python3 ``` But when I pass a `datetime.time` object, I get `' <5'` as output in both Python 2.7 and Python 3: ``` from datetime import time '{0: <5}'.format(time(10,10)) # output: ' <5' ``` Passing a `datetime.time` object to `str.format()` should either raise a `TypeError` or format `str(datetime.time)`, instead it returns the formatting directive. Why is that?
`'{0: <5}'.format(time(10, 10))` results in call to `time(10, 10).__format__`, which returns `<5` for the `<5` format specifier: ``` In [26]: time(10, 10).__format__(' <5') Out[26]: ' <5' ``` This happens because [`time_instance.__format__`](https://docs.python.org/3/library/datetime.html#datetime.time.__format__) attempts to format `time_instance` using [`time.strftime`](https://docs.python.org/3/library/datetime.html#datetime.time.strftime) and `time.strftime` doesn't understand the formatting directive. ``` In [29]: time(10, 10).strftime(' <5') Out[29]: ' <5' ``` --- The `!s` conversion flag will tell `str.format` to call `str` on the `time` instance before rendering the result - it will call `str(time(10, 10)).__format__(' <5')`: ``` In [30]: '{0!s: <5}'.format(time(10, 10)) Out[30]: '10:10:00' ```
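The flip side of this delegation is that `strftime` directives do work directly in the format spec, and the `!s` flag restores ordinary string formatting:

```python
from datetime import time

t = time(10, 10)

# A non-empty format spec is handed to strftime, so %-directives work:
print('{0:%H:%M}'.format(t))    # 10:10

# With !s, str(t) is produced first and the spec is applied to that string:
print('{0!s: <10}|'.format(t))  # 10:10:00  |
```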
Working with nested lists in Python
34,593,476
4
2016-01-04T14:34:12Z
34,593,532
7
2016-01-04T14:37:43Z
[ "python", "list", "python-3.x" ]
``` list_ = [(1, 2), (3, 4)] ``` What is the Pythonic way of taking sum of ordered pairs from inner tuples and multiplying the sums? For the above example: ``` (1 + 3) * (2 + 4) = 24 ```
For example: ``` import operator as op import functools functools.reduce(op.mul, (sum(x) for x in zip(*list_))) ``` works for any length of the initial array as well as of the inner tuples. Another solution using [numpy](http://www.numpy.org): ``` import numpy as np np.array(list_).sum(0).prod() ```
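Checking the first solution against the question's example (stdlib only, so it runs without numpy):

```python
import functools
import operator as op

list_ = [(1, 2), (3, 4)]

# zip(*list_) pairs up the columns: (1, 3) and (2, 4)
result = functools.reduce(op.mul, (sum(x) for x in zip(*list_)))
print(result)  # (1 + 3) * (2 + 4) = 24
```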
Trim / strip zeros of a numpy array
34,593,824
4
2016-01-04T14:54:21Z
34,593,911
11
2016-01-04T14:58:58Z
[ "python", "arrays", "numpy" ]
**How to remove leading / trailing zeros of a numpy array?** ``` import numpy as np a = np.array([0,0,0,3,2,-1,0,0,7,9,13,0,0,0,0,0,0,0]) #Desired output [3,2,-1,0,0,7,9,13] ``` This doesn't work: ``` a[a != 0] ``` because it would remove *all zeros* including the zeros which are *inside*.
Use [`numpy.trim_zeros`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.trim_zeros.html): ``` >>> import numpy as np >>> a = np.array([0,0,0,3,2,-1,0,0,7,9,13,0,0,0,0,0,0,0]) >>> np.trim_zeros(a) array([ 3, 2, -1, 0, 0, 7, 9, 13]) ```
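If numpy is not available, the same trim can be sketched with `itertools.dropwhile` applied from both ends (an alternative approach, not how `np.trim_zeros` is implemented):

```python
from itertools import dropwhile

a = [0, 0, 0, 3, 2, -1, 0, 0, 7, 9, 13, 0, 0, 0, 0, 0, 0, 0]

def trim_zeros(seq):
    # drop leading zeros, then reverse, drop the (former) trailing
    # zeros, and reverse back
    front = list(dropwhile(lambda x: x == 0, seq))
    return list(dropwhile(lambda x: x == 0, reversed(front)))[::-1]

print(trim_zeros(a))  # [3, 2, -1, 0, 0, 7, 9, 13]
```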
How to prefetch data using a custom python function in tensorflow
34,594,198
14
2016-01-04T15:14:04Z
34,596,212
21
2016-01-04T17:08:34Z
[ "python", "multithreading", "latency", "tensorflow", "prefetch" ]
I am trying to prefetch training data to hide I/O latency. I would like to write custom Python code that loads data from disk and preprocesses the data (e.g. by adding a context window). In other words, one thread does data preprocessing and the other does training. Is this possible in TensorFlow? Update: I have a working example based on @mrry's example. ``` import numpy as np import tensorflow as tf import threading BATCH_SIZE = 5 TRAINING_ITERS = 4100 feature_input = tf.placeholder(tf.float32, shape=[128]) label_input = tf.placeholder(tf.float32, shape=[128]) q = tf.FIFOQueue(200, [tf.float32, tf.float32], shapes=[[128], [128]]) enqueue_op = q.enqueue([label_input, feature_input]) label_batch, feature_batch = q.dequeue_many(BATCH_SIZE) c = tf.reshape(feature_batch, [BATCH_SIZE, 128]) + tf.reshape(label_batch, [BATCH_SIZE, 128]) sess = tf.Session() def load_and_enqueue(sess, enqueue_op, coord): with open('dummy_data/features.bin') as feature_file, open('dummy_data/labels.bin') as label_file: while not coord.should_stop(): feature_array = np.fromfile(feature_file, np.float32, 128) if feature_array.shape[0] == 0: print('reach end of file, reset using seek(0,0)') feature_file.seek(0,0) label_file.seek(0,0) continue label_value = np.fromfile(label_file, np.float32, 128) sess.run(enqueue_op, feed_dict={feature_input: feature_array, label_input: label_value}) coord = tf.train.Coordinator() t = threading.Thread(target=load_and_enqueue, args=(sess,enqueue_op, coord)) t.start() for i in range(TRAINING_ITERS): sum = sess.run(c) print('train_iter='+str(i)) print(sum) coord.request_stop() coord.join([t]) ```
This is a common use case, and most implementations use TensorFlow's *queues* to decouple the preprocessing code from the training code. There is [a tutorial on how to use queues](https://www.tensorflow.org/versions/master/how_tos/threading_and_queues/index.html), but the main steps are as follows: 1. Define a queue, `q`, that will buffer the preprocessed data. TensorFlow supports the simple [`tf.FIFOQueue`](https://www.tensorflow.org/versions/master/api_docs/python/io_ops.html#FIFOQueue) that produces elements in the order they were enqueued, and the more advanced [`tf.RandomShuffleQueue`](https://www.tensorflow.org/versions/master/api_docs/python/io_ops.html#RandomShuffleQueue) that produces elements in a random order. A queue element is a tuple of one or more tensors (which can have different types and shapes). All queues support single-element (`enqueue`, `dequeue`) and batch (`enqueue_many`, `dequeue_many`) operations, but to use the batch operations you must specify the shapes of each tensor in a queue element when constructing the queue. 2. Build a subgraph that enqueues preprocessed elements into the queue. One way to do this would be to define some [`tf.placeholder()`](https://www.tensorflow.org/versions/master/api_docs/python/io_ops.html#placeholder) ops for tensors corresponding to a single input example, then pass them to [`q.enqueue()`](https://www.tensorflow.org/versions/master/api_docs/python/io_ops.html#QueueBase.enqueue). (If your preprocessing produces a batch at once, you should use [`q.enqueue_many()`](https://www.tensorflow.org/versions/master/api_docs/python/io_ops.html#QueueBase.enqueue_many) instead.) You might also include TensorFlow ops in this subgraph. 3. Build a subgraph that performs training. This will look like a regular TensorFlow graph, but will get its input by calling [`q.dequeue_many(BATCH_SIZE)`](https://www.tensorflow.org/versions/master/api_docs/python/io_ops.html#QueueBase.dequeue_many). 4. Start your session. 5. 
Create one or more threads that execute your preprocessing logic, then execute the enqueue op, feeding in the preprocessed data. You may find the [`tf.train.Coordinator`](https://www.tensorflow.org/versions/master/api_docs/python/train.html#Coordinator) and [`tf.train.QueueRunner`](https://www.tensorflow.org/versions/master/api_docs/python/train.html#QueueRunner) utility classes useful for this. 6. Run your training graph (optimizer, etc.) as normal. **EDIT:** Here's a simple `load_and_enqueue()` function and code fragment to get you started: ``` # Features are length-100 vectors of floats feature_input = tf.placeholder(tf.float32, shape=[100]) # Labels are scalar integers. label_input = tf.placeholder(tf.int32, shape=[]) # Alternatively, could do: # feature_batch_input = tf.placeholder(tf.float32, shape=[None, 100]) # label_batch_input = tf.placeholder(tf.int32, shape=[None]) q = tf.FIFOQueue(100, [tf.float32, tf.int32], shapes=[[100], []]) enqueue_op = q.enqueue([label_input, feature_input]) # For batch input, do: # enqueue_op = q.enqueue_many([label_batch_input, feature_batch_input]) label_batch, feature_batch = q.dequeue_many(BATCH_SIZE) # Build rest of model taking label_batch, feature_batch as input. # [...] train_op = ... sess = tf.Session() def load_and_enqueue(): with open(...) as feature_file, open(...) as label_file: while True: feature_array = numpy.fromfile(feature_file, numpy.float32, 100) if not feature_array: return label_value = numpy.fromfile(feature_file, numpy.int32, 1)[0] sess.run(enqueue_op, feed_dict={feature_input: feature_array, label_input: label_value}) # Start a thread to enqueue data asynchronously, and hide I/O latency. t = threading.Thread(target=load_and_enqueue) t.start() for _ in range(TRAINING_EPOCHS): sess.run(train_op) ```
Why do argument-less function calls execute faster?
34,596,793
8
2016-01-04T17:43:50Z
34,596,794
13
2016-01-04T17:43:50Z
[ "python", "function", "python-2.7", "python-3.x", "arguments" ]
I set up a simple custom function that takes some default arguments (Python 3.5): ``` def foo(a=10, b=20, c=30, d=40): return a * b + c * d ``` and timed different calls to it with or without specifying argument values: **Without specifying arguments**: ``` %timeit foo() The slowest run took 7.83 times longer than the fastest. This could mean that an intermediate result is being cached 1000000 loops, best of 3: 361 ns per loop ``` **Specifying arguments**: ``` %timeit foo(a=10, b=20, c=30, d=40) The slowest run took 12.83 times longer than the fastest. This could mean that an intermediate result is being cached 1000000 loops, best of 3: 446 ns per loop ``` As you can see, there is a somewhat noticeable increase in time required for the call specifying arguments and for the one not specifying them. In simple one-off calls this might be negligible, but the overhead scales and becomes more noticeable if a large number of calls to a function are made: **No arguments**: ``` %timeit for i in range(10000): foo() 100 loops, best of 3: 3.83 ms per loop ``` **With Arguments**: ``` %timeit for i in range(10000): foo(a=10, b=20, c=30, d=40) 100 loops, best of 3: 4.68 ms per loop ``` *The same behaviour is present and in Python 2.7* where the time difference between these calls was actually a bit larger `foo() -> 291ns` and `foo(a=10, b=20, c=30, d=40) -> 410ns` --- Why does this happen? Should I generally try and avoid specifying argument values during calls?
> Why does this happen? Should I avoid specifying argument values during calls? **Generally, No**. *The real reason you're able to see this is because the function you are using **is simply not computationally intensive***. As such, the time required for the additional byte code commands issued in the case where arguments are supplied can be detected through timing. If, for example, you had a more intensive function of the form: ``` def foo_intensive(a=10, b=20, c=30, d=40): [i * j for i in range(a * b) for j in range(c * d)] ``` It will pretty much show no difference whatsoever in time required: ``` %timeit foo_intensive() 10 loops, best of 3: 32.7 ms per loop %timeit foo_intensive(a=10, b=20, c=30, d=40) 10 loops, best of 3: 32.7 ms per loop ``` Even when scaled to more calls, the time required to execute the function body simply trumps the small overhead introduced by additional byte code instructions. --- ### Looking at the Byte Code: One way of viewing the generated byte code issued for each call case is by creating a function that wraps around `foo` and calls it in different ways. For now, let's create `fooDefault` for calls using default arguments and `fooKwargs()` for functions specifying keyword arguments: ``` # call foo without arguments, using defaults def fooDefault(): foo() # call foo with keyword arguments def fooKw(): foo(a=10, b=20, c=30, d=40) ``` Now with **[`dis`](https://docs.python.org/3/library/dis.html)** we can see the differences in byte code between these. 
For the default version, we can see that essentially one command is issued (Ignoring `POP_TOP` which is present in both cases) **for the function call**, **[`CALL_FUNCTION`](https://docs.python.org/3/library/dis.html#opcode-CALL_FUNCTION)**: ``` dis.dis(fooDefaults) 2 0 LOAD_GLOBAL 0 (foo) 3 CALL_FUNCTION 0 (0 positional, 0 keyword pair) 6 POP_TOP 7 LOAD_CONST 0 (None) 10 RETURN_VALUE ``` On the other hand, in the case where keywords are used, **8 more [`LOAD_CONST`](https://docs.python.org/3/library/dis.html#opcode-LOAD_CONST) commands are issued** in order to load the argument names `(a, b, c, d)` and values `(10, 20, 30, 40)` into the value stack (even though loading numbers `< 256` is probably really fast in this case since they are cached): ``` dis.dis(fooKwargs) 2 0 LOAD_GLOBAL 0 (foo) 3 LOAD_CONST 1 ('a') # call starts 6 LOAD_CONST 2 (10) 9 LOAD_CONST 3 ('b') 12 LOAD_CONST 4 (20) 15 LOAD_CONST 5 ('c') 18 LOAD_CONST 6 (30) 21 LOAD_CONST 7 ('d') 24 LOAD_CONST 8 (40) 27 CALL_FUNCTION 1024 (0 positional, 4 keyword pair) 30 POP_TOP # call ends 31 LOAD_CONST 0 (None) 34 RETURN_VALUE ``` Additionally, a few extra steps are generally required for the case where keyword arguments are not zero. (for example in [`ceval/_PyEval_EvalCodeWithName()`](https://hg.python.org/cpython/file/tip/Python/ceval.c#l3801)). Even though these are really fast commands, they do sum up. The more arguments the bigger the sum and, when many calls to the function are actually performed these pile up to result in a felt difference in execution time. --- A direct result of these is that *the more values we specify, the more commands must be issued and the function runs slower*. Additionally, specifying positional arguments, unpacking positional arguments and unpacking keyword arguments all have a different amount of overhead associated with them: 1. **Positional arguments `foo(10, 20, 30, 40)`:** Require 4 additional commands to load each value. 2. 
**List unpacking `foo(*[10, 20, 30, 40])`**: 4 `LOAD_CONST` commands and an additional **[`BUILD_LIST`](https://docs.python.org/3/library/dis.html#opcode-BUILD_LIST)** command. * Using a list as in `foo(*l)` cuts down execution a bit since we provide an already built list containing the values. 3. **Dictionary unpacking `foo(**{'a':10, 'b':20, 'c': 30, 'd': 40})`**: 8 `LOAD_CONST` commands and a **[`BUILD_MAP`](https://docs.python.org/3/library/dis.html#opcode-BUILD_MAP)**. * As with list unpacking, `foo(**d)` will cut down execution because an already built dictionary will be supplied. All in all the ordering for the execution times of different cases of calls is: ``` defaults < positionals < keyword arguments < list unpacking < dictionary unpacking ``` I suggest using `dis.dis` on these cases and seeing their differences. --- ### In conclusion: As @goofd pointed out in a comment, this is really something one should not worry about, it really does depend on the use case. If you frequently call 'light' functions from a computation standpoint, specifying defaults will produce a slight boost in speed. If you frequently supply different values this produces next to nothing. So, it's probably negligible and trying to get boosts from obscure edge cases like this is really pushing it. If you find yourself doing this, you might want to look at things like [`PyPy`](http://pypy.org/) and [`Cython`](http://cython.org/).
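To see the extra instructions without reading full disassemblies, one can count them with `dis.get_instructions` (available since Python 3.4; exact counts vary across interpreter versions, but the keyword call is consistently longer):

```python
import dis

def foo(a=10, b=20, c=30, d=40):
    return a * b + c * d

def call_defaults():
    foo()

def call_keywords():
    foo(a=10, b=20, c=30, d=40)

n_defaults = len(list(dis.get_instructions(call_defaults)))
n_keywords = len(list(dis.get_instructions(call_keywords)))

# The keyword call needs extra LOAD_CONSTs for argument names and values:
print(n_defaults < n_keywords)  # True
```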
Are objects with the same id always equal when comparing them with ==?
34,599,953
57
2016-01-04T21:13:16Z
34,600,015
55
2016-01-04T21:17:35Z
[ "python", "equality" ]
If I have two objects o1 and o2, and we know that ``` id(o1) == id(o2) ``` returns **true**. Then, does it follow that ``` o1 == o2 ``` Or is this not always the case? The paper I'm working on says this is not the case, but in my opinion it should be true!
The paper is right. Consider the following. ``` class WeirdEquals: def __eq__(self, other): return False w = WeirdEquals() print("id(w) == id(w)", id(w) == id(w)) print("w == w", w == w) ``` Output is this: ``` id(w) == id(w) True w == w False ```
Are objects with the same id always equal when comparing them with ==?
34,599,953
57
2016-01-04T21:13:16Z
34,600,023
19
2016-01-04T21:18:15Z
[ "python", "equality" ]
If I have two objects o1 and o2, and we know that ``` id(o1) == id(o2) ``` returns **true**. Then, does it follow that ``` o1 == o2 ``` Or is this not always the case? The paper I'm working on says this is not the case, but in my opinion it should be true!
`id(o1) == id(o2)` does not imply `o1 == o2`. Let's have a look at this `Troll` which overrides `__eq__` to always return `False`. ``` >>> class Troll(object): ... def __eq__(self, other): ... return False ... >>> a = Troll() >>> b = a >>> id(a) == id(b) True >>> a == b False ``` That being said, there should be *very* few examples in the standard library where the object-ids match but `__eq__` can return `False` anyway, kudos @MarkMüller for finding a good example. So either the objects are insane, very special (like nan), or concurrency bites you. Consider this extreme example, where `Foo` has a more reasonable `__eq__` method (which 'forgets' to check the ids) and `f is f` is always `True`. ``` import threading class Foo(object): def __init__(self): self.x = 1 def __eq__(self, other): return isinstance(other, Foo) and self.x == other.x f = Foo() class MutateThread(threading.Thread): def run(self): while True: f.x = 2 f.x = 1 class CheckThread(threading.Thread): def run(self): i = 1 while True: if not (f == f): print 'loop {0}: f != f'.format(i) i += 1 MutateThread().start() CheckThread().start() ``` Output: ``` $ python eqtest.py loop 520617: f != f loop 1556675: f != f loop 1714709: f != f loop 2436222: f != f loop 3210760: f != f loop 3772996: f != f loop 5610559: f != f loop 6065230: f != f loop 6287500: f != f ... ```
Are objects with the same id always equal when comparing them with ==?
34,599,953
57
2016-01-04T21:13:16Z
34,600,035
124
2016-01-04T21:19:04Z
[ "python", "equality" ]
If I have two objects o1 and o2, and we know that ``` id(o1) == id(o2) ``` returns **true**. Then, does it follow that ``` o1 == o2 ``` Or is this not always the case? The paper I'm working on says this is not the case, but in my opinion it should be true!
Not always: ``` >>> nan = float('nan') >>> nan is nan True ``` or formulated the same way as in the question: ``` >>> id(nan) == id(nan) True ``` but ``` >>> nan == nan False ``` [NaN](https://en.wikipedia.org/wiki/NaN) is a strange thing. By definition it is neither equal to, nor less than, nor greater than itself. But it is the same object. More details on why all comparisons have to return `False` are in [this SO question](http://stackoverflow.com/questions/1565164/what-is-the-rationale-for-all-comparisons-returning-false-for-ieee754-nan-values).
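One more wrinkle worth knowing: CPython's built-in containers check identity before falling back to `==`, so `nan` looks "equal to itself" inside a list even though the direct comparison is `False`:

```python
nan = float('nan')

print(nan == nan)      # False
print(nan in [nan])    # True  -- `in` tests identity first
print([nan] == [nan])  # True  -- list comparison does the same per element
```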
Passing a function with two arguments to filter() in python
34,609,935
6
2016-01-05T10:53:06Z
34,610,018
11
2016-01-05T10:57:19Z
[ "python", "list", "python-2.7" ]
Given the following list: ``` DNA_list = ['ATAT', 'GTGTACGT', 'AAAAGGTT'] ``` I want to filter strings longer than 3 characters. I achieve this with the following code: With for loop: ``` long_dna = [] for element in DNA_list: length = len(element) if int(length) > 3: long_dna.append(element) print long_dna ``` But I want my code to be more general, so I can later filter strings of any length, so I use a function and for loop: ``` def get_long(dna_seq, threshold): return len(dna_seq) > threshold long_dna_loop2 = [] for element in DNA_list: if get_long(element, 3) is True: long_dna_loop2.append(element) print long_dna_loop2 ``` I want to achieve the same generality using `filter()` but I cannot achieve this. If I use the above function `get_long()`, I simply cannot pass arguments to it when I use it with `filter()`. Is it just not possible or is there a way around it? My code with `filter()` for the specific case: ``` def is_long(dna): return len(dna) > 3 long_dna_filter = filter(is_long, DNA_list) ```
Use `lambda` to provide the threshold, like this: ``` filter(lambda seq: get_long(seq, 3), dna_list) ```
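A complete runnable version with the question's data (using a threshold of 4 here so the filter actually drops something; in Python 3, wrap the result in `list()`):

```python
DNA_list = ['ATAT', 'GTGTACGT', 'AAAAGGTT']

def get_long(dna_seq, threshold):
    return len(dna_seq) > threshold

# The lambda fixes the threshold; filter supplies each sequence in turn.
long_dna = list(filter(lambda seq: get_long(seq, 4), DNA_list))
print(long_dna)  # ['GTGTACGT', 'AAAAGGTT']
```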
Passing a function with two arguments to filter() in python
34,609,935
6
2016-01-05T10:53:06Z
34,610,141
12
2016-01-05T11:03:49Z
[ "python", "list", "python-2.7" ]
Given the following list: ``` DNA_list = ['ATAT', 'GTGTACGT', 'AAAAGGTT'] ``` I want to filter strings longer than 3 characters. I achieve this with the following code: With for loop: ``` long_dna = [] for element in DNA_list: length = len(element) if int(length) > 3: long_dna.append(element) print long_dna ``` But I want my code to be more general, so I can later filter strings of any length, so I use a function and for loop: ``` def get_long(dna_seq, threshold): return len(dna_seq) > threshold long_dna_loop2 = [] for element in DNA_list: if get_long(element, 3) is True: long_dna_loop2.append(element) print long_dna_loop2 ``` I want to achieve the same generality using `filter()` but I cannot achieve this. If I use the above function `get_long()`, I simply cannot pass arguments to it when I use it with `filter()`. Is it just not possible or is there a way around it? My code with `filter()` for the specific case: ``` def is_long(dna): return len(dna) > 3 long_dna_filter = filter(is_long, DNA_list) ```
What you are trying to do is known as [partial function application](https://en.wikipedia.org/wiki/Partial_application): you have a function with multiple arguments (in this case, 2) and want to get a function derived from it with one or more arguments fixed, which you can then pass to `filter`. Some languages (especially functional ones) have this functionality "built in". In python, you can use lambdas to do this (as others have shown) or you can use the [`functools` library](https://docs.python.org/2/library/functools.html). In particular, [`functools.partial`](https://docs.python.org/2/library/functools.html#functools.partial): > The partial() is used for partial function application which “freezes” some portion of a function’s arguments and/or keywords resulting in a new object with a simplified signature. For example, partial() can be used to create a callable that behaves like the int() function where the base argument defaults to two: > > ``` > >>> from functools import partial > >>> basetwo = partial(int, base=2) > >>> basetwo.__doc__ = 'Convert base 2 string to an int.' > >>> basetwo('10010') > 18 > ``` So you can do: ``` filter(functools.partial(get_long, threshold=13), DNA_list) ```
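A runnable sketch with the question's data (threshold of 4 so one sequence is filtered out):

```python
from functools import partial

DNA_list = ['ATAT', 'GTGTACGT', 'AAAAGGTT']

def get_long(dna_seq, threshold):
    return len(dna_seq) > threshold

# Freeze the threshold argument; filter supplies the remaining one.
is_long = partial(get_long, threshold=4)
print(list(filter(is_long, DNA_list)))  # ['GTGTACGT', 'AAAAGGTT']
```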
Passing a function with two arguments to filter() in python
34,609,935
6
2016-01-05T10:53:06Z
34,610,648
9
2016-01-05T11:29:36Z
[ "python", "list", "python-2.7" ]
Given the following list: ``` DNA_list = ['ATAT', 'GTGTACGT', 'AAAAGGTT'] ``` I want to filter strings longer than 3 characters. I achieve this with the following code: With for loop: ``` long_dna = [] for element in DNA_list: length = len(element) if int(length) > 3: long_dna.append(element) print long_dna ``` But I want my code to be more general, so I can later filter strings of any length, so I use a function and for loop: ``` def get_long(dna_seq, threshold): return len(dna_seq) > threshold long_dna_loop2 = [] for element in DNA_list: if get_long(element, 3) is True: long_dna_loop2.append(element) print long_dna_loop2 ``` I want to achieve the same generality using `filter()` but I cannot achieve this. If I use the above function `get_long()`, I simply cannot pass arguments to it when I use it with `filter()`. Is it just not possible or is there a way around it? My code with `filter()` for the specific case: ``` def is_long(dna): return len(dna) > 3 long_dna_filter = filter(is_long, DNA_list) ```
Do you need to use `filter()`? Why not use a more Pythonic list comprehension? Example: ``` >>> DNA_list = ['ATAT', 'GTGTACGT', 'AAAAGGTT'] >>> threshold = 3 >>> long_dna = [dna_seq for dna_seq in DNA_list if len(dna_seq) > threshold] >>> long_dna ['ATAT', 'GTGTACGT', 'AAAAGGTT'] >>> threshold = 4 >>> [dna_seq for dna_seq in DNA_list if len(dna_seq) > threshold] ['GTGTACGT', 'AAAAGGTT'] ``` This method has the advantage that it's trivial to convert it to a generator which can provide improved memory and execution depending on your application, e.g. if you have a lot of DNA sequences, and you want to iterate over them, realising them as a list will consume a lot of memory in one go. The equivalent generator simply requires replacing square brackets `[]` with round brackets `()`: ``` >>> long_dna = (dna_seq for dna_seq in DNA_list if len(dna_seq) > threshold) >>> long_dna <generator object <genexpr> at 0x7f50de229cd0> >>> list(long_dna) ['GTGTACGT', 'AAAAGGTT'] ``` In Python 2 this performance improvement is not an option with `filter()` because it returns a list. In Python 3 `filter()` returns a filter object more akin to a generator.
AttributeError: 'RegexURLPattern' object has no attribute '_callback'
34,611,268
4
2016-01-05T12:03:01Z
34,616,645
10
2016-01-05T16:35:16Z
[ "python", "django", "django-rest-framework" ]
I'm newbie in python. I used this tutorial <http://www.django-rest-framework.org/tutorial/quickstart/>, but have an issue with **RegexURLPattern**. Full stack trace of issue: ``` Unhandled exception in thread started by <function check_errors. <locals>.wrapper at 0x103c8cf28> Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/Django-1.10.dev20151224130822-py3.5.egg/django/utils/autoreload.py", line 226, in wrapper fn(*args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/Django-1.10.dev20151224130822-py3.5.egg/django/core/management/commands/runserver.py", line 116, in inner_run self.check(display_num_errors=True) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/Django-1.10.dev20151224130822-py3.5.egg/django/core/management/base.py", line 366, in check include_deployment_checks=include_deployment_checks, File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/Django-1.10.dev20151224130822-py3.5.egg/django/core/checks/registry.py", line 75, in run_checks new_errors = check(app_configs=app_configs) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/Django-1.10.dev20151224130822-py3.5.egg/django/core/checks/urls.py", line 10, in check_url_config return check_resolver(resolver) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/Django-1.10.dev20151224130822-py3.5.egg/django/core/checks/urls.py", line 19, in check_resolver for pattern in resolver.url_patterns: File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/Django-1.10.dev20151224130822-py3.5.egg/django/utils/functional.py", line 35, in __get__ res = instance.__dict__[self.name] = self.func(instance) File 
"/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/Django-1.10.dev20151224130822-py3.5.egg/django/core/urlresolvers.py", line 379, in url_patterns patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/Django-1.10.dev20151224130822-py3.5.egg/django/utils/functional.py", line 35, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/Django-1.10.dev20151224130822-py3.5.egg/django/core/urlresolvers.py", line 372, in urlconf_module return import_module(self.urlconf_name) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 986, in _gcd_import File "<frozen importlib._bootstrap>", line 969, in _find_and_load File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 673, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 662, in exec_module File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed File "/Users/igor/tutorial/tutorial/tutorial/urls.py", line 28, in <module> url(r'^', include(router.urls)), File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/rest_framework/routers.py", line 79, in urls self._urls = self.get_urls() File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/rest_framework/routers.py", line 321, in get_urls urls = format_suffix_patterns(urls) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/rest_framework/urlpatterns.py", line 64, in format_suffix_patterns return apply_suffix_patterns(urlpatterns, suffix_pattern, suffix_required) File 
"/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/rest_framework/urlpatterns.py", line 27, in apply_suffix_patterns view = urlpattern._callback or urlpattern._callback_str AttributeError: 'RegexURLPattern' object has no attribute '_callback' ``` My urls.py content: ``` from django.conf.urls import url, include from rest_framework import routers from quickstart import views router = routers.DefaultRouter() router.register(r'users', views.UserViewSet) router.register(r'groups', views.GroupViewSet) urlpatterns = [ url(r'^', include(router.urls)), url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework')) ] ``` What i'am doing wrong? Help please...
You are using the development version of Django. DRF is not yet compatible. You should install Django 1.8.x or 1.9.x instead.
Is there a chain calling method in Python?
34,613,543
4
2016-01-05T14:04:12Z
34,613,566
9
2016-01-05T14:05:08Z
[ "python" ]
Is there a function in Python that would do this: ``` val = f3(f2(f1(arg))) ``` by typing this (for example): ``` val = chainCalling(arg,f3,f2,f1) ``` I just figured that, since Python is (arguably) a functional language, such a function would make the syntax cleaner.
Use the [`reduce()` function](https://docs.python.org/2/library/functions.html#reduce) to chain calls: ``` from functools import reduce val = reduce(lambda r, f: f(r), (f1, f2, f3), arg) ``` I used the [forward-compatible `functools.reduce()` function](https://docs.python.org/2/library/functools.html#functools.reduce); in Python 3 `reduce()` is no longer in the built-in namespace.
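Wrapped in a helper this gives exactly the `chainCalling` signature from the question — note that the question lists the functions outermost-first (`f3, f2, f1`), so this sketch reverses them before folding (the helper name comes from the question; it is not a standard function):

```python
from functools import reduce

def chain_calling(arg, *funcs):
    # funcs arrive outermost-first (f3, f2, f1), so apply them in reverse
    return reduce(lambda result, f: f(result), reversed(funcs), arg)

f1 = lambda x: x + 1
f2 = lambda x: x * 2
f3 = lambda x: x - 3

val = chain_calling(5, f3, f2, f1)
print(val)  # same as f3(f2(f1(5)))
```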
Recursively replace characters in a dictionary
34,615,164
12
2016-01-05T15:21:16Z
34,615,257
18
2016-01-05T15:25:36Z
[ "python", "string", "dictionary", "recursion" ]
How do I change all dots `.` to underscores (in the dict's keys), **given an arbitrarily nested dictionary**? What I tried is writing two loops, but then I would be limited to 2-level-nested dictionaries. This ... ``` { "brown.muffins": 5, "green.pear": 4, "delicious.apples": { "green.apples": 2 } } ``` ... should become: ``` { "brown_muffins": 5, "green_pear": 4, "delicious_apples": { "green_apples": 2 } } ``` Is there an elegant way?
You can write a recursive function, like this ``` from collections.abc import Mapping def rec_key_replace(obj): if isinstance(obj, Mapping): return {key.replace('.', '_'): rec_key_replace(val) for key, val in obj.items()} return obj ``` and when you invoke this with the dictionary you have shown in the question, you will get a new dictionary, with the dots in keys replaced with `_`s ``` {'delicious_apples': {'green_apples': 2}, 'green_pear': 4, 'brown_muffins': 5} ``` **Explanation** Here, we just check if the current object is an instance of a `Mapping` (which `dict` is) and if it is, we iterate over the dictionary, replace the dots in each key and call the function recursively on each value. If it is not a mapping, we return it as it is.
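If the nesting can also include lists of dictionaries — an extension beyond what the question shows — one extra branch handles it:

```python
from collections.abc import Mapping

def rec_key_replace(obj):
    if isinstance(obj, Mapping):
        return {key.replace('.', '_'): rec_key_replace(val)
                for key, val in obj.items()}
    if isinstance(obj, list):
        # recurse into each element so dicts inside lists are handled too
        return [rec_key_replace(item) for item in obj]
    return obj

data = {"brown.muffins": 5, "mixed.box": [{"green.apples": 2}, 7]}
print(rec_key_replace(data))
```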
Recursively replace characters in a dictionary
34,615,164
12
2016-01-05T15:21:16Z
34,615,264
7
2016-01-05T15:26:04Z
[ "python", "string", "dictionary", "recursion" ]
How do I change all dots `.` to underscores (in the dict's keys), **given an arbitrarily nested dictionary**? What I tried is writing two loops, but then I would be limited to 2-level-nested dictionaries. This ... ``` { "brown.muffins": 5, "green.pear": 4, "delicious.apples": { "green.apples": 2 } } ``` ... should become: ``` { "brown_muffins": 5, "green_pear": 4, "delicious_apples": { "green_apples": 2 } } ``` Is there an elegant way?
Assuming `.` is only present in keys and all the dictionary's contents are primitive literals, the really cheap way would be to use `str()` or `repr()`, do the replacement, then `ast.literal_eval()` to get it back: ``` d = { "brown.muffins": 5, "green.pear": 4, "delicious.apples": { "green.apples": 2 } } ``` Result: ``` >>> import ast >>> ast.literal_eval(repr(d).replace('.','_')) {'delicious_apples': {'green_apples': 2}, 'green_pear': 4, 'brown_muffins': 5} ``` If the dictionary has `.` outside of keys, we can replace more carefully by using a regular expression to look for strings like `'ke.y':` and replace only those bits: ``` >>> import re >>> ast.literal_eval(re.sub(r"'(.*?)':", lambda x: x.group(0).replace('.','_'), repr(d))) {'delicious_apples': {'green_apples': 2}, 'green_pear': 4, 'brown_muffins': 5} ``` If your dictionary is very complex, with `'.'` in values and dictionary-like strings and so on, use a real recursive approach. Like I said at the start, though, this is the cheap way.
Modifying built-in function
34,615,840
4
2016-01-05T15:54:25Z
34,616,000
7
2016-01-05T16:01:58Z
[ "python", "python-3.x", "overriding", "builtin" ]
Let's consider any user-defined Python class. If I call `dir(object_of_class)`, I get the list of its attributes: ``` ['__class__', '__delattr__', '__dict__', '__dir__', ... '__weakref__', 'bases', 'build_full_name', 'candidates', ... 'update_name']. ``` You can see two types of attributes in this list: * built-in attributes, * user-defined. I need to override `__dir__` so that it returns only the user-defined attributes. How can I do that? It is clear that if the overridden function calls itself, it gives me infinite recursion. So I want to do something like this: ``` def __dir__(self): return list(filter(lambda x: not re.match('__\S*__', x), dir(self))) ``` but avoid the infinite recursion. In general, how can I modify a built-in function if I don't want to write it from scratch but want to modify the existing function?
Use [`super`](https://docs.python.org/3/library/functions.html#super) to call parent's implementation of `__dir__`; avoid the recursion: ``` import re class AClass: def __dir__(self): return [x for x in super().__dir__() if not re.match(r'__\S+__$', x)] def method(self): pass ``` --- ``` >>> dir(AClass()) ['method'] ```
These Python functions don't have running times as expected
34,617,257
5
2016-01-05T17:04:24Z
34,617,515
9
2016-01-05T17:17:57Z
[ "python", "algorithm" ]
(I'm not sure if this question belongs here or CS forum. I kept it here because it has Python-specific code. Please migrate if needed!) I'm studying algorithms these days, using Python as my tool of choice. Today, I wanted to plot the running times three variations of a simple problem: Compute the prefix average of a given sequence (list). Here are the three variations: ``` import timeit seq = [20, 45, 45, 40, 12, 48, 67, 90, 0, 56, 12, 45, 67, 45, 34, 32, 20] # Quadratic running time def quad (S): n = len(S) A = [0] * n for j in range(n): total = 0 for i in range(j+1): total += S[i] A[j] = total / (j+1) return A # Use prev result def prev (S): n = len(S) A = [0] * n for j in range(n): if j == 0: A[j] = S[j] else: A[j] = (A[j-1]*j + S[j]) / (j+1) return A # Use Python's sum method def summ (S): n = len(S) A = [0] * n for j in range(n): A[j] = sum(S[0:j+1])/(j+1) return A def plot_func (name): for i in range(0, 1000000, 100000): t = timeit.Timer('{}(seq)'.format(name), 'from __main__ import {}, seq'.format(name)) print(i, ',', t.timeit(number=i)) plot_func('quad') plot_func('prev') plot_func('summ') ``` So I'm collecting the running times of three algorithms and plotting them. My final data looked like this: ``` Input size Quadratic Prev Summ (x100000) 1 4.92E-007 7.78E-007 3.47E-007 2 1.582717351 0.603501161 0.750457885 3 3.205707528 1.176623609 1.508853766 4 4.796092943 1.76059924 2.295842737 5 6.457349465 2.34945291 3.112500982 6 8.057410897 2.947556047 3.882303307 7 9.59740446 3.520847787 4.654968896 8 11.36328988 4.122617632 5.412608518 9 12.776150393 4.703240974 6.181500295 10 14.704703677 5.282404892 6.882074295 ``` When plotted, these numbers result in: [![enter image description here](http://i.stack.imgur.com/p7z3P.png)](http://i.stack.imgur.com/p7z3P.png) Now, according to the textbook I'm following, the functions `quad` and `summ` are supposed to run in quadratic time, while `prev` will run in linear time. 
I can see that `prev` is significantly faster than `quad` and somewhat faster than `summ`, but all of these look like linear functions to me! Further, there is frighteningly little gap in `summ` and `prev`. Could someone please explain what's wrong?
The asymptotic complexity of an algorithm describes how its running time depends on the input length. Here, you do not change the input size between runs; you just change the number of times each algorithm is run (the `number` parameter to `timeit()`), which naturally grows linearly for all three functions: ``` for i in range(0, 1000000, 100000): t = timeit.Timer('{}(seq)'.format(name), 'from __main__ import {}, seq'.format(name)) print(i, ',', t.timeit(number=i)) ``` To get a proper comparison, change the length of your sequence between runs while keeping `number` fixed.
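A sketch of how the benchmark can be fixed — grow `seq` between runs and keep `number` constant, so the measured time tracks input length (the sizes and repetition count here are arbitrary):

```python
import timeit

def prev(S):
    n = len(S)
    A = [0] * n
    for j in range(n):
        A[j] = S[j] if j == 0 else (A[j - 1] * j + S[j]) / (j + 1)
    return A

def summ(S):
    return [sum(S[:j + 1]) / (j + 1) for j in range(len(S))]

results = []
for size in (50, 100, 200, 400):                       # input length grows...
    seq = list(range(size))
    t_prev = timeit.timeit(lambda: prev(seq), number=20)  # ...repetitions stay fixed
    t_summ = timeit.timeit(lambda: summ(seq), number=20)
    results.append((size, t_prev, t_summ))

for size, t_prev, t_summ in results:
    print(size, t_prev, t_summ)
```

Plotting `results` instead of the original numbers should now show `summ` curving upward while `prev` stays close to linear.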
How does Rounding in Python work?
34,620,633
5
2016-01-05T20:27:54Z
34,620,702
12
2016-01-05T20:32:31Z
[ "python", "floating-point", "rounding", "precision", "floating-accuracy" ]
I am a bit confused about how rounding in Python works. Could someone please explain why Python behaves like this? Example: ``` >>> round(0.05,1) # this makes sense 0.1 >>> round(0.15,1) # this doesn't make sense! Why is the result not 0.2? 0.1 ``` And same for: ``` >>> round(0.25,1) # this makes sense 0.3 >>> round(0.35,1) # in my opinion, should be 0.4 but evaluates to 0.3 0.3 ``` **Edit:** So in general, there is a possibility that Python rounds down instead of rounding up. So am I to understand that the only "abnormal" thing that can happen is that Python rounds down? Or may it also get rounded up "abnormally" due to how it is stored? (I haven't found a case where Python rounded up when I expected it to round down)
This is actually by design. From Pythons' [documentation](https://docs.python.org/2/library/functions.html#round): > The behavior of `round()` for floats can be surprising: for example, `round(2.675, 2)` gives 2.67 instead of the expected 2.68. **This is not a bug**: it’s a result of the fact that most decimal fractions can’t be represented exactly as a float.
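You can see the effect the docs describe by asking for more digits than `repr()` shows — the double closest to 0.15 is slightly *below* 0.15, so `round(0.15, 1)` is not even a tie-breaking case:

```python
# repr() shows the shortest string that round-trips; more decimal places
# reveal the value that is actually stored for the literal 0.15
stored = format(0.15, '.20f')
print(stored)           # slightly below 0.15
print(round(0.15, 1))   # rounds the stored value, hence 0.1
```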
How does Rounding in Python work?
34,620,633
5
2016-01-05T20:27:54Z
34,620,996
7
2016-01-05T20:49:58Z
[ "python", "floating-point", "rounding", "precision", "floating-accuracy" ]
I am a bit confused about how rounding in Python works. Could someone please explain why Python behaves like this? Example: ``` >>> round(0.05,1) # this makes sense 0.1 >>> round(0.15,1) # this doesn't make sense! Why is the result not 0.2? 0.1 ``` And same for: ``` >>> round(0.25,1) # this makes sense 0.3 >>> round(0.35,1) # in my opinion, should be 0.4 but evaluates to 0.3 0.3 ``` **Edit:** So in general, there is a possibility that Python rounds down instead of rounding up. So am I to understand that the only "abnormal" thing that can happen is that Python rounds down? Or may it also get rounded up "abnormally" due to how it is stored? (I haven't found a case where Python rounded up when I expected it to round down)
It sounds to me like you need the `decimal` module: ``` from decimal import * x = Decimal('0.15') print x.quantize(Decimal('0.1'), rounding=ROUND_HALF_UP) ``` Output: ``` 0.2 ```
multiple assigments with a comma in python
34,621,065
9
2016-01-05T20:54:14Z
34,621,110
10
2016-01-05T20:57:11Z
[ "python" ]
I tried to find an explanation of [this](http://docs.quantifiedcode.com/python-anti-patterns/readability/not_using_unpacking_for_updating_multiple_values_at_once.html), the Gotcha part: ``` b = "1984" a = b, c = "AB" print(a, b, c) ``` returns: ``` ('AB', 'A', 'B') ``` I understand what happens with multiple equals: ``` a = b = 1 ``` but when it is combined with a comma I cannot understand the behaviour. Any ideas why it works that way?
The statement ``` a = b, c = "AB" ``` acts like: ``` a = (b, c) = "AB" ``` The right-hand side is evaluated once and then assigned to each target from left to right: `a` is bound to the whole string, and the tuple target `(b, c)` unpacks it. This is why ``` a == "AB" and b == "A" and c == "B" ```
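Spelled out step by step — these separate statements do exactly what the one-liner does:

```python
rhs = "AB"          # the right-hand side is evaluated exactly once
a = rhs             # first target: bound to the whole string
b, c = rhs          # second target is a 2-tuple, so the string is unpacked

print(a, b, c)      # AB A B

# which is what `a = b, c = "AB"` does in a single statement:
a2 = b2, c2 = "AB"
print(a2, b2, c2)
```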
Pandas:drop_duplicates() based on condition in python
34,627,380
4
2016-01-06T06:59:43Z
34,627,580
7
2016-01-06T07:14:30Z
[ "python", "pandas" ]
Given the below data set: ``` data_input: A B 1 C13D C07H 2 C07H C13D 3 B42C B65H 4 B65H B42C 5 A45B A47C ``` i.e. rows 1 and 2 in `data_input` are the same (just with the columns swapped), so I want to keep only one of them and drop row 2. I want the output as below: ``` data_output: A B 1 C13D C07H 2 B42C B65H 3 A45B A47C ```
You can create a third column `'C'` based on `'A'` and `'B'` and use it to find duplicates as such: ``` df['C'] = df['A'] + df['B'] df['C'] = df['C'].apply(lambda x: ''.join(sorted(x))) df = df.drop_duplicates(subset='C')[['A', 'B']] ```
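One caveat: sorting the *characters* of the concatenated string works for this data but can collide for other values (`'AB' + 'C'` and `'A' + 'BC'` sort to the same key). Sorting the pair of whole values is safer. Here is that keying idea in plain Python — with pandas you would build the same key, e.g. via `df.apply` over the two columns:

```python
rows = [('C13D', 'C07H'), ('C07H', 'C13D'),
        ('B42C', 'B65H'), ('B65H', 'B42C'),
        ('A45B', 'A47C')]

seen = set()
deduped = []
for a, b in rows:
    key = tuple(sorted((a, b)))   # order-insensitive key built from whole values
    if key not in seen:
        seen.add(key)
        deduped.append((a, b))

print(deduped)   # first occurrence of each unordered pair survives
```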
Apply function to column before filtering
34,627,757
17
2016-01-06T07:26:10Z
34,799,278
9
2016-01-14T20:38:15Z
[ "python", "python-2.7", "sqlalchemy" ]
I have a column in my database called `coordinates`, now the coordinates column contains information on the range of time an object takes up within my graph. I want to allow the user to filter by the date, but the problem is I use a function to determine the date normally. Take: ``` # query_result is the result of some filter operation for obj in query_result: time_range, altitude_range = get_shape_range(obj.coordinates) # time range for example would be "2006-06-01 07:56:17 - ..." ``` Now if I wanted to filter by date, I would want to is a `like`: ``` query_result = query_result.filter( DatabaseShape.coordinates.like('%%%s%%' % date)) ``` But the problem is I first need to apply `get_shape_range` to `coordinates` in order to receive a string. Is there any way to do ... I guess a transform\_filter operation? Such that before the `like` happens, I apply some function to coordinates? In this case I would need to write a `get_time_range` function that returned only time, but the question remains the same. --- EDIT: Here's my database class ``` class DatabasePolygon(dbBase): __tablename__ = 'objects' id = Column(Integer, primary_key=True) # primary key tag = Column(String) # shape tag color = Column(String) # color of polygon time_ = Column(String) # time object was exported hdf = Column(String) # filename plot = Column(String) # type of plot drawn on attributes = Column(String) # list of object attributes coordinates = Column(String) # plot coordinates for displaying to user notes = Column(String) # shape notes lat = Column(String) @staticmethod def plot_string(i): return constants.PLOTS[i] def __repr__(self): """ Represent the database class as a JSON object. 
Useful as our program already supports JSON reading, so simply parse out the database as separate JSON 'files' """ data = {} for key in constants.plot_type_enum: data[key] = {} data[self.plot] = {self.tag: { 'color': self.color, 'attributes': self.attributes, 'id': self.id, 'coordinates': self.coordinates, 'lat': self.lat, 'notes': self.notes}} data['time'] = self.time_ data['hdfFile'] = self.hdf logger.info('Converting unicode to ASCII') return byteify(json.dumps(data)) ``` and I'm using sqlite 3.0. The reasoning why behind most things are strings is because most of my values that are to be stored in the database are sent as strings, so storing is trivial. I'm wondering if I should do all this parsing magic with the functions *before*, and just have more database entries? for stuff like decimal *time\_begin*, *time\_end*, *latitude\_begin* instead of having a string containing the range of *time* that I parse to find *time\_begin* and *time\_end* when i'm filtering
I think you should definitely parse strings to columns before storing it in the databases. Let the database do the job it was designed for! ``` CREATE TABLE [coordinates] ( id INTEGER NOT NULL PRIMARY KEY, tag VARCHAR2(32), color VARCHAR2(32) default 'green', time_begin TIMESTAMP, time_end TIMESTAMP, latitude_begin INT ); create index ix_coord_tag on coordinates(tag); create index ix_coord_tm_beg on coordinates(time_begin); insert into coordinates(tag, time_begin, time_end, latitude_begin) values('tag1', '2006-06-01T07:56:17', '2006-06-01T07:56:19', 123); insert into coordinates(tag, time_begin, time_end, latitude_begin) values('tag1', '2016-01-01T11:35:01', '2016-01-01T12:00:00', 130); insert into coordinates(tag, color, time_begin, time_end, latitude_begin) values('tag2', 'blue', '2014-03-03T20:11:01', '2014-03-03T20:11:20', 2500); insert into coordinates(tag, color, time_begin, time_end, latitude_begin) values('tag2', 'blue', '2014-03-12T23:59:59', '2014-03-13T00:00:29', 2978); insert into coordinates(tag, color, time_begin, time_end, latitude_begin) values('tag3', 'red', '2016-01-01T11:35:01', '2016-01-01T12:00:00', 13000); insert into coordinates(tag, color, time_begin, time_end, latitude_begin) values('tag3', 'red', '2016-01-01T12:00:00', '2016-01-01T12:00:11', 13001); .headers on .mode column select * from coordinates where tag='tag1' and '2006-06-01T07:56:18' between time_begin and time_end; select * from coordinates where color='blue' and time_end between '2014-03-13T00:00:00' and '2014-03-13T00:10:00'; ``` Output: ``` sqlite> select * from coordinates where tag='tag1' and '2006-06-01T07:56:18' between time_begin and time_end; id tag color time_begin time_end latitude_begin ---------- ---------- ---------- ------------------- ------------------- -------------- 1 tag1 green 2006-06-01T07:56:17 2006-06-01T07:56:19 123 sqlite> sqlite> select * from coordinates where color='blue' and time_end between '2014-03-13T00:00:00' and '2014-03-13T00:10:00'; id tag 
color time_begin time_end latitude_begin ---------- ---------- ---------- ------------------- ------------------- -------------- 4 tag2 blue 2014-03-12T23:59:59 2014-03-13T00:00:29 2978 ```
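The schema and query above can be exercised directly from Python with the standard-library `sqlite3` module — a minimal sketch with the table trimmed to the columns the first query touches:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("""CREATE TABLE coordinates (
                  id INTEGER PRIMARY KEY,
                  tag TEXT,
                  time_begin TEXT,
                  time_end TEXT)""")
conn.execute("INSERT INTO coordinates(tag, time_begin, time_end) "
             "VALUES ('tag1', '2006-06-01T07:56:17', '2006-06-01T07:56:19')")
conn.execute("INSERT INTO coordinates(tag, time_begin, time_end) "
             "VALUES ('tag1', '2016-01-01T11:35:01', '2016-01-01T12:00:00')")

# ISO-8601 strings compare correctly as plain text, so BETWEEN works on them
rows = conn.execute(
    "SELECT tag, time_begin FROM coordinates "
    "WHERE tag = ? AND ? BETWEEN time_begin AND time_end",
    ('tag1', '2006-06-01T07:56:18')).fetchall()
print(rows)
```

The same parameterised query maps straightforwardly onto a SQLAlchemy filter once the columns are split out as suggested.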
pronoun resolution backwards
34,628,224
21
2016-01-06T08:00:09Z
34,708,531
7
2016-01-10T17:27:21Z
[ "python", "nlp", "nltk", "stanford-nlp" ]
The usual coreference resolution works in the following way: Provided ``` The man likes math. He really does. ``` it figures out that ``` he ``` refers to ``` the man. ``` There are plenty of tools to do this. However, is there a way to do it backwards? For example, given ``` The man likes math. The man really does. ``` I want to do the pronoun resolution "backwards," so that I get an output like ``` The man likes math. He really does. ``` My input text will mostly be 3~10 sentences, and I'm working with python.
This is perhaps not really an answer to be happy with, but I think the answer is that there's no such functionality built in anywhere, though you can code it yourself without too much difficulty. Giving an outline of how I'd do it with CoreNLP: 1. Still run coref. This'll tell you that "the man" and "the man" are coreferent, and so you can replace the second one with a pronoun. 2. Run the `gender` annotator from CoreNLP. This is a poorly-documented and even more poorly advertised annotator that tries to attach gender to tokens in a sentence. 3. Somehow figure out plurals. Most of the time you could use the part-of-speech tag: plural nouns get the tags NNS or NNPS, but there are some complications so you might also want to consider (1) the existence of conjunctions in the antecedent; (2) the lemma of a word being different from its text; (3) especially in conjunction with 2, the word ending in 's' or 'es' -- this can distinguish between lemmatizations which strip out plurals versus lemmatizations which strip out tenses, etc. 4. This is enough to figure out the right pronoun. Now it's just a matter of chopping up the sentence and putting it back together. This is a bit of a pain if you do it in CoreNLP -- the code is just not set up to change the text of a sentence -- but in the worst case you can always just re-annotate a new surface form. Hope this helps somewhat!
Fail during installation of Pillow (Python module) in Linux
34,631,806
69
2016-01-06T11:16:29Z
34,631,976
148
2016-01-06T11:25:07Z
[ "python", "linux", "pillow" ]
I'm trying to install Pillow (Python module) using pip, but it throws this error: ``` ValueError: jpeg is required unless explicitly disabled using --disable-jpeg, aborting ``` So as the error says, I tried: ``` pip install pillow --global-option="--disable-jpeg" ``` But it fails with: ``` error: option --disable-jpeg not recognized ``` Any hints how to deal with it?
There is a bug reported for Pillow [here](https://github.com/python-pillow/Pillow/issues/1457), which indicates that `libjpeg` and `zlib` are now [required](https://github.com/python-pillow/Pillow/issues/1412) as of Pillow 3.0.0. The [installation instructions](https://pillow.readthedocs.org/en/3.0.0/installation.html#linux-installation) for Pillow on Linux give advice of how to install these packages. Note that not all of the following packages may be missing on your machine (comments suggest that only `libjpeg8-dev` is actually missing). ### Ubuntu 12.04 LTS or Raspian Wheezy 7.0 ``` sudo apt-get install libtiff4-dev libjpeg8-dev zlib1g-dev libfreetype6-dev liblcms2-dev libwebp-dev tcl8.5-dev tk8.5-dev python-tk ``` ### Ubuntu 14.04 ``` sudo apt-get install libtiff5-dev libjpeg8-dev zlib1g-dev libfreetype6-dev liblcms2-dev libwebp-dev tcl8.6-dev tk8.6-dev python-tk ``` ### Fedora 20 The Fedora 20 equivalent of `libjpeg8-dev` is `libjpeg-devel`. ``` sudo yum install libtiff-devel libjpeg-devel libzip-devel freetype-devel lcms2-devel libwebp-devel tcl-devel tk-devel ``` ### Mac OS X (via Homebrew) On Mac OS X with [Homebrew](http://brew.sh/) this can be fixed using: ``` brew install libjpeg zlib ``` You may also need to force-link zlib using the following: ``` brew link zlib --force ``` *Thanks to Panos Angelopoulous and nelsonvarela in the comments.* **Update 5th July 2016:** In current versions of Homebrew the above may no longer work, as there is no longer a formula for `zlib` available in the main repository (Homebrew will prompt you to install `lzlib` which is a different library and will not solve this problem). There *is* a formula available in the *dupes* repository. 
You can either tap this repository, and install as normal: ``` brew tap homebrew/dupes brew install zlib ``` Or you can install `zlib` via `xcode` instead, as follows: ``` xcode-select --install ``` *Thanks to benjaminz in the comments and Kal's answer below* After these are installed the pip installation of Pillow should work normally.
Fail during installation of Pillow (Python module) in Linux
34,631,806
69
2016-01-06T11:16:29Z
34,632,233
21
2016-01-06T11:39:01Z
[ "python", "linux", "pillow" ]
I'm trying to install Pillow (Python module) using pip, but it throws this error: ``` ValueError: jpeg is required unless explicitly disabled using --disable-jpeg, aborting ``` So as the error says, I tried: ``` pip install pillow --global-option="--disable-jpeg" ``` But it fails with: ``` error: option --disable-jpeg not recognized ``` Any hints how to deal with it?
Thank you @mfitzp. In my case (CentOS) these libs are not available in the yum repo, but actually the solution was even easier. What I did: ``` sudo yum install python-devel sudo yum install zlib-devel sudo yum install libjpeg-turbo-devel ``` And now pillow's installation finishes successfully.
Fail during installation of Pillow (Python module) in Linux
34,631,806
69
2016-01-06T11:16:29Z
35,547,105
13
2016-02-22T06:29:51Z
[ "python", "linux", "pillow" ]
I'm trying to install Pillow (Python module) using pip, but it throws this error: ``` ValueError: jpeg is required unless explicitly disabled using --disable-jpeg, aborting ``` So as the error says, I tried: ``` pip install pillow --global-option="--disable-jpeg" ``` But it fails with: ``` error: option --disable-jpeg not recognized ``` Any hints how to deal with it?
On a Raspberry Pi 2 I had the same problem, and the following solved it: ``` sudo apt-get update sudo apt-get install libjpeg-dev ```
Fail during installation of Pillow (Python module) in Linux
34,631,806
69
2016-01-06T11:16:29Z
36,627,436
7
2016-04-14T15:21:02Z
[ "python", "linux", "pillow" ]
I'm trying to install Pillow (Python module) using pip, but it throws this error: ``` ValueError: jpeg is required unless explicitly disabled using --disable-jpeg, aborting ``` So as the error says, I tried: ``` pip install pillow --global-option="--disable-jpeg" ``` But it fails with: ``` error: option --disable-jpeg not recognized ``` Any hints how to deal with it?
``` brew install zlib ``` on OS X doesn't work anymore and instead prompts to install `lzlib`. Installing that doesn't help. Instead you install XCode Command line tools and that should install `zlib` ``` xcode-select --install ```
What's the best way to create datetime from "%H:%M:%S"
34,637,288
6
2016-01-06T15:54:18Z
34,637,424
7
2016-01-06T16:01:59Z
[ "python", "datetime" ]
Say I got some string with format `%H:%M:%S`, e.g. `04:35:45`. I want to convert them to `datetime.datetime` object, year/month/day are the same as `datetime.datetime.now()`. I tried ``` now = datetime.now() datetime_obj = datetime.strptime(time_string, "%H:%M:%S") datetime_obj.year = now.year datetime_obj.month = now.month datetime_obj.day = now.day ``` This won't work since `year/month/day` are read-only properties. So what's the best solution for this?
You want `datetime.combine(date, time)`: ``` >>> time = datetime.datetime.strptime("04:35:45", "%H:%M:%S").time() >>> time datetime.time(4, 35, 45) >>> day = datetime.datetime.now().date() >>> day datetime.date(2016, 1, 6) >>> datetime.datetime.combine(day, time) datetime.datetime(2016, 1, 6, 4, 35, 45) >>> ```
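Wrapped up as a small helper (the function name is mine, not part of the library); taking the date as an optional argument also makes it easy to test:

```python
from datetime import datetime, date

def at_date(time_string, day=None):
    """Parse 'HH:MM:SS' and attach it to `day` (today by default)."""
    t = datetime.strptime(time_string, "%H:%M:%S").time()
    return datetime.combine(day or date.today(), t)

dt = at_date("04:35:45", date(2016, 1, 6))
print(dt)   # 2016-01-06 04:35:45
```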
Better way for concatenating two sorted list of integers
34,637,757
5
2016-01-06T16:18:57Z
34,637,982
11
2016-01-06T16:31:39Z
[ "python", "algorithm", "optimization" ]
Let's assume I have a list and a tuple, both of them already sorted: ``` A = [10, 20, 30, 40] B = (20, 60, 81, 90) ``` What I need is to add all the elements from B into A in such a way that A remains sorted. The solution I could come up with was: ``` for item in B: for i in range(0, len(A)): if item > A[i]: i += 1 else: A.insert(i, item) ``` Assuming A is of size m and B of size n, this solution takes O(m·n) in the worst case; how can I make it perform better?
A simple way would be [heapq.merge](https://docs.python.org/2/library/heapq.html#heapq.merge): ``` A = [10, 20, 30, 40] B = (20, 60, 81, 90) from heapq import merge for ele in merge(A,B): print(ele) ``` Output: ``` 10 20 20 30 40 60 81 90 ``` Some timings using the other `O(n)` solution: ``` In [53]: A = list(range(10000)) In [54]: B = list(range(1,20000,10)) In [55]: timeit list(merge(A,B)) 100 loops, best of 3: 2.52 ms per loop In [56]: %%timeit C = [] i = j = 0 while i < len(A) and j < len(B): if A[i] < B[j]: C.append(A[i]) i += 1 else: C.append(B[j]) j += 1 C += A[i:] + B[j:] ....: 100 loops, best of 3: 4.29 ms per loop In [58]: m = list(merge(A,B)) In [59]: m == C Out[59]: True ``` If you wanted to roll your own this is a bit faster than merge: ``` from itertools import chain def merger_try(a, b): if not a or not b: for ele in chain(a, b): yield ele return iter_a, iter_b = iter(a), iter(b) prev_a, prev_b = next(iter_a), next(iter_b) while True: if prev_a >= prev_b: yield prev_b try: prev_b = next(iter_b) except StopIteration: yield prev_a break else: yield prev_a try: prev_a = next(iter_a) except StopIteration: yield prev_b break for ele in chain(iter_b, iter_a): yield ele ``` Some timings: ``` In [128]: timeit list(merge(A,B)) 1 loops, best of 3: 771 ms per loop In [129]: timeit list(merger_try(A,B)) 1 loops, best of 3: 581 ms per loop In [130]: list(merger_try(A,B)) == list(merge(A,B)) Out[130]: True In [131]: %%timeit C = [] i = j = 0 while i < len(A) and j < len(B): if A[i] < B[j]: C.append(A[i]) i += 1 else: C.append(B[j]) j += 1 C += A[i:] + B[j:] .....: 1 loops, best of 3: 919 ms per loop ```
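For reference, the two-pointer merge used in the timings above, packaged as a standalone function — it runs in O(m + n) and needs no imports:

```python
def merge_sorted(a, b):
    """Merge two already-sorted sequences into one sorted list."""
    merged = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:          # <= keeps the merge stable
            merged.append(a[i])
            i += 1
        else:
            merged.append(b[j])
            j += 1
    merged.extend(a[i:])          # at most one of these is non-empty
    merged.extend(b[j:])
    return merged

A = [10, 20, 30, 40]
B = (20, 60, 81, 90)
print(merge_sorted(A, B))   # [10, 20, 20, 30, 40, 60, 81, 90]
```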
Tensorflow Strides Argument
34,642,595
21
2016-01-06T20:56:26Z
34,643,081
46
2016-01-06T21:25:57Z
[ "python", "neural-network", "convolution", "tensorflow", "conv-neural-network" ]
I am trying to understand the **strides** argument in tf.nn.avg\_pool, tf.nn.max\_pool, tf.nn.conv2d. The [documentation](https://www.tensorflow.org/versions/master/api_docs/python/nn.html#max_pool) repeatedly says > strides: A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor. My questions are: 1. What do each of the 4+ integers represent? 2. Why must they have strides[0] = strides[3] = 1 for convnets? 3. In [this example](https://github.com/aymericdamien/TensorFlow-Examples/blob/master/notebooks/3%20-%20Neural%20Networks/convolutional_network.ipynb) we see `tf.reshape(_X,shape=[-1, 28, 28, 1])`. Why -1? Sadly the examples in the docs for reshape using -1 don't translate too well to this scenario.
The pooling and convolutional ops slide a "window" across the input tensor. Using [`tf.nn.conv2d`](https://www.tensorflow.org/versions/master/api_docs/python/nn.html#conv2d) as an example: If the input tensor has 4 dimensions: `[batch, height, width, channels]`, then the convolution operates on a 2D window on the `height, width` dimensions.

`strides` determines how much the window shifts by in each of the dimensions. The typical use sets the first (the batch) and last (the depth) stride to 1.

Let's use a very concrete example: Running a 2-d convolution over a 32x32 greyscale input image. I say greyscale because then the input image has depth=1, which helps keep it simple. Let that image look like this:

```
00 01 02 03 04 ...
10 11 12 13 14 ...
20 21 22 23 24 ...
30 31 32 33 34 ...
...
```

Let's run a 2x2 convolution window over a single example (batch size = 1). We'll give the convolution an output channel depth of 8.

The input to the convolution has `shape=[1, 32, 32, 1]`.

If you specify `strides=[1,1,1,1]` with `padding=SAME`, then the output of the filter will be [1, 32, 32, 8].

The filter will first create an output for:

```
F(00 01
  10 11)
```

And then for:

```
F(01 02
  11 12)
```

and so on. Then it will move to the second row, calculating:

```
F(10, 11
  20, 21)
```

then

```
F(11, 12
  21, 22)
```

If you specify a stride of [2, 2] it won't do overlapping windows. It will compute:

```
F(00, 01
  10, 11)
```

and then

```
F(02, 03
  12, 13)
```

The stride operates similarly for the pooling operators.

**Question 2: Why strides [1, x, y, 1] for convnets**

The first 1 is the batch: You don't usually want to skip over examples in your batch, or you shouldn't have included them in the first place. :)

The last 1 is the depth of the convolution: You don't usually want to skip inputs, for the same reason.

The conv2d operator is more general, so you *could* create convolutions that slide the window along other dimensions, but that's not a typical use in convnets. The typical use is to use them spatially.

**Why reshape to -1**

-1 is a placeholder that says "adjust as necessary to match the size needed for the full tensor." It's a way of making the code be independent of the input batch size, so that you can change your pipeline and not have to adjust the batch size everywhere in the code.
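As a sanity check on the window arithmetic above, here is a small sketch (plain Python, no TensorFlow needed) of the output-size rules TensorFlow documents for `'SAME'` and `'VALID'` padding; the numbers match the 32x32 example:

```python
import math

def out_size(in_size, filter_size, stride, padding):
    # Spatial output size of a conv/pool window, per TensorFlow's documented rules:
    # SAME pads so that out = ceil(in / stride); VALID uses only full windows.
    if padding == 'SAME':
        return int(math.ceil(in_size / float(stride)))
    elif padding == 'VALID':
        return int(math.ceil((in_size - filter_size + 1) / float(stride)))
    raise ValueError('padding must be SAME or VALID')

# 32x32 input, 2x2 window, strides=[1, 1, 1, 1], padding=SAME -> 32x32 output
assert out_size(32, 2, 1, 'SAME') == 32
# strides=[1, 2, 2, 1] gives non-overlapping windows and halves each dimension
assert out_size(32, 2, 2, 'SAME') == 16
```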
scipy interp2d/bisplrep unexpected output when given 1D input
34,643,642
3
2016-01-06T22:00:50Z
34,656,728
7
2016-01-07T13:45:42Z
[ "python", "scipy", "interpolation" ]
I've been having invalid input errors when working with scipy `interp2d` function. It turns out the problem comes from the `bisplrep` function, as showed here:

```
import numpy as np
from scipy import interpolate

# Case 1
x = np.linspace(0,1)
y = np.zeros_like(x)
z = np.ones_like(x)
tck = interpolate.bisplrep(x,y,z)  # or interp2d
```

Returns: `ValueError: Invalid inputs`

It turned out the test data I was giving `interp2d` contained only one distinct value for the 2nd axis, as in the test sample above. The `bisplrep` function inside `interp2d` considers it as an invalid output: This may be considered as an acceptable behaviour: `interp2d` & `bisplrep` expect a 2D grid, and I'm only giving them values **along one line**.

*On a side note, I find the error message quite unclear. One could include a test in `interp2d` to deal with such cases: something along the lines of*

```
if len(np.unique(x))==1 or len(np.unique(y))==1:
    raise ValueError("Can't build 2D splines if x or y values are all the same")
```

*may be enough to detect this kind of invalid input, and raise a more explicit error message, or even directly call the more appropriate `interp1d` function (which works perfectly here)*

---

I thought I had correctly understood the problem. However, consider the following code sample:

```
# Case 2
x = np.linspace(0,1)
y = x
z = np.ones_like(x)
tck = interpolate.bisplrep(x,y,z)
```

In that case, `y` being proportional to `x`, I'm also feeding `bisplrep` with data along one line. But, surprisingly, `bisplrep` is able to compute a 2D spline interpolation in that case. I plotted it:

```
# Plot
def plot_0to1(tck):
    import matplotlib.pyplot as plt
    from matplotlib import cm
    from mpl_toolkits.mplot3d import Axes3D

    X = np.linspace(0,1,10)
    Y = np.linspace(0,1,10)
    Z = interpolate.bisplev(X,Y,tck)
    X,Y = np.meshgrid(X,Y)

    fig = plt.figure()
    ax = Axes3D(fig)
    ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm,
                    linewidth=0, antialiased=False)
    plt.show()

plot_0to1(tck)
```

The result is the following:

![Case2.png](http://i.stack.imgur.com/EmPC5.png)

where `bisplrep` seems to fill the gaps with 0's, as better showed when I extend the plot below:

![Case2bis.png](http://i.stack.imgur.com/bL0jD.png)

Regardless of whether adding 0 is expected, my real question is: **why does `bisplrep` work in Case 2 but not in Case 1?**

Or, in other words: do we want it to return an error when 2D interpolation is fed with input along one direction only (Case 1 & 2 fail), or not? (Case 1 & 2 should return something, even if unpredicted).
I was originally going to show you how much of a difference it makes for 2d interpolation if your input data are oriented along the coordinate axes rather than in some general direction, but it turns out that the result would be even messier than I had anticipated. I tried using a random dataset over an interpolated rectangular mesh, and comparing that to a case where the same `x` and `y` coordinates were rotated by 45 degrees for interpolation. The result was abysmal.

I then tried doing a comparison with a smoother dataset: turns out `scipy.interpolate.interp2d` has quite a few issues. So my bottom line will be "use `scipy.interpolate.griddata`".

For instructive purposes, here's my (quite messy) code:

```
import numpy as np
import scipy.interpolate as interp
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.cm as cm

n = 10                             # rough number of points
dom = np.linspace(-2,2,n+1)        # 1d input grid
x1,y1 = np.meshgrid(dom,dom)       # 2d input grid
z = np.random.rand(*x1.shape)      # ill-conditioned sample
#z = np.cos(x1)*np.sin(y1)         # smooth sample

# first interpolator with interp2d:
fun1 = interp.interp2d(x1,y1,z,kind='linear')

# construct twice finer plotting and interpolating mesh
plotdom = np.linspace(-1,1,2*n+1)  # for interpolation and plotting
plotx1,ploty1 = np.meshgrid(plotdom,plotdom)
plotz1 = fun1(plotdom,plotdom)     # interpolated points

# construct 45-degree rotated input and interpolating meshes
rotmat = np.array([[1,-1],[1,1]])/np.sqrt(2)  # 45-degree rotation
x2,y2 = rotmat.dot(np.vstack([x1.ravel(),y1.ravel()]))                  # rotate input mesh
plotx2,ploty2 = rotmat.dot(np.vstack([plotx1.ravel(),ploty1.ravel()]))  # rotate plotting/interp mesh

# interpolate on rotated mesh with interp2d
# (reverse rotate by using plotx1, ploty1 later!)
fun2 = interp.interp2d(x2,y2,z.ravel(),kind='linear')

# I had to generate the rotated points element-by-element
# since fun2() accepts only rectangular meshes as input
plotz2 = np.array([fun2(xx,yy) for (xx,yy) in zip(plotx2.ravel(),ploty2.ravel())])

# try interpolating with griddata
plotz3 = interp.griddata(np.array([x1.ravel(),y1.ravel()]).T, z.ravel(),
                         np.array([plotx1.ravel(),ploty1.ravel()]).T, method='linear')
plotz4 = interp.griddata(np.array([x2,y2]).T, z.ravel(),
                         np.array([plotx2,ploty2]).T, method='linear')

# function to plot a surface
def myplot(X,Y,Z):
    fig = plt.figure()
    ax = Axes3D(fig)
    ax.plot_surface(X, Y, Z, rstride=1, cstride=1, linewidth=0,
                    antialiased=False, cmap=cm.coolwarm)
    plt.show()

# plot interp2d versions
myplot(plotx1,ploty1,plotz1)                    # Cartesian meshes
myplot(plotx1,ploty1,plotz2.reshape(2*n+1,-1)) # rotated meshes

# plot griddata versions
myplot(plotx1,ploty1,plotz3.reshape(2*n+1,-1)) # Cartesian meshes
myplot(plotx1,ploty1,plotz4.reshape(2*n+1,-1)) # rotated meshes
```

So here's a gallery of the results. Using random input `z` data, and `interp2d`, Cartesian (left) vs rotated interpolation (right):

[![interp2d random input](http://i.stack.imgur.com/fGrejm.png)](http://i.stack.imgur.com/fGrej.png)[![interp2 rotated random input](http://i.stack.imgur.com/4G6pHm.png)](http://i.stack.imgur.com/4G6pH.png)

Note the horrible scale on the right side, noting that the input points are between `0` and `1`. Even its mother wouldn't recognize the data set. Note that there are runtime warnings during the evaluation of the rotated data set, so we're being warned that it's all crap.

Now let's do the same with `griddata`:

[![griddata random input](http://i.stack.imgur.com/rlwUXm.png)](http://i.stack.imgur.com/rlwUX.png)[![griddata rotated random input](http://i.stack.imgur.com/H5YuDm.png)](http://i.stack.imgur.com/H5YuD.png)

We should note that these figures are much closer to each other, and they seem to make *way* more sense than the output of `interp2d`. For instance, note the overshoot in the scale of the very first figure. These artifacts always arise between input data points. Since it's still interpolation, the input points have to be reproduced by the interpolating function, but it's pretty weird that a linear interpolating function overshoots between data points. It's clear that `griddata` doesn't suffer from this issue.

Consider an even more clear case: the other set of `z` values, which are smooth and deterministic. The surfaces with `interp2d`:

[![interp2d smooth input](http://i.stack.imgur.com/oU46Vm.png)](http://i.stack.imgur.com/oU46V.png)[![interp2d rotated smooth input](http://i.stack.imgur.com/TiqRim.png)](http://i.stack.imgur.com/TiqRi.png)

HELP! Call the interpolation police! Already the Cartesian input case has inexplicable (well, at least by me) spurious features in it, and the rotated input case poses the threat of s͔̖̰͕̞͖͇ͣ́̈̒ͦ̀̀ü͇̹̞̳ͭ̊̓̎̈m̥̠͈̣̆̐ͦ̚m̻͑͒̔̓ͦ̇oͣ̐ͣṉ̟͖͙̆͋i͉̓̓ͭ̒͛n̹̙̥̩̥̯̭ͤͤͤ̄g͈͇̼͖͖̭̙ ̐z̻̉ͬͪ̑ͭͨ͊ä̼̣̬̗̖́̄ͥl̫̣͔͓̟͛͊̏ͨ͗̎g̻͇͈͚̟̻͛ͫ͛̅͋͒o͈͓̱̥̙̫͚̾͂.

So let's do the same with `griddata`:

[![griddata smooth input](http://i.stack.imgur.com/wazDim.png)](http://i.stack.imgur.com/wazDi.png)[![griddata rotated smooth input](http://i.stack.imgur.com/6EP65m.png)](http://i.stack.imgur.com/6EP65.png)

The day is saved, thanks to The Powerpuff Girls `scipy.interpolate.griddata`. Homework: check the same with `cubic` interpolation.

---

By the way, a very short answer to your original question is in `help(interp.interp2d)`:

```
|  Notes
|  -----
|  The minimum number of data points required along the interpolation
|  axis is ``(k+1)**2``, with k=1 for linear, k=3 for cubic and k=5 for
|  quintic interpolation.
```

For linear interpolation you need *at least 4 points along the interpolation axis*, i.e. at least 4 unique `x` and `y` values have to be present to get a meaningful result. Check these:

```
nvals = 3  # -> RuntimeWarning
x = np.linspace(0,1,10)
y = np.random.randint(low=0,high=nvals,size=x.shape)
z = x
interp.interp2d(x,y,z)

nvals = 4  # -> no problem here
x = np.linspace(0,1,10)
y = np.random.randint(low=0,high=nvals,size=x.shape)
z = x
interp.interp2d(x,y,z)
```

And of course this all ties in to your question like this: it makes a huge difference if your geometrically 1d data set is along one of the Cartesian axes, or if it's in a general way such that the coordinate values assume various different values. It's probably meaningless (or at least very ill-defined) to try 2d interpolation from a geometrically 1d data set, but at least the algorithm shouldn't break if your data are along a general direction of the `x,y` plane.
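The guard the question proposes can be sketched without scipy at all; here is a minimal version using the "4 unique values per axis" threshold from the linear-interpolation note above (the function name and message are made up for illustration):

```python
def check_interp2d_input(x, y, min_unique=4):
    # Hypothetical pre-check: linear interp2d needs at least 4 distinct
    # values along each axis (see the Notes excerpt above).
    for name, vals in (('x', x), ('y', y)):
        n_unique = len(set(vals))
        if n_unique < min_unique:
            raise ValueError(
                "Can't build 2D splines: %s has only %d distinct value(s)"
                % (name, n_unique))

x = [i / 10.0 for i in range(10)]
check_interp2d_input(x, x)               # 10 distinct values per axis: fine
try:
    check_interp2d_input(x, [0.0] * 10)  # Case 1: y is constant
except ValueError as e:
    print(e)
```

Note that this check would still let Case 2 through, since `y = x` has many distinct values even though the data are geometrically 1d, which matches the behaviour observed above.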
SSL3_GET_SERVER_CERTIFICATE certificate verify failed on Python when requesting (only) *.google.com
34,646,942
8
2016-01-07T03:33:48Z
34,665,344
18
2016-01-07T21:18:14Z
[ "python", "ssl", "python-requests" ]
I have encountered a really strange bug that has to do with SSL and python to google.com (or more generally I think with domains that have multiple certificate chains). Whenever I try to do a request to `https://*.google.com/whatever` I get the following error:

```
SSLError: ("bad handshake: Error([('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')],)",) while doing GET request to URL: https://google.com/
```

# What I have done so far

I have gone through many hoops trying to get this working and am resorting to posting to Stack Overflow now that I don't know what to do. Here is what I have tried:

1. Noticed that `date` returned a date that was 2 minutes behind the real time (potentially invalidating my cert). I fixed this assuming it would validate the cert. This did not fix the issue.
2. Found out that Python 2.7.9 backported some SSL libraries from Python 3. I upgraded from Python 2.7.6 to 2.7.9 assuming the updates (which include fixes listed in this thread: <http://serverfault.com/questions/692110/error-with-python2-as-a-https-client-with-an-nginx-server-and-ssl-certificate-ch>) would fix it. No luck, same error.
3. Obviously setting `verify=False` works, but we are not willing to budge on security, we need to get `verify=True` to work.
4. `curl https://google.com` also works as expected. This is how I know it has to do with Python.

# Environment

```
$ python -V
Python 2.7.9

$ pip list | grep -e requests
requests (2.9.1)

$ uname -a  # ubuntu 14.04
Linux staging.example.com 3.13.0-48-generic #80-Ubuntu SMP Thu Mar 12 11:16:15 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
```

# Example

This is *only* happening for google domains over https. Here is an example:

```
$ ipython
Python 2.7.9 (default, Jan  6 2016, 21:37:32)
Type "copyright", "credits" or "license" for more information.

IPython 4.0.1 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import requests

In [2]: requests.get('https://facebook.com', verify=True)
Out[2]: <Response [200]>

In [3]: requests.get('https://stackoverflow.com', verify=True)
Out[3]: <Response [200]>

In [4]: requests.get('https://spotify.com', verify=True)
Out[4]: <Response [200]>

In [5]: requests.get('http://google.com', verify=True)  # notice the http
Out[5]: <Response [200]>

In [6]: requests.get('https://google.com', verify=True)
---------------------------------------------------------------------------
SSLError                                  Traceback (most recent call last)
<ipython-input-6-a7fff1831944> in <module>()
----> 1 requests.get('https://google.com', verify=True)

/example/.virtualenv/example/lib/python2.7/site-packages/requests/api.pyc in get(url, params, **kwargs)
     65
     66     kwargs.setdefault('allow_redirects', True)
---> 67     return request('get', url, params=params, **kwargs)
     68
     69

/example/.virtualenv/example/lib/python2.7/site-packages/requests/api.pyc in request(method, url, **kwargs)
     51     # cases, and look like a memory leak in others.
     52     with sessions.Session() as session:
---> 53         return session.request(method=method, url=url, **kwargs)
     54
     55

/example/.virtualenv/example/lib/python2.7/site-packages/requests/sessions.pyc in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
    466         }
    467         send_kwargs.update(settings)
--> 468         resp = self.send(prep, **send_kwargs)
    469
    470         return resp

/example/.virtualenv/example/lib/python2.7/site-packages/requests/sessions.pyc in send(self, request, **kwargs)
    574
    575         # Send the request
--> 576         r = adapter.send(request, **kwargs)
    577
    578         # Total elapsed time of the request (approximately)

/example/.virtualenv/example/lib/python2.7/site-packages/requests/adapters.pyc in send(self, request, stream, timeout, verify, cert, proxies)
    445         except (_SSLError, _HTTPError) as e:
    446             if isinstance(e, _SSLError):
--> 447                 raise SSLError(e, request=request)
    448             elif isinstance(e, ReadTimeoutError):
    449                 raise ReadTimeout(e, request=request)

SSLError: ("bad handshake: Error([('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')],)",)
```
I found a solution. There seems to be a major issue in the version of `certifi` that was running. I found this out from this (very long) GitHub issue: <https://github.com/certifi/python-certifi/issues/26> **TL;DR** `pip uninstall -y certifi && pip install certifi==2015.04.28`
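To narrow down whether a failure like this comes from the CA bundle rather than from `requests` itself, one can reproduce the handshake with only the standard library. A minimal Python 3 sketch (the `host` and `cafile` arguments are illustrative; the helper is not called here since it needs network access):

```python
import socket
import ssl

# Where this interpreter looks for CA certificates by default; requests
# prefers the bundle shipped by certifi instead, so an outdated certifi
# can fail while the system store (used by curl) succeeds.
print(ssl.get_default_verify_paths())

def fetch_peer_cert(host, cafile=None):
    # Perform the same verified handshake outside requests, optionally
    # against an explicit CA bundle path (cafile).
    ctx = ssl.create_default_context(cafile=cafile)
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

If `fetch_peer_cert('google.com')` succeeds with the system store but `requests` fails, that points at the certifi bundle, consistent with the fix above.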
Neural network backpropagation algorithm not working in Python
34,649,152
6
2016-01-07T07:03:08Z
34,689,425
7
2016-01-09T03:34:30Z
[ "python", "numpy", "machine-learning", "neural-network", "backpropagation" ]
I am writing a neural network in Python, following the example [here](http://page.mi.fu-berlin.de/rojas/neural/chapter/K7.pdf). It seems that the backpropagation algorithm isn't working, given that the neural network fails to produce the right value (within a margin of error) after being trained 10 thousand times. Specifically, I am training it to compute the sine function in the following example:

```
import numpy as np

class Neuralnet:
    def __init__(self, neurons):
        self.weights = []
        self.inputs = []
        self.outputs = []
        self.errors = []
        self.rate = .1
        for layer in range(len(neurons)):
            self.inputs.append(np.empty(neurons[layer]))
            self.outputs.append(np.empty(neurons[layer]))
            self.errors.append(np.empty(neurons[layer]))
        for layer in range(len(neurons)-1):
            self.weights.append(
                np.random.normal(
                    scale=1/np.sqrt(neurons[layer]),
                    size=[neurons[layer], neurons[layer + 1]]
                    )
                )

    def feedforward(self, inputs):
        self.inputs[0] = inputs
        for layer in range(len(self.weights)):
            self.outputs[layer] = np.tanh(self.inputs[layer])
            self.inputs[layer + 1] = np.dot(self.weights[layer].T, self.outputs[layer])
        self.outputs[-1] = np.tanh(self.inputs[-1])

    def backpropagate(self, targets):
        gradient = 1 - self.outputs[-1] * self.outputs[-1]
        self.errors[-1] = gradient * (self.outputs[-1] - targets)
        for layer in reversed(range(len(self.errors) - 1)):
            gradient = 1 - self.outputs[layer] * self.outputs[layer]
            self.errors[layer] = gradient * np.dot(self.weights[layer], self.errors[layer + 1])
        for layer in range(len(self.weights)):
            self.weights[layer] -= self.rate * np.outer(self.outputs[layer], self.errors[layer + 1])

def xor_example():
    net = Neuralnet([2, 2, 1])
    for step in range(100000):
        net.feedforward([0, 0])
        net.backpropagate([-1])
        net.feedforward([0, 1])
        net.backpropagate([1])
        net.feedforward([1, 0])
        net.backpropagate([1])
        net.feedforward([1, 1])
        net.backpropagate([-1])
    net.feedforward([1, 1])
    print(net.outputs[-1])

def identity_example():
    net = Neuralnet([1, 3, 1])
    for step in range(100000):
        x = np.random.normal()
        net.feedforward([x])
        net.backpropagate([np.tanh(x)])
    net.feedforward([-2])
    print(net.outputs[-1])

def sine_example():
    net = Neuralnet([1, 6, 1])
    for step in range(100000):
        x = np.random.normal()
        net.feedforward([x])
        net.backpropagate([np.tanh(np.sin(x))])
    net.feedforward([3])
    print(net.outputs[-1])

sine_example()
```

The output fails to be close to `tanh(sin(3)) = 0.140190616`. I suspected a mistake involving wrong indices or alignment, but Numpy isn't raising any errors like these. Any tips on where I went wrong?

**EDIT:** I forgot to add the bias neurons. Here is the updated code:

```
import numpy as np

class Neuralnet:
    def __init__(self, neurons):
        self.weights = []
        self.outputs = []
        self.inputs = []
        self.errors = []
        self.offsets = []
        self.rate = .01
        for layer in range(len(neurons)-1):
            self.weights.append(
                np.random.normal(
                    scale=1/np.sqrt(neurons[layer]),
                    size=[neurons[layer], neurons[layer + 1]]
                    )
                )
            self.outputs.append(np.empty(neurons[layer]))
            self.inputs.append(np.empty(neurons[layer]))
            self.errors.append(np.empty(neurons[layer]))
            self.offsets.append(np.random.normal(scale=1/np.sqrt(neurons[layer]), size=neurons[layer + 1]))
        self.inputs.append(np.empty(neurons[-1]))
        self.errors.append(np.empty(neurons[-1]))

    def feedforward(self, inputs):
        self.inputs[0] = inputs
        for layer in range(len(self.weights)):
            self.outputs[layer] = np.tanh(self.inputs[layer])
            self.inputs[layer + 1] = self.offsets[layer] + np.dot(self.weights[layer].T, self.outputs[layer])

    def backpropagate(self, targets):
        self.errors[-1] = self.inputs[-1] - targets
        for layer in reversed(range(len(self.errors) - 1)):
            gradient = 1 - self.outputs[layer] * self.outputs[layer]
            self.errors[layer] = gradient * np.dot(self.weights[layer], self.errors[layer + 1])
        for layer in range(len(self.weights)):
            self.weights[layer] -= self.rate * np.outer(self.outputs[layer], self.errors[layer + 1])
            self.offsets[layer] -= self.rate * self.errors[layer + 1]

def sine_example():
    net = Neuralnet([1, 5, 1])
    for step in range(10000):
        x = np.random.uniform(-5, 5)
        net.feedforward([x])
        net.backpropagate([np.sin(x)])
    net.feedforward([np.pi])
    print(net.inputs[-1])

def xor_example():
    net = Neuralnet([2, 2, 1])
    for step in range(10000):
        net.feedforward([0, 0])
        net.backpropagate([-1])
        net.feedforward([0, 1])
        net.backpropagate([1])
        net.feedforward([1, 0])
        net.backpropagate([1])
        net.feedforward([1, 1])
        net.backpropagate([-1])
    net.feedforward([1, 1])
    print(net.outputs[-1])

def identity_example():
    net = Neuralnet([1, 3, 1])
    for step in range(10000):
        x = np.random.normal()
        net.feedforward([x])
        net.backpropagate([x])
    net.feedforward([-2])
    print(net.outputs[-1])

identity_example()
```
I think you train the NN in the wrong way. You have a loop over 10000 iterations and feed a new sample in each cycle. The NN will never get trained in this case. **(the statement is wrong! See the update!)**

What you need to do is to generate a large array of true samples **`Y = sin(X)`**, give it to your network **ONCE** and iterate over the training set forwards and backwards, in order to minimize the cost function. To check the algorithm you may need to plot the cost function depending on the iteration number and make sure the cost goes down.

Another important point is the initialization of the weights. Your numbers are pretty large and the network will take a lot of time to converge, especially when using low rates. It's a good practice to generate the initial weights in some small range **`[-eps .. eps]`** uniformly.

In my code I implemented two different activation functions: **`sigmoid()`** and **`tanh()`**. You need to scale your inputs depending on the selected function: `[0 .. 1]` and `[-1 .. 1]` respectively.

Here are some images which show the cost function and the resulting predictions for `sigmoid()` and `tanh()` activation functions:

[![sigmoid activation](http://i.stack.imgur.com/fIBvp.png)](http://i.stack.imgur.com/fIBvp.png)

[![tanh activation](http://i.stack.imgur.com/rMAwn.png)](http://i.stack.imgur.com/rMAwn.png)

As you can see, the **`sigmoid()`** activation gives slightly better results than the `tanh()`.

Also I got much better predictions when using a network **`[1, 6, 1]`**, compared to a bigger network with 4 layers `[1, 6, 4, 1]`. So the size of the NN is not always the crucial factor. Here is the prediction for the mentioned network with 4 layers:

[![sigmoid for a bigger network](http://i.stack.imgur.com/eeVbr.png)](http://i.stack.imgur.com/eeVbr.png)

Here is my code with some comments. I tried to use your notations where it was possible.
```
import numpy as np
import math
import matplotlib.pyplot as plt

class Neuralnet:
    def __init__(self, neurons, activation):
        self.weights = []
        self.inputs = []
        self.outputs = []
        self.errors = []
        self.rate = 0.5
        self.activation = activation   # sigmoid or tanh
        self.neurons = neurons
        self.L = len(self.neurons)     # number of layers
        eps = 0.12                     # range for uniform distribution -eps..+eps
        for layer in range(len(neurons)-1):
            self.weights.append(np.random.uniform(-eps,eps,size=(neurons[layer+1], neurons[layer]+1)))

    ###################################################################################################
    def train(self, X, Y, iter_count):
        m = X.shape[0]

        for layer in range(self.L):
            self.inputs.append(np.empty([m, self.neurons[layer]]))
            self.errors.append(np.empty([m, self.neurons[layer]]))
            if (layer < self.L -1):
                self.outputs.append(np.empty([m, self.neurons[layer]+1]))
            else:
                self.outputs.append(np.empty([m, self.neurons[layer]]))

        # accumulate the cost function
        J_history = np.zeros([iter_count, 1])

        for i in range(iter_count):
            self.feedforward(X)
            J = self.cost(Y, self.outputs[self.L-1])
            J_history[i, 0] = J
            self.backpropagate(Y)

        # plot the cost function to check the descent
        plt.plot(J_history)
        plt.show()

    ###################################################################################################
    def cost(self, Y, H):
        J = np.sum(np.sum(np.power((Y - H), 2), axis=0))/(2*m)
        return J

    ###################################################################################################
    def feedforward(self, X):
        m = X.shape[0]

        self.outputs[0] = np.concatenate( (np.ones([m, 1]), X), axis=1)

        for i in range(1, self.L):
            self.inputs[i] = np.dot( self.outputs[i-1], self.weights[i-1].T )
            if (self.activation == 'sigmoid'):
                output_temp = self.sigmoid(self.inputs[i])
            elif (self.activation == 'tanh'):
                output_temp = np.tanh(self.inputs[i])

            if (i < self.L - 1):
                self.outputs[i] = np.concatenate( (np.ones([m, 1]), output_temp), axis=1)
            else:
                self.outputs[i] = output_temp

    ###################################################################################################
    def backpropagate(self, Y):
        self.errors[self.L-1] = self.outputs[self.L-1] - Y

        for i in range(self.L - 2, 0, -1):
            if (self.activation == 'sigmoid'):
                self.errors[i] = np.dot( self.errors[i+1], self.weights[i][:, 1:] ) * self.sigmoid_prime(self.inputs[i])
            elif (self.activation == 'tanh'):
                self.errors[i] = np.dot( self.errors[i+1], self.weights[i][:, 1:] ) * (1 - self.outputs[i][:, 1:]*self.outputs[i][:, 1:])

        for i in range(0, self.L-1):
            grad = np.dot(self.errors[i+1].T, self.outputs[i]) / m
            self.weights[i] = self.weights[i] - self.rate*grad

    ###################################################################################################
    def sigmoid(self, z):
        s = 1.0/(1.0 + np.exp(-z))
        return s

    ###################################################################################################
    def sigmoid_prime(self, z):
        s = self.sigmoid(z)*(1 - self.sigmoid(z))
        return s

    ###################################################################################################
    def predict(self, X, weights):
        m = X.shape[0]

        self.inputs = []
        self.outputs = []
        self.weights = weights

        for layer in range(self.L):
            self.inputs.append(np.empty([m, self.neurons[layer]]))
            if (layer < self.L -1):
                self.outputs.append(np.empty([m, self.neurons[layer]+1]))
            else:
                self.outputs.append(np.empty([m, self.neurons[layer]]))

        self.feedforward(X)
        return self.outputs[self.L-1]

###################################################################################################
# MAIN PART

activation1 = 'sigmoid'   # the input should be scaled into [ 0..1]
activation2 = 'tanh'      # the input should be scaled into [-1..1]

activation = activation1

net = Neuralnet([1, 6, 1], activation) # structure of the NN and its activation function

##########################################################################################
# TRAINING

m = 1000 # size of the training set
X = np.linspace(0, 4*math.pi, num = m).reshape(m, 1); # input training set
Y = np.sin(X) # target

kx = 0.1 # noise parameter
noise = (2.0*np.random.uniform(0, kx, m) - kx).reshape(m, 1)
Y = Y + noise # noisy target

# scaling of the target depending on the activation function
if (activation == 'sigmoid'):
    Y_scaled = (Y/(1+kx) + 1)/2.0
elif (activation == 'tanh'):
    Y_scaled = Y/(1+kx)

# number of the iteration for the training stage
iter_count = 20000
net.train(X, Y_scaled, iter_count) # training

# gained weights
trained_weights = net.weights

##########################################################################################
# PREDICTION

m_new = 40 # size of the prediction set
X_new = np.linspace(0, 4*math.pi, num = m_new).reshape(m_new, 1);

Y_new = net.predict(X_new, trained_weights) # prediction

# rescaling of the result
if (activation == 'sigmoid'):
    Y_new = (2.0*Y_new - 1.0) * (1+kx)
elif (activation == 'tanh'):
    Y_new = Y_new * (1+kx)

# visualization
plt.plot(X, Y)
plt.plot(X_new, Y_new, 'ro')
plt.show()

raw_input('press any key to exit')
```

**UPDATE**

I would like to take back the statement regarding the training method used in your code. The network can be indeed trained using only one sample per iteration. I got interesting results in online-training using both sigmoid and tanh activation functions:

**Online-training using Sigmoid** (cost function and prediction)

[![Sigmoid](http://i.stack.imgur.com/vuze1.png)](http://i.stack.imgur.com/vuze1.png)

**Online-training using Tanh** (cost function and prediction)

[![Tanh](http://i.stack.imgur.com/JJRqe.png)](http://i.stack.imgur.com/JJRqe.png)

As can be seen the choice of Sigmoid as activation function gives better performance. The cost function looks not that good as during the offline-training, but at least it tends to go down.
I plotted the cost function in your implementation, it looks pretty jerky as well:

[![enter image description here](http://i.stack.imgur.com/KVhZS.png)](http://i.stack.imgur.com/KVhZS.png)

Maybe it is a good idea to try your code with the sigmoid or even the ReLU function.

Here is the updated source code. To switch between `online` and `offline` training modes just change the `method` variable.

```
import numpy as np
import math
import matplotlib.pyplot as plt

class Neuralnet:
    def __init__(self, neurons, activation):
        self.weights = []
        self.inputs = []
        self.outputs = []
        self.errors = []
        self.rate = 0.2
        self.activation = activation   # sigmoid or tanh
        self.neurons = neurons
        self.L = len(self.neurons)     # number of layers
        eps = 0.12                     # range for uniform distribution -eps..+eps
        for layer in range(len(neurons)-1):
            self.weights.append(np.random.uniform(-eps,eps,size=(neurons[layer+1], neurons[layer]+1)))

    ###################################################################################################
    def train(self, X, Y, iter_count):
        m = X.shape[0]

        for layer in range(self.L):
            self.inputs.append(np.empty([m, self.neurons[layer]]))
            self.errors.append(np.empty([m, self.neurons[layer]]))
            if (layer < self.L -1):
                self.outputs.append(np.empty([m, self.neurons[layer]+1]))
            else:
                self.outputs.append(np.empty([m, self.neurons[layer]]))

        # accumulate the cost function
        J_history = np.zeros([iter_count, 1])

        for i in range(iter_count):
            self.feedforward(X)
            J = self.cost(Y, self.outputs[self.L-1])
            J_history[i, 0] = J
            self.backpropagate(Y)

        # plot the cost function to check the descent
        #plt.plot(J_history)
        #plt.show()

    ###################################################################################################
    def cost(self, Y, H):
        J = np.sum(np.sum(np.power((Y - H), 2), axis=0))/(2*m)
        return J

    ###################################################################################################
    def cost_online(self, min_x, max_x, iter_number):
        h_arr = np.zeros([iter_number, 1])
        y_arr = np.zeros([iter_number, 1])

        for step in range(iter_number):
            x = np.random.uniform(min_x, max_x, 1).reshape(1, 1)

            self.feedforward(x)
            h_arr[step, 0] = self.outputs[-1]
            y_arr[step, 0] = np.sin(x)

        J = np.sum(np.sum(np.power((y_arr - h_arr), 2), axis=0))/(2*iter_number)
        return J

    ###################################################################################################
    def feedforward(self, X):
        m = X.shape[0]

        self.outputs[0] = np.concatenate( (np.ones([m, 1]), X), axis=1)

        for i in range(1, self.L):
            self.inputs[i] = np.dot( self.outputs[i-1], self.weights[i-1].T )
            if (self.activation == 'sigmoid'):
                output_temp = self.sigmoid(self.inputs[i])
            elif (self.activation == 'tanh'):
                output_temp = np.tanh(self.inputs[i])

            if (i < self.L - 1):
                self.outputs[i] = np.concatenate( (np.ones([m, 1]), output_temp), axis=1)
            else:
                self.outputs[i] = output_temp

    ###################################################################################################
    def backpropagate(self, Y):
        self.errors[self.L-1] = self.outputs[self.L-1] - Y

        for i in range(self.L - 2, 0, -1):
            if (self.activation == 'sigmoid'):
                self.errors[i] = np.dot( self.errors[i+1], self.weights[i][:, 1:] ) * self.sigmoid_prime(self.inputs[i])
            elif (self.activation == 'tanh'):
                self.errors[i] = np.dot( self.errors[i+1], self.weights[i][:, 1:] ) * (1 - self.outputs[i][:, 1:]*self.outputs[i][:, 1:])

        for i in range(0, self.L-1):
            grad = np.dot(self.errors[i+1].T, self.outputs[i]) / m
            self.weights[i] = self.weights[i] - self.rate*grad

    ###################################################################################################
    def sigmoid(self, z):
        s = 1.0/(1.0 + np.exp(-z))
        return s

    ###################################################################################################
    def sigmoid_prime(self, z):
        s = self.sigmoid(z)*(1 - self.sigmoid(z))
        return s

    ###################################################################################################
    def predict(self, X, weights):
        m = X.shape[0]

        self.inputs = []
        self.outputs = []
        self.weights = weights

        for layer in range(self.L):
            self.inputs.append(np.empty([m, self.neurons[layer]]))
            if (layer < self.L -1):
                self.outputs.append(np.empty([m, self.neurons[layer]+1]))
            else:
                self.outputs.append(np.empty([m, self.neurons[layer]]))

        self.feedforward(X)
        return self.outputs[self.L-1]

###################################################################################################
# MAIN PART

activation1 = 'sigmoid'   # the input should be scaled into [0..1]
activation2 = 'tanh'      # the input should be scaled into [-1..1]

activation = activation1

net = Neuralnet([1, 6, 1], activation) # structure of the NN and its activation function

method1 = 'online'
method2 = 'offline'

method = method1

kx = 0.1 # noise parameter

###################################################################################################
# TRAINING

if (method == 'offline'):

    m = 1000 # size of the training set
    X = np.linspace(0, 4*math.pi, num = m).reshape(m, 1); # input training set
    Y = np.sin(X) # target

    noise = (2.0*np.random.uniform(0, kx, m) - kx).reshape(m, 1)
    Y = Y + noise # noisy target

    # scaling of the target depending on the activation function
    if (activation == 'sigmoid'):
        Y_scaled = (Y/(1+kx) + 1)/2.0
    elif (activation == 'tanh'):
        Y_scaled = Y/(1+kx)

    # number of the iteration for the training stage
    iter_count = 20000
    net.train(X, Y_scaled, iter_count) # training

elif (method == 'online'):

    sampling_count = 100000 # number of samplings during the training stage

    m = 1 # batch size

    iter_count = sampling_count/m

    for layer in range(net.L):
        net.inputs.append(np.empty([m, net.neurons[layer]]))
        net.errors.append(np.empty([m, net.neurons[layer]]))
        if (layer < net.L -1):
            net.outputs.append(np.empty([m, net.neurons[layer]+1]))
        else:
            net.outputs.append(np.empty([m, net.neurons[layer]]))

    J_history = []
    step_history = []

    for i in range(iter_count):
        X = np.random.uniform(0, 4*math.pi, m).reshape(m, 1)
        Y = np.sin(X) # target
        noise = (2.0*np.random.uniform(0, kx, m) - kx).reshape(m, 1)
        Y = Y + noise # noisy target

        # scaling of the target depending on the activation function
        if (activation == 'sigmoid'):
            Y_scaled = (Y/(1+kx) + 1)/2.0
        elif (activation == 'tanh'):
            Y_scaled = Y/(1+kx)

        net.feedforward(X)
        net.backpropagate(Y_scaled)

        if (np.remainder(i, 1000) == 0):
            J = net.cost_online(0, 4*math.pi, 1000)
            J_history.append(J)
            step_history.append(i)

    plt.plot(step_history, J_history)
    plt.title('Batch size ' + str(m) + ', rate ' + str(net.rate) + ', samples ' + str(sampling_count))
    #plt.ylim([0, 0.1])
    plt.show()

# gained weights
trained_weights = net.weights

##########################################################################################
# PREDICTION

m_new = 40 # size of the prediction set
X_new = np.linspace(0, 4*math.pi, num = m_new).reshape(m_new, 1);

Y_new = net.predict(X_new, trained_weights) # prediction

# rescaling of the result
if (activation == 'sigmoid'):
    Y_new = (2.0*Y_new - 1.0) * (1+kx)
elif (activation == 'tanh'):
    Y_new = Y_new * (1+kx)

# visualization

# fake sine curve to show the ideal signal
if (method == 'online'):
    X = np.linspace(0, 4*math.pi, num = 100)
    Y = np.sin(X)

plt.plot(X, Y)
plt.plot(X_new, Y_new, 'ro')
if (method == 'online'):
    plt.title('Batch size ' + str(m) + ', rate ' + str(net.rate) + ', samples ' + str(sampling_count))
plt.ylim([-1.5, 1.5])
plt.show()

raw_input('press any key to exit')
```

*Now I have some remarks to your current code:*

Your sine function looks like this:

```
def sine_example():
    net = Neuralnet([1, 6, 1])
    for step in range(100000):
        x = np.random.normal()
        net.feedforward([x])
        net.backpropagate([np.tanh(np.sin(x))])
    net.feedforward([3])
    print(net.outputs[-1])
```

I don't know why you use tanh in your target input. If you really want to use tanh of sine as target, you need to scale it to `[-1..1]`, because tanh(sin(x)) returns values in range `[-0.76..0.76]`.

The next thing is the range of your training set. You use `x = np.random.normal()` to generate the samples.
Here is the distribution of such an input: [![enter image description here](http://i.stack.imgur.com/lj1hD.png)](http://i.stack.imgur.com/lj1hD.png) After it you want your network to predict the sine of `3`, but the network has almost never seen this number during the training stage. I would use the uniform distribution in a wider range for sample generation instead.
Convert an Array, converted to a String, back to an Array
34,655,194
3
2016-01-07T12:30:21Z
34,655,447
11
2016-01-07T12:44:01Z
[ "python", "string", "list", "python-3.x" ]
I recently found an interesting behaviour in python due to a bug in my code. Here's a simplified version of what happened:

```
a=[[1,2],[2,3],[3,4]]
print(str(a))
```

```
console: "[[1,2],[2,3],[3,4]]"
```

Now I wondered if I could convert the String back to an Array. Is there a good way of converting a String, representing an Array with mixed datatypes (`"[1,'Hello',['test','3'],True,2.532]"`) including integers, strings, booleans, floats and arrays back to an Array?
There's always everybody's old favourite `ast.literal_eval`

```
>>> import ast
>>> x = "[1,'Hello',['test','3'],True,2.532]"
>>> y = ast.literal_eval(x)
>>> y
[1, 'Hello', ['test', '3'], True, 2.532]
>>> z = str(y)
>>> z
"[1, 'Hello', ['test', '3'], True, 2.532]"
```
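One point worth adding (my own note, not part of the answer above): unlike `eval`, `ast.literal_eval` only accepts Python literals, so it refuses arbitrary expressions. A quick sketch of both the round-trip and the safety property:

```python
import ast

# Round-trip a mixed-type list through str() and back
original = [1, 'Hello', ['test', '3'], True, 2.532]
restored = ast.literal_eval(str(original))

# literal_eval rejects anything that is not a plain literal structure
try:
    ast.literal_eval("__import__('os')")
    rejected = False
except ValueError:
    rejected = True
```

This is why `ast.literal_eval` is the usual recommendation over `eval` for untrusted strings.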
Is there a built-in equivalent to C#'s Enumerable.Single()?
34,662,054
2
2016-01-07T18:00:50Z
34,662,238
10
2016-01-07T18:11:41Z
[ "python" ]
in C#, if you have an enumerable and try to call [.Single()](https://msdn.microsoft.com/library/bb155325(v=vs.100).aspx) on it, it will throw an error if it does not have exactly one element in it. Is there something similar built-in to Python for this?

```
if len(iterable) == 0 or len(iterable) > 1:
    raise Error("...")
return iterable[0]
```
Not a built in method, but there is an idiomatic way to achieve the same goal: `(value,) = iterable` raises `ValueError` if `iterable` doesn't contain exactly one element. The single element will be stored in `value` so your example could be simplified to:

```
(value,) = iterable
return value
```

The unpacking is a feature of the [assignment operator](https://docs.python.org/2/reference/simple_stmts.html#assignment-statements).

> If the target list is a comma-separated list of targets: The object must be an iterable with the same number of items as there are targets in the target list, and the items are assigned, from left to right, to the corresponding targets.
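As a sketch of how this behaves at the edges (my own illustration, with a hypothetical `single` helper): both the empty and the too-long cases raise `ValueError`, matching the C# `Single()` contract:

```python
def single(iterable):
    # Unpacking enforces exactly one element; ValueError otherwise
    (value,) = iterable
    return value

ok = single([42])

failures = 0
for bad in ([], [1, 2]):
    try:
        single(bad)
    except ValueError:
        # raised for both "not enough" and "too many" values
        failures += 1
```

The helper also works on lazy iterables (e.g. generators), which `len()`-based checks do not.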
Python while loop for finding prime numbers
34,664,067
3
2016-01-07T19:59:01Z
34,664,104
8
2016-01-07T20:01:20Z
[ "python", "python-3.x", "while-loop", "primes" ]
As a first exercise with Python, I'm trying to write a program using loops to find primes. Everything works with a for loop so I am trying to use a while loop. This works but the program returns a few incorrect numbers.

```
import math

# looking for all primes below this number
max_num = int(input("max number?: "))

primes = [2]  # start with 2
test_num = 3  # which means testing starts with 3

while test_num < max_num:
    i = 0
    # It's only necessary to check with the primes smaller than the square
    # root of the test_num
    while primes[i] < math.sqrt(test_num):
        # using modulo to figure out if test_num is prime or not
        if (test_num % primes[i]) == 0:
            test_num += 1
            break
        else:
            i += 1
    else:
        primes.append(test_num)
        test_num += 1

print(primes)
```

So the weird thing is that for `max_num=100` it returns:

```
[2, 3, 5, 7, 9, 11, 13, 17, 19, 23, 25, 29, 31, 37, 41, 43, 47, 49, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]
```

which is correct except for 9, 25 and 49 and I can't figure out why.
You need to go up to and including the square root. Otherwise your algorithm will miss the family of prime squares (9, 25, and 49 are prime squares).

The quick fix is to replace `<` with `<=` as your stopping condition. But consider changing the stopping condition to `primes[i] * primes[i] <= test_num`. With this test, you don't dip in and out of floating point.
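A quick way to see the difference (my own illustration, using hypothetical helper functions rather than the asker's loop): a bound that stops *before* the square root misclassifies prime squares, while the integer `d * d <= n` bound does not:

```python
import math

def is_prime_exclusive(n):
    # Buggy variant: stops before the square root, as in the question
    d = 2
    while d < math.sqrt(n):
        if n % d == 0:
            return False
        d += 1
    return n > 1

def is_prime_inclusive(n):
    # Fixed variant: d * d <= n includes the square root, no floats involved
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return n > 1
```

With these, `is_prime_exclusive(9)` wrongly reports `True` (9's only nontrivial divisor is its square root, 3), while `is_prime_inclusive(9)` correctly reports `False`.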
XGBoost XGBClassifier Defaults in Python
34,674,797
5
2016-01-08T10:30:37Z
34,696,477
8
2016-01-09T17:14:06Z
[ "python", "scikit-learn", "classification", "analytics", "xgboost" ]
I am attempting to use XGBoosts classifier to classify some binary data. When I do the simplest thing and just use the defaults (as follows)

```
clf = xgb.XGBClassifier()
metLearn=CalibratedClassifierCV(clf, method='isotonic', cv=2)
metLearn.fit(train, trainTarget)
testPredictions = metLearn.predict(test)
```

I get reasonably good classification results.

My next step was to try tuning my parameters. Guessing from the parameters guide at... <https://github.com/dmlc/xgboost/blob/master/doc/parameter.md> I wanted to start from the default and work from there...

```
# setup parameters for xgboost
param = {}
param['booster'] = 'gbtree'
param['objective'] = 'binary:logistic'
param["eval_metric"] = "error"
param['eta'] = 0.3
param['gamma'] = 0
param['max_depth'] = 6
param['min_child_weight']=1
param['max_delta_step'] = 0
param['subsample']= 1
param['colsample_bytree']=1
param['silent'] = 1
param['seed'] = 0
param['base_score'] = 0.5

clf = xgb.XGBClassifier(params)
metLearn=CalibratedClassifierCV(clf, method='isotonic', cv=2)
metLearn.fit(train, trainTarget)
testPredictions = metLearn.predict(test)
```

The result is everything being predicted to be one of the conditions and not the other.

curiously if I set

```
params={}
```

which I expected to give me the same defaults as not feeding any parameters, I get the same thing happening

So does anyone know what the defaults for XGBclassifier is? so that I can start tuning?
That isn't how you set parameters in xgboost. You would either want to pass your param grid into your training function, such as xgboost's `train` or sklearn's `GridSearchCV`, or you would want to use your XGBClassifier's `set_params` method. Another thing to note is that if you're using xgboost's wrapper to sklearn (ie: the `XGBClassifier()` or `XGBRegressor()` classes) then the parameter names used are the same ones used in sklearn's own GBM class (ex: eta --> learning\_rate). I'm not seeing where the exact documentation for the sklearn wrapper is hidden, but the code for those classes is here: <https://github.com/dmlc/xgboost/blob/master/python-package/xgboost/sklearn.py>

For your reference here is how you would set the model object parameters directly.

```
>>> grid = {'max_depth':10}
>>>
>>> clf = XGBClassifier()
>>> clf.max_depth
3
>>> clf.set_params(**grid)
XGBClassifier(base_score=0.5, colsample_bylevel=1, colsample_bytree=1, gamma=0,
       learning_rate=0.1, max_delta_step=0, max_depth=10,
       min_child_weight=1, missing=None, n_estimators=100, nthread=-1,
       objective='binary:logistic', reg_alpha=0, reg_lambda=1,
       scale_pos_weight=1, seed=0, silent=True, subsample=1)
>>> clf.max_depth
10
```

EDIT: I suppose you can set parameters on model creation, it just isn't super typical to do so since most people grid search in some means. However if you do so you would need to either list them as full params or use \*\*kwargs. For example:

```
>>> XGBClassifier(max_depth=10)
XGBClassifier(base_score=0.5, colsample_bylevel=1, colsample_bytree=1, gamma=0,
       learning_rate=0.1, max_delta_step=0, max_depth=10,
       min_child_weight=1, missing=None, n_estimators=100, nthread=-1,
       objective='binary:logistic', reg_alpha=0, reg_lambda=1,
       scale_pos_weight=1, seed=0, silent=True, subsample=1)
>>> XGBClassifier(**grid)
XGBClassifier(base_score=0.5, colsample_bylevel=1, colsample_bytree=1, gamma=0,
       learning_rate=0.1, max_delta_step=0, max_depth=10,
       min_child_weight=1, missing=None, n_estimators=100, nthread=-1,
       objective='binary:logistic', reg_alpha=0, reg_lambda=1,
       scale_pos_weight=1, seed=0, silent=True, subsample=1)
```

Using a dictionary as input without \*\*kwargs will set that parameter to literally be your dictionary:

```
>>> XGBClassifier(grid)
XGBClassifier(base_score=0.5, colsample_bylevel=1, colsample_bytree=1, gamma=0,
       learning_rate=0.1, max_delta_step=0, max_depth={'max_depth': 10},
       min_child_weight=1, missing=None, n_estimators=100, nthread=-1,
       objective='binary:logistic', reg_alpha=0, reg_lambda=1,
       scale_pos_weight=1, seed=0, silent=True, subsample=1)
```
Understanding Python List operation
34,678,158
3
2016-01-08T13:23:31Z
34,678,182
7
2016-01-08T13:25:19Z
[ "python", "list", "sequence" ]
I am new to python. I am learning some basic stuff. I was doing some operation on a python list like this: `three_lists=[]*3`. When I execute this piece of code it gives me only one empty list, like this: `[]`. Why is it not giving me 3 empty lists? Somewhat like this: `[],[],[]`
It says right in the [Python docs](https://docs.python.org/3/library/stdtypes.html#common-sequence-operations)

> `s * n` or `n * s` equivalent to adding `s` to itself `n` times

where `s` is a sequence and `n` is an `int`. For example

```
>>> [1,2,3]*3
[1, 2, 3, 1, 2, 3, 1, 2, 3]
```

This is consistent with other sequences as well, such as `str`

```
>>> 'hello'*3
'hellohellohello'
```

If you wanted a list of 3 empty lists you could say

```
>>> [[] for _ in range(3)]
[[], [], []]
```
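One caveat worth adding (my own note, beyond the answer above): even `[[]] * 3` would not do what a beginner usually wants, because repetition copies *references* to the same inner list, which is why the comprehension form is preferred:

```python
aliased = [[]] * 3                     # three references to one inner list
independent = [[] for _ in range(3)]   # three distinct inner lists

aliased[0].append(1)       # shows up in all three slots
independent[0].append(1)   # only the first slot changes
```

After the appends, `aliased` is `[[1], [1], [1]]` while `independent` is `[[1], [], []]`.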
lowercase first n characters
34,680,657
2
2016-01-08T15:37:14Z
34,680,676
11
2016-01-08T15:38:35Z
[ "python", "lowercase" ]
I'm trying to lowercase the first n characters in a string. For example, say I want to lowercase the first 4 characters in this string:

```
String1 = 'HELPISNEEDED'
```

I would like the output to look like this:

```
String1 = 'helpISNEEDED'
```

I thought I could use this:

```
String1 = String1[4].lower() + String1[5:]
```

but this gives me this output:

```
String1 = 'iSNEEDED'
```

Any idea on how I'm doing this wrong?
You selected just *one character*. Use a slice for both parts:

```
String1 = String1[:4].lower() + String1[4:]
```

Note that the second object starts slicing at `4`, not `5`; you want to skip `'HELP'`, not `'HELPI'`:

```
>>> String1 = 'HELPISNEEDED'
>>> String1[:4].lower() + String1[4:]
'helpISNEEDED'
```

Remember: the start index is inclusive, the end index is exclusive; `:4` selects indices 0, 1, 2, and 3, while `4:` selects indices 4 and onwards.
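Generalized into a small helper (my own sketch, with a hypothetical function name): slices never raise `IndexError`, so the same expression is safe even when `n` exceeds the string length:

```python
def lower_first_n(s, n):
    # Lowercase the first n characters, leave the rest untouched;
    # out-of-range n simply lowercases the whole string
    return s[:n].lower() + s[n:]
```

For instance, `lower_first_n('HELPISNEEDED', 4)` gives `'helpISNEEDED'`, and `lower_first_n('AB', 10)` gives `'ab'` without error.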
Having trouble using requests for urls
34,681,096
5
2016-01-08T15:58:26Z
34,681,432
8
2016-01-08T16:14:50Z
[ "python", "python-requests", "anaconda" ]
I simply wrote the following code to play around with the Requests library

```
requests tests

import requests

r = requests.get('https://api.github.com/events')
```

but I keep getting the same error message, even if I use `from requests import *`

```
Traceback (most recent call last):
  File "/Users/dvanderknaap/Desktop/Organized/CS/My_Python_Programs/requests.py", line 3, in <module>
    import requests
  File "/Users/dvanderknaap/Desktop/Organized/CS/My_Python_Programs/requests.py", line 5, in <module>
    r = requests.get('https://api.github.com/events')
AttributeError: 'module' object has no attribute 'get'
```

I've tried reinstalling requests using `pip install requests`, but the output is:

```
Requirement already satisfied (use --upgrade to upgrade): requests in /anaconda/lib/python3.5/site-packages
```

I think the problem is that it is installed in my python3.5 library but I am using python2.7, but I'm not sure how to fix that. Advice?
First, rename your file My\_Python\_Programs/requests.py to something other than requests.py. It is importing itself instead of the requests module.

Your python 2.7 may or may not already have the requests package installed. If not, you can install it with

```
pip2.7 install requests
```
Choose three different values from list in Python
34,683,400
6
2016-01-08T18:08:44Z
34,683,498
7
2016-01-08T18:14:03Z
[ "python", "algorithm", "python-3.x" ]
I have a list of points with their coordinates, looking like this:

```
[(0,1),(2,3),(7,-1) and so on.]
```

What is the Pythonic way to iterate over them and choose three different every time? I can't find simpler solution than using three `for` loops like this:

```
for point1 in a:
    for point2 in a:
        if not point1 == point2:
            for point3 in a:
                if not point1 == point3 and not point2 == point3:
```

So I'm asking for help.
```
import random

lst = [(0, 1), (2, 3), (7, -1), (1, 2), (4, 5)]
random.sample(lst, 3)
```

This will just give you 3 points chosen at random from the list. It seems you may want something different. Can you clarify?
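If the goal is instead to iterate over *every* distinct triple (as the question's nested loops suggest), `itertools.combinations` replaces the three loops and the equality checks — a sketch of my own, not part of the answer above:

```python
from itertools import combinations

lst = [(0, 1), (2, 3), (7, -1), (1, 2), (4, 5)]

# Every unordered triple of distinct positions, no manual de-duplication needed
triples = list(combinations(lst, 3))
```

For 5 points this yields C(5, 3) = 10 triples, each containing three distinct points.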
Insert 0s into 2d array
34,685,084
13
2016-01-08T19:57:28Z
34,685,182
10
2016-01-08T20:03:35Z
[ "python", "arrays", "numpy" ]
I have an array `x`:

```
x = [0, -1, 0, 3]
```

and I want `y`:

```
y = [[0, -2, 0, 2],
     [0, -1, 0, 3],
     [0, 0, 0, 4]]
```

where the first row is `x-1`, the second row is `x`, and the third row is `x+1`. All even column indices are zero. I'm doing:

```
y=np.vstack(x-1, x, x+1)
y[0][::2] = 0
y[1][::2] = 0
y[2][::2] = 0
```

I was thinking there might be a one-liner to do this instead of 4.
## In two lines

```
>>> x = np.array([0, -1, 0, 3])
>>> y = np.vstack((x-1, x, x+1))
>>> y[:,::2] = 0
>>> y
array([[ 0, -2,  0,  2],
       [ 0, -1,  0,  3],
       [ 0,  0,  0,  4]])
```

## Explanation

```
y[:, ::2]
```

gives the full first dimension, i.e. all rows, and every other entry from the second dimension, i.e. the columns:

```
array([[-1, -1],
       [ 0,  0],
       [ 1,  1]])
```

This is different from:

```
y[:][::2]
```

because this works in two steps. Step one:

```
y[:]
```

gives a view of the whole array:

```
array([[-1, -2, -1,  2],
       [ 0, -1,  0,  3],
       [ 1,  0,  1,  4]])
```

Therefore, step two is doing essentially this:

```
y[::2]

array([[-1, -2, -1,  2],
       [ 1,  0,  1,  4]])
```

It works along the first dimension, i.e. the rows.
Odd behavior of Python operator.xor
34,685,596
4
2016-01-08T20:33:55Z
34,685,638
7
2016-01-08T20:36:29Z
[ "python", "bitwise-operators", "xor" ]
I am working on an encryption puzzle and am needing to take the exclusive or of two binary numbers (I'm using the `operator` package in Python). If I run `operator.xor(1001111, 1100001)` for instance I get the very weird output `2068086`. Why doesn't it return `0101110` or at least `101110`?
The calculated answer is using the decimal values you provided, not their binary appearance. What you are really asking is...

```
1001111 ^ 1100001
```

when what you mean is `79 ^ 97`. Instead try using the binary literals as so...

```
0b1001111 ^ 0b1100001
```

See [How do you express binary literals in Python?](http://stackoverflow.com/questions/1476/how-do-you-express-binary-literals-in-python) for more information.
Odd behavior of Python operator.xor
34,685,596
4
2016-01-08T20:33:55Z
34,685,642
9
2016-01-08T20:36:36Z
[ "python", "bitwise-operators", "xor" ]
I am working on an encryption puzzle and am needing to take the exclusive or of two binary numbers (I'm using the `operator` package in Python). If I run `operator.xor(1001111, 1100001)` for instance I get the very weird output `2068086`. Why doesn't it return `0101110` or at least `101110`?
Because Python doesn't see that as binary numbers. Instead use:

```
operator.xor(0b1001111, 0b1100001)
```
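To get a binary representation end to end (my own sketch, beyond the answer above): parse the strings with `int(s, 2)` and format the result back with the `'b'` presentation type. The `'07b'` width here is just an assumption matching the 7-digit inputs:

```python
a = int('1001111', 2)   # 79
b = int('1100001', 2)   # 97

# XOR the integers, then render as a zero-padded 7-digit binary string
result = format(a ^ b, '07b')
```

Here `a ^ b` is 46, and `result` is `'0101110'` — the answer the question expected.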
How to link PyCharm with PySpark?
34,685,905
17
2016-01-08T20:55:36Z
34,714,207
20
2016-01-11T04:29:58Z
[ "python", "apache-spark", "pycharm", "homebrew", "pyspark" ]
I'm new with apache spark and apparently I installed apache-spark with homebrew in my macbook:

```
Last login: Fri Jan  8 12:52:04 on console
user@MacBook-Pro-de-User-2:~$ pyspark
Python 2.7.10 (default, Jul 13 2015, 12:05:58)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/01/08 14:46:44 INFO SparkContext: Running Spark version 1.5.1
16/01/08 14:46:46 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/08 14:46:47 INFO SecurityManager: Changing view acls to: user
16/01/08 14:46:47 INFO SecurityManager: Changing modify acls to: user
16/01/08 14:46:47 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(user); users with modify permissions: Set(user)
16/01/08 14:46:50 INFO Slf4jLogger: Slf4jLogger started
16/01/08 14:46:50 INFO Remoting: Starting remoting
16/01/08 14:46:51 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:50199]
16/01/08 14:46:51 INFO Utils: Successfully started service 'sparkDriver' on port 50199.
16/01/08 14:46:51 INFO SparkEnv: Registering MapOutputTracker
16/01/08 14:46:51 INFO SparkEnv: Registering BlockManagerMaster
16/01/08 14:46:51 INFO DiskBlockManager: Created local directory at /private/var/folders/5x/k7n54drn1csc7w0j7vchjnmc0000gn/T/blockmgr-769e6f91-f0e7-49f9-b45d-1b6382637c95
16/01/08 14:46:51 INFO MemoryStore: MemoryStore started with capacity 530.0 MB
16/01/08 14:46:52 INFO HttpFileServer: HTTP File server directory is /private/var/folders/5x/k7n54drn1csc7w0j7vchjnmc0000gn/T/spark-8e4749ea-9ae7-4137-a0e1-52e410a8e4c5/httpd-1adcd424-c8e9-4e54-a45a-a735ade00393
16/01/08 14:46:52 INFO HttpServer: Starting HTTP Server
16/01/08 14:46:52 INFO Utils: Successfully started service 'HTTP file server' on port 50200.
16/01/08 14:46:52 INFO SparkEnv: Registering OutputCommitCoordinator
16/01/08 14:46:52 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/01/08 14:46:52 INFO SparkUI: Started SparkUI at http://192.168.1.64:4040
16/01/08 14:46:53 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
16/01/08 14:46:53 INFO Executor: Starting executor ID driver on host localhost
16/01/08 14:46:53 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 50201.
16/01/08 14:46:53 INFO NettyBlockTransferService: Server created on 50201
16/01/08 14:46:53 INFO BlockManagerMaster: Trying to register BlockManager
16/01/08 14:46:53 INFO BlockManagerMasterEndpoint: Registering block manager localhost:50201 with 530.0 MB RAM, BlockManagerId(driver, localhost, 50201)
16/01/08 14:46:53 INFO BlockManagerMaster: Registered BlockManager
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.5.1
      /_/

Using Python version 2.7.10 (default, Jul 13 2015 12:05:58)
SparkContext available as sc, HiveContext available as sqlContext.
>>>
```

I would like start playing in order to learn more about MLlib. However, I use Pycharm to write scripts in python. The problem is: when I go to Pycharm and try to call pyspark, Pycharm can not found the module. I tried adding the path to Pycharm as follows:

[![cant link pycharm with spark](http://i.stack.imgur.com/SCMrY.png)](http://i.stack.imgur.com/SCMrY.png)

Then from a [blog](http://renien.github.io/blog/accessing-pyspark-pycharm/) I tried this:

```
import os
import sys

# Path for spark source folder
os.environ['SPARK_HOME']="/Users/user/Apps/spark-1.5.2-bin-hadoop2.4"

# Append pyspark to Python Path
sys.path.append("/Users/user/Apps/spark-1.5.2-bin-hadoop2.4/python/pyspark")

try:
    from pyspark import SparkContext
    from pyspark import SparkConf
    print ("Successfully imported Spark Modules")

except ImportError as e:
    print ("Can not import Spark Modules", e)
    sys.exit(1)
```

And still can not start using PySpark with Pycharm, any idea of how to "link" PyCharm with apache-pyspark?.

**Update:**

Then I search for apache-spark and python path in order to set the environment variables of Pycharm:

apache-spark path:

```
user@MacBook-Pro-User-2:~$ brew info apache-spark
apache-spark: stable 1.6.0, HEAD
Engine for large-scale data processing
https://spark.apache.org/
/usr/local/Cellar/apache-spark/1.5.1 (649 files, 302.9M) *
  Poured from bottle
From: https://github.com/Homebrew/homebrew/blob/master/Library/Formula/apache-spark.rb
```

python path:

```
user@MacBook-Pro-User-2:~$ brew info python
python: stable 2.7.11 (bottled), HEAD
Interpreted, interactive, object-oriented programming language
https://www.python.org
/usr/local/Cellar/python/2.7.10_2 (4,965 files, 66.9M) *
```

Then with the above information I tried to set the environment variables as follows:

[![configuration 1](http://i.stack.imgur.com/TOsDo.png)](http://i.stack.imgur.com/TOsDo.png)

**Any idea of how to correctly link Pycharm with pyspark?**

Then when I run a python script with the above configuration I have this exception:

```
/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/bin/python2.7 /Users/user/PycharmProjects/spark_examples/test_1.py
Traceback (most recent call last):
  File "/Users/user/PycharmProjects/spark_examples/test_1.py", line 1, in <module>
    from pyspark import SparkContext
ImportError: No module named pyspark
```

**UPDATE:**

Then I tried this configurations proposed by @zero323

Configuration 1:

```
/usr/local/Cellar/apache-spark/1.5.1/
```

[![conf 1](http://i.stack.imgur.com/i9dZu.png)](http://i.stack.imgur.com/i9dZu.png)

out:

```
user@MacBook-Pro-de-User-2:/usr/local/Cellar/apache-spark/1.5.1$ ls
CHANGES.txt           NOTICE        libexec/
INSTALL_RECEIPT.json  README.md
LICENSE               bin/
```

Configuration 2:

```
/usr/local/Cellar/apache-spark/1.5.1/libexec
```

[![enter image description here](http://i.stack.imgur.com/Bq2YP.png)](http://i.stack.imgur.com/Bq2YP.png)

out:

```
user@MacBook-Pro-de-User-2:/usr/local/Cellar/apache-spark/1.5.1/libexec$ ls
R/        bin/   data/      examples/  python/
RELEASE   conf/  ec2/       lib/       sbin/
```
**Create Run configuration**:

1. Go to **Run** -> **Edit configurations**
2. Add new Python configuration
3. Set **Script** path so it points to the script you want to execute
4. Edit **Environment variables** field so it contains at least:

   * `SPARK_HOME` - it should point to the directory with Spark installation. It should contain directories such as `bin` (with `spark-submit`, `spark-shell`, etc.) and `conf` (with `spark-defaults.conf`, `spark-env.sh`, etc.)
   * `PYTHONPATH` - it should contain `$SPARK_HOME/python` and optionally `$SPARK_HOME/python/lib/py4j-some-version.src.zip` if not available otherwise. `some-version` should match Py4J version used by a given Spark installation (0.8.2.1 - 1.5, 0.9 - 1.6.0)

   [![enter image description here](http://i.stack.imgur.com/LwAJ6.png)](http://i.stack.imgur.com/LwAJ6.png)
5. Apply the settings

**Add PySpark library to the interpreter path (required for code completion)**:

1. Go to **File** -> **Settings** -> **Project Interpreter**
2. Open settings for an interpreter you want to use with Spark
3. Edit interpreter paths so it contains path to `$SPARK_HOME/python` (and Py4J if required)
4. Save the settings

Use newly created configuration to run your script.
How to link PyCharm with PySpark?
34,685,905
17
2016-01-08T20:55:36Z
36,415,945
13
2016-04-05T02:12:10Z
[ "python", "apache-spark", "pycharm", "homebrew", "pyspark" ]
I'm new with apache spark and apparently I installed apache-spark with homebrew in my macbook:

```
Last login: Fri Jan  8 12:52:04 on console
user@MacBook-Pro-de-User-2:~$ pyspark
Python 2.7.10 (default, Jul 13 2015, 12:05:58)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/01/08 14:46:44 INFO SparkContext: Running Spark version 1.5.1
16/01/08 14:46:46 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/08 14:46:47 INFO SecurityManager: Changing view acls to: user
16/01/08 14:46:47 INFO SecurityManager: Changing modify acls to: user
16/01/08 14:46:47 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(user); users with modify permissions: Set(user)
16/01/08 14:46:50 INFO Slf4jLogger: Slf4jLogger started
16/01/08 14:46:50 INFO Remoting: Starting remoting
16/01/08 14:46:51 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:50199]
16/01/08 14:46:51 INFO Utils: Successfully started service 'sparkDriver' on port 50199.
16/01/08 14:46:51 INFO SparkEnv: Registering MapOutputTracker
16/01/08 14:46:51 INFO SparkEnv: Registering BlockManagerMaster
16/01/08 14:46:51 INFO DiskBlockManager: Created local directory at /private/var/folders/5x/k7n54drn1csc7w0j7vchjnmc0000gn/T/blockmgr-769e6f91-f0e7-49f9-b45d-1b6382637c95
16/01/08 14:46:51 INFO MemoryStore: MemoryStore started with capacity 530.0 MB
16/01/08 14:46:52 INFO HttpFileServer: HTTP File server directory is /private/var/folders/5x/k7n54drn1csc7w0j7vchjnmc0000gn/T/spark-8e4749ea-9ae7-4137-a0e1-52e410a8e4c5/httpd-1adcd424-c8e9-4e54-a45a-a735ade00393
16/01/08 14:46:52 INFO HttpServer: Starting HTTP Server
16/01/08 14:46:52 INFO Utils: Successfully started service 'HTTP file server' on port 50200.
16/01/08 14:46:52 INFO SparkEnv: Registering OutputCommitCoordinator
16/01/08 14:46:52 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/01/08 14:46:52 INFO SparkUI: Started SparkUI at http://192.168.1.64:4040
16/01/08 14:46:53 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
16/01/08 14:46:53 INFO Executor: Starting executor ID driver on host localhost
16/01/08 14:46:53 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 50201.
16/01/08 14:46:53 INFO NettyBlockTransferService: Server created on 50201
16/01/08 14:46:53 INFO BlockManagerMaster: Trying to register BlockManager
16/01/08 14:46:53 INFO BlockManagerMasterEndpoint: Registering block manager localhost:50201 with 530.0 MB RAM, BlockManagerId(driver, localhost, 50201)
16/01/08 14:46:53 INFO BlockManagerMaster: Registered BlockManager
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.5.1
      /_/

Using Python version 2.7.10 (default, Jul 13 2015 12:05:58)
SparkContext available as sc, HiveContext available as sqlContext.
>>>
```

I would like start playing in order to learn more about MLlib. However, I use Pycharm to write scripts in python. The problem is: when I go to Pycharm and try to call pyspark, Pycharm can not found the module. I tried adding the path to Pycharm as follows:

[![cant link pycharm with spark](http://i.stack.imgur.com/SCMrY.png)](http://i.stack.imgur.com/SCMrY.png)

Then from a [blog](http://renien.github.io/blog/accessing-pyspark-pycharm/) I tried this:

```
import os
import sys

# Path for spark source folder
os.environ['SPARK_HOME']="/Users/user/Apps/spark-1.5.2-bin-hadoop2.4"

# Append pyspark to Python Path
sys.path.append("/Users/user/Apps/spark-1.5.2-bin-hadoop2.4/python/pyspark")

try:
    from pyspark import SparkContext
    from pyspark import SparkConf
    print ("Successfully imported Spark Modules")

except ImportError as e:
    print ("Can not import Spark Modules", e)
    sys.exit(1)
```

And still can not start using PySpark with Pycharm, any idea of how to "link" PyCharm with apache-pyspark?.

**Update:**

Then I search for apache-spark and python path in order to set the environment variables of Pycharm:

apache-spark path:

```
user@MacBook-Pro-User-2:~$ brew info apache-spark
apache-spark: stable 1.6.0, HEAD
Engine for large-scale data processing
https://spark.apache.org/
/usr/local/Cellar/apache-spark/1.5.1 (649 files, 302.9M) *
  Poured from bottle
From: https://github.com/Homebrew/homebrew/blob/master/Library/Formula/apache-spark.rb
```

python path:

```
user@MacBook-Pro-User-2:~$ brew info python
python: stable 2.7.11 (bottled), HEAD
Interpreted, interactive, object-oriented programming language
https://www.python.org
/usr/local/Cellar/python/2.7.10_2 (4,965 files, 66.9M) *
```

Then with the above information I tried to set the environment variables as follows:

[![configuration 1](http://i.stack.imgur.com/TOsDo.png)](http://i.stack.imgur.com/TOsDo.png)

**Any idea of how to correctly link Pycharm with pyspark?**

Then when I run a python script with the above configuration I have this exception:

```
/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/bin/python2.7 /Users/user/PycharmProjects/spark_examples/test_1.py
Traceback (most recent call last):
  File "/Users/user/PycharmProjects/spark_examples/test_1.py", line 1, in <module>
    from pyspark import SparkContext
ImportError: No module named pyspark
```

**UPDATE:**

Then I tried this configurations proposed by @zero323

Configuration 1:

```
/usr/local/Cellar/apache-spark/1.5.1/
```

[![conf 1](http://i.stack.imgur.com/i9dZu.png)](http://i.stack.imgur.com/i9dZu.png)

out:

```
user@MacBook-Pro-de-User-2:/usr/local/Cellar/apache-spark/1.5.1$ ls
CHANGES.txt           NOTICE        libexec/
INSTALL_RECEIPT.json  README.md
LICENSE               bin/
```

Configuration 2:

```
/usr/local/Cellar/apache-spark/1.5.1/libexec
```

[![enter image description here](http://i.stack.imgur.com/Bq2YP.png)](http://i.stack.imgur.com/Bq2YP.png)

out:

```
user@MacBook-Pro-de-User-2:/usr/local/Cellar/apache-spark/1.5.1/libexec$ ls
R/        bin/   data/      examples/  python/
RELEASE   conf/  ec2/       lib/       sbin/
```
Here's how I solved this on mac osx.

1. `brew install apache-spark`
2. Add this to `~/.bash_profile`:

```
export SPARK_VERSION=`ls /usr/local/Cellar/apache-spark/ | sort | tail -1`
export SPARK_HOME="/usr/local/Cellar/apache-spark/$SPARK_VERSION/libexec"
export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.9-src.zip:$PYTHONPATH
```

3. Add pyspark and py4j to the content root (use the correct Spark version):

```
/usr/local/Cellar/apache-spark/1.6.1/libexec/python/lib/py4j-0.9-src.zip
/usr/local/Cellar/apache-spark/1.6.1/libexec/python/lib/pyspark.zip
```

[![enter image description here](http://i.stack.imgur.com/8coKV.png)](http://i.stack.imgur.com/8coKV.png)
Merge lists in Python by placing every nth item from one list and others from another?
34,692,738
10
2016-01-09T11:12:02Z
34,692,834
7
2016-01-09T11:22:33Z
[ "python", "list", "python-2.7", "merge" ]
I have two lists, `list1` and `list2`. Here **`len(list2) << len(list1)`**. Now I want to merge both of the lists such that every **nth element** of the final list is **from `list2`** and **the others from `list1`**. For example:

```
list1 = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
list2 = ['x', 'y']
n = 3
```

Now the final list should be:

```
['a', 'b', 'x', 'c', 'd', 'y', 'e', 'f', 'g', 'h']
```

What is the most [Pythonic](https://en.wiktionary.org/wiki/Pythonic) way to achieve this? I want to add all elements of `list2` to the final list; the final list should include all elements from `list1` and `list2`.
To preserve the original list, you could try the following (note that `copy` needs to be imported):

```
import copy

result = copy.deepcopy(list1)
index = n - 1
for elem in list2:
    result.insert(index, elem)
    index += n
```

`result`:

```
['a', 'b', 'x', 'c', 'd', 'y', 'e', 'f', 'g', 'h']
```
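A hedged sketch wrapping the approach above in a reusable function (the name `merge_every_nth` is made up for illustration); it leaves `list1` untouched:

```python
import copy

def merge_every_nth(list1, list2, n):
    """Insert each element of list2 so it lands at every nth position."""
    result = copy.deepcopy(list1)  # for lists of strings, list(list1) would also do
    index = n - 1
    for elem in list2:
        result.insert(index, elem)
        index += n
    return result

merged = merge_every_nth(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'], ['x', 'y'], 3)
# merged == ['a', 'b', 'x', 'c', 'd', 'y', 'e', 'f', 'g', 'h']
```

`deepcopy` only matters when the elements are themselves mutable; for flat lists of strings a shallow copy is enough.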
Merge lists in Python by placing every nth item from one list and others from another?
34,692,738
10
2016-01-09T11:12:02Z
34,692,876
10
2016-01-09T11:26:51Z
[ "python", "list", "python-2.7", "merge" ]
I have two lists, `list1` and `list2`. Here **`len(list2) << len(list1)`**. Now I want to merge both of the lists such that every **nth element** of the final list is **from `list2`** and **the others from `list1`**. For example:

```
list1 = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
list2 = ['x', 'y']
n = 3
```

Now the final list should be:

```
['a', 'b', 'x', 'c', 'd', 'y', 'e', 'f', 'g', 'h']
```

What is the most [Pythonic](https://en.wiktionary.org/wiki/Pythonic) way to achieve this? I want to add all elements of `list2` to the final list; the final list should include all elements from `list1` and `list2`.
Making the larger list an iterator makes it easy to take multiple elements for each element of the smaller list:

```
list1 = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
list2 = ['x', 'y']
n = 3

iter1 = iter(list1)
res = []
for x in list2:
    res.extend([next(iter1) for _ in range(n - 1)])
    res.append(x)
res.extend(iter1)

>>> res
['a', 'b', 'x', 'c', 'd', 'y', 'e', 'f', 'g', 'h']
```

This avoids `insert`, which can be expensive for large lists because every insertion has to shift all the elements that follow it.
Assign and increment value on one line
34,693,939
7
2016-01-09T13:19:15Z
34,693,996
7
2016-01-09T13:24:42Z
[ "python" ]
Is it possible to assign a value and increment the assigned value on the same line in Python? Something like this:

```
x = 1
a = x
b = (x += 1)
c = (x += 1)
print a
print b
print c

>>> 1
>>> 2
>>> 3
```

**Edit:** I need it in a context where I'm creating an Excel sheet:

```
col = row = 1
ws.cell(row=row, column=col).value = "A cell value"
ws.cell(row=row, column=(col += 1)).value = "Another cell value"
ws.cell(row=row, column=(col += 1)).value = "Another cell value"
```

**Edit 2: Solution:** It's not possible, but I have created an easy fix:

```
col = row = 1

def increment_one():
    global col
    col += 1
    return col

ws.cell(row=row, column=col).value = "A cell value"
ws.cell(row=row, column=increment_one()).value = "Another cell value"
ws.cell(row=row, column=increment_one()).value = "Another cell value"
```
No, that’s not possible in Python. [Assignments](https://docs.python.org/3/reference/simple_stmts.html#assignment-statements) (or [augmented assignments](https://docs.python.org/3/reference/simple_stmts.html#augmented-assignment-statements)) are statements and as such may not appear on the right-hand side of another assignment. You can only assign *expressions* to variables. The reason for this is most likely to avoid the confusion from side effects which is easily caused in other languages that support this.

However, normal assignments do support multiple targets, so you can assign the same expression to multiple variables. This of course still only allows you to have a single expression on the right-hand side (still no statement). In your case, since you want `b` and `x` to end up with the same value, you could write it like this:

```
b = x = x + 1
c = x = x + 1
```

Note that since you’re doing `x = x + 1` you are no longer using an augmented assignment and as such could have different effects for some types (not for integers though).
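For the spreadsheet use case in the question's edit, one stateless alternative to the `global`-mutating helper is `itertools.count`. A sketch of the idea, with a plain dict standing in for the worksheet (the real `ws.cell` API is not assumed here):

```python
from itertools import count

row = 1
col = count(1)  # yields 1, 2, 3, ... on successive next() calls

cells = {}
cells[(row, next(col))] = "A cell value"
cells[(row, next(col))] = "Another cell value"
cells[(row, next(col))] = "Yet another cell value"

# keys end up as (1, 1), (1, 2), (1, 3)
```

Each `next(col)` both produces the current column number and advances the counter, which is exactly the assign-and-increment effect the question asks for, without a statement on the right-hand side.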
for loops and iterating through lists
34,695,538
20
2016-01-09T15:49:31Z
34,695,599
30
2016-01-09T15:55:21Z
[ "python", "for-loop", "indexing" ]
Here is a snippet of code which gives the output: `0 1 2 2`. I had expected the output `3 3 3 3` since `a[-1]` accesses the number 3 in the list. The explanation given online says "The value of `a[-1]` changes in each iteration" but I don't quite understand how or why. Any explanations would be great!

```
a = [0, 1, 2, 3]
for a[-1] in a:
    print(a[-1])
```
While doing `for a[-1] in a`, you actually iterate through the list and temporarily store the value of the current element into `a[-1]`. You can see the loop as this sequence of instructions:

```
a[-1] = a[0]   # a = [0, 1, 2, 0]
print(a[-1])   # 0
a[-1] = a[1]   # a = [0, 1, 2, 1]
print(a[-1])   # 1
a[-1] = a[2]   # a = [0, 1, 2, 2]
print(a[-1])   # 2
a[-1] = a[3]   # a = [0, 1, 2, 2]
print(a[-1])   # 2
```

So, when you are on the third element, then `2` is stored to `a[-1]` (whose value was `1` a moment before, `0` before that and `3` at the start). Finally, when it comes to the last element (and the end of the iteration), the last value stored into `a[-1]` is `2`, which explains why it is printed twice.
for loops and iterating through lists
34,695,538
20
2016-01-09T15:49:31Z
34,695,662
12
2016-01-09T16:01:10Z
[ "python", "for-loop", "indexing" ]
Here is a snippet of code which gives the output: `0 1 2 2`. I had expected the output `3 3 3 3` since `a[-1]` accesses the number 3 in the list. The explanation given online says "The value of `a[-1]` changes in each iteration" but I don't quite understand how or why. Any explanations would be great!

```
a = [0, 1, 2, 3]
for a[-1] in a:
    print(a[-1])
```
What's happening here is that a list is mutated during looping. Let's consider the following code snippet:

```
a = [0, 1, 2, 3]
for a[-1] in a:
    print a
```

Output is:

```
[0, 1, 2, 0]
[0, 1, 2, 1]
[0, 1, 2, 2]
[0, 1, 2, 2]
```

Each iteration:

* reads the value from the position currently pointed to by the internal pointer
* immediately assigns it to the last element in the list
* after that, the last element is printed on standard output

So it goes like:

* the internal pointer points to the first element, it's 0, and the last element is overwritten with that value; the list is `[0, 1, 2, 0]`; the printed value is `0`
* the internal pointer points to the second element, it's 1, and the last element is overwritten with that value; the list is `[0, 1, 2, 1]`; the printed value is `1`
* (...)
* at the last step, the internal pointer points to the last element; the last element is overwritten by itself, so the list does not change on the last iteration; the printed element also does not change.
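To see that the mutation of the loop target is what changes the output, one can iterate over a snapshot copy instead, so the read side no longer observes the writes; a small sketch:

```python
a = [0, 1, 2, 3]
seen = []
for a[-1] in list(a):  # loop over a snapshot, not the list being mutated
    seen.append(a[-1])

# every original element is seen exactly once, and the final write
# puts 3 back in place, so the list ends up unchanged:
# seen == [0, 1, 2, 3] and a == [0, 1, 2, 3]
```

The assignment to `a[-1]` still happens on every step, but because the iterator walks the frozen copy, the values read are the original `0 1 2 3`.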
Convert list of strings to int
34,696,853
5
2016-01-09T17:48:40Z
34,696,871
8
2016-01-09T17:50:41Z
[ "python", "string", "list", "type-conversion" ]
I have a list of strings that I want to convert to int, or have in int from the start. The task is to extract numbers out of a text (and get the sum). What I did was this:

```
for line in handle:
    line = line.rstrip()
    z = re.findall("\d+", line)
    if len(z) > 0:
        lst.append(z)
        print(z)
```

Which gives me a list like `[['5382', '1399', '3534'], ['1908', '8123', '2857']]`. I tried `map(int, ...)` and one other thing, but I get errors such as:

```
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
```
You can use a list comprehension:

```
>>> [[int(x) for x in sublist] for sublist in lst]
[[5382, 1399, 3534], [1908, 8123, 2857]]
```

or `map`:

```
>>> [map(int, sublist) for sublist in lst]
[[5382, 1399, 3534], [1908, 8123, 2857]]
```

or just change your line

```
lst.append(z)
```

to

```
lst.append(map(int, z))
```

The reason why your `map` did not work is that you tried to apply `int` to every list of your list of lists, not to every element of every sublist.

**update** for Python3 users: In Python3, `map` will return a map object which you have to cast back to a list manually, i.e. `list(map(int, z))` instead of `map(int, z)`.
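Since the question ultimately wants a sum, the extraction and the conversion can be combined into one pass; a sketch (the sample text here is made up for illustration):

```python
import re

text = """There are 5382 apples, 1399 pears and 3534 plums.
Another line has 1908, 8123 and 2857."""

# flatten straight to ints instead of building a list of lists of strings
numbers = [int(x) for line in text.splitlines() for x in re.findall(r"\d+", line)]
total = sum(numbers)
# numbers == [5382, 1399, 3534, 1908, 8123, 2857]; total == 23203
```

The nested comprehension plays the role of the original `for line in handle` loop plus the inner conversion, so no intermediate string lists are kept around.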
How to tell which Keras model is better?
34,702,041
6
2016-01-10T04:23:16Z
34,705,152
13
2016-01-10T12:05:43Z
[ "python", "machine-learning", "keras", "data-science" ]
I don't understand which accuracy in the output to use to compare my 2 Keras models to see which one is better.

Do I use the "acc" (from the training data?) one or the "val acc" (from the validation data?) one?

There are different accs and val accs for each epoch. How do I know the acc or val acc for my model as a whole? Do I average all of the epochs accs or val accs to find the acc or val acc of the model as a whole?

**Model 1 Output**

```
Train on 970 samples, validate on 243 samples
Epoch 1/20
0s - loss: 0.1708 - acc: 0.7990 - val_loss: 0.2143 - val_acc: 0.7325
Epoch 2/20
0s - loss: 0.1633 - acc: 0.8021 - val_loss: 0.2295 - val_acc: 0.7325
Epoch 3/20
0s - loss: 0.1657 - acc: 0.7938 - val_loss: 0.2243 - val_acc: 0.7737
Epoch 4/20
0s - loss: 0.1847 - acc: 0.7969 - val_loss: 0.2253 - val_acc: 0.7490
Epoch 5/20
0s - loss: 0.1771 - acc: 0.8062 - val_loss: 0.2402 - val_acc: 0.7407
Epoch 6/20
0s - loss: 0.1789 - acc: 0.8021 - val_loss: 0.2431 - val_acc: 0.7407
Epoch 7/20
0s - loss: 0.1789 - acc: 0.8031 - val_loss: 0.2227 - val_acc: 0.7778
Epoch 8/20
0s - loss: 0.1810 - acc: 0.8010 - val_loss: 0.2438 - val_acc: 0.7449
Epoch 9/20
0s - loss: 0.1711 - acc: 0.8134 - val_loss: 0.2365 - val_acc: 0.7490
Epoch 10/20
0s - loss: 0.1852 - acc: 0.7959 - val_loss: 0.2423 - val_acc: 0.7449
Epoch 11/20
0s - loss: 0.1889 - acc: 0.7866 - val_loss: 0.2523 - val_acc: 0.7366
Epoch 12/20
0s - loss: 0.1838 - acc: 0.8021 - val_loss: 0.2563 - val_acc: 0.7407
Epoch 13/20
0s - loss: 0.1835 - acc: 0.8041 - val_loss: 0.2560 - val_acc: 0.7325
Epoch 14/20
0s - loss: 0.1868 - acc: 0.8031 - val_loss: 0.2573 - val_acc: 0.7407
Epoch 15/20
0s - loss: 0.1829 - acc: 0.8072 - val_loss: 0.2581 - val_acc: 0.7407
Epoch 16/20
0s - loss: 0.1878 - acc: 0.8062 - val_loss: 0.2589 - val_acc: 0.7407
Epoch 17/20
0s - loss: 0.1833 - acc: 0.8072 - val_loss: 0.2613 - val_acc: 0.7366
Epoch 18/20
0s - loss: 0.1837 - acc: 0.8113 - val_loss: 0.2605 - val_acc: 0.7325
Epoch 19/20
0s - loss: 0.1906 - acc: 0.8010 - val_loss: 0.2555 - val_acc: 0.7407
Epoch 20/20
0s - loss: 0.1884 - acc: 0.8062 - val_loss: 0.2542 - val_acc: 0.7449
```

**Model 2 Output**

```
Train on 970 samples, validate on 243 samples
Epoch 1/20
0s - loss: 0.1735 - acc: 0.7876 - val_loss: 0.2386 - val_acc: 0.6667
Epoch 2/20
0s - loss: 0.1733 - acc: 0.7825 - val_loss: 0.1894 - val_acc: 0.7449
Epoch 3/20
0s - loss: 0.1781 - acc: 0.7856 - val_loss: 0.2028 - val_acc: 0.7407
Epoch 4/20
0s - loss: 0.1717 - acc: 0.8021 - val_loss: 0.2545 - val_acc: 0.7119
Epoch 5/20
0s - loss: 0.1757 - acc: 0.8052 - val_loss: 0.2252 - val_acc: 0.7202
Epoch 6/20
0s - loss: 0.1776 - acc: 0.8093 - val_loss: 0.2449 - val_acc: 0.7490
Epoch 7/20
0s - loss: 0.1833 - acc: 0.7897 - val_loss: 0.2272 - val_acc: 0.7572
Epoch 8/20
0s - loss: 0.1827 - acc: 0.7928 - val_loss: 0.2376 - val_acc: 0.7531
Epoch 9/20
0s - loss: 0.1795 - acc: 0.8062 - val_loss: 0.2445 - val_acc: 0.7490
Epoch 10/20
0s - loss: 0.1746 - acc: 0.8103 - val_loss: 0.2491 - val_acc: 0.7449
Epoch 11/20
0s - loss: 0.1831 - acc: 0.8082 - val_loss: 0.2477 - val_acc: 0.7449
Epoch 12/20
0s - loss: 0.1831 - acc: 0.8113 - val_loss: 0.2496 - val_acc: 0.7490
Epoch 13/20
0s - loss: 0.1920 - acc: 0.8000 - val_loss: 0.2459 - val_acc: 0.7449
Epoch 14/20
0s - loss: 0.1945 - acc: 0.7928 - val_loss: 0.2446 - val_acc: 0.7490
Epoch 15/20
0s - loss: 0.1852 - acc: 0.7990 - val_loss: 0.2459 - val_acc: 0.7449
Epoch 16/20
0s - loss: 0.1800 - acc: 0.8062 - val_loss: 0.2495 - val_acc: 0.7449
Epoch 17/20
0s - loss: 0.1891 - acc: 0.8000 - val_loss: 0.2469 - val_acc: 0.7449
Epoch 18/20
0s - loss: 0.1891 - acc: 0.8041 - val_loss: 0.2467 - val_acc: 0.7531
Epoch 19/20
0s - loss: 0.1853 - acc: 0.8072 - val_loss: 0.2511 - val_acc: 0.7449
Epoch 20/20
0s - loss: 0.1905 - acc: 0.8062 - val_loss: 0.2460 - val_acc: 0.7531
```
> Do I use the "acc" (from the training data?) one or the "val acc" (from the validation data?) one?

If you want to estimate the ability of your model to generalize to new data (which is probably what you want to do), then you look at the validation accuracy, because the validation split contains only data that the model never sees during the training and therefore cannot just memorize.

If your training data accuracy ("acc") keeps improving while your validation data accuracy ("val\_acc") gets worse, you are likely in an [overfitting](https://en.wikipedia.org/wiki/Overfitting) situation, i.e. your model starts to basically just memorize the data.

> There are different accs and val accs for each epoch. How do I know the acc or val acc for my model as a whole? Do I average all of the epochs accs or val accs to find the acc or val acc of the model as a whole?

Each epoch is a training run over all of your data. During that run the parameters of your model are adjusted according to your loss function. The result is a set of parameters which have a certain ability to generalize to new data. That ability is reflected by the validation accuracy. So think of every epoch as its own model, which can get better or worse if it is trained for another epoch. Whether it got better or worse is judged by the change in validation accuracy (better = validation accuracy increased). Therefore pick the model of the epoch with the highest validation accuracy. Don't average the accuracies over different epochs, that wouldn't make much sense. You can use the Keras callback `ModelCheckpoint` to automatically save the model with the highest validation accuracy (see [callbacks documentation](http://keras.io/callbacks/)).

The highest validation accuracy in model 1 is `0.7778` (at epoch 7) and the highest one in model 2 is `0.7572`. Therefore you should view model 1 as better. Though it is possible that the `0.7778` was just a random outlier.
conda - How to install R packages that are not available in "R-essentials"?
34,705,917
10
2016-01-10T13:25:32Z
35,023,854
8
2016-01-26T21:00:22Z
[ "python", "anaconda", "conda" ]
I use an out-of-the-box Anaconda installation to work with Python. Now I have read that it is possible to also "include" the R world within this installation and to use the IR kernel within the *Jupyter/Ipython notebook*. I found the command to install a number of famous R packages:

```
conda install -c r r-essentials
```

My beginner's question: How do I install R packages that are not included in the *R-essentials* package? For example R packages that are available on CRAN. "pip" works only for PyPI Python packages, doesn't it?
**Now I have found the documentation:**

This is the documentation that explains how to build conda packages for R packages that are only available in the CRAN repository: <https://www.continuum.io/content/conda-data-science>

Go to the section "Building a conda R package". (Hint: As long as the R package is available under anaconda.org, use that resource. See here: <https://www.continuum.io/blog/developer/jupyter-and-conda-r>)

***alistaire***'s answer is another possibility to add R packages:

> If you install packages from inside of R via the regular install.packages (from CRAN mirrors), or devtools::install\_github (from GitHub), they work fine. - @alistaire

**How to do this:** Open your (independent) R installation, then run the following command:

```
install.packages("png", "/home/user/anaconda3/lib/R/library")
```

to add the new package to the correct R library used by Jupyter; otherwise the package will be installed in `/home/user/R/i686-pc-linux-gnu-library/3.2/png/libs` mentioned in *.libPaths()*.
Recursive factorial using dict causes RecursionError
34,706,731
12
2016-01-10T14:46:52Z
34,706,749
17
2016-01-10T14:49:17Z
[ "python", "dictionary", "recursion" ]
A simple recursive factorial method works perfectly:

```
def fact(n):
    if n == 0:
        return 1
    return n * fact(n-1)
```

But I wanted to experiment a little and use a `dict` instead. Logically, this should work, but a bunch of print statements tell me that `n`, instead of stopping at `0`, glides down across the negative numbers until the maximum recursion depth is reached:

```
def recursive_fact(n):
    lookup = {0: 1}
    return lookup.get(n, n*recursive_fact(n-1))
```

Why is that?
**Python doesn't lazily evaluate parameters.** The default value passed to the `dict.get` call will also be evaluated before calling `dict.get`. So, in your case, the default value has a recursive call and since your condition is never met, it does infinite recursion.

You can confirm this with this program:

```
>>> def getter():
...     print("getter called")
...     return 0
...
>>> {0: 1}.get(0, getter())
getter called
1
```

Even though the key `0` exists in the dictionary, since all parameters passed to functions in Python will be evaluated, `getter` is also invoked before the actual `dict.get` call is made.

---

If all you want to do is to avoid multiple recursive evaluations when the values are already evaluated, then you use [`functools.lru_cache`](https://docs.python.org/3/library/functools.html#functools.lru_cache), if you are using Python 3.2+:

```
>>> @functools.lru_cache()
... def fact(n):
...     print("fact called with {}".format(n))
...     if n == 0:
...         return 1
...     return n * fact(n-1)
...
>>> fact(3)
fact called with 3
fact called with 2
fact called with 1
fact called with 0
6
>>> fact(4)
fact called with 4
24
```

This decorator simply caches the results for the parameters passed and if the same call is made again, it will simply return the value from the cache.

---

If you want to fix your custom caching function to work, then you need to define the `look_up` outside the function, so that it will not be created whenever the function is called.

```
>>> look_up = {0: 1}
>>> def fact(n):
...     if n not in look_up:
...         print("recursing when n is {}".format(n))
...         look_up[n] = n * fact(n - 1)
...     return look_up[n]
...
>>> fact(3)
recursing when n is 3
recursing when n is 2
recursing when n is 1
6
>>> fact(4)
recursing when n is 4
24
>>> fact(4)
24
```

Otherwise you can use the default parameter, like this:

```
>>> def fact(n, look_up={0: 1}):
...     if n not in look_up:
...         print("recursing when n is {}".format(n))
...         look_up[n] = n * fact(n - 1)
...     return look_up[n]
```
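Another option that keeps the lookup dictionary inside the function, as in the question, is to make the access explicitly conditional so the recursive call is only evaluated when it is actually needed (note this sketch does not cache results across calls):

```python
def recursive_fact(n):
    lookup = {0: 1}
    if n in lookup:
        return lookup[n]          # base case hit; no recursive call is evaluated
    return n * recursive_fact(n - 1)

# recursive_fact(0) == 1; recursive_fact(5) == 120
```

Unlike `lookup.get(n, n * recursive_fact(n - 1))`, the `if` statement short-circuits, so reaching `n == 0` terminates the recursion instead of evaluating the default argument one more time.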
Selecting random values from dictionary
34,707,280
8
2016-01-10T15:37:10Z
34,707,337
13
2016-01-10T15:41:57Z
[ "python", "dictionary", "random" ]
Let's say I have this dictionary:

```
dict = {'a': 100, 'b': 5, 'c': 150, 'd': 60};
```

I get the key which has the greatest value with this code:

```
most_similar = max(dic.iteritems(), key=operator.itemgetter(1))[0]
```

It returns `'c'`. But I want to select a random key from the top 3 greatest values. According to this dictionary the top 3 are:

```
c
a
d
```

It should randomly select a key from them. How can I do that?
If you want to find the top 3 keys and then get one of the keys randomly, then I would recommend using [`random.choice`](https://docs.python.org/3/library/random.html#random.choice) and [`collections.Counter`](https://docs.python.org/3/library/collections.html#collections.Counter), like this:

```
>>> d = {'a': 100, 'b': 5, 'c': 150, 'd': 60}
>>> from collections import Counter
>>> from random import choice
>>> choice(Counter(d).most_common(3))[0]
'c'
```

[`Counter(d).most_common(3)`](https://docs.python.org/3/library/collections.html#collections.Counter.most_common) will get the top three values from the dictionary based on the values of the dictionary object passed to it, and then we randomly pick one of the returned values and return only the key from it.
Selecting random values from dictionary
34,707,280
8
2016-01-10T15:37:10Z
34,707,344
7
2016-01-10T15:42:33Z
[ "python", "dictionary", "random" ]
Let's say I have this dictionary:

```
dict = {'a': 100, 'b': 5, 'c': 150, 'd': 60};
```

I get the key which has the greatest value with this code:

```
most_similar = max(dic.iteritems(), key=operator.itemgetter(1))[0]
```

It returns `'c'`. But I want to select a random key from the top 3 greatest values. According to this dictionary the top 3 are:

```
c
a
d
```

It should randomly select a key from them. How can I do that?
Get the keys with the three largest values:

```
>>> import heapq
>>> d = {'a': 100, 'b': 5, 'c': 150, 'd': 60}
>>> largest = heapq.nlargest(3, d, key=d.__getitem__)
>>> largest
['c', 'a', 'd']
```

Then select one of them randomly:

```
>>> import random
>>> random.choice(largest)
'c'
```
Pythonic way to use range with excluded last number?
34,709,633
6
2016-01-10T19:08:15Z
34,709,828
7
2016-01-10T19:26:49Z
[ "python", "range" ]
If I wanted a list from 0 to 100 in steps of five I could use `range(0,105,5)`, but I could also use `range(0,101,5)`. Honestly, neither of these makes sense to me because excluding the last number seems non-intuitive. That aside, what is the "correct" way to create a list from 0 to 100 in steps of five? And if anyone has the time, in what instance would excluding the last number make code easier to read?
The two choices you have listed are not similar. One is `range(start, stop+step, step)` and the other is `range(start, stop+1, step)`. They don't necessarily return the same thing. The only case they do is when `stop - start` is divisible by `step`.

```
>>> start, stop, step = 0, 42, 5
>>> range(start, stop+step, step)
[0, 5, 10, 15, 20, 25, 30, 35, 40, 45]
>>> range(start, stop+1, step)
[0, 5, 10, 15, 20, 25, 30, 35, 40]
```

So, which one should you use? If you want [start, stop] (inclusive), then use `stop+1` for the end of the range. Anything more than `stop+1` will have side effects like the above. `range(0, 42+5, 5)` includes 45 in its result, which is not in the [0, 42] range. How do you feel about that?
Can I use a list comprehension on a list of dictionaries if a key is missing?
34,710,571
3
2016-01-10T20:44:10Z
34,710,581
8
2016-01-10T20:45:19Z
[ "python", "dictionary", "counter", "list-comprehension" ]
I want to count the number of times specific values appear in a list of dictionaries. However, I know that some of these dictionaries will not have the key. I do not know *which* though, because these are results from an API call. As an example, this code works for `key1`, because all of the dictionaries have the key.

```
from collections import Counter

list_of_dicts = [
    {'key1': 'testing', 'key2': 'testing'},
    {'key1': 'prod', 'key2': 'testing'},
    {'key1': 'testing',},
    {'key1': 'prod',},
    {'key1': 'testing', 'key2': 'testing'},
    {'key1': 'testing',},
]

print Counter(r['key1'] for r in list_of_dicts)
```

I get a nice result of

```
Counter({'testing': 4, 'prod': 2})
```

However, if I change that last print to:

```
print Counter(r['key2'] for r in list_of_dicts)
```

It fails, because `key2` is missing in a couple of the dictionaries.

```
Traceback (most recent call last):
  File "test.py", line 11, in <module>
    print Counter(r['key2'] for r in list_of_dicts)
  File "h:\Anaconda\lib\collections.py", line 453, in __init__
    self.update(iterable, **kwds)
  File "h:\Anaconda\lib\collections.py", line 534, in update
    for elem in iterable:
  File "test.py", line 11, in <genexpr>
    print Counter(r['key2'] for r in list_of_dicts)
KeyError: 'key2'
```

How can I use a list comprehension to count the values of `key2` and not fail if a dictionary doesn't contain the key?
You can explicitly check if `key2` is in a dictionary:

```
Counter(r['key2'] for r in list_of_dicts if 'key2' in r)
```
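Putting that together with the question's data (a quick sketch):

```python
from collections import Counter

list_of_dicts = [
    {'key1': 'testing', 'key2': 'testing'},
    {'key1': 'prod', 'key2': 'testing'},
    {'key1': 'testing'},
    {'key1': 'prod'},
    {'key1': 'testing', 'key2': 'testing'},
    {'key1': 'testing'},
]

# only the three dictionaries that actually carry 'key2' are counted
counts = Counter(r['key2'] for r in list_of_dicts if 'key2' in r)
# counts == Counter({'testing': 3})
```

An alternative like `Counter(r.get('key2') for r in list_of_dicts)` would not raise either, but it would add a `None` entry for every dictionary that lacks the key, which is usually not what you want in the tally.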
global vs. local namespace performance difference
34,713,860
8
2016-01-11T03:39:14Z
34,714,291
14
2016-01-11T04:40:28Z
[ "python", "python-2.7", "python-3.x", "optimization" ]
Why is it that executing a set of commands in a function:

```
def main():
    [do stuff]
    return something

print(main())
```

will tend to run `1.5x` to `3x` times faster in python than executing commands in the top level:

```
[do stuff]
print(something)
```
The difference does indeed **greatly** depend on what "do stuff" actually does and **mainly** on how many times it accesses names that are defined/used. Granted that the code is similar, there is a fundamental difference between these two cases:

* In functions, the byte code for loading/storing names is done with **[`LOAD_FAST`](https://docs.python.org/3.5/library/dis.html#opcode-LOAD_FAST)**/**[`STORE_FAST`](https://docs.python.org/3.5/library/dis.html#opcode-STORE_FAST)**.
* In the top level scope (i.e module), the same commands are performed with **[`LOAD_NAME`](https://docs.python.org/3.5/library/dis.html#opcode-LOAD_NAME)**/**[`STORE_NAME`](https://docs.python.org/3.5/library/dis.html#opcode-STORE_NAME)**, which are more sluggish.

This can be viewed in the following cases. *I'll be using a `for` loop to make sure that the lookups for the defined variables are performed multiple times.*

**Function and `LOAD_FAST/STORE_FAST`:**

We define a simple function that does some really silly things:

```
def main():
    b = 20
    for i in range(1000000):
        z = 10 * b
    return z
```

Output generated by **[`dis.dis`](https://docs.python.org/3.5/library/dis.html#dis.Bytecode.dis)**:

```
dis.dis(main)
# [/snipped output/]

     18 GET_ITER
>>   19 FOR_ITER          16 (to 38)
     22 STORE_FAST         1 (i)
     25 LOAD_CONST         3 (10)
     28 LOAD_FAST          0 (b)
     31 BINARY_MULTIPLY
     32 STORE_FAST         2 (z)
     35 JUMP_ABSOLUTE     19
>>   38 POP_BLOCK

# [/snipped output/]
```

The thing to note here is the `LOAD_FAST/STORE_FAST` commands at the offsets `28` and `32`; these are used to access the `b` name used in the `BINARY_MULTIPLY` operation and store the `z` name, respectively. As their byte code name implies, *they are the fast version* of the `LOAD_*/STORE_*` family.

---

**Modules and `LOAD_NAME/STORE_NAME`:**

Now, let's look at the output of `dis` for our module version of the previous function:

```
# compile the module
m = compile(open('main.py', 'r').read(), "main", "exec")
dis.dis(m)
# [/snipped output/]

     18 GET_ITER
>>   19 FOR_ITER          16 (to 38)
     22 STORE_NAME         2 (i)
     25 LOAD_NAME          3 (z)
     28 LOAD_NAME          0 (b)
     31 BINARY_MULTIPLY
     32 STORE_NAME         3 (z)
     35 JUMP_ABSOLUTE     19
>>   38 POP_BLOCK

# [/snipped output/]
```

Over here we have multiple calls to `LOAD_NAME/STORE_NAME`, *which*, as mentioned previously, *are more sluggish commands to execute*.

In this case, *there is going to be a clear difference in execution time*, mainly because Python must evaluate `LOAD_NAME/STORE_NAME` and `LOAD_FAST/STORE_FAST` multiple times (due to the `for` loop I added) and, as a result, the overhead introduced each time the code for each byte code is executed *will accumulate*.

Timing the execution 'as a module':

```
start_time = time.time()
b = 20
for i in range(1000000):
    z = 10 * b
print(z)
print("Time: ", time.time() - start_time)

200
Time:  0.15162253379821777
```

Timing the execution as a function:

```
start_time = time.time()
print(main())
print("Time: ", time.time() - start_time)

200
Time:  0.08665871620178223
```

If you `time` loops in a smaller `range` (for example `for i in range(1000)`) you'll notice that the 'module' version is faster. This happens because the overhead introduced by needing to call function `main()` is larger than that introduced by `*_FAST` vs `*_NAME` differences. So it's largely relative to the amount of work that is done.

So, the real culprit here, and the reason why this difference is evident, is the `for` loop used. *You generally have `0` reason to ever put an intensive loop like that one at the top level of your script. Move it into a function and avoid using global variables; it is designed to be more efficient.*

---

You can take a look at the code executed for each of the byte codes. I'll link the source for the [`3.5`](https://hg.python.org/cpython/file/tip) version of Python here even though I'm pretty sure [`2.7`](https://hg.python.org/cpython/file/2.7) doesn't differ much. Bytecode evaluation is done in [`Python/ceval.c`](https://hg.python.org/cpython/file/tip/Python/ceval.c), specifically in the function [`PyEval_EvalFrameEx`](https://hg.python.org/cpython/file/tip/Python/ceval.c#l797):

* [`LOAD_FAST source`](https://hg.python.org/cpython/file/tip/Python/ceval.c#l1369) - [`STORE_FAST source`](https://hg.python.org/cpython/file/tip/Python/ceval.c#l1389)
* [`LOAD_NAME source`](https://hg.python.org/cpython/file/tip/Python/ceval.c#l2348) - [`STORE_NAME source`](https://hg.python.org/cpython/file/tip/Python/ceval.c#l2214)

As you'll see, the `*_FAST` bytecodes simply get the value stored/loaded using a [*`fastlocals` local symbol table contained inside frame objects*](https://hg.python.org/cpython/file/3.5/Include/frameobject.h#l23).
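The `*_FAST` vs `*_NAME` split can be checked directly with the `dis` module; a quick sketch (assuming Python 3, where `dis.get_instructions` is available; exact opcode sets vary between versions, but the `STORE_NAME`/`STORE_FAST` split holds):

```python
import dis

src = "b = 20\nz = 10 * b\n"

# Compiled at module level ("exec" mode): names go through the *_NAME opcodes.
module_ops = {ins.opname for ins in dis.get_instructions(compile(src, "<mod>", "exec"))}

# The same statements inside a function body: locals use the faster *_FAST opcodes.
ns = {}
exec("def f():\n    b = 20\n    z = 10 * b\n    return z\n", ns)
function_ops = {ins.opname for ins in dis.get_instructions(ns["f"])}

# module_ops contains STORE_NAME, function_ops contains STORE_FAST
```

This is a handy way to confirm, on whatever interpreter you are running, which opcode family a given piece of code will actually use.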
Where do I call the BatchNormalization function in Keras?
34,716,454
7
2016-01-11T07:47:53Z
37,979,391
9
2016-06-22T22:40:51Z
[ "python", "neural-network", "keras", "data-science" ]
If I want to use the BatchNormalization function in Keras, then do I need to call it once only at the beginning?

I read this documentation for it: <http://keras.io/layers/normalization/>

I don't see where I'm supposed to call it. Below is my code attempting to use it:

```
model = Sequential()
keras.layers.normalization.BatchNormalization(epsilon=1e-06, mode=0, momentum=0.9, weights=None)
model.add(Dense(64, input_dim=14, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(64, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(2, init='uniform'))
model.add(Activation('softmax'))

sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd)
model.fit(X_train, y_train, nb_epoch=20, batch_size=16, show_accuracy=True, validation_split=0.2, verbose=2)
```

I ask because if I run the code with the second line including the batch normalization and if I run the code without the second line I get similar outputs. So either I'm not calling the function in the right place, or I guess it doesn't make that much of a difference.
Just to answer this question in a little more detail, and as Pavel said, Batch Normalization is just another layer, so you can use it as such to create your desired network architecture.

The general use case is to use BN between the linear and non-linear layers in your network, because it normalizes the input to your activation function, so that you're centered in the linear section of the activation function (such as Sigmoid). There's a small discussion of it [here](https://www.reddit.com/r/MachineLearning/comments/2x0bq8/some_questions_regarding_batch_normalization/?su=ynbwk&st=iprg6e3w&sh=88bcbe40).

In your case above, this might look like:

```
# import BatchNormalization
from keras.layers.normalization import BatchNormalization

# instantiate model
model = Sequential()

# we can think of this chunk as the input layer
model.add(Dense(64, input_dim=14, init='uniform'))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(Dropout(0.5))

# we can think of this chunk as the hidden layer
model.add(Dense(64, init='uniform'))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(Dropout(0.5))

# we can think of this chunk as the output layer
model.add(Dense(2, init='uniform'))
model.add(BatchNormalization())
model.add(Activation('softmax'))

# setting up the optimization of our weights
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd)

# running the fitting
model.fit(X_train, y_train, nb_epoch=20, batch_size=16, show_accuracy=True, validation_split=0.2, verbose=2)
```

Hope this clarifies things a bit more.
How to use advanced activation layers in Keras?
34,717,241
6
2016-01-11T08:44:13Z
34,721,948
7
2016-01-11T12:47:09Z
[ "python", "machine-learning", "neural-network", "keras", "data-science" ]
This is my code that works if I use other activation layers like tanh: ``` model = Sequential() act = keras.layers.advanced_activations.PReLU(init='zero', weights=None) model.add(Dense(64, input_dim=14, init='uniform')) model.add(Activation(act)) model.add(Dropout(0.15)) model.add(Dense(64, init='uniform')) model.add(Activation('softplus')) model.add(Dropout(0.15)) model.add(Dense(2, init='uniform')) model.add(Activation('softmax')) sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True) model.compile(loss='binary_crossentropy', optimizer=sgd) model.fit(X_train, y_train, nb_epoch=20, batch_size=16, show_accuracy=True, validation_split=0.2, verbose = 2) ``` In this case, it doesn't work and says "TypeError: 'PReLU' object is not callable", and the error is raised at the model.compile line. Why is this the case? All the non-advanced activation functions work. However, none of the advanced activation functions, including this one, work.
The correct way to use advanced activations like PReLU is to add them with the `add()` method rather than wrapping them in the `Activation` class. Example: ``` model = Sequential() act = keras.layers.advanced_activations.PReLU(init='zero', weights=None) model.add(Dense(64, input_dim=14, init='uniform')) model.add(act) ```
Get items from multidimensional list Python
34,717,664
4
2016-01-11T09:11:30Z
34,717,762
8
2016-01-11T09:16:02Z
[ "python", "list" ]
I have a list with the following appearance: ``` [0] = [ [0.0, 100.0], [0.1, 93.08], [0.3, 92.85], [0.5, 92.62], [0.7, 91.12], [0.9, 90.89] ] [1] = [ [0.0, 100.0], [0.1, 2.79], [0.3, 2.62], [0.5, 2.21], [0.7, 1.83], [0.9, 1.83] ] ``` and I'd like to obtain vectors to plot the info as follows: ``` [0.0, 0.1, 0.3, 0.5, 0.7, 0.9] [100.0, 93.08, 92.85, 92.62, 91.12, 90.89] ``` and the same with all entries in the list. I was trying something like: ``` x = mylist[0][:][0] ``` Any ideas? I appreciate the help!
Use [`zip`](https://docs.python.org/3/library/functions.html#zip): ``` >>> mylist = [ [0.0, 100.0], [0.1, 93.08], [0.3, 92.85], [0.5, 92.62], [0.7, 91.12], [0.9, 90.89] ] >>> a, b = zip(*mylist) >>> a (0.0, 0.1, 0.3, 0.5, 0.7, 0.9) >>> b (100.0, 93.08, 92.85, 92.62, 91.12, 90.89) >>> list(a) [0.0, 0.1, 0.3, 0.5, 0.7, 0.9] >>> list(b) [100.0, 93.08, 92.85, 92.62, 91.12, 90.89] ```
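The same unpacking extends to the full nested structure from the question. A short sketch (with made-up numbers shaped like the original data) that transposes every sublist in one pass:

```python
# Hypothetical data shaped like the question: a list of lists of [x, y] pairs.
data = [
    [[0.0, 100.0], [0.1, 93.08], [0.3, 92.85]],
    [[0.0, 100.0], [0.1, 2.79], [0.3, 2.62]],
]

# zip(*sublist) transposes each sublist into an x-vector and a y-vector.
vectors = [tuple(zip(*sublist)) for sublist in data]

for xs, ys in vectors:
    print(list(xs), list(ys))
```

Each `(xs, ys)` pair is then ready to hand to a plotting call.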
what is the best way to identify that the length of a tuple in a list is 0 using python
34,719,283
2
2016-01-11T10:33:16Z
34,719,338
7
2016-01-11T10:36:10Z
[ "python", "python-2.7" ]
I have some lists. Each list contains some tuples. I want to process (print) the values in those tuples. But in some of my lists, every tuple has length 0. I want to identify those lists so I can skip them in the next processing step, because their tuples hold no values. Example: ``` myList1= [(),(1,2),(2,3)] myList2= [(),(),(),()] myList3= [(),(),()] def Check_true_List(myList): r = 0 for x in myList: if len(x) != 0: r+=1 return r != 0 if Check_true_List(myList2): for t in myList2: for value in t: print value ``` My `Check_true_List` works as I want. But is there another way to identify whether all the tuples in my list have length 0? I think my way (function `Check_true_List`) is not effective.
You are looking for the `any()` function: ``` >>> myList1= [(),(1,2),(2,3)] >>> myList2= [(),(),(),()] >>> myList3= [(),(),()] >>> any(myList1) True >>> any(myList2) False >>> any(myList3) False ```
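With `any()`, the counting helper from the question shrinks to a one-line wrapper — a small runnable sketch (empty tuples are falsy, non-empty ones truthy):

```python
def check_true_list(my_list):
    # any() returns True at the first non-empty (truthy) tuple,
    # and False if every element is falsy.
    return any(my_list)

myList1 = [(), (1, 2), (2, 3)]
myList2 = [(), (), (), ()]

print(check_true_list(myList1))  # True
print(check_true_list(myList2))  # False
```

`any()` also short-circuits, so it never scans past the first non-empty tuple.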
Subtract all items in a list against each other
34,719,740
5
2016-01-11T10:56:11Z
34,719,933
12
2016-01-11T11:06:44Z
[ "python", "list", "tuples", "combinations" ]
I have a list in Python that looks like this: ``` myList = [(1,1),(2,2),(3,3),(4,5)] ``` And I want to subtract each item with the others, like this: ``` (1,1) - (2,2) (1,1) - (3,3) (1,1) - (4,5) (2,2) - (3,3) (2,2) - (4,5) (3,3) - (4,5) ``` The expected result would be a list with the answers: ``` [(1,1), (2,2), (3,4), (1,1), (2,3), (1,2)] ``` How can I do this? If I approach it with a `for` loop, I can maybe store the previous item and check it against the one that I'm working with at that moment, but it doesn't really work.
Use `itertools.combinations` with tuple unpacking to generate the pairs of differences: ``` >>> from itertools import combinations >>> [(y1-x1, y2-x2) for (x1, x2), (y1, y2) in combinations(myList, 2)] [(1, 1), (2, 2), (3, 4), (1, 1), (2, 3), (1, 2)] ```
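If the tuples might have more than two elements, the same idea generalizes by zipping each pair inside the comprehension — a sketch:

```python
from itertools import combinations

myList = [(1, 1), (2, 2), (3, 3), (4, 5)]

# Element-wise difference for tuples of any length: zip pairs up matching
# positions, and the inner generator subtracts each position.
diffs = [tuple(b - a for a, b in zip(x, y)) for x, y in combinations(myList, 2)]
print(diffs)  # [(1, 1), (2, 2), (3, 4), (1, 1), (2, 3), (1, 2)]
```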
Why are sockets closed in list comprehension but not in for loop?
34,725,855
5
2016-01-11T15:58:34Z
34,726,007
8
2016-01-11T16:05:45Z
[ "python", "sockets", "list-comprehension" ]
I'm trying to create a list of available ports in Python. I am following [this tutorial](http://www.pythonforbeginners.com/code-snippets-source-code/port-scanner-in-python), but instead of printing the open ports, I'm adding them to a list. Initially, I had something like the following: ``` available_ports = [] try: for port in range(1,8081): sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) result = sock.connect_ex((remoteServerIP, port)) if result == 0: available_ports.append(port) sock.close() # ... ``` This clearly works fine, but it is well known that [comprehensions are faster than loops](https://wiki.python.org/moin/PythonSpeed/PerformanceTips#Loops), so I now have: ``` try: available_ports = [port for port in range(1, 8081) if not socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect_ex((remoteServerIP, port))] # ... ``` I assumed the sockets wouldn't be closed, but I tested it with the following: ``` try: available_ports = [port for port in range(1, 8081) if not socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect_ex((remoteServerIP, port))] for port in range(1,8081): sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) result = sock.connect_ex((remoteServerIP, port)) if result == 0: print("Port {}: \t Open".format(port)) sock.close() # ... ``` and indeed the open ports were printed. Why are the sockets closed in the comprehension but not the for loop? Can I rely on this behavior or is this a red herring?
There are no references left to your opened sockets, which means they are garbage collected. Sockets are closed [as soon as they are garbage collected](https://docs.python.org/3/library/socket.html#socket.socket.close). Exactly when the sockets from your list comprehension are garbage collected differs between Python implementations. CPython uses reference counting and therefore closes the sockets as soon as the last reference is dropped. Other implementations might defer the closing to the next GC cycle.
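Rather than relying on implementation-specific garbage collection, the portable fix is to close each socket deterministically. In Python 3, `socket.socket` is a context manager, so a sketch of one loop iteration (the `connect_ex` call is left as a comment, since `remoteServerIP` and `port` come from the question) could look like:

```python
import socket

# The `with` block guarantees the socket is closed on exit, on every
# Python implementation — no reliance on reference counting.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    pass  # call sock.connect_ex((remoteServerIP, port)) here

# A closed socket reports file descriptor -1.
print(sock.fileno())  # -1
```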
Avoid `logger=logging.getLogger(__name__)`
34,726,515
17
2016-01-11T16:31:38Z
34,789,692
9
2016-01-14T12:30:58Z
[ "python", "logging" ]
We set up logging like the django docs told us: <https://docs.djangoproject.com/en/1.9/topics/logging/#using-logging> ``` # import the logging library import logging # Get an instance of a logger logger = logging.getLogger(__name__) def my_view(request, arg1, arg): ... if bad_mojo: # Log an error message logger.error('Something went wrong!') ``` I want to avoid this line in every Python file which wants to log: `logger = logging.getLogger(__name__)` I want it simple: `logging.error('Something went wrong!')` But we want to keep one feature: We want to see the Python file name in the logging output. Up to now we use this format: ``` '%(asctime)s %(name)s.%(funcName)s +%(lineno)s: %(levelname)-8s [%(process)d] %(message)s' ``` Example output: ``` 2016-01-11 12:12:31 myapp.foo +68: ERROR Something went wrong ``` How to avoid `logger = logging.getLogger(__name__)`?
You can use `logging.basicConfig` to define the default interface available through `logging` as follows: ``` import logging logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(name)s.%(funcName)s +%(lineno)s: %(levelname)-8s [%(process)d] %(message)s', ) ``` This definition will now be used whenever you do the following anywhere in your application: ``` import logging logging.error(...) ``` While `__name__` is not available, the equivalent (and other options) are available through the default [`LogRecord` attributes](https://docs.python.org/2/library/logging.html#logrecord-attributes) that can be used for error string formatting - including `module`, `filename` and `pathname`. The following is a two-script demonstration of this in action: scripta.py ``` import logging logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(module)s %(name)s.%(funcName)s +%(lineno)s: %(levelname)-8s [%(process)d] %(message)s', ) from scriptb import my_view my_view() ``` scriptb.py ``` import logging def my_view(): # Log an error message logging.error('Something went wrong!') ``` The logging definition is defined in `scripta.py`, with the added `module` parameter. In `scriptb.py` we simply need to import `logging` to get access to this defined default. When running `scripta.py` the following output is generated: ``` 2016-01-14 13:22:24,640 scriptb root.my_view +9: ERROR [14144] Something went wrong! ``` Which shows the module (`scriptb`) where the logging of the error occurs. According to [this answer](http://stackoverflow.com/a/22336174/754456) you can continue to use any per-module configuration of logging from Django, by turning off Django handling and setting up the root handler as follows: ``` # settings.py - django config LOGGING_CONFIG = None # disables Django handling of logging LOGGING = {...} # your standard Django logging configuration import logging.config logging.config.dictConfig(LOGGING) ```
summing only the numbers contained in a list
34,728,246
10
2016-01-11T18:11:46Z
34,728,328
17
2016-01-11T18:18:22Z
[ "python", "list", "python-2.7" ]
Give a method that sums all the numbers in a `list`. The method should be able to skip elements that are not numbers. So, `sum([1, 2, 3])` should be `6` but `sum(['A', 1, 'B', 2, 3])` *should also* be `6`. How can I accomplish this? What I have already tried so far: ``` def foo(list): dict = "ABCDEFGHIJKLMN" n = 0 for i in range(0, len(list) - 1): if list[i].str in dict: "" else: n= n + list[i] return n print foo([1, 2, 3, 4, 5, 6, "A", "B"]) ```
You can do this with a simple one-liner: ``` l1 = [1, 2, 3, 'A'] sum(filter(lambda i: isinstance(i, int), l1)) # prints 6 ``` Or, if you need it inside a function: ``` def foo(l1): return sum(filter(lambda i: isinstance(i, int), l1)) ``` Additionally, as noted in the comments, **don't** use names like `dict` and `list` for your variables; they will shadow the built-in names for the dictionary (`dict`) and list (`list`) types. You'll then need to explicitly `del dict, list` in order to use them as intended. --- But, let me explain. What **[`filter`](https://docs.python.org/2.7/library/functions.html#filter)** does here is: **a)** It takes a function as its first argument: ``` # this function will return True if i is an int # and False otherwise lambda i: isinstance(i, int) ``` and then takes every element inside the list `l1` (second argument) and evaluates whether it is `True` or `False` based on the function. **b)** Then, `filter` will essentially filter out any objects inside list `l1` that are not instances of `int` (i.e. the function returns `False` for them). As a result, for a list like `[1, 2, 3, 'A']`, filter is going to return `[1, 2, 3]`, which will then be summed up by **[`sum()`](https://docs.python.org/2.7/library/functions.html#sum)**. Some examples: ``` foo([1, 2, 3, 'A']) # 6 foo([1, 2, 3]) # 6 foo([1, 2, 3, 'HELLO', 'WORLD']) # 6 ``` --- **Slight caveat:** As is, this doesn't sum up `float` values; it drops them (and any other numeric types, for that matter). If you need that too, simply add the `float` type in the `lambda` function like so: ``` lambda i: isinstance(i, (int, float)) ``` Now your function sums floats too: ``` foo([1, 2, 3, 3.1, 'HELLO', 'WORLD']) # 9.1 ``` Add any other types as necessary in the `lambda` function to catch the cases that you need.
--- **A catch-all case:** As noted by **@Copperfield**, you can check for objects that are instances of any number by utilizing the **[`numbers.Number`](https://docs.python.org/2.7/library/numbers.html#numbers.Number)** abstract base class in the **[`numbers`](https://docs.python.org/2.7/library/numbers.html)** module. This acts as a catch-all case for numeric values: ``` import numbers # must import sum(filter(lambda i: isinstance(i, numbers.Number), l1)) ``` **Simpler and a bit faster, too:** Additionally, as noted by **@ShadowRanger**, and since [*`lambda` might not be the most comfortable construct for new users*](https://www.reddit.com/r/learnpython/comments/2emwwh/lambdas_are_confusing_me/), one could simply use *[a generator expression](https://www.python.org/dev/peps/pep-0289/)* (which is also faster) with `sum` to get the same exact result: ``` sum(val for val in l1 if isinstance(val, numbers.Number)) ```
summing only the numbers contained in a list
34,728,246
10
2016-01-11T18:11:46Z
34,728,344
8
2016-01-11T18:18:55Z
[ "python", "list", "python-2.7" ]
Give a method that sums all the numbers in a `list`. The method should be able to skip elements that are not numbers. So, `sum([1, 2, 3])` should be `6` but `sum(['A', 1, 'B', 2, 3])` *should also* be `6`. How can I accomplish this? What I have already tried so far: ``` def foo(list): dict = "ABCDEFGHIJKLMN" n = 0 for i in range(0, len(list) - 1): if list[i].str in dict: "" else: n= n + list[i] return n print foo([1, 2, 3, 4, 5, 6, "A", "B"]) ```
``` sum([x for x in list if isinstance(x, (int, long, float))]) ```
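For Python 3 the same one-liner needs a small adjustment, since `long` no longer exists — and note the `bool` caveat, because `isinstance(True, int)` is `True`:

```python
data = ['A', 1, 'B', 2, 3, True]

# Exclude bools explicitly, otherwise True/False would be summed as 1/0.
total = sum(x for x in data
            if isinstance(x, (int, float)) and not isinstance(x, bool))
print(total)  # 6
```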
summing only the numbers contained in a list
34,728,246
10
2016-01-11T18:11:46Z
34,728,346
10
2016-01-11T18:19:07Z
[ "python", "list", "python-2.7" ]
Give a method that sums all the numbers in a `list`. The method should be able to skip elements that are not numbers. So, `sum([1, 2, 3])` should be `6` but `sum(['A', 1, 'B', 2, 3])` *should also* be `6`. How can I accomplish this? What I have already tried so far: ``` def foo(list): dict = "ABCDEFGHIJKLMN" n = 0 for i in range(0, len(list) - 1): if list[i].str in dict: "" else: n= n + list[i] return n print foo([1, 2, 3, 4, 5, 6, "A", "B"]) ```
The Pythonic way is to do a try/except. While you could do this in a one-liner, I prefer to break things out a bit to see exactly what is happening. ``` val = 0 for item in list: try: val += int(item) except ValueError: pass ``` If you want to include floating-point values, simply change the `int` to a `float`. Floating-point numbers are anything with a decimal point.
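A runnable sketch of the same try/except pattern, wrapped in a (hypothetical) helper that uses `float()` so decimal values survive and avoids the shadowed `list` name:

```python
def sum_numbers(items):
    total = 0.0
    for item in items:
        try:
            # float() raises ValueError for strings like 'A'
            # and TypeError for non-convertible objects like None.
            total += float(item)
        except (ValueError, TypeError):
            pass  # skip anything that can't be read as a number
    return total

print(sum_numbers(['A', 1, 'B', 2, 3]))  # 6.0
```

One wrinkle worth knowing: `float('3')` succeeds, so numeric *strings* are summed too; filter with `isinstance` first if that is not wanted.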