Dataset columns (min–max over the split): title (string, length 12–150), question_id (int64, 469–40.1M), question_score (int64, 2–5.52k), question_date (string date, 2008-08-02 15:11:16 – 2016-10-18 06:16:31), answer_id (int64, 536–40.1M), answer_score (int64, 7–8.38k), answer_date (string date, 2008-08-02 18:49:07 – 2016-10-18 06:19:33), tags (list, length 1–5), question_body_md (string, length 15–30.2k), answer_body_md (string, length 11–27.8k)
Ubuntu 14.04 - Python 3.4 - pyenv: command Not Found
29,954,984
7
2015-04-29T21:31:58Z
29,955,015
12
2015-04-29T21:35:03Z
[ "python", "ubuntu" ]
I am trying to create a virtual environment for Python 3.4 on a fresh install of Ubuntu Server 14.04. I am following the instructions for the `venv` module at <https://docs.python.org/3/library/venv.html#module-venv>. I don't have a lot of Python 3.4 or Ubuntu experience. When I type the command: ``` pyvenv testDir ``` I get back: ``` pyvenv: command not found ``` What is causing this?
Ubuntu 14.04 uses Python 2 by default, and the `pyvenv` command does not exist in Python 2 out of the box. You can, however, use `virtualenv` for the same purpose. You just need to install it! You should: * Install Python 3 and virtualenv: `apt-get install -y python3 python-virtualenv` * Create a Python 3 virtualenv: `virtualenv -p $(which python3) testDir` * Activate the virtual environment with `source testDir/bin/activate`
How can I select elements lesser than a given integer, from a sorted list?
29,957,895
4
2015-04-30T02:16:02Z
29,957,912
9
2015-04-30T02:17:50Z
[ "python", "list", "python-3.x" ]
I have an array of primes, e.g. between the integers 0 and 1000: ``` primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 251, 257, 263, 269, 271, 277, 281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 467, 479, 487, 491, 499, 503, 509, 521, 523, 541, 547, 557, 563, 569, 571, 577, 587, 593, 599, 601, 607, 613, 617, 619, 631, 641, 643, 647, 653, 659, 661, 673, 677, 683, 691, 701, 709, 719, 727, 733, 739, 743, 751, 757, 761, 769, 773, 787, 797, 809, 811, 821, 823, 827, 829, 839, 853, 857, 859, 863, 877, 881, 883, 887, 907, 911, 919, 929, 937, 941, 947, 953, 967, 971, 977, 983, 991, 997] ``` I get input ``` n = int(input()) ``` What is the most efficient way to slice the array into a new array whose last element is less than `n`?
You can make use of the fact that `primes` is already sorted, with [`bisect`](https://docs.python.org/3/library/bisect.html#bisect.bisect), like this ``` >>> from bisect import bisect >>> primes[:bisect(primes, n)] ``` `bisect` performs a binary search on the input list and returns the insertion point for `n`; every element before that index is less than or equal to `n`. (Use `bisect_left` instead if elements equal to `n` must be excluded.)
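To make the boundary behaviour concrete, here is a short sketch (the six-element prime list is just for illustration):

```python
from bisect import bisect, bisect_left

primes = [2, 3, 5, 7, 11, 13]

# bisect (an alias for bisect_right) returns the insertion point to the
# right of any entries equal to n, so elements equal to n are kept:
assert primes[:bisect(primes, 7)] == [2, 3, 5, 7]

# bisect_left returns the insertion point to the left, so only elements
# strictly less than n survive:
assert primes[:bisect_left(primes, 7)] == [2, 3, 5]

# For an n that is not in the list, both give the same slice:
assert primes[:bisect(primes, 8)] == primes[:bisect_left(primes, 8)] == [2, 3, 5, 7]
```

Both variants run in O(log n) on the sorted list, versus O(n) for a linear scan or filter.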
Python: Time input validation
29,961,249
6
2015-04-30T07:11:10Z
29,961,522
7
2015-04-30T07:23:57Z
[ "python", "python-2.7", "date" ]
I have the following problem: I'm supposed to get user input in the form of `10:10:10` (hh:mm:ss), `10:10` (mm:ss) or `10` (ss). Now I need to check the following parameters: * If I'm getting only seconds then there is no limit. * If I'm getting `mm:ss` then the seconds are limited to 0..59 and minutes are unlimited. * If I'm getting `hh:mm:ss` then both seconds and minutes are limited to 0..59 while hours are unlimited. Then return a `timedelta` object. The naive way is to write multiple `if` statements to check all this, but I'm looking for a smoother way. ``` val = "11:66:11" try: val = map(int, val.split(':')) except ValueError: return False if len(val) == 1: return val if len(val) == 2: if val[1] > 59: print "Bad seconds" return False if len(val) == 3: if val[2] > 59 or val[1] > 59: print "Bad seconds / minutes" return False while len(val) < 3: val.insert(0, 0) return timedelta(hours=val[0], minutes=val[1], seconds=val[2]) ```
How about using regular expression here: ``` import re import datetime pattern = re.compile(r'^(\d+)(?::([0-5]?\d)(?::([0-5]?\d))?)?$') def str2seconds(val): match = pattern.match(val) if not match: raise ValueError("Invalid input: %s" % val) else: result = 0 for i in match.groups(): if i is not None: result *= 60 result += int(i) return datetime.timedelta(seconds=result) ``` Example: ``` >>> print(str2seconds('255')) 0:04:15 >>> print(str2seconds('255:25')) 4:15:25 >>> print(str2seconds('255:25:25')) 10 days, 15:25:25 >>> print(str2seconds('255:25:25:255')) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "y.py", line 8, in str2seconds raise ValueError("Invalid input: %s" % val) ValueError: Invalid input: 255:25:25:255 >>> print(str2seconds('255:25:60')) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "y.py", line 8, in str2seconds raise ValueError("Invalid input: %s" % val) ValueError: Invalid input: 255:25:60 ``` --- The regular expression part by part: * `^`: beginning of string * `(\d+)`: 1-n digits, captured as group 1 * `(?::([0-5]?\d)(?::([0-5]?\d))?)?` optional part: + `(?:...)` is a non-capturing group + `:` matches literal `:` + `([0-5]?\d)` an optional digit 0-5, followed by a digit, captured as group 2 + there is a second non-capturing group in the first: `(?::([0-5]?\d))?` that optionally matches second `:` followed by an optional digit 0-5 followed by a digit; and capturing the digits into group 3 + and finally `$` matches the end of the string. Strictly, `^` at the beginning is not necessary as `match` anchors the match at the beginning; however the `$` is necessary, as otherwise the match is not anchored at the end of the string! (Python 3.4 added [the `re.fullmatch`](https://docs.python.org/3/library/re.html#re.fullmatch) to fix this.) The `match.groups()` will be a tuple of 3 items; the non-matching groups will be returned as `None`.
How to track anonymous users with Flask
29,961,898
3
2015-04-30T07:44:15Z
29,962,315
7
2015-04-30T08:06:01Z
[ "python", "flask", "flask-login", "anonymous-users" ]
My app implements a shopping cart in which anonymous users can fill their cart with products. User Login is required only before payment. How can this be implemented? The main challenge is that flask must keep track of the user (even if anonymous) and their orders. My current approach is to leverage the `AnonymousUserMixin` object that is assigned to `current_user`. The assumption is that `current_user` will not change throughout the session. However, I noticed that a new `AnonymousUserMixin` object is assigned to `current_user`, for example, upon every browser page refresh. Notice that this does not happen if a user is authenticated. Any suggestions on how to circumvent this?
There is no need for a custom `AnonymousUserMixin`; you can keep the shopping cart data in the session: * an anonymous user adds something to his cart -> update his session with the cart data * the user wants to check out -> redirect him to the login page * the logged-in user is back at the checkout -> take his cart data out of the session and do whatever you would do if he had been logged in the whole time
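As a rough sketch of the session-based approach (the route names, `secret_key` value, and `product_id` parameter are illustrative, not part of the original answer):

```python
from flask import Flask, session, redirect, url_for

app = Flask(__name__)
app.secret_key = 'change-me'  # sessions require a secret key

@app.route('/cart/add/<product_id>')
def add_to_cart(product_id):
    # keep the anonymous user's cart in the (cookie-backed) session
    cart = session.setdefault('cart', [])
    cart.append(product_id)
    session.modified = True  # mutating a stored list needs an explicit flag
    return redirect(url_for('show_cart'))

@app.route('/cart')
def show_cart():
    return ', '.join(session.get('cart', []))
```

At checkout, the login view can read `session['cart']` and attach it to the freshly authenticated user.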
What does this notation do for lists in Python: "someList[:]"?
29,963,686
32
2015-04-30T09:15:00Z
29,963,746
8
2015-04-30T09:18:23Z
[ "python", "list", "slice", "notation", "shallow-copy" ]
I sometimes come across this way of printing or returning a list - `someList[:]`. I don't see why people use it, as it returns the full list. Why not simply write `someList`, without the `[:]` part?
To **create a copy** of a list instead of passing a reference, as Python does by default. Use the next two examples to understand the difference. **Example:** ``` # Passing by reference SomeListA = [1, 2, 3] SomeListB = [2, 3, 4] SomeListB = SomeListA SomeListA[2] = 5 print SomeListB print SomeListA # Using slice SomeListA = [1, 2, 3] SomeListB = [2, 3, 4] SomeListB = SomeListA[:] SomeListA[2] = 5 print SomeListB print SomeListA ```
What does this notation do for lists in Python: "someList[:]"?
29,963,686
32
2015-04-30T09:15:00Z
29,963,752
52
2015-04-30T09:18:38Z
[ "python", "list", "slice", "notation", "shallow-copy" ]
I sometimes come across this way of printing or returning a list - `someList[:]`. I don't see why people use it, as it returns the full list. Why not simply write `someList`, without the `[:]` part?
`[:]` creates a slice, usually used to get just a part of a list. Without any minimum/maximum index given, it creates a copy of the entire list. Here's a Python session demonstrating it: ``` >>> a = [1,2,3] >>> b1 = a >>> b2 = a[:] >>> b1.append(50) >>> b2.append(51) >>> a [1, 2, 3, 50] >>> b1 [1, 2, 3, 50] >>> b2 [1, 2, 3, 51] ``` Note how appending to `b1` also appended the value to `a`. Appending to `b2` however did not modify `a`, i.e. `b2` is a copy.
What does this notation do for lists in Python: "someList[:]"?
29,963,686
32
2015-04-30T09:15:00Z
29,964,001
15
2015-04-30T09:30:08Z
[ "python", "list", "slice", "notation", "shallow-copy" ]
I sometimes come across this way of printing or returning a list - `someList[:]`. I don't see why people use it, as it returns the full list. Why not simply write `someList`, without the `[:]` part?
In Python, when you do `a = b`, `a` doesn't take the *value* of `b`, but *references* the same value referenced by `b`. To see this, run: ``` >>> a = {'Test': 42} >>> b = a >>> b['Test'] = 24 ``` What is now the value of `a`? ``` >>> a['Test'] 24 ``` It's similar with lists, so we must find a way to really *copy* a list, and not *make a reference* to it. One way is to recreate the list with `copy = list(list1)`, or to use the functions of the `copy` module. But, after all, the easiest way, the prettiest, the best way ( ;) ) of doing this is to copy each value of the first list to the other by doing `copy = list1[:]`. This uses slices: here `list1` is sliced from index 0 to index `len(list1)`, so the whole of `list1` is returned! Moreover, the slice method is slightly faster: using `time.clock()` to measure the mean execution time of 1000 assignments of lists, each containing 10000 random integers, with slices, the constructor and `deepcopy`, the results show that slices are about 15% faster than the constructor method, and `deepcopy` is 4 times slower. However, this gain is negligible for small lists: using `copy = list(list_to_copy)` or `copy = list_to_copy[:]` is down to the developer's preference. Finally, we often forget the `list.copy` method (Python 3.3+), which seems to be the fastest! In fact, it's even 13% faster than the slice method!
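The copying options discussed in the answers above can be compared side by side in a small sketch:

```python
import copy as copy_module

a = [1, 2, 3]
by_ref = a                       # plain assignment: same object
by_slice = a[:]                  # slice: new list, same elements
by_ctor = list(a)                # constructor: also a new list
by_deep = copy_module.deepcopy(a)

a.append(4)

assert by_ref is a and by_ref == [1, 2, 3, 4]    # sees the mutation
assert by_slice == [1, 2, 3] and by_slice is not a
assert by_ctor == [1, 2, 3]
assert by_deep == [1, 2, 3]
```

Note that `[:]`, `list()` and `list.copy()` are all *shallow* copies; only `deepcopy` also duplicates nested mutable elements.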
Python -- Setting Background color to transparent in Plotly plots
29,968,152
3
2015-04-30T12:46:21Z
29,969,848
7
2015-04-30T14:01:28Z
[ "python", "plotly" ]
My Python code creates a Plotly bar plot, but the background is white. I want to change it to a transparent color - is that doable? ***My Code:*** ``` import plotly.plotly as py from plotly.graph_objs import * py.sign_in('username', 'api_key') data = Data([ Bar( x=['Sivaranjani S', 'Vijayalakshmi C', 'Rajeshwari S', 'Shanthi Priscilla', 'Pandiyaraj G', 'Kamatchi S', 'MohanaPriya', 'Madhumitha G', 'Franklin Alphones Raj J', 'Akfaris Almaas', 'Biswajit Champati', 'Priya R', 'Rekha Rajasekaran', 'Sarath Kumar B', 'Jegan L', 'Karthick A', 'Mahalakshmi S', 'Ragunathan V', 'Anu S', 'Ramkumar KS', 'Uthra R'], y=[1640, 1394, 1390, 1313, 2166, 1521, 1078, 1543, 780, 1202, 1505, 2028, 2032, 1769, 1238, 1491, 1477, 1329, 2038, 1339, 1458], text=['Scuti', 'Scuti', 'Cygni', 'Scorpii', 'Scuti', 'Pollux', 'Scorpii', 'Pollux', 'Scuti', 'Pollux', 'Scorpii', 'Scorpii', 'Scuti', 'Cygni', 'Scorpii', 'Scuti', 'Scuti', 'Pollux', 'Scuti', 'Pollux', 'Pollux'])]) plot_url = py.plot(data) ``` ***The graph looks like this*** ![enter image description here](http://i.stack.imgur.com/KudRA.jpg)
For a fully transparent plot, make sure to specify both the paper bgcolor and the plot's: ``` import plotly.plotly as py from plotly.graph_objs import * py.sign_in('', '') data = Data([ Bar( x=['Sivaranjani S', 'Vijayalakshmi C', 'Rajeshwari S', 'Shanthi Priscilla', 'Pandiyaraj G', 'Kamatchi S', 'MohanaPriya', 'Madhumitha G', 'Franklin Alphones Raj J', 'Akfaris Almaas', 'Biswajit Champati', 'Priya R', 'Rekha Rajasekaran', 'Sarath Kumar B', 'Jegan L', 'Karthick A', 'Mahalakshmi S', 'Ragunathan V', 'Anu S', 'Ramkumar KS', 'Uthra R'], y=[1640, 1394, 1390, 1313, 2166, 1521, 1078, 1543, 780, 1202, 1505, 2028, 2032, 1769, 1238, 1491, 1477, 1329, 2038, 1339, 1458], text=['Scuti', 'Scuti', 'Cygni', 'Scorpii', 'Scuti', 'Pollux', 'Scorpii', 'Pollux', 'Scuti', 'Pollux', 'Scorpii', 'Scorpii', 'Scuti', 'Cygni', 'Scorpii', 'Scuti', 'Scuti', 'Pollux', 'Scuti', 'Pollux', 'Pollux'] ) ]) layout = Layout( paper_bgcolor='rgba(0,0,0,0)', plot_bgcolor='rgba(0,0,0,0)' ) fig = Figure(data=data, layout=layout) plot_url = py.plot(fig, filename='transparent-background') ```
Why is the en-dash written as '\xe2\x80\x93' in Python?
29,968,179
3
2015-04-30T12:47:42Z
29,968,221
10
2015-04-30T12:49:41Z
[ "python", "unicode", "utf-8" ]
Specifically, what does each escape in `\xe2\x80\x93` do and why does it need 3 escapes? Trying to decode one by itself leads to an 'unexpected end of data' error. ``` >>> print(b'\xe2\x80\x93'.decode('utf-8')) – >>> print(b'\xe2'.decode('utf-8')) Traceback (most recent call last): File "<stdin>", line 1, in <module> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe2 in position 0: unexpected end of data ```
You have [UTF-8 bytes](http://en.wikipedia.org/wiki/UTF-8); UTF-8 is a *codec*, a standard for representing text as computer-readable data. The [U+2013 EN DASH codepoint](https://codepoints.net/U+2013) encodes to those 3 bytes in that codec. Trying to decode just one such byte as UTF-8 doesn't work because, in the UTF-8 standard, that one byte does not, on its own, carry meaning. In the UTF-8 encoding scheme, a `\xe2` lead byte is used for all codepoints between U+2000 and U+2FFF in the Unicode standard (each of which is encoded with an additional 2 bytes); that's 4096 codepoints in all. Python represents values in a `bytes` object in a manner that lets you reproduce the value by copying it back into a Python script or terminal. Anything that isn't printable ASCII is represented by a `\xhh` hex escape. The two characters form the hexadecimal value of the byte, an integer between 0 and 255. Hexadecimal is a very helpful way to represent bytes because each group of 4 bits maps to one character, a digit in the range 0 - F. `\xe2\x80\x93` then means there are three bytes, with the hexadecimal values E2, 80 and 93, or 226, 128 and 147 in decimal, respectively. The UTF-8 standard tells a decoder to take the last 4 bits of the first byte, and the last 6 bits of each of the second and third bytes (the remaining bits signal what type of byte you are dealing with, for error handling). Those 4 + 6 + 6 == 16 bits then encode the hex value 2013 (`0010 000000 010011` in binary). You probably want to read up on the difference between codecs (encodings) and Unicode; UTF-8 is a codec that can handle all of the Unicode standard, but it is not the same thing.
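The bit arithmetic described above can be checked by hand with a quick Python 3 sketch:

```python
raw = b'\xe2\x80\x93'

# Keep the last 4 bits of the lead byte and the last 6 bits of each
# continuation byte, then concatenate them into one 16-bit value.
codepoint = ((raw[0] & 0b1111) << 12) | ((raw[1] & 0b111111) << 6) | (raw[2] & 0b111111)

assert codepoint == 0x2013                  # U+2013 EN DASH
assert raw.decode('utf-8') == chr(codepoint)
```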
See: * [The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)](http://joelonsoftware.com/articles/Unicode.html) by Joel Spolsky * [Pragmatic Unicode](http://nedbatchelder.com/text/unipain.html) by Ned Batchelder * The [Python Unicode HOWTO](http://docs.python.org/3/howto/unicode.html)
While loop doesn't stop
29,970,679
5
2015-04-30T14:39:36Z
29,970,744
11
2015-04-30T14:42:11Z
[ "python" ]
I have this simple code in Python: ``` import sys class Crawler(object): def __init__(self, num_of_runs): self.run_number = 1 self.num_of_runs = num_of_runs def single_run(self): #do stuff pass def run(self): while self.run_number <= self.num_of_runs: self.single_run() print self.run_number self.run_number += 1 if __name__ == "__main__": num_of_runs = sys.argv[1] crawler = Crawler(num_of_runs) crawler.run() ``` Then, I run it this way: `python path/crawler.py 10` From my understanding, it should loop 10 times and stop, right? Why doesn't it?
``` num_of_runs = sys.argv[1] ``` `num_of_runs` is a string at that stage. ``` while self.run_number <= self.num_of_runs: ``` You are comparing a `str` and an `int` here. In Python 2 that comparison doesn't raise an error: all numbers sort before all strings, so the condition is always true and the loop never terminates. A simple way to fix this is to convert the argument to an int: ``` num_of_runs = int(sys.argv[1]) ``` Another way to deal with this is to use `argparse`. ``` import argparse parser = argparse.ArgumentParser(description='The program does bla and bla') parser.add_argument( 'my_int', type=int, help='an integer for the script' ) args = parser.parse_args() print args.my_int print type(args.my_int) ``` Now if you execute the script like this: ``` ./my_script.py 20 ``` The output is: > 20 Using `argparse` also gives you the `-h` option by default: ``` python my_script.py -h usage: i.py [-h] my_int The program does bla and bla positional arguments: my_int an integer for the script optional arguments: -h, --help show this help message and exit ``` For more information, have a look at the [argparse](https://docs.python.org/dev/library/argparse.html) documentation. Note: The code I have used is from the argparse documentation, but has been slightly modified.
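The same parser can also be exercised without a shell, since `parse_args` accepts an explicit argument list (a small sketch):

```python
import argparse

parser = argparse.ArgumentParser(description='The program does bla and bla')
parser.add_argument('my_int', type=int, help='an integer for the script')

# Pass the argv list directly instead of reading sys.argv:
args = parser.parse_args(['20'])

assert args.my_int == 20           # already converted by type=int
assert isinstance(args.my_int, int)
```

This is also the easiest way to unit-test command-line handling.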
Count number of non-NaN entries in every column of Dataframe
29,971,075
13
2015-04-30T14:57:40Z
29,971,188
16
2015-04-30T15:01:44Z
[ "python", "pandas", "dataframe", null ]
I have a really big DataFrame and I was wondering if there was a short (one- or two-liner) way to get a count of non-NaN entries in a DataFrame. I don't want to do this one column at a time, as I have close to 1000 columns. ``` df1 = pd.DataFrame([(1,2,None),(None,4,None),(5,None,7),(5,None,None)], columns=['a','b','d'], index = ['A', 'B','C','D']) a b d A 1 2 NaN B NaN 4 NaN C 5 NaN 7 D 5 NaN NaN ``` Output: ``` a: 3 b: 2 d: 1 ```
The [`count()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.count.html) method returns the number of non-`NaN` values in each column: ``` >>> df1.count() a 3 b 2 d 1 dtype: int64 ``` Similarly, `count(axis=1)` returns the number of non-`NaN` values in each row.
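Both directions can be reproduced on the question's frame (a sketch; requires pandas):

```python
import pandas as pd

df1 = pd.DataFrame(
    [(1, 2, None), (None, 4, None), (5, None, 7), (5, None, None)],
    columns=['a', 'b', 'd'], index=['A', 'B', 'C', 'D'])

# Non-NaN entries per column
assert df1.count().to_dict() == {'a': 3, 'b': 2, 'd': 1}

# Non-NaN entries per row
assert df1.count(axis=1).to_dict() == {'A': 2, 'B': 1, 'C': 2, 'D': 1}

# Grand total of non-NaN cells
assert int(df1.count().sum()) == 6
```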
Python SQLAlchemy and Postgres - How to query a JSON element
29,974,143
4
2015-04-30T17:27:11Z
29,975,187
7
2015-04-30T18:32:46Z
[ "python", "json", "postgresql", "filter", "sqlalchemy" ]
Let's say I have a Postgres database (9.3) and there is a table called Resources. In the Resources table I have the fields id which is an int and data which is a JSON type. Let's say I have the following records in said table. * 1, {'firstname':'Dave', 'lastname':'Gallant'} * 2, {'firstname':'John', 'lastname':'Doe'} What I want to do is write a query that would return all the records in which the data column has a json element with the lastname equal to "Doe" I tried to write something like this: ``` records = db_session.query(Resource).filter(Resources.data->>'lastname' == "Doe").all() ``` Pycharm however is giving me a compile error on the "->>" Does anyone know how I would write the filter clause to do what I need?
Try using [**astext**](http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#sqlalchemy.dialects.postgresql.JSON) ``` records = db_session.query(Resource).filter( Resources.data["lastname"].astext == "Doe" ).all() ```
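To see that `astext` really emits the `->>` operator, the expression can be compiled against the PostgreSQL dialect without touching a database (a sketch; the table and column names mirror the question):

```python
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql

metadata = sa.MetaData()
resources = sa.Table(
    'resources', metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('data', postgresql.JSON),
)

# Index into the JSON column, then compare its text value
expr = resources.c.data['lastname'].astext == 'Doe'
sql = str(expr.compile(dialect=postgresql.dialect()))
assert '->>' in sql
```

This also sidesteps the PyCharm complaint: `->>` never appears in the Python source, only in the generated SQL.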
Why might Python's `from` form of an import statement bind a module name?
29,974,455
17
2015-04-30T17:45:30Z
29,974,921
9
2015-04-30T18:16:42Z
[ "python", "import" ]
I have a Python project with the following structure: ``` testapp/ ├── __init__.py ├── api │   ├── __init__.py │   └── utils.py └── utils.py ``` All of the modules are empty except `testapp/api/__init__.py` which has the following code: ``` from testapp import utils print "a", utils from testapp.api.utils import x print "b", utils ``` and `testapp/api/utils.py` which defines `x`: ``` x = 1 ``` Now from the root I import `testapp.api`: ``` $ export PYTHONPATH=$PYTHONPATH:. $ python -c "import testapp.api" a <module 'testapp.utils' from 'testapp/utils.pyc'> b <module 'testapp.api.utils' from 'testapp/api/utils.pyc'> ``` The result of the import surprises me, because it shows that the second `import` statement has overwritten `utils`. Yet the docs state that the [from statement will not bind a module name](https://docs.python.org/2/reference/simple_stmts.html#the-import-statement): > The from form does not bind the module name: it goes through the list > of identifiers, looks each one of them up in the module found in step > (1), and binds the name in the local namespace to the object thus > found. And indeed, when in a terminal I use a `from ... import ...` statement, no module names are introduced: ``` >>> from os.path import abspath >>> path Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'path' is not defined ``` I suspect this has to do with Python, at the time of the second import statement, trying to import `testapp.api.utils` which refers to `testapp.utils` and failing but I'm not certain. What is happening here?
From the [import system documentation](https://docs.python.org/3/reference/import.html#submodules): > When a submodule is loaded using any mechanism (e.g. `importlib` APIs, > the `import` or **`import-from`** statements, or built-in `__import__()`) > a binding is placed in the parent module’s namespace to the submodule > object. For example, if package `spam` has a submodule `foo`, after > importing `spam.foo`, `spam` will have an attribute `foo` which is > bound to the submodule. Let’s say you have the following directory > structure: > > ``` > spam/ > __init__.py > foo.py > bar.py > ``` > > and `spam/__init__.py` has the following lines in it: > > ``` > from .foo import Foo > from .bar import Bar > ``` > > then executing the following puts a name binding to `foo` and `bar` in > the `spam` module: > > ``` > >>> import spam > >>> spam.foo > <module 'spam.foo' from '/tmp/imports/spam/foo.py'> > >>> spam.bar > <module 'spam.bar' from '/tmp/imports/spam/bar.py'> > ``` > > Given Python’s familiar name binding rules this might seem surprising, > but it’s actually a fundamental feature of the import system. The > invariant holding is that if you have `sys.modules['spam']` and > `sys.modules['spam.foo']` (as you would after the above import), the > latter must appear as the `foo` attribute of the former. If you do `from testapp.api.utils import x`, the import statement will not load `utils` into the local namespace. However, the import machinery *will* load `utils` into the `testapp.api` namespace, to make further imports work right. It just happens that in your case, `testapp.api` is also the local namespace, so you're getting a surprise.
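The binding can be reproduced in isolation by building the question's package layout in a temporary directory (a sketch; a plain submodule import shows the same parent-namespace binding that the question's `from ... import` triggers):

```python
import os
import sys
import tempfile

# Recreate testapp/ with an api subpackage, as in the question.
root = tempfile.mkdtemp()
api_dir = os.path.join(root, 'testapp', 'api')
os.makedirs(api_dir)
for path, body in [
    (os.path.join(root, 'testapp', '__init__.py'), ''),
    (os.path.join(root, 'testapp', 'utils.py'), ''),
    (os.path.join(api_dir, '__init__.py'), ''),
    (os.path.join(api_dir, 'utils.py'), 'x = 1\n'),
]:
    with open(path, 'w') as f:
        f.write(body)

sys.path.insert(0, root)
import testapp.api.utils

# The import machinery bound the submodule on its parent package:
assert sys.modules['testapp.api'].utils is sys.modules['testapp.api.utils']
assert testapp.api.utils.x == 1
```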
Save File to Webserver from POST Request
29,977,261
9
2015-04-30T20:31:24Z
34,116,375
10
2015-12-06T10:29:52Z
[ "javascript", "php", "python", "html", "http-post" ]
I am making a POST request with some JavaScript to a Python script in my `/var/www/cgi-bin` on my web server, and in this Python script I want to save the image file to my html folder, located at `/var/www/html`, so it can later be retrieved. Right now the only way I know how to do this is to set the target to chmod `777`, which I do not want to do. So how else can I save a file that I grab from my webpage with JavaScript and then send to the server via POST? Currently when I do this I get an error saying Python does not have permission to save, as the mode is `755`. Here is the Python code; I know it works, as the error just says I don't have permission to write the file: ``` fh = open("/var/www/html/logo.png", "wb") fh.write(photo.decode('base64')) fh.close() ```
If you don't want to change the permission of that directory to `777`, you can change the owner of the directory to your HTTP server user; then the user of your web app will be able to write files into that directory because it has `rwx` (7) permission on it. To do that (since you're using Apache as your web server; remember to log in as `root`): ``` chown -R apache:apache /var/www/cgi-bin/ ``` After that only user `apache` and `root` have `rwx` on that directory, and others have `rx`. The command means: ``` chown - change the owner of the directory -R - operate on files and directories recursively apache:apache - apache user, apache group /var/www/cgi-bin/ - the directory ``` Try the `man chown` command to check the manual page of `chown` and learn more; [here's an online version](http://www.techonthenet.com/linux/commands/chown.php). --- If you need to change it back, the default owner of that directory is `root`. So log in as `root` and run: ``` chown -R root:root /var/www/cgi-bin/ ``` --- > We solved the problem [in the chat room](http://chat.stackoverflow.com/transcript/97106).
Why are slice objects not hashable in python
29,980,786
7
2015-05-01T02:23:17Z
29,980,846
8
2015-05-01T02:32:03Z
[ "python" ]
Why are slice objects in python not hashable: ``` >>> s = slice(0, 10) >>> hash(s) TypeError Traceback (most recent call last) <ipython-input-10-bdf9773a0874> in <module>() ----> 1 hash(s) TypeError: unhashable type ``` They seem to be immutable: ``` >>> s.start = 5 TypeError Traceback (most recent call last) <ipython-input-11-6710992d7b6d> in <module>() ----> 1 s.start = 5 TypeError: readonly attribute ``` Context, I'd like to make a dictionary that maps python ints or slice objects to some values, something like this: ``` class Foo: def __init__(self): self.cache = {} def __getitem__(self, idx): if idx in self.cache: return self.cache[idx] else: r = random.random() self.cache[idx] = r return r ``` As a workaround I need to special case slices: ``` class Foo: def __init__(self): self.cache = {} def __getitem__(self, idx): if isinstance(idx, slice): idx = ("slice", idx.start, idx.stop, idx.step) if idx in self.cache: return self.cache[idx] else: r = random.random() self.cache[idx] = r return r ``` This isn't a big deal, I'd just like to know if there is some reasoning behind it.
From the [Python bug tracker](https://bugs.python.org/issue1733184): > Patch [# 408326](https://bugs.python.org/issue408326) was designed to make assignment to d[:] an error where > d is a dictionary. See discussion starting at > <http://mail.python.org/pipermail/python-list/2001-March/072078.html> . Slices were specifically made unhashable so you'd get an error if you tried to slice-assign to a dict. (The discussion link in the quote is dead; [here's a working link](https://mail.python.org/pipermail/python-list/2001-March/033254.html).)
Converting a loop with an assignment into a comprehension
29,980,865
6
2015-05-01T02:34:56Z
29,980,952
9
2015-05-01T02:46:13Z
[ "python" ]
Converting a loop into a comprehension is simple enough: ``` mylist = [] for word in ['Hello', 'world']: mylist.append(word.split('l')[0]) ``` to ``` mylist = [word.split('l')[0] for word in ['Hello', 'world']] ``` But I'm not sure how to proceed when the loop involves assigning a value to a reference. ``` mylist = [] for word in ['Hello', 'world']: split_word = word.split('l') mylist.append(split_word[0]+split_word[1]) ``` And the comprehension ends up looking like this: ``` mylist = [word.split('l')[0]+word.split('l')[1] for word in ['Hello', 'world']] ``` This calculates `word.split('l')` multiple times whereas the loop only calculates it once and saves a reference. I've tried the following: ``` mylist = [split_word[0]+split_word[1] for word in ['Hello', 'world'] with word.split('l') as split_word] ``` which fails because `with` doesn't work that way, and: ``` mylist = [split_word[0]+split_word[1] for word in ['Hello', 'world'] for split_word = word.split('l')] ``` which doesn't work either. I'm aware of unpacking via `*` and `**` but I'm not sure where that would fit in here. Is it possible to turn these sorts of loops into comprehensions, hopefully in a tidy way?
You can't directly translate that loop to a comprehension. Comprehensions, being expressions, can only contain expressions, and assignments are statements. However, that doesn't mean there are no options. First, at the cost of calling `split` twice, you can just do this: ``` mylist = [word.split('l')[0]+word.split('l')[1] for word in ['Hello', 'world']] ``` But you don't want to call `split` twice. --- The most general way around that is to use a chain of generator expressions (with one list comprehension at the end) to transform things: ``` words = (word.split('l') for word in ['Hello', 'world']) mylist = [w[0]+w[1] for w in words] ``` If you really want to merge that all into one expression, you can: ``` mylist = [w[0]+w[1] for w in (word.split('l') for word in ['Hello', 'world'])] ``` But unless you actually *need* it to be in an expression, it's probably more readable not to do that. --- A more specific way in this case is to replace the `w[0]+w[1]` with something equivalent that doesn't need to reference `w` twice: ``` mylist = [''.join(word.split('l')[:2]) for word in ['Hello', 'world']] ``` And you can always generalize this one, too. You can turn *any* expression into a function, which means you can avoid evaluating any part of it by passing it as an argument to that function. If there isn't a function that does what you want, write it: ``` def join_up(split_word): return split_word[0]+split_word[1] mylist = [join_up(word.split('l')) for word in ['Hello', 'world']] ``` If you need to make that all into one expression without repeating any work, it may not be pretty: ``` mylist = [(lambda split_word: split_word[0]+split_word[1])(word.split('l')) for word in ['Hello', 'world']] ``` --- But ultimately, unless I already had a function lying around that did what I needed, I'd use the chain-of-generator-expressions solution. 
Or, of course, just keep it in an explicit loop; there's nothing wrong with `for` loops, and if the intermediate temporary variable makes your code clearer, there's no better way to do that than with an assignment statement.
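Worth noting for later readers: Python 3.8 added assignment expressions ([PEP 572](https://peps.python.org/pep-0572/)), which allow exactly this kind of binding inside a comprehension. A sketch:

```python
words = ['Hello', 'world']

# The walrus operator binds split_word once per iteration; the `if`
# clause is always true here (str.split never returns an empty list)
# and exists only to host the assignment.
mylist = [split_word[0] + split_word[1]
          for word in words
          if (split_word := word.split('l'))]

assert mylist == ['He', 'word']
```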
How to reverse values in a dictionary or list?
29,981,473
2
2015-05-01T04:09:13Z
29,981,488
12
2015-05-01T04:11:01Z
[ "python", "list", "dictionary" ]
I have a dictionary ``` {1: 'one', 2: 'two'} ``` I want to reverse the values using a function, to get ``` {1: 'eno', 2: 'owt'} ``` How can I do it? Similarly, if I have a list or tuple like `[15, 49]`, how can I convert it to `[94, 51]`?
You can use a simple dict comprehension, using the fact that `string[::-1]` reverses a string: ``` >>> d = {1: "one", 2: "two"} >>> {x: v[::-1] for x, v in d.items()} {1: 'eno', 2: 'owt'} ``` You could also define a function: ``` def reverse_values(dct): for key in dct: dct[key] = dct[key][::-1] ``` Which will alter the values in the same dict. ``` >>> reverse_values(d) >>> d {1: 'eno', 2: 'owt'} ``` --- For converting list of type `[15,49]` to `[94, 51]`, you can try the snippet below (this will work for lists of type `[12, 34, 56, 78]` to `[87, 65, 43, 21]` as well): ``` >>> l = [15,49] >>> [int(str(x)[::-1]) for x in l[::-1]] [94, 51] ```
How can i set proxy with authentication in selenium chrome web driver using python
29,983,106
4
2015-05-01T07:24:28Z
30,953,780
7
2015-06-20T12:04:32Z
[ "python", "python-2.7", "google-chrome", "selenium", "proxy" ]
I have the following script to visit a web page using the Python Selenium Chrome driver. ``` from selenium import webdriver USERNAME = 'username' PASSWORD = 'pass' proxies = ["xxx.xxx.xxx.xxx"] proxy_tpl ='{0}:{1}' proxy = proxy_tpl.format(proxies[0],'xx') chrome_options = webdriver.ChromeOptions() chrome_options.add_argument('--proxy-server=%s' % proxy) chrome = webdriver.Chrome(chrome_options=chrome_options) chrome.get("http://{0}:{1}@whatismyipaddress.com".format(USERNAME, PASSWORD)) chrome.close() ``` Chrome still asks for a username and password when I try to run the script. How can I authenticate with the proxy server from the script?
Inspired by this [solution in PHP](https://github.com/RobinDev/Selenium-Chrome-HTTP-Private-Proxy), I wrote an equivalent in Python: ``` from selenium import webdriver from selenium.webdriver.chrome.options import Options import zipfile manifest_json = """ { "version": "1.0.0", "manifest_version": 2, "name": "Chrome Proxy", "permissions": [ "proxy", "tabs", "unlimitedStorage", "storage", "<all_urls>", "webRequest", "webRequestBlocking" ], "background": { "scripts": ["background.js"] }, "minimum_chrome_version":"22.0.0" } """ background_js = """ var config = { mode: "fixed_servers", rules: { singleProxy: { scheme: "http", host: "XXX.XXX.XXX.XXX", port: parseInt(XXXX) }, bypassList: ["foobar.com"] } }; chrome.proxy.settings.set({value: config, scope: "regular"}, function() {}); function callbackFn(details) { return { authCredentials: { username: "XXXXXXXXX", password: "XXXXXXXXX" } }; } chrome.webRequest.onAuthRequired.addListener( callbackFn, {urls: ["<all_urls>"]}, ['blocking'] ); """ pluginfile = 'proxy_auth_plugin.zip' with zipfile.ZipFile(pluginfile, 'w') as zp: zp.writestr("manifest.json", manifest_json) zp.writestr("background.js", background_js) co = Options() co.add_argument("--start-maximized") co.add_extension(pluginfile) driver = webdriver.Chrome("path/to/chromedriver", chrome_options=co) driver.get("http://www.google.com.br") ``` In the `background_js` string, replace the XXX placeholders with your information.
Python 2.7: Ints as objects
29,986,244
5
2015-05-01T11:44:21Z
29,986,373
7
2015-05-01T11:54:22Z
[ "python", "python-2.7" ]
How does an int in python avoid being an object but yet is one: If I do the following: ``` >>> dir(10) ['__abs__', '__add__', '__and__', '__class__', '__cmp__', '__coerce__', '__delattr__', '__div__', '__divmod__', '__doc__', '__float__', '__floordiv__', '__format__', '__getattribute__', '__getnewargs__', '__hash__', '__hex__', '__index__', '__init__', '__int__', '__invert__', '__long__', '__lshift__', '__mod__', '__mul__', '__neg__', '__new__', '__nonzero__', '__oct__', '__or__', '__pos__', '__pow__', '__radd__', '__rand__', '__rdiv__', '__rdivmod__', '__reduce__', '__reduce_ex__', '__repr__', '__rfloordiv__', '__rlshift__', '__rmod__', '__rmul__', '__ror__', '__rpow__', '__rrshift__', '__rshift__', '__rsub__', '__rtruediv__', '__rxor__', '__setattr__', '__sizeof__', '__str__', '__sub__', '__subclasshook__', '__truediv__', '__trunc__', '__xor__', 'bit_length', 'conjugate', 'denominator', 'imag', 'numerator', 'real'] >>> 10.__add__(20) File "<stdin>", line 1 10.__add__(20) ^ SyntaxError: invalid syntax ``` If I type in 10. it produces 10.0 whereas anything such as 10.\_\_ anything \_\_ produces a syntax error. It does make sense since a float would be considered as 10.5 but 1. how is this achieved/implemented? 2. how can I call the int methods on an int?
The Python tokenizer is greedy, it always tries to match the longest token possible at any given position; otherwise it could think that `10.e+123` is the same as `(10).e + 123`. In the case of `10.__add__(20)` it sees the following tokens: ``` >>> tokenize.tokenize(iter(['10.__add__(20)']).next) 1,0-1,3: NUMBER '10.' 1,3-1,10: NAME '__add__' 1,10-1,11: OP '(' 1,11-1,13: NUMBER '20' 1,13-1,14: OP ')' 2,0-2,0: ENDMARKER '' ``` i.e., the `.` was considered to be a part of the number literal, e.g. a `float`. If you parenthesize the number (`(10).__add__(20)`), you'll get: ``` >>> tokenize.tokenize(iter(['(10).__add__(20)']).next) 1,0-1,1: OP '(' 1,1-1,3: NUMBER '10' 1,3-1,4: OP ')' 1,4-1,5: OP '.' 1,5-1,12: NAME '__add__' 1,12-1,13: OP '(' 1,13-1,15: NUMBER '20' 1,15-1,16: OP ')' 2,0-2,0: ENDMARKER '' ``` Similarly, just adding a space between the number and the dot (`10 .`) would work here. Here the `.` is tokenized as a separate operator. If a `float` constant would do, then you could actually type: ``` 10..__add__(20) ``` This is tokenized as float literal `10.` followed by a `.` followed by identifier `__add__` and so forth. --- The silly `iter().next` needs to be `iter().__next__` on Python 3. The [`tokenize.tokenize`](https://docs.python.org/3/library/tokenize.html#tokenize.tokenize) requires an argument that is a `readline` -like function; when called, it should return a line of program input.
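For completeness, the spellings discussed above can be verified directly (a small sketch; the behaviour is plain Python semantics):

```python
# All three spellings keep the tokenizer from folding the dot into a
# number-plus-attribute ambiguity -- or embrace it, in the last case.
print((10).__add__(20))  # parentheses: the dot is an attribute access -> 30
print(10 .__add__(20))   # space: '10' and '.' become separate tokens -> 30
print(10..__add__(20))   # '10.' is a float literal, so the result is 30.0
```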
Error connecting python to neo4j using py2neo
29,986,317
8
2015-05-01T11:49:45Z
29,986,433
17
2015-05-01T11:59:00Z
[ "python", "neo4j" ]
I wrote the following Python code to connect to Neo4j using py2neo ``` from py2neo import Graph from py2neo import neo4j,Node,Relationship sgraph = Graph() alice = Node("person",name="alice") bob = Node("person",name="bob") alice_knows_bob = Relationship(alice,"KNOWS",bob) sgraph.create(alice_knows_bob) ``` but I got the following error ``` Traceback (most recent call last): File "C:\Python34\lib\site-packages\py2neo\core.py", line 258, in get response = self.__base.get(headers=headers, redirect_limit=redirect_limit, **kwargs) File "C:\Python34\lib\site-packages\py2neo\packages\httpstream\http.py", line 966, in get return self.__get_or_head("GET", if_modified_since, headers, redirect_limit, **kwargs) File "C:\Python34\lib\site-packages\py2neo\packages\httpstream\http.py", line 943, in __get_or_head return rq.submit(redirect_limit=redirect_limit, **kwargs) File "C:\Python34\lib\site-packages\py2neo\packages\httpstream\http.py", line 452, in submit return Response.wrap(http, uri, self, rs, **response_kwargs) File "C:\Python34\lib\site-packages\py2neo\packages\httpstream\http.py", line 489, in wrap raise inst py2neo.packages.httpstream.http.ClientError: 401 Unauthorized During handling of the above exception, another exception occurred: Traceback (most recent call last): File "neo.py", line 7, in <module> sgraph.create(alice_knows_bob) File "C:\Python34\lib\site-packages\py2neo\core.py", line 704, in create statement = CreateStatement(self) File "C:\Python34\lib\site-packages\py2neo\cypher\create.py", line 44, in __init__ self.supports_node_labels = self.graph.supports_node_labels File "C:\Python34\lib\site-packages\py2neo\core.py", line 1078, in supports_node_labels return self.neo4j_version >= (2, 0) File "C:\Python34\lib\site-packages\py2neo\core.py", line 956, in neo4j_version return version_tuple(self.resource.metadata["neo4j_version"]) File "C:\Python34\lib\site-packages\py2neo\core.py", line 213, in metadata self.get() File "C:\Python34\lib\site-packages\py2neo\core.py", line 261, 
in get raise Unauthorized(self.uri.string) py2neo.error.Unauthorized: http://localhost:7474/db/data/ ``` Can anyone please help me? This is the first time I am writing Python code to connect to Neo4j.
If you're using Neo4j 2.2, authentication for database servers is enabled by default. You need to authenticate before performing further operations. Read [documentation](http://py2neo.org/2.0/essentials.html#authentication). ``` from py2neo import authenticate, Graph # set up authentication parameters authenticate("localhost:7474", "user", "pass") # connect to authenticated graph database sgraph = Graph("http://localhost:7474/db/data/") # create alice and bob ... ``` --- From the same [documentation](http://py2neo.org/2.0/essentials.html#authentication), > Py2neo provides a command line tool to help with changing user > passwords as well as checking whether a password change is required. For a new installation, use: ``` $ neoauth neo4j neo4j my-p4ssword Password change succeeded ``` > After a password has been set, the tool can also be used to validate > credentials ``` $ neoauth neo4j my-p4ssword Password change not required ```
Python Imports Convention
29,986,805
3
2015-05-01T12:24:54Z
29,986,891
7
2015-05-01T12:31:26Z
[ "python" ]
I've noticed in Python code it's usually preferred to import explicitly the parts of a module you need, eg ``` from django.core.urlresolvers import reverse from django.db import models ``` However, I've noticed that this doesn't seem to be the case for Python standard library modules, where I'd typically see, eg: ``` import os import sys something = os.path.join('home', 'ludo') other = sys.argv('dunno') ``` instead of ``` from os.path import join from sys import argv something = join('home', 'ludo') other = argv('dunno') ``` Is there any reasoning or documented convention for this?
The [holy style guide](https://www.python.org/dev/peps/pep-0008/#imports) is pretty loose regarding this subject: > When importing a class from a class-containing module, it's usually > okay to spell this: > > ``` > from myclass import MyClass > from foo.bar.yourclass import YourClass > ``` > > If this spelling causes local name clashes, then spell them > > ``` > import myclass > import foo.bar.yourclass > ``` > > and use "myclass.MyClass" and "foo.bar.yourclass.YourClass". There aren't really any 'rules' for this, just some pointers as mentioned above. If you are not obstructed by e.g. name clashing, you are free to do as you see fit. However, as also mentioned in the link, you should keep in mind that > Wildcard imports (`from <module> import *`) should be avoided, as they > make it unclear which names are present in the namespace, confusing > both readers and many automated tools.
How to execute Python code from within Visual Studio Code
29,987,840
42
2015-05-01T13:35:17Z
29,989,061
35
2015-05-01T14:45:42Z
[ "python", "vscode" ]
[Visual Studio Code](https://code.visualstudio.com/) was recently released and I liked the look of it and the features it offered, so I figured I would give it a go. I downloaded the application from the [downloads page](https://code.visualstudio.com/Download) fired it up, messed around a bit with some of the features ... and then realized I had no idea how to actually execute any of my Python code! I really like the look and feel/usability/features of Visual Studio Code, but I can't seem to find out how to run my Python code, a real killer because that's what I program primarily in. Does anyone know if there is a way to execute Python code in Visual Studio Code?
You can [add a custom task](https://www.stevefenton.co.uk/Content/Blog/Date/201505/Blog/Custom-Tasks-In-Visual-Studio-Code/) to do this. Here is a basic custom task for Python. ``` { "version": "0.1.0", "command": "c:\\Python34\\python", "args": ["app.py"], "problemMatcher": { "fileLocation": ["relative", "${workspaceRoot}"], "pattern": { "regexp": "^(.*)+s$", "message": 1 } } } ``` You add this to `tasks.json` and press `CTRL` + `SHIFT` + `B` to run it.
How to execute Python code from within Visual Studio Code
29,987,840
42
2015-05-01T13:35:17Z
34,975,137
44
2016-01-24T11:24:01Z
[ "python", "vscode" ]
[Visual Studio Code](https://code.visualstudio.com/) was recently released and I liked the look of it and the features it offered, so I figured I would give it a go. I downloaded the application from the [downloads page](https://code.visualstudio.com/Download) fired it up, messed around a bit with some of the features ... and then realized I had no idea how to actually execute any of my Python code! I really like the look and feel/usability/features of Visual Studio Code, but I can't seem to find out how to run my Python code, a real killer because that's what I program primarily in. Does anyone know if there is a way to execute Python code in Visual Studio Code?
Here is how to Configure Task Runner in Visual Studio Code to run a py file. In your console press `Ctrl`+`Shift`+`P` (Windows) or `Cmd`+`Shift`+`P` (Apple) and this brings up a search box where you search for "Configure Task Runner" [![enter image description here](http://i.stack.imgur.com/IbyrC.png)](http://i.stack.imgur.com/IbyrC.png) EDIT: If this is the first time you open the "Task: Configure Task Runner", you need to select "other" at the bottom of the next selection list. This will bring up the properties which you can then change to suit your preference. In this case you want to change the following properties; 1. Change the Command property from `"tsc"` (TypeScript) to `"Python"` 2. Change showOutput from `"silent"` to `"Always"` 3. Change `args` (Arguments) from `["Helloworld.ts"]` to `["${file}"]` (filename) 4. Delete the last property `problemMatcher` 5. Save the changes made [![enter image description here](http://i.stack.imgur.com/4cymA.png)](http://i.stack.imgur.com/4cymA.png) You can now open your py file and run it nicely with the shortcut `Ctrl`+`Shift`+`B` (Windows) or `Cmd`+`Shift`+`B` (Apple) Enjoy!
How to execute Python code from within Visual Studio Code
29,987,840
42
2015-05-01T13:35:17Z
38,995,516
8
2016-08-17T11:35:15Z
[ "python", "vscode" ]
[Visual Studio Code](https://code.visualstudio.com/) was recently released and I liked the look of it and the features it offered, so I figured I would give it a go. I downloaded the application from the [downloads page](https://code.visualstudio.com/Download) fired it up, messed around a bit with some of the features ... and then realized I had no idea how to actually execute any of my Python code! I really like the look and feel/usability/features of Visual Studio Code, but I can't seem to find out how to run my Python code, a real killer because that's what I program primarily in. Does anyone know if there is a way to execute Python code in Visual Studio Code?
There is a much easier way to run Python, with no configuration needed: 1. Install the [Code Runner Extension](https://marketplace.visualstudio.com/items?itemName=formulahendry.code-runner) 2. Open the Python code file in the Text Editor, then use the shortcut `Ctrl+Alt+N`, or press `F1` and then select/type `Run Code`; the code will run and the output will be shown in the Output Window. [![run python](http://i.stack.imgur.com/C05sk.gif)](http://i.stack.imgur.com/C05sk.gif)
Python hide ticks but show tick labels
29,988,241
7
2015-05-01T13:58:01Z
29,988,431
11
2015-05-01T14:08:57Z
[ "python", "matplotlib" ]
I can remove the ticks with ``` ax.set_xticks([]) ax.set_yticks([]) ``` but this removes the labels as well. Is there any way I can plot the tick labels but not the ticks and the spine?
You can set the tick length to 0 using `tick_params` (<http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes.tick_params>): ``` fig = plt.figure() ax = fig.add_subplot(111) ax.plot([1],[1]) ax.tick_params(axis=u'both', which=u'both',length=0) plt.show() ```
Python list input error
29,989,095
4
2015-05-01T14:47:52Z
29,989,124
9
2015-05-01T14:49:18Z
[ "python", "list", "python-2.7" ]
``` grid = [] for _ in range(3): grid.append(raw_input().split()) ``` **Input:** ``` 000 000 000 ``` **Output**: `[['000'], ['000'], ['000']]`. How do I change my code to get the output? `[['0','0','0'], ['0','0','0'],['0','0','0']]`
You have: ``` "000".split() == ["000"] ``` You want: ``` list("000") == ["0", "0", "0"] ```
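Putting that back into the original loop, a runnable sketch (with a fixed list standing in for the three `raw_input()` calls, which is an assumption for illustration):

```python
lines = ["000", "000", "000"]  # stand-in for three raw_input() calls
grid = []
for line in lines:
    grid.append(list(line))    # list() splits a string into single characters
print(grid)                    # [['0', '0', '0'], ['0', '0', '0'], ['0', '0', '0']]
```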
Python asyncio debugging example
29,996,257
8
2015-05-01T22:49:24Z
30,000,466
12
2015-05-02T09:08:50Z
[ "python", "python-asyncio" ]
I would like to enable Asyncio's un-yielded coroutine detection, but have not succeeded. This simple code implements the recommendations on: <https://docs.python.org/3/library/asyncio-dev.html#asyncio-logger> but does not actually catch the un-yielded 'dummy' coroutine. ``` import sys, os import asyncio import logging import warnings os.environ['PYTHONASYNCIODEBUG'] = '1' logging.basicConfig(level=logging.DEBUG) warnings.resetwarnings() @asyncio.coroutine def dummy(): print('yeah, dummy ran!!') @asyncio.coroutine def startdummy(): print('creating dummy') dummy() if __name__ == '__main__': lp = asyncio.get_event_loop() lp.run_until_complete(startdummy()) ``` I expected that the program would end with a warning about the coroutine 'dummy', created but not yielded from. Actually, results are: ``` DEBUG:asyncio:Using selector: SelectSelector creating dummy sys:1: ResourceWarning: unclosed <socket object at 0x02DCB6F0> c:\python34\lib\importlib\_bootstrap.py:2150: ImportWarning: sys.meta_path is empty sys:1: ResourceWarning: unclosed <socket object at 0x02DE10C0> ``` No hint of an abandoned coroutine. What am I missing?
`asyncio` performs the check for `PYTHONASYNCIODEBUG` at module import time. Thus you need to set the environment variable **before** the very first asyncio import: ``` import os os.environ['PYTHONASYNCIODEBUG'] = '1' import asyncio # rest of your file ```
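To confirm the flag took effect, a quick sanity check (a sketch; the exact point where the variable is read varies across Python versions, but setting it before touching asyncio works in all of them):

```python
import os
os.environ['PYTHONASYNCIODEBUG'] = '1'  # must happen before asyncio is used

import asyncio

loop = asyncio.new_event_loop()
print(loop.get_debug())  # True when debug mode is active
loop.close()
```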
Deleting consonants from a string in Python
29,998,052
3
2015-05-02T03:25:06Z
29,998,062
13
2015-05-02T03:26:37Z
[ "python", "string", "list", "python-3.x", "python-idle" ]
Here is my code. I'm not exactly sure if I need a counter for this to work. The answer should be `'iiii'`. ``` def eliminate_consonants(x): vowels= ['a','e','i','o','u'] vowels_found = 0 for char in x: if char == vowels: print(char) eliminate_consonants('mississippi') ```
## Correcting your code The line `if char == vowels:` is wrong. It has to be `if char in vowels:`. This is because you need to check if that particular character is present in the list of vowels. Apart from that you need to `print(char,end = '')` (in python3) to print the output as `iiii` all in one line. The final program will be like ``` def eliminate_consonants(x): vowels= ['a','e','i','o','u'] for char in x: if char in vowels: print(char,end = "") eliminate_consonants('mississippi') ``` And the output will be ``` iiii ``` --- ## Other ways include * ***Using `in` a string*** ``` def eliminate_consonants(x): for char in x: if char in 'aeiou': print(char,end = "") ``` As simple as it looks, the statement `if char in 'aeiou'` checks if `char` is present in the string `aeiou`. * ***[A list comprehension](https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions)*** ``` ''.join([c for c in x if c in 'aeiou']) ``` This list comprehension will return a list that will contain the characters only if the character is in `aeiou` * ***[A generator expression](https://docs.python.org/2/reference/expressions.html#generator-expressions)*** ``` ''.join(c for c in x if c in 'aeiou') ``` This gen exp will return a generator than will return the characters only if the character is in `aeiou` * ***[Regular Expressions](https://docs.python.org/3/howto/regex.html)*** You can use [`re.findall`](https://docs.python.org/2/library/re.html#re.findall) to discover only the vowels in your string. The code ``` re.findall(r'[aeiou]',"mississippi") ``` will return a list of vowels found in the string i.e. `['i', 'i', 'i', 'i']`. 
So now we can use `str.join` and then use ``` ''.join(re.findall(r'[aeiou]',"mississippi")) ``` * ***[`str.translate`](https://docs.python.org/2/library/stdtypes.html#str.translate) and [`maketrans`](https://docs.python.org/3/library/stdtypes.html#str.maketrans)*** For this technique you will need to store a map which matches each of the non-vowels to a `None` type. For this you can use [`string.ascii_lowercase`](https://docs.python.org/2/library/string.html#string.ascii_lowercase). The code to make the map is ``` str.maketrans({i:None for i in string.ascii_lowercase if i not in "aeiou"}) ``` this will return the mapping. Do store it in a variable (here `m` for map) ``` "mississippi".translate(m) ``` This will remove all the non `aeiou` characters from the string. * ***Using [`dict.fromkeys`](https://docs.python.org/3/library/stdtypes.html#dict.fromkeys)*** You can use `dict.fromkeys` along with [`sys.maxunicode`](https://docs.python.org/3/library/sys.html#sys.maxunicode). But remember to `import sys` first! ``` dict.fromkeys(i for i in range(sys.maxunicode+1) if chr(i) not in 'aeiou') ``` and now use `str.translate`. ``` 'mississippi'.translate(m) ``` * ***Using [`bytearray`](https://docs.python.org/3/library/functions.html#bytearray)*** As mentioned by [J.F.Sebastian](http://stackoverflow.com/users/4279/j-f-sebastian) in the [comments below](http://stackoverflow.com/questions/29998052/print-vowels-in-string-python/29998062?noredirect=1#comment50843538_29998062), you can create a bytearray of lower case consonants by using ``` non_vowels = bytearray(set(range(0x100)) - set(b'aeiou')) ``` Using this we can translate the word, ``` 'mississippi'.encode('ascii', 'ignore').translate(None, non_vowels) ``` which will return `b'iiii'`. This can easily be converted to `str` by using `decode` i.e. `b'iiii'.decode("ascii")`. 
* ***Using [`bytes`](https://docs.python.org/3/library/functions.html#bytes)*** `bytes` returns a bytes object and is the immutable version of `bytearray`. (**It is Python 3 specific**) ``` non_vowels = bytes(set(range(0x100)) - set(b'aeiou')) ``` Using this we can translate the word, ``` 'mississippi'.encode('ascii', 'ignore').translate(None, non_vowels) ``` which will return `b'iiii'`. This can easily be converted to `str` by using `decode` i.e. `b'iiii'.decode("ascii")`. --- ## Timing comparison ## Python 3 ``` python3 -m timeit -s "text = 'mississippi'*100; non_vowels = bytes(set(range(0x100)) - set(b'aeiou'))" "text.encode('ascii', 'ignore').translate(None, non_vowels).decode('ascii')" 100000 loops, best of 3: 2.88 usec per loop python3 -m timeit -s "text = 'mississippi'*100; non_vowels = bytearray(set(range(0x100)) - set(b'aeiou'))" "text.encode('ascii', 'ignore').translate(None, non_vowels).decode('ascii')" 100000 loops, best of 3: 3.06 usec per loop python3 -m timeit -s "text = 'mississippi'*100;d=dict.fromkeys(i for i in range(127) if chr(i) not in 'aeiou')" "text.translate(d)" 10000 loops, best of 3: 71.3 usec per loop python3 -m timeit -s "import string; import sys; text='mississippi'*100; m = dict.fromkeys(i for i in range(sys.maxunicode+1) if chr(i) not in 'aeiou')" "text.translate(m)" 10000 loops, best of 3: 71.6 usec per loop python3 -m timeit -s "text = 'mississippi'*100" "''.join(c for c in text if c in 'aeiou')" 10000 loops, best of 3: 60.1 usec per loop python3 -m timeit -s "text = 'mississippi'*100" "''.join([c for c in text if c in 'aeiou'])" 10000 loops, best of 3: 53.2 usec per loop python3 -m timeit -s "import re;text = 'mississippi'*100; p=re.compile(r'[aeiou]')" "''.join(p.findall(text))" 10000 loops, best of 3: 57 usec per loop ``` The timings in sorted order ``` translate (bytes) | 2.88 translate (bytearray)| 3.06 List Comprehension | 53.2 Regular expressions | 57.0 Generator exp | 60.1 dict.fromkeys | 71.3 translate (unicode) | 71.6 
``` As you can see, the final method using `bytes` is the fastest. --- ## Python 3.5 ``` python3.5 -m timeit -s "text = 'mississippi'*100; non_vowels = bytes(set(range(0x100)) - set(b'aeiou'))" "text.encode('ascii', 'ignore').translate(None, non_vowels).decode('ascii')" 100000 loops, best of 3: 4.17 usec per loop python3.5 -m timeit -s "text = 'mississippi'*100; non_vowels = bytearray(set(range(0x100)) - set(b'aeiou'))" "text.encode('ascii', 'ignore').translate(None, non_vowels).decode('ascii')" 100000 loops, best of 3: 4.21 usec per loop python3.5 -m timeit -s "text = 'mississippi'*100;d=dict.fromkeys(i for i in range(127) if chr(i) not in 'aeiou')" "text.translate(d)" 100000 loops, best of 3: 2.39 usec per loop python3.5 -m timeit -s "import string; import sys; text='mississippi'*100; m = dict.fromkeys(i for i in range(sys.maxunicode+1) if chr(i) not in 'aeiou')" "text.translate(m)" 100000 loops, best of 3: 2.33 usec per loop python3.5 -m timeit -s "text = 'mississippi'*100" "''.join(c for c in text if c in 'aeiou')" 10000 loops, best of 3: 97.1 usec per loop python3.5 -m timeit -s "text = 'mississippi'*100" "''.join([c for c in text if c in 'aeiou'])" 10000 loops, best of 3: 86.6 usec per loop python3.5 -m timeit -s "import re;text = 'mississippi'*100; p=re.compile(r'[aeiou]')" "''.join(p.findall(text))" 10000 loops, best of 3: 74.3 usec per loop ``` The timings in sorted order ``` translate (unicode) | 2.33 dict.fromkeys | 2.39 translate (bytes) | 4.17 translate (bytearray)| 4.21 Regular expressions | 74.3 List Comprehension | 86.6 Generator exp | 97.1 ```
Django Import Error: No module named apps
30,001,009
4
2015-05-02T10:10:39Z
35,322,449
7
2016-02-10T17:51:23Z
[ "python", "django", "module", "directory", "importerror" ]
I just checked out a project with git. The project structure is ``` project apps myapp settings __init__.py __init__.py manage.py ``` There are other directories and files, but I think those are the important ones. When I run the server I get ``` Traceback (most recent call last): File "C:/Dev/project/apps/manage.py", line 10, in <module> execute_from_command_line(sys.argv) File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 385, in execute_from_command_line utility.execute() File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 345, in execute settings.INSTALLED_APPS File "C:\Python27\lib\site-packages\django\conf\__init__.py", line 46, in __getattr__ self._setup(name) File "C:\Python27\lib\site-packages\django\conf\__init__.py", line 42, in _setup self._wrapped = Settings(settings_module) File "C:\Python27\lib\site-packages\django\conf\__init__.py", line 98, in __init__ % (self.SETTINGS_MODULE, e) ImportError: Could not import settings 'apps.myapp.settings' (Is it on sys.path? Is there an import error in the settings file?): No module named apps.myapp.settings ``` When running `manage.py check` I get `ImportError: No module named apps.` so I guess the problem has nothing to do with my settings module but with my apps directory. I'm not sure why it can't find my module apps, because project is on my `sys.path` and the directory `apps` obviously exists. As I'm not very experienced as a Python developer, I can't find a solution myself.
Note that in Django 1.9 there is a module called `django.apps`. Avoiding name clashes with built-in modules is generally advised.
Summing list of counters in python
30,003,466
6
2015-05-02T14:30:44Z
30,003,471
13
2015-05-02T14:31:56Z
[ "python", "python-2.7", "counter" ]
I am looking to sum a list of counters in python. For example to sum: ``` counter_list = [Counter({"a":1, "b":2}), Counter({"b":3, "c":4})] ``` to give `Counter({'b': 5, 'c': 4, 'a': 1})` I can get the following code to do the summation: ``` counter_master = Counter() for element in counter_list: counter_master = counter_master + element ``` But I am confused as to why `counter_master = sum(counter_list)` results in the error `TypeError: unsupported operand type(s) for +: 'int' and 'Counter'` ? Given it is possible to add counters together, why is it not possible to sum them?
The [`sum`](https://docs.python.org/3/library/functions.html#sum) function has the optional *start* argument which defaults to 0. Quoting the linked page: > `sum(iterable[, start])` > > Sums *start* and the items of an *iterable* from left to right and returns > the total Set *start* to (empty) `Counter` object to avoid the `TypeError`: ``` In [5]: sum(counter_list, Counter()) Out[5]: Counter({'b': 5, 'c': 4, 'a': 1}) ```
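Putting it together as a runnable sketch:

```python
from collections import Counter

counter_list = [Counter({"a": 1, "b": 2}), Counter({"b": 3, "c": 4})]

# Counter() as the start value means every addition is Counter + Counter,
# instead of the default 0 + Counter (which raises the TypeError).
counter_master = sum(counter_list, Counter())
print(counter_master)  # Counter({'b': 5, 'c': 4, 'a': 1})
```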
How to use a bash variable in python
30,007,026
2
2015-05-02T20:18:53Z
30,007,057
7
2015-05-02T20:21:37Z
[ "python", "linux", "bash" ]
In bash I'm able to do this: `IP=$(wget -qO- ipinfo.io/ip)` This captures my public IP and stores it as the variable `$IP` on my Raspberry Pi. Now I want to capture this variable in Python to make an LED connected to GPIO 1 turn on when `$IP` is not equal to 82.1x.xxx.xx. I'm kind of a newbie in Python so I need some help; I have very little knowledge of Python but very good knowledge of bash. Any help or suggestions will be appreciated. Thank you in advance.
You should use [os.environ](https://docs.python.org/2/library/os.html#os.environ) dict. Try it out: ``` >>> import os >>> os.environ['IP'] ``` or ``` >>> os.environ.get('IP') ``` From doc: **os.environ** > A mapping object representing the string environment. For example, > environ['HOME'] is the pathname of your home directory (on some > platforms), and is equivalent to getenv("HOME") in C. **BONUS GAME** You could grab IP from http response just using pure Python, something like this: ``` import urllib2 # that's really cool! IP = urllib2.urlopen("http://ipinfo.io/ip").read().strip() ```
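For the LED use case from the question, a minimal sketch might look like this (the expected address is a placeholder, not the asker's real 82.1x.xxx.xx value, and the GPIO call is only indicated in a comment — also note the shell variable must be `export`ed for the Python process to see it):

```python
import os

EXPECTED_IP = '82.16.0.1'  # placeholder -- substitute your real address

ip = os.environ.get('IP', '')  # '' if the variable was not exported

led_on = ip != EXPECTED_IP
if led_on:
    # here you would drive GPIO 1 high, e.g. via the RPi.GPIO library
    print('IP changed (or unset): LED on')
else:
    print('IP matches: LED off')
```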
What is this piece of code doing, python
30,008,825
6
2015-05-03T00:00:46Z
30,008,850
11
2015-05-03T00:05:08Z
[ "python", "if-statement" ]
I am self-learning Python and I was doing an exercise, the solution to which was posted in [this](http://stackoverflow.com/questions/15396739/i-made-a-python-robbers-language-translating-programme-is-it-correct) thread. Could anyone translate into English what this piece of code means? When I learned if statements, I never came across this syntax. ``` consonants = 'bcdfghjklmnpqrstvwxz' return ''.join(l + 'o' + l if l in consonants else l for l in s) ```
It's a longer piece of code, written as a generator expression. Here is what it would look like, more drawn out. ``` consonants = 'bcdfghjklmnpqrstvwxz' ls = [] for l in s: if l in consonants: ls.append(l + 'o' + l) else: ls.append(l) return ''.join(ls) ``` It loops through `s` and checks if `l` is in the string `consonants`. If it is, it pushes `l + 'o' + l` to the list, and if not, it simply pushes `l`. The result is then joined into a string, using `''.join`, and returned. More accurately (as a generator): ``` consonants = 'bcdfghjklmnpqrstvwxz' def gencons(s): for l in s: if l in consonants: yield l + 'o' + l else: yield l return ''.join(gencons(s)) ``` Where `gencons` is just an arbitrary name I gave the generator function.
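Wrapped back into a function, the one-liner behaves like this (a sketch; `translate` is just an illustrative name and `'hello'` a sample input):

```python
consonants = 'bcdfghjklmnpqrstvwxz'

def translate(s):
    # the original one-liner: double every consonant with an 'o' between
    return ''.join(l + 'o' + l if l in consonants else l for l in s)

print(translate('hello'))  # 'hohelollolo' -- h->hoh, l->lol, vowels untouched
```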
How to reorder indexed rows based on a list in Pandas data frame
30,009,948
5
2015-05-03T03:34:33Z
30,010,004
10
2015-05-03T03:43:59Z
[ "python", "pandas" ]
I have a data frame that looks like this: ``` company Amazon Apple Yahoo name A 0 130 0 C 173 0 0 Z 0 0 150 ``` It was created using this code: ``` import pandas as pd df = pd.DataFrame({'name' : ['A', 'Z','C'], 'company' : ['Apple', 'Yahoo','Amazon'], 'height' : [130, 150,173]}) df = df.pivot(index="name", columns="company", values="height").fillna(0) ``` What I want to do is to sort the row (with index `name`) according to a predefined list `["Z", "C", "A"]`. Resulting in this : ``` company Amazon Apple Yahoo name Z 0 0 150 C 173 0 0 A 0 130 0 ``` How can I achieve that?
You could set index on predefined order using `reindex` like ``` In [14]: df.reindex(["Z", "C", "A"]) Out[14]: company Amazon Apple Yahoo Z 0 0 150 C 173 0 0 A 0 130 0 ``` However, if it's alphabetical order, you could use `sort_index(ascending=False)` ``` In [12]: df.sort_index(ascending=False) Out[12]: company Amazon Apple Yahoo name Z 0 0 150 C 173 0 0 A 0 130 0 ```
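End to end with the DataFrame from the question (a sketch assuming pandas is installed):

```python
import pandas as pd

df = pd.DataFrame({'name': ['A', 'Z', 'C'],
                   'company': ['Apple', 'Yahoo', 'Amazon'],
                   'height': [130, 150, 173]})
pivot = df.pivot(index='name', columns='company', values='height').fillna(0)

# Reorder the rows to match the predefined list
ordered = pivot.reindex(['Z', 'C', 'A'])
print(ordered)
```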
opencv 3.0.0-dev python bindings not working properly
30,013,009
10
2015-05-03T10:56:51Z
30,013,069
31
2015-05-03T11:02:29Z
[ "python", "opencv", "binding" ]
I am on Ubuntu 14.04.02; I have Python, Cython and NumPy installed and updated. I pulled the latest sources of OpenCV from <http://github.com/itseez/opencv> and compiled according to the documentation... When trying to run the Python source I pulled from <https://github.com/shantnu/FaceDetect/>, it gives me the following error: > modprobe: FATAL: Module nvidia not found. > Traceback (most recent call last): > File "face\_detect.py", line 21, in > flags = cv2.cv.CV\_HAAR\_SCALE\_IMAGE > AttributeError: 'module' object has no attribute 'cv' To make sure I have the Python bindings I typed the following in the terminal: python ``` import cv2 cv2.__version__ ``` It returned the following: '3.0.0-dev' What could be wrong with it?
The `cv2.cv` submodule was removed in OpenCV 3.0, and some constants were changed as well. Please use `cv2.CASCADE_SCALE_IMAGE` instead (do a `help(cv2)` to see the updated constants).
Problems obtaining most informative features with scikit learn?
30,017,491
15
2015-05-03T18:07:12Z
30,065,040
10
2015-05-05T23:41:41Z
[ "python", "pandas", "machine-learning", "nlp", "scikit-learn" ]
I'm trying to obtain the most informative features from a [textual corpus](http://pastebin.com/3qYc9mfZ). From this well-answered [question](https://stackoverflow.com/questions/26976362/how-to-get-most-informative-features-for-scikit-learn-classifier-for-different-c) I know that this task could be done as follows: ``` def most_informative_feature_for_class(vectorizer, classifier, classlabel, n=10): labelid = list(classifier.classes_).index(classlabel) feature_names = vectorizer.get_feature_names() topn = sorted(zip(classifier.coef_[labelid], feature_names))[-n:] for coef, feat in topn: print classlabel, feat, coef ``` Then: ``` most_informative_feature_for_class(tfidf_vect, clf, 5) ``` For this classifier: ``` X = tfidf_vect.fit_transform(df['content'].values) y = df['label'].values from sklearn import cross_validation X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.33) clf = SVC(kernel='linear', C=1) clf.fit(X, y) prediction = clf.predict(X_test) ``` The problem is the output of `most_informative_feature_for_class`: ``` 5 a_base_de_bien bastante (0, 2451) -0.210683496368 (0, 3533) -0.173621065386 (0, 8034) -0.135543062425 (0, 10346) -0.173621065386 (0, 15231) -0.154148294738 (0, 18261) -0.158890483047 (0, 21083) -0.297476572586 (0, 434) -0.0596263855375 (0, 446) -0.0753492277856 (0, 769) -0.0753492277856 (0, 1118) -0.0753492277856 (0, 1439) -0.0753492277856 (0, 1605) -0.0753492277856 (0, 1755) -0.0637950312345 (0, 3504) -0.0753492277856 (0, 3511) -0.115802483001 (0, 4382) -0.0668983049212 (0, 5247) -0.315713152154 (0, 5396) -0.0753492277856 (0, 5753) -0.0716096348446 (0, 6507) -0.130661516772 (0, 7978) -0.0753492277856 (0, 8296) -0.144739048504 (0, 8740) -0.0753492277856 (0, 8906) -0.0753492277856 : : (0, 23282) 0.418623443832 (0, 4100) 0.385906085143 (0, 15735) 0.207958503155 (0, 16620) 0.385906085143 (0, 19974) 0.0936828782325 (0, 20304) 0.385906085143 (0, 21721) 0.385906085143 (0, 22308) 0.301270427482 (0, 14903) 
0.314164150621 (0, 16904) 0.0653764031957 (0, 20805) 0.0597723455204 (0, 21878) 0.403750815828 (0, 22582) 0.0226150073272 (0, 6532) 0.525138162099 (0, 6670) 0.525138162099 (0, 10341) 0.525138162099 (0, 13627) 0.278332617058 (0, 1600) 0.326774799211 (0, 2074) 0.310556919237 (0, 5262) 0.176400451433 (0, 6373) 0.290124806858 (0, 8593) 0.290124806858 (0, 12002) 0.282832270298 (0, 15008) 0.290124806858 (0, 19207) 0.326774799211 ``` It is not returning the labels nor the words. Why is this happening, and how can I print the words and the labels? Do you think this is happening because I am using pandas to read the data? Another thing I tried is the following, from this [question](https://stackoverflow.com/questions/11116697/how-to-get-most-informative-features-for-scikit-learn-classifiers): ``` def print_top10(vectorizer, clf, class_labels): """Prints features with the highest coefficient values, per class""" feature_names = vectorizer.get_feature_names() for i, class_label in enumerate(class_labels): top10 = np.argsort(clf.coef_[i])[-10:] print("%s: %s" % (class_label, " ".join(feature_names[j] for j in top10))) print_top10(tfidf_vect,clf,y) ``` But I get this traceback: ``` Traceback (most recent call last): File "/Users/user/PycharmProjects/TESIS_FINAL/Classification/Supervised_learning/Final/experimentos/RBF/SVM_con_rbf.py", line 237, in <module> print_top10(tfidf_vect,clf,5) File "/Users/user/PycharmProjects/TESIS_FINAL/Classification/Supervised_learning/Final/experimentos/RBF/SVM_con_rbf.py", line 231, in print_top10 for i, class_label in enumerate(class_labels): TypeError: 'int' object is not iterable ``` Any idea of how to solve this, in order to get the features with the highest coefficient values?
To solve this specifically for linear SVM, we first have to understand the formulation of the SVM in sklearn and the differences that it has from MultinomialNB. The reason `most_informative_feature_for_class` works for MultinomialNB is that the output of `coef_` is essentially the log probability of features given a class (and hence is of size `[n_classes, n_features]`), due to the formulation of the naive Bayes problem. But if we check the [documentation](http://scikit-learn.org/stable/modules/svm.html#multi-class-classification) for SVM, the `coef_` is not that simple. Instead `coef_` for (linear) SVM is `[n_classes * (n_classes -1)/2, n_features]` because a separate binary model is fitted for every possible pair of classes. If we do possess some knowledge on which particular coefficient we're interested in, we could alter the function to look like the following: ``` def most_informative_feature_for_class_svm(vectorizer, classifier, classlabel, n=10): labelid = ?? # this is the coef we're interested in. feature_names = vectorizer.get_feature_names() svm_coef = classifier.coef_.toarray() topn = sorted(zip(svm_coef[labelid], feature_names))[-n:] for coef, feat in topn: print feat, coef ``` This would work as intended and print out the labels and the top n features according to the coefficient vector that you're after. As for getting the correct output for a particular class, that would depend on the assumptions and what you aim to output. I suggest reading through the multi-class section of the SVM documentation to get a feel for what you're after. So using the `train.txt` [file](http://stackoverflow.com/a/26977579/1992167) which was described in this [question](https://stackoverflow.com/questions/26976362/how-to-get-most-informative-features-for-scikit-learn-classifier-for-different-c), we can get some kind of output, though in this situation it isn't particularly descriptive or helpful to interpret. Hopefully this helps you. 
``` import codecs, re, time from itertools import chain import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.naive_bayes import MultinomialNB trainfile = 'train.txt' # Vectorizing data. train = [] word_vectorizer = CountVectorizer(analyzer='word') trainset = word_vectorizer.fit_transform(codecs.open(trainfile,'r','utf8')) tags = ['bs','pt','es','sr'] # Training NB mnb = MultinomialNB() mnb.fit(trainset, tags) from sklearn.svm import SVC svcc = SVC(kernel='linear', C=1) svcc.fit(trainset, tags) def most_informative_feature_for_class(vectorizer, classifier, classlabel, n=10): labelid = list(classifier.classes_).index(classlabel) feature_names = vectorizer.get_feature_names() topn = sorted(zip(classifier.coef_[labelid], feature_names))[-n:] for coef, feat in topn: print classlabel, feat, coef def most_informative_feature_for_class_svm(vectorizer, classifier, n=10): labelid = 3 # this is the coef we're interested in. feature_names = vectorizer.get_feature_names() svm_coef = classifier.coef_.toarray() topn = sorted(zip(svm_coef[labelid], feature_names))[-n:] for coef, feat in topn: print feat, coef most_informative_feature_for_class(word_vectorizer, mnb, 'pt') print most_informative_feature_for_class_svm(word_vectorizer, svcc) ``` with output: ``` pt teve -4.63472898823 pt tive -4.63472898823 pt todas -4.63472898823 pt vida -4.63472898823 pt de -4.22926388012 pt foi -4.22926388012 pt mais -4.22926388012 pt me -4.22926388012 pt as -3.94158180767 pt que -3.94158180767 no 0.0204081632653 parecer 0.0204081632653 pone 0.0204081632653 por 0.0204081632653 relación 0.0204081632653 una 0.0204081632653 visto 0.0204081632653 ya 0.0204081632653 es 0.0408163265306 lo 0.0408163265306 ```
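As a hedged sketch of how one might choose `labelid` above: for scikit-learn's one-vs-one scheme, the `n_classes * (n_classes - 1) / 2` binary classifiers are commonly described as being ordered by class pairs in the order `itertools.combinations` produces over the sorted labels ("0 vs 1", "0 vs 2", ..., "1 vs 2", ...). This pairing assumption is worth double-checking against the multi-class docs for your sklearn version, but enumerating the pairs gives a map from a row index to a class pair:

```python
from itertools import combinations

# Hypothetical class labels, mirroring the tags used above; the pairing
# order below is an assumption based on sklearn's one-vs-one description.
classes = sorted(['bs', 'pt', 'es', 'sr'])
pairs = list(combinations(classes, 2))
for labelid, (a, b) in enumerate(pairs):
    print(labelid, '->', a, 'vs', b)
```

With four classes this yields six rows, `0 -> bs vs es` through `5 -> pt vs sr`, so `labelid = 5` would pick the `pt` vs `sr` coefficient vector under this assumption.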
How do I add to odd index values in python?
30,018,033
3
2015-05-03T18:55:46Z
30,018,119
8
2015-05-03T19:02:50Z
[ "python", "for-loop", "indexing" ]
I have to process the list to add an amount to every odd position. For example, if I start with ``` def main(): L = [1,3,5,7,9,11] ``` and have to add 5 to each odd position, the output should be ``` L = [1,8,5,12,9,16] ``` I'm stumped as to where to begin. I am supposed to use indexing and a for loop, but everything I try is not working. Here's what I have so far ``` def main(): L = [3,12,9,5,7,6,8] i=0 while i < len(L): if i%2 == 1: print L[i] i = i+5 elif i%2 == 0: print L ```
You can use slice assignment on your list. Slice assignment means assigning to a range of indexes of your original list from another source list. This "other source list" can come from a simple list comprehension that is formed from another slice. ``` >>> L = [1,3,5,7,9,11] >>> L[1::2] = [x+5 for x in L[1::2]] >>> L [1, 8, 5, 12, 9, 16] ``` --- **Slicing syntax explanation** Slicing is when you take a finite sequence, and index it with 1-2 colon characters, with the goal of selecting a *subsequence*. A slice with 1 colon character has the format: ``` [start:stop] ``` A slice with 2 colon characters has the format: ``` [start:stop:step] ``` Arguments can be omitted and then they will be set to their default values. If start is omitted, it defaults to the first index of the sequence, or 0, since Python sequences are 0-indexed. If stop is omitted it defaults to the last valid index of the sequence + 1, or equivalently, the length of the sequence. If step is omitted, it defaults to 1. So when you do a slice like `seq[1::2]`, what you're saying is, get the elements of `seq` with indexes starting from index 1 inclusive (specified by you), stopping at index `len(seq)` exclusive (default value), and stepping by 2 every time (specified by you). --- **List comprehension explanation** A list comprehension can be thought of as a concise, Pythonic way to do map and filter operations on an arbitrary number of potentially nested iterations over iterables. If none of that makes sense to you, don't worry! You can learn all the map and filter stuff later. For now, just think of the list comprehension `[x+5 for x in L[1::2]]` as: ``` newlist = [] for x in L[1::2]: newlist.append(x+5) ``` And then `newlist` would be set to the value of the list comprehension.
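If slice assignment feels too magical, the same result can be had with a plain index loop over the odd positions, which also matches the "indexing and a for loop" requirement from the question — a minimal sketch:

```python
L = [1, 3, 5, 7, 9, 11]
for i in range(1, len(L), 2):  # odd indices: 1, 3, 5, ...
    L[i] += 5
print(L)  # [1, 8, 5, 12, 9, 16]
```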
In Python is there any way to append behind?
30,018,467
2
2015-05-03T19:35:10Z
30,018,493
8
2015-05-03T19:37:37Z
[ "python", "matrix", "append", "behind" ]
I have a matrix: ``` [[1 2 3], [4 5 6], [7 8 9]] ``` and I need to create a new matrix: ``` [[7 4 1], [8 5 2], [9 6 3]] ``` I tried ``` new_matrix = [[1]] new_matrix.append(matrix[1][0]) ``` and got a new\_matrix = `[4 1]` instead of a new\_matrix = `[1 4]` If you need more clarification, please just ask.
Yes. Use `new_matrix.insert(0,matrix[1][0])`. `insert(position,value)` allows you to insert objects into specified positions in a list. In this case, since you want to insert a number at the beginning, the position is zero. Note, however, that this takes O(n) time if new\_matrix has n elements. If new\_matrix has 100 elements, it will take ten times longer to add something to the beginning than if it has 10. That's much slower than adding something to the end of the list, which usually takes O(1): it should be fast regardless of how big new\_matrix is. See [here](https://wiki.python.org/moin/TimeComplexity) for more on time complexity of python operations. If you'll regularly be adding elements to the beginning of lists, you might want to think about whether you can reverse what you're doing. Also, note that the way you've done things, this will give you a new\_matrix of [4,[1]]. I'm not quite sure what you want: if you want the final results as you are describing them, then you need `new_matrix = [1]`. If your code is correct (`new_matrix = [[1]]`), and you want `[[4,1]]`, then you'll need to do `new_matrix[0].insert(0,4)`. If you want `[[4],[1]]`, you'll need to do `new_matrix.insert(0,[4])`, and so on. As an aside, since you seem to be doing things with matrices, have you considered using numpy? --- (I'd like to point out that, if this answer seems a bit off-topic, it's because this question was edited to be something entirely different than was originally asked.) As for the new question: while Stefan's answer is good python, you may be giving yourself too much work. It's clear you're trying to implement something like a matrix transpose, except mirrored. If you're doing those sorts of manipulations, [Numpy](http://numpy.org) is *much* easier and faster. 
In this case, with numpy arrays, you'd just need to do the following: ``` import numpy as np # import numpy matrix = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) # Here's your matrix new_matrix_1 = matrix[::-1,:] # Here's your matrix with all the rows reversed: [[7,8,9],[4,5,6],[1,2,3]] new_matrix = new_matrix_1.T # Here's the transpose of that, which is what you want. ``` While this is just one thing, this will make *everything* you do easier. For example, arithmetic will actually work: `new_matrix+matrix`, `2*new_matrix`, and so on. You'd have to implement this manually otherwise.
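For completeness, the same mirrored transpose can also be done in plain Python without numpy, by reversing the row order and then using `zip` to pair up the columns — a minimal sketch:

```python
matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
# Reverse the rows, then zip them together column-wise.
mirrored = [list(row) for row in zip(*matrix[::-1])]
print(mirrored)  # [[7, 4, 1], [8, 5, 2], [9, 6, 3]]
```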
How can I get a list of the symbols in a sympy expression?
30,018,977
6
2015-05-03T20:23:44Z
30,019,034
7
2015-05-03T20:28:57Z
[ "python", "python-3.x", "sympy" ]
For example, if I run ``` import sympy x, y, z = sympy.symbols('x:z') f = sympy.exp(x + y) - sympy.sqrt(z) ``` is there any method of `f` that I can use to get a list or tuple of `sympy.Symbol` objects that the expression contains? I'd rather not have to parse `srepr(f)` or parse downward through `f.args`. In this case, `g.args[0].args[1].args[0]` gives me `Symbol("z")`, while `g.args[1].args[0].args` gives me the tuple `(Symbol("x"), Symbol("y"))`, but obviously these are expression-specific.
You can use: ``` f.free_symbols ``` which will return a set of all free symbols. Example: ``` >>> import sympy >>> x, y, z = sympy.symbols('x:z') >>> f = sympy.exp(x + y) - sympy.sqrt(z) >>> f.free_symbols set([x, z, y]) ```
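One caveat worth adding: `free_symbols` returns a set, so the iteration order is arbitrary. If a deterministic list is needed, sorting by symbol name is a small, safe step:

```python
import sympy

x, y, z = sympy.symbols('x:z')
f = sympy.exp(x + y) - sympy.sqrt(z)
# Sets are unordered; sort by the symbols' names for a stable list.
syms = sorted(f.free_symbols, key=lambda s: s.name)
print(syms)  # [x, y, z]
```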
How to find the first index of any of a set of characters in a string
30,020,184
12
2015-05-03T22:36:31Z
30,020,209
16
2015-05-03T22:41:00Z
[ "python", "string", "indexing" ]
I'd like to find the index of the first occurrence of any “special” character in a string, like so: ``` >>> "Hello world!".index([' ', '!']) 5 ``` …except that's not valid Python syntax. Of course, I can write a function that emulates this behavior: ``` def first_index(s, characters): i = [] for c in characters: try: i.append(s.index(c)) except ValueError: pass if not i: raise ValueError return min(i) ``` I could also use regular expressions, but both solutions seem to be a bit overkill. Is there any “sane” way to do this in Python?
You can use [enumerate](https://docs.python.org/2/library/functions.html#enumerate) and [next](https://docs.python.org/2/library/functions.html#next) with a [generator expression](https://docs.python.org/2/reference/expressions.html#generator-expressions), getting the first match or returning None if no character appears in s: ``` s = "Hello world!" st = {"!"," "} ind = next((i for i, ch in enumerate(s) if ch in st),None) print(ind) ``` You can pass any value you want to next as a default return value if there is no match. If you want to use a function and raise a ValueError: ``` def first_index(s, characters): st = set(characters) ind = next((i for i, ch in enumerate(s) if ch in st), None) if ind is not None: return ind raise ValueError ``` For smaller inputs using a set won't make much if any difference but for large strings it will be more efficient. Some timings: In the string, last character of the character set: ``` In [40]: s = "Hello world!" * 100 In [41]: string = s In [42]: %%timeit st = {"x","y","!"} next((i for i, ch in enumerate(s) if ch in st), None) ....: 1000000 loops, best of 3: 1.71 µs per loop In [43]: %%timeit specials = ['x', 'y', '!'] min(map(lambda x: (string.index(x) if (x in string) else len(string)), specials)) ....: 100000 loops, best of 3: 2.64 µs per loop ``` Not in the string, larger character set: ``` In [44]: %%timeit st = {"u","v","w","x","y","z"} next((i for i, ch in enumerate(s) if ch in st), None) ....: 1000000 loops, best of 3: 1.49 µs per loop In [45]: %%timeit specials = ["u","v","w","x","y","z"] min(map(lambda x: (string.index(x) if (x in string) else len(string)), specials)) ....: 100000 loops, best of 3: 5.48 µs per loop ``` In the string, and the very first character of the character set: ``` In [47]: %%timeit specials = ['H', 'y', '!'] min(map(lambda x: (string.index(x) if (x in string) else len(string)), specials)) ....: 100000 loops, best of 3: 2.02 µs per loop In [48]: %%timeit st = {"H","y","!"} next((i for i, ch in enumerate(s) if 
ch in st), None) ....: 1000000 loops, best of 3: 903 ns per loop ```
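The regular-expression route the question dismissed as overkill is also quite short: `re.search` with a character class returns the first match object, and its `.start()` is exactly the index wanted — a sketch:

```python
import re

s = "Hello world!"
m = re.search(r"[ !]", s)  # character class: space or '!'
ind = m.start() if m else None
print(ind)  # 5
```

If the character set is built at runtime, escape each character first with `re.escape` so characters like `]` or `-` don't change the meaning of the class.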
can't build matplotlib (png package issue)
30,022,983
9
2015-05-04T05:09:43Z
30,026,042
13
2015-05-04T08:47:15Z
[ "python", "linux", "matplotlib", "fedora" ]
I am trying to build matplotlib on Fedora 18, but the build fails with ``` ... png: no [pkg-config information for 'libpng' could not be found.] ... * The following required packages can not be built: * png ``` What should I do/check to get the png package to build?
Sounds like you don't have `libpng-devel` installed. This install is not handled by pip, so you'll need to install it yourself. You should be able to install it via `yum`. ``` sudo yum install libpng-devel ``` You may also need `freetype`. Maybe try `yum-builddep matplotlib`?
sklearn.cross_validation.StratifiedShuffleSplit - error: "indices are out-of-bounds"
30,023,927
9
2015-05-04T06:35:04Z
30,025,025
19
2015-05-04T07:43:29Z
[ "python", "pandas", "scikit-learn" ]
I was trying to split the sample dataset using Scikit-learn's Stratified Shuffle Split. I followed the example shown on the Scikit-learn documentation [here](http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.StratifiedShuffleSplit.html) ``` import pandas as pd import numpy as np # UCI's wine dataset wine = pd.read_csv("https://s3.amazonaws.com/demo-datasets/wine.csv") # separate target variable from dataset target = wine['quality'] data = wine.drop('quality',axis = 1) # Stratified Split of train and test data from sklearn.cross_validation import StratifiedShuffleSplit sss = StratifiedShuffleSplit(target, n_iter=3, test_size=0.2) for train_index, test_index in sss: xtrain, xtest = data[train_index], data[test_index] ytrain, ytest = target[train_index], target[test_index] # Check target series for distribution of classes ytrain.value_counts() ytest.value_counts() ``` However, upon running this script, I get the following error: ``` IndexError: indices are out-of-bounds ``` Could someone please point out what I am doing wrong here? Thanks!
You're running into the different conventions for Pandas `DataFrame` indexing versus NumPy `ndarray` indexing. The arrays `train_index` and `test_index` are collections of row indices. But `data` is a Pandas `DataFrame` object, and when you use a single index into that object, as in `data[train_index]`, Pandas is expecting `train_index` to contain *column* labels rather than row indices. You can either convert the dataframe to a NumPy array, using `.values`: ``` data_array = data.values for train_index, test_index in sss: xtrain, xtest = data_array[train_index], data_array[test_index] ytrain, ytest = target[train_index], target[test_index] ``` or use the Pandas [`.iloc`](http://pandas.pydata.org/pandas-docs/stable/indexing.html#selection-by-position) accessor: ``` for train_index, test_index in sss: xtrain, xtest = data.iloc[train_index], data.iloc[test_index] ytrain, ytest = target[train_index], target[test_index] ``` I tend to favour the second approach, since it gives `xtrain` and `xtest` of type `DataFrame` rather than `ndarray`, and so keeps the column labels.
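To make the `.iloc` behaviour concrete in isolation, here is a minimal sketch with made-up data standing in for the wine set (the index list is a hypothetical stand-in for what the splitter yields): positional row selection works the same way on both the feature frame and the target series:

```python
import pandas as pd

# Hypothetical stand-ins for the wine features and quality labels.
data = pd.DataFrame({'f1': range(6), 'f2': range(6, 12)})
target = pd.Series(['a', 'b', 'a', 'b', 'a', 'b'])

train_index = [0, 1, 4, 5]  # row positions, as the splitter would produce
xtrain = data.iloc[train_index]   # rows by position, not column labels
ytrain = target.iloc[train_index]
print(len(xtrain), list(ytrain))  # 4 ['a', 'b', 'a', 'b']
```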
Flask: Download a csv file on clicking a button
30,024,948
5
2015-05-04T07:39:12Z
30,043,470
7
2015-05-05T02:52:37Z
[ "javascript", "python", "json", "csv", "flask" ]
I just got started with Flask/Python. What I want to achieve is that I have a download button in my HTML and it calls the following function: ``` function downloadPlotCSV() { $.ajax({ url: "/getPlotCSV", type: "post", success: function(data) { dataPlot = JSON.parse(data); console.log(dataPlot); } }); } ``` The incomplete flask code is: ``` @app.route('/getPlotCSV', methods = ['POST']) def plotCSV(): data = open("outputs/Adjacency.csv") ``` The problem I am facing is that I cannot find a way to download this csv file or return it as a JSON string so I can download it using Javascript. Any idea how I can send it as JSON or maybe download it via Flask itself? What's the best way?
Here is one way to download a CSV file with no Javascript: ``` #!/usr/bin/python from flask import Flask, Response app = Flask(__name__) @app.route("/") def hello(): return ''' <html><body> Hello. <a href="/getPlotCSV">Click me.</a> </body></html> ''' @app.route("/getPlotCSV") def getPlotCSV(): # with open("outputs/Adjacency.csv") as fp: # csv = fp.read() csv = '1,2,3\n4,5,6\n' return Response( csv, mimetype="text/csv", headers={"Content-disposition": "attachment; filename=myplot.csv"}) app.run(debug=True) ```
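If you do want the ajax/`JSON.parse` path from the question instead, the CSV can be turned into a JSON string with only the standard library; the Flask view would then simply `return payload`. A sketch with hypothetical in-memory content standing in for `outputs/Adjacency.csv`:

```python
import csv
import io
import json

# Hypothetical CSV content; in the Flask view you would read the file instead.
csv_text = "a,b,c\n1,2,3\n4,5,6\n"
rows = list(csv.reader(io.StringIO(csv_text)))  # list of row lists
payload = json.dumps(rows)                      # JSON string for the ajax caller
print(payload)
```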
Add Multiple Columns to Pandas Dataframe from Function
30,026,815
6
2015-05-04T09:32:39Z
30,027,273
8
2015-05-04T09:57:09Z
[ "python", "pandas" ]
I have a pandas data frame `mydf` that has two columns,and both columns are datetime datatypes: `mydate` and `mytime`. I want to add three more columns: `hour`, `weekday`, and `weeknum`. ``` def getH(t): #gives the hour return t.hour def getW(d): #gives the week number return d.isocalendar()[1] def getD(d): #gives the weekday return d.weekday() # 0 for Monday, 6 for Sunday mydf["hour"] = mydf.apply(lambda row:getH(row["mytime"]), axis=1) mydf["weekday"] = mydf.apply(lambda row:getD(row["mydate"]), axis=1) mydf["weeknum"] = mydf.apply(lambda row:getW(row["mydate"]), axis=1) ``` The snippet works, but it's not computationally efficient as it loops through the data frame at least three times. I would just like to know if there's a faster and/or more optimal way to do this. For example, using `zip` or `merge`? If, for example, I just create one function that returns three elements, how should I implement this? To illustrate, the function would be: ``` def getHWd(d,t): return t.hour, d.isocalendar()[1], d.weekday() ```
Here's one approach to do it using a single `apply` Say, `df` is like ``` In [64]: df Out[64]: mydate mytime 0 2011-01-01 2011-11-14 1 2011-01-02 2011-11-15 2 2011-01-03 2011-11-16 3 2011-01-04 2011-11-17 4 2011-01-05 2011-11-18 5 2011-01-06 2011-11-19 6 2011-01-07 2011-11-20 7 2011-01-08 2011-11-21 8 2011-01-09 2011-11-22 9 2011-01-10 2011-11-23 10 2011-01-11 2011-11-24 11 2011-01-12 2011-11-25 ``` We'll take the lambda function out to a separate line for readability and define it like (note the order of the returned values matches the order of the target columns: hour, weekday, weeknum) ``` In [65]: lambdafunc = lambda x: pd.Series([x['mytime'].hour, x['mydate'].weekday(), x['mydate'].isocalendar()[1]]) ``` And, `apply` and store the result to `df[['hour', 'weekday', 'weeknum']]` ``` In [66]: df[['hour', 'weekday', 'weeknum']] = df.apply(lambdafunc, axis=1) ``` And, the output is like ``` In [67]: df Out[67]: mydate mytime hour weekday weeknum 0 2011-01-01 2011-11-14 0 5 52 1 2011-01-02 2011-11-15 0 6 52 2 2011-01-03 2011-11-16 0 0 1 3 2011-01-04 2011-11-17 0 1 1 4 2011-01-05 2011-11-18 0 2 1 5 2011-01-06 2011-11-19 0 3 1 6 2011-01-07 2011-11-20 0 4 1 7 2011-01-08 2011-11-21 0 5 1 8 2011-01-09 2011-11-22 0 6 1 9 2011-01-10 2011-11-23 0 0 2 10 2011-01-11 2011-11-24 0 1 2 11 2011-01-12 2011-11-25 0 2 2 ```
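If the two columns really are datetime dtype, the row-wise loop can be avoided entirely with the vectorized `.dt` accessor; this is usually much faster than `apply`. The `isocalendar()` form below assumes a reasonably recent pandas version:

```python
import pandas as pd

df = pd.DataFrame({
    'mydate': pd.to_datetime(['2011-01-01', '2011-01-03']),
    'mytime': pd.to_datetime(['2011-11-14 08:00', '2011-11-15 17:30']),
})
df['hour'] = df['mytime'].dt.hour                   # 8, 17
df['weekday'] = df['mydate'].dt.weekday             # 5 (Sat), 0 (Mon)
df['weeknum'] = df['mydate'].dt.isocalendar().week  # ISO week: 52, 1
print(df[['hour', 'weekday', 'weeknum']])
```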
Correct placement of colorbar relative to geo axes (cartopy)
30,030,328
9
2015-05-04T12:33:59Z
30,077,745
9
2015-05-06T13:08:50Z
[ "python", "matplotlib", "axes", "colorbar", "cartopy" ]
Using Cartopy, I would like to have full control of where my colorbar goes. Usually I do this by getting the current axes position as basis and then create new axes for the colorbar. This works well for standard matplotlib axes but not when using Cartopy and geo\_axes, because this will distort the axes. So, my question is: how do I get the exact position of my geo\_axes? Here is a code example based on the Cartopy docs <http://scitools.org.uk/cartopy/docs/latest/matplotlib/advanced_plotting.html>: ``` import cartopy.crs as ccrs import matplotlib.pyplot as plt import os from netCDF4 import Dataset as netcdf_dataset from cartopy import config def main(): fname = os.path.join(config["repo_data_dir"], 'netcdf', 'HadISST1_SST_update.nc' ) dataset = netcdf_dataset(fname) sst = dataset.variables['sst'][0, :, :] lats = dataset.variables['lat'][:] lons = dataset.variables['lon'][:] #my preferred way of creating plots (even if it is only one plot) ef, ax = plt.subplots(1,1,figsize=(10,5),subplot_kw={'projection': ccrs.PlateCarree()}) ef.subplots_adjust(hspace=0,wspace=0,top=0.925,left=0.1) #get size and extent of axes: axpos = ax.get_position() pos_x = axpos.x0+axpos.width + 0.01# + 0.25*axpos.width pos_y = axpos.y0 cax_width = 0.04 cax_height = axpos.height #create new axes where the colorbar should go. #it should be next to the original axes and have the same height! pos_cax = ef.add_axes([pos_x,pos_y,cax_width,cax_height]) im = ax.contourf(lons, lats, sst, 60, transform=ccrs.PlateCarree()) ax.coastlines() plt.colorbar(im, cax=pos_cax) ax.coastlines(resolution='110m') ax.gridlines() ax.set_extent([-20, 60, 33, 63]) #when using this line the positioning of the colorbar is correct, #but the image gets distorted. #when omitting this line, the positioning of the colorbar is wrong, #but the image is well represented (not distorted). 
ax.set_aspect('auto', adjustable=None) plt.savefig('sst_aspect.png') plt.close() if __name__ == '__main__': main() ``` Resulting Figure, when using "set\_aspect": ![enter image description here](http://i.stack.imgur.com/iGbLw.png) Resulting Figure, when omitting "set\_aspect": ![enter image description here](http://i.stack.imgur.com/PioZE.png) Basically, I'd like to obtain the first figure (correctly placed colorbar) but without using the "set\_aspect". I guess this should be possible with some transformations, but I didn't find a solution so far. Thanks!
Great question! Thanks for the code, and pictures, it makes the problem a lot easier to understand as well as making it easier to quickly iterate on possible solutions. The problem here is essentially a matplotlib one. Cartopy calls `ax.set_aspect('equal')` as this is part of the Cartesian units of a projection's definition. Matplotlib's equal aspect ratio functionality resizes the axes to match the x and y limits, rather than changing the limits to fit to the axes rectangle. It is for this reason that the axes does not fill the space allocated to it on the figure. If you interactively resize the figure you will see that the amount of space that the axes occupies varies depending on the aspect that you resize your figure to. The simplest way of identifying the location of an axes is with the `ax.get_position()` method you have already been using. However, as we now know, this "position" changes with the size of the figure. One solution therefore is to re-calculate the position of the colorbar each time the figure is resized. The [matplotlib event machinery](http://matplotlib.org/users/event_handling.html#event-connections) has a "resize\_event" which is triggered each time a figure is resized. If we use this machinery for your colorbar, our event might look something like: ``` def resize_colorbar(event): # Tell matplotlib to re-draw everything, so that we can get # the correct location from get_position. plt.draw() posn = ax.get_position() colorbar_ax.set_position([posn.x0 + posn.width + 0.01, posn.y0, 0.04, posn.height]) fig.canvas.mpl_connect('resize_event', resize_colorbar) ``` So if we relate this back to cartopy, and your original question, it is now possible to resize the colorbar based on the position of the geo-axes. 
The full code to do this might look like: ``` import cartopy.crs as ccrs import matplotlib.pyplot as plt import os from netCDF4 import Dataset as netcdf_dataset from cartopy import config fname = os.path.join(config["repo_data_dir"], 'netcdf', 'HadISST1_SST_update.nc' ) dataset = netcdf_dataset(fname) sst = dataset.variables['sst'][0, :, :] lats = dataset.variables['lat'][:] lons = dataset.variables['lon'][:] fig, ax = plt.subplots(1, 1, figsize=(10,5), subplot_kw={'projection': ccrs.PlateCarree()}) # Add the colorbar axes anywhere in the figure. Its position will be # re-calculated at each figure resize. cbar_ax = fig.add_axes([0, 0, 0.1, 0.1]) fig.subplots_adjust(hspace=0, wspace=0, top=0.925, left=0.1) sst_contour = ax.contourf(lons, lats, sst, 60, transform=ccrs.PlateCarree()) def resize_colorbar(event): plt.draw() posn = ax.get_position() cbar_ax.set_position([posn.x0 + posn.width + 0.01, posn.y0, 0.04, posn.height]) fig.canvas.mpl_connect('resize_event', resize_colorbar) ax.coastlines() plt.colorbar(sst_contour, cax=cbar_ax) ax.gridlines() ax.set_extent([-20, 60, 33, 63]) plt.show() ```
does nolearn/lasagne support python 3
30,034,492
9
2015-05-04T15:58:06Z
30,177,516
7
2015-05-11T20:57:51Z
[ "python", "python-3.x", "theano" ]
I am working with the Neural Net implementation in `nolearn.lasagne` as mentioned [here](http://nbviewer.ipython.org/github/ottogroup/kaggle/blob/master/Otto_Group_Competition.ipynb) However I get the following error: `ImportError: No module named 'cPickle'` I figured out that `cPickle` is `pickle` in Python 3. Does nolearn/lasagne support Python 3? If not, is there any workaround?
You seem to be using an older version of nolearn. Try the current master from Github: first `pip uninstall nolearn`, then `pip install https://github.com/dnouri/nolearn/archive/master.zip#egg=nolearn`. Here's the tests in master running with both Python 2.7 and 3.4: <https://travis-ci.org/dnouri/nolearn/builds/61806852>
What are the differences between Conda and Anaconda
30,034,840
14
2015-05-04T16:16:48Z
30,057,885
28
2015-05-05T15:59:19Z
[ "python", "anaconda", "conda" ]
I first installed *Anaconda* on my Ubuntu machine at `~/anaconda`. When I was trying to update my Anaconda installation, according to the [documentation](http://docs.continuum.io/anaconda/install.html) from Continuum Analytics, I should use the following commands: ``` conda update conda conda update anaconda ``` Then I realized that I did not have *conda* installed, so I installed it using the documentation from [here](http://conda.pydata.org/docs/intro.html#update-anaconda-to-latest-version). After *conda* was installed, when I ran `conda update anaconda`, I got the following error: > Error: package 'anaconda' is not installed in /home/xiang/miniconda It appears conda is assuming my Anaconda is installed under `/home/xiang/miniconda`, which is *NOT* true. **My questions are:** 1. What are the differences between *conda* and *anaconda*? 2. How can I tell *conda* where my *anaconda* is installed?
conda is the package manager. Anaconda is a set of about a hundred packages including conda, numpy, scipy, ipython notebook, and so on. You installed Miniconda, which is a smaller alternative to Anaconda that is just conda and its dependencies (as opposed to Anaconda, which is conda and a bunch of other packages like numpy, scipy, ipython notebook, etc.). Once you have Miniconda, you can easily install Anaconda into it with `conda install anaconda`.
How do I use all() built-in function?
30,035,163
2
2015-05-04T16:33:30Z
30,035,214
8
2015-05-04T16:36:05Z
[ "python", "python-2.7", "python-2.x" ]
I am attempting to use [`all()`](https://docs.python.org/2/library/functions.html#all) but it is not working for me: ``` >>> names = ["Rhonda", "Ryan", "Red Rackham", "Paul"] >>> all([name for name in names if name[0] == "R"]) True >>> ``` I am trying to check if all the names begin with `"R"`, and even though I added `"Paul"` to `names`, `all()` still returns `True`. How do I fix this so that `all()` returns `False` until `"Paul"` is removed?
You misunderstand how `all` works. From the [docs](https://docs.python.org/3/library/functions.html#all): > `all(iterable)` > > Return `True` if all elements of the `iterable` are true (or if the > `iterable` is empty). In your code, you are first collecting all names that start with `R` into a list and then passing this list to `all`. Doing this will always return `True` because non-empty strings evaluate to `True`. --- Instead, you should write: ``` all(name[0] == "R" for name in names) ``` This will pass an iterable of booleans to `all`. If all of them are `True`, the function will return `True`; otherwise, it will return `False`. As an added bonus, the result will now be computed lazily because we used a [generator expression](https://docs.python.org/3/reference/expressions.html#grammar-token-generator_expression) instead of a list comprehension. With the list comprehension, the code needed to test *all* strings before determining a result. The new code however will only check as many as necessary.
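A quick check of the fixed predicate form, here using `str.startswith` instead of indexing (which also avoids an `IndexError` on empty strings):

```python
names = ["Rhonda", "Ryan", "Red Rackham", "Paul"]
print(all(name.startswith("R") for name in names))  # False: "Paul" fails

names.remove("Paul")
print(all(name.startswith("R") for name in names))  # True
```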
Sum of products of pairs in a list
30,039,334
8
2015-05-04T20:25:45Z
30,039,414
23
2015-05-04T20:30:46Z
[ "python", "numpy", "pandas", "itertools" ]
This is the problem I have. Given a list ``` xList = [9, 13, 10, 5, 3] ``` I would like to calculate for sum of each element multiplied by subsequent elements ``` sum([9*13, 9*10, 9*5 , 9*3]) + sum([13*10, 13*5, 13*3]) + sum([10*5, 10*3]) + sum ([5*3]) ``` in this case the answer is **608**. Is there a way to do this perhaps with `itertools` or natively with `numpy`? Below is a function I came up with. It does the job but it is far from ideal as I would like to add other stuff as well. ``` def SumProduct(xList): ''' compute the sum of the product of a list e.g. xList = [9, 13, 10, 5, 3] the result will be sum([9*13, 9*10, 9*5 , 9*3]) + sum([13*10, 13*5, 13*3]) + sum([10*5, 10*3]) + sum ([5*3]) ''' xSum = 0 for xnr, x in enumerate(xList): #print xnr, x xList_1 = np.array(xList[xnr+1:]) #print x * xList_1 xSum = xSum + sum(x * xList_1) return xSum ``` Any help appreciated. N.B: In case you wonder, I am trying to implement [Krippendorf's alpha](http://repository.upenn.edu/cgi/viewcontent.cgi?article=1043&context=asc_papers) with **pandas**
``` x = array([9, 13, 10, 5, 3]) result = (x.sum()**2 - x.dot(x)) / 2 ``` This takes advantage of some mathematical simplifications to work in linear time and constant space, compared to other solutions that might have quadratic performance. Here's a diagram of how this works. Suppose `x = array([2, 3, 1])`. Then if you view the products as the areas of rectangles: ``` x is this stick: -- --- - x.sum()**2 is this rectangle: -- --- - |xx xxx x |xx xxx x |xx xxx x |xx xxx x |xx xxx x |xx xxx x x.dot(x) is this diagonal bit: -- --- - |xx |xx | xxx | xxx | xxx | x (x.sum()**2 - x.dot(x)) is the non-diagonal parts: -- --- - | xxx x | xxx x |xx x |xx x |xx x |xx xxx and (x.sum()**2 - x.dot(x)) / 2 is the product you want: -- --- - | xxx x | xxx x | x | x | x | ```
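A quick sanity check of the identity behind this answer, comparing the closed form against the brute-force pairwise sum (plain Python, no numpy needed):

```python
from itertools import combinations

x = [9, 13, 10, 5, 3]
brute = sum(a * b for a, b in combinations(x, 2))   # sum over all pairs
closed = (sum(x) ** 2 - sum(v * v for v in x)) // 2  # the identity
print(brute, closed)  # 608 608
```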
Sum of products of pairs in a list
30,039,334
8
2015-05-04T20:25:45Z
30,039,462
7
2015-05-04T20:33:15Z
[ "python", "numpy", "pandas", "itertools" ]
This is the problem I have. Given a list ``` xList = [9, 13, 10, 5, 3] ``` I would like to calculate for sum of each element multiplied by subsequent elements ``` sum([9*13, 9*10, 9*5 , 9*3]) + sum([13*10, 13*5, 13*3]) + sum([10*5, 10*3]) + sum ([5*3]) ``` in this case the answer is **608**. Is there a way to do this perhaps with `itertools` or natively with `numpy`? Below is a function I came up with. It does the job but it is far from ideal as I would like to add other stuff as well. ``` def SumProduct(xList): ''' compute the sum of the product of a list e.g. xList = [9, 13, 10, 5, 3] the result will be sum([9*13, 9*10, 9*5 , 9*3]) + sum([13*10, 13*5, 13*3]) + sum([10*5, 10*3]) + sum ([5*3]) ''' xSum = 0 for xnr, x in enumerate(xList): #print xnr, x xList_1 = np.array(xList[xnr+1:]) #print x * xList_1 xSum = xSum + sum(x * xList_1) return xSum ``` Any help appreciated. N.B: In case you wonder, I am trying to implement [Krippendorf's alpha](http://repository.upenn.edu/cgi/viewcontent.cgi?article=1043&context=asc_papers) with **pandas**
You actually want combinations, not product: ``` from itertools import combinations print(sum(a*b for a,b in combinations(xList,2))) 608 ``` Even including the cost of creating a numpy array from a python list, [@user2357112](http://stackoverflow.com/a/30039414/2141635)'s answer wipes the floor with the rest of us. ``` In [38]: timeit sum(a*b for a,b in combinations(xlist,2)) 10000 loops, best of 3: 89.7 µs per loop In [40]: timeit sum(mul(*t) for t in itertools.combinations(xlist, 2)) 1000 loops, best of 3: 165 µs per loop In [41]: %%timeit x = array(arr) (x.sum()**2 - (x**2).sum()) / 2 ....: 100000 loops, best of 3: 10.9 µs per loop In [42]: timeit np.triu(np.outer(x, x), k=1).sum() 10000 loops, best of 3: 48.1 µs per loop In [59]: %%timeit ....: xarr = np.array(xList) ....: N = xarr.size ....: range1 = np.arange(N) ....: mask = range1[:,None] < range1 ....: out = ((mask*xarr)*xarr[:,None]).sum() 10000 loops, best of 3: 30.4 µs per loop ``` All the lists/arrays had 50 elements. Stealing the logic from user2357112 and using it on a normal list with plain-Python `sum` is pretty darn efficient: ``` In [63]: timeit result = (sum(xList)**2 - sum(x ** 2 for x in xList)) / 2 100000 loops, best of 3: 4.63 µs per loop ``` But for a large array the numpy solution is still significantly faster.
Why are nested dictionaries OK but nested sets forbidden?
30,040,864
3
2015-05-04T22:08:54Z
30,040,995
8
2015-05-04T22:19:11Z
[ "python", "dictionary", "set", "hashtable" ]
Why are nested dictionaries allowed in Python, while nested sets are disallowed? One can nest dictionaries and change the sub-dictionaries on the fly, as the following demonstrates: ``` In [1]: dict1 = {'x':{'a':1, 'b':2}, 'y':{'c':3}} In [2]: dict2 = {'x':{'a':1, 'b':2}, 'y':{'c':3}} In [3]: dict1 == dict2 Out[3]: True In [4]: dict2['x'] = {'d':4} In [5]: dict1 == dict2 Out[5]: False ``` On the other hand, if you try to put a set within a set you get an error saying that it can't be done since sets are an *"unhashable type"*: ``` In [6]: set([set(['a'])]) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-8-8e7d044eec15> in <module>() ----> 1 set([set(['a'])]) TypeError: unhashable type: 'set' ``` But this doesn't make sense since dictionaries are unhashable too, ``` In [7]: hash({'a':1}) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-11-44def9788331> in <module>() ----> 1 hash({'a':1}) TypeError: unhashable type: 'dict' ``` Of course, one can put a frozenset within a set, ``` In [8]: set([frozenset(['a'])]) Out[8]: {frozenset({'a'})} ``` but then you can't later change the internals of the nested frozenset like you could for the nested dictionaries. According to what I've found, [`set`](http://stackoverflow.com/a/3949350/484944) and [`dict`](http://stackoverflow.com/a/9022835/484944) are both implemented with hash tables under the hood, so I don't see why it would be allowed in one case but not the other.
The problem is that your examples aren't alike. There is no restriction on the **values** of a dictionary, only on the **keys**. Here is a more accurate comparison: ``` >>> d = {{'a': 'b'}: 'c'} Traceback (most recent call last): File "<pyshell#8>", line 1, in <module> d = {{'a': 'b'}: 'c'} TypeError: unhashable type: 'dict' >>> s = {{'a': 'b'}, 'c'} Traceback (most recent call last): File "<pyshell#9>", line 1, in <module> s = {{'a': 'b'}, 'c'} TypeError: unhashable type: 'dict' ``` Note that now you get the same behaviour, as expected; you can think of a `set` as a key-only `dict`. You cannot use a mutable/unhashable object as the key in a dictionary or the element in a set because if you changed it in-place it would become unrecoverable (Python matches on `__eq__` and `__hash__`, which is why these methods must be implemented to use a custom class as a key/element). For more on this, see e.g. [Why should dictionary keys be immutable?](http://stackoverflow.com/q/15439134/3001761) (different language, but same principle - it's all hash tables). You could also consider watching [*The Mighty Dictionary*](http://pyvideo.org/video/276/the-mighty-dictionary-55) if you're interested in the topic.
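Conversely, the immutable counterparts are hashable, so the analogous nesting does work — a quick sketch:

```python
# frozenset is the immutable (hashable) counterpart of set, so it can
# serve both as a dictionary key and as a set element.
fs = frozenset({'a', 'b'})

d = {fs: 'c'}   # frozenset as a dict key
s = {fs, 'c'}   # frozenset nested inside a set

# Lookup works by value, not identity: an equal frozenset finds the entry.
print(d[frozenset({'b', 'a'})])  # c
```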
Python self and super in multiple inheritance
30,041,679
12
2015-05-04T23:23:29Z
30,041,888
10
2015-05-04T23:43:53Z
[ "python", "inheritance", "self", "super", "method-resolution-order" ]
In Raymond Hettinger's talk "[Super considered super speak](https://www.youtube.com/watch?v=EiOglTERPEo)" at PyCon 2015 he explains the advantages of using `super` in Python in multiple inheritance context. This is one of the examples that Raymond used during his talk: ``` class DoughFactory(object): def get_dough(self): return 'insecticide treated wheat dough' class Pizza(DoughFactory): def order_pizza(self, *toppings): print('Getting dough') dough = super().get_dough() print('Making pie with %s' % dough) for topping in toppings: print('Adding: %s' % topping) class OrganicDoughFactory(DoughFactory): def get_dough(self): return 'pure untreated wheat dough' class OrganicPizza(Pizza, OrganicDoughFactory): pass if __name__ == '__main__': OrganicPizza().order_pizza('Sausage', 'Mushroom') ``` Somebody in the audience [asked](https://www.youtube.com/watch?v=EiOglTERPEo&t=41m10s) Raymond about the difference of using `self.get_dough()` instead `super().get_dough()`. I didn't understand very well the brief answer of Raymond but I coded the two implementations of this example to see the differences. The output are the same for both cases: ``` Getting dough Making pie with pure untreated wheat dough Adding: Sausage Adding: Mushroom ``` If you alter the class order from `OrganicPizza(Pizza, OrganicDoughFactory)` to `OrganicPizza(OrganicDoughFactory, Pizza)` using `self.get_dough()`, you will get this result: `Making pie with pure untreated wheat dough` However if you use `super().get_dough()` this is the output: `Making pie with insecticide treated wheat dough` I understand the `super()` behavior as Raymond explained. But what is the expected behavior of `self` in multiple inheritance scenario?
Just to clarify, there are four cases, based on changing the second line in `Pizza.order_pizza` and the definition of `OrganicPizza`: 1. `super()`, `(Pizza, OrganicDoughFactory)` *(original)*: `'Making pie with pure untreated wheat dough'` 2. `self`, `(Pizza, OrganicDoughFactory)`: `'Making pie with pure untreated wheat dough'` 3. `super()`, `(OrganicDoughFactory, Pizza)`: `'Making pie with insecticide treated wheat dough'` 4. `self`, `(OrganicDoughFactory, Pizza)`: `'Making pie with pure untreated wheat dough'` Case 3 is the one that surprised you; if we switch the order of inheritance but still use `super`, we apparently end up calling the original `DoughFactory.get_dough`. --- What `super` really does is ask *"which is next in the MRO (method resolution order)?"* So what does `OrganicPizza.mro()` look like? * `(Pizza, OrganicDoughFactory)`: `[<class '__main__.OrganicPizza'>, <class '__main__.Pizza'>, <class '__main__.OrganicDoughFactory'>, <class '__main__.DoughFactory'>, <class 'object'>]` * `(OrganicDoughFactory, Pizza)`: `[<class '__main__.OrganicPizza'>, <class '__main__.OrganicDoughFactory'>, <class '__main__.Pizza'>, <class '__main__.DoughFactory'>, <class 'object'>]` The crucial question here is: which comes *after* `Pizza`? As we're calling `super` from inside `Pizza`, that is where Python will go to find `get_dough`\*. For 1. and 2. it's `OrganicDoughFactory`, so we get the pure, untreated dough, but for 3. and 4. it's the original, insecticide-treated `DoughFactory`. --- Why is `self` different, then? `self` is always *the instance*, so Python goes looking for `get_dough` from the start of the MRO. In both cases, as shown above, `OrganicDoughFactory` is earlier in the list than `DoughFactory`, which is why the `self` versions *always* get untreated dough; `self.get_dough` always resolves to `OrganicDoughFactory.get_dough(self)`. 
--- \* *I think that this is actually clearer in the two-argument form of `super` used in Python 2.x, which would be `super(Pizza, self).get_dough()`; the first argument is the class to skip (i.e. Python looks in the rest of the MRO after that class).*
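The same mechanics can be seen in a stripped-down sketch (toy classes `A`–`D`, invented here, not from the talk):

```python
class A:
    def who(self):
        return "A"

class B(A):
    # super() here means "whatever follows B in the *instance's* MRO"
    def via_super(self):
        return super().who()

    # self always restarts the lookup from the front of the MRO
    def via_self(self):
        return self.who()

class C(A):
    def who(self):
        return "C"

class D(C, B):
    pass

d = D()
print([cls.__name__ for cls in D.mro()])  # ['D', 'C', 'B', 'A', 'object']
print(d.via_super())  # A -- after B in the MRO comes A
print(d.via_self())   # C -- from the front of the MRO, C.who wins
```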
Python: sorting a list of tuples on alpha case-insensitive order
30,043,709
3
2015-05-05T03:21:09Z
30,043,740
8
2015-05-05T03:25:11Z
[ "python", "list", "sorting" ]
I have a list of tuples ("twoples") ``` [('aaa',2), ('BBB',7), ('ccc',0)] ``` I need to print it in that order, but ``` >>> sorted([('aaa',2), ('BBB',7), ('ccc',0)]) ``` gives ``` [('BBB', 7), ('aaa', 2), ('ccc', 0)] list.sort(key=str.tolower) ``` doesn't work (obviously), because ``` AttributeError: type object 'str' has no attribute 'tolower' ``` I don't want to change the strings in the list. Another answer gave ``` list.sort(key=lambda (a, b): (a.lower(), b)) ``` but that must be a Python 2 thing, because ``` SyntaxError: invalid syntax ``` ... at the first ( itemgetter() doesn't help, because there's only one 'key' allowed
You're right that this is a Python 2 thing, but the fix is pretty simple: ``` list.sort(key=lambda a: (a[0].lower(), a[1])) ``` That doesn't really seem any less clear, because the names `a` and `b` don't have any more inherent meaning than `a[0]` and `a[1]`. (If they were, say, `name` and `score` or something, that might be a different story…) --- Python 2 allowed you to unpack function arguments into tuples. This worked (and was sometimes handy) in some simple cases, but had a lot of problems. See [PEP 3113](https://www.python.org/dev/peps/pep-3113/) for why it was removed. The canonical way to deal with this is to just split the value inside the function, which doesn't quite work in a `lambda`. But is there a reason you can't just define the function out of line? ``` def twoplekey(ab): a, b = ab return a.lower(), b list.sort(key=twoplekey) ``` --- As a side note, you really shouldn't call your list `list`; that hides the `list` type, so you can't use it anymore (e.g., if you want to convert a tuple to a list by writing `list(tup)`, you'll be trying to call your list, and get a baffling error).
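Applied to the list from the question (bound to `pairs` here, to avoid shadowing `list`), the index-based key gives the intended case-insensitive order:

```python
pairs = [('aaa', 2), ('BBB', 7), ('ccc', 0)]

# Index into the tuple instead of unpacking it in the lambda's signature.
pairs.sort(key=lambda p: (p[0].lower(), p[1]))
print(pairs)  # [('aaa', 2), ('BBB', 7), ('ccc', 0)]

# The default sort, by contrast, puts uppercase first (ASCII order).
print(sorted([('aaa', 2), ('BBB', 7), ('ccc', 0)]))
# [('BBB', 7), ('aaa', 2), ('ccc', 0)]
```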
Bash pipe to python
30,043,857
5
2015-05-05T03:38:01Z
30,043,896
7
2015-05-05T03:43:46Z
[ "python", "linux", "bash", "shell" ]
I need to absorb output of a bash command via pipe in real time. E.g ``` for i in $(seq 1 4); do echo $i; sleep 1; done | ./script.py ``` Where script.py has this ``` for line in sys.stdin.readlines(): print line ``` I'm expecting the sequence to be printed as it becomes available, but the python script is waiting for bash script to end before proceeding. I looked at [this](http://stackoverflow.com/questions/2082850/real-time-subprocess-popen-via-stdout-and-pipe) related answer, but that didn't solve my problem. How do I go about achieving this in python?
The first problem is that [`readlines`](https://docs.python.org/2.7/library/stdtypes.html#file.readlines) reads all the lines into a list. It can't do that until all of the lines are present, which won't be until `stdin` has reached EOF. But you don't actually need a *list* of the lines, just *some iterable* of the lines. And a file, like `sys.stdin`, already *is* such an iterable. And it's a lazy one, that generates one line at a time as soon as they're available, instead of waiting to generate them all at once. So: ``` for line in sys.stdin: print line ``` Whenever you find yourself reaching for `readlines`, ask yourself whether you really need it. The answer will *always* be no. (Well, except when you want to call it with an argument, or on some defective not-quite-file-like object.) See [Readlines Considered Silly](http://stupidpythonideas.blogspot.com/2013/06/readlines-considered-silly.html) for more. --- But meanwhile, there's a *second* problem. It's not that Python is buffering its `stdin`, or that the other process is buffering its `stdout`, but that the file-object iterator itself is doing internal buffering, which may (depending on your platform—but on most POSIX platforms, it usually will) prevent you from getting to the first line until EOF, or at least until a lot of lines have been read. This is a known problem with Python 2.x, which has been fixed in 3.x,\* but that doesn't help you unless you're willing to upgrade. The solution is mentioned in the Command line and environment docs, and in the manpage on most systems, but buried in the middle of the [`-u` flag documentation](https://docs.python.org/2/using/cmdline.html#cmdoption-u): > Note that there is internal buffering in xreadlines(), readlines() and file-object iterators ("for line in sys.stdin") which is not influenced by this option. To work around this, you will want to use "sys.stdin.readline()" inside a "while 1:" loop. 
In other words: ``` while True: line = sys.stdin.readline() if not line: break print line ``` Or: ``` for line in iter(sys.stdin.readline, ''): print line ``` --- For a different problem, in [this answer](http://stackoverflow.com/questions/3670323/setting-smaller-buffer-size-for-sys-stdin/3670470#3670470), Alex Martelli points out that you can always just ignore `sys.stdin` and re-`fdopen` the file descriptor. Which means that you get a wrapper around a POSIX fd instead of a C stdio handle. But that's neither necessary nor sufficient for *this* question, because the problem isn't with the C stdio buffering, but the way the `file.__iter__` buffering interacts with it. --- \* Python 3.x doesn't use the C stdio library's buffering anymore; it does everything itself, in the types in the [`io`](https://docs.python.org/3/library/io.html) module, which means the iterator can just share the same buffer the file object itself is using. While `io` is available on 2.x as well, it's not the default thing you get for `open`—or for the stdio file handles, which is why it doesn't help here.
Truth value of numpy array with one falsey element seems to depend on dtype
30,043,901
21
2015-05-05T03:44:32Z
30,044,111
8
2015-05-05T04:09:02Z
[ "python", "numpy" ]
``` import numpy as np a = np.array([0]) b = np.array([None]) c = np.array(['']) d = np.array([' ']) ``` Why should we have this inconsistency: ``` >>> bool(a) False >>> bool(b) False >>> bool(c) True >>> bool(d) False ```
I'm pretty sure the answer is, as explained in [Scalars](http://docs.scipy.org/doc/numpy/reference/arrays.scalars.html), that: > Array scalars have the same attributes and methods as ndarrays. [1] This allows one to treat items of an array partly on the same footing as arrays, smoothing out rough edges that result when mixing scalar and array operations. So, if it's acceptable to call `bool` on a scalar, it must be acceptable to call `bool` on an array of shape `(1,)`, because they are, as far as possible, the same thing. And, while it isn't directly said anywhere in the docs that I know of, it's pretty obvious from the design that NumPy's scalars are supposed to act like native Python objects. So, that explains why `np.array([0])` is falsey rather than truthy, which is what you were initially surprised about. --- So, that explains the basics. But what about the specifics of case `c`? First, note that your array `np.array([''])` is not an array of one Python `object`, but an array of one NumPy `<U1` null-terminated character string of length 1. Fixed-length-string values don't have the same truthiness rule as Python strings—and they really couldn't; for a fixed-length-string type, "false if empty" doesn't make any sense, because they're *never* empty. You could argue about whether NumPy should have been designed that way or not, but it clearly does follow that rule consistently, and I don't think the opposite rule would be any less confusing here, just different. But there seems to be something else weird going on with strings. 
Consider this: ``` >>> np.array(['a', 'b']) != 0 True ``` That's not doing an elementwise comparison of the `<U2` strings to 0 and returning `array([True, True])` (as you'd get from `np.array(['a', 'b'], dtype=object)`), it's doing an array-wide comparison and deciding that no array of strings is equal to 0, which seems odd… I'm not sure whether this deserves a separate answer here or even a whole separate question, but I am pretty sure I'm not going to be the one who writes that answer, because I have no clue what's going on here. :) --- Beyond arrays of shape `(1,)`, arrays of shape `()` are treated the same way, but anything else is a `ValueError`, because otherwise it would be very easy to misuse arrays with `and` and other Python operators that NumPy can't automagically convert into elementwise operations. I personally think being consistent with other arrays would be more useful than being consistent with scalars here—in other words, just raise a `ValueError`. I also think that, if being consistent with scalars *were* important here, it would be better to be consistent with the unboxed Python values. In other words, if `bool(array([v]))` and `bool(array(v))` are going to be allowed at all, they should always return exactly the same thing as `bool(v)`, even if that's not consistent with `np.nonzero`. But I can see the argument the other way.
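The shape rules described above are easy to check directly (a small sketch; the truthiness of a string array's element is version-dependent, so it's deliberately left out here):

```python
import numpy as np

# Shapes () and (1,) defer to the truth value of the single element...
assert bool(np.array(0)) is False
assert bool(np.array([1])) is True

# ...anything with more than one element is ambiguous and raises ValueError.
try:
    bool(np.array([1, 2]))
    raised = False
except ValueError:
    raised = True
print(raised)  # True
```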
Truth value of numpy array with one falsey element seems to depend on dtype
30,043,901
21
2015-05-05T03:44:32Z
30,830,103
7
2015-06-14T13:44:49Z
[ "python", "numpy" ]
``` import numpy as np a = np.array([0]) b = np.array([None]) c = np.array(['']) d = np.array([' ']) ``` Why should we have this inconsistency: ``` >>> bool(a) False >>> bool(b) False >>> bool(c) True >>> bool(d) False ```
For arrays with one element, the array's truth value is determined by the truth value of that element. The main point to make is that `np.array([''])` is *not* an array containing one empty Python string. This array is created to hold strings of exactly one byte each and NumPy pads strings that are too short with the null character. This means that the array is equal to `np.array(['\0'])`. In this regard, NumPy is being consistent with Python which evaluates `bool('\0')` as `True`. In fact, the only strings which are `False` in NumPy arrays are strings which do not contain any non-whitespace characters (`'\0'` is not a whitespace character). Details of this Boolean evaluation are presented below. --- Navigating NumPy's labyrinthine source code is not always easy, but we can find the code governing how values in different datatypes are mapped to Boolean values in the [arraytypes.c.src](https://github.com/numpy/numpy/blob/943ac81b58c7c8afbfadedbdd28ab94e56ad58fa/numpy/core/src/multiarray/arraytypes.c.src) file. This will explain how `bool(a)`, `bool(b)`, `bool(c)` and `bool(d)` are determined. Before we get to the code in that file, we can see that calling `bool()` on a NumPy array invokes the internal [`_array_nonzero()`](https://github.com/numpy/numpy/blob/943ac81b58c7c8afbfadedbdd28ab94e56ad58fa/numpy/core/src/multiarray/number.c#l745) function. If the array is empty, we get `False`. If there are two or more elements we get an error. But if the array has *exactly* one element, we hit the line: ``` return PyArray_DESCR(mp)->f->nonzero(PyArray_DATA(mp), mp); ``` Now, [`PyArray_DESCR`](http://docs.scipy.org/doc/numpy/reference/c-api.types-and-structures.html#c.PyArray_Descr) is a struct holding various properties for the array. `f` is a pointer to another struct [`PyArray_ArrFuncs`](http://docs.scipy.org/doc/numpy/reference/c-api.types-and-structures.html#c.PyArray_ArrFuncs) that holds the array's `nonzero` function. 
In other words, NumPy is going to call upon the array's own special `nonzero` function to check the Boolean value of that one element. Determining whether an element is nonzero or not is obviously going to depend on the datatype of the element. The code implementing the type-specific nonzero functions can be found in the "nonzero" section of the [arraytypes.c.src](https://github.com/numpy/numpy/blob/943ac81b58c7c8afbfadedbdd28ab94e56ad58fa/numpy/core/src/multiarray/arraytypes.c.src#l2209) file. As we'd expect, floats, integers and complex numbers are `False` if they're [equal with zero](https://github.com/numpy/numpy/blob/943ac81b58c7c8afbfadedbdd28ab94e56ad58fa/numpy/core/src/multiarray/arraytypes.c.src#l2213). This explains `bool(a)`. In the case of object arrays, `None` is similarly going to be evaluated as `False` because NumPy just [calls the `PyObject_IsTrue` function](https://github.com/numpy/numpy/blob/943ac81b58c7c8afbfadedbdd28ab94e56ad58fa/numpy/core/src/multiarray/arraytypes.c.src#l2349). This explains `bool(b)`. To understand the results of `bool(c)` and `bool(d)`, we see that the `nonzero` function for string type arrays is mapped to the [`STRING_nonzero`](https://github.com/numpy/numpy/blob/943ac81b58c7c8afbfadedbdd28ab94e56ad58fa/numpy/core/src/multiarray/arraytypes.c.src#l2295) function: ``` static npy_bool STRING_nonzero (char *ip, PyArrayObject *ap) { int len = PyArray_DESCR(ap)->elsize; // size of dtype (not string length) int i; npy_bool nonz = NPY_FALSE; for (i = 0; i < len; i++) { if (!Py_STRING_ISSPACE(*ip)) { // if it isn't whitespace, it's True nonz = NPY_TRUE; break; } ip++; } return nonz; } ``` (The unicode case is more or less the same idea.) 
So in arrays with a string or unicode datatype, a string is only `False` if it contains only whitespace characters: ``` >>> bool(np.array([' '])) False ``` In the case of array `c` in the question, there is a really a null character `\0` padding the seemingly-empty string: ``` >>> np.array(['']) == np.array(['\0']) array([ True], dtype=bool) ``` The `STRING_nonzero` function sees this non-whitespace character and so `bool(c)` is `True`. As noted at the start of this answer, this is consistent with Python's evaluation of strings containing a single null character: `bool('\0')` is also `True`. --- **Update**: [Wim has fixed the behaviour](http://stackoverflow.com/a/30865887/3923281) detailed above in NumPy's master branch by making strings which contain only null characters, or a mix of only whitespace and null characters, evaluate to `False`. This means that NumPy 1.10+ will see that `bool(np.array(['']))` is `False`, which is much more in line with Python's treatment of "empty" strings.
Elegant way to create empty pandas DataFrame with NaN of type float
30,053,329
12
2015-05-05T12:44:09Z
30,053,435
7
2015-05-05T12:48:36Z
[ "python", "numpy", "pandas", "nan" ]
I want to create a Pandas DataFrame filled with NaNs. During my research I found [an answer](http://stackoverflow.com/questions/13784192/creating-an-empty-pandas-dataframe-then-filling-it): ``` import pandas as pd df = pd.DataFrame(index=range(0,4),columns=['A']) ``` This code results in a DataFrame filled with NaNs of type "object". So they cannot be used later on for example with the interpolate() method. Therefore, I created the DataFrame with this complicated code (inspired by [this answer](http://stackoverflow.com/questions/1704823/initializing-numpy-matrix-to-something-other-than-zero-or-one)): ``` import pandas as pd import numpy as np dummyarray = np.empty((4,1)) dummyarray[:] = np.nan df = pd.DataFrame(dummyarray) ``` This results in a DataFrame filled with NaN of type "float", so it can be used later on with interpolate(). Is there a more elegant way to create the same result?
You could specify the dtype directly when constructing the DataFrame: ``` >>> df = pd.DataFrame(index=range(0,4),columns=['A'], dtype='float') >>> df.dtypes A float64 dtype: object ``` Specifying the dtype forces Pandas to try creating the DataFrame with that type, rather than trying to infer it.
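A quick check of the result (using `isna`, available in modern pandas, to confirm the fill values are real float NaNs rather than object-dtype placeholders):

```python
import numpy as np
import pandas as pd

# Passing dtype up front makes the empty column float64 instead of object.
df = pd.DataFrame(index=range(4), columns=['A'], dtype='float')

print(df['A'].dtype)          # float64
print(df['A'].isna().all())   # True -- every cell is a float NaN
```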
Elegant way to create empty pandas DataFrame with NaN of type float
30,053,329
12
2015-05-05T12:44:09Z
30,053,507
13
2015-05-05T12:51:01Z
[ "python", "numpy", "pandas", "nan" ]
I want to create a Pandas DataFrame filled with NaNs. During my research I found [an answer](http://stackoverflow.com/questions/13784192/creating-an-empty-pandas-dataframe-then-filling-it): ``` import pandas as pd df = pd.DataFrame(index=range(0,4),columns=['A']) ``` This code results in a DataFrame filled with NaNs of type "object". So they cannot be used later on for example with the interpolate() method. Therefore, I created the DataFrame with this complicated code (inspired by [this answer](http://stackoverflow.com/questions/1704823/initializing-numpy-matrix-to-something-other-than-zero-or-one)): ``` import pandas as pd import numpy as np dummyarray = np.empty((4,1)) dummyarray[:] = np.nan df = pd.DataFrame(dummyarray) ``` This results in a DataFrame filled with NaN of type "float", so it can be used later on with interpolate(). Is there a more elegant way to create the same result?
This one-liner seems to work as well: ``` >>> df = pd.DataFrame(np.nan, index=[0,1,2,3], columns=['A']) >>> df.dtypes A float64 dtype: object ```
Complex numbers in Cython
30,054,019
35
2015-05-05T13:12:59Z
30,168,885
18
2015-05-11T13:24:05Z
[ "python", "c", "numpy", "cython", "complex-numbers" ]
What is the correct way to work with complex numbers in Cython? I would like to write a pure C loop using a numpy.ndarray of dtype np.complex128. In Cython, the associated C type is defined in `Cython/Includes/numpy/__init__.pxd` as ``` ctypedef double complex complex128_t ``` so it seems this is just a simple C double complex. However, it's easy to obtain strange behaviors. In particular, with these definitions ``` cimport numpy as np import numpy as np np.import_array() cdef extern from "complex.h": pass cdef: np.complex128_t varc128 = 1j np.float64_t varf64 = 1. double complex vardc = 1j double vard = 1. ``` the line ``` varc128 = varc128 * varf64 ``` can be compiled by Cython but gcc can not compiled the C code produced (the error is "testcplx.c:663:25: error: two or more data types in declaration specifiers" and seems to be due to the line `typedef npy_float64 _Complex __pyx_t_npy_float64_complex;`). This error has already been reported (for example [here](http://comments.gmane.org/gmane.comp.python.cython.user/10659)) but I didn't find any good explanation and/or clean solution. Without inclusion of `complex.h`, there is no error (I guess because the `typedef` is then not included). However, there is still a problem since in the html file produced by `cython -a testcplx.pyx`, the line `varc128 = varc128 * varf64` is yellow, meaning that it has not been translated into pure C. The corresponding C code is: ``` __pyx_t_2 = __Pyx_c_prod_npy_float64(__pyx_t_npy_float64_complex_from_parts(__Pyx_CREAL(__pyx_v_8testcplx_varc128), __Pyx_CIMAG(__pyx_v_8testcplx_varc128)), __pyx_t_npy_float64_complex_from_parts(__pyx_v_8testcplx_varf64, 0)); __pyx_v_8testcplx_varc128 = __pyx_t_double_complex_from_parts(__Pyx_CREAL(__pyx_t_2), __Pyx_CIMAG(__pyx_t_2)); ``` and the `__Pyx_CREAL` and `__Pyx_CIMAG` are orange (Python calls). 
Interestingly, the line ``` vardc = vardc * vard ``` does not produce any error and is translated into pure C (just `__pyx_v_8testcplx_vardc = __Pyx_c_prod(__pyx_v_8testcplx_vardc, __pyx_t_double_complex_from_parts(__pyx_v_8testcplx_vard, 0));`), whereas it is very very similar to the first one. I can avoid the error by using intermediate variables (and it translates into pure C): ``` vardc = varc128 vard = varf64 varc128 = vardc * vard ``` or simply by casting (but it does not translate into pure C): ``` vardc = <double complex>varc128 * <double>varf64 ``` So what happens? What is the meaning of the compilation error? Is there a clean way to avoid it? Why does the multiplication of a np.complex128\_t and np.float64\_t seem to involve Python calls? ## Versions Cython version 0.22 (most recent version in Pypi when the question was asked) and GCC 4.9.2. ## Repository I created a tiny repository with the example (`hg clone https://bitbucket.org/paugier/test_cython_complex`) and a tiny Makefile with 3 targets (`make clean`, `make build`, `make html`) so it is easy to test anything.
The simplest way I can find to work around this issue is to simply switch the order of multiplication. If in `testcplx.pyx` I change ``` varc128 = varc128 * varf64 ``` to ``` varc128 = varf64 * varc128 ``` I change from the failing situation to described to one that works correctly. This scenario is useful as it allows a direct diff of the produced C code. ### tl;dr The order of the multiplication changes the translation, meaning that in the failing version the multiplication is attempted via `__pyx_t_npy_float64_complex` types, whereas in the working version it is done via `__pyx_t_double_complex` types. This in turn introduces the typedef line `typedef npy_float64 _Complex __pyx_t_npy_float64_complex;`, which is invalid. I am fairly sure this is a cython bug (Update: [reported here](http://trac.cython.org/ticket/850#ticket)). Although [this is a very old gcc bug report](https://gcc.gnu.org/bugzilla/show_bug.cgi?id=19514), the response explicitly states (in saying that it is not, in fact, a ***gcc*** bug, but user code error): > ``` > typedef R _Complex C; > ``` > > This is not valid code; you can't use \_Complex together with a typedef, > only together with "float", "double" or "long double" in one of the forms > listed in C99. They conclude that `double _Complex` is a valid type specifier whereas `ArbitraryType _Complex` is not. [This more recent report](https://gcc.gnu.org/bugzilla/show_bug.cgi?id=36692) has the same type of response - trying to use `_Complex` on a non fundamental type is outside spec, and the [GCC manual](http://www.gnu.org/software/gnu-c-manual/gnu-c-manual.html#Standard-Complex-Number-Types) indicates that `_Complex` can only be used with `float`, `double` and `long double` So - we can hack the cython generated C code to test that: replace `typedef npy_float64 _Complex __pyx_t_npy_float64_complex;` with `typedef double _Complex __pyx_t_npy_float64_complex;` and verify that it is indeed valid and can make the output code compile. 
--- ### Short trek through the code Swapping the multiplication order only highlights the problem that we are told about by the compiler. In the first case, the offending line is the one that says `typedef npy_float64 _Complex __pyx_t_npy_float64_complex;` - it is trying to assign the type `npy_float64` ***and*** use the keyword `_Complex` to the type `__pyx_t_npy_float64_complex`. `float _Complex` or `double _Complex` is a valid type, whereas `npy_float64 _Complex` is not. To see the effect, you can just delete `npy_float64` from that line, or replace it with `double` or `float` and the code compiles fine. The next question is why that line is produced in the first place... This seems to be produced by [this line](https://github.com/cython/cython/blob/bb4d9c2de71b7c7e1e02d9dfeae53f4547fa9d7d/Cython/Compiler/PyrexTypes.py#L1949) in the Cython source code. Why does the order of the multiplication change the code significantly - such that the type `__pyx_t_npy_float64_complex` is introduced, and introduced in a way that fails? In the failing instance, the code to implement the multiplication turns `varf64` into a `__pyx_t_npy_float64_complex` type, does the multiplication on real and imaginary parts and then reassembles the complex number. In the working version, it does the product directly via the `__pyx_t_double_complex` type using the function `__Pyx_c_prod` I guess this is as simple as the cython code taking its cue for which type to use for the multiplication from the first variable it encounters. In the first case, it sees a float 64, so produces (*invalid*) C code based on that, whereas in the second, it sees the (double) complex128 type and bases its translation on that. This explanation is a little hand-wavy and I hope to return to an analysis of it if time allows... 
A note on this - [here we see](https://github.com/cython/cython/blob/e3f5343f3b648fc0033bdaf0def3268abab7b9ea/Cython/Includes/numpy/__init__.pxd#L331) that the `typedef` for `npy_float64` is `double`, so in this particular case, a fix might consist of modifying [the code here](https://github.com/cython/cython/blob/bb4d9c2de71b7c7e1e02d9dfeae53f4547fa9d7d/Cython/Compiler/Nodes.py#L1011) to use `double _Complex` where `type` is `npy_float64`, but this is getting beyond the scope of a SO answer and doesn't present a general solution. --- # C code diff result ### Working version Creates this C code from the line `varc128 = varf64 * varc128` ``` __pyx_v_8testcplx_varc128 = __Pyx_c_prod(__pyx_t_double_complex_from_parts(__pyx_v_8testcplx_varf64, 0), __pyx_v_8testcplx_varc128); ``` ### Failing version Creates this C code from the line `varc128 = varc128 * varf64` ``` __pyx_t_2 = __Pyx_c_prod_npy_float64(__pyx_t_npy_float64_complex_from_parts(__Pyx_CREAL(__pyx_v_8testcplx_varc128), __Pyx_CIMAG(__pyx_v_8testcplx_varc128)), __pyx_t_npy_float64_complex_from_parts(__pyx_v_8testcplx_varf64, 0)); __pyx_v_8testcplx_varc128 = __pyx_t_double_complex_from_parts(__Pyx_CREAL(__pyx_t_2), __Pyx_CIMAG(__pyx_t_2)); ``` Which necessitates these extra imports - and the offending line is the one that says `typedef npy_float64 _Complex __pyx_t_npy_float64_complex;` - it is trying to assign the type `npy_float64` ***and*** the type `_Complex` to the type `__pyx_t_npy_float64_complex`: ``` #if CYTHON_CCOMPLEX #ifdef __cplusplus typedef ::std::complex< npy_float64 > __pyx_t_npy_float64_complex; #else typedef npy_float64 _Complex __pyx_t_npy_float64_complex; #endif #else typedef struct { npy_float64 real, imag; } __pyx_t_npy_float64_complex; #endif /*... loads of other stuff the same ... 
*/ static CYTHON_INLINE __pyx_t_npy_float64_complex __pyx_t_npy_float64_complex_from_parts(npy_float64, npy_float64); #if CYTHON_CCOMPLEX #define __Pyx_c_eq_npy_float64(a, b) ((a)==(b)) #define __Pyx_c_sum_npy_float64(a, b) ((a)+(b)) #define __Pyx_c_diff_npy_float64(a, b) ((a)-(b)) #define __Pyx_c_prod_npy_float64(a, b) ((a)*(b)) #define __Pyx_c_quot_npy_float64(a, b) ((a)/(b)) #define __Pyx_c_neg_npy_float64(a) (-(a)) #ifdef __cplusplus #define __Pyx_c_is_zero_npy_float64(z) ((z)==(npy_float64)0) #define __Pyx_c_conj_npy_float64(z) (::std::conj(z)) #if 1 #define __Pyx_c_abs_npy_float64(z) (::std::abs(z)) #define __Pyx_c_pow_npy_float64(a, b) (::std::pow(a, b)) #endif #else #define __Pyx_c_is_zero_npy_float64(z) ((z)==0) #define __Pyx_c_conj_npy_float64(z) (conj_npy_float64(z)) #if 1 #define __Pyx_c_abs_npy_float64(z) (cabs_npy_float64(z)) #define __Pyx_c_pow_npy_float64(a, b) (cpow_npy_float64(a, b)) #endif #endif #else static CYTHON_INLINE int __Pyx_c_eq_npy_float64(__pyx_t_npy_float64_complex, __pyx_t_npy_float64_complex); static CYTHON_INLINE __pyx_t_npy_float64_complex __Pyx_c_sum_npy_float64(__pyx_t_npy_float64_complex, __pyx_t_npy_float64_complex); static CYTHON_INLINE __pyx_t_npy_float64_complex __Pyx_c_diff_npy_float64(__pyx_t_npy_float64_complex, __pyx_t_npy_float64_complex); static CYTHON_INLINE __pyx_t_npy_float64_complex __Pyx_c_prod_npy_float64(__pyx_t_npy_float64_complex, __pyx_t_npy_float64_complex); static CYTHON_INLINE __pyx_t_npy_float64_complex __Pyx_c_quot_npy_float64(__pyx_t_npy_float64_complex, __pyx_t_npy_float64_complex); static CYTHON_INLINE __pyx_t_npy_float64_complex __Pyx_c_neg_npy_float64(__pyx_t_npy_float64_complex); static CYTHON_INLINE int __Pyx_c_is_zero_npy_float64(__pyx_t_npy_float64_complex); static CYTHON_INLINE __pyx_t_npy_float64_complex __Pyx_c_conj_npy_float64(__pyx_t_npy_float64_complex); #if 1 static CYTHON_INLINE npy_float64 __Pyx_c_abs_npy_float64(__pyx_t_npy_float64_complex); static CYTHON_INLINE 
__pyx_t_npy_float64_complex __Pyx_c_pow_npy_float64(__pyx_t_npy_float64_complex, __pyx_t_npy_float64_complex); #endif #endif ```
Pythonic way to merge two overlapping lists, preserving order
30,055,830
24
2015-05-05T14:29:53Z
30,056,066
15
2015-05-05T14:40:27Z
[ "python", "list", "python-3.x", "merge" ]
Alright, so I have two lists, as such: * They can and will have overlapping items, for example, `[1, 2, 3, 4, 5]`, `[4, 5, 6, 7]`. * There will *not* be additional items in the overlap, for example, this will *not* happen: `[1, 2, 3, 4, 5]`, `[3.5, 4, 5, 6, 7]` * The lists are not necessarily ordered nor unique. `[9, 1, 1, 8, 7]`, `[8, 6, 7]`. I want to merge the lists such that existing order is preserved, and to merge at the last possible valid position, and such that no data is lost. Additionally, the first list might be huge. My current working code is as such: ``` master = [1,3,9,8,3,4,5] addition = [3,4,5,7,8] def merge(master, addition): n = 1 while n < len(master): if master[-n:] == addition[:n]: return master + addition[n:] n += 1 return master + addition ``` What I would like to know is - is there a more efficient way of doing this? It works, but I'm slightly leery of this, because it can run into large runtimes in my application - I'm merging large lists of strings. EDIT: I'd expect the merge of [1,3,9,8,3,4,5], [3,4,5,7,8] to be: [1,3,9,8,**3,4,5**,7,8]. For clarity, I've highlighted the overlapping portion. [9, 1, 1, 8, 7], [8, 6, 7] should merge to [9, 1, 1, 8, 7, 8, 6, 7]
You can try the following: ``` >>> a = [1, 3, 9, 8, 3, 4, 5] >>> b = [3, 4, 5, 7, 8] >>> matches = (i for i in xrange(len(b), 0, -1) if b[:i] == a[-i:]) >>> i = next(matches, 0) >>> a + b[i:] [1, 3, 9, 8, 3, 4, 5, 7, 8] ``` The idea is we check the first `i` elements of `b` (`b[:i]`) with the last `i` elements of `a` (`a[-i:]`). We take `i` in decreasing order, starting from the length of `b` until 1 (`xrange(len(b), 0, -1)`) because we want to match as much as possible. We take the first such `i` by using `next` and if we don't find it we use the zero value (`next(..., 0)`). From the moment we found the `i`, we add to `a` the elements of `b` from index `i`.
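A Python 3 version of the same idea (my adaptation; `xrange` is gone in Python 3, and I wrap it in a function name of my own choosing):

```python
def merge_overlap(a, b):
    """Append b to a, overlapping the longest suffix of a
    that equals a prefix of b (checked longest-first)."""
    matches = (i for i in range(len(b), 0, -1) if b[:i] == a[-i:])
    i = next(matches, 0)   # 0 means no overlap at all
    return a + b[i:]

print(merge_overlap([1, 3, 9, 8, 3, 4, 5], [3, 4, 5, 7, 8]))  # -> [1, 3, 9, 8, 3, 4, 5, 7, 8]
```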
Pythonic way to merge two overlapping lists, preserving order
30,055,830
24
2015-05-05T14:29:53Z
30,056,970
9
2015-05-05T15:17:07Z
[ "python", "list", "python-3.x", "merge" ]
Alright, so I have two lists, as such: * They can and will have overlapping items, for example, `[1, 2, 3, 4, 5]`, `[4, 5, 6, 7]`. * There will *not* be additional items in the overlap, for example, this will *not* happen: `[1, 2, 3, 4, 5]`, `[3.5, 4, 5, 6, 7]` * The lists are not necessarily ordered nor unique. `[9, 1, 1, 8, 7]`, `[8, 6, 7]`. I want to merge the lists such that existing order is preserved, and to merge at the last possible valid position, and such that no data is lost. Additionally, the first list might be huge. My current working code is as such: ``` master = [1,3,9,8,3,4,5] addition = [3,4,5,7,8] def merge(master, addition): n = 1 while n < len(master): if master[-n:] == addition[:n]: return master + addition[n:] n += 1 return master + addition ``` What I would like to know is - is there a more efficient way of doing this? It works, but I'm slightly leery of this, because it can run into large runtimes in my application - I'm merging large lists of strings. EDIT: I'd expect the merge of [1,3,9,8,3,4,5], [3,4,5,7,8] to be: [1,3,9,8,**3,4,5**,7,8]. For clarity, I've highlighted the overlapping portion. [9, 1, 1, 8, 7], [8, 6, 7] should merge to [9, 1, 1, 8, 7, 8, 6, 7]
There are a couple of easy optimizations that are possible. 1. You don't need to start at master[1], since the longest overlap starts at master[-len(addition)] 2. If you add a call to `list.index` you can avoid creating sub-lists and comparing lists for each index: This approach keeps the code pretty understandable too (and easier to optimize by using cython or pypy): ``` master = [1,3,9,8,3,4,5] addition = [3,4,5,7,8] def merge(master, addition): first = addition[0] n = max(len(master) - len(addition), 1) # (1) while 1: try: n = master.index(first, n) # (2) n is an index into master except ValueError: return master + addition k = len(master) - n # length of the candidate overlap if master[n:] == addition[:k]: return master + addition[k:] n += 1 ```
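A self-contained Python 3 version with a couple of checks (my addition; I compute the candidate overlap length explicitly as `len(master) - n`, keeping the index returned by `list.index` separate from the slice length):

```python
def merge(master, addition):
    first = addition[0]
    # the longest possible overlap starts at index len(master) - len(addition)
    n = max(len(master) - len(addition), 1)
    while True:
        try:
            n = master.index(first, n)   # n is an index into master
        except ValueError:
            return master + addition
        k = len(master) - n              # length of the candidate overlap
        if master[n:] == addition[:k]:
            return master + addition[k:]
        n += 1

out1 = merge([1, 3, 9, 8, 3, 4, 5], [3, 4, 5, 7, 8])
out2 = merge([9, 1, 1, 8, 7], [8, 6, 7])
```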
RSA encryption and decryption in Python
30,056,762
2
2015-05-05T15:08:38Z
30,057,820
10
2015-05-05T15:56:05Z
[ "python", "encryption", "rsa", "pycrypto" ]
I need help using RSA encryption and decryption in Python. I am creating a private/public key pair, encrypting a message with keys and writing message to a file. Then I am reading ciphertext from file and decrypting text using key. I am having trouble with the decryption portion. As you can see in my code below, when I put in `decrypted = key.decrypt(message)` that the program works, yet the decrypted message is encrypted again. It seems like it is not reading the ciphertext from the file. Can anyone help me write this code so decryption reads ciphertext from file and then uses key to decrypt ciphertext? ``` import Crypto from Crypto.PublicKey import RSA from Crypto import Random random_generator = Random.new().read key = RSA.generate(1024, random_generator) #generate public and private keys publickey = key.publickey # pub key export for exchange encrypted = publickey.encrypt('encrypt this message', 32) #message to encrypt is in the above line 'encrypt this message' print 'encrypted message:', encrypted #ciphertext f = open ('encryption.txt', 'w') f.write(str(encrypted)) #write ciphertext to file f.close() #decrypted code below f = open ('encryption.txt', 'r') message = f.read() decrypted = key.decrypt(message) print 'decrypted', decrypted f = open ('encryption.txt', 'w') f.write(str(message)) f.write(str(decrypted)) f.close() ```
In order to make it work you need to convert key from str to tuple before decryption(ast.literal\_eval function). Here is fixed code: ``` import Crypto from Crypto.PublicKey import RSA from Crypto import Random import ast random_generator = Random.new().read key = RSA.generate(1024, random_generator) #generate pub and priv key publickey = key.publickey() # pub key export for exchange encrypted = publickey.encrypt('encrypt this message', 32) #message to encrypt is in the above line 'encrypt this message' print 'encrypted message:', encrypted #ciphertext f = open ('encryption.txt', 'w') f.write(str(encrypted)) #write ciphertext to file f.close() #decrypted code below f = open('encryption.txt', 'r') message = f.read() decrypted = key.decrypt(ast.literal_eval(str(encrypted))) print 'decrypted', decrypted f = open ('encryption.txt', 'w') f.write(str(message)) f.write(str(decrypted)) f.close() ```
python - iterate through list whose elements have variable length
30,060,671
4
2015-05-05T18:32:57Z
30,060,753
7
2015-05-05T18:38:23Z
[ "python", "list", "loops", "iterator" ]
How can I iterate through a list or tuple whose elements as lists with variable length in Python? For example I want to do ``` tup=( [1,2], [2,3,5], [4,3], [4,5,6,7] ) for a,b,c,d in tup: print a,b,c,d ``` and then have the elements of `tup` that are short to be completed with, say, `None`. I have found a workaround with the following code but I believe there must be a better way. ``` tup=( [1,2], [2,3,5], [4,3], [4,5,6,7] ) for a,b,c,d in [ el if len(el)==4 else [ el[i] if i<len(el) else None for i in range(4)] for el in tup ]: print a,b,c,d ``` Where `4` is actually the length of the "longest" element. Is there a better way?
To match your own output you can use [`izip_longest`](https://docs.python.org/2/library/itertools.html#itertools.izip_longest) to fill with `None`'s and transpose again to get back to the original order: ``` from itertools import izip_longest tup=( [1,2], [2,3,5], [4,3], [4,5,6,7] ) for a,b,c,d in zip(*izip_longest(*tup)): print(a,b,c,d) (1, 2, None, None) (2, 3, 5, None) (4, 3, None, None) (4, 5, 6, 7) ``` If an `int`, `float` etc.. would be better then you can specify a different `fillvalue` argument to the `izip_longest` function: ``` tup=( [1,2], [2,3,5], [4,3], [4,5,6,7]) for a,b,c,d in zip(*izip_longest(*tup,fillvalue=0)): print(a,b,c,d) (1, 2, 0, 0) (2, 3, 5, 0) (4, 3, 0, 0) (4, 5, 6, 7) ``` Judging by your `print` statement you are likely to be using python2 but for anyone using python3 as @TimHenigan [commented below](http://stackoverflow.com/questions/30060671/python-iterate-through-element-variable-length/30060753#comment48236641_30060753) it is [`zip_longest`](https://docs.python.org/3/library/itertools.html#itertools.zip_longest). If you don't need a list you can use [itertools.izip](https://docs.python.org/2/library/itertools.html#itertools.izip) which returns an iterator: ``` from itertools import izip_longest, izip tup = ( [1, 2], [2, 3, 5], [4, 3], [4, 5, 6, 7]) for a, b, c, d in izip(*izip_longest(*tup, fillvalue=0)): print(a, b, c, d) ```
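For Python 3, the same transpose-pad-transpose trick works with the renamed `zip_longest` (my adaptation of the answer's snippet):

```python
from itertools import zip_longest  # izip_longest was renamed in Python 3

tup = ([1, 2], [2, 3, 5], [4, 3], [4, 5, 6, 7])

# transpose with padding, then transpose back to the original orientation
padded = list(zip(*zip_longest(*tup, fillvalue=None)))

for a, b, c, d in padded:
    print(a, b, c, d)
```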
Python How to get every first element in 2 Dimensional List
30,062,429
2
2015-05-05T20:14:30Z
30,062,458
7
2015-05-05T20:16:05Z
[ "python", "list", "multidimensional-array", "python-2.x" ]
I have a list like this : ``` a = ((4.0, 4, 4.0), (3.0, 3, 3.6), (3.5, 6, 4.8)) ``` I want an outcome like this (**EVERY** first element in the list) : ``` 4.0, 3.0, 3.5 ``` I tried a[::1][0], but it doesn't work I'm just start learning Python weeks ago. Python version = 2.7.9
You can get the index `[0]` from each element in a list comprehension ``` >>> [i[0] for i in a] [4.0, 3.0, 3.5] ``` Also just to be pedantic, you don't have a `list` of `list`, you have a `tuple` of `tuple`.
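If you happen to need every "column" rather than just the first, transposing with `zip` is a handy companion to the list comprehension (my addition, not from the answer):

```python
a = ((4.0, 4, 4.0), (3.0, 3, 3.6), (3.5, 6, 4.8))

firsts = [row[0] for row in a]   # the answer's list comprehension

# zip(*a) transposes, so columns[0] is all first elements, columns[1] all seconds, ...
columns = list(zip(*a))
```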
Exponential calculation in Python
30,064,526
7
2015-05-05T22:48:00Z
30,064,564
12
2015-05-05T22:51:00Z
[ "python", "optimization", "largenumber", "exponentiation" ]
While experimenting with [Euler 99](https://projecteuler.net/problem=99), I noticed that these operations take different time: ``` >>> 632382**518061 # never finishes.. >>> 632382**518061 > 519432**525806 # finishes in few seconds True ``` I wonder what's the reason for this?
The thing is that Python tries to print the first result. But this number has a zillion digits, and Python doesn't flush the output until a newline is encountered, which happens only after all the digits have been sent to standard output. As @abarnert mentioned, what is many times worse is converting the number to a string for printing; this needs considerable memory allocation and processing power. On the other hand, the second expression just needs to print `True`. You can check this by assigning the first expression to a variable: ``` >>> a = 632382**518061 ``` This way the output of the number is suppressed.
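The comparison itself never needs the decimal digits at all; you can even estimate it with logarithms (my illustration, and it agrees with the `True` the question observed for the direct integer comparison):

```python
import math

# compare exponent * log10(base) instead of materialising ~3-million-digit numbers
lhs = 518061 * math.log10(632382)
rhs = 525806 * math.log10(519432)

bigger = lhs > rhs   # same verdict as 632382**518061 > 519432**525806
```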
Python - What are the major improvement of Pandas over Numpy/Scipy
30,067,051
4
2015-05-06T03:39:14Z
30,067,640
7
2015-05-06T04:38:13Z
[ "python", "numpy", "pandas", "scipy", "data-analysis" ]
I have been using numpy/scipy for data analysis. I recently started to learn Pandas. I have gone through a few tutorials and I am trying to understand what are the major improvement of Pandas over Numpy/Scipy. It seems to me that the key idea of Pandas is to wrap up different numpy arrays in a Data Frame, with some utility functions around it. Is there something **revolutionary** about Pandas that I just stupidly missed?
Pandas is not particularly *revolutionary* and does use the NumPy and SciPy ecosystem to accomplish its goals along with some key Cython code. It can be seen as a simpler API to the functionality with the addition of key utilities like joins and simpler group-by capability that are particularly useful for people with Table-like data or time-series. But, while not revolutionary, Pandas does have key benefits. For a while I had also perceived Pandas as just utilities on top of NumPy for those who liked the DataFrame interface. However, I now see Pandas as providing these key features (this is not comprehensive): 1. Array of Structures (independent-storage of disparate types instead of the contiguous storage of structured arrays in NumPy) --- this will allow faster processing in many cases. 2. Simpler interfaces to common operations (file-loading, plotting, selection, and joining / aligning data) make it easy to do a lot of work in little code. 3. Index arrays which mean that operations are always *aligned* instead of having to keep track of alignment yourself. 4. Split-Apply-Combine is a powerful way of thinking about and implementing data-processing. However, there are downsides to Pandas: 1. Pandas is basically a user-interface library and not particularly suited for writing library code. The "automatic" features can lull you into repeatedly using them even when you don't need to and slowing down code that gets called over and over again. 2. Pandas typically takes up more memory as it is generous with the creation of object arrays to solve otherwise sticky problems of things like string handling. 3. If your use-case is outside the realm of what Pandas was designed to do, it gets clunky quickly. But, within the realms of what it was designed to do, Pandas is powerful and easy to use for quick data analysis.
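To make point 4 concrete, here is a pure-Python sketch of what split-apply-combine means, roughly what a one-line pandas `groupby(...).mean()` expresses (the data is made up for illustration):

```python
from collections import defaultdict

# toy records: (city, temperature)
records = [("paris", 10), ("oslo", 3), ("paris", 14), ("oslo", 5)]

# split: bucket rows by key
groups = defaultdict(list)
for city, temp in records:
    groups[city].append(temp)

# apply + combine: reduce each bucket to a single value
means = {city: sum(temps) / len(temps) for city, temps in groups.items()}
```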
How to store indices in a list
30,067,470
7
2015-05-06T04:20:04Z
30,067,595
14
2015-05-06T04:32:41Z
[ "python", "python-3.x" ]
I want to find certain segments of a string and store them, however, I will need to store a large number of these strings and I was thinking that it might be more elegant to store them as indices of the master string rather than as a list of strings. I am having trouble retrieving the indices for use. For example: ``` index1 = [0:3, 4:8] #invalid syntax index2 = ['0:3','5:6'] s = 'ABCDEFGHIJKLMN' print(s[index2[0]]) #TypeError string indices must be integers ``` Am I thinking about this the wrong way?
The colon-based slicing syntax is only valid inside the indexing operator, e.g. `x[i:j]`. Instead, you can store `slice` objects in your list, where `slice(x,y,z)` is equivalent to `x:y:z`, e.g. ``` index = [slice(0,3), slice(5,6)] print([s[i] for i in index]) ``` will print: ``` ['ABC', 'F'] ```
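Slices can also carry a step, and `slice.indices(length)` resolves the defaults against a concrete length; a small extension of the answer's example (my addition):

```python
s = 'ABCDEFGHIJKLMN'

# slice(None, None, 2) is the stored form of [::2]
index = [slice(0, 3), slice(5, 6), slice(None, None, 2)]
parts = [s[i] for i in index]

# indices() normalises start/stop/step against a given sequence length
start, stop, step = slice(5, 6).indices(len(s))
```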
Escape character \t behaves differently with space
30,068,184
6
2015-05-06T05:25:00Z
30,068,215
12
2015-05-06T05:26:52Z
[ "python" ]
Why do I see output only if I put space. ``` print "I love you %s" % "\tI'm tabbled in." print "I love you %s" % " \tI'm tabbled in." ``` output ``` I love you I'm tabbled in. I love you I'm tabbled in. ```
Typically, \t (TAB) goes to the *next* [tab stop](http://en.wikipedia.org/wiki/Tab_stop) - it is *not* a synonym for "n spaces". ``` I love you XI'm tabbled in. I love you XXXXI'm tabbled in. 0---1---2---3---4--- ``` The current terminal is configured with a tab stop size of 4, as shown at the bottom. The "X"s mark the positions skipped by the tab. So the first line skips *one* character with the tab (it goes to tab stop #3) and the second line writes a space and then skips *four* characters (to get to tab stop #4).
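You can reproduce this tab-stop behaviour in code with `str.expandtabs`, which pads each tab to the next multiple of the given tab size (my addition; strings slightly simplified from the question):

```python
s1 = "I love you \tI'm tabbed in."
s2 = "I love you  \tI'm tabbed in."

e1 = s1.expandtabs(4)   # tab at column 11 -> padded to column 12 (one space)
e2 = s2.expandtabs(4)   # tab at column 12 -> padded to column 16 (four spaces)
```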
How to find out Chinese or Japanese Character in a String in Python?
30,069,846
11
2015-05-06T07:09:31Z
30,070,664
16
2015-05-06T07:51:14Z
[ "python", "string", "unicode", "utf-8", "character-encoding" ]
Such as: ``` str = 'sdf344asfasf天地方益3権sdfsdf' ``` Add `()` to Chinese and Japanese Characters: ``` strAfterConvert = 'sdfasfasf(天地方益)3(権)sdfsdf' ```
As a start, you can check if the character is in one of the following unicode blocks: * [Unicode Block 'CJK Unified Ideographs'](http://www.fileformat.info/info/unicode/block/cjk_unified_ideographs/index.htm) - U+4E00 to U+9FFF * [Unicode Block 'CJK Unified Ideographs Extension A'](http://www.fileformat.info/info/unicode/block/cjk_unified_ideographs_extension_a/index.htm) - U+3400 to U+4DBF * [Unicode Block 'CJK Unified Ideographs Extension B'](http://www.fileformat.info/info/unicode/block/cjk_unified_ideographs_extension_b/index.htm) - U+20000 to U+2A6DF * [Unicode Block 'CJK Unified Ideographs Extension C'](http://www.fileformat.info/info/unicode/block/cjk_unified_ideographs_extension_c/index.htm) - U+2A700 to U+2B73F * [Unicode Block 'CJK Unified Ideographs Extension D'](http://www.fileformat.info/info/unicode/block/cjk_unified_ideographs_extension_d/index.htm) - U+2B740 to U+2B81F --- After that, all you need to do is iterate through the string, checking if the char is Chinese, Japanese or Korean (CJK) and append accordingly: ``` # -*- coding:utf-8 -*- ranges = [ {"from": ord(u"\u3300"), "to": ord(u"\u33ff")}, # compatibility ideographs {"from": ord(u"\ufe30"), "to": ord(u"\ufe4f")}, # compatibility ideographs {"from": ord(u"\uf900"), "to": ord(u"\ufaff")}, # compatibility ideographs {"from": ord(u"\U0002F800"), "to": ord(u"\U0002fa1f")}, # compatibility ideographs {"from": ord(u"\u30a0"), "to": ord(u"\u30ff")}, # Japanese Kana {"from": ord(u"\u2e80"), "to": ord(u"\u2eff")}, # cjk radicals supplement {"from": ord(u"\u4e00"), "to": ord(u"\u9fff")}, {"from": ord(u"\u3400"), "to": ord(u"\u4dbf")}, {"from": ord(u"\U00020000"), "to": ord(u"\U0002a6df")}, {"from": ord(u"\U0002a700"), "to": ord(u"\U0002b73f")}, {"from": ord(u"\U0002b740"), "to": ord(u"\U0002b81f")}, {"from": ord(u"\U0002b820"), "to": ord(u"\U0002ceaf")} # included as of Unicode 8.0 ] def is_cjk(char): return any([range["from"] <= ord(char) <= range["to"] for range in ranges]) def 
cjk_substrings(string): i = 0 while i<len(string): if is_cjk(string[i]): start = i while is_cjk(string[i]): i += 1 yield string[start:i] i += 1 string = "sdf344asfasf天地方益3権sdfsdf".decode("utf-8") for sub in cjk_substrings(string): string = string.replace(sub, "(" + sub + ")") print string ``` The above prints ``` sdf344asfasf(天地方益)3(権)sdfsdf ``` To be future-proof, you might want to keep a lookout for CJK Unified Ideographs Extension E. It will [ship with Unicode 8.0](http://unicode.org/versions/Unicode8.0.0/), which is [scheduled for release in June 2015](http://blog.unicode.org/2015/03/unicode-80-beta-review.html). I've added it to the ranges, but you shouldn't include it until Unicode 8.0 is released. **[EDIT]** Added [CJK compatibility ideographs](http://en.wikipedia.org/wiki/CJK_Unified_Ideographs#CJK_Compatibility_Ideographs), [Japanese Kana](http://en.wikipedia.org/wiki/Kana) and [CJK radicals](http://www.unicode.org/charts/PDF/U2E80.pdf).
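If only BMP characters matter, a regular expression over a reduced set of blocks keeps things compact in Python 3 (a sketch of my own: it covers only the CJK Unified Ideographs, extension A, and Japanese kana, far fewer blocks than the answer's `ranges` table):

```python
import re

# character class: hiragana/katakana, CJK ext. A, CJK Unified Ideographs
CJK = re.compile('([\u3040-\u30ff\u3400-\u4dbf\u4e00-\u9fff]+)')

def parenthesize_cjk(s):
    # wrap each maximal run of CJK characters in parentheses
    return CJK.sub(r'(\1)', s)

result = parenthesize_cjk('sdf344asfasf天地方益3権sdfsdf')
```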
list() takes at most 1 argument (3 given)
30,070,358
7
2015-05-06T07:35:54Z
30,070,423
23
2015-05-06T07:39:20Z
[ "python" ]
I want to get the vector like: `v:[1.0, 2.0, 3.0]` Here is my code: ``` class VECTOR(list) : def _init_ (self,x=0.0,y=0.0,z=0.0,vec=[]) : list._init_(self,[float(x),float(y),float(z)]) if vec : for i in [0,1,2] : self[i] = vec[i] ``` But when I typed: `a = VECTOR(1,2,3)` it went wrong like this: > TypeError: list() takes at most 1 argument (3 given) How can I dissolve it?
The problem is that you've misspelled the name of the constructor. Replace `_init_` with `__init__`. Here's the fixed code: ``` class VECTOR(list) : def __init__ (self,x=0.0,y=0.0,z=0.0,vec=[]) : list.__init__(self,[float(x),float(y),float(z)]) if vec : for i in [0,1,2] : self[i] = vec[i] a = VECTOR(1,2,3) print(a) ``` And the demonstration that it works: ``` % python test.py [1.0, 2.0, 3.0] ``` I'd also like to give you a few additional comments: * you should fix the coding style according to [PEP8](https://www.python.org/dev/peps/pep-0008/) (that's a document every Python developer should read entirely); * you can probably do something more Pythonic (thanks Benjamin); * inheritance is not the only way to do that, you can also use an attribute to store the list and define the relevant methods (thanks Veedrac); * you could also use `super` (see paddyg's answer); **edit note:** I've added to this solution the relevant advises found in the comments.
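To illustrate point 3 (composition instead of inheritance), one possible sketch; the class name, attribute and defaults here are my own, not from the answer:

```python
class Vector:
    """Hold the list as an attribute instead of subclassing list."""

    def __init__(self, x=0.0, y=0.0, z=0.0, vec=None):
        self._data = [float(v) for v in (vec if vec else (x, y, z))]

    def __getitem__(self, i):
        return self._data[i]

    def __len__(self):
        return len(self._data)

    def __repr__(self):
        return repr(self._data)

a = Vector(1, 2, 3)
b = Vector(vec=[4, 5, 6])
```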
How to get current time in python and break up into year, month, day, hour, minute?
30,071,886
24
2015-05-06T08:51:46Z
30,071,999
43
2015-05-06T08:56:35Z
[ "python", "python-2.7", "datetime", "time" ]
I would like to get the current time in Python and assign them into variables like `year`, `month`, `day`, `hour`, `minute`. How can this be done in Python 2.7?
The [`datetime`](https://docs.python.org/2/library/datetime.html#datetime.datetime.now) module is your friend: ``` import datetime now = datetime.datetime.now() print now.year, now.month, now.day, now.hour, now.minute, now.second # 2015 5 6 8 53 40 ``` You don't need separate variables, the attributes on the returned `datetime` object have all you need.
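Unpacking those attributes into variables looks like this (I use a fixed datetime so the values are reproducible; substitute `datetime.datetime.now()` for the real current time):

```python
import datetime

now = datetime.datetime(2015, 5, 6, 8, 53, 40)  # fixed example value

year, month, day, hour, minute = now.year, now.month, now.day, now.hour, now.minute

# timetuple() exposes the same fields positionally
y, mo, d, h, mi = now.timetuple()[:5]
```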
how to handle long path name for pep8 compliance?
30,078,851
5
2015-05-06T13:53:19Z
30,078,906
7
2015-05-06T13:55:19Z
[ "python", "pep8" ]
How would I handle long path name like below for pep8 compliance? Is 79 characters per line a must even if it becomes somewhat unreadable? ``` def setUp(self): self.patcher1 = patch('projectname.common.credential.CredentialCache.mymethodname') ```
There are multiple ways to do this: 1. Use a variable to store this ``` def setUp(self): path = 'projectname.common.credential.CredentialCache.mymethodname' self.patcher1 = patch(path) ``` 2. String concatenation: An assignment like `v = ("a" "b" "c")` gets converted into `v = "abc"`: ``` def setUp(self): self.patcher1 = patch( "projectname.common.credential." "CredentialCache.mymethodname") ``` 3. Tell pep8 that we don't use 80-column terminals anymore with --max-line-length=100 (or some sufficiently reasonable value). (Hat Tip @chepner below :) )
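Option 2 works because adjacent string literals are concatenated at compile time; a quick demonstration (my addition):

```python
# the two literals fuse into one unbroken dotted path, no "+" needed
path = ("projectname.common.credential."
        "CredentialCache.mymethodname")
```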
Where does cython pyximport compile?
30,079,858
3
2015-05-06T14:36:20Z
30,086,081
7
2015-05-06T19:41:13Z
[ "python", "cython" ]
My `cython` / `pyximport` code works very well on a read/write filesystem. But (for testing purposes), I need to try it on a **read only filesystem**. **How to change the cython / pyximport temporary directory** ? (where does it do the job? i.e. the on-the-fly compilation?) How to set this "working directory" to somewhere else than the current directory, for example `/tmp/` (which is not `ro`, but `rw`) ? --- Traceback: ``` !! File "/usr/lib/python2.7/site-packages/Cython/Distutils/build_ext.py", line 301, in cython_sources self.mkpath(os.path.dirname(target)) !! File "/usr/lib/python2.7/distutils/cmd.py", line 352, in mkpath dir_util.mkpath(name, mode, dry_run=self.dry_run) !! File "/usr/lib/python2.7/distutils/dir_util.py", line 76, in mkpath "could not create '%s': %s" % (head, exc.args[-1])) !! ImportError: Building module samplerbox_audio failed: ["DistutilsFileError: could not create '/root/.pyxbld': Read-only file system\n"] ```
From `help(pyximport.install)` > By default, compiled modules will end up in a `.pyxbld` > directory in the user's home directory. Passing a different path > as `build_dir` will override this. so pass `build_dir` as an argument when you call `pyximport.install` to make it use your read/write system.
Is divmod() faster than using the % and // operators?
30,079,879
8
2015-05-06T14:37:00Z
30,079,965
13
2015-05-06T14:40:29Z
[ "python", "performance", "division", "modulus", "divmod" ]
I remember from assembly that integer division instructions yield both the quotient and remainder. So, in python will the built-in divmod() function be better performance-wise than using the % and // operators (suppose of course one needs both the quotient and the remainder)? ``` q, r = divmod(n, d) q, r = (n // d, n % d) ```
To measure is to know: ``` >>> import timeit >>> timeit.timeit('divmod(n, d)', 'n, d = 42, 7') 0.22105097770690918 >>> timeit.timeit('n // d, n % d', 'n, d = 42, 7') 0.14434599876403809 ``` The `divmod()` function is at a disadvantage here because you need to look up the global each time. Binding it to a local (all variables in a `timeit` time trial are local) improves performance a little: ``` >>> timeit.timeit('dm(n, d)', 'n, d = 42, 7; dm = divmod') 0.19841599464416504 ``` but the operators still win because they don't have to preserve the current frame while a function call to `divmod()` is executed: ``` >>> import dis >>> dis.dis(compile('divmod(n, d)', '', 'exec')) 1 0 LOAD_NAME 0 (divmod) 3 LOAD_NAME 1 (n) 6 LOAD_NAME 2 (d) 9 CALL_FUNCTION 2 12 POP_TOP 13 LOAD_CONST 0 (None) 16 RETURN_VALUE >>> dis.dis(compile('(n // d, n % d)', '', 'exec')) 1 0 LOAD_NAME 0 (n) 3 LOAD_NAME 1 (d) 6 BINARY_FLOOR_DIVIDE 7 LOAD_NAME 0 (n) 10 LOAD_NAME 1 (d) 13 BINARY_MODULO 14 BUILD_TUPLE 2 17 POP_TOP 18 LOAD_CONST 0 (None) 21 RETURN_VALUE ``` The `//` and `%` variant uses more opcodes, but the `CALL_FUNCTION` bytecode is a bear, performance wise.
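All three variants measured above compute the same pair, so the choice is purely about speed (my addition, illustrating the local-binding trick from the timings):

```python
n, d = 42, 7

q1, r1 = divmod(n, d)
q2, r2 = n // d, n % d

# binding divmod to a local name skips the repeated global lookup in hot loops
dm = divmod
q3, r3 = dm(n, d)
```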
Procfile for Heroku for Django 1.8
30,081,246
3
2015-05-06T15:31:46Z
30,082,046
7
2015-05-06T16:09:06Z
[ "python", "django", "python-2.7", "heroku" ]
In all the tutorials I found they suggested to create the `Procfile` to deploy to heroku with the following line: ``` web: gunicorn ProjectName.wsgi --log-file - ``` Since I'm using Django 1.8 and in setting.py I have this: ``` WSGI_APPLICATION = 'ProjectName.wsgi.application' ``` I thought this would work: ``` web: gunicorn ProjectName.wsgi.application --log-file - ``` but it didn't, an error was `ImportError: No module named application`
Very close! What you need is: ``` web: gunicorn ProjectName.wsgi:application --log-file - ``` Note the colon, instead of a dot.
Why is "1000000000000000 in range(1000000000000001)" so fast in Python 3?
30,081,275
771
2015-05-06T15:32:43Z
30,081,318
738
2015-05-06T15:33:58Z
[ "python", "performance", "python-3.x", "range", "python-internals" ]
It is my understanding that the `range()` function, which is actually [an object type in Python 3](https://docs.python.org/3/library/stdtypes.html#typesseq-range), generates its contents on the fly, similar to a generator. This being the case, I would have expected the following line to take an inordinate amount of time, because in order to determine whether 1 quadrillion is in the range, a quadrillion values would have to be generated: ``` 1000000000000000 in range(1000000000000001) ``` Furthermore: it seems that no matter how many zeroes I add on, the calculation more or less takes the same amount of time (basically instantaneous). I have also tried things like this, but the calculation is still almost instant: ``` 1000000000000000000000 in range(0,1000000000000000000001,10) # count by tens ``` If I try to implement my own range function, the result is not so nice!! ``` def my_crappy_range(N): i = 0 while i < N: yield i i += 1 return ``` What is the `range()` object doing under the hood that makes it so fast? --- EDIT: This has turned out to be a much more nuanced topic than I anticipated - there seems to be a bit of history behind the optimization of `range()`. [Martijn Pieters' answer](http://stackoverflow.com/a/30081318/2437514) was chosen for its completeness, but also see [abarnert's first answer](http://stackoverflow.com/a/30081894/2437514) for a good discussion of what it means for `range` to be a full-fledged *sequence* in Python 3, and some information/warning regarding potential inconsistency for `__contains__` function optimization across Python implementations. [abarnert's other answer](http://stackoverflow.com/a/30088140/2437514) goes into some more detail and provides links for those interested in the history behind the optimization in Python 3 (and lack of optimization of `xrange` in Python 2). 
Answers [by poke](http://stackoverflow.com/a/30081467/2437514) and [by wim](http://stackoverflow.com/a/30081470/2437514) provide the relevant C source code and explanations for those who are interested.
The Python 3 `range()` object doesn't produce numbers immediately; it is a smart sequence object that produces numbers *on demand*. All it contains is your start, stop and step values, then as you iterate over the object the next integer is calculated each iteration. The object also implements the [`object.__contains__` hook](https://docs.python.org/3/reference/datamodel.html#object.__contains__), and *calculates* if your number is part of its range. Calculating is a O(1) constant time operation. There is never a need to scan through all possible integers in the range. From the [`range()` object documentation](https://docs.python.org/3/library/stdtypes.html#range): > The advantage of the `range` type over a regular `list` or `tuple` is that a range object will always take the same (small) amount of memory, no matter the size of the range it represents (as it only stores the `start`, `stop` and `step` values, calculating individual items and subranges as needed). So at a minimum, your `range()` object would do: ``` class my_range(object): def __init__(self, start, stop=None, step=1): if stop is None: start, stop = 0, start self.start, self.stop, self.step = start, stop, step if step < 0: lo, hi = stop, start else: lo, hi = start, stop self.length = ((hi - lo - 1) // abs(step)) + 1 def __iter__(self): current = self.start if self.step < 0: while current > self.stop: yield current current += self.step else: while current < self.stop: yield current current += self.step def __len__(self): return self.length def __getitem__(self, i): if 0 <= i < self.length: return self.start + i * self.step raise IndexError('Index out of range: {}'.format(i)) def __contains__(self, num): if self.step < 0: if not (self.stop < num <= self.start): return False else: if not (self.start <= num < self.stop): return False return (num - self.start) % self.step == 0 ``` This is still missing several things that a real `range()` supports (such as the `.index()` or `.count()` methods, hashing, 
equality testing, or slicing), but should give you an idea. I also simplified the `__contains__` implementation to only focus on integer tests; if you give a real `range()` object a non-integer value (including subclasses of `int`), a slow scan is initiated to see if there is a match, just as if you use a containment test against a list of all the contained values. This was done to continue to support other numeric types that just happen to support equality testing with integers but are not expected to support integer arithmetic as well. See the original [Python issue](http://bugs.python.org/issue1766304) that implemented the containment test.
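If you want to convince yourself that the boundary-plus-stride arithmetic really matches the builtin behaviour, here is an even smaller, integers-only sketch of just the containment math (the function name `in_range` is made up for illustration; the real logic lives in C, as the other answers show), which you can cross-check against `range` itself:

```python
def in_range(num, start, stop, step=1):
    """O(1) equivalent of `num in range(start, stop, step)` (ints only)."""
    if step < 0:
        if not (stop < num <= start):   # outside the boundaries
            return False
    else:
        if not (start <= num < stop):
            return False
    # num must also land exactly on a stride step
    return (num - start) % step == 0

print(in_range(1000000000000000, 0, 1000000000000001))  # True, instantly
```

Cross-checking `in_range(n, *args) == (n in range(*args))` over a few thousand values, including negative steps, is a quick way to see that the stride test is doing the same work as the builtin.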
Why is "1000000000000000 in range(1000000000000001)" so fast in Python 3?
30,081,275
771
2015-05-06T15:32:43Z
30,081,467
74
2015-05-06T15:41:13Z
[ "python", "performance", "python-3.x", "range", "python-internals" ]
It is my understanding that the `range()` function, which is actually [an object type in Python 3](https://docs.python.org/3/library/stdtypes.html#typesseq-range), generates its contents on the fly, similar to a generator. This being the case, I would have expected the following line to take an inordinate amount of time, because in order to determine whether 1 quadrillion is in the range, a quadrillion values would have to be generated: ``` 1000000000000000 in range(1000000000000001) ``` Furthermore: it seems that no matter how many zeroes I add on, the calculation more or less takes the same amount of time (basically instantaneous). I have also tried things like this, but the calculation is still almost instant: ``` 1000000000000000000000 in range(0,1000000000000000000001,10) # count by tens ``` If I try to implement my own range function, the result is not so nice!! ``` def my_crappy_range(N): i = 0 while i < N: yield i i += 1 return ``` What is the `range()` object doing under the hood that makes it so fast? --- EDIT: This has turned out to be a much more nuanced topic than I anticipated - there seems to be a bit of history behind the optimization of `range()`. [Martijn Pieters' answer](http://stackoverflow.com/a/30081318/2437514) was chosen for its completeness, but also see [abarnert's first answer](http://stackoverflow.com/a/30081894/2437514) for a good discussion of what it means for `range` to be a full-fledged *sequence* in Python 3, and some information/warning regarding potential inconsistency for `__contains__` function optimization across Python implementations. [abarnert's other answer](http://stackoverflow.com/a/30088140/2437514) goes into some more detail and provides links for those interested in the history behind the optimization in Python 3 (and lack of optimization of `xrange` in Python 2). 
Answers [by poke](http://stackoverflow.com/a/30081467/2437514) and [by wim](http://stackoverflow.com/a/30081470/2437514) provide the relevant C source code and explanations for those who are interested.
To add to Martijn’s answer, this is the relevant part of [the source](https://hg.python.org/cpython/file/7f8cd879687b/Objects/rangeobject.c#l415) (in C, as the range object is written in native code): ``` static int range_contains(rangeobject *r, PyObject *ob) { if (PyLong_CheckExact(ob) || PyBool_Check(ob)) return range_contains_long(r, ob); return (int)_PySequence_IterSearch((PyObject*)r, ob, PY_ITERSEARCH_CONTAINS); } ``` So for `PyLong` objects (which is `int` in Python 3), it will use the `range_contains_long` function to determine the result. And that function essentially checks if `ob` is in the specified range (although it looks a bit more complex in C). If it’s not an `int` object, it falls back to iterating until it finds the value (or not). The whole logic could be translated to pseudo-Python like this: ``` def range_contains (rangeObj, obj): if isinstance(obj, int): return range_contains_long(rangeObj, obj) # default logic by iterating return any(obj == x for x in rangeObj) def range_contains_long (r, num): if r.step > 0: # positive step: r.start <= num < r.stop cmp2 = r.start <= num cmp3 = num < r.stop else: # negative step: r.start >= num > r.stop cmp2 = num <= r.start cmp3 = r.stop < num # outside of the range boundaries if not cmp2 or not cmp3: return False # num must be on a valid step inside the boundaries return (num - r.start) % r.step == 0 ```
Why is "1000000000000000 in range(1000000000000001)" so fast in Python 3?
30,081,275
771
2015-05-06T15:32:43Z
30,081,470
180
2015-05-06T15:41:18Z
[ "python", "performance", "python-3.x", "range", "python-internals" ]
It is my understanding that the `range()` function, which is actually [an object type in Python 3](https://docs.python.org/3/library/stdtypes.html#typesseq-range), generates its contents on the fly, similar to a generator. This being the case, I would have expected the following line to take an inordinate amount of time, because in order to determine whether 1 quadrillion is in the range, a quadrillion values would have to be generated: ``` 1000000000000000 in range(1000000000000001) ``` Furthermore: it seems that no matter how many zeroes I add on, the calculation more or less takes the same amount of time (basically instantaneous). I have also tried things like this, but the calculation is still almost instant: ``` 1000000000000000000000 in range(0,1000000000000000000001,10) # count by tens ``` If I try to implement my own range function, the result is not so nice!! ``` def my_crappy_range(N): i = 0 while i < N: yield i i += 1 return ``` What is the `range()` object doing under the hood that makes it so fast? --- EDIT: This has turned out to be a much more nuanced topic than I anticipated - there seems to be a bit of history behind the optimization of `range()`. [Martijn Pieters' answer](http://stackoverflow.com/a/30081318/2437514) was chosen for its completeness, but also see [abarnert's first answer](http://stackoverflow.com/a/30081894/2437514) for a good discussion of what it means for `range` to be a full-fledged *sequence* in Python 3, and some information/warning regarding potential inconsistency for `__contains__` function optimization across Python implementations. [abarnert's other answer](http://stackoverflow.com/a/30088140/2437514) goes into some more detail and provides links for those interested in the history behind the optimization in Python 3 (and lack of optimization of `xrange` in Python 2). 
Answers [by poke](http://stackoverflow.com/a/30081467/2437514) and [by wim](http://stackoverflow.com/a/30081470/2437514) provide the relevant C source code and explanations for those who are interested.
Use the [source](https://github.com/python/cpython/blob/cff677abe1823900e954592035a170eb67840971/Objects/rangeobject.c#L364-L413), Luke! In CPython, `range(...).__contains__` (a method wrapper) will eventually delegate to a simple calculation which checks if the value can possibly be in the range. The reason for the speed here is we're using **mathematical reasoning about the bounds, rather than a direct iteration of the range object**. To explain the logic used: 1. Check that the number is between `start` and `stop`, and 2. Check that the stride value doesn't "step over" our number. For example, `994` is in `range(4, 1000, 2)` because: 1. `4 <= 994 < 1000`, and 2. `(994 - 4) % 2 == 0`. The full C code is included below, which is a bit more verbose because of memory management and reference counting details, but the basic idea is there: ``` static int range_contains_long(rangeobject *r, PyObject *ob) { int cmp1, cmp2, cmp3; PyObject *tmp1 = NULL; PyObject *tmp2 = NULL; PyObject *zero = NULL; int result = -1; zero = PyLong_FromLong(0); if (zero == NULL) /* MemoryError in int(0) */ goto end; /* Check if the value can possibly be in the range. */ cmp1 = PyObject_RichCompareBool(r->step, zero, Py_GT); if (cmp1 == -1) goto end; if (cmp1 == 1) { /* positive steps: start <= ob < stop */ cmp2 = PyObject_RichCompareBool(r->start, ob, Py_LE); cmp3 = PyObject_RichCompareBool(ob, r->stop, Py_LT); } else { /* negative steps: stop < ob <= start */ cmp2 = PyObject_RichCompareBool(ob, r->start, Py_LE); cmp3 = PyObject_RichCompareBool(r->stop, ob, Py_LT); } if (cmp2 == -1 || cmp3 == -1) /* TypeError */ goto end; if (cmp2 == 0 || cmp3 == 0) { /* ob outside of range */ result = 0; goto end; } /* Check that the stride does not invalidate ob's membership. 
*/ tmp1 = PyNumber_Subtract(ob, r->start); if (tmp1 == NULL) goto end; tmp2 = PyNumber_Remainder(tmp1, r->step); if (tmp2 == NULL) goto end; /* result = ((int(ob) - start) % step) == 0 */ result = PyObject_RichCompareBool(tmp2, zero, Py_EQ); end: Py_XDECREF(tmp1); Py_XDECREF(tmp2); Py_XDECREF(zero); return result; } static int range_contains(rangeobject *r, PyObject *ob) { if (PyLong_CheckExact(ob) || PyBool_Check(ob)) return range_contains_long(r, ob); return (int)_PySequence_IterSearch((PyObject*)r, ob, PY_ITERSEARCH_CONTAINS); } ``` The "meat" of the idea is mentioned in [the line](https://github.com/python/cpython/blob/9a5bacfecdf6798e960d1447af11d4cfa8d4dc32/Objects/rangeobject.c#L402): ``` /* result = ((int(ob) - start) % step) == 0 */ ``` As a final note - look at the `range_contains` function at the bottom of the code snippet. If the exact type check fails then we don't use the clever algorithm described, instead falling back to a dumb iteration search of the range using `_PySequence_IterSearch`! You can check this behaviour in the interpreter (I'm using v3.5.0 here): ``` >>> x, r = 1000000000000000, range(1000000000000001) >>> class MyInt(int): ... pass ... >>> x_ = MyInt(x) >>> x in r # calculates immediately :) True >>> x_ in r # iterates for ages.. :( ^\Quit (core dumped) ```
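One small detail worth pulling out of that C snippet: the `PyBool_Check(ob)` branch means plain `bool`s take the fast path too, which makes sense because `bool` is a subclass of `int` whose two values compare equal to `0` and `1`:

```python
# True behaves as 1 and False as 0 for the containment math
print(True in range(2))          # True  -> 1 is in [0, 2)
print(False in range(1, 10))     # False -> 0 is below the start
print(True in range(0, 10, 2))   # False -> 1 is not on the even stride
```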
Why is "1000000000000000 in range(1000000000000001)" so fast in Python 3?
30,081,275
771
2015-05-06T15:32:43Z
30,081,894
319
2015-05-06T16:01:17Z
[ "python", "performance", "python-3.x", "range", "python-internals" ]
It is my understanding that the `range()` function, which is actually [an object type in Python 3](https://docs.python.org/3/library/stdtypes.html#typesseq-range), generates its contents on the fly, similar to a generator. This being the case, I would have expected the following line to take an inordinate amount of time, because in order to determine whether 1 quadrillion is in the range, a quadrillion values would have to be generated: ``` 1000000000000000 in range(1000000000000001) ``` Furthermore: it seems that no matter how many zeroes I add on, the calculation more or less takes the same amount of time (basically instantaneous). I have also tried things like this, but the calculation is still almost instant: ``` 1000000000000000000000 in range(0,1000000000000000000001,10) # count by tens ``` If I try to implement my own range function, the result is not so nice!! ``` def my_crappy_range(N): i = 0 while i < N: yield i i += 1 return ``` What is the `range()` object doing under the hood that makes it so fast? --- EDIT: This has turned out to be a much more nuanced topic than I anticipated - there seems to be a bit of history behind the optimization of `range()`. [Martijn Pieters' answer](http://stackoverflow.com/a/30081318/2437514) was chosen for its completeness, but also see [abarnert's first answer](http://stackoverflow.com/a/30081894/2437514) for a good discussion of what it means for `range` to be a full-fledged *sequence* in Python 3, and some information/warning regarding potential inconsistency for `__contains__` function optimization across Python implementations. [abarnert's other answer](http://stackoverflow.com/a/30088140/2437514) goes into some more detail and provides links for those interested in the history behind the optimization in Python 3 (and lack of optimization of `xrange` in Python 2). 
Answers [by poke](http://stackoverflow.com/a/30081467/2437514) and [by wim](http://stackoverflow.com/a/30081470/2437514) provide the relevant C source code and explanations for those who are interested.
The fundamental misunderstanding here is in thinking that `range` is a generator. It's not. In fact, it's not any kind of iterator. You can tell this pretty easily: ``` >>> a = range(5) >>> print(list(a)) [0, 1, 2, 3, 4] >>> print(list(a)) [0, 1, 2, 3, 4] ``` If it were a generator, iterating it once would exhaust it: ``` >>> b = my_crappy_range(5) >>> print(list(b)) [0, 1, 2, 3, 4] >>> print(list(b)) [] ``` What `range` actually is, is a sequence, just like a list. You can even test this: ``` >>> import collections.abc >>> isinstance(a, collections.abc.Sequence) True ``` This means it has to follow all the rules of being a sequence: ``` >>> a[3] # indexable 3 >>> len(a) # sized 5 >>> 3 in a # membership True >>> reversed(a) # reversible <range_iterator at 0x101cd2360> >>> a.index(3) # implements 'index' 3 >>> a.count(3) # implements 'count' 1 ``` --- The difference between a `range` and a `list` is that a `range` is a *lazy* or *dynamic* sequence; it doesn't remember all of its values, it just remembers its `start`, `stop`, and `step`, and creates the values on demand on `__getitem__`. (As a side note, if you `print(iter(a))`, you'll notice that `range` uses the same `listiterator` type as `list`. How does that work? A `listiterator` doesn't use anything special about `list` except for the fact that it provides a C implementation of `__getitem__`, so it works fine for `range` too.) --- Now, there's nothing that says that `Sequence.__contains__` has to be constant time—in fact, for obvious examples of sequences like `list`, it isn't. But there's nothing that says it *can't* be. And it's easier to implement `range.__contains__` to just check it mathematically (`(val - start) % step`, but with some extra complexity to deal with negative steps) than to actually generate and test all the values, so why *shouldn't* it do it the better way? But there doesn't seem to be anything in the language that *guarantees* this will happen. 
As Ashwini Chaudhary points out, if you give it a non-integral value, instead of converting to integer and doing the mathematical test, it will fall back to iterating all the values and comparing them one by one. And just because CPython 3.2+ and PyPy 3.x versions happen to contain this optimization, and it's an obvious good idea and easy to do, there's no reason that IronPython or NewKickAssPython 3.x couldn't leave it out. (And in fact CPython 3.0-3.1 *didn't* include it.) --- If `range` actually were a generator, like `my_crappy_range`, then it wouldn't make sense to test `__contains__` this way, or at least the way it makes sense wouldn't be obvious. If you'd already iterated the first 3 values, is `1` still `in` the generator? Should testing for `1` cause it to iterate and consume all the values up to `1` (or up to the first value `>= 1`)?
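To make that last point concrete, here's what `in` does to a plain generator (using the question's own `my_crappy_range`) — it consumes values up to and including the first match, so membership testing has destructive, order-dependent semantics:

```python
def my_crappy_range(N):
    i = 0
    while i < N:
        yield i
        i += 1

g = my_crappy_range(5)
print(2 in g)      # True, but 0, 1 and 2 have now been consumed
print(list(g))     # [3, 4] -- only what was left after the search
print(1 in g)      # False! 1 is gone for good
```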
Why is "1000000000000000 in range(1000000000000001)" so fast in Python 3?
30,081,275
771
2015-05-06T15:32:43Z
30,081,959
24
2015-05-06T16:04:37Z
[ "python", "performance", "python-3.x", "range", "python-internals" ]
It is my understanding that the `range()` function, which is actually [an object type in Python 3](https://docs.python.org/3/library/stdtypes.html#typesseq-range), generates its contents on the fly, similar to a generator. This being the case, I would have expected the following line to take an inordinate amount of time, because in order to determine whether 1 quadrillion is in the range, a quadrillion values would have to be generated: ``` 1000000000000000 in range(1000000000000001) ``` Furthermore: it seems that no matter how many zeroes I add on, the calculation more or less takes the same amount of time (basically instantaneous). I have also tried things like this, but the calculation is still almost instant: ``` 1000000000000000000000 in range(0,1000000000000000000001,10) # count by tens ``` If I try to implement my own range function, the result is not so nice!! ``` def my_crappy_range(N): i = 0 while i < N: yield i i += 1 return ``` What is the `range()` object doing under the hood that makes it so fast? --- EDIT: This has turned out to be a much more nuanced topic than I anticipated - there seems to be a bit of history behind the optimization of `range()`. [Martijn Pieters' answer](http://stackoverflow.com/a/30081318/2437514) was chosen for its completeness, but also see [abarnert's first answer](http://stackoverflow.com/a/30081894/2437514) for a good discussion of what it means for `range` to be a full-fledged *sequence* in Python 3, and some information/warning regarding potential inconsistency for `__contains__` function optimization across Python implementations. [abarnert's other answer](http://stackoverflow.com/a/30088140/2437514) goes into some more detail and provides links for those interested in the history behind the optimization in Python 3 (and lack of optimization of `xrange` in Python 2). 
Answers [by poke](http://stackoverflow.com/a/30081467/2437514) and [by wim](http://stackoverflow.com/a/30081470/2437514) provide the relevant C source code and explanations for those who are interested.
The other answers explained it well already, but I'd like to offer another experiment illustrating the nature of range objects: ``` >>> r = range(5) >>> for i in r: print(i, 2 in r, list(r)) 0 True [0, 1, 2, 3, 4] 1 True [0, 1, 2, 3, 4] 2 True [0, 1, 2, 3, 4] 3 True [0, 1, 2, 3, 4] 4 True [0, 1, 2, 3, 4] ``` As you can see, a range object is an object that remembers its range and can be used many times (even while iterating over it), not just a one-time generator.
Why is "1000000000000000 in range(1000000000000001)" so fast in Python 3?
30,081,275
771
2015-05-06T15:32:43Z
30,088,140
52
2015-05-06T21:42:53Z
[ "python", "performance", "python-3.x", "range", "python-internals" ]
It is my understanding that the `range()` function, which is actually [an object type in Python 3](https://docs.python.org/3/library/stdtypes.html#typesseq-range), generates its contents on the fly, similar to a generator. This being the case, I would have expected the following line to take an inordinate amount of time, because in order to determine whether 1 quadrillion is in the range, a quadrillion values would have to be generated: ``` 1000000000000000 in range(1000000000000001) ``` Furthermore: it seems that no matter how many zeroes I add on, the calculation more or less takes the same amount of time (basically instantaneous). I have also tried things like this, but the calculation is still almost instant: ``` 1000000000000000000000 in range(0,1000000000000000000001,10) # count by tens ``` If I try to implement my own range function, the result is not so nice!! ``` def my_crappy_range(N): i = 0 while i < N: yield i i += 1 return ``` What is the `range()` object doing under the hood that makes it so fast? --- EDIT: This has turned out to be a much more nuanced topic than I anticipated - there seems to be a bit of history behind the optimization of `range()`. [Martijn Pieters' answer](http://stackoverflow.com/a/30081318/2437514) was chosen for its completeness, but also see [abarnert's first answer](http://stackoverflow.com/a/30081894/2437514) for a good discussion of what it means for `range` to be a full-fledged *sequence* in Python 3, and some information/warning regarding potential inconsistency for `__contains__` function optimization across Python implementations. [abarnert's other answer](http://stackoverflow.com/a/30088140/2437514) goes into some more detail and provides links for those interested in the history behind the optimization in Python 3 (and lack of optimization of `xrange` in Python 2). 
Answers [by poke](http://stackoverflow.com/a/30081467/2437514) and [by wim](http://stackoverflow.com/a/30081470/2437514) provide the relevant C source code and explanations for those who are interested.
If you're wondering *why* this optimization was added to `range.__contains__`, and why it *wasn't* added to `xrange.__contains__` in 2.7: First, as Ashwini Chaudhary discovered, [issue 1766304](http://bugs.python.org/issue1766304) was opened explicitly to optimize `[x]range.__contains__`. A patch for this was [accepted and checked in for 3.2](https://hg.python.org/cpython/rev/d599a3f2e72d/), but not backported to 2.7 because "xrange has behaved like this for such a long time that I don't see what it buys us to commit the patch this late." (2.7 was nearly out at that point.) Meanwhile: Originally, `xrange` was a not-quite-sequence object. As [the 3.1 docs](https://docs.python.org/3.1/library/stdtypes.html#range-type) say: > Range objects have very little behavior: they only support indexing, iteration, and the `len` function. This wasn't quite true; an `xrange` object actually supported a few other things that come automatically with indexing and `len`,\* including `__contains__` (via linear search). But nobody thought it was worth making them full sequences at the time. Then, as part of implementing the [Abstract Base Classes](https://www.python.org/dev/peps/pep-3119/) PEP, it was important to figure out which builtin types should be marked as implementing which ABCs, and `xrange`/`range` claimed to implement `collections.Sequence`, even though it still only handled the same "very little behavior". Nobody noticed that problem until [issue 9213](http://bugs.python.org/issue9213). The patch for that issue not only added `index` and `count` to 3.2's `range`, it also re-worked the optimized `__contains__` (which shares the same math with `index`, and is directly used by `count`).\*\* [This change](https://github.com/python/cpython/commit/8ee11f7e599b2c5b4de39047bf11bde2246e968f) went in for 3.2 as well, and was not backported to 2.x, because "it's a bugfix that adds new methods". (At this point, 2.7 was already past rc status.) 
So, there were two chances to get this optimization backported to 2.7, but they were both rejected. --- \* In fact, you even get iteration for free with `len` and indexing, but [in 2.3](https://docs.python.org/2/whatsnew/2.3.html#optimizations) `xrange` objects got a custom iterator. Which they then lost in 3.x, which uses the same `listiterator` type as `list`. \*\* The first version actually reimplemented it, and got the details wrong—e.g., it would give you `MyIntSubclass(2) in range(5) == False`. But Daniel Stutzbach's updated version of the patch restored most of the previous code, including the fallback to the generic, slow `_PySequence_IterSearch` that pre-3.2 `range.__contains__` was implicitly using when the optimization doesn't apply.
how to locate the center of a bright spot in an image?
30,081,725
4
2015-05-06T15:52:59Z
30,085,210
7
2015-05-06T18:51:20Z
[ "python", "image", "matlab", "search", "image-processing" ]
Here is an example of the kinds of images I'll be dealing with: ![Balls](http://pages.cs.wisc.edu/~csverma/CS766_09/Stereo/callight.jpg) There is one bright spot on each ball. I want to locate the coordinates of the centre of the bright spot. How can I do it in Python or Matlab? The problem I'm having right now is that more than one points on the spot has the same (or roughly the same) white colour, but what I need is to find the centre of this 'cluster' of white points. Also, for the leftmost and rightmost images, how can I find the centre of the whole circular object?
You can simply threshold the image and find the average coordinates of what is remaining. This handles the case when there are multiple values that have the same intensity. When you threshold the image, there will obviously be more than one bright white pixel, so if you want to bring it all together, find the centroid or the **average** coordinates to determine the centre of all of these white bright pixels. There isn't a need to filter in this particular case. Here's something to go with in MATLAB. I've read in that image directly, converted to grayscale and cleared off the white border that surrounds each of the images. Next, I split up the image into 5 chunks, threshold the image, find the average coordinates that remain and place a dot on where each centre would be: ``` im = imread('http://pages.cs.wisc.edu/~csverma/CS766_09/Stereo/callight.jpg'); im = rgb2gray(im); im = imclearborder(im); %// Split up images and place into individual cells split_point = floor(size(im,2) / 5); images = mat2cell(im, size(im,1), split_point*ones(5,1)); %// Show image to place dots imshow(im); hold on; %// For each image... for idx = 1 : 5 %// Get image img = images{idx}; %// Threshold thresh = img > 200; %// Find coordinates of thresholded image [y,x] = find(thresh); %// Find average xmean = mean(x); ymean = mean(y); %// Place dot at centre %// Make sure you offset by the right number of columns plot(xmean + (idx-1)*split_point, ymean, 'r.', 'MarkerSize', 18); end ``` I get this: ![enter image description here](http://i.stack.imgur.com/f755N.png) --- If you want a Python solution, I recommend using [`scikit-image`](http://scikit-image.org/) combined with [`numpy`](http://www.numpy.org/) and [`matplotlib`](http://matplotlib.org/) for plotting. Here's the above code transcribed in Python. 
Note that I saved the image referenced by the link manually on disk and named it `balls.jpg`: ``` import skimage.io import skimage.segmentation import numpy as np import matplotlib.pyplot as plt # Read in the image # Note - intensities are floating point from [0,1] im = skimage.io.imread('balls.jpg', True) # Threshold the image first then clear the border im_clear = skimage.segmentation.clear_border(im > (200.0/255.0)) # Determine where to split up the image split_point = int(im.shape[1]/5) # Show image in figure and hold to place dots in plt.figure() plt.imshow(np.dstack([im,im,im])) # For each image... for idx in range(5): # Extract sub image img = im_clear[:,idx*split_point:(idx+1)*split_point] # Find coordinates of thresholded image y,x = np.nonzero(img) # Find average xmean = x.mean() ymean = y.mean() # Plot on figure plt.plot(xmean + idx*split_point, ymean, 'r.', markersize=14) # Show image and make sure axis is removed plt.axis('off') plt.show() ``` We get this figure: ![enter image description here](http://i.stack.imgur.com/GS9W9.png) # Small sidenote I could have totally skipped the above code and used `regionprops` ([MATLAB link](http://www.mathworks.com/help/images/ref/regionprops.html), [`scikit-image` link](http://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.regionprops)). You could simply threshold the image, then apply `regionprops` to find the centroids of each cluster of white pixels, but I figured I'd show you a more manual way so you can appreciate the algorithm and understand it for yourself. --- Hope this helps!
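The threshold-then-average step on its own needs nothing beyond `numpy`; here is a minimal self-contained sketch on a made-up 5×5 array standing in for one grayscale sub-image:

```python
import numpy as np

# Synthetic "image": dark background with a bright 2x2 spot
img = np.zeros((5, 5))
img[1:3, 3:5] = 255.0

# Threshold, then average the surviving coordinates to get the centroid
y, x = np.nonzero(img > 200)
xmean, ymean = x.mean(), y.mean()
print(xmean, ymean)   # 3.5 1.5 -- the centre of the 2x2 bright block
```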
How to join list in Python but make the last separator different?
30,083,949
19
2015-05-06T17:43:11Z
30,084,022
11
2015-05-06T17:46:55Z
[ "python" ]
I'm trying to turn a list into separated strings joined with an ampersand if there are only two items, or commas and an ampersand between the last two e.g. ``` Jones & Ben Jim, Jack & James ``` I currently have this: ``` pa = ' & '.join(listauthors[search]) ``` and don't know how to sort out the comma/ampersand issue. I'm a beginner, so a full explanation would be appreciated.
``` "&".join([",".join(my_list[:-1]),my_list[-1]]) ``` would work, I would think, or maybe just ``` ",".join(my_list[:-1]) +"&"+my_list[-1] ``` To handle the edge case where there are only 2 items, you could use ``` "&".join([",".join(my_list[:-1]),my_list[-1]] if len(my_list) > 2 else my_list) ```
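Folding those pieces into one helper (the function name is mine, not from the question) that also survives the empty and single-name cases:

```python
def ampersand_join(names):
    """['Jim', 'Jack', 'James'] -> 'Jim, Jack & James'."""
    if len(names) < 2:
        return "".join(names)   # '' for empty, the name itself for one
    return ", ".join(names[:-1]) + " & " + names[-1]

print(ampersand_join(["Jones", "Ben"]))          # Jones & Ben
print(ampersand_join(["Jim", "Jack", "James"]))  # Jim, Jack & James
```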
How to join list in Python but make the last separator different?
30,083,949
19
2015-05-06T17:43:11Z
30,084,025
11
2015-05-06T17:47:02Z
[ "python" ]
I'm trying to turn a list into separated strings joined with an ampersand if there are only two items, or commas and an ampersand between the last two e.g. ``` Jones & Ben Jim, Jack & James ``` I currently have this: ``` pa = ' & '.join(listauthors[search]) ``` and don't know how to sort out the comma/ampersand issue. I'm a beginner, so a full explanation would be appreciated.
You could break this up into two joins. Join all but the last item with `", "`. Then join this string and the last item with `" & "`. ``` all_but_last = ', '.join(authors[:-1]) last = authors[-1] ' & '.join([all_but_last, last]) ``` Note: This doesn't deal with edge cases, such as when `authors` is empty or has only one element.
Python convert list of multiple words to single words
30,085,694
3
2015-05-06T19:17:58Z
30,085,746
10
2015-05-06T19:21:14Z
[ "python", "nlp", "nltk" ]
I have a list of words for example: `words = ['one','two','three four','five','six seven']` And I am trying to create a new list where each item in the list is just one word so I would have: `words = ['one','two','three','four','five','six','seven']` Would the best thing to do be to join the entire list into a string and then tokenize the string? Something like this: `word_string = ' '.join(words) tokenize_list = nltk.tokenize(word_string)` Or is there a better option?
``` words = ['one','two','three four','five','six seven'] ``` With a loop: ``` words_result = [] for item in words: for word in item.split(): words_result.append(word) ``` or as a comprehension: ``` words = [word for item in words for word in item.split()] ```
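The same flattening can also be spelled with `itertools.chain.from_iterable`, which is handy if you want to stay lazy or avoid spelling out the double loop:

```python
import itertools

words = ['one', 'two', 'three four', 'five', 'six seven']
flat = list(itertools.chain.from_iterable(item.split() for item in words))
print(flat)   # ['one', 'two', 'three', 'four', 'five', 'six', 'seven']
```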
Python convert list of multiple words to single words
30,085,694
3
2015-05-06T19:17:58Z
30,085,757
8
2015-05-06T19:21:50Z
[ "python", "nlp", "nltk" ]
I have a list of words for example: `words = ['one','two','three four','five','six seven']` And I am trying to create a new list where each item in the list is just one word so I would have: `words = ['one','two','three','four','five','six','seven']` Would the best thing to do be to join the entire list into a string and then tokenize the string? Something like this: `word_string = ' '.join(words) tokenize_list = nltk.tokenize(word_string)` Or is there a better option?
You can join using a space separator and then split again: ``` In [22]: words = ['one','two','three four','five','six seven'] ' '.join(words).split() Out[22]: ['one', 'two', 'three', 'four', 'five', 'six', 'seven'] ```
What is the difference between assignment and parameter passing?
30,086,020
2
2015-05-06T19:38:20Z
30,086,070
9
2015-05-06T19:40:36Z
[ "python", "variable-assignment" ]
I can assign a sequence like this in Python: ``` a,b,c="ABC" ``` But I am unable to pass this sequence to a function as a parameter, i.e. ``` def function2(a,b,c): print a print b print c function2("ABC") ``` The above statement is raising an error. Can anyone tell me the difference between assignment and argument passing in Python?
The compiler sees a comma-separated list on the LHS and emits bytecode to iterate over the RHS for you. With the function call it sees a single value and so sends it as a single argument. You need to tell it to split the sequence explicitly: ``` >>> function2(*"ABC") A B C ```
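Side by side (with the function tweaked to return rather than print, so the result is easy to inspect), the two forms mirror each other — the `*` is what opts the call into the same iterate-the-RHS behaviour that multiple assignment gets automatically:

```python
def function2(a, b, c):
    return (a, b, c)

a, b, c = "ABC"             # assignment: Python iterates the string for you
result = function2(*"ABC")  # call: * unpacks the string into three arguments
print(a, b, c, result)      # A B C ('A', 'B', 'C')
print(function2(*[1, 2, 3]))  # works for any iterable, not just strings
```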
Why is my (newbie) code so slow?
30,086,851
9
2015-05-06T20:25:31Z
30,087,482
8
2015-05-06T21:04:02Z
[ "python", "performance", "profiler" ]
I'm learning python and came across [this example](https://www.binpress.com/tutorial/introduction-to-agentbased-models-an-implementation-of-schelling-model-in-python/144) of a simulation of a model I've seen before. One of the functions looked unnecessarily long, so I thought it would be good practice to try to make it more efficient. My attempt, while requiring less code, is about 1/60th as fast. Yes, I made it 60 times worse. My question is, where have I gone wrong? I've tried timing individual parts of the function and don't see where the bottleneck is. Here's the original function. It's for a model where people live on a grid and their happiness depends on whether they're the same race as most of their neighbors. (It's Schelling's [segregation model](http://en.wikipedia.org/wiki/Tipping_point_(sociology)).) So we give it an x,y coordinate for a person and determine their happiness by checking the race of each of their neighbors. ``` def is_unhappy(self, x, y): race = self.agents[(x,y)] count_similar = 0 count_different = 0 if x > 0 and y > 0 and (x-1, y-1) not in self.empty_houses: if self.agents[(x-1, y-1)] == race: count_similar += 1 else: count_different += 1 if y > 0 and (x,y-1) not in self.empty_houses: if self.agents[(x,y-1)] == race: count_similar += 1 else: count_different += 1 if x < (self.width-1) and y > 0 and (x+1,y-1) not in self.empty_houses: if self.agents[(x+1,y-1)] == race: count_similar += 1 else: count_different += 1 if x > 0 and (x-1,y) not in self.empty_houses: if self.agents[(x-1,y)] == race: count_similar += 1 else: count_different += 1 if x < (self.width-1) and (x+1,y) not in self.empty_houses: if self.agents[(x+1,y)] == race: count_similar += 1 else: count_different += 1 if x > 0 and y < (self.height-1) and (x-1,y+1) not in self.empty_houses: if self.agents[(x-1,y+1)] == race: count_similar += 1 else: count_different += 1 if x > 0 and y < (self.height-1) and (x,y+1) not in self.empty_houses: if self.agents[(x,y+1)] == race: 
count_similar += 1 else: count_different += 1 if x < (self.width-1) and y < (self.height-1) and (x+1,y+1) not in self.empty_houses: if self.agents[(x+1,y+1)] == race: count_similar += 1 else: count_different += 1 if (count_similar+count_different) == 0: return False else: return float(count_similar)/(count_similar+count_different) < self.similarity_threshold ``` And here is my code, which, as I said, is MUCH slower. I wanted to avoid having all the if statements above by just creating a list of "offsets" to add to each person's coordinates to determine the locations of possible neighbors, check if that is a valid position, and then check the neighbor's race. ``` def is_unhappy2(self, x, y): thisRace = self.agents[(x,y)] count_same = 0 count_other = 0 for xo, yo in list(itertools.product([-1,0,1],[-1,0,1])): if xo==0 and yo==0: # do nothing for case of no offset continue else: # check if there's a neighbor at the offset of (xo, yo) neighbor = tuple(np.add( (x,y), (xo,yo) )) if neighbor in self.agents.keys(): if self.agents[neighbor] == thisRace: count_same += 1 else: count_other += 1 if count_same+count_other == 0: return False else: return float(count_same) / (count_same + count_other) < self.similarity_threshold ``` (The rest of the code that creates the class is [on the site](https://www.binpress.com/tutorial/introduction-to-agentbased-models-an-implementation-of-schelling-model-in-python/144) where the example comes from.) Here are the timing results: ``` %timeit s.is_unhappy2(49,42) 100 loops, best of 3: 5.99 ms per loop %timeit s.is_unhappy(49,42) 10000 loops, best of 3: 103 µs per loop ``` I'm hoping someone with python knowledge can see immediately what I'm doing wrong without having to get into the nitty-gritty of the rest of the code. Can you see why my code is so much worse than the original?
Don't use `np.add`, just use `neighbor = (x+xo, y+yo)`. That should make it much faster (10x faster in my little test). You can also... * ask `if neighbor in self.agents:` without the `.keys()` * leave out the `list` * check `xo or yo` and not have an empty if-block * avoid the double-lookup of neighbor in `self.agents` Result: ``` for xo, yo in itertools.product([-1,0,1],[-1,0,1]): if xo or yo: neighbor = self.agents.get((x+xo, y+yo)) if neighbor is not None: if neighbor == thisRace: count_same += 1 else: count_other += 1 ``` And you can add ``` self.neighbor_deltas = tuple(set(itertools.product([-1,0,1],[-1,0,1])) - {(0, 0)}) ``` to the class initializer and then your function can just use those pre-computed deltas: ``` for xo, yo in self.neighbor_deltas: neighbor = self.agents.get((x+xo, y+yo)) if neighbor is not None: if neighbor == thisRace: count_same += 1 else: count_other += 1 ``` Congrats for deciding to improve that author's ridiculously repetitive code, btw.
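To make the suggestions above concrete, here is a minimal, self-contained sketch of the optimized neighbor count. The toy `agents` grid and the threshold value are made up for illustration; in the real model these live on the class.

```python
import itertools

# Toy 2x2 grid of agents; keys are (x, y) coordinates, values are races
agents = {(0, 0): "red", (0, 1): "blue", (1, 0): "red", (1, 1): "red"}
similarity_threshold = 0.5

# Pre-computed neighbor offsets, excluding (0, 0) itself
neighbor_deltas = tuple(set(itertools.product([-1, 0, 1], repeat=2)) - {(0, 0)})

def is_unhappy(x, y):
    race = agents[(x, y)]
    count_same = count_other = 0
    for xo, yo in neighbor_deltas:
        # Plain tuple arithmetic and a single dict lookup via .get()
        neighbor = agents.get((x + xo, y + yo))
        if neighbor is not None:
            if neighbor == race:
                count_same += 1
            else:
                count_other += 1
    if count_same + count_other == 0:
        return False
    return float(count_same) / (count_same + count_other) < similarity_threshold

print(is_unhappy(0, 1))  # the lone blue agent is unhappy: True
```

The same loop body as the answer's version, just without the per-iteration `np.add` call and the `.keys()` membership test.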
Can't figure out how to fix the error in the following code
30,088,006
9
2015-05-06T21:34:39Z
31,458,385
12
2015-07-16T15:23:43Z
[ "python", "json", "python-2.7", "pandas" ]
I am trying to read in a json file into pandas data frame. Here is the first line line of the json file: ``` {"votes": {"funny": 0, "useful": 0, "cool": 0}, "user_id": "P_Mk0ygOilLJo4_WEvabAA", "review_id": "OeT5kgUOe3vcN7H6ImVmZQ", "stars": 3, "date": "2005-08-26", "text": "This is a pretty typical cafe. The sandwiches and wraps are good but a little overpriced and the food items are the same. The chicken caesar salad wrap is my favorite here but everything else is pretty much par for the course.", "type": "review", "business_id": "Jp9svt7sRT4zwdbzQ8KQmw"} ``` I am trying do the following:`df = pd.read_json(path)` I am getting the following error (with full traceback): ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/d/anaconda/lib/python2.7/site-packages/pandas/io/json.py", line 198, in read_json date_unit).parse() File "/Users/d/anaconda/lib/python2.7/site-packages/pandas/io/json.py", line 266, in parse self._parse_no_numpy() File "/Users/d/anaconda/lib/python2.7/site-packages/pandas/io/json.py", line 483, in _parse_no_numpy loads(json, precise_float=self.precise_float), dtype=None) ValueError: Trailing data ``` What is `Trailing data` error ? How do I read it into a data frame ? **EDIT:** Following some suggestions, here are few lines of the .json file: ``` {"votes": {"funny": 0, "useful": 0, "cool": 0}, "user_id": "P_Mk0ygOilLJo4_WEvabAA", "review_id": "OeT5kgUOe3vcN7H6ImVmZQ", "stars": 3, "date": "2005-08-26", "text": "This is a pretty typical cafe. The sandwiches and wraps are good but a little overpriced and the food items are the same. 
The chicken caesar salad wrap is my favorite here but everything else is pretty much par for the course.", "type": "review", "business_id": "Jp9svt7sRT4zwdbzQ8KQmw"} {"votes": {"funny": 0, "useful": 0, "cool": 0}, "user_id": "TNJRTBrl0yjtpAACr1Bthg", "review_id": "qq3zF2dDUh3EjMDuKBqhEA", "stars": 3, "date": "2005-11-23", "text": "I agree with other reviewers - this is a pretty typical financial district cafe. However, they have fantastic pies. I ordered three pies for an office event (apple, pumpkin cheesecake, and pecan) - all were delicious, particularly the cheesecake. The sucker weighed in about 4 pounds - no joke.\n\nNo surprises on the cafe side - great pies and cakes from the catering business.", "type": "review", "business_id": "Jp9svt7sRT4zwdbzQ8KQmw"} {"votes": {"funny": 0, "useful": 0, "cool": 0}, "user_id": "H_mngeK3DmjlOu595zZMsA", "review_id": "i3eQTINJXe3WUmyIpvhE9w", "stars": 3, "date": "2005-11-23", "text": "Decent enough food, but very overpriced. Just a large soup is almost $5. Their specials are $6.50, and with an overpriced soda or juice, it's approaching $10. A bit much for a cafe lunch!", "type": "review", "business_id": "Jp9svt7sRT4zwdbzQ8KQmw"} ``` This .json file I am using contains one json object in each line as per the specification. I tried the [jsonlint.com](http://jsonlint.com) website as suggested and it gives the following error: ``` Parse error on line 14: ...t7sRT4zwdbzQ8KQmw"}{ "votes": { ----------------------^ Expecting 'EOF', '}', ',', ']' ```
You have to read it line by line. For example, you can use the following code provided by [ryptophan](http://www.reddit.com/user/ryptophan) on [reddit](http://www.reddit.com/r/MachineLearning/comments/33eglq/python_help_jsoncsv_pandas/): ``` import pandas as pd # read the entire file into a python array with open('your.json', 'rb') as f: data = f.readlines() # remove the trailing "\n" from each line data = map(lambda x: x.rstrip(), data) # each element of 'data' is an individual JSON object. # i want to convert it into an *array* of JSON objects # which, in and of itself, is one large JSON object # basically... add square brackets to the beginning # and end, and have all the individual business JSON objects # separated by a comma data_json_str = "[" + ','.join(data) + "]" # now, load it into pandas data_df = pd.read_json(data_json_str) ```
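As a side note, the same line-by-line idea works without pandas at all: each line of the file is a complete JSON document, so the standard library's `json` module can parse them individually. This stdlib-only sketch uses a hypothetical two-record string standing in for the file (newer pandas versions, 0.19 and up, also accept `pd.read_json(path, lines=True)` for exactly this layout):

```python
import json

# Two line-delimited JSON records, standing in for the question's file
raw = '{"stars": 3, "type": "review"}\n{"stars": 5, "type": "review"}\n'

# Parse each non-empty line as its own JSON object
records = [json.loads(line) for line in raw.splitlines() if line.strip()]
print(len(records), records[0]["stars"])  # 2 records, first has 3 stars
```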
Mortgage calculator math error
30,090,173
2
2015-05-07T01:00:52Z
30,090,308
7
2015-05-07T01:16:28Z
[ "python", "math", "calculator" ]
This program *runs* fine, but the monthly payment it returns is totally off. For a principal amount of $400,000, interest rate of 11%, and a 10-year payment period, it returns the monthly payment of $44000.16. I googled the equation (algorithm?) for mortgage payments and put it in, not sure where I'm going wrong. ``` import locale locale.setlocale(locale.LC_ALL, '') def mortgage(principal, interest, n): payment = principal*((interest*(1+interest)**n) / ((1+interest)**n-1)) return payment principal = float(input("What is the amount of the loan you are taking out? $")) interest = float(input("What is the interest rate? (%) ")) / 100 n = float(input("How many years? ")) * 12 print print "Your monthly payment would be", locale.currency(mortgage(principal, interest, n)) ```
The problem is in your interest rate used. You request the annual interest rate and never convert to a monthly interest rate. From <https://en.wikipedia.org/wiki/Mortgage_calculator#Monthly_payment_formula>: > r - the **monthly** interest rate, expressed as a decimal, not a > percentage. Since the quoted yearly percentage rate is not a > compounded rate, the monthly percentage rate is simply the yearly > percentage rate divided by 12; dividing the monthly percentage rate by > 100 gives r, the monthly rate expressed as a decimal. I just tried this on my computer and dividing the interest rate by 12 calculated $5510/month which agrees with other mortgage calculators.
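A sketch of the corrected calculation, converting the annual rate to a monthly rate before applying the formula (function and variable names kept from the question):

```python
def mortgage(principal, monthly_rate, n_payments):
    # Standard amortization formula, expecting a *monthly* rate
    growth = (1 + monthly_rate) ** n_payments
    return principal * monthly_rate * growth / (growth - 1)

annual_rate = 0.11
# Divide the annual rate by 12 and count payments in months
payment = mortgage(400000, annual_rate / 12, 10 * 12)
print(round(payment, 2))  # roughly 5510 per month, matching other calculators
```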
about close a file in Python
30,092,249
6
2015-05-07T04:55:51Z
30,092,299
18
2015-05-07T05:00:19Z
[ "python", "file" ]
I know it is a good habit in Python to use close to close a file once it is no longer needed. I have tried opening a large number of files without closing them (in the same Python process), but did not see any exceptions or errors. I have tried both Mac and Linux. So I am wondering: is Python smart enough to manage file handles and close/reuse them automatically, so that we do not need to care about closing files? thanks in advance, Lin
Python **will**, in general, *garbage collect* objects no longer in use and no longer being referenced. This means it's entirely possible that open file objects, that match the garbage collector's filters, will get cleaned up and probably closed. **However**; you should not rely on this, and instead use: ``` with open(..): ``` **Example** (*Also best practice*): ``` with open("file.txt", "r") as f: # do something with f ``` **NB:** If you *don't* close the file and leave "open file descriptors" around on your system, you will eventually start hitting resource limits on your system; specifically "ulimit". You will eventually start to see OS errors related to "too many open files". (*Assuming Linux here, but other OS(es) will have similar behaviour*). **Important:** It's also a *good practice* to close any open files you've written too, so that data you have written is properly flushed. This helps to ensure data integrity, and not have files unexpectedly contain corrupted data because of an application crash. > It's worth noting that the above important note is the cause of many issues with where you write to a file; read it back; discover it's empty; but then close your python program; read it in a text editor and realize it's not empty. **Demo:** A good example of the kind of resource limits and errors you might hit if you don't ensure you close open file(s): ``` $ python Python 2.7.6 (default, Mar 22 2014, 22:59:56) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> xs = [open("/dev/null", "r") for _ in xrange(100000)] Traceback (most recent call last): File "<stdin>", line 1, in <module> IOError: [Errno 24] Too many open files: '/dev/null' ```
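A runnable version of the recommended pattern, using a temporary file so the example is self-contained — the file is flushed and closed as soon as the `with` block exits, so the read-back sees everything that was written:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

with open(path, "w") as f:
    f.write("hello")  # closed (and flushed) automatically on block exit

with open(path) as f:
    data = f.read()

print(data)  # -> hello
os.remove(path)
```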
cost of len() and pep8 suggestion on sequence empty check
30,100,096
2
2015-05-07T11:43:48Z
30,100,188
7
2015-05-07T11:48:00Z
[ "python", "pep8" ]
If the complexity of python `len()` is O(1), why does pep8 suggest to use `if seq:` instead of `if len(seq) == 0:` > <https://wiki.python.org/moin/TimeComplexity> > <https://www.python.org/dev/peps/pep-0008/#programming-recommendations> Isn't `len(seq) == 0` more readable?
The former can handle both empty string and `None`. For example consider these two variables. ``` >>> s1 = '' >>> s2 = None ``` Using the first method ``` def test(s): if s: return True else: return False >>> test(s1) False >>> test(s2) False ``` Now using `len` ``` def test(s): if len(s) == 0: return True else: return False >>> test(s1) True >>> test(s2) Traceback (most recent call last): File "<pyshell#13>", line 1, in <module> test(s2) File "<pyshell#11>", line 2, in test if len(s) == 0: TypeError: object of type 'NoneType' has no len() ``` So in terms of performance, both will be O(1), but the truthiness test (the first method) is more robust in that it handles `None` in addition to empty strings.
Format seconds as float
30,100,474
3
2015-05-07T12:00:56Z
30,100,634
8
2015-05-07T12:07:58Z
[ "python" ]
I wish to output: ``` "14:48:06.743174" ``` This is the closest I can get: ``` "14:48:06" ``` with: ``` t = time.time() time.strftime("%H:%M:%S",time.gmtime(t)) ```
According to the [manual on `time`](https://docs.python.org/3.2/library/time.html#time.strftime) there is no straight forward way to print microseconds (or seconds as float): ``` >>> time.strftime("%H:%M:%S.%f",time.gmtime(t)) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: Invalid format string ``` However [`datetime.datetime` format](https://docs.python.org/3.2/library/datetime.html#strftime-and-strptime-behavior) provides `%f` which is defined as: > Microsecond as a decimal number [0,999999], zero-padded on the left ``` >>> import datetime >>> datetime.datetime.now().strftime('%H:%M:%S.%f') '14:07:11.286000' ``` Or when you have your value stored in `t = time.time()`, you can use [`datetime.datetime.utcfromtimestam()`](https://docs.python.org/3.2/library/datetime.html#datetime.datetime.utcfromtimestamp): ``` >>> datetime.datetime.utcfromtimestamp(t).strftime('%H:%M:%S.%f') '12:08:32.463000' ``` I'm afraid that if you want to have [more control over how microseconds get formatted](http://stackoverflow.com/a/18406412/1149736) (for example displaying only 3 places instead of 6) you will either have to crop the text (using `[:-3]`): ``` >>> datetime.datetime.utcfromtimestamp(t).strftime('%H:%M:%S.%f')[:-3] '12:08:32.463' ``` Or format it by hand: ``` >>> '.{:03}'.format(int(dt.microsecond/1000)) '.463' ```
Why are some float < integer comparisons four times slower than others?
30,100,725
264
2015-05-07T12:11:08Z
30,100,743
334
2015-05-07T12:11:39Z
[ "python", "performance", "floating-point", "cpython", "python-internals" ]
When comparing floats to integers, some pairs of values take much longer to be evaluated than other values of a similar magnitude. For example: ``` >>> import timeit >>> timeit.timeit("562949953420000.7 < 562949953421000") # run 1 million times 0.5387085462592742 ``` But if the float or integer is made smaller or larger by a certain amount, the comparison runs much more quickly: ``` >>> timeit.timeit("562949953420000.7 < 562949953422000") # integer increased by 1000 0.1481498428446173 >>> timeit.timeit("562949953423001.8 < 562949953421000") # float increased by 3001.1 0.1459577925548956 ``` Changing the comparison operator (e.g. using `==` or `>` instead) does not affect the times in any noticeable way. This is not *solely* related to magnitude because picking larger or smaller values can result in faster comparisons, so I suspect it is down to some unfortunate way the bits line up. Clearly, comparing these values is more than fast enough for most use cases. I am simply curious as to why Python seems to struggle more with some pairs of values than with others.
A comment in the Python source code for float objects acknowledges that: > [Comparison is pretty much a nightmare](https://hg.python.org/cpython/file/ea33b61cac74/Objects/floatobject.c#l285) This is especially true when comparing a float to an integer, because, unlike floats, integers in Python can be arbitrarily large and are always exact. Trying to cast the integer to a float might lose precision and make the comparison inaccurate. Trying to cast the float to an integer is not going to work either because any fractional part will be lost. To get around this problem, Python performs a series of checks, returning the result if one of the checks succeeds. It compares the signs of the two values, then whether the integer is "too big" to be a float, then compares the exponent of the float to the length of the integer. If all of these checks fail, it is necessary to construct two new Python objects to compare in order to obtain the result. When comparing a float `v` to an integer/long `w`, the worst case is that: * `v` and `w` have the same sign (both positive or both negative), * the integer `w` has few enough bits that it can be held in the [`size_t`](http://stackoverflow.com/a/2550799/3923281) type (typically 32 or 64 bits), * the integer `w` has at least 49 bits, * the exponent of the float `v` is the same as the number of bits in `w`. And this is exactly what we have for the values in the question: ``` >>> import math >>> math.frexp(562949953420000.7) # gives the float's (significand, exponent) pair (0.9999999999976706, 49) >>> (562949953421000).bit_length() 49 ``` We see that 49 is both the exponent of the float and the number of bits in the integer. Both numbers are positive and so the four criteria above are met. Choosing one of the values to be larger (or smaller) can change the number of bits of the integer, or the value of the exponent, and so Python is able to determine the result of the comparison without performing the expensive final check. 
This is specific to the CPython implementation of the language. --- ### The comparison in more detail The [`float_richcompare`](https://hg.python.org/cpython/file/ea33b61cac74/Objects/floatobject.c#l301) function handles the comparison between two values `v` and `w`. Below is a step-by-step description of the checks that the function performs. The comments in the Python source are actually very helpful when trying to understand what the function does, so I've left them in where relevant. I've also summarised these checks in a list at the foot of the answer. The main idea is to map the Python objects `v` and `w` to two appropriate C doubles, `i` and `j`, which can then be easily compared to give the correct result. Both Python 2 and Python 3 use the same ideas to do this (the former just handles `int` and `long` types separately). The first thing to do is check that `v` is definitely a Python float and map it to a C double `i`. Next the function looks at whether `w` is also a float and maps it to a C double `j`. This is the best case scenario for the function as all the other checks can be skipped. The function also checks to see whether `v` is `inf` or `nan`: ``` static PyObject* float_richcompare(PyObject *v, PyObject *w, int op) { double i, j; int r = 0; assert(PyFloat_Check(v)); i = PyFloat_AS_DOUBLE(v); if (PyFloat_Check(w)) j = PyFloat_AS_DOUBLE(w); else if (!Py_IS_FINITE(i)) { if (PyLong_Check(w)) j = 0.0; else goto Unimplemented; } ``` Now we know that if `w` failed these checks, it is not a Python float. Now the function checks if it's a Python integer. If this is the case, the easiest test is to extract the sign of `v` and the sign of `w` (return `0` if zero, `-1` if negative, `1` if positive). If the signs are different, this is all the information needed to return the result of the comparison: ``` else if (PyLong_Check(w)) { int vsign = i == 0.0 ? 0 : i < 0.0 ? 
-1 : 1; int wsign = _PyLong_Sign(w); size_t nbits; int exponent; if (vsign != wsign) { /* Magnitudes are irrelevant -- the signs alone * determine the outcome. */ i = (double)vsign; j = (double)wsign; goto Compare; } } ``` If this check failed, then `v` and `w` have the same sign. The next check counts the number of bits in the integer `w`. If it has too many bits then it can't possibly be held as a float and so must be larger in magnitude than the float `v`: ``` nbits = _PyLong_NumBits(w); if (nbits == (size_t)-1 && PyErr_Occurred()) { /* This long is so large that size_t isn't big enough * to hold the # of bits. Replace with little doubles * that give the same outcome -- w is so large that * its magnitude must exceed the magnitude of any * finite float. */ PyErr_Clear(); i = (double)vsign; assert(wsign != 0); j = wsign * 2.0; goto Compare; } ``` On the other hand, if the integer `w` has 48 or fewer bits, it can safely be turned into a C double `j` and compared: ``` if (nbits <= 48) { j = PyLong_AsDouble(w); /* It's impossible that <= 48 bits overflowed. */ assert(j != -1.0 || ! PyErr_Occurred()); goto Compare; } ``` From this point onwards, we know that `w` has 49 or more bits. It will be convenient to treat `w` as a positive integer, so change the sign and the comparison operator as necessary: ``` if (vsign < 0) { /* "Multiply both sides" by -1; this also swaps the * comparator. */ i = -i; op = _Py_SwappedOp[op]; } ``` Now the function looks at the exponent of the float. Recall that a float can be written (ignoring sign) as significand \* 2^exponent and that the significand represents a number between 0.5 and 1: ``` (void) frexp(i, &exponent); if (exponent < 0 || (size_t)exponent < nbits) { i = 1.0; j = 2.0; goto Compare; } ``` This checks two things. If the exponent is less than 0 then the float is smaller than 1 (and so smaller in magnitude than any integer).
Or, if the exponent is less than the number of bits in `w` then we have that `v < |w|` since significand \* 2^exponent is less than 2^nbits. Failing these two checks, the function looks to see whether the exponent is greater than the number of bits in `w`. This shows that significand \* 2^exponent is greater than 2^nbits and so `v > |w|`: ``` if ((size_t)exponent > nbits) { i = 2.0; j = 1.0; goto Compare; } ``` If this check did not succeed we know that the exponent of the float `v` is the same as the number of bits in the integer `w`. The only way that the two values can be compared now is to construct two new Python integers from `v` and `w`. The idea is to discard the fractional part of `v`, double the integer part, and then add one. `w` is also doubled and these two new Python objects can be compared to give the correct return value. Using an example with small values, `4.65 < 4` would be determined by the comparison `(2*4)+1 == 9 < 8 == (2*4)` (returning false). ``` { double fracpart; double intpart; PyObject *result = NULL; PyObject *one = NULL; PyObject *vv = NULL; PyObject *ww = w; // snip fracpart = modf(i, &intpart); // split i (the double that v mapped to) vv = PyLong_FromDouble(intpart); // snip if (fracpart != 0.0) { /* Shift left, and or a 1 bit into vv * to represent the lost fraction. */ PyObject *temp; one = PyLong_FromLong(1); temp = PyNumber_Lshift(ww, one); // left-shift doubles an integer ww = temp; temp = PyNumber_Lshift(vv, one); vv = temp; temp = PyNumber_Or(vv, one); // a doubled integer is even, so this adds 1 vv = temp; } // snip } } ``` For brevity I've left out the additional error-checking and garbage-tracking Python has to do when it creates these new objects. Needless to say, this adds additional overhead and explains why the values highlighted in the question are significantly slower to compare than others. --- Here is a summary of the checks that are performed by the comparison function. Let `v` be a float and cast it as a C double.
Now, if `w` is also a float: * Check whether `w` is `nan` or `inf`. If so, handle this special case separately depending on the type of `w`. * If not, compare `v` and `w` directly by their representations as C doubles. If `w` is an integer: * Extract the signs of `v` and `w`. If they are different then we know `v` and `w` are different and which is the greater value. * (*The signs are the same.*) Check whether `w` has too many bits to be a float (more than `size_t`). If so, `w` has greater magnitude than `v`. * Check if `w` has 48 or fewer bits. If so, it can be safely cast to a C double without losing its precision and compared with `v`. * (*`w` has more than 48 bits. We will now treat `w` as a positive integer having changed the compare op as appropriate.*) * Consider the exponent of the float `v`. If the exponent is negative, then `v` is less than `1` and therefore less than any positive integer. Else, if the exponent is less than the number of bits in `w` then it must be less than `w`. * If the exponent of `v` is greater than the number of bits in `w` then `v` is greater than `w`. * (*The exponent is the same as the number of bits in `w`.*) * The final check. Split `v` into its integer and fractional parts. Double the integer part and add 1 to compensate for the fractional part. Now double the integer `w`. Compare these two new integers instead to get the result.
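Following that summary, a rough stdlib sketch can predict whether a given positive float/int pair lands on the expensive final check: same sign, the integer has 49 or more bits, and the float's `frexp` exponent equals the integer's bit length. This is a heuristic derived from the checks above, not the actual CPython code:

```python
import math

def hits_slow_path(v, w):
    """Rough predictor for the expensive comparison branch."""
    if (v < 0) != (w < 0):
        return False  # different signs: decided immediately
    nbits = abs(w).bit_length()
    if nbits <= 48:
        return False  # small enough to cast to a double exactly
    _, exponent = math.frexp(v)
    # Only when the exponent matches the bit count must two
    # new Python integers be constructed and compared
    return exponent == nbits

print(hits_slow_path(562949953420000.7, 562949953421000))  # the slow pair: True
print(hits_slow_path(562949953420000.7, 562949953422000))  # a fast pair: False
```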
Pandas: can not write to excel file
30,102,232
10
2015-05-07T13:17:03Z
30,304,735
7
2015-05-18T13:38:23Z
[ "python", "pandas" ]
Trying this example from the [documentation](http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.to_excel.html) ``` writer = ExcelWriter('output.xlsx') df1.to_excel(writer,'Sheet1') df2.to_excel(writer,'Sheet2') writer.save() ``` I found out that I can not write to an excel file with the error ``` TypeError: copy() got an unexpected keyword argument 'font' ``` I'm using Panda 0.16 on a Mac pro. EDIT: writing to an xls file worked just fine. I do not insist in having an xlsx file, just wonder why it does not work.
For a quick fix, replace this ``` xcell.style = xcell.style.copy(**style_kwargs) ``` with ``` pass ``` at pandas/io/excel.py line 778. openpyxl upgraded their API, and pandas also needs to be updated to support the new openpyxl.
Pandas: can not write to excel file
30,102,232
10
2015-05-07T13:17:03Z
30,378,303
11
2015-05-21T15:37:04Z
[ "python", "pandas" ]
Trying this example from the [documentation](http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.to_excel.html) ``` writer = ExcelWriter('output.xlsx') df1.to_excel(writer,'Sheet1') df2.to_excel(writer,'Sheet2') writer.save() ``` I found out that I can not write to an excel file with the error ``` TypeError: copy() got an unexpected keyword argument 'font' ``` I'm using Panda 0.16 on a Mac pro. EDIT: writing to an xls file worked just fine. I do not insist in having an xlsx file, just wonder why it does not work.
As per their [documentation](http://pandas.pydata.org/pandas-docs/stable/install.html), pandas depends > on openpyxl version 1.6.1 or higher, but lower than 2.0.0 The last [openpyxl](https://openpyxl.readthedocs.org/en/latest/) version lower than 2.0.0 being version 1.8.6, you should simply remove your current openpyxl version and run: ``` pip install openpyxl==1.8.6 ``` if your using pip, or find an equivalent way to install this specific version.
Pandas: can not write to excel file
30,102,232
10
2015-05-07T13:17:03Z
32,107,505
7
2015-08-19T23:56:53Z
[ "python", "pandas" ]
Trying this example from the [documentation](http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.to_excel.html) ``` writer = ExcelWriter('output.xlsx') df1.to_excel(writer,'Sheet1') df2.to_excel(writer,'Sheet2') writer.save() ``` I found out that I can not write to an excel file with the error ``` TypeError: copy() got an unexpected keyword argument 'font' ``` I'm using Panda 0.16 on a Mac pro. EDIT: writing to an xls file worked just fine. I do not insist in having an xlsx file, just wonder why it does not work.
If you don't care whether the headers have borders around them and bold font, and you don't want to restrict the version of openpyxl, the quickest way is to overwrite the `header_style` dictionary to be `None`. If you also have dates or datetimes, you must also explicitly set the workbook's `date` and `datetime` formats to `None`: ``` from datetime import datetime import pandas as pd pd.core.format.header_style = None # <--- Workaround for header formatting dt = datetime.now() d = datetime.date(datetime.now()) df1 = pd.DataFrame([{'c1': 'alpha', 'c2': 1}, {'c1': 'beta', 'c2': 2}]) df2 = pd.DataFrame([{'c1': dt, 'c2': d}, {'c1': dt, 'c2': d}]) with pd.ExcelWriter('output.xlsx') as writer: writer.date_format = None # <--- Workaround for date formatting writer.datetime_format = None # <--- this one for datetime df1.to_excel(writer,'Sheet1') df2.to_excel(writer,'Sheet2') ``` Not clear why the keyword arguments never make it through the openpyxl deprecation wrapper...but they don't. In addition if you're formatting any other cells use the new [openpyxl api](http://openpyxl.readthedocs.org/). All of these problems go away if you have Anaconda, by the way.
How to get Best Estimator on GridSearchCV (Random Forest Classifier Scikit)
30,102,973
9
2015-05-07T13:45:10Z
30,105,413
15
2015-05-07T15:24:28Z
[ "python", "scikit-learn", "random-forest", "cross-validation" ]
I'm running GridSearch CV to optimize the parameters of a classifier in scikit. Once I'm done, I'd like to know which parameters were chosen as the best. Whenever I do so I get an `AttributeError: 'RandomForestClassifier' object has no attribute 'best_estimator_'`, and can't tell why, as it seems to be a legitimate attribute on the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.grid_search.GridSearchCV.html). ``` from sklearn.grid_search import GridSearchCV X = data[usable_columns] y = data[target] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0) rfc = RandomForestClassifier(n_jobs=-1,max_features= 'sqrt' ,n_estimators=50, oob_score = True) param_grid = { 'n_estimators': [200, 700], 'max_features': ['auto', 'sqrt', 'log2'] } CV_rfc = GridSearchCV(estimator=rfc, param_grid=param_grid, cv= 5) print '\n',CV_rfc.best_estimator_ ``` Yields: ``` AttributeError: 'GridSearchCV' object has no attribute 'best_estimator_' ```
You have to fit your data before you can get the best parameter combination. ``` from sklearn.grid_search import GridSearchCV from sklearn.datasets import make_classification from sklearn.ensemble import RandomForestClassifier # Build a classification task using 3 informative features X, y = make_classification(n_samples=1000, n_features=10, n_informative=3, n_redundant=0, n_repeated=0, n_classes=2, random_state=0, shuffle=False) rfc = RandomForestClassifier(n_jobs=-1,max_features= 'sqrt' ,n_estimators=50, oob_score = True) param_grid = { 'n_estimators': [200, 700], 'max_features': ['auto', 'sqrt', 'log2'] } CV_rfc = GridSearchCV(estimator=rfc, param_grid=param_grid, cv= 5) CV_rfc.fit(X, y) print CV_rfc.best_params_ ``` Once the grid search has been fitted, `CV_rfc.best_estimator_` (the model refit with the winning parameters) and `CV_rfc.best_score_` are available as well.
Go - constructing struct/json on the fly
30,105,798
2
2015-05-07T15:41:31Z
30,106,057
9
2015-05-07T15:54:08Z
[ "python", "json", "go", "syntactic-sugar" ]
In Python it is possible to create a dictionary and serialize it as a JSON object like this: ``` example = { "key1" : 123, "key2" : "value2" } js = json.dumps(example) ``` Go is statically typed, so we have to declare the object schema first: ``` type Example struct { Key1 int Key2 string } example := &Example { Key1 : 123, Key2 : "value2" } js, _ := json.Marshal(example) ``` Sometimes object (struct) with a specific schema (type declaration) is needed just in one place and nowhere else. I don't want to spawn numerous useless types, and I don't want to use reflection for this. Is there any syntactic sugar in Go that provides a more elegant way to do this?
You can use a map: ``` example := map[string]interface{}{ "Key1": 123, "Key2": "value2" } js, _ := json.Marshal(example) ``` You can also create types inside of a function: ``` func f() { type Example struct { } } ``` Or create unnamed types: ``` func f() { json.Marshal(struct { Key1 int; Key2 string }{123, "value2"}) } ```
Looking for idiomatic way to evaluate to False if argument is False in Python 3
30,107,570
14
2015-05-07T17:08:03Z
30,107,704
9
2015-05-07T17:15:13Z
[ "python", "python-3.x" ]
I have a chain of functions, all defined elsewhere in the class: ``` fus(roh(dah(inp))) ``` where `inp` is either a dictionary, or `bool(False)`. The desired result is that if `inp`, or any of the functions evaluate to `False`, `False` is returned by the function stack. I attempted to use ternary operators, but they don't evaluate correctly. ``` def func(inp): return int(inp['value']) + 1 if inp else False ``` throws a TypeError, bool not subscriptable, if `inp == False` because `inp['value']` is evaluated before the conditional. I know I can do it explicitly: ``` def func(inp): if inp == False: return False else: return inp['value'] + 1 ``` but there are a ton of functions, and this will nearly quadruple the length of my code. It's also rewriting the exact same lines of code again and again, which suggests to me that it is the wrong way to do things. I suspect that a decorator with arguments is the answer, but the more I play around with it the less sure I am about that. ``` def validate_inp(inp): def decorator(func): def wrapper(*args): return func(inp) if inp else False return wrapper return decorator @validate_inp(inp) def func(inp): return int(inp['value']) + 1 ``` Unfortunately the decorator call throws a NameError, 'inp' not defined. But I'm not sure if I'm using the decorator incorrectly, or the decorator is the wrong solution. Looking for comment, criticism, suggestion, and/or sanity check. --- ## If you found this trying to solve your own problem... You probably want to be using empty dictionaries instead of boolean False. Props to @chepner. In my application, using False was "okay" but offered no advantages and caused some chunky blocks of code. I've found everything is simpler using an empty dictionary instead. I'm wrapping the functions that use the dict with a decorator that catches the KeyError thrown by referencing `dict['value']` where `dict` is empty.
The decorator should look like this: ``` def validate_inp(fun): def wrapper(inp): return fun(inp) if inp else False return wrapper @validate_inp def func(inp): return int(inp['value']) + 1 print(func(False)) print(func({'value': 1})) ``` If you want to use the decorator on a method of a class, the wrapper also needs to accept `self`: ``` def validate_inp(fun): def wrapper(self, inp): return fun(self, inp) if inp else False return wrapper class Foo(object): @validate_inp def func(self, inp): return int(inp['value']) + 1 if inp else False foo = Foo() print(foo.func(False)) print(foo.func({'value': 1})) ```
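As a hedged illustration (not part of the original answer), here is the same decorator applied to the question's full chain `fus(roh(dah(inp)))`; the bodies of `fus`, `roh`, and `dah` are invented for the example:

```python
def validate_inp(fun):
    def wrapper(inp):
        # Short-circuit: skip the wrapped function entirely when inp is falsy
        return fun(inp) if inp else False
    return wrapper

@validate_inp
def dah(inp):
    return {'value': int(inp['value']) + 1}

@validate_inp
def roh(inp):
    return {'value': inp['value'] * 2}

@validate_inp
def fus(inp):
    return inp['value'] - 3

print(fus(roh(dah({'value': 1}))))  # 1 -> 2 -> 4 -> 1
print(fus(roh(dah(False))))         # False falls through every stage
```

Because each stage short-circuits on a falsy input, a single `False` anywhere in the chain propagates to the final result without any stage ever touching `inp['value']`.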
Looking for idiomatic way to evaluate to False if argument is False in Python 3
30,107,570
14
2015-05-07T17:08:03Z
30,109,995
8
2015-05-07T19:27:41Z
[ "python", "python-3.x" ]
I have a chain of functions, all defined elsewhere in the class: ``` fus(roh(dah(inp))) ``` where `inp` is either a dictionary, or `bool(False)`. The desired result is that if `inp`, or any of the functions evaluate to `False`, `False` is returned by the function stack. I attempted to use ternary operators, but they don't evaluate correctly. ``` def func(inp): return int(inp['value']) + 1 if inp else False ``` throws a TypeError, bool not subscriptable, if `i == False` because `inp['value']` is evaluated before the conditional. I know I can do it explicitly: ``` def func(inp): if inp == False: return False else: return inp['value'] + 1 ``` but there are a ton of functions, and this will nearly quadruple the length of my code. It's also rewriting the exact same lines of code again and again, which suggests to me that it is the wrong way to do things. I suspect that a decorator with arguments is the answer, but the more I play around with it the less sure I am about that. ``` def validate_inp(inp): def decorator(func): def wrapper(*args): return func(inp) if inp else False return wrapper return decorator @validate_inp(inp) def func(inp): return int(inp['value']) + 1 ``` Unfortunately the decorator call throws a NameError, 'inp' not defined. But I'm not sure if I'm using the decorator incorrectly, or the decorator is the wrong solution. Looking for comment, criticism, suggestion, and/or sanity check. --- ## If you found this trying to solve your own problem... You probably want to be using empty dictionaries instead of boolean False. Props to @chepner. In my application, using False was "okay" bur offered no advantages and caused some chunky blocks of code. I've found everything is simpler using an empty dictionary instead. I'm wrapping the functions that use the dict with a decorator that catches the KeyError thrown by referencing `dict['value']` where `dict` is empty.
> I attempted to use ternary operators, but they don't evaluate correctly. > > ``` > def func(inp): > return int(inp['value']) + 1 if inp else False > ``` > > throws a TypeError, bool not subscriptable, if `inp == False` because `inp['value']` is evaluated before the conditional. This is not true - that code works, because a conditional expression evaluates its condition first. Further, you can just write ``` def func(inp): return inp and (int(inp['value']) + 1) ``` To automatically wrap functions like this, make a function that wraps a function: ``` def fallthrough_on_false(function): def inner(inp): return inp and function(inp) return inner ``` This should be improved by using `functools.wraps` to carry through the wrapped function's name and docstring, and it should probably take a variadic number of arguments to allow for optional extensions: ``` from functools import wraps def fallthrough_on_false(function): @wraps(function) def inner(inp, *args, **kwargs): return inp and function(inp, *args, **kwargs) return inner ```
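The question's postscript settles on empty dictionaries plus a decorator that absorbs the resulting `KeyError`; a minimal sketch of that idea follows (the function name `bump` and this exact decorator are assumptions for illustration, not code from the thread):

```python
from functools import wraps

def empty_dict_fallthrough(function):
    @wraps(function)
    def inner(inp, *args, **kwargs):
        try:
            return function(inp, *args, **kwargs)
        except KeyError:
            # inp['value'] raised on an empty dict: propagate "empty"
            return {}
    return inner

@empty_dict_fallthrough
def bump(inp):
    return {'value': int(inp['value']) + 1}

print(bump({'value': 41}))  # {'value': 42}
print(bump({}))             # {} -- the empty dict falls through
```

Compared with passing `False` around, every stage then consumes and produces the same type (a dict), and the falsy `{}` still short-circuits cleanly through a chain.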