Dataset schema (field, type, min, max):

| field | type | min | max |
|---|---|---|---|
| title | string (length) | 12 | 150 |
| question_id | int64 | 469 | 40.1M |
| question_score | int64 | 2 | 5.52k |
| question_date | string (date) | 2008-08-02 15:11:16 | 2016-10-18 06:16:31 |
| answer_id | int64 | 536 | 40.1M |
| answer_score | int64 | 7 | 8.38k |
| answer_date | string (date) | 2008-08-02 18:49:07 | 2016-10-18 06:19:33 |
| tags | list (length) | 1 | 5 |
| question_body_md | string (length) | 15 | 30.2k |
| answer_body_md | string (length) | 11 | 27.8k |
Numpy: convert an array to a triangular matrix
20,055,493
9
2013-11-18T18:51:00Z
20,055,748
15
2013-11-18T19:04:50Z
[ "python", "numpy", "linear-algebra" ]
I was looking for a built-in method to convert a linear array to a triangular matrix. Since I failed to find one, I am asking for help in implementing one. Imagine an array like: ``` In [203]: dm Out[203]: array([ 0.80487805, 0.90243902, 0.85365854, ..., 0.95121951, 0.90243902, 1. ]) In [204]...
``` >>> tri = np.zeros((67, 67)) >>> tri[np.triu_indices(67, 1)] = dm ``` See [doc for `triu_indices`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.triu_indices.html) for details. To get a lower-triangular matrix, use `np.tril_indices` and set the offset to `-1` instead of `1`.
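A minimal runnable sketch of the answer's technique, using a 4×4 matrix instead of the question's 67×67 one (the values are made up):

```python
import numpy as np

# A condensed vector of strictly-upper-triangular values for a 4x4
# matrix needs n*(n-1)/2 = 6 entries (made-up values here).
dm = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])

tri = np.zeros((4, 4))
tri[np.triu_indices(4, 1)] = dm  # fill above the diagonal, row by row
print(tri[0, 1], tri[2, 3])  # -> 0.1 0.6
```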
What is the cleanest way to count frequencies in python
20,055,773
2
2013-11-18T19:06:47Z
20,055,806
7
2013-11-18T19:09:01Z
[ "python", "counting" ]
I have data like the following: [0.1, 0.2, 1, 5, 100] and so on... What I want to do is count the number of items between ``` 1-10 11-20 21-30 ... and so on... ``` Right now, I have very messy code. What I have done is map ``` 1-10 :=> 0 11-20:=> 1 ..and on.. ``` So I have defined buckets where bucket 0 has range 1-10, buc...
Use a `Counter` and compute the bucket using integer division. ``` from collections import Counter freq = Counter() for x in data: freq[(x - 1) // 10] += 1 ``` Note that this maps values less than one to `-1`. When dealing with not-strictly positive data, you'll actually want to use ranges 1-9, 10-19, etc.
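A runnable version of the answer's sketch, on made-up data:

```python
from collections import Counter

data = [0.1, 0.2, 1, 5, 100, 11, 15, 23]

freq = Counter()
for x in data:
    freq[(x - 1) // 10] += 1

# bucket 0 holds 1-10, bucket 1 holds 11-20, ...;
# values below 1 (like 0.1 and 0.2) fall into bucket -1
print(freq[0], freq[1], freq[-1])  # -> 2 2 2
```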
list comprehension filtering - "the set() trap"
20,056,458
60
2013-11-18T19:48:08Z
20,056,551
47
2013-11-18T19:52:58Z
[ "python", "python-3.x", "python-internals" ]
A reasonably common operation is to filter one `list` based on another `list`. People quickly find that this: ``` [x for x in list_1 if x in list_2] ``` is slow for large inputs - it's O(n\*m). Yuck. How do we speed this up? Use a `set` to make filtering lookups O(1): ``` s = set(list_2) [x for x in list_1 if x in s...
In order to optimize `set(list_2)`, the interpreter needs to prove that `list_2` (and all of its elements) does not change between iterations. This is a hard problem in the general case, and it would not surprise me if the interpreter does not even attempt to tackle it. On the other hand a set literal cannot change it...
list comprehension filtering - "the set() trap"
20,056,458
60
2013-11-18T19:48:08Z
20,056,575
10
2013-11-18T19:54:12Z
[ "python", "python-3.x", "python-internals" ]
A reasonably common operation is to filter one `list` based on another `list`. People quickly find that this: ``` [x for x in list_1 if x in list_2] ``` is slow for large inputs - it's O(n\*m). Yuck. How do we speed this up? Use a `set` to make filtering lookups O(1): ``` s = set(list_2) [x for x in list_1 if x in s...
The basic reason is that a literal really can't change, whereas if it's an expression like `set(list_2)`, it's possible that evaluating the target expression or the iterable of the comprehension could change the value of `set(list_2)`. For instance, if you have ``` [f(x) for x in list_1 if x in set(list_2)] ``` It is...
list comprehension filtering - "the set() trap"
20,056,458
60
2013-11-18T19:48:08Z
20,056,585
38
2013-11-18T19:54:34Z
[ "python", "python-3.x", "python-internals" ]
A reasonably common operation is to filter one `list` based on another `list`. People quickly find that this: ``` [x for x in list_1 if x in list_2] ``` is slow for large inputs - it's O(n\*m). Yuck. How do we speed this up? Use a `set` to make filtering lookups O(1): ``` s = set(list_2) [x for x in list_1 if x in s...
From [What’s New In Python 3.2](http://docs.python.org/dev/whatsnew/3.2.html#optimizations): > Python’s peephole optimizer now recognizes patterns such `x in {1, 2, > 3}` as being a test for membership in a set of constants. The optimizer > recasts the set as a frozenset and stores the pre-built constant.
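The optimization the release notes describe is why the literal form is fast: `{1, 2, 3}` is folded to a `frozenset` constant at compile time, while `set(list_2)` must be rebuilt for every element tested. A small sketch (the exact bytecode produced varies by CPython version, so inspect with `dis.dis` on your interpreter if curious):

```python
def filter_with_literal(list_1):
    # On CPython 3.2+ the peephole optimizer folds {1, 2, 3}
    # into a pre-built frozenset constant, built exactly once.
    return [x for x in list_1 if x in {1, 2, 3}]

print(filter_with_literal([0, 1, 2, 5]))  # -> [1, 2]
```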
list comprehension filtering - "the set() trap"
20,056,458
60
2013-11-18T19:48:08Z
20,056,895
18
2013-11-18T20:11:14Z
[ "python", "python-3.x", "python-internals" ]
A reasonably common operation is to filter one `list` based on another `list`. People quickly find that this: ``` [x for x in list_1 if x in list_2] ``` is slow for large inputs - it's O(n\*m). Yuck. How do we speed this up? Use a `set` to make filtering lookups O(1): ``` s = set(list_2) [x for x in list_1 if x in s...
> So now I'm wondering - why can python 3.x optimize away the set > literal to only build once, but not set(list\_2)? No one's mentioned this issue yet: how do you know `set([1,2,3])` and `{1, 2, 3}` are the same thing? ``` >>> import random >>> def set(arg): ... return [random.choice(range(5))] ... >>> list1 = ...
list comprehension filtering - "the set() trap"
20,056,458
60
2013-11-18T19:48:08Z
20,057,101
13
2013-11-18T20:22:22Z
[ "python", "python-3.x", "python-internals" ]
A reasonably common operation is to filter one `list` based on another `list`. People quickly find that this: ``` [x for x in list_1 if x in list_2] ``` is slow for large inputs - it's O(n\*m). Yuck. How do we speed this up? Use a `set` to make filtering lookups O(1): ``` s = set(list_2) [x for x in list_1 if x in s...
*Too long for a comment* This won't speak to the optimization details or v2 vs. v3 differences. But when I encounter this in some situations, I find making a context manager out of the data object is useful: ``` class context_set(set): def __enter__(self): return self def __exit__(self, *args): ...
printing double quotes around a variable
20,056,548
2
2013-11-18T19:52:51Z
20,056,615
9
2013-11-18T19:56:02Z
[ "python", "python-2.7" ]
For instance, we have: ``` word = 'Some Random Word' print '"' + word + '"' ``` is there a better way to print double quotes around a variable?
You can try this ``` print '"%s"' % word ``` OR ``` print '"{}"'.format(word) ``` OR ``` print "\"%s\"" % word ``` And, if double quotes are not a strict requirement (i.e. single quotes would do) ``` from pprint import pprint pprint(word) ``` OR, as others have already said, ``` word = '"Some word"' print word ``` Us...
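The question and answer are Python 2; in Python 3 the same idea is usually written with an f-string (my addition, not part of the original answer):

```python
word = 'Some Random Word'
print(f'"{word}"')          # -> "Some Random Word"
print('"{}"'.format(word))  # the .format() form still works too
```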
What does the "as" statement mean in python?
20,056,716
3
2013-11-18T20:02:10Z
20,056,790
7
2013-11-18T20:06:02Z
[ "python", "flask" ]
I'm just starting to learn the Flask framework and was wondering what the 'as' statement does. It's used in conjunction with a 'with' statement. Here's the example: ``` def init_db(): with closing (connect_db()) as db: with app.open_resource('schema.sql', mode='r') as f: db.cursor().executescript(f.read(...
The `as` keyword is used to add clauses to a few different statements (e.g., [`import`](http://docs.python.org/3.3/reference/simple_stmts.html#the-import-statement)); there is no "`as` statement". In the `with` statement, it means that the value of the `with` context gets assigned to that variable. The precise explana...
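A self-contained sketch showing that in a `with` statement, `as` binds whatever the context manager's `__enter__` returns (the `resource` name is made up for illustration):

```python
from contextlib import contextmanager

@contextmanager
def resource():
    # the yielded value is what __enter__ returns,
    # and therefore what the 'as' clause binds
    yield "handle"

with resource() as r:
    print(r)  # -> handle
```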
How to remove gaps between subplots in matplotlib?
20,057,260
18
2013-11-18T20:31:38Z
20,058,199
26
2013-11-18T21:21:48Z
[ "python", "matplotlib" ]
The code below produces gaps between the subplots. How do I remove the gaps between the subplots and make the image a tight grid? [![enter image description here](http://i.stack.imgur.com/uBn4j.png)](http://i.stack.imgur.com/uBn4j.png) ``` import matplotlib.pyplot as plt for i in range(16): i = i + 1 ax1 = p...
You can use [gridspec](http://matplotlib.org/api/gridspec_api.html) to control the spacing between axes. There's more [information](http://matplotlib.org/users/gridspec.html) here. ``` import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec plt.figure(figsize = (4,4)) gs1 = gridspec.GridSpec(4, 4) gs1....
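A complete version of the gridspec approach sketched in the truncated answer, using the non-interactive Agg backend so it runs headless:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec

fig = plt.figure(figsize=(4, 4))
gs1 = gridspec.GridSpec(4, 4)
gs1.update(wspace=0.0, hspace=0.0)  # no gaps between axes

for i in range(16):
    ax = fig.add_subplot(gs1[i])
    ax.set_xticks([])  # tick labels would force space between axes
    ax.set_yticks([])
```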
How to remove gaps between subplots in matplotlib?
20,057,260
18
2013-11-18T20:31:38Z
36,049,914
10
2016-03-17T01:09:16Z
[ "python", "matplotlib" ]
The code below produces gaps between the subplots. How do I remove the gaps between the subplots and make the image a tight grid? [![enter image description here](http://i.stack.imgur.com/uBn4j.png)](http://i.stack.imgur.com/uBn4j.png) ``` import matplotlib.pyplot as plt for i in range(16): i = i + 1 ax1 = p...
The problem is the use of `aspect='equal'`, which prevents the subplots from stretching to an arbitrary aspect ratio and filling up all the empty space. Normally, this would work: ``` import matplotlib.pyplot as plt ax = [plt.subplot(2,2,i+1) for i in range(4)] for a in ax: a.set_xticklabels([]) a.set_ytick...
Check if two Python functions are equal
20,059,011
6
2013-11-18T22:09:55Z
20,059,029
13
2013-11-18T22:12:09Z
[ "python", "function", "lambda" ]
I am wondering how I could check to see if two functions are the same. An example would be `(lambda x: x) == (lambda y: y)` evaluating to true. As far as I know, Python will check to see if the functions occupy the same location in memory, but not whether they have the same operation. I know it seems impractical to hav...
The one thing you could test for is code object equality: ``` >>> x = lambda x: x >>> y = lambda y: y >>> x.__code__.co_code '|\x00\x00S' >>> x.__code__.co_code == y.__code__.co_code True ``` Here the bytecode for both functions is the same. You'll perhaps need to verify more aspects of the code objects (closures spr...
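A quick check of the bytecode comparison (this only holds within one CPython version; `co_code` is an implementation detail, and equal bytecode alone ignores constants, closures, and defaults):

```python
x = lambda x: x
y = lambda y: y

# identical bodies compile to identical bytecode on one interpreter
same_code = x.__code__.co_code == y.__code__.co_code
print(same_code)  # -> True

z = lambda z: z + 1
print(x.__code__.co_code == z.__code__.co_code)  # -> False
```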
Handle exception in __init__
20,059,766
9
2013-11-18T22:57:43Z
20,060,037
9
2013-11-18T23:17:10Z
[ "python", "exception" ]
Is it fine to raise an exception in `__init__` in python? I have this piece of code: ``` class VersionManager(object): def __init__(self, path): self._path = path if not os.path.exists(path): os.mkdir(path) myfunction(path) ``` The second line can potentially result in an exception. In tha...
It is perfectly fine to raise an exception in `__init__`. You would then wrap the object initiation/creation call with `try/except` and react to the exception. One potential odd result though is that `__del__` is run anyway: ``` class Demo(object): def __init__(self, value): self.value=value if va...
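A minimal sketch of wrapping construction in `try`/`except` as the answer suggests (class and values are hypothetical):

```python
class Demo:
    def __init__(self, value):
        if value == 2:
            raise ValueError("bad value: 2")
        self.value = value

try:
    thing = Demo(2)
except ValueError as exc:
    # the object never finished initializing; react here
    print("construction failed:", exc)
```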
Installing PIL with pip
20,060,096
95
2013-11-18T23:22:28Z
20,061,019
188
2013-11-19T00:47:15Z
[ "python", "django", "python-2.7", "python-imaging-library", "pip" ]
I am trying to install PIL (the Python Imaging Library) using the command: ``` sudo pip install pil ``` but I get the following message: ``` Downloading/unpacking PIL You are installing a potentially insecure and unverifiable file. Future versions of pip will default to disallowing insecure files. Downloading PI...
1. Install Xcode and Xcode Command Line Tools as mentioned. 2. Use Pillow instead, as PIL is basically dead. Pillow is a maintained fork of PIL. <https://pypi.python.org/pypi/Pillow/2.2.1> ``` pip install Pillow ``` If you have both Pythons installed and want to install this for Python3: ``` $ python3 -m pip instal...
Installing PIL with pip
20,060,096
95
2013-11-18T23:22:28Z
20,802,457
15
2013-12-27T14:33:54Z
[ "python", "django", "python-2.7", "python-imaging-library", "pip" ]
I am trying to install PIL (the Python Imaging Library) using the command: ``` sudo pip install pil ``` but I get the following message: ``` Downloading/unpacking PIL You are installing a potentially insecure and unverifiable file. Future versions of pip will default to disallowing insecure files. Downloading PI...
On Mac OS X, use this command: ``` sudo pip install http://effbot.org/media/downloads/Imaging-1.1.7.tar.gz ```
Installing PIL with pip
20,060,096
95
2013-11-18T23:22:28Z
21,151,777
53
2014-01-16T01:28:40Z
[ "python", "django", "python-2.7", "python-imaging-library", "pip" ]
I am trying to install PIL (the Python Imaging Library) using the command: ``` sudo pip install pil ``` but I get the following message: ``` Downloading/unpacking PIL You are installing a potentially insecure and unverifiable file. Future versions of pip will default to disallowing insecure files. Downloading PI...
This worked for me: ``` apt-get install python-dev apt-get install libjpeg-dev apt-get install libjpeg8-dev apt-get install libpng3 apt-get install libfreetype6-dev ln -s /usr/lib/i386-linux-gnu/libfreetype.so /usr/lib ln -s /usr/lib/i386-linux-gnu/libjpeg.so /usr/lib ln -s /usr/lib/i386-linux-gnu/libz.so /usr/lib pip insta...
Installing PIL with pip
20,060,096
95
2013-11-18T23:22:28Z
26,605,247
21
2014-10-28T09:51:26Z
[ "python", "django", "python-2.7", "python-imaging-library", "pip" ]
I am trying to install PIL (the Python Imaging Library) using the command: ``` sudo pip install pil ``` but I get the following message: ``` Downloading/unpacking PIL You are installing a potentially insecure and unverifiable file. Future versions of pip will default to disallowing insecure files. Downloading PI...
I got the answer from a discussion [here](https://groups.google.com/forum/#!msg/opentreemap-user/3eGxGUgzA4Q/JwTrpXXGUSQJ): I tried ``` pip install --no-index -f http://dist.plone.org/thirdparty/ -U PIL ``` and it worked.
Installing PIL with pip
20,060,096
95
2013-11-18T23:22:28Z
32,459,666
28
2015-09-08T13:53:39Z
[ "python", "django", "python-2.7", "python-imaging-library", "pip" ]
I am trying to install PIL (the Python Imaging Library) using the command: ``` sudo pip install pil ``` but I get the following message: ``` Downloading/unpacking PIL You are installing a potentially insecure and unverifiable file. Future versions of pip will default to disallowing insecure files. Downloading PI...
It is very simple to install with apt; use one of these commands: ``` sudo apt-get install python-PIL ``` or ``` sudo pip install pillow ``` or ``` sudo easy_install pillow ```
Convert a string list to a tuple in Python
20,060,165
3
2013-11-18T23:28:44Z
20,060,183
8
2013-11-18T23:30:17Z
[ "python", "string", "tuples" ]
I am trying to convert a string such as `'SUP E P I C'` to a tuple containing all the non-spaced strings. For example, if the input is `'SUP E P I C'`, then the program should return `('S', 'U', 'P', 'E', 'P', 'I', 'C')` and I am trying the obvious method of looping, and I started as follows: ``` for ch in john: i...
tuples are immutable, so building them up one item at a time is very inefficient. You can pass a sequence directly to `tuple` ``` >>> tuple('SUP E P I C'.replace(" ","")) ('S', 'U', 'P', 'E', 'P', 'I', 'C') ``` or use a generator expression (overkill for this example) ``` >>> tuple(x for x in 'SUP E P I C' if not x....
sqlalchemy, select using reverse-inclusive (not in) list of child column values
20,060,485
11
2013-11-18T23:57:46Z
20,068,327
9
2013-11-19T09:57:11Z
[ "python", "sqlalchemy", "flask-sqlalchemy" ]
I have a typical Post / Tags (many tags associated with one post) relationship in flask-sqlalchemy, and I want to select posts which aren't tagged with any tag in a list I provide. First, the models I set up: ``` class Post(db.Model): id = db.Column(db.Integer, primary_key=True) tags = db.relationship('Tag', l...
Pretty straightforward using negated [`any`](http://docs.sqlalchemy.org/en/rel_0_9/orm/internals.html#sqlalchemy.orm.properties.RelationshipProperty.Comparator.any): ``` query = session.query(Post).filter(~Post.tags.any(Tag.name.in_(['dont', 'want', 'these']))) ```
Why are there no control structures for python FOR loop?
20,060,546
4
2013-11-19T00:03:14Z
20,060,585
8
2013-11-19T00:07:04Z
[ "python", "loops", "for-loop" ]
In Java, we can have a FOR loop like this ``` for (i = 0; i < 10; i += 1) { } ``` Yet this is not possible in Python; FOR can only be used in the "foreach" sense, looping through each element of an iterable. Why were the control structures for the FOR loop left out of Python?
[The Zen of Python](http://www.python.org/dev/peps/pep-0020/) says “There should be one-- and preferably only one --obvious way to do it.” Python already has this way to do it: ``` for i in xrange(10): pass ``` If you want to make all the parts explicit, you can do that too: ``` for i in xrange(0, 10, 1): ...
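In Python 3, `xrange` is gone and `range` does the same job; the Java-style loop maps directly onto the explicit form:

```python
# Java: for (i = 0; i < 10; i += 1) { ... }
total = 0
for i in range(0, 10, 1):  # start, stop, step spelled out
    total += i
print(total)  # -> 45
```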
Splitting a string by using two substrings in Python
20,062,565
5
2013-11-19T03:26:21Z
20,062,596
9
2013-11-19T03:29:32Z
[ "python", "string", "search" ]
I am searching a string by using `re`, which works quite well for almost all cases except when there is a newline character (`\n`). For instance, if the string is defined as: ``` testStr = " Test to see\n\nThis one print\n " ``` Then searching like this `re.search('Test(.*)print', testStr)` does not return anything. Wha...
The re module has re.DOTALL to indicate "." should also match newlines. Normally "." matches anything except a newline. ``` re.search('Test(.*)print', testStr, re.DOTALL) ``` Alternatively: ``` re.search('Test((?:.|\n)*)print', testStr) # (?:…) is a non-matching group to apply * ``` Example: ``` >>> testStr = " ...
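A complete run of the answer's first suggestion, on the question's own string:

```python
import re

testStr = " Test to see\n\nThis one print\n "

# without re.DOTALL, '.' cannot cross the newlines and the search fails
assert re.search('Test(.*)print', testStr) is None

m = re.search('Test(.*)print', testStr, re.DOTALL)
print(repr(m.group(1)))  # -> ' to see\n\nThis one '
```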
Pandas dataframe get first row of each group
20,067,636
18
2013-11-19T09:24:34Z
20,067,665
41
2013-11-19T09:25:52Z
[ "python", "pandas", "dataframe" ]
I have a pandas `DataFrame` like following. ``` df = pd.DataFrame({'id' : [1,1,1,2,2,3,3,3,3,4,4,5,6,6,6,7,7], 'value' : ["first","second","second","first", "second","first","third","fourth", "fifth","second","fifth","first", ...
``` >>> df.groupby('id').first() value id 1 first 2 first 3 first 4 second 5 first 6 first 7 fourth ``` If you need `id` as column: ``` >>> df.groupby('id').first().reset_index() id value 0 1 first 1 2 first 2 3 first 3 4 second 4 5 first 5 6 first 6 7 fo...
Pandas good approach to get top n records within each group
20,069,009
27
2013-11-19T10:28:36Z
20,069,379
35
2013-11-19T10:46:03Z
[ "python", "pandas", "greatest-n-per-group", "window-functions" ]
Suppose I have pandas DataFrame like this: ``` >>> df = pd.DataFrame({'id':[1,1,1,2,2,2,2,3,4],'value':[1,2,3,1,2,3,4,1,1]}) >>> df id value 0 1 1 1 1 2 2 1 3 3 2 1 4 2 2 5 2 3 6 2 4 7 3 1 8 4 1 ``` I want to get new DataFrame with top 2 records for e...
Did you try `df.groupby('id').head(2)`? Output generated: ``` >>> df.groupby('id').head(2) id value id 1 0 1 1 1 1 2 2 3 2 1 4 2 2 3 7 3 1 4 8 4 1 ``` (Keep in mind that you might need to order/sort before, depending on your data) EDIT: As ment...
Pandas good approach to get top n records within each group
20,069,009
27
2013-11-19T10:28:36Z
32,397,818
19
2015-09-04T12:14:13Z
[ "python", "pandas", "greatest-n-per-group", "window-functions" ]
Suppose I have pandas DataFrame like this: ``` >>> df = pd.DataFrame({'id':[1,1,1,2,2,2,2,3,4],'value':[1,2,3,1,2,3,4,1,1]}) >>> df id value 0 1 1 1 1 2 2 1 3 3 2 1 4 2 2 5 2 3 6 2 4 7 3 1 8 4 1 ``` I want to get new DataFrame with top 2 records for e...
[Since 0.14.1](https://github.com/pydata/pandas/issues/7053), you can now do `nlargest` and `nsmallest` on a `groupby` object: ``` In [23]: df.groupby('id')['value'].nlargest(2) Out[23]: id 1 2 3 1 2 2 6 4 5 3 3 7 1 4 8 1 dtype: int64 ``` There's a slight weirdness that you get t...
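A runnable version on the question's data (pandas ≥ 0.14.1):

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 1, 2, 2, 2, 2, 3, 4],
                   'value': [1, 2, 3, 1, 2, 3, 4, 1, 1]})

# the two largest values within each id group
top2 = df.groupby('id')['value'].nlargest(2)
print(list(top2))  # -> [3, 2, 4, 3, 1, 1]
```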
Flask-Principal Best Practice of Handling PermissionDenied Exception
20,069,150
5
2013-11-19T10:36:09Z
20,069,821
8
2013-11-19T11:06:14Z
[ "python", "flask", "flask-principal" ]
I am new in writing flask and currently use `flask-principal` as my authorization mechanism. When a user tries to access a url without the required permission, flask-principal raises a `PermissionDenied` Exception. It causes my system to throw a `500 internal server` error. How could I catch the specific exception an...
You can tell Flask-Principal that you want to raise a specific HTTP error code instead: ``` @app.route('/admin') @admin_permission.require(http_exception=403) def admin(request): # ... ``` Now [`flask.abort()`](http://flask.pocoo.org/docs/api/#flask.abort) will be called instead of raising `PermissionDenied`. For...
More efficient algorithm for shortest superstring search
20,071,702
6
2013-11-19T12:32:28Z
20,072,009
7
2013-11-19T12:48:20Z
[ "python", "algorithm", "optimization" ]
My problem below is NP-complete; however, I'm trying to find at least a marginally faster string-search function or module that might help reduce some of the computation time compared to where it is now. Any suggestions would be appreciated. The concatenated (longest possible) superstring is: AGGAGTCCGCGTGAGG...
This should do it. ``` import itertools as it SEQUENCES = ['AGG', 'AGT', 'CCG', 'CGT', 'GAG', 'GGA', 'GGT', 'GTA', 'GTG', 'TAG', 'TGG'] LONGEST_SUPERSTRING = ''.join(SEQUENCES) def find_shortest_superstring(): current_shortest = LONGEST_SUPERSTRING trim = len(current_shortest)-1 seen_prefixes = set() ...
More efficient algorithm for shortest superstring search
20,071,702
6
2013-11-19T12:32:28Z
20,074,602
14
2013-11-19T14:47:46Z
[ "python", "algorithm", "optimization" ]
My problem below is NP-complete, however, I'm trying to find at least a marginally faster string search function or module that might help in reducing some of the computation time compared to where it is at now. Any suggestions would be appreciated. The concatenated (longest possible) superstring is: AGGAGTCCGCGTGAGG...
I applied Dijkstra's algorithm (a breadth-first search) and have a solution giving an answer to this task in less than a second. I optimized it a bit in terms of memory usage, but I think concerning the algorithm this is a better approach than the one in the other answer. Unless we run out of memory this should be a better solu...
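Both answers are truncated above; for scale, here is a brute-force baseline that merges every permutation by maximal pairwise overlap. It is O(n!) and only viable for tiny inputs (and assumes no sequence is contained in another) — an illustration of the problem, not either answerer's optimized search:

```python
from itertools import permutations

def merge(a, b):
    """Append b to a, overlapping the longest suffix of a that prefixes b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return a + b[k:]
    return a + b

def shortest_superstring(seqs):
    best = ''.join(seqs)  # plain concatenation is always a valid superstring
    for perm in permutations(seqs):
        s = perm[0]
        for t in perm[1:]:
            s = merge(s, t)
        if len(s) < len(best):
            best = s
    return best

print(shortest_superstring(['AGG', 'GGA', 'GAT']))  # -> AGGAT
```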
Sort a deeply nested tuple
20,072,797
3
2013-11-19T13:27:37Z
20,072,919
8
2013-11-19T13:32:44Z
[ "python", "python-3.x", "iteration", "tuples" ]
I have a list of nested tuples that looks like: ``` l = [('apple', ['gala','fuji', 'macintosh']), ('pear', ['seckel','anjou','bosc'])] ``` And I like to sort the second item of the tuple alphabetically, so that it would look like: ``` l2 = [('apple', ['fuji','gala', 'macintosh']), ('pear', ['anjou','bosc','seckel'])...
Something like the following - using a list comp to build new tuples... ``` l = [('apple', ['gala','fuji', 'macintosh']), ('pear', ['seckel','anjou','bosc'])] l2 = [(k, sorted(v)) for k, v in l] # [('apple', ['fuji', 'gala', 'macintosh']), ('pear', ['anjou', 'bosc', 'seckel'])] ```
IPython/matplotlib: Return subplot from function
20,073,017
4
2013-11-19T13:36:53Z
20,073,551
7
2013-11-19T14:00:39Z
[ "python", "matplotlib", "ipython" ]
Using Matplotlib in an IPython Notebook, I would like to create a figure with subplots which are returned from a function: ``` import matplotlib.pyplot as plt %matplotlib inline def create_subplot(data): more_data = do_something_on_data() bp = plt.boxplot(more_data) # return boxplot? return bp # m...
Typically, you'd do something like this: ``` def create_subplot(data, ax=None): if ax is None: ax = plt.gca() more_data = do_something_on_data() bp = ax.boxplot(more_data) return bp # make figure with subplots f, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(10,5)) create_subplot(dat...
Auto zero-padding an array
20,073,036
3
2013-11-19T13:38:04Z
20,073,200
7
2013-11-19T13:44:59Z
[ "python", "arrays", "list", "numpy" ]
Let `L` be a list of, say, 55 items: ``` L=range(55) for i in range(6): print L[10*i:10*(i+1)] ``` The printed list will have 10 items for i = 0, 1, 2, 3, 4, but for i = 5 it will have only 5 items. Is there a quick method for auto zero-padding L[50:60] so that it is 10 items long?
Using `NumPy`: ``` >>> a = np.arange(55) >>> a.resize(60) >>> a.reshape(6, 10) array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [10, 11, 12, 13, 14, 15, 16, 17, 18, 19], [20, 21, 22, 23, 24, 25, 26, 27, 28, 29], [30, 31, 32, 33, 34, 35, 36, 37, 38, 39], [40, 41, 42, 43, 44, 45, 46, 47, 48, ...
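Without NumPy, a plain-list sketch of the same zero-padding:

```python
L = list(range(55))

chunk = L[50:60]                  # only 5 items survive the slice
chunk += [0] * (10 - len(chunk))  # zero-pad up to 10 items
print(chunk)  # -> [50, 51, 52, 53, 54, 0, 0, 0, 0, 0]
```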
No module named 'x' when reloading with os.execl()
20,074,400
14
2013-11-19T14:38:27Z
20,074,776
8
2013-11-19T14:54:45Z
[ "python", "python-2.7", "operating-system" ]
I have a python script that is using the following to restart: ``` python = sys.executable os.execl(python, python, * sys.argv) ``` Most the time this works fine, but occasionally the restart fails with a no module named error. Examples: ``` Traceback (most recent call last): File "/usr/lib/python2.7/site.py", line ...
I believe you are hitting the following bug: <http://bugs.python.org/issue16981> As it is unlikely that these modules are disappearing there must be another error that is actually at fault. The bug report lists 'too many open files' as prone to causing this issue however I am unsure if there are any other errors whic...
Installing pyodbc fails on OSX 10.9 (Mavericks)
20,074,620
28
2013-11-19T14:48:45Z
20,074,700
25
2013-11-19T14:51:59Z
[ "python", "osx", "pyodbc" ]
When running `pip install pyodbc`, I get ``` In file included from .../build/pyodbc/src/buffer.cpp:12: .../build/pyodbc/src/pyodbc.h:52:10: fatal error: 'sql.h' file not found #include <sql.h> ^ 1 error generated. error: command 'cc' failed with exit status 1 ``` It seems that Mavericks h...
As you noticed OSX Mavericks dropped sql headers that are required for PyODBC compilation. Following these steps allowed me to install PyODBC: 1. Make sure you have iODBC library installed (<http://www.iodbc.org/>) 2. Download and extract iODBC sources 3. Run `pip install --no-install pyodbc` 4. `cd [VIRTUAL_ENV]/buil...
Installing pyodbc fails on OSX 10.9 (Mavericks)
20,074,620
28
2013-11-19T14:48:45Z
21,036,601
10
2014-01-10T05:09:36Z
[ "python", "osx", "pyodbc" ]
When running `pip install pyodbc`, I get ``` In file included from .../build/pyodbc/src/buffer.cpp:12: .../build/pyodbc/src/pyodbc.h:52:10: fatal error: 'sql.h' file not found #include <sql.h> ^ 1 error generated. error: command 'cc' failed with exit status 1 ``` It seems that Mavericks h...
I had no joy with @Vitaly's answer because there appears to be an issue building packages on Mavericks to do with the lack of support for hard-linking. I couldn't get the package to build. So I opted for @Vitaly's second suggestion which was to copy the necessary files from the `[libiodbc_sources]/include/` directory ...
Installing pyodbc fails on OSX 10.9 (Mavericks)
20,074,620
28
2013-11-19T14:48:45Z
22,001,496
39
2014-02-24T23:36:36Z
[ "python", "osx", "pyodbc" ]
When running `pip install pyodbc`, I get ``` In file included from .../build/pyodbc/src/buffer.cpp:12: .../build/pyodbc/src/pyodbc.h:52:10: fatal error: 'sql.h' file not found #include <sql.h> ^ 1 error generated. error: command 'cc' failed with exit status 1 ``` It seems that Mavericks h...
You can use [Homebrew](http://brew.sh/) to install unixodbc, then pyodbc via pip in the usual fashion. ``` brew install unixodbc && pip install pyodbc ``` This works for me on Mavericks.
Installing pyodbc fails on OSX 10.9 (Mavericks)
20,074,620
28
2013-11-19T14:48:45Z
23,009,162
11
2014-04-11T09:49:38Z
[ "python", "osx", "pyodbc" ]
When running `pip install pyodbc`, I get ``` In file included from .../build/pyodbc/src/buffer.cpp:12: .../build/pyodbc/src/pyodbc.h:52:10: fatal error: 'sql.h' file not found #include <sql.h> ^ 1 error generated. error: command 'cc' failed with exit status 1 ``` It seems that Mavericks h...
See my [installation instructions](https://gist.github.com/Bouke/10454272) which I've written after some futile attempts of the other answers provided: First, install the following libraries: ``` $ brew install unixodbc $ brew install freetds --with-unixodbc ``` FreeTDS should already work now, without configuration...
Installing pyodbc fails on OSX 10.9 (Mavericks)
20,074,620
28
2013-11-19T14:48:45Z
25,985,917
21
2014-09-23T02:24:44Z
[ "python", "osx", "pyodbc" ]
When running `pip install pyodbc`, I get ``` In file included from .../build/pyodbc/src/buffer.cpp:12: .../build/pyodbc/src/pyodbc.h:52:10: fatal error: 'sql.h' file not found #include <sql.h> ^ 1 error generated. error: command 'cc' failed with exit status 1 ``` It seems that Mavericks h...
After many dead ends, this worked for me: ``` $ brew unlink unixodbc $ brew install unixodbc --universal $ sudo pip install --upgrade --global-option=build_ext --global-option="-I/usr/local/include" --global-option="-L/usr/local/lib" --allow-external pyodbc --allow-unverified pyodbc pyodbc ```
how to remove django from ubuntu?
20,074,883
4
2013-11-19T15:00:15Z
20,074,923
14
2013-11-19T15:02:23Z
[ "python", "django", "ubuntu" ]
I have installed Django from setup.py from my home directory. I would like to uninstall it; please tell me a working method. P.S. 1. I have seen most of the answers on Stack Overflow but they did not work; I'm using Ubuntu 12.04. 2. One solution on Stack Overflow for Mac OS X did work for my Mac.
1. Install python-pip ``` sudo apt-get install python-pip ``` 2. Remove django using pip ``` sudo pip uninstall Django ``` 3. Uninstall python-pip ``` sudo apt-get remove python-pip ```
Is wordnet path similarity commutative?
20,075,335
12
2013-11-19T15:19:46Z
20,076,867
8
2013-11-19T16:30:03Z
[ "python", "nlp", "nltk", "wordnet" ]
I am using the wordnet API from nltk. When I compare one synset with another I get `None`, but when I compare them the other way around I get a float value. Shouldn't they give the same value? Is there an explanation, or is this a bug in wordnet? Example: ``` wn.synset('car.n.01').path_similarity(wn.synset('automobile...
I don't think it is a bug in wordnet per se. In your case, automobile is specified as a verb and car as noun, so you will need to look through the synset to see what the graph looks like and decide if the nets are labeled correctly. ``` A = 'car.n.01' B = 'automobile.v.01' C = 'automobile.n.01' wn.synset(A).path_sim...
Is wordnet path similarity commutative?
20,075,335
12
2013-11-19T15:19:46Z
20,799,567
12
2013-12-27T11:05:23Z
[ "python", "nlp", "nltk", "wordnet" ]
I am using the wordnet API from nltk. When I compare one synset with another I got `None` but when I compare them the other way around I get a float value. Shouldn't they give the same value? Is there an explanation or is this a bug of wordnet? Example: ``` wn.synset('car.n.01').path_similarity(wn.synset('automobile...
Technically without the dummy root, both `car` and `automobile` synsets would have no link to each other: ``` >>> from nltk.corpus import wordnet as wn >>> x = wn.synset('car.n.01') >>> y = wn.synset('automobile.v.01') >>> print x.shortest_path_distance(y) None >>> print y.shortest_path_distance(x) None ``` Now, let'...
Sorting a tuple by its float element
20,075,675
2
2013-11-19T15:35:30Z
20,075,733
8
2013-11-19T15:38:21Z
[ "python", "sorting", "python-3.x", "tuples" ]
I am trying to sort this tuple by the value of the number, so that it is rearranged in descending order: ``` l =[('orange', '3.2'), ('apple', '30.2'), ('pear', '4.5')] ``` as in: ``` l2 =[('apple', '30.2'), ('pear', '4.5'), ('orange', '3.2')] ``` I am trying to sort it using this: ``` l2 = ((k,sorted(l2), key=lamb...
You are muddling up the syntax; you were almost there. This works: ``` l2 = sorted(l, key=lambda x: float(x[1]), reverse=True) ``` e.g. call the `sorted()` function, and pass in the list to sort as the first argument. The other two arguments are keyword arguments (`key`, and `reverse`). Demo: ``` >>> l = [('orange'...
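The full demo from the answer, end to end:

```python
l = [('orange', '3.2'), ('apple', '30.2'), ('pear', '4.5')]

# sort on the numeric value, not the string ('30.2' < '4.5' as strings!)
l2 = sorted(l, key=lambda x: float(x[1]), reverse=True)
print(l2)  # -> [('apple', '30.2'), ('pear', '4.5'), ('orange', '3.2')]
```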
Python: import module from another directory at the same level in project hierarchy
20,075,884
22
2013-11-19T15:45:11Z
20,076,368
21
2013-11-19T16:06:25Z
[ "python", "python-2.7", "module", "path", "package" ]
I've seen all sorts of examples and other similar questions, but I can't seem to find an example that exactly matches my scenario. I feel like a total goon asking this because there are so many similar questions, but I just can't seem to get this working "correctly." Here is my project: ``` user_management (package) ...
> If I move `CreateUser.py` to the main user\_management directory, I can > easily use: `import Modules.LDAPManager` to import `LDAPManager.py` > --- this works. Please, **don't**. In this way the `LDAPManager` module used by `CreateUser` will **not** be the same as the one imported via other imports. This can create ...
What are the implications of using assert vs. raise Exception
20,076,182
2
2013-11-19T15:58:06Z
20,076,295
10
2013-11-19T16:02:47Z
[ "python" ]
Are there any significant differences with the following? ``` raise Exception("some exception") assert False, "some exception" ```
Assertions can be disabled with the `-O` flag when starting Python. For this reason, use assertions only for sanity-checking, not for checks that are part of your program logic. Besides this, there is of course the difference that assertions raise `AssertionError`, which you really shouldn't catch. When you raise an ...
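A small sketch of the distinction (the function and its names are illustrative, not from the question):

```python
def apply_discount(price, discount):
    # Program-logic validation: always enforced, even under `python -O`
    if not 0 <= discount <= 1:
        raise ValueError("discount must be between 0 and 1")
    new_price = price * (1 - discount)
    # Internal sanity check: compiled away when Python runs with -O
    assert new_price <= price, "discount should never raise the price"
    return new_price

print(apply_discount(100.0, 0.2))
```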
Replace non-ASCII characters with a single space
20,078,816
107
2013-11-19T18:09:03Z
20,078,869
91
2013-11-19T18:11:35Z
[ "python", "unicode", "encoding", "ascii" ]
I need to replace all non-ASCII (\x00-\x7F) characters with a space. I'm surprised that this is not dead-easy in Python, unless I'm missing something. The following function simply removes all non-ASCII characters: ``` def remove_non_ascii_1(text): return ''.join(i for i in text if ord(i)<128) ``` And this one r...
Your `''.join()` expression is *filtering*, removing anything non-ASCII; you could use a conditional expression instead: ``` return ''.join([i if ord(i) < 128 else ' ' for i in text]) ``` This handles characters one by one and would still use one space per character replaced. Your regular expression should just repl...
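For completeness, here is the regex version with the "one space per character" behaviour as a runnable sketch (the sample string is made up):

```python
import re

def replace_non_ascii(text):
    # Each character outside \x00-\x7F becomes exactly one space
    return re.sub(r'[^\x00-\x7f]', ' ', text)

print(replace_non_ascii('ABC\u00e9def'))  # 'ABC def'
```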
Replace non-ASCII characters with a single space
20,078,816
107
2013-11-19T18:09:03Z
20,079,244
9
2013-11-19T18:29:14Z
[ "python", "unicode", "encoding", "ascii" ]
I need to replace all non-ASCII (\x00-\x7F) characters with a space. I'm surprised that this is not dead-easy in Python, unless I'm missing something. The following function simply removes all non-ASCII characters: ``` def remove_non_ascii_1(text): return ''.join(i for i in text if ord(i)<128) ``` And this one r...
For *character* processing, use Unicode strings: ``` PythonWin 3.3.0 (v3.3.0:bd8afb90ebf2, Sep 29 2012, 10:57:17) [MSC v.1600 64 bit (AMD64)] on win32. >>> s='ABC马克def' >>> import re >>> re.sub(r'[^\x00-\x7f]',r' ',s) # Each char is a Unicode codepoint. 'ABC def' >>> b = s.encode('utf8') >>> re.sub(rb'[^\x00-\x...
Initializing a dictionary in python with a key value and no corresponding values
20,079,681
21
2013-11-19T18:53:25Z
20,079,718
16
2013-11-19T18:55:21Z
[ "python", "dictionary", "initialization", "key" ]
I was wondering if there was a way to initialize a dictionary in python with keys but no corresponding values until I set them. Such as: ``` Definition = {'apple': , 'ball': } ``` and then later i can set them: ``` Definition[key] = something ``` I only want to initialize keys but I don't know the corresponding val...
You could initialize them to `None`.
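For example, reading the suggestion literally:

```python
# Keys exist up front; values are filled in later
Definition = {'apple': None, 'ball': None}
Definition['apple'] = 'a round fruit'
print(Definition)  # {'apple': 'a round fruit', 'ball': None}
```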
Initializing a dictionary in python with a key value and no corresponding values
20,079,681
21
2013-11-19T18:53:25Z
20,079,807
29
2013-11-19T19:00:14Z
[ "python", "dictionary", "initialization", "key" ]
I was wondering if there was a way to initialize a dictionary in python with keys but no corresponding values until I set them. Such as: ``` Definition = {'apple': , 'ball': } ``` and then later i can set them: ``` Definition[key] = something ``` I only want to initialize keys but I don't know the corresponding val...
Use the `fromkeys` function to initialize a dictionary with any default value. In your case, you will initialize with `None` since you don't have a default value in mind. ``` empty_dict = dict.fromkeys(['apple','ball']) ``` this will initialize `empty_dict` as: ``` empty_dict = {'apple': None, 'ball': None} ``` As ...
Python 3.3.2 tkinter ttk TreeView per-column sorting only sorts by last column?
20,079,989
2
2013-11-19T19:08:50Z
20,085,052
7
2013-11-20T00:25:12Z
[ "python", "sorting", "treeview", "tkinter", "ttk" ]
I am trying to get a simplistic ttk TreeView table going with per-column sorting using the heading 'command' tag, but it doesn't seem to work right. I'm using the answer to this question for implementing the functionality: [Tk treeview column sort](http://stackoverflow.com/questions/1966929/tk-treeview-column-sort) My...
You should use the lambda in another way. In your code, every lambda gets a pointer to the same variable, which is why all the functions do the same thing. You should, e.g., copy the variable to make each lambda unique. An example of how to do it: ``` for each in ('name', 'path', 'time','pb'): listbox.heading(each,text=each.capitalize(),c...
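The pitfall can be shown without tkinter at all; the fix is to bind the loop variable as a default argument (a minimal sketch):

```python
# All three lambdas close over the same variable `name`,
# so they all see its final value
late = [lambda: name for name in ('a', 'b', 'c')]
print([f() for f in late])    # ['c', 'c', 'c']

# Binding the current value as a default argument freezes it per lambda
bound = [lambda n=name: n for name in ('a', 'b', 'c')]
print([f() for f in bound])   # ['a', 'b', 'c']
```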
Testing file upload with Flask and Python 3
20,080,123
5
2013-11-19T19:15:40Z
20,080,698
10
2013-11-19T19:48:09Z
[ "python", "unit-testing", "python-3.x", "flask" ]
I'm using Flask with Python 3.3 and I know support is still experimental but I'm running into errors when trying to test file uploads. I'm using `unittest.TestCase` and based on Python 2.7 examples I've seen in the docs I'm trying ``` rv = self.app.post('/add', data=dict( file=(io.String...
In Python 3, you need to use `io.BytesIO()` (with a bytes value) to simulate an uploaded file: ``` rv = self.app.post('/add', data=dict( file=(io.BytesIO(b"this is a test"), 'test.pdf'), ), follow_redirects=True) ``` Note the `b'...'` string defining a `bytes`...
How to activate an Anaconda environment
20,081,338
25
2013-11-19T20:25:47Z
21,707,160
25
2014-02-11T16:15:41Z
[ "python", "virtualenv", "anaconda", "conda" ]
I'm on Windows 8, using Anaconda 1.7.5 64bit. I created a new Anaconda environment with `conda create -p ./test python=2.7 pip` from `C:\Pr\TEMP\venv\`. This worked well (there is a folder with a new python distribution). conda tells me to type `activate C:\PR\TEMP\venv\test` to activate the environment, however ...
If this happens you would need to set the PATH for your environment (so that it gets the right Python from the environment and Scripts\ on Windows). Imagine you have created an environment called py33 by using: ``` conda create -n py33 python=3.3 anaconda ``` Here the folders are created by default in Anaconda\envs,...
how to get field type string from db model in django
20,081,924
3
2013-11-19T20:59:58Z
20,084,155
13
2013-11-19T23:11:24Z
[ "python", "django", "django-models" ]
I am doing the following: ``` model._meta.get_field('g').get_internal_type() ``` Which returns the following: ``` <bound method URLField.get_internal_type of <django.db.models.fields.URLField: g>> ``` I only want the know that this field is "URLField" . How do I extract that from this output? *Note: I am doing thi...
If you were doing this: ``` model._meta.get_field('g').get_internal_type() ``` You could not possibly get that as a result. Instead, you are doing this: ``` model._meta.get_field('g').get_internal_type ``` Which, as explained [here](http://stackoverflow.com/a/20084085/908494), does not call the method, it just ref...
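The difference is ordinary Python attribute access versus a call; a tiny stand-in class (hypothetical, not Django) makes it visible:

```python
class Field:
    def get_internal_type(self):
        return "URLField"

f = Field()
print(f.get_internal_type)    # <bound method Field.get_internal_type of ...>
print(f.get_internal_type())  # the parentheses actually invoke it
```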
Unbalanced classification using RandomForestClassifier in sklearn
20,082,674
19
2013-11-19T21:41:33Z
20,083,159
27
2013-11-19T22:12:29Z
[ "python", "machine-learning", "classification", "scikit-learn", "random-forest" ]
I have a dataset where the classes are unbalanced. The classes are either '1' or '0' where the ratio of class '1':'0' is 5:1. How do you calculate the prediction error for each class and the rebalance weights accordingly in sklearn with Random Forest, kind of like in the following link: <http://www.stat.berkeley.edu/~b...
You can pass the `sample_weight` argument to the Random Forest [fit method](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier.fit) ``` sample_weight : array-like, shape = [n_samples] or None ``` > Sample weights. If None, then samples are equa...
'module' object has no attribute 'loads' while parsing JSON using python
20,082,730
11
2013-11-19T21:44:30Z
20,082,779
30
2013-11-19T21:47:07Z
[ "python", "json", "parsing" ]
I am trying to parse JSON from Python. I recently started working with Python so I followed some stackoverflow tutorial how to parse JSON using Python and I came up with below code - ``` #!/usr/bin/python import json j = json.loads('{"script":"#!/bin/bash echo Hello World"}') print j['script'] ``` But whenever I run...
``` File "json.py", line 2, in <module> import json ``` This line is a giveaway: you have named your script "json", but you are trying to import the builtin module called "json", since your script is in the current directory, it comes first in sys.path, and so that's the module that gets imported. You need to renam...
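The shadowing is easy to reproduce; here is a sketch that uses a temporary directory to stand in for the script's own folder:

```python
import os
import sys
import tempfile

# Create a local json.py that shadows the stdlib module
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, 'json.py'), 'w') as fh:
    fh.write('message = "I am not the real json module"\n')

sys.path.insert(0, tmp)          # mimics "script directory comes first"
sys.modules.pop('json', None)    # forget any cached import
import json

print(hasattr(json, 'loads'))    # False -- the local file won the lookup
```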
how to install pip for python3 on mac os x
20,082,935
50
2013-11-19T21:57:24Z
20,340,173
54
2013-12-02T23:47:59Z
[ "python", "osx", "python-3.x", "pip", "python-3.3" ]
OSX (Mavericks) has python2.7 stock installed. But I do all my own personal python stuff with 3.3. I just flushed my 3.3.2 install and installed the new 3.3.3. So I need to install pyserial again. I can do it the way I've done it before, which is: 1. Download pyserial from pypi 2. untar pyserial.tgz 3. cd pyserial 4. ...
UPDATE: This is no longer necessary with Python3.4. It installs pip3 as part of the stock install. I ended up posting this same question on the python mailing list, and got the following answer: ``` # download and install setuptools curl -O https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py python3 ez_se...
how to install pip for python3 on mac os x
20,082,935
50
2013-11-19T21:57:24Z
20,868,669
33
2014-01-01T12:31:07Z
[ "python", "osx", "python-3.x", "pip", "python-3.3" ]
OSX (Mavericks) has python2.7 stock installed. But I do all my own personal python stuff with 3.3. I just flushed my 3.3.2 install and installed the new 3.3.3. So I need to install pyserial again. I can do it the way I've done it before, which is: 1. Download pyserial from pypi 2. untar pyserial.tgz 3. cd pyserial 4. ...
I had to go through this process myself and chose a different way that I think is better in the long run. I installed [homebrew](http://brew.sh/) ``` ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" ``` then: ``` brew doctor ``` The last step gives you some warnings and err...
how to install pip for python3 on mac os x
20,082,935
50
2013-11-19T21:57:24Z
27,957,073
17
2015-01-15T05:00:33Z
[ "python", "osx", "python-3.x", "pip", "python-3.3" ]
OSX (Mavericks) has python2.7 stock installed. But I do all my own personal python stuff with 3.3. I just flushed my 3.3.2 install and installed the new 3.3.3. So I need to install pyserial again. I can do it the way I've done it before, which is: 1. Download pyserial from pypi 2. untar pyserial.tgz 3. cd pyserial 4. ...
Install **Python3** on mac ``` 1. brew install python3 2. curl https://bootstrap.pypa.io/get-pip.py | python3 3. python3 ``` Use `pip3` to install modules ``` 1. pip3 install ipython 2. python3 -m IPython ``` :)
Improve pandas (PyTables?) HDF5 table write performance
20,083,098
27
2013-11-19T22:08:16Z
20,084,843
7
2013-11-20T00:05:13Z
[ "python", "performance", "pandas", "hdf5", "pytables" ]
I've been using pandas for research now for about two months to great effect. With large numbers of medium-sized trace event datasets, pandas + PyTables (the HDF5 interface) does a tremendous job of allowing me to process heterogenous data using all the Python tools I know and love. Generally speaking, I use the Fixed...
Here is a similar comparison I just did. It's about 1/3 of the data, 10M rows. The final size is about 1.3GB. I define 3 timing functions: Test the Fixed format (called Storer in 0.12). This writes in a PyTables Array format ``` def f(df): store = pd.HDFStore('test.h5','w') store['df'] = df store.close() ```...
Deprecation warning in scikit-learn svmlight format loader
20,084,218
13
2013-11-19T23:15:22Z
20,084,488
10
2013-11-19T23:35:53Z
[ "python", "scikit-learn" ]
I'm getting a new deprecation warning in an IPython notebook I wrote that I've not seen before. What I'm seeing is the following: ``` X,y = load_svmlight_file('./GasSensorArray/batch2.dat') /Users/cpd/.virtualenvs/py27-ipython+pandas/lib/python2.7/site-packages/sklearn/datasets/svmlight_format.py:137: DeprecationWarni...
You probably upgraded the numpy version, as this is a numpy 1.8.0 deprecation warning. Explained in [this](https://github.com/numpy/numpy/pull/2891) pull request. Continuation in [this](https://github.com/numpy/numpy/pull/3243) PR. Briefly browsing the sklearn [issue tracker](https://github.com/scikit-learn/scikit-lea...
Find unique values in a Pandas dataframe, irrespective of row or column location
20,084,382
42
2013-11-19T23:26:53Z
20,084,895
61
2013-11-20T00:10:04Z
[ "python", "pandas", "dataframe" ]
I have a Pandas dataframe and I want to find all the unique values in that dataframe...irrespective of row/columns. If I have a 10 x 10 dataframe, and suppose they have 84 unique values, I need to find them - Not the count. I can create a set and add the values of each rows by iterating over the rows of the dataframe....
``` In [1]: df = DataFrame(np.random.randint(0,10,size=100).reshape(10,10)) In [2]: df Out[2]: 0 1 2 3 4 5 6 7 8 9 0 2 2 3 2 6 1 9 9 3 3 1 1 2 5 8 5 2 5 0 6 3 2 0 7 0 7 5 5 9 1 0 3 3 5 3 2 3 7 6 8 3 8 4 4 8 0 2 2 3 9 7 1 2 7 5 3 2 8 5 6 4 3 7 0 8...
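Condensed into a runnable sketch, the idea is to flatten all cells into one 1-D array and deduplicate:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0, 10, size=(10, 10)))

# ravel() flattens the values regardless of row/column position;
# pd.unique then returns each distinct value once
unique_values = pd.unique(df.values.ravel())
print(unique_values)
```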
Use index in pandas to plot data
20,084,487
5
2013-11-19T23:35:53Z
20,084,590
8
2013-11-19T23:43:22Z
[ "python", "pandas" ]
I have a pandas DataFrame and use `resample()` to calculate means (e.g. daily or monthly means). Here is a small example. ``` import pandas as pd import numpy as np dates = pd.date_range('1/1/2000', periods=100) df = pd.DataFrame(np.random.randn(100, 1), index=dates, columns=['A']) monthly_mean = df.resample('M', ...
You can use `reset_index` to turn the index back into a column: ``` monthly_mean.reset_index().plot(x='index', y='A') ```
TypeError: string indices must be integers while parsing JSON using Python?
20,084,779
6
2013-11-19T23:59:44Z
20,085,477
8
2013-11-20T01:08:55Z
[ "python", "json", "parsing" ]
I am confused now why I am not able to parse this JSON string. Similar code works fine on other JSON strings but not on this one - I am trying to parse the JSON string and extract the script from the JSON. Below is my code. ``` for step in steps: step_path = '/example/v1' +'/'+step data, stat = zk.get(step_path) j...
The problem is that jsonStr is a string that encodes some object in JSON, not the actual object. You obviously *knew* it was a string, because you called it `jsonStr`. And it's proven by the fact that this line works: ``` jsonStr = data.decode("utf-8") ``` So, `jsonStr` is a string. Calling `json.dumps` on a string ...
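The string-versus-object difference, reduced to a sketch:

```python
import json

json_str = '{"script": "#!/bin/bash echo Hello World"}'

obj = json.loads(json_str)   # parse: str -> dict
print(obj['script'])

# Indexing the raw *string* the same way reproduces the question's error
try:
    json_str['script']
except TypeError as e:
    print(e)                 # string indices must be integers
```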
Eliminate Duplicates of a List (Python)
20,084,897
3
2013-11-20T00:10:05Z
20,084,954
8
2013-11-20T00:15:47Z
[ "python", "list", "duplicate-removal" ]
I have a list of lists and need to eliminate all lists that have the same 3rd element, and save 1. For example: ``` x=[[1,2,3],[1,3,3],[5,6,3],[2,4,6],[8,5,9],[10,5,9]] ``` could turn into: ``` x=[[1,2,3],[2,4,6],[8,5,9]] ``` I've attempted a solution that eliminates all lists that have the same 2nd index using la...
Python sets should still work for you: ``` >>> seen = set() >>> result = [] >>> for s in x: e = s[2] if e not in seen: result.append(s) seen.add(e) >>> result [[1, 2, 3], [2, 4, 6], [8, 5, 9]] ```
Anaphoric list comprehension in Python
20,085,164
12
2013-11-20T00:35:21Z
20,085,196
19
2013-11-20T00:38:33Z
[ "python", "list-comprehension" ]
Consider the following toy example: ``` >>> def square(x): return x*x ... >>> [square(x) for x in range(12) if square(x) > 50] [64, 81, 100, 121] ``` I have to call square(x) twice in the list comprehension. The duplication is ugly, bug-prone (it's easy to change only one of the two calls when modifying the code), a...
You should use a generator: ``` [s for s in (square(x) for x in range(12)) if s > 50] ``` This avoids creating an intermediate unfiltered list of squares.
Anaphoric list comprehension in Python
20,085,164
12
2013-11-20T00:35:21Z
20,085,243
7
2013-11-20T00:43:11Z
[ "python", "list-comprehension" ]
Consider the following toy example: ``` >>> def square(x): return x*x ... >>> [square(x) for x in range(12) if square(x) > 50] [64, 81, 100, 121] ``` I have to call square(x) twice in the list comprehension. The duplication is ugly, bug-prone (it's easy to change only one of the two calls when modifying the code), a...
Another alternative, using "chained" list comps rather than nested ones: ``` [s for n in range(12) for s in [square(n)] if s > 50] ``` Might be a weird read, though.
Anaphoric list comprehension in Python
20,085,164
12
2013-11-20T00:35:21Z
20,085,684
7
2013-11-20T01:31:23Z
[ "python", "list-comprehension" ]
Consider the following toy example: ``` >>> def square(x): return x*x ... >>> [square(x) for x in range(12) if square(x) > 50] [64, 81, 100, 121] ``` I have to call square(x) twice in the list comprehension. The duplication is ugly, bug-prone (it's easy to change only one of the two calls when modifying the code), a...
Here is a comparison of nested generator vs "chained" list comps vs calculating twice ``` $ python -m timeit "[s for n in range(12) for s in [n * n] if s > 50]" 100000 loops, best of 3: 2.48 usec per loop $ python -m timeit "[s for s in (x * x for x in range(12)) if s > 50]" 1000000 loops, best of 3: 1.89 usec per loo...
pandas dataframe groupby and get nth row
20,087,713
5
2013-11-20T04:57:49Z
20,087,789
9
2013-11-20T05:03:10Z
[ "python", "pandas", "dataframe" ]
I have a pandas DataFrame like following. ``` df = pd.DataFrame([[1.1, 1.1, 1.1, 2.6, 2.5, 3.4,2.6,2.6,3.4,3.4,2.6,1.1,1.1,3.3], list('AAABBBBABCBDDD'), [1.1, 1.7, 2.5, 2.6, 3.3, 3.8,4.0,4.2,4.3,4.5,4.6,4.7,4.7,4.8], ['x/y/z','x/y','x/y/z/n','x/u','x','x/u/v','x/y/z','x','x/u/v/b','-','x/y','x/y/z','x','x/u/v/w'],['1'...
I *think* the nth method is supposed to do just that: ``` In [11]: g.nth(1).dropna() Out[11]: col1 col2 col3 col4 col5 ID 1 1.1 D 4.7 x/y/z 200 2 3.4 B 3.8 x/u/v 404 3 1.1 A 2.5 x/y/z/n 404 5 2.6 B 4.6 x/y 500 ``` In 0.13 another...
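A smaller illustration of `nth` on a toy frame (not the question's data):

```python
import pandas as pd

df = pd.DataFrame({'ID': [1, 1, 2, 2, 2], 'val': list('abcde')})

# nth(1) selects the second row (0-based) of every group
second_rows = df.groupby('ID').nth(1)
print(second_rows)
```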
Python: How does multiple assignments in a single line work?
20,088,323
6
2013-11-20T05:46:26Z
20,088,484
10
2013-11-20T05:56:24Z
[ "python", "expression" ]
I know that assignment is a statement in Python, i.e., it doesn't evaluate to a value unlike an expression. How does the following line of code work in Python, then? Please explain what happens internally in the Python interpreter (lexing, parsing, formation of abstract syntax tree). ``` # this works spam = eggs = 'ha...
> why the first line above works while the second doesn't? It's not about operator precedence. It's a designated syntax. It cannot be "reconcilliated" by adding parenthesis. Now for the full answer (as @Rob's comments already indicate) see [here](http://stackoverflow.com/questions/7601823/how-do-chained-assignments-w...
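One consequence of chained assignment worth knowing: every name is bound to the *same* object (a quick sketch):

```python
a = b = []     # one list object, two names
a.append(1)
print(b)       # [1] -- b sees the mutation because a is b
print(a is b)  # True
```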
Calculate weighted pairwise distance matrix in Python
20,089,007
6
2013-11-20T06:31:19Z
20,089,834
8
2013-11-20T07:26:02Z
[ "python", "numpy", "matrix", "scipy", "scikit-learn" ]
I am trying to find the fastest way to perform the following pairwise distance calculation in Python. I want to use the distances to rank a `list_of_objects` by their similarity. Each item in the `list_of_objects` is characterised by four measurements a, b, c, d, which are made on very different scales e.g.: ``` obje...
`scipy.spatial.distance` is the module you'll want to have a look at. It has a lot of different norms that can be easily applied. I'd recommend using the weighted Minkowski metric [Weighted Minkowski metric](http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.wminkowski.html#scipy.spatial.dista...
Python Regex Engine - "look-behind requires fixed-width pattern" Error
20,089,922
9
2013-11-20T07:31:50Z
20,090,049
7
2013-11-20T07:39:20Z
[ "python", "regex" ]
I am trying to handle un-matched double quotes within a string in the CSV format. To be precise, ``` "It "does "not "make "sense", Well, "Does "it" ``` should be corrected as ``` "It" "does" "not" "make" "sense", Well, "Does" "it" ``` So basically what I am trying to do is to > replace all the ' " ' > > 1. Not pr...
Python lookbehind assertions need to be fixed width, but you can try this: ``` >>> s = '"It "does "not "make "sense", Well, "Does "it"' >>> re.sub(r'\b\s*"(?!,|$)', '" "', s) '"It" "does" "not" "make" "sense", Well, "Does" "it"' ``` **Explanation:** ``` \b # Start the match at the end of a "word" \s* # Matc...
Celery task with a time_start attribute in 1970
20,091,505
11
2013-11-20T09:06:10Z
20,096,342
15
2013-11-20T12:44:10Z
[ "python", "django", "redis", "celery", "celery-task" ]
An inspection of currently running Celery tasks reveals a weird `time_start` timestamp: ``` >> celery.app.control.inspect().active() {u'[email protected]': [{u'acknowledged': True, u'args': u'(...,)', u'delivery_info': {u'exchange': u'celery', u'priority': 0, u'redelivered': None, u'routing_key'...
I found the answer to my own question by digging in the Celery and Kombu code: the `time_start` attribute of a task is computed by the `kombu.five.monotonic` function. (Ironically, the kombu code also refers to another [StackOverflow question](http://stackoverflow.com/questions/1205722/how-do-i-get-monotonic-time-durat...
How do I change the representation of a Python function?
20,093,811
10
2013-11-20T10:48:20Z
20,094,001
9
2013-11-20T10:57:14Z
[ "python", "metaprogramming" ]
``` >>> def hehe(): ... return "spam" ... >>> repr(hehe) '<function hehe at 0x7fe5624e29b0>' ``` I want to have: ``` >>> repr(hehe) 'hehe function created by awesome programmer' ``` How do I do that? Putting `__repr__` inside `hehe` function does not work. **EDIT:** In case you guys are wondering why I want t...
No, you cannot change the representation of a function object; if you wanted to add *documentation*, you'd add a docstring: ``` def foo(): """Frob the bar baz""" ``` and access that with `help(foo)` or `print foo.__doc__`. You *can* create a callable object with a custom `__repr__`, which acts just like a functi...
How do I change the representation of a Python function?
20,093,811
10
2013-11-20T10:48:20Z
20,094,262
9
2013-11-20T11:08:28Z
[ "python", "metaprogramming" ]
``` >>> def hehe(): ... return "spam" ... >>> repr(hehe) '<function hehe at 0x7fe5624e29b0>' ``` I want to have: ``` >>> repr(hehe) 'hehe function created by awesome programmer' ``` How do I do that? Putting `__repr__` inside `hehe` function does not work. **EDIT:** In case you guys are wondering why I want t...
Usually, when you want to change something about the function, say function signature, function behavior or function attributes, you should consider using a decorator. So here is how you might implement what you want: ``` class change_repr(object): def __init__(self, functor): self.functor = functor ...
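A completed sketch of that decorator; the method bodies beyond the answer's visible code are my assumption, filled in along the same lines:

```python
class change_repr(object):
    def __init__(self, functor):
        self.functor = functor

    def __call__(self, *args, **kwargs):
        # Delegate calls to the wrapped function
        return self.functor(*args, **kwargs)

    def __repr__(self):
        return '%s function created by awesome programmer' % self.functor.__name__

@change_repr
def hehe():
    return "spam"

print(repr(hehe))  # hehe function created by awesome programmer
print(hehe())      # spam
```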
How to use Flask-Sockets? Getting a KeyError: 'wsgi.websocket'
20,093,995
6
2013-11-20T10:57:05Z
20,104,705
7
2013-11-20T19:04:57Z
[ "python", "sockets", "websocket", "flask", "flask-sockets" ]
I'm trying to use [Flask-Sockets](https://github.com/kennethreitz/flask-sockets) with the example code: ``` sockets = Sockets(app) @sockets.route('/echo') def echo_socket(ws): while True: message = ws.receive() ws.send(message) ``` Unfortunately, when simply going to the url /echo using my browse...
Ah, that's the problem: you can't just visit the websocket endpoint with a regular GET request; that way the `wsgi.websocket` key will not be set in the environ. Also, use gunicorn, not the dev server; it comes with a preconfigured worker: ``` # install from pip pip install gunicorn # run app located in test.py module (in test.py d...
argparse subparser monolithic help output
20,094,215
16
2013-11-20T11:06:04Z
20,096,044
7
2013-11-20T12:30:04Z
[ "python", "argparse" ]
My argparse has only 3 flags (store\_true) on the top level, everything else is handled through subparsers. When I run `myprog.py --help`, the output shows a list of all subcommands like normal, `{sub1, sub2, sub3, sub4, ...}`. So, the default is working great... I usually can't remember the exact subcommand name I ne...
This is a bit tricky, as argparse does not expose a list of defined sub-parsers directly. But it can be done: ``` import argparse # create the top-level parser parser = argparse.ArgumentParser(prog='PROG') parser.add_argument('--foo', action='store_true', help='foo help') subparsers = parser.add_subparsers(help='sub-...
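The key trick is that argparse keeps the sub-parsers in the action's `choices` mapping (not an officially documented attribute); a minimal sketch:

```python
import argparse

parser = argparse.ArgumentParser(prog='PROG')
subparsers = parser.add_subparsers(help='sub-command help')
subparsers.add_parser('sub1', help='first command')
subparsers.add_parser('sub2', help='second command')

# Walk the choices dict to build one monolithic help text
for name, sub in subparsers.choices.items():
    print("== %s ==" % name)
    print(sub.format_help())
```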
Regression trees or Random Forest regressor with categorical inputs
20,095,187
9
2013-11-20T11:51:46Z
20,100,098
13
2013-11-20T15:32:30Z
[ "python", "regression", "scikit-learn" ]
I have been trying to use categorical inputs in a regression tree (or Random Forest Regressor) but sklearn keeps returning errors and asking for numerical inputs. ``` import sklearn as sk MODEL = sk.ensemble.RandomForestRegressor(n_estimators=100) MODEL.fit([('a',1,2),('b',2,3),('a',3,2),('b',1,3)], [1,2.5,3,4]) # d...
`scikit-learn` estimators treat `int` as categorical, one possible solution is to encode the strings as `int` using `LabelEncoder`: ``` import numpy as np from sklearn.preprocessing import LabelEncoder from sklearn.ensemble import RandomForestRegressor X = np.asarray([('a',1,2),('b',2,3),('a',3,2),('c',1,3)]) y = ...
python: shift column in pandas dataframe up by one
20,095,673
16
2013-11-20T12:12:11Z
20,096,827
29
2013-11-20T13:06:32Z
[ "python", "pandas", "dataframe" ]
I've got a pandas dataframe. I want to 'lag' one of my columns. Meaning, for example, shifting the entire column 'gdp' up by one, and then removing all the excess data at the bottom of the remaining rows so that all columns are of equal length again. ``` df = y gdp cap 0 1 2 5 1 2 3 9 2 8 7 ...
``` In [44]: df.gdp = df.gdp.shift(-1) In [45]: df Out[45]: y gdp cap 0 1 3 5 1 2 7 9 2 8 4 2 3 3 7 7 4 6 NaN 7 In [46]: df[:-1] ...
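The same steps as a copy-pasteable script (toy values mirroring the question):

```python
import pandas as pd

df = pd.DataFrame({'y':   [1, 2, 8, 3, 6],
                   'gdp': [2, 3, 7, 4, 7],
                   'cap': [5, 9, 2, 7, 7]})

df['gdp'] = df['gdp'].shift(-1)  # lag: move the column up one row
df = df[:-1]                     # drop the trailing row that is now NaN
print(df)
```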
Python Sort Multidimensional Array Based on 2nd Element of Subarray
20,099,669
8
2013-11-20T15:13:29Z
20,099,713
19
2013-11-20T15:15:12Z
[ "python", "arrays", "list", "sorting", "multidimensional-array" ]
I have an array like this: ``` [['G', 10], ['A', 22], ['S', 1], ['P', 14], ['V', 13], ['T', 7], ['C', 0], ['I', 219]] ``` I'd like to sort it based on the 2nd element in descending order. An ideal output would be: ``` [['I', 219], ['A', 22], ['P', 14], ... ] ```
`list.sort`, [`sorted`](http://docs.python.org/2/library/functions.html#sorted) accept optional `key` parameter. `key` function is used to generate comparison key. ``` >>> sorted(lst, key=lambda x: x[1], reverse=True) [['I', 219], ['A', 22], ['P', 14], ['V', 13], ['G', 10], ...] >>> sorted(lst, key=lambda x: -x[1]) [...
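An equivalent spelling uses `operator.itemgetter`, which avoids the lambda:

```python
from operator import itemgetter

data = [['G', 10], ['A', 22], ['S', 1], ['P', 14]]

# itemgetter(1) builds a callable that returns element 1 of each sublist
result = sorted(data, key=itemgetter(1), reverse=True)
print(result)  # [['A', 22], ['P', 14], ['G', 10], ['S', 1]]
```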
How to change a variable within a program
20,100,300
3
2013-11-20T15:41:24Z
20,100,400
9
2013-11-20T15:46:00Z
[ "python", "variables", "for-loop" ]
Can someone check this code please? Most of it is working but when they type 'admin' it should allow them to set a new password 'type new password' but then the new password doesn't save. Can anyone help me fix it? Thanks ``` program = ("live") while program == ("live"): password = ("Python") question = input("W...
You need to move the first `password = ...` line out of the loop: ``` program = ("live") password = ("Python") while program ==("live"): question=input("What is the password? ") if question == password: print ("well done") if question == ("admin"): n_password = input("What is the new passwo...
Subtract a column from one pandas dataframe from another
20,100,717
4
2013-11-20T15:58:30Z
20,100,909
8
2013-11-20T16:06:46Z
[ "python", "python-2.7", "pandas", "dataframe", "subtraction" ]
I'm sorry for a dumb question, but I cannot find any way to do this easily. I have two `pandas` data frames in Python 2.7, which are indexed by tenor: ``` In [136]: rates Out[136]: A A- BBB+ BBB BBB- BB 3M 0.3150 0.3530 0.4960 0.6460 0.7910 1.9070 6M 0.3070 0.3560 0.5330 0.67...
``` rates.sub(treas.iloc[:,0],axis=0).dropna() ``` or ``` rates.sub(treas.squeeze(),axis=0).dropna() ```
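A runnable miniature of the first form (index-aligned subtraction; the values are made up):

```python
import pandas as pd

rates = pd.DataFrame({'A':  [0.315, 0.307],
                      'A-': [0.353, 0.356]}, index=['3M', '6M'])
treas = pd.DataFrame({'yield': [0.05, 0.10]}, index=['3M', '6M'])

# Subtract the single treasury column from every ratings column,
# aligning on the tenor index (axis=0)
spread = rates.sub(treas.iloc[:, 0], axis=0)
print(spread)
```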
How to close the file after pickle.load() in python
20,101,021
3
2013-11-20T16:11:15Z
20,101,064
13
2013-11-20T16:13:21Z
[ "python", "pickle", "data-dictionary" ]
I saved a python dictionary in this way (actually with "import cPickle as pickle"): > > pickle.dump(dictname,open("filename.pkl","wb")) And I load it in another script in this way: > > dictname=pickle.load(open("filename.pkl","rb")) How is it possible to close the file after this?
It's better to use a [`with` statement](http://docs.python.org/2/reference/compound_stmts.html#the-with-statement) instead, which closes the file when the statement ends, even if an exception occurs: ``` with open("filename.pkl", "wb") as f: pickle.dump(dictname, f) ... with open("filename.pkl", "rb") as f: di...
python, numpy; How to insert element at the start of an array
20,101,093
10
2013-11-20T16:15:10Z
20,101,176
15
2013-11-20T16:18:12Z
[ "python", "arrays", "numpy" ]
I have an numpy array of complex numbers. So I want to insert zero at start of the array, and shift the rest of the array one place forward. example: ``` a = [1 + 2j, 5 + 7j,..] ``` I want to make: ``` a = [0 + 0j, 1 + 2j, 5 + 7j,..] ``` What's the simplest way to do this?
Simplest way: ``` a = np.array([1 + 2j, 5 + 7j]) a = np.insert(a, 0, 0) ``` Then: ``` >>> a array([ 0.+0.j, 1.+2.j, 5.+7.j]) ``` Note that this creates a new array, it does not actually insert the `0` into the original array. There are several alternatives to `np.insert`, all of which also create a new array: `...
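One of the alternatives alluded to is `np.concatenate`, which likewise returns a new array:

```python
import numpy as np

a = np.array([1 + 2j, 5 + 7j])

# Prepend a single zero; like np.insert, this builds a new array
a = np.concatenate(([0 + 0j], a))
print(a)
```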
pip install from github repo branch
20,101,834
197
2013-11-20T16:46:42Z
20,101,940
285
2013-11-20T16:51:32Z
[ "python", "git", "github", "pip" ]
Seems to be rather straightforward but I just can't get it right. Trying to `pip` install a repo's specific branch. Googling tells me to > pip install <https://github.com/user/repo.git@branch> The branch's name is `issue/34/oscar-0.6` so I did `pip install https://github.com/tangentlabs/django-oscar-paypal.git@/issue...
Prepend the url prefix `git+` (See [VCS Support](http://www.pip-installer.org/en/latest/reference/pip_install.html#vcs-support)): ``` pip install git+https://github.com/tangentlabs/django-oscar-paypal.git@issue/34/oscar-0.6 ``` And specify the branch name without the leading `/`.
pip install from github repo branch
20,101,834
197
2013-11-20T16:46:42Z
24,811,490
117
2014-07-17T19:02:22Z
[ "python", "git", "github", "pip" ]
Seems to be rather straightforward but I just can't get it right. Trying to `pip` install a repo's specific branch. Googling tells me to > pip install <https://github.com/user/repo.git@branch> The branch's name is `issue/34/oscar-0.6` so I did `pip install https://github.com/tangentlabs/django-oscar-paypal.git@/issue...
Using pip with git+ to clone a repository can be extremely slow (test with <https://github.com/django/django@stable/1.6.x> for example, it will take a few minutes). The fastest thing I've found, which works with GitHub and BitBucket, is: ``` pip install https://github.com/user/repository/archive/branch.zip ``` which ...
pip install from github repo branch
20,101,834
197
2013-11-20T16:46:42Z
28,748,345
23
2015-02-26T17:01:18Z
[ "python", "git", "github", "pip" ]
Seems to be rather straightforward but I just can't get it right. Trying to `pip` install a repo's specific branch. Googling tells me to > pip install <https://github.com/user/repo.git@branch> The branch's name is `issue/34/oscar-0.6` so I did `pip install https://github.com/tangentlabs/django-oscar-paypal.git@/issue...
Just to add an extra, if you want to install it in your pip file it can be added like this: ``` -e git+https://github.com/tangentlabs/django-oscar-paypal.git@issue/34/oscar-0.6#egg=django-oscar-paypal ``` It will be saved as an egg though.
How can I make a scatter plot colored by density in matplotlib?
20,105,364
19
2013-11-20T19:39:30Z
20,105,673
16
2013-11-20T19:56:43Z
[ "python", "matplotlib" ]
I'd like to make a scatter plot where each point is colored by the spatial density of nearby points. I've come across a very similar question, which shows an example of this using R: [R Scatter Plot: symbol color represents number of overlapping points](http://stackoverflow.com/q/17093935/2983551) What's the best wa...
You could make a histogram:

```
import numpy as np
import matplotlib.pyplot as plt

# fake data:
a = np.random.normal(size=1000)
b = a*3 + np.random.normal(size=1000)

plt.hist2d(a, b, (50, 50), cmap=plt.cm.jet)
plt.colorbar()
```

![2dhist](http://i.stack.imgur.com/cpptk.png)
How can I make a scatter plot colored by density in matplotlib?
20,105,364
19
2013-11-20T19:39:30Z
20,107,592
42
2013-11-20T21:39:37Z
[ "python", "matplotlib" ]
I'd like to make a scatter plot where each point is colored by the spatial density of nearby points. I've come across a very similar question, which shows an example of this using R: [R Scatter Plot: symbol color represents number of overlapping points](http://stackoverflow.com/q/17093935/2983551) What's the best wa...
In addition to `hist2d` or `hexbin` as @askewchan suggested, you can use the same method that the accepted answer in the question you linked to uses. If you want to do that:

```
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

# Generate fake data
x = np.random.normal(size=1000...
```
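The snippet above is cut off; a complete runnable sketch of that approach (density estimated with `scipy.stats.gaussian_kde`, with the densest points drawn last so they stay visible — an assumption about how the linked answer finishes) might look like:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

# Generate fake data
x = np.random.normal(size=1000)
y = x * 3 + np.random.normal(size=1000)

# Evaluate a Gaussian kernel density estimate at every point
xy = np.vstack([x, y])
z = gaussian_kde(xy)(xy)

# Sort so the densest points are plotted on top
order = z.argsort()
x, y, z = x[order], y[order], z[order]

plt.scatter(x, y, c=z, s=50, edgecolor='none')
plt.colorbar()
plt.savefig("density_scatter.png")
```

The sort step is optional but avoids sparse outliers being painted over dense regions.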
Check for open files with Python in Linux
20,106,220
5
2013-11-20T20:23:34Z
20,108,997
9
2013-11-20T23:03:40Z
[ "python", "linux", "lsof" ]
Hello I'm looking for a way to determine which files are open in my whole system (Linux) using Python. I can't find any module that does this. Or maybe the only way is with the command lsof? Thanks!
If you look in the [documentation](https://code.google.com/p/psutil/wiki/Documentation) for the psutil python module (available on PyPI) you'll find a method that checks for open files on a given process. You'll probably want to get a list of all active PIDs as described in the [related stack overflow response](http://...
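As a dependency-free illustration of the same idea on Linux (this is not psutil's implementation, just a sketch; the helper `open_files` is hypothetical), the open files of a process can be read straight from `/proc` — reading other processes' entries generally requires root:

```python
import os

def open_files(pid):
    """List the paths behind a process's open file descriptors (Linux /proc)."""
    fd_dir = '/proc/%d/fd' % pid
    paths = []
    for fd in os.listdir(fd_dir):
        try:
            paths.append(os.readlink(os.path.join(fd_dir, fd)))
        except OSError:
            pass  # descriptor closed between listdir and readlink
    return paths

# Demo: while a file is open, it shows up among our own descriptors
with open('/tmp/demo_open_files.txt', 'w') as fh:
    mine = open_files(os.getpid())
print(len(mine) > 0)  # True
```

Entries like `pipe:[1234]` and `socket:[5678]` appear alongside regular paths, which is also what `lsof` reports.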
Passing a matplotlib figure to HTML (flask)
20,107,414
14
2013-11-20T21:28:50Z
20,107,981
15
2013-11-20T22:02:23Z
[ "python", "html", "image", "matplotlib", "flask" ]
I am using matplotlib to render some figure in a web app. I've used `fig.savefig()` before when I'm just running scripts. However, I need a function to return an actual ".png" image so that I can call it with my HTML. Some more (possibly unnecessary) info: I am using Python Flask. I figure I could use `fig.savefig()` ...
You have to separate the HTML and the image into two different routes. Your `/images/<cropzonekey>` route will just serve the page, and in the HTML content of that page there will be a reference to the second route, the one that serves the image. The image is served in its own route from a memory file that you genera...
Removing index column in pandas
20,107,570
14
2013-11-20T21:38:13Z
20,107,825
11
2013-11-20T21:53:21Z
[ "python", "pandas" ]
I have the following code which imports a CSV file. There are 3 columns and I want to set the first two of them to variables. When I set the second column to the variable "efficiency" the index column is also tacked on. How can I get rid of the index column? ``` df = pd.DataFrame.from_csv('Efficiency_Data.csv', header...
DataFrames and Series always have an index. Although it displays alongside the column(s), it is not a column, which is why `del df['index']` did not work. If you want to replace the index with simple sequential numbers, use `df.reset_index()`. I strongly suggest reading a little bit of the pandas documentation, like [10...
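A minimal sketch of the behaviour described above (illustrative data, not the question's CSV):

```python
import pandas as pd

# An index of IDs alongside one data column
df = pd.DataFrame({'efficiency': [0.9, 0.8, 0.7]}, index=[10, 20, 30])

flat = df.reset_index()           # old index becomes an ordinary column
print(flat.columns.tolist())      # ['index', 'efficiency']

values = df['efficiency'].values  # just the numbers, no index attached
```

If you only want the numbers for a variable, `.values` sidesteps the index entirely.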
Removing index column in pandas
20,107,570
14
2013-11-20T21:38:13Z
36,572,039
18
2016-04-12T11:31:31Z
[ "python", "pandas" ]
I have the following code which imports a CSV file. There are 3 columns and I want to set the first two of them to variables. When I set the second column to the variable "efficiency" the index column is also tacked on. How can I get rid of the index column? ``` df = pd.DataFrame.from_csv('Efficiency_Data.csv', header...
When writing your csv file, include the argument `index=False`, for example:

```
df.to_csv(filename, index=False)
```

and when reading it back there is nothing extra to drop:

```
df = pd.read_csv(filename)
```

This should prevent the issue so you don't need to fix it later.
numpy vectorization of double python for loop
20,108,817
4
2013-11-20T22:52:12Z
20,109,115
7
2013-11-20T23:11:07Z
[ "python", "numpy", "linear-algebra" ]
V is an (n, p) numpy array; typically the dimensions are n~10, p~20000. The code I have now looks like

```
A = np.zeros(p)
for i in xrange(n):
    for j in xrange(i+1):
        A += F[i,j] * V[i,:] * V[j,:]
```

How would I go about rewriting this to avoid the double python for loop?
The difficult part about this is that you only want to take the sum of the elements with `j <= i`. If not for that then you could do the following:

```
M = (V.reshape(n, 1, p) * V.reshape(1, n, p)) * F.reshape(n, n, 1)
A = M.sum(0).sum(0)
```

If `F` is symmetric (if `F[i,j] == F[j,i]`) then you can exploit the symmet...
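The `j <= i` restriction above can also be folded into a single `einsum` contraction over a lower-triangular copy of `F` (a sketch; `np.tril` and the verification loop are my additions, not part of the answer):

```python
import numpy as np

n, p = 10, 200          # small p so the check below stays quick
F = np.random.rand(n, n)
V = np.random.rand(n, p)

Fl = np.tril(F)                        # zero out F[i, j] for j > i
A = np.einsum('ij,ik,jk->k', Fl, V, V)  # sum_{j<=i} F[i,j] * V[i,:] * V[j,:]

# Reference: the original double loop
A_loop = np.zeros(p)
for i in range(n):
    for j in range(i + 1):
        A_loop += F[i, j] * V[i, :] * V[j, :]

print(np.allclose(A, A_loop))  # True
```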
numpy vectorization of double python for loop
20,108,817
4
2013-11-20T22:52:12Z
20,110,808
8
2013-11-21T01:35:12Z
[ "python", "numpy", "linear-algebra" ]
V is an (n, p) numpy array; typically the dimensions are n~10, p~20000. The code I have now looks like

```
A = np.zeros(p)
for i in xrange(n):
    for j in xrange(i+1):
        A += F[i,j] * V[i,:] * V[j,:]
```

How would I go about rewriting this to avoid the double python for loop?
While Isaac's answer seems promising, as it removes those two nested for loops, you are having to create an intermediate array `M` which is `n` times the size of your original `V` array. Python for loops are not cheap, but memory access ain't free either:

```
n = 10
p = 20000
V = np.random.rand(n, p)
F = np.random.ran...
```
How to make good reproducible pandas examples
20,109,391
53
2013-11-20T23:31:39Z
20,159,305
51
2013-11-23T06:19:13Z
[ "python", "pandas" ]
Having spent a decent amount of time watching both the [r](/questions/tagged/r "show questions tagged 'r'") and [pandas](/questions/tagged/pandas "show questions tagged 'pandas'") tags on SO, the impression that I get is that `pandas` questions are less likely to contain reproducible data. This is something that the R ...
*Note: The ideas here are pretty generic for StackOverflow, indeed [questions](http://sscce.org/).*

### Disclaimer: Writing a good question is HARD.

## The Good:

* do include small\* example DataFrame, either as runnable code:

```
In [1]: df = pd.DataFrame([[1, 2], [1, 3], [4, 6]], columns=['A', 'B'])
```

...
How to make good reproducible pandas examples
20,109,391
53
2013-11-20T23:31:39Z
30,424,537
12
2015-05-24T14:22:30Z
[ "python", "pandas" ]
Having spent a decent amount of time watching both the [r](/questions/tagged/r "show questions tagged 'r'") and [pandas](/questions/tagged/pandas "show questions tagged 'pandas'") tags on SO, the impression that I get is that `pandas` questions are less likely to contain reproducible data. This is something that the R ...
This is mainly to expand on @AndyHayden's answer for situations where you want to create a reproducible medium to large dataset, and also to answer the `expand.grid()` question. Pandas and (especially) numpy give you a variety of tools for this such that you can generally create a reasonable facsimile of any real dat...
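For the `expand.grid()` part specifically, a small stand-in built from `itertools.product` (my own sketch, not taken from the answer above; the helper name is made up) behaves like R's version:

```python
import itertools
import pandas as pd

def expand_grid(columns):
    """Cartesian product of value lists, like R's expand.grid().

    `columns` is a list of (name, values) pairs so the column
    order is explicit.
    """
    names = [name for name, _ in columns]
    rows = list(itertools.product(*(vals for _, vals in columns)))
    return pd.DataFrame(rows, columns=names)

grid = expand_grid([('height', [60, 70]),
                    ('weight', [100, 140, 180])])
print(len(grid))  # 6 rows, every height/weight combination
```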
Turn Pandas Multi-Index into column
20,110,170
44
2013-11-21T00:37:03Z
25,733,562
54
2014-09-08T21:42:18Z
[ "python", "pandas", "dataframe", "flatten", "multi-index" ]
I have a dataframe with 2 index levels:

```
                   value
Trial measurement
1     0               13
      1                3
      2                4
2     0              NaN
      1               12
3     0               34
```

Which I want to turn into this: ...
The *reset\_index()* is a pandas DataFrame method that will transfer index values into the DataFrame as columns. The default setting for the parameter is *drop=False* (which will keep the index values as columns). All you have to do is add `.reset_index(inplace=True)` after the name of the DataFrame:

```
df.reset_index(inplace=True)
```
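Applied to the question's two-level frame, a sketch of that call (data reconstructed from the question):

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [(1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (3, 0)],
    names=['Trial', 'measurement'])
df = pd.DataFrame({'value': [13, 3, 4, None, 12, 34]}, index=idx)

df.reset_index(inplace=True)  # both index levels become ordinary columns
print(df.columns.tolist())    # ['Trial', 'measurement', 'value']
```

Each level of the MultiIndex becomes its own column, in index order, ahead of the data columns.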
Python - Efficient way to find the largest area of a specific value in a 2D numpy array
20,110,232
7
2013-11-21T00:42:11Z
20,110,364
8
2013-11-21T00:54:23Z
[ "python", "numpy", "scipy", "multidimensional-array" ]
I have a 2D numpy array where some values are zero, and some are not. I'm trying to find an efficient way to find the biggest clump of zeros in the array (by returning the number of zeros, as well as a rough idea of where the center is) For example in this array, I would like to find the clump of 9, with the center of...
You're almost there, you just need to combine `ndimage.label` with `numpy.bincount`:

```
import numpy as np
from scipy import ndimage

array = np.random.randint(0, 3, size=(200, 200))
label, num_label = ndimage.label(array == 0)
size = np.bincount(label.ravel())
biggest_label = size[1:].argmax() + 1
clump_mask = labe...
```
Python - 'ascii' codec can't decode byte \xbd in position
20,110,578
3
2013-11-21T01:14:29Z
20,110,703
7
2013-11-21T01:25:13Z
[ "python", "unicode", "web-scraping", "lxml" ]
I'm using LXML to scrape some text from webpages. Some of the text includes fractions.

```
5½
```

I need to get this into a float format. These fail:

```
ugly_fraction.encode('utf-8') #doesn't change to usable format
ugly_fraction.replace('\xbd', '') #throws error
ugly_freaction.encode('utf-8').replace('\xbd', ''...
```
[`unicodedata.numeric`](http://docs.python.org/2.7/library/unicodedata.html#unicodedata.numeric):

> Returns the numeric value assigned to the Unicode character unichr as float. If no such value is defined, default is returned, or, if not given, ValueError is raised.

Note that it only handles a single character, not a...
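Building on that, a small stdlib-only helper (my sketch, not from the answer; the function name is made up) turns strings like `5½` into a float by summing `unicodedata.numeric()` for the non-ASCII characters:

```python
import unicodedata

def parse_mixed_number(s):
    """Turn strings like u'5\xbd' ('5½') into 5.5.

    ASCII digits accumulate the integer part; any other character is
    looked up with unicodedata.numeric(), which raises ValueError for
    characters with no numeric value.
    """
    value = 0.0
    for ch in s:
        if ch.isdigit() and ord(ch) < 128:
            value = value * 10 + int(ch)
        else:
            value += unicodedata.numeric(ch)
    return value

print(parse_mixed_number(u'5\xbd'))   # 5.5
print(parse_mixed_number(u'12\xbe'))  # 12.75
```

This assumes the fraction character follows the integer digits, as in the scraped text from the question.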
Print original input order of dictionary in python
20,110,627
5
2013-11-21T01:19:41Z
20,110,672
8
2013-11-21T01:23:28Z
[ "python", "list", "python-2.7", "dictionary", "printing" ]
**How do I print out my dictionary in the original order I had set up?** If I have a dictionary like this:

```
smallestCars = {'Civic96': 12.5, 'Camry98':13.2, 'Sentra98': 13.8}
```

and I do this:

```
for cars in smallestCars:
    print cars
```

it outputs:

```
Sentra98
Civic96
Camry98
```

but what I want is thi...
A *regular* dictionary doesn't have order. You need to use the `OrderedDict` of the `collections` module, which can take a list of lists or a list of tuples, just like this:

```
import collections

key_value_pairs = [('Civic96', 12.5),
                   ('Camry98', 13.2),
                   ('Sentra98', 13.8)]
smalle...
```
How to avoid [Errno 12] Cannot allocate memory errors caused by using subprocess module
20,111,242
12
2013-11-21T02:20:58Z
26,416,661
12
2014-10-17T01:46:49Z
[ "python", "networking", "memory", "subprocess", "paramiko" ]
**Complete Working Test Case** Of course depending on your memory on the local and remote machines your array sizes will be different. ``` z1 = numpy.random.rand(300000000,2); for i in range(1000): print('*******************************************\n'); direct_output = subprocess.check_output('ssh blah@blah "ls ...
If you are running out of memory, you may want to increase your swap memory. Or you might have no swap enabled at all. In Ubuntu (it should work for other distributions as well) you can check your swap by:

```
$ sudo swapon -s
```

if it is empty it means you don't have any swap enabled. To add a 1GB swap:

```
$ sudo d...
```
python pandas convert dataframe to dictionary with multiple values
20,112,760
7
2013-11-21T04:50:59Z
20,112,846
10
2013-11-21T04:59:40Z
[ "python", "dictionary", "pandas" ]
I have a dataframe with 2 columns Address and ID. I want to merge IDs with the same addresses in a dictionary

```
import pandas as pd, numpy as np
df = pd.DataFrame({'Address' : ['12 A', '66 C', '10 B', '10 B', '12 A', '12 A'],
                   'ID' : ['Aa', 'Bb', 'Cc', 'Dd', 'Ee', 'Ff']})
AS=df.set_index('Address')['...
```
I think you can use `groupby` and a dictionary comprehension here:

```
>>> df
  Address  ID
0    12 A  Aa
1    66 C  Bb
2    10 B  Cc
3    10 B  Dd
4    12 A  Ee
5    12 A  Ff
>>> {k: list(v) for k,v in df.groupby("Address")["ID"]}
{'66 C': ['Bb'], '12 A': ['Aa', 'Ee', 'Ff'], '10 B': ['Cc', 'Dd']}
```