Cumulative Ranking of Values in Pandas with Ties
38,246,058
3
2016-07-07T12:55:06Z
38,246,382
7
2016-07-07T13:10:00Z
[ "python", "pandas", "dataframe", "cumulative-sum" ]
I am trying to find a way to do a cumulative total that accounts for ties in Pandas. Let's take hypothetical data from a track meet, where I have people, races, heats, and times. Each person's placement is determined as follows: For a given race/heat combination: * The person with the lowest time placed first * The person with the second lowest time placed second, and so on... This would be fairly simple code, but for one thing: if two people have the same time, they both get the same place, and then the next time greater than their time will have that value + 1 as the placement. In the table below, for the 100 yard dash, heat 1, **RUNNER1** finished first, **RUNNER2**/**RUNNER3** tied for second, and **RUNNER4** finished third (next time after **RUNNER2**/**RUNNER3**). So basically, the logic is as follows: If race <> race.shift() or heat <> heat.shift() then place = 1 If race = race.shift() and heat = heat.shift() and time > time.shift() then place = place.shift() + 1 If race = race.shift() and heat = heat.shift() and time = time.shift() then place = place.shift() The part that confuses me is how to handle the ties. Otherwise I could do something like ``` df['Place']=np.where( (df['race']==df['race'].shift()) & (df['heat']==df['heat'].shift()), df['Place'].shift()+1, 1) ``` Thank you! 
Sample data follows: ``` Person,Race,Heat,Time RUNNER1,100 Yard Dash,1,9.87 RUNNER2,100 Yard Dash,1,9.92 RUNNER3,100 Yard Dash,1,9.92 RUNNER4,100 Yard Dash,1,9.96 RUNNER5,100 Yard Dash,1,9.97 RUNNER6,100 Yard Dash,1,10.01 RUNNER7,100 Yard Dash,2,9.88 RUNNER8,100 Yard Dash,2,9.93 RUNNER9,100 Yard Dash,2,9.93 RUNNER10,100 Yard Dash,2,10.03 RUNNER11,100 Yard Dash,2,10.26 RUNNER7,200 Yard Dash,1,19.63 RUNNER8,200 Yard Dash,1,19.67 RUNNER9,200 Yard Dash,1,19.72 RUNNER10,200 Yard Dash,1,19.72 RUNNER11,200 Yard Dash,1,19.86 RUNNER12,200 Yard Dash,1,19.92 ``` What I want at the end is ``` Person,Race,Heat,Time,Place RUNNER1,100 Yard Dash,1,9.87,1 RUNNER2,100 Yard Dash,1,9.92,2 RUNNER3,100 Yard Dash,1,9.92,2 RUNNER4,100 Yard Dash,1,9.96,3 RUNNER5,100 Yard Dash,1,9.97,4 RUNNER6,100 Yard Dash,1,10.01,5 RUNNER7,100 Yard Dash,2,9.88,1 RUNNER8,100 Yard Dash,2,9.93,2 RUNNER9,100 Yard Dash,2,9.93,2 RUNNER10,100 Yard Dash,2,10.03,3 RUNNER11,100 Yard Dash,2,10.26,4 RUNNER7,200 Yard Dash,1,19.63,1 RUNNER8,200 Yard Dash,1,19.67,2 RUNNER9,200 Yard Dash,1,19.72,3 RUNNER10,200 Yard Dash,1,19.72,3 RUNNER11,200 Yard Dash,1,19.86,4 RUNNER12,200 Yard Dash,1,19.92,5 ``` ***[edit] Now, one step further...*** Let's assume that once I leave a set of unique values, the next time that set comes up, the values reset to 1. So, for example - note that the data goes to "heat 1", then "heat 2", and back to "heat 1" - I don't want the rankings to continue from the prior "heat 1"; rather, I want them to reset. ``` Person,Race,Heat,Time,Place RUNNER1,100 Yard Dash,1,9.87,1 RUNNER2,100 Yard Dash,1,9.92,2 RUNNER3,100 Yard Dash,1,9.92,2 RUNNER4,100 Yard Dash,2,9.96,1 RUNNER5,100 Yard Dash,2,9.97,2 RUNNER6,100 Yard Dash,2,10.01,3 RUNNER7,100 Yard Dash,1,9.88,1 RUNNER8,100 Yard Dash,1,9.93,2 RUNNER9,100 Yard Dash,1,9.93,2 ```
You could use: ``` grouped = df.groupby(['Race','Heat']) df['Place'] = grouped['Time'].transform(lambda x: pd.factorize(x, sort=True)[0]+1) ``` --- ``` import pandas as pd df = pd.DataFrame({'Heat': [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1], 'Person': ['RUNNER1', 'RUNNER2', 'RUNNER3', 'RUNNER4', 'RUNNER5', 'RUNNER6', 'RUNNER7', 'RUNNER8', 'RUNNER9', 'RUNNER10', 'RUNNER11', 'RUNNER7', 'RUNNER8', 'RUNNER9', 'RUNNER10', 'RUNNER11', 'RUNNER12'], 'Race': ['100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '200 Yard Dash', '200 Yard Dash', '200 Yard Dash', '200 Yard Dash', '200 Yard Dash', '200 Yard Dash'], 'Time': [9.8699999999999992, 9.9199999999999999, 9.9199999999999999, 9.9600000000000009, 9.9700000000000006, 10.01, 9.8800000000000008, 9.9299999999999997, 9.9299999999999997, 10.029999999999999, 10.26, 19.629999999999999, 19.670000000000002, 19.719999999999999, 19.719999999999999, 19.859999999999999, 19.920000000000002]}) grouped = df.groupby(['Race','Heat']) df['Place'] = grouped['Time'].transform(lambda x: pd.factorize(x, sort=True)[0]+1) df['Rank'] = grouped['Time'].rank(method='min') print(df) ``` yields ``` Heat Person Race Time Place Rank 0 1 RUNNER1 100 Yard Dash 9.87 1.0 1.0 1 1 RUNNER2 100 Yard Dash 9.92 2.0 2.0 2 1 RUNNER3 100 Yard Dash 9.92 2.0 2.0 3 1 RUNNER4 100 Yard Dash 9.96 3.0 4.0 4 1 RUNNER5 100 Yard Dash 9.97 4.0 5.0 5 1 RUNNER6 100 Yard Dash 10.01 5.0 6.0 6 2 RUNNER7 100 Yard Dash 9.88 1.0 1.0 7 2 RUNNER8 100 Yard Dash 9.93 2.0 2.0 8 2 RUNNER9 100 Yard Dash 9.93 2.0 2.0 9 2 RUNNER10 100 Yard Dash 10.03 3.0 4.0 10 2 RUNNER11 100 Yard Dash 10.26 4.0 5.0 11 1 RUNNER7 200 Yard Dash 19.63 1.0 1.0 12 1 RUNNER8 200 Yard Dash 19.67 2.0 2.0 13 1 RUNNER9 200 Yard Dash 19.72 3.0 3.0 14 1 RUNNER10 200 Yard Dash 19.72 3.0 3.0 15 1 RUNNER11 200 Yard Dash 19.86 4.0 5.0 16 1 RUNNER12 200 Yard Dash 19.92 5.0 6.0 ``` --- 
Note that Pandas has a [`Groupby.rank`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.rank.html) method which can compute many common forms of rank -- but not the one you described. Notice how for example on row 3 the `Rank` is 4 after a tie between the second and third runners, while the `Place` is 3. --- Regarding the edit: Use ``` (df['Heat'] != df['Heat'].shift()).cumsum() ``` to disambiguate the heats: ``` import pandas as pd df = pd.DataFrame({'Heat': [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1], 'Person': ['RUNNER1', 'RUNNER2', 'RUNNER3', 'RUNNER4', 'RUNNER5', 'RUNNER6', 'RUNNER7', 'RUNNER8', 'RUNNER9', 'RUNNER10', 'RUNNER11', 'RUNNER7', 'RUNNER8', 'RUNNER9', 'RUNNER10', 'RUNNER11', 'RUNNER12'], 'Race': ['100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash', '100 Yard Dash'], 'Time': [9.8699999999999992, 9.9199999999999999, 9.9199999999999999, 9.9600000000000009, 9.9700000000000006, 10.01, 9.8800000000000008, 9.9299999999999997, 9.9299999999999997, 10.029999999999999, 10.26, 19.629999999999999, 19.670000000000002, 19.719999999999999, 19.719999999999999, 19.859999999999999, 19.920000000000002]}) df['HeatGroup'] = (df['Heat'] != df['Heat'].shift()).cumsum() grouped = df.groupby(['Race','HeatGroup']) df['Place'] = grouped['Time'].transform(lambda x: pd.factorize(x, sort=True)[0]+1) df['Rank'] = grouped['Time'].rank(method='min') print(df) ``` yields ``` Heat Person Race Time HeatGroup Place Rank 0 1 RUNNER1 100 Yard Dash 9.87 1 1.0 1.0 1 1 RUNNER2 100 Yard Dash 9.92 1 2.0 2.0 2 1 RUNNER3 100 Yard Dash 9.92 1 2.0 2.0 3 1 RUNNER4 100 Yard Dash 9.96 1 3.0 4.0 4 1 RUNNER5 100 Yard Dash 9.97 1 4.0 5.0 5 1 RUNNER6 100 Yard Dash 10.01 1 5.0 6.0 6 2 RUNNER7 100 Yard Dash 9.88 2 1.0 1.0 7 2 RUNNER8 100 Yard Dash 9.93 2 2.0 2.0 8 2 RUNNER9 100 Yard Dash 9.93 2 2.0 2.0 9 2 RUNNER10 100 Yard Dash 10.03 2 3.0 4.0 10 2 RUNNER11 100 Yard Dash 10.26 2 4.0 5.0 11 1 RUNNER7 100 Yard Dash 19.63 3 1.0 1.0 12 1 RUNNER8 100 Yard Dash 19.67 3 2.0 2.0 13 1 RUNNER9 100 Yard Dash 19.72 3 3.0 3.0 14 1 RUNNER10 100 Yard Dash 19.72 3 3.0 3.0 15 1 RUNNER11 100 Yard Dash 19.86 3 4.0 5.0 16 1 RUNNER12 100 Yard Dash 19.92 3 5.0 6.0 ```
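The `factorize` transform above produces what is usually called a dense ranking. As a sketch (on current pandas versions; this built-in may not have been available for grouped data at the time of the answer), `rank(method='dense')` yields the same Place numbering directly:

```python
import pandas as pd

# heat 1 of the question's sample data, with a tie at 9.92
df = pd.DataFrame({
    'Race': ['100 Yard Dash'] * 4,
    'Heat': [1, 1, 1, 1],
    'Time': [9.87, 9.92, 9.92, 9.96],
})

# method='dense' gives 1, 2, 2, 3: tied runners share a place and the
# next runner takes the following place, matching the factorize trick
df['Place'] = (df.groupby(['Race', 'Heat'])['Time']
                 .rank(method='dense')
                 .astype(int))
print(df['Place'].tolist())  # [1, 2, 2, 3]
```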
How to split data into 3 sets (train, validation and test)?
38,250,710
11
2016-07-07T16:26:26Z
38,251,063
7
2016-07-07T16:47:10Z
[ "python", "pandas", "dataframe", "machine-learning", "scikit-learn" ]
I have a pandas dataframe and I wish to divide it into 3 separate sets. I know that using [train\_test\_split](http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.train_test_split.html) from `sklearn.cross_validation`, one can divide the data in two sets (train and test). However, I couldn't find any solution about splitting the data into three sets. Preferably, I'd like to have the indices of the original data. I know that a workaround would be to use `train_test_split` two times and somehow adjust the indices. But is there a more standard / built-in way to split the data into 3 sets instead of 2?
### Note: The function was written to handle seeding of randomized set creation. You should not rely on set splitting that doesn't randomize the sets.

```
import numpy as np
import pandas as pd

def train_validate_test_split(df, train_percent=.6, validate_percent=.2, seed=None):
    np.random.seed(seed)
    perm = np.random.permutation(df.index)
    m = len(df)
    train_end = int(train_percent * m)
    validate_end = int(validate_percent * m) + train_end
    # .loc selects rows by the permuted index labels
    train = df.loc[perm[:train_end]]
    validate = df.loc[perm[train_end:validate_end]]
    test = df.loc[perm[validate_end:]]
    return train, validate, test
```

### Demonstration ``` np.random.seed([3,1415]) df = pd.DataFrame(np.random.rand(10, 5), columns=list('ABCDE')) df ``` [![enter image description here](http://i.stack.imgur.com/ThpsQ.png)](http://i.stack.imgur.com/ThpsQ.png) ``` train, validate, test = train_validate_test_split(df) train ``` [![enter image description here](http://i.stack.imgur.com/XNRBT.png)](http://i.stack.imgur.com/XNRBT.png) ``` validate ``` [![enter image description here](http://i.stack.imgur.com/PpyC8.png)](http://i.stack.imgur.com/PpyC8.png) ``` test ``` [![enter image description here](http://i.stack.imgur.com/U6CaT.png)](http://i.stack.imgur.com/U6CaT.png)
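The two-call `train_test_split` workaround mentioned in the question also works and keeps the original indices. A sketch (using the modern `sklearn.model_selection` import path, which replaced `sklearn.cross_validation`):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame(np.arange(50).reshape(10, 5), columns=list('ABCDE'))

# First split off the test set, then split the remainder into train
# and validation. 0.25 of the remaining 80% is 20% of the whole, so
# the proportions come out 60/20/20.
rest, test = train_test_split(df, test_size=0.2, random_state=0)
train, validate = train_test_split(rest, test_size=0.25, random_state=0)

print(len(train), len(validate), len(test))  # 6 2 2
```

The split DataFrames retain the row labels of `df`, so the original indices are recoverable from each piece.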
How to split data into 3 sets (train, validation and test)?
38,250,710
11
2016-07-07T16:26:26Z
38,251,213
10
2016-07-07T16:56:12Z
[ "python", "pandas", "dataframe", "machine-learning", "scikit-learn" ]
I have a pandas dataframe and I wish to divide it into 3 separate sets. I know that using [train\_test\_split](http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.train_test_split.html) from `sklearn.cross_validation`, one can divide the data in two sets (train and test). However, I couldn't find any solution about splitting the data into three sets. Preferably, I'd like to have the indices of the original data. I know that a workaround would be to use `train_test_split` two times and somehow adjust the indices. But is there a more standard / built-in way to split the data into 3 sets instead of 2?
Numpy solution (thanks to [root](http://stackoverflow.com/questions/38250710/how-to-split-data-into-3-sets-train-development-and-test/38251213?noredirect=1#comment63923795_38251213) for the randomizing hint) - we will split our data set into the following parts: (60% - train set, 20% - validation set, 20% - test set): ``` In [305]: train, validate, test = np.split(df.sample(frac=1), [int(.6*len(df)), int(.8*len(df))]) In [306]: train Out[306]: A B C D E 0 0.046919 0.792216 0.206294 0.440346 0.038960 2 0.301010 0.625697 0.604724 0.936968 0.870064 1 0.642237 0.690403 0.813658 0.525379 0.396053 9 0.488484 0.389640 0.599637 0.122919 0.106505 8 0.842717 0.793315 0.554084 0.100361 0.367465 7 0.185214 0.603661 0.217677 0.281780 0.938540 In [307]: validate Out[307]: A B C D E 5 0.806176 0.008896 0.362878 0.058903 0.026328 6 0.145777 0.485765 0.589272 0.806329 0.703479 In [308]: test Out[308]: A B C D E 4 0.521640 0.332210 0.370177 0.859169 0.401087 3 0.333348 0.964011 0.083498 0.670386 0.169619 ``` P.S. `[int(.6*len(df)), int(.8*len(df))]` is the `indices_or_sections` argument for [numpy.split()](http://docs.scipy.org/doc/numpy/reference/generated/numpy.split.html). Here is a small demo of `np.split()` usage - let's split a 20-element array into the following parts: 80%, 10%, 10%: ``` In [45]: a = np.arange(1, 21) In [46]: a Out[46]: array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]) In [47]: np.split(a, [int(.8 * len(a)), int(.9 * len(a))]) Out[47]: [array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]), array([17, 18]), array([19, 20])] ```
Create a list of tuples with adjacent list elements if a condition is true
38,251,245
26
2016-07-07T16:57:49Z
38,251,312
37
2016-07-07T17:01:19Z
[ "python", "list", "python-3.x", "tuples", "list-comprehension" ]
I am trying to create a list of tuples where the tuple contents are the number `9` and the number before it in the list. **Input List:** ``` myList = [1, 8, 9, 2, 4, 9, 6, 7, 9, 8] ``` **Desired Output:** ``` sets = [(8, 9), (4, 9), (7, 9)] ``` **Code:** ``` sets = [list(zip(myList[i:i], myList[-1:])) for i in myList if i==9] ``` **Current Result:** ``` [[], [], []] ```
Cleaner Pythonic approach: ``` >>> [(x,y) for x,y in zip(myList, myList[1:]) if y == 9] [(8, 9), (4, 9), (7, 9)] ``` --- What is the code above doing: * `zip(some_list, some_list[1:])` would generate a list of pairs of adjacent elements. * Now with that tuple, filter on the condition that the second element is equal to `9`. You're done :)
Create a list of tuples with adjacent list elements if a condition is true
38,251,245
26
2016-07-07T16:57:49Z
38,251,318
7
2016-07-07T17:01:45Z
[ "python", "list", "python-3.x", "tuples", "list-comprehension" ]
I am trying to create a list of tuples where the tuple contents are the number `9` and the number before it in the list. **Input List:** ``` myList = [1, 8, 9, 2, 4, 9, 6, 7, 9, 8] ``` **Desired Output:** ``` sets = [(8, 9), (4, 9), (7, 9)] ``` **Code:** ``` sets = [list(zip(myList[i:i], myList[-1:])) for i in myList if i==9] ``` **Current Result:** ``` [[], [], []] ```
You were pretty close, I'll show you an alternative way that might be more intuitive if you're just starting out: ``` sets = [(myList[i-1], myList[i]) for i in range(len(myList)) if myList[i] == 9] ``` Get each index in the range of the list length, and if the value at position `i` is equal to `9`, grab the adjacent elements. The result is: ``` sets [(8, 9), (4, 9), (7, 9)] ``` This is *less efficient than the other approaches* but I decided to un-delete it to show you a different way of doing it. You can make it go a bit faster by using `enumerate()` instead: ``` sets = [(myList[i-1], j) for i, j in enumerate(myList) if j == 9] ``` --- *Take note* that ***in the edge case where `myList[0] = 9`*** the behavior of the comprehension without `zip` and the behavior of the comprehension with `zip` is **different**. Specifically, if `myList = [9, 1, 8, 9, 2, 4, 9, 6, 7, 9, 8]` then: ``` [(myList[i-1], myList[i]) for i in range(len(myList)) if myList[i] == 9] # results in: [(8, 9), (8, 9), (4, 9), (7, 9)] ``` while: ``` [(x, y) for x, y in zip(myList, myList[1:]) if y==9] # results in: [(8, 9), (4, 9), (7, 9)] ``` It is up to you to decide which of these fits your criteria, I'm just pointing out that they don't behave the same in all cases.
Create a list of tuples with adjacent list elements if a condition is true
38,251,245
26
2016-07-07T16:57:49Z
38,251,331
17
2016-07-07T17:02:14Z
[ "python", "list", "python-3.x", "tuples", "list-comprehension" ]
I am trying to create a list of tuples where the tuple contents are the number `9` and the number before it in the list. **Input List:** ``` myList = [1, 8, 9, 2, 4, 9, 6, 7, 9, 8] ``` **Desired Output:** ``` sets = [(8, 9), (4, 9), (7, 9)] ``` **Code:** ``` sets = [list(zip(myList[i:i], myList[-1:])) for i in myList if i==9] ``` **Current Result:** ``` [[], [], []] ```
Part of your issue is that `myList[i:i]` will always return an empty list. The end of a slice is exclusive, so when you do `a_list[0:0]` you're trying to take the elements of `a_list` that exist **between** index 0 and index 0. You're on the right track, but you want to zip the list with itself. ``` [(x, y) for x, y in zip(myList, myList[1:]) if y==9] ```
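To make the slicing pitfall concrete, here is a short sketch showing why the original comprehension produced only empty lists, and what the self-zip yields instead:

```python
myList = [1, 8, 9, 2, 4, 9, 6, 7, 9, 8]

# A slice whose start equals its stop is always empty, so the original
# zip(...) call had nothing to pair up.
assert myList[2:2] == []
assert list(zip(myList[2:2], myList[-1:])) == []

# Zipping the list against itself shifted by one yields adjacent pairs.
pairs = [(x, y) for x, y in zip(myList, myList[1:]) if y == 9]
print(pairs)  # [(8, 9), (4, 9), (7, 9)]
```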
Comparison of Pandas lookup times
38,254,067
15
2016-07-07T19:46:24Z
38,258,390
8
2016-07-08T03:13:17Z
[ "python", "performance", "pandas" ]
After experimenting with timing various types of lookups on a Pandas DataFrame I am left with a few questions. Here is the set up... ``` import pandas as pd import numpy as np import itertools letters = [chr(x) for x in range(ord('a'), ord('z'))] letter_combinations = [''.join(x) for x in itertools.combinations(letters, 3)] df1 = pd.DataFrame({ 'value': np.random.normal(size=(1000000)), 'letter': np.random.choice(letter_combinations, 1000000) }) df2 = df1.sort_values('letter') df3 = df1.set_index('letter') df4 = df3.sort_index() ``` So df1 looks something like this... ``` print(df1.head(5)) >>> letter value 0 bdh 0.253778 1 cem -1.915726 2 mru -0.434007 3 lnw -1.286693 4 fjv 0.245523 ``` Here is the code to test differences in lookup performance... ``` print('~~~~~~~~~~~~~~~~~NON-INDEXED LOOKUPS / UNSORTED DATASET~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~') %timeit df1[df1.letter == 'ben'] %timeit df1[df1.letter == 'amy'] %timeit df1[df1.letter == 'abe'] print('~~~~~~~~~~~~~~~~~NON-INDEXED LOOKUPS / SORTED DATASET~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~') %timeit df2[df2.letter == 'ben'] %timeit df2[df2.letter == 'amy'] %timeit df2[df2.letter == 'abe'] print('~~~~~~~~~~~~~~~~~~~~~INDEXED LOOKUPS~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~') %timeit df3.loc['ben'] %timeit df3.loc['amy'] %timeit df3.loc['abe'] print('~~~~~~~~~~~~~~~~~~~~~SORTED INDEXED LOOKUPS~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~') %timeit df4.loc['ben'] %timeit df4.loc['amy'] %timeit df4.loc['abe'] ``` And the results... 
``` ~~~~~~~~~~~~~~~~~NON-INDEXED LOOKUPS / UNSORTED DATASET~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 10 loops, best of 3: 59.7 ms per loop 10 loops, best of 3: 59.7 ms per loop 10 loops, best of 3: 59.7 ms per loop ~~~~~~~~~~~~~~~~~NON-INDEXED LOOKUPS / SORTED DATASET~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 10 loops, best of 3: 192 ms per loop 10 loops, best of 3: 192 ms per loop 10 loops, best of 3: 193 ms per loop ~~~~~~~~~~~~~~~~~~~~~INDEXED LOOKUPS~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The slowest run took 4.66 times longer than the fastest. This could mean that an intermediate result is being cached 10 loops, best of 3: 40.9 ms per loop 10 loops, best of 3: 41 ms per loop 10 loops, best of 3: 40.9 ms per loop ~~~~~~~~~~~~~~~~~~~~~SORTED INDEXED LOOKUPS~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The slowest run took 1621.00 times longer than the fastest. This could mean that an intermediate result is being cached 1 loops, best of 3: 259 µs per loop 1000 loops, best of 3: 242 µs per loop 1000 loops, best of 3: 243 µs per loop ``` Questions... 1. It's pretty clear why the lookup on the sorted index is so much faster, binary search to get O(log(n)) performance vs O(n) for a full array scan. But, why is the lookup on the sorted non-indexed `df2` column *SLOWER* than the lookup on the unsorted non-indexed column `df1`? 2. What is up with the `The slowest run took x times longer than the fastest. This could mean that an intermediate result is being cached`. Surely, the results aren't being cached. Is it because the created index is lazy and isn't *actually* reindexed until needed? That would explain why it is only on the first call to `.loc[]`. 3. Why isn't an index sorted by default? The fixed cost of the sort can be too much?
The disparity in these %timeit results ``` In [273]: %timeit df1[df1['letter'] == 'ben'] 10 loops, best of 3: 36.1 ms per loop In [274]: %timeit df2[df2['letter'] == 'ben'] 10 loops, best of 3: 108 ms per loop ``` also shows up in the *pure NumPy* equality comparisons: ``` In [275]: %timeit df1['letter'].values == 'ben' 10 loops, best of 3: 24.1 ms per loop In [276]: %timeit df2['letter'].values == 'ben' 10 loops, best of 3: 96.5 ms per loop ``` Under the hood, Pandas' `df1['letter'] == 'ben'` [calls a Cython function](https://github.com/pydata/pandas/blob/master/pandas/lib.pyx#L786) which loops through the values of the underlying NumPy array, `df1['letter'].values`. It is essentially doing the same thing as `df1['letter'].values == 'ben'` but with different handling of NaNs. Moreover, notice that simply accessing the items in `df1['letter']` in sequential order can be done more quickly than doing the same for `df2['letter']`: ``` In [11]: %timeit [item for item in df1['letter']] 10 loops, best of 3: 49.4 ms per loop In [12]: %timeit [item for item in df2['letter']] 10 loops, best of 3: 124 ms per loop ``` The difference in times within each of these three sets of `%timeit` tests are roughly the same. I think that is because they all share the same cause. Since the `letter` column holds strings, the NumPy arrays `df1['letter'].values` and `df2['letter'].values` have dtype `object` and therefore they hold pointers to the memory location of the arbitrary Python objects (in this case strings). Consider the memory location of the strings stored in the DataFrames, `df1` and `df2`. 
In CPython, `id` returns the memory location of the object: ``` memloc = pd.DataFrame({'df1': list(map(id, df1['letter'])), 'df2': list(map(id, df2['letter'])), }) df1 df2 0 140226328244040 140226299303840 1 140226328243088 140226308389048 2 140226328243872 140226317328936 3 140226328243760 140226230086600 4 140226328243368 140226285885624 ``` The strings in `df1` (after the first dozen or so) tend to appear sequentially in memory, while sorting causes the strings in `df2` (taken in order) to be scattered in memory: ``` In [272]: diffs = memloc.diff(); diffs.head(30) Out[272]: df1 df2 0 NaN NaN 1 -952.0 9085208.0 2 784.0 8939888.0 3 -112.0 -87242336.0 4 -392.0 55799024.0 5 -392.0 5436736.0 6 952.0 22687184.0 7 56.0 -26436984.0 8 -448.0 24264592.0 9 -56.0 -4092072.0 10 -168.0 -10421232.0 11 -363584.0 5512088.0 12 56.0 -17433416.0 13 56.0 40042552.0 14 56.0 -18859440.0 15 56.0 -76535224.0 16 56.0 94092360.0 17 56.0 -4189368.0 18 56.0 73840.0 19 56.0 -5807616.0 20 56.0 -9211680.0 21 56.0 20571736.0 22 56.0 -27142288.0 23 56.0 5615112.0 24 56.0 -5616568.0 25 56.0 5743152.0 26 56.0 -73057432.0 27 56.0 -4988200.0 28 56.0 85630584.0 29 56.0 -4706136.0 ``` Most of the strings in `df1` are 56 bytes apart: ``` In [16]: diffs['df1'].value_counts() Out[16]: 56.0 986109 120.0 13671 -524168.0 215 -56.0 1 -12664712.0 1 41136.0 1 -231731080.0 1 Name: df1, dtype: int64 In [20]: len(diffs['df1'].value_counts()) Out[20]: 7 ``` In contrast, the strings in `df2` are scattered all over the place: ``` In [17]: diffs['df2'].value_counts().head() Out[17]: -56.0 46 56.0 44 168.0 39 -112.0 37 -392.0 35 Name: df2, dtype: int64 In [19]: len(diffs['df2'].value_counts()) Out[19]: 837764 ``` When these objects (strings) are located sequentially in memory, their values can be retrieved more quickly. This is why the equality comparisons performed by `df1['letter'].values == 'ben'` can be done faster than those in `df2['letter'].values == 'ben'`. *The lookup time is smaller*. 
This memory accessing issue also explains why there is no disparity in the `%timeit` results for the `value` column. ``` In [5]: %timeit df1[df1['value'] == 0] 1000 loops, best of 3: 1.8 ms per loop In [6]: %timeit df2[df2['value'] == 0] 1000 loops, best of 3: 1.78 ms per loop ``` `df1['value']` and `df2['value']` are NumPy arrays of dtype `float64`. Unlike object arrays, their values are packed together contiguously in memory. Sorting `df1` with `df2 = df1.sort_values('letter')` causes the values in `df2['value']` to be reordered, but since the values are *copied* into a new NumPy array, the values are located sequentially in memory. So accessing the values in `df2['value']` can be done just as quickly as those in `df1['value']`.
Can generators be recursive?
38,254,304
16
2016-07-07T20:01:20Z
38,254,338
23
2016-07-07T20:04:29Z
[ "python", "recursion", "generator" ]
I naively tried to create a recursive generator. Didn't work. This is what I did: ``` def recursive_generator(lis): yield lis[0] recursive_generator(lis[1:]) for k in recursive_generator([6,3,9,1]): print(k) ``` All I got was the first item `6`. Is there a way to make such code work? Essentially transferring the `yield` command to the level above in a recursion scheme?
Try this: ``` def recursive_generator(lis): yield lis[0] yield from recursive_generator(lis[1:]) for k in recursive_generator([6,3,9,1]): print(k) ``` I should point out this doesn't work because of a bug in your function. It should probably include a check that `lis` isn't empty, as shown below: ``` def recursive_generator(lis): if lis: yield lis[0] yield from recursive_generator(lis[1:]) ``` In case you are on Python 2.7 and don't have `yield from`, [check this question out.](http://stackoverflow.com/questions/17581332/converting-yield-from-statement-to-python-2-7-code)
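For reference, on Python 2.7 (or any version without `yield from`) the linked workaround amounts to re-yielding each item from the recursive call explicitly; a sketch:

```python
def recursive_generator(lis):
    if lis:
        yield lis[0]
        # Without `yield from`, delegate by looping over the inner
        # generator and re-yielding each of its items.
        for item in recursive_generator(lis[1:]):
            yield item

print(list(recursive_generator([6, 3, 9, 1])))  # [6, 3, 9, 1]
```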
Max recursion is not exactly what sys.getrecursionlimit() claims. How come?
38,265,839
27
2016-07-08T11:47:43Z
38,265,931
35
2016-07-08T11:52:47Z
[ "python", "recursion" ]
I've made a small function that will actually measure the max recursion limit:

```
def f(x):
    r = x
    try:
        r = f(x+1)
    except Exception as e:
        print(e)
    finally:
        return r
```

To know what to expect I've checked: ``` In [28]: import sys In [29]: sys.getrecursionlimit() Out[29]: 1000 ``` However ``` In [30]: f(0) maximum recursion depth exceeded Out[30]: 970 ``` The number is not fixed, always around ~970, and slightly changes between different instances of python (e.g. from within spyder to system cmd prompt). Please note that I'm using ipython on python3. What's going on? **Why is the actual limit I'm getting lower than the `sys.getrecursionlimit()` value?**
The recursion limit is not a limit on recursion but the maximum depth of the Python interpreter stack. There is something on the stack before your function gets executed. Spyder executes some Python code before it calls your script, as do other interpreters like IPython. You can inspect the stack via methods in the `inspect` module. In CPython for me: ``` >>> print(len(inspect.stack())) 1 ``` In IPython for me: ``` >>> print(len(inspect.stack())) 10 ``` As knbk pointed out in the comments, as soon as you hit the stack limit a `RecursionError` is thrown, and the interpreter raises the stack limit a bit to give you a chance to handle the error gracefully. If you also exhaust that raised limit, Python will crash.
Max recursion is not exactly what sys.getrecursionlimit() claims. How come?
38,265,839
27
2016-07-08T11:47:43Z
38,266,011
8
2016-07-08T11:56:28Z
[ "python", "recursion" ]
I've made a small function that will actually measure the max recursion limit:

```
def f(x):
    r = x
    try:
        r = f(x+1)
    except Exception as e:
        print(e)
    finally:
        return r
```

To know what to expect I've checked: ``` In [28]: import sys In [29]: sys.getrecursionlimit() Out[29]: 1000 ``` However ``` In [30]: f(0) maximum recursion depth exceeded Out[30]: 970 ``` The number is not fixed, always around ~970, and slightly changes between different instances of python (e.g. from within spyder to system cmd prompt). Please note that I'm using ipython on python3. What's going on? **Why is the actual limit I'm getting lower than the `sys.getrecursionlimit()` value?**
This limit applies to the stack, not to the function you define. Other internal things may push frames onto the stack before your code runs, and of course it depends on the environment in which the code was executed: some environments pollute the stack more, some less.
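A sketch of measuring both effects at once, the nominal limit and the frames already consumed before your code runs (exact numbers vary by environment):

```python
import sys
import inspect

def max_depth(x=0):
    """Recurse until the interpreter refuses, returning the depth reached."""
    try:
        return max_depth(x + 1)
    except RecursionError:
        return x

limit = sys.getrecursionlimit()
already_used = len(inspect.stack())  # frames on the stack before we start
reached = max_depth()

# The reachable depth is always below the nominal limit, because frames
# already on the stack count against it.
print(limit, already_used, reached)
```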
Is extending a Python list (e.g. l += [1]) guaranteed to be thread-safe?
38,266,186
21
2016-07-08T12:04:03Z
38,266,364
9
2016-07-08T12:13:54Z
[ "python", "multithreading", "thread-safety", "python-multithreading" ]
If I have an integer `i`, it is not safe to do `i += 1` on multiple threads: ``` >>> i = 0 >>> def increment_i(): ... global i ... for j in range(1000): i += 1 ... >>> threads = [threading.Thread(target=increment_i) for j in range(10)] >>> for thread in threads: thread.start() ... >>> for thread in threads: thread.join() ... >>> i 4858 # Not 10000 ``` However, if I have a list `l`, it does seem safe to do `l += [1]` on multiple threads: ``` >>> l = [] >>> def extend_l(): ... global l ... for j in range(1000): l += [1] ... >>> threads = [threading.Thread(target=extend_l) for j in range(10)] >>> for thread in threads: thread.start() ... >>> for thread in threads: thread.join() ... >>> len(l) 10000 ``` Is `l += [1]` guaranteed to be thread-safe? If so, does this apply to all Python implementations or just CPython? **Edit:** It seems that `l += [1]` is thread-safe but `l = l + [1]` is not... ``` >>> l = [] >>> def extend_l(): ... global l ... for j in range(1000): l = l + [1] ... >>> threads = [threading.Thread(target=extend_l) for j in range(10)] >>> for thread in threads: thread.start() ... >>> for thread in threads: thread.join() ... >>> len(l) 3305 # Not 10000 ```
From <http://effbot.org/pyfaq/what-kinds-of-global-value-mutation-are-thread-safe.htm> : > Operations that replace other objects may invoke those other objects’ `__del__` method when their reference count reaches zero, and that can affect things. This is especially true for the mass updates to dictionaries and lists. > > The following operations are all atomic (L, L1, L2 are lists, D, D1, D2 are dicts, x, y are objects, i, j are ints): > > ``` > L.append(x) > L1.extend(L2) > x = L[i] > x = L.pop() > L1[i:j] = L2 > L.sort() > x = y > x.field = y > D[x] = y > D1.update(D2) > D.keys() > ``` > > These aren’t: > > ``` > i = i+1 > L.append(L[-1]) > L[i] = L[j] > D[x] = D[x] + 1 > ``` The above is purely CPython-specific and can vary across different Python implementations such as PyPy. By the way, there is an open issue for documenting atomic Python operations: <https://bugs.python.org/issue15339>
Is extending a Python list (e.g. l += [1]) guaranteed to be thread-safe?
38,266,186
21
2016-07-08T12:04:03Z
38,320,815
14
2016-07-12T05:42:11Z
[ "python", "multithreading", "thread-safety", "python-multithreading" ]
If I have an integer `i`, it is not safe to do `i += 1` on multiple threads: ``` >>> i = 0 >>> def increment_i(): ... global i ... for j in range(1000): i += 1 ... >>> threads = [threading.Thread(target=increment_i) for j in range(10)] >>> for thread in threads: thread.start() ... >>> for thread in threads: thread.join() ... >>> i 4858 # Not 10000 ``` However, if I have a list `l`, it does seem safe to do `l += [1]` on multiple threads: ``` >>> l = [] >>> def extend_l(): ... global l ... for j in range(1000): l += [1] ... >>> threads = [threading.Thread(target=extend_l) for j in range(10)] >>> for thread in threads: thread.start() ... >>> for thread in threads: thread.join() ... >>> len(l) 10000 ``` Is `l += [1]` guaranteed to be thread-safe? If so, does this apply to all Python implementations or just CPython? **Edit:** It seems that `l += [1]` is thread-safe but `l = l + [1]` is not... ``` >>> l = [] >>> def extend_l(): ... global l ... for j in range(1000): l = l + [1] ... >>> threads = [threading.Thread(target=extend_l) for j in range(10)] >>> for thread in threads: thread.start() ... >>> for thread in threads: thread.join() ... >>> len(l) 3305 # Not 10000 ```
There isn't a happy ;-) answer to this. There's nothing guaranteed about any of it, which you can confirm simply by noting that the Python reference manual makes no guarantees about atomicity. In CPython it's a matter of pragmatics. As a snipped part of effbot's article says, > In theory, this means an exact accounting requires an exact understanding of the PVM [Python Virtual Machine] bytecode implementation. And that's the truth. A CPython expert knows `L += [x]` is atomic because they know all of the following: * `+=` compiles to an `INPLACE_ADD` bytecode. * The implementation of `INPLACE_ADD` for list objects is written entirely in C (no Python code is on the execution path, so the GIL can't be released *between* bytecodes). * In `listobject.c`, the implementation of `INPLACE_ADD` is function `list_inplace_concat()`, and nothing during its execution needs to execute any user Python code either (if it did, the GIL may again be released). That may all sound incredibly difficult to keep straight, but for someone with effbot's knowledge of CPython's internals (at the time he wrote that article), it really isn't. In fact, given that depth of knowledge, it's all kind of obvious ;-) So as a matter of *pragmatics*, CPython experts have always freely relied on that "operations that 'look atomic' should really be atomic", and that also guided some language decisions. For example, an operation missing from effbot's list (added to the language after he wrote that article): ``` x = D.pop(y) # or ... x = D.pop(y, default) ``` One argument (at the time) in favor of adding `dict.pop()` was precisely that the obvious C implementation would be atomic, whereas the in-use (at the time) alternative: ``` x = D[y] del D[y] ``` was not atomic (the retrieval and the deletion are done via distinct bytecodes, so threads can switch between them). But the docs never *said* `.pop()` was atomic, and never will. 
This is a "consenting adults" kind of thing: if you're expert enough to exploit this knowingly, you don't need hand-holding. If you're not expert enough, then the last sentence of effbot's article applies: > When in doubt, use a mutex! As a matter of pragmatic necessity, core developers will never break the atomicity of effbot's examples (or of `D.pop()` or `D.setdefault()`) in CPython. Other implementations are under no obligation at all to mimic these pragmatic choices, though. Indeed, since atomicity in these cases relies on CPython's specific form of bytecode combined with CPython's use of a global interpreter lock that can only be released between bytecodes, it *could* be a real pain for other implementations to mimic them. And you never know: some future version of CPython may remove the GIL too! I doubt it, but it's theoretically possible. But if that happens, I bet a parallel version retaining the GIL will be maintained too, because a whole lot of code (especially extension modules written in `C`) relies on the GIL for thread safety too. Worth repeating: > When in doubt, use a mutex!
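Taking the answer's closing advice literally, here is a minimal sketch (names are mine, not from the post) of the mutex approach applied to the question's integer counter, which makes the result reliable without depending on bytecode-level atomicity:

```python
import threading

counter = 0
lock = threading.Lock()

def increment():
    global counter
    for _ in range(1000):
        # Holding the lock makes the read-modify-write sequence atomic
        # with respect to the other threads.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 10000, reliably -- unlike the bare `i += 1` version
```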
Applications of '~' (tilde) operator in Python
38,271,945
13
2016-07-08T16:54:50Z
38,272,045
11
2016-07-08T17:02:41Z
[ "python", "python-3.x", "operator-overloading", "bit-manipulation", "tilde" ]
I just discovered the [bitwise complement unary operation](https://en.wikipedia.org/wiki/Bitwise_operation#NOT) in Python via [this question](https://stackoverflow.com/questions/8305199/the-tilde-operator-in-python) and have been trying to come up with an actual application for it, and if not, to determine if it's generally safe to overload the operator (by overriding the `__invert__` method) for other uses. The example given in the question fails with a `TypeError`, and the [link](https://graphics.stanford.edu/~seander/bithacks.html) provided seems pretty intimidating. Here's some fiddling around to see `~` in use: ``` from bitstring import BitArray x = 7 print(~x) # -8 print(BitArray(int=x, length=4).bin) # '0111' print(BitArray(int=~x, length=4).bin) # '1000' print(~~True, ~~False) # 1 0 for i in range(-100, 100): assert i + ~i == -1 assert i ^ ~i == -1 assert bool(i) == ~~bool(i) ``` Are there *any* examples of valid use-cases for this operator that I should be aware of? And even if there are, is it generally acceptable to override this operator for types other than `int`?
The standard use cases for the bitwise NOT operator are bitwise operations, just like the bitwise AND `&`, the bitwise OR `|`, the bitwise XOR `^`, and bitwise shifting `<<` and `>>`. Although they are rarely used in higher level applications, there are still some times where you need to do bitwise manipulations, so that’s why they are there. Of course, you may override these for custom types, and in general you are not required to follow any specific semantics when doing so. Just choose what makes sense for your type and what still fits the operator in some way. If the operation is obscure and better explained with a word or two, then you should use a standard method instead. But there are some situations, especially when working with number-related types, that could have some mathematical-like operations which fit the bitwise operators, and for those it is fine to use the operators. Just like you would override standard operators like `+` and `-` only for meaningful operations, you should try to do the same for bitwise operators. --- The reason `~~True, ~~False` gives you `(1, 0)` is because the `bool` type does not define its own `__invert__` operation. However, `int` does; and `bool` is actually a subtype of `int`. So `bool` actually inherits the logic of all bitwise and arithmetical operators. That’s why `True + True == 2` etc.
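As an illustration of overriding `~` for a type where inversion has a natural meaning, here is a sketch of a hypothetical fixed-width bit mask (the `BitMask` name and its semantics are my own invention, not from the question or any library):

```python
class BitMask:
    """Hypothetical fixed-width bit mask where ~ inverts within the width."""

    def __init__(self, value, width=8):
        self.value = value & ((1 << width) - 1)
        self.width = width

    def __invert__(self):
        # Flip every bit, but stay inside the fixed width
        # (unlike int's ~, which produces a negative number).
        return BitMask(~self.value & ((1 << self.width) - 1), self.width)

    def __eq__(self, other):
        return (self.value, self.width) == (other.value, other.width)

    def __repr__(self):
        return f"BitMask(0b{self.value:0{self.width}b})"

print(~BitMask(0b00001111))  # BitMask(0b11110000)
```

Here `~` fits the operator's usual meaning (bitwise complement) while giving it type-specific semantics, which is exactly the kind of override the answer describes as acceptable.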
How to repeat individual characters in strings in Python
38,273,353
4
2016-07-08T18:36:47Z
38,273,369
15
2016-07-08T18:37:59Z
[ "python", "string" ]
I know that ``` "123abc" * 2 ``` evaluates as `"123abc123abc"`, but is there an easy way to repeat individual letters N times, e.g. convert `"123abc"` to `"112233aabbcc"` or `"111222333aaabbbccc"`?
What about: ``` >>> s = '123abc' >>> n = 3 >>> ''.join(char*n for char in s) '111222333aaabbbccc' >>> ```
How to count multiple unique occurrences of unique occurrences in Python list?
38,273,531
3
2016-07-08T18:50:29Z
38,273,595
11
2016-07-08T18:54:47Z
[ "python" ]
Let's say I have a 2D list in Python: ``` mylist = [["A", "X"],["A", "X"],["A", "Y"],["B", "X"],["B", "X"],["A", "Y"]] ``` In this case my "keys" would be the first element of each array ("A" or "B") and my "values" would be the second element ("X" or "Y"). At the end of my consolidation the output should consolidate the keys and count the unique occurrences of values present for each key, i.e. something like: ``` # Output # {"A":{"X":2, "Y":2}, "B":{"X":2, "Y":1}} ``` I am trying to use Python's itertools.groupby, but to no avail. Something similar to [this question](http://stackoverflow.com/questions/2392929/how-to-get-unique-values-with-respective-occurance-count-from-a-list-in-python). If you have a better method, let me know. Thanks!
I think the easiest way to do this would be with Counter and defaultdict: ``` from collections import defaultdict, Counter output = defaultdict(Counter) for a, b in mylist: output[a][b] += 1 ```
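A runnable version of the snippet above, using the sample data from the question (the nested `defaultdict`/`Counter` is converted to plain dicts only for display):

```python
from collections import defaultdict, Counter

mylist = [["A", "X"], ["A", "X"], ["A", "Y"], ["B", "X"], ["B", "X"], ["A", "Y"]]

output = defaultdict(Counter)
for a, b in mylist:
    output[a][b] += 1

# Convert to plain dicts just for readable printing
print({k: dict(v) for k, v in output.items()})
# {'A': {'X': 2, 'Y': 2}, 'B': {'X': 2}}
```

Note that with this particular sample data, `B` only ever pairs with `X`, so there is no `'Y': 1` entry under `B`, unlike the output sketched in the question.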
Unpack a Python tuple from left to right?
38,276,068
18
2016-07-08T22:10:02Z
38,276,095
45
2016-07-08T22:12:42Z
[ "python", "python-3.x", "tuples", "iterable-unpacking" ]
Is there a clean/simple way to unpack a Python tuple on the right-hand side from left to right? For example, for ``` j = 1,2,3,4,5,6,7 (1,2,3,4,5,6,7) v,b,n = j[4:7] ``` can I modify the slice notation so that `v = j[6], b = j[5], n = j[4]`? I realise I can just reorder the left-hand side to get the desired elements, but there might be instances where I would just want to unpack the tuple from left to right, I think.
This should do: ``` v,b,n = j[6:3:-1] ``` A step value of `-1`, starting at index `6` and stopping just before index `3`.
Unpack a Python tuple from left to right?
38,276,068
18
2016-07-08T22:10:02Z
38,276,235
10
2016-07-08T22:28:51Z
[ "python", "python-3.x", "tuples", "iterable-unpacking" ]
Is there a clean/simple way to unpack a Python tuple on the right-hand side from left to right? For example, for ``` j = 1,2,3,4,5,6,7 (1,2,3,4,5,6,7) v,b,n = j[4:7] ``` can I modify the slice notation so that `v = j[6], b = j[5], n = j[4]`? I realise I can just reorder the left-hand side to get the desired elements, but there might be instances where I would just want to unpack the tuple from left to right, I think.
``` n,b,v=j[4:7] ``` will also work. You can just change the order of the returned unpacked values.
Unpack a Python tuple from left to right?
38,276,068
18
2016-07-08T22:10:02Z
38,276,396
9
2016-07-08T22:46:47Z
[ "python", "python-3.x", "tuples", "iterable-unpacking" ]
Is there a clean/simple way to unpack a Python tuple on the right-hand side from left to right? For example, for ``` j = 1,2,3,4,5,6,7 (1,2,3,4,5,6,7) v,b,n = j[4:7] ``` can I modify the slice notation so that `v = j[6], b = j[5], n = j[4]`? I realise I can just reorder the left-hand side to get the desired elements, but there might be instances where I would just want to unpack the tuple from left to right, I think.
You could ignore the first element after reversing and use [extended iterable unpacking](https://www.python.org/dev/peps/pep-3132/): ``` j = 1, 2, 3, 4, 5, 6, 7 _, v, b, n, *_ = reversed(j) print(v, b, n) ``` Which would give you: ``` 6 5 4 ``` Or if you want to get arbitrary elements you could use `operator.itemgetter`: ``` j = 1, 2, 3, 4, 5, 6, 7 from operator import itemgetter def unpack(it, *args): return itemgetter(*args)(it) v,b,n = unpack(j, -2,-3,-4) print(v, b, n) ``` The advantage of *itemgetter* is that it will work on any iterable and the elements don't have to be consecutive.
Unpack a Python tuple from left to right?
38,276,068
18
2016-07-08T22:10:02Z
38,276,534
11
2016-07-08T23:04:10Z
[ "python", "python-3.x", "tuples", "iterable-unpacking" ]
Is there a clean/simple way to unpack a Python tuple on the right-hand side from left to right? For example, for ``` j = 1,2,3,4,5,6,7 (1,2,3,4,5,6,7) v,b,n = j[4:7] ``` can I modify the slice notation so that `v = j[6], b = j[5], n = j[4]`? I realise I can just reorder the left-hand side to get the desired elements, but there might be instances where I would just want to unpack the tuple from left to right, I think.
In case you want to keep the original indices (i.e. don't want to bother with changing 4 and 7 to 6 and 3) you can also use: ``` v, b, n = (j[4:7][::-1]) ```
Use groupby in Pandas to count things in one column in comparison to another
38,278,603
4
2016-07-09T05:12:58Z
38,279,370
7
2016-07-09T07:16:08Z
[ "python", "pandas", "dataframe" ]
Maybe groupby is the wrong approach. Seems like it should work but I'm not seeing it... I want to group an event by its outcome. Here is my DataFrame (df): ``` Status Event SUCCESS Run SUCCESS Walk SUCCESS Run FAILED Walk ``` Here is my desired result: ``` Event SUCCESS FAILED Run 2 1 Walk 0 1 ``` I'm trying to make a grouped object but I can't figure out how to call it to display what I want. ``` grouped = df['Status'].groupby(df['Event']) ```
An alternative solution, using [pivot\_table()](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html) method: ``` In [5]: df.pivot_table(index='Event', columns='Status', aggfunc=len, fill_value=0) Out[5]: Status FAILED SUCCESS Event Run 0 2 Walk 1 1 ``` Timing against 700K DF: ``` In [74]: df.shape Out[74]: (700000, 2) In [75]: # (c) Merlin In [76]: %%timeit ....: pd.crosstab(df.Event, df.Status) ....: 1 loop, best of 3: 333 ms per loop In [77]: # (c) piRSquared In [78]: %%timeit ....: df.groupby('Event').Status.value_counts().unstack().fillna(0) ....: 1 loop, best of 3: 325 ms per loop In [79]: # (c) MaxU In [80]: %%timeit ....: df.pivot_table(index='Event', columns='Status', ....: aggfunc=len, fill_value=0) ....: 1 loop, best of 3: 367 ms per loop In [81]: # (c) ayhan In [82]: %%timeit ....: (df.assign(ones = np.ones(len(df))) ....: .pivot_table(index='Event', columns='Status', ....: aggfunc=np.sum, values = 'ones') ....: ) ....: 1 loop, best of 3: 264 ms per loop In [83]: # (c) Divakar In [84]: %%timeit ....: unq1,ID1 = np.unique(df['Event'],return_inverse=True) ....: unq2,ID2 = np.unique(df['Status'],return_inverse=True) ....: # Get linear indices/tags corresponding to grouped headers ....: tag = ID1*(ID2.max()+1) + ID2 ....: # Setup 2D Numpy array equivalent of expected Dataframe ....: out = np.zeros((len(unq1),len(unq2)),dtype=int) ....: unqID, count = np.unique(tag,return_counts=True) ....: np.put(out,unqID,count) ....: # Finally convert to Dataframe ....: df_out = pd.DataFrame(out,columns=unq2) ....: df_out.index = unq1 ....: 1 loop, best of 3: 2.25 s per loop ``` Conclusion: the @[ayhan](http://stackoverflow.com/questions/38278603/use-groupby-in-pandas-to-count-things-in-one-column-in-comparison-to-another/38279370?noredirect=1#comment63979020_38279370)'s solution currently wins: ``` (df.assign(ones = np.ones(len(df))) .pivot_table(index='Event', columns='Status', values = 'ones', aggfunc=np.sum, fill_value=0) ) ```
Why is str.strip() so much faster than str.strip(' ')?
38,285,654
30
2016-07-09T19:38:30Z
38,285,655
32
2016-07-09T19:38:30Z
[ "python", "string", "performance", "python-3.x", "python-internals" ]
Stripping white-space can be done in two ways with **[`str.strip`](https://docs.python.org/3/library/stdtypes.html#str.strip)**. You can either issue a call with no arguments, `str.strip()`, which strips white-space by default, or explicitly supply the argument yourself with `str.strip(' ')`. But, why is it that when timed these functions perform so differently? Using a sample string with an intentional amount of white spaces: ``` s = " " * 100 + 'a' + " " * 100 ``` The timings for `s.strip()` and `s.strip(' ')` are respectively: ``` %timeit s.strip() The slowest run took 32.74 times longer than the fastest. This could mean that an intermediate result is being cached. 1000000 loops, best of 3: 396 ns per loop %timeit s.strip(' ') 100000 loops, best of 3: 4.5 µs per loop ``` `strip` takes `396ns` while `strip(' ')` takes `4.5 μs`; a similar scenario is present with `rstrip` and `lstrip` under the same conditions. Also, [`bytes` objects seem to be affected too](http://stackoverflow.com/a/38286494/4952130). The timings were performed for `Python 3.5.2`; on `Python 2.7.1` the difference is less drastic. The [docs on `str.strip`](https://docs.python.org/3/library/stdtypes.html#str.strip) don't indicate anything useful, so, *why does this happen*?
### In a tl;dr fashion: This is because two functions exist for the two different cases, as can be seen in [`unicode_strip`](https://github.com/python/cpython/blob/master/Objects/unicodeobject.c#L12260); `do_strip` and `_PyUnicode_XStrip`, with the first executing much faster than the second. Function **[`do_strip`](https://github.com/python/cpython/blob/master/Objects/unicodeobject.c#L12164)** is for the common case `str.strip()` where no arguments exist and [**`do_argstrip`**](https://github.com/python/cpython/blob/master/Objects/unicodeobject.c#L12230) (which wraps `_PyUnicode_XStrip`) for the case where `str.strip(arg)` is called, i.e. arguments are provided. --- `do_argstrip` just checks the separator and, if it is valid and not equal to `None` (in which case it calls `do_strip`), it calls [`_PyUnicode_XStrip`](https://github.com/python/cpython/blob/master/Objects/unicodeobject.c#L12077). Both `do_strip` and `_PyUnicode_XStrip` follow the same logic: two counters are used, one equal to zero and the other equal to the length of the string. Using two `while` loops, the first counter is incremented until a value not equal to the separator is reached and the second counter is decremented until the same condition is met. The difference lies in how the check that the current character is not equal to the separator is performed. ### For `do_strip`: In the most common case, where the characters in the string to be stripped can be represented in `ascii`, an additional small performance boost is present. ``` while (i < len) { Py_UCS1 ch = data[i]; if (!_Py_ascii_whitespace[ch]) break; i++; } ``` * Accessing the current character in the data is done quickly by indexing into the underlying array: `Py_UCS1 ch = data[i];` * The check whether a character is white-space is a simple index into the array [`_Py_ascii_whitespace[ch]`](https://github.com/python/cpython/blob/master/Objects/unicodeobject.c#L217). So, in short, it is quite efficient.
If the characters are not in the `ascii` range, the differences aren't that drastic but they do slow the overall execution down: ``` while (i < len) { Py_UCS4 ch = PyUnicode_READ(kind, data, i); if (!Py_UNICODE_ISSPACE(ch)) break; i++; } ``` * Accessing is done with `Py_UCS4 ch = PyUnicode_READ(kind, data, i);` * Checking if the character is whitespace is done by the [`Py_UNICODE_ISSPACE(ch)`](https://docs.python.org/3/c-api/unicode.html#c.Py_UNICODE_ISSPACE) macro (which simply calls another macro: [`Py_ISSPACE`](https://github.com/python/cpython/blob/master/Include/pyctype.h#L24)) ### For `_PyUnicode_XStrip`: For this case, accessing the underlying data is, as it was in the previous case, done with `PyUnicode_READ`; the check, on the other hand, to see if the character is white-space (or really, any character we've provided) is considerably more complex. ``` while (i < len) { Py_UCS4 ch = PyUnicode_READ(kind, data, i); if (!BLOOM(sepmask, ch)) break; if (PyUnicode_FindChar(sepobj, ch, 0, seplen, 1) < 0) break; i++; } ``` [`PyUnicode_FindChar`](https://docs.python.org/3/c-api/unicode.html#c.PyUnicode_FindChar) is used, which, although efficient, is much more complex and slow compared to an array access. For each character in the string it is called to see if that character is contained in the separator(s) we've provided. As the length of the string increases, so does the overhead introduced by calling this function continuously. For those interested, `PyUnicode_FindChar`, after quite a few checks, will eventually call [`find_char`](https://github.com/python/cpython/blob/master/Objects/stringlib/fastsearch.h#L36) inside `stringlib`, which, in the case where the length of the separators is `< 10`, will loop until it finds the character. Apart from this, consider the additional functions that need to already be called in order to get here. --- As for `lstrip` and `rstrip`, the situation is similar.
Flags for which mode of stripping to perform exist, namely: `RIGHTSTRIP` for `rstrip`, `LEFTSTRIP` for `lstrip` and `BOTHSTRIP` for `strip`. The logic inside `do_strip` and `_PyUnicode_XStrip` is performed conditionally based on the flag.
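Beyond the speed difference explained above, it is worth remembering that the two calls are not semantically interchangeable either: `strip()` removes *all* whitespace (tabs, newlines, etc.), while `strip(' ')` removes only literal space characters. A small illustration:

```python
s = "\t  hello  \n"

print(repr(s.strip()))     # 'hello'
print(repr(s.strip(' ')))  # '\t  hello  \n' -- untouched, since the ends are not spaces
```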
Why is str.strip() so much faster than str.strip(' ')?
38,285,654
30
2016-07-09T19:38:30Z
38,286,494
7
2016-07-09T21:24:51Z
[ "python", "string", "performance", "python-3.x", "python-internals" ]
Stripping white-space can be done in two ways with **[`str.strip`](https://docs.python.org/3/library/stdtypes.html#str.strip)**. You can either issue a call with no arguments, `str.strip()`, which strips white-space by default, or explicitly supply the argument yourself with `str.strip(' ')`. But, why is it that when timed these functions perform so differently? Using a sample string with an intentional amount of white spaces: ``` s = " " * 100 + 'a' + " " * 100 ``` The timings for `s.strip()` and `s.strip(' ')` are respectively: ``` %timeit s.strip() The slowest run took 32.74 times longer than the fastest. This could mean that an intermediate result is being cached. 1000000 loops, best of 3: 396 ns per loop %timeit s.strip(' ') 100000 loops, best of 3: 4.5 µs per loop ``` `strip` takes `396ns` while `strip(' ')` takes `4.5 μs`; a similar scenario is present with `rstrip` and `lstrip` under the same conditions. Also, [`bytes` objects seem to be affected too](http://stackoverflow.com/a/38286494/4952130). The timings were performed for `Python 3.5.2`; on `Python 2.7.1` the difference is less drastic. The [docs on `str.strip`](https://docs.python.org/3/library/stdtypes.html#str.strip) don't indicate anything useful, so, *why does this happen*?
For the reasons explained in @Jim's answer, the same behavior is found in `bytes` objects: ``` b = bytes(" " * 100 + "a" + " " * 100, encoding='ascii') b.strip() # takes 427ns b.strip(b' ') # takes 1.2μs ``` For `bytearray` objects this doesn't happen; the functions performing the strip are similar for both calls. Additionally, in `Python 2` the same applies to a smaller extent according to my timings.
Insert element to list based on previous and next elements
38,285,679
9
2016-07-09T19:41:19Z
38,285,773
8
2016-07-09T19:51:09Z
[ "python", "list" ]
I'm trying to add a new tuple to a list of tuples (sorted by first element in tuple), where the new tuple contains elements from both the previous and the next element in the list. Example: ``` oldList = [(3, 10), (4, 7), (5,5)] newList = [(3, 10), (4, 10), (4, 7), (5, 7), (5, 5)] ``` (4,10) was constructed from and added in between (3,**10**) and (**4**,7). ``` Construct (x,y) from (a,y) and (x,b) ``` I've tried using enumerate() to insert at the specific position, but that doesn't really let me access the next element.
``` oldList = [(3, 10), (4, 7), (5,5)] def pair(lst): # create two iterators it1, it2 = iter(lst), iter(lst) # move second to the second tuple next(it2) for ele in it1: # yield original yield ele # yield first ele from next and first from current yield (next(it2)[0], ele[1]) ``` Which will give you: ``` In [3]: oldList = [(3, 10), (4, 7), (5, 5)] In [4]: list(pair(oldList)) Out[4]: [(3, 10), (4, 10), (4, 7), (5, 7), (5, 5)] ``` Obviously we need to do some error handling to handle different possible situations. You could also do it using a single iterator if you prefer: ``` def pair(lst): it = iter(lst) prev = next(it) for ele in it: yield prev yield (prev[0], ele[1]) prev = ele yield (prev[0], ele[1]) ``` You can use [itertools.tee](https://docs.python.org/2.7/library/itertools.html#itertools.tee) in place of calling iter: ``` from itertools import tee def pair(lst): # create two iterators it1, it2 = tee(lst) # move second to the second tuple next(it2) for ele in it1: # yield original yield ele # yield first ele from next and first from current yield (next(it2)[0], ele[1]) ```
frozenset at least x elements
38,292,379
3
2016-07-10T13:26:58Z
38,292,424
7
2016-07-10T13:31:59Z
[ "python", "frozenset" ]
I currently have this code; it checks if all elements in the list are the same and, if so, returns `True`: ``` def all_equal(lst): """ >>> all_equal([1,1,1,1,1,1,1]) True >>> all_equal([1,2,3,1]) False """ return len(frozenset(lst)) == 1 ``` But what I want to check is whether there are at least 5 equal elements, so that ``` [1,1,1,1,1,2,2] ``` will return `True` as well, since there are 5 occurrences of `1`.
Instead of using a set, use a [*bag* or *multiset* type](https://en.wikipedia.org/wiki/Multiset). A multiset counts how many times unique values occur. In Python that's the [`collections.Counter()` object](https://docs.python.org/2/library/collections.html#collections.Counter): ``` from collections import Counter def all_equal(lst): bag = Counter(lst) if any(v >= 5 for v in bag.itervalues()): # an element occurred at least 5 times # (use bag.values() if using Python 3) return True return False ```
frozenset at least x elements
38,292,379
3
2016-07-10T13:26:58Z
38,292,432
8
2016-07-10T13:33:04Z
[ "python", "frozenset" ]
I currently have this code; it checks if all elements in the list are the same and, if so, returns `True`: ``` def all_equal(lst): """ >>> all_equal([1,1,1,1,1,1,1]) True >>> all_equal([1,2,3,1]) False """ return len(frozenset(lst)) == 1 ``` But what I want to check is whether there are at least 5 equal elements, so that ``` [1,1,1,1,1,2,2] ``` will return `True` as well, since there are 5 occurrences of `1`.
Use [`collections.Counter()`](https://docs.python.org/3/library/collections.html#collections.Counter): ``` from collections import Counter def all_equal(lst, count): return any(v >= count for v in Counter(lst).values()) ```
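Keeping the question's original function name, a quick sanity check of this approach against the sample inputs:

```python
from collections import Counter

def all_equal(lst, count):
    # True if any single value occurs at least `count` times
    return any(v >= count for v in Counter(lst).values())

print(all_equal([1, 1, 1, 1, 1, 2, 2], 5))  # True  -- 1 occurs 5 times
print(all_equal([1, 2, 3, 1], 5))           # False
```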
Is there special significance to 16331239353195370.0?
38,295,501
82
2016-07-10T19:02:56Z
38,295,695
109
2016-07-10T19:25:02Z
[ "python", "numpy", "numerical-methods" ]
Using `import numpy as np` I've noticed that ``` np.tan(np.pi/2) ``` gives the number in the title and not `np.inf` ``` 16331239353195370.0 ``` I'm curious about this number. Is it related to some system machine precision parameter? Could I have calculated it from something? (I'm thinking along the lines of something similar to `sys.float_info`) **EDIT:** The same result is indeed reproducible in other environments such as Java, Octave, MATLAB... The suggested dupe does not explain why, though.
`pi` isn't exactly representable as Python float (same as the platform C's `double` type). The closest representable approximation is used. Here's the exact approximation in use on my box (probably the same as on your box): ``` >>> import math >>> (math.pi / 2).as_integer_ratio() (884279719003555, 562949953421312) ``` To find the tangent of that ratio, I'm going to switch to wxMaxima now: ``` (%i1) fpprec: 32; (%o1) 32 (%i2) tan(bfloat(884279719003555) / 562949953421312); (%o2) 1.6331239353195369755967737041529b16 ``` So essentially identical to what you got. The binary approximation to `pi/2` used is a little bit less than the mathematical ("infinite precision") value of `pi/2`. So you get a very large tangent instead of `infinity`. The computed `tan()` is appropriate for the actual input! For exactly the same kinds of reasons, e.g., ``` >>> math.sin(math.pi) 1.2246467991473532e-16 ``` doesn't return 0. The approximation `math.pi` is a little bit less than `pi`, and the displayed result is correct *given* that truth. ## OTHER WAYS OF SEEING math.pi There are several ways to see the exact approximation in use: ``` >>> import math >>> math.pi.as_integer_ratio() (884279719003555, 281474976710656) ``` `math.pi` is exactly equal to the mathematical ("infinite precision") value of that ratio. Or as an exact float in hex notation: ``` >>> math.pi.hex() '0x1.921fb54442d18p+1' ``` Or in a way most easily understood by just about everyone: ``` >>> import decimal >>> decimal.Decimal(math.pi) Decimal('3.141592653589793115997963468544185161590576171875') ``` While it may not be immediately obvious, every finite binary float is exactly representable as a finite decimal float (the reverse is *not* true; e.g. the decimal `0.1` is not exactly representable as a finite binary float), and the `Decimal(some_float)` constructor produces the exact equivalent. 
Here's the true value of `pi` followed by the exact decimal value of `math.pi`, and a caret on the third line points to the first digit where they differ: ``` true 3.14159265358979323846264338327950288419716939937510... math.pi 3.141592653589793115997963468544185161590576171875 ^ ``` `math.pi` is the same across "almost all" boxes now, because almost all boxes now use the same binary floating-point format (IEEE 754 double precision). You can use any of the ways above to confirm that on *your* box, or to find the precise approximation in use if your box is an exception.
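The explanation above can be checked numerically with only the standard library (assuming IEEE-754 doubles): near pi/2 the tangent behaves like 1/(pi/2 − x), so the reciprocal of the huge tangent should match the gap between the true pi/2 and its float approximation, and that same gap can be recovered independently from `math.sin(math.pi)`:

```python
import math

t = math.tan(math.pi / 2)
print(t)  # roughly 1.633e16 -- large but finite, not inf

# tan(x) ~ 1/(pi/2 - x) near pi/2, so 1/t estimates how far
# math.pi/2 falls short of the true pi/2:
gap = 1 / t

# sin(math.pi) ~ pi - math.pi, and math.pi/2 is exactly math.pi halved,
# so half of sin(math.pi) is the same gap, derived another way:
print(gap, math.sin(math.pi) / 2)  # both about 6.12e-17
```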
Django CSRF cookie not set correctly
38,302,058
16
2016-07-11T08:10:02Z
38,436,440
14
2016-07-18T12:13:45Z
[ "python", "django", "cookies", "csrf" ]
Update 7-18: Here is my nginx config for the proxy server: ``` server { listen 80; server_name blah.com; # the blah is intentional access_log /home/cheng/logs/access.log; error_log /home/cheng/logs/error.log; location / { proxy_pass http://127.0.0.1:8001; } location /static { alias /home/cheng/diandi/staticfiles; } location /images { alias /home/cheng/diandi/images; } client_max_body_size 10M; } ``` Here is `nginx.conf`: ``` user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip_disable "msie6"; # Enable Gzip compressed. gzip on; # Enable compression both for HTTP/1.0 and HTTP/1.1. gzip_http_version 1.1; # Compression level (1-9). # 5 is a perfect compromise between size and cpu usage, offering about # 75% reduction for most ascii files (almost identical to level 9). gzip_comp_level 5; # Don't compress anything that's already small and unlikely to shrink much # if at all (the default is 20 bytes, which is bad as that usually leads to # larger files after gzipping). gzip_min_length 256; # Compress data even for clients that are connecting to us via proxies, # identified by the "Via" header (required for CloudFront). gzip_proxied any; # Tell proxies to cache both the gzipped and regular version of a resource # whenever the client's Accept-Encoding capabilities header varies; # Avoids the issue where a non-gzip capable client (which is extremely rare # today) would display gibberish if their proxy gave them the gzipped version. gzip_vary on; # Compress all output labeled with one of the following MIME-types. 
gzip_types application/atom+xml application/javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml application/x-javascript font/opentype image/svg+xml image/x-icon text/css text/plain text/javascript text/js text/x-component; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } ``` --- Update 7-15: When copying code to the linux machines, I simply replaced the original source code file but didn't delete the old .pyc files which I don't think will cause trouble right? --- Here is the view code: ``` from django.contrib.auth import authenticate, login from django.http import HttpResponseRedirect from django.core.urlresolvers import reverse from django.shortcuts import render def login_view(request): if request.method == 'POST': username = request.POST['username'] password = request.POST['password'] user = authenticate(username=username, password=password) next_url = request.POST['next'] if user is not None: if user.is_active: login(request, user) if next_url: return HttpResponseRedirect(next_url) return HttpResponseRedirect(reverse('diandi:list')) else: form = {'errors': True} return render(request, 'registration/login.html', {'form': form}) else: form = {'errors': False} return render(request, 'registration/login.html', {'form': form}) ``` --- I got one of those `CSRF cookie not set` error from Django, but this is not because I forgot to include the `{% csrf_token %}` in my template. 
Here is what I observed: ## Access login page #1 try Inside the `Request Header`, the `cookie` value is: ``` csrftoken=yNG8ZmSI4tr2xTLoE9bys8JbSuu9SD34; ``` In the template: ``` <input type="hidden" name="csrfmiddlewaretoken" value="9CVlFSxOo0xiYykIxRmvbWyN5iEUHnPB"> ``` In a cookie plugin that I installed on chrome, the actual csrf cookie value is set to: ``` 9CVlFSxOo0xiYykIxRmvbWyN5iEUHnPB ``` ## Access login page #2 try: Inside the `Request Header`, the `cookie` value is: ``` csrftoken=9CVlFSxOo0xiYykIxRmvbWyN5iEUHnPB; ``` In the template: ``` <input type="hidden" name="csrfmiddlewaretoken" value="Y534sU40S8iTubSVGjjh9KQl0FXesVsC"> ``` In a cookie plugin that I installed on chrome, the actual csrf cookie value is set to: ``` Y534sU40S8iTubSVGjjh9KQl0FXesVsC ``` ## The pattern As you can see from the examples above, the cookie value inside the `Request Header` differs from the actual `csrfmiddlewaretoken` in the form and the actual cookie value being set. The cookie value of the current request matches the next `request header's` cookie value. 
--- To help debugging, here is a portion of my `settings.py: ``` DJANGO_APPS = ( 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ) THIRD_PARTY_APPS = ( 'compressor', 'crispy_forms', 'django_extensions', 'floppyforms', 'multiselectfield', 'admin_highcharts', ) LOCAL_APPS = ( 'diandi_project', 'uer_application', ) INSTALLED_APPS = DJANGO_APPS + THIRD_PARTY_APPS + LOCAL_APPS MIDDLEWARE_CLASSES = ( 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ) TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [str(ROOT_DIR.path('templates'))], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.media', 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] ``` I am using `Django 1.9.5` and `python 2.7.10`. ## One "solution" I have encountered [this problem before](http://stackoverflow.com/questions/37200908/django-csrf-verification-failed-even-when-csrf-token-is-included), I can clear all my browser cookies and the site will function properly. But this problem will eventually come up again, so I am really hoping someone can help me out (I probably just made a really dumb mistake somewhere). ## Update Originally, I thought I made some mistakes while overriding the `django.contrib.auth.view` page, so I wrote my own login page handler and it still causes the issue. 
Here is the core part of my login template: ``` {% block content %} ... <form method="post" action="{% url 'login' %}"> {% csrf_token %} <div class="form-group"> <label for="username">username</label> <input type="text" class="form-control" id="id_username" name="username"> </div> <div class="form-group"> <label for="password">password</label> <input type="password" class="form-control" id="id_password" name="password"> </div> <input type="submit" class="btn btn-default" value="login" /> <input type="hidden" id="next" name="next" value="" /> </form> ... {% endblock %} ``` On the Linux machines, I have a nginx server setup as a reverse proxy which direct request on port 80 to 8001, and I am running the server using `./manage runserver localhost:8001` This is the only difference I can think of in terms of setup. Otherwise, all of the source code and settings file are identical. --- I started deleting cookies but not all of them, this is what I see before deleting them: [![enter image description here](http://i.stack.imgur.com/pU2K4.png)](http://i.stack.imgur.com/pU2K4.png) I deleted all the cookies other than `djdt` and `csrftoken`, then the page worked. Could the deleted cookies somehow go over some limit which prevent the csrftoken which is further down the list from being set? 
Here is the cookie value of the image above in the request header:

```
Cookie:PSTM=1466561622; BIDUPSID=6D0DDB8084625F2CEB7B9D0F14F93391; BAIDUID=326150BF5A6DFC69B6CFEBD67CA7A18B:FG=1; BDSFRCVID=Fm8sJeC62leqR8bRqWS1u8KOKg9JUZOTH6ao6BQjXAcTew_mbPF_EG0PJOlQpYD-hEb5ogKK0mOTHvbP; H_BDCLCKID_SF=tJPqoCtKtCvbfP0k-tcH244HqxbXq-r8fT7Z0lOnMp05EnnjKl5M3qKOqJraJJ585Gbb5tOhaKj-VDO_e6u-e55LjaRh2PcM2TPXQ458K4__Hn7zep0aqJtpbt-qJjbOfmQBbfoDQCTDfho5b63JyTLqLq5nBT5Ka26WVpQEQM5c8hje-4bMXPkkQN3T-TJQL6RkKTCyyx3cDn3oyToVXp0njGoTqj-eJbA8_CtQbPoHHnvNKCTV-JDthlbLetJyaR3lWCnbWJ5TMCo1bJQCe-DwKJJgJRLOW2Oi0KTFQxccShPC-tP-Ll_qW-Q2LPQfXKjabpQ73l02VhcOhhQ2Wf3DM-oat4RMW20jWl7mWPQDVKcnK4-Xj533DHjP; BDUSS=5TNmRvZnh2eUFXZDA5WXI5UG1HaXYwbzItaWt3SW5adjE1Nn5XbUVoWHZuYXBYQVFBQUFBJCQAAAAAAAAAAAEAAAC0JtydAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAO8Qg1fvEINXSU; Hm_lvt_a7708f393bfa27123a1551fef4551f7a=1468229606; Hm_lpvt_a7708f393bfa27123a1551fef4551f7a=1468229739; BDRCVFR[feWj1Vr5u3D]=I67x6TjHwwYf0; BDRCVFR[dG2JNJb_ajR]=mk3SLVN4HKm; BDRCVFR[-pGxjrCMryR]=mk3SLVN4HKm; cflag=15%3A3; H_PS_PSSID=1424_20515_13289_20536_20416_19861_14994_11792; csrftoken=xUgSHybzHeIwusN0GvMgB1ATeRrPgcV1
```

Since the site functions now, all I have are five cookies instead of 14 like the image above:

[![enter image description here](http://i.stack.imgur.com/HQcZX.png)](http://i.stack.imgur.com/HQcZX.png)
Here is the issue: **you cannot have a cookie whose key contains either the character '[' or ']'.**

I discovered the solution following @Todor's [link](https://mail.python.org/pipermail/python-dev/2015-May/140135.html), then I found out about this [SO post](http://stackoverflow.com/a/33012793/1478290). Basically, there was a bug in Python 2.7.x that failed to parse cookies with ']' in the value. The bug was fixed in 2.7.10.

I thought it would be good to just confirm this issue. So I dug through all of the cookies and found one with the following key/value:

```
key: BDRCVFR[feWj1Vr5u3D]
val: I67x6TjHwwYf0
```

So I inserted the following cookie locally and submitted to the server:

```
key: test
val: BDRCVFR[feWj1Vr5u3D]
```

The login page worked, which means 2.7.10 indeed fixed the bug. But then I realized that the square brackets are actually in the key name, not in the value, so I did the following tests:

```
key: [
val: I67x6TjHwwYf0
```

and

```
key: ]
val: I67x6TjHwwYf0
```

Both cookies break the login process and django displays:

```
CSRF cookie not set
```

So either Django or a Python library it relies on cannot properly parse cookies with square brackets in their names. If anybody knows where I should submit this bug please let me know (Django or Python).

I would like to thank everybody who left a comment in the OP: @raphv, @trinchet, @Phillip, @YPCrumble, @PeterBrittain and @Todor. Thank you guys so much for debugging with me!

---

## Update: July 20, 2016

This bug is fixed in Django 1.10; we just have to wait for the release.

## Update: July 19, 2016

I [filed a bug report](https://code.djangoproject.com/ticket/26914) to Django as the result of this post. We will see if it will be fixed in future releases.
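For anyone who wants to reproduce the restriction without a full Django setup: the standard library's cookie machinery rejects such keys on its own. A minimal sketch using Python 3's `http.cookies` (whether Django's parsing goes through this exact module in your version is an assumption here); the bracketed cookie name is the one from the request header above:

```python
from http.cookies import SimpleCookie, CookieError

jar = SimpleCookie()
jar['csrftoken'] = 'xUgSHybzHeIwusN0GvMgB1ATeRrPgcV1'  # legal key: accepted

try:
    # '[' and ']' are not legal cookie-name characters in this module
    jar['BDRCVFR[feWj1Vr5u3D]'] = 'I67x6TjHwwYf0'
    bracket_key_accepted = True
except CookieError:
    bracket_key_accepted = False

print(bracket_key_accepted)  # False
```

How `SimpleCookie.load()` handles a whole header containing such a key varies between Python versions, which is exactly the kind of difference seen between the two machines above.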
elegant way to reduce a list of dictionaries?
38,308,519
4
2016-07-11T13:41:54Z
38,308,814
7
2016-07-11T13:56:42Z
[ "python", "python-3.x", "dictionary", "reduce" ]
I have a list of dictionaries and each dictionary contains exactly the same keys. I want to find the average value for each key, and I would like to know how to do it using `reduce` (or, if that is not possible, in another way more elegant than nested `for` loops).

Here is the list:

```
[
    {
        "accuracy": 0.78,
        "f_measure": 0.8169374016795885,
        "precision": 0.8192088044235794,
        "recall": 0.8172222222222223
    },
    {
        "accuracy": 0.77,
        "f_measure": 0.8159133315763016,
        "precision": 0.8174754717495807,
        "recall": 0.8161111111111111
    },
    {
        "accuracy": 0.82,
        "f_measure": 0.8226353934130455,
        "precision": 0.8238175920455686,
        "recall": 0.8227777777777778
    },
    ...
]
```

I would like to get back a dictionary like this:

```
{
    "accuracy": 0.81,
    "f_measure": 0.83,
    "precision": 0.84,
    "recall": 0.83
}
```

Here is what I had so far, but I don't like it:

```
folds = [
    ...
]

keys = folds[0].keys()
results = dict.fromkeys(keys, 0)

for fold in folds:
    for k in keys:
        results[k] += fold[k] / len(folds)

print(results)
```
As an alternative, if you're going to be doing such calculations on data, then you may wish to use [pandas](http://pandas.pydata.org/) (which will be overkill for a one off, but will greatly simplify such tasks...)

```
import pandas as pd

data = [
    {
        "accuracy": 0.78,
        "f_measure": 0.8169374016795885,
        "precision": 0.8192088044235794,
        "recall": 0.8172222222222223
    },
    {
        "accuracy": 0.77,
        "f_measure": 0.8159133315763016,
        "precision": 0.8174754717495807,
        "recall": 0.8161111111111111
    },
    {
        "accuracy": 0.82,
        "f_measure": 0.8226353934130455,
        "precision": 0.8238175920455686,
        "recall": 0.8227777777777778
    },
    # ...
]

result = pd.DataFrame.from_records(data).mean().to_dict()
```

Which gives you:

```
{'accuracy': 0.79000000000000004,
 'f_measure': 0.8184953755563118,
 'precision': 0.82016728940624295,
 'recall': 0.81870370370370382}
```
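For completeness, the `reduce` the question asks about also works nicely with `collections.Counter`, whose `+` operator sums values key-by-key — a dependency-free sketch (it relies on all the values being positive, which holds for these metrics, since `Counter` addition drops non-positive results):

```python
from collections import Counter
from functools import reduce

folds = [
    {"accuracy": 0.78, "f_measure": 0.8169374016795885,
     "precision": 0.8192088044235794, "recall": 0.8172222222222223},
    {"accuracy": 0.77, "f_measure": 0.8159133315763016,
     "precision": 0.8174754717495807, "recall": 0.8161111111111111},
    {"accuracy": 0.82, "f_measure": 0.8226353934130455,
     "precision": 0.8238175920455686, "recall": 0.8227777777777778},
]

# Sum all dictionaries key-by-key, then divide each total by the count.
totals = reduce(lambda acc, fold: acc + Counter(fold), folds, Counter())
results = {key: total / len(folds) for key, total in totals.items()}
print(results["accuracy"])  # 0.79, up to float rounding
```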
grouping rows python pandas
38,311,120
2
2016-07-11T15:46:30Z
38,311,271
7
2016-07-11T15:55:04Z
[ "python", "pandas", "dataframe" ]
Say I have the following dataframe, where the index represents ages, the column names are some category, and the values in the frame are frequencies... Now I would like to group the ages in various ways (2-year bins, 5-year bins and 10-year bins).

```
>>> table_w
       1    2    3    4
20  1000   80   40  100
21  2000   40  100  100
22  3000   70   70  200
23  3000  100   90  100
24  2000   90   90  200
25  2000  100   80  200
26  2000   90   60  100
27  1000  100   30  200
28  1000  100   90  100
29  1000   60   70  100
30  1000   70  100  100
31   900   40  100   90
32   700  100   30  100
33   700   30   50   90
34   600   10   40  100
```

I would like to end up with something like...

```
          1    2    3    4
20-21  3000  ...  ...  ...
22-23  6000  ...  ...  ...
24-25  4000  ...  ...  ...
26-27  3000  ...  ...  ...
28-29  2000  ...  ...  ...
30-31  1900  ...  ...  ...
32-33  1400  ...  ...  ...
34      600  ...  ...  ...
```

Is there a simple and efficient way to do this? Any help is greatly appreciated...
Use `pd.cut()` to create the age bins and group your dataframe with them.

```
import io

import numpy as np
import pandas as pd

data = io.StringIO("""\
      1    2    3    4
20  1000   80   40  100
21  2000   40  100  100
22  3000   70   70  200
23  3000  100   90  100
24  2000   90   90  200
25  2000  100   80  200
26  2000   90   60  100
27  1000  100   30  200
28  1000  100   90  100
29  1000   60   70  100
30  1000   70  100  100
31   900   40  100   90
32   700  100   30  100
33   700   30   50   90
34   600   10   40  100
""")

df = pd.read_csv(data, delim_whitespace=True)
bins = np.arange(20, 37, 2)
df.groupby(pd.cut(df.index, bins, right=False)).sum()
```

Output:

```
             1    2    3    4
[20, 22)  3000  120  140  200
[22, 24)  6000  170  160  300
[24, 26)  4000  190  170  400
[26, 28)  3000  190   90  300
[28, 30)  2000  160  160  200
[30, 32)  1900  110  200  190
[32, 34)  1400  130   80  190
[34, 36)   600   10   40  100
```
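To get row labels in the `20-21` style from the question instead of the half-open intervals, `pd.cut` also accepts a `labels` argument — a sketch on a smaller frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'1': [1000, 2000, 3000, 3000, 2000, 2000]},
                  index=[20, 21, 22, 23, 24, 25])

bins = np.arange(20, 28, 2)                             # 20, 22, 24, 26
labels = ['{}-{}'.format(b, b + 1) for b in bins[:-1]]  # '20-21', '22-23', '24-25'

out = df.groupby(pd.cut(df.index, bins, right=False, labels=labels)).sum()
print(out['1'].tolist())  # [3000, 6000, 4000]
```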
Why does a Python script to read files cause my computer to emit beeping sounds?
38,313,445
10
2016-07-11T18:07:40Z
38,313,491
18
2016-07-11T18:10:57Z
[ "python" ]
I wrote a little module that will search files in a directory and all of its sub-directories for the occurrence of some input string. It's been handy a few times, mostly to find old scripts if I remember some function/variable name that I used.

So, I was completely baffled the other day when I used the functions and started hearing, very faintly, from the headphones sitting on the far side of my desk, repeated beeping sounds. At first I thought it was somebody's phone ringing. But no, Python was communicating with me via Morse code.

I have no clue why this is happening. I've continued running the functions and getting beeps, not always in the same pattern. The functions only open files with read permission. The code is exactly this:

```
import os
import glob

def directory_crawl_for_string(dir_name, string, ofile):
    """Crawl dir_name and all of its subdirectories, opening files and
       checking for the presence of a string"""
    #get input directory's listings
    dir_contents = glob.glob(dir_name)
    #loop over the listings
    for dir_element in dir_contents:
        if(os.path.isfile(dir_element)):
            #read the file, checking for the string
            check_for_string(dir_element, string, ofile)
        else:
            if(os.path.isdir(dir_element)):
                directory_crawl_for_string(dir_element + '\\*', string, ofile)

def check_for_string(dir_element, string, ofile):
    try:
        ifile = open(dir_element, 'r')
    except IOError as e:
        pass
    else:
        count = 1
        for line in ifile:
            if(string in line.lower()):
                print count,line,dir_element
                ofile.write('%s,%d,%s' % (dir_element, count, line))
            count += 1
        ifile.close()

def init_crawl(start_dir, string, output_dir):
    """args:
       start_dir  - directory to start the crawl at
       string     - string to search for
       output_dir - directory to write output text file inside of"""
    if(output_dir):
        fn = output_dir.rstrip('/').rstrip('\\') + '/dirs.txt'
    else:
        fn = 'dirs.txt'
    ofile = open(fn, 'w')
    ofile.write('file path,line number of occurrence of "%s",exact line\n' % string)
    directory_crawl_for_string(start_dir, string, ofile)
    ofile.close()
    print('list of files containing "%s" written to "%s"' % (string, fn))
```

To start it, you pass `init_crawl()` the directory to crawl down from, the string to search for, and a directory to write an output text file into. For example: `init_crawl(r'C:\directory-to-crawl', 'foo', r'C:\output-directory')`

I don't even know what specific questions to ask about this, but why is it happening? I can tell that the beeps generally occur when the function tries to read non-text files like PDFs and spreadsheets. Sometimes the terminal freezes too...

The output is just a csv with columns for file paths where the string is found, line numbers, and the line containing the string.
This line:

```
print count,line,dir_element
```

is probably printing the [BEL character](https://en.wikipedia.org/wiki/Bell_character) if you feed your program binary files.

To test, here's a little code I wrote. Python will try and play it note-for-note. Don't worry. Be happy :)

```
def bel():
    return chr(7)

def wait(duration):
    return chr(0) * (duration*1000000)

song = ''
for _ in range(5):
    song += bel()
    song += wait(1)
    song += bel()
    song += wait(1)
    song += bel()
    song += wait(5)

print song
```
how to convert items of array into array themselves Python
38,314,820
3
2016-07-11T19:35:42Z
38,314,856
7
2016-07-11T19:37:27Z
[ "python", "python-3.x", "numpy" ]
My problem is that I've got this array:

```
np.array([0.0, 0.0, -1.2, -1.2, -3.4, -3.4, -4.5, -4.5])
```

and I want to convert the elements to arrays like this:

```
np.array([[0.0], [0.0], [-1.2], [-1.2], [-3.4], [-3.4], [-4.5], [-4.5]])
```

So is there a loop or a numpy function that I could use to do this task?
You can use a [list comprehension](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions):

```
>>> a1 = np.array([0.0, 0.0, -1.2, -1.2, -3.4, -3.4, -4.5, -4.5])
>>> np.array([[x] for x in a1])
array([[ 0. ],
       [ 0. ],
       [-1.2],
       [-1.2],
       [-3.4],
       [-3.4],
       [-4.5],
       [-4.5]])
>>>
```
how to convert items of array into array themselves Python
38,314,820
3
2016-07-11T19:35:42Z
38,314,864
13
2016-07-11T19:38:13Z
[ "python", "python-3.x", "numpy" ]
My problem is that I've got this array:

```
np.array([0.0, 0.0, -1.2, -1.2, -3.4, -3.4, -4.5, -4.5])
```

and I want to convert the elements to arrays like this:

```
np.array([[0.0], [0.0], [-1.2], [-1.2], [-3.4], [-3.4], [-4.5], [-4.5]])
```

So is there a loop or a numpy function that I could use to do this task?
Or simply:

```
arr[:,None]

# array([[ 0. ],
#        [ 0. ],
#        [-1.2],
#        [-1.2],
#        [-3.4],
#        [-3.4],
#        [-4.5],
#        [-4.5]])
```
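For readers who find the slicing terse, a couple of equivalent spellings: `arr[:, np.newaxis]` is the named form of `arr[:, None]`, and `reshape` states the target shape explicitly — a quick sketch:

```python
import numpy as np

arr = np.array([0.0, 0.0, -1.2, -1.2, -3.4, -3.4, -4.5, -4.5])

col = arr.reshape(-1, 1)    # -1 lets numpy infer the row count
same = arr[:, np.newaxis]   # identical result

print(col.shape)  # (8, 1)
```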
What is the reason for _secret_backdoor_key variable in Python HMAC library source code?
38,324,441
9
2016-07-12T09:07:13Z
38,324,561
8
2016-07-12T09:11:57Z
[ "python", "hash", "python-internals" ]
When I was browsing the Python HMAC module source code today, I found out that it contains a global variable `_secret_backdoor_key`. This variable is then checked to interrupt object initialization. The code looks like this:

```
# A unique object passed by HMAC.copy() to the HMAC constructor, in order
# that the latter return very quickly.  HMAC("") in contrast is quite
# expensive.
_secret_backdoor_key = []

class HMAC:
    """RFC 2104 HMAC class.  Also complies with RFC 4231.

    This supports the API for Cryptographic Hash Functions (PEP 247).
    """
    blocksize = 64  # 512-bit HMAC; can be changed in subclasses.

    def __init__(self, key, msg = None, digestmod = None):
        """Create a new HMAC object.

        key:       key for the keyed hash object.
        msg:       Initial input for the hash, if provided.
        digestmod: A module supporting PEP 247.  *OR*
                   A hashlib constructor returning a new hash object.
                   Defaults to hashlib.md5.
        """
        if key is _secret_backdoor_key: # cheap
            return
```

Full [code is here](https://hg.python.org/cpython/file/2.7/Lib/hmac.py#l21).

Does anyone know the reason for this variable? The comment says it is there so that HMAC returns more quickly than with a blank string (""). But why would a user want to pass an empty key to the HMAC function?

Is the variable naming just a joke from an HMAC developer, or is it really some sort of backdoor?
To create a copy of the HMAC instance, you need to create an *empty* instance first. The `_secret_backdoor_key` object is used as a sentinel to exit `__init__` early and not run through the rest of the `__init__` functionality.

The `copy` method then sets the instance attributes directly:

```
def copy(self):
    """Return a separate copy of this hashing object.

    An update to this copy won't affect the original object.
    """
    other = self.__class__(_secret_backdoor_key)
    other.digest_cons = self.digest_cons
    other.digest_size = self.digest_size
    other.inner = self.inner.copy()
    other.outer = self.outer.copy()
    return other
```

You could get the same effect with `self.__class__('')` (an empty string), but then `HMAC.__init__` does a lot of unnecessary work as the attributes on the instance created are going to be replaced *anyway*. Note that using `HMAC('')` is a *valid way to create an instance*; you'd not want an instance devoid of any state in that case. By passing in the sentinel, `HMAC.copy()` can avoid all that extra work.

You could use a different 'flag' value, like `False`, but it is way too easy to pass that in because of a bug in your own code. You'd want to be notified of such bugs instead. By using a 'secret' internal sentinel object instead, you avoid such accidental cases.

Using `[]` as a sentinel unique object is quite an old practice. These days you'd use `object()` instead. The idea is that the sentinel is a *unique, single object* that you test against for identity with `is`. You can't re-create that object elsewhere; the `is` test only works if you pass in a reference to the exact same single object.
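A small illustration of the modern `object()` sentinel mentioned above (nothing to do with `hmac` itself) — it shows why a unique private object is a safer flag than `''` or `False`, both of which a caller could pass by accident:

```python
_MISSING = object()   # unique; callers cannot recreate this exact object

def lookup(mapping, key, default=_MISSING):
    """Like dict.get(), but raising when no default is supplied."""
    if key in mapping:
        return mapping[key]
    if default is _MISSING:     # identity test against the sentinel
        raise KeyError(key)
    return default

print(lookup({'a': 1}, 'a'))        # 1
print(lookup({'a': 1}, 'b', None))  # None: falsy values are valid defaults
```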
Why does '() is ()' return True when '[] is []' and '{} is {}' return False?
38,328,857
44
2016-07-12T12:25:48Z
38,328,858
48
2016-07-12T12:25:48Z
[ "python", "python-3.x", "tuples", "identity", "python-internals" ]
From what I've been aware of, using `[], {}, ()` to instantiate objects returns a new instance of `list, dict, tuple` respectively; a new instance object with ***a new identity***\*.

This was pretty clear to me until I actually tested it, where I noticed that `() is ()` actually returns `True` rather than the expected `False`:

```
>>> () is (), [] is [], {} is {}
(True, False, False)
```

and, as expected, this behavior is also manifested when explicitly creating objects with [`list()`](https://docs.python.org/3/library/functions.html#func-list), [`dict()`](https://docs.python.org/3/library/functions.html#func-dict) and [`tuple()`](https://docs.python.org/3/library/functions.html#func-tuple):

```
>>> tuple() is tuple(), list() is list(), dict() is dict()
(True, False, False)
```

The only relevant piece of information I could find in [the docs for `tuple()`](https://docs.python.org/3/library/stdtypes.html#tuple) states:

> [...] For example, `tuple('abc')` returns `('a', 'b', 'c')` and `tuple([1, 2, 3])` returns `(1, 2, 3)`. **If no argument is given, the constructor creates a new empty tuple, `()`.**

Suffice to say, this isn't sufficient for answering my question. So, why do empty tuples have the same identity whilst others like lists or dictionaries do not?

---

\*Note, this question is **not** about what the `is` operator **does**, as explained in [**Understanding Python's "is" operator**](http://stackoverflow.com/questions/13650293/understanding-pythons-is-operator), but rather, *why it behaves as it does* in this specific case.
### In short:

Python internally creates a `C` list of tuple objects whose first element contains the empty tuple. Every time `tuple()` or `()` is used, Python will return the existing object contained in the aforementioned `C` list and not create a new one.

No such mechanism exists for `dict` or `list` objects, which are, on the contrary, *recreated from scratch every time*.

This is most likely related to the fact that immutable objects (like tuples) cannot be altered and, as such, are guaranteed to not change during execution. This is further solidified when considering that `frozenset() is frozenset()` returns `True`; like `()`, an empty `frozenset` [is considered a singleton in the implementation of `CPython`](https://github.com/python/cpython/blob/master/Objects/setobject.c#L1082). With mutable objects, *such guarantees are not in place* and, as such, there's no incentive to cache their zero-element instances (i.e. their contents could change with the identity remaining the same).

**Take note:** *This isn't something one should depend on, i.e. one shouldn't consider empty tuples to be singletons. No such guarantees are explicitly made in the documentation, so one should assume it is implementation dependent.*

---

### How it is done:

In the most common case, the implementation of `CPython` is compiled with the two macros [`PyTuple_MAXFREELIST`](https://github.com/python/cpython/blob/master/Objects/tupleobject.c#L11) and [`PyTuple_MAXSAVESIZE`](https://github.com/python/cpython/blob/master/Objects/tupleobject.c#L8) set to positive integers. The positive value for these macros results in the creation of an [array of `tuple` objects](https://github.com/python/cpython/blob/master/Objects/tupleobject.c#L19) with size `PyTuple_MAXSAVESIZE`.
When `PyTuple_New` is called with the parameter `size == 0`, it makes sure to [add a new empty tuple](https://github.com/python/cpython/blob/master/Objects/tupleobject.c#L120) to the list if it doesn't already exist:

```
if (size == 0) {
    free_list[0] = op;
    ++numfree[0];
    Py_INCREF(op);          /* extra INCREF so that this is never freed */
}
```

Then, if a new empty tuple is requested, the one that is located in the [first position of this list](https://github.com/python/cpython/blob/master/Objects/tupleobject.c#L84) is going to get returned instead of a new instance:

```
if (size == 0 && free_list[0]) {
    op = free_list[0];
    Py_INCREF(op);
    /* rest snipped for brevity.. */
```

One additional reason creating an incentive to do this is the fact that function calls construct a tuple to hold the positional arguments that are going to be used. This can be seen in the [`load_args`](https://github.com/python/cpython/blob/master/Python/ceval.c) function in `ceval.c`:

```
static PyObject *
load_args(PyObject ***pp_stack, int na)
{
    PyObject *args = PyTuple_New(na);

    /* rest snipped for brevity.. */
```

which is called via [`do_call`](https://github.com/python/cpython/blob/master/Objects/setobject.c#L1082) in the same file. If the number of arguments `na` is zero, an empty tuple is going to be returned.

In essence, this might be an operation that's performed frequently, so it makes sense to not reconstruct an empty tuple every single time.

---

### Further reading:

A couple more answers shed light on `CPython`'s caching behaviour with immutables:

* For integers, another answer that digs in the source can be found [here](http://stackoverflow.com/a/34964030/4952130).
* For strings, a handful of answers can be found [here](http://stackoverflow.com/questions/16756699/is-operator-behaves-differently-when-comparing-strings-with-spaces), [here](http://stackoverflow.com/questions/28329498/why-does-a-space-effect-the-identity-comparison-of-equal-strings) and [here](http://stackoverflow.com/questions/24245324/about-the-changing-id-of-a-python-immutable-string).
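The caching described above is easy to observe directly (CPython-specific, and — as the answer stresses — an implementation detail, not something to rely on):

```python
a, b = (), ()
print(a is b)                      # True: both names refer to the cached empty tuple
print([] is [])                    # False: each list literal builds a fresh object
print(frozenset() is frozenset())  # True in CPython, matching the note above
```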
Catching the same exception in every method of a class
38,332,356
4
2016-07-12T14:54:40Z
38,332,482
7
2016-07-12T14:59:06Z
[ "python", "python-3.x", "exception", "exception-handling" ]
I, a beginner, am working on a simple card-based GUI written in Python. There is a base class that, among other things, contains a dictionary of all the cards, like `_cards = {'card1_ID': card1, 'card2_ID': card2}`. The cards on the GUI are referenced by their unique IDs.

As I plan to make the code available to other beginners, I want to explicitly inform them if they gave a card ID that does not exist (instead of throwing a KeyError). Now I have tons of repetitive try-except clauses, which makes me suspicious:

*Part of the code; these methods would be one-liners without the try/except:*

```
def shift(self, card_ID, amount):
    """Moves the card by the given amount of pixels.

    :param amount: the horizontal and vertical amount of shifting in pixels; tuple"""
    try:
        self._cards[card_ID].shift(amount)
    except KeyError:
        raise ValueError("Invalid card ID")

def align(self, card_ID, horizontal, vertical):
    """Aligns the card to the given position."""
    try:
        card = self._cards[card_ID]
        card.align(horizontal, vertical)
    except KeyError:
        raise ValueError("Invalid card ID")

def invert(self, card_ID):
    """Inverts the card's colour"""
    try:
        self._cards[card_ID].invert()
    except KeyError:
        raise ValueError("Invalid card ID")
```

**Is this an accepted practice? Is there any better way to catch this KeyError in every method of the class?**
Extract the actual getting of the card from the id into a separate method, with a try/except there, and call that method from everywhere else.

```
def get_card(self, card_id):
    try:
        return self._cards[card_id]
    except KeyError:
        raise ValueError("Invalid card ID")

def invert(self, card_id):
    return self.get_card(card_id).invert()

...
```
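If many methods take the card ID as their first argument, the same translation can also be factored into a decorator — a sketch (the names are illustrative, not from the original code, and note that it will also mask a `KeyError` raised for unrelated reasons inside the method):

```python
import functools

def valid_card_required(method):
    """Translate a missing-card KeyError into a friendlier ValueError."""
    @functools.wraps(method)
    def wrapper(self, card_ID, *args, **kwargs):
        try:
            return method(self, card_ID, *args, **kwargs)
        except KeyError:
            raise ValueError("Invalid card ID")
    return wrapper

class Board(object):
    def __init__(self):
        self._cards = {'c1': 'card one'}

    @valid_card_required
    def invert(self, card_ID):
        return self._cards[card_ID]

board = Board()
print(board.invert('c1'))  # 'card one'
```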
How to replace a function call in an existing method
38,336,814
3
2016-07-12T18:53:12Z
38,336,854
7
2016-07-12T18:55:33Z
[ "python", "monkeypatching" ]
Given a module with a class `Foo` with a method that calls a function `bar` defined at module scope, is there a way to substitute `bar` for a different function without modifying the module?

```
class Foo(object):
    def run(self):
        bar()

def bar():
    return True
```

I then have an instance of `Foo` for which I would like to substitute a function `baz()` for `bar()`, without having to modify the `Foo` class.
Let's assume your module is called `deadbeef`, and you're using it like this:

```
import deadbeef
…
foo_instance = deadbeef.Foo()
```

Then you could do:

```
import deadbeef

deadbeef.bar = baz
…
```
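If the substitution should only be temporary, `unittest.mock.patch.object` restores the original automatically when the block exits. A sketch using a stand-in module object, so it runs without the hypothetical `deadbeef` module:

```python
import types
from unittest import mock

# Stand-in for the module from the example above.
deadbeef = types.ModuleType('deadbeef')
deadbeef.bar = lambda: True

def baz():
    return False

with mock.patch.object(deadbeef, 'bar', baz):
    patched = deadbeef.bar()   # baz() is in place here
restored = deadbeef.bar()      # the original bar() is back after the block

print(patched, restored)  # False True
```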
Is there a comprehensive table of Python's "magic constants"?
38,344,848
8
2016-07-13T07:08:28Z
38,345,345
11
2016-07-13T07:34:35Z
[ "python", "python-3.x" ]
Where are `__file__`, `__main__`, etc. defined, and what are they officially called? `__eq__` and `__ge__` are "magic methods", so right now I'm just referring to these as "magic constants", but I don't even know if that's right. A Google search really isn't turning up anything, and even after scanning through layers of pages, Python's own documentation doesn't seem to have a comprehensive list of them.
Short answer: **no**. For the longer answer, which got badly out of hand, keep reading...

---

There is no comprehensive table of those `__dunder_names__` (also not their official title!), as far as I'm aware. There are a couple of sources:

* The only real *"magic constant"* is `__debug__`: it's a `SyntaxError` to attempt to assign to this name. It's covered [in the list of constants](https://docs.python.org/3/library/constants.html#__debug__) and mentioned in the context of [the `assert` statement](https://docs.python.org/3/reference/simple_stmts.html#the-assert-statement).
* Another module-level name with specific use by a statement is `__all__`, which is documented alongside [the `import` statement](https://docs.python.org/3/reference/simple_stmts.html#the-import-statement).
* There are two special modules, documented in the [library reference](https://docs.python.org/3/library/index.html), which have their own pages:
  + [`__main__`](https://docs.python.org/3/library/__main__.html) is the top-level environment in which a script is executed.
  + [`__future__`](https://docs.python.org/3/library/__future__.html#module-__future__) is for accessing language features that aren't yet mandatory (e.g. `print_function` to replace the `print` statement in Python 2).
* Most of the rest (`__name__`, `__file__`, etc.) are added to modules by the import system, so are listed in [the import documentation](https://docs.python.org/3/reference/import.html#import-related-module-attributes).

There are also many related to objects. The basic methods for implementing built-in behaviour (like `__eq__` and `__ge__`, as you mention) are listed in [the data model documentation](https://docs.python.org/3/reference/datamodel.html). But plenty of other, more specific names exist; for example, there are several related specifically to exceptions, like `__cause__` and `__traceback__`, in [the exceptions documentation](https://docs.python.org/3/library/exceptions.html).
---

Note that there is nothing particularly "magic" about most of these; they are just regular attributes and can be assigned to as you see fit. However, they are considered reserved for internal Python machinery, so you shouldn't add your own; per [the language reference on "reserved classes of identifiers"](https://docs.python.org/3/reference/lexical_analysis.html#reserved-classes-of-identifiers):

> *Any* use of `__*__` names, in any context, that does not follow explicitly documented use, is subject to breakage without warning.

That said, there are a couple in common use that I don't think are actually specified *anywhere* in the official docs, like `__author__` and `__version__`; see e.g. [What is the common header format of Python files?](http://stackoverflow.com/q/1523427/3001761) and [What is the origin of \_\_author\_\_?](http://stackoverflow.com/q/9531136/3001761) A few have semi-official status via [PEP-8](https://www.python.org/dev/peps/pep-0008/#module-level-dunder-names), but that's about it.

---

A few others have trodden this path, by the looks of it:

* [Finding a list of all double-underscore variables?](http://stackoverflow.com/q/8920341/3001761)
* [I need \_\_closure\_\_](http://stackoverflow.com/q/1609716/3001761)
* [Built-in magic variable names/attributes](http://stackoverflow.com/q/20340815/3001761)
Why does `str.format()` ignore additional/unused arguments?
38,349,822
11
2016-07-13T11:02:07Z
38,350,141
10
2016-07-13T11:15:41Z
[ "python", "string", "python-3.x", "string-formatting" ]
I saw ["Why doesn't join() automatically convert its arguments to strings?"](http://stackoverflow.com/a/22152693/2505645) and [the accepted answer](http://stackoverflow.com/a/22152693/2505645) made me think: since

> Explicit is better than implicit.

and

> Errors should never pass silently.

why does `str.format()` ignore additional/unused (sometimes accidentally passed) arguments? To me it looks like an error which is passed silently, and it surely isn't explicit:

```
>>> 'abc'.format(21, 3, 'abc', object(), x=5, y=[1, 2, 3])
'abc'
```

This actually led my friend to an issue with `os.makedirs(path, exist_ok=True)` still raising an error even though [the docs for `os.makedirs()`](https://docs.python.org/3/library/os.html#os.makedirs) said that `exist_ok=True` won't raise an error even if `path` already exists. It turned out he just had a long line with nested function calls, and the `exist_ok` was passed to a nested `.format()` call instead of `os.makedirs()`.
Ignoring unused arguments makes it possible to create arbitrary format strings for arbitrary-sized dictionaries or objects.

Say you wanted to give your program the feature to let the end-user change the output. You document what *fields* are available, and tell users to put those fields in `{...}` slots in a string. The end-user can then create templating strings with *any number* of those fields being used, including none at all, *without error*.

In other words, the choice is deliberate, because there are practical reasons for allowing more arguments than are converted. Note that the C# `String.Formatter` implementation that inspired the Python PEP does the same, for those same reasons.

Not that the discussion on this part of the PEP is that clear cut; Guido van Rossum at some point [tries to address this issue](https://mail.python.org/pipermail/python-dev/2006-May/065059.html):

> The PEP appears silent on what happens if there are too few or too
> many positional arguments, or if there are missing or unused keywords.
> Missing ones should be errors; I'm not sure about redundant (unused)
> ones. On the one hand complaining about those gives us more certainty
> that the format string is correct. On the other hand there are some
> use cases for passing lots of keyword parameters (e.g. simple web
> templating could pass a fixed set of variables using \*\*dict). Even in
> i18n (translation) apps I could see the usefulness of allowing unused
> parameters

to which the PEP author [responded](https://mail.python.org/pipermail/python-dev/2006-May/065060.html) that they were still undecided on this point.
For use-cases where you must raise an exception for unused arguments, you are expected to subclass the [`string.Formatter()` class](https://docs.python.org/2/library/string.html#string.Formatter) and provide an implementation for [`Formatter.check_unused_args()`](https://docs.python.org/2/library/string.html#string.Formatter.check_unused_args); the default implementation does nothing.

This of course doesn't help your friend's case, where you used `str.format(*args, **kwargs)` rather than `Formatter().format(str, *args, **kwargs)`. I believe that at *some point* the idea was that you could replace the formatter used by `str.format()` with a custom implementation, but that never came to pass.

If you use the [`flake8` linter](https://pypi.python.org/pypi/flake8), then you can add the [`flake8-string-format` plugin](https://pypi.python.org/pypi/flake8-string-format) to detect the obvious cases, where you passed in an *explicit* keyword argument that is not being used by the format string.
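A sketch of such a `check_unused_args()` override, rejecting surplus keyword arguments (checking surplus positional arguments is left out for brevity):

```python
import string

class StrictFormatter(string.Formatter):
    def check_unused_args(self, used_args, args, kwargs):
        # used_args holds the field names the format string consumed
        unused = set(kwargs) - {a for a in used_args if isinstance(a, str)}
        if unused:
            raise ValueError('unused keyword arguments: %s'
                             % ', '.join(sorted(unused)))

fmt = StrictFormatter()
print(fmt.format('{x}', x=5))       # '5'
# fmt.format('abc', exist_ok=True)  # raises ValueError
```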
Split pandas dataframe column based on number of digits
38,357,130
5
2016-07-13T16:30:06Z
38,357,443
8
2016-07-13T16:46:37Z
[ "python", "pandas", "dataframe", "data-manipulation" ]
I have a pandas dataframe which has two columns, key and value, where the value always consists of an 8-digit number, something like:

```
>df1
key    value
10   10000100
20   10000000
30   10100000
40   11110000
```

Now I need to take the value column and split it on the digits present, such that my result is a new data frame:

```
>df_res
key  0  1  2  3  4  5  6  7
10   1  0  0  0  0  1  0  0
20   1  0  0  0  0  0  0  0
30   1  0  1  0  0  0  0  0
40   1  1  1  1  0  0  0  0
```

I cannot change the input data format. The most conventional thing I could think of was to convert the value to a string and loop through each digit character, putting it in a list, but I am looking for something more elegant and faster. Kindly help.

EDIT: The input is not a string, it is an integer.
This should work:

```
df.value.astype(str).apply(list).apply(pd.Series).astype(int)
```

[![enter image description here](http://i.stack.imgur.com/ZTDqX.png)](http://i.stack.imgur.com/ZTDqX.png)
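Since the values are integers, the digits can also be extracted arithmetically, avoiding the string round-trip — a numpy-only sketch (assuming, as in the question, that every value has exactly 8 digits):

```python
import numpy as np

values = np.array([10000100, 10000000, 10100000, 11110000])
powers = 10 ** np.arange(7, -1, -1)         # 10**7 down to 10**0
digits = (values[:, None] // powers) % 10   # shape: (len(values), 8)
print(digits[0])  # [1 0 0 0 0 1 0 0]
```

Wrapping the result as `pd.DataFrame(digits, index=...)` with the key column as the index (the exact index to use depends on your frame) would reproduce the desired layout.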
Pythonic way to check if integer fits in 64 bits
38,359,202
3
2016-07-13T18:26:58Z
38,359,221
8
2016-07-13T18:28:12Z
[ "python", "integer", "64bit" ]
What is the Pythonic way (without using any external libraries) to check if an integer is small enough to fit in a 64 bit signed quantity? Sorry if this question has been asked before!
Just check the size with the [`int.bit_length()` method](https://docs.python.org/2/library/stdtypes.html#int.bit_length): ``` if integer_value.bit_length() <= 63: ``` The method takes the absolute value, so you want to leave a bit for the sign: ``` >>> (-2 ** 63).bit_length() 64 >>> (2 ** 63).bit_length() 64 ```
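One edge case worth noting: `-2 ** 63` does fit in a signed 64-bit integer even though its `bit_length()` is 64, so the `<= 63` test rejects it. A direct range comparison sidesteps the asymmetry (a small sketch; the helper name is mine):

```python
INT64_MIN, INT64_MAX = -2 ** 63, 2 ** 63 - 1

def fits_in_int64(n):
    """True if n is representable as a signed 64-bit integer."""
    return INT64_MIN <= n <= INT64_MAX

print(fits_in_int64(2 ** 63 - 1))  # True
print(fits_in_int64(-2 ** 63))     # True: the bit_length() test misses this one
print(fits_in_int64(2 ** 63))      # False
```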
Perl's __DATA__ equivalent in Python
38,368,956
7
2016-07-14T08:17:37Z
38,369,018
8
2016-07-14T08:20:38Z
[ "python", "perl", "filehandle" ]
When writing code in perl I often read data in from the filehandle `__DATA__` at the end of the script: ``` while (<DATA>) { chomp; say; } __DATA__ line1 line2 ``` I find this quicker for testing code etc than reading in a file, as it means I can edit its contents on the fly. From the [doc](http://perldoc.perl.org/SelfLoader.html#The-__DATA__-token): > The `__DATA__` token tells the perl compiler that the perl code for > compilation is finished. > > Everything after the `__DATA__` token is available for reading via the > filehandle `FOOBAR::DATA`, where `FOOBAR` is the name of the current > package when the `__DATA__` token is reached. Is there an equivalent in Python? If not, can anybody suggest the most Python-ish way of achieving a similar thing?
No, there is no direct equivalent in Python. Put your data in a multi-line variable:

```
DATA = '''\
line1
line2
'''
```

You can then use `DATA.splitlines()` if you must have access to separate lines.

You can put this at the end of your Python file provided you only use the name `DATA` in a function that is not called until after the whole module has loaded.

Alternatively, open the current module and read from that:

```
# rstrip('co') turns a .pyc / .pyo path back into the .py source path
with open(__file__.rstrip('co')) as data:
    # skip everything up to and including the marker line
    for line in data:
        if line == '# __DATA__\n':
            break
    for line in data:
        pass  # do something with the rest of the 'data' in the current source file

# __DATA__
# This is going to be read later on.
```

However the rest of the module must still at least be valid Python syntax; the Python parser can't be told to stop parsing beyond a given point.

Generally speaking, in Python you'd just put the data file *next* to your source files and read that. You can use the `__file__` variable to produce a path to the 'current directory' and thus to any other files in the same location:

```
import os.path

current_dir = os.path.dirname(os.path.abspath(__file__))
with open(os.path.join(current_dir, 'data.txt')) as data:
    pass  # read from data.txt
```
Lambdas from a list comprehension are returning a lambda when called
38,369,470
51
2016-07-14T08:43:46Z
38,370,058
17
2016-07-14T09:09:30Z
[ "python", "lambda" ]
I am trying to iterate over a list of lambdas as in `test.py`, and I want to get the call result of each lambda, not the function object itself. However, the following output really confused me.

```
------test.py---------
#!/bin/env python
#coding: utf-8

a = [lambda: i for i in range(5)]
for i in a:
    print i()

--------output---------
<function <lambda> at 0x7f489e542e60>
<function <lambda> at 0x7f489e542ed8>
<function <lambda> at 0x7f489e542f50>
<function <lambda> at 0x7f489e54a050>
<function <lambda> at 0x7f489e54a0c8>
```

I changed the loop variable name to `t` when printing the call result, as below, and everything works. I am wondering what that is all about.

```
--------test.py(update)--------
a = [lambda: i for i in range(5)]
for t in a:
    print t()

-----------output-------------
4
4
4
4
4
```
Closures in Python are [late-binding](http://docs.python-guide.org/en/latest/writing/gotchas/#late-binding-closures), meaning that each lambda function in the list will only evaluate the variable `i` when invoked, and *not* when defined. That's why all the functions return the same value, i.e. the last value of `i` (which is 4).

To avoid this, one technique is to bind the value of `i` to a local named parameter:

```
>>> a = [lambda i=i: i for i in range(5)]
>>> for t in a:
...     print t()
...
0
1
2
3
4
```

Another option is to create a [partial function](https://docs.python.org/2/library/functools.html#functools.partial) and bind the current value of `i` as its parameter:

```
>>> from functools import partial
>>> a = [partial(lambda x: x, i) for i in range(5)]
>>> for t in a:
...     print t()
...
0
1
2
3
4
```

**Edit:** Sorry, I misread the question initially, since this kind of question is so often about late binding (thanks [@soon](http://stackoverflow.com/users/1532460/soon) for the comment). The second reason for the behavior is the list comprehension's variable leaking in Python 2, as others have already explained. When using `i` as the iteration variable in the `for` loop, each function prints the current value of `i` (for the reasons stated above), which is simply the function itself. When using a different name (e.g. `t`), the functions print the last value of `i` as it was in the list comprehension loop, which is 4.
Lambdas from a list comprehension are returning a lambda when called
38,369,470
51
2016-07-14T08:43:46Z
38,370,271
46
2016-07-14T09:18:35Z
[ "python", "lambda" ]
I am trying to iterate over a list of lambdas as in `test.py`, and I want to get the call result of each lambda, not the function object itself. However, the following output really confused me.

```
------test.py---------
#!/bin/env python
#coding: utf-8

a = [lambda: i for i in range(5)]
for i in a:
    print i()

--------output---------
<function <lambda> at 0x7f489e542e60>
<function <lambda> at 0x7f489e542ed8>
<function <lambda> at 0x7f489e542f50>
<function <lambda> at 0x7f489e54a050>
<function <lambda> at 0x7f489e54a0c8>
```

I changed the loop variable name to `t` when printing the call result, as below, and everything works. I am wondering what that is all about.

```
--------test.py(update)--------
a = [lambda: i for i in range(5)]
for t in a:
    print t()

-----------output-------------
4
4
4
4
4
```
In Python 2 a list comprehension 'leaks' its variable to the outer scope:

```
>>> [i for i in xrange(3)]
[0, 1, 2]
>>> i
2
```

Note that the behavior is different in Python 3:

```
>>> [i for i in range(3)]
[0, 1, 2]
>>> i
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'i' is not defined
```

When you define a lambda it's bound to the variable `i`, not its current value, as your second example shows. Now when you assign a new value to `i`, the lambda will return whatever the current value is:

```
>>> a = [lambda: i for i in range(5)]
>>> a[0]()
4
>>> i = 'foobar'
>>> a[0]()
'foobar'
```

Since the value of `i` within the loop is the lambda itself, you'll get it as a return value:

```
>>> i = a[0]
>>> i()
<function <lambda> at 0x01D689F0>
>>> i()()()()
<function <lambda> at 0x01D689F0>
```

**UPDATE**: Example on Python 2.7:

```
Python 2.7.6 (default, Jun 22 2015, 17:58:13)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = [lambda: i for i in range(5)]
>>> for i in a:
...     print i()
...
<function <lambda> at 0x7f1eae7f15f0>
<function <lambda> at 0x7f1eae7f1668>
<function <lambda> at 0x7f1eae7f16e0>
<function <lambda> at 0x7f1eae7f1758>
<function <lambda> at 0x7f1eae7f17d0>
```

Same on Python 3.4:

```
Python 3.4.3 (default, Oct 14 2015, 20:28:29)
[GCC 4.8.4] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> a = [lambda: i for i in range(5)]
>>> for i in a:
...     print(i())
...
4
4
4
4
4
```

For details about the change regarding variable scope in list comprehensions, see Guido's [blog post from 2010](http://python-history.blogspot.my/2010/06/from-list-comprehensions-to-generator.html).

> We also made another change in Python 3, to improve equivalence between list comprehensions and generator expressions.
In Python 2, the list comprehension "leaks" the loop control variable into the surrounding scope: ``` x = 'before' a = [x for x in 1, 2, 3] print x # this prints '3', not 'before' ``` > However, in Python 3, we decided to fix the "dirty little secret" of list comprehensions by using the same implementation strategy as for generator expressions. Thus, in Python 3, the above example (after modification to use print(x) :-) will print 'before', proving that the 'x' in the list comprehension temporarily shadows but does not override the 'x' in the surrounding scope.
How to check whether for loop ends completely in python?
38,381,850
3
2016-07-14T18:40:49Z
38,381,893
10
2016-07-14T18:43:15Z
[ "python", "loops", "for-loop", "break" ]
This is a for loop that is cut short by `break`:

```
for i in [1,2,3]:
    print(i)
    if i==3:
        break
```

How can I tell it apart from this one, which runs to completion?

```
for i in [1,2,3]:
    print(i)
```

This is one idea:

```
IsBroken=False
for i in [1,2,3]:
    print(i)
    if i==3:
        IsBroken=True
        break
if IsBroken==True:
    print("for loop was broken")
```
`for` loops can take an `else` block which can serve this purpose: ``` for i in [1,2,3]: print(i) if i==3: break else: print("for loop was not broken") ```
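For instance, wrapping the idea in a helper makes both outcomes visible (a sketch; the function name is made up):

```python
def ends_completely(items, stop_at):
    for i in items:
        if i == stop_at:
            break
    else:
        return True   # reached only when the loop finishes without break
    return False      # reached via break

print(ends_completely([1, 2, 3], stop_at=3))  # False: the break fired
print(ends_completely([1, 2, 3], stop_at=9))  # True: no break, so else ran
```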
What does on_delete do on Django models?
38,388,423
3
2016-07-15T05:26:57Z
38,389,488
8
2016-07-15T06:44:00Z
[ "python", "django", "django-models" ]
I'm quite familiar with Django, but recently noticed there exists an `on_delete=models.CASCADE` option on models. I have searched the documentation for it but couldn't find anything more than:

> **Changed in Django 1.9:**
>
> `on_delete` can now be used as the second positional argument (previously it was typically only passed as a keyword argument). It will be a required argument in Django 2.0.

[an example case of usage is](https://docs.djangoproject.com/en/1.9/ref/models/fields/#django.db.models.ForeignKey)

```
from django.db import models

class Car(models.Model):
    manufacturer = models.ForeignKey(
        'Manufacturer',
        on_delete=models.CASCADE,
    )
    # ...

class Manufacturer(models.Model):
    # ...
    pass
```

What does `on_delete` do? (*my guess: the action to take if the referenced model is deleted*) What does `models.CASCADE` do? (*any hints in the documentation?*) What other options are available (*if my guess is correct*)? Where does the documentation for this reside?
This is the behaviour to adopt when the referenced object is deleted. It is not specific to django, this is an SQL standard. There are 6 possible actions to take when such event occurs: * `CASCADE`: When the referenced object is deleted, also delete the objects that have references to it (When you remove a blog post for instance, you might want to delete comments as well). SQL equivalent: `CASCADE`. * `PROTECT`: Forbid the deletion of the referenced object. To delete it you will have to delete all objects that reference it manually. SQL equivalent: `RESTRICT`. * `SET_NULL`: Set the reference to NULL (requires the field to be nullable). For instance, when you delete a User, you might want to keep the comments he posted on blog posts, but say it was posted by an anonymous (or deleted) user. SQL equivalent: `SET NULL`. * `SET_DEFAULT`: Set the default value. SQL equivalent: `SET DEFAULT`. * `SET(...)`: Set a given value. This one is not part of the SQL standard and is entirely handled by Django. * `DO_NOTHING`: Probably a very bad idea since this would create integrity issues in your database (referencing an object that actually doesn't exist). SQL equivalent: `NO ACTION`. Source: [Django documentation](https://docs.djangoproject.com/en/stable/ref/models/fields/#django.db.models.ForeignKey.on_delete) See also [the documentation of PostGreSQL](https://www.postgresql.org/docs/current/static/sql-createtable.html) for instance. In most cases, `CASCADE` is the expected behaviour, but for every ForeignKey, you should always ask yourself what is the expected behaviour in this situation. `PROTECT` and `SET_NULL` are often useful. Setting `CASCADE` where it should not, can potentially delete all your database in cascade, by simply deleting a single user.
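Since these actions map onto plain SQL, the `CASCADE` behaviour can be illustrated with the stdlib `sqlite3` module (a sketch, not Django code; the table and column names are made up, and SQLite only enforces foreign-key actions when the pragma is on):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('PRAGMA foreign_keys = ON')  # SQLite ignores FK actions without this
conn.execute('CREATE TABLE manufacturer (id INTEGER PRIMARY KEY)')
conn.execute('''CREATE TABLE car (
                    id INTEGER PRIMARY KEY,
                    manufacturer_id INTEGER
                        REFERENCES manufacturer(id) ON DELETE CASCADE)''')
conn.execute('INSERT INTO manufacturer VALUES (1)')
conn.execute('INSERT INTO car VALUES (1, 1)')

conn.execute('DELETE FROM manufacturer WHERE id = 1')
print(conn.execute('SELECT COUNT(*) FROM car').fetchone()[0])  # 0: the car cascaded away
```

Note that Django itself emulates most of these behaviours in Python when deleting objects, rather than relying on database-level constraints.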
Sort a list of strings based on a certain field
38,388,799
4
2016-07-15T05:57:44Z
38,389,035
8
2016-07-15T06:15:18Z
[ "python", "list", "python-2.7", "sorting" ]
Overview: I have data something like this (each row is a string):

> 81:0A:D7:19:25:7B, **2016-07-14 14:29:13**, 2016-07-14 14:29:15, -69, 22:22:22:22:22:23,null,^M
> 3B:3F:B9:0A:83:E6, **2016-07-14 01:28:59**, 2016-07-14 01:29:01, -36, 33:33:33:33:33:31,null,^M
> B3:C0:6E:77:E5:31, **2016-07-14 08:26:45**, 2016-07-14 08:26:47, -65, 33:33:33:33:33:32,null,^M
> 61:01:55:16:B5:52, **2016-07-14 06:25:32**, 2016-07-14 06:25:34, -56, 33:33:33:33:33:33,null,^M

And I want to sort the rows based on the first timestamp present in each string, which for these four records is:

> 2016-07-14 01:28:59
>
> 2016-07-14 06:25:32
>
> 2016-07-14 08:26:45
>
> 2016-07-14 14:29:13

Now I know the `sort()` method, but I don't understand how I can use it here to sort all the rows based on this (timestamp) field, and I do need to keep the final sorted data in the same format, as some other service is going to use it. I also understand I can supply a `key`, but I am not clear how it can be made to sort on the timestamp field.
You can use the list method `list.sort` which sorts in-place or use the `sorted()` built-in function which returns a new list. the `key` argument takes a function which it applies to each element of the sequence before sorting. You can use a combination of `string.split(',')` and indexing to the second element, e.g. some\_list[1], so: ``` In [8]: list_of_strings Out[8]: ['81:0A:D7:19:25:7B, 2016-07-14 14:29:13, 2016-07-14 14:29:15, -69, 22:22:22:22:22:23,null,^M', '3B:3F:B9:0A:83:E6, 2016-07-14 01:28:59, 2016-07-14 01:29:01, -36, 33:33:33:33:33:31,null,^M', 'B3:C0:6E:77:E5:31, 2016-07-14 08:26:45, 2016-07-14 08:26:47, -65, 33:33:33:33:33:32,null,^M', '61:01:55:16:B5:52, 2016-07-14 06:25:32, 2016-07-14 06:25:34, -56, 33:33:33:33:33:33,null,^M'] In [9]: sorted(list_of_strings, key=lambda s: s.split(',')[1]) Out[9]: ['3B:3F:B9:0A:83:E6, 2016-07-14 01:28:59, 2016-07-14 01:29:01, -36, 33:33:33:33:33:31,null,^M', '61:01:55:16:B5:52, 2016-07-14 06:25:32, 2016-07-14 06:25:34, -56, 33:33:33:33:33:33,null,^M', 'B3:C0:6E:77:E5:31, 2016-07-14 08:26:45, 2016-07-14 08:26:47, -65, 33:33:33:33:33:32,null,^M', '81:0A:D7:19:25:7B, 2016-07-14 14:29:13, 2016-07-14 14:29:15, -69, 22:22:22:22:22:23,null,^M'] ``` Or if you'd rather sort a list in place, ``` list_of_strings Out[12]: ['81:0A:D7:19:25:7B, 2016-07-14 14:29:13, 2016-07-14 14:29:15, -69, 22:22:22:22:22:23,null,^M', '3B:3F:B9:0A:83:E6, 2016-07-14 01:28:59, 2016-07-14 01:29:01, -36, 33:33:33:33:33:31,null,^M', 'B3:C0:6E:77:E5:31, 2016-07-14 08:26:45, 2016-07-14 08:26:47, -65, 33:33:33:33:33:32,null,^M', '61:01:55:16:B5:52, 2016-07-14 06:25:32, 2016-07-14 06:25:34, -56, 33:33:33:33:33:33,null,^M'] list_of_strings.sort(key=lambda s: s.split(',')[1]) list_of_strings Out[14]: ['3B:3F:B9:0A:83:E6, 2016-07-14 01:28:59, 2016-07-14 01:29:01, -36, 33:33:33:33:33:31,null,^M', '61:01:55:16:B5:52, 2016-07-14 06:25:32, 2016-07-14 06:25:34, -56, 33:33:33:33:33:33,null,^M', 'B3:C0:6E:77:E5:31, 2016-07-14 08:26:45, 2016-07-14 08:26:47, -65, 
33:33:33:33:33:32,null,^M', '81:0A:D7:19:25:7B, 2016-07-14 14:29:13, 2016-07-14 14:29:15, -69, 22:22:22:22:22:23,null,^M'] ```
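The string key works here because the zero-padded `YYYY-MM-DD HH:MM:SS` format sorts lexicographically in chronological order. If the format could ever vary, parsing explicitly is safer (a sketch with two made-up rows):

```python
from datetime import datetime

def first_timestamp(row):
    # field 1 looks like ' 2016-07-14 14:29:13'; strip the leading space before parsing
    return datetime.strptime(row.split(',')[1].strip(), '%Y-%m-%d %H:%M:%S')

rows = [
    '81:0A:D7:19:25:7B, 2016-07-14 14:29:13, 2016-07-14 14:29:15, -69, 22:22:22:22:22:23,null,',
    '3B:3F:B9:0A:83:E6, 2016-07-14 01:28:59, 2016-07-14 01:29:01, -36, 33:33:33:33:33:31,null,',
]
rows.sort(key=first_timestamp)
print(rows[0][:17])  # 3B:3F:B9:0A:83:E6 now comes first
```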
numpy: Why is there a difference between (x,1) and (x, ) dimensionality
38,402,227
5
2016-07-15T17:41:57Z
38,403,363
7
2016-07-15T18:59:24Z
[ "python", "numpy" ]
I am wondering why in numpy there are one-dimensional arrays of shape (length, 1) and also one-dimensional arrays of shape (length,) without a second value. I am running into this quite frequently, e.g. when using `np.concatenate()`, which then requires a `reshape` step beforehand (or I could directly use `hstack`/`vstack`). I can't think of a reason why this behavior is desirable. Can someone explain?

**Edit:** It was suggested in one of the comments that my question is a possible duplicate. I am more interested in the underlying workings of numpy, not in the fact that there is a distinction between 1d and 2d arrays, which I think is the point of the mentioned thread.
The data of a `ndarray` is stored as a 1d buffer - just a block of memory. The multidimensional nature of the array is produced by the `shape` and `strides` attributes, and the code that uses them. The `numpy` developers chose to allow for an arbitrary number of dimensions, so the shape and strides are represented as tuples of any length, including 0 and 1. In contrast MATLAB was built around FORTRAN programs that were developed for matrix operations. In the early days everything in MATLAB was a 2d matrix. Around 2000 (v3.5) it was generalized to allow more than 2d, but never less. The `numpy` `np.matrix` still follows that old 2d MATLAB constraint. If you come from a MATLAB world you are used to these 2 dimensions, and the distinction between a row vector and column vector. But in math and physics that isn't influenced by MATLAB, a vector is a 1d array. Python lists are inherently 1d, as are `c` arrays. To get 2d you have to have lists of lists or arrays of pointers to arrays, with `x[1][2]` style of indexing. Look at the shape and strides of this array and its variants: ``` In [48]: x=np.arange(10) In [49]: x.shape Out[49]: (10,) In [50]: x.strides Out[50]: (4,) In [51]: x1=x.reshape(10,1) In [52]: x1.shape Out[52]: (10, 1) In [53]: x1.strides Out[53]: (4, 4) In [54]: x2=np.concatenate((x1,x1),axis=1) In [55]: x2.shape Out[55]: (10, 2) In [56]: x2.strides Out[56]: (8, 4) ``` MATLAB adds new dimensions at the end. It orders its values like a `order='F'` array, and can readily change a (n,1) matrix to a (n,1,1,1). `numpy` is default `order='C'`, and readily expands an array dimension at the start. Understanding this is essential when taking advantage of broadcasting. Thus `x1 + x` is a (10,1)+(10,) => (10,1)+(1,10) => (10,10) Because of broadcasting a `(n,)` array is more like a `(1,n)` one than a `(n,1)` one. A 1d array is more like a row matrix than a column one. 
``` In [64]: np.matrix(x) Out[64]: matrix([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]]) In [65]: _.shape Out[65]: (1, 10) ``` The point with `concatenate` is that it requires matching dimensions. It does not use broadcasting to adjust dimensions. There are a bunch of `stack` functions that ease this constraint, but they do so by adjusting the dimensions before using `concatenate`. Look at their code (readable Python). So a proficient numpy user needs to be comfortable with that generalized `shape` tuple, including the empty `()` (0d array), `(n,)` 1d, and up. For more advanced stuff understanding strides helps as well (look for example at the strides and shape of a transpose).
Is None really a built-in?
38,404,151
2
2016-07-15T19:55:09Z
38,404,298
9
2016-07-15T20:05:12Z
[ "python", "built-in", "nonetype" ]
I am trying to use Python's (2.7) `eval` in a (relatively) safe manner. Hence, I defined: ``` def safer_eval(string): """Safer version of eval() as globals and builtins are inaccessible""" return eval(string, {'__builtins__': {}}) ``` As expected, the following does not work any more: ``` print safer_eval("True") NameError: name 'True' is not defined ``` However, I can still eval a `"None"` string: ``` print safer_eval("None") None ``` * So, is `None` not a built-in? They are at least both part of `__builtin__` ... * Why is it still eval-able? * How would I get rid of it, if I had to?
`None` is a *constant* in Python, see the [*Keywords* documentation](https://docs.python.org/2/reference/lexical_analysis.html#keywords): > Changed in version 2.4: `None` became a constant and is now recognized by the compiler as a name for the built-in object `None`. Although it is not a keyword, you cannot assign a different object to it. The compiler simply inserts a reference to the singleton `None` object whenever you name it: ``` >>> from dis import dis >>> dis(compile('None', '', 'eval')) 1 0 LOAD_CONST 0 (None) 3 RETURN_VALUE ``` `True` and `False` are built-ins in Python 2, which also means they can be masked. In Python 3, `None`, `True` and `False` all are [now keywords](https://docs.python.org/3/reference/lexical_analysis.html#keywords), and all three are materialised merely by naming them: ``` >>> eval('True', {'__builtins__': {}}) True ``` See [Guido van Rossum's blog post on why this was changed](http://python-history.blogspot.co.uk/2013/11/story-of-none-true-false.html). Note that there is **nothing safe** about eval, even with `__builtins__` neutered, as it can still be referenced via other means: ``` >>> s = ''' ... [ ... c for c in ().__class__.__base__.__subclasses__() ... if c.__name__ == 'catch_warnings' ... ][0]()._module.__builtins__ ... ''' >>> eval(s, {'__builtins__': {}}) {'bytearray': <type 'bytearray'>, 'IndexError': <type 'exceptions.IndexError'>, 'all': <built-in function all>, 'help': Type help() for interactive help, or help(object) for help about object., 'vars': <built-in function vars>, 'SyntaxError': <type 'exceptions.SyntaxError'>, 'unicode': <type 'unicode'>, 'UnicodeDecodeError': <type 'exceptions.UnicodeDecodeError'>, 'memoryview': <type 'memoryview'>, 'isinstance': <built-in function isinstance>, 'copyright': Copyright (c) 2001-2015 Python Software Foundation. All Rights Reserved. Copyright (c) 2000 BeOpen.com. All Rights Reserved. Copyright (c) 1995-2001 Corporation for National Research Initiatives. All Rights Reserved. 
Copyright (c) 1991-1995 Stichting Mathematisch Centrum, Amsterdam. All Rights Reserved., 'NameError': <type 'exceptions.NameError'>, 'BytesWarning': <type 'exceptions.BytesWarning'>, 'dict': <type 'dict'>, 'input': <built-in function input>, 'oct': <built-in function oct>, 'bin': <built-in function bin>, 'SystemExit': <type 'exceptions.SystemExit'>, 'StandardError': <type 'exceptions.StandardError'>, 'format': <built-in function format>, 'repr': <built-in function repr>, 'sorted': <built-in function sorted>, 'False': False, 'RuntimeWarning': <type 'exceptions.RuntimeWarning'>, 'list': <type 'list'>, 'iter': <built-in function iter>, 'reload': <built-in function reload>, 'Warning': <type 'exceptions.Warning'>, '__package__': None, 'round': <built-in function round>, 'dir': <built-in function dir>, 'cmp': <built-in function cmp>, 'set': <type 'set'>, 'bytes': <type 'str'>, 'reduce': <built-in function reduce>, 'intern': <built-in function intern>, 'issubclass': <built-in function issubclass>, 'Ellipsis': Ellipsis, 'EOFError': <type 'exceptions.EOFError'>, 'locals': <built-in function locals>, 'BufferError': <type 'exceptions.BufferError'>, 'slice': <type 'slice'>, 'FloatingPointError': <type 'exceptions.FloatingPointError'>, 'sum': <built-in function sum>, 'getattr': <built-in function getattr>, 'abs': <built-in function abs>, 'exit': Use exit() or Ctrl-D (i.e. EOF) to exit, 'print': <built-in function print>, 'True': True, 'FutureWarning': <type 'exceptions.FutureWarning'>, 'ImportWarning': <type 'exceptions.ImportWarning'>, 'None': None, 'hash': <built-in function hash>, 'ReferenceError': <type 'exceptions.ReferenceError'>, 'len': <built-in function len>, 'credits': Thanks to CWI, CNRI, BeOpen.com, Zope Corporation and a cast of thousands for supporting Python development. 
See www.python.org for more information., 'frozenset': <type 'frozenset'>, '__name__': '__builtin__', 'ord': <built-in function ord>, 'super': <type 'super'>, '_': None, 'TypeError': <type 'exceptions.TypeError'>, 'license': See http://www.python.org/2.7/license.html, 'KeyboardInterrupt': <type 'exceptions.KeyboardInterrupt'>, 'UserWarning': <type 'exceptions.UserWarning'>, 'filter': <built-in function filter>, 'range': <built-in function range>, 'staticmethod': <type 'staticmethod'>, 'SystemError': <type 'exceptions.SystemError'>, 'BaseException': <type 'exceptions.BaseException'>, 'pow': <built-in function pow>, 'RuntimeError': <type 'exceptions.RuntimeError'>, 'float': <type 'float'>, 'MemoryError': <type 'exceptions.MemoryError'>, 'StopIteration': <type 'exceptions.StopIteration'>, 'globals': <built-in function globals>, 'divmod': <built-in function divmod>, 'enumerate': <type 'enumerate'>, 'apply': <built-in function apply>, 'LookupError': <type 'exceptions.LookupError'>, 'open': <built-in function open>, 'quit': Use quit() or Ctrl-D (i.e. EOF) to exit, 'basestring': <type 'basestring'>, 'UnicodeError': <type 'exceptions.UnicodeError'>, 'zip': <built-in function zip>, 'hex': <built-in function hex>, 'long': <type 'long'>, 'next': <built-in function next>, 'ImportError': <type 'exceptions.ImportError'>, 'chr': <built-in function chr>, 'xrange': <type 'xrange'>, 'type': <type 'type'>, '__doc__': "Built-in functions, exceptions, and other objects.\n\nNoteworthy: None is the `nil' object; Ellipsis represents `...' 
in slices.", 'Exception': <type 'exceptions.Exception'>, 'tuple': <type 'tuple'>, 'UnicodeTranslateError': <type 'exceptions.UnicodeTranslateError'>, 'reversed': <type 'reversed'>, 'UnicodeEncodeError': <type 'exceptions.UnicodeEncodeError'>, 'IOError': <type 'exceptions.IOError'>, 'hasattr': <built-in function hasattr>, 'delattr': <built-in function delattr>, 'setattr': <built-in function setattr>, 'raw_input': <built-in function raw_input>, 'SyntaxWarning': <type 'exceptions.SyntaxWarning'>, 'compile': <built-in function compile>, 'ArithmeticError': <type 'exceptions.ArithmeticError'>, 'str': <type 'str'>, 'property': <type 'property'>, 'GeneratorExit': <type 'exceptions.GeneratorExit'>, 'int': <type 'int'>, '__import__': <built-in function __import__>, 'KeyError': <type 'exceptions.KeyError'>, 'coerce': <built-in function coerce>, 'PendingDeprecationWarning': <type 'exceptions.PendingDeprecationWarning'>, 'file': <type 'file'>, 'EnvironmentError': <type 'exceptions.EnvironmentError'>, 'unichr': <built-in function unichr>, 'id': <built-in function id>, 'OSError': <type 'exceptions.OSError'>, 'DeprecationWarning': <type 'exceptions.DeprecationWarning'>, 'min': <built-in function min>, 'UnicodeWarning': <type 'exceptions.UnicodeWarning'>, 'execfile': <built-in function execfile>, 'any': <built-in function any>, 'complex': <type 'complex'>, 'bool': <type 'bool'>, 'ValueError': <type 'exceptions.ValueError'>, 'NotImplemented': NotImplemented, 'map': <built-in function map>, 'buffer': <type 'buffer'>, 'max': <built-in function max>, 'object': <type 'object'>, 'TabError': <type 'exceptions.TabError'>, 'callable': <built-in function callable>, 'ZeroDivisionError': <type 'exceptions.ZeroDivisionError'>, 'eval': <built-in function eval>, '__debug__': True, 'IndentationError': <type 'exceptions.IndentationError'>, 'AssertionError': <type 'exceptions.AssertionError'>, 'classmethod': <type 'classmethod'>, 'UnboundLocalError': <type 'exceptions.UnboundLocalError'>, 
'NotImplementedError': <type 'exceptions.NotImplementedError'>, 'AttributeError': <type 'exceptions.AttributeError'>, 'OverflowError': <type 'exceptions.OverflowError'>} ``` or you can simply blow up the interpreter by creating a broken code object. See [*Eval really is dangerous*](http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html). If all you want to do is load Python literal syntax (lists, tuples, dictionaries, strings, numbers, etc.) then you want to use the [`ast.literal_eval()` function](https://docs.python.org/2/library/ast.html#ast.literal_eval), specifically designed to be safe.
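A quick sketch of the difference in practice:

```python
import ast

# Literal syntax -- numbers, strings, tuples, lists, dicts, sets,
# booleans and None -- evaluates fine:
print(ast.literal_eval("{'a': [1, 2.5, None, True]}"))

# Anything that is not a literal (names, calls, attribute access) is rejected:
try:
    ast.literal_eval("__import__('os').system('echo pwned')")
except ValueError:
    print('rejected')
```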
A neat way to create an infinite cyclic generator?
38,415,014
2
2016-07-16T19:51:41Z
38,415,018
7
2016-07-16T19:52:34Z
[ "python", "python-3.x" ]
I want a generator that cycles infinitely through a list of values. Here is my solution, but I may be missing a more obvious one. The ingredients: a generator function that flattens an infinitely nested list, and a list appended to itself

```
def ge(x):
    for it in x:
        if isinstance(it, list):
            yield from ge(it)
        else:
            yield(it)

def infinitecyclegenerator(l):
    x = l[:]
    x.append(x)
    yield from ge(x)
```

The use:

```
g = infinitecyclegenerator([1,2,3])
next(g) #1
next(g) #2
next(g) #3
next(g) #1
next(g) #2
next(g) #3
next(g) #1
...
```

As I said, I may be missing a trivial way to do the same and I'll be happy to learn. Is there a neater way? Also, should I worry about memory consumption with all the mind-boggling infinities going on here, or is everything cool with my code?
You can use [`itertools.cycle`](https://docs.python.org/2.7/library/itertools.html#itertools.cycle) to achieve the same result > Make an iterator returning elements from the iterable and saving a > copy of each. When the iterable is exhausted, return elements from the > **saved copy**. *Emphasis mine*. Your only concern about memory would be saving a copy of each item returned by the *iterator*. ``` >>> from itertools import cycle >>> c = cycle([1,2,3]) >>> next(c) 1 >>> next(c) 2 >>> next(c) 3 >>> next(c) 1 >>> next(c) 2 >>> next(c) 3 ```
What index should I use to convert a numpy array into a pandas dataframe?
38,419,314
3
2016-07-17T08:30:29Z
38,419,655
7
2016-07-17T09:13:49Z
[ "python", "arrays", "numpy", "pandas", "dataframe" ]
I am trying to convert a simple numpy array into a pandas dataframe. `x` is my array, `nam` is the list of the columns names. ``` x = np.array([2,3,1,0]) nam = ['col1', 'col2', 'col3', 'col4'] ``` I use `pd.DataFrame` to convert `x` ``` y = pd.DataFrame(x, columns=nam) ``` But I have this error message : > ValueError: Shape of passed values is (1, 4), indices imply (4, 4) I tried to adjust the index parameter but I can't find the solution. I want my dataframe to look like this: ``` col1 col2 col3 col4 2 3 1 0 ```
Another simpler solution with `[]`:

```
x = np.array([2,3,1,0])
nam = ['col1', 'col2', 'col3', 'col4']
print (pd.DataFrame([x], columns=nam))

   col1  col2  col3  col4
0     2     3     1     0
```
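Equivalently, you can make the row shape explicit on the numpy side before handing the array to pandas (a sketch):

```python
import numpy as np
import pandas as pd

x = np.array([2, 3, 1, 0])
nam = ['col1', 'col2', 'col3', 'col4']

# reshape (4,) -> (1, 4): one row, four columns, matching what the error
# message "indices imply (4, 4)" was asking about
df = pd.DataFrame(x.reshape(1, -1), columns=nam)
print(df)
```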
Proper type annotation of Python functions with yield
38,419,654
8
2016-07-17T09:13:46Z
38,423,388
7
2016-07-17T16:25:17Z
[ "python", "generator", "coroutine", "static-typing", "mypy" ]
After reading Eli Bendersky's article [on implementing state machines via Python coroutines](http://eli.thegreenplace.net/2009/08/29/co-routines-as-an-alternative-to-state-machines/) I wanted to... * see his example run under Python3 * and also add the appropriate type annotations for the generators I succeeded in doing the first part (*but without using `async def`s or `yield from`s, I basically just ported the code - so any improvements there are most welcome*). But I need some help with the type annotations of the coroutines: ``` #!/usr/bin/env python3 from typing import Callable, Generator def unwrap_protocol(header: int=0x61, footer: int=0x62, dle: int=0xAB, after_dle_func: Callable[[int], int]=lambda x: x, target: Generator=None) -> Generator: """ Simplified protocol unwrapping co-routine.""" # # Outer loop looking for a frame header # while True: byte = (yield) frame = [] # type: List[int] if byte == header: # # Capture the full frame # while True: byte = (yield) if byte == footer: target.send(frame) break elif byte == dle: byte = (yield) frame.append(after_dle_func(byte)) else: frame.append(byte) def frame_receiver() -> Generator: """ A simple co-routine "sink" for receiving full frames.""" while True: frame = (yield) print('Got frame:', ''.join('%02x' % x for x in frame)) bytestream = bytes( bytearray((0x70, 0x24, 0x61, 0x99, 0xAF, 0xD1, 0x62, 0x56, 0x62, 0x61, 0xAB, 0xAB, 0x14, 0x62, 0x7))) frame_consumer = frame_receiver() next(frame_consumer) # Get to the yield unwrapper = unwrap_protocol(target=frame_consumer) next(unwrapper) # Get to the yield for byte in bytestream: unwrapper.send(byte) ``` This runs properly... ``` $ ./decoder.py Got frame: 99afd1 Got frame: ab14 ``` ...and also typechecks: ``` $ mypy --disallow-untyped-defs decoder.py $ ``` But I am pretty sure I can do better than just use the `Generator` base class in the type specs (just as I did for the `Callable`). 
I know it takes 3 type parameters (`Generator[A,B,C]`), but I am not sure how exactly they'd be specified here. Any help most welcome.
I figured out the answer on my own. I searched, but found no documentation for the 3 type parameters of `Generator` in the [official typing documentation for Python 3.5.2](https://docs.python.org/3/library/typing.html) - beyond a truly cryptic mention of... ``` class typing.Generator(Iterator[T_co], Generic[T_co, T_contra, V_co]) ``` Luckily, [the original PEP484](https://www.python.org/dev/peps/pep-0484/) (that started all this) was far more helpful: *"The return type of generator functions can be annotated by the generic type Generator[yield\_type, send\_type, return\_type] provided by typing.py module:* ``` def echo_round() -> Generator[int, float, str]: res = yield while res: res = yield round(res) return 'OK' ``` Based on this, I was able to annotate my Generators, and saw `mypy` confirm my assignments: ``` from typing import Callable, Generator # A protocol decoder: # # - yields Nothing # - expects ints to be `send` in his yield waits # - and doesn't return anything. ProtocolDecodingCoroutine = Generator[None, int, None] # A frame consumer (passed as an argument to a protocol decoder): # # - yields Nothing # - expects List[int] to be `send` in his waiting yields # - and doesn't return anything. FrameConsumerCoroutine = Generator[None, List[int], None] def unwrap_protocol(header: int=0x61, footer: int=0x62, dle :int=0xAB, after_dle_func: Callable[[int], int]=lambda x: x, target: FrameConsumerCoroutine=None) -> ProtocolDecodingCoroutine: ... def frame_receiver() -> FrameConsumerCoroutine: ... ``` I tested my assignments by e.g. swaping the order of the types - and then as expected, `mypy` complained and asked for the proper ones (as seen above). The complete code [is accessible from here](https://gist.github.com/ttsiodras/fe32284ac204907249d479b4225eb83c). 
I will leave the question open for a couple of days, in case anyone wants to chime in - especially in terms of using the new coroutine styles of Python 3.5 (`async def`, etc) - I would appreciate a hint on exactly how they'd be used here.
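In the meantime, the meaning of the three parameters can be checked by hand at runtime. The sketch below is a slightly adapted version of PEP 484's `echo_round` (the loop condition is changed so a negative number cleanly ends the generator - that change is mine, not the PEP's):

```python
from typing import Generator


def echo_round() -> Generator[int, float, str]:
    # yields ints, receives floats via send(), and returns a str
    sent = yield 0
    while sent >= 0:
        sent = yield round(sent)
    return 'OK'


gen = echo_round()
assert next(gen) == 0        # advance to the first yield (the "priming" next)
assert gen.send(2.6) == 3    # send_type (float) in, yield_type (int) out
try:
    gen.send(-1.0)
except StopIteration as stop:
    assert stop.value == 'OK'  # return_type surfaces in StopIteration.value
```

So `Generator[yield_type, send_type, return_type]` maps directly onto what `next()`, `.send()` and `StopIteration.value` carry.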
Python: PEP 8 class name as variable
38,433,503
19
2016-07-18T09:45:48Z
38,434,962
10
2016-07-18T10:58:05Z
[ "python", "pep8" ]
Which is the convention according to PEP 8 for writing variables that identify class names (not instances)? That is, given two classes, `A` and `B`, which of the following statements would be the right one? ``` target_class = A if some_condition else B instance = target_class() ``` or ``` TargetClass = A if some_condition else B instance = TargetClass() ``` --- As stated in the style guide, > **Class Names**: > > Class names should normally use the CapWords convention. But also > **Method Names and Instance Variables:** > > Use the function naming rules: lowercase with words separated by underscores as necessary to improve readability. In my opinion, these two conventions clash and I can't find which one prevails.
Since PEP 8 does not specifically cover this case, one can make an argument for both sides of the coin:

One side is: as `A` and `B` are both variables as well, but hold a reference to a class, use CamelCase (`TargetClass`) in this case.

Nothing prevents you from doing

```
class A: pass
class B: pass

x = A
A = B
B = x
```

Now `A` and `B` point to the respectively other class, so they aren't really fixed to the class. So `A` and `B` have the sole responsibility of holding a class (no matter if they have the same name or a different one), and so has `TargetClass`.

---

In order to remain unbiased, we can just as well argue the other way:

`A` and `B` are special insofar as they are created along with their classes, and the classes' internals have the same name. In that sense they are kind of "original"; any other assignment should be marked as special, in that it is to be seen as a variable and thus written in `lower_case`.

---

The truth lies, as so often, somewhere in the middle. There are cases where I would go one way, and others where I would go the other way.

Example 1: You pass a class, which maybe should be instantiated, to a method or function:

```
def create_new_one(cls):
    return cls()

class A: pass
class B: pass

print(create_new_one(A))
```

In this case, `cls` is clearly of very temporary state and clearly a variable; it can be different at every call. So it should be `lower_case`.

Example 2: Aliasing of a class

```
class OldAPI: pass
class NewAPI: pass
class ThirdAPI: pass

CurrentAPI = ThirdAPI
```

In this case, `CurrentAPI` is to be seen as a kind of alias for the other one and remains constant throughout the program run. Here I would prefer `CamelCase`.
Pandas dataframe, split data by last column in last position but keep other columns
38,441,831
3
2016-07-18T16:32:15Z
38,442,043
7
2016-07-18T16:46:02Z
[ "python", "pandas", "dataframe" ]
Very new to pandas so any explanation with a solution is appreciated. I have a dataframe such as

```
                                Company   Zip State City
1                                 *CBRE   San Diego, CA 92101
4                           1908 Brands   Boulder, CO 80301
7    1st Infantry Division Headquarters   Fort Riley, KS
10        21st Century Healthcare, Inc.   Tempe 85282
15                                  AAA   Jefferson City, MO 65101-9564
```

I want to split the Zip State City column in my data into 3 different columns. Using the answer from this post [Pandas DataFrame, how do i split a column into two](http://stackoverflow.com/questions/14745022/pandas-dataframe-how-do-i-split-a-column-into-two) I could accomplish this task if I didn't have my first column. Writing a regex to capture all companies just leads to me capturing everything in my data. I also tried

```
foo = lambda x: pandas.Series([i for i in reversed(x.split())])
data_pretty = data['Zip State City'].apply(foo)
```

but this causes me to lose the company column and splits the names of the cities that are more than one word into separate columns. How can I split my last column while keeping the company column data?
you can use [extract()](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.html) method: ``` In [110]: df Out[110]: Company Zip State City 1 *CBRE San Diego, CA 92101 4 1908 Brands Boulder, CO 80301 7 1st Infantry Division Headquarters Fort Riley, KS 10 21st Century Healthcare, Inc. Tempe 85282 15 AAA Jefferson City, MO 65101-9564 In [112]: df[['City','State','ZIP']] = df['Zip State City'].str.extract(r'([^,\d]+)?[,]*\s*([A-Z]{2})?\s*([\d\-]{4,11})?', expand=True) In [113]: df Out[113]: Company Zip State City City State ZIP 1 *CBRE San Diego, CA 92101 San Diego CA 92101 4 1908 Brands Boulder, CO 80301 Boulder CO 80301 7 1st Infantry Division Headquarters Fort Riley, KS Fort Riley KS NaN 10 21st Century Healthcare, Inc. Tempe 85282 Tempe NaN 85282 15 AAA Jefferson City, MO 65101-9564 Jefferson City MO 65101-9564 ``` From [docs](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.html): ``` Series.str.extract(pat, flags=0, expand=None) ``` > For each subject string in the Series, extract groups from the first > match of regular expression pat. > > New in version 0.13.0. > > Parameters: > > **pat** : string > > Regular expression pattern with capturing groups > > **flags** : int, default 0 (no flags) > > re module flags, e.g. > re.IGNORECASE .. versionadded:: 0.18.0 > > **expand** : bool, default False > > If True, return DataFrame. > > If False, return Series/Index/DataFrame. > > **Returns**: DataFrame with one row for each subject string, and one > column for each group. Any capture group names in regular expression > pat will be used for column names; otherwise capture group numbers > will be used. The dtype of each result column is always object, even > when no match is found. If expand=True and pat has only one capture > group, then return a Series (if subject is a Series) or Index (if > subject is an Index).
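The capture groups in that pattern can be sanity-checked with plain `re` before wiring them into pandas - a quick sketch (note the first group keeps a trailing space when no comma follows the city, which pandas will also keep):

```python
import re

pattern = r'([^,\d]+)?[,]*\s*([A-Z]{2})?\s*([\d\-]{4,11})?'

# city, state and ZIP all present
assert re.match(pattern, 'San Diego, CA 92101').groups() == ('San Diego', 'CA', '92101')

# no state: group 2 comes back as None, and the city keeps its trailing space
city, state, zip_code = re.match(pattern, 'Tempe 85282').groups()
assert city.strip() == 'Tempe' and state is None and zip_code == '85282'
```

The `{4,11}` quantifier on the last group is what lets both plain `92101` and extended `65101-9564` ZIP codes match.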
Filling dict with NA values to allow conversion to pandas dataframe
38,446,457
8
2016-07-18T21:43:09Z
38,446,637
7
2016-07-18T21:58:24Z
[ "python", "pandas" ]
I have a dict that holds computed values on different time lags, which means they start on different dates. For instance, the data I have may look like the following:

```
Date      col1  col2  col3  col4  col5
01-01-15     5    12     1   -15    10
01-02-15     7     0     9    11     7
01-03-15           6     1     2    18
01-04-15           9     8    10
01-05-15          -4           7
01-06-15         -11          -1
01-07-15           6
```

Where each header is the key, and each column of values is the value for each key (I'm using a `defaultdict(list)` for this). When I try to run `pd.DataFrame.from_dict(d)` I understandably get an error stating that all arrays must be the same length. Is there an easy/trivial way to fill or pad the numbers so that the output would end up being the following dataframe?

```
Date      col1  col2  col3  col4  col5
01-01-15     5    12     1   -15    10
01-02-15     7     0     9    11     7
01-03-15   NaN     6     1     2    18
01-04-15   NaN     9     8    10   NaN
01-05-15   NaN    -4   NaN     7   NaN
01-06-15   NaN   -11   NaN    -1   NaN
01-07-15   NaN     6   NaN   NaN   NaN
```

Or will I have to do this manually with each list? Here is the code to recreate the dictionary:

```
import pandas as pd
from collections import defaultdict

d = defaultdict(list)

d["Date"].extend([
    "01-01-15",
    "01-02-15",
    "01-03-15",
    "01-04-15",
    "01-05-15",
    "01-06-15",
    "01-07-15"
])
d["col1"].extend([5, 7])
d["col2"].extend([12, 0, 6, 9, -4, -11, 6])
d["col3"].extend([1, 9, 1, 8])
d["col4"].extend([-15, 11, 2, 10, 7, -1])
d["col5"].extend([10, 7, 18])
```
Another option is to use `from_dict` with `orient='index'` and then take the transpose:

```
my_dict = {'a' : [1, 2, 3, 4, 5], 'b': [1, 2, 3]}
df = pd.DataFrame.from_dict(my_dict, orient='index').T
```

Note that you could run into problems with `dtype` if your columns have different types, e.g. floats in one column, strings in another.

Resulting output:

```
     a    b
0  1.0  1.0
1  2.0  2.0
2  3.0  3.0
3  4.0  NaN
4  5.0  NaN
```
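To make the `dtype` caveat concrete, here is a quick sketch using the same dictionary - the NaN padding of the shorter list forces the integer values to come back as floats:

```python
import pandas as pd

my_dict = {'a': [1, 2, 3, 4, 5], 'b': [1, 2, 3]}
df = pd.DataFrame.from_dict(my_dict, orient='index').T

# the shorter list is padded with NaN at the bottom...
assert df['b'].isnull().sum() == 2
# ...and the values that started out as ints come back as floats
assert list(df['a'][:3]) == [1.0, 2.0, 3.0]
```

Mixing strings into one of the lists would instead push the affected columns to `object` dtype, which is why it pays to check `df.dtypes` after the transpose.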
list of objects python
38,448,914
9
2016-07-19T03:03:15Z
38,449,719
10
2016-07-19T04:42:11Z
[ "python", "python-object" ]
I am trying to print a list of python objects that contain a list as a property and I am having some unexpected results: here is my code:

```
class video(object):
    name = ''
    url = ''

class topic(object):
    topicName = ''
    listOfVideo = []

    def addVideo(self,videoToAdd):
        self.listOfVideo.append(videoToAdd)

    def getTopic(self):
        return self.topicName

    def getListOfVideo(self):
        return self.listOfVideo

topic1 = topic()
topic1.topicName = 'topic1'

video1 = video()
video1.name = 'VideoName1'
video1.url = 'VideoURL1'

video2 = video()
video2.name = 'VideoName2'
video2.url = 'VideoURL2'

topic1.addVideo(video1)
topic1.addVideo(video2)

topic2 = topic()
topic2.topicName = 'topic2'

video3 = video()
video3.name = 'VideoName3'
video3.url = 'VideoURL3'

video4 = video()
video4.name = 'VideoName4'
video4.url = 'VideoURL4'

topic2.addVideo(video3)
topic2.addVideo(video4)

topicsList = []
topicsList.append(topic1)
topicsList.append(topic2)

for topicCurrent in topicsList:
    print(topicCurrent.topicName)
    for video in topicCurrent.getListOfVideo():
        print(video.name)
        print(video.url)
```

What I ***expect*** to get is this:

> topic1
>
> VideoName1
>
> VideoURL1
>
> VideoName2
>
> VideoURL2
>
> topic2
>
> VideoName3
>
> VideoURL3
>
> VideoName4
>
> VideoURL4

but what I ***actually*** get is this:

> topic1
>
> VideoName1
>
> VideoURL1
>
> VideoName2
>
> VideoURL2
>
> VideoName3
>
> VideoURL3
>
> VideoName4
>
> VideoURL4
>
> topic2
>
> VideoName1
>
> VideoURL1
>
> VideoName2
>
> VideoURL2
>
> VideoName3
>
> VideoURL3
>
> VideoName4
>
> VideoURL4

Why? I want to iterate over my list of topics and print out each video in each topic, but for each topic it prints out all videos??? What is going on here?
You have created `class variables` instead of `instance variables`, which are different for each instance object. Define your class as follows:

```
class topic(object):
    def __init__(self):
        self.topicName = ''
        self.listOfVideo = []

    def addVideo(self,videoToAdd):
        self.listOfVideo.append(videoToAdd)

    def getTopic(self):
        return self.topicName

    def getListOfVideo(self):
        return self.listOfVideo
```

From [Python Tutorial](https://docs.python.org/2/tutorial/classes.html#class-and-instance-variables):

> Instance variables are for data unique to each instance and class
> variables are for attributes and methods shared by all instances of
> the class.

**EDIT:**

One more important thing to consider is why only `listOfVideo` was common for all instances but not `topicName`. It is because `list`s are mutable objects while `string`s are immutable. So any changes made to `listOfVideo` are common for all instances, i.e., they still refer to the `listOfVideo` defined in the `topic` namespace. However, when you do `topic1.topicName = 'topic1'`, you create a new variable `topicName` within the `topic1` namespace, which overrides the `topicName` found in the `topic` (class) namespace. You can confirm it by printing the value of `topic.topicName`, which you will find to be an empty string, i.e., `''`.
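The mutable-vs-immutable distinction above can be demonstrated in a few lines. This is a minimal, self-contained sketch (a stripped-down hypothetical `Topic` class with just the two attributes):

```python
class Topic(object):
    topic_name = ''       # class variable holding an immutable str
    list_of_video = []    # class variable holding a mutable list - shared!


t1, t2 = Topic(), Topic()

t1.list_of_video.append('video1')   # mutates the one shared list in place
assert t2.list_of_video == ['video1']
assert Topic.list_of_video == ['video1']

t1.topic_name = 'topic1'            # assignment creates a NEW instance attribute
assert t2.topic_name == ''          # the class attribute is untouched
assert Topic.topic_name == ''
```

So appending reaches through to the shared class-level list, while plain assignment shadows the class attribute on the instance - exactly the asymmetry seen in the question.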
Simple function that returns a number incremented by 1 on each call, without globals?
38,455,353
8
2016-07-19T10:03:32Z
38,455,715
11
2016-07-19T10:18:33Z
[ "python", "increment" ]
I am trying to write a python function that on the first call, returns a 1. On the second call, returns a 2. On the third, a 3. Etc. Currently, I have achieved this using a global variable: ``` index = 0 def foo(): global index index += 1 return index ``` When calling the function three times: ``` print(foo()) print(foo()) print(foo()) ``` It returns the values expected: ``` 1 2 3 ``` But, I've read that it is bad practice to use global variables. So, I was wondering if the same result could be achieved without using globals. Any suggestion? Thank you for your help.
Using a closure: ``` def make_inc(): val = [0] def inc(): val[0] += 1 return val[0] return inc inc = make_inc() print inc() print inc() print inc() ``` Using a class (the most obvious solution in an OOPL ): ``` class Inc(object): def __init__(self): self._val = 0 def __call__(self): self._val += 1 return self._val inc = Inc() print inc() print inc() print inc() ``` Using a generator (not directly callable, you'll have to use the `.next()` method): ``` def incgen(): val = 0 while True: val += 1 yield val inc = incgen() print inc.next() print inc.next() print inc.next() ```
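On Python 3, the closure version can drop the `val = [0]` list trick by using `nonlocal` (a sketch, not part of the original answer), and the generator version is driven with the builtin `next(inc)` rather than `inc.next()`:

```python
def make_inc():
    val = 0

    def inc():
        nonlocal val   # rebind the enclosing variable directly
        val += 1
        return val

    return inc


inc = make_inc()
assert (inc(), inc(), inc()) == (1, 2, 3)
```

Each call to `make_inc()` produces an independent counter, since every call creates a fresh `val` cell for the inner function to close over.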
Merging multiple dataframes on column
38,467,193
3
2016-07-19T19:44:12Z
38,467,413
7
2016-07-19T19:57:14Z
[ "python", "pandas", "dataframe" ]
I am trying to merge/join multiple `Dataframe`s and so far I have no luck. I've found the `merge` method, but it works only with two Dataframes. I also found this SO [answer](http://stackoverflow.com/a/23671390/1080517) suggesting to do something like this:

```
df1.merge(df2,on='name').merge(df3,on='name')
```

Unfortunately it will not work in my case, because I have 20+ dataframes.

My next idea was to use `join`. According to the reference, when joining multiple dataframes I need to use a list, and I can only join on the index column. So I changed the indexes for all of the dataframes (ok, it can be done programmatically easily) and ended up with something like this:

```
df.join([df1,df2,df3])
```

Unfortunately, this approach also failed, because the other column names are the same in all dataframes.

I decided to do the last thing, that is, renaming all columns. But when I finally joined everything:

```
df = pd.DataFrame()
df.join([df1,df2,df3])
```

I received an empty dataframe. I have no more ideas on how to join them. Can someone suggest anything else?

**EDIT1:**

Sample input:

```
import pandas as pd
import numpy as np

df1 = pd.DataFrame(np.array([
    ['a', 5, 19],
    ['b', 14, 16],
    ['c', 4, 9]]),
    columns=['name', 'attr1', 'attr2'])

df2 = pd.DataFrame(np.array([
    ['a', 15, 49],
    ['b', 4, 36],
    ['c', 14, 9]]),
    columns=['name', 'attr1', 'attr2'])

df1
  name attr1 attr2
0    a     5    19
1    b    14    16
2    c     4     9

df2
  name attr1 attr2
0    a    15    49
1    b     4    36
2    c    14     9
```

Expected output:

```
df
  name attr1_1 attr2_1 attr1_2 attr2_2
0    a       5      19      15      49
1    b      14      16       4      36
2    c       4       9      14       9
```

Indexes might be unordered between dataframes, but it is guaranteed that they will exist.
use `pd.concat` ``` dflist = [df1, df2] keys = ["%d" % i for i in range(1, len(dflist) + 1)] merged = pd.concat([df.set_index('name') for df in dflist], axis=1, keys=keys) merged.columns = merged.swaplevel(0, 1, 1).columns.to_series().str.join('_') merged ``` [![enter image description here](http://i.stack.imgur.com/EukWx.png)](http://i.stack.imgur.com/EukWx.png) Or ``` merged.reset_index() ``` [![enter image description here](http://i.stack.imgur.com/u9Bz3.png)](http://i.stack.imgur.com/u9Bz3.png)
Summing values in a Dictionary of Lists
38,470,125
6
2016-07-19T23:29:32Z
38,470,263
8
2016-07-19T23:47:30Z
[ "python", "list", "pandas", "dictionary" ]
I have a dictionary `dictData` that has been created from 3 columns (0, 3 and 4) of a csv file, where each key is a datetime object and each value is a list containing two numbers (let's call them a and b, so the list is [a,b]) stored as strings:

```
import csv
import datetime as dt

with open(fileInput,'r') as inFile:
    csv_in = csv.reader(inFile)
    dictData = {(dt.datetime.strptime(rows[0],'%d/%m/%Y %H:%M')):[rows[3],rows[4]] for rows in csv_in}
```

I want to do two things: Firstly, I want to sum each of the values in the list (i.e. sum all the a values, then sum all the b values) for the whole dictionary. If it was a dictionary of single values, I would do something like this:

```
total = sum((float(x) for x in dictData.values()))
```

1. How do I change this so that `.values` identifies the first (or second) item in the list? (i.e. the a or b values)
2. I want to count all the zero values for the first item in the list.
### Setup ``` dictData = {'2010': ['1', '2'], '2011': ['4', '3'], '2012': ['0', '45'], '2013': ['8', '7'], '2014': ['9', '0'], '2015': ['22', '1'], '2016': ['3', '4'], '2017': ['0', '5'], '2018': ['7', '8'], '2019': ['0', '9'], } print 'sum of 1st items = %d' % sum([float(v[0]) for v in dictData.values()]) print 'sum of 2nd items = %d' % sum([float(v[1]) for v in dictData.values()]) print 'count of zeros = %d' % sum([(float(v[0]) == 0) for v in dictData.values()]) sum of 1st items = 54 sum of 2nd items = 84 count of zeros = 3 ```
why is a sum of strings converted to floats
38,470,550
14
2016-07-20T00:27:31Z
38,470,963
15
2016-07-20T01:36:39Z
[ "python", "pandas" ]
### Setup consider the following dataframe (note the strings): ``` df = pd.DataFrame([['3', '11'], ['0', '2']], columns=list('AB')) df ``` [![enter image description here](http://i.stack.imgur.com/iEQcv.png)](http://i.stack.imgur.com/iEQcv.png) ``` df.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 2 entries, 0 to 1 Data columns (total 2 columns): A 2 non-null object B 2 non-null object dtypes: object(2) memory usage: 104.0+ bytes ``` ### Question I'm going to sum. I expect the strings to be concatenated. ``` df.sum() A 30.0 B 112.0 dtype: float64 ``` It looks as though the strings were concatenated then converted to float. Is there a good reason for this? Is this a bug? Anything enlightening will be up voted.
Went with the good old stack trace. Learned a bit about pdb through Pycharm as well. Turns out what happens is the following: 1) ``` cls.sum = _make_stat_function( 'sum', name, name2, axis_descr, 'Return the sum of the values for the requested axis', nanops.nansum) ``` Let's have a look at `_make_stat_function` 2) ``` def _make_stat_function(name, name1, name2, axis_descr, desc, f): @Substitution(outname=name, desc=desc, name1=name1, name2=name2, axis_descr=axis_descr) @Appender(_num_doc) def stat_func(self, axis=None, skipna=None, level=None, numeric_only=None, **kwargs): _validate_kwargs(name, kwargs, 'out', 'dtype') if skipna is None: skipna = True if axis is None: axis = self._stat_axis_number if level is not None: return self._agg_by_level(name, axis=axis, level=level, skipna=skipna) return self._reduce(f, name, axis=axis, skipna=skipna, numeric_only=numeric_only) ``` The last line is key. It's kind of funny, as there are about 7 different `_reduces` within `pandas.core`. pdb says it's the one in `pandas.core.frame`. Let's take a look. 3) ``` def _reduce(self, op, name, axis=0, skipna=True, numeric_only=None, filter_type=None, **kwds): axis = self._get_axis_number(axis) def f(x): return op(x, axis=axis, skipna=skipna, **kwds) labels = self._get_agg_axis(axis) # exclude timedelta/datetime unless we are uniform types if axis == 1 and self._is_mixed_type and self._is_datelike_mixed_type: numeric_only = True if numeric_only is None: try: values = self.values result = f(values) except Exception as e: # try by-column first if filter_type is None and axis == 0: try: # this can end up with a non-reduction # but not always. 
if the types are mixed # with datelike then need to make sure a series result = self.apply(f, reduce=False) if result.ndim == self.ndim: result = result.iloc[0] return result except: pass if filter_type is None or filter_type == 'numeric': data = self._get_numeric_data() elif filter_type == 'bool': data = self._get_bool_data() else: # pragma: no cover e = NotImplementedError("Handling exception with filter_" "type %s not implemented." % filter_type) raise_with_traceback(e) result = f(data.values) labels = data._get_agg_axis(axis) else: if numeric_only: if filter_type is None or filter_type == 'numeric': data = self._get_numeric_data() elif filter_type == 'bool': data = self._get_bool_data() else: # pragma: no cover msg = ("Generating numeric_only data with filter_type %s" "not supported." % filter_type) raise NotImplementedError(msg) values = data.values labels = data._get_agg_axis(axis) else: values = self.values result = f(values) if hasattr(result, 'dtype') and is_object_dtype(result.dtype): try: if filter_type is None or filter_type == 'numeric': result = result.astype(np.float64) elif filter_type == 'bool' and notnull(result).all(): result = result.astype(np.bool_) except (ValueError, TypeError): # try to coerce to the original dtypes item by item if we can if axis == 0: result = com._coerce_to_dtypes(result, self.dtypes) return Series(result, index=labels) ``` Holy smokes, talk about an out of control function. Someone needs a refactoring! Let's zoom in on the trouble line(s): ``` if hasattr(result, 'dtype') and is_object_dtype(result.dtype): try: if filter_type is None or filter_type == 'numeric': result = result.astype(np.float64) ``` And you better believe that last line gets executed. 
Here's some of the pdb trace: ``` > c:\users\matthew\anaconda2\lib\site-packages\pandas\core\frame.py(4801)_reduce() -> result = result.astype(np.float64) (Pdb) l 4796 result = f(values) 4797 4798 if hasattr(result, 'dtype') and is_object_dtype(result.dtype): 4799 try: 4800 if filter_type is None or filter_type == 'numeric': 4801 -> result = result.astype(np.float64) 4802 elif filter_type == 'bool' and notnull(result).all(): 4803 result = result.astype(np.bool_) 4804 except (ValueError, TypeError): 4805 4806 # try to coerce to the original dtypes item by item if we can ``` If you're a non-believer, open up pandas.core.frame.py and put a `print "OI"` right above line 4801. It should splat out to console :). Note I'm on Anaconda 2, windows. I'm going to go with "bug", to answer your question.
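The observed result can actually be reproduced without pandas at all: the reduction over a string column concatenates, and the `result.astype(np.float64)` step then parses that concatenation. A pure-Python stand-in sketch of the mechanism:

```python
# the two string columns from the question
col_a = ['3', '0']
col_b = ['11', '2']

# summing strings concatenates them...
summed_a = ''.join(col_a)   # '30'
summed_b = ''.join(col_b)   # '112'

# ...and the float64 coercion in _reduce turns that into the numbers seen above
assert float(summed_a) == 30.0
assert float(summed_b) == 112.0
```

That is exactly why `df.sum()` reported `A 30.0` and `B 112.0` instead of raising or returning the concatenated strings.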
How to import a gzip file larger than RAM limit into a Pandas DataFrame? "Kill 9" Use HDF5?
38,472,387
4
2016-07-20T04:34:18Z
38,472,574
8
2016-07-20T04:52:59Z
[ "python", "pandas", "dataframe", "gzip", "hdf5" ]
I have a `gzip` file which is approximately 90 GB. This is well within disk space, but far larger than RAM.

How can I import this into a pandas dataframe? I tried the following in the command line:

```
# start with Python 3.4.5
import pandas as pd
filename = 'filename.gzip'   # size 90 GB
df = pd.read_table(filename, compression='gzip')
```

However, after several minutes, Python shuts down with `Kill 9`.

After defining the dataframe object `df`, I was planning to save it into HDF5.

What is the correct way to do this? How can I use `pandas.read_table()` to do this?
I'd do it this way: ``` filename = 'filename.gzip' # size 90 GB hdf_fn = 'result.h5' hdf_key = 'my_huge_df' cols = ['colA','colB','colC','ColZ'] # put here a list of all your columns cols_to_index = ['colA','colZ'] # put here the list of YOUR columns, that you want to index chunksize = 10**6 # you may want to adjust it ... store = pd.HDFStore(hdf_fn) for chunk in pd.read_table(filename, compression='gzip', header=None, names=cols, chunksize=chunksize): # don't index data columns in each iteration - we'll do it later store.append(hdf_key, chunk, data_columns=cols_to_index, index=False) # index data columns in HDFStore store.create_table_index(hdf_key, columns=cols_to_index, optlevel=9, kind='full') store.close() ```
Pandas rolling computations for printing elements in the window
38,473,205
8
2016-07-20T05:49:32Z
38,473,356
9
2016-07-20T05:58:59Z
[ "python", "pandas", "dataframe" ]
I want to make a series out of the values in a column of pandas dataframe in a sliding window fashion. For instance, if this is my dataframe ``` state 0 1 1 1 2 1 3 1 4 0 5 0 6 0 7 1 8 4 9 1 ``` for a window size of say 3, I want to get a list as [111, 111, 110, 100, 000...] I am looking for an efficient way to do this (Of course, trivially I can convert *state* into a list and then slide the list indices). Is there a way to use pandas rolling computations here? Can I somehow print the elements in a rolling window?
``` a = np.array([100, 10, 1]) s.rolling(3).apply(a.dot).apply('{:03.0f}'.format) 0 nan 1 nan 2 111 3 111 4 110 5 100 6 000 7 001 8 014 9 141 Name: state, dtype: object ``` thx @Alex for reminding me to use `dot`
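Since the answer assumes `s` is the `state` column from the question, here is a self-contained sketch of the same idea, dropping the leading NaNs that the incomplete first windows produce:

```python
import numpy as np
import pandas as pd

s = pd.Series([1, 1, 1, 1, 0, 0, 0, 1, 4, 1], name='state')
a = np.array([100, 10, 1])   # place values: a window [x, y, z] becomes the number xyz

out = s.rolling(3).apply(a.dot).dropna().apply('{:03.0f}'.format)
assert list(out) == ['111', '111', '110', '100', '000', '001', '014', '141']
```

The dot product with powers of 10 is what collapses each 3-element window into a single "digits" value, and the `'{:03.0f}'` format pads results like `0` back out to `'000'`.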
django-debug-toolbar breaking on admin while getting sql stats
38,479,063
19
2016-07-20T10:43:45Z
38,479,670
34
2016-07-20T11:12:25Z
[ "python", "django", "django-admin", "django-debug-toolbar" ]
Environment: django-debug-toolbar breaks while generating SQL stats; it works fine on the other pages, breaking only on the pages which have SQL queries.

```
Request Method: GET
Request URL: http://www.blog.local/admin/
Django Version: 1.9.7
Python Version: 2.7.6
Installed Applications:
[ ....
 'django.contrib.staticfiles',
 'debug_toolbar']
Installed Middleware:
[ ...
 'debug_toolbar.middleware.DebugToolbarMiddleware']

Traceback:
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
  235. response = middleware_method(request, response)
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/debug_toolbar/middleware.py" in process_response
  129. panel.generate_stats(request, response)
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/debug_toolbar/panels/sql/panel.py" in generate_stats
  192. query['sql'] = reformat_sql(query['sql'])
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/debug_toolbar/panels/sql/utils.py" in reformat_sql
  27. return swap_fields(''.join(stack.run(sql)))
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/sqlparse/engine/filter_stack.py" in run
  29. stream = filter_.process(stream)

Exception Type: TypeError at /admin/
Exception Value: process() takes exactly 3 arguments (2 given)
```
The latest version of sqlparse was released today, and it's not compatible with django-debug-toolbar 1.4 / Django 1.9.

The workaround is to force pip to install `sqlparse==0.1.19`.
How to natively increment a dictionary element's value?
38,486,601
5
2016-07-20T17:02:31Z
38,486,696
11
2016-07-20T17:07:32Z
[ "python", "python-3.x", "dictionary" ]
When working with Python 3 dictionaries, I keep having to do something like this: ``` d=dict() if 'k' in d: d['k']+=1 else: d['k']=0 ``` I seem to remember there being a native way to do this, but was looking through the documentation and couldn't find it. Do you know what this is?
This is the use case for [`collections.defaultdict`](https://docs.python.org/3/library/collections.html#collections.defaultdict), here simply using the `int` callable for the default factory. ``` >>> from collections import defaultdict >>> d = defaultdict(int) >>> d defaultdict(<class 'int'>, {}) >>> d['k'] +=1 >>> d defaultdict(<class 'int'>, {'k': 1}) ``` A `defaultdict` is configured to create items whenever a missing key is searched. You provide it with a callable (here `int()`) which it uses to produce a default value whenever the lookup with `__getitem__` is passed a key that does not exist. This callable is stored in an instance attribute called `default_factory`. If you don't provide a `default_factory`, you'll get a `KeyError` as per usual for missing keys. Then suppose you wanted a different default value, perhaps 1 instead of 0. You would simply have to pass a callable that provides your desired starting value, in this case very trivially ``` >>> d = defaultdict(lambda: 1) ``` This could obviously also be any regular named function. --- It's worth noting however that if in your case you are attempting to just use a dictionary to store the count of particular values, a [`collections.Counter`](https://docs.python.org/3/library/collections.html#collections.Counter) is more suitable for the job. ``` >>> from collections import Counter >>> Counter('kangaroo') Counter({'a': 2, 'o': 2, 'n': 1, 'r': 1, 'k': 1, 'g': 1}) ```
Is there a pythonic way to process tree-structured dict keys?
38,489,772
8
2016-07-20T19:59:45Z
38,489,875
11
2016-07-20T20:05:45Z
[ "python", "dictionary", "tree" ]
I'm looking for a pythonic idiom to turn a list of keys and a value into a dict with those keys nested. For example: ``` dtree(["a", "b", "c"]) = 42 or dtree("a/b/c".split(sep='/')) = 42 ``` would return the nested dict: ``` {"a": {"b": {"c": 42}}} ``` This could be used to turn a set of values with hierarchical keys into a tree: ``` dtree({ "a/b/c": 10, "a/b/d": 20, "a/e": "foo", "a/f": False, "g": 30 }) would result in: { "a": { "b": { "c": 10, "d": 20 }, "e": foo", "f": False }, "g": 30 } ``` I could write some FORTRANish code to do the conversion using brute force and multiple loops and maybe `collections.defaultdict`, but it seems like a language with splits and joins and slices and comprehensions should have a primitive that turns a list of strings `["a","b","c"]` into nested dict keys `["a"]["b"]["c"]`. What is the shortest way to do this without using `eval` on a dict expression string?
> I'm looking for a pythonic idiom to turn a list of keys and a value into a dict with those keys nested. ``` reduce(lambda v, k: {k: v}, reversed("a/b/c".split("/")), 42) ``` > This could be used to turn a set of values with hierarchical keys into a tree ``` def hdict(keys, value, sep="/"): return reduce(lambda v, k: {k: v}, reversed(keys.split(sep)), value) def merge_dict(trg, src): for k, v in src.items(): if k in trg: merge_dict(trg[k], v) else: trg[k] = v def hdict_from_dict(src): result = {} for sub_hdict in map(lambda kv: hdict(*kv), src.items()): merge_dict(result, sub_hdict) return result data = { "a/b/c": 10, "a/b/d": 20, "a/e": "foo", "a/f": False, "g": 30 } print(hdict_from_dict(data)) ``` ### Another overall solution using `collections.defaultdict` ``` import collections def recursive_dict(): return collections.defaultdict(recursive_dict) def dtree(inp): result = recursive_dict() for keys, value in zip(map(lambda s: s.split("/"), inp), inp.values()): reduce(lambda d, k: d[k], keys[:-1], result)[keys[-1]] = value return result import json print(json.dumps(dtree({ "a/b/c": 10, "a/b/d": 20, "a/e": "foo", "a/f": False, "g": 30 }), indent=4)) ```
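Note that on Python 3, `reduce` has to be imported from `functools`. A self-contained version of the first one-liner, wrapped in a function (same fold, just named for reuse):

```python
from functools import reduce


def dtree(path, value, sep='/'):
    # fold right-to-left: 42 -> {'c': 42} -> {'b': {'c': 42}} -> {'a': {'b': {'c': 42}}}
    return reduce(lambda v, k: {k: v}, reversed(path.split(sep)), value)


assert dtree('a/b/c', 42) == {'a': {'b': {'c': 42}}}
```

Each step of the fold wraps the accumulated value in one more single-key dict, starting from the innermost value and working outward through the reversed key list.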
Comparing rows of two pandas dataframes?
38,493,795
4
2016-07-21T02:17:44Z
38,494,149
7
2016-07-21T02:59:47Z
[ "python", "pandas", "numpy", "dataframe" ]
This is a continuation of my question. [Fastest way to compare rows of two pandas dataframes?](http://stackoverflow.com/questions/38267763/fastest-way-to-compare-rows-of-two-pandas-dataframes/38270174#38270174)

I have two dataframes `A` and `B`:

`A` is 1000 rows x 500 columns, filled with binary values indicating either presence or absence. For a condensed example:

```
   A  B  C  D  E
0  0  0  0  1  0
1  1  1  1  1  0
2  1  0  0  1  1
3  0  1  1  1  0
```

`B` is 1024 rows x 10 columns, and is a full iteration from 0 to 1023 in binary form. Example:

```
   0  1  2
0  0  0  0
1  0  0  1
2  0  1  0
3  0  1  1
4  1  0  0
5  1  0  1
6  1  1  0
7  1  1  1
```

I am trying to find which rows in `A`, at a particular set of 10 columns of `A`, correspond with each row of `B`. Each row of `A[My_Columns_List]` is guaranteed to be somewhere in `B`, but not every row of `B` will match up with a row in `A[My_Columns_List]`.

For example, I want to show that for columns `[B,D,E]` of `A`, rows `[1,3]` of A match up with row `[6]` of `B`, row `[0]` of A matches up with row `[2]` of `B`, row `[2]` of A matches up with row `[3]` of `B`.

I have tried using:

```
pd.merge(B.reset_index(), A.reset_index(),
         left_on = B.columns.tolist(),
         right_on = A.columns[My_Columns_List].tolist(),
         suffixes = ('_B','_A'))
```

This works, but I was hoping that this method would be faster:

```
S = 2**np.arange(10)
A_ID = np.dot(A[My_Columns_List],S)
B_ID = np.dot(B,S)
out_row_idx = np.where(np.in1d(A_ID,B_ID))[0]
```

But when I do this, `out_row_idx` returns an array containing all the indices of `A`, which doesn't tell me anything.

I think this method will be faster, but I don't know why it returns an array from 0 to 999.

Any input would be appreciated! Also, credit goes to @jezrael and @Divakar for these methods.
I'll stick by my initial answer but maybe explain better.

You are asking to compare 2 pandas dataframes. Because of that, I'm going to build dataframes. I may use numpy, but my inputs and outputs will be dataframes.

### Setup

You said we have a 1000 x 500 array of ones and zeros. Let's build that.

```
A_init = pd.DataFrame(np.random.binomial(1, .5, (1000, 500)))
A_init.columns = pd.MultiIndex.from_product([range(A_init.shape[1]/10), range(10)])
A = A_init
```

In addition, I gave `A` a `MultiIndex` to easily group by columns of 10.

### Solution

This is very similar to @Divakar's answer with one minor difference that I'll point out.

For one group of 10 ones and zeros, we can treat it as a bit array of length 10. We can then calculate what its integer value is by taking the dot product with an array of powers of 2.

```
twos = 2 ** np.arange(10)
```

I can execute this for every group of 10 ones and zeros in one go like this

```
AtB = A.stack(0).dot(twos).unstack()
```

I `stack` to get a row of 50 groups of 10 into columns in order to do the dot product more elegantly. I then brought it back with the `unstack`.

I now have a 1000 x 50 dataframe of numbers that range from 0-1023.

Assume `B` is a dataframe with each row one of 1024 unique combinations of ones and zeros. `B` should be sorted like `B = B.sort_values().reset_index(drop=True)`.

This is the part I think I failed at explaining last time. Look at

```
AtB.loc[:2, :2]
```

[![enter image description here](http://i.stack.imgur.com/tbnj8.png)](http://i.stack.imgur.com/tbnj8.png)

That value in the `(0, 0)` position, `951`, means that the first group of 10 ones and zeros in the first row of `A` matches the row in `B` with the index `951`. That's what you want!!! Funny thing is, I never looked at B. You know why? B is irrelevant!!! It's just a goofy way of representing the numbers from 0 to 1023. This is the difference with my answer, I'm ignoring `B`. Ignoring this useless step should save time.
These are all functions that take two dataframes `A` and `B` and return a dataframe of indices where `A` matches `B`. Spoiler alert, I'll ignore `B` completely.

```
def FindAinB(A, B):
    assert A.shape[1] % 10 == 0, 'Number of columns in A is not a multiple of 10'

    rng = np.arange(A.shape[1])
    A.columns = pd.MultiIndex.from_product([range(A.shape[1] // 10), range(10)])
    twos = 2 ** np.arange(10)
    return A.stack(0).dot(twos).unstack()
```

---

```
def FindAinB2(A, B):
    assert A.shape[1] % 10 == 0, 'Number of columns in A is not a multiple of 10'

    rng = np.arange(A.shape[1])
    A.columns = pd.MultiIndex.from_product([range(A.shape[1] // 10), range(10)])
    # use clever bit shifting instead of dot product with powers
    # questionable improvement
    return (A.stack(0) << np.arange(10)).sum(1).unstack()
```

---

I'm channelling my inner @Divakar (read, this is stuff I've learned from Divakar)

```
def FindAinB3(A, B):
    assert A.shape[1] % 10 == 0, 'Number of columns in A is not a multiple of 10'
    a = A.values.reshape(-1, 10)
    a = np.einsum('ij->i', a << np.arange(10))
    return pd.DataFrame(a.reshape(A.shape[0], -1), A.index)
```

---

### Minimalist One Liner

```
f = lambda A: pd.DataFrame(np.einsum('ij->i', A.values.reshape(-1, 10) << np.arange(10)).reshape(A.shape[0], -1), A.index)
```

Use it like

```
f(A)
```

---

### Timing

FindAinB3 is an order of magnitude faster

[![enter image description here](http://i.stack.imgur.com/eh0uF.png)](http://i.stack.imgur.com/eh0uF.png)
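The dot-product / bit-shift trick that all of these functions rely on can be sanity-checked without pandas or numpy at all. Here is a pure-Python sketch of the same packing; the sample row reproduces the `951` seen in the `AtB` screenshot:

```python
def pack_bits(bits):
    """Pack a group of 0/1 values into the integer they represent.

    bits[0] is the least-significant bit, matching the dot product
    with 2 ** np.arange(10) used in the answer.
    """
    return sum(b << i for i, b in enumerate(bits))

row = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
print(pack_bits(row))  # 951, the value shown at position (0, 0)
```

Because every distinct group of 10 bits maps to a distinct integer in 0-1023, the packed value alone identifies the matching row of `B`.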
Change the color of text within a pandas dataframe html table python using styles and css
38,511,373
6
2016-07-21T18:06:19Z
38,511,805
10
2016-07-21T18:28:57Z
[ "python", "html", "css", "pandas", "dataframe" ]
I have a pandas dataframe: ``` arrays = [['Midland','Midland','Hereford','Hereford','Hobbs','Hobbs','Childress','Childress','Reese','Reese', 'San Angelo','San Angelo'],['WRF','MOS','WRF','MOS','WRF','MOS','WRF','MOS','WRF','MOS','WRF','MOS']] tuples = list(zip(*arrays)) index = pd.MultiIndex.from_tuples(tuples) df = pd.DataFrame(np.random.randn(12,4),index=arrays,columns=['00 UTC','06 UTC','12 UTC','18 UTC']) df ``` The table that prints from this looks like this:[![enter image description here](http://i.stack.imgur.com/FbsI6.png)](http://i.stack.imgur.com/FbsI6.png) I would like to be able to format this table better. Specifically, I would like to color all of the values in the 'MOS' rows a certain color and color the left two index/header columns as well as the top header row a different background color than the rest of the cells with values in them. Any ideas how I can do this?
This takes a few steps: First import `HTML` and `re` ``` from IPython.display import HTML import re ``` You can get at the `html` pandas puts out via the `to_html` method. ``` df_html = df.to_html() ``` Next we are going to generate a random identifier for the html table and style we are going to create. ``` random_id = 'id%d' % np.random.choice(np.arange(1000000)) ``` Because we are going to insert some style, we need to be careful to specify that this style will only be for our table. Now let's insert this into the `df_html` ``` df_html = re.sub(r'<table', r'<table id=%s ' % random_id, df_html) ``` And create a style tag. This is really up to you. I just added some hover effect. ``` style = """ <style> table#{random_id} tr:hover {{background-color: #f5f5f5}} </style> """.format(random_id=random_id) ``` Finally, display it ``` HTML(style + df_html) ``` ### Function all in one. ``` def HTML_with_style(df, style=None, random_id=None): from IPython.display import HTML import numpy as np import re df_html = df.to_html() if random_id is None: random_id = 'id%d' % np.random.choice(np.arange(1000000)) if style is None: style = """ <style> table#{random_id} {{color: blue}} </style> """.format(random_id=random_id) else: new_style = [] s = re.sub(r'</?style>', '', style).strip() for line in s.split('\n'): line = line.strip() if not re.match(r'^table', line): line = re.sub(r'^', 'table ', line) new_style.append(line) new_style = ['<style>'] + new_style + ['</style>'] style = re.sub(r'table(#\S+)?', 'table#%s' % random_id, '\n'.join(new_style)) df_html = re.sub(r'<table', r'<table id=%s ' % random_id, df_html) return HTML(style + df_html) ``` Use it like this: ``` HTML_with_style(df.head()) ``` [![enter image description here](http://i.stack.imgur.com/iSiyr.png)](http://i.stack.imgur.com/iSiyr.png) ``` HTML_with_style(df.head(), '<style>table {color: red}</style>') ``` [![enter image description here](http://i.stack.imgur.com/LfFrq.png)](http://i.stack.imgur.com/LfFrq.png) 
``` style = """ <style> tr:nth-child(even) {color: green;} tr:nth-child(odd) {color: aqua;} </style> """ HTML_with_style(df.head(), style) ``` [![enter image description here](http://i.stack.imgur.com/a95aW.png)](http://i.stack.imgur.com/a95aW.png) Learn CSS and go nuts!
Why does jupyter display "None not found"?
38,517,887
10
2016-07-22T03:53:26Z
39,225,574
12
2016-08-30T10:53:59Z
[ "python", "kernel", "anaconda", "jupyter", "jupyter-notebook" ]
I am trying to use jupyter to write and edit python code. I have a .ipynb file open, but I see "None not found" in the upper right hand corner and I can't execute any of the code that I write. What's so bizarre is that I'll open other .ipynb files and have no problem. Additionally, when I click on the red "None not found" icon, I'll get the message "The 'None' kernel is not available. Please pick another suitable kernel instead, or install that kernel." I have Python 3.5.2 installed. I suspect the problem is that jupyter is not detecting the Python 3 kernel? It displays "Python[root]" where it should say "Python 3." Does anyone know how to get this fixed? [Screenshot of working code](http://i.stack.imgur.com/QCQcM.png) [Screenshot "None not found"](http://i.stack.imgur.com/X7vfk.png)
I had the same problem. The solution for me was:

1. in the menu, go to Kernel -> Change kernel and choose Python [Root] (or the kernel you want),
2. save the file,
3. close it,
4. reopen it.
Why and how are Python functions hashable?
38,518,849
38
2016-07-22T05:30:35Z
38,518,893
37
2016-07-22T05:34:56Z
[ "python", "hash" ]
I recently tried the following commands in Python: ``` >>> {lambda x: 1: 'a'} {<function __main__.<lambda>>: 'a'} >>> def p(x): return 1 >>> {p: 'a'} {<function __main__.p>: 'a'} ``` The success of both `dict` creations indicates that both lambda and regular functions are hashable. (Something like `{[]: 'a'}` fails with `TypeError: unhashable type: 'list'`). The hash is apparently not necessarily the ID of the function: ``` >>> m = lambda x: 1 >>> id(m) 140643045241584 >>> hash(m) 8790190327599 >>> m.__hash__() 8790190327599 ``` The last command shows that the `__hash__` method is explicitly defined for `lambda`s, i.e., this is not some automagical thing Python computes based on the type. What is the motivation behind making functions hashable? For a bonus, what is the hash of a function?
It's nothing special. As you can see if you examine the unbound `__hash__` method of the function type: ``` >>> def f(): pass ... >>> type(f).__hash__ <slot wrapper '__hash__' of 'object' objects> ``` it just inherits `__hash__` from `object`. Function `==` and `hash` work by identity. The difference between `id` and `hash` is normal for any type that inherits `object.__hash__`: ``` >>> x = object() >>> id(x) 40145072L >>> hash(x) 2509067 ``` --- You might think `__hash__` is only supposed to be defined for immutable objects, but that's not true. `__hash__` should only be defined for objects where everything involved in `==` comparisons is immutable. For objects whose `==` is based on identity, it's completely standard to base `hash` on identity as well, since even if the objects are mutable, they can't possibly be mutable in a way that would change their identity. Files, modules, and other mutable objects with identity-based `==` all behave this way.
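Since functions inherit identity-based `__hash__` and `==` from `object`, they behave as stable dictionary keys and set members; a small sketch:

```python
def f(x): return x + 1
def g(x): return x + 2

# Identity-based hashing makes functions usable as dict keys / set members.
dispatch = {f: "plus one", g: "plus two"}
print(dispatch[f])  # plus one

# The hash is stable for the object's lifetime...
assert hash(f) == hash(f)

# ...and equality is identity, not behaviour:
h = lambda x: x + 1   # same behaviour as f, different object
assert f != h
assert f in {f, g} and h not in {f, g}
```

This is exactly what makes dispatch tables and memoization caches keyed by function objects work.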
Why and how are Python functions hashable?
38,518,849
38
2016-07-22T05:30:35Z
38,519,187
20
2016-07-22T05:58:09Z
[ "python", "hash" ]
I recently tried the following commands in Python: ``` >>> {lambda x: 1: 'a'} {<function __main__.<lambda>>: 'a'} >>> def p(x): return 1 >>> {p: 'a'} {<function __main__.p>: 'a'} ``` The success of both `dict` creations indicates that both lambda and regular functions are hashable. (Something like `{[]: 'a'}` fails with `TypeError: unhashable type: 'list'`). The hash is apparently not necessarily the ID of the function: ``` >>> m = lambda x: 1 >>> id(m) 140643045241584 >>> hash(m) 8790190327599 >>> m.__hash__() 8790190327599 ``` The last command shows that the `__hash__` method is explicitly defined for `lambda`s, i.e., this is not some automagical thing Python computes based on the type. What is the motivation behind making functions hashable? For a bonus, what is the hash of a function?
It can be useful, e.g., to create sets of function objects, or to index a dict by functions. Immutable objects *normally* support `__hash__`.

In any case, there's no internal difference between a function defined by a `def` or by a `lambda` - that's purely syntactic.

The algorithm used depends on the version of Python. It looks like you're using a recent version of Python on a 64-bit box. In that case, the hash of a function object is the right rotation of its `id()` by 4 bits, with the result viewed as a signed 64-bit integer. The rotation is done because object addresses (`id()` results) are typically aligned so that their last 3 or 4 bits are always 0, and that's a mildly annoying property for a hash function.

In your specific example,

```
>>> i = 140643045241584  # your id() result
>>> (i >> 4) | ((i << 60) & 0xffffffffffffffff)  # rotate right 4 bits
8790190327599  # == your hash() result
```
Declaring decorator inside a class
38,524,332
2
2016-07-22T10:37:23Z
38,524,579
7
2016-07-22T10:49:36Z
[ "python", "python-2.7", "wrapper", "decorator", "python-decorators" ]
I'm trying to use custom wrappers/decorators in Python, and I'd like to declare one **inside** a class, so that I could for instance print a snapshot of the attributes. I've tried things from [this question](http://stackoverflow.com/q/11740626/5018771) with no success. --- Here is what I'd like to do (NB: this code doesn't work, I explain what happens below) ``` class TestWrapper(): def __init__(self, a, b): self.a = a self.b = b self.c = 0 def enter_exit_info(self, func): def wrapper(*arg, **kw): print '-- entering', func.__name__ print '-- ', self.__dict__ res = func(*arg, **kw) print '-- exiting', func.__name__ print '-- ', self.__dict__ return res return wrapper @enter_exit_info def add_in_c(self): self.c = self.a + self.b print self.c @enter_exit_info def mult_in_c(self): self.c = self.a * self.b print self.c if __name__ == '__main__': t = TestWrapper(2, 3) t.add_in_c() t.mult_in_c() ``` The expected output is : ``` -- entering add_in_c -- {'a': 2, 'b': 3, 'c': 0} 5 -- exiting add_in_c -- {'a': 2, 'b': 3, 'c': 5} -- entering mult_in_c -- {'a': 2, 'b': 3, 'c': 5} 6 -- exiting mult_in_c -- {'a': 2, 'b': 3, 'c': 6} ``` But I this code gives ``` Traceback (most recent call last): File "C:\Users\cccvag\workspace\Test\src\module2.py", line 2, in <module> class TestWrapper(): File "C:\Users\cccvag\workspace\Test\src\module2.py", line 18, in TestWrapper @enter_exit_info TypeError: enter_exit_info() takes exactly 2 arguments (1 given) ``` And if I try `@enter_exit_info(self)` or `@self.enter_exit_info`, I get a `NameError`. What could I do? --- **EDIT:** I do not need above all to have the decorator *physically* declared inside the class, as long as it is able to access attributes from an instance of this class. I thought it could only be made by declaring it inside the class, [Rawing's answer](http://stackoverflow.com/a/38524581/5018771) proved me wrong.
You will need to handle `self` explicitly. ``` class TestWrapper(): def __init__(self, a, b): self.a = a self.b = b self.c = 0 def enter_exit_info(func): def wrapper(self, *arg, **kw): print '-- entering', func.__name__ print '-- ', self.__dict__ res = func(self, *arg, **kw) print '-- exiting', func.__name__ print '-- ', self.__dict__ return res return wrapper @enter_exit_info def add_in_c(self): self.c = self.a + self.b print self.c @enter_exit_info def mult_in_c(self): self.c = self.a * self.b print self.c if __name__ == '__main__': t = TestWrapper(2, 3) t.add_in_c() t.mult_in_c() ``` This is valid python, but it's somewhat weird to have a function at the class level which is not really a method. Unless you have a good reason to do it this way, it would be more idiomatic to move the decorator to module level scope.
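One small refinement to this pattern (my addition, not part of the original answer): wrapping with `functools.wraps` keeps the decorated method's `__name__` and docstring intact, which matters for introspection and debugging. A Python 3 sketch:

```python
import functools

def enter_exit_info(func):
    @functools.wraps(func)  # copy __name__, __doc__, etc. onto the wrapper
    def wrapper(self, *args, **kwargs):
        print('-- entering', func.__name__)
        result = func(self, *args, **kwargs)
        print('-- exiting', func.__name__)
        return result
    return wrapper

class TestWrapper:
    def __init__(self, a, b):
        self.a, self.b, self.c = a, b, 0

    @enter_exit_info
    def add_in_c(self):
        self.c = self.a + self.b

t = TestWrapper(2, 3)
t.add_in_c()
print(t.c)                             # 5
print(TestWrapper.add_in_c.__name__)   # 'add_in_c', not 'wrapper'
```

Without `functools.wraps`, `TestWrapper.add_in_c.__name__` would report `'wrapper'`, which makes tracebacks and logs harder to read.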
How do I unit test a method that sets internal data, but doesn't return?
38,529,807
7
2016-07-22T15:07:36Z
38,531,286
7
2016-07-22T16:26:00Z
[ "python", "unit-testing" ]
From what I’ve read, unit test should test only one function/method at a time. But I’m not clear on how to test methods that only set internal object data with no return value to test off of, like the setvalue() method in the following Python class (and this is a simple representation of something more complicated): ``` class Alpha(object): def __init__(self): self.__internal_dict = {} def setvalue(self, key, val): self.__internal_dict[key] = val def getvalue(self, key): return self.__internal_dict[key] ``` If unit test law dictates that we should test every function, one at a time, then how do I test the setvalue() method on its own? One "solution" would be to compare what I passed into setvalue() with the return of getvalue(), but if my assert fails, I don't know which method is failing - is it setvalue() or getvalue()? Another idea would be to compare what I passed into setvalue() with the object's private data, \_\_internal\_dict[key] - a HUGE disgusting hack! As of now, this is my solution for this type of problem, but if the assert raises, that would only indicate that 1 of my 2 main methods is not properly working. ``` import pytest def test_alpha01(): alpha = Alpha() alpha.setvalue('abc', 33) expected_val = 33 result_val = alpha.getvalue('abc') assert result_val == expected_val ``` Help appreciated
## The misconception The real problem you have here is that you are working on a false premise: > If unit test law dictates that we should test every function, one at a time... This is not at all what good unit testing is about. Good unit testing is about decomposing your code into logical components, putting them into controlled environments and testing that their *actual* behaviour matches their *expected* behaviour - **from the perspective of a consumer**. Those "units" may be (depending on your environment) anonymous functions, individual classes or clusters of tightly-coupled classes (and don't let anyone tell you that class coupling is inherently bad; some classes are made to go together). The important thing to ask yourself is - *what does a consumer care about*? What they certainly *don't* care about is that - when they call a *set* method - some internal private member that they can't even access is set. ## The solution Naively, from looking at your code, it seems that what the consumer cares about is that when they call `setvalue` for a particular key, calling `getvalue` for that same key gives them back the value that they put in. If that's the intended behaviour of the unit (class), then that's what you should be testing. Nobody should care what happens behind the scenes as long as the *behaviour* is correct. However, I would also consider if that's really all that this class is for - what else does that value have an impact on? It's impossible to say from the example in your question but, whatever it is, that should be tested too. Or maybe, if that's hard to define, this class in itself isn't very meaningful and your "unit" should actually be an independent set of small classes that only really have meaningful behaviour when they're put together and should be tested as such. The balance here is subtle, though, and difficult to be less cryptic about without more context. 
# The pitfall What you certainly *shouldn't* (ever ever ever) do is have your tests poking around internal state of your objects. There are two very important reasons for this: First, as already mentioned, unit tests are about behaviour of units as perceived by a client. Testing that it does what I believe it should do as a consumer. I don't - and shouldn't - care about how it does it under the hood. That dictionary is irrelevant to me. Second, good unit tests allow you to verify behaviour while still giving you the freedom to change how that behaviour is achieved - if you tie your tests to that dictionary, it ceases to be an implementation detail and becomes part of the contract, meaning any changes to how this unit is implemented force you either to retain that dictionary or change your tests. This is a road that leads to the opposite of what unit testing is intended to achieve - painless maintenance. The bottom line is that consumers - and therefore your tests - do not care about whether `setvalue` updates an internal dictionary. Figure out what they actually care about and test that instead. As an aside, this is where TDD (specifically test-first) really comes into its own - if you state the intended behaviour with a test up-front, it's difficult to find yourself stuck in that "what am I trying to test?" rut.
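To make that concrete for the `Alpha` class in the question, behaviour-level tests look like the sketch below: they exercise the set/get round trip as a contract and never peek at the private dict (the second test is an illustrative extra):

```python
class Alpha:
    def __init__(self):
        self._data = {}          # implementation detail; tests never touch it

    def setvalue(self, key, val):
        self._data[key] = val

    def getvalue(self, key):
        return self._data[key]

def test_set_then_get_round_trips():
    alpha = Alpha()
    alpha.setvalue('abc', 33)
    assert alpha.getvalue('abc') == 33

def test_overwrite_keeps_latest_value():
    alpha = Alpha()
    alpha.setvalue('abc', 1)
    alpha.setvalue('abc', 2)
    assert alpha.getvalue('abc') == 2

# With pytest these would be collected automatically; run directly here.
test_set_then_get_round_trips()
test_overwrite_keeps_latest_value()
```

If a round-trip test fails, it does not matter whether the setter or the getter is broken: from the consumer's point of view, the unit's contract is broken either way, and that is the right granularity to report.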
How to assign member variables temporarily?
38,531,851
12
2016-07-22T17:02:28Z
38,532,086
20
2016-07-22T17:19:17Z
[ "python", "python-2.7", "python-decorators", "python-descriptors" ]
I often find that I need to assign some member variables temporarily, *e.g.* ``` old_x = c.x old_y = c.y # keep c.z unchanged c.x = new_x c.y = new_y do_something(c) c.x = old_x c.y = old_y ``` but I wish I could simply write ``` with c.x = new_x; c.y = new_y: do_something(c) ``` or even ``` do_something(c with x = new_x; y = new_y) ``` Can Python's decorators or other language features enable this kind of pattern? (I could modify `c`'s class as needed)
[Context managers](https://docs.python.org/2/reference/datamodel.html#context-managers) may be used for it easily. Quoting official docs: > Typical uses of context managers include saving and restoring various > kinds of global state, locking and unlocking resources, closing opened > files, etc. It seems like saving and restoring state is exactly what we want to do here. Example: ``` from contextlib import contextmanager @contextmanager def temporary_change_attributes(something, **kwargs): previous_values = {k: getattr(something, k) for k in kwargs} for k, v in kwargs.items(): setattr(something, k, v) try: yield finally: for k, v in previous_values.items(): setattr(something, k, v) class Something(object): def __init__(self, x, y): self.x = x self.y = y def say_hello(self): print("hello", self.x, self.y) s = Something(1, 2) s.say_hello() # hello 1 2 with temporary_change_attributes(s, x=4, y=5): s.say_hello() # hello 4 5 s.say_hello() # hello 1 2 ```
Invalid block tag. Did you forget to register or load this tag?
38,537,464
2
2016-07-23T02:06:01Z
38,537,520
7
2016-07-23T02:19:42Z
[ "python", "html", "django" ]
Getting an invalid block tag message `Invalid block tag on line 2: 'out'. Did you forget to register or load this tag?` but don't know why. Here's my setup: graphs.html ``` {% out %} ``` views.py ``` out = 'something to say' template = loader.get_template('viz_proj/graphs.html') context = { 'out' : out } return HttpResponse(template.render(context, request)) ``` settings.py ``` INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'viz_proj' ] ``` project heirarchy ``` viz_proj | viz_proj----------------------------------------templates | | settings.py--views.py--urls.py graphs.html ```
You want `{{ out }}` instead of `{% out %}`. In Django templates, `{% ... %}` is the syntax for template *tags* (like `{% for %}` or `{% load %}`), so `{% out %}` makes Django look for a registered template tag named `out`, which is why you get the "Did you forget to register or load this tag?" error. `{{ ... }}` is the syntax for rendering a context *variable*, which is what `out` is here.
Functionality of Python `in` vs. `__contains__`
38,542,543
7
2016-07-23T13:54:30Z
38,542,777
7
2016-07-23T14:19:30Z
[ "python", "contains" ]
I implemented the `__contains__` method on a class for the first time the other day, and the behavior wasn't what I expected. I suspect there's some subtlety to the [`in`](https://docs.python.org/2/library/operator.html) operator that I don't understand and I was hoping someone could enlighten me. It appears to me that the `in` operator doesn't simply wrap an object's `__contains__` method, but it also attempts to coerce the output of `__contains__` to boolean. For example, consider the class ``` class Dummy(object): def __contains__(self, val): # Don't perform comparison, just return a list as # an example. return [False, False] ``` The `in` operator and a direct call to the `__contains__` method return very different output: ``` >>> dum = Dummy() >>> 7 in dum True >>> dum.__contains__(7) [False, False] ``` Again, it looks like `in` is calling `__contains__` but then coercing the result to `bool`. I can't find this behavior documented anywhere except for the fact that the `__contains__` [documentation](https://docs.python.org/2/reference/datamodel.html#object.__contains__) says `__contains__` should only ever return `True` or `False`. I'm happy following the convention, but can someone tell me the precise relationship between `in` and `__contains__`? # Epilogue I decided to choose @eli-korvigo answer, but everyone should look at @ashwini-chaudhary [comment](http://stackoverflow.com/questions/38542543/functionality-of-python-in-vs-contains/38542777?noredirect=1#comment64477339_38542543) about the [bug](https://bugs.python.org/issue16011), below.
Use the source, Luke! Let's trace down the `in` operator implementation ``` >>> import dis >>> class test(object): ... def __contains__(self, other): ... return True >>> def in_(): ... return 1 in test() >>> dis.dis(in_) 2 0 LOAD_CONST 1 (1) 3 LOAD_GLOBAL 0 (test) 6 CALL_FUNCTION 0 (0 positional, 0 keyword pair) 9 COMPARE_OP 6 (in) 12 RETURN_VALUE ``` As you can see, the `in` operator becomes the `COMPARE_OP` virtual machine instruction. You can find that in [ceval.c](http://hg.python.org/cpython/file/tip/Python/ceval.c) ``` TARGET(COMPARE_OP) w = POP(); v = TOP(); x = cmp_outcome(oparg, v, w); Py_DECREF(v); Py_DECREF(w); SET_TOP(x); if (x == NULL) break; PREDICT(POP_JUMP_IF_FALSE); PREDICT(POP_JUMP_IF_TRUE); DISPATCH(); ``` Take a look at one of the switches in `cmp_outcome()` ``` case PyCmp_IN: res = PySequence_Contains(w, v); if (res < 0) return NULL; break; ``` Here we have the `PySequence_Contains` call ``` int PySequence_Contains(PyObject *seq, PyObject *ob) { Py_ssize_t result; PySequenceMethods *sqm = seq->ob_type->tp_as_sequence; if (sqm != NULL && sqm->sq_contains != NULL) return (*sqm->sq_contains)(seq, ob); result = _PySequence_IterSearch(seq, ob, PY_ITERSEARCH_CONTAINS); return Py_SAFE_DOWNCAST(result, Py_ssize_t, int); } ``` That always returns an `int` (a boolean). P.S. Thanks to Martijn Pieters for providing the [way](http://stackoverflow.com/a/12244378/3846213) to find the implementation of the `in` operator.
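Since `PySequence_Contains` returns an int that Python surfaces as a bool, the coercion from the question is easy to confirm directly:

```python
class Dummy:
    def __contains__(self, val):
        return [False, False]    # truthy non-bool, as in the question

dum = Dummy()
print(dum.__contains__(7))  # [False, False] -- the raw return value
print(7 in dum)             # True -- coerced to bool by the in operator
print(7 not in dum)         # False
```

A non-empty list is truthy, so `in` reports `True` even though the list is full of `False` values; this is exactly why `__contains__` should only ever return `True` or `False`.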
Removing duplicate edges from graph in Python list
38,555,385
6
2016-07-24T18:30:45Z
38,555,503
7
2016-07-24T18:41:32Z
[ "python" ]
My program returns a list of tuples, which represent the edges of a graph, in the form of: ``` [(i, (e, 130)), (e, (i, 130)), (g, (a, 65)), (g, (d, 15)), (a, (g, 65))] ``` So, (i, (e, 130)) means that 'i' is connected to 'e' and is 130 units away. Similarly, (e, (i, 130)) means that 'e' is connected to 'i' and is 130 units away. So essentially, both these tuples represent the same thing. How would I remove any one of them from this list? Desired output: ``` [(i, (e, 130)), (g, (a, 65)), (g, (d, 15))] ``` I tried writing an equals function. Would this be of any help? ``` def edge_equal(edge_tuple1, edge_tuple2): return edge_tuple1[0] == edge_tuple2[1][0] and edge_tuple2[0] == edge_tuple1[1][0] ```
If a tuple `(n1, (n2, dist))` represents a bidirectional connection, I would introduce a normalization property which constrains the ordering of the two nodes in the tuple. This way, each possible edge has exactly one unique representation. Consequently, a normalization function would map a given, potentially unnormalized, edge to the normalized variant. This function can then be used to normalize all given edges. Duplicates can now be eliminated in several ways. For instance, convert the list to a set.

```
def normalize(t):
    n1, (n2, dist) = t
    if n1 < n2:  # use a custom compare function if desired
        return t
    else:
        return (n2, (n1, dist))

edges = [('i', ('e', 130)), ('e', ('i', 130)), ('g', ('a', 65)),
         ('g', ('d', 15)), ('a', ('g', 65))]

unique_edges = set(map(normalize, edges))
# set([('e', ('i', 130)), ('d', ('g', 15)), ('a', ('g', 65))])
```

---

The normalization function can also be formulated like this:

```
def normalize((n1, (n2, dist))):
    if n1 >= n2:
        n1, n2 = n2, n1
    return n1, (n2, dist)
```

(Note that tuple parameter unpacking in a function signature only works in Python 2; it was removed in Python 3, so use the first version there.)
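A variant worth mentioning (my suggestion, beyond the answer above): key each edge by the `frozenset` of its endpoints, which is order-insensitive by construction, so no explicit normalization step is needed:

```python
edges = [('i', ('e', 130)), ('e', ('i', 130)), ('g', ('a', 65)),
         ('g', ('d', 15)), ('a', ('g', 65))]

# frozenset(('i', 'e')) == frozenset(('e', 'i')), so a reversed
# duplicate maps onto the same key and is dropped by setdefault.
unique = {}
for n1, (n2, dist) in edges:
    unique.setdefault(frozenset((n1, n2)), (n1, (n2, dist)))

print(sorted(unique.values()))
# [('g', ('a', 65)), ('g', ('d', 15)), ('i', ('e', 130))]
```

Unlike the normalized-tuple approach, this keeps whichever orientation appeared first in the input; note it assumes no self-loops, since `frozenset(('a', 'a'))` collapses to a single element.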
Subset of dictionary keys
38,565,727
4
2016-07-25T10:48:00Z
38,565,965
7
2016-07-25T10:59:44Z
[ "python", "dictionary" ]
I've got a python dictionary of the form `{'ip1:port1' : <value>, 'ip1:port2' : <value>, 'ip2:port1' : <value>, ...}`. Dictionary keys are strings, consisting of ip:port pairs. Values are not important for this task. I need a list of `ip:port` combinations with unique IP addresses, ports can be any of those that appear among original keys. For example above, two variants are acceptable: `['ip1:port1', ip2:port1']` and `['ip1:port2', ip2:port1']`. What is the most pythonic way for doing it? Currently my solution is ``` def get_uniq_worker_ips(workers): wip = set(w.split(':')[0] for w in workers.iterkeys()) return [[worker for worker in workers.iterkeys() if worker.startswith(w)][0] for w in wip] ``` I don't like it, because it creates additional lists and then discards them.
You can use [`itertools.groupby`](https://docs.python.org/3/library/itertools.html#itertools.groupby) to group by same IP addresses: ``` data = {'ip1:port1' : "value1", 'ip1:port2' : "value2", 'ip2:port1' : "value3", 'ip2:port2': "value4"} by_ip = {k: list(g) for k, g in itertools.groupby(sorted(data), key=lambda s: s.split(":")[0])} by_ip # {'ip1': ['ip1:port1', 'ip1:port2'], 'ip2': ['ip2:port1', 'ip2:port2']} ``` Then just pick any one from the different groups of IPs. ``` {v[0]: data[v[0]] for v in by_ip.values()} # {'ip1:port1': 'value1', 'ip2:port1': 'value3'} ``` Or shorter, making a generator expression for just the first key from the groups: ``` one_by_ip = (next(g) for k, g in itertools.groupby(sorted(data), key=lambda s: s.split(":")[0])) {key: data[key] for key in one_by_ip} # {'ip1:port1': 'value1', 'ip2:port1': 'value3'} ``` However, note that `groupby` requires the input data to be sorted. So if you want to avoid sorting all the keys in the dict, you should instead just use a `set` of already seen keys. ``` seen = set() not_seen = lambda x: not(x in seen or seen.add(x)) {key: data[key] for key in data if not_seen(key.split(":")[0])} # {'ip1:port1': 'value1', 'ip2:port1': 'value3'} ``` This is similar to your solution, but instead of looping the unique keys and finding a matching key in the dict for each, you loop the keys and check whether you've already seen the IP.
Pycharm import RuntimeWarning after updating to 2016.2
38,569,992
28
2016-07-25T14:08:50Z
38,724,508
29
2016-08-02T15:21:38Z
[ "python", "import", "pycharm" ]
After updating to new version 2016.2, I am getting ``` RuntimeWarning: Parent module 'tests' not found while handling absolute import import unittest RuntimeWarning: Parent module 'tests' not found while handling absolute import import datetime as dt ``` 'tests' is a package inside my main app package, and I receive these warnings when I try to execute unit tests inside this folder. This issue only came up after updating to 2016.2. Besides the warnings, the remaining code works fine. Edit: This is a known issue - <https://youtrack.jetbrains.com/issue/PY-20171>. They are suggesting to replace utrunner.py in PyCharm installation folder.
This is a known issue with the 2016.2 release. Progress can be followed on the JetBrains website [here](https://youtrack.jetbrains.com/issue/PY-20171). According to this page it's due to be fixed in the 2016.3 release but you can follow the utrunner.py workaround that others have mentioned in the meantime (I downloaded the 2016.1 release and copied the file over from there).
Should I use `__setattr__`, a property or...?
38,574,070
4
2016-07-25T17:32:41Z
38,574,167
9
2016-07-25T17:38:40Z
[ "python" ]
I have an object with two attributes, `file_path` and `save_path`. Unless `save_path` is explicitly set, I want it to have the same value as `file_path`. I *think* the way to do this is with `__setattr__`, with something like the following: ``` class Class(): ... def __setattr__(self, name, value): if name == 'file_path': self.file_path = value self.save_path = value if self.save_path == None else self.save_path elif name == 'save_path': self.save_path = value ``` But this looks like it's going to give me infinite loops since `__setattr__` is called whenever an attribute is set. So, what's the proper way to write the above and avoid that?
First, the easiest way to do this would be with a property: ``` class Class(object): def __init__(self, ...): self._save_path = None ... @property def save_path(self): if self._save_path is None: return self.file_path else: return self._save_path @save_path.setter def save_path(self, val): self._save_path = val ``` Second, if you ever find yourself needing to write a `__setattr__`, you should use `super(Class, self).__setattr__` inside your `__setattr__` to bypass your `__setattr__` and set attributes the normal way, avoiding infinite recursion.
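Filling in the elided pieces of the snippet above, a complete runnable version of the property approach might look like this (the class name and constructor signature are illustrative assumptions, not from the question):

```python
class Saveable:
    def __init__(self, file_path):
        self.file_path = file_path
        self._save_path = None            # sentinel: "not explicitly set"

    @property
    def save_path(self):
        # Fall back to file_path until save_path is set explicitly.
        return self.file_path if self._save_path is None else self._save_path

    @save_path.setter
    def save_path(self, val):
        self._save_path = val

s = Saveable('/tmp/input.txt')
print(s.save_path)        # /tmp/input.txt  (falls back to file_path)
s.save_path = '/tmp/output.txt'
print(s.save_path)        # /tmp/output.txt (explicit value wins)
```

A side effect of the fallback reading `self.file_path` lazily: until `save_path` is set, later changes to `file_path` are reflected in `save_path` too, which may or may not be the behaviour you want.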
Create child of str (or int or float or tuple) that accepts kwargs
38,574,834
4
2016-07-25T18:18:18Z
38,575,015
7
2016-07-25T18:29:26Z
[ "python", "python-3.x", "parent-child", "subclass", "kwargs" ]
I need a class that behaves like a string but also takes additional `kwargs`. Therefore I subclass `str`:

```
class Child(str):

    def __init__(self, x, **kwargs):
        # some code ...
        pass

inst = Child('a', y=2)
print(inst)
```

This however raises:

```
Traceback (most recent call last):
  File "/home/user1/Project/exp1.py", line 8, in <module>
    inst = Child('a', y=2)
TypeError: 'y' is an invalid keyword argument for this function
```

This is rather strange, since the code below works without any error:

```
class Child(object):

    def __init__(self, x, **kwargs):
        # some code ...
        pass

inst = Child('a', y=2)
```

---

**Questions:**

* Why do I get different behavior when trying to subclass `str`, `int`, `float`, `tuple` etc compared to other classes like `object`, `list`, `dict` etc?
* How can I create a class that behaves like a string but has additional kwargs?
You need to override `__new__` in this case, not `__init__`: ``` >>> class Child(str): ... def __new__(cls, s, **kwargs): ... inst = str.__new__(cls, s) ... inst.__dict__.update(kwargs) ... return inst ... >>> c = Child("foo") >>> c.upper() 'FOO' >>> c = Child("foo", y="banana") >>> c.upper() 'FOO' >>> c.y 'banana' >>> ``` See [here](https://docs.python.org/3/reference/datamodel.html#basic-customization) for the answer to why overriding `__init__` doesn't work when subclassing immutable types like `str`, `int`, and `float`: > `__new__()` is intended mainly to allow subclasses of immutable types **(like int, str, or tuple)** to customize instance creation. It is also > commonly overridden in custom metaclasses in order to customize class > creation.
Python- np.mean() giving wrong means?
38,576,480
7
2016-07-25T20:04:34Z
38,576,857
8
2016-07-25T20:27:08Z
[ "python", "numpy", "mean" ]
**The issue** So I have 50 netCDF4 data files that contain decades of monthly temperature predictions on a global grid. I'm using np.mean() to make an ensemble average of all 50 data files together while preserving time length & spatial scale, but np.mean() gives me two different answers. The first time I run its block of code, it gives me a number that, when averaged over latitude & longitude & plotted against the individual runs, is slightly lower than what the ensemble mean should be. If I re-run the block, it gives me a different mean which looks correct. **The code** I can't copy every line here since it's long, but here's what I do for each run. ``` #Historical (1950-2020) data ncin_1 = Dataset("/project/wca/AR5/CanESM2/monthly/histr1/tas_Amon_CanESM2_historical-r1_r1i1p1_195001-202012.nc") #Import data file tash1 = ncin_1.variables['tas'][:] #extract tas (temperature) variable ncin_1.close() #close to save memory #Repeat for future (2021-2100) data ncin_1 = Dataset("/project/wca/AR5/CanESM2/monthly/histr1/tas_Amon_CanESM2_historical-r1_r1i1p1_202101-210012.nc") tasr1 = ncin_1.variables['tas'][:] ncin_1.close() #Concatenate historical & future files together to make one time series array tas11 = np.concatenate((tash1,tasr1),axis=0) #Subtract the 1950-1979 mean to obtain anomalies tas11 = tas11 - np.mean(tas11[0:359],axis=0,dtype=np.float64) ``` And I repeat that 49 times more for other datasets. Each tas11, tas12, etc file has the shape (1812, 64, 128) corresponding to time length in months, latitude, and longitude. To get the ensemble mean, I do the following. ``` #Move all tas data to one array alltas = np.zeros((1812,64,128,51)) #years, lat, lon, members (no ensemble mean value yet) alltas[:,:,:,0] = tas11 (...) alltas[:,:,:,49] = tas50 #Calculate ensemble mean & fill into 51st slot in axis 3 alltas[:,:,:,50] = np.mean(alltas,axis=3,dtype=np.float64) ``` When I check a coordinate & month, the ensemble mean is off from what it should be. 
Here's what a plot of globally averaged temperatures from 1950-2100 looks like with the first mean (with monthly values averaged into annual values). Black line is ensemble mean & colored lines are individual runs. [![enter image description here](http://i.stack.imgur.com/uoSK4.png)](http://i.stack.imgur.com/uoSK4.png) Obviously that deviated below the real ensemble mean. Here's what the plot looks like when I run alltas[:,:,:,50]=np.mean(alltas,axis=3,dtype=np.float64) a second time & keep everything else the same. [![enter image description here](http://i.stack.imgur.com/vtSQ5.png)](http://i.stack.imgur.com/vtSQ5.png) Much better. **The question** Why does np.mean() calculate the wrong value the first time? I tried specifying the data type as a float when using np.mean() like in this question- [Wrong numpy mean value?](http://stackoverflow.com/questions/17463128/wrong-numpy-mean-value) But it didn't work. Any way I can fix it so it works correctly the first time? I don't want this problem to occur on a calculation where it's not so easy to notice a math error.
In the line ``` alltas[:,:,:,50] = np.mean(alltas,axis=3,dtype=np.float64) ``` the argument to `mean` should be `alltas[:,:,:,:50]`: ``` alltas[:,:,:,50] = np.mean(alltas[:,:,:,:50], axis=3, dtype=np.float64) ``` Otherwise you are including those final zeros in the calculation of the ensemble means.
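As a minimal sketch of the mechanism (a hypothetical small array in place of your 1812×64×128 data): averaging over an axis that still includes the zero-filled slot reserved for the ensemble mean drags the result down, which matches the too-low black line in the first plot.

```python
import numpy as np

members = 4
data = np.zeros((2, 2, members + 1))           # last slot reserved for the ensemble mean
data[:, :, :members] = 10.0                    # four "members", all equal to 10

wrong = np.mean(data, axis=2)                  # includes the trailing zeros -> 8.0
right = np.mean(data[:, :, :members], axis=2)  # excludes them -> 10.0

print(wrong[0, 0], right[0, 0])  # 8.0 10.0
```

The second run of the original code looked correct only because slot 50 already held values close to the mean by then, so including it barely moved the result.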
pandas equivalent of np.where
38,579,532
6
2016-07-26T00:52:45Z
38,579,700
7
2016-07-26T01:15:44Z
[ "python", "numpy", "pandas" ]
`np.where` has the semantics of a vectorized if/else (similar to Apache Spark's `when`/`otherwise` DataFrame method). I know that I can use `np.where` on pandas `Series`, but `pandas` often defines its own API to use instead of raw `numpy` functions, which is usually more convenient with `pd.Series`/`pd.DataFrame`. Sure enough, I found `pandas.DataFrame.where`. However, at first glance, it has a completely different semantics. I could not find a way to rewrite the most basic example of `np.where` using pandas `where`: ``` # df is pd.DataFrame # how to write this using df.where? df['C'] = np.where((df['A']<0) | (df['B']>0), df['A']+df['B'], df['A']/df['B']) ``` Am I missing something obvious? Or is pandas `where` intended for a completely different use case, despite same name as `np.where`?
Try: ``` (df['A'] + df['B']).where((df['A'] < 0) | (df['B'] > 0), df['A'] / df['B']) ``` The difference between the `numpy` `where` and `DataFrame` `where` is that the default values are supplied by the `DataFrame` that the `where` method is being called on ([docs](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.where.html)). I.e. ``` np.where(m, A, B) ``` is roughly equivalent to ``` A.where(m, B) ``` If you wanted a similar call signature using pandas, you could take advantage of [the way method calls work in Python](https://docs.python.org/3/tutorial/classes.html#method-objects): ``` pd.DataFrame.where(cond=(df['A'] < 0) | (df['B'] > 0), self=df['A'] + df['B'], other=df['A'] / df['B']) ``` or without kwargs (Note: that the positional order of arguments is different from the `numpy` `where` [argument order](http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html)): ``` pd.DataFrame.where(df['A'] + df['B'], (df['A'] < 0) | (df['B'] > 0), df['A'] / df['B']) ```
rounding errors in Python floor division
38,588,815
29
2016-07-26T11:35:11Z
38,589,033
9
2016-07-26T11:45:40Z
[ "python", "python-2.7", "python-3.x", "floating-point", "rounding" ]
I know rounding errors happen in floating point arithmetic but can somebody explain the reason for this one: ``` >>> 8.0 / 0.4 # as expected 20.0 >>> floor(8.0 / 0.4) # int works too 20 >>> 8.0 // 0.4 # expecting 20.0 19.0 ``` This happens on both Python 2 and 3 on x64. As far as I see it this is either a bug or a very dumb specification of `//` since I don't see any reason why the last expression should evaluate to `19.0`. Why isn't `a // b` simply defined as `floor(a / b)` ? **EDIT**: `8.0 % 0.4` also evaluates to `0.3999999999999996`. At least this is consequent since then `8.0 // 0.4 * 0.4 + 8.0 % 0.4` evaluates to `8.0` **EDIT**: This is not a duplicate of [Is floating point math broken?](http://stackoverflow.com/questions/588004) since I am asking why this specific operation is subject to (maybe avoidable) rounding errors, and why `a // b` isn't defined as / equal to `floor(a / b)`
That's because there is no exact 0.4 in Python's binary floating-point representation; it's actually a float like `0.4000000000000001`, which makes the floor of the division 19. ``` >>> floor(8//0.4000000000000001) 19.0 ``` But true division (`/`) [returns a reasonable approximation of the division result if the arguments are floats or complex.](https://www.python.org/dev/peps/pep-0238/) And that's why the result of `8.0/0.4` is 20. It actually depends on the size of the arguments (in C, `double` arguments) (**not rounding to nearest float**). Read more about [Python's integer division floors](http://python-history.blogspot.co.uk/2010/08/why-pythons-integer-division-floors.html) by Guido himself. Also, for complete information about floating-point numbers you can read this article: <https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html> For those who are interested, the following function is `float_div`, which performs the true-division task for float numbers in CPython's source code: ``` float_div(PyObject *v, PyObject *w) { double a,b; CONVERT_TO_DOUBLE(v, a); CONVERT_TO_DOUBLE(w, b); if (b == 0.0) { PyErr_SetString(PyExc_ZeroDivisionError, "float division by zero"); return NULL; } PyFPE_START_PROTECT("divide", return 0) a = a / b; PyFPE_END_PROTECT(a) return PyFloat_FromDouble(a); } ``` The final result is then calculated by the function `PyFloat_FromDouble`: ``` PyFloat_FromDouble(double fval) { PyFloatObject *op = free_list; if (op != NULL) { free_list = (PyFloatObject *) Py_TYPE(op); numfree--; } else { op = (PyFloatObject*) PyObject_MALLOC(sizeof(PyFloatObject)); if (!op) return PyErr_NoMemory(); } /* Inline PyObject_New */ (void)PyObject_INIT(op, &PyFloat_Type); op->ob_fval = fval; return (PyObject *) op; } ```
rounding errors in Python floor division
38,588,815
29
2016-07-26T11:35:11Z
38,589,356
10
2016-07-26T12:01:54Z
[ "python", "python-2.7", "python-3.x", "floating-point", "rounding" ]
I know rounding errors happen in floating point arithmetic but can somebody explain the reason for this one: ``` >>> 8.0 / 0.4 # as expected 20.0 >>> floor(8.0 / 0.4) # int works too 20 >>> 8.0 // 0.4 # expecting 20.0 19.0 ``` This happens on both Python 2 and 3 on x64. As far as I see it this is either a bug or a very dumb specification of `//` since I don't see any reason why the last expression should evaluate to `19.0`. Why isn't `a // b` simply defined as `floor(a / b)` ? **EDIT**: `8.0 % 0.4` also evaluates to `0.3999999999999996`. At least this is consequent since then `8.0 // 0.4 * 0.4 + 8.0 % 0.4` evaluates to `8.0` **EDIT**: This is not a duplicate of [Is floating point math broken?](http://stackoverflow.com/questions/588004) since I am asking why this specific operation is subject to (maybe avoidable) rounding errors, and why `a // b` isn't defined as / equal to `floor(a / b)`
Ok, after a little bit of research I have found this [issue](https://bugs.python.org/issue27463). What seems to be happening is that, as @khelwood suggested, `0.4` evaluates internally to `0.40000000000000002220`, which when dividing `8.0` yields something slightly smaller than `20.0`. The `/` operator then rounds to the nearest floating-point number, which is `20.0`, but the `//` operator immediately truncates the result, yielding `19.0`. This should be faster and I suppose it's "close to the processor", but it still isn't what the user wants / is expecting.
rounding errors in Python floor division
38,588,815
29
2016-07-26T11:35:11Z
38,589,543
10
2016-07-26T12:10:42Z
[ "python", "python-2.7", "python-3.x", "floating-point", "rounding" ]
I know rounding errors happen in floating point arithmetic but can somebody explain the reason for this one: ``` >>> 8.0 / 0.4 # as expected 20.0 >>> floor(8.0 / 0.4) # int works too 20 >>> 8.0 // 0.4 # expecting 20.0 19.0 ``` This happens on both Python 2 and 3 on x64. As far as I see it this is either a bug or a very dumb specification of `//` since I don't see any reason why the last expression should evaluate to `19.0`. Why isn't `a // b` simply defined as `floor(a / b)` ? **EDIT**: `8.0 % 0.4` also evaluates to `0.3999999999999996`. At least this is consequent since then `8.0 // 0.4 * 0.4 + 8.0 % 0.4` evaluates to `8.0` **EDIT**: This is not a duplicate of [Is floating point math broken?](http://stackoverflow.com/questions/588004) since I am asking why this specific operation is subject to (maybe avoidable) rounding errors, and why `a // b` isn't defined as / equal to `floor(a / b)`
@jotasi explained the true reason behind it. However if you want to prevent it, you can use `decimal` module which was basically designed to represent decimal floating point numbers exactly in contrast to binary floating point representation. So in your case you could do something like: ``` >>> from decimal import * >>> Decimal('8.0')//Decimal('0.4') Decimal('20') ``` **Reference:** <https://docs.python.org/2/library/decimal.html>
rounding errors in Python floor division
38,588,815
29
2016-07-26T11:35:11Z
38,589,899
20
2016-07-26T12:26:58Z
[ "python", "python-2.7", "python-3.x", "floating-point", "rounding" ]
I know rounding errors happen in floating point arithmetic but can somebody explain the reason for this one: ``` >>> 8.0 / 0.4 # as expected 20.0 >>> floor(8.0 / 0.4) # int works too 20 >>> 8.0 // 0.4 # expecting 20.0 19.0 ``` This happens on both Python 2 and 3 on x64. As far as I see it this is either a bug or a very dumb specification of `//` since I don't see any reason why the last expression should evaluate to `19.0`. Why isn't `a // b` simply defined as `floor(a / b)` ? **EDIT**: `8.0 % 0.4` also evaluates to `0.3999999999999996`. At least this is consequent since then `8.0 // 0.4 * 0.4 + 8.0 % 0.4` evaluates to `8.0` **EDIT**: This is not a duplicate of [Is floating point math broken?](http://stackoverflow.com/questions/588004) since I am asking why this specific operation is subject to (maybe avoidable) rounding errors, and why `a // b` isn't defined as / equal to `floor(a / b)`
As you and khelwood already noticed, `0.4` cannot be exactly represented as a float. Why? It is two fifth (`4/10 == 2/5`) which does not have a finite binary fraction representation. Try this: ``` from fractions import Fraction Fraction('8.0') // Fraction('0.4') # or equivalently # Fraction(8, 1) // Fraction(2, 5) # or # Fraction('8/1') // Fraction('2/5') # 20 ``` However ``` Fraction('8') // Fraction(0.4) # 19 ``` Here, `0.4` is interpreted as a float literal (and thus a floating point binary number) which requires (binary) rounding, and only *then* converted to the rational number `Fraction(3602879701896397, 9007199254740992)`, which is almost but not exactly 4 / 10. Then the floored division is executed, and because ``` 19 * Fraction(3602879701896397, 9007199254740992) < 8.0 ``` and ``` 20 * Fraction(3602879701896397, 9007199254740992) > 8.0 ``` the result is 19, not 20. The same probably happens for ``` 8.0 // 0.4 ``` I.e., it seems floored division is determined atomically (but on the only approximate float values of the interpreted float literals). So why does ``` floor(8.0 / 0.4) ``` give the "right" result? Because there, two rounding errors cancel each other out. *First* 1) the division is performed, yielding something slightly smaller than 20.0, but not representable as float. It gets rounded to the closest float, which happens to be `20.0`. Only *then*, the `floor` operation is performed, but now acting on *exactly* `20.0`, thus not changing the number any more. --- 1) As Kyle Strand [points out](http://stackoverflow.com/questions/38588815/rounding-errors-in-python-floor-division/38589899?noredirect=1#comment64578429_38589356), that the exact result is determined *then* rounded **isn't** what *actually* happens low2)-level (CPython's C code or even CPU instructions). However, it can be a useful model for determining the expected 3) result. 2) On the *lowest* 4) level, however, this might not be too far off. 
Some chipsets determine float results by first computing a more precise (but still not exact, simply has some more binary digits) internal floating point result and then rounding to IEEE double precision. 3) "expected" by the Python specification, not necessarily by our intuition. 4) Well, lowest level *above* logic gates. We don't have to consider the quantum mechanics that make semiconductors possible to understand this.
drop elements of a level of a multi-level index pandas
38,615,616
4
2016-07-27T14:18:54Z
38,616,050
7
2016-07-27T14:35:51Z
[ "python", "pandas", "indexing", "multi-index" ]
In the following DataFrame there are a 2-level MultiIndex, namely `city` and `date`: ``` temp count city date SFO 2014-05-31 31 2014-06-30 30 2014-07-31 31 2014-08-31 31 2014-09-30 30 YYZ 2014-05-31 31 2014-06-30 30 2014-07-31 31 2014-08-31 31 2014-09-30 30 ``` I want to drop `2014-05-31` and `2014-09-30` from the date level. How do I do this? **Comment:** To build the DataFrame - ``` df = pd.DataFrame( {('temp', 'count'): {('SFO', Timestamp('2014-05-31 00:00:00')): 31, ('SFO', Timestamp('2014-06-30 00:00:00')): 30, ('SFO', Timestamp('2014-07-31 00:00:00')): 31, ('SFO', Timestamp('2014-08-31 00:00:00')): 31, ('SFO', Timestamp('2014-09-30 00:00:00')): 30, ('YYZ', Timestamp('2014-05-31 00:00:00')): 31, ('YYZ', Timestamp('2014-06-30 00:00:00')): 30, ('YYZ', Timestamp('2014-07-31 00:00:00')): 31, ('YYZ', Timestamp('2014-08-31 00:00:00')): 31, ('YYZ', Timestamp('2014-09-30 00:00:00')): 30}} ).rename_axis(['city','date']) ```
You can give [drop](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.MultiIndex.drop.html) a specific `level`: ``` In[4]: df.drop([Timestamp('2014-05-31'),Timestamp('2014-09-30')],level=1) Out[4]: temp count city date SFO 2014-06-30 30 2014-07-31 31 2014-08-31 31 YYZ 2014-06-30 30 2014-07-31 31 2014-08-31 31 ```
Convert Python sequence to NumPy array, filling missing values
38,619,143
12
2016-07-27T17:01:10Z
38,619,278
9
2016-07-27T17:10:13Z
[ "python", "arrays", "numpy", "sequence", "variable-length-array" ]
The implicit conversion of a Python sequence of *variable-length* lists into a NumPy array cause the array to be of type *object*. ``` v = [[1], [1, 2]] np.array(v) >>> array([[1], [1, 2]], dtype=object) ``` Trying to force another type will cause an exception: ``` np.array(v, dtype=np.int32) ValueError: setting an array element with a sequence. ``` What is the most efficient way to get a dense NumPy array of type int32, by filling the "missing" values with a given placeholder? From my sample sequence `v`, I would like to get something like this, if 0 is the placeholder ``` array([[1, 0], [1, 2]], dtype=int32) ```
Pandas and its `DataFrame`-s deal beautifully with missing data. ``` import numpy as np import pandas as pd v = [[1], [1, 2]] print(pd.DataFrame(v).fillna(0).values.astype(np.int32)) # array([[1, 0], # [1, 2]], dtype=int32) ```
Convert Python sequence to NumPy array, filling missing values
38,619,143
12
2016-07-27T17:01:10Z
38,619,333
9
2016-07-27T17:12:47Z
[ "python", "arrays", "numpy", "sequence", "variable-length-array" ]
The implicit conversion of a Python sequence of *variable-length* lists into a NumPy array cause the array to be of type *object*. ``` v = [[1], [1, 2]] np.array(v) >>> array([[1], [1, 2]], dtype=object) ``` Trying to force another type will cause an exception: ``` np.array(v, dtype=np.int32) ValueError: setting an array element with a sequence. ``` What is the most efficient way to get a dense NumPy array of type int32, by filling the "missing" values with a given placeholder? From my sample sequence `v`, I would like to get something like this, if 0 is the placeholder ``` array([[1, 0], [1, 2]], dtype=int32) ```
You can use [itertools.zip\_longest](https://docs.python.org/3.4/library/itertools.html#itertools.zip_longest): ``` import itertools np.array(list(itertools.zip_longest(*v, fillvalue=0))).T Out: array([[1, 0], [1, 2]]) ``` Note: For Python 2, it is [itertools.izip\_longest](https://docs.python.org/2/library/itertools.html#itertools.izip_longest).
Maximum recursion depth exceeded, but only when using a decorator
38,624,852
6
2016-07-27T23:36:17Z
38,624,949
7
2016-07-27T23:46:50Z
[ "python", "python-3.x", "recursion", "decorator", "memoization" ]
I'm writing a program to calculate Levenshtein distance in Python. I implemented memoization because I am running the algorithm recursively. My original function implemented the memoization in the function itself. Here's what it looks like: ``` # Memoization table mapping from a tuple of two strings to their Levenshtein distance dp = {} # Levenshtein distance algorithm def lev(s, t): # If the strings are 0, return length of other if not s: return len(t) if not t: return len(s) # If the last two characters are the same, no cost. Otherwise, cost of 1 if s[-1] is t[-1]: cost = 0 else: cost = 1 # Save in dictionary if never calculated before if not (s[:-1], t) in dp: dp[(s[:-1], t)] = lev(s[:-1], t) if not (s, t[:-1]) in dp: dp[(s, t[:-1])] = lev(s, t[:-1]) if not (s[:-1], t[:-1]) in dp: dp[(s[:-1], t[:-1])] = lev(s[:-1], t[:-1]) # Returns minimum chars to delete from s, t, and both return min(dp[(s[:-1], t)] + 1, dp[(s, t[:-1])] + 1, dp[(s[:-1], t[:-1])] + cost) ``` This works! However, I found a way to memoize [using decorators](http://www.python-course.eu/python3_memoization.php). I tried to apply this technique to my algorithm: ``` # Memoization table mapping from a tuple of two strings to their Levenshtein distance def memoize(func): memo = {} def wrap(s, t): if (s, t) not in memo: memo[(s, t)] = func(s, t) return memo[(s, t)] return wrap # Levenshtein distance algorithm @memoize # lev = memoize(lev) def lev(s, t): # If the strings are 0, return length of other if not s: return len(t) if not t: return len(s) # If the last two characters are the same, no cost. Otherwise, cost of 1 if s[-1] is t[-1]: cost = 0 else: cost = 1 # Returns minimum chars to delete from s, t, and both return min(lev(s[:-1], t) + 1, lev(s, t[:-1]) + 1, lev(s[:-1], t[:-1]) + cost) ``` To me, this looks cleaner and less confusing. 
I thought that the two would be functionally equivalent, but when I ran the version with the decorator, I was surprised to find that I got a `RecursionError: maximum recursion depth exceeded`. What exactly am I missing? Is using the decorator not functionally equivalent? I attempted a fix by adding `sys.setrecursionlimit(1500)` and this works, but it is a hack and doesn't explain why the two function differently. NOTE: I am using one paragraph of lorem ipsum as my test strings for s and t from Wikipedia: > Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est labour. I understand that for even longer strings, my first function will fail. I just want to know why the decorated one fails first. Thanks!
Consider the stack frames (function calls) that are happening in your original code. They will look something like: ``` lev(s, t) -> lev(..., ...) -> lev(..., ...) -> lev(..., ...) -> lev(..., ...) ``` In you memoized version they appear as: ``` wraps(s, t) -> lev(..., ...) -> wraps(s, t) -> lev(..., ...) -> wraps(s, t) -> lev(..., ...) -> wraps(s, t) -> lev(..., ...) -> wraps(s, t) -> lev(..., ...) ``` That is, your stack frame will be twice as big, as each "call" actually invokes two functions. Thus you will exhaust the stack frame limit earlier.
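You can make the doubled frame count visible directly. The sketch below uses a single-argument variant of your memoizer and counts live frames by walking `sys._getframe()` (CPython-specific); the decorated recursion consumes two frames per level instead of one.

```python
import sys

def memoize(func):
    memo = {}
    def wrap(n):
        if n not in memo:
            memo[n] = func(n)
        return memo[n]
    return wrap

def depth():
    # Count live Python frames by walking the frame chain (CPython-specific).
    d, frame = 0, sys._getframe()
    while frame is not None:
        d += 1
        frame = frame.f_back
    return d

def plain(n):
    return depth() if n == 0 else plain(n - 1)

@memoize
def wrapped(n):
    return depth() if n == 0 else wrapped(n - 1)

base = depth()
plain_extra = plain(10) - base      # one frame per recursion level
wrapped_extra = wrapped(10) - base  # two frames per level: wrap + func
print(plain_extra, wrapped_extra)
```

With twice the frames per level, the default recursion limit is hit at roughly half the input size, which is why only the decorated version blew up on your test string.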
Why is a generator produced by yield faster than a generator produced by xrange?
38,626,308
10
2016-07-28T02:48:00Z
38,626,519
9
2016-07-28T03:16:39Z
[ "python", "python-2.7", "python-3.x", "yield" ]
I was investigating Python generators and decided to run a little experiment. ``` TOTAL = 100000000 def my_sequence(): i = 0 while i < TOTAL: yield i i += 1 def my_list(): return range(TOTAL) def my_xrange(): return xrange(TOTAL) ``` Memory usage (using psutil to get process RSS memory) and time taken (using time.time()) are shown below after running each method several times and taking the average: ``` sequence_of_values = my_sequence() # Memory usage: 6782976B Time taken: 9.53674e-07 s sequence_of_values2 = my_xrange() # Memory usage: 6774784B Time taken: 2.14576e-06 s list_of_values = my_list() # Memory usage: 3266207744B Time taken: 1.80253s ``` I noticed that producing a generator by using xrange is consistently (slightly) slower than that by using yield. Why is that so?
I'm going to preface this answer by saying that timings on this scale are likely going to be hard to measure accurately (probably best to use `timeit`) and that these sorts of optimizations will almost never make any difference in your actual program's runtime ... Ok, now the disclaimer is done ... The first thing that you need to notice is that you're only timing the construction of the generator/xrange object -- You are **NOT** timing how long it takes to actually iterate over the values1. There are a couple reasons why creating the generator might be faster in some cases than creating the xrange object... 1. For the generator case, you're only creating a generator -- No code in the generator actually gets run. This amounts to roughly 1 function call. 2. For the `xrange` case, you're calling the function *and* then you have to lookup the global name `xrange`, the global `TOTAL` and then you need to call that builtin -- So there *are* more things being executed in this case. As for memory -- In both of the lazy approaches, the memory used will be dominated by the python runtime -- Not by the size of your generator objects. The only case where the memory use is impacted appreciably by your script is the case where you construct a list of 100million items. Also note that I can't actually confirm your results consistently on my system... Using `timeit`, I actually get that `my_xrange` is *sometimes*2 faster to construct (by ~30%). 
Adding the following to the bottom of your script: ``` from timeit import timeit print timeit('my_xrange()', setup='from __main__ import my_xrange') print timeit('my_sequence()', setup='from __main__ import my_sequence') ``` And my results are (for `CPython` on OS-X El-Capitan): ``` 0.227491140366 0.356791973114 ``` However, `pypy` seems to favor the generator construction (I tried it with both `my_xrange` first and `my_sequence` first and got fairly consistent results though the first one to run does seem to be at a bit of a disadvantage -- Maybe due to JIT warm-up time or something): ``` 0.00285911560059 0.00137305259705 ``` --- 1Here, I would *expect* `xrange` to have the edge -- but again, nothing is true until you `timeit` and then it's only true if the timings differences are significant and it's only true on the computer where you did the timings. 2See opening disclaimer :-P
Inserting elements from array to the string
38,635,199
2
2016-07-28T11:35:53Z
38,635,274
7
2016-07-28T11:39:10Z
[ "python", "arrays", "string" ]
I have two variables: ``` query = "String: {} Number: {}" param = ['text', 1] ``` I need to merge these two variables and keep the quote marks in case of string and numbers without quote marks. result= `"String: 'text' Number: 1"` I tried to use query.format(param), but it removes the quote marks around the 'text'. How can I solve that?
You can use `repr` on each item in `param` within a generator expression, then use `format` to add them to your string. ``` >>> query = "String: {} Number: {}" >>> param = ['text', 1] >>> query.format(*(repr(i) for i in param)) "String: 'text' Number: 1" ```
ValueError: too many values to unpack - Is it possible to ignore one value?
38,636,765
2
2016-07-28T12:44:57Z
38,636,821
7
2016-07-28T12:47:06Z
[ "python", "unpack" ]
The following line returns an error: ``` self.m, self.userCodeToUserNameList, self.itemsList, self.userToKeyHash, self.fileToKeyHash = readUserFileMatrixFromFile(x,True) ``` The function actually returns 6 values. But in this case, the last one is useless (it's None). So I want to store only 5. Is it possible to ignore the last value?
You can use `*rest` from Python 3. ``` >>> x, y, z, *rest = 1, 2, 3, 4, 5, 6, 7 >>> x 1 >>> y 2 >>> rest [4, 5, 6, 7] ``` This way you can always be sure to not encounter unpacking issues.
Python: How to delete rows ending in certain characters?
38,644,696
9
2016-07-28T19:01:08Z
38,644,862
7
2016-07-28T19:10:13Z
[ "python", "python-3.x", "pandas" ]
I have a large data file and I need to delete rows that end in certain letters. Here is an example of the file I'm using: ``` User Name DN MB212DA CN=MB212DA,CN=Users,DC=prod,DC=trovp,DC=net MB423DA CN=MB423DA,OU=Generic Mailbox,DC=prod,DC=trovp,DC=net MB424PL CN=MB424PL,CN=Users,DC=prod,DC=trovp,DC=net MBDA423 CN=MBDA423,OU=DNA,DC=prod,DC=trovp,DC=net MB2ADA4 CN=MB2ADA4,OU=DNA,DC=prod,DC=trovp,DC=netenter code here ``` Code I am using: ``` from pandas import DataFrame, read_csv import pandas as pd f = pd.read_csv('test1.csv', sep=',',encoding='latin1') df = f.loc[~(~pd.isnull(f['User Name']) & f['UserName'].str.contains("DA|PL",))] ``` How do I use regular expression syntax to delete the words that end in "DA" and "PL" but make sure I do not delete the other rows because they contain "DA" or "PL" inside of them? It should delete the rows and I end up with a file like this: ``` User Name DN MBDA423 CN=MBDA423,OU=DNA,DC=prod,DC=trovp,DC=net MB2ADA4 CN=MB2ADA4,OU=DNA,DC=prod,DC=trovp,DC=net ``` First 3 rows are deleted because they ended in DA and PL.
You could use this expression ``` df = df[~df['User Name'].str.contains('(?:DA|PL)$')] ``` It will return all rows that don't end in either DA or PL. The `?:` is so that the brackets would not capture anything. Otherwise, you'd see pandas returning the following (harmless) warning: ``` UserWarning: This pattern has match groups. To actually get the groups, use str.extract. ``` Alternatively, using `endswith()` and without regular expressions, the same filtering could be achieved by using the following expression: ``` df = df[~df['User Name'].str.endswith(('DA', 'PL'))] ``` As expected, the version without regular expression will be faster. A simple test, consisting of `big_df`, which consists of 10001 copies of your original `df`: ``` # Create a larger DF to get better timing results big_df = df.copy() for i in range(10000): big_df = big_df.append(df) print(big_df.shape) >> (50005, 2) # Without regular expressions %%timeit big_df[~big_df['User Name'].str.endswith(('DA', 'PL'))] >> 10 loops, best of 3: 22.3 ms per loop # With regular expressions %%timeit big_df[~big_df['User Name'].str.contains('(?:DA|PL)$')] >> 10 loops, best of 3: 61.8 ms per loop ```
count number of events in an array python
38,651,692
4
2016-07-29T06:13:46Z
38,651,757
11
2016-07-29T06:17:48Z
[ "python", "arrays", "time-series" ]
I have the following array: ``` a = [0,0,0,1,1,1,0,0,1,0,0,0,1,1,0,0,0,0,0,0,1,1,0,1,1,1,0,0,0] ``` Each time I have a '1' or a series of them(consecutive), this is one event. I need to get, in Python, how many events my array has. So in this case we will have 5 events (that is 5 times 1 or sequences of it appears). I need to count such events in order to to get: ``` b = [5] ``` Thanks
You could use [`itertools.groupby`](https://docs.python.org/3/library/itertools.html#itertools.groupby) (it does exactly what you want - groups consecutive elements) and count all groups which starts with `1`: ``` In [1]: from itertools import groupby In [2]: a = [0,0,0,1,1,1,0,0,1,0,0,0,1,1,0,0,0,0,0,0,1,1,0,1,1,1,0,0,0] In [3]: len([k for k, _ in groupby(a) if k == 1]) Out[3]: 5 ``` --- > what if I wanted to add a condition that an event is given as long as > there are is 2 or more '0's in between. This could be done using `groupby` and custom `key` function: ``` from itertools import groupby class GrouperFn: def __init__(self): self.prev = None def __call__(self, n): assert n is not None, 'n must not be None' if self.prev is None: self.prev = n return n if self.prev == 1: self.prev = n return 1 self.prev = n return n def count_events(events): return len([k for k, _ in groupby(events, GrouperFn()) if k == 1]) def run_tests(tests): for e, a in tests: c = count_events(e) assert c == a, 'failed for {}, expected {}, given {}'.format(e, a, c) print('All tests passed') def main(): run_tests([ ([0, 1, 1, 1, 0], 1), ([], 0), ([1], 1), ([0], 0), ([0, 0, 0], 0), ([1, 1, 0, 1, 1], 1), ([0, 1, 1, 0, 1, 1, 0], 1), ([1, 0, 1, 1, 0, 1, 1, 0, 0, 1], 2), ([1, 1, 0, 0, 1, 1], 2), ([0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0], 4) ]) if __name__ == "__main__": main() ``` The idea is pretty simple - when a `0` goes after a group of `1`'s, it could be a part of the group and therefore should be included in that group. The next event either continues the group (if the event is `1`) or splits it (if the event is `0`) *Note, that presented approach will work only when you need to **count** a number of events, since it splits `[1, 1, 0, 0]` as `[[1, 1, 0], [0]]`*.
Where is a Python built-in object's __enter__() and __exit__() defined?
38,666,733
2
2016-07-29T20:09:03Z
38,666,836
7
2016-07-29T20:16:49Z
[ "python", "with-statement", "contextmanager", "code-inspection" ]
I've read that the object's `__enter__()` and `__exit__()` methods are called every time 'with' is used. I understand that for user-defined objects, you can define those methods yourself, but I don't understand how this works for built-in objects/functions like 'open' or even the testcases. This code works as expected and I assume it closes the file with `__exit__()`: ``` with open('output.txt', 'w') as f: f.write('Hi there!') ``` or ``` with self.assertRaises(ValueError): remove_driver(self.driver) # self refers to a class that inherits from the default unittest.TestCase ``` Yet, there's no such `__enter__()` or `__exit__()` method on either object when I inspect it: [![enter image description here](http://i.stack.imgur.com/wRRsB.png)](http://i.stack.imgur.com/wRRsB.png) [![enter image description here](http://i.stack.imgur.com/swxCF.png)](http://i.stack.imgur.com/swxCF.png) So how is 'open' working with 'with'? Shouldn't objects that support the context management protocol have `__enter__()` and `__exit__()` methods defined and inspectable?
`open()` is a function. It *returns* something that has an `__enter__` and `__exit__` method. Look at something like this: ``` >>> class f: ... def __init__(self): ... print 'init' ... def __enter__(self): ... print 'enter' ... def __exit__(self, *a): ... print 'exit' ... >>> with f(): ... pass ... init enter exit >>> def return_f(): ... return f() ... >>> with return_f(): ... pass ... init enter exit ``` Of course, `return_f` itself does not have those methods, but what it returns does.
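You can confirm this from the interpreter: it is the object *returned* by `open()` that carries the context-manager methods, not `open` itself.

```python
f = open('output.txt', 'w')
print(hasattr(open, '__enter__'))  # False: the function is not a context manager
print(hasattr(f, '__enter__'))     # True: the returned file object is
print(hasattr(f, '__exit__'))      # True
f.close()
```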
What is the inverse of the numpy cumsum function?
38,666,924
10
2016-07-29T20:23:38Z
38,666,977
10
2016-07-29T20:27:10Z
[ "python", "numpy", "cumsum" ]
If I have `z = cumsum( [ 0, 1, 2, 6, 9 ] )`, which gives me `z = [ 0, 1, 3, 9, 18 ]`, how can I get back to the original array `[ 0, 1, 2, 6, 9 ]` ?
``` z[1:] -= z[:-1].copy() ``` Short and sweet, with no slow Python loops. We take views of all but the first element (`z[1:]`) and all but the last (`z[:-1]`), and subtract elementwise. The copy makes sure we subtract the original element values instead of the values we're computing.
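If mutating `z` in place is undesirable, a non-destructive sketch of the same inverse uses `np.diff` with the first element prepended:

```python
import numpy as np

z = np.cumsum([0, 1, 2, 6, 9])           # array([ 0,  1,  3,  9, 18])
orig = np.concatenate(([z[0]], np.diff(z)))
print(orig)                               # [0 1 2 6 9]
```

`np.diff` recovers every element after the first as the difference of consecutive sums, and the first element of a cumsum is the first original value.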
Fail to Launch Mozilla with selenium
38,676,719
8
2016-07-30T17:38:13Z
38,676,858
11
2016-07-30T17:53:02Z
[ "java", "python", "selenium", "firefox", "mozilla" ]
I am trying to launch `Mozilla` but I am still getting this error: > Exception in thread "main" java.lang.IllegalStateException: The path to the driver executable must be set by the webdriver.gecko.driver system property; for more information, see <https://github.com/mozilla/geckodriver>. The latest version can be downloaded from <https://github.com/mozilla/geckodriver/releases> I am using `Selenium 3.0.01` Beta version and `Mozilla 45`. I have tried with `Mozilla 47` too, but still get the same error.
*The `Selenium` client bindings will try to locate the `geckodriver` executable from the system `PATH`. You will need to add the directory containing the executable to the system path.* * On **Unix** systems you can do the following to append it to your system’s search path, if you’re using a bash-compatible shell: ``` export PATH=$PATH:/path/to/directory/of/executable/downloaded/in/previous/step ``` * On **Windows** you need to update the Path system variable to add the full directory path to the executable. The principle is the same as on Unix. *To use Marionette in your tests you will need to update your desired capabilities to use it.*
**Java** : As the exception clearly says, you need to download the latest `geckodriver.exe` from [here](https://github.com/mozilla/geckodriver/releases) and set the downloaded `geckodriver.exe` path, wherever it exists on your computer, as a system property with the variable `webdriver.gecko.driver` before initiating the marionette driver and launching firefox, as below: ``` //if you didn't update the Path system variable to add the full directory path to the executable as above mentioned then doing this directly through code System.setProperty("webdriver.gecko.driver", "path/to/geckodriver.exe"); //Now you can Initialize marionette driver to launch firefox DesiredCapabilities capabilities = DesiredCapabilities.firefox(); capabilities.setCapability("marionette", true); WebDriver driver = new MarionetteDriver(capabilities); ``` And for `Selenium3` use: ``` WebDriver driver = new FirefoxDriver(capabilities); ``` [If you're still in trouble, follow this link as well, which will help you solve your problem](http://qavalidation.com/2016/08/whats-new-in-selenium-3-0.html)
**.NET** : ``` var driver = new FirefoxDriver(new FirefoxOptions()); ```
**Python** : ``` from selenium import webdriver from selenium.webdriver.common.desired_capabilities import DesiredCapabilities caps = DesiredCapabilities.FIREFOX # Tell the Python bindings to use Marionette. # This will not be necessary in the future, # when Selenium will auto-detect what remote end # it is talking to. caps["marionette"] = True # Path to Firefox DevEdition or Nightly. # Firefox 47 (stable) is currently not supported, # and may give you a suboptimal experience. # # On Mac OS you must point to the binary executable # inside the application package, such as # /Applications/FirefoxNightly.app/Contents/MacOS/firefox-bin caps["binary"] = "/usr/bin/firefox" driver = webdriver.Firefox(capabilities=caps) ```
**Ruby** : ``` # Selenium 3 uses Marionette by default when firefox is specified # Set Marionette in Selenium 2 by directly passing marionette: true # You might need to specify an alternate path for the desired version of Firefox Selenium::WebDriver::Firefox::Binary.path = "/path/to/firefox" driver = Selenium::WebDriver.for :firefox, marionette: true ```
**JavaScript (Node.js)** : ``` const webdriver = require('selenium-webdriver'); const Capabilities = require('selenium-webdriver/lib/capabilities').Capabilities; var capabilities = Capabilities.firefox(); // Tell the Node.js bindings to use Marionette. // This will not be necessary in the future, // when Selenium will auto-detect what remote end // it is talking to. capabilities.set('marionette', true); var driver = new webdriver.Builder().withCapabilities(capabilities).build(); ```
**Using `RemoteWebDriver`** If you want to use `RemoteWebDriver` in any language, this will allow you to use `Marionette` in `Selenium` Grid.
**Python**: ``` caps = DesiredCapabilities.FIREFOX # Tell the Python bindings to use Marionette. # This will not be necessary in the future, # when Selenium will auto-detect what remote end # it is talking to. caps["marionette"] = True driver = webdriver.Firefox(capabilities=caps) ```
**Ruby** : ``` # Selenium 3 uses Marionette by default when firefox is specified # Set Marionette in Selenium 2 by using the Capabilities class # You might need to specify an alternate path for the desired version of Firefox caps = Selenium::WebDriver::Remote::Capabilities.firefox marionette: true, firefox_binary: "/path/to/firefox" driver = Selenium::WebDriver.for :remote, desired_capabilities: caps ```
**Java** : ``` DesiredCapabilities capabilities = DesiredCapabilities.firefox(); // Tell the Java bindings to use Marionette. // This will not be necessary in the future, // when Selenium will auto-detect what remote end // it is talking to. capabilities.setCapability("marionette", true); WebDriver driver = new RemoteWebDriver(capabilities); ```
**.NET** ``` DesiredCapabilities capabilities = DesiredCapabilities.Firefox(); // Tell the .NET bindings to use Marionette. // This will not be necessary in the future, // when Selenium will auto-detect what remote end // it is talking to. capabilities.SetCapability("marionette", true); var driver = new RemoteWebDriver(capabilities); ```
Note : Just like the other drivers available to Selenium from other browser vendors, Mozilla has now released an executable that will run alongside the browser. Follow [this](https://developer.mozilla.org/en-US/docs/Mozilla/QA/Marionette/WebDriver) for more details. [You can download the latest geckodriver executable, supporting the latest firefox, from here](https://github.com/mozilla/geckodriver/releases)
How (in what form) to share (deliver) a Python function?
38,700,517
19
2016-08-01T13:46:07Z
38,771,976
9
2016-08-04T15:48:25Z
[ "python", "sockets", "deployment", "publish" ]
The final outcome of my work should be a Python function that takes a JSON object as the only input and returns another JSON object as output. To keep it more specific, I am a data scientist, and the function I am speaking about is derived from data and it delivers predictions (in other words, it is a machine learning model). So, my question is how to deliver this function to the "tech team" that is going to incorporate it into a web-service. At the moment I face a few problems. First, the tech team does not necessarily work in a Python environment. So, they cannot just "copy and paste" my function into their code. Second, I want to make sure that my function runs in the same environment as mine. For example, I can imagine that I use some library that the tech team does not have, or that they have a version that differs from the version that I use. **ADDED** As a possible solution I consider the following. I start a Python process that listens to a socket, accepts incoming strings, transforms them into JSON, gives the JSON to the "published" function and returns the output JSON as a string. Does this solution have disadvantages? In other words, is it a good idea to "publish" a Python function as a background process listening to a socket?
You have the right idea with using a socket but there are tons of frameworks doing exactly what you want. Like [hleggs](http://stackoverflow.com/users/6464893/hleggs), I suggest you checkout [Flask](http://flask.pocoo.org/) to build a microservice. This will let the other team post JSON objects in an HTTP request to your flask application and receive JSON objects back. No knowledge of the underlying system or additional requirements required! Here's a template for a flask app that replies and responds with JSON ``` from flask import Flask, request, jsonify app = Flask(__name__) @app.route('/', methods=['POST']) def index(): json = request.json return jsonify(your_function(json)) if __name__=='__main__': app.run(host='0.0.0.0', port=5000) ``` **Edit**: embedded my code directly as per [Peter Britain](http://stackoverflow.com/users/4994021/peter-brittain)'s advice
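For comparison, here is a standard-library-only sketch of the "function behind a socket" idea from the question, using `http.server` instead of a raw socket so the other team can simply POST JSON. The `predict` function is a hypothetical stand-in for the real model; Flask remains the more maintainable choice, but this shows there is nothing magic about the approach.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(payload):
    # Hypothetical stand-in for the trained model.
    return {"prediction": sum(payload.get("features", []))}

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, run the function, reply with JSON.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        out = json.dumps(predict(json.loads(body))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(out)))
        self.end_headers()
        self.wfile.write(out)

    def log_message(self, *args):
        pass  # silence per-request logging

# Bind an ephemeral port and serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Exercise the endpoint the way the tech team would.
req = urllib.request.Request(
    "http://127.0.0.1:%d/" % server.server_port,
    data=json.dumps({"features": [1, 2, 3]}).encode(),
    headers={"Content-Type": "application/json"},
)
result = json.loads(urllib.request.urlopen(req).read())
print(result)   # {'prediction': 6}
server.shutdown()
```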
One-liner to save value of if statement?
38,700,537
3
2016-08-01T13:46:58Z
38,700,640
7
2016-08-01T13:51:00Z
[ "python", "variables", "if-statement" ]
Is there a smart way to write the following code in three or four lines? ``` a=l["artist"] if a: b=a["projects"] if b: c=b["project"] if c: print c ``` So I was thinking of something like this pseudocode: ``` a = l["artist"] if True: ```
How about: ``` try: print l["artist"]["projects"]["project"] except KeyError: pass except TypeError: pass # None["key"] raises TypeError. ``` This will `try` to `print` the value, but if a `KeyError` is raised, the `except` block will be run. `pass` means to do nothing. This is known as EAFP: it’s **E**asier to **A**sk **F**orgiveness than **P**ermission.
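An alternative sketch that avoids exceptions altogether: chain `dict.get` with empty-dict defaults, so a missing key at any level simply yields `None` at the end. Note that, unlike the try/except version, this still raises if an intermediate value is explicitly `None` rather than absent. The sample data below is illustrative, not from the question.

```python
# Sample nested structure mirroring the question's shape.
l = {"artist": {"projects": {"project": "my-project"}}}

c = l.get("artist", {}).get("projects", {}).get("project")
if c:
    print(c)            # my-project

# A missing level anywhere produces None instead of raising:
empty = {"artist": {}}
missing = empty.get("artist", {}).get("projects", {}).get("project")
print(missing)          # None
```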
Why is there an underscore following the "from" in the Twilio Rest API?
38,708,834
5
2016-08-01T22:02:56Z
38,708,867
11
2016-08-01T22:07:15Z
[ "python", "twilio", "twilio-api" ]
In the [twilio python library](https://www.twilio.com/docs/libraries/python#testing-your-installation), we have this feature to create messages: `from twilio.rest import TwilioRestClient` and we can write: `msg = TwilioRestClient.messages.create(body=myMsgString, from_=myNumber, to=yourNumber)` My question is simple: why does an underscore follow the `from` parameter? Or why is that the parameter name? Is it because `from` is otherwise a keyword in Python and we differentiate variables from keywords with an underscore suffix? Is that actually necessary in this case?
This is because `from` would be an invalid argument name, resulting in a `SyntaxError` - it's a python keyword. Appending a trailing underscore is the recommended way to avoid such conflicts mentioned in the [PEP8 style guide](https://www.python.org/dev/peps/pep-0008/#function-and-method-arguments): > If a function argument's name clashes with a reserved keyword, it is generally better to append a single trailing underscore rather than use an abbreviation or spelling corruption.
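A quick demonstration of both halves of the answer: the keyword cannot even be compiled as a parameter name, while the trailing-underscore form works normally. The `send` function below is a hypothetical stand-in, not Twilio's API.

```python
# Using the reserved word as a parameter name is rejected at compile time.
raised = False
try:
    compile("def send(from, to): pass", "<demo>", "exec")
except SyntaxError:
    raised = True
print(raised)   # True

# The PEP 8-recommended spelling with a trailing underscore is fine.
def send(from_, to):
    return "from %s to %s" % (from_, to)

print(send(from_="+15551234", to="+15556789"))   # from +15551234 to +15556789
```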