Dataset schema (column name, dtype, min, max):

| column | dtype | min | max |
| --- | --- | --- | --- |
| title | stringlengths | 12 | 150 |
| question_id | int64 | 469 | 40.1M |
| question_score | int64 | 2 | 5.52k |
| question_date | stringdate | 2008-08-02 15:11:16 | 2016-10-18 06:16:31 |
| answer_id | int64 | 536 | 40.1M |
| answer_score | int64 | 7 | 8.38k |
| answer_date | stringdate | 2008-08-02 18:49:07 | 2016-10-18 06:19:33 |
| tags | listlengths | 1 | 5 |
| question_body_md | stringlengths | 15 | 30.2k |
| answer_body_md | stringlengths | 11 | 27.8k |
PEP 0492 - Python 3.5 async keyword
31,291,129
31
2015-07-08T11:19:26Z
31,291,832
39
2015-07-08T11:52:10Z
[ "python", "python-3.x", "asynchronous", "async-await", "coroutine" ]
[PEP 0492](https://www.python.org/dev/peps/pep-0492/) adds the `async` keyword to Python 3.5. How does Python benefit from the use of this keyword? The example that is given for a coroutine is ``` async def read_data(db): data = await db.fetch('SELECT ...') ``` According to the docs this achieves > suspend[ing] execution of read\_data coroutine until db.fetch awaitable completes and returns the result data. Does this `async` keyword actually involve creation of new threads or perhaps the use of an existing reserved async thread? In the event that `async` does use a reserved thread, is it a single shared thread, or does each coroutine get its own?
No, co-routines do not involve any kind of threads. Co-routines allow for *cooperative* multi-tasking in that each co-routine yields control voluntarily. Threads on the other hand switch between units at arbitrary points. Up to Python 3.4, it was possible to write co-routines using *generators*; by using `yield` or `yield from` expressions in a function body you create a generator object instead, where code is only executed when you iterate over the generator. Together with additional event loop libraries (such as [`asyncio`](https://docs.python.org/3/library/asyncio.html)) you could write co-routines that would signal to an event loop that they were going to be busy (waiting for I/O perhaps) and that another co-routine could be run in the meantime: ``` import asyncio import datetime @asyncio.coroutine def display_date(loop): end_time = loop.time() + 5.0 while True: print(datetime.datetime.now()) if (loop.time() + 1.0) >= end_time: break yield from asyncio.sleep(1) ``` Every time the above code advances to the `yield from asyncio.sleep(1)` line, the event loop is free to run a different co-routine, because this routine is not going to do anything for the next second *anyway*. Because generators can be used for all sorts of tasks, not just co-routines, and because writing a co-routine using generator syntax can be confusing to new-comers, the PEP introduces new syntax that makes it *clearer* that you are writing a co-routine. With the PEP implemented, the above sample could be written instead as: ``` async def display_date(loop): end_time = loop.time() + 5.0 while True: print(datetime.datetime.now()) if (loop.time() + 1.0) >= end_time: break await asyncio.sleep(1) ``` The resulting *`coroutine`* object still needs an event loop to drive the co-routines; an event loop would `await` on each co-routine in turn, which would execute those co-routines that are not currently `await`ing for something to complete. 
The advantage is that with native support, you can also introduce additional syntax to support asynchronous context managers and iterators. Entering and exiting a context manager, or looping over an iterator, can then become additional points in your co-routine that signal that other code can run instead, because something is waiting again.
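The two protocols mentioned above can be sketched as follows (a minimal illustration assuming Python 3.5.2+ semantics and the `asyncio.run()` helper added in 3.7; the `Pause` and `Countdown` classes are hypothetical examples, not part of any library):

```python
import asyncio


class Pause:
    """Async context manager: __aenter__/__aexit__ are awaited."""
    async def __aenter__(self):
        await asyncio.sleep(0)   # other coroutines may run here
        return self

    async def __aexit__(self, exc_type, exc, tb):
        await asyncio.sleep(0)   # ...and here


class Countdown:
    """Async iterator: __anext__ is awaited on every step."""
    def __init__(self, n):
        self.n = n

    def __aiter__(self):
        return self

    async def __anext__(self):
        if self.n <= 0:
            raise StopAsyncIteration
        self.n -= 1
        await asyncio.sleep(0)   # each iteration is a scheduling point
        return self.n


async def main():
    results = []
    async with Pause():
        async for i in Countdown(3):
            results.append(i)
    return results

print(asyncio.run(main()))  # [2, 1, 0]
```

Every `await` hidden inside `__aenter__`, `__aexit__` and `__anext__` is another place where the event loop may switch to a different co-routine.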
How to load an existing ipython notebook?
31,292,739
10
2015-07-08T12:30:15Z
31,292,812
19
2015-07-08T12:33:32Z
[ "python", "ipython" ]
I'm missing something really obvious here but I want to load an existing .ipynb file in my own ipython session. I've tried the following: ``` $ ipython dream.ipynb --------------------------------------------------------------------------- NameError Traceback (most recent call last) /home/me/develop/deepdream/dream.ipynb in <module>() 33 { 34 "cell_type": "code", ---> 35 "collapsed": false, 36 "input": [ 37 "# imports and basic notebook setup\n", NameError: name 'false' is not defined ``` (Google's deepdream notebook) but the json syntax isn't good? I am using the ipython from Anaconda 2.3.0, python 3.4.0 and ipython qtconsole 3.2.0.
You must [start `ipython notebook`](https://ipython.org/ipython-doc/3/notebook/notebook.html#starting-the-notebook-server), otherwise `ipython` tries to execute `dream.ipynb` as though it were a file containing Python code: ``` ipython notebook dream.ipynb ```
In python, why is s*3 faster than s+s+s?
31,295,017
3
2015-07-08T14:01:07Z
31,295,106
15
2015-07-08T14:05:03Z
[ "python", "string", "operators" ]
I was going through Google's Python intro and came across the statement that `s * 3` is faster than doing `s + s + s`, where `s` is of type `string`. Any reason for that to happen? I googled and found [which is faster s+='a' or s=s+'a' in python](http://stackoverflow.com/questions/27287428/which-is-faster-s-a-or-s-sa-in-python), but that didn't help.
Because `s * 3` is one operation, whereas `s + s + s` is two operations; it's really `(s + s) + s`, creating an additional string object that then gets discarded. You can see the difference by using [`dis`](https://docs.python.org/2/library/dis.html) to look at the bytecode each generates: `s + s + s`: ``` 3 0 LOAD_FAST 0 (s) 3 LOAD_FAST 0 (s) 6 BINARY_ADD 7 LOAD_FAST 0 (s) 10 BINARY_ADD 11 RETURN_VALUE ``` `s * 3`: ``` 3 0 LOAD_FAST 0 (s) 3 LOAD_CONST 1 (3) 6 BINARY_MULTIPLY 7 RETURN_VALUE ```
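A quick sketch checking both claims (absolute timings vary by machine, but `s + s + s` pays for an extra intermediate string object):

```python
import timeit

s = "hello"
assert s * 3 == s + s + s  # identical result either way

# Two bytecode operations vs one; measure the difference:
t_add = timeit.timeit("s + s + s", globals={"s": s}, number=100_000)
t_mul = timeit.timeit("s * 3", globals={"s": s}, number=100_000)
print("add: %.4fs  mul: %.4fs" % (t_add, t_mul))
```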
Python variables lose scope inside generator?
31,298,428
10
2015-07-08T16:27:03Z
31,298,828
7
2015-07-08T16:48:26Z
[ "python", "scope", "generator" ]
The code below returns `NameError: global name 'self' is not defined`. Why? ``` lengths = [3, 10] self.fooDict = getOrderedDict(stuff) if not all(0 < l < len(self.fooDict) for l in lengths): raise ValueError("Bad lengths!") ``` Note that `self.fooDict` is an OrderedDict (imported from the collections module) that has 35 entries. When I try to debug, the code below executes without error: ``` (Pdb) len(self.dataDict) 35 (Pdb) all(0 < size < 35 for size in lengths) True ``` But the debugging code below gives me the original error: ``` (Pdb) baz = len(self.dataDict) (Pdb) all(0 < size < baz for size in lengths) NameError: global name 'baz' is not defined ```
## Short answer and workaround You've run into a limitation of the debugger. Expressions entered into the debugger cannot use *non-locally scoped values* because the debugger cannot create the required closures. You could instead create a *function* to run your generator, thus creating a new scope at the same time: ``` def _test(baz, lengths): return all(0 < size < baz for size in lengths) _test(len(self.dataDict), lengths) ``` Note that this applies to set and dictionary comprehensions as well, and in Python 3, list comprehensions. ## The long answer, why this happens Generator expressions (and set, dict and Python 3 list comprehensions) run in a new, nested namespace. The name `baz` in your generator expression is not a local in that namespace, so Python has to find it somewhere else. *At compile time* Python determines where to source that name from. It'll search from the scopes the *compiler* has available and if there are no matches, declares the name a global. Here are two generator expressions to illustrate: ``` def function(some_iterable): gen1 = (var == spam for var in some_iterable) ham = 'bar' gen2 = (var == ham for var in some_iterable) return gen1, gen2 ``` The name `spam` is not found in the parent scope, so the compiler marks it as a global: ``` >>> dis.dis(function.__code__.co_consts[1]) # gen1 2 0 LOAD_FAST 0 (.0) >> 3 FOR_ITER 17 (to 23) 6 STORE_FAST 1 (var) 9 LOAD_FAST 1 (var) 12 LOAD_GLOBAL 0 (spam) 15 COMPARE_OP 2 (==) 18 YIELD_VALUE 19 POP_TOP 20 JUMP_ABSOLUTE 3 >> 23 LOAD_CONST 0 (None) 26 RETURN_VALUE ``` The opcode at index 12 uses `LOAD_GLOBAL` to load the `spam` name. The name `ham` *is* found in the function scope, so the compiler generates bytecode to look up the name as a closure from the function. *At the same time* the name `ham` is marked as a closure; the variable is treated differently by the code generated for `function` so that you can still reference it when the function has returned. 
``` >>> dis.dis(function.__code__.co_consts[3]) # gen2 4 0 LOAD_FAST 0 (.0) >> 3 FOR_ITER 17 (to 23) 6 STORE_FAST 1 (var) 9 LOAD_FAST 1 (var) 12 LOAD_DEREF 0 (ham) 15 COMPARE_OP 2 (==) 18 YIELD_VALUE 19 POP_TOP 20 JUMP_ABSOLUTE 3 >> 23 LOAD_CONST 0 (None) 26 RETURN_VALUE >>> function.__code__.co_cellvars # closure cells ('ham',) ``` The name `ham` is loaded with a `LOAD_DEREF` opcode, and the function code object has listed that name as a closure. When you disassemble `function` you'll find, among other bytecode: ``` >>> dis.dis(function) # .... 4 22 LOAD_CLOSURE 0 (ham) 25 BUILD_TUPLE 1 28 LOAD_CONST 3 (<code object <genexpr> at 0x1074a87b0, file "<stdin>", line 4>) 31 MAKE_CLOSURE 0 34 LOAD_FAST 0 (some_iterable) 37 GET_ITER 38 CALL_FUNCTION 1 41 STORE_FAST 2 (gen2) # ... ``` where the `LOAD_CLOSURE` and `MAKE_CLOSURE` bytecodes create a closure for `ham` to be used by the generator code object. When you run arbitrary expressions in the debugger, the compiler has no access to the namespace you are debugging. More importantly, it cannot *alter* that namespace to create a closure. Thus, you cannot use anything but *globals* in your generator expressions.
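The same compile-time decision can be observed without the disassembler; a minimal runnable sketch reusing the `function` from above:

```python
def function(some_iterable):
    gen1 = (var == spam for var in some_iterable)  # 'spam': global lookup
    ham = 'bar'
    gen2 = (var == ham for var in some_iterable)   # 'ham': closure cell
    return gen1, gen2

g1, g2 = function(['bar', 'baz'])
print(list(g2))   # [True, False] -- the closure over 'ham' still works
try:
    list(g1)      # fails: no global named 'spam' exists
except NameError as exc:
    print(exc)
```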
Difference between [y for y in x.split('_')] and x.split('_')
31,303,026
2
2015-07-08T20:26:59Z
31,303,073
8
2015-07-08T20:29:44Z
[ "python", "string", "list", "split" ]
I've found [this question](http://stackoverflow.com/q/3668964/1937994) and one thing in the original code bugs me: ``` >>> x="Alpha_beta_Gamma" >>> words = [y for y in x.split('_')] ``` What's the point of doing this: `[y for y in x.split('_')]`? `split` already returns a list and items aren't manipulated in this list comprehension. Am I missing something?
You're correct; there's no point in doing that. However, it's often seen in combination with some kind of filter or other structure, such as `[y for y in x.split('_') if y.isalpha()]`.
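A short demonstration, using a hypothetical string with one non-alphabetic token so the effect of the filter is visible:

```python
x = "Alpha_beta2_Gamma"

# Without a filter the comprehension is just a no-op copy of the split result:
assert [y for y in x.split('_')] == x.split('_')

# With a filter it starts doing real work:
print([y for y in x.split('_') if y.isalpha()])  # ['Alpha', 'Gamma']
```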
astropy.io fits efficient element access of a large table
31,315,325
2
2015-07-09T10:54:41Z
31,319,385
7
2015-07-09T13:44:39Z
[ "python", "arrays", "fits", "astropy" ]
I am trying to extract data from a binary table in a FITS file using Python and astropy.io. The table contains an events array with over 2 million events. What I want to do is store the TIME values of certain events in an array, so I can then do analysis on that array. The problem I have is that, whereas in fortran (using FITSIO) the same operation takes maybe a couple of seconds on a much slower processor, the exact same operation in Python using astropy.io is taking several minutes. I would like to know where exactly the bottleneck is, and if there is a more efficient way to access the individual elements in order to determine whether or not to store each time value in the new array. Here is the code I have so far: ``` from astropy.io import fits minenergy=0.3 maxenergy=0.4 xcen=20000 ycen=20000 radius=50 datafile=fits.open('datafile.fits') events=datafile['EVENTS'].data datafile.close() times=[] for i in range(len(events)): energy=events['PI'][i] if energy<maxenergy*1000: if energy>minenergy*1000: x=events['X'][i] y=events['Y'][i] radius2=(x-xcen)*(x-xcen)+(y-ycen)*(y-ycen) if radius2<=radius*radius: times.append(events['TIME'][i]) print times ``` Any help would be appreciated. I am an ok programmer in other languages, but I have not really had to worry about efficiency in Python before. The reason I have chosen to do this in Python now is that I was using fortran with both FITSIO and PGPLOT, as well as some routines from Numerical Recipes, but the newish fortran compiler I have on this machine cannot be persuaded to produce a properly working program (there are some issues of 32- vs. 64-bit, etc.). Python seems to have all the functionality I need (FITS I/O, plotting, etc), but if it takes forever to access the individual elements in a list, I will have to find another solution. Thanks very much.
You need to do this using numpy vector operations. Without special tools like numba, doing large loops like you've done will always be slow in Python because it is an interpreted language. Your program should look more like: ``` energy = events['PI'] / 1000. e_ok = (energy > minenergy) & (energy < maxenergy) rad2 = (events['X'][e_ok] - xcen)**2 + (events['Y'][e_ok] - ycen)**2 r_ok = rad2 <= radius**2 times = events['TIME'][e_ok][r_ok] ``` This should have performance comparable to Fortran. You can also filter the entire event table, for instance: ``` events_filt = events[e_ok][r_ok] times = events_filt['TIME'] ```
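A self-contained sketch of the same masking idea on a tiny made-up event table (the column values below are invented purely for illustration):

```python
import numpy as np

# Miniature stand-ins for the FITS columns
pi = np.array([250, 350, 390, 500])         # energy in eV
x  = np.array([20010, 20030, 20100, 20020])
y  = np.array([20040, 20000, 20000, 20010])
t  = np.array([1.0, 2.0, 3.0, 4.0])

# One boolean mask per cut, applied to whole arrays at once
e_ok = (pi > 0.3 * 1000) & (pi < 0.4 * 1000)
rad2 = (x[e_ok] - 20000) ** 2 + (y[e_ok] - 20000) ** 2
r_ok = rad2 <= 50 ** 2
times = t[e_ok][r_ok]
print(times)  # [2.]
```

Only the event at t=2.0 passes both the energy window and the radius cut.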
Why doesn't the value in for loop change?
31,315,514
3
2015-07-09T11:02:16Z
31,315,569
8
2015-07-09T11:04:32Z
[ "python" ]
Why does the value of `range(len(whole)/2)` not change after `whole` is modified? And what do you call `range(len...)` value in for-loop? ``` whole = 'selenium' for i in range(len(whole)/2): print whole whole = whole[1:-1] ``` output: ``` selenium eleniu leni en ```
The `range()` produces a list of integers *once*. That list is then iterated over by the `for` loop. It is not re-created each iteration; that would be very inefficient. You could use a `while` loop instead: ``` i = 0 while i < (len(whole) / 2): print whole whole = whole[1:-1] i += 1 ``` The `while` condition is re-tested on each loop iteration.
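The same behaviour, sketched in Python 3 syntax (`//` for integer division) so the one-time evaluation is explicit:

```python
whole = 'selenium'
n = len(whole) // 2   # evaluated exactly once, before the loop starts: 4
seen = []
for i in range(n):
    seen.append(whole)
    whole = whole[1:-1]   # shrinking 'whole' does not change 'n'
print(seen)  # ['selenium', 'eleniu', 'leni', 'en']
```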
Setting DataFrame values with enlargement
31,319,888
7
2015-07-09T14:05:01Z
36,555,489
12
2016-04-11T17:37:05Z
[ "python", "pandas" ]
I have two `DataFrames` (with `DatetimeIndex`) and want to update the first frame (the older one) with data from the second frame (the newer one). The new frame may contain more recent data for rows already contained in the old frame. In this case, data in the old frame should be overwritten with data from the new frame. Also the newer frame may have more columns / rows than the first one. In this case the old frame should be enlarged by the data in the new frame. Pandas [docs](http://pandas.pydata.org/pandas-docs/stable/indexing.html#setting-with-enlargement) state that "The `.loc/.ix/[]` operations can perform enlargement when setting a non-existant key for that axis" and "a DataFrame can be enlarged on either axis via `.loc`". However this doesn't seem to work and throws a `KeyError`. Example: ``` In [195]: df1 Out[195]: A B C 2015-07-09 12:00:00 1 1 1 2015-07-09 13:00:00 1 1 1 2015-07-09 14:00:00 1 1 1 2015-07-09 15:00:00 1 1 1 In [196]: df2 Out[196]: A B C D 2015-07-09 14:00:00 2 2 2 2 2015-07-09 15:00:00 2 2 2 2 2015-07-09 16:00:00 2 2 2 2 2015-07-09 17:00:00 2 2 2 2 In [197]: df1.loc[df2.index] = df2 --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-197-74e630e87cf8> in <module>() ----> 1 df1.loc[df2.index] = df2 /.../pandas/core/indexing.pyc in __setitem__(self, key, value) 112 113 def __setitem__(self, key, value): --> 114 indexer = self._get_setitem_indexer(key) 115 self._setitem_with_indexer(indexer, value) 116 /.../pandas/core/indexing.pyc in _get_setitem_indexer(self, key) 107 108 try: --> 109 return self._convert_to_indexer(key, is_setter=True) 110 except TypeError: 111 raise IndexingError(key) /.../pandas/core/indexing.pyc in _convert_to_indexer(self, obj, axis, is_setter) 1110 mask = check == -1 1111 if mask.any(): -> 1112 raise KeyError('%s not in index' % objarr[mask]) 1113 1114 return _values_from_object(indexer) KeyError: 
"['2015-07-09T18:00:00.000000000+0200' '2015-07-09T19:00:00.000000000+0200'] not in index" ``` What is the best way (with respect to performance, as my real data is much larger) to achieve the desired updated and enlarged DataFrame? This is the result I would like to see: ``` A B C D 2015-07-09 12:00:00 1 1 1 NaN 2015-07-09 13:00:00 1 1 1 NaN 2015-07-09 14:00:00 2 2 2 2 2015-07-09 15:00:00 2 2 2 2 2015-07-09 16:00:00 2 2 2 2 2015-07-09 17:00:00 2 2 2 2 ```
`df2.combine_first(df1)` ([documentation](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.combine_first.html)) seems to do exactly what you want; see the code snippet and output below. ``` import pandas as pd print 'pandas-version: ', pd.__version__ df1 = pd.DataFrame.from_records([('2015-07-09 12:00:00',1,1,1), ('2015-07-09 13:00:00',1,1,1), ('2015-07-09 14:00:00',1,1,1), ('2015-07-09 15:00:00',1,1,1)], columns=['Dt', 'A', 'B', 'C']).set_index('Dt') # print df1 df2 = pd.DataFrame.from_records([('2015-07-09 14:00:00',2,2,2,2), ('2015-07-09 15:00:00',2,2,2,2), ('2015-07-09 16:00:00',2,2,2,2), ('2015-07-09 17:00:00',2,2,2,2),], columns=['Dt', 'A', 'B', 'C', 'D']).set_index('Dt') res_combine1st = df2.combine_first(df1) print res_combine1st ``` ## output ``` pandas-version: 0.15.2 A B C D Dt 2015-07-09 12:00:00 1 1 1 NaN 2015-07-09 13:00:00 1 1 1 NaN 2015-07-09 14:00:00 2 2 2 2 2015-07-09 15:00:00 2 2 2 2 2015-07-09 16:00:00 2 2 2 2 2015-07-09 17:00:00 2 2 2 2 ```
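An alternative sketch reaching the same result with `reindex` plus `update`; the assumption here is `DataFrame.update`'s documented behaviour of overwriting in place wherever the other frame has non-NaN values:

```python
import pandas as pd

idx1 = pd.to_datetime(['2015-07-09 12:00', '2015-07-09 13:00',
                       '2015-07-09 14:00', '2015-07-09 15:00'])
idx2 = pd.to_datetime(['2015-07-09 14:00', '2015-07-09 15:00',
                       '2015-07-09 16:00', '2015-07-09 17:00'])
df1 = pd.DataFrame(1, index=idx1, columns=list('ABC'))
df2 = pd.DataFrame(2, index=idx2, columns=list('ABCD'))

# Enlarge df1 to the union of both axes, then overwrite with the newer data
out = df1.reindex(index=df1.index.union(df2.index),
                  columns=df1.columns.union(df2.columns))
out.update(df2)
print(out)
```

Rows only in `df1` keep their values (with `NaN` in the new column `D`), rows covered by `df2` are overwritten, and rows only in `df2` are appended.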
Doc2vec : How to get document vectors
31,321,209
15
2015-07-09T14:57:45Z
31,325,297
17
2015-07-09T18:19:46Z
[ "python", "gensim", "word2vec" ]
How to get the document vectors of two text documents using Doc2vec? I am new to this, so it would be helpful if someone could point me in the right direction or help me with some tutorial. I am using the gensim Python library. ``` doc1=["This is a sentence","This is another sentence"] documents1=[doc.strip().split(" ") for doc in doc1 ] model = doc2vec.Doc2Vec(documents1, size = 100, window = 300, min_count = 10, workers=4) ``` I get `AttributeError: 'list' object has no attribute 'words'` whenever I run this.
``` doc1=["This is a sentence","This is another sentence"] documents1=[doc.strip().split(" ") for doc in doc1] model = doc2vec.Doc2Vec(documents1, size = 100, window = 300, min_count = 10, workers=4) ``` I got `AttributeError: 'list' object has no attribute 'words'` because the input documents to `Doc2Vec()` were not in the correct `LabeledSentence` format. I hope the example below will help you understand the format. ``` documents = LabeledSentence(words=[u'some', u'words', u'here'], labels=[u'SENT_1']) ``` More details are here: <http://rare-technologies.com/doc2vec-tutorial/> However, I solved the problem by taking the input data from a file using `TaggedLineDocument()`. File format: one document = one line = one `TaggedDocument` object. Words are expected to be already preprocessed and separated by whitespace; tags are constructed automatically from the document line number. ``` sentences=doc2vec.TaggedLineDocument(file_path) model = doc2vec.Doc2Vec(sentences, size = 100, window = 300, min_count = 10, workers=4) ``` To get a document vector, you can use `docvecs`. More details here: <https://radimrehurek.com/gensim/models/doc2vec.html#gensim.models.doc2vec.TaggedDocument> ``` docvec = model.docvecs[99] ``` where 99 is the id of the document whose vector we want. If the labels are in integer format (the default if you load using `TaggedLineDocument()`), use an integer id directly like I did. If the labels are in string format, use `"SENT_99"`. This is similar to Word2vec.
Doc2vec : How to get document vectors
31,321,209
15
2015-07-09T14:57:45Z
33,403,307
17
2015-10-28T23:21:45Z
[ "python", "gensim", "word2vec" ]
How to get the document vectors of two text documents using Doc2vec? I am new to this, so it would be helpful if someone could point me in the right direction or help me with some tutorial. I am using the gensim Python library. ``` doc1=["This is a sentence","This is another sentence"] documents1=[doc.strip().split(" ") for doc in doc1 ] model = doc2vec.Doc2Vec(documents1, size = 100, window = 300, min_count = 10, workers=4) ``` I get `AttributeError: 'list' object has no attribute 'words'` whenever I run this.
**Gensim was updated**: the syntax of `LabeledSentence` no longer contains **labels**; there are now **tags** - see the documentation for LabeledSentence: <https://radimrehurek.com/gensim/models/doc2vec.html> However, @bee2502 was right with ``` docvec = model.docvecs[99] ``` It should give the value of the 100th document vector for the trained model; it works with both integers and strings.
get_dummies python memory error
31,321,892
3
2015-07-09T15:27:48Z
31,324,037
12
2015-07-09T17:08:58Z
[ "python", "pandas" ]
I'm relatively new to Python and I have a little problem with a data set. The data set has 400,000 rows and 300 variables. I have to get dummy variables for a categorical variable with 3000+ different items. At the end I want to end up with a data set with 3300 variables or features so that I can train a RandomForest model. Here is what I've tried to do: ``` df = pd.concat([df, pd.get_dummies(df['itemID'],prefix = 'itemID_')], axis=1) ``` When I do that I always get a memory error. Is there a limit to the number of variables I can have? If I do it with only the first 1000 rows (which have 374 different categories) it works fine. Does anyone have a solution for my problem? The machine I'm using is an Intel i7 with 8 GB RAM. Thank you.
**update:** looks like get\_dummies is going to be returning integers by default, starting with version 0.19.0 (<https://github.com/pydata/pandas/issues/8725>) Here are a couple of possibilities to try. Both will reduce the memory footprint of the dataframe substantially, but you could still run into memory issues later. It's hard to predict; you'll just have to try. (note that I am simplifying the output of `info()` below) ``` df = pd.DataFrame({ 'itemID': np.random.randint(1,4,100) }) pd.concat([df, pd.get_dummies(df['itemID'],prefix = 'itemID_')], axis=1).info() itemID 100 non-null int32 itemID__1 100 non-null float64 itemID__2 100 non-null float64 itemID__3 100 non-null float64 memory usage: 3.5 KB ``` Here's our baseline. Each dummy column takes up 800 bytes because the sample data has 100 rows and `get_dummies` appears to default to float64 (8 bytes). This seems like an unnecessarily inefficient way to store dummies, as you could use as little as a bit to do it, but there may be some reason for that which I'm not aware of. So, first attempt: just change to a one-byte integer (this doesn't seem to be an option for `get_dummies`, so it has to be done as a conversion with `astype(np.int8)`). ``` pd.concat([df, pd.get_dummies(df['itemID'],prefix = 'itemID_').astype(np.int8)], axis=1).info() itemID 100 non-null int32 itemID__1 100 non-null int8 itemID__2 100 non-null int8 itemID__3 100 non-null int8 memory usage: 1.5 KB ``` Each dummy column now takes up 1/8 the memory as before. Alternatively, you can use the `sparse` option of `get_dummies`. ``` pd.concat([df, pd.get_dummies(df['itemID'],prefix = 'itemID_',sparse=True)], axis=1).info() itemID 100 non-null int32 itemID__1 100 non-null float64 itemID__2 100 non-null float64 itemID__3 100 non-null float64 memory usage: 2.0 KB ``` Fairly comparable savings. The `info()` output somewhat hides the way the savings are occurring, but you can look at the value of memory usage to see the total savings. 
It's hard to say which of these will work better in practice (if either is a practical solution at all), so you'll just need to give them each a try. In theory you could even combine the two approaches, but I wouldn't try that unless it turns out each approach works on its own.
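Later pandas releases (0.23+) added a `dtype=` argument to `get_dummies` itself, which removes the need for the `astype()` round-trip - a sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'itemID': np.random.randint(1, 4, 100)})

# Ask get_dummies for int8 columns directly, no conversion step needed
dummies = pd.get_dummies(df['itemID'], prefix='itemID', dtype=np.int8)

print(dummies.dtypes.unique())        # all columns are int8
print(dummies.memory_usage().sum())   # far below the float64 equivalent
```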
Scikit-learn: How to obtain True Positive, True Negative, False Positive and False Negative
31,324,218
3
2015-07-09T17:19:02Z
31,324,768
10
2015-07-09T17:50:02Z
[ "python", "machine-learning", "scikit-learn", "classification", "supervised-learning" ]
I am new to machine learning and to scikit-learn. **My problem:** (Please correct any kind of misconception) I have a dataset which is a BIG JSON; I retrieve it and store it in a `trainList` variable. I pre-process it in order to be able to work with it. Once I have done that, I start the classification: 1. I use the kfold cross validation method in order to obtain the mean accuracy and I train a classifier. 2. I make the predictions and I obtain the accuracy and confusion matrix of that fold. 3. After this, I would like to obtain the True Positive (TP), True Negative (TN), False Positive (FP) and False Negative (FN) values. I would use these parameters to obtain the sensitivity and the specificity, and I would send them together with the total of the TPs to an HTML page in order to show a chart with the TPs of each label. **Code:** The variables I have for the moment: ``` trainList #It is a list with all the data of my dataset in JSON form labelList #It is a list with all the labels of my data ``` Most part of the method: ``` #I transform the data from JSON form to a numerical one X=vec.fit_transform(trainList) #I scale the matrix (don't know why but without it, it makes an error) X=preprocessing.scale(X.toarray()) #I generate a KFold in order to make cross validation kf = KFold(len(X), n_folds=10, indices=True, shuffle=True, random_state=1) #I start the cross validation for train_indices, test_indices in kf: X_train=[X[ii] for ii in train_indices] X_test=[X[ii] for ii in test_indices] y_train=[listaLabels[ii] for ii in train_indices] y_test=[listaLabels[ii] for ii in test_indices] #I train the classifier trained=qda.fit(X_train,y_train) #I make the predictions predicted=qda.predict(X_test) #I obtain the accuracy of this fold ac=accuracy_score(predicted,y_test) #I obtain the confusion matrix cm=confusion_matrix(y_test, predicted) #I should calculate the TP,TN, FP and FN #I don't know how to continue ```
You can obtain all of these counts from the confusion matrix. For a binary problem with labels `[0, 1]`, scikit-learn's `confusion_matrix` puts the *true* labels on the rows and the *predicted* labels on the columns, so the structure of the (2x2) matrix is as follows ``` TN|FP FN|TP ``` So ``` TN = cm[0][0] FP = cm[0][1] FN = cm[1][0] TP = cm[1][1] ``` More details at <https://en.wikipedia.org/wiki/Confusion_matrix>
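If you'd rather not depend on the matrix layout at all, the four counts can also be computed directly from the label lists - a minimal sketch with made-up labels:

```python
y_test    = [0, 0, 1, 1, 1, 0, 1]
predicted = [0, 1, 1, 0, 1, 0, 1]

# Count each (true, predicted) combination directly
tp = sum(1 for t, p in zip(y_test, predicted) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(y_test, predicted) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_test, predicted) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_test, predicted) if t == 1 and p == 0)

sensitivity = tp / float(tp + fn)   # a.k.a. recall of the positive class
specificity = tn / float(tn + fp)
print(tp, tn, fp, fn)               # 3 2 1 1
```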
Finding gradient of a Caffe conv-filter with regards to input
31,324,739
25
2015-07-09T17:48:20Z
31,349,941
7
2015-07-10T20:42:29Z
[ "python", "c++", "neural-network", "deep-learning", "caffe" ]
I need to find the gradient with regards to the input layer for a single convolutional filter in a convolutional neural network (CNN) as a way to [visualize the filters](http://research.google.com/pubs/pub38115.html). Given a trained network in the Python interface of [Caffe](http://caffe.berkeleyvision.org/) such as the one in [this example](http://nbviewer.ipython.org/github/BVLC/caffe/blob/tutorial/examples/01-learning-lenet.ipynb), how can I then find the gradient of a conv-filter with respect to the data in the input layer? **Edit:** Based on the [answer by cesans](http://stackoverflow.com/a/31349941/1714410), I added the code below. The dimensions of my input layer are `[8, 8, 7, 96]`. My first conv-layer, `conv1`, has 11 filters with a size of `1x5`, resulting in the dimensions `[8, 11, 7, 92]`. ``` net = solver.net diffs = net.backward(diffs=['data', 'conv1']) print diffs.keys() # >> ['conv1', 'data'] print diffs['data'].shape # >> (8, 8, 7, 96) print diffs['conv1'].shape # >> (8, 11, 7, 92) ``` As you can see from the output, the dimensions of the arrays returned by `net.backward()` are equal to the dimensions of my layers in Caffe. After some testing I've found that this output is the gradients of the loss with respect to the `data` layer and the `conv1` layer, respectively. However, my question was how to find the gradient of a single conv-filter with respect to the data in the input layer, which is something else. How can I achieve this?
You can get the gradients with respect to any layer when you run the `backward()` pass. Just specify the list of layers when calling the function. To show the gradients with respect to the data layer: ``` net.forward() diffs = net.backward(diffs=['data', 'conv1']) data_point = 16 plt.imshow(diffs['data'][data_point].squeeze()) ``` In some cases you may want to force all layers to carry out the backward pass; look at the `force_backward` parameter of the model. <https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto>
Finding gradient of a Caffe conv-filter with regards to input
31,324,739
25
2015-07-09T17:48:20Z
31,847,179
15
2015-08-06T05:02:05Z
[ "python", "c++", "neural-network", "deep-learning", "caffe" ]
I need to find the gradient with regards to the input layer for a single convolutional filter in a convolutional neural network (CNN) as a way to [visualize the filters](http://research.google.com/pubs/pub38115.html). Given a trained network in the Python interface of [Caffe](http://caffe.berkeleyvision.org/) such as the one in [this example](http://nbviewer.ipython.org/github/BVLC/caffe/blob/tutorial/examples/01-learning-lenet.ipynb), how can I then find the gradient of a conv-filter with respect to the data in the input layer? **Edit:** Based on the [answer by cesans](http://stackoverflow.com/a/31349941/1714410), I added the code below. The dimensions of my input layer are `[8, 8, 7, 96]`. My first conv-layer, `conv1`, has 11 filters with a size of `1x5`, resulting in the dimensions `[8, 11, 7, 92]`. ``` net = solver.net diffs = net.backward(diffs=['data', 'conv1']) print diffs.keys() # >> ['conv1', 'data'] print diffs['data'].shape # >> (8, 8, 7, 96) print diffs['conv1'].shape # >> (8, 11, 7, 92) ``` As you can see from the output, the dimensions of the arrays returned by `net.backward()` are equal to the dimensions of my layers in Caffe. After some testing I've found that this output is the gradients of the loss with respect to the `data` layer and the `conv1` layer, respectively. However, my question was how to find the gradient of a single conv-filter with respect to the data in the input layer, which is something else. How can I achieve this?
A Caffe net juggles two "streams" of numbers. The first is the data "stream": images and labels pushed through the net. As these inputs progress through the net they are converted into a high-level representation and eventually into class probability vectors (in classification tasks). The second "stream" holds the parameters of the different layers: the weights of the convolutions, the biases, etc. These numbers/weights are changed and learned during the train phase of the net. Despite the fundamentally different roles these two "streams" play, Caffe nonetheless uses the same data structure, `blob`, to store and manage them. However, for each layer there are two **different** blob vectors, one for each stream. Here's an example that I hope will clarify: ``` import caffe solver = caffe.SGDSolver( PATH_TO_SOLVER_PROTOTXT ) net = solver.net ``` If you now look at ``` net.blobs ``` You will see a dictionary storing a "caffe blob" object for each layer in the net. Each blob has room to store both the data and the gradient ``` net.blobs['data'].data.shape # >> (32, 3, 224, 224) net.blobs['data'].diff.shape # >> (32, 3, 224, 224) ``` And for a convolutional layer: ``` net.blobs['conv1/7x7_s2'].data.shape # >> (32, 64, 112, 112) net.blobs['conv1/7x7_s2'].diff.shape # >> (32, 64, 112, 112) ``` `net.blobs` holds the first data stream; its shape matches that of the input images up to the resulting class probability vector. On the other hand, you can see another member of `net` ``` net.layers ``` This is a caffe vector storing the parameters of the different layers. Looking at the first layer (`'data'` layer): ``` len(net.layers[0].blobs) # >> 0 ``` There are no parameters to store for an input layer. On the other hand, for the first convolutional layer ``` len(net.layers[1].blobs) # >> 2 ``` The net stores one blob for the filter weights and another for the constant bias. 
Here they are ``` net.layers[1].blobs[0].data.shape # >> (64, 3, 7, 7) net.layers[1].blobs[1].data.shape # >> (64,) ``` As you can see, this layer performs 7x7 convolutions on a 3-channel input image and has 64 such filters. Now, how to get the gradients? Well, as you noted ``` diffs = net.backward(diffs=['data','conv1/7x7_s2']) ``` returns the gradients of the *data* stream. We can verify this by ``` np.all( diffs['data'] == net.blobs['data'].diff ) # >> True np.all( diffs['conv1/7x7_s2'] == net.blobs['conv1/7x7_s2'].diff ) # >> True ``` (**TL;DR**) You want the gradients of the parameters; these are stored in `net.layers` with the parameters: ``` net.layers[1].blobs[0].diff.shape # >> (64, 3, 7, 7) net.layers[1].blobs[1].diff.shape # >> (64,) ``` --- To help you map between the names of the layers and their indices into the `net.layers` vector, you can use `net._layer_names`. --- **Update** regarding the use of gradients to visualize filter responses: A gradient is normally defined for a **scalar** function. The loss is a scalar, and therefore you can speak of the gradient of a pixel/filter weight with respect to the scalar loss. This gradient is a single number per pixel/filter weight. If you want to get the input that results in maximal activation of a **specific** internal hidden node, you need an "auxiliary" net whose loss is exactly a measure of the activation of the specific hidden node you want to visualize. Once you have this auxiliary net, you can start from an arbitrary input and change this input based on the gradients of the auxiliary loss with respect to the input layer: ``` update = prev_in + lr * net.blobs['data'].diff ```
TypeError constructor returned NULL while importing pyplot in ssh
31,328,436
6
2015-07-09T21:20:44Z
31,328,665
7
2015-07-09T21:36:15Z
[ "python", "windows", "matplotlib", "ssh" ]
I am having difficulties importing `matplotlib.pyplot` when I am using ssh to access my local lab cluster. Indeed, trying to `import matplotlib.pyplot as plt` is giving me an error. I have tried to just `import matplotlib as mpl` and this is fine, so something specifically about importing `pyplot` is wrong here. The last line of the trace-back says `File '/usr/lib64/python2.7/site-packages/matplotlib/backends/backend_gtk3.py', line 58, in <module> cursors.MOVE : Gdk.Cursor.new(Gdk.CursorType.FLEUR), TypeError: constructor returned NULL` Can anyone identify the error here? Or, is there a way I can circumvent the error and still use the `pyplot` tools? If it makes a difference, I am using Windows 8.
You are failing to load GTK. Most likely, it is because you do not have access to an X11 server and can't draw windows (which is what GTK does). Try `matplotlib.use("Pdf")` before importing `pyplot`. Then you won't need to load GTK and it won't fail. You can still make plots and save them to pdf but you can't draw them on screen. Alternatively, you could try forwarding X11. Then you could see the windows as if you were on the host machine. With a Linux client, this is done by logging on via `ssh -X hostname`. With a Windows client, [it can be done](http://superuser.com/questions/119792/how-to-use-x11-forwarding-with-putty) but not as smoothly.
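As a sketch of the non-interactive route (assuming matplotlib is installed on the cluster; the plot data and filename here are made up):

```python
import matplotlib
matplotlib.use("Pdf")            # must be called before importing pyplot
import matplotlib.pyplot as plt  # no GTK/X11 connection is attempted now

plt.plot([1, 2, 3], [4, 5, 6])
plt.savefig("myplot.pdf")        # copy the file back (e.g. with scp) to view it
```

The same idea works with the `"Agg"` backend if you prefer PNG output from `plt.savefig("myplot.png")`.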
Unicode Encode Error when writing pandas df to csv
31,331,358
3
2015-07-10T02:09:04Z
31,331,449
11
2015-07-10T02:23:38Z
[ "python", "pandas", "export-to-csv", "python-unicode" ]
I cleaned 400 excel files and read them into python using pandas and appended all the raw data into one big df. Then when I try to export it to a csv: ``` df.to_csv("path",header=True,index=False) ``` I get this error: ``` UnicodeEncodeError: 'ascii' codec can't encode character u'\xc7' in position 20: ordinal not in range(128) ``` Can someone suggest a way to fix this and what it means? Thanks
You have `unicode` values in your DataFrame. Files store bytes, which means all `unicode` values have to be encoded into bytes before they can be stored in a file. You have to specify an encoding, such as `utf-8`. For example, ``` df.to_csv('path', header=True, index=False, encoding='utf-8') ``` If you don't specify an encoding, then the encoding used by `df.to_csv` defaults to `ascii` in Python2, or `utf-8` in Python3.
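To see why the error occurs at all, here is a self-contained sketch of the underlying encode step (the sample string is made up; `u'\xc7'` is the `Ç` from your traceback):

```python
s = u'Re\xe7u \xc7a'  # text containing non-ASCII characters such as 'ç' and 'Ç'

try:
    s.encode('ascii')  # the implicit default under Python 2 -- fails
except UnicodeEncodeError as exc:
    print(exc)  # 'ascii' codec can't encode character ...

encoded = s.encode('utf-8')          # utf-8 can represent every code point
assert encoded.decode('utf-8') == s  # and it round-trips losslessly
```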
Adding a new column in Data Frame derived from other columns (Spark)
31,333,437
3
2015-07-10T05:55:35Z
31,338,072
7
2015-07-10T10:05:11Z
[ "python", "apache-spark", "apache-spark-sql", "pyspark" ]
I'm using Spark 1.3.0 and Python. I have a dataframe and I wish to add an additional column which is derived from other columns. Like this, ``` >>old_df.columns [col_1, col_2, ..., col_m] >>new_df.columns [col_1, col_2, ..., col_m, col_n] ``` where ``` col_n = col_3 - col_4 ``` How do I do this in PySpark?
One way to achieve that is to use `withColumn` method: ``` old_df = sqlContext.createDataFrame(sc.parallelize( [(0, 1), (1, 3), (2, 5)]), ('col_1', 'col_2')) new_df = old_df.withColumn('col_n', old_df.col_1 - old_df.col_2) ``` Alternatively you can use SQL on a registered table: ``` old_df.registerTempTable('old_df') new_df = sqlContext.sql('SELECT *, col_1 - col_2 AS col_n FROM old_df') ```
Cannot apply DjangoModelPermissions on a view that does not have `.queryset` property or overrides the `.get_queryset()` method
31,335,736
11
2015-07-10T08:16:04Z
31,337,178
20
2015-07-10T09:25:38Z
[ "python", "django", "django-rest-framework" ]
I am getting the error ".accepted\_renderer not set on Response resp api django". I am following the django rest-api tutorial. Django version i am using 1.8.3 I followed the tutorial till first part. It worked properly. But when i continued the 2nd part in sending response, i got an error ``` Cannot apply DjangoModelPermissions on a view that does not have `.queryset` property or overrides the `.get_queryset()` method. ``` Then i tried other ways i got ``` .accepted_renderer not set on Response resp api django ``` Please help me out. I think its permission issue.
You probably have set `DjangoModelPermissions` as a default permission class in your settings. Something like: ``` REST_FRAMEWORK = { 'DEFAULT_PERMISSION_CLASSES': ( 'rest_framework.permissions.DjangoModelPermissions', ) } ``` `DjangoModelPermissions` can only be applied to views that have a `.queryset` property or `.get_queryset()` method. Since Tutorial 2 uses FBVs, you probably need to convert it to a CBV or an easy way is to specify a different permission class for that view. You must be using the `api_view` decorator in your view. You can then define `permissions` like below: ``` from rest_framework.decorators import api_view, permission_classes from rest_framework import permissions @api_view([..]) @permission_classes((permissions.AllowAny,)) def my_view(request) ... ``` To resolve the renderer error, you need to add the corresponding renderer to your settings. ``` REST_FRAMEWORK = { 'DEFAULT_RENDERER_CLASSES': ( 'rest_framework.renderers.<corresponding_renderer>', ... ) } ```
Cannot apply DjangoModelPermissions on a view that does not have `.queryset` property or overrides the `.get_queryset()` method
31,335,736
11
2015-07-10T08:16:04Z
31,338,276
8
2015-07-10T10:16:27Z
[ "python", "django", "django-rest-framework" ]
I am getting the error ".accepted\_renderer not set on Response resp api django". I am following the django rest-api tutorial. Django version i am using 1.8.3 I followed the tutorial till first part. It worked properly. But when i continued the 2nd part in sending response, i got an error ``` Cannot apply DjangoModelPermissions on a view that does not have `.queryset` property or overrides the `.get_queryset()` method. ``` Then i tried other ways i got ``` .accepted_renderer not set on Response resp api django ``` Please help me out. I think its permission issue.
I got it working in another way. My logged-in user was the superuser I had created, so I created another user from the admin, made him a staff user and granted him all the permissions. Then I logged in to the admin as that user. In the settings.py file I changed the code to: ``` REST_FRAMEWORK = { # Use Django's standard `django.contrib.auth` permissions, # or allow read-only access for unauthenticated users. 'DEFAULT_PERMISSION_CLASSES': [ 'rest_framework.permissions.IsAuthenticated', ] } ``` And it worked.
Error handling methodology
31,340,239
5
2015-07-10T12:00:39Z
31,340,289
13
2015-07-10T12:03:02Z
[ "python" ]
In C, if I'm not wrong, the `main` function returns 0 if no errors occurred, and something different from 0 if an error occurs. Is it appropriate to do the same in Python (as long as a function does not have to return any specific value but one to indicate success/failure), or should I instead just handle exceptions?
In Python you shouldn't use the return value to indicate an error. You should use Exceptions. So, either let the exception that fired bubble up, or throw a new one. ``` def check_foo(foo): if foo == bar: do_something(args) try: check_foo(...) except SomeError: # Oops! Failure! something_went_wrong() else: # Yay! Success! everything_went_well() ``` In some cases it makes sense to have functions that return a boolean, but that shouldn't be used to indicate *errors*. This is typically used in routine checks where something may be true or false, and neither is exceptional (i.e. neither is *an error*): ``` def is_foo(foo): return foo == "foo" ```
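If what you are after is the C-style process exit status, the common pattern is to keep exceptions inside the program and translate them into a number exactly once, at the entry point. A minimal sketch (the `ConfigError` class and the file names are made up for illustration):

```python
class ConfigError(Exception):
    """Raised when a configuration file is unusable."""

def load_config(path):
    if not path.endswith('.conf'):
        raise ConfigError('expected a .conf file, got %r' % path)
    return {'path': path}

def main(argv):
    try:
        config = load_config(argv[0] if argv else 'app.conf')
    except ConfigError as exc:
        print('error:', exc)
        return 1  # the C-style failure status
    print('loaded', config)
    return 0  # the C-style success status

# at the very edge of the program the status is handed to the OS:
#     sys.exit(main(sys.argv[1:]))
print(main([]))                 # 0
print(main(['settings.txt']))   # 1
```

Inside the program everything stays exception-based; only the last line converts success/failure into the integer the shell sees.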
python error when initializing a class derived from and abstract one
31,340,339
3
2015-07-10T12:05:11Z
31,340,505
7
2015-07-10T12:13:19Z
[ "python", "inheritance", "abstract-base-class" ]
I have this simple code and I get a strange error: ``` from abc import ABCMeta, abstractmethod class CVIterator(ABCMeta): def __init__(self): self.n = None # the value of n is obtained in the fit method return class KFold_new_version(CVIterator): # new version of KFold def __init__(self, k): assert k > 0, ValueError('cannot have k below 1') self.k = k return cv = KFold_new_version(10) In [4]: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-4-ec56652b1fdc> in <module>() ----> 1 __pyfile = open('''/tmp/py13196IBS''');exec(compile(__pyfile.read(), '''/home/donbeo/Desktop/prova.py''', 'exec'));__pyfile.close() /home/donbeo/Desktop/prova.py in <module>() 19 20 ---> 21 cv = KFold_new_version(10) TypeError: __new__() missing 2 required positional arguments: 'bases' and 'namespace' ``` What am I doing wrong? A theoretical explanation would be appreciated.
You used the `ABCMeta` meta class incorrectly. It is a *meta* class, not a base class. Use it as such. For Python 2, that means assigning it to the `__metaclass__` attribute on the class: ``` class CVIterator(object): __metaclass__ = ABCMeta def __init__(self): self.n = None # the value of n is obtained in the fit method ``` In Python 3, you'd use the `metaclass=...` syntax when defining the class: ``` class CVIterator(metaclass=ABCMeta): def __init__(self): self.n = None # the value of n is obtained in the fit method ```
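For completeness, a runnable sketch of the Python 3 form, with an abstract method added so the ABC machinery actually enforces something (the `split` method is made up for illustration):

```python
from abc import ABCMeta, abstractmethod

class CVIterator(metaclass=ABCMeta):
    def __init__(self):
        self.n = None  # the value of n is obtained in the fit method

    @abstractmethod
    def split(self, data):
        """Yield one fold of the data at a time."""

class KFold_new_version(CVIterator):
    def __init__(self, k):
        if k <= 0:
            raise ValueError('cannot have k below 1')
        super().__init__()
        self.k = k

    def split(self, data):
        for i in range(self.k):
            yield data[i::self.k]  # every k-th element, offset by i

cv = KFold_new_version(10)  # works: the subclass implements split()

try:
    CVIterator()  # abstract classes cannot be instantiated directly
except TypeError as exc:
    print(exc)
```

Note, as an aside, that `assert k > 0, ValueError('cannot have k below 1')` in your code only attaches the `ValueError` instance as the assertion message (and assertions vanish under `python -O`); raising the exception explicitly is clearer.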
Python: why can I put mutable object in a dict or set?
31,340,756
2
2015-07-10T12:26:42Z
31,340,810
7
2015-07-10T12:29:31Z
[ "python", "hash", "immutability" ]
Given the following example, ``` class A(object): pass a = A() a.x = 1 ``` Obviously a is mutable, and then I put a in a set, ``` set([a]) ``` It succeeded. Why I can put mutable object like "a" into a set/dict? Shouldn't set/dict only allow immutable objects so they can identify the object and avoid duplication?
Python doesn't test for *mutable* objects, it tests for *hashable* objects. Custom class instances are by default hashable. That's fine because the default `__eq__` implementation for such classes only tests for instance *identity* and the hash is based on the same information. In other words, it doesn't matter that you alter the state of your instance attributes, because the *identity* of an instance is immutable anyway. As soon as you implement a `__hash__` and `__eq__` method that take instance state into account you might be in trouble and should stop mutating that state. Only then would a custom class instance no longer be suitable for storing in a dictionary or set.
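A short sketch of both situations (the class names are made up):

```python
class Plain(object):
    pass

p = Plain()
seen = {p}
p.x = 1           # mutating attributes doesn't change the identity-based hash
assert p in seen  # still found

class ByValue(object):
    def __init__(self, x):
        self.x = x
    def __eq__(self, other):
        return isinstance(other, ByValue) and self.x == other.x
    def __hash__(self):
        return hash(self.x)

v = ByValue(1)
seen = {v}
v.x = 2           # hash no longer matches the bucket v was stored in
print(v in seen)  # False: the set can no longer find its own element
```

This is exactly the trouble mentioned above: once `__hash__` depends on mutable state, mutating that state strands the object in the wrong hash bucket.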
MySQL Improperly Configured Reason: unsafe use of relative path
31,343,299
21
2015-07-10T14:26:27Z
31,821,332
58
2015-08-05T00:01:43Z
[ "python", "mysql", "django", "dynamic-linking", "osx-elcapitan" ]
I'm using Django, and when I run `python manage.py runserver` I receive the following error: ``` ImproperlyConfigured: Error loading MySQLdb module: dlopen(/Library/Python/2.7/site-packages/_mysql.so, 2): Library not loaded: libmysqlclient.18.dylib Referenced from: /Library/Python/2.7/site-packages/_mysql.so Reason: unsafe use of relative rpath libmysqlclient.18.dylib in /Library/Python/2.7/site-packages/_mysql.so with restricted binary ``` I'm not entirely sure how to fix this. I have installed MySQL-python via pip. And I followed [this](http://stackoverflow.com/q/2952187/3990714) step earlier. I want to also point out this is with El Capitan Beta 3.
In OS X El Capitan (10.11), Apple added [System Integrity Protection](https://support.apple.com/en-us/HT204899). This prevents programs in protected locations like /usr from calling a shared library that uses a relative reference to another shared library. In the case of \_mysql.so, it contains a relative reference to the shared library libmysqlclient.18.dylib. In the future, the shared library \_mysql.so may be updated. Until then, you can force it to use an absolute reference via the `install_name_tool` utility. Assuming that libmysqlclient.18.dylib is in /usr/local/mysql/lib/, then run the command: ``` sudo install_name_tool -change libmysqlclient.18.dylib \ /usr/local/mysql/lib/libmysqlclient.18.dylib \ /Library/Python/2.7/site-packages/_mysql.so ```
sql.h not found when installing PyODBC on Heroku
31,353,137
4
2015-07-11T03:31:07Z
31,360,218
15
2015-07-11T18:05:05Z
[ "python", "heroku", "pyodbc" ]
I'm trying to install PyODBC on Heroku, but I get `fatal error: sql.h: No such file or directory` in the logs when pip runs. How do I fix this error?
To follow up on the answer below... Example for Ubuntu: ``` sudo apt-get install unixodbc unixodbc-dev ``` Example for CentOS: ``` sudo yum install unixODBC-devel ``` On Windows: ``` conn = pyodbc.connect('DRIVER={SQL Server};SERVER=yourserver.yourcompany.com;DATABASE=yourdb;UID=user;PWD=password') ``` On Linux: ``` conn = pyodbc.connect('DRIVER={FreeTDS};SERVER=yourserver.yourcompany.com;PORT=1433;DATABASE=yourdb;UID=user;PWD=password;TDS_VERSION=7.2') ```
Django Registration Redux: how to change the unique identifier from username to email and use email as login
31,356,535
3
2015-07-11T11:08:05Z
31,358,213
7
2015-07-11T14:31:31Z
[ "python", "django", "django-registration" ]
I'm using Django-registration-redux in my project for user registration. It uses default User model which use username as the unique identifier. Now we want to discard username and use email as the unique identifier. And also we want to use email instead of username to login. How to achieve this? And is it possible to do it without changing the **AUTH\_USER\_MODEL** settings? Because from the official doc it says"**If you intend to set AUTH\_USER\_MODEL, you should set it *before creating any migrations or running manage.py* migrate for the first time.**"
You can override the registration form like this ``` from registration.forms import RegistrationForm class MyRegForm(RegistrationForm): username = forms.CharField(max_length=254, required=False, widget=forms.HiddenInput()) def clean_email(self): email = self.cleaned_data['email'] self.cleaned_data['username'] = email return email ``` And then add this to your settings file (read this [link](http://django-registration-redux.readthedocs.org/en/latest/default-backend.html) for details) ``` REGISTRATION_FORM = 'app.forms.MyRegForm' ``` This will set the email on the username field as well, and then everything will work as the email is now the username. The only problem is that the username field has a max length of 30 in the DB, so emails longer than 30 chars will raise a DB exception. To solve that, override the user model (read [this](https://docs.djangoproject.com/en/1.8/topics/auth/customizing/#substituting-a-custom-user-model) for details).
Why does 'the' survive after .remove?
31,356,546
8
2015-07-11T11:09:27Z
31,356,575
14
2015-07-11T11:13:17Z
[ "python", "string", "python-2.7", "python-3.x" ]
Something weird happens in this code: ``` fh = open('romeo.txt', 'r') lst = list() for line in fh: line = line.split() for word in line: lst.append(word) for word in lst: numberofwords = lst.count(word) if numberofwords > 1: lst.remove(word) lst.sort() print len(lst) print lst ``` romeo.txt is taken from <http://www.pythonlearn.com/code/romeo.txt> Result: ``` 27 ['Arise', 'But', 'It', 'Juliet', 'Who', 'already', 'and', 'breaks', 'east', 'envious', 'fair', 'grief', 'is', 'kill', 'light', 'moon', 'pale', 'sick', 'soft', 'sun', 'the', 'the', 'through', 'what', 'window', 'with', 'yonder'] ``` As you can see, there are two 'the'. Why is that? I can run this part of code again: ``` for word in lst: numberofwords = lst.count(word) if numberofwords > 1: lst.remove(word) ``` After running this code a second time it deletes the remaining 'the', but why doesn't it work the first time? Correct output: ``` 26 ['Arise', 'But', 'It', 'Juliet', 'Who', 'already', 'and', 'breaks', 'east', 'envious', 'fair', 'grief', 'is', 'kill', 'light', 'moon', 'pale', 'sick', 'soft', 'sun', 'the', 'through', 'what', 'window', 'with', 'yonder'] ```
In this loop: ``` for word in lst: numberofwords = lst.count(word) if numberofwords > 1: lst.remove(word) ``` `lst` is modified while iterating over it. Don't do that. A simple fix is to iterate over a copy of it: ``` for word in lst[:]: ```
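The skipping behaviour can be reproduced with a tiny made-up word list; the copy-based fix from above then gives the expected result:

```python
def dedupe_buggy(lst):
    # mutates the list while iterating over it: after each remove() the
    # remaining items shift left, so the loop index skips the next item
    for word in lst:
        if lst.count(word) > 1:
            lst.remove(word)
    return lst

def dedupe_fixed(lst):
    # iterate over a snapshot (lst[:]), mutate only the original
    for word in lst[:]:
        if lst.count(word) > 1:
            lst.remove(word)
    return lst

print(dedupe_buggy(['a', 'b', 'a', 'b', 'a', 'b']))  # ['b', 'a', 'b'] -- a 'b' survives
print(dedupe_fixed(['a', 'b', 'a', 'b', 'a', 'b']))  # ['a', 'b']
```

In the buggy version the second `'b'` is skipped over at the moment another removal shifts it into an already-visited index, which is exactly why `'the'` survived in the romeo example.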
Error : "You are trying to add a non-nullable field"
31,357,346
7
2015-07-11T12:48:43Z
31,357,397
8
2015-07-11T12:54:29Z
[ "python", "django" ]
I defined below model and getting error : `You are trying to add a non-nullable field 'user' to videodata without a default; we can't do that` models.py ``` class User(Model): userID = models.IntegerField() userName = models.CharField(max_length=40) email = models.EmailField() class Meta: ordering = ['userName'] verbose_name = 'User MetaData' verbose_name_plural = 'Users MetaData' def __unicode__(self): return str(self.userName) class VideoData(Model): video = models.CharField(max_length=40) time = models.IntegerField() user = models.ForeignKey(User, related_name='User') class Meta: verbose_name = 'User_Video MetaData' ``` Where i am doing wrong????
As the error says, your user field on VideoData is not allowing nulls, so you either need to give it a default user or allow nulls. Easiest way is to allow nulls. ``` user = models.ForeignKey(User, related_name='User', null=True) ``` or have a default user ``` user = models.ForeignKey(User, related_name='User', default=<have your default user id here>) ```
Using Cloudfront with Django S3Boto
31,357,353
14
2015-07-11T12:49:45Z
31,440,339
23
2015-07-15T20:24:53Z
[ "python", "django" ]
I have successfully set up my app to use S3 for storing all static and media files. However, I would like to upload to S3 (current operation), but serve from a cloudfront instance I have set up. I have tried adjusting settings to the cloudfront url but it does not work. How can I upload to S3 and serve from Cloudfront please? **Settings** ``` AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME DEFAULT_FILE_STORAGE = 'app.custom_storages.MediaStorage' STATICFILES_STORAGE = 'app.custom_storages.StaticStorage' STATICFILES_LOCATION = 'static' MEDIAFILES_LOCATION = 'media' STATIC_URL = "https://s3-eu-west-1.amazonaws.com/app/%s/" % (STATICFILES_LOCATION) MEDIA_URL = "https://%s/%s/" % (AWS_S3_CUSTOM_DOMAIN, MEDIAFILES_LOCATION) ``` **custom\_storages.py** ``` from django.conf import settings from storages.backends.s3boto import S3BotoStorage class StaticStorage(S3BotoStorage): location = settings.STATICFILES_LOCATION class MediaStorage(S3BotoStorage): location = settings.MEDIAFILES_LOCATION ```
Your code is almost complete except you are not adding your cloudfront domain to STATIC\_URL/MEDIA\_URL and your custom storages. In detail, you must first install the dependencies ``` pip install django-storages-redux boto ``` Add the required settings to your django settings file ``` INSTALLED_APPS = ( ... 'storages', ... ) AWS_STORAGE_BUCKET_NAME = 'mybucketname' AWS_CLOUDFRONT_DOMAIN = 'xxxxxxxx.cloudfront.net' AWS_ACCESS_KEY_ID = get_secret("AWS_ACCESS_KEY_ID") AWS_SECRET_ACCESS_KEY = get_secret("AWS_SECRET_ACCESS_KEY") MEDIAFILES_LOCATION = 'media' MEDIA_ROOT = '/%s/' % MEDIAFILES_LOCATION MEDIA_URL = '//%s/%s/' % (AWS_CLOUDFRONT_DOMAIN, MEDIAFILES_LOCATION) DEFAULT_FILE_STORAGE = 'app.custom_storages.MediaStorage' STATICFILES_LOCATION = 'static' STATIC_ROOT = '/%s/' % STATICFILES_LOCATION STATIC_URL = '//%s/%s/' % (AWS_CLOUDFRONT_DOMAIN, STATICFILES_LOCATION) STATICFILES_STORAGE = 'app.custom_storages.StaticStorage' ``` Your custom storages need some modification to present the cloudfront domain for the resources, instead of the S3 domain: ``` from django.conf import settings from storages.backends.s3boto import S3BotoStorage class StaticStorage(S3BotoStorage): """uploads to 'mybucket/static/', serves from 'cloudfront.net/static/'""" location = settings.STATICFILES_LOCATION def __init__(self, *args, **kwargs): kwargs['custom_domain'] = settings.AWS_CLOUDFRONT_DOMAIN super(StaticStorage, self).__init__(*args, **kwargs) class MediaStorage(S3BotoStorage): """uploads to 'mybucket/media/', serves from 'cloudfront.net/media/'""" location = settings.MEDIAFILES_LOCATION def __init__(self, *args, **kwargs): kwargs['custom_domain'] = settings.AWS_CLOUDFRONT_DOMAIN super(MediaStorage, self).__init__(*args, **kwargs) ``` And that is all you need, assuming your bucket and cloudfront domain are correctly linked and the user's AWS\_ACCESS\_KEY has access permissions to your bucket. 
Additionally, based on your use case, you may wish to make your S3 bucket's items read-only accessible to everyone.
Format y axis as percent
31,357,611
9
2015-07-11T13:21:01Z
31,357,733
16
2015-07-11T13:36:31Z
[ "python", "pandas", "matplotlib", "plot" ]
I have an existing plot that was created with pandas like this: ``` df['myvar'].plot(kind='bar') ``` The y axis is format as float and I want to change the y axis to percentages. All of the solutions I found use ax.xyz syntax and **I can only place code below the line above that creates the plot** (I cannot add ax=ax to the line above.) **How can I format the y axis as percentages without changing the line above?** Here is the solution I found **but requires that I redefine the plot**: ``` import matplotlib.pyplot as plt import numpy as np import matplotlib.ticker as mtick data = [8,12,15,17,18,18.5] perc = np.linspace(0,100,len(data)) fig = plt.figure(1, (7,4)) ax = fig.add_subplot(1,1,1) ax.plot(perc, data) fmt = '%.0f%%' # Format you want the ticks, e.g. '40%' xticks = mtick.FormatStrFormatter(fmt) ax.xaxis.set_major_formatter(xticks) plt.show() ``` Link to the above solution: [Pyplot: using percentage on x axis](http://stackoverflow.com/questions/26294360/pyplot-using-percentage-on-x-axis)
pandas dataframe plot will return the `ax` for you, and then you can start to manipulate the axes however you want. ``` import pandas as pd import numpy as np df = pd.DataFrame(np.random.randn(100,5)) # you get ax from here ax = df.plot() type(ax) # matplotlib.axes._subplots.AxesSubplot # manipulate vals = ax.get_yticks() ax.set_yticklabels(['{:3.2f}%'.format(x*100) for x in vals]) ``` ![enter image description here](http://i.stack.imgur.com/lZTy0.png)
Format y axis as percent
31,357,611
9
2015-07-11T13:21:01Z
35,446,404
15
2016-02-17T01:39:40Z
[ "python", "pandas", "matplotlib", "plot" ]
I have an existing plot that was created with pandas like this: ``` df['myvar'].plot(kind='bar') ``` The y axis is format as float and I want to change the y axis to percentages. All of the solutions I found use ax.xyz syntax and **I can only place code below the line above that creates the plot** (I cannot add ax=ax to the line above.) **How can I format the y axis as percentages without changing the line above?** Here is the solution I found **but requires that I redefine the plot**: ``` import matplotlib.pyplot as plt import numpy as np import matplotlib.ticker as mtick data = [8,12,15,17,18,18.5] perc = np.linspace(0,100,len(data)) fig = plt.figure(1, (7,4)) ax = fig.add_subplot(1,1,1) ax.plot(perc, data) fmt = '%.0f%%' # Format you want the ticks, e.g. '40%' xticks = mtick.FormatStrFormatter(fmt) ax.xaxis.set_major_formatter(xticks) plt.show() ``` Link to the above solution: [Pyplot: using percentage on x axis](http://stackoverflow.com/questions/26294360/pyplot-using-percentage-on-x-axis)
[Jianxun](http://stackoverflow.com/users/5014134/jianxun-li)'s solution did the job for me but broke the y value indicator at the bottom left of the window. I ended up using `FuncFormatter` instead (and also stripped the unnecessary trailing zeroes as suggested [here](http://stackoverflow.com/questions/14997799/most-pythonic-way-to-print-at-most-some-number-of-decimal-places)): ``` import pandas as pd import numpy as np from matplotlib.ticker import FuncFormatter df = pd.DataFrame(np.random.randn(100,5)) ax = df.plot() ax.yaxis.set_major_formatter(FuncFormatter(lambda y, _: '{:.0%}'.format(y))) ``` Generally speaking I'd recommend using `FuncFormatter` for label formatting: it's reliable and versatile. [![enter image description here](http://i.stack.imgur.com/uKf1z.png)](http://i.stack.imgur.com/uKf1z.png)
Format y axis as percent
31,357,611
9
2015-07-11T13:21:01Z
36,319,915
7
2016-03-30T21:16:37Z
[ "python", "pandas", "matplotlib", "plot" ]
I have an existing plot that was created with pandas like this: ``` df['myvar'].plot(kind='bar') ``` The y axis is format as float and I want to change the y axis to percentages. All of the solutions I found use ax.xyz syntax and **I can only place code below the line above that creates the plot** (I cannot add ax=ax to the line above.) **How can I format the y axis as percentages without changing the line above?** Here is the solution I found **but requires that I redefine the plot**: ``` import matplotlib.pyplot as plt import numpy as np import matplotlib.ticker as mtick data = [8,12,15,17,18,18.5] perc = np.linspace(0,100,len(data)) fig = plt.figure(1, (7,4)) ax = fig.add_subplot(1,1,1) ax.plot(perc, data) fmt = '%.0f%%' # Format you want the ticks, e.g. '40%' xticks = mtick.FormatStrFormatter(fmt) ax.xaxis.set_major_formatter(xticks) plt.show() ``` Link to the above solution: [Pyplot: using percentage on x axis](http://stackoverflow.com/questions/26294360/pyplot-using-percentage-on-x-axis)
This is a few months late, but I have created [PR#6251](https://github.com/matplotlib/matplotlib/pull/6251) with matplotlib to add a new `PercentFormatter` class. With this class you just need one line to reformat your axis (two if you count the import of `matplotlib.ticker`): ``` import ... import matplotlib.ticker as mtick ax = df['myvar'].plot(kind='bar') ax.yaxis.set_major_formatter(mtick.PercentFormatter()) ``` `PercentFormatter()` accepts three arguments, `max`, `decimals`, `symbol`. `max` allows you to set the value that corresponds to 100% on the axis. This is nice if you have data from 0.0 to 1.0 and you want to display it from 0% to 100%. Just do `PercentFormatter(1.0)`. The other two parameters allow you to set the number of digits after the decimal point and the symbol. They default to `None` and `'%'`, respectively. `decimals=None` will automatically set the number of decimal points based on how much of the axes you are showing.
How to mutate a list with a function in python?
31,359,652
7
2015-07-11T17:06:02Z
31,359,679
7
2015-07-11T17:09:02Z
[ "python", "function", "mutable" ]
Here's a pseudocode I've written describing my problem:- ``` func(s): #returns a value of s x = a list of strings print func(x) print x #these two should give the SAME output ``` When I print the value of x in the end, I want it to be the one returned by func(x). Can I do something like this only by editing the function (and without setting `x = func(x)`)
That's already how it behaves; the function *can* mutate the list ``` >>> l = ['a', 'b', 'c'] # your list of strings >>> def add_something(x): ... x.append('d') ... >>> add_something(l) >>> l ['a', 'b', 'c', 'd'] ``` Note, however, that you cannot replace the original list through plain assignment ``` def modify(x): x = ['something'] ``` (The above rebinds the local name `x`, but leaves the original list `l` untouched.) If you want to replace the contents of the original list, you'll need something like: ``` def modify(x): x[:] = ['something'] ```
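Putting the two cases side by side as a runnable sketch:

```python
def rebind(x):
    x = ['something']  # rebinds the local name only; the caller never sees this

def mutate(x):
    x[:] = ['something']  # slice assignment replaces the list's contents in place

l = ['a', 'b', 'c']
rebind(l)
print(l)  # ['a', 'b', 'c'] -- unchanged

mutate(l)
print(l)  # ['something'] -- the caller's list was modified
```

The difference is that slice assignment operates on the list object itself, while plain assignment only changes which object the local name points at.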
Memory efficient sort of massive numpy array in Python
31,359,980
12
2015-07-11T17:40:22Z
31,362,871
10
2015-07-11T23:30:10Z
[ "python", "performance", "sorting", "numpy", "memory" ]
I need to sort a VERY large genomic dataset using numpy. I have an array of 2.6 billion floats, dimensions = `(868940742, 3)` which takes up about 20GB of memory on my machine once loaded and just sitting there. I have an early 2015 13" MacBook Pro with 16GB of RAM, 500GB solid state HD and a 3.1 GHz Intel i7 processor. Just loading the array overflows to virtual memory but not to the point where my machine suffers or I have to stop everything else I am doing. I build this VERY large array step by step from 22 smaller `(N, 2)` subarrays. Function `FUN_1` generates 2 new `(N, 1)` arrays using each of the 22 subarrays which I call `sub_arr`. The first output of `FUN_1` is generated by interpolating values from `sub_arr[:,0]` on array `b = array([X, F(X)])` and the second output is generated by placing `sub_arr[:, 0]` into bins using array `r = array([X, BIN(X)])`. I call these outputs `b_arr` and `rate_arr`, respectively. The function returns a 3-tuple of `(N, 1)` arrays: ``` import numpy as np def FUN_1(sub_arr): """interpolate b values and rates based on position in sub_arr""" b = np.load(bfile) r = np.load(rfile) b_arr = np.interp(sub_arr[:,0], b[:,0], b[:,1]) rate_arr = np.searchsorted(r[:,0], sub_arr[:,0]) # HUGE efficiency gain over np.digitize... return r[rate_arr, 1], b_arr, sub_arr[:,1] ``` I call the function 22 times in a for-loop and fill a pre-allocated array of zeros `full_arr = numpy.zeros([868940742, 3])` with the values: ``` full_arr[:,0], full_arr[:,1], full_arr[:,2] = FUN_1 ``` In terms of saving memory at this step, I think this is the best I can do, but I'm open to suggestions. Either way, I don't run into problems up through this point and it only takes about 2 minutes. Here is the sorting routine (there are two consecutive sorts) ``` for idx in range(2): sort_idx = numpy.argsort(full_arr[:,idx]) full_arr = full_arr[sort_idx] # ... 
# <additional processing, return small (1000, 3) array of stats> ``` Now this sort had been working, albeit slowly (takes about 10 minutes). However, I recently started using a larger, more fine resolution table of `[X, F(X)]` values for the interpolation step above in `FUN_1` that returns `b_arr` and now the SORT really slows down, although everything else remains the same. Interestingly, I am not even sorting on the interpolated values at the step where the sort is now lagging. Here are some snippets of the different interpolation files - the smaller one is about 30% smaller in each case and far more uniform in terms of values in the second column; the slower one has a higher resolution and many more unique values, so the results of interpolation are likely more unique, but I'm not sure if this should have any kind of effect...? **bigger, slower file:** ``` 17399307 99.4 17493652 98.8 17570460 98.2 17575180 97.6 17577127 97 17578255 96.4 17580576 95.8 17583028 95.2 17583699 94.6 17584172 94 ``` **smaller, more uniform regular file:** ``` 1 24 1001 24 2001 24 3001 24 4001 24 5001 24 6001 24 7001 24 ``` I'm not sure what could be causing this issue and I would be interested in any suggestions or just general input about sorting in this type of memory limiting case!
At the moment each call to `np.argsort` is generating a `(868940742, 1)` array of int64 indices, which will take up ~7 GB just by itself. Additionally, when you use these indices to sort the columns of `full_arr` you are generating another `(868940742, 1)` array of floats, since [fancy indexing always returns a copy rather than a view](http://docs.scipy.org/doc/numpy/user/basics.indexing.html#index-arrays). One fairly obvious improvement would be to sort `full_arr` in place using its [`.sort()` method](http://docs.scipy.org/doc/numpy/reference/generated/numpy.sort.html). Unfortunately, `.sort()` does not allow you to directly specify a row or column to sort by. However, you *can* specify a field to sort by for a structured array. You can therefore force an inplace sort over one of the three columns by getting a [`view`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.view.html) onto your array as a structured array with three float fields, then sorting by one of these fields: ``` full_arr.view('f8, f8, f8').sort(order=['f0'], axis=0) ``` In this case I'm sorting `full_arr` in place by the 0th field, which corresponds to the first column. Note that I've assumed that there are three float64 columns (`'f8'`) - you should change this accordingly if your dtype is different. This also requires that your array is contiguous and in row-major format, i.e. `full_arr.flags.C_CONTIGUOUS == True`. Credit for this method should go to Joe Kington for his answer [here](http://stackoverflow.com/a/2828371/1461210). --- Although it requires less memory, sorting a structured array by field is unfortunately much slower compared with using `np.argsort` to generate an index array, as you mentioned in the comments below (see [this previous question](http://stackoverflow.com/q/19682521/1461210)). 
If you use `np.argsort` to obtain a set of indices to sort by, you might see a modest performance gain by using `np.take` rather than direct indexing to get the sorted array: ``` %%timeit -n 1 -r 100 x = np.random.randn(10000, 2); idx = x[:, 0].argsort() x[idx] # 1 loops, best of 100: 148 µs per loop %%timeit -n 1 -r 100 x = np.random.randn(10000, 2); idx = x[:, 0].argsort() np.take(x, idx, axis=0) # 1 loops, best of 100: 42.9 µs per loop ``` However I wouldn't expect to see any difference in terms of memory usage, since both methods will generate a copy. --- Regarding your question about why sorting the second array is faster - yes, you should expect any reasonable sorting algorithm to be faster when there are fewer unique values in the array because on average there's less work for it to do. Suppose I have a random sequence of digits between 1 and 10: ``` 5 1 4 8 10 2 6 9 7 3 ``` There are 10! = 3628800 possible ways to arrange these digits, but only one in which they are in ascending order. Now suppose there are just 5 unique digits: ``` 4 4 3 2 3 1 2 5 1 5 ``` Now there are 2⁵ = 32 ways to arrange these digits in ascending order, since I could swap any pair of identical digits in the sorted vector without breaking the ordering. By default, `np.ndarray.sort()` uses [Quicksort](https://en.wikipedia.org/wiki/Quicksort). The [`qsort`](https://en.wikipedia.org/wiki/Quicksort#Repeated_elements) variant of this algorithm works by recursively selecting a 'pivot' element in the array, then reordering the array such that all the elements less than the pivot value are placed before it, and all of the elements greater than the pivot value are placed after it. Values that are equal to the pivot are already sorted. Having fewer unique values means that, on average, more values will be equal to the pivot value on any given sweep, and therefore fewer sweeps are needed to fully sort the array. 
For example: ``` %%timeit -n 1 -r 100 x = np.random.random_integers(0, 10, 100000) x.sort() # 1 loops, best of 100: 2.3 ms per loop %%timeit -n 1 -r 100 x = np.random.random_integers(0, 1000, 100000) x.sort() # 1 loops, best of 100: 4.62 ms per loop ``` In this example the dtypes of the two arrays are the same. If your smaller array has a smaller item size compared with the larger array then the cost of copying it due to the fancy indexing will also be smaller.
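The two indexing routes compared above (`x[idx]` vs `np.take`) return identical results, which is easy to confirm on a small array (a sketch):

```python
import numpy as np

rng = np.random.RandomState(42)
x = rng.randn(1000, 2)
idx = x[:, 0].argsort()

a = x[idx]                      # fancy indexing: returns a copy
b = np.take(x, idx, axis=0)     # same result, often a bit faster

# Both copies agree, and both are sorted on the first column
assert np.array_equal(a, b)
assert np.all(np.diff(a[:, 0]) >= 0)
```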
How to get value from a theano tensor variable backed by a shared variable?
31,361,377
10
2015-07-11T20:13:34Z
31,362,146
10
2015-07-11T21:43:50Z
[ "python", "numpy", "scipy", "theano" ]
I have a theano tensor variable created from casting a shared variable. How can I extract the original or casted values? (I need that so I don't have to carry the original shared/numpy values around.) ``` >>> x = theano.shared(numpy.asarray([1, 2, 3], dtype='float')) >>> y = theano.tensor.cast(x, 'int32') >>> y.get_value(borrow=True) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'TensorVariable' object has no attribute 'get_value' # whereas I can do this against the original shared variable >>> x.get_value(borrow=True) array([ 1., 2., 3.]) ```
`get_value` only works for shared variables. `TensorVariables` are general expressions and thus potentially need extra input in order to be able to determine their value (Imagine you set `y = x + z`, where `z` is another tensor variable. You would need to specify `z` before being able to calculate `y`). You can either create a function to provide this input or provide it in a dictionary using the `eval` method. In your case, `y` only depends on `x`, so you can do ``` import theano import theano.tensor as T x = theano.shared(numpy.asarray([1, 2, 3], dtype='float32')) y = T.cast(x, 'int32') y.eval() ``` and you should see the result ``` array([1, 2, 3], dtype=int32) ``` (And in the case `y = x + z`, you would have to do `y.eval({z : 3.})`, for example)
python dask DataFrame, support for (trivially parallelizable) row apply?
31,361,721
17
2015-07-11T20:52:46Z
31,364,127
18
2015-07-12T03:35:33Z
[ "python", "pandas", "parallel-processing", "dask" ]
I recently found [dask](http://dask.pydata.org/en/latest/index.html) module that aims to be an easy-to-use python parallel processing module. Big selling point for me is that it works with pandas. After reading a bit on its manual page, I can't find a way to do this trivially parallelizable task: ``` ts.apply(func) # for pandas series df.apply(func, axis = 1) # for pandas DF row apply ``` At the moment, to achieve this in dask, AFAIK, ``` ddf.assign(A=lambda df: df.apply(func, axis=1)).compute() # dask DataFrame ``` which is ugly syntax and is actually slower than outright ``` df.apply(func, axis = 1) # for pandas DF row apply ``` Any suggestion? Edit: Thanks @MRocklin for the map function. It seems to be slower than plain pandas apply. Is this related to pandas GIL releasing issue or am I doing it wrong? ``` import dask.dataframe as dd s = pd.Series([10000]*120) ds = dd.from_pandas(s, npartitions = 3) def slow_func(k): A = np.random.normal(size = k) # k = 10000 s = 0 for a in A: if a > 0: s += 1 else: s -= 1 return s s.apply(slow_func) # 0.43 sec ds.map(slow_func).compute() # 2.04 sec ```
### `map_partitions` You can apply your function to all of the partitions of your dataframe with the `map_partitions` function. ``` df.map_partitions(func, columns=...) ``` Note that func will be given only part of the dataset at a time, not the entire dataset like with `pandas apply` (which presumably you wouldn't want if you want to do parallelism.) ### `map` / `apply` You can map a function row-wise across a series with `map` ``` df.mycolumn.map(func) ``` You can map a function row-wise across a dataframe with `apply` ``` df.apply(func, axis=1) ``` ### Threads vs Processes As of version 0.6.0 `dask.dataframes` parallelizes with threads. Custom Python functions will not receive much benefit from thread-based parallelism. You could try processes instead ``` df = dd.read_csv(...) from dask.multiprocessing import get df.map_partitions(func, columns=...).compute(get=get) ``` ### But avoid `apply` However, you should really avoid `apply` with custom Python functions, both in Pandas and in Dask. This is often a source of poor performance. It could be that if you find a way to do your operation in a vectorized manner then it could be that your Pandas code will be 100x faster and you won't need dask.dataframe at all. ### Consider `numba` For your particular problem you might consider [`numba`](http://numba.pydata.org/). This significantly improves your performance. 
``` In [1]: import numpy as np In [2]: import pandas as pd In [3]: s = pd.Series([10000]*120) In [4]: %paste def slow_func(k): A = np.random.normal(size = k) # k = 10000 s = 0 for a in A: if a > 0: s += 1 else: s -= 1 return s ## -- End pasted text -- In [5]: %time _ = s.apply(slow_func) CPU times: user 345 ms, sys: 3.28 ms, total: 348 ms Wall time: 347 ms In [6]: import numba In [7]: fast_func = numba.jit(slow_func) In [8]: %time _ = s.apply(fast_func) # First time incurs compilation overhead CPU times: user 179 ms, sys: 0 ns, total: 179 ms Wall time: 175 ms In [9]: %time _ = s.apply(fast_func) # Subsequent times are all gain CPU times: user 68.8 ms, sys: 27 µs, total: 68.8 ms Wall time: 68.7 ms ``` Disclaimer, I work for the company that makes both `numba` and `dask` and employs many of the `pandas` developers.
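To make the "avoid `apply` with custom Python functions" advice above concrete, the loop in `slow_func` can be vectorized directly in numpy (a sketch, no dask or numba required): counting +1 for each positive draw and -1 otherwise is just `(#positive) - (#non-positive)`, i.e. `2 * (A > 0).sum() - len(A)`.

```python
import numpy as np

def slow_count(A):
    # The loop from the question: +1 per positive value, -1 otherwise
    s = 0
    for a in A:
        if a > 0:
            s += 1
        else:
            s -= 1
    return s

def fast_count(A):
    # Vectorized equivalent: 2 * (#positive) - len(A)
    return 2 * int((A > 0).sum()) - len(A)

rng = np.random.RandomState(0)
A = rng.normal(size=10000)
assert slow_count(A) == fast_count(A)
```

The vectorized form avoids a Python-level loop entirely, which is usually a far bigger win than parallelizing the slow version.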
Performance difference in pandas read_table vs. read_csv vs. from_csv vs. read_excel?
31,362,573
3
2015-07-11T22:43:16Z
31,362,987
12
2015-07-11T23:49:58Z
[ "python", "performance", "csv", "pandas", "dataframe" ]
I tend to import .csv files into pandas, but sometimes I may get data in other formats to make `DataFrame` objects. Today, I just found out about `read_table` as a "generic" importer for other formats, and wondered if there were significant performance differences between the various methods in pandas for reading .csv files, e.g. `read_table`, `from_csv`, `read_excel`. 1. Do these other methods have better performance than `read_csv`? 2. Is `read_csv` much different than `from_csv` for creating a `DataFrame`?
1. `read_table` is `read_csv` with `sep=','` replaced by `sep='\t'`, they are two thin wrappers around the same function so the performance will be identical. `read_excel` uses the `xlrd` package to read xls and xlsx files into a DataFrame, it doesn't handle csv files. 2. `from_csv` calls `read_table`, so no.
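A quick check of the thin-wrapper claim (a sketch; any reasonably recent pandas should behave the same way):

```python
import io
import pandas as pd

csv_text = "a,b\n1,2\n3,4\n"
tsv_text = "a\tb\n1\t2\n3\t4\n"

df_csv = pd.read_csv(io.StringIO(csv_text))
df_tab = pd.read_table(io.StringIO(tsv_text))              # default sep='\t'
df_tab_as_csv = pd.read_table(io.StringIO(csv_text), sep=",")

# All three produce the same DataFrame
assert df_csv.equals(df_tab)
assert df_csv.equals(df_tab_as_csv)
```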
graphite/carbon ImportError: No module named fields
31,363,276
3
2015-07-12T00:44:55Z
32,557,105
11
2015-09-14T03:58:37Z
[ "python", "carbon", "graphite", "centos7" ]
I am able to follow almost all the instructions [here](http://www.unixmen.com/install-graphite-centos-7/) but when I get to ``` [idf@node1 graphite]$ cd /opt/graphite/webapp/graphite/ [idf@node1 graphite]$ sudo python manage.py syncdb Could not import graphite.local_settings, using defaults! /opt/graphite/webapp/graphite/settings.py:244: UserWarning: SECRET_KEY is set to an unsafe default. This should be set in local_settings.py for better security warn('SECRET_KEY is set to an unsafe default. This should be set in local_settings.py for better security') ImportError: No module named fields [idf@node1 graphite]$ ``` Not sure why I am getting this error? I also tried these instructions, and it gets hung up approximately at the same place <https://www.digitalocean.com/community/tutorials/how-to-keep-effective-historical-logs-with-graphite-carbon-and-collectd-on-centos-7> ``` [idf@node1 graphite]$ sudo PYTHONPATH=/opt/graphite/webapp/ django-admin.py syncdb --settings=graphite.settings /var/tmp/sclHwyLM6: line 8: PYTHONPATH=/opt/graphite/webapp/: No such file or directory [idf@node1 graphite]$ ``` If I echo the PYTHONPATH, I get ``` [idf@node1 ~]$ echo $PYTHONPATH /usr/lib64/python2.7/site-packages/openmpi [idf@node1 ~]$ ``` So I then created ``` /etc/profile.d/local_python.sh ``` with the contents ``` PYTHONPATH="/opt/graphite/webapp/":"${PYTHONPATH}" export PYTHONPATH ``` I created a new shell and echo now seems correct ``` [idf@node1 graphite]$ echo $PYTHONPATH /opt/graphite/webapp/:/usr/lib64/python2.7/site-packages/openmpi [idf@node1 graphite]$ ``` Now I run ``` [idf@node1 graphite]$ sudo django-admin.py syncdb --settings=graphite.settings Traceback (most recent call last): File "/home/idf/anaconda/bin/django-admin.py", line 5, in <module> management.execute_from_command_line() File "/home/idf/anaconda/lib/python2.7/site-packages/django/core/management/__init__.py", line 453, in execute_from_command_line utility.execute() File 
"/home/idf/anaconda/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/idf/anaconda/lib/python2.7/site-packages/django/core/management/__init__.py", line 263, in fetch_command app_name = get_commands()[subcommand] File "/home/idf/anaconda/lib/python2.7/site-packages/django/core/management/__init__.py", line 109, in get_commands apps = settings.INSTALLED_APPS File "/home/idf/anaconda/lib/python2.7/site-packages/django/conf/__init__.py", line 53, in __getattr__ self._setup(name) File "/home/idf/anaconda/lib/python2.7/site-packages/django/conf/__init__.py", line 48, in _setup self._wrapped = Settings(settings_module) File "/home/idf/anaconda/lib/python2.7/site-packages/django/conf/__init__.py", line 134, in __init__ raise ImportError("Could not import settings '%s' (Is it on sys.path?): %s" % (self.SETTINGS_MODULE, e)) ImportError: Could not import settings 'graphite.settings' (Is it on sys.path?): No module named graphite.settings [idf@node1 graphite]$ ``` **ImportError: Could not import settings 'graphite.settings' (Is it on sys.path?): No module named graphite.settings** ``` [idf@node1 graphite]$ pwd /opt/graphite/webapp/graphite [idf@node1 graphite]$ ls account composer local_settings.pyc node.py storage.py version app_settings.py dashboard local_settings.py.example node.pyc storage.pyc views.py app_settings.pyc events logger.py readers.py templates views.pyc browser finders logger.pyc readers.pyc thirdparty whitelist carbonlink.py __init__.py manage.py remote_storage.py url_shortener wsgi.py carbonlink.pyc __init__.pyc manage.pyc remote_storage.pyc urls.py wsgi.pyc cli intervals.py metrics render urls.pyc compat.py intervals.pyc middleware.py settings.py util.py compat.pyc local_settings.py middleware.pyc settings.pyc util.pyc [idf@node1 graphite]$ ```
The issue was solved when the package `django-tagging` (0.3.6) was downgraded using the following commands: ``` pip uninstall django-tagging pip install 'django-tagging<0.4' ```
How to sum values in an iterator in a PySpark groupByKey()
31,366,307
2
2015-07-12T09:29:41Z
31,366,342
8
2015-07-12T09:35:41Z
[ "python", "apache-spark", "pyspark" ]
I'm doing my first steps on Spark (Python) and I'm struggling with an iterator inside a `groupByKey()`. I'm not able to sum the values... something like this

```
example = sc.parallelize([('x',1), ('x',1), ('y', 1), ('z', 1)])
example.groupByKey()

x [1,1]
y [1]
z [1]
```

How do I get the sum over the iterator? I tried something like below but it does not work

```
example.groupByKey().map(lambda (x,iterator) : (x,sum(iterator))
example.groupByKey().map(lambda (x,iterator) : (x,list(sum(iterator)))
```

Any ideas?
You can simply `mapValues` with `sum`: ``` example.groupByKey().mapValues(sum) ``` although in this particular case `reduceByKey` is much more efficient: ``` example.reduceByKey(lambda x, y: x + y) ``` or ``` from operator import add example.reduceByKey(add) ```
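For intuition, what the two approaches compute can be sketched in plain Python (this is not Spark code — just the per-key logic): grouping materializes each key's value list before summing, while reducing folds the values pairwise as they arrive, but both end at the same per-key sums.

```python
from collections import defaultdict

pairs = [('x', 1), ('x', 1), ('y', 1), ('z', 1)]

# groupByKey().mapValues(sum): collect every value per key, then sum
grouped = defaultdict(list)
for k, v in pairs:
    grouped[k].append(v)
by_group = {k: sum(vs) for k, vs in grouped.items()}

# reduceByKey(add): fold values pairwise, never building the lists
reduced = {}
for k, v in pairs:
    reduced[k] = reduced[k] + v if k in reduced else v

assert by_group == reduced == {'x': 2, 'y': 1, 'z': 1}
```

In Spark the difference matters because `reduceByKey` can combine values on each partition before shuffling, whereas `groupByKey` ships every value across the network.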
Alternative to for loops | How to check if word contains part of a different word
31,368,683
2
2015-07-12T14:11:23Z
31,368,885
8
2015-07-12T14:33:40Z
[ "python", "for-loop", "set" ]
If you check the code below, I used for loops to check whether, in a set of words, one word is the suffix of another. My question is: how can I replace the double for loop? The guy who wrote the task mentioned that there is a solution using algorithms (not sure what that is :/ )

```
def checkio(words):
    if len(words) == 1:
        return False
    else:
        for w1 in words:
            for w2 in words:
                if w1 == w2:
                    continue
                elif w1.endswith(w2) or w2.endswith(w1):
                    return True
                else:
                    return False

print checkio({"abc","cba","ba","a","c"}) # prints True in Komodo
print checkio({"walk", "duckwalk"}) # prints True
```

Second question: it appears that the current function doesn't work in every environment. Can someone point out what I did wrong? It works in my Komodo IDE but won't work on the checkio website. Here is a link to the task: **<http://www.checkio.org/mission/end-of-other/>**
Let Python generate all combinations to be checked: ``` import itertools def checkio(data): return any((x.endswith(y) or y.endswith(x)) for x, y in itertools.combinations(data, 2)) ``` And let Python test it: ``` assert checkio({"abc","cba","ba","a","c"}) == True assert checkio({"walk", "duckwalk"}) == True assert checkio({"aaa", "bbb"}) == False ```
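`itertools.combinations` yields each unordered pair exactly once, which is why the expression checks `endswith` in both directions; a quick look at what it generates (using a list here instead of a set so the pair order is deterministic):

```python
import itertools

words = ["walk", "duckwalk", "a"]
pairs = list(itertools.combinations(words, 2))
# Each pair appears once, in input order, never paired with itself
assert pairs == [("walk", "duckwalk"), ("walk", "a"), ("duckwalk", "a")]

def checkio(data):
    return any(x.endswith(y) or y.endswith(x)
               for x, y in itertools.combinations(data, 2))

assert checkio(["walk", "duckwalk"]) is True
assert checkio(["aaa", "bbb"]) is False
```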
How can I skip a migration with Django migrate command?
31,369,466
3
2015-07-12T15:31:31Z
31,369,615
8
2015-07-12T15:47:36Z
[ "python", "django", "django-models", "django-migrations" ]
First, I am asking about Django migration introduced in 1.7, not `south`. Suppose I have migrations `001_add_field_x`, `002_add_field_y`, and both of them are applied to database. Now I change my mind and decide to revert the second migration and replace it with another migration `003_add_field_z`. In other words, I want to apply 001 and 003, skipping 002, how can I do this? P.S. I know I can migrate backwards to 001, but after I make the 003 migration and execute migrate command, 001 through 003 will all be applied, right?
You can use the `--fake` option. Once you revert to `0001` you can run ``` python manage.py migrate <app> 0002 --fake ``` and then run ``` python manage.py migrate <app> #Optionally specify 0003 explicitly ``` which would apply only `0003` in this case. If you do not want to follow this process for all the environment/other developers, you can just remove the migration files, and run a new `makemigration`, and commit that file - and yes, do run `migrate` with the `--fake` option
Calculate mean and median efficiently
31,370,214
5
2015-07-12T16:50:25Z
31,370,968
8
2015-07-12T18:10:52Z
[ "python", "performance", "numpy", "mean", "median" ]
What is the most efficient way to sequentially find the mean and median of rows in a Python list? For example, my list: ``` input_list = [1,2,4,6,7,8] ``` I want to produce an output list that contains: ``` output_list_mean = [1,1.5,2.3,3.25,4,4.7] output_list_median = [1,1.5,2.0,3.0,4.0,5.0] ``` Where the mean is calculated as follows: * 1 = mean(1) * 1.5 = mean(1,2) (i.e. mean of first 2 values in input\_list) * 2.3 = mean(1,2,4) (i.e. mean of first 3 values in input\_list) * 3.25 = mean(1,2,4,6) (i.e. mean of first 4 values in input\_list) etc. And the median is calculated as follows: * 1 = median(1) * 1.5 = median(1,2) (i.e. median of first 2 values in input\_list) * 2.0 = median(1,2,4) (i.e. median of first 3 values in input\_list) * 3.0 = median(1,2,4,6) (i.e. median of first 4 values in input\_list) etc. I have tried to implement it with the following loop, but it seems very inefficient. ``` import numpy input_list = [1,2,4,6,7,8] for item in range(1,len(input_list)+1): print(numpy.mean(input_list[:item])) print(numpy.median(input_list[:item])) ```
Anything you do yourself, especially with the median, is either going to require a lot of work, or be very inefficient, but Pandas comes with built-in efficient implementations of the functions you are after, the expanding mean is O(n), the expanding median is O(n\*log(n)) using a skip list: ``` import pandas as pd import numpy as np input_list = [1, 2, 4, 6, 7, 8] >>> pd.expanding_mean(np.array(input_list)) array([ 1. , 1.5 , 2.33333, 3.25 , 4. , 4.66667]) >>> pd.expanding_median(np.array(input_list)) array([ 1. , 1.5, 2. , 3. , 4. , 5. ]) ```
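Note that the module-level `pd.expanding_mean`/`pd.expanding_median` functions used above were later deprecated and then removed; the same efficient implementations now live on the `.expanding()` accessor (a sketch, assuming pandas ≥ 0.18):

```python
import numpy as np
import pandas as pd

s = pd.Series([1, 2, 4, 6, 7, 8])

# Expanding windows: element i aggregates s[0..i]
running_mean = s.expanding().mean()
running_median = s.expanding().median()

assert np.allclose(running_mean, [1.0, 1.5, 7/3, 3.25, 4.0, 28/6])
assert np.allclose(running_median, [1.0, 1.5, 2.0, 3.0, 4.0, 5.0])
```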
Reading JSON from SimpleHTTPServer Post data
31,371,166
12
2015-07-12T18:31:50Z
31,393,963
10
2015-07-13T21:38:00Z
[ "python", "ajax", "rest", "simplejson", "simplehttpserver" ]
I am trying to build a simple REST server with python SimpleHTTPServer. I am having problem reading data from the post message. Please let me know if I am doing it right. ``` from SimpleHTTPServer import SimpleHTTPRequestHandler import SocketServer import simplejson class S(SimpleHTTPRequestHandler): def _set_headers(self): self.send_response(200) self.send_header('Content-type', 'text/html') self.end_headers() def do_GET(self): print "got get request %s" % (self.path) if self.path == '/': self.path = '/index.html' return SimpleHTTPRequestHandler.do_GET(self) def do_POST(self): print "got post!!" content_len = int(self.headers.getheader('content-length', 0)) post_body = self.rfile.read(content_len) test_data = simplejson.loads(post_body) print "post_body(%s)" % (test_data) return SimpleHTTPRequestHandler.do_POST(self) def run(handler_class=S, port=80): httpd = SocketServer.TCPServer(("", port), handler_class) print 'Starting httpd...' httpd.serve_forever() ``` The index.html file ``` <html> <title>JSON TEST PAGE</title> <head> <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script> <script type="text/javascript"> JSONTest = function() { var resultDiv = $("#resultDivContainer"); $.ajax({ url: "http://128.107.138.51:8080", type: "POST", data: {txt1: $("#json_text").val()}, dataType: "json", success: function (result) { switch (result) { case true: processResponse(result); break; default: resultDiv.html(result); } }, error: function (xhr, ajaxOptions, thrownError) { alert(xhr.status); alert(thrownError); } }); }; </script> </head> <body> <h1>My Web Page</h1> <div id="resultDivContainer"></div> <form> <textarea name="json_text" id="json_text" rows="50" cols="80"> [{"resources": {"dut": "any_ts", "endpoint1": "endpoint", "endpoint2": "endpoint"}}, {"action": "create_conference", "serverName": "dut", "confName": "GURU_TEST"}] </textarea> <button type="button" onclick="JSONTest()">Generate Test</button> </form> 
</body> </html> ``` The SimpleJson fails to load the json from the POST message. I am not familiar with web coding and I am not even sure if what I am doing is right for creating a simple REST API server. I appreciate your help.
Thanks matthewatabet for the klein idea. I figured a way to implement it using BaseHTTPHandler. The code below. ``` from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer import SocketServer import simplejson import random class S(BaseHTTPRequestHandler): def _set_headers(self): self.send_response(200) self.send_header('Content-type', 'text/html') self.end_headers() def do_GET(self): self._set_headers() f = open("index.html", "r") self.wfile.write(f.read()) def do_HEAD(self): self._set_headers() def do_POST(self): self._set_headers() print "in post method" self.data_string = self.rfile.read(int(self.headers['Content-Length'])) self.send_response(200) self.end_headers() data = simplejson.loads(self.data_string) with open("test123456.json", "w") as outfile: simplejson.dump(data, outfile) print "{}".format(data) f = open("for_presen.py") self.wfile.write(f.read()) return def run(server_class=HTTPServer, handler_class=S, port=80): server_address = ('', port) httpd = server_class(server_address, handler_class) print 'Starting httpd...' 
httpd.serve_forever() if __name__ == "__main__": from sys import argv if len(argv) == 2: run(port=int(argv[1])) else: run() ``` And the corresponding html page ``` <form action="/profile/index/sendmessage" method="post" enctype="application/x-www-form-urlencoded"> <div class="upload_form"> <dt id="message-label"><label class="optional" for="message">Enter Message</label></dt> <dd id="message-element"> <textarea cols="80" rows="50" id="message" name="message"> [{"resources": {"dut": "any_ts", "endpoint1": "multistream_endpoint", "endpoint2": "multistream_endpoint"}}, {"action": "create_conference", "serverName": "dut", "conferenceName": "GURU_SLAVE_TS"}, {"action": "dial_out_ep", "serverName": "dut", "confName": "GURU_SLAVE_TS", "epName": "endpoint1"} ] </textarea></dd> <dt id="id-label">&nbsp;</dt> <dd id="id-element"> <input type="hidden" id="id" value="145198" name="id"></dd> <dt id="send_message-label">&nbsp;</dt> <dd id="send_message-element"> <input type="submit" class="sendamessage" value="Send" id="send_message" name="send_message"></dd> </div> </form> <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script> <script type="text/javascript"> $("input.sendamessage").click(function(event) { event.preventDefault(); var message = $('textarea#message').val(); var id = $('input#id').val(); url = "http://128.107.138.51:8080" var posting = $.post(url, message) posting.done(function( data ) { alert(message); }); }); </script> ```
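Since the handler above is Python 2 (`BaseHTTPServer`, `simplejson`), here is roughly the same idea for Python 3 — a sketch using only the stdlib `http.server` and `json` modules, with the duplicated `send_response` calls from the original collapsed into one; the round-trip check at the bottom (`create_conference` payload) is just for illustration:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class JSONHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read exactly Content-Length bytes, then parse them as JSON
        length = int(self.headers.get("Content-Length", 0))
        data = json.loads(self.rfile.read(length).decode("utf-8"))
        # Echo the parsed payload back to the client as JSON
        body = json.dumps(data).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

# Quick round trip: bind to an ephemeral port and POST some JSON
server = HTTPServer(("127.0.0.1", 0), JSONHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_address[1]
payload = json.dumps({"action": "create_conference"}).encode("utf-8")
req = urllib.request.Request(url, data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req, timeout=5) as resp:
    echoed = json.loads(resp.read().decode("utf-8"))

server.shutdown()
```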
reading an ascii file with headers given in the first rows into a pandas dataframe
31,373,831
2
2015-07-13T00:13:16Z
31,374,790
7
2015-07-13T02:53:30Z
[ "python", "numpy", "pandas", "astropy" ]
I have a huge set of catalogues which have different columns and the different header names for each column, where the description for each header name is given as comments at the beginning of my ascii files in a row. What is the best way to read them into a `pandas.DataFrame` while it can set the name of the column as well without being needed to define it from the beginning. The following is an example of my catalogues: ``` # 1 MAG_AUTO Kron-like elliptical aperture magnitude [mag] # 2 rh half light radius (analyse) [pixel] # 3 MU_MAX Peak surface brightness above background [mag * arcsec**(-2)] # 4 FWHM_IMAGE FWHM assuming a gaussian core [pixel] # 5 CLASS_STAR S/G classifier output 18.7462 4.81509 20.1348 6.67273 0.0286538 18.2440 7.17988 20.6454 21.6235 0.0286293 18.3102 3.11273 19.0960 8.26081 0.0430532 21.1751 2.92533 21.9931 5.52080 0.0290418 19.3998 1.86182 19.3166 3.42346 0.986598 20.0801 3.52828 21.3484 6.76799 0.0303842 21.9427 2.08458 22.0577 5.59344 0.981466 20.7726 1.86017 20.8130 3.69570 0.996121 23.0836 2.23427 23.3689 4.49985 0.706207 23.2443 1.62021 23.1089 3.54191 0.973419 20.6343 3.99555 21.9426 6.94700 0.0286164 23.4012 2.00408 23.3412 4.35926 0.946349 23.8427 1.54819 23.8241 3.83407 0.897079 20.3344 2.69910 20.9401 4.38988 0.0355277 21.7506 2.43451 22.2115 4.62045 0.0786921 ```
This is a file in Sextractor format. The `astropy.io.ascii` [reader](http://astropy.readthedocs.org/en/stable/io/ascii/index.html) understands this format natively so this is a snap to read: ``` >>> from astropy.io import ascii >>> dat = ascii.read('table.dat') >>> dat <Table masked=False length=3> MAG_AUTO rh MU_MAX FWHM_IMAGE CLASS_STAR mag mag / arcsec2 pix float64 float64 float64 float64 float64 -------- ------- ------------- ---------- ---------- 18.7462 4.81509 20.1348 6.67273 0.0286538 18.244 7.17988 20.6454 21.6235 0.0286293 18.3102 3.11273 19.096 8.26081 0.0430532 ... ``` Note that using the astropy ASCII reader you get a table that also retains the unit meta data. If you still want to convert this to a pandas dataframe that's easy as well with `DataFrame(dat.as_array())`. Version 1.1 of astropy (and the current master) will have methods `to_pandas` and `from_pandas` that make this conversion more robust (see <http://astropy.readthedocs.org/en/latest/table/pandas.html>).
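If you would rather end up with a plain `pandas.DataFrame` without going through astropy, the same file can be parsed by treating the `#` header rows as comments and supplying the column names yourself (a sketch on an abbreviated copy of the catalogue; the names are hard-coded here rather than parsed from the header, and the unit metadata is lost):

```python
import io
import pandas as pd

catalog = """\
#   1 MAG_AUTO        Kron-like elliptical aperture magnitude [mag]
#   2 rh              half light radius (analyse) [pixel]
#   3 MU_MAX          Peak surface brightness above background
#   4 FWHM_IMAGE      FWHM assuming a gaussian core [pixel]
#   5 CLASS_STAR      S/G classifier output
18.7462 4.81509  20.1348  6.67273  0.0286538
18.2440 7.17988  20.6454  21.6235  0.0286293
18.3102 3.11273  19.0960  8.26081  0.0430532
"""

names = ["MAG_AUTO", "rh", "MU_MAX", "FWHM_IMAGE", "CLASS_STAR"]
# comment='#' drops the header lines; sep=r'\s+' splits on whitespace
df = pd.read_csv(io.StringIO(catalog), comment="#", sep=r"\s+", names=names)
```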
Why doesn't the MySQLdb Connection context manager close the cursor?
31,374,857
18
2015-07-13T03:05:06Z
31,699,782
7
2015-07-29T11:57:16Z
[ "python", "mysql", "mysql-python", "contextmanager" ]
MySQLdb `Connections` have a rudimentary context manager that creates a cursor on *enter*, either rolls back or commits on *exit*, and implicitly doesn't suppress exceptions. From the [Connection source](https://github.com/farcepest/MySQLdb1/blob/master/MySQLdb/connections.py): ``` def __enter__(self): if self.get_autocommit(): self.query("BEGIN") return self.cursor() def __exit__(self, exc, value, tb): if exc: self.rollback() else: self.commit() ``` **So, does anyone know *why* the cursor isn't closed on exit?** --- At first, I assumed it was because closing the cursor didn't do anything and that cursors only had a close method in deference to the [Python DB API](https://www.python.org/dev/peps/pep-0249/#cursor-methods) (see the comments to [this answer](http://stackoverflow.com/a/22618781/484488)). However, the fact is that closing the cursor burns through the remaining results sets, if any, and disables the cursor. From the [cursor source](https://github.com/farcepest/MySQLdb1/blob/master/MySQLdb/cursors.py): ``` def close(self): """Close the cursor. No further queries will be possible.""" if not self.connection: return while self.nextset(): pass self.connection = None ``` It would be so easy to close the cursor at exit, so I have to suppose that it hasn't been done on purpose. On the other hand, we can see that when a cursor is deleted, it is closed anyway, so I guess the garbage collector will eventually get around to it. I don't know much about garbage collection in Python. ``` def __del__(self): self.close() self.errorhandler = None self._result = None ``` --- Another guess is that there may be a situation where you want to re-use the cursor after the `with` block. But I can't think of any reason why you would need to do this. Can't you always finish using the cursor inside its context, and just use a separate context for the next transaction? 
To be very clear, this example obviously doesn't make sense:

```
with conn as cursor:
    cursor.execute(select_stmt)
rows = cursor.fetchall()
```

It should be:

```
with conn as cursor:
    cursor.execute(select_stmt)
    rows = cursor.fetchall()
```

Nor does this example make sense:

```
# first transaction
with conn as cursor:
    cursor.execute(update_stmt_1)

# second transaction, reusing cursor
try:
    cursor.execute(update_stmt_2)
except:
    conn.rollback()
else:
    conn.commit()
```

It should just be:

```
# first transaction
with conn as cursor:
    cursor.execute(update_stmt_1)

# second transaction, new cursor
with conn as cursor:
    cursor.execute(update_stmt_2)
```

**Again, what would be the harm in closing the cursor on exit, and what benefits are there to not closing it?**
To answer your question directly: I cannot see any harm whatsoever in closing at the end of a `with` block. I cannot say why it is not done in this case. But, as there is a dearth of activity on this question, I had a search through the code history and will throw in a few thoughts (***guesses***) on why the `close()` ***may*** not be called: 1. There is a small chance that spinning through calls to `nextset()` may throw an exception - possibly this had been observed and seen as undesirable. This may be why the [newer version of `cursors.py`](https://github.com/PyMySQL/mysqlclient-python/blob/1.3.6/MySQLdb/cursors.py#L86) contains this structure in `close()`: ``` def close(self): """Close the cursor. No further queries will be possible.""" if not self.connection: return self._flush() try: while self.nextset(): pass except: pass self.connection = None ``` 2. There is the (somewhat remote) potential that it might take some time to spin through all the remaining results doing nothing. Therefore `close()` may not be called to avoid doing some unnecessary iterations. Whether you think it's worth saving those clock cycles is subjective, I suppose, but you could argue along the lines of "if it's not necessary, don't do it". 3. Browsing the sourceforge commits, the functionality was added to the trunk by [this commit](http://sourceforge.net/p/mysql-python/code/ci/f6a0ff1ca04339387999ab490c8c0abaf011d5e4/) in 2007 and it appears that this section of `connections.py` has not changed since. That's a merge based on [this commit](http://sourceforge.net/p/mysql-python/svn/480/), which has the message > Add Python-2.5 support for with statement as described in <http://docs.python.org/whatsnew/pep-343.html> *Please test* And the code you quote has never changed since. This prompts my final thought - it's probably just a first attempt / prototype that just worked and therefore never got changed. --- ### More modern version You link to source for a legacy version of the connector. 
I note there is a more active fork of the same library [here](https://github.com/PyMySQL/mysqlclient-python), which I link to in my comments about "newer version" in point 1. Note that the more recent version of this module has implemented `__enter__()` and `__exit__()` within `cursor` itself: [see here](https://github.com/PyMySQL/mysqlclient-python/blob/1.3.6/MySQLdb/cursors.py#L86). `__exit__()` here ***does*** [call `self.close()`](https://github.com/PyMySQL/mysqlclient-python/blob/1.3.6/MySQLdb/cursors.py#L103) and perhaps this provides a more standard way to use the with syntax e.g. ``` with conn.cursor() as c: #Do your thing with the cursor ``` --- ### End notes ***N.B.*** I guess I should add, as far as I understand garbage collection (not an expert either) once there are no references to `conn`, it will be deallocated. At this point there will be no references to the cursor object and it will be deallocated too. ***However*** calling `cursor.close()` does not mean that it will be garbage collected. It simply burns through the results and set the connection to `None`. This means it can't be re-used, but it won't be garbage collected immediately. You can convince yourself of that by manually calling `cursor.close()` after your `with` block and then, say, printing some attribute of `cursor` --- ***N.B. 2*** I think this is a somewhat unusual use of the `with` syntax as the `conn` object persists because it is already in the outer scope - unlike, say, the more common `with open('filename') as f:` where there are no objects hanging around with references after the end of the `with` block.
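Until you are on a version whose cursors implement `__enter__`/`__exit__` themselves, the standard library already gives the same guarantee: `contextlib.closing` calls `close()` on exit regardless of the driver. A sketch, demonstrated here with `sqlite3` as a stand-in since it exposes the same DB-API cursor shape (this is not MySQLdb-specific code):

```python
import sqlite3
from contextlib import closing

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1), (2)")

# closing() guarantees cursor.close() runs on exit, even on exceptions
with closing(conn.cursor()) as cursor:
    cursor.execute("SELECT x FROM t ORDER BY x")
    rows = cursor.fetchall()

assert rows == [(1,), (2,)]

# After the block the cursor is closed: further queries raise
try:
    cursor.execute("SELECT 1")
    closed = False
except sqlite3.ProgrammingError:
    closed = True
assert closed
```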
Return Custom 404 Error when resource not found in Django Rest Framework
31,380,280
7
2015-07-13T09:54:21Z
31,381,075
9
2015-07-13T10:35:40Z
[ "python", "django", "rest", "django-rest-framework" ]
I am learning [Django Rest Framework](http://www.django-rest-framework.org/), and am also new to Django. I want to return a custom `404` error in json when a client accesses a resource which was not found. My `urls.py` looks like this: ``` urlpatterns = [ url(r'^mailer/$', views.Mailer.as_view(), name='send-email-to-admin') ] ``` In which I have only one resource, which can be accessed through the URI, **<http://localhost:8000/mailer/>** Now, when a client accesses any other URI like **<http://localhost:8000/>**, the API should return a `404-Not Found` error like this: ``` { "status_code" : 404, "error" : "The resource was not found" } ``` Please suggest some answer with proper code snippets, if suitable.
You are looking for [`handler404`](https://docs.djangoproject.com/en/1.8/ref/urls/#handler404). Here is my suggestion: 1. Create a view that should be called if none of the URL patterns match. 2. Add `handler404 = path.to.your.view` to your root URLconf. Here is how it's done: 1. `project.views` ``` from django.http import JsonResponse def custom404(request): return JsonResponse({ 'status_code': 404, 'error': 'The resource was not found' }) ``` 2. `project.urls` ``` from project.views import custom404 handler404 = custom404 ``` Read [error handling](https://docs.djangoproject.com/en/1.8/topics/http/urls/#error-handling) for more details. [Django REST framework exceptions](http://www.django-rest-framework.org/api-guide/exceptions/) may be useful as well.
how to export a table dataframe in pyspark to csv?
31,385,363
6
2015-07-13T13:56:14Z
31,386,290
13
2015-07-13T14:36:38Z
[ "python", "sql", "apache-spark", "dataframe", "export-to-csv" ]
I am using spark-1.3.1 (pyspark) and I have generated a table using a SQL query. I now have an object that is a DataFrame. I want to export this DataFrame object (I have called it "table") to a csv file so I can manipulate it and plot the columns. How do I export the DataFrame "table" to a csv file? Thanks!
If the data frame fits in driver memory you can convert the [Spark DataFrame](https://github.com/apache/spark/blob/master/python/pyspark/sql/dataframe.py#L42) to a local [Pandas DataFrame](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) using the [`toPandas`](https://github.com/apache/spark/blob/master/python/pyspark/sql/dataframe.py#L1229) method and then simply use `to_csv`: ``` df.toPandas().to_csv('mycsv.csv') ``` Otherwise you can use [spark-csv](https://github.com/databricks/spark-csv): * Spark 1.3 ``` df.save('mycsv.csv', 'com.databricks.spark.csv') ``` * Spark 1.4+ ``` df.write.format('com.databricks.spark.csv').save('mycsv.csv') ``` In Spark 2.0+ you can use the `csv` data source directly: ``` df.write.csv('mycsv.csv') ```
Right Justify python
31,389,267
4
2015-07-13T17:02:11Z
31,389,332
9
2015-07-13T17:05:27Z
[ "python", "python-3.x" ]
how can I justify the output of this code? ``` N = int(input()) case = '#' print(case) for i in range(N): case += '#' print(case) ```
You can use `format` with `>` to right justify ``` N = 10 for i in range(1, N+1): print('{:>10}'.format('#'*i)) ``` Output ``` # ## ### #### ##### ###### ####### ######## ######### ########## ``` You can programmatically figure out how far to right-justify using `rjust` as well. ``` for i in range(1, N+1): print(('#'*i).rjust(N)) ```
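As a quick sanity check, here is a sketch (with an assumed width `N = 5`) showing that the two approaches agree; `'{:>{w}}'` takes the field width itself as a format argument, so the `10` does not have to be hard-coded:

```python
N = 5  # assumed width for illustration

# Width passed as a nested format argument instead of hard-coded:
fmt_lines = ['{:>{w}}'.format('#' * i, w=N) for i in range(1, N + 1)]
rjust_lines = [('#' * i).rjust(N) for i in range(1, N + 1)]

assert fmt_lines == rjust_lines  # both right-justify identically
print('\n'.join(rjust_lines))
```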
using sqlalchemy to load csv file into a database
31,394,998
5
2015-07-13T23:09:43Z
31,397,990
9
2015-07-14T05:00:57Z
[ "python", "sqlalchemy" ]
I am trying to learn to program in Python. I would like to load csv files into a database. Is it a good idea to use sqlalchemy as a framework to insert the data? Each file is a database table. Some of these files have foreign keys to other csv files / db tables. Thanks!
Because of the power of SQLAlchemy, I'm also using it on a project. Its power comes from the object-oriented way of "talking" to a database instead of hardcoding SQL statements that can be a pain to manage. Not to mention, it's also a lot faster. To answer your question bluntly, yes! Storing data from a CSV into a database using SQLAlchemy is a piece of cake. Here's a full working example (I used SQLAlchemy 1.0.6 and Python 2.7.6): ``` from numpy import genfromtxt from time import time from datetime import datetime from sqlalchemy import Column, Integer, Float, Date from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import create_engine from sqlalchemy.orm import sessionmaker def Load_Data(file_name): data = genfromtxt(file_name, delimiter=',', skiprows=1, converters={0: lambda s: str(s)}) return data.tolist() Base = declarative_base() class Price_History(Base): #Tell SQLAlchemy what the table name is and if there's any table-specific arguments it should know about __tablename__ = 'Price_History' __table_args__ = {'sqlite_autoincrement': True} #tell SQLAlchemy the name of column and its attributes: id = Column(Integer, primary_key=True, nullable=False) date = Column(Date) opn = Column(Float) hi = Column(Float) lo = Column(Float) close = Column(Float) vol = Column(Float) if __name__ == "__main__": t = time() #Create the database engine = create_engine('sqlite:///csv_test.db') Base.metadata.create_all(engine) #Create the session session = sessionmaker() session.configure(bind=engine) s = session() try: file_name = "t.csv" #sample CSV file used: http://www.google.com/finance/historical?q=NYSE%3AT&ei=W4ikVam8LYWjmAGjhoHACw&output=csv data = Load_Data(file_name) for i in data: record = Price_History(**{ 'date' : datetime.strptime(i[0], '%d-%b-%y').date(), 'opn' : i[1], 'hi' : i[2], 'lo' : i[3], 'close' : i[4], 'vol' : i[5] }) s.add(record) #Add all the records s.commit() #Attempt to commit all the records except: s.rollback() #Rollback the changes
on error finally: s.close() #Close the connection print "Time elapsed: " + str(time() - t) + " s." #0.091s ``` (Note: this is not necessarily the "best" way to do this, but I think this format is very readable for a beginner; it's also very fast: 0.091s for 251 records inserted!) I think if you go through it line by line, you'll see what a breeze it is to use. Notice the lack of SQL statements -- hooray! I also took the liberty of using numpy to load the CSV contents in two lines, but it can be done without it if you like. If you wanted to compare against the traditional way of doing it, here's a full-working example for reference: ``` import sqlite3 import time from numpy import genfromtxt def dict_factory(cursor, row): d = {} for idx, col in enumerate(cursor.description): d[col[0]] = row[idx] return d def Create_DB(db): #Create DB and format it as needed with sqlite3.connect(db) as conn: conn.row_factory = dict_factory conn.text_factory = str cursor = conn.cursor() cursor.execute("CREATE TABLE [Price_History] ([id] INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL UNIQUE, [date] DATE, [opn] FLOAT, [hi] FLOAT, [lo] FLOAT, [close] FLOAT, [vol] INTEGER);") def Add_Record(db, data): #Insert record into table with sqlite3.connect(db) as conn: conn.row_factory = dict_factory conn.text_factory = str cursor = conn.cursor() cursor.execute("INSERT INTO Price_History({cols}) VALUES({vals});".format(cols = str(data.keys()).strip('[]'), vals=str([data[i] for i in data]).strip('[]') )) def Load_Data(file_name): data = genfromtxt(file_name, delimiter=',', skiprows=1, converters={0: lambda s: str(s)}) return data.tolist() if __name__ == "__main__": t = time.time() db = 'csv_test_sql.db' #Database filename file_name = "t.csv" #sample CSV file used: http://www.google.com/finance/historical?q=NYSE%3AT&ei=W4ikVam8LYWjmAGjhoHACw&output=csv data = Load_Data(file_name) #Get data from CSV Create_DB(db) #Create DB #For every record, format and insert to table for i in data: record = { 'date' : 
i[0], 'opn' : i[1], 'hi' : i[2], 'lo' : i[3], 'close' : i[4], 'vol' : i[5] } Add_Record(db, record) print "Time elapsed: " + str(time.time() - t) + " s." #3.604s ``` (Note: even in the "old" way, this is by no means the best way to do this, but it's very readable and a "1-to-1" translation from the SQLAlchemy way vs. the "old" way.) Notice the SQL statements: one to create the table, the other to insert records. Also, notice that it's a bit more cumbersome to maintain long SQL strings vs. a simple class attribute addition. Liking SQLAlchemy so far? As for your foreign key inquiry, of course. SQLAlchemy has the power to do this too. Here's an example of how a class attribute would look with a foreign key assignment (assuming the `ForeignKey` class has also been imported from the `sqlalchemy` module): ``` class Asset_Analysis(Base): #Tell SQLAlchemy what the table name is and if there's any table-specific arguments it should know about __tablename__ = 'Asset_Analysis' __table_args__ = {'sqlite_autoincrement': True} #tell SQLAlchemy the name of column and its attributes: id = Column(Integer, primary_key=True, nullable=False) fid = Column(Integer, ForeignKey('Price_History.id')) ``` which marks the "fid" column as a foreign key to Price\_History's id column. Hope that helps!
how to write to a new cell in python using openpyxl
31,395,058
2
2015-07-13T23:15:49Z
31,395,124
8
2015-07-13T23:22:52Z
[ "python", "excel", "openpyxl" ]
I wrote code which opens an excel file, iterates through each row and passes the value to another function. ``` import openpyxl wb = load_workbook(filename='C:\Users\xxxxx') for ws in wb.worksheets: for row in ws.rows: print row x1=ucr(row[0].value) row[1].value=x1 # i am having error at this point ``` I am getting the following error when I try to run the file. ``` TypeError: IndexError: tuple index out of range ``` Can I write the returned value x1 to the row[1] column? Is it possible to write to excel (i.e. using row[1]) instead of accessing single cells like "ws['c1']=x1"?
Try this: ``` import openpyxl wb = openpyxl.load_workbook(filename='xxxx.xlsx') ws = wb.worksheets[0] ws['A1'] = 1 ws.cell(row=2, column=2).value = 2 ws.cell(coordinate="C3").value = 3 # 'coordinate=' is optional here ``` This will set cells A1, B2 and C3 to 1, 2 and 3 respectively (three different ways of setting cell values in a worksheet). The second method (specifying row and column) is most useful for your situation: ``` import openpyxl wb = openpyxl.load_workbook(filename='xxxxx.xlsx') for ws in wb.worksheets: for index, row in enumerate(ws.rows, start=1): print row x1 = ucr(row[0].value) ws.cell(row=index, column=2).value = x1 ```
Error setting up Vagrant with VirtualBox in PyCharm under OS X 10.10
31,395,112
21
2015-07-13T23:21:17Z
31,414,015
21
2015-07-14T18:01:59Z
[ "python", "osx", "vagrant", "virtualbox", "pycharm" ]
When setting up the remote interpreter and selecting Vagrant, I get the following error in PyCharm: ``` Can't Get Vagrant Settings: [0;31mThe provider 'virtualbox' that was requested to back the machine 'default' is reporting that it isn't usable on this system. The reason is shown bellow: Vagrant could not detect VirtualBox! Make sure VirtualBox is properly installed. Vagrant uses the `VBoxManage` binary that ships with VirtualBox, and requires this to be available on the PATH. If VirtualBox is installed, please find the `VBoxManage` binary and add it to the PATH environment variable.[0m ``` Now, from a terminal, everything works. I can do 'up' and ssh into the VM without issues. Ports are forwarded as well as local files. So the issue is only in PyCharm. I have installed Java 1.8 PATH is: /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin which VBoxManage: /usr/local/bin/VBoxManage and works in terminal. Note that this is a fresh install of OS X done this morning. Vagrant version is 1.7.3, VirtualBox is 4.3.30 and PyCharm is 4.5.3
Turns out, this problem is a known bug in PyCharm. Until they fix it, you can get around the problem by launching PyCharm from a terminal window with the `charm` command. [Vagrant 1.7.3 and VirtualBox 4.3.30 under Pycharm 4.5: Path issue](https://youtrack.jetbrains.com/issue/PY-16441)
Error setting up Vagrant with VirtualBox in PyCharm under OS X 10.10
31,395,112
21
2015-07-13T23:21:17Z
32,601,098
28
2015-09-16T06:20:06Z
[ "python", "osx", "vagrant", "virtualbox", "pycharm" ]
When setting up the remote interpreter and selecting Vagrant, I get the following error in PyCharm: ``` Can't Get Vagrant Settings: [0;31mThe provider 'virtualbox' that was requested to back the machine 'default' is reporting that it isn't usable on this system. The reason is shown bellow: Vagrant could not detect VirtualBox! Make sure VirtualBox is properly installed. Vagrant uses the `VBoxManage` binary that ships with VirtualBox, and requires this to be available on the PATH. If VirtualBox is installed, please find the `VBoxManage` binary and add it to the PATH environment variable.[0m ``` Now, from a terminal, everything works. I can do 'up' and ssh into the VM without issues. Ports are forwarded as well as local files. So the issue is only in PyCharm. I have installed Java 1.8 PATH is: /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin which VBoxManage: /usr/local/bin/VBoxManage and works in terminal. Note that this is a fresh install of OS X done this morning. Vagrant version is 1.7.3, VirtualBox is 4.3.30 and PyCharm is 4.5.3
Another workaround: ``` sudo ln -s /usr/local/bin/VBoxManage /usr/bin/VBoxManage ``` Edit: Since it all worked some time ago, one of the following has to be the cause of this problem: * either an update of VirtualBox changed the location of its executable * or an update of PyCharm changed the PATH settings / executable location expectation for the IDE Whatever the cause is, the solution is to make sure VBoxManage is in the location expected by PyCharm. I didn't make up this solution myself, just googled it, but because it is so nice and clean I decided to add it here.
Difficulty finding a Python 3.x implementation of the familiar C for-loop
31,395,587
4
2015-07-14T00:14:30Z
31,395,910
7
2015-07-14T00:55:03Z
[ "python", "python-3.x" ]
I'm inexperienced in Python and started with Python 3.4. I read over the Python 3.x documentation on [loop idioms](http://docs.python.org/release/3.4.0/tutorial/datastructures.html#tut-loopidioms), and haven't found a way of constructing a familiar C-family *for-loop*, i.e. ``` for (i = 0; i < n; i++) { A[i] = value; } ``` Writing a *for*-loop like this in Python seems all but impossible by design. Does anyone know the reason why Python iteration over a sequence follows a pattern like ``` for x in iterable: # e.g. range, itertools.count, generator functions pass; ``` Is this more efficient or convenient, or does it reduce index-out-of-bounds exceptions?
``` for lower <= var < upper: ``` That was [the proposed syntax](https://www.python.org/dev/peps/pep-0284/) for a C-style loop. I say "was the proposed syntax", because PEP 284 was rejected, because: > Specifically, Guido did not buy the premise that the range() format needed fixing, "The whole point (15 years ago) of range() was to \*avoid\* needing syntax to specify a loop over numbers. I think it's worked out well and there's nothing that needs to be fixed (except range() needs to become an iterator, which it will in Python 3.0)." So no `for lower <= var < upper:` for us. Now, how to get a C-style loop? Well, you can use `range([start,]end[,step])`. ``` for i in range(0,len(blah),3): blah[i] += merp #alters every third element of blah #step defaults to 1 if left off ``` You can `enumerate` if you need both index and value: ``` for i,j in enumerate(blah): merp[j].append(i) ``` If you wanted to look at two (or more!) iterators together you can `zip` them (Also: [`itertools.izip`](https://docs.python.org/2/library/itertools.html#itertools.izip) and [`itertools.izip_longest`](https://docs.python.org/2/library/itertools.html#itertools.izip_longest)) ``` for i,j in zip(foo,bar): if i == j: print("Scooby-Doo!") ``` And finally, there's always the `while` loop ``` i = 0 while i < upper: A[i] = b i += 1 ``` Addendum: There's also [PEP 276](https://www.python.org/dev/peps/pep-0276/), which suggested making `int`s iterable, which was also rejected. Still would have been half-open
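To tie the idioms together, here is a small runnable sketch (the variable names are made up) showing that the literal C-style translation and the more idiomatic versions fill a list identically:

```python
n = 5      # loop bound, as in `for (i = 0; i < n; i++)`
value = 7  # arbitrary fill value

# Literal C-style translation: explicit index and increment
A = [0] * n
i = 0
while i < n:
    A[i] = value
    i += 1

# Idiomatic Python: iterate over the index range...
B = [0] * n
for i in range(n):
    B[i] = value

# ...or skip explicit indices entirely with a comprehension
C = [value for _ in range(n)]

assert A == B == C
print(A)  # [7, 7, 7, 7, 7]
```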
ipython server can't launch: No module named notebook.notebookapp
31,397,421
53
2015-07-14T04:02:59Z
31,426,690
18
2015-07-15T09:34:36Z
[ "python", "server", "ipython" ]
I've been trying to setup an ipython server following several tutorials (since none was exactly my case). A couple days ago, I did manage to get it to the point where it was launching but then was not able to access it via url. Today it's not launching anymore and I can't find much about this specific error I get: ``` Traceback (most recent call last): File "/usr/local/bin/ipython", line 9, in <module> load_entry_point('ipython==4.0.0-dev', 'console_scripts', 'ipython')() File "/usr/local/lib/python2.7/dist-packages/ipython-4.0.0_dev-py2.7.egg/IPython/__init__.py", line 118, in start_ipython return launch_new_instance(argv=argv, **kwargs) File "/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 591, in launch_instance app.initialize(argv) File "<string>", line 2, in initialize File "/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 75, in catch_config_error return method(app, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/ipython-4.0.0_dev-py2.7.egg/IPython/terminal/ipapp.py", line 302, in initialize super(TerminalIPythonApp, self).initialize(argv) File "<string>", line 2, in initialize File "/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 75, in catch_config_error return method(app, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/ipython-4.0.0_dev-py2.7.egg/IPython/core/application.py", line 386, in initialize self.parse_command_line(argv) File "/usr/local/lib/python2.7/dist-packages/ipython-4.0.0_dev-py2.7.egg/IPython/terminal/ipapp.py", line 297, in parse_command_line return super(TerminalIPythonApp, self).parse_command_line(argv) File "<string>", line 2, in parse_command_line File "/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 75, in catch_config_error return method(app, *args, **kwargs) File 
"/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 487, in parse_command_line return self.initialize_subcommand(subc, subargv) File "<string>", line 2, in initialize_subcommand File "/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 75, in catch_config_error return method(app, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 418, in initialize_subcommand subapp = import_item(subapp) File "build/bdist.linux-x86_64/egg/ipython_genutils/importstring.py", line 31, in import_item ImportError: No module named notebook.notebookapp ``` So about the setup, I have installed the anaconda distrib of ipython, pyzmq & tornado libraries. I have created a profile nbserver and the config file is as follows - ipython.config.py: ``` c = get_config() c.IPKernalApp.pylab = 'inline' c.NotebookApp.certfile = u'/home/ludo/.ipython/profile_nbserver/mycert.pem' c.NotebookApp.ip = '*' c.NotebookApp.open_browser = False c.NotebookApp.password = u'sha1:e6cb2aa9a[...]' c.NotebookApp.port = 9999 c.NotebookManager.notebook_dir = u'/var/www/ipynb/' c.NotebookApp.base_project_url = '/ipynb/' c.NotebookApp.base_kernel_url = '/ipynb/' c.NotebookApp.webapp_settings = {'static_url_prefix':'/ipynb/static/'} ``` I really don't know where to look for clues anymore - and I'm probably lacking a greater understanding of how all this works to figure it out. My ultimate goal is to then use the answer to [this question](http://stackoverflow.com/questions/23890386/how-to-run-ipython-behind-an-apache-proxy) on SO to complete a setup behind apache and eventually connect it to colaboratory - but seems like it should launch first. Many thanks for any help :)
I ran into the same problem when upgrading IPython. At the time this answer was written, it was a bug linked to the latest version `4`. If a similar problem occurs and you wish to switch back to the stable version `3.2.1`: ``` pip uninstall -y IPython pip install ipython==3.2.1 ``` * note: the `-y` option indicates "yes I want to uninstall" with no interaction. * note 2: possible duplicate in [ImportError: No module named notebook.notebookapp](http://stackoverflow.com/questions/31401890/importerror-no-module-named-notebook-notebookapp)
ipython server can't launch: No module named notebook.notebookapp
31,397,421
53
2015-07-14T04:02:59Z
32,166,022
124
2015-08-23T11:15:49Z
[ "python", "server", "ipython" ]
I've been trying to setup an ipython server following several tutorials (since none was exactly my case). A couple days ago, I did manage to get it to the point where it was launching but then was not able to access it via url. Today it's not launching anymore and I can't find much about this specific error I get: ``` Traceback (most recent call last): File "/usr/local/bin/ipython", line 9, in <module> load_entry_point('ipython==4.0.0-dev', 'console_scripts', 'ipython')() File "/usr/local/lib/python2.7/dist-packages/ipython-4.0.0_dev-py2.7.egg/IPython/__init__.py", line 118, in start_ipython return launch_new_instance(argv=argv, **kwargs) File "/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 591, in launch_instance app.initialize(argv) File "<string>", line 2, in initialize File "/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 75, in catch_config_error return method(app, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/ipython-4.0.0_dev-py2.7.egg/IPython/terminal/ipapp.py", line 302, in initialize super(TerminalIPythonApp, self).initialize(argv) File "<string>", line 2, in initialize File "/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 75, in catch_config_error return method(app, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/ipython-4.0.0_dev-py2.7.egg/IPython/core/application.py", line 386, in initialize self.parse_command_line(argv) File "/usr/local/lib/python2.7/dist-packages/ipython-4.0.0_dev-py2.7.egg/IPython/terminal/ipapp.py", line 297, in parse_command_line return super(TerminalIPythonApp, self).parse_command_line(argv) File "<string>", line 2, in parse_command_line File "/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 75, in catch_config_error return method(app, *args, **kwargs) File 
"/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 487, in parse_command_line return self.initialize_subcommand(subc, subargv) File "<string>", line 2, in initialize_subcommand File "/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 75, in catch_config_error return method(app, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/traitlets-4.0.0-py2.7.egg/traitlets/config/application.py", line 418, in initialize_subcommand subapp = import_item(subapp) File "build/bdist.linux-x86_64/egg/ipython_genutils/importstring.py", line 31, in import_item ImportError: No module named notebook.notebookapp ``` So about the setup, I have installed the anaconda distrib of ipython, pyzmq & tornado libraries. I have created a profile nbserver and the config file is as follows - ipython.config.py: ``` c = get_config() c.IPKernalApp.pylab = 'inline' c.NotebookApp.certfile = u'/home/ludo/.ipython/profile_nbserver/mycert.pem' c.NotebookApp.ip = '*' c.NotebookApp.open_browser = False c.NotebookApp.password = u'sha1:e6cb2aa9a[...]' c.NotebookApp.port = 9999 c.NotebookManager.notebook_dir = u'/var/www/ipynb/' c.NotebookApp.base_project_url = '/ipynb/' c.NotebookApp.base_kernel_url = '/ipynb/' c.NotebookApp.webapp_settings = {'static_url_prefix':'/ipynb/static/'} ``` I really don't know where to look for clues anymore - and I'm probably lacking a greater understanding of how all this works to figure it out. My ultimate goal is to then use the answer to [this question](http://stackoverflow.com/questions/23890386/how-to-run-ipython-behind-an-apache-proxy) on SO to complete a setup behind apache and eventually connect it to colaboratory - but seems like it should launch first. Many thanks for any help :)
This should fix the issue: ``` pip install jupyter ```
Scrapyd-deploy command not found after scrapyd installation
31,398,348
5
2015-07-14T05:31:36Z
31,419,370
9
2015-07-15T00:01:57Z
[ "python", "web-scraping", "scrapy", "twisted", "scrapyd" ]
I have created a couple of web spiders that I intend to run simultaneously with scrapyd. I first successfully installed scrapyd in Ubuntu 14.04 using the command: pip install scrapyd, and when I run the command: scrapyd, I get the following output in the terminal: ``` 2015-07-14 01:22:02-0400 [-] Log opened. 2015-07-14 01:22:02-0400 [-] twistd 13.2.0 (/usr/bin/python 2.7.6) starting up. 2015-07-14 01:22:02-0400 [-] reactor class: twisted.internet.epollreactor.EPollReactor. 2015-07-14 01:22:02-0400 [-] Site starting on 6800 2015-07-14 01:22:02-0400 [-] Starting factory <twisted.web.server.Site instance at 0x7f762f4391b8> 2015-07-14 01:22:02-0400 [Launcher] Scrapyd 1.1.0 started: max_proc=8, runner='scrapyd.runner' ``` I believe that the fact that I got this output suggests that scrapy is working; however, when I run the command: scrapyd-deploy as in the [docs](https://github.com/scrapy/scrapyd-client), I get the error: scrapyd-deploy: command not found. How can this be possible if the installation was successful? I included the following target in the config file: ``` [deploy:scrapyd2] url = http://scrapyd.mydomain.com/api/scrapyd/ username = name password = secret ``` I'm not exactly sure how the target works, but I basically copied it from the docs so I would think that it would work. Is there something that I am supposed to import or configure that I haven't? Thanks.
`scrapyd-deploy` is a part of [scrapyd-client](https://github.com/scrapy/scrapyd-client).You can install it from [PyPi](https://pypi.python.org/pypi/scrapyd-client/). Try: ``` $ sudo pip install scrapyd-client ```
Column filtering in PySpark
31,400,143
5
2015-07-14T07:19:51Z
31,403,594
12
2015-07-14T10:05:51Z
[ "python", "lambda", "apache-spark", "apache-spark-sql", "pyspark" ]
I have a dataframe `df` loaded from Hive table and it has a timestamp column, say `ts`, with string type of format `dd-MMM-yy hh.mm.ss.MS a` (converted to python datetime library, this is `%d-%b-%y %I.%M.%S.%f %p`). Now I want to filter rows from the dataframe that are from the last five minutes: ``` only_last_5_minutes = df.filter( datetime.strptime(df.ts, '%d-%b-%y %I.%M.%S.%f %p') > datetime.now() - timedelta(minutes=5) ) ``` However, this does not work and I get this message ``` TypeError: strptime() argument 1 must be string, not Column ``` It looks like I have wrong application of column operation and it seems to me I have to create a lambda function to filter each column that satisfies the desired condition, but being a newbie to Python and lambda expression in particular, I don't know how to create my filter correct. Please advise. P.S. I prefer to express my filters as Python native (or SparkSQL) rather than a filter inside Hive sql query expression 'WHERE'. preferred: ``` df = sqlContext.sql("SELECT * FROM my_table") df.filter( // filter here) ``` not preferred: ``` df = sqlContext.sql("SELECT * FROM my_table WHERE...") ```
It is possible to use a user defined function. ``` from datetime import datetime, timedelta from pyspark.sql.types import BooleanType, TimestampType from pyspark.sql.functions import udf, col def in_last_5_minutes(now): def _in_last_5_minutes(then): then_parsed = datetime.strptime(then, '%d-%b-%y %I.%M.%S.%f %p') return then_parsed > now - timedelta(minutes=5) return udf(_in_last_5_minutes, BooleanType()) ``` Using some dummy data: ``` df = sqlContext.createDataFrame([ (1, '14-Jul-15 11.34.29.000000 AM'), (2, '14-Jul-15 11.34.27.000000 AM'), (3, '14-Jul-15 11.32.11.000000 AM'), (4, '14-Jul-15 11.29.00.000000 AM'), (5, '14-Jul-15 11.28.29.000000 AM') ], ('id', 'datetime')) now = datetime(2015, 7, 14, 11, 35) df.where(in_last_5_minutes(now)(col("datetime"))).show() ``` And as expected we get only 3 entries: ``` +--+--------------------+ |id| datetime| +--+--------------------+ | 1|14-Jul-15 11.34.2...| | 2|14-Jul-15 11.34.2...| | 3|14-Jul-15 11.32.1...| +--+--------------------+ ``` Parsing the datetime string all over again is rather inefficient, so you may consider storing a `TimestampType` instead.
``` def parse_dt(): def _parse(dt): return datetime.strptime(dt, '%d-%b-%y %I.%M.%S.%f %p') return udf(_parse, TimestampType()) df_with_timestamp = df.withColumn("timestamp", parse_dt()(df.datetime)) def in_last_5_minutes(now): def _in_last_5_minutes(then): return then > now - timedelta(minutes=5) return udf(_in_last_5_minutes, BooleanType()) df_with_timestamp.where(in_last_5_minutes(now)(col("timestamp"))) ``` and result: ``` +--+--------------------+--------------------+ |id| datetime| timestamp| +--+--------------------+--------------------+ | 1|14-Jul-15 11.34.2...|2015-07-14 11:34:...| | 2|14-Jul-15 11.34.2...|2015-07-14 11:34:...| | 3|14-Jul-15 11.32.1...|2015-07-14 11:32:...| +--+--------------------+--------------------+ ``` Finally, it is possible to use a raw SQL query with timestamps: ``` query = """SELECT * FROM df WHERE unix_timestamp(datetime, 'dd-MMM-yy HH.mm.ss.SSSSSS a') > {0} """.format(time.mktime((now - timedelta(minutes=5)).timetuple())) sqlContext.sql(query) ``` Same as above, it would be more efficient to parse date strings once. If the column is already a `timestamp`, it is possible to use `datetime` literals: ``` from pyspark.sql.functions import lit df_with_timestamp.where( df_with_timestamp.timestamp > lit(now - timedelta(minutes=5))) ``` **EDIT** Since Spark 1.5 you can parse the date string as follows: ``` from pyspark.sql.functions import from_unixtime, unix_timestamp from pyspark.sql.types import TimestampType df.select((from_unixtime(unix_timestamp( df.datetime, "yy-MMM-dd h.mm.ss.SSSSSS aa" ))).cast(TimestampType()).alias("datetime")) ```
How can this code print Hello World without any print statement
31,400,338
7
2015-07-14T07:29:18Z
31,400,518
14
2015-07-14T07:38:43Z
[ "python" ]
I found this code in Python, which prints "Hello World" without the use of the string "Hello World". It's a one line code, a single expression (i.e. no print statement). ``` (lambda _, __, ___, ____, _____, ______, _______, ________: getattr(__import__(True.__class__.__name__[_] + [].__class__.__name__[__]), ().__class__.__eq__.__class__.__name__[:__] + ().__iter__().__class__.__name__[_____:________])(_, (lambda _, __, ___: _(_, __, ___))(lambda _, __, ___: chr(___ % __) + _(_, __, ___ // __) if ___ else (lambda: _).func_code.co_lnotab, _ << ________, (((_____ << ____) + _) << ((___ << _____) - ___)) + (((((___ << __) - _) << ___) + _) << ((_____ << ____) + (_ << _))) + (((_______ << __) - _) << (((((_ << ___) + _)) << ___) + (_ << _))) + (((_______ << ___) + _) << ((_ << ______) + _)) + (((_______ << ____) - _) << ((_______ << ___))) + (((_ << ____) - _) << ((((___ << __) + _) << __) - _)) - (_______ << ((((___ << __) - _) << __) + _)) + (_______ << (((((_ << ___) + _)) << __))) - ((((((_ << ___) + _)) << __) + _) << ((((___ << __) + _) << _))) + (((_______ << __) - _) << (((((_ << ___) + _)) << _))) + (((___ << ___) + _) << ((_____ << _))) + (_____ << ______) + (_ << ___))))(*(lambda _, __, ___: _(_, __, ___))((lambda _, __, ___: [__(___[(lambda: _).func_code.co_nlocals])] + _(_, __, ___[(lambda _: _).func_code.co_nlocals:]) if ___ else []), lambda _: _.func_code.co_argcount, (lambda _: _, lambda _, __: _, lambda _, __, ___: _, lambda _, __, ___, ____: _, lambda _, __, ___, ____, _____: _, lambda _, __, ___, ____, _____, ______: _, lambda _, __, ___, ____, _____, ______, _______: _, lambda _, __, ___, ____, _____, ______, _______, ________: _))) ``` As it is a single line code, [Here's](http://codepad.org/UzSmoxF2) a well formatted code which is more readable. It is made up of only functions, attribute access, lists, tuples, basic math, one True, and one star-args. It has minimal builtin usage (`__import__`, `getattr`, and `chr` once each). 
It's really hard for me to understand it. Is there any possible explanation of what it does? [Here](https://benkurtovic.com/2014/06/01/obfuscating-hello-world.html), by the way, is where the author of the code explains how it works.
The answer to the question as written: The code avoids a `print` statement by `os.write()`ing to `stdout`'s file descriptor, which is `1`: ``` getattr(__import__("os"), "write")(1, "Hello world!\n") ``` The rest of the explanation is detailed at <https://benkurtovic.com/2014/06/01/obfuscating-hello-world.html>. Instead of a summary here, just read the original!
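The core trick can be demonstrated in isolation. Below is a small sketch of it (writing to a pipe instead of file descriptor 1 so the bytes can be read back; on Python 3 the payload must be bytes):

```python
import os

# The obfuscated program reduces to: getattr(__import__("os"), "write")(1, "Hello world!\n")
# File descriptor 1 is stdout, so os.write(1, ...) prints without any print statement.
# Here we write to a pipe instead of fd 1 so the output can be captured and inspected.
read_fd, write_fd = os.pipe()
getattr(__import__("os"), "write")(write_fd, b"Hello world!\n")
os.close(write_fd)
print(os.read(read_fd, 64))  # b'Hello world!\n'
os.close(read_fd)
```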
bounding box of numpy array
31,400,769
7
2015-07-14T07:49:45Z
31,402,351
8
2015-07-14T09:08:12Z
[ "python", "arrays", "numpy", "transformation" ]
Suppose you have a 2D numpy array with some random values and surrounding zeros. Example "tilted rectangle": ``` import numpy as np from skimage import transform img1 = np.zeros((100,100)) img1[25:75,25:75] = 1. img2 = transform.rotate(img1, 45) ``` Now I want to find the smallest bounding rectangle for all the nonzero data. For example: ``` a = np.where(img2 != 0) bbox = img2[np.min(a[0]):np.max(a[0])+1, np.min(a[1]):np.max(a[1])+1] ``` What would be the **fastest** way to achieve this result? I am sure there is a better way, since the np.where function takes quite some time when I am using e.g. 1000x1000 data sets. Edit: Should also work in 3D...
You can roughly halve the execution time by using `np.any` to reduce the rows and columns that contain non-zero values to 1D vectors, rather than finding the indices of all non-zero values using `np.where`: ``` def bbox1(img): a = np.where(img != 0) bbox = np.min(a[0]), np.max(a[0]), np.min(a[1]), np.max(a[1]) return bbox def bbox2(img): rows = np.any(img, axis=1) cols = np.any(img, axis=0) rmin, rmax = np.where(rows)[0][[0, -1]] cmin, cmax = np.where(cols)[0][[0, -1]] return rmin, rmax, cmin, cmax ``` Some benchmarks: ``` %timeit bbox1(img2) 10000 loops, best of 3: 63.5 µs per loop %timeit bbox2(img2) 10000 loops, best of 3: 37.1 µs per loop ``` Extending this approach to the 3D case just involves performing the reduction along each pair of axes: ``` def bbox2_3D(img): r = np.any(img, axis=(1, 2)) c = np.any(img, axis=(0, 2)) z = np.any(img, axis=(0, 1)) rmin, rmax = np.where(r)[0][[0, -1]] cmin, cmax = np.where(c)[0][[0, -1]] zmin, zmax = np.where(z)[0][[0, -1]] return rmin, rmax, cmin, cmax, zmin, zmax ``` It's easy to generalize this to *N* dimensions by using `itertools.combinations` to iterate over each unique combination of axes to perform the reduction over: ``` import itertools def bbox2_ND(img): N = img.ndim out = [] for ax in itertools.combinations(range(N), N - 1): nonzero = np.any(img, axis=ax) out.extend(np.where(nonzero)[0][[0, -1]]) return tuple(out) ``` --- If you know the coordinates of the corners of the original bounding box, the angle of rotation, and the centre of rotation, you could get the coordinates of the transformed bounding box corners directly by computing the corresponding [affine transformation matrix](https://en.wikipedia.org/wiki/Transformation_matrix#Affine_transformations) and dotting it with the input coordinates: ``` def bbox_rotate(bbox_in, angle, centre): rmin, rmax, cmin, cmax = bbox_in # bounding box corners in homogeneous coordinates xyz_in = np.array(([[cmin, cmin, cmax, cmax], [rmin, rmax, rmin, rmax], [ 1, 1, 1, 1]])) # 
translate centre to origin cr, cc = centre cent2ori = np.eye(3) cent2ori[:2, 2] = -cr, -cc # rotate about the origin theta = np.deg2rad(angle) rmat = np.eye(3) rmat[:2, :2] = np.array([[ np.cos(theta),-np.sin(theta)], [ np.sin(theta), np.cos(theta)]]) # translate from origin back to centre ori2cent = np.eye(3) ori2cent[:2, 2] = cr, cc # combine transformations (rightmost matrix is applied first) xyz_out = ori2cent.dot(rmat).dot(cent2ori).dot(xyz_in) r, c = xyz_out[:2] rmin = int(r.min()) rmax = int(r.max()) cmin = int(c.min()) cmax = int(c.max()) return rmin, rmax, cmin, cmax ``` This works out to be very slightly faster than using `np.any` for your small example array: ``` %timeit bbox_rotate([25, 75, 25, 75], 45, (50, 50)) 10000 loops, best of 3: 33 µs per loop ``` However, since the speed of this method is independent of the size of the input array, it can be quite a lot faster for larger arrays. Extending the transformation approach to 3D is slightly more complicated, in that the rotation now has three different components (one about the x-axis, one about the y-axis and one about the z-axis), but the basic method is the same: ``` def bbox_rotate_3d(bbox_in, angle_x, angle_y, angle_z, centre): rmin, rmax, cmin, cmax, zmin, zmax = bbox_in # bounding box corners in homogeneous coordinates xyzu_in = np.array(([[cmin, cmin, cmin, cmin, cmax, cmax, cmax, cmax], [rmin, rmin, rmax, rmax, rmin, rmin, rmax, rmax], [zmin, zmax, zmin, zmax, zmin, zmax, zmin, zmax], [ 1, 1, 1, 1, 1, 1, 1, 1]])) # translate centre to origin cr, cc, cz = centre cent2ori = np.eye(4) cent2ori[:3, 3] = -cr, -cc, -cz # rotation about the x-axis theta = np.deg2rad(angle_x) rmat_x = np.eye(4) rmat_x[1:3, 1:3] = np.array([[ np.cos(theta),-np.sin(theta)], [ np.sin(theta), np.cos(theta)]]) # rotation about the y-axis theta = np.deg2rad(angle_y) rmat_y = np.eye(4) rmat_y[[0, 0, 2, 2], [0, 2, 0, 2]] = ( np.cos(theta), np.sin(theta), -np.sin(theta), np.cos(theta)) # rotation about the z-axis theta =
np.deg2rad(angle_z) rmat_z = np.eye(4) rmat_z[:2, :2] = np.array([[ np.cos(theta),-np.sin(theta)], [ np.sin(theta), np.cos(theta)]]) # translate from origin back to centre ori2cent = np.eye(4) ori2cent[:3, 3] = cr, cc, cz # combine transformations (rightmost matrix is applied first) tform = ori2cent.dot(rmat_z).dot(rmat_y).dot(rmat_x).dot(cent2ori) xyzu_out = tform.dot(xyzu_in) r, c, z = xyzu_out[:3] rmin = int(r.min()) rmax = int(r.max()) cmin = int(c.min()) cmax = int(c.max()) zmin = int(z.min()) zmax = int(z.max()) return rmin, rmax, cmin, cmax, zmin, zmax ``` I've essentially just modified the function above using the rotation matrix expressions from [here](https://en.wikipedia.org/wiki/Rotation_matrix#Basic_rotations) - I haven't had time to write a test-case yet, so use with caution.
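As a quick sanity check, the `np.any` reduction from the answer can be compared against the `np.where` approach from the question (function names `bbox1`/`bbox2` as in the answer; the test array here is my own):

```python
import numpy as np

def bbox1(img):
    # the question's approach: indices of every nonzero element
    a = np.where(img != 0)
    return np.min(a[0]), np.max(a[0]), np.min(a[1]), np.max(a[1])

def bbox2(img):
    # the answer's approach: reduce rows/columns with np.any first
    rows = np.any(img, axis=1)
    cols = np.any(img, axis=0)
    rmin, rmax = np.where(rows)[0][[0, -1]]
    cmin, cmax = np.where(cols)[0][[0, -1]]
    return rmin, rmax, cmin, cmax

img = np.zeros((100, 100))
img[25:75, 30:80] = 1.0
print(tuple(int(v) for v in bbox1(img)))  # (25, 74, 30, 79)
print(tuple(int(v) for v in bbox2(img)))  # (25, 74, 30, 79)
```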
How to remove parentheses only around single words in a string
31,405,409
7
2015-07-14T11:32:43Z
31,405,452
15
2015-07-14T11:34:49Z
[ "python", "regex" ]
Let's say I have a string like this: ``` s = '((Xyz_lk) some stuff (XYZ_l)) (and even more stuff (XyZ))' ``` I would like to remove the parentheses only around single words so that I obtain: ``` '(Xyz_lk some stuff XYZ_l) (and even more stuff XyZ)' ``` How would I do this in Python? So far I only managed to remove them along with the text by using ``` re.sub('\(\w+\)', '', s) ``` which gives ``` '( some stuff ) (and even more stuff )' ``` How can I only remove the parentheses and keep the text inside them?
``` re.sub(r'\((\w+)\)',r'\1',s) ``` Use `\1`, i.e. a backreference to the captured group, to keep the text while dropping the parentheses.
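Expanded into a runnable form (the variable name `s` and its value are taken from the question):

```python
import re

s = '((Xyz_lk) some stuff (XYZ_l)) (and even more stuff (XyZ))'
# \((\w+)\) only matches parentheses wrapping a single word; the
# backreference \1 substitutes the captured word back without them.
print(re.sub(r'\((\w+)\)', r'\1', s))
# (Xyz_lk some stuff XYZ_l) (and even more stuff XyZ)
```

The outer parentheses survive because their contents are more than a single `\w+` run.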
Determining implementation of Python at runtime?
31,407,123
11
2015-07-14T12:51:40Z
31,407,159
13
2015-07-14T12:52:57Z
[ "python" ]
I'm writing a piece of code that returns profiling information and it would be helpful to be able to dynamically return the implementation of Python in use. Is there a Pythonic way to determine which implementation (e.g. Jython, PyPy) of Python my code is executing on at runtime? I know that I am able to get version information from `sys.version`: ``` >>> import sys >>> sys.version '3.4.3 (default, May 1 2015, 19:14:18) \n[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.49)]' ``` but I'm not sure where to look in the `sys` module to get the implementation that the code is running.
You can use `python_implementation` from the `platform` module in [Python 3](https://docs.python.org/3/library/platform.html#platform.python_implementation) or [Python 2](https://docs.python.org/2/library/platform.html#platform.python_implementation). This returns a string that identifies the Python implementation. e.g. `return_implementation.py` ``` import platform print(platform.python_implementation()) ``` and iterating through some responses on the command line: ``` $ for i in python python3 pypy pypy3; do echo -n "implementation $i: "; $i return_implementation.py; done implementation python: CPython implementation python3: CPython implementation pypy: PyPy implementation pypy3: PyPy ``` Note that as of this answer's date, the possible responses are 'CPython', 'IronPython', 'Jython', 'PyPy', meaning that it's possible that your implementation will not be returned by this `python_implementation` function if it does not identify itself to the `sys` module as one of these types. [`python_implementation` is calling `sys.version`](https://hg.python.org/cpython/file/3.4/Lib/platform.py#l1207) under the hood and attempting to match the response to a regex pattern -- if nothing matches, no implementation string is returned.
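As a small aside not covered in the answer: on Python 3.3+ the `sys` module itself also records the implementation, via `sys.implementation`:

```python
import platform
import sys

# platform.python_implementation() returns the implementation name as a string
print(platform.python_implementation())  # e.g. 'CPython'

# Python 3.3+ also exposes the same information (lowercased) directly in sys
print(sys.implementation.name)           # e.g. 'cpython'
```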
Hiding lines after showing a pyplot figure
31,410,043
3
2015-07-14T14:50:50Z
31,417,070
7
2015-07-14T20:50:16Z
[ "python", "matplotlib" ]
I'm using pyplot to display a line graph of up to 30 lines. I would like to add a way to quickly show and hide individual lines on the graph. Pyplot does have a menu where you can edit line properties to change the color or style, but it's rather clunky when you want to hide lines to isolate the one you're interested in. Ideally, I'd like to use checkboxes on the legend to show and hide lines. (Similar to showing and hiding layers in image editors like Paint.Net) I'm not sure if this is possible with pyplot, so I am open to other modules as long as they're somewhat easy to distribute.
If you'd like, you can hook up a callback to the legend that will show/hide lines when they're clicked. There's a simple example here: <http://matplotlib.org/examples/event_handling/legend_picking.html> Here's a "fancier" example that should work without needing to manually specify the relationship of the lines and legend markers (Also has a few more features). ``` import numpy as np import matplotlib.pyplot as plt def main(): x = np.arange(10) fig, ax = plt.subplots() for i in range(1, 31): ax.plot(x, i * x, label=r'$y={}x$'.format(i)) ax.legend(loc='upper left', bbox_to_anchor=(1.05, 1), ncol=2, borderaxespad=0) fig.subplots_adjust(right=0.55) fig.suptitle('Right-click to hide all\nMiddle-click to show all', va='top', size='large') interactive_legend().show() def interactive_legend(ax=None): if ax is None: ax = plt.gca() if ax.legend_ is None: ax.legend() return InteractiveLegend(ax.legend_) class InteractiveLegend(object): def __init__(self, legend): self.legend = legend self.fig = legend.axes.figure self.lookup_artist, self.lookup_handle = self._build_lookups(legend) self._setup_connections() self.update() def _setup_connections(self): for artist in self.legend.texts + self.legend.legendHandles: artist.set_picker(10) # 10 points tolerance self.fig.canvas.mpl_connect('pick_event', self.on_pick) self.fig.canvas.mpl_connect('button_press_event', self.on_click) def _build_lookups(self, legend): labels = [t.get_text() for t in legend.texts] handles = legend.legendHandles label2handle = dict(zip(labels, handles)) handle2text = dict(zip(handles, legend.texts)) lookup_artist = {} lookup_handle = {} for artist in legend.axes.get_children(): if artist.get_label() in labels: handle = label2handle[artist.get_label()] lookup_handle[artist] = handle lookup_artist[handle] = artist lookup_artist[handle2text[handle]] = artist lookup_handle.update(zip(handles, handles)) lookup_handle.update(zip(legend.texts, handles)) return lookup_artist, lookup_handle def on_pick(self, event): 
handle = event.artist if handle in self.lookup_artist: artist = self.lookup_artist[handle] artist.set_visible(not artist.get_visible()) self.update() def on_click(self, event): if event.button == 3: visible = False elif event.button == 2: visible = True else: return for artist in self.lookup_artist.values(): artist.set_visible(visible) self.update() def update(self): for artist in self.lookup_artist.values(): handle = self.lookup_handle[artist] if artist.get_visible(): handle.set_visible(True) else: handle.set_visible(False) self.fig.canvas.draw() def show(self): plt.show() if __name__ == '__main__': main() ``` This allows you to click on legend items to toggle their corresponding artists on/off. For example, you can go from this: ![enter image description here](http://i.stack.imgur.com/jARwm.png) To this: ![enter image description here](http://i.stack.imgur.com/4N3g4.png)
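A stripped-down sketch of the same idea (my own simplification of the answer's approach, not a drop-in replacement): map each legend handle to its plotted line and flip visibility in a `pick_event` handler.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
x = np.arange(5)
lines = [ax.plot(x, i * x, label="y={}x".format(i))[0] for i in (1, 2, 3)]
legend = ax.legend()

# Map each legend line handle back to the plotted line it represents.
lookup = dict(zip(legend.get_lines(), lines))
for handle in legend.get_lines():
    handle.set_picker(5)  # 5-point pick tolerance

def on_pick(event):
    # Toggle the visibility of the line whose legend entry was clicked.
    line = lookup[event.artist]
    line.set_visible(not line.get_visible())
    fig.canvas.draw_idle()

fig.canvas.mpl_connect("pick_event", on_pick)
```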
Python: Why is popping off a queue faster than for-in block?
31,414,011
9
2015-07-14T18:01:53Z
31,414,080
12
2015-07-14T18:05:56Z
[ "python", "for-loop", "optimization", "while-loop" ]
I have been working on a python script to analyze CSVs. Some of these files are fairly large (1-2 million records), and the script was taking hours to complete. I changed the way the records are processed from a `for-in` loop to a `while` loop, and the speedup was remarkable. Demonstration below: ``` >>> def for_list(): ... for d in data: ... bunk = d**d ... >>> def while_list(): ... while data: ... d = data.pop(0) ... bunk = d**d ... >>> data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] >>> import timeit >>> timeit.timeit(for_list) 1.0698931217193604 >>> timeit.timeit(while_list) 0.14515399932861328 ``` Almost an order of magnitude faster. I've never looked at Python bytecode before, but I thought it might be telling; it turns out `while_list` has more instructions. So what's going on here? Is there a principle here I can apply to other programs? Are there scenarios where the `for` would be ten times faster than the `while`? **EDIT:** As @HappyLeapSecond pointed out, I didn't quite understand exactly what was going on inside `timeit`. The discrepancy is gone with the following: ``` >>> def for_list(): ... data = [x for x in range(1000)] ... for d in data: ... bunk = d**d ... >>> def while_list(): ... data = [x for x in range(1000)] ... while data: ... d = data.pop(0) ... bunk = d**d >>> timeit.timeit(while_list, number=1000) 12.006330966949463 >>> timeit.timeit(for_list, number=1000) 11.847280025482178 ``` Which makes it very odd that my "real" script sped up so much with such a simple change. My best guess is that the iteration method is requiring more swapping? I have a 40G swap partition, the script fills about 15-20G of it. Would popping reduce swapping?
`while_list` is mutating the global `data`. `timeit.timeit` does not reset the value of `data`. `timeit.timeit` calls `for_list` and `while_list` a million times each by default. After the first call to `while_list`, subsequent calls to `while_list` return after performing 0 loops because `data` is already empty. You need to reset the value of `data` before each call to `for_list` and `while_list` to perform a fair benchmark. --- ``` import timeit def for_list(data): for d in data: bunk = d ** d def while_list(data): while data: d = data.pop(0) bunk = d ** d data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] print(timeit.timeit('data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]; for_list(data)', 'from __main__ import for_list')) # 0.959696054459 print(timeit.timeit('data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]; while_list(data)', 'from __main__ import while_list')) # 2.40107011795 ``` `pop(0)` is a `O(n)` operation. Performing that inside a loop of length `n` makes `while_list` have an overall time complexity `O(n**2)`, compared to `O(n)` complexity for `for_list`. So as expected, `for_list` is faster, and the advantage grows as `n`, the length of `data`, gets larger.
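As a side note not taken from the answer above: if you genuinely need the consume-from-the-front pattern, `collections.deque` gives an O(1) `popleft()`, avoiding the quadratic behavior of `list.pop(0)`:

```python
import timeit

n = 20000
# Consuming a list from the front repeats an O(n) shift on every pop...
list_time = timeit.timeit(
    "while d: d.pop(0)",
    setup="d = list(range({}))".format(n),
    number=1,
)
# ...while deque.popleft() is O(1) per element.
deque_time = timeit.timeit(
    "while d: d.popleft()",
    setup="from collections import deque; d = deque(range({}))".format(n),
    number=1,
)
print(list_time, deque_time)  # the deque is dramatically faster for large n
```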
How to prepend a path to sys.path in Python?
31,414,041
17
2015-07-14T18:03:05Z
31,580,183
7
2015-07-23T06:55:16Z
[ "python", "ubuntu", "pip", "easy-install", "pythonpath" ]
**Problem description:** Using pip, I upgraded to the latest version of [requests](http://docs.python-requests.org/en/latest/) (version 2.7.0, with `pip show requests` giving the location `/usr/local/lib/python2.7/dist-packages`). When I `import requests` and print `requests.__version__` in the interactive command line, though, I am seeing version 2.2.1. It turns out that Python is using the pre-installed Ubuntu version of requests (`requests.__file__` is `/usr/lib/python2.7/dist-packages/requests/__init__.pyc` -- not `/usr/local/lib/...`). From my investigation, this fact is caused by Ubuntu's changes to the Python search path (I run Ubuntu 14.04) by prepending the path to Ubuntu's Python package (for my machine, this happens in `/usr/local/lib/python2.7/dist-packages/easy-install.pth`). In my case, this causes the `apt-get` version of requests, which is pre-packaged with Ubuntu, to be used, rather than the pip version I want to use. **What I'm looking for:** I want to globally prepend pip's installation directory path to Python's search path (`sys.path`), before the path to Ubuntu's Python installation directory. Since requests (and many other packages) are used in many Python scripts of mine, I don't want to manually change the search path for every single file on my machine. **Unsatisfactory Solution 1: Using virtualenv** Using [virtualenv](https://virtualenv.pypa.io/en/latest/) would cause an unnecessary amount of change to my machine, since I would have to reinstall every package that exists globally. I only want to upgrade from Ubuntu's packages to pip's packages. **Unsatisfactory Solution 2: Changing easy-install.pth** Since `easy-install.pth` is overwritten every time `easy-install` is used, my changes to `easy-install.pth` would be removed if a new package is installed. This problem makes it difficult to maintain the packages on my machine.
**Unsatisfactory (but best one I have so far) Solution 3: Adding a separate .pth file** In the same directory as easy-install.pth I added a `zzz.pth` with contents: ``` import sys; sys.__plen = len(sys.path) /usr/lib/python2.7/dist-packages/test_dir import sys; new=sys.path[sys.__plen:]; del sys.path[sys.__plen:]; p=getattr(sys,'__egginsert',0); sys.path[p:p]=new; sys.__egginsert = p+len(new) ``` This file is read by `site.py` when Python is starting. Since its file name comes after `easy-install.pth` alphanumerically, it is consumed by `site.py` afterwards. Taken together, the first and last lines of the file prepend the path to `sys.path` (these lines were taken from `easy-install.pth`). I don't like how this solution depends on the alphanumeric ordering of the file name to correctly place the new path. **PYTHONPATHs come after Ubuntu's paths** [Another answer](http://stackoverflow.com/questions/7472436/add-a-directory-to-python-sys-path-so-that-its-included-each-time-i-use-python) on Stack Overflow didn't work for me. My `PYTHONPATH` paths come after the paths in `easy-install.pth`, which uses the same code I mention in "Unsatisfactory solution 3" to prepend its paths. *Thank you in advance!*
You shouldn't need to mess with pip's path; Python actually handles its paths automatically in my experience. It appears you have two Pythons installed. If you type: ``` which pip which python ``` what paths do you see? If they're not in the same /bin folder, then that's your problem. I'm guessing that the python you're running (probably the original system one) doesn't have its own pip installed. You probably just need to make sure the path for the python you want to run comes before /usr/bin in your .bashrc or .zshrc If this is correct, then you should see that: ``` which easy_install ``` shares the same path as the python installation you're using, maybe under /usr/local/bin. Then just run: ``` easy_install pip ``` And start installing the right packages for the python that you're using.
ImportError: cannot import name wraps
31,417,964
19
2015-07-14T21:46:55Z
31,419,279
20
2015-07-14T23:50:22Z
[ "python", "mocking", "pyunit" ]
I'm using python 2.7.6 on Ubuntu 14.04.2 LTS. I'm using mock to mock some unittests and noticing when I import mock it fails importing wraps. Not sure if there's a different version of mock or six I should be using for it's import to work? Couldn't find any relevant answers and I'm not using virtual environments. mock module says it's compatible with python 2.7.x: <https://pypi.python.org/pypi/mock> mock==1.1.3 six==1.9.0 ``` Python 2.7.6 (default, Mar 22 2014, 22:59:56) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from mock import Mock Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.7/dist-packages/mock/__init__.py", line 2, in <module> import mock.mock as _mock File "/usr/local/lib/python2.7/dist-packages/mock/mock.py", line 68, in <module> from six import wraps ImportError: cannot import name wraps ``` also tried with sudo with no luck. ``` $ sudo python -c 'from six import wraps' Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: cannot import name wraps ```
Installed mock==1.0.1 and that worked for some reason. (shrugs) edit: The real fix for me was to **update setuptools** to the latest version, which allowed me to upgrade mock and six to the latest. I was on setuptools 3.3. In my case I also had to remove said modules by hand because they were owned by the OS in '/usr/local/lib/python2.7/dist-packages/' Check versions of everything ``` pip freeze | grep -e six -e mock easy_install --version ``` Update everything ``` wget https://bootstrap.pypa.io/ez_setup.py -O - | sudo python pip install mock --upgrade pip install six --upgrade ``` Thanks @lifeless
ImportError: cannot import name wraps
31,417,964
19
2015-07-14T21:46:55Z
31,739,766
15
2015-07-31T06:51:28Z
[ "python", "mocking", "pyunit" ]
I'm using python 2.7.6 on Ubuntu 14.04.2 LTS. I'm using mock to mock some unittests and noticing when I import mock it fails importing wraps. Not sure if there's a different version of mock or six I should be using for it's import to work? Couldn't find any relevant answers and I'm not using virtual environments. mock module says it's compatible with python 2.7.x: <https://pypi.python.org/pypi/mock> mock==1.1.3 six==1.9.0 ``` Python 2.7.6 (default, Mar 22 2014, 22:59:56) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from mock import Mock Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.7/dist-packages/mock/__init__.py", line 2, in <module> import mock.mock as _mock File "/usr/local/lib/python2.7/dist-packages/mock/mock.py", line 68, in <module> from six import wraps ImportError: cannot import name wraps ``` also tried with sudo with no luck. ``` $ sudo python -c 'from six import wraps' Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: cannot import name wraps ```
I encountered the same issue on my mac, which I was able to fix by realizing that my python's sys.path contained both ``` /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/ ``` and ``` /Library/Python/2.7/site-packages/ ``` with the former earlier than the latter. You can test if this is happening to you by running the following in the python console. ``` import six six.__version__ ``` My python was loading an outdated six.py from the former directory (which didn't have `wraps`), even though pip had installed a newer version of six in the second directory. (It seems mac's framework comes with a version of six by default.) I was able to fix it by moving six.py and six.pyc out of the first directory (requires sudo access), so that python would find the newer version of six in the second directory. I'm sure you could also change the ordering of the paths in sys.path. To find the older version of six that needs to be deleted, run this from the terminal console ``` find /System/Library/Frameworks/Python.framework/Versions -name six.py* ```
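The same diagnosis works for any shadowed module: print `module.__file__` and look at `sys.path` to see which copy wins. A sketch, using the stdlib `email` package as a stand-in for `six` (which may not be installed everywhere):

```python
import sys
import email  # stand-in for six; substitute the module you suspect is shadowed

# Which file did Python actually load for this module name?
print(email.__file__)

# sys.path is searched in order, so an earlier entry shadows later ones:
for p in sys.path[:5]:
    print(p)
```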
Python operator precedence - and vs greater than
31,421,316
6
2015-07-15T04:06:54Z
31,421,382
7
2015-07-15T04:13:31Z
[ "python", "operator-precedence" ]
I have a line of code in my script that has both these operators chained together. From the documentation reference, **BOOLEAN AND** has a lower precedence than **COMPARISON GREATER THAN**. I am getting unexpected results here in this code: ``` >>> def test(msg, value): ... print(msg) ... return value >>> test("First", 10) and test("Second", 15) > test("Third", 5) First Second Third True ``` I was expecting the Second or Third test to happen before the First one, since the `>` operator has a higher precedence. What am I doing wrong here? <https://docs.python.org/3/reference/expressions.html#operator-precedence>
Because you are looking at the wrong thing. `call` (a function call) takes higher precedence than both `and` and `>` (greater than). So first the function calls occur, from left to right: Python gets the results of all the function calls before either comparison happens. The only thing that would interfere here is short-circuiting: if `test("First", 10)` returned False, it would short-circuit and return False. The comparison and `and` still occur in their usual precedence order; that is, first the result of `test("Second", 15)` is compared against `test("Third", 5)` (note that only the return values are compared -- the function calls have already occurred). Then the result of `test("Second", 15) > test("Third", 5)` is used in the `and` operation. From the documentation on [operator precedence](https://docs.python.org/2/reference/expressions.html#operator-precedence) - ![enter image description here](http://i.stack.imgur.com/ne01T.png)
Why does "not(True) in [False, True]" return False?
31,421,379
418
2015-07-15T04:12:58Z
31,421,407
34
2015-07-15T04:17:14Z
[ "python", "python-2.7", "python-3.x" ]
If I do this: ``` >>> False in [False, True] True ``` That returns `True`. Simply because `False` is in the list. But if I do: ``` >>> not(True) in [False, True] False ``` That returns `False`. Whereas `not(True)` is equal to `False`: ``` >>> not(True) False ``` Why?
Operator precedence. `in` binds more tightly than `not`, so your expression is equivalent to `not((True) in [False, True])`.
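Concretely:

```python
# `in` binds tighter than `not`, so these two lines are the same expression:
print(not (True in [False, True]))   # False
print(not True in [False, True])     # False

# Parenthesizing the `not` gives the other reading:
print((not True) in [False, True])   # True
```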
Why does "not(True) in [False, True]" return False?
31,421,379
418
2015-07-15T04:12:58Z
31,421,410
33
2015-07-15T04:17:25Z
[ "python", "python-2.7", "python-3.x" ]
If I do this: ``` >>> False in [False, True] True ``` That returns `True`. Simply because `False` is in the list. But if I do: ``` >>> not(True) in [False, True] False ``` That returns `False`. Whereas `not(True)` is equal to `False`: ``` >>> not(True) False ``` Why?
It's all about [operator precedence](https://docs.python.org/2/reference/expressions.html) (`in` is stronger than `not`). But it can be easily corrected by adding parentheses at the right place: ``` (not(True)) in [False, True] # prints true ``` writing: ``` not(True) in [False, True] ``` is the same as: ``` not((True) in [False, True]) ``` which checks whether `True` is in the list and returns the "not" of the result.
Why does "not(True) in [False, True]" return False?
31,421,379
418
2015-07-15T04:12:58Z
31,421,411
670
2015-07-15T04:17:28Z
[ "python", "python-2.7", "python-3.x" ]
If I do this: ``` >>> False in [False, True] True ``` That returns `True`. Simply because `False` is in the list. But if I do: ``` >>> not(True) in [False, True] False ``` That returns `False`. Whereas `not(True)` is equal to `False`: ``` >>> not(True) False ``` Why?
**Operator precedence** [2.x](https://docs.python.org/2/reference/expressions.html#operator-precedence), [3.x](https://docs.python.org/3/reference/expressions.html#operator-precedence). The precedence of `not` is lower than that of `in`. So it is equivalent to: ``` >>> not (True in [False, True]) False ``` This is what you want: ``` >>> (not True) in [False, True] True ``` --- As @Ben points out: It's recommended to never write `not(True)`, prefer `not True`. The former makes it look like a function call, while `not` is an operator, not a function.
Why does "not(True) in [False, True]" return False?
31,421,379
418
2015-07-15T04:12:58Z
31,421,418
14
2015-07-15T04:18:32Z
[ "python", "python-2.7", "python-3.x" ]
If I do this: ``` >>> False in [False, True] True ``` That returns `True`. Simply because `False` is in the list. But if I do: ``` >>> not(True) in [False, True] False ``` That returns `False`. Whereas `not(True)` is equal to `False`: ``` >>> not(True) False ``` Why?
It is evaluating as `not True in [False, True]`, which returns `False` because `True` is in `[False, True]` If you try ``` >>>(not(True)) in [False, True] True ``` You get the expected result.
Why does "not(True) in [False, True]" return False?
31,421,379
418
2015-07-15T04:12:58Z
31,421,636
67
2015-07-15T04:39:48Z
[ "python", "python-2.7", "python-3.x" ]
If I do this: ``` >>> False in [False, True] True ``` That returns `True`. Simply because `False` is in the list. But if I do: ``` >>> not(True) in [False, True] False ``` That returns `False`. Whereas `not(True)` is equal to `False`: ``` >>> not(True) False ``` Why?
**`not x in y`** is evaluated as **`x not in y`** You can see exactly what's happening by disassembling the code. The first case works as you expect: ``` >>> x = lambda: False in [False, True] >>> dis.dis(x) 1 0 LOAD_GLOBAL 0 (False) 3 LOAD_GLOBAL 0 (False) 6 LOAD_GLOBAL 1 (True) 9 BUILD_LIST 2 12 COMPARE_OP 6 (in) 15 RETURN_VALUE ``` The second case, evaluates to `True not in [False, True]`, which is `False` clearly: ``` >>> x = lambda: not(True) in [False, True] >>> dis.dis(x) 1 0 LOAD_GLOBAL 0 (True) 3 LOAD_GLOBAL 1 (False) 6 LOAD_GLOBAL 0 (True) 9 BUILD_LIST 2 12 COMPARE_OP 7 (not in) 15 RETURN_VALUE >>> ``` What you wanted to express instead was `(not(True)) in [False, True]`, which as expected is `True`, and you can see why: ``` >>> x = lambda: (not(True)) in [False, True] >>> dis.dis(x) 1 0 LOAD_GLOBAL 0 (True) 3 UNARY_NOT 4 LOAD_GLOBAL 1 (False) 7 LOAD_GLOBAL 0 (True) 10 BUILD_LIST 2 13 COMPARE_OP 6 (in) 16 RETURN_VALUE ```
Why does "not(True) in [False, True]" return False?
31,421,379
418
2015-07-15T04:12:58Z
31,458,009
12
2015-07-16T15:07:36Z
[ "python", "python-2.7", "python-3.x" ]
If I do this: ``` >>> False in [False, True] True ``` That returns `True`. Simply because `False` is in the list. But if I do: ``` >>> not(True) in [False, True] False ``` That returns `False`. Whereas `not(True)` is equal to `False`: ``` >>> not(True) False ``` Why?
Alongside the other answers that mentioned that the precedence of `not` is lower than `in`: your statement is actually equivalent to: ``` not (True in [False, True]) ``` Note that if you don't separate your condition from the rest, Python will use one of two rules, `precedence` or `chaining`, to group it; in this case Python used precedence. Also note that if you want to separate a condition, you need to put the whole condition in parentheses, not just the object or value: ``` (not True) in [False, True] ``` --- As mentioned, there is another way Python groups comparison operators, which is **chaining**. Based on the Python [*documentation*](https://docs.python.org/3/reference/expressions.html#operator-precedence): > Note that comparisons, membership tests, and identity tests, all have the same precedence and have a left-to-right **chaining** feature as described in the Comparisons section. For example, the result of the following statement is `False`: ``` >>> True == False in [False, True] False ``` because Python chains the statement like the following: ``` (True == False) and (False in [False, True]) ``` which is exactly `False and True`, which is `False`. You can think of the middle operand (`False` in this case) as being shared between the two operations. And note that this holds for all comparisons, including membership tests and identity tests, i.e. the following operators: ``` in, not in, is, is not, <, <=, >, >=, !=, == ``` Example: ``` >>> 1 in [1,2] == True False ``` Another famous example is a number range: ``` 7<x<20 ``` which is equal to: ``` 7<x and x<20 ```
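The chaining rule described above can be checked directly (the value of `x` here is my own choice):

```python
# a OP1 b OP2 c is evaluated as (a OP1 b) and (b OP2 c), sharing the middle operand
print(True == False in [False, True])                 # False
print((True == False) and (False in [False, True]))   # False

# the familiar range check works the same way:
x = 10
print(7 < x < 20)                # True
print((7 < x) and (x < 20))      # True
```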
How to compute precision, recall, accuracy and f1-score for the multiclass case with scikit learn?
31,421,413
33
2015-07-15T04:17:36Z
31,558,398
9
2015-07-22T08:54:02Z
[ "python", "machine-learning", "nlp", "artificial-intelligence", "scikit-learn" ]
I'm working on a sentiment analysis problem; the data looks like this: ``` label instances 5 1190 4 838 3 239 1 204 2 127 ``` So my data is unbalanced, since 1190 `instances` are labeled with `5`. For the classification I'm using scikit's [SVC](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html). The problem is I do not know how to balance my data in the right way in order to accurately compute the precision, recall, accuracy and f1-score for the multiclass case. So I tried the following approaches: First: ``` wclf = SVC(kernel='linear', C= 1, class_weight={1: 10}) wclf.fit(X, y) weighted_prediction = wclf.predict(X_test) print 'Accuracy:', accuracy_score(y_test, weighted_prediction) print 'F1 score:', f1_score(y_test, weighted_prediction,average='weighted') print 'Recall:', recall_score(y_test, weighted_prediction, average='weighted') print 'Precision:', precision_score(y_test, weighted_prediction, average='weighted') print '\n clasification report:\n', classification_report(y_test, weighted_prediction) print '\n confussion matrix:\n',confusion_matrix(y_test, weighted_prediction) ``` Second: ``` auto_wclf = SVC(kernel='linear', C= 1, class_weight='auto') auto_wclf.fit(X, y) auto_weighted_prediction = auto_wclf.predict(X_test) print 'Accuracy:', accuracy_score(y_test, auto_weighted_prediction) print 'F1 score:', f1_score(y_test, auto_weighted_prediction, average='weighted') print 'Recall:', recall_score(y_test, auto_weighted_prediction, average='weighted') print 'Precision:', precision_score(y_test, auto_weighted_prediction, average='weighted') print '\n clasification report:\n', classification_report(y_test,auto_weighted_prediction) print '\n confussion matrix:\n',confusion_matrix(y_test, auto_weighted_prediction) ``` Third: ``` clf = SVC(kernel='linear', C= 1) clf.fit(X, y) prediction = clf.predict(X_test) from sklearn.metrics import precision_score, \ recall_score, confusion_matrix, classification_report, \ accuracy_score, f1_score print 
'Accuracy:', accuracy_score(y_test, prediction) print 'F1 score:', f1_score(y_test, prediction) print 'Recall:', recall_score(y_test, prediction) print 'Precision:', precision_score(y_test, prediction) print '\n clasification report:\n', classification_report(y_test,prediction) print '\n confussion matrix:\n',confusion_matrix(y_test, prediction) F1 score:/usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:676: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1". sample_weight=sample_weight) /usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:1172: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1". sample_weight=sample_weight) /usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:1082: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1". 
sample_weight=sample_weight) 0.930416613529 ``` However, I'm getting warnings like this: ``` /usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:1172: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1". ``` How can I deal correctly with my unbalanced data in order to compute the classifier's metrics in the right way?
First of all, it is a little hard to tell from counting analysis alone whether your data is unbalanced or not. For example: is one positive observation in 1000 just noise, an error, or a breakthrough in science? You never know. So it is always better to use all your available knowledge and judge its status wisely. **Okay, what if it's really unbalanced?** Once again, look at your data. Sometimes you can find one or two observations multiplied hundreds of times. Sometimes it is useful to create such fake one-class observations. If all the data is clean, the next step is to use class weights in the prediction model. **So what about multiclass metrics?** In my experience, none of your metrics is usually used. There are two main reasons. First: it is always better to work with probabilities than with hard predictions (because how else could you separate models with 0.9 and 0.6 confidence if they both give you the same class?). And second: it is much easier to compare prediction models and build new ones when you depend on only one good metric. From my experience I could recommend [logloss](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.log_loss.html) or [MSE](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html) (mean squared error). **How to fix sklearn warnings?** Simply (as yangjie noticed) overwrite the `average` parameter with one of these values: `'micro'` (calculate metrics globally), `'macro'` (calculate metrics for each label) or `'weighted'` (same as macro but with automatic weights). ``` f1_score(y_test, prediction, average='weighted') ``` All your warnings came from calling the metric functions with the default `average` value `'binary'`, which is inappropriate for multiclass prediction. Good luck and have fun with machine learning! **Edit:** I found another answer's recommendation to switch to regression approaches (e.g. SVR), with which I cannot agree. As far as I remember, there is not even such a thing as multiclass regression. Yes, there is multilabel regression, which is far different, and yes, in some cases it is possible to switch between regression and classification (if the classes are somehow ordered), but it is pretty rare. What I would recommend (within the scope of scikit-learn) is to try other very powerful classification tools: [gradient boosting](http://scikit-learn.org/stable/modules/ensemble.html#gradient-tree-boosting), [random forest](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) (my favorite), [KNeighbors](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html) and many more. After that you can calculate the arithmetic or geometric mean of the predictions, and most of the time you'll get an even better result. ``` final_prediction = (KNNprediction * RFprediction) ** 0.5 ```
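To illustrate the blending idea at the end of this answer, here is a minimal sketch that combines the predicted class probabilities of two hypothetical models with a geometric mean; the probability rows below are made-up numbers, not real model output:

```python
def geometric_blend(p1, p2):
    """Element-wise geometric mean of two probability vectors, renormalized."""
    blended = [(a * b) ** 0.5 for a, b in zip(p1, p2)]
    total = sum(blended)
    return [p / total for p in blended]

# Hypothetical predict_proba rows for one sample from two classifiers:
knn_probs = [0.60, 0.30, 0.10]
rf_probs  = [0.50, 0.40, 0.10]

blended = geometric_blend(knn_probs, rf_probs)
print(blended)       # still a valid probability distribution
print(sum(blended))  # sums to 1 after renormalization
```

Renormalizing matters because the raw geometric means generally do not sum to one.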
How to compute precision, recall, accuracy and f1-score for the multiclass case with scikit learn?
31,421,413
33
2015-07-15T04:17:36Z
31,570,518
12
2015-07-22T17:53:38Z
[ "python", "machine-learning", "nlp", "artificial-intelligence", "scikit-learn" ]
I'm working on a sentiment analysis problem; the data looks like this: ``` label instances 5 1190 4 838 3 239 1 204 2 127 ``` So my data is unbalanced, since 1190 `instances` are labeled with `5`. For the classification I'm using scikit's [SVC](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html). The problem is I do not know how to balance my data in the right way in order to accurately compute the precision, recall, accuracy and f1-score for the multiclass case. So I tried the following approaches: First: ``` wclf = SVC(kernel='linear', C= 1, class_weight={1: 10}) wclf.fit(X, y) weighted_prediction = wclf.predict(X_test) print 'Accuracy:', accuracy_score(y_test, weighted_prediction) print 'F1 score:', f1_score(y_test, weighted_prediction,average='weighted') print 'Recall:', recall_score(y_test, weighted_prediction, average='weighted') print 'Precision:', precision_score(y_test, weighted_prediction, average='weighted') print '\n clasification report:\n', classification_report(y_test, weighted_prediction) print '\n confussion matrix:\n',confusion_matrix(y_test, weighted_prediction) ``` Second: ``` auto_wclf = SVC(kernel='linear', C= 1, class_weight='auto') auto_wclf.fit(X, y) auto_weighted_prediction = auto_wclf.predict(X_test) print 'Accuracy:', accuracy_score(y_test, auto_weighted_prediction) print 'F1 score:', f1_score(y_test, auto_weighted_prediction, average='weighted') print 'Recall:', recall_score(y_test, auto_weighted_prediction, average='weighted') print 'Precision:', precision_score(y_test, auto_weighted_prediction, average='weighted') print '\n clasification report:\n', classification_report(y_test,auto_weighted_prediction) print '\n confussion matrix:\n',confusion_matrix(y_test, auto_weighted_prediction) ``` Third: ``` clf = SVC(kernel='linear', C= 1) clf.fit(X, y) prediction = clf.predict(X_test) from sklearn.metrics import precision_score, \ recall_score, confusion_matrix, classification_report, \ accuracy_score, f1_score print 
'Accuracy:', accuracy_score(y_test, prediction) print 'F1 score:', f1_score(y_test, prediction) print 'Recall:', recall_score(y_test, prediction) print 'Precision:', precision_score(y_test, prediction) print '\n clasification report:\n', classification_report(y_test,prediction) print '\n confussion matrix:\n',confusion_matrix(y_test, prediction) F1 score:/usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:676: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1". sample_weight=sample_weight) /usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:1172: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1". sample_weight=sample_weight) /usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:1082: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1". 
sample_weight=sample_weight) 0.930416613529 ``` However, I'm getting warnings like this: ``` /usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:1172: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1". ``` How can I deal correctly with my unbalanced data in order to compute the classifier's metrics in the right way?
**Posed question** Responding to the question 'what metric should be used for multi-class classification with imbalanced data': the Macro-F1-measure. Macro Precision and Macro Recall can also be used, but they are not as easily interpretable as for binary classification, they are already incorporated into the F-measure, and excess metrics complicate method comparison, parameter tuning, and so on. Micro averaging is sensitive to class imbalance: if your method, for example, works well for the most common labels and totally messes up the others, micro-averaged metrics will still show good results. Weighted averaging isn't well suited for imbalanced data either, because it weights by label counts. Moreover, it is too hard to interpret and unpopular: for instance, there is no mention of such an averaging in the following very detailed [survey](http://rali.iro.umontreal.ca/rali/sites/default/files/publis/SokolovaLapalme-JIPM09.pdf), which I strongly recommend looking through: > Sokolova, Marina, and Guy Lapalme. "A systematic analysis of > performance measures for classification tasks." Information Processing > & Management 45.4 (2009): 427-437. **Application-specific question** However, returning to your task, I'd research two topics: 1. the metrics commonly used for your specific task - this lets you (a) compare your method with others and understand whether you are doing something wrong, and (b) reuse someone else's findings instead of exploring this yourself; 2. the cost of different errors of your method - for example, the use-case of your application may rely on 4- and 5-star reviews only - in this case, a good metric should count only these 2 labels. ***Commonly used metrics.*** As far as I can infer from the literature, there are 2 main evaluation metrics: 1. **[Accuracy](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html)**, which is used, e.g. in > Yu, April, and Daryl Chang. "Multiclass Sentiment Prediction using > Yelp Business." ([link](http://cs224d.stanford.edu/reports/YuApril.pdf)) - note that the authors work with almost the same distribution of ratings, see Figure 5. > Pang, Bo, and Lillian Lee. "Seeing stars: Exploiting class > relationships for sentiment categorization with respect to rating > scales." Proceedings of the 43rd Annual Meeting on Association for > Computational Linguistics. Association for Computational Linguistics, > 2005. ([link](http://www.cs.cornell.edu/home/llee/papers/pang-lee-stars.pdf)) 2. **[MSE](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html)** (or, less often, Mean Absolute Error - **[MAE](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html)**) - see, for example, > Lee, Moontae, and R. Grafe. "Multiclass sentiment analysis with > restaurant reviews." Final Projects from CS N 224 (2010). ([link](http://nlp.stanford.edu/courses/cs224n/2010/reports/pgrafe-moontae.pdf)) - they explore both accuracy and MSE, considering the latter to be better > Pappas, Nikolaos, Rue Marconi, and Andrei Popescu-Belis. "Explaining > the Stars: Weighted Multiple-Instance Learning for Aspect-Based > Sentiment Analysis." Proceedings of the 2014 Conference on Empirical > Methods In Natural Language Processing. No. EPFL-CONF-200899. 2014. ([link](http://www.aclweb.org/anthology/D14-1052)) - they use scikit-learn for evaluation and for the baseline approaches, and state that their code is available; however, I can't find it, so if you need it, write to the authors - the work is pretty new and seems to be written in Python. ***Cost of different errors.*** If you care more about avoiding gross blunders, e.g. assigning 1 star to a 5-star review or something like that, look at MSE; if the difference matters, but not so much, try MAE, since it doesn't square the difference; otherwise stay with Accuracy. **About approaches, not metrics** Try regression approaches, e.g. [SVR](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html#sklearn.svm.SVR), since they generally outperform multiclass classifiers like SVC or OVA SVM.
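The 'cost of different errors' point can be made concrete with a small hand computation on made-up star ratings (pure Python, no scikit-learn required): MSE punishes one gross blunder far harder than MAE does, even when the average absolute miss is identical.

```python
def mae(y_true, y_pred):
    # Mean Absolute Error: average of |true - predicted|
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    # Mean Squared Error: average of (true - predicted)^2
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [5, 5, 4, 1]

near_misses = [4, 4, 3, 2]   # always off by one star
one_blunder = [5, 5, 4, 5]   # perfect, except a 1-star rated as 5 stars

print(mae(y_true, near_misses), mse(y_true, near_misses))  # 1.0 1.0
print(mae(y_true, one_blunder), mse(y_true, one_blunder))  # 1.0 4.0
```

Both prediction sets have the same MAE, but the single 4-star miss dominates MSE.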
How to compute precision, recall, accuracy and f1-score for the multiclass case with scikit learn?
31,421,413
33
2015-07-15T04:17:36Z
31,575,870
39
2015-07-22T23:44:13Z
[ "python", "machine-learning", "nlp", "artificial-intelligence", "scikit-learn" ]
I'm working on a sentiment analysis problem; the data looks like this: ``` label instances 5 1190 4 838 3 239 1 204 2 127 ``` So my data is unbalanced, since 1190 `instances` are labeled with `5`. For the classification I'm using scikit's [SVC](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html). The problem is I do not know how to balance my data in the right way in order to accurately compute the precision, recall, accuracy and f1-score for the multiclass case. So I tried the following approaches: First: ``` wclf = SVC(kernel='linear', C= 1, class_weight={1: 10}) wclf.fit(X, y) weighted_prediction = wclf.predict(X_test) print 'Accuracy:', accuracy_score(y_test, weighted_prediction) print 'F1 score:', f1_score(y_test, weighted_prediction,average='weighted') print 'Recall:', recall_score(y_test, weighted_prediction, average='weighted') print 'Precision:', precision_score(y_test, weighted_prediction, average='weighted') print '\n clasification report:\n', classification_report(y_test, weighted_prediction) print '\n confussion matrix:\n',confusion_matrix(y_test, weighted_prediction) ``` Second: ``` auto_wclf = SVC(kernel='linear', C= 1, class_weight='auto') auto_wclf.fit(X, y) auto_weighted_prediction = auto_wclf.predict(X_test) print 'Accuracy:', accuracy_score(y_test, auto_weighted_prediction) print 'F1 score:', f1_score(y_test, auto_weighted_prediction, average='weighted') print 'Recall:', recall_score(y_test, auto_weighted_prediction, average='weighted') print 'Precision:', precision_score(y_test, auto_weighted_prediction, average='weighted') print '\n clasification report:\n', classification_report(y_test,auto_weighted_prediction) print '\n confussion matrix:\n',confusion_matrix(y_test, auto_weighted_prediction) ``` Third: ``` clf = SVC(kernel='linear', C= 1) clf.fit(X, y) prediction = clf.predict(X_test) from sklearn.metrics import precision_score, \ recall_score, confusion_matrix, classification_report, \ accuracy_score, f1_score print 
'Accuracy:', accuracy_score(y_test, prediction) print 'F1 score:', f1_score(y_test, prediction) print 'Recall:', recall_score(y_test, prediction) print 'Precision:', precision_score(y_test, prediction) print '\n clasification report:\n', classification_report(y_test,prediction) print '\n confussion matrix:\n',confusion_matrix(y_test, prediction) F1 score:/usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:676: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1". sample_weight=sample_weight) /usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:1172: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1". sample_weight=sample_weight) /usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:1082: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1". 
sample_weight=sample_weight) 0.930416613529 ``` However, I'm getting warnings like this: ``` /usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:1172: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1". ``` How can I deal correctly with my unbalanced data in order to compute the classifier's metrics in the right way?
I think there is a lot of confusion about which weights are used for what. I am not sure I know precisely what bothers you so I am going to cover different topics, bear with me ;). ## Class weights The weights from the `class_weight` parameter are used to **train the classifier**. They **are not used in the calculation of any of the metrics you are using**: with different class weights, the numbers will be different simply because the classifier is different. Basically in every scikit-learn classifier, the class weights are used to tell your model how important a class is. That means that during the training, the classifier will make extra efforts to classify properly the classes with high weights. How they do that is algorithm-specific. If you want details about how it works for SVC and the doc does not make sense to you, feel free to mention it. ## The metrics Once you have a classifier, you want to know how well it is performing. Here you can use the metrics you mentioned: `accuracy`, `recall_score`, `f1_score`... Usually when the class distribution is unbalanced, accuracy is considered a poor choice as it gives high scores to models which just predict the most frequent class. I will not detail all these metrics but note that, with the exception of `accuracy`, they are naturally applied at the class level: as you can see in this `print` of a classification report they are defined for each class. They rely on concepts such as `true positives` or `false negative` that require defining which class is the *positive* one. ``` precision recall f1-score support 0 0.65 1.00 0.79 17 1 0.57 0.75 0.65 16 2 0.33 0.06 0.10 17 avg / total 0.52 0.60 0.51 50 ``` ## The warning ``` F1 score:/usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:676: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. 
Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1". ``` You get this warning because you are using the f1-score, recall and precision without defining how they should be computed! The question could be rephrased: from the above classification report, how do you output **one** global number for the f1-score? You could: 1. Take the average of the f1-score for each class: that's the `avg / total` result above. It's also called *macro* averaging. 2. Compute the f1-score using the global count of true positives / false negatives, etc. (you sum the number of true positives / false negatives for each class). Aka *micro* averaging. 3. Compute a weighted average of the f1-score. Using `'weighted'` in scikit-learn will weigh the f1-score by the support of the class: the more elements a class has, the more important the f1-score for this class in the computation. These are 3 of the options in scikit-learn, the warning is there to say you **have to pick one**. So you have to specify an `average` argument for the score method. Which one you choose is up to how you want to measure the performance of the classifier: for instance macro-averaging does not take class imbalance into account and the f1-score of class 1 will be just as important as the f1-score of class 5. If you use weighted averaging however you'll get more importance for the class 5. The whole argument specification in these metrics is not super-clear in scikit-learn right now, it will get better in version 0.18 according to the docs. They are removing some non-obvious standard behavior and they are issuing warnings so that developers notice it. ## Computing scores Last thing I want to mention (feel free to skip it if you're aware of it) is that scores are only meaningful if they are computed on data that the classifier **has never seen**. 
This is extremely important as any score you get on data that was used in fitting the classifier is completely irrelevant. Here's a way to do it using `StratifiedShuffleSplit`, which gives you random splits of your data (after shuffling) that preserve the label distribution. ``` from sklearn.datasets import make_classification from sklearn.cross_validation import StratifiedShuffleSplit from sklearn.svm import SVC from sklearn.metrics import f1_score, precision_score, recall_score # We use a utility to generate artificial classification data. X, y = make_classification(n_samples=100, n_informative=10, n_classes=3) svc = SVC(kernel='linear')  # the classifier we want to evaluate sss = StratifiedShuffleSplit(y, n_iter=1, test_size=0.5, random_state=0) for train_idx, test_idx in sss: X_train, X_test, y_train, y_test = X[train_idx], X[test_idx], y[train_idx], y[test_idx] svc.fit(X_train, y_train) y_pred = svc.predict(X_test) print(f1_score(y_test, y_pred, average="macro")) print(precision_score(y_test, y_pred, average="macro")) print(recall_score(y_test, y_pred, average="macro")) ``` Hope this helps.
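The macro / micro / weighted choices described above can also be written out by hand. The following is a simplified sketch (precision only, on toy labels) of what the three `average` options compute; note that for single-label multiclass data, micro-averaged precision coincides with plain accuracy:

```python
from collections import Counter

y_true = [0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 1, 1, 1, 1]

labels = sorted(set(y_true))

def per_class_precision(c):
    # true positives / all predictions of class c (0 if never predicted)
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == c and t == c)
    predicted = sum(1 for p in y_pred if p == c)
    return tp / predicted if predicted else 0.0

precisions = {c: per_class_precision(c) for c in labels}
support = Counter(y_true)

# macro: unweighted mean over classes
macro = sum(precisions.values()) / len(labels)

# weighted: mean weighted by class support (number of true instances)
weighted = sum(precisions[c] * support[c] for c in labels) / len(y_true)

# micro (for precision): global TP / global predicted positives,
# which for single-label data is just the accuracy
micro = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_pred)

print(macro, weighted, micro)  # macro=0.5, weighted~0.667, micro~0.667
```

Class 2 is never predicted, which drags the macro average down; the weighted average cares less because class 2 has the smallest support.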
How to compute precision, recall, accuracy and f1-score for the multiclass case with scikit learn?
31,421,413
33
2015-07-15T04:17:36Z
31,587,532
14
2015-07-23T12:35:37Z
[ "python", "machine-learning", "nlp", "artificial-intelligence", "scikit-learn" ]
I'm working on a sentiment analysis problem; the data looks like this: ``` label instances 5 1190 4 838 3 239 1 204 2 127 ``` So my data is unbalanced, since 1190 `instances` are labeled with `5`. For the classification I'm using scikit's [SVC](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html). The problem is I do not know how to balance my data in the right way in order to accurately compute the precision, recall, accuracy and f1-score for the multiclass case. So I tried the following approaches: First: ``` wclf = SVC(kernel='linear', C= 1, class_weight={1: 10}) wclf.fit(X, y) weighted_prediction = wclf.predict(X_test) print 'Accuracy:', accuracy_score(y_test, weighted_prediction) print 'F1 score:', f1_score(y_test, weighted_prediction,average='weighted') print 'Recall:', recall_score(y_test, weighted_prediction, average='weighted') print 'Precision:', precision_score(y_test, weighted_prediction, average='weighted') print '\n clasification report:\n', classification_report(y_test, weighted_prediction) print '\n confussion matrix:\n',confusion_matrix(y_test, weighted_prediction) ``` Second: ``` auto_wclf = SVC(kernel='linear', C= 1, class_weight='auto') auto_wclf.fit(X, y) auto_weighted_prediction = auto_wclf.predict(X_test) print 'Accuracy:', accuracy_score(y_test, auto_weighted_prediction) print 'F1 score:', f1_score(y_test, auto_weighted_prediction, average='weighted') print 'Recall:', recall_score(y_test, auto_weighted_prediction, average='weighted') print 'Precision:', precision_score(y_test, auto_weighted_prediction, average='weighted') print '\n clasification report:\n', classification_report(y_test,auto_weighted_prediction) print '\n confussion matrix:\n',confusion_matrix(y_test, auto_weighted_prediction) ``` Third: ``` clf = SVC(kernel='linear', C= 1) clf.fit(X, y) prediction = clf.predict(X_test) from sklearn.metrics import precision_score, \ recall_score, confusion_matrix, classification_report, \ accuracy_score, f1_score print 
'Accuracy:', accuracy_score(y_test, prediction) print 'F1 score:', f1_score(y_test, prediction) print 'Recall:', recall_score(y_test, prediction) print 'Precision:', precision_score(y_test, prediction) print '\n clasification report:\n', classification_report(y_test,prediction) print '\n confussion matrix:\n',confusion_matrix(y_test, prediction) F1 score:/usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:676: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1". sample_weight=sample_weight) /usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:1172: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1". sample_weight=sample_weight) /usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:1082: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1". 
sample_weight=sample_weight) 0.930416613529 ``` However, I'm getting warnings like this: ``` /usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:1172: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring="f1_weighted" instead of scoring="f1". ``` How can I deal correctly with my unbalanced data in order to compute the classifier's metrics in the right way?
Lots of very detailed answers here, but I don't think you are answering the right questions. As I understand the question, there are two concerns: 1. How do I score a multiclass problem? 2. How do I deal with unbalanced data? ## 1. You can use most of the scoring functions in scikit-learn with multiclass problems just as with single-class problems. Ex.: ``` from sklearn.metrics import precision_recall_fscore_support as score predicted = [1,2,3,4,5,1,2,1,1,4,5] y_test = [1,2,3,4,5,1,2,1,1,4,1] precision, recall, fscore, support = score(y_test, predicted) print('precision: {}'.format(precision)) print('recall: {}'.format(recall)) print('fscore: {}'.format(fscore)) print('support: {}'.format(support)) ``` This way you end up with tangible and interpretable numbers for each of the classes. ``` | Label | Precision | Recall | FScore | Support | |-------|-----------|--------|--------|---------| | 1 | 94% | 83% | 0.88 | 204 | | 2 | 71% | 50% | 0.54 | 127 | | ... | ... | ... | ... | ... | | 4 | 80% | 98% | 0.89 | 838 | | 5 | 93% | 81% | 0.91 | 1190 | ``` Then... ## 2. ... you can tell whether the unbalanced data is even a problem. If the scores for the less represented classes (classes 1 and 2) are lower than for the classes with more training samples (classes 4 and 5), then you know that the unbalanced data is in fact a problem, and you can act accordingly, as described in some of the other answers in this thread. However, if the same class distribution is present in the data you want to predict on, your unbalanced training data is a good representative of that data, and hence, the imbalance is a good thing.
Dictionary comprehension with inline functions
31,421,638
10
2015-07-15T04:40:04Z
31,421,720
12
2015-07-15T04:48:08Z
[ "python", "dictionary", "lambda" ]
I need to store functions in a dictionary, each function depending on its key; let's say, for key `1` the associated lambda function is `lambda s: s * A[1]`. I tried a dict comprehension, but it seems that the inline functions end up defined with the last value of the loop. ``` d = {k: lambda s: s * A[k] for k in range(n)} # e.g. n = 4 ``` After that all lambda functions created are declared with `A[3]` instead of `A[0], A[1], A[2]` and `A[3]`. What's wrong with this code?
A way to fix it is to change the code to: ``` d = {k: lambda s, k=k: s * A[k] for k in range(n)} ``` Without the binding, Python looks up the "current" `k` when each lambda is called, which is always `n-1` in the original code.
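A runnable sketch of the difference (`A` and `n` are made up here):

```python
# Late binding: each lambda closes over the variable k itself, not the
# value k had when the lambda was created, so all of them see the final k.
A = [10, 20, 30, 40]
n = 4

late = {k: (lambda s: s * A[k]) for k in range(n)}
bound = {k: (lambda s, k=k: s * A[k]) for k in range(n)}

print(late[0](1))   # 40 -- every entry uses A[3]
print(bound[0](1))  # 10 -- the default argument froze k per iteration
```

The `k=k` default is evaluated once, at definition time, which is why each `bound` entry keeps its own value.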
Python in vs ==. Which to Use in this case?
31,422,253
7
2015-07-15T05:38:36Z
31,422,431
12
2015-07-15T05:53:07Z
[ "python" ]
I am making an **AJAX** call and passing variable `pub` in it which could be `1` or `0`. As a beginner I want to be double sure of the variable type that is coming in. I am aware I can easily convert to `int()` and the problem is actually not with AJAX result but it led to this question. My code: ``` if pub == 1 or pub == '1': #execute funcA() ``` But the above is not so pythonic for me so I tried: ``` if pub in [1,'1']: #execute funcA() ``` Which of the above code is better in terms of: 1. Performance(speed). 2. Best practice. 3. Memory usage.
**Performance**: in is better ``` timeit.timeit("pub='1'; pub == 1 or pub == '1'") 0.07568907737731934 timeit.timeit("pub='1'; pub in[1, '1']") 0.04272890090942383 timeit.timeit("pub=1; pub == 1 or pub == '1'") 0.07502007484436035 timeit.timeit("pub=1; pub in[1, '1']") 0.07035684585571289 #other options timeit.timeit("pub='1'; pub in (1,'1')") 0.04643988609313965 timeit.timeit("pub='1'; pub in {1,'1'}") 0.17076611518859863 timeit.timeit("pub=1; pub in (1,'1')") 0.047419071197509766 timeit.timeit("pub=1; pub in {1,'1'}") 0.1770930290222168 ``` So, `{} > or > [] > ()` based on performance (left is slower). **Practice**: `in` is preferred as it is less to type; `()`, `[]` and `{}` are equally good practice. **Memory**: ``` sys.getsizeof([1,"1"]) 88 sys.getsizeof("1",1) 38 #other options sys.getsizeof(("1",1)) 72 sys.getsizeof({"1",1}) 232 ``` So, `{} > [] > () > or` based on memory (left uses more). **Although not asked, good to know:** **Functionality**: value equality, not reference equality. `in` just checks equality (`==`) sequentially, so the two are similar. `in` uses `==` and not `is`. What I mean to say is this: ``` >>> a = [1,2,3] >>> b = [1,a] >>> b [1, [1, 2, 3]] >>> 1 in b True >>> a in b True >>> [1,2,3] in b True ``` So it is implemented **not** like this: ``` >>> for i in b: ... print [1,2,3] is i ... False False ``` `is` will return True if two variables point to the same object, `==` if the objects referred to by the variables are equal. **`in` uses `==`**
Finding the minimum length of multiple lists
31,425,528
3
2015-07-15T08:42:33Z
31,425,550
10
2015-07-15T08:43:34Z
[ "python", "list", "min" ]
I have three lists of different lengths. For example ``` List1 is of length 40 List2 is of length 42 List3 is of length 47 ``` How can I use the Python inbuilt `min()` or any other method to find the list with the minimum length? I tried: ``` min(len([List1,List2,List3])) ``` but I get `TypeError: 'int' object is not iterable`
You need to apply `len()` to each list separately: ``` shortest_length = min(len(List1), len(List2), len(List3)) ``` If you already have a sequence of the lists, you could use the [`map()` function](https://docs.python.org/2/library/functions.html#map) or a [generator expression](https://docs.python.org/2/tutorial/classes.html#generator-expressions): ``` list_of_lists = [List1, List2, List3] shortest_length = min(map(len, list_of_lists)) # map function shortest_length = min(len(l) for l in list_of_lists) # generator expr ``` To find the shortest list, not the shortest length, use the `key` argument: ``` list_of_lists = [List1, List2, List3] shortest_list = min(list_of_lists, key=len) ```
Can I make an O(1) search algorithm using a sorted array with a known step?
31,431,866
5
2015-07-15T13:32:13Z
31,432,453
9
2015-07-15T13:56:17Z
[ "python", "algorithm", "python-2.7", "matplotlib" ]
## Background my software visualizes *very* large datasets, e.g. the data is so large I can't store all of it in RAM at any one time, so it must be loaded in a paged fashion. I embed `matplotlib` functionality for displaying and manipulating the plot in the backend of my application. These datasets contain three internal lists I use to visualize: `time`, `height` and `dataset`. My program plots the data with **time** x **height**, and additionally users have the option of drawing shapes around regions of the graph that can be extracted to a whole different plot. The difficult part is, when I want to extract the data from the shapes, the shape vertices are **real** coordinates computed by the plot, not rounded to the nearest point in my `time` array. Here's an example of a shape which bounds a region in my program ![enter image description here](http://i.stack.imgur.com/ncebD.png) While `X1` may represent the coordinate `(2007-06-12 03:42:20.070901+00:00, 5.2345)` according to matplotlib, the closest coordinate **existing** in `time` and `height` might be something like `(2007-06-12 03:42:20.070801+00:00, 5.219)`, only a small bit off from matplotlib's coordinate. --- ## The Problem So given some arbitrary value, let's say `x1 = 732839.154395` (a representation of the date in number format) and a list of similar values with a *constant* step: ``` 732839.154392 732839.154392 732839.154393 732839.154393 732839.154394 732839.154394 732839.154395 732839.154396 732839.154396 732839.154397 732839.154397 732839.154398 732839.154398 732839.154399 etc... ``` What would be the most efficient way of finding the closest representation of that point? I could simply loop through the list and grab the value with the smallest difference, but the size of `time` is **huge**. Since I know the array is 1. Sorted and 2. Increments with a constant step, I was thinking this problem should be able to be solved in `O(1)` time?
Is there a known algorithm that solves these kinds of problems? Or would I simply need to devise a custom algorithm? Here is my current thought process: ``` grab first and second element of time subtract first element of time from second, obtain step subtract first element of time from bounding x value, obtain difference divide difference by step, obtain index move time forward to index check surrounding elements of index to ensure closest representation ```
The algorithm you suggest seems reasonable and like it would work. As has become clear in your comments, the problem with it is the coarseness at which your time was recorded. (This can be common when unsynchronized data is recorded -- i.e., the data generation clock, e.g., frame rate, is not synced with the computer). The easy way around this is to read two points separated by a larger time, so for example, **read the first time value and then the 1000th time value**. Then everything stays the same in your calculation, but you get your timestep by subtracting and then dividing by 1000. Here's a test that makes data similar to yours: ``` import matplotlib.pyplot as plt start = 97523.29783 increment = .000378912098 target = 97585.23452 # build a timeline times = [] time = start actual_index = None for i in range(1000000): trunc = float(str(time)[:10]) # truncate the time value times.append(trunc) if actual_index is None and time>target: actual_index = i time = time + increment # now test intervals = [1, 2, 5, 10, 100, 1000, 10000] for i in intervals: dt = (times[i] - times[0])/i index = int((target-start)/dt) print " %6i %8i %8i %.10f" % (i, actual_index, index, dt) ``` Result: ``` span actual guess est dt (actual=.000378912098) 1 163460 154841 0.0004000000 2 163460 176961 0.0003500000 5 163460 162991 0.0003800000 10 163460 162991 0.0003800000 100 163460 163421 0.0003790000 1000 163460 163464 0.0003789000 10000 163460 163460 0.0003789100 ``` That is, as the space between the sampled points gets larger, the time interval estimate gets more accurate (compare to `increment` in the program) and the estimated index (3rd col) gets closer to the actual index (2nd col). Note that the accuracy of the `dt` estimate is basically just proportional to the number of digits in the span.
The best you could do is use the times at the start and end points, but it seemed from your question statement that this would be difficult; but if it's not, it will give the most accurate estimate of your time interval. Note that here, for clarity, I exaggerated the lack of accuracy by making my time interval recording very coarse, but in general, every power of 10 in your span increases your accuracy by the same amount. As an example of that last point, if I reduce the coarseness of the time values by changing the truncation line to `trunc = float(str(time)[:12])`, I get: ``` span actual guess est dt (actual=.000378912098) 1 163460 163853 0.0003780000 10 163460 163464 0.0003789000 100 163460 163460 0.0003789100 1000 163460 163459 0.0003789120 10000 163460 163459 0.0003789121 ``` So if, as you say, using a span of 1 gets you very close, using a span of 100 or 1000 should be more than enough. Overall, this is very similar in idea to the linear "interpolation search". It's just a bit easier to implement because it's only making a single guess based on the interpolation, so it just takes one line of code: `int((target-start)*i/(times[i] - times[0]))`
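As a concrete sketch of the whole lookup (the `times` list and targets below are illustrative), combining the wide-span step estimate with the final neighbor check from the question's pseudocode:

```python
def closest_index(times, target, span=1000):
    # Estimate the constant step from two widely separated samples to
    # average out coarse timestamps, then jump straight to the index.
    span = min(span, len(times) - 1)
    step = (times[span] - times[0]) / span
    i = int(round((target - times[0]) / step))
    i = max(0, min(i, len(times) - 1))
    # Neighbor check guards against small errors in the step estimate.
    lo, hi = max(0, i - 1), min(len(times) - 1, i + 1)
    return min(range(lo, hi + 1), key=lambda j: abs(times[j] - target))

# Synthetic timeline with a constant step (made-up values).
times = [97523.0 + 0.0003789 * i for i in range(100000)]
print(closest_index(times, 97523.0 + 0.0003789 * 500))  # 500
```

Every call does a constant amount of work regardless of the list length, which is the O(1) behaviour the question asks about.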
Force django-admin startproject if project folder already exists
31,431,924
5
2015-07-15T13:34:45Z
31,432,119
22
2015-07-15T13:41:50Z
[ "python", "django", "django-admin" ]
I want to start new django project in already existing folder and obviously get ``` CommandError: '/home/user/projectfolder' already exists. ``` Is there some way to force startproject command to create project in an existing folder? I have some important data in that folder and also git folder so I don't want to move it somewhere and then move it back.
Just use the current directory: `cd /home/user/projectfolder` `django-admin.py startproject project .` The use of `.` just instructs Django to create a project in the current directory, while `django-admin.py startproject` instructs Django to create a project and create the necessary directory: > If only the project name is given, both the project directory and > project package will be named `<projectname>` and the project directory > will be created in the current working directory. This fails because of the existing directory, **which is not a bug** but a constraint in order to prevent accidents.
How to find median using Spark
31,432,843
9
2015-07-15T14:11:39Z
31,437,177
15
2015-07-15T17:30:31Z
[ "python", "apache-spark", "median", "rdd", "pyspark" ]
How can I find median of a rdd of integers using a distributed method, IPython, and Spark? The rdd is approximately 700,000 elements and therefore too large to collect and find the median. This question is similar to this question. However, the answer to the question is using Scala, which I do not know. [How can I calculate exact median with Apache Spark?](http://stackoverflow.com/questions/28158729/how-can-i-calculate-exact-median-with-apache-spark) Using the thinking for the Scala answer, I am trying to write a similar answer in Python. I know I first want to sort the rdd. I do not know how. I see the `sortBy` (Sorts this RDD by the given keyfunc) and `sortByKey` (Sorts this RDD, which is assumed to consist of (key, value) pairs.) methods. I think both use key value and my RDD only has integer elements. 1. First, I was thinking of doing `myrdd.sortBy(lambda x: x)`? 2. Next I will find the length of the rdd (`rdd.count()`). 3. Finally, I want to find the element or 2 elements at the center of the rdd. I need help with this method too. EDIT: I had an idea. Maybe I can index my rdd and then key = index and value = element. And then I can try to sort by value? I don't know if this is possible because there is only a `sortByKey` method.
### Spark 2.0+: You can use the `approxQuantile` method, which implements the [Greenwald-Khanna algorithm](http://infolab.stanford.edu/~datar/courses/cs361a/papers/quantiles.pdf): **Python**: ``` df.approxQuantile("x", [0.5], 0.25) ``` **Scala**: ``` df.stat.approxQuantile("x", Array(0.5), 0.25) ``` where the last parameter is a relative error. The lower the number, the more accurate the results and the more expensive the computation. ### Spark < 2.0 **Python** As I've mentioned in the comments, it is most likely not worth all the fuss. If data is relatively small, like in your case, then simply collect and compute the median locally: ``` import numpy as np np.random.seed(323) rdd = sc.parallelize(np.random.randint(1000000, size=700000)) %time np.median(rdd.collect()) np.array(rdd.collect()).nbytes ``` It takes around 0.01 second on my few-years-old computer and around 5.5MB of memory. If data is much larger, sorting will be a limiting factor, so instead of getting an exact value it is probably better to sample, collect, and compute locally. But if you really want to use Spark, something like this should do the trick (if I didn't mess up anything): ``` from numpy import floor import time def quantile(rdd, p, sample=None, seed=None): """Compute a quantile of order p ∈ [0, 1] :rdd a numeric rdd :p quantile (between 0 and 1) :sample fraction of an rdd to use. If not provided we use the whole dataset :seed random number generator seed to be used with sample """ assert 0 <= p <= 1 assert sample is None or 0 < sample <= 1 seed = seed if seed is not None else time.time() rdd = rdd if sample is None else rdd.sample(False, sample, seed) rddSortedWithIndex = (rdd. sortBy(lambda x: x). zipWithIndex(). map(lambda (x, i): (i, x)).
cache()) n = rddSortedWithIndex.count() h = (n - 1) * p rddX, rddXPlusOne = ( rddSortedWithIndex.lookup(x)[0] for x in int(floor(h)) + np.array([0L, 1L])) return rddX + (h - floor(h)) * (rddXPlusOne - rddX) ``` And some tests: ``` np.median(rdd.collect()), quantile(rdd, 0.5) ## (500184.5, 500184.5) np.percentile(rdd.collect(), 25), quantile(rdd, 0.25) ## (250506.75, 250506.75) np.percentile(rdd.collect(), 75), quantile(rdd, 0.75) (750069.25, 750069.25) ``` Finally, let's define median: ``` from functools import partial median = partial(quantile, p=0.5) ``` So far so good, but it takes 4.66 s in local mode without any network communication. There is probably a way to improve this, but why even bother? **Language independent** (*Hive UDAF*): If you use `HiveContext` you can also use Hive UDAFs. With integral values: ``` rdd.map(lambda x: (float(x), )).toDF(["x"]).registerTempTable("df") sqlContext.sql("SELECT percentile_approx(x, 0.5) FROM df") ``` With continuous values: ``` sqlContext.sql("SELECT percentile(x, 0.5) FROM df") ``` In `percentile_approx` you can pass an additional argument which determines the number of records to use.
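The `h = (n - 1) * p` step above is just linear interpolation between the two surrounding order statistics (numpy's default percentile behaviour); a pure-Python sketch of the same arithmetic on an already-sorted list:

```python
from math import floor

def quantile_sorted(xs, p):
    # Linear interpolation between the order statistics around h,
    # mirroring the h = (n - 1) * p logic in the Spark version above.
    h = (len(xs) - 1) * p
    lo = int(floor(h))
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (h - lo) * (xs[hi] - xs[lo])

data = sorted([7, 1, 3, 5, 9])
print(quantile_sorted(data, 0.5))  # 5 -- the median
```

The Spark version distributes exactly this: `lookup()` fetches the two order statistics by index, and the last line interpolates between them.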
List of sets, set.add() is adding to all sets in the list
31,440,056
2
2015-07-15T20:08:49Z
31,440,120
8
2015-07-15T20:12:13Z
[ "python", "list", "set" ]
I'm trying to iterate through a spreadsheet and make a set of all the columns in there while adding the values to their respective set. ``` storage = [ set() ]*35 #there's 35 columns in the excel sheet for line in in_file: #iterate through all the lines in the file t = line.split('\t') #split the line by all the tabs for i in range(0, len(t)): #iterate through all the tabs of the line if t[i]: #if the entry isn't empty storage[i].add(t[i]) #add the ith entry of the to the ith set ``` if i do this for `storage[0].add(t[0])` it works kind of but it adds to ALL the sets in the storage list...why is it doing that? I'm specifying which set I want to add it in. I didn't post what the print out looks like for the sets b/c it's so big but basically every set is the same and has all the entries from the tabs
``` storage = [set()] * 35 ``` This creates a list with the **same set** listed 35 times. To create a list with 35 different sets, use: ``` storage = [set() for i in range(35)] ``` This second form ensures `set()` is called multiple times. The first form only calls it once and then duplicates that single object reference over and over.
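A quick runnable demonstration of the aliasing (a sketch; the values are arbitrary):

```python
shared = [set()] * 3                   # three slots, ONE underlying set
distinct = [set() for _ in range(3)]   # three independent sets

shared[0].add('x')
distinct[0].add('x')

print(shared[1] is shared[0])      # True: every slot is the same object
print(distinct[1] is distinct[0])  # False: separate objects
print(shared[1])                   # {'x'} -- the add shows up everywhere
print(distinct[1])                 # set() -- untouched
```

The same aliasing happens with `[[]] * n` for lists: sequence multiplication copies references, not objects.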
Requests works and URLFetch doesn't
31,441,350
2
2015-07-15T21:20:29Z
31,442,489
7
2015-07-15T22:41:43Z
[ "python", "google-app-engine", "python-requests", "urlfetch" ]
I'm trying to make a request to the particle servers in python in a google app engine app. In my terminal, I can complete the request simply and successfully with requests as: ``` res = requests.get('https://api.particle.io/v1/devices', params={"access_token": {ACCESS_TOKEN}}) ``` But in my app, the same thing doesn't work with urlfetch, which keeps telling me it can't find the access token: ``` url = 'https://api.particle.io/v1/devices' payload = {"access_token": {ACCESS_TOKEN}} form_data = urllib.urlencode(payload) res = urlfetch.fetch( url=url, payload=form_data, method=urlfetch.GET, headers={ 'Content-Type': 'application/x-www-form-urlencoded' }, follow_redirects=False ) ``` I have no idea what the problem is, and no way to debug. Thanks!
In a nutshell, your problem is that in your `urlfetch` sample you're embedding your access token into the request body, and since you're issuing a GET request -which cannot carry any request body with them- this information gets discarded. **Why does your first snippet work?** Because `requests.get()` takes that optional `params` argument that means: "take this dictionary I give you, convert all its key/value pairs into a [query string](https://en.wikipedia.org/wiki/Query_string) and append it to the main URL" So, behind the curtains, `requests.get()` is building a string like this: `https://api.particle.io/v1/devices?access_token=ACCESS_TOKEN` That's the correct endpoint you should point your GET requests to. **Why doesn't your second snippet work?** This time, `urlfetch.fetch()` uses a different syntax than `requests.get()` (but equivalent nonetheless). The important bit to note here is that `payload` argument **doesn't** mean the same as our `params` argument that you used before in `requests.get()`. `urlfetch.fetch()` expects our query string -if any- to be already urlencoded into the URL (that's why `urllib.urlencode()` comes into play here). On the other hand, `payload` is where you should put your request body in case you were issuing a POST, PUT or PATCH request, but particle.io's endpoint is not expecting your OAuth access token to be there. Something like this should work (disclaimer: not tested): ``` auth = {"access_token": {ACCESS_TOKEN}} url_params = urllib.urlencode(auth) url = 'https://api.particle.io/v1/devices?%s' % url_params res = urlfetch.fetch( url=url, method=urlfetch.GET, follow_redirects=False ) ``` Notice how now we don't need your previous `Content-type` header anymore, since we aren't carrying any content after all. Hence, `headers` parameter can be removed from this example call. 
For further reference, take a look at `urlfetch.fetch()` [reference](https://cloud.google.com/appengine/docs/python/refdocs/google.appengine.api.urlfetch#google.appengine.api.urlfetch.fetch) and [this SO thread](http://stackoverflow.com/questions/14551194/how-are-parameters-sent-in-an-http-post-request) that will hopefully give you a better insight into HTTP methods, parameters and request bodies than my poor explanation here. **PS:** If particle.io servers support it (they should), you should move away from this authentication schema and carry your tokens in a `Authorization: Bearer <access_token>` header instead. Carrying access tokens in URLs is not a good idea because they are much more visible that way and tend to stay logged in servers, hence posing a security risk. On the other hand, in a TLS session all request headers are always encrypted so your auth tokens are well hidden there.
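The underlying point — GET parameters ride in the URL's query string, not in a request body — can be sketched with the standard library alone (Python 3 names here; the token is a placeholder):

```python
from urllib.parse import urlencode

base = "https://api.particle.io/v1/devices"
params = {"access_token": "TOKEN123"}  # placeholder, not a real token

# This is the URL a GET request should actually hit; there is no body.
url = "%s?%s" % (base, urlencode(params))
print(url)  # https://api.particle.io/v1/devices?access_token=TOKEN123
```

Any HTTP client that drops the body on GET (as `urlfetch` effectively does here) will behave correctly once the parameters are encoded into the URL like this.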
How can I make flycheck use virtualenv
31,443,527
3
2015-07-16T00:35:23Z
31,456,619
7
2015-07-16T14:08:45Z
[ "python", "emacs", "virtualenv", "flycheck" ]
I have just happily configured emacs with autocompletion via jedi and syntax check via flycheck and virtualenvs created within bootstrap. It all seems to work. I'd like to add the ability to use **flycheck-pylint** (to get errors in import) but I'm not able to make it work. Even if I change the virtualenv by hand (M-x: pyvenv-activate RET path-to-my-venv) I still see lots of import errors that come from a wrong virtualenv used. My current initialization code: ``` (require 'pyvenv) (add-hook 'after-init-hook #'global-flycheck-mode) (defun set-flake8-executable () (pyvenv-activate (get-current-buffer-venv)) (flycheck-set-checker-executable (quote python-flake8) (get-current-buffer-flake8))) ``` where "get-current-buffer-venv" and "get-current-buffer-flake8" are functions that implement my specific setup and are working correctly. How can I change the interpreter used?
Thanks to an answer from [Lunaryorn on github](https://github.com/flycheck/flycheck/issues/692) i realized there is also a flycheck-set-pylint-executable. Now all is working correctly whith the following configuration: ``` (defun set-flychecker-executables () "Configure virtualenv for flake8 and lint." (when (get-current-buffer-flake8) (flycheck-set-checker-executable (quote python-flake8) (get-current-buffer-flake8))) (when (get-current-buffer-pylint) (flycheck-set-checker-executable (quote python-pylint) (get-current-buffer-pylint)))) (add-hook 'flycheck-before-syntax-check-hook #'set-flychecker-executables 'local) ```
RuntimeError: working outside of application context
31,444,036
7
2015-07-16T01:41:08Z
31,444,175
14
2015-07-16T01:59:04Z
[ "python", "mysql", "flask", "werkzeug", "flask-restful" ]
**app.py** ``` from flask import Flask, render_template, request,jsonify,json,g import mysql.connector app = Flask(__name__) **class TestMySQL():** @app.before_request def before_request(): try: g.db = mysql.connector.connect(user='root', password='root', database='mysql') except mysql.connector.errors.Error as err: resp = jsonify({'status': 500, 'error': "Error:{}".format(err)}) resp.status_code = 500 return resp @app.route('/') def input_info(self): try: cursor = g.db.cursor() cursor.execute ('CREATE TABLE IF NOT EXISTS testmysql (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, name VARCHAR(40) NOT NULL, \ email VARCHAR(40) NOT NULL UNIQUE)') cursor.close() ``` test.py ``` from app import * class Test(unittest.TestCase): def test_connection1(self): with patch('__main__.mysql.connector.connect') as mock_mysql_connector_connect: object=TestMySQL() object.before_request() """Runtime error on calling this" ``` I am importing **app** into **test.py** for unit testing.On calling '*before\_request*' function into test.py ,it is throwing RuntimeError: working outside of application context same is happening on calling '*input\_info()*'
Flask has an [Application Context](http://flask.pocoo.org/docs/0.10/appcontext/#creating-an-application-context), and it seems like you'll need to do something like: ``` def test_connection(self): with app.app_context(): #test code ``` You can probably also shove the `app.app_context()` call into a test setup method as well. Hope this helps.
Python cassandra driver: Invalid or unsupported protocol version: 4
31,444,098
5
2015-07-16T01:49:39Z
31,462,161
12
2015-07-16T18:45:53Z
[ "python", "amazon-web-services", "cassandra" ]
I get the following error: ``` File "clear-domain-cass.py", line 25, in <module> session = cluster.connect('my_domain') File "/usr/lib/python2.6/dist-packages/cassandra/cluster.py", line 839, in connect self.control_connection.connect() File "/usr/lib/python2.6/dist-packages/cassandra/cluster.py", line 2075, in connect self._set_new_connection(self._reconnect_internal()) File "/usr/lib/python2.6/dist-packages/cassandra/cluster.py", line 2110, in _reconnect_internal raise NoHostAvailable("Unable to connect to any servers", errors) cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers', {'10.1.0.89': ConnectionException(u'Failed to initialize new connection to 10.1.0.89: code=0000 [Server error] message="io.netty.handler.codec.DecoderException: org.apache.cassandra.transport.ProtocolException: Invalid or unsupported protocol version: 4"',)}) ``` This is the relevant bit of script: ``` from cassandra.cluster import Cluster from cassandra.query import BatchStatement startTime = time.time() if len(sys.argv) < 2: print "Target host IP is required arg for this script. A comma-sep. list will work also" exit() if len(sys.argv) < 3: print "Target domain is required arg for this script." exit() hostIp = sys.argv[1] domain = str(sys.argv[2]) cluster = Cluster( contact_points=[hostIp], ) session = cluster.connect('my_domain') ``` It fails on the cluster.connect line. I installed pip into this Amazon EC2 instance by following these steps: <http://bcjordan.com/pip-on-amazon-ec2/>
The version of the python driver you're using attempts to use the v4 native protocol by default, but Cassandra 2.1 only supports protocol versions 3 and lower. To tell the driver to use the v3 protocol, do the following: ``` cluster = Cluster(contact_points=[hostIp], protocol_version=3) ``` (By the way, the error message for this should be improved in Cassandra 2.1.6+ thanks to [CASSANDRA-9451](https://issues.apache.org/jira/browse/CASSANDRA-9451)).
What does the -> (dash-greater-than arrow symbol) mean in a Python method signature?
31,445,728
12
2015-07-16T05:07:59Z
31,445,907
12
2015-07-16T05:25:42Z
[ "python", "python-3.x" ]
There is a `->`, or dash-greater-than symbol at the end of a python method, and I'm not sure what it means. One might call it an arrow as well. Here is the example: ``` @property def get_foo(self) -> Foo: return self._foo ``` where `self._foo` is an instance of Foo. My guess is that it is some kind of static type declaration, to tell the interpreter that `self._foo` is of type Foo. But when I tested this, if `self._foo` is not an instance of Foo, nothing unusual happens. Also, if `self._foo` is of a type other than Foo, let's say it was an `int`, then `type(SomeClass.get_foo())` returns `int`. So, what's the point of `-> Foo`? This concept is hard to lookup because it is a symbol without a common name, and the term "arrow" is misleading.
This is a [function annotation](https://www.python.org/dev/peps/pep-3107/). It can be used to attach additional information to the [arguments](https://www.python.org/dev/peps/pep-3107/#id31) or the [return value](https://www.python.org/dev/peps/pep-3107/#id32) of a function. It is a useful way to document how a function should be used. Function annotations are stored in a function's `__annotations__` attribute. [**Use Cases** (*From documentation*)](https://www.python.org/dev/peps/pep-3107/#id35) * Providing typing information + Type checking + Let IDEs show what types a function expects and returns + Function overloading / generic functions + Foreign-language bridges + Adaptation + Predicate logic functions + Database query mapping + RPC parameter marshaling * Other information + Documentation for parameters and return values From `python-3.5` onwards it can be used for [Type Hints](https://www.python.org/dev/peps/pep-0484/)
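A small sketch showing that the annotation is stored but not enforced at runtime (`Foo` here is illustrative, matching the question's example):

```python
class Foo:
    pass

def get_foo(x: int) -> Foo:
    # Annotations are pure metadata; Python does not check them.
    return x

# Mapping of parameter names (plus 'return') to the annotation objects.
print(get_foo.__annotations__)
print(get_foo("not a Foo"))  # runs fine -- no runtime type error
```

This is exactly why the question's test saw no error: the interpreter records the annotation but never validates the returned value against it.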
Why does Python 3 allow "00" as a literal for 0 but not allow "01" as a literal for 1?
31,447,694
98
2015-07-16T07:18:19Z
31,448,362
16
2015-07-16T07:52:06Z
[ "python", "python-3.x" ]
Why does Python 3 allow "00" as a literal for 0 but not allow "01" as a literal for 1? Is there a good reason? This inconsistency baffles me. (And we're talking about Python 3, which purposely broke backward compatibility in order to achieve goals like consistency.) For example: ``` >>> from datetime import time >>> time(16, 00) datetime.time(16, 0) >>> time(16, 01) File "<stdin>", line 1 time(16, 01) ^ SyntaxError: invalid token >>> ```
It's a special case (`"0"+`) # [2.4.4. Integer literals](https://docs.python.org/3/reference/lexical_analysis.html#integer-literals) ``` Integer literals are described by the following lexical definitions: integer ::= decimalinteger | octinteger | hexinteger | bininteger decimalinteger ::= nonzerodigit digit* | "0"+ nonzerodigit ::= "1"..."9" digit ::= "0"..."9" octinteger ::= "0" ("o" | "O") octdigit+ hexinteger ::= "0" ("x" | "X") hexdigit+ bininteger ::= "0" ("b" | "B") bindigit+ octdigit ::= "0"..."7" hexdigit ::= digit | "a"..."f" | "A"..."F" bindigit ::= "0" | "1" ``` If you look at the grammar, it's easy to see that `0` needs a special case. I'm not sure why the '`+`' is considered necessary there though. Time to dig through the dev mailing list... --- It's interesting to note that in Python 2, more than one `0` was parsed as an `octinteger` (the end result is still `0` though): ``` decimalinteger ::= nonzerodigit digit* | "0" octinteger ::= "0" ("o" | "O") octdigit+ | "0" octdigit+ ```
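Both behaviours can be checked directly with `compile()`/`eval()` (a sketch; run under Python 3):

```python
# Any run of zeros matches the "0"+ rule and is just the literal 0.
print(eval("0000"))  # 0

# A leading zero followed by other digits is rejected by the tokenizer.
try:
    compile("01", "<test>", "eval")
    leading_zero_accepted = True
except SyntaxError:
    leading_zero_accepted = False
print(leading_zero_accepted)  # False
```

Using `compile()` rather than `int("01")` matters here: `int()` happily parses the string `"01"`; only the *literal* form is forbidden by the grammar.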
Why does Python 3 allow "00" as a literal for 0 but not allow "01" as a literal for 1?
31,447,694
98
2015-07-16T07:18:19Z
31,448,530
96
2015-07-16T08:01:29Z
[ "python", "python-3.x" ]
Why does Python 3 allow "00" as a literal for 0 but not allow "01" as a literal for 1? Is there a good reason? This inconsistency baffles me. (And we're talking about Python 3, which purposely broke backward compatibility in order to achieve goals like consistency.) For example: ``` >>> from datetime import time >>> time(16, 00) datetime.time(16, 0) >>> time(16, 01) File "<stdin>", line 1 time(16, 01) ^ SyntaxError: invalid token >>> ```
Per <https://docs.python.org/3/reference/lexical_analysis.html#integer-literals>: > Integer literals are described by the following lexical definitions: > > ``` > integer ::= decimalinteger | octinteger | hexinteger | bininteger > decimalinteger ::= nonzerodigit digit* | "0"+ > nonzerodigit ::= "1"..."9" > digit ::= "0"..."9" > octinteger ::= "0" ("o" | "O") octdigit+ > hexinteger ::= "0" ("x" | "X") hexdigit+ > bininteger ::= "0" ("b" | "B") bindigit+ > octdigit ::= "0"..."7" > hexdigit ::= digit | "a"..."f" | "A"..."F" > bindigit ::= "0" | "1" > ``` > > There is no limit for the length of integer literals apart from what > can be stored in available memory. > > Note that leading zeros in a non-zero decimal number are not allowed. > This is for disambiguation with C-style octal literals, which Python > used before version 3.0. As noted here, leading zeros in a *non-zero* decimal number are not allowed. `"0"+` is legal as a very special case, which [wasn't present in Python 2](https://docs.python.org/2/reference/lexical_analysis.html#integer-and-long-integer-literals): ``` integer ::= decimalinteger | octinteger | hexinteger | bininteger decimalinteger ::= nonzerodigit digit* | "0" octinteger ::= "0" ("o" | "O") octdigit+ | "0" octdigit+ ``` --- [SVN commit r55866](https://mail.python.org/pipermail//python-3000-checkins/2007-June/000874.html) implemented PEP 3127 in the tokenizer, which forbids the old `0<octal>` numbers. However, curiously, it also adds this note: ``` /* in any case, allow '0' as a literal */ ``` with a special `nonzero` flag that only throws a `SyntaxError` if the following sequence of digits contains a nonzero digit. 
This is odd because [PEP 3127](https://www.python.org/dev/peps/pep-3127/) does not allow this case: > This PEP proposes that the ability to specify an octal number by using a leading zero will be removed from the language in Python 3.0 (and the Python 3.0 preview mode of 2.6), and that **a SyntaxError will be raised whenever a leading "0" is immediately followed by another digit**. (emphasis mine) So, the fact that multiple zeros are allowed is *technically* violating the PEP, and was basically implemented as a special case by Georg Brandl. He made the corresponding documentation change to note that `"0"+` was a valid case for `decimalinteger` (previously that had been covered under `octinteger`). We'll probably never know *exactly* why Georg chose to make `"0"+` valid - it may forever remain an odd corner case in Python. --- **UPDATE** [28 Jul 2015]: This question led to a [lively discussion thread](https://mail.python.org/pipermail/python-ideas/2015-July/034631.html) on python-ideas in which [Georg chimed in](https://mail.python.org/pipermail/python-ideas/2015-July/034650.html): > Steven D'Aprano wrote: > > > Why was it defined that way? [...] Why would we write 0000 to get zero? > > I could tell you, but then I'd have to kill you. > > Georg Later on, the thread spawned [this bug report](https://bugs.python.org/issue24668) aiming to get rid of this special case. Here, [Georg says](https://bugs.python.org/msg246945): > I don't recall the reason for this deliberate change (as seen from the docs change). > > I'm unable to come up with a good reason for this change now [...] and thus we have it: the precise reason behind this inconsistency is lost to time. Finally, note that the bug report was rejected: leading zeros will continue to be accepted only on zero integers for the rest of Python 3.x.
To check whether a number is multiple of second number
31,449,216
2
2015-07-16T08:36:09Z
31,449,252
8
2015-07-16T08:37:44Z
[ "python", "numbers" ]
I want to check whether a number is multiple of second. What's wrong with the following code? ``` def is_multiple(x,y): if x!=0 & (y%x)==0 : print("true") else: print("false") end print("A program in python") x=input("enter a number :") y=input("enter its multiple :") is_multiple(x,y) ``` error: ``` TypeError: not all arguments converted during string formatting ```
You are using the *binary AND operator* `&`; you want the *boolean AND operator* here, `and`: ``` x and (y % x) == 0 ``` Next, you want to get your inputs converted to integers: ``` x = int(input("enter a number :")) y = int(input("enter its multiple :")) ``` You'll get a `NameError` for that `end` expression on a line; drop it altogether, Python doesn't use `end`. You can test for *just* `x`; in a boolean context such as an `if` statement, a number is considered false when it is 0: ``` if x and y % x == 0: ``` Your function `is_multiple()` should probably just return a boolean; leave printing to the part of the program doing all the other input/output: ``` def is_multiple(x, y): return x and (y % x) == 0 print("A program in python") x = int(input("enter a number :")) y = int(input("enter its multiple :")) if is_multiple(x, y): print("true") else: print("false") ``` That last part could be simplified by using a conditional expression: ``` print("A program in python") x = int(input("enter a number :")) y = int(input("enter its multiple :")) print("true" if is_multiple(x, y) else "false") ```
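One wrinkle worth noting as an addendum: because `and` short-circuits to its first operand, the `is_multiple()` shown above returns the integer `0` rather than `False` when `x` is zero. Wrapping the expression in `bool()` normalizes the return type:

```python
def is_multiple(x, y):
    # bool() normalizes the short-circuit result: without it,
    # is_multiple(0, 5) would return the int 0, not False.
    return bool(x and (y % x) == 0)

print(is_multiple(3, 9))   # True
print(is_multiple(0, 9))   # False
```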
C++ program taking minutes to parse large file whereas python is running in a few seconds
31,456,277
2
2015-07-16T13:54:32Z
31,457,227
8
2015-07-16T14:33:43Z
[ "python", "c++", "regex" ]
I am running a C++ program in VS. I provided a regex and I am parsing a file which is over 2 million lines long for strings that match that regex. Here is the code: ``` int main() { ifstream myfile("file.log"); if (myfile.is_open()) { int order_count = 0; regex pat(R"(.*(SOME)(\s)*(TEXT).*)"); for (string line; getline(myfile, line);) { smatch matches; if (regex_search(line, matches, pat)) { order_count++; } } myfile.close(); cout << order_count; } return 0; } ``` The program should search the file for the matching strings and count their occurrences. I have a Python version of the program that does this within 4 seconds using the same regex. I have been waiting around 5 minutes for the above C++ code to work and it still hasn't finished. It is not running into an infinite loop, because I had it print out its current line number at certain intervals and it is progressing. Is there a different way I should write the above code? EDIT: This is run in release mode. EDIT: Here is the Python code: ``` class PythonLogParser: def __init__(self, filename): self.filename = filename def open_file(self): f = open(self.filename) return f def count_stuff(self): f = self.open_file() order_pattern = re.compile(r'(.*(SOME)(\s)*(TEXT).*)') order_count = 0 for line in f: if order_pattern.match(line) != None: order_count+=1 # = order_count + 1 print 'Number of Orders (\'ORDER\'): {0}\n'.format(order_count) f.close() ``` The program finally stopped running. What's most disconcerting is that the output is incorrect (I know what the correct value should be). Perhaps using regex for this problem is not the best solution. I will update if I find a solution that works better. EDIT: Based on the answer by @ecatmur, I made the following changes, and the C++ program ran much faster.
``` int main() { ifstream myfile("file.log"); if (myfile.is_open()) { int order_count = 0; regex pat(R"(.*(SOME)(\s)*(TEXT).*)"); for (string line; getline(myfile, line);) { if (regex_match(line, pat)) { order_count++; } } myfile.close(); cout << order_count; } return 0; } ```
You should be using `regex_match`, not `regex_search`. > ### [7.2.5.3. search() vs. match()](https://docs.python.org/2/library/re.html#search-vs-match) > > Python offers two different primitive operations based on regular expressions: re.match() checks for a match only at the beginning of the string, while re.search() checks for a match anywhere in the string And: > ### [`std::regex_search`](http://en.cppreference.com/w/cpp/regex/regex_search) > > `regex_search` will successfully match any subsequence of the given sequence, whereas `std::regex_match` will only return `true` if the regular expression matches the entire sequence. By using `regex_search` you are generating `n * m` match results, where `n` is the number of characters before and `m` is the number of characters after the central part of your search string. Not surprisingly, this takes a long time to generate. In fact, the most efficient option is to keep `regex_search`, but use *only* the central part of your search string: ``` regex pat(R"((SOME)(\s)*(TEXT))"); ``` And use the overload of `regex_search` that doesn't take a match-results out-parameter (as you're ignoring the match results): ``` if (regex_search(line, pat)) { // no "matches" order_count++; } ```
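The same anchoring principle can be sketched in Python (the log line here is illustrative): searching for just the central pattern, instead of wrapping it in leading and trailing `.*`, gives the same verdict per line while doing strictly less work:

```python
import re

line = "2015-07-16 12:00:00 INFO SOME   TEXT order placed"

# Central pattern only: the engine scans for the literal "SOME" and
# verifies the rest, instead of exploring every split point that the
# surrounding .* wildcards would allow.
fast = re.compile(r"SOME\s*TEXT")
slow = re.compile(r".*(SOME)(\s)*(TEXT).*")

print(bool(fast.search(line)))   # True
print(bool(slow.match(line)))    # True, same verdict, more work
```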
How do I protect urls I use internally for push queues in Google App Engine?
31,456,321
2
2015-07-16T13:56:31Z
31,456,488
7
2015-07-16T14:03:32Z
[ "python", "security", "google-app-engine" ]
I'm running Flask on GAE, and I'm working on implementing a push queue to run tasks for me in the background. Because GAE's push queues work by scheduling and sending http requests to my flask server, I'm concerned about my users guessing the urls I designated for internal use with my push queue. I thought about having the push queue send a secret key along with the requests, and have my server only execute the job if the key included in the request is correct, something like this: ``` taskqueue.add(url='/worker', params={'super_secret_key': 12345}) ``` But I'm wondering if there's a more secure / better way to do this? Thanks!
You can protect your task URLs by configuring them in `app.yaml` to require admin login. Requests issued by the Task Queue service are treated as administrator requests, so your tasks still reach the handler while ordinary users are turned away: ``` - url: /worker ...... login: admin ```
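As an extra defence-in-depth check inside the handler itself: App Engine adds an `X-AppEngine-QueueName` header to push-task requests and strips `X-AppEngine-*` headers from external traffic, so its presence can also be verified in code. A minimal framework-agnostic sketch (the helper name is mine, not part of any API):

```python
def is_task_queue_request(headers):
    # App Engine strips X-AppEngine-* headers from external requests,
    # so this header can only come from the Task Queue service itself.
    return any(k.lower() == "x-appengine-queuename" for k in headers)

print(is_task_queue_request({"X-AppEngine-QueueName": "default"}))  # True
print(is_task_queue_request({"User-Agent": "curl/7.43"}))           # False
```

In a Flask handler you would pass `request.headers` and return a 403 when the check fails.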
Can't instantiate abstract class ... with abstract methods
31,457,855
4
2015-07-16T15:00:35Z
31,458,576
7
2015-07-16T15:32:48Z
[ "python", "abstract-class", "abc", "six" ]
I'm working on a kind of lib, and for a weird reason I have this error. * [Here](https://github.com/josuebrunel/yahoo-fantasy-sport/blob/development/fantasy_sport/roster.py) is my code. Of course *@abc.abstractmethod has to be uncommented* * [Here](https://github.com/josuebrunel/yahoo-fantasy-sport/blob/development/tests.py#L206-247) are my tests *Sorry, couldn't just copy and paste it* I went on the basis that the code below works *test.py* ``` import abc import six @six.add_metaclass(abc.ABCMeta) class Base(object): @abc.abstractmethod def whatever(self,): raise NotImplementedError class SubClass(Base): def __init__(self,): super(Base, self).__init__() self.whatever() def whatever(self,): print("whatever") ``` In the python shell ``` >>> from test import * >>> s = SubClass() whatever ``` Why am I getting this error for my *roster* module? ``` Can't instantiate abstract class Player with abstract methods _Base__json_builder, _Base__xml_builder ``` Thanks in advance
Your issue comes because you have defined the abstract methods in your base abstract class with `__` (double underscore) prepended. This causes Python to do [name mangling](https://docs.python.org/2/tutorial/classes.html#private-variables-and-class-local-references) at the time the classes are defined. The function names change from `__json_builder` to `_Base__json_builder` and from `__xml_builder` to `_Base__xml_builder`, and these mangled names are what you have to implement/override in your subclass. To show this behavior in your example - ``` >>> import abc >>> import six >>> @six.add_metaclass(abc.ABCMeta) ... class Base(object): ... @abc.abstractmethod ... def __whatever(self): ... raise NotImplementedError ... >>> class SubClass(Base): ... def __init__(self): ... super(Base, self).__init__() ... self.__whatever() ... def __whatever(self): ... print("whatever") ... >>> a = SubClass() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: Can't instantiate abstract class SubClass with abstract methods _Base__whatever ``` When I change the implementation to the following, it works ``` >>> class SubClass(Base): ... def __init__(self): ... super(Base, self).__init__() ... self._Base__whatever() ... def _Base__whatever(self): ... print("whatever") ... >>> a = SubClass() whatever ``` But this is very tedious, so you may want to reconsider whether you really want to define your functions with `__` (double underscore). You can read more about name mangling [here](https://docs.python.org/2/tutorial/classes.html#private-variables-and-class-local-references).
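If the double underscores aren't essential, a single leading underscore avoids name mangling entirely and lets the subclass override under the same name; a minimal Python 3 sketch of the idea (using `abc.ABC` rather than `six`):

```python
import abc

class Base(abc.ABC):
    @abc.abstractmethod
    def _json_builder(self):
        raise NotImplementedError

class Player(Base):
    # Single underscore: no mangling, so this really overrides the
    # abstract method and Player becomes instantiable.
    def _json_builder(self):
        return "{}"

print(Player()._json_builder())  # {}
```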
Why does my Sieve of Eratosthenes work faster with integers than with booleans?
31,459,623
13
2015-07-16T16:23:41Z
31,459,730
13
2015-07-16T16:29:21Z
[ "python", "performance", "python-2.7", "boolean", "cpython" ]
I wrote a simple Sieve of Eratosthenes, which uses a list of ones and turns them into zeros if not prime, like so: ``` def eSieve(n): #Where m is fixed-length list of all integers up to n '''Creates a list of primes less than or equal to n''' m = [1]*(n+1) for i in xrange(2,int((n)**0.5)+1): if m[i]: for j in xrange(i*i,n+1,i): m[j]=0 return [i for i in xrange(2,n) if m[i]] ``` I tested the speed it ran with `%timeit` and got: ``` #n: t #10**1: 7 μs #10**2: 26.6 μs #10**3: 234 μs #10**4: 2.46 ms #10**5: 26.4 ms #10**6: 292 ms #10**7: 3.27 s ``` I assumed, if I changed `[1]` and `0` to booleans, it would run faster... but it does the opposite: ``` #n: t #10**1: 7.31 μs #10**2: 29.5 μs #10**3: 297 μs #10**4: 2.99 ms #10**5: 29.9 ms #10**6: 331 ms #10**7: 3.7 s ``` Why are the booleans slower?
This happens because `True` and `False` are looked up as globals in Python 2. The `0` and `1` literals are just constants, looked up by a quick array reference, while globals are *dictionary* lookups in the global namespace (falling through to the built-ins namespace): ``` >>> import dis >>> def foo(): ... a = True ... b = 1 ... >>> dis.dis(foo) 2 0 LOAD_GLOBAL 0 (True) 3 STORE_FAST 0 (a) 3 6 LOAD_CONST 1 (1) 9 STORE_FAST 1 (b) 12 LOAD_CONST 0 (None) 15 RETURN_VALUE ``` The `True` value is looked up with the `LOAD_GLOBAL` bytecode, while the `1` literal value is copied to the stack with `LOAD_CONST`. If you make `True` and `False` *locals* you can make them just as fast again: ``` def eSieve(n, True=True, False=False): m = [True]*(n+1) for i in xrange(2,int((n)**0.5)+1): if m[i]: for j in xrange(i*i,n+1,i): m[j]=False return [i for i in xrange(2,n) if m[i]] ``` Assigning `True` and `False` as default values for the arguments gives the function those names as locals, with the exact same values; again using a simplified version: ``` >>> def bar(True=True, False=False): ... True == False ... >>> dis.dis(bar) 2 0 LOAD_FAST 0 (True) 3 LOAD_FAST 1 (False) 6 COMPARE_OP 2 (==) 9 POP_TOP 10 LOAD_CONST 0 (None) 13 RETURN_VALUE ``` Note the `LOAD_FAST` opcodes, now with indices just like the `LOAD_CONST` bytecodes; locals in a CPython function are stored in an array just like bytecode constants. With that change, using booleans wins out, albeit by a small margin; my timings: ``` # n integers globals locals # 10**1 4.31 µs 4.2 µs 4.2 µs # 10**2 17.1 µs 17.3 µs 16.5 µs # 10**3 147 µs 158 µs 144 µs # 10**4 1.5 ms 1.66 ms 1.48 ms # 10**5 16.4 ms 18.2 ms 15.9 ms # 10**6 190 ms 215 ms 189 ms # 10**7 2.21 s 2.47 s 2.18 s ``` The difference isn't really that much because Python booleans are just an `int` subclass. Note that in Python 3, `True` and `False` have become keywords and can no longer be assigned to, making it possible to treat them just like integer literals.
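On Python 3 the locals trick can be confirmed programmatically with the `dis` module (on Python 2, read the printed output of `dis.dis` instead): a name bound as a default argument compiles to `LOAD_FAST`, never `LOAD_GLOBAL`.

```python
import dis

def with_default(flag=True):
    return flag

# The default-argument name compiles to LOAD_FAST, an indexed lookup
# in the function's local-variable array, not a dict-based global lookup.
opnames = [ins.opname for ins in dis.get_instructions(with_default)]
print("LOAD_FAST" in opnames)    # True
print("LOAD_GLOBAL" in opnames)  # False
```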
Python calculating Catalan Numbers
31,459,731
10
2015-07-16T16:29:27Z
31,459,931
8
2015-07-16T16:38:59Z
[ "python", "algorithm" ]
I have code which calculates Catalan numbers using the method of binomial coefficients. ``` def BinominalCoefficient(n,k): res = 1; if (k > n - k): k = n - k for i in range(k): res *= (n - i) res /= (i + 1) return res def CatalanNumbers(n): c = BinominalCoefficient(2*n, n) return (c//(n+1)) print (CatalanNumbers(510)) ``` I get a "nan" result when I try to calculate a Catalan number with n greater than 510. Why is this happening? And how can I solve it?
I assume you're using Python 3. Your `res /= (i + 1)` should be `res //= (i + 1)` to force integer arithmetic: ``` def BinominalCoefficient(n,k): res = 1 if (k > n - k): k = n - k for i in range(k): res *= (n - i) res //= (i + 1) return res def CatalanNumbers(n): c = BinominalCoefficient(2*n, n) return (c//(n+1)) print (CatalanNumbers(511)) ``` returns ``` 2190251491739477424254235019785597839694676372955883183976582551028726151813997871354391075304454574949251922785248583970189394756782256529178824038918189668852236486561863197470752363343641524451529091938039960955474280081989297135147411990495428867310575974835605457151854594468879961981363032236839645 ``` You get `nan` because `/=` in Python 3 performs true division and returns a float, which overflows to `inf` for numbers this large; the final `c//(n+1)` on that `inf` then yields `nan`.
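As a cross-check that all-integer arithmetic is the fix, the same value can be computed directly from factorials, which stays exact at any size:

```python
from math import factorial

def catalan(n):
    # C(n) = (2n)! / (n! * (n+1)!), all-integer, so no overflow to inf.
    return factorial(2 * n) // (factorial(n) * factorial(n + 1))

print(catalan(5))               # 42
print(catalan(511) > 10**300)   # True: a ~300-digit exact integer
```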
Make a number more probable to result from random
31,462,265
21
2015-07-16T18:52:19Z
31,462,320
8
2015-07-16T18:55:27Z
[ "python", "numpy", "random" ]
I'm using `x = numpy.random.rand(1)` to generate a random number between 0 and 1. How do I make it so that `x > .5` is 2 times more probable than `x < .5`?
``` tmp = random() if tmp < 0.5: tmp = random() ``` is a pretty easy way to do it. Ehh, I guess this actually makes it 3x as likely... that's what I get for sleeping through that class, I guess. `rand2` below gives the desired 2x: ``` from random import random,uniform def rand1(): tmp = random() if tmp < 0.5:tmp = random() return tmp def rand2(): tmp = uniform(0,1.5) return tmp if tmp <= 1.0 else tmp-0.5 sample1 = [] sample2 = [] for i in range(10000): sample1.append(rand1()>=0.5) sample2.append(rand2()>=0.5) print sample1.count(True) #~ 75% print sample2.count(True) #~ 66% <- desired i believe :) ```
Make a number more probable to result from random
31,462,265
21
2015-07-16T18:52:19Z
31,462,327
26
2015-07-16T18:55:41Z
[ "python", "numpy", "random" ]
I'm using `x = numpy.random.rand(1)` to generate a random number between 0 and 1. How do I make it so that `x > .5` is 2 times more probable than `x < .5`?
That's a fitting name! Just do a little manipulation of the inputs. First set `x` to be in the range from `0` to `1.5`. ``` x = numpy.random.uniform(0, 1.5) ``` `x` has a `2/3` chance of being greater than `0.5` and a `1/3` chance of being smaller. Then if `x` is greater than or equal to `1.0`, subtract `.5` from it ``` if x >= 1.0: x = x - 0.5 ```
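A quick empirical sanity check of the fold-back trick, here using the stdlib `random.uniform` (same idea, no numpy needed): the fraction of samples at or above `0.5` should land near `2/3`.

```python
import random

def skewed():
    # Uniform on [0, 1.5); values >= 1.0 fold back into [0.5, 1.0),
    # doubling the probability mass above 0.5 relative to below it.
    x = random.uniform(0, 1.5)
    return x - 0.5 if x >= 1.0 else x

n = 100000
frac = sum(skewed() >= 0.5 for _ in range(n)) / n
print(round(frac, 2))  # ~0.67
```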
Make a number more probable to result from random
31,462,265
21
2015-07-16T18:52:19Z
31,463,931
15
2015-07-16T20:28:59Z
[ "python", "numpy", "random" ]
I'm using `x = numpy.random.rand(1)` to generate a random number between 0 and 1. How do I make it so that `x > .5` is 2 times more probable than `x < .5`?
This is overkill for you, but it's good to know an actual method for generating a random number with any probability density function (pdf). You can do that by subclassing [scipy.stats.rv\_continuous](http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.stats.rv_continuous.html#scipy-stats-rv-continuous), provided you do it correctly. You will have to have a normalized pdf (so that its integral is 1). If you don't, numpy will automatically adjust the range for you. In this case, your pdf has a value of 2/3 for x<0.5, and 4/3 for x>0.5, with a support of [0, 1) (support is the interval over which it's nonzero): ``` import scipy.stats as spst import numpy as np import matplotlib.pyplot as plt def pdf_shape(x, k): if x < 0.5: return 2/3. elif 0.5 <= x and x < 1: return 4/3. else: return 0. class custom_pdf(spst.rv_continuous): def _pdf(self, x, k): return pdf_shape(x, k) instance = custom_pdf(a=0, b=1) samps = instance.rvs(k=1, size=10000) plt.hist(samps, bins=20) plt.show() ``` ![Example histogram](http://i.stack.imgur.com/ZpGH7.png)
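If pulling in scipy is itself overkill, the same piecewise density can be sampled with plain inverse-transform sampling; a stdlib-only sketch, with the constants derived from the CDF of the 2/3 / 4/3 density above:

```python
import random

def sample_piecewise():
    # CDF: F(x) = (2/3) x            on [0, 0.5)  -> F(0.5) = 1/3
    #      F(x) = 1/3 + (4/3)(x-0.5) on [0.5, 1)
    # Invert each branch to map a uniform u back to x.
    u = random.random()
    if u < 1.0 / 3.0:
        return 1.5 * u
    return 0.5 + 0.75 * (u - 1.0 / 3.0)

n = 100000
frac = sum(sample_piecewise() >= 0.5 for _ in range(n)) / n
print(round(frac, 2))  # ~0.67
```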
Problems installing lxml in Ubuntu
31,462,967
5
2015-07-16T19:32:37Z
31,463,062
15
2015-07-16T19:37:50Z
[ "python", "python-2.7", "pip", "lxml" ]
Getting the following errors when I do: **pip install lxml** ``` You are using pip version 6.0.8, however version 7.1.0 is available. You should consider upgrading via the 'pip install --upgrade pip' command. Collecting lxml Using cached lxml-3.4.4.tar.gz /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url' warnings.warn(msg) Building lxml version 3.4.4. Building without Cython. ERROR: /bin/sh: 1: xslt-config: not found ** make sure the development packages of libxml2 and libxslt are installed ** Using build configuration of libxslt Installing collected packages: lxml Running setup.py install for lxml /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url' warnings.warn(msg) Building lxml version 3.4.4. Building without Cython. ERROR: /bin/sh: 1: xslt-config: not found ** make sure the development packages of libxml2 and libxslt are installed ** Using build configuration of libxslt building 'lxml.etree' extension i686-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-build-RLyvkw/lxml/src/lxml/includes -I/usr/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.linux-i686-2.7/src/lxml/lxml.etree.o -w In file included from src/lxml/lxml.etree.c:239:0: /tmp/pip-build-RLyvkw/lxml/src/lxml/includes/etree_defs.h:14:31: fatal error: libxml/xmlversion.h: No such file or directory #include "libxml/xmlversion.h" ^ compilation terminated. 
error: command 'i686-linux-gnu-gcc' failed with exit status 1 Complete output from command /home/apurva/.virtualenvs/universallogin/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-RLyvkw/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-9WRQzF-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/apurva/.virtualenvs/universallogin/include/site/python2.7: /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url' warnings.warn(msg) Building lxml version 3.4.4. Building without Cython. ERROR: /bin/sh: 1: xslt-config: not found ** make sure the development packages of libxml2 and libxslt are installed ** Using build configuration of libxslt running install running build running build_py creating build creating build/lib.linux-i686-2.7 creating build/lib.linux-i686-2.7/lxml copying src/lxml/pyclasslookup.py -> build/lib.linux-i686-2.7/lxml copying src/lxml/doctestcompare.py -> build/lib.linux-i686-2.7/lxml copying src/lxml/sax.py -> build/lib.linux-i686-2.7/lxml copying src/lxml/_elementpath.py -> build/lib.linux-i686-2.7/lxml copying src/lxml/__init__.py -> build/lib.linux-i686-2.7/lxml copying src/lxml/builder.py -> build/lib.linux-i686-2.7/lxml copying src/lxml/ElementInclude.py -> build/lib.linux-i686-2.7/lxml copying src/lxml/cssselect.py -> build/lib.linux-i686-2.7/lxml copying src/lxml/usedoctest.py -> build/lib.linux-i686-2.7/lxml creating build/lib.linux-i686-2.7/lxml/includes copying src/lxml/includes/__init__.py -> build/lib.linux-i686-2.7/lxml/includes creating build/lib.linux-i686-2.7/lxml/html copying src/lxml/html/soupparser.py -> build/lib.linux-i686-2.7/lxml/html copying src/lxml/html/html5parser.py -> build/lib.linux-i686-2.7/lxml/html copying src/lxml/html/_setmixin.py -> build/lib.linux-i686-2.7/lxml/html copying src/lxml/html/diff.py -> 
build/lib.linux-i686-2.7/lxml/html copying src/lxml/html/formfill.py -> build/lib.linux-i686-2.7/lxml/html copying src/lxml/html/_diffcommand.py -> build/lib.linux-i686-2.7/lxml/html copying src/lxml/html/ElementSoup.py -> build/lib.linux-i686-2.7/lxml/html copying src/lxml/html/__init__.py -> build/lib.linux-i686-2.7/lxml/html copying src/lxml/html/builder.py -> build/lib.linux-i686-2.7/lxml/html copying src/lxml/html/defs.py -> build/lib.linux-i686-2.7/lxml/html copying src/lxml/html/_html5builder.py -> build/lib.linux-i686-2.7/lxml/html copying src/lxml/html/usedoctest.py -> build/lib.linux-i686-2.7/lxml/html copying src/lxml/html/clean.py -> build/lib.linux-i686-2.7/lxml/html creating build/lib.linux-i686-2.7/lxml/isoschematron copying src/lxml/isoschematron/__init__.py -> build/lib.linux-i686-2.7/lxml/isoschematron copying src/lxml/lxml.etree.h -> build/lib.linux-i686-2.7/lxml copying src/lxml/lxml.etree_api.h -> build/lib.linux-i686-2.7/lxml copying src/lxml/includes/htmlparser.pxd -> build/lib.linux-i686-2.7/lxml/includes copying src/lxml/includes/xinclude.pxd -> build/lib.linux-i686-2.7/lxml/includes copying src/lxml/includes/c14n.pxd -> build/lib.linux-i686-2.7/lxml/includes copying src/lxml/includes/xpath.pxd -> build/lib.linux-i686-2.7/lxml/includes copying src/lxml/includes/etreepublic.pxd -> build/lib.linux-i686-2.7/lxml/includes copying src/lxml/includes/schematron.pxd -> build/lib.linux-i686-2.7/lxml/includes copying src/lxml/includes/xslt.pxd -> build/lib.linux-i686-2.7/lxml/includes copying src/lxml/includes/tree.pxd -> build/lib.linux-i686-2.7/lxml/includes copying src/lxml/includes/config.pxd -> build/lib.linux-i686-2.7/lxml/includes copying src/lxml/includes/xmlschema.pxd -> build/lib.linux-i686-2.7/lxml/includes copying src/lxml/includes/xmlerror.pxd -> build/lib.linux-i686-2.7/lxml/includes copying src/lxml/includes/xmlparser.pxd -> build/lib.linux-i686-2.7/lxml/includes copying src/lxml/includes/dtdvalid.pxd -> 
build/lib.linux-i686-2.7/lxml/includes copying src/lxml/includes/uri.pxd -> build/lib.linux-i686-2.7/lxml/includes copying src/lxml/includes/relaxng.pxd -> build/lib.linux-i686-2.7/lxml/includes copying src/lxml/includes/etree_defs.h -> build/lib.linux-i686-2.7/lxml/includes copying src/lxml/includes/lxml-version.h -> build/lib.linux-i686-2.7/lxml/includes creating build/lib.linux-i686-2.7/lxml/isoschematron/resources creating build/lib.linux-i686-2.7/lxml/isoschematron/resources/rng copying src/lxml/isoschematron/resources/rng/iso-schematron.rng -> build/lib.linux-i686-2.7/lxml/isoschematron/resources/rng creating build/lib.linux-i686-2.7/lxml/isoschematron/resources/xsl copying src/lxml/isoschematron/resources/xsl/RNG2Schtrn.xsl -> build/lib.linux-i686-2.7/lxml/isoschematron/resources/xsl copying src/lxml/isoschematron/resources/xsl/XSD2Schtrn.xsl -> build/lib.linux-i686-2.7/lxml/isoschematron/resources/xsl creating build/lib.linux-i686-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_dsdl_include.xsl -> build/lib.linux-i686-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_abstract_expand.xsl -> build/lib.linux-i686-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_message.xsl -> build/lib.linux-i686-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_skeleton_for_xslt1.xsl -> build/lib.linux-i686-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_svrl_for_xslt1.xsl -> build/lib.linux-i686-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/readme.txt -> 
build/lib.linux-i686-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 running build_ext building 'lxml.etree' extension creating build/temp.linux-i686-2.7 creating build/temp.linux-i686-2.7/src creating build/temp.linux-i686-2.7/src/lxml i686-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip-build-RLyvkw/lxml/src/lxml/includes -I/usr/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.linux-i686-2.7/src/lxml/lxml.etree.o -w In file included from src/lxml/lxml.etree.c:239:0: /tmp/pip-build-RLyvkw/lxml/src/lxml/includes/etree_defs.h:14:31: fatal error: libxml/xmlversion.h: No such file or directory #include "libxml/xmlversion.h" ^ compilation terminated. error: command 'i686-linux-gnu-gcc' failed with exit status 1 ---------------------------------------- Command "/home/apurva/.virtualenvs/universallogin/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-RLyvkw/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-9WRQzF-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/apurva/.virtualenvs/universallogin/include/site/python2.7" failed with error code 1 in /tmp/pip-build-RLyvkw/lxml ``` I've already tried this: sudo apt-get install zlib1g-dev before "pip install" reading this answer: [Not able to install lxml verison 3.3.5 in ubuntu](http://stackoverflow.com/questions/23570913/not-able-to-install-lxml-verison-3-3-5-in-ubuntu) but did not help. Also, tried installing python-dev, python3-dev, lib-eventdev did not help either. Also, tried doing this: STATIC\_DEPS=true pip install lxml reading this: <http://lxml.de/installation.html> Did not help either! Will be very grateful if you could suggest something Thanks in advance.
The output states `** make sure the development packages of libxml2 and libxslt are installed **`. Have you done that? ``` sudo apt-get install libxml2-dev libxslt-dev ``` Also, is there a particular reason you're installing via pip instead of installing the `python-lxml` package that comes with Ubuntu? Installing your distribution's package should be preferred unless you have a reason to do otherwise.