Dataset schema:

- `title`: string (length 12 – 150)
- `question_id`: int64 (469 – 40.1M)
- `question_score`: int64 (2 – 5.52k)
- `question_date`: string date (2008-08-02 15:11:16 – 2016-10-18 06:16:31)
- `answer_id`: int64 (536 – 40.1M)
- `answer_score`: int64 (7 – 8.38k)
- `answer_date`: string date (2008-08-02 18:49:07 – 2016-10-18 06:19:33)
- `tags`: list (length 1 – 5)
- `question_body_md`: string (length 15 – 30.2k)
- `answer_body_md`: string (length 11 – 27.8k)
How to add header row to a pandas DataFrame
34,091,877
8
2015-12-04T15:35:59Z
34,094,058
8
2015-12-04T17:27:21Z
[ "python", "csv", "pandas", "header" ]
I am reading a csv file into `pandas`. This csv file consists of four columns and some rows, but does not have a header row, which I want to add. I have been trying the following:

```
Cov = pd.read_csv("path/to/file.txt", sep='\t')
Frame = pd.DataFrame([Cov], columns=["Sequence", "Start", "End", "Coverage"])
Frame.to_csv("path/to/file.txt", sep='\t')
```

But when I apply the code, I get the following error:

```
ValueError: Shape of passed values is (1, 1), indices imply (4, 1)
```

What exactly does the error mean? And what would be a clean way in python to add a header row to my csv file/pandas df?
Alternatively you could read your csv with `header=None` and then add the names via `df.columns`:

```
Cov = pd.read_csv("path/to/file.txt", sep='\t', header=None)
Cov.columns = ["Sequence", "Start", "End", "Coverage"]
```
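As a variation on the `header=None` approach above, `pd.read_csv` also accepts a `names=` parameter that assigns the header in the same call. A minimal sketch, using hypothetical in-memory rows in place of `"path/to/file.txt"`:

```python
import io
import pandas as pd

# hypothetical tab-separated rows standing in for the header-less file
raw = "chr1\t10\t20\t5\nchr1\t20\t30\t7\n"

# names= assigns the column labels while reading, so no second step is needed
cov = pd.read_csv(io.StringIO(raw), sep='\t',
                  names=["Sequence", "Start", "End", "Coverage"])

print(list(cov.columns))  # ['Sequence', 'Start', 'End', 'Coverage']
print(cov.shape)          # (2, 4)
```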
python logistic regression (beginner)
34,093,264
2
2015-12-04T16:44:39Z
34,093,737
7
2015-12-04T17:08:09Z
[ "python", "machine-learning", "scikit-learn", "logistic-regression", "patsy" ]
I'm working on teaching myself a bit of logistic regression using python. I'm trying to apply the lessons in the walkthrough [here](https://github.com/justmarkham/gadsdc1/blob/master/logistic_assignment/kevin_logistic_sklearn.ipynb) to the small dataset in the wikipedia entry [here](https://en.wikipedia.org/wiki/Logistic_regression).

Something doesn't seem quite right. Wikipedia and Excel Solver (verified using the method in [this video](https://www.youtube.com/watch?v=rbKtZcrTlr8)) give intercept -4.0777 and coefficient 1.5046, but the code I built out from the github example outputs -0.924200 and 0.756024, respectively. The code I'm attempting to use is below. Any glaring mistakes?

```
import numpy as np
import pandas as pd
from patsy import dmatrices
from sklearn.linear_model import LogisticRegression

X = [0.5,0.75,1.0,1.25,1.5,1.75,1.75,2.0,2.25,2.5,2.75,3.0,3.25,
     3.5,4.0,4.25,4.5,4.75,5.0,5.5]
y = [0,0,0,0,0,0,1,0,1,0,1,0,1,0,1,1,1,1,1,1]

zipped = list(zip(X,y))
df = pd.DataFrame(zipped, columns=['study_hrs','p_or_f'])

y, X = dmatrices('p_or_f ~ study_hrs', df, return_type="dataframe")
y = np.ravel(y)

model = LogisticRegression()
model = model.fit(X, y)

print(pd.DataFrame(np.transpose(model.coef_), X.columns))

>>>                   0
    Intercept -0.924200
    study_hrs  0.756024
```
## Solution

Just change the model creation line to

```
model = LogisticRegression(C=100000, fit_intercept=False)
```

## Analysis of the problem

By default, sklearn fits a **regularized LogisticRegression** with inverse regularization strength `C=1` (small `C` means strong regularization, large `C` means weak regularization).

> This class implements regularized logistic regression using the liblinear library, newton-cg and lbfgs solvers. It can handle both dense and sparse input. Use C-ordered arrays or CSR matrices containing 64-bit floats for optimal performance; any other input format will be converted (and copied).

Thus to obtain their model you should fit

```
model = LogisticRegression(C=1000000)
```

which gives

```
Intercept -2.038853  # this is actually half the intercept
study_hrs  1.504643  # this is correct
```

Furthermore, the problem also lies in the way you work with data in patsy; see the simplified, correct example:

```
import numpy as np
from sklearn.linear_model import LogisticRegression

X = [0.5,0.75,1.0,1.25,1.5,1.75,1.75,2.0,2.25,2.5,2.75,3.0,3.25,
     3.5,4.0,4.25,4.5,4.75,5.0,5.5]
y = [0,0,0,0,0,0,1,0,1,0,1,0,1,0,1,1,1,1,1,1]

X = np.array([[x] for x in X])
y = np.ravel(y)

model = LogisticRegression(C=1000000.)
model = model.fit(X, y)

print('coef', model.coef_)
print('intercept', model.intercept_)
```

which gives

```
coef [[ 1.50464059]]
intercept [-4.07769916]
```

What is the problem exactly?
When you do `dmatrices`, it by default embeds your input data with a column of ones (biases):

```
X = [0.5,0.75,1.0,1.25,1.5,1.75,1.75,2.0,2.25,2.5,2.75,3.0,3.25,
     3.5,4.0,4.25,4.5,4.75,5.0,5.5]
y = [0,0,0,0,0,0,1,0,1,0,1,0,1,0,1,1,1,1,1,1]

zipped = list(zip(X,y))
df = pd.DataFrame(zipped, columns=['study_hrs','p_or_f'])

y, X = dmatrices('p_or_f ~ study_hrs', df, return_type="dataframe")
print(X)
```

which leads to

```
    Intercept  study_hrs
0           1       0.50
1           1       0.75
2           1       1.00
3           1       1.25
4           1       1.50
5           1       1.75
6           1       1.75
7           1       2.00
8           1       2.25
9           1       2.50
10          1       2.75
11          1       3.00
12          1       3.25
13          1       3.50
14          1       4.00
15          1       4.25
16          1       4.50
17          1       4.75
18          1       5.00
19          1       5.50
```

and this is why the resulting bias is just **a half** of the true one: scikit-learn also added a column of ones... so you now have **two biases**, and the optimal solution is to give each of them half of the weight which would be given to a single one.

So what can you do?

* do not use patsy in such a way
* forbid patsy to add a bias
* tell sklearn not to add a bias

```
import numpy as np
import pandas as pd
from patsy import dmatrices
from sklearn.linear_model import LogisticRegression

X = [0.5,0.75,1.0,1.25,1.5,1.75,1.75,2.0,2.25,2.5,2.75,3.0,3.25,
     3.5,4.0,4.25,4.5,4.75,5.0,5.5]
y = [0,0,0,0,0,0,1,0,1,0,1,0,1,0,1,1,1,1,1,1]

zipped = list(zip(X,y))
df = pd.DataFrame(zipped, columns=['study_hrs','p_or_f'])

y, X = dmatrices('p_or_f ~ study_hrs', df, return_type="dataframe")
y = np.ravel(y)

model = LogisticRegression(C=100000, fit_intercept=False)
model = model.fit(X, y)

print(pd.DataFrame(np.transpose(model.coef_), X.columns))
```

which gives

```
Intercept -4.077571
study_hrs  1.504597
```

as desired.
Django: Support for string view arguments to url() is deprecated and will be removed in Django 1.10
34,096,424
20
2015-12-04T19:56:59Z
34,096,508
42
2015-12-04T20:02:31Z
[ "python", "django", "url", "deprecated" ]
New python/Django user (and indeed new to SO): When trying to migrate my Django project, I get an error:

```
RemovedInDjango110Warning: Support for string view arguments to url() is
deprecated and will be removed in Django 1.10 (got main.views.home).
Pass the callable instead. url(r'^$', 'main.views.home')
```

Apparently the second argument can't be a string anymore. I wrote this code by following a tutorial at pluralsight.com that teaches Django with a previous version (I'm currently working with 1.9). The teacher instructs us to create urlpatterns in urls.py from the views we create in apps, such as the following:

```
from django.conf.urls import url
from django.contrib import admin

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^$', 'main.views.home')
]
```

to reference

```
def home(request):
    return render(request, "main/home.html",
                  {'message': 'You\'ve met with a terrible fate, haven\'t you?'})
    # this message calls HTML, not shown, not important for question
```

in the views.py of an app "main" that I created.

If this method is being deprecated, how do I pass the view argument not as a string? If I just remove the quotes, as shown in the documentation (<https://docs.djangoproject.com/en/1.9/topics/http/urls/>), I get an error:

```
NameError: name 'main' is not defined
```

I tried to "import" views or main using the code presented in this documentation:

```
from . import views
```

or

```
from . import main
```

which gave me:

```
ImportError: cannot import name 'views'
```

and

```
ImportError: cannot import name 'main'
```

I believe I've traced this down to an import error, and am currently researching that.
I have found the answer to my question. It was indeed an import error. For Django 1.10, you now have to import the app's views.py, and then pass the second argument of url() without quotes. Here is my code now in urls.py:

```
from django.conf.urls import url
from django.contrib import admin
import main.views

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^$', main.views.home)
]
```

I did not change anything in the app or views.py files. Props to @Rik Poggi for illustrating how to import in his answer to this question: [Django - Import views from separate apps](http://stackoverflow.com/questions/11439447/django-import-views-from-separate-apps)
Django: Support for string view arguments to url() is deprecated and will be removed in Django 1.10
34,096,424
20
2015-12-04T19:56:59Z
34,096,518
7
2015-12-04T20:02:50Z
[ "python", "django", "url", "deprecated" ]
New python/Django user (and indeed new to SO): When trying to migrate my Django project, I get an error:

```
RemovedInDjango110Warning: Support for string view arguments to url() is
deprecated and will be removed in Django 1.10 (got main.views.home).
Pass the callable instead. url(r'^$', 'main.views.home')
```

Apparently the second argument can't be a string anymore. I wrote this code by following a tutorial at pluralsight.com that teaches Django with a previous version (I'm currently working with 1.9). The teacher instructs us to create urlpatterns in urls.py from the views we create in apps, such as the following:

```
from django.conf.urls import url
from django.contrib import admin

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^$', 'main.views.home')
]
```

to reference

```
def home(request):
    return render(request, "main/home.html",
                  {'message': 'You\'ve met with a terrible fate, haven\'t you?'})
    # this message calls HTML, not shown, not important for question
```

in the views.py of an app "main" that I created.

If this method is being deprecated, how do I pass the view argument not as a string? If I just remove the quotes, as shown in the documentation (<https://docs.djangoproject.com/en/1.9/topics/http/urls/>), I get an error:

```
NameError: name 'main' is not defined
```

I tried to "import" views or main using the code presented in this documentation:

```
from . import views
```

or

```
from . import main
```

which gave me:

```
ImportError: cannot import name 'views'
```

and

```
ImportError: cannot import name 'main'
```

I believe I've traced this down to an import error, and am currently researching that.
You should be able to use the following:

```
from django.conf.urls import url
from django.contrib import admin

from main import views

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^$', views.home)
]
```

I'm not absolutely certain what your directory structure looks like, but a relative import such as `from . import X` is for when the files are in the same folder as each other.
Identifier normalization: Why is the micro sign converted into the Greek letter mu?
34,097,193
16
2015-12-04T20:49:37Z
34,097,194
13
2015-12-04T20:49:37Z
[ "python", "python-3.x", "unicode", "identifier", "python-internals" ]
I just stumbled upon the following odd situation:

```
>>> class Test:
        µ = 'foo'

>>> Test.µ
'foo'
>>> getattr(Test, 'µ')
Traceback (most recent call last):
  File "<pyshell#4>", line 1, in <module>
    getattr(Test, 'µ')
AttributeError: type object 'Test' has no attribute 'µ'
>>> 'µ'.encode(), dir(Test)[-1].encode()
(b'\xc2\xb5', b'\xce\xbc')
```

The character I entered is always the µ sign on the keyboard, but for some reason it gets converted. Why does this happen?
There are two different characters involved here. One is the [MICRO SIGN](http://www.fileformat.info/info/unicode/char/00b5/index.htm), which is the one on the keyboard, and the other is [GREEK SMALL LETTER MU](http://www.fileformat.info/info/unicode/char/03bc/index.htm).

To understand what’s going on, we should take a look at how Python defines identifiers in the [language reference](https://docs.python.org/3/reference/lexical_analysis.html#identifiers):

```
identifier   ::=  xid_start xid_continue*
id_start     ::=  <all characters in general categories Lu, Ll, Lt, Lm, Lo, Nl,
                   the underscore, and characters with the Other_ID_Start property>
id_continue  ::=  <all characters in id_start, plus characters in the categories
                   Mn, Mc, Nd, Pc and others with the Other_ID_Continue property>
xid_start    ::=  <all characters in id_start whose NFKC normalization is in
                   "id_start xid_continue*">
xid_continue ::=  <all characters in id_continue whose NFKC normalization is in
                   "id_continue*">
```

Both our characters, MICRO SIGN and GREEK SMALL LETTER MU, are part of the `Ll` unicode group (lowercase letters), so both of them can be used at any position in an identifier. Now note that the definition of `identifier` actually refers to `xid_start` and `xid_continue`, and those are defined as all characters in the respective non-x definition whose NFKC normalization results in a valid character sequence for an identifier.

Python apparently only cares about the *normalized* form of identifiers. This is confirmed a bit below:

> All identifiers are converted into the normal form NFKC while parsing; comparison of identifiers is based on NFKC.

NFKC is a [Unicode normalization](http://unicode.org/reports/tr15/) that decomposes characters into individual parts. The MICRO SIGN decomposes into GREEK SMALL LETTER MU, and that’s exactly what’s going on here. There are a lot of other characters that are also affected by this normalization.
One other example is [OHM SIGN](http://www.fileformat.info/info/unicode/char/2126/index.htm), which decomposes into [GREEK CAPITAL LETTER OMEGA](http://www.fileformat.info/info/unicode/char/03a9/index.htm). Using that as an identifier gives a similar result, here shown using locals:

```
>>> Ω = 'bar'
>>> locals()['Ω']
Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    locals()['Ω']
KeyError: 'Ω'
>>> [k for k, v in locals().items() if v == 'bar'][0].encode()
b'\xce\xa9'
>>> 'Ω'.encode()
b'\xe2\x84\xa6'
```

So in the end, this is just something that Python does. Unfortunately, there isn’t really a good way to detect this behavior, causing errors such as the one shown. Usually, when the identifier is only referred to as an identifier, i.e. it’s used like a real variable or attribute, then everything will be fine: the normalization runs every time, and the identifier is found.

The only problem is with string-based access. Strings are just strings, so of course no normalization happens (that would be just a bad idea). And the two ways shown here, [`getattr`](https://docs.python.org/3/library/functions.html#getattr) and [`locals`](https://docs.python.org/3/library/functions.html#locals), both operate on dictionaries. `getattr()` accesses an object’s attribute via the object’s `__dict__`, and `locals()` returns a dictionary. And in dictionaries, keys can be any string, so it’s perfectly fine to have a MICRO SIGN or an OHM SIGN in there.

In those cases, you need to remember to perform the normalization yourself. We can utilize [`unicodedata.normalize`](https://docs.python.org/3/library/unicodedata.html#unicodedata.normalize) for this, which then also allows us to correctly get our value from inside `locals()` (or using `getattr`):

```
>>> normalized_ohm = unicodedata.normalize('NFKC', 'Ω')
>>> locals()[normalized_ohm]
'bar'
```
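The same normalization fix also resolves the `getattr` failure from the original question; a small sketch:

```python
import unicodedata

class Test:
    µ = 'foo'  # the identifier is NFKC-normalized to GREEK SMALL LETTER MU at parse time

micro = '\u00b5'                            # MICRO SIGN, as typed on the keyboard
key = unicodedata.normalize('NFKC', micro)  # -> '\u03bc', GREEK SMALL LETTER MU

print(getattr(Test, key))  # 'foo'
print(micro == key)        # False: two distinct code points
```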
How can I convert a tensor into a numpy array in TensorFlow?
34,097,281
15
2015-12-04T20:55:54Z
34,097,344
17
2015-12-04T20:59:39Z
[ "python", "numpy", "tensorflow" ]
I know how to convert a numpy array into a tensor object with the function `tf.convert_to_tensor(img.eval())`. My problem is that after I apply some preprocessing to these tensors in terms of brightness, contrast, etc., I would like to view the resulting transformations to evaluate and tweak my parameters. How can I convert a tensor into a numpy array so I can show it as an image with PIL?
To convert back from a tensor to a numpy array, you can simply run `.eval()` on the transformed tensor.
Why doesn't `x != x.isdigit()` work?
34,100,616
3
2015-12-05T02:38:02Z
34,100,630
8
2015-12-05T02:40:18Z
[ "python", "python-3.x" ]
I need to make a loop that prompts the user to enter a valid string of digits, and has to keep asking until they enter a correct input. I think I have the right idea here, but I am not entirely sure how to correct the wrong input.

```
def c():
    x = input("Enter a String of Digits")
    while x != x.isdigit()
        i = input("enter correct data string")
    else:
        print("True")
c()
```
`str.isdigit()` returns a boolean (True / False); don't compare it with `x` itself, just use the return value:

```
def c():
    x = input("Enter a String of Digits")
    while not x.isdigit():
        x = input("enter correct data string")
    print("True")

c()
```

* SyntaxError fixed: added the missing `:` at the end of the `while ...:` line.
* `i = ...` changed to `x = ...` so the loop re-checks the new input.
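To see why the original comparison can never behave as intended, it helps to look at what `str.isdigit()` actually returns; a quick sketch:

```python
# str.isdigit() yields a bool describing the whole string, so the original
# condition x != x.isdigit() compares a str with a bool -- never equal.
for s in ['123', '12a', '']:
    print(repr(s), s.isdigit())
# '123' True
# '12a' False
# '' False

print('123' != '123'.isdigit())  # True: a non-empty str never equals a bool
```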
Unexpected output using Pythons' ternary operator in combination with lambda
34,100,732
8
2015-12-05T02:56:00Z
34,218,180
9
2015-12-11T07:26:54Z
[ "python", "boolean", "ternary-operator", "conditional-expressions" ]
I have a specific situation in which I would like to do the following (actually it is more involved than this, but I reduced the problem to its essence):

```
>>> (lambda e: 1)(0) if (lambda e: True)(0) else (lambda e: 2)(0)
True
```

which is a difficult way of writing:

```
>>> 1 if True else 2
1
```

but in reality '1', 'True' and '2' are additional expressions that get evaluated and which require the variable 'e', which I set to '0' for this simplified code example. Note the difference in output between both expressions above, although

```
>>> (lambda e: 1)(0)
1
>>> (lambda e: True)(0)
True
>>> (lambda e: 2)(0)
2
```

The funny thing is that this is a special case, because if I replace '1' by '3' I get the expected/desired result:

```
>>> (lambda e: 3)(0) if (lambda e: True)(0) else (lambda e: 2)(0)
3
```

It's even correct if I replace '1' by '0' (which could also be a special case, since 1==True and 0==False):

```
>>> (lambda e: 0)(0) if (lambda e: True)(0) else (lambda e: 2)(0)
0
```

Also, if I replace 'True' by 'not False' or 'not not True', it still works:

```
>>> (lambda e: 1)(0) if (lambda e: not False)(0) else (lambda e: 2)(0)
1
>>> (lambda e: 1)(0) if (lambda e: not not True)(0) else (lambda e: 2)(0)
1
```

Another alternative formulation uses the usual if..then..else statement and does not produce the error:

```
>>> if (lambda e: True)(0):
        (lambda e: 1)(0)
    else:
        (lambda e: 2)(0)

1
```

What explains this behavior? How can I avoid it in a nice way (without resorting to 'not not True' or something similar)? Thanks!
I think I figured out why the bug is happening, and why your repro is Python 3 specific. [Code objects do equality comparisons by value](https://hg.python.org/cpython/file/3.5/Objects/codeobject.c#l535), rather than by pointer, strangely enough:

```
static PyObject *
code_richcompare(PyObject *self, PyObject *other, int op)
{
    ...
    co = (PyCodeObject *)self;
    cp = (PyCodeObject *)other;

    eq = PyObject_RichCompareBool(co->co_name, cp->co_name, Py_EQ);
    if (eq <= 0) goto unequal;
    eq = co->co_argcount == cp->co_argcount;
    if (!eq) goto unequal;
    eq = co->co_kwonlyargcount == cp->co_kwonlyargcount;
    if (!eq) goto unequal;
    eq = co->co_nlocals == cp->co_nlocals;
    if (!eq) goto unequal;
    eq = co->co_flags == cp->co_flags;
    if (!eq) goto unequal;
    eq = co->co_firstlineno == cp->co_firstlineno;
    if (!eq) goto unequal;
    ...
```

In Python 2, `lambda e: True` does a global name lookup and `lambda e: 1` loads a constant `1`, so the code objects for these functions don't compare equal. In Python 3, `True` is a keyword and both lambdas load constants. Since `1 == True`, the code objects are sufficiently similar that all the checks in `code_richcompare` pass, and the code objects compare the same. (One of the checks is for line number, so the bug only appears when the lambdas are on the same line.)

[The bytecode compiler calls `ADDOP_O(c, LOAD_CONST, (PyObject*)co, consts)`](https://hg.python.org/cpython/file/3.5/Python/compile.c#l1473) to create the `LOAD_CONST` instruction that loads a lambda's code onto the stack, and `ADDOP_O` uses a dict to keep track of objects it has added, in an attempt to save space on things like duplicate constants. It has some handling to distinguish values like `0.0`, `0`, and `-0.0` that would otherwise compare equal, but it wasn't expected that it would ever need to handle equal-but-inequivalent code objects. The code objects aren't distinguished properly, and the two lambdas end up sharing a single code object.
By replacing `True` with `1.0`, we can reproduce the bug on Python 2:

```
>>> f1, f2 = lambda: 1, lambda: 1.0
>>> f2()
1
```

I don't have Python 3.5, so I can't check whether the bug is still present in that version. I didn't see anything in the bug tracker about it, but I could have just missed the report. If the bug is still there and hasn't been reported, it should be reported.
POS tagging using spaCy
34,102,420
5
2015-12-05T07:25:25Z
34,105,051
8
2015-12-05T12:45:14Z
[ "python", "nlp", "spacy" ]
I am trying to do POS tagging using the spaCy module in Python. Here is my code for the same:

```
from spacy.en import English, LOCAL_DATA_DIR
import spacy.en
import os

data_dir = os.environ.get('SPACY_DATA', LOCAL_DATA_DIR)
nlp = English(parser=False, tagger=True, entity=False)

def print_fine_pos(token):
    return (token.tag_)

def pos_tags(sentence):
    sentence = unicode(sentence, "utf-8")
    tokens = nlp(sentence)
    tags = []
    for tok in tokens:
        tags.append((tok, print_fine_pos(tok)))
    return tags

a = "we had crispy dosa"
print pos_tags(a)
```

Output:

```
[(We , u'PRP'), (had , u'VBD'), (crispy , u'NN'), (dosa, u'NN')]
```

Here it returns crispy as a noun instead of an adjective. However, if I use a test sentence like

```
a = "we had crispy fries"
```

it recognizes that crispy is an adjective. Here is the output:

```
[(we , u'PRP'), (had , u'VBD'), (crispy , u'JJ'), (fries, u'NNS')]
```

I think the primary reason crispy wasn't tagged as an adjective in the first case is that dosa was tagged as 'NN', whereas fries was tagged as 'NNS' in the second case. Is there any way I can get crispy to be tagged as an adjective in the first case too?
**TL;DR:** You should accept the occasional error.

Details: spaCy's tagger is statistical, meaning that the tags you get are its best estimate based on the data it was shown during training. I would guess those data did not contain the word `dosa`. The tagger had to guess, and guessed wrong. There isn't an easy way to correct its output, because it is not using rules or anything else you can modify easily. The model has been trained on a standard corpus of English, which may be quite different from the kind of language you are using it on (domain). If the error rate is too high for your purposes, you can re-train the model using domain-specific data. This will be very laborious, though. Ask yourself what you are trying to achieve and whether a 3% error rate in PoS tagging is the worst of your problems.

In general, you shouldn't judge the performance of a statistical system on a case-by-case basis. The accuracy of modern English PoS taggers is [around 97%](http://nlp.stanford.edu/pubs/CICLing2011-manning-tagging.pdf), which is roughly the same as the average human's. You will inevitably get some errors. However, the errors of the model will not be the same as human errors, as the two have "learnt" to solve the problem in different ways. Sometimes the model will get confused by things you and I consider obvious, e.g. your example. This doesn't mean it is bad overall, or that PoS tagging is your real problem.
Upgrade pip in Amazon Linux
34,103,119
6
2015-12-05T09:03:57Z
34,584,537
8
2016-01-04T04:06:04Z
[ "python", "amazon-ec2", "pip" ]
I wanted to deploy my Python app on Amazon Linux AMI 2015.09.1, which has Python 2.7 (default) and pip (6.1.1). I upgraded pip using the command:

```
sudo pip install -U pip
```

However, pip then seemed broken, and showed this message when I tried to install packages:

```
pkg_resources.DistributionNotFound: pip==6.1.1
```

I found out that pip removed the previous files located in `/usr/bin/` and installed the new version in `/usr/local/bin`. Thus, I tried to specify the install location with:

```
sudo pip install -U --install-option="--prefix='/usr/bin'" pip
```

Nevertheless, it still installed into `/usr/local/bin`. In addition, pip could not work with `sudo` even though it installed successfully. The error message:

```
sudo: pip2.7: command not found
```

Is there a way to properly manage pip?
Try:

```
sudo which pip
```

This may reveal something like 'no pip in ($PATH)'. If that is the case, you can then do:

```
which pip
```

which will give you a path like `/usr/local/bin/pip`. Copy pip to the sbin folder by running:

```
sudo cp /usr/local/bin/pip /usr/sbin/
```

This will allow you to run `sudo pip` without any errors.
Unpack nested list for arguments to map()
34,110,317
5
2015-12-05T20:11:44Z
34,110,336
7
2015-12-05T20:13:45Z
[ "python", "arguments", "map-function", "argument-unpacking" ]
I'm sure there's a way of doing this, but I haven't been able to find it. Say I have:

```
foo = [
    [1, 2],
    [3, 4],
    [5, 6]
]

def add(num1, num2):
    return num1 + num2
```

Then how can I use `map(add, foo)` such that it passes `num1=1`, `num2=2` for the first iteration, i.e., it does `add(1, 2)`, then `add(3, 4)` for the second, etc.?

* Trying `map(add, foo)` obviously does `add([1, 2], #nothing)` for the first iteration
* Trying `map(add, *foo)` does `add(1, 3, 5)` for the first iteration

I want something like `map(add, foo)` to do `add(1, 2)` on the first iteration. Expected output: `[3, 7, 11]`
It sounds like you need [`starmap`](https://docs.python.org/3/library/itertools.html#itertools.starmap):

```
>>> import itertools
>>> list(itertools.starmap(add, foo))
[3, 7, 11]
```

This unpacks each argument `[a, b]` from the list `foo` for you, passing them to the function `add`. As with all the tools in the `itertools` module, it returns an iterator which you can consume with the `list` built-in function.

From the docs:

> Used instead of `map()` when argument parameters are already grouped in tuples from a single iterable (the data has been “pre-zipped”). The difference between `map()` and `starmap()` parallels the distinction between `function(a,b)` and `function(*c)`.
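For completeness, `starmap` behaves the same as unpacking each pair yourself; a sketch using the question's data, showing equivalent spellings:

```python
from itertools import starmap

foo = [[1, 2], [3, 4], [5, 6]]

def add(num1, num2):
    return num1 + num2

# three equivalent ways to call add with each inner pair unpacked
a = list(starmap(add, foo))             # itertools.starmap
b = [add(*pair) for pair in foo]        # comprehension with * unpacking
c = list(map(lambda p: add(*p), foo))   # plain map with a small adapter

print(a, b, c)  # [3, 7, 11] [3, 7, 11] [3, 7, 11]
```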
Why does Python "preemptively" hang when trying to calculate a very large number?
34,113,609
48
2015-12-06T03:14:09Z
34,113,926
12
2015-12-06T04:09:27Z
[ "python", "linux" ]
I've asked [this question](http://stackoverflow.com/questions/34014099/how-do-i-automatically-kill-a-process-that-uses-too-much-memory-with-python) before about killing a process that uses too much memory, and I've got most of a solution worked out. However, there is one problem: calculating massive numbers seems to be untouched by the method I'm trying to use. The code below is intended to put a 10 second CPU time limit on the process:

```
import resource
import os
import signal

def timeRanOut(n, stack):
    raise SystemExit('ran out of time!')
signal.signal(signal.SIGXCPU, timeRanOut)

soft,hard = resource.getrlimit(resource.RLIMIT_CPU)
print(soft,hard)
resource.setrlimit(resource.RLIMIT_CPU, (10, 100))

y = 10**(10**10)
```

What I *expect* to see when I run this script (on a Unix machine) is this:

```
-1 -1
ran out of time!
```

Instead, I get no output. The only way I get output is with `Ctrl` + `C`, and I get this if I `Ctrl` + `C` after 10 seconds:

```
^C-1 -1
ran out of time!
CPU time limit exceeded
```

If I `Ctrl` + `C` *before* 10 seconds, then I have to do it twice, and the console output looks like this:

```
^C-1 -1
^CTraceback (most recent call last):
  File "procLimitTest.py", line 18, in <module>
    y = 10**(10**10)
KeyboardInterrupt
```

In the course of experimenting and trying to figure this out, I've also put `time.sleep(2)` between the print and the large number calculation. It doesn't seem to have any effect. If I change `y = 10**(10**10)` to `y = 10**10`, then the print and sleep statements work as expected. Adding `flush=True` to the print statement or `sys.stdout.flush()` after the print statement doesn't work either.

**Why can I not limit CPU time for the calculation of a very large number?** How can I fix or at least mitigate this?

---

Additional information:

Python version: `3.3.5 (default, Jul 22 2014, 18:16:02) \n[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)]`

Linux information: `Linux web455.webfaction.com 2.6.32-431.29.2.el6.x86_64 #1 SMP Tue Sep 9 21:36:05 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux`
Use a function. Python does seem to try to precompute integer literals (I only have empirical evidence; if anyone has a source, please let me know). This would normally be a helpful optimization, since the vast majority of literals in scripts are probably small enough to not incur noticeable delays when precomputing. To get around it, you need to make your literal the result of a non-constant computation, like a function call with parameters. Example:

```
import resource
import os
import signal

def timeRanOut(n, stack):
    raise SystemExit('ran out of time!')
signal.signal(signal.SIGXCPU, timeRanOut)

soft,hard = resource.getrlimit(resource.RLIMIT_CPU)
print(soft,hard)
resource.setrlimit(resource.RLIMIT_CPU, (10, 100))

f = lambda x=10: x**(x**x)
y = f()
```

This gives the expected result:

```
xubuntu@xubuntu-VirtualBox:~/Desktop$ time python3 hang.py
-1 -1
ran out of time!

real    0m10.027s
user    0m10.005s
sys     0m0.016s
```
Why does Python "preemptively" hang when trying to calculate a very large number?
34,113,609
48
2015-12-06T03:14:09Z
34,114,371
51
2015-12-06T05:25:25Z
[ "python", "linux" ]
I've asked [this question](http://stackoverflow.com/questions/34014099/how-do-i-automatically-kill-a-process-that-uses-too-much-memory-with-python) before about killing a process that uses too much memory, and I've got most of a solution worked out. However, there is one problem: calculating massive numbers seems to be untouched by the method I'm trying to use. The code below is intended to put a 10 second CPU time limit on the process:

```
import resource
import os
import signal

def timeRanOut(n, stack):
    raise SystemExit('ran out of time!')
signal.signal(signal.SIGXCPU, timeRanOut)

soft,hard = resource.getrlimit(resource.RLIMIT_CPU)
print(soft,hard)
resource.setrlimit(resource.RLIMIT_CPU, (10, 100))

y = 10**(10**10)
```

What I *expect* to see when I run this script (on a Unix machine) is this:

```
-1 -1
ran out of time!
```

Instead, I get no output. The only way I get output is with `Ctrl` + `C`, and I get this if I `Ctrl` + `C` after 10 seconds:

```
^C-1 -1
ran out of time!
CPU time limit exceeded
```

If I `Ctrl` + `C` *before* 10 seconds, then I have to do it twice, and the console output looks like this:

```
^C-1 -1
^CTraceback (most recent call last):
  File "procLimitTest.py", line 18, in <module>
    y = 10**(10**10)
KeyboardInterrupt
```

In the course of experimenting and trying to figure this out, I've also put `time.sleep(2)` between the print and the large number calculation. It doesn't seem to have any effect. If I change `y = 10**(10**10)` to `y = 10**10`, then the print and sleep statements work as expected. Adding `flush=True` to the print statement or `sys.stdout.flush()` after the print statement doesn't work either.

**Why can I not limit CPU time for the calculation of a very large number?** How can I fix or at least mitigate this?

---

Additional information:

Python version: `3.3.5 (default, Jul 22 2014, 18:16:02) \n[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)]`

Linux information: `Linux web455.webfaction.com 2.6.32-431.29.2.el6.x86_64 #1 SMP Tue Sep 9 21:36:05 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux`
**TLDR:** Python precomputes constants in the code. If any very large number is calculated with at least one intermediate step, the process *will* be CPU time limited. --- It took quite a bit of searching, but I have discovered evidence that Python 3 **does** precompute constant literals that it finds in the code before evaluating anything. One of them is this webpage: *[A Peephole Optimizer for Python](http://legacy.python.org/workshops/1998-11/proceedings/papers/montanaro/montanaro.html)*. I've quoted some of it below. > ## ConstantExpressionEvaluator > > This class precomputes a number of constant expressions and stores them in the function's constants list, including obvious binary and unary operations and tuples consisting of just constants. Of particular note is the fact that complex literals are not represented by the compiler as constants but as expressions, so 2+3j appears as > > `LOAD_CONST n (2)` > `LOAD_CONST m (3j)` > `BINARY_ADD` > > This class converts those to > > `LOAD_CONST q (2+3j)` > > which can result in a fairly large performance boost for code that uses complex constants. The fact that `2+3j` is used as an example very strongly suggests that not only small constants are being precomputed and cached, but also any constant literals in the code. I also found [this comment](http://stackoverflow.com/questions/33000483/are-constant-computations-cached-in-python#comment53826859_33000595) on another Stack Overflow question (*[Are constant computations cached in Python?](http://stackoverflow.com/q/33000483/1473772)*): > Note that for Python 3, the peephole optimizer *does* precompute the `1/3` constant. (CPython specific, of course.) – Mark Dickinson Oct 7 at 19:40 These are supported by the fact that replacing ``` y = 10**(10**10) ``` with this *also* hangs, even though I never call the function! ``` def f(): y = 10**(10**10) ``` --- ### The good news Luckily for me, I don't have any such giant literal constants in my code. 
Any computation of such constants will happen later, which can be and is limited by the CPU time limit. I changed ``` y = 10**(10**10) ``` to this, ``` x = 10 print(x) y = 10**x print(y) z = 10**y print(z) ``` and got this output, as desired! ``` -1 -1 10 10000000000 ran out of time! ``` **The moral of the story:** Limiting a process by CPU time or memory consumption (or some other method) *will work* if there is not a large literal constant in the code that Python tries to precompute.
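In current CPython versions the folding described above can be seen directly in a function's `co_consts` tuple. A minimal sketch (this is an implementation detail, verified against modern CPython 3.x, not a language guarantee):

```python
def f():
    return 2 + 3j  # a constant expression, like the 2+3j example quoted above

# The compiler folds 2 + 3j into a single precomputed complex constant,
# so it shows up ready-made in the function's constants tuple.
print(f.__code__.co_consts)
```

If the folded constant appears in `co_consts`, the interpreter never evaluates `2 + 3j` at call time, which is exactly the behavior that made the giant `10**(10**10)` literal hang before the CPU limit could apply.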
Django upgrading to 1.9 error "AppRegistryNotReady: Apps aren't loaded yet."
34,114,427
22
2015-12-06T05:36:10Z
34,115,677
19
2015-12-06T08:56:04Z
[ "python", "django" ]
When upgraded to django 1.9 from 1.8 I got this error. I checked answers for similar questions, but I didn't think this is an issue with any 3rd party packages or apps. ``` Traceback (most recent call last): File "manage.py", line 10, in <module> execute_from_command_line(sys.argv) File "/home/kishore/.virtualenvs/andone/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 350, in execute_from_command_line utility.execute() File "/home/kishore/.virtualenvs/andone/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 342, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/kishore/.virtualenvs/andone/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 176, in fetch_command commands = get_commands() File "/home/kishore/.virtualenvs/andone/local/lib/python2.7/site-packages/django/utils/lru_cache.py", line 100, in wrapper result = user_function(*args, **kwds) File "/home/kishore/.virtualenvs/andone/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 71, in get_commands for app_config in reversed(list(apps.get_app_configs())): File "/home/kishore/.virtualenvs/andone/local/lib/python2.7/site-packages/django/apps/registry.py", line 137, in get_app_configs self.check_apps_ready() File "/home/kishore/.virtualenvs/andone/local/lib/python2.7/site-packages/django/apps/registry.py", line 124, in check_apps_ready raise AppRegistryNotReady("Apps aren't loaded yet.") django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet. ``` I'd modified the Installed apps for 'django.contrib.auth'.
Try adding these lines to the top of your settings file: ``` import django django.setup() ``` If that does not help, try removing third-party applications from your installed apps list one by one.
How do I sort a key:list dictionary by values in list?
34,116,081
4
2015-12-06T09:52:50Z
34,116,185
9
2015-12-06T10:06:07Z
[ "python", "sorting", "dictionary" ]
I have a dictionary ``` mydict = {'name':['peter', 'janice', 'andy'], 'age':[10, 30, 15]} ``` How do I sort this dictionary based on key=="name" list? End result should be: ``` mydict = {'name':['andy', 'janice', 'peter'], 'age':[15, 30, 10]} ``` Or is dictionary the wrong approach for such data?
If you manipulate data, often it helps that each column be an observed variable (name, age), and each row be an observation (e.g. a sampled person). More on *tidy data* in this [PDF link](http://www.jstatsoft.org/article/view/v059i10/v59i10.pdf) > Bad programmers worry about the code. Good programmers worry about > data structures and their relationships - Linus Torvalds A list of dictionaries lends itself better to operations like this. Below I present a beginner-friendly snippet to tidy your data. Once you have a good data structure, sorting by any variable is trivial even for a beginner. No one-liner Python kung-fu :) ``` >>> mydict = {'name':['peter', 'janice', 'andy'], 'age':[10, 30, 15]} ``` Let's work on a better data structure first ``` >>> persons = [] >>> for i, name in enumerate(mydict['name']): ... persons.append({'name': name, 'age': mydict['age'][i]}) ... >>> persons [{'age': 10, 'name': 'peter'}, {'age': 30, 'name': 'janice'}, {'age': 15, 'name': 'andy'}] ``` Now it's easier to work on this data structure which is similar to "data frames" in data analysis environments. Let's sort it by `person.name` ``` >>> persons = sorted(persons, key=lambda person: person['name']) ``` Now bring it back to your format if you want to ``` >>> {'name': [p['name'] for p in persons], 'age': [p['age'] for p in persons]} {'age': [15, 30, 10], 'name': ['andy', 'janice', 'peter']} ```
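If rebuilding the data structure feels heavy for a one-off sort, the same reordering can be done with plain `zip` and `sorted` on the parallel lists. A minimal stdlib-only sketch:

```python
mydict = {'name': ['peter', 'janice', 'andy'], 'age': [10, 30, 15]}

# Pair each name with its age, sort the pairs by name, then split back apart
pairs = sorted(zip(mydict['name'], mydict['age']))
names, ages = zip(*pairs)
mydict = {'name': list(names), 'age': list(ages)}
print(mydict)
```

This keeps the two lists aligned because they are sorted as pairs, never independently.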
Checking a List for a Sequence
34,117,842
10
2015-12-06T13:18:52Z
34,118,255
7
2015-12-06T14:03:01Z
[ "python", "list" ]
I am wanting to check if a list has a specific sequence of elements. I have sorted the list which contains 7 elements, I now want to check of the first 4 are the same as each other and the last 3 are the same as each other. For what I want to achieve to be True the list would be like this: ``` list = ['1','1','1','1','2','2','2'] ``` I hope this makes what I want to achieve clearer.
You can slice a list. Take the first four elements: ``` >>> L = ['1','1','1','1','2','2','2'] >>> L[:4] ['1', '1', '1', '1'] ``` and the last three: ``` >>> L[-3:] ['2', '2', '2'] ``` A [set](https://docs.python.org/3.4/library/stdtypes.html#set) does not allow duplicates. Therefore: ``` >>> set(L[:4]) {'1'} ``` That means if the length of this set is 1, all elements in the sliced list are the same. Putting this all together: ``` >>> len(set(L[:4])) == 1 and len(set(L[-3:])) == 1 True ``` shows you that your condition is met.
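Wrapped into a reusable function (the name `has_pattern` is just illustrative):

```python
def has_pattern(values):
    # True when the first four elements are identical
    # and the last three elements are identical
    return len(set(values[:4])) == 1 and len(set(values[-3:])) == 1

print(has_pattern(['1', '1', '1', '1', '2', '2', '2']))  # True
print(has_pattern(['1', '1', '1', '2', '2', '2', '2']))  # False
```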
Why do i need to create object of `QApplication` and what is the purpose of it in PyQt GUI programming?
34,125,618
6
2015-12-07T02:44:04Z
34,125,797
7
2015-12-07T03:08:55Z
[ "python", "qt", "pyqt5" ]
``` def main(): app = QtWidgets.QApplication(sys.argv) w = QtWidgets.QWidget() w.show() app.exec() ``` This is a very simple Python GUI program with `PyQt5` framework. Actually I'm not familiar with Qt, also a newbie to GUI programming. As in above program, there is a object of `QApplication` has created as well as QWidget. In this case, `app` object doesn't contain the `w` object and i don't see any connection between two of them. So why it's necessary to create a object of `QApplication` and to execute it using that object? Thank you!
You don't *need* to create QApplication, but it's a convenience class that does a lot of things for you. I won't explain everything it can possibly do for you -- you'll find that in [the manual](http://doc.qt.io/qt-5/qapplication.html) -- but I can explain two of the things you're doing in your sample code. ``` app = QtWidgets.QApplication(sys.argv) ``` Here you're creating the QApplication, and passing in arguments to its constructor. Qt [understands certain arguments](http://doc.qt.io/qt-5/qapplication.html#QApplication) that can be used when executing the application. ``` app.exec_() ``` Like you said, there doesn't appear to be any interaction between `app` and `w`. But there is a connection! In order for Qt's GUI to function it needs an event loop in the main thread. The call to `exec_()` starts this event loop. To quote the documentation for [QApplication::exec()](http://doc.qt.io/qt-5/qapplication.html#exec): > Enters the main event loop and waits until exit() is called, then > returns the value that was set to exit() (which is 0 if exit() is > called via quit()). > > It is necessary to call this function to start event handling. The > main event loop receives events from the window system and dispatches > these to the application widgets. So as soon as you call `exec_()`, control leaves your `main()` function and starts responding to UI events until you tell it to exit. Pretty much all desktop GUIs work this way to some extent, though not all of them have a nice Application object that sets up the event loop for you. If you're new to event driven programming you might want to read up on the concepts. [This Wikipedia article](https://en.wikipedia.org/wiki/Event_loop) isn't a bad place to start.
How do I exit this while loop?
34,129,037
5
2015-12-07T08:13:10Z
34,129,124
8
2015-12-07T08:20:35Z
[ "python", "loops", "while-loop" ]
I’m having trouble with exiting the following while loop. This is a simple program that prints `hello` if random value is greater than 5. The program runs fine once but when I try to run it again it goes into an infinite loop. ``` from random import * seed() a = randint(0,10) b = randint(0,10) c = randint(0,10) count = 0 while True: if a > 5 : print ("aHello") count = count + 1 else : a = randint(0,10) if b > 5 : print ("bHello") count = count + 1 else : b = randint(0,10) if c > 5 : print ("cHello") count = count + 1 else : c = randint(0,10) if count == 20 : count = 0 break count = 0 ```
Your while loop may increment the variable count by 0, 1, 2 or 3 per iteration. This means count can jump from a value below 20 straight to a value above 20. For example, if count's value is 18 and the following happens: ``` a > 5, count += 1 b > 5, count += 1 c > 5, count += 1 ``` After these operations, count's value would be 18 + 3 = 21, which is not 20. The condition count == 20 is skipped over, so the loop never breaks. To fix the error, you can either replace the line ``` if count == 20 ``` with ``` if count >= 20 ``` or change your program logic inside the while loop.
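A runnable sketch of the corrected loop with the `>=` fix applied (seeded so the run is reproducible; the prints from the original are dropped to keep it short):

```python
from random import randint, seed

seed(0)  # fixed seed so the demonstration is deterministic
a, b, c = randint(0, 10), randint(0, 10), randint(0, 10)
count = 0
while True:
    if a > 5:
        count += 1
    else:
        a = randint(0, 10)
    if b > 5:
        count += 1
    else:
        b = randint(0, 10)
    if c > 5:
        count += 1
    else:
        c = randint(0, 10)
    if count >= 20:  # >= lets the loop exit even when 20 is jumped over
        break
print(count)
```

Because count grows by at most 3 per iteration and the loop exits as soon as it reaches or passes 20, the final value is always between 20 and 22.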
Python: Line that does not start with #
34,129,811
10
2015-12-07T09:03:16Z
34,129,925
9
2015-12-07T09:09:54Z
[ "python", "regex" ]
I have a file that contains something like > # comment > # comment > not a comment > > # comment > # comment > not a comment I'm trying to read the file line by line and only capture lines that does not start with #. What is wrong with my code/regex? ``` import re def read_file(): pattern = re.compile("^(?<!# ).*") with open('list') as f: for line in f: print pattern.findall(line) ``` Original code captures everything instead of expected.
An alternative and simpler approach is to check whether each line, once leading whitespace is stripped, starts with the `#` character: ``` def read_file(): with open('list') as f: for line in f: if not line.lstrip().startswith('#'): print line ```
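A self-contained demonstration of the same check (using an in-memory list instead of a file; the original answer targets Python 2, so this sketch uses Python 3 print syntax):

```python
lines = [
    "# comment",
    "   # indented comment",            # lstrip() makes this count as a comment too
    "not a comment",
    "x = 1  # a trailing comment is kept",
]

# Keep only lines whose first non-whitespace character is not '#'
kept = [line for line in lines if not line.lstrip().startswith('#')]
print(kept)
```

Note that lines with a trailing `#` survive; only lines that *begin* with `#` (optionally after whitespace) are dropped.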
Python: Line that does not start with #
34,129,811
10
2015-12-07T09:03:16Z
34,130,001
10
2015-12-07T09:14:49Z
[ "python", "regex" ]
I have a file that contains something like > # comment > # comment > not a comment > > # comment > # comment > not a comment I'm trying to read the file line by line and only capture lines that does not start with #. What is wrong with my code/regex? ``` import re def read_file(): pattern = re.compile("^(?<!# ).*") with open('list') as f: for line in f: print pattern.findall(line) ``` Original code captures everything instead of expected.
[Iron Fist](http://stackoverflow.com/a/34129925/996114) shows the way you should probably do this; however, if you want to know what was wrong with your regex anyway, it should have been this: ``` ^[^#].* ``` Explanation: * `^` - match beginning of line. * `[^#]` - match something that is not `#`. `[^...]` is how you say not to match something (just replace `...` with whatever characters you do not want to match. For example, `[^ABC123]` will match a character that is none of A, B, C, 1, 2, or 3. Don't let the `^` that indicates the beginning of a line/string confuse you here. These two `^`'s are totally unrelated. * `.*` - match zero or more of anything else. **EDIT:** The reason `^(?<!# ).*` does NOT discriminate between `# comment` and `not a comment` is that `(?<!#)` checks the text *before* the current position. The engine looks for `#` before the first symbol after the start of string, and since there is no `#` before the start of string, any line is a match for `.*` subpattern. To really check if the first symbol is `#`, you just need to use `^#.*` regex. Or, if there can be leading whitespace, `^\s*#`.
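A runnable check of the corrected pattern, including the leading-whitespace variant mentioned in the edit:

```python
import re

lines = ["# comment", "not a comment", "   # indented comment", "also kept"]

# ^[^#] : the first character of the line is not '#'
no_hash_first = [ln for ln in lines if re.match(r'^[^#]', ln)]
print(no_hash_first)   # the indented comment slips through this filter

# ^\s*# matches comments even behind leading whitespace; keep everything else
kept = [ln for ln in lines if not re.match(r'^\s*#', ln)]
print(kept)
```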
DateField 'str' object has no attribute 'year'
34,131,468
2
2015-12-07T10:33:24Z
34,131,704
8
2015-12-07T10:46:08Z
[ "python", "django" ]
When trying to access the year and month attributes of my `DateField` objects I am getting the error > AttributeError: 'str' object has no attribute 'date'. I thought that DateField objects were saved as Python Datetime objects instead of strings. Here is the models.py: ``` class MonthControlRecord(models.Model): STATUS_CHOICES = ( (0, 'Open'), (1, 'Locked'), (2, 'Closed'), ) employee = models.ForeignKey(Employee, on_delete=models.CASCADE) first_day_of_month = models.DateField() status = models.IntegerField(choices=STATUS_CHOICES, default=0) @property def get_year_month(self): return self.first_day_of_month.year, self.first_day_of_month.month def __str__(self): return self.employee, self.first_day_of_month ``` and the tests.py: ``` employee = Employee.objects.get(staff_number="0001") mcr = MonthControlRecord(employee=employee, first_day_of_month="2015-12-01") mcrYearMonth = mcr.get_year_month ``` and the error: ``` Traceback (most recent call last): File "/Users/James/Django/MITS/src/timesheet/tests.py", line 87, in test_new_month_control_record mcrYearMonth = mcr.get_year_month File "/Users/James/Django/MITS/src/timesheet/models.py", line 54, in get_year_month return self.first_day_of_month.year, self.first_day_of_month.month AttributeError: 'str' object has no attribute 'year' ```
In your test, you're setting the date as a string: ``` mcr = MonthControlRecord(employee=employee, first_day_of_month="2015-12-01") ``` Try setting it as a date: ``` import datetime your_date = datetime.date(2015, 12, 1) mcr = MonthControlRecord(employee=employee, first_day_of_month=your_date) ```
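If the test data has to start out as a string, converting it first with `strptime` gives you a real `datetime.date` with `.year` and `.month` attributes (a sketch; the `MonthControlRecord` model itself is omitted here):

```python
import datetime

# Parse the string once, then pass the resulting date object to the model
first_day = datetime.datetime.strptime("2015-12-01", "%Y-%m-%d").date()
print(first_day.year, first_day.month)  # -> 2015 12
```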
Difference between numpy dot() and Python 3.5+ matrix multiplication @
34,142,485
14
2015-12-07T20:23:26Z
34,142,617
13
2015-12-07T20:30:21Z
[ "python", "numpy", "matrix-multiplication", "python-3.5" ]
I recently moved to Python 3.5 and noticed the [new matrix multiplication operator (@)](https://docs.python.org/3/whatsnew/3.5.html#whatsnew-pep-465) sometimes behaves differently from the [numpy dot](http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) operator. In example, for 3d arrays: ``` import numpy as np a = np.random.rand(8,13,13) b = np.random.rand(8,13,13) c = a @ b # Python 3.5+ d = np.dot(a, b) ``` The `@` operator returns an array of shape: ``` c.shape (8, 13, 13) ``` while the `np.dot()` function returns: ``` d.shape (8, 13, 8, 13) ``` How can I reproduce the same result with numpy dot? Are there any other significant differences?
The `@` operator calls the array's `__matmul__` method, not `dot`. This method is also present in the API as the function [`np.matmul`](http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.matmul.html). ``` >>> a = np.random.rand(8,13,13) >>> b = np.random.rand(8,13,13) >>> np.matmul(a, b).shape (8, 13, 13) ``` From the documentation: > `matmul` differs from `dot` in two important ways. > > * Multiplication by scalars is not allowed. > * Stacks of matrices are broadcast together as if the matrices were elements. The last point makes it clear that `dot` and `matmul` methods behave differently when passed 3D (or higher dimensional) arrays. Quoting from the documentation some more: For `matmul`: > If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly. For [`np.dot`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html): > For 2-D arrays it is equivalent to matrix multiplication, and for 1-D arrays to inner product of vectors (without complex conjugation). *For N dimensions it is a sum product over the last axis of a and the second-to-last of b*
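To actually reproduce the stacked `matmul` result with `dot`, you can loop over the first axis, or spell out the contraction with `einsum`. A sketch assuming NumPy is available:

```python
import numpy as np

a = np.random.rand(8, 13, 13)
b = np.random.rand(8, 13, 13)

# matmul treats the inputs as a stack of 8 (13, 13) matrices
stacked = np.matmul(a, b)

# Equivalent: apply the 2-D dot to each matrix in the stack
looped = np.array([np.dot(a[i], b[i]) for i in range(a.shape[0])])

# Equivalent: the same contraction written with einsum
contracted = np.einsum('ijk,ikl->ijl', a, b)

print(np.allclose(stacked, looped) and np.allclose(stacked, contracted))
```

All three produce the `(8, 13, 13)` result, unlike `np.dot(a, b)`, which contracts the last axis of `a` against the second-to-last axis of `b` and yields `(8, 13, 8, 13)`.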
Sklearn How to Save a Model Created From a Pipeline and GridSearchCV Using Joblib or Pickle?
34,143,829
5
2015-12-07T21:46:23Z
34,166,178
7
2015-12-08T21:16:41Z
[ "python", "scikit-learn", "pipeline", "grid-search" ]
After identifying the best parameters using a `pipeline` and `GridSearchCV`, how do I `pickle`/`joblib` this process to re-use later? I see how to do this when it's a single classifier... ``` from sklearn.externals import joblib joblib.dump(clf, 'filename.pkl') ``` But how do I save this overall `pipeline` with the best parameters after performing and completing a `gridsearch`? I tried: * `joblib.dump(grid, 'output.pkl')` - But that dumped every gridsearch attempt (many files) * `joblib.dump(pipeline, 'output.pkl')` - But I don't think that contains the best parameters --- ``` X_train = df['Keyword'] y_train = df['Ad Group'] pipeline = Pipeline([ ('tfidf', TfidfVectorizer()), ('sgd', SGDClassifier()) ]) parameters = {'tfidf__ngram_range': [(1, 1), (1, 2)], 'tfidf__use_idf': (True, False), 'tfidf__max_df': [0.25, 0.5, 0.75, 1.0], 'tfidf__max_features': [10, 50, 100, 250, 500, 1000, None], 'tfidf__stop_words': ('english', None), 'tfidf__smooth_idf': (True, False), 'tfidf__norm': ('l1', 'l2', None), } grid = GridSearchCV(pipeline, parameters, cv=2, verbose=1) grid.fit(X_train, y_train) #These were the best combination of tuning parameters discovered ##best_params = {'tfidf__max_features': None, 'tfidf__use_idf': False, ## 'tfidf__smooth_idf': False, 'tfidf__ngram_range': (1, 2), ## 'tfidf__max_df': 1.0, 'tfidf__stop_words': 'english', ## 'tfidf__norm': 'l2'} ```
``` from sklearn.externals import joblib joblib.dump(grid.best_estimator_, 'filename.pkl') ``` If you want to dump your object into one file, use: ``` joblib.dump(grid.best_estimator_, 'filename.pkl', compress = 1) ```
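The serialization round trip itself can be sketched without scikit-learn installed; a plain dict stands in for `grid.best_estimator_` here, and the stdlib `pickle` module is used so the sketch is self-contained (joblib's `dump`/`load` calls look the same):

```python
import io
import pickle

# Stand-in for grid.best_estimator_ -- any picklable fitted pipeline works the same way
model = {'tfidf__norm': 'l2', 'tfidf__ngram_range': (1, 2)}

buffer = io.BytesIO()          # in place of 'filename.pkl' on disk
pickle.dump(model, buffer)

buffer.seek(0)
restored = pickle.load(buffer)
print(restored == model)       # -> True
```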
Using cosine distance with scikit learn KNeighborsClassifier
34,144,632
2
2015-12-07T22:36:45Z
34,145,444
7
2015-12-07T23:39:54Z
[ "python", "machine-learning", "scikit-learn", "knn" ]
Is it possible to use something like 1 - cosine similarity with scikit learn's KNeighborsClassifier? [This](http://stackoverflow.com/questions/23032628/why-does-scikit-learns-nearest-neighbor-doesnt-seem-to-return-proper-cosine-si) answer says no, but on the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html) for KNeighborsClassifier, it says the metrics mentioned in [DistanceMetrics](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.DistanceMetric.html) are available. Distance metrics don't include an explicit cosine distance, probably because it's not really a distance, but supposedly it's possible to input a function into the metric. I tried inputting the scikit learn linear kernel into KNeighborsClassifier but it gives me an error that the function needs two arrays as arguments. Anyone else tried this?
The cosine similarity is generally defined as x^T y / (||x|| \* ||y||), and outputs 1 if they are the same and goes to -1 if they are completely different. This definition is not technically a metric, and so you can't use accelerating structures like ball and kd trees with it. If you force scikit learn to use the brute force approach, you should be able to use it as a distance if you pass it your own custom distance metric object. There are methods of transforming the cosine similarity into a valid distance metric if you would like to use ball trees (you can find one in the [JSAT library](https://github.com/EdwardRaff/JSAT)) Notice though, that x^T y / (||x|| \* ||y||) = (x/||x||)^T (y/||y||). The euclidean distance can be equivalently written as sqrt(x^T x + y^T y − 2 x^T y). If we normalize every datapoint before giving it to the KNeighborsClassifier, then `x^T x = 1` for all `x`. So the euclidean distance will degrade to `sqrt(2 − 2 x^T y)`. For completely the same inputs, we would get `sqrt(2-2*1) = 0` and for complete opposites `sqrt(2-2*-1) = 2`. Since `sqrt(2 − 2 x^T y)` is a monotonically decreasing function of `x^T y`, you can get the same ordering as the cosine distance by normalizing your data and then using the euclidean distance. So long as you use the `uniform` weights option, the results will be identical to having used a correct Cosine Distance.
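The identity behind this answer, that the euclidean distance between unit-normalized vectors equals sqrt(2 − 2 x^T y), is easy to verify numerically. A sketch assuming NumPy:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.5, -1.0, 2.0])

# Normalize both vectors to unit length
xn = x / np.linalg.norm(x)
yn = y / np.linalg.norm(y)

cos_sim = np.dot(xn, yn)          # cosine similarity of x and y
euclid = np.linalg.norm(xn - yn)  # euclidean distance of the normalized vectors

# Euclidean distance on unit vectors == sqrt(2 - 2 * cosine similarity)
print(np.isclose(euclid, np.sqrt(2.0 - 2.0 * cos_sim)))
```

Since the mapping is monotonic, nearest-neighbor *rankings* under the two distances agree on normalized data.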
'is' operator behaves unexpectedly with non-cached integers
34,147,515
28
2015-12-08T03:33:40Z
34,147,516
44
2015-12-08T03:33:40Z
[ "python", "python-3.x", "identity", "python-internals" ]
When playing around with the Python interpreter, I stumbled upon this conflicting case regarding the `is` operator: If the evaluation takes place in the function it returns `True`, if it is done outside it returns `False`. ``` >>> def func(): ... a = 1000 ... b = 1000 ... return a is b ... >>> a = 1000 >>> b = 1000 >>> a is b, func() (False, True) ``` Since the `is` operator evaluates the `id()`'s for the objects involved, this means that `a` and `b` point to the same `int` instance when declared inside of function `func` but, on the contrary, they point to a different object when outside of it. Why is this so? --- **Note**: I am aware of the difference between identity (`is`) and equality (`==`) operations as described in [Understanding Python's "is" operator](http://stackoverflow.com/questions/13650293/understanding-pythons-is-operator). In addition, I'm also aware about the caching that is being performed by python for the integers in range `[-5, 256]` as described in ["is" operator behaves unexpectedly with integers](http://stackoverflow.com/questions/306313/pythons-is-operator-behaves-unexpectedly-with-integers). This **isn't the case here** since the numbers are outside that range and **I do** want to evaluate identity and **not** equality.
## tl;dr: As the [reference manual](https://docs.python.org/3.5/reference/executionmodel.html#structure-of-a-program) states: > A block is a piece of Python program text that is executed as a unit. > The following are blocks: a module, a function body, and a class definition. > **Each command typed interactively is a block.** This is why, in the case of a function, you have a **single** code block which contains a **single** object for the numeric literal `1000`, so `id(a) == id(b)` will yield `True`. In the second case, you have **two distinct code objects** each with their own different object for the literal `1000` so `id(a) != id(b)`. Take note that this behavior doesn't manifest with `int` literals only, you'll get similar results with, for example, `float` literals (see [here](http://stackoverflow.com/questions/38834770/is-operator-gives-unexpected-results-on-floats/38835101#38835101)). Of course, comparing objects (except for explicit `is None` tests) should always be done with the equality operator `==` and *not* `is`. *Everything stated here applies to the most popular implementation of Python, CPython. Other implementations might differ so no assumptions should be made when using them.* --- ## Longer Answer: To get a little clearer view and additionally verify this *seemingly odd* behaviour we can look directly in the [**`code`**](https://docs.python.org/3.5/reference/datamodel.html#the-standard-type-hierarchy) objects for each of these cases using the [**`dis`**](https://docs.python.org/3.5/library/dis.html) module. **For the function `func`**: Along with all other attributes, function objects also have a `__code__` hook to allow you to peek into the compiled bytecode for that function. 
Using **[`dis.code_info`](https://docs.python.org/3.5/library/dis.html#dis.code_info)** we can get a nice pretty view of all stored attributes in a code object for a given function: ``` >>> print(dis.code_info(func)) Name: func Filename: <stdin> Argument count: 0 Kw-only arguments: 0 Number of locals: 2 Stack size: 2 Flags: OPTIMIZED, NEWLOCALS, NOFREE Constants: 0: None 1: 1000 Variable names: 0: a 1: b ``` We're only interested in the `Constants` entry for function `func`. In it, we can see that we have two values, `None` (always present) and `1000`. We only have a **single** int instance that represents the constant `1000`. This is the value that `a` and `b` are going to be assigned to when the function is invoked. Accessing this value is easy via `func.__code__.co_consts[1]` and so, another way to view our `a is b` evaluation in the function would be like so: ``` >>> id(func.__code__.co_consts[1]) == id(func.__code__.co_consts[1]) ``` Which, of course, will evaluate to `True` because we're referring to the same object. **For each interactive command:** As noted previously, each interactive command is interpreted as a single code block: parsed, compiled and evaluated independently. We can get the code objects for each command via the [**`compile`**](https://docs.python.org/3.5/library/functions.html#compile) built-in: ``` >>> com1 = compile("a=1000", filename="", mode="exec") >>> com2 = compile("b=1000", filename="", mode="exec") ``` For each assignment statement, we will get a similar looking code object which looks like the following: ``` >>> print(dis.code_info(com1)) Name: <module> Filename: Argument count: 0 Kw-only arguments: 0 Number of locals: 0 Stack size: 1 Flags: NOFREE Constants: 0: 1000 1: None Names: 0: a ``` The same command for `com2` looks the same but *has a fundamental difference*: **each** of the code objects `com1` and `com2` holds its own int instance representing the literal `1000`. 
This is why, in this case, when we do `a is b` via the `co_consts` argument, we actually get: ``` >>> id(com1.co_consts[0]) == id(com2.co_consts[0]) False ``` Which agrees with what we actually got. *Different code objects, different contents.* --- **Note:** I was somewhat curious as to how exactly this happens in the source code and after digging through it I believe I finally found it. During the compilation phase, the [**`co_consts`**](https://github.com/python/cpython/blob/master/Python/compile.c#L595) attribute is represented by a dictionary object. In [`compile.c`](https://github.com/python/cpython/blob/master/Python/compile.c) we can actually see the initialization: ``` /* snippet for brevity */ u->u_lineno = 0; u->u_col_offset = 0; u->u_lineno_set = 0; u->u_consts = PyDict_New(); /* snippet for brevity */ ``` During compilation this is checked for already existing constants. See [@Raymond Hettinger's answer below](http://stackoverflow.com/a/39325641/4952130) for a bit more on this. --- ### Caveats: * Chained statements will evaluate to an identity check of `True` It should be more clear now why exactly the following evaluates to `True`: ``` >>> a = 1000; b = 1000; >>> a is b ``` In this case, by chaining the two assignment commands together we tell the interpreter to compile these **together**. As in the case for the function object, only one object for the literal `1000` will be created resulting in a `True` value when evaluated. * Execution on a module level yields `True` again: As previously mentioned, the reference manual states that: > ... The following are blocks: **a module** ... So the same premise applies: we will have a single code object (for the module) and so, as a result, single values stored for each different literal. 
* The same **doesn't** apply for **mutable** objects: Meaning that unless we explicitly initialize to the same mutable object (for example with a = b = []), the identity of the objects will never be equal, for example: ``` a = []; b = [] a is b # always returns false ``` Again, in [the documentation](https://docs.python.org/3/reference/datamodel.html#objects-values-and-types) this is specified: > after a = 1; b = 1, a and b may or may not refer to the same object with the value one, depending on the implementation, but after c = []; d = [], c and d are guaranteed to refer to two different, unique, newly created empty lists.
'is' operator behaves unexpectedly with non-cached integers
34,147,515
28
2015-12-08T03:33:40Z
39,325,641
11
2016-09-05T07:24:48Z
[ "python", "python-3.x", "identity", "python-internals" ]
When playing around with the Python interpreter, I stumbled upon this conflicting case regarding the `is` operator: If the evaluation takes place in the function it returns `True`, if it is done outside it returns `False`. ``` >>> def func(): ... a = 1000 ... b = 1000 ... return a is b ... >>> a = 1000 >>> b = 1000 >>> a is b, func() (False, True) ``` Since the `is` operator evaluates the `id()`'s for the objects involved, this means that `a` and `b` point to the same `int` instance when declared inside of function `func` but, on the contrary, they point to a different object when outside of it. Why is this so? --- **Note**: I am aware of the difference between identity (`is`) and equality (`==`) operations as described in [Understanding Python's "is" operator](http://stackoverflow.com/questions/13650293/understanding-pythons-is-operator). In addition, I'm also aware about the caching that is being performed by python for the integers in range `[-5, 256]` as described in ["is" operator behaves unexpectedly with integers](http://stackoverflow.com/questions/306313/pythons-is-operator-behaves-unexpectedly-with-integers). This **isn't the case here** since the numbers are outside that range and **I do** want to evaluate identity and **not** equality.
At the interactive prompt, entries are [compiled in a *single* mode](https://docs.python.org/3/library/functions.html#compile) which processes one complete statement at a time. The compiler itself (in [Python/compile.c](https://hg.python.org/cpython/file/tip/Python/compile.c)) tracks the constants in a dictionary called [*u\_consts*](https://hg.python.org/cpython/file/tip/Python/compile.c#l114) that maps the constant object to its index. In the [*compiler\_add\_o()*](https://hg.python.org/cpython/file/tip/Python/compile.c#l1099) function, you see that before adding a new constant (and incrementing the index), the dict is checked to see whether the constant object and index already exist. If so, they are reused. In short, that means that repeated constants in one statement (such as in your function definition) are folded into one singleton. In contrast, your `a = 1000` and `b = 1000` are two separate statements, so no folding takes place. FWIW, this is all just a CPython implementation detail (i.e. not guaranteed by the language). This is why the references given here are to the C source code rather than the language specification which makes no guarantees on the subject. Hope you enjoyed this insight into how CPython works under the hood :-)
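The folding described here can be checked from Python itself by inspecting `co_consts` (CPython-specific, like everything in this answer):

```python
# Two assignments compiled together form one code object,
# so the constant 1000 is stored only once and both names reuse it.
one_block = compile("a = 1000; b = 1000", "<demo>", "exec")
print(one_block.co_consts.count(1000))   # the literal is stored a single time

namespace = {}
exec(one_block, namespace)
print(namespace['a'] is namespace['b'])  # both names load the same object
```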
UnboundLocalError while using += but not append list
34,147,753
3
2015-12-08T03:57:52Z
34,147,963
9
2015-12-08T04:22:14Z
[ "python", "list", "append" ]
I do not quite understand the difference between the following two similar codes: ``` def y(x): temp=[] def z(j): temp.append(j) z(1) return temp ``` calling `y(2)` returns `[1]` ``` def y(x): temp=[] def z(j): temp+=[j] z(1) return temp ``` calling `y(2)` returns `UnboundLocalError: local variable 'temp' referenced before assignment`. Why `+` operator generates the error? Thanks
Answer to the title question: the difference between `+` and `append` is that ``` [11, 22] + [33, 44] ``` will give you: ``` [11, 22, 33, 44] ``` and: ``` b = [11, 22, 33] b.append([44, 55, 66]) ``` will give you ``` [11, 22, 33, [44, 55, 66]] ``` **Answer to the error** > This is because when you make an assignment to a variable in a scope, that variable becomes local to that scope and shadows any similarly named variable in the outer scope The problem here is that `temp += [j]` is equal to `temp = temp + [j]`. The temp variable is read here before it is assigned, which is why the error is raised. This is actually covered in the Python FAQ. For further reading, see [here](http://eli.thegreenplace.net/2011/05/15/understanding-unboundlocalerror-in-python). :)
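In Python 3, `nonlocal` lets the inner function rebind `temp`, so the `+=` version works; a sketch of both the failing and the fixed variant:

```python
def y_fixed(x):
    temp = []
    def z(j):
        nonlocal temp  # allow rebinding the enclosing variable
        temp += [j]    # no longer raises UnboundLocalError
    z(1)
    return temp

def y_broken(x):
    temp = []
    def z(j):
        temp += [j]    # temp is read before its (implicit) local assignment
    z(1)
    return temp

print(y_fixed(2))      # [1]

try:
    y_broken(2)
except UnboundLocalError:
    print("UnboundLocalError raised, as in the question")
```

The `append` version needs no `nonlocal` because it only *reads* the name `temp` and mutates the list in place; it never rebinds the name.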
Calculate difference each time the sign changes in a list of values
34,153,761
4
2015-12-08T10:42:36Z
34,154,260
8
2015-12-08T11:06:29Z
[ "python", "list", "numpy", "max", "min" ]
Ok let's imagine that I have a list of values like so: ``` list = [-0.23, -0.5, -0.3, -0.8, 0.3, 0.6, 0.8, -0.9, -0.4, 0.1, 0.6] ``` I would like to loop on this list and when the sign changes to get the absolute difference between the maximum (minimum if it's negative) of the first interval and maximum (minimum if it's negative) of the next interval. For example on the previous list, we would like to have a result like so: ``` [1.6, 1.7, 1.5] ``` The tricky part is that it has to work also for lists like: ``` list = [0.12, -0.23, 0.52, 0.2, 0.6, -0.3, 0.4] ``` Which would return : ``` [0.35, 0.83, 0.9, 0.7] ``` It's tricky because some intervals are 1 value long, and I'm having difficulties with managing this. How would you solve this with the least possible number of lines? --- Here is my current code, but it's not working at the moment. `list` is a list of 6 lists, where each of these 6 lists contains else a nan, else a np.array of 1024 values (the values I want to evaluate) ``` Hmax = [] for c in range(0,6): Hmax_tmp = [] for i in range(len(list[c])): if(not np.isnan(list[c][i]).any()): zero_crossings = np.where(np.diff(np.sign(list[c][i])))[0] if(not zero_crossings[0] == 0): zero_crossings = [0] + zero_crossings.tolist() + [1023] diff = [] for j in range(1,len(zero_crossings)-2): if diff.append(max(list[c][i][np.arange(zero_crossings[j-1],zero_crossings[j])].min(), list[c][i][np.arange(zero_crossings[j]+1,zero_crossings[j+1])].max(), key=abs) - max(list[c][i][np.arange(zero_crossings[j+1],zero_crossings[j+2])].min(), list[c][i][np.arange(zero_crossings[j+1],zero_crossings[j+2])].max(), key=abs)) Hmax_tmp.append(np.max(diff)) else: Hmax_tmp.append(np.nan) Hmax.append(Hmax_tmp) ```
This type of grouping operation can be greatly simplified using [`itertools.groupby`](https://docs.python.org/3/library/itertools.html). For example: ``` >>> from itertools import groupby >>> lst = [-0.23, -0.5, -0.3, -0.8, 0.3, 0.6, 0.8, -0.9, -0.4, 0.1, 0.6] # the list >>> minmax = [min(v) if k else max(v) for k,v in groupby(lst, lambda a: a < 0)] >>> [abs(j-i) for i,j in zip(minmax[:-1], minmax[1:])] [1.6, 1.7000000000000002, 1.5] ``` And the second list: ``` >>> lst2 = [0.12, -0.23, 0.52, 0.2, 0.6, -0.3, 0.4] # the list >>> minmax = [min(v) if k else max(v) for k,v in groupby(lst2, lambda a: a < 0)] >>> [abs(j-i) for i,j in zip(minmax[:-1], minmax[1:])] [0.35, 0.83, 0.8999999999999999, 0.7] ``` So here, the list is grouped into consecutive intervals of negative/positive values. The `min`/`max` is computed for each group and stored in a list `minmax`. Lastly, a list comprehension finds the differences. Excuse the inexact representations of floating point numbers!
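The same approach, wrapped as a function for reuse (the name `sign_change_diffs` is just illustrative):

```python
from itertools import groupby

def sign_change_diffs(values):
    # extremum of each run of same-signed values:
    # min for negative runs, max for non-negative runs
    extrema = [min(g) if neg else max(g)
               for neg, g in groupby(values, key=lambda v: v < 0)]
    # absolute difference between each extremum and the next one
    return [abs(b - a) for a, b in zip(extrema, extrema[1:])]

print(sign_change_diffs([-0.23, -0.5, -0.3, -0.8, 0.3, 0.6, 0.8,
                         -0.9, -0.4, 0.1, 0.6]))
# close to [1.6, 1.7, 1.5], up to floating-point error
```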
How to efficiently convert Matlab engine arrays to numpy ndarray?
34,155,829
3
2015-12-08T12:23:41Z
34,155,926
7
2015-12-08T12:27:30Z
[ "python", "matlab", "numpy", "type-conversion", "matlab-engine" ]
I am currently working on a project where I need do some steps of processing with legacy Matlab code (using the Matlab engine) and the rest in Python (numpy). I noticed that converting the results from Matlab's `matlab.mlarray.double` to numpy's `numpy.ndarray` seems horribly slow. Here is some example code for creating an ndarray with 1000 elements from another ndarray, a list and an mlarray: ``` import timeit setup_range = ("import numpy as np\n" "x = range(1000)") setup_arange = ("import numpy as np\n" "x = np.arange(1000)") setup_matlab = ("import numpy as np\n" "import matlab.engine\n" "eng = matlab.engine.start_matlab()\n" "x = eng.linspace(0., 1000.-1., 1000.)") print 'From other array' print timeit.timeit('np.array(x)', setup=setup_arange, number=1000) print 'From list' print timeit.timeit('np.array(x)', setup=setup_range, number=1000) print 'From matlab' print timeit.timeit('np.array(x)', setup=setup_matlab, number=1000) ``` Which takes the following times: ``` From other array 0.00150722111994 From list 0.0705359556928 From matlab 7.0873282467 ``` The conversion takes about 100 times as long as a conversion from list. Is there any way to speed up the conversion?
Moments after posting the question I found the solution. For one-dimensional arrays, access only the `_data` property of the Matlab array. ``` import timeit print 'From list' print timeit.timeit('np.array(x)', setup=setup_range, number=1000) print 'From matlab' print timeit.timeit('np.array(x)', setup=setup_matlab, number=1000) print 'From matlab_data' print timeit.timeit('np.array(x._data)', setup=setup_matlab, number=1000) ``` prints ``` From list 0.0719847538787 From matlab 7.12802865169 From matlab_data 0.118476275533 ``` For multi-dimensional arrays you need to reshape the array afterwards. In the case of two-dimensional arrays this means calling ``` np.array(x._data).reshape(x.size[::-1]).T ```
How to show ampersand (&) symbol in python?
34,157,247
2
2015-12-08T13:33:19Z
34,157,303
11
2015-12-08T13:36:08Z
[ "python", "wxpython" ]
If I do: ``` self.a = wx.Button(self, -1, "Hi&Hello") ``` I get a button named `HiHello`; it ignores the `&` symbol. How can I fix it?
`&` is a special marker in GUI labels for buttons and menu items. It defines the standard keyboard shortcut for that element. You can escape it by doubling the character: ``` self.a = wx.Button(self, -1, "Hi&&Hello") ```
Filter a pandas dataframe using values from a dict
34,157,811
2
2015-12-08T13:59:49Z
34,162,576
7
2015-12-08T17:47:40Z
[ "python", "pandas" ]
I need to filter a data frame with a dict, constructed with the key being the column name and the value being the value that I want to filter: ``` filter_v = {'A':1, 'B':0, 'C':'This is right'} # this would be the normal approach df[(df['A'] == 1) & (df['B'] ==0)& (df['C'] == 'This is right')] ``` But I want to do something on the lines ``` for column, value in filter_v.items(): df[df[column] == value] ``` but this will filter the data frame several times, one value at a time, and not apply all filters at the same time. Is there a way to do it programmatically? EDIT: an example: ``` df1 = pd.DataFrame({'A':[1,0,1,1, np.nan], 'B':[1,1,1,0,1], 'C':['right','right','wrong','right', 'right'],'D':[1,2,2,3,4]}) filter_v = {'A':1, 'B':0, 'C':'right'} df1.loc[df1[filter_v.keys()].isin(filter_v.values()).all(axis=1), :] ``` gives ``` A B C D 0 1 1 right 1 1 0 1 right 2 3 1 0 right 3 ``` but the expected result was ``` A B C D 3 1 0 right 3 ``` only the last one should be selected.
IIUC, you should be able to do something like this: ``` >>> df1.loc[(df1[list(filter_v)] == pd.Series(filter_v)).all(axis=1)] A B C D 3 1 0 right 3 ``` --- This works by making a Series to compare against: ``` >>> pd.Series(filter_v) A 1 B 0 C right dtype: object ``` Selecting the corresponding part of `df1`: ``` >>> df1[list(filter_v)] A C B 0 1 right 1 1 0 right 1 2 1 wrong 1 3 1 right 0 4 NaN right 1 ``` Finding where they match: ``` >>> df1[list(filter_v)] == pd.Series(filter_v) A B C 0 True False True 1 False False True 2 True False False 3 True True True 4 False False True ``` Finding where they *all* match: ``` >>> (df1[list(filter_v)] == pd.Series(filter_v)).all(axis=1) 0 False 1 False 2 False 3 True 4 False dtype: bool ``` And finally using this to index into df1: ``` >>> df1.loc[(df1[list(filter_v)] == pd.Series(filter_v)).all(axis=1)] A B C D 3 1 0 right 3 ```
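For readers without pandas at hand, the same all-conditions idea can be sketched in plain Python over a list of row dicts (the `matches` helper and the sample rows are illustrative, not part of the pandas answer):

```python
def matches(row, filters):
    # True only when every column named in `filters` has the wanted value
    return all(row.get(col) == want for col, want in filters.items())

rows = [
    {'A': 1, 'B': 1, 'C': 'right', 'D': 1},
    {'A': 0, 'B': 1, 'C': 'right', 'D': 2},
    {'A': 1, 'B': 0, 'C': 'right', 'D': 3},
]
filter_v = {'A': 1, 'B': 0, 'C': 'right'}
print([r for r in rows if matches(r, filter_v)])  # only the D=3 row survives
```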
using IFF in python
34,157,836
6
2015-12-08T14:01:07Z
34,157,993
13
2015-12-08T14:08:47Z
[ "python", "python-2.7" ]
Is there any way to write an iff statement (i.e. if and only if) in Python? I want to use it like this: ``` for i in range(x) iff x%2==0 and x%i==0: ``` However, there is no `iff` statement in Python. Wikipedia defines the [truth table for iff](https://en.wikipedia.org/wiki/If_and_only_if#Definition) as this: ``` a | b | iff a and b ----------------------- T | T | T T | F | F F | T | F F | F | T ``` How do I accomplish this in Python?
If you look at the [truth table for IFF](https://en.wikipedia.org/wiki/If_and_only_if#Definition), you can see that (p iff q) is true when both p and q are true or both are false. That's just the same as checking for equality, so in Python code you'd say: ``` if (x%2==0) == (x%i==0): ```
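A tiny sketch that packages the equality check as a helper (the `iff` name is illustrative):

```python
def iff(a, b):
    # biconditional: True exactly when a and b have the same truth value
    return bool(a) == bool(b)

# reproduces the truth table from the question
print([iff(a, b) for a in (True, False) for b in (True, False)])
# [True, False, False, True]
```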
using IFF in python
34,157,836
6
2015-12-08T14:01:07Z
34,158,002
9
2015-12-08T14:08:57Z
[ "python", "python-2.7" ]
Is there any way to write an iff statement (i.e. if and only if) in Python? I want to use it like this: ``` for i in range(x) iff x%2==0 and x%i==0: ``` However, there is no `iff` statement in Python. Wikipedia defines the [truth table for iff](https://en.wikipedia.org/wiki/If_and_only_if#Definition) as this: ``` a | b | iff a and b ----------------------- T | T | T T | F | F F | T | F F | F | T ``` How do I accomplish this in Python?
According to [Wikipedia](https://en.wikipedia.org/wiki/If_and_only_if):

> Note that it is equivalent to that produced by the XNOR gate, and opposite to that produced by the XOR gate.

If that's what you're looking for, you simply want this:

```
if not ((x % 2 == 0) ^ (x % i == 0)):
```

Note the parentheses around each comparison: `^` binds more tightly than `==` in Python, so without them the expression would parse as a chained comparison and give the wrong result.
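A quick demonstration of why the parentheses matter: `^` binds more tightly than `==`, so the unparenthesized form parses as a chained comparison (the values of `x` and `i` are chosen for illustration):

```python
x, i = 4, 2
intended = not ((x % 2 == 0) ^ (x % i == 0))   # XNOR of the two conditions

# Without parentheses, `x%2 == 0 ^ x%i == 0` parses as the chained
# comparison `x%2 == (0 ^ x%i) == 0`, which is a different test entirely.
unparenthesized = not (x % 2 == 0 ^ x % i == 0)

print(intended, unparenthesized)  # True False
```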
Variants of string concatenation?
34,158,494
7
2015-12-08T14:31:18Z
34,158,638
12
2015-12-08T14:38:59Z
[ "python", "string", "string-concatenation" ]
Out of the following two variants (with or without a plus sign between them) of string literal concatenation:

* What's the preferred way?
* What's the difference?
* When should one or the other be used?
* Should neither of them ever be used, and if so, why?
* Is `join` preferred?

Code:

```
>>> # variant 1. Plus
>>> 'A'+'B'
'AB'
>>> # variant 2. Just a blank space
>>> 'A' 'B'
'AB'
>>> # They seem to be equal
>>> 'A'+'B' == 'A' 'B'
True
```
Juxtaposing works only for string literals: ``` >>> 'A' 'B' 'AB' ``` If you work with string objects: ``` >>> a = 'A' >>> b = 'B' ``` you need to use a different method: ``` >>> a b a b ^ SyntaxError: invalid syntax >>> a + b 'AB' ``` The `+` is a bit more obvious than just putting literals next to each other. One use of the first method is to split long texts over several lines, keeping indentation in the source code: ``` >>> a = 5 >>> if a == 5: text = ('This is a long string' ' that I can continue on the next line.') >>> text 'This is a long string that I can continue on the next line.' ``` `''.join()` is the preferred way to concatenate many strings, for example in a list: ``` >>> ''.join(['A', 'B', 'C', 'D']) 'ABCD' ```
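A short runnable recap of the options (values are illustrative):

```python
a, b = 'A', 'B'
combined = a + b                      # works for string objects, not just literals
letters = ['A', 'B', 'C', 'D']
joined = ''.join(letters)             # preferred for many pieces
dashed = '-'.join(letters)            # any separator string works
print(combined, joined, dashed)       # AB ABCD A-B-C-D
```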
How to pass callable in Django 1.9
34,159,071
9
2015-12-08T14:59:23Z
34,159,129
17
2015-12-08T15:02:06Z
[ "python", "django", "django-1.9" ]
Hi, I am new to Python and Django and am following the [django workshop](http://www.django-workshop.de/erste_schritte/views_templates.html) guide. I just installed Python 3.5 and Django 1.9 and get a lot of error messages. I have found a lot of documentation but am now stuck. I want to add views, so I added the following code in urls.py: ``` from django.conf.urls import include, url # Uncomment the next two lines to enable the admin: from django.contrib import admin admin.autodiscover() urlpatterns = [ # Uncomment the admin/doc line below to enable admin documentation: #url(r'^admin/doc/', include('django.contrib.admindocs.urls')), url(r'^admin/', include(admin.site.urls)), url(r'^rezept/(?P<slug>[-\w]+)/$', 'recipes.views.detail'), url(r'^$', 'recipes.views.index'), ] ``` and every time I get the error message: ``` Support for string view arguments to url() is deprecated and will be removed in Django 1.10 (got recipes.views.index). Pass the callable instead. url(r'^$', 'recipes.views.index'), ``` But I couldn't find out how to pass them. The documentation only says to "pass them" but gives no example of how to do it.
This is a deprecation warning, which means the code still runs for now. But to address it, change ``` url(r'^$', 'recipes.views.index'), ``` as follows. First of all, explicitly import the view module: ``` from recipes import views as recipes_views # this is to avoid conflicts with other view imports ``` and then pass the callables in the URL patterns instead of strings: ``` url(r'^rezept/(?P<slug>[-\w]+)/$', recipes_views.detail), url(r'^$', recipes_views.index), ``` [More documentation and the reasoning can be found here](https://docs.djangoproject.com/en/1.9/releases/1.8/#django-conf-urls-patterns) > In the modern era, we have updated the tutorial to instead recommend > importing your views module and referencing your view functions (or > classes) directly. This has a number of advantages, all deriving from > the fact that we are using normal Python in place of “Django String > Magic”: the errors when you mistype a view name are less obscure, IDEs > can help with autocompletion of view names, etc.
Why do many examples use "fig, ax = plt.subplots()" in Matplotlib/pyplot/python
34,162,443
26
2015-12-08T17:39:39Z
34,162,641
36
2015-12-08T17:51:25Z
[ "python", "matplotlib", "plot", "visualization" ]
I'm learning to use `matplotlib` by studying examples, and a lot of examples seem to include a line like the following before creating a single plot... ``` fig, ax = plt.subplots() ``` Here are some examples... * [Modify tick label text](http://stackoverflow.com/questions/11244514/modify-tick-label-text) * <http://matplotlib.org/examples/pylab_examples/boxplot_demo2.html> I see this function used a lot, even though the example is only attempting to create a single chart. Is there some other advantage? The official demo for `subplots()` also uses `f, ax = subplots` when creating a single chart, and it only ever references ax after that. This is the code they use. ``` # Just a figure and one subplot f, ax = plt.subplots() ax.plot(x, y) ax.set_title('Simple plot') ```
`plt.subplots()` is a function that returns a tuple containing a figure and axes object(s). Thus when using `fig, ax = plt.subplots()` you unpack this tuple into the variables `fig` and `ax`. Having `fig` is useful if you want to change figure-level attributes or save the figure as an image file later (e.g. with `fig.savefig('yourfilename.png')`). You certainly don't have to use the returned figure object, but many people do use it later, so it's common to see. Also, all axes objects (the objects that have plotting methods) have a parent figure object anyway, thus: ``` fig, ax = plt.subplots() ``` is more concise than this: ``` fig = plt.figure() ax = fig.add_subplot(111) ```
Did something about `namedtuple` change in 3.5.1?
34,166,469
32
2015-12-08T21:35:39Z
34,166,547
8
2015-12-08T21:40:51Z
[ "python", "python-3.5", "namedtuple" ]
On Python 3.5.0: ``` >>> from collections import namedtuple >>> cluster = namedtuple('Cluster', ['a', 'b']) >>> c = cluster(a=4, b=9) >>> c Cluster(a=4, b=9) >>> vars(c) OrderedDict([('a', 4), ('b', 9)]) ``` On Python 3.5.1: ``` >>> from collections import namedtuple >>> cluster = namedtuple('Cluster', ['a', 'b']) >>> c = cluster(a=4, b=9) >>> c Cluster(a=4, b=9) >>> vars(c) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: vars() argument must have __dict__ attribute ``` Seems like something about `namedtuple` changed (or maybe it was something about `vars()`?). Was this intentional? Are we not supposed to use this pattern for converting named tuples into dictionaries anymore?
From the [docs](https://docs.python.org/3/library/collections.html#namedtuple-factory-function-for-tuples-with-named-fields) > Named tuple instances do not have per-instance dictionaries, so they are lightweight and require no more memory than regular tuples. The [docs](https://docs.python.org/3/library/collections.html#collections.somenamedtuple._asdict) (and `help(namedtuple)`) say to use `c._asdict()` to convert to a dict.
Did something about `namedtuple` change in 3.5.1?
34,166,469
32
2015-12-08T21:35:39Z
34,166,604
25
2015-12-08T21:45:24Z
[ "python", "python-3.5", "namedtuple" ]
On Python 3.5.0: ``` >>> from collections import namedtuple >>> cluster = namedtuple('Cluster', ['a', 'b']) >>> c = cluster(a=4, b=9) >>> c Cluster(a=4, b=9) >>> vars(c) OrderedDict([('a', 4), ('b', 9)]) ``` On Python 3.5.1: ``` >>> from collections import namedtuple >>> cluster = namedtuple('Cluster', ['a', 'b']) >>> c = cluster(a=4, b=9) >>> c Cluster(a=4, b=9) >>> vars(c) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: vars() argument must have __dict__ attribute ``` Seems like something about `namedtuple` changed (or maybe it was something about `vars()`?). Was this intentional? Are we not supposed to use this pattern for converting named tuples into dictionaries anymore?
Per [Python bug #24931](http://bugs.python.org/issue24931): > [`__dict__`] disappeared because it was fundamentally broken in Python 3, so it had to be removed. Providing `__dict__` broke subclassing and produced odd behaviors. [Revision that made the change](https://hg.python.org/cpython/rev/fa3ac31cfa44) Specifically, subclasses without `__slots__` defined would behave weirdly: ``` >>> Cluster = namedtuple('Cluster', 'x y') >>> class Cluster2(Cluster): pass >>> vars(Cluster(1,2)) OrderedDict([('x', 1), ('y', 2)]) >>> vars(Cluster2(1,2)) {} ``` Use `._asdict()`.
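A minimal stdlib sketch of the supported replacement:

```python
from collections import namedtuple

Cluster = namedtuple('Cluster', ['a', 'b'])
c = Cluster(a=4, b=9)

# _asdict() is the documented way to get a mapping; vars(c) no longer works
d = dict(c._asdict())
print(d)  # {'a': 4, 'b': 9}
```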
Converting String of a tuple to a Tuple
34,166,703
3
2015-12-08T21:51:28Z
34,166,726
10
2015-12-08T21:52:53Z
[ "python", "arrays", "string", "list", "tuples" ]
If I have a string that looks like a tuple, how can I make it into a tuple? ``` s = '(((3,),(4,2),(2,)),((1,),(2,4),(2,)))' ``` and I want to make it into a tuple that contains other tuples. ``` t = tuple((((3,),(4,2),(2,)),((1,),(2,4),(2,)))) ``` Calling `tuple()` on the string doesn't work because it treats every character, even each `(`, as an item in the tuple.
You need to use [`ast.literal_eval`](https://docs.python.org/2/library/ast.html#ast.literal_eval): ``` from ast import literal_eval s = '(((3,),(4,2),(2,)),((1,),(2,4),(2,)))' t = literal_eval(s) print(t) print(type(t)) (((3,), (4, 2), (2,)), ((1,), (2, 4), (2,))) <class 'tuple'> ```
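A sketch showing the round-trip, plus the safety property that makes `literal_eval` preferable to plain `eval` here:

```python
from ast import literal_eval

t = literal_eval('(((3,),(4,2),(2,)),((1,),(2,4),(2,)))')
print(t[0][1])          # (4, 2) -- real nested tuples, not characters

# Unlike eval(), literal_eval rejects anything that is not a plain literal:
try:
    literal_eval('__import__("os").getcwd()')
    rejected = False
except ValueError:
    rejected = True
print(rejected)         # True
```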
Sort dictionary by multiple values
34,170,515
3
2015-12-09T04:04:12Z
34,170,573
7
2015-12-09T04:11:16Z
[ "python", "sorting", "dictionary" ]
I have the dictionary `{'Bill': 4, 'Alex' : 4, 'Bob' : 3, "Charles": 7}` I need to sort this dictionary first numerically, then within that, alphabetically. If 2 items have the same number key, they need to be sorted alphabetically. The output of this should be `Bob, Alex, Bill, Charles` I tried using lambda, list comprehension, etc but I can't seem to get them to sort correctly.
Using [`sorted`](https://docs.python.org/3/library/functions.html#sorted) with key function (order by value (`d[k]`) first, then key `k`): ``` >>> d = {'Bill': 4, 'Alex' : 4, 'Bob' : 3, "Charles": 7} >>> sorted(d, key=lambda k: (d[k], k)) ['Bob', 'Alex', 'Bill', 'Charles'] ```
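The same idea, runnable, with a note on mixing sort directions (variable names are illustrative):

```python
d = {'Bill': 4, 'Alex': 4, 'Bob': 3, 'Charles': 7}

# sort by value first, then alphabetically as the tie-breaker
order = sorted(d, key=lambda k: (d[k], k))
print(order)  # ['Bob', 'Alex', 'Bill', 'Charles']

# negate the numeric part to get descending values with ascending names
reverse_order = sorted(d, key=lambda k: (-d[k], k))
print(reverse_order)  # ['Charles', 'Alex', 'Bill', 'Bob']
```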
Normal equation and Numpy 'least-squares', 'solve' methods difference in regression?
34,170,618
13
2015-12-09T04:16:50Z
34,171,374
11
2015-12-09T05:29:21Z
[ "python", "numpy", "machine-learning", "linear-algebra", "linear-regression" ]
I am doing linear regression with multiple variables/features. I try to get thetas (coefficients) by using **normal equation** method (that uses matrix inverse), Numpy least-squares [**numpy.linalg.lstsq**](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html) tool and [**np.linalg.solve**](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.solve.html) tool. In my data I have **n = 143** features and **m = 13000** training examples. --- For **normal equation** method with **regularization** I use this formula: > [![enter image description here](http://i.stack.imgur.com/JQlsq.jpg)](http://i.stack.imgur.com/JQlsq.jpg) > *Sources*: > > * [*Regularization (Andrew Ng, Stanford)*](http://csns.calstatela.edu/download?fileId=4952741) > * [*Normal equations (Andrew Ng, Stanford*)](http://openclassroom.stanford.edu/MainFolder/DocumentPage.php?course=MachineLearning&doc=exercises/ex5/ex5.html) *Regularization is used to solve the potential problem of matrix non-invertibility (`XtX` matrix may become singular/non-invertible)* --- **Data preparation code:** ``` import pandas as pd import numpy as np path = 'DB2.csv' data = pd.read_csv(path, header=None, delimiter=";") data.insert(0, 'Ones', 1) cols = data.shape[1] X = data.iloc[:,0:cols-1] y = data.iloc[:,cols-1:cols] IdentitySize = X.shape[1] IdentityMatrix= np.zeros((IdentitySize, IdentitySize)) np.fill_diagonal(IdentityMatrix, 1) ``` --- For **least squares** method I use Numpy's *numpy.linalg.lstsq*. Here is Pyhton code: ``` lamb = 1 th = np.linalg.lstsq(X.T.dot(X) + lamb * IdentityMatrix, X.T.dot(y))[0] ``` Also I used **np.linalg.solve** tool of numpy: ``` lamb = 1 XtX_lamb = X.T.dot(X) + lamb * IdentityMatrix XtY = X.T.dot(y) x = np.linalg.solve(XtX_lamb, XtY); ``` For **normal equation** I use: ``` lamb = 1 xTx = X.T.dot(X) + lamb * IdentityMatrix XtX = np.linalg.inv(xTx) XtX_xT = XtX.dot(X.T) theta = XtX_xT.dot(y) ``` --- In all methods I used regularization. 
Here is results (theta coefficients) to see difference between these three approaches: ``` Normal equation: np.linalg.lstsq np.linalg.solve [-27551.99918303] [-27551.95276154] [-27551.9991855] [-940.27518383] [-940.27520138] [-940.27518383] [-9332.54653964] [-9332.55448263] [-9332.54654461] [-3149.02902071] [-3149.03496582] [-3149.02900965] [-1863.25125909] [-1863.2631435] [-1863.25126344] [-2779.91105618] [-2779.92175308] [-2779.91105347] [-1226.60014026] [-1226.61033117] [-1226.60014192] [-920.73334259] [-920.74331432] [-920.73334194] [-6278.44238081] [-6278.45496955] [-6278.44237847] [-2001.48544938] [-2001.49566981] [-2001.48545349] [-715.79204971] [-715.79664124] [-715.79204921] [ 4039.38847472] [ 4039.38302499] [ 4039.38847515] [-2362.54853195] [-2362.55280478] [-2362.54853139] [-12730.8039209] [-12730.80866036] [-12730.80392076] [-24872.79868125] [-24872.80203459] [-24872.79867954] [-3402.50791863] [-3402.5140501] [-3402.50793382] [ 253.47894001] [ 253.47177732] [ 253.47892472] [-5998.2045186] [-5998.20513905] [-5998.2045184] [ 198.40560401] [ 198.4049081] [ 198.4056042] [ 4368.97581411] [ 4368.97175688] [ 4368.97581426] [-2885.68026222] [-2885.68154407] [-2885.68026205] [ 1218.76602731] [ 1218.76562838] [ 1218.7660275] [-1423.73583813] [-1423.7369068] [-1423.73583793] [ 173.19125007] [ 173.19086525] [ 173.19125024] [-3560.81709538] [-3560.81650156] [-3560.8170952] [-142.68135768] [-142.68162508] [-142.6813575] [-2010.89489111] [-2010.89601322] [-2010.89489092] [-4463.64701238] [-4463.64742877] [-4463.64701219] [ 17074.62997704] [ 17074.62974609] [ 17074.62997723] [ 7917.75662561] [ 7917.75682048] [ 7917.75662578] [-4234.16758492] [-4234.16847544] [-4234.16758474] [-5500.10566329] [-5500.106558] [-5500.10566309] [-5997.79002683] [-5997.7904842] [-5997.79002634] [ 1376.42726683] [ 1376.42629704] [ 1376.42726705] [ 6056.87496151] [ 6056.87452659] [ 6056.87496175] [ 8149.0123667] [ 8149.01209157] [ 8149.01236827] [-7273.3450484] [-7273.34480382] 
[-7273.34504827] [-2010.61773247] [-2010.61839251] [-2010.61773225] [-7917.81185096] [-7917.81223606] [-7917.81185084] [ 8247.92773739] [ 8247.92774315] [ 8247.92773722] [ 1267.25067823] [ 1267.24677734] [ 1267.25067832] [ 2557.6208133] [ 2557.62126916] [ 2557.62081337] [-5678.53744654] [-5678.53820798] [-5678.53744647] [ 3406.41697822] [ 3406.42040997] [ 3406.41697836] [-8371.23657044] [-8371.2361594] [-8371.23657035] [ 15010.61728285] [ 15010.61598236] [ 15010.61728304] [ 11006.21920273] [ 11006.21711213] [ 11006.21920284] [-5930.93274062] [-5930.93237071] [-5930.93274048] [-5232.84459862] [-5232.84557665] [-5232.84459848] [ 3196.89304277] [ 3196.89414431] [ 3196.8930428] [ 15298.53309912] [ 15298.53496877] [ 15298.53309919] [ 4742.68631183] [ 4742.6862601] [ 4742.68631172] [ 4423.14798495] [ 4423.14765013] [ 4423.14798546] [-16153.50854089] [-16153.51038489] [-16153.50854123] [-22071.50792741] [-22071.49808389] [-22071.50792408] [-688.22903323] [-688.2310229] [-688.22904006] [-1060.88119863] [-1060.8829114] [-1060.88120546] [-101.75750066] [-101.75776411] [-101.75750831] [ 4106.77311898] [ 4106.77128502] [ 4106.77311218] [ 3482.99764601] [ 3482.99518758] [ 3482.99763924] [-1100.42290509] [-1100.42166312] [-1100.4229119] [ 20892.42685103] [ 20892.42487476] [ 20892.42684422] [-5007.54075789] [-5007.54265501] [-5007.54076473] [ 11111.83929421] [ 11111.83734144] [ 11111.83928704] [ 9488.57342568] [ 9488.57158677] [ 9488.57341883] [-2992.3070786] [-2992.29295891] [-2992.30708529] [ 17810.57005982] [ 17810.56651223] [ 17810.57005457] [-2154.47389712] [-2154.47504319] [-2154.47390285] [-5324.34206726] [-5324.33913623] [-5324.34207293] [-14981.89224345] [-14981.8965674] [-14981.89224973] [-29440.90545197] [-29440.90465897] [-29440.90545704] [-6925.31991443] [-6925.32123144] [-6925.31992383] [ 104.98071593] [ 104.97886085] [ 104.98071152] [-5184.94477582] [-5184.9447972] [-5184.94477792] [ 1555.54536625] [ 1555.54254362] [ 1555.5453638] [-402.62443474] [-402.62539068] 
[-402.62443718] [ 17746.15769322] [ 17746.15458093] [ 17746.15769074] [-5512.94925026] [-5512.94980649] [-5512.94925267] [-2202.8589276] [-2202.86226244] [-2202.85893056] [-5549.05250407] [-5549.05416936] [-5549.05250669] [-1675.87329493] [-1675.87995809] [-1675.87329255] [-5274.27756529] [-5274.28093377] [-5274.2775701] [-5424.10246845] [-5424.10658526] [-5424.10247326] [-1014.70864363] [-1014.71145066] [-1014.70864845] [ 12936.59360437] [ 12936.59168749] [ 12936.59359954] [ 2912.71566077] [ 2912.71282628] [ 2912.71565599] [ 6489.36648506] [ 6489.36538259] [ 6489.36648021] [ 12025.06991281] [ 12025.07040848] [ 12025.06990358] [ 17026.57841531] [ 17026.56827742] [ 17026.57841044] [ 2220.1852193] [ 2220.18531961] [ 2220.18521579] [-2886.39219026] [-2886.39015388] [-2886.39219394] [-18393.24573629] [-18393.25888463] [-18393.24573872] [-17591.33051471] [-17591.32838012] [-17591.33051834] [-3947.18545848] [-3947.17487999] [-3947.18546459] [ 7707.05472816] [ 7707.05577227] [ 7707.0547217] [ 4280.72039079] [ 4280.72338194] [ 4280.72038435] [-3137.48835901] [-3137.48480197] [-3137.48836531] [ 6693.47303443] [ 6693.46528167] [ 6693.47302811] [-13936.14265517] [-13936.14329336] [-13936.14267094] [ 2684.29594641] [ 2684.29859601] [ 2684.29594183] [-2193.61036078] [-2193.63086307] [-2193.610366] [-10139.10424848] [-10139.11905454] [-10139.10426049] [ 4475.11569903] [ 4475.12288711] [ 4475.11569421] [-3037.71857269] [-3037.72118246] [-3037.71857265] [-5538.71349798] [-5538.71654224] [-5538.71349794] [ 8008.38521357] [ 8008.39092739] [ 8008.38521361] [-1433.43859633] [-1433.44181824] [-1433.43859629] [ 4212.47144667] [ 4212.47368097] [ 4212.47144686] [ 19688.24263706] [ 19688.2451694] [ 19688.2426368] [ 104.13434091] [ 104.13434349] [ 104.13434091] [-654.02451175] [-654.02493111] [-654.02451174] [-2522.8642551] [-2522.88694451] [-2522.86424254] [-5011.20385919] [-5011.22742915] [-5011.20384655] [-13285.64644021] [-13285.66951459] [-13285.64642763] [-4254.86406891] 
[-4254.88695873] [-4254.86405637] [-2477.42063206] [-2477.43501057] [-2477.42061727] [ 0.] [ 1.23691279e-10] [ 0.] [-92.79470071] [-92.79467095] [-92.79470071] [ 2383.66211583] [ 2383.66209637] [ 2383.66211583] [-10725.22892185] [-10725.22889937] [-10725.22892185] [ 234.77560283] [ 234.77560254] [ 234.77560283] [ 4739.22119578] [ 4739.22121432] [ 4739.22119578] [ 43640.05854156] [ 43640.05848841] [ 43640.05854157] [ 2592.3866707] [ 2592.38671547] [ 2592.3866707] [-25130.02819215] [-25130.05501178] [-25130.02819515] [ 4966.82173096] [ 4966.7946407] [ 4966.82172795] [ 14232.97930665] [ 14232.9529959] [ 14232.97930363] [-21621.77202422] [-21621.79840459] [-21621.7720272] [ 9917.80960029] [ 9917.80960571] [ 9917.80960029] [ 1355.79191536] [ 1355.79198092] [ 1355.79191536] [-27218.44185748] [-27218.46880642] [-27218.44185719] [-27218.04184348] [-27218.06875423] [-27218.04184318] [ 23482.80743869] [ 23482.78043029] [ 23482.80743898] [ 3401.67707434] [ 3401.65134677] [ 3401.67707463] [ 3030.36383274] [ 3030.36384909] [ 3030.36383274] [-30590.61847724] [-30590.63933424] [-30590.61847706] [-28818.3942685] [-28818.41520495] [-28818.39426833] [-25115.73726772] [-25115.7580278] [-25115.73726753] [ 77174.61695995] [ 77174.59548773] [ 77174.61696016] [-20201.86613672] [-20201.88871113] [-20201.86613657] [ 51908.53292209] [ 51908.53446495] [ 51908.53292207] [ 7710.71327865] [ 7710.71324194] [ 7710.71327865] [-16206.9785119] [-16206.97851993] [-16206.9785119] ``` As you can see normal equation, least squares and np.linalg.solve tool methods give to some extent different results. The question is why these three approaches gives noticeably different results and which method gives **more efficient** and **more accurate** result? **Assumption:** Results of Normal equation method and results of **np.linalg.solve** are very close to each other. And results of **np.linalg.lstsq** differ from both of them. 
Since normal equation uses inverse we do not expect very accurate results of it and therefore results of np.linalg.solve tool also. Seem to be that better results are given by **np.linalg.lstsq**. --- *Note: Under accuracy I meant how close these method's solutions to real coefficients. So basically I wanted to know wich of these methods is closer to real model.* --- **Update:** As **Dave Hensley** mentioned: After the line `np.fill_diagonal(IdentityMatrix, 1)` this code `IdentityMatrix[0,0] = 0` should be added. --- ***DB2.csv*** is available on DropBox: [DB2.csv](https://www.dropbox.com/s/e3pd7fp0rfm1cfs/DB2.csv?dl=0) *Full Python code is available on DropBox: [Full code](https://www.dropbox.com/s/b0xq34fuxc51ffj/regression.py?dl=0)*
### Don't calculate matrix inverse to solve linear systems The professional algorithms don't solve for the matrix inverse. It's slow and introduces unnecessary error. It's not a disaster for small systems, but why do something suboptimal? Basically anytime you see the math written as: ``` x = A^-1 * b ``` you instead want: ``` x = np.linalg.solve(A, b) ``` In your case, you want something like: ``` XtX_lamb = X.T.dot(X) + lamb * IdentityMatrix XtY = X.T.dot(y) x = np.linalg.solve(XtX_lamb, XtY) ```
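A small self-check of the point above on synthetic data (sizes, seed, and the lambda value are arbitrary): on a well-conditioned regularized system all three routes agree to numerical precision, with `solve` avoiding the explicit inverse.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
b = rng.standard_normal(50)

AtA = A.T @ A + 1.0 * np.eye(5)   # regularized normal matrix (lambda = 1)
Atb = A.T @ b

x_solve = np.linalg.solve(AtA, Atb)                # preferred
x_inv   = np.linalg.inv(AtA) @ Atb                 # explicit inverse: avoid
x_lstsq = np.linalg.lstsq(AtA, Atb, rcond=None)[0]

print(np.allclose(x_solve, x_inv), np.allclose(x_solve, x_lstsq))
```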
Tuple unpacking order changes values assigned
34,171,348
43
2015-12-09T05:26:53Z
34,171,485
43
2015-12-09T05:39:37Z
[ "python", "list", "indexing", "tuple-packing" ]
I think the two are identical. ``` nums = [1, 2, 0] nums[nums[0]], nums[0] = nums[0], nums[nums[0]] print nums # [2, 1, 0] nums = [1, 2, 0] nums[0], nums[nums[0]] = nums[nums[0]], nums[0] print nums # [2, 2, 1] ``` But the results are different. Why are the results different? (why is the second one that result?)
## *Prerequisites* - 2 important Points * **Lists are mutable** The main part in lists is that lists are mutable. It means that the values of lists can be changed. This is one of the reason why you are facing the trouble. [Refer the docs for more info](https://docs.python.org/3/tutorial/introduction.html#lists) * **Order of Evaluation** The other part is that while unpacking a tuple, the evaluation starts from left to right. [Refer the docs for more info](https://docs.python.org/3/reference/expressions.html#evaluation-order) --- ## *Introduction* when you do `a,b = c,d` the values of `c` and `d` are first stored. Then starting from the left hand side, the value of `a` is first changed to `c` and then the value of `b` is changed to `d`. The catch here is that if there are any side effects to the location of `b` while changing the value of `a`, then `d` is assigned to the *later* `b`, which is the `b` affected by the side effect of `a`. --- ## *Use Case* Now coming to your problem In the first case, ``` nums = [1, 2, 0] nums[nums[0]], nums[0] = nums[0], nums[nums[0]] ``` `nums[0]` is initially `1` and `nums[nums[0]]` is `2` because it evaluates to `nums[1]`. Hence 1,2 is now stored into memory. Now tuple unpacking happens from left hand side, so ``` nums[nums[0]] = nums[1] = 1 # NO side Effect. nums[0] = 2 ``` hence `print nums` will print `[2, 1, 0]` However in this case ``` nums = [1, 2, 0] nums[0], nums[nums[0]] = nums[nums[0]], nums[0] ``` `nums[nums[0]], nums[0]` puts 2,1 on the stack just like the first case. However on the left hand side, that is `nums[0], nums[nums[0]]`, the changing of `nums[0]` has a side effect as it is used as the index in `nums[nums[0]]`. Thus ``` nums[0] = 2 nums[nums[0]] = nums[2] = 1 # NOTE THAT nums[0] HAS CHANGED ``` `nums[1]` remains unchanged at value `2`. hence `print nums` will print `[2, 2, 1]`
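Both cases from the walkthrough above, runnable side by side:

```python
nums = [1, 2, 0]
nums[nums[0]], nums[0] = nums[0], nums[nums[0]]
first = nums            # [2, 1, 0]

nums = [1, 2, 0]
nums[0], nums[nums[0]] = nums[nums[0]], nums[0]
second = nums           # [2, 2, 1]: nums[0] changed before nums[nums[0]] ran

print(first, second)
```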
Tuple unpacking order changes values assigned
34,171,348
43
2015-12-09T05:26:53Z
34,171,528
9
2015-12-09T05:43:21Z
[ "python", "list", "indexing", "tuple-packing" ]
I think the two are identical. ``` nums = [1, 2, 0] nums[nums[0]], nums[0] = nums[0], nums[nums[0]] print nums # [2, 1, 0] nums = [1, 2, 0] nums[0], nums[nums[0]] = nums[nums[0]], nums[0] print nums # [2, 2, 1] ``` But the results are different. Why are the results different? (why is the second one that result?)
This is because Python performs the assignments from left to right. So in the following code: ``` nums = [1, 2, 0] nums[nums[0]], nums[0] = nums[0], nums[nums[0]] ``` the right-hand side tuple `(1, 2)` is built first. Then `1` is assigned to `nums[nums[0]]`, i.e. `nums[1]`, and since lists are mutable objects the list is changed in place: ``` [1, 1, 0] ``` Then `2` (the value of `nums[nums[0]]` captured before the assignments) is assigned to `nums[0]`, giving: ``` nums = [2, 1, 0] ``` The same reasoning applies to the second case. Note that the important point here is that list objects are mutable: when you change one part-way through the statement, the change is made in place and affects the rest of the statement. [Evaluation order](https://docs.python.org/2/reference/expressions.html#evaluation-order) > Python evaluates expressions from left to right. Notice that while evaluating an assignment, the right-hand side is evaluated before the left-hand side.
Tuple unpacking order changes values assigned
34,171,348
43
2015-12-09T05:26:53Z
34,171,719
18
2015-12-09T05:58:06Z
[ "python", "list", "indexing", "tuple-packing" ]
I think the two are identical. ``` nums = [1, 2, 0] nums[nums[0]], nums[0] = nums[0], nums[nums[0]] print nums # [2, 1, 0] nums = [1, 2, 0] nums[0], nums[nums[0]] = nums[nums[0]], nums[0] print nums # [2, 2, 1] ``` But the results are different. Why are the results different? (why is the second one that result?)
You can define a class to track the process: ``` class MyList(list): def __getitem__(self, key): print('get ' + str(key)) return super(MyList, self).__getitem__(key) def __setitem__(self, key, value): print('set ' + str(key) + ', ' + str(value)) return super(MyList, self).__setitem__(key, value) ``` For the first method: ``` nums = MyList([1, 2, 0]) nums[nums[0]], nums[0] = nums[0], nums[nums[0]] ``` the output is: ``` get 0 get 0 get 1 get 0 set 1, 1 set 0, 2 ``` While the second method: ``` nums = MyList([1, 2, 0]) nums[0], nums[nums[0]] = nums[nums[0]], nums[0] ``` the output is: ``` get 0 get 1 get 0 set 0, 2 get 0 set 2, 1 ``` In both methods, the first three lines are related to tuple generation while the last three lines are related to assignments. Right hand side tuple of the first method is `(1, 2)` and the second method is `(2, 1)`. In the assignment stage, first method get `nums[0]` which is `1`, and set `nums[1] = 1`, then `nums[0] = 2`, second method assign `nums[0] = 2`, then get `nums[0]` which is `2`, and finally set `nums[2] = 1`.
Behaviour of all() in python
34,176,268
6
2015-12-09T10:20:29Z
34,176,372
10
2015-12-09T10:25:01Z
[ "python", "built-in" ]
``` >>all([]) True >>all([[]]) False >>all([[[]]]) True >>all([[[[]]]]) True ``` The documentation of all() reads that it returns True if all the elements are True, or for an empty list. Why does all(**[ [ ] ]**) evaluate to False? Because **[ ]** is a member of **[ [ ] ]**, it should evaluate to True as well.
The docstring for `all` is as follows: > `all(iterable) -> bool` > > Return True if bool(x) is True for all values x in the iterable. > If the iterable is empty, return True. This explains the behaviour. `[]` is an empty iterable, so `all` returns True. But `[[]]` is *not* an empty iterable; it is an iterable containing one item, an empty list. Calling `bool` on that empty list returns False; so the entire expression is False. The other examples return True because the single item is no longer empty, so it is boolean True.
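The truth values involved can be checked directly:

```python
print(bool([]))     # False: an empty list is falsy
print(all([]))      # True: empty iterable, vacuously true
print(all([[]]))    # False: the single item [] is falsy
print(all([[[]]]))  # True: the single item [[]] is non-empty, hence truthy
```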
How to solve python Celery error when using chain EncodeError(RuntimeError('maximum recursion depth exceeded while getting the str of an object))
34,177,131
6
2015-12-09T11:01:33Z
34,179,519
8
2015-12-09T13:02:56Z
[ "python", "celery" ]
How do you run a chain task in a for loop when the signatures are generated dynamically? The following approach was used because defining the tester task as: ``` @task def tester(items): ch = [] for i in items: ch.append(test.si(i)) return chain(ch)() ``` would raise an error of `EncodeError(RuntimeError('maximum recursion depth exceeded while getting the str of an object',),)` if the chains are too large, which is OS or system specific. > E.g. calling the task as follows ``` item = range(1,40000) #40,000 raises exception but #3,000 doesn't after setting sys.setrecursionlimit(15000) tester.delay(item) ``` raises the `EncodeError`. In the past I used to get this error when the length of item was 5000, i.e. range(1,5000), which I fixed by importing `sys` and calling `sys.setrecursionlimit(15000)` at the top of the `module`. But there is a limitation to this, so I decided to refactor a little and use the approach below, that is, to split the list and process it chunk after chunk. The problem is it doesn't seem to continue after 2000, i.e. test prints 2000 to the screen. ``` @task def test(i): print i @task def tester(items): ch = [] for i in items: ch.append(test.si(i)) counter = 1 if len(ch) > 2000: ch_length = len(ch) #4k while ch_length >= 2000: do = ch[0:2000] # 2k print "Doing...NO#...{}".format(counter) ret = chain(do)() #doing 2k print "Ending...NO#...{}".format(counter) ch = ch[2000:] #take all left i.e 2k ch_length = len(ch) #2k if ch_length <= 2000 and ch_length > 0: print "DOING LAST {}".format(counter) ret = chain(ch)() print "ENDING LAST {}".format(counter) break else: break counter +=1 else: ret = chain(ch)() return ret ``` According to the celery documentation, a chain basically executes the tasks within it one after the other. I expect the while loop to continue only after the first iteration is completed in the chain before proceeding. I hope someone has experience with this and could help. Merry Xmas in advance!
It seems you have hit this issue: <https://github.com/celery/celery/issues/1078> Also, calling `chain(ch)()` seems to execute it asynchronously. Try explicitly calling `apply()` on it. ``` @app.task def tester(items): ch = [] for i in items: ch.append(test.si(i)) PSIZE = 1000 for cl in range(0, len(ch), PSIZE): print("cl: %s" % cl) chain(ch[cl:cl + PSIZE]).apply() print("cl: %s END" % cl) return None ```
Variable in os.system
34,185,488
2
2015-12-09T17:43:47Z
34,185,515
7
2015-12-09T17:45:39Z
[ "python" ]
I am using `os.system` method in Python to open a file in Linux. But I don't know how to pass the variable (a) inside the os.system command ``` import os a=4 os.system('gedit +a test.txt') ``` How can i pass the variable as an integer inside the command?
``` os.system('gedit +%d test.txt' % (a,)) ``` It is recommended to use [subprocess](https://docs.python.org/3.4/library/subprocess.html?highlight=subprocess) instead of `os.system`: ``` subprocess.call(['gedit', '+%d' % (a,), 'test.txt']) ```
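To see how the argument list is built without actually launching an editor, the call can be commented out and the command inspected first (the `gedit` command itself is only illustrative):

```python
a = 4
cmd = ['gedit', '+%d' % (a,), 'test.txt']
print(cmd)  # ['gedit', '+4', 'test.txt']
# subprocess.call(cmd) would then open test.txt at line 4
```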
Can you fool isatty AND log stdout and stderr separately?
34,186,035
26
2015-12-09T18:12:45Z
34,189,559
17
2015-12-09T21:43:28Z
[ "python", "linux", "unix", "tty", "pty" ]
## Problem So you want to log the stdout and stderr (separately) of a process or subprocess, without the output being different from what you'd see in the terminal if you weren't logging anything. Seems pretty simple no? Well unfortunately, it appears that it may not be possible to write a general solution for this problem, that works on any given process... ## Background Pipe redirection is one method to separate stdout and stderr, allowing you to log them individually. Unfortunately, if you change the stdout/err to a pipe, the process may detect the pipe is not a tty (because it has no width/height, baud rate, etc) and may change its behaviour accordingly. Why change the behaviour? Well, some developers make use of features of a terminal which don't make sense if you are writing out to a file. For example, loading bars often require the terminal cursor to be moved back to the beginning of the line and the previous loading bar to be overwritten with a bar of a new length. Also colour and font weight can be displayed in a terminal, but in a flat ASCII file they can not. If you were to write such a program's stdout directly to a file, that output would contain all the terminal ANSI escape codes, rather than properly formatted output. The developer therefore implements some sort of "isatty" check before writing anything to the stdout/err, so it can give a simpler output for files if that check returns false. The usual solution here is to trick such programs into thinking the pipes are actually ttys by using a pty - a bidirectional pipe that also has width, height, etc. You redirect all inputs/outputs of the process to this pty, and that tricks the process into thinking its talking to a real terminal (and you can log it directly to a file). The only problem is, that by using a single pty for stdout and stderr, we can now no longer differentiate between the two. 
So you might want to try a different pty for each pipe - one for the stdin, one for the stdout, and one for the stderr. While this will work 50% of the time, many processes unfortunately do additional redirection checks that make sure that the output path of the stdout and stderr (/dev/tty000x) are the same. If they are not, there must be redirection, thus they give you the same behaviour as if you had piped the stderr and stdout without a pty. You might think this over-the-top checking for redirection is uncommon, but unfortunately it is actually quite prevalent because a lot of programs re-use other code for checking, like this bit of code found in OSX: > <http://src.gnu-darwin.org/src/bin/stty/util.c> ## Challenge I think the best way to find a solution is in the form of a challenge. If anyone can run the following script (ideally via Python, but at this point I'll take anything) in such a way that the stdout and stderr is logged separately, AND you managed to fool it into thinking it was executed via a tty, you solve the problem :) ``` #!/usr/bin/python import os import sys if sys.stdout.isatty() and sys.stderr.isatty() and os.ttyname(sys.stdout.fileno()) == os.ttyname(sys.stderr.fileno()): sys.stdout.write("This is a") sys.stderr.write("real tty :)") else: sys.stdout.write("You cant fool me!") sys.stdout.flush() sys.stderr.flush() ``` Note that a solution should really work for **any** process, not just this code specifically. Overwriting the sys/os module and using LD\_PRELOAD is very interesting ways to beat the challenge, but they don't solve the heart of the problem :)
Like this? ``` % ./challenge.py >stdout 2>stderr % cat stdout This is a real tty :) standard output data % cat stderr standard error data ``` Because I cheated a little bit. ;-) ``` % echo $LD_PRELOAD /home/karol/preload.so ``` Like so... ``` % gcc preload.c -shared -o preload.so -fPIC ``` I feel dirty now, but it was fun. :D ``` % cat preload.c #include <stdlib.h> int isatty(int fd) { if(fd == 2 || fd == 1) { return 1; } return 0; } char* ttyname(int fd) { static char* fake_name = "/dev/fake"; if(fd == 2 || fd == 1) { return fake_name; } return NULL; } ```
Fast random weighted selection across all rows of a stochastic matrix
34,187,130
6
2015-12-09T19:14:58Z
34,190,035
10
2015-12-09T22:16:07Z
[ "python", "numpy", "matrix", "vectorization", "random-sample" ]
`numpy.random.choice` allows for weighted selection from a vector, i.e. ``` arr = numpy.array([1, 2, 3]) weights = numpy.array([0.2, 0.5, 0.3]) choice = numpy.random.choice(arr, p=weights) ``` selects 1 with probability 0.2, 2 with probability 0.5, and 3 with probability 0.3. What if we wanted to do this quickly in a vectorized fashion for a 2D array (matrix) for which each of the rows are a vector of probabilities? That is, we want a vector of choices from a stochastic matrix? This is the super slow way: ``` import numpy as np m = 10 n = 100 # Or some very large number items = np.arange(m) prob_weights = np.random.rand(m, n) prob_matrix = prob_weights / prob_weights.sum(axis=0, keepdims=True) choices = np.zeros((n,)) # This is slow, because of the loop in Python for i in range(n): choices[i] = np.random.choice(items, p=prob_matrix[:,i]) ``` `print(choices)`: ``` array([ 4., 7., 8., 1., 0., 4., 3., 7., 1., 5., 7., 5., 3., 1., 9., 1., 1., 5., 9., 8., 2., 3., 2., 6., 4., 3., 8., 4., 1., 1., 4., 0., 1., 8., 5., 3., 9., 9., 6., 5., 4., 8., 4., 2., 4., 0., 3., 1., 2., 5., 9., 3., 9., 9., 7., 9., 3., 9., 4., 8., 8., 7., 6., 4., 6., 7., 9., 5., 0., 6., 1., 3., 3., 2., 4., 7., 0., 6., 3., 5., 8., 0., 8., 3., 4., 5., 2., 2., 1., 1., 9., 9., 4., 3., 3., 2., 8., 0., 6., 1.]) ``` [This post](http://stackoverflow.com/q/24140114/586086) suggests that `cumsum` and `bisect` could be a potential approach, and is fast. But while `numpy.cumsum(arr, axis=1)` can do this along one axis of a numpy array, the [`bisect.bisect`](https://docs.python.org/2/library/bisect.html) function only works on a single array at a time. Similarly, [`numpy.searchsorted`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.searchsorted.html) only works on 1D arrays as well. Is there a quick way to do this using only vectorized operations?
Here's a fully vectorized version that's pretty fast: ``` def vectorized(prob_matrix, items): s = prob_matrix.cumsum(axis=0) r = np.random.rand(prob_matrix.shape[1]) k = (s < r).sum(axis=0) return items[k] ``` *In theory*, `searchsorted` is the right function to use for looking up the random value in the cumulatively summed probabilities, but with `m` being relatively small, `k = (s < r).sum(axis=0)` ends up being much faster. Its time complexity is O(m), while the `searchsorted` method is O(log(m)), but that will only matter for much larger `m`. *Also*, `cumsum` is O(m), so both `vectorized` and @perimosocordiae's `improved` are O(m). (If your `m` is, in fact, much larger, you'll have to run some tests to see how large `m` can be before this method is slower.) Here's the timing I get with `m = 10` and `n = 10000` (using the functions `original` and `improved` from @perimosocordiae's answer): ``` In [115]: %timeit original(prob_matrix, items) 1 loops, best of 3: 270 ms per loop In [116]: %timeit improved(prob_matrix, items) 10 loops, best of 3: 24.9 ms per loop In [117]: %timeit vectorized(prob_matrix, items) 1000 loops, best of 3: 1 ms per loop ``` The full script where the functions are defined is: ``` import numpy as np def improved(prob_matrix, items): # transpose here for better data locality later cdf = np.cumsum(prob_matrix.T, axis=1) # random numbers are expensive, so we'll get all of them at once ridx = np.random.random(size=n) # the one loop we can't avoid, made as simple as possible idx = np.zeros(n, dtype=int) for i, r in enumerate(ridx): idx[i] = np.searchsorted(cdf[i], r) # fancy indexing all at once is faster than indexing in a loop return items[idx] def original(prob_matrix, items): choices = np.zeros((n,)) # This is slow, because of the loop in Python for i in range(n): choices[i] = np.random.choice(items, p=prob_matrix[:,i]) return choices def vectorized(prob_matrix, items): s = prob_matrix.cumsum(axis=0) r = np.random.rand(prob_matrix.shape[1]) k = (s < r).sum(axis=0) return items[k] m = 10 n = 10000 # Or some very large number items = np.arange(m) prob_weights = np.random.rand(m, n) prob_matrix = prob_weights / prob_weights.sum(axis=0, keepdims=True) ```
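As an extra sanity check (not part of the original benchmark), the vectorized sampler can be verified against a known distribution: every column below uses the weights [0.2, 0.5, 0.3], so the empirical frequencies should come out close to those values.

```python
import numpy as np

def vectorized(prob_matrix, items):
    s = prob_matrix.cumsum(axis=0)
    r = np.random.rand(prob_matrix.shape[1])
    k = (s < r).sum(axis=0)
    return items[k]

np.random.seed(0)
m, n = 3, 200000
items = np.arange(m)
# Every column carries the same distribution [0.2, 0.5, 0.3]
prob_matrix = np.tile(np.array([[0.2], [0.5], [0.3]]), (1, n))
draws = vectorized(prob_matrix, items)
freqs = np.bincount(draws, minlength=m) / float(n)
print(freqs)  # roughly [0.2, 0.5, 0.3]
```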
Django 1.9: Should I avoid importing models during `django.setup()`?
34,191,392
3
2015-12-10T00:03:13Z
34,191,491
12
2015-12-10T00:12:32Z
[ "python", "django", "django-models", "django-1.9" ]
Porting my app to django 1.9, I got the scary `django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet` Basically my stacktrace is: ``` manage.py execute_from_command_line(sys.argv) django/core/management:352, in execute_from_command_line utility.execute() django/core/management/__init__.py:326, in execute django.setup() django/__init__.py:18, in setup apps.populate(settings.INSTALLED_APPS) django/apps/registry.py:85, in populate app_config = AppConfig.create(entry) django/apps/config.py:90, in create module = import_module(entry) python2.7/importlib/__init__.py:37, in import_module __import__(name) myapp/mylib/__init__.py:52, in <module> from django.contrib.contenttypes.models import ContentType #<= The important part django/contrib/contenttypes/models.py:159, in <module> class ContentType(models.Model): django/db/models/base.py:94, in __new__ app_config = apps.get_containing_app_config(module) django/apps/registry.p:239, in get_containing_app_config self.check_apps_ready() django/apps/registry.py:124, in check_apps_ready raise AppRegistryNotReady("Apps aren't loaded yet.") django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet. ``` My main question here : **Should I import my models in the `__init__.py` of my django apps ?** It seems to trigger the `django.models.ModelBase` metaclass that check if the app is ready before creating the model.
> Should I import my models in the `__init__.py` of my django apps ? No, you **must** not import any model in the `__init__.py` file of any installed app. This is no longer possible in 1.9. From the [release notes](https://docs.djangoproject.com/en/1.9/releases/1.9/#features-removed-in-1-9): > All models need to be defined inside an installed application or > declare an explicit app\_label. Furthermore, it isn’t possible to > import them before their application is loaded. **In particular, it > isn’t possible to import models inside the root package of an > application.**
How to loop through a list inside of a loop
34,191,396
3
2015-12-10T00:03:36Z
34,191,474
7
2015-12-10T00:11:21Z
[ "python", "python-3.x" ]
So I have two lists, a **Header list** and a **Client list**: * **Header list** contains headers such as `First Name` or `Phone Number`. * **Client list** contains the details for a particular client such as their first name or phone number. I'm trying to print one piece of each list at a time. For example: ``` First Name: Joe Phone Number: 911 ``` Right now I have a loop that does something close to what I desire ``` header_list = ['First Name: ', 'Last Name: ', 'Phone: ', 'City: ', 'State: ', 'Zip: '] for elem in header_list: print(elem) for client in client_list[0]: print (client) break ``` This gives output like ``` First Name: Joe Last Name: Joe Phone Number: Joe ``` The problem with this loop is that it prints out all the headers correctly but only prints off the first item in `client_list[0]`, if I remove the break then it prints out everything in `client_list[0]`. How would I loop through `client_list[0]` getting the first then the second etc on through the list?
You can iterate over the header and value at the same time with `zip`: ``` header_list = ['First Name: ', 'Last Name: ', 'Phone: ', 'City: ', 'State: ', 'Zip: '] client_list = ['Joe', 'Somebody', '911'] for head, entry in zip(header_list, client_list): print(head, entry) ``` output: ``` First Name: Joe Last Name: Somebody Phone: 911 ``` **Note**: The shorter list determines how many iterations you get. Longer client list: ``` header_list = ['First Name:', 'Last Name:', 'Phone:', 'City:', 'State:', 'Zip:'] client_list = ['Joe', 'Somebody', '911', 'Somewhere', 'AA', '012345'] for head, entry in zip(header_list, client_list): print(head, entry) ``` prints: ``` First Name: Joe Last Name: Somebody Phone: 911 City: Somewhere State: AA Zip: 012345 ``` Side note: No need to pad your strings in `header` with space, `print` will add one for you.
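If the header list may be longer than the client list and you want every header printed anyway, `itertools` can pad the shorter list (on Python 2 the function is spelled `izip_longest`):

```python
from itertools import zip_longest  # izip_longest on Python 2

header_list = ['First Name:', 'Last Name:', 'Phone:', 'City:', 'State:', 'Zip:']
client_list = ['Joe', 'Somebody', '911']

# Missing client entries are filled with an empty string
for head, entry in zip_longest(header_list, client_list, fillvalue=''):
    print(head, entry)
```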
Best way to flatten a 2D tensor containing a vector in TensorFlow?
34,194,151
8
2015-12-10T05:12:12Z
34,199,902
10
2015-12-10T10:47:47Z
[ "python", "tensorflow" ]
What is the most efficient way to flatten a 2D tensor which is actually a horizontal or vertical vector into a 1D tensor? Is there a difference in terms of performance between: ``` tf.reshape(w, [-1]) ``` and ``` tf.squeeze(w) ``` ?
Both [`tf.reshape(w, [-1])`](https://www.tensorflow.org/versions/master/api_docs/python/array_ops.html#reshape) and [`tf.squeeze(w)`](https://www.tensorflow.org/versions/master/api_docs/python/array_ops.html#squeeze) are "cheap" in that they operate only on the metadata (i.e. the shape) of the given tensor, and don't modify the data itself. Of the two `tf.reshape()` has slightly [simpler logic internally](https://github.com/tensorflow/tensorflow/blob/8242b4dd1b36440e191fef8a07b6f37d8bcee60d/tensorflow/core/kernels/reshape_op.h#L34), but the performance of the two should be indistinguishable.
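The "metadata-only" behaviour can be illustrated with NumPy (an analogy, not TensorFlow itself): reshaping produces a view onto the same buffer, so no data is copied.

```python
import numpy as np

w = np.zeros((5, 1))
flat = w.reshape(-1)   # only the shape metadata changes; the buffer is shared
flat[0] = 7.0
print(w[0, 0])         # 7.0 -> writing through the view is visible in w
print(flat.shape)      # (5,)
```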
sorting list of list in python
34,197,047
5
2015-12-10T08:34:19Z
34,197,113
10
2015-12-10T08:38:40Z
[ "python", "string", "python-2.7", "sorting", "python-2.x" ]
I have a list in python like - ``` [['C'], ['B'], ['A'], ['C', 'B'], ['B', 'A'], ['A', 'C']] ``` I want to sort it like follows - ``` [['A'], ['B'], ['C'], ['A', 'B'], ['A', 'C'], ['B', 'C']] ```
First sort the individual items in the lists and then sort the sorted lists based on the length and then the actual elements itself, like this ``` >>> data = [['C'], ['B'], ['A'], ['C', 'B'], ['B', 'A'], ['A', 'C']] >>> sorted((sorted(item) for item in data), key=lambda x: (len(x), x)) [['A'], ['B'], ['C'], ['A', 'B'], ['A', 'C'], ['B', 'C']] ``` This works because, list of strings will be sorted lexicographically, by default. In your case, when the internal lists are sorted, the outer lists are first sorted based on the length of the list and if they are the same then the actual elements of the string itself will be used for the comparison. --- This can be understood step-by-step. The first individual elements sorting results in this ``` >>> [sorted(item) for item in data] [['C'], ['B'], ['A'], ['B', 'C'], ['A', 'B'], ['A', 'C']] ``` Now, we need to sort this based on the length in ascending order first, and then the elements also should be sorted. So, we pass a custom function to the outer sorting function, `lambda x: (len(x), x)`.
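For contrast, dropping the length-first key and relying on the default lexicographic order interleaves the different lengths:

```python
data = [['C'], ['B'], ['A'], ['C', 'B'], ['B', 'A'], ['A', 'C']]
inner = [sorted(item) for item in data]
# Default list comparison is purely lexicographic, so short and long
# lists end up mixed together:
print(sorted(inner))
# [['A'], ['A', 'B'], ['A', 'C'], ['B'], ['B', 'C'], ['C']]
```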
Javascript is giving a different answer to same algorithm in Python
34,197,424
9
2015-12-10T08:55:57Z
34,197,812
13
2015-12-10T09:17:53Z
[ "javascript", "python", "algorithm", "dynamic-programming", "rosalind" ]
I'm working on the Rosalind problem [Mortal Fibonacci Rabbits](http://rosalind.info/problems/fibd/) and the website keeps telling me my answer is wrong when I use my algorithm written JavaScript. When I use the same algorithm in Python I get a different (and correct) answer. The inconsistency only happens when the result gets large. For example `fibd(90, 19)` returns `2870048561233730600` in JavaScript but in Python I get `2870048561233731259`. Is there something about numbers in JavaScript that give me a different answer or am making a subtle mistake in my JavaScript code? The JavaScript solution: ``` function fibd(n, m) { // Create an array of length m and set all elements to 0 var rp = new Array(m); rp = rp.map(function(e) { return 0; }); rp[0] = 1; for (var i = 1; i < n; i++) { // prepend the sum of all elements from 1 to the end of the array rp.splice(0, 0, rp.reduce(function (e, s) { return s + e; }) - rp[0]); // Remove the final element rp.pop(); } // Sum up all the elements return rp.reduce(function (e, s) { return s + e; }); } ``` The Python solution: ``` def fibd(n, m): # Create an array of length m and set all elements to 0 rp = [0] * m rp[0] = 1 for i in range(n-1): # The sum of all elements from 1 the end and dropping the final element rp = [sum(rp[1:])] + rp[:-1] return sum(rp) ```
I think Javascript only has a "Number" datatype, and this is actually an IEEE double under the hood. 2,870,048,561,233,730,600 is too large to hold precisely in an IEEE double, so it is approximated. (Notice the trailing "00" - 17 decimal places is about right for a double.) Python, on the other hand, has bignum support, and will quite cheerfully deal with 4096-bit integers (for those that play around with cryptographic algorithms, this is a huge boon). You will be able to find a Javascript bignum library if you search - for example <http://silentmatt.com/biginteger/>
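The precision loss is easy to reproduce from Python by forcing the exact integer through a C double, which is what JavaScript's Number effectively is:

```python
n = 2870048561233731259       # the correct (Python) result
as_double = int(float(n))     # what survives a round-trip through an IEEE double
print(as_double)              # a nearby multiple of 512, not n itself
print(as_double == n)         # False
print(n.bit_length())         # 62 -> well past the 53 bits a double holds exactly
```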
Javascript is giving a different answer to same algorithm in Python
34,197,424
9
2015-12-10T08:55:57Z
34,198,049
7
2015-12-10T09:27:51Z
[ "javascript", "python", "algorithm", "dynamic-programming", "rosalind" ]
I'm working on the Rosalind problem [Mortal Fibonacci Rabbits](http://rosalind.info/problems/fibd/) and the website keeps telling me my answer is wrong when I use my algorithm written JavaScript. When I use the same algorithm in Python I get a different (and correct) answer. The inconsistency only happens when the result gets large. For example `fibd(90, 19)` returns `2870048561233730600` in JavaScript but in Python I get `2870048561233731259`. Is there something about numbers in JavaScript that give me a different answer or am making a subtle mistake in my JavaScript code? The JavaScript solution: ``` function fibd(n, m) { // Create an array of length m and set all elements to 0 var rp = new Array(m); rp = rp.map(function(e) { return 0; }); rp[0] = 1; for (var i = 1; i < n; i++) { // prepend the sum of all elements from 1 to the end of the array rp.splice(0, 0, rp.reduce(function (e, s) { return s + e; }) - rp[0]); // Remove the final element rp.pop(); } // Sum up all the elements return rp.reduce(function (e, s) { return s + e; }); } ``` The Python solution: ``` def fibd(n, m): # Create an array of length m and set all elements to 0 rp = [0] * m rp[0] = 1 for i in range(n-1): # The sum of all elements from 1 the end and dropping the final element rp = [sum(rp[1:])] + rp[:-1] return sum(rp) ```
Just doing a bit of research, this article seems interesting. [JavaScript only supports 53-bit integers.](http://www.2ality.com/2012/07/large-integers.html) The result given by Python is indeed outside the maximum safe integer range for JS. If you try to do ``` parseInt('2870048561233731259') ``` it will indeed return ``` 2870048561233731000 ```
Cannot import name _uuid_generate_random in heroku django
34,198,538
38
2015-12-10T09:50:31Z
34,199,367
80
2015-12-10T10:25:23Z
[ "python", "django", "heroku", "celery" ]
I am working on a project which scans user gmail inbox and provides a report. I have deployed it in **heroku** with following specs: Language: **Python 2.7** Framework: **Django 1.8** Task scheduler: **Celery** (**Rabbitmq-bigwig** for broker url) Now when heroku execute it the celery is not giving me the output. On Heroku push its showing **Collectstatic configuration error**. I have tried using **whitenoise package** Also tried executing: **heroku run python manage.py collectstatic --dry-run --noinput** Still getting the same error. *$ heroku run python manage.py collectstatic --noinput* gave the following details of the error. ``` File "manage.py", line 10, in <module> execute_from_command_line(sys.argv) File "/app/.heroku/python/lib/python2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line utility.execute() File "/app/.heroku/python/lib/python2.7/site-packages/django/core/management/__init__.py", line 303, in execute settings.INSTALLED_APPS File "/app/.heroku/python/lib/python2.7/site-packages/django/conf/__init__.py", line 48, in __getattr__ self._setup(name) File "/app/.heroku/python/lib/python2.7/site-packages/django/conf/__init__.py", line 44, in _setup self._wrapped = Settings(settings_module) File "/app/.heroku/python/lib/python2.7/site-packages/django/conf/__init__.py", line 92, in __init__ mod = importlib.import_module(self.SETTINGS_MODULE) File "/app/.heroku/python/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) File "/app/salesblocker/__init__.py", line 5, in <module> from .celery import app as celery_app File "/app/salesblocker/celery.py", line 5, in <module> from celery import Celery File "/app/.heroku/python/lib/python2.7/site-packages/celery/__init__.py", line 131, in <module> from celery import five # noqa File "/app/.heroku/python/lib/python2.7/site-packages/celery/five.py", line 153, in <module> from kombu.utils.compat import OrderedDict # noqa File 
"/app/.heroku/python/lib/python2.7/site-packages/kombu/utils/__init__.py", line 19, in <module> from uuid import UUID, uuid4 as _uuid4, _uuid_generate_random ImportError: cannot import name _uuid_generate_random ``` I have also tried to rollback heroku commit to previous working commit and cloned that code but on the next commit(changes:removed a media image from the media folder) its showing the same error again. Thanks in advance
You are coming across [this issue](https://github.com/celery/kombu/issues/545), which affects Python 2.7.11 (Kombu is required by Celery). The issue is fixed in Kombu 3.0.30.
How to prevent tensorflow from allocating the totality of a GPU memory?
34,199,233
35
2015-12-10T10:19:51Z
34,200,194
43
2015-12-10T11:00:19Z
[ "python", "tensorflow" ]
I work in an environment in which computational resources are shared, i.e., we have a few server machines equipped with a few Nvidia Titan X GPUs each. For small to moderate size models, the 12GB of the Titan X are usually enough for 2-3 people to run training concurrently on the same GPU. If the models are small enough that a single model does not take full advantage of all the computational units of the Titan X, this can actually result in a speedup compared with running one training process after the other. Even in cases where the concurrent access to the GPU does slow down the individual training time, it is still nice to have the flexibility of having several users running things on the GPUs at once. The problem with TensorFlow is that, by default, it allocates the full amount of available memory on the GPU when it is launched. Even for a small 2-layer Neural Network, I see that the 12 GB of the Titan X are used up. Is there a way to make TensorFlow only allocate, say, 4GB of GPU memory, if one knows that that amount is enough for a given model?
You can set the fraction of GPU memory to be allocated when you construct a [`tf.Session`](https://www.tensorflow.org/versions/master/api_docs/python/client.html#Session) by passing a [`tf.GPUOptions`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/config.proto) as part of the optional `config` argument: ``` # Assume that you have 12GB of GPU memory and want to allocate ~4GB: gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333) sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) ``` The `per_process_gpu_memory_fraction` acts as a hard upper bound on the amount of GPU memory that will be used by the process on each GPU on the same machine. Currently, this fraction is applied uniformly to all of the GPUs on the same machine; there is no way to set this on a per-GPU basis.
How to prevent tensorflow from allocating the totality of a GPU memory?
34,199,233
35
2015-12-10T10:19:51Z
37,454,574
8
2016-05-26T07:43:45Z
[ "python", "tensorflow" ]
I work in an environment in which computational resources are shared, i.e., we have a few server machines equipped with a few Nvidia Titan X GPUs each. For small to moderate size models, the 12GB of the Titan X are usually enough for 2-3 people to run training concurrently on the same GPU. If the models are small enough that a single model does not take full advantage of all the computational units of the Titan X, this can actually result in a speedup compared with running one training process after the other. Even in cases where the concurrent access to the GPU does slow down the individual training time, it is still nice to have the flexibility of having several users running things on the GPUs at once. The problem with TensorFlow is that, by default, it allocates the full amount of available memory on the GPU when it is launched. Even for a small 2-layer Neural Network, I see that the 12 GB of the Titan X are used up. Is there a way to make TensorFlow only allocate, say, 4GB of GPU memory, if one knows that that amount is enough for a given model?
``` config = tf.ConfigProto() config.gpu_options.allow_growth=True sess = tf.Session(config=config) ``` <https://github.com/tensorflow/tensorflow/issues/1578>
Call another function and optionally keep default arguments
34,205,794
12
2015-12-10T15:31:27Z
34,205,906
8
2015-12-10T15:36:42Z
[ "python" ]
I have a function with one optional argument, like this: ``` def funA(x, a, b=1): return a+b*x ``` I want to write a new function that calls `funA` and also has an optional argument, but if no argument is passed, I want to keep the default in `funA`. I was thinking something like this: ``` def funB(x, a, b=None): if b: return funA(x, a, b) else: return funA(x, a) ``` Is there a more pythonic way of doing this?
``` def funA(x, a, b=1): return a+b*x def funB(x, a, b=1): return funA(x, a, b) ``` Make the default value `b=1` in `funB()` as well, and then always pass it on to `funA()`. Note that this duplicates the default, so the two definitions have to be kept in sync.
Call another function and optionally keep default arguments
34,205,794
12
2015-12-10T15:31:27Z
34,205,920
16
2015-12-10T15:37:13Z
[ "python" ]
I have a function with one optional argument, like this: ``` def funA(x, a, b=1): return a+b*x ``` I want to write a new function that calls `funA` and also has an optional argument, but if no argument is passed, I want to keep the default in `funA`. I was thinking something like this: ``` def funB(x, a, b=None): if b: return funA(x, a, b) else: return funA(x, a) ``` Is there a more pythonic way of doing this?
I would replace `if b` with `if b is not None`, so that if you pass `b=0` (or any other "falsy" value) as argument to `funB` it will be passed to `funA`. Apart from that it seems pretty pythonic to me: clear and explicit. (albeit maybe a bit useless, depending on what you're trying to do!) A little more cryptic way that relies on calling `funB` with the correct keyword arguments (e.g. `funB(3, 2, b=4)`: ``` def funB(x, a, **kwargs): return funA(x, a, **kwargs) ```
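When `funA`'s default should be kept without duplicating it (and without requiring keyword arguments at the call site), a common pattern, offered here as an addition rather than part of the original answer, is a private sentinel object:

```python
_sentinel = object()

def funA(x, a, b=1):
    return a + b * x

def funB(x, a, b=_sentinel):
    if b is _sentinel:
        return funA(x, a)     # fall back to funA's own default
    return funA(x, a, b)      # pass b through, even if it is 0 or None

print(funB(3, 2))        # 5  (uses funA's default b=1)
print(funB(3, 2, b=0))   # 2  (b=0 is forwarded, unlike a bare `if b:` test)
```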
How to assign value to a tensorflow variable?
34,220,532
12
2015-12-11T09:51:46Z
34,220,750
18
2015-12-11T10:01:57Z
[ "python", "variable-assignment", "tensorflow" ]
I am trying to assign a new value to a tensorflow variable in python.

```
import tensorflow as tf
import numpy as np

x = tf.Variable(0)
init = tf.initialize_all_variables()
sess = tf.InteractiveSession()
sess.run(init)
print(x.eval())
x.assign(1)
print(x.eval())
```

But the output I get is

```
0
0
```

So the value has not changed. What am I missing?
The statement [`x.assign(1)`](https://www.tensorflow.org/versions/master/api_docs/python/state_ops.html#Variable.assign) does not actually assign the value `1` to `x`, but rather creates a [`tf.Operation`](https://www.tensorflow.org/versions/master/api_docs/python/framework.html#Operation) that you have to explicitly *run* to update the variable.\* A call to [`Operation.run()`](https://www.tensorflow.org/versions/master/api_docs/python/framework.html#Operation.run) or [`Session.run()`](https://www.tensorflow.org/versions/master/api_docs/python/client.html#Session.run) can be used to run the operation:

```
assign_op = x.assign(1)
sess.run(assign_op)  # or `assign_op.op.run()`
print(x.eval())
# ==> 1
```

(\* In fact, it returns a `tf.Tensor`, corresponding to the updated value of the variable, to make it easier to chain assignments.)
Grid search with f1 as scoring function, several pages of error message
34,221,712
2
2015-12-11T10:48:51Z
34,222,309
7
2015-12-11T11:17:47Z
[ "python", "scikit-learn", "grid-search" ]
Want to use Gridsearch to find best parameters and use f1 as the scoring metric. If i remove the scoring function, all works well and i get no errors. Here is my code: ``` from sklearn import grid_search parameters = {'n_neighbors':(1,3,5,10,15),'weights':('uniform','distance'),'algorithm':('ball_tree','kd_tree','brute'),'leaf_size':(5,10,20,30,50)} reg = grid_search.GridSearchCV(estimator=neigh,param_grid=parameters,scoring="f1") train_classifier(reg, X_train, y_train) train_f1_score = predict_labels(reg, X_train, y_train) print reg.best_params_ print "F1 score for training set: {}".format(train_f1_score) print "F1 score for test set: {}".format(predict_labels(reg, X_test, y_test)) ``` When i execute i get pages upon pages as errors, and i cannot make heads or tails of it :( ``` ValueError Traceback (most recent call last) <ipython-input-17-3083ff8a20ea> in <module>() 3 parameters = {'n_neighbors':(1,3,5,10,15),'weights':('uniform','distance'),'algorithm':('ball_tree','kd_tree','brute'),'leaf_size':(5,10,20,30,50)} 4 reg = grid_search.GridSearchCV(estimator=neigh,param_grid=parameters,scoring="f1") ----> 5 train_classifier(reg, X_train, y_train) 6 train_f1_score = predict_labels(reg, X_train, y_train) 7 print reg.best_params_ <ipython-input-9-b56ce25fd90b> in train_classifier(clf, X_train, y_train) 5 print "Training {}...".format(clf.__class__.__name__) 6 start = time.time() ----> 7 clf.fit(X_train, y_train) 8 end = time.time() 9 print "Done!\nTraining time (secs): {:.3f}".format(end - start) //anaconda/lib/python2.7/site-packages/sklearn/grid_search.pyc in fit(self, X, y) 802 803 """ --> 804 return self._fit(X, y, ParameterGrid(self.param_grid)) 805 806 //anaconda/lib/python2.7/site-packages/sklearn/grid_search.pyc in _fit(self, X, y, parameter_iterable) 551 self.fit_params, return_parameters=True, 552 error_score=self.error_score) --> 553 for parameters in parameter_iterable 554 for train, test in cv) 555 
//anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in __call__(self, iterable) 802 self._iterating = True 803 --> 804 while self.dispatch_one_batch(iterator): 805 pass 806 //anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in dispatch_one_batch(self, iterator) 660 return False 661 else: --> 662 self._dispatch(tasks) 663 return True 664 //anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in _dispatch(self, batch) 568 569 if self._pool is None: --> 570 job = ImmediateComputeBatch(batch) 571 self._jobs.append(job) 572 self.n_dispatched_batches += 1 //anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in __init__(self, batch) 181 # Don't delay the application, to avoid keeping the input 182 # arguments in memory --> 183 self.results = batch() 184 185 def get(self): //anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in __call__(self) 70 71 def __call__(self): ---> 72 return [func(*args, **kwargs) for func, args, kwargs in self.items] 73 74 def __len__(self): //anaconda/lib/python2.7/site-packages/sklearn/cross_validation.pyc in _fit_and_score(estimator, X, y, scorer, train, test, verbose, parameters, fit_params, return_train_score, return_parameters, error_score) 1548 1549 else: -> 1550 test_score = _score(estimator, X_test, y_test, scorer) 1551 if return_train_score: 1552 train_score = _score(estimator, X_train, y_train, scorer) //anaconda/lib/python2.7/site-packages/sklearn/cross_validation.pyc in _score(estimator, X_test, y_test, scorer) 1604 score = scorer(estimator, X_test) 1605 else: -> 1606 score = scorer(estimator, X_test, y_test) 1607 if not isinstance(score, numbers.Number): 1608 raise ValueError("scoring must return a number, got %s (%s) instead." 
//anaconda/lib/python2.7/site-packages/sklearn/metrics/scorer.pyc in __call__(self, estimator, X, y_true, sample_weight) 88 else: 89 return self._sign * self._score_func(y_true, y_pred, ---> 90 **self._kwargs) 91 92 //anaconda/lib/python2.7/site-packages/sklearn/metrics/classification.pyc in f1_score(y_true, y_pred, labels, pos_label, average, sample_weight) 637 return fbeta_score(y_true, y_pred, 1, labels=labels, 638 pos_label=pos_label, average=average, --> 639 sample_weight=sample_weight) 640 641 //anaconda/lib/python2.7/site-packages/sklearn/metrics/classification.pyc in fbeta_score(y_true, y_pred, beta, labels, pos_label, average, sample_weight) 754 average=average, 755 warn_for=('f-score',), --> 756 sample_weight=sample_weight) 757 return f 758 //anaconda/lib/python2.7/site-packages/sklearn/metrics/classification.pyc in precision_recall_fscore_support(y_true, y_pred, beta, labels, pos_label, average, warn_for, sample_weight) 982 else: 983 raise ValueError("pos_label=%r is not a valid label: %r" % --> 984 (pos_label, present_labels)) 985 labels = [pos_label] 986 if labels is None: ValueError: pos_label=1 is not a valid label: array(['no', 'yes'], dtype='|S3') ```
It seems that you have a label array with the values `'no'` and `'yes'`; you should convert it to a binary 1/0 numerical representation, because the error states that the scoring function cannot tell which entries in your label array are the 0's and which are the 1's.

Another possible way to solve it, without modifying your label array:

```
from sklearn.metrics import f1_score
from sklearn.metrics import make_scorer

f1_scorer = make_scorer(f1_score, pos_label="yes")

reg = grid_search.GridSearchCV(estimator=neigh, param_grid=parameters, scoring=f1_scorer)
```
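For the first suggestion, converting the string labels to 0/1 before fitting, a minimal sketch (variable names hypothetical; scikit-learn's `LabelEncoder` does the same job):

```python
y_raw = ['no', 'yes', 'yes', 'no']  # example labels like the ones in the error

# Map the positive class 'yes' to 1 and everything else to 0.
y = [1 if label == 'yes' else 0 for label in y_raw]
print(y)  # [0, 1, 1, 0]
```

With numeric labels, `scoring="f1"` works out of the box because `pos_label=1` then exists in the data.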
Force child class to call parent method when overriding it
34,224,896
6
2015-12-11T13:40:54Z
34,225,828
7
2015-12-11T14:29:06Z
[ "python", "class" ]
I am curious whether there is a way in Python to force (from the Parent class) a parent method to be called from a child class when it is being overridden.

Example:

```
class Parent(object):

    def __init__(self):
        self.someValue=1

    def start(self):
        ''' Force method to be called'''
        self.someValue+=1
```

The correct implementation of what I wanted my Child class to do would be:

```
class ChildCorrect(Parent):

    def start(self):
        Parent.start(self)
        self.someValue+=1
```

However, is there a way to force developers that develop Child classes to specifically call (not just override) the 'start' method defined in the Parent class, in case they forget to call it:

```
class ChildIncorrect(Parent):

    def start(self):
        '''Parent method not called, but correctly overridden'''
        self.someValue+=1
```

Also, if this is not considered best practice, what would be an alternative?
# How (not) to do it

No, there is no safe way to force users to call super. Let us go over a few options which would reach that or a similar goal, and discuss why each is a bad idea. In the next section, I will also discuss what the sensible (with respect to the Python community) way to deal with the situation is.

1. A metaclass could check, at the time the subclass is defined, whether a method overriding the target method (= has the same name as the target method) calls super with the appropriate arguments. This requires deeply implementation-specific behaviour, such as using the [`dis`](https://docs.python.org/3/library/dis.html) module of CPython. There are no circumstances where I could imagine this to be a good idea: it is dependent on the specific version of CPython and on the fact that you are using CPython at all.
2. A metaclass could co-operate with the baseclass. In this scenario, the baseclass notifies the metaclass when the inherited method is called, and the metaclass wraps any overriding methods with a piece of guard code. The guard code tests whether the inherited method got called during the execution of the overriding method. This has the downside that a separate notification channel is required for each method which needs this "feature", and in addition thread-safety would be a concern. Also, the overriding method has already finished execution at the point you notice that it has not called the inherited method, which may be bad depending on your scenario. It also opens the question: what to do if the overriding method has not called the inherited method? Raising an exception might be unexpected by the code using the method (or, even worse, it might assume that the method had no effect at all, which is not true). Also, the late feedback to the developer overriding the class (if they get that feedback at all!) is bad.
3. A metaclass could generate a piece of guard code for every overridden method which calls the inherited method *automatically*, before or after the overriding method has executed. The downside here is that developers will not expect the inherited method to be called automatically, and have no way to stop that call (e.g. if a precondition special to the subclass is not met) or to control when in their own overriding method the inherited method is called. This violates a good bunch of sensible principles when coding Python (avoid surprises, explicit is better than implicit, and possibly more).
4. A combination of points 2 and 3. Using the co-operation of base- and metaclass from point 2, the guard code from point 3 could be extended to automatically call super *iff* the overriding method has not called super itself. Again, this is unexpected, but it resolves the issues with duplicate super calls and with how to handle a method which does not call super. However, there are still remaining problems. While thread-safety could be fixed with thread-locals, there is still no way for an overriding method to abort the call to super when a precondition is not met, except by raising an exception, which may not be desirable in all cases. Also, super can only be called automatically *after* the overriding method, not before, which, again, is undesirable in some scenarios. And none of this helps against re-binding the attribute during the lifetime of the object and class, although this can be helped by using descriptors and/or extending the metaclass to take care of it.

# Why not to do it

> Also, if this is not considered best practice, what would be an alternative?

A common best practice with Python is to assume that you are among consenting adults. That means that no one is actively trying to do nasty things to your code, unless you allow them to. In such an ecosystem, it would make sense to tack a `.. warning::` in the documentation of the method or class, so that anyone inheriting from that class knows what they have to do. Also, calling the super method at an appropriate point makes sense in so many contexts that developers using the base class will consider it anyway and only forget about it accidentally.

In such a case, using the metaclass from the third point above would not help either: users would have to remember **not** to call super, which might be an issue by itself, especially for experienced programmers. It also violates the principle of least surprise and "explicit is better than implicit" (no one expects the inherited method to be called implicitly!). This, again, would have to be documented well, in which case you can also resort to just not having super called automatically, and simply document that it makes *even more sense than usual* to call the inherited method.
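One structural alternative worth naming explicitly (it sidesteps the problem rather than enforcing the super call) is the template method pattern: the base class keeps `start` non-overridable by convention and delegates the customizable part to a hook. A minimal sketch, with illustrative names:

```python
class Parent(object):
    def __init__(self):
        self.someValue = 1

    def start(self):
        # Mandatory bookkeeping always runs; subclasses are not
        # supposed to override start() itself.
        self.someValue += 1
        self._on_start()  # hook for subclass-specific behaviour

    def _on_start(self):
        pass  # default: nothing extra


class Child(Parent):
    def _on_start(self):
        # No super() call needed; the bookkeeping already happened.
        self.someValue += 1


c = Child()
c.start()
print(c.someValue)  # 3: 1 initial + 1 in Parent.start + 1 in the hook
```

Subclass authors cannot forget the parent logic because they never touch it; they only fill in the hook.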
Find the k smallest values of a numpy array
34,226,400
2
2015-12-11T14:59:17Z
34,226,816
8
2015-12-11T15:20:07Z
[ "python", "numpy" ]
In order to find the index of the smallest value, I can use `argmin`:

```
import numpy as np
A = np.array([1, 7, 9, 2, 0.1, 17, 17, 1.5])
print A.argmin() # 4 because A[4] = 0.1
```

But how can I find the indices of the **k-smallest values**? I'm looking for something like:

```
print A.argmin(numberofvalues=3)
# [4, 0, 7] because A[4] <= A[0] <= A[7] <= all other A[i]
```

*Note: in my use case A has between ~ 10 000 and 100 000 values, and I'm interested in only the indices of the k=10 smallest values. k will never be > 10.*
Use [`np.argpartition`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.argpartition.html). It does not sort the entire array. It only guarantees that the `kth` element is in sorted position and all smaller elements will be moved before it. Thus the first `k` elements will be the k-smallest elements.

```
import numpy as np

A = np.array([1, 7, 9, 2, 0.1, 17, 17, 1.5])
k = 3

idx = np.argpartition(A, k)
print(idx)
# [4 0 7 3 1 2 6 5]
```

This returns the k-smallest values. Note that these may not be in sorted order.

```
print(A[idx[:k]])
# [ 0.1  1.   1.5]
```

---

To obtain the k-largest values use

```
idx = np.argpartition(A, -k)
# [4 0 7 3 1 2 6 5]

A[idx[-3:]]
# [  9.  17.  17.]
```
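For comparison, the standard library's `heapq` can recover the k smallest values and their indices without NumPy. It is handy as a cross-check and efficient for small k (like the k ≤ 10 in the question), and unlike `argpartition` its results come back already sorted:

```python
import heapq

A = [1, 7, 9, 2, 0.1, 17, 17, 1.5]
k = 3

# The k smallest values, in ascending order.
print(heapq.nsmallest(k, A))  # [0.1, 1, 1.5]

# Their indices, smallest value first.
idx = heapq.nsmallest(k, range(len(A)), key=A.__getitem__)
print(idx)  # [4, 0, 7]
```

For large arrays the NumPy route is faster, but this avoids a dependency and gives sorted output for free.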
Choosing from different cost function and activation function of a neural network
34,229,140
10
2015-12-11T17:26:10Z
34,248,291
12
2015-12-13T05:28:47Z
[ "python", "machine-learning", "neural-network", "svm", "tensorflow" ]
Recently I started toying with neural networks. I was trying to implement an `AND` gate with tensorflow. I am having trouble understanding when to use different cost and activation functions. This is a basic neural network with no hidden layers, only an input layer and an output layer.

First I tried to implement it in this way. As you can see this is a poor implementation, but I think it gets the job done, at least in some way. So, I tried only the real outputs, no one-hot true outputs. For the activation function, I used a sigmoid function, and for the cost function I used a squared error cost function (I think it's called that; correct me if I'm wrong). I've tried using ReLU and Softmax as activation functions (with the same cost function). They don't work, and I figured out why they don't work. I also tried the sigmoid function with a cross entropy cost function; it doesn't work either.

```
import tensorflow as tf
import numpy

train_X = numpy.asarray([[0,0],[0,1],[1,0],[1,1]])
train_Y = numpy.asarray([[0],[0],[0],[1]])

x = tf.placeholder("float",[None, 2])
y = tf.placeholder("float",[None, 1])

W = tf.Variable(tf.zeros([2, 1]))
b = tf.Variable(tf.zeros([1, 1]))

activation = tf.nn.sigmoid(tf.matmul(x, W)+b)
cost = tf.reduce_sum(tf.square(activation - y))/4
optimizer = tf.train.GradientDescentOptimizer(.1).minimize(cost)

init = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init)
    for i in range(5000):
        train_data = sess.run(optimizer, feed_dict={x: train_X, y: train_Y})

    result = sess.run(activation, feed_dict={x:train_X})
    print(result)
```

after 5000 iterations:

```
[[ 0.0031316 ]
 [ 0.12012422]
 [ 0.12012422]
 [ 0.85576665]]
```

**`Question 1`** - Is there any other activation function and cost function that can work (learn) for the above network, without changing the parameters (meaning without changing W, x, b)?
**`Question 2`** - I read in a stackoverflow post [here](http://stackoverflow.com/questions/20850895/neural-networks-activation-function) that `the selection of activation function depends on the problem`. So is there no cost function that can be used anywhere? I mean, there is no `standard` cost function that can be used on any neural network, right? Please correct me on this.

I also implemented the `AND` gate with a different approach, with the output as one-hot true. As you can see, `train_Y` `[1,0]` means that the 0th index is 1, so the answer is 0. I hope you get it. Here I have used a softmax activation function, with cross entropy as the cost function. The sigmoid function as activation function fails miserably.

```
import tensorflow as tf
import numpy

train_X = numpy.asarray([[0,0],[0,1],[1,0],[1,1]])
train_Y = numpy.asarray([[1,0],[1,0],[1,0],[0,1]])

x = tf.placeholder("float",[None, 2])
y = tf.placeholder("float",[None, 2])

W = tf.Variable(tf.zeros([2, 2]))
b = tf.Variable(tf.zeros([2]))

activation = tf.nn.softmax(tf.matmul(x, W)+b)
cost = -tf.reduce_sum(y*tf.log(activation))
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(cost)

init = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init)
    for i in range(5000):
        train_data = sess.run(optimizer, feed_dict={x: train_X, y: train_Y})

    result = sess.run(activation, feed_dict={x:train_X})
    print(result)
```

after 5000 iterations:

```
[[  1.00000000e+00   1.41971401e-09]
 [  9.98996437e-01   1.00352429e-03]
 [  9.98996437e-01   1.00352429e-03]
 [  1.40495342e-03   9.98595059e-01]]
```

**`Question 3`** - So in this case what cost function and activation function can I use? And how do I understand what type of cost and activation functions to use? Is there a standard way or rule, or is it just experience, or do I have to try every cost and activation function in a brute force manner? I found an answer [here](http://stackoverflow.com/questions/20368015/activation-function-choice-for-neural-network).
But I am hoping for a more elaborate explanation.

**`Question 4`** - I have noticed that it takes a lot of iterations to converge to a near-correct prediction. I think the convergence rate depends on the learning rate (too large a rate will miss the solution) and the cost function (correct me if I'm wrong). So, is there any optimal way (meaning the fastest) or cost function for converging to a correct solution?
I will answer your questions a little bit out of order, starting with more general answers, and finishing with those specific to your particular experiment.

**Activation functions**

Different activation functions do, in fact, have different properties. Let's first consider an activation function between two layers of a neural network. The only purpose of an activation function there is to serve as a nonlinearity. If you do not put an activation function between two layers, then the two layers together will serve no better than one, because their effect will still be just a linear transformation. For a long while people were using the sigmoid function and tanh, choosing pretty much arbitrarily, with sigmoid being more popular, until recently, when ReLU became the dominant nonlinearity. The reason why people use ReLU between layers is because it is non-saturating (and is also faster to compute). Think about the graph of a sigmoid function. If the absolute value of `x` is large, then the derivative of the sigmoid function is small, which means that as we propagate the error backwards, the gradient of the error will vanish very quickly as we go back through the layers. With ReLU the derivative is `1` for all positive inputs, so the gradient for those neurons that fired will not be changed by the activation unit at all and will not slow down the gradient descent.

For the last layer of the network, the activation unit also depends on the task. For regression you will want to use the sigmoid or tanh activation, because you want the result to be in a bounded range (between 0 and 1 for sigmoid, between -1 and 1 for tanh). For classification you will want only one of your outputs to be one and all others zeros, but there's no differentiable way to achieve precisely that, so you will want to use a softmax to approximate it.

**Your example**. Now let's look at your example. Your first example tries to compute the output of `AND` in the following form:

```
sigmoid(W1 * x1 + W2 * x2 + B)
```

Note that `W1` and `W2` will always converge to the same value, because the output for (`x1`, `x2`) should be equal to the output of (`x2`, `x1`). Therefore, the model that you are fitting is:

```
sigmoid(W * (x1 + x2) + B)
```

`x1 + x2` can only take one of three values (0, 1 or 2), and you want to return `0` for the case when `x1 + x2 < 2` and 1 for the case when `x1 + x2 = 2`. Since the sigmoid function is rather smooth, it will take very large values of `W` and `B` to make the output close to the desired, but because of a small learning rate they can't get to those large values fast. Increasing the learning rate in your first example will increase the speed of convergence.

Your second example converges better because the `softmax` function is good at making precisely one output be equal to `1` and all others to `0`. Since this is precisely your case, it does converge quickly. Note that `sigmoid` would also eventually converge to good values, but it will take significantly more iterations (or a higher learning rate).

**What to use**. Now to the last question: how does one choose which activation and cost functions to use? These guidelines will work for the majority of cases:

1. If you do classification, use `softmax` for the last layer's nonlinearity and `cross entropy` as a cost function.
2. If you do regression, use `sigmoid` or `tanh` for the last layer's nonlinearity and `squared error` as a cost function.
3. Use ReLU as a nonlinearity between layers.
4. Use better optimizers (`AdamOptimizer`, `AdagradOptimizer`) instead of `GradientDescentOptimizer`, or use momentum for faster convergence.
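The "squishing" that softmax performs in the last layer can be sketched in a few lines of plain Python (subtracting the maximum first is the usual numerical-stability trick, in the spirit of the "numerically careful" fused implementations):

```python
import math

def softmax(xs):
    m = max(xs)                            # shift for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)       # roughly [0.659, 0.242, 0.099]
print(sum(probs))  # 1.0 (up to floating point)
```

The outputs are positive, sum to 1, and preserve the ordering of the inputs, which is why they can be read as class probabilities.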
Is it possible to modify an existing TensorFlow computation graph?
34,235,225
6
2015-12-12T01:38:15Z
34,235,413
8
2015-12-12T02:06:24Z
[ "python", "tensorflow" ]
TensorFlow graph is usually built gradually from inputs to outputs, and then executed. Looking at the Python code, the inputs lists of operations are immutable which suggests that the inputs should not be modified. Does that mean that there is no way to update/modify an existing graph?
The TensorFlow [`tf.Graph`](https://www.tensorflow.org/versions/master/api_docs/python/framework.html#Graph) class is an *append-only* data structure, which means that you can add nodes to the graph after executing part of the graph, but you cannot remove or modify existing nodes. Since TensorFlow executes only the necessary subgraph when you call [`Session.run()`](https://www.tensorflow.org/versions/master/api_docs/python/client.html#Session.run), there is no execution-time cost to having redundant nodes in the graph (although they will continue to consume memory).

To remove *all* nodes in the graph, you can create a session with a new graph:

```
with tf.Graph().as_default():  # Create a new graph, and make it the default.
  with tf.Session() as sess:  # `sess` will use the new, currently empty, graph.
    # Build graph and execute nodes in here.
```
What is the purpose of graph collections in TensorFlow?
34,235,557
5
2015-12-12T02:26:40Z
34,243,895
8
2015-12-12T19:17:47Z
[ "python", "tensorflow" ]
The API discusses [Graph Collections](https://www.tensorflow.org/versions/master/api_docs/python/framework.html#graph-collections) which judging from the [code](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/framework/ops.py#L2071) are a general purpose key/data storage. What is the purpose of those collections?
Remember that under the hood, Tensorflow is a system for specifying and then executing computational data flow graphs. The graph collections are used as part of keeping track of the constructed graphs and how they must be executed. For example, when you create certain kinds of ops, such as `tf.train.batch_join`, the code that adds the op will also add some queue runners to the `QUEUE_RUNNERS` graph collection. Later, when you call `start_queue_runners()`, by default, it will look at the `QUEUE_RUNNERS` collection to know which runners to start.
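The collection API itself is small: you stash values under a string key and retrieve them later from anywhere that can see the same graph. A hedged sketch using the standard accessors `tf.add_to_collection` and `tf.get_collection` (the collection name here is made up):

```python
import tensorflow as tf

v = tf.Variable(0, name='my_var')

# Register the variable under an arbitrary string key...
tf.add_to_collection('my_collection', v)

# ...and fetch everything registered under that key later.
print(tf.get_collection('my_collection'))  # list containing `v`
```

Well-known keys such as `QUEUE_RUNNERS` live in `tf.GraphKeys`; many ops populate those keys automatically, which is exactly how `start_queue_runners()` finds its runners.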
Difference between np.mean and tf.reduce_mean (numpy | tensorflow)?
34,236,252
13
2015-12-12T00:21:08Z
34,237,210
20
2015-12-12T06:55:01Z
[ "python", "numpy", "machine-learning", "mean", "tensorflow" ]
In the following tutorial: <https://www.tensorflow.org/versions/master/tutorials/mnist/beginners/index.html>

There is

`accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))`

`tf.cast` basically changes the type of tensor the object is... but what is the difference between `tf.reduce_mean` and `np.mean`?

Here is the doc on `tf.reduce_mean`:

```
reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None)

input_tensor: The tensor to reduce. Should have numeric type.
reduction_indices: The dimensions to reduce. If `None` (the default), reduces all dimensions.

# 'x' is [[1., 1.]
#         [2., 2.]]
tf.reduce_mean(x) ==> 1.5
tf.reduce_mean(x, 0) ==> [1.5, 1.5]
tf.reduce_mean(x, 1) ==> [1., 2.]
```

For a 1D vector, it looks like `np.mean == tf.reduce_mean`, but I don't understand what's happening in `tf.reduce_mean(x, 1) ==> [1., 2.]`. `tf.reduce_mean(x, 0) ==> [1.5, 1.5]` kind of makes sense, since the mean of [1, 2] and [1, 2] is [1.5, 1.5], but what's going on with `tf.reduce_mean(x, 1)`?
The functionality of `numpy.mean` and `tensorflow.reduce_mean` is the same. They do the same thing. From the documentation, for [numpy](http://docs.scipy.org/doc/numpy/reference/generated/numpy.mean.html) and [tensorflow](https://www.tensorflow.org/versions/master/api_docs/python/math_ops.html#reduce_mean), you can see that. Let's look at an example:

```
c = np.array([[3.,4], [5.,6], [6.,7]])
print(np.mean(c,1))

Mean = tf.reduce_mean(c,1)
with tf.Session() as sess:
    result = sess.run(Mean)
    print(result)
```

Output

```
[ 3.5  5.5  6.5]
[ 3.5  5.5  6.5]
```

Here you can see that when `axis` (numpy) or `reduction_indices` (tensorflow) is 1, it computes the mean across (3,4), (5,6) and (6,7), so `1` defines across which axis the mean is computed. When it is 0, the mean is computed across (3,5,6) and (4,6,7), and so on. I hope you get the idea.

Now what are the differences between them? You can compute the numpy operation anywhere in python. But in order to do a tensorflow operation, it must be done inside a tensorflow `Session`. You can read more about it [here](https://www.tensorflow.org/versions/master/get_started/basic_usage.html). So when you need to perform any computation for your tensorflow graph (or structure, if you will), it must be done inside a tensorflow `Session`.

Let's look at another example.

```
npMean = np.mean(c)
print(npMean+1)

tfMean = tf.reduce_mean(c)
Add = tfMean + 1
with tf.Session() as sess:
    result = sess.run(Add)
    print(result)
```

We could increase the mean by `1` in `numpy` as you would naturally, but in order to do it in tensorflow, you need to perform it in a `Session`; without using a `Session` you can't do that. In other words, when you write `tfMean = tf.reduce_mean(c)`, tensorflow doesn't compute it then. It only computes that in a `Session`. But numpy computes it instantly, when you write `np.mean()`. I hope it makes sense.
Cost of using 10**9 over 1000000000?
34,239,159
9
2015-12-12T11:10:56Z
34,239,211
13
2015-12-12T11:18:51Z
[ "python", "performance", "python-2.7", "literals" ]
In Python, are expressions like `10**9` made of literals also literals? What I am asking: **is there a cost to using expressions over less-meaningful but also less-computable literals** in code that is called very often and should be lightweight?
There is no performance cost. Consider this:

```
import dis

def foo():
    x = 10**9
    y = 10**9

def bar():
    x = 1000000000
    y = 1000000000

dis.dis(foo)
dis.dis(bar)
```

yields

```
In [6]: dis.dis(foo)
  5           0 LOAD_CONST               3 (1000000000)
              3 STORE_FAST               0 (x)

  6           6 LOAD_CONST               4 (1000000000)
              9 STORE_FAST               1 (y)
             12 LOAD_CONST               0 (None)
             15 RETURN_VALUE

In [8]: dis.dis(bar)
  9           0 LOAD_CONST               1 (1000000000)
              3 STORE_FAST               0 (x)

 10           6 LOAD_CONST               1 (1000000000)
              9 STORE_FAST               1 (y)
             12 LOAD_CONST               0 (None)
             15 RETURN_VALUE
```

So when Python compiles the code, it changes the `10**9` to `1000000000`. By the time the byte-code is run, there is no difference between using `10**9` or `1000000000`.
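You can also observe the constant folding without `dis`: in CPython, the compiled code object's constants already contain the folded value (this is CPython-specific behavior, not a language guarantee):

```python
code = compile('10**9', '<expr>', 'eval')

# The folded constant is stored at compile time.
print(code.co_consts)      # (1000000000,) in CPython

# Evaluating the code object just loads that constant.
print(eval(code) == 10**9) # True
```

So the exponentiation happens once, at compile time, no matter how often the expression appears in hot code.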
difference between tensorflow tf.nn.softmax and tf.nn.softmax_cross_entropy_with_logits
34,240,703
29
2015-12-12T14:03:27Z
34,243,720
64
2015-12-12T19:01:22Z
[ "python", "machine-learning", "tensorflow" ]
I was going through the tensorflow api docs [here](https://www.tensorflow.org/versions/master/api_docs/python/nn.html#softmax). In the tensorflow docs they used a keyword called `logits`. What is it? In a lot of methods in the api docs it is written like

`tf.nn.softmax(logits, name=None)`

Now what is written is that `logits` are only `Tensors`. Well, why keep a different name like `logits`? I almost thought that it was `logics`. `:D`

Another thing is that there are two methods I could not differentiate. They were

```
tf.nn.softmax(logits, name=None)
tf.nn.softmax_cross_entropy_with_logits(logits, labels, name=None)
```

What are the differences between them? The docs are not clear to me. I know what `tf.nn.softmax` does. But not the other. An example will be really helpful.
Logits simply means that the function operates on the unscaled output of earlier layers and that the relative scale to understand the units is linear. It means, in particular, that the sum of the inputs may not equal 1, that the values are *not* probabilities (you might have an input of 5).

`tf.nn.softmax` produces just the result of applying the [softmax function](https://en.wikipedia.org/wiki/Softmax_function) to an input tensor. The softmax "squishes" the inputs so that sum(input) = 1; it's a way of normalizing. The shape of the output of a softmax is the same as the input: it just normalizes the values. The outputs of softmax *can* be interpreted as probabilities.

```
a = tf.constant(np.array([[.1, .3, .5, .9]]))
print s.run(tf.nn.softmax(a))
[[ 0.16838508  0.205666    0.25120102  0.37474789]]
```

In contrast, `tf.nn.softmax_cross_entropy_with_logits` computes the cross entropy of the result after applying the softmax function (but it does it all together in a more mathematically careful way). It's similar to the result of:

```
sm = tf.nn.softmax(x)
ce = cross_entropy(sm)
```

The cross entropy is a summary metric: it sums across the elements. The output of `tf.nn.softmax_cross_entropy_with_logits` on a shape `[2,5]` tensor is of shape `[2,1]` (the first dimension is treated as the batch).

If you want to do optimization to minimize the cross entropy, AND you're softmaxing after your last layer, you should use `tf.nn.softmax_cross_entropy_with_logits` instead of doing it yourself, because it covers numerically unstable corner cases in the mathematically right way. Otherwise, you'll end up hacking it by adding little epsilons here and there.

(Edited 2016-02-07: If you have single-class labels, where an object can only belong to one class, you might now consider using `tf.nn.sparse_softmax_cross_entropy_with_logits` so that you don't have to convert your labels to a dense one-hot array. This function was added after release 0.6.0.)
difference between tensorflow tf.nn.softmax and tf.nn.softmax_cross_entropy_with_logits
34,240,703
29
2015-12-12T14:03:27Z
34,272,341
8
2015-12-14T16:47:11Z
[ "python", "machine-learning", "tensorflow" ]
I was going through the tensorflow api docs [here](https://www.tensorflow.org/versions/master/api_docs/python/nn.html#softmax). In the tensorflow docs they used a keyword called `logits`. What is it? In a lot of methods in the api docs it is written like

`tf.nn.softmax(logits, name=None)`

Now what is written is that `logits` are only `Tensors`. Well, why keep a different name like `logits`? I almost thought that it was `logics`. `:D`

Another thing is that there are two methods I could not differentiate. They were

```
tf.nn.softmax(logits, name=None)
tf.nn.softmax_cross_entropy_with_logits(logits, labels, name=None)
```

What are the differences between them? The docs are not clear to me. I know what `tf.nn.softmax` does. But not the other. An example will be really helpful.
tf.nn.softmax computes the forward propagation through a softmax layer. You use it during evaluation of the model when you compute the probabilities that the model outputs. tf.nn.softmax\_cross\_entropy\_with\_logits computes the cost for a softmax layer. It is only used during training. The logits are the unnormalized log probabilities output by the model (the values output before the softmax normalization is applied to them).
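To make the two-step vs. fused relationship concrete, here is a hedged NumPy sketch (not TensorFlow's actual kernel, which fuses the steps in a numerically stabler way): applying softmax and then cross-entropy by hand gives the same value the combined op is defined to compute.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())        # shift for numerical stability
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
labels = np.array([1.0, 0.0, 0.0])  # one-hot target

# two-step version: softmax, then cross-entropy
ce_two_step = -np.sum(labels * np.log(softmax(logits)))

# fused version via the log-sum-exp identity
ce_fused = np.log(np.sum(np.exp(logits))) - np.dot(labels, logits)

print(ce_two_step, ce_fused)
```

The log-sum-exp identity on the last line is what lets a fused op avoid ever materializing vanishingly small probabilities.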
Remove parentheses around integers in a string
34,245,949
5
2015-12-12T22:53:04Z
34,246,006
7
2015-12-12T22:59:58Z
[ "python", "regex", "string" ]
I want to replace `(number)` with just `number` in an expression like this: ``` 4 + (3) - (7) ``` It should be: ``` 4 + 3 - 7 ``` If the expression is: ``` 2+(2)-(5-2/5) ``` it should be like this: ``` 2+2-(5-2/5) ``` I tried ``` a = a.replace(r'\(\d\+)', '') ``` where `a` is a string, but it did not work. Thanks!
Python has a powerful module for regular expressions, `re`, featuring a substitution method: ``` >>> import re >>> a = '2+(2)-(5-2/5)' >>> re.sub('\((\d+)\)', r'\1', a) '2+2-(5-2/5)' ```
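For a quick check against both inputs from the question — the pattern only touches parentheses wrapping a bare run of digits, so `(5-2/5)` is left alone:

```python
import re

pattern = r'\((\d+)\)'   # a parenthesised run of digits, captured

print(re.sub(pattern, r'\1', '4 + (3) - (7)'))   # 4 + 3 - 7
print(re.sub(pattern, r'\1', '2+(2)-(5-2/5)'))   # 2+2-(5-2/5)
```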
Python RandomForest - Unknown label Error
34,246,336
7
2015-12-12T23:45:08Z
34,246,469
9
2015-12-13T00:05:09Z
[ "python", "python-3.x", "scikit-learn", "random-forest" ]
I have trouble using RandomForest fit function This is my training set ``` P1 Tp1 IrrPOA Gz Drz2 0 0.0 7.7 0.0 -1.4 -0.3 1 0.0 7.7 0.0 -1.4 -0.3 2 ... ... ... ... ... 3 49.4 7.5 0.0 -1.4 -0.3 4 47.4 7.5 0.0 -1.4 -0.3 ... (10k rows) ``` I want to predict P1 thanks to all the other variables using sklearn.ensemble RandomForest ``` colsRes = ['P1'] X_train = train.drop(colsRes, axis = 1) Y_train = pd.DataFrame(train[colsRes]) rf = RandomForestClassifier(n_estimators=100) rf.fit(X_train, Y_train) ``` Here is the error I get: ``` ValueError: Unknown label type: array([[ 0. ], [ 0. ], [ 0. ], ..., [ 49.4], [ 47.4], ``` I did not find anything about this label error, I use Python 3.5. Any advice would be a great help !
When you pass label (y) data to `rf.fit(X, y)`, it expects y to be a 1D list. Slicing the pandas DataFrame always results in a 2D structure, hence the conflict in your use case. You need to convert the 2D data provided by the pandas DataFrame to the 1D form expected by the fit function. Try using a 1D list first: ``` Y_train = list(train.P1.values) ``` If this does not solve the problem, you can try the solution mentioned in [MultinomialNB error: "Unknown Label Type"](http://stackoverflow.com/questions/20722986): ``` Y_train = np.asarray(train['P1'], dtype="|S6") ``` So your code becomes: ``` colsRes = ['P1'] X_train = train.drop(colsRes, axis = 1) Y_train = np.asarray(train['P1'], dtype="|S6") rf = RandomForestClassifier(n_estimators=100) rf.fit(X_train, Y_train) ```
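The shape mismatch itself can be seen with plain NumPy, independent of scikit-learn (a sketch using made-up values from the question):

```python
import numpy as np

# df[['P1']].values is 2D: one column, n rows
col = np.array([[0.0], [0.0], [49.4], [47.4]])
print(col.shape)      # (4, 1) -- what the DataFrame slice produces

y = col.ravel()       # flatten to the 1D shape fit() expects
print(y.shape)        # (4,)
```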
Function with dictionaries as optional arguments - Python
34,251,328
10
2015-12-13T12:59:11Z
34,251,438
13
2015-12-13T13:12:25Z
[ "python", "dictionary", "optional-arguments" ]
I'm trying to create a function that might receive as input many or a few dictionaries. I'm using the following code: ``` def merge_many_dics(dic1,dic2,dic3=True,dic4=True,dic5=True,dic6=True,dic7=True,dic8=True,dic9=True,dic10=True): """ Merging up to 10 dictionaries with same keys and different values :return: a dictionary containing the common dates as keys and both values as values """ manydics = {} for k in dic1.viewkeys() & dic2.viewkeys() & dic3.viewkeys() & dic4.viewkeys() & dic5.viewkeys() & dic6.viewkeys()\ & dic7.viewkeys() & dic8.viewkeys() & dic9.viewkeys() & dic10.viewkeys(): manydics[k] = (dic1[k], dic2[k],dic3[k],dic4[k],dic5[k],dic6[k],dic7[k],dic8[k],dic9[k],dic10[k]) return manydics ``` Note that I'm trying to equal the arguments dic3, dic4, dic5 and so on to "True", so when they are not specified and are called in the function nothing happens. However I'm getting the following error: ``` Traceback (most recent call last): File "/Users/File.py", line 616, in <module> main_dic=merge_many_dics(dic1,dic2,dic3,dic4) File "/Users/File.py", line 132, in merge_many_dics & dic7.viewkeys() & dic8.viewkeys() & dic9.viewkeys() & dic10.viewkeys(): AttributeError: 'bool' object has no attribute 'viewkeys' ``` Anyone to enlight my journey available?
Using [arbitrary argument list](https://docs.python.org/2/tutorial/controlflow.html#arbitrary-argument-lists), the function can be called with an arbitrary number of arguments: ``` >>> def merge_many_dics(*dicts): ... common_keys = reduce(lambda a, b: a & b, (d.viewkeys() for d in dicts)) ... return {key: tuple(d[key] for d in dicts) for key in common_keys} ... >>> merge_many_dics({1:2}, {1:3}, {1:4, 2:5}) {1: (2, 3, 4)} ```
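A note for Python 3 readers: `dict.viewkeys()` is gone there (`dict.keys()` already returns a view) and `reduce` moved to `functools`, so a Python 3 rendering of the same idea might look like:

```python
from functools import reduce

def merge_many_dicts(*dicts):
    # intersect all key views, then collect each common key's values
    common_keys = reduce(lambda a, b: a & b, (d.keys() for d in dicts))
    return {key: tuple(d[key] for d in dicts) for key in common_keys}

print(merge_many_dicts({1: 2}, {1: 3}, {1: 4, 2: 5}))  # {1: (2, 3, 4)}
```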
Does Python's csv.reader(filename) REALLY return a list? Doesn't seem so
34,258,007
4
2015-12-14T00:14:53Z
34,258,024
11
2015-12-14T00:17:06Z
[ "python", "csv" ]
So I am still learning Python and I am on learning about reading files, csv files today. The lesson I just watched tells me that using ``` csv.reader(filename) ``` returns a list. So I wrote the following code: ``` import csv my_file = open(file_name.csv, mode='r') parsed_data = csv.reader(my_file) print(parsed_data) ``` and what it prints is ``` <_csv.reader object at 0x0000000002838118> ``` If what it outputs is a list, shouldn't I be getting a list, ie, something like this? ``` [value1, value2, value3] ```
What you get is an [*iterable*](https://wiki.python.org/moin/Iterator), i.e. an object which will give you a sequence of other objects (in this case, strings). You can pass it to a `for` loop, or use `list()` to get an actual list: ``` parsed_data = list(csv.reader(my_file)) ``` The reason it is designed this way is that it allows you to work with files that are larger than the amount of memory you have on your computer (or simply files that are large enough to consume inconvenient amounts of memory if you were to load all of its contents into a list). With an iterable, you may choose to look at one element at a time and e.g. throw it out of memory again before reading the next.
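A small self-contained demonstration, using `io.StringIO` as a stand-in for a real file, of both ways to consume the reader:

```python
import csv
import io

fake_file = io.StringIO("name,score\nalice,3\nbob,5\n")
reader = csv.reader(fake_file)   # an iterator over rows, not a list

header = next(reader)            # pull one row at a time...
rows = list(reader)              # ...or drain the rest into a real list

print(header)  # ['name', 'score']
print(rows)    # [['alice', '3'], ['bob', '5']]
```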
Getting good mixing with many input datafiles in tensorflow
34,258,043
3
2015-12-14T00:20:51Z
34,258,214
8
2015-12-14T00:43:47Z
[ "python", "neural-network", "binaryfiles", "tensorflow" ]
I'm working with tensorflow hoping to train a deep CNN to do move prediction for the game Go. The dataset I created consists of 100,000 binary data files, where each datafile corresponds to a recorded game and contains roughly 200 training samples (one for each move in the game). I believe it will be very important to get good mixing when using SGD. I'd like my batches to contain samples from different games AND samples from different stages of the games. So for example simply reading one sample from the start of 100 files and shuffling isn't good b/c those 100 samples will all be the first move of each game. I have read the tutorial on feeding data from files but I'm not sure if their provided libraries do what I need. If I were to hard code it myself I would basically initialize a bunch of file pointers to random locations within each file and then pull samples from random files, incrementing the file pointers accordingly. So, my question is does tensorflow provide this sort of functionality or would it be easier to write my own code for creating batches?
Yes - what you want is to use a combination of two things. First, randomly shuffle the order in which you input your datafiles, by reading from them using a [`tf.train.string_input_producer`](https://www.tensorflow.org/versions/master/api_docs/python/io_ops.html#string_input_producer) with `shuffle=True` that feeds into whatever input method you use (if you can put your examples into tf.Example proto format, that's easy to use with `parse_example`). To be very clear, you put the list of filenames in the `string_input_producer` and then read them with another method such as `read_file`, etc. Second, you need to mix at a finer granularity. You can accomplish this by feeding the input examples into a [`tf.train.shuffle_batch`](https://www.tensorflow.org/versions/master/api_docs/python/io_ops.html#shuffle_batch) node with a large capacity and large value of `min_after_dequeue`. One particularly nice way is to use a `shuffle_batch_join` that receives input from multiple files, so that you get a lot of mixing. Set the capacity of the batch big enough to mix well without exhausting your RAM. Tens of thousands of examples usually works pretty well. Keep in mind that the batch functions add a `QueueRunner` to the `QUEUE_RUNNERS` collection, so you need to run [`tf.train.start_queue_runners()`](https://www.tensorflow.org/versions/master/api_docs/python/train.html#start_queue_runners)
Patch __call__ of a function
34,261,111
17
2015-12-14T06:40:59Z
34,356,508
11
2015-12-18T13:11:34Z
[ "python", "time", "mocking", "python-mock" ]
I need to patch current datetime in tests. I am using this solution: ``` def _utcnow(): return datetime.datetime.utcnow() def utcnow(): """A proxy which can be patched in tests. """ # another level of indirection, because some modules import utcnow return _utcnow() ``` Then in my tests I do something like: ``` with mock.patch('***.utils._utcnow', return_value=***): ... ``` But today an idea came to me, that I could make the implementation simpler by patching `__call__` of function `utcnow` instead of having an additional `_utcnow`. This does not work for me: ``` from ***.utils import utcnow with mock.patch.object(utcnow, '__call__', return_value=***): ... ``` How to do this elegantly?
**[EDIT]** Maybe the most interesting part of this question is: **why can't I patch `somefunction.__call__`?** Because the function doesn't use `__call__`'s code; rather, `__call__` (a method-wrapper object) uses the function's code. I haven't found any well-sourced documentation about that, but I can demonstrate it (Python 2.7): ``` >>> def f(): ... return "f" ... >>> def g(): ... return "g" ... >>> f <function f at 0x7f1576381848> >>> f.__call__ <method-wrapper '__call__' of function object at 0x7f1576381848> >>> g <function g at 0x7f15763817d0> >>> g.__call__ <method-wrapper '__call__' of function object at 0x7f15763817d0> ``` Replace `f`'s code by `g`'s code: ``` >>> f.func_code = g.func_code >>> f() 'g' >>> f.__call__() 'g' ``` Of course the `f` and `f.__call__` references are not changed: ``` >>> f <function f at 0x7f1576381848> >>> f.__call__ <method-wrapper '__call__' of function object at 0x7f1576381848> ``` Recover the original implementation and copy the `__call__` reference instead: ``` >>> def f(): ... return "f" ... >>> f() 'f' >>> f.__call__ = g.__call__ >>> f() 'f' >>> f.__call__() 'g' ``` This has no effect on the `f` function. **Note:** in Python 3 you should use `__code__` instead of `func_code`. I hope that somebody can point me to the documentation that explains this behavior. You have a way to work around that: in `utils` you can define ``` class Utcnow(object): def __call__(self): return datetime.datetime.utcnow() utcnow = Utcnow() ``` And now your patch can work like a charm. --- What follows is the original answer, which I still consider the best way to implement your tests. I have my own *golden rule*: **never patch protected methods**. In this case things are a little smoother because the protected method was introduced just for testing, though I cannot see why. The real problem here is that you cannot patch `datetime.datetime.utcnow` directly (it is a C extension, as you wrote in the comment above). What you can do is patch `datetime` by wrapping the standard behavior and overriding the `utcnow` function: ``` >>> with mock.patch("datetime.datetime", mock.Mock(wraps=datetime.datetime, utcnow=mock.Mock(return_value=3))): ... print(datetime.datetime.utcnow()) ... 3 ``` OK, that is not really clear and neat, but you can introduce your own helper like ``` def mock_utcnow(return_value): return mock.Mock(wraps=datetime.datetime, utcnow=mock.Mock(return_value=return_value)) ``` and now ``` mock.patch("datetime.datetime", mock_utcnow(***)) ``` does exactly what you need, without any other layer and for every kind of import. Another solution is to import `datetime` in `utils` and patch `***.utils.datetime`; that gives you some freedom to change the `datetime` reference implementation without changing your tests (in that case, take care to change the `mock_utcnow()` `wraps` argument too).
Patch __call__ of a function
34,261,111
17
2015-12-14T06:40:59Z
34,443,595
8
2015-12-23T20:56:04Z
[ "python", "time", "mocking", "python-mock" ]
I need to patch current datetime in tests. I am using this solution: ``` def _utcnow(): return datetime.datetime.utcnow() def utcnow(): """A proxy which can be patched in tests. """ # another level of indirection, because some modules import utcnow return _utcnow() ``` Then in my tests I do something like: ``` with mock.patch('***.utils._utcnow', return_value=***): ... ``` But today an idea came to me, that I could make the implementation simpler by patching `__call__` of function `utcnow` instead of having an additional `_utcnow`. This does not work for me: ``` from ***.utils import utcnow with mock.patch.object(utcnow, '__call__', return_value=***): ... ``` How to do this elegantly?
When you patch `__call__` of a function, you are setting the `__call__` attribute of that **instance**. Python actually calls the `__call__` method defined on the class. For example: ``` >>> class A(object): ... def __call__(self): ... print 'a' ... >>> a = A() >>> a() a >>> def b(): print 'b' ... >>> b() b >>> a.__call__ = b >>> a() a >>> a.__call__ = b.__call__ >>> a() a ``` Assigning anything to `a.__call__` is pointless. However: ``` >>> A.__call__ = b.__call__ >>> a() b ``` ## TLDR; `a()` does not call `a.__call__`. It calls `type(a).__call__(a)`. ### Links There is a good explanation of why that happens in [answer to *"Why `type(x).__enter__(x)` instead of `x.__enter__()` in Python standard contextlib?"*](http://stackoverflow.com/a/34491119/389289). This behaviour is documented in Python documentation on [Special method lookup](https://docs.python.org/3/reference/datamodel.html#special-method-lookup).
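A minimal Python 3 demonstration of that lookup rule:

```python
class A:
    def __call__(self):
        return 'class'

a = A()
a.__call__ = lambda: 'instance'   # stored on the instance, but...

print(a())           # 'class'    -- a() uses type(a).__call__, ignoring it
print(a.__call__())  # 'instance' -- plain attribute access does find it
```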
Find "one letter that appears twice" in a string
34,261,346
55
2015-12-14T07:00:09Z
34,261,397
32
2015-12-14T07:03:35Z
[ "python", "regex" ]
I'm trying to catch if one letter that appears twice in a string using RegEx (or maybe there's some better ways?), for example my string is: ``` ugknbfddgicrmopn ``` The output would be: ``` dd ``` However, I've tried something like: ``` re.findall('[a-z]{2}', 'ugknbfddgicrmopn') ``` but in this case, it returns: ``` ['ug', 'kn', 'bf', 'dd', 'gi', 'cr', 'mo', 'pn'] # the except output is `['dd']` ``` --- I also have a way to get the expect output: ``` >>> l = [] >>> tmp = None >>> for i in 'ugknbfddgicrmopn': ... if tmp != i: ... tmp = i ... continue ... l.append(i*2) ... ... >>> l ['dd'] >>> ``` But that's too complex... --- If it's `'abbbcppq'`, then only catch: ``` abbbcppq ^^ ^^ ``` So the output is: ``` ['bb', 'pp'] ``` --- Then, if it's `'abbbbcppq'`, catch `bb` twice: ``` abbbbcppq ^^^^ ^^ ``` So the output is: ``` ['bb', 'bb', 'pp'] ```
As a Pythonic way, you can use the `zip` function within a list comprehension: ``` >>> s = 'abbbcppq' >>> >>> [i+j for i,j in zip(s,s[1:]) if i==j] ['bb', 'bb', 'pp'] ``` If you are dealing with a large string, you can use the `iter()` function to convert the string to an iterator and `itertools.tee()` to create two independent iterators; then call `next()` on the second iterator to consume its first item, and call the `zip` class (in Python 2.X use `itertools.izip()`, which returns an iterator) with these iterators. ``` >>> from itertools import tee >>> first = iter(s) >>> second, first = tee(first) >>> next(second) 'a' >>> [i+j for i,j in zip(first,second) if i==j] ['bb', 'bb', 'pp'] ``` # Benchmark with `RegEx` recipe: ``` # ZIP ~ $ python -m timeit --setup "s='abbbcppq'" "[i+j for i,j in zip(s,s[1:]) if i==j]" 1000000 loops, best of 3: 1.56 usec per loop # REGEX ~ $ python -m timeit --setup "s='abbbcppq';import re" "[i[0] for i in re.findall(r'(([a-z])\2)', 'abbbbcppq')]" 100000 loops, best of 3: 3.21 usec per loop ``` --- After your last edit: as mentioned in the comments, if you want to match only one pair of `b`s in strings like `"abbbcppq"`, you can use `finditer()`, which returns an iterator of match objects, and extract the result with the `group()` method: ``` >>> import re >>> >>> s = "abbbcppq" >>> [item.group(0) for item in re.finditer(r'([a-z])\1',s,re.I)] ['bb', 'pp'] ``` Note that `re.I` is the *IGNORECASE* flag, which makes the RegEx match uppercase letters too.
Find "one letter that appears twice" in a string
34,261,346
55
2015-12-14T07:00:09Z
34,261,461
9
2015-12-14T07:08:25Z
[ "python", "regex" ]
I'm trying to catch if one letter that appears twice in a string using RegEx (or maybe there's some better ways?), for example my string is: ``` ugknbfddgicrmopn ``` The output would be: ``` dd ``` However, I've tried something like: ``` re.findall('[a-z]{2}', 'ugknbfddgicrmopn') ``` but in this case, it returns: ``` ['ug', 'kn', 'bf', 'dd', 'gi', 'cr', 'mo', 'pn'] # the except output is `['dd']` ``` --- I also have a way to get the expect output: ``` >>> l = [] >>> tmp = None >>> for i in 'ugknbfddgicrmopn': ... if tmp != i: ... tmp = i ... continue ... l.append(i*2) ... ... >>> l ['dd'] >>> ``` But that's too complex... --- If it's `'abbbcppq'`, then only catch: ``` abbbcppq ^^ ^^ ``` So the output is: ``` ['bb', 'pp'] ``` --- Then, if it's `'abbbbcppq'`, catch `bb` twice: ``` abbbbcppq ^^^^ ^^ ``` So the output is: ``` ['bb', 'bb', 'pp'] ```
Using a backreference, it is very easy: ``` import re p = re.compile(ur'([a-z])\1{1,}') re.findall(p, u"ugknbfddgicrmopn") #output: [u'd'] re.findall(p,"abbbcppq") #output: ['b', 'p'] ``` For more details, you can refer to a similar question in Perl: [Regular expression to match any character being repeated more than 10 times](http://stackoverflow.com/questions/1660694)
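Note that `findall` with a capture group returns only the captured letter (`'d'`, not `'dd'`). If the full matched pairs are wanted, one sketch is `finditer()` with `group(0)`:

```python
import re

pattern = re.compile(r'([a-z])\1')   # a letter followed by the same letter

print([m.group(0) for m in pattern.finditer('ugknbfddgicrmopn')])  # ['dd']
print([m.group(0) for m in pattern.finditer('abbbbcppq')])         # ['bb', 'bb', 'pp']
```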
Find "one letter that appears twice" in a string
34,261,346
55
2015-12-14T07:00:09Z
34,261,540
48
2015-12-14T07:13:42Z
[ "python", "regex" ]
I'm trying to catch if one letter that appears twice in a string using RegEx (or maybe there's some better ways?), for example my string is: ``` ugknbfddgicrmopn ``` The output would be: ``` dd ``` However, I've tried something like: ``` re.findall('[a-z]{2}', 'ugknbfddgicrmopn') ``` but in this case, it returns: ``` ['ug', 'kn', 'bf', 'dd', 'gi', 'cr', 'mo', 'pn'] # the except output is `['dd']` ``` --- I also have a way to get the expect output: ``` >>> l = [] >>> tmp = None >>> for i in 'ugknbfddgicrmopn': ... if tmp != i: ... tmp = i ... continue ... l.append(i*2) ... ... >>> l ['dd'] >>> ``` But that's too complex... --- If it's `'abbbcppq'`, then only catch: ``` abbbcppq ^^ ^^ ``` So the output is: ``` ['bb', 'pp'] ``` --- Then, if it's `'abbbbcppq'`, catch `bb` twice: ``` abbbbcppq ^^^^ ^^ ``` So the output is: ``` ['bb', 'bb', 'pp'] ```
You need to use a capturing-group-based regex, and define your regex as a raw string. ``` >>> re.search(r'([a-z])\1', 'ugknbfddgicrmopn').group() 'dd' >>> [i+i for i in re.findall(r'([a-z])\1', 'abbbbcppq')] ['bb', 'bb', 'pp'] ``` or ``` >>> [i[0] for i in re.findall(r'(([a-z])\2)', 'abbbbcppq')] ['bb', 'bb', 'pp'] ``` Note that `re.findall` here returns a list of tuples, with the characters matched by the first group as the first element and those of the second group as the second element. For our case the characters within the first group are enough, so I used `i[0]`.
Adding lambda functions with the same operator in python
34,261,390
16
2015-12-14T07:03:17Z
34,261,500
10
2015-12-14T07:11:08Z
[ "python", "numpy", "scipy" ]
I have a rather lengthy equation that I need to integrate over using scipy.integrate.quad and was wondering if there is a way to add lambda functions to each other. What I have in mind is something like this ``` y = lambda u: u**(-2) + 8 x = lambda u: numpy.exp(-u) f = y + x int = scipy.integrate.quad(f, 0, numpy.inf) ``` The equations that I am really using are far more complicated than I am hinting at here, so for readability it would be useful to break up the equation into smaller, more manageable parts. Is there a way to do with with lambda functions? Or perhaps another way which does not even require lambda functions but will give the same output?
There's no built-in functionality for that, but you can implement it quite easily (with some performance hit, of course): ``` import numpy class Lambda: def __init__(self, func): self._func = func def __add__(self, other): return Lambda( lambda *args, **kwds: self._func(*args, **kwds) + other._func(*args, **kwds)) def __call__(self, *args, **kwds): return self._func(*args, **kwds) y = Lambda(lambda u: u**(-2) + 8) x = Lambda(lambda u: numpy.exp(-u)) print((x + y)(1)) ``` Other operators can be added in a similar way.
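As for "other operators can be added in a similar way" — one way to avoid repeating the boilerplate per operator is to factor the combination logic out. A sketch (plain `math` instead of `numpy`/`scipy`, just to keep it self-contained):

```python
import math

class Lambda:
    def __init__(self, func):
        self._func = func

    def _combine(self, other, op):
        # build a new Lambda that applies op to both results
        return Lambda(lambda *a, **kw: op(self._func(*a, **kw),
                                          other._func(*a, **kw)))

    def __add__(self, other):
        return self._combine(other, lambda p, q: p + q)

    def __mul__(self, other):
        return self._combine(other, lambda p, q: p * q)

    def __call__(self, *a, **kw):
        return self._func(*a, **kw)

y = Lambda(lambda u: u ** (-2) + 8)
x = Lambda(lambda u: math.exp(-u))

print((x + y)(1.0))   # exp(-1) + 9
print((x * y)(1.0))   # exp(-1) * 9
```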
Adding lambda functions with the same operator in python
34,261,390
16
2015-12-14T07:03:17Z
34,261,983
10
2015-12-14T07:47:36Z
[ "python", "numpy", "scipy" ]
I have a rather lengthy equation that I need to integrate over using scipy.integrate.quad and was wondering if there is a way to add lambda functions to each other. What I have in mind is something like this ``` y = lambda u: u**(-2) + 8 x = lambda u: numpy.exp(-u) f = y + x int = scipy.integrate.quad(f, 0, numpy.inf) ``` The equations that I am really using are far more complicated than I am hinting at here, so for readability it would be useful to break up the equation into smaller, more manageable parts. Is there a way to do with with lambda functions? Or perhaps another way which does not even require lambda functions but will give the same output?
With `sympy` you can do function operations like this: ``` >>> import numpy >>> from sympy.utilities.lambdify import lambdify, implemented_function >>> from sympy.abc import u >>> y = implemented_function('y', lambda u: u**(-2) + 8) >>> x = implemented_function('x', lambda u: numpy.exp(-u)) >>> f = lambdify(u, y(u) + x(u)) >>> f(numpy.array([1,2,3])) array([ 9.36787944, 8.13533528, 8.04978707]) ```
Adding lambda functions with the same operator in python
34,261,390
16
2015-12-14T07:03:17Z
34,263,169
16
2015-12-14T09:12:14Z
[ "python", "numpy", "scipy" ]
I have a rather lengthy equation that I need to integrate over using scipy.integrate.quad and was wondering if there is a way to add lambda functions to each other. What I have in mind is something like this ``` y = lambda u: u**(-2) + 8 x = lambda u: numpy.exp(-u) f = y + x int = scipy.integrate.quad(f, 0, numpy.inf) ``` The equations that I am really using are far more complicated than I am hinting at here, so for readability it would be useful to break up the equation into smaller, more manageable parts. Is there a way to do with with lambda functions? Or perhaps another way which does not even require lambda functions but will give the same output?
In Python, you'll normally only use a lambda for very short, simple functions that easily fit inside the line that's creating them. (Some languages have other opinions.) As @DSM hinted in their comment, lambdas are essentially a shortcut to creating functions when it's not worth giving them a name. If you're doing more complex things, or if you need to give the code a name for later reference, a lambda expression won't be much of a shortcut for you -- instead, you might as well `def`ine a plain old function. So instead of assigning the lambda expression to a variable: ``` y = lambda u: u**(-2) + 8 ``` You can define that variable to be a function: ``` def y(u): return u**(-2) + 8 ``` Which gives you room to explain a bit, or be more complex, or whatever you need to do: ``` def y(u): """ Bloopinate the input u should be a positive integer for fastest results. """ offset = 8 bloop = u ** (-2) return bloop + offset ``` Functions and lambdas are both "callable", which means they're essentially interchangable as far as `scipy.integrate.quad()` is concerned. To combine callables, you can use several different techniques. ``` def triple(x): return x * 3 def square(x): return x * x def triple_square(x): return triple(square(x)) def triple_plus_square(x): return triple(x) + square(x) def triple_plus_square_with_explaining_variables(x): tripled = triple(x) squared = square(x) return tripled + squared ``` There are more advanced options that I would only consider if it makes your code clearer (which it probably won't). For example, you can put the callables in a list: ``` all_the_things_i_want_to_do = [triple, square] ``` Once they're in a list, you can use list-based operations to work on them (including applying them in turn to `reduce` the list down to a single value). But if your code is like most code, regular functions that just call each other by name will be the simplest to write and easiest to read.
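For completeness, the "reduce the list down to a single value" idea might look like this (`reduce` lives in `functools` on Python 3):

```python
from functools import reduce

def triple(x):
    return x * 3

def square(x):
    return x * x

all_the_things_i_want_to_do = [triple, square]

def combined(x):
    # apply every callable to x and sum the results
    return reduce(lambda acc, f: acc + f(x), all_the_things_i_want_to_do, 0)

print(combined(4))  # 3*4 + 4*4 = 28
```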
How can auto create uuid column with django
34,264,391
2
2015-12-14T10:13:04Z
34,264,443
7
2015-12-14T10:15:27Z
[ "python", "mysql", "django" ]
I'm using django to create database tables for mysql,and I want it can create a column which type is uuid,I hope it can generate the uuid by itself,that means each time insert a record,I needn't specify a uuid for the model object.How can I make it,thanks!
If you're using Django >= 1.8, you can [use a `UUIDField`](https://docs.djangoproject.com/en/1.9/ref/models/fields/#django.db.models.UUIDField): ``` import uuid from django.db import models class MyUUIDModel(models.Model): id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False) ``` Passing `default = uuid.uuid4` auto-populates new records with a random UUID (but note that this will be done in Python code, not at the database level). --- If you're using an older version of Django, you can either upgrade, or [use `django-extensions`, which provides a `UUIDField` as well](http://django-extensions.readthedocs.org/en/latest/field_extensions.html).
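One detail worth isolating: `default=uuid.uuid4` passes the *callable*, and Django calls it once per new record. Writing `default=uuid.uuid4()` (with parentheses) would instead evaluate once at class-definition time and bake a single fixed UUID into the default. The stdlib half of that is easy to check:

```python
import uuid

a = uuid.uuid4()
b = uuid.uuid4()

print(a.version)   # 4 -- random UUIDs
print(a != b)      # True -- each call yields a fresh value
```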
XGBoost Categorical Variables: Dummification vs encoding
34,265,102
6
2015-12-14T10:48:22Z
34,346,937
10
2015-12-18T00:55:20Z
[ "python", "categorical-data", "xgboost" ]
When using `XGBoost` we need to convert categorical variables into numeric. Would there be any difference in performance/evaluation metrics between the methods of: 1. dummifying your categorical variables 2. encoding your categorical variables from e.g. (a,b,c) to (1,2,3) ALSO: Would there be any reasons not to go with method 2 by using for example `labelencoder`?
`xgboost` only deals with numeric columns. If you have a feature `[a,b,b,c]` which describes a categorical variable (*i.e. no numeric relationship*), then using [LabelEncoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html) you will simply have this: ``` array([0, 1, 1, 2]) ``` `Xgboost` **will wrongly interpret this feature as having a numeric relationship!** This just maps each string `('a','b','c')` to an integer, nothing more. **Proper way** Using [OneHotEncoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) you will eventually get to this: ``` array([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 1., 0.], [ 0., 0., 1.]]) ``` **This is the proper representation** of a categorical variable for `xgboost` or any other machine learning tool. [Pandas get\_dummies](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html) is a nice tool for creating dummy variables (*and is easier to use, in my opinion*). **Method #2 in the above question will not represent the data properly.**
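The contrast between the two encodings can be reproduced with plain NumPy, without scikit-learn (a sketch):

```python
import numpy as np

cats = np.array(['a', 'b', 'b', 'c'])

# label encoding: each distinct string becomes an integer index
labels, encoded = np.unique(cats, return_inverse=True)
print(encoded)                     # [0 1 1 2] -- implies a bogus ordering

# one-hot encoding: one indicator column per category
one_hot = np.eye(len(labels))[encoded]
print(one_hot)
```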
Error in function to return 3 largest values from a list of numbers
34,267,961
14
2015-12-14T13:11:43Z
34,268,019
12
2015-12-14T13:15:05Z
[ "python", "numbers", "max" ]
I have this data file and I have to find the 3 largest numbers it contains ``` 24.7 25.7 30.6 47.5 62.9 68.5 73.7 67.9 61.1 48.5 39.6 20.0 16.1 19.1 24.2 45.4 61.3 66.5 72.1 68.4 60.2 50.9 37.4 31.1 10.4 21.6 37.4 44.7 53.2 68.0 73.7 68.2 60.7 50.2 37.2 24.6 21.5 14.7 35.0 48.3 54.0 68.2 69.6 65.7 60.8 49.1 33.2 26.0 19.1 20.6 40.2 50.0 55.3 67.7 70.7 70.3 60.6 50.7 35.8 20.7 14.0 24.1 29.4 46.6 58.6 62.2 72.1 71.7 61.9 47.6 34.2 20.4 8.4 19.0 31.4 48.7 61.6 68.1 72.2 70.6 62.5 52.7 36.7 23.8 11.2 20.0 29.6 47.7 55.8 73.2 68.0 67.1 64.9 57.1 37.6 27.7 13.4 17.2 30.8 43.7 62.3 66.4 70.2 71.6 62.1 46.0 32.7 17.3 22.5 25.7 42.3 45.2 55.5 68.9 72.3 72.3 62.5 55.6 38.0 20.4 17.6 20.5 34.2 49.2 54.8 63.8 74.0 67.1 57.7 50.8 36.8 25.5 20.4 19.6 24.6 41.3 61.8 68.5 72.0 71.1 57.3 52.5 40.6 26.2 ``` Therefore I have written the following code, but it only searches the first row of numbers instead of the entire list. Can anyone help to find the error? ``` def three_highest_temps(f): file = open(f, "r") largest = 0 second_largest = 0 third_largest = 0 temp = [] for line in file: temps = line.split() for i in temps: if i > largest: largest = i elif largest > i > second_largest: second_largest = i elif second_largest > i > third_largest: third_largest = i return largest, second_largest, third_largest print(three_highest_temps("data5.txt")) ```
Your data contains `float` numbers not `integer`. You can use `sorted`: ``` >>> data = '''24.7 25.7 30.6 47.5 62.9 68.5 73.7 67.9 61.1 48.5 39.6 20.0 ... 16.1 19.1 24.2 45.4 61.3 66.5 72.1 68.4 60.2 50.9 37.4 31.1 ... 10.4 21.6 37.4 44.7 53.2 68.0 73.7 68.2 60.7 50.2 37.2 24.6 ... 21.5 14.7 35.0 48.3 54.0 68.2 69.6 65.7 60.8 49.1 33.2 26.0 ... 19.1 20.6 40.2 50.0 55.3 67.7 70.7 70.3 60.6 50.7 35.8 20.7 ... 14.0 24.1 29.4 46.6 58.6 62.2 72.1 71.7 61.9 47.6 34.2 20.4 ... 8.4 19.0 31.4 48.7 61.6 68.1 72.2 70.6 62.5 52.7 36.7 23.8 ... 11.2 20.0 29.6 47.7 55.8 73.2 68.0 67.1 64.9 57.1 37.6 27.7 ... 13.4 17.2 30.8 43.7 62.3 66.4 70.2 71.6 62.1 46.0 32.7 17.3 ... 22.5 25.7 42.3 45.2 55.5 68.9 72.3 72.3 62.5 55.6 38.0 20.4 ... 17.6 20.5 34.2 49.2 54.8 63.8 74.0 67.1 57.7 50.8 36.8 25.5 ... 20.4 19.6 24.6 41.3 61.8 68.5 72.0 71.1 57.3 52.5 40.6 26.2 ... ''' >>> sorted(map(float, data.split()), reverse=True)[:3] [74.0, 73.7, 73.7] ``` If you want to `integer` results ``` >>> temps = sorted(map(float, data.split()), reverse=True)[:3] >>> map(int, temps) [74, 73, 73] ```
Error in function to return 3 largest values from a list of numbers
34,267,961
14
2015-12-14T13:11:43Z
34,268,065
7
2015-12-14T13:18:14Z
[ "python", "numbers", "max" ]
I have this data file and I have to find the 3 largest numbers it contains ``` 24.7 25.7 30.6 47.5 62.9 68.5 73.7 67.9 61.1 48.5 39.6 20.0 16.1 19.1 24.2 45.4 61.3 66.5 72.1 68.4 60.2 50.9 37.4 31.1 10.4 21.6 37.4 44.7 53.2 68.0 73.7 68.2 60.7 50.2 37.2 24.6 21.5 14.7 35.0 48.3 54.0 68.2 69.6 65.7 60.8 49.1 33.2 26.0 19.1 20.6 40.2 50.0 55.3 67.7 70.7 70.3 60.6 50.7 35.8 20.7 14.0 24.1 29.4 46.6 58.6 62.2 72.1 71.7 61.9 47.6 34.2 20.4 8.4 19.0 31.4 48.7 61.6 68.1 72.2 70.6 62.5 52.7 36.7 23.8 11.2 20.0 29.6 47.7 55.8 73.2 68.0 67.1 64.9 57.1 37.6 27.7 13.4 17.2 30.8 43.7 62.3 66.4 70.2 71.6 62.1 46.0 32.7 17.3 22.5 25.7 42.3 45.2 55.5 68.9 72.3 72.3 62.5 55.6 38.0 20.4 17.6 20.5 34.2 49.2 54.8 63.8 74.0 67.1 57.7 50.8 36.8 25.5 20.4 19.6 24.6 41.3 61.8 68.5 72.0 71.1 57.3 52.5 40.6 26.2 ``` Therefore I have written the following code, but it only searches the first row of numbers instead of the entire list. Can anyone help to find the error? ``` def three_highest_temps(f): file = open(f, "r") largest = 0 second_largest = 0 third_largest = 0 temp = [] for line in file: temps = line.split() for i in temps: if i > largest: largest = i elif largest > i > second_largest: second_largest = i elif second_largest > i > third_largest: third_largest = i return largest, second_largest, third_largest print(three_highest_temps("data5.txt")) ```
Your `return` statement is inside the `for` loop. Once return is reached, the function terminates, so the loop never gets into a second iteration. Move the `return` outside the loop by reducing indentation. ``` for line in file: temps = line.split() for i in temps: if i > largest: largest = i elif largest > i > second_largest: second_largest = i elif second_largest > i > third_largest: third_largest = i return largest, second_largest, third_largest ``` In addition, your comparisons won't work, because `line.split()` returns a list of strings, not floats. (As has been pointed out, your data consists of floats, not ints. I'm assuming the task is to find the largest float.) So let's convert the strings using `float()` Your code still won't be correct, though, because when you find a new largest value, you completely discard the old one. Instead you should now consider it the second largest known value. Same rule applies for second to third largest. ``` for line in file: temps = line.split() for temp_string in temps: i = float(temp_string) if i > largest: third_largest = second_largest second_largest = largest largest = i elif largest > i > second_largest: third_largest = second_largest second_largest = i elif second_largest > i > third_largest: third_largest = i return largest, second_largest, third_largest ``` Now there is one last issue: You overlook cases where i is identical with one of the largest values. In such a case `i > largest` would be false, but so would `largest > i`. You could change either of these comparisons to `>=` to fix this. Instead, let us simplify the `if` clauses by considering that the `elif` conditions are only considered after all previous conditions were already found to be false. When we reach the first `elif`, we already know that `i` can not be larger than `largest`, so it suffices to compare it to `second largest`. The same goes for the second `elif`. 
``` for line in file: temps = line.split() for temp_string in temps: i = float(temp_string) if i > largest: third_largest = second_largest second_largest = largest largest = i elif i > second_largest: third_largest = second_largest second_largest = i elif i > third_largest: third_largest = i return largest, second_largest, third_largest ``` This way we avoid accidentally filtering out the `i == largest` and `i == second_largest` edge cases.
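As a quick sanity check, the simplified logic above can be wrapped in a runnable function and compared against `heapq.nlargest`; the function name and the shortened sample data below are illustrative, not from the original question.

```python
import heapq

def three_highest(values):
    """Track the three largest values in a single pass over the data."""
    largest = second_largest = third_largest = float("-inf")
    for v in values:
        i = float(v)
        if i > largest:
            third_largest = second_largest
            second_largest = largest
            largest = i
        elif i > second_largest:
            third_largest = second_largest
            second_largest = i
        elif i > third_largest:
            third_largest = i
    return largest, second_largest, third_largest

data = [24.7, 25.7, 30.6, 47.5, 62.9, 68.5, 73.7, 67.9, 74.0, 73.2]
print(three_highest(data))   # (74.0, 73.7, 73.2)
```

Seeding the three trackers with negative infinity instead of the original `0` also keeps the function correct for all-negative data.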
Error in function to return 3 largest values from a list of numbers
34,267,961
14
2015-12-14T13:11:43Z
34,268,159
8
2015-12-14T13:22:30Z
[ "python", "numbers", "max" ]
I have this data file and I have to find the 3 largest numbers it contains ``` 24.7 25.7 30.6 47.5 62.9 68.5 73.7 67.9 61.1 48.5 39.6 20.0 16.1 19.1 24.2 45.4 61.3 66.5 72.1 68.4 60.2 50.9 37.4 31.1 10.4 21.6 37.4 44.7 53.2 68.0 73.7 68.2 60.7 50.2 37.2 24.6 21.5 14.7 35.0 48.3 54.0 68.2 69.6 65.7 60.8 49.1 33.2 26.0 19.1 20.6 40.2 50.0 55.3 67.7 70.7 70.3 60.6 50.7 35.8 20.7 14.0 24.1 29.4 46.6 58.6 62.2 72.1 71.7 61.9 47.6 34.2 20.4 8.4 19.0 31.4 48.7 61.6 68.1 72.2 70.6 62.5 52.7 36.7 23.8 11.2 20.0 29.6 47.7 55.8 73.2 68.0 67.1 64.9 57.1 37.6 27.7 13.4 17.2 30.8 43.7 62.3 66.4 70.2 71.6 62.1 46.0 32.7 17.3 22.5 25.7 42.3 45.2 55.5 68.9 72.3 72.3 62.5 55.6 38.0 20.4 17.6 20.5 34.2 49.2 54.8 63.8 74.0 67.1 57.7 50.8 36.8 25.5 20.4 19.6 24.6 41.3 61.8 68.5 72.0 71.1 57.3 52.5 40.6 26.2 ``` Therefore I have written the following code, but it only searches the first row of numbers instead of the entire list. Can anyone help to find the error? ``` def three_highest_temps(f): file = open(f, "r") largest = 0 second_largest = 0 third_largest = 0 temp = [] for line in file: temps = line.split() for i in temps: if i > largest: largest = i elif largest > i > second_largest: second_largest = i elif second_largest > i > third_largest: third_largest = i return largest, second_largest, third_largest print(three_highest_temps("data5.txt")) ```
You only get the max elements for the first line because you `return` at the end of the first iteration; you should de-indent the `return` statement. Sorting the data and picking the first 3 elements runs in O(n log n). Note the clause order in the comprehension below: the `for line in file` clause has to come first, since `line` is used by the inner clause. ``` data = [float(v) for line in file for v in line.split()] sorted(data, reverse=True)[:3] ``` It is perfectly fine for 144 elements. You can also avoid the full sort with a heapq, which finds the k largest values in O(n log k) time: ``` import heapq heapq.nlargest(3, data) ```
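A minimal runnable version of this approach, using `io.StringIO` as a stand-in for the open file (the sample numbers here are illustrative):

```python
import heapq
import io

# Stand-in for an open file object with two lines of readings.
file = io.StringIO("24.7 25.7 30.6\n47.5 74.0 73.2\n")

# The outer loop clause comes first, the inner clause second.
data = [float(v) for line in file for v in line.split()]

print(sorted(data, reverse=True)[:3])   # [74.0, 73.2, 47.5]
print(heapq.nlargest(3, data))          # [74.0, 73.2, 47.5]
```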
Type hints in namedtuple
34,269,772
9
2015-12-14T14:43:24Z
34,269,877
12
2015-12-14T14:47:50Z
[ "python", "python-3.x", "python-3.5", "type-hinting" ]
Consider following piece of code: ``` from collections import namedtuple point = namedtuple("Point", ("x:int", "y:int")) ``` The Code above is just a way to demonstrate as to what I am trying to achieve. I would like to make `namedtuple` with type hints. Do you know any elegant way how to achieve result as intended?
You can use [`typing.NamedTuple`](https://docs.python.org/3/library/typing.html#typing.NamedTuple) From the docs: > *Typed version* of `namedtuple`. ``` >>> import typing >>> Point = typing.NamedTuple("Point", [('x', int), ('y', int)]) ``` This is available only in Python 3.5 and later.
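For reference, later Python versions (3.6.1 and newer) also accept a class-based syntax for the same thing; this sketch assumes such an interpreter is available.

```python
from typing import NamedTuple

class Point(NamedTuple):
    x: int
    y: int

p = Point(3, 4)
print(p)            # Point(x=3, y=4)
print(p._fields)    # ('x', 'y')
```

Instances behave exactly like the `namedtuple` equivalent: they are tuples, so `p == (3, 4)` holds.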
django 1.8 does not work on CentOs 6.5 server
34,271,160
3
2015-12-14T15:47:15Z
34,271,260
9
2015-12-14T15:51:55Z
[ "python", "django", "centos6.5" ]
``` Installing collected packages: Django Successfully installed Django-1.8 [root@manage ~]# PYTHON -bash: PYTHON: command not found [root@manage ~]# python Python 2.6.6 (r266:84292, Jan 22 2014, 09:42:36) [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import django Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.6/site-packages/django/__init__.py", line 1, in <module> from django.utils.version import get_version File "/usr/lib/python2.6/site-packages/django/utils/version.py", line 7, in <module> from django.utils.lru_cache import lru_cache File "/usr/lib/python2.6/site-packages/django/utils/lru_cache.py", line 28 fasttypes = {int, str, frozenset, type(None)}, ^ SyntaxError: invalid syntax >>> ``` hello ,I am new to Django and CentOS ,,I just install django 1.8 by pip successfully,,but when I try to import django in python shell ,,it shows the error message above,,,can any one tell me what's happening?? thank you !
> Django 1.8 requires Python 2.7, 3.2, 3.3, 3.4, or 3.5. <https://docs.djangoproject.com/fr/1.9/releases/1.8/#python-compatibility> Your shell is running Python 2.6.6, which is why the import fails: the line `fasttypes = {int, str, frozenset, type(None)}` in Django's source uses a set literal, a syntax that was only added in Python 2.7, hence the `SyntaxError`. Install Python 2.7 or newer, or use a Django release that still supports Python 2.6.
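A hedged sketch of a guard you could run before importing Django; the helper name is made up, and the accepted version list simply mirrors the quoted release notes.

```python
import sys

def python_ok_for_django_18(version_info=None):
    """Return True if the interpreter matches Django 1.8's supported versions."""
    major, minor = (version_info or sys.version_info)[:2]
    if major == 2:
        return minor == 7
    if major == 3:
        return 2 <= minor <= 5
    return False

print(python_ok_for_django_18((2, 6, 6)))   # False: upgrade before importing django
print(python_ok_for_django_18((3, 4, 0)))   # True
```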
Install python3-venv module on linux mint
34,271,982
11
2015-12-14T16:29:03Z
34,272,200
29
2015-12-14T16:40:22Z
[ "python", "linux", "python-3.x", "mint", "python-venv" ]
I was able to move to Linux mint 17.3 64 bit version from my Linux mint 16. This was long awaited migration. After moving to Linux Mint 17.3, I am not able to the install python3-venv module, which is said to be the replacement for virtualenv in python 3.x. In my linux mint 16 I had access to pyvenv-3.4 tool. I dont know when I installed that module in Linux mint 16. Anybody faced this issue ? ``` python -m venv test The virtual environment was not created successfully because ensurepip is not available. On Debian/Ubuntu systems, you need to install the python3-venv package using the following command. apt-get install python3-venv You may need to use sudo with that command. After installing the python3-venv package, recreate your virtual environment. izero@Ganesha ~/devel $ sudo apt-get install python3-venv [sudo] password for izero: Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package python3-venv ```
Try running this command: ``` sudo apt-get install python3.4-venv ``` Then use this: ``` python3 -m venv test ``` The package name is `python3.4-venv`, not `python3-venv`.
Can I conditionally change which function I'm calling?
34,274,164
3
2015-12-14T18:34:54Z
34,274,216
7
2015-12-14T18:37:47Z
[ "python" ]
Sorry if this has been asked before, but I couldn't find the words to search that would give me the answer I'm looking for. I'm writing a script that contains a helper function, which, in turn, can call one of several functions which all take the same parameters. In this helper function, I end up having strings of this: ``` if funcName="func1": func1(p1, p2, p3) elif funcName="func2": func2(p1, p2, p3) elif funcName="func3": func3(p1, p2, p3) ... ``` I know that I could use another helper function that would also take the funcName string and distribute it to the appropriate function, but is there a better way to do it? What my brain wants to do is this: ``` funcName(p1, p2, p3) ``` That way I could call an arbitrary function name if I want to. Is something like that possible?
Yes, you can, by using a `dict` mapping names to functions: ``` funcs = dict( func1=func1, func2=func2, func3=func3 ) funcs[funcName](p1, p2, p3) ```
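A runnable sketch of this dispatch-table idea; the function names and parameters are placeholders. Raising a clear error for an unknown name is usually nicer than letting a bare `KeyError` escape.

```python
def func1(p1, p2, p3):
    return ("func1", p1 + p2 + p3)

def func2(p1, p2, p3):
    return ("func2", p1 * p2 * p3)

def dispatch(func_name, p1, p2, p3):
    # Explicit mapping of allowed names to callables.
    funcs = {"func1": func1, "func2": func2}
    try:
        return funcs[func_name](p1, p2, p3)
    except KeyError:
        raise ValueError("unknown function: %r" % func_name)

print(dispatch("func2", 2, 3, 4))   # ('func2', 24)
```

`globals()[func_name]` or `getattr(some_module, func_name)` also work, but an explicit dict limits which functions callers can reach.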
How to print an entire list while not starting by the first item
34,280,147
6
2015-12-15T02:32:29Z
34,280,175
9
2015-12-15T02:35:19Z
[ "python", "list" ]
I'm trying to figure out how to print the following list while not starting by the first item. To be clear: If the list is `[0,1,2,3,4,5,6,7,8]`, I want to print something like `4,5,6,7,8,0,1,2,3` Here's the code: ``` you_can_move_on = False List = [0,1,2,3,4,5,6,7,8] next_player = 3 while not you_can_move_on: next_player = self.get_next_player_index(next_player) you_can_move_on = self.check_if_I_can_move_on print(next_player) def get_next_player_index(self, i): if i == len(self.players): return 0 else: return i+1 def check_if_I_can_move_on(self): return False ```
I think it should be ``` print(List[4:] + List[:4]) ``` That is, slice from the starting index to the end of the list, then append the wrapped-around items that come before it.
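If the starting point changes as the game progresses, `collections.deque` gives the same result without manual slicing; a small sketch:

```python
from collections import deque

items = [0, 1, 2, 3, 4, 5, 6, 7, 8]

d = deque(items)
d.rotate(-4)        # a negative argument rotates to the left
print(list(d))      # [4, 5, 6, 7, 8, 0, 1, 2, 3]
```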
Why is str.translate faster in Python 3.5 compared to Python 3.4?
34,287,893
88
2015-12-15T11:21:05Z
34,287,999
124
2015-12-15T11:26:12Z
[ "python", "string", "python-3.x", "python-internals", "python-3.5" ]
I was trying to remove unwanted characters from a given string using `text.translate()` in Python 3.4. The minimal code is: ``` import sys s = 'abcde12345@#@$#%$' mapper = dict.fromkeys(i for i in range(sys.maxunicode) if chr(i) in '@#$') print(s.translate(mapper)) ``` It works as expected. However the same program when executed in Python 3.4 and Python 3.5 gives a large difference. The code to calculate timings is ``` python3 -m timeit -s "import sys;s = 'abcde12345@#@$#%$'*1000 ; mapper = dict.fromkeys(i for i in range(sys.maxunicode) if chr(i) in '@#$'); " "s.translate(mapper)" ``` The Python 3.4 program takes ***1.3ms*** whereas the same program in Python 3.5 takes only ***26.4μs***. What has improved in Python 3.5 that makes it faster compared to Python 3.4?
**TL;DR - [ISSUE 21118](http://bugs.python.org/issue21118)** --- **The long story** Josh Rosenberg found out that the `str.translate()` function is very slow compared to `bytes.translate`, and raised an [issue](http://bugs.python.org/issue21118) stating that: > In Python 3, `str.translate()` is usually a performance pessimization, not optimization. ### Why was `str.translate()` slow? The main reason `str.translate()` was so slow was that the lookup was done in a Python dictionary. The usage of `maketrans` made this problem worse. The analogous approach for [`bytes`](https://docs.python.org/3/library/functions.html#bytes) builds a C array of 256 items for fast table lookups. Hence the use of the higher-level Python `dict` made `str.translate()` in Python 3.4 very slow. ### What changed? The first approach was to add a small patch, [translate\_writer](http://bugs.python.org/file34691/translate_writer.patch); however, the speed increase was not that pleasing. Soon another patch, [fast\_translate](http://bugs.python.org/file34731/fast_translate.patch), was tested, and it yielded very nice results of up to 55% speedup. The main change, as can be seen from the file, is that the Python dictionary lookup is replaced by a C-level lookup. The speeds are now almost the same as for `bytes`: ``` unpatched patched str.translate 4.55125927699919 0.7898181750006188 str.translate from bytes trans 1.8910855210015143 0.779950579000797 ``` --- A small note here is that the performance enhancement is only prominent for ASCII strings. As J.F.Sebastian mentions in a [comment](http://stackoverflow.com/questions/34287893/why-is-str-translate-so-fast-in-python-3-5-compared-to-python-3-4-1/34287999?noredirect=1#comment56339043_34287999) below, before 3.5 translate worked the same way for both ASCII and non-ASCII cases; from 3.5 onward, the ASCII case is much faster. Earlier, ASCII vs non-ASCII performance was almost the same, but now we can see a great change.
It can be an improvement from 71.6μs to 2.33μs as seen in this [answer](http://stackoverflow.com/questions/29998052/deleting-consonants-from-a-string-in-python/29998062#29998062). The following code demonstrates this ``` python3.5 -m timeit -s "text = 'mJssissippi'*100; d=dict(J='i')" "text.translate(d)" 100000 loops, best of 3: 2.3 usec per loop python3.5 -m timeit -s "text = 'm\U0001F602ssissippi'*100; d={'\U0001F602': 'i'}" "text.translate(d)" 10000 loops, best of 3: 117 usec per loop python3 -m timeit -s "text = 'm\U0001F602ssissippi'*100; d={'\U0001F602': 'i'}" "text.translate(d)" 10000 loops, best of 3: 91.2 usec per loop python3 -m timeit -s "text = 'mJssissippi'*100; d=dict(J='i')" "text.translate(d)" 10000 loops, best of 3: 101 usec per loop ``` Tabulation of the results (μs per loop): ``` Python 3.4 Python 3.5 Ascii 101 2.3 Unicode 91.2 117 ```
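Independent of the timings, a small sketch confirms that the question's dict-based mapper and a `str.maketrans` table produce identical results; the range is restricted to ASCII here only to keep the mapper cheap to build.

```python
s = 'abcde12345@#@$#%$'

# Mapper built the way the question does it (restricted to ASCII for speed).
mapper = dict.fromkeys(i for i in range(128) if chr(i) in '@#$')

# Equivalent table from the three-argument form of str.maketrans.
table = str.maketrans('', '', '@#$')

print(s.translate(mapper))   # abcde12345%
print(s.translate(table))    # abcde12345%
```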
How to remove punctuation marks from a string in Python 3.x using .translate()?
34,293,875
7
2015-12-15T16:05:03Z
34,294,398
17
2015-12-15T16:27:08Z
[ "python" ]
I want to remove all punctuation marks from a text file using .translate() method. It seems to work well under Python 2.x but under Python 3.4 it doesn't seem to do anything. My code is as follows and the output is the same as input text. ``` import string fhand = open("Hemingway.txt") for fline in fhand: fline = fline.rstrip() print(fline.translate(string.punctuation)) ```
You have to create a translation table using `maketrans` that you pass to the `str.translate` method. In Python 3.1 and newer, `maketrans` is a [static method on the `str` type](https://docs.python.org/3/library/stdtypes.html#str.maketrans), so you can use it to map each punctuation character you want removed to `None`. ``` import string # Create a dictionary using a comprehension - this maps every character from # string.punctuation to None. Initialize a translation object from it. translator = str.maketrans({key: None for key in string.punctuation}) s = 'string with "punctuation" inside of it! Does this work? I hope so.' # pass the translator to the string's translate method. print(s.translate(translator)) ``` This should output: ``` string with punctuation inside of it Does this work I hope so ```
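Alternatively, the three-argument form of `str.maketrans` builds the same kind of table more directly: every character in the third argument is mapped to `None`.

```python
import string

# Characters in the third argument are mapped to None (i.e. removed).
translator = str.maketrans('', '', string.punctuation)

s = 'string with "punctuation" inside of it! Does this work? I hope so.'
print(s.translate(translator))
# string with punctuation inside of it Does this work I hope so
```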
Strip removing more characters than expected
34,297,084
2
2015-12-15T18:49:01Z
34,297,147
7
2015-12-15T18:52:29Z
[ "python", "string" ]
Can anyone explain what's going on here: ``` s = 'REFPROP-MIX:METHANOL&WATER' s.lstrip('REFPROP-MIX') # this returns ':METHANOL&WATER' as expected s.lstrip('REFPROP-MIX:') # returns 'THANOL&WATER' ``` What happened to that 'ME'? Is a colon a special character for lstrip? This is particularly confusing because this works as expected: ``` s = 'abc-def:ghi' s.lstrip('abc-def') # returns ':ghi' s.lstrip('abd-def:') # returns 'ghi' ```
`str.lstrip` removes all the characters in its argument from the string, starting at the left. Since all the characters in the left prefix "REFPROP-MIX:ME" are in the argument "REFPROP-MIX:", all those characters are removed. Likewise: ``` >>> s = 'abcadef' >>> s.lstrip('abc') 'def' >>> s.lstrip('cba') 'def' >>> s.lstrip('bacabacabacabaca') 'def' ``` `str.lstrip` does *not* remove whole strings (of length greater than 1) from the left. If you want to do that, use a regular expression with an anchor `^` at the beginning: ``` >>> import re >>> s = 'REFPROP-MIX:METHANOL&WATER' >>> re.sub(r'^REFPROP-MIX:', '', s) 'METHANOL&WATER' ```
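If you cannot use a regex, a small helper built on `str.startswith` removes the prefix as a whole string; on Python 3.9+ the built-in `str.removeprefix` does the same thing. The helper name here is made up for illustration.

```python
def strip_prefix(s, prefix):
    """Remove prefix from s only if s actually starts with it."""
    if s.startswith(prefix):
        return s[len(prefix):]
    return s

print(strip_prefix('REFPROP-MIX:METHANOL&WATER', 'REFPROP-MIX:'))  # METHANOL&WATER
print(strip_prefix('METHANOL&WATER', 'REFPROP-MIX:'))              # unchanged
```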
Python: Find difference between two dictionaries containing lists
34,298,613
10
2015-12-15T20:18:49Z
34,298,690
7
2015-12-15T20:23:31Z
[ "python", "list", "python-2.7", "dictionary" ]
I have two dictionaries that have the following structure: ``` a = {'joe': [24,32,422], 'bob': [1,42,32,24], 'jack':[0,3,222]} b = {'joe': [24], 'bob': [1,42,32]} ``` I would like to retrieve the difference between these two dictionaries which in this case would result as: ``` {'joe': [32,422], 'bob': [24], 'jack':[0,3,222]} ``` I know that I could do this with a messy loop, but I would like to know how can I achieve this in a clean, pythonic fashion? I did try: `a.items() - b.items()` but I get the following error: `unsupported operand type(s) for -: 'dict_values' and 'dict_values'` Thanks for your help
You need to use sets: ``` diff = {} for key in a: diff[key] = list(set(a[key]) - set(b.get(key, []))) print diff ```
Python: Find difference between two dictionaries containing lists
34,298,613
10
2015-12-15T20:18:49Z
34,298,754
11
2015-12-15T20:27:06Z
[ "python", "list", "python-2.7", "dictionary" ]
I have two dictionaries that have the following structure: ``` a = {'joe': [24,32,422], 'bob': [1,42,32,24], 'jack':[0,3,222]} b = {'joe': [24], 'bob': [1,42,32]} ``` I would like to retrieve the difference between these two dictionaries which in this case would result as: ``` {'joe': [32,422], 'bob': [24], 'jack':[0,3,222]} ``` I know that I could do this with a messy loop, but I would like to know how can I achieve this in a clean, pythonic fashion? I did try: `a.items() - b.items()` but I get the following error: `unsupported operand type(s) for -: 'dict_values' and 'dict_values'` Thanks for your help
Assuming that there are no duplicate entries in any of your lists, you can do what you want with `set`s but not with lists: ``` >>> a = {'joe': [24,32,422], 'bob': [1,42,32,24], 'jack':[0,3,222]} >>> b = {'joe': [24], 'bob': [1,42,32]} >>> {key: list(set(a[key]) - set(b.get(key, []))) for key in a} {'joe': [32, 422], 'bob': [24], 'jack': [0, 3, 222]} ``` Note two things: * I convert the set back to a list when I set it as the value * I use `b.get` rather than `b[key]` to handle the case where a key exists in `a` but not in `b` EDIT - using a for loop: I realized that the comprehension may not be that self-explanatory, so this is an equivalent bit of code using a for loop: ``` >>> c = {} >>> for key in a: c[key] = list(set(a[key]) - set(b.get(key, []))) >>> c {'joe': [32, 422], 'bob': [24], 'jack': [0, 3, 222]} ``` EDIT - lose the second set: As Padraic Cunningham mentioned in the comments (as he so often does, bless his soul), you can make use of `set.difference` to avoid explicitly casting your second list to a set: ``` >>> c = {} >>> for key in a: c[key] = list(set(a[key]).difference(b.get(key, []))) >>> c {'joe': [32, 422], 'bob': [24], 'jack': [0, 3, 222]} ``` or with a dict comprehension: ``` >>> {key: list(set(a[key]).difference(b.get(key, []))) for key in a} {'joe': [32, 422], 'bob': [24], 'jack': [0, 3, 222]} ``` or if you want to treat `set.difference` as a class method instead of an instance method: ``` >>> {key: list(set.difference(set(a[key]), b.get(key, []))) for key in a} {'joe': [32, 422], 'bob': [24], 'jack': [0, 3, 222]} ``` Though I find this a tad clunky and I don't really like it as much.
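If the lists may contain duplicates, sets silently collapse them; `collections.Counter` subtraction preserves multiplicity instead. A sketch using the question's data:

```python
from collections import Counter

a = {'joe': [24, 32, 422], 'bob': [1, 42, 32, 24], 'jack': [0, 3, 222]}
b = {'joe': [24], 'bob': [1, 42, 32]}

# Each occurrence in b cancels exactly one occurrence in a.
diff = {key: list((Counter(a[key]) - Counter(b.get(key, []))).elements())
        for key in a}
print(diff)
```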
Django settings Unknown parameters: TEMPLATE_DEBUG
34,298,867
7
2015-12-15T20:32:59Z
34,298,960
14
2015-12-15T20:38:37Z
[ "python", "django", "django-1.9" ]
Hi I'm following the tutorial on the [djangoproject site](https://docs.djangoproject.com/fr/1.9/intro/tutorial03/) and I'm getting an error on my localhost saying: ``` Unknown parameters: TEMPLATE_DEBUG ``` My settings.py looks like this: ``` TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [], 'APP_DIRS': True, 'TEMPLATE_DEBUG':True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] ``` I added the 'TEMPLATE\_DEBUG' on TEMPLATE because otherwise I'm getting the following warning ``` ?: (1_8.W001) The standalone TEMPLATE_* settings were deprecated in Django 1.8 and the TEMPLATES dictionary takes precedence. You must put the values of the following settings into your default TEMPLATES dict: TEMPLATE_DEBUG. ``` My templates folder are in my apps i.e.: ``` my_project_name/polls/templates/polls/index.html ```
I think you need to do: ``` TEMPLATES = [ { # something else 'OPTIONS': { 'debug': DEBUG, }, }, ] ``` *Django used to accept the standalone TEMPLATE\_DEBUG setting, but since **Django >= 1.8** this is no longer allowed; it is replaced by the `'debug'` key inside `OPTIONS` as shown above.* Django [doc](https://docs.djangoproject.com/en/1.9/ref/templates/upgrading/).
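For context, here is a sketch of how the full `TEMPLATES` entry from the question would look after moving the flag into `OPTIONS`; the `DEBUG` value is assumed to come from elsewhere in settings.py.

```python
DEBUG = True  # normally defined earlier in settings.py

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            # 'debug' in OPTIONS replaces the removed TEMPLATE_DEBUG key.
            'debug': DEBUG,
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

print(TEMPLATES[0]['OPTIONS']['debug'])   # True
```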