title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
jupyter giving 404: Not Found error on Windows 7 | 33,031,069 | 4 | 2015-10-09T06:11:42Z | 34,791,817 | 7 | 2016-01-14T14:18:08Z | [
"python",
"jupyter"
] | I installed Anaconda Python 2.7 64-bit on Windows 7 and then updated using
```
conda update conda
```
Later installed
```
conda install jupyter
```
When I tried to run from the same drive on windows using
```
jupyter notebook
```
it launches on Firefox and states
```
404: Not Found
```
On the command it says
```
Refusing to serve hidden directory via 404 Error
```
What needs to be done to get it launched?
Update:
The anaconda install has a shortcut to run jupyter and it works that way - but gives 404 for "jupyter notebook":
```
d:\Anaconda\Scripts/ipython-script.py" notebook
``` | I found that a simple cause of this can be trying to run Jupyter from the root of a drive - running it from C:\ failed. Cricboy007's suggestion to create a folder (no particular name; I called mine notebook), `cd` into it, and then run Jupyter worked for me. |
Add an empty column to spark DataFrame | 33,038,686 | 4 | 2015-10-09T12:45:43Z | 33,041,986 | 8 | 2015-10-09T15:24:05Z | [
"python",
"apache-spark",
"pyspark",
"pyspark-sql"
] | As mentioned in [many](http://stackoverflow.com/questions/29483498/append-a-column-to-data-frame-in-apache-spark-1-3) [other locations](http://apache-spark-user-list.1001560.n3.nabble.com/Append-column-to-Data-Frame-or-RDD-td22385.html) on the web, adding a new column to an existing DataFrame is not straightforward. Unfortunately it is important to have this functionality (even though it is inefficient in a distributed environment) especially when trying to concatenate two `DataFrame`s using `unionAll`.
What is the most elegant workaround for adding a `null` column to a `DataFrame` to facilitate a `unionAll`?
My version goes like this:
```
from pyspark.sql.types import StringType
from pyspark.sql.functions import UserDefinedFunction
to_none = UserDefinedFunction(lambda x: None, StringType())
new_df = old_df.withColumn('new_column', to_none(df_old['any_col_from_old']))
``` | All you need here is a literal and cast:
```
from pyspark.sql.functions import lit
from pyspark.sql.types import StringType
new_df = old_df.withColumn('new_column', lit(None).cast(StringType()))
```
A full example:
```
from pyspark.sql import Row

row = Row("foo", "bar")
df = sc.parallelize([row(1, "2"), row(2, "3")]).toDF()
df.printSchema()
## root
## |-- foo: long (nullable = true)
## |-- bar: string (nullable = true)
new_df = df.withColumn('new_column', lit(None).cast(StringType()))
new_df.printSchema()
## root
## |-- foo: long (nullable = true)
## |-- bar: string (nullable = true)
## |-- new_column: string (nullable = true)
new_df.show()
## +---+---+----------+
## |foo|bar|new_column|
## +---+---+----------+
## | 1| 2| null|
## | 2| 3| null|
## +---+---+----------+
```
A Scala equivalent can be found here: [Create new Dataframe with empty/null field values](http://stackoverflow.com/q/32067467/1560062) |
Why is the time complexity of python's list.append() method O(1)? | 33,044,883 | 6 | 2015-10-09T18:24:37Z | 33,044,923 | 7 | 2015-10-09T18:27:02Z | [
"python",
"python-2.7",
"time-complexity",
"amortized-analysis"
] | As seen in the documentation for [TimeComplexity](https://wiki.python.org/moin/TimeComplexity), Python's `list` type is implemented using an array.
So if an array is being used and we do a few appends, eventually you will have to reallocate space and copy all the information to the new space.
After all that, how can it be O(1) worst case? | If you look at the footnote in the document you linked, you can see that they include a caveat:
> These operations rely on the "Amortized" part of "Amortized Worst
> Case". Individual actions may take surprisingly long, depending on the
> history of the container.
Using [amortized analysis](https://en.wikipedia.org/wiki/Amortized_analysis), even if we have to occasionally perform expensive operations, we can get an upper bound on the 'average' cost of operations when you consider them as a sequence, instead of individually.
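In CPython specifically (an implementation detail, not a language guarantee), the list over-allocates its backing array, so most appends reuse spare capacity and only the occasional append triggers a reallocation. A small sketch makes that visible:

```python
import sys

lst = []
resizes = 0
last = sys.getsizeof(lst)
for _ in range(1000):
    lst.append(None)
    size = sys.getsizeof(lst)
    if size != last:       # the backing array was reallocated and grew
        resizes += 1
        last = size

print(resizes)  # only a few dozen reallocations for 1000 appends
```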
So, any individual operation could be very expensive - O(n) or O(n^2) or something even bigger - but since we know these operations are rare, a sequence of n append operations is guaranteed to take O(n) time overall, which is O(1) amortized per operation. |
How do you alias a type in python? | 33,045,222 | 3 | 2015-10-09T18:47:59Z | 33,045,252 | 7 | 2015-10-09T18:50:29Z | [
"python",
"alias"
] | In some (mostly functional) languages you can do something like this:
```
type row = list(datum)
```
or
```
type row = [datum]
```
So that we can build things like this:
```
type row = [datum]
type table = [row]
type database = [table]
```
Is there a way to do this in Python? You could do it using classes, but Python has quite a few functional aspects, so I was wondering if it could be done in an easier way. | Since Python 3.5 you may use the [typing](https://docs.python.org/3/library/typing.html) module.
Quoting the docs:
A type alias is defined by assigning the type to the alias:
```
from typing import List

Vector = List[float]
```
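The same pattern covers the question's row/table/database chain; a sketch, with `str` standing in for the question's `datum` type:

```python
from typing import List

Datum = str            # stand-in for the question's `datum`
Row = List[Datum]
Table = List[Row]
Database = List[Table]

def first_cell(db: Database) -> Datum:
    # The aliases are just alternative names for the same objects,
    # so this is ordinary nested-list indexing at runtime.
    return db[0][0][0]
```

Note that the aliases carry no runtime enforcement; they exist for readability and for static type checkers.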
To learn more about enforcing types in Python you may want to get familiar with PEPs: [PEP483](https://www.python.org/dev/peps/pep-0483/) and [PEP484](https://www.python.org/dev/peps/pep-0484/).
Python historically used duck typing instead of static typing and had no built-in way of declaring types before the 3.5 release. |
NumPy exception when using MLlib even though Numpy is installed | 33,045,556 | 5 | 2015-10-09T19:12:48Z | 33,082,057 | 15 | 2015-10-12T13:12:27Z | [
"python",
"numpy",
"apache-spark",
"pyspark",
"apache-spark-mllib"
] | Here's the code I'm trying to execute:
```
from pyspark.mllib.recommendation import ALS
iterations=5
lambdaALS=0.1
seed=5L
rank=8
model=ALS.train(trainingRDD,rank,iterations, lambda_=lambdaALS, seed=seed)
```
When I run the `model=ALS.train(trainingRDD,rank,iterations, lambda_=lambdaALS, seed=seed)` command that depends on numpy, the Py4J library that Spark uses throws the following message:
```
Py4JJavaError: An error occurred while calling o587.trainALSModel.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 67.0 failed 4 times, most recent failure: Lost task 0.3 in stage 67.0 (TID 195, 192.168.161.55): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/platform/spark/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
    command = pickleSer._read_with_length(infile)
  File "/home/platform/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
    return self.loads(obj)
  File "/home/platform/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 421, in loads
    return pickle.loads(obj)
  File "/home/platform/spark/python/lib/pyspark.zip/pyspark/mllib/__init__.py", line 27, in <module>
Exception: MLlib requires NumPy 1.4+
```
NumPy 1.10 is installed on the machine stated in the error message.
Moreover I get version 1.9.2 when executing the following command directly in my Jupyter notebook:
`import numpy`
`numpy.version.version`
I am obviously running a version of NumPy older than 1.4 but I don't know where. How can I tell on which machine I need to update my version of NumPy? | It is a bug in the MLlib init code:
```
import numpy
if numpy.version.version < '1.4':
    raise Exception("MLlib requires NumPy 1.4+")
```
`'1.10'` compares as less than `'1.4'` because the versions are compared as strings, character by character.
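A quick sketch of the misfire, plus a safer comparison using tuples of numeric components:

```python
# Versions compared as strings are ordered character by character,
# so '1.10' sorts before '1.4' (because '1' < '4' in the third position):
print('1.10' < '1.4')   # True, even though 1.10 is the newer release

# Comparing tuples of integer components gives the intended ordering:
def as_tuple(version):
    return tuple(int(part) for part in version.split('.'))

print(as_tuple('1.10') < as_tuple('1.4'))   # False
```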
You can use NumPy 1.9.2.
If you have to use NumPy 1.10 and don't want to upgrade to Spark 1.5.1, do a manual update to the code:
[`python/pyspark/mllib/__init__.py`](https://github.com/apache/spark/blob/master/python/pyspark/mllib/__init__.py) |
Is there a better more readable way to coalese columns in pandas | 33,047,277 | 4 | 2015-10-09T21:16:47Z | 36,490,274 | 8 | 2016-04-08T02:01:36Z | [
"python",
"pandas"
] | I often need a new column that is the best I can achieve from other columns, and I have a specific list of preference priorities. I am willing to take the first non-null value.
```
def coalesce(values):
    not_none = (el for el in values if el is not None)
    return next(not_none, None)

df = pd.DataFrame([{'third':'B','first':'A','second':'C'},
                   {'third':'B','first':None,'second':'C'},
                   {'third':'B','first':None,'second':None},
                   {'third':None,'first':None,'second':None},
                   {'third':'B','first':'A','second':None}])
df['combo1'] = df.apply(coalesce, axis=1)
df['combo2'] = df[['second','third','first']].apply(coalesce, axis=1)
print df
```
Results
```
first second third combo1 combo2
0 A C B A C
1 None C B C C
2 None None B B B
3 None None None None None
4 A None B A B
```
this code works (and the result are what I want) but it is not very fast.
I get to pick my priorities if I need to [['second','third','first']]
Coalesce works somewhat like the function of the same name from T-SQL.
I suspect that I may have overlooked an easy way to achieve it with good performance on large DataFrames (+400,000 rows)
I know there are lots of ways to fill in missing data which I often use on axis=0
this is what makes me think I may have missed an easy option for axis=1
Can you suggest something nicer/faster... or confirm that this is as good as it gets. | The Pandas equivalent to `COALESCE` is the method `fillna()`:
```
result = column_a.fillna(column_b)
```
The result is a column where each value is taken from `column_a` if that column provides a non-null value, otherwise the value is taken from `column_b`. So your `combo1` can be produced with:
```
df['first'].fillna(df['second']).fillna(df['third'])
```
giving:
```
0 A
1 C
2 B
3 None
4 A
```
And your `combo2` can be produced with:
```
df['second'].fillna(df['third']).fillna(df['first'])
```
which returns the new column:
```
0 C
1 C
2 B
3 None
4 B
```
If you wanted an efficient operation called `coalesce`, it could simply combine columns with `fillna()` from left to right and then return the result:
```
def coalesce(df, column_names):
    i = iter(column_names)
    column_name = next(i)
    answer = df[column_name]
    for column_name in i:
        answer = answer.fillna(df[column_name])
    return answer

print coalesce(df, ['first', 'second', 'third'])
print coalesce(df, ['second', 'third', 'first'])
```
which gives:
```
0 A
1 C
2 B
3 None
4 A
0 C
1 C
2 B
3 None
4 B
``` |
Python equivalent of Scala's exists() function? | 33,048,555 | 2 | 2015-10-09T23:29:58Z | 33,048,617 | 7 | 2015-10-09T23:37:52Z | [
"python",
"list",
"scala",
"exists"
] | Scala lists have an exists() function that returns a boolean if the list has an element that satisfies your predicate.
Is there a way to do this in python that's just as clean?
I've been using
```
if next(x for x in mylist if x > 10): return Something
```
Is there a better way? | Use [**`any`**](https://docs.python.org/2/library/functions.html#any):
```
if any(x > 10 for x in mylist):
    return Something
```
You can complement this with [**`all`**](https://docs.python.org/2/library/functions.html#all), and use `not any` and `not all` to round it out.
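For example, with an assumed sample list:

```python
mylist = [3, 7, 12]

assert any(x > 10 for x in mylist)        # at least one element matches
assert not all(x > 10 for x in mylist)    # but not every element does
assert not any(x > 100 for x in mylist)   # and none match this predicate
```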
Your way of using [**`next`**](https://docs.python.org/2/library/functions.html#next) will raise an exception if it doesn't find anything. You can pass it an additional default value to return, instead:
```
if next((x for x in mylist if x > 10), None):
    return Something
``` |
Confusing about a Python min quiz | 33,049,388 | 15 | 2015-10-10T01:42:24Z | 33,049,433 | 9 | 2015-10-10T01:52:54Z | [
"python"
] | Just now I saw a quiz on [this page](https://github.com/cosmologicon/pywat/blob/master/quiz.md):
```
>>> x, y = ???
>>> min(x, y) == min(y, x)
False
```
The example answer is
```
x, y = {0}, {1}
```
From the documentation I know that:
> min(iterable[, key=func]) -> value
> min(a, b, c, ...[, key=func]) -> value
>
> With a single iterable argument, return its smallest item.
> With two or more arguments, return the smallest argument.
But why is `min({0},{1})={0}` and `min({1},{0})={1}`?
I also tried a few others:
```
min({0,2},1) # 1
min(1,{0,2}) # 1
min({1},[2,3]) # [2,3]
min([2,3],1) # 1
``` | The comparison operators `<`, `<=`, `>=`, and `>` check whether one set is a strict subset, subset, superset, or strict superset of another, respectively.
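A quick sketch makes this concrete:

```python
x, y = {0}, {1}

# Neither set is a subset or superset of the other:
print(x < y, x > y, x == y)   # False False False

# min() keeps its first argument unless a later one compares smaller,
# so the result depends on argument order:
print(min(x, y), min(y, x))   # {0} {1}
```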
`{0}` and `{1}` compare as `False` for all of these, so the result depends on the order of the arguments and on which operator the check uses. |
Confusing about a Python min quiz | 33,049,388 | 15 | 2015-10-10T01:42:24Z | 33,049,473 | 8 | 2015-10-10T01:59:26Z | [
"python"
] | Just now I saw a quiz on [this page](https://github.com/cosmologicon/pywat/blob/master/quiz.md):
```
>>> x, y = ???
>>> min(x, y) == min(y, x)
False
```
The example answer is
```
x, y = {0}, {1}
```
From the documentation I know that:
> min(iterable[, key=func]) -> value
> min(a, b, c, ...[, key=func]) -> value
>
> With a single iterable argument, return its smallest item.
> With two or more arguments, return the smallest argument.
But why is `min({0},{1})={0}` and `min({1},{0})={1}`?
I also tried a few others:
```
min({0,2},1) # 1
min(1,{0,2}) # 1
min({1},[2,3]) # [2,3]
min([2,3],1) # 1
``` | The key point here is that the two sets are not subsets of each other, so both directions of `<` are `False` even though the sets are not equal:
```
>>> {0} < {1}
False
>>> {0} > {1}
False
>>> {0} == {1}
False
```
So which one is smaller? The fact that sets don't provide a [**strict weak ordering**](https://en.wikipedia.org/wiki/Weak_ordering#Strict_weak_orderings) means there's no single correct answer. |
Confusing about a Python min quiz | 33,049,388 | 15 | 2015-10-10T01:42:24Z | 33,049,550 | 7 | 2015-10-10T02:14:22Z | [
"python"
] | Just now I saw a quiz on [this page](https://github.com/cosmologicon/pywat/blob/master/quiz.md):
```
>>> x, y = ???
>>> min(x, y) == min(y, x)
False
```
The example answer is
```
x, y = {0}, {1}
```
From the documentation I know that:
> min(iterable[, key=func]) -> value
> min(a, b, c, ...[, key=func]) -> value
>
> With a single iterable argument, return its smallest item.
> With two or more arguments, return the smallest argument.
But why is `min({0},{1})={0}` and `min({1},{0})={1}`?
I also tried a few others:
```
min({0,2},1) # 1
min(1,{0,2}) # 1
min({1},[2,3]) # [2,3]
min([2,3],1) # 1
``` | `min` is implemented roughly like this:
```
def min(*args):
    least = args[0]
    for arg in args:
        if arg < least:
            least = arg
    return least
```
The way the comparison operators work for sets breaks one of the assumptions that this implicitly makes: that for every pair of objects, either they are equal, or `a < b`, or `b < a`. Neither `{0}` nor `{1}` compares less than the other, so `min` gives you inconsistent answers.
The other results you see are because of the rules for how Python defines an order for mixed types. A `set` and an `int` aren't comparable - neither of those types defines a rule for comparing to the other. This leads Python 2 to apply a rule called "arbitrary but consistent order" - one of the types is chosen to be the "lower" type, and it will remain the lower type for the lifetime of the program. In practice, it will be the same across all code you run, because it is implemented by comparing the type names alphabetically - but in theory, that could change.
The "arbitrary but consistent order" rule has been dumped from Python 3, because the only effect it really had was to mask bugs. When there is no defined rule for finding an order, Python now tells you so:
```
>>> 1 < {0}
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unorderable types: int() < set()
``` |
Accessing attributes on literals work on all types, but not `int`; why? | 33,054,229 | 43 | 2015-10-10T12:55:24Z | 10,955,711 | 39 | 2012-06-08T20:33:54Z | [
"python",
"python-2.7",
"python-3.x",
"language-lawyer"
] | I have read that everything in python is an object, and as such I started to experiment with different types and invoking *`__str__`* on them — at first I was feeling really excited, but then I got confused.
```
>>> "hello world".__str__()
'hello world'
>>> [].__str__()
'[]'
>>> 3.14.__str__()
'3.14'
>>> 3..__str__()
'3.0'
>>> 123.__str__()
File "<stdin>", line 1
123.__str__()
^
SyntaxError: invalid syntax
```
* Why does `something.__str__()` work for "everything" besides `int`?
* Is `123` not an *object* of type `int`? | Add a space after the `4`:
```
4 .__str__()
```
Otherwise, the lexer will split this expression into the tokens `"4."`, `"__str__"`, `"("` and `")"`, i.e. the first token is interpreted as a floating point number. The lexer always tries to build the longest possible token. |
Accessing attributes on literals work on all types, but not `int`; why? | 33,054,229 | 43 | 2015-10-10T12:55:24Z | 10,955,713 | 84 | 2012-06-08T20:34:00Z | [
"python",
"python-2.7",
"python-3.x",
"language-lawyer"
] | I have read that everything in python is an object, and as such I started to experiment with different types and invoking *`__str__`* on them — at first I was feeling really excited, but then I got confused.
```
>>> "hello world".__str__()
'hello world'
>>> [].__str__()
'[]'
>>> 3.14.__str__()
'3.14'
>>> 3..__str__()
'3.0'
>>> 123.__str__()
File "<stdin>", line 1
123.__str__()
^
SyntaxError: invalid syntax
```
* Why does `something.__str__()` work for "everything" besides `int`?
* Is `123` not an *object* of type `int`? | You need parens:
```
(4).__str__()
```
The problem is the lexer thinks "4." is going to be a floating-point number.
Also, this works:
```
x = 4
x.__str__()
``` |
Accessing attributes on literals work on all types, but not `int`; why? | 33,054,229 | 43 | 2015-10-10T12:55:24Z | 10,955,754 | 7 | 2012-06-08T20:38:16Z | [
"python",
"python-2.7",
"python-3.x",
"language-lawyer"
] | I have read that everything in python is an object, and as such I started to experiment with different types and invoking *`__str__`* on them — at first I was feeling really excited, but then I got confused.
```
>>> "hello world".__str__()
'hello world'
>>> [].__str__()
'[]'
>>> 3.14.__str__()
'3.14'
>>> 3..__str__()
'3.0'
>>> 123.__str__()
File "<stdin>", line 1
123.__str__()
^
SyntaxError: invalid syntax
```
* Why does `something.__str__()` work for "everything" besides `int`?
* Is `123` not an *object* of type `int`? | actually (to increase unreadability...):
```
4..hex()
```
is valid, too. it gives `'0x1.0000000000000p+2'` -- but then it's a float, of course... |
Accessing attributes on literals work on all types, but not `int`; why? | 33,054,229 | 43 | 2015-10-10T12:55:24Z | 33,054,230 | 45 | 2015-10-10T12:55:24Z | [
"python",
"python-2.7",
"python-3.x",
"language-lawyer"
] | I have read that everything in python is an object, and as such I started to experiment with different types and invoking *`__str__`* on them — at first I was feeling really excited, but then I got confused.
```
>>> "hello world".__str__()
'hello world'
>>> [].__str__()
'[]'
>>> 3.14.__str__()
'3.14'
>>> 3..__str__()
'3.0'
>>> 123.__str__()
File "<stdin>", line 1
123.__str__()
^
SyntaxError: invalid syntax
```
* Why does `something.__str__()` work for "everything" besides `int`?
* Is `123` not an *object* of type `int`? | ### So you think you can dance floating-point?
`123` is just as much of an object as `3.14`, the "problem" lies within the grammar rules of the language; the parser thinks we are about to define a *float* — not an *int* with a trailing method call.
We will get the expected behavior if we wrap the number in parenthesis, as in the below.
```
>>> (123).__str__()
'123'
```
Or if we simply add some whitespace after *`123`*:
```
>>> 123 .__str__()
'123'
```
The reason it does not work for `123.__str__()` is that the *dot* following the *`123`* is interpreted as the *decimal-point* of some partially declared *floating-point*.
```
>>> 123.__str__()
File "<stdin>", line 1
123.__str__()
^
SyntaxError: invalid syntax
```
The parser tries to interpret `__str__()` as a sequence of digits, but obviously fails — and we get a *SyntaxError* basically saying that the parser stumbled upon something that it did not expect.
---
### Elaboration
When looking at `123.__str__()` the python parser could use either *3* characters and interpret these *3* characters as an *integer*, **or** it could use *4* characters and interpret these as the **start** of a *floating-point*.
```
123.__str__()
^^^ - int
```
```
123.__str__()
^^^^- start of floating-point
```
Just as a little child would like as much cake as possible on their plate, the parser is greedy and would like to swallow as much as it can all at once — even if this isn't always the best of ideas —as such the latter ("better") alternative is chosen.
When it later realizes that `__str__()` can in no way be interpreted as the *decimals* of a *floating-point* it is already too late; *SyntaxError*.
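The standard library's `tokenize` module shows the greedy split directly (a small sketch):

```python
import io
import tokenize

# The tokenizer reads "123." as a single NUMBER token, so the parser
# is then left with "__str__" in a position where it makes no sense.
src = "123.__str__()"
tokens = [tok.string
          for tok in tokenize.generate_tokens(io.StringIO(src).readline)
          if tok.string.strip()]
print(tokens)  # the first token is '123.'
```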
> **Note**
>
> ```
> 123 .__str__() # works fine
> ```
>
> In the above snippet, `123` (note the space) must be interpreted as an *integer* since no *number* can contain spaces. This means that it is semantically equivalent to `(123).__str__()`.
> **Note**
>
> ```
> 123..__str__() # works fine
> ```
>
> The above also works because a *number* can contain at most one *decimal-point*, meaning that it is equivalent to `(123.).__str__()`.
---
### For the *language-lawyers*
This section contains the lexical definition of the relevant literals.
**[Lexical analysis - 2.4.5 Floating point literals](https://docs.python.org/3.5/reference/lexical_analysis.html#floating-point-literals)**
```
floatnumber ::= pointfloat | exponentfloat
pointfloat ::= [intpart] fraction | intpart "."
exponentfloat ::= (intpart | pointfloat) exponent
intpart ::= digit+
fraction ::= "." digit+
exponent ::= ("e" | "E") ["+" | "-"] digit+
```
**[Lexical analysis - 2.4.4 Integer literals](https://docs.python.org/3.5/reference/lexical_analysis.html#integer-literals)**
```
integer ::= decimalinteger | octinteger | hexinteger | bininteger
decimalinteger ::= nonzerodigit digit* | "0"+
nonzerodigit ::= "1"..."9"
digit ::= "0"..."9"
octinteger ::= "0" ("o" | "O") octdigit+
hexinteger ::= "0" ("x" | "X") hexdigit+
bininteger ::= "0" ("b" | "B") bindigit+
octdigit ::= "0"..."7"
hexdigit ::= digit | "a"..."f" | "A"..."F"
bindigit ::= "0" | "1"
``` |
python 3.5: TypeError: a bytes-like object is required, not 'str' | 33,054,527 | 42 | 2015-10-10T13:28:09Z | 33,054,552 | 45 | 2015-10-10T13:30:57Z | [
"python"
] | I've very recently migrated to Py 3.5.
This code was working properly in Python 2.7:
```
with open(fname, 'rb') as f:
    lines = [x.strip() for x in f.readlines()]

for line in lines:
    tmp = line.strip().lower()
    if 'some-pattern' in tmp: continue
    # ... code
```
After upgrading to 3.5, I'm getting the:
```
TypeError: a bytes-like object is required, not 'str'
```
error on the last line (the pattern search code).
I've tried using the `.decode()` function on either side of the statement, also tried:
```
if tmp.find('some-pattern') != -1: continue
```
- to no avail.
I was able to resolve almost all 2:3 issues quickly, but this little statement is bugging me. | You opened the file in binary mode:
```
with open(fname, 'rb') as f:
```
This means that all data read from the file is returned as `bytes` objects, not `str`. You cannot then use a string in a containment test:
```
if 'some-pattern' in tmp: continue
```
You'd have to use a `bytes` object to test against `tmp` instead:
```
if b'some-pattern' in tmp: continue
```
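Another option is to decode the bytes first and compare `str` to `str`; a quick sketch:

```python
data = b'a line with some-pattern in it'

# bytes-to-bytes comparison works...
assert b'some-pattern' in data

# ...and so does decoding to str before a str-to-str comparison:
assert 'some-pattern' in data.decode('utf-8')
```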
or open the file as a textfile instead by replacing the `'rb'` mode with `'r'`. |
python function not working properly it doesn't return the values that user should be getting | 33,055,778 | 2 | 2015-10-10T15:43:50Z | 33,055,821 | 8 | 2015-10-10T15:49:09Z | [
"python"
] | ```
def divisors(n):
    ans =0
    for i in range(n):
        i += 1
        k=n%i
        if k!=1:
            ans=i
    return ans
print(20)
```
I have a function that's not working properly: when I run it, it prints the `n` value instead of printing out the divisors. | Three key issues are:
1. `k=n%i` gives the remainder of `n` divided by `i` - `i` is a divisor of `n` only when that remainder is `0`, so testing `k!=1` does not identify divisors!
2. In the for-loop you keep overwriting `ans`, so at the end of the function you return only the value found on the last iteration that satisfied the `if` condition. What you want to do instead is to accumulate all the matching values into a list and return that list.
3. The `print` at the end is not calling the function - it simply prints the number 20.
I'm not posting a corrected solution because I think that it will be a good exercise for you to fix these bugs by yourself, good luck! |
Pandas Dataframe: Replacing NaN with row average | 33,058,590 | 3 | 2015-10-10T20:21:04Z | 33,058,777 | 7 | 2015-10-10T20:42:32Z | [
"python",
"pandas"
] | I am trying to learn pandas but I have been puzzled by the following. I want to replace NaNs in a dataframe with the row average. Hence something like `df.fillna(df.mean(axis=1))` should work, but for some reason it fails for me. Am I missing anything, or am I doing something wrong? Is it because it's not implemented? See [link here](http://stackoverflow.com/questions/29478641/how-to-replace-nan-with-sum-of-the-row-in-pandas-datatframe)
```
import pandas as pd
import numpy as np
pd.__version__
Out[44]:
'0.15.2'
In [45]:
df = pd.DataFrame()
df['c1'] = [1, 2, 3]
df['c2'] = [4, 5, 6]
df['c3'] = [7, np.nan, 9]
df
Out[45]:
c1 c2 c3
0 1 4 7
1 2 5 NaN
2 3 6 9
In [46]:
df.fillna(df.mean(axis=1))
Out[46]:
c1 c2 c3
0 1 4 7
1 2 5 NaN
2 3 6 9
```
However something like this looks to work fine
```
df.fillna(df.mean(axis=0))
Out[47]:
c1 c2 c3
0 1 4 7
1 2 5 8
2 3 6 9
``` | As commented, the `axis` argument to `fillna` is [not implemented](https://github.com/pydata/pandas/issues/4514):
```
df.fillna(df.mean(axis=1), axis=1)
```
*Note: the axis is critical here, as you don't want to fill in your nth column with the nth row average.*
For now you'll need to iterate through:
```
In [11]: m = df.mean(axis=1)
         for i, col in enumerate(df):
             # using i allows for duplicate columns
             # inplace *may* not always work here, so IMO the next line is preferred
             # df.iloc[:, i].fillna(m, inplace=True)
             df.iloc[:, i] = df.iloc[:, i].fillna(m)
In [12]: df
Out[12]:
c1 c2 c3
0 1 4 7.0
1 2 5 3.5
2 3 6 9.0
```
An alternative is to fillna the transpose and then transpose, which may be more efficient...
```
df.T.fillna(df.mean(axis=1)).T
``` |
How to execute a command in the terminal from a Python script? | 33,065,588 | 7 | 2015-10-11T13:34:12Z | 33,065,666 | 9 | 2015-10-11T13:41:49Z | [
"python",
"shell",
"python-2.7",
"terminal",
"ubuntu-14.04"
] | I want to execute a command in terminal from a Python script.
```
./driver.exe bondville.dat
```
This command is getting printed in the terminal, but it is failing to execute.
Here are my steps:
```
echo = "echo"
command="./driver.exe"+" "+"bondville.dat"
os.system(echo + " " + command)
```
It should execute the command, but it's just printing it on the terminal. When I feed the same thing in manually, it executes. How do I do this from a script? | The `echo` terminal command **echoes** its arguments, so printing the command to the terminal is the expected result.
Are you typing `echo driver.exe bondville.dat` and is it running your `driver.exe` program?
If not, then you need to get rid of the echo in the last line of your code:
```
os.system(command)
``` |
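If you want more control than `os.system` offers, the `subprocess` module is a more robust way to run the same command; a sketch (with `echo` standing in for `./driver.exe` so it runs anywhere):

```python
import subprocess

# Pass the program and its arguments as a list; no shell parsing is involved.
# For the question's case this would be ["./driver.exe", "bondville.dat"].
status = subprocess.call(["echo", "bondville.dat"])
print(status)  # 0 means the command exited successfully
```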
Cython speedup isn't as large as expected | 33,067,867 | 4 | 2015-10-11T17:19:22Z | 33,068,834 | 10 | 2015-10-11T18:52:59Z | [
"python",
"numpy",
"cython"
] | I have written a Python function that computes pairwise electromagnetic interactions between a largish number (N ~ 10^3) of particles and stores the results in an NxN complex128 ndarray. It runs, but it is the slowest part of a larger program, taking about 40 seconds when N=900 [corrected]. The original code looks like this:
```
import numpy as np
def interaction(s,alpha,kprop): # s is an Nx3 real array
                                 # alpha is complex
                                 # kprop is float
    ndipoles = s.shape[0]

    Amat = np.zeros((ndipoles,3, ndipoles, 3), dtype=np.complex128)
    I = np.array([[1,0,0],[0,1,0],[0,0,1]])
    im = complex(0,1)
    k2 = kprop*kprop

    for i in range(ndipoles):
        xi = s[i,:]
        for j in range(ndipoles):
            if i != j:
                xj = s[j,:]
                dx = xi-xj
                R = np.sqrt(dx.dot(dx))
                n = dx/R
                kR = kprop*R
                kR2 = kR*kR
                A = ((1./kR2) - im/kR)
                nxn = np.outer(n, n)
                nxn = (3*A-1)*nxn + (1-A)*I
                nxn *= -alpha*(k2*np.exp(im*kR))/R
            else:
                nxn = I
            Amat[i,:,j,:] = nxn

    return(Amat.reshape((3*ndipoles,3*ndipoles)))
```
I had never previously used Cython, but that seemed like a good place to start in my effort to speed things up, so I pretty much blindly adapted the techniques I found in online tutorials. I got some speedup (30 seconds vs. 40 seconds), but not nearly as dramatic as I expected, so I'm wondering whether I'm doing something wrong or am missing a critical step. The following is my best attempt at cythonizing the above routine:
```
import numpy as np
cimport numpy as np
DTYPE = np.complex128
ctypedef np.complex128_t DTYPE_t
def interaction(np.ndarray s, DTYPE_t alpha, float kprop):
    cdef float k2 = kprop*kprop
    cdef int i,j
    cdef np.ndarray xi, xj, dx, n, nxn
    cdef float R, kR, kR2
    cdef DTYPE_t A
    cdef int ndipoles = s.shape[0]
    cdef np.ndarray Amat = np.zeros((ndipoles,3, ndipoles, 3), dtype=DTYPE)
    cdef np.ndarray I = np.array([[1,0,0],[0,1,0],[0,0,1]])
    cdef DTYPE_t im = complex(0,1)

    for i in range(ndipoles):
        xi = s[i,:]
        for j in range(ndipoles):
            if i != j:
                xj = s[j,:]
                dx = xi-xj
                R = np.sqrt(dx.dot(dx))
                n = dx/R
                kR = kprop*R
                kR2 = kR*kR
                A = ((1./kR2) - im/kR)
                nxn = np.outer(n, n)
                nxn = (3*A-1)*nxn + (1-A)*I
                nxn *= -alpha*(k2*np.exp(im*kR))/R
            else:
                nxn = I
            Amat[i,:,j,:] = nxn

    return(Amat.reshape((3*ndipoles,3*ndipoles)))
``` | The real power of NumPy is in performing an operation across a huge number of elements in a vectorized manner instead of using that operation in chunks spread across loops. In your case, you are using two nested loops and one IF conditional statement. I would propose extending the dimensions of the intermediate arrays, which would bring in [`NumPy's powerful broadcasting capability`](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to come into play and thus the same operations could be used on all elements in one go instead of small chunks of data within the loops.
For extending the dimensions, [`None`/`np.newaxis`](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#numpy.newaxis) could be used. So, the vectorized implementation to follow such a premise would look like this -
```
def vectorized_interaction(s,alpha,kprop):
    N = s.shape[0]          # number of dipoles
    im = complex(0,1)
    I = np.array([[1,0,0],[0,1,0],[0,0,1]])
    k2 = kprop*kprop

    # Vectorized calculations for dx, R, n, kR, A
    sd = s[:,None] - s
    Rv = np.sqrt((sd**2).sum(2))
    nv = sd/Rv[:,:,None]
    kRv = Rv*kprop
    Av = (1./(kRv*kRv)) - im/kRv

    # Vectorized calculation for: "nxn = np.outer(n, n)"
    nxnv = nv[:,:,:,None]*nv[:,:,None,:]

    # Vectorized calculation for: "(3*A-1)*nxn + (1-A)*I"
    P = (3*Av[:,:,None,None]-1)*nxnv + (1-Av[:,:,None,None])*I

    # Vectorized calculation for: "-alpha*(k2*np.exp(im*kR))/R"
    multv = -alpha*(k2*np.exp(im*kRv))/Rv

    # Vectorized calculation for: "nxn *= -alpha*(k2*np.exp(im*kR))/R"
    outv = P*multv[:,:,None,None]

    # Simulate ELSE part of the conditional statement "if i != j:"
    # with masked setting to I on the last two dimensions
    outv[np.eye(N, dtype=bool)] = I

    return outv.transpose(0,2,1,3).reshape(N*3,-1)
```
Runtime tests and output verification -
Case #1:
```
In [703]: N = 10
...: s = np.random.rand(N,3) + complex(0,1)*np.random.rand(N,3)
...: alpha = 3j
...: kprop = 5.4
...:
In [704]: out_org = interaction(s,alpha,kprop)
...: out_vect = vectorized_interaction(s,alpha,kprop)
...: print np.allclose(np.real(out_org),np.real(out_vect))
...: print np.allclose(np.imag(out_org),np.imag(out_vect))
...:
True
True
In [705]: %timeit interaction(s,alpha,kprop)
100 loops, best of 3: 7.6 ms per loop
In [706]: %timeit vectorized_interaction(s,alpha,kprop)
1000 loops, best of 3: 304 µs per loop
```
Case #2:
```
In [707]: N = 100
...: s = np.random.rand(N,3) + complex(0,1)*np.random.rand(N,3)
...: alpha = 3j
...: kprop = 5.4
...:
In [708]: out_org = interaction(s,alpha,kprop)
...: out_vect = vectorized_interaction(s,alpha,kprop)
...: print np.allclose(np.real(out_org),np.real(out_vect))
...: print np.allclose(np.imag(out_org),np.imag(out_vect))
...:
True
True
In [709]: %timeit interaction(s,alpha,kprop)
1 loops, best of 3: 826 ms per loop
In [710]: %timeit vectorized_interaction(s,alpha,kprop)
100 loops, best of 3: 14 ms per loop
```
Case #3:
```
In [711]: N = 900
...: s = np.random.rand(N,3) + complex(0,1)*np.random.rand(N,3)
...: alpha = 3j
...: kprop = 5.4
...:
In [712]: out_org = interaction(s,alpha,kprop)
...: out_vect = vectorized_interaction(s,alpha,kprop)
...: print np.allclose(np.real(out_org),np.real(out_vect))
...: print np.allclose(np.imag(out_org),np.imag(out_vect))
...:
True
True
In [713]: %timeit interaction(s,alpha,kprop)
1 loops, best of 3: 1min 7s per loop
In [714]: %timeit vectorized_interaction(s,alpha,kprop)
1 loops, best of 3: 1.59 s per loop
``` |
Boto3, python and how to handle errors | 33,068,055 | 19 | 2015-10-11T17:36:47Z | 33,663,484 | 41 | 2015-11-12T02:39:14Z | [
"python",
"amazon-web-services",
"boto",
"boto3"
] | I just picked up python as my go-to scripting language and I am trying to figure how to do proper error handling with boto3.
I am trying to create an IAM user:
```
def create_user(username, iam_conn):
    try:
        user = iam_conn.create_user(UserName=username)
        return user
    except Exception as e:
        return e
```
When the call to create\_user succeeds, i get a neat object that contains the http status code of the API call and the data of the newly created user.
Example:
```
{'ResponseMetadata':
{'HTTPStatusCode': 200,
'RequestId': 'omitted'
},
u'User': {u'Arn': 'arn:aws:iam::omitted:user/omitted',
u'CreateDate': datetime.datetime(2015, 10, 11, 17, 13, 5, 882000, tzinfo=tzutc()),
u'Path': '/',
u'UserId': 'omitted',
u'UserName': 'omitted'
}
}
```
This works great. But when this fails (like if the user already exists), i just get an object of type botocore.exceptions.ClientError with only text to tell me what went wrong.
Example:
ClientError('An error occurred (EntityAlreadyExists) when calling the CreateUser operation: User with name omitted already exists.',)
This (AFAIK) makes error handling very hard because I can't just switch on the resulting HTTP status code (409 for "user already exists", according to the AWS API docs for IAM). This makes me think that I must be doing something the wrong way. The optimal way would be for boto3 to never throw exceptions, but just always return an object that reflects how the API call went.
Can anyone enlighten me on this issue or point me in the right direction?
Thanks a lot! | Use the response contained within the exception. Here is an example:
```
import boto3
import botocore
try:
    iam = boto3.client('iam')
    user = iam.create_user(UserName='fred')
    print "Created user: %s" % user
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == 'EntityAlreadyExists':
        print "User already exists"
    else:
        print "Unexpected error: %s" % e
```
The response dict in the exception will contain the following:
* `['Error']['Code']` e.g. 'EntityAlreadyExists' or 'ValidationError'
* `['ResponseMetadata']['HTTPStatusCode']` e.g. 400
* `['ResponseMetadata']['RequestId']` e.g. 'd2b06652-88d7-11e5-99d0-812348583a35'
* `['Error']['Message']` e.g. "An error occurred (EntityAlreadyExists) ..."
* `['Error']['Type']` e.g. 'Sender' |
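For reference, here is a small self-contained sketch of pulling that code out of the exception. `error_code` and `FakeClientError` are names made up for illustration; the real exception is `botocore.exceptions.ClientError`, which carries the same `response` dict:

```python
# Hypothetical helper (my names, not part of boto3's API): read the error
# code from a ClientError-shaped exception via its attached `response` dict.
def error_code(exc):
    return getattr(exc, 'response', {}).get('Error', {}).get('Code')

# Stand-in for botocore.exceptions.ClientError, so the sketch runs offline:
class FakeClientError(Exception):
    def __init__(self, response):
        self.response = response

e = FakeClientError({'Error': {'Code': 'EntityAlreadyExists',
                               'Message': 'User with name fred already exists.'},
                     'ResponseMetadata': {'HTTPStatusCode': 409}})
print(error_code(e))  # EntityAlreadyExists
```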
How to print a text without substring in Python | 33,069,128 | 3 | 2015-10-11T19:19:31Z | 33,069,141 | 9 | 2015-10-11T19:21:01Z | [
"python",
"text"
] | I want to search for a word in the text and then print the text without that word. For example, we have the text "I was with my friend", I want the text be "I with my friend". I have done the following so far:
```
text=re.compile("[^was]")
val = "I was with my friend"
if text.search(val):
    print text.search(val)  # this line is obviously wrong
else:
    print 'no'
``` | ```
val = "I was with my friend"
print val.replace("was ", "")
```
Output:
```
I with my friend
``` |
Fast linear interpolation in Numpy / Scipy "along a path" | 33,069,366 | 18 | 2015-10-11T19:44:33Z | 33,333,070 | 10 | 2015-10-25T18:03:46Z | [
"python",
"numpy",
"scipy",
"interpolation"
] | Let's say that I have data from weather stations at 3 (known) altitudes on a mountain. Specifically, each station records a temperature measurement at its location every minute. I have two kinds of interpolation I'd like to perform. And I'd like to be able to perform each quickly.
So let's set up some data:
```
import numpy as np
from scipy.interpolate import interp1d
import pandas as pd
import seaborn as sns
np.random.seed(0)
N, sigma = 1000., 5
basetemps = 70 + (np.random.randn(N) * sigma)
midtemps = 50 + (np.random.randn(N) * sigma)
toptemps = 40 + (np.random.randn(N) * sigma)
alltemps = np.array([basetemps, midtemps, toptemps]).T # note transpose!
trend = np.sin(4 / N * np.arange(N)) * 30
trend = trend[:, np.newaxis]
altitudes = np.array([500, 1500, 4000]).astype(float)
finaltemps = pd.DataFrame(alltemps + trend, columns=altitudes)
finaltemps.index.names, finaltemps.columns.names = ['Time'], ['Altitude']
finaltemps.plot()
```
Great, so our temperatures look like this:
[](http://i.stack.imgur.com/6D9JZ.png)
## Interpolate all times to for the same altitude:
I think this one is pretty straightforward. Say I want to get the temperature at an altitude of 1,000 for each time. I can just use built in `scipy` interpolation methods:
```
interping_function = interp1d(altitudes, finaltemps.values)
interped_to_1000 = interping_function(1000)
fig, ax = plt.subplots(1, 1, figsize=(8, 5))
finaltemps.plot(ax=ax, alpha=0.15)
ax.plot(interped_to_1000, label='Interped')
ax.legend(loc='best', title=finaltemps.columns.name)
```
[](http://i.stack.imgur.com/iDX6d.png)
This works nicely. And let's see about speed:
```
%%timeit
res = interp1d(altitudes, finaltemps.values)(1000)
#-> 1000 loops, best of 3: 207 µs per loop
```
## Interpolate "along a path":
So now I have a second, related problem. Say I know the altitude of a hiking party as a function of time, and I want to compute the temperature at their (moving) location by linearly interpolating my data through time. *In particular, the times at which I know the location of the hiking party are the **same** times at which I know the temperatures at my weather stations.* I can do this without too much effort:
```
location = np.linspace(altitudes[0], altitudes[-1], N)
interped_along_path = np.array([interp1d(altitudes, finaltemps.values[i, :])(loc)
                                for i, loc in enumerate(location)])
fig, ax = plt.subplots(1, 1, figsize=(8, 5))
finaltemps.plot(ax=ax, alpha=0.15)
ax.plot(interped_along_path, label='Interped')
ax.legend(loc='best', title=finaltemps.columns.name)
```
[](http://i.stack.imgur.com/Px1uc.png)
So this works really nicely, but it's important to note that the key line above uses a list comprehension to hide an enormous amount of work. In the previous case, `scipy` is creating a single interpolation function for us, and evaluating it once on a large amount of data. In this case, `scipy` is actually constructing `N` individual interpolating functions and evaluating each once on a small amount of data. This feels inherently inefficient. There is a for loop lurking here (in the list comprehension) and moreover, this just feels flabby.
Not surprisingly, this is much slower than the previous case:
```
%%timeit
res = np.array([interp1d(altitudes, finaltemps.values[i, :])(loc)
                for i, loc in enumerate(location)])
#-> 10 loops, best of 3: 145 ms per loop
```
So the second example runs 1,000 times slower than the first, i.e. consistent with the idea that the heavy lifting is the "make a linear interpolation function" step...which is happening 1,000 times in the second example but only once in the first.
So, the question: **is there a better way to approach the second problem?** For example, is there a good way to set it up with 2-dimensional interpolation (which could perhaps handle the case where the times at which the hiking party locations are known are *not* the times at which the temperatures have been sampled)? Or is there a particularly slick way to handle things here where the times do line up? Or other?
```
yi = y1 + (y2-y1) * (xi-x1) / (x2-x1)
```
With some vectorized Numpy expressions we can select the relevant points from the dataset and apply the above function:
```
I = np.searchsorted(altitudes, location)
x1 = altitudes[I-1]
x2 = altitudes[I]
time = np.arange(len(alltemps))
y1 = alltemps[time,I-1]
y2 = alltemps[time,I]
xI = location
yI = y1 + (y2-y1) * (xI-x1) / (x2-x1)
```
The trouble is that some points lie on the boundaries of (or even outside of) the known range, which should be taken into account:
```
I = np.searchsorted(altitudes, location)
same = (location == altitudes.take(I, mode='clip'))
out_of_range = ~same & ((I == 0) | (I == altitudes.size))
I[out_of_range] = 1 # Prevent index-errors
x1 = altitudes[I-1]
x2 = altitudes[I]
time = np.arange(len(alltemps))
y1 = alltemps[time,I-1]
y2 = alltemps[time,I]
xI = location
yI = y1 + (y2-y1) * (xI-x1) / (x2-x1)
yI[out_of_range] = np.nan
```
Luckily, Scipy already provides ND interpolation, which also just as easy takes care of the mismatching times, for example:
```
from scipy.interpolate import interpn
time = np.arange(len(alltemps))
M = 150
hiketime = np.linspace(time[0], time[-1], M)
location = np.linspace(altitudes[0], altitudes[-1], M)
xI = np.column_stack((hiketime, location))
yI = interpn((time, altitudes), alltemps, xI)
```
---
Here's the benchmark code (without any `pandas` actually, but I did include the solution from the other answer):
```
import numpy as np
from scipy.interpolate import interp1d, interpn
def original():
    return np.array([interp1d(altitudes, alltemps[i, :])(loc)
                     for i, loc in enumerate(location)])

def OP_self_answer():
    return np.diagonal(interp1d(altitudes, alltemps)(location))

def interp_checked():
    I = np.searchsorted(altitudes, location)
    same = (location == altitudes.take(I, mode='clip'))
    out_of_range = ~same & ((I == 0) | (I == altitudes.size))
    I[out_of_range] = 1  # Prevent index-errors
    x1 = altitudes[I-1]
    x2 = altitudes[I]
    time = np.arange(len(alltemps))
    y1 = alltemps[time,I-1]
    y2 = alltemps[time,I]
    xI = location
    yI = y1 + (y2-y1) * (xI-x1) / (x2-x1)
    yI[out_of_range] = np.nan
    return yI

def scipy_interpn():
    time = np.arange(len(alltemps))
    xI = np.column_stack((time, location))
    yI = interpn((time, altitudes), alltemps, xI)
    return yI
N, sigma = 1000., 5
basetemps = 70 + (np.random.randn(N) * sigma)
midtemps = 50 + (np.random.randn(N) * sigma)
toptemps = 40 + (np.random.randn(N) * sigma)
trend = np.sin(4 / N * np.arange(N)) * 30
trend = trend[:, np.newaxis]
alltemps = np.array([basetemps, midtemps, toptemps]).T + trend
altitudes = np.array([500, 1500, 4000], dtype=float)
location = np.linspace(altitudes[0], altitudes[-1], N)
funcs = [original, OP_self_answer, interp_checked, scipy_interpn]
for func in funcs:
    print(func.func_name)
    %timeit func()
from itertools import combinations
outs = [func() for func in funcs]
print('Output allclose:')
print([np.allclose(out1, out2) for out1, out2 in combinations(outs, 2)])
```
With the following result on my system:
```
original
10 loops, best of 3: 184 ms per loop
OP_self_answer
10 loops, best of 3: 89.3 ms per loop
interp_checked
1000 loops, best of 3: 224 µs per loop
scipy_interpn
1000 loops, best of 3: 1.36 ms per loop
Output allclose:
[True, True, True, True, True, True]
```
Scipy's `interpn` suffers somewhat in terms of speed compared to the very fastest method, but for its generality and ease of use it's definitely the way to go. |
How to map lambda expressions in Java | 33,070,704 | 5 | 2015-10-11T22:16:05Z | 33,070,807 | 7 | 2015-10-11T22:32:00Z | [
"java",
"python",
"lambda"
] | I'm coming from Python, and trying to understand how lambda expressions work differently in Java. In Python, you can do stuff like:
```
opdict = { "+":lambda a,b: a+b, "-": lambda a,b: a-b,
"*": lambda a,b: a*b, "/": lambda a,b: a/b }
sum = opdict["+"](5,4)
```
How can I accomplish something similar in Java? I have read a bit on Java lambda expressions, and it seems I have to declare an interface first, and I'm unclear about how and why you need to do this.
Edit: I attempted to do this myself with a custom interface. Here's the code I have tried:
```
Map<String, MathOperation> opMap = new HashMap<String, MathOperation>() {
    {
        put("+", (a, b) -> b + a);
        put("-", (a, b) -> b - a);
        put("*", (a, b) -> b * a);
        put("/", (a, b) -> b / a);
    }
};
...
...
interface MathOperation {
    double operation(double a, double b);
}
```
However, this gives an error:
> The target type of this expression must be a functional interface.
Where exactly do I declare the interface? | Easy enough to do with a [`BiFunction`](https://docs.oracle.com/javase/8/docs/api/java/util/function/BiFunction.html) in Java 8:
```
final Map<String, BiFunction<Integer, Integer, Integer>> opdict = new HashMap<>();
opdict.put("+", (x, y) -> x + y);
opdict.put("-", (x, y) -> x - y);
opdict.put("*", (x, y) -> x * y);
opdict.put("/", (x, y) -> x / y);
int sum = opdict.get("+").apply(5, 4);
System.out.println(sum);
```
The syntax is a bit more verbose than Python's, to be sure, and using `getOrDefault` on `opdict` would probably be preferable so as to avoid a scenario in which you use an operator that doesn't exist, but this should get the ball rolling at least.
If you're exclusively working with `int`s, using `IntBinaryOperator` would be preferable, as this would take care of any generic typing that you'd have to do.
```
final Map<String, IntBinaryOperator> opdict = new HashMap<>();
opdict.put("+", (x, y) -> x + y);
opdict.put("-", (x, y) -> x - y);
opdict.put("*", (x, y) -> x * y);
opdict.put("/", (x, y) -> x / y);
int sum = opdict.get("+").applyAsInt(5, 4);
System.out.println(sum);
``` |
How does this Python 3 quine work? | 33,071,521 | 11 | 2015-10-12T00:12:41Z | 33,071,551 | 11 | 2015-10-12T00:19:46Z | [
"python",
"quine"
] | Found this example of quine:
```
s='s=%r;print(s%%s)';print(s%s)
```
I get that `%s` and `%r` do the `str` and `repr` functions, as pointed [here](http://stackoverflow.com/questions/2354329/whats-the-meaning-of-r-in-python), but what exactly means the `s%s` part and how the quine works? | `s` is set to:
```
's=%r;print(s%%s)'
```
so the `%r` gets replaced by exactly that (*keeping* the single quotes) in `s%s` and the final `%%` with a single `%`, giving:
```
s='s=%r;print(s%%s)';print(s%s)
```
and hence the quine. |
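The substitution can be verified step by step (a sketch, using Python 3's `print`):

```python
s = 's=%r;print(s%%s)'
# %r drops in repr(s) -- quotes included -- and %% collapses to a literal %,
# so the expansion reproduces the program's own source line:
expanded = s % s
print(expanded)
```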
Regex matching except whitespace in Python | 33,071,878 | 3 | 2015-10-12T01:15:08Z | 33,071,908 | 7 | 2015-10-12T01:20:16Z | [
"python",
"regex"
] | I am looking to match on a white space followed by anything except whitespace [i.e. letters, punctuation] at the start of a line in Python. For example:
```
` a` = True
` .` = True
` a` = False [double whitespace]
`ab` = False [no whitespace]
```
The rule `re.match(" \w")` works except with punctuation - how do i include that? | Remember the following:
```
\s\S
```
* `\s` is whitespace
* `\S` is everything but whitespace |
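Putting that together for the question's rule (exactly one leading space, then any non-whitespace character), a small sketch:

```python
import re

# One space at the start, then any non-whitespace character:
pattern = re.compile(r" \S")
cases = {" a": True, " .": True, "  a": False, "ab": False}
for text, expected in cases.items():
    print(repr(text), bool(pattern.match(text)) == expected)  # all True
```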
python regex match single quote | 33,072,292 | 2 | 2015-10-12T02:16:26Z | 33,072,314 | 7 | 2015-10-12T02:20:14Z | [
"python",
"regex"
] | I try to match single quote in below:
```
s= "name:'abc','hello'"
```
but seems the behaviour of match/findall is different:
```
re.match("\B'\w+'\B", s) # ===> return None
re.findall("\B'\w+'\B", s) #===> RETURN ['abc', 'hello']
```
Actually, this seems to be caused by the single quotes in the string; does anyone know what's going on?
I'm using py2.7.8 in win7. | See <https://docs.python.org/2/library/re.html#search-vs-match> -- "Python offers two different primitive operations based on regular expressions: `re.match()` checks for a match only at the beginning of the string, while `re.search()` checks for a match anywhere in the string (this is what Perl does by default)."
You're using `re.match()`; if you switch to `re.search()`, you'll get the behavior you were expecting. |
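A quick sketch showing all three functions on the question's string:

```python
import re

s = "name:'abc','hello'"
# match() only tries position 0, where the pattern can't succeed:
print(re.match(r"\B'\w+'\B", s))            # None
# search() scans the whole string for the first hit:
print(re.search(r"\B'\w+'\B", s).group())   # 'abc'
# findall() returns every non-overlapping match (quotes included,
# since the pattern has no capturing group):
print(re.findall(r"\B'\w+'\B", s))          # ["'abc'", "'hello'"]
```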
Django 1.8, makemigrations not detecting newly added app | 33,074,543 | 6 | 2015-10-12T06:30:02Z | 33,076,398 | 9 | 2015-10-12T08:23:03Z | [
"python",
"django"
] | I have an existing django project with a,b,c apps. All of them are included in installed apps in settings file. They have their own models with for which the migrations have already been ran. Now, if I add a new app d, add a model to it, include it in installed apps and try to run a blanket makemigrations using `python manage.py makemigrations` I get a `no changes detected` message. Shouldn't the behaviour be like it detects a new app and run an initial migration for that? I know I can do it manually using `python manage.py makemigrations d` but I want to do it with using `python manage.py makemigrations` command. Can someone provide explanation for this behavior? | If you create a new app manually and add it to the INSTALLED\_APPS setting without adding a migrations module inside it, the system will not pick up changes as this is not considered a migrations configured app.
The startapp command automatically adds the migrations module inside of your new app.
**startapp structure**
```
foo/
    __init__.py
    admin.py
    models.py
    migrations/
        __init__.py
    tests.py
    views.py
``` |
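If the app was created by hand, one way to fix it is to add the missing `migrations` package yourself; a sketch (the app name `d` matches the question, and the script assumes it runs from the project root):

```python
import os

# Create d/migrations/__init__.py so `makemigrations` treats the app as
# migrations-enabled (this is what `startapp` would have done for you):
migrations_dir = os.path.join("d", "migrations")
os.makedirs(migrations_dir, exist_ok=True)
open(os.path.join(migrations_dir, "__init__.py"), "w").close()
print(os.path.isfile(os.path.join("d", "migrations", "__init__.py")))  # True
```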
Losslessly compressing images on django | 33,077,804 | 12 | 2015-10-12T09:36:20Z | 33,989,023 | 9 | 2015-11-29T22:40:14Z | [
"python",
"django",
"lossless-compression"
] | I'm doing optimization and Google recommends Lossless compression to images, looking for a way to implement this in Django.
Here are the images they specified. I think for it to be done effectively it needs to be implemented system-wide, possibly using a middleware class; I'm wondering if anyone has done this before. Here's the link to Google PageSpeed Insights <https://developers.google.com/speed/pagespeed/insights/?url=www.kenyabuzz.com>
Optimize images
Properly formatting and compressing images can save many bytes of data.
Optimize the following images to reduce their size by 627.3KiB (74% reduction).
```
Losslessly compressing http://www.kenyabuzz.com/media/uploads/clients/kenya_buzz_2.jpg could save 594.3KiB (92% reduction).
Losslessly compressing http://www.kenyabuzz.com/media/uploads/clients/new_tribe_2.jpg could save 25KiB (44% reduction).
Losslessly compressing http://www.kenyabuzz.com/…a/uploads/clients/EthiopianAirlines2.jpg could save 3KiB (22% reduction).
Losslessly compressing http://www.kenyabuzz.com/static/kb/images/Nightlife.Homepage.jpg could save 1.3KiB (2% reduction).
Losslessly compressing http://www.kenyabuzz.com/static/kb/img/social/blog.png could save 1.1KiB (43% reduction).
Losslessly compressing http://www.kenyabuzz.com/static/kb/img/social/twitter.png could save 969B (52% reduction).
Losslessly compressing http://www.kenyabuzz.com/…der-Board---Email-Signature--Neutral.jpg could save 920B (2% reduction).
Losslessly compressing http://www.kenyabuzz.com/static/kb/img/social/youtube.png could save 757B (31% reduction).
``` | > Losslessly compressing <http://www.kenyabuzz.com/media/uploads/clients/kenya_buzz_2.jpg> could save 594.3KiB (92% reduction).
First of all, the information in the logs is rather misleading because it is impossible to compress images by 92% using a lossless format (except for some cases like single-colour images, basic geometric shapes like squares, etc). Read [this answer](http://stackoverflow.com/a/419602/1925257) and [this answer](http://stackoverflow.com/a/7752936/1925257) for more info. Really, do read them, both are excellent answers.
Second, you can use lossy compression formats *"without losing quality"* – the differences are so subtle that the human eye doesn't even notice.
---
So, I downloaded an image from the website you're optimizing from this link: <http://www.kenyabuzz.com/media/uploads/clients/kenya_buzz_2.jpg>
I opened my Python console and wrote this:
```
>>> from PIL import Image
>>> # Open the image
>>> im = Image.open("kenya_buzz_2.jpg")
>>> # Now save it
>>> im.save("kenya_buzz_compressed.jpg", format="JPEG", quality=70)
```
This created a new image on my disk. Below are both the images.
**Original (655.3kB)**
[](http://i.stack.imgur.com/hzwPB.jpg)
---
**Compressed (22.4kB ~96% reduction @ quality=70)**
[](http://i.stack.imgur.com/cF2j1.jpg)
---
You can play around with the `quality` option. Like, value of `80` will give you a better quality image but with a little larger size. |
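To pick a value, you can sweep several quality levels and compare the resulting sizes. A sketch using an in-memory buffer (a small generated gradient image stands in for the real photo here, so the snippet is self-contained):

```python
import io
from PIL import Image

# Build a small synthetic image so the snippet runs without an input file:
im = Image.new("RGB", (64, 64))
im.putdata([(x * 4 % 256, y * 4 % 256, (x + y) % 256)
            for y in range(64) for x in range(64)])

sizes = {}
for quality in (95, 85, 70, 50):
    buf = io.BytesIO()
    im.save(buf, format="JPEG", quality=quality)
    sizes[quality] = len(buf.getvalue())
print(sizes)  # lower quality settings yield smaller files
```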
Mapping dictionary value to list | 33,078,554 | 14 | 2015-10-12T10:11:43Z | 33,078,575 | 7 | 2015-10-12T10:12:56Z | [
"python"
] | Given the following dictionary:
```
dct = {'a':3, 'b':3,'c':5,'d':3}
```
How can I apply these values to a list such as:
```
lst = ['c', 'd', 'a', 'b', 'd']
```
in order to get something like:
```
lstval = [5, 3, 3, 3, 3]
``` | You can iterate keys from your list using `map` function:
```
lstval = list(map(dct.get, lst))
```
Or if you prefer list comprehension:
```
lstval = [dct[key] for key in lst]
``` |
Mapping dictionary value to list | 33,078,554 | 14 | 2015-10-12T10:11:43Z | 33,078,598 | 13 | 2015-10-12T10:14:01Z | [
"python"
] | Given the following dictionary:
```
dct = {'a':3, 'b':3,'c':5,'d':3}
```
How can I apply these values to a list such as:
```
lst = ['c', 'd', 'a', 'b', 'd']
```
in order to get something like:
```
lstval = [5, 3, 3, 3, 3]
``` | You can use a list comprehension for this:
```
lstval = [ dct.get(k, your_fav_default) for k in lst ]
```
I personally propose using list comprehensions over built-in `map` because it looks familiar to all Python programmers, is easier to parse and extend in case a custom default value is required. |
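A concrete run of that comprehension, including a key that's missing from the dictionary (with 0 chosen as the default here):

```python
dct = {'a': 3, 'b': 3, 'c': 5, 'd': 3}
lst = ['c', 'd', 'a', 'b', 'd', 'z']        # 'z' is not in dct
lstval = [dct.get(k, 0) for k in lst]
print(lstval)  # [5, 3, 3, 3, 3, 0]
```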
Mapping dictionary value to list | 33,078,554 | 14 | 2015-10-12T10:11:43Z | 33,078,607 | 17 | 2015-10-12T10:14:42Z | [
"python"
] | Given the following dictionary:
```
dct = {'a':3, 'b':3,'c':5,'d':3}
```
How can I apply these values to a list such as:
```
lst = ['c', 'd', 'a', 'b', 'd']
```
in order to get something like:
```
lstval = [5, 3, 3, 3, 3]
``` | Using [`map`](https://docs.python.org/2/library/functions.html#map):
```
>>> map(dct.get, lst)
[5, 3, 3, 3, 3]
```
Using a [list comprehension](https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions):
```
>>> [dct[k] for k in lst]
[5, 3, 3, 3, 3]
``` |
Dot product with dictionaries | 33,079,472 | 10 | 2015-10-12T11:01:51Z | 33,079,563 | 17 | 2015-10-12T11:06:43Z | [
"python"
] | I am trying to do a dot product of the values of two dictionaries. For example.
```
dict_1={'a':2, 'b':3, 'c':5, 'd':2}
dict_2={'a':2, 'b':2, 'd':3, 'e':5 }
```
In list form, the above looks like this:
```
dict_1=[2,3,5,2,0]
dict_2=[2,2,0,3,5]
```
The dot product of the dictionary with same key would result in.
```
Ans= 16 [2*2 + 3*2 + 5*0 + 2*3 + 0*5]
```
How can I achieve this with a dictionary? With a list I can just invoke the np.dot function or write a small loop. | Use the sum() function on a generator expression that iterates over dict\_1's keys, paired with dict\_2's get() function (defaulting to 0 for missing keys):
```
dot_product = sum(dict_1[key]*dict_2.get(key, 0) for key in dict_1)
``` |
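Run end to end on the question's data, this reproduces the expected answer:

```python
dict_1 = {'a': 2, 'b': 3, 'c': 5, 'd': 2}
dict_2 = {'a': 2, 'b': 2, 'd': 3, 'e': 5}
dot_product = sum(dict_1[key] * dict_2.get(key, 0) for key in dict_1)
print(dot_product)  # 16  (2*2 + 3*2 + 5*0 + 2*3)
```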
Understanding iterable types in comparisons | 33,080,675 | 6 | 2015-10-12T12:04:14Z | 33,080,724 | 7 | 2015-10-12T12:06:39Z | [
"python",
"iterator"
] | Recently I ran into cosmologicon's [pywats](https://github.com/cosmologicon/pywat) and now try to understand part about fun with iterators:
```
>>> a = 2, 1, 3
>>> sorted(a) == sorted(a)
True
>>> reversed(a) == reversed(a)
False
```
Ok, `sorted(a)` returns a `list`, so `sorted(a) == sorted(a)` is just a comparison of two lists. But `reversed(a)` returns a `reversed` object. So why are these reversed objects different? And the id comparison makes me even more confused:
```
>>> id(reversed(a)) == id(reversed(a))
True
``` | `sorted` returns a list, whereas `reversed` returns a `reversed` object and is a different object. If you were to cast the result of `reversed` to a list before comparison, they will be equal.
```
In [8]: reversed(a)
Out[8]: <reversed at 0x2c98d30>
In [9]: reversed(a)
Out[9]: <reversed at 0x2c989b0>
``` |
Understanding iterable types in comparisons | 33,080,675 | 6 | 2015-10-12T12:04:14Z | 33,080,837 | 10 | 2015-10-12T12:12:35Z | [
"python",
"iterator"
] | Recently I ran into cosmologicon's [pywats](https://github.com/cosmologicon/pywat) and now try to understand part about fun with iterators:
```
>>> a = 2, 1, 3
>>> sorted(a) == sorted(a)
True
>>> reversed(a) == reversed(a)
False
```
Ok, `sorted(a)` returns a `list` and `sorted(a) == sorted(a)` becomes just a two lists comparision. But `reversed(a)` returns `reversed object`. So why these reversed objects are different? And id's comparision makes me even more confused:
```
>>> id(reversed(a)) == id(reversed(a))
True
``` | The basic reason why `id(reversed(a) == id(reversed(a)` returns `True` , whereas `reversed(a) == reversed(a)` returns `False` , can be seen from the below example using custom classes -
```
>>> class CA:
...     def __del__(self):
...         print('deleted', self)
...     def __init__(self):
...         print('inited', self)
...
>>> CA() == CA()
inited <__main__.CA object at 0x021B8050>
inited <__main__.CA object at 0x021B8110>
deleted <__main__.CA object at 0x021B8050>
deleted <__main__.CA object at 0x021B8110>
False
>>> id(CA()) == id(CA())
inited <__main__.CA object at 0x021B80F0>
deleted <__main__.CA object at 0x021B80F0>
inited <__main__.CA object at 0x021B80F0>
deleted <__main__.CA object at 0x021B80F0>
True
```
As you can see when you did `customobject == customobject` , the object that was created on the fly was not destroyed until after the comparison occurred, this is because that object was required for the comparison.
But in case of `id(co) == id(co)` , the custom object created was passed to `id()` function, and then only the result of `id` function is required for comparison , so the object that was created has no reference left, and hence the object was *garbage collected*, and then when the Python interpreter recreated a new object for the right side of `==` operation, it reused the space that was freed previously. Hence, the `id` for both came as same.
The above behavior is an implementation detail of CPython (it may or may not differ in other implementations of Python), and you should never rely on the equality of `id`s. For example, in the below case it gives the wrong result -
```
>>> a = [1,2,3]
>>> b = [4,5,6]
>>> id(reversed(a)) == id(reversed(b))
True
```
The reason for this is again as explained above (garbage collection of the `reversed` object created for `reversed(a)` before creation of reversed object for `reversed(b)`).
---
If the lists are large, I think the most memory efficient and most probably the fastest method to compare equality for two iterators would be to use [`all()`](https://docs.python.org/2/library/functions.html#all) built-in function along with [`zip()`](https://docs.python.org/3/library/functions.html#zip) function for Python 3.x (or [`itertools.izip()`](https://docs.python.org/2/library/itertools.html#itertools.izip) for Python 2.x).
Example for Python 3.x -
```
all(x==y for x,y in zip(aiterator,biterator))
```
Example for Python 2.x -
```
from itertools import izip
all(x==y for x,y in izip(aiterator,biterator))
```
This is because `all()` short circuits at the first False value it encounters, and `zip()` in Python 3.x returns an iterator which yields the corresponding elements from both iterators. This avoids creating a separate list in memory.
Demo -
```
>>> a = [1,2,3]
>>> b = [4,5,6]
>>> all(x==y for x,y in zip(reversed(a),reversed(b)))
False
>>> all(x==y for x,y in zip(reversed(a),reversed(a)))
True
``` |
Actions before close python script | 33,084,356 | 7 | 2015-10-12T15:04:53Z | 33,084,414 | 7 | 2015-10-12T15:07:57Z | [
"python"
] | Hi, I have a script in Python that runs some actions in an infinite loop, but sometimes I have to close the script and update it with a new version, do some work on the server, etc.
The question is: how can I stop the script and, before it closes, have it perform some actions after the loop, such as closing the sqlite connection, closing the connection with the broker, etc.?
What I do now is Ctrl+C until it stops (it takes a few tries before it finishes) and then manually close the sqlite connection, etc. | You can catch a signal and execute something other than just `sys.exit`:
```
import signal
import sys
def signal_handler(signal, frame):
    print 'You pressed Ctrl+C - or killed me with -2'
    # .... Put your logic here .....
    sys.exit(0)
signal.signal(signal.SIGINT, signal_handler)
``` |
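Applied to the question's resources, the handler is where the teardown goes. A sketch (the sqlite connection is real; the broker bits are placeholders, and the handler is invoked directly here instead of pressing Ctrl+C so the snippet can demonstrate itself):

```python
import signal
import sqlite3
import sys

conn = sqlite3.connect(":memory:")
cleaned_up = []

def shutdown(signum, frame):
    conn.close()               # close the sqlite connection
    cleaned_up.append(True)    # stand-in for broker.disconnect(), etc.
    sys.exit(0)

signal.signal(signal.SIGINT, shutdown)

# In the real script the infinite work loop would run here; for demonstration
# we call the handler directly instead of sending SIGINT:
try:
    shutdown(signal.SIGINT, None)
except SystemExit:
    print("clean shutdown:", cleaned_up)
```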
Python - What is exactly sklearn.pipeline.Pipeline? | 33,091,376 | 10 | 2015-10-12T22:42:46Z | 33,094,099 | 17 | 2015-10-13T04:24:40Z | [
"python",
"machine-learning",
"scikit-learn"
] | I can't figure out how the `sklearn.pipeline.Pipeline` works exactly.
There are a few explanation in the [doc](http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html). For example what do they mean by:
> Pipeline of transforms with a final estimator.
To make my question clearer, what are `steps`? How do they work?
**Edit**
Thanks to the answers I can make my question clearer:
When I call pipeline and pass, as steps, two transformers and one estimator, e.g:
```
pipln = Pipeline([("trsfm1", transformer_1),
                  ("trsfm2", transformer_2),
                  ("estmtr", estimator)])
```
What happens when I call this?
```
pipln.fit()
OR
pipln.fit_transform()
```
I can't figure out how an estimator can be a transformer and how a transformer can be fitted. | **Transformer** in scikit-learn - some class that has fit and transform methods, or a fit\_transform method.
**Predictor** - some class that has fit and predict methods, or a fit\_predict method.
**Pipeline** is just an abstract notion, it's not some existing ml algorithm. Often in ML tasks you need to perform sequence of different transformations (find set of features, generate new features, select only some good features) of raw dataset before applying final estimator.
[Here](http://scikit-learn.org/stable/auto_examples/model_selection/grid_search_text_feature_extraction.html) is a good example of Pipeline usage.
Pipeline gives you a single interface for all 3 steps of transformation and resulting estimator. It encapsulates transformers and predictors inside, and now you can do something like:
```
vect = CountVectorizer()
tfidf = TfidfTransformer()
clf = SGDClassifier()
vX = vect.fit_transform(Xtrain)
tfidfX = tfidf.fit_transform(vX)
predicted = clf.fit_predict(tfidfX)
# Now evaluate all steps on test set
vX = vect.transform(Xtest)
tfidfX = tfidf.transform(vX)
predicted = clf.predict(tfidfX)
```
With just:
```
pipeline = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf', SGDClassifier()),
])
predicted = pipeline.fit(Xtrain).predict(Xtrain)
# Now evaluate all steps on test set
predicted = pipeline.predict(Xtest)
```
With pipelines you can easily perform a grid-search over sets of parameters for each step of this meta-estimator, as described in the link above. All steps except the last one should be transforms; the last step can be a transformer or a predictor.
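For example, a step's parameters are addressed in the grid as `<step name>__<parameter>`. A runnable sketch on toy data (it uses `sklearn.model_selection`, available in recent scikit-learn versions; the steps and values are only examples, not the ones from the question):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=100, n_features=10, random_state=0)
pipe = Pipeline([('select', SelectKBest(k=5)),
                 ('clf', SGDClassifier(random_state=0))])

# "select__k" tunes k of the 'select' step, "clf__alpha" the classifier:
param_grid = {'select__k': [3, 5], 'clf__alpha': [1e-4, 1e-3]}
grid = GridSearchCV(pipe, param_grid, cv=3)
grid.fit(X, y)
print(sorted(grid.best_params_))  # ['clf__alpha', 'select__k']
```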
**Answer to edit**:
When you call `pipln.fit()`, each transformer inside the pipeline will be fitted on the outputs of the previous transformer (the first transformer is fitted on the raw dataset). The last estimator may be a transformer or a predictor; you can call fit\_transform() on the pipeline only if your last estimator is a transformer (one that implements fit\_transform, or transform and fit methods separately), and you can call fit\_predict() or predict() on the pipeline only if your last estimator is a predictor. So you just can't call fit\_transform or transform on a pipeline whose last step is a predictor. |
Get a dict from a python list | 33,092,558 | 3 | 2015-10-13T01:05:46Z | 33,092,608 | 7 | 2015-10-13T01:13:43Z | [
"python",
"list",
"python-2.7",
"dictionary"
] | I have a list `my_list = ['a', 'b', 'c', 'd']` and I need to create a dictionary which looks like
```
{ 'a': ['a', 'b', 'c', 'd'],
'b': ['b', 'a', 'c', 'd'],
'c': ['c', 'a', 'b', 'd'],
'd': ['d', 'a', 'b', 'c'] }
```
i.e. each element is a key whose value is the full list, reordered so that the key itself comes first.
Here is my code
```
my_list = ['1', '2', '3', '4']
my_dict = dict()
for x in my_list:
    n = my_list[:]
    n.remove(x)
    n = [x] + n
my_dict[x] = n
print my_dict
```
which gives
```
{'1': ['1', '2', '3', '4'],
'3': ['3', '1', '2', '4'],
'2': ['2', '1', '3', '4'],
'4': ['4', '1', '2', '3']}
```
as required.
But I don't think that's the most optimal way of doing it. Any help to optimize will be appreciated. | ```
>>> seq
['a', 'b', 'c', 'd']
>>> {e: [e]+[i for i in seq if i != e] for e in seq}
{'a': ['a', 'b', 'c', 'd'],
'b': ['b', 'a', 'c', 'd'],
'c': ['c', 'a', 'b', 'd'],
'd': ['d', 'a', 'b', 'c']}
``` |
How to find the sum of a string in a list | 33,094,687 | 7 | 2015-10-13T05:20:19Z | 33,094,782 | 7 | 2015-10-13T05:28:57Z | [
"python",
"list",
"python-2.7"
] | I have a list like this
```
['MGM', '1'], ['MGD', '1'], ['V1', '[0,2,0,1]'], ['AuD', '[0,0,0,1]']
```
in Python. I want to sum the entries that are string-encoded lists, so it looks like this:
```
['MGM', '1'], ['MGD', '1'], ['V1', '3'], ['AuD', '1']
```
Should I convert them to lists within the lists first? If so how would I go about this?
Trying my best to learn how to code for science. | Given:
```
s = [['MGM', '1'], ['MGD', '1'], ['V1', '[0,2,0,1]'], ['AuD', '[0,0,0,1]']]
```
The following will convert the second items to Python objects:
```
import ast
for sublist in s:
sublist[1] = ast.literal_eval(sublist[1])
```
Result:
```
[['MGM', 1], ['MGD', 1], ['V1', [0, 2, 0, 1]], ['AuD', [0, 0, 0, 1]]]
```
Then convert them back with special handling for lists:
```
for sublist in s:
    if isinstance(sublist[1], list):
sublist[1] = sum(sublist[1])
sublist[1] = str(sublist[1])
```
Result:
```
[['MGM', '1'], ['MGD', '1'], ['V1', '3'], ['AuD', '1']]
``` |
Nested List comprehension in Python | 33,095,393 | 8 | 2015-10-13T06:14:58Z | 33,095,457 | 8 | 2015-10-13T06:19:22Z | [
"python",
"list",
"for-loop",
"list-comprehension"
] | I have a list inside of a list in Python and I want to convert them into a single flat list using a list comprehension:
```
>>> aa = [[1,2],[1,2]]
>>> bb = [num for num in numbers for numbers in aa]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'numbers' is not defined
>>>
```
What am I doing wrong?
**\*The answer to my question isn't on the duplicate as stated above, it is below this question.** | You have the `for` loops in your list comprehension in the opposite order -
```
bb = [num for numbers in aa for num in numbers]
```
Demo -
```
>>> aa = [[1,2],[1,2]]
>>> bb = [num for numbers in aa for num in numbers]
>>> bb
[1, 2, 1, 2]
``` |
What is the difference between !r and %r in Python? | 33,097,143 | 7 | 2015-10-13T07:56:51Z | 33,097,227 | 8 | 2015-10-13T08:00:48Z | [
"python",
"string",
"format",
"string-formatting"
] | As the title states, what is the difference between these two flags? It seems they both convert the value to a string using repr()?
Also, in this line of code:
```
"{0!r:20}".format("Hello")
```
What does the 0 in front of the !r do? | `%r` is not a valid placeholder in the `str.format()` formatting operations; it only works in [old-style `%` string formatting](https://docs.python.org/2/library/stdtypes.html#string-formatting). It indeed converts the object to a representation through the `repr()` function.
In `str.format()`, `!r` is the equivalent, but this also means that you can now use all the [format codes](https://docs.python.org/2/library/string.html#formatspec) for a string. Normally `str.format()` will call the `object.__format__()` method on the object itself, but by using `!r`, `repr(object).__format__()` is used instead.
There are also the `!s` and (in Python 3) `!a` converters; these apply the `str()` and [`ascii()`](https://docs.python.org/3/library/functions.html#ascii) functions first.
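To make the difference concrete, here is a small sketch (the `Point` class is purely illustrative) showing that `!r` routes through `repr()` instead of the object's own `__format__()`:

```python
class Point:
    def __repr__(self):
        return "Point(1, 2)"

    def __format__(self, spec):
        # str.format() normally routes through this method
        return "via __format__"

p = Point()
print("%r" % p)               # old-style %r -> repr(): Point(1, 2)
print("{0}".format(p))        # plain {0} -> __format__(): via __format__
print("{0!r:>12}".format(p))  # !r -> repr() first, then right-align to width 12
```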
The `0` in front indicates what argument to the `str.format()` method will be used to fill that slot; positional argument `0` is `"Hello"` in your case. You could use *named* arguments too, and pass in objects as keyword arguments:
"{greeting!r:20}".format(greeting="Hello")
Unless you are using Python 2.6, you can omit this as slots without indices or names are automatically numbered; the first `{}` is `0`, the second `{}` takes the second argument at index `1`, etc. |
matplotlib example code not working on python virtual environment | 33,100,969 | 3 | 2015-10-13T11:01:50Z | 33,180,744 | 14 | 2015-10-16T22:58:14Z | [
"python",
"osx",
"matplotlib",
"virtualenv"
] | I am trying to display the x, y, z coordinates of an image in matplotlib. [The example code](http://matplotlib.org/examples/api/image_zcoord.html) works perfectly well on the global Python installation: as I move the cursor, the x, y, z values get updated instantaneously. However, when I run the example code in a Python virtual environment, I have to click on the image several times for the coordinates to show in the first place, and then clicking on different positions only updates them sometimes. After a few clicks, the coordinates no longer update at all.
I don't know how to debug this. | This is likely to be a problem with the macosx backend for matplotlib. Switch to using an alternative backend for matplotlib (e.g. use qt4 instead of 'macosx'). For details of how to switch backend and what exactly that means - see [the docs here](http://matplotlib.org/faq/usage_faq.html#what-is-a-backend). Note that you might have to install the backend first - e.g. `pyqt` to use the `qt4agg` backend as I'm suggesting here.
In summary - the backend deals with the output from matplotlib and matplotlib can target different output formats. These can be gui display output formats (for instance `wx`, `qt4` and so on), or file outputs (for instance `pdf`). These are known as interactive and non-interactive backends respectively.
To change backend either do
```
import matplotlib
matplotlib.use('qt4agg')
```
in code, or - if you want to change for every time you start matplotlib - [edit your matplotlibrc file](http://matplotlib.org/users/customizing.html#customizing-matplotlib) setting the backend attribute e.g.
```
backend: Qt4Agg
```
---
N.B. I was alerted by a comment that since posting this answer, matplotlib docs now refer to this issue and [suggest a workaround](http://matplotlib.org/faq/virtualenv_faq.html), although the commenter noted that the solution offered in this answer (switch to Qt backend) worked for them where the official docs workaround was not possible for them. |
Pycharm and sys.argv arguments | 33,102,272 | 6 | 2015-10-13T12:05:19Z | 33,102,415 | 8 | 2015-10-13T12:13:12Z | [
"python",
"linux",
"pycharm"
] | I am trying to debug a script which takes command line arguments as input. The arguments are text files in the same directory. The script gets file names from the `sys.argv` list. My problem is that I cannot launch the script with arguments in PyCharm.
I have tried to enter arguments into "Script parameters" field in "Run" > "Edit configuration" menu like so:
```
-s'file1.txt', -s'file2.txt'
```
But it did not work. How do I launch my script with arguments?
P.S. I am on Ubuntu | In PyCharm the parameters are added in the **`Script Parameters`** box as you did, but *enclosed in double quotes* `"` and without the interpreter flags like `-s`; those are specified in the **`Interpreter options`** box.
Script Parameters box contents:
```
"file1.txt" "file2.txt"
```
Interpreter flags:
```
-s
```
Or, visually:
[](http://i.stack.imgur.com/HOh9C.png)
Then, with a simple test file to evaluate these:
```
if __name__ == "__main__":
import sys
print(sys.argv)
```
We get the parameters we provided (with `sys.argv[0]` holding the script name of course):
```
['/home/jim/Desktop/pythonForlder/test.py', 'file1.txt', 'file2.txt']
``` |
Using list comprehensions to make a funcion more pythonic | 33,105,863 | 2 | 2015-10-13T14:51:57Z | 33,105,991 | 11 | 2015-10-13T14:57:30Z | [
"python",
"python-2.7",
"list-comprehension"
] | I'm doing some [Google Python Class](https://developers.google.com/edu/python/exercises/basic) exercises and I'm trying to find a pythonic solution to the following problem.
> D. Given a list of numbers, return a list where all adjacent ==
> elements have been reduced to a single element, so [1, 2, 2, 3]
> returns [1, 2, 3]. You may create a new list or modify the passed in
> list.
My try, which is working perfectly is the following:
```
def remove_adjacent(nums):
result = []
for num in nums:
if len(result) == 0 or num != result[-1]:
result.append(num)
return result
```
For example, with `remove_adjacent([2, 2, 3, 3, 3])` the output is `[2, 3]`. Everything's ok.
I'm trying to use list comprehensions in order to achieve this in a more *pythonic* way, so my try is the following:
```
def remove_adjacent(nums):
result = []
result = [num for num in nums if (len(result)==0 or num!=result[-1])]
return result
```
With this, for the same input `[2, 2, 3, 3, 3]`, the output is `[2, 2, 3, 3, 3]` (unchanged). Meeeh! Wrong.
What am I doing wrong with the list comprehension? Am I trying to do something which is impossible to do with list comprehensions? I know it's a bit weird to initialize the list (`result = []`), so maybe it's not possible to do it using list comprehensions in this case. | > Am I trying to do something which is impossible to do with list comprehensions?
Yep. A list comprehension can't refer to itself by name, because the variable doesn't get bound at all until the comprehension is completely done evaluating. That's why you get a `NameError` if you don't have `result = []` in your second code block.
If it's not cheating to use standard modules, consider using `groupby` to group together similar values in your list:
```
>>> import itertools
>>> seq = [1, 2, 2, 3]
>>> [k for k,v in itertools.groupby(seq)]
[1, 2, 3]
>>> seq = [2,2,3,3,3]
>>> [k for k,v in itertools.groupby(seq)]
[2, 3]
``` |
HTTPError: HTTP Error 503: Service Unavailable goslate language detection request : Python | 33,107,292 | 11 | 2015-10-13T15:56:17Z | 33,448,911 | 10 | 2015-10-31T06:52:11Z | [
"python",
"http-error",
"http-status-code-503",
"goslate"
] | I have just started using the goslate library in Python to detect the language of the words in a text. After testing it on 7-8 inputs, I gave it an input that had words written in two languages, Arabic and English, after which it started giving me the error below.
```
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
execfile("C:/test_goslate.py");
File "C:/test_goslate.py", line 12, in <module>
language_id = gs.detect('çÃâïÃËÃâé')
File "C:\Python27\lib\site-packages\goslate.py", line 484, in detect
return self._detect_language(text)
File "C:\Python27\lib\site-packages\goslate.py", line 448, in _detect_language
return self._basic_translate(text[:50].encode('utf-8'), 'en', 'auto')[1]
File "C:\Python27\lib\site-packages\goslate.py", line 251, in _basic_translate
response_content = self._open_url(url)
File "C:\Python27\lib\site-packages\goslate.py", line 181, in _open_url
response = self._opener.open(request, timeout=self._TIMEOUT)
File "C:\Python27\lib\urllib2.py", line 410, in open
response = meth(req, response)
File "C:\Python27\lib\urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python27\lib\urllib2.py", line 448, in error
return self._call_chain(*args)
File "C:\Python27\lib\urllib2.py", line 382, in _call_chain
result = func(*args)
File "C:\Python27\lib\urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 503: Service Unavailable
```
I wrote the code as :
```
# -*- coding: utf8 -*-
import urllib2
import goslate
gs = goslate.Goslate()
language_id = gs.detect('wait الدولة')
print (gs.get_languages()[language_id])
```
and now it is not working at all for any input which I tested previously and is giving me same error.
I tried to find a resolution on Google but nothing helped. This is what I found:
[Link 1 - StackOverflow](http://stackoverflow.com/questions/29676112/python-goslate-translation-request-returns-503-service-unavailable)
I tried updating it with the command as also suggested in the link above :
```
pip install -U goslate
```
but it did not help, as I am already using the newest version. Also, I read in the library documentation that one gets this kind of error for translation when:
```
If you get HTTP 5xx error, it is probably because google has banned your client IP address from transation querying.
You could verify it by access google translation service in browser manually.
You could try the following to overcome this issue:
query through a HTTP/SOCK5 proxy, see Proxy Support
using another google domain for translation: gs = Goslate(service_urls=['http://translate.google.de'])
wait for 3 seconds before issue another querying
```
I tried using proxy connection but nothing helped.
**EDIT**
Can the reason be that Google allows only a certain number of requests per day? In that case, what better can be done? Is there any other Python-based library which can help me resolve this?
Please someone help me with this. I am new to it. | Maybe you are looking for this: <https://pypi.python.org/pypi/textblob>; it is better than goslate.
Since textblob is blocked as of now, py-translate could do the trick:
<https://pypi.python.org/pypi/py-translate/#downloads>
<http://pythonhosted.org/py-translate/devs/api.html>
```
from translate import translator
translator('en', 'es', 'Hello World!')
```
*"py-translate is a CLI Tool for Google Translate written in Python!"*
the first argument to the translator function is the source language, the second is the target language, and the third is the phrase to be translated,
it returns a dictionary, which the documentation refers to as a request interface |
What is the difference between skew and kurtosis functions in pandas vs. scipy? | 33,109,107 | 6 | 2015-10-13T17:37:27Z | 33,109,272 | 11 | 2015-10-13T17:46:05Z | [
"python",
"numpy",
"pandas",
"scipy"
] | I decided to compare skew and kurtosis functions in pandas and scipy.stats, and don't understand why I'm getting different results between libraries.
As far as I can tell from the documentation, both kurtosis functions compute using Fisher's definition, whereas for skew there doesn't seem to be enough of a description to tell if there are any major differences in how they are computed.
```
import numpy as np
import pandas as pd
import scipy.stats.stats as st
heights = np.array([1.46, 1.79, 2.01, 1.75, 1.56, 1.69, 1.88, 1.76, 1.88, 1.78])
print "skewness:", st.skew(heights)
print "kurtosis:", st.kurtosis(heights)
```
this returns:
```
skewness: -0.393524456473
kurtosis: -0.330672097724
```
whereas if I convert to a pandas dataframe:
```
heights_df = pd.DataFrame(heights)
print "skewness:", heights_df.skew()
print "kurtosis:", heights_df.kurtosis()
```
this returns:
```
skewness: 0 -0.466663
kurtosis: 0 0.379705
```
Apologies if I've posted this in the wrong place; not sure if it's a stats or a programming question. | The difference is due to different normalizations. Scipy by default does not correct for bias, whereas pandas does.
You can tell scipy to correct for bias by passing the `bias=False` argument:
```
>>> x = pandas.Series(np.random.randn(10))
>>> stats.skew(x)
-0.17644348972413657
>>> x.skew()
-0.20923623968879457
>>> stats.skew(x, bias=False)
-0.2092362396887948
>>> stats.kurtosis(x)
0.6362620964462327
>>> x.kurtosis()
2.0891062062174464
>>> stats.kurtosis(x, bias=False)
2.089106206217446
```
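The two normalizations can also be reproduced by hand with nothing but the standard library. A sketch, assuming the formulas involved are the biased moment estimator g1 (scipy's default) and the bias-corrected G1 (what pandas uses), which is consistent with the numbers in the question:

```python
import math

def skew_biased(xs):
    # g1 = m3 / m2**1.5, built from the plain (biased) sample moments
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def skew_corrected(xs):
    # G1 = g1 * sqrt(n * (n - 1)) / (n - 2), the bias-corrected version
    n = len(xs)
    return skew_biased(xs) * math.sqrt(n * (n - 1)) / (n - 2)

heights = [1.46, 1.79, 2.01, 1.75, 1.56, 1.69, 1.88, 1.76, 1.88, 1.78]
print(skew_biased(heights))     # ~ -0.3935, matches st.skew(heights)
print(skew_corrected(heights))  # ~ -0.4667, matches heights_df.skew()
```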
There does not appear to be a way to tell pandas to remove the bias correction. |
Why is it not possible to convert "1.7" to integer directly, without converting to float first? | 33,113,938 | 8 | 2015-10-13T22:52:15Z | 33,113,952 | 9 | 2015-10-13T22:54:04Z | [
"python",
"floating-point",
"integer",
"type-conversion",
"coercion"
] | When I type `int("1.7")` Python returns an error (specifically, a `ValueError`). I know that I can convert it to an integer with `int(float("1.7"))`. I would like to know why the first method returns an error. | From the [documentation](https://docs.python.org/2/library/functions.html#int):
> If x is not a number or if base is given, then x must be a string or Unicode object representing an integer literal in radix base ...
Obviously, `"1.7"` does not represent an integer literal in radix base.
If you want to know *why* the Python devs decided to limit themselves to integer literals in radix base, there are a possibly infinite number of reasons and you'd have to ask Guido et al. to know for sure. One guess would be ease of implementation + efficiency. You might think it would be easy for them to implement it as:
1. Interpret number as a float
2. truncate to an integer
Unfortunately, that doesn't work in python as integers can have arbitrary precision and floats cannot. Special casing big numbers could lead to inefficiency for the common case1.
Additionally, forcing you to do `int(float(...))` has an additional benefit in clarity -- it makes it more obvious what the input string *probably* looks like, which can help in debugging elsewhere. In fact, I might argue that even if `int` would accept strings like `"1.7"`, it'd be better to write `int(float("1.7"))` anyway for the increased code clarity.
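A quick sketch of the behaviours discussed here:

```python
try:
    int("1.7")
except ValueError as exc:
    print(exc)             # invalid literal for int() with base 10: '1.7'

print(int(float("1.7")))   # 1  -- float() parses the string, int() truncates
print(int(float("-1.7")))  # -1 -- truncation toward zero, not floor
print(int("10", 2))        # 2  -- radix-base parsing: '10' read as binary
```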
1Assuming some validation. Other languages skip this -- e.g. `ruby` will evaluate `'1e6'.to_i` and give you `1` since it stops parsing at the first non-integral character. Seems like that could lead to fun bugs to track down ... |
Caffe: Reading LMDB from Python | 33,117,607 | 9 | 2015-10-14T05:53:29Z | 33,123,313 | 17 | 2015-10-14T10:47:24Z | [
"python",
"caffe",
"lmdb"
] | I've extracted features using caffe, which generates a .mdb file.
Then I'm trying to read it using Python and display it as a readable number.
```
import lmdb
lmdb_env = lmdb.open('caffefeat')
lmdb_txn = lmdb_env.begin()
lmdb_cursor = lmdb_txn.cursor()
for key, value in lmdb_cursor:
print str(value)
```
This prints out a very long line of unreadable, broken characters.
Then I tried printing int(value), which returns the following:
```
ValueError: invalid literal for int() with base 10: '\x08\x80 \x10\x01\x18\x015\x8d\x80\xad?5'
```
float(value) gives the following:
```
ValueError: could not convert string to float:? 5????5
```
Is this a problem with the lmdb file itself, or does it have to do with conversion of data type? | Here's the working code I figured out
```
import caffe
import lmdb
lmdb_env = lmdb.open('directory_containing_mdb')
lmdb_txn = lmdb_env.begin()
lmdb_cursor = lmdb_txn.cursor()
datum = caffe.proto.caffe_pb2.Datum()
for key, value in lmdb_cursor:
datum.ParseFromString(value)
label = datum.label
data = caffe.io.datum_to_array(datum)
    # datum.label is a scalar; datum_to_array returns a numpy array
    print label, data
``` |
Convert String back to List | 33,127,758 | 2 | 2015-10-14T14:13:51Z | 33,127,781 | 8 | 2015-10-14T14:15:00Z | [
"python",
"string",
"csv"
] | I messed up yesterday and saved a dataframe to CSV, and in that dataframe I had a column which was a list of strings. Now that list of strings is a string (of a list of strings) when I import it back into Python from the CSV. Is there a way I can, upon importing, change it back to a list of strings?
Example:
```
testList = localGov["DataElements"][4]
testList
Out[62]: "['Ethnicity', 'Sex', 'Cause of Death', 'Count', 'Percent']"
```
The closest I have been able to come is using the following but it leaves a whitespace in front of some of the characters.
```
testList.strip("[]").replace("'","").split(",")
Out[74]: ['Ethnicity', ' Sex', ' Cause of Death', ' Count', ' Percent']
``` | That's what [`ast.literal_eval`](https://docs.python.org/2/library/ast.html) is for:
```
>>> from ast import literal_eval
>>>
>>> literal_eval( "['Ethnicity', 'Sex', 'Cause of Death', 'Count', 'Percent']")
['Ethnicity', 'Sex', 'Cause of Death', 'Count', 'Percent']
``` |
How to set class attribute with await in __init__ | 33,128,325 | 13 | 2015-10-14T14:36:09Z | 33,134,213 | 14 | 2015-10-14T19:42:46Z | [
"python",
"python-3.x",
"python-asyncio"
] | How can I define a class with `await` in the constructor or class body?
For example what I want:
```
import asyncio
# some code
class Foo(object):
async def __init__(self, settings):
self.settings = settings
self.pool = await create_pool(dsn)
foo = Foo(settings)
# it raises:
# TypeError: __init__() should return None, not 'coroutine'
```
or example with class body attribute:
```
class Foo(object):
self.pool = await create_pool(dsn) # Sure it raises syntax Error
def __init__(self, settings):
self.settings = settings
foo = Foo(settings)
```
My solution (But I would like to see a more elegant way)
```
class Foo(object):
def __init__(self, settings):
self.settings = settings
async def init(self):
self.pool = await create_pool(dsn)
foo = Foo(settings)
await foo.init()
``` | Most magic methods aren't designed to work with `async def`/`await` - in general, you should only be using `await` inside the dedicated asynchronous magic methods - `__aiter__`, `__anext__`, `__aenter__`, and `__aexit__`. Using it inside other magic methods either won't work at all (as is the case with `__init__`), or will force you to always use whatever triggers the magic method call in an asynchronous context.
Existing `asyncio` libraries tend to deal with this in one of two ways: First, I've seen the factory pattern used ([`asyncio-redis`](https://github.com/jonathanslenders/asyncio-redis/blob/master/asyncio_redis/pool.py#L27), for example):
```
import asyncio
dsn = "..."
class Foo(object):
@classmethod
async def create(cls, settings):
self = Foo()
self.settings = settings
self.pool = await create_pool(dsn)
return self
async def main(settings):
settings = "..."
foo = await Foo.create(settings)
```
Other libraries use a top-level coroutine function that creates the object, rather than a factory method:
```
import asyncio
dsn = "..."
async def create_foo(settings):
foo = Foo(settings)
await foo._init()
return foo
class Foo(object):
def __init__(self, settings):
self.settings = settings
async def _init(self):
self.pool = await create_pool(dsn)
async def main():
settings = "..."
foo = await create_foo(settings)
```
The `create_pool` function from `aiopg` that you want to call in `__init__` is actually using this exact pattern.
This at least addresses the `__init__` issue. I haven't seen class variables that make asynchronous calls in the wild that I can recall, so I don't know that any well-established patterns have emerged. |
How to set class attribute with await in __init__ | 33,128,325 | 13 | 2015-10-14T14:36:09Z | 33,140,788 | 9 | 2015-10-15T05:47:55Z | [
"python",
"python-3.x",
"python-asyncio"
] | How can I define a class with `await` in the constructor or class body?
For example what I want:
```
import asyncio
# some code
class Foo(object):
async def __init__(self, settings):
self.settings = settings
self.pool = await create_pool(dsn)
foo = Foo(settings)
# it raises:
# TypeError: __init__() should return None, not 'coroutine'
```
or example with class body attribute:
```
class Foo(object):
self.pool = await create_pool(dsn) # Sure it raises syntax Error
def __init__(self, settings):
self.settings = settings
foo = Foo(settings)
```
My solution (But I would like to see a more elegant way)
```
class Foo(object):
def __init__(self, settings):
self.settings = settings
async def init(self):
self.pool = await create_pool(dsn)
foo = Foo(settings)
await foo.init()
``` | I would recommend a separate factory method. It's safe and straightforward. However, if you insist on an `async` version of `__init__()`, here's an example:
```
def asyncinit(cls):
__new__ = cls.__new__
async def init(obj, *arg, **kwarg):
await obj.__init__(*arg, **kwarg)
return obj
def new(cls, *arg, **kwarg):
obj = __new__(cls, *arg, **kwarg)
coro = init(obj, *arg, **kwarg)
#coro.__init__ = lambda *_1, **_2: None
return coro
cls.__new__ = new
return cls
```
**Usage:**
```
@asyncinit
class Foo(object):
def __new__(cls):
'''Do nothing. Just for test purpose.'''
print(cls)
return super().__new__(cls)
async def __init__(self):
self.initialized = True
```
```
async def f():
print((await Foo()).initialized)
loop = asyncio.get_event_loop()
loop.run_until_complete(f())
```
**Output:**
```
<class '__main__.Foo'>
True
```
**Explanation:**
Your class construction must return a `coroutine` object instead of its own instance. |
Convert a list of strings to either int or float | 33,130,279 | 5 | 2015-10-14T16:05:40Z | 33,130,297 | 9 | 2015-10-14T16:06:38Z | [
"python",
"list"
] | I have a list which looks something like this:
```
['1', '2', '3.4', '5.6', '7.8']
```
How do I change the first two to `int` and the last three to `float`?
I want my list to look like this:
```
[1, 2, 3.4, 5.6, 7.8]
``` | Use a [conditional inside a list comprehension](http://stackoverflow.com/q/4406389/4099593)
```
>>> s = ['1', '2', '3.4', '5.6', '7.8']
>>> [float(i) if '.' in i else int(i) for i in s]
[1, 2, 3.4, 5.6, 7.8]
```
Exponentials are an interesting edge case; you can add to the conditional.
```
>>> s = ['1', '2', '3.4', '5.6', '7.8' , '1e2']
>>> [float(i) if '.' in i or 'e' in i else int(i) for i in s]
[1, 2, 3.4, 5.6, 7.8, 100.0]
```
Using [`isdigit`](https://docs.python.org/2/library/stdtypes.html#str.isdigit) is the best as it takes care of all the edge cases (mentioned by [Steven](http://stackoverflow.com/users/1322401/steven-rumbalski) in a [comment](http://stackoverflow.com/questions/33130279/convert-a-list-of-strings-to-int-and-float/33130297?noredirect=1#comment54072755_33130297))
```
>>> s = ['1', '2', '3.4', '5.6', '7.8', '1e2']
>>> [int(i) if i.isdigit() else float(i) for i in s]
[1, 2, 3.4, 5.6, 7.8, 100.0]
``` |
Evenly distribute within a list (Google Foobar: Maximum Equality) | 33,134,485 | 2 | 2015-10-14T19:57:27Z | 33,134,842 | 9 | 2015-10-14T20:20:16Z | [
"python",
"list",
"python-2.7"
] | This question comes from Google Foobar, and my code passes all but the last test, with the input/output hidden.
# The prompt
> In other words, choose two elements of the array, x[i] and x[j]
> (i distinct from j) and simultaneously increment x[i] by 1 and decrement
> x[j] by 1. Your goal is to get as many elements of the array to have
> equal value as you can.
>
> For example, if the array was [1,4,1] you could perform the operations
> as follows:
>
> Send a rabbit from the 1st car to the 0th: increment x[0], decrement
> x[1], resulting in [2,3,1] Send a rabbit from the 1st car to the 2nd:
> increment x[2], decrement x[1], resulting in [2,2,2].
>
> All the elements of the array are equal now, and you've got a
> strategy to report back to Beta Rabbit!
>
> Note that if the array was [1,2], the maximum possible number of equal
> elements we could get is 1, as the cars could never have the same
> number of rabbits in them.
>
> Write a function answer(x), which takes the array of integers x and
> returns the maximum number of equal array elements that we can get, by
> doing the above described command as many times as needed.
>
> The number of cars in the train (elements in x) will be at least 2,
> and no more than 100. The number of rabbits that want to share a car
> (each element of x) will be an integer in the range [0, 1000000].
# My code
```
from collections import Counter
def most_common(lst):
data = Counter(lst)
return data.most_common(1)[0][1]
def answer(x):
"""The goal is to take all of the rabbits in list x and distribute
them equally across the original list elements."""
total = sum(x)
length = len(x)
    # Find out how many are left over when distributing naively.
div, mod = divmod(total, length)
# Because of the variable size of the list, the remainder
# might be greater than the length of the list.
# I just realized this is unnecessary.
while mod > length:
div += length
mod -= length
# Create a new list the size of x with the base number of rabbits.
result = [div] * length
# Distribute the leftovers from earlier across the list.
for i in xrange(mod):
result[i] += 1
# Return the most common element.
return most_common(result)
```
It runs well under my own testing purposes, handling one million tries in ten or so seconds. But it fails under an unknown input.
Have I missed something obvious, or did I make an assumption I shouldn't have? | Sorry, but your code doesn't work in my testing. I fed it [0, 0, 0, 0, 22] and got back a list of [5, 5, 4, 4, 4] for an answer of 3; the maximum would be 4 identical cars, with the original input being one such example. [4, 4, 4, 4, 6] would be another. I suspect that's your problem, and that there are quite a few other such examples in the data base.
For N cars, the maximum would be either N (if the rabbit population is divisible by the number of cars) or N-1. This seems so simple that I fear I'm missing a restriction in the problem. It didn't ask for a balanced population, just as many car populations as possible should be equal. In short:
```
def answer(census):
size = len(census)
return size if sum(census) % size == 0 else (size-1)
``` |
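As a quick sanity check, the function above (repeated here so the snippet is self-contained) reproduces the examples from the prompt:

```python
def answer(census):
    size = len(census)
    return size if sum(census) % size == 0 else (size - 1)

print(answer([1, 4, 1]))         # 3 -- 6 rabbits split evenly over 3 cars
print(answer([1, 2]))            # 1 -- 3 rabbits over 2 cars can never divide evenly
print(answer([0, 0, 0, 0, 22]))  # 4 -- e.g. [4, 4, 4, 4, 6]
```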
Can Golang multiply strings like Python can? | 33,139,020 | 8 | 2015-10-15T02:52:25Z | 33,139,049 | 15 | 2015-10-15T02:54:48Z | [
"python",
"string",
"python-3.x",
"go"
] | Python can multiply strings like so:
```
Python 3.4.3 (default, Mar 26 2015, 22:03:40)
[GCC 4.9.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> x = 'my new text is this long'
>>> y = '#' * len(x)
>>> y
'########################'
>>>
```
Can Golang do the equivalent somehow? | It has a function instead of an operator, [Repeat](https://golang.org/pkg/strings/#Repeat). Here's a port of your Python example:
```
package main
import (
"fmt"
"strings"
"unicode/utf8"
)
func main() {
x := "my new text is this long"
y := strings.Repeat("#", utf8.RuneCountInString(x))
fmt.Println(y)
}
```
Note that I've used `utf8.RuneCountInString(x)` instead of `len(x)`; the former counts "runes" (Unicode code points), while the latter counts bytes. In the case of `"my new text is this long"`, the difference doesn't matter since all the characters are only one byte, but it's good to get into the habit of specifying what you mean.
(In Python, `len` does both jobs depending on whether or not the string is a Unicode string; in Go, you have to pick the function to match the semantics you want.) |
how to use python's any | 33,139,089 | 6 | 2015-10-15T02:58:38Z | 33,139,109 | 12 | 2015-10-15T03:01:02Z | [
"python",
"any"
] | I feel very confused about some code like this [not written by me]:
```
version = any(func1(), func2()) # wrong, should be any([func1(), func2()])
def func1():
if something:
return 1
else:
return None
def func2():
if something:
return 2
else:
return 3
```
`version` must be a number: when `[func1(), func2()]` is `[1, None]`, it should return 1; when it is `[None, 2]`, it should return 2; when it is `[1, 2]`, it should return 1.
so I think it's wrong to use `any()` in this code, because `any()` just returns `True` or `False`. If I rewrite this logic another way, I cannot find a graceful way as a Pythoner.
I want to know whether `any()` can achieve this logic, and if not, how to achieve it gracefully? | You can simply use `or` here.
```
version = func1() or func2()
```
Make sure the functions are defined before trying to call them.
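A minimal sketch of that behaviour (with stand-in functions, since `something` in the question is undefined):

```python
def func1():
    return None  # falsy, so `or` falls through to the next operand

def func2():
    return 2

version = func1() or func2()
print(version)    # 2

print(1 or 2)     # 1 -- the first truthy operand wins; 2 is never evaluated
print(None or 0)  # 0 -- no truthy operand, so the last value is returned
```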
This works because `or` returns the first truthy value, or the last value if no value is truthy. And `None` is considered falsy in a Boolean context. |
why does x -= x + 4 return -4 instead of 4 | 33,139,310 | 5 | 2015-10-15T03:24:14Z | 33,139,322 | 14 | 2015-10-15T03:25:46Z | [
"python",
"operators",
"assignment-operator"
] | new to python and trying to wrestle with the finer points of assignment operators. Here's my code and then the question.
```
x = 5
print(x)
x -= x + 4
print(x)
```
The above code returns 5 the first time, but -4 on the second print. In my head I feel that the number should actually be 4, as I am reading this as
x = x - x + 4. However, I know that is wrong, as Python returns -4 instead. I would be grateful if anyone could explain it to me (in simple terms, as I am a novice), as I have been pounding my head on the table over this one. | `x -= x + 4` can be written as:
```
x = x - (x + 4) = x - x - 4 = -4
``` |
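The grouping can be checked directly; here is a quick sketch of the two readings side by side:

```python
x = 5
x -= x + 4     # augmented assignment: x = x - (x + 4), right side grouped first
print(x)       # -4

y = 5
y = y - y + 4  # the reading the asker had in mind: (y - y) + 4
print(y)       # 4
```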
How to feed caffe multi label data in HDF5 format? | 33,140,000 | 4 | 2015-10-15T04:38:41Z | 33,166,461 | 14 | 2015-10-16T09:05:52Z | [
"python",
"neural-network",
"deep-learning",
"caffe"
] | I want to use caffe with a vector label, not integer. I have checked some answers, and it seems HDF5 is a better way. But then I'm stucked with error like:
> accuracy\_layer.cpp:34] Check failed: `outer_num_ * inner_num_ == bottom[1]->count()` (50 vs. 200) Number of labels must match number of predictions; e.g., if label axis == 1 and prediction shape is (N, C, H, W), label count (number of labels) must be `N*H*W`, with integer values in {0, 1, ..., C-1}.
with HDF5 created as:
```
f = h5py.File('train.h5', 'w')
f.create_dataset('data', (1200, 128), dtype='f8')
f.create_dataset('label', (1200, 4), dtype='f4')
```
My network is generated by:
```
def net(hdf5, batch_size):
n = caffe.NetSpec()
n.data, n.label = L.HDF5Data(batch_size=batch_size, source=hdf5, ntop=2)
n.ip1 = L.InnerProduct(n.data, num_output=50, weight_filler=dict(type='xavier'))
n.relu1 = L.ReLU(n.ip1, in_place=True)
n.ip2 = L.InnerProduct(n.relu1, num_output=50, weight_filler=dict(type='xavier'))
n.relu2 = L.ReLU(n.ip2, in_place=True)
n.ip3 = L.InnerProduct(n.relu1, num_output=4, weight_filler=dict(type='xavier'))
n.accuracy = L.Accuracy(n.ip3, n.label)
n.loss = L.SoftmaxWithLoss(n.ip3, n.label)
return n.to_proto()
with open(PROJECT_HOME + 'auto_train.prototxt', 'w') as f:
f.write(str(net('/home/romulus/code/project/train.h5list', 50)))
with open(PROJECT_HOME + 'auto_test.prototxt', 'w') as f:
f.write(str(net('/home/romulus/code/project/test.h5list', 20)))
```
It seems I should increase the label number and use integers rather than arrays, but if I do this, caffe complains that the number of data and labels is not equal, then exits.
So, what is the correct format to feed multi label data?
Also, I'm wondering why no one has simply written down how the HDF5 data format maps to caffe blobs. | Answer to this question's title:
The HDF5 file should have two datasets in the root, named "data" and "label", respectively. The shape is (`data amount`, `dimension`). I'm using only one-dimensional data, so I'm not sure what the order of `channel`, `width`, and `height` is. Maybe it does not matter. `dtype` should be float or double.
Sample code creating a training set with `h5py`:
```
import h5py, os
import numpy as np
f = h5py.File('train.h5', 'w')
# 1200 data, each is a 128-dim vector
f.create_dataset('data', (1200, 128), dtype='f8')
# Data's labels, each is a 4-dim vector
f.create_dataset('label', (1200, 4), dtype='f4')
# Fill in something with fixed pattern
# Regularize values to between 0 and 1, or SigmoidCrossEntropyLoss will not work
for i in range(1200):
a = np.empty(128)
if i % 4 == 0:
for j in range(128):
a[j] = j / 128.0;
l = [1,0,0,0]
elif i % 4 == 1:
for j in range(128):
a[j] = (128 - j) / 128.0;
l = [1,0,1,0]
elif i % 4 == 2:
for j in range(128):
a[j] = (j % 6) / 128.0;
l = [0,1,1,0]
elif i % 4 == 3:
for j in range(128):
a[j] = (j % 4) * 4 / 128.0;
l = [1,0,1,1]
f['data'][i] = a
f['label'][i] = l
f.close()
```
Also, the accuracy layer is not needed; simply removing it is fine. The next problem is the loss layer. Since `SoftmaxWithLoss` has only one output (the index of the dimension with the maximum value), it can't be used for multi-label problems. Thanks to Adian and Shai, I found that `SigmoidCrossEntropyLoss` works well in this case.
Below is the full code, from data creation, training network, and getting test result:
> main.py (modified from the caffe LeNet example)
```
import os, sys
PROJECT_HOME = '.../project/'
CAFFE_HOME = '.../caffe/'
os.chdir(PROJECT_HOME)
sys.path.insert(0, CAFFE_HOME + 'caffe/python')
import caffe, h5py
from pylab import *
from caffe import layers as L
def net(hdf5, batch_size):
n = caffe.NetSpec()
n.data, n.label = L.HDF5Data(batch_size=batch_size, source=hdf5, ntop=2)
n.ip1 = L.InnerProduct(n.data, num_output=50, weight_filler=dict(type='xavier'))
n.relu1 = L.ReLU(n.ip1, in_place=True)
n.ip2 = L.InnerProduct(n.relu1, num_output=50, weight_filler=dict(type='xavier'))
n.relu2 = L.ReLU(n.ip2, in_place=True)
n.ip3 = L.InnerProduct(n.relu2, num_output=4, weight_filler=dict(type='xavier'))
n.loss = L.SigmoidCrossEntropyLoss(n.ip3, n.label)
return n.to_proto()
with open(PROJECT_HOME + 'auto_train.prototxt', 'w') as f:
f.write(str(net(PROJECT_HOME + 'train.h5list', 50)))
with open(PROJECT_HOME + 'auto_test.prototxt', 'w') as f:
f.write(str(net(PROJECT_HOME + 'test.h5list', 20)))
caffe.set_device(0)
caffe.set_mode_gpu()
solver = caffe.SGDSolver(PROJECT_HOME + 'auto_solver.prototxt')
solver.net.forward()
solver.test_nets[0].forward()
solver.step(1)
niter = 200
test_interval = 10
train_loss = zeros(niter)
test_acc = zeros(int(np.ceil(niter * 1.0 / test_interval)))
print len(test_acc)
output = zeros((niter, 8, 4))
# The main solver loop
for it in range(niter):
solver.step(1) # SGD by Caffe
train_loss[it] = solver.net.blobs['loss'].data
solver.test_nets[0].forward(start='data')
output[it] = solver.test_nets[0].blobs['ip3'].data[:8]
if it % test_interval == 0:
print 'Iteration', it, 'testing...'
correct = 0
data = solver.test_nets[0].blobs['ip3'].data
label = solver.test_nets[0].blobs['label'].data
for test_it in range(100):
solver.test_nets[0].forward()
# Positive values map to label 1, while negative values map to label 0
for i in range(len(data)):
for j in range(len(data[i])):
if data[i][j] > 0 and label[i][j] == 1:
correct += 1
elif data[i][j] <= 0 and label[i][j] == 0:
correct += 1
test_acc[int(it / test_interval)] = correct * 1.0 / (len(data) * len(data[0]) * 100)
# Training and testing done, outputting convergence graph
_, ax1 = subplots()
ax2 = ax1.twinx()
ax1.plot(arange(niter), train_loss)
ax2.plot(test_interval * arange(len(test_acc)), test_acc, 'r')
ax1.set_xlabel('iteration')
ax1.set_ylabel('train loss')
ax2.set_ylabel('test accuracy')
_.savefig('converge.png')
# Check the result of last batch
print solver.test_nets[0].blobs['ip3'].data
print solver.test_nets[0].blobs['label'].data
```
The h5list files simply contain the paths of the h5 files, one per line:
> train.h5list
```
/home/foo/bar/project/train.h5
```
> test.h5list
```
/home/foo/bar/project/test.h5
```
and the solver:
> auto\_solver.prototxt
```
train_net: "auto_train.prototxt"
test_net: "auto_test.prototxt"
test_iter: 10
test_interval: 20
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
lr_policy: "inv"
gamma: 0.0001
power: 0.75
display: 100
max_iter: 10000
snapshot: 5000
snapshot_prefix: "sed"
solver_mode: GPU
```
Converge graph:
[](http://i.stack.imgur.com/i0pHF.png)
Last batch result:
```
[[ 35.91593933 -37.46276474 -6.2579031 -6.30313492]
[ 42.69248581 -43.00864792 13.19664764 -3.35134125]
[ -1.36403108 1.38531208 2.77786589 -0.34310576]
[ 2.91686511 -2.88944006 4.34043217 0.32656598]
...
[ 35.91593933 -37.46276474 -6.2579031 -6.30313492]
[ 42.69248581 -43.00864792 13.19664764 -3.35134125]
[ -1.36403108 1.38531208 2.77786589 -0.34310576]
[ 2.91686511 -2.88944006 4.34043217 0.32656598]]
[[ 1. 0. 0. 0.]
[ 1. 0. 1. 0.]
[ 0. 1. 1. 0.]
[ 1. 0. 1. 1.]
...
[ 1. 0. 0. 0.]
[ 1. 0. 1. 0.]
[ 0. 1. 1. 0.]
[ 1. 0. 1. 1.]]
```
I think this code still has many things to improve. Any suggestions are appreciated.
See [caffe documentation](http://stackoverflow.com/documentation/caffe/5344/prepare-data-for-training/19117/prepare-arbitrary-data-in-hdf5-format#t=201608100612406076606) for more information. |
TypeError: only integer arrays with one element can be converted to an index 3 | 33,144,039 | 6 | 2015-10-15T08:55:21Z | 33,144,798 | 9 | 2015-10-15T09:30:50Z | [
"python",
"arrays",
"list",
"numpy",
"append"
] | I'm having this error in the title, and don't know what's wrong. It's working when I use np.hstack instead of np.append, but I would like to make this faster, so use append.
> time\_list is a list of floats
>
> heights is a 1d np.array of floats
```
j = 0
n = 30
time_interval = 1200
axe_x = []
while j < np.size(time_list,0)-time_interval:
if (not np.isnan(heights[j])) and (not np.isnan(heights[j+time_interval])):
axe_x.append(time_list[np.arange(j+n,j+(time_interval-n))])
```
---
```
File "....", line .., in <module>
axe_x.append(time_list[np.arange(j+n,j+(time_interval-n))])
TypeError: only integer arrays with one element can be converted to an index
The issue is just as the error indicates: `time_list` is a normal Python list, and hence you cannot index it with a numpy array (unless the array has a single element). Example -
```
In [136]: time_list = [1,2,3,4,5,6,7,8,9,10,11,12,13,14]
In [137]: time_list[np.arange(5,6)]
Out[137]: 6
In [138]: time_list[np.arange(5,7)]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-138-ea5b650a8877> in <module>()
----> 1 time_list[np.arange(5,7)]
TypeError: only integer arrays with one element can be converted to an index
```
If you want to do that kind of indexing, you would need to make `time_list` a `numpy.array`. Example -
```
In [141]: time_list = np.array(time_list)
In [142]: time_list[np.arange(5,6)]
Out[142]: array([6])
In [143]: time_list[np.arange(5,7)]
Out[143]: array([6, 7])
```
---
Another thing to note is that in your `while` loop you never increase `j`, so it may end up in an infinite loop; you should also increase `j` by some amount (maybe `time_interval`?).
---
But according to the requirement you posted in comments -
> axe\_x should be a 1d array of floats generated from the time\_list list
You should use `.extend()` instead of `.append()`; `.append` would create a list of arrays for you. But if you need a 1D array, you need to use `.extend()`. Example -
```
time_list = np.array(time_list)
while j < np.size(time_list,0)-time_interval:
if (not np.isnan(heights[j])) and (not np.isnan(heights[j+time_interval])):
axe_x.extend(time_list[np.arange(j+n,j+(time_interval-n))])
j += time_interval #Or what you want to increase it by.
``` |
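The `.append()`/`.extend()` difference in isolation, with plain lists so no NumPy is needed (illustrative values, not the asker's data):

```python
axe = []
axe.append([1.0, 2.0])  # nests the whole list as a single element
axe.extend([3.0, 4.0])  # splices the elements in one by one
print(axe)              # [[1.0, 2.0], 3.0, 4.0]
```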
advance time artificially in pytest | 33,150,313 | 9 | 2015-10-15T13:48:21Z | 33,150,603 | 7 | 2015-10-15T14:01:52Z | [
"python",
"py.test"
] | I have code that depends on elapsed time (for instance: If 10 minutes has passed)
What is the best way to simulate this in pytest?
Monkey patching methods in module time?
Example code (the tested code - a bit schematic but conveys the message):
```
current_time = datetime.datetime.utcnow()
retry_time = current_time + datetime.timedelta(minutes=10)
#time_in_db represents time extracted from DB
if time_in_db > retry_time:
#perform the retry
``` | [FreezeGun](https://github.com/spulec/freezegun/) is probably the easiest solution.
Sample code from its readme:
```
from freezegun import freeze_time
@freeze_time("2012-01-14")
def test():
assert datetime.datetime.now() == datetime.datetime(2012, 01, 14)
``` |
Addition of list and NumPy number | 33,151,072 | 11 | 2015-10-15T14:22:44Z | 33,151,321 | 12 | 2015-10-15T14:33:29Z | [
"python",
"list",
"numpy"
] | If you add an integer to a list, you get an error raised by the \_\_add\_\_ function of the list (I suppose):
```
>>> [1,2,3] + 3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can only concatenate list (not "int") to list
```
If you add a list to a NumPy array, I assume that the \_\_add\_\_ function of the NumPy array converts the list to a NumPy array and adds the lists
```
>>> np.array([3]) + [1,2,3]
array([4, 5, 6])
```
But what happens in the following?
```
>>> [1,2,3] + np.array([3])
array([4, 5, 6])
```
How does the list know how to handle addition with NumPy arrays? | `list` does not know how to handle addition with NumPy arrays. Even in `[1,2,3] + np.array([3])`, it's NumPy arrays that handle the addition.
As [documented in the data model](https://docs.python.org/2/reference/datamodel.html#coercion-rules):
> * For objects x and y, first `x.__op__(y)` is tried. If this is not implemented or returns NotImplemented, `y.__rop__(x)` is tried. If
> this is also not implemented or returns NotImplemented, a TypeError
> exception is raised. But see the following exception:
> * Exception to the previous item: if the left operand is an instance of a built-in type or a new-style class, and the right operand is an
> instance of a proper subclass of that type or class and overrides the
> base’s `__rop__()` method, the right operand’s `__rop__()` method is
> tried before the left operand’s `__op__()` method.
When you do
```
[1,2,3] + np.array([3])
```
what is internally called is
```
np.array([3]).__radd__([1,2,3])
``` |
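A minimal pure-Python sketch of the same fallback mechanism, using an illustrative class (not NumPy itself): `list.__add__` does not know how to handle the right operand, so Python falls back to the right operand's `__radd__`.

```python
class AddsThree:
    # Called when the left operand's __add__ gives up (returns NotImplemented).
    def __radd__(self, other):
        return [v + 3 for v in other]

print([1, 2, 3] + AddsThree())  # [4, 5, 6]
```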
Addition of list and NumPy number | 33,151,072 | 11 | 2015-10-15T14:22:44Z | 33,151,502 | 8 | 2015-10-15T14:41:12Z | [
"python",
"list",
"numpy"
] | If you add an integer to a list, you get an error raised by the \_\_add\_\_ function of the list (I suppose):
```
>>> [1,2,3] + 3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can only concatenate list (not "int") to list
```
If you add a list to a NumPy array, I assume that the \_\_add\_\_ function of the NumPy array converts the list to a NumPy array and adds the lists
```
>>> np.array([3]) + [1,2,3]
array([4, 5, 6])
```
But what happens in the following?
```
>>> [1,2,3] + np.array([3])
array([4, 5, 6])
```
How does the list know how to handle addition with NumPy arrays? | It is because of the `__radd__` method of np.array; check out this link: <http://www.rafekettler.com/magicmethods.html#numeric> (paragraph "Reflected arithmetic operators").
In fact, when you try `[1,2,3].__add__(np.array([3]))`, it raises an error, so `np.array([3]).__radd__([1,2,3])` is called. |
How do I use multiple conditions with pyspark.sql.funtions.when()? | 33,151,861 | 5 | 2015-10-15T14:56:35Z | 33,157,063 | 7 | 2015-10-15T19:37:07Z | [
"python",
"apache-spark"
] | I have a dataframe with a few columns. Now I want to derive a new column from 2 other columns:
```
from pyspark.sql import functions as F
new_df = df.withColumn("new_col", F.when(df["col-1"] > 0.0 & df["col-2"] > 0.0, 1).otherwise(0))
```
With this I only get an exception:
```
py4j.Py4JException: Method and([class java.lang.Double]) does not exist
```
It works with just one condition like this:
```
new_df = df.withColumn("new_col", F.when(df["col-1"] > 0.0, 1).otherwise(0))
```
Does anyone know how to use multiple conditions?
I'm using Spark 1.4. | Use parentheses to enforce the desired operator precedence; `&` binds more tightly than the comparison operators, so the original expression is parsed as `df["col-1"] > (0.0 & df["col-2"]) > 0.0`:
```
F.when( (df["col-1"]>0.0) & (df["col-2"]>0.0), 1).otherwise(0)
``` |
Why wasn't the string at the top of this function printed? | 33,156,785 | 4 | 2015-10-15T19:21:41Z | 33,156,870 | 10 | 2015-10-15T19:25:52Z | [
"python",
"docstring"
] | I encountered the following function in a tutorial. When I call the function, `"This prints a passed string into this function"` isn't printed. Why does the function not print this piece of text when called?
```
def printme(str):
"This prints a passed string into this function"
print str
return
# Now you can call printme function
printme("I'm first call to user defined function!")
printme("Again second call to the same function")
What you're seeing there is a document string, or *docstring* for short.
A docstring is a string that is supposed to document the thing it is attached to. In your case, it is attached to a function, and as such is supposed to document the function. You can also have docstrings for classes and modules.
You create docstrings by simply placing a string on its own as the very first thing in a function (or class, or module). The interpreter will then use it as a docstring and make it available in the special `__doc__` attribute:
```
>>> def printme( str ):
"This prints a passed string into this function"
print str
>>> printme.__doc__
'This prints a passed string into this function'
```
Docstrings are also used by the `help()` function:
```
>>> help(printme)
Help on function printme in module __main__:
printme(str)
This prints a passed string into this function
```
The common practice for docstrings, to make it clear that they are supposed to be actual docstrings and not just misplaced “proper” strings, is to use triple quotes. Triple quotes create multi-line strings, which also allows docstrings to be multi-line:
```
def printme (str):
'''
Print the string passed in the `str` argument to the
standard output. This is essentially just a wrapper
around Python's built-in `print`.
'''
print(str)
```
Various docstring conventions are also described in [PEP 257](https://www.python.org/dev/peps/pep-0257/). |
pandas v0.17.0: AttributeError: 'unicode' object has no attribute 'version' | 33,159,634 | 9 | 2015-10-15T22:32:23Z | 33,168,054 | 18 | 2015-10-16T10:20:10Z | [
"python",
"pandas"
] | I installed pandas v0.17.0 directly from the sources on my linux suse 13.2 64 bits. I had previously v0.14.1 installed using yast.
Now
```
>>> import pandas
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.7/site-packages/pandas-0.17.0-py2.7-linux-x86_64.egg/pandas/__init__.py", line 44, in <module>
from pandas.core.api import *
File "/usr/lib64/python2.7/site-packages/pandas-0.17.0-py2.7-linux-x86_64.egg/pandas/core/api.py", line 9, in <module>
from pandas.core.groupby import Grouper
File "/usr/lib64/python2.7/site-packages/pandas-0.17.0-py2.7-linux-x86_64.egg/pandas/core/groupby.py", line 16, in <module>
from pandas.core.frame import DataFrame
File "/usr/lib64/python2.7/site-packages/pandas-0.17.0-py2.7-linux-x86_64.egg/pandas/core/frame.py", line 41, in <module>
from pandas.core.series import Series
File "/usr/lib64/python2.7/site-packages/pandas-0.17.0-py2.7-linux-x86_64.egg/pandas/core/series.py", line 2864, in <module>
import pandas.tools.plotting as _gfx
File "/usr/lib64/python2.7/site-packages/pandas-0.17.0-py2.7-linux-x86_64.egg/pandas/tools/plotting.py", line 135, in <module>
if _mpl_ge_1_5_0():
File "/usr/lib64/python2.7/site-packages/pandas-0.17.0-py2.7-linux-x86_64.egg/pandas/tools/plotting.py", line 130, in _mpl_ge_1_5_0
return (matplotlib.__version__ >= LooseVersion('1.5')
File "/usr/lib64/python2.7/distutils/version.py", line 296, in __cmp__
return cmp(self.version, other.version)
AttributeError: 'unicode' object has no attribute 'version'
```
From some posts, I learned that it might be related to the fact that multiple versions are installed. I uninstalled the old pandas version using YaST and reinstalled the newest one, but the problem persists. | ```
pip install -U matplotlib
```
worked for me.
Thanks joris! |
Getting only element from a single-element list in Python? | 33,161,448 | 21 | 2015-10-16T02:09:27Z | 33,161,467 | 32 | 2015-10-16T02:12:23Z | [
"python",
"list",
"python-2.7"
] | When a Python list is known to always contain a single item, is there way to access it other than:
```
mylist[0]
```
You may ask, 'Why would you want to?'. Curiosity alone. There seems to be an alternative way to do *everything* in Python. | Sequence unpacking:
```
singleitem, = mylist
# Identical in behavior (byte code produced is the same),
# but arguably more readable since a lone trailing comma could be missed:
[singleitem] = mylist
```
Explicit use of iterator protocol:
```
singleitem = next(iter(mylist))
```
Destructive pop:
```
singleitem = mylist.pop()
```
Negative index:
```
singleitem = mylist[-1]
```
Set via single iteration `for` (because the loop variable remains available with its last value when a loop terminates):
```
for singleitem in mylist: break
```
Many others (combining or varying bits of the above, or otherwise relying on implicit iteration), but you get the idea. |
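One property worth noting about the unpacking variants, sketched below: unlike `mylist[0]`, they also validate the "exactly one element" assumption.

```python
mylist = [42]
(single,) = mylist        # same as: single, = mylist
print(single)             # 42

try:
    (item,) = [1, 2]      # not a single-element list
except ValueError as e:
    print(e)              # e.g. "too many values to unpack"
```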
How to create standalone executable file from python 3.5 scripts? | 33,168,229 | 4 | 2015-10-16T10:28:32Z | 33,174,611 | 13 | 2015-10-16T15:48:39Z | [
"python",
"exe",
"python-3.5"
Most of the programs available only support up to Python version 3.4. | You can use [PyInstaller](http://www.pyinstaller.org), which supports Python 3.5.
To install it with pip execute in terminal:
`pip install pyinstaller`
To make the .exe file:
```
pyinstaller --onefile script.py
``` |
removing an backslash from a string | 33,169,772 | 3 | 2015-10-16T11:48:59Z | 33,169,823 | 7 | 2015-10-16T11:51:05Z | [
"python",
"nltk"
] | I have a string that is a sentence like `I don't want it, there'll be others`
So the text looks like this `I don\'t want it, there\'ll be other`
for some reason a `\` comes with the text next to the `'`. It was read in from another source. I want to remove it, but can't. I've tried.
`sentence.replace("\'","'")`
`sentence.replace(r"\'","'")`
`sentence.replace("\\","")`
`sentence.replace(r"\\","")`
`sentence.replace(r"\\\\","")`
I know the `\` is there to escape something, so I'm not sure how to handle it with the quotes. | The `\` is just there to [escape](https://docs.python.org/2/reference/lexical_analysis.html#string-literals) the `'` character. It is only visible in the representation (`repr`) of the string; it's not actually a character in the string. See the following demo:
```
>>> repr("I don't want it, there'll be others")
'"I don\'t want it, there\'ll be others"'
>>> print("I don't want it, there'll be others")
I don't want it, there'll be others
``` |
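To make the point concrete, a small check showing that the escaped and unescaped spellings produce the very same string, with no backslash in it:

```python
sentence = 'I don\'t want it'         # the \ only escapes the quote in source code
print(sentence)                       # I don't want it
print(sentence == "I don't want it")  # True: identical strings
print('\\' in sentence)               # False: no actual backslash character
```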
Reduce multiple list comprehesion into a single statement | 33,170,298 | 4 | 2015-10-16T12:15:56Z | 33,170,357 | 14 | 2015-10-16T12:18:52Z | [
"python",
"list-comprehension"
] | Looking for a reduced list comprehesion and less loops and memory usage, there is some way to reduce two loops to build the final paths, transforming it into a single list comprehesion?
```
def build_paths(domains):
http_paths = ["http://%s" % d for d in domains]
https_paths = ["https://%s" % d for d in domains]
paths = []
paths.extend(http_paths)
paths.extend(https_paths)
return paths
```
In this case, the expected result is an optimized list comprehension, reducing the three list references (`http_paths`, `https_paths`, `paths`) to a single line, like the following example structure:
```
def build_paths(domains):
return [<reduced list comprehension> for d in domains]
```
In both cases, running the following test:
```
domains = ["www.ippssus.com",
"www.example.com",
"www.mararao.com"]
print(build_paths(domains))
```
Expected output, independent of the list order:
```
['http://www.ippssus.com', 'http://www.example.com', 'http://www.tetsest.com', 'https://www.ippssus.com', 'https://www.example.com', 'https://www.tetsest.com']
``` | Add a second loop:
```
['%s://%s' % (scheme, domain) for scheme in ('http', 'https') for domain in domains]
```
This builds all the `http` urls first, then the `https` urls, just like your original code. |
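Putting that into the asker's `build_paths` shape, as a sketch (domain names here are placeholders):

```python
def build_paths(domains):
    # One comprehension, two loops: schemes outer, domains inner,
    # so all http URLs come before all https URLs, as in the original.
    return ['%s://%s' % (scheme, d) for scheme in ('http', 'https') for d in domains]

print(build_paths(['a.example', 'b.example']))
# ['http://a.example', 'http://b.example', 'https://a.example', 'https://b.example']
```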
Printing from list in Python | 33,172,827 | 3 | 2015-10-16T14:23:41Z | 33,172,924 | 15 | 2015-10-16T14:27:25Z | [
"python",
"list"
] | In the following code, I am trying to print each name with another name once:
```
myList = ['John', 'Adam', 'Nicole', 'Tom']
for i in range(len(myList)-1):
for j in range(len(myList)-1):
if (myList[i] <> myList[j+1]):
print myList[i] + ' and ' + myList[j+1] + ' are now friends'
```
The result that I got is:
```
John and Adam are now friends
John and Nicole are now friends
John and Tom are now friends
Adam and Nicole are now friends
Adam and Tom are now friends
Nicole and Adam are now friends
Nicole and Tom are now friends
```
As you can see, it works fine and each name is paired with every other name, but there is a repetition: `Nicole and Adam`, which was already covered by `Adam and Nicole`. How can I make the code not print such repetitions? | This is a good opportunity to use itertools.combinations:
```
In [9]: from itertools import combinations
In [10]: myList = ['John', 'Adam', 'Nicole', 'Tom']
In [11]: for n1, n2 in combinations(myList, 2):
....: print "{} and {} are now friends".format(n1, n2)
....:
John and Adam are now friends
John and Nicole are now friends
John and Tom are now friends
Adam and Nicole are now friends
Adam and Tom are now friends
Nicole and Tom are now friends
``` |
Python Requests getting ('Connection aborted.', BadStatusLine("''",)) error | 33,174,804 | 4 | 2015-10-16T16:00:44Z | 33,226,080 | 9 | 2015-10-20T00:26:25Z | [
"python",
"python-3.x",
"python-requests"
] | ```
def download_torrent(url):
fname = os.getcwd() + '/' + url.split('title=')[-1] + '.torrent'
try:
schema = ('http:')
r = requests.get(schema + url, stream=True)
with open(fname, 'wb') as f:
for chunk in r.iter_content(chunk_size=1024):
if chunk:
f.write(chunk)
f.flush()
except requests.exceptions.RequestException as e:
print('\n' + OutColors.LR + str(e))
sys.exit(1)
return fname
```
In that block of code I am getting an error when I run the full script. When I go to actually download the torrent I get:
```
('Connection aborted.', BadStatusLine("''",))
```
I only posted the block of code that I think is relevant above; the entire script is below. It's from pantuts, but I don't think it's maintained any longer, and I am trying to get it running with Python 3. From my research the error might mean I'm using http instead of https, but I have tried both.
[Original script](https://github.com/pantuts/asskick/blob/master/asskick.py) | I took a closer look at your question today and I've got your code working on my end.
The error you get indicates the host isn't responding in the expected manner. In this case, it's because **it detects that you're trying to scrape it and deliberately disconnecting you**.
If you try your `requests` code with this URL from a test website: `http://mirror.internode.on.net/pub/test/5meg.test1`, you'll see that it downloads normally.
**To get around this, fake your [user agent](https://en.wikipedia.org/wiki/User_agent).** Your user agent identifies your web browser, and webhosts commonly check it to detect bots.
Use the `headers` field to set your user agent. Here's an example which tells the webhost you're Firefox.
```
headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 6.0; WOW64; rv:24.0) Gecko/20100101 Firefox/24.0' }
r = requests.get(url, headers=headers)
```
There are a lot of other ways for web hosts to detect bots, but the user agent is one of the easiest and most common checks. If you want your scraper to be harder to detect, you can try [ghost.py](http://jeanphix.me/Ghost.py/). |
Programmatically convert pandas dataframe to markdown table | 33,181,846 | 3 | 2015-10-17T01:39:59Z | 33,869,154 | 8 | 2015-11-23T10:46:53Z | [
"python",
"pandas",
"markdown"
] | I have a Pandas Dataframe generated from a database, which has data with mixed encodings. For example:
```
+----+-------------------------+----------+------------+------------------------------------------------+--------------------------------------------------------+--------------+-----------------------+
| ID | path | language | date | longest_sentence | shortest_sentence | number_words | readability_consensus |
+----+-------------------------+----------+------------+------------------------------------------------+--------------------------------------------------------+--------------+-----------------------+
| 0 | data/Eng/Sagitarius.txt | Eng | 2015-09-17 | With administrative experience in the prepa... | I am able to relocate internationally on short not... | 306 | 11th and 12th grade |
+----+-------------------------+----------+------------+------------------------------------------------+--------------------------------------------------------+--------------+-----------------------+
| 31 | data/Nor/Høylandet.txt | Nor | 2015-07-22 | Høgskolen i Østfold er et eksempel... | Som skuespiller har jeg både... | 253 | 15th and 16th grade |
+----+-------------------------+----------+------------+------------------------------------------------+--------------------------------------------------------+--------------+-----------------------+
```
As seen there is a mix of English and Norwegian (encoded as ISO-8859-1 in the database I think). I need to get the contents of this Dataframe output as a Markdown table, but without getting problems with encoding. I followed [this answer](http://stackoverflow.com/a/15445930/603387) (from the question [Generate Markdown tables?](http://stackoverflow.com/questions/13394140/generate-markdown-tables)) and got the following:
```
import sys, sqlite3
db = sqlite3.connect("Applications.db")
df = pd.read_sql_query("SELECT path, language, date, longest_sentence, shortest_sentence, number_words, readability_consensus FROM applications ORDER BY date(date) DESC", db)
db.close()
rows = []
for index, row in df.iterrows():
items = (row['date'],
row['path'],
row['language'],
row['shortest_sentence'],
row['longest_sentence'],
row['number_words'],
row['readability_consensus'])
rows.append(items)
headings = ['Date',
'Path',
'Language',
'Shortest Sentence',
'Longest Sentence since',
'Words',
'Grade level']
fields = [0, 1, 2, 3, 4, 5, 6]
align = [('^', '<'), ('^', '^'), ('^', '<'), ('^', '^'), ('^', '>'),
('^','^'), ('^','^')]
table(sys.stdout, rows, fields, headings, align)
```
However, this yields an `UnicodeEncodeError: 'ascii' codec can't encode character u'\xe5' in position 72: ordinal not in range(128)` error. How can I output the Dataframe as a Markdown table? That is, for the purpose of storing this code in a file for use in writing a Markdown document. I need the output to look like this:
```
| ID | path | language | date | longest_sentence | shortest_sentence | number_words | readability_consensus |
|----|-------------------------|----------|------------|------------------------------------------------|--------------------------------------------------------|--------------|-----------------------|
| 0 | data/Eng/Sagitarius.txt | Eng | 2015-09-17 | With administrative experience in the prepa... | I am able to relocate internationally on short not... | 306 | 11th and 12th grade |
| 31 | data/Nor/Høylandet.txt | Nor | 2015-07-22 | Høgskolen i Østfold er et eksempel... | Som skuespiller har jeg både... | 253 | 15th and 16th grade |
``` | Improving the answer further, for use in IPython Notebook:
```
def pandas_df_to_markdown_table(df):
from IPython.display import Markdown, display
fmt = ['---' for i in range(len(df.columns))]
df_fmt = pd.DataFrame([fmt], columns=df.columns)
df_formatted = pd.concat([df_fmt, df])
display(Markdown(df_formatted.to_csv(sep="|", index=False)))
pandas_df_to_markdown_table(infodf)
```
Or use [tabulate](https://pypi.python.org/pypi/tabulate):
```
pip install tabulate
```
Examples of use are in the documentation. |
Django rest framework serializing many to many field | 33,182,092 | 9 | 2015-10-17T02:24:27Z | 33,182,227 | 12 | 2015-10-17T02:53:21Z | [
"python",
"django",
"django-models",
"django-rest-framework",
"django-serializer"
] | How do I serialize a many-to-many field into list of something, and return them through rest framework? In my example below, I try to return the post together with a list of tags associated with it.
**models.py**
```
class post(models.Model):
tag = models.ManyToManyField(Tag)
text = models.CharField(max_length=100)
```
**serializers.py**
```
class PostSerializer(serializers.ModelSerializer):
class Meta:
model = Post
fields = ("text", "tag"??)
```
**views.py**
```
class PostViewSet(viewsets.ReadOnlyModelViewSet):
queryset = Post.objects.all()
serializer_class = PostSerializer
``` | You will need a `TagSerializer`, whose `class Meta` has `model = Tag`. After `TagSerializer` is created, modify the `PostSerializer` with `many=True` for a `ManyToManyField` relation:
```
class PostSerializer(serializers.ModelSerializer):
tag = TagSerializer(read_only=True, many=True)
class Meta:
...
```
[Answer is for DRF 3](http://www.django-rest-framework.org/api-guide/serializers/#dealing-with-nested-objects) |
Assigning to vs. from a slice | 33,182,333 | 11 | 2015-10-17T03:10:56Z | 33,182,355 | 9 | 2015-10-17T03:17:05Z | [
"python",
"slice",
"idioms"
] | When reading `profile.py` of python standard library I came across the assignment statement `sys.argv[:] = args`, which is used to modify `sys.argv` to make the program being profiled see the correct command line arguments. I understand that this is different from `sys.argv = args[:]` in the actual operations, but **in effect** they look the same to me. Is there a situation when one wants to use one and not the other? And is `a[:] = b` a common python idiom?
UPDATE: in this specific situation why would one choose one over the other? (source can be found in the `main` function of `profile.py`) | The difference is that `a[:] = b` overwrites, in place, whatever is already in `a`. If something else holds a reference to `a`, it will see the change as well, since it still references the same list object.
On the other hand, `a = b[:]` creates a new list, copies all the values from `b` into it, and rebinds `a` to it. So existing references to the old data will keep pointing to the old data.
Consider this example:
```
>>> a = [1, 2, 3]
>>> b = [4, 5, 6]
>>> c = a # c is a reference to the list in a
>>> c
[1, 2, 3]
>>>
>>> a[:] = b
>>> a # a will have a copy of the list in b
[4, 5, 6]
>>> c # and c will keep having the same value as a
[4, 5, 6]
>>>
>>> b = [7, 8, 9]
>>> a = b[:]
>>> a # a has the new value
[7, 8, 9]
>>> c # c keeps having the old value
[4, 5, 6]
``` |
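The distinction is also visible with `id()`: `a[:] = b` mutates the existing list object in place (so its identity is unchanged), while `a = b[:]` rebinds the name `a` to a brand-new list:

```python
a = [1, 2, 3]
b = [4, 5, 6]

before = id(a)
a[:] = b                  # slice assignment: same object, new contents
assert id(a) == before

a = b[:]                  # copy + rebind: a now names a fresh list
assert id(a) != before
assert a == b and a is not b
```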
Creating a program that prints true if three words are entered in dictionary order | 33,183,399 | 7 | 2015-10-17T06:06:17Z | 33,183,415 | 8 | 2015-10-17T06:09:02Z | [
"python",
"string",
"python-3.x",
"order",
"lexicographic"
] | I am trying to create a program that asks the user for three words and prints 'True' if the words are entered in dictionary order. E.G:
```
Enter first word: chicken
Enter second word: fish
Enter third word: zebra
True
```
Here is my code so far:
```
first = (input('Enter first word: '))
second = (input('Enter second word: '))
third = (input('Enter third word: '))
s = ['a','b','c','d','e','f','g','h',
'i','j','k','l','m','n','o','p',
'q','r','s','t','u','v','w','x',
'y','z','A','B','C','D','E','F',
'G','H','I','J','K','L','M','N',
'O','P','Q','R','S','T','U','V',
'W','Z','Y','Z']
if s.find(first[0]) > s.find(second[0]) and s.find(second[0]) > s.find(third[0]):
print(True)
``` | If you work on a list of arbitrary length, I believe using [`sorted()`](https://docs.python.org/2/library/functions.html#sorted "sorted()") as other answers indicate is good for small lists (with small strings). When it comes to larger lists with larger strings (and cases where the list can be randomly ordered), a faster way would be to use the [`all()`](https://docs.python.org/2/library/functions.html#all "all()") built-in function with a generator expression (this should be faster than the [`sorted()`](https://docs.python.org/2/library/functions.html#sorted "sorted()") approach). Example -
```
#Assuming list is called lst
print(all(lst[i].lower() < lst[i+1].lower() for i in range(len(lst)-1)))
```
Please note, the above would end up calling [`str.lower()`](https://docs.python.org/2/library/stdtypes.html#str.lower "str.lower()") on every string (except the first and last) twice. Unless your strings are very large, this should be fine.
If your strings are really very large compared to the length of the list, you can create a temporary list that stores all the strings in lowercase before doing the [`all()`](https://docs.python.org/2/library/functions.html#all "all()"), and then run the same logic on that list.
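A sketch of that caching idea: lower-case every string exactly once up front, then run the comparison over the cached values (the `words`/`low` names are just illustrative):

```python
words = ["Chicken", "fish", "Zebra"]
low = [w.lower() for w in words]          # each str.lower() call happens once
is_sorted = all(low[i] < low[i + 1] for i in range(len(low) - 1))
print(is_sorted)  # prints True
```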
You can create your list (by taking inputs from the user) using a list comprehension, Example -
```
lst = [input("Enter word {}:".format(i)) for i in range(3)] #Change 3 to the number of elements you want to take input from user.
```
---
Timing results of the above method vs `sorted()` (with the `sorted()` code modified to work case-insensitively) -
```
In [5]: lst = ['{:0>7}'.format(i) for i in range(1000000)]
In [6]: lst == sorted(lst,key=str.lower)
Out[6]: True
In [7]: %timeit lst == sorted(lst,key=str.lower)
1 loops, best of 3: 204 ms per loop
In [8]: %timeit all(lst[i].lower() < lst[i+1].lower() for i in range(len(lst)-1))
1 loops, best of 3: 439 ms per loop
In [11]: lst = ['{:0>7}'.format(random.randint(1,10000)) for i in range(1000000)]
In [12]: %timeit lst == sorted(lst,key=str.lower)
1 loops, best of 3: 1.08 s per loop
In [13]: %timeit all(lst[i].lower() < lst[i+1].lower() for i in range(len(lst)-1))
The slowest run took 6.20 times longer than the fastest. This could mean that an intermediate result is being cached
100000 loops, best of 3: 2.89 µs per loop
```
Result -
For cases that should return `True` (that is, already-sorted lists), using `sorted()` is quite a bit faster than `all()`, since `sorted()` works well for mostly-sorted lists.
For cases that are random, `all()` works better than `sorted()` because of its short-circuiting nature (it short-circuits as soon as it sees the first `False`).
Also, there is the fact that `sorted()` would create a temporary sorted list in memory (for the comparison), whereas `all()` would not require that (and this fact contributes to the timings we see above).
---
Earlier answer, which applies directly (and only) to this question: you can simply compare the strings as such; you do not need another string/list for the alphabet. Example -
```
first = (input('Enter first word: '))
second = (input('Enter second word: '))
third = (input('Enter third word: '))
if first <= second <= third:
print(True)
```
Or if you want to compare only the first characters (though I highly doubt that) -
```
if first[0] <= second[0] <= third[0]:
print(True)
```
---
To compare the strings case-insensitively, you can convert all the string to lowercase, before comparison. Example -
```
if first.lower() <= second.lower() <= third.lower():
print(True)
```
Or the simpler -
```
print(first.lower() <= second.lower() <= third.lower())
``` |
On OS X El Capitan I can not upgrade a python package dependent on the six compatibility utilities NOR can I remove six | 33,185,147 | 12 | 2015-10-17T09:47:05Z | 33,599,105 | 21 | 2015-11-08T21:13:21Z | [
"python",
"osx",
"sudo",
"six"
] | I am trying to use scrape, but I have a problem.
> from six.moves import xmlrpc\_client as xmlrpclib
>
> ImportError: cannot import name xmlrpc\_client
Then, I tried `pip install --upgrade six scrape`, but:
```
Found existing installation: six 1.4.1
DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
Uninstalling six-1.4.1:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip/basecommand.py", line 211, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip/commands/install.py", line 311, in run
root=options.root_path,
File "/Library/Python/2.7/site-packages/pip/req/req_set.py", line 640, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip/req/req_install.py", line 716, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip/req/req_uninstall.py", line 125, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip/utils/__init__.py", line 315, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move
copy2(src, real_dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2
copystat(src, dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat
os.chflags(dst, st.st_flags)
OSError: [Errno 1] Operation not permitted: '/var/folders/3h/r_2cxlvd1sjgzfgs4xckc__c0000gn/T/pip-5h86J8-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info'
``` | I just got around what I think was the same problem. You might consider trying this (sudo, if necessary):
`pip install scrape --upgrade --ignore-installed six`
[Github](https://github.com/pypa/pip/issues/3165) is ultimately where I got this answer (and there are a few more suggestions you may consider if this one doesn't solve your problem). It also seems as though this is an El Capitan problem.
Also, this technically might be a [duplicate](http://stackoverflow.com/questions/29485741/unable-to-upgrade-python-six-package-in-mac-osx-10-10-2). But the answer the other post came up with was installing your own Python rather than relying on the default osx Python, which strikes me as more laborious. |
On OS X El Capitan I can not upgrade a python package dependent on the six compatibility utilities NOR can I remove six | 33,185,147 | 12 | 2015-10-17T09:47:05Z | 36,022,226 | 9 | 2016-03-15T21:11:05Z | [
"python",
"osx",
"sudo",
"six"
] | I am trying to use scrape, but I have a problem.
> from six.moves import xmlrpc\_client as xmlrpclib
>
> ImportError: cannot import name xmlrpc\_client
Then, I tried `pip install --upgrade six scrape`, but:
```
Found existing installation: six 1.4.1
DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
Uninstalling six-1.4.1:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip/basecommand.py", line 211, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip/commands/install.py", line 311, in run
root=options.root_path,
File "/Library/Python/2.7/site-packages/pip/req/req_set.py", line 640, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip/req/req_install.py", line 716, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip/req/req_uninstall.py", line 125, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip/utils/__init__.py", line 315, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move
copy2(src, real_dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2
copystat(src, dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat
os.chflags(dst, st.st_flags)
OSError: [Errno 1] Operation not permitted: '/var/folders/3h/r_2cxlvd1sjgzfgs4xckc__c0000gn/T/pip-5h86J8-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info'
``` | I don't think this is a duplicate; this is actually [an issue discussed on the pip GitHub repository issues list](https://github.com/pypa/pip/issues/3165).
The core of the problem is tied to Apple's new SIP that they shipped with El Capitan. More [specifically](https://github.com/pypa/pip/issues/3165#issuecomment-176971783),
> OS X 10.11's python retains its own copy of six which is unremoveable, because of modifications Apple has done to their python distribution. 1.4.1 is not the latest, 1.10.0 is. It also comes early on their python's import path, so it will typically override later versions you install.
>
> I would suggest using a different python for now. Python.org's, or installed via Homebrew, or Anaconda Python.
There is an [incredibly detailed discussion on the Ask Different Stack Exchange](http://apple.stackexchange.com/questions/209572/how-to-use-pip-after-the-el-capitan-max-os-x-upgrade/210021) that covers how the problems with SIP have been identified, addressed, and evolved since the original release of El Capitan. Although I found it fascinating, you'll spend less time following the instructions below than it would take you to read it, so I'd recommend checking it out AFTER you finish the following...
I ran into the exact same error when attempting to upgrade VirtualEnv & VirtualEnvWrapper. There were several suggestions kicked around in the above thread, but in the end the most stable approach was to:
1. Leverage the built-in support for the sudo OPTION to specify a HOME environment variable
```
$ man sudo
-H  The -H (HOME) option sets the HOME environment variable
    to the home directory of the target user (root by default).
    Whether HOME is set depends on sudoers(5) settings. By default,
    sudo will set HOME if env_reset or always_set_home are set, or if
    set_home is set and the -s option is specified on the command line.
```
2. Leverage pip's options to force an upgrade and ignore any pre-existing packages
```
$ pip install --help | grep upgrade
-U, --upgrade Upgrade all specified packages to the newest available
version. This process is recursive regardless of whether a dependency
is already satisfied.
beejhuff@ignatius:~/mac_setup$ pip install --help | grep ignore-installed
-I, --ignore-installed Ignore the installed packages (reinstalling instead).
```
***First, my original attempt & error:***
```
$ sudo pip install virtualenv virtualenvwrapper
The directory '/Users/beejhuff/Library/Caches/pip/http' or its parent directory
is not owned by the current user and the cache has been disabled.
Please check the permissions and owner of that directory. If executing
pip with sudo, you may want sudo's -H flag.
The directory '/Users/beejhuff/Library/Caches/pip' or its parent directory
is not owned by the current user and caching wheels has been disabled.
check the permissions and owner of that directory. If executing pip with
sudo, you may want sudo's -H flag.
Collecting virtualenv
Downloading virtualenv-15.0.0-py2.py3-none-any.whl (1.8MB)
100% |████████████████████████████████| 1.8MB 335kB/s
Collecting virtualenvwrapper
Downloading virtualenvwrapper-4.7.1-py2.py3-none-any.whl
Collecting virtualenv-clone (from virtualenvwrapper)
Downloading virtualenv-clone-0.2.6.tar.gz
Collecting stevedore (from virtualenvwrapper)
Downloading stevedore-1.12.0-py2.py3-none-any.whl
Collecting pbr>=1.6 (from stevedore->virtualenvwrapper)
Downloading pbr-1.8.1-py2.py3-none-any.whl (89kB)
100% |████████████████████████████████| 92kB 362kB/s
Collecting six>=1.9.0 (from stevedore->virtualenvwrapper)
Downloading six-1.10.0-py2.py3-none-any.whl
Installing collected packages: virtualenv, virtualenv-clone, pbr, six, stevedore, virtualenvwrapper
Running setup.py install for virtualenv-clone ... done
Found existing installation: six 1.4.1
DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
Uninstalling six-1.4.1:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip-8.1.0-py2.7.egg/pip/basecommand.py", line 209, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip-8.1.0-py2.7.egg/pip/commands/install.py", line 317, in run
prefix=options.prefix_path,
File "/Library/Python/2.7/site-packages/pip-8.1.0-py2.7.egg/pip/req/req_set.py", line 726, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip-8.1.0-py2.7.egg/pip/req/req_install.py", line 746, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip-8.1.0-py2.7.egg/pip/req/req_uninstall.py", line 115, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip-8.1.0-py2.7.egg/pip/utils/__init__.py", line 267, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move
copy2(src, real_dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2
copystat(src, dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat
os.chflags(dst, st.st_flags)
OSError: [Errno 1] Operation not permitted: '/tmp/pip-GQL8Gi-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info'
```
***The Solution***
It required modifying my installation command in THREE specific ways:
1. I had to add the `-H` flag to `sudo`
2. I had to add the `--upgrade` option AFTER the name of the package I was upgrading (`virtualenv`)
3. I had to use the `--ignore-installed` flag and specify the `six` package was the one to be ignored.
4. ***Note:*** I was also attempting to install two upgrades at the same time, so in my case I also split the command in two to simplify the syntax.
**Final Working Example**
*1st Upgrade virtualenv*
```
$ sudo -H pip install virtualenv --upgrade --ignore-installed six
Password:
Collecting virtualenv
Using cached virtualenv-15.0.0-py2.py3-none-any.whl
Collecting six
Using cached six-1.10.0-py2.py3-none-any.whl
Installing collected packages: virtualenv, six
Successfully installed six-1.4.1 virtualenv-15.0.0
```
*2nd Upgrade virtualenvwrapper*
```
$ sudo -H pip install virtualenvwrapper --upgrade --ignore-installed six
Password:
Downloading virtualenvwrapper-4.7.1-py2.py3-none-any.whl
Collecting six
Downloading six-1.10.0-py2.py3-none-any.whl
Collecting virtualenv (from virtualenvwrapper)
Downloading virtualenv-15.0.0-py2.py3-none-any.whl (1.8MB)
100% |████████████████████████████████| 1.8MB 751kB/s
Collecting virtualenv-clone (from virtualenvwrapper)
Downloading virtualenv-clone-0.2.6.tar.gz
Collecting stevedore (from virtualenvwrapper)
Downloading stevedore-1.12.0-py2.py3-none-any.whl
Collecting pbr>=1.6 (from stevedore->virtualenvwrapper)
Downloading pbr-1.8.1-py2.py3-none-any.whl (89kB)
100% |████████████████████████████████| 92kB 417kB/s
Installing collected packages: virtualenv, virtualenv-clone, pbr, six, stevedore, virtualenvwrapper
Running setup.py install for virtualenv-clone ... done
Successfully installed pbr-1.8.1 six-1.4.1 stevedore-1.12.0 virtualenv-15.0.0 virtualenv-clone-0.2.6 virtualenvwrapper-4.7.1
``` |
django:django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet | 33,186,413 | 10 | 2015-10-17T12:04:53Z | 33,550,079 | 11 | 2015-11-05T16:39:54Z | [
"python",
"django"
] | I was stuck with the process when I wanted to deploy django project on server today. When I run `python manage.py runserver` on server, the terminal shows me this:
```
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/lib/python2.7/site-packages/django/core/management/__init__.py", line 351, in execute_from_command_line
utility.execute()
File "/usr/lib/python2.7/site-packages/django/core/management/__init__.py", line 343, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/lib/python2.7/site-packages/django/core/management/__init__.py", line 177, in fetch_command
commands = get_commands()
File "/usr/lib/python2.7/site-packages/django/utils/lru_cache.py", line 101, in wrapper
result = user_function(*args, **kwds)
File "/usr/lib/python2.7/site-packages/django/core/management/__init__.py", line 72, in get_commands
for app_config in reversed(list(apps.get_app_configs())):
File "/usr/lib/python2.7/site-packages/django/apps/registry.py", line 137, in get_app_configs
self.check_apps_ready()
File "/usr/lib/python2.7/site-packages/django/apps/registry.py", line 124, in check_apps_ready
raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
```
The Django version on the server is 1.8.5, and the local one is 1.8.1. I suspect the version difference may be causing this problem, but I also suspect the `wsgi.py` wasn't written properly. Here's the `wsgi.py`:
```
import os
import sys
path = '/Users/Peterhon/Desktop/dict/'
if path not in sys.path:
sys.path.append(path)
os.chdir(path)
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dict.settings")
import django
django.setup()
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
```
Here's the `manage.py` file:
```
#!/usr/bin/env python
import os
import sys
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dict.settings")
from django.core.management import execute_from_command_line
execute_from_command_line(sys.arg)
```
When I run python manage.py check on server, the output is below:
```
#!/usr/bin/env python
import os
import sys
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dict.settings")
from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
```
Could anyone give me some tips? Thanks so much. | This could well be an issue with your Django settings. For example, I had just specified, in `LOGGING`, a filename in a non-existent directory. As soon as I changed it to an existing directory, the issue was resolved. |
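A sketch of that fix: make sure the directory exists before Django reads `LOGGING`, so the handler's `filename` always points somewhere writable (the paths and handler layout here are illustrative, not taken from the question):

```python
import os
import tempfile

log_dir = os.path.join(tempfile.gettempdir(), "myproject-logs")
if not os.path.isdir(log_dir):     # works on Python 2.7 and 3.x alike
    os.makedirs(log_dir)

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "filename": os.path.join(log_dir, "debug.log"),
        },
    },
    "root": {"handlers": ["file"], "level": "INFO"},
}
```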
Iterators for built-in containers | 33,186,651 | 4 | 2015-10-17T12:32:03Z | 33,186,713 | 9 | 2015-10-17T12:37:52Z | [
"python",
"python-3.x",
"iterator",
"containers"
] | From my understanding so far, you can easily create an iterator for a *user-defined* object by simply defining **both** the `__iter__` method and the `__next__` method for it. That's pretty intuitive to understand.
I also get it that you can manually build an iterator for any *built-in* container by simply calling the `iter()` method on that container.
Using basically any container as an example, what I don't understand is why they don't define a `__next__` method for themselves. Instead, when calling the `iter` method on a container (ergo, `<container>.__iter__`) it returns a new object of type `<container_type>_iterator` and not the container itself.
---
So finally, the question is, why do container objects delegate their `iterator` functionality to separate `<type>_iterator` objects instead of defining it themselves? | If the container was its own iterator (e.g. provided a `__next__` method), you could only iterate over it *in one place*. You could not have independent iterators. Each call to `__next__` would give the next value in the container and you'd not be able to go back to the first value; you have in effect a generator that could only ever yield the values in the container just the once.
By creating separate iterators for a given container, you can iterate independently:
```
>>> lst = ['foo', 'bar', 'baz']
>>> it1 = iter(lst)
>>> it2 = iter(lst)
>>> next(it1)
'foo'
>>> next(it2)
'foo'
>>> list(it1)
['bar', 'baz']
>>> next(it2)
'bar'
``` |
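A user-defined container can follow the same pattern by returning a fresh iterator object from every `__iter__` call; a minimal sketch:

```python
class Bag:
    def __init__(self, items):
        self._items = list(items)

    def __iter__(self):
        # Hand back a brand-new iterator each time, exactly like list does,
        # so callers can iterate independently and repeatedly.
        return iter(self._items)

bag = Bag(['foo', 'bar', 'baz'])
it1, it2 = iter(bag), iter(bag)
assert next(it1) == 'foo'
assert next(it2) == 'foo'                    # it2 is unaffected by it1's progress
assert list(bag) == ['foo', 'bar', 'baz']    # the container itself stays reusable
```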
Add numbers and exit with a sentinel | 33,187,609 | 8 | 2015-10-17T14:16:20Z | 33,187,743 | 7 | 2015-10-17T14:32:08Z | [
"python",
"loops",
"sentinel"
] | My assignment is to add up a series of numbers using a loop, and that loop requires the sentinel value of `0` for it to stop. It should then display the total numbers added. So far, my code is:
```
total = 0
print("Enter a number or 0 to quit: ")
while True:
number = int(input("Enter a number or 0 to quit: "))
print("Enter a number or 0 to quit: ")
if number == 0:
break
total = total + number
print ("The total number is", total)
```
Yet when I run it, it doesn't print the total number after I enter `0`. It just prints `"Enter a number or 0 to quit"`, though it's not an infinite loop. | The main reason your code is not working is that `break` ends the innermost loop (in this case your `while` loop) *immediately*, so your lines of code after the `break` are never executed.
This can easily be fixed using the methods others have pointed out, but I'd like to suggest changing your `while` loop's structure a little.
Currently you are using:
```
while True:
if <condition>:
break
```
Rather than:
```
while <opposite condition>:
```
You might have a reason for this, but it's not visible from the code you've provided us.
If we change your code to use the latter structure, that alone will simplify the program and fix the main problem.
You also print `"Enter a number or 0 to quit:"` multiple times, which is unnecessary. You can just pass it to the `input` and that's enough.
```
total = 0
number = None
while number != 0:
number = int(input("Enter a number or 0 to quit: "))
total += number # Same as: total = total + number
print("The total number is", total)
```
The only "downside" (just cosmetics) is that we need to define `number` before the loop.
Also notice that we want to print the `total` number after the *whole loop* is finished, thus the print at the end is unindented and will not be executed on every cycle of the `while` loop. |
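The same sentinel pattern, factored into a hypothetical helper that takes any iterable of numbers; this keeps the summing logic testable without wiring up `input()`:

```python
def total_until_sentinel(numbers, sentinel=0):
    """Add up numbers until the sentinel value is seen."""
    total = 0
    for number in numbers:
        if number == sentinel:
            break                    # stop at the sentinel, excluding it
        total += number
    return total

print(total_until_sentinel([5, 10, 3, 0, 99]))  # prints 18; 99 is never reached
```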
Python: regex to make a python dictionary out of a sequence of words? | 33,207,089 | 5 | 2015-10-19T05:17:32Z | 33,207,190 | 10 | 2015-10-19T05:26:39Z | [
"python",
"regex"
] | I have a .txt file with the following contents:
```
norway sweden
bhargama bhargama
forbisganj forbesganj
canada usa
ankara turkey
```
I want to overwrite the file such that these are its new contents:
```
'norway' : 'sweden',
'bhargama': 'bhargama',
'forbisganj' : 'forbesganj',
'canada': 'usa',
'ankara': 'turkey'
```
Basically I want to turn the .txt file into a Python dictionary so I can manipulate it. Are there built-in libraries for this sort of task?
Here is my attempt:
```
import re
target = open('file.txt', 'w')
for line in target:
target.write(re.sub(r'([a-z]+)', r'':'"\1"','', line))
```
I'm succeeding in getting the quotes; but what's the proper regex to do what I described above? | You don't need a regular expression for that.
File:
```
norway sweden
bhargama bhargama
forbisganj forbesganj
canada usa
ankara turkey
```
Code:
```
with open('myfile.txt') as f:
my_dictionary = dict(line.split() for line in f)
```
This goes through each line in your file and splits it on whitespace into a `list`. This generator of `list`s is fed to `dict()`, which makes each one a dictionary key and value.
```
>>> my_dictionary['norway']
'sweden'
``` |
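To finish the overwrite step from the question, the resulting dict can be written back out in the quoted, comma-separated format. A sketch using a throwaway file path (the filename is illustrative):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "countries.txt")
with open(path, "w") as f:
    f.write("norway sweden\nbhargama bhargama\ncanada usa\n")

with open(path) as f:                       # read: two words per line -> dict
    d = dict(line.split() for line in f)

with open(path, "w") as f:                  # write back in the quoted format
    f.write(",\n".join("'{}': '{}'".format(k, v) for k, v in d.items()))

with open(path) as f:
    print(f.read())
```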
Pandas warning when using map: A value is trying to be set on a copy of a slice from a DataFrame | 33,215,630 | 4 | 2015-10-19T13:24:27Z | 33,217,544 | 7 | 2015-10-19T14:51:35Z | [
"python",
"pandas"
] | I've got the following code and it works. This basically renames values in columns so that they can be later merged.
```
pop = pd.read_csv('population.csv')
pop_recent = pop[pop['Year'] == 2014]
mapping = {
'Korea, Rep.': 'South Korea',
'Taiwan, China': 'Taiwan'
}
f= lambda x: mapping.get(x, x)
pop_recent['Country Name'] = pop_recent['Country Name'].map(f)
```
> Warning:
> *A value is trying to be set on a copy of a slice from a DataFrame.
> Try using .loc[row\_indexer,col\_indexer] = value instead
> See the the caveats in the documentation: <http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy>
> pop\_recent['Country Name'] = pop\_recent['Country Name'].map(f)*
I did google this! But no examples seem to be using map, so I'm at a loss... | The issue is with [chained indexing](http://pandas.pydata.org/pandas-docs/stable/indexing.html#returning-a-view-versus-a-copy). What you are actually trying to do is set values on `pop[pop['Year'] == 2014]['Country Name']` - this will not work most of the time (as explained very well in the linked documentation), since these are two different calls and one of them may return a copy of the dataframe (I believe it is the boolean indexing that returns the copy).
Hence, when you try to set values to that copy, it does not reflect in the original dataframe. Example -
```
In [6]: df
Out[6]:
A B
0 1 2
1 3 4
2 4 5
3 6 7
4 8 9
In [7]: df[df['A']==1]['B'] = 10
/path/to/ipython-script.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
if __name__ == '__main__':
In [8]: df
Out[8]:
A B
0 1 2
1 3 4
2 4 5
3 6 7
4 8 9
```
---
As noted, instead of chained indexing you should use `DataFrame.loc` to index the rows as well as the columns to update in a single call, avoiding this error. Example -
```
pop.loc[(pop['year'] == 2014), 'Country Name'] = pop.loc[(pop['year'] == 2014), 'Country Name'].map(f)
```
Or, if this seems too long to you, you can create a mask (a boolean mask) beforehand, assign it to a variable, and use that in the above statement. Example -
```
mask = pop['year'] == 2014
pop.loc[mask,'Country Name'] = pop.loc[mask,'Country Name'].map(f)
```
Demo -
```
In [9]: df
Out[9]:
A B
0 1 2
1 3 4
2 4 5
3 6 7
4 8 9
In [10]: mapping = { 1:2 , 3:4}
In [11]: f= lambda x: mapping.get(x, x)
In [12]: df.loc[(df['B']==2),'A'] = df.loc[(df['B']==2),'A'].map(f)
In [13]: df
Out[13]:
A B
0 2 2
1 3 4
2 4 5
3 6 7
4 8 9
```
Demo with the *mask* method -
```
In [18]: df
Out[18]:
A B
0 1 2
1 3 4
2 4 5
3 6 7
4 8 9
In [19]: mask = df['B']==2
In [20]: df.loc[mask,'A'] = df.loc[mask,'A'].map(f)
In [21]: df
Out[21]:
A B
0 2 2
1 3 4
2 4 5
3 6 7
4 8 9
``` |
Is there any way to check with Python unittest assert if an iterable is not empty? | 33,216,488 | 10 | 2015-10-19T14:02:49Z | 33,216,600 | 13 | 2015-10-19T14:08:02Z | [
"python",
"python-2.7",
"unit-testing",
"assertions",
"python-unittest"
] | After submitting queries to a service, I get a dictionary / a list back and I want to make sure it's not empty. I am on Python 2.7.
I am surprised I don't see any `assertEmpty` method for the `unittest.TestCase` class instance.
The existing alternatives such as:
```
self.assertTrue(bool(d))
```
and
```
self.assertNotEqual(d,{})
```
and
```
self.assertGreater(len(d),0)
```
just don't look right.
**Is this kind of method is missing in the Python unittest framework?** If yes, what would be the most pythonic way to assert that an iterable is not empty? | Empty lists/dicts evaluate to False, so `self.assertTrue(d)` gets the job done. |
Installing OpenCV 3 for Python 3 on a mac using Homebrew and pyenv | 33,222,965 | 9 | 2015-10-19T20:05:43Z | 33,222,993 | 11 | 2015-10-19T20:07:25Z | [
"python",
"osx",
"opencv",
"python-3.x"
] | I am running Mac OS X 10.11 (El Capitan). I want to:
* Maintain my system version of Python as the default
* Install Python 3.5 alongside it
* Install OpenCV 3 and the Python bindings
I installed `pyenv` and Python 3.5 by following this SO answer: <http://stackoverflow.com/a/18671336/1410871>
I activated my Python 3 installation and installed OpenCV 3 with Python 3 support like this:
```
pyenv shell 3.5.0
brew install opencv3 --with-python3
```
But when I launch an IPython shell and import `cv2`, I get an error:
```
ImportError Traceback (most recent call last)
<ipython-input-1-72fbbcfe2587> in <module>()
----> 1 import cv2
ImportError: No module named 'cv2'
```
Why? | Answering my own question: I have to manually create a symlink to the shared object file and place it in the pyenv Python 3 site-packages directory:
```
ln -s /usr/local/opt/opencv3/lib/python3.5/site-packages/cv2.cpython-35m-darwin.so ~/.pyenv/versions/3.5.0/lib/python3.5/site-packages/cv2.so
```
Now the line `import cv2` works as expected in Python. |
Generate a list of 6 random numbers between 1 and 6 in python | 33,224,944 | 3 | 2015-10-19T22:21:53Z | 33,224,964 | 9 | 2015-10-19T22:23:28Z | [
"python",
"random",
"numbers"
] | So this is for a practice problem for one of my classes. I want to generate a list of 6 random numbers between 1 and 6. For example, startTheGame() should return [1,4,2,5,4,6].
I think I am close to getting it, but I'm just not sure how to code it so that all 6 numbers are appended to the list and then returned. Any help is appreciated.
```
import random
def startTheGame():
counter = 0
myList = []
while (counter) < 6:
randomNumber = random.randint(1,6)
myList.append(randomNumber)
counter = counter + 1
if (counter)>=6:
pass
else:
return myList
``` | Use a [list comprehension](https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions):
```
import random
def startTheGame():
mylist=[random.randint(1,6) for _ in range(6)]
return mylist
```
List comprehensions are among the most powerful tools offered by Python. They are considered very pythonic and they make code very expressive.
Consider the following code:
```
counter = 0
myList = []
while (counter) < 6:
randomNumber = random.randint(1,6)
myList.append(randomNumber)
counter = counter + 1
if (counter)>=6:
pass
else:
return
```
We will refactor this code in several steps to better illustrate what list comprehensions do. First thing that we are going to refactor is the while loop with an initialization and an abort-criterion. This can be done much more concise with a for in expression:
```
myList = []
for counter in range(6):
randomNumber = random.randint(1,6)
myList.append(randomNumber)
```
And now the step to make this piece of code into a list comprehension: Move the for loop inside mylist. This eliminates the appending step and the assignment:
```
[random.randint(1,6) for _ in range(6)]
```
The `_` is a variable name just like any other, but it is convention in Python to use `_` for variables that are not used. Think of it as a temp variable. |
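As a quick, reproducible check of the comprehension (the seed value is arbitrary; any fixed seed makes the "random" rolls repeatable):

```python
import random

random.seed(42)  # pin the generator so reruns produce the same rolls
rolls = [random.randint(1, 6) for _ in range(6)]
print(rolls)

assert len(rolls) == 6
assert all(1 <= r <= 6 for r in rolls)
```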
Can a website detect when you are using selenium with chromedriver? | 33,225,947 | 55 | 2015-10-20T00:08:57Z | 33,403,473 | 17 | 2015-10-28T23:39:33Z | [
"javascript",
"python",
"google-chrome",
"selenium",
"selenium-chromedriver"
] | I've been testing out Selenium with Chromedriver and I noticed that some pages can detect that you're using Selenium even though there's no automation at all. Even when I'm just browsing manually just using chrome through Selenium and Xephyr I often get a page saying that suspicious activity was detected. I've checked my user agent, and my browser fingerprint, and they are all exactly identical to the normal chrome browser.
When I browse to these sites in normal chrome everything works fine, but the moment I use Selenium I'm detected.
In theory chromedriver and chrome should look literally exactly the same to any webserver, but somehow they can detect it.
If you want some testcode try out this:
```
from pyvirtualdisplay import Display
from selenium import webdriver
display = Display(visible=1, size=(1600, 902))
display.start()
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--disable-extensions')
chrome_options.add_argument('--profile-directory=Default')
chrome_options.add_argument("--incognito")
chrome_options.add_argument("--disable-plugins-discovery");
chrome_options.add_argument("--start-maximized")
driver = webdriver.Chrome(chrome_options=chrome_options)
driver.delete_all_cookies()
driver.set_window_size(800,800)
driver.set_window_position(0,0)
print 'arguments done'
driver.get('http://stubhub.com')
```
If you browse around stubhub you'll get redirected and 'blocked' within one or two requests. I've been investigating this and I can't figure out how they can tell that a user is using Selenium.
How do they do it?
EDIT UPDATE:
I installed the Selenium IDE plugin in Firefox and I got banned when I went to stubhub.com in the normal firefox browser with only the additional plugin.
EDIT:
When I use Fiddler to view the HTTP requests being sent back and forth, I've noticed that the 'fake browser's' requests often have 'no-cache' in the response header.
EDIT:
results like this [Is there a way to detect that I'm in a Selenium Webdriver page from Javascript](http://stackoverflow.com/questions/3614472/is-there-a-way-to-detect-that-im-in-a-selenium-webdriver-page-from-javascript) suggest that there should be no way to detect when you are using a webdriver. But this evidence suggests otherwise.
EDIT:
The site uploads a fingerprint to their servers, but I checked and the fingerprint of selenium is identical to the fingerprint when using chrome.
EDIT:
This is one of the fingerprint payloads that they send to their servers
```
{"appName":"Netscape","platform":"Linuxx86_64","cookies":1,"syslang":"en-US","userlang":"en-US","cpu":"","productSub":"20030107","setTimeout":1,"setInterval":1,"plugins":{"0":"ChromePDFViewer","1":"ShockwaveFlash","2":"WidevineContentDecryptionModule","3":"NativeClient","4":"ChromePDFViewer"},"mimeTypes":{"0":"application/pdf","1":"ShockwaveFlashapplication/x-shockwave-flash","2":"FutureSplashPlayerapplication/futuresplash","3":"WidevineContentDecryptionModuleapplication/x-ppapi-widevine-cdm","4":"NativeClientExecutableapplication/x-nacl","5":"PortableNativeClientExecutableapplication/x-pnacl","6":"PortableDocumentFormatapplication/x-google-chrome-pdf"},"screen":{"width":1600,"height":900,"colorDepth":24},"fonts":{"0":"monospace","1":"DejaVuSerif","2":"Georgia","3":"DejaVuSans","4":"TrebuchetMS","5":"Verdana","6":"AndaleMono","7":"DejaVuSansMono","8":"LiberationMono","9":"NimbusMonoL","10":"CourierNew","11":"Courier"}}
```
Its identical in selenium and in chrome
EDIT:
VPNs work for a single use but get detected after I load the first page. Clearly some javascript is being run to detect Selenium. | As we've already figured out in the question and the posted answers, there is an anti Web-scraping and a Bot detection service called ["Distil Networks"](http://www.distilnetworks.com/) in play here. And, according to the company CEO's [interview](http://www.forbes.com/sites/timconneally/2013/01/28/theres-something-about-distil-it/):
> Even though they can create new bots, **we figured out a way to identify
> Selenium, the tool they're using, so we're blocking Selenium no
> matter how many times they iterate on that bot**. We're doing that now
> with Python and a lot of different technologies. Once we see a pattern
> emerge from one type of bot, then we work to reverse engineer the
> technology they use and identify it as malicious.
It'll take time and additional challenges to understand how exactly they are detecting Selenium, but what can we say for sure at the moment:
* it's not related to the actions you take with selenium - once you navigate to the site, you get immediately detected and banned. I've tried to add artificial random delays between actions, take a pause after the page is loaded - nothing helped
* it's not about browser fingerprint either - tried it in multiple browsers with clean profiles and not, incognito modes - nothing helped
* since, according to the hint in the interview, this was "reverse engineering", I suspect this is done with some JS code being executed in the browser revealing that this is a browser automated via selenium webdriver
Decided to post it as an answer, since clearly:
> Can a website detect when you are using selenium with chromedriver?
Yes.
---
Also, what I haven't experimented with is older selenium and older browser versions - in theory, there could be something implemented/added to selenium at a certain point that Distil Networks bot detector currently relies on. Then, if this is the case, we might detect (yeah, let's detect the detector) at what point/version a relevant change was made, look into changelog and changesets and, may be, this could give us more information on where to look and what is it they use to detect a webdriver-powered browser. It's just a theory that needs to be tested. |
Python for loop iterating 'i' | 33,226,223 | 2 | 2015-10-20T00:43:11Z | 33,226,245 | 7 | 2015-10-20T00:45:45Z | [
"python",
"list",
"for-loop",
"dictionary"
] | I am using the following script:
```
tagRequest = requests.get("https://api.instagram.com/v1/tags/" + tag + "/media/recent?client_id=" + clientId)
tagData = json.loads(tagRequest.text)
tagId = tagData["data"][0]["user"]["id"]
for i in tagData["data"]:
print tagData["data"][i]
```
My script is supposed to iterate over the JSON object, tagData. (Over everything in "data".) However, I am getting the following error: "list indices must be integers, not dict." | You are iterating over the contents of `tagData['data']`, not its indices, so:
```
for i in tagData["data"]:
print i
```
Or indices:
```
for i in xrange(len(tagData["data"])):
print tagData["data"][i]
``` |
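When both the index and the element are needed, `enumerate` combines the two approaches above (a generic sketch; the sample data is invented, not the Instagram payload):

```python
data = [{"id": 1}, {"id": 2}, {"id": 3}]

# enumerate yields (index, element) pairs, so there is no need to
# index back into the list inside the loop.
for i, item in enumerate(data):
    print(i, item["id"])
```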
Is it possible to have static type assertions in PyCharm? | 33,227,330 | 9 | 2015-10-20T03:11:15Z | 33,730,487 | 7 | 2015-11-16T07:42:33Z | [
"python",
"pycharm"
] | ```
def someproperty(self, value):
"""
:type value: int
"""
assert isinstance(value, int)
# other stuff
```
I'd like Pycharm to assert when the user sets the value to something other than an int. I'm already using a type hint. Is there another way to get this functionality? Thanks in advance for any insight you can provide. | Using pycharm you can get somewhat close to static type checking, using type declarations and increasing the severity of the "Type checker" inspection:
[](http://i.stack.imgur.com/PGUpm.png)
This will make type checks very prominent in your code:
[](http://i.stack.imgur.com/WnL6w.png) |
Spark: How to map Python with Scala or Java User Defined Functions? | 33,233,737 | 5 | 2015-10-20T10:06:08Z | 33,257,733 | 9 | 2015-10-21T11:07:01Z | [
"java",
"python",
"scala",
"apache-spark",
"pyspark"
] | Let's say, for instance, that my team has chosen Python as the reference language to develop with Spark. But later, for performance reasons, we would like to develop specific Scala or Java libraries in order to map them to our Python code (something similar to Python stubs with Scala or Java skeletons).
Do you think it is possible to interface new customized Python methods with, under the hood, some Scala or Java user-defined functions? | I wouldn't go so far as to say it is supported, but it is certainly possible. All SQL functions currently available in PySpark are simply wrappers around the Scala API.
Lets assume I want to reuse `GroupConcat` UDAF I've created as an answer to [SPARK SQL replacement for mysql GROUP\_CONCAT aggregate function](http://stackoverflow.com/q/31640729/1560062) and it is located in a package `com.example.udaf`:
```
from pyspark.sql.column import Column, _to_java_column, _to_seq
from pyspark.sql import Row
row = Row("k", "v")
df = sc.parallelize([
row(1, "foo1"), row(1, "foo2"), row(2, "bar1"), row(2, "bar2")]).toDF()
def groupConcat(col):
"""Group and concatenate values for a given column
>>> df = sqlContext.createDataFrame([(1, "foo"), (2, "bar")], ("k", "v"))
>>> df.select(groupConcat("v").alias("vs"))
[Row(vs=u'foo,bar')]
"""
sc = SparkContext._active_spark_context
# It is possible to use java_import to avoid full package path
_groupConcat = sc._jvm.com.example.udaf.GroupConcat.apply
# Converting to Seq to match apply(exprs: Column*)
return Column(_groupConcat(_to_seq(sc, [col], _to_java_column)))
df.groupBy("k").agg(groupConcat("v").alias("vs")).show()
## +---+---------+
## | k| vs|
## +---+---------+
## | 1|foo1,foo2|
## | 2|bar1,bar2|
## +---+---------+
```
There are far too many leading underscores for my taste, but as you can see, it can be done. |
trouble with python manage.py migrate -> No module named psycopg2 | 33,237,274 | 2 | 2015-10-20T12:56:12Z | 33,237,669 | 7 | 2015-10-20T13:14:32Z | [
"python",
"django",
"postgresql",
"psycopg2",
"migrate"
] | I have some trouble with migrating Django using postgresql.
This is my first time with Django and I am just following the tutorial. Next to that I have looked on Google and Stackoverflow to see if I could find a solution.
I have created a virtualenv to run the Django project as suggested on the Django website.
Next I created a postgresql database with these settings:
[](http://i.stack.imgur.com/0td5y.png)
In settings.py I have set these values for the database:
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'django_tutorial',
'USER': 'johan',
'PASSWORD': '1234',
}
}
```
When installing psycopg2 with apt-get I get this message:
```
(venv)johan@johan-pc:~/sdp/django_tutorial/venv/django_tutorial$ sudo apt-get install python-psycopg2
Reading package lists... Done
Building dependency tree
Reading state information... Done
python-psycopg2 is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 95 not upgraded.
```
As far as I can tell this would mean that psycopg2 is installed.
When running
```
$ python manage.py migrate
```
I get the following error message:
```
django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module: No module named psycopg2
```
If needed for the answer I could provide the entire stack trace.
Could someone explain what I could do to solve this? | It must be because you are installing psycopg2 in your system level python installation not in your virtualenv.
```
sudo apt-get install python-psycopg2
```
will install it in your system level python installation.
You can install it in your virtualenv by
```
pip install psycopg2
```
after activating your virtualenv or you can create your virtualenv with `--system-site-packages` flag so that your virtualenv will have packages in your system level python already available.
```
virtualenv --system-site-packages test
```
where `test` is your virtualenv. |
Trailing slash triggers 404 in Flask path rule | 33,241,050 | 8 | 2015-10-20T15:44:09Z | 33,285,603 | 12 | 2015-10-22T16:10:08Z | [
"python",
"flask"
] | I want to redirect any path under `/users` to a static app. The following view should capture these paths and serve the appropriate file (it just prints the path for this example). This works for `/users`, `/users/604511`, and `/users/604511/action`. Why does the path `/users/` cause a 404 error?
```
@bp.route('/users')
@bp.route('/users/<path:path>')
def serve_client_app(path=None):
return path
``` | Your `/users` route is missing a trailing slash, which Werkzeug interprets as an explicit rule to not match a trailing slash. Either add the trailing slash, and Werkzeug will redirect if the url doesn't have it, or set [`strict_slashes=False`](http://werkzeug.pocoo.org/docs/0.10/routing/#rule-format) on the route and Werkzeug will match the rule with or without the slash.
```
@app.route('/users/')
@app.route('/users/<path:path>')
def users(path=None):
return str(path)
c = app.test_client()
print(c.get('/users')) # 301 MOVED PERMANENTLY (to /users/)
print(c.get('/users/')) # 200 OK
print(c.get('/users/test')) # 200 OK
```
```
@app.route('/users', strict_slashes=False)
@app.route('/users/<path:path>')
def users(path=None):
return str(path)
c = app.test_client()
print(c.get('/users')) # 200 OK
print(c.get('/users/')) # 200 OK
print(c.get('/users/test')) # 200 OK
``` |
Stopword removal with NLTK and Pandas | 33,245,567 | 5 | 2015-10-20T19:47:28Z | 33,246,035 | 9 | 2015-10-20T20:15:58Z | [
"python",
"csv",
"pandas",
"nltk",
"stop-words"
] | I have some issues with Pandas and NLTK. I am new to programming, so excuse me if I ask questions that might be easy to solve. I have a CSV file which has 3 columns (Id, Title, Body) and about 15,000 rows.
My goal is to remove the stopwords from this CSV file. The operations for lowercase and split are working well, but I cannot find my mistake: why do the stopwords not get removed? What am I missing?
```
import pandas as pd
from nltk.corpus import stopwords
pd.read_csv("test10in.csv", encoding="utf-8")
df = pd.read_csv("test10in.csv")
df.columns = ['Id','Title','Body']
df['Title'] = df['Title'].str.lower().str.split()
df['Body'] = df['Body'].str.lower().str.split()
stop = stopwords.words('english')
df['Title'].apply(lambda x: [item for item in x if item not in stop])
df['Body'].apply(lambda x: [item for item in x if item not in stop])
df.to_csv("test10out.csv")
``` | You are trying to do an in-place replacement, but `apply` returns a new Series rather than modifying the DataFrame, so assign the result back:
```
df['Title'] = df['Title'].apply(lambda x: [item for item in x if item not in stop])
df['Body'] = df['Body'].apply(lambda x: [item for item in x if item not in stop])
``` |
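pandas aside, the underlying point is that the filtering expression builds a new object and leaves the original untouched unless you assign the result back — a minimal stand-alone sketch, with a tiny hand-rolled stopword set standing in for NLTK's full English list:

```python
stop = {"the", "is", "a"}  # tiny stand-in for stopwords.words('english')
title = ["the", "sky", "is", "blue"]

# Builds a brand-new list; `title` itself is not modified.
filtered = [word for word in title if word not in stop]

print(title)     # unchanged: ['the', 'sky', 'is', 'blue']
print(filtered)  # ['sky', 'blue']
```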
installing libicu-dev on mac | 33,259,191 | 4 | 2015-10-21T12:21:22Z | 33,352,241 | 9 | 2015-10-26T17:44:53Z | [
"python",
"windows",
"osx",
"unicode",
"pip"
] | How do I install libicu-dev on a Mac? This is the instruction recommended in the documentation:
```
sudo apt-get install python-numpy libicu-dev
```
<http://polyglot.readthedocs.org/en/latest/Installation.html>
I am using anaconda but it seems to always throw up an
```
In file included from _icu.cpp:27:
./common.h:86:10: fatal error: 'unicode/utypes.h' file not found
#include <unicode/utypes.h>
```
error | I just got PyICU to install on OSX, after it was failing due to that same error. Here is what I recommend:
1. Install [homebrew](http://brew.sh/) (package manager for OSX)
2. `brew install icu4c` # Install the library; may be already installed
3. Verify that the necessary include directory is present: `ls -l /usr/local/opt/icu4c/include/`
4. If you do not have that directory, you may need to reinstall icu4u. I found that I had to do the following:
1. `brew remove icu4c`
2. `brew install icu4c`
5. Try to install polyglot to see if it can find icu4c: `pip install polyglot`
6. If that still complains, you can try specifying library location: `CFLAGS=-I/usr/local/opt/icu4c/include LDFLAGS=-L/usr/local/opt/icu4c/lib pip install polyglot` |
Split Text in Python | 33,260,106 | 2 | 2015-10-21T13:03:31Z | 33,260,154 | 11 | 2015-10-21T13:05:30Z | [
"python",
"text",
"split"
] | Is there an easy way to split text into separate lines each time a specific type of font arises. For example, I have text that looks like this:
```
BILLY: The sky is blue. SALLY: It really is blue. SAM: I think it looks like this: terrible.
```
I'd like to split the text into lines for each speaker:
```
BILLY: The sky is blue.
SALLY: It really is blue.
SAM: I think it looks like this: terrible.
```
The speaker is always capitalized with a colon following the name. | ```
import re
a="BILLY: The sky is blue. SALLY: It really is blue. SAM: I think it looks like this: terrible."
print re.split(r"\s(?=[A-Z]+:)",a)
```
You can use `re.split` for this.
Output:`['BILLY: The sky is blue.', 'SALLY: It really is blue.', 'SAM: I think it looks like this: terrible.']` |
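The pattern works because `(?=...)` is a zero-width lookahead: the split consumes only the whitespace character, while the speaker tag it peeks at stays attached to the following line:

```python
import re

text = "BILLY: The sky is blue. SALLY: It really is blue."

# Split on whitespace only when the next token is an all-caps name
# followed by a colon; the lookahead match itself is not consumed.
lines = re.split(r"\s(?=[A-Z]+:)", text)
print(lines)  # ['BILLY: The sky is blue.', 'SALLY: It really is blue.']
```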
pandas replace zeros with previous non zero value | 33,261,359 | 5 | 2015-10-21T13:59:09Z | 33,261,465 | 8 | 2015-10-21T14:03:27Z | [
"python",
"pandas"
] | I have the following dataframe:
```
index = range(14)
data = [1, 0, 0, 2, 0, 4, 6, 8, 0, 0, 0, 0, 2, 1]
df = pd.DataFrame(data=data, index=index, columns = ['A'])
```
How can I fill the zeros with the previous non-zero value using pandas? Is there a fillna that is not just for "NaN"?.
The output should look like:
```
[1, 1, 1, 2, 2, 4, 6, 8, 8, 8, 8, 8, 2, 1]
```
(This question was asked before here [Fill zero values of 1d numpy array with last non-zero values](http://stackoverflow.com/questions/30488961/fill-zero-values-of-1d-numpy-array-with-last-non-zero-values) but he was asking exclusively for a numpy solution) | You can use `replace` with `method='ffill'`
```
In [87]: df['A'].replace(to_replace=0, method='ffill')
Out[87]:
0 1
1 1
2 1
3 2
4 2
5 4
6 6
7 8
8 8
9 8
10 8
11 8
12 2
13 1
Name: A, dtype: int64
```
To get numpy array, work on `values`
```
In [88]: df['A'].replace(to_replace=0, method='ffill').values
Out[88]: array([1, 1, 1, 2, 2, 4, 6, 8, 8, 8, 8, 8, 2, 1], dtype=int64)
``` |
Why is "object.__dict__ is object.__dict__" False? | 33,262,578 | 8 | 2015-10-21T14:52:10Z | 33,262,645 | 7 | 2015-10-21T14:55:31Z | [
"python"
] | If I run the following code in a Python interpreter:
```
>>> object.__dict__ is object.__dict__
False
```
Why is the result `False`? | `object.__dict__`, unlike other `__dict__`s, returns a `mappingproxy` object (a `dict_proxy` in Python 2). These are created *on the fly* when `__dict__` is requested. As a result, you get a new proxy every time you access `object.__dict__`. They all proxy the same underlying object, but the proxy is a fresh one each time. That's why you can't get two identical ones. |
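This is easy to observe directly: two accesses compare equal (they wrap the same underlying dict) but are distinct proxy objects:

```python
import types

a = object.__dict__
b = object.__dict__

print(isinstance(a, types.MappingProxyType))  # True
print(a is b)  # False: a fresh proxy is created on each access
print(a == b)  # True: both wrap the same underlying dict
```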
Python/Django: Why does importing a module right before using it prevent a circular import? | 33,262,825 | 10 | 2015-10-21T15:04:08Z | 33,263,168 | 7 | 2015-10-21T15:21:22Z | [
"python",
"django",
"import",
"circular"
] | I've run into this problem a few times in different situations but my setup is the following:
I have two Django models files. One that contains User models and CouponCodes that a user can use to sign up for a Course. Those are both in the account/models.py file. Course and the related many-to-many field are in a different models file, course/models.py. I usually refer to these in my code as amod and cmod respectively.
In course/models.py I have an import statement:
```
from account import models as amod
class Course(ExtendedModel):
stuff = stuff
```
I need to import the account/models.py file for the many-to-many model/table between Course and User which is not shown here. So far, so good.
In the account/models.py file I have the CouponCode model. Each instance gets created and then can be assigned to a particular Course object after creation to allow a student to use it to sign up for a course in the system.
```
class CouponCode(ExtendedModel):
assigned_course = UniqueIDForeignKey("course.Course", blank=True, null=True, related_name='assigned_coupon_code_set')
...
...
@staticmethod
def assign_batch(c, upper_limit):
import course.models as cmod # Why is this allowed here?
assert isinstance(c, cmod.Course)
# Do other stuff here
```
That static method allows me to pass in a course object and a number of CouponCodes that I want to assign to it and then it will assign the next N number of unassigned codes to that course. My question arises from the assert statement.
I need to import the Course object from course/models.py in order to ensure that the object being passed in is actually an instance of Course, but if I do that at the top of the file, I get problems because this file is already being imported into course/models.py. (amod is being imported into cmod and then in amod I need to import cmod).
Why does it allow me to do this if I import it in the method right before I need it versus at the top of the file? | When a module is imported (well, the first time it's imported in a given process), all the top-level statements are executed (remember that `import` *is* an executable statement). So you cannot have module1 with an `import module2` statement at the top-level and module2 with an `import module1` at the top-level - it obviously cannot work.
Now if in module2 you move the `import module1` statement within a function, this statement won't be executed before the function is actually called, so it won't prevent module1 from importing module2.
Note that this is still considered bad practice, most of the time a circular dependency means you should refactor your code to avoid the problem (extract the parts both modules depends on into a third module that depends from both but none of the others depends on, or simply merge the modules) - but some cases are complex due to other constraints so having this as a last-resort solution is fine.
Also, you *do not* need to import a model to reference it in a `ForeignKey` or `ManyToMany` field - you can pass an `"appname.ModelName"` string, cf. <https://docs.djangoproject.com/en/1.8/ref/models/fields/#foreignkey>
> To refer to models defined in another application, you can explicitly
> specify a model with the full application label. For example, if the
> Manufacturer model above is defined in another application called
> production, you'd need to use:
>
> ```
> class Car(models.Model):
> manufacturer = models.ForeignKey('production.Manufacturer')
> ```
>
> This sort of reference can be useful when resolving circular import
> dependencies between two applications. |
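The deferral mechanics can be demonstrated with two throwaway modules (the module and variable names here are invented for the sketch):

```python
import os
import sys
import tempfile
import textwrap

# Create two modules with a circular dependency. mod_b defers its
# import of mod_a into a function body, so importing either succeeds.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "mod_a.py"), "w") as f:
    f.write(textwrap.dedent("""
        import mod_b  # top-level import, runs while mod_a loads
        VALUE = "a"
    """))
with open(os.path.join(tmp, "mod_b.py"), "w") as f:
    f.write(textwrap.dedent("""
        def get_a_value():
            import mod_a  # deferred: not executed until first call
            return mod_a.VALUE
    """))

sys.path.insert(0, tmp)
import mod_a

# By the time get_a_value() runs, mod_a is fully initialized.
print(mod_a.mod_b.get_a_value())  # a
```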
Is there any way to clear django.db.connection.queries? | 33,265,144 | 5 | 2015-10-21T17:00:19Z | 33,265,391 | 7 | 2015-10-21T17:14:34Z | [
"python",
"django",
"django-models"
] | I want to monitor query time in my system (built with Django models).
Finally I found `django.db.connection.queries`.
It shows all queries and the time taken for each.
Using this, I want to print the list of queries I have run at regular intervals, and then clear the list after printing it.
It seems to have methods of a list object (`pop`, `remove` and so on).
But even though I call `pop()`, it doesn't have any effect; the list still shows the same length.
How can I clear the list?
Or is there any other method for my intention?
P.S. I also found `django-debug-toolbar`, but it seems to be only for the view part. | You can call `reset_queries()` from the django.db module. |
Python: get a frequency count based on two columns (variables) in pandas dataframe | 33,271,098 | 4 | 2015-10-21T23:41:23Z | 33,271,634 | 7 | 2015-10-22T00:44:50Z | [
"python",
"pandas",
"group-by",
"dataframe"
] | Hello I have the following dataframe.
```
Group Size
Short Small
Short Small
Moderate Medium
Moderate Small
Tall Large
```
I want to count how many times the same row appears in the dataframe.
```
Group Size Time
Short Small 2
Moderate Medium 1
Moderate Small 1
Tall Large 1
``` | You can use groupby's [`size`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html):
```
In [11]: df.groupby(["Group", "Size"]).size()
Out[11]:
Group Size
Moderate Medium 1
Small 1
Short Small 2
Tall Large 1
dtype: int64
In [12]: df.groupby(["Group", "Size"]).size().reset_index(name="Time")
Out[12]:
Group Size Time
0 Moderate Medium 1
1 Moderate Small 1
2 Short Small 2
3 Tall Large 1
``` |
No module named BeautifulSoup (but it should be installed) | 33,286,790 | 3 | 2015-10-22T17:14:51Z | 33,286,853 | 7 | 2015-10-22T17:19:06Z | [
"python"
] | I downloaded BeautifulSoup.
Then I upgraded pip:
> pip install --upgrade pip
Then, installed BS:
> pip install beautifulsoup4
It seems like everything worked fine, but now when I run these three lines of code:
```
from BeautifulSoup import BeautifulSoup
import urllib2
import csv
```
I get this error.
> Traceback (most recent call last):
> File
> "C:\Users\rshuell001.spyder2\temp.py", line 1, in
> from BeautifulSoup import BeautifulSoup ImportError: No module named BeautifulSoup
I'm using Anaconda-Spyder
What am I doing wrong? | I think it should be `from bs4 import BeautifulSoup` :) |
Replace empty strings with None/null values in DataFrame | 33,287,886 | 5 | 2015-10-22T18:14:52Z | 33,308,193 | 10 | 2015-10-23T17:27:25Z | [
"python",
"apache-spark",
"dataframe",
"apache-spark-sql",
"pyspark"
] | I have a [Spark 1.5.0 DataFrame](https://spark.apache.org/docs/1.5.0/api/python/pyspark.sql.html#pyspark.sql.DataFrame) with a mix of `null` and empty strings in the same column. I want to convert all empty strings in all columns to `null` (`None`, in Python). The DataFrame may have hundreds of columns, so I'm trying to avoid hard-coded manipulations of each column.
See my attempt below, which results in an error.
```
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
## Create a test DataFrame
testDF = sqlContext.createDataFrame([Row(col1='foo', col2=1), Row(col1='', col2=2), Row(col1=None, col2='')])
testDF.show()
## +----+----+
## |col1|col2|
## +----+----+
## | foo| 1|
## | | 2|
## |null|null|
## +----+----+
## Try to replace an empty string with None/null
testDF.replace('', None).show()
## ValueError: value should be a float, int, long, string, list, or tuple
## A string value of null (obviously) doesn't work...
testDF.replace('', 'null').na.drop(subset='col1').show()
## +----+----+
## |col1|col2|
## +----+----+
## | foo| 1|
## |null| 2|
## +----+----+
``` | It is as simple as this:
```
from pyspark.sql.functions import col, when
def blank_as_null(x):
return when(col(x) != "", col(x)).otherwise(None)
dfWithEmptyReplaced = testDF.withColumn("col1", blank_as_null("col1"))
dfWithEmptyReplaced.show()
## +----+----+
## |col1|col2|
## +----+----+
## | foo| 1|
## |null| 2|
## |null|null|
## +----+----+
dfWithEmptyReplaced.na.drop().show()
## +----+----+
## |col1|col2|
## +----+----+
## | foo| 1|
## +----+----+
```
If you want to fill multiple columns you can for example reduce:
```
to_convert = set([...]) # Some set of columns
reduce(lambda df, x: df.withColumn(x, blank_as_null(x)), to_convert, testDF)
```
or use comprehension:
```
exprs = [
blank_as_null(x).alias(x) if x in to_convert else x for x in testDF.columns]
testDF.select(*exprs)
```
If you want to specifically operate on string fields please check [the answer](http://stackoverflow.com/a/39008555/1560062) by [robin-loxley](https://stackoverflow.com/users/1902732/robin-loxley). |
Noun phrases with spacy | 33,289,820 | 5 | 2015-10-22T20:12:04Z | 33,512,175 | 11 | 2015-11-04T01:26:34Z | [
"python",
"spacy"
] | How can I extract noun phrases from text using spacy?
I am not referring to part of speech tags.
In the documentation I cannot find anything about noun phrases or regular parse trees. | If you want base NPs, i.e. NPs without coordination, prepositional phrases or relative clauses, you can use the noun\_chunks iterator on the Doc and Span objects:
```
>>> from spacy.en import English
>>> nlp = English()
>>> doc = nlp(u'The cat and the dog sleep in the basket near the door.')
>>> for np in doc.noun_chunks:
>>> np.text
u'The cat'
u'the dog'
u'the basket'
u'the door'
```
If you need something else, the best way is to iterate over the words of the sentence and consider the syntactic context to determine whether the word governs the phrase-type you want. If it does, yield its subtree:
```
from spacy.symbols import *
np_labels = set([nsubj, nsubjpass, dobj, iobj, pobj]) # Probably others too
def iter_nps(doc):
for word in doc:
if word.dep in np_labels:
yield word.subtree
``` |