title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
Combine a list of pandas dataframes to one pandas dataframe | 32,444,138 | 6 | 2015-09-07T18:13:42Z | 32,444,187 | 10 | 2015-09-07T18:17:24Z | [
"python",
"pandas"
] | I have a list of Pandas dataframes that I would like to combine into one Pandas dataframe. I am using Python 2.7.10 and Pandas 0.16.2
I created the list of dataframes from:
```
import pandas as pd
dfs = []
sqlall = "select * from mytable"
for chunk in pd.read_sql_query(sqlall, cnxn, chunksize=10000):
    dfs.append(chunk)
```
This returns a list of dataframes
```
type(dfs[0])
Out[6]: pandas.core.frame.DataFrame
type(dfs)
Out[7]: list
len(dfs)
Out[8]: 408
```
Here is some sample data
```
# sample dataframes
d1 = pd.DataFrame({'one' : [1., 2., 3., 4.], 'two' : [4., 3., 2., 1.]})
d2 = pd.DataFrame({'one' : [5., 6., 7., 8.], 'two' : [9., 10., 11., 12.]})
d3 = pd.DataFrame({'one' : [15., 16., 17., 18.], 'two' : [19., 10., 11., 12.]})
# list of dataframes
mydfs = [d1, d2, d3]
```
I would like to combine `d1`, `d2`, and `d3` into one pandas dataframe. Alternatively, a method of reading a large-ish table directly into a dataframe when using the `chunksize` option would be very helpful. | Given that all the dataframes have the same columns, you can simply `concat` them:
```
import pandas as pd
df = pd.concat(list_of_dataframes)
``` |
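Given the chunked-read loop in the question, one optional refinement (a sketch using made-up sample frames) is to pass `ignore_index=True` so the per-chunk indices don't repeat in the combined result:

```python
import pandas as pd

# Hypothetical sample frames standing in for the chunks read from SQL
d1 = pd.DataFrame({'one': [1., 2.], 'two': [4., 3.]})
d2 = pd.DataFrame({'one': [5., 6.], 'two': [9., 10.]})

# Without ignore_index the original per-chunk indices are kept (0, 1, 0, 1);
# with it, the result gets a fresh 0..n-1 index.
combined = pd.concat([d1, d2], ignore_index=True)
print(combined.shape)  # (4, 2)
```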
Django app works fine, but getting a TEMPLATE_* warning message | 32,445,953 | 19 | 2015-09-07T20:57:45Z | 32,446,043 | 41 | 2015-09-07T21:07:07Z | [
"python",
"django",
"django-1.8"
] | When I use runserver, it gives this warning message:
> (1\_8.W001) The standalone TEMPLATE\_\* settings were deprecated in
> Django 1.8 and the TEMPLATES dictionary takes precedence. You must put
> the values of the following settings into your default TEMPLATES dict:
> TEMPLATE\_DEBUG.
Quoth the Django Documentation:
> "TEMPLATE\_DEBUG Deprecated since version 1.8: Set the 'debug' option
> in the OPTIONS of a DjangoTemplates backend instead."
Here is my settings.py with my futile attempts to fix it:
```
DEBUG = True
TEMPLATE_DEBUG = DEBUG

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [os.path.join(BASE_DIR, 'myapp/templates')],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
            'debug': DEBUG,
            'DEBUG': DEBUG,
            'TEMPLATE_DEBUG': DEBUG
        },
    },
]
```
What am I missing here? | Set `debug` in `OPTIONS` dictionary of your templates settings.
```
DEBUG = True

TEMPLATES = [
    {
        ...
        'OPTIONS': {
            'debug': DEBUG,
        },
    },
]
```
Then remove this line from your settings to stop the warnings
```
TEMPLATE_DEBUG = DEBUG
``` |
Django app works fine, but getting a TEMPLATE_* warning message | 32,445,953 | 19 | 2015-09-07T20:57:45Z | 34,643,824 | 8 | 2016-01-06T22:12:48Z | [
"python",
"django",
"django-1.8"
] | When I use runserver, it gives this warning message:
> (1\_8.W001) The standalone TEMPLATE\_\* settings were deprecated in
> Django 1.8 and the TEMPLATES dictionary takes precedence. You must put
> the values of the following settings into your default TEMPLATES dict:
> TEMPLATE\_DEBUG.
Quoth the Django Documentation:
> "TEMPLATE\_DEBUG Deprecated since version 1.8: Set the 'debug' option
> in the OPTIONS of a DjangoTemplates backend instead."
Here is my settings.py with my futile attempts to fix it:
```
DEBUG = True
TEMPLATE_DEBUG = DEBUG

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [os.path.join(BASE_DIR, 'myapp/templates')],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
            'debug': DEBUG,
            'DEBUG': DEBUG,
            'TEMPLATE_DEBUG': DEBUG
        },
    },
]
```
What am I missing here? | remove APP\_DIRS and add the loaders inside the templates.
example:
```
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [os.path.join(BASE_DIR, 'templates')],
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
            'loaders': [
                'django_jinja.loaders.AppLoader',
                'django_jinja.loaders.FileSystemLoader',
            ]
        },
    },
]
``` |
Why is while loop much faster than range in this case? | 32,448,958 | 2 | 2015-09-08T03:45:32Z | 32,449,045 | 7 | 2015-09-08T03:56:17Z | [
"python",
"loops"
According to this [post](http://stackoverflow.com/questions/869229/why-is-looping-over-range-in-python-faster-than-using-a-while-loop), a range loop should be faster than a while loop in Python, but please have a look at the following code. It is simply used to test whether a number is prime, returning the divisor if n is not prime.
```
import time
def isPrimeWhile(n):
    i = 2
    while(i <= n/2+1):
        if(n%i == 0):
            return i
        i += 1
    return n

def isPrimeRange(n):
    for i in range(2, n/2+1):
        if(n%i == 0):
            return i
    return n
n = 353591872901
start = time.time()
isPrimeWhile(n)
print("%s while seconds"%(time.time() - start))
start = time.time()
isPrimeRange(n)
print("%s range seconds"%(time.time() - start))
```
Run the code and you will find the while loop is much faster than the range loop. I am guessing that `range(0, aLargeNumber)` takes a very long time to run. But why, in the post mentioned above, is the range method much quicker? Any answers? | Since you are using Python 2 (your code needs to use integer division to work in Python 3) you are running into the fact that Python 2's `range` generates a list of all elements and then iterates over them.
This would explain the differences in time that it takes for the `while` and `range` functions to run.
Incidentally, the code for Python 3+ needs the following change:
```
def isPrimeRange(n):
    for i in range(2, n//2+1):  # integer division
        if(n%i == 0):
            return i
    return n
```
[This Python Blog Post](http://pythoncentral.io/pythons-range-function-explained/) explains in great detail the differences between `range` (returns a list) and `xrange` (returns an iterator) in Python 2+ and how Python 3+ has changed this functionality.
The most relevant paragraph from that source is this:
> When you're using an iterator, every loop of the for statement produces the next number on the fly. Whereas the original range() function produced all numbers instantaneously, before the for loop started executing. The problem with the original range() function was that it used a very large amount of memory when producing a lot of numbers. However it tends to be quicker with a small amount of numbers. Note that in Python 3.x, you can still produce a list by passing the generator returned to the list() function. |
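In Python 3 this distinction disappears because `range` itself is lazy. A quick sketch (Python 3) showing the constant memory footprint:

```python
import sys

# In Python 3, range() returns a lazy sequence object: its memory footprint
# is constant regardless of the requested length, unlike the Python 2 list.
r = range(10**9)
print(sys.getsizeof(r))   # a few dozen bytes, not gigabytes
print(r[500])             # indexing works without materializing the sequence
print(500 in r)           # membership tests are O(1) for ranges
```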
What is the purpose of this if statement from "Learning Python the Hard Way"? | 32,449,229 | 3 | 2015-09-08T04:21:46Z | 32,449,241 | 8 | 2015-09-08T04:23:50Z | [
"python"
] | I am currently reading *Learning Python the Hard Way* and I have a question regarding one line of the following code.
```
cities = {'CA': 'San Francisco', 'MI': 'Detroit', 'FL': 'Jacksonville'}
cities['NY'] = 'New York'
cities['OR'] = 'Portland'
def find_city(themap, state):
    if state in themap:
        return themap[state]
    else:
        return "Not found."

cities['_find'] = find_city

while True:
    print "State? (Enter to quit)"
    state = raw_input(">")
    # This is the line I have a question about
    if not state: break
    city_found = cities['_find'](cities, state)
    print city_found
```
1. I would like to know the purpose of this line, since the code runs without an error even when I delete it.
2. It seems like there's no condition that will ever make the program run this line, as I tried to put a `print` statement before `break` and it never got printed. | An empty string is considered a falsey value. Therefore, `if not state:` means that the content of that block will be evaluated when `state` is an empty string (or any other falsey value). The `break` ends the loop early.
What this does is exit the loop immediately when the user simply presses `Enter` without entering any text. |
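A quick sketch of the truthiness rule described above — `bool()` mirrors what `if not state:` sees:

```python
# Values Python treats as falsy evaluate to False under bool(),
# which is exactly the test `if not state:` performs.
for value in ("", 0, [], None, "text"):
    print(repr(value), bool(value))
```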
Is this a valid use of a conditional expression? | 32,454,225 | 10 | 2015-09-08T09:35:45Z | 32,454,339 | 16 | 2015-09-08T09:40:59Z | [
"python",
"python-3.x"
] | I'm trying to figure out what the best way of doing this is:
```
resource['contents'][media_type] = []
resource['contents'][media_type].append(row[0].toPython()) if row[0] is not None else None
resource['contents'][media_type].append(row[2].toPython()) if row[2] is not None else None
```
I think the code is quite simple; if the rows have a value then add them to the list. Is this approach considered OK? Is there any other approach that would be better? The `toPython` method will return the string description of the contained object. | Using a "ternary" [conditional expression](https://docs.python.org/2/reference/expressions.html#conditional-expressions) (`x if C else y`) for side effects is not at all Pythonic. Here's how I would do it:
```
resource['contents'][media_type] = []
for index in (0, 2):
    item = row[index]
    if item is not None:
        resource['contents'][media_type].append(item.toPython())
```
or using a list comprehension to reduce verbosity:
```
resource['contents'][media_type] = [row[i].toPython() for i in (0, 2)
                                    if row[i] is not None]
```
These approaches are much more readable, and reduce duplication. |
Is this a valid use of a conditional expression? | 32,454,225 | 10 | 2015-09-08T09:35:45Z | 32,454,341 | 10 | 2015-09-08T09:41:08Z | [
"python",
"python-3.x"
] | I'm trying to figure out what the best way of doing this is:
```
resource['contents'][media_type] = []
resource['contents'][media_type].append(row[0].toPython()) if row[0] is not None else None
resource['contents'][media_type].append(row[2].toPython()) if row[2] is not None else None
```
I think the code is quite simple; if the rows have a value then add them to the list. Is this approach considered OK? Is there any other approach that would be better? The `toPython` method will return the string description of the contained object. | No, that's not a valid use of a conditional expression. It confuses anyone trying to read your code.
Use an `if` statement; you can save some space by creating another reference to the list:
```
lst = resource['contents'][media_type] = []
if row[0] is not None: lst.append(row[0].toPython())
if row[2] is not None: lst.append(row[2].toPython())
```
but use a better name for the local reference (`contents` perhaps?), or use a list comprehension:
```
resource['contents'][media_type] = [
    col.toPython() for col in (row[0], row[2]) if col is not None]
``` |
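For contrast, a minimal sketch (with a hypothetical `describe` helper) of a conditional expression used the way it is intended — for its value, not its side effects:

```python
# A conditional expression is fine when you use its *value*,
# not its side effects.
def describe(n):
    return "even" if n % 2 == 0 else "odd"

print(describe(4))  # even
print(describe(7))  # odd
```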
Git 2.5.1's bash console doesn't open python interpreter | 32,454,589 | 10 | 2015-09-08T09:51:34Z | 33,696,825 | 12 | 2015-11-13T15:59:04Z | [
"python",
"git",
"bash"
] | If I do it in CMD, it works without issues, but if I try it in Git Bash it doesn't work. I like to use Git Bash as my only console, but I can't do that if it doesn't work with Python 3.4.
Example is in the picture below. This can be easily reproduced. Uninstall Python and Git if they are installed, install Python 3.4, install Git 2.5.1 and you get that result.
[](http://i.stack.imgur.com/nJV5U.png)
How can I make the Python interpreter work in Git Bash ? | The MinTTY terminal that is the new default terminal for Git simply doesn't support Windows console programs. I don't know why the decision was made to change the default terminal, but I know two ways to work around this:
1. Launch python in interactive mode explicitly, or use winpty:
**Interactive mode:**
```
python -i
```
**With winpty**:
As of Git for Windows 2.7.1, Winpty is included out of the box, and can be run like so:
```
winpty /path/to/python.exe
```
winpty can be found installed at `Git\usr\bin`
A less intrusive option to your workflow may be:
2. Write a Bash alias to launch python in interactive mode if there are no arguments:
**Bash Alias (put in your .bashrc):**
```
function maybe_python34() {
    if [ $# -eq 0 ]; then
        /c/Python34/python.exe -i
    else
        /c/Python34/python.exe "$@"
    fi
}
alias python=maybe_python34
``` |
Python: understanding iterators and `join()` better | 32,462,194 | 10 | 2015-09-08T15:52:22Z | 32,462,254 | 10 | 2015-09-08T15:55:33Z | [
"python",
"python-internals"
The `join()` function accepts an iterable as a parameter. However, I was wondering why, having:
```
text = 'asdfqwer'
```
This:
```
''.join([c for c in text])
```
Is significantly faster than:
```
''.join(c for c in text)
```
The same occurs with long strings (i.e. `text * 10000000`).
Watching the memory footprint of both executions with long strings, I think they both create one and only one list of chars in memory, and then join them into a string. So I am guessing perhaps the difference is only between how `join()` creates this list out of the generator and how the Python interpreter does the same thing when it sees `[c for c in text]`. But, again, I am just guessing, so I would like somebody to confirm/deny my guesses. | The `join` method reads its input twice; once to determine how much memory to allocate for the resulting string object, then again to perform the actual join. Passing a list is faster than passing a generator object that it needs to make a copy of so that it can iterate over it twice.
A list comprehension is not simply a generator object wrapped in a list, so constructing the list externally is faster than having `join` create it from a generator object. Generator objects are optimized for memory efficiency, not speed.
Of course, a string is *already* an iterable object, so you could just write `''.join(text)`. (Again, this is not as fast as creating the list explicitly from the string.) |
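A rough way to check this claim yourself — a timing sketch (absolute numbers will vary by machine):

```python
import timeit

text = 'asdfqwer' * 10000

# Same result either way; the list comprehension is typically faster
# because join can size the output in a single pass over a real list.
assert ''.join([c for c in text]) == ''.join(c for c in text)

list_time = timeit.timeit("''.join([c for c in text])",
                          globals={'text': text}, number=100)
gen_time = timeit.timeit("''.join(c for c in text)",
                         globals={'text': text}, number=100)
print(f"list: {list_time:.4f}s  generator: {gen_time:.4f}s")
```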
Add colorbar to existing axis | 32,462,881 | 8 | 2015-09-08T16:32:40Z | 32,463,689 | 9 | 2015-09-08T17:21:13Z | [
"python",
"matplotlib"
] | I'm making some interactive plots and I would like to add a colorbar legend. I don't want the colorbar to be in its own axes, so I want to add it to the existing axes. I'm having difficulties doing this, as most of the example code I have found creates a new axes for the colorbar.
I have tried the following code using `matplotlib.colorbar.ColorbarBase`, which adds a colorbar to an existing axes, but it gives me strange results and I can't figure out how to specify attributes of the colorbar (for instance, where on the axes it is placed and what size it is)
```
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.cm import coolwarm
import numpy as np
x = np.random.uniform(1, 10, 10)
y = np.random.uniform(1, 10, 10)
v = np.random.uniform(1, 10, 10)
fig, ax = plt.subplots()
s = ax.scatter(x, y, c=v, cmap=coolwarm)
matplotlib.colorbar.ColorbarBase(ax=ax, cmap=coolwarm, values=sorted(v),
                                 orientation="horizontal")
```
Using `fig.colorbar` instead of`matplotlib.colorbar.ColorbarBase` still doesn't give me quite what I want, and I still don't know how to adjust the attributes of the colorbar.
```
fig.colorbar(s, ax=ax, cax=ax)
```
[](http://i.stack.imgur.com/UyVAf.png)
Let's say I want to have the colorbar in the top left corner, stretching about halfway across the top of the plot. How would I go about doing that?
Am I better off writing a custom function for this, maybe using `LineCollection`? | The colorbar has to have its own axes. However, you can create an axes that overlaps with the previous one. Then use the `cax` kwarg to tell `fig.colorbar` to use the new axes.
For example:
```
import numpy as np
import matplotlib.pyplot as plt
data = np.arange(100, 0, -1).reshape(10, 10)
fig, ax = plt.subplots()
cax = fig.add_axes([0.27, 0.8, 0.5, 0.05])
im = ax.imshow(data, cmap='gist_earth')
fig.colorbar(im, cax=cax, orientation='horizontal')
plt.show()
```
[](http://i.stack.imgur.com/iTjhY.png) |
Spark performance for Scala vs Python | 32,464,122 | 58 | 2015-09-08T17:46:02Z | 32,471,016 | 104 | 2015-09-09T04:39:57Z | [
"python",
"performance",
"scala",
"apache-spark",
"pyspark"
] | I prefer Python over Scala. But, as Spark is natively written in Scala, I was expecting my code to run faster in the Scala than the Python version for obvious reasons.
With that assumption, I thought to learn & write the Scala version of some very common preprocessing code for some 1 GB of data. Data is picked from the SpringLeaf competition on [Kaggle](https://en.wikipedia.org/wiki/Kaggle). Just to give an overview of the data (it contains 1936 dimensions and 145232 rows). Data is composed of various types e.g. int, float, string, boolean. I am using 6 cores out of 8 for Spark processing; that's why I used minPartitions=6 so that every core has something to process.
**Scala Code**
```
val input = sc.textFile("train.csv", minPartitions=6)
val input2 = input.mapPartitionsWithIndex { (idx, iter) => if (idx == 0) iter.drop(1) else iter }
val delim1 = "\001"

def separateCols(line: String): Array[String] = {
  val line2 = line.replaceAll("true", "1")
  val line3 = line2.replaceAll("false", "0")
  val vals: Array[String] = line3.split(",")
  for ((x, i) <- vals.view.zipWithIndex) {
    vals(i) = "VAR_%04d".format(i) + delim1 + x
  }
  vals
}

val input3 = input2.flatMap(separateCols)

def toKeyVal(line: String): (String, String) = {
  val vals = line.split(delim1)
  (vals(0), vals(1))
}

val input4 = input3.map(toKeyVal)

def valsConcat(val1: String, val2: String): String = {
  val1 + "," + val2
}

val input5 = input4.reduceByKey(valsConcat)
input5.saveAsTextFile("output")
```
**Python Code**
```
input = sc.textFile('train.csv', minPartitions=6)
DELIM_1 = '\001'

def drop_first_line(index, itr):
    if index == 0:
        return iter(list(itr)[1:])
    else:
        return itr

input2 = input.mapPartitionsWithIndex(drop_first_line)

def separate_cols(line):
    line = line.replace('true', '1').replace('false', '0')
    vals = line.split(',')
    vals2 = ['VAR_%04d%s%s' % (e, DELIM_1, val.strip('\"')) for e, val in enumerate(vals)]
    return vals2

input3 = input2.flatMap(separate_cols)

def to_key_val(kv):
    key, val = kv.split(DELIM_1)
    return (key, val)

input4 = input3.map(to_key_val)

def vals_concat(v1, v2):
    return v1 + ',' + v2

input5 = input4.reduceByKey(vals_concat)
input5.saveAsTextFile('output')
```
**Scala Performance**
Stage 0 (38 mins), Stage 1 (18 sec)
[](http://i.stack.imgur.com/uDJ30.png)
**Python Performance**
Stage 0 (11 mins), Stage 1 (7 sec)
[](http://i.stack.imgur.com/3wwcQ.png)
Both produces different DAG visualisation graphs (due to which both pics show different stage 0 functions for Scala (map) and Python (reduceByKey))
But, essentially both code tries to transform data into (dimension\_id, string of list of values) RDD and save to disk. The output will be used to compute various statistics for each dimension.
Performance-wise, the Scala code for real data like this seems to run **4 times slower** than the Python version.
The good news for me is that it gave me good motivation to stay with Python. The bad news is I didn't quite understand why. | ---
The original answer discussing the code can be found below.
---
First of all you have to distinguish between different types of API, each with its own performance consideration.
### RDD API
*(pure Python structures with JVM based orchestration)*
This is the component which will be most affected by the performance of the Python code and the details of the PySpark implementation. While Python performance is rather unlikely to be a problem, there are at least a few factors you have to consider:
* an overhead of JVM communication. Practically all data that comes to and from a Python executor has to be passed through a socket and a JVM worker. While this is relatively efficient local communication, it is still not free.
* process-based executors (Python) versus thread-based (single JVM, multiple threads) executors (Scala). Each Python executor runs in its own process. As a side effect, it provides stronger isolation than its JVM counterpart and some control over executor lifecycle, but with potentially significantly higher memory usage:
+ interpreter memory footprint
+ footprint of the loaded libraries
+ less efficient broadcasting (each process requires its own copy of a broadcast)
* performance of the Python code itself. Generally speaking, Scala is faster than plain Python, but it will vary from task to task. Moreover you have multiple options including JITs like [Numba](http://numba.pydata.org/), C extensions ([Cython](http://cython.org/)) or specialized libraries like [Theano](http://deeplearning.net/software/theano/). Finally, if you don't use ML / MLlib (or simply the NumPy stack), consider using [PyPy](http://pypy.org/) as an alternative interpreter. See [SPARK-3094](https://issues.apache.org/jira/browse/SPARK-3094).
* PySpark configuration provides the `spark.python.worker.reuse` option, which can be used to choose between forking a Python process for each task and reusing an existing process. The latter option seems to be useful to avoid expensive garbage collection (it is more an impression than a result of systematic tests), while the former one (the default) is optimal in the case of expensive broadcasts and imports.
### MLlib
*(mixed Python and JVM execution)*
Basic considerations are pretty much the same as before with a few additional issues. While basic structures used with MLlib are plain Python RDD objects all algorithms are executed directly using Scala.
It means additional cost of converting Python objects to Scala objects and the other way around, increased memory usage and some additional limitations we'll cover later.
### DataFrame API and Spark ML
*(JVM execution with Python code limited to the driver)*
These are probably the best choice for standard data processing tasks. Since Python code is mostly limited to the high level logical operations on the driver there should be no performance difference between Python and Scala.
A single exception is Python UDFs, which are significantly less efficient than their Scala equivalents. While there are some chances for improvements (there has been substantial development in Spark 2.0.0), the biggest limitation is the full roundtrip between the internal representation (JVM) and the Python interpreter. If at all possible, you should favor compositions of built-in expressions. See for example [Stack Overflow while processing several columns with a UDF](http://stackoverflow.com/q/35066231/1560062). Python UDF behavior has been improved in Spark 2.0.0, but it is still suboptimal compared to native execution.
Also be sure to avoid unnecessary passing data between `DataFrames` and `RDDs`. This requires expensive serialization and deserialization not to mention data transfer to and from Python interpreter.
### GraphX and Spark DataSets
As for now (Spark 1.6) neither one provides a PySpark API, so you can say that PySpark is infinitely worse than Scala (although the introduction of [GraphFrames](https://github.com/graphframes/graphframes) makes the first one less important).
### Streaming
From what I've seen so far I would strongly recommend using Scala over Python. It may change in the future if PySpark gets support for structured streams but right now Scala API seems to be much more robust, comprehensive and efficient. My experience is quite limited though.
### Non-performance considerations
Feature parity
Not all Spark features are exposed through PySpark API. Be sure to check if the parts you need are already implemented and try to understand possible limitations.
It is particularly important when you use MLlib and similar mixed contexts (see [How to use Java/Scala function from an action or a transformation?](http://stackoverflow.com/q/31684842/1560062)). To be fair, some parts of the PySpark API, like `mllib.linalg`, provide a far more comprehensive set of methods than Scala.
API design
PySpark API closely reflects its Scala counterpart and as such is not exactly Pythonic. It means that it is pretty easy to map between languages but at the same time Python code can be significantly harder to understand.
Complex architecture
PySpark data flow is relatively complex compared to pure JVM execution. It is much harder to reason about PySpark programs or debug. Moreover at least basic understanding of Scala and JVM in general is pretty much must have.
### It doesn't have to be one vs. another
Spark DataFrame (SQL, Dataset) API provides an elegant way to integrate Scala / Java code in PySpark application. You can use `DataFrames` to expose data to a native JVM code and read back the results. I've explained some options [somewhere else](http://stackoverflow.com/q/31684842) and you can find a working example of Python-Scala roundtrip in [How to use a Scala class inside Pyspark](http://stackoverflow.com/q/36023860/1560062).
It can be further augmented by introducing User Defined Types (see [How to define schema for custom type in Spark SQL?](http://stackoverflow.com/q/32440461/1560062)).
---
## What is wrong with a code provided in the question
*(Disclaimer: Pythonista point of view. Most likely I've missed some Scala tricks)*
First of all, there is one part in your code which doesn't make sense at all. If you already have `(key, value)` pairs created using `zipWithIndex` or `enumerate`, what is the point in creating a string just to split it right afterwards? `flatMap` doesn't work recursively, so you can simply yield tuples and skip the following `map` altogether.
Another part I find problematic is `reduceByKey`. Generally speaking, `reduceByKey` is useful if applying the aggregate function can reduce the amount of data that has to be shuffled. Since you simply concatenate strings there is nothing to gain here. Ignoring low-level stuff, like the number of references, the amount of data you have to transfer is exactly the same as for `groupByKey`.
Normally I wouldn't dwell on that, but as far as I can tell it is a bottleneck in your Scala code. Joining strings on the JVM is a rather expensive operation (see for example: [Is string concatenation in scala as costly as it is in Java?](http://stackoverflow.com/q/8608664/1560062)). It means that something like `_.reduceByKey((v1: String, v2: String) => v1 + ',' + v2)`, which is equivalent to `input4.reduceByKey(valsConcat)` in your code, is not a good idea.
If you want to avoid `groupByKey` you can try to use `aggregateByKey` with `StringBuilder`. Something similar to this should do the trick:
```
rdd.aggregateByKey(new StringBuilder)(
  (acc, e) => {
    if (!acc.isEmpty) acc.append(",").append(e)
    else acc.append(e)
  },
  (acc1, acc2) => {
    if (acc1.isEmpty | acc2.isEmpty) acc1.addString(acc2)
    else acc1.append(",").addString(acc2)
  }
)
```
but I doubt it is worth all the fuss.
Keeping above in mind I've rewritten your code as follows:
**Scala**:
```
val input = sc.textFile("train.csv", 6).mapPartitionsWithIndex {
  (idx, iter) => if (idx == 0) iter.drop(1) else iter
}

val pairs = input.flatMap(line => line.split(",").zipWithIndex.map {
  case ("true", i) => (i, "1")
  case ("false", i) => (i, "0")
  case p => p.swap
})

val result = pairs.groupByKey.map {
  case (k, vals) => {
    val valsString = vals.mkString(",")
    s"$k,$valsString"
  }
}

result.saveAsTextFile("scalaout")
```
**Python**:
```
def drop_first_line(index, itr):
    if index == 0:
        return iter(list(itr)[1:])
    else:
        return itr

def separate_cols(line):
    line = line.replace('true', '1').replace('false', '0')
    vals = line.split(',')
    for (i, x) in enumerate(vals):
        yield (i, x)

input = (sc
    .textFile('train.csv', minPartitions=6)
    .mapPartitionsWithIndex(drop_first_line))

pairs = input.flatMap(separate_cols)

result = (pairs
    .groupByKey()
    .map(lambda kv: "{0},{1}".format(kv[0], ",".join(kv[1]))))

result.saveAsTextFile("pythonout")
```
### Results
In `local[6]` mode (Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz) with 4GB memory per executor it takes (n = 3):
* Scala - mean: 250.00s, stdev: 12.49
* Python - mean: 246.66s, stdev: 1.15
I am pretty sure that most of that time is spent on shuffling, serializing, deserializing and other secondary tasks. Just for fun, here is naive single-threaded code in Python that performs the same task on this machine in less than a minute:
```
def go():
    with open("train.csv") as fr:
        lines = [
            line.replace('true', '1').replace('false', '0').split(",")
            for line in fr]
    return zip(*lines[1:])
``` |
For line in : not returning all lines | 32,464,521 | 4 | 2015-09-08T18:11:30Z | 32,464,538 | 7 | 2015-09-08T18:12:56Z | [
"python",
"dictionary",
"readlines"
] | I am trying to traverse a text file and take each line and put it into a dictionary. Ex:
If the txt file is
a
b
c
I am trying to create a dictionary like
word\_dict = {'a': 1, 'b': 2, 'c': 3}
When I use this code:
```
def word_dict():
    fin = open('words2.txt','r')
    dict_words = dict()
    i = 1
    for line in fin:
        txt = fin.readline().strip()
        dict_words.update({txt: i})
        i += 1
    print(dict_words)
```
My dictionary only contains a partial list. If I use this code (not trying to build the dictionary, just testing):
```
def word_dict():
    fin = open('words2.txt','r')
    i = 1
    while fin.readline():
        txt = fin.readline().strip()
        print(i, '.', txt)
        i += 1
```
Same thing. It prints a list of values that is incomplete. The list matches the dictionary values though. What am I missing? | You're trying to read the lines twice.
Just do this:
```
def word_dict(file_path):
    with open(file_path, 'r') as input_file:
        words = {line.strip(): i for i, line in enumerate(input_file, 1)}
    return words
print(word_dict('words2.txt'))
```
This fixes a couple of things.
1. Functions should not have hard coded variables, rather you should use an argument. This way you can reuse the function.
2. Functions should (generally) `return` values instead of printing them. This allows you to use the results of the function in further computation.
3. You were using a manual index variable instead of using the builtin `enumerate`.
This line `{line.strip(): i for i, line in enumerate(input_file, 1)}` is what's known as a dictionary comprehension. It is equivalent to the follow code:
```
words = {}
for i, line in enumerate(input_file, 1):
    words[line.strip()] = i
``` |
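A small sketch verifying the comprehension on in-memory data (using `io.StringIO` to stand in for the file):

```python
import io

# StringIO behaves like a file opened for reading, so the same
# dictionary comprehension works on it directly.
fake_file = io.StringIO("a\nb\nc\n")
words = {line.strip(): i for i, line in enumerate(fake_file, 1)}
print(words)  # {'a': 1, 'b': 2, 'c': 3}
```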
What exactly is the "resolution" parameter of numpy float | 32,465,481 | 4 | 2015-09-08T19:11:42Z | 32,466,516 | 7 | 2015-09-08T20:16:17Z | [
"python",
"numpy",
"floating-point",
"precision",
"floating-accuracy"
] | I am seeking some more understanding about the "resolution" parameter of a numpy float (I guess any computer defined float for that matter).
Consider the following script:
```
import numpy as np
a = np.finfo(10.1)
print a
```
I get an output which among other things prints out:
```
precision=15 resolution= 1.0000000000000001e-15
max= 1.797(...)e+308
min= -max
```
The numpy documentation specifies: "resolution: (floating point number of the appropriate type) The approximate decimal resolution of this type, i.e., 10\*\*-precision." [source](http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.finfo.html)
resolution is derived from precision, but unfortunately this definition is somewhat circular "precision (int): The approximate number of decimal digits to which this kind of float is precise." [source](http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.finfo.html)
I understand that floating point numbers are merely finite representations of real numbers and therefore have error in their representation, and that precision is probably a measure of this deviation. But practically, does it mean that I should expect results to be erroneous if I preform operations using numbers less than the resolution? How can I quantify the error, for say addition, of two floating point numbers given their precision? If the resolution is as "large" as 1e-15, why would the smallest allowable number be on the order of 1e-308?
Thank you in advance! | The short answer is "*don't confuse [`numpy.finfo`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.finfo.html) with [`numpy.spacing`](http://nullege.com/codes/search/numpy.spacing?fulldoc=1)*".
`finfo` operates on the `dtype`, while `spacing` operates on the value.
## Background Information
First, though, some general explanation:
---
The key part to understand is that floating point numbers are similar to scientific notation. Just like you'd write 0.000001 as `1.0 x 10^-6`, floats are similar to `c x 2^q`. In other words, they have two separate parts - a coefficient (`c`, a.k.a. "significand") and an exponent (`q`). These two values are stored as integers.
Therefore, how closely a value can be represented (let's think of this as the degree of discretization) is a function of both parts, and *depends on the magnitude of the value*.
However, the "precision" (as referred to by `np.finfo`) is essentially the number of significant digits if the number were written in base-10 scientific notation. The "resolution" is the resolution of the coefficient (part in front) if the value were written in the same base-10 scientific notation (i.e. `10^-precision`). In other words, both are only a function of the coefficient.
## Numpy-specific
For `numpy.finfo`, "precision" and "resolution" are simply the inverse of each other. *Neither one tells you how closely a particular number is being represented*. They're purely a function of the `dtype`.
Instead, if you're worried about the absolute degree of discretization, use `numpy.spacing(your_float)`. This will return the difference in the next largest value in that particular format (e.g. it's different for a `float32` than a `float64`).
## Examples
As an example:
```
In [1]: import numpy as np
In [2]: np.spacing(10.1)
Out[2]: 1.7763568394002505e-15
In [3]: np.spacing(10000000000.1)
Out[3]: 1.9073486328125e-06
In [4]: np.spacing(1000000000000.1)
Out[4]: 0.0001220703125
In [5]: np.spacing(100000000000000.1)
Out[5]: 0.015625
In [6]: np.spacing(10000000000000000.1)
Out[6]: 2.0
```
But the precision and resolution don't change:
```
In [7]: np.finfo(10.1).precision
Out[7]: 15
In [8]: np.finfo(10000000000000000.1).precision
Out[8]: 15
In [9]: np.finfo(10.1).resolution
Out[9]: 1.0000000000000001e-15
In [10]: np.finfo(10000000000000000000.1).resolution
Out[10]: 1.0000000000000001e-15
```
Also note that all of these depend on the data type that you're using:
```
In [11]: np.spacing(np.float32(10.1))
Out[11]: 9.5367432e-07
In [12]: np.spacing(np.float32(10000000000000.1))
Out[12]: 1048576.0
In [13]: np.finfo(np.float32).precision
Out[13]: 6
In [14]: np.finfo(np.float32).resolution
Out[14]: 1e-06
In [15]: np.spacing(np.float128(10.1))
Out[15]: 8.6736173798840354721e-19
In [16]: np.spacing(np.float128(10000000000000.1))
Out[16]: 9.5367431640625e-07
In [17]: np.finfo(np.float128).precision
Out[17]: 18
In [18]: np.finfo(np.float128).resolution
Out[18]: 1.0000000000000000007e-18
```
---
## Specific Questions
Now on to your specific questions:
> But practically, does it mean that I should expect results to be erroneous if I perform operations using numbers less than the resolution?
No, because the precision/resolution (in `numpy.finfo` terms) is only a function of the coefficient, and doesn't take into account the exponent. Very small and very large numbers have the same "precision", but that's not an absolute "error".
As a rule of thumb, when using the "resolution" or "precision" terms from `finfo`, think of scientific notation. If we're operating on small numbers with similar magnitudes, we don't need to worry about much.
Let's take the decimal math case with 6 significant digits (somewhat similar to a `float32`):
```
1.20000 x 10^-19 + 2.50000 x 10^-20 => 1.45000 x 10^-19
```
However, if we operate on numbers with wildly different magnitudes but limited precision (again, 6 significant digits):
```
1.20000 x 10^6 + 2.50000 x 10^-5 => 1.20000 x 10^6
```
We'll start to see the effects quite clearly.
> How can I quantify the error, for, say, addition, of two floating point numbers given their precision?
Use `np.spacing(result)`.
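As a rough, hedged sketch of that (the values assume a typical IEEE-754 `float64`): the rounding error of a single addition is bounded by about half the spacing at the result's magnitude, so an addend far below that spacing simply vanishes:

```python
import numpy as np

a = 1.0e16
print(np.spacing(a))  # 2.0: neighbouring float64 values are 2 apart at this magnitude
print(a + 0.1 == a)   # True: 0.1 is far below the spacing, so it is lost entirely
```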
> If the resolution is as "large" as 1e-15, why would the smallest allowable number be on the order of 1e-308?
Again, the "resolution" in this case doesn't take into account the exponent, just the part in front.
---
Hopefully that helps clarify things somewhat. All of this is a bit confusing, and everyone gets bitten by it at some point. It's good to build up a bit of intuition about it, and to know which functions to call to find out exactly what happens on your platform of choice! |
How to explode a list inside a Dataframe cell into separate rows | 32,468,402 | 10 | 2015-09-08T22:43:05Z | 32,470,490 | 10 | 2015-09-09T03:36:02Z | [
"python",
"pandas",
"dataframe"
] | I'm looking to turn a pandas cell containing a list into rows for each of those values.
So, take this:
[](http://i.stack.imgur.com/j7lFk.png)
If I'd like to unpack and stack the values in the 'nearest\_neighbors" column so that each value would be a row within each 'opponent' index, how would I best go about this? Are there pandas methods that are meant for operations like this? I'm just not aware.
Thanks in advance, guys. | In the code below, I first reset the index to make the row iteration easier.
I create a list of lists where each element of the outer list is a row of the target DataFrame and each element of the inner list is one of the columns. This nested list will ultimately be concatenated to create the desired DataFrame.
I use a `lambda` function together with list iteration to create a row for each element of the `nearest_neighbors` paired with the relevant `name` and `opponent`.
Finally, I create a new DataFrame from this list (using the original column names and setting the index back to `name` and `opponent`).
```
df = (pd.DataFrame({'name': ['A.J. Price'] * 3,
'opponent': ['76ers', 'blazers', 'bobcats'],
'nearest_neighbors': [['Zach LaVine', 'Jeremy Lin', 'Nate Robinson', 'Isaia']] * 3})
.set_index(['name', 'opponent']))
>>> df
nearest_neighbors
name opponent
A.J. Price 76ers [Zach LaVine, Jeremy Lin, Nate Robinson, Isaia]
blazers [Zach LaVine, Jeremy Lin, Nate Robinson, Isaia]
bobcats [Zach LaVine, Jeremy Lin, Nate Robinson, Isaia]
df.reset_index(inplace=True)
rows = []
_ = df.apply(lambda row: [rows.append([row['name'], row['opponent'], nn])
for nn in row.nearest_neighbors], axis=1)
df_new = pd.DataFrame(rows, columns=df.columns).set_index(['name', 'opponent'])
>>> df_new
nearest_neighbors
name opponent
A.J. Price 76ers Zach LaVine
76ers Jeremy Lin
76ers Nate Robinson
76ers Isaia
blazers Zach LaVine
blazers Jeremy Lin
blazers Nate Robinson
blazers Isaia
bobcats Zach LaVine
bobcats Jeremy Lin
bobcats Nate Robinson
bobcats Isaia
``` |
Splitting a string into consecutive counts? | 32,469,124 | 3 | 2015-09-09T00:22:44Z | 32,469,143 | 8 | 2015-09-09T00:26:15Z | [
"python",
"string",
"list-comprehension"
] | For example given string
```
"aaabbbbccdaeeee"
```
I want to say something like
```
3 a, 4 b, 2 c, 1 d, 1 a, 4 e
```
It is easy enough to do in Python with a brute force loop but I am wondering if there is a more Pythonic / cleaner one-liner type of approach.
My brute force:
```
while source!="":
leading = source[0]
c=0
while source!="" and source[0]==leading:
c+=1
source=source[1:]
print c, leading
``` | Use a [Counter](https://docs.python.org/2/library/collections.html#collections.Counter) for a count of each distinct letter in the string regardless of position:
```
>>> s="aaabbbbccdaeeee"
>>> from collections import Counter
>>> Counter(s)
Counter({'a': 4, 'b': 4, 'e': 4, 'c': 2, 'd': 1})
```
You can use [groupby](https://docs.python.org/2/library/itertools.html#itertools.groupby) if the position in the string has meaning:
```
from itertools import groupby
li=[]
for k, l in groupby(s):
li.append((k, len(list(l))))
print li
```
Prints:
```
[('a', 3), ('b', 4), ('c', 2), ('d', 1), ('a', 1), ('e', 4)]
```
Which can be reduce to a list comprehension:
```
[(k,len(list(l))) for k, l in groupby(s)]
```
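To produce the exact `3 a, 4 b, ...` string asked for in the question, the same `groupby` pairs can be joined (a small sketch):

```python
from itertools import groupby

s = "aaabbbbccdaeeee"
summary = ", ".join("%d %s" % (len(list(g)), k) for k, g in groupby(s))
print(summary)  # 3 a, 4 b, 2 c, 1 d, 1 a, 4 e
```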
You can even use a regex:
```
>>> [(m.group(0)[0], len(m.group(0))) for m in re.finditer(r'((\w)\2*)', s)]
[('a', 3), ('b', 4), ('c', 2), ('d', 1), ('a', 1), ('e', 4)]
``` |
why does python logging level in basicConfig have no effect? | 32,471,999 | 6 | 2015-09-09T06:05:59Z | 32,535,301 | 7 | 2015-09-12T04:55:11Z | [
"python",
"logging"
] | ```
import logging
# root logger
root = logging.getLogger() # root
ch = logging.StreamHandler()
ch.setLevel(logging.WARN)
formatter = logging.Formatter('[root] %(levelname)s - %(message)s')
ch.setFormatter(formatter)
root.addHandler(ch)
# logging as child
c = logging.getLogger('mod')
c.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('[mod] - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
c.addHandler(ch)
c.error('foo')
c.warning('foo')
c.info('foo')
c.debug('foo')
```
output:
```
[mod] - ERROR - foo
[root] ERROR - foo
[mod] - WARNING - foo
[root] WARNING - foo
[mod] - INFO - foo
[mod] - DEBUG - foo
```
It's OK. The level of the root handler is `WARN`, so `INFO` and `DEBUG` of root are not printed.
But when I use `basicConfig`:
```
import logging
# config root logger
logging.basicConfig(level=logging.WARN, format='[root] %(levelname)s - %(message)s')
# logging as child
c = logging.getLogger('mod')
c.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('[mod] - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
c.addHandler(ch)
c.error('foo')
c.warning('foo')
c.info('foo')
c.debug('foo')
```
output:
```
[mod] - ERROR - foo
[root] ERROR - foo
[mod] - WARNING - foo
[root] WARNING - foo
[mod] - INFO - foo
[root] INFO - foo
[mod] - DEBUG - foo
[root] DEBUG - foo
```
The level passed to `basicConfig` is `WARN`, so why can the `INFO` and `DEBUG` messages of root be printed?
And when I use `logging.info` directly, the level does take effect. | You are seeing those `[root]` info and debug messages because your call to `logging.basicConfig` creates a root **Handler** with a level of `NOTSET`. A handler with a level of `NOTSET` will output any message it receives (see [Handler.setLevel](https://docs.python.org/3.5/library/logging.html#logging.Handler.setLevel)).
```
>>> import logging
>>> logging.basicConfig(level=logging.WARN, format='[root] %(levelname)s - %(message)s')
>>> [handler.level == logging.NOTSET for handler in logging.getLogger().handlers]
[True]
```
This differs from your first example because in your first example you are creating a root handler with a level of `WARN`.
The `level=` parameter for `logging.basicConfig` is used to set the level of the root **Logger**, not any root **Handler**.
### Log message propagation
Log messages are propagated up to parent Loggers but the level of any parent Loggers is not considered. It is the level of any **Handlers** that decides what gets "outputted".
From the docs for [logging.Logger.propagate](https://docs.python.org/3.5/library/logging.html#logging.Logger.propagate):
> Messages are passed directly to the ancestor loggers' handlers - neither the level nor filters of the ancestor loggers in question are considered.
### What is Logger.level for then?
A Logger uses its level to decide whether to propagate a message to its own and any parent loggers' handlers.
If a Logger does not have a level set then it asks its ancestor Loggers for their level and uses that (see [`logging.Logger.setLevel`](https://docs.python.org/3.5/library/logging.html#logging.Logger.setLevel)).
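A minimal sketch of that lookup using `getEffectiveLevel` (the logger name here is illustrative):

```python
import logging

root = logging.getLogger()
root.setLevel(logging.WARN)

child = logging.getLogger("mod")                  # no level set yet (NOTSET)
print(child.getEffectiveLevel() == logging.WARN)  # True: inherited from root

child.setLevel(logging.DEBUG)
print(child.getEffectiveLevel() == logging.DEBUG) # True: its own level now wins
```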
So, the root logger's level is only relevant if you have not set the level on your child logger. |
How to check if a range is a part of another range in Python 3.x | 32,480,423 | 2 | 2015-09-09T13:07:19Z | 32,481,015 | 7 | 2015-09-09T13:33:25Z | [
"python",
"python-3.x",
"range"
] | How can I simply check if a range is a subrange of another ?
`range1 in range2` will not work as expected. | You can do it in `O(1)`, as follows:
```
def range_subset(range1, range2):
"""Whether range1 is a subset of range2."""
if not range1:
return True # empty range is subset of anything
if not range2:
return False # non-empty range can't be subset of empty range
if len(range1) > 1 and range1.step % range2.step:
return False # must have a single value or integer multiple step
return range1.start in range2 and range1[-1] in range2
```
In use:
```
>>> range_subset(range(0, 1), range(0, 4))
True
``` |
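The `O(1)` claim relies on Python 3's `range` computing integer membership arithmetically rather than by iteration, which is why even enormous ranges are cheap to test:

```python
big = range(0, 10**15, 2)
print(10**12 in big)      # True, answered instantly without iterating
print(10**12 + 1 in big)  # False: odd values never appear in a step-2 range
```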
how to skip the index in python for loop | 32,480,808 | 4 | 2015-09-09T13:23:58Z | 32,480,880 | 8 | 2015-09-09T13:26:30Z | [
"python",
"for-loop"
] | I have a list like this:
```
array=['for','loop','in','python']
for arr in array:
print arr
```
This will give me the output
```
for
loop
in
python
```
I want to print
```
in
python
```
How can I skip the first 2 indices in python? | Use [slicing](https://docs.python.org/2/tutorial/introduction.html#lists).
```
array = ['for','loop','in','python']
for arr in array[2:]:
print arr
```
When you do this, the starting index in the `for` loop becomes `2`. Thus the output would be:
```
in
python
```
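Note that `array[2:]` builds a new list; if the input is large, or is a plain iterator that does not support slicing, `itertools.islice` skips the leading items without copying (a hedged alternative):

```python
from itertools import islice

array = ['for', 'loop', 'in', 'python']
for arr in islice(array, 2, None):
    print(arr)  # prints "in" then "python"
```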
For more info on `slicing` read this: [Explain Python's slice notation](http://stackoverflow.com/questions/509211/explain-pythons-slice-notation) |
Why do these tests fail for this custom Flask session interface? | 32,483,063 | 8 | 2015-09-09T15:00:25Z | 32,597,959 | 7 | 2015-09-16T00:47:03Z | [
"python",
"session",
"flask",
"customization"
] | I am writing a hybrid single page web/PhoneGap application in Flask. Since cookies in a PhoneGap application are basically unavailable, I have implemented a custom [session interface](http://flask.pocoo.org/docs/0.10/api/#session-interface) that completely avoids cookies. It stores session data in the application database and passes the session ID explicitly in the HTTP request and response bodies.
I have created a [GitHub repository](https://github.com/jgonggrijp/session-testcase) with a reduced testcase. It's still a sizeable project in its own right, but the Readme should help you to quickly find your way. The repo includes seven tests that all succeed when using Flask's default cookie-based session interface and all fail with my custom session interface. The main problem appears to be that data are sometimes not retained on the session object, but this is mysterious because the session object inherits from Python's builtin `dict`, which shouldn't spontaneously forget data. In addition, the session interface is straightforward and doesn't seem to make any obvious mistakes compared to [Flask's example Redis session snippet](http://flask.pocoo.org/snippets/75/).
To make matters more frustrating, the custom session interface seems to work correctly in the actual application. Only the unit tests are failing. However, for this reason it is unsafe to assume that the session interface works correctly in all circumstances.
Help will be much appreciated.
**Edit:** Gist is not accepting the reduced testcase because it includes directories. I am now moving it to a full-blown GitHub repository. I'll update this post again when done.
**New edit:** moved the reduced testcase to a proper GitHub repo. The Readme still mentions "this Gist", sorry. | Your problems *mostly* come down to providing the session token in your test requests. If you don't provide the token the session is blank.
I assume your actual application is correctly sending the session token and thus appears to work.
It doesn't take much to fix up the test cases to pass correctly.
## Every request attempts to load a session based on a post param
In your session implementation:
```
def open_session(self, app, request):
s = Session()
if 't' in request.form:
....
return s
```
This means that every request that is not `POST` (or `PUT`) and doesn't have `t` sent will
have a blank session.
Whereas a cookies based implementation will always have the session token available
and will be able to load previous requests' sessions.
Here is one of your sample tests:
```
def test_authorize_captcha_expired(self):
with self.client as c:
with c.session_transaction() as s:
s['captcha-answer'] = u'one two three'.split()
s['captcha-expires'] = datetime.today() - timedelta(minutes=1)
self.assertEqual(c.post('/test', data={
'ca': 'one two three',
}).status_code, 400)
```
You have not supplied a `t` value for the post to `/test`. Thus it gets a blank
session which does not have a `captcha-expires` key and a `KeyError` is raised.
## Your session requires a 'token' key for it to be saved
In your session implementation:
```
def save_session(self, app, session, response):
if session.modified and 'token' in session:
...
# save session to database
...
```
Thus when you have:
```
with c.session_transaction() as s:
s['captcha-answer'] = u'one two three'.split()
s['captcha-expires'] = datetime.today() - timedelta(minutes=1)
```
No session actually gets written to the database. For any subsequent request to
use. Note that it really **does** need to be written to the database since `open_session` will attempt to load something from the database on every request.
To fix most of those cases you need to supply a 'token' when creating the session and a 't' with that token for any requests that use it.
Thus the sample test I used above would end up like:
```
def test_authorize_captcha_expired(self):
with self.client as c:
token = generate_key(SystemRandom())
with c.session_transaction() as s:
s['token'] = token
s['captcha-answer'] = u'one two three'.split()
s['captcha-expires'] = datetime.today() - timedelta(minutes=1)
self.assertEqual(c.post('/test', data={
'ca': 'one two three',
't': token
}).status_code, 400)
```
## You change the token when you respond with json
...but you are not using the new token when you make a subsequent request
```
def test_reply_to_reflection_passthrough(self):
with self.client as c:
token = 'abcdef'
...
response2 = c.post('/reflection/1/reply', data={
'p': 'test4',
'r': 'testmessage',
't': token,
}, ...
```
By here, the post to `/reflection/1/reply` has generated a new
token and saved it, thus the critical key `last-reply` is not in the
session identified by `abcdef`. If this were a cookies based session then `last-reply` would be available to the next request.
So to fix this test... use the new token
```
def test_reply_to_reflection_passthrough(self):
with self.client as c:
...
response2 = c.post('/reflection/1/reply', data={
...
token = session['token']
with c.session_transaction(method="POST", data={'t':token}) as s:
s['token'] = token
s['last-request'] = datetime.now() - timedelta(milliseconds=1001)
response3 = c.post('/reflection/1/reply', data={
...
```
## A redirect will lose the session token
In the test `test_bump`:
```
def test_bump(self):
response = self.client.post(
'/admin/tip/action/',
data = {'action': 'Bump', 'rowid': '1',},
follow_redirects=True )
self.assertIn(' tips have been bumped.', response.data)
```
The post to `/admin/tip/action` returns a redirect.
Here you are checking for the presence of a flash message. And flash messages
get stored in the session.
With a cookie based session the session id is sent again with the subsequent redirected request.
Since your session id is specified as a post value, it does not get sent again, so the session and the flash messages are lost.
The way to fix this is not to follow redirects but to check the session for data set by Flask's `flash` method.
```
def test_bump(self):
with self.client as c:
token = generate_key(SystemRandom())
with c.session_transaction() as s:
s['token'] = token
c.post('/admin/tip/action/',
data={'action': 'Bump', 'rowid': '1', 't': token})
with c.session_transaction(method="POST", data={'t': token}) as s:
self.assertIn(' tips have been bumped.', s['_flashes'][0][1])
```
### And that's all
I have sent a pull request with the changes as I have described above, you will find that the tests now pass for both the default flask session and your session implementation. |
How to use viridis in matplotlib 1.4 | 32,484,453 | 12 | 2015-09-09T16:08:21Z | 32,484,859 | 11 | 2015-09-09T16:29:38Z | [
"python",
"matplotlib",
"colormap"
] | I want to use the colormap "viridis" (<http://bids.github.io/colormap/>), and I won't be updating to the development version 1.5 quite yet. Thus, I have downloaded `colormaps.py` from <https://github.com/BIDS/colormap>. Unfortunately, I'm not able to make it work. This is what I do:
```
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import colormaps as cmaps
img=mpimg.imread('stinkbug.png')
lum_img = np.flipud(img[:,:,0])
plt.set_cmap(cmaps.viridis)
imgplot = plt.pcolormesh(lum_img)
```
This gives me a `ValueError`, the traceback ending with,
> ValueError: Colormap viridis is not recognized. Possible values are: Spectral, summer, coolwarm, ...
(And then the complete list of originally installed colormaps.)
Any thoughts on how to fix this issue? | Rather than using `set_cmap`, which requires a `matplotlib.colors.Colormap` instance, you can set the `cmap` directly in the `pcolormesh` call
(`cmaps.viridis` is a `matplotlib.colors.ListedColormap`)
```
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import colormaps as cmaps
img=mpimg.imread('stinkbug.png')
lum_img = np.flipud(img[:,:,0])
imgplot = plt.pcolormesh(lum_img, cmap=cmaps.viridis)
``` |
How to use viridis in matplotlib 1.4 | 32,484,453 | 12 | 2015-09-09T16:08:21Z | 32,484,915 | 9 | 2015-09-09T16:32:01Z | [
"python",
"matplotlib",
"colormap"
] | I want to use the colormap "viridis" (<http://bids.github.io/colormap/>), and I won't be updating to the development version 1.5 quite yet. Thus, I have downloaded `colormaps.py` from <https://github.com/BIDS/colormap>. Unfortunately, I'm not able to make it work. This is what I do:
```
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import colormaps as cmaps
img=mpimg.imread('stinkbug.png')
lum_img = np.flipud(img[:,:,0])
plt.set_cmap(cmaps.viridis)
imgplot = plt.pcolormesh(lum_img)
```
This gives me a `ValueError`, the traceback ending with,
> ValueError: Colormap viridis is not recognized. Possible values are: Spectral, summer, coolwarm, ...
(And then the complete list of originally installed colormaps.)
Any thoughts on how to fix this issue? | To set `viridis` as your colormap using `set_cmap`, you must register it first:
```
import colormaps as cmaps
plt.register_cmap(name='viridis', cmap=cmaps.viridis)
plt.set_cmap(cmaps.viridis)
img=mpimg.imread('stinkbug.png')
lum_img = np.flipud(img[:,:,0])
imgplot = plt.pcolormesh(lum_img)
``` |
Matplotlib and Numpy - Create a calendar heatmap | 32,485,907 | 2 | 2015-09-09T17:32:25Z | 32,492,179 | 8 | 2015-09-10T02:34:22Z | [
"python",
"python-2.7",
"numpy",
"matplotlib"
] | Is it possible to create a calendar heatmap without using pandas?
If so, can someone post a simple example?
I have dates like Aug-16 and a count value like 16 and I thought this would be a quick and easy way to show intensity of counts between days for a long period of time.
Thank you | It's certainly possible, but you'll need to jump through a few hoops.
First off, I'm going to assume you mean a calendar display that looks like a calendar, as opposed to a more linear format (a linear formatted "heatmap" is much easier than this).
The key is reshaping your arbitrary-length 1D series into an Nx7 2D array where each row is a week and columns are days. That's easy enough, but you also need to properly label months and days, which can get a touch verbose.
Here's an example. It doesn't even remotely try to handle crossing across year boundaries (e.g. Dec 2014 to Jan 2015, etc). However, hopefully it gets you started:
```
import datetime as dt
import matplotlib.pyplot as plt
import numpy as np
def main():
dates, data = generate_data()
fig, ax = plt.subplots(figsize=(6, 10))
calendar_heatmap(ax, dates, data)
plt.show()
def generate_data():
num = 100
data = np.random.randint(0, 20, num)
start = dt.datetime(2015, 3, 13)
dates = [start + dt.timedelta(days=i) for i in range(num)]
return dates, data
def calendar_array(dates, data):
i, j = zip(*[d.isocalendar()[1:] for d in dates])
i = np.array(i) - min(i)
j = np.array(j) - 1
ni = max(i) + 1
calendar = np.nan * np.zeros((ni, 7))
calendar[i, j] = data
return i, j, calendar
def calendar_heatmap(ax, dates, data):
i, j, calendar = calendar_array(dates, data)
im = ax.imshow(calendar, interpolation='none', cmap='summer')
label_days(ax, dates, i, j, calendar)
label_months(ax, dates, i, j, calendar)
ax.figure.colorbar(im)
def label_days(ax, dates, i, j, calendar):
ni, nj = calendar.shape
day_of_month = np.nan * np.zeros((ni, 7))
day_of_month[i, j] = [d.day for d in dates]
for (i, j), day in np.ndenumerate(day_of_month):
if np.isfinite(day):
ax.text(j, i, int(day), ha='center', va='center')
ax.set(xticks=np.arange(7),
xticklabels=['M', 'T', 'W', 'R', 'F', 'S', 'S'])
ax.xaxis.tick_top()
def label_months(ax, dates, i, j, calendar):
month_labels = np.array(['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul',
'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
months = np.array([d.month for d in dates])
uniq_months = sorted(set(months))
yticks = [i[months == m].mean() for m in uniq_months]
labels = [month_labels[m - 1] for m in uniq_months]
ax.set(yticks=yticks)
ax.set_yticklabels(labels, rotation=90)
main()
```
[](http://i.stack.imgur.com/Yya7U.png) |
Performance - searching a string in a text file - Python | 32,487,979 | 2 | 2015-09-09T19:47:12Z | 32,488,150 | 7 | 2015-09-09T19:59:35Z | [
"python",
"performance",
"list"
] | I have a set of dates:
```
dates1 = {'21/5/2015', '4/4/2015', '15/6/2015', '30/1/2015', '19/3/2015', '25/2/2015', '25/5/2015', '8/2/2015', '6/6/2015', '15/3/2015', '15/1/2015', '30/5/2015'}
```
the same dates appear in a text ('data' from now on). It's a pretty long text. I want to loop over the text and get the number of times each date appears in it, then I print the 5 dates with the most occurrences.
What I have now is this:
```
def dates(data, dates1):
lines = data.split("\n")
dict_days = {}
for day in dates1:
count = 0
for line in lines:
if day in line:
count += 1
dict_days[day] = count
newA = heapq.nlargest(5, dict_days, key=dict_days.get)
print(newA)
```
I split the text into lines and create a dict; for every date in the set, the code looks for it in every line and, if found, adds 1 to the count.
This works fine, BUT it's taking a looong time to run this method.
So what I am asking is if someone knows a more efficient way to do exactly the same thing.
Any help will be really appreciated.
**Edit**
**I will try every single answer and let you know, thanks in advance** | Loop over the lines once, extracting any date; check whether the date is in the set and, if so, increment its count in a [Counter](https://docs.python.org/2/library/collections.html#collections.Counter) dict; at the end, call `Counter.most_common` to get the 5 most common dates:
```
dates1 = {'21/5/2015', '4/4/2015', '15/6/2015', '30/1/2015', '19/3/2015', '25/2/2015', '25/5/2015', '8/2/2015', '6/6/2015', '15/3/2015', '15/1/2015', '30/5/2015'}
from collections import Counter
import re
def dates(data, dates1):
lines = data.split("\n")
dict_days = Counter()
r = re.compile("\d+/\d+/\d+")
for line in lines:
match = r.search(line)
if match:
dte = match.group()
if dte in dates1:
dict_days[dte] += 1
return dict_days.most_common(5)
```
This does a single pass over the list of lines as opposed to one pass for every date in dates1.
For 100k lines with the date string at the end of a string with 200+ chars:
```
In [9]: from random import choice
In [10]: dates1 = {'21/5/2015', '4/4/2015', '15/6/2015', '30/1/2015', '19/3/2015', '25/2/2015', '25/5/2015', '8/2/2015', '6/6/2015', '15/3/2015', '15/1/2015', '30/5/2015'}
In [11]: dtes = list(dates1)
In [12]: s = "the same dates appear in a text ('data' from now on). It's a pretty long text. I want to loop over the text and get the number of times each date appear in the text, then i print the 5 dates with more occurances. "
In [13]: data = "\n".join([s+ choice(dtes) for _ in range(100000)])
In [14]: timeit dates(data,dates1)
1 loops, best of 3: 662 ms per loop
```
If more than one date can appear per line you can use findall:
```
def dates(data, dates1):
lines = data.split("\n")
r = re.compile("\d+/\d+/\d+")
dict_days = Counter(dt for line in lines
for dt in r.findall(line) if dt in dates1)
return dict_days.most_common(5)
```
If data is not actually a file like object and is a single string, just search the string itself:
```
def dates(data, dates1):
r = re.compile("\d+/\d+/\d+")
dict_days = Counter((dt for dt in r.findall(data) if dt in dates1))
return dict_days.most_common(5)
```
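If each date string can only ever match itself (i.e. no date in the set can occur inside a longer date in the text, an assumption that holds for the sample set), plain `str.count` avoids regex entirely; a sketch:

```python
from collections import Counter

def dates_count(data, dates1):
    # Assumes no date in the set can match inside a longer date in the text
    return Counter({d: data.count(d) for d in dates1}).most_common(5)

print(dates_count("x 4/4/2015 y 4/4/2015 z 21/5/2015",
                  {'4/4/2015', '21/5/2015'}))
# [('4/4/2015', 2), ('21/5/2015', 1)]
```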
Compiling a single regex from the dates themselves seems to be the fastest approach on the test data; splitting each line on whitespace is pretty close to the search implementation:
```
def dates_split(data, dates1):
lines = data.split("\n")
dict_days = Counter(dt for line in lines
for dt in line.split() if dt in dates1)
return dict_days.most_common(5)
def dates_comp_date1(data, dates1):
lines = data.split("\n")
r = re.compile("|".join(dates1))
dict_days = Counter(dt for line in lines for dt in r.findall(line))
return dict_days.most_common(5)
```
Using the functions above:
```
In [63]: timeit dates(data, dates1)
1 loops, best of 3: 640 ms per loop
In [64]: timeit dates_split(data, dates1)
1 loops, best of 3: 535 ms per loop
In [65]: timeit dates_comp_date1(data, dates1)
1 loops, best of 3: 368 ms per loop
``` |
Convert a 64 bit integer into 8 separate 1 byte integers in python | 32,490,081 | 12 | 2015-09-09T22:23:34Z | 32,490,254 | 8 | 2015-09-09T22:38:22Z | [
"python",
"bitmask"
] | In Python, I have been given a 64-bit integer. This integer was created by taking several different 8-bit integers and mashing them together into one giant 64-bit integer. It is my job to separate them again.
For example:
```
Source number: 2592701575664680448
Binary (64 bits): 0010001111111011001000000101100010101010000101101011111000000000
int 1: 00100011 (35)
int 2: 11111011 (251)
int 3: 00100000 (32)
int 4: 01011000 (88)
int 5: 10101010 (170)
int 6: 00010110 (22)
int 7: 10111110 (190)
int 8: 00000000 (0)
```
So what I would like to do is take my source number `2592701575664680448` and return an array of length 8, where the ints in the array are the ints listed above.
I was going to use `struct`, but to be perfectly honest, reading the [documentation](https://docs.python.org/2/library/struct.html) hasn't made it quite clear exactly how I would accomplish that. | # Solution
Solution without converting the number to a string:
```
x = 0b0010001111111011001000000101100010101010000101101011111000000000
numbers = list((x >> i) & 0xFF for i in range(0,64,8))
print(numbers) # [0, 190, 22, 170, 88, 32, 251, 35]
print(list(reversed(numbers))) # [35, 251, 32, 88, 170, 22, 190, 0]
```
# Explanation
Here I used a generator expression inside `list(...)`, looping over `i` in increments of 8. So `i` takes the values `0, 8, 16, 24, 32, 40, 48, 56`.
Every time, the bitshift operator `>>` temporarily shifts the number `x` down by `i` bits. This is equivalent to an integer division by `2**i` (i.e. by 256 for each byte position).
So the resulting number is:
```
i = 0: 0010001111111011001000000101100010101010000101101011111000000000
i = 8: 00100011111110110010000001011000101010100001011010111110
i = 16: 001000111111101100100000010110001010101000010110
i = 24: 0010001111111011001000000101100010101010
i = 32: 00100011111110110010000001011000
i = 40: 001000111111101100100000
i = 48: 0010001111111011
i = 56: 00100011
```
By using `& 0xFF`, I select the last 8 bits of this number. Example:
```
x >> 48: 001000111111101100100000
0xff: 11111111
(x >> 48) & 0xff: 000000000000000000100000
```
Since the leading zeros do not matter, you have the desired number.
The result is converted to a list and printed in normal and reversed order (like OP wanted it).
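For reference, the `struct` variant timed in the next section can be written explicitly with big-endian (`'>'`) format strings, so the most significant byte comes out first regardless of platform (a sketch):

```python
import struct

x = 0b0010001111111011001000000101100010101010000101101011111000000000
# Pack into 8 raw big-endian bytes, then unpack as 8 unsigned chars
msb_first = struct.unpack('>8B', struct.pack('>Q', x))
print(list(msb_first))  # [35, 251, 32, 88, 170, 22, 190, 0]
```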
# Performance
I compared the timing of this result to the other solutions proposed in this thread:
```
In: timeit list(reversed([(x >> i) & 0xFF for i in range(0,64,8)]))
100000 loops, best of 3: 13.9 µs per loop
In: timeit [(x >> (i * 8)) & 0xFF for i in range(7, -1, -1)]
100000 loops, best of 3: 11.1 µs per loop
In: timeit [(x >> i) & 0xFF for i in range(63,-1,-8)]
100000 loops, best of 3: 10.2 µs per loop
In: timeit reversed(struct.unpack('8B', struct.pack('Q', x)))
100000 loops, best of 3: 3.22 µs per loop
In: timeit reversed(struct.pack('Q', x))
100000 loops, best of 3: 2.07 µs per loop
```
Result: my solution is *not* the fastest!
Currently, using `struct` directly (as proposed by Mark Ransom) seems to be the fastest snippet. |
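On Python 3 there is also `int.to_bytes`, which makes the byte order explicit (a sketch I'm adding for comparison; it was not part of the timings above):

```python
x = 0b0010001111111011001000000101100010101010000101101011111000000000

# Most-significant byte first, the order the ints were listed in
big = list(x.to_bytes(8, 'big'))      # [35, 251, 32, 88, 170, 22, 190, 0]

# Least-significant byte first, matching the shift loop above
little = list(x.to_bytes(8, 'little'))
```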
Getting today's date in YYYY-MM-DD in Python? | 32,490,629 | 6 | 2015-09-09T23:20:25Z | 32,490,661 | 19 | 2015-09-09T23:23:57Z | [
"python",
"python-2.7",
"datetime",
"python-datetime"
] | I'm using:
```
str(datetime.datetime.today()).split()[0]
```
to return today's date in the form *YYYY-MM-DD*.
Is there a less crude way to achieve this? | You can use [strftime](http://strftime.org/):
```
datetime.datetime.today().strftime('%Y-%m-%d')
``` |
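An equivalent form without a format string (my addition, not part of the original answer) is `date.isoformat()`, which is defined to produce exactly *YYYY-MM-DD*:

```python
import datetime

# isoformat() on a date object always yields the YYYY-MM-DD layout
today = datetime.date.today().isoformat()
```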
Selenium Python Headless Webdriver (PhantomJS) Not Working | 32,491,274 | 6 | 2015-09-10T00:42:44Z | 33,037,330 | 10 | 2015-10-09T11:38:19Z | [
"python",
"linux",
"selenium",
"phantomjs",
"splinter"
] | So I'm having trouble getting selenium to work with a headless driver, specifically PhantomJS. I'm trying to get it to work on an Ubuntu webserver (Ubuntu 14.04.2 LTS).
Running the following commands from a python interpreter (Python 2.7.6) gives:
```
from selenium import webdriver
driver = webdriver.PhantomJS()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/phantomjs/webdriver.py", line 51, in __init__
self.service.start()
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/phantomjs/service.py", line 76, in start
raise WebDriverException("Unable to start phantomjs with ghostdriver: %s" % e)
selenium.common.exceptions.WebDriverException: Message: Unable to start phantomjs with ghostdriver: [Errno 2] No such file or directory
```
I've also tried:
```
driver = webdriver.PhantomJS(executable_path="/usr/local/lib/python2.7/dist-packages/selenium/webdriver/phantomjs/")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/phantomjs/webdriver.py", line 51, in __init__
self.service.start()
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/phantomjs/service.py", line 76, in start
raise WebDriverException("Unable to start phantomjs with ghostdriver: %s" % e)
selenium.common.exceptions.WebDriverException: Message: Unable to start phantomjs with ghostdriver: [Errno 13] Permission denied
```
I also added it to the python path:
```
import sys
sys.path.append("/usr/local/lib/python2.7/dist-packages/selenium/webdriver/phantomjs/")
```
I am currently logged in as root. Permissions for the phantomjs directory are:
```
drwxr-sr-x 2 root staff 4096 Sep 9 06:58 phantomjs
```
and for phantomjs/webdriver.py:
```
-rw-r--r-- 1 root root 2985 Sep 9 06:58 webdriver.py
```
I've confirmed selenium is installed and up-to-date (pip install selenium --upgrade). It is installed at:
```
/usr/local/lib/python2.7/dist-packages/selenium/webdriver/phantomjs/
print selenium.__version__
2.47.1
```
I've looked at:
* <http://superuser.com/questions/674322/python-selenium-phantomjs-unable-to-start-phantomjs-with-ghostdriver> - Windows specific but no luck following similar suggestions.
* [Using Selenium in the background](http://stackoverflow.com/questions/16389938/using-selenium-in-the-background) - answer suggests PhatomJS with full path.
* <https://code.google.com/p/selenium/issues/detail?id=6736> - I uninstalled selenium and installed v2.37, with no luck. Reinstalled latest version and still no luck.
* Plus some other links, most seem to recommend specifying the executable\_path.
I've been testing my program on a locally hosted server (on OSX), using chromedriver. I'm actually using Splinter (<https://splinter.readthedocs.org/en/latest/#headless-drivers>) for that, and have tried the other headless drivers (django and zope.testbrowser) but have had similar issues.
I'm open to any suggestions, I don't mind changing driver if required.
Thanks in advance for any help. | I had the same problem as you with the same errors. I have tried to install it on openSuse Server. I ended up installing PhantomJS form source -unfortunately without any success. The way that worked for me was installing Phantomjs via npm
```
sudo npm install -g phantomjs
``` |
Parse an XML string in Python | 32,494,318 | 4 | 2015-09-10T06:06:50Z | 32,494,514 | 11 | 2015-09-10T06:19:13Z | [
"python",
"xml"
] | I have this XML string result and i need to get the values in between the tags. But the data type of the XML is string.
```
final = " <Table><Claimable>false</Claimable><MinorRev>80601</MinorRev><Operation>530600 ION MILL</Operation><HTNum>162</HTNum><WaferEC>80318</WaferEC><HolderType>HACARR</HolderType><Job>167187008</Job></Table>
<Table><Claimable>false</Claimable><MinorRev>71115</MinorRev><Operation>530600 ION MILL</Operation><Experiment>6794</Experiment><HTNum>162</HTNum><WaferEC>71105</WaferEC><HolderType>HACARR</HolderType><Job>16799006</Job></Table> "
```
This is my code sample
```
root = ET.fromstring(final)
print root
```
And this is the error i am receiving :
```
xml.parsers.expat.ExpatError: The markup in the document following the root element must be well-formed.
```
I've tried using `ET.fromstring`, but with no luck. | Your XML is malformed. It has to have exactly one top-level element. [From Wikipedia](https://en.wikipedia.org/wiki/Root_element):
> Each XML document has exactly one single root element. It encloses all
> the other elements and is therefore the sole parent element to all the
> other elements. ROOT elements are also called PARENT elements.
Try to enclose it within additional tag (e.g. `Tables`) and than parse with ET:
```
xmlData = '''<Tables>
<Table><Claimable>false</Claimable><MinorRev>80601</MinorRev><Operation>530600 ION MILL</Operation><HTNum>162</HTNum><WaferEC>80318</WaferEC><HolderType>HACARR</HolderType><Job>167187008</Job></Table>
<Table><Claimable>false</Claimable><MinorRev>71115</MinorRev><Operation>530600 ION MILL</Operation><Experiment>6794</Experiment><HTNum>162</HTNum><WaferEC>71105</WaferEC><HolderType>HACARR</HolderType><Job>16799006</Job></Table>
</Tables>
'''
import xml.etree.ElementTree as ET
xml = ET.fromstring(xmlData)
for table in xml.getiterator('Table'):
for child in table:
print child.tag, child.text
```
Since Python 2.7 `getiterator('Table')` should be replaced with [`iter('Table')`](https://docs.python.org/2/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element.iter):
```
for table in xml.iter('Table'):
for child in table:
print child.tag, child.text
```
This produces:
```
Claimable false
MinorRev 80601
Operation 530600 ION MILL
HTNum 162
WaferEC 80318
HolderType HACARR
Job 167187008
Claimable false
MinorRev 71115
Operation 530600 ION MILL
Experiment 6794
HTNum 162
WaferEC 71105
HolderType HACARR
Job 16799006
``` |
Python string converts magically to tuple. Why? | 32,499,871 | 3 | 2015-09-10T10:51:15Z | 32,499,912 | 14 | 2015-09-10T10:53:10Z | [
"python",
"string",
"casting",
"tuples"
] | I've got a dict in which I load some info, among others a name which is a plain string. But somehow, when I assign it to a key in the dict it gets converted to a tuple, and I have no idea why.
Here's some of my code:
```
sentTo = str(sentTo)
print type(sentTo), sentTo
ticketJson['sentTo'] = sentTo,
print type(ticketJson['sentTo']), ticketJson['sentTo']
```
which outputs the following on my terminal:
```
<type 'str'> Pete Chasin
<type 'tuple'> ('Pete Chasin',)
```
Why does assigning it to a dict convert it to a tuple? | You told Python to create a tuple containing a string:
```
ticketJson['sentTo'] = sentTo,
# ^
```
It is the comma that defines a tuple. Parentheses are only needed to disambiguate a tuple from other uses of a comma, such as in a function call.
From the [*Parenthesized forms* section](https://docs.python.org/2/reference/expressions.html#parenthesized-forms):
> Note that tuples are not formed by the parentheses, but rather by use of the comma operator. The exception is the empty tuple, for which parentheses *are* required - allowing unparenthesized "nothing" in expressions would cause ambiguities and allow common typos to pass uncaught.
and from [*Expression lists*](https://docs.python.org/2/reference/expressions.html#expression-lists):
> An expression list containing at least one comma yields a tuple. The length of the tuple is the number of expressions in the list. The expressions are evaluated from left to right. |
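A quick demonstration of that rule:

```python
sent_to = 'Pete Chasin'

as_tuple = sent_to,      # the trailing comma builds a 1-tuple
in_parens = (sent_to)    # parentheses alone do NOT - this is still a str
```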
Boto3/S3: Renaming an object using copy_object | 32,501,995 | 15 | 2015-09-10T12:31:12Z | 32,502,197 | 10 | 2015-09-10T12:40:53Z | [
"python",
"amazon-web-services",
"amazon-s3",
"boto3"
] | I'm trying to rename a file in my s3 bucket using python boto3, I couldn't clearly understand the arguments. can someone help me here?
What I'm planning is to copy the object to a new object, and then delete the actual object.
I found similar questions here, but I need a solution using boto3. | You cannot rename objects in S3, so as you indicated, you need to copy it to a new name and then deleted the old one:
```
client.copy_object(Bucket="BucketName", CopySource="BucketName/OriginalName", Key="NewName")
client.delete_object(Bucket="BucketName", Key="OriginalName")
``` |
Boto3/S3: Renaming an object using copy_object | 32,501,995 | 15 | 2015-09-10T12:31:12Z | 32,504,096 | 18 | 2015-09-10T14:05:44Z | [
"python",
"amazon-web-services",
"amazon-s3",
"boto3"
] | I'm trying to rename a file in my s3 bucket using python boto3, I couldn't clearly understand the arguments. can someone help me here?
What I'm planning is to copy the object to a new object, and then delete the actual object.
I found similar questions here, but I need a solution using boto3. | I found another solution
```
s3 = boto3.resource('s3')
s3.Object('my_bucket','my_file_new').copy_from(CopySource='my_bucket/my_file_old')
s3.Object('my_bucket','my_file_old').delete()
``` |
hashing different tuples in python give identical result | 32,504,977 | 6 | 2015-09-10T14:41:35Z | 32,505,139 | 8 | 2015-09-10T14:49:21Z | [
"python",
"hash",
"tuples"
] | I'm working with sets of integer matrices, and I thought representing them as tuples made sense, as they are hashable. However the hash() function gave me strange results for tuples:
```
hash(((1, -1, 0), (1, 0, 0), (1, 0, -1)))
Out[147]: -697649482279922733
hash(((1, 0, -1), (1, 0, 0), (1, -1, 0)))
Out[148]: -697649482279922733
```
As you can see, these two different tuples have the same hash value. Note that they are actually pretty similar (exchange of the first and last subtuples), however I couldn't find a more minimal example: `((0,1),(0,0))` and `((0,0),(0,1))` have different hash values for example.
Any clue of what's going on? I can't believe that it's just incredibly bad luck... Now that I have found where the problem is I could bypass it easily, but I thought it was worth mentioning here anyway. | The hash of a tuple is based on the hashes of the content using the following formula (from the [`tuplehash()` function](https://hg.python.org/cpython/file/v2.7.10/Objects/tupleobject.c#l341)):
```
long mult = 1000003L;
x = 0x345678L;
p = v->ob_item;
while (--len >= 0) {
y = PyObject_Hash(*p++);
if (y == -1)
return -1;
x = (x ^ y) * mult;
/* the cast might truncate len; that doesn't change hash stability */
mult += (long)(82520L + len + len);
}
x += 97531L;
if (x == -1)
x = -2;
return x;
```
As it happens, that formula produces the exact same output for `(1, 0, -1)` and `(1, -1, 0)`:
```
>>> hash((1, -1, 0))
-2528505496374624146
>>> hash((1, 0, -1))
-2528505496374624146
```
because the hashes for the 3 contained integers are `1`, `0` and `-2`:
```
>>> hash(1)
1
>>> hash(0)
0
>>> hash(-1)
-2
```
and swapping the `0` and the `-2` has no actual influence on the outcome.
So the hashes for the 3 contained tuples don't change between the two examples, so the final hash doesn't change either.
This is just a coincidence; in practice this doesn't happen all *that* often, and dictionaries and sets can already handle collisions just fine. |
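To see why the swap cancels out, here is a pure-Python transcription of that loop (my own sketch; it omits the C version's fixed-width overflow wrap, which does not affect the equality being demonstrated). The element hashes involved are `1`, `0` and `-2`, and for any odd `v`, `v ^ -2 == -v`, so XOR-ing with `-2` commutes with the odd multiplier:

```python
def tuplehash_sketch(item_hashes):
    # Transcription of CPython 2.7's tuplehash() loop, without the
    # fixed-width integer wraparound of the C code
    mult = 1000003
    x = 0x345678
    n = len(item_hashes)
    for i, y in enumerate(item_hashes):
        x = (x ^ y) * mult
        mult += 82520 + 2 * (n - 1 - i)   # C code: mult += 82520 + len + len
    return x + 97531

# hash(1) == 1, hash(0) == 0, hash(-1) == -2 in CPython, so these
# correspond to the element hashes of (1, 0, -1) and (1, -1, 0)
collision = tuplehash_sketch([1, 0, -2]) == tuplehash_sketch([1, -2, 0])
```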
Python regex '\s' does not match unicode BOM (U+FEFF) | 32,506,708 | 8 | 2015-09-10T16:03:28Z | 32,507,020 | 12 | 2015-09-10T16:19:52Z | [
"python",
"regex",
"unicode"
] | The Python `re` module's [documentation](https://docs.python.org/2/library/re.html) says that when the `re.UNICODE` flag is set, `'\s'` will match:
> whatever is classified as space in the Unicode character properties database.
As far as I can tell, the BOM (U+FEFF) is [classified as a space](https://en.wikipedia.org/wiki/Unicode_character_property#Whitespace).
However:
```
re.match(u'\s', u'\ufeff', re.UNICODE)
```
evaluates to `None`.
Is this a bug in Python or am I missing something? | U+FEFF is not a whitespace character according to the Unicode character database.
Wikipedia only lists it as a "related character". These are similar to whitespace characters but don't carry the `WSpace` (White_Space) property in the database.
None of those characters are matched by `\s`. |
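This can be checked directly against the `unicodedata` module (my own sketch): U+FEFF is in general category `Cf` (format), not `Zs` (space separator), while a character that does carry the White_Space property, such as U+00A0 NO-BREAK SPACE, is matched:

```python
import re
import unicodedata

bom_category = unicodedata.category('\ufeff')        # 'Cf' - a format character
bom_match = re.match(r'\s', '\ufeff', re.UNICODE)    # no match
nbsp_match = re.match(r'\s', '\xa0', re.UNICODE)     # NO-BREAK SPACE matches
```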
Standard solution for supporting Python 2 and Python 3 | 32,507,715 | 6 | 2015-09-10T16:58:41Z | 32,507,890 | 8 | 2015-09-10T17:08:49Z | [
"python",
"python-3.x",
"import",
"module",
"compatibility"
] | I'm trying to write a forward compatible program and I was wondering what the "best" way to handle the case where you need different imports.
In my specific case, I am using `ConfigParser.SafeConfigParser()` from Python2 which becomes `configparser.ConfigParser()` in Python3.
So far I have made it work either by using a try-except on the import or by using a conditional on the version of Python (using `sys`). Both work, but I was wondering if there was a recommended solution (maybe one I haven't tried yet).
ETA:
Thanks everyone. I used `six.moves` with no issues. | Use [six](https://pypi.python.org/pypi/six)! It's a python compatibility module that irons out the differences between python3 and python2. The documentation [available here](http://pythonhosted.org/six/#module-six.moves) will help you with this problem as well as any other issues you're having..
Specifically for your case you can just
```
from six.moves import configparser
import six
if six.PY2:
ConfigParser = configparser.SafeConfigParser
else:
ConfigParser = configparser.ConfigParser
```
and you'll be good to go. |
Iterate over the lines of two files simultaneously | 32,509,173 | 3 | 2015-09-10T18:25:58Z | 32,509,411 | 10 | 2015-09-10T18:40:57Z | [
"python",
"python-2.x"
I am trying to concatenate specific lines from two files. That is, I want to append something from line 2 of file2 onto line 2 of file1, then from line 6 of file2 onto line 6 of file1, and so on. Is there a way to iterate through these two files simultaneously to do this? (It might be helpful to know that the input files are about 15GB each.)
Here is a simplified example:
File 1:
```
Ignore
This is a
Ignore
Ignore
Ignore
This is also a
Ignore
Ignore
```
File 2:
```
Ignore
sentence
Ignore
Ignore
Ignore
sentence
Ignore
Ignore
```
Output file:
```
Ignore
This is a sentence
Ignore
Ignore
Ignore
This is also a sentence
Ignore
Ignore
``` | **Python3**:
```
with open('bigfile_1') as bf1:
with open('bigfile_2') as bf2:
for line1, line2 in zip(bf1, bf2):
process(line1, line2)
```
Importantly, bf1 and bf2 will not read the entire file in at once. They are iterators which know how to produce one line at a time.
[`zip()`](https://docs.python.org/3/library/functions.html#zip) works fine with iterators and will produce an interator itself, in this case pairs of lines for you to process.
Using `with` ensures the files will be closed afterwards.
**Python 2.x**
```
import itertools
with open('bigfile_1') as bf1:
with open('bigfile_2') as bf2:
for line1, line2 in itertools.izip(bf1, bf2):
process(line1, line2)
```
Python 2.x can't use zip the same way - it'll produce a list instead of an iterable, eating all of your system memory with those 15GB files. We need to use a special iterable version of zip. |
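Applied to the OP's sample data, the `process` step could look like this (a sketch with in-memory lists standing in for the file objects; the rule of joining every line that isn't the literal `Ignore` is my assumption based on the example):

```python
file1 = ["Ignore", "This is a", "Ignore", "Ignore",
         "Ignore", "This is also a", "Ignore", "Ignore"]
file2 = ["Ignore", "sentence", "Ignore", "Ignore",
         "Ignore", "sentence", "Ignore", "Ignore"]

# zip() pairs line1 with line2 one pair at a time, exactly as with
# the real file iterators above
merged = [a if a == "Ignore" else a + " " + b
          for a, b in zip(file1, file2)]
```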
Sum all values of a counter in Python | 32,511,444 | 5 | 2015-09-10T20:55:57Z | 32,511,472 | 9 | 2015-09-10T20:58:06Z | [
"python",
"python-2.7",
"counter"
] | I have a counter from the `collections` module. What is the best way of summing all of the counts?
For example, I have:
```
my_counter = Counter({'a': 2, 'b': 2, 'c': 2, 'd': 1})
```
and would like to get the value `7` returned. As far as I can tell, the function `sum` is for adding multiple counters together. | Something like this will do
```
sum(my_counter.itervalues())
```
This way you don't create any intermediate data structures, just get the sum lazily evaluated. |
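Note that `itervalues()` exists only on Python 2; on Python 3 the equivalent lazy behaviour comes from `values()`, which returns a view (my sketch of the Python 3 form):

```python
from collections import Counter

my_counter = Counter({'a': 2, 'b': 2, 'c': 2, 'd': 1})
total = sum(my_counter.values())  # values() is a view, not a copied list
```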
Generator as function argument | 32,521,140 | 67 | 2015-09-11T10:21:14Z | 32,521,643 | 57 | 2015-09-11T10:47:18Z | [
"python",
"python-2.7",
"syntax",
"generator-expression"
] | Can anyone explain why passing a generator as the only positional argument to a function seems to have special rules?
If we have:
```
>>> def f(*args):
>>> print "Success!"
>>> print args
```
1. This works, as expected.
```
>>> f(1, *[2])
Success!
(1, 2)
```
2. This does not work, as expected.
```
>>> f(*[2], 1)
File "<stdin>", line 1
SyntaxError: only named arguments may follow *expression
```
3. This works, as expected
```
>>> f(1 for x in [1], *[2])
Success!
(generator object <genexpr> at 0x7effe06bdcd0>, 2)
```
4. This works, but I don't understand why. Shouldn't it fail in the same way as 2)
```
>>> f(*[2], 1 for x in [1])
Success!
(generator object <genexpr> at 0x7effe06bdcd0>, 2)
``` | **Both 3. and 4. *should* be syntax errors on all Python versions.** However you've found a bug that affects Python versions 2.5 - 3.4, and which was subsequently [posted to the Python issue tracker](https://bugs.python.org/issue25070). Because of the bug, an unparenthesized generator expression was accepted as an argument to a function if it was accompanied only by `*args` and/or `**kwargs`. While Python 2.6+ allowed both cases 3. and 4., Python 2.5 allowed only case 3. - yet both of them were against the [documented grammar](https://docs.python.org/2.5/ref/calls.html):
```
call ::= primary "(" [argument_list [","]
| expression genexpr_for] ")"
```
i.e. the documentation says a function call comprises of `primary` (the expression that evaluates to a callable), followed by, in parentheses, *either* an argument list *or* just an unparenthesized generator expression;
and within the argument list, all generator expressions must be in parentheses.
---
This bug (though it seems it had not been known), had been fixed in Python 3.5 prereleases. In Python 3.5 parentheses are always required around a generator expression, unless it is the only argument to the function:
```
Python 3.5.0a4+ (default:a3f2b171b765, May 19 2015, 16:14:41)
[GCC 4.9.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> f(1 for i in [42], *a)
File "<stdin>", line 1
SyntaxError: Generator expression must be parenthesized if not sole argument
```
This is now documented in the [What's New in Python 3.5](https://docs.python.org/3.6/whatsnew/3.5.html#changes-in-python-behavior), thanks to DeTeReR spotting this bug.
---
## Analysis of the bug
There was a change made to Python 2.6 which [allowed the use of keyword arguments *after* `*args`](https://docs.python.org/2/whatsnew/2.6.html#other-language-changes):
> It's also become legal to provide keyword arguments after a \*args
> argument to a function call.
>
> ```
> >>> def f(*args, **kw):
> ... print args, kw
> ...
> >>> f(1,2,3, *(4,5,6), keyword=13)
> (1, 2, 3, 4, 5, 6) {'keyword': 13}
> ```
>
> Previously this would have been a syntax error. (Contributed by Amaury
> Forgeot d'Arc; issue 3473.)
---
However, the Python 2.6 [grammar](https://github.com/python/cpython/blob/2.6/Grammar/Grammar#L133) does not make any distinction between keyword arguments, positional arguments, or bare generator expressions - they are all of type `argument` to the parser.
As per Python rules, a generator expression must be parenthesized if it is not the sole argument to the function. This is validated in the [`Python/ast.c`](https://github.com/python/cpython/blob/2.6/Python/ast.c#L1906):
```
for (i = 0; i < NCH(n); i++) {
node *ch = CHILD(n, i);
if (TYPE(ch) == argument) {
if (NCH(ch) == 1)
nargs++;
else if (TYPE(CHILD(ch, 1)) == gen_for)
ngens++;
else
nkeywords++;
}
}
if (ngens > 1 || (ngens && (nargs || nkeywords))) {
ast_error(n, "Generator expression must be parenthesized "
"if not sole argument");
return NULL;
}
```
However this function does *not* consider the `*args` at all - it specifically only looks for ordinary positional arguments and keyword arguments.
Further down in the same function, there is an error message generated for [non-keyword arg after keyword arg](https://github.com/python/cpython/blob/2.6/Python/ast.c#L1931):
```
if (TYPE(ch) == argument) {
expr_ty e;
if (NCH(ch) == 1) {
if (nkeywords) {
ast_error(CHILD(ch, 0),
"non-keyword arg after keyword arg");
return NULL;
}
...
```
But this again applies to arguments that are *not* unparenthesized generator expressions as [evidenced by the `else if` statement](https://github.com/python/cpython/blob/2.6/Python/ast.c#L1945):
```
else if (TYPE(CHILD(ch, 1)) == gen_for) {
e = ast_for_genexp(c, ch);
if (!e)
return NULL;
asdl_seq_SET(args, nargs++, e);
}
```
Thus an unparenthesized generator expression was allowed to slip pass.
---
Now in Python 3.5 one can use the `*args` anywhere in a function call, so
the [Grammar](https://github.com/python/cpython/blob/3.5/Grammar/Grammar#L126) was changed to accommodate for this:
```
arglist: argument (',' argument)* [',']
```
and
```
argument: ( test [comp_for] |
test '=' test |
'**' test |
'*' test )
```
and the [`for` loop was changed](https://github.com/python/cpython/blob/3.5/Python/ast.c#L2628) to
```
for (i = 0; i < NCH(n); i++) {
node *ch = CHILD(n, i);
if (TYPE(ch) == argument) {
if (NCH(ch) == 1)
nargs++;
else if (TYPE(CHILD(ch, 1)) == comp_for)
ngens++;
else if (TYPE(CHILD(ch, 0)) == STAR)
nargs++;
else
/* TYPE(CHILD(ch, 0)) == DOUBLESTAR or keyword argument */
nkeywords++;
}
}
```
Thus fixing the bug.
However the inadvertent change is that the valid looking constructions
```
func(i for i in [42], *args)
```
and
```
func(i for i in [42], **kwargs)
```
where an unparenthesized generator precedes `*args` or `**kwargs` now stopped working.
---
To locate this bug, I tried various Python versions. In 2.5 you'd get `SyntaxError`:
```
Python 2.5.5 (r255:77872, Nov 28 2010, 16:43:48)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> f(*[1], 2 for x in [2])
File "<stdin>", line 1
f(*[1], 2 for x in [2])
```
And this was fixed before some prerelease of Python 3.5:
```
Python 3.5.0a4+ (default:a3f2b171b765, May 19 2015, 16:14:41)
[GCC 4.9.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> f(*[1], 2 for x in [2])
File "<stdin>", line 1
SyntaxError: Generator expression must be parenthesized if not sole argument
```
However, with the generator expression parenthesized, it works in Python 3.5 but not in Python 3.4:
```
f(*[1], (2 for x in [2]))
```
And this is the clue. In Python 3.5 the `*splatting` is generalized; you can use it anywhere in a function call:
```
>>> print(*range(5), 42)
0 1 2 3 4 42
```
So the actual bug (a generator expression working with `*star` without parentheses) *was* indeed fixed in Python 3.5, and it could be traced to what changed between Python 3.4 and 3.5. |
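On a current Python 3 interpreter, the fixed behavior can be verified with `compile()` (my own sketch): the parenthesized form parses, while the bare generator expression next to `*args` is rejected at compile time:

```python
# Parenthesized generator expression alongside *args: valid syntax
compile("f(*[2], (1 for x in [1]))", "<check>", "eval")
parenthesized_ok = True

# Bare generator expression alongside *args: rejected since Python 3.5
try:
    compile("f(*[2], 1 for x in [1])", "<check>", "eval")
    bare_rejected = False
except SyntaxError:
    bare_rejected = True
```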
Python, distinguishing custom exceptions | 32,523,086 | 6 | 2015-09-11T12:02:53Z | 32,523,156 | 11 | 2015-09-11T12:06:53Z | [
"python",
"exception-handling"
] | fairly new to Python here. Have this code:
```
def someFunction( num ):
if num < 0:
raise Exception("Negative Number!")
elif num > 1000:
raise Exception("Big Number!")
else:
print "Tests passed"
try:
someFunction(10000)
except Exception:
print "This was a negative number but we didn't crash"
except Exception:
print "This was a big number but we didn't crash"
else:
print "All tests passed and we didn't crash"
```
I originally used `raise "Negative Number!"` etc but quickly discovered that this was the old way of doing things and you have to call the Exception class. Now it's working fine but how do I distinguish between my two exceptions? For the code below it's printing "This was a negative number but we didn't crash". Any pointers on this would be great. Thanks! | You need to create your own exception classes if you want to be able to distinguish the kind of exception that happened. Example (I inherited from `ValueError` as I think this is the closest to what you want - it also allows you to just catch `ValueError` should the distinction not matter):
```
class NegativeError(ValueError):
pass
class BigNumberError(ValueError):
pass
def someFunction( num ):
if num < 0:
raise NegativeError("Negative Number!")
elif num > 1000:
raise BigNumberError("Big Number!")
else:
print "Tests passed"
try:
someFunction(10000)
except NegativeError as e:
print "This was a negative number but we didn't crash"
print e
except BigNumberError as e:
print "This was a big number but we didn't crash"
print e
else:
print "All tests passed and we didn't crash"
``` |
How to set React to production mode when using Gulp | 32,526,281 | 18 | 2015-09-11T14:49:03Z | 32,999,184 | 17 | 2015-10-07T17:53:10Z | [
"javascript",
"python",
"flask",
"reactjs",
"tornado"
] | I need to run React in production mode, which presumably entails defining the following somewhere in the environment:
```
process.env.NODE_ENV = 'production';
```
The issue is that I'm running this behind Tornado (a python web-server), not Node.js. I also use Supervisord to manage the tornado instances, so it's not abundantly clear how to set this in the running environment.
I do however use Gulp to build my jsx files to javascript.
Is it possible to somehow set this inside Gulp? And if so, how do I check that React is running in production mode?
Here is my Gulpfile.js:
```
'use strict';
var gulp = require('gulp'),
babelify = require('babelify'),
browserify = require('browserify'),
browserSync = require('browser-sync'),
source = require('vinyl-source-stream'),
uglify = require('gulp-uglify'),
buffer = require('vinyl-buffer');
var vendors = [
'react',
'react-bootstrap',
'jquery',
];
gulp.task('vendors', function () {
var stream = browserify({
debug: false,
require: vendors
});
stream.bundle()
.pipe(source('vendors.min.js'))
.pipe(buffer())
.pipe(uglify())
.pipe(gulp.dest('build/js'));
return stream;
});
gulp.task('app', function () {
var stream = browserify({
entries: ['./app/app.jsx'],
transform: [babelify],
debug: false,
extensions: ['.jsx'],
fullPaths: false
});
vendors.forEach(function(vendor) {
stream.external(vendor);
});
return stream.bundle()
.pipe(source('build.min.js'))
.pipe(buffer())
.pipe(uglify())
.pipe(gulp.dest('build/js'));
});
gulp.task('watch', [], function () {
// gulp.watch(['./app/**/*.jsx'], ['app', browserSync.reload]);
gulp.watch(['./app/**/*.jsx'], ['app']);
});
gulp.task('browsersync',['vendors','app'], function () {
browserSync({
server: {
baseDir: './',
},
notify: false,
browser: ["google chrome"]
});
});
gulp.task('default',['browsersync','watch'], function() {});
``` | ***Step I:*** Add the following to your gulpfile.js somewhere
```
gulp.task('apply-prod-environment', function() {
process.env.NODE_ENV = 'production';
});
```
***Step II:*** Add it to your default task (or whichever task you use to serve/build your app)
```
// before:
// gulp.task('default',['browsersync','watch'], function() {});
// after:
gulp.task('default',['apply-prod-environment', 'browsersync','watch'], function() {});
```
---
***OPTIONAL:*** If you want to be ABSOLUTELY CERTAIN that you are in prod mode, you can create the following slightly enhanced task instead of the one in Step I:
```
gulp.task('apply-prod-environment', function() {
process.stdout.write("Setting NODE_ENV to 'production'" + "\n");
process.env.NODE_ENV = 'production';
if (process.env.NODE_ENV != 'production') {
throw new Error("Failed to set NODE_ENV to production!!!!");
} else {
process.stdout.write("Successfully set NODE_ENV to production" + "\n");
}
});
```
Which will throw the following error if NODE\_ENV is ever not set to 'production'
```
[13:55:24] Starting 'apply-prod-environment'...
[13:55:24] 'apply-prod-environment' errored after 77 μs
[13:55:24] Error: Failed to set NODE_ENV to production!!!!
``` |
Straightforward way to save the contents of an S3 key to a string in boto3? | 32,526,356 | 6 | 2015-09-11T14:53:16Z | 32,526,756 | 9 | 2015-09-11T15:14:04Z | [
"python",
"amazon-s3",
"boto3"
] | So when I issue a get() what I have is a dict and the 'Body' member of the dict is a 'StreamingBody' type and per [How to save S3 object to a file using boto3](https://stackoverflow.com/questions/29378763/how-to-save-s3-object-to-a-file-using-boto3), I see how I could read from this stream in chunks, but I'm wondering if there's an easier way to do this, a la boto. | Update: I have done
```
response = s3_client.get_object(Bucket=bn, Key=obj['Key'])
contents = response['Body'].read()
```
which seems to work. |
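One caveat worth noting (my addition, not part of the original answer): on Python 3, `Body.read()` returns `bytes`, so an explicit decode is needed to end up with a `str`, assuming the object holds UTF-8 text:

```python
# body_bytes stands in for response['Body'].read() from the snippet above
body_bytes = b'some object contents'
contents = body_bytes.decode('utf-8')
```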
pip doesn't work after upgrade | 32,531,563 | 12 | 2015-09-11T20:23:05Z | 36,548,685 | 12 | 2016-04-11T12:26:45Z | [
"python",
"pip",
"pypi"
] | Today I upgraded from pip 7.1.0 to 7.1.2, and now it doesn't work.
```
$ pip search docker-compose
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip/basecommand.py", line 223, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip/commands/search.py", line 43, in run
pypi_hits = self.search(query, options)
File "/Library/Python/2.7/site-packages/pip/commands/search.py", line 60, in search
hits = pypi.search({'name': query, 'summary': query}, 'or')
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xmlrpclib.py", line 1240, in __call__
return self.__send(self.__name, args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xmlrpclib.py", line 1599, in __request
verbose=self.__verbose
File "/Library/Python/2.7/site-packages/pip/download.py", line 788, in request
return self.parse_response(response.raw)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xmlrpclib.py", line 1490, in parse_response
return u.close()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xmlrpclib.py", line 799, in close
raise Fault(**self._stack[0])
Fault: <Fault 1: "<type 'exceptions.KeyError'>:'hits'">
```
So I tried reinstalling:
```
sudo -H pip install --force-reinstall -U pip
```
The reinstall ran without error, but when I tried to search, I got the same error.
So, I tried reinstalling the old version:
```
sudo -H pip install --force-reinstall -U pip==7.1.0
```
Again, the reinstall worked, but searching was still broken after the reinstall. In addition to the error, I did get the version upgrade message:
```
You are using pip version 7.1.0, however version 7.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
```
Disabling the cache also gives the same error:
```
pip search docker-compose --no-cache-dir --disable-pip-version-check
```
The problem seems to only be with the search function, as pip still functions well enough to reinstall itself and such.
I believe I have only installed one other package today, which was docker-compose. The problem occurs when I search for packages other than docker-compose, as in my examples.
Any ideas? | I wasn't able to reproduce this with pip 7.1.2 and either Python 2.7.8 or 3.5.1 on Linux.
The [xmlrpclib docs](https://docs.python.org/3/library/xmlrpc.client.html) have this to say on 'faults':
> Method calls may also raise a special Fault instance, used to signal
> XML-RPC server errors
This implies that pip is reporting a problem on the server (pypi) side.
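You can see the same mechanism locally without touching PyPI: `dumps()` accepts a `Fault` instance and builds a fault response document, and `loads()` re-raises it on the client side, which is what pip's `parse_response()` call in the traceback ends up doing. A minimal sketch (the fault string just mimics the one above):
```
try:
    from xmlrpc.client import Fault, dumps, loads  # Python 3
except ImportError:
    from xmlrpclib import Fault, dumps, loads      # Python 2

# Build the kind of fault document a misbehaving server would return...
payload = dumps(Fault(1, "<type 'exceptions.KeyError'>:'hits'"))

# ...and parse it as a client would: loads() re-raises the Fault.
try:
    loads(payload)
except Fault as fault:
    print(fault.faultCode, fault.faultString)
```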
The [Python Infrastructure Status site](https://status.python.org/incidents/dfpvdhyj0w8y) reports problems with pip search on 2015-09-11 and 2015-09-12.
I suspect that this is not a bug in pip, but a problem with pypi.python.org on the dates in question. This question was asked on 2015-09-11.
A similar error was logged on the [pypi bitbucket repo](https://bitbucket.org/pypa/pypi/issues/338/pypi-search-and-pip-search-failing) on 2015-09-11, reinforcing my theory.
Interestingly, there is another similar [bug logged at pypi's github repo](https://github.com/pypa/pip/issues/3327). In this case the search term is a regex:
`pip search "^docker-compose$"`
I can reproduce this error on Python 2.7.8 and Python 3.5.1 with pip-7.1.2 and pip-8.1.1 on Linux; however, I can't see anything in the pip documentation to suggest that pip search supports regex, and [this answer](http://stackoverflow.com/a/23176820/5320906) states regex is unsupported, so I think this is a separate issue unrelated to the OP's question. |
Log log plot linear regression | 32,536,226 | 8 | 2015-09-12T07:11:40Z | 32,577,245 | 13 | 2015-09-15T03:32:35Z | [
"python",
"numpy",
"matplotlib"
] | ```
fig = plt.figure();
ax=plt.gca()
ax.scatter(x,y,c="blue",alpha=0.95,edgecolors='none')
ax.set_yscale('log')
ax.set_xscale('log')
(Pdb) print x,y
[29, 36, 8, 32, 11, 60, 16, 242, 36, 115, 5, 102, 3, 16, 71, 0, 0, 21, 347, 19, 12, 162, 11, 224, 20, 1, 14, 6, 3, 346, 73, 51, 42, 37, 251, 21, 100, 11, 53, 118, 82, 113, 21, 0, 42, 42, 105, 9, 96, 93, 39, 66, 66, 33, 354, 16, 602]
[310000, 150000, 70000, 30000, 50000, 150000, 2000, 12000, 2500, 10000, 12000, 500, 3000, 25000, 400, 2000, 15000, 30000, 150000, 4500, 1500, 10000, 60000, 50000, 15000, 30000, 3500, 4730, 3000, 30000, 70000, 15000, 80000, 85000, 2200]
```
How can I plot a linear regression on this plot? It should use the log values of course.
```
x=np.array(x)
y=np.array(y)
fig = plt.figure()
ax=plt.gca()
fit = np.polyfit(x, y, deg=1)
ax.plot(x, fit[0] *x + fit[1], color='red') # add reg line
ax.scatter(x,y,c="blue",alpha=0.95,edgecolors='none')
ax.set_yscale('symlog')
ax.set_xscale('symlog')
pdb.set_trace()
```
Result:
Incorrect due to multiple lines/curves and white space.
[](http://i.stack.imgur.com/mMEWC.png)
Data:
```
(Pdb) x
array([ 29., 36., 8., 32., 11., 60., 16., 242., 36.,
115., 5., 102., 3., 16., 71., 0., 0., 21.,
347., 19., 12., 162., 11., 224., 20., 1., 14.,
6., 3., 346., 73., 51., 42., 37., 251., 21.,
100., 11., 53., 118., 82., 113., 21., 0., 42.,
42., 105., 9., 96., 93., 39., 66., 66., 33.,
354., 16., 602.])
(Pdb) y
array([ 30, 47, 115, 50, 40, 200, 120, 168, 39, 100, 2, 100, 14,
50, 200, 63, 15, 510, 755, 135, 13, 47, 36, 425, 50, 4,
41, 34, 30, 289, 392, 200, 37, 15, 200, 50, 200, 247, 150,
180, 147, 500, 48, 73, 50, 55, 108, 28, 55, 100, 500, 61,
145, 400, 500, 40, 250])
(Pdb)
``` | [The only mathematical form that is a straight line on a log-log plot is a power function.](https://en.wikipedia.org/wiki/Log%E2%80%93log_plot)
Since you have data with x=0 in it you can't just fit a line to `log(y) = k*log(x) + a` because [`log(0)`](https://en.wikipedia.org/wiki/Logarithm) is undefined. So we'll have to use a fitting function of that power form, not a polynomial. To do this we'll use [`scipy.optimize`](http://docs.scipy.org/doc/scipy/reference/optimize.html) and its [`curve_fit`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html) function. We'll fit that power form and another slightly more complicated function to illustrate how to use this function:
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
# Abhishek Bhatia's data & scatter plot.
x = np.array([ 29., 36., 8., 32., 11., 60., 16., 242., 36.,
115., 5., 102., 3., 16., 71., 0., 0., 21.,
347., 19., 12., 162., 11., 224., 20., 1., 14.,
6., 3., 346., 73., 51., 42., 37., 251., 21.,
100., 11., 53., 118., 82., 113., 21., 0., 42.,
42., 105., 9., 96., 93., 39., 66., 66., 33.,
354., 16., 602.])
y = np.array([ 30, 47, 115, 50, 40, 200, 120, 168, 39, 100, 2, 100, 14,
50, 200, 63, 15, 510, 755, 135, 13, 47, 36, 425, 50, 4,
41, 34, 30, 289, 392, 200, 37, 15, 200, 50, 200, 247, 150,
180, 147, 500, 48, 73, 50, 55, 108, 28, 55, 100, 500, 61,
145, 400, 500, 40, 250])
fig = plt.figure()
ax=plt.gca()
ax.scatter(x,y,c="blue",alpha=0.95,edgecolors='none', label='data')
ax.set_yscale('log')
ax.set_xscale('log')
newX = np.logspace(0, 3, base=10) # Makes a nice domain for the fitted curves.
# Goes from 10^0 to 10^3
# This avoids the sorting and the swarm of lines.
# Let's fit a power function, y = a * x**b.
# This looks like a line on a log-log plot.
def myExpFunc(x, a, b):
return a * np.power(x, b)
popt, pcov = curve_fit(myExpFunc, x, y)
plt.plot(newX, myExpFunc(newX, *popt), 'r-',
label="({0:.3f}*x**{1:.3f})".format(*popt))
print "Exponential Fit: y = (a*(x**b))"
print "\ta = popt[0] = {0}\n\tb = popt[1] = {1}".format(*popt)
# Let's fit a more complicated function.
# This won't look like a line.
def myComplexFunc(x, a, b, c):
return a * np.power(x, b) + c
popt, pcov = curve_fit(myComplexFunc, x, y)
plt.plot(newX, myComplexFunc(newX, *popt), 'g-',
label="({0:.3f}*x**{1:.3f}) + {2:.3f}".format(*popt))
print "Modified Exponential Fit: y = (a*(x**b)) + c"
print "\ta = popt[0] = {0}\n\tb = popt[1] = {1}\n\tc = popt[2] = {2}".format(*popt)
ax.grid(b='on')
plt.legend(loc='lower right')
plt.show()
```
This produces the following graph:
[](http://i.stack.imgur.com/cJ8fk.png)
and writes this to the terminal:
```
kevin@proton:~$ python ./plot.py
Exponential Fit: y = (a*(x**b))
a = popt[0] = 26.1736126404
b = popt[1] = 0.440755784363
Modified Exponential Fit: y = (a*(x**b)) + c
a = popt[0] = 17.1988418238
b = popt[1] = 0.501625165466
c = popt[2] = 22.6584645232
```
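As an aside (an addition, not part of the original answer): once the x = 0 points are dropped, the same power-law exponent can also be recovered with a plain `np.polyfit` on the logged data. A minimal sketch on made-up data that follows y = 3 * x**2 exactly:
```
import numpy as np

# Made-up data following y = 3 * x**2 exactly, so the fit is easy to verify.
x = np.array([1., 2., 4., 8., 16.])
y = 3.0 * x ** 2

mask = x > 0                     # log(0) is undefined, so x == 0 must go
b, log_a = np.polyfit(np.log(x[mask]), np.log(y[mask]), deg=1)
a = np.exp(log_a)
print(a, b)                      # ~3.0 and ~2.0, i.e. y = a * x**b
```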
Note: Using `ax.set_xscale('log')` hides the points with x=0 on the plot, but those points do contribute to the fit. |
Pycharm IPython tab completion not working (within python console) | 32,542,289 | 8 | 2015-09-12T18:31:23Z | 34,091,233 | 9 | 2015-12-04T15:02:21Z | [
"python",
"ipython",
"pycharm",
"tab-completion"
] | So I have searched high and low for an answer regarding this and I am starting to come to the conclusion that it simply isn't a feature for Pycharm.
I am using IPython in Pycharm and I cannot get tab completion to work at all. If I start IPython in the terminal within Pycharm, there is no issue and tab completion works as promised.
What am I missing here? I am using Pycharm Community 4.5.4.
Thanks! | I emailed PyCharm support and asked the same question. Their response is that tab completion isn't currently supported, but you can vote [here](https://youtrack.jetbrains.com/issue/PY-9345) to move it up their priority list.
After years of tab completion in my muscle memory, Ctrl + space is not an ideal alternative. |
Control tick labels in Python seaborn package | 32,542,957 | 2 | 2015-09-12T19:42:39Z | 32,543,266 | 10 | 2015-09-12T20:13:08Z | [
"python",
"pandas",
"ipython",
"seaborn"
] | I have a scatter plot matrix generated using the `seaborn` package and I'd like to remove all the tick mark labels as these are just messing up the graph (either that or just remove those on the x-axis), but I'm not sure how to do it and have had no success doing Google searches. Any suggestions?
```
import seaborn as sns
sns.pairplot(wheat[['area_planted',
'area_harvested',
'production',
'yield']])
plt.show()
```
[](http://i.stack.imgur.com/Te8ZC.png) | ```
import seaborn as sns
iris = sns.load_dataset("iris")
g = sns.pairplot(iris)
g.set(xticklabels=[])
```
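Since `Grid.set` forwards each keyword to the matching `set_<name>` call on every subplot axes, clearing the y-axis too is just `g.set(xticklabels=[], yticklabels=[])`. The same idea in plain matplotlib, as a minimal seaborn-free sketch:
```
import matplotlib
matplotlib.use("Agg")  # headless backend, just for the demo
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.scatter([1, 2, 3], [4, 5, 6])
# Axes.set(name=value) is shorthand for ax.set_name(value);
# an empty label list blanks the tick labels without removing the ticks.
ax.set(xticklabels=[], yticklabels=[])
fig.savefig("no_tick_labels.png")
```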
[](http://i.stack.imgur.com/GFNVF.png) |
How to print from Flask @app.route to python console | 32,550,487 | 11 | 2015-09-13T14:07:28Z | 33,383,239 | 21 | 2015-10-28T05:02:09Z | [
"python",
"flask"
] | I would like to simply print a "hello world" to the python console after /button is called by the user.
This is my naive approach:
```
@app.route('/button/')
def button_clicked():
print 'Hello world!'
return redirect('/')
```
Background: I would like to execute other python commands from flask (not shell). "print" should be the easiest case.
I believe I have not understood a basic twist here.
Thanks in advance! | It seems like you have it worked out, but for others looking for this answer, an easy way to do this is by printing to stderr. You can do that like this:
```
from __future__ import print_function # In python 2.7
import sys
@app.route('/button/')
def button_clicked():
print('Hello world!', file=sys.stderr)
return redirect('/')
```
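An alternative worth noting (an addition, not from the original answer): the standard `logging` module also writes to stderr by default and adds levels and timestamps, and Flask's own `app.logger` is built on it. A minimal Flask-free sketch:
```
import logging
import sys

# One-time setup: send INFO and above to stderr,
# which the Flask dev server shows in the console.
logging.basicConfig(stream=sys.stderr, level=logging.INFO)
log = logging.getLogger("myapp")

def button_clicked():
    log.info("Hello world!")  # appears in the console, like print-to-stderr

button_clicked()
```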
Flask will display things printed to stderr in the console. For other ways of printing to stderr, see [this stackoverflow post](http://stackoverflow.com/questions/5574702/how-to-print-to-stderr-in-python) |
Flask validates decorator multiple fields simultaneously | 32,555,829 | 17 | 2015-09-14T00:23:26Z | 33,025,472 | 7 | 2015-10-08T20:33:59Z | [
"python",
"validation",
"flask",
"sqlalchemy",
"flask-sqlalchemy"
] | I have been using the @validates decorator in sqlalchemy.orm from flask to validate fields, and all has gone well as long as all of the fields are independent of one another such as:
```
@validates('field_one')
def validates_field_one(self, key, value):
#field one validation
@validates('field_two')
def validates_field_two(self, key, value):
#field two validation
```
However, now I need to do some validation that will require access to field\_one and field\_two simultaneously. It looks like the validates decorator accepts multiple arguments; however, it will simply run the validation function once for each argument, as such:
```
@validates('field_one', 'field_two')
def validates_fields(self, keys, values):
#field validation
```
Results in a work flow of validate field\_one and then validate field\_two. However, I would like to validate both at the same time(a trivial example of which would be assert that the value of field\_one is not the value of field\_two, an example of which would be disallowing self-loops in a graph where field\_one and field\_two refer to nodes and it is performing validation on an edge). How would be the best way to go about doing that? | Order the fields in the order they were defined on the model. Then check if the last field is the one being validated. Otherwise just return the value unchecked. If the validator is validating one of the earlier fields, some of them will not be set yet.
```
@validates('field_one', 'field_two')
def validates_fields(self, key, value):
if key == 'field_two':
assert self.field_one != value
return value
```
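The mechanics are easy to see without SQLAlchemy: the validator fires once per listed column, in assignment order, so by the time the *last* key arrives the earlier ones are already set. A dependency-free sketch of that idea (the field names are the hypothetical ones from the question):
```
def validates_fields(obj, key, value):
    # Cross-field rule: only enforced when the last field arrives,
    # because field_one is already stored on obj by then.
    if key == 'field_two' and value == obj.get('field_one'):
        raise ValueError("field_one and field_two must differ")
    return value

record = {}
for key, value in [('field_one', 1), ('field_two', 2)]:  # assignment order
    record[key] = validates_fields(record, key, value)
print(record)  # {'field_one': 1, 'field_two': 2}
```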
See [this example](https://gist.github.com/matrixise/6417293). |
Create labeledPoints from Spark DataFrame in Python | 32,556,178 | 4 | 2015-09-14T01:29:26Z | 32,557,330 | 7 | 2015-09-14T04:29:33Z | [
"python",
"pandas",
"apache-spark",
"apache-spark-mllib"
] | What .map() function in Python do I use to create a set of labeledPoints from a Spark dataframe? What is the notation if the label/outcome is not the first column but I can refer to its column name, 'status'?
I create the python dataframe with this .map() function:
```
def parsePoint(line):
listmp = list(line.split('\t'))
dataframe = pd.DataFrame(pd.get_dummies(listmp[1:]).sum()).transpose()
dataframe.insert(0, 'status', dataframe['accepted'])
if 'NULL' in dataframe.columns:
dataframe = dataframe.drop('NULL', axis=1)
if '' in dataframe.columns:
dataframe = dataframe.drop('', axis=1)
if 'rejected' in dataframe.columns:
dataframe = dataframe.drop('rejected', axis=1)
if 'accepted' in dataframe.columns:
dataframe = dataframe.drop('accepted', axis=1)
return dataframe
```
I convert it to a spark dataframe after the reduce function has recombined all the pandas dataframes.
```
parsedData=sqlContext.createDataFrame(parsedData)
```
But now how do I create labeledPoints from this in Python? I assume it may be another .map() function? | If you already have numerical features which require no additional transformations, you can use `VectorAssembler` to combine the columns containing the independent variables:
```
from pyspark.ml.feature import VectorAssembler
assembler = VectorAssembler(
inputCols=["your", "independent", "variables"],
outputCol="features")
transformed = assembler.transform(parsedData)
```
Next you can simply map:
```
from pyspark.mllib.regression import LabeledPoint
from pyspark.sql.functions import col
(transformed.select(col("outcome_column").alias("label"), col("features"))
.map(lambda row: LabeledPoint(row.label, row.features)))
``` |
What are Type hints in Python 3.5 | 32,557,920 | 58 | 2015-09-14T05:37:33Z | 32,558,710 | 82 | 2015-09-14T06:49:50Z | [
"python",
"python-3.x",
"type-hinting"
] | One of the talked about features in `Python 3.5` is said to be `type hints`.
Examples of `type hints` are given in this [article](http://lwn.net/Articles/650904/) and [this one](http://lwn.net/Articles/640359/), which also mention using type hints responsibly. Can someone explain more about them, and when they should and shouldn't be used? | I would suggest reading [PEP 483](https://www.python.org/dev/peps/pep-0483/) and [PEP 484](https://www.python.org/dev/peps/pep-0484/) and watching [this](https://www.youtube.com/watch?v=2wDvzy6Hgxg) presentation by Guido on Type Hinting. In addition, more examples on Type Hints can be found at their [documentation topic](http://stackoverflow.com/documentation/python/1766/type-hints#t=201607251908319482596).
**In a nutshell**: **Type hinting is *literally what it means*: you hint the type of the object(s) you're using**.
Due to the highly **dynamic** nature of Python, *inferring or checking the type* of an object being used is especially hard. This fact makes it hard for developers to understand what exactly is going on in code they haven't written and, most importantly, for type checking tools found in many IDEs [PyCharm, PyDev come to mind] that are limited due to the fact that they don't have any indicator of what type the objects are. As a result they resort to trying to infer the type with (as mentioned in the presentation) around 50% success rate.
---
To take two important slides from the Type Hinting presentation:
### ***Why Type Hints?***
1. **Helps Type Checkers:** By hinting at what type you want the object to be the type checker can easily detect if, for instance, you're passing an object with a type that isn't expected.
2. **Helps with documentation:** A third person viewing your code will know what is expected where and, ergo, how to use it without getting `TypeError`s.
3. **Helps IDEs develop more accurate and robust tools:** Development Environments will be better suited at suggesting appropriate methods when they know what type your object is. You have probably experienced this with some IDE at some point, hitting the `.` and having methods/attributes pop up which aren't defined for an object.
### ***Why Static Type Checkers?***
* **Find bugs sooner**: This is self evident, I believe.
* **The larger your project the more you need it**: Again, makes sense. Static languages offer a robustness and control that
dynamic languages lack. The bigger and more complex your application becomes the more control and predictability (from
a behavioral aspect) you require.
* **Large teams are already running static analysis**: I'm guessing this verifies the first two points.
**As a closing note for this small introduction**: This is an **optional** feature and from what I understand it has been introduced in order to reap some of the benefits of static typing.
You generally **do not** need to worry about it and **definitely** don't need to use it (especially in cases where you use Python as an auxiliary scripting language). It should be helpful when developing large projects as *it offers much needed robustness, control and additional debugging capabilities*.
---
## **Type Hinting with mypy**:
In order to make this answer more complete, I think a little demonstration would be suitable. I'll be using [`mypy`](http://mypy-lang.org/), the library which inspired Type Hints as they are presented in the PEP. This is mainly written for anybody bumping into this question and wondering where to begin.
Before I do that let me reiterate the following: [PEP 484](https://www.python.org/dev/peps/pep-0484/) doesn't enforce anything; it is simply setting a direction for function
annotations and proposing guidelines for **how** type checking can/should be performed. You can annotate your functions and
hint as many things as you want; your scripts will still run regardless of the presence of annotations.
Anyways, as noted in the PEP, hinting types should generally take three forms:
* Function annotations. ([PEP 3107](https://www.python.org/dev/peps/pep-3107/))
* Stub files for built-in/user modules. (Ideal future for type checking)
* Special `# type: type` comments. (Complementing the first two forms)\*\*
Additionally, you'll want to use type hints in conjunction with the new [`typing`](https://docs.python.org/3/library/typing.html) module introduced with `Py3.5`. The typing module will save your life in this situation; in it, many (additional) ABCs are defined along with helper functions and decorators for use in static checking. Most `ABCs` in `collections.abc` are included but in a `Generic` form in order to allow subscription (by defining a `__getitem__()` method).
For anyone interested in a more in-depth explanation of these, the [`mypy documentation`](http://mypy.readthedocs.org/en/latest/) is written very nicely and has a lot of code samples demonstrating/describing the functionality of their checker; it is definitely worth a read.
### Function annotations and special comments:
First, it's interesting to observe some of the behavior we can get when using special comments. Special `# type: type` comments
can be added during variable assignments to indicate the type of an object if one cannot be directly inferred. Simple assignments are
generally easily inferred but others, like lists (with regard to their contents), cannot.
**Note:** If we want to use any derivative of `Containers` and need to specify the contents for that container we **must** use the ***generic*** types from the `typing` module. **These support indexing.**
```
# generic List, supports indexing.
from typing import List
# In this case, the type is easily inferred as type: int.
i = 0
# Even though the type can be inferred as of type list
# there is no way to know the contents of this list.
# By using type: List[str] we indicate we want to use a list of strings.
a = [] # type: List[str]
# Appending an int to our list
# is statically not correct.
a.append(i)
# Appending a string is fine.
a.append("i")
print(a) # [0, 'i']
```
If we add these commands to a file and execute them with our interpreter, everything works just fine and `print(a)` just prints
the contents of list `a`. The `# type` comments have been discarded, *treated as plain comments which have no additional semantic meaning*.
By running this with `mypy`, on the other hand, we get the following response:
```
(Python3)jimmi@jim: mypy typeHintsCode.py
typesInline.py:14: error: Argument 1 to "append" of "list" has incompatible type "int"; expected "str"
```
Indicating that a list of `str` objects cannot contain an `int`, which, statically speaking, is sound. This can be fixed by either abiding by the type of `a` and only appending `str` objects or by changing the type of the contents of `a` to indicate that any value is acceptable (intuitively performed with `List[Any]` after `Any` has been imported from `typing`).
Function annotations are added in the form `param_name : type` after each parameter in your function signature and a return type is specified using the `-> type` notation before the ending function colon; all annotations are stored in the `__annotations__` attribute for that function in a handy dictionary form. Using a trivial example (which doesn't require extra types from the `typing` module):
```
def annotated(x: int, y: str) -> bool:
return x < y
```
The `annotated.__annotations__` attribute now has the following values:
```
{'y': <class 'str'>, 'return': <class 'bool'>, 'x': <class 'int'>}
```
If we're a complete noobie, or we are familiar with `Py2.7` concepts and are consequently unaware of the `TypeError` lurking in the comparison of `annotated`, we can perform another static check, catch the error and save us some trouble:
```
(Python3)jimmi@jim: mypy typeHintsCode.py
typeFunction.py: note: In function "annotated":
typeFunction.py:2: error: Unsupported operand types for > ("str" and "int")
```
Among other things, calling the function with invalid arguments will also get caught:
```
annotated(20, 20)
# mypy complains:
typeHintsCode.py:4: error: Argument 2 to "annotated" has incompatible type "int"; expected "str"
```
These can be extended to basically any use-case and the errors caught extend further than basic calls and operations. The types you
can check for are really flexible and I have merely given a small sneak peak of its potential. A look in the `typing` module, the
PEPs or the `mypy` docs will give you a more comprehensive idea of the capabilities offered.
### Stub Files:
Stub files can be used in two different non mutually exclusive cases:
* You need to type check a module for which you do not want to directly alter the function signatures
* You want to write modules and have type-checking but additionally want to separate annotations from content.
What stub files (with an extension of `.pyi`) are is an annotated interface of the module you are making/want to use. They contain
the signatures of the functions you want to type-check with the body of the functions discarded. To get a feel of this, given a set
of three random functions in a module named `randfunc.py`:
```
def message(s):
print(s)
def alterContents(myIterable):
return [i for i in myIterable if i % 2 == 0]
def combine(messageFunc, itFunc):
messageFunc("Printing the Iterable")
a = alterContents(range(1, 20))
return set(a)
```
We can create a stub file `randfunc.pyi`, in which we can place some restrictions if we wish to do so. The downside is that
somebody viewing the source without the stub won't really get that annotation assistance when trying to understand what is supposed
to be passed where.
Anyway, the structure of a stub file is pretty simplistic: Add all function definitions with empty bodies (`pass` filled) and
supply the annotations based on your requirements. Here, let's assume we only want to work with `int` types for our Containers.
```
# Stub for randfunc.py
from typing import Any, Callable, Iterable, List, Set
def message(s: str) -> None: pass
def alterContents(myIterable: Iterable[int])-> List[int]: pass
def combine(
messageFunc: Callable[[str], Any],
itFunc: Callable[[Iterable[int]], List[int]]
)-> Set[int]: pass
```
The `combine` function gives an indication of why you might want to use annotations in a different file, they sometimes clutter up
the code and reduce readability (big no-no for Python). You could of course use type aliases but that sometimes confuses more than it
helps (so use them wisely).
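For what it's worth, an alias is nothing more than an assignment. Naming the two callable types from `combine` above keeps the signature readable, and since annotations are inert at runtime the sketch below runs as-is:
```
from typing import Any, Callable, Iterable, List

# Aliases for the two callable parameters of combine() above.
MessageFunc = Callable[[str], Any]
ItFunc = Callable[[Iterable[int]], List[int]]

def combine(messageFunc: MessageFunc, itFunc: ItFunc) -> set:
    messageFunc("Printing the Iterable")
    return set(itFunc(range(1, 20)))

evens = combine(print, lambda it: [i for i in it if i % 2 == 0])
print(sorted(evens))  # [2, 4, 6, 8, 10, 12, 14, 16, 18]
```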
---
This should get you familiarized with the basic concepts of Type Hints in Python. Even though the type checker used here has been
`mypy`, you should gradually start to see more of them pop up, some internally in IDEs ([**PyCharm**](http://blog.jetbrains.com/pycharm/2015/11/python-3-5-type-hinting-in-pycharm-5/)) and others as standard Python modules.
I'll try and add additional checkers/related packages in the following list when and if I find them (or if suggested).
***Checkers I know of***:
* [**Mypy**](http://mypy-lang.org/): as described here.
* [**PyType**](https://github.com/google/pytype): By Google, uses different notation from what I gather, probably worth a look.
***Related Packages/Projects***:
* [**typeshed:**](https://github.com/python/typeshed/) Official Python repo housing an assortment of stub files for the standard library.
The `typeshed` project is actually one of the best places you can look to see how type hinting might be used in a project of your own. Let's take as an example [the `__init__` dunders of the `Counter` class](https://github.com/python/typeshed/blob/master/stdlib/3/collections.pyi#L78) in the corresponding `.pyi` file:
```
class Counter(Dict[_T, int], Generic[_T]):
@overload
def __init__(self) -> None: ...
@overload
def __init__(self, Mapping: Mapping[_T, int]) -> None: ...
@overload
def __init__(self, iterable: Iterable[_T]) -> None: ...
```
[Where `_T = TypeVar('_T')` is used to define generic classes](http://mypy.readthedocs.org/en/latest/generics.html#defining-generic-classes). For the `Counter` class we can see that it can either take no arguments in its initializer, get a single `Mapping` from any type to an `int` *or* take an `Iterable` of any type.
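The same `TypeVar` machinery works outside stubs. A minimal sketch of a generic function, where a checker links the argument's element type to the return type:
```
from typing import Sequence, TypeVar

T = TypeVar('T')

def first(seq: Sequence[T]) -> T:
    # For a Sequence[int] argument the checker infers an int return;
    # for a str argument (a Sequence[str]) it infers str, and so on.
    return seq[0]

print(first([10, 20, 30]))  # 10
print(first("abc"))         # a
```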
---
**Notice**: One thing I forgot to mention was that the `typing` module has been introduced on a *provisional basis*. From **[PEP 411](https://www.python.org/dev/peps/pep-0411/)**:
> A provisional package may have its API modified prior to "graduating" into a "stable" state. On one hand, this state provides the package with the benefits of being formally part of the Python distribution. On the other hand, the core development team explicitly states that no promises are made with regards to the stability of the package's API, which may change for the next release. While it is considered an unlikely outcome, such packages may even be removed from the standard library without a deprecation period if the concerns regarding their API or maintenance prove well-founded.
So take things here with a pinch of salt; I'm doubtful it will be removed or altered in significant ways but one can never know.
---
\*\* Another topic altogether but valid in the scope of type-hints: [`PEP 526`](https://docs.python.org/3.6/whatsnew/3.6.html#pep-526-syntax-for-variable-annotations) is an effort to replace `# type` comments by introducing new syntax which allows users to annotate the type of variables in simple `varname: type` statements. |
What are Type hints in Python 3.5 | 32,557,920 | 58 | 2015-09-14T05:37:33Z | 34,352,299 | 13 | 2015-12-18T09:28:02Z | [
"python",
"python-3.x",
"type-hinting"
] | One of the talked about features in `Python 3.5` is said to be `type hints`.
Examples of `type hints` are given in this [article](http://lwn.net/Articles/650904/) and [this one](http://lwn.net/Articles/640359/), which also mention using type hints responsibly. Can someone explain more about them, and when they should and shouldn't be used? | The newly released PyCharm 5 supports type hinting. In their blog post about it (see [Python 3.5 type hinting in PyCharm 5](http://blog.jetbrains.com/pycharm/2015/11/python-3-5-type-hinting-in-pycharm-5/)) they offer a great explanation of **what type hints are and aren't** along with several examples and illustrations for how to use them in your code.
Additionally, it is supported in Python 2.7, as explained in [this comment](http://blog.jetbrains.com/pycharm/2015/11/python-3-5-type-hinting-in-pycharm-5/#comment-259076):
> PyCharm supports the typing module from PyPI for Python 2.7, Python 3.2-3.4. *For 2.7 you have to put type hints in \*.pyi stub files since function annotations were added in Python 3.0*. |
What are Type hints in Python 3.5 | 32,557,920 | 58 | 2015-09-14T05:37:33Z | 38,045,620 | 7 | 2016-06-27T03:46:37Z | [
"python",
"python-3.x",
"type-hinting"
] | One of the talked about features in `Python 3.5` is said to be `type hints`.
Examples of `type hints` are given in this [article](http://lwn.net/Articles/650904/) and [this one](http://lwn.net/Articles/640359/), which also mention using type hints responsibly. Can someone explain more about them, and when they should and shouldn't be used? | Adding to Jim's elaborate answer:
Check the [`typing` module](https://docs.python.org/3/library/typing.html) -- this module supports type hints as specified by PEP 484.
For example, the function below takes and returns values of type `str` and is annotated as follows:
```
def greeting(name: str) -> str:
return 'Hello ' + name
```
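Note that the annotation is inert at runtime (it is merely stored on the function object), so calling `greeting` with a non-`str` still executes until the `+` actually fails; only a static checker flags it up front:
```
def greeting(name: str) -> str:
    return 'Hello ' + name

print(greeting('world'))         # Hello world
print(greeting.__annotations__)  # {'name': <class 'str'>, 'return': <class 'str'>}

try:
    greeting(3)  # a checker rejects this; at runtime it's a plain TypeError
except TypeError as exc:
    print(exc)
```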
The `typing` module also supports:
1. [Type aliasing](https://docs.python.org/3/library/typing.html#type-aliases)
2. Type hinting for [callback functions](https://docs.python.org/3/library/typing.html#callable)
3. [Generics](https://docs.python.org/3/library/typing.html#generics) - Abstract base classes have been extended to support subscription to denote expected types for container elements.
4. [User-defined generic types](https://docs.python.org/3/library/typing.html#user-defined-generic-types) - A user-defined class can be defined as a generic class.
5. [Any type](https://docs.python.org/3/library/typing.html#typing.Any) - Every type is a subtype of Any. |
How can I understand a .pyc file content | 32,562,163 | 3 | 2015-09-14T10:08:27Z | 32,562,303 | 8 | 2015-09-14T10:15:38Z | [
"python",
"python-2.7",
"disassembling",
"pyc"
] | I have a `.pyc` file. I need to understand the content of that file to know how the Python disassembler works, i.e. how I can generate output like `dis.dis(function)` from the `.pyc` file content.
For example:
```
>>> def sqr(x):
... return x*x
...
>>> import dis
>>> dis.dis(sqr)
2 0 LOAD_FAST 0 (x)
3 LOAD_FAST 0 (x)
6 BINARY_MULTIPLY
7 RETURN_VALUE
```
I need to get an output like this using the `.pyc` file. | `.pyc` files contain some metadata and a [`marshal`ed](https://docs.python.org/2/library/marshal.html) `code` object; to load the `code` object and disassemble it, use:
```
import dis, marshal
with open(pycfile, "rb") as f:
magic_and_timestamp = f.read(8) # first 8 bytes are metadata
code = marshal.load(f) # rest is a marshalled code object
dis.dis(code)
```
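One caveat (an addition, not from the original answer): the 8-byte header is specific to Python 2; CPython 3.3+ prepends 12 bytes and 3.7+ prepends 16. For a version-independent experiment you can skip the `.pyc` entirely and `compile()` the source yourself:
```
import dis

source = "def sqr(x):\n    return x * x\n"
module_code = compile(source, "<string>", "exec")

# The module-level code object keeps sqr's own code object in co_consts,
# just like the marshalled top-level object in a .pyc does.
sqr_code = next(c for c in module_code.co_consts
                if hasattr(c, "co_name") and c.co_name == "sqr")
dis.dis(sqr_code)
```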
Demo with the `bisect` module:
```
>>> import bisect
>>> import dis, marshal
>>> with open(bisect.__file__, "rb") as f:
... magic_and_timestamp = f.read(8) # first 8 bytes are metadata
... code = marshal.load(f) # rest is bytecode
...
>>> dis.dis(code)
1 0 LOAD_CONST 0 ('Bisection algorithms.')
3 STORE_NAME 0 (__doc__)
3 6 LOAD_CONST 1 (0)
9 LOAD_CONST 8 (None)
12 LOAD_CONST 2 (<code object insort_right at 0x106a459b0, file "/Users/mpieters/Development/Library/buildout.python/parts/opt/lib/python2.7/bisect.py", line 3>)
15 MAKE_FUNCTION 2
18 STORE_NAME 2 (insort_right)
22 21 LOAD_NAME 2 (insort_right)
24 STORE_NAME 3 (insort)
24 27 LOAD_CONST 1 (0)
30 LOAD_CONST 8 (None)
33 LOAD_CONST 3 (<code object bisect_right at 0x106a45ab0, file "/Users/mpieters/Development/Library/buildout.python/parts/opt/lib/python2.7/bisect.py", line 24>)
36 MAKE_FUNCTION 2
39 STORE_NAME 4 (bisect_right)
45 42 LOAD_NAME 4 (bisect_right)
45 STORE_NAME 5 (bisect)
47 48 LOAD_CONST 1 (0)
51 LOAD_CONST 8 (None)
54 LOAD_CONST 4 (<code object insort_left at 0x106a45bb0, file "/Users/mpieters/Development/Library/buildout.python/parts/opt/lib/python2.7/bisect.py", line 47>)
57 MAKE_FUNCTION 2
60 STORE_NAME 6 (insort_left)
67 63 LOAD_CONST 1 (0)
66 LOAD_CONST 8 (None)
69 LOAD_CONST 5 (<code object bisect_left at 0x106a45cb0, file "/Users/mpieters/Development/Library/buildout.python/parts/opt/lib/python2.7/bisect.py", line 67>)
72 MAKE_FUNCTION 2
75 STORE_NAME 7 (bisect_left)
89 78 SETUP_EXCEPT 14 (to 95)
90 81 LOAD_CONST 6 (-1)
84 LOAD_CONST 7 (('*',))
87 IMPORT_NAME 8 (_bisect)
90 IMPORT_STAR
91 POP_BLOCK
92 JUMP_FORWARD 17 (to 112)
91 >> 95 DUP_TOP
96 LOAD_NAME 9 (ImportError)
99 COMPARE_OP 10 (exception match)
102 POP_JUMP_IF_FALSE 111
105 POP_TOP
106 POP_TOP
107 POP_TOP
92 108 JUMP_FORWARD 1 (to 112)
>> 111 END_FINALLY
>> 112 LOAD_CONST 8 (None)
115 RETURN_VALUE
```
Note that this is just the *top-level code object*, defining the module. If you want to analyse the functions it contains, you'll need to load the nested `code` objects from the top-level `code.co_consts` tuple; for example, the `insort_right` function is loaded with `LOAD_CONST 2`, so look for the code object at that index:
```
>>> code.co_consts[2]
<code object insort_right at 0x106a459b0, file "/Users/mpieters/Development/Library/buildout.python/parts/opt/lib/python2.7/bisect.py", line 3>
>>> dis.dis(code.co_consts[2])
12 0 LOAD_FAST 2 (lo)
3 LOAD_CONST 1 (0)
6 COMPARE_OP 0 (<)
9 POP_JUMP_IF_FALSE 27
13 12 LOAD_GLOBAL 0 (ValueError)
15 LOAD_CONST 2 ('lo must be non-negative')
18 CALL_FUNCTION 1
21 RAISE_VARARGS 1
24 JUMP_FORWARD 0 (to 27)
14 >> 27 LOAD_FAST 3 (hi)
30 LOAD_CONST 5 (None)
33 COMPARE_OP 8 (is)
36 POP_JUMP_IF_FALSE 54
15 39 LOAD_GLOBAL 2 (len)
42 LOAD_FAST 0 (a)
45 CALL_FUNCTION 1
48 STORE_FAST 3 (hi)
51 JUMP_FORWARD 0 (to 54)
16 >> 54 SETUP_LOOP 65 (to 122)
>> 57 LOAD_FAST 2 (lo)
60 LOAD_FAST 3 (hi)
63 COMPARE_OP 0 (<)
66 POP_JUMP_IF_FALSE 121
17 69 LOAD_FAST 2 (lo)
72 LOAD_FAST 3 (hi)
75 BINARY_ADD
76 LOAD_CONST 3 (2)
79 BINARY_FLOOR_DIVIDE
80 STORE_FAST 4 (mid)
18 83 LOAD_FAST 1 (x)
86 LOAD_FAST 0 (a)
89 LOAD_FAST 4 (mid)
92 BINARY_SUBSCR
93 COMPARE_OP 0 (<)
96 POP_JUMP_IF_FALSE 108
99 LOAD_FAST 4 (mid)
102 STORE_FAST 3 (hi)
105 JUMP_ABSOLUTE 57
19 >> 108 LOAD_FAST 4 (mid)
111 LOAD_CONST 4 (1)
114 BINARY_ADD
115 STORE_FAST 2 (lo)
118 JUMP_ABSOLUTE 57
>> 121 POP_BLOCK
20 >> 122 LOAD_FAST 0 (a)
125 LOAD_ATTR 3 (insert)
128 LOAD_FAST 2 (lo)
131 LOAD_FAST 1 (x)
134 CALL_FUNCTION 2
137 POP_TOP
138 LOAD_CONST 5 (None)
141 RETURN_VALUE
```
I personally would *avoid* trying to parse the `.pyc` file with anything other than the matching Python version and `marshal` module. The `marshal` format is basically an internal serialisation format that changes with the needs of Python itself. New features like list comprehensions and `with` statements and `async await` require new additions to the format, which is not published other than as [C source code](https://hg.python.org/cpython/file/2.7/Python/marshal.c).
If you do go this route, and manage to [read a `code` object](https://hg.python.org/cpython/file/2.7/Python/marshal.c#l985) by other means than using the module, you'll have to parse out the disassembly from the various attributes of the code object; see the [`dis` module source](https://hg.python.org/cpython/file/2.7/Lib/dis.py) for details on how to do this (you'll have to use the `co_firstlineno` and `co_lnotab` attributes to create a bytecode-offset-to-linenumber map, for example). |
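If you want to experiment with the `marshal` round-trip without touching a `.pyc` on disk, here is a small sketch: `marshal.dumps()` produces exactly the bytes that follow a `.pyc` file's header (8 bytes on Python 2.7; later versions use longer headers), so reading one is just skipping the header and calling `marshal.loads()` on the rest, with the same interpreter version that wrote it:

```python
import dis
import marshal

# Compile a tiny module at runtime; this is the same kind of code object
# that a .pyc file stores after its header.
code = compile("x = 1 + 2", "<demo>", "exec")

data = marshal.dumps(code)       # the bytes a .pyc body would contain
restored = marshal.loads(data)   # round-trip back to a code object

dis.dis(restored)                # disassemble, exactly as above

namespace = {}
exec(restored, namespace)
print(namespace["x"])            # 3
```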
Simple way to measure cell execution time in ipython notebook | 32,565,829 | 14 | 2015-09-14T13:15:57Z | 36,690,084 | 9 | 2016-04-18T09:27:25Z | [
"python",
"ipython",
"ipython-notebook",
"jupyter"
] | I would like to get the time spent on the cell execution in addition to the original output from cell.
To this end, I tried `%%timeit -r1 -n1` but it doesn't expose the variables defined within the cell.
`%%time` works for a cell which only contains 1 statement.
```
In[1]: %%time
1
CPU times: user 4 µs, sys: 0 ns, total: 4 µs
Wall time: 5.96 µs
Out[1]: 1
In[2]: %%time
# Notice there is no out result in this case.
x = 1
x
CPU times: user 3 µs, sys: 0 ns, total: 3 µs
Wall time: 5.96 µs
```
What's the best way to do it? | Use cell magic and this project on github by Phillip Cloud:
Load it by putting this at the top of your notebook or put it in your config file if you always want to load it by default:
```
%install_ext https://raw.github.com/cpcloud/ipython-autotime/master/autotime.py
%load_ext autotime
```
If loaded, every output of subsequent cell execution will include the time in min and sec it took to execute it. |
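If you'd rather not install an extension, a plain `time`-based sketch (nothing notebook-specific) also works in any cell:

```python
import time

t0 = time.time()
x = sum(range(1000000))          # the work you want to measure
elapsed = time.time() - t0

print("elapsed: %.3f s" % elapsed)
x  # as the cell's last expression, this is still displayed as Out[...]
```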
What did I wrong here [Write to file in new line]? | 32,568,234 | 3 | 2015-09-14T15:11:51Z | 32,568,264 | 9 | 2015-09-14T15:13:29Z | [
"python"
] | I want to make a simple function that writes two words to a file each on a new line.
But if I run this code it only writes "tist - tost" to the file.
Code:
```
def write_words(word1, word2):
w = open("output.txt", "w")
w.write(word1 + " - " + word2 + '\n')
w.close()
write_words("test", "tast")
write_words("tist", "tost")
```
Output:
```
tist - tost
```
How can I write the two phrases to the file? | You need to open your file in append mode. Also, as a more Pythonic way of opening a file, you can use the [*`with` statement*](https://docs.python.org/3/reference/compound_stmts.html#the-with-statement), which closes the file at the end of the block:
```
def write_words(word1, word2):
with open("output.txt", "a") as f :
f.write(word1 + " - " + word2 + '\n')
``` |
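With the append-mode version above, a quick end-to-end check (using the same hypothetical `output.txt` path) shows both lines surviving:

```python
import os

path = "output.txt"
if os.path.exists(path):
    os.remove(path)              # start fresh for the demo

def write_words(word1, word2):
    # "a" appends instead of truncating, so earlier lines survive
    with open(path, "a") as f:
        f.write(word1 + " - " + word2 + "\n")

write_words("test", "tast")
write_words("tist", "tost")

with open(path) as f:
    print(f.read())
# test - tast
# tist - tost
```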
Does spark predicate pushdown work with JDBC? | 32,573,991 | 11 | 2015-09-14T21:09:33Z | 32,585,936 | 8 | 2015-09-15T12:20:01Z | [
"python",
"jdbc",
"apache-spark",
"apache-spark-sql",
"pyspark"
] | According to [this](https://databricks.com/blog/2015/02/17/introducing-dataframes-in-spark-for-large-scale-data-science.html)
> Catalyst applies logical optimizations such as predicate pushdown. The
> optimizer can push filter predicates down into the data source,
> enabling the physical execution to skip irrelevant data.
Spark supports push down of predicates to the data source.
Is this feature also available / expected for JDBC?
(From inspecting the DB logs I can see it's not the default behavior right now - the full query is passed to the DB, even if it's later limited by spark filters)
**MORE DETAILS**
Running Spark 1.5 with PostgreSQL 9.4
code snippet:
```
from pyspark import SQLContext, SparkContext, Row, SparkConf
from data_access.data_access_db import REMOTE_CONNECTION
sc = SparkContext()
sqlContext = SQLContext(sc)
url = 'jdbc:postgresql://{host}/{database}?user={user}&password={password}'.format(**REMOTE_CONNECTION)
sql = "dummy"
df = sqlContext.read.jdbc(url=url, table=sql)
df = df.limit(1)
df.show()
```
SQL Trace:
```
< 2015-09-15 07:11:37.718 EDT >LOG: execute <unnamed>: SET extra_float_digits = 3
< 2015-09-15 07:11:37.771 EDT >LOG: execute <unnamed>: SELECT * FROM dummy WHERE 1=0
< 2015-09-15 07:11:37.830 EDT >LOG: execute <unnamed>: SELECT c.oid, a.attnum, a.attname, c.relname, n.nspname, a.attnotnull OR (t.typtype = 'd' AND t.typnotnull), pg_catalog.pg_get_expr(d.adbin, d.a
drelid) LIKE '%nextval(%' FROM pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n ON (c.relnamespace = n.oid) JOIN pg_catalog.pg_attribute a ON (c.oid = a.attrelid) JOIN pg_catalog.pg_type t ON (a.a
tttypid = t.oid) LEFT JOIN pg_catalog.pg_attrdef d ON (d.adrelid = a.attrelid AND d.adnum = a.attnum) JOIN (SELECT 15218474 AS oid , 1 AS attnum UNION ALL SELECT 15218474, 3) vals ON (c.oid = vals.oid
AND a.attnum = vals.attnum)
< 2015-09-15 07:11:40.936 EDT >LOG: execute <unnamed>: SET extra_float_digits = 3
< 2015-09-15 07:11:40.964 EDT >LOG: execute <unnamed>: SELECT "id","name" FROM dummy
```
I would expect that the last select will include a `limit 1` clause - but it doesn't | Spark DataFrames support predicate push-down with JDBC sources, but the term *predicate* is used in a strict SQL meaning: it covers only the `WHERE` clause. Moreover, it looks like it is limited to logical conjunction (no `IN` and `OR`, I am afraid) and simple predicates.
Everything else, like limits, counts, ordering, groups and conditions is processed on the Spark side. One caveat, already covered on SO, is that `df.count()` or `sqlContext.sql("SELECT COUNT(*) FROM df")` is translated to `SELECT 1 FROM df` and requires both substantial data transfer and processing using Spark.
Does it mean it is a lost cause? Not exactly. It is possible to use an arbitrary subquery as a `table` argument. It is less convenient than a predicate pushdown but otherwise works pretty well:
```
n = ... # Number of rows to take
sql = "(SELECT * FROM dummy LIMIT {0}) AS tmp".format(int(n))
df = sqlContext.read.jdbc(url=url, table=sql)
``` |
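Since the `table` argument is simply spliced into the SQL that Spark generates, you can wrap this pattern in a small helper; a sketch (the function name is mine, not a Spark API):

```python
def limited_table(table, n):
    """Build a subquery string that pushes a LIMIT down to the database.

    The alias ("tmp") is required because the JDBC source wraps the
    string in SELECT ... FROM <table>.
    """
    return "(SELECT * FROM {0} LIMIT {1}) AS tmp".format(table, int(n))

sql = limited_table("dummy", 1)
print(sql)  # (SELECT * FROM dummy LIMIT 1) AS tmp
# df = sqlContext.read.jdbc(url=url, table=sql)
```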
Convert dictionary to tuple with additional element inside tuple | 32,582,458 | 3 | 2015-09-15T09:27:53Z | 32,582,520 | 8 | 2015-09-15T09:30:50Z | [
"python",
"dictionary",
"tuples"
] | Let's say I have this dictionary:
```
d = {'a': 1, 'b': 2, 'c': 3}
```
So doing `d.items()`, it would convert it to:
```
[('a', 1), ('c', 3), ('b', 2)]
```
But I need it to be like this:
```
[('a', '=', 1), ('c', '=', 3), ('b', '=', 2)]
```
What would be most pythonic/efficient way of doing it? Are there better alternatives than just iterating over a dictionary and building tuples for every key/value pair? | You have to iterate over the dictionary and construct each tuple. Using a list comprehension, this is fairly straightforward to do:
```
>>> [(k, '=', v) for k, v in d.items()]
[('a', '=', 1), ('c', '=', 3), ('b', '=', 2)]
``` |
Django REST Framework: difference between views and viewsets? | 32,589,087 | 5 | 2015-09-15T14:47:55Z | 32,589,879 | 10 | 2015-09-15T15:24:24Z | [
"python",
"django",
"rest",
"django-views",
"django-rest-framework"
] | May be [relevant](http://stackoverflow.com/questions/32430689/what-does-django-rest-framework-mean-trade-offs-between-view-vs-viewsets).
What is the difference between **views** and **viewsets**? And what about **router** and **urlpatterns**? | `ViewSets` and `Routers` are simple tools to speed-up implementing of your API, if you're aiming to standard behaviour and standard URLs.
Using a `ViewSet`, you don't have to create separate views for getting a list of objects and the detail of one object; the `ViewSet` will handle both list and detail for you in a consistent way.
Using a `Router` will connect your `ViewSet` into a "standardized" structure of URLs (it's not standard in any global way, just some structure that was implemented by the creators of Django REST framework). That way you don't have to create your urlpatterns by hand, and you're guaranteed that all of your URLs are consistent (at least on the layer that the `Router` is responsible for).
It may not look like much, but when implementing a huge API with many, many urlpatterns and views, using `ViewSets` and `Routers` will make a big difference.
For better explanation: this is code using ViewSets and Routers:
views.py:
```
from snippets.models import Article
from rest_framework import viewsets
from yourapp.serializers import ArticleSerializer
class ArticleViewSet(viewsets.ModelViewSet):
queryset = Article.objects.all()
serializer_class = ArticleSerializer
```
urls.py:
```
from django.conf.urls import url, include
from yourapp import views
from rest_framework.routers import DefaultRouter
router = DefaultRouter()
router.register(r'articles', views.ArticleViewSet)
urlpatterns = [
url(r'^', include(router.urls)),
]
```
And equivalent result using normal Views and no routers:
views.py
```
from snippets.models import Article
from snippets.serializers import ArticleSerializer
from rest_framework import generics
class ArticleList(generics.ListCreateAPIView):
queryset = Article.objects.all()
serializer_class = ArticleSerializer
class ArticleDetail(generics.RetrieveUpdateDestroyAPIView):
queryset = Article.objects.all()
serializer_class = ArticleSerializer
```
urls.py
```
from django.conf.urls import url, include
from yourapp import views
urlpatterns = [
    url(r'^articles/$', views.ArticleList.as_view(), name="article-list"),
    url(r'^articles/(?P<pk>[0-9]+)/$', views.ArticleDetail.as_view(), name="article-detail"),
]
``` |
Python pandas: how to specify data types when reading an Excel file? | 32,591,466 | 2 | 2015-09-15T16:48:09Z | 32,591,786 | 11 | 2015-09-15T17:06:27Z | [
"python",
"pandas",
"dataframe"
] | I am importing an excel file into a pandas dataframe with the `pandas.read_excel()` function.
One of the columns is the primary key of the table: it's all numbers, but it's stored as text (the little green triangle in the top left of the Excel cells confirms this).
However, when I import the file into a pandas dataframe, the column gets imported as a float. This means that, for example, '0614' becomes 614.
Is there a way to specify the datatype when importing a column? I understand this is possible when importing CSV files but couldn't find anything in the syntax of `read_excel()`.
The only solution I can think of is to add an arbitrary letter at the beginning of the text (converting '0614' into 'A0614') in Excel, to make sure the column is imported as text, and then chopping off the 'A' in python, so I can match it to other tables I am importing from SQL. | You just specify converters. I created an excel spreadsheet of the following structure:
```
names ages
bob 05
tom 4
suzy 3
```
Where the "ages" column is formatted as strings. To load:
```
import pandas as pd

df = pd.read_excel('Book1.xlsx', sheetname='Sheet1', header=0, converters={'names': str, 'ages': str})
>>>df
names ages
0 bob 05
1 tom 4
2 suzy 3
``` |
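The `'0614'` → `614` loss described in the question is plain numeric coercion, which a quick check demonstrates; keeping the column as `str` via `converters` is what preserves the leading zero:

```python
raw = "0614"

print(float(raw))  # 614.0 -- the leading zero is gone once parsed as a number
print(str(raw))    # 0614  -- preserved while the value stays text
```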
Static method syntax confusion | 32,593,763 | 2 | 2015-09-15T19:08:26Z | 32,593,849 | 8 | 2015-09-15T19:13:58Z | [
"python",
"python-2.7",
"methods",
"static-methods"
] | This is how we make static functions in Python:
```
class A:
@staticmethod
def fun():
print 'hello'
A.fun()
```
This works as expected and prints `hello`.
If it is a member function instead of a static one, we use `self`:
```
class A:
def fun(self):
print 'hello'
A().fun()
```
which also works as expected and prints `hello`.
My confusion is with the following case:
```
class A:
def fun():
print 'hello'
```
In the above case, there is no `staticmethod`, nor `self`. Python interpreter is okay with this definition. However, we cannot call it either of the above methods, namely:
```
A.fun()
A().fun()
```
both gives errors.
**My question is:** Is there any way that I can call this function? If not, why Python do not give me a syntax error in the first place? | Python doesn't give you a syntax error, because the binding of a method (which takes care of passing in `self`) is a *runtime* action.
Only when you look up a method on a class or instance, is a method being bound (because functions are [descriptors](https://docs.python.org/2/howto/descriptor.html) they produce a method when looked up this way). This is done via the [`descriptor.__get__()` method](https://docs.python.org/2/reference/datamodel.html#object.__get__), which is called by the [`object.__getattribute__()` method](https://docs.python.org/2/reference/datamodel.html#object.__getattribute__), which Python called when you tried to access the `fun` attribute on the `A` class or `A()` instance.
You can always 'unwrap' the bound method and reach for the un-wrapped function underneath to call it directly:
```
A.fun.__func__()
```
Incidentally, that's exactly what `staticmethod` does; it is there to 'intercept' the descriptor binding and return the raw function object instead of a bound method. In other words, `staticmethod` *undoes* the normal runtime method binding:
Demo:
```
>>> class A(object): pass
...
>>> def fun(): print 'hello!'
...
>>> fun.__get__(None, A) # binding to a class
<unbound method A.fun>
>>> fun.__get__(None, A)() # calling a bound function, fails as there is no first argument
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method fun() must be called with A instance as first argument (got nothing instead)
>>> fun.__get__(None, A).__func__ # access the wrapped function
<function fun at 0x100ba8378>
>>> staticmethod(fun).__get__(None, A) # staticmethod object just returns the function
<function fun at 0x100ba8378>
>>> staticmethod(fun).__get__(None, A)() # so calling it works
hello!
``` |
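Another way to reach the unwrapped function, sketched below, is to pull it straight out of the class `__dict__`, which skips the descriptor protocol (and therefore the binding) altogether:

```python
class A(object):
    def fun():
        return 'hello'

# A.__dict__['fun'] is the plain function object; no descriptor lookup
# happens, so no binding and no implicit first argument:
raw = A.__dict__['fun']
print(raw())  # hello

# Calling through an instance still binds, so the instance is passed as
# the (missing) first argument and the call fails:
try:
    A().fun()
except TypeError:
    print('bound call failed')
```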
How do I generate non-repeating random numbers in a while loop? (Python 3) | 32,596,805 | 4 | 2015-09-15T22:34:44Z | 32,596,849 | 8 | 2015-09-15T22:39:42Z | [
"python",
"python-3.x",
"random",
"while-loop"
] | So far what I have is
```
import random
def go():
rounds = 0
while rounds < 5:
number = random.randint(1, 5)
if number == 1:
print('a')
elif number == 2:
print('b')
elif number == 3:
print('c')
elif number == 4:
print('d')
elif number == 5:
print('e')
rounds = rounds + 1
go()
```
and the output ends up beings something along the lines of
```
e
e
c
b
e
```
How do I make it so a number is only used once and the letters do not repeat?
(ex. something like)
```
a
e
b
c
d
```
Thanks in advance | The [random.sample(population, k)](https://docs.python.org/2/library/random.html#random.sample) method returns a k-length list of unique elements chosen from the specified population, so nothing repeats.
```
import random

r = random.sample("abcde", 5)
for element in r:
print(element)
``` |
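An equivalent approach, if you already hold the whole population, is to shuffle it in place; every element then comes out exactly once, in random order (a sketch):

```python
import random

letters = list("abcde")
random.shuffle(letters)       # shuffles in place, so no repeats are possible

for element in letters:
    print(element)
```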
Python not working in the command line of git bash | 32,597,209 | 22 | 2015-09-15T23:18:07Z | 32,599,341 | 22 | 2015-09-16T03:38:17Z | [
"python",
"windows",
"git",
"command-line",
"git-bash"
] | Python will not run in git bash (Windows). When I type python in the command line, it takes me to a blank line without saying that it has entered python 2.7.10 like its does in Powershell. It doesn't give me an error message, but python just doesn't run.
I have already made sure the environmental variables in PATH included `c:\python27`. What else can I check?
---
A session wherein this issue occurs looks like the following:
```
user@hostname MINGW64 ~
$ type python
python is /c/Python27/python
user@hostname MINGW64 ~
$ python
```
...sitting there without returning to the prompt. | [This is a known bug in MSys2, which provides the terminal used by Git Bash.](http://sourceforge.net/p/msys2/tickets/32/) You can work around it by running a Python build without ncurses support, or by using [WinPTY](https://github.com/rprichard/winpty/), used as follows:
> To run a Windows console program in mintty or Cygwin sshd, prepend console.exe to the command-line:
>
> ```
> $ build/console.exe c:/Python27/python.exe
> Python 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)] on win32
> Type "help", "copyright", "credits" or "license" for more information.
> >>> 10 + 20
> 30
> >>> exit()
> ```
The [prebuilt binaries for msys](https://github.com/downloads/rprichard/winpty/winpty-0.1.1-msys.zip) are likely to work with Git Bash. (Do check whether there's a newer version if significant time has passed since this answer was posted!).
---
As of Git for Windows 2.7.1, also try using `winpty c:Python27/python.exe`; WinPTY may be included out-of-the-box. |
Python not working in the command line of git bash | 32,597,209 | 22 | 2015-09-15T23:18:07Z | 36,530,750 | 18 | 2016-04-10T14:05:16Z | [
"python",
"windows",
"git",
"command-line",
"git-bash"
] | Python will not run in git bash (Windows). When I type python in the command line, it takes me to a blank line without saying that it has entered python 2.7.10 like its does in Powershell. It doesn't give me an error message, but python just doesn't run.
I have already made sure the environmental variables in PATH included `c:\python27`. What else can I check?
---
A session wherein this issue occurs looks like the following:
```
user@hostname MINGW64 ~
$ type python
python is /c/Python27/python
user@hostname MINGW64 ~
$ python
```
...sitting there without returning to the prompt. | Just enter this in your Git shell on Windows: `alias python='winpty python.exe'`. That is all; you now have an alias to the Python executable. Enjoy
P.S. For permanent alias addition see below,
```
cd ~
touch .bashrc
```
then open `.bashrc`, add the alias command from above, and save the file.
You need to create the file through the console; otherwise you cannot save it with the proper name. |
Fast Iteration of numpy arrays | 32,597,294 | 7 | 2015-09-15T23:27:33Z | 32,600,805 | 7 | 2015-09-16T05:58:30Z | [
"python",
"arrays",
"numpy",
"filtering",
"signal-processing"
] | I'm new to python and I'm trying to do some basic signal-processing stuff and I'm having a serious performance problem. Is there a python trick for doing this in a vectorized manner? Basically I'm trying to implement a 1st order filter, but where the filter characteristics may change from one sample to the next. If it were just one filter I would use numpy.signal.lfilter(), but it's a bit trickier. Here's the snippet of code that goes very slowly:
```
#filter state
state = 0
#perform filtering
for sample in amplitude:
if( sample == 1.0 ): #attack filter
sample = (1.0 - att_coeff) * sample + att_coeff * state
else: #release filter
sample = (1.0 - rel_coeff) * sample + rel_coeff * state
state = sample
``` | You could consider using one of the python-to-native- code converter, among [cython](http://cython.org), [numba](http://numba.pydata.org) or [pythran](https://github.com/serge-sans-paille/pythran).
For instance, running your original code with timeit gives me:
```
$ python -m timeit -s 'from co import co; import numpy as np; a = np.random.random(100000)' 'co(a, .5, .7)'
10 loops, best of 3: 120 msec per loop
```
While annotating it with pythran, as in:
```
#pythran export co(float[], float, float)
def co(amplitude, att_coeff, rel_coeff):
#filter state
state = 0
#perform filtering
for sample in amplitude:
if( sample == 1.0 ): #attack filter
state = (1.0 - att_coeff) * sample + att_coeff * state
else: #release filter
state = (1.0 - rel_coeff) * sample + rel_coeff * state
return state
```
and compiling it with
```
$ pythran co.py
```
gives me:
```
$ python -m timeit -s 'from co import co; import numpy as np; a = np.random.random(100000)' 'co(a, .5, .7)'
1000 loops, best of 3: 253 usec per loop
```
That's roughly a 470x speedup; I would expect Numba and Cython to give similar speedups! |
Asyncio RuntimeError: Event Loop is Closed | 32,598,231 | 5 | 2015-09-16T01:22:12Z | 32,615,276 | 11 | 2015-09-16T17:43:03Z | [
"python",
"python-3.x",
"python-asyncio",
"aiohttp"
] | I'm trying to make a bunch of requests (~1000) using Asyncio and the aiohttp library, but I am running into a problem that I can't find much info on.
When I run this code with 10 urls, it runs just fine. When I run it with 100+ urls, it breaks and gives me `RuntimeError: Event loop is closed` error.
```
import asyncio
import aiohttp
@asyncio.coroutine
def get_status(url):
code = '000'
try:
res = yield from asyncio.wait_for(aiohttp.request('GET', url), 4)
code = res.status
res.close()
except Exception as e:
print(e)
print(code)
if __name__ == "__main__":
urls = ['https://google.com/'] * 100
coros = [asyncio.Task(get_status(url)) for url in urls]
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait(coros))
loop.close()
```
The stack trace can be found [here](https://bpaste.net/show/3ee9e9640987).
Any help or insight would be greatly appreciated as I've been banging my head over this for a few hours now. Obviously this would suggest that an event loop has been closed that should still be open, but I don't see how that is possible. | The bug is filed as <https://github.com/python/asyncio/issues/258>
Stay tuned.
As a quick workaround, I suggest using a custom executor, e.g.
```
import asyncio
import concurrent.futures

loop = asyncio.get_event_loop()
executor = concurrent.futures.ThreadPoolExecutor(5)
loop.set_default_executor(executor)
```
Before finishing your program please do
```
executor.shutdown(wait=True)
loop.close()
``` |
How to make this simple string function "pythonic" | 32,600,848 | 5 | 2015-09-16T06:01:15Z | 32,600,945 | 26 | 2015-09-16T06:08:49Z | [
"python",
"string"
] | Coming from the C/C++ world and being a Python newb, I wrote this simple string function that takes an input string (guaranteed to be ASCII) and returns the last four characters. If there's less than four characters, I want to fill the leading positions with the letter 'A'. (this was not an exercise, but a valuable part of another complex function)
There are dozens of methods of doing this, from brute force, to simple, to elegant. My approach below, while functional, didn't seem "Pythonic".
NOTE: I'm presently using Python 2.6, and performance is NOT an issue. The input strings are short (2-8 characters), and I call this function only a few thousand times.
```
def copyFourTrailingChars(src_str):
four_char_array = bytearray("AAAA")
xfrPos = 4
for x in src_str[::-1]:
xfrPos -= 1
four_char_array[xfrPos] = x
if xfrPos == 0:
break
return str(four_char_array)
input_str = "7654321"
print("The output of {0} is {1}".format(input_str, copyFourTrailingChars(input_str)))
input_str = "21"
print("The output of {0} is {1}".format(input_str, copyFourTrailingChars(input_str)))
```
The output is:
```
The output of 7654321 is 4321
The output of 21 is AA21
```
Suggestions from Pythoneers? | I would use simple slicing and then [`str.rjust()`](https://docs.python.org/2/library/stdtypes.html#str.rjust) to right-justify the result, using `'A'` as the `fillchar`. Example:
```
def copy_four(s):
return s[-4:].rjust(4,'A')
```
Demo -
```
>>> copy_four('21')
'AA21'
>>> copy_four('1233423')
'3423'
``` |
Converting this list into Dictionary using Python | 32,602,353 | 4 | 2015-09-16T07:32:10Z | 32,602,472 | 11 | 2015-09-16T07:38:34Z | [
"python",
"list"
] | ```
list = ['Name=Sachin\n', 'country=India\n', 'game=cricket\n']
```
I want this list in a dictionary with keys as `Name`, `country`, `game` and values as `Sachin`, `India`, `cricket` as corresponding values. I got this list using `readlines()` from a text file. | ```
>>> lst = ['Name=Sachin\n', 'country=India\n', 'game=cricket\n']
>>> result = dict(e.strip().split('=') for e in lst)
>>> print(result)
{'Name': 'Sachin', 'country': 'India', 'game': 'cricket'}
``` |
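If a value could itself contain an `=` (say `motto=play=win`), limit the split to the first separator with the `maxsplit` argument; a sketch:

```python
lst = ['Name=Sachin\n', 'country=India\n', 'motto=play=win\n']

# split('=', 1) splits on the first '=' only, so later ones stay in the value
result = dict(line.strip().split('=', 1) for line in lst)
print(result)
# {'Name': 'Sachin', 'country': 'India', 'motto': 'play=win'}
```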
Error when loading rpy2 with anaconda | 32,602,578 | 5 | 2015-09-16T07:43:33Z | 32,617,616 | 7 | 2015-09-16T19:59:45Z | [
"python",
"anaconda",
"rpy2"
] | I am trying to load `rpy2` inside a project where I am working with `anaconda` and I am getting a surprising error for which I cannot find a solution.
My python version is `3.4`, my anaconda version is `3.17.0` - I am using a Mac (OSX Yosemite version 10.10.4)
`R version 3.2.2 (2015-08-14) -- "Fire Safety"`
`Platform: x86_64-apple-darwin11.4.2 (64-bit)`
```
try:
from rpy2 import robjects
except ImportError as e:
print(e)
```
I am getting this error message
> dlopen(/Users/user1/anaconda/lib/python3.4/site-packages/rpy2/rinterface/\_rinterface.so,
> 2): Library not loaded: @rpath/R/lib/libR.dylib Referenced from:
>
> /Users/user1/anaconda/lib/python3.4/site-packages/rpy2/rinterface/\_rinterface.so
> Reason: image not found
Thanks in advance for your help | I just built an updated rpy2 2.7.0 against R 3.2.2. Can you run
```
conda install -c r rpy2
```
and see if that fixes it? |
Create feature vector programmatically in Spark ML / pyspark | 32,606,294 | 4 | 2015-09-16T10:39:53Z | 32,607,349 | 10 | 2015-09-16T11:29:29Z | [
"python",
"apache-spark",
"pyspark",
"apache-spark-ml"
] | I'm wondering if there is a concise way to run ML (e.g KMeans) on a DataFrame in pyspark if I have the features in multiple numeric columns.
I.e. as in the `Iris` dataset:
```
(a1=5.1, a2=3.5, a3=1.4, a4=0.2, id=u'id_1', label=u'Iris-setosa', binomial_label=1)
```
I'd like to use KMeans without recreating the DataSet with the feature vector added manually as a new column and the original columns hardcoded repeatedly in the code.
The solution I'd like to improve:
```
from pyspark.mllib.linalg import Vectors
from pyspark.sql.types import Row
from pyspark.ml.clustering import KMeans, KMeansModel
iris = sqlContext.read.parquet("/opt/data/iris.parquet")
iris.first()
# Row(a1=5.1, a2=3.5, a3=1.4, a4=0.2, id=u'id_1', label=u'Iris-setosa', binomial_label=1)
df = iris.map(lambda r: Row(
id = r.id,
a1 = r.a1,
a2 = r.a2,
a3 = r.a3,
a4 = r.a4,
label = r.label,
binomial_label=r.binomial_label,
features = Vectors.dense(r.a1, r.a2, r.a3, r.a4))
).toDF()
kmeans_estimator = KMeans()\
.setFeaturesCol("features")\
.setPredictionCol("prediction")\
kmeans_transformer = kmeans_estimator.fit(df)
predicted_df = kmeans_transformer.transform(df).drop("features")
predicted_df.first()
# Row(a1=5.1, a2=3.5, a3=1.4, a4=0.2, binomial_label=1, id=u'id_1', label=u'Iris-setosa', prediction=1)
```
I'm looking for a solution, which is something like:
```
feature_cols = ["a1", "a2", "a3", "a4"]
prediction_col_name = "prediction"
<dataframe independent code for KMeans>
<New dataframe is created, extended with the `prediction` column.>
``` | You can use [`VectorAssembler`](http://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.feature.VectorAssembler):
```
from pyspark.ml.feature import VectorAssembler
ignore = ['id', 'label', 'binomial_label']
assembler = VectorAssembler(
inputCols=[x for x in df.columns if x not in ignore],
outputCol='features')
assembler.transform(df)
```
It can be combined with k-means using ML Pipeline:
```
from pyspark.ml import Pipeline
pipeline = Pipeline(stages=[assembler, kmeans_estimator])
model = pipeline.fit(df)
``` |
splitting lines with " from an infile in python | 32,608,338 | 5 | 2015-09-16T12:14:58Z | 32,608,386 | 8 | 2015-09-16T12:17:49Z | [
"python",
"split"
] | I have a series of input files such as:
```
chr1 hg19_refFlat exon 44160380 44160565 0.000000 + . gene_id "KDM4A"; transcript_id "KDM4A";
chr1 hg19_refFlat exon 19563636 19563732 0.000000 - . gene_id "EMC1"; transcript_id "EMC1";
chr1 hg19_refFlat exon 52870219 52870551 0.000000 + . gene_id "PRPF38A"; transcript_id "PRPF38A";
chr1 hg19_refFlat exon 53373540 53373626 0.000000 - . gene_id "ECHDC2"; transcript_id "ECHDC2_dup2";
chr1 hg19_refFlat exon 11839859 11840067 0.000000 + . gene_id "C1orf167"; transcript_id "C1orf167";
chr1 hg19_refFlat exon 29037032 29037154 0.000000 + . gene_id "GMEB1"; transcript_id "GMEB1";
chr1 hg19_refFlat exon 103356007 103356060 0.000000 - . gene_id "COL11A1"; transcript_id "COL11A1";
```
in my code I am trying to capture 2 elements from each line, the first is the number after where it says exon, the second is the gene (the number and letter combo surrounded by "", e.g. "KDM4A". Here is my code:
```
with open(infile,'r') as r:
start = set([line.strip().split()[3] for line in r])
genes = set([line.split('"')[1] for line in r])
print len(start)
print len(genes)
```
for some reason start works fine but genes is not capturing anything. Here is the output:
```
48050
0
```
I figure this is something to do with the "" surrounding the gene name but if I enter this on the terminal it works fine:
```
>>> x = 'A b P "G" m'
>>> x
'A b P "G" m'
>>> x.split('"')[1]
'G'
>>>
```
Any solutions would be much appreciated, even if it's a completely different way of capturing the 2 items of data from each line. Thanks | It is because your file object is exhausted after you loop over it once, here: `start = set([line.strip().split()[3] for line in r])`. You are then trying to loop again, here: `genes = set([line.split('"')[1] for line in r])`, over the already exhausted file object.
**Solution:**
You could seek to the start of the file (this is one of the solutions)
**Modification to your code:**
```
with open(infile,'r') as r:
start = set([line.strip().split()[3] for line in r])
r.seek(0, 0)
genes = set([line.split('"')[1] for line in r])
print len(start)
print len(genes)
``` |
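An alternative to rewinding is to read the file once into a list and build both sets from that list; sketched here with two sample lines from the question standing in for the file:

```python
lines = [
    'chr1 hg19_refFlat exon 44160380 44160565 0.000000 + . gene_id "KDM4A"; transcript_id "KDM4A";',
    'chr1 hg19_refFlat exon 19563636 19563732 0.000000 - . gene_id "EMC1"; transcript_id "EMC1";',
]
# in the real script: with open(infile) as r: lines = r.readlines()

start = set(line.strip().split()[3] for line in lines)
genes = set(line.split('"')[1] for line in lines)

print(start)  # {'44160380', '19563636'} (set order may vary)
print(genes)  # {'KDM4A', 'EMC1'} (set order may vary)
```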
Replace keys in a dictionary | 32,623,405 | 10 | 2015-09-17T05:54:17Z | 32,623,459 | 15 | 2015-09-17T05:59:22Z | [
"python",
"string",
"python-2.7",
"dictionary"
] | I have a dictionary
```
event_types={"as":0,"ah":0,"es":0,"eh":0,"os":0,"oh":0,"cs":0,"ch":0}
```
How can I replace `a` by `append`, `s` by `see`, `h` by `horse`, `e` by `exp` with a space in between.
Output something like:
```
{"append see":0,"append horse":0,"exp horse":0....}
``` | Have another dictionary with your replacements, like this
```
keys = {"a": "append", "h": "horse", "e": "exp", "s": "see"}
```
Now, if your events look like this
```
event_types = {"as": 0, "ah": 0, "es": 0, "eh": 0}
```
Simply reconstruct it with dictionary comprehension, like this
```
>>> {" ".join([keys[char] for char in k]): v for k, v in event_types.items()}
{'exp see': 0, 'exp horse': 0, 'append see': 0, 'append horse': 0}
```
Here, `" ".join([keys[char] for char in k])` iterates over the characters in `k`, fetches the corresponding words from the `keys` dictionary, and forms a list. Then the elements of the list are joined with a space character to get the desired key. |
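If an event key might contain a character that is missing from the mapping, `dict.get` with the character itself as the default avoids a `KeyError`; a sketch (the `'xh'` key is made up for illustration):

```python
keys = {"a": "append", "h": "horse", "e": "exp", "s": "see"}
event_types = {"as": 0, "xh": 0}   # 'x' has no mapping

# keys.get(c, c) falls back to the character itself when it's unmapped
result = {" ".join(keys.get(c, c) for c in k): v
          for k, v in event_types.items()}
print(result)
# {'append see': 0, 'x horse': 0}
```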
How to serialize custom user model in DRF | 32,638,268 | 2 | 2015-09-17T19:13:00Z | 32,638,296 | 7 | 2015-09-17T19:15:44Z | [
"python",
"django",
"serialization",
"django-rest-framework",
"django-serializer"
] | I have made a custom user model,by referring the [tutorial](http://www.django-rest-framework.org/tutorial/4-authentication-and-permissions/) , this is how I serialize the new user model:
**Serializers.py**
```
from django.conf import settings
User = settings.AUTH_USER_MODEL
class UserSerializer(serializers.ModelSerializer):
post = serializers.PrimaryKeyRelatedField(many=True, queryset=Listing.objects.all())
class Meta(object):
model = User
fields = ('username', 'email','post')
```
**Views.py**
```
from django.conf import settings
User = settings.AUTH_USER_MODEL
class UserList(generics.ListAPIView):
queryset = User.objects.all()
serializer_class = UserSerializer
```
But when I tried to use this serializer,I get
> 'str' object has no attribute '\_meta'
What did I do wrong? | Instead of
```
User = settings.AUTH_USER_MODEL
```
use
```
from django.contrib.auth import get_user_model
User = get_user_model()
```
Remember that `settings.AUTH_USER_MODEL` is just a `string` that indicates which user model you will use not the model itself. If you want to get the model, use `get_user_model` |
Is it possible to "transfer" a session between selenium.webdriver and requests.session | 32,639,014 | 6 | 2015-09-17T20:03:40Z | 32,639,151 | 8 | 2015-09-17T20:11:35Z | [
"python",
"session",
"selenium",
"browser",
"python-requests"
] | In theory, if I copy all of the cookies from selenium's `webdriver` object to `requests.Session` object, would requests be able to continue on as if the session was not interrupted?
Specifically, I am interested in writing automation where I get to a specific location on the webpage via selenium, then pass a certain download link to `requests`, which would download and verify specific bytes out of the file, and sometimes the full file. (The value of the file downloaded would change based on my interaction in selenium.) | Yes, it will definitely work. The following code snippet should help as well:
```
headers = {
"User-Agent":
"Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36"
}
s = requests.session()
s.headers.update(headers)
for cookie in driver.get_cookies():
c = {cookie['name']: cookie['value']}
s.cookies.update(c)
``` |
Python Flask: keeping track of user sessions? How to get Session Cookie ID? | 32,640,090 | 4 | 2015-09-17T21:14:44Z | 32,643,135 | 9 | 2015-09-18T03:00:47Z | [
"python",
"session",
"web-applications",
"flask",
"flask-login"
] | I want to build a simple webapp as part of my learning activity. The webapp is supposed to ask the user to input their email\_id if it encounters a first-time visitor; otherwise it remembers the user through a cookie and automatically logs him/her in to carry out the functions.
This is my first time creating a user-based web app. I have a blueprint in my mind but I am unable to figure out how to implement it. Primarily I am confused about the way of collecting the user cookie. I have looked into various tutorials and flask\_login, but I think what I want to implement is much simpler than what flask\_login provides.
Here is what I have so far (it is rudimentary and meant to communicate my use case):
```
from flask import render_template, request, redirect, url_for
@app.route("/", methods= ["GET"])
def first_page():
cookie = response.headers['cookie']
if database.lookup(cookie):
user = database.get(cookie) # it returns user_email related to that cookie id
else:
return redirect_url(url_for('login'))
data = generateSomeData() # some function
return redirect(url_for('do_that'), user_id, data, stats)
@app.route('/do_that', methods =['GET'])
def do_that(user_id):
return render_template('interface.html', user_id, stats,data) # it uses Jinja template
@app.route('/submit', methods =["GET"])
def submit():
# i want to get all the information here
user_id = request.form['user_id']# some data
answer = request.form['answer'] # some response to be recorded
data = request.form['data'] # same data that I passed in do_that to keep
database.update(data,answer,user_id)
return redirect(url_for('/do_that'))
@app.route('/login', methods=['GET'])
def login():
return render_template('login.html')
@app.route('/loggedIn', methods =['GET'])
def loggedIn():
cookie = response.headers['cookie']
user_email = response.form['user_email']
database.insert(cookie, user_email)
return redirect(url_for('first_page'))
``` | You can access request cookies through the [`request.cookies` dictionary](http://flask.pocoo.org/docs/0.10/api/#flask.Request.cookies) and set cookies by using either `make_response` or just storing the result of calling `render_template` in a variable and then calling [`set_cookie` on the response object](http://flask.pocoo.org/docs/0.10/api/#flask.Response.set_cookie):
```
@app.route("/")
def home():
user_id = request.cookies.get('YourSessionCookie')
if user_id:
user = database.get(user_id)
if user:
# Success!
return render_template('welcome.html', user=user)
else:
return redirect(url_for('login'))
else:
return redirect(url_for('login'))
@app.route("/login", methods=["GET", "POST"])
def login():
if request.method == "POST":
# You should really validate that these fields
# are provided, rather than displaying an ugly
# error message, but for the sake of a simple
# example we'll just assume they are provided
user_name = request.form["name"]
password = request.form["password"]
user = db.find_by_name_and_password(user_name, password)
if not user:
# Again, throwing an error is not a user-friendly
# way of handling this, but this is just an example
raise ValueError("Invalid username or password supplied")
# Note we don't *return* the response immediately
response = redirect(url_for("do_that"))
response.set_cookie('YourSessionCookie', user.id)
return response
@app.route("/do-that")
def do_that():
user_id = request.cookies.get('YourSessionCookie')
if user_id:
user = database.get(user_id)
if user:
# Success!
return render_template('do_that.html', user=user)
else:
return redirect(url_for('login'))
else:
return redirect(url_for('login'))
```
### DRYing up the code
Now, you'll note there is a *lot* of boilerplate in the `home` and `do_that` methods, all related to login. You can avoid that by writing your own decorator (see [*What is a decorator*](http://stackoverflow.com/a/1594484/135978) if you want to learn more about them):
```
from functools import wraps
from flask import flash
def login_required(function_to_protect):
@wraps(function_to_protect)
def wrapper(*args, **kwargs):
user_id = request.cookies.get('YourSessionCookie')
if user_id:
user = database.get(user_id)
if user:
# Success!
return function_to_protect(*args, **kwargs)
else:
flash("Session exists, but user does not exist (anymore)")
return redirect(url_for('login'))
else:
flash("Please log in")
return redirect(url_for('login'))
return wrapper
```
Then your `home` and `do_that` methods get *much* shorter:
```
# Note that login_required needs to come before app.route
# Because decorators are applied from closest to furthest
# and we don't want to route and then check login status
@app.route("/")
@login_required
def home():
# For bonus points we *could* store the user
# in a thread-local so we don't have to hit
# the database again (and we get rid of *this* boilerplate too).
user = database.get(request.cookies['YourSessionCookie'])
return render_template('welcome.html', user=user)
@app.route("/do-that")
@login_required
def do_that():
user = database.get(request.cookies['YourSessionCookie'])
return render_template('welcome.html', user=user)
```
### Using what's provided
If you don't *need* your cookie to have a particular name, I would recommend using [`flask.session`](http://flask.pocoo.org/docs/0.10/quickstart/#sessions) as it already has a lot of niceties built into it (it's signed so it can't be tampered with, can be set to be HTTP only, etc.). That DRYs up our `login_required` decorator even more:
```
# You have to set the secret key for sessions to work
# Make sure you keep this secret
app.secret_key = 'something simple for now'
from flask import flash, session
def login_required(function_to_protect):
@wraps(function_to_protect)
def wrapper(*args, **kwargs):
user_id = session.get('user_id')
if user_id:
user = database.get(user_id)
if user:
# Success!
return function_to_protect(*args, **kwargs)
else:
flash("Session exists, but user does not exist (anymore)")
return redirect(url_for('login'))
else:
flash("Please log in")
return redirect(url_for('login'))
```
And then your individual methods can get the user via:
```
user = database.get(session['user_id'])
``` |
Combine (join) networkx Graphs | 32,652,149 | 5 | 2015-09-18T12:33:33Z | 32,697,415 | 7 | 2015-09-21T14:11:23Z | [
"python",
"graph-theory",
"networkx"
] | Say I have two networkx graphs, `G` and `H`:
```
G=nx.Graph()
fromnodes=[0,1,1,1,1,1,2]
tonodes=[1,2,3,4,5,6,7]
for x,y in zip(fromnodes,tonodes):
G.add_edge(x,y)
H=nx.Graph()
fromnodes=range(2,8)
tonodes=range(8,14)
for x,y in zip(fromnodes,tonodes):
H.add_edge(x,y)
```
**What is the best way to join the two networkx graphs?**
I'd like to preserve the node names (note the common nodes, 2 to 7). When I used `nx.disjoint_union(G,H)`, this did not happen:
```
>>> G.nodes()
[0, 1, 2, 3, 4, 5, 6, 7]
>>> H.nodes()
[2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
>>> Un= nx.disjoint_union(G,H)
>>> Un.nodes()
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
#
```
The `H` node labels were changed (not what I want). I want to join the graphs at the nodes with the same number.
*Note. This is not a duplicate of [Combine two weighted graphs in NetworkX](http://stackoverflow.com/questions/21806751/combine-two-weighted-graphs-in-networkx).* | The function you're looking for is [compose](https://networkx.readthedocs.org/en/stable/reference/generated/networkx.algorithms.operators.binary.compose.html).
```
import networkx as nx
G=nx.Graph()
G.add_edge(1,2)
H=nx.Graph()
H.add_edge(1,3)
F = nx.compose(G,H)
F.edges()
> [(1, 2), (1, 3)]
```
There are also other options to do the [symmetric difference, intersection](https://networkx.readthedocs.org/en/stable/reference/algorithms.operators.html), ... |
Python Function Return Loop | 32,653,496 | 10 | 2015-09-18T13:38:47Z | 32,653,596 | 18 | 2015-09-18T13:43:18Z | [
"python",
"recursion"
] | Ok, so this piece of code is from a practice question at my school. We are to mentally parse the code and check the answer.
When I first parsed it, I got 4. I copied the code and ran it through IDLE and got 8. I ran the debugger and saw that the `else: return` branch keeps looping through the if/else until `x == 0`, and then it returns 1.
I do not understand how return 1 is coming out to 8.
```
def foo(x=5):
if x == 0:
return 1
else:
return 2*foo(x-1)
print(foo(3))
```
I understand that it is calling `foo(x-1)` inside the function `foo(x=5)`, which makes it check the if/else again and again until `x == 0`, and then it returns 1. How does returning 1 end up printing 8? | You will make the following calls to foo:
```
foo(3) -> foo(2) -> foo(1) -> foo(0)
```
those will return
```
foo(0) -> 1
foo(1) -> 2 * foo(0) -> 2 * 1 -> 2
foo(2) -> 2 * foo(1) -> 2 * 2 -> 4
foo(3) -> 2 * foo(2) -> 2 * 4 -> 8
```
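This unwinding is easy to watch by instrumenting the function with a `depth` argument (a quick sketch; the tracing parameter is not part of the original exercise):

```python
def foo(x=5, depth=0):
    indent = '  ' * depth
    print(indent + 'foo(%d) called' % x)
    if x == 0:
        result = 1
    else:
        result = 2 * foo(x - 1, depth + 1)
    print(indent + 'foo(%d) returns %d' % (x, result))
    return result

print(foo(3))  # 8
```

Running it shows every call opening before any multiplication happens, and then the result doubling at each level on the way back up.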
Is it clear now? |
Python Function Return Loop | 32,653,496 | 10 | 2015-09-18T13:38:47Z | 32,653,604 | 11 | 2015-09-18T13:43:44Z | [
"python",
"recursion"
] | Ok, so this piece of code is from a practice question at my school. We are to mentally parse the code and check the answer.
When I first parsed it, I got 4. I copied the code and ran it through IDLE and got 8. I ran the debugger and saw that the `else: return` branch keeps looping through the if/else until `x == 0`, and then it returns 1.
I do not understand how return 1 is coming out to 8.
```
def foo(x=5):
if x == 0:
return 1
else:
return 2*foo(x-1)
print(foo(3))
```
I understand that it is calling `foo(x-1)` inside the function `foo(x=5)`, which makes it check the if/else again and again until `x == 0`, and then it returns 1. How does returning 1 end up printing 8? | I think you're having the right idea (otherwise you wouldn't have gotten the answer 4); you're simply aborting too early in your mental exercise.
You can keep track of the variables by tabulating them while going through the code:
* `foo(3)`
  + calls `foo(3 - 1)` → `foo(2)`
    - calls `foo(2 - 1)` → `foo(1)`
      * calls `foo(1 - 1)` → `foo(0)`
        + returns `1`
      * returns `2 * foo(1 - 1)` → `2`
    - returns `2 * foo(2 - 1)` → `4`
  + returns `2 * foo(3 - 1)` → `8` |
How to create simple 3-layer neural network and teach it using supervised learning? | 32,655,573 | 6 | 2015-09-18T15:19:25Z | 32,683,367 | 8 | 2015-09-20T19:26:45Z | [
"python",
"python-2.7",
"pybrain"
] | Based on [PyBrain's tutorials](http://pybrain.org/docs/index.html#tutorials) I managed to knock together the following code:
```
#!/usr/bin/env python2
# coding: utf-8
from pybrain.structure import FeedForwardNetwork, LinearLayer, SigmoidLayer, FullConnection
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer
n = FeedForwardNetwork()
inLayer = LinearLayer(2)
hiddenLayer = SigmoidLayer(3)
outLayer = LinearLayer(1)
n.addInputModule(inLayer)
n.addModule(hiddenLayer)
n.addOutputModule(outLayer)
in_to_hidden = FullConnection(inLayer, hiddenLayer)
hidden_to_out = FullConnection(hiddenLayer, outLayer)
n.addConnection(in_to_hidden)
n.addConnection(hidden_to_out)
n.sortModules()
ds = SupervisedDataSet(2, 1)
ds.addSample((0, 0), (0,))
ds.addSample((0, 1), (1,))
ds.addSample((1, 0), (1,))
ds.addSample((1, 1), (0,))
trainer = BackpropTrainer(n, ds)
# trainer.train()
trainer.trainUntilConvergence()
print n.activate([0, 0])[0]
print n.activate([0, 1])[0]
print n.activate([1, 0])[0]
print n.activate([1, 1])[0]
```
It's supposed to learn XOR function, but the results seem quite random:
> 0.208884929522
>
> 0.168926515771
>
> 0.459452834043
>
> 0.424209192223
or
> 0.84956138664
>
> 0.888512762786
>
> 0.564964077401
>
> 0.611111147862 | There are four problems with your approach, all easy to identify after reading [Neural Network FAQ](ftp://ftp.sas.com/pub/neural/FAQ.html):
* [Why use a bias/threshold?](ftp://ftp.sas.com/pub/neural/FAQ2.html#A_bias): you should add a bias node. Lack of bias makes the learning very limited: the separating hyperplane represented by the network can only pass through the origin. With the bias node, it can move freely and fit the data better:
```
bias = BiasUnit()
n.addModule(bias)
bias_to_hidden = FullConnection(bias, hiddenLayer)
n.addConnection(bias_to_hidden)
```
* [Why not code binary inputs as 0 and 1?](ftp://ftp.sas.com/pub/neural/FAQ2.html#A_binary): all your samples lay in a single quadrant of the sample space. Move them to be scattered around the origin:
```
ds = SupervisedDataSet(2, 1)
ds.addSample((-1, -1), (0,))
ds.addSample((-1, 1), (1,))
ds.addSample((1, -1), (1,))
ds.addSample((1, 1), (0,))
```
(Fix the validation code at the end of your script accordingly.)
* `trainUntilConvergence` method works using validation, and does something that resembles the [early stopping method](ftp://ftp.sas.com/pub/neural/FAQ3.html#A_stop). This doesn't make sense for such a small dataset. Use `trainEpochs` instead. `1000` epochs is more than enough for this problem for the network to learn:
```
trainer.trainEpochs(1000)
```
* [What learning rate should be used for backprop?](ftp://ftp.sas.com/pub/neural/FAQ2.html#A_learn_rate): Tune the learning rate parameter. This is something you do every time you employ a neural network. In this case, the value `0.1` or even `0.2` dramatically increases the learning speed:
```
trainer = BackpropTrainer(n, dataset=ds, learningrate=0.1, verbose=True)
```
(Note the `verbose=True` parameter. Observing how the error behaves is essential when tuning parameters.)
With these fixes I get consistent, and correct results for the given network with the given dataset, and error less than `1e-23`. |
Histogram matching of two images in Python 2.x? | 32,655,686 | 4 | 2015-09-18T15:24:38Z | 33,047,048 | 9 | 2015-10-09T20:56:26Z | [
"python",
"numpy",
"image-processing",
"histogram"
] | I'm trying to match the histograms of two images (in MATLAB this could be done using [`imhistmatch`](http://www.mathworks.com/help/images/ref/imhistmatch.html)). Is there an equivalent function available from a standard Python library? I've looked at OpenCV, scipy, and numpy but don't see any similar functionality. | I previously wrote an answer [here](http://stackoverflow.com/a/31493356/1461210) explaining how to do piecewise linear interpolation on an image histogram in order to enforce particular ratios of highlights/midtones/shadows.
The same basic principles underlie [histogram matching](https://en.wikipedia.org/wiki/Histogram_matching) between two images. Essentially you compute the cumulative histograms for your source and template images, then interpolate linearly to find the unique pixel values in the template image that most closely match the quantiles of the unique pixel values in the source image:
```
import numpy as np
def hist_match(source, template):
"""
Adjust the pixel values of a grayscale image such that its histogram
matches that of a target image
Arguments:
-----------
source: np.ndarray
Image to transform; the histogram is computed over the flattened
array
template: np.ndarray
Template image; can have different dimensions to source
Returns:
-----------
matched: np.ndarray
The transformed output image
"""
oldshape = source.shape
source = source.ravel()
template = template.ravel()
# get the set of unique pixel values and their corresponding indices and
# counts
s_values, bin_idx, s_counts = np.unique(source, return_inverse=True,
return_counts=True)
t_values, t_counts = np.unique(template, return_counts=True)
# take the cumsum of the counts and normalize by the number of pixels to
# get the empirical cumulative distribution functions for the source and
# template images (maps pixel value --> quantile)
s_quantiles = np.cumsum(s_counts).astype(np.float64)
s_quantiles /= s_quantiles[-1]
t_quantiles = np.cumsum(t_counts).astype(np.float64)
t_quantiles /= t_quantiles[-1]
# interpolate linearly to find the pixel values in the template image
# that correspond most closely to the quantiles in the source image
interp_t_values = np.interp(s_quantiles, t_quantiles, t_values)
return interp_t_values[bin_idx].reshape(oldshape)
```
For example:
```
from matplotlib import pyplot as plt
from scipy.misc import lena, ascent
source = lena()
template = ascent()
matched = hist_match(source, template)
def ecdf(x):
"""convenience function for computing the empirical CDF"""
vals, counts = np.unique(x, return_counts=True)
ecdf = np.cumsum(counts).astype(np.float64)
ecdf /= ecdf[-1]
return vals, ecdf
x1, y1 = ecdf(source.ravel())
x2, y2 = ecdf(template.ravel())
x3, y3 = ecdf(matched.ravel())
fig = plt.figure()
gs = plt.GridSpec(2, 3)
ax1 = fig.add_subplot(gs[0, 0])
ax2 = fig.add_subplot(gs[0, 1], sharex=ax1, sharey=ax1)
ax3 = fig.add_subplot(gs[0, 2], sharex=ax1, sharey=ax1)
ax4 = fig.add_subplot(gs[1, :])
for aa in (ax1, ax2, ax3):
aa.set_axis_off()
ax1.imshow(source, cmap=plt.cm.gray)
ax1.set_title('Source')
ax2.imshow(template, cmap=plt.cm.gray)
ax2.set_title('template')
ax3.imshow(matched, cmap=plt.cm.gray)
ax3.set_title('Matched')
ax4.plot(x1, y1 * 100, '-r', lw=3, label='Source')
ax4.plot(x2, y2 * 100, '-k', lw=3, label='Template')
ax4.plot(x3, y3 * 100, '--r', lw=3, label='Matched')
ax4.set_xlim(x1[0], x1[-1])
ax4.set_xlabel('Pixel value')
ax4.set_ylabel('Cumulative %')
ax4.legend(loc=5)
```
[](http://i.stack.imgur.com/GWtt1.jpg)
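The same matching extends to colour images by treating each channel independently. A sketch follows (the compact `hist_match` here merely restates the function above so the snippet runs standalone; the random arrays are placeholder data):

```python
import numpy as np

def hist_match(source, template):
    # compact restatement of the grayscale matcher defined above
    oldshape = source.shape
    s_values, bin_idx, s_counts = np.unique(source.ravel(),
                                            return_inverse=True,
                                            return_counts=True)
    t_values, t_counts = np.unique(template.ravel(), return_counts=True)
    s_quantiles = np.cumsum(s_counts).astype(np.float64)
    s_quantiles /= s_quantiles[-1]
    t_quantiles = np.cumsum(t_counts).astype(np.float64)
    t_quantiles /= t_quantiles[-1]
    interp = np.interp(s_quantiles, t_quantiles, t_values)
    return interp[bin_idx].reshape(oldshape)

def hist_match_rgb(source, template):
    # match each colour channel against the corresponding template channel
    channels = [hist_match(source[..., c], template[..., c])
                for c in range(source.shape[-1])]
    return np.stack(channels, axis=-1)

src = np.random.rand(8, 8, 3)
tmpl = np.random.rand(16, 16, 3)
out = hist_match_rgb(src, tmpl)
print(out.shape)  # (8, 8, 3)
```

Note that matching channels independently can shift hues; for photographic work, matching in a luminance/chrominance space is sometimes preferred.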
For a pair of RGB images you could apply this function separately to each color channel. |
How to run a python 3.4 script as an executable? | 32,656,758 | 2 | 2015-09-18T16:22:45Z | 32,656,911 | 9 | 2015-09-18T16:32:00Z | [
"python",
"bash"
] | Say, I have a python script named `hello.py`, which I run on mac as:
```
$ python hello.py
```
What do I need to do to run it as:
```
$ hello
``` | Add a "shebang" to the top of the file to tell it how to run your script.
```
#!/usr/bin/env python
```
Then you need to mark the script as "executable":
```
chmod +x hello.py
```
Then you can just run it as `./hello.py` instead of `python hello.py`.
To run it as just `hello`, you can rename the file from `hello.py` to `hello` and then copy it into a folder in your `$PATH`. |
parallelized algorithm for evaluating a 1-d array of functions on a same-length 1d numpy array | 32,669,916 | 5 | 2015-09-19T15:22:46Z | 32,671,603 | 7 | 2015-09-19T18:06:36Z | [
"python",
"performance",
"numpy",
"parallel-processing",
"scientific-computing"
] | The upshot of the below is that I have an embarrassingly parallel for loop that I am trying to thread. There's a bit of rigamarole to explain the problem, but despite all the verbosity, I **think** this should be a rather trivial problem that the multiprocessing module is designed to solve easily.
I have a large length-N array of k distinct functions, and a length-N array of abcissa. Thanks to the clever solution provided by @senderle described in [Efficient algorithm for evaluating a 1-d array of functions on a same-length 1d numpy array](http://stackoverflow.com/questions/28459896/efficient-algorithm-for-evaluating-a-1-d-array-of-functions-on-a-same-length-1d), I have a fast numpy-based algorithm that I can use to evaluate the functions at the abcissa to return a length-N array of ordinates:
```
def apply_indexed_fast(abcissa, func_indices, func_table):
""" Returns the output of an array of functions evaluated at a set of input points
if the indices of the table storing the required functions are known.
Parameters
----------
func_table : array_like
Length k array of function objects
abcissa : array_like
Length Npts array of points at which to evaluate the functions.
func_indices : array_like
Length Npts array providing the indices to use to choose which function
operates on each abcissa element. Thus func_indices is an array of integers
ranging between 0 and k-1.
Returns
-------
out : array_like
Length Npts array giving the evaluation of the appropriate function on each
abcissa element.
"""
func_argsort = func_indices.argsort()
func_ranges = list(np.searchsorted(func_indices[func_argsort], range(len(func_table))))
func_ranges.append(None)
out = np.zeros_like(abcissa)
for i in range(len(func_table)):
f = func_table[i]
start = func_ranges[i]
end = func_ranges[i+1]
ix = func_argsort[start:end]
out[ix] = f(abcissa[ix])
return out
```
What I'm now trying to do is use multiprocessing to parallelize the for loop inside this function. Before describing my approach, for clarity I'll briefly sketch how the algorithm @senderle developed works. If you can read the above code and understand it immediately, just skip the next paragraph of text.
First we find the array of indices that sorts the input *func\_indices*, which we use to define the length-k *func\_ranges* array of integers. The integer entries of *func\_ranges* control the function that gets applied to the appropriate sub-array of the input *abcissa*, which works as follows. Let *f* be the i^th function in the input *func\_table*. Then the slice of the input *abcissa* to which we should apply the function *f* is *slice(func\_ranges[i], func\_ranges[i+1])*. So once *func\_ranges* is calculated, we can just run a simple for loop over the input *func\_table* and successively apply each function object to the appropriate slice, filling our output array. See the code below for a minimal example of this algorithm in action.
```
def trivial_functional(i):
def f(x):
return i*x
return f
k = 250
func_table = np.array([trivial_functional(j) for j in range(k)])
Npts = 1e6
abcissa = np.random.random(Npts)
func_indices = np.random.random_integers(0,len(func_table)-1,Npts)
result = apply_indexed_fast(abcissa, func_indices, func_table)
```
So my goal now is to use multiprocessing to parallelize this calculation. I thought this would be straightforward using my usual trick for threading embarrassingly parallel for loops. But my attempt below raises an exception that I do not understand.
```
from multiprocessing import Pool, cpu_count
def apply_indexed_parallelized(abcissa, func_indices, func_table):
func_argsort = func_indices.argsort()
func_ranges = list(np.searchsorted(func_indices[func_argsort], range(len(func_table))))
func_ranges.append(None)
out = np.zeros_like(abcissa)
num_cores = cpu_count()
pool = Pool(num_cores)
def apply_funci(i):
f = func_table[i]
start = func_ranges[i]
end = func_ranges[i+1]
ix = func_argsort[start:end]
out[ix] = f(abcissa[ix])
pool.map(apply_funci, range(len(func_table)))
pool.close()
return out
result = apply_indexed_parallelized(abcissa, func_indices, func_table)
PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed
```
I have seen this elsewhere on SO: [Multiprocessing: using Pool.map on a function defined in a class](http://stackoverflow.com/questions/3288595/multiprocessing-using-pool-map-on-a-function-defined-in-a-class). One by one, I have tried each method proposed there; in all cases, I get a "too many files open" error because the threads were never closed, or the adapted algorithm simply hangs. This seems like there should be a simple solution since this is nothing more than threading an embarrassingly parallel for loop. | ## Warning/Caveat:
You may not want to apply `multiprocessing` to this problem. You'll find that for relatively simple operations on large arrays, the problem will be memory-bound with `numpy`. The bottleneck is moving data from RAM to the CPU caches. The CPU is starved for data, so throwing more CPUs at the problem doesn't help much. Furthermore, your current approach will pickle and make a copy of the entire array for every item in your input sequence, which adds lots of overhead.
There are plenty of cases where `numpy` + `multiprocessing` is *very* effective, but you need to make sure you're working with a CPU-bound problem. Ideally, it's a CPU-bound problem with relatively small inputs and outputs to alleviate the overhead of pickling the input and output. For many of the problems that `numpy` is most often used for, that's not the case.
---
## Two Problems with Your Current Approach
On to answering your question:
Your immediate error is due to passing in a function that's not accessible from the global scope (i.e. a function defined inside a function).
However, you have another issue. You're treating the numpy arrays as though they're shared memory that can be modified by each process. Instead, when using `multiprocessing` the original array will be pickled (effectively making a copy) and passed to each process independently. The original array will never be modified.
---
## Avoiding the `PicklingError`
As a minimal example to reproduce your error, consider the following:
```
import multiprocessing
def apply_parallel(input_sequence):
def func(x):
pass
pool = multiprocessing.Pool()
pool.map(func, input_sequence)
pool.close()
foo = range(100)
apply_parallel(foo)
```
This will result in:
```
PicklingError: Can't pickle <type 'function'>: attribute lookup
__builtin__.function failed
```
Of course, in this simple example, we could simply move the function definition back into the `__main__` namespace. However, in yours, you need it to refer to data that's passed in. Let's look at an example that's a bit closer to what you're doing:
```
import numpy as np
import multiprocessing
def parallel_rolling_mean(data, window):
data = np.pad(data, window, mode='edge')
ind = np.arange(len(data)) + window
def func(i):
return data[i-window:i+window+1].mean()
pool = multiprocessing.Pool()
result = pool.map(func, ind)
pool.close()
return result
foo = np.random.rand(20).cumsum()
result = parallel_rolling_mean(foo, 10)
```
There are multiple ways you could handle this, but a common approach is something like:
```
import numpy as np
import multiprocessing
class RollingMean(object):
def __init__(self, data, window):
self.data = np.pad(data, window, mode='edge')
self.window = window
def __call__(self, i):
start = i - self.window
stop = i + self.window + 1
return self.data[start:stop].mean()
def parallel_rolling_mean(data, window):
func = RollingMean(data, window)
ind = np.arange(len(data)) + window
pool = multiprocessing.Pool()
result = pool.map(func, ind)
pool.close()
return result
foo = np.random.rand(20).cumsum()
result = parallel_rolling_mean(foo, 10)
```
Great! It works!
---
However, if you scale this up to large arrays, you'll soon find that it will either run very slow (which you can speed up by increasing `chunksize` in the `pool.map` call) or you'll quickly run out of RAM (once you increase the `chunksize`).
`multiprocessing` pickles the input so that it can be passed to separate and independent python processes. This means you're making a copy of the *entire* array for *every* `i` you operate on.
We'll come back to this point in a bit...
---
## `multiprocessing` Does not share memory between processes
The `multiprocessing` module works by pickling the inputs and passing them to independent processes. This means that if you modify something in one process the other process won't see the modification.
However, `multiprocessing` also provides [primitives that live in shared memory](https://docs.python.org/2/library/multiprocessing.html#shared-ctypes-objects) and can be accessed and modified by child processes. There are a [few different ways](https://github.com/sturlamolden/sharedmem-numpy) of [adapting numpy arrays](http://thousandfold.net/cz/2014/05/01/sharing-numpy-arrays-between-processes-using-multiprocessing-and-ctypes/) to use a shared memory `multiprocessing.Array`. However, I'd recommend avoiding those at first (read up on [false sharing](https://en.wikipedia.org/wiki/False_sharing) if you're not familiar with it). There are cases where it's very useful, but it's typically to conserve memory, not to improve performance.
Therefore, it's best to do all modifications to a large array in a single process (this is also a very useful pattern for general IO). It doesn't have to be the "main" process, but it's easiest to think about that way.
As an example, let's say we wanted to have our `parallel_rolling_mean` function take an output array to store things in. A useful pattern is something similar to the following. Notice the use of iterators and modifying the output only in the main process:
```
import numpy as np
import multiprocessing
def parallel_rolling_mean(data, window, output):
def windows(data, window):
padded = np.pad(data, window, mode='edge')
for i in xrange(len(data)):
yield padded[i:i + 2*window + 1]
pool = multiprocessing.Pool()
results = pool.imap(np.mean, windows(data, window))
for i, result in enumerate(results):
output[i] = result
pool.close()
foo = np.random.rand(20000000).cumsum()
output = np.zeros_like(foo)
parallel_rolling_mean(foo, 10, output)
print output
```
Hopefully that example helps clarify things a bit.
---
## `chunksize` and performance
One quick note on performance: If we scale this up, it will get very slow very quickly. If you look at a system monitor (e.g. `top`/`htop`), you may notice that your cores are sitting idle most of the time.
By default, the master process pickles each input for each process and passes it in immediately and then waits until they're finished to pickle the next input. In many cases, this means that the master process works, then sits idle while the worker processes are busy, then the worker processes sit idle while the master process is pickling the next input.
The key is to increase the `chunksize` parameter. This will cause `pool.imap` to "pre-pickle" the specified number of inputs for each process. Basically, the master thread can stay busy pickling inputs and the worker processes can stay busy processing. The downside is that you're using more memory. If each input uses up a large amount of RAM, this can be a bad idea. If it doesn't, though, this can *dramatically* speed things up.
As a quick example:
```
import numpy as np
import multiprocessing
def parallel_rolling_mean(data, window, output):
def windows(data, window):
padded = np.pad(data, window, mode='edge')
for i in xrange(len(data)):
yield padded[i:i + 2*window + 1]
pool = multiprocessing.Pool()
results = pool.imap(np.mean, windows(data, window), chunksize=1000)
for i, result in enumerate(results):
output[i] = result
pool.close()
foo = np.random.rand(2000000).cumsum()
output = np.zeros_like(foo)
parallel_rolling_mean(foo, 10, output)
print output
```
With `chunksize=1000`, it takes 21 seconds to process a 2-million element array:
```
python ~/parallel_rolling_mean.py 83.53s user 1.12s system 401% cpu 21.087 total
```
But with `chunksize=1` (the default) it takes about eight times as long (2 minutes, 41 seconds).
```
python ~/parallel_rolling_mean.py 358.26s user 53.40s system 246% cpu 2:47.09 total
```
In fact, with the default chunksize, it's actually far worse than a single-process implementation of the same thing, which takes only 45 seconds:
```
python ~/sequential_rolling_mean.py 45.11s user 0.06s system 99% cpu 45.187 total
``` |
How can I read keyboard input in Python | 32,671,306 | 2 | 2015-09-19T17:37:55Z | 32,671,356 | 8 | 2015-09-19T17:42:08Z | [
"python",
"python-2.7",
"user-input"
] | I have problem with keyboard input in Python. I tried raw\_input and it is called only once. But I want to read keyboard input every time user press any key. How can I do it? Thanks for answers. | So for instance you have a Python code like this:
### file1.py
```
#!/bin/python
... do some stuff...
```
And at a certain point of the document you want to always check for input:
```
while True:
input = raw_input(">>>")
... do something with the input...
```
That will always wait for input. You can thread that infinite loop as a separate process and do other things in the meanwhile, so that the user input can have an effect in the tasks you are doing.
If you instead want to ask for input ONLY when a key is pressed, and do that as a loop, with this code (taken from [this ActiveState recipe by Steven D'Aprano](http://code.activestate.com/recipes/577977-get-single-keypress/)) you can wait for the key press to happen, and then ask for an input, execute a task and return back to the previous state.
> ```
> import sys
>
> try:
> import tty, termios
> except ImportError:
> # Probably Windows.
> try:
> import msvcrt
> except ImportError:
> # FIXME what to do on other platforms?
> # Just give up here.
> raise ImportError('getch not available')
> else:
> getch = msvcrt.getch
> else:
> def getch():
> """getch() -> key character
>
> Read a single keypress from stdin and return the resulting character.
> Nothing is echoed to the console. This call will block if a keypress
> is not already available, but will not wait for Enter to be pressed.
>
> If the pressed key was a modifier key, nothing will be detected; if
> it were a special function key, it may return the first character of
> of an escape sequence, leaving additional characters in the buffer.
> """
> fd = sys.stdin.fileno()
> old_settings = termios.tcgetattr(fd)
> try:
> tty.setraw(fd)
> ch = sys.stdin.read(1)
> finally:
> termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
> return ch
> ```
So how to deal with this? Well, now just call `getch()` every time you want to wait for a key press. Just like this:
```
while True:
getch() # this also returns the key pressed, if you want to store it
input = raw_input("Enter input")
do_whatever_with_it
```
You can also thread that and do other tasks in the meanwhile.
Remember that Python 3.x no longer uses raw\_input; it simply uses input() instead. |
Failing to change bool var state in loop | 32,672,769 | 2 | 2015-09-19T20:14:53Z | 32,672,786 | 7 | 2015-09-19T20:17:21Z | [
"python",
"if-statement",
"for-loop",
"range"
] | I'm trying to write a prime number checker according to : [Prime equations](http://www.wikihow.com/Check-if-a-Number-Is-Prime)
My code so far looks like this:
```
def primer(x):
prime = False
x = math.sqrt(x)
if type(x) == float:
x = math.ceil(x)
for i in range(3,x + 1):
if (i % 2) == 1:
prime == True
print(prime)
```
Trying with 37 as `primer(37)` ---> sqrt(37) is 6.08
rounded to 7
7 mod 3 is = 1 ( Prime number )
I have no clue why that `prime` var is not updating. Could someone shed some light, please?
Edit : Linking some repl -> [Repl Link](https://repl.it/BJVu/1) | You are not actually assigning:
```
prime == True
```
Should be:
```
prime = True
```
`==` is a comparison operator, `=` is for assignment.
You can use the `any` function to create your prime number checker, also we only need to loop to the sqrt of x and check odd numbers. I also added an implementation of [fermat's little theorem](https://en.wikipedia.org/wiki/Fermat%27s_little_theorem) which is mentioned in the link:
```
def primer(x):
if x == 2:
return True
if x % 2 == 0 or x < 3:
return False
return not any(x % i == 0 for i in range(3, int(x ** .5) + 1, 2))
from random import randint
def fermat_little_theorem(p, conf):
"""
int p to test if prime
conf is the confidence level/how many times to try n ** (p - 1) % p == 1
returns with confidence p is prime based on 1/2^conf
"""
if p <= 1:
return False
for _ in range(conf):
n = randint(1, p - 1)
if not n ** (p - 1) % p == 1:
return False
return True
``` |
Is checking if key is in dictionary and getting it's value in the same "if" safe? | 32,679,180 | 5 | 2015-09-20T12:03:29Z | 32,679,210 | 13 | 2015-09-20T12:06:03Z | [
"python",
"python-3.x",
"dictionary"
] | I think this is safe:
```
if key in test_dict:
if test_dict[key] == 'spam':
print('Spam detected!')
```
but is this safe?
```
if key in test_dict and test_dict[key] == 'spam':
print('Spam detected!')
```
It should do the same thing because condition checking is lazy in Python. It won't try to get the value (and raise an exception because there's no such key in the dict) because the first condition is already not satisfied. But can I rely on the laziness and use the second example in my programs? | Yes, it is safe; Python will short-circuit if the first expression evaluates to `False`, meaning it will not evaluate the second expression in the `if` condition.
But a better way to write your condition would be to use `.get()`, which returns `None` if the key is not present in the dictionary. Example -
```
if test_dict.get(key) == 'spam':
print('Spam detected!')
``` |
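Both behaviours are easy to verify: `and` short-circuits, and `.get()` returns `None` (or a supplied default) for a missing key:

```python
test_dict = {'a': 'spam'}

# Short-circuit: the membership test is False, so the lookup on the
# right-hand side (which would raise KeyError) is never evaluated.
result = 'missing' in test_dict and test_dict['missing'] == 'spam'
print(result)                                # False, no KeyError raised

# .get() returns None for absent keys, or a default if one is given.
print(test_dict.get('missing'))              # None
print(test_dict.get('missing', 'nothing'))   # nothing
print(test_dict.get('a') == 'spam')          # True
```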
How do I get the user to input a number in Python 3? | 32,679,575 | 3 | 2015-09-20T12:46:29Z | 32,679,606 | 7 | 2015-09-20T12:51:03Z | [
"python",
"python-3.x"
] | I'm trying to make a quiz using Python 3. The quiz randomly generates two separate numbers and operator. But when I try to get the user to input their answer, this shows up in the shell:
```
<class 'int'>
```
I'm not sure what I need to do. Even if I type in the correct answer, it always returns as incorrect.
```
import random
import operator
operation=[
(operator.add, "+"),
(operator.mul, "*"),
(operator.sub, "-")
]
num_of_q=10
score=0
name=input("What is your name? ")
class_name=input("Which class are you in? ")
print(name,", welcome to this maths test!")
for _ in range(num_of_q):
num1=random.randint(0,10)
num2=random.randint(1,10)
op,symbol=random.choice(operation)
print("What is",num1,symbol,num2,"?")
if input(int)==(num1,op,num2):
print("Correct")
score += 1
else:
print("Incorrect")
if num_of_q==10:
print(name,"you got",score,"/",num_of_q)
``` | This line is incorrect:
```
if input(int)==(num1,op,num2):
```
You must convert the input to `int` and apply `op` to `num1` and `num2`:
```
if int(input()) == op(num1, num2):
``` |
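To see why the original check always failed: `(num1, op, num2)` builds a 3-tuple rather than applying the operator, and a tuple never equals an int. With hypothetical sample values:

```python
import operator

num1, num2 = 3, 4
op = operator.add  # e.g. the randomly chosen operator

# (num1, op, num2) is just a 3-tuple; comparing it to an int is always False.
print((num1, op, num2) == 7)       # False
# Applying the operator and converting the input gives the intended check.
print(int('7') == op(num1, num2))  # True
```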
How to get the format of image with PIL? | 32,679,589 | 6 | 2015-09-20T12:49:11Z | 32,679,637 | 12 | 2015-09-20T12:54:55Z | [
"python",
"python-imaging-library",
"image-formats"
] | After loading an image file with PIL.Image, how can I determine whether the image file is a PNG/JPG/BMP/GIF? I understand very little about these file formats, can PIL get the `format` metadata from the file header? Or does it need to 'analyze' the data within the file?
If PIL doesn't provide such an API, is there any python library that does? | Try:
```
img = Image.open(filename)
print(img.format) # 'JPEG'
```
More info
* <http://pillow.readthedocs.org/en/latest/reference/Image.html#PIL.Image.format>
* <http://pillow.readthedocs.org/en/latest/handbook/image-file-formats.html> |
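If Pillow is unavailable, common formats can also be recognised from the first few "magic" bytes of the file header, which is essentially what PIL does when opening a file. A minimal sketch (the `sniff_format` function and the handful of signatures below are my own illustration, not a PIL API):

```python
def sniff_format(header):
    """Guess an image format from the first bytes of a file."""
    if header.startswith(b'\x89PNG\r\n\x1a\n'):
        return 'PNG'
    if header.startswith(b'\xff\xd8\xff'):
        return 'JPEG'
    if header.startswith((b'GIF87a', b'GIF89a')):
        return 'GIF'
    if header.startswith(b'BM'):
        return 'BMP'
    return None

# Usage: only the first 8 bytes of the file are needed.
# with open(filename, 'rb') as f:
#     print(sniff_format(f.read(8)))
print(sniff_format(b'\x89PNG\r\n\x1a\n' + b'\x00' * 4))  # PNG
```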
ImportError after successful pip installation | 32,680,081 | 2 | 2015-09-20T13:45:10Z | 32,680,082 | 8 | 2015-09-20T13:45:10Z | [
"python",
"pip"
] | I have successfully installed a library with `pip install <library-name>`. But when I try to import it, python raises `ImportError: No module named <library-name>`. Why do I get this error and how can I use the installed library? | **TL;DR**: There are often multiple versions of python interpreters and pip versions present. Using `python -m pip install <library-name>` instead of `pip install <library-name>` will ensure that the library gets installed into the default python interpreter.
**Please also note:** From my personal experience I would advise against using `sudo pip install` to install packages into the system's default python interpreter. This can lead to various messy issues.
Whenever you are tempted to call `pip` with `sudo`, please check first if a [virtualenv](https://virtualenv.pypa.io/en/latest/) is not a better option for you.
---
Most modern systems ship multiple python interpreters. Each interpreter maintains its own set of installed packages. When installing new packages, it is important to understand into which interpreter those packages are actually installed.
On unix systems the shell can be used to understand what exactly is happening.
Typing `which -a python` shows all interpreters that in your `PATH`. The first line corresponds to the interpreter that is used when you run `python` from the command line.
```
/private/tmp/py32/bin/python
/usr/local/bin/python
/usr/bin/python
```
Each pip version belongs to exactly one interpreter. `which -a pip` shows all pip versions. Again the first line is what will be called when you type `pip` in your shell.
```
/usr/local/bin/pip
/usr/bin/pip
```
Note that in this case `python` belongs to the interpreter installed in `/private/tmp/py32/`, but `pip` installs into the interpreter `/usr/local/bin`. After a successful install of a library, you will not be able to import it in your default python interpreter.
So how do you import the installed library?
Your first option is to start the desired interpreter with its full path. So if you type `/usr/local/bin/python`, you will be able to import the library.
The second - often preferred - option is to specifically invoke the right version of pip. To do so, you can use `python -m pip install <library-name>` instead of `pip install <library-name>`. This will call the pip version that belongs to your default python interpreter. |
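A quick way to check which interpreter is your default (and therefore where `python -m pip install` will put packages) is to inspect `sys.executable` and `sys.path` from within Python:

```python
import sys

# Absolute path of the interpreter currently running this code,
# e.g. /usr/local/bin/python (the output will differ per machine).
print(sys.executable)

# Directories this interpreter searches when importing packages.
print(sys.path[:3])
```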
Converting all non-numeric to 0 (zero) in Python | 32,680,639 | 9 | 2015-09-20T14:46:12Z | 32,680,821 | 10 | 2015-09-20T15:05:00Z | [
"python"
] | I'm looking for the easiest way to convert all non-numeric data (including blanks) in Python to zeros. Taking the following for example:
```
someData = [[1.0,4,'7',-50],['8 bananas','text','',12.5644]]
```
I would like the output to be as follows:
```
desiredData = [[1.0,4,7,-50],[0,0,0,12.5644]]
```
So '7' should be 7, but '8 bananas' should be converted to 0. | ```
import numbers
def mapped(x):
if isinstance(x,numbers.Number):
return x
for tpe in (int, float):
try:
return tpe(x)
except ValueError:
continue
return 0
for sub in someData:
sub[:] = map(mapped,sub)
print(someData)
[[1.0, 4, 7, -50], [0, 0, 0, 12.5644]]
```
It will work for different numeric types:
```
In [4]: from decimal import Decimal
In [5]: someData = [[1.0,4,'7',-50 ,"99", Decimal("1.5")],["foobar",'8 bananas','text','',12.5644]]
In [6]: for sub in someData:
...: sub[:] = map(mapped,sub)
...:
In [7]: someData
Out[7]: [[1.0, 4, 7, -50, 99, Decimal('1.5')], [0, 0, 0, 0, 12.5644]]
```
`if isinstance(x,numbers.Number)` catches subelements that are already floats, ints etc.. if it is not a numeric type we first try casting to int then to float, if none of those are successful we simply return `0`. |
How to get string before hyphen | 32,682,199 | 2 | 2015-09-20T17:24:24Z | 32,682,209 | 9 | 2015-09-20T17:25:09Z | [
"python"
] | I have below filename:
```
pagecounts-20150802-000000
```
I want to extract the date out of above `20150802`
I am using the below code but its not working:
```
print os.path.splitext("pagecounts-20150802-000000")[0]
``` | The methods in `os.path` are mainly used for path string manipulation. You want to use string splitting:
```
print 'pagecounts-20150802-000000'.split('-')[1]
``` |
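If you also want to validate that the extracted field really is a date, the standard library's `datetime.strptime` can parse it after splitting (a small sketch):

```python
from datetime import datetime

filename = 'pagecounts-20150802-000000'
date_part = filename.split('-')[1]
parsed = datetime.strptime(date_part, '%Y%m%d')  # raises ValueError if malformed
print(date_part)    # 20150802
print(parsed.year)  # 2015
```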
How do I ensure parameter is correct type in Python? | 32,684,720 | 3 | 2015-09-20T21:56:28Z | 32,684,829 | 7 | 2015-09-20T22:09:33Z | [
"python"
] | I'm new to Python, and am trying to figure out if there is a way to specify the variable type in a parameter definition. For example:
```
def function(int(integer))
```
as opposed to:
```
def function(integer)
int(integer)
```
I know it's not a major difference, but I'm trying to use good programming practices here, and if I'm defining a function with a large number of parameters, it could get messy. | Python uses [Duck typing](http://stackoverflow.com/questions/4205130/what-is-duck-typing) which means you should not discriminate objects based on what type they are, but based on what attributes and functions they have. This has many advantages which are outside of the scope of this answer.
What you should do instead if you want to add documentation to your function is to use a [docstring](https://www.python.org/dev/peps/pep-0257/)
```
def square(x):
"""takes a number and squares it"""
return x**2
```
and [type hints](https://www.python.org/dev/peps/pep-0484/)
```
def square(x: int) -> int:
"""takes a number and squares it"""
return x**2
```
if you *really* need to check the arguments given to you, you can do a `hasattr()` to check for the attributes and functions the argument provides. In most situations (including this one) just not checking the type is better
```
def square(x):
"""takes a number and squares it"""
if not hasattr(x, "__pow__"):
raise TypeError("unsupported operand:", type(x))
return x**2
``` |
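A quick illustration of why duck typing pays off here: the same unannotated `square` works for any numeric type that supports `**`, with no `isinstance` checks needed:

```python
from decimal import Decimal
from fractions import Fraction

def square(x):
    """takes a number and squares it"""
    return x ** 2

# Any type implementing ** just works: int, float, Fraction, Decimal, ...
print(square(3))               # 9
print(square(2.5))             # 6.25
print(square(Fraction(1, 2)))  # 1/4
print(square(Decimal('1.5')))  # 2.25
```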
Recursive generation of pattern | 32,696,640 | 2 | 2015-09-21T13:33:48Z | 32,696,717 | 7 | 2015-09-21T13:37:08Z | [
"python",
"recursion",
"permutation"
] | I'm trying to generate all permutations in a list. I know there's built-in functions for this but I would like to do it myself with a recursive function. I'm trying to understand why my attempt doesn't work:
```
chars = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
def recurse(n, value):
if n > 0:
for c in chars:
value += c
recurse(n-1, value)
else:
print value
return
nchars = 2
recurse(nchars, '')
```
It seems to me my recursion should stop after two iterations (`nchars = 2`) but it keeps going. I would post the current output but it's way to long and messy. Expected output would be:
```
00
01
02
03
...
zx
zy
zz
``` | ```
for c in chars:
value += c
recurse(n-1, value)
```
This loop runs 62 times per recursion level, so `value` will grow and grow to have a maximum size of 124. To avoid this cumulative effect, try:
```
for c in chars:
recurse(n-1, value+c)
``` |
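Putting the fix together, here is a complete version that collects the results in a list instead of printing them (the tiny two-letter alphabet and the `collect` parameter are my own additions to keep the output short):

```python
chars = "ab"  # restricted alphabet so the output stays readable

def recurse(n, value, collect):
    if n > 0:
        for c in chars:
            # pass value + c down instead of mutating value in the loop
            recurse(n - 1, value + c, collect)
    else:
        collect.append(value)

results = []
recurse(2, '', results)
print(results)  # ['aa', 'ab', 'ba', 'bb']
```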
Limit a python file to only be run by a bash script | 32,698,320 | 3 | 2015-09-21T14:51:55Z | 32,698,394 | 8 | 2015-09-21T14:55:24Z | [
"python",
"bash"
] | I am looking for a way to limit how a python file to be called. Basically I only want it to be executable when I call it from a bash script but if ran directly either from a terminal or any other way I do not want it to be able to run. I am not sure if there is a way to do this or not but I figured I would give it a shot. | There is no meaningful way to do this.
UNIX process architecture does not work this way. You cannot control the execution of a script by its parent process.
Instead we should discuss *why* you want to do something like this; you are probably approaching it the wrong way, and there may be better options for addressing the *actual* underlying problem. |
python: replace elements in list with conditional | 32,699,654 | 2 | 2015-09-21T15:59:06Z | 32,699,693 | 7 | 2015-09-21T16:00:57Z | [
"python",
"list"
] | I am trying to do the following with python and am having a strange behavior. Say I have the following list:
```
x = [5, 4, 3, 2, 1]
```
Now, I am doing something like:
```
x[x >= 3] = 3
```
This gives:
```
x = [5, 3, 3, 2, 1]
```
Why does only the second element get changed? I was expecting:
```
[3, 3, 3, 2, 1]
``` | Because in Python 2 the comparison `x >= 3` evaluates to `True`, and since `True` is equal to `1`, the second element of `x` is replaced with 3.
For this you need to use a list comprehension instead:
```
>>> [3 if i >=3 else i for i in x]
[3, 3, 3, 2, 1]
```
And if you want to know that why `x >= 3` returns True, based on following [documentation](https://docs.python.org/2/library/stdtypes.html#comparisons) :
> CPython implementation detail: Objects of different types except numbers are ordered by their type names; objects of the same types that donât support proper comparison are ordered by their address.
In Python 2, a list is always greater than an integer, just as a string is greater than a list:
```
>>> ''>[]
True
``` |
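To see the mechanism concretely: in Python 2 the list-vs-int comparison `x >= 3` evaluates to `True`, and since `bool` is a subclass of `int` with `True == 1`, the assignment simply indexes position 1:

```python
x = [5, 4, 3, 2, 1]

# bool is a subclass of int, so True is a valid list index equal to 1.
print(True == 1)  # True
x[True] = 3       # exactly the same as x[1] = 3
print(x)          # [5, 3, 3, 2, 1]
```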
Python map each integer within input to int | 32,708,415 | 5 | 2015-09-22T04:30:48Z | 32,708,428 | 8 | 2015-09-22T04:31:46Z | [
"python"
] | If I am given an input of 1 2 3 4 5, what is the standard method of splitting such input and maybe add 1 to each integer?
I'm thinking something along the lines of splitting the input and mapping each item to an integer. | You may use a list comprehension.
```
s = "1 2 3 4 5"
print [int(i)+1 for i in s.split()]
``` |
Docker image error: "/bin/sh: 1: [python,: not found" | 32,709,075 | 3 | 2015-09-22T05:32:36Z | 32,709,181 | 7 | 2015-09-22T05:42:27Z | [
"python",
"django",
"ubuntu",
"docker",
"dockerfile"
] | I'm building a new Docker image based on the standard Ubuntu 14.04 image.
Here's my *Dockerfile*:
```
FROM ubuntu:14.04
RUN apt-get update -y
RUN apt-get install -y nginx git python-setuptools python-dev
RUN easy_install pip
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt # only 'django' for now
ENV projectname myproject
EXPOSE 80 8000
WORKDIR ${projectname}
CMD ['python', 'manage.py', 'runserver', '0.0.0.0:80']
```
When I try to run this image, I get this error...
> /bin/sh: 1: [python,: not found
But if I open a shell when running the image, running `python` opens the interactive prompt as expected.
Why can't I invoke `python` through `CMD` in the Dockerfile? | Use `"` instead of `'` in CMD. [(Documentation)](https://docs.docker.com/engine/reference/builder/#/cmd) |
Python Inherit from one class but override method calling another class? | 32,715,484 | 5 | 2015-09-22T11:18:50Z | 32,716,126 | 7 | 2015-09-22T11:50:57Z | [
"python",
"oop",
"inheritance",
"super"
] | Let say I have 3 classes: A, B and C. A is a base class for B and B is for C. Hierarchy is kept normally here, but for one method it should be different. For C class it should act like it was inherited from A.
For example like this:
```
class A(object):
def m(self):
print 'a'
class B(A):
def m(self):
super(B, self).m()
print 'b'
class C(B):
def m(self):
super(A, self).m()
print 'c'
```
So basically it should work like this:
```
a = A()
a.m()
a
b = B()
b.m()
a
b
c = C()
c.m()
a
c
```
But it is not going to work for C class, because I get this error:
```
AttributeError: 'super' object has no attribute 'm'
```
To solve this for C class I could inherit from class A, but I want to inherit everything from B and for that specific method `m` call super for base class A. I mean that method is one exception. Or should I call it somehow differently for class C in order to work?
How can I do that? | There are in fact two ways to solve this: you can shortcut the call to `super()` and totally bypass the mro as in Mathias Ettinger's answer, or you can just issue the *correct* call to `super()`:
```
class C(B):
def m(self):
super(B, self).m()
print 'c'
```
Remember that `super()` expects as first argument the class from which it should start looking up the MRO. It's usually the class in which the call is made, but you can pass another class higher up in the MRO if you want. |
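A runnable sketch of the corrected hierarchy (I use a module-level `calls` list instead of `print` so the call order is easy to inspect):

```python
calls = []

class A(object):
    def m(self):
        calls.append('a')

class B(A):
    def m(self):
        super(B, self).m()
        calls.append('b')

class C(B):
    def m(self):
        # Start the MRO lookup *after* B, so B.m is skipped entirely.
        super(B, self).m()
        calls.append('c')

C().m()
print(calls)                             # ['a', 'c']
print([k.__name__ for k in C.__mro__])   # ['C', 'B', 'A', 'object']
```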
IPython 4 shell does not work with Sublime REPL | 32,719,352 | 6 | 2015-09-22T14:16:40Z | 32,770,759 | 11 | 2015-09-24T20:56:37Z | [
"python",
"ipython",
"sublimetext3",
"jupyter",
"sublimerepl"
] | I am having problems with running the IPython shell from the Sublime REPL package.
Here is what I get:
```
C:\Anaconda\lib\site-packages\IPython\config.py:13: ShimWarning:
The`IPython.config` package has been deprecated. You should import from
traitlets.config instead.
"You should import from traitlets.config instead.", ShimWarning)
C:\Anaconda\lib\site-packages\IPython\terminal\console.py:13: ShimWarning:
The `IPython.terminal.console` package has been deprecated. You should
import from jupyter_console instead.
"You should import from jupyter_console instead.", ShimWarning)
C:\Anaconda\lib\site-packages\IPython\frontend.py:21: ShimWarning: The top-
level `frontend` package has been deprecated. All its subpackages have been
moved to the top `IPython` level.
"All its subpackages have been moved to the top `IPython` level.",
ShimWarning)
Traceback (most recent call last):
File "C:\Users\Vladimir\AppData\Roaming\Sublime Text
3\Packages/SublimeREPL/config/Python/ipy_repl.py", line 45, in <module>
from IPython.frontend.terminal.console.app import ZMQTerminalIPythonApp
ImportError: No module named app
```
Does anyone has a solution how this might be fixed? I have the latest version of pyzmq installed and run under Python 2.7.10-0 | With the release of IPython 4.0, the structure has completely changed, and is now implemented as a kernel for the [Jupyter](http://jupyter.org) core, which is capable of running IPython-like sessions using [many different languages](https://github.com/ipython/ipython/wiki/IPython-kernels-for-other-languages) other than Python. IPython is still the "reference implementation", however.
With these changes, the internal API has also changed quite a bit, and some parts have been moved/renamed or just aren't there at all. I've put together [complete instructions on updating SublimeREPL for IPython 4](https://gist.github.com/MattDMo/6cb1dfbe8a124e1ca5af) as a gist on Github, but I'll go over the key parts here.
The first thing you need to do is make sure you have all the components of IPython and Jupyter, as it's been split up from one monolithic packages to quite a number of smaller ones. Run
```
[sudo] pip install -U ipython jupyter
```
from the command line to get all the pieces - no compiler should be necessary. `sudo` is in case you're on a Unix/Linux system and need admin access, and the `-U` flag means the same thing as `--upgrade`.
Once that's done, open Sublime and select **`Preferences → Browse Packages…`** to open your `Packages` folder in your operating system's file browser application (Finder, Windows Explorer, Nautilus, etc.). Open the `SublimeREPL` folder, then `config`, then `Python`, then open `ipy_repl.py` in Sublime. Delete its entire contents, and replace it with the file included in the gist link above (click [here](https://gist.githubusercontent.com/MattDMo/6cb1dfbe8a124e1ca5af/raw/a511e86dde7b3a70bdbd63b7ac3c98c32cd74277/ipy_repl.py) for the raw copy-and-paste version).
Save `ipy_repl.py` with the new contents, and that should be it! You can now open the IPython REPL in the usual way, and you should see:
```
Jupyter Console 4.0.2
[ZMQTerminalIPythonApp] Loading IPython extension: storemagic
In [1]:
```
If you'd like to get rid of the `[ZMQTerminalIPythonApp]` message, read through the instructions in the gist. However, the message is harmless, so you can safely ignore it if you want.
Good luck!
**NOTE:** These changes have been submitted to the main SublimeREPL project as pull requests, but seeing as the author hasn't been working on the plugin for some time, it may be a while before it's part of the main branch. |
Operation on 2d array columns | 32,719,578 | 8 | 2015-09-22T14:27:11Z | 32,719,624 | 7 | 2015-09-22T14:29:12Z | [
"python",
"arrays",
"2d"
] | I'd like to know if it's possible to apply a function (or just an operation, such as replacing values) to a column in a Python 2d array, without using for loops.
I'm sorry if the question has already been asked, but I couldn't find anything specific about my problem.
I'd like to do something like :
```
array[:][2] = 1
```
Which would mean *put 1 for each value at the third column*, or
```
func(array[:][2])
```
Which would mean *apply `func()` to the third column of the array*.
Is there any magic python-way to do it ?
EDIT : The truth has been spoken. I forgot to say that I didn't want to avoid the `for()` statement to improve performance, but just because I don't want to add multiple lines for this precise instance. We got 2 answers here, one in a native way, and two more with the help of Numpy. Thanks a lot for your answers ! | You can do this easily with [`numpy`](http://www.numpy.org/) arrays. Example -
```
In [2]: import numpy as np
In [3]: na = np.array([[1,2,3],[3,4,5]])
In [4]: na
Out[4]:
array([[1, 2, 3],
[3, 4, 5]])
In [5]: na[:,2] = 10
In [6]: na
Out[6]:
array([[ 1, 2, 10],
[ 3, 4, 10]])
In [7]: na[:,2]
Out[7]: array([10, 10])
In [8]: def func(a):
...: for i,x in enumerate(a):
...: a[i] = x + 1
...:
In [9]: na
Out[9]:
array([[ 1, 2, 10],
[ 3, 4, 10]])
In [10]: func(na[:,1])
In [11]: na
Out[11]:
array([[ 1, 3, 10],
[ 3, 5, 10]])
```
You can find more details about this [here](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html). Please do be careful with numpy arrays; as stated [in the documentation](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#basic-slicing-and-indexing):
> All arrays generated by basic slicing are always [views](http://docs.scipy.org/doc/numpy/glossary.html#term-view) of the original array.
This is why when changing the sliced array inside the function, the actual array got changed. |
Operation on 2d array columns | 32,719,578 | 8 | 2015-09-22T14:27:11Z | 32,719,862 | 7 | 2015-09-22T14:39:40Z | [
"python",
"arrays",
"2d"
] | I'd like to know if it's possible to apply a function (or just an operation, such as replacing values) to a column in a Python 2d array, without using for loops.
I'm sorry if the question has already been asked, but I couldn't find anything specific about my problem.
I'd like to do something like :
```
array[:][2] = 1
```
Which would mean *put 1 for each value at the third column*, or
```
func(array[:][2])
```
Which would mean *apply `func()` to the third column of the array*.
Is there any magic python-way to do it ?
EDIT : The truth has been spoken. I forgot to say that I didn't want to avoid the `for()` statement to improve performance, but just because I don't want to add multiple lines for this precise instance. We got 2 answers here, one in a native way, and two more with the help of Numpy. Thanks a lot for your answers ! | Without numpy it can be done like this:
```
map(lambda x: x[:2] + [1] + x[3:], array)
map(lambda x: x[:2] + my_func(x[2]) + x[3:], array)
``` |
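For instance, with some hypothetical sample data (note that on Python 3 `map` returns an iterator, so wrap it in `list()`):

```python
array = [[1.0, 4, '7', -50], ['8 bananas', 'text', '', 12.5644]]

# Replace the third column of every row with 1.
result = list(map(lambda x: x[:2] + [1] + x[3:], array))
print(result)  # [[1.0, 4, 1, -50], ['8 bananas', 'text', 1, 12.5644]]
```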
Why is a class __dict__ a mappingproxy? | 32,720,492 | 29 | 2015-09-22T15:08:59Z | 32,720,603 | 31 | 2015-09-22T15:13:12Z | [
"python",
"class",
"python-3.x",
"python-internals"
] | I wonder why a class `__dict__` is a `mappingproxy`, but an instance `__dict__` is just a plain `dict`
```
>>> class A:
pass
>>> a = A()
>>> type(a.__dict__)
<class 'dict'>
>>> type(A.__dict__)
<class 'mappingproxy'>
``` | This helps the interpreter ensure that the keys for class-level attributes and methods can only be strings.
Elsewhere, Python is a "consenting adults" language, meaning that the dicts for objects are exposed and mutable by the user. However, in the case of class-level attributes and methods, if we can guarantee that the keys are strings, we can simplify and speed up the common-case code for attribute and method lookup at the class level. In particular, the \_\_mro\_\_ search logic for new-style classes is simplified and sped up by assuming the class dict keys are strings. |
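The read-only nature of the proxy is easy to observe: assigning through the class `__dict__` raises a `TypeError`, while the instance `__dict__` stays a plain, writable dict (`setattr` on the class still works because it goes through `type.__setattr__`):

```python
class A(object):
    pass

a = A()
a.__dict__['x'] = 1  # instance dict: plain and writable
print(a.x)           # 1

raised = False
try:
    A.__dict__['y'] = 2  # class dict: mappingproxy, read-only
except TypeError:
    raised = True
print(raised)        # True

A.y = 2              # setattr still works on the class itself
print(A.__dict__['y'])  # 2
```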
Flask application traceback doesn't show up in server log | 32,722,143 | 6 | 2015-09-22T16:32:34Z | 32,722,523 | 9 | 2015-09-22T16:53:14Z | [
"python",
"nginx",
"flask",
"uwsgi"
] | I'm running my Flask application with uWSGI and nginx. There's a 500 error, but the traceback doesn't appear in the browser or the logs. How do I log the traceback from Flask?
```
uwsgi --http-socket 127.0.0.1:9000 --wsgi-file /var/webapps/magicws/service.py --module service:app --uid www-data --gid www-data --logto /var/log/magicws/magicapp.log
```
The uWSGI log only shows the 500 status code, not the traceback. There's also nothing in the nginx log.
```
[pid: 18343|app: 0|req: 1/1] 127.0.0.1 () {34 vars in 642 bytes}
[Tue Sep 22 15:50:52 2015]
GET /getinfo?color=White => generated 291 bytes in 64 msecs (HTTP/1.0 500)
2 headers in 84 bytes (1 switches on core 0)
``` | Run in debug mode by adding this line before `app.run` gets called
```
app.debug = True
```
or by running with
```
app.run(debug=True)
```
Now a stack trace should appear in the terminal and the browser instead of a generic 500 error page.
When using the new `flask` command to run the server, set the environment variable `FLASK_DEBUG`.
```
FLASK_DEBUG=1 flask run
```
---
In production, you don't want to run your app in debug mode. Instead you should log the errors to a file.
Flask uses the standard Python logging library, which can be configured to log errors. Insert the following to send Flask's log messages to a file.
```
import logging
handler = logging.FileHandler('/path/to/app.log') # errors logged to this file
handler.setLevel(logging.ERROR) # only log errors and above
app.logger.addHandler(handler) # attach the handler to the app's logger
```
Read more about the Python [logging](https://docs.python.org/2/library/logging.config.html) module. In particular you may want to switch the file handler to a `RotatingFileHandler` to ensure log files don't grow too large. You may also want to change the logging level to record more than just errors.
Flask also has [instructions](http://flask.pocoo.org/docs/0.10/errorhandling/#logging-to-a-file) on how to log your application. |
How do I add a title to Seaborn Heatmap? | 32,723,798 | 3 | 2015-09-22T18:03:15Z | 32,724,156 | 7 | 2015-09-22T18:23:13Z | [
"python",
"pandas",
"ipython-notebook",
"seaborn"
] | I want to add a title to a seaborn heatmap. Using Pandas and iPython Notebook
code is below,
```
a1_p = a1.pivot_table( index='Postcode', columns='Property Type', values='Count', aggfunc=np.mean, fill_value=0)
sns.heatmap(a1_p, cmap="YlGnBu")
```
the data is pretty straight forward:
```
In [179]: a1_p
Out [179]:
Property Type Flat Terraced house Unknown
Postcode
E1 11 0 0
E14 12 0 0
E1W 6 0 0
E2 6 0 0
``` | `heatmap` is an `axes`-level function, so you should be able to use just `plt.title` or `ax.set_title`:
```
%matplotlib inline
import numpy as np
import os
import seaborn as sns
import matplotlib.pyplot as plt
data = np.random.randn(10,12)
ax = plt.axes()
sns.heatmap(data, ax = ax)
ax.set_title('lalala')
plt.show()
```
[](http://i.stack.imgur.com/og3bI.png) |
Building a RESTful Flask API for Scrapy | 32,724,537 | 4 | 2015-09-22T18:43:30Z | 32,784,312 | 8 | 2015-09-25T14:21:04Z | [
"python",
"heroku",
"flask",
"scrapy",
"twisted"
] | The API should allow arbitrary HTTP get requests containing URLs the user wants scraped, and then Flask should return the results of the scrape.
The following code works for the first http request, but after twisted reactor stops, it won't restart. I may not even be going about this the right way, but I just want to put a RESTful scrapy API up on Heroku, and what I have so far is all I can think of.
Is there a better way to architect this solution? Or how can I allow `scrape_it` to return without stopping twisted reactor (which can't be started again)?
```
from flask import Flask
import os
import sys
import json
from n_grams.spiders.n_gram_spider import NGramsSpider
# scrapy api
from twisted.internet import reactor
import scrapy
from scrapy.crawler import CrawlerRunner
from scrapy.xlib.pydispatch import dispatcher
from scrapy import signals
app = Flask(__name__)
def scrape_it(url):
items = []
def add_item(item):
items.append(item)
runner = CrawlerRunner()
d = runner.crawl(NGramsSpider, [url])
d.addBoth(lambda _: reactor.stop()) # <<< TROUBLES HERE ???
dispatcher.connect(add_item, signal=signals.item_passed)
reactor.run(installSignalHandlers=0) # the script will block here until the crawling is finished
return items
@app.route('/scrape/<path:url>')
def scrape(url):
ret = scrape_it(url)
return json.dumps(ret, ensure_ascii=False, encoding='utf8')
if __name__ == '__main__':
PORT = os.environ['PORT'] if 'PORT' in os.environ else 8080
app.run(debug=True, host='0.0.0.0', port=int(PORT))
I think there is no good way to create a Flask-based API for Scrapy. Flask is not the right tool for this because it is not based on an event loop. To make things worse, the Twisted reactor (which Scrapy uses) [can't](http://twistedmatrix.com/trac/wiki/FrequentlyAskedQuestions#WhycanttheTwistedsreactorberestarted) be started/stopped more than once in a single thread.
Let's assume there is no problem with the Twisted reactor and you could start and stop it at will. It wouldn't make things much better, because your `scrape_it` function may block for an extended period of time, so you would need many threads/processes anyway.
I think the way to go is to create the API using an async framework like Twisted or Tornado; it will be more efficient than a Flask-based (or Django-based) solution because the API will be able to serve requests while Scrapy is running a spider.
Scrapy is based on Twisted, so using twisted.web or <https://github.com/twisted/klein> can be more straightforward. But Tornado is also not hard, because you can make it use the Twisted event loop.
There is a project called [ScrapyRT](https://github.com/scrapinghub/scrapyrt) which does something very similar to what you want to implement - it is an HTTP API for Scrapy. ScrapyRT is based on Twisted.
As an example of Scrapy-Tornado integration, check [Arachnado](https://github.com/TeamHG-Memex/arachnado) - [here](https://github.com/TeamHG-Memex/arachnado/blob/0cf1fbc412d926d15fb7365cbf4d792f7c351d07/arachnado/__main__.py#L53) is an example of how to integrate Scrapy's CrawlerProcess with Tornado's Application.
If you really want a Flask-based API, then it could make sense to start crawls in separate processes and/or use a queue solution like Celery. This way you lose most of Scrapy's efficiency; if you go this way you can use requests + BeautifulSoup as well. |
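If you do go the separate-process route, a minimal sketch of the pattern looks like this — each request spawns a fresh child process, so the one-reactor-per-process limitation never applies. The `do_crawl` body here is a placeholder; in a real app it would configure a Scrapy `CrawlerProcess`, run the spider, and put the collected items on the queue.

```python
import multiprocessing

def do_crawl(url, queue):
    # Placeholder for the real work: inside a fresh child process it is
    # safe to start (and let die) a Scrapy CrawlerProcess / Twisted reactor.
    items = [{"url": url, "ngram": "example"}]
    queue.put(items)

def scrape_it(url):
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=do_crawl, args=(url, queue))
    proc.start()
    items = queue.get()  # blocks until the child posts its results
    proc.join()
    return items

if __name__ == "__main__":
    print(scrape_it("http://example.com"))
```

A Flask view would then simply call `scrape_it(url)` and `json.dumps` the result, as in the question.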