title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
Fitting a closed curve to a set of points | 31,464,345 | 16 | 2015-07-16T20:54:42Z | 31,464,962 | 14 | 2015-07-16T21:37:05Z | [
"python",
"numpy",
"scipy",
"curve-fitting",
"data-fitting"
] | I have a set of points `pts` which form a loop and it looks like this:

This is somewhat similar to [31243002](http://stackoverflow.com/questions/31243002/higher-order-local-interpolation-of-implicit-curves-in-python), but instead of putting points in between pairs of points, I would like to fit a smooth curve through the points (coordinates are given at the end of the question), so I tried something similar to `scipy` documentation on [Interpolation](http://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html):
```
values = pts
tck = interpolate.splrep(values[:,0], values[:,1], s=1)
xnew = np.arange(2,7,0.01)
ynew = interpolate.splev(xnew, tck, der=0)
```
but I get this error:
> ValueError: Error on input data
Is there any way to find such a fit?
Coordinates of the points:
```
pts = array([[ 6.55525 , 3.05472 ],
[ 6.17284 , 2.802609],
[ 5.53946 , 2.649209],
[ 4.93053 , 2.444444],
[ 4.32544 , 2.318749],
[ 3.90982 , 2.2875 ],
[ 3.51294 , 2.221875],
[ 3.09107 , 2.29375 ],
[ 2.64013 , 2.4375 ],
[ 2.275444, 2.653124],
[ 2.137945, 3.26562 ],
[ 2.15982 , 3.84375 ],
[ 2.20982 , 4.31562 ],
[ 2.334704, 4.87873 ],
[ 2.314264, 5.5047 ],
[ 2.311709, 5.9135 ],
[ 2.29638 , 6.42961 ],
[ 2.619374, 6.75021 ],
[ 3.32448 , 6.66353 ],
[ 3.31582 , 5.68866 ],
[ 3.35159 , 5.17255 ],
[ 3.48482 , 4.73125 ],
[ 3.70669 , 4.51875 ],
[ 4.23639 , 4.58968 ],
[ 4.39592 , 4.94615 ],
[ 4.33527 , 5.33862 ],
[ 3.95968 , 5.61967 ],
[ 3.56366 , 5.73976 ],
[ 3.78818 , 6.55292 ],
[ 4.27712 , 6.8283 ],
[ 4.89532 , 6.78615 ],
[ 5.35334 , 6.72433 ],
[ 5.71583 , 6.54449 ],
[ 6.13452 , 6.46019 ],
[ 6.54478 , 6.26068 ],
[ 6.7873 , 5.74615 ],
[ 6.64086 , 5.25269 ],
[ 6.45649 , 4.86206 ],
[ 6.41586 , 4.46519 ],
[ 5.44711 , 4.26519 ],
[ 5.04087 , 4.10581 ],
[ 4.70013 , 3.67405 ],
[ 4.83482 , 3.4375 ],
[ 5.34086 , 3.43394 ],
[ 5.76392 , 3.55156 ],
[ 6.37056 , 3.8778 ],
[ 6.53116 , 3.47228 ]])
``` | Your problem is that you're trying to work with x and y directly. The interpolation function you're calling assumes that the x-values are in sorted order and that each `x` value has a unique y-value.
Instead, you'll need to make a parameterized coordinate system (e.g. the index of your vertices) and interpolate x and y separately using it.
To start with, consider the following:
```
import numpy as np
from scipy.interpolate import interp1d # Different interface to the same function
import matplotlib.pyplot as plt
#pts = np.array([...]) # Your points
x, y = pts.T
i = np.arange(len(pts))
# 5x the original number of points
interp_i = np.linspace(0, i.max(), 5 * i.max())
xi = interp1d(i, x, kind='cubic')(interp_i)
yi = interp1d(i, y, kind='cubic')(interp_i)
fig, ax = plt.subplots()
ax.plot(xi, yi)
ax.plot(x, y, 'ko')
plt.show()
```

I didn't close the polygon. If you'd like, you can add the first point to the end of the array (e.g. `pts = np.vstack([pts, pts[0]])`).
If you do that, you'll notice that there's a discontinuity where the polygon closes.

This is because our parameterization doesn't take into account the closing of the polygon. A quick fix is to pad the array with points "wrapped" around from the other end:
```
import numpy as np
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
#pts = np.array([...]) # Your points
pad = 3
pts = np.pad(pts, [(pad,pad), (0,0)], mode='wrap')
x, y = pts.T
i = np.arange(0, len(pts))
interp_i = np.linspace(pad, i.max() - pad + 1, 5 * (i.size - 2*pad))
xi = interp1d(i, x, kind='cubic')(interp_i)
yi = interp1d(i, y, kind='cubic')(interp_i)
fig, ax = plt.subplots()
ax.plot(xi, yi)
ax.plot(x, y, 'ko')
plt.show()
```

Alternately, you can use a specialized curve-smoothing algorithm such as PEAK or a corner-cutting algorithm. |
Fitting a closed curve to a set of points | 31,464,345 | 16 | 2015-07-16T20:54:42Z | 31,465,587 | 8 | 2015-07-16T22:24:47Z | [
"python",
"numpy",
"scipy",
"curve-fitting",
"data-fitting"
] | I have a set of points `pts` which form a loop and it looks like this:

This is somewhat similar to [31243002](http://stackoverflow.com/questions/31243002/higher-order-local-interpolation-of-implicit-curves-in-python), but instead of putting points in between pairs of points, I would like to fit a smooth curve through the points (coordinates are given at the end of the question), so I tried something similar to `scipy` documentation on [Interpolation](http://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html):
```
values = pts
tck = interpolate.splrep(values[:,0], values[:,1], s=1)
xnew = np.arange(2,7,0.01)
ynew = interpolate.splev(xnew, tck, der=0)
```
but I get this error:
> ValueError: Error on input data
Is there any way to find such a fit?
Coordinates of the points:
```
pts = array([[ 6.55525 , 3.05472 ],
[ 6.17284 , 2.802609],
[ 5.53946 , 2.649209],
[ 4.93053 , 2.444444],
[ 4.32544 , 2.318749],
[ 3.90982 , 2.2875 ],
[ 3.51294 , 2.221875],
[ 3.09107 , 2.29375 ],
[ 2.64013 , 2.4375 ],
[ 2.275444, 2.653124],
[ 2.137945, 3.26562 ],
[ 2.15982 , 3.84375 ],
[ 2.20982 , 4.31562 ],
[ 2.334704, 4.87873 ],
[ 2.314264, 5.5047 ],
[ 2.311709, 5.9135 ],
[ 2.29638 , 6.42961 ],
[ 2.619374, 6.75021 ],
[ 3.32448 , 6.66353 ],
[ 3.31582 , 5.68866 ],
[ 3.35159 , 5.17255 ],
[ 3.48482 , 4.73125 ],
[ 3.70669 , 4.51875 ],
[ 4.23639 , 4.58968 ],
[ 4.39592 , 4.94615 ],
[ 4.33527 , 5.33862 ],
[ 3.95968 , 5.61967 ],
[ 3.56366 , 5.73976 ],
[ 3.78818 , 6.55292 ],
[ 4.27712 , 6.8283 ],
[ 4.89532 , 6.78615 ],
[ 5.35334 , 6.72433 ],
[ 5.71583 , 6.54449 ],
[ 6.13452 , 6.46019 ],
[ 6.54478 , 6.26068 ],
[ 6.7873 , 5.74615 ],
[ 6.64086 , 5.25269 ],
[ 6.45649 , 4.86206 ],
[ 6.41586 , 4.46519 ],
[ 5.44711 , 4.26519 ],
[ 5.04087 , 4.10581 ],
[ 4.70013 , 3.67405 ],
[ 4.83482 , 3.4375 ],
[ 5.34086 , 3.43394 ],
[ 5.76392 , 3.55156 ],
[ 6.37056 , 3.8778 ],
[ 6.53116 , 3.47228 ]])
``` | Using the [ROOT Framework](https://root.cern.ch/drupal/) and the pyroot interface I was able to generate the following image

With the following code (I converted your data to a CSV called `data.csv` so reading it into ROOT would be easier, and gave the columns the titles `xp` and `yp`):
```
from ROOT import TTree, TGraph, TCanvas, TH2F
c1 = TCanvas( 'c1', 'Drawing Example', 200, 10, 700, 500 )
t=TTree('TP','Data Points')
t.ReadFile('./data.csv')
t.SetMarkerStyle(8)
t.Draw("yp:xp","","ACP")
c1.Print('pydraw.png')
``` |
Fitting a closed curve to a set of points | 31,464,345 | 16 | 2015-07-16T20:54:42Z | 31,466,013 | 13 | 2015-07-16T23:05:18Z | [
"python",
"numpy",
"scipy",
"curve-fitting",
"data-fitting"
] | I have a set of points `pts` which form a loop and it looks like this:

This is somewhat similar to [31243002](http://stackoverflow.com/questions/31243002/higher-order-local-interpolation-of-implicit-curves-in-python), but instead of putting points in between pairs of points, I would like to fit a smooth curve through the points (coordinates are given at the end of the question), so I tried something similar to `scipy` documentation on [Interpolation](http://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html):
```
values = pts
tck = interpolate.splrep(values[:,0], values[:,1], s=1)
xnew = np.arange(2,7,0.01)
ynew = interpolate.splev(xnew, tck, der=0)
```
but I get this error:
> ValueError: Error on input data
Is there any way to find such a fit?
Coordinates of the points:
```
pts = array([[ 6.55525 , 3.05472 ],
[ 6.17284 , 2.802609],
[ 5.53946 , 2.649209],
[ 4.93053 , 2.444444],
[ 4.32544 , 2.318749],
[ 3.90982 , 2.2875 ],
[ 3.51294 , 2.221875],
[ 3.09107 , 2.29375 ],
[ 2.64013 , 2.4375 ],
[ 2.275444, 2.653124],
[ 2.137945, 3.26562 ],
[ 2.15982 , 3.84375 ],
[ 2.20982 , 4.31562 ],
[ 2.334704, 4.87873 ],
[ 2.314264, 5.5047 ],
[ 2.311709, 5.9135 ],
[ 2.29638 , 6.42961 ],
[ 2.619374, 6.75021 ],
[ 3.32448 , 6.66353 ],
[ 3.31582 , 5.68866 ],
[ 3.35159 , 5.17255 ],
[ 3.48482 , 4.73125 ],
[ 3.70669 , 4.51875 ],
[ 4.23639 , 4.58968 ],
[ 4.39592 , 4.94615 ],
[ 4.33527 , 5.33862 ],
[ 3.95968 , 5.61967 ],
[ 3.56366 , 5.73976 ],
[ 3.78818 , 6.55292 ],
[ 4.27712 , 6.8283 ],
[ 4.89532 , 6.78615 ],
[ 5.35334 , 6.72433 ],
[ 5.71583 , 6.54449 ],
[ 6.13452 , 6.46019 ],
[ 6.54478 , 6.26068 ],
[ 6.7873 , 5.74615 ],
[ 6.64086 , 5.25269 ],
[ 6.45649 , 4.86206 ],
[ 6.41586 , 4.46519 ],
[ 5.44711 , 4.26519 ],
[ 5.04087 , 4.10581 ],
[ 4.70013 , 3.67405 ],
[ 4.83482 , 3.4375 ],
[ 5.34086 , 3.43394 ],
[ 5.76392 , 3.55156 ],
[ 6.37056 , 3.8778 ],
[ 6.53116 , 3.47228 ]])
``` | Actually, you were not far from the solution in your question.
Using [`scipy.interpolate.splprep`](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.interpolate.splprep.html) for parametric B-spline interpolation would be the simplest approach. It also natively supports closed curves, if you provide the `per=1` parameter:
```
import numpy as np
from scipy.interpolate import splprep, splev
import matplotlib.pyplot as plt
# define pts from the question
tck, u = splprep(pts.T, u=None, s=0.0, per=1)
u_new = np.linspace(u.min(), u.max(), 1000)
x_new, y_new = splev(u_new, tck, der=0)
plt.plot(pts[:,0], pts[:,1], 'ro')
plt.plot(x_new, y_new, 'b--')
plt.show()
```

Fundamentally, this approach is not very different from the one in @Joe Kington's answer, although it will probably be a bit more robust, because the equivalent of the `i` vector is chosen, by default, based on the distances between points rather than simply their index (see the [`splprep` documentation](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.interpolate.splprep.html) for the `u` parameter). |
Is there an equivalent in Python of Fortran's "implicit none"? | 31,467,987 | 8 | 2015-07-17T03:33:06Z | 31,468,253 | 7 | 2015-07-17T04:09:31Z | [
"python",
"python-2.7",
"fortran"
] | In Fortran there is a statement `Implicit none` that throws a compilation error when a local variable is not declared but used. I understand that Python is a dynamically typed language and the scope of a variable may be determined at runtime.
But I would like to avoid certain unintended errors that happen when I forget to initialize a local variable but use it in the main code. For example, the variable `x` in the following code is global even though I did not intend that:
```
def test():
y=x+2 # intended this x to be a local variable but forgot
# x was not initialized
print y
x=3
test()
```
So my question is: Is there any way to ensure all variables used in `test()` are local to it and that there are no side effects? I am using Python 2.7.x. If a non-local variable is used, an error should be printed. | > So my question is that: Is there any way to ensure all variables used
> in test() are local to it and that there are no side effects.
There is a technique to validate that globals aren't accessed.
Here's a decorator that scans a function's opcodes for a *LOAD\_GLOBAL*.
```
import dis, sys, re, StringIO
def check_external(func):
'Validate that a function does not have global lookups'
saved_stdout = sys.stdout
sys.stdout = f = StringIO.StringIO()
try:
dis.dis(func)
result = f.getvalue()
finally:
sys.stdout = saved_stdout
externals = re.findall('^.*LOAD_GLOBAL.*$', result, re.MULTILINE)
if externals:
raise RuntimeError('Found globals: %r', externals)
return func
@check_external
def test():
y=x+2 # intended this x to be a local variable but forgot
# x was not initialized
print y
```
To make this practical, you will want a stop list of acceptable global references (i.e. modules). The technique can be extended to cover other opcodes such as *STORE\_GLOBAL* and *DELETE\_GLOBAL*.
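On Python 3, where `StringIO` has moved, a similar check can be written without capturing stdout at all by walking the bytecode with `dis.get_instructions`. This is only a sketch; `check_globals` and its `allowed` stop list are hypothetical names:

```python
import dis

def check_globals(func, allowed=frozenset()):
    'Raise if func looks up any global name outside the allowed stop list'
    for ins in dis.get_instructions(func):
        if ins.opname == 'LOAD_GLOBAL' and ins.argval not in allowed:
            raise RuntimeError('global lookup found: %s' % ins.argval)
    return func

x = 3
def test():
    y = x + 2   # x is a global lookup here
    return y

try:
    check_globals(test)
except RuntimeError as exc:
    print(exc)  # global lookup found: x
```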
All that said, I don't see a straightforward way to detect side effects. |
Why does the yield function not require parentheses in Python? | 31,468,049 | 6 | 2015-07-17T03:41:39Z | 31,468,074 | 7 | 2015-07-17T03:43:44Z | [
"python",
"python-2.7",
"python-3.x",
"yield-keyword"
] | In Python, I have many times seen the `yield` function used to create a generator. Both this and the `print` function technically perform the action of methods because they return a value. However, during the change from Python 2 to Python 3, the `print` function gained parentheses like a normal method call, but yield stayed the same. Also, `yield` gains a yellowish color of a reserved keyword while `print` is the purple of a reserved method. Why is `yield` not considered a method and colored this way along with not using parentheses syntax?
(In a similar vein, why does `return` also lack parentheses?)
Let me add some more stuff, yield and continue are not given parentheses in many other languages as well. I just wanted to know what makes it different other than it is reserved. There are many other reserved methods out there which get parentheses. | `print` wasn't a function that gained parentheses: it went from being a statement to being a function. `yield` is still a statement, like `return`. Syntax highlighting is specific to your development environment.
You can find more information about [the difference between expressions and statements here](http://stackoverflow.com/questions/4728073/what-is-the-difference-between-an-expression-and-a-statement-in-python), and more about [the difference between functions and statements here](http://stackoverflow.com/questions/9307291/difference-between-statement-and-function). Also see the documentation on [simple statements](https://docs.python.org/3.4/reference/simple_stmts.html) and [compound statements](https://docs.python.org/3/reference/compound_stmts.html). |
Why does the yield function not require parentheses in Python? | 31,468,049 | 6 | 2015-07-17T03:41:39Z | 31,468,124 | 15 | 2015-07-17T03:50:24Z | [
"python",
"python-2.7",
"python-3.x",
"yield-keyword"
] | In Python, I have many times seen the `yield` function used to create a generator. Both this and the `print` function technically perform the action of methods because they return a value. However, during the change from Python 2 to Python 3, the `print` function gained parentheses like a normal method call, but yield stayed the same. Also, `yield` gains a yellowish color of a reserved keyword while `print` is the purple of a reserved method. Why is `yield` not considered a method and colored this way along with not using parentheses syntax?
(In a similar vein, why does `return` also lack parentheses?)
Let me add some more stuff, yield and continue are not given parentheses in many other languages as well. I just wanted to know what makes it different other than it is reserved. There are many other reserved methods out there which get parentheses. | So I went digging for an answer. [And it turns out, there is one](https://www.python.org/dev/peps/pep-0255/). From PEP 255, the pep that gave us the `yield` keyword
> Q. Why a new keyword for "yield"? Why not a builtin function instead?
>
> A. Control flow is much better expressed via keyword in Python, and
> yield is a control construct. It's also believed that efficient
> implementation in Jython requires that the compiler be able to
> determine potential suspension points at compile-time, and a new
> keyword makes that easy. The CPython reference implementation also
> exploits it heavily, to detect which functions *are* generator-
> functions (although a new keyword in place of "def" would solve that
> for CPython -- but people asking the "why a new keyword?" question
> don't want any new keyword).
>
> Q: Then why not some other special syntax without a new keyword? For
> example, one of these instead of "yield 3":
```
return 3 and continue
return and continue 3
return generating 3
continue return 3
return >> , 3
from generator return 3
return >> 3
return << 3
>> 3
<< 3
* 3
```
> A: Did I miss one ? Out of hundreds of messages, I counted three
> suggesting such an alternative, and extracted the above from them.
> It would be nice not to need a new keyword, but nicer to make yield
> very clear -- I don't want to have to *deduce* that a yield is
> occurring from making sense of a previously senseless sequence of
> keywords or operators. Still, if this attracts enough interest,
> proponents should settle on a single consensus suggestion, and Guido
> will Pronounce on it. |
unable to load app 0 (mountpoint='') - Flask app with uwsgi | 31,476,441 | 3 | 2015-07-17T12:59:55Z | 31,478,026 | 14 | 2015-07-17T14:20:02Z | [
"python",
"flask",
"uwsgi"
] | I have a python flask app with the below structure
```
Admin
|-app
| -__init__.py
|-wsgi.py
```
My wsgi.py contents is as follows
```
#!/usr/bin/python
from app import app
from app import views
if __name__ == '__main__':
app.run()
```
Contents of `__init__.py` in the app package
```
#!/usr/bin/python
from flask import Flask
app = Flask(__name__)
```
I started wsgi as below
```
uwsgi --socket 127.0.0.1:8080 --protocol=http -w wsgi
```
The server starts successfully, but I can see an error in the startup log, as below:
```
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 709
your memory page size is 4096 bytes
detected max file descriptor number: 256
lock engine: OSX spinlocks
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address 127.0.0.1:8080 fd 3
Python version: 2.7.6 (default, Sep 9 2014, 15:04:36) [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x7fd7eb6000d0
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 72760 bytes (71 KB) for 1 cores
*** Operational MODE: single process ***
unable to load app 0 (mountpoint='') (callable not found or import error)
*** no app loaded. going in full dynamic mode ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI worker 1 (and the only) (pid: 70195, cores: 1)
```
Similar issues were posted, but the solutions offered for those issues are already present in my code. I am not able to find out why I am getting this error.
Thanks | "Callable not found" is the issue (not the import error, I suspect). Change:
`uwsgi --socket 127.0.0.1:8080 --protocol=http -w wsgi`
into this
```
uwsgi --socket 127.0.0.1:8080 --protocol=http -w wsgi:app
```
or
```
uwsgi --socket 127.0.0.1:8080 --protocol=http --module wsgi --callable app
```
see [here, search for 'flask deploy'](http://projects.unbit.it/uwsgi/wiki/Example). |
In python the result of a /= 2.0 and a = a / 2.0 are not the same | 31,481,031 | 3 | 2015-07-17T17:00:33Z | 31,481,064 | 10 | 2015-07-17T17:02:35Z | [
"python",
"numpy"
] | ```
In [67]: import numpy as np
In [68]: a = np.arange(10)
In [69]: b = a.copy()
In [70]: a /= 2.0
In [71]: a
Out[71]: array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])
In [72]: b = b / 2.0
In [73]:
In [73]: b
Out[73]: array([ 0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5])
```
I don't know why the results are different when dealing with a numpy array. | `a = np.arange(10)` has an integer dtype.
```
>>> np.arange(10).dtype
dtype('int64')
```
Modifying an array in place -- for example, with `a /= 2.0` -- does not change the dtype, so the result contains ints.
In contrast, `a/2.0` ["upcasts" the resultant array](http://wiki.scipy.org/Tentative_NumPy_Tutorial#head-4c1d53fe504adc97baf27b65513b4b97586a4fc5) to float since the divisor is
a float.
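One caveat worth adding: the silent truncation shown in the question is old-NumPy behavior. Recent NumPy versions refuse the unsafe float-to-int cast for in-place division and raise a `TypeError` instead (exact behavior depends on your NumPy version):

```python
import numpy as np

a = np.arange(10)            # integer dtype
try:
    a /= 2.0                 # in-place: the float result must be cast back to int
except TypeError as exc:
    print("refused:", exc)   # modern NumPy raises rather than truncating
print((np.arange(10) / 2.0).dtype)  # float64 -- out-of-place division upcasts
```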
---
If you start with an array of floating-point dtype, then both operations yield the same result:
```
In [12]: a = np.arange(10, dtype='float')
In [13]: b = a.copy()
In [14]: a /= 2.0
In [15]: a
Out[15]: array([ 0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5])
In [16]: b = b / 2.0
In [17]: b
Out[17]: array([ 0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5])
``` |
Linear regression with pymc3 and belief | 31,483,818 | 4 | 2015-07-17T20:01:31Z | 31,493,688 | 7 | 2015-07-18T17:29:58Z | [
"python",
"bayesian",
"pymc3"
] | I am trying to grasp Bayesian statistics with `pymc3`.
I ran this code for a simple linear regression
```
#Generating data y=a+bx
import pymc3
import numpy as np
N=1000
alpha,beta, sigma = 2.0, 0.5, 1.0
np.random.seed(47)
X = np.linspace(0, 1, N)
Y = alpha + beta*X + np.random.randn(N)*sigma
#Fitting
linear_model = pymc3.Model()
with linear_model:
alpha = pymc3.Normal('alpha', mu=0, sd=10)
beta = pymc3.Normal('beta', mu=0, sd=10)
sigma = pymc3.HalfNormal('sigma', sd=1)
mu = alpha + beta*X
Y_obs = pymc3.Normal('Y_obs', mu=mu, sd=sigma, observed=Y)
start = pymc3.find_MAP(fmin=optimize.fmin_powell)
step = pymc3.NUTS(scaling=start)
trace = pymc3.sample(500, step, start=start)
```
I don't understand what the trace stands for.
If I understand Bayesian theory well enough, there is supposed to be a `belief` function that takes `alpha`, `beta` and `sigma` and outputs the probability of their combination.
How can I get this `belief` structure out of the `trace` variable? | The `trace` is the output of the Markov Chain Monte Carlo (MCMC) process. It converges to a distribution (i.e., the belief) of your parameters, given the data.
You can view the trace using:
```
pymc3.traceplot(trace, vars=['alpha', 'beta', 'sigma'])
```

If you would like to see the individual realizations of your regression along each point of the trace, you can do something like this:
```
import matplotlib.pyplot as plt
a = trace['alpha']
b = trace['beta']
x = np.linspace(0,1,N)
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(1,2,1)
plt.scatter(X,Y, color='g', alpha=0.3)
for i in xrange(500):
y = a[i] + b[i] * x
plt.plot(x, y, 'b', alpha=0.02)
ax = fig.add_subplot(1,2,2)
for i in xrange(500):
y = a[i] + b[i] * x
plt.plot(x, y, 'b', alpha=0.02)
plt.show()
```

**Note**: It appears that your code is missing the line:
`from scipy import optimize` |
Test if all values are in an iterable in a pythonic way | 31,484,585 | 23 | 2015-07-17T20:56:47Z | 31,484,606 | 7 | 2015-07-17T20:58:06Z | [
"python",
"if-statement"
] | I am currently doing this:
```
if x in a and y in a and z in a and q in a and r in a and s in a:
print b
```
Is there a more pythonic way to express this `if` statement? | ```
if all(v in a for v in {x, y, z, q, r, s}):
print(b)
``` |
Test if all values are in an iterable in a pythonic way | 31,484,585 | 23 | 2015-07-17T20:56:47Z | 31,484,610 | 29 | 2015-07-17T20:58:25Z | [
"python",
"if-statement"
] | I am currently doing this:
```
if x in a and y in a and z in a and q in a and r in a and s in a:
print b
```
Is there a more pythonic way to express this `if` statement? | Using the [all](https://docs.python.org/2/library/functions.html#all) function allows to write this in a nice and compact way:
```
if all(i in a for i in (x, y, z, q, r, s)):
print b
```
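As a quick sketch of the short-circuit behavior with made-up values (the generator stops as soon as one membership test fails):

```python
a = {1, 2, 3}
values = iter([1, 2, 99, 3])   # 99 is missing from a
ok = all(i in a for i in values)
remaining = list(values)
print(ok)         # False
print(remaining)  # [3] -- the check stopped as soon as 99 failed
```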
This code should do almost exactly the same as your example, even if the objects are not hashable or if the `a` object has some funny `__contains__` method. The `all` function also has similar [short-circuit](http://stackoverflow.com/q/17246388) behavior as the chain of `and` in the original problem. Collecting all objects to be tested in a tuple (or a list) will guarantee the same order of execution of the tests as in the original problem. If you use a set, the order might be random. |
Test if all values are in an iterable in a pythonic way | 31,484,585 | 23 | 2015-07-17T20:56:47Z | 31,484,694 | 18 | 2015-07-17T21:04:28Z | [
"python",
"if-statement"
] | I am currently doing this:
```
if x in a and y in a and z in a and q in a and r in a and s in a:
print b
```
Is there a more pythonic way to express this `if` statement? | Another way to do this is to use subsets:
```
if {x, y, z, q, r, s}.issubset(a):
print(b)
```
REPL example:
```
>>> {0, 1, 2}.issubset([0, 1, 2, 3])
True
>>> {0, 1, 2}.issubset([1, 2, 3])
False
```
One caveat with this approach is that all of `x`, `y`, `z`, etc. must be hashable. |
Flask CORS - no Access-control-allow-origin header present on a redirect() | 31,487,379 | 11 | 2015-07-18T03:06:45Z | 31,488,389 | 8 | 2015-07-18T06:14:30Z | [
"python",
"angularjs",
"flask",
"flask-restful"
] | I am implementing OAuth Twitter User-sign in (Flask API and Angular)
I keep getting the following error when I click the sign in with twitter button and a pop up window opens:
```
XMLHttpRequest cannot load https://api.twitter.com/oauth/authenticate?oauth_token=r-euFwAAAAAAgJsmAAABTp8VCiE. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'null' is therefore not allowed access.
```
I am using the python-Cors packages to handle CORS, and I already have instagram sign in working correctly.
I believe it has something to do with the response being a redirect but have not been able to correct the problem.
My flask code looks like this:
```
app = Flask(__name__, static_url_path='', static_folder=client_path)
cors = CORS(app, allow_headers='Content-Type', CORS_SEND_WILDCARD=True)
app.config.from_object('config')
@app.route('/auth/twitter', methods=['POST','OPTIONS'])
@cross_origin(origins='*', send_wildcard=True)
#@crossdomain(origin='')
def twitter():
request_token_url = 'https://api.twitter.com/oauth/request_token'
access_token_url = 'https://api.twitter.com/oauth/access_token'
authenticate_url = 'https://api.twitter.com/oauth/authenticate'
# print request.headers
if request.args.get('oauth_token') and request.args.get('oauth_verifier'):
-- omitted for brevity --
else:
oauth = OAuth1(app.config['TWITTER_CONSUMER_KEY'],
client_secret=app.config['TWITTER_CONSUMER_SECRET'],
callback_uri=app.config['TWITTER_CALLBACK_URL'])
r = requests.post(request_token_url, auth=oauth)
oauth_token = dict(parse_qsl(r.text))
qs = urlencode(dict(oauth_token=oauth_token['oauth_token']))
return redirect(authenticate_url + '?' + qs)
``` | The problem is not yours. Your client-side application is sending requests to Twitter, so it isn't you who needs to support CORS, it is Twitter. But the Twitter API does not currently support CORS, which effectively means that you cannot talk to it directly from the browser.
A common practice to avoid this problem is to have your client-side app send the authentication requests to a server of your own (such as this same Flask application that you have), and in turn the server connects to the Twitter API. Since the server side isn't bound to the CORS requirements there is no problem.
In case you want some ideas, I have written a blog article on doing this type of authentication flow for Facebook and Twitter: <http://blog.miguelgrinberg.com/post/oauth-authentication-with-flask> |
str.lstrip() unexpected behaviour | 31,488,854 | 3 | 2015-07-18T07:21:32Z | 31,488,873 | 8 | 2015-07-18T07:24:49Z | [
"python"
] | I am working on a scrapy project and was trying to parse my config
The string is `attr_title`, and I have to strip `attr_` to get `title`. I used `lstrip('attr_')`, but I'm getting unexpected results. I know `lstrip` works on combinations of characters and removes them, but I'm having a hard time understanding it.
```
In [17]: "attr.title".lstrip('attr.')
Out[17]: 'itle'
```
PS: I know there are multiple solutions for extracting the string; I am interested in understanding this behaviour. | `lstrip` treats its argument as a *set* of characters: it iterates over the string from the left, removing each character as long as it appears in that set.
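Since the actual goal in the question is removing a *prefix* rather than a set of characters, a minimal helper would look like this (`strip_prefix` is a hypothetical name; Python 3.9+ ships `str.removeprefix` for exactly this):

```python
def strip_prefix(s, prefix):
    # remove the prefix only when the string actually starts with it
    return s[len(prefix):] if s.startswith(prefix) else s

print(strip_prefix("attr.title", "attr."))  # title
print("attr.title".lstrip("attr."))         # itle -- set-of-characters semantics
```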
A little illustration is below.
```
In [1]: "attr.title".lstrip('attr.')
Out[1]: 'itle' # Flow --> "attr." --> "t" --> Next char is 'i' which does not match any combination hence, iteration stops & end result ('itle') is returned
In [2]: "attr.tritle".lstrip('attr.')
Out[2]: 'itle' # "attr." --> "t" --> "r" --> Next char is 'i' which does not match any combination hence, iteration stops & end result ('itle') is returned
In [5]: "attr.itratitle".lstrip('attr.')
Out[5]: 'itratitle' # "attr." --> Next char is 'i' which does not match any combination hence, iteration stops & end result ('itratitle') is returned
``` |
Matplotlib: Finding out xlim and ylim after zoom | 31,490,436 | 3 | 2015-07-18T10:57:45Z | 31,491,515 | 8 | 2015-07-18T13:11:50Z | [
"python",
"matplotlib"
] | You surely know a fast way I can track down the limits of my figure after having zoomed in? I would like to know the coordinates precisely, so I can reproduce the figure with `ax.set_xlim` and `ax.set_ylim`.
I am using the standard qt4agg backend.
edit: I know I can use the cursor to find out the two positions in the lower and upper corners, but maybe there is a formal way to do that? | matplotlib has an event-handling API you can use to hook into actions like the ones you're referring to. The [Event Handling](http://matplotlib.org/users/event_handling.html) page gives an overview of the events API, and there's a (very) brief mention of the x- and y-limits events on the [Axes](http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes) page.
In your scenario, you'd want to register callback functions on the `Axes` object's `xlim_changed` and `ylim_changed` events. These functions will get called whenever the user zooms or shifts the viewport.
Here's a minimum working example:
```
import matplotlib.pyplot as plt
#
# Some toy data
x_seq = [x / 100.0 for x in xrange(1, 100)]
y_seq = [x**2 for x in x_seq]
#
# Scatter plot
fig, ax = plt.subplots(1, 1)
ax.scatter(x_seq, y_seq)
#
# Declare and register callbacks
def on_xlims_change(axes):
print "updated xlims: ", axes.get_xlim()
def on_ylims_change(axes):
print "updated ylims: ", axes.get_ylim()
ax.callbacks.connect('xlim_changed', on_xlims_change)
ax.callbacks.connect('ylim_changed', on_ylims_change)
#
# Show
plt.show()
``` |
Calculating local means in a 1D numpy array | 31,491,932 | 10 | 2015-07-18T14:00:22Z | 31,492,039 | 10 | 2015-07-18T14:14:24Z | [
"python",
"arrays",
"numpy",
"scipy",
"mean"
] | I have 1D NumPy array as follows:
```
import numpy as np
d = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20])
```
I want to calculate means of (1,2,6,7), (3,4,8,9), and so on.
This involves the mean of 4 elements: two consecutive elements, plus the two consecutive elements 5 positions later.
I tried the following:
```
>> import scipy.ndimage.filters as filt
>> res = filt.uniform_filter(d,size=4)
>> print res
[ 1 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19]
```
This unfortunately does not give me the desired results. How can I do it? | Instead of indexing, you can approach this with a signal processing perspective. You are basically performing a [discrete convolution](https://en.m.wikipedia.org/wiki/Convolution#Discrete_convolution) of your input signal with a 7-tap kernel where the three centre coefficients are 0 while the extremities are 1, and since you want to compute the average, you need to multiply all of the values by `(1/4)`. However, you're not computing the convolution of all of the elements but we will address that later.
One way is to use [`scipy.ndimage.filters.convolve1d`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.filters.convolve1d.html) for that:
```
import numpy as np
from scipy.ndimage import filters
d = np.arange(1, 21, dtype=float)  # the question's data: 1..20
ker = (1.0/4.0)*np.array([1,1,0,0,0,1,1], dtype=float)
out = filters.convolve1d(d, ker)[3:-3:2]
```
Because you're using a 7 tap kernel, convolution will extend the output by 3 to the left and 3 to the right, so you need to make sure to crop out the first and last three elements. You also want to **skip** every other element because convolution involves a sliding window, but you want to discard every other element so that you get the result you want.
We get this for `out`:
```
In [47]: out
Out[47]: array([ 4., 6., 8., 10., 12., 14., 16.])
```
To double-check to see if we have the right result, try some sample calculations for each element. The first element is equal to `(1+2+6+7)/4 = 4`. The second element is equal to `(3+4+8+9)/4 = 6`, and so on.
---
For a solution with fewer headaches, try [`numpy.convolve`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.convolve.html) with the `mode='valid'` flag. This avoids having to crop out the boundary samples on the left and right, but you will still need to skip every other element:
```
import numpy as np
d = np.arange(1, 21, dtype=np.float)  # the question's data: 1..20
ker = (1.0/4.0)*np.array([1,1,0,0,0,1,1], dtype=np.float)
out = np.convolve(d, ker, mode='valid')[::2]
```
We also get:
```
In [59]: out
Out[59]: array([ 4., 6., 8., 10., 12., 14., 16.])
```
---
Finally if you want indexing, something like this may suffice:
```
length = len(d[6::2])
out = np.array([(a+b+c+e)/4.0 for (a,b,c,e) in zip(d[::2][:length], d[1::2][:length], d[5::2][:length], d[6::2])])
```
We get:
```
In [69]: out
Out[69]: array([ 4., 6., 8., 10., 12., 14., 16.])
```
This is really ugly, but it works. The final length of the signal is governed by the last element of each window, which starts at index 6, so `len(d[6::2])` dictates how many windows fit. Also, note that for an element in a window, its counterpart in the next window can be found by skipping every other element until the end of the array. There are 4 of these sequences in total, and we simply `zip` over them, where each sequence skips every other element but starts at a different offset: the first at offset 0, the next at 1, the next at 5 and the next at 6. We collect these four elements, average them, and move on to the next window until we finish.
BTW, I still like convolution better. |
Alternative to python's .sort() (for inserting into a large list and keeping it sorted) | 31,493,603 | 8 | 2015-07-18T17:18:53Z | 31,493,619 | 9 | 2015-07-18T17:20:56Z | [
"python",
"python-2.7",
"sorting"
] | I need to continuously add numbers to a pre-sorted list:
```
for num in numberList:
list.append(num)
list.sort()
```
Each iteration is short but when the given numberList contains tens of thousands of values this method slows way down. Is there a more efficient function available that leaves a list intact and seeks out which index to insert a new number to preserve the correct order of numbers? Anything I've tried writing myself takes longer than .sort() | See the native `bisect.insort()`, which performs a binary-search insertion into a sorted list; this should fit your needs well since each insertion costs [O(log n) to find the position plus O(n) to shift the following elements](https://en.wikipedia.org/wiki/Insertion_sort#Best.2C_worst.2C_and_average_cases) instead of the O(n log n) of re-sorting after every insertion, as in your current solution.
However, there are faster alternatives to construct a sorted data structure, such as [Skip Lists](https://infohost.nmt.edu/tcc/projects/pystyler/skiplist.py) and Binary Search Trees which will allow insertion with complexity O(log n) at best and O(n) at worst, or even better B-trees, [Red-Black trees](https://github.com/beregond/skiplist-vs-redblacktree), Splay trees and AVL trees which all have a complexity O(log n) in both the best and worst cases. More info about the complexity of all those solutions and others can be found in the great [BigO CheatSheet](http://bigocheatsheet.com/) by Eric Rowell. Note however that all those solutions require you to install a third-party module, and generally they need to be compiled with a C compiler.
However, there is a pure-python module called [sortedcontainers](http://www.grantjenks.com/docs/sortedcontainers/), which claims to be as fast or faster than C compiled Python extensions of implementations of AVL trees and B-trees ([benchmark available here](http://www.grantjenks.com/docs/sortedcontainers/performance.html)).
I benchmarked a few solutions to see which is the fastest to do an insertion sort:
```
sortedcontainers: 0.0860911591881
bisect: 0.665865982912
skiplist: 1.49330501066
sort_insert: 17.4167637739
```
Here's the code I used to benchmark:
```
from timeit import Timer
setup = """
L = list(range(10000)) + list(range(10100, 30000))
from bisect import insort
def sort_insert(L, x):
L.append(x)
L.sort()
from lib.skiplist import SkipList
L2 = SkipList(allowDups=1)
for x in L:
L2.insert(x)
from lib.sortedcontainers import SortedList
L3 = SortedList(L)
"""
# Using sortedcontainers.SortedList()
t_sortedcontainers = Timer("for i in xrange(10000, 10100): L3.add(i)", setup)
# Using bisect.insort()
t_bisect = Timer("for i in xrange(10000, 10100): insort(L, i)", setup)
# Using a Skip List
t_skiplist = Timer("for i in xrange(10000, 10100): L2.insert(i)", setup)
# Using a standard list insert and then sorting
t_sort_insert = Timer("for i in xrange(10000, 10100): sort_insert(L, i)", setup)
# Timing the results
print t_sortedcontainers.timeit(number=100)
print t_bisect.timeit(number=100)
print t_skiplist.timeit(number=100)
print t_sort_insert.timeit(number=100)
```
So the results indicate that the **sortedcontainers is indeed almost 7x faster than bisect** (and I expect the speed gap to increase with the list size since the complexity is an order of magnitude different).
What's more surprising is that the skip list is slower than bisect, but it's probably because it's not as optimized as bisect, which is implemented in C and may use some optimization tricks (note that the skiplist.py module I used was the fastest pure-Python Skip List I could find, the [pyskip module](https://github.com/toastdriven/pyskip) being a lot slower).
Also worth noting: if you need to use more complex structures than lists, the sortedcontainers module offers SortedList, SortedListWithKey, SortedDict and SortedSet (while bisect only works on lists). Also, you might be interested in this [somewhat related benchmark](http://code.activestate.com/recipes/305779-sorting-part2-some-performance-considerations/) and this [complexity cheatsheet of various Python operations](https://github.com/zanqi/python-complexity).
Alternative to python's .sort() (for inserting into a large list and keeping it sorted) | 31,493,603 | 8 | 2015-07-18T17:18:53Z | 31,493,635 | 17 | 2015-07-18T17:22:15Z | [
"python",
"python-2.7",
"sorting"
] | I need to continuously add numbers to a pre-sorted list:
```
for num in numberList:
list.append(num)
list.sort()
```
Each iteration is short but when the given numberList contains tens of thousands of values this method slows way down. Is there a more efficient function available that leaves a list intact and seeks out which index to insert a new number to preserve the correct order of numbers? Anything I've tried writing myself takes longer than .sort() | You can use the [`bisect.insort()` function](https://docs.python.org/2/library/bisect.html#bisect.insort) to insert values into an already sorted list:
```
from bisect import insort
insort(list, num)
```
Note that this'll still take some time as the remaining elements after the insertion point all have to be shifted up a step; you may want to think about re-implementing the list as a linked list instead.
However, if you are keeping the list sorted just to always be able to get the smallest or largest number, you should use the [`heapq` module](https://docs.python.org/2/library/heapq.html) instead; a heap is not kept in strict sorted order, but is very efficient at giving you either the smallest or largest value very quickly, at all times.
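For instance, a minimal sketch of the `heapq` approach, assuming all you ever need is the smallest value:

```python
import heapq

numbers = [5, 1, 9, 3]
heap = []
for num in numbers:
    heapq.heappush(heap, num)   # O(log n) per insertion

# The heap is not kept in fully sorted order, but the smallest
# element is always available at heap[0] / via heappop()
smallest = heapq.heappop(heap)  # 1
```

(For the largest value instead, push negated values or use `heapq.nlargest`.)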
pandas DataFrame "no numeric data to plot" error | 31,494,870 | 5 | 2015-07-18T19:38:20Z | 31,495,326 | 12 | 2015-07-18T20:35:31Z | [
"python",
"pandas",
"matplotlib"
] | I have a small DataFrame that I want to plot using pandas.
```
2 3
0 1300 1000
1 242751149 199446827
2 237712649 194704827
3 16.2 23.0
```
I am still trying to learn plotting from within pandas, and I want a plot of the above DataFrame. But when I say:
```
df.plot()
```
I get the strangest error.
```
Library/Python/2.7/site-packages/pandas-0.16.2-py2.7-macosx-10.10-intel.egg/pandas/tools/plotting.pyc in _compute_plot_data(self)
1015 if is_empty:
1016 raise TypeError('Empty {0!r}: no numeric data to '
-> 1017 'plot'.format(numeric_data.__class__.__name__))
1018
1019 self.data = numeric_data
TypeError: Empty 'DataFrame': no numeric data to plot
```
While I understand that the DataFrame, with its very lopsided values, makes a very uninteresting plot, I am wondering why the error message complains of no numeric data to plot. | Try the following before plotting:
```
df=df.astype(float)
``` |
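The error arises because the values were most likely read in as strings, so both columns have `object` dtype and pandas finds no numeric data to plot. A short sketch reproducing and fixing this (assuming all values are valid numbers):

```python
import pandas as pd

# Rebuild a frame like the question's; note the values are strings
df = pd.DataFrame({2: ['1300', '242751149', '237712649', '16.2'],
                   3: ['1000', '199446827', '194704827', '23.0']})
print(df.dtypes)        # both columns: object

df = df.astype(float)
print(df.dtypes)        # both columns: float64 -- df.plot() now works
```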
Python file open function modes | 31,502,329 | 20 | 2015-07-19T14:36:00Z | 31,503,045 | 9 | 2015-07-19T15:51:45Z | [
"python"
] | I have noticed that, in addition to the documented mode characters, Python 2.7.5.1 in Windows XP and 8.1 also accepts modes `U` and `D` at least when reading files. Mode `U` is used in numpy's [`genfromtxt`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html). Mode `D` has the effect that the file is deleted, as per the following code fragment:
```
f = open('text.txt','rD')
print(f.next())
f.close() # file text.txt is deleted when closed
```
Does anybody know more about these modes, especially whether they are a permanent feature of the language applicable also on Linux systems? | The `D` flag seems to be Windows specific. Windows seems to add several flags to the `fopen` function in its CRT, as described [here](https://msdn.microsoft.com/en-us/library/yeby3zcb.aspx).
While Python does filter the mode string to make sure no errors arise from it, it does allow some of the special flags, as can be seen in the Python sources [here](https://github.com/python/cpython/blob/2.7/Objects/fileobject.c#L209). Specifically, it seems that the `N` flag is filtered out, while the `T` and `D` flags are allowed:
```
while (*++mode) {
if (*mode == ' ' || *mode == 'N') /* ignore spaces and N */
continue;
s = "+TD"; /* each of this can appear only once */
...
```
I would suggest sticking to the documented options to keep the code cross-platform. |
Python test if all N variables are different | 31,505,075 | 4 | 2015-07-19T19:29:02Z | 31,505,105 | 18 | 2015-07-19T19:33:17Z | [
"python",
"idiomatic",
"logical"
] | I'm currently doing a program and I have searched around but I cannot find a solution;
My problem is that I want to write a condition where all the selected variables are different from each other. I can do that, but only with long lines of text; is there a simpler way?
My solution thus far is to do:
```
if A!=B and A!=C and B!=C:
```
But I want to do it for several groups of five variables and it gets quite confusing with that many. What can I do to make it simpler? | Create a set and check whether the number of elements in the set is the same as the number of variables in the list that you passed into it:
```
>>> variables = [a, b, c, d, e]
>>> if len(set(variables)) == len(variables):
... print("All variables are different")
```
A set doesn't have duplicate elements so if you create a set and it has the same number of elements as the number of elements in the original list then you know all elements are different from each other. |
Python: Variables are still accessible if defined in try or if? | 31,505,149 | 7 | 2015-07-19T19:37:25Z | 31,505,354 | 10 | 2015-07-19T19:58:32Z | [
"python",
"scope"
] | I'm a Python beginner and I am from C/C++ background. I'm using Python 2.7.
I read this article: [A Beginner's Guide to Python's Namespaces, Scope Resolution, and the LEGB Rule](http://spartanideas.msu.edu/2014/05/12/a-beginners-guide-to-pythons-namespaces-scope-resolution-and-the-legb-rule/), and I think I have some understanding of these Python concepts.
Today I realized that I can write Python code like this:
```
if condition_1:
var_x = some_value
else:
var_x = another_value
print var_x
```
That is, `var_x` is still accessible even though it is **not** defined **before** the `if`. Because I am from a C/C++ background, this is something new to me: in C/C++, `var_x` would be defined in the scope enclosed by the `if` and `else`, so you could not access it any more unless you defined `var_x` before the `if`.
I've tried to search the answers on Google but because I'm still new to Python, I don't even know where to start and what keywords I should use.
My guess is that, in Python, `if` does not create a new scope. All the variables that are newly defined in `if` are just in the scope that the `if` resides in, and this is why the variable is still accessible after the `if`. However, if `var_x`, in the example above, is only defined in `if` but not in `else`, a warning will be issued to say that the `print var_x` may reference a variable that may not be defined.
I have some confidence in my own understanding. However, **could somebody help correct me if I'm wrong somewhere, or give me a link of the document that discusses about this??**
Thanks. | > My guess is that, in Python, `if` does not create new scope. All the variables that are newly defined in `if` are just in the scope that if resides in and this is why the variable is still accessible after the `if`.
That is correct. In Python, [namespaces](https://docs.python.org/3/tutorial/classes.html#python-scopes-and-namespaces), which essentially determine variable scope, are only created for modules and functions (including methods; basically any `def`). So everything that happens within a function (and not in a sub-function) is placed in the same namespace.
It's important to know, however, that the mere existence of an assignment within a function will reserve a name in the local namespace. This makes for some interesting situations:
```
def outer ():
x = 5
def inner ():
print(x)
# x = 10
inner()
outer()
```
In the code above, with that line commented out, the code will print `5` as you may expect. That's because `inner` will look in the outer scope for the name `x`. If you add the line `x = 10` though, the name `x` will be *local* to `inner`, so the *earlier* lookup of `x` will look in the local namespace of `inner`. But since it hasn't been assigned yet, you will receive an `UnboundLocalError` (*"local variable 'x' referenced before assignment"*). The [`nonlocal`](http://stackoverflow.com/questions/1261875/python-nonlocal-statement) statement was added in Python 3 to overcome one issue from this: the situation where you want to actually modify the `x` of the outer scope within the inner function.
For more information on name lookup, see [this related question](http://stackoverflow.com/questions/291978/short-description-of-python-scoping-rules). |
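To round this out, a sketch of `nonlocal` (Python 3 only) rebinding the outer `x` from inside `inner`:

```python
def outer():
    x = 5
    def inner():
        nonlocal x   # rebind outer's x instead of creating a local one
        x = 10
    inner()
    return x

print(outer())  # prints 10, since inner() modified outer's x
```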
How to remove the trailing comma from a loop in Python? | 31,505,452 | 5 | 2015-07-19T20:09:03Z | 31,505,501 | 8 | 2015-07-19T20:13:03Z | [
"python",
"python-3.x"
] | Here is my code so far:
```
def main():
for var in range (1, 101):
num= IsPrime(var)
if num == 'true':
print(var, end=', ')
```
The IsPrime function calculates whether or not a number is prime.
I need to print out the prime numbers from 1 to 100 formatted into a single line with commas and spaces in between. for example, output should look like:
```
2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97
```
I tried to run my program, but I always get a trailing comma at the end of 97. I don't know how to remove the comma, and because it is a loop, str.rstrip and [:-1] don't work.
I need to use a loop and I can't use
```
print('2')
print(', ', var, end='')
```
for the other prime numbers.
I can't tell if there's an easier way to code this or I'm not aware of a function that can do this correctly. | The idiomatic Python code in my opinion would look something like this:
```
print(', '.join([str(x) for x in range(1, 101) if IsPrime(x) == 'true']))
```
(Things would be better if `IsPrime` actually returned `True` or `False` instead of a string)
This is functional instead of imperative code.
If you want imperative code, you should print the `', '` before each element excepting the first item of the loop. You can do this with a boolean variable which you set to true after you've seen one item. |
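A sketch of that imperative version, using a hypothetical `is_prime()` stand-in for the OP's `IsPrime` (which returns strings rather than booleans):

```python
def is_prime(n):
    # simple trial-division stand-in for the OP's IsPrime
    return n >= 2 and all(n % i for i in range(2, int(n ** 0.5) + 1))

line = ''
first = True
for var in range(1, 101):
    if is_prime(var):
        if not first:
            line += ', '    # separator goes before every item except the first
        line += str(var)
        first = False
print(line)  # 2, 3, 5, ..., 97 -- no trailing comma
```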
PIP install unable to find ffi.h even though it recognizes libffi | 31,508,612 | 16 | 2015-07-20T03:54:51Z | 31,508,663 | 54 | 2015-07-20T04:01:15Z | [
"python",
"linux",
"pip"
] | I have installed `libffi` on my Linux server as well as correctly set the `PKG_CONFIG_PATH` environment variable to the correct directory, as `pip` recognizes that it is installed; however, when trying to install pyOpenSSL, pip states that it cannot find file 'ffi.h'. I know both that`ffi.h` exists as well as its directory, so how do I go about closing this gap between `ffi.h` and `pip`? | You need to install the development package as well.
`libffi-dev` on Debian/Ubuntu, `libffi-devel` on Redhat/Centos/Fedora. |
Pandas DataFrame: replace all values in a column, based on condition | 31,511,997 | 6 | 2015-07-20T08:35:34Z | 31,512,025 | 8 | 2015-07-20T08:37:09Z | [
"python",
"python-2.7",
"pandas",
"dataframe"
] | I have a simple DataFrame like the following:

I want to select all values from the 'First Season' column and replace those that are over 1990 by 1. In this example, only Baltimore Ravens would have the 1996 replaced by 1 (keeping the rest of the data intact).
I have used the following:
```
df.loc[(df['First Season'] > 1990)] = 1
```
But, it replaces all the values in that row by 1, and not just the values in the 'First Season' column.
How can I replace just the values from that column? | You need to select that column:
```
In [41]:
df.loc[df['First Season'] > 1990, 'First Season'] = 1
df
Out[41]:
Team First Season Total Games
0 Dallas Cowboys 1960 894
1 Chicago Bears 1920 1357
2 Green Bay Packers 1921 1339
3 Miami Dolphins 1966 792
4 Baltimore Ravens 1 326
5 San Franciso 49ers 1950 1003
```
So the syntax here is:
```
df.loc[<mask>, <optional column(s)>]   # the mask generates the row labels to index
```
You can check the [docs](http://pandas.pydata.org/pandas-docs/stable/indexing.html#selection-by-label) and also the [10 minutes to pandas](http://pandas.pydata.org/pandas-docs/stable/10min.html#selection-by-label) which shows the semantics
**EDIT**
If you want to generate a boolean indicator then you can just use the boolean condition to generate a boolean Series and cast the dtype to `int` this will convert `True` and `False` to `1` and `0` respectively:
```
In [43]:
df['First Season'] = (df['First Season'] > 1990).astype(int)
df
Out[43]:
Team First Season Total Games
0 Dallas Cowboys 0 894
1 Chicago Bears 0 1357
2 Green Bay Packers 0 1339
3 Miami Dolphins 0 792
4 Baltimore Ravens 1 326
5 San Franciso 49ers 0 1003
``` |
pip install -r: OSError: [Errno 13] Permission denied | 31,512,422 | 9 | 2015-07-20T08:58:05Z | 31,512,489 | 13 | 2015-07-20T09:02:33Z | [
"python",
"django",
"pip"
] | I am trying to setup [Django](https://www.djangoproject.com).
When I run `pip install -r requirements.txt`, I get the following exception:
```
Installing collected packages: amqp, anyjson, arrow, beautifulsoup4, billiard, boto, braintree, celery, cffi, cryptography, Django, django-bower, django-braces, django-celery, django-crispy-forms, django-debug-toolbar, django-disqus, django-embed-video, django-filter, django-merchant, django-pagination, django-payments, django-storages, django-vote, django-wysiwyg-redactor, easy-thumbnails, enum34, gnureadline, idna, ipaddress, ipython, kombu, mock, names, ndg-httpsclient, Pillow, pyasn1, pycparser, pycrypto, PyJWT, pyOpenSSL, python-dateutil, pytz, requests, six, sqlparse, stripe, suds-jurko
Cleaning up...
Exception:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 283, in run
requirement_set.install(install_options, global_options, root=options.root_path)
File "/usr/lib/python2.7/dist-packages/pip/req.py", line 1436, in install
requirement.install(install_options, global_options, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/pip/req.py", line 672, in install
self.move_wheel_files(self.source_dir, root=root)
File "/usr/lib/python2.7/dist-packages/pip/req.py", line 902, in move_wheel_files
pycompile=self.pycompile,
File "/usr/lib/python2.7/dist-packages/pip/wheel.py", line 206, in move_wheel_files
clobber(source, lib_dir, True)
File "/usr/lib/python2.7/dist-packages/pip/wheel.py", line 193, in clobber
os.makedirs(destsubdir)
File "/usr/lib/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/amqp-1.4.6.dist-info'
```
What's wrong and how do I fix this? | Perhaps **you are not root** and should do:
```
sudo pip install -r requirements.txt
```
Find more about `sudo` [here](https://wiki.archlinux.org/index.php/Sudo).
Alternately, if you just cannot or **don't want to make system-wide changes**, follow this guide:
[How can I install packages in my $HOME folder with pip?](http://stackoverflow.com/questions/7143077/how-can-i-install-packages-in-my-home-folder-with-pip).
TL;DR:
```
pip install --user runloop
```
You can also use a [virtualenv](https://virtualenv.pypa.io/en/latest/), which might be an even better solution for a development environment, especially if you are working on **multiple projects and want to keep track of each one's dependencies**. |
pip install -r: OSError: [Errno 13] Permission denied | 31,512,422 | 9 | 2015-07-20T08:58:05Z | 31,512,491 | 13 | 2015-07-20T09:02:40Z | [
"python",
"django",
"pip"
] | I am trying to setup [Django](https://www.djangoproject.com).
When I run `pip install -r requirements.txt`, I get the following exception:
```
Installing collected packages: amqp, anyjson, arrow, beautifulsoup4, billiard, boto, braintree, celery, cffi, cryptography, Django, django-bower, django-braces, django-celery, django-crispy-forms, django-debug-toolbar, django-disqus, django-embed-video, django-filter, django-merchant, django-pagination, django-payments, django-storages, django-vote, django-wysiwyg-redactor, easy-thumbnails, enum34, gnureadline, idna, ipaddress, ipython, kombu, mock, names, ndg-httpsclient, Pillow, pyasn1, pycparser, pycrypto, PyJWT, pyOpenSSL, python-dateutil, pytz, requests, six, sqlparse, stripe, suds-jurko
Cleaning up...
Exception:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 283, in run
requirement_set.install(install_options, global_options, root=options.root_path)
File "/usr/lib/python2.7/dist-packages/pip/req.py", line 1436, in install
requirement.install(install_options, global_options, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/pip/req.py", line 672, in install
self.move_wheel_files(self.source_dir, root=root)
File "/usr/lib/python2.7/dist-packages/pip/req.py", line 902, in move_wheel_files
pycompile=self.pycompile,
File "/usr/lib/python2.7/dist-packages/pip/wheel.py", line 206, in move_wheel_files
clobber(source, lib_dir, True)
File "/usr/lib/python2.7/dist-packages/pip/wheel.py", line 193, in clobber
os.makedirs(destsubdir)
File "/usr/lib/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/amqp-1.4.6.dist-info'
```
What's wrong and how do I fix this? | Have you tried with *sudo*?
```
sudo pip install -r requirements.txt
``` |
Check what numbers in a list are divisible by certain numbers? | 31,517,851 | 2 | 2015-07-20T13:30:50Z | 31,517,922 | 7 | 2015-07-20T13:34:17Z | [
"python",
"math",
"for-loop",
"list-comprehension"
] | Write a function that receives a list of numbers
and a list of terms and returns only the elements that are divisible
by all of those terms. You must use two nested list comprehensions to solve it.
`divisible_numbers([12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1], [2, 3]) # returns [12, 6]`
`def divisible_numbers(a_list, a_list_of_terms):`
I have vague pseudocode so far: check the list, and if an element is divisible, append it to a new list; then check the new list against the next term, and repeat until you have gone through all the terms. I don't want anyone to do this for me, but could I get a hint in the right direction? | The inner expression should check whether a particular number is evenly divisible by all of the terms in the second list
```
all(i%j==0 for j in a_list_of_terms)
```
Then an outer list comprehension to iterate through the items of the first list
```
[i for i in a_list if all(i%j==0 for j in a_list_of_terms)]
```
All together
```
def divisible_numbers(a_list, a_list_of_terms):
return [i for i in a_list if all(i%j==0 for j in a_list_of_terms)]
```
Testing
```
>>> divisible_numbers([12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1], [2, 3])
[12, 6]
``` |
Plotting multiple lines with Bokeh and pandas | 31,520,951 | 8 | 2015-07-20T15:52:00Z | 31,707,031 | 9 | 2015-07-29T17:15:51Z | [
"python",
"pandas",
"bokeh"
] | I would like to give a pandas dataframe to Bokeh to plot a line chart with multiple lines.
The x-axis should be the df.index and each df.columns should be a separate line.
This is what I would like to do:
```
import pandas as pd
import numpy as np
from bokeh.plotting import figure, show
toy_df = pd.DataFrame(data=np.random.rand(5,3), columns = ('a', 'b' ,'c'), index = pd.DatetimeIndex(start='01-01-2015',periods=5, freq='d'))
p = figure(width=1200, height=900, x_axis_type="datetime")
p.multi_line(df)
show(p)
```
However, i get the error:
```
RuntimeError: Missing required glyph parameters: ys
```
Instead, I've managed to do this:
```
import pandas as pd
import numpy as np
from bokeh.plotting import figure, show
toy_df = pd.DataFrame(data=np.random.rand(5,3), columns = ('a', 'b' ,'c'), index = pd.DatetimeIndex(start='01-01-2015',periods=5, freq='d'))
ts_list_of_list = []
for i in range(0,len(toy_df.columns)):
ts_list_of_list.append(toy_df.index)
vals_list_of_list = toy_df.values.T.tolist()
p = figure(width=1200, height=900, x_axis_type="datetime")
p.multi_line(ts_list_of_list, vals_list_of_list)
show(p)
```
That (inelegantly) does the job, but it uses the same color for all 3 lines, see below:

**Questions:**
**1) How can I pass a pandas DataFrame to Bokeh's `multi_line`?**
**2) If not possible directly, how can I manipulate the DataFrame data so that `multi_line` will create each line with a different color?**
thanks in advance | You need to provide a list of colors to `multi_line`. In your example, you would do something like this:
```
p.multi_line(ts_list_of_list, vals_list_of_list, line_color=['red', 'green', 'blue'])
```
Here's a more general purpose modification of your second example that does more or less what you ended up with, but is a little more concise and perhaps more Pythonic:
```
import pandas as pd
import numpy as np
from bokeh.palettes import Spectral11
from bokeh.plotting import figure, show, output_file
output_file('temp.html')
toy_df = pd.DataFrame(data=np.random.rand(5,3), columns = ('a', 'b' ,'c'), index = pd.DatetimeIndex(start='01-01-2015',periods=5, freq='d'))
numlines=len(toy_df.columns)
mypalette=Spectral11[0:numlines]
p = figure(width=500, height=300, x_axis_type="datetime")
p.multi_line(xs=[toy_df.index.values]*numlines,
ys=[toy_df[name].values for name in toy_df],
line_color=mypalette,
line_width=5)
show(p)
```
which yields:
[](http://i.stack.imgur.com/CZSF4.png) |
groupby weighted average and sum in pandas dataframe | 31,521,027 | 3 | 2015-07-20T15:55:39Z | 31,521,177 | 10 | 2015-07-20T16:03:36Z | [
"python",
"pandas"
] | I have a dataframe ,
```
Out[78]:
contract month year buys adjusted_lots price
0 W Z 5 Sell -5 554.85
1 C Z 5 Sell -3 424.50
2 C Z 5 Sell -2 424.00
3 C Z 5 Sell -2 423.75
4 C Z 5 Sell -3 423.50
5 C Z 5 Sell -2 425.50
6 C Z 5 Sell -3 425.25
7 C Z 5 Sell -2 426.00
8 C Z 5 Sell -2 426.75
9 CC U 5 Buy 5 3328.00
10 SB V 5 Buy 5 11.65
11 SB V 5 Buy 5 11.64
12 SB V 5 Buy 2 11.60
```
I need the sum of adjusted_lots, and the weighted average of price (weighted by adjusted_lots), grouped by all the other columns, i.e. grouped by (contract, month, year and buys).
A similar solution in R was achieved with the following code using dplyr; however, I have been unable to do the same in pandas.
```
> newdf = df %>%
select ( contract , month , year , buys , adjusted_lots , price ) %>%
group_by( contract , month , year , buys) %>%
summarise(qty = sum( adjusted_lots) , avgpx = weighted.mean(x = price , w = adjusted_lots) , comdty = "Comdty" )
> newdf
Source: local data frame [4 x 6]
contract month year comdty qty avgpx
1 C Z 5 Comdty -19 424.8289
2 CC U 5 Comdty 5 3328.0000
3 SB V 5 Comdty 12 11.6375
4 W Z 5 Comdty -5 554.8500
```
is the same possible by groupby or any other solution ? | To pass multiple functions to a groupby object, you need to pass a dictionary with the aggregation functions corresponding to the columns:
```
# Define a lambda function to compute the weighted mean:
wm = lambda x: np.average(x, weights=df.loc[x.index, "adjusted_lots"])
# Define a dictionary with the functions to apply for a given column:
f = {'adjusted_lots': ['sum'], 'price': {'weighted_mean' : wm} }
# Groupby and aggregate with your dictionary:
df.groupby(["contract", "month", "year", "buys"]).agg(f)
adjusted_lots price
sum weighted_mean
contract month year buys
C Z 5 Sell -19 424.828947
CC U 5 Buy 5 3328.000000
SB V 5 Buy 12 11.637500
W Z 5 Sell -5 554.850000
```
You can see more here:
* <http://pandas.pydata.org/pandas-docs/stable/groupby.html#applying-multiple-functions-at-once>
and in a similar question here:
* [Apply multiple functions to multiple groupby columns](http://stackoverflow.com/questions/14529838/apply-multiple-functions-to-multiple-groupby-columns)
Hope this helps |
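On newer pandas versions (0.25+), where the nested-dict renaming used above in `.agg` was removed, named aggregation gives the same result; a sketch on a subset of the data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'contract': ['C'] * 3, 'month': ['Z'] * 3,
                   'year': [5] * 3, 'buys': ['Sell'] * 3,
                   'adjusted_lots': [-3, -2, -2],
                   'price': [424.50, 424.00, 423.75]})

# Weighted mean, looking the weights up by the group's original index
wm = lambda x: np.average(x, weights=df.loc[x.index, 'adjusted_lots'])

out = df.groupby(['contract', 'month', 'year', 'buys']).agg(
    qty=('adjusted_lots', 'sum'),
    avgpx=('price', wm),
)
print(out)
```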
pandas column convert currency to float | 31,521,526 | 6 | 2015-07-20T16:21:19Z | 31,521,773 | 14 | 2015-07-20T16:36:28Z | [
"python",
"pandas",
"currency"
] | I have a df with currency:
```
df = pd.DataFrame({'Currency':['$1.00','$2,000.00','(3,000.00)']})
Currency
0 $1.00
1 $2,000.00
2 (3,000.00)
```
I want to convert the 'Currency' dtype to float but I am having trouble with the parentheses string (which indicate a negative amount). This is my current code:
```
df[['Currency']] = df[['Currency']].replace('[\$,]','',regex=True).astype(float)
```
which produces an error:
```
ValueError: could not convert string to float: (3000.00)
```
What I want as dtype float is:
```
Currency
0 1.00
1 2000.00
2 -3000.00
``` | Just add `)` to the existing command, and then convert `(` to `-` to make numbers in parentheses negative. Then convert to float.
```
(df['Currency'].replace( '[\$,)]','', regex=True )
.replace( '[(]','-', regex=True ).astype(float))
Currency
0 1
1 2000
2 -3000
``` |
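The same approach as a self-contained sketch, written with the `.str` accessor (which behaves identically here):

```python
import pandas as pd

df = pd.DataFrame({'Currency': ['$1.00', '$2,000.00', '(3,000.00)']})
df['Currency'] = (df['Currency']
                  .str.replace(r'[\$,)]', '', regex=True)   # drop $, commas, ')'
                  .str.replace('(', '-', regex=False)       # '(' marks a negative
                  .astype(float))
print(df['Currency'].tolist())  # [1.0, 2000.0, -3000.0]
```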
Django: timezone.now() does not return current datetime | 31,521,846 | 2 | 2015-07-20T16:39:56Z | 31,585,771 | 7 | 2015-07-23T11:16:29Z | [
"python",
"django",
"datetime",
"django-rest-framework"
] | Through a Django Rest Framework API, I am trying to serve all objects with a datetime in the future.
**Problem is, once the server has started up, every time I submit the query, the API will serve all objects whose datetime is greater than the datetime at which the server started instead of the objects whose datetime is greater than the current time.**
```
from django.utils import timezone
class BananasViewSet(viewsets.ReadOnlyModelViewSet):
queryset = Banana.objects.filter(date_and_time__gte=timezone.now())
...
```
Without any more luck, I also tried this variation:
```
import datetime as dt
class BananasViewSet(viewsets.ReadOnlyModelViewSet):
queryset = Banana.objects.filter(date_and_time__gte=
timezone.make_aware(dt.datetime.now(), timezone.get_current_timezone())
...
```
Making a similar query in a Django shell correctly returns the objects up to date... | As the application code is currently written you're running `timezone.now()` once, when the class is first imported from anywhere.
Rather than apply the time queryset filtering on the class attribute itself, do so in the `get_queryset()` method so that it'll be re-evaluated on each pass.
Eg.
```
class BananasViewSet(viewsets.ReadOnlyModelViewSet):
queryset = Banana.objects.all()
def get_queryset(self):
cutoff = timezone.now()
return self.queryset.filter(date_and_time__gte=cutoff)
``` |
Must Python script define a function as main? | 31,523,059 | 7 | 2015-07-20T17:47:10Z | 31,523,074 | 10 | 2015-07-20T17:48:01Z | [
"python",
"python-3.x",
"coding-style"
] | Must/should a Python script have a `main()` function? For example is it ok to replace
```
if __name__ == '__main__':
main()
```
with
```
if __name__ == '__main__':
entryPoint()
```
(or some other meaningful name) | Using a function named `main()` is just a convention. You can give it any name you want to.
Testing for the module name is *just a nice trick* to prevent code running when your code is not being executed as the `__main__` module (i.e. not when imported as the script Python started with, but imported as a module). You can run any code you like under that `if` test.
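For example, a minimal sketch (the name `entry_point` is arbitrary; any callable works under the guard):

```python
def entry_point():
    # the script's real work goes here
    print("running as a script")

if __name__ == '__main__':
    # only runs when executed directly, not when imported
    entry_point()
```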
Using a function in that case helps keep the global namespace of your module uncluttered by shunting names into a local namespace instead. Naming that function `main()` is commonplace, but not a requirement. |
Must Python script define a function as main? | 31,523,059 | 7 | 2015-07-20T17:47:10Z | 31,523,095 | 7 | 2015-07-20T17:49:40Z | [
"python",
"python-3.x",
"coding-style"
] | Must/should a Python script have a `main()` function? For example is it ok to replace
```
if __name__ == '__main__':
main()
```
with
```
if __name__ == '__main__':
entryPoint()
```
(or some other meaningful name) | No, a Python script doesn't have to have a `main()` function. It is just following conventions because the function that you put under the `if __name__ == '__main__':` statement is the function that really does all of the work for your script, so it is the main function. If there really is function name that would make the program easier to read and clearer, then you can instead use that function name.
In fact, you don't even *need* the `if __name__ == '__main__':` part, but it is a good practice, not just a convention. It will just prevent the `main()` function or whatever else you would like to call it from running when you import the file as a module. If you won't be importing the script, you *probably* don't need it, but it is still a good practice. For more on that and why it does what it does, see [What does `if \_\_name\_\_ == "\_\_main\_\_":` do?](http://stackoverflow.com/questions/419163/what-does-if-name-main-do) |
How can I pass arguments to a docker container with a python entry-point script using command? | 31,523,551 | 12 | 2015-07-20T18:17:06Z | 31,523,657 | 13 | 2015-07-20T18:22:45Z | [
"python",
"docker"
] | So I've got a docker image with a python script as the entry-point and I would like to pass arguments to the python script when the container is run. I've tried to get the arguments using sys.argv and sys.stdin, but neither has worked. I'm trying to run the container using:
```
docker run image argument
``` | It depends on how the entrypoint was set up. If it was set up in "exec form" then you simply pass the arguments after the `docker run` command, like this:
```
docker run image -a -b -c
```
If it was set up in "shell form" then you have to override the entrypoint, unfortunately.
```
$ docker run --entrypoint echo image hi
hi
```
You can check the form using `docker inspect`. If the entrypoint appears to begin with `/bin/sh -c`, then it is shell form.
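Once the arguments reach the container's Python entry-point script, they show up in `sys.argv` as usual; a minimal sketch (script name and flags are illustrative):

```python
import sys

# with an exec-form entrypoint such as ["python", "/app/script.py"],
# running `docker run image -a -b -c` makes sys.argv[1:] equal ['-a', '-b', '-c']
args = sys.argv[1:]
print("received arguments:", args)
```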
References:
* <http://docs.docker.com/reference/run/#entrypoint-default-command-to-execute-at-runtime>
* <http://docs.docker.com/reference/builder/#entrypoint> |
Get U, Σ, V* from Truncated SVD in scikit-learn | 31,523,575 | 9 | 2015-07-20T18:18:17Z | 31,528,944 | 10 | 2015-07-21T01:35:32Z | [
"python",
"scipy",
"scikit-learn",
"sparse-matrix",
"svd"
I am using truncated SVD from the `scikit-learn` package.
In the definition of SVD, an original matrix **A** is approximated as a product **A** ≈ **UΣV\*** where **U** and **V** have orthonormal columns, and **Σ** is non-negative diagonal.
I need to get the **U**, **Σ** and **V\*** matrices.
Looking at the source code [here](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/decomposition/truncated_svd.py#L174) I found out that **V\*** is stored in the `self.components_` field after calling `fit_transform`.
Is it possible to get **U** and **Σ** matrices?
My code:
```
import sklearn.decomposition as skd
import numpy as np
matrix = np.random.random((20,20))
trsvd = skd.TruncatedSVD(n_components=15)
transformed = trsvd.fit_transform(matrix)
VT = trsvd.components_
``` | Looking into the source via the link you provided, `TruncatedSVD` is basically a wrapper around sklearn.utils.extmath.randomized\_svd; you can manually call this yourself like this:
```
from sklearn.utils.extmath import randomized_svd
U, Sigma, VT = randomized_svd(X, n_components=15,
n_iter=5,
random_state=None)
``` |
How to implement ZCA Whitening? Python | 31,528,800 | 3 | 2015-07-21T01:14:02Z | 31,528,936 | 9 | 2015-07-21T01:34:27Z | [
"python",
"neural-network",
"pca",
"correlated"
] | Im trying to implement **ZCA whitening** and found some articles to do it, but they are a bit confusing.. can someone shine a light for me?
Any tip or help is appreciated!
Here is the articles i read :
<http://courses.media.mit.edu/2010fall/mas622j/whiten.pdf>
<http://bbabenko.tumblr.com/post/86756017649/learning-low-level-vision-feautres-in-10-lines-of>
I tried several things, but most of them I didn't understand, and I got stuck at some step.
Right now i have this as base to start again :
```
dtype = np.float32
data = np.loadtxt("../inputData/train.csv", dtype=dtype, delimiter=',', skiprows=1)
img = ((data[1,1:]).reshape((28,28)).astype('uint8')*255)
``` | Is your data stored in an mxn matrix, where m is the dimension of the data and n is the total number of cases? If that's not the case, you should reshape your data. For instance, if your images are of size 28x28 and you have only one image, you should have a 1x784 vector. You could use this function:
```
import numpy as np
def flatten_matrix(matrix):
    vector = matrix.flatten('F')  # column-major (Fortran) order; flatten(1) is the old spelling
vector = vector.reshape(1, len(vector))
return vector
```
Then you apply ZCA Whitening to your training set using:
```
def zca_whitening(inputs):
sigma = np.dot(inputs, inputs.T)/inputs.shape[1] #Correlation matrix
U,S,V = np.linalg.svd(sigma) #Singular Value Decomposition
epsilon = 0.1 #Whitening constant, it prevents division by zero
    ZCAMatrix = np.dot(np.dot(U, np.diag(1.0/np.sqrt(S + epsilon))), U.T) #ZCA Whitening matrix; S from np.linalg.svd is already a 1-D vector
return np.dot(ZCAMatrix, inputs) #Data whitening
```
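A quick self-contained usage sketch of the same recipe on random data (shapes are purely illustrative; `S` returned by `np.linalg.svd` is a 1-D vector, so it can be passed straight to `np.diag`):

```python
import numpy as np

np.random.seed(0)
X = np.random.randn(10, 50)               # 10 dimensions, 50 cases
sigma = np.dot(X, X.T) / X.shape[1]       # correlation matrix
U, S, V = np.linalg.svd(sigma)
epsilon = 0.1
ZCAMatrix = np.dot(np.dot(U, np.diag(1.0 / np.sqrt(S + epsilon))), U.T)
X_white = np.dot(ZCAMatrix, X)
print(X_white.shape)                      # (10, 50)
```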
It is important to save the `ZCAMatrix` matrix, you should multiply your test cases if you want to predict after training the Neural Net.
Finally, I invite you to take the Stanford UFLDL Tutorials at <http://ufldl.stanford.edu/wiki/index.php/UFLDL_Tutorial> or <http://ufldl.stanford.edu/tutorial/> . They have pretty good explanations and also some programming exercises on MATLAB, however, almost all the functions found on MATLAB are on Numpy by the same name. I hope this may give an insight. |
Geo Django get cities from latitude and longitude | 31,538,288 | 9 | 2015-07-21T11:51:10Z | 31,588,101 | 7 | 2015-07-23T12:59:05Z | [
"python",
"django",
"geodjango"
] | I'm learning how to use Geo Django. When a user registers I save the latitude and longitude information as seen below:
```
from django.contrib.gis.db import models
from django.contrib.gis.geos import Point
class GeoModel(models.Model):
"""
Abstract model to provide GEO fields.
"""
latitude = models.FloatField(blank=True, null=True, verbose_name='Latitude')
longitude = models.FloatField(blank=True, null=True, verbose_name='Longitude')
location = models.PointField(blank=True, null=True)
class Meta:
abstract = True
def save(self, *args, **kwargs):
if self.latitude and self.longitude:
self.location = Point(float(self.latitude), float(self.longitude))
super(GeoModel, self).save(*args, **kwargs)
```
Next I want to add a new field called `city` where **I want to store the city "name"**, i.e. London, based on the long and lat details given.
I have looked at `django-cities`, which saves a database of cities from GeoNames, but this seems overkill and is not Python 3 compatible, so it's a no-go for me. I have seen a package called [geopy](https://github.com/geopy/geopy), but this seems to completely replace Geo Django and uses the Google API, which comes with restrictions.
I want to stick with **Geo Django** and somehow do a lookup to find out the City name based on long and lat so what are my options? i.e..
1. Find a way to install <http://www.geonames.org/> data django-cities
does not work for Python 3 and Django 1.8.x
2. Somehow use Geo Django
to do this lookup using the data? (unsure ow to do this)
**Is the above the correct way to approach this? If so, how do I get the data into my DB and then do a lookup on it using Geo Django to find the city name?** | You have several solutions to do this:
1. Create another model City
```
from django.contrib.gis.db import models
class City(models.Model):
    name = models.CharField(max_length=255)
geometry = models.MultiPolygonField()
objects = models.GeoManager()
```
Then you can find the name of the city containing your location with:
```
geomodel = GeoModel(...)
city = City.objects.get(geometry__contains=geomodel.location)
city_name = city.name
```
Depending on the country/region you are interested in, you may find city geometries to fill the City table in open data (e.g. <http://www.gadm.org/country>, not tested but it seems to offer open shapefiles).
2. Use an inverse geocoder. Google can provide such service or look at <http://www.geonames.org/export/reverse-geocoding.html> if something can match your needs. |
How to get alternating colours in dashed line using matplotlib? | 31,540,258 | 9 | 2015-07-21T13:21:26Z | 31,544,278 | 8 | 2015-07-21T16:10:29Z | [
"python",
"matplotlib",
"plot"
] | In matplotlib, I want to make a line using `matplotlib.pyplot` which is alternating black and yellow dashes, and then I want to include that line on the legend. How do I do that?
I could do something like:
```
from matplotlib import pyplot as plt, gridspec
import numpy as np
grid = gridspec.GridSpec(1,1)
ax = plt.subplot(grid[0,0])
x = np.arange(1,11)
y = x * 2
ax.plot(x, y, '-', color = 'black', linewidth = 1, label = 'my line')
ax.plot(x, y, '--', color = 'yellow')
ax.legend()
plt.show()
```
but then the line on the legend would appear as a solid black line, rather than as black-and-yellow dashes.
I did look at `matplotlib.path_effects` but I can't work out whether it's possible to achieve what I want; I can outline or shadow the line, but I'm not sure I can overlay a differently-coloured dashed line. | Try this.
```
from matplotlib import pyplot as plt, gridspec, lines
import numpy as np
grid = gridspec.GridSpec(1,1)
ax = plt.subplot(grid[0,0])
x = np.arange(1,11)
y = x * 2
ax.plot(x, y, '-', color = 'black', linewidth = 5)
ax.plot(x, y, '--', color = 'lawngreen', linewidth = 5)
dotted_line1 = lines.Line2D([], [], linewidth=5, linestyle="--", dashes=(10, 1), color='lawngreen')
dotted_line2 = lines.Line2D([], [], linewidth=5, linestyle="-", dashes=(5, 4), color='black')
plt.legend([(dotted_line1, dotted_line2)], ["My Line"])
plt.show()
```
I increased the line width so it is clearly visible. As yellow was not very clear on a white background, I changed it to green; sorry about that. You can change the colors any way you like :)
 |
inpolygon for Python - Examples of matplotlib.path.Path contains_points() method? | 31,542,843 | 2 | 2015-07-21T15:04:43Z | 31,543,337 | 11 | 2015-07-21T15:26:00Z | [
"python",
"matlab",
"numpy",
"matplotlib"
] | I have been searching for a python alternative to MATLAB's inpolygon() and I have come across contains\_points as a good option.
However, the docs are a little bare with no indication of what type of data contains\_points expects:
> contains\_points(points, transform=None, radius=0.0)
>
> Returns a bool array which is True if the path contains the corresponding point.
>
> If transform is not None, the path will be transformed before performing the test.
>
> radius allows the path to be made slightly larger or smaller.
I have the polygon stored as an n\*2 numpy array (where n is quite large ~ 500). As far as I can see I need to call the Path() method on this data which seems to work Ok:
```
poly_path = Path(poly_points)
```
At the moment I also have the points I wish to test stored as another n\*2 numpy array (catalog\_points).
Perhaps my problem lies here? As when I run:
```
in_poly = poly_path.contains_points(catalog_points)
```
I get back an ndarray containing 'False' for every value no matter the set of points I use (I have tested this on arrays of points well within the polygon). | Often in these situations, I find the source to be illuminating...
We can see the source for [`path.contains_point`](https://github.com/matplotlib/matplotlib/blob/714d18788320325d0bff75184f62d472f67ceb91/lib/matplotlib/path.py#L483) accepts a container that has at least 2 elements. The source for `contains_points` is a bit harder to figure out since it calls through to a C function [`Py_points_in_path`](https://github.com/matplotlib/matplotlib/blob/382974186f78ef53cc91f7cf161159d5d9f71475/src/_path_wrapper.cpp#L61). It seems that this function accepts an iterable that yields elements of length 2:
```
>>> from matplotlib import path
>>> p = path.Path([(0,0), (0, 1), (1, 1), (1, 0)]) # square with legs length 1 and bottom left corner at the origin
>>> p.contains_points([(.5, .5)])
array([ True], dtype=bool)
```
Of course, we could use a numpy array of points as well:
```
>>> points = np.array([.5, .5]).reshape(1, 2)
>>> points
array([[ 0.5, 0.5]])
>>> p.contains_points(points)
array([ True], dtype=bool)
```
And just to check that we aren't always just getting `True`:
```
>>> points = np.array([.5, .5, 1, 1.5]).reshape(2, 2)
>>> points
array([[ 0.5, 0.5],
[ 1. , 1.5]])
>>> p.contains_points(points)
array([ True, False], dtype=bool)
``` |
How can I extend a set with a tuple? | 31,544,050 | 3 | 2015-07-21T15:59:46Z | 31,544,104 | 12 | 2015-07-21T16:02:18Z | [
"python",
"set"
] | Unlike `list.extend(L)`, there is no `extend` method on `set`. How can I add the elements of a tuple to a set in a Pythonic way?
```
t1 = (1, 2, 3)
t2 = (3, 4, 5)
t3 = (5, 6, 7)
s = set()
s.add(t1)
s.add(t2)
s.add(t3)
print s
set([(3, 4, 5), (5, 6, 7), (1, 2, 3)])
```
My expected result is:
```
set([1, 2, 3, 4, 5, 6, 7])
```
My solutions is something like:
```
for item in t1 :
s.add(item)
``` | Try the `union` method -
```
t1 = (1, 2, 3)
t2 = (3, 4, 5)
t3 = (5, 6, 7)
s= set()
s = s.union(t1)
s = s.union(t2)
s = s.union(t3)
s
>>> set([1, 2, 3, 4, 5, 6, 7])
```
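Note that sets also have an in-place `update` method (the closest analogue of `list.extend`), which accepts any number of iterables:

```python
t1 = (1, 2, 3)
t2 = (3, 4, 5)
t3 = (5, 6, 7)
s = set()
s.update(t1, t2, t3)  # adds the elements of each tuple in place
assert s == {1, 2, 3, 4, 5, 6, 7}
```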
Or, as indicated in the comments, a cleaner method:
```
s = set().union(t1, t2, t3)
``` |
What misspellings / typos are supported in Python? | 31,546,628 | 3 | 2015-07-21T18:13:45Z | 31,546,629 | 7 | 2015-07-21T18:13:45Z | [
"python",
"python-3.x"
] | What misspellings / typos are supported in Python?
Not alternate spellings such as `is_dir` vs `isdir`, nor `color` vs `colour` but actual wrongly spelt aliases, such as `proprety` for `property` (which isn't supported). | As of Python 3.5 beta 3 the [unittest.mock](https://docs.python.org/3.5/library/unittest.mock.html#unittest.mock.Mock) object now supports `assret` standing in for `assert` -- note that this is not the keyword `assert`, but any attribute of a mock object that matches the regular expression `assert.*` or `assret.*`.
Some explanation:
When a mock object is created the default for any attribute access is to return a new `Mock`, except in one case: if the attribute is one of `assert_called_with`, `assert_called_once_with`, `assert_any_call`, `assert_has_calls`, and `assert_not_called`, in which case some code is actually run.
The problem is if one forgets the exact name and uses, for example, `assert_called`, then instead of code running to check that the mock was called, a new mock is returned instead and the test one wrote passes instead of actually doing the test and possibly failing.
To combat this problem `Mock` now raises an `AttributeError` if any access is made to an attribute that starts with `assert`.
Besides `assert`, `Mock` will also raise an `AttributeError` if any access is made to an attribute that starts with `assret`.
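A short sketch of that behaviour (the misspelled method name is deliberate; requires Python 3.5 or later):

```python
from unittest import mock

m = mock.Mock()
m.some_method(42)             # arbitrary call; returns yet another Mock

try:
    m.assert_called_wiht(42)  # typo for assert_called_with
except AttributeError:
    print("typo caught instead of silently passing")
```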
If one does not want the extra protection (for `assert` and `assret`) one can use `unsafe=True` when creating the `Mock`. |
How can I solve system of linear equations in SymPy? | 31,547,657 | 5 | 2015-07-21T19:12:00Z | 31,547,816 | 8 | 2015-07-21T19:21:29Z | [
"python",
"math",
"sympy"
] | Sorry, I am pretty new to sympy and python in general.
I want to solve the following underdetermined linear system of equations:
```
x + y + z = 1
x + y + 2z = 3
``` | SymPy recently got a new Linear system solver: `linsolve` in `sympy.solvers.solveset`, you can use that as follows:
```
In [38]: from sympy import *
In [39]: from sympy.solvers.solveset import linsolve
In [40]: x, y, z = symbols('x, y, z')
```
**List of Equations Form:**
```
In [41]: linsolve([x + y + z - 1, x + y + 2*z - 3 ], (x, y, z))
Out[41]: {(-y - 1, y, 2)}
```
**Augmented Matrix Form:**
```
In [59]: linsolve(Matrix(([1, 1, 1, 1], [1, 1, 2, 3])), (x, y, z))
Out[59]: {(-y - 1, y, 2)}
```
**A\*x = b Form**
```
In [59]: M = Matrix(((1, 1, 1, 1), (1, 1, 2, 3)))
In [60]: system = A, b = M[:, :-1], M[:, -1]
In [61]: linsolve(system, x, y, z)
Out[61]: {(-y - 1, y, 2)}
```
**Note**: Order of solution corresponds the order of given symbols. |
Are haskell type declarations used the same way as python class/function documentation? | 31,549,024 | 4 | 2015-07-21T20:30:51Z | 31,549,711 | 8 | 2015-07-21T21:13:09Z | [
"python",
"haskell",
"type-declaration"
] | I am using the "Learn You a Haskell" tutorial and have reached the type declarations section. I understand that they change the way GHCi gives you an error message, but do they also affect the way the actual function works? If not, is it essentially like Python function documentation written with `""" """` underneath `def someFunction(x):`? (just an example)
Example code:
```
removeNonUppercase :: [Char] -> [Char]
removeNonUppercase st = [ c | c <- st, c `elem` ['A'..'Z']]
```
EDIT: I ask this because the tutorial explains that Haskell is type-inferred at compile time. | Signatures aren't just for documentation (even though they are very useful for that as well). They are enforced by the compiler, which means that by adding signatures you can make the types of your functions more restrictive than they would be otherwise. Toy example:
```
add x y = x + y
addInt :: Int -> Int -> Int
addInt x y = x + y
```
```
*Main> :t add
add :: Num a => a -> a -> a
*Main> add 2 3
5
*Main> add 2.1 3.1
5.2
*Main> :t addInt
addInt :: Int -> Int -> Int
*Main> addInt 2 3
5
*Main> addInt 2.1 3.1 -- addInt will not accept non-Ints.
<interactive>:23:8:
No instance for (Fractional Int) arising from the literal ‘2.1’
In the first argument of ‘addInt’, namely ‘2.1’
In the expression: addInt 2.1 3.1
In an equation for ‘it’: it = addInt 2.1 3.1
```
Besides that, adding type signatures means you will get better (i.e. easier to understand) errors in tricky situations, as the compiler will know what you want to achieve rather than having to guess everything on its own.
There are also situations in which the compiler can't decide the types without the help of some signatures or other type annotations. Perhaps the simplest example is:
```
readAndShow s = show (read s)
```
If you try to use that without specifying any types...
```
Foo.hs:6:17:
No instance for (Show a0) arising from a use of ‘show’
The type variable ‘a0’ is ambiguous
Note: there are several potential instances:
instance (GHC.Arr.Ix a, Show a, Show b) => Show (GHC.Arr.Array a b)
-- Defined in ‘GHC.Arr’
instance Show a => Show (Maybe a) -- Defined in ‘GHC.Show’
instance (Integral a, Show a) => Show (GHC.Real.Ratio a)
-- Defined in ‘GHC.Real’
...plus 26 others
In the expression: show (read s)
In an equation for ‘readAndShow’: readAndShow s = show (read s)
Foo.hs:6:23:
No instance for (Read a0) arising from a use of ‘read’
The type variable ‘a0’ is ambiguous
Note: there are several potential instances:
instance (GHC.Arr.Ix a, Read a, Read b) => Read (GHC.Arr.Array a b)
-- Defined in ‘GHC.Read’
instance Read a => Read (Maybe a) -- Defined in ‘GHC.Read’
instance (Integral a, Read a) => Read (GHC.Real.Ratio a)
-- Defined in ‘GHC.Read’
...plus 25 others
In the first argument of ‘show’, namely ‘(read s)’
In the expression: show (read s)
In an equation for ‘readAndShow’: readAndShow s = show (read s)
Failed, modules loaded: none.
```
... it won't work. `read` converts a `String` to *some* type, and `show` does the opposite. However, if nothing specifies the type of `read s`, the compiler can't tell which type you want to read the `String` as. So you either need to specify the intermediate type...
```
readAndShowAsInt s = show (read s :: Int)
```
```
*Main> readAndShowAsInt "2"
"2"
```
... Or have something else pick the type for you:
```
readAndAdd :: String -> Int -> Int
readAndAdd s y = read s + y
```
```
*Main> readAndAdd "2" 3
5
``` |
Failed building wheel for rpy2 | 31,552,341 | 4 | 2015-07-22T01:21:53Z | 31,588,516 | 7 | 2015-07-23T13:17:01Z | [
"python",
"pip",
"rpy2"
] | I'm running `Python v2.7.8` and `R v3.0.2 (2013-09-25) -- "Frisbee Sailing"` on my elementary OS Freya 64 bit (based on Ubuntu 14.04).
I'm trying to install the `rpy2` package with:
```
pip install rpy2
```
and it fails with the error:
> Failed building wheel for rpy2
Down below I also see:
> error: command 'gcc' failed with exit status 1
The full output is [here](http://pastebin.com/0HdaDbEx).
I've already tried:
```
sudo apt-get install python-dev libevent-dev python3-dev
```
but the error remains.
I know `pip` works because I've already installed `numpy`, `matplotlib`, `scipy`, and other packages using it. | Found the answer in [this post](http://sysads.co.uk/2014/06/install-r-base-3-1-0-ubuntu-14-04/). I just had to upgrade R from `v3.0.2` to the latest version available for my system (`v3.2.1`).
### Step 1: Uninstall Previous R-base installation
```
sudo apt-get remove r-base-core
```
### Step 2: Update Sources.List File
1. Edit the sources.list file
```
sudo gedit /etc/apt/sources.list
```
2. Add following entry (**for Ubuntu 14.04**):
```
deb http://cran.rstudio.com/bin/linux/ubuntu trusty/
```
### Step 3: Add the Public Keys
```
gpg --keyserver keyserver.ubuntu.com --recv-key E084DAB9
gpg -a --export E084DAB9 | sudo apt-key add -
```
### Step 4: Install R-base
```
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install r-base
```
### Step 5: Install rpy2
```
pip install rpy2
``` |
How do I make a class that is also a list? | 31,552,432 | 7 | 2015-07-22T01:33:42Z | 31,552,466 | 10 | 2015-07-22T01:38:33Z | [
"python"
] | I want to create a class in Python with various attributes and methods, but for it to inherit the functionality of a list, so that I can append objects to the object itself, not any of its attributes. I want to be able to say '`graph[3]`', rather than '`graph.node_list[3]`'. Is there a way to do this? | All you really need to do is provide a [`__getitem__`](https://docs.python.org/2/reference/datamodel.html#object.__getitem__)
```
In [1]: class Foo:
...: def __init__(self, *args):
...: self.args = args
...: def __getitem__(self, i):
...: return self.args[i]
...:
In [2]: c = Foo(3,4,5)
In [3]: c[2]
Out[3]: 5
In [4]: c[3]
IndexError: tuple index out of range #traceback removed for brevity
In [5]: for i in c: print(i) #look, ma, I can even use a for-loop!
3
4
5
```
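A slightly fuller sketch along the same lines (the `Graph` class and its names are illustrative), also supporting `len()` and `append`:

```python
class Graph:
    def __init__(self, *nodes):
        self.node_list = list(nodes)

    def __getitem__(self, i):
        return self.node_list[i]

    def __len__(self):
        return len(self.node_list)

    def append(self, node):
        self.node_list.append(node)

graph = Graph('a', 'b', 'c')
graph.append('d')
print(graph[3], len(graph))  # d 4
```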
Addendum: There are other methods you probably want to provide, too. `__len__` is definitely one of them. There's a rather long list of magic methods, I'd recommend going through them and picking the ones that make sense. |
fluphenazine read as \xef\xac\x82uphenazine | 31,553,324 | 5 | 2015-07-22T03:27:57Z | 31,553,581 | 8 | 2015-07-22T03:54:35Z | [
"python",
"unicode"
] | When I write
```
>>> st = "Piperazine (perphenazine, ﬂuphenazine)"
>>> st
'Piperazine (perphenazine, \xef\xac\x82uphenazine)'
```
What is happening? Why doesn't it do this for every `fl`? How do I avoid this?
It looks like \xef\xac\x82 is not, in fact, `fl`. Is there any way to 'translate' this character into `fl` (as the author intended it), without just excluding it via something like
```
unicode(st, errors='ignore').encode('ascii')
``` | This is what is called a "ligature".
In printing, the f and l characters were typeset with a different amount of space between them from what normal pairs of sequential letters used - in fact, the f and l would merge into one character. Other ligatures include "th", "oe", and "st".
That's what you're getting in your input - the "fl" ligature character, UTF-8 encoded. It's a three-byte sequence. I would take minor issue with your assertion that it's "not, in fact `fl`" - it really is, but your input is UTF-8 and not ASCII :-). I'm guessing you pasted from a Word document or an ebook or something that's designed for presentation instead of data fidelity (or perhaps, from the content, it was a LaTeX-generated PDF?).
If you want to handle *this particular case*, you could replace that byte sequence with the ASCII letters "fl". If you want to handle *all such* cases, you will have to use the Unicode Consortium's "UNIDATA" file at: <http://www.unicode.org/Public/UNIDATA/UnicodeData.txt> . In that file, there is a column for the "decomposition" of a character. The f-l ligature has the identifier "LATIN SMALL LIGATURE FL". There is, by the way, a Python module for this data file at <https://docs.python.org/2/library/unicodedata.html> . You want the "decomposition" function:
```
>>> import unicodedata
>>> foo = u"ﬂuphenazine"
>>> unicodedata.decomposition(foo[0])
'<compat> 0066 006C'
```
0066 006C is, of course, ASCII 'f' and 'l'.
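In practice you can let `unicodedata.normalize` apply these compatibility decompositions for you; a short sketch:

```python
import unicodedata

s = u"\ufb02uphenazine"  # LATIN SMALL LIGATURE FL + "uphenazine"
print(unicodedata.normalize("NFKD", s))  # fluphenazine
```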
Be aware that if you're trying to downcast UTF-8 data to ASCII, you're eventually going to have a bad day. There are only 127 ASCII characters, and UTF-8 has millions upon millions of code points. There are many codepoints in UTF-8 that cannot be readily represented as ASCII in a nonconvoluted way - who wants to have some text end up saying "`<TREBLE CLEF> <SNOWMAN> <AIRPLANE> <YELLOW SMILEY FACE>`"? |
Bokeh : save plot but don't show it | 31,562,898 | 9 | 2015-07-22T12:17:20Z | 31,562,899 | 10 | 2015-07-22T12:17:20Z | [
"python",
"bokeh"
] | I am using [Bokeh](http://bokeh.pydata.org) to produce HTML code, including figures, with the `show` method. This method ends by opening the default browser with the HTML opened in it.
I want to save the HTML code without showing it. How can I do that? | The solution is to replace calls to `show` with calls to `save`.
can't compile openssl because of 'cl' is not recognized | 31,563,735 | 2 | 2015-07-22T12:53:18Z | 31,568,246 | 7 | 2015-07-22T15:58:57Z | [
"python",
"visual-studio",
"openssl",
"swig"
] | I am trying to compile the OpenSSL library for a Python script. I am using 64-bit Windows.
I am now following the steps in this link: <https://github.com/dsoprea/M2CryptoWindows>
It worked until I typed the command `nmake -f ms\ntdll.mak` in the Developer Command Prompt for VS 2015.
I am getting this error:
```
'cl' is not recognized as an internal or external command,
operable program or batch file.
NMAKE : fatal error U1077: 'cl' : return code '0x1'
Stop.
```
I looked in the previous posts in this forum. Solutions says add this path to the environmental variables:
```
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin
```
It did not solve the problem. Also, when I navigate the bin directory, I do not see the file they pointed on their solutions which is: `vcvars32.bat`. This is all what I see:
[](http://i.stack.imgur.com/pGxms.png)
Can you help? also I am wondering why VS 2015 was installed in Program Files (86) not the 64 one?
Please, note that I installed the SWIG and added the environmental variable: `C:\grr-build\swigwin-3.0.6`
**Update:**
I did installed the VC++ tools from VS. I am using VS 2015 community edition. I repeated the steps, this time I got different error when I type: `nmake -f ms\ntdll.mak` The error says:
```
C:\grr-build\openssl-1.0.2d>nmake -f ms\ntdll.mak
Microsoft (R) Program Maintenance Utility Version 14.00.23026.0
Copyright (C) Microsoft Corporation. All rights reserved.
C:\grr-build\openssl-1.0.2d>nmake -f ms\ntdll.mak
Microsoft (R) Program Maintenance Utility Version 14.00.23026.0
Copyright (C) Microsoft Corporation. All rights reserved.
Building OpenSSL
perl .\util\copy-if-different.pl ".\crypto\buildinf.h" "tmp32dll\buildin
f.h"
Copying: ./crypto/buildinf.h to tmp32dll/buildinf.h
perl .\util\copy-if-different.pl ".\crypto\opensslconf.h" "inc32\openssl
\opensslconf.h"
NOT copying: ./crypto/opensslconf.h to inc32/openssl/opensslconf.h
ml64 /c /Cp /Cx /Zi /Fotmp32dll\x86_64cpuid.obj tmp32dll\x86_64cpuid.asm
'ml64' is not recognized as an internal or external command,
operable program or batch file.
NMAKE : fatal error U1077: 'ml64' : return code '0x1'
Stop.
C:\grr-build\openssl-1.0.2d>
```
**Update:**
This issue has been resolved after applying the solution posted below, but bear in mind that you need to set the environment variables for nasm. | The C++ tools are not installed by default in Visual Studio 2015. Run the setup again, and install the C++ parts under "`custom`":
[](http://i.stack.imgur.com/TtS5n.png) |
KeyError: SPARK_HOME during SparkConf initialization | 31,566,250 | 2 | 2015-07-22T14:34:54Z | 31,566,695 | 8 | 2015-07-22T14:52:37Z | [
"python",
"apache-spark",
"pyspark"
] | I am a spark newbie and I want to run a Python script from the command line. I have tested pyspark interactively and it works. I get this error when trying to create the sc:
```
File "test.py", line 10, in <module>
conf=(SparkConf().setMaster('local').setAppName('a').setSparkHome('/home/dirk/spark-1.4.1-bin-hadoop2.6/bin'))
File "/home/dirk/spark-1.4.1-bin-hadoop2.6/python/pyspark/conf.py", line 104, in __init__
SparkContext._ensure_initialized()
File "/home/dirk/spark-1.4.1-bin-hadoop2.6/python/pyspark/context.py", line 229, in _ensure_initialized
SparkContext._gateway = gateway or launch_gateway()
File "/home/dirk/spark-1.4.1-bin-hadoop2.6/python/pyspark/java_gateway.py", line 48, in launch_gateway
SPARK_HOME = os.environ["SPARK_HOME"]
File "/usr/lib/python2.7/UserDict.py", line 23, in __getitem__
raise KeyError(key)
KeyError: 'SPARK_HOME'
``` | It seems like there are two problems here.
The first one is the path you use. `SPARK_HOME` should point to the root directory of the Spark installation, so in your case it should probably be `/home/dirk/spark-1.4.1-bin-hadoop2.6`, not `/home/dirk/spark-1.4.1-bin-hadoop2.6/bin`.
The second problem is the way you use `setSparkHome`. If you check [the docstring](https://github.com/apache/spark/blob/3c0156899dc1ec1f7dfe6d7c8af47fa6dc7d00bf/python/pyspark/conf.py#L130) its goal is to
> set path where Spark is installed on worker nodes
`SparkConf` constructor assumes that `SPARK_HOME` on master is already set. [It calls](https://github.com/apache/spark/blob/3c0156899dc1ec1f7dfe6d7c8af47fa6dc7d00bf/python/pyspark/conf.py#L104) `pyspark.context.SparkContext._ensure_initialized` [which calls](https://github.com/apache/spark/blob/49351c7f597c67950cc65e5014a89fad31b9a6f7/python/pyspark/context.py#L234) `pyspark.java_gateway.launch_gateway`, [which tries to acccess](https://github.com/apache/spark/blob/49351c7f597c67950cc65e5014a89fad31b9a6f7/python/pyspark/java_gateway.py#L48) `SPARK_HOME` and fails.
To deal with this you should set `SPARK_HOME` before you create `SparkConf`.
```
import os
os.environ["SPARK_HOME"] = "/home/dirk/spark-1.4.1-bin-hadoop2.6"
conf = (SparkConf().setMaster('local').setAppName('a'))
``` |
Set value for particular cell in pandas DataFrame with iloc | 31,569,384 | 4 | 2015-07-22T16:52:44Z | 31,569,794 | 7 | 2015-07-22T17:14:26Z | [
"python",
"pandas"
] | I have a question similar to [this](http://stackoverflow.com/questions/26657378/how-to-modify-a-value-in-one-cell-of-a-pandas-data-frame) and [this](http://stackoverflow.com/questions/13842088/set-value-for-particular-cell-in-pandas-dataframe). The difference is that I have to select row by position, as I do not know the index.
I want to do something like `df.iloc[0, 'COL_NAME'] = x`, but iloc does not allow this kind of access. If I do `df.iloc[0]['COL_NAME'] = x` the warning about chained indexing appears. | For mixed position and index access, use `.ix`. But you need to make sure that your index is not of integer type, otherwise it will cause confusion.
```
df.ix[0, 'COL_NAME'] = x
```
### Update:
Alternatively, try
```
df.iloc[0, df.columns.get_loc('COL_NAME')] = x
```
Example:
```
import pandas as pd
import numpy as np
# your data
# ========================
np.random.seed(0)
df = pd.DataFrame(np.random.randn(10, 2), columns=['col1', 'col2'], index=np.random.randint(1,100,10)).sort_index()
print(df)
col1 col2
10 1.7641 0.4002
24 0.1440 1.4543
29 0.3131 -0.8541
32 0.9501 -0.1514
33 1.8676 -0.9773
36 0.7610 0.1217
56 1.4941 -0.2052
58 0.9787 2.2409
75 -0.1032 0.4106
76 0.4439 0.3337
# .iloc with get_loc
# ===================================
df.iloc[0, df.columns.get_loc('col2')] = 100
df
col1 col2
10 1.7641 100.0000
24 0.1440 1.4543
29 0.3131 -0.8541
32 0.9501 -0.1514
33 1.8676 -0.9773
36 0.7610 0.1217
56 1.4941 -0.2052
58 0.9787 2.2409
75 -0.1032 0.4106
76 0.4439 0.3337
``` |
AttributeError: '_socketobject' object has no attribute 'set_tlsext_host_name' | 31,576,258 | 11 | 2015-07-23T00:31:32Z | 31,576,259 | 12 | 2015-07-23T00:31:32Z | [
"python",
"ubuntu",
"ssl",
"https",
"python-requests"
] | In python, on a Ubuntu server, I am trying to get the `requests` library to make https requests, like so:
```
import requests
requests.post("https://example.com")
```
At first, I got the following:
> /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl\_.py:90:
> InsecurePlatformWarning: A true SSLContext object is not available.
> This prevents urllib3 from configuring SSL appropriately and may cause
> certain SSL connections to fail. For more information, see
> <https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning>.
After following the advice in this question: [InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately](http://stackoverflow.com/questions/29134512/insecureplatformwarning-a-true-sslcontext-object-is-not-available-this-prevent), I have now upgraded my warning to the following error:
> AttributeError: '\_socketobject' object has no attribute
> 'set\_tlsext\_host\_name'
How do I fix this? | The fix for me was the following:
```
sudo apt-get purge python-openssl
sudo pip install pyopenssl
``` |
Python: Generating all ordered combinations of a list | 31,581,425 | 3 | 2015-07-23T08:00:23Z | 31,581,790 | 7 | 2015-07-23T08:17:51Z | [
"python",
"list",
"order",
"combinations",
"itertools"
] | I'm using Python 2.7.
I'm having a list, and I want all possible ordered combinations.
```
import itertools
stuff = ["a","b","c", "d"]
for L in range(1, len(stuff)+1):
for subset in itertools.combinations(stuff, L):
print( ' '.join(subset))
```
This will give the following output:
```
a
b
c
d
a b
a c <-- not in correct order
a d <-- not in correct order
b c
b d <-- not in correct order
c d
a b c
a b d <-- not in correct order
a c d <-- not in correct order
b c d
a b c d
```
But I want the output only to be combinations that are in the same order as the `stuff` list. E.g. removing `a d`, `b d`, `a b d` and `a c d` since these are not in correct order compared to the `stuff` list `["a", "b", "c", "d"]`.
I've figured out using this instead:
```
import itertools
stuff = ["a","b","c", "d"]
for L in range(1, len(stuff)+1):
for subset in itertools.combinations(stuff, L):
if ' '.join(subset) in ' '.join(stuff): #added line
print( ' '.join(subset))
```
Is giving me the output I wanted:
```
a
b
c
d
a b
b c
c d
a b c
b c d
a b c d
```
But is there any built-in method in Python that does what I want? | I believe what you are looking for are all possible *slices* of your original list. Your desired output translated into slices is this:
```
a # slices[0:1]
b # slices[1:2]
c # slices[2:3]
d # slices[3:4]
a b # slices[0:2]
b c # slices[1:3]
c d # slices[2:4]
a b c # slices[0:3]
b c d # slices[1:4]
a b c d # slices[0:4]
```
So what you should try to produce are those indexes. And if you look closely and sort them, you can see that those are the 2-combinations of numbers between 0 and 4, where the first number is smaller than the other, which is exactly what `itertools.combinations` does for a list of indexes. So we can just generate those:
```
for i, j in itertools.combinations(range(len(stuff) + 1), 2):
print(stuff[i:j])
```
This produces the following output:
```
['a']
['a', 'b']
['a', 'b', 'c']
['a', 'b', 'c', 'd']
['b']
['b', 'c']
['b', 'c', 'd']
['c']
['c', 'd']
['d']
```
The advantage is that this produces actual sublists of your input, and doesn't care whether those were single characters in the first place. It can be any kind of content in a list.
If the output order is of any importance, you can order by the output list size to get the desired result:
```
def getCombinations (lst):
for i, j in itertools.combinations(range(len(lst) + 1), 2):
yield lst[i:j]
for x in sorted(getCombinations(stuff), key=len):
print(' '.join(x))
``` |
Which way is better to skip the 'NoneType' variable? | 31,587,060 | 3 | 2015-07-23T12:14:50Z | 31,587,137 | 7 | 2015-07-23T12:18:12Z | [
"python",
"nonetype"
] | A list contains several `NoneType` elements. To skip the `NoneType`,
```
for item in list :
if item is not None :
fp.write(item + '\n')
#OR
for item in list :
try :
fp.write(item + '\n')
except :
pass
```
Which one is better, and why? | As a general rule of thumb, you should not really be using the `try: except:` pattern for control flow if you can help it. There is some overhead involved with raising an exception that is unnecessary in this context. Hope this helps.
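To make that overhead concrete, here is a small, hypothetical timing sketch (the data and function names are illustrative) comparing the explicit `None` check with the exception-based version. The exception path pays for raising a `TypeError` on every `None` element:

```python
import timeit

# Illustrative data: every third element is None.
items = [None if i % 3 == 0 else str(i) for i in range(1000)]

def with_check():
    out = []
    for item in items:
        if item is not None:          # explicit None check
            out.append(item + '\n')
    return out

def with_try():
    out = []
    for item in items:
        try:
            out.append(item + '\n')   # None + str raises TypeError
        except TypeError:
            pass
    return out

# Both produce the same result; compare their timings.
assert with_check() == with_try()
print(timeit.timeit(with_check, number=1000))
print(timeit.timeit(with_try, number=1000))
```

On most interpreters the `try/except` variant is noticeably slower here, because exceptions are raised for a third of the elements.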
Monkey patching a @property | 31,590,152 | 20 | 2015-07-23T14:21:02Z | 31,591,589 | 12 | 2015-07-23T15:19:05Z | [
"python",
"python-3.x",
"monkeypatching"
] | Is it at all possible to monkey patch the value of a `@property` of an instance of a class that I do not control?
```
class Foo:
@property
def bar(self):
return here().be['dragons']
f = Foo()
print(f.bar) # baz
f.bar = 42 # MAGIC!
print(f.bar) # 42
```
Obviously the above would produce an error when trying to assign to `f.bar`. Is `# MAGIC!` possible in any way? The implementation details of the `@property` are a black box and not indirectly monkey-patchable. The entire method call needs to be replaced. It needs to affect a single instance only (class-level patching is okay if inevitable, but the changed behaviour must only selectively affect a given instance, not all instances of that class). | Subclass the base class (`Foo`) and change a single instance's class to match the new subclass using the `__class__` attribute:
```
>>> class Foo:
... @property
... def bar(self):
... return 'Foo.bar'
...
>>> f = Foo()
>>> f.bar
'Foo.bar'
>>> class _SubFoo(Foo):
... bar = 0
...
>>> f.__class__ = _SubFoo
>>> f.bar
0
>>> f.bar = 42
>>> f.bar
42
``` |
What is the difference between iter(x) and x.__iter__()? | 31,590,858 | 4 | 2015-07-23T14:49:14Z | 31,590,988 | 7 | 2015-07-23T14:54:36Z | [
"python"
] | What is the difference between `iter(x)` and `x.__iter__()`?
From my understanding, they both return a `listiterator` object but, in the below example, I notice that they are not equal:
```
x = [1, 2, 3]
y = iter(x)
z = x.__iter__()
y == z
False
```
Is there something that I am not understanding about iterator objects? | Iterator objects don't have equality based on this sort of thing.
See that `iter(x) == iter(x)` returns `False` as well. This is because the `iter` function (which calls `__iter__`) returns an iterator object that doesn't overload `__eq__` and therefore only returns `True` when the 2 objects are the same.
Without overloading, `==` is the same as the `is` comparison.
Also, `x.__iter__().__class__ is iter(x).__class__` is `True`, showing that, in this case, they return the same type of object.
Python '==' incorrectly returning false | 31,591,914 | 4 | 2015-07-23T15:32:37Z | 31,591,949 | 7 | 2015-07-23T15:34:24Z | [
"python",
"python-3.x"
] | I'm trying to get a difference of two files line by line, and Python is always returning false; even when I do a diff of the same files, Python (almost) always returns false. Goofy example, but it replicates my problem on Python 3.4.3.
```
file1.txt (example)
1
2
3
file1 = r"pathtofile\file1.txt"
file2 = r"pathtofile\file1.txt"
f1 = open(file1, "r")
f2 = open(file2, "r")
for line1 in f1:
found = False
for line2 in f2:
if repr(line1) == repr(line2):
found = True
print("true")
if found == False:
print("false")
```
Python correctly identifies that the first line is the same, but everything after that is false. Can anybody else replicate this? Any ideas? | You have exhausted the iterator after the first iteration over `f2`; you need `file.seek(0)` to go back to the start of the file.
```
for line1 in f1:
found = False
for line2 in f2:
if repr(line1) == repr(line2):
print("true")
f2.seek(0) # reset pointer to start of file
```
You only check the first line of `f1` against the lines of `f2`, after the first loop there is nothing to iterate over.
Depending on what you want to happen, you either need to `break` when you find the line that matches or else reset `found = False` in the inner loop.
If you want all the matching lines then just store the output in a list or if the files are not very big you can use sets to find to find common lines.
```
with open("f1") as f1, open("f2") as f2:
st = set(f1)
common = st.intersection(f2)
```
If you want the difference, use `st.difference(f2)`; for lines in either but not in both, use `st.symmetric_difference(f2)`. It all depends on what you actually want to do.
You might also want to check out [filecmp](https://docs.python.org/2/library/filecmp.html) and [difflib](https://docs.python.org/2/library/difflib.html) |
pandas iloc vs ix vs loc explanation? | 31,593,201 | 73 | 2015-07-23T16:34:10Z | 31,593,712 | 145 | 2015-07-23T16:59:47Z | [
"python",
"pandas",
"indexing"
] | Can someone explain how these three methods of slicing are different?
I've seen [the docs](http://pandas.pydata.org/pandas-docs/stable/indexing.html),
and I've seen [these](http://stackoverflow.com/questions/28757389/loc-vs-iloc-vs-ix-vs-at-vs-iat) [answers](http://stackoverflow.com/questions/27667759/is-ix-always-better-than-loc-and-iloc-since-it-is-faster-and-supports-i), but I still find myself unable to explain how the three are different. To me, they seem interchangeable in large part, because they are at the lower levels of slicing.
For example, say we want to get the first five rows of a `DataFrame`. How is it that all three of these work?
```
df.loc[:5]
df.ix[:5]
df.iloc[:5]
```
Can someone present three cases where the distinction in uses are clearer? | First, a recap:
* `loc` works on *labels* in the index.
* `iloc` works on the *positions* in the index (so it only takes integers).
* `ix` usually tries to behave like `loc` but falls back to behaving like `iloc` if the label is not in the index.
It's important to note some subtleties that can make `ix` slightly tricky to use:
* if the index is of integer type, `ix` will only use label-based indexing and not fall back to position-based indexing. If the label is not in the index, an error is raised.
* if the index does not contain *only* integers, then given an integer, `ix` will immediately use position-based indexing rather than label-based indexing. If however `ix` is given another type (e.g. a string), it can use label-based indexing.
---
To illustrate the differences between the three methods, consider the following Series:
```
>>> s = pd.Series(np.nan, index=[49,48,47,46,45, 1, 2, 3, 4, 5])
>>> s
49 NaN
48 NaN
47 NaN
46 NaN
45 NaN
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
```
Then `s.iloc[:3]` returns the first 3 rows (since it looks at the position) and `s.loc[:3]` returns the first 8 rows (since it looks at the labels):
```
>>> s.iloc[:3]
49 NaN
48 NaN
47 NaN
>>> s.loc[:3]
49 NaN
48 NaN
47 NaN
46 NaN
45 NaN
1 NaN
2 NaN
3 NaN
>>> s.ix[:3] # the integer is in the index so s.ix[:3] works like loc
49 NaN
48 NaN
47 NaN
46 NaN
45 NaN
1 NaN
2 NaN
3 NaN
```
Notice `s.ix[:3]` returns the same Series as `s.loc[:3]` since it looks for the label first rather than going by position (and the index is of integer type).
What if we try with an integer label that isn't in the index (say `6`)?
Here `s.iloc[:6]` returns the first 6 rows of the Series as expected. However, `s.loc[:6]` raises a KeyError since `6` is not in the index.
```
>>> s.iloc[:6]
49 NaN
48 NaN
47 NaN
46 NaN
45 NaN
1 NaN
>>> s.loc[:6]
KeyError: 6
>>> s.ix[:6]
KeyError: 6
```
As per the subtleties noted above, `s.ix[:6]` now raises a KeyError because it tries to work like `loc` but can't find a `6` in the index. Because our index is of integer type it doesn't fall back to behaving like `iloc`.
If, however, our index was of mixed type, given an integer `ix` would behave like `iloc` immediately instead of raising a KeyError:
```
>>> s2 = pd.Series(np.nan, index=['a','b','c','d','e', 1, 2, 3, 4, 5])
>>> s2.index.is_mixed() # index is mix of types
True
>>> s2.ix[:6] # behaves like iloc given integer
a NaN
b NaN
c NaN
d NaN
e NaN
1 NaN
```
Keep in mind that `ix` can still accept non-integers and behave like `loc`:
```
>>> s2.ix[:'c'] # behaves like loc given non-integer
a NaN
b NaN
c NaN
```
---
**General advice**: if you're only indexing using labels, or only indexing using integer positions, stick with `loc` or `iloc` to avoid unexpected results.
If however you have a DataFrame and you want to mix label and positional index types, `ix` lets you do this:
```
>>> df = pd.DataFrame(np.arange(25).reshape(5,5),
index=list('abcde'),
columns=['x','y','z', 8, 9])
>>> df
x y z 8 9
a 0 1 2 3 4
b 5 6 7 8 9
c 10 11 12 13 14
d 15 16 17 18 19
e 20 21 22 23 24
```
Using `ix`, we can slice the rows by label and the columns by position (note that for the columns, `ix` defaults to position-based slicing since the label `4` is not a column name):
```
>>> df.ix[:'c', :4]
x y z 8
a 0 1 2 3
b 5 6 7 8
c 10 11 12 13
``` |
pandas iloc vs ix vs loc explanation? | 31,593,201 | 73 | 2015-07-23T16:34:10Z | 31,594,055 | 22 | 2015-07-23T17:17:27Z | [
"python",
"pandas",
"indexing"
] | Can someone explain how these three methods of slicing are different?
I've seen [the docs](http://pandas.pydata.org/pandas-docs/stable/indexing.html),
and I've seen [these](http://stackoverflow.com/questions/28757389/loc-vs-iloc-vs-ix-vs-at-vs-iat) [answers](http://stackoverflow.com/questions/27667759/is-ix-always-better-than-loc-and-iloc-since-it-is-faster-and-supports-i), but I still find myself unable to explain how the three are different. To me, they seem interchangeable in large part, because they are at the lower levels of slicing.
For example, say we want to get the first five rows of a `DataFrame`. How is it that all three of these work?
```
df.loc[:5]
df.ix[:5]
df.iloc[:5]
```
Can someone present three cases where the distinction in uses are clearer? | `iloc` works based on integer positioning. So no matter what your row labels are, you can always, e.g., get the first row by doing
```
df.iloc[0]
```
or the last five rows by doing
```
df.iloc[-5:]
```
You can also use it on the columns. This retrieves the 3rd column:
```
df.iloc[:, 2] # the : in the first position indicates all rows
```
You can combine them to get intersections of rows and columns:
```
df.iloc[:3, :3] # The upper-left 3 X 3 entries (assuming df has 3+ rows and columns)
```
On the other hand, `.loc` use named indices. Let's set up a data frame with strings as row and column labels:
```
df = pd.DataFrame(index=['a', 'b', 'c'], columns=['time', 'date', 'name'])
```
Then we can get the first row by
```
df.loc['a'] # equivalent to df.iloc[0]
```
and the second two rows of the `'date'` column by
```
df.loc['b':, 'date'] # equivalent to df.iloc[1:, 1]
```
and so on. Now, it's probably worth pointing out that the default row and column indices for a `DataFrame` are integers from 0 and in this case `iloc` and `loc` would work in the same way. This is why your three examples are equivalent. **If you had a non-numeric index such as strings or datetimes,** `df.loc[:5]` **would raise an error.**
Also, you can do column retrieval just by using the data frame's `__getitem__`:
```
df['time'] # equivalent to df.loc[:, 'time']
```
Now suppose you want to mix position and named indexing, that is, indexing using names on rows and positions on columns (to clarify, I mean select from our data frame, rather than creating a data frame with strings in the row index and integers in the column index). This is where `.ix` comes in:
```
df.ix[:2, 'time'] # the first two rows of the 'time' column
```
EDIT:
I think it's also worth mentioning that you can pass boolean vectors to the `loc` method as well. For example:
```
b = [True, False, True]
df.loc[b]
```
Will return the 1st and 3rd rows of `df`. This is equivalent to `df[b]` for selection, but it can also be used for assigning via boolean vectors:
```
df.loc[b, 'name'] = 'Mary', 'John'
``` |
Scrapy spider memory leak | 31,594,090 | 12 | 2015-07-23T17:19:07Z | 32,027,424 | 7 | 2015-08-15T17:36:53Z | [
"python",
"memory-leaks",
"scrapy",
"scrapyd"
] | My spider have a serious memory leak.. After 15 min of run its memory 5gb and scrapy tells (using prefs() ) that there 900k requests objects and thats all. What can be the reason for this high number of living requests objects? Request only goes up and doesnt goes down. All other objects are close to zero.
My spider looks like this:
```
class ExternalLinkSpider(CrawlSpider):
name = 'external_link_spider'
allowed_domains = ['']
start_urls = ['']
rules = (Rule(LxmlLinkExtractor(allow=()), callback='parse_obj', follow=True),)
def parse_obj(self, response):
if not isinstance(response, HtmlResponse):
return
for link in LxmlLinkExtractor(allow=(), deny=self.allowed_domains).extract_links(response):
if not link.nofollow:
yield LinkCrawlItem(domain=link.url)
```
Here output of prefs()
```
HtmlResponse 2 oldest: 0s ago
ExternalLinkSpider 1 oldest: 3285s ago
LinkCrawlItem 2 oldest: 0s ago
Request 1663405 oldest: 3284s ago
```
Memory for 100k scraped pages can hit the 40 GB mark on some sites (for example, at victorinox.com it reaches 35 GB of memory at the 100k scraped pages mark). On others it's much less.
UPD.
[](http://i.stack.imgur.com/hhfPP.png)
[](http://i.stack.imgur.com/4P3V8.png) | There are a few possible issues I see right away.
Before starting though, I wanted to mention that `prefs()` doesn't show the number of requests queued; it shows *the number of Request() objects that are alive*. It's possible to reference a request object and keep it alive, even if it's no longer queued to be downloaded.
I don't really see anything in the code you've provided that would cause this, but you should keep it in mind.
Right off the bat, I'd ask: are you using cookies? If not, sites which pass around a session ID as a GET variable will generate a new session ID for each page visit. You'll essentially continue queuing up the same pages over and over again. For instance, victorinox.com will have something like "jsessionid=18537CBA2F198E3C1A5C9EE17B6C63AD" in its URL string, with the ID changing for every new page load.
Second, you may find that you're hitting a spider trap; that is, a page which just reloads itself with an infinite number of new links. Think of a calendar with a link to "next month" and "previous month". I'm not directly seeing any on victorinox.com, though.
Third, from the provided code your Spider is not constrained to any specific domain. It will extract every link it finds on every page, running `parse_obj` on each one. The main page of victorinox.com, for instance, has a link to <http://www.youtube.com/victorinoxswissarmy>. This will in turn fill up your requests with tons of YouTube links.
You'll need to troubleshoot more to find out exactly what's going on, though.
Some strategies you may want to use:
1. Create a new Downloader Middleware and log all of your requests (to a file, or database). Review the requests for odd behaviour.
2. Limit the Depth to prevent it from continuing down the rabbit hole infinitely.
3. Limit the domain to test if it's still a problem.
If you find you're legitimately just generating too many requests, and memory is an issue, enable the persistent job queue and save the requests to disk instead. I'd recommend against this as a first step, though, as it's more likely your crawler isn't working as you wanted it to.
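To illustrate strategies 2 and 3, here is a hedged sketch of a Scrapy `settings.py` fragment that caps crawl depth, and the on-disk job queue mentioned above. The values are placeholders to adjust for your crawl:

```python
# Hypothetical settings.py fragment for a Scrapy project.

# Strategy 2: stop following links after a fixed number of hops.
DEPTH_LIMIT = 3
DEPTH_STATS_VERBOSE = True  # log request counts per depth level

# Persistent (on-disk) request queue, if memory really is the bottleneck;
# the directory name is just an example.
JOBDIR = 'crawls/external_link_spider-1'

# Strategy 3 lives in the spider class itself, e.g.:
#     allowed_domains = ['victorinox.com']
```

With `JOBDIR` set, queued requests are serialized to disk instead of being held in memory, and the crawl can also be paused and resumed.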
How do I change the figure size for a seaborn.violin_plot? | 31,594,549 | 5 | 2015-07-23T17:43:32Z | 31,597,200 | 7 | 2015-07-23T20:13:37Z | [
"python",
"seaborn"
] | How do I change the size of my image so it's suitable for printing?
For example, I'd like to use A4 paper, whose dimensions are 11.7 inches by 8.27 inches in landscape orientation. | You can set the context to be `poster` or manually set the figure size.
```
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
np.random.seed(0)
n, p = 40, 8
d = np.random.normal(0, 2, (n, p))
d += np.log(np.arange(1, p + 1)) * -5 + 10
# plot
sns.set_style('ticks')
fig, ax = plt.subplots()
# the size of A4 paper
fig.set_size_inches(11.7, 8.27)
sns.violinplot(data=d, inner="points", ax=ax)
sns.despine()
fig.savefig('example.png')
```
[](http://i.stack.imgur.com/uUFRv.png) |
How do I change the figure size for a seaborn.violin_plot? | 31,594,549 | 5 | 2015-07-23T17:43:32Z | 31,597,278 | 9 | 2015-07-23T20:19:13Z | [
"python",
"seaborn"
] | How do I change the size of my image so it's suitable for printing?
For example, I'd like to use A4 paper, whose dimensions are 11.7 inches by 8.27 inches in landscape orientation. | You need to create the matplotlib Figure and Axes objects ahead of time, specifying how big the figure is:
```
import matplotlib.pyplot as plt
import seaborn
import mylib
a4_dims = (11.7, 8.27)
df = mylib.load_data()
fig, ax = plt.subplots(figsize=a4_dims)
seaborn.violinplot(ax=ax, data=df, **violin_options)
``` |
sympy: how to sympify logical "NOT" | 31,596,229 | 4 | 2015-07-23T19:16:31Z | 31,596,361 | 8 | 2015-07-23T19:24:28Z | [
"python",
"sympy"
] | The following code would work to sympify logical expressions:
```
sympify('a&b') # And(a, b)
sympify('a|b') # Or(a, b)
```
But how do I get a result of `Not(a)`? | It turns out the symbol you are looking for is `~`. See the following:
```
>>> from sympy import sympify
>>> sympify('a&b')
And(a, b)
>>> sympify('a|b')
Or(a, b)
>>> sympify('~a')
Not(a)
``` |
How to find out programmatically if a domain name is registered or not | 31,597,125 | 3 | 2015-07-23T20:09:05Z | 31,597,579 | 14 | 2015-07-23T20:38:05Z | [
"python",
"sockets",
"domain-name",
"pywhois"
] | I use [pywhois](https://bitbucket.org/richardpenman/pywhois) to determine if a domain name is registered or not. Here is my source code. (all permutations from `a.net` to `zzz.net`)
```
#!/usr/bin/env python
import whois #pip install python-whois
import string
import itertools
def main():
characters = list(string.ascii_lowercase)
##domain names generator
for r in range(1, 4) :
for name in itertools.permutations(characters, r) : #from 'a.net' to 'zzz.net'
url = ''.join(name) + '.net'
#check if a domain name is registered or not
try :
w = whois.whois(url)
except (whois.parser.PywhoisError): #NOT FOUND
print(url) #unregistered domain names?
if __name__ == '__main__':
main()
```
I got the following results:
```
jv.net
uli.net
vno.net
xni.net
```
However, all of the above domain names have already been registered, so the result is not accurate. **Can anyone explain it?** There are a lot of errors:
```
fgets: Connection reset by peer
connect: No route to host
connect: Network is unreachable
connect: Connection refused
Timeout.
```
There is an alternative way, reported [here](http://stackoverflow.com/questions/11578943/error-when-using-pywhois#comment51145378_11578943).
```
import socket
try:
socket.gethostbyname_ex(url)
except:
print(url) #unregistered domain names?
```
---
In speaking of speed, I use `map` to parallel processing.
```
def select_unregisteredd_domain_names(self, domain_names):
#Parallelism using map
pool = ThreadPool(16) # Sets the pool size
results = pool.map(query_method(), domain_names)
pool.close() #close the pool and wait for the work to finish
pool.join()
return results
``` | This is a tricky problem to solve, trickier than most people realize. The reason for that is that some people don't *want* you to find that out. Most domain registrars apply lots of black magic (i.e. lots of TLD-specific hacks) to get the nice listings they provide, and often they get it wrong. Of course, in the end they will know for sure, since they have [EPP](https://en.wikipedia.org/wiki/Extensible_Provisioning_Protocol) access that will hold the authoritative answer (but it's usually done only when you click "order").
Your first method (whois) used to be a good one, and I did this on a large scale back in the 90s when everything was more open. Nowadays, many TLDs protect this information behind captchas and obstructive web interfaces, and whatnot. If nothing else, there will be quotas on the number of queries per IP. (And it may be for good reason too, I used to get ridiculous amounts of spam to email addresses used for registering domains). Also note that spamming their WHOIS databases with queries is usually in breach of their terms of use and you might get rate limited, blocked, or even get an abuse report to your ISP.
Your second method (DNS) is usually a lot quicker (but don't use gethostbyname, use Twisted or some other async DNS for efficiency). You need to figure out how the response for taken and free domains look like for each TLD. Just because a domain doesn't resolve doesn't mean its free (it could just be unused). And conversely, some TLDs have landing pages for all nonexisting domains. In some cases it will be impossible to determine using DNS alone.
So, how do you solve it? Not with ease, I'm afraid. For each TLD, you need to figure out how to make clever use of DNS and whois databases, starting with DNS and resorting to other means in the tricky cases. Make sure not to flood whois databases with queries.
Another option is to get API access to one of the registrars, they might offer programmatic access to domain search. |
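As a rough sketch of the DNS-first approach (with the caveats above that DNS alone is not authoritative), one might wrap the lookup like this; the function name is illustrative, and the result should be treated as a hint, not a verdict:

```python
import socket

def dns_resolves(domain):
    """Return True if the domain currently resolves in DNS.

    NOTE: a False result does NOT prove the domain is unregistered
    (it may be registered but unused), and some TLDs point every
    nonexistent name at a landing page, so True can mislead too.
    """
    try:
        socket.gethostbyname(domain)
        return True
    except socket.error:  # includes socket.gaierror (name not found)
        return False

print(dns_resolves('localhost'))  # True: resolves via the hosts file
```

Names that pass this filter would then still need a whois (or EPP) check before being reported as free.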
Why list comprehension is much faster than numpy for multiplying arrays? | 31,598,677 | 6 | 2015-07-23T21:50:13Z | 31,598,746 | 9 | 2015-07-23T21:55:35Z | [
"python",
"performance",
"numpy",
"list-comprehension",
"matrix-multiplication"
] | Recently I answered [THIS](http://stackoverflow.com/questions/31596979/multiplication-between-2-lists/31597029#31597029) question, which wanted the multiplication of 2 lists; some user suggested the following way using numpy, alongside mine, which I think is the proper way:
```
(a.T*b).T
```
Also, I found that `array.resize()` has the same performance. Anyway, another answer suggested a solution using list comprehension:
```
[[m*n for n in second] for m, second in zip(b,a)]
```
But after the benchmark I saw that the list comprehension performs much faster than numpy:
```
from timeit import timeit
s1="""
a=[[2,3,5],[3,6,2],[1,3,2]]
b=[4,2,1]
[[m*n for n in second] for m, second in zip(b,a)]
"""
s2="""
a=np.array([[2,3,5],[3,6,2],[1,3,2]])
b=np.array([4,2,1])
(a.T*b).T
"""
print ' first: ' ,timeit(stmt=s1, number=1000000)
print 'second : ',timeit(stmt=s2, number=1000000,setup="import numpy as np")
```
result :
```
first: 1.49778485298
second : 7.43547797203
```
As you can see, numpy is approximately 5 times faster. But the most surprising thing was that it's faster without using transpose, and for the following code:
```
a=np.array([[2,3,5],[3,6,2],[1,3,2]])
b=np.array([[4],[2],[1]])
a*b
```
The list comprehension was still 5 times faster. So besides the point that list comprehensions run in C, here we used 2 nested loops and a `zip` function, so what can be the reason? Is it because of the `*` operation in numpy?
Also note that there is no problem with `timeit` here; I put the `import` part in `setup`.
I also tried it with larger arrays; the difference gets smaller but still doesn't make sense:
```
s1="""
a=[[2,3,5],[3,6,2],[1,3,2]]*10000
b=[4,2,1]*10000
[[m*n for n in second] for m, second in zip(b,a)]
"""
s2="""
a=np.array([[2,3,5],[3,6,2],[1,3,2]]*10000)
b=np.array([4,2,1]*10000)
(a.T*b).T
"""
print ' first: ' ,timeit(stmt=s1, number=1000)
print 'second : ',timeit(stmt=s2, number=1000,setup="import numpy as np")
```
result :
```
first: 10.7480301857
second : 13.1278889179
``` | Creation of numpy arrays is much slower than creation of lists:
```
In [153]: %timeit a = [[2,3,5],[3,6,2],[1,3,2]]
1000000 loops, best of 3: 308 ns per loop
In [154]: %timeit a = np.array([[2,3,5],[3,6,2],[1,3,2]])
100000 loops, best of 3: 2.27 µs per loop
```
There can also be fixed costs incurred by NumPy function calls before the meat
of the calculation can be performed by a fast underlying C/Fortran function. This can include ensuring the inputs are NumPy arrays.
These setup/fixed costs are something to keep in mind before assuming NumPy
solutions are inherently faster than pure-Python solutions. NumPy shines when
you set up *large* arrays *once* and then perform *many* fast NumPy operations
on the arrays. It may fail to outperform pure Python if the arrays are small
because the setup cost can outweigh the benefit of offloading the calculations
to compiled C/Fortran functions. For small arrays there simply may not be enough
calculations to make it worth it.
---
If you increase the size of the arrays a bit, and move creation of the arrays
into the setup, then NumPy can be much faster than pure Python:
```
import numpy as np
from timeit import timeit
N, M = 300, 300
a = np.random.randint(100, size=(N,M))
b = np.random.randint(100, size=(N,))
a2 = a.tolist()
b2 = b.tolist()
s1="""
[[m*n for n in second] for m, second in zip(b2,a2)]
"""
s2 = """
(a.T*b).T
"""
s3 = """
a*b[:,None]
"""
assert np.allclose([[m*n for n in second] for m, second in zip(b2,a2)], (a.T*b).T)
assert np.allclose([[m*n for n in second] for m, second in zip(b2,a2)], a*b[:,None])
print 's1: {:.4f}'.format(
timeit(stmt=s1, number=10**3, setup='from __main__ import a2,b2'))
print 's2: {:.4f}'.format(
timeit(stmt=s2, number=10**3, setup='from __main__ import a,b'))
print 's3: {:.4f}'.format(
timeit(stmt=s3, number=10**3, setup='from __main__ import a,b'))
```
yields
```
s1: 4.6990
s2: 0.1224
s3: 0.1234
``` |
Spark mllib predicting weird number or NaN | 31,599,400 | 4 | 2015-07-23T22:53:25Z | 31,610,206 | 7 | 2015-07-24T12:09:26Z | [
"python",
"apache-spark",
"pyspark",
"apache-spark-mllib",
"gradient-descent"
] | I am new to Apache Spark and trying to use the machine learning library to predict some data. My dataset right now is only about 350 points. Here are 7 of those points:
```
"365","4",41401.387,5330569
"364","3",51517.886,5946290
"363","2",55059.838,6097388
"362","1",43780.977,5304694
"361","7",46447.196,5471836
"360","6",50656.121,5849862
"359","5",44494.476,5460289
```
Here's my code:
```
def parsePoint(line):
split = map(sanitize, line.split(','))
rev = split.pop(-2)
return LabeledPoint(rev, split)
def sanitize(value):
return float(value.strip('"'))
parsedData = textFile.map(parsePoint)
model = LinearRegressionWithSGD.train(parsedData, iterations=10)
print model.predict(parsedData.first().features)
```
The prediction is something totally crazy, like `-6.92840330273e+136`. If I don't set iterations in `train()`, then I get `nan` as a result. What am I doing wrong? Is it my data set (the size of it, maybe?) or my configuration? | The problem is that `LinearRegressionWithSGD` uses stochastic gradient descent (SGD) to optimize the weight vector of your linear model. SGD is really sensitive to the provided `stepSize` which is used to update the intermediate solution.
What SGD does is to calculate the gradient `g` of the cost function given a sample of the input points and the current weights `w`. In order to update the weights `w` you go for a certain distance in the opposite direction of `g`. The distance is your step size `s`.
```
w(i+1) = w(i) - s * g
```
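To see why too large a step makes this update rule diverge, here's a toy 1-D gradient descent on `f(w) = (w - 3)**2` (plain Python for illustration, not Spark code):

```python
def gd(step, iters=10, w=0.0):
    # gradient of f(w) = (w - 3)**2 is 2 * (w - 3)
    for _ in range(iters):
        w = w - step * 2 * (w - 3)
    return w

print(gd(0.1))  # creeps toward the minimum at w = 3
print(gd(1.5))  # overshoots further on every step and blows up
```

With unscaled features in the tens of thousands like yours, gradient magnitudes are large, so a unit step overshoots in exactly this way.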
Since you're not providing an explicit step size, MLlib assumes `stepSize = 1`. This does not work for your use case. I'd recommend trying different step sizes, usually lower values, to see how `LinearRegressionWithSGD` behaves:
```
LinearRegressionWithSGD.train(parsedData, iterations=10, step=0.001)
``` |
Python 2.7.10 error "from urllib.request import urlopen" no module named request | 31,601,238 | 4 | 2015-07-24T02:28:23Z | 31,601,343 | 13 | 2015-07-24T02:43:28Z | [
"python",
"urllib"
] | I opened python code from `github`. I assumed it was `python2.x` and got the above error when I tried to run it. From the reading I've seen, Python 3 has deprecated `urllib` itself and replaced it with a number of libraries including `urllib.request`.
It looks like the code was written in python 3 (a confirmation from someone who knows would be appreciated.)
At this point I don't want to move to Python 3 - I haven't researched what it would do to my existing code.
Thinking there should be a `urllib` module for `Python 2`, I searched Google (using "python2 urllib download") and did not find one. (It might have been hidden in the many answers since `urllib` includes downloading functionality.) I looked in my `Python27/lib` directory and didn't see it there.
Can I get a version of this module that runs on `Python27`? Where and how? | Try using urllib2:
<https://docs.python.org/2/library/urllib2.html>
This line should work to replace urlopen:
```
from urllib2 import urlopen
```
Tested in Python 2.7 on Macbook Pro
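If the code needs to run on both Python 2 and 3, a common import shim (my addition, not part of the original answer) is:

```python
try:
    # Python 3
    from urllib.request import urlopen
except ImportError:
    # Python 2
    from urllib2 import urlopen
```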
Try posting a link to the git in question. |
How can I represent this regex to not get a "bad character range" error? | 31,603,075 | 3 | 2015-07-24T05:47:21Z | 31,605,097 | 11 | 2015-07-24T07:56:49Z | [
"python",
"regex"
] | Is there a better way to do this?
```
$ python
Python 2.7.9 (default, Jul 16 2015, 14:54:10)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-55)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import re
>>> re.sub(u'[\U0001d300-\U0001d356]', "", "")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/fast/services/lib/python2.7/re.py", line 155, in sub
return _compile(pattern, flags).sub(repl, string, count)
File "/home/fast/services/lib/python2.7/re.py", line 251, in _compile
raise error, v # invalid expression
sre_constants.error: bad character range
``` | ## Python narrow and wide build (Python versions below 3.3)
The error suggests that you are using a "narrow" (UCS-2) build, which only supports Unicode code points up to 65535 as one "Unicode character"1. Characters whose code points are above 65535 are represented as surrogate pairs, which means that the Unicode string `u'\U0001d300'` consists of two "Unicode characters" in a narrow build.
```
Python 2.7.8 (default, Jul 25 2014, 14:04:36)
[GCC 4.8.3] on cygwin
>>> import sys; sys.maxunicode
65535
>>> len(u'\U0001d300')
2
>>> [hex(ord(i)) for i in u'\U0001d300']
['0xd834', '0xdf00']
```
In a "wide" (UCS-4) build, all 1,114,112 code points (U+0000 through U+10FFFF) are recognized as Unicode characters, so the Unicode string `u'\U0001d300'` consists of exactly one "Unicode character"/Unicode code point.
```
Python 2.6.6 (r266:84292, May 1 2012, 13:52:17)
[GCC 4.4.6 20110731 (Red Hat 4.4.6-3)] on linux2
>>> import sys; sys.maxunicode
1114111
>>> len(u'\U0001d300')
1
>>> [hex(ord(i)) for i in u'\U0001d300']
['0x1d300']
```
1 I use "Unicode character" (in quotes) to refer to one character in a Python Unicode string, not one Unicode code point. The number of "Unicode characters" in a string is the `len()` of the string. In a "narrow" build, one "Unicode character" is a 16-bit code unit of UTF-16, so one astral character will appear as two "Unicode characters". In a "wide" build, one "Unicode character" always corresponds to one Unicode code point.
## Matching astral plane characters with regex
### Wide build
The regex in the question compiles correctly in "wide" build:
```
Python 2.6.6 (r266:84292, May 1 2012, 13:52:17)
[GCC 4.4.6 20110731 (Red Hat 4.4.6-3)] on linux2
>>> import re; re.compile(u'[\U0001d300-\U0001d356]', re.DEBUG)
in
range (119552, 119638)
<_sre.SRE_Pattern object at 0x7f9f110386b8>
```
### Narrow build
However, the same regex won't work in "narrow" build, since the engine does not recognize surrogate pairs. It just treats `\ud834` as one character, then tries to create a character range from `\udf00` to `\ud834` and fails.
```
Python 2.7.8 (default, Jul 25 2014, 14:04:36)
[GCC 4.8.3] on cygwin
>>> [hex(ord(i)) for i in u'[\U0001d300-\U0001d356]']
['0x5b', '0xd834', '0xdf00', '0x2d', '0xd834', '0xdf56', '0x5d']
```
The workaround is to use the [same method as done in ECMAScript](http://stackoverflow.com/questions/28896329/regex-to-match-egyptian-hieroglyphics/28918460#28918460), where we will construct the regex to match the surrogates representing the code point.
```
Python 2.7.8 (default, Jul 25 2014, 14:04:36)
[GCC 4.8.3] on cygwin
>>> import re; re.compile(u'\ud834[\udf00-\udf56]', re.DEBUG)
literal 55348
in
range (57088, 57174)
<_sre.SRE_Pattern object at 0x6ffffe52210>
>>> input = u'Sample \U0001d340. Another \U0001d305. Leave alone \U00011000'
>>> input
u'Sample \U0001d340. Another \U0001d305. Leave alone \U00011000'
>>> re.sub(u'\ud834[\udf00-\udf56]', '', input)
u'Sample . Another . Leave alone \U00011000'
```
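The surrogate values used above can be computed with a small helper (my sketch; the arithmetic follows the standard UTF-16 encoding rules):

```python
def surrogate_pair(cp):
    # split an astral code point (>= 0x10000) into a UTF-16 surrogate pair
    cp -= 0x10000
    return 0xD800 + (cp >> 10), 0xDC00 + (cp & 0x3FF)

print([hex(x) for x in surrogate_pair(0x1D300)])  # ['0xd834', '0xdf00']
print([hex(x) for x in surrogate_pair(0x1D356)])  # ['0xd834', '0xdf56']
```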
### Using [regexpu](https://mothereff.in/regexpu) to derive astral plane regex for Python narrow build
Since the construction to match astral plane characters in Python narrow build is the same as ES5, you can use regexpu, a tool to convert RegExp literal in ES6 to ES5, to do the conversion for you.
Just paste the *equivalent* regex in ES6 (note the `u` flag and `\u{hh...h}` syntax):
```
/[\u{1d300}-\u{1d356}]/u
```
and you get back the regex which can be used in both Python narrow build and ES5
```
/(?:\uD834[\uDF00-\uDF56])/
```
Do take note to remove the delimiter `/` in JavaScript RegExp literal when you want to use the regex in Python.
The tool is extremely useful when the range spread across multiple high surrogates (U+D800 to U+DBFF). For example, if we have to match the character range
```
/[\u{105c0}-\u{1cb40}]/u
```
The equivalent regex in Python narrow build and ES5 is
```
/(?:\uD801[\uDDC0-\uDFFF]|[\uD802-\uD831][\uDC00-\uDFFF]|\uD832[\uDC00-\uDF40])/
```
which is rather complex and error-prone to derive.
## Python version 3.3 and above
[Python 3.3](https://docs.python.org/3/whatsnew/3.3.html#functionality) implements [PEP 393](http://www.python.org/dev/peps/pep-0393), which eliminates the distinction between narrow build and wide build, and Python from now behaves like a wide build. This eliminates the problem in the question altogether.
## Compatibility issues
While it's possible to workaround and match astral plane characters in Python narrow builds, going forward, it's best to change the execution environment by using Python wide builds, or port the code to use with Python 3.3 and above.
The regex code for narrow build is hard to read and maintain for average programmers, and it has to be completely rewritten when porting to Python 3.
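A quick runtime check for the build type (my addition; on Python 3.3+ it always reports wide):

```python
import sys

# narrow builds cap sys.maxunicode at 0xFFFF; wide builds (and 3.3+) at 0x10FFFF
build = "narrow" if sys.maxunicode == 0xFFFF else "wide"
print(build, hex(sys.maxunicode))
```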
### Reference
* [How to find out if Python is compiled with UCS-2 or UCS-4?](http://stackoverflow.com/questions/1446347/how-to-find-out-if-python-is-compiled-with-ucs-2-or-ucs-4) |
Jupyter (IPython) notebook not plotting | 31,609,600 | 8 | 2015-07-24T11:38:11Z | 31,611,678 | 19 | 2015-07-24T13:22:33Z | [
"python",
"pandas",
"ipython"
] | I installed anaconda to use pandas and scipy. I reading and watching pandas tutorials and they all say to open the ipython notebook using
```
ipython notebook --pylab==inline
```
but when I do that I get a message saying
```
"Support for specifying --pylab on the command line has been removed. Please use '%pylab = inline' or '%matplotlib =inline' in the notebook itself"
```
But that does not work. Then when I try "plot(arange(10))" I get a message saying "name 'plot' is not defined." I tried plotting data from a .csv file and got
```
"matplotlib.axes._subplots.AxesSubplot at 0xebf8b70".
```
What should I do? | I believe the pylab magic was removed when they transitioned from IPython to a more general Jupyter notebook.
Try:
```
%matplotlib inline
```
Also when you get a message like:
```
"matplotlib.axes._subplots.AxesSubplot at 0xebf8b70".
```
That's just IPython displaying the object. You need to tell IPython to display it. Hence the matplotlib inline magic.
How to copy/paste DataFrame from StackOverflow into Python | 31,610,889 | 11 | 2015-07-24T12:45:23Z | 31,610,890 | 12 | 2015-07-24T12:45:23Z | [
"python",
"pandas"
] | In [questions](http://stackoverflow.com/q/17729853/2071807) and [answers](http://stackoverflow.com/a/31609618/2071807), users very often post an example `DataFrame` which their question/answer works with:
```
In []: x
Out[]:
bar foo
0 4 1
1 5 2
2 6 3
```
It'd be really useful to be able to get this `DataFrame` into my Python interpreter so I can start debugging the question, or testing the answer.
How can I do this? | Pandas is written by people that really know what people want to do.
[Since version `0.13`](http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#id45) there's a function [`pd.read_clipboard`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_clipboard.html) which is absurdly effective at making this "just work".
Copy and paste the part of the code in the question that starts `bar foo`, (i.e. the DataFrame) and do this in a Python interpreter:
```
In [53]: import pandas as pd
In [54]: df = pd.read_clipboard()
In [55]: df
Out[55]:
bar foo
0 4 1
1 5 2
2 6 3
```
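If no clipboard is available (e.g. over SSH on a headless server), the same effect can be had by pasting the text into a string and using `pd.read_csv` with a whitespace separator — a workaround of mine, not part of `read_clipboard` itself:

```python
import io
import pandas as pd

pasted = """\
   bar  foo
0    4    1
1    5    2
2    6    3
"""
# sep='\s+' splits on runs of whitespace, just like read_clipboard's default;
# the extra field in the data rows makes pandas use the first column as the index
df = pd.read_csv(io.StringIO(pasted), sep=r"\s+")
print(df)
```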
# Caveats
* Don't include the iPython `In` or `Out` stuff or it won't work
* If you have a named index, you currently need to add `engine='python'` (see [this issue](https://github.com/pydata/pandas/issues/6893) on GitHub). The 'c' engine is currently broken when the index is named.
* It's not brilliant at MultiIndexes:
Try this:
```
0 1 2
level1 level2
foo a 0.518444 0.239354 0.364764
b 0.377863 0.912586 0.760612
bar a 0.086825 0.118280 0.592211
```
which doesn't work at all, or this:
```
0 1 2
foo a 0.859630 0.399901 0.052504
b 0.231838 0.863228 0.017451
bar a 0.422231 0.307960 0.801993
```
Which works, but returns something totally incorrect! |
os.path.exists() gives false positives in %appdata% on Windows | 31,617,936 | 2 | 2015-07-24T19:02:04Z | 31,617,979 | 8 | 2015-07-24T19:04:37Z | [
"python",
"windows",
"python-3.x",
"file-io"
] | I'm trying to make my game project not save in its own directory like it's 1995 or something.
The standard library isn't cooperating.
Basically, I'm trying to save in `%appdata%\MYGAMENAME\` (this is the value of \_savedir on win32.) `open()` will become understandably upset if such a folder does not exist, so I use `os.path.exists()` to check if it does indeed exist and create it if it does not.
Trouble is, `os.path.exists()` returns True, but I can look in the folder and confirm that it does not. It also doesn't return True if I try it in the REPL; only here (I've confirmed with my debugger that it does).
The pickling step appears to proceed properly; it jumps to the `else:` clause immediately after. But I can confirm with the OS filesystem browser and the REPL that neither folder nor file exist!
Here's the full function source (don't laugh!):
```
def save(self):
    "Save the game."
    #Eh, ____ it, just pickle gamestate. What could go wrong?
    save_path=os.path.join(_savedir,"save.sav")
    temporary_save_path=os.path.join(_savedir,"new_save.sav")
    #Basically, we save to a temporary save, then if we succeed we copy it over the old one.
    #If anything goes wrong, we just give up and the old save is untouched. Either way we delete the temp save.
    if not os.path.exists(_savedir):
        print("Creating",_savedir)
        os.makedirs(_savedir)
    else:
        print(_savedir,"exists!")
    try:
        pickle.dump(self,open(temporary_save_path,"wb"),protocol=pickle.HIGHEST_PROTOCOL)
    except Exception as e:
        print("Save failed: {0}".format(e))
        print("The game can continue, and your previous save is still intact.")
    else:
        shutil.move(temporary_save_path,save_path)
    finally:
        try:
            os.remove(temporary_save_path)
        except Exception:
            pass
```
(Yeah, catching `Exception` is usually inadvisable, but I want things to fail gracefully if ANYTHING goes wrong, there's no case where an actual exception would come up there and I'd want to do anything else.)
What could be the issue here? | Python does not expand the value of `%appdata%`. Instead, a literal directory is created relative to the current working directory. Run `print(os.path.abspath(_savedir))`, that is where the file is created and exists.
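A quick cross-platform demonstration of the problem (my sketch; `%VAR%` expansion is Windows-only):

```python
import os

raw = os.path.join("%appdata%", "MYGAMENAME")
# without expansion this is a literal "%appdata%" folder under the current working dir
print(os.path.abspath(raw))
# os.path.expandvars() expands %VAR% on Windows but leaves it untouched elsewhere,
# which is why reading os.environ['APPDATA'] explicitly is the clearer fix
print(os.path.expandvars("%appdata%"))
```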
Use `os.environ['APPDATA']` to create an absolute path to the application data directory:
```
_savedir = os.path.join(os.environ['APPDATA'], 'MYGAMENAME')
``` |
AttributeError: 'module' object has no attribute 'ORB' | 31,630,559 | 7 | 2015-07-25T19:58:39Z | 31,632,268 | 16 | 2015-07-26T00:01:18Z | [
"python",
"opencv",
"opencv3.0"
] | when I run my python code
```
import numpy as np
import cv2
import matplotlib.pyplot as plt
img1 = cv2.imread('/home/shar/home.jpg',0) # queryImage
img2 = cv2.imread('/home/shar/home2.jpg',0) # trainImage
# Initiate SIFT detector
orb = cv2.ORB()
# find the keypoints and descriptors with SIFT
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)
# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Match descriptors.
matches = bf.match(des1,des2)
# Sort them in the order of their distance.
matches = sorted(matches, key = lambda x:x.distance)
# Draw first 10 matches.
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10], flags=2)
plt.imshow(img3),plt.show()
```
I get this error:
```
AttributeError: 'module' object has no attribute 'ORB'
```
I am using python3 and opencv3 | I found this also. I checked the actual contents of the `cv2` module and found `ORB_create()` rather than `ORB()`
Use the line
```
orb = cv2.ORB_create()
```
instead of `orb = cv2.ORB()` and it will work.
Verified on Python 3.4, OpenCV 3 on Windows, using the OpenCV test data set `box.png` and `box_in_scene.png` with the following results. ***Note*** you have to put in `None` for `outImg` in the line `img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10], flags=2)` also - see [my answer](http://stackoverflow.com/a/31631995/838992) to your other question.
[](http://i.stack.imgur.com/Fwtyg.png) |
TypeError: Required argument 'outImg' (pos 6) not found | 31,631,352 | 5 | 2015-07-25T21:33:04Z | 31,631,995 | 7 | 2015-07-25T23:11:04Z | [
"python",
"opencv",
"opencv3.0"
] | When I run my python code
```
import numpy as np
import cv2
import matplotlib.pyplot as plt
img1 = cv2.imread('/home/shar/home.jpg',0) # queryImage
img2 = cv2.imread('/home/shar/home2.jpg',0) # trainImage
# Initiate SIFT detector
sift = cv2.xfeatures2d.SIFT_create()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
# BFMatcher with default params
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1,des2, k=2)
# Apply ratio test
good = []
for m,n in matches:
if m.distance < 0.75*n.distance:
good.append([m])
# cv2.drawMatchesKnn expects list of lists as matches.
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,good,flags=2)
plt.imshow(img3),plt.show()
```
From this line
```
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,good,flags=2)
```
I get this error
```
TypeError: Required argument 'outImg' (pos 6) not found
```
I am using python3 and opencv3 | You seem to be [following this tutorial page](http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.html) (based on the code you've shown in this and your two related questions [1](http://stackoverflow.com/questions/31628505/matplotlib-importerror-cannot-import-name-pyplot#comment51209393_31628505), [2](http://stackoverflow.com/questions/31630559/attributeerror-module-object-has-no-attribute-orb)).
The [function documentation is here](http://docs.opencv.org/3.0-beta/modules/features2d/doc/drawing_function_of_keypoints_and_matches.html#drawmatches) (although I note it is still labelled "beta") and implies that `outImg` is optional. However, the python error message is explicit - an argument is required in position 6, and it is named `outImg` in the function signature. I suspect the documentation may not exactly match the code requirements. It ***appears*** that the signature of the [C++ code](https://github.com/Itseez/opencv/blob/master/modules/features2d/include/opencv2/features2d.hpp#L1111) that the python binding is calling has no default value for `outImg`, so that argument needs to be supplied.
Note that you can inspect the doc string for the actual binding in the python3 interpreter (if it exists) by looking at `<function_name>.__doc__`. In this case, you can see that `outImg` is ***not*** shown as optional. Here is the output from my installation:
```
>>> cv2.drawMatchesKnn.__doc__
'drawMatchesKnn(img1, keypoints1, img2, keypoints2, matches1to2, outImg[, matchC
olor[, singlePointColor[, matchesMask[, flags]]]]) -> outImg'
```
### Solution (note - verified on a windows install, not Linux)
You might note the [last example on that tutorial](http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.html#flann-based-matcher), which uses the following code - passing in `None` in the place of `outImg`. I think that will work for your case also.
```
draw_params = dict(matchColor = (0,255,0),
singlePointColor = (255,0,0),
matchesMask = matchesMask,
flags = 0)
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,matches,None,**draw_params)
```
You don't need to pass all the `draw_params` dict, you could try just passing `flags` i.e.
```
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,matches,None,flags=2)
```
I have verified this on a fresh install of OpenCV 3 (albeit on Windows, using a prebuilt binary) |
Label axes on Seaborn barplot | 31,632,637 | 9 | 2015-07-26T01:08:48Z | 31,632,745 | 17 | 2015-07-26T01:28:00Z | [
"python",
"matplotlib",
"seaborn"
] | I'm trying to use my own labels for a Seaborn barplot with the following code:
```
import pandas as pd
import seaborn as sns
fake = pd.DataFrame({'cat': ['red', 'green', 'blue'], 'val': [1, 2, 3]})
fig = sns.barplot(x = 'val', y = 'cat',
data = fake,
color = 'black')
fig.set_axis_labels('Colors', 'Values')
```
[](http://i.stack.imgur.com/kZFNu.png)
However I get an error that:
```
AttributeError: 'AxesSubplot' object has no attribute 'set_axis_labels'
```
What gives? | Seaborn's barplot returns an axis-object (not a figure). This means you can do the following:
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
fake = pd.DataFrame({'cat': ['red', 'green', 'blue'], 'val': [1, 2, 3]})
ax = sns.barplot(x = 'val', y = 'cat',
data = fake,
color = 'black')
ax.set(xlabel='common xlabel', ylabel='common ylabel')
plt.show()
``` |
python elasticsearch client set mappings during create index | 31,635,828 | 12 | 2015-07-26T10:18:41Z | 31,638,685 | 17 | 2015-07-26T15:24:41Z | [
"python",
"pyelasticsearch",
"elasticsearch-py"
] | I can set mappings of index being created in curl command like this:
```
{
"mappings":{
"logs_june":{
"_timestamp":{
"enabled":"true"
},
"properties":{
"logdate":{
"type":"date",
"format":"dd/MM/yyy HH:mm:ss"
}
}
}
}
}
```
But I need to create that index with elasticsearch client in python and set mappings.. what is the way ? I tried somethings below but not work:
```
self.elastic_con = Elasticsearch([host], verify_certs=True)
self.elastic_con.indices.create(index="accesslog", ignore=400)
params = "{\"mappings\":{\"logs_june\":{\"_timestamp\": {\"enabled\": \"true\"},\"properties\":{\"logdate\":{\"type\":\"date\",\"format\":\"dd/MM/yyy HH:mm:ss\"}}}}}"
self.elastic_con.indices.put_mapping(index="accesslog",body=params)
``` | You can simply add the mapping in the `create` call like this:
```
from elasticsearch import Elasticsearch
self.elastic_con = Elasticsearch([host], verify_certs=True)
mapping = '''
{
"mappings":{
"logs_june":{
"_timestamp":{
"enabled":"true"
},
"properties":{
"logdate":{
"type":"date",
"format":"dd/MM/yyy HH:mm:ss"
}
}
}
}
}'''
self.elastic_con.indices.create(index='test-index', ignore=400, body=mapping)
``` |
Is Python 3.5's grammar LL(1)? | 31,637,435 | 2 | 2015-07-26T13:21:35Z | 31,637,664 | 7 | 2015-07-26T13:45:56Z | [
"python",
"parsing",
"compiler-construction",
"grammar"
] | I saw <http://matt.might.net/teaching/compilers/spring-2015/> saying Python 3.4 is LL(1)
Is Python 3.5's grammar still LL(1) so one can write a recursive descent parser? | Yes. This is a deliberate language feature, and not just something that happened to be the case. [PEP 3099](https://www.python.org/dev/peps/pep-3099/) explicitly *rejected* any changes to this for the Python 2 -> 3 transition (a notably bigger transition than any 3.x -> 3.y will be):
> * The parser won't be more complex than LL(1).
>
> Simple is better than complex. This idea extends to the parser. Restricting Python's grammar to an LL(1) parser is a blessing, not a
> curse. It puts us in handcuffs that prevent us from going overboard
> and ending up with funky grammar rules like some other dynamic
> languages that will go unnamed, such as Perl. |
What's the difference between homebrew python and caskroom python? | 31,639,883 | 4 | 2015-07-26T17:28:24Z | 31,639,943 | 8 | 2015-07-26T17:33:09Z | [
"python",
"homebrew",
"homebrew-cask"
] | The page [Installing Python on Mac OS X](http://docs.python-guide.org/en/latest/starting/install/osx/) suggests that the OS X version is OK for learning but not great for writing real programs; solution - install from Homebrew.
I don't think the caskroom existed when they wrote this page though. Basically, I just want to install the most optimal version for doing Python programming.
When I do a search I get this output:
```
$ brew search python
boost-python gst-python python python3 wxpython zpython
Caskroom/cask/mod_python homebrew/python/vpython Caskroom/cask/python
homebrew/python/python-dbus homebrew/versions/gst-python010 Caskroom/cask/python3
```
This is what homebrew reports:
```
macosx-10-9:~ vagrant$ brew cask info python
python: 2.7.9
Python
https://www.python.org/
Not installed
https://github.com/caskroom/homebrew-cask/blob/master/Casks/python.rb
==> Contents
python-2.7.9-macosx10.6.pkg (pkg)
macosx-10-9:~ vagrant$ brew info python
python: stable 2.7.10 (bottled), HEAD
Interpreted, interactive, object-oriented programming language
https://www.python.org
/usr/local/Cellar/python/2.7.10_2 (4906 files, 77M) *
Poured from bottle
From: https://github.com/Homebrew/homebrew/blob/master/Library/Formula/python.rb
```
So, what's the effective difference between these two packages? | Caskroom python installs the Python Mac OS X packages from <https://www.python.org/downloads/mac-osx/> as they are provided there.
`brew install python` will install from source and under `/usr/local/Cellar/python/...` and properly symlink `/usr/local/bin/python`.
The latter is the "proper homebrew approach" (TM) and will allow updates with `brew upgrade`. |
Python - list comprehension in this case is efficient? | 31,640,246 | 12 | 2015-07-26T18:05:17Z | 31,640,292 | 14 | 2015-07-26T18:09:28Z | [
"python"
] | The is the input "dirty" list in python
```
input_list = [' \n ',' data1\n ',' data2\n',' \n','data3\n'.....]
```
each list element contains either empty spaces with new line chars or data with newline chars
Cleaned it up using the below code..
```
cleaned_up_list = [data.strip() for data in input_list if data.strip()]
```
gives
```
cleaned_up_list = ['data1','data2','data3','data4'..]
```
Does python internally call `strip()` twice during the above list comprehension? or would i have to use a `for` loop iteration and `strip()` just once if i cared about efficiency?
```
for data in input_list:
    clean_data = data.strip()
    if(clean_data):
        cleaned_up_list.append(clean_data)
``` | Using your list comp strip *is* called twice, use a gen exp if you want to only call strip once and keep the comprehension:
```
input_list[:] = [x for x in (s.strip() for s in input_list) if x]
```
Input:
```
input_list = [' \n ',' data1\n ',' data2\n',' \n','data3\n']
```
Output:
```
['data1', 'data2', 'data3']
```
`input_list[:]` will change the original list which may or may not be what you want, if you actually want to create a new list just use `cleaned_up_list = ...`.
I always found using `itertools.imap` in python 2 and `map` in python 3 instead of the generator to be the most efficient for larger inputs:
```
from itertools import imap
input_list[:] = [x for x in imap(str.strip, input_list) if x]
```
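On Python 3, `itertools.imap` is gone and the built-in `map` and `filter` are already lazy, so the same idea reads (a Python 3 translation of mine, not part of the timings below):

```python
input_list = [' \n ', ' data1\n ', ' data2\n', ' \n', 'data3\n']
# filter(None, ...) drops the falsy (empty) strings left after stripping
cleaned_up_list = list(filter(None, map(str.strip, input_list)))
print(cleaned_up_list)  # ['data1', 'data2', 'data3']
```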
Some timings with different approaches:
```
In [17]: input_list = [choice(input_list) for _ in range(1000000)]
In [19]: timeit filter(None, imap(str.strip, input_list))
10 loops, best of 3: 115 ms per loop
In [20]: timeit list(ifilter(None,imap(str.strip,input_list)))
10 loops, best of 3: 110 ms per loop
In [21]: timeit [x for x in imap(str.strip,input_list) if x]
10 loops, best of 3: 125 ms per loop
In [22]: timeit [x for x in (s.strip() for s in input_list) if x]
10 loops, best of 3: 145 ms per loop
In [23]: timeit [data.strip() for data in input_list if data.strip()]
10 loops, best of 3: 160 ms per loop
In [24]: %%timeit
....: cleaned_up_list = []
....: for data in input_list:
....:     clean_data = data.strip()
....:     if clean_data:
....:         cleaned_up_list.append(clean_data)
....:
10 loops, best of 3: 150 ms per loop
In [25]: %%timeit
....: cleaned_up_list = []
....: append = cleaned_up_list.append
....: for data in input_list:
....:     clean_data = data.strip()
....:     if clean_data:
....:         append(clean_data)
....:
10 loops, best of 3: 123 ms per loop
```
The fastest approach is actually `itertools.ifilter` combined with `itertools.imap`, closely followed by `filter` with `imap`.
Removing the need to reevaluate the function reference `list.append` each iteration is more efficient, if you were stuck with a loop and wanted the most efficient approach then it is a viable alternative. |
Pythonic Django object reuse | 31,642,325 | 2 | 2015-07-26T21:53:55Z | 31,643,185 | 8 | 2015-07-26T23:54:18Z | [
"python",
"django"
] | I've been racking my brain on this for the last few weeks and I just can't seem to understand it. I'm hoping you folks here can give me some clarity.
**A LITTLE BACKGROUND**
I've built an API to help serve a large website and like all of us, are trying to keep the API as efficient as possible. Part of this efficiency is to NOT create an object that contains custom business logic over and over again (Example: a service class) as requests are made. To give some personal background I come from the Java world so I'm use to using a IoC or DI to help handle object creation and injection into my classes to ensure classes are NOT created over and over on a per request basis.
**WHAT I'VE READ**
While looking at many Python IoC and DI posts I've become rather confused on how to best approach creating a given class and not having to worry about the server getting overloaded with too many objects based on the amount of requests it may be handling.
Some people say an IoC or DI really isn't needed. But as I run my Django app I find that unless I construct the object I want globally (top of file) for views.py to use later, rather than within each view class or def within views.py, I run the risk of creating multiple classes of the same type, which from what I understand would cause memory bloat on the server.
So what's the right way to be pythonic to keep objects from being built over and over? Should I invest in using an IoC / DI or not? Can I safely rely on setting up my service.py files to just contain def's instead of classes that contain def's? Is the garbage collector just THAT efficient so I don't even have to worry about it.
I've purposely not placed any code in this post since this seems like a general questions, but I can provide a few code examples if that helps.
Thanks
From a confused engineer that wants to be as pythonic as possible | You come from a background where everything *needs* to be a class, I've programmed web apps in Java too, and sometimes it's harder to unlearn old things than to learn new things, I understand.
In Python / Django you wouldn't make anything a class unless you need many instances and need to keep state.
For a service that's hardly the case, and sometimes you'll notice in Java-like web apps some services are made singletons, which is just a workaround and a rather big anti-pattern in Python
## Pythonic
Python is flexible enough so that a "services class" isn't required, you'd just have a Python module (e.g. `services.py`) with a number of *functions*, emphasis on being a function that takes in something, returns something, in a completely stateless fashion.
```
# services.py
# this is a module, doesn't keep any state within,
# it may read and write to the DB, do some processing etc but doesn't remember things
def get_scores(student_id):
    return Score.objects.filter(student=student_id)

# views.py
# receives HTTP requests
def view_scores(request, student_id):
    scores = services.get_scores(student_id)
    # e.g. use the scores queryset in a template, return an HTML page
```
Notice how if you need to swap out the service, you'll just be swapping out a single Python module (just a file really), so Pythonistas hardly bother with *explicit* interfaces and other abstractions.
## Memory
Now per each "django worker process", you'd have that one `services` module, that is used over and over for all requests that come in, and when the `Score` queryset is used and no longer pointed at in memory, it'll be cleaned up.
I saw your other post, and well, instantiating a `ScoreService` object for each request, or keeping an instance of it in the global scope is just unnecessary, the above example does the job with one module in memory, and doesn't need us to be smart about it.
And if you *did* need to keep state in-between several requests, keeping them in online instances of `ScoreService` would be a bad idea anyway because now every user might need one instance, that's not viable (too many online objects keeping context). Not to mention that instance is only accessible from the same process unless you have some sharing mechanisms in place.
## Keep state in a datastore
In case you want to keep state in-between requests, you'd keep the state in a datastore, and when the request comes in, you hit the `services` module again to get the context back from the datastore, pick up where you left it and do your business, return your HTTP response, then unused things will get garbage collected.
The emphasis being on keeping things stateless, where any given HTTP request can be processed on any given django process, and all state objects are garbage collected after the response is returned and objects go out of scope.
This may not be the fastest request/response cycle we can pull, but it's scalable as hell
## Look at some major web apps written in Django
I suggest you look at some open source Django projects and look at how they're organized, you'll see a lot of the things you're busting your brains with, Djangonauts just don't bother with. |
Vagrant Not Starting Up. User that created VM doesn't match current user | 31,644,222 | 48 | 2015-07-27T02:43:16Z | 32,256,848 | 83 | 2015-08-27T18:25:05Z | [
"python",
"vagrant",
"virtual-machine",
"virtualbox",
"virtualenv"
] | I was trying to start up my vagrant machine, so I navigated to the folder where my vagrantfile is, and used:
vagrant up && vagrant ssh
but I got the following error message:
> The VirtualBox VM was created with a user that doesn't match the
> current user running Vagrant. VirtualBox requires that the same user
> be used to manage the VM that was created. Please re-run Vagrant with
> that user. This is not a Vagrant issue.
>
> The UID used to create the VM was: 0 Your UID is: 501
I also tried with sudo, but that didn't work either.
Do I need to switch UID's? And how would I do this? | I ran into the same problem today.
I edited my **UID** by opening the file `.vagrant/machines/default/virtualbox/creator_uid` and changing the **501** to a **0**.
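A scripted version of that edit might look like the following; the path assumes Vagrant's default machine name (`default`), and the value written should be whatever UID the error message says the VM was created with (`0` in my case). The `mkdir -p` is only there so the sketch runs standalone:

```shell
# Point at the file where Vagrant records the VM creator's UID.
uid_file=".vagrant/machines/default/virtualbox/creator_uid"
mkdir -p "$(dirname "$uid_file")"   # a real project already has this directory

# Overwrite the stored UID and confirm the change.
echo "0" > "$uid_file"
cat "$uid_file"
```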
After I saved the file, the command `vagrant up` worked like a champ. |
Vagrant Not Starting Up. User that created VM doesn't match current user | 31,644,222 | 48 | 2015-07-27T02:43:16Z | 32,977,467 | 13 | 2015-10-06T18:50:37Z | [
"python",
"vagrant",
"virtual-machine",
"virtualbox",
"virtualenv"
] | I was trying to start up my vagrant machine, so I navigated to the folder where my vagrantfile is, and used:
vagrant up && vagrant ssh
but I got the following error message:
> The VirtualBox VM was created with a user that doesn't match the
> current user running Vagrant. VirtualBox requires that the same user
> be used to manage the VM that was created. Please re-run Vagrant with
> that user. This is not a Vagrant issue.
>
> The UID used to create the VM was: 0 Your UID is: 501
I also tried with sudo, but that didn't work either.
Do I need to switch UID's? And how would I do this? | Ran into this problem in a slightly different situation. The issue was that ".vagrant" was checked into the git repo, and the committer was running under a different UID than I was.
Solution: add `.vagrant` to `.gitignore`. |
No module named 'allauth.account.context_processors' | 31,648,019 | 9 | 2015-07-27T08:13:03Z | 31,675,023 | 35 | 2015-07-28T11:30:23Z | [
"python",
"django",
"packages",
"django-allauth",
"django-settings"
] | I want to use Django-Allauth, so I installed it as follows, and it works perfectly on my laptop's localhost; but when I pull it to my server, I
encounter the following error:
```
No module named 'allauth.account.context_processors'
```
What should I do?
```
# Django AllAuth
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
# Already defined Django-related contexts here
# `allauth` needs this from django
'django.contrib.auth.context_processors.auth',
'django.core.context_processors.request',
# `allauth` specific context processors
'allauth.account.context_processors.account',
'allauth.socialaccount.context_processors.socialaccount',
"django.contrib.auth.context_processors.auth",
"django.core.context_processors.debug",
"django.core.context_processors.i18n",
"django.core.context_processors.media",
"django.core.context_processors.static",
"django.core.context_processors.tz",
"django.core.context_processors.request",
"moolak.context_processors.image",
],
},
},
]
AUTHENTICATION_BACKENDS = (
# Needed to login by username in Django admin, regardless of `allauth`
'django.contrib.auth.backends.ModelBackend',
# `allauth` specific authentication methods, such as login by e-mail
'allauth.account.auth_backends.AuthenticationBackend',
)
SOCIALACCOUNT_QUERY_EMAIL = True
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
SOCIALACCOUNT_PROVIDERS = \
{'google':
{'SCOPE': ['profile', 'email'],
'AUTH_PARAMS': {'access_type': 'online'}}}
SOCIALACCOUNT_PROVIDERS = \
{'facebook': {'SCOPE': ['email', 'public_profile', 'user_friends'], 'AUTH_PARAMS': {'auth_type': 'reauthenticate'}, 'METHOD': 'js_sdk', 'VERSION': 'v2.3'}}
# newsletter
NEWSLETTER_DEFAULT_HEADER_SENDER = 'NewsLetter <[email protected]>'
```
I have never used Django-Allauth, so I am a newbie; please explain as simply as you can. | This means you have different versions of Allauth on your dev machine and on your server. You should definitely use the same version on both sides.
As for why you are hitting this issue on the server: in version 0.22 of django-allauth, the [context processors have been replaced by template tags](http://django-allauth.readthedocs.org/en/latest/release-notes.html#id17).
You just need to ensure that:
1. You are running at least Allauth 0.22, which is the latest version as of now (`pip install django-allauth==0.22`)
2. No Allauth-specific context processors are listed in your Django project settings. So you need to remove these two lines:
```
# `allauth` specific context processors
'allauth.account.context_processors.account',
'allauth.socialaccount.context_processors.socialaccount',
``` |
python requests ssl handshake failure | 31,649,390 | 11 | 2015-07-27T09:24:43Z | 31,678,467 | 14 | 2015-07-28T13:57:39Z | [
"python",
"python-requests"
] | Every time I try to do:
```
requests.get('https://url')
```
I got this message:
```
import requests
>>> requests.get('https://reviews.gethuman.com/companies')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/requests/api.py", line 55, in get
return request('get', url, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 455, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 558, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 385, in send
raise SSLError(e) requests.exceptions.SSLError: [Errno 1]
_ssl.c:510: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
```
I tried everything:
* update my requests
* update my ssl
but nothing changes.
I am using Python 2.7.6, can't change this. | In the end I resolved the problem by upgrading my Ubuntu from 14.04 to 14.10, and that fixed it.
But on the older versions of Ubuntu and Python, I installed these libraries and they seemed to fix all my problems:
```
sudo apt-get install python-dev libssl-dev libffi-dev
sudo pip2.7 install -U pyopenssl==0.13.1 pyasn1 ndg-httpsclient
```
If you don't have `pip2.7` installed you can use `pip` instead. |
python requests ssl handshake failure | 31,649,390 | 11 | 2015-07-27T09:24:43Z | 34,924,131 | 13 | 2016-01-21T12:40:57Z | [
"python",
"python-requests"
] | Every time I try to do:
```
requests.get('https://url')
```
I got this message:
```
import requests
>>> requests.get('https://reviews.gethuman.com/companies')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/requests/api.py", line 55, in get
return request('get', url, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 455, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 558, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 385, in send
raise SSLError(e) requests.exceptions.SSLError: [Errno 1]
_ssl.c:510: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
```
I tried everything:
* update my requests
* update my ssl
but nothing changes.
I am using Python 2.7.6, can't change this. | On OSX, using Python 2.7.10 / requests 2.9.1, I only had to install `requests` using its security setup:
```
pip install requests[security]
```
This installs `pyOpenSSL`, `ndg-httpsclient` and `pyasn1`.
<https://github.com/kennethreitz/requests/blob/master/setup.py#L70> |
argparse argument named "print" | 31,656,015 | 2 | 2015-07-27T14:35:22Z | 31,656,028 | 7 | 2015-07-27T14:36:05Z | [
"python",
"python-2.7",
"arguments",
"keyword",
"argparse"
] | I want to add an argument named 'print' to my argument parser
```
arg_parser.add_argument('--print', action='store_true', help="print stuff")
args = arg_parser.parse_args(sys.argv[1:])
if args.print:
print "stuff"
```
Yields:
```
if args.print:
^
SyntaxError: invalid syntax
``` | You can use `getattr()` to access attributes that happen to be [reserved keywords](https://docs.python.org/2/reference/lexical_analysis.html#keywords) too:
```
if getattr(args, 'print'):
```
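For instance, here is a small self-contained demo, with the argument list passed explicitly instead of reading `sys.argv` (`args.print` itself is a SyntaxError in Python 2, where `print` is a keyword):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--print', action='store_true', help="print stuff")
args = parser.parse_args(['--print'])

# We cannot write `args.print` in Python 2, but getattr() works fine:
if getattr(args, 'print'):
    print("stuff")
```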
However, you'll make it much easier on yourself by just avoiding that name as a destination; use `print_` perhaps (via the [`dest` argument](https://docs.python.org/2/library/argparse.html#dest)):
```
arg_parser.add_argument('--print', dest='print_', action='store_true', help="print stuff")
# ...
if args.print_:
```
or, a more common synonym like `verbose`:
```
arg_parser.add_argument('--print', dest='verbose', action='store_true', help="print stuff")
# ...
if args.verbose:
```
Quick demo:
```
>>> import argparse
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--print', dest='print_', action='store_true', help="print stuff")
_StoreTrueAction(option_strings=['--print'], dest='print_', nargs=0, const=True, default=False, type=None, choices=None, help='print stuff', metavar=None)
>>> args = parser.parse_args(['--print'])
>>> args.print_
True
``` |
python map array of dictionaries to dictionary? | 31,656,912 | 5 | 2015-07-27T15:18:22Z | 31,656,965 | 9 | 2015-07-27T15:20:43Z | [
"python"
] | I've got an array of dictionaries that looks like this:
```
[
{ 'country': 'UK', 'city': 'Manchester' },
{ 'country': 'UK', 'city': 'Liverpool' },
{ 'country': 'France', 'city': 'Paris' } ...
]
```
And I want to end up with a dictionary like this:
```
{ 'Liverpool': 'UK', 'Manchester': 'UK', ... }
```
Obviously I can do this:
```
d = {}
for c in cities:
d[c['city']] = c['country']
```
But is there any way I could do it with a single-line map? | You can use a [*dict comprehension*](https://www.python.org/dev/peps/pep-0274/):
```
>>> li = [
... { 'country': 'UK', 'city': 'Manchester' },
... { 'country': 'UK', 'city': 'Liverpool' },
... { 'country': 'France', 'city': 'Paris' }
... ]
>>> {d['city']: d['country'] for d in li}
{'Paris': 'France', 'Liverpool': 'UK', 'Manchester': 'UK'}
```
Or use `operator.itemgetter` and the `map` function:
```
>>> import operator
>>> dict(map(operator.itemgetter('city','country'),li))
{'Paris': 'France', 'Liverpool': 'UK', 'Manchester': 'UK'}
``` |
Python csv.DictReader: parse string? | 31,658,115 | 3 | 2015-07-27T16:11:56Z | 31,658,188 | 7 | 2015-07-27T16:15:36Z | [
"python",
"csv"
] | I am downloading a CSV file directly from a URL using `requests`.
How can I parse the resulting string with `csv.DictReader`?
Right now I have this:
```
r = requests.get(url)
reader_list = csv.DictReader(r.text)
print reader_list.fieldnames
for row in reader_list:
print row
```
But I just get `['r']` as the result of `fieldnames`, and then all kinds of weird things from `print row`. | From documentation of [`csv`](https://docs.python.org/2/library/csv.html#csv.reader) , the first argument to `csv.reader` or `csv.DictReader` is `csvfile` -
> csvfile can be any object which supports the iterator protocol and returns a string each time its next() method is called - file objects and list objects are both suitable
In your case, when you give the string directly as input to `csv.DictReader()`, the `next()` call on it only provides a single character, and hence that becomes the header, and then `next()` is continuously called to get each row.
Hence, you need to either provide an in-memory stream of the string (using `StringIO`) or a list of lines (using `splitlines`).
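A quick sketch of that failure mode, using a hypothetical two-column CSV string:

```python
import csv

text = "a,b\n1,2\n"

# Iterating over a plain string yields single characters, so the first
# "row" the reader parses is just the first character:
broken = csv.DictReader(text)
print(broken.fieldnames)   # ['a']

# A list of lines behaves as intended:
fixed = csv.DictReader(text.splitlines())
print(fixed.fieldnames)    # ['a', 'b']
```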
You can use [`io.StringIO()`](https://docs.python.org/2/library/io.html#io.StringIO) and then use it in `csv.DictReader` . Example/Demo -
```
>>> import csv
>>> s = """a,b,c
... 1,2,3
... 4,5,6
... 7,8,9"""
>>> import io
>>> reader_list = csv.DictReader(io.StringIO(s))
>>> print reader_list.fieldnames
['a', 'b', 'c']
>>> for row in reader_list:
... print row
...
{'a': '1', 'c': '3', 'b': '2'}
{'a': '4', 'c': '6', 'b': '5'}
{'a': '7', 'c': '9', 'b': '8'}
```
---
Or as indicated in the comments , you can split the lines before giving as input to `csv.DictReader` . Example/Demo -
```
>>> reader_list = csv.DictReader(s.splitlines())
>>> print reader_list.fieldnames
['a', 'b', 'c']
>>> for row in reader_list:
... print row
...
{'a': '1', 'c': '3', 'b': '2'}
{'a': '4', 'c': '6', 'b': '5'}
{'a': '7', 'c': '9', 'b': '8'}
``` |
Spark iteration time increasing exponentially when using join | 31,659,404 | 7 | 2015-07-27T17:22:39Z | 31,662,127 | 13 | 2015-07-27T19:57:10Z | [
"python",
"loops",
"apache-spark",
"iteration",
"pyspark"
] | I'm quite new to Spark and I'm trying to implement some iterative algorithm for clustering (expectation-maximization) with centroids represented by Markov models. So I need to do iterations and joins.
One problem that I experience is that each iteration's time grows exponentially.
After some experimenting I found that when doing iterations it's necessary to persist an RDD that is going to be reused in the next iteration; otherwise in every iteration Spark will create an execution plan that recalculates the RDD from the start, thus increasing calculation time.
```
init = sc.parallelize(xrange(10000000), 3)
init.cache()
for i in range(6):
print i
start = datetime.datetime.now()
init2 = init.map(lambda n: (n, n*3))
init = init2.map(lambda n: n[0])
# init.cache()
print init.count()
print str(datetime.datetime.now() - start)
```
Results in:
```
0
10000000
0:00:04.283652
1
10000000
0:00:05.998830
2
10000000
0:00:08.771984
3
10000000
0:00:11.399581
4
10000000
0:00:14.206069
5
10000000
0:00:16.856993
```
So adding `cache()` helps and iteration time becomes constant.
```
init = sc.parallelize(xrange(10000000), 3)
init.cache()
for i in range(6):
print i
start = datetime.datetime.now()
init2 = init.map(lambda n: (n, n*3))
init = init2.map(lambda n: n[0])
init.cache()
print init.count()
print str(datetime.datetime.now() - start)
0
10000000
0:00:04.966835
1
10000000
0:00:04.609885
2
10000000
0:00:04.324358
3
10000000
0:00:04.248709
4
10000000
0:00:04.218724
5
10000000
0:00:04.223368
```
But when doing a join inside the iteration, the problem comes back.
Here is some simple code demonstrating the problem. Even calling cache on each RDD transformation doesn't solve it:
```
init = sc.parallelize(xrange(10000), 3)
init.cache()
for i in range(6):
print i
start = datetime.datetime.now()
init2 = init.map(lambda n: (n, n*3))
init2.cache()
init3 = init.map(lambda n: (n, n*2))
init3.cache()
init4 = init2.join(init3)
init4.count()
init4.cache()
init = init4.map(lambda n: n[0])
init.cache()
print init.count()
print str(datetime.datetime.now() - start)
```
And here is the output. As you can see, iteration time grows exponentially :(
```
0
10000
0:00:00.674115
1
10000
0:00:00.833377
2
10000
0:00:01.525314
3
10000
0:00:04.194715
4
10000
0:00:08.139040
5
10000
0:00:17.852815
```
I will really appreciate any help :) | **Summary**:
Generally speaking iterative algorithms, especially ones with a self-join or self-union, require control over:
* Length of the lineage (see for example [Stackoverflow due to long RDD Lineage](http://stackoverflow.com/q/34461804/1560062) and [unionAll resulting in StackOverflow](http://stackoverflow.com/q/38187333/1560062)).
* Number of partitions.
The problem described here is a result of the lack of the former. In each iteration the number of partitions increases with the self-join, leading to an exponential pattern. To address that you have to either control the number of partitions in each iteration (see below) or use global tools like `spark.default.parallelism` (see [an answer provided by Travis](http://stackoverflow.com/a/32769637/1560062)). In general the first approach provides much more control and doesn't affect other parts of the code.
**Original answer**:
As far as I can tell there are two interleaved problems here - a growing number of partitions and shuffling overhead during joins. Both can be easily handled, so let's go step by step.
First let's create a helper to collect the statistics:
```
import datetime
def get_stats(i, init, init2, init3, init4,
start, end, desc, cache, part, hashp):
return {
"i": i,
"init": init.getNumPartitions(),
"init1": init2.getNumPartitions(),
"init2": init3.getNumPartitions(),
"init4": init4.getNumPartitions(),
"time": str(end - start),
"timen": (end - start).seconds + (end - start).microseconds * 10 **-6,
"desc": desc,
"cache": cache,
"part": part,
"hashp": hashp
}
```
another helper to handle caching/partitioning
```
def procRDD(rdd, cache=True, part=False, hashp=False, npart=16):
rdd = rdd if not part else rdd.repartition(npart)
rdd = rdd if not hashp else rdd.partitionBy(npart)
return rdd if not cache else rdd.cache()
```
extract pipeline logic:
```
def run(init, description, cache=True, part=False, hashp=False,
npart=16, n=6):
times = []
for i in range(n):
start = datetime.datetime.now()
init2 = procRDD(
init.map(lambda n: (n, n*3)),
cache, part, hashp, npart)
init3 = procRDD(
init.map(lambda n: (n, n*2)),
cache, part, hashp, npart)
# If part set to True limit number of the output partitions
init4 = init2.join(init3, npart) if part else init2.join(init3)
init = init4.map(lambda n: n[0])
if cache:
init4.cache()
init.cache()
init.count() # Force computations to get time
end = datetime.datetime.now()
times.append(get_stats(
i, init, init2, init3, init4,
start, end, description,
cache, part, hashp
))
return times
```
and create initial data:
```
ncores = 8
init = sc.parallelize(xrange(10000), ncores * 2).cache()
```
The join operation by itself, if the `numPartitions` argument is not provided, adjusts the number of partitions in the output based on the number of partitions of the input RDDs. That means a growing number of partitions with each iteration. If the number of partitions gets too large, things get ugly. You can deal with this by providing the `numPartitions` argument to join, or by repartitioning the RDDs with each iteration.
```
timesCachePart = sqlContext.createDataFrame(
run(init, "cache + partition", True, True, False, ncores * 2))
timesCachePart.select("i", "init1", "init2", "init4", "time", "desc").show()
+-+-----+-----+-----+--------------+-----------------+
|i|init1|init2|init4| time| desc|
+-+-----+-----+-----+--------------+-----------------+
|0| 16| 16| 16|0:00:01.145625|cache + partition|
|1| 16| 16| 16|0:00:01.090468|cache + partition|
|2| 16| 16| 16|0:00:01.059316|cache + partition|
|3| 16| 16| 16|0:00:01.029544|cache + partition|
|4| 16| 16| 16|0:00:01.033493|cache + partition|
|5| 16| 16| 16|0:00:01.007598|cache + partition|
+-+-----+-----+-----+--------------+-----------------+
```
As you can see, when we repartition, execution time is more or less constant.
The second problem is that above data is partitioned randomly. To ensure join performance we would like to have same keys on a single partition. To achieve that we can use hash partitioner:
```
timesCacheHashPart = sqlContext.createDataFrame(
run(init, "cache + hashpart", True, True, True, ncores * 2))
timesCacheHashPart.select("i", "init1", "init2", "init4", "time", "desc").show()
+-+-----+-----+-----+--------------+----------------+
|i|init1|init2|init4| time| desc|
+-+-----+-----+-----+--------------+----------------+
|0| 16| 16| 16|0:00:00.946379|cache + hashpart|
|1| 16| 16| 16|0:00:00.966519|cache + hashpart|
|2| 16| 16| 16|0:00:00.945501|cache + hashpart|
|3| 16| 16| 16|0:00:00.986777|cache + hashpart|
|4| 16| 16| 16|0:00:00.960989|cache + hashpart|
|5| 16| 16| 16|0:00:01.026648|cache + hashpart|
+-+-----+-----+-----+--------------+----------------+
```
Execution time is constant as before, and there is a small improvement over the basic partitioning.
Now let's use cache only as a reference:
```
timesCacheOnly = sqlContext.createDataFrame(
run(init, "cache-only", True, False, False, ncores * 2))
timesCacheOnly.select("i", "init1", "init2", "init4", "time", "desc").show()
+-+-----+-----+-----+--------------+----------+
|i|init1|init2|init4| time| desc|
+-+-----+-----+-----+--------------+----------+
|0| 16| 16| 32|0:00:00.992865|cache-only|
|1| 32| 32| 64|0:00:01.766940|cache-only|
|2| 64| 64| 128|0:00:03.675924|cache-only|
|3| 128| 128| 256|0:00:06.477492|cache-only|
|4| 256| 256| 512|0:00:11.929242|cache-only|
|5| 512| 512| 1024|0:00:23.284508|cache-only|
+-+-----+-----+-----+--------------+----------+
```
As you can see, the number of partitions (init2, init3, init4) for the cache-only version doubles with each iteration, and execution time is proportional to the number of partitions.
Finally, we can check if we can improve performance with a large number of partitions if we use the hash partitioner:
```
timesCacheHashPart512 = sqlContext.createDataFrame(
run(init, "cache + hashpart 512", True, True, True, 512))
timesCacheHashPart512.select(
"i", "init1", "init2", "init4", "time", "desc").show()
+-+-----+-----+-----+--------------+--------------------+
|i|init1|init2|init4| time| desc|
+-+-----+-----+-----+--------------+--------------------+
|0| 512| 512| 512|0:00:14.492690|cache + hashpart 512|
|1| 512| 512| 512|0:00:20.215408|cache + hashpart 512|
|2| 512| 512| 512|0:00:20.408070|cache + hashpart 512|
|3| 512| 512| 512|0:00:20.390267|cache + hashpart 512|
|4| 512| 512| 512|0:00:20.362354|cache + hashpart 512|
|5| 512| 512| 512|0:00:19.878525|cache + hashpart 512|
+-+-----+-----+-----+--------------+--------------------+
```
The improvement is not so impressive, but if you have a small cluster and a lot of data it is still worth trying.
I guess the take-away message here is that partitioning matters. There are contexts where it is handled for you (`mllib`, `sql`) but if you use low level operations it is your responsibility. |
Efficiently create sparse pivot tables in pandas? | 31,661,604 | 8 | 2015-07-27T19:26:38Z | 31,679,396 | 14 | 2015-07-28T14:33:36Z | [
"python",
"pandas",
"scipy",
"scikit-learn",
"sparse-matrix"
] | I'm working turning a list of records with two columns (A and B) into a matrix representation. I have been using the pivot function within pandas, but the result ends up being fairly large. Does pandas support pivoting into a sparse format? I know I can pivot it and then turn it into some kind of sparse representation, but isn't as elegant as I would like. My end goal is to use it as the input for a predictive model.
Alternatively, is there some kind of sparse pivot capability outside of pandas?
edit: here is an example of a non-sparse pivot
```
import pandas as pd
frame=pd.DataFrame()
frame['person']=['me','you','him','you','him','me']
frame['thing']=['a','a','b','c','d','d']
frame['count']=[1,1,1,1,1,1]
frame
person thing count
0 me a 1
1 you a 1
2 him b 1
3 you c 1
4 him d 1
5 me d 1
frame.pivot('person','thing')
count
thing a b c d
person
him NaN 1 NaN 1
me 1 NaN NaN 1
you 1 NaN 1 NaN
```
This creates a matrix that could contain all possible combinations of persons and things, but it is not sparse.
<http://docs.scipy.org/doc/scipy/reference/sparse.html>
Sparse matrices take up less space because they can imply things like NaN or 0. If I have a very large data set, this pivoting function can generate a matrix that should be sparse due to the large number of NaNs or 0s. I was hoping that I could save a lot of space/memory by generating something that was sparse right off the bat rather than creating a dense matrix and then converting it to sparse. | Here is a method that creates a sparse scipy matrix based on data and indices of person and thing. `person_u` and `thing_u` are lists representing the unique entries for your rows and columns of pivot you want to create. Note: this assumes that your count column already has the value you want in it.
```
from scipy.sparse import csr_matrix
person_u = sorted(frame.person.unique())
thing_u = sorted(frame.thing.unique())
data = frame['count'].tolist()
row = frame.person.astype('category', categories=person_u).cat.codes
col = frame.thing.astype('category', categories=thing_u).cat.codes
sparse_matrix = csr_matrix((data, (row, col)), shape=(len(person_u), len(thing_u)))
>>> sparse_matrix
<3x4 sparse matrix of type '<type 'numpy.int64'>'
with 6 stored elements in Compressed Sparse Row format>
>>> sparse_matrix.todense()
matrix([[0, 1, 0, 1],
[1, 0, 0, 1],
[1, 0, 1, 0]])
```
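As a side note, the category codes above are just "map each label to its index in the sorted unique list". A plain-Python sketch of that mapping, using the question's data (handy if your pandas version's `astype('category', ...)` signature differs):

```python
persons = ['me', 'you', 'him', 'you', 'him', 'me']
things = ['a', 'a', 'b', 'c', 'd', 'd']

person_u = sorted(set(persons))   # ['him', 'me', 'you']
thing_u = sorted(set(things))     # ['a', 'b', 'c', 'd']

# Label -> dense integer index, i.e. what the category codes compute.
person_idx = {p: i for i, p in enumerate(person_u)}
thing_idx = {t: j for j, t in enumerate(thing_u)}

row = [person_idx[p] for p in persons]   # [1, 2, 0, 2, 0, 1]
col = [thing_idx[t] for t in things]     # [0, 0, 1, 2, 3, 3]
```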
Based on your original question, the scipy sparse matrix should be sufficient for your needs, but should you wish to have a sparse dataframe you can do the following:
```
dfs=pd.SparseDataFrame([ pd.SparseSeries(sparse_matrix[i].toarray().ravel(), fill_value=0)
for i in np.arange(sparse_matrix.shape[0]) ], index=person_u, columns=thing_u, default_fill_value=0)
>>> dfs
a b c d
him 0 1 0 1
me 1 0 0 1
you 1 0 1 0
>>> type(dfs)
pandas.sparse.frame.SparseDataFrame
``` |
Asyncio + aiohttp - redis Pub/Sub and websocket read/write in single handler | 31,670,127 | 9 | 2015-07-28T07:36:44Z | 31,684,719 | 14 | 2015-07-28T18:46:57Z | [
"python",
"redis",
"python-asyncio",
"aiohttp"
] | I'm currently playing with [aiohttp](http://aiohttp.readthedocs.org/) to see how it will perform as a server application for mobile app with websocket connection.
Here is simple "Hello world" example ([as gist here](https://gist.github.com/anonymous/2b0432082e0171683beb)):
```
import asyncio
import aiohttp
from aiohttp import web
class WebsocketEchoHandler:
@asyncio.coroutine
def __call__(self, request):
ws = web.WebSocketResponse()
ws.start(request)
print('Connection opened')
try:
while True:
msg = yield from ws.receive()
ws.send_str(msg.data + '/answer')
except:
pass
finally:
print('Connection closed')
return ws
if __name__ == "__main__":
app = aiohttp.web.Application()
app.router.add_route('GET', '/ws', WebsocketEchoHandler())
loop = asyncio.get_event_loop()
handler = app.make_handler()
f = loop.create_server(
handler,
'127.0.0.1',
8080,
)
srv = loop.run_until_complete(f)
print("Server started at {sock[0]}:{sock[1]}".format(
sock=srv.sockets[0].getsockname()
))
try:
loop.run_forever()
except KeyboardInterrupt:
pass
finally:
loop.run_until_complete(handler.finish_connections(1.0))
srv.close()
loop.run_until_complete(srv.wait_closed())
loop.run_until_complete(app.finish())
loop.close()
```
### The problem
Now I would like to use the structure described below (node server = python aiohttp). To be more specific, use the [Redis Pub/Sub](http://redis.io/topics/pubsub) mechanism with [asyncio-redis](https://github.com/jonathanslenders/asyncio-redis) to read and write to both the websocket connection and Redis in my *WebsocketEchoHandler*.
*WebsocketEchoHandler* is a dead simple loop, so I'm not sure how this should be done. Using [Tornado](http://www.tornadoweb.org) and [brükva](https://github.com/evilkost/brukva) I would just use callbacks.
[](http://i.stack.imgur.com/HKpKF.png)
### Extra (offtopic perhaps) question
Since I'm using [Redis](http://redis.io/) already, which of two approaches should I take:
1. Like in "classic" web app, have a controller/view for everything, use [Redis](http://redis.io/) just for messaging etc.
2. Web app should be just a layer between client and [Redis](http://redis.io/) used also as task queue (simplest [Python RQ](http://python-rq.org/)). Every request should be delegated to workers.
### EDIT
Image from <http://goldfirestudios.com/blog/136/Horizontally-Scaling-Node.js-and-WebSockets-with-Redis>
### EDIT 2
It seems that I need to clarify.
* **Websocket-only handler is shown above**
* **Redis Pub/Sub handler might look like that:**
```
class WebsocketEchoHandler:
@asyncio.coroutine
def __call__(self, request):
ws = web.WebSocketResponse()
ws.start(request)
connection = yield from asyncio_redis.Connection.create(host='127.0.0.1', port=6379)
subscriber = yield from connection.start_subscribe()
yield from subscriber.subscribe(['ch1', 'ch2'])
print('Connection opened')
try:
while True:
msg = yield from subscriber.next_published()
ws.send_str(msg.value + '/answer')
except:
pass
finally:
print('Connection closed')
return ws
```
This handler just subscribes to Redis channels *ch1* and *ch2* and sends every message received from those channels to the websocket.
* **I want to have this handler:**
```
class WebsocketEchoHandler:
@asyncio.coroutine
def __call__(self, request):
ws = web.WebSocketResponse()
ws.start(request)
connection = yield from asyncio_redis.Connection.create(host='127.0.0.1', port=6379)
subscriber = yield from connection.start_subscribe()
yield from subscriber.subscribe(['ch1', 'ch2'])
print('Connection opened')
try:
while True:
# If message recived from redis OR from websocket
msg_ws = yield from ws.receive()
msg_redis = yield from subscriber.next_published()
if msg_ws:
# push to redis / do something else
self.on_msg_from_ws(msg_ws)
if msg_redis:
self.on_msg_from_redis(msg_redis)
except:
pass
finally:
print('Connection closed')
return ws
```
But the following code is always called sequentially, so reading from the websocket blocks reading from Redis:
```
msg_ws = yield from ws.receive()
msg_redis = yield from subscriber.next_published()
```
I want reading to be done *on event*, where the *event* is a message received from either of the two sources. | You should use two `while` loops - one that handles messages from the websocket, and one that handles messages from Redis. Your main handler can just kick off two coroutines, one handling each loop, and then wait on *both* of them:
```
class WebsocketEchoHandler:
@asyncio.coroutine
def __call__(self, request):
ws = web.WebSocketResponse()
ws.start(request)
connection = yield from asyncio_redis.Connection.create(host='127.0.0.1', port=6379)
subscriber = yield from connection.start_subscribe()
yield from subscriber.subscribe(['ch1', 'ch2'])
print('Connection opened')
try:
# Kick off both coroutines in parallel, and then block
# until both are completed.
yield from asyncio.gather(self.handle_ws(ws), self.handle_redis(subscriber))
except Exception as e: # Don't do except: pass
import traceback
traceback.print_exc()
finally:
print('Connection closed')
return ws
@asyncio.coroutine
def handle_ws(self, ws):
while True:
msg_ws = yield from ws.receive()
if msg_ws:
self.on_msg_from_ws(msg_ws)
@asyncio.coroutine
def handle_redis(self, subscriber):
while True:
msg_redis = yield from subscriber.next_published()
if msg_redis:
self.on_msg_from_redis(msg_redis)
```
This way you can read from any of the two potential sources without having to care about the other. |
Python: transform a list of lists of tuples | 31,676,133 | 4 | 2015-07-28T12:18:56Z | 31,676,183 | 7 | 2015-07-28T12:21:05Z | [
"python",
"list",
"tuples"
] | Suppose I have a data structure as follows:
```
[[ tuple11,
tuple12,
... ],
[ tuple21,
tuple22,
... ],
...]
```
That is, the outer list can have any number of elements and each element (list) can contain any number of elements (tuples). How can I transform it to:
```
[[ tuple11,
tuple21,
... ],
[ tuple12,
tuple22,
... ],
... ]
```
I have the following solution working for two elements in the outer list, but I cannot figure out how to generalise it:
```
map(lambda x, y: [x, y], *the_list)
```
**Added:**
Just to add some more detail, each of the tuples above is in fact a tuple of two `np.array`s.
If I start out with the following data structure:
```
[[(array([111, 111]), array([121, 121])),
(array([112, 112]), array([122, 122])),
(array([131, 131]), array([141, 141])),
(array([132, 132]), array([142, 142]))],
[(array([211, 211]), array([221, 221])),
(array([212, 212]), array([222, 222])),
(array([231, 231]), array([241, 241])),
(array([232, 232]), array([242, 242]))]]
```
I need to turn this into:
```
[[(array([111, 111]), array([121, 121])),
(array([211, 211]), array([221, 221]))],
[(array([112, 112]), array([122, 122])),
(array([212, 212]), array([222, 222]))],
[(array([131, 131]), array([141, 141])),
(array([231, 231]), array([241, 241]))],
[(array([132, 132]), array([142, 142])),
(array([232, 232]), array([242, 242]))]]
```
Note that the arrays are not always 1×2 but 1×N in general for any positive N. There are always two arrays in each tuple, but any number of tuples in each inner list and any number of these inner lists in the outer list.
I am quite used to juggling NumPy arrays, but I have very little experience with the native lists, tuples etc. | You want the columns, so you can use the `zip` function:
```
zip(*main_list)
```
But since it returns the columns as tuples, if you want lists you can use `map` to convert them:
```
map(list, zip(*main_list))
```
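Note that on Python 3, `zip` returns an iterator rather than a list, so you would materialize it explicitly — a sketch:

```python
main_list = [[(1, 2), (3, 4)], [(5, 6), (7, 8)]]
# transpose: each output row collects the i-th tuple from every inner list
transposed = [list(row) for row in zip(*main_list)]
print(transposed)
```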
Demo:
```
>>> main_list=[[(1,2),(3,4)],[(5,6),(7,8)]]
>>> zip(*main_list)
[((1, 2), (5, 6)), ((3, 4), (7, 8))]
```
And with your example:
```
>>> main_list=[[(np.array([111, 111]), np.array([121, 121])),
... (np.array([112, 112]), np.array([122, 122])),
... (np.array([131, 131]), np.array([141, 141])),
... (np.array([132, 132]), np.array([142, 142]))],
... [(np.array([211, 211]), np.array([221, 221])),
... (np.array([212, 212]), np.array([222, 222])),
... (np.array([231, 231]), np.array([241, 241])),
... (np.array([232, 232]), np.array([242, 242]))]]
>>>
>>> zip(*main_list)
[((array([111, 111]), array([121, 121])), (array([211, 211]), array([221, 221]))),
((array([112, 112]), array([122, 122])), (array([212, 212]), array([222, 222]))),
((array([131, 131]), array([141, 141])), (array([231, 231]), array([241, 241]))),
((array([132, 132]), array([142, 142])), (array([232, 232]), array([242, 242])))]
``` |
Python, where i am wrong for the function to remove vowels in a string? | 31,676,580 | 3 | 2015-07-28T12:38:32Z | 31,676,695 | 8 | 2015-07-28T12:44:12Z | [
"python"
] | I am following the codecademy tutorial, the question is "Define a function called anti\_vowel that takes one string, text, as input and returns the text with all of the vowels removed.
For example: anti\_vowel("Hey You!") should return "Hy Y!"."
The code:
```
def anti_vowel(text):
textlist = list(text)
print textlist
for k in textlist:
if k in"aeiouAEIOU":
print k
textlist.remove(k)
return "".join(textlist)
```
The error is "Oops, try again. Your function fails on anti\_vowel("Hey look Words!"). It returns "Hy lk Words!" when it should return "Hy lk Wrds!"."
I don't know why there is still an extra 'o' there. Could anyone tell me what causes the error? Thank you! | The reason is already explained in the comments by @jonrsharpe.
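To illustrate the underlying problem: removing items from a list while iterating over it makes the loop skip the element that follows each removal, so consecutive vowels like the "oo" in "look" lose only one 'o'. Iterating over a copy is one minimal fix — a sketch:

```python
def anti_vowel(text):
    textlist = list(text)
    # iterate over a shallow copy so removals from textlist
    # do not shift elements under the live iterator
    for k in textlist[:]:
        if k in "aeiouAEIOU":
            textlist.remove(k)
    return "".join(textlist)

print(anti_vowel("Hey look Words!"))
```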
I would like to show you an optimised version, which does work and uses list comprehension:
```
def anti_vowel(text):
return "".join([x for x in text if x not in "aeiouAEIOU"])
print anti_vowel("Hey look Words!")
```
output:
```
Hy lk Wrds!
``` |
What is a Pythonic way for Dependency Injection? | 31,678,827 | 23 | 2015-07-28T14:10:11Z | 31,813,464 | 7 | 2015-08-04T15:34:39Z | [
"python",
"oop",
"authentication",
"dependency-injection",
"frameworks"
] | # Introduction
For Java, Dependency Injection works as pure OOP, i.e. you provide an interface to be implemented and in your framework code accept an instance of a class that implements the defined interface.
Now for Python, you are able to do it the same way, but I think that method carries too much overhead in the case of Python. So how would you implement it in a Pythonic way?
# Use Case
Say this is the framework code:
```
class FrameworkClass():
def __init__(self, ...):
...
def do_the_job(self, ...):
# some stuff
# depending on some external function
```
# The Basic Approach
The most naive (and maybe the best?) way is to require the external function to be supplied to the `FrameworkClass` constructor, and then be invoked from the `do_the_job` method.
***Framework Code:***
```
class FrameworkClass():
def __init__(self, func):
self.func = func
def do_the_job(self, ...):
# some stuff
self.func(...)
```
***Client Code:***
```
def my_func():
# my implementation
framework_instance = FrameworkClass(my_func)
framework_instance.do_the_job(...)
```
# Question
The question is short. Is there any better commonly used Pythonic way to do this? Or maybe any libraries supporting such functionality?
# UPDATE: Concrete Situation
Imagine I develop a micro web framework which handles authentication using tokens. This framework needs a function that, given some `ID` obtained from the token, returns the user corresponding to that `ID`.
Obviously, the framework does not know anything about users or any other application specific logic, so the client code must inject the user getter functionality into the framework to make the authentication work. | The way we do dependency injection in our project is by using the [inject](https://pypi.python.org/pypi/Inject/3.1.1) lib. Check out the [documentation](https://pypi.python.org/pypi/Inject/3.1.1). I highly recommend using it for DI. It kinda makes no sense with just one function but starts making lots of sense when you have to manage multiple data sources etc, etc.
Following your example it could be something similar to:
```
# framework.py
class FrameworkClass():
def __init__(self, func):
self.func = func
def do_the_job(self):
# some stuff
self.func()
```
Your custom function:
```
# my_stuff.py
def my_func():
print('aww yiss')
```
Somewhere in the application you want to create a bootstrap file that keeps track of all the defined dependencies:
```
# bootstrap.py
import inject
from .my_stuff import my_func
def configure_injection(binder):
binder.bind(FrameworkClass, FrameworkClass(my_func))
inject.configure(configure_injection)
```
And then you could consume the code this way:
```
# some_module.py (has to be loaded with bootstrap.py already loaded somewhere in your app)
import inject
from .framework import FrameworkClass
framework_instance = inject.instance(FrameworkClass)
framework_instance.do_the_job()
```
I'm afraid this is as pythonic as it can get (the module has some python sweetness like decorators to inject by parameter etc - check the docs), as python does not have fancy stuff like interfaces or type hinting.
So to **answer your question** directly would be very hard. I think the true question is: does python have some native support for DI? And the answer is, sadly: no. |
What is a Pythonic way for Dependency Injection? | 31,678,827 | 23 | 2015-07-28T14:10:11Z | 31,909,086 | 9 | 2015-08-09T21:38:05Z | [
"python",
"oop",
"authentication",
"dependency-injection",
"frameworks"
] | # Introduction
For Java, Dependency Injection works as pure OOP, i.e. you provide an interface to be implemented and in your framework code accept an instance of a class that implements the defined interface.
Now for Python, you are able to do it the same way, but I think that method carries too much overhead in the case of Python. So how would you implement it in a Pythonic way?
# Use Case
Say this is the framework code:
```
class FrameworkClass():
def __init__(self, ...):
...
def do_the_job(self, ...):
# some stuff
# depending on some external function
```
# The Basic Approach
The most naive (and maybe the best?) way is to require the external function to be supplied to the `FrameworkClass` constructor, and then be invoked from the `do_the_job` method.
***Framework Code:***
```
class FrameworkClass():
def __init__(self, func):
self.func = func
def do_the_job(self, ...):
# some stuff
self.func(...)
```
***Client Code:***
```
def my_func():
# my implementation
framework_instance = FrameworkClass(my_func)
framework_instance.do_the_job(...)
```
# Question
The question is short. Is there any better commonly used Pythonic way to do this? Or maybe any libraries supporting such functionality?
# UPDATE: Concrete Situation
Imagine I develop a micro web framework which handles authentication using tokens. This framework needs a function that, given some `ID` obtained from the token, returns the user corresponding to that `ID`.
Obviously, the framework does not know anything about users or any other application specific logic, so the client code must inject the user getter functionality into the framework to make the authentication work. | See [Raymond Hettinger - Super considered super! - PyCon 2015](https://www.youtube.com/watch?v=EiOglTERPEo) for an argument about how to use super and multiple inheritance instead of DI. If you don't have time to watch the whole video, jump to minute 15 (but I'd recommend watching all of it).
Here is an example of how to apply what's described in this video to your example:
***Framework Code:***
```
class TokenInterface():
def getUserFromToken(self, token):
raise NotImplementedError
class FrameworkClass(TokenInterface):
def do_the_job(self, ...):
# some stuff
self.user = super().getUserFromToken(...)
```
***Client Code:***
```
class SQLUserFromToken(TokenInterface):
def getUserFromToken(self, token):
# load the user from the database
return user
class ClientFrameworkClass(FrameworkClass, SQLUserFromToken):
pass
framework_instance = ClientFrameworkClass()
framework_instance.do_the_job(...)
```
This will work because the Python MRO will guarantee that the getUserFromToken client method is called (if super() is used). The code will have to change if you're on Python 2.x.
One added benefit here is that this will raise an exception if the client does not provide an implementation.
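A quick sketch of that failure mode, using hypothetical classes mirroring the ones above:

```python
class TokenInterface:
    def getUserFromToken(self, token):
        raise NotImplementedError

class FrameworkClass(TokenInterface):
    def do_the_job(self, token):
        # resolved via the MRO: the client's mixin, if any, wins
        return super().getUserFromToken(token)

class BrokenClient(FrameworkClass):  # forgot to mix in an implementation
    pass

try:
    BrokenClient().do_the_job("abc")
except NotImplementedError:
    print("client must supply getUserFromToken")
```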
Of course, this is not really dependency injection, it's multiple inheritance and mixins, but it is a Pythonic way to solve your problem. |
Python 2.7 Get month from 2 months ago (i.e get '05' from todays date) | 31,679,821 | 2 | 2015-07-28T14:49:21Z | 31,679,870 | 7 | 2015-07-28T14:51:16Z | [
"python",
"python-2.7",
"date",
"datetime"
] | I'm sure there is an easy way to do this that I have missed with my efforts to find the answer.
Basically, how do I get the month number, i.e. '05' or '04', from n months ago?
Apologies if this was already answered but the questions I researched here could not answer my question.
**Edit**
There is no month parameter in timedelta, so this did not answer my question.
Martin answered my question perfectly! | With some simple modular arithmetic:
```
from datetime import date
def months_ago(count):
today = date.today()
return ((today.month - count - 1) % 12) + 1
```
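The same result can also be reached without modular arithmetic, by stepping back through first-of-month dates — a sketch (with an optional reference date added for testability; this is not part of the original function):

```python
from datetime import date, timedelta

def months_ago_stepping(count, today=None):
    today = today or date.today()
    d = today.replace(day=1)
    for _ in range(count):
        # day 1 minus one day always lands in the previous month
        d = (d - timedelta(days=1)).replace(day=1)
    return d.month

print(months_ago_stepping(2, date(2015, 7, 28)))
```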
Demo:
```
>>> date.today()
datetime.date(2015, 7, 28)
>>> for i in range(13):
... print(months_ago(i))
...
7
6
5
4
3
2
1
12
11
10
9
8
7
``` |
Making SVM run faster in python | 31,681,373 | 13 | 2015-07-28T15:53:13Z | 32,025,662 | 12 | 2015-08-15T14:23:48Z | [
"python",
"scikit-learn",
"svm"
] | Using the **code** below for svm in python:
```
from sklearn import datasets
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
iris = datasets.load_iris()
X, y = iris.data, iris.target
clf = OneVsRestClassifier(SVC(kernel='linear', probability=True, class_weight='auto'))
clf.fit(X, y)
proba = clf.predict_proba(X)
```
But it is taking a huge amount of time.
**Actual Data Dimensions**:
```
train-set (1422392,29)
test-set (233081,29)
```
How can I speed it up (in parallel or some other way)? Please help.
I have already tried PCA and downsampling.
I have 6 classes.
Edit:
Found <http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html>
but I want probability estimates and it seems it does not provide them for SVM.
**Edit:**
```
from sklearn import datasets
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC,LinearSVC
from sklearn.linear_model import SGDClassifier
import joblib
import numpy as np
from sklearn import grid_search
import multiprocessing
import numpy as np
import math
def new_func(a): #converts array(x) elements to (1/(1 + e(-x)))
a=1/(1 + math.exp(-a))
return a
if __name__ == '__main__':
iris = datasets.load_iris()
cores=multiprocessing.cpu_count()-2
X, y = iris.data, iris.target #loading dataset
C_range = 10.0 ** np.arange(-4, 4); #c value range
param_grid = dict(estimator__C=C_range.tolist())
svr = OneVsRestClassifier(LinearSVC(class_weight='auto'),n_jobs=cores) ################LinearSVC Code faster
#svr = OneVsRestClassifier(SVC(kernel='linear', probability=True, ##################SVC code slow
# class_weight='auto'),n_jobs=cores)
clf = grid_search.GridSearchCV(svr, param_grid,n_jobs=cores,verbose=2) #grid search
clf.fit(X, y) #training svm model
decisions=clf.decision_function(X) #outputs decision functions
#prob=clf.predict_proba(X) #only for SVC outputs probablilites
print decisions[:5,:]
vecfunc = np.vectorize(new_func)
    prob=vecfunc(decisions) #converts decisions to (1/(1 + e(-x)))
print prob[:5,:]
```
**Edit 2:**
The answer by user3914041 yields very poor probability estimates. | If you want to stick with SVC as much as possible and train on the full dataset, you can use ensembles of SVCs that are trained on subsets of the data to reduce the number of records per classifier (which apparently has quadratic influence on complexity). Scikit supports that with the `BaggingClassifier` wrapper. That should give you similar (if not better) accuracy compared to a single classifier, with much less training time. The training of the individual classifiers can also be set to run in parallel using the `n_jobs` parameter.
Alternatively, I would also consider using a Random Forest classifier - it supports multi-class classification natively, it is fast and gives pretty good probability estimates when `min_samples_leaf` is set appropriately.
I did a quick test on the iris dataset blown up 100 times with an ensemble of 10 SVCs, each one trained on 10% of the data. It is more than 10 times faster than a single classifier. These are the numbers I got on my laptop:
Single SVC: 45s
Ensemble SVC: 3s
Random Forest Classifier: 0.5s
See below the code that I used to produce the numbers:
```
import time
import numpy as np
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn import datasets
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
iris = datasets.load_iris()
X, y = iris.data, iris.target
X = np.repeat(X, 100, axis=0)
y = np.repeat(y, 100, axis=0)
start = time.time()
clf = OneVsRestClassifier(SVC(kernel='linear', probability=True, class_weight='auto'))
clf.fit(X, y)
end = time.time()
print "Single SVC", end - start, clf.score(X,y)
proba = clf.predict_proba(X)
n_estimators = 10
start = time.time()
clf = OneVsRestClassifier(BaggingClassifier(SVC(kernel='linear', probability=True, class_weight='auto'), max_samples=1.0 / n_estimators, n_estimators=n_estimators))
clf.fit(X, y)
end = time.time()
print "Bagging SVC", end - start, clf.score(X,y)
proba = clf.predict_proba(X)
start = time.time()
clf = RandomForestClassifier(min_samples_leaf=20)
clf.fit(X, y)
end = time.time()
print "Random Forest", end - start, clf.score(X,y)
proba = clf.predict_proba(X)
```
If you want to make sure that each record is used only once for training in the `BaggingClassifier`, you can set the `bootstrap` parameter to False. |
Python list comprehension trick | 31,682,890 | 2 | 2015-07-28T17:07:50Z | 31,682,915 | 7 | 2015-07-28T17:09:37Z | [
"python",
"list",
"python-3.x",
"list-comprehension"
] | It's probably a silly question, but...
```
list = []
for i in range(1, 5):
for j in range(i):
list.append(i)
print(list)
list2 = [[i]*i for i in range(1, 5)]
print(list2)
```
With the following code my output is like:
```
[1, 2, 2, 3, 3, 3, 4, 4, 4, 4]
[[1], [2, 2], [3, 3, 3], [4, 4, 4, 4]]
```
I understand why the second one looks like this, but are there any tricks to get the first list with comprehension?
**P.S.**
Python 3 | Is this what you want?
```
>>> list2 = [i for i in range(1, 5) for j in range(i)]
>>> list2
[1, 2, 2, 3, 3, 3, 4, 4, 4, 4]
```
The trick is to put the `for` loops in the same order inside the list comprehension as in the nested loops (and since you do not need a list of lists, do not create one). |
Suppress stderr within subprocess.check_output() | 31,683,320 | 3 | 2015-07-28T17:31:03Z | 31,683,402 | 7 | 2015-07-28T17:36:10Z | [
"python",
"python-2.7",
"subprocess"
] | I'm trying to find a way to ignore the stderr stream (something similar to `2> /dev/null`):
```
output = subprocess.check_output("netstat -nptl".split())
```
What should I add to the above command to achieve this? | Just tell `subprocess` to redirect it for you:
```
import os
import subprocess
with open(os.devnull, 'w') as devnull:
output = subprocess.check_output("netstat -nptl".split(), stderr=devnull)
```
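On Python 3 the same redirection can be written without opening the null device yourself — a sketch (using `echo` here as a stand-in for the original `netstat` command):

```python
import subprocess

# stderr is discarded; stdout is still captured
output = subprocess.check_output(["echo", "hello"], stderr=subprocess.DEVNULL)
print(output)
```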
Note that Python 3 adds a [`subprocess.DEVNULL` object](https://docs.python.org/3/library/subprocess.html#subprocess.DEVNULL), saving you having to open this yourself. |
Automatically create requirements.txt | 31,684,375 | 7 | 2015-07-28T18:29:03Z | 31,684,470 | 9 | 2015-07-28T18:34:04Z | [
"python",
"python-import"
] | Sometimes I download python source code from github and don't know how to install all the dependencies. If there is no requirements.txt file I have to create it by hand.
The question is:
Given a Python source code directory, is it possible to create requirements.txt automatically from the import section? | You can use the following commands to generate a requirements.txt file:
```
pip install pipreqs
pipreqs /path/to/project
```
more info related to pipreqs can be found [here](https://github.com/bndr/pipreqs).
Sometimes you come across `pip freeze`, but this saves all packages in the environment including those that you don't use in your current project. |
Automatically create requirements.txt | 31,684,375 | 7 | 2015-07-28T18:29:03Z | 33,468,993 | 7 | 2015-11-02T00:39:32Z | [
"python",
"python-import"
] | Sometimes I download python source code from github and don't know how to install all the dependencies. If there is no requirements.txt file I have to create it by hand.
The question is:
Given a Python source code directory, is it possible to create requirements.txt automatically from the import section? | If you use a virtual environment, `pip freeze > requirements.txt` works just fine. If not, [pigar](https://github.com/Damnever/pigar) will be a good choice for you.
By the way, I cannot guarantee it will work with Python 2.6. |