title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
Calling list() empties my iterable object? | 34,732,641 | 13 | 2016-01-11T22:54:53Z | 34,732,710 | 25 | 2016-01-11T22:59:04Z | [
"python",
"list"
] | ```
a = range(1, 3)
a = iter(a)
list(a)
a = list(a)
```
`a` evaluates to `[ ]`.
```
a = range(1, 3)
a = iter(a)
a = list(a)
```
`a` evaluates to `[1, 2]`.
The first result is unexpected to me. What semantics are going on here? | The issue is not `list()` but [`iter()`](https://docs.python.org/3.5/library/functions.html#iter) which as documented returns a single-use [`iterator`](https://docs.python.org/3.5/glossary.html#term-iterator). Once something has accessed the `iterator`'s elements, the iterator is permanently empty. The more commonly used [iterable](https://docs.python.org/3.5/glossary.html#term-iterable) type is (normally) reusable, and the two types shouldn't be confused.
Note that you don't need `iter()` in order to turn a `range` into a `list`, because `list()` takes an `iterable` as an argument:
```
>>> a = range(1, 3)
>>> list(a)
[1, 2]
>>> list(a)
[1, 2]
```
And it is only the `iterator` returned by `iter()` that is single-use:
```
>>> b = iter(a)
>>> list(b)
[1, 2]
>>> list(b)
[]
>>> list(a)
[1, 2]
``` |
why is list[-1:1] empty in python? | 34,737,719 | 3 | 2016-01-12T07:24:29Z | 34,737,763 | 11 | 2016-01-12T07:26:52Z | [
"python",
"list"
] | ```
>>>list=[1,2,3]
>>>list[1:2]
2
>>>list[-1:1]
[]
```
In Python, `list[first:last]` returns `[list[first], list[first+1], ..., list[last-1]]`,
but `list[-1:1]` returns an empty list; why doesn't it include `list[-1]`? | What are you expecting it to return? The `-1` position in that list is index `2`, so `list[-1:1]` is the same as `list[2:1]`, which is an empty list (a slice with the default step of `1` never goes backwards). You could get it with `step=-1`:
```
list[-1:1:-1]
[3]
```
*Note*: Usually it's not a good idea to shadow built-in names such as `list`. It's better to use some other name, e.g. `l`:
```
l = [1,2,3]
l[1]
2
l[-1]
3
l[-1:1]
[]
l[-1:1:-1]
[3]
``` |
How to remove final character in a number of strings? | 34,739,993 | 3 | 2016-01-12T09:36:38Z | 34,740,092 | 10 | 2016-01-12T09:41:21Z | [
"python",
"string",
"trailing"
] | I have a number of strings and I would like to remove the last character of each. When I try out the code below it removes my second string entirely instead of removing each string's last character. Below is my code:
**Code**
```
with open('test.txt') as file:
seqs=file.read().splitlines()
seqs=seqs[:-1]
```
**test.txt**
```
ABCABC
XYZXYZ
```
**Output**
```
ABCABC
```
**Desired output**
```
ABCAB
XYZXY
``` | Change this `seqs=seqs[:-1]`
to a [`list comprehension`](https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions):
```
seqs=[val[:-1] for val in seqs]
```
**Note:**
* The problem with your old method is that `seqs` is a list of strings, i.e. `["ABCABC", "XYZXYZ"]`. `seqs[:-1]` drops the last *item of the list*, leaving only `ABCABC`, which explains the output.
* What the list comprehension does is take each string from the list and omit its last character. |
How to specify test timeout for python unittest? | 34,743,448 | 4 | 2016-01-12T12:15:40Z | 34,743,601 | 7 | 2016-01-12T12:23:06Z | [
"python",
"unit-testing",
"timeout"
] | I'm using the Python framework `unittest`. Is it possible to specify a per-test timeout using the framework's own abilities? If not, is it possible to gracefully specify a `timeout` for all tests, plus a private value for certain individual tests?
I want to define a `global timeout` for all tests (they will use it by default) and a timeout for certain tests that can take a long time. | As far as I know, [`unittest`](https://docs.python.org/2.7/library/unittest.html) does not contain any support for test timeouts.
You can try [`timeout-decorator`](https://pypi.python.org/pypi/timeout-decorator) library from PyPI. Apply the decorator on individual tests to make them terminate if they take too long:
```
import timeout_decorator
class TestCaseWithTimeouts(unittest.TestCase):
# ... whatever ...
@timeout_decorator.timeout(LOCAL_TIMEOUT)
def test_that_can_take_too_long(self):
sleep(float('inf'))
# ... whatever else ...
```
To create a global timeout, you can replace call
```
unittest.main()
```
with
```
timeout_decorator.timeout(GLOBAL_TIMEOUT)(unittest.main)()
``` |
Tuples and Dictionaries contained within a List | 34,748,017 | 6 | 2016-01-12T15:48:27Z | 34,748,108 | 7 | 2016-01-12T15:52:10Z | [
"python",
"list",
"dictionary",
"tuples"
] | I am trying to obtain a specific dictionary from within a list that contains both tuples and dictionaries. How would I go about returning the dictionary with key 'k' from the list below?
```
lst = [('apple', 1), ('banana', 2), {'k': [1,2,3]}, {'l': [4,5,6]}]
``` | For your
```
lst = [('apple', 1), ('banana', 2), {'k': [1,2,3]}, {'l': [4,5,6]}]
```
using
```
next(elem for elem in lst if isinstance(elem, dict) and 'k' in elem)
```
returns
```
{'k': [1, 2, 3]}
```
i.e. the first object of your list which is a dictionary and contains key 'k'.
This raises `StopIteration` if no such object is found. If you want to return something else, e.g. `None`, use this:
```
next((elem for elem in lst if isinstance(elem, dict) and 'k' in elem), None)
``` |
Using moviepy, scipy and numpy in amazon lambda | 34,749,806 | 21 | 2016-01-12T17:12:06Z | 34,882,192 | 12 | 2016-01-19T16:39:57Z | [
"python",
"amazon-web-services",
"numpy",
"aws-lambda"
] | I'd like to generate video using the `AWS Lambda` feature.
I've followed instructions found [here](http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-deployment-pkg.html#with-s3-example-deployment-pkg-python) and [here](http://www.perrygeo.com/running-python-with-compiled-code-on-aws-lambda.html).
And I now have the following process to build my `Lambda` function:
## Step 1
Fire a `Amazon Linux EC2` instance and run this as root on it:
```
#! /usr/bin/env bash
# Install the SciPy stack on Amazon Linux and prepare it for AWS Lambda
yum -y update
yum -y groupinstall "Development Tools"
yum -y install blas --enablerepo=epel
yum -y install lapack --enablerepo=epel
yum -y install atlas-sse3-devel --enablerepo=epel
yum -y install Cython --enablerepo=epel
yum -y install python27
yum -y install python27-numpy.x86_64
yum -y install python27-numpy-f2py.x86_64
yum -y install python27-scipy.x86_64
/usr/local/bin/pip install --upgrade pip
mkdir -p /home/ec2-user/stack
/usr/local/bin/pip install moviepy -t /home/ec2-user/stack
cp -R /usr/lib64/python2.7/dist-packages/numpy /home/ec2-user/stack/numpy
cp -R /usr/lib64/python2.7/dist-packages/scipy /home/ec2-user/stack/scipy
tar -czvf stack.tgz /home/ec2-user/stack/*
```
## Step 2
I `scp` the resulting tarball to my laptop, and then run this script to build a zip archive.
```
#! /usr/bin/env bash
mkdir tmp
rm lambda.zip
tar -xzf stack.tgz -C tmp
zip -9 lambda.zip process_movie.py
zip -r9 lambda.zip *.ttf
cd tmp/home/ec2-user/stack/
zip -r9 ../../../../lambda.zip *
```
The `process_movie.py` script is at the moment only a test to see whether the stack is OK:
```
def make_movie(event, context):
import os
print(os.listdir('.'))
print(os.listdir('numpy'))
try:
import scipy
except ImportError:
print('can not import scipy')
try:
import numpy
except ImportError:
print('can not import numpy')
try:
import moviepy
except ImportError:
print('can not import moviepy')
```
## Step 3
Then I upload the resulting archive to S3 to be the source of my `lambda` function.
When I test the function I get the following `callstack`:
```
START RequestId: 36c62b93-b94f-11e5-9da7-83f24fc4b7ca Version: $LATEST
['tqdm', 'imageio-1.4.egg-info', 'decorator.pyc', 'process_movie.py', 'decorator-4.0.6.dist-info', 'imageio', 'moviepy', 'tqdm-3.4.0.dist-info', 'scipy', 'numpy', 'OpenSans-Regular.ttf', 'decorator.py', 'moviepy-0.2.2.11.egg-info']
['add_newdocs.pyo', 'numarray', '__init__.py', '__config__.pyc', '_import_tools.py', 'setup.pyo', '_import_tools.pyc', 'doc', 'setupscons.py', '__init__.pyc', 'setup.py', 'version.py', 'add_newdocs.py', 'random', 'dual.pyo', 'version.pyo', 'ctypeslib.pyc', 'version.pyc', 'testing', 'dual.pyc', 'polynomial', '__config__.pyo', 'f2py', 'core', 'linalg', 'distutils', 'matlib.pyo', 'tests', 'matlib.pyc', 'setupscons.pyc', 'setup.pyc', 'ctypeslib.py', 'numpy', '__config__.py', 'matrixlib', 'dual.py', 'lib', 'ma', '_import_tools.pyo', 'ctypeslib.pyo', 'add_newdocs.pyc', 'fft', 'matlib.py', 'setupscons.pyo', '__init__.pyo', 'oldnumeric', 'compat']
can not import scipy
'module' object has no attribute 'core': AttributeError
Traceback (most recent call last):
File "/var/task/process_movie.py", line 91, in make_movie
import numpy
File "/var/task/numpy/__init__.py", line 122, in <module>
from numpy.__config__ import show as show_config
File "/var/task/numpy/numpy/__init__.py", line 137, in <module>
import add_newdocs
File "/var/task/numpy/numpy/add_newdocs.py", line 9, in <module>
from numpy.lib import add_newdoc
File "/var/task/numpy/lib/__init__.py", line 13, in <module>
from polynomial import *
File "/var/task/numpy/lib/polynomial.py", line 11, in <module>
import numpy.core.numeric as NX
AttributeError: 'module' object has no attribute 'core'
END RequestId: 36c62b93-b94f-11e5-9da7-83f24fc4b7ca
REPORT RequestId: 36c62b93-b94f-11e5-9da7-83f24fc4b7ca Duration: 112.49 ms Billed Duration: 200 ms Memory Size: 1536 MB Max Memory Used: 14 MB
```
I can't understand why Python does not find the `core` directory that is present in the folder structure.
EDIT:
Following @jarmod advice I've reduced the `lambda`function to:
```
def make_movie(event, context):
print('running make movie')
import numpy
```
I now have the following error:
```
START RequestId: 6abd7ef6-b9de-11e5-8aee-918ac0a06113 Version: $LATEST
running make movie
Error importing numpy: you should not try to import numpy from
its source directory; please exit the numpy source tree, and relaunch
your python intepreter from there.: ImportError
Traceback (most recent call last):
File "/var/task/process_movie.py", line 3, in make_movie
import numpy
File "/var/task/numpy/__init__.py", line 127, in <module>
raise ImportError(msg)
ImportError: Error importing numpy: you should not try to import numpy from
its source directory; please exit the numpy source tree, and relaunch
your python intepreter from there.
END RequestId: 6abd7ef6-b9de-11e5-8aee-918ac0a06113
REPORT RequestId: 6abd7ef6-b9de-11e5-8aee-918ac0a06113 Duration: 105.95 ms Billed Duration: 200 ms Memory Size: 1536 MB Max Memory Used: 14 MB
``` | I was also following your first link and managed to import **numpy** and **pandas** in a Lambda function this way (on Windows):
1. Started a (free-tier) **t2.micro** [EC2 instance](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance_linux) with 64-bit Amazon Linux AMI 2015.09.1 and used Putty to SSH in.
2. Tried the same **commands** you used and the one recommended by the Amazon article:
```
sudo yum -y update
sudo yum -y upgrade
sudo yum -y groupinstall "Development Tools"
sudo yum -y install blas --enablerepo=epel
sudo yum -y install lapack --enablerepo=epel
sudo yum -y install Cython --enablerepo=epel
sudo yum install python27-devel python27-pip gcc
```
3. Created the **virtual environment**:
```
virtualenv ~/env
source ~/env/bin/activate
```
4. Installed the **packages**:
```
sudo ~/env/bin/pip2.7 install numpy
sudo ~/env/bin/pip2.7 install pandas
```
5. Then, using WinSCP, I logged in and **downloaded** everything (except \_markerlib, pip\*, pkg\_resources, setuptools\* and easyinstall\*) from `/home/ec2-user/env/lib/python2.7/dist-packages`, and everything from `/home/ec2-user/env/lib64/python2.7/site-packages` from the EC2 instance.
6. I put all these folders and files into one **zip**, along with the .py file containing the Lambda function.
[illustration of all files copied](http://i.stack.imgur.com/3tOip.png)
7. Because this .zip is larger than 10 MB, I created an **S3 bucket** to store the file. I copied the file's link from there and pasted it at "Upload a .ZIP from Amazon S3" on the Lambda function.
With this, I could import numpy and pandas. I'm not familiar with moviepy, but scipy might already be tricky as Lambda has a **limit** for unzipped deployment package size at 262 144 000 bytes. I'm afraid numpy and scipy together are already over that. |
Python with...as for custom context manager | 34,749,943 | 20 | 2016-01-12T17:19:11Z | 34,749,982 | 7 | 2016-01-12T17:21:10Z | [
"python"
] | I wrote a simple context manager in Python for handling unit tests (and to try to learn context managers):
```
class TestContext(object):
test_count=1
def __init__(self):
self.test_number = TestContext.test_count
TestContext.test_count += 1
def __enter__(self):
pass
def __exit__(self, exc_type, exc_value, exc_traceback):
if exc_value == None:
print 'Test %d passed' %self.test_number
else:
print 'Test %d failed: %s' %(self.test_number, exc_value)
return True
```
If I write a test as follows, everything works okay.
```
test = TestContext()
with test:
print 'running test %d....' %test.test_number
raise Exception('this test failed')
```
However, if I try to use with...as, I don't get a reference to the TestContext() object. Running this:
```
with TestContext() as t:
print t.test_number
```
Raises the exception `'NoneType' object has no attribute 'test_number'`.
Where am I going wrong? | ```
def __enter__(self):
return self
```
will make it work. The value returned from this method will be assigned to the `as` variable.
See also [the Python doc](https://docs.python.org/2/reference/compound_stmts.html#grammar-token-with_stmt):
> If a target was included in the `with` statement, the return value from `__enter__()` is assigned to it.
If you only need the number, you can even change the context manager's logic to
```
class TestContext(object):
test_count=1
def __init__(self):
self.test_number = TestContext.test_count
TestContext.test_count += 1
def __enter__(self):
return self.test_number
def __exit__(self, exc_type, exc_value, exc_traceback):
if exc_value == None:
print 'Test %d passed' % self.test_number
else:
print 'Test %d failed: %s' % (self.test_number, exc_value)
return True
```
and then do
```
with TestContext() as test_number:
print test_number
``` |
Python with...as for custom context manager | 34,749,943 | 20 | 2016-01-12T17:19:11Z | 34,749,985 | 22 | 2016-01-12T17:21:18Z | [
"python"
] | I wrote a simple context manager in Python for handling unit tests (and to try to learn context managers):
```
class TestContext(object):
test_count=1
def __init__(self):
self.test_number = TestContext.test_count
TestContext.test_count += 1
def __enter__(self):
pass
def __exit__(self, exc_type, exc_value, exc_traceback):
if exc_value == None:
print 'Test %d passed' %self.test_number
else:
print 'Test %d failed: %s' %(self.test_number, exc_value)
return True
```
If I write a test as follows, everything works okay.
```
test = TestContext()
with test:
print 'running test %d....' %test.test_number
raise Exception('this test failed')
```
However, if I try to use with...as, I don't get a reference to the TestContext() object. Running this:
```
with TestContext() as t:
print t.test_number
```
Raises the exception `'NoneType' object has no attribute 'test_number'`.
Where am I going wrong? | [`__enter__` needs to return `self`](https://docs.python.org/2/reference/datamodel.html#object.__enter__).
> The with statement will bind this method's return value to the target(s) specified in the as clause of the statement, if any.
This will work.
```
class TestContext(object):
test_count=1
def __init__(self):
self.test_number = TestContext.test_count
TestContext.test_count += 1
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, exc_traceback):
if exc_value == None:
print 'Test %d passed' % self.test_number
else:
print 'Test %d failed: %s' % (self.test_number, exc_value)
return True
``` |
Read a python variable in a shell script? | 34,751,930 | 5 | 2016-01-12T19:12:16Z | 34,752,001 | 7 | 2016-01-12T19:17:02Z | [
"python",
"bash",
"shell"
] | My Python file has these two variables:
```
week_date = "01/03/16-01/09/16"
cust_id = "12345"
```
How can I read these into a shell script that takes in these two variables?
My current shell script requires manual editing of "dt" and "id". I want to read the Python variables into the shell script so I can just edit my Python parameter file rather than so many files.
shell file:
```
#!/bin/sh
dt="01/03/16-01/09/16"
cust_id="12345"
```
In a new Python file I could just import the parameter file. | Consider something akin to the following:
```
#!/bin/bash
# ^^^^ NOT /bin/sh, which doesn't have process substitution available.
python_script='
import sys
d = {} # create a context for variables
exec(open(sys.argv[1], "r").read()) in d # execute the Python code in that context
for k in sys.argv[2:]:
print "%s\0" % str(d[k]).split("\0")[0] # ...and extract your strings NUL-delimited
'
read_python_vars() {
local python_file=$1; shift
local varname
for varname; do
IFS= read -r -d '' "${varname#*:}"
done < <(python -c "$python_script" "$python_file" "${@%%:*}")
}
```
You might then use this as:
```
read_python_vars config.py week_date:dt cust_id:id
echo "Customer id is $id; date range is $dt"
```
...or, if you didn't want to rename the variables as they were read, simply:
```
read_python_vars config.py week_date cust_id
echo "Customer id is $cust_id; date range is $week_date"
```
---
Advantages:
* Unlike a naive regex-based solution (which would have trouble with some of the details of Python parsing -- try teaching `sed` to handle both raw and regular strings, and both single and triple quotes without making it into a hairball!) or a similar approach that used newline-delimited output from the Python subprocess, this will correctly handle any object for which `str()` gives a representation with no NUL characters that your shell script can use.
* Running content through the Python interpreter also means you can determine values programmatically -- for instance, you could have some Python code that asks your version control system for the last-change-date of relevant content.
Think about scenarios such as this one:
```
start_date = '01/03/16'
end_date = '01/09/16'
week_date = '%s-%s' % (start_date, end_date)
```
...using a Python interpreter to parse Python means you aren't restricting how people can update/modify your Python config file in the future.
Now, let's talk caveats:
* If your Python code has side effects, those side effects will obviously take effect (just as they would if you chose to `import` the file as a module in Python). Don't use this to extract configuration from a file whose contents you don't trust.
* Python strings are Pascal-style: They can contain literal NULs. Strings in shell languages are C-style: They're terminated by the first NUL character. Thus, some variables can exist in Python that cannot be represented in shell without nonliteral escaping. To prevent an object whose `str()` representation contains NULs from spilling forward into other assignments, this code terminates strings at their first NUL.
---
Now, let's talk about implementation details.
* `${@%%:*}` is an expansion of `$@` which trims all content after and including the first `:` in each argument, thus passing only the Python variable names to the interpreter. Similarly, `${varname#*:}` is an expansion which trims everything up to and including the first `:` from the variable name passed to `read`. See [the bash-hackers page on parameter expansion](http://wiki.bash-hackers.org/syntax/pe).
* Using `<(python ...)` is process substitution syntax: The `<(...)` expression evaluates to a filename which, when read, will provide output of that command. Using `< <(...)` redirects output from that file, and thus that command (the first `<` is a redirection, whereas the second is part of the `<(` token that starts a process substitution). Using this form to get output into a `while read` loop avoids the bug mentioned in [BashFAQ #24 ("I set variables in a loop that's in a pipeline. Why do they disappear after the loop terminates? Or, why can't I pipe data to read?")](http://mywiki.wooledge.org/BashFAQ/024).
* The `IFS= read -r -d ''` construct has a series of components, each of which makes the behavior of `read` more true to the original content:
+ Clearing `IFS` for the duration of the command prevents whitespace from being trimmed from the end of the variable's content.
+ Using `-r` prevents literal backslashes from being consumed by `read` itself rather than represented in the output.
+ Using `-d ''` sets the first character of the empty string `''` to be the record delimiter. Since C strings are NUL-terminated and the shell uses C strings, that character is a NUL. This ensures that variables' content can contain any non-NUL value, including literal newlines.
See [BashFAQ #001 ("How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?")](http://mywiki.wooledge.org/BashFAQ/001) for more on the process of reading record-oriented data from a string in bash. |
Difference between coroutine and future/task in Python 3.5? | 34,753,401 | 21 | 2016-01-12T20:43:00Z | 34,753,649 | 16 | 2016-01-12T20:57:17Z | [
"python",
"python-asyncio",
"python-3.5"
] | Let's say we have a dummy function:
```
async def foo(arg):
result = await some_remote_call(arg)
return result.upper()
```
What's the difference between:
```
coros = []
for i in range(5):
coros.append(foo(i))
loop = get_event_loop()
loop.run_until_complete(wait(coros))
```
And:
```
from asyncio import ensure_future
futures = []
for i in range(5):
futures.append(ensure_future(foo(i)))
loop = get_event_loop()
loop.run_until_complete(wait(futures))
```
*Note*: The example returns a result, but this isn't the focus of the question. When return value matters, use `gather()` instead of `wait()`.
Regardless of return value, I'm looking for clarity on `ensure_future()`. `wait(coros)` and `wait(futures)` both run the coroutines, so when and why should a coroutine be wrapped in `ensure_future`?
Basically, what's the Right Way (tm) to run a bunch of non-blocking operations using Python 3.5's `async`?
For extra credit, what if I want to batch the calls? For example, I need to call `some_remote_call(...)` 1000 times, but I don't want to crush the web server/database/etc with 1000 simultaneous connections. This is doable with a thread or process pool, but is there a way to do this with `asyncio`? | A coroutine is a generator function that can both yield values and accept values from the outside. The benefit of using a coroutine is that we can pause the execution of a function and resume it later. In case of a network operation, it makes sense to pause the execution of a function while we're waiting for the response. We can use the time to run some other functions.
A future is like a `Promise` object from JavaScript: a placeholder for a value that will be materialized in the future. In the above-mentioned case, while waiting on network I/O, a function can give us a container, a promise that it will fill with the value when the operation completes. We hold on to the future object, and when it's fulfilled, we can call a method on it to retrieve the actual result.
**Direct Answer:** You don't need `ensure_future` if you don't need the results. They are good if you need the results or want to retrieve any exceptions that occurred.
**Extra Credits:** I would choose [`run_in_executor`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.BaseEventLoop.run_in_executor) and pass an `Executor` instance to control the number of max workers.
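An alternative purely-asyncio pattern for the batching part (a sketch of my own, not from the original answer, with a dummy `some_remote_call`) is to guard each call with an `asyncio.Semaphore` so that only N coroutines are in flight at once:

```python
import asyncio

async def some_remote_call(arg):
    # dummy stand-in for the real network call
    await asyncio.sleep(0)
    return arg * 2

async def bounded_call(semaphore, arg):
    # the semaphore caps how many calls run concurrently
    async with semaphore:
        return await some_remote_call(arg)

async def main():
    semaphore = asyncio.Semaphore(50)  # at most 50 simultaneous connections
    tasks = [bounded_call(semaphore, i) for i in range(1000)]
    return await asyncio.gather(*tasks)

loop = asyncio.new_event_loop()
results = loop.run_until_complete(main())
loop.close()
print(len(results))  # 1000
```

This makes 1000 calls overall but never more than 50 at the same time, without needing a thread or process pool.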
### Explanations and sample code
In the first example, you are using coroutines. The `wait` function takes a bunch of coroutines and combines them together. So `wait()` finishes when all the coroutines are exhausted (completed/finished returning all the values).
```
loop = get_event_loop()
loop.run_until_complete(wait(coros))
```
The `run_until_complete` method would make sure that the loop is alive until the execution is finished. Please notice how you are not getting the results of the async execution in this case.
In the second example, you are using the `ensure_future` function to wrap a coroutine and return a `Task` object which is a kind of `Future`. The coroutine is scheduled to be executed in the main event loop when you call `ensure_future`. The returned future/task object doesn't yet have a value but over time, when the network operations finish, the future object will hold the result of the operation.
```
from asyncio import ensure_future
futures = []
for i in range(5):
futures.append(ensure_future(foo(i)))
loop = get_event_loop()
loop.run_until_complete(wait(futures))
```
So in this example, we're doing the same thing except we're using futures instead of just using coroutines.
Let's look at an example on how to use asyncio/coroutines/futures:
```
import asyncio
async def slow_operation():
await asyncio.sleep(1)
return 'Future is done!'
def got_result(future):
print(future.result())
# We have result, so let's stop
loop.stop()
loop = asyncio.get_event_loop()
task = loop.create_task(slow_operation())
task.add_done_callback(got_result)
# We run forever
loop.run_forever()
```
Here, we have used the `create_task` method on the `loop` object, whereas `ensure_future` would schedule the task on the main event loop; `create_task` lets us schedule a coroutine on a loop of our choice.
We also see the concept of adding a callback using the `add_done_callback` method on the task object.
A `Task` is `done` when the coroutine returns a value, raises an exception or gets cancelled. There are methods to check these incidents.
I have written some blog posts on these topics which might help:
* <http://masnun.com/2015/11/13/python-generators-coroutines-native-coroutines-and-async-await.html>
* <http://masnun.com/2015/11/20/python-asyncio-future-task-and-the-event-loop.html>
* <http://masnun.com/2015/12/07/python-3-using-blocking-functions-or-codes-with-asyncio.html>
Of course, you can find more details on the official manual: <https://docs.python.org/3/library/asyncio.html> |
Most pythonic way to interleave two strings | 34,756,145 | 96 | 2016-01-13T00:06:53Z | 34,756,161 | 43 | 2016-01-13T00:08:35Z | [
"python",
"string",
"python-2.7",
"python-3.x"
] | What's the most pythonic way to mesh two strings together?
For example:
Input:
```
u = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
l = 'abcdefghijklmnopqrstuvwxyz'
```
Output:
```
'AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz'
``` | With `join()` and `zip()`.
```
>>> ''.join(''.join(item) for item in zip(u,l))
'AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz'
``` |
Most pythonic way to interleave two strings | 34,756,145 | 96 | 2016-01-13T00:06:53Z | 34,756,226 | 118 | 2016-01-13T00:13:42Z | [
"python",
"string",
"python-2.7",
"python-3.x"
] | What's the most pythonic way to mesh two strings together?
For example:
Input:
```
u = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
l = 'abcdefghijklmnopqrstuvwxyz'
```
Output:
```
'AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz'
``` | For me, the most pythonic\* way is the following which *pretty much does the same thing* but uses the `+` operator for concatenating the individual characters in each string:
```
res = "".join(i + j for i, j in zip(u, l))
print(res)
# 'AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz'
```
It is also faster than using two `join()` calls:
```
In [5]: l1 = 'A' * 1000000; l2 = 'a' * 1000000
In [6]: %timeit "".join("".join(item) for item in zip(l1, l2))
1 loops, best of 3: 442 ms per loop
In [7]: %timeit "".join(i + j for i, j in zip(l1, l2))
1 loops, best of 3: 360 ms per loop
```
Faster approaches exist, but they often obfuscate the code.
**Note:** If the two input strings are *not* the same length then the longer one will be truncated as **[`zip`](https://docs.python.org/3/library/functions.html#zip)** stops iterating at the end of the shorter string. In this case instead of `zip` one should use **[`zip_longest`](https://docs.python.org/3/library/itertools.html#itertools.zip_longest)** (**[`izip_longest`](https://docs.python.org/2.7/library/itertools.html#itertools.izip_longest)** in Python 2) from the [`itertools`](https://docs.python.org/3/library/itertools.html) module to ensure that both strings are fully exhausted.
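For instance, a quick sketch of the `zip_longest` variant on Python 3, padding the shorter string instead of truncating the longer one:

```python
from itertools import zip_longest

u = 'ABCDE'
l = 'abc'
# fillvalue='' pads the shorter string so nothing is dropped
res = "".join(i + j for i, j in zip_longest(u, l, fillvalue=''))
print(res)  # AaBbCcDE
```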
---
\*To take a quote from ***[the Zen of Python](https://www.python.org/dev/peps/pep-0020/)***: **Readability counts**.
Pythonic = **readability** for me; `i + j` is just visually parsed more easily, at least for my eyes. |
Most pythonic way to interleave two strings | 34,756,145 | 96 | 2016-01-13T00:06:53Z | 34,756,604 | 15 | 2016-01-13T00:53:27Z | [
"python",
"string",
"python-2.7",
"python-3.x"
] | What's the most pythonic way to mesh two strings together?
For example:
Input:
```
u = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
l = 'abcdefghijklmnopqrstuvwxyz'
```
Output:
```
'AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz'
``` | If you want the fastest way, you can combine [itertools](https://docs.python.org/2/library/itertools.html) with `operator.add`:
```
In [36]: from operator import add
In [37]: from itertools import starmap, izip
In [38]: timeit "".join([i + j for i, j in izip(l1, l2)])
1 loops, best of 3: 142 ms per loop
In [39]: timeit "".join(starmap(add, izip(l1,l2)))
1 loops, best of 3: 117 ms per loop
In [40]: timeit "".join(["".join(item) for item in zip(l1, l2)])
1 loops, best of 3: 196 ms per loop
In [41]: "".join(starmap(add, izip(l1,l2))) == "".join([i + j for i, j in izip(l1, l2)]) == "".join(["".join(item) for item in izip(l1, l2)])
Out[42]: True
```
But combining `izip` and `chain.from_iterable` is faster again
```
In [2]: from itertools import chain, izip
In [3]: timeit "".join(chain.from_iterable(izip(l1, l2)))
10 loops, best of 3: 98.7 ms per loop
```
There is also a substantial difference between
`chain(*` and `chain.from_iterable(...`.
```
In [5]: timeit "".join(chain(*izip(l1, l2)))
1 loops, best of 3: 212 ms per loop
```
Passing a generator to `join` is never a win: it is always going to be slower, because Python will first build a list from the generator's contents. `join` makes two passes over the data, one to figure out the total size needed and one to actually do the join, which would not be possible with a generator:
[join.h](https://github.com/python/cpython/blob/master/Objects/stringlib/join.h#L54):
```
/* Here is the general case. Do a pre-pass to figure out the total
* amount of space we'll need (sz), and see whether all arguments are
* bytes-like.
*/
```
Also if you have different length strings and you don't want to lose data you can use [izip\_longest](https://docs.python.org/2/library/itertools.html#itertools.izip_longest) :
```
In [22]: from itertools import izip_longest
In [23]: a,b = "hlo","elworld"
In [24]: "".join(chain.from_iterable(izip_longest(a, b,fillvalue="")))
Out[24]: 'helloworld'
```
For python 3 it is called `zip_longest`
But for Python 2, Veedrac's suggestion is by far the fastest:
```
In [18]: %%timeit
res = bytearray(len(u) * 2)
res[::2] = u
res[1::2] = l
str(res)
....:
100 loops, best of 3: 2.68 ms per loop
``` |
Most pythonic way to interleave two strings | 34,756,145 | 96 | 2016-01-13T00:06:53Z | 34,756,930 | 53 | 2016-01-13T01:30:36Z | [
"python",
"string",
"python-2.7",
"python-3.x"
] | What's the most pythonic way to mesh two strings together?
For example:
Input:
```
u = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
l = 'abcdefghijklmnopqrstuvwxyz'
```
Output:
```
'AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz'
``` | ## Faster Alternative
Another way:
```
res = [''] * len(u) * 2
res[::2] = u
res[1::2] = l
print(''.join(res))
```
Output:
```
'AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz'
```
## Speed
Looks like it is faster:
```
%%timeit
res = [''] * len(u) * 2
res[::2] = u
res[1::2] = l
''.join(res)
100000 loops, best of 3: 4.75 µs per loop
```
than the fastest solution so far:
```
%timeit "".join(list(chain.from_iterable(zip(u, l))))
100000 loops, best of 3: 6.52 µs per loop
```
Also for the larger strings:
```
l1 = 'A' * 1000000; l2 = 'a' * 1000000
%timeit "".join(list(chain.from_iterable(zip(l1, l2))))
1 loops, best of 3: 151 ms per loop
%%timeit
res = [''] * len(l1) * 2
res[::2] = l1
res[1::2] = l2
''.join(res)
10 loops, best of 3: 92 ms per loop
```
Python 3.5.1.
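The approach is easy to wrap in a small helper (a hypothetical function name; it assumes equal-length inputs):

```python
# Hypothetical helper wrapping the slice-assignment trick shown above.
def interleave(u, l):
    res = [''] * (len(u) * 2)   # pre-size the result list
    res[::2] = u                # even slots get characters of u
    res[1::2] = l               # odd slots get characters of l
    return ''.join(res)

print(interleave('ABC', 'abc'))  # AaBbCc
```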
## Variation for strings with different lengths
```
u = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
l = 'abcdefghijkl'
```
### Shorter one determines length (`zip()` equivalent)
```
min_len = min(len(u), len(l))
res = [''] * min_len * 2
res[::2] = u[:min_len]
res[1::2] = l[:min_len]
print(''.join(res))
```
Output:
```
AaBbCcDdEeFfGgHhIiJjKkLl
```
### Longer one determines length (`itertools.zip_longest(fillvalue='')` equivalent)
```
min_len = min(len(u), len(l))
res = [''] * min_len * 2
res[::2] = u[:min_len]
res[1::2] = l[:min_len]
res += u[min_len:] + l[min_len:]
print(''.join(res))
```
Output:
```
AaBbCcDdEeFfGgHhIiJjKkLlMNOPQRSTUVWXYZ
``` |
Most pythonic way to interleave two strings | 34,756,145 | 96 | 2016-01-13T00:06:53Z | 34,761,165 | 13 | 2016-01-13T07:55:50Z | [
"python",
"string",
"python-2.7",
"python-3.x"
] | What's the most pythonic way to mesh two strings together?
For example:
Input:
```
u = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
l = 'abcdefghijklmnopqrstuvwxyz'
```
Output:
```
'AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz'
``` | On Python 2, by *far* the faster way to do things, at ~3x the speed of list slicing for small strings and ~30x for long ones, is
```
res = bytearray(len(u) * 2)
res[::2] = u
res[1::2] = l
str(res)
```
This wouldn't work on Python 3, though. You could implement something like
```
res = bytearray(len(u) * 2)
res[::2] = u.encode("ascii")
res[1::2] = l.encode("ascii")
res.decode("ascii")
```
but by then you've already lost the gains over list slicing for small strings (it's still 20x the speed for long strings) and this doesn't even work for non-ASCII characters yet.
FWIW, if you *are* doing this on massive strings and need every cycle, *and* for some reason have to use Python strings... here's how to do it:
```
res = bytearray(len(u) * 4 * 2)
u_utf32 = u.encode("utf_32_be")
res[0::8] = u_utf32[0::4]
res[1::8] = u_utf32[1::4]
res[2::8] = u_utf32[2::4]
res[3::8] = u_utf32[3::4]
l_utf32 = l.encode("utf_32_be")
res[4::8] = l_utf32[0::4]
res[5::8] = l_utf32[1::4]
res[6::8] = l_utf32[2::4]
res[7::8] = l_utf32[3::4]
res.decode("utf_32_be")
```
Special-casing the common case of smaller types will help too. FWIW, this is only 3x the speed of list slicing for long strings and a factor of 4 to 5 *slower* for small strings.
Either way I prefer the `join` solutions, but since timings were mentioned elsewhere I thought I might as well join in. |
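The eight slice assignments can be folded into a loop; a hypothetical wrapper that also works for non-ASCII input, since UTF-32-BE covers all of Unicode:

```python
# Hypothetical wrapper around the byte-level UTF-32-BE interleave above.
def interleave_utf32(u, l):
    res = bytearray(len(u) * 4 * 2)   # 4 bytes per code point, two inputs
    u32 = u.encode("utf_32_be")
    l32 = l.encode("utf_32_be")
    for k in range(4):                # each code point occupies 4 bytes
        res[k::8] = u32[k::4]         # u's bytes fill the even code points
        res[4 + k::8] = l32[k::4]     # l's bytes fill the odd code points
    return res.decode("utf_32_be")

print(interleave_utf32("ABC", "abc"))  # AaBbCc
```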
Most pythonic way to interleave two strings | 34,756,145 | 96 | 2016-01-13T00:06:53Z | 34,784,700 | 9 | 2016-01-14T08:23:24Z | [
"python",
"string",
"python-2.7",
"python-3.x"
] | What's the most pythonic way to mesh two strings together?
For example:
Input:
```
u = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
l = 'abcdefghijklmnopqrstuvwxyz'
```
Output:
```
'AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz'
``` | You could also do this using [`map`](https://docs.python.org/3/library/functions.html#map) and [`operator.add`](https://docs.python.org/3/library/operator.html#operator.add):
```
from operator import add
u = 'AAAAA'
l = 'aaaaa'
s = "".join(map(add, u, l))
```
**Output**:
```
'AaAaAaAaAa'
```
What `map` does is take each element from the first iterable `u` together with the corresponding element from the second iterable `l`, and apply the function supplied as the first argument, `add`. Then `join` just concatenates the results. |
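One behavioural detail worth knowing: on Python 3, `map` stops at the end of the shorter iterable, so unequal-length inputs are silently truncated (a small sketch):

```python
from operator import add

u, l = 'ABCDE', 'abcde'
s = "".join(map(add, u, l))
print(s)  # AaBbCcDdEe

# On Python 3, map stops at the shorter input rather than raising:
t = "".join(map(add, 'ABC', 'abcdef'))
print(t)  # AaBbCc
```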
Any reason to use "while 1, do something, break" in Python? | 34,756,466 | 4 | 2016-01-13T00:38:18Z | 34,756,566 | 7 | 2016-01-13T00:50:00Z | [
"python",
"while-loop",
"sympy"
] | In the Python library [SymPy](http://www.sympy.org/en/index.html) I try to understand the function `partitions()` in [`sympy.utilities.iterables`](https://github.com/sympy/sympy/blob/master/sympy%2Futilities%2Fiterables.py):
It starts like this:
```
def partitions(n, m=None, k=None, size=False):
"""Generate all partitions of integer n (>= 0).
```
I am confused about the following while loop, because it looks pointless. If I remove the `while 1:` and the `break` it should make no difference. However, I expect that the people who develop SymPy know what they are doing, and don't make very simple mistakes. So does this make some sense that I fail to see?
```
while 1:
# Let i be the smallest key larger than 1. Reuse one instance of i.
i = keys[-1]
newcount = ms[i] = ms[i] - 1
reuse += i
if newcount == 0:
del keys[-1], ms[i]
room += 1
# Break the remainder into pieces of size i-1.
i -= 1
q, r = divmod(reuse, i)
need = q + bool(r)
if need > room:
if not keys:
return
continue
ms[i] = q
keys.append(i)
if r:
ms[r] = 1
keys.append(r)
break
```
For learning purposes I have simplified the whole function, and `my_partitions(n)` gives the same results as `partitions(n)`.
```
def my_partitions(n):
ms = {n: 1}
keys = [n]
yield ms
while keys != [1]:
# Reuse any 1's.
if keys[-1] == 1:
del keys[-1]
reuse = ms.pop(1)
else:
reuse = 0
# Let i be the smallest key larger than 1. Reuse one instance of i.
i = keys[-1]
ms[i] -= 1
reuse += i
if ms[i] == 0:
del keys[-1], ms[i]
# Break the remainder into pieces of size i-1.
i -= 1
q, r = divmod(reuse, i)
ms[i] = q
keys.append(i)
if r:
ms[r] = 1
keys.append(r)
yield ms
``` | This is a dirty hack to bring [goto](https://en.wikipedia.org/wiki/Goto) to Python. The `while 1:` line is the label and the `continue` statement is the goto.
Please do not write code like that if you can avoid it. If you *must* do this, at least make it `while True:` since the argument to `while` is ordinarily a boolean value. |
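Stripped to a minimal, hypothetical skeleton, the idiom is a retry loop: the `while 1:` acts as the label and `continue` as the goto, with `break` reached exactly once when the retry condition fails.

```python
# Skeleton of the while 1 / continue / break "goto" idiom.
attempts = []
n = 7
while 1:
    n -= 2
    attempts.append(n)
    if n > 1:
        continue  # "goto" back to the top of the loop
    break         # reached exactly once, when no retry is needed

print(attempts)  # [5, 3, 1]
```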
Delete multiple dictionaries in a list | 34,763,208 | 5 | 2016-01-13T09:45:42Z | 34,763,252 | 11 | 2016-01-13T09:47:24Z | [
"python",
"list",
"dictionary"
] | I have been trying to delete multiple dictionaries in a list but I can only delete one at a time.
Below is the main code I am working on. Records is the list of dictionaries. I want to delete dictionaries that have 0 in them.
```
Records = [{'Name':'Kelvin','Price': 0},{'Name': 'Michael','Price':10}]
```
I want to delete dictionaries with Prices of 0
```
def deleteUnsold(self):
for d in records:
for key, value in d.items():
if d['Price'] == 0:
records.remove(d)
``` | Use a [list comprehension](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions) with an `if` condition
```
>>> Records = [{'Name':'Kelvin','Price': 0},{'Name': 'Michael','Price':10}]
>>> [i for i in Records if i['Price'] != 0]
[{'Price': 10, 'Name': 'Michael'}]
```
Check out [Python: if/else in list comprehension?](http://stackoverflow.com/questions/4260280/python-if-else-in-list-comprehension) to learn more about using a conditional within a list
comprehension.
---
Note that [as mentioned [below](http://stackoverflow.com/questions/34763208/delete-multiple-dictionaries-in-a-list/34763252?noredirect=1#comment57271724_34763252)] you can also leave out checking for the value `0`. However this also works if `Price` is `None`, hence you may use the first alternative if you are not sure of the data type of the value of `Price`
```
>>> [i for i in Records if i['Price']]
[{'Price': 10, 'Name': 'Michael'}]
``` |
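If other code holds references to the same list (as a method on `self` often implies), assign to the full slice so the existing list is updated in place rather than the name being rebound to a new list:

```python
Records = [{'Name': 'Kelvin', 'Price': 0}, {'Name': 'Michael', 'Price': 10}]
alias = Records                                    # another reference
Records[:] = [i for i in Records if i['Price'] != 0]
print(alias)  # the alias sees the change too
```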
create list by -5 and + 5 from given number | 34,765,433 | 3 | 2016-01-13T11:26:02Z | 34,765,470 | 13 | 2016-01-13T11:27:21Z | [
"python",
"list",
"python-2.7",
"range"
] | Suppose `num = 10`
want output like `[5, 6, 7, 8, 9, 10, 11, 12, 13, 14]`
Tried : `range(num-5, num) + range(num, num+5)`
Is there an other way to achieve this ? | Use `range`'s start and stop parameters, like this
```
>>> range(num - 5, num + 5)
[5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
``` |
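Note for Python 3 users: there `range()` returns a lazy range object, so wrap the result in `list()` when an actual list is required:

```python
num = 10
result = list(range(num - 5, num + 5))  # list() needed on Python 3
print(result)  # [5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
```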
Python: Splat/unpack operator * in python cannot be used in an expression? | 34,766,677 | 19 | 2016-01-13T12:24:51Z | 34,767,282 | 25 | 2016-01-13T12:53:07Z | [
"python",
"python-2.7"
] | Does anybody know the reasoning as to why the unary (`*`) operator cannot be used in an expression involving iterators/lists/tuples?
Why is it limited only to function unpacking? Or am I wrong in thinking that?
For example:
```
>>> [1,2,3, *[4,5,6]]
File "<stdin>", line 1
[1,2,3, *[4,5,6]]
^
SyntaxError: invalid syntax
```
Why doesn't the `*` operator:
```
[1, 2, 3, *[4, 5, 6]] give [1, 2, 3, 4, 5, 6]
```
whereas when the `*` operator is used with a function call it does expand:
```
f(*[4, 5, 6]) is equivalent to f(4, 5, 6)
```
There is a similarity between the `+` and the `*` when using lists but not when extending a list with another type.
For example:
```
# This works
gen = (x for x in range(10))
def hello(*args):
print args
hello(*gen)
# but this does not work
[] + gen
TypeError: can only concatenate list (not "generator") to list
``` | Not allowing unpacking in Python `2.x` has noted and fixed in Python `3.5` which now has this feature as described in **[PEP 448](https://docs.python.org/3/whatsnew/3.5.html#whatsnew-pep-448)**:
```
Python 3.5.0 (v3.5.0:374f501f4567, Sep 13 2015, 02:27:37) on Windows (64 bits).
>>> [1, 2, 3, *[4, 5, 6]]
[1, 2, 3, 4, 5, 6]
```
[Here](https://www.python.org/dev/peps/pep-0448/#disadvantages) are some explanations for the rationale behind this change. |
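A few more PEP 448 examples, runnable on Python 3.5+: unpacking works in list, dict, set and tuple displays, and accepts any iterable, including generators:

```python
combined = [1, 2, 3, *[4, 5, 6]]      # list display
merged = {'a': 1, **{'b': 2}}         # dict display
gen = (x * x for x in range(3))
squares = [0, *gen]                   # any iterable works
print(combined, merged, squares)
```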
What is the Matlab equivalent to Python's `not in`? | 34,768,388 | 5 | 2016-01-13T13:48:25Z | 34,768,427 | 12 | 2016-01-13T13:49:50Z | [
"python",
"matlab"
] | In Python, one could get elements that are exclusive to `lst1` using:
```
lst1=['a','b','c']
lst2=['c','d','e']
lst3=[]
for i in lst1:
if i not in lst2:
lst3.append(i)
```
What would be the Matlab equivalent? | You are looking for MATLAB's [`setdiff`](http://www.mathworks.com/help/matlab/ref/setdiff.html) -
```
setdiff(lst1,lst2)
```
Sample run -
```
>> lst1={'a','b','c'};
>> lst2={'c','d','e'};
>> setdiff(lst1,lst2)
ans =
'a' 'b'
```
Verify with Python run -
```
In [161]: lst1=['a','b','c']
...: lst2=['c','d','e']
...: lst3=[]
...: for i in lst1:
...: if i not in lst2:
...: lst3.append(i)
...:
In [162]: lst3
Out[162]: ['a', 'b']
```
In fact, you have `setdiff` in Python's [`NumPy module`](http://www.numpy.org/) as well, as [`numpy.setdiff1d`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.setdiff1d.html). The equivalent implementation with it would be -
```
In [166]: import numpy as np
In [167]: np.setdiff1d(lst1,lst2) # Output as an array
Out[167]:
array(['a', 'b'],
dtype='|S1')
In [168]: np.setdiff1d(lst1,lst2).tolist() # Output as list
Out[168]: ['a', 'b']
``` |
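As an aside, the Python loop in the question collapses to a one-line list comprehension:

```python
lst1 = ['a', 'b', 'c']
lst2 = ['c', 'd', 'e']
lst3 = [i for i in lst1 if i not in lst2]
print(lst3)  # ['a', 'b']
```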
how to pick random items from a list while avoiding picking the same item in a row | 34,769,801 | 3 | 2016-01-13T14:54:41Z | 34,769,912 | 8 | 2016-01-13T14:59:27Z | [
"python",
"list",
"random"
] | I want to iterate through a list, picking random values. However, I want the item that has been picked to be removed from the list for the next trial, so that I can avoid picking the same item twice in a row; it should be added back again afterwards.
Please help me by showing this with the simple example below.
Thank you
```
import random
l = [1,2,3,4,5,6,7,8]
for i in l:
print random.choice(l)
``` | Both of the following generators work for lists with non-unique elements as well:
```
def choice_without_repetition(lst):
prev = None
while True:
i = random.randrange(len(lst))
if i != prev:
yield lst[i]
prev = i
```
or
```
def choice_without_repetition(lst):
i = 0
while True:
i = (i + random.randrange(1, len(lst))) % len(lst)
yield lst[i]
```
Usage:
```
lst = [1,2,3,4,5,6,7,8]
for x in choice_without_repetition(lst):
print x
``` |
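A bounded Python 3 usage of the first generator (the question's loop is Python 2; here `islice` caps the otherwise infinite stream):

```python
import random
from itertools import islice

def choice_without_repetition(lst):
    # Same generator as above: never yields the same index twice in a row.
    prev = None
    while True:
        i = random.randrange(len(lst))
        if i != prev:
            yield lst[i]
        prev = i

lst = [1, 2, 3, 4, 5, 6, 7, 8]
draws = list(islice(choice_without_repetition(lst), 20))
print(draws)
```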
matplotlib taking time when being imported | 34,771,191 | 52 | 2016-01-13T15:54:39Z | 34,999,763 | 57 | 2016-01-25T18:33:26Z | [
"python",
"matplotlib"
] | I just upgraded to the latest stable release of `matplotlib` (1.5.1) and everytime I import matplotlib I get this message:
```
/usr/local/lib/python2.7/dist-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
```
... which always stalls for a few seconds.
Is this the expected behaviour? Was it the same also before, but just without the printed message? | As tom suggested in the comment above, deleting the files `fontList.cache`, `fontList.py3k.cache` and `tex.cache` solves the problem. In my case the files were under `~/.matplotlib`. |
How to config Django using pymysql as driver? | 34,777,755 | 3 | 2016-01-13T21:50:38Z | 34,778,155 | 9 | 2016-01-13T22:18:23Z | [
"python",
"mysql",
"django",
"pymysql"
] | I'm new to Django. It took me a whole afternoon to configure the MySQL engine. I am very confused about the database engine and the database driver. Is the **engine** also the **driver**? All the tutorials say that the ENGINE should be 'django.db.backends.mysql', but how does the ENGINE decide which driver is used to connect to MySQL?
Every time it says 'django.db.backends.mysql'. Sadly I can't install MySQLdb or mysqlclient, but PyMySQL and the official *mysql connector 2.1.3* have been installed. How can I set the driver to PyMySQL or mysql connector?
Many thanks!
* OS: OS X Al Capitan
* Python: 3.5
* Django: 1.9
This question is not yet solved:
Is the **ENGINE** also the **DRIVER**? | [You can import `pymysql` so it presents as MySQLdb](http://www.codes9.com/programing-language/python/python3-5-django1-8-5-the-solution-of-the-import-pymysql-pymysql-install_as_mysqldb-installation-2/). You'll need to do this *before* any Django code is run, so put this in your `manage.py` file:
```
import pymysql
pymysql.install_as_MySQLdb()
``` |
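For completeness, a hypothetical `settings.py` fragment (the database name, user, and password are placeholders): the `ENGINE` stays the standard `django.db.backends.mysql` backend, and the shim above makes that backend's internal `import MySQLdb` resolve to PyMySQL.

```python
# settings.py -- placeholder credentials; ENGINE is the standard MySQL backend
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydatabase',
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        'HOST': 'localhost',
        'PORT': '3306',
    }
}
```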
Anaconda Python installation error | 34,780,267 | 16 | 2016-01-14T01:37:32Z | 34,796,239 | 44 | 2016-01-14T17:42:08Z | [
"python",
"python-2.7",
"installation",
"anaconda"
] | I get the following error during Python 2.7 64-bit windows installation. I previously installed python 3.5 64-bit and it worked fine. But during python 2.7 installation i get this error:
```
Traceback (most recent call last):
File "C:\Anaconda2\Lib\_nsis.py", line 164, in <module> main()
File "C:\Anaconda2\Lib\_nsis.py", line 150, in main
mk_menus(remove=False)
File "C:\Anaconda2\Lib\_nsis.py", line 94, in mk_menus
err("Traceback:\n%s\n" % traceback.format_exc(20))
IOError: [Errno 9] Bad file descriptor
```
Kindly help me out. | I had the same problem today. I did the following to get this fixed:
First, open a DOS prompt with admin rights.
Then, go to your Anaconda2\Scripts folder.
Then, type in:
```
conda update conda
```
and allow all updates. One of the updates should be menuinst.
Then, change to the Anaconda2\Lib directory, and type in the following command:
```
..\python _nsis.py mkmenus
```
Wait for this to complete, then check your Start menu for the new shortcuts.
Steve |
Sort at various levels in Python | 34,784,220 | 9 | 2016-01-14T07:53:11Z | 34,784,358 | 13 | 2016-01-14T08:01:42Z | [
"python",
"list",
"sorting"
] | I am trying to sort a list of tuples like these:
```
[('Pineapple', 1), ('Orange', 3), ('Banana', 1), ('Apple', 1), ('Cherry', 2)]
```
The sorted list should be:
```
[('Orange', 3), ('Cherry', 2), ('Apple', 1), ('Banana', 1), ('Pineapple', 1)]
```
So, here the list should first be sorted based on `tuple[1]` in descending order; then, if the `tuple[1]` values match, as for `Apple`, `Banana` & `Pineapple`, the list should be further sorted based on `tuple[0]` in ascending order.
I have tried the possible ways-
```
top_n.sort(key = operator.itemgetter(1, 0), reverse = True)
# Output: [(Orange, 3), (Cherry, 2), (Pineapple, 1), (Banana, 1), (Apple, 1)]
```
as `"reverse = True"`, Pineapple, then Banana,...
I finally had to come up with a solution:
```
top_n.sort(key = operator.itemgetter(0), reverse = False)
top_n.sort(key = operator.itemgetter(1), reverse = True)
```
Is there any better way to get to the solution like my 1st approach. I am trying to explore more about Python, thus seeking such kind of solution. | Have your key return a tuple of the numeric value *negated*, and then the string. By negating, your numbers will be sorted in descending order, while the strings are sorted in ascending order:
```
top_n.sort(key=lambda t: (-t[1], t[0]))
```
Yes, this is a bit of a hack, but works anywhere you need to sort by two criteria in opposite directions, and one of those criteria is numeric.
Demo:
```
>>> top_n = [('Pineapple', 1), ('Orange', 3), ('Banana', 1), ('Apple', 1), ('Cherry', 2)]
>>> sorted(top_n, key=lambda t: (-t[1], t[0]))
[('Orange', 3), ('Cherry', 2), ('Apple', 1), ('Banana', 1), ('Pineapple', 1)]
``` |
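If the descending key were not numeric (so the negation trick wouldn't apply), you can instead exploit the stability of Python's sort and do two passes, least-significant key first:

```python
top_n = [('Pineapple', 1), ('Orange', 3), ('Banana', 1), ('Apple', 1), ('Cherry', 2)]
top_n.sort(key=lambda t: t[0])                 # secondary key, ascending
top_n.sort(key=lambda t: t[1], reverse=True)   # primary key, descending
print(top_n)
```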
Updating a sliced list | 34,784,558 | 15 | 2016-01-14T08:15:11Z | 34,784,590 | 19 | 2016-01-14T08:16:50Z | [
"python",
"list"
] | I thought I understood Python slicing operations, but when I tried to update a sliced list, I got confused:
```
>>> foo = [1, 2, 3, 4]
>>> foo[:1] = ['one'] # OK, foo updated
>>> foo
['one', 2, 3, 4]
>>> foo[:][1] = 'two' # why foo not updated?
>>> foo
['one', 2, 3, 4]
>>> foo[:][2:] = ['three', 'four'] # Again, foo not updated
>>> foo
['one', 2, 3, 4]
```
Why isn't foo updated after `foo[:][1] = 'two'`?
**Update:** Maybe I didn't explain my questions clearly. I know when slicing, a new list is created. My doubt is why a slicing assignment updates the list (e.g. `foo[:1] = ['one']`), but if there are two levels of slicing, it doesn't update the original list (e.g. `foo[:][2:] = ['three', 'four']`). | `foo[:]` is a copy of `foo`. You mutated the copy. |
Updating a sliced list | 34,784,558 | 15 | 2016-01-14T08:15:11Z | 34,784,864 | 13 | 2016-01-14T08:34:58Z | [
"python",
"list"
] | I thought I understood Python slicing operations, but when I tried to update a sliced list, I got confused:
```
>>> foo = [1, 2, 3, 4]
>>> foo[:1] = ['one'] # OK, foo updated
>>> foo
['one', 2, 3, 4]
>>> foo[:][1] = 'two' # why foo not updated?
>>> foo
['one', 2, 3, 4]
>>> foo[:][2:] = ['three', 'four'] # Again, foo not updated
>>> foo
['one', 2, 3, 4]
```
Why isn't foo updated after `foo[:][1] = 'two'`?
**Update:** Maybe I didn't explain my questions clearly. I know when slicing, a new list is created. My doubt is why a slicing assignment updates the list (e.g. `foo[:1] = ['one']`), but if there are two levels of slicing, it doesn't update the original list (e.g. `foo[:][2:] = ['three', 'four']`). | The main thing to notice here is that `foo[:]` will return a copy of itself and then the indexing `[1]` will be applied *on the copied list that was returned*
```
# indexing is applied on copied list
(foo[:])[1] = 'two'
^
copied list
```
You can view this if you retain a reference to the copied list. So, the `foo[:][1] = 'two'` operation can be re-written as:
```
foo = [1, 2, 3, 4]
# the following is similar to foo[:][1] = 'two'
copy_foo = foo[:]
copy_foo[1] = 'two'
```
Now, `copy_foo` has been altered:
```
print(copy_foo)
# [1, 'two', 3, 4]
```
But, `foo` remains the same:
```
print(foo)
# [1, 2, 3, 4]
```
In your case, *you didn't name the intermediate result* from copying the `foo` list with `foo[:]`, that is, you didn't keep a reference to it. After the assignment to `'two'` is perfomed with `foo[:][1] = 'two'`, *the intermediate copied list ceases to exist*. |
Updating a sliced list | 34,784,558 | 15 | 2016-01-14T08:15:11Z | 34,790,007 | 16 | 2016-01-14T12:46:34Z | [
"python",
"list"
] | I thought I understood Python slicing operations, but when I tried to update a sliced list, I got confused:
```
>>> foo = [1, 2, 3, 4]
>>> foo[:1] = ['one'] # OK, foo updated
>>> foo
['one', 2, 3, 4]
>>> foo[:][1] = 'two' # why foo not updated?
>>> foo
['one', 2, 3, 4]
>>> foo[:][2:] = ['three', 'four'] # Again, foo not updated
>>> foo
['one', 2, 3, 4]
```
Why isn't foo updated after `foo[:][1] = 'two'`?
**Update:** Maybe I didn't explain my questions clearly. I know when slicing, a new list is created. My doubt is why a slicing assignment updates the list (e.g. `foo[:1] = ['one']`), but if there are two levels of slicing, it doesn't update the original list (e.g. `foo[:][2:] = ['three', 'four']`). | This is because python does **not** have *l*-values that could be assigned. Instead, some expressions have an assignment form, which is different.
A `foo[something]` is a syntactic sugar for:
```
foo.__getitem__(something)
```
but a `foo[something] = bar` is a syntactic sugar for rather different:
```
foo.__setitem__(something, bar)
```
Where a slice is just a special case of `something`, so that `foo[x:y]` expands to
```
foo.__getitem__(slice(x, y, None))
```
and `foo[x:y] = bar` expands to
```
foo.__setitem__(slice(x, y, None), bar)
```
Now a `__getitem__` with slice returns a new list that is a copy of the specified range, so modifying it does not affect the original array. And assigning works by the virtue of `__setitem__` being a different method, that can simply do something else.
However the special assignment treatment applies only to the outermost operation. The constituents are normal expressions. So when you write
```
foo[:][1] = 'two'
```
it gets expanded to
```
foo.__getitem__(slice(None, None, None)).__setitem__(1, 'two')
```
the `foo.__getitem__(slice(None, None, None))` part creates a copy and that copy is modified by the `__setitem__`. But not the original array. |
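You can watch this call order happen by subclassing `list` and recording the dunder calls (a hypothetical Python 3 sketch; note that the copy returned by `list.__getitem__` is a plain list, so its `__setitem__` is not the instrumented one):

```python
calls = []

class Spy(list):
    def __getitem__(self, key):
        calls.append(('get', key))
        return super().__getitem__(key)
    def __setitem__(self, key, value):
        calls.append(('set', key))
        super().__setitem__(key, value)

foo = Spy([1, 2, 3, 4])
foo[:][1] = 'two'   # one __getitem__ on foo; the __setitem__ hits the copy
print(calls)        # [('get', slice(None, None, None))]
print(list(foo))    # [1, 2, 3, 4] -- unchanged
```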
PyODBC : can't open the driver even if it exists | 34,785,653 | 7 | 2016-01-14T09:18:36Z | 34,934,901 | 7 | 2016-01-21T21:43:28Z | [
"python",
"sql-server",
"linux",
"pyodbc",
"unixodbc"
] | I'm new to the linux world and I want to query a Microsoft SQL Server from Python. I used it on Windows and it was perfectly fine but in Linux it's quite painful.
After some hours, I finally succeed to install the Microsoft ODBC driver on Linux Mint with unixODBC.
Then, I set up an anaconda with python 3 environment.
I then do this :
```
import pyodbc as odbc
sql_PIM = odbc.connect("Driver={ODBC Driver 13 for SQL Server};Server=XXX;Database=YYY;Trusted_Connection=Yes")
```
It returns :
```
('01000', "[01000] [unixODBC][Driver Manager]Can't open lib '/opt/microsoft/msodbcsql/lib64/libmsodbcsql-13.0.so.0.0' : file not found (0) (SQLDriverConnect)")
```
The thing I do not undertsand is that PyODBC seems to read the right filepath from odbcinst.ini and still does not work.
I went to "/opt/microsoft/msodbcsql/lib64/libmsodbcsql-13.0.so.0.0" and the file actually exists !
So why does it tell me that it does not exist ?
Here are some possible clues :
* I'm on a virtual environment
* I need to have "read" rights because it's a root filepath
I do not know how to solve either of these problems.
Thanks ! | I also had the same problem on Ubuntu 14 after following the microsoft tutorial for SQL Server Linux ODBC Driver (<https://msdn.microsoft.com/en-us/library/hh568454%28v=sql.110%29.aspx?f=255&MSPPError=-2147217396>).
The file exists, but running `ldd` on it showed there were dependencies missing:
```
/opt/microsoft/msodbcsql/lib64/libmsodbcsql-13.0.so.0.0: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /opt/microsoft/msodbcsql/lib64/libmsodbcsql-13.0.so.0.0)
/opt/microsoft/msodbcsql/lib64/libmsodbcsql-13.0.so.0.0: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by
```
After searching for a while I found it's because Ubuntu's repo didn't have GLIBCXX at version 3.4.20; it was at 3.4.19.
I then added a repo to Ubuntu, updated it and forced it to upgrade libstdc++6
```
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install libstdc++6
```
Problem solved, tested with isql:
```
+---------------------------------------+
| Connected! |
| |
| sql-statement |
| help [tablename] |
| quit |
| |
+---------------------------------------+
SQL>
```
After that I tried testing using pdo_odbc (PHP); it then gave me the same "driver not found" error.
To solve this I had to create a symbolic link to fix libodbcinst.so.2:
```
sudo ln -s /usr/lib64/libodbcinst.so.2 /lib/x86_64-linux-gnu/libodbcinst.so.2
``` |
Advanced custom sort | 34,788,757 | 6 | 2016-01-14T11:42:07Z | 34,789,228 | 7 | 2016-01-14T12:06:05Z | [
"python",
"list",
"python-2.7",
"sorting"
] | I have a list of items that I would like to sort on multiple criterion.
Given input list:
```
cols = [
'Aw H',
'Hm I1',
'Aw I2',
'Hm R',
'Aw R',
'Aw I1',
'Aw E',
'Hm I2',
'Hm H',
'Hm E',
]
```
Criterions:
* Hm > Aw
* I > R > H > E
The output should be:
```
cols = [
'Hm I1',
'Aw I1',
'Hm I2',
'Aw I2',
'Hm R',
'Aw R',
'Hm H',
'Aw H',
'Hm E',
'Aw E'
]
```
I know this function needs to be passed onto the built-in `sorted()` but any ideas how to actually write it? | You could write a function for the key, returning a `tuple` with each portion of interest sorted by priority.
```
def k(s):
m = {'I':0, 'R':1, 'H':2, 'E':3}
return m[s[3]], int(s[4:] or 0), -ord(s[0])
cols = [
'Aw H',
'Hm I1',
'Aw I2',
'Hm R',
'Aw R',
'Aw I1',
'Aw E',
'Hm I2',
'Hm H',
'Hm E',
]
```
Result:
```
>>> for i in sorted(cols, key=k):
... print(i)
...
Hm I1
Aw I1
Hm I2
Aw I2
Hm R
Aw R
Hm H
Aw H
Hm E
Aw E
```
When sorting `tuple`s, the first elements are compared first. If they're the same, the `tuple`s are sorted by their second elements, and so on. This is similar to the way ordinary words are sorted alphabetically.
Since we first want all the elements with `'I'` together, then `'R'`, and so on, we'll put that first. To do that, we define a dictionary that gives each letter its desired priority. When we look up that letter (the fourth character in the string, `s[3]`) in that dictionary, there's the first part of the key.
Next, we want the number after that letter. For this, we'll use some short-circuiting to get either the fifth character and onward (`s[4:]`), or, if there aren't any, a `0`. We send that to `int`, which will evaluate the number as a number to put `'2'` after `'12'` like it should be.
Finally, if the first two parts are the same, items will be sorted based on their first character. If this was a simpler sort we could just specify `reverse=True`. If this part was a number, we could just take its negative. We'll just turn that character into a number with `ord()` and then take the negative of that.
The result is keys of, for example, `(0, 2, -65)` for `'Aw I2'`. |
ImportError: No module named 'ipdb' | 34,804,121 | 6 | 2016-01-15T03:51:38Z | 34,804,150 | 8 | 2016-01-15T03:54:51Z | [
"python",
"ipdb"
] | I'm new to python and I'm trying to use the interactive python debugger in the standard python package. Whenever I run "import ipdb" in my text editor (atom) or in the command line through iPython then I get the error:
ImportError: No module named 'ipdb'
Where is my ipdb module? It's still missing after I reinstalled python.
Thanks! | `pdb` is built-in. `ipdb` you will have to install.
```
pip install ipdb
``` |
cx_freeze converted GUI-app (tkinter) crashes after presssing plot-Button | 34,806,650 | 2 | 2016-01-15T08:06:12Z | 34,893,933 | 9 | 2016-01-20T07:28:56Z | [
"python",
"python-3.x",
"matplotlib",
"tkinter",
"cx-freeze"
] | I've been dealing with this for days now and hope to find some help. I developed a GUI application with the imported modules tkinter, numpy, scipy and matplotlib, which runs fine in Python itself. After converting it to an \*.exe everything works as expected, but NOT the matplotlib section. When I press my defined plot button, the \*.exe simply closes and doesn't show any plots.
So I thought to make a minimalistic example, where I simply plot a sin-function and I'm facing the same issue:
Works perfect in python, when converting it to an \*.exe it crashes when pressing the plot Button. The minimalistic example is here:
```
import tkinter as tk
import matplotlib.pyplot as plt
import numpy as np
class MainWindow(tk.Frame):
def __init__(self):
tk.Frame.__init__(self,bg='#9C9C9C',relief="flat", bd=10)
self.place(width=x,height=y)
self.create_widgets()
def function(self):
datax = np.arange(-50,50,0.1)
datay = np.sin(datax)
plt.plot(datax,datay)
plt.show()
def create_widgets(self):
plot = tk.Button(self, text='PLOT', command=self.function)
plot.pack()
x,y=120,300
root=tk.Tk()
root.geometry(str(x)+"x"+str(y))
app = MainWindow()
app.mainloop()
```
And see my corresponding "setup.py" for converting with cx\_freeze.
```
import cx_Freeze
import matplotlib
import sys
import numpy
import tkinter
base = None
if sys.platform == "win32":
base = "Win32GUI"
executables = [cx_Freeze.Executable("test.py", base=base)]
build_exe_options = {"includes": ["matplotlib.backends.backend_tkagg","matplotlib.pyplot",
"tkinter.filedialog","numpy"],
"include_files":[(matplotlib.get_data_path(), "mpl-data")],
"excludes":[],
}
cx_Freeze.setup(
name = "test it",
options = {"build_exe": build_exe_options},
version = "1.0",
description = "I test it",
executables = executables)
```
Any ideas that might solve the issue are highly appreciated. I'm working on a 64-bit Windows10 machine and I'm using the WinPython Distribution with Python 3.4.3. | I found a potential solution (or at least an explanation) for this problem while testing PyInstaller with the same **test.py**. I received error message about a dll file being missing, that file being **mkl\_intel\_thread.dll**.
I searched for that file and it was found inside **numpy** folder.
I copied the files matching **mkl\_\*.dll** and also **libiomp5md.dll** into the same directory as the **test.exe** created by `python setup.py build`. After this the minimal **test.exe** showed the matplotlib window when pressing the **plot** button.
The files were located in folder **lib\site-packages\numpy\core**. |
Python .get() in Java | 34,811,106 | 3 | 2016-01-15T12:28:44Z | 34,811,204 | 10 | 2016-01-15T12:34:26Z | [
"java",
"python"
] | So if we have a dictionary in Python, we have the amazing `dict.get(x, y)`:
if x exists, return whatever x's value is, else return y. Does Java have this method? I've looked around and can't find anything. | In Java 8, `Map` provides a [`getOrDefault`](https://docs.oracle.com/javase/8/docs/api/java/util/Map.html#getOrDefault-java.lang.Object-V-) method.
In earlier versions, you are probably stuck with either getting the value for the key with [`get`](https://docs.oracle.com/javase/7/docs/api/java/util/Map.html#get%28java.lang.Object%29) and null-checking it, or using something like:
```
value = (map.containsKey(key) ? map.get(key) : defaultValue);
``` |
Is it possible to know the maximum number accepted by chr using Python? | 34,813,535 | 3 | 2016-01-15T14:42:48Z | 34,813,568 | 9 | 2016-01-15T14:43:57Z | [
"python",
"python-3.x",
"unicode"
] | From the Python's documentation of the [`chr`](https://docs.python.org/3.5/library/functions.html#chr) built-in function, the maximum value that `chr` accepts is 1114111 (in decimal) or 0x10FFFF (in base 16). And in fact
```
>>> chr(1114112)
Traceback (most recent call last):
File "<pyshell#20>", line 1, in <module>
chr(1114112)
ValueError: chr() arg not in range(0x110000)
```
My first question is the following, why exactly that number? The second question is: if this number changes, is it possible to know from a Python command the maximum value accepted by `chr`? | Use [`sys.maxunicode`](https://docs.python.org/3/library/sys.html#sys.maxunicode):
> An integer giving the value of the largest Unicode code point, i.e. `1114111` (`0x10FFFF` in hexadecimal).
On my Python 2.7 UCS-2 build the maximum Unicode character supported by `unichr()` is 0xFFFF:
```
>>> import sys
>>> sys.maxunicode
65535
```
but Python 3.3 and newer switched to a new internal storage format for Unicode strings, and the maximum is now *always* `0x10FFFF`. See [PEP 393](https://www.python.org/dev/peps/pep-0393/).
`0x10FFFF` is the maximum Unicode codepoint as defined in the Unicode standard. Quoting the [Wikipedia article on Unicode](https://en.wikipedia.org/wiki/Unicode#Architecture_and_terminology):
> Unicode defines a codespace of 1,114,112 code points in the range 0 to 10FFFF. |
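As a quick sketch (assuming Python 3.3 or newer, where PEP 393 applies), you can confirm the boundary directly:

```python
import sys

# On Python 3.3+ (PEP 393), sys.maxunicode is always 0x10FFFF.
print(sys.maxunicode)          # 1114111
print(hex(sys.maxunicode))     # 0x10ffff

chr(sys.maxunicode)            # the highest accepted value -- no error
try:
    chr(sys.maxunicode + 1)    # one past the limit
except ValueError as exc:
    print(exc)                 # chr() arg not in range(0x110000)
```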
Anaconda prompt loading error: The input line is too long | 34,818,282 | 4 | 2016-01-15T19:21:36Z | 35,138,036 | 11 | 2016-02-01T18:50:22Z | [
"python",
"batch-file",
"windows-7-x64",
"anaconda",
"prompt"
] | I installed Anaconda 64 python 2.7 on Windows 7 64-bit version.
After installation, the anaconda prompt can start with no problem. But whenever I restart/shutdown and restart the laptop, the anaconda prompt will display the following error message, and some python packages have problems to load in the jupyter notebook.
```
Deactivating environment "C:\Users\user\Anaconda2"...
Activating environment "C:\Users\user\Anaconda2"...
The input line is too long.
"PATH_NO_SCRIPTS=C:\Users\user\Anaconda2;;C:\Users\user\Anaconda2\Lib
rary\bin;C:\Python27\;C:\Python27\Scripts;c:\Rtools\bin;c:\Rtools\gcc-4.6.3\bin;
C:\ProgramData\Oracle\Java\javapath;%COSMOSM%;C:\Program Files\Lenovo Fingerprin
t Reader\;C:\Program Files (x86)\Intel\iCLS Client\;C:\Program Files\Intel\iCLS
Client\;C:\Program Files (x86)\AMD APP\bin\x86_64;C:\Program Files (x86)\AMD APP
\bin\x86;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Program File
s (x86)\ATI Technologies\ATI.ACE\Core-Static;C:\Program Files\Intel\Intel(R) Man
agement Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine
Components\IPT;C:\Program Files (x86)\Intel\Intel(R) Management Engine Component
s\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\IPT;C:\
Program Files\Intel\WiFi\bin\;C:\Program Files\Common Files\Intel\WirelessCommon
\;C:\Program Files\Sony\VAIO Improvement\;C:\Program Files (x86)\Sony\VAIO Start
up Setting Tool;c:\Program Files (x86)\Common Files\Roxio Shared\OEM\DLLShared\;
c:\Program Files (x86)\Common Files\Roxio Shared\OEM\DLLShared\;c:\Program Files
(x86)\Common Files\Roxio Shared\OEM\12.0\DLLShared\;c:\Program Files (x86)\Roxi
o 2010\OEM\AudioCore\;C:\Program Files (x86)\Common Files\Thunder Network\KanKan
\Codecs;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\IVI Foundat
ion\VISA\Win64\Bin\;C:\Program Files (x86)\IVI Foundation\VISA\WinNT\Bin\;C:\Pro
gram Files (x86)\IVI Foundation\VISA\WinNT\Bin;C:\Program Files (x86)\IVI Founda
tion\IVI\bin;C:\Program Files\IVI Foundation\IVI\bin;C:\PROGRA~2\IVIFOU~1\VISA\W
inNT\Bin;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Python27;C:\Users\user\AppData\Local\Smartbar\Application\;C:\Program Files (x86)\WinSCP\;C:\Python
27\Scripts;C:\Program Files\ffmpeg\bin;C:\Program Files\Microsoft SQL Server\110
\Tools\Binn\;C:\Program Files (x86)\MiKTeX 2.9\miktex\bin\;C:\Program Files (x86
)\Windows Kits\8.1\Windows Performance Toolkit\;C:\HashiCorp\Vagrant\bin;C:\Prog
ram Files (x86)\Skype\Phone\;;C:\Users\user\Desktop\win64\\lib;C:\Users\user\Desktop\win64\\3rdparty\cudnn\bin;C:\Users\user\Desktop\win64\\3rdpa
rty\cudart;C:\Users\user\Desktop\win64\\3rdparty\vc;C:\Users\user\Desk
top\win64\\3rdparty\openblas\bin;C:\Python27\;C:\Python27\Scripts;c:\Rtools\bin;
c:\Rtools\gcc-4.6.3\bin;C:\ProgramData\Oracle\Java\javapath;%COSMOSM%;C:\Program
Files\Lenovo Fingerprint Reader\;C:\Program Files (x86)\Intel\iCLS Client\;C:\P
rogram Files\Intel\iCLS Client\;C:\Program Files (x86)\AMD APP\bin\x86_64;C:\Pro
gram Files (x86)\AMD APP\bin\x86;C:\Windows\system32;C:\Windows;C:\Windows\Syste
m32\Wbem;C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;C:\Program
Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Int
el(R) Management Engine Components\IPT;C:\Program Files (x86)\Intel\Intel(R) Man
agement Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management E
ngine Components\IPT;C:\Program Files\Intel\WiFi\bin\;C:\Program Files\Common Fi
les\Intel\WirelessCommon\;C:\Program Files\Sony\VAIO Improvement\;C:\Program Fil
es (x86)\Sony\VAIO Startup Setting Tool" was unexpected at this time.
```
I tried to follow the solutions [here](http://stackoverflow.com/questions/682799/what-to-do-with-the-input-line-is-too-long-error-message) and [here](http://stackoverflow.com/questions/12751809/input-line-is-too-long-in-bat-files), but with no success.
I looked into the Script folder under Anaconda, and found the error message might come from the activate.bat file. But I have no clue what to do next.
```
@echo off
SETLOCAL ENABLEDELAYEDEXPANSION
REM Check for CONDA_ENVS_PATH environment variable
REM It it doesn't exist, look inside the Anaconda install tree
IF "%CONDA_ENVS_PATH%" == "" (
REM turn relative path into absolute path
CALL :NORMALIZEPATH CONDA_ENVS_PATH "%~dp0..\envs"
)
REM Used for deactivate, to make sure we restore original state after deactivation
IF "%CONDA_PATH_BACKUP%" == "" (SET "CONDA_PATH_BACKUP=%PATH%")
set "CONDA_NEW_NAME=%~1"
IF "%~2" == "" GOTO skiptoomanyargs
ECHO ERROR: Too many arguments provided
GOTO usage
:skiptoomanyargs
IF "%CONDA_NEW_NAME%" == "" set "CONDA_NEW_NAME=%~dp0.."
REM Search through paths in CONDA_ENVS_PATH
REM First match will be the one used
FOR %%F IN ("%CONDA_ENVS_PATH:;=" "%") DO (
IF EXIST "%%~F\%CONDA_NEW_NAME%\conda-meta" (
SET "CONDA_NEW_PATH=%%~F\%CONDA_NEW_NAME%"
GOTO found_env
)
)
IF EXIST "%CONDA_NEW_NAME%\conda-meta" (
SET "CONDA_NEW_PATH=%CONDA_NEW_NAME%"
) ELSE (
ECHO No environment named "%CONDA_NEW_NAME%" exists in %CONDA_ENVS_PATH%, or is not a valid conda installation directory.
EXIT /b 1
)
:found_env
SET "SCRIPT_PATH=%~dp0"
IF "%SCRIPT_PATH:~-1%"=="\" SET "SCRIPT_PATH=%SCRIPT_PATH:~0,-1%"
REM Set CONDA_NEW_NAME to the last folder name in its path
FOR /F "tokens=* delims=\" %%i IN ("%CONDA_NEW_PATH%") DO SET "CONDA_NEW_NAME=%%~ni"
REM special case for root env:
REM Checks for Library\bin on PATH. If exists, we have root env on PATH.
call :NORMALIZEPATH ROOT_PATH "%~dp0.."
CALL SET "PATH_NO_ROOT=%%PATH:%ROOT_PATH%;=%%"
IF NOT "%PATH_NO_ROOT%"=="%PATH%" SET "CONDA_DEFAULT_ENV=%ROOT_PATH%"
REM Deactivate a previous activation if it is live
IF "%CONDA_DEFAULT_ENV%" == "" GOTO skipdeactivate
REM This search/replace removes the previous env from the path
ECHO Deactivating environment "%CONDA_DEFAULT_ENV%"...
REM Run any deactivate scripts
IF NOT EXIST "%CONDA_DEFAULT_ENV%\etc\conda\deactivate.d" GOTO nodeactivate
PUSHD "%CONDA_DEFAULT_ENV%\etc\conda\deactivate.d"
FOR %%g IN (*.bat) DO CALL "%%g"
POPD
:nodeactivate
REM Remove env name from PROMPT
FOR /F "tokens=* delims=\" %%i IN ("%CONDA_DEFAULT_ENV%") DO SET "CONDA_OLD_ENV_NAME=%%~ni"
call set PROMPT=%%PROMPT:[%CONDA_OLD_ENV_NAME%] =%%
SET "CONDACTIVATE_PATH=%CONDA_DEFAULT_ENV%;%CONDA_DEFAULT_ENV%\Scripts;%CONDA_DEFAULT_ENV%\Library\bin"
CALL SET "PATH=%%PATH:%CONDACTIVATE_PATH%=%%"
SET CONDA_DEFAULT_ENV=
:skipdeactivate
CALL :NORMALIZEPATH CONDA_DEFAULT_ENV "%CONDA_NEW_PATH%"
ECHO Activating environment "%CONDA_DEFAULT_ENV%"...
SET "PATH=%CONDA_DEFAULT_ENV%;%CONDA_DEFAULT_ENV%\Scripts;%CONDA_DEFAULT_ENV%\Library\bin;%PATH%"
IF "%CONDA_NEW_NAME%"=="" (
REM Clear CONDA_DEFAULT_ENV so that this is truly a "root" environment, not an environment pointed at root
SET CONDA_DEFAULT_ENV=
) ELSE (
SET "PROMPT=[%CONDA_NEW_NAME%] %PROMPT%"
)
REM Make sure that root's Scripts dir is on PATH, for sake of keeping activate/deactivate available.
CALL SET "PATH_NO_SCRIPTS=%%PATH:%SCRIPT_PATH%=%%"
IF "%PATH_NO_SCRIPTS%"=="%PATH%" SET "PATH=%PATH%;%SCRIPT_PATH%"
REM Run any activate scripts
IF NOT EXIST "%CONDA_DEFAULT_ENV%\etc\conda\activate.d" GOTO noactivate
PUSHD "%CONDA_DEFAULT_ENV%\etc\conda\activate.d"
FOR %%g IN (*.bat) DO CALL "%%g"
POPD
:noactivate
REM Trim trailing semicolon, if any
IF "%PATH:~-1%"==";" SET "PATH=%PATH:~0,-1%"
REM Clean up any double colons we may have ended up with
SET "PATH=%PATH:;;=;%"
ENDLOCAL & (
SET "PATH=%PATH%"
SET "PROMPT=%PROMPT%"
SET "CONDA_DEFAULT_ENV=%CONDA_DEFAULT_ENV%"
SET "CONDA_PATH_BACKUP=%CONDA_PATH_BACKUP%"
)
EXIT /B
:NORMALIZEPATH
SET "%1=%~dpfn2"
EXIT /B
```
Any hint is appreciated. | I found that if you change from using single quotes for the CALL SET on the following:
```
REM Make sure that root's Scripts dir is on PATH, for sake of keeping activate/deactivate available.
CALL SET "PATH_NO_SCRIPTS=%%PATH:%SCRIPT_PATH%=%%"
IF "%PATH_NO_SCRIPTS%"=="%PATH%" SET "PATH=%PATH%;%SCRIPT_PATH%"
```
to:
```
REM Make sure that root's Scripts dir is on PATH, for sake of keeping activate/deactivate available.
CALL SET ""PATH_NO_SCRIPTS=%%PATH:%SCRIPT_PATH%=%%""
IF "%PATH_NO_SCRIPTS%"=="%PATH%" SET "PATH=%PATH%;%SCRIPT_PATH%"
```
Solves this issue for me. This is based upon this [answer](http://stackoverflow.com/a/3583282/2888347) |
How to measure Python's asyncio code performance? | 34,826,533 | 6 | 2016-01-16T11:41:39Z | 34,827,291 | 8 | 2016-01-16T13:04:48Z | [
"python",
"performance-testing",
"trace",
"python-asyncio"
] | I can't use normal tools and techniques to measure the performance of a coroutine, because the time it spends at `await` should not be taken into consideration (or it should only count the overhead of reading from the awaitable, not the IO latency).
So how do I measure the time a coroutine takes? How do I compare two implementations and find the more efficient one? What tools do I use? | One way is to patch `loop._selector.select` in order to time and save all the IO operations. This can be done using a context manager:
```
import asyncio
import contextlib
import time


@contextlib.contextmanager
def patch_select(*, loop=None):
if loop is None:
loop = asyncio.get_event_loop()
old_select = loop._selector.select
# Define the new select method, used as a context
def new_select(timeout):
if timeout == 0:
return old_select(timeout)
start = time.time()
result = old_select(timeout)
total = time.time() - start
new_select.iotime += total
return result
new_select.iotime = 0.0
# Patch the select method
try:
loop._selector.select = new_select
yield new_select
finally:
loop._selector.select = old_select
```
Then use another context manager to time the full run, and compute the difference between total time and the IO time:
```
@contextlib.contextmanager
def timeit(*, loop=None):
start = time.time()
with patch_select() as context:
yield
total = time.time() - start
io_time = context.iotime
print("IO time: {:.3f}".format(io_time))
print("CPU time: {:.3f}".format(total - io_time))
print("Total time: {:.3f}".format(total))
```
Here is a simple example:
```
loop = asyncio.get_event_loop()
with timeit(loop=loop):
coro = asyncio.sleep(1, result=3)
result = loop.run_until_complete(coro)
print("Result: {}".format(result))
```
It prints the following report:
```
Result: 3
IO time: 1.001
CPU time: 0.011
Total time: 1.012
```
---
**EDIT**
Another approach is to subclass `Task` and override the `_step` method to time the execution of the step:
```
class TimedTask(asyncio.Task):
    cputime = 0.0  # class-level default; `self.cputime +=` below creates a per-instance value
def _step(self, *args, **kwargs):
start = time.time()
result = super()._step(*args, **kwargs)
self.cputime += time.time() - start
return result
```
It is then possible to register the subclass as the default task factory:
```
loop = asyncio.get_event_loop()
task_factory = lambda loop, coro: TimedTask(coro, loop=loop)
loop.set_task_factory(task_factory)
```
Same example:
```
coro = asyncio.sleep(1, result=3, loop=loop)
task = asyncio.ensure_future(coro, loop=loop)
result = loop.run_until_complete(task)
print("Result: {}".format(result))
print("CPU time: {:.4f}".format(task.cputime))
```
With the output:
```
Result: 3
CPU time: 0.0002
``` |
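As a coarser, whole-program alternative (my own sketch, not part of the original answer): comparing wall-clock time against process CPU time gives a rough split between IO waiting and computation, without patching the loop or the task class. This assumes Python 3.7+ for `asyncio.run`; on older versions you would use `loop.run_until_complete` instead.

```python
import asyncio
import time

async def work():
    await asyncio.sleep(0.2)                     # IO-like wait: wall time, almost no CPU
    return sum(i * i for i in range(200_000))    # CPU-bound section

start_wall = time.perf_counter()
start_cpu = time.process_time()
asyncio.run(work())
wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu

print("Wall time: {:.3f}".format(wall))  # at least 0.2 because of the sleep
print("CPU time:  {:.3f}".format(cpu))   # much smaller: sleeping burns no CPU
```

This doesn't break the time down per coroutine the way the patched selector or `TimedTask` approaches do, but it is often enough to decide which of two implementations does less CPU work.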
Running TensorFlow on a Slurm Cluster? | 34,826,736 | 4 | 2016-01-16T12:03:59Z | 36,803,148 | 7 | 2016-04-22T20:53:30Z | [
"python",
"python-2.7",
"cluster-computing",
"tensorflow",
"slurm"
] | I could get access to a computing cluster, specifically one node with two 12-Core CPUs, which is running with [Slurm Workload Manager](https://en.wikipedia.org/wiki/Slurm_Workload_Manager).
I would like to run [TensorFlow](https://en.wikipedia.org/wiki/TensorFlow) on that system, but unfortunately I was not able to find any information about how to do this or whether it is even possible. I am new to this, but as far as I understand it, I would have to run TensorFlow by creating a Slurm job and cannot directly execute python/tensorflow via ssh.
Does anyone have an idea, a tutorial, or any kind of source on this topic? | It's relatively simple.
Under the simplifying assumption that you request one process per host, Slurm will provide you with all the information you need in environment variables, specifically SLURM\_PROCID, SLURM\_NPROCS and SLURM\_NODELIST.
For example, you can initialize your task index, the number of tasks and the nodelist as follows:
```
from hostlist import expand_hostlist
task_index = int( os.environ['SLURM_PROCID'] )
n_tasks = int( os.environ['SLURM_NPROCS'] )
tf_hostlist = [ ("%s:22222" % host) for host in
expand_hostlist( os.environ['SLURM_NODELIST']) ]
```
Note that slurm gives you a host list in its compressed format (e.g., "myhost[11-99]"), that you need to expand. I do that with module hostlist by
Kent Engström, available here <https://pypi.python.org/pypi/python-hostlist>
At that point, you can go right ahead and create your TensorFlow cluster specification and server with the information you have available, e.g.:
```
cluster = tf.train.ClusterSpec( {"your_taskname" : tf_hostlist } )
server = tf.train.Server( cluster.as_cluster_def(),
job_name = "your_taskname",
task_index = task_index )
```
And you're set! You can now perform TensorFlow node placement on a specific host of your allocation with the usual syntax:
```
for idx in range(n_tasks):
with tf.device("/job:your_taskname/task:%d" % idx ):
...
```
A flaw with the code reported above is that all your jobs will instruct Tensorflow to install servers listening at fixed port 22222. If multiple such jobs happen to be scheduled to the same node, the second one will fail to listen to 22222.
A better solution is to let slurm reserve ports for each job. You need to bring your slurm administrator on board and ask him to configure slurm so it allows you to ask for ports with the --resv-ports option. In practice, this requires asking them to add a line like the following in their slurm.conf:
```
MpiParams=ports=15000-19999
```
Before you bug your slurm admin, check what options are already configured, e.g., with:
```
scontrol show config | grep MpiParams
```
If your site already uses an old version of OpenMPI, there's a chance an option like this is already in place.
Then, amend my first snippet of code as follows:
```
from hostlist import expand_hostlist
task_index = int( os.environ['SLURM_PROCID'] )
n_tasks = int( os.environ['SLURM_NPROCS'] )
port = int( os.environ['SLURM_STEP_RESV_PORTS'].split('-')[0] )
tf_hostlist = [ ("%s:%s" % (host,port)) for host in
expand_hostlist( os.environ['SLURM_NODELIST']) ]
```
Good luck! |
AttributeError: module 'html.parser' has no attribute 'HTMLParseError' | 34,827,566 | 12 | 2016-01-16T13:33:09Z | 36,000,103 | 7 | 2016-03-14T23:38:40Z | [
"python",
"django",
"python-3.x"
] | 1. Here is the error; how can I resolve it?
2. I use Python 3.5.1 and created a virtual environment with virtualenv
3. The source code works well on my friend's machine
Error:
```
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "A:\Python3.5\lib\site-packages\django\core\management\__init__.py", line 385, in execute_from_command_line
utility.execute()
File "A:\Python3.5\lib\site-packages\django\core\management\__init__.py", line 354, in execute
django.setup()
File "A:\Python3.5\lib\site-packages\django\__init__.py", line 18, in setup
from django.utils.log import configure_logging
File "A:\Python3.5\lib\site-packages\django\utils\log.py", line 13, in <module>
from django.views.debug import ExceptionReporter, get_exception_reporter_filter
File "A:\Python3.5\lib\site-packages\django\views\debug.py", line 10, in <module>
from django.http import (HttpResponse, HttpResponseServerError,
File "A:\Python3.5\lib\site-packages\django\http\__init__.py", line 4, in <module>
from django.http.response import (
File "A:\Python3.5\lib\site-packages\django\http\response.py", line 13, in <module>
from django.core.serializers.json import DjangoJSONEncoder
File "A:\Python3.5\lib\site-packages\django\core\serializers\__init__.py", line 23, in <module>
from django.core.serializers.base import SerializerDoesNotExist
File "A:\Python3.5\lib\site-packages\django\core\serializers\base.py", line 6, in <module>
from django.db import models
File "A:\Python3.5\lib\site-packages\django\db\models\__init__.py", line 6, in <module>
from django.db.models.query import Q, QuerySet, Prefetch # NOQA
File "A:\Python3.5\lib\site-packages\django\db\models\query.py", line 13, in <module>
from django.db.models.fields import AutoField, Empty
File "A:\Python3.5\lib\site-packages\django\db\models\fields\__init__.py", line 18, in <module>
from django import forms
File "A:\Python3.5\lib\site-packages\django\forms\__init__.py", line 6, in <module>
from django.forms.fields import * # NOQA
File "A:\Python3.5\lib\site-packages\django\forms\fields.py", line 18, in <module>
from django.forms.utils import from_current_timezone, to_current_timezone
File "A:\Python3.5\lib\site-packages\django\forms\utils.py", line 15, in <module>
from django.utils.html import format_html, format_html_join, escape
File "A:\Python3.5\lib\site-packages\django\utils\html.py", line 16, in <module>
from .html_parser import HTMLParser, HTMLParseError
File "A:\Python3.5\lib\site-packages\django\utils\html_parser.py", line 12, in <module>
HTMLParseError = _html_parser.HTMLParseError
AttributeError: module 'html.parser' has no attribute 'HTMLParseError'
``` | As you can read [here](http://www.thefourtheye.in/2015/02/python-35-and-django-17s-htmlparseerror.html) this error is raised...
> because `HTMLParseError` is deprecated from Python 3.3 onwards and removed in Python 3.5.
What you can do is downgrade your Python version or upgrade your Django version. |
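A quick way to check what your interpreter provides (my own illustration, not from the original answer):

```python
import html.parser

# HTMLParseError was deprecated in Python 3.3 and removed in 3.5,
# so on Python 3.5+ this attribute no longer exists:
print(hasattr(html.parser, 'HTMLParseError'))  # False on Python 3.5+
```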
Using map in Python | 34,831,050 | 3 | 2016-01-16T19:17:04Z | 34,831,115 | 7 | 2016-01-16T19:22:58Z | [
"python"
] | I'm trying to use the `map` Python function (I know I can use list comprehension but I was instructed to use map in this example) to take the row average of a two row matrix.
Here is what I think the answer should look like:
```
def average_rows2(mat):
print( map( float(sum) / len , [mat[0],mat[1]] ) )
average_rows2([[4, 5, 2, 8], [3, 9, 6, 7]])
```
Right now, only the sum function works:
```
def average_rows2(mat):
print( map( sum , [mat[0],mat[1]] ) )
average_rows2([[4, 5, 2, 8], [3, 9, 6, 7]])
```
The first problem is that adding the `float()` to the sum function gives the error:
```
TypeError: float() argument must be a string or a number
```
Which is weird because the elements of the resulting list should be integers since it successfully calculates the sum.
Also, adding `/ len` to the sum function gives this error:
```
TypeError: unsupported operand type(s) for /: 'builtin_function_or_method' and 'builtin_function_or_method'
```
For this error, I tried `*` and `//` and it says that none are supported operand types. I don't understand why none of these would be supported.
Maybe this means that the `map` function doesn't take composite functions? | The first argument has to be evaluated before it can be passed to `map`. This:
```
float(sum) / len
```
is causing you various errors as it doesn't make any sense to evaluate it on its own (unless you'd shadowed `sum` and `len`, which would be a different problem). You are trying to convert one built-in function to a float and then divide it by another built-in function! It therefore cannot be an argument.
Instead, make a function, e.g.:
```
lambda lst: float(sum(lst)) / len(lst)
```
This is a callable with a single argument, therefore can be used as the first argument to `map`, which will then apply it to each element in its second argument. You could also use a regular function, rather than an anonymous `lambda` (as now shown in <http://stackoverflow.com/a/34831192/3001761>). |
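Putting that together, a sketch of how the questioner's function could look (keeping their name `average_rows2`; the lambda is the fix suggested above):

```python
def average_rows2(mat):
    # map applies the lambda to each row; list() materializes the results
    return list(map(lambda row: float(sum(row)) / len(row), mat))

print(average_rows2([[4, 5, 2, 8], [3, 9, 6, 7]]))  # [4.75, 6.25]
```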
Comprehensions in Python to sample tuples from a list | 34,832,058 | 13 | 2016-01-16T20:58:34Z | 34,832,108 | 19 | 2016-01-16T21:03:11Z | [
"python",
"list",
"tuples",
"list-comprehension"
] | I am trying to get the list of three-element tuples from the list `[-4, -2, 1, 2, 5, 0]` using comprehensions, and checking whether they fulfil the condition `sum([i, j, k]) == 0`. The following code works. However, there is no question that there ought to be an easier, much more elegant way of expressing these comprehensions:
```
[
(i, j, k) for i in [-4, -2, 1, 2, 5, 0]
for j in [-4, -2, 1, 2, 5, 0]
for k in [-4, -2, 1, 2, 5, 0] if sum([i, j, k]) == 0
]
```
Output:
```
[(-4, 2, 2), (-2, 1, 1), (-2, 2, 0), (-2, 0, 2), (1, -2, 1),
(1, 1, -2), (2, -4, 2), (2, -2, 0), (2, 2, -4), (2, 0, -2),
(0, -2, 2), (0, 2, -2), (0, 0, 0)]
```
What I am searching for is an expression like `(i, j, k) for i, j, k in [-4, -2, 1, 2, 5, 0]`. | You can use [`itertools.product`](https://docs.python.org/3/library/itertools.html#itertools.product) to hide the nested loops in your list comprehension. Use the `repeat` parameter to set the number of loops over the list (i.e. the number of elements in the tuple):
```
>>> import itertools
>>> lst = [-4, -2, 1, 2, 5, 0]
>>> [x for x in itertools.product(lst, repeat=3) if sum(x) == 0]
[(-4, 2, 2),
(-2, 1, 1),
(-2, 2, 0),
(-2, 0, 2),
(1, -2, 1),
(1, 1, -2),
(2, -4, 2),
(2, -2, 0),
(2, 2, -4),
(2, 0, -2),
(0, -2, 2),
(0, 2, -2),
(0, 0, 0)]
``` |
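A variation (not asked for in the question, so treat it as an aside): if the order of elements inside a tuple doesn't matter, `itertools.combinations_with_replacement` avoids the reordered duplicates that `product` produces:

```python
import itertools

lst = [-4, -2, 1, 2, 5, 0]
# Unordered triples (with repetition allowed) that sum to zero:
triples = [x for x in itertools.combinations_with_replacement(lst, 3) if sum(x) == 0]
print(triples)  # [(-4, 2, 2), (-2, 1, 1), (-2, 2, 0), (0, 0, 0)]
```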
Combine if conditions in Python | 34,834,814 | 3 | 2016-01-17T03:31:27Z | 34,834,820 | 10 | 2016-01-17T03:32:18Z | [
"python"
] | Does this exist in Python? I want to combine an If statement that is very repetitive.
```
# ORIGINAL IF STATEMENT
if a < 100 and b < 100 and c < 100:
#pass
# I KNOW THIS IS WRONG, I JUST WANT TO KNOW IF THERE IS A WAY TO MAKE THE IF CONDITION SHORTER
if [a,b,c] < 100:
#pass
``` | You can use the built-in [`all()`](https://docs.python.org/2/library/functions.html#all):
```
if all(item < 100 for item in [a, b, c]):
``` |
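For completeness, a small sketch with illustrative values (my own, not from the question):

```python
a, b, c = 4, 50, 99
if all(item < 100 for item in [a, b, c]):
    print("all below 100")   # printed, since 4, 50 and 99 are all < 100

# all() short-circuits: it stops evaluating at the first failing item
print(all(item < 100 for item in [4, 150, 99]))  # False
```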
Tensorflow: Where is tf.nn.conv2d Actually Executed? | 34,835,503 | 9 | 2016-01-17T05:42:09Z | 34,843,397 | 10 | 2016-01-17T20:40:12Z | [
"python",
"machine-learning",
"tensorflow"
] | I am curious about the Tensorflow implementation of `tf.nn.conv2d(...)`. To call it, one simply runs `tf.nn.conv2d(...)`. However, I'm going down the rabbit hole trying to see where it is executed. The code is as follows (where the arrow indicates the function it ultimately calls):
```
tf.nn.conv2d(...) -> tf.nn_ops.conv2d(...) -> tf.gen_nn_ops.conv2d(...) -> _op_def_lib.apply_op("Conv2D", ...) -> ?
```
I am familiar with Tensorflow's implementation of LSTMs and the ability to easily manipulate them as one deems fit. Is the function that performs the `conv2d()` calculation written in Python, and if so, where is it? Can I see where and how the strides are executed? | **TL;DR:** The implementation of [`tf.nn.conv2d()`](https://www.tensorflow.org/versions/master/api_docs/python/nn.html#conv2d) is written in C++, which invokes optimized code using either Eigen (on CPU) or the cuDNN library (on GPU). You can find the implementation [here](https://github.com/tensorflow/tensorflow/blob/8eaf671025e8cd5358278f91f7e89e2fbbe6a26b/tensorflow/core/kernels/conv_ops.cc#L115).
The chain of functions that you mentioned in the question (from `tf.nn.conv2d()` down) are Python functions for **building** a TensorFlow graph, but these do not invoke the implementation. Recall that, in TensorFlow, you first [build a symbolic graph, then execute it](https://www.tensorflow.org/versions/master/resources/faq.html#why_does_c_tfmatmula_b_not_execute_the_matrix_multiplication_immediately).
The implementation of `tf.nn.conv2d()` is only executed when you call [`Session.run()`](https://www.tensorflow.org/versions/master/api_docs/python/client.html#Session.run) passing a `Tensor` whose value depends on the result of some convolution. For example:
```
input = tf.placeholder(tf.float32)
filter = tf.Variable(tf.truncated_normal([5, 5, 3, 32], stddev=0.1))
conv = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='SAME')
result = sess.run(conv, feed_dict={input: ...}) # <== Execution happens here.
```
Invoking `sess.run(...)` tells TensorFlow to run all the ops that are neeeded to compute the value of `conv`, including the convolution itself. The path from here to the implementation is somewhat complicated, but goes through the following steps:
1. `sess.run()` calls the TensorFlow backend to fetch the value of `conv`.
2. The backend prunes the computation graph to work out what nodes must be executed, and places the nodes on the appropriate devices (CPU or GPU).
3. Each device is instructed to execute its subgraph, using an [executor](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/common_runtime/executor.h).
4. The executor eventually invokes the `tensorflow::OpKernel` that corresponds to the convolution operator, by calling its `Compute()` method.
The `"Conv2D"` OpKernel is implemented [here](https://github.com/tensorflow/tensorflow/blob/8eaf671025e8cd5358278f91f7e89e2fbbe6a26b/tensorflow/core/kernels/conv_ops.cc#L94), and its `Compute()` method is [here](https://github.com/tensorflow/tensorflow/blob/8eaf671025e8cd5358278f91f7e89e2fbbe6a26b/tensorflow/core/kernels/conv_ops.cc#L115). Because this op is performance critical for many workloads, the implementation is quite complicated, but the basic idea is that the computation is offloaded to either the Eigen Tensor library (if running on CPU), or cuDNN's optimized GPU implementation. |
What does "list comprehension" in Python mean? How does it work and how can I use it? | 34,835,951 | 26 | 2016-01-17T06:55:55Z | 34,835,952 | 47 | 2016-01-17T06:55:55Z | [
"python",
"list",
"list-comprehension"
] | I have the following code:
```
[x**2 for x in range(10)]
```
When I run it in the Python Shell, it returns:
```
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```
I've searched and it seems this is called a *list comprehension*, but how does it work? | [From **the documentation**:](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions)
> List comprehensions provide a concise way to create lists. Common applications are to make new lists where each element is the result of some operations applied to each member of another sequence or iterable, or to create a subsequence of those elements that satisfy a certain condition.
---
About your question, the list comprehension does the same thing as the following "plain" Python code:
```
>>> l = []
>>> for x in range(10):
... l.append(x**2)
>>> l
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```
How do you write it in one line? Hmm...we can...probably...use [`map()`](https://docs.python.org/3/library/functions.html#map) with [`lambda`](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions):
```
>>> list(map(lambda x: x**2, range(10)))
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```
But isn't it clearer and simpler to just use a list comprehension?
```
>>> [x**2 for x in range(10)]
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```
---
Basically, we can do anything with `x`. Not only `x**2`. For example, run a method of `x`:
```
>>> [x.strip() for x in ('foo\n', 'bar\n', 'baz\n')]
['foo', 'bar', 'baz']
```
Or use `x` as another function's argument:
```
>>> [int(x) for x in ('1', '2', '3')]
[1, 2, 3]
```
We can also, for example, use `x` as the key of a `dict` object. Let's see:
```
>>> d = {'foo': '10', 'bar': '20', 'baz': '30'}
>>> [d[x] for x in ['foo', 'baz']]
['10', '30']
```
How about a combination?
```
>>> d = {'foo': '10', 'bar': '20', 'baz': '30'}
>>> [int(d[x].rstrip('0')) for x in ['foo', 'baz']]
[1, 3]
```
And so on.
---
You can also use `if` or `if...else` in a list comprehension. For example, you only want odd numbers in `range(10)`. You can do:
```
>>> l = []
>>> for x in range(10):
... if x%2:
... l.append(x)
>>> l
[1, 3, 5, 7, 9]
```
Ah that's too complex. What about the following version?
```
>>> [x for x in range(10) if x%2]
[1, 3, 5, 7, 9]
```
To use an `if...else` ternary expression, you need put the `if ... else ...` after `x`, **not** after `range(10)`:
```
>>> [i if i%2 != 0 else None for i in range(10)]
[None, 1, None, 3, None, 5, None, 7, None, 9]
```
---
Have you heard about [**nested list comprehension**](https://docs.python.org/3/tutorial/datastructures.html#nested-list-comprehensions)? You can put *two or more `for`s in one list comprehension*. For example:
```
>>> [i for x in [[1, 2, 3], [4, 5, 6]] for i in x]
[1, 2, 3, 4, 5, 6]
>>> [j for x in [[[1, 2], [3]], [[4, 5], [6]]] for i in x for j in i]
[1, 2, 3, 4, 5, 6]
```
Let's talk about the first part, `for x in [[1, 2, 3], [4, 5, 6]]` which gives `[1, 2, 3]` and `[4, 5, 6]`. Then, `for i in x` gives `1`, `2`, `3` and `4`, `5`, `6`.
**Warning:** You always need put `for x in [[1, 2, 3], [4, 5, 6]]` **before** `for i in x`:
```
>>> [j for j in x for x in [[1, 2, 3], [4, 5, 6]]]
Traceback (most recent call last):
File "<input>", line 1, in <module>
NameError: name 'x' is not defined
```
---
We also have *set comprehensions*, *dict comprehensions*, and *generator expressions*.
**set comprehensions** and list comprehensions are basically the same, but the former returns a *set* instead of a *list*:
```
>>> {x for x in [1, 1, 2, 3, 3, 1]}
{1, 2, 3}
```
It's the same as:
```
>>> set([i for i in [1, 1, 2, 3, 3, 1]])
{1, 2, 3}
```
A [**dict comprehension**](https://www.python.org/dev/peps/pep-0274/) *looks like* a set comprehension, but it uses `{key: value for key, value in ...}` or `{i: i for i in ...}` instead of `{i for i in ...}`.
For example:
```
>>> {i: i**2 for i in range(5)}
{0: 0, 1: 1, 2: 4, 3: 9, 4: 16}
```
And it equals:
```
>>> d = {}
>>> for i in range(5):
... d[i] = i**2
>>> d
{0: 0, 1: 1, 2: 4, 3: 9, 4: 16}
```
Does `(i for i in range(5))` give a *tuple*? No!, it's a [**generator expression**](https://www.python.org/dev/peps/pep-0289/). Which returns a **generator**:
```
>>> (i for i in range(5))
<generator object <genexpr> at 0x7f52703fbca8>
```
It's the same as:
```
>>> def gen():
... for i in range(5):
... yield i
>>> gen()
<generator object gen at 0x7f5270380db0>
```
And you can use it as a generator:
```
>>> gen = (i for i in range(5))
>>> next(gen)
0
>>> next(gen)
1
>>> list(gen)
[2, 3, 4]
>>> next(gen)
Traceback (most recent call last):
File "<input>", line 1, in <module>
StopIteration
```
**Note:** If you use a list comprehension *inside a function*, you don't need the `[]` if that function could loop over a generator. For example, [`sum()`](https://docs.python.org/3/library/functions.html#sum):
```
>>> sum(i**2 for i in range(5))
30
```
**Related** (about generators): [Understanding Generators in Python](http://stackoverflow.com/questions/1756096/understanding-generators-in-python). |
a pythonic way to write a constrain() function | 34,837,677 | 5 | 2016-01-17T11:13:13Z | 34,837,691 | 10 | 2016-01-17T11:14:30Z | [
"python"
] | What's the best way of writing a constrain function? Or is there already a built-in Python function that does this?
**Option 1:**
```
def constrain(val, min_val, max_val):
if val < min_val: return min_val
if val > max_val: return max_val
return val
```
**Option 2:**
```
def constrain(val, min_val, max_val):
if val < min_val:
val = min_val
elif val > max_val:
val = max_val
return val
``` | I do not know if this is the more "pythonic", but you can use built-in [`min()`](https://docs.python.org/3/library/functions.html#min) and [`max()`](https://docs.python.org/3/library/functions.html#max) like this:
```
def constrain(val, min_val, max_val):
return min(max_val, max(min_val, val))
``` |
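For example (the values are my own illustration):

```python
def constrain(val, min_val, max_val):
    # max() lifts val up to min_val; min() then caps it at max_val
    return min(max_val, max(min_val, val))

print(constrain(150, 0, 100))  # 100 (clamped to the upper bound)
print(constrain(-5, 0, 100))   # 0   (clamped to the lower bound)
print(constrain(42, 0, 100))   # 42  (already in range, unchanged)
```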
Can't accurately calculate pi on Python | 34,840,741 | 7 | 2016-01-17T16:38:24Z | 34,840,868 | 7 | 2016-01-17T16:50:36Z | [
"python",
"python-2.7",
"numeric",
"montecarlo",
"pi"
] | I am new member here and I'm gonna drive straight into this as I've spent my whole Sunday trying to get my head around it.
I'm new to Python, having previously learned coding on C++ to a basic-intermediate level (it was a 10-week university module).
I'm trying a couple of iterative techniques to calculate Pi but both are coming up slightly inaccurate and I'm not sure why.
The first method I was taught at university - I'm sure some of you have seen it done before.
```
x=0.0
y=0.0
incircle = 0.0
outcircle = 0.0
pi = 0.0
i = 0
while (i<100000):
x = random.uniform(-1,1)
y = random.uniform(-1,1)
if (x*x+y*y<=1):
incircle=incircle+1
else:
outcircle=outcircle+1
i=i+1
pi = (incircle/outcircle)
print pi
```
It's essentially a generator for random (x,y) co-ordinates on a plane from -1 to +1 on both axes. Then if x^2+y^2 <= 1, we know the point rests inside a circle of radius 1 within the box formed by the co-ordinate axes.
Depending on the position of the point, a counter increases for `incircle` or `outcircle`.
The value for pi is then the ratio of values inside and outside the circle. The co-ordinates are randomly generated so it should be an even spread.
However, even at very high iteration values, my result for Pi is always around the 3.65 mark.
The second method is another iteration which calculates the circumference of a polygon with increasing number of sides until the polygon is almost a circle, then, Pi=Circumference/diameter. (I sort of cheated because the coding has a math.cos(Pi) term so it looks like I'm using Pi to find Pi, but this is only because you can't easily use degrees to represent angles on Python). But even for high iterations the final result seems to end around 3.20, which again is wrong. The code is here:
```
S = 0.0
C = 0.0
L = 1.0
n = 2.0
k = 3.0
while (n<2000):
S = 2.0**k
L = L/(2.0*math.cos((math.pi)/(4.0*n)))
C = S*L
n=n+2.0
k=k+1.0
pi = C/math.sqrt(2.0)
print pi
```
I remember, when doing my C++ course, being told that the problem is a common one and it isn't due to the maths but because of something within the coding, however I can't remember exactly. It may be to do with the random number generation, or the limitations of using floating point numbers, or... anything really. It could even just be my mathematics...
Can anyone think what the issue is?
TL;DR: Trying to calculate Pi, I can get close to it but never very accurately, no matter how many iterations I do.
(Oh and another point - in the second code there's a line saying S=2.0\*\*k. If I set 'n' to anything higher than 2000, the value for S becomes too big to handle and the code crashes. How can I fix this?)
Thanks! | The algorithm for your first version should look more like this:
```
from __future__ import division, print_function
import sys
if sys.version_info.major < 3:
range = xrange
import random
incircle = 0
n = 100000
for _ in range(n):
    x = random.random()
    y = random.random()
    if x*x + y*y <= 1:
        incircle += 1
pi = (incircle / n) * 4
print(pi)
```
Prints:
```
3.14699146991
```
This is closer. Increase `n` to get even closer to pi.
The [algorithm](http://polymer.bu.edu/java/java/montepi/MontePi.html) takes into account only one quarter of the unit circle, i.e. with a radius of `1`.
The formula for the area of a quarter circle is:
```
area_c = (pi * r **2) / 4
```
The formula for the area of the square containing this circle is:
```
area_s = r **2
```
where `r` is the radius of the circle.
Now the ratio is:
```
area_c / area_s
```
Substitute the equations above, re-arrange, and you get:
```
pi = 4 * (area_c / area_s)
```
Going Monte Carlo, just replace both areas by the counts of a very large number of random points that land in them. Typically, the analogy of darts thrown randomly is used here. |
Import error with spacy: "No module named en" | 34,842,052 | 3 | 2016-01-17T18:32:53Z | 34,842,085 | 7 | 2016-01-17T18:36:05Z | [
"python",
"spacy"
] | I'm having trouble using the Python [spaCy library](https://spacy.io/). It seems to be installed correctly but at
```
from spacy.en import English
```
I get the following import error:
```
Traceback (most recent call last):
File "spacy.py", line 1, in <module>
from spacy.en import English
File "/home/user/CmdData/spacy.py", line 1, in <module>
from spacy.en import English
ImportError: No module named en
```
I'm not very familiar with Python but that's the standard import I saw online, and the library is installed:
```
$ pip list | grep spacy
spacy (0.99)
```
**EDIT**
I tested renaming the file, but that's not the problem. I also get the same error when doing:
```
$ python -m spacy.en.download --force all
/usr/bin/python: No module named en
```
(The command is supposed to download some models) | You are facing this error because you named your own file `spacy.py`. Rename your file, and everything should work. |
Merge multiple backups of the same table schema into 1 master table | 34,844,602 | 6 | 2016-01-17T22:36:24Z | 34,885,447 | 7 | 2016-01-19T19:33:16Z | [
"python",
"sql",
"sqlite"
] | I have about 200 copies of a SQLite database. All taken at different times with different data in them. Some rows are deleted and some are added. They are all in a single directory.
I want to merge all the rows in the table `my_table`, using all the `.db` files in the directory. I want duplicate rows to be deleted, showing all entires from all the databases, just once.
I'd like to do this in pure SQL, but I don't think it's possible, so we can use Python too.
Table definition:
```
CREATE TABLE my_table (
ROWID INTEGER PRIMARY KEY AUTOINCREMENT,
guid TEXT UNIQUE NOT NULL,
text TEXT,
replace INTEGER DEFAULT 0,
service_center TEXT,
handle_id INTEGER DEFAULT 0,
subject TEXT,
country TEXT,
attributedBody BLOB,
version INTEGER DEFAULT 0,
type INTEGER DEFAULT 0,
service TEXT,
account TEXT,
account_guid TEXT,
error INTEGER DEFAULT 0,
date INTEGER,
date_read INTEGER,
date_delivered INTEGER,
is_delivered INTEGER DEFAULT 0,
is_finished INTEGER DEFAULT 0,
is_emote INTEGER DEFAULT 0,
is_from_me INTEGER DEFAULT 0,
is_empty INTEGER DEFAULT 0,
is_delayed INTEGER DEFAULT 0,
is_auto_reply INTEGER DEFAULT 0,
is_prepared INTEGER DEFAULT 0,
is_read INTEGER DEFAULT 0,
is_system_message INTEGER DEFAULT 0,
is_sent INTEGER DEFAULT 0,
has_dd_results INTEGER DEFAULT 0,
is_service_message INTEGER DEFAULT 0,
is_forward INTEGER DEFAULT 0,
was_downgraded INTEGER DEFAULT 0,
is_archive INTEGER DEFAULT 0,
cache_has_attachments INTEGER DEFAULT 0,
cache_roomnames TEXT,
was_data_detected INTEGER DEFAULT 0,
was_deduplicated INTEGER DEFAULT 0,
is_audio_message INTEGER DEFAULT 0,
is_played INTEGER DEFAULT 0,
date_played INTEGER,
item_type INTEGER DEFAULT 0,
other_handle INTEGER DEFAULT -1,
group_title TEXT,
group_action_type INTEGER DEFAULT 0,
share_status INTEGER,
share_direction INTEGER,
is_expirable INTEGER DEFAULT 0,
expire_state INTEGER DEFAULT 0,
message_action_type INTEGER DEFAULT 0,
message_source INTEGER DEFAULT 0
)
``` | To be able to access both the master database and a snapshot at the same time, use [ATTACH](http://www.sqlite.org/lang_attach.html).
To delete an old version of a row, use [INSERT OR REPLACE](http://www.sqlite.org/lang_insert.html):
```
ATTACH 'snapshot123.db' AS snapshot;
INSERT OR REPLACE INTO main.my_table SELECT * FROM snapshot.my_table;
DETACH snapshot;
```
Do this with all databases, in order from oldest to newest.
(SQLite has no loop control mechanism for this; do this loop in Python.)
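That Python loop can be sketched like this (a sketch, not the answer's own code: the directory layout is an assumption, and `sorted()` assumes the snapshot filenames sort from oldest to newest):

```python
import glob
import sqlite3

def merge_snapshots(master_path, snapshot_dir):
    """Replay every snapshot's my_table into the master database, oldest first."""
    con = sqlite3.connect(master_path)
    for path in sorted(glob.glob(snapshot_dir + '/*.db')):
        con.execute("ATTACH DATABASE ? AS snapshot", (path,))
        con.execute("INSERT OR REPLACE INTO main.my_table "
                    "SELECT * FROM snapshot.my_table")
        con.commit()  # DETACH fails while a transaction is still open
        con.execute("DETACH DATABASE snapshot")
    con.close()
```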
Alternatively, you can go backwards, from newest to oldest, and insert only rows that do not yet exist:
```
ATTACH 'snapshot123.db' AS snapshot;
INSERT OR IGNORE INTO main.my_table SELECT * FROM snapshot.my_table;
DETACH snapshot;
``` |
convert entire pandas dataframe to integers in pandas (0.17.0) | 34,844,711 | 2 | 2016-01-17T22:48:58Z | 34,844,867 | 7 | 2016-01-17T23:05:50Z | [
"python",
"pandas"
] | My question is very similar to [this one](http://stackoverflow.com/questions/33126477/pandas-convert-objectsconvert-numeric-true-deprecated), but I need to convert my entire dataframe instead of just a series. The `to_numeric` function only works on one series at a time and is not a good replacement for the deprecated `convert_objects` command. Is there a way to get similar results to the `convert_objects(convert_numeric=True)` command in the new pandas release?
Thank you Mike Müller for your example. `df.apply(pd.to_numeric)` works very well if the values can all be converted to integers. What if in my dataframe I had strings that could not be converted into integers?
Example:
```
df = pd.DataFrame({'ints': ['3', '5'], 'Words': ['Kobe', 'Bryant']})
df.dtypes
Out[59]:
Words object
ints object
dtype: object
```
Then I could run the deprecated function and get:
```
df = df.convert_objects(convert_numeric=True)
df.dtypes
Out[60]:
Words object
ints int64
dtype: object
```
Running the `apply` command gives me errors, even with try and except handling. | ## All columns convertible
You can apply the function to all columns:
```
df.apply(pd.to_numeric)
```
Example:
```
>>> df = pd.DataFrame({'a': ['1', '2'], 'b': ['45.8', '73.9'], 'c': [10.5, 3.7]})
>>> df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 2 entries, 0 to 1
Data columns (total 3 columns):
a 2 non-null object
b 2 non-null object
c 2 non-null float64
dtypes: float64(1), object(2)
memory usage: 64.0+ bytes
>>> df.apply(pd.to_numeric).info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 2 entries, 0 to 1
Data columns (total 3 columns):
a 2 non-null int64
b 2 non-null float64
c 2 non-null float64
dtypes: float64(2), int64(1)
memory usage: 64.0 bytes
```
## Not all columns convertible
`pd.to_numeric` has the keyword argument `errors`:
> ```
> Signature: pd.to_numeric(arg, errors='raise')
> Docstring:
> Convert argument to a numeric type.
>
> Parameters
> ----------
> arg : list, tuple or array of objects, or Series
> errors : {'ignore', 'raise', 'coerce'}, default 'raise'
> - If 'raise', then invalid parsing will raise an exception
> - If 'coerce', then invalid parsing will be set as NaN
> - If 'ignore', then invalid parsing will return the input
> ```
Setting it to `ignore` will return the column unchanged if it cannot be converted into a numeric type.
As pointed out by Anton Protopopov, the most elegant way is to supply `ignore` as keyword argument to `apply()`:
```
>>> df = pd.DataFrame({'ints': ['3', '5'], 'Words': ['Kobe', 'Bryant']})
>>> df.apply(pd.to_numeric, errors='ignore').info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 2 entries, 0 to 1
Data columns (total 2 columns):
Words 2 non-null object
ints 2 non-null int64
dtypes: int64(1), object(1)
memory usage: 48.0+ bytes
```
My previously suggested way, using [partial](https://docs.python.org/3/library/functools.html?highlight=partial#functools.partial) from the module `functools`, is more verbose:
```
>>> from functools import partial
>>> df = pd.DataFrame({'ints': ['3', '5'], 'Words': ['Kobe', 'Bryant']})
>>> df.apply(partial(pd.to_numeric, errors='ignore')).info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 2 entries, 0 to 1
Data columns (total 2 columns):
Words 2 non-null object
ints 2 non-null int64
dtypes: int64(1), object(1)
memory usage: 48.0+ bytes
``` |
Remove arrays from nested arrays based on first element of each array | 34,848,584 | 2 | 2016-01-18T06:37:39Z | 34,848,641 | 8 | 2016-01-18T06:41:10Z | [
"python",
"list"
] | I have two nested arrays say
```
a=[[1,2,3],[2,4,7],[4,2,8],[3,5,7],[6,1,2]]
b=[[1,6,7],[2,4,9],[4,3,5],[3,10,2],[5,3,2],[7,2,1]]
```
I want to only keep those arrays in `b` whose first element is not common to the first elements of the arrays in `a`, so for these two we should get
```
c=[[5,3,2],[7,2,1]]
```
Is there a way to do this in Python? | You may do it like this:
```
>>> a=[[1,2,3],[2,4,7],[4,2,8],[3,5,7],[6,1,2]]
>>> b=[[1,6,7],[2,4,9],[4,3,5],[3,10,2],[5,3,2],[7,2,1]]
>>> [i for i in b if i[0] not in [j[0] for j in a]]
[[5, 3, 2], [7, 2, 1]]
>>>
```
or
```
>>> k = [j[0] for j in a]
>>> [i for i in b if i[0] not in k]
[[5, 3, 2], [7, 2, 1]]
``` |
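If `a` and `b` are large, the membership test can use a `set` of first elements for constant-time lookups (my own variation, not part of the answer above):

```python
a = [[1, 2, 3], [2, 4, 7], [4, 2, 8], [3, 5, 7], [6, 1, 2]]
b = [[1, 6, 7], [2, 4, 9], [4, 3, 5], [3, 10, 2], [5, 3, 2], [7, 2, 1]]

first_elems = {row[0] for row in a}  # {1, 2, 3, 4, 6}
c = [row for row in b if row[0] not in first_elems]
print(c)  # [[5, 3, 2], [7, 2, 1]]
```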
Jupyter: can't create new notebook? | 34,851,801 | 7 | 2016-01-18T09:56:33Z | 35,518,674 | 12 | 2016-02-20T03:13:33Z | [
"python",
"ipython-notebook",
"jupyter",
"jupyter-notebook"
] | I have some existing Python code that I want to convert to a Jupyter notebook. I have run:
```
jupyter notebook
```
Now I can see this in my browser:
[](http://i.stack.imgur.com/Qsz65.png)
But how do I create a new notebook? The `Notebook` link in the menu is greyed out, and I can't see any other options to create a new notebook.
I've noticed this on the command line while Jupyter is running:
```
[W 22:30:08.128 NotebookApp] Native kernel (python2) is not available
``` | None of the other answers worked for me on Ubuntu 14.04. After 2 days of struggling, I finally realized that I needed to install the latest version of IPython (not the one in pip). First, I uninstalled ipython from my system with:
```
sudo apt-get --purge remove ipython
sudo pip uninstall ipython
```
I don't know if you need both, but both did something on my system.
Then, I installed ipython from source like this:
```
git clone https://github.com/ipython/ipython.git
cd ipython
sudo pip install -e .
```
Note the period at the end of the last line. After this, I reran jupyter notebook and the python2 kernel was detected! |
Cannot import cv2 in python in OSX | 34,853,220 | 6 | 2016-01-18T11:07:33Z | 34,853,347 | 15 | 2016-01-18T11:13:12Z | [
"python",
"opencv"
] | I have installed OpenCV 3.1 in my Mac, cv2 is also installed through `pip install cv2`.
```
vinllen@ $ pip install cv2
You are using pip version 7.1.0, however version 7.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Requirement already satisfied (use --upgrade to upgrade): cv2 in /usr/local/lib/python2.7/site-packages
```
But it looks like `cv2` and `cv` cannot be used:
```
Python 2.7.10 (default, Jul 13 2015, 12:05:58)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named cv2
>>> import cv
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named cv
```
I have tried almost all the solutions listed online, but none of them work. | **I do not know what `pip install cv2` actually installs... but it is surely *not* OpenCV.** `pip install cv2` actually installs [this](https://pypi.python.org/pypi/cv2/1.0), which is some package of *blog distribution utilities*; whatever it is, it is **not** OpenCV.
---
To properly install OpenCV, check any of the links @udit043 added in the comment, or refer to any of the tutorials below:
Find here a tutorial on how to install OpenCV on OS X:
<http://www.pyimagesearch.com/2015/06/15/install-opencv-3-0-and-python-2-7-on-osx/>
You need to actually compile OpenCV from source and activate python bindings, which takes a while.
Another option is to use `brew` to get OpenCV, but that doesn't necessarily get you the latest version, nor a fully optimized one:
<http://www.mobileway.net/2015/02/14/install-opencv-for-python-on-mac-os-x/> |
Enumerating three variables in python list comprehension | 34,853,508 | 12 | 2016-01-18T11:21:19Z | 34,853,551 | 14 | 2016-01-18T11:23:28Z | [
"python",
"list-comprehension"
] | I am trying to print all the possible enumerations of a list for three variables. For example if my input is:
```
x = 1
y = 1
z = 1
```
I want the output to be like:
```
[[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]]
```
If any of the x,y,z variables are higher than 1, it would enumerate all the integers from 0 to the variable value. For example, if x=3 then 0, 1, 2, or 3 would be possible in the first slot of the 3-element lists.
Right now I am creating the list comprehension like this:
```
output = [ [x,y,z] for x,y,z in range(x,y,z)]
```
I think something is wrong with the range function? | You could use the `product()` function from `itertools` as follows:
```
from itertools import product
answer = list(list(x) for x in product([0, 1], repeat=3))
print(answer)
```
**Output**
```
[[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1], [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]]
``` |
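For the general case described in the question (each of `x`, `y`, `z` may be larger than 1), `product()` can take one `range` per variable instead of `repeat=3`; a sketch:

```python
from itertools import product

x, y, z = 1, 2, 1
answer = [list(t) for t in product(range(x + 1), range(y + 1), range(z + 1))]
print(len(answer))            # 12 combinations: (x+1) * (y+1) * (z+1)
print(answer[0], answer[-1])  # [0, 0, 0] [1, 2, 1]
```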
Enumerating three variables in python list comprehension | 34,853,508 | 12 | 2016-01-18T11:21:19Z | 34,853,772 | 8 | 2016-01-18T11:35:49Z | [
"python",
"list-comprehension"
] | I am trying to print all the possible enumerations of a list for three variables. For example if my input is:
```
x = 1
y = 1
z = 1
```
I want the output to be like:
```
[[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]]
```
If any of the x,y,z variables are higher than 1, it would enumerate all the integers from 0 to the variable value. For example, if x=3 then 0, 1, 2, or 3 would be possible in the first slot of the 3-element lists.
Right now I am creating the list comprehension like this:
```
output = [ [x,y,z] for x,y,z in range(x,y,z)]
```
I think something is wrong with the range function? | Complementary to the solutions using `product`, you could also use a triple list comprehension.
```
>>> x, y, z = 1, 2, 3
>>> [(a, b, c) for a in range(x+1) for b in range(y+1) for c in range(z+1)]
[(0, 0, 0),
(0, 0, 1),
(0, 0, 2),
(some more),
(1, 2, 2),
(1, 2, 3)]
```
The `+1` is necessary since `range` does not include the upper bound.
If you want the output to be a list of lists, you can just do `[[a, b, c] for ...]`.
Note, however, that this will obviously only work is you always have three variables (`x`, `y`, `z`), while `product` would work with an arbitrary number of lists/upper limits. |
AttributeError: 'str' object has no attribute 'regex' django 1.9 | 34,853,531 | 8 | 2016-01-18T11:22:17Z | 34,854,283 | 10 | 2016-01-18T11:59:57Z | [
"python",
"django",
"windows",
"django-1.9"
] | I am working with django 1.9 and I am currently coding - in Windows Command Prompt - `python manage.py makemigrations` and the error:
***AttributeError: 'str' object has no attribute 'regex'***
I have tried coding:
```
url(r'^$', 'firstsite.module.views.start', name="home"),
url(r'^admin/', include(admin.site.urls)),
url(r'^$', 'django.contrib.auth.views.login', {'template_name': 'login.html'}, name='login'),
url(r'^signup/$', 'exam.views.signup', name='signup'),
url(r'^signup/submit/$', 'exam.views.signup_submit', name='signup_submit')
```
in urls.py and the error is keeps coming up.
This is my first time coding in django, so my expertise is very limited. Thank you in advance.
This is the whole urls.py:
```
from django.conf.urls import patterns, include, url
import django
# Uncomment the next two lines to enable the admin:
from django.contrib import admin
admin.autodiscover()
from django.conf.urls.static import static
from django.conf import settings
urlpatterns = patterns('',
# Examples:
# url(r'^$', 'firstsite.views.home', name='home'),
# url(r'^firstsite/', include('firstsite.foo.urls')),
# Uncomment the admin/doc line below to enable admin documentation:
# url(r'^admin/doc/', include('django.contrib.admindocs.urls')),
# Uncomment the next line to enable the admin:
#url(r'^admin/', include(admin.site.urls)),
django.conf.urls.handler400,
url(r'^$', 'firstsite.module.views.start', name="home"),
url(r'^admin/', include(admin.site.urls)),
url(r'^$', 'django.contrib.auth.views.login', {'template_name': 'login.html'}, name='login'),
url(r'^signup/$', 'exam.views.signup', name='signup'),
url(r'^signup/submit/$', 'exam.views.signup_submit', name='signup_submit'),
)
``` | Firstly, remove the `django.conf.urls.handler400` from the middle of the urlpatterns. It doesn't belong there, and is the cause of the error.
Once the error has been fixed, you can make a couple of changes to update your code for Django 1.8+
1. Change `urlpatterns` to a list, instead of using `patterns()`
2. Import the views (or view modules), instead of using strings in your `urls()`
3. You are using the same regex for the `start` and `login` views. This means you won't be able to reach the login views. One fix would be to change the regex for the login view to something like `^login/$`
Putting that together, you get something like:
```
from firstsite.module.views import start
from exam import views as exam_views
from django.contrib.auth import views as auth_views
urlpatterns = [
url(r'^$', start, name="home"),
url(r'^admin/', include(admin.site.urls)),
url(r'^login/$', auth_views.login, {'template_name': 'login.html'}, name='login'),
url(r'^signup/$', exam_views.signup, name='signup'),
url(r'^signup/submit/$', exam_views.signup_submit, name='signup_submit'),
]
``` |
Is unsetting a single bit in flags safe with Python variable-length integers? | 34,855,777 | 28 | 2016-01-18T13:20:59Z | 34,856,274 | 10 | 2016-01-18T13:46:51Z | [
"python",
"bit-manipulation"
] | In my program (written in Python 3.4) I have a variable which contains various flags, so for example:
```
FLAG_ONE = 0b1
FLAG_TWO = 0b10
FLAG_THREE = 0b100
status = FLAG_ONE | FLAG_TWO | FLAG_THREE
```
Setting another flag can easily be done with
```
status |= FLAG_FOUR
```
But what if I explicitly want to clear a flag? I'd do
```
status &= ~FLAG_THREE
```
Is this approach safe? As the size of an integer in Python is not defined, what if `status` and `FLAG_THREE` differ in size?
(`status` needs to be a bit field because I need this value for a hardware protocol.) | Clearing a flag works with
```
status &= ~FLAG_THREE
```
because Python treats those negated values as negative:
```
>>> ~1L
-2L
>>> ~1
-2
>>> ~2
-3
```
Thus the `&` operator can act appropriately and yield the wanted result irrespective of the length of the operands, so `0b11111111111111111111111111111111111111111111111111111111111 & ~1` works fine although the left-hand operand is bigger than the right-hand one.
In the other direction (RH longer than LH), it works nevertheless, because having an excess number of `1` bits doesn't matter. |
Is unsetting a single bit in flags safe with Python variable-length integers? | 34,855,777 | 28 | 2016-01-18T13:20:59Z | 34,856,371 | 13 | 2016-01-18T13:51:37Z | [
"python",
"bit-manipulation"
] | In my program (written in Python 3.4) I have a variable which contains various flags, so for example:
```
FLAG_ONE = 0b1
FLAG_TWO = 0b10
FLAG_THREE = 0b100
status = FLAG_ONE | FLAG_TWO | FLAG_THREE
```
Setting another flag can easily be done with
```
status |= FLAG_FOUR
```
But what if I explicitly want to clear a flag? I'd do
```
status &= ~FLAG_THREE
```
Is this approach safe? As the size of an integer in Python is not defined, what if `status` and `FLAG_THREE` differ in size?
(`status` needs to be a bit field because I need this value for a hardware protocol.) | You should be safe using that approach, yes.
`~` in Python is simply implemented as `-(x+1)` (cf. the [CPython source](https://hg.python.org/cpython/file/d8f48717b74e/Objects/longobject.c#l3985)) and negative numbers are treated as if they have any required number of 1s padding the start. From the [Python Wiki](https://wiki.python.org/moin/BitwiseOperators):
> Of course, Python doesn't use 8-bit numbers. It USED to use however many bits were native to your machine, but since that was non-portable, it has recently switched to using an INFINITE number of bits. Thus the number -5 is treated by bitwise operators as if it were written "...1111111111111111111011".
In other words, with bitwise-and `&` you're guaranteed that those 1s will pad the length of `~FLAG` (a negative integer) to the length of `status`. For example:
```
100000010000 # status
& ~10000 # ~FLAG
```
is treated as
```
100000010000
& 111111101111
= 100000000000 # new status
```
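That worked example is easy to check directly:

```python
status = 0b100000010000
FLAG = 0b10000

status &= ~FLAG     # ~FLAG == -(FLAG + 1); the infinite 1-padding makes & safe at any width
print(bin(status))  # 0b100000000000

status &= ~FLAG     # clearing an already-cleared flag is a no-op
print(bin(status))  # 0b100000000000
```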
This behaviour is described in a comment in the source [here](https://hg.python.org/cpython/file/d8f48717b74e/Objects/longobject.c#l4202). |
How can I pass union/intersection methods as a parameter in Python? | 34,856,650 | 2 | 2016-01-18T14:05:38Z | 34,856,705 | 7 | 2016-01-18T14:08:19Z | [
"python",
"parameter-passing"
] | Can I pass the union and intersection methods of the [Set](https://docs.python.org/2/library/sets.html) Python class as an argument?
I am searching for direct way to do this, without using additional lambda or regular functions. | Methods are first-class objects, just like other functions. You can pass it bound:
```
x = set([1,2,3])
my_function(x.union)
```
or unbound
```
my_function(set.union)
```
as necessary.
Example:
```
def test(s1, s2, op):
return op(s1,s2)
test(set([1]), set([2]), set.union) # set([1, 2])
``` |
Unexpected behavior of python builtin str function | 34,859,471 | 6 | 2016-01-18T16:25:43Z | 34,860,001 | 7 | 2016-01-18T16:51:41Z | [
"python"
] | I am running into an issue with subtyping the str class because of the `str.__call__` behavior I apparently do not understand.
This is best illustrated by the simplified code below.
```
class S(str):
def __init__(self, s: str):
assert isinstance(s, str)
print(s)
class C:
def __init__(self, s: str):
self.s = S(s)
def __str__(self):
return self.s
c = C("a") # -> prints "a"
c.__str__() # -> does not print "a"
str(c) # -> asserts fails in debug mode, else prints "a" as well!?
```
I always thought the `str(obj)` function simply calls the `obj.__str__` method, and that's it. But for some reason it also calls the `__init__` function of `S` again.
Can someone explain the behavior and how I can avoid that `S.__init__` is called on the result of `C.__str__` when using the `str()` function? | Strictly speaking, `str` isn't a function. It's a type. When you call `str(c)`, Python goes through the normal procedure for generating an instance of a type, calling `str.__new__(str, c)` to create the object (or reuse an existing object), **and then calling the `__init__` method of the result to initialize it**.
[`str.__new__(str, c)`](https://hg.python.org/cpython/file/2.7/Objects/stringobject.c#l3697) calls the C-level function [`PyObject_Str`](https://hg.python.org/cpython/file/2.7/Objects/object.c#l448), which calls [`_PyObject_Str`](https://hg.python.org/cpython/file/2.7/Objects/object.c#l406), which calls your `__str__` method. The result is an instance of `S`, so it counts as a string, and `_PyObject_Str` decides this is good enough rather than trying to coerce an object with `type(obj) is str` out of the result. Thus, `str.__new__(str, c)` returns `c.s`.
Now we get to `__init__`. Since the argument to `str` was `c`, this also gets passed to `__init__`, so Python calls `c.s.__init__(c)`. `__init__` calls `print(c)`, which you might think would call `str(c)` and lead to infinite recursion. However, the [`PRINT_ITEM`](https://hg.python.org/cpython/file/2.7/Python/ceval.c#l1962) opcode calls the C-level [PyFile\_WriteObject](https://hg.python.org/cpython/file/2.7/Objects/fileobject.c#l2546) to write the object, and that calls `PyObject_Str` instead of `str`, so it skips the `__init__` and doesn't recurse infinitely. Instead, it calls `c.__str__()` and prints the resulting `S` instance, as the `S` instance is a string. |
What causes the '' in ['h', 'e', 'l', 'l', 'o', ''] when you do re.findall('[\w]?', 'hello') | 34,860,722 | 25 | 2016-01-18T17:33:00Z | 34,860,788 | 40 | 2016-01-18T17:36:50Z | [
"python",
"regex"
] | What causes the `''` in `['h', 'e', 'l', 'l', 'o', '']` when you do `re.findall('[\w]?', 'hello')`. I thought the result would be `['h', 'e', 'l', 'l', 'o']`, without the last empty string. | The question mark in your regex (`'[\w]?'`) is responsible for the empty string being one of the returned results.
A question mark is a quantifier meaning "zero-or-one matches." You are asking for all occurrences of either zero-or-one "word characters". The letters satisfy the "-or-one word characters" match. The empty string satisfies the "zero word characters" match condition.
Change your regex to `'\w'` (remove the question mark and superfluous character class brackets) and the output will be as you expect. |
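Both behaviours side by side (the trailing `''` is the zero-width match at the end of the string):

```python
import re

print(re.findall(r'\w?', 'hello'))  # ['h', 'e', 'l', 'l', 'o', '']
print(re.findall(r'\w', 'hello'))   # ['h', 'e', 'l', 'l', 'o']
```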
Build a dictionary with a regular expression | 34,864,409 | 2 | 2016-01-18T21:29:19Z | 34,864,444 | 8 | 2016-01-18T21:31:40Z | [
"python",
"python-2.7"
] | I am getting some data through a serial connection which I'd like to process so I can do more with it.
---
My Python script gets the variable which looks like:
```
data = "P600F600"
```
and my goal is to get this:
```
finaldata = {
'P': 600,
'F': 600
}
```
---
I like regular expressions and my input format is very strict so I've devised this RegEx to grab the data:
```
/([A-Z])(\d+)/
```
---
Based on my limited knowledge of Python, I've devised this.
```
finadata = eval( '{' + re.sub(r"([A-Z])(\d+)", r"'\1':\2,", data) + '}' )
```
but this is clearly a horrible and extremely hacky solution. | In this case, `re.findall` seems to be really helpful:
```
>>> import re
>>> re.findall('([A-Z])(\d+)', 'P600F600')
[('P', '600'), ('F', '600')]
```
It just so happens that a dict can be built from this directly:
```
>>> dict(re.findall('([A-Z])(\d+)', 'P600F600'))
{'P': '600', 'F': '600'}
```
Of course, this leaves you with string values rather than integer values. To get ints, you'd need to construct them more explicitly:
```
>>> items = re.findall('([A-Z])(\d+)', 'P600F600')
>>> {key: int(value) for key, value in items}
{'P': 600, 'F': 600}
```
Or for python2.6- compatibility:
```
>>> dict((key, int(value)) for key, value in items)
{'P': 600, 'F': 600}
``` |
use conda environment in sublime text 3 | 34,865,717 | 4 | 2016-01-18T23:03:30Z | 34,866,051 | 8 | 2016-01-18T23:32:08Z | [
"python",
"sublimetext3",
"anaconda"
] | Using Sublime Text 3, how can I build a python file using a conda environment that I've created as in <http://conda.pydata.org/docs/using/envs.html> | A standard Python [`.sublime-build`](http://docs.sublimetext.info/en/latest/file_processing/build_systems.html) file looks like this:
```
{
"cmd": ["/path/to/python", "-u", "$file"],
"file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
"selector": "source.python"
}
```
All you need to do to use a particular `conda` environment is modify the path to the `python` or `python3` executable within the environment. To find it, activate your environment and type
```
which python
```
or
```
which python3
```
(depending on the version you're using), then copy the path into your custom `.sublime-build` file. Save the file in your `Packages/User` directory, then make sure you pick the right one via **`Tools -> Build System`** before building. |
What is the proper way to determine if an object is a bytes-like object in Python? | 34,869,889 | 15 | 2016-01-19T06:28:09Z | 34,870,210 | 7 | 2016-01-19T06:49:25Z | [
"python",
"python-3.x"
] | I have code that expects `str` but will handle the case of being passed `bytes` in the following way:
```
if isinstance(data, bytes):
data = data.decode()
```
Unfortunately, this does not work in the case of `bytearray`. Is there a more generic way to test whether an object is either `bytes` or `bytearray`, or should I just check for both? Is `hasattr('decode')` as bad as I feel it would be? | There are a few approaches you could use here.
# Duck typing
Since Python is [duck typed](http://en.wikipedia.org/wiki/Duck_typing), you could simply do as follows (which seems to be the way usually suggested):
```
try:
data = data.decode()
except AttributeError:
pass
```
You could use `hasattr` as you describe, however, and it'd probably be fine. This is, of course, assuming the `.decode()` method for the given object returns a string, and has no nasty side effects.
I personally recommend either the exception or `hasattr` method, but whatever you use is up to you.
# Use str()
This approach is uncommon, but is possible:
```
data = str(data, "utf-8")
```
Other encodings are permissible, just like with the buffer protocol's `.decode()`. You can also pass a third parameter to specify error handling.
# Single-dispatch generic functions (Python 3.4+)
Python 3.4 and above include a nifty feature called single-dispatch generic functions, via [functools.singledispatch](https://docs.python.org/3/library/functools.html#functools.singledispatch). This is a bit more verbose, but it's also more explicit:
```
from functools import singledispatch

@singledispatch
def func(data):
    # This is the generic implementation
    data = data.decode()
    ...

@func.register(str)
def _(data):
    # data will already be a string
    ...
```
You could also make special handlers for `bytearray` and `bytes` objects if you so chose.
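For example, a complete runnable sketch of that idea (the name `to_text` and the fallback behaviour are illustrative, not from the question):

```python
from functools import singledispatch

@singledispatch
def to_text(data):
    # Fallback for types we don't know how to convert.
    raise TypeError("cannot convert %r to str" % type(data))

@to_text.register(str)
def _(data):
    return data

@to_text.register(bytes)
@to_text.register(bytearray)
def _(data):
    # register() returns the function, so the decorators stack.
    return data.decode()
```

Here `to_text(b"abc")`, `to_text(bytearray(b"abc"))` and `to_text("abc")` all return `'abc'`, while unsupported types raise `TypeError`.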
**Beware**: single-dispatch functions dispatch only on the type of the first argument! This is an intentional design decision, see [PEP 443](https://www.python.org/dev/peps/pep-0443/). |
What does tf.nn.embedding_lookup function do? | 34,870,614 | 18 | 2016-01-19T07:14:40Z | 34,877,590 | 31 | 2016-01-19T13:05:06Z | [
"python",
"embedding",
"tensorflow"
] | ```
tf.nn.embedding_lookup(params, ids, partition_strategy='mod', name=None)
```
I cannot understand the duty of this function. Is it like a lookup table? which means return the param corresponding for each id (in ids)?
For instance, in the `skip-gram` model if we use `tf.nn.embedding_lookup(embeddings, train_inputs)`, then for each `train_input` it finds the corresponding embedding? | The `embedding_lookup` function retrieves rows of the `params` tensor. The behavior is similar to using indexing with arrays in numpy. E.g.
```
matrix = np.random.random([1024, 64]) # 64-dimensional embeddings
ids = np.array([0, 5, 17, 33])
print matrix[ids] # prints a matrix of shape [4, 64]
```
The `params` argument can also be a list of tensors, in which case the ids will be distributed among the tensors. E.g. given a list of 3 [2, 64] tensors the default behavior is that they will represent ids: [0, 3], [1, 4], [2, 5]. `partition_strategy` controls how the ids are distributed among the list. The partitioning is useful for larger scale problems when the matrix might be too large to keep in one piece. |
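The default `'mod'` strategy can be sketched in plain numpy (illustrative only; real partitioned lookups should go through `tf.nn.embedding_lookup`):

```python
import numpy as np

# Three [2, 64] partitions standing in for a list of param tensors.
shards = [np.arange(s * 128, s * 128 + 128).reshape(2, 64)
          for s in range(3)]

def mod_lookup(shards, ids):
    # 'mod' strategy: id i lives in shard i % n, at row i // n.
    n = len(shards)
    return np.stack([shards[i % n][i // n] for i in ids])

# Shard 0 holds ids 0 and 3, shard 1 holds 1 and 4, shard 2 holds 2 and 5:
rows = mod_lookup(shards, [0, 3, 5])  # shape [3, 64]
```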
Trace Bug which happends only sometimes in CI | 34,872,918 | 7 | 2016-01-19T09:25:01Z | 34,948,768 | 8 | 2016-01-22T14:13:41Z | [
"python",
"debugging",
"trace"
] | I have a strange bug in python code which only happens sometimes in CI.
We can't reproduce it.
Here is the test code:
```
response=self.admin_client.post(url, post)
self.assertEqual(200, response.status_code, response)
```
Sometimes we get a 302, which happens because the form gets saved.
My idea to debug this:
```
with some_magic_trace.trace() as trace:
response=self.admin_client.post(url, post)
self.assertEqual(200, response.status_code, trace)
```
The trace should contain the python lines (filename, line offset, line as string) executed by the interpreter.
How to implement `some_magic_trace.trace()`? | The [trace](https://docs.python.org/3.5/library/trace.html#module-trace) module gives you a very simple solution (different from what you are asking for, but simple enough to be worth a try).
```
from trace import Trace
tracer = Trace()
response = tracer.runfunc(self.admin_client.post, url, post)
self.assertEqual(200, response.status_code, response)
```
A more complex solution, a context manager that saves the trace and prints it only on exceptions, requires the use of [sys.settrace](https://docs.python.org/3.5/library/sys.html?highlight=settrace#sys.settrace).
Just a template for your own implementation could be:
```
class MyTracer():
def __init__(self):
self.trace = None
def newscope(self, frame, event, arg):
## real work should be done here, just minimal example
self.trace.append((frame, event, arg))
return None
def pprint(self):
## real pretty printing of trace info should be done here
print(self.trace)
def __enter__(self):
self.trace = []
sys.settrace(self.newscope)
return self
def __exit__(self, exc_type, exc_val, exc_tb):
sys.settrace(None)
if exc_type is not None:
self.pprint()
## print some info gathered from exc_type, exc_val, exc_tb
```
Then you can:
```
with MyTracer():
response=self.admin_client.post(url, post)
self.assertEqual(200, response.status_code, response)
```
The idea is that a MyTracer instance has a tracer method `newscope` that saves some useful info in `self.trace`. On an abnormal exit from the context the `pprint` method is called; on a normal exit the trace info is discarded.
Most of the work has to be done in the tracing method `newscope`.
Some concrete examples of tracing functions can be found [here](https://pymotw.com/2/sys/tracing.html). |
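To capture exactly what the question asks for (filename, line number, source line), the tracing method might look like this sketch (`LineTracer` and `request` are illustrative names; `linecache` resolves the source text):

```python
import linecache
import sys

class LineTracer:
    """Collect (filename, lineno, source line) for every executed line."""
    def __init__(self):
        self.trace = []

    def _trace(self, frame, event, arg):
        if event == "line":
            fn = frame.f_code.co_filename
            no = frame.f_lineno
            self.trace.append((fn, no, linecache.getline(fn, no).rstrip()))
        return self._trace  # keep receiving events for this frame

    def __enter__(self):
        sys.settrace(self._trace)
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        sys.settrace(None)

def request():  # stand-in for self.admin_client.post(url, post)
    status = 200
    return status

with LineTracer() as t:
    response = request()
# t.trace now holds one tuple per executed line inside request()
```

On a failing assertion you could then dump `t.trace` to see every line the interpreter ran before the unexpected 302.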
Find the coordinates of a cuboid using list comprehension in Python | 34,873,322 | 4 | 2016-01-19T09:43:19Z | 34,873,400 | 8 | 2016-01-19T09:46:57Z | [
"python",
"list",
"list-comprehension"
] | `X`, `Y` and `Z` are the three coordinates of a cuboid.
Now X=1,Y=1, Z=1 and N=2.
I have to generate a list of all possible coordinates on a 3D grid where the sum of Xi + Yi + Zi is not equal to N. If X=2, the possible values of Xi can be 0, 1 and 2. The same applies to Y and Z.
I have written the code below so far, and it gives this output:
```
[[0, 0, 0]]
```
however the expected output is
```
[[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 1]]
```
Below is my code; what is going wrong?
```
[[x,y,z] for x in range(X) for y in range(Y) for z in range(Z) if x+y+z != N]
``` | `range` produces a half-open interval: the ending value is never included in the result.
> If X=2, the possible values of Xi can be 0, 1 and 2
In your code, `range(X)` will give only `0` and `1`, if `X` is 2. You should have used `range(X + 1)`.
```
>>> X, Y, Z, N = 1, 1, 1, 2
>>> [[x,y,z] for x in range(X + 1) for y in range(Y + 1) for z in range(Z + 1) if x+y+z != N]
[[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 1]]
```
You can write the same, with `itertools.product`, like this
```
>>> X, Y, Z, N = 1, 1, 1, 2
>>> from itertools import product
>>> [list(i) for i in product(range(X + 1), range(Y + 1), range(Z + 1)) if sum(i) != N]
[[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 1]]
``` |
Pythonic way to "round()" like Javascript "Math.round()"? | 34,876,585 | 2 | 2016-01-19T12:15:45Z | 34,876,996 | 7 | 2016-01-19T12:35:51Z | [
"javascript",
"python",
"python-3.x",
"math"
] | I want the most Pythonic way to round numbers just like Javascript does (through `Math.round()`). They're actually slightly different, but this difference can make huge difference for my application.
**Using `round()` method from Python 3:**
```
# Returns the value 20
x = round(20.49)
# Returns the value 20
x = round(20.5)
# Returns the value -20
x = round(-20.5)
# Returns the value -21
x = round(-20.51)
```
**Using `Math.round()` method from Javascript\*:**
```
// Returns the value 20
x = Math.round(20.49);
// Returns the value 21
x = Math.round(20.5);
// Returns the value -20
x = Math.round(-20.5);
// Returns the value -21
x = Math.round(-20.51);
```
Thank you!
**References:**
* [Math.round() explanation at Mozilla Developers Network (MDN)](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/round) | ```
import math
def roundthemnumbers(value):
x = math.floor(value)
if (value - x) < .50:
return x
else:
return math.ceil(value)
```
Haven't had my coffee yet, but that function should do what you need. Maybe with some minor revisions. |
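A shorter equivalent sketch of the same idea (halves always round toward positive infinity; beware floating-point edge cases such as `0.49999999999999994`):

```python
import math

def js_round(value):
    # Mimics JavaScript's Math.round: 0.5 always rounds up (toward +inf).
    return math.floor(value + 0.5)

js_round(20.49)   # 20
js_round(20.5)    # 21
js_round(-20.5)   # -20
js_round(-20.51)  # -21
```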
Seeking Elegant Python Dice Iteration | 34,876,784 | 15 | 2016-01-19T12:25:25Z | 34,877,016 | 28 | 2016-01-19T12:36:40Z | [
"python"
] | Is there an elegant way to iterate through possible dice rolls with up to five dice?
I want to replace this hacky Python:
```
self.rolls[0] = [str(a) for a in range(1,7)]
self.rolls[1] = [''.join([str(a), str(b)])
for a in range(1, 7)
for b in range(1, 7)
if a <= b]
self.rolls[2] = [''.join([str(a), str(b), str(c)])
for a in range(1, 7)
for b in range(1, 7)
for c in range(1, 7)
if a <= b <= c]
self.rolls[3] = [''.join([str(a), str(b), str(c), str(d)])
for a in range(1, 7)
for b in range(1, 7)
for c in range(1, 7)
for d in range(1, 7)
if a <= b <= c <= d]
self.rolls[4] = [''.join([str(a), str(b), str(c), str(d), str(e)])
for a in range(1, 7)
for b in range(1, 7)
for c in range(1, 7)
for d in range(1, 7)
for e in range(1, 7)
if a <= b <= c <= d <= e]
``` | You can use [**`itertools`**](https://docs.python.org/2/library/itertools.html)' [**`combinations_with_replacement`**](https://docs.python.org/2/library/itertools.html#itertools.combinations_with_replacement).
For example with 3 4-sided dice (just because the output isn't *too* large):
```
>>> from itertools import combinations_with_replacement
>>> dice = 3
>>> faces = 4
>>> list(combinations_with_replacement(range(1, faces+1), dice))
[(1, 1, 1),
(1, 1, 2),
(1, 1, 3),
(1, 1, 4),
(1, 2, 2),
(1, 2, 3),
(1, 2, 4),
(1, 3, 3),
(1, 3, 4),
(1, 4, 4),
(2, 2, 2),
(2, 2, 3),
(2, 2, 4),
(2, 3, 3),
(2, 3, 4),
(2, 4, 4),
(3, 3, 3),
(3, 3, 4),
(3, 4, 4),
(4, 4, 4)]
``` |
Unexpected behaviour when indexing a 2D np.array with two boolean arrays | 34,877,314 | 10 | 2016-01-19T12:51:08Z | 34,877,540 | 8 | 2016-01-19T13:02:54Z | [
"python",
"arrays",
"numpy",
"indexing",
"slice"
] | ```
two_d = np.array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24]])
first = np.array((True, True, False, False, False))
second = np.array((False, False, False, True, True))
```
Now, when I enter:
```
two_d[first, second]
```
I get:
```
array([3,9])
```
which doesn't make a whole lot of sense to me. Can anybody explain that simply? | When given multiple boolean arrays to index with, NumPy pairs up the indices of the True values. The first true value in `first` is paired with the first true value in `second`, and so on. NumPy then fetches the elements at each of these (x, y) indices.
This means that `two_d[first, second]` is equivalent to:
```
two_d[[0, 1], [3, 4]]
```
In other words you're retrieving the values at index (0, 3) and index (1, 4); `3` and `9`. Note that if the two arrays had different numbers of true values an error would be raised!
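You can make the pairing explicit with `nonzero()` (a quick sketch):

```python
import numpy as np

two_d = np.arange(25).reshape(5, 5)
first = np.array([True, True, False, False, False])
second = np.array([False, False, False, True, True])

rows, = first.nonzero()    # array([0, 1])
cols, = second.nonzero()   # array([3, 4])
paired = two_d[rows, cols]         # pairwise: elements (0, 3) and (1, 4)
cross = two_d[np.ix_(rows, cols)]  # the 2x2 cross product instead
```

`paired` is `array([3, 9])`, matching `two_d[first, second]`, while `cross` is the 2x2 block `[[3, 4], [8, 9]]`.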
The [documents on advanced indexing](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html) mention this behaviour briefly and suggest `np.ix_` as a 'less surprising' alternative:
> Combining multiple Boolean indexing arrays or a Boolean with an integer indexing array can best be understood with the `obj.nonzero()` analogy. The function `ix_` also supports boolean arrays and will work without any surprises.
Hence you may be looking for:
```
>>> two_d[np.ix_(first, second)]
array([[3, 4],
[8, 9]])
``` |
Determining if I need quotes in HTML with django template | 34,878,869 | 3 | 2016-01-19T14:05:21Z | 34,879,016 | 7 | 2016-01-19T14:11:46Z | [
"python",
"django",
"django-templates"
] | I have a `dict` in python that is something like the following:
```
d = {'key1': .98, 'key2': 'some_str',
#...
}
```
In this dictionary some keys will be mapped to `float` and others will be mapped to `str`
In my HTML I am doing something like the following:
```
html_dict = {
{% for k, v in my_dict.items %}
{{ k }}: "{{ v }}",
{% endfor %}
};
```
However this approach wraps the floats in quotes as well, which is what I don't want. But if I don't wrap them in quotes the HTML doesn't understand that they are string values. I would ideally like something like the following:
```
html_dict = {
{% for k, v in my_dict.items %}
{% if check_v_is_str %}
{{ k }}: "{{ v }}",
{% else %}
{{ k }}: {{ v }},
{% endif %}
{% endfor %}
};
``` | Don't do this. You are manually trying to replicate a JS data structure, when there already exists a well-defined structure that both Python and JS know about, namely JSON. Use that instead. Convert the dict to JSON in your view, and pass it to the template where it can be output directly. |
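A minimal sketch of that (variable and template names are illustrative): serialize in the view, then mark the JSON safe in the template so the quoting survives. `json.dumps` quotes strings and leaves floats bare automatically.

```python
import json

# In the view:
d = {'key1': .98, 'key2': 'some_str'}
context = {'my_dict_json': json.dumps(d)}
# return render(request, 'page.html', context)

# In the template, output it unescaped:
#   <script>var html_dict = {{ my_dict_json|safe }};</script>
```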
What happens in degenerate case of multiple assignment? | 34,882,417 | 15 | 2016-01-19T16:49:40Z | 34,882,802 | 11 | 2016-01-19T17:06:40Z | [
"python"
] | I'm [teaching myself algorithms](http://rosalind.info/problems/list-view/?location=algorithmic-heights). I needed to swap two items in a list. Python makes all things easy:
```
def swap(A, i, j):
A[i], A[j] = A[j], A[i]
```
This works a treat:
```
>>> A = list(range(5))
>>> A
[0, 1, 2, 3, 4]
>>> swap(A, 0, 1)
>>> A
[1, 0, 2, 3, 4]
```
Note the function is resilient to the degenerate case `i = j`. As you'd expect, it simply leaves the list unchanged:
```
>>> A = list(range(5))
>>> swap(A, 0, 0)
>>> A
[0, 1, 2, 3, 4]
```
Later I wanted to permute three items in a list. I wrote a function to permute them in a 3-cycle:
```
def cycle(A, i, j, k):
A[i], A[j], A[k] = A[j], A[k], A[i]
```
This worked well:
```
>>> A = list("tap")
>>> A
['t', 'a', 'p']
>>> cycle(A, 0, 1, 2)
>>> A
['a', 'p', 't']
```
However I (eventually) discovered it goes wrong in degenerate cases. I assumed a degenerate 3-cycle would be a swap. So it is when `i = j`, `cycle(i, i, k) ≡ swap(i, k)`:
```
>>> A = list(range(5))
>>> cycle(A, 0, 0, 1)
>>> A
[1, 0, 2, 3, 4]
```
But when `i = k` something else happens:
```
>>> A = list(range(5))
>>> sum(A)
10
>>> cycle(A, 1, 0, 1)
>>> A
[1, 1, 2, 3, 4]
>>> sum(A)
11
```
What's going on? `sum` should be invariant under any permutation! Why does this case `i = k` degenerate differently?
How can I achieve what I want? That is, a 3-cycle function that degenerates to a swap if only 2 indices are distinct: `cycle(i, i, j) ≡ cycle(i, j, i) ≡ cycle(i, j, j) ≡ swap(i, j)` | Well it seems *you are re-assigning to the same target* `A[1]`. To get a visualization of the call:
```
A[1], A[0], A[1] = A[0], A[1], A[1]
```
Remember, from the documentation on **[assignment statements](https://docs.python.org/3.5/reference/simple_stmts.html#assignment-statements)**:
> An assignment statement *evaluates the expression list* (remember that this can be a single expression or a comma-separated list, the latter yielding a tuple) and *assigns the single resulting object to each of the target lists, from left to right.*
So your evaluation goes something like dis:
* Create tuple with values `A[0], A[1], A[1]` translating to `(0, 1, 1)`
* Assign these to the target list `A[1], A[0], A[1]` *from left to right.*
Assignment from left to right takes place:
1. `A[1] = 0`
2. `A[0] = 1`
3. `A[1] = 1`
So the first assignment made is `A[1]` with the first element of the tuple `0`, then the second assignment `A[0]` with the second element `1` and, finally, at the end, `A[1]` *is overriden with the third element in the tuple* `1`.
---
You can get a more *convoluted* view of this with **[`dis.dis`](https://docs.python.org/3/library/dis.html#dis.dis)**; notice how all elements in the right hand of the assignment statement are loaded first and then they are assigned to their values:
```
dis.dis(cycle)
2 0 LOAD_FAST 0 (A)
3 LOAD_FAST 2 (j)
6 BINARY_SUBSCR
7 LOAD_FAST 0 (A)
10 LOAD_FAST 3 (k)
13 BINARY_SUBSCR
14 LOAD_FAST 0 (A)
17 LOAD_FAST 1 (i)
20 BINARY_SUBSCR # Loading Done
21 ROT_THREE
22 ROT_TWO
23 LOAD_FAST 0 (A) # Assign first
26 LOAD_FAST 1 (i)
29 STORE_SUBSCR
30 LOAD_FAST 0 (A) # Assign second
33 LOAD_FAST 2 (j)
36 STORE_SUBSCR
37 LOAD_FAST 0 (A) # Assign third
40 LOAD_FAST 3 (k)
43 STORE_SUBSCR
44 LOAD_CONST 0 (None)
47 RETURN_VALUE
``` |
What happens in degenerate case of multiple assignment? | 34,882,417 | 15 | 2016-01-19T16:49:40Z | 34,882,874 | 23 | 2016-01-19T17:09:44Z | [
"python"
] | I'm [teaching myself algorithms](http://rosalind.info/problems/list-view/?location=algorithmic-heights). I needed to swap two items in a list. Python makes all things easy:
```
def swap(A, i, j):
A[i], A[j] = A[j], A[i]
```
This works a treat:
```
>>> A = list(range(5))
>>> A
[0, 1, 2, 3, 4]
>>> swap(A, 0, 1)
>>> A
[1, 0, 2, 3, 4]
```
Note the function is resilient to the degenerate case `i = j`. As you'd expect, it simply leaves the list unchanged:
```
>>> A = list(range(5))
>>> swap(A, 0, 0)
>>> A
[0, 1, 2, 3, 4]
```
Later I wanted to permute three items in a list. I wrote a function to permute them in a 3-cycle:
```
def cycle(A, i, j, k):
A[i], A[j], A[k] = A[j], A[k], A[i]
```
This worked well:
```
>>> A = list("tap")
>>> A
['t', 'a', 'p']
>>> cycle(A, 0, 1, 2)
>>> A
['a', 'p', 't']
```
However I (eventually) discovered it goes wrong in degenerate cases. I assumed a degenerate 3-cycle would be a swap. So it is when `i = j`, `cycle(i, i, k) ≡ swap(i, k)`:
```
>>> A = list(range(5))
>>> cycle(A, 0, 0, 1)
>>> A
[1, 0, 2, 3, 4]
```
But when `i = k` something else happens:
```
>>> A = list(range(5))
>>> sum(A)
10
>>> cycle(A, 1, 0, 1)
>>> A
[1, 1, 2, 3, 4]
>>> sum(A)
11
```
What's going on? `sum` should be invariant under any permutation! Why does this case `i = k` degenerate differently?
How can I achieve what I want? That is, a 3-cycle function that degenerates to a swap if only 2 indices are distinct: `cycle(i, i, j) ≡ cycle(i, j, i) ≡ cycle(i, j, j) ≡ swap(i, j)` | `cycle` is doing exactly what you ask it to: assigning the right-hand values to the left-hand targets.
```
def cycle(A, i, j, k):
A[i], A[j], A[k] = A[j], A[k], A[i]
```
is functionally equivalent to
```
def cycle(A, i, j, k):
new_values = A[j], A[k], A[i]
A[i], A[j], A[k] = new_values
```
So when you do `cycle(A, 1, 0, 1)` what you are saying is that you want
```
A[1] = previous_A[0]
A[0] = previous_A[1]
A[1] = previous_A[1]
```
If you want cycle to work sequentially then you must write it sequentially, otherwise python evaluates the right hand and then expands that to the arguments on the left hand. |
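One way to get the degenerate behaviour the question asks for is to reduce to the distinct indices first (a sketch, not the only possible definition):

```python
def cycle(A, i, j, k):
    # Deduplicate the indices, preserving order.
    distinct = []
    for idx in (i, j, k):
        if idx not in distinct:
            distinct.append(idx)
    if len(distinct) == 3:
        A[i], A[j], A[k] = A[j], A[k], A[i]  # a true 3-cycle
    elif len(distinct) == 2:
        a, b = distinct
        A[a], A[b] = A[b], A[a]              # degenerate to a swap

A = list(range(5))
cycle(A, 1, 0, 1)  # now a swap: [1, 0, 2, 3, 4], sum preserved
```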
What is the point of views in pandas if it is undefined whether an indexing operation returns a view or a copy? | 34,884,536 | 7 | 2016-01-19T18:40:49Z | 34,908,742 | 9 | 2016-01-20T19:24:42Z | [
"python",
"pandas",
"views",
"slice"
] | I have switched from R to pandas. I routinely get SettingWithCopyWarnings, when I do something like
```
df_a = pd.DataFrame({'col1': [1,2,3,4]})
# Filtering step, which may or may not return a view
df_b = df_a[df_a['col1'] > 1]
# Add a new column to df_b
df_b['new_col'] = 2 * df_b['col1']
# SettingWithCopyWarning!!
```
I think I understand the problem, though I'll gladly learn what I got wrong. In the given example, it is undefined whether `df_b` is a view on `df_a` or not. Thus, the effect of assigning to `df_b` is unclear: does it affect `df_a`? The problem can be solved by explicitly making a copy when filtering:
```
df_a = pd.DataFrame({'col1': [1,2,3,4]})
# Filtering step, definitely a copy now
df_b = df_a[df_a['col1'] > 1].copy()
# Add a new column to df_b
df_b['new_col'] = 2 * df_b['col1']
# No Warning now
```
I think there is something that I am missing: if we can never really be sure whether we create a view or not, what are views good for? From the pandas documentation (<http://pandas-docs.github.io/pandas-docs-travis/indexing.html?highlight=view#indexing-view-versus-copy>)
> Outside of simple cases, it's very hard to predict whether it [**getitem**] will return a view or a copy (it depends on the memory layout of the array, about which pandas makes no guarantees)
Similar warnings can be found for different indexing methods.
I find it very cumbersome and error-prone to sprinkle .copy() calls throughout my code. Am I using the wrong style for manipulating my DataFrames? Or is the performance gain so high that it justifies the apparent awkwardness? | Great question!
The short answer is: this is a flaw in pandas that's being remedied.
You can find a longer discussion of the nature of [the problem here](https://github.com/pydata/pandas/issues/10954), but the main take-away is that we're now moving to a "copy-on-write" behavior in which any time you slice, you get a new copy, and you never have to think about views. The fix will soon come through this [refactoring project.](https://github.com/pydata/pandas/issues/11970) I actually tried to fix it directly ([see here](https://github.com/pydata/pandas/pull/11500)), but it just wasn't feasible in the current architecture.
In truth, we'll keep views in the background -- they make pandas SUPER memory efficient and fast when they can be provided -- but we'll end up hiding them from users so, from the user perspective, if you slice, index, or cut a DataFrame, what you get back will effectively be a new copy.
(This is accomplished by creating views when the user is only reading data, but whenever an assignment operation is used, the view will be converted to a copy before the assignment takes place.)
Best guess is the fix will land within a year -- in the meantime, I'm afraid some `.copy()` calls may be necessary, sorry!
How to avoid slack command timeout error? | 34,896,954 | 3 | 2016-01-20T10:04:29Z | 34,944,331 | 9 | 2016-01-22T10:23:44Z | [
"python",
"slack-api",
"slack"
I am working with a Slack slash command (Python code is running behind it). It works fine, but sometimes gives this error:
`This slash command experienced a problem: 'Timeout was reached' (error detail provided only to team owning command).`
How can I avoid this? | According to the Slack [slash command documentation](https://api.slack.com/slash-commands#responding_to_a_command), you need to respond within 3000ms (three seconds). If your command takes longer, you get the `Timeout was reached` error. Your code obviously won't stop running, but the user won't get any response to their command.
Three seconds is fine for a quick thing where your command has instant access to data, but might not be long enough if you're calling out to external APIs or doing something complicated. If you *do* need to take longer, then see the *Delayed responses and multiple responses* section of the documentation:
1. Validate the request is okay.
2. Return a `200` response immediately, maybe something along the lines of `{'text': 'ok, got that'}`
3. Go and perform the actual action you want to do.
4. In the original request, you get passed a unique `response_url` parameter. Make a `POST` request to that URL with your follow-up message:
* `Content-type` needs to be `application/json`
* With the body as a JSON-encoded message: `{'text': 'all done :)'}`
* you can return ephemeral or in-channel responses, and add attachments the same as the immediate approach
According to the docs, "you can respond to a user commands up to 5 times within 30 minutes of the user's invocation". |
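A stdlib-only sketch of step 4 (the URL below is a placeholder; in real code use the `response_url` value Slack sent with the original command):

```python
import json
from urllib.request import Request, urlopen

def build_followup(response_url, text):
    # Build the delayed-response POST for Slack's response_url.
    payload = json.dumps({'text': text}).encode('utf-8')
    return Request(response_url, data=payload,
                   headers={'Content-Type': 'application/json'})

req = build_followup('https://hooks.slack.com/commands/T000/42/abc',
                     'all done :)')
# urlopen(req)  # send it within 30 minutes of the original command
```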
Generate list of months between interval in python | 34,898,525 | 3 | 2016-01-20T11:16:02Z | 34,898,764 | 7 | 2016-01-20T11:27:50Z | [
"python",
"python-2.7"
] | I want to generate a python list containing all months occurring between two dates, with the input and output formatted as follows:
```
date1 = "2014-10-10" # input start date
date2 = "2016-01-07" # input end date
month_list = ['Oct-14', 'Nov-14', 'Dec-14', 'Jan-15', 'Feb-15', 'Mar-15', 'Apr-15', 'May-15', 'Jun-15', 'Jul-15', 'Aug-15', 'Sep-15', 'Oct-15', 'Nov-15', 'Dec-15', 'Jan-16'] # output
``` | ```
>>> from datetime import datetime, timedelta
>>> from collections import OrderedDict
>>> dates = ["2014-10-10", "2016-01-07"]
>>> start, end = [datetime.strptime(_, "%Y-%m-%d") for _ in dates]
>>> OrderedDict(((start + timedelta(_)).strftime(r"%b-%y"), None) for _ in xrange((end - start).days)).keys()
['Oct-14', 'Nov-14', 'Dec-14', 'Jan-15', 'Feb-15', 'Mar-15', 'Apr-15', 'May-15', 'Jun-15', 'Jul-15', 'Aug-15', 'Sep-15', 'Oct-15', 'Nov-15', 'Dec-15', 'Jan-16']
```
**Update:** a bit of explanation, as requested in one comment. There are three problems here: parsing the dates into appropriate data structures (`strptime`); getting the date range given the two extremes and the step (one month); formatting the output dates (`strftime`). The `datetime` type overloads the subtraction operator, so that `end - start` makes sense. The result is a `timedelta` object that represents the difference between the two dates, and the `.days` attribute gets this difference expressed in days. There is no `.months` attribute, so we iterate one day at a time and convert the dates to the desired output format. This yields a lot of duplicates, which the `OrderedDict` removes while keeping the items in the right order.
Now this is simple and concise because it lets the datetime module do all the work, but it's also horribly inefficient. We're calling a lot of methods for each day while we only need to output months. If performance is not an issue, the above code will be just fine. Otherwise, we'll have to work a bit more. Let's compare the above implementation with a more efficient one:
```
from datetime import datetime, timedelta
from collections import OrderedDict
dates = ["2014-10-10", "2016-01-07"]
def monthlist_short(dates):
start, end = [datetime.strptime(_, "%Y-%m-%d") for _ in dates]
return OrderedDict(((start + timedelta(_)).strftime(r"%b-%y"), None) for _ in xrange((end - start).days)).keys()
def monthlist_fast(dates):
start, end = [datetime.strptime(_, "%Y-%m-%d") for _ in dates]
total_months = lambda dt: dt.month + 12 * dt.year
mlist = []
for tot_m in xrange(total_months(start)-1, total_months(end)):
y, m = divmod(tot_m, 12)
mlist.append(datetime(y, m+1, 1).strftime("%b-%y"))
return mlist
assert monthlist_fast(dates) == monthlist_short(dates)
if __name__ == "__main__":
from timeit import Timer
for func in "monthlist_short", "monthlist_fast":
print func, Timer("%s(dates)" % func, "from __main__ import dates, %s" % func).timeit(1000)
```
On my laptop, I get the following output:
```
monthlist_short 2.3209939003
monthlist_fast 0.0774540901184
```
The concise implementation is about 30 times slower, so I would not recommend it in time-critical applications :) |
Why would I add python to PATH | 34,900,042 | 5 | 2016-01-20T12:25:58Z | 34,900,138 | 10 | 2016-01-20T12:30:30Z | [
"python",
"python-3.x",
"path",
"install"
] | I am beginning to look at python, so when I found a tutorial it said that the first thing to do would be to download python from www.python.org/downloads/
Now when I downloaded python 3, I then started the installation and got to
[](http://i.stack.imgur.com/rkjt6.png)
Why would I want to "Add Python 3.5 to PATH"? What is PATH? Why is it not ticked by default? | PATH is an environment variable in Windows. It basically tells the commandline what folders to look in when attempting to find a file. If you didn't add Python to PATH then you would call it from the commandline like this:
```
C:/Python27/Python some_python_script.py
```
Whereas if you add it to PATH, you can do this:
```
python some_python_script.py
```
Which is shorter and neater. It works because the command line will look through all the PATH folders for `python` and find it in the folder that the Python installer has added there.
The reason it's unticked by default is partly because if you're installing multiple versions of Python, you probably want to be able to control which one your commandline will open by default, which is harder to do if *both* versions are being added to your PATH. |
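You can see exactly which folders the command line searches from Python itself:

```python
import os

# PATH is a single string of folders separated by os.pathsep (';' on Windows).
for folder in os.environ.get("PATH", "").split(os.pathsep):
    print(folder)
```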
Confused about a variable assignment (Python) | 34,901,867 | 4 | 2016-01-20T13:49:48Z | 34,901,947 | 7 | 2016-01-20T13:53:49Z | [
"python",
"algorithm"
] | For a task on ProjectEuler I've written code that uses brute force to find the longest chain of primes below 100 that add up to a prime, and the code does give the correct results. So for numbers below 100 the answer is 2 + 3 + 5 + 7 + 11 + 13 = 41
```
import math
def prime(n):
for x in xrange(2,int(math.sqrt(n)+1)):
if n%x == 0:
return False
return True
primes = []
for x in xrange(2,100):
if prime(x):
primes += [x]
record = 0
i = 0
for num in primes:
i += 1
chain = [num]
for secnum in xrange(i,len(primes)-1):
chain += [primes[secnum]]
if len(chain) > record and sum(chain) in primes:
record = len(chain)
seq = chain
print seq
print seq
```
When I run this code I get
```
[2, 3]
[2, 3, 5, 7]
[2, 3, 5, 7, 11, 13]
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89]
```
That last line is extremely confusing to me. In my mind the two print statements should give the same reult. How did my variable seq get assigned to that long list? The last list doesn't even meet the requirements of the if statement wherein seq is assigned. I'm sure this is some really silly brain fart, but I just can't figure out what I screwed up | `seq = chain` creates another *reference* to the same `chain` list. You then print that list, but *the loop doesn't stop*.
You continue to expand `chain`, and since `seq` is just a reference to that list, you'll see those changes once the loop has ended. During the remaining `for` loop iterations `chain` / `seq` continues to change, but the `if` condition is no longer met so you don't see these changes take place.
You continue to expand `chain` here:
```
chain += [primes[secnum]]
```
This uses [*augmented assignment*](http://legacy.python.org/dev/peps/pep-0203/); it doesn't create a new list but extends the existing list. It is equivalent to `chain.append(primes[secnum])`.
You can fix this by creating a *copy* of `chain` to store in `seq`:
```
seq = chain[:]
``` |
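A minimal illustration of the aliasing involved:

```python
chain = [2, 3]
seq = chain        # both names refer to the same list object
chain += [5]       # in-place extend ...
seq                # ... so seq "changes" too: [2, 3, 5]

seq = chain[:]     # a shallow copy breaks the link
chain += [7]
seq                # unaffected: [2, 3, 5]
```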
Why is my list not sorted as expected? | 34,907,849 | 3 | 2016-01-20T18:35:30Z | 34,907,913 | 10 | 2016-01-20T18:39:40Z | [
"python",
"sorting"
] | I have a `dict()` called `twitter_users` which holds `TwitterUser` objects as values. I want those objects to be sorted by the field `mentioned`. However, using `sorted()` does not work as I expect. I provide a `lambda` function that is supposed to determine if user `a` or user `b` was mentioned more often.
```
srt = sorted(twitter_users.values(),
cmp=(lambda a,b:
True if a.mentioned > b.mentioned else False))
for s in srt:
print s.mentioned
```
Unfortunately this is not working and the list `srt` isn't sorted in any way.
How can I make this work? | A `cmp` function should return an integer, `0` when equal, `1` or higher when `a` should come after `b` and `-1` or lower if they should come in the opposite order.
You instead return `False` and `True`. Because the Python boolean type is a subclass of `int`, these objects have the values `0` and `1` when interpreted as integers. You never return `-1`, so you are confusing the sorting algorithm; you tell it the order of `a` and `b` is either 'equal' or '`a` should come after `b`', never '`a` should come before `b`'. But the sorting algorithm sometimes asks for `a` and `b` swapped, in which case you gave it conflicting information!
Note that your expression is rather verbose; `True if a.mentioned > b.mentioned else False` could just be simplified to `a.mentioned > b.mentioned`; the `>` operator already produces either `True` or `False`. Using simple integers you can see that that is not going to produce expected results:
```
>>> sorted([4, 2, 5, 3, 8], cmp=lambda a, b: a > b)
[4, 2, 5, 3, 8]
```
while actually returning `-1`, `0`, or `1` does work:
```
>>> sorted([4, 2, 5, 3, 8], cmp=lambda a, b: 1 if a > b else 0 if a == b else -1)
[2, 3, 4, 5, 8]
```
or instead of such a verbose expression, just use the built-in [`cmp()` function](https://docs.python.org/2/library/functions.html#cmp); for your case you'd use that like this:
```
srt = sorted(twitter_users.values(), cmp=lambda a, b: cmp(a.mentioned, b.mentioned))
```
But you shouldn't really use `cmp` *at all*; there is a far simpler (and more efficient) option. Just use the `key` function instead, which simply returns the `mentioned` attribute:
```
srt = sorted(twitter_users.values(), key=lambda v: v.mentioned)
```
The `key` function produces values by which the actual sort takes place; this function is used to produce a [*Schwartzian transform*](https://en.wikipedia.org/wiki/Schwartzian_transform). Such a transform is more efficient because it is only called O(n) times, while the `cmp` function is called O(n log n) times.
Because you are only accessing an attribute, instead of a `lambda` you can use a [`operator.attrgetter()` object](https://docs.python.org/2/library/operator.html#operator.attrgetter) to do the attribute fetching for you:
```
from operator import attrgetter
srt = sorted(twitter_users.values(), key=attrgetter('mentioned'))
``` |
Cannot gather gradients for GradientDescentOptimizer in TensorFlow | 34,911,276 | 2 | 2016-01-20T21:49:08Z | 34,911,503 | 7 | 2016-01-20T22:03:03Z | [
"python",
"tensorflow"
] | I've been trying to gather the gradient steps for each step of the GradientDescentOptimizer within TensorFlow, however I keep running into a TypeError when I try to pass the result of `apply_gradients()` to `sess.run()`. The code I'm trying to run is:
```
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
x = tf.placeholder(tf.float32,[None,784])
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x,W)+b)
y_ = tf.placeholder(tf.float32,[None,10])
cross_entropy = -tf.reduce_sum(y_*log(y))
# note that up to this point, this example is identical to the tutorial on tensorflow.org
gradstep = tf.train.GradientDescentOptimizer(0.01).compute_gradients(cross_entropy)
sess = tf.Session()
sess.run(tf.initialize_all_variables())
batch_x,batch_y = mnist.train.next_batch(100)
print sess.run(gradstep, feed_dict={x:batch_x,y_:batch_y})
```
Note that if I replace the last line with `print sess.run(train_step,feed_dict={x:batch_x,y_:batch_y})`, where `train_step = tf.GradientDescentOptimizer(0.01).minimize(cross_entropy)`, the error is not raised. My confusion arises from the fact that `minimize` calls `compute_gradients` with exactly the same arguments as its first step. Can someone explain why this behavior occurs? | The [`Optimizer.compute_gradients()`](https://www.tensorflow.org/versions/master/api_docs/python/train.html#Optimizer.compute_gradients) method returns a list of (`Tensor`, `Variable`) pairs, where each tensor is the gradient with respect to the corresponding variable.
`Session.run()` expects a list of `Tensor` objects (or objects convertible to a `Tensor`) as its first argument. It does not understand how to handle a list of pairs, and hence you get a `TypeError` when you try to run `sess.run(gradstep, ...)`
The correct solution depends on what you are trying to do. If you want to fetch all of the gradient values, you can do the following:
```
grad_vals = sess.run([grad for grad, _ in gradstep], feed_dict={x: batch_x, y: batch_y})
# Then, e.g., build a variable name-to-gradient dictionary.
var_to_grad = {}
for grad_val, (_, var) in zip(grad_vals, gradstep):
    var_to_grad[var.name] = grad_val
```
If you also want to fetch the variables, you can execute the following statement separately:
```
sess.run([var for _, var in gradstep])
```
...though note that—without further modification to your program—this will just return the initial values for each variable.
You will have to run the optimizer's training step (or otherwise call [`Optimizer.apply_gradients()`](https://www.tensorflow.org/versions/master/api_docs/python/train.html#Optimizer.apply_gradients)) to update the variables. |
Combining lists of tuples based on a common tuple element | 34,919,028 | 2 | 2016-01-21T08:46:42Z | 34,919,125 | 7 | 2016-01-21T08:52:57Z | [
"python",
"list"
] | Consider two lists of tuples:
```
data1 = [([X1], 'a'), ([X2], 'b'), ([X3], 'c')]
data2 = [([Y1], 'a'), ([Y2], 'b'), ([Y3], 'c')]
```
Where `len(data1) == len(data2)`
Each tuple contains two elements:
1. list of some strings (i.e `[X1]`)
2. A **common** element for `data1` and `data2`: strings `'a'`, `'b'`, and so on.
I would like to combine them into following:
```
[('a', [X1], [Y1]), ('b', [X2], [Y2]),...]
```
Does anyone know how I can do this? | You can use `zip` function and a list comprehension:
```
[(s1,l1,l2) for (l1,s1),(l2,s2) in zip(data1,data2)]
``` |
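For example, with placeholder strings standing in for the `X1`/`Y1` lists (the values are illustrative only):

```python
data1 = [(['X1'], 'a'), (['X2'], 'b'), (['X3'], 'c')]
data2 = [(['Y1'], 'a'), (['Y2'], 'b'), (['Y3'], 'c')]

combined = [(s1, l1, l2) for (l1, s1), (l2, s2) in zip(data1, data2)]
print(combined)
# [('a', ['X1'], ['Y1']), ('b', ['X2'], ['Y2']), ('c', ['X3'], ['Y3'])]
```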
Create a simple number pattern in python? | 34,929,895 | 2 | 2016-01-21T17:04:40Z | 34,929,933 | 11 | 2016-01-21T17:06:21Z | [
"python",
"python-3.x"
] | I'm trying to get this number pattern
```
0
01
012
0123
01234
012345
0123456
01234567
012345670
0123456701
```
But I can't figure out how to reset the digits when I reach 8 in my function. Here is my code:
```
def afficherPatron(n):
    triangle = ''
    for i in range(0, n):
        triangle = triangle + (str(i))
        print(triangle)
        i+=1
```
Thanks in advance to all of you! | Use `i` mod 8 (`i%8`), because it cycles from 0 to 7:
```
for i in range(0, n):
    triangle = triangle + str(i%8)
    print(triangle)
``` |
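Putting it back into the question's function (renamed here for illustration, and returning the lines so the result is easy to check), the whole thing becomes:

```python
def afficher_patron(n):
    lines = []
    triangle = ''
    for i in range(n):
        triangle += str(i % 8)  # digits wrap around after 7
        lines.append(triangle)
    return lines

print('\n'.join(afficher_patron(10)))
# the last line printed is 0123456701
```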
Iterate over lists with a particular sum | 34,930,745 | 4 | 2016-01-21T17:45:40Z | 34,930,908 | 7 | 2016-01-21T17:54:40Z | [
"python",
"math",
"optimization"
] | I would like to iterate over all lists of length `n` whose elements sum to 2. How can you do this efficiently? Here is a very inefficient method for `n = 10`. Ultimately I would like to do this for `n > 25`.
```
n = 10
for L in itertools.product([-1,1], repeat = n):
if (sum(L) == 2):
print L #Do something with L
``` | you only can have a solution of 2 if you have 2 more +1 than -1 so for n==24
```
a_solution = [-1,]*11 + [1,]*13
```
Now you can just use `itertools.permutations` to get every permutation of this:
```
for L in itertools.permutations(a_solution): print L
```
It would probably be faster to use `itertools.combinations` to eliminate duplicates:
```
for indices in itertools.combinations(range(24),11):
    a = numpy.ones(24)
    a[list(indices)] = -1
    print a
```
Note: for the sum to be 2, the list must have an even length. |
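A pure-Python variant of the same idea, without NumPy, chooses the positions of the +1s directly (the function name is made up for illustration):

```python
import itertools

def signed_lists(n, target=2):
    # k entries of +1 and n - k of -1 sum to target when k - (n - k) == target
    if (n + target) % 2:
        return  # no solutions when n + target is odd
    k = (n + target) // 2
    for pos in itertools.combinations(range(n), k):
        pos = set(pos)
        yield [1 if i in pos else -1 for i in range(n)]

sols = list(signed_lists(6))
print(len(sols))  # C(6, 4) = 15, and every list sums to 2
```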
Python - Delete all files EXCEPT for | 34,931,192 | 2 | 2016-01-21T18:11:26Z | 34,931,241 | 7 | 2016-01-21T18:13:45Z | [
"python",
"csv"
] | I have a Python script and I'm trying to delete all files in this directory EXCEPT for the .csv file. Getting syntax error on the "not" in this line:
```
for CleanUp not in glob.glob("c:\python\AIO*.*"):
```
If I remove the "not", it will delete the AIO.csv file, but I need to preserve that file and ONLY that file. Not clear why it's not working.
```
import os
import glob
import time
file_path = "c:\python\AIO.csv"
while not os.path.exists(file_path):
    time.sleep(10)

if os.path.isfile(file_path):
    #Verifies CSV file was created, then deletes unneeded files.
    for CleanUp not in glob.glob("c:\python\AIO*.*"):
        os.remove(CleanUp)
``` | Try this instead
```
import os
import glob
import time
file_path = "c:\python\AIO.csv"
while not os.path.exists(file_path):
    time.sleep(10)

if os.path.isfile(file_path):
    #Verifies CSV file was created, then deletes unneeded files.
    for CleanUp in glob.glob('C:/python/*.*'):
        print CleanUp
        if not CleanUp.endswith('AIO.csv'):
            os.remove(CleanUp)
```
`glob` doesn't return any directories here, only files (the `*.*` pattern only matches names containing a dot), and it also returns the entire path, so you can just call `os.remove(CleanUp)`. This should work. It works on my machine, which is also Windows 7 x64.
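A slightly more robust variant compares basenames instead of relying on `endswith`; here it is exercised against a throwaway directory (the helper name and file names are made up for the demo):

```python
import glob
import os
import tempfile

def clean_dir(directory, keep):
    removed = []
    for path in glob.glob(os.path.join(directory, '*.*')):
        if os.path.basename(path) != keep:
            os.remove(path)
            removed.append(os.path.basename(path))
    return removed

demo = tempfile.mkdtemp()
for name in ('AIO.csv', 'AIO.tmp', 'log.txt'):
    open(os.path.join(demo, name), 'w').close()

print(sorted(clean_dir(demo, 'AIO.csv')))  # ['AIO.tmp', 'log.txt']
print(os.listdir(demo))                    # ['AIO.csv']
```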
I think your problem was that you were looping over the pattern `c:\python\AIO*.*`, which matches only that one file, so the loop runs once and terminates, skipping all other files in the directory |
How can I convert an absolutely massive number to a string in a reasonable amount of time? | 34,936,226 | 25 | 2016-01-21T23:16:28Z | 34,936,584 | 16 | 2016-01-21T23:47:20Z | [
"python",
"string",
"primes",
"biginteger"
] | This is quite an odd problem I know, but I'm trying to get a copy of the current largest prime number in a file. Getting the number in integer form is fairly easy. I just run this.
```
prime = 2**74207281 - 1
```
It takes about half a second and it works just fine. Operations are fairly quick as well. Dividing it by 10 (without decimals) to shift the digits is quick. However, `str(prime)` is taking a very long time. I reimplemented `str` like this, and found it was processing about a hundred or so digits per second.
```
while prime > 0:
    strprime += str(prime%10)
    prime //= 10
```
Is there a way to do this more efficiently? I'm doing this in Python. Should I even try this with Python, or is there a better tool for this? | Repeated string concatenation is notoriously inefficient since Python strings are immutable. I would go for
```
strprime = str(prime)
```
In my benchmarks, this is consistently the fastest solution. Here's my little benchmark program:
```
import decimal
def f1(x):
    ''' Definition by OP '''
    strprime = ""
    while x > 0:
        strprime += str(x%10)
        x //= 10
    return strprime

def digits(x):
    while x > 0:
        yield x % 10
        x //= 10

def f2(x):
    ''' Using string.join() to avoid repeated string concatenation '''
    return "".join((chr(48 + d) for d in digits(x)))

def f3(x):
    ''' Plain str() '''
    return str(x)

def f4(x):
    ''' Using Decimal class '''
    return decimal.Decimal(x).to_eng_string()

x = 2**100

if __name__ == '__main__':
    import timeit
    for i in range(1,5):
        funcName = "f" + str(i)
        print(funcName + ": " + str(timeit.timeit(funcName + "(x)", setup="from __main__ import " + funcName + ", x")))
```
For me, this prints (using Python 2.7.10):
```
f1: 15.3430171013
f2: 20.8928260803
f3: 0.310356140137
f4: 2.80087995529
``` |
How can I convert an absolutely massive number to a string in a reasonable amount of time? | 34,936,226 | 25 | 2016-01-21T23:16:28Z | 34,937,258 | 9 | 2016-01-22T01:02:19Z | [
"python",
"string",
"primes",
"biginteger"
] | This is quite an odd problem I know, but I'm trying to get a copy of the current largest prime number in a file. Getting the number in integer form is fairly easy. I just run this.
```
prime = 2**74207281 - 1
```
It takes about half a second and it works just fine. Operations are fairly quick as well. Dividing it by 10 (without decimals) to shift the digits is quick. However, `str(prime)` is taking a very long time. I reimplemented `str` like this, and found it was processing about a hundred or so digits per second.
```
while prime > 0:
    strprime += str(prime%10)
    prime //= 10
```
Is there a way to do this more efficiently? I'm doing this in Python. Should I even try this with Python, or is there a better tool for this? | Took about 32 seconds to output the file using WinGhci (Haskell language):
```
import System.IO
main = writeFile "prime.txt" (show (2^74207281 - 1))
```
The file was 21 megabytes; the last four digits, 6351. |
How can I convert an absolutely massive number to a string in a reasonable amount of time? | 34,936,226 | 25 | 2016-01-21T23:16:28Z | 34,939,701 | 13 | 2016-01-22T05:36:31Z | [
"python",
"string",
"primes",
"biginteger"
] | This is quite an odd problem I know, but I'm trying to get a copy of the current largest prime number in a file. Getting the number in integer form is fairly easy. I just run this.
```
prime = 2**74207281 - 1
```
It takes about half a second and it works just fine. Operations are fairly quick as well. Dividing it by 10 (without decimals) to shift the digits is quick. However, `str(prime)` is taking a very long time. I reimplemented `str` like this, and found it was processing about a hundred or so digits per second.
```
while prime > 0:
    strprime += str(prime%10)
    prime //= 10
```
Is there a way to do this more efficiently? I'm doing this in Python. Should I even try this with Python, or is there a better tool for this? | Python's integer to string conversion algorithm uses a simplistic algorithm with a running of O(n\*\*2). As the length of the number doubles, the conversion time quadruples.
Some simple tests on my computer show the increase in running time:
```
$ time py35 -c "n=str(2**1000000)"
user 0m1.808s
$ time py35 -c "n=str(2**2000000)"
user 0m7.128s
$ time py35 -c "n=str(2**4000000)"
user 0m28.444s
$ time py35 -c "n=str(2**8000000)"
user 1m54.164s
```
Since the actual exponent is about 10 times larger than my last test value, it should take about 100 times longer. Or just over 3 hours.
Can it be done faster? Yes. There are several methods that are faster.
**Method 1**
It is faster to divide the very large number by a power-of-10 into two roughly equal-sized but smaller numbers. The process is repeated until the numbers are relatively small. Then `str()` is used on each number and leading zeroes are used to pad the result to the same length as the last power-of-10. Then the strings are joined to form the final result. This method is used by the `mpmath` library and the documentation implies it should be about 3x faster.
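A minimal sketch of that splitting idea (the helper names are made up; note that in pure CPython the large divisions themselves are still quadratic, so the dramatic wins come from libraries such as GMP that back this splitting with fast arithmetic):

```python
def _rec(n, digits):
    # n has at most `digits` decimal digits
    if digits <= 9:
        return str(n)
    half = digits // 2
    hi, lo = divmod(n, 10 ** half)
    # the low half must be zero-padded back to `half` digits
    return _rec(hi, digits - half) + _rec(lo, half).rjust(half, '0')

def big_str(n):
    # crude upper bound on the number of decimal digits, from log10(2) < 0.30103
    d = max(1, int(n.bit_length() * 0.30103) + 1)
    return _rec(n, d).lstrip('0') or '0'

print(big_str(2 ** 64) == str(2 ** 64))  # True
```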
**Method 2**
Python's integers are stored in binary format. Binary is great for calculations but binary-to-decimal conversion is the bottleneck. It is possible to define your own integer type that stores the value in blocks of 100 (or some similar value) decimal digits. Operations (exponentiation, multiplication, division) will be slower but conversion to a string will be very fast.
Many years ago, I implemented such a class and used efficient algorithms for multiplication and division. The code is no longer available on the Internet but I did find a backup copy that I tested. The running time was reduced to ~14 seconds.
**Update**
I updated the DecInt code referenced above and it is now available at <https://github.com/casevh/DecInt>.
If Python's native integer type is used, the total running time is less than 14 seconds on my computer. If `gmpy2`'s integer type is used instead, the running time is ~3.5 seconds.
```
$ py35 DecInt.py
Calculating 2^74207281
Exponentiation time: 3.236
Conversion to decimal format: 0.304
Total elapsed time: 3.540
Length of result: 22338618 digits
```
**Method 3**
I maintain the [gmpy2](https://pypi.python.org/pypi/gmpy2) library that provide easy access to the GMP library for fast integer arithmetic. GMP implements Method 1 in highly optimized C and assembly code and calculates the prime number and the string representation in ~5 seconds.
**Method 4**
The `decimal` module in Python stores values as decimal digits. Recent versions of Python 3 include a C implementation of the decimal library that is much faster than the pure-Python implementation included with Python 2. The C implementation runs in just over 3 seconds on my computer.
```
from decimal import *
getcontext().prec = 23000000
getcontext().Emin = -999999999
getcontext().Emax = 999999999
x=Decimal(2)**74207281 - 1
s=str(x)
``` |
Preprocess a Tensorflow tensor in Numpy | 34,942,745 | 4 | 2016-01-22T09:05:07Z | 34,951,694 | 7 | 2016-01-22T16:40:05Z | [
"python",
"numpy",
"tensorflow"
] | I have set up a CNN in Tensorflow where I read my data with a TFRecordReader. It works well but I would like to do some more preprocessing and data augmentation than offered by the `tf.image` functions. I would specifically like to do some randomized scaling.
Is it possible to process a Tensorflow tensor in Numpy? Or do I need to drop the TFRecordReader and rather do all my preprocessing in Numpy and feed data using the feed\_dict? I suspect that the feed\_dict method is slow when training on images, but I might be wrong? | You could create a custom I/O pipeline that fetches intermediate results back from TensorFlow using one or more threads, applies arbitrary Python logic, and then feeds them into a queue for subsequent processing. The resulting program would be somewhat more complicated, but I suggest you look at the [threading and queues HOWTO](https://www.tensorflow.org/versions/master/how_tos/threading_and_queues/index.html) for information on how to get started.
---
There is an **experimental** feature that might make this easier, if you [install from source](https://www.tensorflow.org/versions/master/get_started/os_setup.html#installing-from-sources).
If you have already built a preprocessing pipeline using TensorFlow ops, the easiest way to add some custom Python code is to use the [`tf.py_func()`](https://www.tensorflow.org/versions/master/api_docs/python/script_ops.html#py_func) operator, which takes a list of `Tensor` objects, and a Python function that maps one or more NumPy arrays to one or more NumPy arrays.
For example, let's say you have a pipeline like this:
```
reader = tf.TFRecordReader(...)
image_t = tf.image.decode_png(tf.parse_single_example(reader.read(), ...))
```
...you could use `tf.py_func()` to apply some custom NumPy processing as follows:
```
from scipy import ndimage
def preprocess(array):
    # `array` is a NumPy array containing the decoded image.
    return ndimage.rotate(array, 45)

image_t = tf.py_func(preprocess, [image_t], [tf.float32])
``` |
How to set layer-wise learning rate in Tensorflow? | 34,945,554 | 6 | 2016-01-22T11:22:10Z | 34,948,185 | 14 | 2016-01-22T13:45:45Z | [
"python",
"deep-learning",
"tensorflow"
] | I am wondering if there is a way that I can use different learning rate for different layers like what is in Caffe. I am trying to modify a pre-trained model and use it for other tasks. What I want is to speed up the training for new added layers and keep the trained layers at low learning rate in order to prevent them from being distorted. for example, I have a 5-conv-layer pre-trained model. Now I add a new conv layer and fine tune it. The first 5 layers would have learning rate of 0.00001 and the last one would have 0.001. Any idea how to achieve this? | It can be achieved quite easily with 2 optimizers:
```
var_list1 = [variables from first 5 layers]
var_list2 = [the rest of variables]
train_op1 = GradientDescentOptimizer(0.00001).minimize(loss, var_list=var_list1)
train_op2 = GradientDescentOptimizer(0.0001).minimize(loss, var_list=var_list2)
train_op = tf.group(train_op1, train_op2)
```
One disadvantage of this implementation is that it computes tf.gradients(.) twice inside the optimizers and thus it might not be optimal in terms of execution speed. This can be mitigated by explicitly calling tf.gradients(.), splitting the list into 2 and passing corresponding gradients to both optimizers.
Related question: [Holding variables constant during optimizer](http://stackoverflow.com/questions/34477889/holding-variables-constant-during-optimizer/34478044#34478044)
EDIT: Added more efficient but longer implementation:
```
var_list1 = [variables from first 5 layers]
var_list2 = [the rest of variables]
opt1 = tf.train.GradientDescentOptimizer(0.00001)
opt2 = tf.train.GradientDescentOptimizer(0.0001)
grads = tf.gradients(loss, var_list1 + var_list2)
grads1 = grads[:len(var_list1)]
grads2 = grads[len(var_list1):]
train_op1 = opt1.apply_gradients(zip(grads1, var_list1))
train_op2 = opt2.apply_gradients(zip(grads2, var_list2))
train_op = tf.group(train_op1, train_op2)
```
You can use `tf.trainable_variables()` to get all training variables and decide to select from them.
The difference is that in the first implementation `tf.gradients(.)` is called twice inside the optimizers. This may cause some redundant operations to be executed (e.g. gradients on the first layer can reuse some computations for the gradients of the following layers). |
Is there a sunset/sunrise function built into Astropy yet? | 34,952,552 | 3 | 2016-01-22T17:23:51Z | 34,955,251 | 9 | 2016-01-22T20:19:56Z | [
"python",
"coordinates",
"pyephem",
"astropy"
] | I have seen a couple of answers referring to PyEphem on here and how that can produce sunset/sunrise times, however it would be more useful to me if I could find a solution using solely Astropy packages. At the moment the closest I have found is the *(get\_sun)* function in the **astropy.coordinates** package. Is this sunset/sunrise functionality built into Astropy anywhere yet? | The [Astroplan](http://astroplan.readthedocs.org) package has an [Observer](http://astroplan.readthedocs.org/en/latest/api/astroplan.Observer.html) class with a [Observer.sun\_rise\_time](http://astroplan.readthedocs.org/en/latest/api/astroplan.Observer.html#astroplan.Observer.sun_rise_time) and a [Observer.sun\_set\_time](http://astroplan.readthedocs.org/en/latest/api/astroplan.Observer.html#astroplan.Observer.sun_set_time) method.
Astroplan is an [Astropy affiliated package](http://www.astropy.org/affiliated/index.html).
Whether this functionality is going to be put into the core Astropy package in the future or remain in Astroplan isn't clear. But with `pip install astroplan` or `conda install -c astropy astroplan` you can easily install Astroplan and use this. |
Pandas: Compress Column Names to Cell Values where True | 34,955,321 | 4 | 2016-01-22T20:23:59Z | 34,955,500 | 8 | 2016-01-22T20:34:44Z | [
"python",
"pandas",
"dataframe"
] | I have a dataframe that looks like
```
ID Cat1 Cat2 Cat3 Cat4
3432432 True False True False
1242323 False True False False
3423883 False False False True
```
How can I convert that to a dataframe that chooses the first column that is True?
```
ID Status
3432432 Cat1
1242323 Cat2
3423883 Cat4
``` | You could take advantage of the fact that `idxmax` will return the first True:
```
>>> df.set_index("ID").idxmax(axis=1).reset_index(name="Status")
ID Status
0 3432432 Cat1
1 1242323 Cat2
2 3423883 Cat4
```
which works because we have
```
>>> df.iloc[:,1:]
Cat1 Cat2 Cat3 Cat4
0 True False True False
1 False True False False
2 False False False True
>>> df.iloc[:,1:].idxmax(axis=1)
0 Cat1
1 Cat2
2 Cat4
dtype: object
``` |
What is the difference between namedtuple return and its typename argument? | 34,962,181 | 4 | 2016-01-23T10:14:18Z | 34,962,236 | 7 | 2016-01-23T10:19:53Z | [
"python"
] | Python documentation says:
```
collections.namedtuple(typename, field_names[, verbose=False][, rename=False])
Returns a new tuple subclass named typename.
```
and it gives an example
`>>>Point = namedtuple('Point',`...
In all the examples I could find, the return from `namedtuple` and argument `typename` are spelled the same.
Experimenting, it seems the argument does not matter:
```
>>>Class = collections.namedtuple('Junk', 'field')
>>>obj = Class(field=1)
>>>print obj.field
1
```
What is the distinction? How does the `typename` argument matter? | When you execute the following code:
```
Bar = collections.namedtuple('Foo', 'field')
```
you are:
1. creating a new type named `Foo`;
2. assigning that type to a variable named `Bar`.
That code is equivalent to this:
```
class Foo:
    ...

Bar = Foo
del Foo
```
Even if you assign your class to a variable with a different name, `Foo` will still be the "official" name, that is: `Bar.__name__` will still be `'Foo'`.
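A quick check of the `Bar` alias confirms this:

```python
import collections

Bar = collections.namedtuple('Foo', 'field')
print(Bar.__name__)  # Foo, not Bar
```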
You'll see the difference when you print either the class or the instance:
```
>>> Bar = collections.namedtuple('Foo', 'field')
>>> obj = Bar(field=1)
>>> obj
Foo(field=1)
```
You may ask why `namedtuple` requires the type name, as it is redundant (with the usual convention). Well, `namedtuple` builds the type *before* the variable is assigned, so it cannot infer the type name and it needs to be told explicitly. (Or better: it could infer the name by inspecting the code of the caller, but that's hackish and won't work for non conventional cases.) |
Softmax function - python | 34,968,722 | 38 | 2016-01-23T20:52:50Z | 34,969,389 | 21 | 2016-01-23T22:00:06Z | [
"python",
"numpy",
"machine-learning",
"logistic-regression",
"softmax"
] | From the [Udacity's deep learning class](https://www.udacity.com/course/viewer#!/c-ud730/l-6370362152/m-6379811820), the softmax of y\_i is simply the exponential divided by the sum of exponential of the whole Y vector:
[](http://i.stack.imgur.com/iP8Du.png)
Where `S(y_i)` is the softmax function of `y_i`, `e` is the exponential, and `j` is the no. of columns in the input vector Y.
I've tried the following:
```
import numpy as np
def softmax(x):
"""Compute softmax values for each sets of scores in x."""
e_x = np.exp(x - np.max(x))
return e_x / e_x.sum()
scores = [3.0, 1.0, 0.2]
print(softmax(scores))
```
which returns:
```
[ 0.8360188 0.11314284 0.05083836]
```
And the suggested solution was:
```
def softmax(x):
"""Compute softmax values for each sets of scores in x."""
return np.exp(x) / np.sum(np.exp(x), axis=0)
```
And it outputs the **same output as the first implementation**, which really takes the difference of each column and the max and then divides by the sum.
**Can someone show mathematically why? Is one correct and the other one wrong?**
**Are the implementations similar in terms of code and time complexity? Which is more efficient?** | They're both correct, but yours has an unnecessary term.
You start with
e ^ (x - max(x)) / sum(e ^ (x - max(x)))
By using the fact that a^(b - c) = (a^b)/(a^c) we have
= (e ^ x / e ^ max(x)) / sum(e ^ x / e ^ max(x))
= (e ^ x / e ^ max(x)) / ((1 / e ^ max(x)) \* sum(e ^ x))
= e ^ x / sum(e ^ x)
Which is what the other answer says. You could replace max(x) with any variable and it would cancel out. |
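A quick numerical check of that cancellation, using the scores from the question (plain `math` is used here to keep the sketch dependency-free):

```python
import math

scores = [3.0, 1.0, 0.2]
m = max(scores)

shifted = [math.exp(v - m) for v in scores]
plain = [math.exp(v) for v in scores]

s1 = [v / sum(shifted) for v in shifted]
s2 = [v / sum(plain) for v in plain]

# the two softmax variants agree to floating-point precision
print(all(abs(a - b) < 1e-12 for a, b in zip(s1, s2)))  # True
```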
Softmax function - python | 34,968,722 | 38 | 2016-01-23T20:52:50Z | 35,276,415 | 13 | 2016-02-08T18:13:54Z | [
"python",
"numpy",
"machine-learning",
"logistic-regression",
"softmax"
] | From the [Udacity's deep learning class](https://www.udacity.com/course/viewer#!/c-ud730/l-6370362152/m-6379811820), the softmax of y\_i is simply the exponential divided by the sum of exponential of the whole Y vector:
[](http://i.stack.imgur.com/iP8Du.png)
Where `S(y_i)` is the softmax function of `y_i`, `e` is the exponential, and `j` is the no. of columns in the input vector Y.
I've tried the following:
```
import numpy as np
def softmax(x):
"""Compute softmax values for each sets of scores in x."""
e_x = np.exp(x - np.max(x))
return e_x / e_x.sum()
scores = [3.0, 1.0, 0.2]
print(softmax(scores))
```
which returns:
```
[ 0.8360188 0.11314284 0.05083836]
```
And the suggested solution was:
```
def softmax(x):
"""Compute softmax values for each sets of scores in x."""
return np.exp(x) / np.sum(np.exp(x), axis=0)
```
And it outputs the **same output as the first implementation**, which really takes the difference of each column and the max and then divides by the sum.
**Can someone show mathematically why? Is one correct and the other one wrong?**
**Are the implementations similar in terms of code and time complexity? Which is more efficient?** | I would say that while both are correct mathematically, implementation-wise, the first one is better. When computing softmax, the intermediate values may become very large. Dividing two large numbers can be numerically unstable. [These notes](http://cs231n.github.io/linear-classify/#softmax) (from Stanford) mention a normalization trick which is essentially what you are doing. |
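The instability is easy to demonstrate with plain floats (illustrative values; `math.exp` raises `OverflowError` where NumPy would instead produce `inf` and then `nan`):

```python
import math

x = [1000.0, 1000.0]

try:
    naive = [math.exp(v) for v in x]  # exp(1000) overflows a double
    overflowed = False
except OverflowError:
    overflowed = True
print(overflowed)  # True

m = max(x)
exps = [math.exp(v - m) for v in x]  # exp(0) == 1.0, perfectly safe
total = sum(exps)
stable = [v / total for v in exps]
print(stable)  # [0.5, 0.5]
```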
Python 3 doesn't need __init__.py in this situation? | 34,969,646 | 5 | 2016-01-23T22:23:28Z | 34,969,715 | 7 | 2016-01-23T22:30:51Z | [
"python",
"python-2.7",
"python-3.x",
"import"
] | Suppose I have:
```
src/
    __init__.py
    a.py
    b.py
```
Suppose `__init__.py` is an empty file, and `a.py` is just one line:
```
TESTVALUE = 5
```
Suppose `b.py` is:
```
from src import a
print(a.TESTVALUE)
```
Now in both Python 2.7 and Python 3.x, running `b.py` gives the result (`5`).
However, if I delete the file `__init__.py`, `b.py` still works in Python 3.x, but in Python 2.7, I get the error:
```
Traceback (most recent call last):
File "b.py", line 5, in <module>
from src import a
ImportError: No module named src
```
Why does Python 2.7 exhibit different behaviour in this situation? | Python 3 supports [namespace packages](https://www.python.org/dev/peps/pep-0420/) that work without an `__init__.py` file.
Furthermore, these packages can be distribute over several directories. This means all directories on your `sys.path` that contain `*.py` files will be recognized as packages.
This breaks backwards compatibility in Python 3 in terms of imports. A typical problem is a directory in your current working directory that has a name like a library such as `numpy` and that contains Python files. While Python 2 ignores this directory, Python 3 will find it first and tries to import the library from there. This has bitten me several times. |
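A quick way to see this on Python 3, using a throwaway directory and a made-up package name:

```python
import importlib
import os
import sys
import tempfile

d = tempfile.mkdtemp()
pkg = os.path.join(d, 'srcpkg')  # no __init__.py anywhere
os.makedirs(pkg)
with open(os.path.join(pkg, 'a.py'), 'w') as f:
    f.write('TESTVALUE = 5\n')

sys.path.insert(0, d)
importlib.invalidate_caches()
a = importlib.import_module('srcpkg.a')  # works: srcpkg is a namespace package
print(a.TESTVALUE)  # 5
```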
Need to embed `-` character into arguments in python argparse | 34,970,533 | 3 | 2016-01-23T23:59:54Z | 34,970,609 | 7 | 2016-01-24T00:08:14Z | [
"python",
"python-2.7",
"shell",
"command-line-interface",
"argparse"
] | I am designing a tool to meet some spec. I have a scenario where I want the argument to contain `-` its string. Pay attention to `arg-1` in the below line.
```
python test.py --arg-1 arg1Data
```
I am using the `argparse` library on `python27`. For some reason the argparse gets confused with the above trial.
My question is how to avoid this? How can I keep the `-` in my argument?
A sample program (containing the `-`, if this is removed everything works fine):
```
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--arg-1", help="increase output verbosity")
args = parser.parse_args()
if args.args-1:
print "verbosity turned on"
``` | The Python `argparse` module replaces dashes with underscores, thus:
```
if args.arg_1:
print "verbosity turned on"
```
Python doc (second paragraph of section [15.4.3.11. dest](https://docs.python.org/2.7/library/argparse.html#dest)) states:
> Any internal `-` characters will be converted to `_` characters to make
> sure the string is a valid attribute name. |
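A self-contained check (feeding the arguments explicitly so it runs without a command line):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--arg-1', help='increase output verbosity')
args = parser.parse_args(['--arg-1', 'hello'])

print(args.arg_1)             # hello
print('arg_1' in vars(args))  # True: the dash became an underscore
```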
Django many-to-many | 34,972,012 | 3 | 2016-01-24T03:52:21Z | 34,972,055 | 7 | 2016-01-24T03:58:42Z | [
"python",
"django",
"many-to-many"
] | ```
class Actor(models.Model):
    name = models.CharField(max_length=50)

    def __str__(self):
        return self.name

class Movie(models.Model):
    title = models.CharField(max_length=50)
    actors = models.ManyToManyField(Actor)

    def __str__(self):
        return self.title
```
How can I access the movies of an actor from an `Actor` object in a template?
I need to do it in both directions.
This worked from movies to actors.
```
{{movie.actors.all}}
``` | just put `related_name` into `actors` field
```
actors = models.ManyToManyField(Actor, related_name="actor_movies")
```
and then in template:
```
{{ actor.actor_movies.all }}
```
or if you dont want `related_name`:
template:
```
{{ actor.movie_set.all }}
``` |
403 Forbidden using Urllib2 [Python] | 34,974,117 | 3 | 2016-01-24T09:29:01Z | 35,028,627 | 7 | 2016-01-27T03:58:10Z | [
"python",
"instagram-api"
] | ```
url = 'https://www.instagram.com/accounts/login/ajax/'
values = {'username' : 'User',
          'password' : 'Pass'}
#'User-agent', ''
data = urllib.urlencode(values)
req = urllib2.Request(url, data,headers={'User-Agent' : "Mozilla/5.0"})
con = urllib2.urlopen( req )
the_page = response.read()
```
Does anyone have any ideas with this? I keep getting the error "403 forbidden".
Its possible instagram has something that won't let me connect via python (I don't want to connect via their API). What on earth is going on here, does anyone have any ideas?
Thanks!
EDIT: Adding more info.
The error I was getting was this
```
This page could not be loaded. If you have cookies disabled in your browser, or you are browsing in Private Mode, please try enabling cookies or turning off Private Mode, and then retrying your action.
```
I edited my code but am still getting that error.
```
jar = cookielib.FileCookieJar("cookies")
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
print len(jar) #prints 0
opener.addheaders = [('User-agent','Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.111 Safari/537.36')]
result = opener.open('https://www.instagram.com')
print result.getcode(), len(jar) #prints 200 and 2
url = 'https://www.instagram.com/accounts/login/ajax/'
values = {'username' : 'username',
          'password' : 'password'}
data = urllib.urlencode(values)
response = opener.open(url, data)
print response.getcode()
``` | Two important things, for starters:
* make sure you stay on the *legal side*. According to the Instagram's [Terms of Use](https://help.instagram.com/478745558852511):
> We prohibit crawling, scraping, caching or otherwise accessing any content on the Service via automated means, including but not limited to, user profiles and photos (except as may be the result of standard search engine protocols or technologies used by a search engine with Instagram's express consent).
>
> You must not create accounts with the Service through unauthorized means, including but not limited to, by using an automated device, script, bot, spider, crawler or scraper.
* there is an [Instagram API](https://www.instagram.com/developer/) that would help staying on the legal side and make the life easier. There is a Python client: [python-instagram](https://github.com/Instagram/python-instagram)
Aside from that, the Instagram itself is javascript-heavy and you may find it difficult to work with using just `urllib2` or `requests`. If, for some reason, you cannot use the API, you would look into browser automation via [`selenium`](https://selenium-python.readthedocs.org/). Note that you can automate a headless browser like [`PhantomJS`](http://phantomjs.org/) also. Here is a sample code to log in:
```
from selenium import webdriver
USERNAME = "username"
PASSWORD = "password"
driver = webdriver.PhantomJS()
driver.get("https://www.instagram.com")
driver.find_element_by_name("username").send_keys(USERNAME)
driver.find_element_by_name("password").send_keys(PASSWORD)
driver.find_element_by_xpath("//button[. = 'Log in']").click()
``` |
Split text lines in scanned document | 34,981,144 | 5 | 2016-01-24T20:36:46Z | 35,014,061 | 7 | 2016-01-26T12:38:42Z | [
"python",
"opencv",
"ocr",
"skimage"
] | I am trying to find a way to split the lines of text in a scanned document that has been adaptively thresholded. Right now, I am storing the pixel values of the document as unsigned ints from 0 to 255, and I am taking the average of the pixels in each line, and I split the lines into ranges based on whether the average of the pixel values is larger than 250, and then I take the median of each range of lines for which this holds. However, this method sometimes fails, as there can be black splotches on the image.
Is there a more noise-resistant way to do this task?
EDIT: Here is some code. "warped" is the name of the original image, "cuts" is where I want to split the image.
```
warped = threshold_adaptive(warped, 250, offset = 10)
warped = warped.astype("uint8") * 255
# get areas where we can split image on whitespace to make OCR more accurate
color_level = np.array([np.sum(line) / len(line) for line in warped])
cuts = []
i = 0
while(i < len(color_level)):
if color_level[i] > 250:
begin = i
while(color_level[i] > 250):
i += 1
cuts.append((i + begin)/2) # middle of the whitespace region
else:
i += 1
```
EDIT 2: Sample image added
[](http://i.stack.imgur.com/sxRq9.jpg) | From your input image, you need to make the text white and the background black
[](http://i.stack.imgur.com/DmbZx.png)
You then need to compute the rotation angle of your bill. A simple approach is to find the `minAreaRect` of all white points (`findNonZero`), and you get:
[](http://i.stack.imgur.com/Y1eTU.png)
Then you can rotate your bill, so that text is horizontal:
[](http://i.stack.imgur.com/Ky0jX.png)
Now you can compute horizontal projection (`reduce`). You can take the average value in each line. Apply a threshold `th` on the histogram to account for some noise in the image (here I used `0`, i.e. no noise). Lines with only background will have a value `>0`, text lines will have value `0` in the histogram. Then take the average bin coordinate of each continuous sequence of white bins in the histogram. That will be the `y` coordinate of your lines:
[](http://i.stack.imgur.com/C7Z2h.png)
Here is the code. It's in C++, but since most of the work is done with OpenCV functions, it should be easily convertible to Python. At least, you can use this as a reference:
```
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;
int main()
{
// Read image
Mat3b img = imread("path_to_image");
// Binarize image. Text is white, background is black
Mat1b bin;
cvtColor(img, bin, COLOR_BGR2GRAY);
bin = bin < 200;
// Find all white pixels
vector<Point> pts;
findNonZero(bin, pts);
// Get rotated rect of white pixels
RotatedRect box = minAreaRect(pts);
if (box.size.width > box.size.height)
{
swap(box.size.width, box.size.height);
box.angle += 90.f;
}
Point2f vertices[4];
box.points(vertices);
for (int i = 0; i < 4; ++i)
{
line(img, vertices[i], vertices[(i + 1) % 4], Scalar(0, 255, 0));
}
// Rotate the image according to the found angle
Mat1b rotated;
Mat M = getRotationMatrix2D(box.center, box.angle, 1.0);
warpAffine(bin, rotated, M, bin.size());
// Compute horizontal projections
Mat1f horProj;
reduce(rotated, horProj, 1, CV_REDUCE_AVG);
// Remove noise in histogram. White bins identify space lines, black bins identify text lines
float th = 0;
Mat1b hist = horProj <= th;
    // Get mean coordinate of each group of white pixels
vector<int> ycoords;
int y = 0;
int count = 0;
bool isSpace = false;
for (int i = 0; i < rotated.rows; ++i)
{
if (!isSpace)
{
if (hist(i))
{
isSpace = true;
count = 1;
y = i;
}
}
else
{
if (!hist(i))
{
isSpace = false;
ycoords.push_back(y / count);
}
else
{
y += i;
count++;
}
}
}
// Draw line as final result
Mat3b result;
cvtColor(rotated, result, COLOR_GRAY2BGR);
for (int i = 0; i < ycoords.size(); ++i)
{
line(result, Point(0, ycoords[i]), Point(result.cols, ycoords[i]), Scalar(0, 255, 0));
}
return 0;
}
``` |
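As a rough Python counterpart to the histogram step above (pure Python, no OpenCV, so the binarization and deskewing parts are omitted), the blank-band detection could be sketched like this — the helper name `split_line_coords` and the 0/1 row representation are illustrative assumptions, not part of the original answer:

```python
def split_line_coords(binary_rows, th=0.0):
    # binary_rows: 2-D binary image as rows of 0/1 (1 = text pixel).
    # Mirrors the C++ histogram loop: rows whose mean value is <= th
    # count as background; each contiguous run of background rows
    # contributes its mean y coordinate as one split line.
    coords, run = [], []
    for y, row in enumerate(binary_rows):
        if sum(row) / len(row) <= th:   # background row
            run.append(y)
        elif run:                       # a text row ends a background run
            coords.append(sum(run) // len(run))
            run = []
    if run:                             # trailing background band
        coords.append(sum(run) // len(run))
    return coords

# Two text bands separated and surrounded by blank rows:
img = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [1, 1, 0, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0],
       [1, 0, 0, 1],
       [0, 0, 0, 0]]
print(split_line_coords(img))  # -> [0, 4, 7]
```

With `th > 0` this also tolerates a few stray noise pixels per background row, which is the role of `th` in the C++ version.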
TensorFlow: Max of a tensor along an axis | 34,987,509 | 5 | 2016-01-25T07:49:27Z | 34,988,069 | 8 | 2016-01-25T08:30:27Z | [
"python",
"deep-learning",
"tensorflow"
] | My question is in two connected parts:
1. How do I calculate the max along a certain axis of a tensor? For example if I have
```
x = tf.constant([[1,220,55],[4,3,-1]])
```
I want something like
```
x_max = tf.max(x, axis=1)
print sess.run(x_max)
output: [220,4]
```
I know there is a tf.argmax and a tf.maximum, but neither gives the maximum value along an axis of a single tensor. For now I have a workaround:
```
x_max = tf.slice(x, begin=[0,0], size=[-1,1])
for a in range(1,2):
x_max = tf.maximum(x_max , tf.slice(x, begin=[0,a], size=[-1,1]))
```
But it looks less than optimal. Is there a better way to do this?
2. Given the indices of an argmax of a tensor, how do I index into another tensor using those indices? Using the example of x above, how do I do something like the following:
```
ind_max = tf.argmax(x, dimension=1) #output is [1,0]
y = tf.constant([[1,2,3], [6,5,4])
y_ = y[:, ind_max] #y_ should be [2,6]
```
I know slicing like the last line does not exist in TensorFlow yet ([#206](https://github.com/tensorflow/tensorflow/issues/206)); my question is, what is the best workaround for my specific case (maybe using other methods like gather, select, etc...)?
Additional information: I know x and y are going to be two dimensional tensors only! | The [`tf.reduce_max()`](https://www.tensorflow.org/versions/0.6.0/api_docs/python/math_ops.html#reduce_max) operator provides exactly this functionality. By default it computes the global maximum of the given tensor, but you can specify a list of `reduction_indices`, which has the same meaning as `axis` in NumPy. To complete your example:
```
x = tf.constant([[1, 220, 55], [4, 3, -1]])
x_max = tf.reduce_max(x, reduction_indices=[1])
print sess.run(x_max) # ==> "array([220, 4], dtype=int32)"
```
If you compute the argmax using [`tf.argmax()`](https://www.tensorflow.org/versions/0.6.0/api_docs/python/math_ops.html#argmax), you could obtain the values from a different tensor `y` by flattening `y` using [`tf.reshape()`](https://www.tensorflow.org/versions/0.6.0/api_docs/python/array_ops.html#reshape), converting the argmax indices into vector indices as follows, and using [`tf.gather()`](https://www.tensorflow.org/versions/0.6.0/api_docs/python/array_ops.html#gather) to extract the appropriate values:
```
ind_max = tf.argmax(x, dimension=1)
y = tf.constant([[1, 2, 3], [6, 5, 4]])
flat_y = tf.reshape(y, [-1]) # Reshape to a vector.
# N.B. Handles 2-D case only.
flat_ind_max = ind_max + tf.cast(tf.range(tf.shape(y)[0]) * tf.shape(y)[1], tf.int64)
y_ = tf.gather(flat_y, flat_ind_max)
print sess.run(y_) # ==> "array([2, 6], dtype=int32)"
``` |
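The index arithmetic behind the flatten + gather trick is easy to sanity-check outside TensorFlow. Here is a pure-Python sketch of the same logic (the helper name is made up for illustration; this is not TF code):

```python
def gather_argmax_rows(x, y):
    # Mirrors the answer's trick in plain Python: per-row argmax
    # indices into x become flat indices into a flattened copy of y,
    # where flat index = row * ncols + col.
    ncols = len(x[0])
    flat_y = [v for row in y for v in row]             # tf.reshape(y, [-1])
    ind_max = [max(range(ncols), key=row.__getitem__)  # tf.argmax(x, 1)
               for row in x]
    flat_ind = [i * ncols + j for i, j in enumerate(ind_max)]
    return [flat_y[k] for k in flat_ind]               # tf.gather

print(gather_argmax_rows([[1, 220, 55], [4, 3, -1]],
                         [[1, 2, 3], [6, 5, 4]]))  # -> [2, 6]
```

This reproduces the `[2, 6]` result from the question's example, which is what the `tf.gather` snippet above computes.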
Python import error: 'module' object has no attribute 'x' | 34,992,019 | 6 | 2016-01-25T11:56:39Z | 34,992,394 | 7 | 2016-01-25T12:15:01Z | [
"python",
"python-3.x",
"import"
] | I am trying to do a python script that it is divided in multiple files, so I can maintain it more easily instead of making a very-long single file script.
Here is the directory structure:
```
wmlxgettext.py
<pywmlx>
|- __init__.py
|- (some other .py files)
|- <state>
|- __init__.py
|- state.py
|- machine.py
|- lua_idle.py
```
If I go to the main directory of my project (where the wmlxgettext.py script is stored) and try to "import pywmlx", I get an import error (AttributeError: 'module' object has no attribute 'state')
Here is the complete error message:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/programmi/my/python/wmlxgettext/true/pywmlx/__init__.py", line 9, in <module>
import pywmlx.state as statemachine
File "/home/user/programmi/my/python/wmlxgettext/true/pywmlx/state/__init__.py", line 1, in <module>
from pywmlx.state.machine import setup
File "/home/user/programmi/my/python/wmlxgettext/true/pywmlx/state/machine.py", line 2, in <module>
from pywmlx.state.lua_idle import setup_luastates
File "/home/user/programmi/my/python/wmlxgettext/true/pywmlx/state/lua_idle.py", line 3, in <module>
import pywmlx.state.machine as statemachine
AttributeError: 'module' object has no attribute 'state'
```
Since I am in the "project main directory", pywmlx should be on PYTHONPATH (in fact, I had no trouble when I tried to import pywmlx/something.py)
I'm not able to figure where is my error and how to solve this problem.
Here is the **pywmlx/\_\_init\_\_.py** source:
```
# all following imports works well:
from pywmlx.wmlerr import ansi_setEnabled
from pywmlx.wmlerr import wmlerr
from pywmlx.wmlerr import wmlwarn
from pywmlx.postring import PoCommentedString
from pywmlx.postring import WmlNodeSentence
from pywmlx.postring import WmlNode
# this is the import that does not work:
import pywmlx.state as statemachine
```
Here is the **pywmlx/state/\_\_init\_\_.py** source:
```
from pywmlx.state.machine import setup
from pywmlx.state.machine import run
```
But I think that the real problem is somewhat hidden in the "imports" used by one (or all) python modules stored in pywmlx/state directory.
Here is the **pywmlx/state/machine.py** source:
```
# State is a "virtual" class
from pywmlx.state.state import State
from pywmlx.state.lua_idle import setup_luastates
import pywmlx.nodemanip as nodemanip
def addstate(self, name, value):
# code is not important for this question
pass
def setup():
setup_luastates()
def run(self, *, filebuf, fileref, fileno, startstate, waitwml=True):
# to do
pass
```
Finally here is the **pywmlx/state/lua\_idle.py** source:
```
import re
import pywmlx.state.machine as statemachine
# State is a "virtual" class
from pywmlx.state.state import State
# every state is a subclass of State
# all proprieties were defined originally on the base State class:
# self.regex and self.iffail were "None"
# the body of "run" function was only "pass"
class LuaIdleState (State):
def __init__(self):
self.regex = re.compile(r'--.*?\s*#textdomain\s+(\S+)', re.I)
self.iffail = 'lua_checkpo'
def run(xline, match):
statemachine._currentdomain = match.group(1)
xline = None
return (xline, 'lua_idle')
def setup_luastates():
statemachine.addstate('lua_idle', LuaIdleState)
```
Sorry if I posted so much code and so many files... but I fear that the files, in directory, hides more than a single import problem, so I published them all, hoping that I could explain the problem avoiding confusion.
I think that I miss something about how import works in python, so I hope this question can be useful also to other programmers, becouse I think I am not the only one who found the official documentation very difficult to understand when explaining import.
---
Searches Done:
[Not Useful](http://stackoverflow.com/questions/1917958/python-import-mechanics): I am already explicitly using import x.y.z all times I need to import something
[Not Useful](http://stackoverflow.com/questions/3201898/python-import-error-no-module-named): Even if the question asks about import errors, it seems not useful for the same reason as (1)
[Not Useful](http://stackoverflow.com/questions/3721415/python-import-problem): As far as I know, **pywmlx** should be on PYTHONPATH, since the "current working directory" in my tests is the directory that contains the main python script and the **pywmlx** directory. Correct me if I am wrong | Python does several things when importing packages:
* Create an object in `sys.modules` for the package, with the name as key: `'pywmlx'`, `'pywmlx.state'`, `'pywmlx.state.machine'`, etc.
* Run the bytecode loaded for that module; this may create more modules.
* Once a module is fully loaded and it is located inside another package, set the module as an attribute of the parent module object. Thus the `sys.modules['pywmlx.state']` module is set as the `state` attribute on the `sys.modules['pywmlx']` module object.
That last step hasn't taken place yet in your example, but the following line only works when it has been set:
```
import pywmlx.state.machine as statemachine
```
because this looks up both `state` and `machine` as attributes first. Use this syntax instead:
```
from pywmlx.state import machine as statemachine
```
Alternatively, just use
```
import pywmlx.state.machine
```
and replace `statemachine.` everywhere else with `pywmlx.state.machine.`. This works because all that is added to your namespace is a reference to the `sys.modules['pywmlx']` module object and the attribute references won't need to be resolved until you use that reference in the functions and methods. |
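The attribute-setting step described above can be observed directly. A minimal, self-contained demo — the `pkg` package here is created on the fly in a temp directory purely for illustration, it is not part of the original question:

```python
import os
import sys
import tempfile

# Build a throwaway package on disk: pkg/__init__.py and pkg/sub.py.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "pkg"))
open(os.path.join(root, "pkg", "__init__.py"), "w").close()
with open(os.path.join(root, "pkg", "sub.py"), "w") as f:
    f.write("VALUE = 42\n")
sys.path.insert(0, root)

import pkg.sub

# Both modules were registered in sys.modules, and the fully loaded
# submodule was set as an attribute on the parent package object --
# which is exactly what attribute-style access like pkg.sub relies on.
print(pkg.sub.VALUE)                        # -> 42
print(sys.modules["pkg"].sub is pkg.sub)    # -> True
```

During a circular import, that attribute has not been set yet when `import a.b.c as x` tries to resolve it, whereas `from a.b import c as x` (or a plain `import a.b.c`) avoids the early attribute lookup — which is why the answer's suggested rewrites work.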