title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
Is there a need to close files that have no reference to them? | 36,046,167 | 47 | 2016-03-16T20:20:26Z | 36,063,184 | 33 | 2016-03-17T14:16:35Z | [
"python",
"python-2.7",
"file"
] | As a complete beginner to programming, I am trying to understand the basic concepts of opening and closing files. One exercise I am doing is creating a script that allows me to copy the contents from one file to another.
```
in_file = open(from_file)
indata = in_file.read()
out_file = open(to_file, 'w')
out_file.write(indata)
out_file.close()
in_file.close()
```
I have tried to shorten this code and came up with this:
```
indata = open(from_file).read()
open(to_file, 'w').write(indata)
```
This works and looks a bit more efficient to me. However, this is also where I get confused. I think I left out the references to the opened files; there was no need for the in\_file and out\_file variables. However, does this leave me with two files that are open, but have nothing referring to them? How do I close these, or is there no need to?
Any help that sheds some light on this topic is much appreciated. | You asked about the "basic concepts", so let's take it from the top: When you open a file, your program gains access to **a system resource,** that is, to something outside the program's own memory space. This is basically a bit of magic provided by the operating system (a *system call,* in Unix terminology). Hidden inside the file object is a reference to a "file descriptor", the actual OS resource associated with the open file. Closing the file tells the system to release this resource.
As an OS resource, **the number of files a process can keep open is limited:** Long ago the per-process limit was about 20 on Unix. Right now my OS X box imposes a limit of 256 open files (though this is an imposed limit, and can be raised). Other systems might set limits of [a few thousand](http://stackoverflow.com/q/870173/699305), or in the [tens of thousands](http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/) (per user, not per process in this case). When your program ends, all resources are automatically released. So if your program opens a few files, does something with them and exits, you can be sloppy and you'll never know the difference. But if your program will be opening thousands of files, you'll do well to release open files to avoid exceeding OS limits.
There's another benefit to closing files before your process exits: If you opened a file for writing, **closing it will first "flush its output buffer".** This means that i/o libraries optimize disk use by collecting ("buffering") what you write out, and saving it to disk in batches. If you write text to a file and immediately try to reopen and read it without first closing the output handle, you'll find that not everything has been written out. Also, if your program is closed too abruptly (with a signal, or occasionally even through normal exit), the output might never be flushed.
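A minimal sketch of the buffering effect described above, using a temporary file (exact buffering behavior can vary by platform and Python implementation, so the intermediate read is illustrative rather than guaranteed):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

out = open(path, "w")
out.write("hello")        # small write: typically sits in the output buffer
peek = open(path).read()  # usually '' -- nothing has been flushed yet
out.close()               # close() flushes the buffer to disk

final = open(path).read()
print(repr(peek), repr(final))
```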
There's already plenty of other answers on how to release files, so here's just a brief list of the approaches:
1. Explicitly with `close()`. (Note for python newbies: Don't forget the parens! My students like to write `in_file.close`, which does nothing.)
2. **Recommended:** Implicitly, by opening files with the `with` statement. The `close()` method will be called when the end of the `with` block is reached, even in the event of abnormal termination (from an exception).
```
with open("data.txt") as in_file:
data = in_file.read()
```
3. Implicitly by the reference manager or garbage collector, if your python engine implements it. This is not recommended since it's not entirely portable; see the other answers for details. That's why the `with` statement was added to python.
4. Implicitly, when your program ends. If a file is open for output, this may run a risk of the program exiting before everything has been flushed to disk. |
Python: easiest way to flatten a tuple containing another tuple from a function | 36,068,444 | 2 | 2016-03-17T18:07:06Z | 36,068,490 | 8 | 2016-03-17T18:09:21Z | [
"python",
"tuples",
"unpack"
] | My code is like this:
```
def f1():
return 2, 3
def f2():
return 1, f1()
```
I can do:
```
a, (b, c) = f2()
```
I want to do:
```
a, b, c = f2()
```
All the solutions I could find require a lot of insane parentheses/brackets, or creating an identity function to use the `*` operator. I would like to only modify f2().
Is there something simpler? | Instead of using `1, f1()`, use tuple concatenation:
```
def f2():
return (1,) + f1()
```
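On Python 3.5 and newer, PEP 448 iterable unpacking gives an even shorter form of the same concatenation:

```python
def f1():
    return 2, 3

def f2():
    # unpack f1()'s result directly into the new tuple (PEP 448)
    return (1, *f1())

a, b, c = f2()
print(a, b, c)  # 1 2 3
```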
As mentioned in a comment, you could also do this:
```
def f2():
x,y = f1()
return 1, x, y
```
You could also do this:
```
def f2():
return (lambda *args: args)(1, *f1())
```
That is a little long, but it has an advantage over the `x,y = f1()` solution because this way `f1()` can return a tuple with any number of elements. |
How do you load, label, and feed jpeg data into Tensorflow? | 36,075,319 | 4 | 2016-03-18T02:53:58Z | 36,091,574 | 13 | 2016-03-18T18:16:18Z | [
"python",
"image-processing",
"machine-learning",
"jpeg",
"tensorflow"
] | I have been trying to feed 1750 \* 1750 images into Tensorflow, but I do not know how to label and feed the data after I convert the images into a Tensor using the tf.image.decode\_jpeg() function.
Currently, my code is:
```
import tensorflow as tf
import numpy as np
import imageflow
import os, glob
sess = tf.InteractiveSession()
def read_jpeg(filename_queue):
reader = tf.WholeFileReader()
key, value = reader.read(filename_queue)
my_img = tf.image.decode_jpeg(value)
my_img.set_shape([1750, 1750, 1])
print(value)
return my_img
#####################################################
def read_image_data():
jpeg_files = []
images_tensor = []
i = 1
WORKING_PATH = "/Users/Zanhuang/Desktop/NNP/DATA"
jpeg_files_path = glob.glob(os.path.join(WORKING_PATH, '*.jpeg'))
for filename in jpeg_files_path:
print(i)
i += 1
jpeg_files.append(filename)
filename_queue = tf.train.string_input_producer(jpeg_files)
mlist = [read_jpeg(filename_queue) for _ in range(len(jpeg_files))]
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
images_tensor = tf.convert_to_tensor(images_tensor)
sess.close()
```
Now, as I said earlier, I need to feed and label the data. I have seen the CIFAR-10 tutorial files, but they stored the labels in a file and I plan on not doing it that way.
I am quite new to Tensorflow so please keep the response as detailed as possible.
Thanks! | Depending on what you are trying to do, there are several directions to consider.
1. If you just wish to run inference on an arbitrary JPEG file (i.e. labels are not required), then you can follow the example of classify\_image.py which feeds in a JPEG image into a pre-trained Inception network:
[github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/imagenet/classify\_image.py](http://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/imagenet/classify_image.py)
2. If you do wish to train (or fine-tune) a model on a **small** custom data set of JPEG images, then take a look at this example for how to train a model off a small set of JPEG images.
[github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/image\_retraining/retrain.py](http://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/image_retraining/retrain.py)
3. If you do wish to train (or fine-tune) a model on a **large** custom data set of JPEG images, then reading many individual JPEG files will be inefficient and slow down training tremendously.
I would suggest following the procedure described in the inception/ model library that converts a directory of JPEG images into sharded RecordIO files containing serialized JPEG images.
[github.com/tensorflow/models/blob/master/inception/inception/data/build\_image\_data.py](http://github.com/tensorflow/models/blob/master/inception/inception/data/build_image_data.py)
Instructions for running the conversion script are available here:
[github.com/tensorflow/models/blob/master/inception/README.md#how-to-construct-a-new-dataset-for-retraining](http://github.com/tensorflow/models/blob/master/inception/README.md#how-to-construct-a-new-dataset-for-retraining)
After running the conversion, you may then employ/copy the image preprocessing pipeline used by the inception/ model.
[github.com/tensorflow/models/blob/master/inception/inception/image\_processing.py](http://github.com/tensorflow/models/blob/master/inception/inception/image_processing.py) |
Mysterious exceptions when making many concurrent requests from urllib.request to HTTPServer | 36,075,676 | 18 | 2016-03-18T03:36:13Z | 36,439,055 | 7 | 2016-04-05T23:50:45Z | [
"python",
"python-3.x",
"urllib",
"python-multithreading",
"httpserver"
] | I am trying to do [this Matasano crypto challenge](http://cryptopals.com/sets/4/challenges/31/) that involves doing a timing attack against a server with an artificially slowed-down string comparison function. It says to use "the web framework of your choosing", but I didn't feel like installing a web framework, so I decided to use the [HTTPServer class](https://docs.python.org/3/library/http.server.html#http.server.HTTPServer) built into the [`http.server`](https://docs.python.org/3/library/http.server.html) module.
I came up with something that worked, but it was very slow, so I tried to speed it up using the (poorly-documented) thread pool built into [`multiprocessing.dummy`](https://docs.python.org/3.5/library/multiprocessing.html#module-multiprocessing.dummy). It was much faster, but I noticed something strange: if I make 8 or fewer requests concurrently, it works fine. If I have more than that, it works for a while and gives me errors at seemingly random times. The errors seem to be inconsistent and not always the same, but they usually have `Connection refused, invalid argument`, `OSError: [Errno 22] Invalid argument`, `urllib.error.URLError: <urlopen error [Errno 22] Invalid argument>`, `BrokenPipeError: [Errno 32] Broken pipe`, or `urllib.error.URLError: <urlopen error [Errno 61] Connection refused>` in them.
Is there some limit to the number of connections the server can handle? I don't think the number of threads per se is the problem, because I wrote a simple function that did the slowed-down string comparison without running the web server, and called it with 500 simultaneous threads, and it worked fine. I don't think that simply making requests from that many threads is the problem, because I have made crawlers that used over 100 threads (all making simultaneous requests to the same website) and they worked fine. It looks like maybe the HTTPServer is not meant to reliably host production websites that get large amounts of traffic, but I am surprised that it is this easy to make it crash.
I tried gradually removing stuff from my code that looked unrelated to the problem, as I usually do when I diagnose mysterious bugs like this, but that wasn't very helpful in this case. It seemed like as I was removing seemingly unrelated code, the number of connections that the server could handle gradually increased, but there was not a clear cause of the crashes.
**Does anyone know how to increase the number of requests I can make at once, or at least why this is happening?**
My code is complicated, but I came up with this simple program that demonstrates the problem:
```
#!/usr/bin/env python3
import os
import random
from http.server import BaseHTTPRequestHandler, HTTPServer
from multiprocessing.dummy import Pool as ThreadPool
from socketserver import ForkingMixIn, ThreadingMixIn
from threading import Thread
from time import sleep
from urllib.error import HTTPError
from urllib.request import urlopen
class FancyHTTPServer(ThreadingMixIn, HTTPServer):
pass
class MyRequestHandler(BaseHTTPRequestHandler):
def do_GET(self):
sleep(random.uniform(0, 2))
self.send_response(200)
self.end_headers()
self.wfile.write(b"foo")
def log_request(self, code=None, size=None):
pass
def request_is_ok(number):
try:
urlopen("http://localhost:31415/test" + str(number))
except HTTPError:
return False
else:
return True
server = FancyHTTPServer(("localhost", 31415), MyRequestHandler)
try:
Thread(target=server.serve_forever).start()
with ThreadPool(200) as pool:
for i in range(10):
numbers = [random.randint(0, 99999) for j in range(20000)]
for j, result in enumerate(pool.imap(request_is_ok, numbers)):
if j % 20 == 0:
print(i, j)
finally:
server.shutdown()
server.server_close()
print("done testing server")
```
For some reason, the program above works fine unless it has over 100 threads or so, but my real code for the challenge can only handle 8 threads. If I run it with 9, I usually get connection errors, and with 10, I always get connection errors. I tried using [`concurrent.futures.ThreadPoolExecutor`](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor), [`concurrent.futures.ProcessPoolExecutor`](https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor), and [`multiprocessing.pool`](https://docs.python.org/3.5/library/multiprocessing.html#multiprocessing.pool.Pool) instead of `multiprocessing.dummy.pool` and none of those seemed to help. I tried using a plain `HTTPServer` object (without the `ThreadingMixIn`) and that just made things run very slowly and didn't fix the problem. I tried using `ForkingMixIn` and that didn't fix it either.
What am I supposed to do about this? I am running Python 3.5.1 on a late-2013 MacBook Pro running OS X 10.11.3.
**EDIT:** I tried a few more things, including running the server in a process instead of a thread, as a simple `HTTPServer`, with the `ForkingMixIn`, and with the `ThreadingMixIn`. None of those helped.
**EDIT:** This problem is stranger than I thought. I tried making one script with the server, and another with lots of threads making requests, and running them in different tabs in my terminal. The process with the server ran fine, but the one making requests crashed. The exceptions were a mix of `ConnectionResetError: [Errno 54] Connection reset by peer`, `urllib.error.URLError: <urlopen error [Errno 54] Connection reset by peer>`, `OSError: [Errno 41] Protocol wrong type for socket`, `urllib.error.URLError: <urlopen error [Errno 41] Protocol wrong type for socket>`, `urllib.error.URLError: <urlopen error [Errno 22] Invalid argument>`.
I tried it with a dummy server like the one above, and if I limited the number of concurrent requests to 5 or fewer, it worked fine, but with 6 requests, the client process crashed. There were some errors from the server, but it kept going. The client crashed regardless of whether I was using threads or processes to make the requests. I then tried putting the slowed-down function in the server and it was able to handle 60 concurrent requests, but it crashed with 70. This seems like it may contradict the evidence that the problem is with the server.
**EDIT:** I tried most of the things I described using `requests` instead of `urllib.request` and ran into similar problems.
**EDIT:** I am now running OS X 10.11.4 and running into the same problems. | You're using the default `listen()` backlog value, which is probably the cause of a lot of those errors. This is not the number of simultaneous clients with connection already established, but the number of clients waiting on the listen queue before the connection is established. Change your server class to:
```
class FancyHTTPServer(ThreadingMixIn, HTTPServer):
def server_activate(self):
self.socket.listen(128)
```
128 is a reasonable limit. You might want to check socket.SOMAXCONN or your OS somaxconn if you want to increase it further. If you still have random errors under heavy load, you should check your ulimit settings and increase if needed.
I did that with your example and I got over 1000 threads running fine, so I think that should solve your problem.
---
**Update**
If it improved but it's still crashing with 200 simultaneous clients, then I'm pretty sure your main problem was the backlog size. Be aware that your problem is not the number of concurrent clients, but the number of concurrent connection requests. A brief explanation on what that means, without going too deep into TCP internals.
```
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(BACKLOG)
while running:
conn, addr = s.accept()
do_something(conn, addr)
```
In this example, the socket is now accepting connections on the given port, and the `s.accept()` call will block until a client connects. You can have many clients trying to connect simultaneously, and depending on your application you might not be able to call `s.accept()` and dispatch the client connection as fast as the clients are trying to connect. Pending clients are queued, and the max size of that queue is determined by the BACKLOG value. If the queue is full, clients will fail with a Connection Refused error.
Threading doesn't help, because what the ThreadingMixIn class does is to execute the `do_something(conn, addr)` call in a separate thread, so the server can return to the mainloop and the `s.accept()` call.
You can try increasing the backlog further, but there will be a point where that won't help because if the queue grows too large some clients will timeout before the server performs the `s.accept()` call.
So, as I said above, your problem is the number of simultaneous connection attempts, not the number of simultaneous clients. Maybe 128 is enough for your real application, but you're getting an error on your test because you're trying to connect with all 200 threads at once and flooding the queue.
Don't worry about `ulimit` unless you get a `Too many open files` error, but if you want to increase the backlog beyond 128, do some research on `socket.SOMAXCONN`. This is a good start: <https://utcc.utoronto.ca/~cks/space/blog/python/AvoidSOMAXCONN> |
Test if function or method is normal or asynchronous | 36,076,619 | 5 | 2016-03-18T05:13:05Z | 36,076,663 | 7 | 2016-03-18T05:17:40Z | [
"python",
"python-3.x",
"asynchronous",
"python-3.5",
"coroutine"
] | How can I find out if a function or method is a normal function or an async function? I would like my code to automatically support normal or async callbacks and need a way to test what type of function is passed.
```
async def exampleAsyncCb():
pass
def exampleNormalCb():
pass
def isAsync(someFunc):
#do cool dynamic python stuff on the function
return True/False
async def callCallback(cb, arg):
if isAsync(cb):
await cb(arg)
else:
cb(arg)
```
And depending on what type of function gets passed it should either run it normally or with await. I tried various things but have no idea how to implement `isAsync()`. | Use the [inspect](https://docs.python.org/3/library/inspect.html) module of Python.
`inspect.iscoroutinefunction(object)`
> Return true if the object is a coroutine function (a function defined with an async def syntax).
This function is available since Python 3.5.
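A runnable sketch of the dispatcher from the question, built on `inspect.iscoroutinefunction()` (`asyncio.run()` requires Python 3.7+):

```python
import asyncio
import inspect

async def example_async_cb(arg):
    return arg

def example_normal_cb(arg):
    return arg

async def call_callback(cb, arg):
    # await the callback only if it is a coroutine function
    if inspect.iscoroutinefunction(cb):
        return await cb(arg)
    return cb(arg)

print(asyncio.run(call_callback(example_async_cb, 1)))   # 1
print(asyncio.run(call_callback(example_normal_cb, 2)))  # 2
```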
The module is also available for Python 2, with fewer features and certainly without the one you are looking for: [inspect](https://docs.python.org/2/library/inspect.html)
The inspect module, as the name suggests, is useful for inspecting a whole lot of things. The documentation says:
> The inspect module provides several useful functions to help get information about live objects such as modules, classes, methods, functions, tracebacks, frame objects, and code objects. For example, it can help you examine the contents of a class, retrieve the source code of a method, extract and format the argument list for a function, or get all the information you need to display a detailed traceback.
>
> There are four main kinds of services provided by this module: type checking, getting source code, inspecting classes and functions, and examining the interpreter stack.
Some basic capabilities of this module are:
```
inspect.ismodule(object)
inspect.isclass(object)
inspect.ismethod(object)
inspect.isfunction(object)
```
It also packs capability to retrieve the source code
```
inspect.getdoc(object)
inspect.getcomments(object)
inspect.getfile(object)
inspect.getmodule(object)
```
Methods are named intuitively. Description if needed can be found in documentation. |
How do I raise a FileNotFoundError properly? | 36,077,266 | 5 | 2016-03-18T06:06:00Z | 36,077,407 | 11 | 2016-03-18T06:15:35Z | [
"python",
"python-3.x",
"file-not-found"
] | I use a third-party library that's fine but does not handle inexistant files the way I would like. When giving it a non-existant file, instead of raising the good old
```
FileNotFoundError: [Errno 2] No such file or directory: 'nothing.txt'
```
it raises some obscure message:
```
OSError: Syntax error in file None (line 1)
```
I don't want to handle the missing file, don't want to catch or handle the exception, don't want to raise a custom exception, nor do I want to `open` the file or create it if it does not exist.
I only want to check it exists (`os.path.isfile(filename)` will do the trick) and if not, then just raise a proper FileNotFoundError.
I tried this:
```
#!/usr/bin/env python3
import os
if not os.path.isfile("nothing.txt"):
raise FileNotFoundError
```
what only outputs:
```
Traceback (most recent call last):
File "./test_script.py", line 6, in <module>
raise FileNotFoundError
FileNotFoundError
```
This is better than a "Syntax error in file None", but how is it possible to raise the "real" python exception with the proper message, without having to reimplement it? | Pass in arguments:
```
import errno
import os
raise FileNotFoundError(
errno.ENOENT, os.strerror(errno.ENOENT), filename)
```
`FileNotFoundError` is a subclass of [`OSError`](https://docs.python.org/3/library/exceptions.html#OSError), which takes several arguments. The first is an error code from the [`errno` module](https://docs.python.org/3/library/errno.html) (file not found is always `errno.ENOENT`), the second the error message (use [`os.strerror()`](https://docs.python.org/3/library/os.html#os.strerror) to obtain this), and pass in the filename as the 3rd.
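Combined with the existence check from the question, this gives a small guard you can run before handing the path to the third-party library (the helper name `require_file` is illustrative):

```python
import errno
import os

def require_file(filename):
    # raise a proper FileNotFoundError, with the standard message,
    # before a library gets a chance to fail obscurely
    if not os.path.isfile(filename):
        raise FileNotFoundError(
            errno.ENOENT, os.strerror(errno.ENOENT), filename)

try:
    require_file("definitely-missing-nothing.txt")
except FileNotFoundError as e:
    print(e)  # [Errno 2] No such file or directory: 'definitely-missing-nothing.txt'
```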
The final string representation used in a traceback is built from those arguments:
```
>>> print(FileNotFoundError(errno.ENOENT, os.strerror(errno.ENOENT), 'foobar'))
[Errno 2] No such file or directory: 'foobar'
``` |
Why did Django 1.9 replace tuples () with lists [] in settings and URLs? | 36,081,149 | 31 | 2016-03-18T09:51:06Z | 36,081,236 | 51 | 2016-03-18T09:54:50Z | [
"python",
"django",
"python-2.7",
"django-settings",
"django-1.9"
**I am a bit curious to know why Django 1.9 replaced tuples () with lists [] in settings, URLs and other configuration files**
I just upgraded to Django 1.9 and noticed these changes. What is the logic behind them?
```
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles'
]
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'static'),
]
```
**urls.py**
```
urlpatterns = [
url(r'^', admin.site.urls),
]
```
Is anything different because of these changes? | It is explained in issue [#8846](https://code.djangoproject.com/ticket/8846) (emphasis mine):
> In the documentation for "Creating your own settings" there's a
> recommendation which reads "For settings that are sequences, use
> tuples instead of lists. This is purely for performance."
>
> This is bunk. Profiling shows that tuples run no faster than lists for
> most operations (certainly looping, which we are likely to do most
> often). On the other hand, **list-literal syntax has the advantage that
> it doesn't collapse to a single value when you have a single item and
> omit the trailing comma, like tuple syntax. Using list syntax is no
> slower, more legible and less error prone.** An often-expressed view in
> the wider Python community seems that tuples should not be considered
> as immutable lists. They are intended as fixed-length records - indeed
> the mathematical concept of a tuple is quite distinct from that of a
> sequence.
Also see [this answer](http://stackoverflow.com/a/12363023/2011147) for a more up-to-date discussion.
Another [answer](http://stackoverflow.com/a/68712/2011147) (not directly related to this issue) demonstrates that *accessing elements* is actually faster with a `list`.
**Update and further information:** It is correct that the above issue was closed years ago, but I included it because it explained the rationale behind the decision and many similar discussions refer to the same ticket. The actual implementation decision was triggered after the [following discussion on django-developers](https://groups.google.com/forum/#!msg/django-developers/h4FSYWzMJhs/_2iVc4qgfsoJ) started by core Django developer [Aymeric Augustin](https://www.djangoproject.com/~aaugustin/):
> I prefer them *[lists]* for two reasons:
>
> 1) All these settings are sequences of similar things. Such values are
> best represented with lists, unless they have to be immutable, in
> which case a tuple can be used. (tuples are both "namedtuples without
> names" and "immutable lists" in Python.)
>
> 2) Lists aren't prone to the "missing comma in single-item tuple"
> problem which bites beginners and experienced pythonistas alike.
> Django even has code to defend against this mistake for a handful of
> settings. Search for "tuple\_settings" in the source.
And the switch to lists actually happened in [issue #24149](https://code.djangoproject.com/ticket/24149) which also referred to the above discussion. |
Why did Django 1.9 replace tuples () with lists [] in settings and URLs? | 36,081,149 | 31 | 2016-03-18T09:51:06Z | 36,085,296 | 9 | 2016-03-18T13:11:23Z | [
"python",
"django",
"python-2.7",
"django-settings",
"django-1.9"
**I am a bit curious to know why Django 1.9 replaced tuples () with lists [] in settings, URLs and other configuration files**
I just upgraded to Django 1.9 and noticed these changes. What is the logic behind them?
```
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles'
]
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'static'),
]
```
**urls.py**
```
urlpatterns = [
url(r'^', admin.site.urls),
]
```
Is anything different because of these changes? | In [the release notes of 1.9](https://docs.djangoproject.com/en/1.9/releases/1.9/#default-settings-that-were-tuples-are-now-lists), there is:
> **Default settings that were tuples are now lists**
>
> The default settings in **django.conf.global\_settings** were a combination of lists and tuples. All settings that were formerly tuples are now lists.
So it appears that it was just done for consistency. Both tuples and lists should work fine. If you use a tuple with 1 element, remember the comma `(1,)` because otherwise it's not a tuple but simply an expression in parens.
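The trailing-comma trap in one snippet:

```python
a = (1)    # no comma: just the int 1 in parentheses
b = (1,)   # trailing comma: a one-element tuple
c = [1]    # list literal: no such trap
print(type(a).__name__, type(b).__name__, type(c).__name__)  # int tuple list
```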
As for urlpatterns, those used to be defined using a `patterns()` function, but that was deprecated in Django 1.8, as a list of url instances works fine. As the function will be removed in the future, it shouldn't be used in new apps and projects. |
Counterintuitive behaviour of int() in python | 36,085,185 | 81 | 2016-03-18T13:05:59Z | 36,085,574 | 75 | 2016-03-18T13:24:30Z | [
"python"
] | It's clearly stated in the [docs](https://docs.python.org/3.5/library/functions.html#int) that int(number) is a flooring type conversion:
```
int(1.23)
1
```
and int(string) returns an int if and only if the string is an integer literal.
```
int('1.23')
ValueError
int('1')
1
```
Is there any special reason for that? I find it counterintuitive that the function floors in one case, but not the other. | This is almost certainly a case of applying three of the principles from the [Zen of Python](https://www.python.org/dev/peps/pep-0020/):
> Explicit is better than implicit.
>
> [...] practicality beats purity
>
> Errors should never pass silently
Some percentage of the time, someone doing `int('1.23')` is calling the wrong conversion for their use case, and wants something like `float` or `decimal.Decimal` instead. In these cases, it's clearly better for them to get an immediate error that they can fix, rather than silently giving the wrong value.
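When truncation really is what you want, the conversion can be spelled out explicitly, with the rounding mode made visible (a sketch; pick whichever rounding suits the use case):

```python
import math

s = "1.23"
x = float(s)           # explicit string -> float conversion
print(int(x))          # 1: truncates toward zero
print(math.floor(x))   # 1
print(math.ceil(x))    # 2
print(round(x))        # 1
```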
In the case that you *do* want to truncate that to an int, it is trivial to explicitly do so by passing it through `float` first, and then calling one of `int`, `round`, `trunc`, `floor` or `ceil` as appropriate. This also makes your code more self-documenting, guarding against a later modification "correcting" a hypothetical silently-truncating `int` call to `float` by making it clear that the rounded value *is* what you want. |
Counterintuitive behaviour of int() in python | 36,085,185 | 81 | 2016-03-18T13:05:59Z | 36,085,637 | 122 | 2016-03-18T13:26:51Z | [
"python"
] | It's clearly stated in the [docs](https://docs.python.org/3.5/library/functions.html#int) that int(number) is a flooring type conversion:
```
int(1.23)
1
```
and int(string) returns an int if and only if the string is an integer literal.
```
int('1.23')
ValueError
int('1')
1
```
Is there any special reason for that? I find it counterintuitive that the function floors in one case, but not the other. | There is no *special* reason. Python is simply applying its general principle of not performing implicit conversions, which are well-known causes of problems, particularly for newcomers, in languages such as Perl and Javascript.
`int(some_string)` is an explicit request to convert a string to integer format; the rules for this conversion specify that the string must contain a valid integer literal representation. `int(float)` is an explicit request to convert a float to an integer; the rules for this conversion specify that the float's fractional portion will be truncated.
In order for `int("3.1459")` to return `3` the interpreter would have to implicitly convert the string to a float. Since Python doesn't support implicit conversions, it chooses to raise an exception instead. |
Counterintuitive behaviour of int() in python | 36,085,185 | 81 | 2016-03-18T13:05:59Z | 36,090,233 | 11 | 2016-03-18T17:03:18Z | [
"python"
] | It's clearly stated in the [docs](https://docs.python.org/3.5/library/functions.html#int) that int(number) is a flooring type conversion:
```
int(1.23)
1
```
and int(string) returns an int if and only if the string is an integer literal.
```
int('1.23')
ValueError
int('1')
1
```
Is there any special reason for that? I find it counterintuitive that the function floors in one case, but not the other. | In simple words - they're not the same function. int( decimal ) and int( string ) are 2 different functions with the *same name* that return an integer.
One is a string-integer-conversion, one is performing floor on a decimal, and they're both called 'int' because it's short and makes sense for each, but there's no implication they are providing the same or combined functionality |
Counterintuitive behaviour of int() in python | 36,085,185 | 81 | 2016-03-18T13:05:59Z | 36,098,510 | 16 | 2016-03-19T06:09:21Z | [
"python"
] | It's clearly stated in the [docs](https://docs.python.org/3.5/library/functions.html#int) that int(number) is a flooring type conversion:
```
int(1.23)
1
```
and int(string) returns an int if and only if the string is an integer literal.
```
int('1.23')
ValueError
int('1')
1
```
Is there any special reason for that? I find it counterintuitive that the function floors in one case, but not the other. | Sometimes a thought experiment can be useful.
* Behavior A: `int('1.23')` fails with an error. This is the existing behavior.
* Behavior B: `int('1.23')` produces `1` without error. This is what you're proposing.
With behavior A, it's straightforward and trivial to get the effect of behavior B: use `int(float('1.23'))` instead.
On the other hand, with behavior B, getting the effect of behavior A is significantly more complicated:
```
def parse_pure_int(s):
if "." in s:
raise ValueError("invalid literal for integer with base 10: " + s)
return int(s)
```
(and even with the code above, I don't have complete confidence that there isn't some corner case that it mishandles.)
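One such corner case, shown concretely (my example, not from the original answer): exponent notation denotes a float yet contains no `.`, so the `"." in s` check alone cannot tell integer strings from float strings.

```python
# '1e-2' is a valid float literal without a decimal point.
assert float("1e-2") == 0.01
assert "." not in "1e-2"
```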
Behavior A therefore is more expressive than behavior B.
Another thing to consider: `'1.23'` is a string representation of a floating-point value. Converting `'1.23'` to an integer conceptually involves two conversions (string to float to integer), but `int(1.23)` and `int('1')` each involve only one conversion.
---
Edit:
And indeed, there are corner cases that the above code would not handle: `1e-2` and `1E-2` are both floating point values too. |
Static behavior of iterators in Python | 36,085,354 | 4 | 2016-03-18T13:13:12Z | 36,085,414 | 7 | 2016-03-18T13:16:15Z | [
"python",
"python-3.x",
"iterator"
] | I am reading [Learning Python by M.Lutz](http://rads.stackoverflow.com/amzn/click/1449355730) and found bizarre block of code:
```
>>> M = map(abs, (-1, 0, 1))
>>> I1 = iter(M); I2 = iter(M)
>>> print(next(I1), next(I1), next(I1))
1 0 1
>>> next(I2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
```
Why when I call `next(I2)` it happens that iteration is already over?
Didn't I create two separate instances of `I1` and `I2`. Why does it behave like an instances of a `static` object? | This has nothing to do with "static" objects, which don't exist in Python.
`iter(M)` does not create a copy of M. Both I1 and I2 are iterators wrapping the same object; in fact, since `M` is already an iterator, calling `iter` on it just returns the underlying object:
```
>>> iter(M)
<map object at 0x1012272b0>
>>> M
<map object at 0x1012272b0>
>>> M is iter(M)
True
``` |
pandas: how to find the most frequent value of each row? | 36,091,902 | 2 | 2016-03-18T18:35:40Z | 36,092,067 | 7 | 2016-03-18T18:43:08Z | [
"python",
"pandas",
"dataframe"
] | how to find the most frequent value of each row of a dataframe?
For example:
```
In [14]: df
Out[14]:
a b c
0 2 3 3
1 1 1 2
2 7 7 8
```
return:
[3,1,7] | Try the [`.mode()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mode.html) method:
```
In [88]: df
Out[88]:
a b c
0 2 3 3
1 1 1 2
2 7 7 8
In [89]: df.mode(axis=1)
Out[89]:
0
0 3
1 1
2 7
``` |
why is 1e400 not an int? | 36,092,203 | 5 | 2016-03-18T18:50:12Z | 36,092,258 | 8 | 2016-03-18T18:53:02Z | [
"python",
"floating-point",
"int",
"scientific-notation"
] | Why is a number in Scientific notation always read as a `float`, and how can i convert a string like '1e400' to an `int` (which is too large for a `float`) ?
```
>>>int('1e400')
ValueError: invalid literal for int() with base 10: '1e400'
>>>int(float('1e400'))
OverflowError: cannot convert float infinity to integer
```
i know, i can make a function like:
```
def strtoint(string):
parts = string.split('e')
if len(parts) == 1:
return int(string)
elif len(parts) == 2:
if int(parts[1])<0:
return int(string)
return int(parts[0])*10**int(parts[1])
else:
return int(string) #raise a error if the string is invalid, but if the variable string is not a string, it may have other way to convert to an `int`
```
But this is not a very pythonic way; is there a better way? | Perhaps you could use `Decimal` as an intermediary type before converting to int.
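For instance, the round trip through `Decimal` is exact; a quick check (my addition):

```python
import decimal

# Decimal parses '1e400' without rounding, so int() recovers
# exactly 10**400.
assert int(decimal.Decimal("1e400")) == 10 ** 400
```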
```
>>> import decimal
>>> decimal.Decimal("1e400")
Decimal('1E+400')
>>> int(decimal.Decimal("1e400"))
10000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
0
``` |
How can I use a 'for' loop for just one variable in a function which depends on two variables? | 36,096,915 | 5 | 2016-03-19T01:47:07Z | 36,097,023 | 38 | 2016-03-19T02:04:15Z | [
"python",
"for-loop"
] | I just want to use the `for` loop for the `t` variable in my function:
```
l = []
def func(s):
for i in range(1, 100):
t = i
p = t * 2 + s * 2
return p
l.append(func(10))
print l
```
I want the value of t to go from 1 to 99 and and print a list of all the value, but I always end up getting `l = [218]`. | I assume you have [NumPy](http://en.wikipedia.org/wiki/NumPy) installed (at least your original question suggested it), so I'll present a way how you can get your result using `numpy-arrays` in a very efficient manner (without any list-comprehensions and explicit iterations):
```
> import numpy as np
> s = 10
> l = np.arange(1, 100) * 2 + s * 2 # arange produces an array. Syntax is like "range".
> print(l)
array([ 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46,
48, 50, 52, 54, 56, 58, 60, 62, 64, 66, 68, 70, 72,
74, 76, 78, 80, 82, 84, 86, 88, 90, 92, 94, 96, 98,
100, 102, 104, 106, 108, 110, 112, 114, 116, 118, 120, 122, 124,
126, 128, 130, 132, 134, 136, 138, 140, 142, 144, 146, 148, 150,
152, 154, 156, 158, 160, 162, 164, 166, 168, 170, 172, 174, 176,
178, 180, 182, 184, 186, 188, 190, 192, 194, 196, 198, 200, 202,
204, 206, 208, 210, 212, 214, 216, 218])
```
I used the fact that mathematical operations on NumPy arrays affect all elements. Therefore it is so easy to write the operation: `np.arange(1, 100) * 2`, because it multiplies every element with `2` and [`numpy.arange`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.arange.html) is a possibility to create an array containing all the numbers between a given `start` and `stop` with an optional `step` (just like the [`python range`](https://docs.python.org/2/library/functions.html#range)).
With NumPy it wouldn't be a good choice to `append` single values (because this recreates the whole array and not just appends values). In most cases it's best to create an array with the final size and shape and operate directly on it. Of course you can [`concatenate`](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.concatenate.html) or [`append`](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.append.html) different NumPy arrays, but as mentioned it always creates a completely new array and is therefore not very efficient (if used regularly).
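For comparison, the same sequence can be produced without NumPy with a plain list comprehension (my addition; the NumPy route above is the vectorised equivalent):

```python
s = 10
l = [i * 2 + s * 2 for i in range(1, 100)]  # 22, 24, ..., 218
assert l[0] == 22 and l[-1] == 218 and len(l) == 99
```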
---
So now a few observations on your initial attempt and why it didn't work: Your function created lots of numbers, but it overrides them in each loop and only returns the last one:
```
def func(s):
for i in range(1, 100):
t = i
p = t * 2 + s * 2
# Next iteration of the loop just overwrites p
# Returns only the last p
return p
```
You could make this a generator (with `yield` instead of `return`), but that's probably overkill here, I'll show it nevertheless :-)
```
l = []
def func(s):
for i in range(1, 100):
p = i * 2 + s * 2
yield p
l.append(list(func(10))) # Need to convert the generator to a list here.
# In this case a simple "l = list(func(10))" would be easier if you don't need to append.
``` |
How to call a function with a dictionary that contains more items than the function has parameters? | 36,102,075 | 27 | 2016-03-19T13:03:24Z | 36,102,152 | 11 | 2016-03-19T13:11:05Z | [
"python",
"dictionary",
"named-parameters",
"kwargs"
] | I am looking for the best way to combine a function with a dictionary *that contains more items than the function's inputs*
basic \*\*kwarg unpacking fails in this case:
```
def foo(a,b):
return a + b
d = {'a':1,
'b':2,
'c':3}
foo(**d)
--> TypeError: foo() got an unexpected keyword argument 'c'
```
After some research I came up with the following approach:
```
import inspect
# utilities
def get_input_names(function):
'''get arguments names from function'''
return inspect.getargspec(function)[0]
def filter_dict(dict_,keys):
return {k:dict_[k] for k in keys}
def combine(function,dict_):
'''combine a function with a dictionary that may contain more items than the function's inputs '''
filtered_dict = filter_dict(dict_,get_input_names(function))
return function(**filtered_dict)
# examples
def foo(a,b):
return a + b
d = {'a':1,
'b':2,
'c':3}
print combine(foo,d)
--> 3
```
My question is: is this a good way of dealing with this problem, or is there a better practice or is there a mechanism in the language that I'm missing perhaps? | Your problem lies with the way you defined your function; it should be defined like this:
```
def foo(**kwargs):
```
And then inside the function you can iterate over the keyword arguments sent to the function like so:
```
for key, value in kwargs.iteritems():  # kwargs is always a dict; use .items() on Python 3
    print(key, value)  # do something with each keyword argument
```
You can find more info about using \*\*kwargs in this post -
<http://pythontips.com/2013/08/04/args-and-kwargs-in-python-explained/> |
How to call a function with a dictionary that contains more items than the function has parameters? | 36,102,075 | 27 | 2016-03-19T13:03:24Z | 36,102,245 | 8 | 2016-03-19T13:19:39Z | [
"python",
"dictionary",
"named-parameters",
"kwargs"
] | I am looking for the best way to combine a function with a dictionary *that contains more items than the function's inputs*
basic \*\*kwarg unpacking fails in this case:
```
def foo(a,b):
return a + b
d = {'a':1,
'b':2,
'c':3}
foo(**d)
--> TypeError: foo() got an unexpected keyword argument 'c'
```
After some research I came up with the following approach:
```
import inspect
# utilities
def get_input_names(function):
'''get arguments names from function'''
return inspect.getargspec(function)[0]
def filter_dict(dict_,keys):
return {k:dict_[k] for k in keys}
def combine(function,dict_):
'''combine a function with a dictionary that may contain more items than the function's inputs '''
filtered_dict = filter_dict(dict_,get_input_names(function))
return function(**filtered_dict)
# examples
def foo(a,b):
return a + b
d = {'a':1,
'b':2,
'c':3}
print combine(foo,d)
--> 3
```
My question is: is this a good way of dealing with this problem, or is there a better practice or is there a mechanism in the language that I'm missing perhaps? | You can also use a [decorator function](https://www.python.org/dev/peps/pep-0318/) to filter out those *keyword arguments* that are not allowed in your function. You can use the [`signature`](https://docs.python.org/3/library/inspect.html#inspect.signature) function, new in Python 3.3, to get your function's [`Signature`](https://docs.python.org/3/library/inspect.html#inspect.Signature):
```
from inspect import signature
from functools import wraps
def decorator(func):
@wraps(func)
def wrapper(*args, **kwargs):
sig = signature(func)
result = func(*[kwargs[param] for param in sig.parameters])
return result
return wrapper
```
Before Python 3.3, you can use [`getargspec`](https://docs.python.org/3/library/inspect.html#inspect.getargspec), which is *deprecated since version 3.0*:
```
import inspect
def decorator(func):
@wraps(func)
def wrapper(*args, **kwargs):
argspec = inspect.getargspec(func).args
result = func(*[kwargs[param] for param in argspec])
return result
return wrapper
```
To apply your decorator to an existing function, pass the function as an argument to the decorator.
Demo:
```
>>> def foo(a, b):
... return a + b
...
>>> foo = decorator(foo)
>>> d = {'a': 1, 'b': 2, 'c': 3}
>>> foo(**d)
3
```
To apply your decorator to a newly defined function, simply use `@`:
```
>>> @decorator
... def foo(a, b):
... return a + b
...
>>> foo(**d)
3
```
---
You can also define your function using arbitrary keywords arguments `**kwargs`.
```
>>> def foo(**kwargs):
... if 'a' in kwargs and 'b' in kwargs:
... return kwargs['a'] + kwargs['b']
...
>>> d = {'a': 1, 'b': 2, 'c': 3}
>>> foo(**d)
3
``` |
How to call a function with a dictionary that contains more items than the function has parameters? | 36,102,075 | 27 | 2016-03-19T13:03:24Z | 36,102,254 | 23 | 2016-03-19T13:20:22Z | [
"python",
"dictionary",
"named-parameters",
"kwargs"
] | I am looking for the best way to combine a function with a dictionary *that contains more items than the function's inputs*
basic \*\*kwarg unpacking fails in this case:
```
def foo(a,b):
return a + b
d = {'a':1,
'b':2,
'c':3}
foo(**d)
--> TypeError: foo() got an unexpected keyword argument 'c'
```
After some research I came up with the following approach:
```
import inspect
# utilities
def get_input_names(function):
'''get arguments names from function'''
return inspect.getargspec(function)[0]
def filter_dict(dict_,keys):
return {k:dict_[k] for k in keys}
def combine(function,dict_):
'''combine a function with a dictionary that may contain more items than the function's inputs '''
filtered_dict = filter_dict(dict_,get_input_names(function))
return function(**filtered_dict)
# examples
def foo(a,b):
return a + b
d = {'a':1,
'b':2,
'c':3}
print combine(foo,d)
--> 3
```
My question is: is this a good way of dealing with this problem, or is there a better practice or is there a mechanism in the language that I'm missing perhaps? | How about *making a [decorator](http://thecodeship.com/patterns/guide-to-python-function-decorators/)* that would *filter allowed keyword arguments only*:
```
import inspect
def get_input_names(function):
'''get arguments names from function'''
return inspect.getargspec(function)[0]
def filter_dict(dict_,keys):
return {k:dict_[k] for k in keys}
def filter_kwargs(func):
def func_wrapper(**kwargs):
return func(**filter_dict(kwargs, get_input_names(func)))
return func_wrapper
@filter_kwargs
def foo(a,b):
return a + b
d = {'a':1,
'b':2,
'c':3}
print(foo(**d))
```
What is nice about this decorator is that it is generic and reusable. And you would not need to change the way you call and use your target functions. |
How to call a function with a dictionary that contains more items than the function has parameters? | 36,102,075 | 27 | 2016-03-19T13:03:24Z | 36,105,034 | 14 | 2016-03-19T17:45:43Z | [
"python",
"dictionary",
"named-parameters",
"kwargs"
] | I am looking for the best way to combine a function with a dictionary *that contains more items than the function's inputs*
basic \*\*kwarg unpacking fails in this case:
```
def foo(a,b):
return a + b
d = {'a':1,
'b':2,
'c':3}
foo(**d)
--> TypeError: foo() got an unexpected keyword argument 'c'
```
After some research I came up with the following approach:
```
import inspect
# utilities
def get_input_names(function):
'''get arguments names from function'''
return inspect.getargspec(function)[0]
def filter_dict(dict_,keys):
return {k:dict_[k] for k in keys}
def combine(function,dict_):
'''combine a function with a dictionary that may contain more items than the function's inputs '''
filtered_dict = filter_dict(dict_,get_input_names(function))
return function(**filtered_dict)
# examples
def foo(a,b):
return a + b
d = {'a':1,
'b':2,
'c':3}
print combine(foo,d)
--> 3
```
My question is: is this a good way of dealing with this problem, or is there a better practice or is there a mechanism in the language that I'm missing perhaps? | All of these answers are wrong.
It is not possible to do what you are asking, because the function might be declared like this:
```
def foo(**kwargs):
a = kwargs.pop('a')
b = kwargs.pop('b')
if kwargs:
raise TypeError('Unexpected arguments: %r' % kwargs)
```
Now, why on earth would anyone write that?
Because they don't know all of the arguments ahead of time. Here's a more realistic case:
```
def __init__(self, **kwargs):
for name in self.known_arguments():
value = kwargs.pop(name, default)
self.do_something(name, value)
super().__init__(**kwargs) # The superclass does not take any arguments
```
And [here](https://bitbucket.org/NYKevin/nbtparse/src/276f94bc309addbd81b2d87d94b273eac5e6dbf7/nbtparse/semantics/nbtobject.py?at=%40&fileviewer=file-view-default#nbtobject.py-140) is some real-world code which actually does this.
You might ask why we need the last line. Why pass arguments to a superclass that doesn't take any? [Cooperative multiple inheritance](https://rhettinger.wordpress.com/2011/05/26/super-considered-super/). If my class gets an argument it does not recognize, it should not swallow that argument, nor should it error out. It should pass the argument up the chain so that another class I might not know about can handle it. And if nobody handles it, then `object.__init__()` will provide an appropriate error message. Unfortunately, the other answers will not handle that gracefully. They will see `**kwargs` and either pass no arguments or pass all of them, which are both incorrect.
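To see the failure mode concretely, here is a minimal sketch (my illustration): for a variadic function like the one above, `inspect` only reports the `**kwargs` catch-all, so signature-based filtering has nothing useful to match against.

```python
import inspect

def foo(**kwargs):
    a = kwargs.pop('a')
    b = kwargs.pop('b')
    if kwargs:
        raise TypeError('Unexpected arguments: %r' % kwargs)
    return a + b

# The only "parameter" inspect sees is the catch-all itself.
params = inspect.signature(foo).parameters
assert list(params) == ['kwargs']  # no 'a', no 'b'
assert params['kwargs'].kind is inspect.Parameter.VAR_KEYWORD
```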
**The bottom line**: There is no general way to discover whether a function call is legal without actually making that function call. `inspect` is a crude approximation, and entirely falls apart in the face of variadic functions. Variadic does not mean "pass whatever you like"; it means "the rules are too complex to express in a signature." As a result, while it may be possible in many cases to do what you're trying to do, there will always be situations where there is no correct answer. |
Python Pandas Dataframe: Normalize data between 0.01 and 0.99? | 36,102,348 | 2 | 2016-03-19T13:28:39Z | 36,102,391 | 9 | 2016-03-19T13:32:08Z | [
"python",
"pandas",
"dataframe",
"normalization"
] | I am trying to bound every value in a dataframe between 0.01 and 0.99
I have successfully normalised the data between 0 and 1 using: `.apply(lambda x: (x - x.min()) / (x.max() - x.min()))` as follows:
```
df = pd.DataFrame({'one' : ['AAL', 'AAL', 'AAPL', 'AAPL'], 'two' : [1, 1, 5, 5], 'three' : [4,4,2,2]})
df[['two', 'three']].apply(lambda x: (x - x.min()) / (x.max() - x.min()))
df
```
Now I want to bound all values between 0.01 and 0.99
This is what I have tried:
```
def bound_x(x):
if x == 1:
return x - 0.01
elif x < 0.99:
return x + 0.01
df[['two', 'three']].apply(bound_x)
```
df
But I receive the following error:
```
ValueError: ('The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().', u'occurred at index two')
``` | There's an app, err [clip method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.clip.html), for that:
```
import pandas as pd
df = pd.DataFrame({'one' : ['AAL', 'AAL', 'AAPL', 'AAPL'], 'two' : [1, 1, 5, 5], 'three' : [4,4,2,2]})
df = df[['two', 'three']].apply(lambda x: (x - x.min()) / (x.max() - x.min()))
df = df.clip(lower=0.01, upper=0.99)
```
yields
```
two three
0 0.01 0.99
1 0.01 0.99
2 0.99 0.01
3 0.99 0.01
```
---
The problem with
```
df[['two', 'three']].apply(bound_x)
```
is that `bound_x` gets passed a Series like `df['two']` and then `if x == 1` requires `x == 1` be *evaluated in a boolean context*. `x == 1` is a boolean Series like
```
In [44]: df['two'] == 1
Out[44]:
0 False
1 False
2 True
3 True
Name: two, dtype: bool
```
Python tries to reduce this Series to a single boolean value, `True` or `False`. Pandas follows the NumPy convention of [raising an error when you try to convert a Series (or array) to a bool](http://pandas.pydata.org/pandas-docs/stable/gotchas.html#using-if-truth-statements-with-pandas). |
Are simple hardcoded arithmetics cached/compiled away? | 36,105,978 | 4 | 2016-03-19T19:11:46Z | 36,106,000 | 7 | 2016-03-19T19:13:24Z | [
"python",
"python-internals"
] | I would like to know whether Python caches / compiles away simple arithmetic like `5*5+5` in its .pyc files.
Sometimes I like to write `if seconds > 24*60*60` for a day for example. I know that the effect on performance is unnoticeable but I'm curious nevertheless. | Yes, CPython (the default implementation of Python) uses a [peephole optimiser](https://en.wikipedia.org/wiki/Peephole_optimization) to collapse such expressions into one number; this is called [*constant folding*](https://en.wikipedia.org/wiki/Constant_folding).
You can check for this using the [`dis` disassembler](https://docs.python.org/2/library/dis.html):
```
>>> import dis
>>> def foo():
... if seconds > 24*60*60:
... pass
...
>>> dis.dis(foo)
2 0 LOAD_GLOBAL 0 (seconds)
3 LOAD_CONST 4 (86400)
6 COMPARE_OP 4 (>)
9 POP_JUMP_IF_FALSE 15
3 12 JUMP_FORWARD 0 (to 15)
>> 15 LOAD_CONST 0 (None)
18 RETURN_VALUE
```
Note the `LOAD_CONST` instruction at offset 3; it loads the final result of the `24*60*60` expression, the expression itself is gone from the bytecode.
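The folding can also be observed without the disassembler: in CPython the collapsed value lands directly in the compiled code object's constant pool (a quick check; this is a CPython implementation detail):

```python
# 24*60*60 is folded to 86400 at compile time, so 86400 appears
# directly among the code object's constants.
code = compile("seconds > 24*60*60", "<example>", "eval")
assert 86400 in code.co_consts
```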
See the [`fold_binops_on_constants` function in the `peephole.c` file](https://hg.python.org/cpython/file/v2.7.11/Python/peephole.c#l77). |
How can I limit iterations of a loop in Python? | 36,106,712 | 4 | 2016-03-19T20:22:48Z | 36,106,713 | 8 | 2016-03-19T20:22:48Z | [
"python",
"enumerate"
] | Say I have a list of items, and I want to iterate over the first few of it:
```
items = list(range(10)) # I mean this to represent any kind of iterable.
limit = 5
```
## Naive implementation
The Python naïf coming from other languages would probably write this perfectly serviceable and performant (if unidiomatic) code:
```
index = 0
for item in items: # Python's `for` loop is a for-each.
print(item) # or whatever function of that item.
index += 1
if index == limit:
break
```
## More idiomatic implementation
But Python has enumerate, which subsumes about half of that code nicely:
```
for index, item in enumerate(items):
print(item)
if index == limit: # There's gotta be a better way.
break
```
So we've about cut the extra code in half. But there's gotta be a better way.
# Can we approximate the below pseudocode behavior?
If enumerate took another optional `stop` argument (for example, it takes a `start` argument like this: `enumerate(items, start=1)`) that would, I think, be ideal, but the below doesn't exist (see the [documentation on enumerate here](https://docs.python.org/3/library/functions.html#enumerate)):
```
# hypothetical code, not implemented:
for _, item in enumerate(items, start=0, stop=limit): # `stop` not implemented
print(item)
```
Note that there would be no need to name the `index` because there is no need to reference it.
Is there an idiomatic way to write the above? How?
A secondary question: why isn't this built into enumerate? | > # How can I limit iterations of a loop in Python?
>
> ```
> for index, item in enumerate(items):
> print(item)
> if index == limit:
> break
> ```
>
> Is there a shorter, idiomatic way to write the above? How?
## Including the index
`zip` stops on the shortest iterable of its arguments. (In contrast with the behavior of `zip_longest`, which uses the longest iterable.)
`range` can provide a limited iterable that we can pass to zip along with our primary iterable.
So we can pass a `range` object (with its `stop` argument) to `zip` and use it like a limited enumerate.
## `zip(range(limit), items)`
Using Python 3, `zip` and `range` return iterables, which pipeline the data instead of materializing the data in lists for intermediate steps.
```
for _, item in zip(range(limit), items):
print(item)
```
To get the same behavior in Python 2, just substitute `xrange` for `range` and `itertools.izip` for `zip`.
```
from itertools import izip
for index, item in izip(xrange(limit), items):
print(item)
```
## If not requiring the index, `itertools.islice`
You can use `itertools.islice`:
```
for item in itertools.islice(items, 0, stop):
print(item)
```
which doesn't require assigning to the index.
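For example, with the sample data from the question:

```python
from itertools import islice

items = list(range(10))
limit = 5
# islice stops after `limit` items; no index bookkeeping needed.
assert list(islice(items, limit)) == [0, 1, 2, 3, 4]
```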
> # Why isn't this built into `enumerate`?
Here's enumerate implemented in pure Python (with possible modifications to get the desired behavior in comments):
```
def enumerate(collection, start=0): # could add stop=None
i = start
it = iter(collection)
while 1: # could modify to `while i != stop:`
yield (i, next(it))
i += 1
```
The above would be less performant for those already using enumerate, because it would have to check on every iteration whether it is time to stop. We can just check and use the old enumerate if we don't get a stop argument:
```
_enumerate = enumerate
def enumerate(collection, start=0, stop=None):
if stop is not None:
return zip(range(start, stop), collection)
return _enumerate(collection, start)
```
This extra check would have a slight negligible performance impact.
As to *why* enumerate does not have a stop argument, this was originally proposed (see [PEP 279](https://www.python.org/dev/peps/pep-0279/)):
> This function was originally proposed with optional start
> and stop arguments. GvR [Guido van Rossum] pointed out that the function call
> `enumerate(seqn, 4, 6)` had an alternate, plausible interpretation as
> a slice that would return the fourth and fifth elements of the
> sequence. To avoid the ambiguity, the optional arguments were
> dropped even though it meant losing flexibility as a loop counter.
> That flexibility was most important for the common case of
> counting from one, as in:
>
> ```
> for linenum, line in enumerate(source,1): print linenum, line
> ```
So apparently `start` was kept because it was very valuable, and `stop` was dropped because it had fewer use-cases and contributed to confusion on the usage of the new function.
# Conclusion
I would presume that now that the Python community knows the usage of `enumerate`, the confusion costs would be outweighed by the value of the argument.
Until that time, you can use:
```
for index, element in zip(range(limit), items):
...
```
or, if you don't need the index at all:
```
for element in islice(items, 0, limit):
...
``` |
Why is a double semicolon a SyntaxError in Python? | 36,111,915 | 64 | 2016-03-20T09:07:56Z | 36,111,947 | 101 | 2016-03-20T09:12:36Z | [
"python",
"syntax-error",
"language-lawyer"
] | I know that semicolons are unnecessary in Python, but they can be used to cram multiple statements onto a single line, e.g.
```
>>> x = 42; y = 54
```
I always thought that a semicolon was equivalent to a line break. So I was a bit surprised to learn (h/t [Ned Batchelder on Twitter](https://twitter.com/nedbat/status/709326204051005440)) that a double semicolon is a SyntaxError:
```
>>> x = 42
>>> x = 42;
>>> x = 42;;
File "<stdin>", line 1
x = 42;;
^
SyntaxError: invalid syntax
```
I assumed the last program was equivalent to `x = 42\n\n`. I'd have thought the statement between the semicolons was treated as an empty line, a no-op. Apparently not.
**Why is this an error?** | From the Python grammar, we can see that `;` is not defined as `\n`. The parser expects another statement after a `;`, except if there's a newline after it:
```
Semicolon w/ statement Maybe a semicolon Newline
\/ \/ \/ \/
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
```
That's why `x=42;;` doesn't work: there isn't a statement between the two semicolons, and "nothing" isn't a statement. If there were any complete statement between them, like a `pass` or even just a `0`, the code would work.
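This can be checked with `compile`, which applies the same grammar rule (a quick sketch):

```python
# A small statement between semicolons parses; an empty one does not.
compile("x = 42;0;", "<example>", "exec")  # no error
try:
    compile("x = 42;;", "<example>", "exec")
except SyntaxError:
    pass
else:
    raise AssertionError("expected SyntaxError")
```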
```
x = 42;0; # Fine
x = 42;pass; # Fine
x = 42;; # Syntax error
if x == 42:; print("Yes") # Syntax error - "if x == 42:" isn't a complete statement
``` |
Why is a double semicolon a SyntaxError in Python? | 36,111,915 | 64 | 2016-03-20T09:07:56Z | 36,111,954 | 23 | 2016-03-20T09:12:52Z | [
"python",
"syntax-error",
"language-lawyer"
] | I know that semicolons are unnecessary in Python, but they can be used to cram multiple statements onto a single line, e.g.
```
>>> x = 42; y = 54
```
I always thought that a semicolon was equivalent to a line break. So I was a bit surprised to learn (h/t [Ned Batchelder on Twitter](https://twitter.com/nedbat/status/709326204051005440)) that a double semicolon is a SyntaxError:
```
>>> x = 42
>>> x = 42;
>>> x = 42;;
File "<stdin>", line 1
x = 42;;
^
SyntaxError: invalid syntax
```
I assumed the last program was equivalent to `x = 42\n\n`. I'd have thought the statement between the semicolons was treated as an empty line, a no-op. Apparently not.
**Why is this an error?** | An empty statement still needs `pass`, even if you have a semicolon.
```
>>> x = 42;pass;
>>> x
42
``` |
Make IPython Import What I Mean | 36,112,275 | 4 | 2016-03-20T09:50:26Z | 36,116,171 | 10 | 2016-03-20T16:12:51Z | [
"python",
"ipython",
"ipython-magic"
] | I want to modify how IPython handles import errors by default. When I prototype something in the IPython shell, I usually forget to first import `os`, `re` or whatever I need. The first few statements often follow this pattern:
```
In [1]: os.path.exists("~/myfile.txt")
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-1-0ffb6014a804> in <module>()
----> 1 os.path.exists("~/myfile.txt")
NameError: name 'os' is not defined
In [2]: import os
In [3]: os.path.exists("~/myfile.txt")
Out[3]: False
```
Sure, that's my fault for having bad habits and,
sure, in a script or real program that makes sense,
but in the shell I'd rather that IPython follow the
DWIM principle, by at least *trying* to import what I am trying to use.
```
In [1]: os.path.exists("~/myfile.txt")
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-1-0ffb6014a804> in <module>()
----> 1 os.path.exists("~/myfile.txt")
NameError: name 'os' is not defined
Catching this for you and trying to import "os" … success!
Retrying …
---------------------------------------------------------------------------
Out[1]: False
```
If this is not possible with a vanilla IPython, what would I have to do
to make this work? Is a [wrapper kernel](http://ipython.readthedocs.org/en/stable/development/wrapperkernels.html) the easiest way forward? Or should this be implemented directly in the core, with a magic command?
Note, this is different from [those kind of question](http://stackoverflow.com/questions/11124578/automatically-import-modules-when-entering-the-python-or-ipython-interpreter#11124610) where someone wants to always load pre-defined modules. I don't. Cuz I don't know what I will be working on, and I don't want to load *everything* (nor do I want to keep the list of *everything* updated. | **NOTE:** This is now being maintained [on Github](https://github.com/OrangeFlash81/ipython-auto-import/tree/master). Download the latest version of the script from there!
I developed a script that binds to IPython's exception handling through `set_custom_exc`. If there's a `NameError`, it uses a regex to find what module you tried to use and then attempts to import it. It then runs the function you tried to call again. Here is the code:
```
import sys, IPython, colorama # <-- colorama must be "pip install"-ed
colorama.init()
def custom_exc(shell, etype, evalue, tb, tb_offset=None):
pre = colorama.Fore.CYAN + colorama.Style.BRIGHT + "AutoImport: " + colorama.Style.NORMAL + colorama.Fore.WHITE
if etype == NameError:
shell.showtraceback((etype, evalue, tb), tb_offset) # Show the normal traceback
import re
try:
# Get the name of the module you tried to import
results = re.match("name '(.*)' is not defined", str(evalue))
name = results.group(1)
try:
__import__(name)
except:
print(pre + "{} isn't a module".format(name))
return
# Import the module
IPython.get_ipython().run_code("import {}".format(name))
print(pre + "Imported referenced module \"{}\", will retry".format(name))
except Exception as e:
            print(pre + "Attempted to import \"{}\" but an exception occurred".format(name))
try:
# Run the failed line again
res = IPython.get_ipython().run_cell(list(get_ipython().history_manager.get_range())[-1][-1])
except Exception as e:
print(pre + "Another exception occured while retrying")
shell.showtraceback((type(e), e, None), None)
else:
shell.showtraceback((etype, evalue, tb), tb_offset=tb_offset)
# Bind the function we created to IPython's exception handler
IPython.get_ipython().set_custom_exc((Exception,), custom_exc)
```
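The `NameError`-parsing step at the heart of the script can be exercised on its own — a minimal sketch (the undefined name `numpy` below is just an example stand-in for any module you forgot to import):

```python
import re

try:
    numpy  # deliberately undefined -- triggers the same NameError the script handles
except NameError as exc:
    # The same regex the script uses to pull the missing name out of the message
    extracted = re.match("name '(.*)' is not defined", str(exc)).group(1)

print(extracted)  # -> numpy
```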
You can make this run automatically when you start an IPython prompt by saving it somewhere and then setting an environment variable called `PYTHONSTARTUP` to the path to this file. You set environment variables differently depending on your OS (remember to alter the paths):
* Windows: `setx PYTHONSTARTUP C:\startup.py` in command prompt
* Mac/Linux (bash): Put `export PYTHONSTARTUP=$HOME/startup.py` into your `~/.bashrc`
Here's a demo of the script in action:
 |
How to loop over a tedious if statement | 36,116,988 | 2 | 2016-03-20T17:25:03Z | 36,117,020 | 10 | 2016-03-20T17:28:13Z | [
"python",
"music",
"translate"
] | I'm currently trying to make a program that takes sheet music for violin and translates the given notes into a position on a string, but my problem is that when I ask if a key is sharp or flat, and how many sharps or flats are in that key signature I find that I'm making a bunch of tedious if/then statements such as:
```
if keysig == sharp and signum == 2:
    note['LE'] = 'D4'
    note['SC'] = 'A4'
elif keysig == sharp and signum == 3:
    note['LE'] = 'D5'
    note['SC'] = 'G2'
```
you'll see that it becomes super annoying to go back and make another statement with slight changes. How can I fix this?
For those of you that don't read sheet music:
Basically what I'm trying to say is that this algorithm will take notes that the user inputs such as A, G, or D and turn them into exact locations on the violin finger board so you won't have to search for the notes manually. This program is intended for violin beginners that lack experience in reading sheet music. | You could use a dictionary:
```
transpositions = {
    (sharp, 2): {'LE': 'D4', 'SC': 'A4'},
    (sharp, 3): {'LE': 'D5', 'SC': 'G2'},
    # etc.
}

note.update(transpositions.get((keysig, signum), {}))
```
This uses a tuple of `(keysig, signum)` as the key, mapping to specific note transpositions. If no such signature is found, no updates are made (updating with an empty dictionary). |
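A runnable sketch of the lookup — here `sharp` is assumed to be a plain marker string, since the question never shows how it is defined:

```python
sharp = 'sharp'  # assumption: the question's `sharp` marker, modelled as a string

transpositions = {
    (sharp, 2): {'LE': 'D4', 'SC': 'A4'},
    (sharp, 3): {'LE': 'D5', 'SC': 'G2'},
}

note = {}
keysig, signum = sharp, 3
# One dict lookup replaces the whole if/elif ladder
note.update(transpositions.get((keysig, signum), {}))
print(note)  # -> {'LE': 'D5', 'SC': 'G2'}
```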
Pythonic way to avoid "if x: return x" statements | 36,117,583 | 201 | 2016-03-20T18:11:44Z | 36,117,603 | 254 | 2016-03-20T18:13:00Z | [
"python"
] | I have a method that calls 4 other methods in sequence to check for specific conditions, and returns immediately (not checking the following ones) whenever one returns something Truthy.
```
def check_all_conditions():
    x = check_size()
    if x:
        return x
    x = check_color()
    if x:
        return x
    x = check_tone()
    if x:
        return x
    x = check_flavor()
    if x:
        return x
    return None
```
This seems like a lot of baggage code. Instead of each 2-line if statement, I'd rather do something like:
```
x and return x
```
But that is invalid Python. Am I missing a simple, elegant solution here? Incidentally, in this situation, those four check methods may be expensive, so I do not want to call them multiple times. | You could use a loop:
```
conditions = (check_size, check_color, check_tone, check_flavor)
for condition in conditions:
    result = condition()
    if result:
        return result
```
This has the added advantage that you can now make the number of conditions variable.
You could use [`map()`](https://docs.python.org/3/library/functions.html#map) + [`filter()`](https://docs.python.org/3/library/functions.html#filter) (the Python 3 versions, use the [`future_builtins` versions](https://docs.python.org/2/library/future_builtins.html) in Python 2) to get the first such matching value:
```
try:
    # Python 2
    from future_builtins import map, filter
except ImportError:
    # Python 3
    pass
conditions = (check_size, check_color, check_tone, check_flavor)
return next(filter(None, map(lambda f: f(), conditions)), None)
```
but whether this is more readable is debatable.
Another option is to use a generator expression:
```
conditions = (check_size, check_color, check_tone, check_flavor)
checks = (condition() for condition in conditions)
return next((check for check in checks if check), None)
``` |
Pythonic way to avoid "if x: return x" statements | 36,117,583 | 201 | 2016-03-20T18:11:44Z | 36,117,720 | 368 | 2016-03-20T18:22:56Z | [
"python"
] | I have a method that calls 4 other methods in sequence to check for specific conditions, and returns immediately (not checking the following ones) whenever one returns something Truthy.
```
def check_all_conditions():
    x = check_size()
    if x:
        return x
    x = check_color()
    if x:
        return x
    x = check_tone()
    if x:
        return x
    x = check_flavor()
    if x:
        return x
    return None
```
This seems like a lot of baggage code. Instead of each 2-line if statement, I'd rather do something like:
```
x and return x
```
But that is invalid Python. Am I missing a simple, elegant solution here? Incidentally, in this situation, those four check methods may be expensive, so I do not want to call them multiple times. | Alternatively to Martijn's fine answer, you could chain `or`. This will return the first truthy value, or `None` if there's no truthy value:
```
def check_all_conditions():
    return check_size() or check_color() or check_tone() or check_flavor() or None
```
Demo:
```
>>> x = [] or 0 or {} or -1 or None
>>> x
-1
>>> x = [] or 0 or {} or '' or None
>>> x is None
True
``` |
Pythonic way to avoid "if x: return x" statements | 36,117,583 | 201 | 2016-03-20T18:11:44Z | 36,121,285 | 17 | 2016-03-21T00:14:15Z | [
"python"
] | I have a method that calls 4 other methods in sequence to check for specific conditions, and returns immediately (not checking the following ones) whenever one returns something Truthy.
```
def check_all_conditions():
    x = check_size()
    if x:
        return x
    x = check_color()
    if x:
        return x
    x = check_tone()
    if x:
        return x
    x = check_flavor()
    if x:
        return x
    return None
```
This seems like a lot of baggage code. Instead of each 2-line if statement, I'd rather do something like:
```
x and return x
```
But that is invalid Python. Am I missing a simple, elegant solution here? Incidentally, in this situation, those four check methods may be expensive, so I do not want to call them multiple times. | If you want the same code structure, you could use ternary statements!
```
def check_all_conditions():
    x = check_size()
    x = x if x else check_color()
    x = x if x else check_tone()
    x = x if x else check_flavor()
    return x if x else None
```
I think this looks nice and clear if you look at it.
Demo:
 |
Pythonic way to avoid "if x: return x" statements | 36,117,583 | 201 | 2016-03-20T18:11:44Z | 36,126,903 | 38 | 2016-03-21T09:14:00Z | [
"python"
] | I have a method that calls 4 other methods in sequence to check for specific conditions, and returns immediately (not checking the following ones) whenever one returns something Truthy.
```
def check_all_conditions():
    x = check_size()
    if x:
        return x
    x = check_color()
    if x:
        return x
    x = check_tone()
    if x:
        return x
    x = check_flavor()
    if x:
        return x
    return None
```
This seems like a lot of baggage code. Instead of each 2-line if statement, I'd rather do something like:
```
x and return x
```
But that is invalid Python. Am I missing a simple, elegant solution here? Incidentally, in this situation, those four check methods may be expensive, so I do not want to call them multiple times. | This is a variant of Martijn's first example. It also uses the "collection of callables"-style in order to allow short-circuiting.
Instead of a loop you can use the builtin `any`.
```
conditions = (check_size, check_color, check_tone, check_flavor)
return any(condition() for condition in conditions)
```
Note that `any` returns a boolean, so if you need the exact return value of the check, this solution will not work. `any` will not distinguish between `14`, `'red'`, `'sharp'`, `'spicy'` as return values, they will all be returned as `True`. |
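To see that difference, a small sketch with invented stand-in check functions (the four stubs are not from the question):

```python
def check_size():   return 0        # falsy
def check_color():  return 'red'    # the first truthy result
def check_tone():   return 'sharp'
def check_flavor(): return 'spicy'

conditions = (check_size, check_color, check_tone, check_flavor)

# any() collapses everything to a boolean...
as_bool = any(condition() for condition in conditions)

# ...while the loop keeps the actual value of the first truthy check.
as_value = None
for condition in conditions:
    result = condition()
    if result:
        as_value = result
        break

print(as_bool, as_value)  # -> True red
```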
Pythonic way to avoid "if x: return x" statements | 36,117,583 | 201 | 2016-03-20T18:11:44Z | 36,129,322 | 78 | 2016-03-21T11:03:04Z | [
"python"
] | I have a method that calls 4 other methods in sequence to check for specific conditions, and returns immediately (not checking the following ones) whenever one returns something Truthy.
```
def check_all_conditions():
    x = check_size()
    if x:
        return x
    x = check_color()
    if x:
        return x
    x = check_tone()
    if x:
        return x
    x = check_flavor()
    if x:
        return x
    return None
```
This seems like a lot of baggage code. Instead of each 2-line if statement, I'd rather do something like:
```
x and return x
```
But that is invalid Python. Am I missing a simple, elegant solution here? Incidentally, in this situation, those four check methods may be expensive, so I do not want to call them multiple times. | **Don't change it**
There are other ways of doing this as the various other answers show. None are as clear as your original code. |
Pythonic way to avoid "if x: return x" statements | 36,117,583 | 201 | 2016-03-20T18:11:44Z | 36,140,014 | 67 | 2016-03-21T19:35:05Z | [
"python"
] | I have a method that calls 4 other methods in sequence to check for specific conditions, and returns immediately (not checking the following ones) whenever one returns something Truthy.
```
def check_all_conditions():
    x = check_size()
    if x:
        return x
    x = check_color()
    if x:
        return x
    x = check_tone()
    if x:
        return x
    x = check_flavor()
    if x:
        return x
    return None
```
This seems like a lot of baggage code. Instead of each 2-line if statement, I'd rather do something like:
```
x and return x
```
But that is invalid Python. Am I missing a simple, elegant solution here? Incidentally, in this situation, those four check methods may be expensive, so I do not want to call them multiple times. | According to [Curly's law](http://blog.codinghorror.com/curlys-law-do-one-thing/), you can make this code more readable by splitting two concerns:
* What things do I check?
* Has one thing returned true?
into two functions:
```
def all_conditions():
    yield check_size()
    yield check_color()
    yield check_tone()
    yield check_flavor()

def check_all_conditions():
    for condition in all_conditions():
        if condition:
            return condition
    return None
```
This avoids:
* complicated logical structures
* really long lines
* repetition
...while preserving a linear, easy to read flow.
You can probably also come up with even better function names, according to your particular circumstance, which make it even more readable. |
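Because `all_conditions()` is a generator, the checks run lazily — anything after the first truthy check is never called. A sketch with invented stub checks that record when they run:

```python
calls = []

def check_size():
    calls.append('size')
    return None

def check_color():
    calls.append('color')
    return 'red'

def check_tone():
    calls.append('tone')
    return 'sharp'

def check_flavor():
    calls.append('flavor')
    return 'spicy'

def all_conditions():
    yield check_size()
    yield check_color()
    yield check_tone()
    yield check_flavor()

def check_all_conditions():
    # Equivalent one-liner over the lazy generator
    return next((condition for condition in all_conditions() if condition), None)

result = check_all_conditions()
print(result, calls)  # -> red ['size', 'color']
```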
Pythonic way to avoid "if x: return x" statements | 36,117,583 | 201 | 2016-03-20T18:11:44Z | 36,141,317 | 21 | 2016-03-21T20:49:46Z | [
"python"
] | I have a method that calls 4 other methods in sequence to check for specific conditions, and returns immediately (not checking the following ones) whenever one returns something Truthy.
```
def check_all_conditions():
    x = check_size()
    if x:
        return x
    x = check_color()
    if x:
        return x
    x = check_tone()
    if x:
        return x
    x = check_flavor()
    if x:
        return x
    return None
```
This seems like a lot of baggage code. Instead of each 2-line if statement, I'd rather do something like:
```
x and return x
```
But that is invalid Python. Am I missing a simple, elegant solution here? Incidentally, in this situation, those four check methods may be expensive, so I do not want to call them multiple times. | I'm quite surprised nobody mentioned the built-in [`any`](https://docs.python.org/2/library/functions.html#any) which is made for this purpose:
```
def check_all_conditions():
    return any([
        check_size(),
        check_color(),
        check_tone(),
        check_flavor()
    ])
```
Note that although this implementation is probably the clearest, it evaluates all the checks even if the first one is `True`.
---
If you really need the evaluation to stop at the first successful check, consider using [`reduce`](https://docs.python.org/2/library/functions.html#reduce), which is made to convert a list to a simple value:
```
def check_all_conditions():
    checks = [check_size, check_color, check_tone, check_flavor]
    return reduce(lambda a, f: a or f(), checks, False)
```
> `reduce(function, iterable[, initializer])` : Apply function of two
> arguments cumulatively to the items of iterable, from left to right,
> so as to reduce the iterable to a single value. The left argument, x,
> is the accumulated value and the right argument, y, is the update
> value from the iterable. If the optional initializer is present, it is
> placed before the items of the iterable in the calculation
In your case:
* `lambda a, f: a or f()` is the function that checks that either the accumulator `a` or the current check `f()` is `True`. Note that if `a` is `True`, `f()` won't be evaluated.
* `checks` contains check functions (the `f` item from the lambda)
* `False` is the initial value, otherwise no check would happen and the result would always be `True`
`any` and `reduce` are basic tools for functional programming. I strongly encourage you to try these out, as well as [`map`](https://docs.python.org/2/library/functions.html#map), which is awesome too! |
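The short-circuiting inside the lambda can be observed with stand-in checks that record when they run (the stubs are invented for the demo; `functools.reduce` is used so the sketch also runs on Python 3):

```python
from functools import reduce  # built-in on Python 2, moved to functools in Python 3

calls = []

def make_check(name, result):
    def check():
        calls.append(name)
        return result
    return check

checks = [make_check('size', False), make_check('color', 'red'),
          make_check('tone', 'sharp'), make_check('flavor', 'spicy')]

# Once the accumulator is truthy, `a or f()` never calls f()
result = reduce(lambda a, f: a or f(), checks, False)
print(result, calls)  # -> red ['size', 'color']
```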
Pythonic way to avoid "if x: return x" statements | 36,117,583 | 201 | 2016-03-20T18:11:44Z | 36,142,616 | 23 | 2016-03-21T22:12:00Z | [
"python"
] | I have a method that calls 4 other methods in sequence to check for specific conditions, and returns immediately (not checking the following ones) whenever one returns something Truthy.
```
def check_all_conditions():
    x = check_size()
    if x:
        return x
    x = check_color()
    if x:
        return x
    x = check_tone()
    if x:
        return x
    x = check_flavor()
    if x:
        return x
    return None
```
This seems like a lot of baggage code. Instead of each 2-line if statement, I'd rather do something like:
```
x and return x
```
But that is invalid Python. Am I missing a simple, elegant solution here? Incidentally, in this situation, those four check methods may be expensive, so I do not want to call them multiple times. | Have you considered just writing `if x: return x` all on one line?
```
def check_all_conditions():
    x = check_size()
    if x: return x
    x = check_color()
    if x: return x
    x = check_tone()
    if x: return x
    x = check_flavor()
    if x: return x
    return None
```
This isn't any less *repetitive* than what you had, but IMNSHO it reads quite a bit smoother. |
Pythonic way to avoid "if x: return x" statements | 36,117,583 | 201 | 2016-03-20T18:11:44Z | 36,161,683 | 64 | 2016-03-22T17:48:31Z | [
"python"
] | I have a method that calls 4 other methods in sequence to check for specific conditions, and returns immediately (not checking the following ones) whenever one returns something Truthy.
```
def check_all_conditions():
    x = check_size()
    if x:
        return x
    x = check_color()
    if x:
        return x
    x = check_tone()
    if x:
        return x
    x = check_flavor()
    if x:
        return x
    return None
```
This seems like a lot of baggage code. Instead of each 2-line if statement, I'd rather do something like:
```
x and return x
```
But that is invalid Python. Am I missing a simple, elegant solution here? Incidentally, in this situation, those four check methods may be expensive, so I do not want to call them multiple times. | This is effectively the same answer as timgeb's, but you could use parentheses for nicer formatting:
```
def check_all_the_things():
    return (
        one()
        or two()
        or five()
        or three()
        or None
    )
``` |
Error: 'conda' can only be installed into the root environment | 36,117,904 | 5 | 2016-03-20T18:40:29Z | 36,298,307 | 20 | 2016-03-30T01:58:59Z | [
"python",
"package",
"install",
"seaborn",
"conda"
] | I am getting the following error when I try to install the python package seaborn:
```
conda install --name dato-env seaborn
Error: 'conda' can only be installed into the root environment
```
This, of course, is puzzling because I am not trying to install conda. I am trying to install seaborn.
This is my setup. I have 3 python environments:
* dato-env
* py35
* root
I **successfully** installed seaborn previously (with the command `conda install seaborn`), but it installed in the root environment (and is not available to my iPython notebooks which are using the dato-env).
I tried to install seaborn in the dato-env environment so that it would be available to my iPython notebook code, but I keep getting the above error saying that I must install ***conda*** in the root environment. (conda is installed in the root environment)
How do I successfully install seaborn into my dato-env?
Thanks in advance for any assistance.
Edit:
```
> conda --version
conda 4.0.5
> conda env list
dato-env * /Users/*******/anaconda/envs/dato-env
py35 /Users/*******/anaconda/envs/py35
root /Users/*******/anaconda
``` | If you clone root you get conda-build and conda-env in your new environment but afaik they shouldn't be there and are not required outside root provided root remains on your path. So if you remove them from your non-root env first your command should work. For example, I had the same error when trying to update anaconda but did not get the error doing it this way:
```
source activate my-env
conda remove conda-build
conda remove conda-env
conda update anaconda
```
See this thread for alternative and background: <https://groups.google.com/a/continuum.io/forum/#!topic/anaconda/PkXOIqlEPCU> |
why are empty numpy arrays not printed | 36,133,069 | 6 | 2016-03-21T14:00:56Z | 36,133,235 | 8 | 2016-03-21T14:08:03Z | [
"python",
"arrays",
"numpy"
] | If I initialise a python list
```
x = [[],[],[]]
print(x)
```
then it returns
```
[[], [], []]
```
but if I do the same with a numpy array
```
x = np.array([np.array([]),np.array([]),np.array([])])
print(x)
```
then it only returns
```
[]
```
How can I make it return a nested empty list as it does for a normal python list? | It actually *does* return a nested empty list. For example, try
```
>>> x = np.array([np.array([]),np.array([]),np.array([])])
>>> x
array([], shape=(3, 0), dtype=float64)
```
or
```
>>> print x.shape
(3, 0)
```
Don't let the output of `print x` fool you. These types of outputs merely reflect the (aesthetic) choices of the implementors of `__str__` and `__repr__`. To actually see the exact dimension, you need to use things like `.shape`. |
Search list of string elements that match another list of string elements | 36,134,536 | 9 | 2016-03-21T15:00:56Z | 36,134,592 | 8 | 2016-03-21T15:03:31Z | [
"python"
] | I have a list of strings called `names`, and I need to search each element in the `names` list against each element from the `pattern` list. I found several guides that can loop through an individual string but not a list of strings
```
a = [x for x in names if 'st' in x]
```
Thank you in advance!
```
names = ['chris', 'christopher', 'bob', 'bobby', 'kristina']
pattern = ['st', 'bb']
```
Desired output:
```
a = ['christopher', 'bobby', 'kristina']
``` | Use the [any()](https://docs.python.org/3/library/functions.html#any) function with a [generator expression](https://docs.python.org/3/reference/expressions.html#grammar-token-generator_expression):
```
a = [x for x in names if any(pat in x for pat in pattern)]
```
`any()` is a short-circuiting function, so the first time it comes across a pattern that matches, it returns True. Since I am using a generator expression instead of a list comprehension, no patterns after the first pattern that matches are even checked. That means that this is just about the fastest possible way of doing it. |
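With the question's sample data this gives exactly the desired output:

```python
names = ['chris', 'christopher', 'bob', 'bobby', 'kristina']
pattern = ['st', 'bb']

# Keep a name if any of the patterns is a substring of it
a = [x for x in names if any(pat in x for pat in pattern)]
print(a)  # -> ['christopher', 'bobby', 'kristina']
```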
Defining SQLAlchemy enum column with Python enum raises "ValueError: not a valid enum" | 36,136,112 | 6 | 2016-03-21T16:08:05Z | 36,136,344 | 8 | 2016-03-21T16:18:10Z | [
"python",
"flask",
"sqlalchemy",
"flask-sqlalchemy"
] | I am trying to follow [this example](http://docs.sqlalchemy.org/en/latest/core/type_basics.html) to have an enum column in a table that uses Python's `Enum` type. I define the enum then pass it to the column as shown in the example, but I get `ValueError: <enum 'FruitType'> is not a valid Enum`. How do I correctly define a SQLAlchemy enum column with a Python enum?
```
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
import enum
app = Flask(__name__)
db = SQLAlchemy(app)
class FruitType(enum.Enum):
    APPLE = "Crunchy apple"
    BANANA = "Sweet banana"

class MyTable(db.Model):
    id = db.Column(db.Integer, primary_key = True)
    fruit_type = db.Column(enum.Enum(FruitType))
```
```
File "why.py", line 32, in <module>
class MyTable(db.Model):
File "why.py", line 34, in MyTable
fruit_type = db.Column(enum.Enum(FruitType))
File "/usr/lib/python2.7/dist-packages/enum/__init__.py", line 330, in __call__
return cls.__new__(cls, value)
File "/usr/lib/python2.7/dist-packages/enum/__init__.py", line 642, in __new__
raise ValueError("%s is not a valid %s" % (value, cls.__name__))
ValueError: <enum 'FruitType'> is not a valid Enum
``` | The column type should be [`sqlalchemy.types.Enum`](http://docs.sqlalchemy.org/en/latest/core/type_basics.html#sqlalchemy.types.Enum). You're using the Python `Enum` type again, which is valid for the value but not the column type.
```
class MyTable(db.Model):
    id = db.Column(db.Integer, primary_key = True)
    fruit_type = db.Column(db.Enum(FruitType))
``` |
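The underlying behaviour is plain Python: calling an `Enum` class performs a member lookup *by value*, which is why passing the `FruitType` class itself produced the "is not a valid Enum" error. A minimal, SQLAlchemy-free sketch:

```python
import enum

class FruitType(enum.Enum):
    APPLE = "Crunchy apple"
    BANANA = "Sweet banana"

# Calling the class looks a member up by value...
assert FruitType("Crunchy apple") is FruitType.APPLE

# ...and a value that isn't a member raises the same kind of ValueError
# seen in the traceback above.
try:
    FruitType("Kiwi")
except ValueError as exc:
    error_message = str(exc)

print(error_message)
```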
Python the same char not equals | 36,137,602 | 12 | 2016-03-21T17:15:26Z | 36,137,846 | 9 | 2016-03-21T17:28:14Z | [
"python",
"python-2.7",
"python-3.x"
] | I have a text in my database. I send some text from XHR to my view. The `find` function does not find some unicode chars.
I want to find the selected text using just:
```
text.find(selection)
```
but sometimes the variable 'selection' contains a char like this:
```
ę # in xhr unichr(281)
```
while the variable 'text' contains this char:
```
ę # in db has two chars unichr(101) + unichr(808)
``` | Here [`unicodedata.normalize`](https://docs.python.org/3/library/unicodedata.html#unicodedata.normalize) might help you.
Basically if you normalize the data coming from the db, and normalize your selection to the same form, you should have a better result when using `str.find`, `str.__contains__` (i.e. `in`), `str.index`, and friends.
```
>>> u1 = chr(281)
>>> u2 = chr(101) + chr(808)
>>> print(u1, u2)
ę ę
>>> u1 == u2
False
>>> unicodedata.normalize('NFC', u2) == u1
True
```
NFC stands for the *Normal Form Composed* form. You can read up [here](https://en.wikipedia.org/wiki/Unicode_equivalence#Normalization) for some description of the other possible forms. |
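Applied to the question's `find` problem — a sketch using the Polish word "księga" as stand-in data:

```python
import unicodedata

text = 'ksi' + chr(101) + chr(808) + 'ga'  # decomposed e + combining ogonek, as stored in the db
selection = 'ksi' + chr(281)               # precomposed ę, as sent by the browser

raw = text.find(selection)                 # the code points differ, so this fails
normalized = unicodedata.normalize('NFC', text).find(
    unicodedata.normalize('NFC', selection))  # both collapse to the same ę

print(raw, normalized)  # -> -1 0
```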
How to find a value closest to another value X in a large sorted array efficiently | 36,143,149 | 3 | 2016-03-21T22:54:36Z | 36,143,259 | 9 | 2016-03-21T23:02:56Z | [
"python",
"search"
] | For a sorted list, how can I find the number which is closest to a given number?
For example,
```
mysortedList = [37, 72, 235, 645, 715, 767, 847, 905, 908, 960]
```
How can I find the largest element which is less than or equal to 700 **quickly**? (If I have 10 million elements, then it will be slow to search linearly). In this example, the answer is 645. | You can use the [`bisect`](https://docs.python.org/2/library/bisect.html) module:
```
import bisect
data = [37, 72, 235, 645, 715, 767, 847, 905, 908, 960]
location = bisect.bisect_left(data, 700)
result = data[location - 1]
```
This is a module in the standard library which will use [binary search](https://en.wikipedia.org/wiki/Binary_search_algorithm) to find the desired result. Depending on the exact value that you need you can also use [`bisect_right`](https://docs.python.org/2/library/bisect.html#bisect.bisect_right) instead of [`bisect_left`](https://docs.python.org/2/library/bisect.html#bisect.bisect_left).
This is faster than iterating over the list because the binary search algorithm can skip parts of the data that won't contain the answer. This makes it very suitable for finding the nearest number when the data is known to be sorted. |
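If you need the value *closest* to X on either side, as the question's title asks, a small hypothetical helper on top of `bisect` does it while staying O(log n):

```python
import bisect

def closest(data, x):
    """Return the element of the sorted list `data` nearest to `x` (hypothetical helper)."""
    i = bisect.bisect_left(data, x)
    if i == 0:
        return data[0]
    if i == len(data):
        return data[-1]
    before, after = data[i - 1], data[i]
    # Compare the two neighbours of the insertion point
    return before if x - before <= after - x else after

data = [37, 72, 235, 645, 715, 767, 847, 905, 908, 960]
print(closest(data, 700))  # -> 715 (only 15 away, versus 55 for 645)
```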
Difference between "raise" and "raise e"? | 36,153,805 | 14 | 2016-03-22T11:57:45Z | 36,153,948 | 11 | 2016-03-22T12:04:35Z | [
"python",
"exception"
] | In python, is there a difference between `raise` and `raise e` in an except block?
`dis` is showing me different results, but I don't know what it means.
What's the end behavior of both?
```
import dis
def a():
    try:
        raise Exception()
    except Exception as e:
        raise

def b():
    try:
        raise Exception()
    except Exception as e:
        raise e
dis.dis(a)
# OUT: 4 0 SETUP_EXCEPT 13 (to 16)
# OUT: 5 3 LOAD_GLOBAL 0 (Exception)
# OUT: 6 CALL_FUNCTION 0
# OUT: 9 RAISE_VARARGS 1
# OUT: 12 POP_BLOCK
# OUT: 13 JUMP_FORWARD 22 (to 38)
# OUT: 6 >> 16 DUP_TOP
# OUT: 17 LOAD_GLOBAL 0 (Exception)
# OUT: 20 COMPARE_OP 10 (exception match)
# OUT: 23 POP_JUMP_IF_FALSE 37
# OUT: 26 POP_TOP
# OUT: 27 STORE_FAST 0 (e)
# OUT: 30 POP_TOP
# OUT: 7 31 RAISE_VARARGS 0
# OUT: 34 JUMP_FORWARD 1 (to 38)
# OUT: >> 37 END_FINALLY
# OUT: >> 38 LOAD_CONST 0 (None)
# OUT: 41 RETURN_VALUE
dis.dis(b)
# OUT: 4 0 SETUP_EXCEPT 13 (to 16)
# OUT: 5 3 LOAD_GLOBAL 0 (Exception)
# OUT: 6 CALL_FUNCTION 0
# OUT: 9 RAISE_VARARGS 1
# OUT: 12 POP_BLOCK
# OUT: 13 JUMP_FORWARD 25 (to 41)
# OUT: 6 >> 16 DUP_TOP
# OUT: 17 LOAD_GLOBAL 0 (Exception)
# OUT: 20 COMPARE_OP 10 (exception match)
# OUT: 23 POP_JUMP_IF_FALSE 40
# OUT: 26 POP_TOP
# OUT: 27 STORE_FAST 0 (e)
# OUT: 30 POP_TOP
# OUT: 7 31 LOAD_FAST 0 (e)
# OUT: 34 RAISE_VARARGS 1
# OUT: 37 JUMP_FORWARD 1 (to 41)
# OUT: >> 40 END_FINALLY
# OUT: >> 41 LOAD_CONST 0 (None)
# OUT: 44 RETURN_VALUE
``` | There is no difference in this case. `raise` without arguments [will always raise the last exception thrown](https://docs.python.org/3/reference/simple_stmts.html#the-raise-statement) (which is also accessible with `sys.exc_info()`).
The reason the bytecode is different is because Python is a dynamic language and the interpreter doesn't really "know" that `e` refers to the (unmodified) exception that is currently being handled. But this may not always be the case, consider:
```
try:
    raise Exception()
except Exception as e:
    if foo():
        e = OtherException()
    raise e
```
What is `e` now? There is no way to tell when compiling the bytecode (only when actually *running* the program).
In simple examples like yours, it might be possible for the Python interpreter to "optimize" the bytecode, but so far no one has done this. And why should they? It's a micro-optimization at best and may still break in subtle ways in obscure conditions. There is a lot of other fruit that is hanging a lot lower than this and is more nutritious to boot ;-) |
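A minimal sketch confirming the "no difference in this case" point — both forms surface the same exception to the caller:

```python
def bare_raise():
    try:
        raise ValueError('boom')
    except ValueError:
        raise          # re-raises the exception currently being handled

def named_raise():
    try:
        raise ValueError('boom')
    except ValueError as e:
        raise e        # explicitly re-raises the same object

messages = []
for func in (bare_raise, named_raise):
    try:
        func()
    except ValueError as exc:
        messages.append(str(exc))

print(messages)  # -> ['boom', 'boom']
```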
Efficient way of Creating dummy variables in python | 36,157,329 | 4 | 2016-03-22T14:36:20Z | 36,157,546 | 7 | 2016-03-22T14:45:09Z | [
"python"
] | I want to create a vector of dummy variables (which can take only 0 or 1). I am doing the following:
```
data = ['one','two','three','four','six']
variables = ['two','five','ten']
```
I got the following two ways:
```
dummy = []
for variable in variables:
    if variable in data:
        dummy.append(1)
    else:
        dummy.append(0)
```
or with list comprehension:
```
dummy = [1 if variable in data else 0 for variable in variables]
```
Results are ok:
```
>>> [1,0,0]
```
Is there a built-in function that does this task quicker? It's kinda slow when there are thousands of variables.
**Edit**: Results using `time.time()`:
I am using the following data:
```
data = ['one','two','three','four','six']*100
variables = ['two','five','ten']*100000
```
* Loop(from my example): 2.11 sec
* list comprehension: 1.55 sec
* list comprehension (variables are type of set): 0.0004992 sec
* Example from Peter: 0.0004999 sec
* Example from falsetrue: 0.000502 sec | If you convert `data` to a `set` the lookup will be quicker.
You can also convert the boolean to an integer to get `1` or `0` for `True` or `False`.
```
>>> int(True)
1
```
You can call `__contains__` on the set of data for each variable, to save creating the set each time through the loop.
You can map all these together:
```
dummy = list(map(int, map(set(data).__contains__, variables)))
```
**edit:**
Much as I like one-liners, I think it's more readable to use a list comprehension.
If you create the `set` in the list comprehension it will recreate it for each `variable`. So we need two lines:
```
search = set(data)
dummy = [int(variable in search) for variable in variables]
``` |
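Putting it together with the question's data:

```python
data = ['one', 'two', 'three', 'four', 'six']
variables = ['two', 'five', 'ten']

search = set(data)  # set membership tests are O(1)
dummy = [int(variable in search) for variable in variables]
print(dummy)  # -> [1, 0, 0]
```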
n**n**n heuristics in Python | 36,159,204 | 5 | 2016-03-22T15:54:53Z | 36,159,331 | 13 | 2016-03-22T16:00:06Z | [
"python",
"algorithm"
] | I was just playing around with `Python` and found an interesting thing: my computer (i5, 3 GHz) just hangs after several hours of attempting to compute `10 ** 10 ** 10`.
I know math isn't the main purpose Python was created for, but I wonder whether there is a way to help `Python` compute it.
What I have so far is my observation: `n ** (2** lg(n**n))` works 2 times faster than `n ** n ** n`
```
n = 8 ** (8 ** 8)
n2 = 8 ** (2 ** 24)
# measured by timeit
> 4.449993866728619e-07
> 1.8300124793313444e-07
```
---
1) Does anyone have an idea how to compute `n ** n ** n` in a more efficient way?
2) Can generators help to minimize memory usage? | `10 ** 10 ** 10` is a **very very large number**. Python is trying to allocate enough memory to represent that number. 10.000.000.000 (10 billion) digits takes a lot more memory than your computer can provide in one go, so your computer is now swapping out memory to disk to make space, which is why things are now so very very slow.
To illustrate, try using `sys.getsizeof()` on some numbers that do fit:
```
>>> import sys
>>> sys.getsizeof(10 ** 10 ** 6)
442948
>>> sys.getsizeof(10 ** 10 ** 7)
4429264
```
so ten times as many digits requires *roughly* 10 times more memory. The amounts above are in bytes, so a 1 million digit number takes almost half a megabyte, 10 million digits takes 4 megabytes. Extrapolating, your number would require 4 gigabytes of memory. It depends on your OS and hardware whether Python will even be given that much memory.
Python stores integers in [increments of 30 bits](https://stackoverflow.com/questions/23016610/why-do-ints-require-three-times-as-much-memory-in-python/23016640#23016640) on modern platforms; so every 30 bits requires an additional 4 bytes of storage. For 10 billion digits that comes down to [`(log2(10 ** 10 ** 10) / 30 * 4) / (1024 ** 3)`](http://www.wolframalpha.com/input/?dataset=&i=(log2(10%5E10%5E10)+%2F+30+*+4)+%2F+(1024+%5E+3)) == about 4.125GiB.
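The same back-of-the-envelope estimate can be done in Python, following the 30-bits-per-4-bytes layout described above:

```python
import math

digits = 10 ** 10                 # 10 ** 10 ** 10 has 10 billion decimal digits
bits = digits * math.log2(10)     # bits needed to store the integer
gib = bits / 30 * 4 / 1024 ** 3   # 4 bytes of storage per 30-bit chunk
print(round(gib, 3))  # -> 4.125
```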
You can't use Python to represent numbers this large. Not even floating point numbers can reach that high:
```
>>> 10.0 ** 10 ** 10
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: (34, 'Result too large')
```
I'm not that familiar with bignum (big number) handling in Python; perhaps the [`gmpy` library](https://code.google.com/archive/p/gmpy/) has better facilities for representing such numbers. |
How to use inverse of a GenericRelation | 36,163,430 | 12 | 2016-03-22T19:27:28Z | 39,945,828 | 7 | 2016-10-09T16:25:25Z | [
"python",
"sql",
"django",
"generic-foreign-key",
"django-generic-relations"
] | I must be really misunderstanding something with the [`GenericRelation` field](https://docs.djangoproject.com/en/1.7/ref/contrib/contenttypes/#reverse-generic-relations) from Django's content types framework.
To create a minimal self-contained example, I will use the polls example app from the tutorial. Add a generic foreign key field into the `Choice` model, and make a new `Thing` model:
```
class Choice(models.Model):
    ...
    content_type = models.ForeignKey(ContentType)
    object_id = models.PositiveIntegerField()
    thing = GenericForeignKey('content_type', 'object_id')

class Thing(models.Model):
    choices = GenericRelation(Choice, related_query_name='things')
```
With a clean db and synced-up tables, create a few instances:
```
>>> poll = Poll.objects.create(question='the question', pk=123)
>>> thing = Thing.objects.create(pk=456)
>>> choice = Choice.objects.create(choice_text='the choice', pk=789, poll=poll, thing=thing)
>>> choice.thing.pk
456
>>> thing.choices.get().pk
789
```
So far so good - the relation works in both directions from an instance. But from a queryset, the reverse relation is very weird:
```
>>> Choice.objects.values_list('things', flat=1)
[456]
>>> Thing.objects.values_list('choices', flat=1)
[456]
```
Why does the inverse relation give me the id from the `thing` again? I expected instead the primary key of the choice, equivalent to the following result:
```
>>> Thing.objects.values_list('choices__pk', flat=1)
[789]
```
Those ORM queries generate SQL like this:
```
>>> print Thing.objects.values_list('choices__pk', flat=1).query
SELECT "polls_choice"."id" FROM "polls_thing" LEFT OUTER JOIN "polls_choice" ON ( "polls_thing"."id" = "polls_choice"."object_id" AND ("polls_choice"."content_type_id" = 10))
>>> print Thing.objects.values_list('choices', flat=1).query
SELECT "polls_choice"."object_id" FROM "polls_thing" LEFT OUTER JOIN "polls_choice" ON ( "polls_thing"."id" = "polls_choice"."object_id" AND ("polls_choice"."content_type_id" = 10))
```
The Django docs are generally excellent, but I can't understand the second query or find any documentation of that behaviour - it seems to return data from the wrong table completely. | **TL;DR** This was a bug in Django 1.7 that was fixed in Django 1.8.
* Fix commit: [1c5cbf5e5d5b350f4df4aca6431d46c767d3785a](https://github.com/django/django/commit/1c5cbf5e5d5b350f4df4aca6431d46c767d3785a)
* Fix PR: [GenericRelation filtering targets related model's pk](https://github.com/django/django/pull/3743)
* Bug ticket: [Should filter on related model primary key value, not the object\_id value](https://code.djangoproject.com/ticket/24002)
The change went directly to master and did not go under a deprecation period, which isn't too surprising given that maintaining backwards compatibility here would have been really difficult. More surprising is that there was no mention of the issue in the [1.8 release notes](https://docs.djangoproject.com/en/1.10/releases/1.8/#backwards-incompatible-1-8), since the fix changes behavior of currently working code.
The remainder of this answer is a description of how I found the commit using [`git bisect run`](https://git-scm.com/docs/git-bisect). It's here for my own reference more than anything, so I can come back here if I ever need to bisect a large project again.
---
First we setup a django clone and a test project to reproduce the issue. I used [virtualenvwrapper](https://virtualenvwrapper.readthedocs.io/en/latest/) here, but you can do the isolation however you wish.
```
cd /tmp
git clone https://github.com/django/django.git
cd django
git checkout tags/1.7
mkvirtualenv djbisect
export PYTHONPATH=/tmp/django # get django clone into sys.path
python ./django/bin/django-admin.py startproject djbisect
export PYTHONPATH=$PYTHONPATH:/tmp/django/djbisect # test project into sys.path
export DJANGO_SETTINGS_MODULE=djbisect.mysettings
```
create the following file:
```
# /tmp/django/djbisect/djbisect/models.py
from django.db import models
from django.contrib.contenttypes.models import ContentType
from django.contrib.contenttypes.fields import GenericForeignKey, GenericRelation
class GFKmodel(models.Model):
content_type = models.ForeignKey(ContentType)
object_id = models.PositiveIntegerField()
gfk = GenericForeignKey()
class GRmodel(models.Model):
related_gfk = GenericRelation(GFKmodel)
```
also this one:
```
# /tmp/django/djbisect/djbisect/mysettings.py
from djbisect.settings import *
INSTALLED_APPS += ('djbisect',)
```
Now we have a working project, create the `test_script.py` to use with `git bisect run`:
```
#!/usr/bin/env python
import subprocess, os, sys
db_fname = '/tmp/django/djbisect/db.sqlite3'
if os.path.exists(db_fname):
os.unlink(db_fname)
cmd = 'python /tmp/django/djbisect/manage.py migrate --noinput'
subprocess.check_call(cmd.split())
import django
django.setup()
from django.contrib.contenttypes.models import ContentType
from djbisect.models import GFKmodel, GRmodel
ct = ContentType.objects.get_for_model(GRmodel)
y = GRmodel.objects.create(pk=456)
x = GFKmodel.objects.create(pk=789, content_type=ct, object_id=y.pk)
query1 = GRmodel.objects.values_list('related_gfk', flat=1)
query2 = GRmodel.objects.values_list('related_gfk__pk', flat=1)
print(query1)
print(query2)
print(query1.query)
print(query2.query)
if query1[0] == 789 == query2[0]:
print('FIXED')
sys.exit(1)
else:
print('UNFIXED')
sys.exit(0)
```
The script must be executable, so add the flag with `chmod +x test_script.py`. It should be located in the directory that Django is cloned into, i.e. `/tmp/django/test_script.py` for me. This is because `import django` should pick up the locally checked-out django project first, not any version from site-packages.
The user interface of git bisect was designed to find out where bugs *appeared*, so the usual prefixes of "bad" and "good" are backwards when you're trying to find out when a certain bug was *fixed*. This may seem somewhat upside-down, but the test script should exit with success (return code 0) if the bug is present, and it should fail out (with nonzero return code) if the bug is fixed. This tripped me up a few times!
```
git bisect start --term-new=fixed --term-old=unfixed
git bisect fixed tags/1.8
git bisect unfixed tags/1.7
git bisect run ./test_script.py
```
So this process will do an automated search which eventually finds the commit where the bug was fixed. It takes some time, because there were a lot of commits between Django 1.7 and Django 1.8. It bisected 1362 revisions, roughly 10 steps, and eventually output:
```
1c5cbf5e5d5b350f4df4aca6431d46c767d3785a is the first fixed commit
commit 1c5cbf5e5d5b350f4df4aca6431d46c767d3785a
Author: Anssi Kääriäinen <[email protected]>
Date: Wed Dec 17 09:47:58 2014 +0200
Fixed #24002 -- GenericRelation filtering targets related model's pk
Previously Publisher.objects.filter(book=val) would target
book.object_id if book is a GenericRelation. This is inconsistent to
filtering over reverse foreign key relations, where the target is the
related model's primary key.
```
That's precisely the commit where the query has changed from the incorrect SQL (which gets data from the wrong table)
```
SELECT "djbisect_gfkmodel"."object_id" FROM "djbisect_grmodel" LEFT OUTER JOIN "djbisect_gfkmodel" ON ( "djbisect_grmodel"."id" = "djbisect_gfkmodel"."object_id" AND ("djbisect_gfkmodel"."content_type_id" = 8) )
```
into the correct version:
```
SELECT "djbisect_gfkmodel"."id" FROM "djbisect_grmodel" LEFT OUTER JOIN "djbisect_gfkmodel" ON ( "djbisect_grmodel"."id" = "djbisect_gfkmodel"."object_id" AND ("djbisect_gfkmodel"."content_type_id" = 8) )
```
Of course, from the commit hash we're able to find the pull request and the ticket easily on github. Hopefully this may help someone else one day too - bisecting Django can be tricky to setup due to the migrations! |
Python subprocess .check_call vs .check_output | 36,169,571 | 3 | 2016-03-23T03:56:41Z | 36,214,461 | 10 | 2016-03-25T05:20:10Z | [
"python",
"bash",
"ssh",
"subprocess"
] | My python script (python 3.4.3) calls a bash script via subprocess:
```
import subprocess as sp
res = sp.check_output("bashscript", shell=True)
```
The **bashscript** contains the following line:
```
ssh -MNf somehost
```
which opens a shared master connection to some remote host to allow some subsequent operations.
When executing the python script, it will prompt for a password for the `ssh` line, but then it blocks after the password is entered and never returns. When I ctrl-C to terminate the script, I see that the connection was properly established (so the `ssh` line was successfully executed).
I don't have this blocking problem when using `check_call` instead of `check_output`, but `check_call` does not retrieve stdout. I'd like to understand what exactly is causing the blocking behavior for `check_output`, probably related to some subtlety with `ssh -MNf`. | `check_call()` returns as soon as the `/bin/sh` process exits, without waiting for descendant processes.
`check_output()` waits until all output is read. If `ssh` inherits the pipe then `check_output()` will wait until it exits (until it closes its inherited pipe ends).
`check_call()` code example:
```
#!/usr/bin/env python
import subprocess
import sys
import time
start = time.time()
cmd = sys.executable + " -c 'import time; time.sleep(2)' &"
subprocess.check_call(cmd, shell=True)
assert (time.time() - start) < 1
```
The output is not read; `check_call()` returns immediately without waiting for the grandchild background python process.
`check_call()` is just `Popen().wait()`. `Popen()` starts the external process and returns immediately without waiting for it to exit. `.wait()` collects the exit status for the process -- it doesn't wait for other (grandchildren) processes.
If the output is read (it is redirected and the grandchild python
process inherits the stdout pipe):
```
start = time.time()
subprocess.check_output(cmd, shell=True)
assert (time.time() - start) > 2
```
then it waits until the background python process that inherited the pipe exits.
`check_output()` calls `Popen().communicate()`, to get the output. `.communicate()` calls `.wait()` internally i.e., `check_output()` also waits for the shell to exit and `check_output()` waits for EOF.
If the grandchild doesn't inherit the pipe then `check_output()` doesn't wait for it:
```
start = time.time()
cmd = sys.executable + " -c 'import time; time.sleep(2)' >/dev/null &"
subprocess.check_output(cmd, shell=True)
assert (time.time() - start) < 1
```
Grandchild's output is redirected to `/dev/null` i.e., it doesn't inherit the parent's pipe and therefore `check_output()` may exit without waiting for it.
Note the `&` at the end, which puts the grandchild python process into the background. It won't work on Windows, where `shell=True` starts `cmd.exe` by default. |
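For contrast, here is a minimal sketch of the difference in return values (using an argument list instead of `shell=True`, so no shell or background process is involved): `check_call()` only returns the exit status, while `check_output()` returns the captured stdout.

```python
import subprocess
import sys

# check_call returns the exit status (0 on success); stdout is not captured.
status = subprocess.check_call([sys.executable, "-c", "print('hello')"])
print(status)  # 0

# check_output captures stdout and returns it as bytes.
out = subprocess.check_output([sys.executable, "-c", "print('hello')"])
print(out)  # b'hello\n' (with the platform's newline)
```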
Iterate over a dict except for x item items | 36,184,371 | 7 | 2016-03-23T17:01:48Z | 36,184,412 | 9 | 2016-03-23T17:03:34Z | [
"python",
"dictionary"
] | I have a dict in this format:
```
d_data = {'key_1':value_1,'key_2':value_2,'key_3':value_3,'key_x':value_x,'key_n':value_n}
```
and I have to iterate over it's items:
```
for key,value in columns.items():
do something
```
except for the pair:
```
'key_x':value_x
``` | Simply use the [`continue`](https://docs.python.org/2/reference/simple_stmts.html#the-continue-statement) statement, to skip ahead to the next iteration of the for loop:
```
for key,value in columns.items():
if key == 'key_x':
continue
# do something
``` |
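If you'd rather not test inside the loop, a dict comprehension can build a filtered view first; a sketch with made-up values for `columns`:

```python
columns = {'key_1': 1, 'key_2': 2, 'key_x': 99}  # hypothetical data

# Iterate over a filtered copy instead of skipping inside the loop body.
filtered = {k: v for k, v in columns.items() if k != 'key_x'}
for key, value in filtered.items():
    print(key, value)  # 'key_x' is never seen
```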
python: check if an numpy array contains any element of another array | 36,190,533 | 5 | 2016-03-23T23:19:59Z | 36,190,611 | 8 | 2016-03-23T23:26:40Z | [
"python",
"numpy"
] | What is the best way to check if a numpy array contains any element of another array?
example:
```
array1 = [10,5,4,13,10,1,1,22,7,3,15,9]
array2 = [3,4,9,10,13,15,16,18,19,20,21,22,23]
```
I want to get a `True` if `array1` contains any value of `array2`, otherwise a `False`. | Using Pandas, you can use `isin`:
```
a1 = np.array([10,5,4,13,10,1,1,22,7,3,15,9])
a2 = np.array([3,4,9,10,13,15,16,18,19,20,21,22,23])
>>> pd.Series(a1).isin(a2).any()
True
```
And using the [in1d](http://docs.scipy.org/doc/numpy/reference/generated/numpy.in1d.html#numpy.in1d) numpy function(per the comment from @Norman):
```
>>> np.any(np.in1d(a1, a2))
True
```
For small arrays such as those in this example, the solution using set is the clear winner. For larger, dissimilar arrays (i.e. no overlap), the Pandas and Numpy solutions are faster. However, [`np.intersect1d`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.intersect1d.html) appears to excel for larger arrays.
**Small arrays (12-13 elements)**
```
%timeit set(array1) & set(array2)
The slowest run took 4.22 times longer than the fastest. This could mean that an intermediate result is being cached
1000000 loops, best of 3: 1.69 µs per loop
%timeit any(i in a1 for i in a2)
The slowest run took 12.29 times longer than the fastest. This could mean that an intermediate result is being cached
100000 loops, best of 3: 1.88 µs per loop
%timeit np.intersect1d(a1, a2)
The slowest run took 10.29 times longer than the fastest. This could mean that an intermediate result is being cached
100000 loops, best of 3: 15.6 µs per loop
%timeit np.any(np.in1d(a1, a2))
10000 loops, best of 3: 27.1 µs per loop
%timeit pd.Series(a1).isin(a2).any()
10000 loops, best of 3: 135 µs per loop
```
**Using an array with 100k elements (no overlap)**:
```
a3 = np.random.randint(0, 100000, 100000)
a4 = a3 + 100000
%timeit np.intersect1d(a3, a4)
100 loops, best of 3: 13.8 ms per loop
%timeit pd.Series(a3).isin(a4).any()
100 loops, best of 3: 18.3 ms per loop
%timeit np.any(np.in1d(a3, a4))
100 loops, best of 3: 18.4 ms per loop
%timeit set(a3) & set(a4)
10 loops, best of 3: 23.6 ms per loop
%timeit any(i in a3 for i in a4)
1 loops, best of 3: 34.5 s per loop
``` |
Why can't I duplicate selected items in for loop? | 36,191,418 | 2 | 2016-03-24T00:53:09Z | 36,191,440 | 8 | 2016-03-24T00:56:00Z | [
"python",
"python-3.x"
] | Say I have a list: `letters = ['a', 'b', 'c']`
and I want to insert `'c'` at the front of the list:
```
for letter in letters:
if letter == 'c':
letters.insert(0, letter)
```
When I hit enter, nothing happens and I have to force quit my terminal... | You should never mutate lists, dictionaries, or other such containers while iterating over them. One way around it is to make a copy for iteration:
```
for letter in letters[:]:
if letter == 'c':
letters.insert(0, letter)
``` |
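A quick check, using the values from the question, that iterating over the slice copy terminates and gives the expected result:

```python
letters = ['a', 'b', 'c']

# The loop walks the copy, so inserting into the original can't extend it.
for letter in letters[:]:
    if letter == 'c':
        letters.insert(0, letter)

print(letters)  # ['c', 'a', 'b', 'c']
```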
Split python dictionary to result in all combinations of values | 36,198,540 | 7 | 2016-03-24T10:57:22Z | 36,199,218 | 9 | 2016-03-24T11:35:20Z | [
"python",
"dictionary"
] | ```
my_dict = {'a':[1,2], 'b':[3], 'c':{'d':[4,5], 'e':[6,7]}}
```
I need to derive all the combinations out of it as below.
```
{'a':1, 'b':3, 'c':{'d':4, 'e':6}}
{'a':1, 'b':3, 'c':{'d':4, 'e':7}}
{'a':1, 'b':3, 'c':{'d':5, 'e':6}}
{'a':1, 'b':3, 'c':{'d':5, 'e':7}}
{'a':2, 'b':3, 'c':{'d':4, 'e':6}}
```
and so on. There could be any level of nesting here
Please let me know how to achieve this
Something that I tried is pasted below but definitely was reaching nowhere
```
def gen_combinations(data):
my_list =[]
if isinstance(data, dict):
for k, v in data.iteritems():
if isinstance(v, dict):
gen_combinations(v)
elif isinstance(v, list):
for i in range(len(v)):
temp_dict = data.copy()
temp_dict[k] = v[i]
print temp_dict
my_dict = {'a':[1,2], 'b':[3], 'c':{'d':[4,5], 'e':[6,7]}}
gen_combinations(my_dict)
```
Which resulted in
```
{'a': 1, 'c': {'e': [6, 7], 'd': [4, 5]}, 'b': [3]}
{'a': 2, 'c': {'e': [6, 7], 'd': [4, 5]}, 'b': [3]}
{'e': 6, 'd': [4, 5]}
{'e': 7, 'd': [4, 5]}
{'e': [6, 7], 'd': 4}
{'e': [6, 7], 'd': 5}
{'a': [1, 2], 'c': {'e': [6, 7], 'd': [4, 5]}, 'b': 3}
``` | ```
from itertools import product
my_dict = {'a':[1,2], 'b':[3], 'c':{'d':[4,5], 'e':[6,7]}}
def process(d):
to_product = [] # [[('a', 1), ('a', 2)], [('b', 3),], ...]
for k, v in d.items():
if isinstance(v, list):
to_product.append([(k, i) for i in v])
elif isinstance(v, dict):
to_product.append([(k, i) for i in process(v)])
else:
to_product.append([(k, v)])
return [dict(l) for l in product(*to_product)]
for i in process(my_dict):
print(i)
```
Output:
```
{'a': 1, 'b': 3, 'c': {'e': 6, 'd': 4}}
{'a': 2, 'b': 3, 'c': {'e': 6, 'd': 4}}
{'a': 1, 'b': 3, 'c': {'e': 6, 'd': 5}}
{'a': 2, 'b': 3, 'c': {'e': 6, 'd': 5}}
{'a': 1, 'b': 3, 'c': {'e': 7, 'd': 4}}
{'a': 2, 'b': 3, 'c': {'e': 7, 'd': 4}}
{'a': 1, 'b': 3, 'c': {'e': 7, 'd': 5}}
{'a': 2, 'b': 3, 'c': {'e': 7, 'd': 5}}
```
**Upd:**
Code that works as asked [here](http://stackoverflow.com/questions/36198540/split-python-dictionary-to-result-in-all-combinations-of-values?noredirect=1#comment60043432_36198540):
```
from itertools import product
my_dict = {'a':[1,2], 'e':[7], 'f':{'x':[{'a':[3,5]}, {'a':[4]}] } }
def process(d):
to_product = [] # [[('a', 1), ('a', 2)], [('b', 3),], ...]
for k, v in d.items():
if isinstance(v, list) and all(isinstance(i, dict) for i in v):
# specific case, when list of dicts process differently...
c = product(*list(process(i) for i in v))
to_product.append([(k, list(l)) for l in c])
elif isinstance(v, list):
to_product.append([(k, i) for i in v])
elif isinstance(v, dict):
to_product.append([(k, i) for i in process(v)])
else:
to_product.append([(k, v)])
return [dict(l) for l in product(*to_product)]
for i in process(my_dict):
print(i)
```
Output:
```
{'f': {'x': [{'a': 3}, {'a': 4}]}, 'a': 1, 'e': 7}
{'f': {'x': [{'a': 3}, {'a': 4}]}, 'a': 2, 'e': 7}
{'f': {'x': [{'a': 5}, {'a': 4}]}, 'a': 1, 'e': 7}
{'f': {'x': [{'a': 5}, {'a': 4}]}, 'a': 2, 'e': 7}
``` |
Read file content from S3 bucket with boto3 | 36,205,481 | 4 | 2016-03-24T16:41:02Z | 36,205,825 | 7 | 2016-03-24T16:57:04Z | [
"python",
"amazon-web-services",
"amazon-s3",
"boto3"
] | I read the filenames in my S3 bucket by doing
```
objs = boto3.client('s3').list_objects(Bucket='my_bucket')
while 'Contents' in objs.keys():
objs_contents = objs['Contents']
for i in range(len(objs_contents)):
filename = objs_contents[i]['Key']
```
Now, I need to get the actual content of the file, similar to `open(filename).readlines()`. What is the best way? | boto3 offers a resource model that makes tasks like iterating through objects easier. Unfortunately, StreamingBody doesn't provide `readline` or `readlines`.
```
s3 = boto3.resource('s3')
bucket = s3.Bucket('test-bucket')
# Iterates through all the objects, doing the pagination for you. Each obj
# is an ObjectSummary, so it doesn't contain the body. You'll need to call
# get to get the whole body.
for obj in bucket.objects.all():
key = obj.key
body = obj.get()['Body'].read()
``` |
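If the goal is an equivalent of `open(filename).readlines()`, the bytes from `read()` just need decoding and splitting. A sketch of a helper (only the pure decoding step is shown, so it can run without AWS access; with boto3 you would call it as `body_to_lines(obj.get()['Body'].read())` — the helper name is hypothetical):

```python
def body_to_lines(body_bytes, encoding="utf-8"):
    """Turn the raw bytes of an S3 object body into a list of lines."""
    return body_bytes.decode(encoding).splitlines()

print(body_to_lines(b"first line\nsecond line\n"))
# ['first line', 'second line']
```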
clang: error: unsupported option '-fopenmp' on Mac OSX El Capitan building XGBoost | 36,211,018 | 4 | 2016-03-24T22:22:56Z | 36,211,162 | 9 | 2016-03-24T22:38:01Z | [
"python",
"osx",
"gcc",
"xgboost"
] | I'm trying to build [XGBoost](http://xgboost.readthedocs.org/) package for Python following [these instructions](http://xgboost.readthedocs.org/en/latest/build.html#building-on-osx):
> Here is the complete solution to use OpenMP-enabled compilers to install XGBoost. Obtain gcc-5.x.x with openmp support by `brew install gcc --without-multilib`. (brew is the de facto standard of apt-get on OS X. So installing HPC separately is not recommended, but it should work.):
```
git clone --recursive https://github.com/dmlc/xgboost
cd xgboost; cp make/config.mk ./config.mk; make -j4
```
This error occurs precisely in the `make -j4` command.
Searching beforehand, I've tried these two solutions ([1](http://stackoverflow.com/questions/19649634/clang-error-unsupported-option-static-libgcc-on-mac-osx-mavericks) and [2](http://stackoverflow.com/questions/19351219/cuda-clang-and-os-x-mavericks)), to no avail, except for the part about installing another gcc, which I avoided for fear of messing everything up.
Below is the `make` configuration file. There is nothing suspicious about it.
```
#-----------------------------------------------------
# xgboost: the configuration compile script
#
# If you want to change the configuration, please use the following
# steps. Assume you are on the root directory of xgboost.
# First copy this file so that any local changes will be ignored by git
#
# $ cp make/config.mk .
#
# Next modify the according entries, and then compile by
#
# $ make
#
# or build in parallel with 8 threads
#
# $ make -j8
#----------------------------------------------------
# choice of compiler, by default use system preference.
# export CC = gcc
# export CXX = g++
# export MPICXX = mpicxx
# the additional link flags you want to add
ADD_LDFLAGS =
# the additional compile flags you want to add
ADD_CFLAGS =
# Whether enable openmp support, needed for multi-threading.
USE_OPENMP = 1
# whether use HDFS support during compile
USE_HDFS = 0
# whether use AWS S3 support during compile
USE_S3 = 0
# whether use Azure blob support during compile
USE_AZURE = 0
# Rabit library version,
# - librabit.a Normal distributed version.
# - librabit_empty.a Non distributed mock version,
LIB_RABIT = librabit.a
# path to libjvm.so
LIBJVM=$(JAVA_HOME)/jre/lib/amd64/server
# List of additional plugins, checkout plugin folder.
# uncomment the following lines to include these plugins
# you can also add your own plugin like this
#
# XGB_PLUGINS += plugin/example/plugin.mk
``` | You installed `gcc` with Homebrew, yet the error is from `clang`. That should simply mean that your default compiler still points to `clang` instead of the newly installed `gcc`. If you read the comments in the Makefile, you'll see the following lines:
```
# choice of compiler, by default use system preference.
# export CC = gcc
# export CXX = g++
# export MPICXX = mpicxx
```
and here, you don't want the system one (note: the system `gcc` points to `clang`):
```
$ which gcc
/usr/bin/gcc
$ gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 7.3.0 (clang-703.0.29)
Target: x86_64-apple-darwin15.4.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
```
Instead, point those variables to something in `/usr/local/bin`, e.g.:
```
$ export CC=/usr/local/bin/gcc
```
and similar for the other two variables. |
finding longest word in a list python | 36,214,058 | 2 | 2016-03-25T04:32:40Z | 36,214,066 | 10 | 2016-03-25T04:33:39Z | [
"python",
"python-3.x"
] | I am trying to find the longest word in a non-empty list. My function is supposed to return the longest word. If elements are of equal length in the list, I am trying to sort out the longest in terms of Unicode sorting. For example, I am trying to return the following:
```
>>> highest_word(['a', 'cat', 'sat'])
'sat'
>>> highest_word(['saturation', 'of', 'colour'])
'saturation'
>>> highest_word(['samIam'])
'samIam'
```
So far I can get the first one to work, this is my code so far:
```
def highest_word(wordlist):
longestWord = ""
max_len = 0
for word in wordlist:
if len(word) > max_len:
            max_len = len(word)
longestWord = word
return longestWord
```
Any sort of help would be greatly appreciated. | Here's a simple one-liner:
```
print(max(['a', 'cat', 'sat', 'g'], key=lambda s: (len(s), s)))
```
This works by mapping each element of the list to a tuple containing its length
and the string itself.
When comparing two tuples `A` and `B`, if `A[0] > B[0]` then `A > B`. Only if `A[0] == B[0]` are the second elements considered. So if the lengths of two strings are equal, then the strings are compared as a tiebreaker. |
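A small demonstration of the tie-break, using the list from the question plus an extra short word:

```python
words = ['a', 'cat', 'sat', 'g']

# (length, string) tuples: length is compared first, the string breaks ties.
print(max(words, key=lambda s: (len(s), s)))  # 'sat' wins the tie against 'cat'

# With length alone, max() keeps the first of the equally long candidates.
print(max(words, key=len))  # 'cat'
```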
How to compare pandas DataFrame against None in Python? | 36,217,969 | 4 | 2016-03-25T10:13:46Z | 36,218,033 | 8 | 2016-03-25T10:16:39Z | [
"python",
"pandas",
"python-2.x",
"nonetype"
] | How do I compare a pandas DataFrame with `None`? I have a constructor that takes one of a `parameter_file` or a `pandas_df` but never both.
```
def __init__(self,copasi_file,row_to_insert=0,parameter_file=None,pandas_df=None):
self.copasi_file=copasi_file
self.parameter_file=parameter_file
self.pandas_df=pandas_df
```
However, when I later try to compare the `pandas_df` against `None`, (i.e. when `self.pandas_df` actually contains a pandas dataframe):
```
if self.pandas_df!=None:
print 'Do stuff'
```
I get the following TypeError:
```
File "C:\Anaconda1\lib\site-packages\pandas\core\internals.py", line 885, in eval
% repr(other))
TypeError: Could not compare [None] with block values
``` | Use `is not`:
```
if self.pandas_df is not None:
print 'Do stuff'
```
[PEP 8](https://www.python.org/dev/peps/pep-0008/) says:
> Comparisons to singletons like `None` should always be done with `is` or `is not`, never the equality operators.
There is also a nice [explanation](http://jaredgrubb.blogspot.de/2009/04/python-is-none-vs-none.html) why. |
Python remove elements from two dimensional list | 36,224,325 | 3 | 2016-03-25T16:55:05Z | 36,224,378 | 10 | 2016-03-25T16:58:07Z | [
"python",
"arrays",
"list",
"max",
"min"
] | Trying to remove the min and max values from a two-dimensional list.
My code:
```
myList = [[1, 3, 4], [2, 4, 4], [3, 4, 5]]
maxV = 0
minV = myList[0][0]
for list in myList:
for innerlist in list:
if innerlist > maxV:
maxV = innerlist
if innerlist < minV:
minV = innerlist
innerlist.remove(maxV)
innerlist.remove(minV)
print(myList)
```
This causes me some errors, which I do not particularly understand. I'm quite sure that innerlist is not an array but an ordinary variable. Still, I think it should somehow be possible to remove the min and max elements from a two-dimensional list.
I mean I need to remove the highest and lowest values from every inner list in my list.
Looking for help!
Regards. | Just for the sake of showing a much simpler way of doing this using `list comprehensions`, the `sorted` method and `slicing`:
```
d = [[1, 3, 4], [2, 4, 4], [3, 4, 5]]
n = [sorted(l)[1:-1] for l in d]
print(n)
# [[3], [4], [4]]
```
Some reading material on each of the items used to solve this problem:
* [list
comprehension](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions)
* [sorted](https://docs.python.org/3/library/functions.html#sorted)
* [slicing](http://stackoverflow.com/a/509295/1832539)
To take care of duplicates, [this](http://stackoverflow.com/a/36224499/1832539) answer by Padraic is very well done. |
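If you only want to drop a single occurrence of the highest and lowest value (keeping the original order of what remains), `list.remove` is an alternative; a sketch:

```python
d = [[1, 3, 4], [2, 4, 4], [3, 4, 5]]

def drop_min_max(l):
    """Remove one occurrence each of the smallest and largest value."""
    l = l[:]  # work on a copy so the input list is untouched
    l.remove(min(l))
    l.remove(max(l))
    return l

print([drop_min_max(l) for l in d])  # [[3], [4], [4]]
```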
Count multiple occurrences in a set list | 36,245,973 | 3 | 2016-03-27T09:47:56Z | 36,245,998 | 7 | 2016-03-27T09:52:07Z | [
"python",
"list",
"count"
] | Is there a way to count the number of occurrences across a set of string lists?
For example, when I have this list, it counts 7 `' '` blanks.
```
list = [[' ', ' ', ' ', ' ', ' ', ' ', ' ']]
print(list.count(' '))
```
Is there a way I can do this same thing but for a set of multiple lists? Like this for example below:
```
set = [[' ', ' ', ' ', ' ', ' ', ' ', ' '],
[' ', ' ', ' ', ' ', ' ', ' ', ' '],
[' ', ' ', ' ', ' ', ' ', ' ', ' ']]
print(set.count(' '))
```
When I do it this same way, the output I get is `0` and not the actual count of occurrences. | # Solution
This works:
```
>>> data = [[' ', ' ', ' ', ' ', ' ', ' ', ' '],
[' ', ' ', ' ', ' ', ' ', ' ', ' '],
[' ', ' ', ' ', ' ', ' ', ' ', ' ']]
>>> sum(x.count(' ') for x in data)
21
```
You need to count in each sub list. I use a [generator expression](https://docs.python.org/3/reference/expressions.html#generator-expressions) to do this and sum the results from all sub lists.
BTW, don't use `set` as a variable name. It is a built-in.
# Performance
While not that important for many cases, performance can be interesting:
```
%timeit sum(x.count(' ') for x in data)
1000000 loops, best of 3: 1.28 µs per loop
```
vs.
```
%timeit sum(1 for i in chain.from_iterable(data) if i==' ')
100000 loops, best of 3: 4.79 µs per loop
``` |
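Another option is to count everything in one pass with `collections.Counter` over the flattened data; a sketch (handy if you also want counts for other characters):

```python
from collections import Counter
from itertools import chain

data = [[' ', ' ', ' ', ' ', ' ', ' ', ' '],
        [' ', ' ', ' ', ' ', ' ', ' ', ' '],
        [' ', ' ', ' ', ' ', ' ', ' ', ' ']]

# Flatten the nested lists, then tally every element at once.
counts = Counter(chain.from_iterable(data))
print(counts[' '])  # 21
```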
how to setup cuDnn with theano on Windows 7 64 bit | 36,248,056 | 4 | 2016-03-27T13:48:04Z | 36,464,973 | 9 | 2016-04-07T01:10:01Z | [
"python",
"theano",
"cudnn"
] | I have installed the `Theano` framework and enabled CUDA on my machine; however, when I `import theano` in my Python console, I get the following message:
```
>>> import theano
Using gpu device 0: GeForce GTX 950 (CNMeM is disabled, CuDNN not available)
```
Since it says "CuDNN not available", I downloaded `cuDNN` from the Nvidia website. I also updated 'path' in the environment, and added 'optimizer_including=cudnn' to the '.theanorc.txt' config file.
Then, I tried again, but failed, with:
```
>>> import theano
Using gpu device 0: GeForce GTX 950 (CNMeM is disabled, CuDNN not available)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Anaconda2\lib\site-packages\theano\__init__.py", line 111, in <module>
theano.sandbox.cuda.tests.test_driver.test_nvidia_driver1()
File "C:\Anaconda2\lib\site-packages\theano\sandbox\cuda\tests\test_driver.py", line 31, in test_nvidia_driver1
profile=False)
File "C:\Anaconda2\lib\site-packages\theano\compile\function.py", line 320, in function
output_keys=output_keys)
File "C:\Anaconda2\lib\site-packages\theano\compile\pfunc.py", line 479, in pfunc
output_keys=output_keys)
File "C:\Anaconda2\lib\site-packages\theano\compile\function_module.py", line 1776, in orig_function
output_keys=output_keys).create(
File "C:\Anaconda2\lib\site-packages\theano\compile\function_module.py", line 1456, in __init__
optimizer_profile = optimizer(fgraph)
File "C:\Anaconda2\lib\site-packages\theano\gof\opt.py", line 101, in __call__
return self.optimize(fgraph)
File "C:\Anaconda2\lib\site-packages\theano\gof\opt.py", line 89, in optimize
ret = self.apply(fgraph, *args, **kwargs)
File "C:\Anaconda2\lib\site-packages\theano\gof\opt.py", line 230, in apply
sub_prof = optimizer.optimize(fgraph)
File "C:\Anaconda2\lib\site-packages\theano\gof\opt.py", line 89, in optimize
ret = self.apply(fgraph, *args, **kwargs)
File "C:\Anaconda2\lib\site-packages\theano\gof\opt.py", line 230, in apply
sub_prof = optimizer.optimize(fgraph)
File "C:\Anaconda2\lib\site-packages\theano\gof\opt.py", line 89, in optimize
ret = self.apply(fgraph, *args, **kwargs)
File "C:\Anaconda2\lib\site-packages\theano\sandbox\cuda\dnn.py", line 2508, in apply
dnn_available.msg)
AssertionError: cuDNN optimization was enabled, but Theano was not able to use it. We got this error:
Theano can not compile with cuDNN. We got this error:
>>>
```
Can anyone help me? Thanks. | There should be a way to do it by setting only the Path environment variable, but I could never get that to work. The only thing that worked for me was to manually copy the CuDNN files into the appropriate folders in your CUDA installation.
For example, if your CUDA installation is in C:\CUDA\v7.0 and you extracted CuDNN to C:\CuDNN you would copy as follows:
* The contents of C:\CuDNN\lib\x64\ would be copied to C:\CUDA\v7.0\lib\x64\
* The contents of C:\CuDNN\include\ would be copied to C:\CUDA\v7.0\include\
* The contents of C:\CuDNN\bin\ would be copied to C:\CUDA\v7.0\bin\
After that it should work. |
Importing installed package from script raises "AttributeError: module has no attribute" or "ImportError: cannot import name" | 36,250,353 | 15 | 2016-03-27T17:27:05Z | 36,250,354 | 17 | 2016-03-27T17:27:05Z | [
"python",
"exception",
"python-module",
"shadowing"
] | I have a script named `requests.py` that imports the requests package. The script either can't access attributes from the package, or can't import them. Why isn't this working and how do I fix it?
The following code raises an `AttributeError`.
```
import requests
res = requests.get('http://www.google.ca')
print(res)
```
```
Traceback (most recent call last):
File "/Users/me/dev/rough/requests.py", line 1, in <module>
import requests
File "/Users/me/dev/rough/requests.py", line 3, in <module>
requests.get('http://www.google.ca')
AttributeError: module 'requests' has no attribute 'get'
```
The following code raises an `ImportError`.
```
from requests import get
res = get('http://www.google.ca')
print(res)
```
```
Traceback (most recent call last):
File "requests.py", line 1, in <module>
from requests import get
File "/Users/me/dev/rough/requests.py", line 1, in <module>
from requests import get
ImportError: cannot import name 'get'
```
[The following code](http://docs.python-requests.org/en/master/user/advanced/#custom-authentication) raises an `ImportError`.
```
from requests.auth import AuthBase
class PizzaAuth(AuthBase):
"""Attaches HTTP Pizza Authentication to the given Request object."""
def __init__(self, username):
# setup any auth-related data here
self.username = username
def __call__(self, r):
# modify and return the request
r.headers['X-Pizza'] = self.username
return r
```
```
Traceback (most recent call last):
File "requests.py", line 1, in <module>
from requests.auth import AuthBase
File "/Users/me/dev/rough/requests.py", line 1, in <module>
from requests.auth import AuthBase
ImportError: No module named 'requests.auth'; 'requests' is not a package
``` | This happens because your local module named `requests.py` shadows the installed `requests` module you are trying to use. The current directory is prepended to `sys.path`, so the local name takes precedence over the installed name.
An extra debugging tip when this comes up: look at the traceback carefully and notice that the name of your script matches the module you are trying to import:
Your script. Notice the name you used:
```
File "/Users/me/dev/rough/requests.py", line 1, in <module>
```
The module you are trying to import: `requests`
Rename your module to something else to avoid the name collision.
Python may generate a `requests.pyc` file next to your `requests.py` file (in the `__pycache__` directory in Python 3). Remove that as well after your rename, as the interpreter will still reference that file, reproducing the error. However, the `pyc` file in `__pycache__` *should* not affect your code once the `py` file has been removed.
In the example, renaming the file to `my_requests.py`, removing `requests.pyc`, and running again successfully prints `<Response [200]>`. |
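A quick way to confirm this kind of shadowing is to print the imported module's `__file__` attribute. The sketch below uses a stdlib module purely for illustration; the same check works for `requests`:

```python
import sys
import json  # stand-in for any package you suspect is being shadowed

# sys.path[0] is the directory of the running script; files there take
# precedence over installed packages with the same name.
print(sys.path[0])

# __file__ reveals which file actually satisfied the import. If it points
# at your own script instead of site-packages, you have a name collision.
print(json.__file__)
```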
limit() and sort() order pymongo and mongodb | 36,250,963 | 12 | 2016-03-27T18:25:06Z | 36,311,154 | 9 | 2016-03-30T14:01:40Z | [
"python",
"mongodb",
"pymongo"
] | Despite reading people's answers stating that the sort is done first, the evidence suggests otherwise: the limit seems to be applied before the sort. Is there a way to force the sort to always happen first?
```
views = mongo.db.view_logging.find().sort([('count', 1)]).limit(10)
```
Whether I use `.sort().limit()` or `.limit().sort()`, the limit takes precedence. I wonder if this is something to do with `pymongo`... | According to the [documentation](https://docs.mongodb.org/manual/reference/method/db.collection.find/#combine-cursor-methods), **regardless of which goes first in your chain of commands, `sort()` would be always applied before the `limit()`.**
You can also study the [`.explain()`](https://docs.mongodb.org/manual/reference/method/cursor.explain/) results of your query and look at the execution stages - you will find that the sorting input stage examines all of the filtered documents (in your case, all documents in the collection), and only then is the limit applied.
---
Let's go through an example.
Imagine there is a `foo` database with a `test` collection having 6 documents:
```
>>> col = db.foo.test
>>> for doc in col.find():
... print(doc)
{'time': '2016-03-28 12:12:00', '_id': ObjectId('56f9716ce4b05e6b92be87f2'), 'value': 90}
{'time': '2016-03-28 12:13:00', '_id': ObjectId('56f971a3e4b05e6b92be87fc'), 'value': 82}
{'time': '2016-03-28 12:14:00', '_id': ObjectId('56f971afe4b05e6b92be87fd'), 'value': 75}
{'time': '2016-03-28 12:15:00', '_id': ObjectId('56f971b7e4b05e6b92be87ff'), 'value': 72}
{'time': '2016-03-28 12:16:00', '_id': ObjectId('56f971c0e4b05e6b92be8803'), 'value': 81}
{'time': '2016-03-28 12:17:00', '_id': ObjectId('56f971c8e4b05e6b92be8806'), 'value': 90}
```
Now, let's execute queries with different order of `sort()` and `limit()` and check the results and the explain plan.
Sort and then limit:
```
>>> from pprint import pprint
>>> cursor = col.find().sort([('time', 1)]).limit(3)
>>> sort_limit_plan = cursor.explain()
>>> pprint(sort_limit_plan)
{u'executionStats': {u'allPlansExecution': [],
u'executionStages': {u'advanced': 3,
u'executionTimeMillisEstimate': 0,
u'inputStage': {u'advanced': 6,
u'direction': u'forward',
u'docsExamined': 6,
u'executionTimeMillisEstimate': 0,
u'filter': {u'$and': []},
u'invalidates': 0,
u'isEOF': 1,
u'nReturned': 6,
u'needFetch': 0,
u'needTime': 1,
u'restoreState': 0,
u'saveState': 0,
u'stage': u'COLLSCAN',
u'works': 8},
u'invalidates': 0,
u'isEOF': 1,
u'limitAmount': 3,
u'memLimit': 33554432,
u'memUsage': 213,
u'nReturned': 3,
u'needFetch': 0,
u'needTime': 8,
u'restoreState': 0,
u'saveState': 0,
u'sortPattern': {u'time': 1},
u'stage': u'SORT',
u'works': 13},
u'executionSuccess': True,
u'executionTimeMillis': 0,
u'nReturned': 3,
u'totalDocsExamined': 6,
u'totalKeysExamined': 0},
u'queryPlanner': {u'indexFilterSet': False,
u'namespace': u'foo.test',
u'parsedQuery': {u'$and': []},
u'plannerVersion': 1,
u'rejectedPlans': [],
u'winningPlan': {u'inputStage': {u'direction': u'forward',
u'filter': {u'$and': []},
u'stage': u'COLLSCAN'},
u'limitAmount': 3,
u'sortPattern': {u'time': 1},
u'stage': u'SORT'}},
u'serverInfo': {u'gitVersion': u'6ce7cbe8c6b899552dadd907604559806aa2e9bd',
u'host': u'h008742.mongolab.com',
u'port': 53439,
u'version': u'3.0.7'}}
```
Limit and then sort:
```
>>> cursor = col.find().limit(3).sort([('time', 1)])
>>> limit_sort_plan = cursor.explain()
>>> pprint(limit_sort_plan)
{u'executionStats': {u'allPlansExecution': [],
u'executionStages': {u'advanced': 3,
u'executionTimeMillisEstimate': 0,
u'inputStage': {u'advanced': 6,
u'direction': u'forward',
u'docsExamined': 6,
u'executionTimeMillisEstimate': 0,
u'filter': {u'$and': []},
u'invalidates': 0,
u'isEOF': 1,
u'nReturned': 6,
u'needFetch': 0,
u'needTime': 1,
u'restoreState': 0,
u'saveState': 0,
u'stage': u'COLLSCAN',
u'works': 8},
u'invalidates': 0,
u'isEOF': 1,
u'limitAmount': 3,
u'memLimit': 33554432,
u'memUsage': 213,
u'nReturned': 3,
u'needFetch': 0,
u'needTime': 8,
u'restoreState': 0,
u'saveState': 0,
u'sortPattern': {u'time': 1},
u'stage': u'SORT',
u'works': 13},
u'executionSuccess': True,
u'executionTimeMillis': 0,
u'nReturned': 3,
u'totalDocsExamined': 6,
u'totalKeysExamined': 0},
u'queryPlanner': {u'indexFilterSet': False,
u'namespace': u'foo.test',
u'parsedQuery': {u'$and': []},
u'plannerVersion': 1,
u'rejectedPlans': [],
u'winningPlan': {u'inputStage': {u'direction': u'forward',
u'filter': {u'$and': []},
u'stage': u'COLLSCAN'},
u'limitAmount': 3,
u'sortPattern': {u'time': 1},
u'stage': u'SORT'}},
u'serverInfo': {u'gitVersion': u'6ce7cbe8c6b899552dadd907604559806aa2e9bd',
u'host': u'h008742.mongolab.com',
u'port': 53439,
u'version': u'3.0.7'}}
```
As you can see, in both cases the sort is applied first across all 6 documents, and only then is the limit of 3 applied.
And, the **execution plans are exactly the same**:
```
>>> from copy import deepcopy # just in case
>>> cursor = col.find().sort([('time', 1)]).limit(3)
>>> sort_limit_plan = deepcopy(cursor.explain())
>>> cursor = col.find().limit(3).sort([('time', 1)])
>>> limit_sort_plan = deepcopy(cursor.explain())
>>> sort_limit_plan == limit_sort_plan
True
```
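To make the semantics concrete, here is a plain-Python model of what the server effectively computes for the sample collection above (not pymongo code -- just the equivalent sort-then-slice):

```python
# The six sample documents from the collection above (times ascending).
docs = [
    {'time': '2016-03-28 12:12:00', 'value': 90},
    {'time': '2016-03-28 12:13:00', 'value': 82},
    {'time': '2016-03-28 12:14:00', 'value': 75},
    {'time': '2016-03-28 12:15:00', 'value': 72},
    {'time': '2016-03-28 12:16:00', 'value': 81},
    {'time': '2016-03-28 12:17:00', 'value': 90},
]

# The server sorts the full result set first, then cuts it to the limit --
# regardless of the order the cursor methods were chained in.
top3 = sorted(docs, key=lambda d: d['time'])[:3]
print([d['value'] for d in top3])  # -> [90, 82, 75]
```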
Also see:
* [How do you tell Mongo to sort a collection before limiting the results?](http://stackoverflow.com/questions/17509025/how-do-you-tell-mongo-to-sort-a-collection-before-limiting-the-results) |
Counting Cars OpenCV + Python Issue | 36,254,452 | 5 | 2016-03-28T00:57:19Z | 36,274,515 | 13 | 2016-03-29T02:50:27Z | [
"python",
"opencv",
"tracking",
"traffic"
] | I have been trying to count cars as they cross a line, and it works, but the problem is that a single car is counted many times, when it should be counted only once.
Here is the code I am using:
```
import cv2
import numpy as np
bgsMOG = cv2.BackgroundSubtractorMOG()
cap = cv2.VideoCapture("traffic.avi")
counter = 0
if cap:
while True:
ret, frame = cap.read()
if ret:
fgmask = bgsMOG.apply(frame, None, 0.01)
cv2.line(frame,(0,60),(160,60),(255,255,0),1)
# To find the countours of the Cars
contours, hierarchy = cv2.findContours(fgmask,
cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
try:
hierarchy = hierarchy[0]
except:
hierarchy = []
for contour, hier in zip(contours, hierarchy):
(x, y, w, h) = cv2.boundingRect(contour)
if w > 20 and h > 20:
cv2.rectangle(frame, (x,y), (x+w,y+h), (255, 0, 0), 1)
#To find centroid of the Car
x1 = w/2
y1 = h/2
cx = x+x1
cy = y+y1
## print "cy=", cy
## print "cx=", cx
centroid = (cx,cy)
## print "centoid=", centroid
# Draw the circle of Centroid
cv2.circle(frame,(int(cx),int(cy)),2,(0,0,255),-1)
# To make sure the Car crosses the line
## dy = cy-108
## print "dy", dy
if centroid > (27, 38) and centroid < (134, 108):
## if (cx <= 132)and(cx >= 20):
counter +=1
## print "counter=", counter
## if cy > 10 and cy < 160:
cv2.putText(frame, str(counter), (x,y-5),
cv2.FONT_HERSHEY_SIMPLEX,
0.5, (255, 0, 255), 2)
## cv2.namedWindow('Output',cv2.cv.CV_WINDOW_NORMAL)
cv2.imshow('Output', frame)
## cv2.imshow('FGMASK', fgmask)
key = cv2.waitKey(60)
if key == 27:
break
cap.release()
cv2.destroyAllWindows()
```
The video, called traffic.avi, is on my GitHub page at <https://github.com/Tes3awy/MatLab-Tutorials>; it is also a built-in video in the Matlab library.
Any help to make sure each car is counted only once?
---
EDIT: The individual frames of the video look as follows:
[](http://i.imgur.com/R5O1yYD.png) | # Preparation
In order to understand what is happening, and eventually solve our problem, we first need to improve the script a little.
I've added logging of the important steps of your algorithm, refactored the code a little, added saving of the mask and processed images, and added the ability to run the script on the individual frame images, along with some other modifications.
This is what the script looks like at this point:
```
import logging
import logging.handlers
import os
import time
import sys
import cv2
import numpy as np
from vehicle_counter import VehicleCounter
# ============================================================================
IMAGE_DIR = "images"
IMAGE_FILENAME_FORMAT = IMAGE_DIR + "/frame_%04d.png"
# Support either video file or individual frames
CAPTURE_FROM_VIDEO = False
if CAPTURE_FROM_VIDEO:
IMAGE_SOURCE = "traffic.avi" # Video file
else:
IMAGE_SOURCE = IMAGE_FILENAME_FORMAT # Image sequence
# Time to wait between frames, 0=forever
WAIT_TIME = 1 # 250 # ms
LOG_TO_FILE = True
# Colours for drawing on processed frames
DIVIDER_COLOUR = (255, 255, 0)
BOUNDING_BOX_COLOUR = (255, 0, 0)
CENTROID_COLOUR = (0, 0, 255)
# ============================================================================
def init_logging():
main_logger = logging.getLogger()
formatter = logging.Formatter(
fmt='%(asctime)s.%(msecs)03d %(levelname)-8s [%(name)s] %(message)s'
, datefmt='%Y-%m-%d %H:%M:%S')
handler_stream = logging.StreamHandler(sys.stdout)
handler_stream.setFormatter(formatter)
main_logger.addHandler(handler_stream)
if LOG_TO_FILE:
handler_file = logging.handlers.RotatingFileHandler("debug.log"
, maxBytes = 2**24
, backupCount = 10)
handler_file.setFormatter(formatter)
main_logger.addHandler(handler_file)
main_logger.setLevel(logging.DEBUG)
return main_logger
# ============================================================================
def save_frame(file_name_format, frame_number, frame, label_format):
file_name = file_name_format % frame_number
label = label_format % frame_number
log.debug("Saving %s as '%s'", label, file_name)
cv2.imwrite(file_name, frame)
# ============================================================================
def get_centroid(x, y, w, h):
x1 = int(w / 2)
y1 = int(h / 2)
cx = x + x1
cy = y + y1
return (cx, cy)
# ============================================================================
def detect_vehicles(fg_mask):
log = logging.getLogger("detect_vehicles")
MIN_CONTOUR_WIDTH = 21
MIN_CONTOUR_HEIGHT = 21
# Find the contours of any vehicles in the image
contours, hierarchy = cv2.findContours(fg_mask
, cv2.RETR_EXTERNAL
, cv2.CHAIN_APPROX_SIMPLE)
log.debug("Found %d vehicle contours.", len(contours))
matches = []
for (i, contour) in enumerate(contours):
(x, y, w, h) = cv2.boundingRect(contour)
contour_valid = (w >= MIN_CONTOUR_WIDTH) and (h >= MIN_CONTOUR_HEIGHT)
log.debug("Contour #%d: pos=(x=%d, y=%d) size=(w=%d, h=%d) valid=%s"
, i, x, y, w, h, contour_valid)
if not contour_valid:
continue
centroid = get_centroid(x, y, w, h)
matches.append(((x, y, w, h), centroid))
return matches
# ============================================================================
def filter_mask(fg_mask):
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
# Fill any small holes
closing = cv2.morphologyEx(fg_mask, cv2.MORPH_CLOSE, kernel)
# Remove noise
opening = cv2.morphologyEx(closing, cv2.MORPH_OPEN, kernel)
# Dilate to merge adjacent blobs
dilation = cv2.dilate(opening, kernel, iterations = 2)
return dilation
# ============================================================================
def process_frame(frame_number, frame, bg_subtractor, car_counter):
log = logging.getLogger("process_frame")
# Create a copy of source frame to draw into
processed = frame.copy()
# Draw dividing line -- we count cars as they cross this line.
cv2.line(processed, (0, car_counter.divider), (frame.shape[1], car_counter.divider), DIVIDER_COLOUR, 1)
# Remove the background
fg_mask = bg_subtractor.apply(frame, None, 0.01)
fg_mask = filter_mask(fg_mask)
save_frame(IMAGE_DIR + "/mask_%04d.png"
, frame_number, fg_mask, "foreground mask for frame #%d")
matches = detect_vehicles(fg_mask)
log.debug("Found %d valid vehicle contours.", len(matches))
for (i, match) in enumerate(matches):
contour, centroid = match
log.debug("Valid vehicle contour #%d: centroid=%s, bounding_box=%s", i, centroid, contour)
x, y, w, h = contour
# Mark the bounding box and the centroid on the processed frame
# NB: Fixed the off-by one in the bottom right corner
cv2.rectangle(processed, (x, y), (x + w - 1, y + h - 1), BOUNDING_BOX_COLOUR, 1)
cv2.circle(processed, centroid, 2, CENTROID_COLOUR, -1)
log.debug("Updating vehicle count...")
car_counter.update_count(matches, processed)
return processed
# ============================================================================
def main():
log = logging.getLogger("main")
log.debug("Creating background subtractor...")
bg_subtractor = cv2.BackgroundSubtractorMOG()
log.debug("Pre-training the background subtractor...")
default_bg = cv2.imread(IMAGE_FILENAME_FORMAT % 119)
bg_subtractor.apply(default_bg, None, 1.0)
car_counter = None # Will be created after first frame is captured
# Set up image source
log.debug("Initializing video capture device #%s...", IMAGE_SOURCE)
cap = cv2.VideoCapture(IMAGE_SOURCE)
frame_width = cap.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH)
frame_height = cap.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT)
log.debug("Video capture frame size=(w=%d, h=%d)", frame_width, frame_height)
log.debug("Starting capture loop...")
frame_number = -1
while True:
frame_number += 1
log.debug("Capturing frame #%d...", frame_number)
ret, frame = cap.read()
if not ret:
log.error("Frame capture failed, stopping...")
break
log.debug("Got frame #%d: shape=%s", frame_number, frame.shape)
if car_counter is None:
# We do this here, so that we can initialize with actual frame size
log.debug("Creating vehicle counter...")
car_counter = VehicleCounter(frame.shape[:2], frame.shape[0] / 2)
# Archive raw frames from video to disk for later inspection/testing
if CAPTURE_FROM_VIDEO:
save_frame(IMAGE_FILENAME_FORMAT
, frame_number, frame, "source frame #%d")
log.debug("Processing frame #%d...", frame_number)
processed = process_frame(frame_number, frame, bg_subtractor, car_counter)
save_frame(IMAGE_DIR + "/processed_%04d.png"
, frame_number, processed, "processed frame #%d")
cv2.imshow('Source Image', frame)
cv2.imshow('Processed Image', processed)
log.debug("Frame #%d processed.", frame_number)
c = cv2.waitKey(WAIT_TIME)
if c == 27:
log.debug("ESC detected, stopping...")
break
log.debug("Closing video capture device...")
cap.release()
cv2.destroyAllWindows()
log.debug("Done.")
# ============================================================================
if __name__ == "__main__":
log = init_logging()
if not os.path.exists(IMAGE_DIR):
log.debug("Creating image directory `%s`...", IMAGE_DIR)
os.makedirs(IMAGE_DIR)
main()
```
This script is responsible for processing the stream of images and identifying all the vehicles in each frame -- I refer to them as `matches` in the code.
---
The task of counting the detected vehicles is delegated to class `VehicleCounter`. The reason why I chose to make this a class will become evident as we progress. I did not implement your vehicle counting algorithm, because it will not work for reasons that will again become evident as we dig deeper into this.
File `vehicle_counter.py` contains the following code:
```
import logging
# ============================================================================
class VehicleCounter(object):
def __init__(self, shape, divider):
self.log = logging.getLogger("vehicle_counter")
self.height, self.width = shape
self.divider = divider
self.vehicle_count = 0
def update_count(self, matches, output_image = None):
self.log.debug("Updating count using %d matches...", len(matches))
# ============================================================================
```
---
Finally, I wrote a script that will stitch all the generated images together, so it's easier to inspect them:
```
import cv2
import numpy as np
# ============================================================================
INPUT_WIDTH = 160
INPUT_HEIGHT = 120
OUTPUT_TILE_WIDTH = 10
OUTPUT_TILE_HEIGHT = 12
TILE_COUNT = OUTPUT_TILE_WIDTH * OUTPUT_TILE_HEIGHT
# ============================================================================
def stitch_images(input_format, output_filename):
output_shape = (INPUT_HEIGHT * OUTPUT_TILE_HEIGHT
, INPUT_WIDTH * OUTPUT_TILE_WIDTH
, 3)
output = np.zeros(output_shape, np.uint8)
for i in range(TILE_COUNT):
img = cv2.imread(input_format % i)
cv2.rectangle(img, (0, 0), (INPUT_WIDTH - 1, INPUT_HEIGHT - 1), (0, 0, 255), 1)
# Draw the frame number
cv2.putText(img, str(i), (2, 10)
, cv2.FONT_HERSHEY_PLAIN, 0.7, (255, 255, 255), 1)
x = i % OUTPUT_TILE_WIDTH * INPUT_WIDTH
y = i / OUTPUT_TILE_WIDTH * INPUT_HEIGHT
output[y:y+INPUT_HEIGHT, x:x+INPUT_WIDTH,:] = img
cv2.imwrite(output_filename, output)
# ============================================================================
stitch_images("images/frame_%04d.png", "stitched_frames.png")
stitch_images("images/mask_%04d.png", "stitched_masks.png")
stitch_images("images/processed_%04d.png", "stitched_processed.png")
```
---
# Analysis
In order to solve this problem, we should have some idea about what results we expect to get. We should also label all the distinct cars in the video, so it's easier to talk about them.

If we run our script and stitch the images together, we get a number of useful files to help us analyze the problem:
* Image containing a [mosaic of input frames](http://i.imgur.com/R5O1yYD.png)
* Image containing a [mosaic of foreground masks](http://i.imgur.com/zXKlmBN.png):
[](http://i.imgur.com/zXKlmBN.png)
* Image containing a [mosaic of processed frames](http://i.imgur.com/4ceMiT2.png)
[](http://i.imgur.com/4ceMiT2.png)
* The [debug log](http://pastebin.com/7yAMxLkz) for the run.
Upon inspecting those, a number of issues become evident:
* The foreground masks tend to be noisy. We should do some filtering (erode/dilate?) to get rid of the noise and narrow gaps.
* Sometimes we miss vehicles (grey ones).
* Some vehicles get detected twice in the single frame.
* Vehicles are rarely detected in the upper regions of the frame.
* The same vehicle is often detected in consecutive frames. We need to figure out a way of tracking the same vehicle in consecutive frames, and counting it only once.
---
# Solution
## 1. Pre-Seeding the Background Subtractor
Our video is quite short, only 120 frames. With a learning rate of `0.01`, it will take a substantial part of the video for the background detector to stabilize.
Fortunately, the last frame of the video (frame number 119) is completely devoid of vehicles, and therefore we can use it as our initial background image. (Other options for obtaining a suitable image are mentioned in the notes and comments.)

To use this initial background image, we simply load it, and `apply` it on the background subtractor with learning factor `1.0`:
```
bg_subtractor = cv2.BackgroundSubtractorMOG()
default_bg = cv2.imread(IMAGE_FILENAME_FORMAT % 119)
bg_subtractor.apply(default_bg, None, 1.0)
```
When we look at the new [mosaic of masks](http://i.imgur.com/vcQ66Zb.png) we can see that we get less noise and the vehicle detection works better in the early frames.
[](http://i.imgur.com/vcQ66Zb.png)
## 2. Cleaning Up the Foreground Mask
A simple approach to improve our foreground mask is to apply a few [morphological transformations](http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.html).
```
def filter_mask(fg_mask):
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
# Fill any small holes
closing = cv2.morphologyEx(fg_mask, cv2.MORPH_CLOSE, kernel)
# Remove noise
opening = cv2.morphologyEx(closing, cv2.MORPH_OPEN, kernel)
# Dilate to merge adjacent blobs
dilation = cv2.dilate(opening, kernel, iterations = 2)
return dilation
```
Inspecting the [masks](http://i.imgur.com/iavT8I1.png), [processed frames](http://i.imgur.com/ZKqDpfU.png) and the [log file](http://pastebin.com/XvijCjYR) generated with filtering, we can see that we now detect vehicles more reliably, and have mitigated the issue of different parts of one vehicle being detected as separate objects.
[](http://i.imgur.com/iavT8I1.png)
[](http://i.imgur.com/ZKqDpfU.png)
## 3. Tracking Vehicles Between Frames
At this point, we need to go through our log file, and collect all the centroid coordinates for each vehicle. This will allow us to plot and inspect the path each vehicle traces across the image, and develop an algorithm to do this automatically. To make this process easier, we can create a [reduced log](http://pastebin.com/vsQ4wYSJ) by grepping out the relevant entries.
The lists of centroid coordinates:
```
traces = {
'A': [(112, 36), (112, 45), (112, 52), (112, 54), (112, 63), (111, 73), (111, 86), (111, 91), (111, 97), (110, 105)]
, 'B': [(119, 37), (120, 42), (121, 54), (121, 55), (123, 64), (124, 74), (125, 87), (127, 94), (125, 100), (126, 108)]
, 'C': [(93, 23), (91, 27), (89, 31), (87, 36), (85, 42), (82, 49), (79, 59), (74, 71), (70, 82), (62, 86), (61, 92), (55, 101)]
, 'D': [(118, 30), (124, 83), (125, 90), (116, 101), (122, 100)]
, 'E': [(77, 27), (75, 30), (73, 33), (70, 37), (67, 42), (63, 47), (59, 53), (55, 59), (49, 67), (43, 75), (36, 85), (27, 92), (24, 97), (20, 102)]
, 'F': [(119, 30), (120, 34), (120, 39), (122, 59), (123, 60), (124, 70), (125, 82), (127, 91), (126, 97), (128, 104)]
, 'G': [(88, 37), (87, 41), (85, 48), (82, 55), (79, 63), (76, 74), (72, 87), (67, 92), (65, 98), (60, 106)]
, 'H': [(124, 35), (123, 40), (125, 45), (127, 59), (126, 59), (128, 67), (130, 78), (132, 88), (134, 93), (135, 99), (135, 107)]
, 'I': [(98, 26), (97, 30), (96, 34), (94, 40), (92, 47), (90, 55), (87, 64), (84, 77), (79, 87), (74, 93), (73, 102)]
, 'J': [(123, 60), (125, 63), (125, 81), (127, 93), (126, 98), (125, 100)]
}
```
Individual vehicle traces plotted on the background:

Combined enlarged image of all the vehicle traces:

### Vectors
In order to analyze the movement, we need to work with vectors (i.e. the distance and direction moved). The following diagram shows how the angles correspond to movement of vehicles in the image.

We can use the following function to calculate the vector between two points:
```
def get_vector(a, b):
"""Calculate vector (distance, angle in degrees) from point a to point b.
Angle ranges from -180 to 180 degrees.
Vector with angle 0 points straight down on the image.
Values increase in clockwise direction.
"""
dx = float(b[0] - a[0])
dy = float(b[1] - a[1])
distance = math.sqrt(dx**2 + dy**2)
if dy > 0:
angle = math.degrees(math.atan(-dx/dy))
elif dy == 0:
if dx < 0:
angle = 90.0
elif dx > 0:
angle = -90.0
else:
angle = 0.0
else:
if dx < 0:
angle = 180 - math.degrees(math.atan(dx/dy))
elif dx > 0:
angle = -180 - math.degrees(math.atan(dx/dy))
else:
angle = 180.0
return distance, angle
```
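As a side note, the piecewise logic above is equivalent to a single `math.atan2` call, which handles all four quadrants at once. The compact sketch below verifies the angle convention (0° points straight down, angles increase clockwise) at a few hand-picked points:

```python
import math

def get_vector(a, b):
    # Same convention as the piecewise version above:
    # 0 degrees points straight down, clockwise positive, range -180..180.
    dx = float(b[0] - a[0])
    dy = float(b[1] - a[1])
    distance = math.hypot(dx, dy)
    angle = math.degrees(math.atan2(-dx, dy))  # atan2 covers every quadrant
    return distance, angle

print(get_vector((0, 0), (0, 5)))   # straight down  -> distance 5, angle 0
print(get_vector((0, 0), (-3, 0)))  # straight left  -> distance 3, angle 90
print(get_vector((0, 0), (3, 4)))   # down-right     -> distance 5, angle about -36.87
```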
### Categorization
One way we can look for patterns that could be used to categorize the movements as valid/invalid is to make a scatter plot (angle vs. distance):

* Green points represent valid movement, that we determined using the lists of points for each vehicle.
* Red points represent invalid movement - vectors between points in adjacent traffic lanes.
* I plotted two blue curves, which we can use to separate the two types of movements. Any point that lies below either curve can be considered as valid. The curves are:
+ `distance = -0.008 * angle**2 + 0.4 * angle + 25.0`
+ `distance = 10.0`
We can use the following function to categorize the movement vectors:
```
def is_valid_vector(a):
distance, angle = a
threshold_distance = max(10.0, -0.008 * angle**2 + 0.4 * angle + 25.0)
return (distance <= threshold_distance)
```
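To get a feel for the threshold, we can evaluate the function at a few sample vectors (the function is repeated here so the snippet runs stand-alone):

```python
def is_valid_vector(a):
    distance, angle = a
    # Parabolic threshold in the angle, floored at a distance of 10 px.
    threshold_distance = max(10.0, -0.008 * angle**2 + 0.4 * angle + 25.0)
    return (distance <= threshold_distance)

print(is_valid_vector((8.0, 120.0)))  # short hop in any direction -> True
print(is_valid_vector((20.0, 0.0)))   # moderate move straight down -> True
print(is_valid_vector((40.0, 0.0)))   # too large a jump -> False
```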
NB: There is one outlier, which occurs due to our losing track of vehicle **D** in frames 43..48.
### Algorithm
We will use class `Vehicle` to store information about each tracked vehicle:
* Some kind of identifier
* List of positions, most recent at front
* Last-seen counter -- number of frames since we've last seen this vehicle
* Flag to mark whether the vehicle was counted or not
Class `VehicleCounter` will store a list of currently tracked vehicles and keep track of the total count. On each frame, we will use the list of bounding boxes and positions of identified vehicles (candidate list) to update the state of `VehicleCounter`:
1. Update currently tracked `Vehicle`s:
* For each vehicle
+ If there is any valid match for given vehicle, update vehicle position and reset its last-seen counter. Remove the match from the candidate list.
+ Otherwise, increase the last-seen counter for that vehicle.
2. Create new `Vehicle`s for any remaining matches
3. Update vehicle count
* For each vehicle
+ If the vehicle is past divider and has not been counted yet, update the total count and mark the vehicle as counted
4. Remove vehicles that are no longer visible
* For each vehicle
+ If the last-seen counter exceeds threshold, remove the vehicle
## 4. Solution
We can reuse the main script with the final version of `vehicle_counter.py`, containing the implementation of our counting algorithm:
```
import logging
import math
import cv2
import numpy as np
# ============================================================================
CAR_COLOURS = [ (0,0,255), (0,106,255), (0,216,255), (0,255,182), (0,255,76)
, (144,255,0), (255,255,0), (255,148,0), (255,0,178), (220,0,255) ]
# ============================================================================
class Vehicle(object):
def __init__(self, id, position):
self.id = id
self.positions = [position]
self.frames_since_seen = 0
self.counted = False
@property
def last_position(self):
return self.positions[-1]
def add_position(self, new_position):
self.positions.append(new_position)
self.frames_since_seen = 0
def draw(self, output_image):
car_colour = CAR_COLOURS[self.id % len(CAR_COLOURS)]
for point in self.positions:
cv2.circle(output_image, point, 2, car_colour, -1)
cv2.polylines(output_image, [np.int32(self.positions)]
, False, car_colour, 1)
# ============================================================================
class VehicleCounter(object):
def __init__(self, shape, divider):
self.log = logging.getLogger("vehicle_counter")
self.height, self.width = shape
self.divider = divider
self.vehicles = []
self.next_vehicle_id = 0
self.vehicle_count = 0
self.max_unseen_frames = 7
@staticmethod
def get_vector(a, b):
"""Calculate vector (distance, angle in degrees) from point a to point b.
Angle ranges from -180 to 180 degrees.
Vector with angle 0 points straight down on the image.
Values increase in clockwise direction.
"""
dx = float(b[0] - a[0])
dy = float(b[1] - a[1])
distance = math.sqrt(dx**2 + dy**2)
if dy > 0:
angle = math.degrees(math.atan(-dx/dy))
elif dy == 0:
if dx < 0:
angle = 90.0
elif dx > 0:
angle = -90.0
else:
angle = 0.0
else:
if dx < 0:
angle = 180 - math.degrees(math.atan(dx/dy))
elif dx > 0:
angle = -180 - math.degrees(math.atan(dx/dy))
else:
angle = 180.0
return distance, angle
@staticmethod
def is_valid_vector(a):
distance, angle = a
threshold_distance = max(10.0, -0.008 * angle**2 + 0.4 * angle + 25.0)
return (distance <= threshold_distance)
def update_vehicle(self, vehicle, matches):
# Find if any of the matches fits this vehicle
for i, match in enumerate(matches):
contour, centroid = match
vector = self.get_vector(vehicle.last_position, centroid)
if self.is_valid_vector(vector):
vehicle.add_position(centroid)
self.log.debug("Added match (%d, %d) to vehicle #%d. vector=(%0.2f,%0.2f)"
, centroid[0], centroid[1], vehicle.id, vector[0], vector[1])
return i
# No matches fit...
vehicle.frames_since_seen += 1
self.log.debug("No match for vehicle #%d. frames_since_seen=%d"
, vehicle.id, vehicle.frames_since_seen)
return None
def update_count(self, matches, output_image = None):
self.log.debug("Updating count using %d matches...", len(matches))
# First update all the existing vehicles
for vehicle in self.vehicles:
i = self.update_vehicle(vehicle, matches)
if i is not None:
del matches[i]
# Add new vehicles based on the remaining matches
for match in matches:
contour, centroid = match
new_vehicle = Vehicle(self.next_vehicle_id, centroid)
self.next_vehicle_id += 1
self.vehicles.append(new_vehicle)
self.log.debug("Created new vehicle #%d from match (%d, %d)."
, new_vehicle.id, centroid[0], centroid[1])
# Count any uncounted vehicles that are past the divider
for vehicle in self.vehicles:
if not vehicle.counted and (vehicle.last_position[1] > self.divider):
self.vehicle_count += 1
vehicle.counted = True
self.log.debug("Counted vehicle #%d (total count=%d)."
, vehicle.id, self.vehicle_count)
# Optionally draw the vehicles on an image
if output_image is not None:
for vehicle in self.vehicles:
vehicle.draw(output_image)
cv2.putText(output_image, ("%02d" % self.vehicle_count), (142, 10)
, cv2.FONT_HERSHEY_PLAIN, 0.7, (127, 255, 255), 1)
# Remove vehicles that have not been seen long enough
removed = [ v.id for v in self.vehicles
if v.frames_since_seen >= self.max_unseen_frames ]
self.vehicles[:] = [ v for v in self.vehicles
if not v.frames_since_seen >= self.max_unseen_frames ]
for id in removed:
self.log.debug("Removed vehicle #%d.", id)
self.log.debug("Count updated, tracking %d vehicles.", len(self.vehicles))
# ============================================================================
```
The program now draws the historical paths of all currently tracked vehicles into the output image, along with the vehicle count. Each vehicle is assigned 1 of 10 colours.
Notice that vehicle D ends up being tracked twice; however, it is counted only once, since we lose track of it before it crosses the divider. Ideas on how to resolve this are mentioned in the appendix.
Based on the last processed frame generated by the script

the total vehicle count is **10**. This is the correct result.
More details can be found in the output the script generated:
* Full [debug log](http://pastebin.com/M1rtdqk9)
* Filtered out [vehicle counter log](http://pastebin.com/kaZLzBTz)
* A mosaic of the processed frames:
[](http://i.imgur.com/Ipi0vkB.png)
---
## A. Potential Improvements
* Refactor, add unit tests.
* Improve filtering/preprocessing of the foreground mask
+ Multiple iterations of filtering, fill holes using `cv2.drawContours` with `CV_FILLED`?
+ Watershed Algorithm?
* Improve categorization of movement vectors
+ Create a predictor to estimate initial movement angle when vehicles are created (and only one position is known)... in order to be able to
+ Use *change in direction* rather than *direction* alone (I think this would cluster the angles of valid motion vectors close to zero).
* Improve vehicle tracking
+ Predict position for frames where vehicle is not seen.
## B. Notes
* It seems it's not possible to directly extract the current background image from `BackgroundSubtractorMOG` in Python (at least in OpenCV 2.4.x), but [there is a way to do it](http://stackoverflow.com/questions/19031836/get-background-model-from-backgroundsubtractormog2-in-python) with a little work.
* As suggested by [Henrik](http://stackoverflow.com/users/1331076/henrik), we can obtain a good estimate of the background using [median blending](http://petapixel.com/2013/05/29/a-look-at-reducing-noise-in-photographs-using-median-blending/). |
All possible ways to interleave two strings | 36,260,956 | 19 | 2016-03-28T11:01:14Z | 36,261,144 | 16 | 2016-03-28T11:11:36Z | [
"python",
"permutation",
"itertools"
] | I am trying to generate all possible ways to interleave any two arbitrary strings in Python.
For example: If the two strings are `'ab'` and `'cd'`, the output I wish to get is:
```
['abcd', 'acbd', 'acdb', 'cabd', 'cadb', 'cdab']
```
See `a` is always before `b` (and `c` before `d`). I am struggling to find a solution to this. I have tried itertools as shown below:
```
import itertools
def shuffle(s,t):
string = s+t
for i in itertools.permutations(string):
print(''.join(i))
shuffle('ab','cd')
```
But as expected, this returns all possible permutations disregarding order of `a` and `b` (and `c` and `d`). | ## The Idea
Let the two strings you want to interleave be `s` and `t`. We will use recursion to generate all the possible ways to interleave these two strings.
If at any point of time we have interleaved the first `i` characters of `s` and the first `j` characters of `t` to create some string `res`, then we have two ways to interleave them for the next step-
1. Append the `i+1` th character of `s` to `res`
2. Append the `j+1` th character of `t` to `res`
We continue this recursion till all characters of both the strings have been used and then we store this result in a list of strings `lis` as in the code below.
## The Code
```
def interleave(s, t, res, i, j, lis):
if i == len(s) and j == len(t):
lis.append(res)
return
if i < len(s):
interleave(s, t, res + s[i], i + 1, j, lis)
if j < len(t):
interleave(s, t, res + t[j], i, j + 1, lis)
l = []
s = "ab"
t = "cd"
interleave(s, t, "", 0, 0, l)
print l
```
**Output**
```
['abcd', 'acbd', 'acdb', 'cabd', 'cadb', 'cdab']
```
This implementation is as efficient as we can get (at least asymptotically) since we never generate the same string twice. |
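As a sanity check on that claim: the number of distinct interleavings of strings of lengths *m* and *n* is the binomial coefficient C(m+n, n), so we can verify that the recursion produces exactly that many results with no duplicates. Below is a hypothetical Python 3 variant of the function above (using default arguments instead of the explicit `res`/`i`/`j` parameters):

```python
from math import comb

def interleave(s, t, res="", lis=None):
    # A Python 3 adaptation of the recursion above, slicing the
    # strings instead of tracking indices i and j explicitly.
    if lis is None:
        lis = []
    if not s and not t:
        lis.append(res)
    if s:
        interleave(s[1:], t, res + s[0], lis)
    if t:
        interleave(s, t[1:], res + t[0], lis)
    return lis

results = interleave("ab", "cd")
# Every result is unique, and the count matches C(m+n, n):
assert len(results) == len(set(results)) == comb(4, 2)
print(results)
```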
All possible ways to interleave two strings | 36,260,956 | 19 | 2016-03-28T11:01:14Z | 36,261,206 | 9 | 2016-03-28T11:15:29Z | [
"python",
"permutation",
"itertools"
] | I am trying to generate all possible ways to interleave any two arbitrary strings in Python.
For example: If the two strings are `'ab'` and `'cd'`, the output I wish to get is:
```
['abcd', 'acbd', 'acdb', 'cabd', 'cadb', 'cdab']
```
See `a` is always before `b` (and `c` before `d`). I am struggling to find a solution to this. I have tried itertools as shown below:
```
import itertools
def shuffle(s,t):
string = s+t
for i in itertools.permutations(string):
print(''.join(i))
shuffle('ab','cd')
```
But as expected, this returns all possible permutations disregarding order of `a` and `b` (and `c` and `d`). | Highly inefficient but working:
```
def shuffle(s,t):
if s=="":
return [t]
elif t=="":
return [s]
else:
leftShuffle=[s[0]+val for val in shuffle(s[1:],t)]
rightShuffle=[t[0]+val for val in shuffle(s,t[1:])]
leftShuffle.extend(rightShuffle)
return leftShuffle
print(shuffle("ab","cd"))
``` |
All possible ways to interleave two strings | 36,260,956 | 19 | 2016-03-28T11:01:14Z | 36,271,870 | 13 | 2016-03-28T21:53:28Z | [
"python",
"permutation",
"itertools"
] | I am trying to generate all possible ways to interleave any two arbitrary strings in Python.
For example: If the two strings are `'ab'` and `'cd'`, the output I wish to get is:
```
['abcd', 'acbd', 'acdb', 'cabd', 'cadb', 'cdab']
```
See `a` is always before `b` (and `c` before `d`). I am struggling to find a solution to this. I have tried itertools as shown below:
```
import itertools
def shuffle(s,t):
string = s+t
for i in itertools.permutations(string):
print(''.join(i))
shuffle('ab','cd')
```
But as expected, this returns all possible permutations disregarding order of `a` and `b` (and `c` and `d`). | Several other solutions have already been posted, but most of them generate the full list of interleaved strings (or something equivalent to it) in memory, making their memory usage grow exponentially as a function of the input length. Surely there must be a better way.
Enumerating all ways to interleave two sequences, of length *a* and *b* respectively, is basically the same as enumerating all *a*+*b*-bit integers with exactly *b* bits set. Each such integer corresponds to a distinct way to interleave the sequences, obtained by replacing every 0 bit with an element of the first sequence, and every 1 bit with an element of the second sequence.
Conveniently, there's a clever and efficient way to [calculate the next integer with the same number of bits set](http://www.geeksforgeeks.org/next-higher-number-with-same-number-of-set-bits/), which we can use to generate all such integers. So let's do that first:
```
def bit_patterns(m, n):
"""Generate all m-bit numbers with exactly n bits set, in ascending order.
See http://www.geeksforgeeks.org/next-higher-number-with-same-number-of-set-bits/
"""
patt = (1 << int(n)) - 1
if patt == 0: yield 0; return # loop below assumes patt has at least one bit set!
while (patt >> m) == 0:
yield patt
lowb = patt & -patt # extract the lowest bit of the pattern
incr = patt + lowb # increment the lowest bit
diff = patt ^ incr # extract the bits flipped by the increment
patt = incr + ((diff // lowb) >> 2) # restore bit count after increment
```
Now we can use this generator to generate all ways to interleave any two sequences:
```
def interleave(a, b):
"""Generate all possible ways to interleave two sequences."""
m = len(a) + len(b)
n = len(a)
for pattern in bit_patterns(m, n):
seq = []
i = j = 0
for k in range(m):
bit = pattern & 1
pattern >>= 1
seq.append(a[i] if bit else b[j])
i += bit
j += 1-bit
yield seq
```
Note that, in order to try to be as generic as possible, this code takes arbitrary sequence types and returns lists. Strings are sequences in Python, so you can pass them in just fine; to convert the generated lists back into strings, you can concatenate their elements e.g. with `"".join()`, like this:
```
foo = "ABCD"
bar = "1234"
for seq in interleave(foo, bar):
print("".join(seq))
```
There we go: a fully non-recursive efficient generator-based solution that uses very little memory even for long inputs, and only generates each output once (thus requiring no inefficient duplicate elimination step). And it even works in both Python 2 and 3. |
Cannot install pip packages due to locale.error inside Ubuntu Vagrant Box | 36,283,915 | 5 | 2016-03-29T12:11:11Z | 36,292,208 | 11 | 2016-03-29T18:24:31Z | [
"python",
"ubuntu",
"vagrant",
"pip",
"virtualenv"
] | I just created a vagrant box with ubuntu/trusty32. The vagrant provisioner, during box creation time, has done the following:
* downloaded python virtualenv source tarball using `wget`
* untarred the virtualenv source tarball using `tar zxvf ./virtualenv.tar.gz`
* created a virtualenv called `venv` using `python ./virtualenv/virtualenv.py ./venv`
* sourced the newly created venv with `source ./venv/bin/activate`
* installed a few pip packages with `pip install django`, `pip install mysqlclient` etc inside the virtual environment.
**All of this worked fine and perfect** - executed by the vagrant provisioner when the vagrant box was being created for the first time.
However, later on, i logged in to the vagrant ssh and tried to install `ipython` via `pip`.
```
$ vagrant ssh
vagrant@django-box:~$ source venv/bin/activate
(venv) vagrant@django-box:~$ pip install ipython
Traceback (most recent call last):
File "/home/vagrant/venv/bin/pip", line 11, in <module>
sys.exit(main())
File "/home/vagrant/venv/local/lib/python2.7/site-packages/pip/__init__.py", line 215, in main
locale.setlocale(locale.LC_ALL, '')
File "/home/vagrant/venv/lib/python2.7/locale.py", line 579, in setlocale
return _setlocale(category, locale)
locale.Error: unsupported locale setting
(venv) vagrant@django-box:~$
```
*Note: this is a fresh new vagrant box running ubuntu/trusty32 out of the box.*
There are several SO questions about locale error but following them revealed no help for this vagrant scenario.
It doesn't make sense that ***all of it worked perfectly fine during vagrant box creation/provision time but not working afterwards when trying the same thing manually***. | First check your current **locale** config by simply putting `locale` in command line.
You should see something similar to:
```
locale: Cannot set LC_CTYPE to default locale: No such file or directory
LANG=C
LC_CTYPE=utf8
```
Set a valid locale in the LC\_CTYPE environment variable by running the following commands:
```
export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
export LC_CTYPE="en_US.UTF-8"
locale-gen en_US.UTF-8
sudo dpkg-reconfigure locales
```
PS. `en_US.UTF-8` is used here but if you need to check all available locales on your system, run the command `locale -a`
This should solve the problem. |
Python - Shortcircuiting strange behaviour | 36,289,297 | 3 | 2016-03-29T15:58:13Z | 36,289,420 | 8 | 2016-03-29T16:03:31Z | [
"python",
"short-circuiting"
] | In the following code fragment function `f` gets executed as expected:
```
def f():
print('hi')
f() and False
#Output: 'hi'
```
But in the following similar code fragment `a` doesn't increment:
```
a=0
a+=1 and False
a
#Output: 0
```
But if we shortcircuit with True instead of False `a` gets incremented:
```
a=0
a+=1 and True
a
#Output: 1
```
How does shortcircuit work for this to run this way? | That's because `f() and False` is an expression (technically a single-expression statement) whereas `a += 1 and False` is an assignment statement. It actually resolves to `a += (1 and False)`, and since `1 and False` equals `False` and `False` is actually the integer 0, what happens is `a += 0`, a no-op.
`(1 and True)`, however, evaluates to `True` (which is the integer 1), so `a += 1 and True` means `a += 1`.
(Also note that Python's `and` and `or` always return one of their operands: the first one that unambiguously determines the result, or the last one otherwise.)
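A quick demonstration of both points, runnable as-is:

```python
a = 0

# `and` returns one of its operands, not necessarily a bool:
assert (1 and False) is False   # 1 is truthy, so the second operand decides
assert (0 and False) == 0       # 0 is falsy, so `and` stops at 0
assert (1 and True) is True

# Since bool is a subclass of int (False == 0, True == 1):
a += 1 and False                # equivalent to a += 0
assert a == 0
a += 1 and True                 # equivalent to a += 1
assert a == 1
```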
Why can't I break out of this itertools infinite loop? | 36,320,473 | 11 | 2016-03-30T21:57:49Z | 36,320,558 | 10 | 2016-03-30T22:03:18Z | [
"python",
"python-3.x",
"signals",
"posix"
] | In the REPL, we can usually interrupt an infinite loop with a sigint, i.e. `ctrl`+`c`, and regain control in the interpreter.
```
>>> while True: pass
...
^CTraceback (most recent call last):
File "<stdin>", line 1, in <module>
KeyboardInterrupt
>>>
```
But in this loop, the interrupt seems to be blocked and I have to kill the parent process to escape.
```
>>> *x, = itertools.repeat('x')
^C^C^C^C^C^C^C^C^\^\^\^\^\^Z^Z^Z^Z
```
Why is that? | The `KeyboardInterrupt` is checked after each Python instruction. `itertools.repeat` and the tuple generation is handled in C Code. The interrupt is handled afterwards, i.e. never. |
find the set of integers for which two linear equalities holds true | 36,321,558 | 6 | 2016-03-30T23:28:49Z | 36,376,671 | 7 | 2016-04-02T17:39:35Z | [
"java",
"c#",
"python",
"algorithm",
"language-agnostic"
] | What algorithm can I use to find the set of all positive integer values of `n1, n2, ... ,n7` for which the the following inequalities holds true.
```
97n1 + 89n2 + 42n3 + 20n4 + 16n5 + 11n6 + 2n7 - 185 > 0
-98n1 - 90n2 - 43n3 - 21n4 - 17n5 - 12n6 - 3n7 + 205 > 0
n1 >= 0, n2 >= 0, n3 >=0. n4 >=0, n5 >=0, n6 >=0, n7 >= 0
```
For example one set `n1= 2, n2 = n3 = ... = n7 =0` makes the inequality true. How do I find out all other set of values? The similar question has been posted in [M.SE](http://math.stackexchange.com/questions/1723869/finding-all-lattice-point-in-bounded-region).
**ADDED::** I need to generalize the solution for n variables (might be large). What procedure can I apply? For **another** particular case `n=8`
```
97n1 + 89n2 + 42n3 + 20n4 + 16n5 + 11n6 + 6n7 + 2n8 - 185 > 0
-98n1 - 90n2 - 43n3 - 21n4 - 17n5 - 12n6 - 7 - 3n8 + 205 > 0
n1 >= 0, n2 >= 0, n3 >=0. n4 >=0, n5 >=0, n6 >=0, n7 >= 0, n8 >= 0
```
Python takes forever. `Wolfram Mathematica` reveals that there are `4015` solutions in less than minute.
```
Length[Solve[{97 n1 + 89 n2 + 42 n3 + 20 n4 + 16 n5 + 11 n6 + 6 n7 +
2 n8 - 185 > 0,
-98 n1 - 90 n2 - 43 n3 - 21 n4 - 17 n5 - 12 n6 - 7 n7 - 3 n8 +
205 > 0,
n1 >= 0, n2 >= 0, n3 >= 0, n4 >= 0, n5 >= 0, n6 >= 0, n7 >= 0,
n8 >= 0}, {n1, n2, n3, n4, n5, n6, n7, n8}, Integers]]
``` | Reti43 has the right idea, but there's a quick recursive solution that works with less restrictive assumptions about your inequalities.
```
def solve(smin, smax, coef1, coef2):
"""
Return a list of lists of non-negative integers `n` that satisfy
the inequalities,
sum([coef1[i] * n[i] for i in range(len(coef1)]) > smin
sum([coef2[i] * n[i] for i in range(len(coef1)]) < smax
where coef1 and coef2 are equal-length lists of positive integers.
"""
if smax < 0:
return []
n_max = ((smax-1) // coef2[0])
solutions = []
if len(coef1) > 1:
for n0 in range(n_max + 1):
for solution in solve(smin - n0 * coef1[0],
smax - n0 * coef2[0],
coef1[1:], coef2[1:]):
solutions.append([n0] + solution)
else:
n_min = max(0, (smin // coef1[0]) + 1)
for n0 in range(n_min, n_max + 1):
if n0 * coef1[0] > smin and n0 * coef2[0] < smax:
solutions.append([n0])
return solutions
```
You'd apply this to your original problem like this,
```
smin, coef1 = 185, (97, 89, 42, 20, 16, 11, 2)
smax, coef2 = 205, (98, 90, 43, 21, 17, 12, 3)
solns7 = solve(smin, smax, coef1, coef2)
len(solns7)
1013
```
and to the longer problem like this,
```
smin, coef1 = 185, (97, 89, 42, 20, 16, 11, 6, 2)
smax, coef2 = 205, (98, 90, 43, 21, 17, 12, 7, 3)
solns8 = solve(smin, smax, coef1, coef2)
len(solns8)
4015
```
On my Macbook, both of these cases are solved in milliseconds. This should scale reasonably well to slightly larger problems, but fundamentally, it's O(2^N) in the number of coefficients N. How well it actually scales depends on how large the additional coefficients are - the more large coefficients (compared to smax-smin), the fewer possible solutions and the faster it'll run.
**Updated**: From the discussion on the linked [M.SE post](http://math.stackexchange.com/questions/1723869/finding-all-lattice-point-in-bounded-region), I see that the relationship between the two inequalities here is part of the structure of the problem. Given that, a slightly simpler solution can be given. The code below also includes a couple of additional optimizations, which speed up the solution for the 8-variable case from 88 milliseconds to 34 milliseconds on my laptop. I've tried this on examples with as many as 22 variables and gotten the results in less than a minute, but it's never going to be practical for hundreds of variables.
```
def solve(smin, smax, coef):
"""
Return a list of lists of non-negative integers `n` that satisfy
the inequalities,
sum([coef[i] * n[i] for i in range(len(coef)]) > smin
sum([(coef[i]+1) * n[i] for i in range(len(coef)]) < smax
where coef is a list of positive integer coefficients, ordered
from highest to lowest.
"""
if smax <= smin:
return []
if smin < 0 and smax <= coef[-1]+1:
return [[0] * len(coef)]
c0 = coef[0]
c1 = c0 + 1
n_max = ((smax-1) // c1)
solutions = []
if len(coef) > 1:
for n0 in range(n_max + 1):
for solution in solve(smin - n0 * c0,
smax - n0 * c1,
coef[1:]):
solutions.append([n0] + solution)
else:
n_min = max(0, (smin // c0) + 1)
for n0 in range(n_min, n_max + 1):
solutions.append([n0])
return solutions
```
You'd apply it to the 8-variable example like this,
```
solutions = solve(185, 205, (97, 89, 42, 20, 16, 11, 6, 2))
len(solutions)
4015
```
This solution directly enumerates the lattice points in the bounded region. Since you want all of these solutions, the time it takes to get them is going to be proportional (at best) to the number of bound lattice points, which grows exponentially with the number of dimensions (variables). |
matplotlib error - no module named tkinter | 36,327,134 | 4 | 2016-03-31T07:42:34Z | 36,327,323 | 9 | 2016-03-31T07:53:42Z | [
"python",
"matplotlib",
"tkinter"
] | I tried to use the matplotlib package via Pycharm IDE on windows 10.
when I run this code:
```
from matplotlib import pyplot
```
I get the following error:
```
ImportError: No module named 'tkinter'
```
I know that in python 2.x it was called Tkinter, but that is not the problem - I just installed a brand new python 3.5.1.
EDIT: in addition, I also tried to import 'tkinter' and 'Tkinter' - neither of these worked (both returned the error message I mentioned).
any ideas?
thanks in advance | ```
sudo apt-get install python3-tk
```
Then,
```
>> import tkinter # all fine
```
**Edit**:
For Windows, I think the problem is that you didn't install the complete Python package, since Tkinter should ship with Python out of the box. See: <http://www.tkdocs.com/tutorial/install.html>
I suggest installing [ipython](https://ipython.org/), which provides a powerful shell and the necessary packages as well.
how to get multiple conditional operations after a Pandas groupby? | 36,337,012 | 5 | 2016-03-31T14:57:57Z | 36,341,363 | 9 | 2016-03-31T18:51:10Z | [
"python",
"pandas"
] | consider the following example:
```
import pandas as pd
import numpy as np
df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
'foo', 'bar', 'foo', 'foo'],
'B' : [12,10,-2,-4,-2,5,8,7],
'C' : [-5,5,-20,0,1,5,4,-4]})
df
Out[12]:
A B C
0 foo 12 -5
1 bar 10 5
2 foo -2 -20
3 bar -4 0
4 foo -2 1
5 bar 5 5
6 foo 8 4
7 foo 7 -4
```
Here I need to compute, for **each group in A**, the sum of elements **in B** **conditional on C being non-negative** (i.e. being >=0, a condition based on another column). And vice-versa for C.
However, my code below fails.
```
df.groupby('A').agg({'B': lambda x: x[x.C>0].sum(),
'C': lambda x: x[x.B>0].sum()})
AttributeError: 'Series' object has no attribute 'B'
```
So it seems `apply` would be preferred (because apply *sees* all the dataframe I think), but unfortunately I cannot use a dictionary with `apply`. So I am stuck. Any ideas?
One not-so-pretty not-so-efficient solution would be to create these conditional variables before running the `groupby`, but I am sure this solution does not use the potential of **`Pandas`**.
So, for instance, the expected output for the group `bar` and `column B` would be
```
+10 (indeed C equals 5 and is >=0)
-4 (indeed C equals 0 and is >=0)
+5 = 11
```
Another example:
group `foo` and `column B`
```
NaN (indeed C equals -5 so I dont want to consider the 12 value in B)
+ NaN (indeed C= -20)
-2 (indeed C=1 so its positive)
+ 8
+NaN = 6
```
Remark that I use `NaNs` instead of zero because another function than a sum would give wrong results (median) if we were to put zeros.
In other words, this is a simple conditional sum where the condition is based on another column.
Thanks! | Another alternative is to precompute the values you will need before using `groupby/agg`:
```
import numpy as np
import pandas as pd
N = 1000
df = pd.DataFrame({'A' : np.random.choice(['foo', 'bar'], replace=True, size=(N,)),
'B' : np.random.randint(-10, 10, size=(N,)),
'C' : np.random.randint(-10, 10, size=(N,))})
def using_precomputation(df):
df['B2'] = df['B'] * (df['C'] >= 0).astype(int)
df['C2'] = df['C'] * (df['B'] >= 0).astype(int)
result = df.groupby('A').agg({'B2': 'sum', 'C2': 'sum'})
return result.rename(columns={'B2':'B', 'C2':'C'})
```
Let's compare `using_precomputation` with `using_index` and `using_apply`:
```
def using_index(df):
result = df.groupby('A').agg({'B': lambda x: df.loc[x.index, 'C'][x >= 0].sum(),
'C': lambda x: df.loc[x.index, 'B'][x >= 0].sum()})
return result.rename(columns={'B':'C', 'C':'B'})
def my_func(row):
b = row[row.C >= 0].B.sum()
c = row[row.B >= 0].C.sum()
return pd.Series({'B':b, 'C':c})
def using_apply(df):
return df.groupby('A').apply(my_func)
```
First, let's check that they all return the same result:
```
def is_equal(df, func1, func2):
result1 = func1(df).sort_index(axis=1)
result2 = func2(df).sort_index(axis=1)
assert result1.equals(result2)
is_equal(df, using_precomputation, using_index)
is_equal(df, using_precomputation, using_apply)
```
---
Using the 1000-row DataFrame above:
```
In [83]: %timeit using_precomputation(df)
100 loops, best of 3: 2.45 ms per loop
In [84]: %timeit using_index(df)
100 loops, best of 3: 4.2 ms per loop
In [85]: %timeit using_apply(df)
100 loops, best of 3: 6.84 ms per loop
```
---
**Why is `using_precomputation` faster?**
Precomputation allows us to take advantage of fast vectorized arithmetic on
*entire columns* and allows the aggregation function to be the simple builtin
`sum`. Builtin aggregators tend to be faster than custom aggregation functions
such as the ones used here (based on jezrael's solution):
```
def using_index(df):
result = df.groupby('A').agg({'B': lambda x: df.loc[x.index, 'C'][x >= 0].sum(),
'C': lambda x: df.loc[x.index, 'B'][x >= 0].sum()})
return result.rename(columns={'B':'C', 'C':'B'})
```
Moreover, the less work you have to do on each little group, the better off you
are performance-wise. Having to do double-indexing for each group hurts performance.
Also a killer to performance is using `groupby/apply(func)` where the `func`
returns a `Series`. This forms one Series for each row of the result, and then
causes Pandas to align and concatenate all the Series. Since typically the
Series tend to be short and the number of Series tends to be big, concatenating
all these little Series tends to be slow. Again, you tend to get the best
performance out of Pandas/NumPy when performing *vectorized operations on
big arrays*. Looping through lots of tiny results kills performance. |
getting the last index, better to use len(list)-1 or use own variable? | 36,340,008 | 2 | 2016-03-31T17:33:10Z | 36,340,108 | 7 | 2016-03-31T17:38:46Z | [
"python"
] | In my code below, would it be better to use the i = len(scheduled\_meetings)-1 or the i=0/i+=1 method? I know the cost of len() is O(1) and it's probably more clear whats going on this way, but it gets calculated for every m in sorted\_meetings. i=0 is set once then only incremented if something is appended to the list. Will there even be a difference? Both methods work against my test cases, just wanted to get some feedback on which would be better.
```
import operator
def answer(meetings):
# sort by earliest end time
sorted_meetings = sorted(meetings, key=operator.itemgetter(1))
# always select the earilest end time
scheduled_meetings = [sorted_meetings[0]]
i = 0 #METHOD B
for m in sorted_meetings:
i = len(scheduled_meetings) - 1 #METHOD A
if m[0] >= scheduled_meetings[i][1] or m[1] <= scheduled_meetings[i][0]:
scheduled_meetings.append(m)
i += 1 #METHOD B
return len(scheduled_meetings)
``` | Use neither of these. Use `your_list[-1]` instead. It's less code and doesn't require other variables like `i`.
`-1` means *the first element from the right* which is apparently the last element of the list.
See *Example 3.7* [here](http://www.diveintopython.net/native_data_types/lists.html) for more information about negative list indexes. |
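Applied to the meeting-scheduling code from the question, a short illustration:

```python
scheduled_meetings = [(1, 3), (4, 6), (7, 9)]

# Negative indexes count from the end of the list:
assert scheduled_meetings[-1] == (7, 9)   # last element
assert scheduled_meetings[-2] == (4, 6)   # second to last

# No counter to maintain: [-1] stays correct as the list grows.
scheduled_meetings.append((10, 12))
assert scheduled_meetings[-1] == (10, 12)
```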
asyncio.ensure_future vs. BaseEventLoop.create_task vs. simple coroutine? | 36,342,899 | 7 | 2016-03-31T20:15:20Z | 36,415,477 | 7 | 2016-04-05T01:11:12Z | [
"python",
"python-3.x",
"python-3.5",
"coroutine",
"python-asyncio"
] | I've seen several basic Python 3.5 tutorials on asyncio doing the same operation in various flavours.
In this code:
```
import asyncio
async def doit(i):
print("Start %d" % i)
await asyncio.sleep(3)
print("End %d" % i)
return i
if __name__ == '__main__':
loop = asyncio.get_event_loop()
#futures = [asyncio.ensure_future(doit(i), loop=loop) for i in range(10)]
#futures = [loop.create_task(doit(i)) for i in range(10)]
futures = [doit(i) for i in range(10)]
result = loop.run_until_complete(asyncio.gather(*futures))
print(result)
```
All the three variants above that define the `futures` variable achieve the same result; the only difference I can see is that with the third variant the execution is out of order (which should not matter in most cases). Is there any other difference? Are there cases where I can't just use the simplest variant (plain list of coroutines)? | ## `ensure_future` vs `create_task`
`ensure_future` is a method that creates a [`Task`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Task) from a [`coroutine`](https://docs.python.org/3/library/asyncio-task.html#coroutines). It creates the task in different ways depending on its argument (including using `create_task` for coroutines and future-like objects).
`create_task` is an abstract method of `AbstractEventLoop`. Different event loops can implement this function in different ways.
You should use `ensure_future` to create tasks. You'll need `create_task` only if you're going to implement your own event loop type.
## When coroutines should be wrapped in tasks?
Wrapping a coroutine in a `Task` is a way to start that coroutine "in the background". Here's an example:
```
import asyncio
async def msg(text):
await asyncio.sleep(0.1)
print(text)
async def long_operation():
print('long_operation started')
await asyncio.sleep(3)
print('long_operation finished')
async def main():
await msg('first')
    # Now you want to start long_operation, but you don't want to wait until it's finished:
    # long_operation should be started, but the second msg should be printed immediately.
    # Create a task to do so:
task = asyncio.ensure_future(long_operation())
await msg('second')
    # Now, whenever you want, you can wait until the task is finished:
await task
if __name__ == "__main__":
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
```
Output:
```
first
long_operation started
second
long_operation finished
```
You can replace `asyncio.ensure_future(long_operation())` with just `await long_operation()` to feel the difference. |
What is the equivalent of from django.views.generic.simple import direct_to_template in django 1.9 | 36,349,646 | 6 | 2016-04-01T06:40:35Z | 36,349,783 | 8 | 2016-04-01T06:50:09Z | [
"python",
"django"
] | I want to make my home page as `index.html` which is located inside the template directory named as `templates/castle_tm/index.html`, but the url shows
> "no module named simple".
Generic based views are deprecated in django >1.4. Now, How can i redirect the home page to index.html
urls.py
```
from django.conf.urls import url, patterns, include
from django.conf.urls.static import static
from django.conf import settings
from django.contrib import admin
from castle import views
from django.views.generic.simple import direct_to_template
admin.autodiscover()
url(r'^api/casinova$', direct_to_template,{"template":"castle_tm/index.html"}),
``` | In latest versions of django you can use [`TemplateView`](https://docs.djangoproject.com/en/1.9/ref/class-based-views/base/#templateview)
```
from django.views.generic import TemplateView
...
url(r'^api/casinova$', TemplateView.as_view(template_name='castle_tm/index.html')),
``` |
High performance array mean | 36,351,317 | 9 | 2016-04-01T08:14:51Z | 36,355,092 | 9 | 2016-04-01T11:22:56Z | [
"python",
"arrays",
"numpy",
"matrix"
] | I've got a performance bottleneck. I'm computing the column-wise mean of large arrays (250 rows & 1.3 million columns), and I do so more than a million times in my application.
My test case in Python:
```
import numpy as np
big_array = np.random.random((250, 1300000))
%timeit mean = big_array.mean(axis = 0) # ~400 milliseconds
```
Numpy takes around 400 milliseconds on my machine, running on a single core. I've tried several other matrix libraries across different languages (Cython, R, Julia, Torch), but found only Julia to beat Numpy, by taking around 250 milliseconds.
Can anyone provide evidence of substantial improvements in performance in this task? Perhaps this is a task suited for the GPU?
Edit: My application is evidently memory-constrained, and its performance is dramatically improved by accessing elements of a large array only once, rather than repeatedly. (See comment below.) | Julia, if I'm not mistaken, uses fortran ordering in memory as opposed to numpy which uses C memory layout by default. So if you rearrange things to adhere to the same layout so that the mean is happening along contiguous memory, you get better performance:
```
In [1]: import numpy as np
In [2]: big_array = np.random.random((250, 1300000))
In [4]: big_array_f = np.asfortranarray(big_array)
In [5]: %timeit mean = big_array.mean(axis = 0)
1 loop, best of 3: 319 ms per loop
In [6]: %timeit mean = big_array_f.mean(axis = 0)
1 loop, best of 3: 205 ms per loop
```
Or you can just change you dimensions and take the mean over the other axis:
```
In [10]: big_array = np.random.random((1300000, 250))
In [11]: %timeit mean = big_array.mean(axis = 1)
1 loop, best of 3: 205 ms per loop
``` |
how to remove a object in a python list | 36,360,632 | 6 | 2016-04-01T15:51:38Z | 36,360,751 | 8 | 2016-04-01T15:58:12Z | [
"python"
] | I create a class named point as following:
```
class point:
def __init__(self):
self.x = 0
self.y = 0
```
and create a list of point
```
p1 = point()
p1.x = 1
p1.y = 1
p2 = point()
p2.x = 2
p2.y = 2
p_list = []
p_list.append(p1)
p_list.append(p2)
```
Now I'd like to remove the point where x = 1 and y = 1. How can I do this?
I try to add a cmp method for class point as following
```
class point:
def __init__(self):
self.x = 0
self.y = 0
def __cmp__(self, p):
return self.x==p.x and self.y==p.y
```
But the following code not work
```
r = point()
r.x = 1
r.y = 1
if r in p_list:
print('correct')
else:
print('wrong') # it will go here
p_list.remove(r) # it report 'ValueError: list.remove(x): x not in list'
Your `__cmp__` function is not correct. [`__cmp__`](https://docs.python.org/2/reference/datamodel.html#object.__cmp__) should return `-1/0/+1` depending on whether `self` is smaller than, equal to, or greater than the other element. So when your `__cmp__` is called, it returns `True` if the elements are equal, which is then interpreted as `1`, and thus "greater than"; and if the elements are non-equal, it returns `False`, i.e. `0`, which is interpreted as "equal".
For two-dimensional points, "greater than" and "smaller than" are not very clearly defined, anyway, so you can just replace your `__cmp__` with [`__eq__`](https://docs.python.org/2/reference/datamodel.html#object.__eq__) using the same implementation. Your `point` class should be:
```
class point:
def __init__(self, x=0, y=0):
self.x = x
self.y = y
def __eq__(self, p):
return self.x==p.x and self.y==p.y
``` |
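With `__eq__` defined, membership tests and `list.remove` now behave as expected. A self-contained check (works in both Python 2 and 3):

```python
class point:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def __eq__(self, p):
        return self.x == p.x and self.y == p.y

p_list = [point(1, 1), point(2, 2)]

r = point(1, 1)
assert r in p_list   # `in` compares elements using __eq__
p_list.remove(r)     # removes the first element equal to r
assert len(p_list) == 1 and p_list[0] == point(2, 2)
print('correct')
```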
Python naming convention - namedtuples | 36,371,047 | 6 | 2016-04-02T08:33:34Z | 36,371,122 | 7 | 2016-04-02T08:43:27Z | [
"python",
"python-3.x",
"naming-conventions",
"pep8",
"namedtuple"
] | I am new to Python and I have been reading both the online documentation and (trying) to follow [PEP 0008](https://www.python.org/dev/peps/pep-0008/) to have a good Python code style.
I am curious about the code segment I found in the official Python [docs](https://docs.python.org/3/library/re.html#writing-a-tokenizer) while studying about the re library:
```
import collections
Token = collections.namedtuple('Token', ['typ', 'value', 'line', 'column'])
```
I cannot understand why the *`Token`* variable is named with a first letter capitalised; I have read through the PEP 0008 and there is no reference for it for what I have seen. Should it not be **`token`** instead or **`TOKEN`** if it was a constant (which for all I know it is not)? | In the code-segment you provided, `Token` is a [**named tuple**](https://docs.python.org/3/library/collections.html#collections.namedtuple), definitely not a constant. It does not follow other variable names naming style only to put emphasis on the fact that it is a **class factory function**.
No warning will occur from an PEP 0008 style checker (like *PyCharm* for example) if you write it as *`token`* but I think it is not good practice since this way it does not distinguish it as a class factory name.
So, *namedtuples* fall under the [Class names](https://www.python.org/dev/peps/pep-0008/#class-names) in PEP 0008. Too bad is not stated more explicitly.
Besides the example you mentioned for [writing a tokenizer](https://docs.python.org/3/library/re.html#writing-a-tokenizer), this can also be seen in the [collections.namedtuple docs](https://docs.python.org/3/library/collections.html#collections.namedtuple) examples:
```
Point = namedtuple('Point', ['x', 'y'])
Point3D = namedtuple('Point3D', Point._fields + ('z',))
Book = namedtuple('Book', ['id', 'title', 'authors'])
``` |
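A short sketch (my own addition, not from the original answer) showing why the CapWords convention fits here — the object returned by `namedtuple` is used exactly like a class:

```python
from collections import namedtuple

# Token is a new class, hence the CapWords name per PEP 0008
Token = namedtuple('Token', ['typ', 'value', 'line', 'column'])

t = Token('NUMBER', 42, 1, 5)   # instantiated like any class
print(t.typ, t.value)           # fields accessed by name
print(t._replace(value=43))     # _replace returns a new Token
```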
Get part of an integer in Python | 36,378,479 | 4 | 2016-04-02T20:41:40Z | 36,378,493 | 9 | 2016-04-02T20:42:54Z | [
"python",
"integer"
] | Is there an elegant way (maybe in `numpy`) to get a given part of a Python integer — e.g. say I want to get `90` from `1990`.
I can do:
```
my_integer = 1990
int(str(my_integer)[2:4])
# 90
```
But it is quite ugly.
Any other option? | `1990 % 100` would do the trick.
(`%` is the modulo operator and returns the remainder of the division, here 1990 = 19\*100 + 90.)
---
*Added after answer was accepted:*
If you need something generic, try this:
```
def GetIntegerSlice(i, n, m):
# return nth to mth digit of i (as int)
l = math.floor(math.log10(i)) + 1
return i / int(pow(10, l - m)) % int(pow(10, m - n + 1))
```
It will return the nth to mth digit of i (as int), i.e.
```
>>> GetIntegerSlice(123456, 3, 4)
34
```
Not sure it's an improvement over your suggestion, but it does not rely on string operations and was fun to write.
(Note: casting to `int` before doing the division (rather than only casting the result to `int` in the end) makes it work also for long integers.) |
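For Python 3, where `/` is true division, a minimal adaptation (my own sketch, not part of the original answer) uses floor division instead:

```python
import math

def get_integer_slice(i, n, m):
    # return the nth to mth digit of i (1-indexed from the left), as an int
    l = int(math.floor(math.log10(i))) + 1   # number of digits in i
    return i // 10 ** (l - m) % 10 ** (m - n + 1)

print(get_integer_slice(123456, 3, 4))  # 34
print(get_integer_slice(1990, 3, 4))    # 90
```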
How to integrate Flask & Scrapy? | 36,384,286 | 6 | 2016-04-03T10:25:38Z | 37,270,442 | 7 | 2016-05-17T08:04:17Z | [
"python",
"flask",
"scrapy"
] | I'm using Scrapy to get data and I want to use the Flask web framework to show the results in a webpage. But I don't know how to call the spiders in the Flask app. I've tried to use `CrawlerProcess` to call my spiders, but I got an error like this:
```
ValueError
ValueError: signal only works in main thread
Traceback (most recent call last)
File "/Library/Python/2.7/site-packages/flask/app.py", line 1836, in __call__
return self.wsgi_app(environ, start_response)
File "/Library/Python/2.7/site-packages/flask/app.py", line 1820, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "/Library/Python/2.7/site-packages/flask/app.py", line 1403, in handle_exception
reraise(exc_type, exc_value, tb)
File "/Library/Python/2.7/site-packages/flask/app.py", line 1817, in wsgi_app
response = self.full_dispatch_request()
File "/Library/Python/2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Library/Python/2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/Library/Python/2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
rv = self.dispatch_request()
File "/Library/Python/2.7/site-packages/flask/app.py", line 1461, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/Users/Rabbit/PycharmProjects/Flask_template/FlaskTemplate.py", line 102, in index
process = CrawlerProcess()
File "/Library/Python/2.7/site-packages/scrapy/crawler.py", line 210, in __init__
install_shutdown_handlers(self._signal_shutdown)
File "/Library/Python/2.7/site-packages/scrapy/utils/ossignal.py", line 21, in install_shutdown_handlers
reactor._handleSignals()
File "/Library/Python/2.7/site-packages/twisted/internet/posixbase.py", line 295, in _handleSignals
_SignalReactorMixin._handleSignals(self)
File "/Library/Python/2.7/site-packages/twisted/internet/base.py", line 1154, in _handleSignals
signal.signal(signal.SIGINT, self.sigInt)
ValueError: signal only works in main thread
```
My scrapy code like this:
```
class EPGD(Item):
genID = Field()
genID_url = Field()
taxID = Field()
taxID_url = Field()
familyID = Field()
familyID_url = Field()
chromosome = Field()
symbol = Field()
description = Field()
class EPGD_spider(Spider):
name = "EPGD"
allowed_domains = ["epgd.biosino.org"]
term = "man"
start_urls = ["http://epgd.biosino.org/EPGD/search/textsearch.jsp?textquery="+term+"&submit=Feeling+Lucky"]
db = DB_Con()
collection = db.getcollection(name, term)
def parse(self, response):
sel = Selector(response)
sites = sel.xpath('//tr[@class="odd"]|//tr[@class="even"]')
url_list = []
base_url = "http://epgd.biosino.org/EPGD"
for site in sites:
item = EPGD()
item['genID'] = map(unicode.strip, site.xpath('td[1]/a/text()').extract())
item['genID_url'] = base_url+map(unicode.strip, site.xpath('td[1]/a/@href').extract())[0][2:]
item['taxID'] = map(unicode.strip, site.xpath('td[2]/a/text()').extract())
item['taxID_url'] = map(unicode.strip, site.xpath('td[2]/a/@href').extract())
item['familyID'] = map(unicode.strip, site.xpath('td[3]/a/text()').extract())
item['familyID_url'] = base_url+map(unicode.strip, site.xpath('td[3]/a/@href').extract())[0][2:]
item['chromosome'] = map(unicode.strip, site.xpath('td[4]/text()').extract())
item['symbol'] = map(unicode.strip, site.xpath('td[5]/text()').extract())
item['description'] = map(unicode.strip, site.xpath('td[6]/text()').extract())
self.collection.update({"genID":item['genID']}, dict(item), upsert=True)
yield item
sel_tmp = Selector(response)
link = sel_tmp.xpath('//span[@id="quickPage"]')
for site in link:
url_list.append(site.xpath('a/@href').extract())
for i in range(len(url_list[0])):
if cmp(url_list[0][i], "#") == 0:
if i+1 < len(url_list[0]):
print url_list[0][i+1]
actual_url = "http://epgd.biosino.org/EPGD/search/" + url_list[0][i+1]
yield Request(actual_url, callback=self.parse)
break
else:
print "The index is out of range!"
```
My flask code like this:
```
@app.route('/', methods=['GET', 'POST'])
def index():
process = CrawlerProcess()
process.crawl(EPGD_spider)
return redirect(url_for('details'))
@app.route('/details', methods = ['GET'])
def epgd():
if request.method == 'GET':
results = db['EPGD_test'].find()
json_results= []
for result in results:
json_results.append(result)
return toJson(json_results)
```
How can I call my Scrapy spiders when using the Flask web framework? | Adding an HTTP server in front of your spiders is not that easy. There are a couple of options.
## 1. Python subprocess
If you are really limited to Flask and can't use anything else, the only way to integrate Scrapy with Flask is by launching an external process for every spider crawl, as the other answer recommends (note that your subprocess needs to be spawned in the proper Scrapy project directory).
The directory structure for all examples should look like this (I'm using the [dirbot test project](https://github.com/scrapy/dirbot)):
```
> tree -L 1
├── dirbot
├── README.rst
├── scrapy.cfg
├── server.py
└── setup.py
```
Here's code sample to launch Scrapy in new process:
```
# server.py
import subprocess
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
"""
Run spider in another process and store items in file. Simply issue command:
> scrapy crawl dmoz -o "output.json"
wait for this command to finish, and read output.json to client.
"""
spider_name = "dmoz"
subprocess.check_output(['scrapy', 'crawl', spider_name, "-o", "output.json"])
with open("output.json") as items_file:
return items_file.read()
if __name__ == '__main__':
app.run(debug=True)
```
Save the above as server.py and visit localhost:5000; you should see the scraped items.
## 2. Twisted-Klein + Scrapy
A better way is to use an existing project that integrates Twisted with Werkzeug and exposes an API similar to Flask, e.g. [Twisted-Klein](https://github.com/twisted/klein). Twisted-Klein allows you to run your spiders asynchronously in the same process as your web server. It's better in that it won't block on every request, and it lets you simply return Scrapy/Twisted deferreds from the HTTP route handler.
The following snippet integrates Twisted-Klein with Scrapy; note that you need to create your own subclass of CrawlerRunner so that the crawler collects items and returns them to the caller. This option is a bit more advanced: you're running Scrapy spiders in the same process as the Python server, and items are stored in memory rather than in a file (so there is no disk writing/reading as in the previous example). Most importantly, it's asynchronous and it all runs in one Twisted reactor.
```
# server.py
import json
from klein import route, run
from scrapy import signals
from scrapy.crawler import CrawlerRunner
from dirbot.spiders.dmoz import DmozSpider
class MyCrawlerRunner(CrawlerRunner):
"""
Crawler object that collects items and returns output after finishing crawl.
"""
def crawl(self, crawler_or_spidercls, *args, **kwargs):
# keep all items scraped
self.items = []
# create crawler (Same as in base CrawlerProcess)
crawler = self.create_crawler(crawler_or_spidercls)
# handle each item scraped
crawler.signals.connect(self.item_scraped, signals.item_scraped)
# create Twisted.Deferred launching crawl
dfd = self._crawl(crawler, *args, **kwargs)
        # add callback - when crawl is done, call return_items
dfd.addCallback(self.return_items)
return dfd
def item_scraped(self, item, response, spider):
self.items.append(item)
def return_items(self, result):
return self.items
def return_spider_output(output):
"""
:param output: items scraped by CrawlerRunner
:return: json with list of items
"""
# this just turns items into dictionaries
# you may want to use Scrapy JSON serializer here
return json.dumps([dict(item) for item in output])
@route("/")
def schedule(request):
runner = MyCrawlerRunner()
spider = DmozSpider()
deferred = runner.crawl(spider)
deferred.addCallback(return_spider_output)
return deferred
run("localhost", 8080)
```
Save the above in a file server.py and place it in your Scrapy project directory;
now open localhost:8080, and it will launch the dmoz spider and return the scraped items to the browser as JSON.
## 3. ScrapyRT
There are some problems that arise when you put an HTTP app in front of your spiders. For example, you sometimes need to handle spider logs (you may need them in some cases), and you need to handle spider exceptions somehow, etc. There are projects that let you add an HTTP API to spiders in an easier way, e.g. [ScrapyRT](https://github.com/scrapinghub/scrapyrt). This is an app that adds an HTTP server to your Scrapy spiders and handles these problems for you (e.g. handling logging, handling spider errors, etc.).
So after installing ScrapyRT you only need to do:
```
> scrapyrt
```
in your Scrapy project directory, and it will launch an HTTP server listening for requests. You then visit <http://localhost:9080/crawl.json?spider_name=dmoz&url=http://alfa.com> and it should launch your spider to crawl the given url.
Disclaimer: I'm one of the authors of ScrapyRT. |
What range function does to a python list? | 36,386,543 | 4 | 2016-04-03T14:17:50Z | 36,386,595 | 9 | 2016-04-03T14:22:32Z | [
"python",
"python-2.7"
] | I am not able to figure out what is happening here. Appending the list (created by `range`) to itself creates a recursive list at index `3`.
```
>>> x = range(3)
[0, 1, 2]
>>> x.append(x)
[0, 1, 2, [...]]
>>> x[3][3][3][3][0] = 5
[5, 1, 2, [...]]
```
Whereas, when I try this:
```
>>> x = range(3)
[0, 1, 2]
>>> x.append(range(3))
[0, 1, 2, [0, 1, 2]]
```
I can easily deduce the reason for the second case, but I am not able to understand what appending the list to itself does to the appended list. | In Python 2, `range` returns a `list`.
`list`s, like most things in Python, are *objects* with *identities*.
```
li = [0,1]
li[1] = li # [0, [...]]
# ^----v
id(li) # 2146307756
id(li[1]) # 2146307756
```
Since you're putting the list inside *itself*, you're creating a *recursive* data structure. |
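A short sketch demonstrating the identity: the list really does contain itself, so indexing into it any number of times yields the very same object, and mutating through the "nested" reference mutates the list itself:

```python
x = list(range(3))   # works the same in Python 2 and 3
x.append(x)

# every level of indexing returns the very same list object
print(x[3] is x)         # True
print(x[3][3][3] is x)   # True

# mutating through the "nested" reference mutates x itself
x[3][3][0] = 5
print(x[0])              # 5
```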
pip install - locale.Error: unsupported locale setting | 36,394,101 | 30 | 2016-04-04T03:24:45Z | 36,394,262 | 86 | 2016-04-04T03:47:21Z | [
"python",
"python-3.x",
"centos",
"pip"
] | Full stacktrace:
```
➜ ~ pip install virtualenv
Traceback (most recent call last):
File "/usr/bin/pip", line 11, in <module>
sys.exit(main())
File "/usr/lib/python3.4/site-packages/pip/__init__.py", line 215, in main
locale.setlocale(locale.LC_ALL, '')
File "/usr/lib64/python3.4/locale.py", line 592, in setlocale
return _setlocale(category, locale)
locale.Error: unsupported locale setting
```
On the same server, I previously ran `pip install virtualenv`, and it was python 2.7.x at that time.
Now, I've just installed python3.4 using `curl https://bootstrap.pypa.io/get-pip.py | python3.4`.
```
➜ ~ pip --version
pip 8.1.1 from /usr/lib/python3.4/site-packages (python 3.4)
```
`pip uninstall virtualenv` throws the same error, too. | Try this:
```
$ export LC_ALL=C
```
Here is my `locale` settings:
```
$ locale
LANG=en_US.UTF-8
LANGUAGE=
LC_CTYPE="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_COLLATE="C"
LC_MONETARY="C"
LC_MESSAGES="C"
LC_PAPER="C"
LC_NAME="C"
LC_ADDRESS="C"
LC_TELEPHONE="C"
LC_MEASUREMENT="C"
LC_IDENTIFICATION="C"
LC_ALL=C
```
***Python2.7***
```
$ uname -a
Linux debian 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u6 (2015-11-09) x86_64 GNU/Linux
$ python --version
Python 2.7.9
$ pip --version
pip 8.1.1 from /usr/local/lib/python2.7/dist-packages (python 2.7)
$ unset LC_ALL
$ pip install virtualenv
Traceback (most recent call last):
File "/usr/local/bin/pip", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/pip/__init__.py", line 215, in main
locale.setlocale(locale.LC_ALL, '')
File "/usr/lib/python2.7/locale.py", line 579, in setlocale
return _setlocale(category, locale)
locale.Error: unsupported locale setting
$ export LC_ALL=C
$ pip install virtualenv
Requirement already satisfied (use --upgrade to upgrade): virtualenv in /usr/local/lib/python2.7/dist-packages
``` |
How to add index into a dict | 36,395,127 | 5 | 2016-04-04T05:29:36Z | 36,395,180 | 10 | 2016-04-04T05:34:35Z | [
"python",
"list",
"dictionary",
"indexing",
"list-comprehension"
] | For example, given:
```
['A', 'B', 'A', 'B']
```
I want to have:
```
{'A': [0, 2], 'B': [1, 3]}
```
I tried a loop that adds the index where the character is found, then replaces it with `''` so that on the next pass the loop moves on to the next character.
However, that loop doesn't work for other reasons, and I'm stuck with no idea how to proceed. | Use [enumerate](https://docs.python.org/2/library/functions.html#enumerate) and [setdefault](http://www.tutorialspoint.com/python/dictionary_setdefault.htm):
```
example = ['a', 'b', 'a', 'b']
mydict = {}
for idx, item in enumerate(example):
indexes = mydict.setdefault(item, [])
indexes.append(idx)
``` |
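An equivalent sketch (my own addition) using `collections.defaultdict`, which avoids the explicit `setdefault` call:

```python
from collections import defaultdict

example = ['a', 'b', 'a', 'b']

indexes = defaultdict(list)          # missing keys start as empty lists
for idx, item in enumerate(example):
    indexes[item].append(idx)

print(dict(indexes))  # {'a': [0, 2], 'b': [1, 3]}
```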
Why I cannot import a library that is in my pip list? | 36,397,885 | 2 | 2016-04-04T08:34:27Z | 36,397,946 | 7 | 2016-04-04T08:37:52Z | [
"python",
"import",
"pip",
"pypng"
] | I have just installed the `pypng` library with `sudo -E pip install pypng`. I see this library in the list when I execute `pip list` (the version I see there is `0.0.18`).
When I start python (or ipython) session and execute
```
import pypng
```
I get
```
ImportError: No module named pypng
``` | According to the docs on [GitHub](https://github.com/drj11/pypng#quick-start), you need to import `png`:
```
import png
png.from_array([[255, 0, 0, 255],
[0, 255, 255, 0]], 'L').save("small_smiley.png")
``` |
PyInstaller doesn't import Queue | 36,400,111 | 7 | 2016-04-04T10:25:11Z | 36,668,562 | 8 | 2016-04-16T19:15:25Z | [
"python",
"pyinstaller"
] | I'm trying to compile a script that uses twisted and Queue.
```
pyinstaller sample.py --onefile
```
```
# -*- coding: utf-8 -*-#
from twisted import *
import queue as Queue
a = Queue.Queue()
```
Unfortunately, the produced file fails with `ImportError: No module named queue`. | I don't think this is a PyInstaller or Twisted related issue at all. The `Queue` module is part of the standard library, and the issue is how you're naming it. In Python 2, it's `Queue` with a capital letter, but in Python 3, it's renamed `queue` to follow the more standard naming convention where modules have lowercase names.
Your script seems like it's a port of Python 2 code to Python 3 (thus the `as Queue` part of the `import`), but you're running it with Python 2 still. That may fail in other more subtle ways than just the `Queue` import being wrong (e.g. its Unicode handling may be all wrong). |
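If the script really needs to run under both versions, a common hedge is a try/except import (a generic compatibility pattern, not specific to PyInstaller):

```python
try:
    import queue            # Python 3 name
except ImportError:
    import Queue as queue   # Python 2 name

# either way, the rest of the code uses the lowercase name
q = queue.Queue()
q.put(1)
print(q.get())  # 1
```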
Python: Elementwise join of two lists of same length | 36,403,851 | 5 | 2016-04-04T13:20:25Z | 36,403,968 | 7 | 2016-04-04T13:25:01Z | [
"python"
] | I have two lists of the same length
```
a = [[1,2], [2,3], [3,4]]
b = [[9], [10,11], [12,13,19,20]]
```
and want to combine them to
```
c = [[1, 2, 9], [2, 3, 10, 11], [3, 4, 12, 13, 19, 20]]
```
I do this by
```
c= []
for i in range(0,len(a)):
c.append(a[i]+ b[i])
```
However, coming from R I am used to avoiding for loops, and alternatives like zip and itertools did not generate my desired output. Is there a better way to do this?
**EDIT:**
Thanks for the help! My lists have 300,000 components. The execution time of the solutions are
```
[a_ + b_ for a_, b_ in zip(a, b)]
1.59425 seconds
list(map(operator.add, a, b))
2.11901 seconds
``` | ```
>>> help(map)
map(...)
map(function, sequence[, sequence, ...]) -> list
Return a list of the results of applying the function to the items of
the argument sequence(s). If more than one sequence is given, the
function is called with an argument list consisting of the corresponding
item of each sequence, substituting None for missing values when not all
sequences have the same length. If the function is None, return a list of
the items of the sequence (or a list of tuples if more than one sequence).
```
As you can see, `map(…)` can take multiple iterables as arguments.
```
>>> import operator
>>> help(operator.add)
add(...)
add(a, b) -- Same as a + b.
```
So:
```
>>> import operator
>>> map(operator.add, a, b)
[[1, 2, 9], [2, 3, 10, 11], [3, 4, 12, 13]]
```
Please notice that in Python 3, `map(…)` returns a [generator](https://wiki.python.org/moin/Generators)-like iterator. If you need random access or want to iterate over the result multiple times, you have to use `list(map(…))`. |
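A small sketch (my own addition) of the Python 3 behaviour: the `map` object is a one-shot iterator, so it is exhausted after the first pass:

```python
import operator

a = [[1, 2], [2, 3]]
b = [[9], [10, 11]]

m = map(operator.add, a, b)
first = list(m)
print(first)     # [[1, 2, 9], [2, 3, 10, 11]]
print(list(m))   # []  -- under Python 3 the iterator is now exhausted
```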
Recognize the characters of license plate | 36,407,608 | 4 | 2016-04-04T16:06:44Z | 36,412,223 | 9 | 2016-04-04T20:23:53Z | [
"python",
"c++",
"opencv"
] | I try to recognize the characters of license plates using OCR, but my licence plate images are of poor quality. [](http://i.stack.imgur.com/a3QZd.jpg)
I'm trying to somehow improve the character recognition for OCR, but my best result is this: [](http://i.stack.imgur.com/AC0gz.jpg)
And even Tesseract on this picture does not recognize any character. My code is:
```
#include <cv.h> // open cv general include file
#include <highgui.h> // open cv GUI include file
#include <iostream> // standard C++ I/O
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <string>
using namespace cv;
int main( int argc, char** argv )
{
Mat src;
Mat dst;
Mat const structure_elem = getStructuringElement(
MORPH_RECT, Size(2,2));
src = imread(argv[1], CV_LOAD_IMAGE_COLOR); // Read the file
cvtColor(src,src,CV_BGR2GRAY);
imshow( "plate", src );
GaussianBlur(src, src, Size(1,1), 1.5, 1.5);
imshow( "blur", src );
equalizeHist(src, src);
imshow( "equalize", src );
adaptiveThreshold(src, src, 255, ADAPTIVE_THRESH_GAUSSIAN_C, CV_THRESH_BINARY, 15, -1);
imshow( "threshold", src );
morphologyEx(src, src, MORPH_CLOSE, structure_elem);
imshow( "morphological operation", src );
imwrite("end.jpg", src);
waitKey(0);
return 0;
}
```
And my question is: do you know how to achieve better results? A clearer image? My licence plate is of poor quality, but I would like the result to be readable by OCR (for example Tesseract).
Thank you for your answers; I really do not know how to do this. | One possible algorithm to clean up the images is as follows:
* Scale the image up, so that the letters are more substantial.
* Reduce the image to only 8 colours by k-means clustering.
* Threshold the image, and erode it to fill in any small gaps and make the letters more substantial.
* Invert the image to make masking easier.
* Create a blank mask image of the same size, set to all zeros
* Find contours in the image. For each contour:
+ Find bounding box of the contour
+ Find the area of the bounding box
+ If the area is too small or too large, drop the contour (I chose 1000 and 10000 as limits)
+ Otherwise draw a filled rectangle corresponding to the bounding box on the mask with white colour (255)
+ Store the bounding box and the corresponding image ROI
* For each separated character (bounding box + image)
+ Recognise the character
---
Note: I prototyped this in Python 2.7 with OpenCV 3.1. C++ ports of this code are near the end of this answer.
---
# Character Recognition
I took inspiration for the character recognition from [this question](http://stackoverflow.com/questions/9413216/simple-digit-recognition-ocr-in-opencv-python) on SO.
Then I found an [image](http://www.feudal.cz/spz/assets/images/spz_2004f.gif) that we can use to extract training images for the correct font. I cut them down to only include digits and letters without accents.
`train_digits.png`:
[](http://i.stack.imgur.com/f1DB2.png)
`train_letters.png`:
[](http://i.stack.imgur.com/L2dSg.png)
Then I wrote a script that splits out the individual characters, scales them up and prepares training images that contain a single character per file:
```
import os
import cv2
import numpy as np
# ============================================================================
def extract_chars(img):
bw_image = cv2.bitwise_not(img)
contours = cv2.findContours(bw_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[1]
char_mask = np.zeros_like(img)
bounding_boxes = []
for contour in contours:
x,y,w,h = cv2.boundingRect(contour)
x,y,w,h = x-2, y-2, w+4, h+4
bounding_boxes.append((x,y,w,h))
characters = []
for bbox in bounding_boxes:
x,y,w,h = bbox
char_image = img[y:y+h,x:x+w]
characters.append(char_image)
return characters
# ============================================================================
def output_chars(chars, labels):
for i, char in enumerate(chars):
filename = "chars/%s.png" % labels[i]
char = cv2.resize(char
, None
, fx=3
, fy=3
, interpolation=cv2.INTER_CUBIC)
cv2.imwrite(filename, char)
# ============================================================================
if not os.path.exists("chars"):
os.makedirs("chars")
img_digits = cv2.imread("train_digits.png", 0)
img_letters = cv2.imread("train_letters.png", 0)
digits = extract_chars(img_digits)
letters = extract_chars(img_letters)
DIGITS = [0, 9, 8, 7, 6, 5, 4, 3, 2, 1]
LETTERS = [chr(ord('A') + i) for i in range(25,-1,-1)]
output_chars(digits, DIGITS)
output_chars(letters, LETTERS)
# ============================================================================
```
---
The next step was to generate the training data from the character files we created with the previous script.
I followed the algorithm from the answer to the question mentioned above, resizing each character image to 10x10 and using all the pixels as keypoints.
I save the training data as `char_samples.data` and `char_responses.data`
Script to generate training data:
```
import cv2
import numpy as np
CHARS = [chr(ord('0') + i) for i in range(10)] + [chr(ord('A') + i) for i in range(26)]
# ============================================================================
def load_char_images():
characters = {}
for char in CHARS:
char_img = cv2.imread("chars/%s.png" % char, 0)
characters[char] = char_img
return characters
# ============================================================================
characters = load_char_images()
samples = np.empty((0,100))
for char in CHARS:
char_img = characters[char]
small_char = cv2.resize(char_img,(10,10))
sample = small_char.reshape((1,100))
samples = np.append(samples,sample,0)
responses = np.array([ord(c) for c in CHARS],np.float32)
responses = responses.reshape((responses.size,1))
np.savetxt('char_samples.data',samples)
np.savetxt('char_responses.data',responses)
# ============================================================================
```
---
Once we have the training data created, we can run the main script:
```
import cv2
import numpy as np
# ============================================================================
def reduce_colors(img, n):
Z = img.reshape((-1,3))
# convert to np.float32
Z = np.float32(Z)
# define criteria, number of clusters(K) and apply kmeans()
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K = n
ret,label,center=cv2.kmeans(Z,K,None,criteria,10,cv2.KMEANS_RANDOM_CENTERS)
# Now convert back into uint8, and make original image
center = np.uint8(center)
res = center[label.flatten()]
res2 = res.reshape((img.shape))
return res2
# ============================================================================
def clean_image(img):
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
resized_img = cv2.resize(gray_img
, None
, fx=5.0
, fy=5.0
, interpolation=cv2.INTER_CUBIC)
resized_img = cv2.GaussianBlur(resized_img,(5,5),0)
cv2.imwrite('licence_plate_large.png', resized_img)
equalized_img = cv2.equalizeHist(resized_img)
cv2.imwrite('licence_plate_equ.png', equalized_img)
reduced = cv2.cvtColor(reduce_colors(cv2.cvtColor(equalized_img, cv2.COLOR_GRAY2BGR), 8), cv2.COLOR_BGR2GRAY)
cv2.imwrite('licence_plate_red.png', reduced)
ret, mask = cv2.threshold(reduced, 64, 255, cv2.THRESH_BINARY)
cv2.imwrite('licence_plate_mask.png', mask)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
mask = cv2.erode(mask, kernel, iterations = 1)
cv2.imwrite('licence_plate_mask2.png', mask)
return mask
# ============================================================================
def extract_characters(img):
bw_image = cv2.bitwise_not(img)
contours = cv2.findContours(bw_image, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[1]
char_mask = np.zeros_like(img)
bounding_boxes = []
for contour in contours:
x,y,w,h = cv2.boundingRect(contour)
area = w * h
center = (x + w/2, y + h/2)
if (area > 1000) and (area < 10000):
x,y,w,h = x-4, y-4, w+8, h+8
bounding_boxes.append((center, (x,y,w,h)))
cv2.rectangle(char_mask,(x,y),(x+w,y+h),255,-1)
cv2.imwrite('licence_plate_mask3.png', char_mask)
clean = cv2.bitwise_not(cv2.bitwise_and(char_mask, char_mask, mask = bw_image))
bounding_boxes = sorted(bounding_boxes, key=lambda item: item[0][0])
characters = []
for center, bbox in bounding_boxes:
x,y,w,h = bbox
char_image = clean[y:y+h,x:x+w]
characters.append((bbox, char_image))
return clean, characters
def highlight_characters(img, chars):
output_img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
for bbox, char_img in chars:
x,y,w,h = bbox
cv2.rectangle(output_img,(x,y),(x+w,y+h),255,1)
return output_img
# ============================================================================
img = cv2.imread("licence_plate.jpg")
img = clean_image(img)
clean_img, chars = extract_characters(img)
output_img = highlight_characters(clean_img, chars)
cv2.imwrite('licence_plate_out.png', output_img)
samples = np.loadtxt('char_samples.data',np.float32)
responses = np.loadtxt('char_responses.data',np.float32)
responses = responses.reshape((responses.size,1))
model = cv2.ml.KNearest_create()
model.train(samples, cv2.ml.ROW_SAMPLE, responses)
plate_chars = ""
for bbox, char_img in chars:
small_img = cv2.resize(char_img,(10,10))
small_img = small_img.reshape((1,100))
small_img = np.float32(small_img)
retval, results, neigh_resp, dists = model.findNearest(small_img, k = 1)
plate_chars += str(chr((results[0][0])))
print("Licence plate: %s" % plate_chars)
```
---
# Script Output
Enlarged 5x:
[](http://i.stack.imgur.com/wd6rN.png)
Equalized:
[](http://i.stack.imgur.com/8SnD4.png)
Reduced to 8 colours:
[](http://i.stack.imgur.com/iuxXh.png)
Thresholded:
[](http://i.stack.imgur.com/fQo00.png)
Eroded:
[](http://i.stack.imgur.com/RRumt.png)
Mask selecting only characters:
[](http://i.stack.imgur.com/QB7ys.png)
Clean image with bounding boxes:
[](http://i.stack.imgur.com/3tb0V.png)
Console output:
`Licence plate: 2B99996`
---
C++ code, using OpenCV 2.4.11 and Boost.Filesystem to iterate over files in a directory.
```
#include <boost/filesystem.hpp>
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
// ============================================================================
namespace fs = boost::filesystem;
// ============================================================================
typedef std::vector<std::string> string_list;
struct char_match_t
{
cv::Point2i position;
cv::Mat image;
};
typedef std::vector<char_match_t> char_match_list;
// ----------------------------------------------------------------------------
string_list find_input_files(std::string const& dir)
{
string_list result;
fs::path dir_path(dir);
fs::directory_iterator end_itr;
for (fs::directory_iterator i(dir_path); i != end_itr; ++i) {
if (!fs::is_regular_file(i->status())) continue;
if (i->path().extension() == ".png") {
result.push_back(i->path().string());
}
}
return result;
}
// ----------------------------------------------------------------------------
cv::Mat reduce_image(cv::Mat const& img, int K)
{
int n = img.rows * img.cols;
cv::Mat data = img.reshape(1, n);
data.convertTo(data, CV_32F);
std::vector<int> labels;
cv::Mat1f colors;
cv::kmeans(data, K, labels
, cv::TermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 10000, 0.0001)
, 5, cv::KMEANS_PP_CENTERS, colors);
for (int i = 0; i < n; ++i) {
data.at<float>(i, 0) = colors(labels[i], 0);
}
cv::Mat reduced = data.reshape(1, img.rows);
reduced.convertTo(reduced, CV_8U);
return reduced;
}
// ----------------------------------------------------------------------------
cv::Mat clean_image(cv::Mat const& img)
{
cv::Mat resized_img;
cv::resize(img, resized_img, cv::Size(), 5.0, 5.0, cv::INTER_CUBIC);
cv::Mat equalized_img;
cv::equalizeHist(resized_img, equalized_img);
cv::Mat reduced_img(reduce_image(equalized_img, 8));
cv::Mat mask;
cv::threshold(reduced_img
, mask
, 64
, 255
, cv::THRESH_BINARY);
cv::Mat kernel(cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)));
cv::erode(mask, mask, kernel, cv::Point(-1, -1), 1);
return mask;
}
// ----------------------------------------------------------------------------
cv::Point2i center(cv::Rect const& bounding_box)
{
return cv::Point2i(bounding_box.x + bounding_box.width / 2
, bounding_box.y + bounding_box.height / 2);
}
// ----------------------------------------------------------------------------
char_match_list extract_characters(cv::Mat const& img)
{
cv::Mat inverse_img;
cv::bitwise_not(img, inverse_img);
std::vector<std::vector<cv::Point>> contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(inverse_img.clone(), contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
char_match_list result;
double const MIN_CONTOUR_AREA(1000.0);
double const MAX_CONTOUR_AREA(6000.0);
for (uint32_t i(0); i < contours.size(); ++i) {
cv::Rect bounding_box(cv::boundingRect(contours[i]));
int bb_area(bounding_box.area());
if ((bb_area >= MIN_CONTOUR_AREA) && (bb_area <= MAX_CONTOUR_AREA)) {
int PADDING(2);
bounding_box.x -= PADDING;
bounding_box.y -= PADDING;
bounding_box.width += PADDING * 2;
bounding_box.height += PADDING * 2;
char_match_t match;
match.position = center(bounding_box);
match.image = img(bounding_box);
result.push_back(match);
}
}
std::sort(begin(result), end(result)
, [](char_match_t const& a, char_match_t const& b) -> bool
{
return a.position.x < b.position.x;
});
return result;
}
// ----------------------------------------------------------------------------
std::pair<float, cv::Mat> train_character(char c, cv::Mat const& img)
{
cv::Mat small_char;
cv::resize(img, small_char, cv::Size(10, 10), 0, 0, cv::INTER_LINEAR);
cv::Mat small_char_float;
small_char.convertTo(small_char_float, CV_32FC1);
cv::Mat small_char_linear(small_char_float.reshape(1, 1));
return std::pair<float, cv::Mat>(
static_cast<float>(c)
, small_char_linear);
}
// ----------------------------------------------------------------------------
std::string process_image(cv::Mat const& img, cv::KNearest& knn)
{
cv::Mat clean_img(clean_image(img));
char_match_list characters(extract_characters(clean_img));
std::string result;
for (char_match_t const& match : characters) {
cv::Mat small_char;
cv::resize(match.image, small_char, cv::Size(10, 10), 0, 0, cv::INTER_LINEAR);
cv::Mat small_char_float;
small_char.convertTo(small_char_float, CV_32FC1);
cv::Mat small_char_linear(small_char_float.reshape(1, 1));
float p = knn.find_nearest(small_char_linear, 1);
result.push_back(char(p));
}
return result;
}
// ============================================================================
int main()
{
string_list train_files(find_input_files("./chars"));
cv::Mat samples, responses;
for (std::string const& file_name : train_files) {
cv::Mat char_img(cv::imread(file_name, 0));
std::pair<float, cv::Mat> tinfo(train_character(file_name[file_name.size() - 5], char_img));
responses.push_back(tinfo.first);
samples.push_back(tinfo.second);
}
cv::KNearest knn;
knn.train(samples, responses);
string_list input_files(find_input_files("./input"));
for (std::string const& file_name : input_files) {
cv::Mat plate_img(cv::imread(file_name, 0));
std::string plate(process_image(plate_img, knn));
std::cout << file_name << " : " << plate << "\n";
}
}
// ============================================================================
```
---
C++ code, using OpenCV 3.1 and Boost.Filesystem to iterate over files in a directory.
```
#include <boost/filesystem.hpp>
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
// ============================================================================
namespace fs = boost::filesystem;
// ============================================================================
typedef std::vector<std::string> string_list;
struct char_match_t
{
cv::Point2i position;
cv::Mat image;
};
typedef std::vector<char_match_t> char_match_list;
// ----------------------------------------------------------------------------
string_list find_input_files(std::string const& dir)
{
string_list result;
fs::path dir_path(dir);
boost::filesystem::directory_iterator end_itr;
for (boost::filesystem::directory_iterator i(dir_path); i != end_itr; ++i) {
if (!boost::filesystem::is_regular_file(i->status())) continue;
if (i->path().extension() == ".png") {
result.push_back(i->path().string());
}
}
return result;
}
// ----------------------------------------------------------------------------
cv::Mat reduce_image(cv::Mat const& img, int K)
{
int n = img.rows * img.cols;
cv::Mat data = img.reshape(1, n);
data.convertTo(data, CV_32F);
std::vector<int> labels;
cv::Mat1f colors;
cv::kmeans(data, K, labels
, cv::TermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 10000, 0.0001)
, 5, cv::KMEANS_PP_CENTERS, colors);
for (int i = 0; i < n; ++i) {
data.at<float>(i, 0) = colors(labels[i], 0);
}
cv::Mat reduced = data.reshape(1, img.rows);
reduced.convertTo(reduced, CV_8U);
return reduced;
}
// ----------------------------------------------------------------------------
cv::Mat clean_image(cv::Mat const& img)
{
cv::Mat resized_img;
cv::resize(img, resized_img, cv::Size(), 5.0, 5.0, cv::INTER_CUBIC);
cv::Mat equalized_img;
cv::equalizeHist(resized_img, equalized_img);
cv::Mat reduced_img(reduce_image(equalized_img, 8));
cv::Mat mask;
cv::threshold(reduced_img
, mask
, 64
, 255
, cv::THRESH_BINARY);
cv::Mat kernel(cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)));
cv::erode(mask, mask, kernel, cv::Point(-1, -1), 1);
return mask;
}
// ----------------------------------------------------------------------------
cv::Point2i center(cv::Rect const& bounding_box)
{
return cv::Point2i(bounding_box.x + bounding_box.width / 2
, bounding_box.y + bounding_box.height / 2);
}
// ----------------------------------------------------------------------------
char_match_list extract_characters(cv::Mat const& img)
{
cv::Mat inverse_img;
cv::bitwise_not(img, inverse_img);
std::vector<std::vector<cv::Point>> contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(inverse_img.clone(), contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
char_match_list result;
double const MIN_CONTOUR_AREA(1000.0);
double const MAX_CONTOUR_AREA(6000.0);
    for (std::size_t i(0); i < contours.size(); ++i) {
cv::Rect bounding_box(cv::boundingRect(contours[i]));
int bb_area(bounding_box.area());
if ((bb_area >= MIN_CONTOUR_AREA) && (bb_area <= MAX_CONTOUR_AREA)) {
int PADDING(2);
bounding_box.x -= PADDING;
bounding_box.y -= PADDING;
bounding_box.width += PADDING * 2;
bounding_box.height += PADDING * 2;
char_match_t match;
match.position = center(bounding_box);
match.image = img(bounding_box);
result.push_back(match);
}
}
std::sort(begin(result), end(result)
, [](char_match_t const& a, char_match_t const& b) -> bool
{
return a.position.x < b.position.x;
});
return result;
}
// ----------------------------------------------------------------------------
std::pair<float, cv::Mat> train_character(char c, cv::Mat const& img)
{
cv::Mat small_char;
cv::resize(img, small_char, cv::Size(10, 10), 0, 0, cv::INTER_LINEAR);
cv::Mat small_char_float;
small_char.convertTo(small_char_float, CV_32FC1);
cv::Mat small_char_linear(small_char_float.reshape(1, 1));
return std::pair<float, cv::Mat>(
static_cast<float>(c)
, small_char_linear);
}
// ----------------------------------------------------------------------------
std::string process_image(cv::Mat const& img, cv::Ptr<cv::ml::KNearest> knn)
{
cv::Mat clean_img(clean_image(img));
char_match_list characters(extract_characters(clean_img));
std::string result;
for (char_match_t const& match : characters) {
cv::Mat small_char;
cv::resize(match.image, small_char, cv::Size(10, 10), 0, 0, cv::INTER_LINEAR);
cv::Mat small_char_float;
small_char.convertTo(small_char_float, CV_32FC1);
cv::Mat small_char_linear(small_char_float.reshape(1, 1));
cv::Mat tmp;
float p = knn->findNearest(small_char_linear, 1, tmp);
result.push_back(char(p));
}
return result;
}
// ============================================================================
int main()
{
string_list train_files(find_input_files("./chars"));
cv::Mat samples, responses;
for (std::string const& file_name : train_files) {
cv::Mat char_img(cv::imread(file_name, 0));
std::pair<float, cv::Mat> tinfo(train_character(file_name[file_name.size() - 5], char_img));
responses.push_back(tinfo.first);
samples.push_back(tinfo.second);
}
cv::Ptr<cv::ml::KNearest> knn(cv::ml::KNearest::create());
cv::Ptr<cv::ml::TrainData> training_data =
cv::ml::TrainData::create(samples
, cv::ml::SampleTypes::ROW_SAMPLE
, responses);
knn->train(training_data);
string_list input_files(find_input_files("./input"));
for (std::string const& file_name : input_files) {
cv::Mat plate_img(cv::imread(file_name, 0));
std::string plate(process_image(plate_img, knn));
std::cout << file_name << " : " << plate << "\n";
}
}
// ============================================================================
```
--- |
How to create a random array in a certain range | 36,412,006 | 6 | 2016-04-04T20:11:12Z | 36,412,132 | 10 | 2016-04-04T20:17:55Z | [
"python",
"arrays",
"numpy",
"random",
"numpy-random"
] | Suppose I want to create a list or a numpy array of 5 elements like this:
```
array = [i, j, k, l, m]
```
where:
* `i` is in range 1.5 to 12.4
* `j` is in range 0 to 5
* `k` is in range 4 to 16
* `l` is in range 3 to 5
* `m` is in range 2.4 to 8.9.
This is an example to show that some ranges include fractions. What would be an easy way to do this? | You can just do (thanks user2357112!)
```
[np.random.uniform(1.5, 12.4), np.random.uniform(0, 5), ...]
```
using [`numpy.random.uniform`](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.random.uniform.html). |
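As a further shortening (relying on the fact that `numpy.random.uniform` broadcasts array-like `low`/`high` arguments), all five bounds can be passed in a single call. This is an editor's sketch, not part of the original answer:
```python
import numpy as np

low = [1.5, 0, 4, 3, 2.4]
high = [12.4, 5, 16, 5, 8.9]
array = np.random.uniform(low, high)  # one sample per (low, high) pair
print(array.shape)  # (5,)
```
Each element of the result is drawn independently from its own `[low, high)` range.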
map vs list; why different behaviour? | 36,417,879 | 13 | 2016-04-05T05:33:00Z | 36,418,140 | 12 | 2016-04-05T05:52:33Z | [
"python",
"python-3.x",
"functional-programming"
] | In the course of implementing the "Variable Elimination" algorithm for a Bayes' Nets program, I encountered an unexpected bug that was the result of an iterative map transformation of a sequence of objects.
For simplicity's sake, I'll use an analogous piece of code here:
```
>>> nums = [1, 2, 3]
>>> for x in [4, 5, 6]:
... # Uses n if x is odd, uses (n + 10) if x is even
... nums = map(
... lambda n: n if x % 2 else n + 10,
... nums)
...
>>> list(nums)
[31, 32, 33]
```
This is definitely the wrong result. Since [4, 5, 6] contains two even numbers, `10` should be added to each element at most twice. I was getting unexpected behaviour with this in the VE algorithm as well, so I modified it to convert the `map` iterator to a `list` after each iteration.
```
>>> nums = [1, 2, 3]
>>> for x in [4, 5, 6]:
... # Uses n if x is odd, uses (n + 10) if x is even
... nums = map(
... lambda n: n if x % 2 else n + 10,
... nums)
... nums = list(nums)
...
>>> list(nums)
[21, 22, 23]
```
From my understanding of iterables, this modification *shouldn't* change anything, but it does. Clearly, the `n + 10` transform for the `not x % 2` case is applied one fewer times in the `list`-ed version.
My Bayes Nets program worked as well after finding this bug, but I'm looking for an explanation as to why it occurred. | The answer is very simple: [`map`](https://docs.python.org/3/library/functions.html#map) is a [lazy](http://stackoverflow.com/questions/20535342/lazy-evaluation-python) function in Python 3, it returns an iterable object (in Python 2 it returns a `list`). Let me add some output to your example:
```
In [6]: nums = [1, 2, 3]
In [7]: for x in [4, 5, 6]:
...: nums = map(lambda n: n if x % 2 else n + 10, nums)
...: print(x)
...: print(nums)
...:
4
<map object at 0x7ff5e5da6320>
5
<map object at 0x7ff5e5da63c8>
6
<map object at 0x7ff5e5da6400>
In [8]: print(x)
6
In [9]: list(nums)
Out[9]: [31, 32, 33]
```
Note the `In[8]` - the value of `x` is 6. We could also transform the `lambda` function, passed to `map` in order to track the value of `x`:
```
In [10]: nums = [1, 2, 3]
In [11]: for x in [4, 5, 6]:
....: nums = map(lambda n: print(x) or (n if x % 2 else n + 10), nums)
....:
In [12]: list(nums)
6
6
6
6
6
6
6
6
6
Out[12]: [31, 32, 33]
```
Because `map` is lazy, the lambdas only run when `list` is called; by that point the loop has finished and `x` is `6`, which is why the output is confusing. Evaluating `nums` inside the loop produces the *expected* output.
```
In [13]: nums = [1, 2, 3]
In [14]: for x in [4, 5, 6]:
....: nums = map(lambda n: print(x) or (n if x % 2 else n + 10), nums)
....: nums = list(nums)
....:
4
4
4
5
5
5
6
6
6
In [15]: nums
Out[15]: [21, 22, 23]
``` |
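A common way to keep the laziness but avoid the late-binding surprise (an editor's sketch, not from the original answer) is to freeze `x` into each lambda with a default argument:
```python
nums = [1, 2, 3]
for x in [4, 5, 6]:
    # x=x captures the *current* value of x in the lambda's defaults,
    # so the deferred evaluation no longer sees the final x
    nums = map(lambda n, x=x: n if x % 2 else n + 10, nums)

result = list(nums)
print(result)  # [21, 22, 23]
```
Each lambda now carries its own copy of `x`, so the chained `map` objects give the same result as the eager `list`-per-iteration version.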
Django: Is there a way to keep the dev server from restarting when a local .py file is changed and dynamically loaded? | 36,420,833 | 5 | 2016-04-05T08:21:45Z | 36,420,989 | 9 | 2016-04-05T08:28:36Z | [
"python",
"django"
] | In Django (1.9) trying to load `.py` files (modules) dynamically (via `importlib`). The dynamic reload is working like a charm, but every time I reload a module, the dev server restarts, having to reload everything else.
I'm pulling in a lot of outside data (xml) for testing purposes, and every time the environment restarts, it has to reload all of this external xml data. I want to be able to reload a module only, and keep that already loaded xml data intact, so that it doesn't have to go through that process every time I change some py-code.
**Is there a flag I can set/toggle (or any other method) to keep the server from restarting the whole process for this single module reload?**
Any help very appreciated. | If you run the development server using [`--noreload`](https://docs.djangoproject.com/en/1.9/ref/django-admin/#cmdoption-runserver--noreload) parameter it will not auto reload the changes:
```
python manage.py runserver --noreload
```
> Disables the auto-reloader. This means any Python code changes you make while the server is running will not take effect if the particular Python modules have already been loaded into memory. |
Python decorator example | 36,427,374 | 4 | 2016-04-05T13:11:02Z | 36,427,481 | 7 | 2016-04-05T13:15:30Z | [
"python",
"decorator",
"python-decorators"
] | I am learning a bit about decorators from a great tutorial on [thecodeship](http://thecodeship.com/patterns/guide-to-python-function-decorators/) but have found myself rather confused by one example.
First a simple example followed by an explanation is given for what a decorator is.
```
def p_decorate(func):
def func_wrapper(name):
return "<p>{0}</p>".format(func(name))
return func_wrapper
def get_text(name):
return "lorem ipsum, {0} dolor sit amet".format(name)
my_get_text = p_decorate(get_text)
print my_get_text("John")
```
Now this makes sense to me. A decorator is simply a wrapper to a function. And in this guy's explanation he says a decorator is a function that takes another function as an argument, generates a new function, and returns the generated function to be used anywhere.
And now the equivalent to the above code is:
```
def p_decorate(func):
def func_wrapper(name):
return "<p>{0}</p>".format(func(name))
return func_wrapper
@p_decorate
def get_text(name):
return "lorem ipsum, {0} dolor sit amet".format(name)
print get_text("John")
```
I believe I understand the way a decorator is initialized when given no arguments. Correct me if I am wrong.
* The decorator by default passes in the function `get_text` and because `p_decorate` returns a function `func_wrapper`, we end up with the true statement `get_text = func_wrapper`.
What is important to me is the first code block equivalent, because I see and understand how the decorator is behaving.
What very much confuses me is the following code:
```
def tags(tag_name):
def tags_decorator(func):
def func_wrapper(name):
return "<{0}>{1}</{0}>".format(tag_name, func(name))
return func_wrapper
return tags_decorator
@tags("p")
def get_text(name):
return "Hello "+name
print get_text("John")
```
Again, correct me if I'm wrong but this is my understanding.
* The decorator accepts the tag string "p" instead of the
default function name. And in turn the function `tags_decorator`
assumes that the parameter that will be passed in is the function
being decorated, `get_text`.
It might be helpful for me to see the equivalent block of code in "non-decorator" form but I can't seem to wrap my head around what that would begin to look like. I also don't comprehend why `tags_decorator` and `func_wrapper` are both returned. What is the purpose of returning two different functions if a decorator only needs to return 1 function to wrap `get_text`.
As a side note, it really comes down to the following.
* Can this block be simplified to something less than a set of 3 functions?
* Can decorators accept more than 1 argument to simplify code? | Within limits, everything after the `@` is executed to *produce* a decorator. In your first example, what follows after the `@` is just a name:
```
@p_decorate
```
so Python looks up `p_decorate` and calls it with the decorated function as an argument:
```
get_text = p_decorate(get_text)
```
(oversimplified a bit, `get_text` is not initially bound to the original function, but you got the gist already).
In your second example, the decorator expression is a little more involved, it includes a call:
```
@tags("p")
```
so Python uses `tags("p")` to find the decorator. In the end this is what then is executed:
```
get_text = tags("p")(get_text)
```
The *output* of `tags("p")` is the decorator here! The `tags` function itself is a decorator *factory*: it produces a decorator when called. When you call `tags("p")`, it returns the `tags_decorator` function, and that is the *real* decorator here.
You could instead remove the decorator function and hardcode the `"p"` value and use that directly:
```
def tags_decorator_p(func):
def func_wrapper(name):
return "<{0}>{1}</{0}>".format("p", func(name))
return func_wrapper
@tags_decorator_p
def get_text(name):
# ...
```
but then you'd have to create separate decorators for each possible value of the argument to `tags()`. That's the value of a decorator factory, you get to add parameters to the decorator and alter how your function is decorated.
A decorator *factory* can take any number of arguments; it is just a function you call to produce a decorator. The decorator *itself* can only accept one argument, the function-to-decorate.
I said, *within limits* at the start of my answer for a reason; the syntax for the expression following the `@` only allows a dotted name (`foo.bar.baz`, so attribute access) and a call (`(arg1, arg2, keyword=arg3)`). See the [reference documentation](https://docs.python.org/2/reference/compound_stmts.html#grammar-token-decorator). The original [PEP 318](https://www.python.org/dev/peps/pep-0318/) states:
> The decorator statement is limited in what it can accept -- arbitrary expressions will not work. Guido preferred this because of a gut feeling [17] . |
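To the side question of whether decorators can accept more than one argument: the factory can take any number of them, since it is an ordinary function. A hedged sketch (the multi-tag behaviour here is invented for illustration):
```python
def tags(*tag_names):
    def tags_decorator(func):
        def func_wrapper(name):
            text = func(name)
            # wrap in each tag, innermost first
            for tag in reversed(tag_names):
                text = "<{0}>{1}</{0}>".format(tag, text)
            return text
        return func_wrapper
    return tags_decorator

@tags("div", "p")
def get_text(name):
    return "Hello " + name

result = get_text("John")
print(result)  # <div><p>Hello John</p></div>
```
The `@tags("div", "p")` line is still just `get_text = tags("div", "p")(get_text)`; only the factory's signature changed.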
create an ordered dict | 36,442,146 | 3 | 2016-04-06T05:20:05Z | 36,442,210 | 9 | 2016-04-06T05:24:23Z | [
"python",
"dictionary"
] | I have a dict like:
```
original_dict = {'two':'2','three':'3','one':'1','foo':'squirrel'}
```
And I wanted it to be in this order:
`ordered_dict = OrderedDict({'one':'1','two':'2','three':'3','foo':'squirrel'})`
but I don't get the same order:
`{'one':'1','two':'2','three':'3','foo':'squirrel'}` creates a plain dict first, so it doesn't work as I expected.
I saw in the documentation that the sorted method can be used
```
OrderedDict(sorted(d.items(), key=lambda t: len(t[0])))
```
But I don't know a function to return the order I want
I tried
```
ordered_dict = OrderedDict(sorted(original_dict.items(),['one','two','three','foo']))
```
But that didn't work.
Note that the order I want can be quite arbitrary, like:
```
['three','foo','one','two',]
``` | ```
order=['one','two','three','foo']
ordered_dict = OrderedDict((k, original_dict[k]) for k in order)
``` |
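Assembled into a runnable snippet (with the import the answer assumes), covering the arbitrary order from the question:
```python
from collections import OrderedDict

original_dict = {'two': '2', 'three': '3', 'one': '1', 'foo': 'squirrel'}
order = ['three', 'foo', 'one', 'two']  # any arbitrary order works
ordered_dict = OrderedDict((k, original_dict[k]) for k in order)
print(list(ordered_dict.keys()))  # ['three', 'foo', 'one', 'two']
```
If `order` may contain keys absent from `original_dict`, add `if k in original_dict` to the generator to avoid a `KeyError`.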
How can I programmatically terminate a running process in the same script that started it? | 36,452,099 | 3 | 2016-04-06T13:06:44Z | 36,453,982 | 8 | 2016-04-06T14:21:49Z | [
"java",
"python",
"bash",
"perl",
"perl6"
] | **How do I start processes from a script in a way that also allows me to terminate them?**
Basically, I can easily terminate the main script, but terminating the external processes that this main script starts has been the issue. I googled like crazy for Perl 6 solutions. I was just about to post my question and then thought I'd open the question up to solutions in other languages.
Starting external processes is easy with Perl 6:
```
my $proc = shell("possibly_long_running_command");
```
[`shell`](https://doc.perl6.org/routine/shell) returns a process object after the process finishes. So, I don't know how to programmatically find out the PID of the running process because the variable `$proc` isn't even created until the external process finishes. (side note: after it finishes, `$proc.pid` returns an undefined `Any`, so it doesn't tell me what PID it used to have.)
Here is some code demonstrating some of my attempts to create a "self destructing" script:
```
#!/bin/env perl6
say "PID of the main script: $*PID";
# limit run time of this script
Promise.in(10).then( {
say "Took too long! Killing job with PID of $*PID";
shell "kill $*PID"
} );
my $example = shell('echo "PID of bash command: $$"; sleep 20; echo "PID of bash command after sleeping is still $$"');
say "This line is never printed";
```
This results in the following output which kills the main script, but not the externally created process (see output after the word `Terminated`):
```
[prompt]$ ./self_destruct.pl6
PID of the main script: 30432
PID of bash command: 30436
Took too long! Killing job with PID of 30432
Terminated
[prompt]$ my PID after sleeping is still 30436
```
By the way, the PID of `sleep` was also different (i.e. `30437`) according to `top`.
I'm also not sure how to make this work with [`Proc::Async`](https://doc.perl6.org/type/Proc::Async). Unlike the result of `shell`, the asynchronous process object it creates doesn't have a `pid` method.
I was originally looking for a Perl 6 solution, but I'm open to solutions in Python, Perl 5, Java, or any language that interacts with the "shell" reasonably well. | For Perl 6, there seems to be the [Proc::Async](http://doc.perl6.org/type/Proc::Async) module
> Proc::Async allows you to run external commands asynchronously, capturing standard output and error handles, and optionally write to its standard input.
```
# command with arguments
my $proc = Proc::Async.new('echo', 'foo', 'bar');
# subscribe to new output from out and err handles:
$proc.stdout.tap(-> $v { print "Output: $v" });
$proc.stderr.tap(-> $v { print "Error: $v" });
say "Starting...";
my $promise = $proc.start;
# wait for the external program to terminate
await $promise;
say "Done.";
```
Method kill:
```
kill(Proc::Async:D: $signal = "HUP")
```
> Sends a signal to the running program. The signal can be a signal name ("KILL" or "SIGKILL"), an integer (9) or an element of the Signal enum (Signal::SIGKILL).
An example on how to use it:
```
#!/usr/bin/env perl6
use v6;
say 'Start';
my $proc = Proc::Async.new('sleep', 10);
my $promise= $proc.start;
say 'Process started';
sleep 2;
$proc.kill;
await $promise;
say 'Process killed';
```
As you can see, `$proc` has a method to kill the process. |
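Since the question is explicitly open to Python solutions, here is a minimal standard-library sketch using `subprocess` (the command and timings are illustrative only):
```python
import subprocess
import time

proc = subprocess.Popen(['sleep', '5'])  # returns immediately, child runs in background
print('child PID:', proc.pid)            # PID is available right away

time.sleep(0.5)
proc.terminate()                         # sends SIGTERM; proc.kill() sends SIGKILL
proc.wait()                              # reap the child
print('return code:', proc.returncode)   # negative signal number on POSIX
```
Unlike a blocking `shell` call, `Popen` hands the parent a handle to the still-running child, which is exactly what the OP was missing.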
When the Python interpreter deals with a .py file, is it different from dealing with a single statement? | 36,474,782 | 6 | 2016-04-07T11:25:18Z | 36,475,087 | 10 | 2016-04-07T11:38:49Z | [
"python",
"python-2.7"
] | Running:
```
a = 257
b = 257
print id(a) == id(b)
```
Results in:
[](http://i.stack.imgur.com/lyBdc.png)
[](http://i.stack.imgur.com/5YwTb.png)
Same statement but opposite results. Why? | ## The code in `test.py` is parsed together which is more optimizable than the code parsed as separate statements in the interpreter
When you put it in a `test.py` and run it *as a whole*, the byte code compiler has a better chance of analyzing the usage of literals and optimizing them. (Hence you get `a` and `b` pointing to the same place)
As opposed to when you run the separate statements (parsed separately) in the interpreter (where I think it only optimizes up to 256 but not 257 via preallocation)
Play with this in the interpreter to see the effect of separate statements:
```
>>> a, b = 257, 257 # or if you prefer: a = 257; b = 257
>>> print a is b
True
>>> a = 257
>>> b = 257
>>> print a is b
False
```
Defining a function in the interpreter also gives it a chance to analyze and optimize the used literals:
```
>>> def test():
... a = 257
... b = 257
... print a is b
...
>>> test()
True
```
## This optimization is not limited to integers; it works for floats too (floats are not subject to a cache the way integers in the [`[-5, 256]`](https://docs.python.org/2/c-api/int.html#c.PyInt_FromLong) range are)
```
>>> def test():
... pi = 3.14
... x = 3.14
... return x is pi
...
>>> test()
True
# As opposed to separate statements:
>>> pi = 3.14
>>> x = 3.14
>>> x is pi
False
```
## Looking at the byte code to see that it indeed reuses the same constant
```
>>> dis.dis(test)
2 0 LOAD_CONST 1 (3.14)
3 STORE_FAST 0 (pi)
3 6 LOAD_CONST 1 (3.14) <-- Same constant 1 reused
9 STORE_FAST 1 (x)
4 12 LOAD_FAST 1 (x)
15 LOAD_FAST 0 (pi)
18 COMPARE_OP 8 (is)
21 RETURN_VALUE
``` |
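The per-code-object constant de-duplication can be made visible by compiling the statements as one unit and inspecting `co_consts` (a CPython implementation detail, so treat this as a sketch):
```python
# CPython detail: equal constants are stored once per compiled code object
source = "a = 257\nb = 257\nprint(a is b)"
code = compile(source, "<demo>", "exec")

print(code.co_consts)  # 257 appears only once in the constants tuple
exec(code)             # prints True: both names load the same constant
```
Statements typed separately at the prompt are compiled as separate code objects, so they cannot share a constant this way.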
Why do file permissions show different in Python and bash? | 36,486,194 | 9 | 2016-04-07T20:07:53Z | 36,487,902 | 9 | 2016-04-07T21:53:37Z | [
"python",
"bash",
"shell",
"command-line",
"stat"
] | From **Python**:
```
>>> import os
>>> s = os.stat( '/etc/termcap')
>>> print( oct(s.st_mode) )
**0o100444**
```
When I check through **Bash**:
```
$ stat -f "%p %N" /etc/termcap
**120755** /etc/termcap
```
Why does this return a different result? | This is because your /etc/termcap is a **symlink**.
Let me demonstrate this to you:
**Bash**:
```
$ touch bar
$ ln -s bar foo
$ stat -f "%p %N" foo
120755 foo
$ stat -f "%p %N" bar
100644 bar
```
**Python**:
```
>>> import os
>>> oct(os.stat('foo').st_mode)
'0100644'
>>> oct(os.stat('bar').st_mode)
'0100644'
>>> oct(os.lstat('foo').st_mode)
'0120755'
>>> oct(os.lstat('bar').st_mode)
'0100644'
```
Conclusion: use `os.lstat` instead of `os.stat` when you want the metadata of the link itself. |
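On Python 3.3 and later the same distinction can also be expressed with a keyword argument: `os.stat(path, follow_symlinks=False)` behaves like `os.lstat(path)`. A small self-contained sketch:
```python
import os
import stat
import tempfile

tmpdir = tempfile.mkdtemp()
target = os.path.join(tmpdir, 'bar')
link = os.path.join(tmpdir, 'foo')
open(target, 'w').close()       # create an empty regular file
os.symlink(target, link)        # and a symlink pointing at it

# follow_symlinks=False stats the link itself, exactly like os.lstat
mode_link = os.stat(link, follow_symlinks=False).st_mode
mode_followed = os.stat(link).st_mode
print(stat.S_ISLNK(mode_link))      # True: we looked at the symlink
print(stat.S_ISLNK(mode_followed))  # False: we followed it to the file
```
The `stat.S_ISLNK` helper checks the file-type bits of `st_mode`, which is where the `12` vs `10` prefix in the octal modes above comes from.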
Convert list into tuple then add this tuple into list in python | 36,489,069 | 2 | 2016-04-07T23:39:30Z | 36,489,147 | 8 | 2016-04-07T23:46:33Z | [
"python",
"python-3.x"
] | Hi I want my output to be
```
add_sizes(['hello', 'world']) -> [('hello', 5), ('world', 5)]
```
but I'm getting
```
add_sizes(['hello', 'world']) -> [('hello', 5), ('hello', 5, 'helloworld', 5)]
```
My code is
```
def add_sizes(strings):
s = ()
t=[]
m=[]
for i in strings:
x=i
for i,c in enumerate(list(x)):
t.append(c)
l=(str(''.join(t)),i+1)
s += l
m.append(s)
print(m)
```
Any suggestion would be appreciated thanks | Just use a list comprehension:
```
>>> def add_sizes(strings):
... return [(s, len(s)) for s in strings]
...
>>>
>>> add_sizes(['hello', 'world'])
[('hello', 5), ('world', 5)]
```
Or if you want to do it in-place:
```
>>> def add_sizes(strings):
... for i, s in enumerate(strings):
... strings[i] = (s, len(s))
... return strings
...
>>> add_sizes(['hello', 'world'])
[('hello', 5), ('world', 5)]
``` |
How to effectively apply gradient clipping in tensor flow? | 36,498,127 | 4 | 2016-04-08T11:09:55Z | 36,501,922 | 7 | 2016-04-08T14:15:39Z | [
"python",
"machine-learning",
"tensorflow",
"deep-learning",
"lstm"
] | Considering the [example code](https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3%20-%20Neural%20Networks/recurrent_network.py).
I would like to know How to apply gradient clipping on this network on the RNN where there is a possibility of exploding gradients.
```
tf.clip_by_value(t, clip_value_min, clip_value_max, name=None)
```
This is an example that could be used but where do I introduce this ?
In the def of RNN
```
lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0)
# Split data because rnn cell needs a list of inputs for the RNN inner loop
_X = tf.split(0, n_steps, _X) # n_steps
tf.clip_by_value(_X, -1, 1, name=None)
```
But this doesn't make sense as the tensor \_X is the input and not the grad what is to be clipped?
Do I have to define my own Optimizer for this or is there a simpler option? | Gradient clipping needs to happen after computing the gradients, but before applying them to update the model's parameters. In the example you're referring to, both of those things are handled by the `AdamOptimizer.minimize()` method on [line 76](https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3%20-%20Neural%20Networks/recurrent_network.py#L76).
In order to clip your gradients you'll need to explicitly compute, clip, and apply them as described in [this section in TensorFlow's api documentation](https://www.tensorflow.org/versions/r0.7/api_docs/python/train.html#processing-gradients-before-applying-them). Specifically you'll need to substitute line 76 in the example with something like the following: (I haven't tested this code.)
```
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
gvs = optimizer.compute_gradients(cost)
capped_gvs = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gvs]
train_op = optimizer.apply_gradients(capped_gvs)
``` |
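When gradients explode in RNNs, clipping by the *global norm* is often preferred over elementwise value clipping because it preserves the gradient direction; TensorFlow provides this as `tf.clip_by_global_norm`. A framework-free numpy sketch of the idea (an illustration, not the TensorFlow implementation):
```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    # Norm over the whole gradient list, as if all tensors were concatenated.
    global_norm = np.sqrt(sum(np.sum(np.square(g)) for g in grads))
    # Rescale everything by the same factor when the norm exceeds the
    # threshold; gradient lists below the threshold pass through unchanged.
    scale = clip_norm / max(global_norm, clip_norm)
    return [g * scale for g in grads], global_norm

grads = [np.array([3.0, 4.0]), np.array([12.0])]  # global norm = sqrt(9+16+144) = 13
clipped, norm = clip_by_global_norm(grads, 5.0)
print(norm)  # 13.0
# the clipped list has global norm ~5.0 and the same direction as before
```
Because every gradient is scaled by the same factor, the update direction is unchanged; elementwise `clip_by_value` does not have that property.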
Height of binary tree | 36,500,291 | 5 | 2016-04-08T13:02:38Z | 36,500,340 | 8 | 2016-04-08T13:05:01Z | [
"python",
"python-3.x"
] | I'm trying to implement a recursive method to calculate the height of a binary tree. Here is the "height"-code:
```
def height(self):
if self.root==None:
return 0
return max(height(self.root.left), height(self.root.right))+1
```
When I try to call the function, I get the following error msg:
```
NameError: name 'height' is not defined
```
Does anybody see the problem? | This is a method of your class, hence you must call it from an instance (`self`) or the class itself. Though it won't work as you think, unless you define it as a `staticmethod` or change your call, e.g.
```
def height(self):
return 1 + max(self.left.height() if self.left is not None else 0,
self.right.height() if self.right is not None else 0)
```
or
```
@staticmethod
def height(self):
return 1 + max(self.height(self.left) if self.left is not None else 0,
self.height(self.right) if self.right is not None else 0)
```
Note that you shouldn't use `==` to compare with `None` (kudos to timgeb), and you must check whether the child nodes exist, too. Your original algorithm doesn't work as written, so I've changed it slightly.
Example:
```
class Node:
def __init__(self, root=None, left=None, right=None):
self.root = root
self.left = left
self.right = right
def height(self):
return 1 + max(self.left.height() if self.left is not None else 0,
self.right.height() if self.right is not None else 0)
# Create a binary tree of height 4 using the binary-heap property
tree = [Node() for _ in range(10)]
root = tree[0]
for i in range(len(tree)):
    l_child_idx, r_child_idx = (i + 1) * 2 - 1, (i + 1) * 2
    if i > 0:
        tree[i].root = tree[(i - 1) // 2]  # parent of node i in a 0-indexed heap
if l_child_idx < len(tree):
tree[i].left = tree[l_child_idx]
if r_child_idx < len(tree):
tree[i].right = tree[r_child_idx]
print(root.height()) # -> 4
``` |
Sort a list in python based on another sorted list | 36,518,800 | 4 | 2016-04-09T15:22:38Z | 36,518,844 | 7 | 2016-04-09T15:25:55Z | [
"python",
"list",
"sorting"
] | I would like to sort a list in Python based on a pre-sorted list
```
presorted_list = ['2C','3C','4C','2D','3D','4D']
unsorted_list = ['3D','2C','4D','2D']
```
Is there a way to sort the list to reflect the presorted list despite the fact that not all the elements are present in the unsorted list?
I want the result to look something like this:
```
after_sort = ['2C','2D','3D','4D']
```
Thanks! | ```
In [5]: sorted(unsorted_list, key=presorted_list.index)
Out[5]: ['2C', '2D', '3D', '4D']
```
or, for better performance (particularly when `len(presorted_list)` is large),
```
In [6]: order = {item:i for i, item in enumerate(presorted_list)}
In [7]: sorted(unsorted_list, key=order.__getitem__)
Out[7]: ['2C', '2D', '3D', '4D']
```
For more on how to sort using `key`s, see the excellent [Howto Sort wiki](https://wiki.python.org/moin/HowTo/Sorting/#Key_Functions).
---
If `unsorted_list` contains items (such as `'6D'`) not in `presorted_list` then the above methods will raise an error. You first have to decide how you want to sort these items. If you want them placed at the end of the list, you could use
```
In [10]: unsorted_list = ['3D','2C','6D','4D','2D']
In [11]: sorted(unsorted_list, key=lambda x: order.get(x, float('inf')))
Out[11]: ['2C', '2D', '3D', '4D', '6D']
```
or if you wish to place such items at the front of the list, use
```
In [12]: sorted(unsorted_list, key=lambda x: order.get(x, -1))
Out[12]: ['6D', '2C', '2D', '3D', '4D']
``` |
Pandas: how to get rid of `Unnamed:` column in a dataframe | 36,519,086 | 4 | 2016-04-09T15:47:43Z | 36,519,122 | 8 | 2016-04-09T15:50:11Z | [
"python",
"pandas",
"ipython"
] | I have a situation wherein sometimes when I read a `csv` from `df` I get an unwanted index-like column named `unnamed:0`. This is very annoying! I have tried
```
merge.to_csv('xy.df', mode = 'w', inplace=False)
```
which I thought was a solution to this, but I am still getting the `unnamed:0` column! Does anyone have an idea on this? | It's the index column; pass `index=False` to `to_csv` so it isn't written out (see the [docs](http://pandas.pydata.org/pandas-docs/version/0.18.0/generated/pandas.DataFrame.to_csv.html))
Example:
```
In [37]:
df = pd.DataFrame(np.random.randn(5,3), columns=list('abc'))
pd.read_csv(io.StringIO(df.to_csv()))
Out[37]:
Unnamed: 0 a b c
0 0 0.109066 -1.112704 -0.545209
1 1 0.447114 1.525341 0.317252
2 2 0.507495 0.137863 0.886283
3 3 1.452867 1.888363 1.168101
4 4 0.901371 -0.704805 0.088335
```
compare with:
```
In [38]:
pd.read_csv(io.StringIO(df.to_csv(index=False)))
Out[38]:
a b c
0 0.109066 -1.112704 -0.545209
1 0.447114 1.525341 0.317252
2 0.507495 0.137863 0.886283
3 1.452867 1.888363 1.168101
4 0.901371 -0.704805 0.088335
```
You could also optionally tell `read_csv` that the first column is the index column by passing `index_col=0`:
```
In [40]:
pd.read_csv(io.StringIO(df.to_csv()), index_col=0)
Out[40]:
a b c
0 0.109066 -1.112704 -0.545209
1 0.447114 1.525341 0.317252
2 0.507495 0.137863 0.886283
3 1.452867 1.888363 1.168101
4 0.901371 -0.704805 0.088335
``` |
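If a CSV was already written with the index, a sketch of dropping the stray `Unnamed: 0` column after reading it back (the DataFrame contents here are illustrative, assuming pandas is available):

```python
import io
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
csv_text = df.to_csv()           # index written out by default

round_trip = pd.read_csv(io.StringIO(csv_text))
# Keep only the columns that are not auto-named index columns.
cleaned = round_trip.loc[:, ~round_trip.columns.str.startswith('Unnamed')]
print(list(cleaned.columns))     # ['a', 'b']
```

Passing `index=False` to `to_csv` (or `index_col=0` to `read_csv`, as shown above) avoids the problem at the source; this is just a cleanup for files that already exist.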
Understanding == applied to a NumPy array | 36,526,035 | 9 | 2016-04-10T05:06:53Z | 36,526,198 | 12 | 2016-04-10T05:30:22Z | [
"python",
"python-2.7",
"numpy",
"tensorflow"
] | I'm new to Python, and I am learning **TensorFlow**. In a tutorial using the *notMNIST dataset*, they give example code to transform the labels matrix to a one-of-n encoded array.
The goal is to take an array consisting of label integers 0...9, and return a matrix where each integer has been transformed into a one-of-n encoded array like this:
```
0 -> [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
1 -> [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
2 -> [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
...
```
The code they give to do this is:
```
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
```
However, I don't understand how this code does that at all. It looks like it just generates an array of integers in the range of 0 to 9, and then compares that with the labels matrix, and converts the result to a float. How does an `==` operator result in a *one-of-n encoded matrix*? | There are a few things going on here: numpy's vector ops, adding a singleton axis, and broadcasting.
First, you should be able to see how the `==` does the magic.
Let's say we start with a simple label array. `==` behaves in a vectorized fashion, which means that we can compare the entire array with a scalar and get an array consisting of the values of each elementwise comparison. For example:
```
>>> labels = np.array([1,2,0,0,2])
>>> labels == 0
array([False, False, True, True, False], dtype=bool)
>>> (labels == 0).astype(np.float32)
array([ 0., 0., 1., 1., 0.], dtype=float32)
```
First we get a boolean array, and then we coerce to floats: False==0 in Python, and True==1. So we wind up with an array which is 0 where `labels` isn't equal to 0 and 1 where it is.
But there's nothing special about comparing to 0, we could compare to 1 or 2 or 3 instead for similar results:
```
>>> (labels == 2).astype(np.float32)
array([ 0., 1., 0., 0., 1.], dtype=float32)
```
In fact, we could loop over every possible label and generate this array. We could use a listcomp:
```
>>> np.array([(labels == i).astype(np.float32) for i in np.arange(3)])
array([[ 0., 0., 1., 1., 0.],
[ 1., 0., 0., 0., 0.],
[ 0., 1., 0., 0., 1.]], dtype=float32)
```
but this doesn't really take advantage of numpy. What we want to do is have each possible label compared with each element; in other words, to compare
```
>>> np.arange(3)
array([0, 1, 2])
```
with
```
>>> labels
array([1, 2, 0, 0, 2])
```
And here's where the magic of numpy broadcasting comes in. Right now, `labels` is a 1-dimensional object of shape (5,). If we make it a 2-dimensional object of shape (5,1), then the operation will "broadcast" over the last axis and we'll get an output of shape (5,3) with the results of comparing each entry in the range with each element of labels.
First we can add an "extra" axis to `labels` using `None` (or `np.newaxis`), changing its shape:
```
>>> labels[:,None]
array([[1],
[2],
[0],
[0],
[2]])
>>> labels[:,None].shape
(5, 1)
```
And then we can make the comparison (this is the transpose of the arrangement we were looking at earlier, but that doesn't really matter).
```
>>> np.arange(3) == labels[:,None]
array([[False, True, False],
[False, False, True],
[ True, False, False],
[ True, False, False],
[False, False, True]], dtype=bool)
>>> (np.arange(3) == labels[:,None]).astype(np.float32)
array([[ 0., 1., 0.],
[ 0., 0., 1.],
[ 1., 0., 0.],
[ 1., 0., 0.],
[ 0., 0., 1.]], dtype=float32)
```
Broadcasting in numpy is very powerful, and well worth reading up on. |
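Putting the pieces together, here is a minimal self-contained sketch of the tutorial's one-hot step; `num_labels` and the label values are illustrative, not from the notMNIST data:

```python
import numpy as np

num_labels = 3
labels = np.array([1, 2, 0, 0, 2])

# (num_labels,) == (5, 1) broadcasts to a (5, num_labels) boolean matrix,
# then astype turns True/False into 1.0/0.0.
one_hot = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
print(one_hot.shape)   # (5, 3)
print(one_hot[0])      # [0. 1. 0.]  -- label 1 -> second slot
```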
Cycle a list from alternating sides | 36,533,553 | 27 | 2016-04-10T18:11:13Z | 36,533,624 | 23 | 2016-04-10T18:16:17Z | [
"python",
"algorithm",
"list",
"iteration"
] | Given a list
```
a = [0,1,2,3,4,5,6,7,8,9]
```
how can I get
```
b = [0,9,1,8,2,7,3,6,4,5]
```
That is, produce a new list in which each successive element is alternately taken from the two sides of the original list? | [`cycle`](https://docs.python.org/dev/library/itertools.html#itertools.cycle) between getting items from the forward [`iter`](https://docs.python.org/3.5/library/functions.html#iter) and the [`reversed`](https://docs.python.org/3.5/library/functions.html#reversed) one. Just make sure you stop at `len(a)` with [`islice`](https://docs.python.org/dev/library/itertools.html#itertools.islice).
```
from itertools import islice, cycle
iters = cycle((iter(a), reversed(a)))
b = [next(it) for it in islice(iters, len(a))]
>>> b
[0, 9, 1, 8, 2, 7, 3, 6, 4, 5]
```
This can easily be put into a single line but then it becomes much more difficult to read:
```
[next(it) for it in islice(cycle((iter(a),reversed(a))),len(a))]
```
Putting it in one line would also prevent you from using the other half of the iterators if you wanted to:
```
>>> iters = cycle((iter(a), reversed(a)))
>>> [next(it) for it in islice(iters, len(a))]
[0, 9, 1, 8, 2, 7, 3, 6, 4, 5]
>>> [next(it) for it in islice(iters, len(a))]
[5, 4, 6, 3, 7, 2, 8, 1, 9, 0]
``` |
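For comparison, the same interleaving can be sketched without itertools at all, by walking two indices inward from both ends (the function name is an assumption for illustration):

```python
def alternate_ends(seq):
    out = []
    lo, hi = 0, len(seq) - 1
    while lo < hi:
        out.append(seq[lo])
        out.append(seq[hi])
        lo += 1
        hi -= 1
    if lo == hi:              # middle element for odd lengths
        out.append(seq[lo])
    return out

print(alternate_ends([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]))
# [0, 9, 1, 8, 2, 7, 3, 6, 4, 5]
```

Unlike the `cycle`-based version, this builds the whole list eagerly, but it handles odd-length inputs without double-counting the middle element.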