title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
How do I check whether a file exists using Python? | 82,831 | 2,729 | 2008-09-17T12:55:00Z | 29,909,391 | 15 | 2015-04-28T02:45:31Z | [
"python",
"file",
"filesystems"
] | How do I check whether a file exists, without using the [`try`](https://docs.python.org/3.6/reference/compound_stmts.html#try) statement? | ```
import os

if os.path.isfile(path_to_file):
    try:
        with open(path_to_file):
            pass
    except IOError:
        print "Unable to open file"
```
> Raising exceptions is considered to be an acceptable, and Pythonic,
> approach for flow control in your program. Consider handling missing
> files with IOErrors. In this situation, an IOError exception will be
> raised if the file exists but the user does not have read permissions.
SRC: <http://www.pfinn.net/python-check-if-file-exists.html> |
How do I check whether a file exists using Python? | 82,831 | 2,729 | 2008-09-17T12:55:00Z | 30,444,116 | 53 | 2015-05-25T18:29:22Z | [
"python",
"file",
"filesystems"
] | How do I check whether a file exists, without using the [`try`](https://docs.python.org/3.6/reference/compound_stmts.html#try) statement? | ```
import os
#Your path here e.g. "C:\Program Files\text.txt"
if os.path.exists("C:\..."):
print "File found!"
else:
print "File not found!"
```
Importing `os` makes it easier to navigate and perform standard actions with your operating system.
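Note that `os.path.exists` is true for directories as well as plain files; if you specifically need a file, `os.path.isfile` (or `os.path.isdir` for directories) is the sharper test. A small sketch:

```python
import os
import tempfile

# A freshly created temporary directory exists, but it is not a regular file.
d = tempfile.mkdtemp()
assert os.path.exists(d)
assert os.path.isdir(d)
assert not os.path.isfile(d)
os.rmdir(d)
```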
For reference also see [How to check whether a file exists using Python?](/q/82831)
If you need high-level operations, use `shutil`. |
How do I check whether a file exists using Python? | 82,831 | 2,729 | 2008-09-17T12:55:00Z | 31,932,925 | 33 | 2015-08-11T03:54:25Z | [
"python",
"file",
"filesystems"
] | How do I check whether a file exists, without using the [`try`](https://docs.python.org/3.6/reference/compound_stmts.html#try) statement? | > # How do I check whether a file exists, using Python, without using a try statement?
## Recommendations:
**suppress**
Python 3.4 gives us the [`suppress`](https://docs.python.org/3/library/contextlib.html#contextlib.suppress) context manager (previously the [`ignore`](https://bugs.python.org/issue19266) context manager), which does semantically exactly the same thing in fewer lines, while also (at least superficially) meeting the original ask to avoid a `try` statement:
```
from contextlib import suppress
with suppress(OSError), open(path) as f:
f.read()
```
Usage:
```
>>> with suppress(OSError), open('doesnotexist') as f:
... f.read()
...
>>>
```
For earlier Pythons, you could roll your own `suppress`, though without a `try` it will be much more verbose than with one. I believe **this actually is the only answer that doesn't use `try` at any level** and that can be applied prior to Python 3.4, because it uses a context manager instead:
```
class suppress(object):
def __init__(self, *exceptions):
self.exceptions = exceptions
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
if exc_type is not None:
return issubclass(exc_type, self.exceptions)
```
Easier with a try:
```
from contextlib import contextmanager
@contextmanager
def suppress(*exceptions):
try:
yield
except exceptions:
pass
```
## Other, possibly problematic, options:
**isfile**
```
import os
os.path.isfile(path)
```
from the [docs](https://docs.python.org/library/os.path.html#os.path.isfile):
> `os.path.isfile(path)`
>
> Return True if path is an existing regular file. This follows symbolic
> links, so both `islink()` and `isfile()` can be true for the same path.
But if you examine the [source](https://hg.python.org/cpython/file/tip/Lib/genericpath.py#l25) of this function, you'll see it actually does use a try statement:
> ```
> # This follows symbolic links, so both islink() and isdir() can be true
> # for the same path on systems that support symlinks
> def isfile(path):
> """Test whether a path is a regular file"""
> try:
> st = os.stat(path)
> except os.error:
> return False
> return stat.S_ISREG(st.st_mode)
> ```
```
>>> OSError is os.error
True
```
All it does is use the given path to see if it can get stats on it, catching `OSError`, and then checking whether it is a regular file if no exception was raised.
If you intend to do something with the file, I would suggest directly attempting it with a try-except to avoid a race condition:
```
try:
with open(path) as f:
f.read()
except OSError:
pass
```
**os.access**
Available for Unix and Windows is `os.access`, but to use it you must pass flags, and it does not differentiate between files and directories. It is more often used to test whether the real invoking user has access in an elevated-privilege environment:
```
import os
os.access(path, os.F_OK)
```
It also suffers from the same race condition problems as `isfile`. From the [docs](https://docs.python.org/2/library/os.html#os.access):
> Note:
> Using access() to check if a user is authorized to e.g. open a file
> before actually doing so using open() creates a security hole, because
> the user might exploit the short time interval between checking and
> opening the file to manipulate it. It's preferable to use EAFP
> techniques. For example:
>
> ```
> if os.access("myfile", os.R_OK):
> with open("myfile") as fp:
> return fp.read()
> return "some default data"
> ```
>
> is better written as:
>
> ```
> try:
> fp = open("myfile")
> except IOError as e:
> if e.errno == errno.EACCES:
> return "some default data"
> # Not a permission error.
> raise
> else:
> with fp:
> return fp.read()
> ``` |
How do I check whether a file exists using Python? | 82,831 | 2,729 | 2008-09-17T12:55:00Z | 32,288,118 | 11 | 2015-08-29T16:15:29Z | [
"python",
"file",
"filesystems"
] | How do I check whether a file exists, without using the [`try`](https://docs.python.org/3.6/reference/compound_stmts.html#try) statement? | Here's a 1 line Python command for the Linux command line environment. I find this VERY HANDY since I'm not such a hot Bash guy.
```
python -c "import os.path; print os.path.isfile('/path_to/file.xxx')"
```
I hope this is helpful. |
How do I check whether a file exists using Python? | 82,831 | 2,729 | 2008-09-17T12:55:00Z | 33,554,911 | 15 | 2015-11-05T21:15:03Z | [
"python",
"file",
"filesystems"
] | How do I check whether a file exists, without using the [`try`](https://docs.python.org/3.6/reference/compound_stmts.html#try) statement? | Python 3.4 introduced the `pathlib` module for working with filesystem paths:
```
import pathlib
path = pathlib.Path('path/to/file')
if path.is_file():  # to check a directory instead, use path.is_dir()
    pass  # the file exists; handle it here
``` |
How do I check whether a file exists using Python? | 82,831 | 2,729 | 2008-09-17T12:55:00Z | 35,602,588 | 24 | 2016-02-24T12:44:10Z | [
"python",
"file",
"filesystems"
] | How do I check whether a file exists, without using the [`try`](https://docs.python.org/3.6/reference/compound_stmts.html#try) statement? | In 2016 the best way is still using `os.path.isfile`
```
>>> os.path.isfile('/path/to/some/file.txt')
```
Or in Python 3 you can use `pathlib`:
```
import pathlib
path = pathlib.Path('/path/to/some/file.txt')
if path.is_file():
...
``` |
How to programmatically enable/disable network interfaces? (Windows XP) | 83,756 | 11 | 2008-09-17T14:22:56Z | 86,611 | 9 | 2008-09-17T19:20:31Z | [
"python",
"networking",
"windows-xp"
] | I need to completely enable/disable network interfaces from a script in Windows XP. I'm looking for a python solution, but any general way (e.g. WMI, some command-line à la netsh, some windows call) is welcome and will be adjusted. Thanks. | Using the netsh interface
Usage:
```
set interface [name = ] IfName
    [ [admin = ] ENABLED|DISABLED
      [connect = ] CONNECTED|DISCONNECTED
      [newname = ] NewName ]
```
Try including everything inside the outer brackets:
```
netsh interface set interface name="thename" admin=disabled connect=DISCONNECTED newname="thename"
```
See also this MS KB page: <http://support.microsoft.com/kb/262265/>
You could follow either of their suggestions.
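Since the question asks for a Python solution, here is a hedged sketch that merely builds the `netsh` arguments for `subprocess`; the interface name is a placeholder, and actually running the command requires an elevated prompt on Windows:

```python
import subprocess

def interface_command(name, enabled):
    """Build the netsh arguments to enable or disable a named interface."""
    state = "ENABLED" if enabled else "DISABLED"
    return ["netsh", "interface", "set", "interface",
            "name=" + name, "admin=" + state]

cmd = interface_command("Local Area Connection", False)
# On Windows (in an elevated prompt), you would run:
# subprocess.check_call(cmd)
```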
For disabling the adapter, you will need to determine a way to reference the hardware device. If there will not be multiple adapters with the same name on the computer, you could possibly go off of the Description for the interface (or PCI ID works well). After that, using devcon (disable|enable). Devcon is an add-on console interface for the Device Manager. |
Why isn't the 'len' function inherited by dictionaries and lists in Python | 83,983 | 18 | 2008-09-17T14:43:18Z | 84,019 | 10 | 2008-09-17T14:46:56Z | [
"python"
] | example:
```
a_list = [1, 2, 3]
a_list.len() # doesn't work
len(a_list) # works
```
Python being (very) object oriented, I don't understand why the 'len' function isn't inherited by the object.
Plus I keep trying the wrong solution since it appears as the logical one to me | It just isn't.
You can, however, do:
```
>>> [1,2,3].__len__()
3
```
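The same hook works for user-defined classes: give a class a `__len__` method and the built-in `len()` picks it up. A minimal sketch (class name invented):

```python
class Playlist(object):
    def __init__(self, songs):
        self.songs = list(songs)

    def __len__(self):
        # len(playlist) delegates to this method.
        return len(self.songs)

p = Playlist(["a", "b", "c"])
assert len(p) == 3
```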
Adding a `__len__()` method to a class is what makes the `len()` magic work. |
Why isn't the 'len' function inherited by dictionaries and lists in Python | 83,983 | 18 | 2008-09-17T14:43:18Z | 84,154 | 39 | 2008-09-17T14:59:53Z | [
"python"
] | example:
```
a_list = [1, 2, 3]
a_list.len() # doesn't work
len(a_list) # works
```
Python being (very) object oriented, I don't understand why the 'len' function isn't inherited by the object.
Plus I keep trying the wrong solution since it appears as the logical one to me | Guido's explanation is [here](http://mail.python.org/pipermail/python-3000/2006-November/004643.html):
> First of all, I chose len(x) over x.len() for HCI reasons (def \_\_len\_\_() came much later). There are two intertwined reasons actually, both HCI:
>
> (a) For some operations, prefix notation just reads better than postfix -- prefix (and infix!) operations have a long tradition in mathematics which likes notations where the visuals help the mathematician thinking about a problem. Compare the easy with which we rewrite a formula like x\*(a+b) into x\*a + x\*b to the clumsiness of doing the same thing using a raw OO notation.
>
> (b) When I read code that says len(x) I know that it is asking for the length of something. This tells me two things: the result is an integer, and the argument is some kind of container. To the contrary, when I read x.len(), I have to already know that x is some kind of container implementing an interface or inheriting from a class that has a standard len(). Witness the confusion we occasionally have when a class that is not implementing a mapping has a get() or keys() method, or something that isn't a file has a write() method.
>
> Saying the same thing in another way, I see 'len' as a built-in operation. I'd hate to lose that. /…/ |
Why isn't the 'len' function inherited by dictionaries and lists in Python | 83,983 | 18 | 2008-09-17T14:43:18Z | 84,155 | 11 | 2008-09-17T14:59:54Z | [
"python"
] | example:
```
a_list = [1, 2, 3]
a_list.len() # doesn't work
len(a_list) # works
```
Python being (very) object oriented, I don't understand why the 'len' function isn't inherited by the object.
Plus I keep trying the wrong solution since it appears as the logical one to me | The short answer: 1) backwards compatibility and 2) there's not enough of a difference for it to really matter. For a more detailed explanation, read on.
The idiomatic Python approach to such operations is special methods which aren't intended to be called directly. For example, to make `x + y` work for your own class, you write a `__add__` method. To make sure that `int(spam)` properly converts your custom class, write a `__int__` method. To make sure that `len(foo)` does something sensible, write a `__len__` method.
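A small illustration of those hooks, with an invented class:

```python
class Cash(object):
    def __init__(self, cents):
        self.cents = cents

    def __add__(self, other):
        # Makes `cash_a + cash_b` work.
        return Cash(self.cents + other.cents)

    def __int__(self):
        # Makes `int(cash)` work.
        return self.cents

total = Cash(150) + Cash(250)
assert int(total) == 400
```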
This is how things have always been with Python, and I think it makes a lot of sense for some things. In particular, this seems like a sensible way to implement operator overloading. As for the rest, different languages disagree; in Ruby you'd convert something to an integer by calling `spam.to_i` directly instead of saying `int(spam)`.
You're right that Python is an extremely object-oriented language and that having to call an external function on an object to get its length seems odd. On the other hand, `len(silly_walks)` isn't any more onerous than `silly_walks.len()`, and Guido has said that he actually prefers it (<http://mail.python.org/pipermail/python-3000/2006-November/004643.html>). |
Running multiple sites from a single Python web framework | 85,119 | 3 | 2008-09-17T16:37:23Z | 85,134 | 8 | 2008-09-17T16:39:06Z | [
"python",
"frameworks"
] | What are some good (or at least clever) ways of running multiple sites from a single, common Python web framework (i.e.: Pylons, TurboGears, etc)? I know you can do redirection based on the domain or path to rewrite the URI to point at a site-specific location and I've also seen some brutish "`if site == 'site1' / elseif / elseif / etc`" that I would like to avoid. | Django has this built in. See [the sites framework](http://docs.djangoproject.com/en/dev/ref/contrib/sites/#ref-contrib-sites).
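Outside Django, the same idea can be hand-rolled. A framework-agnostic sketch (all names invented), filtering site-specific rows by the request's `Host` header:

```python
# Hypothetical in-memory stand-in for a table with a 'host' column.
ARTICLES = [
    {"host": "site1.example.com", "title": "Welcome to site 1"},
    {"host": "site2.example.com", "title": "Welcome to site 2"},
]

def articles_for(host_header):
    # In a real app this would be a "WHERE host = %s" query.
    return [row["title"] for row in ARTICLES if row["host"] == host_header]
```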
As a general technique, include a 'host' column in your database schema attached to the data you want to be host-specific, then include the `Host` HTTP header in the query when you are retrieving data. |
Python - time.clock() vs. time.time() - accuracy? | 85,451 | 333 | 2008-09-17T17:09:13Z | 85,480 | 13 | 2008-09-17T17:12:29Z | [
"python"
] | Which is better to use for timing in Python? time.clock() or time.time()? Which one provides more accuracy?
for example:
```
start = time.clock()
... do something
elapsed = (time.clock() - start)
```
vs.
```
start = time.time()
... do something
elapsed = (time.time() - start)
``` | Depends on what you care about. If you mean WALL TIME (as in, the time on the clock on your wall), time.clock() provides NO accuracy because it may measure CPU time instead. |
Python - time.clock() vs. time.time() - accuracy? | 85,451 | 333 | 2008-09-17T17:09:13Z | 85,489 | 12 | 2008-09-17T17:14:08Z | [
"python"
] | Which is better to use for timing in Python? time.clock() or time.time()? Which one provides more accuracy?
for example:
```
start = time.clock()
... do something
elapsed = (time.clock() - start)
```
vs.
```
start = time.time()
... do something
elapsed = (time.time() - start)
``` | ```
clock() -> floating point number
Return the CPU time or real time since the start of the process or since
the first call to clock(). This has as much precision as the system
records.
time() -> floating point number
Return the current time in seconds since the Epoch.
Fractions of a second may be present if the system clock provides them.
```
Usually time() is more precise, because operating systems do not store the process running time with the precision with which they store the system time (i.e., the actual time). |
Python - time.clock() vs. time.time() - accuracy? | 85,451 | 333 | 2008-09-17T17:09:13Z | 85,511 | 33 | 2008-09-17T17:16:00Z | [
"python"
] | Which is better to use for timing in Python? time.clock() or time.time()? Which one provides more accuracy?
for example:
```
start = time.clock()
... do something
elapsed = (time.clock() - start)
```
vs.
```
start = time.time()
... do something
elapsed = (time.time() - start)
``` | The short answer is: most of the time time.clock() will be better.
However, if you're timing some hardware (for example some algorithm you put in the GPU), then time.clock() will not count that time, and time.time() is the only solution left.
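Whichever clock you choose, it is worth timing repeatedly and looking at the spread rather than trusting a single run. A sketch using the `statistics` module (Python 3.4+):

```python
import time
import statistics

def measure(fn, repeats=5):
    """Time fn() several times with the wall clock; return (mean, stdev)."""
    samples = []
    for _ in range(repeats):
        start = time.time()
        fn()
        samples.append(time.time() - start)
    return statistics.mean(samples), statistics.stdev(samples)

mean, spread = measure(lambda: sum(range(10000)))
```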
Note: whatever the method used, the timing will depend on factors you cannot control (when the process will switch, how often, ...); this is worse with time.time() but exists also with time.clock(), so you should never run only one timing test, but always run a series of tests and look at the mean/variance of the times. |
Python - time.clock() vs. time.time() - accuracy? | 85,451 | 333 | 2008-09-17T17:09:13Z | 85,533 | 91 | 2008-09-17T17:18:27Z | [
"python"
] | Which is better to use for timing in Python? time.clock() or time.time()? Which one provides more accuracy?
for example:
```
start = time.clock()
... do something
elapsed = (time.clock() - start)
```
vs.
```
start = time.time()
... do something
elapsed = (time.time() - start)
``` | As of 3.3, [*time.clock()* is deprecated](https://docs.python.org/3/library/time.html#time.clock), and it's suggested to use **[time.process\_time()](https://docs.python.org/3/library/time.html#time.process_time)** or **[time.perf\_counter()](https://docs.python.org/3/library/time.html#time.perf_counter)** instead.
Previously in 2.7, according to the **[time module docs](https://docs.python.org/2.7/library/time.html#time.clock)**:
> **time.clock()**
>
> On Unix, return the current processor time as a floating point number
> expressed in seconds. The precision, and in fact the very definition
> of the meaning of 'processor time', depends on that of the C function
> of the same name, but in any case, **this is the function to use for
> benchmarking Python or timing algorithms.**
>
> On Windows, this function returns wall-clock seconds elapsed since the
> first call to this function, as a floating point number, based on the
> Win32 function QueryPerformanceCounter(). The resolution is typically
> better than one microsecond.
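A minimal example of the `timeit` module mentioned in these answers, which chooses a suitable clock and handles repetition for you:

```python
import timeit

# Time 1000 runs of a small expression; returns total elapsed seconds.
elapsed = timeit.timeit("sum(range(100))", number=1000)
```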
Additionally, there is the [timeit](https://docs.python.org/2/library/timeit.html) module for benchmarking code snippets. |
Python - time.clock() vs. time.time() - accuracy? | 85,451 | 333 | 2008-09-17T17:09:13Z | 85,536 | 20 | 2008-09-17T17:18:56Z | [
"python"
] | Which is better to use for timing in Python? time.clock() or time.time()? Which one provides more accuracy?
for example:
```
start = time.clock()
... do something
elapsed = (time.clock() - start)
```
vs.
```
start = time.time()
... do something
elapsed = (time.time() - start)
``` | [Others](http://stackoverflow.com/questions/85451#85511) have answered re: time.time() vs. time.clock().
However, if you're timing the execution of a block of code for benchmarking/profiling purposes, you should take a look at the [`timeit` module](https://docs.python.org/library/timeit.html). |
Python - time.clock() vs. time.time() - accuracy? | 85,451 | 333 | 2008-09-17T17:09:13Z | 2,246,226 | 16 | 2010-02-11T17:21:26Z | [
"python"
] | Which is better to use for timing in Python? time.clock() or time.time()? Which one provides more accuracy?
for example:
```
start = time.clock()
... do something
elapsed = (time.clock() - start)
```
vs.
```
start = time.time()
... do something
elapsed = (time.time() - start)
``` | One thing to keep in mind:
Changing the system time affects time.time() but not time.clock().
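On Python 3.3+, `time.monotonic()` is the documented way to get a clock that is immune to system-time changes, which would have avoided this problem entirely:

```python
import time

start = time.monotonic()
time.sleep(0.01)
# Monotonic clocks never run backwards, even if the wall clock is reset.
elapsed = time.monotonic() - start
```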
I needed to control some automatic tests executions. If one step of the test case took more than a given amount of time, that TC was aborted to go on with the next one.
But sometimes a step needed to change the system time (to check the scheduler module of the application under test), so after setting the system time a few hours in the future, the TC timeout expired and the test case was aborted. I had to switch from time.time() to time.clock() to handle this properly. |
Python - time.clock() vs. time.time() - accuracy? | 85,451 | 333 | 2008-09-17T17:09:13Z | 21,178,451 | 9 | 2014-01-17T05:21:01Z | [
"python"
] | Which is better to use for timing in Python? time.clock() or time.time()? Which one provides more accuracy?
for example:
```
start = time.clock()
... do something
elapsed = (time.clock() - start)
```
vs.
```
start = time.time()
... do something
elapsed = (time.time() - start)
``` | In my own practice, `time()` has better precision than `clock()` on Linux: `clock()` has a resolution of only about 10 ms, while `time()` gives nearly perfect precision.
My test was on CentOS 6.4, Python 2.6:
```
using time():
1 requests, response time: 14.1749382019 ms
2 requests, response time: 8.01301002502 ms
3 requests, response time: 8.01491737366 ms
4 requests, response time: 8.41021537781 ms
5 requests, response time: 8.38804244995 ms
```
Using `clock()`:
```
1 requests, response time: 10.0 ms
2 requests, response time: 0.0 ms
3 requests, response time: 0.0 ms
4 requests, response time: 10.0 ms
5 requests, response time: 0.0 ms
6 requests, response time: 0.0 ms
7 requests, response time: 0.0 ms
8 requests, response time: 0.0 ms
``` |
Search for host with MAC-address using Python | 85,577 | 8 | 2008-09-17T17:23:23Z | 85,613 | 11 | 2008-09-17T17:28:21Z | [
"python",
"network-programming"
] | I'd like to search for a given MAC address on my network, all from within a Python script. I already have a map of all the active IP addresses in the network but I cannot figure out how to glean the MAC address. Any ideas? | You need [ARP](http://en.wikipedia.org/wiki/Address_Resolution_Protocol). Python's standard library doesn't include any code for that, so you either need to call an external program (your OS may have an 'arp' utility) or you need to build the packets yourself (possibly with a tool like [Scapy](http://www.secdev.org/projects/scapy/)). |
What are the pros and cons of the various Python implementations? | 86,134 | 9 | 2008-09-17T18:25:28Z | 86,173 | 15 | 2008-09-17T18:31:21Z | [
"python"
] | I am relatively new to Python, and I have always used the standard cpython (v2.5) implementation.
I've been wondering about the other implementations though, particularly Jython and IronPython. What makes them better? What makes them worse? What other implementations are there?
I guess what I'm looking for is a summary and list of pros and cons for each implementation. | **Jython** and **IronPython** are useful if you have an overriding need to interface with existing libraries written in a different platform, like if you have 100,000 lines of Java and you just want to write a 20-line Python script. Not particularly useful for anything else, in my opinion, because they are perpetually a few versions behind CPython due to community inertia.
**Stackless** is interesting because it has support for green threads, continuations, etc. Sort of an Erlang-lite.
**PyPy** is an experimental interpreter/compiler that may one day supplant CPython, but for now is more of a testbed for new ideas. |
Get list of XML attribute values in Python | 87,317 | 10 | 2008-09-17T20:32:04Z | 87,622 | 7 | 2008-09-17T21:00:32Z | [
"python",
"xml",
"xpath",
"parent-child",
"xml-attribute"
] | I need to get a list of attribute values from child elements in Python.
It's easiest to explain with an example.
Given some XML like this:
```
<elements>
<parent name="CategoryA">
<child value="a1"/>
<child value="a2"/>
<child value="a3"/>
</parent>
<parent name="CategoryB">
<child value="b1"/>
<child value="b2"/>
<child value="b3"/>
</parent>
</elements>
```
I want to be able to do something like:
```
>>> getValues("CategoryA")
['a1', 'a2', 'a3']
>>> getValues("CategoryB")
['b1', 'b2', 'b3']
```
It looks like a job for XPath but I'm open to all recommendations. I'd also like to hear about your favourite Python XML libraries. | I'm not really an old hand at Python, but here's an XPath solution using libxml2.
```
import libxml2
DOC = """<elements>
<parent name="CategoryA">
<child value="a1"/>
<child value="a2"/>
<child value="a3"/>
</parent>
<parent name="CategoryB">
<child value="b1"/>
<child value="b2"/>
<child value="b3"/>
</parent>
</elements>"""
doc = libxml2.parseDoc(DOC)
def getValues(cat):
return [attr.content for attr in doc.xpathEval("/elements/parent[@name='%s']/child/@value" % (cat))]
print getValues("CategoryA")
```
With result...
```
['a1', 'a2', 'a3']
``` |
If it is decided that our system needs an overhaul, what is the best way to go about it? | 87,522 | 3 | 2008-09-17T20:48:45Z | 89,434 | 7 | 2008-09-18T02:19:51Z | [
"python",
"asp-classic",
"vbscript"
] | We are maintaining a web application that is built on Classic ASP using VBScript as the primary language. We are in agreement that our backend (framework if you will) is outdated and doesn't provide us with the proper tools to move forward in a quick manner. We have pretty much embraced the current webMVC pattern that is all over the place, and cannot do it, in a reasonable manner, with the current technology. The big missing features are proper dispatching and templating with inheritance, amongst others.
Currently there are two paths being discussed:
1. Port the existing application to Classic ASP using JScript, which will allow us to hopefully go from there to .NET MSJscript without too much trouble, and eventually end up on the .NET platform (preferably the MVC stuff will be done by then, ASP.NET isn't much better than were we are on now, in our opinions). This has been argued as the safer path with less risk than the next option, albeit it might take slightly longer.
2. Completely rewrite the application using some other technology, right now the leader of the pack is Python WSGI with a custom framework, ORM, and a good templating solution. There is wiggle room here for even django and other pre-built solutions. This method would hopefully be the quickest solution, as we would probably run a beta beside the actual product, but it does have the potential for a big waste of time if we can't/don't get it right.
This does not mean that our logic is gone, as what we have built over the years is fairly stable, as noted just difficult to deal with. It is built on SQL Server 2005 with heavy use of stored procedures and published on IIS 6, just for a little more background.
Now, the question. Has anyone taken either of the two paths above? If so, was it successful, how could it have been better, etc. We aren't looking to deviate much from doing one of those two things, but some suggestions or other solutions would potentially be helpful. | Don't throw away your code!
It's the single worst mistake you can make (on a large codebase). See [Things You Should Never Do, Part 1](http://www.joelonsoftware.com/articles/fog0000000069.html).
You've invested a lot of effort into that old code and worked out many bugs. Throwing it away is a classic developer mistake (and one I've done many times). It makes you feel "better", like a spring cleaning. But you don't need to buy a new apartment and all new furniture to outfit your house. You can work on one room at a time... and maybe some things just need a new paintjob. Hence, this is where refactoring comes in.
For new functionality in your app, [write it in C# and call it from your classic ASP](http://blog.danbartels.com/articles/322.aspx). You'll be forced to be modular when you rewrite this new code. When you have time, refactor parts of your old code into C# as well, and work out the bugs as you go. Eventually, you'll have replaced your app with all new code.
You could also write your own compiler. We wrote one for our classic ASP app a long time ago to allow us to output PHP. It's called [Wasabi](http://www.joelonsoftware.com/items/2006/09/01b.html) and I think it's the reason Jeff Atwood thought Joel Spolsky went off his rocker. Actually, maybe we should just ship it, and then you could use that.
It allowed us to switch our entire codebase to .NET for the next release while only rewriting a very small portion of our source. It also caused a bunch of people to call us crazy, but writing a compiler is not that complicated, and it gave us a lot of flexibility.
Also, if this is an internal only app, just leave it. Don't rewrite it - you are the only customer and if the requirement is you need to run it as classic asp, you can meet that requirement. |
How do you configure Django for simple development and deployment? | 88,259 | 104 | 2008-09-17T22:16:43Z | 88,331 | 79 | 2008-09-17T22:27:35Z | [
"python",
"django"
] | I tend to use [SQLite](http://en.wikipedia.org/wiki/SQLite) when doing [Django](http://en.wikipedia.org/wiki/Django%5F%28web%5Fframework%29)
development, but on a live server something more robust is
often needed ([MySQL](http://en.wikipedia.org/wiki/MySQL)/[PostgreSQL](http://en.wikipedia.org/wiki/PostgreSQL), for example).
Invariably, there are other changes to make to the Django
settings as well: different logging locations / intensities,
media paths, etc.
How do you manage all these changes to make deployment a
simple, automated process? | **Update:** [django-configurations](http://django-configurations.readthedocs.org/en/latest/) has been released which is probably a better option for most people than doing it manually.
If you would prefer to do things manually, my earlier answer still applies:
I have multiple settings files.
* `settings_local.py` - host-specific configuration, such as database name, file paths, etc.
* `settings_development.py` - configuration used for development, e.g. `DEBUG = True`.
* `settings_production.py` - configuration used for production, e.g. `SERVER_EMAIL`.
I tie these all together with a `settings.py` file that firstly imports `settings_local.py`, and then one of the other two. It decides which to load by two settings inside `settings_local.py` - `DEVELOPMENT_HOSTS` and `PRODUCTION_HOSTS`. `settings.py` calls `platform.node()` to find the hostname of the machine it is running on, and then looks for that hostname in the lists, and loads the second settings file depending on which list it finds the hostname in.
That way, the only thing you really need to worry about is keeping the `settings_local.py` file up to date with the host-specific configuration, and everything else is handled automatically.
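The hostname dispatch described above can be sketched roughly like this (host names are illustrative):

```python
# settings.py (sketch; the real file would pass platform.node() as hostname)
DEVELOPMENT_HOSTS = ["dev-laptop"]
PRODUCTION_HOSTS = ["web1", "web2"]

def pick_settings_module(hostname):
    """Return the name of the settings module to load for this host."""
    if hostname in DEVELOPMENT_HOSTS:
        return "settings_development"
    if hostname in PRODUCTION_HOSTS:
        return "settings_production"
    raise RuntimeError("Unknown host: %s" % hostname)
```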
Check out an example [here](http://github.com/JimDabell/lojban-website/tree/master/lojban). |
How do you configure Django for simple development and deployment? | 88,259 | 104 | 2008-09-17T22:16:43Z | 88,344 | 11 | 2008-09-17T22:29:30Z | [
"python",
"django"
] | I tend to use [SQLite](http://en.wikipedia.org/wiki/SQLite) when doing [Django](http://en.wikipedia.org/wiki/Django%5F%28web%5Fframework%29)
development, but on a live server something more robust is
often needed ([MySQL](http://en.wikipedia.org/wiki/MySQL)/[PostgreSQL](http://en.wikipedia.org/wiki/PostgreSQL), for example).
Invariably, there are other changes to make to the Django
settings as well: different logging locations / intensities,
media paths, etc.
How do you manage all these changes to make deployment a
simple, automated process? | I have two files. `settings_base.py` which contains common/default settings, and which is checked into source control. Each deployment has a separate `settings.py`, which executes `from settings_base import *` at the beginning and then overrides as needed. |
How do you configure Django for simple development and deployment? | 88,259 | 104 | 2008-09-17T22:16:43Z | 89,823 | 25 | 2008-09-18T03:45:38Z | [
"python",
"django"
] | I tend to use [SQLite](http://en.wikipedia.org/wiki/SQLite) when doing [Django](http://en.wikipedia.org/wiki/Django%5F%28web%5Fframework%29)
development, but on a live server something more robust is
often needed ([MySQL](http://en.wikipedia.org/wiki/MySQL)/[PostgreSQL](http://en.wikipedia.org/wiki/PostgreSQL), for example).
Invariably, there are other changes to make to the Django
settings as well: different logging locations / intensities,
media paths, etc.
How do you manage all these changes to make deployment a
simple, automated process? | Personally, I use a single settings.py for the project; I just have it look up the hostname it's on (my development machines have hostnames that start with "gabriel"), so I just have this:
```
import socket
if socket.gethostname().startswith('gabriel'):
LIVEHOST = False
else:
LIVEHOST = True
```
then in other parts I have things like:
```
if LIVEHOST:
DEBUG = False
PREPEND_WWW = True
MEDIA_URL = 'http://static1.grsites.com/'
else:
DEBUG = True
PREPEND_WWW = False
MEDIA_URL = 'http://localhost:8000/static/'
```
and so on. A little bit less readable, but it works fine and saves having to juggle multiple settings files. |
How do you configure Django for simple development and deployment? | 88,259 | 104 | 2008-09-17T22:16:43Z | 91,608 | 21 | 2008-09-18T10:57:14Z | [
"python",
"django"
] | I tend to use [SQLite](http://en.wikipedia.org/wiki/SQLite) when doing [Django](http://en.wikipedia.org/wiki/Django%5F%28web%5Fframework%29)
development, but on a live server something more robust is
often needed ([MySQL](http://en.wikipedia.org/wiki/MySQL)/[PostgreSQL](http://en.wikipedia.org/wiki/PostgreSQL), for example).
Invariably, there are other changes to make to the Django
settings as well: different logging locations / intensities,
media paths, etc.
How do you manage all these changes to make deployment a
simple, automated process? | At the end of settings.py I have the following:
```
try:
from settings_local import *
except ImportError:
pass
```
This way if I want to override default settings I need to just put settings\_local.py right next to settings.py. |
How do I unit test an __init__() method of a python class with assertRaises()? | 88,325 | 16 | 2008-09-17T22:26:37Z | 88,346 | 18 | 2008-09-17T22:29:33Z | [
"python",
"unit-testing",
"exception"
] | I have a class:
```
class MyClass:
def __init__(self, foo):
if foo != 1:
raise Error("foo is not equal to 1!")
```
and a unit test that is supposed to make sure the incorrect arg passed to the constructor properly raises an error:
```
def testInsufficientArgs(self):
foo = 0
self.assertRaises((Error), myClass = MyClass(Error, foo))
```
But I get...
```
NameError: global name 'Error' is not defined
```
Why? Where should I be defining this Error object? I thought it was built-in as a default exception type, no? | 'Error' in this example could be any exception object. I think perhaps you have read a code example that used it as a metasyntatic placeholder to mean, "The Appropriate Exception Class".
The baseclass of all exceptions is called 'Exception', and most of its subclasses are descriptive names of the type of error involved, such as 'OSError', 'ValueError', 'NameError', 'TypeError'.
In this case, the appropriate error is 'ValueError' (the value of foo was wrong, therefore a ValueError). I would recommend replacing 'Error' with 'ValueError' in your script.
Here is a complete version of the code you are trying to write, I'm duplicating everything because you have a weird keyword argument in your original example that you seem to be conflating with an assignment, and I'm using the 'failUnless' function name because that's the non-aliased name of the function:
```
class MyClass:
def __init__(self, foo):
if foo != 1:
raise ValueError("foo is not equal to 1!")
import unittest
class TestFoo(unittest.TestCase):
def testInsufficientArgs(self):
foo = 0
self.failUnlessRaises(ValueError, MyClass, foo)
if __name__ == '__main__':
unittest.main()
```
The output is:
```
.
----------------------------------------------------------------------
Ran 1 test in 0.007s
OK
```
There is a flaw in the unit testing library 'unittest' that other unit testing frameworks fix. You'll note that it is impossible to gain access to the exception object from the calling context. If you want to fix this, you'll have to redefine that method in a subclass of UnitTest:
This is an example of it in use:
```
class TestFoo(unittest.TestCase):
def failUnlessRaises(self, excClass, callableObj, *args, **kwargs):
try:
callableObj(*args, **kwargs)
except excClass, excObj:
return excObj # Actually return the exception object
else:
if hasattr(excClass,'__name__'): excName = excClass.__name__
else: excName = str(excClass)
raise self.failureException, "%s not raised" % excName
def testInsufficientArgs(self):
foo = 0
excObj = self.failUnlessRaises(ValueError, MyClass, foo)
self.failUnlessEqual(excObj[0], 'foo is not equal to 1!')
```
I have copied the failUnlessRaises function from unittest.py from python2.5 and modified it slightly. |
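As a side note: later Python versions (2.7 and 3.1+) fixed this flaw in `unittest` itself — `assertRaises` can be used as a context manager, which exposes the caught exception without any subclassing. A minimal sketch:

```python
import unittest

class TestBadLiteral(unittest.TestCase):
    def test_bad_literal(self):
        # The context-manager form stores the raised exception on cm.exception.
        with self.assertRaises(ValueError) as cm:
            int("not a number")
        self.assertIn("not a number", str(cm.exception))

# Run the single test case programmatically.
suite = unittest.TestLoader().loadTestsFromTestCase(TestBadLiteral)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```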
How do I split a string into a list? | 88,613 | 35 | 2008-09-17T23:17:12Z | 88,663 | 35 | 2008-09-17T23:25:56Z | [
"python",
"string",
"list",
"split"
] | If I have this string:
> 2+24\*48/32
what is the most efficient approach for creating this list:
> ['2', '+', '24', '\*', '48', '/', '32'] | You can use `split` from the `re` module.
[re.split(pattern, string, maxsplit=0, flags=0)](http://docs.python.org/library/re.html#re.split)
> Split string by the occurrences of pattern. If capturing parentheses
> are used in pattern, then the text of all groups in the pattern are
> also returned as part of the resulting list.
Example code:
```
import re
data = re.split(r'(\D)', '2+24*48/32')
```
---
\D
> When the UNICODE flag is not specified, \D matches any non-digit
> character; this is equivalent to the set [^0-9]. |
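A quick sanity check of that snippet (just demonstrating the result claimed above):

```python
import re

# The capturing group keeps each delimiter (the operators) in the result list.
data = re.split(r'(\D)', '2+24*48/32')
print(data)  # ['2', '+', '24', '*', '48', '/', '32']
```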
How do I split a string into a list? | 88,613 | 35 | 2008-09-17T23:17:12Z | 88,783 | 18 | 2008-09-17T23:54:14Z | [
"python",
"string",
"list",
"split"
] | If I have this string:
> 2+24\*48/32
what is the most efficient approach for creating this list:
> ['2', '+', '24', '\*', '48', '/', '32'] | This looks like a parsing problem, and thus I am compelled to present a solution based on parsing techniques.
While it may seem that you want to 'split' this string, I think what you actually want to do is 'tokenize' it. Tokenization or lexing is the compilation step before parsing. I have amended my original example in an edit to implement a proper recursive descent parser here. This is the easiest way to implement a parser by hand.
```
import re
patterns = [
('number', re.compile('\d+')),
('*', re.compile(r'\*')),
('/', re.compile(r'\/')),
('+', re.compile(r'\+')),
('-', re.compile(r'\-')),
]
whitespace = re.compile('\s+')
def tokenize(string):
while string:
# strip off whitespace
m = whitespace.match(string)
if m:
string = string[m.end():]
for tokentype, pattern in patterns:
m = pattern.match(string)
if m:
yield tokentype, m.group(0)
string = string[m.end():]
def parseNumber(tokens):
tokentype, literal = tokens.pop(0)
assert tokentype == 'number'
return int(literal)
def parseMultiplication(tokens):
product = parseNumber(tokens)
while tokens and tokens[0][0] in ('*', '/'):
tokentype, literal = tokens.pop(0)
if tokentype == '*':
product *= parseNumber(tokens)
elif tokentype == '/':
product /= parseNumber(tokens)
else:
raise ValueError("Parse Error, unexpected %s %s" % (tokentype, literal))
return product
def parseAddition(tokens):
total = parseMultiplication(tokens)
while tokens and tokens[0][0] in ('+', '-'):
tokentype, literal = tokens.pop(0)
if tokentype == '+':
total += parseMultiplication(tokens)
elif tokentype == '-':
total -= parseMultiplication(tokens)
else:
raise ValueError("Parse Error, unexpected %s %s" % (tokentype, literal))
return total
def parse(tokens):
tokenlist = list(tokens)
returnvalue = parseAddition(tokenlist)
if tokenlist:
print 'Unconsumed data', tokenlist
return returnvalue
def main():
string = '2+24*48/32'
for tokentype, literal in tokenize(string):
print tokentype, literal
print parse(tokenize(string))
if __name__ == '__main__':
main()
```
Implementation of handling of brackets is left as an exercise for the reader. This example will correctly do multiplication before addition. |
How do I split a string into a list? | 88,613 | 35 | 2008-09-17T23:17:12Z | 89,534 | 17 | 2008-09-18T02:39:38Z | [
"python",
"string",
"list",
"split"
] | If I have this string:
> 2+24\*48/32
what is the most efficient approach for creating this list:
> ['2', '+', '24', '\*', '48', '/', '32'] | ```
>>> import re
>>> re.findall(r'\d+|\D+', '2+24*48/32=10')
['2', '+', '24', '*', '48', '/', '32', '=', '10']
```
Matches consecutive digits or consecutive non-digits.
Each match is returned as a new element in the list.
Depending on the usage, you may need to alter the regular expression. Such as if you need to match numbers with a decimal point.
```
>>> re.findall(r'[0-9\.]+|[^0-9\.]+', '2+24*48/32=10.1')
['2', '+', '24', '*', '48', '/', '32', '=', '10.1']
``` |
How do I split a string into a list? | 88,613 | 35 | 2008-09-17T23:17:12Z | 111,355 | 49 | 2008-09-21T16:25:42Z | [
"python",
"string",
"list",
"split"
] | If I have this string:
> 2+24\*48/32
what is the most efficient approach for creating this list:
> ['2', '+', '24', '\*', '48', '/', '32'] | It just so happens that the tokens you want split are already Python tokens, so you can use the built-in `tokenize` module. It's almost a one-liner:
```
from cStringIO import StringIO
from tokenize import generate_tokens
STRING = 1
list(token[STRING] for token
in generate_tokens(StringIO('2+24*48/32').readline)
if token[STRING])
['2', '+', '24', '*', '48', '/', '32']
``` |
Calling an external command in Python | 89,228 | 2,312 | 2008-09-18T01:35:30Z | 89,237 | 27 | 2008-09-18T01:37:24Z | [
"python",
"shell",
"command",
"subprocess",
"external"
] | How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script? | ```
import os
os.system("your command")
```
Note that this is dangerous, since the command isn't cleaned. I leave it up to you to google for the relevant docs on the 'os' and 'sys' modules. There are a bunch of functions (exec\* , spawn\*) that will do similar things. |
Calling an external command in Python | 89,228 | 2,312 | 2008-09-18T01:35:30Z | 89,238 | 46 | 2008-09-18T01:37:49Z | [
"python",
"shell",
"command",
"subprocess",
"external"
] | How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script? | ```
import os
cmd = 'ls -al'
os.system(cmd)
```
If you want to return the results of the command, you can use [`os.popen`](https://docs.python.org/2/library/os.html#os.popen). However, this is deprecated since version 2.6 in favor of the [subprocess module](https://docs.python.org/2/library/subprocess.html#module-subprocess), which other answers have covered well. |
Calling an external command in Python | 89,228 | 2,312 | 2008-09-18T01:35:30Z | 89,243 | 2,184 | 2008-09-18T01:39:35Z | [
"python",
"shell",
"command",
"subprocess",
"external"
] | How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script? | Look at the [subprocess module](https://docs.python.org/2/library/subprocess.html) in the stdlib:
```
from subprocess import call
call(["ls", "-l"])
```
The advantage of subprocess vs system is that it is more flexible (you can get the stdout, stderr, the "real" status code, better error handling, etc...). I think os.system is deprecated, too, or will be:
<https://docs.python.org/2/library/subprocess.html#replacing-older-functions-with-the-subprocess-module>
For quick/dirty/one time scripts, `os.system` is enough, though. |
Calling an external command in Python | 89,228 | 2,312 | 2008-09-18T01:35:30Z | 89,255 | 63 | 2008-09-18T01:42:30Z | [
"python",
"shell",
"command",
"subprocess",
"external"
] | How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script? | I'd recommend using the subprocess module instead of os.system because it does shell escaping for you and is therefore much safer: <http://docs.python.org/library/subprocess.html>
```
subprocess.call(['ping', 'localhost'])
``` |
Calling an external command in Python | 89,228 | 2,312 | 2008-09-18T01:35:30Z | 89,262 | 9 | 2008-09-18T01:43:30Z | [
"python",
"shell",
"command",
"subprocess",
"external"
] | How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script? | <https://docs.python.org/2/library/subprocess.html>
...or for a very simple command:
```
import os
os.system('cat testfile')
``` |
Calling an external command in Python | 89,228 | 2,312 | 2008-09-18T01:35:30Z | 89,304 | 11 | 2008-09-18T01:53:27Z | [
"python",
"shell",
"command",
"subprocess",
"external"
] | How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script? | os.system is OK, but kind of dated. It's also not very secure. Instead, try subprocess. subprocess does not call sh directly and is therefore more secure than os.system.
Get more information at <https://docs.python.org/library/subprocess.html> |
Calling an external command in Python | 89,228 | 2,312 | 2008-09-18T01:35:30Z | 92,395 | 1,889 | 2008-09-18T13:11:46Z | [
"python",
"shell",
"command",
"subprocess",
"external"
] | How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script? | Here's a summary of the ways to call external programs and the advantages and disadvantages of each:
1. `os.system("some_command with args")` passes the command and arguments to your system's shell. This is nice because you can actually run multiple commands at once in this manner and set up pipes and input/output redirection. For example:
```
os.system("some_command < input_file | another_command > output_file")
```
However, while this is convenient, you have to manually handle the escaping of shell characters such as spaces, etc. On the other hand, this also lets you run commands which are simply shell commands and not actually external programs. See [the documentation](https://docs.python.org/2/library/os.html#os.system).
2. `stream = os.popen("some_command with args")` will do the same thing as `os.system` except that it gives you a file-like object that you can use to access standard input/output for that process. There are 3 other variants of popen that all handle the i/o slightly differently. If you pass everything as a string, then your command is passed to the shell; if you pass them as a list then you don't need to worry about escaping anything. See [the documentation](https://docs.python.org/2/library/os.html#os.popen).
3. The `Popen` class of the `subprocess` module. This is intended as a replacement for `os.popen` but has the downside of being slightly more complicated by virtue of being so comprehensive. For example, you'd say:
```
print subprocess.Popen("echo Hello World", shell=True, stdout=subprocess.PIPE).stdout.read()
```
instead of:
```
print os.popen("echo Hello World").read()
```
but it is nice to have all of the options there in one unified class instead of 4 different popen functions. See [the documentation](https://docs.python.org/2/library/subprocess.html#popen-constructor).
4. The `call` function from the `subprocess` module. This is basically just like the `Popen` class and takes all of the same arguments, but it simply waits until the command completes and gives you the return code. For example:
```
return_code = subprocess.call("echo Hello World", shell=True)
```
See [the documentation](https://docs.python.org/2/library/subprocess.html#subprocess.call).
5. If you're on Python 3.5 or later, you can use the new [`subprocess.run`](https://docs.python.org/3.5/library/subprocess.html#subprocess.run) function, which is a lot like the above but even more flexible and returns a [`CompletedProcess`](https://docs.python.org/3.5/library/subprocess.html#subprocess.CompletedProcess) object when the command finishes executing.
6. The os module also has all of the fork/exec/spawn functions that you'd have in a C program, but I don't recommend using them directly.
The `subprocess` module should probably be what you use.
Finally, please be aware that for all methods where you pass the final command to the shell as a string, you are responsible for escaping it. **There are serious security implications** if any part of the string that you pass can not be fully trusted. For example, if a user is entering some/any part of the string. If you are unsure, only use these methods with constants. To give you a hint of the implications consider this code:
```
print subprocess.Popen("echo %s " % user_input, stdout=PIPE).stdout.read()
```
and imagine that the user enters "my mama didnt love me && rm -rf /". |
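To make the safe alternative concrete (a sketch added for illustration, not part of the original answer): passing the arguments as a list without `shell=True` hands the user input to the program as a single literal argument, so no shell ever interprets the `&&`:

```python
import subprocess

user_input = "my mama didnt love me && rm -rf /"
# The whole string becomes one argv entry of `echo`; nothing is parsed by a shell.
output = subprocess.check_output(["echo", user_input])
print(output)
```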
Calling an external command in Python | 89,228 | 2,312 | 2008-09-18T01:35:30Z | 95,246 | 153 | 2008-09-18T18:20:46Z | [
"python",
"shell",
"command",
"subprocess",
"external"
] | How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script? | I typically use:
```
import subprocess
p = subprocess.Popen('ls', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in p.stdout.readlines():
print line,
retval = p.wait()
```
You are free to do what you want with the stdout data in the pipe. In fact, you can simply omit those parameters (stdout= and stderr=) and it'll behave like os.system(). |
Calling an external command in Python | 89,228 | 2,312 | 2008-09-18T01:35:30Z | 2,030,768 | 7 | 2010-01-08T21:11:30Z | [
"python",
"shell",
"command",
"subprocess",
"external"
] | How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script? | There is another difference here which is not mentioned above.
`subprocess.Popen` executes the command as a subprocess. In my case, I needed to execute a file which had to communicate with another program.
I tried subprocess, and the execution was successful. However, the two programs could not communicate with each other, even though everything works normally when I run both from the terminal.
One more note (kwrite behaves differently from other apps; if you try the below with firefox, the results will not be the same):
If you try `os.system("kwrite")`, program flow freezes until the user closes kwrite. To overcome that I tried `os.system("konsole -e kwrite")` instead. This time the program continued to flow, but kwrite became a subprocess of the konsole.
Does anyone know how to run kwrite so that it is not a subprocess (i.e., so that in the system monitor it appears at the leftmost edge of the process tree)? |
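For what it's worth, the usual way to start a program without blocking the caller (a minimal, hypothetical sketch using a Python one-liner in place of kwrite) is `subprocess.Popen`, which returns immediately:

```python
import subprocess
import sys

# Popen returns right away; the child runs concurrently with this script.
child = subprocess.Popen([sys.executable, "-c", "print('child finished')"])
print("parent continues immediately")
child.wait()  # optional: collect the child's exit status later
```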
Calling an external command in Python | 89,228 | 2,312 | 2008-09-18T01:35:30Z | 2,251,026 | 95 | 2010-02-12T10:15:34Z | [
"python",
"shell",
"command",
"subprocess",
"external"
] | How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script? | Some hints on detaching the child process from the calling one (starting the child process in background).
Suppose you want to start a long task from a CGI-script, that is the child process should live longer than the CGI-script execution process.
The classical example from the subprocess module docs is:
```
import subprocess
import sys
# some code here
pid = subprocess.Popen([sys.executable, "longtask.py"]) # call subprocess
# some more code here
```
The idea here is that you do not want to wait in the line 'call subprocess' until the longtask.py is finished. But it is not clear what happens after the line 'some more code here' from the example.
My target platform was freebsd, but the development was on windows, so I faced the problem on windows first.
On windows (win xp), the parent process will not finish until the longtask.py has finished its work. It is not what you want in CGI-script. The problem is not specific to Python, in PHP community the problems are the same.
The solution is to pass DETACHED\_PROCESS flag to the underlying CreateProcess function in win API.
If you happen to have installed pywin32 you can import the flag from the win32process module, otherwise you should define it yourself:
```
DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen([sys.executable, "longtask.py"],
creationflags=DETACHED_PROCESS).pid
```
/\* *UPD 2015.10.27* @eryksun in a comment below notes, that the semantically correct flag is CREATE\_NEW\_CONSOLE (0x00000010) \*/
On freebsd we have another problem: when the parent process is finished, it finishes the child processes as well. And that is not what you want in CGI-script either. Some experiments showed that the problem seemed to be in sharing sys.stdout. And the working solution was the following:
```
pid = subprocess.Popen([sys.executable, "longtask.py"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
```
I have not checked the code on other platforms and do not know the reasons of the behaviour on freebsd. If anyone knows, please share your ideas. Googling on starting background processes in Python does not shed any light yet. |
Calling an external command in Python | 89,228 | 2,312 | 2008-09-18T01:35:30Z | 3,879,406 | 32 | 2010-10-07T07:09:04Z | [
"python",
"shell",
"command",
"subprocess",
"external"
] | How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script? | Check the "pexpect" Python library, too.
It allows for interactive controlling of external programs/commands, even ssh, ftp, telnet etc. You can just type something like:
```
child = pexpect.spawn('ftp 192.168.0.24')
child.expect('(?i)name .*: ')
child.sendline('anonymous')
child.expect('(?i)password')
``` |
Calling an external command in Python | 89,228 | 2,312 | 2008-09-18T01:35:30Z | 4,728,086 | 7 | 2011-01-18T19:21:44Z | [
"python",
"shell",
"command",
"subprocess",
"external"
] | How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script? | `subprocess.check_call` is convenient if you don't want to test return values. It throws an exception on any error. |
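A small sketch of that behavior (using the Python interpreter itself as a portable stand-in for an external command):

```python
import subprocess
import sys

# Returns 0 when the command exits successfully.
rc = subprocess.check_call([sys.executable, "-c", "import sys; sys.exit(0)"])
print(rc)  # 0

# Raises CalledProcessError when the command exits non-zero.
try:
    subprocess.check_call([sys.executable, "-c", "import sys; sys.exit(1)"])
except subprocess.CalledProcessError as err:
    print("command failed with status", err.returncode)
```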
Calling an external command in Python | 89,228 | 2,312 | 2008-09-18T01:35:30Z | 5,824,565 | 23 | 2011-04-28T20:29:29Z | [
"python",
"shell",
"command",
"subprocess",
"external"
] | How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script? | If what you need is the output from the command you are calling, you can use `subprocess.check_output`, available since Python 2.7:
```
>>> subprocess.check_output(["ls", "-l", "/dev/null"])
'crw-rw-rw- 1 root root 1, 3 Oct 18 2007 /dev/null\n'
``` |
Calling an external command in Python | 89,228 | 2,312 | 2008-09-18T01:35:30Z | 9,676,642 | 31 | 2012-03-13T00:12:54Z | [
"python",
"shell",
"command",
"subprocess",
"external"
] | How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script? | I always use `fabric` for things like this:
```
from fabric.operations import local
result = local('ls', capture=True)
print "Content:\n%s" % (result, )
```
But this seem to be a good tool: [`sh` (Python subprocess interface)](https://github.com/amoffat/sh).
Look an example:
```
from sh import vgdisplay
print vgdisplay()
print vgdisplay('-v')
print vgdisplay(v=True)
``` |
Calling an external command in Python | 89,228 | 2,312 | 2008-09-18T01:35:30Z | 10,988,365 | 7 | 2012-06-11T22:28:35Z | [
"python",
"shell",
"command",
"subprocess",
"external"
] | How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script? | `os.system` does not let you store the command's output, so if you want to capture the results into a variable, use the `subprocess` module instead (`subprocess.call` returns only the exit code; `subprocess.check_output` returns the output itself). |
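A short sketch of capturing output into a variable with `subprocess.check_output` (available since Python 2.7; the interpreter itself stands in for the external command):

```python
import subprocess
import sys

# check_output runs the command, waits for it, and returns its stdout.
result = subprocess.check_output([sys.executable, "-c", "print('captured output')"])
print(result)  # b'captured output\n'
```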
Calling an external command in Python | 89,228 | 2,312 | 2008-09-18T01:35:30Z | 13,106,558 | 24 | 2012-10-28T05:14:01Z | [
"python",
"shell",
"command",
"subprocess",
"external"
] | How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script? | This is how I run my commands. This code has everything you need pretty much
```
from subprocess import Popen, PIPE
cmd = "ls -l ~/"
p = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
out, err = p.communicate()
print "Return code: ", p.returncode
print out.rstrip(), err.rstrip()
``` |
Calling an external command in Python | 89,228 | 2,312 | 2008-09-18T01:35:30Z | 13,402,722 | 17 | 2012-11-15T17:13:22Z | [
"python",
"shell",
"command",
"subprocess",
"external"
] | How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script? | # Update:
`subprocess.run` is the recommended approach [as of Python 3.5](https://docs.python.org/3.6/whatsnew/3.5.html#whatsnew-subprocess) if your code does not need to maintain compatibility with earlier Python versions. It's more consistent and offers similar ease-of-use as Envoy. (Piping isn't as straightforward though. See [this question for how](http://stackoverflow.com/questions/7389662/link-several-popen-commands-with-pipes).)
Here's some examples from [the docs](https://docs.python.org/3.6/library/subprocess.html#subprocess.run).
Run a process:
```
>>> subprocess.run(["ls", "-l"]) # doesn't capture output
CompletedProcess(args=['ls', '-l'], returncode=0)
```
Raise on failed run:
```
>>> subprocess.run("exit 1", shell=True, check=True)
Traceback (most recent call last):
...
subprocess.CalledProcessError: Command 'exit 1' returned non-zero exit status 1
```
Capture output:
```
>>> subprocess.run(["ls", "-l", "/dev/null"], stdout=subprocess.PIPE)
CompletedProcess(args=['ls', '-l', '/dev/null'], returncode=0,
stdout=b'crw-rw-rw- 1 root root 1, 3 Jan 23 16:23 /dev/null\n')
```
# Original answer:
I recommend trying [Envoy](https://github.com/kennethreitz/envoy). It's a wrapper for subprocess, which in turn [aims to replace](http://docs.python.org/2/library/subprocess.html) the older modules and functions. Envoy is subprocess for humans.
Example usage from [the readme](https://github.com/kennethreitz/envoy#readme):
```
>>> r = envoy.run('git config', data='data to pipe in', timeout=2)
>>> r.status_code
129
>>> r.std_out
'usage: git config [options]'
>>> r.std_err
''
```
Pipe stuff around too:
```
>>> r = envoy.run('uptime | pbcopy')
>>> r.command
'pbcopy'
>>> r.status_code
0
>>> r.history
[<Response 'uptime'>]
``` |
Calling an external command in Python | 89,228 | 2,312 | 2008-09-18T01:35:30Z | 15,954,964 | 18 | 2013-04-11T17:17:53Z | [
"python",
"shell",
"command",
"subprocess",
"external"
] | How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script? | # With Standard Library
Use [subprocess module](http://docs.python.org/2/library/subprocess.html):
```
from subprocess import call
call(['ls', '-l'])
```
It is the recommended standard way. However, more complicated tasks (pipes, output, input, etc.) can be tedious to construct and write.
*Note: [shlex.split](https://docs.python.org/2/library/shlex.html#shlex.split) can help you to parse the command for `call` and other `subprocess` functions in case you don't want (or you can't!) provide them in form of lists:*
```
import shlex
from subprocess import call
call(shlex.split('ls -l'))
```
# With External Dependencies
If you do not mind external dependencies, use [plumbum](https://pypi.python.org/pypi/plumbum):
```
from plumbum.cmd import ifconfig
print(ifconfig['wlan0']())
```
It is the best `subprocess` wrapper. It's cross-platform, i.e. it works on both Windows and Unix-like systems. Install by `pip install plumbum`.
Another popular library is [sh](https://pypi.python.org/pypi/sh):
```
from sh import ifconfig
print(ifconfig('wlan0'))
```
However, `sh` dropped Windows support, so it's not as awesome as it used to be. Install by `pip install sh`. |
Calling an external command in Python | 89,228 | 2,312 | 2008-09-18T01:35:30Z | 16,072,857 | 18 | 2013-04-18T01:09:33Z | [
"python",
"shell",
"command",
"subprocess",
"external"
] | How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script? | Without capturing the output:
```
import os
os.system("your command here")
```
Capturing the output (the `commands` module is Python 2 only; it was removed in Python 3):
```
import commands
commands.getoutput("your command here")
# or
commands.getstatusoutput("your command here")
``` |
Calling an external command in Python | 89,228 | 2,312 | 2008-09-18T01:35:30Z | 26,305,089 | 10 | 2014-10-10T17:41:13Z | [
"python",
"shell",
"command",
"subprocess",
"external"
] | How can I call an external command (as if I'd typed it at the Unix shell or Windows command prompt) from within a Python script? | There is also [Plumbum](http://plumbum.readthedocs.org/en/latest/)
```
>>> from plumbum import local
>>> ls = local["ls"]
>>> ls
LocalCommand(<LocalPath /bin/ls>)
>>> ls()
u'build.py\ndist\ndocs\nLICENSE\nplumbum\nREADME.rst\nsetup.py\ntests\ntodo.txt\n'
>>> notepad = local["c:\\windows\\notepad.exe"]
>>> notepad() # Notepad window pops up
u'' # Notepad window is closed by user, command returns
``` |
How do I verify that a string only contains letters, numbers, underscores and dashes? | 89,909 | 54 | 2008-09-18T04:04:58Z | 89,919 | 77 | 2008-09-18T04:08:19Z | [
"python",
"regex",
"string"
] | I know how to do this if I iterate through all of the characters in the string but I am looking for a more elegant method.
Thanks | A regular expression will do the trick with very little code:
```
import re
...
if re.match("^[A-Za-z0-9_-]*$", my_little_string):
# do something here
``` |
How do I verify that a string only contains letters, numbers, underscores and dashes? | 89,909 | 54 | 2008-09-18T04:04:58Z | 89,971 | 12 | 2008-09-18T04:18:40Z | [
"python",
"regex",
"string"
] | I know how to do this if I iterate through all of the characters in the string but I am looking for a more elegant method.
Thanks | There are a variety of ways of achieving this goal, some are clearer than others. For each of my examples, 'True' means that the string passed is valid, 'False' means it contains invalid characters.
First of all, there's the naive approach:
```
import string
allowed = string.letters + string.digits + '_' + '-'
def check_naive(mystring):
return all(c in allowed for c in mystring)
```
Then there's use of a regular expression, you can do this with re.match(). Note that '-' has to be at the end of the [] otherwise it will be used as a 'range' delimiter. Also note the $ which means 'end of string'. Other answers noted in this question use a special character class, '\w', I always prefer using an explicit character class range using [] because it is easier to understand without having to look up a quick reference guide, and easier to special-case.
```
import re
CHECK_RE = re.compile('[a-zA-Z0-9_-]+$')
def check_re(mystring):
return CHECK_RE.match(mystring)
```
Another solution noted that you can do an inverse match with regular expressions, I've included that here now. Note that [^...] inverts the character class because the ^ is used:
```
CHECK_INV_RE = re.compile('[^a-zA-Z0-9_-]')
def check_inv_re(mystring):
return not CHECK_INV_RE.search(mystring)
```
You can also do something tricky with the 'set' object. Have a look at this example, which removes from the original string all the characters that are allowed, leaving us with a set containing either a) nothing, or b) the offending characters from the string:
```
def check_set(mystring):
return not set(mystring) - set(allowed)
``` |
How do I verify that a string only contains letters, numbers, underscores and dashes? | 89,909 | 54 | 2008-09-18T04:04:58Z | 91,572 | 10 | 2008-09-18T10:49:54Z | [
"python",
"regex",
"string"
] | I know how to do this if I iterate through all of the characters in the string but I am looking for a more elegant method.
Thanks | If it were not for the dashes and underscores, the easiest solution would be
```
my_little_string.isalnum()
```
(Section [3.6.1](https://docs.python.org/3/library/stdtypes.html#str.isalnum) of the Python Library Reference) |
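One common workaround in that spirit (a sketch; note it rejects the empty string, unlike the `*`-quantified regexes elsewhere in this thread): strip out the two extra allowed characters first, then apply `isalnum()`:

```python
def is_valid(s):
    # Remove the explicitly allowed dash and underscore, then test the rest.
    return s.replace('-', '').replace('_', '').isalnum()

print(is_valid('abc_123-XYZ'))  # True
print(is_valid('abc!123'))      # False
```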
How do I verify that a string only contains letters, numbers, underscores and dashes? | 89,909 | 54 | 2008-09-18T04:04:58Z | 92,000 | 20 | 2008-09-18T12:19:48Z | [
"python",
"regex",
"string"
] | I know how to do this if I iterate through all of the characters in the string but I am looking for a more elegant method.
Thanks | [Edit] There's another solution not mentioned yet, and it seems to outperform the others given so far in most cases.
Use string.translate to replace all valid characters in the string, and see if we have any invalid ones left over. This is pretty fast as it uses the underlying C function to do the work, with very little python bytecode involved.
Obviously performance isn't everything - going for the most readable solutions is probably the best approach when not in a performance critical codepath, but just to see how the solutions stack up, here's a performance comparison of all the methods proposed so far. check\_trans is the one using the string.translate method.
Test code:
```
import string, re, timeit
pat = re.compile('[\w-]*$')
pat_inv = re.compile ('[^\w-]')
allowed_chars=string.ascii_letters + string.digits + '_-'
allowed_set = set(allowed_chars)
trans_table = string.maketrans('','')
def check_set_diff(s):
return not set(s) - allowed_set
def check_set_all(s):
return all(x in allowed_set for x in s)
def check_set_subset(s):
return set(s).issubset(allowed_set)
def check_re_match(s):
return pat.match(s)
def check_re_inverse(s): # Search for non-matching character.
return not pat_inv.search(s)
def check_trans(s):
return not s.translate(trans_table,allowed_chars)
test_long_almost_valid='a_very_long_string_that_is_mostly_valid_except_for_last_char'*99 + '!'
test_long_valid='a_very_long_string_that_is_completely_valid_' * 99
test_short_valid='short_valid_string'
test_short_invalid='/$%$%&'
test_long_invalid='/$%$%&' * 99
test_empty=''
def main():
funcs = sorted(f for f in globals() if f.startswith('check_'))
tests = sorted(f for f in globals() if f.startswith('test_'))
for test in tests:
print "Test %-15s (length = %d):" % (test, len(globals()[test]))
for func in funcs:
print " %-20s : %.3f" % (func,
timeit.Timer('%s(%s)' % (func, test), 'from __main__ import pat,allowed_set,%s' % ','.join(funcs+tests)).timeit(10000))
print
if __name__=='__main__': main()
```
The results on my system are:
```
Test test_empty (length = 0):
check_re_inverse : 0.042
check_re_match : 0.030
check_set_all : 0.027
check_set_diff : 0.029
check_set_subset : 0.029
check_trans : 0.014
Test test_long_almost_valid (length = 5941):
check_re_inverse : 2.690
check_re_match : 3.037
check_set_all : 18.860
check_set_diff : 2.905
check_set_subset : 2.903
check_trans : 0.182
Test test_long_invalid (length = 594):
check_re_inverse : 0.017
check_re_match : 0.015
check_set_all : 0.044
check_set_diff : 0.311
check_set_subset : 0.308
check_trans : 0.034
Test test_long_valid (length = 4356):
check_re_inverse : 1.890
check_re_match : 1.010
check_set_all : 14.411
check_set_diff : 2.101
check_set_subset : 2.333
check_trans : 0.140
Test test_short_invalid (length = 6):
check_re_inverse : 0.017
check_re_match : 0.019
check_set_all : 0.044
check_set_diff : 0.032
check_set_subset : 0.037
check_trans : 0.015
Test test_short_valid (length = 18):
check_re_inverse : 0.125
check_re_match : 0.066
check_set_all : 0.104
check_set_diff : 0.051
check_set_subset : 0.046
check_trans : 0.017
```
The translate approach seems best in most cases, dramatically so with long valid strings, but is beaten out by regexes in test\_long\_invalid (Presumably because the regex can bail out immediately, but translate always has to scan the whole string). The set approaches are usually worst, beating regexes only for the empty string case.
Using all(x in allowed\_set for x in s) performs well if it bails out early, but can be bad if it has to iterate through every character. issubset and set difference are comparable, and are consistently proportional to the length of the string regardless of the data.
There's a similar difference between the regex methods matching all valid characters and searching for invalid characters. Matching performs a little better when checking for a long, but fully valid string, but worse for invalid characters near the end of the string. |
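One caveat for later readers: `check_trans` relies on the two-argument `str.translate(table, deletechars)` form, which only exists in Python 2. A rough Python 3 equivalent of the same delete-the-allowed-characters trick:

```python
import string

allowed_chars = string.ascii_letters + string.digits + '_-'
# str.maketrans with a third argument maps those characters to None,
# i.e. translate() will delete them from the string.
delete_allowed = str.maketrans('', '', allowed_chars)

def check_trans_py3(s):
    # A valid string has nothing left once allowed characters are removed.
    return not s.translate(delete_allowed)

print(check_trans_py3('a_valid-string1'))  # True
print(check_trans_py3('not valid!'))       # False
```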
Issue with Regular expressions in python | 90,052 | 2 | 2008-09-18T04:36:00Z | 90,138 | 23 | 2008-09-18T04:52:26Z | [
"python",
"html",
"regex"
] | Ok, so i'm working on a regular expression to search out all the header information in a site.
I've compiled the regular expression:
```
regex = re.compile(r'''
<h[0-9]>\s?
(<a[ ]href="[A-Za-z0-9.]*">)?\s?
[A-Za-z0-9.,:'"=/?;\s]*\s?
[A-Za-z0-9.,:'"=/?;\s]?
''', re.X)
```
When I run this in a Python regex tester, it works out wonderfully.
Sample data:
```
<body>
<h1>Dog </h1>
<h2>Cat </h2>
<h3>Fancy </h3>
<h1>Tall cup of lemons</h1>
<h1><a href="dog.com">Dog thing</a></h1>
</body>
```
Now, in the REDemo, it works wonderfully.
When I put it in my Python code, however, it only prints `<a href="dog.com">`
Here's my Python code; I'm not sure if I'm doing something wrong or if something is lost in translation. I appreciate your help.
```
stories=[]
response = urllib2.urlopen('http://apricotclub.org/duh.html')
html = response.read().lower()
p = re.compile('<h[0-9]>\\s?(<a href=\"[A-Za-z0-9.]*\">)?\\s?[A-Za-z0-9.,:\'\"=/?;\\s]*\\s?[A-Za-z0-9.,:\'\"=/?;\\s]?')
stories=re.findall(p, html)
for i in stories:
if len(i) >= 5:
print i
```
I should also note that when I take out the `(<a href=\"[A-Za-z0-9.]*\">)?` from the regular expression, it works fine for non-link `<hN>` lines. | This question has been asked in several forms over the last few days, so I'm going to say this very clearly.
# Q: How do I parse HTML with Regular Expressions?
# A: Please Don't.
Use [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/), [html5lib](http://code.google.com/p/html5lib/) or [lxml.html](http://codespeak.net/lxml/lxmlhtml.html). Please. |
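To make the advice concrete: even the standard library's parser handles the sample markup that defeated the regex above. A rough sketch (Python 3 spelling; in Python 2 the module is called `HTMLParser`):

```python
from html.parser import HTMLParser

class HeaderExtractor(HTMLParser):
    """Collect the text of <h1>..<h6> elements with a real parser."""
    def __init__(self):
        super().__init__()
        self.in_header = False
        self.headers = []

    def handle_starttag(self, tag, attrs):
        # Tag names arrive lowercased, e.g. 'h1', 'h2', ...
        if len(tag) == 2 and tag[0] == 'h' and tag[1].isdigit():
            self.in_header = True
            self.headers.append('')

    def handle_endtag(self, tag):
        if len(tag) == 2 and tag[0] == 'h' and tag[1].isdigit():
            self.in_header = False

    def handle_data(self, data):
        # Text inside nested tags (like the <a> link) is still collected.
        if self.in_header:
            self.headers[-1] += data

parser = HeaderExtractor()
parser.feed('<body><h1>Dog</h1><h2>Cat</h2>'
            '<h1><a href="dog.com">Dog thing</a></h1></body>')
print(parser.headers)  # ['Dog', 'Cat', 'Dog thing']
```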
Will everything in the standard library treat strings as unicode in Python 3.0? | 91,205 | 10 | 2008-09-18T09:29:23Z | 91,301 | 10 | 2008-09-18T09:52:48Z | [
"python",
"unicode",
"string",
"cgi",
"python-3.x"
] | I'm a little confused about how the standard library will behave now that Python (from 3.0) is unicode-based. Will modules such as CGI and urllib use unicode strings or will they use the new 'bytes' type and just provide encoded data? | Logically a lot of things like MIME-encoded mail messages, URLs, XML documents, and so on should be returned as `bytes` not strings. This could cause some consternation as the libraries start to be nailed down for Python 3 and people discover that they have to be more aware of the `bytes`/`string` conversions than they were for `str`/`unicode` ... |
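A small Python 3 sketch of the explicit boundary this answer anticipates: conversions between the two types must name an encoding, and mixing them is an error rather than a silent guess.

```python
# Wire data (what a socket or urlopen() hands back) is bytes:
raw = b'caf\xc3\xa9'
text = raw.decode('utf-8')   # bytes -> str: decoding is explicit
back = text.encode('utf-8')  # str -> bytes: so is encoding
print(text)                  # café

# Mixing the two types raises instead of guessing an encoding:
try:
    combined = b'abc' + 'def'
except TypeError as exc:
    print('cannot mix bytes and str:', exc)
```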
Is there a pretty printer for python data? | 91,810 | 16 | 2008-09-18T11:43:45Z | 91,818 | 24 | 2008-09-18T11:44:54Z | [
"python",
"prettify"
] | Working with python interactively, it's sometimes necessary to display a result which is some arbitrarily complex data structure (like lists with embedded lists, etc.)
The default way to display them is just one massive linear dump which just wraps over and over and you have to parse carefully to read it.
Is there something that will take any python object and display it in a more rational manner. e.g.
```
[0, 1,
[a, b, c],
2, 3, 4]
```
instead of:
```
[0, 1, [a, b, c], 2, 3, 4]
```
I know that's not a very good example, but I think you get the idea. | ```
from pprint import pprint
a = [0, 1, ['a', 'b', 'c'], 2, 3, 4]
pprint(a)
```
Note that for a short list like my example, pprint will in fact print it all on one line. However, for more complex structures it does a pretty good job of pretty printing data. |
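To illustrate that switch-over, here is a small sketch (the data structure is made up for the example); `pformat` returns the same layout as a string, which is handy for logging:

```python
from pprint import pprint, pformat

# A structure too wide to fit on one line at the chosen width:
data = {'name': 'cfg',
        'hosts': [{'ip': '10.0.0.%d' % i, 'tags': ['a', 'b']}
                  for i in range(3)]}
pprint(data, width=40)

s = pformat(data, width=40)
print(len(s.splitlines()) > 1)  # True: spread over several lines
```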
Is there a pretty printer for python data? | 91,810 | 16 | 2008-09-18T11:43:45Z | 92,260 | 10 | 2008-09-18T12:55:39Z | [
"python",
"prettify"
] | Working with python interactively, it's sometimes necessary to display a result which is some arbitrarily complex data structure (like lists with embedded lists, etc.)
The default way to display them is just one massive linear dump which wraps over and over, and you have to read it carefully to parse it.
Is there something that will take any python object and display it in a more rational manner. e.g.
```
[0, 1,
[a, b, c],
2, 3, 4]
```
instead of:
```
[0, 1, [a, b, c], 2, 3, 4]
```
I know that's not a very good example, but I think you get the idea. | Somtimes [YAML](http://pyyaml.org/) can be good for this.
```
import yaml
a = [0, 1, ['a', 'b', 'c'], 2, 3, 4]
print yaml.dump(a)
```
Produces:
```
- 0
- 1
- [a, b, c]
- 2
- 3
- 4
``` |
Is there a pretty printer for python data? | 91,810 | 16 | 2008-09-18T11:43:45Z | 93,312 | 8 | 2008-09-18T14:56:26Z | [
"python",
"prettify"
] | Working with python interactively, it's sometimes necessary to display a result which is some arbitrarily complex data structure (like lists with embedded lists, etc.)
The default way to display them is just one massive linear dump which wraps over and over, and you have to read it carefully to parse it.
Is there something that will take any python object and display it in a more rational manner. e.g.
```
[0, 1,
[a, b, c],
2, 3, 4]
```
instead of:
```
[0, 1, [a, b, c], 2, 3, 4]
```
I know that's not a very good example, but I think you get the idea. | In addition to `pprint.pprint`, `pprint.pformat` is really useful for making readable `__repr__`s. My complex `__repr__`s usually look like so:
```
def __repr__(self):
from pprint import pformat
return "<ClassName %s>" % pformat({"attrs":self.attrs,
"that_i":self.that_i,
"care_about":self.care_about})
``` |
Google App Engine: how can I programmatically access the properties of my Model class? | 91,821 | 8 | 2008-09-18T11:45:16Z | 91,970 | 7 | 2008-09-18T12:14:37Z | [
"python",
"string",
"google-app-engine"
] | I have a model class:
```
class Person(db.Model):
first_name = db.StringProperty(required=True)
last_name = db.StringProperty(required=True)
```
I have an instance of this class in `p`, and string `s` contains the value `'first_name'`. I would like to do something like:
```
print p[s]
```
and
```
p[s] = new_value
```
Both of which result in a `TypeError`.
Does anybody know how I can achieve what I would like? | If the model class is sufficiently intelligent, it should recognize the standard Python ways of doing this.
Try:
```
getattr(p, s)
setattr(p, s, new_value)
```
There is also hasattr available. |
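A quick sketch of how those calls line up with the question, using a plain class as a stand-in for the App Engine model (names here are illustrative):

```python
class Person(object):
    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name

p = Person('Ada', 'Lovelace')
s = 'first_name'

print(getattr(p, s))      # read p.first_name dynamically -> Ada
setattr(p, s, 'Grace')    # the equivalent of the desired p[s] = new_value
print(p.first_name)       # Grace
print(hasattr(p, 'age'))  # False -- probe with hasattr before getattr
```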
Python, beyond the basics | 92,230 | 16 | 2008-09-18T12:51:07Z | 92,318 | 14 | 2008-09-18T13:01:53Z | [
"python"
] | I've gotten to grips with the basics of Python and I've got a small holiday which I want to use some of to learn a little more Python. The problem is that I have no idea what to learn or where to start. I primarily do web development, but in this case I don't know how much difference it will make. | Well, there are great resources for advanced Python programming:
* Dive Into Python ([read it for free](http://www.diveintopython.net/))
* Online python cookbooks (e.g. [here](http://code.activestate.com/recipes/langs/python/) and [there](http://the.taoofmac.com/space/Python/Grimoire))
* O'Reilly's Python Cookbook (see amazon)
* A funny riddle game: [Python Challenge](http://www.pythonchallenge.com/)
Here is a list of subjects you must master if you want to write "Python" on your resume:
* [list comprehensions](http://docs.python.org/tutorial/datastructures.html#list-comprehensions)
* [iterators and generators](http://stackoverflow.com/questions/231767/the-python-yield-keyword-explained/231855#231855)
* [decorators](http://stackoverflow.com/questions/739654/understanding-python-decorators/1594484#1594484)
They are what make Python such a cool language (with the standard library of course, that I keep discovering everyday). |
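A minimal, self-contained taste of each of the three (toy examples, not from any particular tutorial):

```python
# List comprehension: build a list in a single expression.
squares = [n * n for n in range(10) if n % 2 == 0]
print(squares)  # [0, 4, 16, 36, 64]

# Generator: yield produces values lazily, one at a time.
def countdown(n):
    while n > 0:
        yield n
        n -= 1
print(list(countdown(3)))  # [3, 2, 1]

# Decorator: a function that wraps and returns another function.
def shout(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return 'hello, %s' % name
print(greet('world'))  # HELLO, WORLD
```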
Stripping non printable characters from a string in python | 92,438 | 59 | 2008-09-18T13:17:06Z | 92,488 | 38 | 2008-09-18T13:23:14Z | [
"python",
"string",
"non-printable"
] | I used to run
```
$s =~ s/[^[:print:]]//g;
```
on Perl to get rid of non printable characters.
In Python there are no POSIX regex classes, and I can't write [:print:] and have it mean what I want. I know of no way in Python to detect whether a character is printable or not.
What would you do?
EDIT: It has to support Unicode characters as well. The string.printable way will happily strip them out of the output.
curses.ascii.isprint will return false for any unicode character. | As far as I know, the most pythonic/efficient method would be:
```
import string
filtered_string = filter(lambda x: x in string.printable, myStr)
``` |
Stripping non printable characters from a string in python | 92,438 | 59 | 2008-09-18T13:17:06Z | 93,029 | 52 | 2008-09-18T14:28:04Z | [
"python",
"string",
"non-printable"
] | I used to run
```
$s =~ s/[^[:print:]]//g;
```
on Perl to get rid of non printable characters.
In Python there are no POSIX regex classes, and I can't write [:print:] and have it mean what I want. I know of no way in Python to detect whether a character is printable or not.
What would you do?
EDIT: It has to support Unicode characters as well. The string.printable way will happily strip them out of the output.
curses.ascii.isprint will return false for any unicode character. | Iterating over strings is unfortunately rather slow in Python. Regular expressions are over an order of magnitude faster for this kind of thing. You just have to build the character class yourself. The *unicodedata* module is quite helpful for this, especially the *unicodedata.category()* function. See [Unicode Character Database](http://www.unicode.org/reports/tr44/#General_Category_Values) for descriptions of the categories.
```
import unicodedata, re
all_chars = (unichr(i) for i in xrange(0x110000))
control_chars = ''.join(c for c in all_chars if unicodedata.category(c) == 'Cc')
# or equivalently and much more efficiently
control_chars = ''.join(map(unichr, range(0,32) + range(127,160)))
control_char_re = re.compile('[%s]' % re.escape(control_chars))
def remove_control_chars(s):
return control_char_re.sub('', s)
``` |
Stripping non printable characters from a string in python | 92,438 | 59 | 2008-09-18T13:17:06Z | 93,557 | 8 | 2008-09-18T15:25:37Z | [
"python",
"string",
"non-printable"
] | I used to run
```
$s =~ s/[^[:print:]]//g;
```
on Perl to get rid of non printable characters.
In Python there are no POSIX regex classes, and I can't write [:print:] and have it mean what I want. I know of no way in Python to detect whether a character is printable or not.
What would you do?
EDIT: It has to support Unicode characters as well. The string.printable way will happily strip them out of the output.
curses.ascii.isprint will return false for any unicode character. | You could try setting up a filter using the `unicodedata.category()` function:
```
import unicodedata

printable = set(['Lu', 'Ll', ...])  # fill in the Unicode categories you consider printable
def filter_non_printable(s):
    return ''.join(c for c in s if unicodedata.category(c) in printable)
```
See the [Unicode database character properties](http://www.unicode.org/versions/Unicode9.0.0/ch04.pdf) for the available categories |
How can I highlight text in Scintilla? | 92,565 | 6 | 2008-09-18T13:34:54Z | 92,778 | 9 | 2008-09-18T13:59:19Z | [
"c#",
"python",
"perl",
"ide",
"scintilla"
] | I am writing an editor using [Scintilla](http://www.scintilla.org/).
I am already using a lexer to do automatic syntax highlighting but now I would like to mark search results. If I want to mark only one hit I can set the selection there, however, I would like to mark (e.g. with yellow background) all the hits.
I'm writing this in Perl, but if you have suggestions in other languages that would be cool as well. | Have you read the [Markers reference in Scintilla doc](http://scintilla.sourceforge.net/ScintillaDoc.html#Markers)?
This reference can be a bit obscure, so I advise to take a look at the source code of SciTE as well. This text editor was originally a testbed for Scintilla. It grown to a full fledged editor, but it is still a good implementation reference for all things Scintilla.
In our particular case, there is a Mark All button in the Find dialog. You can find its implementation in SciTEBase::MarkAll() method. This method only loops on search results (until it loops on the first search result, if any) and puts a bookmark on the found lines (and optionally set an indicator on the found items).
The found line is gotten using SCI\_LINEFROMPOSITION(posFound), the bookmark is just a call to SCI\_MARKERADD(lineno, markerBookmark).
Note that the mark can be symbol in a margin, or if not associated to a margin, it will highlight the whole line.
HTH. |
time.sleep -- sleeps thread or process? | 92,928 | 227 | 2008-09-18T14:16:45Z | 92,953 | 15 | 2008-09-18T14:18:51Z | [
"python",
"multithreading",
"time",
"sleep",
"python-internals"
] | In Python for the \*nix, does `time.sleep()` block the thread or the process? | Just the thread. |
time.sleep -- sleeps thread or process? | 92,928 | 227 | 2008-09-18T14:16:45Z | 92,986 | 34 | 2008-09-18T14:22:52Z | [
"python",
"multithreading",
"time",
"sleep",
"python-internals"
] | In Python for the \*nix, does `time.sleep()` block the thread or the process? | It will just sleep the thread except in the case where your application has only a single thread, in which case it will sleep the thread and effectively the process as well.
The Python documentation on sleep doesn't specify this, however, so I can certainly understand the confusion!
<http://docs.python.org/2/library/time.html> |
time.sleep -- sleeps thread or process? | 92,928 | 227 | 2008-09-18T14:16:45Z | 93,179 | 219 | 2008-09-18T14:42:23Z | [
"python",
"multithreading",
"time",
"sleep",
"python-internals"
] | In Python for the \*nix, does `time.sleep()` block the thread or the process? | It blocks the thread. If you look in Modules/timemodule.c in the Python source, you'll see that in the call to `floatsleep()`, the substantive part of the sleep operation is wrapped in a Py\_BEGIN\_ALLOW\_THREADS and Py\_END\_ALLOW\_THREADS block, allowing other threads to continue to execute while the current one sleeps. You can also test this with a simple python program:
```
import time
from threading import Thread
class worker(Thread):
def run(self):
for x in xrange(0,11):
print x
time.sleep(1)
class waiter(Thread):
def run(self):
for x in xrange(100,103):
print x
time.sleep(5)
def run():
worker().start()
waiter().start()
```
Which will print:
```
>>> thread_test.run()
0
100
>>> 1
2
3
4
5
101
6
7
8
9
10
102
``` |
What's the easiest non-memory intensive way to output XML from Python? | 93,710 | 12 | 2008-09-18T15:42:19Z | 94,114 | 13 | 2008-09-18T16:22:28Z | [
"python",
"xml",
"streaming"
] | Basically, something similar to System.Xml.XmlWriter - A streaming XML Writer that doesn't incur much of a memory overhead. So that rules out xml.dom and xml.dom.minidom. Suggestions? | I think you'll find XMLGenerator from xml.sax.saxutils is the closest thing to what you want.
```
import time
from xml.sax.saxutils import XMLGenerator
from xml.sax.xmlreader import AttributesNSImpl
LOG_LEVELS = ['DEBUG', 'WARNING', 'ERROR']
class xml_logger:
def __init__(self, output, encoding):
"""
Set up a logger object, which takes SAX events and outputs
an XML log file
"""
logger = XMLGenerator(output, encoding)
logger.startDocument()
attrs = AttributesNSImpl({}, {})
logger.startElementNS((None, u'log'), u'log', attrs)
self._logger = logger
self._output = output
self._encoding = encoding
return
def write_entry(self, level, msg):
"""
Write a log entry to the logger
level - the level of the entry
msg - the text of the entry. Must be a Unicode object
"""
#Note: in a real application, I would use ISO 8601 for the date
#asctime used here for simplicity
now = time.asctime(time.localtime())
attr_vals = {
(None, u'date'): now,
(None, u'level'): LOG_LEVELS[level],
}
attr_qnames = {
(None, u'date'): u'date',
(None, u'level'): u'level',
}
attrs = AttributesNSImpl(attr_vals, attr_qnames)
self._logger.startElementNS((None, u'entry'), u'entry', attrs)
self._logger.characters(msg)
self._logger.endElementNS((None, u'entry'), u'entry')
return
def close(self):
"""
Clean up the logger object
"""
self._logger.endElementNS((None, u'log'), u'log')
self._logger.endDocument()
return
if __name__ == "__main__":
#Test it out
import sys
xl = xml_logger(sys.stdout, 'utf-8')
xl.write_entry(2, u"Vanilla log entry")
xl.close()
```
You'll probably want to look at the rest of the article I got that from at <http://www.xml.com/pub/a/2003/03/12/py-xml.html>. |
How do I persist to disk a temporary file using Python? | 94,153 | 13 | 2008-09-18T16:27:26Z | 94,339 | 16 | 2008-09-18T16:45:37Z | [
"python",
"temporary-files"
] | I am attempting to use the 'tempfile' module for manipulating and creating text files. Once the file is ready I want to save it to disk. I thought it would be as simple as using 'shutil.copy'. However, I get a 'permission denied' IOError:
```
>>> import tempfile, shutil
>>> f = tempfile.TemporaryFile(mode ='w+t')
>>> f.write('foo')
>>> shutil.copy(f.name, 'bar.txt')
Traceback (most recent call last):
File "<pyshell#5>", line 1, in <module>
shutil.copy(f.name, 'bar.txt')
File "C:\Python25\lib\shutil.py", line 80, in copy
copyfile(src, dst)
File "C:\Python25\lib\shutil.py", line 46, in copyfile
fsrc = open(src, 'rb')
IOError: [Errno 13] Permission denied: 'c:\\docume~1\\me\\locals~1\\temp\\tmpvqq3go'
>>>
```
Is this not intended when using the 'tempfile' library? Is there a better way to do this? (Maybe I am overlooking something very trivial) | The file you create with `TemporaryFile` or `NamedTemporaryFile` is automatically removed when it's closed, which is why you get an error. If you don't want this, you can use `mkstemp` instead (see the docs for [tempfile](http://docs.python.org/lib/module-tempfile.html)).
```
>>> import tempfile, shutil, os
>>> fd, path = tempfile.mkstemp()
>>> os.write(fd, 'foo')
>>> os.close(fd)
>>> shutil.copy(path, 'bar.txt')
>>> os.remove(path)
``` |
How do I persist to disk a temporary file using Python? | 94,153 | 13 | 2008-09-18T16:27:26Z | 109,591 | 11 | 2008-09-20T22:15:33Z | [
"python",
"temporary-files"
] | I am attempting to use the 'tempfile' module for manipulating and creating text files. Once the file is ready I want to save it to disk. I thought it would be as simple as using 'shutil.copy'. However, I get a 'permission denied' IOError:
```
>>> import tempfile, shutil
>>> f = tempfile.TemporaryFile(mode ='w+t')
>>> f.write('foo')
>>> shutil.copy(f.name, 'bar.txt')
Traceback (most recent call last):
File "<pyshell#5>", line 1, in <module>
shutil.copy(f.name, 'bar.txt')
File "C:\Python25\lib\shutil.py", line 80, in copy
copyfile(src, dst)
File "C:\Python25\lib\shutil.py", line 46, in copyfile
fsrc = open(src, 'rb')
IOError: [Errno 13] Permission denied: 'c:\\docume~1\\me\\locals~1\\temp\\tmpvqq3go'
>>>
```
Is this not intended when using the 'tempfile' library? Is there a better way to do this? (Maybe I am overlooking something very trivial) | Starting from python 2.6 you can also use `NamedTemporaryFile` with the `delete=` option set to False. This way the temporary file will be accessible, even after you close it.
Note that on Windows (NT and later) you cannot access the file a second time while it is still open. You have to close it before you can copy it. This is not true on Unix systems. |
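A short sketch of that approach applied to the original example (file names here are illustrative):

```python
import os
import shutil
import tempfile

# delete=False (available since Python 2.6) keeps the file on disk
# after close(), so it can be reopened or copied by name.
f = tempfile.NamedTemporaryFile(mode='w+t', suffix='.txt', delete=False)
f.write('foo')
f.close()  # on Windows, close it before opening it a second time

dest = os.path.join(tempfile.gettempdir(), 'bar_example.txt')
shutil.copy(f.name, dest)
print(open(dest).read())  # foo

# With delete=False, cleanup is now our responsibility:
os.remove(f.name)
os.remove(dest)
```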
How do I persist to disk a temporary file using Python? | 94,153 | 13 | 2008-09-18T16:27:26Z | 9,155,528 | 28 | 2012-02-06T04:19:42Z | [
"python",
"temporary-files"
] | I am attempting to use the 'tempfile' module for manipulating and creating text files. Once the file is ready I want to save it to disk. I thought it would be as simple as using 'shutil.copy'. However, I get a 'permission denied' IOError:
```
>>> import tempfile, shutil
>>> f = tempfile.TemporaryFile(mode ='w+t')
>>> f.write('foo')
>>> shutil.copy(f.name, 'bar.txt')
Traceback (most recent call last):
File "<pyshell#5>", line 1, in <module>
shutil.copy(f.name, 'bar.txt')
File "C:\Python25\lib\shutil.py", line 80, in copy
copyfile(src, dst)
File "C:\Python25\lib\shutil.py", line 46, in copyfile
fsrc = open(src, 'rb')
IOError: [Errno 13] Permission denied: 'c:\\docume~1\\me\\locals~1\\temp\\tmpvqq3go'
>>>
```
Is this not intended when using the 'tempfile' library? Is there a better way to do this? (Maybe I am overlooking something very trivial) | hop is right, and dF. is incorrect on why the error occurs.
Since you haven't called `f.close()` yet, the file is **not** removed.
The [doc](http://docs.python.org/library/tempfile.html#tempfile.NamedTemporaryFile) for `NamedTemporaryFile` says:
> Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later).
And for `TemporaryFile`:
> Under Unix, the directory entry for the file is removed immediately after the file is created. Other platforms do not support this; your code should not rely on a temporary file created using this function having or not having a visible name in the file system.
Therefore, to persist a temporary file (on Windows), you can do the following:
```
>>> import tempfile, shutil
>>> f = tempfile.NamedTemporaryFile(mode='w+t', delete=False)
>>> f.write('foo')
>>> file_name = f.name
>>> f.close()
>>> shutil.copy(file_name, 'bar.txt')
```
The solution Hans Sjunnesson provided is also off, because `copyfileobj` only copies from file-like object to file-like object, not file name:
> shutil.copyfileobj(fsrc, fdst[, length])
>
> > Copy the contents of the file-like object fsrc to the file-like object fdst. The integer length, if given, is the buffer size. In particular, a negative length value means to copy the data without looping over the source data in chunks; by default the data is read in chunks to avoid uncontrolled memory consumption. Note that if the current file position of the fsrc object is not 0, only the contents from the current file position to the end of the file will be copied. |
Distributed python | 94,334 | 6 | 2008-09-18T16:45:07Z | 94,597 | 9 | 2008-09-18T17:18:53Z | [
"python",
"distributed"
] | What is the best python framework to create distributed applications? For example to build a P2P app. | I think you mean "Networked Apps"? Distributed means an app that can split its workload among multiple worker clients over the network.
You probably want:
[Twisted](http://twistedmatrix.com/trac/) |
How do I read selected files from a remote Zip archive over HTTP using Python? | 94,490 | 9 | 2008-09-18T17:03:36Z | 94,491 | 8 | 2008-09-18T17:03:42Z | [
"python",
"http",
"zip"
] | I need to read selected files, matching on the file name, from a remote zip archive using Python. I don't want to save the full zip to a temporary file (it's not that large, so I can handle everything in memory).
I've already written the code and it works, and I'm answering this myself so I can search for it later. But since evidence suggests that I'm one of the dumber participants on Stackoverflow, I'm sure there's room for improvement. | Here's how I did it (grabbing all files ending in ".ranks"):
```
import urllib2, cStringIO, zipfile
try:
remotezip = urllib2.urlopen(url)
zipinmemory = cStringIO.StringIO(remotezip.read())
zip = zipfile.ZipFile(zipinmemory)
for fn in zip.namelist():
if fn.endswith(".ranks"):
ranks_data = zip.read(fn)
for line in ranks_data.split("\n"):
# do something with each line
except urllib2.HTTPError:
# handle exception
``` |
What is the difference between range and xrange functions in Python 2.X? | 94,935 | 378 | 2008-09-18T17:52:51Z | 94,957 | 25 | 2008-09-18T17:55:00Z | [
"python",
"loops",
"range",
"python-2.x",
"xrange"
] | Apparently xrange is faster but I have no idea why it's faster (and no proof besides the anecdotal so far that it is faster) or what besides that is different about
```
for i in range(0, 20):
for i in xrange(0, 20):
``` | xrange returns an iterator and only keeps one number in memory at a time. range keeps the entire list of numbers in memory. |
What is the difference between range and xrange functions in Python 2.X? | 94,935 | 378 | 2008-09-18T17:52:51Z | 94,962 | 460 | 2008-09-18T17:55:13Z | [
"python",
"loops",
"range",
"python-2.x",
"xrange"
] | Apparently xrange is faster but I have no idea why it's faster (and no proof besides the anecdotal so far that it is faster) or what besides that is different about
```
for i in range(0, 20):
for i in xrange(0, 20):
``` | range creates a list, so if you do `range(1, 10000000)` it creates a list in memory with `9999999` elements.
`xrange` is a sequence object that evaluates lazily. |
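This is easy to see in Python 3, where `range` has taken on the old `xrange` behaviour: the range object stays the same size no matter how large the span, while materializing it as a list grows with the element count.

```python
import sys

r = range(1, 10000000)  # lazy: stores just start/stop/step, no elements
print(sys.getsizeof(r))                  # small, constant size
print(sys.getsizeof(range(1, 10)))       # same size for a tiny span
print(sys.getsizeof(list(range(1000))))  # a real list: grows with length
```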
What is the difference between range and xrange functions in Python 2.X? | 94,935 | 378 | 2008-09-18T17:52:51Z | 94,971 | 18 | 2008-09-18T17:55:59Z | [
"python",
"loops",
"range",
"python-2.x",
"xrange"
] | Apparently xrange is faster but I have no idea why it's faster (and no proof besides the anecdotal so far that it is faster) or what besides that is different about
```
for i in range(0, 20):
for i in xrange(0, 20):
``` | Do spend some time with the [Library Reference](http://docs.python.org/lib/typesseq-xrange.html). The more familiar you are with it, the faster you can find answers to questions like this. Especially important are the first few chapters about builtin objects and types.
> The advantage of the xrange type is that an xrange object will always
> take the same amount of memory, no matter the size of the range it represents.
> There are no consistent performance advantages.
Another way to find quick information about a Python construct is the docstring and the help-function:
```
print xrange.__doc__ # def doc(x): print x.__doc__ is super useful
help(xrange)
``` |
What is the difference between range and xrange functions in Python 2.X? | 94,935 | 378 | 2008-09-18T17:52:51Z | 95,010 | 8 | 2008-09-18T17:59:46Z | [
"python",
"loops",
"range",
"python-2.x",
"xrange"
] | Apparently xrange is faster but I have no idea why it's faster (and no proof besides the anecdotal so far that it is faster) or what besides that is different about
```
for i in range(0, 20):
for i in xrange(0, 20):
``` | It is for optimization reasons.
range() will create a list of values from start to end (0 .. 20 in your example). This will become an expensive operation on very large ranges.
xrange() on the other hand is much more optimised. It will only compute the next value when needed (via an xrange sequence object) and does not create a list of all values like range() does.
What is the difference between range and xrange functions in Python 2.X? | 94,935 | 378 | 2008-09-18T17:52:51Z | 95,100 | 148 | 2008-09-18T18:08:19Z | [
"python",
"loops",
"range",
"python-2.x",
"xrange"
] | Apparently xrange is faster but I have no idea why it's faster (and no proof besides the anecdotal so far that it is faster) or what besides that is different about
```
for i in range(0, 20):
for i in xrange(0, 20):
``` | > range creates a list, so if you do `range(1, 10000000)` it creates a list in memory with `10000000` elements.
>
> `xrange` ~~is a generator, so it~~ is a sequence object ~~is a~~ that evaluates lazily.
This is true, but in Python 3, range() behaves like the Python 2 xrange(). If you need to actually generate the list, you will need to do:
```
list(range(1,100))
``` |
What is the difference between range and xrange functions in Python 2.X? | 94,935 | 378 | 2008-09-18T17:52:51Z | 95,168 | 51 | 2008-09-18T18:13:44Z | [
"python",
"loops",
"range",
"python-2.x",
"xrange"
] | Apparently xrange is faster but I have no idea why it's faster (and no proof besides the anecdotal so far that it is faster) or what besides that is different about
```
for i in range(0, 20):
for i in xrange(0, 20):
``` | `xrange` only stores the range params and generates the numbers on demand. However the C implementation of Python currently restricts its args to C longs:
```
xrange(2**32-1, 2**32+1) # When long is 32 bits, OverflowError: Python int too large to convert to C long
range(2**32-1, 2**32+1) # OK --> [4294967295L, 4294967296L]
```
Note that in Python 3.0 there is only `range` and it behaves like the 2.x `xrange` but without the limitations on minimum and maximum end points. |
What is the difference between range and xrange functions in Python 2.X? | 94,935 | 378 | 2008-09-18T17:52:51Z | 95,549 | 10 | 2008-09-18T18:44:38Z | [
"python",
"loops",
"range",
"python-2.x",
"xrange"
] | Apparently xrange is faster but I have no idea why it's faster (and no proof besides the anecdotal so far that it is faster) or what besides that is different about
```
for i in range(0, 20):
for i in xrange(0, 20):
``` | > range creates a list, so if you do range(1, 10000000) it creates a list in memory with 10000000 elements.
> xrange is a generator, so it evaluates lazily.
This brings you two advantages:
1. You can iterate longer lists without getting a `MemoryError`.
2. As it resolves each number lazily, if you stop iteration early, you won't waste time creating the whole list. |
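The second advantage — not wasting time when iteration stops early — can be sketched with a short example (using Python 3's `range`, which behaves like the 2.x `xrange`; the function name here is invented for illustration):

```python
def find_first_divisible(divisor, limit):
    # The lazy sequence yields numbers one at a time, so stopping
    # early never materializes the remaining candidates in memory.
    for i in range(1, limit):
        if i % divisor == 0:
            return i
    return None

find_first_divisible(7, 10**9)  # stops after 7 iterations, not a billion
```

With an eager `range` in Python 2, the same call would first build a billion-element list before the loop even started.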
What is the difference between range and xrange functions in Python 2.X? | 94,935 | 378 | 2008-09-18T17:52:51Z | 97,530 | 77 | 2008-09-18T22:11:15Z | [
"python",
"loops",
"range",
"python-2.x",
"xrange"
] | Apparently xrange is faster but I have no idea why it's faster (and no proof besides the anecdotal so far that it is faster) or what besides that is different about
```
for i in range(0, 20):
for i in xrange(0, 20):
``` | Remember, use the timeit module to test which of two small snippets of code is faster!
```
$ python -m timeit 'for i in range(1000000):' ' pass'
10 loops, best of 3: 90.5 msec per loop
$ python -m timeit 'for i in xrange(1000000):' ' pass'
10 loops, best of 3: 51.1 msec per loop
```
Personally, I always use range(), unless I were dealing with *really* huge lists -- as you can see, time-wise, for a list of a million entries, the extra overhead is only 0.04 seconds. And as Corey points out, in Python 3.0 xrange will go away and range will give you nice iterator behaviour anyway. |
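The same measurement can also be made from Python code via the `timeit` module's API rather than the command line (a sketch; under Python 3 only `range` exists, so just one loop is timed here):

```python
import timeit

# Time the loop 10 times per run and keep the best of 3 runs,
# mirroring what `python -m timeit` reports on the command line.
stmt = "for i in range(100000):\n    pass"
best = min(timeit.repeat(stmt, repeat=3, number=10))
```

Taking the minimum of several repeats is the usual convention, since the fastest run is the one least disturbed by other system activity.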
Python implementation of Parsec? | 94,952 | 10 | 2008-09-18T17:54:39Z | 95,707 | 7 | 2008-09-18T18:58:12Z | [
"python",
"parsing",
"parsec",
"combinators"
] | I recently wrote a parser in Python using Ply (it's a python reimplementation of yacc). When I was almost done with the parser I discovered that the grammar I need to parse requires me to do some look up during parsing to inform the lexer. Without doing a look up to inform the lexer I cannot correctly parse the strings in the language.
Given that I can control the state of the lexer from the grammar rules I think I'll be solving my use case using a look up table in the parser module, but it may become too difficult to maintain/test. So I want to know about some of the other options.
In Haskell I would use Parsec, a library of parsing functions (known as combinators). Is there a Python implementation of Parsec? Or perhaps some other production quality library full of parsing functionality so I can build a context sensitive parser in Python?
EDIT: All my attempts at context free parsing have failed. For this reason, I don't expect ANTLR to be useful here. | I believe that [pyparsing](http://pyparsing.wikispaces.com/) is based on the same principles as parsec. |
Does an application-wide exception handler make sense? | 95,642 | 10 | 2008-09-18T18:52:30Z | 95,676 | 11 | 2008-09-18T18:55:38Z | [
"python",
"exception-handling"
] | Long story short, I have a substantial Python application that, among other things, does outcalls to "losetup", "mount", etc. on Linux. Essentially consuming system resources that must be released when complete.
If my application crashes, I want to ensure these system resources are properly released.
Does it make sense to do something like the following?
```
def main():
# TODO: main application entry point
pass
def cleanup():
# TODO: release system resources here
pass
if __name__ == "__main__":
try:
main()
except:
cleanup()
raise
```
Is this something that is typically done? Is there a better way? Perhaps the destructor in a singleton class? | I like top-level exception handlers in general (regardless of language). They're a great place to cleanup resources that may not be immediately related to resources consumed inside the method that throws the exception.
It's also a fantastic place to **log** those exceptions if you have such a framework in place. Top-level handlers will catch those bizarre exceptions you didn't plan on and let you correct them in the future, otherwise, you may never know about them at all.
Just be careful that your top-level handler doesn't throw exceptions! |
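A minimal sketch of such a top-level handler with logging (the `main` body below is a stand-in for the real application, not code from the question):

```python
import logging

def main():
    # Stand-in for the real application entry point.
    raise ValueError("simulated crash")

def run():
    try:
        main()
        return 0
    except Exception:
        # logging.exception records the full traceback, so the
        # "bizarre exceptions you didn't plan on" are never silent.
        logging.exception("Unhandled exception at top level")
        return 1

run()  # logs the traceback and returns a nonzero exit code
```

Catching `Exception` (rather than a bare `except:`) leaves `KeyboardInterrupt` and `SystemExit` alone, which is usually what a top-level handler wants.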
Does an application-wide exception handler make sense? | 95,642 | 10 | 2008-09-18T18:52:30Z | 95,682 | 7 | 2008-09-18T18:56:07Z | [
"python",
"exception-handling"
] | Long story short, I have a substantial Python application that, among other things, does outcalls to "losetup", "mount", etc. on Linux. Essentially consuming system resources that must be released when complete.
If my application crashes, I want to ensure these system resources are properly released.
Does it make sense to do something like the following?
```
def main():
# TODO: main application entry point
pass
def cleanup():
# TODO: release system resources here
pass
if __name__ == "__main__":
try:
main()
except:
cleanup()
raise
```
Is this something that is typically done? Is there a better way? Perhaps the destructor in a singleton class? | A destructor (as in a \_\_del\_\_ method) is a bad idea, as these are not guaranteed to be called. The atexit module is a safer approach, although these will still not fire if the Python interpreter crashes (rather than the Python application), or if os.\_exit() is used, or the process is killed aggressively, or the machine reboots. (Of course, the last item isn't an issue in your case.) If your process is crash-prone (it uses fickle third-party extension modules, for instance) you may want to do the cleanup in a simple parent process for more isolation.
If you aren't really worried, use the atexit module. |
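A sketch of the `atexit` approach (the resource names are invented for illustration, and the caveats above about hard crashes, `os._exit()`, and SIGKILL still apply):

```python
import atexit

open_resources = []

def acquire(name):
    # Stand-in for losetup/mount-style resource acquisition.
    open_resources.append(name)

def cleanup():
    # Runs at normal interpreter shutdown -- but not after a hard
    # crash, os._exit(), or an aggressive kill, as noted above.
    while open_resources:
        open_resources.pop()

atexit.register(cleanup)
acquire("/dev/loop0")
```

Registered functions run in reverse order of registration, so cleanup handlers for dependent resources can be layered.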
How do I use Django templates without the rest of Django? | 98,135 | 85 | 2008-09-18T23:55:21Z | 98,146 | 8 | 2008-09-18T23:56:36Z | [
"python",
"django",
"templates",
"django-templates",
"template-engine"
] | I want to use the Django template engine in my (Python) code, but I'm not building a Django-based web site. How do I use it without having a settings.py file (and others) and having to set the DJANGO\_SETTINGS\_MODULE environment variable?
If I run the following code:
```
>>> import django.template
>>> from django.template import Template, Context
>>> t = Template('My name is {{ my_name }}.')
```
I get:
```
ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.
``` | Any particular reason you want to use Django's templates? Both [Jinja](http://jinja.pocoo.org/) and [Genshi](http://genshi.edgewall.org/) are, in my opinion, superior.
---
If you really want to, then see the [Django documentation on `settings.py`](http://docs.djangoproject.com/en/dev/topics/settings/#topics-settings). Especially the section "Using settings without setting `DJANGO_SETTINGS_MODULE`". Use something like this:
```
from django.conf import settings
settings.configure (FOO='bar') # Your settings go here
``` |
How do I use Django templates without the rest of Django? | 98,135 | 85 | 2008-09-18T23:55:21Z | 98,178 | 117 | 2008-09-19T00:01:39Z | [
"python",
"django",
"templates",
"django-templates",
"template-engine"
] | I want to use the Django template engine in my (Python) code, but I'm not building a Django-based web site. How do I use it without having a settings.py file (and others) and having to set the DJANGO\_SETTINGS\_MODULE environment variable?
If I run the following code:
```
>>> import django.template
>>> from django.template import Template, Context
>>> t = Template('My name is {{ my_name }}.')
```
I get:
```
ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.
``` | The solution is simple. It's actually [well documented](http://docs.djangoproject.com/en/dev/ref/templates/api/#configuring-the-template-system-in-standalone-mode), but not too easy to find. (I had to dig around -- it didn't come up when I tried a few different Google searches.)
The following code works:
```
>>> from django.template import Template, Context
>>> from django.conf import settings
>>> settings.configure()
>>> t = Template('My name is {{ my_name }}.')
>>> c = Context({'my_name': 'Daryl Spitzer'})
>>> t.render(c)
u'My name is Daryl Spitzer.'
```
See the Django documentation (linked above) for a description of some of the settings you may want to define (as keyword arguments to configure). |
How do I use Django templates without the rest of Django? | 98,135 | 85 | 2008-09-18T23:55:21Z | 98,214 | 38 | 2008-09-19T00:08:41Z | [
"python",
"django",
"templates",
"django-templates",
"template-engine"
] | I want to use the Django template engine in my (Python) code, but I'm not building a Django-based web site. How do I use it without having a settings.py file (and others) and having to set the DJANGO\_SETTINGS\_MODULE environment variable?
If I run the following code:
```
>>> import django.template
>>> from django.template import Template, Context
>>> t = Template('My name is {{ my_name }}.')
```
I get:
```
ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.
[Jinja2](http://jinja.pocoo.org/2/) [syntax](http://jinja.pocoo.org/2/documentation/templates) is pretty much the same as Django's with very few differences, and you get a much more powerful template engine, which also compiles your template to bytecode (FAST!).
I use it for templating, including in Django itself, and it is very good. You can also easily write extensions if some feature you want is missing.
Here is some demonstration of the code generation:
```
>>> import jinja2
>>> print jinja2.Environment().compile('{% for row in data %}{{ row.name | upper }}{% endfor %}', raw=True)
from __future__ import division
from jinja2.runtime import LoopContext, Context, TemplateReference, Macro, Markup, TemplateRuntimeError, missing, concat, escape, markup_join, unicode_join
name = None
def root(context, environment=environment):
l_data = context.resolve('data')
t_1 = environment.filters['upper']
if 0: yield None
for l_row in l_data:
if 0: yield None
yield unicode(t_1(environment.getattr(l_row, 'name')))
blocks = {}
debug_info = '1=9'
``` |
How do I use Django templates without the rest of Django? | 98,135 | 85 | 2008-09-18T23:55:21Z | 109,380 | 7 | 2008-09-20T21:02:58Z | [
"python",
"django",
"templates",
"django-templates",
"template-engine"
] | I want to use the Django template engine in my (Python) code, but I'm not building a Django-based web site. How do I use it without having a settings.py file (and others) and having to set the DJANGO\_SETTINGS\_MODULE environment variable?
If I run the following code:
```
>>> import django.template
>>> from django.template import Template, Context
>>> t = Template('My name is {{ my_name }}.')
```
I get:
```
ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.
``` | I would also recommend jinja2. There is a [nice article](https://web.archive.org/web/20090421084229/http://lucumr.pocoo.org/2008/9/16/why-jinja-is-not-django-and-why-django-should-have-a-look-at-it) on `django` vs. `jinja2` that gives some in-detail information on why you should prefere the later. |
What is the best solution for database connection pooling in python? | 98,687 | 24 | 2008-09-19T01:36:03Z | 98,703 | 11 | 2008-09-19T01:38:19Z | [
"python",
"mysql",
"connection-pooling"
] | I have developed some custom DAO-like classes to meet some very specialized requirements for my project that is a server-side process that does not run inside any kind of framework.
The solution works great except that every time a new request is made, I open a new connection via MySQLdb.connect.
What is the best "drop in" solution to switch this over to using connection pooling in python? I am imagining something like the commons DBCP solution for Java.
The process is long running and has many threads that need to make requests, but not all at the same time... specifically they do quite a lot of work before brief bursts of writing out a chunk of their results.
Edited to add:
After some more searching I found [anitpool.py](http://furius.ca/antiorm/) which looks decent, but as I'm relatively new to python I guess I just want to make sure I'm not missing a more obvious/more idiomatic/better solution. | Wrap your connection class.
Set a limit on how many connections you make.
Return an unused connection.
Intercept close to free the connection.
Update:
I put something like this in dbpool.py:
```
import sqlalchemy.pool as pool
import MySQLdb as mysql
mysql = pool.manage(mysql)
``` |
What is the best solution for database connection pooling in python? | 98,687 | 24 | 2008-09-19T01:36:03Z | 98,906 | 12 | 2008-09-19T02:13:07Z | [
"python",
"mysql",
"connection-pooling"
] | I have developed some custom DAO-like classes to meet some very specialized requirements for my project that is a server-side process that does not run inside any kind of framework.
The solution works great except that every time a new request is made, I open a new connection via MySQLdb.connect.
What is the best "drop in" solution to switch this over to using connection pooling in python? I am imagining something like the commons DBCP solution for Java.
The process is long running and has many threads that need to make requests, but not all at the same time... specifically they do quite a lot of work before brief bursts of writing out a chunk of their results.
Edited to add:
After some more searching I found [anitpool.py](http://furius.ca/antiorm/) which looks decent, but as I'm relatively new to python I guess I just want to make sure I'm not missing a more obvious/more idiomatic/better solution. | IMO, the "more obvious/more idiomatic/better solution" is to use an existing ORM rather than invent DAO-like classes.
It appears to me that ORMs are more popular than "raw" SQL connections. Why? Because Python *is* OO, and the mapping from SQL row to object *is* absolutely essential. There aren't many cases where you deal with SQL rows that don't map to Python objects.
I think that [SQLAlchemy](http://www.sqlalchemy.org/) or [SQLObject](http://www.sqlobject.org/) (and the associated connection pooling) are the more idiomatic Pythonic solutions.
Pooling as a separate feature isn't very common because pure SQL (without object mapping) isn't very popular for the kind of complex, long-running processes that benefit from connection pooling. Yes, pure SQL *is* used, but it's always used in simpler or more controlled applications where pooling isn't helpful.
I think you might have two alternatives:
1. Revise your classes to use SQLAlchemy or SQLObject. While this appears painful at first [all that work wasted], you should be able to leverage all the design and thought and it's merely an exercise in adopting a widely-used ORM and pooling solution.
2. Roll your own simple connection pool using the algorithm you outlined -- a simple Set or List of connections that you cycle through. |
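Alternative 2 can be sketched with a thread-safe queue (a toy pool, not production code; the connection factory would be whatever opens your MySQLdb connection):

```python
import queue

class SimplePool:
    """Hand out idle connections and cap the total, per alternative 2."""

    def __init__(self, factory, size=5):
        # Pre-open a fixed number of connections; the bounded queue
        # is both the idle list and the size limit.
        self._idle = queue.Queue(maxsize=size)
        for _ in range(size):
            self._idle.put(factory())

    def acquire(self, timeout=None):
        # Blocks until a connection is free, enforcing the limit.
        return self._idle.get(timeout=timeout)

    def release(self, conn):
        # Return the connection to the pool instead of closing it.
        self._idle.put(conn)
```

Usage would be something like `pool = SimplePool(MySQLdb.connect)`, with every `acquire()` paired with a `release()` in a `finally` block so connections are never leaked.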
What is the best solution for database connection pooling in python? | 98,687 | 24 | 2008-09-19T01:36:03Z | 99,565 | 21 | 2008-09-19T04:11:38Z | [
"python",
"mysql",
"connection-pooling"
] | I have developed some custom DAO-like classes to meet some very specialized requirements for my project that is a server-side process that does not run inside any kind of framework.
The solution works great except that every time a new request is made, I open a new connection via MySQLdb.connect.
What is the best "drop in" solution to switch this over to using connection pooling in python? I am imagining something like the commons DBCP solution for Java.
The process is long running and has many threads that need to make requests, but not all at the same time... specifically they do quite a lot of work before brief bursts of writing out a chunk of their results.
Edited to add:
After some more searching I found [anitpool.py](http://furius.ca/antiorm/) which looks decent, but as I'm relatively new to python I guess I just want to make sure I'm not missing a more obvious/more idiomatic/better solution. | In MySQL?
I'd say don't bother with the connection pooling. They're often a source of trouble and with MySQL they're not going to bring you the performance advantage you're hoping for. This road may be a lot of effort to follow--politically--because there's so much best practices hand waving and textbook verbiage in this space about the advantages of connection pooling.
Connection pools are simply a bridge between the post-web era of stateless applications (e.g. HTTP protocol) and the pre-web era of stateful long-lived batch processing applications. Since connections were very expensive in pre-web databases (since no one used to care too much about how long a connection took to establish), post-web applications devised this connection pool scheme so that every hit didn't incur this huge processing overhead on the RDBMS.
Since MySQL is more of a web-era RDBMS, connections are extremely lightweight and fast. I have written many high volume web applications that don't use a connection pool at all for MySQL.
This is a complication you may benefit from doing without, so long as there isn't a political obstacle to overcome. |
What is a metaclass in Python? | 100,003 | 3,219 | 2008-09-19T06:10:46Z | 100,037 | 217 | 2008-09-19T06:26:10Z | [
"python",
"oop",
"metaclass",
"python-datamodel"
] | What are metaclasses? What do you use them for? | Metaclasses are the secret sauce that make 'class' work. The default metaclass for a new style object is called 'type'.
```
class type(object)
| type(object) -> the object's type
| type(name, bases, dict) -> a new type
```
Metaclasses take 3 args: '**name**', '**bases**' and '**dict**'.
Here is where the secret starts. Look for where name, bases and the dict come from in this example class definition.
```
class ThisIsTheName(Bases, Are, Here):
All_the_code_here
def doesIs(create, a):
dict
```
Let's define a metaclass that will demonstrate how '**class:**' calls it.
```
def test_metaclass(name, bases, dict):
print 'The Class Name is', name
print 'The Class Bases are', bases
print 'The dict has', len(dict), 'elems, the keys are', dict.keys()
return "yellow"
class TestName(object, None, int, 1):
__metaclass__ = test_metaclass
foo = 1
def baz(self, arr):
pass
print 'TestName = ', repr(TestName)
# output =>
The Class Name is TestName
The Class Bases are (<type 'object'>, None, <type 'int'>, 1)
The dict has 4 elems, the keys are ['baz', '__module__', 'foo', '__metaclass__']
TestName = 'yellow'
```
And now, an example that actually means something: this will automatically set each name listed in "attributes" as an attribute on the class, initialized to None.
```
def init_attributes(name, bases, dict):
if 'attributes' in dict:
for attr in dict['attributes']:
dict[attr] = None
return type(name, bases, dict)
class Initialised(object):
__metaclass__ = init_attributes
attributes = ['foo', 'bar', 'baz']
print 'foo =>', Initialised.foo
# output=>
foo => None
```
Note that the magic behaviour that 'Initialised' gains by having the metaclass `init_attributes` is not passed on to a subclass of Initialised.
Here is an even more concrete example, showing how you can subclass 'type' to make a metaclass that performs an action when the class is created. This is quite tricky:
```
class MetaSingleton(type):
instance = None
def __call__(cls, *args, **kw):
if cls.instance is None:
cls.instance = super(MetaSingleton, cls).__call__(*args, **kw)
return cls.instance
class Foo(object):
__metaclass__ = MetaSingleton
a = Foo()
b = Foo()
assert a is b
``` |
What is a metaclass in Python? | 100,003 | 3,219 | 2008-09-19T06:10:46Z | 100,059 | 56 | 2008-09-19T06:32:58Z | [
"python",
"oop",
"metaclass",
"python-datamodel"
] | What are metaclasses? What do you use them for? | I think the ONLamp introduction to metaclass programming is well written and gives a really good introduction to the topic despite being several years old already.
<http://www.onlamp.com/pub/a/python/2003/04/17/metaclasses.html>
In short: A class is a blueprint for the creation of an instance, a metaclass is a blueprint for the creation of a class. It can be easily seen that in Python classes need to be first-class objects too to enable this behavior.
I've never written one myself, but I think one of the nicest uses of metaclasses can be seen in the [Django framework](http://www.djangoproject.com/). The model classes use a metaclass approach to enable a declarative style of writing new models or form classes. While the metaclass is creating the class, all members get the possibility to customize the class itself.
* [Creating a new model](http://docs.djangoproject.com/en/dev/intro/tutorial01/#id3)
* [The metaclass enabling this](http://code.djangoproject.com/browser/django/trunk/django/db/models/base.py#L25)
The thing that's left to say is: If you don't know what metaclasses are, the probability that you **will not need them** is 99%. |
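The blueprint-of-a-blueprint relationship can be verified directly (shown here with Python 3's `metaclass=` syntax; the Python 2 code of the era used a `__metaclass__` attribute instead):

```python
class Meta(type):
    pass

class Blueprint(metaclass=Meta):
    pass

# An instance is made from a class; a class is made from a metaclass.
assert isinstance(Blueprint(), Blueprint)
assert isinstance(Blueprint, Meta)
assert isinstance(Meta, type)
```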
What is a metaclass in Python? | 100,003 | 3,219 | 2008-09-19T06:10:46Z | 100,091 | 82 | 2008-09-19T06:45:40Z | [
"python",
"oop",
"metaclass",
"python-datamodel"
] | What are metaclasses? What do you use them for? | One use for metaclasses is adding new properties and methods to an instance automatically.
For example, if you look at [Django models](http://docs.djangoproject.com/en/dev/topics/db/models/), their definition looks a bit confusing. It looks as if you are only defining class properties:
```
class Person(models.Model):
first_name = models.CharField(max_length=30)
last_name = models.CharField(max_length=30)
```
However, at runtime the Person objects are filled with all sorts of useful methods. See the [source](http://code.djangoproject.com/browser/django/trunk/django/db/models/base.py) for some amazing metaclassery. |
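A stripped-down sketch of the declarative-field idea, without Django (the names `Field`, `ModelMeta` and `_fields` are invented here; Django's real `ModelBase` metaclass does far more):

```python
class Field:
    """Placeholder for something like models.CharField."""

class ModelMeta(type):
    def __new__(mcs, name, bases, attrs):
        cls = super().__new__(mcs, name, bases, attrs)
        # Collect the declared Field attributes, which is conceptually
        # what Django's metaclass does while building the class.
        cls._fields = {k: v for k, v in attrs.items()
                       if isinstance(v, Field)}
        return cls

class Model(metaclass=ModelMeta):
    pass

class Person(Model):
    first_name = Field()
    last_name = Field()
```

The class body only *looks* like a list of properties; the metaclass intercepts it and can attach whatever machinery it likes to the finished class.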
What is a metaclass in Python? | 100,003 | 3,219 | 2008-09-19T06:10:46Z | 100,146 | 1,179 | 2008-09-19T07:01:58Z | [
"python",
"oop",
"metaclass",
"python-datamodel"
] | What are metaclasses? What do you use them for? | A metaclass is the class of a class. Like a class defines how an instance of the class behaves, a metaclass defines how a class behaves. A class is an instance of a metaclass.
[](http://i.stack.imgur.com/QQ0OK.png)
While in Python you can use arbitrary callables for metaclasses (like [Jerub](http://stackoverflow.com/questions/100003/what-is-a-metaclass-in-python/100037#100037) shows), the more useful approach is actually to make it an actual class itself. `type` is the usual metaclass in Python. In case you're wondering, yes, `type` is itself a class, and it is its own type. You won't be able to recreate something like `type` purely in Python, but Python cheats a little. To create your own metaclass in Python you really just want to subclass `type`.
A metaclass is most commonly used as a class-factory. Like you create an instance of the class by calling the class, Python creates a new class (when it executes the 'class' statement) by calling the metaclass. Combined with the normal `__init__` and `__new__` methods, metaclasses therefore allow you to do 'extra things' when creating a class, like registering the new class with some registry, or even replace the class with something else entirely.
When the `class` statement is executed, Python first executes the body of the `class` statement as a normal block of code. The resulting namespace (a dict) holds the attributes of the class-to-be. The metaclass is determined by looking at the baseclasses of the class-to-be (metaclasses are inherited), at the `__metaclass__` attribute of the class-to-be (if any) or the `__metaclass__` global variable. The metaclass is then called with the name, bases and attributes of the class to instantiate it.
However, metaclasses actually define the *type* of a class, not just a factory for it, so you can do much more with them. You can, for instance, define normal methods on the metaclass. These metaclass-methods are like classmethods, in that they can be called on the class without an instance, but they are also not like classmethods in that they cannot be called on an instance of the class. `type.__subclasses__()` is an example of a method on the `type` metaclass. You can also define the normal 'magic' methods, like `__add__`, `__iter__` and `__getattr__`, to implement or change how the class behaves.
Here's an aggregated example of the bits and pieces:
```
def make_hook(f):
"""Decorator to turn 'foo' method into '__foo__'"""
f.is_hook = 1
return f
class MyType(type):
def __new__(cls, name, bases, attrs):
if name.startswith('None'):
return None
# Go over attributes and see if they should be renamed.
newattrs = {}
for attrname, attrvalue in attrs.iteritems():
if getattr(attrvalue, 'is_hook', 0):
newattrs['__%s__' % attrname] = attrvalue
else:
newattrs[attrname] = attrvalue
return super(MyType, cls).__new__(cls, name, bases, newattrs)
def __init__(self, name, bases, attrs):
super(MyType, self).__init__(name, bases, attrs)
# classregistry.register(self, self.interfaces)
print "Would register class %s now." % self
def __add__(self, other):
class AutoClass(self, other):
pass
return AutoClass
# Alternatively, to autogenerate the classname as well as the class:
# return type(self.__name__ + other.__name__, (self, other), {})
def unregister(self):
# classregistry.unregister(self)
print "Would unregister class %s now." % self
class MyObject:
__metaclass__ = MyType
class NoneSample(MyObject):
pass
# Will print "NoneType None"
print type(NoneSample), repr(NoneSample)
class Example(MyObject):
def __init__(self, value):
self.value = value
@make_hook
def add(self, other):
return self.__class__(self.value + other.value)
# Will unregister the class
Example.unregister()
inst = Example(10)
# Will fail with an AttributeError
#inst.unregister()
print inst + inst
class Sibling(MyObject):
pass
ExampleSibling = Example + Sibling
# ExampleSibling is now a subclass of both Example and Sibling (with no
# content of its own) although it will believe it's called 'AutoClass'
print ExampleSibling
print ExampleSibling.__mro__
``` |
What is a metaclass in Python? | 100,003 | 3,219 | 2008-09-19T06:10:46Z | 6,428,779 | 70 | 2011-06-21T16:30:26Z | [
"python",
"oop",
"metaclass",
"python-datamodel"
] | What are metaclasses? What do you use them for? | Others have explained how metaclasses work and how they fit into the Python type system. Here's an example of what they can be used for. In a testing framework I wrote, I wanted to keep track of the order in which classes were defined, so that I could later instantiate them in this order. I found it easiest to do this using a metaclass.
```
class MyMeta(type):
counter = 0
def __init__(cls, name, bases, dic):
type.__init__(cls, name, bases, dic)
cls._order = MyMeta.counter
MyMeta.counter += 1
class MyType(object):
__metaclass__ = MyMeta
```
Anything that's a subclass of `MyType` then gets a class attribute `_order` that records the order in which the classes were defined. |
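In Python 3 spelling (`metaclass=` in place of `__metaclass__`; the subclass names below are invented), the recorded order can then be used like this:

```python
class MyMeta(type):
    counter = 0
    def __init__(cls, name, bases, dic):
        super().__init__(name, bases, dic)
        # Stamp each new class with its definition order.
        cls._order = MyMeta.counter
        MyMeta.counter += 1

class MyType(metaclass=MyMeta):
    pass

class First(MyType):
    pass

class Second(MyType):
    pass

# Recover definition order no matter how the classes were collected.
ordered = sorted([Second, First], key=lambda c: c._order)
```

The framework can then instantiate `ordered` front to back, guaranteeing the same sequence in which the test classes appeared in the source file.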
What is a metaclass in Python? | 100,003 | 3,219 | 2008-09-19T06:10:46Z | 6,581,949 | 4,510 | 2011-07-05T11:29:50Z | [
"python",
"oop",
"metaclass",
"python-datamodel"
] | What are metaclasses? What do you use them for? | # Classes as objects
Before understanding metaclasses, you need to master classes in Python. And Python has a very peculiar idea of what classes are, borrowed from the Smalltalk language.
In most languages, classes are just pieces of code that describe how to produce an object. That's kinda true in Python too:
```
>>> class ObjectCreator(object):
... pass
...
>>> my_object = ObjectCreator()
>>> print(my_object)
<__main__.ObjectCreator object at 0x8974f2c>
```
But classes are more than that in Python. Classes are objects too.
Yes, objects.
As soon as you use the keyword `class`, Python executes it and creates
an OBJECT. The instruction
```
>>> class ObjectCreator(object):
... pass
...
```
creates in memory an object with the name "ObjectCreator".
**This object (the class) is itself capable of creating objects (the instances),
and this is why it's a class**.
But still, it's an object, and therefore:
* you can assign it to a variable
* you can copy it
* you can add attributes to it
* you can pass it as a function parameter
e.g.:
```
>>> print(ObjectCreator) # you can print a class because it's an object
<class '__main__.ObjectCreator'>
>>> def echo(o):
... print(o)
...
>>> echo(ObjectCreator) # you can pass a class as a parameter
<class '__main__.ObjectCreator'>
>>> print(hasattr(ObjectCreator, 'new_attribute'))
False
>>> ObjectCreator.new_attribute = 'foo' # you can add attributes to a class
>>> print(hasattr(ObjectCreator, 'new_attribute'))
True
>>> print(ObjectCreator.new_attribute)
foo
>>> ObjectCreatorMirror = ObjectCreator # you can assign a class to a variable
>>> print(ObjectCreatorMirror.new_attribute)
foo
>>> print(ObjectCreatorMirror())
<__main__.ObjectCreator object at 0x8997b4c>
```
# Creating classes dynamically
Since classes are objects, you can create them on the fly, like any object.
First, you can create a class in a function using `class`:
```
>>> def choose_class(name):
... if name == 'foo':
... class Foo(object):
... pass
... return Foo # return the class, not an instance
... else:
... class Bar(object):
... pass
... return Bar
...
>>> MyClass = choose_class('foo')
>>> print(MyClass) # the function returns a class, not an instance
<class '__main__.Foo'>
>>> print(MyClass()) # you can create an object from this class
<__main__.Foo object at 0x89c6d4c>
```
But it's not so dynamic, since you still have to write the whole class yourself.
Since classes are objects, they must be generated by something.
When you use the `class` keyword, Python creates this object automatically. But as
with most things in Python, it gives you a way to do it manually.
Remember the function `type`? The good old function that lets you know what
type an object is:
```
>>> print(type(1))
<type 'int'>
>>> print(type("1"))
<type 'str'>
>>> print(type(ObjectCreator))
<type 'type'>
>>> print(type(ObjectCreator()))
<class '__main__.ObjectCreator'>
```
Well, [`type`](http://docs.python.org/2/library/functions.html#type) has a completely different ability, it can also create classes on the fly. `type` can take the description of a class as parameters,
and return a class.
(I know, it's silly that the same function can have two completely different uses according to the parameters you pass to it. It's an issue due to backwards
compatibility in Python)
`type` works this way:
```
type(name of the class,
tuple of the parent class (for inheritance, can be empty),
dictionary containing attributes names and values)
```
e.g.:
```
>>> class MyShinyClass(object):
... pass
```
can be created manually this way:
```
>>> MyShinyClass = type('MyShinyClass', (), {}) # returns a class object
>>> print(MyShinyClass)
<class '__main__.MyShinyClass'>
>>> print(MyShinyClass()) # create an instance with the class
<__main__.MyShinyClass object at 0x8997cec>
```
You'll notice that we use "MyShinyClass" as the name of the class
and as the variable to hold the class reference. They can be different,
but there is no reason to complicate things.
`type` accepts a dictionary to define the attributes of the class. So:
```
>>> class Foo(object):
... bar = True
```
Can be translated to:
```
>>> Foo = type('Foo', (), {'bar':True})
```
And used as a normal class:
```
>>> print(Foo)
<class '__main__.Foo'>
>>> print(Foo.bar)
True
>>> f = Foo()
>>> print(f)
<__main__.Foo object at 0x8a9b84c>
>>> print(f.bar)
True
```
And of course, you can inherit from it, so:
```
>>> class FooChild(Foo):
... pass
```
would be:
```
>>> FooChild = type('FooChild', (Foo,), {})
>>> print(FooChild)
<class '__main__.FooChild'>
>>> print(FooChild.bar) # bar is inherited from Foo
True
```
Eventually you'll want to add methods to your class. Just define a function
with the proper signature and assign it as an attribute.
```
>>> def echo_bar(self):
... print(self.bar)
...
>>> FooChild = type('FooChild', (Foo,), {'echo_bar': echo_bar})
>>> hasattr(Foo, 'echo_bar')
False
>>> hasattr(FooChild, 'echo_bar')
True
>>> my_foo = FooChild()
>>> my_foo.echo_bar()
True
```
And you can add even more methods after you dynamically create the class, just like adding methods to a normally created class object.
```
>>> def echo_bar_more(self):
... print('yet another method')
...
>>> FooChild.echo_bar_more = echo_bar_more
>>> hasattr(FooChild, 'echo_bar_more')
True
```
You see where we are going: in Python, classes are objects, and you can create a class on the fly, dynamically.
This is what Python does when you use the keyword `class`, and it does so by using a metaclass.
# What are metaclasses (finally)
Metaclasses are the 'stuff' that creates classes.
You define classes in order to create objects, right?
But we learned that Python classes are objects.
Well, metaclasses are what create these objects. They are the classes' classes,
you can picture them this way:
```
MyClass = MetaClass()
MyObject = MyClass()
```
You've seen that `type` lets you do something like this:
```
MyClass = type('MyClass', (), {})
```
It's because the function `type` is in fact a metaclass. `type` is the
metaclass Python uses to create all classes behind the scenes.
Now you wonder why the heck it is written in lowercase, and not `Type`?
Well, I guess it's a matter of consistency with `str`, the class that creates
string objects, and `int`, the class that creates integer objects. `type` is
just the class that creates class objects.
You see that by checking the `__class__` attribute.
Everything, and I mean everything, is an object in Python. That includes ints,
strings, functions and classes. All of them are objects. And all of them have
been created from a class:
```
>>> age = 35
>>> age.__class__
<type 'int'>
>>> name = 'bob'
>>> name.__class__
<type 'str'>
>>> def foo(): pass
>>> foo.__class__
<type 'function'>
>>> class Bar(object): pass
>>> b = Bar()
>>> b.__class__
<class '__main__.Bar'>
```
Now, what is the `__class__` of any `__class__` ?
```
>>> age.__class__.__class__
<type 'type'>
>>> name.__class__.__class__
<type 'type'>
>>> foo.__class__.__class__
<type 'type'>
>>> b.__class__.__class__
<type 'type'>
```
So, a metaclass is just the stuff that creates class objects.
You can call it a 'class factory' if you wish.
`type` is the built-in metaclass Python uses, but of course, you can create your
own metaclass.
# The [`__metaclass__`](http://docs.python.org/2/reference/datamodel.html?highlight=__metaclass__#__metaclass__) attribute
You can add a `__metaclass__` attribute when you write a class:
```
class Foo(object):
    __metaclass__ = something...
    [...]
```
If you do so, Python will use the metaclass to create the class `Foo`.
Careful, it's tricky.
You write `class Foo(object)` first, but the class object `Foo` is not created
in memory yet.
Python will look for `__metaclass__` in the class definition. If it finds it,
it will use it to create the object class `Foo`. If it doesn't, it will use
`type` to create the class.
Read that several times.
When you do:
```
class Foo(Bar):
    pass
```
Python does the following:
Is there a `__metaclass__` attribute in `Foo`?
If yes, create in memory a class object (I said a class object, stay with me here), with the name `Foo` by using what is in `__metaclass__`.
If Python can't find `__metaclass__`, it will look for a `__metaclass__` at the MODULE level, and try to do the same (but only for classes that don't inherit anything, basically old-style classes).
Then if it can't find any `__metaclass__` at all, it will use the `Bar`'s (the first parent) own metaclass (which might be the default `type`) to create the class object.
Be careful here that the `__metaclass__` attribute will not be inherited, the metaclass of the parent (`Bar.__class__`) will be. If `Bar` used a `__metaclass__` attribute that created `Bar` with `type()` (and not `type.__new__()`), the subclasses will not inherit that behavior.
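A note for Python 3 readers: `__metaclass__` is gone there, and only the "use the parent's metaclass" part of this lookup survives. It's easy to check directly (Python 3 syntax):

```python
class Meta(type):
    pass

class Base(metaclass=Meta):
    pass

class Child(Base):   # no metaclass given: Python uses type(Base), i.e. Meta
    pass

print(type(Child))   # <class '__main__.Meta'>
```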
Now the big question is, what can you put in `__metaclass__` ?
The answer is: something that can create a class.
And what can create a class? `type`, or anything that subclasses or uses it.
# Custom metaclasses
The main purpose of a metaclass is to change the class automatically,
when it's created.
You usually do this for APIs, where you want to create classes matching the
current context.
Imagine a stupid example, where you decide that all classes in your module
should have their attributes written in uppercase. There are several ways to
do this, but one way is to set `__metaclass__` at the module level.
This way, all classes of this module will be created using this metaclass,
and we just have to tell the metaclass to turn all attributes to uppercase.
Luckily, `__metaclass__` can actually be any callable, it doesn't need to be a
formal class (I know, something with 'class' in its name doesn't need to be
a class, go figure... but it's helpful).
So we will start with a simple example, by using a function.
```
# the metaclass will automatically get passed the same argument
# that you usually pass to `type`
def upper_attr(future_class_name, future_class_parents, future_class_attr):
    """
    Return a class object, with its attributes turned
    into uppercase.
    """
    # pick up any attribute that doesn't start with '__' and uppercase it
    uppercase_attr = {}
    for name, val in future_class_attr.items():
        if not name.startswith('__'):
            uppercase_attr[name.upper()] = val
        else:
            uppercase_attr[name] = val

    # let `type` do the class creation
    return type(future_class_name, future_class_parents, uppercase_attr)
__metaclass__ = upper_attr # this will affect all classes in the module
class Foo(): # global __metaclass__ won't work with "object" though
    # but we can define __metaclass__ here instead to affect only this class
    # and this will work with "object" children
    bar = 'bip'
print(hasattr(Foo, 'bar'))
# Out: False
print(hasattr(Foo, 'BAR'))
# Out: True
f = Foo()
print(f.BAR)
# Out: 'bip'
```
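Heads up: the module-level `__metaclass__` used above is Python 2 only; Python 3 ignores it entirely. In Python 3 you pass the same function per class with the `metaclass` keyword in the class header:

```python
def upper_attr(clsname, bases, dct):
    # uppercase every attribute name that isn't a dunder
    uppercase = {name if name.startswith('__') else name.upper(): val
                 for name, val in dct.items()}
    return type(clsname, bases, uppercase)

class Foo(metaclass=upper_attr):
    bar = 'bip'

print(hasattr(Foo, 'bar'))  # False
print(hasattr(Foo, 'BAR'))  # True
```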
Now, let's do exactly the same, but using a real class for a metaclass:
```
# remember that `type` is actually a class like `str` and `int`
# so you can inherit from it
class UpperAttrMetaclass(type):
    # __new__ is the method called before __init__
    # it's the method that creates the object and returns it
    # while __init__ just initializes the object passed as parameter
    # you rarely use __new__, except when you want to control how the object
    # is created.
    # here the created object is the class, and we want to customize it
    # so we override __new__
    # you can do some stuff in __init__ too if you wish
    # some advanced use involves overriding __call__ as well, but we won't
    # see this
    def __new__(upperattr_metaclass, future_class_name,
                future_class_parents, future_class_attr):
        uppercase_attr = {}
        for name, val in future_class_attr.items():
            if not name.startswith('__'):
                uppercase_attr[name.upper()] = val
            else:
                uppercase_attr[name] = val
        return type(future_class_name, future_class_parents, uppercase_attr)
```
But this is not really OOP. We call `type` directly and we don't override
or call the parent `__new__`. Let's do it:
```
class UpperAttrMetaclass(type):
    def __new__(upperattr_metaclass, future_class_name,
                future_class_parents, future_class_attr):
        uppercase_attr = {}
        for name, val in future_class_attr.items():
            if not name.startswith('__'):
                uppercase_attr[name.upper()] = val
            else:
                uppercase_attr[name] = val

        # reuse the type.__new__ method
        # this is basic OOP, nothing magic in there
        return type.__new__(upperattr_metaclass, future_class_name,
                            future_class_parents, uppercase_attr)
```
You may have noticed the extra argument `upperattr_metaclass`. There is
nothing special about it: `__new__` always receives the class it's defined in as its first parameter, just like `self` for ordinary methods (which receive the instance as first parameter) or the defining class for class methods.
Of course, the names I used here are long for the sake of clarity, but like
for `self`, all the arguments have conventional names. So a real production
metaclass would look like this:
```
class UpperAttrMetaclass(type):
    def __new__(cls, clsname, bases, dct):
        uppercase_attr = {}
        for name, val in dct.items():
            if not name.startswith('__'):
                uppercase_attr[name.upper()] = val
            else:
                uppercase_attr[name] = val
        return type.__new__(cls, clsname, bases, uppercase_attr)
```
We can make it even cleaner by using `super`, which will ease inheritance (because yes, you can have metaclasses, inheriting from metaclasses, inheriting from type):
```
class UpperAttrMetaclass(type):
    def __new__(cls, clsname, bases, dct):
        uppercase_attr = {}
        for name, val in dct.items():
            if not name.startswith('__'):
                uppercase_attr[name.upper()] = val
            else:
                uppercase_attr[name] = val
        return super(UpperAttrMetaclass, cls).__new__(cls, clsname, bases, uppercase_attr)
```
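In Python 3 syntax (where you write `metaclass=...` in the class header instead of setting `__metaclass__`), the final version is used like this:

```python
class UpperAttrMetaclass(type):
    def __new__(cls, clsname, bases, dct):
        uppercase_attr = {}
        for name, val in dct.items():
            if not name.startswith('__'):
                uppercase_attr[name.upper()] = val
            else:
                uppercase_attr[name] = val
        return super().__new__(cls, clsname, bases, uppercase_attr)

class Foo(metaclass=UpperAttrMetaclass):
    bar = 'bip'

print(Foo.BAR)              # bip
print(hasattr(Foo, 'bar'))  # False
```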
That's it. There is really nothing more about metaclasses.
The reason behind the complexity of the code using metaclasses is not because
of metaclasses, it's because you usually use metaclasses to do twisted stuff
relying on introspection, manipulating inheritance, vars such as `__dict__`, etc.
Indeed, metaclasses are especially useful to do black magic, and therefore
complicated stuff. But by themselves, they are simple:
* intercept a class creation
* modify the class
* return the modified class
# Why would you use classes instead of functions for metaclasses?
Since `__metaclass__` can accept any callable, why would you use a class
since it's obviously more complicated?
There are several reasons to do so:
* The intention is clear. When you read `UpperAttrMetaclass(type)`, you know
what's going to follow
* You can use OOP. A metaclass can inherit from a metaclass and override parent methods. Metaclasses can even use metaclasses.
* You can structure your code better. You never use metaclasses for something as
trivial as the above example. It's usually for something complicated. Having the
ability to make several methods and group them in one class is very useful
to make the code easier to read.
* You can hook on `__new__`, `__init__` and `__call__`. Which will allow
you to do different stuff. Even if usually you can do it all in `__new__`,
some people are just more comfortable using `__init__`.
* These are called metaclasses, damn it! It must mean something!
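As an example of the `__call__` hook: since calling a class goes through its metaclass, you can intercept instantiation. A classic (illustrative, not the only) use is a singleton:

```python
class Singleton(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        # Config(...) goes through here before __new__/__init__ run
        if cls not in Singleton._instances:
            Singleton._instances[cls] = super().__call__(*args, **kwargs)
        return Singleton._instances[cls]

class Config(metaclass=Singleton):
    pass

a, b = Config(), Config()
print(a is b)  # True
```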
# Why would you use metaclasses?
Now the big question. Why would you use some obscure, error-prone feature?
Well, usually you don't:
> Metaclasses are deeper magic that
> 99% of users should never worry about.
> If you wonder whether you need them,
> you don't (the people who actually
> need them know with certainty that
> they need them, and don't need an
> explanation about why).
*Python Guru Tim Peters*
The main use case for a metaclass is creating an API. A typical example of this is the Django ORM.
It allows you to define something like this:
```
class Person(models.Model):
    name = models.CharField(max_length=30)
    age = models.IntegerField()
```
But if you do this:
```
guy = Person(name='bob', age='35')
print(guy.age)
```
It won't return an `IntegerField` object. It will return an `int`, and can even take it directly from the database.
This is possible because `models.Model` defines `__metaclass__` and
it uses some magic that will turn the `Person` you just defined with simple statements
into a complex hook to a database field.
Django makes something complex look simple by exposing a simple API
and using metaclasses, recreating code from this API to do the real job
behind the scenes.
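This is not Django's actual code, but a rough sketch of the idea: a metaclass collects the `Field` declarations at class creation so the model can convert values later (all names here are illustrative).

```python
class Field(object):
    def to_python(self, value):
        raise NotImplementedError

class IntegerField(Field):
    def to_python(self, value):
        return int(value)

class ModelMeta(type):
    def __new__(cls, name, bases, dct):
        # gather the Field declarations so instances can convert values
        dct['_fields'] = {k: v for k, v in dct.items() if isinstance(v, Field)}
        return super().__new__(cls, name, bases, dct)

class Model(metaclass=ModelMeta):
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            field = self._fields.get(key)
            # the instance attribute shadows the Field declared on the class
            self.__dict__[key] = field.to_python(value) if field else value

class Person(Model):
    age = IntegerField()

guy = Person(age='35')
print(guy.age)  # 35, an int -- not an IntegerField
```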
# The last word
First, you know that classes are objects that can create instances.
Well in fact, classes are themselves instances. Of metaclasses.
```
>>> class Foo(object): pass
>>> id(Foo)
142630324
```
Everything is an object in Python, and they are all either instances of classes
or instances of metaclasses.
Except for `type`.
`type` is actually its own metaclass. This is not something you could
reproduce in pure Python, and is done by cheating a little bit at the implementation
level.
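You can see the loop closing on itself:

```python
class Bar(object):
    pass

print(type(Bar) is type)    # True: a class is an instance of type
print(type(type) is type)   # True: type is an instance of itself
```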
Secondly, metaclasses are complicated. You may not want to use them for
very simple class alterations. You can change classes by using two different techniques:
* [monkey patching](http://en.wikipedia.org/wiki/Monkey_patch)
* class decorators
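For the record, the uppercase example from earlier fits in a class decorator just fine (a sketch, not the only way to write it):

```python
def upper_attrs(cls):
    # move every public attribute to an uppercased name
    for name in list(vars(cls)):
        if not name.startswith('__') and name != name.upper():
            setattr(cls, name.upper(), getattr(cls, name))
            delattr(cls, name)
    return cls

@upper_attrs
class Foo(object):
    bar = 'bip'

print(Foo.BAR)              # bip
print(hasattr(Foo, 'bar'))  # False
```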
99% of the time you need class alteration, you are better off using these.
But 99% of the time, you don't need class alteration at all. |
What is a metaclass in Python? | 100,003 | 3,219 | 2008-09-19T06:10:46Z | 21,999,253 | 20 | 2014-02-24T21:20:49Z | [
"python",
"oop",
"metaclass",
"python-datamodel"
] | What are metaclasses? What do you use them for? | A metaclass is a class that tells how (some) other class should be created.
This is a case where I saw metaclass as a solution to my problem:
I had a really complicated problem, that probably could have been solved differently, but I chose to solve it using a metaclass. Because of the complexity, it is one of the few modules I have written where the comments in the module surpass the amount of code that has been written. Here it is...
```
#!/usr/bin/env python
# Copyright (C) 2013-2014 Craig Phillips. All rights reserved.
# This requires some explaining. The point of this metaclass exercise is to
# create a static abstract class that is in one way or another, dormant until
# queried. I experimented with creating a singleton on import, but that did
# not quite behave how I wanted it to. See now here, we are creating a class
# called GsyncOptions, that on import, will do nothing except state that its
# class creator is GsyncOptionsType. This means, docopt doesn't parse any
# of the help document, nor does it start processing command line options.
# So importing this module becomes really efficient. The complicated bit
# comes from requiring the GsyncOptions class to be static. By that, I mean
# any property on it, may or may not exist, since they are not statically
# defined; so I can't simply just define the class with a whole bunch of
# properties that are @property @staticmethods.
#
# So here's how it works:
#
# Executing 'from libgsync.options import GsyncOptions' does nothing more
# than load up this module, define the Type and the Class and import them
# into the callers namespace. Simple.
#
# Invoking 'GsyncOptions.debug' for the first time, or any other property
# causes the __metaclass__ __getattr__ method to be called, since the class
# is not instantiated as a class instance yet. The __getattr__ method on
# the type then initialises the class (GsyncOptions) via the __initialiseClass
# method. This is the first and only time the class will actually have its
# dictionary statically populated. The docopt module is invoked to parse the
# usage document and generate command line options from it. These are then
# paired with their defaults and what's in sys.argv. After all that, we
# setup some dynamic properties that could not be defined by their name in
# the usage, before everything is then transplanted onto the actual class
# object (or static class GsyncOptions).
#
# Another piece of magic, is to allow command line options to be set in
# in their native form and be translated into argparse style properties.
#
# Finally, the GsyncListOptions class is actually where the options are
# stored. This only acts as a mechanism for storing options as lists, to
# allow aggregation of duplicate options or options that can be specified
# multiple times. The __getattr__ call hides this by default, returning the
# last item in a property's list. However, if the entire list is required,
# calling the 'list()' method on the GsyncOptions class, returns a reference
# to the GsyncListOptions class, which contains all of the same properties
# but as lists and without the duplication of having them as both lists and
# static singleton values.
#
# So this actually means that GsyncOptions is actually a static proxy class...
#
# ...And all this is neatly hidden within a closure for safe keeping.
def GetGsyncOptionsType():
    class GsyncListOptions(object):
        __initialised = False

    class GsyncOptionsType(type):
        def __initialiseClass(cls):
            if GsyncListOptions._GsyncListOptions__initialised: return

            from docopt import docopt
            from libgsync.options import doc
            from libgsync import __version__

            options = docopt(
                doc.__doc__ % __version__,
                version = __version__,
                options_first = True
            )

            paths = options.pop('<path>', None)
            setattr(cls, "destination_path", paths.pop() if paths else None)
            setattr(cls, "source_paths", paths)
            setattr(cls, "options", options)

            for k, v in options.iteritems():
                setattr(cls, k, v)

            GsyncListOptions._GsyncListOptions__initialised = True

        def list(cls):
            return GsyncListOptions

        def __getattr__(cls, name):
            cls.__initialiseClass()
            return getattr(GsyncListOptions, name)[-1]

        def __setattr__(cls, name, value):
            # Substitute option names: --an-option-name becomes an_option_name
            import re
            name = re.sub(r'^__', "", re.sub(r'-', "_", name))
            listvalue = []

            # Ensure value is converted to a list type for GsyncListOptions
            if isinstance(value, list):
                if value:
                    listvalue = [] + value
                else:
                    listvalue = [ None ]
            else:
                listvalue = [ value ]

            type.__setattr__(GsyncListOptions, name, listvalue)

    # Cleanup this module to prevent tinkering.
    import sys
    module = sys.modules[__name__]
    del module.__dict__['GetGsyncOptionsType']

    return GsyncOptionsType

# Our singleton abstract proxy class.
class GsyncOptions(object):
    __metaclass__ = GetGsyncOptionsType()
``` |
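The core trick in that module — a metaclass `__getattr__` that lazily initialises the class on first attribute access — can be sketched in a few lines (Python 3 syntax, with a stand-in dict where the real code runs docopt):

```python
class LazyInitMeta(type):
    def __getattr__(cls, name):
        # only called when normal class attribute lookup fails
        if '_options' not in cls.__dict__:
            # stand-in for the docopt parsing done in the real module
            cls._options = {'debug': True}
        try:
            return cls._options[name]
        except KeyError:
            raise AttributeError(name)

class Options(metaclass=LazyInitMeta):
    pass

# nothing is parsed at import time; the first access triggers initialisation
print(Options.debug)  # True
```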