| title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
What is the difference between @staticmethod and @classmethod in Python? | 136,097 | 1,583 | 2008-09-25T21:01:57Z | 30,329,887 | 14 | 2015-05-19T15:27:13Z | [
"python"
] | What is the difference between a function decorated with [`@staticmethod`](http://docs.python.org/library/functions.html#staticmethod) and one decorated with [`@classmethod`](http://docs.python.org/library/functions.html#classmethod)? | I think a better question is "When would you use @classmethod vs @staticmethod?"
@classmethod gives you easy access to private members that are associated with the class definition. This is a great way to implement singletons, or factory classes that control how many instances of their objects exist.
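For instance (a hypothetical `Counter` class, purely to illustrate the class-level access — not from the original answer):

```python
class Counter(object):
    _instances = 0  # class-level state shared by all instances

    def __init__(self):
        Counter._instances += 1

    @classmethod
    def instance_count(cls):
        # receives the class itself, so it can reach class-level members
        return cls._instances

    @staticmethod
    def describe():
        # receives no implicit argument; could just as well be a module-level function
        return "counts its own instances"

Counter()
Counter()
count = Counter.instance_count()  # 2
```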
@staticmethod provides only marginal performance gains, and I have yet to see a productive use of a static method within a class that couldn't be achieved just as well as a standalone function outside the class. |
Get last n lines of a file with Python, similar to tail | 136,168 | 122 | 2008-09-25T21:11:11Z | 136,280 | 57 | 2008-09-25T21:30:14Z | [
"python",
"file",
"file-io",
"tail",
"logfiles"
] | I'm writing a log file viewer for a web application and for that I want to paginate through the lines of the log file. The items in the file are line based with the newest item on the bottom.
So I need a `tail()` method that can read `n` lines from the bottom and supports an offset. What I came up with looks like this:
```
def tail(f, n, offset=0):
"""Reads a n lines from f with an offset of offset lines."""
avg_line_length = 74
to_read = n + offset
while 1:
try:
f.seek(-(avg_line_length * to_read), 2)
except IOError:
# woops. apparently file is smaller than what we want
# to step back, go to the beginning instead
f.seek(0)
pos = f.tell()
lines = f.read().splitlines()
if len(lines) >= to_read or pos == 0:
return lines[-to_read:offset and -offset or None]
avg_line_length *= 1.3
```
Is this a reasonable approach? What is the recommended way to tail log files with offsets? | Assumes a unix-like system.
```
import os
def tail(f, n, offset=0):
    stdin, stdout = os.popen2("tail -n " + str(n + offset) + " " + f)  # f is a file name here
    stdin.close()
    lines = stdout.readlines()
    stdout.close()
    return lines[:-offset or None]
``` |
Get last n lines of a file with Python, similar to tail | 136,168 | 122 | 2008-09-25T21:11:11Z | 136,368 | 97 | 2008-09-25T21:43:18Z | [
"python",
"file",
"file-io",
"tail",
"logfiles"
] | I'm writing a log file viewer for a web application and for that I want to paginate through the lines of the log file. The items in the file are line based with the newest item on the bottom.
So I need a `tail()` method that can read `n` lines from the bottom and supports an offset. What I came up with looks like this:
```
def tail(f, n, offset=0):
"""Reads a n lines from f with an offset of offset lines."""
avg_line_length = 74
to_read = n + offset
while 1:
try:
f.seek(-(avg_line_length * to_read), 2)
except IOError:
# woops. apparently file is smaller than what we want
# to step back, go to the beginning instead
f.seek(0)
pos = f.tell()
lines = f.read().splitlines()
if len(lines) >= to_read or pos == 0:
return lines[-to_read:offset and -offset or None]
avg_line_length *= 1.3
```
Is this a reasonable approach? What is the recommended way to tail log files with offsets? | This may be quicker than yours. Makes no assumptions about line length. Backs through the file one block at a time till it's found the right number of '\n' characters.
```
def tail( f, lines=20 ):
total_lines_wanted = lines
BLOCK_SIZE = 1024
f.seek(0, 2)
block_end_byte = f.tell()
lines_to_go = total_lines_wanted
block_number = -1
blocks = [] # blocks of size BLOCK_SIZE, in reverse order starting
# from the end of the file
while lines_to_go > 0 and block_end_byte > 0:
if (block_end_byte - BLOCK_SIZE > 0):
# read the last block we haven't yet read
f.seek(block_number*BLOCK_SIZE, 2)
blocks.append(f.read(BLOCK_SIZE))
else:
            # file too small, start from the beginning
f.seek(0,0)
# only read what was not read
blocks.append(f.read(block_end_byte))
lines_found = blocks[-1].count('\n')
lines_to_go -= lines_found
block_end_byte -= BLOCK_SIZE
block_number -= 1
all_read_text = ''.join(reversed(blocks))
return '\n'.join(all_read_text.splitlines()[-total_lines_wanted:])
```
I don't like tricky assumptions about line length when -- as a practical matter -- you can never know things like that.
Generally, this will locate the last 20 lines on the first or second pass through the loop. If your 74-character estimate is actually accurate, make the block size 2048 and you'll tail 20 lines almost immediately.
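As a rough, self-contained illustration of the block-at-a-time idea (a simplified sketch on a temporary file, not the exact code above):

```python
import os
import tempfile

def tail_lines(path, n, block_size=1024):
    """Read fixed-size blocks from the end until n newlines have been seen."""
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        end = f.tell()
        data = b""
        # keep prepending blocks until we have more than n newlines (or hit the start)
        while data.count(b"\n") <= n and end > 0:
            start = max(0, end - block_size)
            f.seek(start)
            data = f.read(end - start) + data
            end = start
    return data.splitlines()[-n:]

with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as tmp:
    tmp.write("".join("line %d\n" % i for i in range(100)))
last = tail_lines(tmp.name, 3)  # [b'line 97', b'line 98', b'line 99']
os.unlink(tmp.name)
```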
Also, I don't burn a lot of brain calories trying to finesse alignment with physical OS blocks. Using these high-level I/O packages, I doubt you'll see any performance consequence of trying to align on OS block boundaries. If you use lower-level I/O, then you might see a speedup. |
Get last n lines of a file with Python, similar to tail | 136,168 | 122 | 2008-09-25T21:11:11Z | 280,083 | 19 | 2008-11-11T05:17:18Z | [
"python",
"file",
"file-io",
"tail",
"logfiles"
] | I'm writing a log file viewer for a web application and for that I want to paginate through the lines of the log file. The items in the file are line based with the newest item on the bottom.
So I need a `tail()` method that can read `n` lines from the bottom and supports an offset. What I came up with looks like this:
```
def tail(f, n, offset=0):
"""Reads a n lines from f with an offset of offset lines."""
avg_line_length = 74
to_read = n + offset
while 1:
try:
f.seek(-(avg_line_length * to_read), 2)
except IOError:
# woops. apparently file is smaller than what we want
# to step back, go to the beginning instead
f.seek(0)
pos = f.tell()
lines = f.read().splitlines()
if len(lines) >= to_read or pos == 0:
return lines[-to_read:offset and -offset or None]
avg_line_length *= 1.3
```
Is this a reasonable approach? What is the recommended way to tail log files with offsets? | If reading the whole file is acceptable then use a deque.
```
from collections import deque
deque(f, maxlen=n)
```
Prior to 2.6, deques didn't have a maxlen option, but it's easy enough to implement.
```
import itertools
def maxque(items, size):
items = iter(items)
q = deque(itertools.islice(items, size))
for item in items:
del q[0]
q.append(item)
return q
```
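For example, with an in-memory file object (Python 2.6+/3.x):

```python
import io
from collections import deque

# deque consumes the file's line iterator, keeping only the last maxlen lines
f = io.StringIO("alpha\nbeta\ngamma\ndelta\n")
last_two = list(deque(f, maxlen=2))  # ['gamma\n', 'delta\n']
```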
If it's a requirement to read the file from the end, then use a galloping (a.k.a. exponential) search.
```
def tail(f, n):
assert n >= 0
pos, lines = n+1, []
while len(lines) <= n:
try:
f.seek(-pos, 2)
except IOError:
f.seek(0)
break
finally:
lines = list(f)
pos *= 2
    return lines[-n:] if n else []
``` |
Get last n lines of a file with Python, similar to tail | 136,168 | 122 | 2008-09-25T21:11:11Z | 692,616 | 17 | 2009-03-28T11:13:44Z | [
"python",
"file",
"file-io",
"tail",
"logfiles"
] | I'm writing a log file viewer for a web application and for that I want to paginate through the lines of the log file. The items in the file are line based with the newest item on the bottom.
So I need a `tail()` method that can read `n` lines from the bottom and supports an offset. What I came up with looks like this:
```
def tail(f, n, offset=0):
"""Reads a n lines from f with an offset of offset lines."""
avg_line_length = 74
to_read = n + offset
while 1:
try:
f.seek(-(avg_line_length * to_read), 2)
except IOError:
# woops. apparently file is smaller than what we want
# to step back, go to the beginning instead
f.seek(0)
pos = f.tell()
lines = f.read().splitlines()
if len(lines) >= to_read or pos == 0:
return lines[-to_read:offset and -offset or None]
avg_line_length *= 1.3
```
Is this a reasonable approach? What is the recommended way to tail log files with offsets? | The code I ended up using. I think this is the best so far:
```
def tail(f, n, offset=None):
"""Reads a n lines from f with an offset of offset lines. The return
value is a tuple in the form ``(lines, has_more)`` where `has_more` is
an indicator that is `True` if there are more lines in the file.
"""
avg_line_length = 74
to_read = n + (offset or 0)
while 1:
try:
f.seek(-(avg_line_length * to_read), 2)
except IOError:
# woops. apparently file is smaller than what we want
# to step back, go to the beginning instead
f.seek(0)
pos = f.tell()
lines = f.read().splitlines()
if len(lines) >= to_read or pos == 0:
return lines[-to_read:offset and -offset or None], \
len(lines) > to_read or pos > 0
avg_line_length *= 1.3
``` |
Get last n lines of a file with Python, similar to tail | 136,168 | 122 | 2008-09-25T21:11:11Z | 6,813,975 | 11 | 2011-07-25T09:18:10Z | [
"python",
"file",
"file-io",
"tail",
"logfiles"
] | I'm writing a log file viewer for a web application and for that I want to paginate through the lines of the log file. The items in the file are line based with the newest item on the bottom.
So I need a `tail()` method that can read `n` lines from the bottom and supports an offset. What I came up with looks like this:
```
def tail(f, n, offset=0):
"""Reads a n lines from f with an offset of offset lines."""
avg_line_length = 74
to_read = n + offset
while 1:
try:
f.seek(-(avg_line_length * to_read), 2)
except IOError:
# woops. apparently file is smaller than what we want
# to step back, go to the beginning instead
f.seek(0)
pos = f.tell()
lines = f.read().splitlines()
if len(lines) >= to_read or pos == 0:
return lines[-to_read:offset and -offset or None]
avg_line_length *= 1.3
```
Is this a reasonable approach? What is the recommended way to tail log files with offsets? | Simple and fast solution with mmap:
```
import mmap
import os
def tail(filename, n):
"""Returns last n lines from the filename. No exception handling"""
size = os.path.getsize(filename)
with open(filename, "rb") as f:
# for Windows the mmap parameters are different
fm = mmap.mmap(f.fileno(), 0, mmap.MAP_SHARED, mmap.PROT_READ)
try:
for i in xrange(size - 1, -1, -1):
if fm[i] == '\n':
n -= 1
if n == -1:
break
return fm[i + 1 if i else 0:].splitlines()
finally:
fm.close()
``` |
Get last n lines of a file with Python, similar to tail | 136,168 | 122 | 2008-09-25T21:11:11Z | 7,047,765 | 18 | 2011-08-13T00:43:38Z | [
"python",
"file",
"file-io",
"tail",
"logfiles"
] | I'm writing a log file viewer for a web application and for that I want to paginate through the lines of the log file. The items in the file are line based with the newest item on the bottom.
So I need a `tail()` method that can read `n` lines from the bottom and supports an offset. What I came up with looks like this:
```
def tail(f, n, offset=0):
"""Reads a n lines from f with an offset of offset lines."""
avg_line_length = 74
to_read = n + offset
while 1:
try:
f.seek(-(avg_line_length * to_read), 2)
except IOError:
# woops. apparently file is smaller than what we want
# to step back, go to the beginning instead
f.seek(0)
pos = f.tell()
lines = f.read().splitlines()
if len(lines) >= to_read or pos == 0:
return lines[-to_read:offset and -offset or None]
avg_line_length *= 1.3
```
Is this a reasonable approach? What is the recommended way to tail log files with offsets? | S.Lott's answer above almost works for me but ends up giving me partial lines. It turns out that it corrupts data on block boundaries because data holds the read blocks in reversed order. When ''.join(data) is called, the blocks are in the wrong order. This fixes that.
```
def tail(f, window=20):
"""
Returns the last `window` lines of file `f` as a list.
"""
if window == 0:
return []
BUFSIZ = 1024
f.seek(0, 2)
bytes = f.tell()
size = window + 1
block = -1
data = []
while size > 0 and bytes > 0:
if bytes - BUFSIZ > 0:
# Seek back one whole BUFSIZ
f.seek(block * BUFSIZ, 2)
# read BUFFER
data.insert(0, f.read(BUFSIZ))
else:
            # file too small, start from the beginning
f.seek(0,0)
# only read what was not read
data.insert(0, f.read(bytes))
linesFound = data[0].count('\n')
size -= linesFound
bytes -= BUFSIZ
block -= 1
return ''.join(data).splitlines()[-window:]
``` |
Get last n lines of a file with Python, similar to tail | 136,168 | 122 | 2008-09-25T21:11:11Z | 13,790,289 | 10 | 2012-12-09T18:21:42Z | [
"python",
"file",
"file-io",
"tail",
"logfiles"
] | I'm writing a log file viewer for a web application and for that I want to paginate through the lines of the log file. The items in the file are line based with the newest item on the bottom.
So I need a `tail()` method that can read `n` lines from the bottom and supports an offset. What I came up with looks like this:
```
def tail(f, n, offset=0):
"""Reads a n lines from f with an offset of offset lines."""
avg_line_length = 74
to_read = n + offset
while 1:
try:
f.seek(-(avg_line_length * to_read), 2)
except IOError:
# woops. apparently file is smaller than what we want
# to step back, go to the beginning instead
f.seek(0)
pos = f.tell()
lines = f.read().splitlines()
if len(lines) >= to_read or pos == 0:
return lines[-to_read:offset and -offset or None]
avg_line_length *= 1.3
```
Is this a reasonable approach? What is the recommended way to tail log files with offsets? | Here is my answer. Pure python. Using timeit it seems pretty fast. Tailing 100 lines of a log file that has 100,000 lines:
```
>>> timeit.timeit('tail.tail(f, 100, 4098)', 'import tail; f = open("log.txt", "r");', number=10)
0.0014600753784179688
>>> timeit.timeit('tail.tail(f, 100, 4098)', 'import tail; f = open("log.txt", "r");', number=100)
0.00899195671081543
>>> timeit.timeit('tail.tail(f, 100, 4098)', 'import tail; f = open("log.txt", "r");', number=1000)
0.05842900276184082
>>> timeit.timeit('tail.tail(f, 100, 4098)', 'import tail; f = open("log.txt", "r");', number=10000)
0.5394978523254395
>>> timeit.timeit('tail.tail(f, 100, 4098)', 'import tail; f = open("log.txt", "r");', number=100000)
5.377126932144165
```
Here is the code:
```
import os
def tail(f, lines=1, _buffer=4098):
"""Tail a file and get X lines from the end"""
# place holder for the lines found
lines_found = []
# block counter will be multiplied by buffer
# to get the block size from the end
block_counter = -1
# loop until we find X lines
while len(lines_found) < lines:
try:
f.seek(block_counter * _buffer, os.SEEK_END)
except IOError: # either file is too small, or too many lines requested
f.seek(0)
lines_found = f.readlines()
break
lines_found = f.readlines()
# we found enough lines, get out
if len(lines_found) > lines:
break
# decrement the block counter to get the
# next X bytes
block_counter -= 1
return lines_found[-lines:]
``` |
Key Presses in Python | 136,734 | 13 | 2008-09-25T22:58:01Z | 136,780 | 17 | 2008-09-25T23:09:39Z | [
"python",
"keypress"
] | Is it possible to make it appear to a system that a key was pressed, for example I need to make `A` key be pressed thousands of times, and it is much to time consuming to do it manually, I would like to write something to do it for me, and the only thing I know well enough is Python.
A better way to put it, I need to emulate a key press, I.E. not capture a key press.
More Info (as requested):
I am running windows XP and need to send the keys to another application. | Install the [pywin32](http://sourceforge.net/projects/pywin32/) extensions. Then you can do the following:
```
import win32com.client as comclt
wsh= comclt.Dispatch("WScript.Shell")
wsh.AppActivate("Notepad") # select another application
wsh.SendKeys("a") # send the keys you want
```
Search for documentation of the WScript.Shell object (I believe installed by default in all Windows XP installations). You can start [here](http://www.microsoft.com/technet/scriptcenter/guide/sas_wsh_pkoy.mspx?mfr=true), perhaps.
**EDIT:** Sending F11
```
import win32com.client as comctl
wsh = comctl.Dispatch("WScript.Shell")
# Google Chrome window title
wsh.AppActivate("icanhazip.com")
wsh.SendKeys("{F11}")
``` |
How can I perform a HEAD request with the mechanize library? | 137,580 | 3 | 2008-09-26T03:20:19Z | 137,624 | 8 | 2008-09-26T03:37:33Z | [
"python",
"http-headers",
"mechanize"
] | I know how to do a HEAD request with httplib, but I have to use mechanize for this site.
Essentially, what I need to do is grab a value from the header (filename) without actually downloading the file.
Any suggestions how I could accomplish this? | Mechanize itself only sends GETs and POSTs, but you can easily extend the Request class to send HEAD. Example:
```
import mechanize
class HeadRequest(mechanize.Request):
def get_method(self):
return "HEAD"
request = HeadRequest("http://www.example.com/")
response = mechanize.urlopen(request)
print response.info()
``` |
Is there something like Python's getattr() in C#? | 138,045 | 12 | 2008-09-26T06:35:30Z | 138,079 | 9 | 2008-09-26T06:57:23Z | [
"c#",
"python",
"user-interface"
] | Is there something like [Python's getattr()](http://effbot.org/zone/python-getattr.htm) in C#? I would like to create a window by reading a list which contains the names of controls to put on the window. | There is also [Type.InvokeMember](http://msdn.microsoft.com/en-us/library/66btctbe.aspx).
```
public static class ReflectionExt
{
public static object GetAttr(this object obj, string name)
{
Type type = obj.GetType();
BindingFlags flags = BindingFlags.Instance |
BindingFlags.Public |
BindingFlags.GetProperty;
return type.InvokeMember(name, flags, Type.DefaultBinder, obj, null);
}
}
```
Which could be used like:
```
object value = ReflectionExt.GetAttr(obj, "PropertyName");
```
or (as an extension method):
```
object value = obj.GetAttr("PropertyName");
``` |
How can I read the RGB value of a given pixel in Python? | 138,250 | 63 | 2008-09-26T08:10:50Z | 138,260 | 108 | 2008-09-26T08:15:56Z | [
"python",
"graphics",
"rgb"
] | If I open an image with `open("image.jpg")`, how can I get the RGB values of a pixel, if I have the coordinates of the pixel?
Then how can I do the reverse of this? Starting with a blank graphic, 'write' a pixel with a certain RGB value?
It would be so much better if I didn't have to download any additional libraries. | It's probably best to use the [Python Imaging Library](http://www.pythonware.com/products/pil/) to do this, which I'm afraid is a separate download.
The easiest way to do what you want is via the [load() method on the Image object](http://effbot.org/imagingbook/image.htm) which returns a pixel access object which you can manipulate like an array:
```
from PIL import Image
im = Image.open("dead_parrot.jpg")  # can be many different formats
pix = im.load()
print im.size  # get the width and height of the image for iterating over
print pix[x,y]  # get the RGBA value of a pixel of an image
pix[x,y] = value  # set the RGBA value of the image (tuple)
```
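The reverse direction from the question — starting with a blank image and writing a pixel — can be sketched the same way (using `Image.new`, available in PIL and its modern Pillow fork):

```python
from PIL import Image

img = Image.new("RGB", (4, 4), (255, 255, 255))  # blank 4x4 white image
img.putpixel((1, 2), (255, 0, 0))                # write one red pixel
rgb = img.getpixel((1, 2))                       # (255, 0, 0)
```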
Alternatively, look at [ImageDraw](http://effbot.org/imagingbook/imagedraw.htm) which gives a much richer API for creating images. |
How can I read the RGB value of a given pixel in Python? | 138,250 | 63 | 2008-09-26T08:10:50Z | 139,070 | 20 | 2008-09-26T12:20:38Z | [
"python",
"graphics",
"rgb"
] | If I open an image with `open("image.jpg")`, how can I get the RGB values of a pixel, if I have the coordinates of the pixel?
Then how can I do the reverse of this? Starting with a blank graphic, 'write' a pixel with a certain RGB value?
It would be so much better if I didn't have to download any additional libraries. | **PyPNG - lightweight PNG decoder/encoder**
Although the question hints at JPG, I hope my answer will be useful to some people.
Here's how to read and write PNG pixels using [PyPNG module](https://pypi.python.org/pypi/pypng/0.0.18):
```
import png, array
point = (2, 10) # coordinates of pixel to be painted red
reader = png.Reader(filename='image.png')
w, h, pixels, metadata = reader.read_flat()
pixel_byte_width = 4 if metadata['alpha'] else 3
pixel_position = point[0] + point[1] * w
new_pixel_value = (255, 0, 0, 0) if metadata['alpha'] else (255, 0, 0)
pixels[
pixel_position * pixel_byte_width :
(pixel_position + 1) * pixel_byte_width] = array.array('B', new_pixel_value)
output = open('image-with-red-dot.png', 'wb')
writer = png.Writer(w, h, **metadata)
writer.write_array(output, pixels)
output.close()
```
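The flat-index arithmetic above can be illustrated without any imaging library (a toy 5x4 "image" where each byte's value is just its own index, standing in for the array returned by `read_flat()`):

```python
w, h = 5, 4           # toy image dimensions
bytes_per_pixel = 3   # RGB, no alpha channel
flat = list(range(w * h * bytes_per_pixel))  # stand-in for the flat pixel array

x, y = 2, 1
pos = (x + y * w) * bytes_per_pixel          # row-major flat index of the pixel
pixel = flat[pos:pos + bytes_per_pixel]      # [21, 22, 23]
```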
PyPNG is a single pure Python module less than 4000 lines long, including tests and comments.
[PIL](http://www.pythonware.com/products/pil/) is a more comprehensive imaging library, but it's also significantly heavier. |
How can I read the RGB value of a given pixel in Python? | 138,250 | 63 | 2008-09-26T08:10:50Z | 5,365,853 | 8 | 2011-03-20T00:10:43Z | [
"python",
"graphics",
"rgb"
] | If I open an image with `open("image.jpg")`, how can I get the RGB values of a pixel, if I have the coordinates of the pixel?
Then how can I do the reverse of this? Starting with a blank graphic, 'write' a pixel with a certain RGB value?
It would be so much better if I didn't have to download any additional libraries. | As Dave Webb said.
Here is my working code snippet printing the pixel colours from an image:
```
import Image
im = Image.open("image.jpg")
x = 3
y = 4
pix = im.load()
print pix[x,y]
``` |
Pure Python XSLT library | 138,502 | 19 | 2008-09-26T09:43:43Z | 592,466 | 9 | 2009-02-26T21:13:59Z | [
"python",
"xml",
"xslt"
] | Is there an XSLT library that is pure Python?
Installing libxml2+libxslt or any similar C libraries is a problem on some of the platforms I need to support.
I really only need basic XSLT support, and speed is not a major issue. | Unfortunately there are no pure-python XSLT processors at the moment. If you need something that is more platform independent, you may want to use a Java-based XSLT processor like [Saxon](http://saxon.sourceforge.net/). 4Suite is working on a pure-python XPath parser, but it doesn't look like a pure XSLT processor will be out for some time. Perhaps it would be best to use some of Python's functional capabilities to try and approximate the existing stylesheet or look into the feasibility of using Java instead. |
Is it feasible to compile Python to machine code? | 138,521 | 93 | 2008-09-26T09:51:51Z | 138,553 | 15 | 2008-09-26T10:00:15Z | [
"python",
"c",
"linker",
"compilation"
] | How feasible would it be to compile Python (possibly via an intermediate C representation) into machine code?
Presumably it would need to link to a Python runtime library, and any parts of the Python standard library which were Python themselves would need to be compiled (and linked in) too.
Also, you would need to bundle the Python interpreter if you wanted to do dynamic evaluation of expressions, but perhaps a subset of Python that didn't allow this would still be useful.
Would it provide any speed and/or memory usage advantages? Presumably the startup time of the Python interpreter would be eliminated (although shared libraries would still need loading at startup). | Try the [ShedSkin](http://shed-skin.blogspot.com/) Python-to-C++ compiler, but it is far from perfect. There is also Psyco, a Python JIT, if you only need a speedup. IMHO, though, this is not worth the effort: for speed-critical parts of the code, the best solution is to write them as C/C++ extensions.
Is it feasible to compile Python to machine code? | 138,521 | 93 | 2008-09-26T09:51:51Z | 138,582 | 11 | 2008-09-26T10:06:06Z | [
"python",
"c",
"linker",
"compilation"
] | How feasible would it be to compile Python (possibly via an intermediate C representation) into machine code?
Presumably it would need to link to a Python runtime library, and any parts of the Python standard library which were Python themselves would need to be compiled (and linked in) too.
Also, you would need to bundle the Python interpreter if you wanted to do dynamic evaluation of expressions, but perhaps a subset of Python that didn't allow this would still be useful.
Would it provide any speed and/or memory usage advantages? Presumably the startup time of the Python interpreter would be eliminated (although shared libraries would still need loading at startup). | [PyPy](http://codespeak.net/pypy/dist/pypy/doc/home.html) is a project to reimplement Python in Python, using compilation to native code as one of the implementation strategies (others being a VM with JIT, using JVM, etc.). Their compiled C versions run slower than CPython on average but much faster for some programs.
[Shedskin](http://code.google.com/p/shedskin/) is an experimental Python-to-C++ compiler.
[Pyrex](http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/version/Doc/About.html) is a language specially designed for writing Python extension modules. It's designed to bridge the gap between the nice, high-level, easy-to-use world of Python and the messy, low-level world of C. |
Is it feasible to compile Python to machine code? | 138,521 | 93 | 2008-09-26T09:51:51Z | 138,585 | 41 | 2008-09-26T10:06:43Z | [
"python",
"c",
"linker",
"compilation"
] | How feasible would it be to compile Python (possibly via an intermediate C representation) into machine code?
Presumably it would need to link to a Python runtime library, and any parts of the Python standard library which were Python themselves would need to be compiled (and linked in) too.
Also, you would need to bundle the Python interpreter if you wanted to do dynamic evaluation of expressions, but perhaps a subset of Python that didn't allow this would still be useful.
Would it provide any speed and/or memory usage advantages? Presumably the startup time of the Python interpreter would be eliminated (although shared libraries would still need loading at startup). | As @Greg Hewgill says, there are good reasons why this is not always possible. However, certain kinds of code (like very algorithmic code) can be turned into "real" machine code.
There are several options:
* Use [Psyco](http://psyco.sourceforge.net/), which emits machine code dynamically. You should choose carefully which methods/functions to convert, though.
* Use [Cython](http://cython.org/), which is a Python-*like* language that is compiled into a Python C extension
* Use [PyPy](http://pypy.org), which has a translator from RPython (a *restricted subset* of Python that does not support some of the most "dynamic" features of Python) to C or LLVM.
+ PyPy is still highly experimental
+ not all extensions will be present
After that, you can use one of the existing packages (freeze, Py2exe, PyInstaller) to put everything into one binary.
All in all: there is no general answer for your question. If you have Python code that is performance-critical, try to use as much builtin functionality as possible (or ask a "How do I make my Python code faster" question). If that doesn't help, try to identify the code and port it to C (or Cython) and use the extension. |
Is it feasible to compile Python to machine code? | 138,521 | 93 | 2008-09-26T09:51:51Z | 138,586 | 8 | 2008-09-26T10:06:46Z | [
"python",
"c",
"linker",
"compilation"
] | How feasible would it be to compile Python (possibly via an intermediate C representation) into machine code?
Presumably it would need to link to a Python runtime library, and any parts of the Python standard library which were Python themselves would need to be compiled (and linked in) too.
Also, you would need to bundle the Python interpreter if you wanted to do dynamic evaluation of expressions, but perhaps a subset of Python that didn't allow this would still be useful.
Would it provide any speed and/or memory usage advantages? Presumably the startup time of the Python interpreter would be eliminated (although shared libraries would still need loading at startup). | [Pyrex](http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/) is a subset of the Python language that compiles to C, done by the guy that first built [list comprehensions](http://www.secnetix.de/olli/Python/list_comprehensions.hawk) for Python. It was mainly developed for building wrappers but can be used in a more general context. [Cython](http://cython.org/) is a more actively maintained fork of pyrex. |
Is it feasible to compile Python to machine code? | 138,521 | 93 | 2008-09-26T09:51:51Z | 11,415,005 | 14 | 2012-07-10T14:00:44Z | [
"python",
"c",
"linker",
"compilation"
] | How feasible would it be to compile Python (possibly via an intermediate C representation) into machine code?
Presumably it would need to link to a Python runtime library, and any parts of the Python standard library which were Python themselves would need to be compiled (and linked in) too.
Also, you would need to bundle the Python interpreter if you wanted to do dynamic evaluation of expressions, but perhaps a subset of Python that didn't allow this would still be useful.
Would it provide any speed and/or memory usage advantages? Presumably the startup time of the Python interpreter would be eliminated (although shared libraries would still need loading at startup). | py2c ( <http://code.google.com/p/py2c>) can convert python code to c/c++
I am the solo developer of py2c. |
Can Regex be used for this particular string manipulation? | 138,552 | 7 | 2008-09-26T10:00:10Z | 138,615 | 8 | 2008-09-26T10:18:50Z | [
"c#",
"python",
"regex",
"language-agnostic"
] | I need to replace character (say) **x** with character (say) **P** in a string, but only if it is contained in a quoted substring.
An example makes it clearer:
```
axbx'cxdxe'fxgh'ixj'k -> axbx'cPdPe'fxgh'iPj'k
```
Let's assume, for the sake of simplicity, that quotes always come in pairs.
The obvious way is to just process the string one character at a time (a simple state machine approach);
however, I'm wondering if regular expressions can be used to do all the processing in one go.
My target language is C#, but I guess my question pertains to any language having builtin or library support for regular expressions. | I was able to do this with Python:
```
>>> import re
>>> re.sub(r"x(?=[^']*'([^']|'[^']*')*$)", "P", "axbx'cxdxe'fxgh'ixj'k")
"axbx'cPdPe'fxgh'iPj'k"
```
What this does is use the zero-width lookahead assertion (?=...) to check that the character x is within a quoted string. It looks for some non-quote characters up to the next quote, then looks for a sequence of either single characters or quoted groups of characters, until the end of the string.
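The same substitution can be verified directly:

```python
import re

# replace x with P only inside quoted substrings (assumes balanced quotes)
pattern = r"x(?=[^']*'([^']|'[^']*')*$)"
result = re.sub(pattern, "P", "axbx'cxdxe'fxgh'ixj'k")  # "axbx'cPdPe'fxgh'iPj'k"
```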
This relies on your assumption that the quotes are always balanced. This is also not very efficient. |
Can Regex be used for this particular string manipulation? | 138,552 | 7 | 2008-09-26T10:00:10Z | 138,755 | 9 | 2008-09-26T11:04:02Z | [
"c#",
"python",
"regex",
"language-agnostic"
] | I need to replace character (say) **x** with character (say) **P** in a string, but only if it is contained in a quoted substring.
An example makes it clearer:
```
axbx'cxdxe'fxgh'ixj'k -> axbx'cPdPe'fxgh'iPj'k
```
Let's assume, for the sake of simplicity, that quotes always come in pairs.
The obvious way is to just process the string one character at a time (a simple state machine approach);
however, I'm wondering if regular expressions can be used to do all the processing in one go.
My target language is C#, but I guess my question pertains to any language having builtin or library support for regular expressions. | I converted Greg Hewgill's python code to C# and it worked!
```
[Test]
public void ReplaceTextInQuotes()
{
Assert.AreEqual("axbx'cPdPe'fxgh'iPj'k",
Regex.Replace("axbx'cxdxe'fxgh'ixj'k",
@"x(?=[^']*'([^']|'[^']*')*$)", "P"));
}
```
That test passed. |
Vim extension (via Python)? | 138,680 | 10 | 2008-09-26T10:45:57Z | 138,709 | 19 | 2008-09-26T10:55:13Z | [
"python",
"vim"
] | is it possible to extend vim functionality via custom extension (preferably, written in Python)?
What I need ideally is custom command when in command mode. E.g.
ESC
:do\_this
:do\_that | vim supports scripting in python (and in perl as well, I think).
You just have to make sure that the vim distribution you are using has been compiled with python support.
If you are using a Linux system, you can download the source and then compile it with
```
./configure --enable-pythoninterp
make
sudo make install
```
Inside vim, you can type
```
:version
```
to list the available features; if it has python support, you should see a '+python' somewhere (a '-python' otherwise).
Then, to check the usage of the python module, you can type
```
:help python
```
P.S: if you're going to compile the vim sources, make sure to check the available configure options, you might need to specify --with-python-config-dir as well.
P.P.S: to create a "custom command in command mode" (if I understand correctly what you mean), you can create a function "MyFunction" in a vim script (using python or the vim scripting language) and then invoke it with
```
:call MyFunction()
```
Check
```
:help user-functions
```
for details |
How do I test a django database schema? | 138,851 | 6 | 2008-09-26T11:23:51Z | 139,137 | 8 | 2008-09-26T12:32:46Z | [
"python",
"django",
"unit-testing",
"model"
] | I want to write tests that can show whether or not the database is in sync with my models.py file. Actually I have already written them, only to find out that django creates a new database each time the tests are run based on the models.py file.
Is there any way I can make the **models.py test** use the existing database schema? The one that's in mysql/postgresql, and not the one that's in /myapp/models.py ?
I don't care about the data that's in the database, I only care about its **schema** i.e. I want my tests to notice if a table in the database has fewer fields than the schema in my models.py file.
I'm using the unittest framework (actually the django extension to it) if this has any relevance.
thanks | What we did was override the default test\_runner so that it wouldn't create a new database to test against. This way, it runs the test against whatever our current local database looks like. But be very careful if you use this method because any changes to data you make in your tests will be permanent. I made sure that all our tests restores any changes back to their original state, and keep our pristine version of our database on the server and backed up.
So to do this you need to copy the run\_test method from django.test.simple to a location in your project -- I put mine in myproject/test/test\_runner.py
Then make the following changes to that method:
```
# change
old_name = settings.DATABASE_NAME
from django.db import connection
connection.creation.create_test_db(verbosity, autoclobber=not interactive)
result = unittest.TextTestRunner(verbosity=verbosity).run(suite)
connection.creation.destroy_test_db(old_name, verbosity)
# to:
result = unittest.TextTestRunner(verbosity=verbosity).run(suite)
```
Make sure to do all the necessary imports at the top and then in your settings file set the setting:
```
TEST_RUNNER = 'myproject.test.test_runner.run_tests'
```
Now when you run ./manage.py test Django will run the tests against the current state of your database rather than creating a new version based on your current model definitions.
Another thing you can do is create a copy of your database locally, and then do a check in your new run\_test() method like this:
```
if settings.DATABASE_NAME != 'my_test_db':
sys.exit("You cannot run tests using the %s database. Please switch DATABASE_NAME to my_test_db in settings.py" % settings.DATABASE_NAME)
```
That way there's no danger of running tests against your main database. |
listing all functions in a python module | 139,180 | 164 | 2008-09-26T12:38:52Z | 139,193 | 238 | 2008-09-26T12:40:20Z | [
"python"
] | I have a python module installed on my system and I'd like to be able to see what functions/classes/methods are available in it.
I want to call the doc function on each one. In ruby I can do something like ClassName.methods to get a list of all the methods available on that class. Is there something similar in python?
eg. something like:
```
from somemodule import foo
print foo.methods # or whatever is the correct method to call
``` | You can use `dir(module)` to see all available methods/attributes. Also check out PyDocs. |
listing all functions in a python module | 139,180 | 164 | 2008-09-26T12:38:52Z | 139,198 | 69 | 2008-09-26T12:41:04Z | [
"python"
] | I have a python module installed on my system and I'd like to be able to see what functions/classes/methods are available in it.
I want to call the doc function on each one. In ruby I can do something like ClassName.methods to get a list of all the methods available on that class. Is there something similar in python?
eg. something like:
```
from somemodule import foo
print foo.methods # or whatever is the correct method to call
``` | The inspect module. Also see the [`pydoc`](http://docs.python.org/2/library/pydoc.html) module, the `help()` function in the interactive interpreter and the `pydoc` command-line tool which generates the documentation you are after. You can just give them the class you wish to see the documentation of. They can also generate, for instance, HTML output and write it to disk. |
listing all functions in a python module | 139,180 | 164 | 2008-09-26T12:38:52Z | 139,258 | 44 | 2008-09-26T12:50:39Z | [
"python"
] | I have a python module installed on my system and I'd like to be able to see what functions/classes/methods are available in it.
I want to call the doc function on each one. In ruby I can do something like ClassName.methods to get a list of all the methods available on that class. Is there something similar in python?
eg. something like:
```
from somemodule import foo
print foo.methods # or whatever is the correct method to call
``` | ```
import types
import yourmodule
print [yourmodule.__dict__.get(a) for a in dir(yourmodule)
if isinstance(yourmodule.__dict__.get(a), types.FunctionType)]
``` |
listing all functions in a python module | 139,180 | 164 | 2008-09-26T12:38:52Z | 140,106 | 76 | 2008-09-26T15:08:54Z | [
"python"
] | I have a python module installed on my system and I'd like to be able to see what functions/classes/methods are available in it.
I want to call the doc function on each one. In ruby I can do something like ClassName.methods to get a list of all the methods available on that class. Is there something similar in python?
eg. something like:
```
from somemodule import foo
print foo.methods # or whatever is the correct method to call
``` | Once you've `import`ed the module, you can just do:
```
help(modulename)
```
... To get the docs on all the functions at once, interactively. Or you can use:
```
dir(modulename)
```
... To simply list the names of all the functions and variables defined in the module. |
listing all functions in a python module | 139,180 | 164 | 2008-09-26T12:38:52Z | 142,501 | 20 | 2008-09-26T23:41:16Z | [
"python"
] | I have a python module installed on my system and I'd like to be able to see what functions/classes/methods are available in it.
I want to call the doc function on each one. In ruby I can do something like ClassName.methods to get a list of all the methods available on that class. Is there something similar in python?
eg. something like:
```
from somemodule import foo
print foo.methods # or whatever is the correct method to call
``` | This will do the trick:
```
dir(module)
```
However, if you find it annoying to read the returned list, just use the following loop to get one name per line.
```
for i in dir(module): print i
``` |
listing all functions in a python module | 139,180 | 164 | 2008-09-26T12:38:52Z | 9,794,849 | 37 | 2012-03-20T20:59:57Z | [
"python"
] | I have a python module installed on my system and I'd like to be able to see what functions/classes/methods are available in it.
I want to call the doc function on each one. In ruby I can do something like ClassName.methods to get a list of all the methods available on that class. Is there something similar in python?
eg. something like:
```
from somemodule import foo
print foo.methods # or whatever is the correct method to call
``` | An example with inspect:
```
from inspect import getmembers, isfunction
from my_project import my_module
functions_list = [o for o in getmembers(my_module) if isfunction(o[1])]
```
getmembers returns a list of (object\_name, object) tuples, sorted by name.
You can replace isfunction with any of the other isXXX functions in the inspect module. |
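To connect this back to the original goal of calling the doc function on each one, `getmembers` pairs nicely with `inspect.getdoc`. The `describe` helper below is a hypothetical sketch, run on `json` purely as an example module:

```python
import inspect
import json  # any module works; json is just an example


def describe(module):
    # Print each function's name with the first line of its docstring.
    for name, func in inspect.getmembers(module, inspect.isfunction):
        doc = (inspect.getdoc(func) or "").split("\n")[0]
        print("%s: %s" % (name, doc))


describe(json)
```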
listing all functions in a python module | 139,180 | 164 | 2008-09-26T12:38:52Z | 10,079,706 | 13 | 2012-04-09T20:51:58Z | [
"python"
] | I have a python module installed on my system and I'd like to be able to see what functions/classes/methods are available in it.
I want to call the doc function on each one. In ruby I can do something like ClassName.methods to get a list of all the methods available on that class. Is there something similar in python?
eg. something like:
```
from somemodule import foo
print foo.methods # or whatever is the correct method to call
``` | `dir(module)` is the standard way when using a script or the standard interpreter, as mentioned in most answers.
However with an interactive python shell like [IPython](http://ipython.org) you can use tab-completion to get an overview of all objects defined in the module.
This is much more convenient than using a script and `print` to see what is defined in the module.
* `module.<tab>` will show you all objects defined in the module (functions, classes and so on)
* `module.ClassX.<tab>` will show you the methods and attributes of a class
* `module.function_xy?` or `module.ClassX.method_xy?` will show you the docstring of that function / method
* `module.function_x??` or `module.SomeClass.method_xy??` will show you the source code of the function / method. |
listing all functions in a python module | 139,180 | 164 | 2008-09-26T12:38:52Z | 31,005,891 | 10 | 2015-06-23T14:39:55Z | [
"python"
] | I have a python module installed on my system and I'd like to be able to see what functions/classes/methods are available in it.
I want to call the doc function on each one. In ruby I can do something like ClassName.methods to get a list of all the methods available on that class. Is there something similar in python?
eg. something like:
```
from somemodule import foo
print foo.methods # or whatever is the correct method to call
``` | For completeness' sake, I'd like to point out that sometimes you may want to *parse* code instead of importing it. An `import` will *execute* top-level expressions, and that could be a problem.
For example, I'm letting users select entry point functions for packages being made with [zipapp](https://docs.python.org/dev/library/zipapp.html). Using `import` and `inspect` risks running stray top-level code, leading to crashes, help messages being printed out, GUI dialogs popping up and so on.
Instead I use the [ast](https://docs.python.org/3.2/library/ast.html#module-ast) module to list all the top-level functions:
```
import ast
import sys
def top_level_functions(body):
return (f for f in body if isinstance(f, ast.FunctionDef))
def parse_ast(filename):
with open(filename, "rt") as file:
return ast.parse(file.read(), filename=filename)
if __name__ == "__main__":
for filename in sys.argv[1:]:
print(filename)
tree = parse_ast(filename)
for func in top_level_functions(tree.body):
print(" %s" % func.name)
```
Putting this code in `list.py` and using itself as input, I get:
```
$ python list.py list.py
list.py
top_level_functions
parse_ast
```
Of course, navigating an AST can be tricky sometimes, even for a relatively simple language like Python, because the AST is quite low-level. But if you have a simple and clear use case, it's both doable and safe.
(Though, a downside is that you can't detect functions that are dynamically generated, like `foo = lambda x,y: x*y`.) |
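To make the "top-level only" behavior concrete, here is a small sketch run on an inline source string (made up for illustration): methods and lambdas do not show up as top-level `FunctionDef` nodes.

```python
import ast

source = """
def foo(): pass

class Bar:
    def method(self): pass

baz = lambda x, y: x * y
"""

tree = ast.parse(source)
top_level = [node.name for node in tree.body if isinstance(node, ast.FunctionDef)]
print(top_level)  # ['foo'] -- neither Bar.method nor the lambda appears
```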
Why results of map() and list comprehension are different? | 139,819 | 9 | 2008-09-26T14:19:48Z | 139,899 | 9 | 2008-09-26T14:31:47Z | [
"python",
"closures",
"list-comprehension",
"late-binding",
"generator-expression"
] | The following test fails:
```
#!/usr/bin/env python
def f(*args):
"""
>>> t = 1, -1
>>> f(*map(lambda i: lambda: i, t))
[1, -1]
>>> f(*(lambda: i for i in t)) # -> [-1, -1]
[1, -1]
>>> f(*[lambda: i for i in t]) # -> [-1, -1]
[1, -1]
"""
alist = [a() for a in args]
print(alist)
if __name__ == '__main__':
import doctest; doctest.testmod()
```
In other words:
```
>>> t = 1, -1
>>> args = []
>>> for i in t:
... args.append(lambda: i)
...
>>> map(lambda a: a(), args)
[-1, -1]
>>> args = []
>>> for i in t:
... args.append((lambda i: lambda: i)(i))
...
>>> map(lambda a: a(), args)
[1, -1]
>>> args = []
>>> for i in t:
... args.append(lambda i=i: i)
...
>>> map(lambda a: a(), args)
[1, -1]
They are different because the value of `i` in both the generator expression and the list comp is evaluated lazily, i.e. when the anonymous functions are invoked in `f`.
By that time, `i` is bound to the last value of `t`, which is -1.
So basically, this is what the list comprehension does (likewise for the genexp):
```
x = []
i = 1 # 1. from t
x.append(lambda: i)
i = -1 # 2. from t
x.append(lambda: i)
```
Now the lambdas carry around a closure that references `i`, but `i` is bound to -1 in both cases, because that is the last value it was assigned to.
If you want to make sure that the lambda receives the current value of `i`, do
```
f(*[lambda u=i: u for i in t])
```
This way, you force the evaluation of `i` at the time the closure is created.
**Edit**: There is one difference between generator expressions and list comprehensions: the latter leak the loop variable into the surrounding scope. |
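The late-binding behavior and the default-argument fix can be seen side by side in a few lines:

```python
t = (1, -1)

late = [lambda: i for i in t]       # every closure reads the shared i
bound = [lambda i=i: i for i in t]  # default argument freezes i per iteration

print([f() for f in late])   # [-1, -1]
print([f() for f in bound])  # [1, -1]
```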
Authenticating against active directory using python + ldap | 140,439 | 70 | 2008-09-26T16:08:11Z | 140,495 | 37 | 2008-09-26T16:18:18Z | [
"python",
"authentication",
"active-directory",
"ldap"
] | How do I authenticate against AD using Python + LDAP. I'm currently using the python-ldap library and all it is producing is tears.
I can't even bind to perform a simple query:
```
import sys
import ldap
Server = "ldap://my-ldap-server"
DN, Secret, un = sys.argv[1:4]
Base = "dc=mydomain,dc=co,dc=uk"
Scope = ldap.SCOPE_SUBTREE
Filter = "(&(objectClass=user)(sAMAccountName="+un+"))"
Attrs = ["displayName"]
l = ldap.initialize(Server)
l.protocol_version = 3
print l.simple_bind_s(DN, Secret)
r = l.search(Base, Scope, Filter, Attrs)
Type,user = l.result(r,60)
Name,Attrs = user[0]
if hasattr(Attrs, 'has_key') and Attrs.has_key('displayName'):
displayName = Attrs['displayName'][0]
print displayName
sys.exit()
```
Running this with `[email protected] password username` gives me one of two errors:
`Invalid Credentials` - When I mistype or intentionally use wrong credentials it fails to authenticate.
> ldap.INVALID\_CREDENTIALS: {'info': '80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 52e, vece', 'desc': 'Invalid credentials'}
Or
> ldap.OPERATIONS\_ERROR: {'info': '00000000: LdapErr: DSID-0C090627, comment: In order to perform this operation a successful bind must be completed on the connection., data 0, vece', 'desc': 'Operations error'}
What am I missing out to bind properly?
I am getting the same errors on fedora and windows. | I was missing
```
l.set_option(ldap.OPT_REFERRALS, 0)
```
from the init. |
Authenticating against active directory using python + ldap | 140,439 | 70 | 2008-09-26T16:08:11Z | 141,729 | 22 | 2008-09-26T20:23:04Z | [
"python",
"authentication",
"active-directory",
"ldap"
] | How do I authenticate against AD using Python + LDAP. I'm currently using the python-ldap library and all it is producing is tears.
I can't even bind to perform a simple query:
```
import sys
import ldap
Server = "ldap://my-ldap-server"
DN, Secret, un = sys.argv[1:4]
Base = "dc=mydomain,dc=co,dc=uk"
Scope = ldap.SCOPE_SUBTREE
Filter = "(&(objectClass=user)(sAMAccountName="+un+"))"
Attrs = ["displayName"]
l = ldap.initialize(Server)
l.protocol_version = 3
print l.simple_bind_s(DN, Secret)
r = l.search(Base, Scope, Filter, Attrs)
Type,user = l.result(r,60)
Name,Attrs = user[0]
if hasattr(Attrs, 'has_key') and Attrs.has_key('displayName'):
displayName = Attrs['displayName'][0]
print displayName
sys.exit()
```
Running this with `[email protected] password username` gives me one of two errors:
`Invalid Credentials` - When I mistype or intentionally use wrong credentials it fails to authenticate.
> ldap.INVALID\_CREDENTIALS: {'info': '80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 52e, vece', 'desc': 'Invalid credentials'}
Or
> ldap.OPERATIONS\_ERROR: {'info': '00000000: LdapErr: DSID-0C090627, comment: In order to perform this operation a successful bind must be completed on the connection., data 0, vece', 'desc': 'Operations error'}
What am I missing out to bind properly?
I am getting the same errors on fedora and windows. | If you are open to using pywin32, you can use Win32 calls from Python. This is what we do in our CherryPy web server:
```
import win32security
token = win32security.LogonUser(
username,
domain,
password,
win32security.LOGON32_LOGON_NETWORK,
win32security.LOGON32_PROVIDER_DEFAULT)
authenticated = bool(token)
``` |
Authenticating against active directory using python + ldap | 140,439 | 70 | 2008-09-26T16:08:11Z | 1,126,391 | 7 | 2009-07-14T16:02:22Z | [
"python",
"authentication",
"active-directory",
"ldap"
] | How do I authenticate against AD using Python + LDAP. I'm currently using the python-ldap library and all it is producing is tears.
I can't even bind to perform a simple query:
```
import sys
import ldap
Server = "ldap://my-ldap-server"
DN, Secret, un = sys.argv[1:4]
Base = "dc=mydomain,dc=co,dc=uk"
Scope = ldap.SCOPE_SUBTREE
Filter = "(&(objectClass=user)(sAMAccountName="+un+"))"
Attrs = ["displayName"]
l = ldap.initialize(Server)
l.protocol_version = 3
print l.simple_bind_s(DN, Secret)
r = l.search(Base, Scope, Filter, Attrs)
Type,user = l.result(r,60)
Name,Attrs = user[0]
if hasattr(Attrs, 'has_key') and Attrs.has_key('displayName'):
displayName = Attrs['displayName'][0]
print displayName
sys.exit()
```
Running this with `[email protected] password username` gives me one of two errors:
`Invalid Credentials` - When I mistype or intentionally use wrong credentials it fails to authenticate.
> ldap.INVALID\_CREDENTIALS: {'info': '80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 52e, vece', 'desc': 'Invalid credentials'}
Or
> ldap.OPERATIONS\_ERROR: {'info': '00000000: LdapErr: DSID-0C090627, comment: In order to perform this operation a successful bind must be completed on the connection., data 0, vece', 'desc': 'Operations error'}
What am I missing out to bind properly?
I am getting the same errors on fedora and windows. | That worked for me, **l.set\_option(ldap.OPT\_REFERRALS, 0)** was the key to access the ActiveDirectory. Moreover, I think that you should add an "con.unbind()" in order to close the connection before finishing the script. |
Looking for File Traversal Functions in Python that are Like Java's | 140,758 | 10 | 2008-09-26T17:20:14Z | 140,818 | 25 | 2008-09-26T17:30:39Z | [
"java",
"python",
"file-traversal"
] | In Java you can do `File.listFiles()` and receive all of the files in a directory. You can then easily recurse through directory trees.
Is there an analogous way to do this in Python? | Yes, there is. The Python way is even better.
There are three possibilities:
**1) Like File.listFiles():**
Python has the function os.listdir(path). It works like the Java method.
**2) pathname pattern expansion with glob:**
The module glob contains functions to list files on the file system using Unix shell like pattern, e.g.
```
files = glob.glob('/usr/joe/*.gif')
```
**3) File Traversal with walk:**
Really nice is Python's os.walk function.
The walk function returns a generator that recursively lists all directories and files below a given starting path.
An Example:
```
import os
from os.path import join

for root, dirs, files in os.walk('/usr'):
    print "Current directory", root
    print "Sub directories", dirs
    print "Files", files
```
You can even remove directories from "dirs" on the fly to prevent walking into them: `if "joe" in dirs: dirs.remove("joe")` skips any directory called "joe".
listdir and walk are documented [here](http://docs.python.org/lib/os-file-dir.html).
glob is documented [here](http://docs.python.org/lib/module-glob.html). |
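The on-the-fly pruning trick can be demonstrated on a throwaway directory tree (built here purely for illustration):

```python
import os
import tempfile

# Build a tiny tree: root/keep/sub and root/joe
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "keep", "sub"))
os.makedirs(os.path.join(root, "joe"))

visited = []
for dirpath, dirnames, filenames in os.walk(root):
    if "joe" in dirnames:
        dirnames.remove("joe")  # os.walk will now skip root/joe entirely
    visited.append(os.path.basename(dirpath))

print(visited)  # 'joe' is absent; 'keep' and 'sub' are visited
```

Pruning works because os.walk re-reads the `dirnames` list after yielding, so mutating it in place changes which subdirectories are descended into.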
How to list only top level directories in Python? | 141,291 | 68 | 2008-09-26T19:01:06Z | 141,313 | 9 | 2008-09-26T19:04:46Z | [
"python",
"filesystems"
] | I want to be able to list only the directories inside some folder.
This means I don't want filenames listed, nor do I want additional sub-folders.
Let's see if an example helps. In the current directory we have:
```
>>> os.listdir(os.getcwd())
['cx_Oracle-doc', 'DLLs', 'Doc', 'include', 'Lib', 'libs', 'LICENSE.txt', 'mod_p
ython-wininst.log', 'NEWS.txt', 'pymssql-wininst.log', 'python.exe', 'pythonw.ex
e', 'README.txt', 'Removemod_python.exe', 'Removepymssql.exe', 'Scripts', 'tcl',
'Tools', 'w9xpopen.exe']
```
However, I don't want filenames listed. Nor do I want sub-folders such as \Lib\curses. Essentially what I want works with the following:
```
>>> for root, dirnames, filenames in os.walk('.'):
... print dirnames
... break
...
['cx_Oracle-doc', 'DLLs', 'Doc', 'include', 'Lib', 'libs', 'Scripts', 'tcl', 'Tools']
```
However, I'm wondering if there's a simpler way of achieving the same results. I get the impression that using os.walk only to return the top level is inefficient/too much. | ```
directories=[d for d in os.listdir(os.getcwd()) if os.path.isdir(d)]
``` |
How to list only top level directories in Python? | 141,291 | 68 | 2008-09-26T19:01:06Z | 141,327 | 61 | 2008-09-26T19:06:57Z | [
"python",
"filesystems"
] | I want to be able to list only the directories inside some folder.
This means I don't want filenames listed, nor do I want additional sub-folders.
Let's see if an example helps. In the current directory we have:
```
>>> os.listdir(os.getcwd())
['cx_Oracle-doc', 'DLLs', 'Doc', 'include', 'Lib', 'libs', 'LICENSE.txt', 'mod_p
ython-wininst.log', 'NEWS.txt', 'pymssql-wininst.log', 'python.exe', 'pythonw.ex
e', 'README.txt', 'Removemod_python.exe', 'Removepymssql.exe', 'Scripts', 'tcl',
'Tools', 'w9xpopen.exe']
```
However, I don't want filenames listed. Nor do I want sub-folders such as \Lib\curses. Essentially what I want works with the following:
```
>>> for root, dirnames, filenames in os.walk('.'):
... print dirnames
... break
...
['cx_Oracle-doc', 'DLLs', 'Doc', 'include', 'Lib', 'libs', 'Scripts', 'tcl', 'Tools']
```
However, I'm wondering if there's a simpler way of achieving the same results. I get the impression that using os.walk only to return the top level is inefficient/too much. | Filter the result using os.path.isdir() (and use os.path.join() to get the real path):
```
>>> [ name for name in os.listdir(thedir) if os.path.isdir(os.path.join(thedir, name)) ]
['ctypes', 'distutils', 'encodings', 'lib-tk', 'config', 'idlelib', 'xml', 'bsddb', 'hotshot', 'logging', 'doc', 'test', 'compiler', 'curses', 'site-packages', 'email', 'sqlite3', 'lib-dynload', 'wsgiref', 'plat-linux2', 'plat-mac']
``` |
How to list only top level directories in Python? | 141,291 | 68 | 2008-09-26T19:01:06Z | 141,336 | 26 | 2008-09-26T19:10:36Z | [
"python",
"filesystems"
] | I want to be able to list only the directories inside some folder.
This means I don't want filenames listed, nor do I want additional sub-folders.
Let's see if an example helps. In the current directory we have:
```
>>> os.listdir(os.getcwd())
['cx_Oracle-doc', 'DLLs', 'Doc', 'include', 'Lib', 'libs', 'LICENSE.txt', 'mod_p
ython-wininst.log', 'NEWS.txt', 'pymssql-wininst.log', 'python.exe', 'pythonw.ex
e', 'README.txt', 'Removemod_python.exe', 'Removepymssql.exe', 'Scripts', 'tcl',
'Tools', 'w9xpopen.exe']
```
However, I don't want filenames listed. Nor do I want sub-folders such as \Lib\curses. Essentially what I want works with the following:
```
>>> for root, dirnames, filenames in os.walk('.'):
... print dirnames
... break
...
['cx_Oracle-doc', 'DLLs', 'Doc', 'include', 'Lib', 'libs', 'Scripts', 'tcl', 'Tools']
```
However, I'm wondering if there's a simpler way of achieving the same results. I get the impression that using os.walk only to return the top level is inefficient/too much. | Filter the list using os.path.isdir to detect directories.
```
filter(os.path.isdir, os.listdir(os.getcwd()))
``` |
How to list only top level directories in Python? | 141,291 | 68 | 2008-09-26T19:01:06Z | 142,368 | 8 | 2008-09-26T22:32:50Z | [
"python",
"filesystems"
] | I want to be able to list only the directories inside some folder.
This means I don't want filenames listed, nor do I want additional sub-folders.
Let's see if an example helps. In the current directory we have:
```
>>> os.listdir(os.getcwd())
['cx_Oracle-doc', 'DLLs', 'Doc', 'include', 'Lib', 'libs', 'LICENSE.txt', 'mod_p
ython-wininst.log', 'NEWS.txt', 'pymssql-wininst.log', 'python.exe', 'pythonw.ex
e', 'README.txt', 'Removemod_python.exe', 'Removepymssql.exe', 'Scripts', 'tcl',
'Tools', 'w9xpopen.exe']
```
However, I don't want filenames listed. Nor do I want sub-folders such as \Lib\curses. Essentially what I want works with the following:
```
>>> for root, dirnames, filenames in os.walk('.'):
... print dirnames
... break
...
['cx_Oracle-doc', 'DLLs', 'Doc', 'include', 'Lib', 'libs', 'Scripts', 'tcl', 'Tools']
```
However, I'm wondering if there's a simpler way of achieving the same results. I get the impression that using os.walk only to return the top level is inefficient/too much. | Note that, instead of doing `os.listdir(os.getcwd())`, it's preferable to do `os.listdir(os.path.curdir)`. One less function call, and it's as portable.
So, to complete the answer, to get a list of directories in a folder:
```
def listdirs(folder):
return [d for d in os.listdir(folder) if os.path.isdir(os.path.join(folder, d))]
```
If you prefer full pathnames, then use this function:
```
def listdirs(folder):
return [
d for d in (os.path.join(folder, d1) for d1 in os.listdir(folder))
if os.path.isdir(d)
]
``` |
How to list only top level directories in Python? | 141,291 | 68 | 2008-09-26T19:01:06Z | 142,535 | 99 | 2008-09-26T23:57:04Z | [
"python",
"filesystems"
] | I want to be able to list only the directories inside some folder.
This means I don't want filenames listed, nor do I want additional sub-folders.
Let's see if an example helps. In the current directory we have:
```
>>> os.listdir(os.getcwd())
['cx_Oracle-doc', 'DLLs', 'Doc', 'include', 'Lib', 'libs', 'LICENSE.txt', 'mod_p
ython-wininst.log', 'NEWS.txt', 'pymssql-wininst.log', 'python.exe', 'pythonw.ex
e', 'README.txt', 'Removemod_python.exe', 'Removepymssql.exe', 'Scripts', 'tcl',
'Tools', 'w9xpopen.exe']
```
However, I don't want filenames listed. Nor do I want sub-folders such as \Lib\curses. Essentially what I want works with the following:
```
>>> for root, dirnames, filenames in os.walk('.'):
... print dirnames
... break
...
['cx_Oracle-doc', 'DLLs', 'Doc', 'include', 'Lib', 'libs', 'Scripts', 'tcl', 'Tools']
```
However, I'm wondering if there's a simpler way of achieving the same results. I get the impression that using os.walk only to return the top level is inefficient/too much. | ```
os.walk('.').next()[1]
``` |
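One caveat: generators lost their `.next()` method in Python 3, so the modern spelling of this one-liner uses the `next()` builtin (demonstrated here on a temporary directory built for illustration):

```python
import os
import tempfile

root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, "a"))
os.mkdir(os.path.join(root, "b"))
open(os.path.join(root, "f.txt"), "w").close()

dirs = next(os.walk(root))[1]
print(sorted(dirs))  # ['a', 'b'] -- files such as f.txt are excluded
```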
How do I find what is using memory in a Python process in a production system? | 141,351 | 27 | 2008-09-26T19:13:14Z | 9,567,831 | 18 | 2012-03-05T13:56:17Z | [
"python",
"memory-leaks",
"coredump"
] | My production system occasionally exhibits a memory leak I have not been able to reproduce in a development environment. I've used a [Python memory profiler](http://stackoverflow.com/questions/110259/python-memory-profiler) (specifically, Heapy) with some success in the development environment, but it can't help me with things I can't reproduce, and I'm reluctant to instrument our production system with Heapy because it takes a while to do its thing and its threaded remote interface does not work well in our server.
What I think I want is a way to dump a snapshot of the production Python process (or at least gc.get\_objects), and then analyze it offline to see where it is using memory. [How do I get a core dump of a python process like this?](http://stackoverflow.com/questions/141802/how-do-i-dump-an-entire-python-process-for-later-debugging-inspection) Once I have one, how do I do something useful with it? | Using Python's `gc` garbage collector interface and `sys.getsizeof()` it's possible to dump all the python objects and their sizes. Here's the code I'm using in production to troubleshoot a memory leak:
```
rss = psutil.Process(os.getpid()).get_memory_info().rss
# Dump variables if using more than 100MB of memory
if rss > 100 * 1024 * 1024:
memory_dump()
os.abort()
def memory_dump():
dump = open("memory.pickle", 'w')
for obj in gc.get_objects():
i = id(obj)
size = sys.getsizeof(obj, 0)
# referrers = [id(o) for o in gc.get_referrers(obj) if hasattr(o, '__class__')]
referents = [id(o) for o in gc.get_referents(obj) if hasattr(o, '__class__')]
if hasattr(obj, '__class__'):
cls = str(obj.__class__)
cPickle.dump({'id': i, 'class': cls, 'size': size, 'referents': referents}, dump)
```
Note that I'm only saving data from objects that have a `__class__` attribute because those are the only objects I care about. It should be possible to save the complete list of objects, but you will need to take care choosing other attributes. Also, I found that getting the referrers for each object was extremely slow so I opted to save only the referents. Anyway, after the crash, the resulting pickled data can be read back like this:
```
dump = open("memory.pickle")
try:
    while True:
        obj = cPickle.load(dump)
        # inspect obj here
except EOFError:
    pass
``` |
How do I wrap a string in a file in Python? | 141,449 | 49 | 2008-09-26T19:33:55Z | 141,451 | 67 | 2008-09-26T19:34:04Z | [
"python",
"string",
"file",
"wrap"
] | How do I create a file-like object (same duck type as File) with the contents of a string? | Use the [StringIO](https://docs.python.org/2/library/stringio.html) module. For example:
```
>>> from cStringIO import StringIO
>>> f = StringIO('foo')
>>> f.read()
'foo'
```
I use cStringIO (which is faster), but note that it doesn't [accept Unicode strings that cannot be encoded as plain ASCII strings](http://docs.python.org/lib/module-cStringIO.html). (You can switch to StringIO by changing "from cStringIO" to "from StringIO".) |
How do I wrap a string in a file in Python? | 141,449 | 49 | 2008-09-26T19:33:55Z | 142,251 | 19 | 2008-09-26T22:00:25Z | [
"python",
"string",
"file",
"wrap"
] | How do I create a file-like object (same duck type as File) with the contents of a string? | In Python 3.0:
```
import io
with io.StringIO() as f:
f.write('abcdef')
print('gh', file=f)
f.seek(0)
print(f.read())
``` |
Is there an easy way to populate SlugField from CharField? | 141,487 | 20 | 2008-09-26T19:40:57Z | 141,554 | 43 | 2008-09-26T19:51:46Z | [
"python",
"django",
"slug"
] | ```
class Foo(models.Model):
title = models.CharField(max_length=20)
slug = models.SlugField()
```
Is there a built-in way to get the slug field to autopopulate based on the title? Perhaps in the Admin and outside of the Admin. | for Admin in Django 1.0 and up, you'd need to use
```
prepopulated_fields = {'slug': ('title',), }
```
in your admin.py
Your key in the prepopulated\_fields dictionary is the field you want filled, and the value is a tuple of fields you want concatenated.
Outside of admin, you can use the `slugify` function in your views. In templates, you can use the `|slugify` filter.
There is also this package which will take care of this automatically: <https://pypi.python.org/pypi/django-autoslug> |
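Outside Django, the same effect can be approximated in plain Python. The `simple_slugify` below is a rough, hypothetical stand-in for Django's `slugify` written for illustration, not the real implementation (which also handles Unicode normalization):

```python
import re


def simple_slugify(value):
    # Lowercase, drop non-word characters, collapse whitespace/hyphens to '-'.
    value = value.strip().lower()
    value = re.sub(r"[^\w\s-]", "", value)
    return re.sub(r"[-\s]+", "-", value)


print(simple_slugify("Hello, World! 2008"))  # hello-world-2008
```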
overloading __init__ in python | 141,545 | 186 | 2008-09-26T19:49:46Z | 141,777 | 257 | 2008-09-26T20:30:15Z | [
"python",
"constructor",
"operator-overloading"
] | Let's say I have a class that has a member called data which is a list.
I want to be able to initialize the class with, for example, a filename (which contains data to initialize the list) or with an actual list.
What's your technique for doing this?
Do you just check the type by looking at `__class__`?
Is there some trick I might be missing?
I'm used to C++ where overloading by argument type is easy.
Thanks. | A much neater way to get 'alternate constructors' is to use classmethods. For instance:
```
>>> class MyData:
... def __init__(self, data):
... "Initialize MyData from a sequence"
... self.data = data
...
... @classmethod
... def fromfilename(cls, filename):
... "Initialize MyData from a file"
... data = open(filename).readlines()
... return cls(data)
...
... @classmethod
... def fromdict(cls, datadict):
... "Initialize MyData from a dict's items"
... return cls(datadict.items())
...
>>> MyData([1, 2, 3]).data
[1, 2, 3]
>>> MyData.fromfilename("/tmp/foobar").data
['foo\n', 'bar\n', 'baz\n']
>>> MyData.fromdict({"spam": "ham"}).data
[('spam', 'ham')]
```
The reason it's neater is that there is no doubt about what type is expected, and you aren't forced to guess at what the caller intended for you to do with the datatype it gave you. The problem with `isinstance(x, basestring)` is that there is no way for the caller to tell you, for instance, that even though the type is not a basestring, you should treat it as a string (and not another sequence.) And perhaps the caller would like to use the same type for different purposes, sometimes as a single item, and sometimes as a sequence of items. Being explicit takes all doubt away and leads to more robust and clearer code. |
overloading __init__ in python | 141,545 | 186 | 2008-09-26T19:49:46Z | 212,130 | 26 | 2008-10-17T13:34:31Z | [
"python",
"constructor",
"operator-overloading"
] | Let's say I have a class that has a member called data which is a list.
I want to be able to initialize the class with, for example, a filename (which contains data to initialize the list) or with an actual list.
What's your technique for doing this?
Do you just check the type by looking at `__class__`?
Is there some trick I might be missing?
I'm used to C++ where overloading by argument type is easy.
Thanks. | Excellent question. I've tackled this problem as well, and while I agree that "factories" (class-method constructors) are a good method, I would like to suggest another, which I've also found very useful:
Here's a sample (this is a `read` method and not a constructor, but the idea is the same):
```
def read(self, str=None, filename=None, addr=0):
""" Read binary data and return a store object. The data
store is also saved in the interal 'data' attribute.
The data can either be taken from a string (str
argument) or a file (provide a filename, which will
be read in binary mode). If both are provided, the str
    will be used. If neither is provided, a ValueError
is raised.
"""
if str is None:
if filename is None:
            raise ValueError('Please supply a string or a filename')
file = open(filename, 'rb')
str = file.read()
file.close()
...
... # rest of code
```
The key idea is here is using Python's excellent support for named arguments to implement this. Now, if I want to read the data from a file, I say:
```
obj.read(filename="blob.txt")
```
And to read it from a string, I say:
```
obj.read(str="\x34\x55")
```
This way the user has just a single method to call. Handling it inside, as you saw, is not overly complex. |
What limitations have closures in Python compared to language X closures? | 141,642 | 40 | 2008-09-26T20:06:41Z | 141,710 | 39 | 2008-09-26T20:19:27Z | [
"python",
"closures"
] | Where X is any programming language (C#, Javascript, Lisp, Perl, Ruby, Scheme, etc) which supports some flavour of closures.
Some limitations are mentioned in the [Closures in Python](http://ivan.truemesh.com/archives/000411.html) (compared to Ruby's closures), but the article is old and many limitations do not exist in modern Python any more.
Seeing a code example for a concrete limitation would be great.
**Related questions**:
* [Can you explain closures (as they relate to Python)?](http://stackoverflow.com/questions/13857/can-you-explain-closures-as-they-relate-to-python)
* [What is a "Closure"?](http://stackoverflow.com/questions/36636/what-is-a-closure)
* [How does a JavaScript closure work?](http://stackoverflow.com/questions/111102/how-does-a-javascript-closure-work) | The most important limitation, currently, is that you cannot assign to an outer-scope variable. In other words, closures are read-only:
```
>>> def outer(x):
... def inner_reads():
... # Will return outer's 'x'.
... return x
... def inner_writes(y):
... # Will assign to a local 'x', not the outer 'x'
... x = y
... def inner_error(y):
... # Will produce an error: 'x' is local because of the assignment,
... # but we use it before it is assigned to.
... tmp = x
... x = y
... return tmp
... return inner_reads, inner_writes, inner_error
...
>>> inner_reads, inner_writes, inner_error = outer(5)
>>> inner_reads()
5
>>> inner_writes(10)
>>> inner_reads()
5
>>> inner_error(10)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 11, in inner_error
UnboundLocalError: local variable 'x' referenced before assignment
```
A name that gets assigned to in a local scope (a function) is always local, unless declared otherwise. While there is the 'global' declaration to declare a variable global even when it is assigned to, there is no such declaration for enclosed variables -- yet. In Python 3.0, there is (will be) the 'nonlocal' declaration that does just that.
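For completeness, a sketch of the Python 3 `nonlocal` form (this will not run on Python 2):

```python
def outer(x):
    def inner_reads():
        return x
    def inner_writes(y):
        nonlocal x  # rebinds the enclosing function's 'x' (Python 3 only)
        x = y
    return inner_reads, inner_writes

reads, writes = outer(5)
writes(10)
print(reads())  # 10
```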
You can work around this limitation in the mean time by using a mutable container type:
```
>>> def outer(x):
... x = [x]
... def inner_reads():
... # Will return outer's x's first (and only) element.
... return x[0]
... def inner_writes(y):
... # Will look up outer's x, then mutate it.
... x[0] = y
... def inner_error(y):
... # Will now work, because 'x' is not assigned to, just referenced.
... tmp = x[0]
... x[0] = y
... return tmp
... return inner_reads, inner_writes, inner_error
...
>>> inner_reads, inner_writes, inner_error = outer(5)
>>> inner_reads()
5
>>> inner_writes(10)
>>> inner_reads()
10
>>> inner_error(15)
10
>>> inner_reads()
15
``` |
Python: How to make a cross-module variable? | 142,545 | 82 | 2008-09-26T23:59:47Z | 142,566 | 75 | 2008-09-27T00:09:29Z | [
"python",
"module",
"global"
] | The `__debug__` variable is handy in part because it affects every module. If I want to create another variable that works the same way, how would I do it?
The variable (let's be original and call it 'foo') doesn't have to be truly global, in the sense that if I change foo in one module, it is updated in others. I'd be fine if I could set foo before importing other modules and then they would see the same value for it. | I don't endorse this solution in any way, shape or form. But if you add a variable to the `__builtin__` module, it will be accessible as if a global from any other module that includes `__builtin__` -- which is all of them, by default.
a.py contains
```
print foo
```
b.py contains
```
import __builtin__
__builtin__.foo = 1
import a
```
The result is that "1" is printed.
**Edit:** The `__builtin__` module is available as the local symbol `__builtins__` -- that's the reason for the discrepancy between two of these answers. Also note that `__builtin__` has been renamed to `builtins` in python3. |
Python: How to make a cross-module variable? | 142,545 | 82 | 2008-09-26T23:59:47Z | 142,581 | 17 | 2008-09-27T00:15:19Z | [
"python",
"module",
"global"
] | The `__debug__` variable is handy in part because it affects every module. If I want to create another variable that works the same way, how would I do it?
The variable (let's be original and call it 'foo') doesn't have to be truly global, in the sense that if I change foo in one module, it is updated in others. I'd be fine if I could set foo before importing other modules and then they would see the same value for it. | Define a module (call it "globalbaz") and have the variables defined inside it. All the modules using this "pseudoglobal" should import the "globalbaz" module and refer to it using "globalbaz.var\_name".
This works regardless of the place of the change, you can change the variable before or after the import. The imported module will use the latest value. (I tested this in a toy example)
For clarification, globalbaz.py looks just like this:
```
var_name = "my_useful_string"
``` |
Python: How to make a cross-module variable? | 142,545 | 82 | 2008-09-26T23:59:47Z | 142,601 | 101 | 2008-09-27T00:25:00Z | [
"python",
"module",
"global"
] | The `__debug__` variable is handy in part because it affects every module. If I want to create another variable that works the same way, how would I do it?
The variable (let's be original and call it 'foo') doesn't have to be truly global, in the sense that if I change foo in one module, it is updated in others. I'd be fine if I could set foo before importing other modules and then they would see the same value for it. | If you need a global cross-module variable maybe just simple global module-level variable will suffice.
a.py:
```
var = 1
```
b.py:
```
import a
print a.var
import c
print a.var
```
c.py:
```
import a
a.var = 2
```
Test:
```
$ python b.py
# -> 1 2
```
Real-world example: [Django's global\_settings.py](https://github.com/django/django/blob/master/django/conf/global_settings.py) (though in Django apps settings are used by importing the *object* [`django.conf.settings`](http://docs.djangoproject.com/en/dev/topics/settings/#using-settings-in-python-code)). |
Python: How to make a cross-module variable? | 142,545 | 82 | 2008-09-26T23:59:47Z | 3,269,974 | 8 | 2010-07-17T02:22:22Z | [
"python",
"module",
"global"
] | The `__debug__` variable is handy in part because it affects every module. If I want to create another variable that works the same way, how would I do it?
The variable (let's be original and call it 'foo') doesn't have to be truly global, in the sense that if I change foo in one module, it is updated in others. I'd be fine if I could set foo before importing other modules and then they would see the same value for it. | You can pass the globals of one module to another:
In Module A:
```
import module_b
my_var=2
module_b.do_something_with_my_globals(globals())
print my_var
```
In Module B:
```
def do_something_with_my_globals(glob): # glob is simply a dict.
glob["my_var"]=3
``` |
Python: How to make a cross-module variable? | 142,545 | 82 | 2008-09-26T23:59:47Z | 15,035,172 | 13 | 2013-02-22T23:48:03Z | [
"python",
"module",
"global"
] | The `__debug__` variable is handy in part because it affects every module. If I want to create another variable that works the same way, how would I do it?
The variable (let's be original and call it 'foo') doesn't have to be truly global, in the sense that if I change foo in one module, it is updated in others. I'd be fine if I could set foo before importing other modules and then they would see the same value for it. | I believe that there are plenty of circumstances in which it does make sense and it simplifies programming to have some globals that are known across several (tightly coupled) modules. In this spirit, I would like to elaborate a bit on the idea of having a module of globals which is imported by those modules which need to reference them.
When there is only one such module, I name it "g". In it, I assign default values for every variable I intend to treat as global. In each module that uses any of them, I do not use "from g import var", as this only results in a local variable which is initialized from g only at the time of the import. I make most references in the form g.var, and the "g." serves as a constant reminder that I am dealing with a variable that is potentially accessible to other modules.
If the value of such a global variable is to be used frequently in some function in a module, then that function can make a local copy: var = g.var. However, it is important to realize that assignments to var are local, and global g.var cannot be updated without referencing g.var explicitly in an assignment.
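A runnable sketch of the pattern; it fakes the `g.py` file with `types.ModuleType` so the example is self-contained, but in practice you would simply create a real `g.py` holding the defaults:

```python
import sys
import types

# Stand-in for a real g.py file containing "var = 0".
g = types.ModuleType("g")
g.var = 0
sys.modules["g"] = g

import g  # what every other module would do

def some_function():
    g.var = 42  # assignment through the module is visible everywhere

some_function()
print(g.var)  # 42

local_copy = g.var  # a snapshot; rebinding local_copy won't touch g.var
```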
Note that you can also have multiple such globals modules shared by different subsets of your modules to keep things a little more tightly controlled. The reason I use short names for my globals modules is to avoid cluttering up the code too much with occurrences of them. With only a little experience, they become mnemonic enough with only 1 or 2 characters.
It is still possible to make an assignment to, say, g.x when x was not already defined in g, and a different module can then access g.x. However, even though the interpreter permits it, this approach is not so transparent, and I do avoid it. There is still the possibility of accidentally creating a new variable in g as a result of a typo in the variable name for an assignment. Sometimes an examination of dir(g) is useful to discover any surprise names that may have arisen by such accident. |
How do I upgrade python 2.5.2 to python 2.6rc2 on ubuntu linux 8.04? | 142,764 | 8 | 2008-09-27T02:15:03Z | 142,770 | 15 | 2008-09-27T02:19:38Z | [
"python",
"linux",
"ubuntu",
"installation"
] | I'd like to know how to upgrade the default python installation (2.5.2) supplied with ubuntu 8.04 to python 2.6rc2. I'd like to make 2.6 the default python version on the system and migrate all the other useful python libraries installed on 2.5.2 to python 2.6rc2. Please let me know how I can achieve this.
Thanks
Dirk | With the warning that I think it's a tremendously bad idea to replace the default Python with an unreleased beta version:
First, install 2.6rc2. You can download the source from the [Python website](http://www.python.org/download/releases/2.6/). Standard `./configure && make && sudo make install` installation style.
Next, remove the `/usr/bin/python` symlink. Do *not* remove `/usr/bin/python2.5`. Add a symlink to 2.6 with `ln -s /usr/local/bin/python2.6 /usr/bin/python`.
Once again, I think this is a terrible idea. There is almost certainly a better way to do whatever you're trying to accomplish.
---
Migrating installed libraries is a much longer process. Look in the `/usr/lib/python2.5/site-packages/` and `/usr/local/lib/python2.5/site-packages/` directories. Any libraries installed to them will need to be re-installed with 2.6. Since you're not using a packaged Python version, you cannot use Ubuntu's packages -- you'll have to manually upgrade all the libraries yourself. Most of them can probably be installed with `sudo easy_install <name>`, but some like PyGTK+ are not so easy. You'll have to follow custom installation procedures for each such library. |
Does Python have a bitfield type? | 142,812 | 34 | 2008-09-27T02:47:40Z | 143,221 | 22 | 2008-09-27T08:20:43Z | [
"python",
"bit-fields",
"bitarray"
] | I need a compact representation of an array of booleans, does Python have a builtin bitfield type or will I need to find an alternate solution? | [Bitarray](http://pypi.python.org/pypi/bitarray/) was the best answer I found, when I recently had a similar need. It's a C extension (so much faster than BitVector, which is pure python) and stores its data in an actual bitfield (so it's eight times more memory efficient than a numpy boolean array, which appears to use a byte per element.) |
Does Python have a bitfield type? | 142,812 | 34 | 2008-09-27T02:47:40Z | 1,574,928 | 10 | 2009-10-15T20:43:54Z | [
"python",
"bit-fields",
"bitarray"
] | I need a compact representation of an array of booleans, does Python have a builtin bitfield type or will I need to find an alternate solution? | You should take a look at the [bitstring](http://python-bitstring.googlecode.com) module, which has recently reached version 2.0.
The binary data is compactly stored as a byte array and can be easily created, modified and analysed.
You can create `BitString` objects from binary, octal, hex, integers (big or little endian), strings, bytes, floats, files and more.
```
a = BitString('0xed44')
b = BitString('0b11010010')
c = BitString(int=100, length=14)
d = BitString('uintle:16=55, 0b110, 0o34')
e = BitString(bytes='hello')
f = pack('<2H, bin:3', 5, 17, '001')
```
You can then analyse and modify them with simple functions or slice notation - no need to worry about bit masks etc.
```
a.prepend('0b110')
if '0b11' in b:
c.reverse()
g = a.join([b, d, e])
g.replace('0b101', '0x3400ee1')
if g[14]:
del g[14:17]
else:
g[55:58] = 'uint:11=33, int:9=-1'
```
There is also a concept of a bit position, so that you can treat it like a file or stream if that's useful to you. Properties are used to give different interpretations of the bit data.
```
w = g.read(10).uint
x, y, z = g.readlist('int:4, int:4, hex:32')
if g.peek(8) == '0x00':
g.pos += 10
```
Plus there's support for the standard bit-wise binary operators, packing, unpacking, endianness and more. The latest version is for Python 2.6 to 3.1, and although it's pure Python it is reasonably well optimised in terms of memory and speed. |
Does Python have a bitfield type? | 142,812 | 34 | 2008-09-27T02:47:40Z | 11,481,471 | 24 | 2012-07-14T06:02:45Z | [
"python",
"bit-fields",
"bitarray"
] | I need a compact representation of an array of booleans, does Python have a builtin bitfield type or will I need to find an alternate solution? | If you mainly want to be able to name your bit fields and easily manipulate them, e.g. to work with flags represented as single bits in a communications protocol, then you can use the standard Structure and Union features of [ctypes](http://docs.python.org/library/ctypes.html), as described at [How Do I Properly Declare a ctype Structure + Union in Python? - Stack Overflow](http://stackoverflow.com/questions/10346375/how-do-i-properly-declare-a-ctype-structure-union-in-python)
For example, to work with the 4 least-significant bits of a byte individually, just name them from least to most significant in a LittleEndianStructure. You use a union to provide access to the same data as a byte or int so you can move the data in or out of the communication protocol. In this case that is done via the `flags.asbyte` field:
```
import ctypes
c_uint8 = ctypes.c_uint8
class Flags_bits(ctypes.LittleEndianStructure):
_fields_ = [
("logout", c_uint8, 1),
("userswitch", c_uint8, 1),
("suspend", c_uint8, 1),
("idle", c_uint8, 1),
]
class Flags(ctypes.Union):
_fields_ = [("b", Flags_bits),
("asbyte", c_uint8)]
flags = Flags()
flags.asbyte = 0xc
print(flags.b.idle)
print(flags.b.suspend)
print(flags.b.userswitch)
print(flags.b.logout)
```
The four bits (which I've printed here starting with the most significant, which seems more natural when printing) are 1, 1, 0, 0, i.e. 0xc in binary. |
Drag and drop onto Python script in Windows Explorer | 142,844 | 37 | 2008-09-27T03:02:30Z | 142,854 | 46 | 2008-09-27T03:06:25Z | [
"python",
"windows",
"drag-and-drop",
"windows-explorer"
] | I would like to drag and drop my data file onto a Python script and have it process the file and generate output. The Python script accepts the name of the data file as a command-line parameter, but Windows Explorer doesn't allow the script to be a drop target.
Is there some kind of configuration that needs to be done somewhere for this to work? | Sure. From a [mindless technology article called "Make Python Scripts Droppable in Windows"](http://mindlesstechnology.wordpress.com/2008/03/29/make-python-scripts-droppable-in-windows/), you can add a drop handler by adding a registry key:
> Here's a registry import file that you can use to do this. Copy the
> following into a .reg file and run it
> (Make sure that your .py extensions
> are mapped to Python.File).
>
> ```
> Windows Registry Editor Version 5.00
>
> [HKEY_CLASSES_ROOT\Python.File\shellex\DropHandler]
> @="{60254CA5-953B-11CF-8C96-00AA00B8708C}"
> ```
This makes Python scripts use the WSH drop handler, which is compatible with long filenames. To use the short filename handler, replace the GUID with `86C86720-42A0-1069-A2E8-08002B30309D`.
A comment in that post indicates that one can enable dropping on "no console Python files (`.pyw`)" or "compiled Python files (`.pyc`)" by using the `Python.NoConFile` and `Python.CompiledFile` classes. |
Drag and drop onto Python script in Windows Explorer | 142,844 | 37 | 2008-09-27T03:02:30Z | 10,246,159 | 18 | 2012-04-20T12:21:36Z | [
"python",
"windows",
"drag-and-drop",
"windows-explorer"
] | I would like to drag and drop my data file onto a Python script and have it process the file and generate output. The Python script accepts the name of the data file as a command-line parameter, but Windows Explorer doesn't allow the script to be a drop target.
Is there some kind of configuration that needs to be done somewhere for this to work? | Write a simple batch file (file.bat):
```
"c:\Python27\python.exe" yourprogram.py %1
```
where %1 stands for the first argument you pass to the script.
Now drag & drop your target files onto the file.bat icon. |
Has anyone found a good set of python plugins for vim -- specifically module completion? | 144,201 | 22 | 2008-09-27T18:28:42Z | 144,212 | 17 | 2008-09-27T18:42:39Z | [
"python",
"vim",
"code-completion"
] | I'm looking for a suite of plugins that can help me finally switch over to vim full-time.
Right now I'm using Komodo with some good success, but their vim bindings have enough little errors that I'm tired of it.
What I do love in Komodo, though, is the code completion. So, here's what I'm looking for (ordered by importance).
* Code completion, meaning: the ability to code complete modules/functions/etc. in *any* module that's on the pythonpath, **not just system modules**. Bonus points for showing docstrings when completing.
* Jump-to a class definition. I'm guessing CTAGS will do this, so how do you all manage automatically updating your tags files?
* Project type management for managing buffers: ideally the ability to grep for a filename in a directory structure to open it. Bonus for showing an index of class definitions while a buffer is open.
* Bzr integration. Not super important, since most of it I can just drop to the shell to do. | [Here you can find some info](http://www.sontek.net/python-with-a-modular-ide-vim) about this.
It covers code completion and getting a list of classes and functions in open files. I haven't gotten around to doing a full configuration for vim, since I don't use Python primarily, but I have the same interest in transforming vim into a better Python IDE.
**Edit:** The original site is down, so I found it [saved on the web archive](https://web.archive.org/web/20110106042207/http://sontek.net/python-with-a-modular-ide-vim). |
Python PostgreSQL modules. Which is best? | 144,448 | 20 | 2008-09-27T20:55:04Z | 144,462 | 15 | 2008-09-27T21:00:21Z | [
"python",
"postgresql",
"module"
] | I've seen a number of postgresql modules for python like pygresql, pypgsql, psyco. Most of them are Python DB API 2.0 compliant, some are not being actively developed anymore.
Which module do you recommend? Why? | psycopg2 seems to be the most popular. I've never had any trouble with it. There's actually a pure Python interface for PostgreSQL too, called [bpgsql](http://barryp.org/software/bpgsql/). I wouldn't recommend it over psycopg2, but it's recently become capable enough to support Django and is useful if you can't compile C modules. |
Calling C/C++ from python? | 145,270 | 270 | 2008-09-28T05:34:20Z | 145,283 | 23 | 2008-09-28T05:44:11Z | [
"c++",
"python",
"c"
] | What would be the quickest way to construct a python binding to a C or C++ library?
(using windows if this matters) | The quickest way to do this is using [SWIG](http://www.swig.org/).
Example from SWIG [tutorial](http://www.swig.org/tutorial.html):
```
/* File : example.c */
int fact(int n) {
if (n <= 1) return 1;
else return n*fact(n-1);
}
```
Interface file:
```
/* example.i */
%module example
%{
/* Put header files here or function declarations like below */
extern int fact(int n);
%}
extern int fact(int n);
```
Building a Python module on Unix:
```
swig -python example.i
gcc -fPIC -c example.c example_wrap.c -I/usr/local/include/python2.7
gcc -shared example.o example_wrap.o -o _example.so
```
Usage:
```
>>> import example
>>> example.fact(5)
120
```
Note that you have to have python-dev installed. Also, on some systems the python header files will be in /usr/include/python2.7, depending on how you installed it.
From the tutorial:
> SWIG is a fairly complete C++ compiler with support for nearly every language feature. This includes preprocessing, pointers, classes, inheritance, and even C++ templates. SWIG can also be used to package structures and classes into proxy classes in the target language, exposing the underlying functionality in a very natural manner. |
Calling C/C++ from python? | 145,270 | 270 | 2008-09-28T05:34:20Z | 145,287 | 12 | 2008-09-28T05:48:59Z | [
"c++",
"python",
"c"
] | What would be the quickest way to construct a python binding to a C or C++ library?
(using windows if this matters) | I've never used it, but I've heard good things about [ctypes](https://docs.python.org/3.6/library/ctypes.html). If you're trying to use it with C++, be sure to avoid name mangling via [`extern "C"`](http://stackoverflow.com/q/1041866/2157640). *Thanks for the comment, Florian Bösch.* |
Calling C/C++ from python? | 145,270 | 270 | 2008-09-28T05:34:20Z | 145,305 | 9 | 2008-09-28T06:00:56Z | [
"c++",
"python",
"c"
] | What would be the quickest way to construct a python binding to a C or C++ library?
(using windows if this matters) | [This paper, claiming python to be all a scientist needs,](http://openwetware.org/wiki/Julius_B._Lucks/Projects/Python_All_A_Scientist_Needs) basically says: first prototype everything in Python. Then when you need to speed a part up, use SWIG and translate this part to C. |
Calling C/C++ from python? | 145,270 | 270 | 2008-09-28T05:34:20Z | 145,384 | 16 | 2008-09-28T06:53:18Z | [
"c++",
"python",
"c"
] | What would be the quickest way to construct a python binding to a C or C++ library?
(using windows if this matters) | Check out [pyrex](http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/) or [cython](http://cython.org/). They're python-like languages for interfacing between C/C++ and python. |
Calling C/C++ from python? | 145,270 | 270 | 2008-09-28T05:34:20Z | 145,436 | 69 | 2008-09-28T07:51:37Z | [
"c++",
"python",
"c"
] | What would be the quickest way to construct a python binding to a C or C++ library?
(using windows if this matters) | You should have a look at [Boost.Python](http://www.boost.org/doc/libs/1_49_0/libs/python/doc/), here is the short introdution taken from their website:
> The Boost Python Library is a framework for interfacing Python and
> C++. It allows you to quickly and seamlessly expose C++ classes
> functions and objects to Python, and vice-versa, using no special
> tools -- just your C++ compiler. It is designed to wrap C++ interfaces
> non-intrusively, so that you should not have to change the C++ code at
> all in order to wrap it, making Boost.Python ideal for exposing
> 3rd-party libraries to Python. The library's use of advanced
> metaprogramming techniques simplifies its syntax for users, so that
> wrapping code takes on the look of a kind of declarative interface
> definition language (IDL). |
Calling C/C++ from python? | 145,270 | 270 | 2008-09-28T05:34:20Z | 145,649 | 339 | 2008-09-28T10:53:31Z | [
"c++",
"python",
"c"
] | What would be the quickest way to construct a python binding to a C or C++ library?
(using windows if this matters) | I like [ctypes](http://docs.python.org/2/library/ctypes.html) a lot; [swig](http://www.swig.org/) always tended to give me [problems](http://groups.google.com/group/comp.lang.python/browse_thread/thread/d94badd9847fe43a?pli=1). Also, ctypes has the advantage that you don't need to satisfy any compile-time dependency on python, and your binding will work on any python that has ctypes, not just the one it was compiled against.
Suppose you have a simple C++ example class you want to talk to in a file called foo.cpp:
```
#include <iostream>
class Foo{
public:
void bar(){
std::cout << "Hello" << std::endl;
}
};
```
Since ctypes can only talk to C functions, you need to provide those functions, declaring them as extern "C":
```
extern "C" {
Foo* Foo_new(){ return new Foo(); }
void Foo_bar(Foo* foo){ foo->bar(); }
}
```
Next you have to compile this to a shared library
```
g++ -c -fPIC foo.cpp -o foo.o
g++ -shared -Wl,-soname,libfoo.so -o libfoo.so foo.o
```
And finally you have to write your python wrapper (e.g. in fooWrapper.py)
```
from ctypes import cdll
lib = cdll.LoadLibrary('./libfoo.so')
class Foo(object):
def __init__(self):
self.obj = lib.Foo_new()
def bar(self):
lib.Foo_bar(self.obj)
```
Once you have that you can call it like
```
f = Foo()
f.bar() #and you will see "Hello" on the screen
``` |
Calling C/C++ from python? | 145,270 | 270 | 2008-09-28T05:34:20Z | 23,865,947 | 18 | 2014-05-26T08:30:54Z | [
"c++",
"python",
"c"
] | What would be the quickest way to construct a python binding to a C or C++ library?
(using windows if this matters) | I started my journey into python <-> C++ binding from this page, with the objective of linking high-level data types (multidimensional STL vectors with python lists) :-)
Having tried the solutions based on both [ctypes](https://docs.python.org/2/library/ctypes.html) and [boost.python](http://www.boost.org/doc/libs/1_55_0/libs/python/doc/tutorial/doc/html/index.html) (and not being a software engineer), I found them complex when high-level datatype binding is required, while I found [swig](http://swig.org) much simpler for such cases.
This example uses therefore SWIG and it has been tested in Linux (but swig is available and is widely used in Windows too).
The objective is to make available to python a C++ function that takes a matrix in form of a 2D STL vector and returns an average of each row (as a 1D STL vector).
The code in C++ ("code.cpp") is as follow:
```
#include <vector>
#include "code.h"
using namespace std;
vector<double> average (vector< vector<double> > i_matrix) {
// compute average of each row..
vector <double> averages;
for (int r = 0; r < i_matrix.size(); r++){
double rsum = 0.0;
double ncols= i_matrix[r].size();
for (int c = 0; c< i_matrix[r].size(); c++){
rsum += i_matrix[r][c];
}
averages.push_back(rsum/ncols);
}
return averages;
}
```
The equivalent header ("code.h") is:
```
#ifndef _code
#define _code
#include <vector>
std::vector<double> average (std::vector< std::vector<double> > i_matrix);
#endif
```
We first compile the C++ code to create an object file:
```
g++ -c -fPIC code.cpp
```
We then define a [swig interface definition file](http://swig.org/Doc3.0/SWIG.html) ("code.i") for our C++ functions.
```
%module code
%{
#include "code.h"
%}
%include "std_vector.i"
namespace std {
/* On a side note, the names VecDouble and VecVecdouble can be changed, but the inner vector must be declared first! */
%template(VecDouble) vector<double>;
%template(VecVecdouble) vector< vector<double> >;
}
%include "code.h"
```
Using swig, we generate C++ interface source code from the swig interface definition file:
```
swig -c++ -python code.i
```
We finally compile the generated C++ interface source file and link everything together to generate a shared library that is directly importable by python (the "\_" matters):
```
g++ -c -fPIC code_wrap.cxx -I/usr/include/python2.7 -I/usr/lib/python2.7
g++ -shared -Wl,-soname,_code.so -o _code.so code.o code_wrap.o
```
We can now use the function in python scripts:
```
#!/usr/bin/env python
import code
a= [[3,5,7],[8,10,12]]
print a
b = code.average(a)
print "Assignment done"
print a
print b
``` |
Text difference algorithm | 145,607 | 42 | 2008-09-28T10:12:25Z | 145,609 | 23 | 2008-09-28T10:14:43Z | [
"c#",
"python",
"diff"
] | I need an algorithm that can compare two text files and highlight their difference and ( even better!) can compute their difference in a meaningful way (like two similar files should have a similarity score higher than two dissimilar files, with the word "similar" defined in the normal terms). It sounds easy to implement, but it's not.
The implementation can be in c# or python.
Thanks. | Look at [difflib](http://docs.python.org/lib/module-difflib.html). (Python)
That will calculate the diffs in various formats. You could then use the size of the context diff as a measure of how different two documents are? |
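A minimal sketch of both suggestions — generating a diff with `difflib` and treating the number of changed lines as a rough difference score (the sample strings are my own):

```python
import difflib

a = "the quick brown fox\njumps over the lazy dog\n".splitlines(True)
b = "the quick brown fox\nleaps over the lazy dog\n".splitlines(True)

# Produce a unified diff between the two line lists.
diff = list(difflib.unified_diff(a, b, fromfile="a.txt", tofile="b.txt"))
print("".join(diff))

# Crude difference score: count changed lines, skipping the file headers.
changed = sum(1 for line in diff
              if line.startswith(("+", "-"))
              and not line.startswith(("+++", "---")))
print(changed)  # 2: one line removed, one line added
```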
Text difference algorithm | 145,607 | 42 | 2008-09-28T10:12:25Z | 145,634 | 10 | 2008-09-28T10:35:02Z | [
"c#",
"python",
"diff"
] | I need an algorithm that can compare two text files and highlight their difference and ( even better!) can compute their difference in a meaningful way (like two similar files should have a similarity score higher than two dissimilar files, with the word "similar" defined in the normal terms). It sounds easy to implement, but it's not.
The implementation can be in c# or python.
Thanks. | [Bazaar](http://bazaar-vcs.org/) contains an alternative difference algorithm, called [patience diff](http://bramcohen.livejournal.com/37690.html) (there's more info in the comments on that page) which is claimed to be better than the traditional diff algorithm. The file 'patiencediff.py' in the bazaar distribution is a simple command line front end. |
Text difference algorithm | 145,607 | 42 | 2008-09-28T10:12:25Z | 145,659 | 30 | 2008-09-28T11:04:31Z | [
"c#",
"python",
"diff"
] | I need an algorithm that can compare two text files and highlight their difference and ( even better!) can compute their difference in a meaningful way (like two similar files should have a similarity score higher than two dissimilar files, with the word "similar" defined in the normal terms). It sounds easy to implement, but it's not.
The implementation can be in c# or python.
Thanks. | I can recommend to take a look at Neil Fraser's code and articles:
[google-diff-match-patch](http://code.google.com/p/google-diff-match-patch/)
> Currently available in Java,
> JavaScript, C++ and Python. Regardless
> of language, each library features the
> same API and the same functionality.
> All versions also have comprehensive
> test harnesses.
[Neil Fraser: Diff Strategies](http://neil.fraser.name/writing/diff/) - for theory and implementation notes |
Text difference algorithm | 145,607 | 42 | 2008-09-28T10:12:25Z | 146,957 | 25 | 2008-09-28T23:02:33Z | [
"c#",
"python",
"diff"
] | I need an algorithm that can compare two text files and highlight their difference and ( even better!) can compute their difference in a meaningful way (like two similar files should have a similarity score higher than two dissimilar files, with the word "similar" defined in the normal terms). It sounds easy to implement, but it's not.
The implementation can be in c# or python.
Thanks. | In Python, there is [difflib](http://docs.python.org/lib/module-difflib.html), as others have also suggested.
`difflib` offers the [SequenceMatcher](http://docs.python.org/lib/sequence-matcher.html) class, which can be used to give you a similarity ratio. Example function:
```
def text_compare(text1, text2, isjunk=None):
return difflib.SequenceMatcher(isjunk, text1, text2).ratio()
``` |
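Used on the question's similar-vs-dissimilar criterion (sample strings are mine), the ratio comes out higher for the closer pair:

```python
import difflib

def text_compare(text1, text2, isjunk=None):
    return difflib.SequenceMatcher(isjunk, text1, text2).ratio()

# ratio() returns a float in [0, 1]; higher means more alike.
similar = text_compare("I like apple pie", "I like apple tart")
dissimilar = text_compare("I like apple pie", "quarterly revenue fell")

print(round(similar, 2), round(dissimilar, 2))
assert similar > dissimilar
```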
Text difference algorithm | 145,607 | 42 | 2008-09-28T10:12:25Z | 478,615 | 8 | 2009-01-26T00:44:40Z | [
"c#",
"python",
"diff"
] | I need an algorithm that can compare two text files and highlight their difference and ( even better!) can compute their difference in a meaningful way (like two similar files should have a similarity score higher than two dissimilar files, with the word "similar" defined in the normal terms). It sounds easy to implement, but it's not.
The implementation can be in c# or python.
Thanks. | My current understanding is that the best solution to the Shortest Edit Script (SES) problem is Myers "middle-snake" method with the Hirschberg linear space refinement.
The Myers algorithm is described in:
> E. Myers, ``An O(ND) Difference
> Algorithm and Its Variations,''
> Algorithmica 1, 2 (1986), 251-266.
The GNU diff utility uses the Myers algorithm.
The "similarity score" you speak of is called the "edit distance" in the literature which is the number of inserts or deletes necessary to transform one sequence into the other.
Note that a number of people have cited the Levenshtein distance algorithm but that is, albeit easy to implement, not the optimal solution as it is inefficient (requires the use of a possibly huge n\*m matrix) and does not provide the "edit script" which is the sequence of edits that could be used to transform one sequence into the other and vice versa.
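For concreteness, here is the straightforward dynamic-programming Levenshtein computation — easy to write, but it is exactly the O(n\*m)-space approach criticized above, and it yields only the distance, not the edit script (illustration mine):

```python
def edit_distance(a, b):
    """Classic Wagner-Fischer DP over the full n*m matrix."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete everything remaining in a
    for j in range(n + 1):
        dp[0][j] = j  # insert everything remaining from b
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution/match
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```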
For a good Myers / Hirschberg implementation look at:
<http://www.ioplex.com/~miallen/libmba/dl/src/diff.c>
The particular library that it is contained within is no longer maintained but to my knowledge the diff.c module itself is still correct.
Mike |
How do I successfully pass a function reference to Django's reverse() function? | 146,522 | 8 | 2008-09-28T19:15:15Z | 146,524 | 8 | 2008-09-28T19:17:44Z | [
"python",
"django"
] | I've got a brand new Django project. I've added one minimal view function to `views.py`, and one URL pattern to `urls.py`, passing the view by function reference instead of a string:
```
# urls.py
# -------
# coding=utf-8
from django.conf.urls.defaults import *
from myapp import views
urlpatterns = patterns('',
url(r'^myview/$', views.myview),
)
# views.py
----------
# coding=utf-8
from django.http import HttpResponse
def myview(request):
return HttpResponse('MYVIEW LOL', content_type="text/plain")
```
I'm trying to use `reverse()` to get the URL, by passing it a function reference. But I'm not getting a match, despite confirming that the view function I'm passing to reverse is the exact same view function I put in the URL pattern:
```
>>> from django.core.urlresolvers import reverse
>>> import urls
>>> from myapp import views
>>> urls.urlpatterns[0].callback is views.myview
True
>>> reverse(views.myview)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/Library/Python/2.5/site-packages/django/core/urlresolvers.py", line 254, in reverse
*args, **kwargs)))
File "/Library/Python/2.5/site-packages/django/core/urlresolvers.py", line 243, in reverse
"arguments '%s' not found." % (lookup_view, args, kwargs))
NoReverseMatch: Reverse for '<function myview at 0x6fe6b0>' with arguments '()' and keyword arguments '{}' not found.
```
As far as I can tell from the documentation, function references should be fine in both the URL pattern and `reverse()`.
* [URL patterns with function references](http://docs.djangoproject.com/en/dev/topics/http/urls/#passing-callable-objects-instead-of-strings)
* [`reverse` with function references](http://docs.djangoproject.com/en/dev/topics/http/urls/?from=olddocs#reverse)
I'm using the Django trunk, revision 9092. | Got it!! The problem is that some of the imports are of `myproject.myapp.views`, and some are just of `myapp.views`. This is confusing the Python module system enough that it no longer detects the functions as the same object. This is because your main `settings.py` probably has a line like:
```
ROOT_URLCONF = 'myproject.urls'
```
To solve this, try using the full import in your shell session:
```
>>> from django.core.urlresolvers import reverse
>>> from myproject.myapp import views
>>> reverse(views.myview)
'/myview/'
```
Here's a log of the debugging session, for any interested future readers:
```
>>> from django.core import urlresolvers
>>> from myapp import myview
>>> urlresolvers.get_resolver (None).reverse_dict
{None: ([(u'myview/', [])], 'myview/$'), <function myview at 0x845d17c>: ([(u'myview/', [])], 'myview/$')}
>>> v1 = urlresolvers.get_resolver (None).reverse_dict.items ()[1][0]
>>> reverse(v1)
'/myview/'
>>> v1 is myview
False
>>> v1.__module__
'testproject.myapp.views'
>>> myview.__module__
'myapp.views'
```
What happens if you change the URL match to be `r'^myview/$'`?
---
Have you tried it with the view name? Something like `reverse ('myapp.myview')`?
Is `urls.py` the root URLconf, or in the `myapp` application? There needs to be a full path from the root to a view for it to be resolved. If that's `myproject/myapp/urls.py`, then in `myproject/urls.py` you'll need code like this:
```
from django.conf.urls.defaults import patterns, include
urlpatterns = patterns('',
    (r'^', include('myapp.urls')),
)
``` |
I'm using Python regexes in a criminally inefficient manner | 146,607 | 7 | 2008-09-28T20:03:14Z | 146,671 | 10 | 2008-09-28T20:31:44Z | [
"regex",
"algorithm",
"optimization",
"python"
] | My goal here is to create a very simple template language. At the moment, I'm working on replacing a variable with a value, like this:
This input:
> <%"TITLE"="This Is A Test Variable"%>The Web <%"TITLE"%>
Should produce this output:
> The Web This Is A Test Variable
I've got it working. But looking at my code, I'm running multiple identical regexes on the same strings -- that just offends my sense of efficiency. There's got to be a better, more Pythonic way. (It's the two "while" loops that really offend.)
This does pass the unit tests, so if this is silly premature optimization, tell me -- I'm willing to let this go. There may be dozens of these variable definitions and uses in a document, but not hundreds. But I suspect there's obvious (to other people) ways of improving this, and I'm curious what the StackOverflow crowd will come up with.
```
def stripMatchedQuotes(item):
MatchedSingleQuotes = re.compile(r"'(.*)'", re.LOCALE)
MatchedDoubleQuotes = re.compile(r'"(.*)"', re.LOCALE)
item = MatchedSingleQuotes.sub(r'\1', item, 1)
item = MatchedDoubleQuotes.sub(r'\1', item, 1)
return item
def processVariables(item):
VariableDefinition = re.compile(r'<%(.*?)=(.*?)%>', re.LOCALE)
VariableUse = re.compile(r'<%(.*?)%>', re.LOCALE)
Variables={}
while VariableDefinition.search(item):
VarName, VarDef = VariableDefinition.search(item).groups()
VarName = stripMatchedQuotes(VarName).upper().strip()
VarDef = stripMatchedQuotes(VarDef.strip())
Variables[VarName] = VarDef
item = VariableDefinition.sub('', item, 1)
while VariableUse.search(item):
VarName = stripMatchedQuotes(VariableUse.search(item).group(1).upper()).strip()
item = VariableUse.sub(Variables[VarName], item, 1)
return item
``` | The first thing that may improve things is to move the re.compile outside the function. The compilation is cached, but there is a speed hit in checking the cache to see if it's already compiled.
Another possibility is to use a single regex as below:
```
MatchedQuotes = re.compile(r"(['\"])(.*)\1", re.LOCALE)
item = MatchedQuotes.sub(r'\2', item, 1)
```
Finally, you can combine this into the regex in processVariables. Taking [Torsten Marek's](http://stackoverflow.com/questions/146607/im-using-python-regexes-in-a-criminally-inefficient-manner#146683) suggestion to use a function for re.sub, this improves and simplifies things dramatically.
```
VariableDefinition = re.compile(r'<%(["\']?)(.*?)\1=(["\']?)(.*?)\3%>', re.LOCALE)
VarRepl = re.compile(r'<%(["\']?)(.*?)\1%>', re.LOCALE)
def processVariables(item):
vars = {}
def findVars(m):
vars[m.group(2).upper()] = m.group(4)
return ""
item = VariableDefinition.sub(findVars, item)
return VarRepl.sub(lambda m: vars[m.group(2).upper()], item)
print processVariables('<%"TITLE"="This Is A Test Variable"%>The Web <%"TITLE"%>')
```
Here are my timings for 100000 runs:
```
Original : 13.637
Global regexes : 12.771
Single regex : 9.095
Final version : 1.846
```
[Edit] Add missing non-greedy specifier
[Edit2] Added .upper() calls so case insensitive like original version |
In Django, where is the best place to put short snippets of HTML-formatted data? | 146,789 | 8 | 2008-09-28T21:31:21Z | 146,833 | 12 | 2008-09-28T21:53:27Z | [
"python",
"django",
"model-view-controller",
"design-patterns"
] | This question is related to (but perhaps not quite the same as):
<http://stackoverflow.com/questions/61451/does-django-have-html-helpers>
My problem is this: In Django, I am constantly reproducing the basic formatting for low-level database objects. Here's an example:
I have two classes, Person and Address. There are multiple Addresses for each Person, set up like so (in their respective **models.py**)
```
class Person(models.Model):
...
class Address(models.Model):
contact = models.ForeignKey(Person)
```
Now, whenever I look at a Person, I want to see all their Addresses. So suppose **Persons/views.py** has something likeso:
```
def detail(request, person_id):
person = get_object_or_404( Person, pk=person_id )
return render_to_response('persons/details.html',
{ 'title' : unicode(person), 'addresses': person.address_set.all() } )
```
And, I have a template, **persons/details.html**, with code, for example, like-so:
```
{% extends "base.html" %}
{% for address in addresses %}
<b>{{ address.name }}</b>
{{ address.type }} <br>
{{ address.street_1 }}<br>
{{ address.street_2 }}<br>
{{ address.city }} {{ address.stateprov }} {{ address.postalcode }}<br>
{{ address.country }}
<hr>
{% endfor %}
```
I am repeating this code quite a bit, often with minor variations, such when it's in a table, and then < br > must be substituted by < /td >< td >. Other times, I don't want a street\_2 to display (or the < br > after it). All to say, there is fundamental logic that I want to express, that I am even more loath to tote around with block-and-copy!
What I want is a **persons/details.html** with, for example, the following:
```
{% extends "base.html" %}
{% for address in addresses %}
{% address.as_html4 %}
{% endfor %}
```
And if I want an inline table, something like this (I guess!):
```
{% extends "base.html" %}
<table><tr>
{% for address in addresses %}
<tr><td> {% address.as_html4 </td><td> %} </td></tr>
{% endfor %}
</table>
```
The question is, then: Where is the best place to put the formatting? The logic?
Django seems to have the following (plausible) options:
1. Put the formatting in models.py
2. Put the logic/formatting in views.py
3. Put the logic/formatting in some other sub-class of Person or Address (i.e. addresses/html4.py)
4. Create custom tags
Help / insight much appreciated! | Sounds like an [inclusion tag](http://docs.djangoproject.com/en/dev/howto/custom-template-tags/#inclusion-tags) is what you're looking for. You could have a template and tag for each major variation and use the tag's arguments to customise the context for each template as required.
Basic tag definition:
```
@register.inclusion_tag('person/address.html')
def display_address(address):
return {'address': address}
```
Use in templates (assuming the templatetag module containing it has already been `{% load %}`-ed):
```
{% display_address address %}
``` |
How does one do the equivalent of "import * from module" with Python's __import__ function? | 147,507 | 14 | 2008-09-29T04:28:41Z | 147,541 | 26 | 2008-09-29T04:45:07Z | [
"python",
"python-import"
] | Given a string with a module name, how do you import everything in the module as if you had called:
```
from module import *
```
i.e. given string S="module", how does one get the equivalent of the following:
```
__import__(S, fromlist="*")
```
This doesn't seem to perform as expected (as it doesn't import anything).
Thanks! | Please reconsider. The only thing worse than `import *` is *magic* `import *`.
If you really want to:
```
m = __import__ (S)
try:
attrlist = m.__all__
except AttributeError:
attrlist = dir (m)
for attr in attrlist:
globals()[attr] = getattr (m, attr)
``` |
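A variant of the same idea using `importlib.import_module` (available since Python 2.7), which handles dotted module names more predictably than bare `__import__`; the helper name here is my own:

```python
import importlib

def import_all(module_name, namespace):
    """Copy a module's public names into the given namespace dict."""
    module = importlib.import_module(module_name)
    # Respect __all__ if the module defines it; otherwise take every
    # name that doesn't start with an underscore.
    names = getattr(module, "__all__",
                    [n for n in dir(module) if not n.startswith("_")])
    for name in names:
        namespace[name] = getattr(module, name)

import_all("math", globals())
print(pi)  # 3.141592653589793
```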
Debug Pylons application through Eclipse | 147,650 | 11 | 2008-09-29T05:41:00Z | 147,768 | 10 | 2008-09-29T07:03:33Z | [
"python",
"eclipse",
"pylons",
"pydev",
"pyramid"
] | I have Eclipse setup with PyDev and love being able to debug my scripts/apps. I've just started playing around with Pylons and was wondering if there is a way to start up the paster server through Eclipse so I can debug my webapp? | Create a new launch configuration (Python Run)
**Main tab**
Use paster-script.py as main module (you can find it in the Scripts sub-directory in your python installation directory)
Don't forget to add the root folder of your application in the PYTHONPATH zone
**Arguments**
Set the base directory to the root folder also.
As Program Arguments use "serve development.ini" (or whatever you use to debug your app")
**Common Tab**
Check allocate console and launch in background |
How do I manipulate bits in Python? | 147,713 | 24 | 2008-09-29T06:31:27Z | 147,716 | 11 | 2008-09-29T06:34:18Z | [
"python"
] | In C I could, for example, zero out bit #10 in a 32 bit unsigned value like so:
```
unsigned long value = 0xdeadbeef;
value &= ~(1<<10);
```
How do I do that in Python? | ```
value = 0xdeadbeef
value &= ~(1<<10)
``` |
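Verifying the result (my own check; bit 10 of `0xbeef` is set, so clearing it turns `0xbeef` into `0xbaef`):

```python
value = 0xdeadbeef
value &= ~(1 << 10)

print(hex(value))  # 0xdeadbaef
assert (value >> 10) & 1 == 0  # bit 10 is now clear
```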
How do I manipulate bits in Python? | 147,713 | 24 | 2008-09-29T06:31:27Z | 147,736 | 36 | 2008-09-29T06:45:34Z | [
"python"
] | In C I could, for example, zero out bit #10 in a 32 bit unsigned value like so:
```
unsigned long value = 0xdeadbeef;
value &= ~(1<<10);
```
How do I do that in Python? | Bitwise operations on Python ints work much like in C: the &, | and ^ operators behave just as they do there. The ~ operator works as for a signed integer in C; that is, ~x computes -x-1.
You have to be somewhat careful with left shifts, since Python integers aren't fixed-width. Use bit masks to obtain the low-order bits. For example, to get the equivalent of a left shift on a 32-bit integer, do (x << 5) & 0xffffffff. |
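A quick sketch of that masking point, since Python ints grow without bound (example values mine):

```python
x = 0xffffffff  # a "32-bit" value with every bit set

# Without a mask, the shifted value simply grows past 32 bits:
print(hex(x << 5))                 # 0x1fffffffe0

# Masking truncates back to 32 bits, matching C's unsigned wrap-around:
print(hex((x << 5) & 0xffffffff))  # 0xffffffe0

# ~x computes -x-1 on Python ints, so mask after inverting as well:
print(hex(~x & 0xffffffff))        # 0x0
```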
Character reading from file in Python | 147,741 | 60 | 2008-09-29T06:47:47Z | 147,756 | 87 | 2008-09-29T06:55:23Z | [
"python",
"unicode",
"encoding",
"ascii"
] | In a text file, there is a string "I don't like this".
However, when I read it into a string, it becomes "I don\xe2\x80\x98t like this". I understand that \u2018 is the unicode representation of "'". I use
```
f1 = open (file1, "r")
text = f1.read()
```
command to do the reading.
Now, is it possible to read the string in such a way that when it is read into the string, it is "I don't like this", instead of "I don\xe2\x80\x98t like this"?
Second edit: I have seen some people use mapping to solve this problem, but really, is there no built-in conversion that does this kind of ANSI to unicode (and vice versa) conversion? | Ref: <http://docs.python.org/howto/unicode>
*Reading Unicode from a file is therefore simple:*
```
import codecs
f = codecs.open('unicode.rst', encoding='utf-8')
for line in f:
print repr(line)
```
*It's also possible to open files in update mode, allowing both reading and writing:*
```
f = codecs.open('test', encoding='utf-8', mode='w+')
f.write(u'\u4500 blah blah blah\n')
f.seek(0)
print repr(f.readline()[:1])
f.close()
```
**EDIT**: I'm assuming that your intended goal is just to be able to read the file properly into a string in Python. If you're trying to convert to an ASCII string from Unicode, then there's really no direct way to do so, since the Unicode characters won't necessarily exist in ASCII.
If you're trying to convert to an ASCII string, try one of the following:
1. Replace the specific unicode chars with ASCII equivalents, if you are only looking to handle a few special cases such as this particular example
2. Use the `unicodedata` module's `normalize()` and the `string.encode()` method to convert as best you can to the next closest ASCII equivalent (Ref <https://web.archive.org/web/20090228203858/http://techxplorer.com/2006/07/18/converting-unicode-to-ascii-using-python>):
```
>>> teststr
u'I don\xe2\x80\x98t like this'
>>> unicodedata.normalize('NFKD', teststr).encode('ascii', 'ignore')
'I donat like this'
``` |
Character reading from file in Python | 147,741 | 60 | 2008-09-29T06:47:47Z | 147,790 | 8 | 2008-09-29T07:15:17Z | [
"python",
"unicode",
"encoding",
"ascii"
] | In a text file, there is a string "I don't like this".
However, when I read it into a string, it becomes "I don\xe2\x80\x98t like this". I understand that \u2018 is the unicode representation of "'". I use
```
f1 = open (file1, "r")
text = f1.read()
```
command to do the reading.
Now, is it possible to read the string in such a way that when it is read into the string, it is "I don't like this", instead of "I don\xe2\x80\x98t like this"?
Second edit: I have seen some people use mapping to solve this problem, but really, is there no built-in conversion that does this kind of ANSI to unicode (and vice versa) conversion? | There are a few points to consider.
A \u2018 escape is simply how Python displays that character in the repr of a unicode string, e.g. if you write:
```
>>> text = u'‘'
>>> print repr(text)
u'\u2018'
```
Now if you simply want to print the unicode string prettily, just use unicode's `encode` method:
```
>>> text = u'I don\u2018t like this'
>>> print text.encode('utf-8')
I don‘t like this
```
To make sure that every line from any file would be read as unicode, you'd better use the `codecs.open` function instead of just `open`, which allows you to specify file's encoding:
```
>>> import codecs
>>> f1 = codecs.open(file1, "r", "utf-8")
>>> text = f1.read()
>>> print type(text)
<type 'unicode'>
>>> print text.encode('utf-8')
I don‘t like this
``` |
In Django is there a way to display choices as checkboxes? | 147,752 | 36 | 2008-09-29T06:53:13Z | 147,793 | 66 | 2008-09-29T07:17:46Z | [
"python",
"django"
] | In the admin interface and newforms there is the brilliant helper of being able to define choices. You can use code like this:
```
APPROVAL_CHOICES = (
('yes', 'Yes'),
('no', 'No'),
('cancelled', 'Cancelled'),
)
client_approved = models.CharField(choices=APPROVAL_CHOICES)
```
to create a drop down box in your form and force the user to choose one of those options.
I'm just wondering if there is a way to define a set of choices where multiple can be chosen using checkboxes? (Would also be nice to be able to say that the user can select a maximum number of them.) It seems like it's a feature that is probably implemented, it's just I can't seem to find it in the documentation. | In terms of the forms library, you would use the [MultipleChoiceField](http://docs.djangoproject.com/en/dev/ref/forms/fields/#multiplechoicefield) field with a [CheckboxSelectMultiple](http://docs.djangoproject.com/en/dev/ref/forms/widgets/#django.forms.CheckboxSelectMultiple) widget to do that. You could validate the number of choices which were made by writing a validation method for the field:
```
class MyForm(forms.Form):
my_field = forms.MultipleChoiceField(choices=SOME_CHOICES, widget=forms.CheckboxSelectMultiple())
def clean_my_field(self):
if len(self.cleaned_data['my_field']) > 3:
raise forms.ValidationError('Select no more than 3.')
return self.cleaned_data['my_field']
```
To get this in the admin application, you'd need to customise a ModelForm and [override the form used in the appropriate ModelAdmin](http://docs.djangoproject.com/en/dev/ref/contrib/admin/#adding-custom-validation-to-the-admin). |
Preserving signatures of decorated functions | 147,816 | 71 | 2008-09-29T07:32:55Z | 147,835 | 7 | 2008-09-29T07:43:19Z | [
"python",
"decorator"
] | Suppose I have written a decorator that does something very generic. For example, it might convert all arguments to a specific type, perform logging, implement memoization, etc.
Here is an example:
```
def args_as_ints(f):
def g(*args, **kwargs):
args = [int(x) for x in args]
kwargs = dict((k, int(v)) for k, v in kwargs.items())
return f(*args, **kwargs)
return g
@args_as_ints
def funny_function(x, y, z=3):
"""Computes x*y + 2*z"""
return x*y + 2*z
>>> funny_function("3", 4.0, z="5")
22
```
Everything is fine so far. There is one problem, however. The decorated function does not retain the documentation of the original function:
```
>>> help(funny_function)
Help on function g in module __main__:
g(*args, **kwargs)
```
Fortunately, there is a workaround:
```
def args_as_ints(f):
def g(*args, **kwargs):
args = [int(x) for x in args]
kwargs = dict((k, int(v)) for k, v in kwargs.items())
return f(*args, **kwargs)
g.__name__ = f.__name__
g.__doc__ = f.__doc__
return g
@args_as_ints
def funny_function(x, y, z=3):
"""Computes x*y + 2*z"""
return x*y + 2*z
```
This time, the function name and documentation are correct:
```
>>> help(funny_function)
Help on function funny_function in module __main__:
funny_function(*args, **kwargs)
Computes x*y + 2*z
```
But there is still a problem: the function signature is wrong. The information "\*args, \*\*kwargs" is next to useless.
What to do? I can think of two simple but flawed workarounds:
1 -- Include the correct signature in the docstring:
```
def funny_function(x, y, z=3):
"""funny_function(x, y, z=3) -- computes x*y + 2*z"""
return x*y + 2*z
```
This is bad because of the duplication. The signature will still not be shown properly in automatically generated documentation. It's easy to update the function and forget about changing the docstring, or to make a typo. [*And yes, I'm aware of the fact that the docstring already duplicates the function body. Please ignore this; funny\_function is just a random example.*]
2 -- Not use a decorator, or use a special-purpose decorator for every specific signature:
```
def funny_functions_decorator(f):
def g(x, y, z=3):
return f(int(x), int(y), z=int(z))
g.__name__ = f.__name__
g.__doc__ = f.__doc__
return g
```
This works fine for a set of functions that have identical signature, but it's useless in general. As I said in the beginning, I want to be able to use decorators entirely generically.
I'm looking for a solution that is fully general, and automatic.
So the question is: is there a way to edit the decorated function signature after it has been created?
Otherwise, can I write a decorator that extracts the function signature and uses that information instead of "\*args, \*\*kwargs" when constructing the decorated function? How do I extract that information? How should I construct the decorated function -- with exec?
Any other approaches? | Take a look at the [decorator](http://www.phyast.pitt.edu/~micheles/python/documentation.html) module - specifically the [decorator](http://www.phyast.pitt.edu/~micheles/python/documentation.html#decorator-is-a-decorator) decorator, which solves this problem. |
Preserving signatures of decorated functions | 147,816 | 71 | 2008-09-29T07:32:55Z | 147,844 | 8 | 2008-09-29T07:49:23Z | [
"python",
"decorator"
] | Suppose I have written a decorator that does something very generic. For example, it might convert all arguments to a specific type, perform logging, implement memoization, etc.
Here is an example:
```
def args_as_ints(f):
def g(*args, **kwargs):
args = [int(x) for x in args]
kwargs = dict((k, int(v)) for k, v in kwargs.items())
return f(*args, **kwargs)
return g
@args_as_ints
def funny_function(x, y, z=3):
"""Computes x*y + 2*z"""
return x*y + 2*z
>>> funny_function("3", 4.0, z="5")
22
```
Everything is fine so far. There is one problem, however. The decorated function does not retain the documentation of the original function:
```
>>> help(funny_function)
Help on function g in module __main__:
g(*args, **kwargs)
```
Fortunately, there is a workaround:
```
def args_as_ints(f):
def g(*args, **kwargs):
args = [int(x) for x in args]
kwargs = dict((k, int(v)) for k, v in kwargs.items())
return f(*args, **kwargs)
g.__name__ = f.__name__
g.__doc__ = f.__doc__
return g
@args_as_ints
def funny_function(x, y, z=3):
"""Computes x*y + 2*z"""
return x*y + 2*z
```
This time, the function name and documentation are correct:
```
>>> help(funny_function)
Help on function funny_function in module __main__:
funny_function(*args, **kwargs)
Computes x*y + 2*z
```
But there is still a problem: the function signature is wrong. The information "\*args, \*\*kwargs" is next to useless.
What to do? I can think of two simple but flawed workarounds:
1 -- Include the correct signature in the docstring:
```
def funny_function(x, y, z=3):
"""funny_function(x, y, z=3) -- computes x*y + 2*z"""
return x*y + 2*z
```
This is bad because of the duplication. The signature will still not be shown properly in automatically generated documentation. It's easy to update the function and forget about changing the docstring, or to make a typo. [*And yes, I'm aware of the fact that the docstring already duplicates the function body. Please ignore this; funny\_function is just a random example.*]
2 -- Not use a decorator, or use a special-purpose decorator for every specific signature:
```
def funny_functions_decorator(f):
def g(x, y, z=3):
return f(int(x), int(y), z=int(z))
g.__name__ = f.__name__
g.__doc__ = f.__doc__
return g
```
This works fine for a set of functions that have identical signature, but it's useless in general. As I said in the beginning, I want to be able to use decorators entirely generically.
I'm looking for a solution that is fully general, and automatic.
So the question is: is there a way to edit the decorated function signature after it has been created?
Otherwise, can I write a decorator that extracts the function signature and uses that information instead of "\*args, \*\*kwargs" when constructing the decorated function? How do I extract that information? How should I construct the decorated function -- with exec?
Any other approaches? | There is a [decorator module](http://www.phyast.pitt.edu/~micheles/python/documentation.html) with `decorator` decorator you can use:
```
@decorator
def args_as_ints(f, *args, **kwargs):
args = [int(x) for x in args]
kwargs = dict((k, int(v)) for k, v in kwargs.items())
return f(*args, **kwargs)
```
Then the signature and help of the method is preserved:
```
>>> help(funny_function)
Help on function funny_function in module __main__:
funny_function(x, y, z=3)
Computes x*y + 2*z
```
EDIT: J. F. Sebastian pointed out that I didn't modify `args_as_ints` function -- it is fixed now. |
Preserving signatures of decorated functions | 147,816 | 71 | 2008-09-29T07:32:55Z | 147,878 | 50 | 2008-09-29T08:08:30Z | [
"python",
"decorator"
] | Suppose I have written a decorator that does something very generic. For example, it might convert all arguments to a specific type, perform logging, implement memoization, etc.
Here is an example:
```
def args_as_ints(f):
def g(*args, **kwargs):
args = [int(x) for x in args]
kwargs = dict((k, int(v)) for k, v in kwargs.items())
return f(*args, **kwargs)
return g
@args_as_ints
def funny_function(x, y, z=3):
"""Computes x*y + 2*z"""
return x*y + 2*z
>>> funny_function("3", 4.0, z="5")
22
```
Everything is fine so far. There is one problem, however. The decorated function does not retain the documentation of the original function:
```
>>> help(funny_function)
Help on function g in module __main__:
g(*args, **kwargs)
```
Fortunately, there is a workaround:
```
def args_as_ints(f):
def g(*args, **kwargs):
args = [int(x) for x in args]
kwargs = dict((k, int(v)) for k, v in kwargs.items())
return f(*args, **kwargs)
g.__name__ = f.__name__
g.__doc__ = f.__doc__
return g
@args_as_ints
def funny_function(x, y, z=3):
"""Computes x*y + 2*z"""
return x*y + 2*z
```
This time, the function name and documentation are correct:
```
>>> help(funny_function)
Help on function funny_function in module __main__:
funny_function(*args, **kwargs)
Computes x*y + 2*z
```
But there is still a problem: the function signature is wrong. The information "\*args, \*\*kwargs" is next to useless.
What to do? I can think of two simple but flawed workarounds:
1 -- Include the correct signature in the docstring:
```
def funny_function(x, y, z=3):
"""funny_function(x, y, z=3) -- computes x*y + 2*z"""
return x*y + 2*z
```
This is bad because of the duplication. The signature will still not be shown properly in automatically generated documentation. It's easy to update the function and forget about changing the docstring, or to make a typo. [*And yes, I'm aware of the fact that the docstring already duplicates the function body. Please ignore this; funny\_function is just a random example.*]
2 -- Not use a decorator, or use a special-purpose decorator for every specific signature:
```
def funny_functions_decorator(f):
    def g(x, y, z=3):
        return f(int(x), int(y), z=int(z))
    g.__name__ = f.__name__
    g.__doc__ = f.__doc__
    return g
```
This works fine for a set of functions that have identical signature, but it's useless in general. As I said in the beginning, I want to be able to use decorators entirely generically.
I'm looking for a solution that is fully general, and automatic.
So the question is: is there a way to edit the decorated function signature after it has been created?
Otherwise, can I write a decorator that extracts the function signature and uses that information instead of "\*args, \*\*kwargs" when constructing the decorated function? How do I extract that information? How should I construct the decorated function -- with exec?
Any other approaches? | 1. Install [decorator](http://www.phyast.pitt.edu/~micheles/python/documentation.html#the-solution) module:
```
$ pip install decorator
```
2. Adapt definition of `args_as_ints()`:
```
import decorator

@decorator.decorator
def args_as_ints(f, *args, **kwargs):
    args = [int(x) for x in args]
    kwargs = dict((k, int(v)) for k, v in kwargs.items())
    return f(*args, **kwargs)

@args_as_ints
def funny_function(x, y, z=3):
    """Computes x*y + 2*z"""
    return x*y + 2*z

print funny_function("3", 4.0, z="5")
# 22
help(funny_function)
# Help on function funny_function in module __main__:
#
# funny_function(x, y, z=3)
# Computes x*y + 2*z
```
---
### Python 3.4+
[`functools.wraps()` from stdlib](https://docs.python.org/3/library/functools.html#functools.wraps) preserves signatures since Python 3.4:
```
import functools

def args_as_ints(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        args = [int(x) for x in args]
        kwargs = dict((k, int(v)) for k, v in kwargs.items())
        return func(*args, **kwargs)
    return wrapper

@args_as_ints
def funny_function(x, y, z=3):
    """Computes x*y + 2*z"""
    return x*y + 2*z

print(funny_function("3", 4.0, z="5"))
# 22
help(funny_function)
# Help on function funny_function in module __main__:
#
# funny_function(x, y, z=3)
# Computes x*y + 2*z
```
`functools.wraps()` is available [at least since Python 2.5](https://docs.python.org/2.5/lib/module-functools.html) but it does not preserve the signature there:
```
help(funny_function)
# Help on function funny_function in module __main__:
#
# funny_function(*args, **kwargs)
# Computes x*y + 2*z
```
Notice: `*args, **kwargs` instead of `x, y, z=3`. |
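On Python 3.4+ the preserved signature can also be confirmed programmatically with `inspect.signature()`, which follows the `__wrapped__` attribute that `functools.wraps` sets. A small sketch building on the `functools.wraps` example above:

```python
import functools
import inspect

def args_as_ints(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        args = [int(x) for x in args]
        kwargs = {k: int(v) for k, v in kwargs.items()}
        return func(*args, **kwargs)
    return wrapper

@args_as_ints
def funny_function(x, y, z=3):
    """Computes x*y + 2*z"""
    return x * y + 2 * z

# wraps copies __wrapped__, which inspect.signature follows (Python 3.4+)
sig = str(inspect.signature(funny_function))  # "(x, y, z=3)"
```

This gives tools like Sphinx and IDEs the real parameter list instead of `*args, **kwargs`.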
Using OR comparisons with IF statements | 148,042 | 3 | 2008-09-29T09:12:24Z | 148,048 | 23 | 2008-09-29T09:13:53Z | [
"python",
"comparison",
"boolean"
] | When using IF statements in Python, you have to do the following to make the "cascade" work correctly.
```
if job == "mechanic" or job == "tech":
    print "awesome"
elif job == "tool" or job == "rock":
    print "dolt"
```
Is there a way to make Python accept multiple values when checking for "equals to"? For example,
```
if job == "mechanic" or "tech":
    print "awesome"
elif job == "tool" or "rock":
    print "dolt"
``` | ```
if job in ("mechanic", "tech"):
    print "awesome"
elif job in ("tool", "rock"):
    print "dolt"
```
The values in parentheses are a tuple. The `in` operator checks to see whether the left-hand side item occurs somewhere inside the right-hand tuple.
Note that when Python searches a tuple or list using the `in` operator, it does a linear search. If you have a large number of items on the right hand side, this could be a performance bottleneck. A larger-scale way of doing this would be to use a [`frozenset`](http://docs.python.org/lib/types-set.html):
```
AwesomeJobs = frozenset(["mechanic", "tech", ... lots of others ])

def func():
    if job in AwesomeJobs:
        print "awesome"
```
The use of `frozenset` over `set` is preferred if the list of awesome jobs does not need to be changed during the operation of your program. |
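As a runnable sketch of the pattern above (using only the job names from the question as examples):

```python
AWESOME_JOBS = frozenset(["mechanic", "tech"])
DOLT_JOBS = frozenset(["tool", "rock"])

def classify(job):
    # frozenset membership is an O(1) hash lookup,
    # versus a linear scan for tuples and lists
    if job in AWESOME_JOBS:
        return "awesome"
    elif job in DOLT_JOBS:
        return "dolt"
    return "unknown"
```

The same `in` test works unchanged whether the right-hand side is a tuple, list, set, or frozenset; only the lookup cost differs.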
Caching in urllib2? | 148,853 | 11 | 2008-09-29T14:17:13Z | 148,891 | 7 | 2008-09-29T14:25:07Z | [
"python",
"caching",
"urllib2"
] | Is there an easy way to cache things when using urllib2 that I am over-looking, or do I have to roll my own? | This ActiveState Python recipe might be helpful:
<http://code.activestate.com/recipes/491261/> |
Caching in urllib2? | 148,853 | 11 | 2008-09-29T14:17:13Z | 149,145 | 8 | 2008-09-29T15:21:56Z | [
"python",
"caching",
"urllib2"
] | Is there an easy way to cache things when using urllib2 that I am over-looking, or do I have to roll my own? | If you don't mind working at a slightly lower level, httplib2 (<http://code.google.com/p/httplib2/>) is an excellent HTTP library that includes caching functionality. |
Caching in urllib2? | 148,853 | 11 | 2008-09-29T14:17:13Z | 149,917 | 7 | 2008-09-29T18:03:38Z | [
"python",
"caching",
"urllib2"
] | Is there an easy way to cache things when using urllib2 that I am over-looking, or do I have to roll my own? | You could use a decorator function such as:
```
class cache(object):
    def __init__(self, fun):
        self.fun = fun
        self.cache = {}

    def __call__(self, *args, **kwargs):
        key = str(args) + str(kwargs)
        try:
            return self.cache[key]
        except KeyError:
            self.cache[key] = rval = self.fun(*args, **kwargs)
            return rval
        except TypeError:  # in case key isn't a valid key - don't cache
            return self.fun(*args, **kwargs)
```
and define a function along the lines of:
```
@cache
def get_url_src(url):
    return urllib.urlopen(url).read()
```
This is assuming you're not paying attention to HTTP Cache Controls, but just want to cache the page for the duration of the application. |
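In modern Python the same per-process memoization can be had from the standard library with `functools.lru_cache`. A sketch with a stand-in for the fetch function (the counter exists only to demonstrate that the second call is served from the cache, not re-fetched):

```python
import functools

calls = {"n": 0}

@functools.lru_cache(maxsize=128)
def get_url_src(url):
    # stand-in for urllib.urlopen(url).read(); counts real "fetches"
    calls["n"] += 1
    return "payload for " + url

get_url_src("http://example.com/")
get_url_src("http://example.com/")  # second call hits the cache
```

Like the decorator above, this ignores HTTP cache-control headers entirely; it is purely an in-memory memoization keyed on the arguments.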
Keeping GUIs responsive during long-running tasks | 148,963 | 11 | 2008-09-29T14:37:32Z | 148,979 | 15 | 2008-09-29T14:40:08Z | [
"python",
"user-interface",
"wxpython"
] | Keeping the GUI responsive while the application does some CPU-heavy processing is one of the challenges of effective GUI programming.
[Here's a good discussion](http://wiki.wxpython.org/LongRunningTasks) of how to do this in wxPython. To summarize, there are 3 ways:
1. Use threads
2. Use wxYield
3. Chunk the work and do it in the IDLE event handler
Which method have *you* found to be the most effective? Techniques from other frameworks (like Qt, GTK or Windows API) are also welcome. | Threads. They're what I always go for because you can do it in every framework you need.
And once you're used to multi-threading and parallel processing in one language/framework, you're good on all frameworks. |
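A minimal, framework-agnostic sketch of the thread approach: do the heavy work in a worker thread and hand the result back through a queue. In a real GUI you would poll the queue from a timer or idle event rather than calling `join()`, which would block the event loop:

```python
import threading
import queue

results = queue.Queue()

def long_task():
    # stand-in for CPU-heavy work
    results.put(sum(range(100_000)))

worker = threading.Thread(target=long_task, daemon=True)
worker.start()
worker.join()          # a GUI would keep pumping events instead
value = results.get()  # safe cross-thread handoff of the result
```

The queue is the important part: it lets the worker communicate with the UI thread without touching widgets from the wrong thread.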
Keeping GUIs responsive during long-running tasks | 148,963 | 11 | 2008-09-29T14:37:32Z | 149,212 | 7 | 2008-09-29T15:34:33Z | [
"python",
"user-interface",
"wxpython"
] | Keeping the GUI responsive while the application does some CPU-heavy processing is one of the challenges of effective GUI programming.
[Here's a good discussion](http://wiki.wxpython.org/LongRunningTasks) of how to do this in wxPython. To summarize, there are 3 ways:
1. Use threads
2. Use wxYield
3. Chunk the work and do it in the IDLE event handler
Which method have *you* found to be the most effective? Techniques from other frameworks (like Qt, GTK or Windows API) are also welcome. | Definitely threads. Why? The future is multi-core. Almost any new CPU has more than one core, or if it has just one, it might support hyperthreading and thus pretend it has more than one. To make effective use of multi-core CPUs (and Intel is planning to go up to 32 cores in the not-so-far future), you need multiple threads. If you run everything in one main thread (usually the UI thread is the main thread), users will have CPUs with 8, 16 and one day 32 cores, and your application will never use more than one of these; in other words, it will run much, much slower than it could.
Actually, if you design an application nowadays, I would move away from the classical design and think of a master/slave relationship. Your UI is the master; its only task is to interact with the user. That means displaying data to the user and gathering user input. Whenever your app needs to "process any data" (even small amounts, and much more importantly big ones), create a "task" of any kind, forward this task to a background thread and make the thread perform the task, providing feedback to the UI (e.g. how many percent it has completed, or just whether the task is still running, so the UI can show a "work-in-progress" indicator). If possible, split the task into many small, independent sub-tasks and run more than one background thread, feeding one sub-task to each of them. That way your application can really benefit from multiple cores and gets faster the more cores the CPU has.
Actually, companies like Apple and Microsoft are already planning how to make their still mostly single-threaded UIs themselves multithreaded. Even with the approach above, you may one day have the situation that the UI is the bottleneck itself. The background processes can process data much faster than the UI can present it to the user or ask the user for input. Today many UI frameworks are barely thread-safe, and many are not thread-safe at all, but that will change. Serial processing (doing one task after another) is a dying design; parallel processing (doing many tasks at once) is where the future is going. Just look at graphics adapters. Even the most modern NVidia card has pitiful performance if you look at the processing speed in MHz/GHz of the GPU alone. How come it can beat the crap out of CPUs when it comes to 3D calculations? Simple: instead of calculating one polygon point or one texture pixel after another, it calculates many of them in parallel (actually a whole bunch at the same time) and that way it reaches a throughput that still makes CPUs cry. E.g. the ATI X1900 (to name the competitor as well) has 48 shader units! |
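The "split into independent sub-tasks" idea maps directly onto `concurrent.futures` in today's standard library. A sketch (note that for pure-Python CPU-bound work, a `ProcessPoolExecutor` would sidestep the GIL, whereas threads mostly help with I/O-bound tasks):

```python
from concurrent.futures import ThreadPoolExecutor

def process(chunk):
    # one independent sub-task
    return sum(chunk)

# split the work into ten independent chunks
chunks = [range(i, i + 10) for i in range(0, 100, 10)]
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(process, chunks))
total = sum(partials)  # combine the sub-task results
```

`pool.map` preserves input order, so combining the partial results afterwards is trivial.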
Background tasks on appengine | 149,307 | 6 | 2008-09-29T15:48:55Z | 1,030,325 | 12 | 2009-06-23T02:11:39Z | [
"python",
"google-app-engine",
"cron"
] | How to run background tasks on appengine ? | You may use the [Task Queue Python API](http://code.google.com/appengine/docs/python/taskqueue/). |
BeautifulSoup's Python 3 compatibility | 149,585 | 18 | 2008-09-29T16:49:50Z | 9,906,160 | 17 | 2012-03-28T11:03:10Z | [
"python",
"python-3.x",
"beautifulsoup",
"porting"
] | Does BeautifulSoup work with Python 3?
If not, how soon will there be a port? Will there be a port at all?
Google doesn't turn up anything to me (Maybe it's 'coz I'm looking for the wrong thing?) | Beautiful Soup **4.x** [officially supports Python 3.](https://groups.google.com/forum/#!msg/beautifulsoup/VpNNflJ1rPI/sum07jmEwvgJ)
```
pip install beautifulsoup4
``` |
What is the difference between __reduce__ and __reduce_ex__? | 150,284 | 9 | 2008-09-29T19:31:50Z | 150,309 | 16 | 2008-09-29T19:41:37Z | [
"python",
"pickle"
] | I understand that these methods are for pickling/unpickling and have no relation to the reduce built-in function, but what's the difference between the 2 and why do we need both? | [The docs](https://docs.python.org/2/library/pickle.html#pickling-and-unpickling-extension-types) say that
> If provided, at pickling time
> `__reduce__()` will be called with no
> arguments, and it must return either a
> string or a tuple.
On the other hand,
> It is sometimes useful to know the
> protocol version when implementing
> `__reduce__`. This can be done by
> implementing a method named
> `__reduce_ex__` instead of
> `__reduce__`. `__reduce_ex__`, when it
> exists, is called in preference over
> `__reduce__` (you may still provide
> `__reduce__` for backwards
> compatibility). The `__reduce_ex__`
> method will be called with a single
> integer argument, the protocol
> version.
On the gripping hand, [Guido says](http://mail.python.org/pipermail/python-3000/2008-February/012094.html) that this is an area that could be cleaned up. |
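A minimal sketch of a class opting into the protocol-aware hook; the `Point` class here is made up purely for illustration:

```python
import pickle

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __reduce_ex__(self, protocol):
        # pickle passes the protocol version as the single argument;
        # return a (callable, args) pair used to reconstruct the object
        return (Point, (self.x, self.y))

restored = pickle.loads(pickle.dumps(Point(1, 2)))
```

The `protocol` argument is ignored in this sketch, but a real implementation could branch on it, e.g. to use features only available in newer protocols.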
How do you break into the debugger from Python source code? | 150,375 | 19 | 2008-09-29T19:55:12Z | 150,376 | 26 | 2008-09-29T19:55:20Z | [
"python",
"debugging",
"breakpoints",
"pdb"
] | What do you insert into Python source code to have it break into pdb (when execution gets to that spot)? | ```
import pdb; pdb.set_trace()
```
See [Python: Coding in the Debugger for Beginners](http://jjinux.blogspot.com/2007/08/python-coding-in-debugger-for-beginners.html) for this and more helpful hints. |
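Since Python 3.7 the built-in `breakpoint()` does the same thing, and the `PYTHONBREAKPOINT` environment variable can redirect or disable it (it is set to `"0"` below only so this sketch runs non-interactively):

```python
import os

os.environ["PYTHONBREAKPOINT"] = "0"  # "0" turns breakpoint() into a no-op

def buggy(x):
    breakpoint()  # with the default hook this drops into pdb right here
    return x * 2

result = buggy(21)
```

Setting `PYTHONBREAKPOINT=module.function` instead swaps in a different debugger (e.g. `ipdb.set_trace`) without touching the source.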
How Does One Read Bytes from File in Python | 150,532 | 8 | 2008-09-29T20:37:34Z | 150,584 | 16 | 2008-09-29T20:50:20Z | [
"python",
"id3"
] | Similar to [this](http://stackoverflow.com/questions/5415/convert-bytes-to-floating-point-numbers-in-python) question, I am trying to read in an ID3v2 tag header and am having trouble figuring out how to get individual bytes in python.
I first read all ten bytes into a string. I then want to parse out the individual pieces of information.
I can grab the two version number chars in the string, but then I have no idea how to take those two chars and get an integer out of them.
The struct package seems to be what I want, but I can't get it to work.
Here is my code so far (I am very new to Python btw... so take it easy on me):
```
def __init__(self, ten_byte_string):
    self.whole_string = ten_byte_string
    self.file_identifier = self.whole_string[:3]
    self.major_version = struct.pack('x', self.whole_string[3:4]) #this
    self.minor_version = struct.pack('x', self.whole_string[4:5]) # and this
    self.flags = self.whole_string[5:6]
    self.len = self.whole_string[6:10]
```
Printing out any of these values is obviously crap because they are not formatted correctly. | If you have a string with 2 bytes that you wish to interpret as a 16-bit integer, you can do so by:
```
>>> s = '\0\x02'
>>> struct.unpack('>H', s)
(2,)
```
Note that the > is for big-endian (the largest part of the integer comes first). This is the format id3 tags use.
For other sizes of integer, you use different format codes, e.g. "i" for a signed 32-bit integer. See help(struct) for details.
You can also unpack several elements at once, e.g. for 2 unsigned shorts followed by a signed 32-bit value:
```
>>> a,b,c = struct.unpack('>HHi', some_string)
```
Going by your code, you are looking for (in order):
* a 3 char string
* 2 single byte values (major and minor version)
* a 1 byte flags variable
* a 32 bit length quantity
The format string for this would be:
```
ident, major, minor, flags, len = struct.unpack('>3sBBBI', ten_byte_string)
``` |
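A runnable version of that format string against a fabricated header. (One caveat: real ID3v2 stores the length as a synchsafe integer, ignored here for simplicity; the byte values below are made up for illustration.)

```python
import struct

# fabricated header: "ID3", version 2.3.0, no flags, length 0x00000201
header = b"ID3" + bytes([3, 0, 0, 0x00, 0x00, 0x02, 0x01])

# >3sBBBI = big-endian: 3-byte string, three unsigned bytes, unsigned 32-bit int
ident, major, minor, flags, length = struct.unpack(">3sBBBI", header)
```

`struct.calcsize(">3sBBBI")` is 10, matching the ten-byte ID3v2 header exactly.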
How do I calculate number of days between two dates using Python? | 151,199 | 193 | 2008-09-29T23:36:25Z | 151,211 | 301 | 2008-09-29T23:41:22Z | [
"python",
"date"
] | If I have two dates (ex. `'8/18/2008'` and `'9/26/2008'`) what is the best way to get the difference measured in days? | If you have two date objects, you can just subtract them.
```
from datetime import date
d0 = date(2008, 8, 18)
d1 = date(2008, 9, 26)
delta = d0 - d1
print delta.days
```
The relevant section of the docs:
<https://docs.python.org/library/datetime.html> |
How do I calculate number of days between two dates using Python? | 151,199 | 193 | 2008-09-29T23:36:25Z | 151,212 | 55 | 2008-09-29T23:41:59Z | [
"python",
"date"
] | If I have two dates (ex. `'8/18/2008'` and `'9/26/2008'`) what is the best way to get the difference measured in days? | Using the power of datetime:
```
from datetime import datetime
date_format = "%m/%d/%Y"
a = datetime.strptime('8/18/2008', date_format)
b = datetime.strptime('9/26/2008', date_format)
delta = b - a
print delta.days # that's it
``` |
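Combining the two answers above into one runnable sketch: parse the strings, subtract the dates, and read `.days` from the resulting `timedelta`:

```python
from datetime import datetime

fmt = "%m/%d/%Y"
d0 = datetime.strptime("8/18/2008", fmt).date()
d1 = datetime.strptime("9/26/2008", fmt).date()
days_between = (d1 - d0).days  # timedelta.days; negative if d1 < d0
```

Note the difference is signed: subtracting in the other order gives `-39`, so wrap it in `abs()` if only the magnitude matters.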