Dataset schema (column: type, min to max):

- title: string, length 12 to 150
- question_id: int64, 469 to 40.1M
- question_score: int64, 2 to 5.52k
- question_date: string (date), 2008-08-02 15:11:16 to 2016-10-18 06:16:31
- answer_id: int64, 536 to 40.1M
- answer_score: int64, 7 to 8.38k
- answer_date: string (date), 2008-08-02 18:49:07 to 2016-10-18 06:19:33
- tags: list, 1 to 5 items
- question_body_md: string, length 15 to 30.2k
- answer_body_md: string, length 11 to 27.8k
Getting method parameter names in python
218,616
131
2008-10-20T14:22:02Z
218,625
73
2008-10-20T14:24:48Z
[ "python", "decorator", "introspection", "python-datamodel" ]
Given the Python function: ``` def aMethod(arg1, arg2): pass ``` how can I extract the number and names of the arguments? I.e., given that I have a reference to `func`, I want `func.[something]` to return `("arg1", "arg2")`. The usage scenario for this is that I have a decorator, and I wish to use the method arguments, in the same order that they appear for the actual function, as a key. I.e., how would the decorator look that printed "a,b" when I call `aMethod("a", "b")`?
In CPython, the number of arguments is ``` aMethod.func_code.co_argcount ``` and their names are in the beginning of ``` aMethod.func_code.co_varnames ``` These are implementation details of CPython, so this probably does not work in other implementations of Python, such as IronPython and Jython. One portable way to accept "pass-through" arguments is to define your function with the signature `func(*args, **kwargs)`. This is used a lot in e.g. matplotlib, where the outer API layer passes lots of keyword arguments to the lower-level API.
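In Python 3 (and 2.6+), the same attributes are spelled `__code__` rather than `func_code`; a minimal sketch:

```python
def a_method(arg1, arg2):
    pass

# number of positional parameters
argcount = a_method.__code__.co_argcount
# parameter names are the first co_argcount entries of co_varnames
names = a_method.__code__.co_varnames[:argcount]

print(argcount)  # 2
print(names)     # ('arg1', 'arg2')
```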
Getting method parameter names in python
218,616
131
2008-10-20T14:22:02Z
218,709
240
2008-10-20T14:52:19Z
[ "python", "decorator", "introspection", "python-datamodel" ]
Given the Python function: ``` def aMethod(arg1, arg2): pass ``` how can I extract the number and names of the arguments? I.e., given that I have a reference to `func`, I want `func.[something]` to return `("arg1", "arg2")`. The usage scenario for this is that I have a decorator, and I wish to use the method arguments, in the same order that they appear for the actual function, as a key. I.e., how would the decorator look that printed "a,b" when I call `aMethod("a", "b")`?
Take a look at the [inspect](http://docs.python.org/library/inspect.html) module - this will do the inspection of the various code object properties for you. ``` >>> inspect.getargspec(aMethod) (['arg1', 'arg2'], None, None, None) ``` The other results are the names of the \*args and \*\*kwargs variables, and the defaults provided, i.e.: ``` >>> def foo(a,b,c=4, *arglist, **keywords): pass >>> inspect.getargspec(foo) (['a', 'b', 'c'], 'arglist', 'keywords', (4,)) ```
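A present-day note: `inspect.getargspec` was deprecated and removed in Python 3.11; `inspect.signature` is the replacement and also handles keyword-only parameters. A sketch of the same inspection:

```python
import inspect

def foo(a, b, c=4, *arglist, **keywords):
    pass

sig = inspect.signature(foo)
# parameter names, in declaration order
print(list(sig.parameters))         # ['a', 'b', 'c', 'arglist', 'keywords']
# per-parameter details, e.g. the default of c
print(sig.parameters['c'].default)  # 4
```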
Getting method parameter names in python
218,616
131
2008-10-20T14:22:02Z
220,366
10
2008-10-21T00:02:19Z
[ "python", "decorator", "introspection", "python-datamodel" ]
Given the Python function: ``` def aMethod(arg1, arg2): pass ``` how can I extract the number and names of the arguments? I.e., given that I have a reference to `func`, I want `func.[something]` to return `("arg1", "arg2")`. The usage scenario for this is that I have a decorator, and I wish to use the method arguments, in the same order that they appear for the actual function, as a key. I.e., how would the decorator look that printed "a,b" when I call `aMethod("a", "b")`?
Here is something I think will work for what you want, using a decorator. ``` class LogWrappedFunction(object): def __init__(self, function): self.function = function def logAndCall(self, *arguments, **namedArguments): print "Calling %s with arguments %s and named arguments %s" %\ (self.function.func_name, arguments, namedArguments) self.function.__call__(*arguments, **namedArguments) def logwrap(function): return LogWrappedFunction(function).logAndCall @logwrap def doSomething(spam, eggs, foo, bar): print "Doing something totally awesome with %s and %s." % (spam, eggs) doSomething("beans","rice", foo="wiggity", bar="wack") ``` Run it, it will yield the following output: ``` C:\scripts>python decoratorExample.py Calling doSomething with arguments ('beans', 'rice') and named arguments {'foo': 'wiggity', 'bar': 'wack'} Doing something totally awesome with beans and rice. ```
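The example above is Python 2 (`print` statement, `func_name`). A Python 3 translation of the same idea, as a sketch, additionally returning the wrapped function's result so it stays usable as an expression:

```python
class LogWrappedFunction:
    def __init__(self, function):
        self.function = function

    def log_and_call(self, *args, **kwargs):
        print("Calling %s with arguments %s and named arguments %s"
              % (self.function.__name__, args, kwargs))
        # unlike the original, return the result of the wrapped call
        return self.function(*args, **kwargs)

def logwrap(function):
    return LogWrappedFunction(function).log_and_call

@logwrap
def do_something(spam, eggs, foo, bar):
    return "Doing something totally awesome with %s and %s." % (spam, eggs)

result = do_something("beans", "rice", foo="wiggity", bar="wack")
print(result)
```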
Getting method parameter names in python
218,616
131
2008-10-20T14:22:02Z
2,991,341
9
2010-06-07T16:37:20Z
[ "python", "decorator", "introspection", "python-datamodel" ]
Given the Python function: ``` def aMethod(arg1, arg2): pass ``` how can I extract the number and names of the arguments? I.e., given that I have a reference to `func`, I want `func.[something]` to return `("arg1", "arg2")`. The usage scenario for this is that I have a decorator, and I wish to use the method arguments, in the same order that they appear for the actual function, as a key. I.e., how would the decorator look that printed "a,b" when I call `aMethod("a", "b")`?
I think what you're looking for is the `locals()` built-in - ``` In [6]: def test(a, b):print locals() ...: In [7]: test(1,2) {'a': 1, 'b': 2} ```
Getting method parameter names in python
218,616
131
2008-10-20T14:22:02Z
16,542,145
11
2013-05-14T11:36:45Z
[ "python", "decorator", "introspection", "python-datamodel" ]
Given the Python function: ``` def aMethod(arg1, arg2): pass ``` how can I extract the number and names of the arguments? I.e., given that I have a reference to `func`, I want `func.[something]` to return `("arg1", "arg2")`. The usage scenario for this is that I have a decorator, and I wish to use the method arguments, in the same order that they appear for the actual function, as a key. I.e., how would the decorator look that printed "a,b" when I call `aMethod("a", "b")`?
In a decorator method, you can list arguments of the original method in this way: ``` import inspect, itertools def my_decorator(): def decorator(f): def wrapper(*args, **kwargs): # if you want arguments names as a list: args_name = inspect.getargspec(f)[0] print(args_name) # if you want names and values as a dictionary: args_dict = dict(itertools.izip(args_name, args)) print(args_dict) # if you want values as a list: args_values = args_dict.values() print(args_values) ``` If the `**kwargs` are important for you, then it will be a bit complicated: ``` def wrapper(*args, **kwargs): args_name = list(OrderedDict.fromkeys(inspect.getargspec(f)[0] + kwargs.keys())) args_dict = OrderedDict(list(itertools.izip(args_name, args)) + list(kwargs.iteritems())) args_values = args_dict.values() ``` Example: ``` @my_decorator() def my_function(x, y, z=3): pass my_function(1, y=2, z=3, w=0) # prints: # ['x', 'y', 'z', 'w'] # {'y': 2, 'x': 1, 'z': 3, 'w': 0} # [1, 2, 3, 0] ```
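This answer is Python 2 (`getargspec`, `itertools.izip`, `iteritems`). A Python 3 sketch of the kwargs-aware variant, using `inspect.signature`, `zip`, and `items`; like the original, the wrapper only inspects the call and does not invoke `f`:

```python
import inspect
from collections import OrderedDict

def my_decorator():
    def decorator(f):
        def wrapper(*args, **kwargs):
            # declared parameter names of f, then any extra keyword names,
            # each kept once in order of first appearance
            args_name = list(OrderedDict.fromkeys(
                list(inspect.signature(f).parameters) + list(kwargs)))
            # positional values pair with the leading names;
            # keyword values fill in the rest
            args_dict = OrderedDict(
                list(zip(args_name, args)) + list(kwargs.items()))
            print(args_name)
            print(list(args_dict.values()))
            return args_dict
        return wrapper
    return decorator

@my_decorator()
def my_function(x, y, z=3):
    pass

captured = my_function(1, y=2, z=3, w=0)
# prints:
# ['x', 'y', 'z', 'w']
# [1, 2, 3, 0]
```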
How do you share data between a parent and forked child process in Python?
218,935
10
2008-10-20T15:57:05Z
219,048
12
2008-10-20T16:26:05Z
[ "python", "fork", "share" ]
I'm pretty sure one would do this using the os.plock(op) function, but I have no idea how. Also, if there's a better way, I'd be grateful to find out. Code snippets are very welcome.
[Subprocess](http://docs.python.org/library/subprocess) replaces os.popen, os.system, os.spawn, popen2 and commands. A [simple example for piping](http://docs.python.org/library/subprocess#replacing-shell-pipe-line) would be: ``` p1 = Popen(["dmesg"], stdout=PIPE) p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE) output = p2.communicate()[0] ``` You could also use a [memory mapped file](http://docs.python.org/library/mmap.html) with the flag=MAP\_SHARED for shared memory between processes. [multiprocessing](http://docs.python.org/library/multiprocessing.html) abstracts both [pipes](http://docs.python.org/library/multiprocessing.html#exchanging-objects-between-processes) and [shared memory](http://docs.python.org/library/multiprocessing.html#sharing-state-between-processes) and provides a higher level interface. Taken from the Processing documentation: ``` from multiprocessing import Process, Pipe def f(conn): conn.send([42, None, 'hello']) conn.close() if __name__ == '__main__': parent_conn, child_conn = Pipe() p = Process(target=f, args=(child_conn,)) p.start() print parent_conn.recv() # prints "[42, None, 'hello']" p.join() ```
How do you share data between a parent and forked child process in Python?
218,935
10
2008-10-20T15:57:05Z
219,066
7
2008-10-20T16:30:34Z
[ "python", "fork", "share" ]
I'm pretty sure one would do this using the os.plock(op) function, but I have no idea how. Also, if there's a better way, I'd be grateful to find out. Code snippets are very welcome.
Take a look at the [multiprocessing](http://docs.python.org/dev/library/multiprocessing.html) module, new in Python 2.6 (also available for earlier versions as [pyprocessing](http://pyprocessing.berlios.de/)). Here's an example from the docs illustrating passing information through a pipe: ``` from multiprocessing import Process, Pipe def f(conn): conn.send([42, None, 'hello']) conn.close() if __name__ == '__main__': parent_conn, child_conn = Pipe() p = Process(target=f, args=(child_conn,)) p.start() print parent_conn.recv() # prints "[42, None, 'hello']" p.join() ```
how can i use sharepoint (via soap?) from python?
218,987
9
2008-10-20T16:09:36Z
222,242
8
2008-10-21T15:13:46Z
[ "python", "sharepoint", "soap", "ntlm", "suds" ]
I want to use SharePoint with Python (CPython). Has anyone tried this before?
To get the WSDL: ``` import sys # we use suds -> https://fedorahosted.org/suds from suds import WebFault from suds.client import * import urllib2 # my 2 url conf # url_sharepoint,url_NTLM_authproxy import myconfig as my # build url wsdl = '_vti_bin/SiteData.asmx?WSDL' url = '/'.join([my.url_sharepoint,wsdl]) # we need a NTLM_auth_Proxy -> http://ntlmaps.sourceforge.net/ # follow instruction and get proxy running proxy_handler = urllib2.ProxyHandler({'http': my.url_NTLM_authproxy }) opener = urllib2.build_opener(proxy_handler) client = SoapClient(url, {'opener' : opener}) print client.wsdl ``` The main (mean) problem: the SharePoint server uses NTLM auth [ :-( ], so I had to use the NTLM auth proxy. To Rob and Enzondio: THANKS for your hints!
how can i use sharepoint (via soap?) from python?
218,987
9
2008-10-20T16:09:36Z
5,403,203
9
2011-03-23T09:36:12Z
[ "python", "sharepoint", "soap", "ntlm", "suds" ]
I want to use SharePoint with Python (CPython). Has anyone tried this before?
I suspect that since this question was answered the SUDS library has been updated to take care of the required authentication itself. After jumping through various hoops, I found this to do the trick: ``` from suds import WebFault from suds.client import * from suds.transport.https import WindowsHttpAuthenticated user = r'SERVER\user' password = "yourpassword" url = "http://sharepointserver/_vti_bin/SiteData.asmx?WSDL" ntlm = WindowsHttpAuthenticated(username = user, password = password) client = Client(url, transport=ntlm) ```
How Python web frameworks, WSGI and CGI fit together
219,110
127
2008-10-20T16:43:57Z
219,124
21
2008-10-20T16:49:17Z
[ "python", "apache", "cgi", "wsgi" ]
I have a [Bluehost](http://en.wikipedia.org/wiki/Bluehost) account where I can run Python scripts as CGI. I guess it's the simplest CGI, because to run I have to define the following in `.htaccess`: ``` Options +ExecCGI AddType text/html py AddHandler cgi-script .py ``` Now, whenever I look up web programming with Python, I hear a lot about WSGI and how most frameworks use it. But I just don't understand how it all fits together, especially when my web server is given (Apache running at a host's machine) and not something I can really play with (except defining `.htaccess` commands). How are [WSGI](http://en.wikipedia.org/wiki/Web_Server_Gateway_Interface), CGI, and the frameworks all connected? What do I need to know, install, and do if I want to run a web framework (say [web.py](http://webpy.org/) or [CherryPy](http://en.wikipedia.org/wiki/CherryPy)) on my basic CGI configuration? How to install WSGI support?
You can [run WSGI over CGI as PEP 333 demonstrates](http://www.python.org/dev/peps/pep-0333/#the-server-gateway-side) as an example. However, every time there is a request, a new Python interpreter is started and the whole context (database connections, etc.) needs to be built, which all takes time. The best option, if you want to run WSGI, would be for your host to install [mod\_wsgi](http://code.google.com/p/modwsgi/) and make an appropriate configuration to defer control to an application of yours. [Flup](http://trac.saddi.com/flup) is another way to run WSGI with any webserver that can speak [FCGI](http://www.fastcgi.com/drupal/), [SCGI](http://www.mems-exchange.org/software/scgi/) or AJP. From my experience only FCGI really works, and it can be used in Apache either via [mod\_fastcgi](http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html) or, if you can run a separate Python daemon, with [mod\_proxy\_fcgi](http://mproxyfcgi.sourceforge.net/). [WSGI](http://wsgi.org) is a protocol much like CGI which defines a set of rules for how webserver and Python code can interact; it is defined as [PEP 333](http://www.python.org/dev/peps/pep-0333). It makes it possible for many different webservers to use many different frameworks and applications via the same application protocol. This flexibility is what makes it so useful.
How Python web frameworks, WSGI and CGI fit together
219,110
127
2008-10-20T16:43:57Z
505,534
49
2009-02-03T00:04:46Z
[ "python", "apache", "cgi", "wsgi" ]
I have a [Bluehost](http://en.wikipedia.org/wiki/Bluehost) account where I can run Python scripts as CGI. I guess it's the simplest CGI, because to run I have to define the following in `.htaccess`: ``` Options +ExecCGI AddType text/html py AddHandler cgi-script .py ``` Now, whenever I look up web programming with Python, I hear a lot about WSGI and how most frameworks use it. But I just don't understand how it all fits together, especially when my web server is given (Apache running at a host's machine) and not something I can really play with (except defining `.htaccess` commands). How are [WSGI](http://en.wikipedia.org/wiki/Web_Server_Gateway_Interface), CGI, and the frameworks all connected? What do I need to know, install, and do if I want to run a web framework (say [web.py](http://webpy.org/) or [CherryPy](http://en.wikipedia.org/wiki/CherryPy)) on my basic CGI configuration? How to install WSGI support?
I think [Florian's answer](http://stackoverflow.com/questions/219110/how-python-web-frameworks-wsgi-and-cgi-fit-together/219124#219124) answers the part of your question about "what is WSGI", especially if you read [the PEP](http://www.python.org/dev/peps/pep-0333). As for the questions you pose towards the end: WSGI, CGI, FastCGI etc. are all protocols for a web server to *run code*, and deliver the dynamic content that is produced. Compare this to static web serving, where a plain HTML file is basically delivered as is to the client. **CGI, FastCGI and SCGI are language agnostic.** You can write CGI scripts in Perl, Python, C, bash, whatever. CGI defines *which* executable will be called, based on the URL, and *how* it will be called: the arguments and environment. It also defines how the return value should be passed back to the web server once your executable is finished. The variations are basically optimisations to be able to handle more requests, reduce latency and so on; the basic concept is the same. **WSGI is Python only.** Rather than a language agnostic protocol, a standard function signature is defined: ``` def simple_app(environ, start_response): """Simplest possible application object""" status = '200 OK' response_headers = [('Content-type','text/plain')] start_response(status, response_headers) return ['Hello world!\n'] ``` That is a complete (if limited) WSGI application. A web server with WSGI support (such as Apache with mod\_wsgi) can invoke this function whenever a request arrives. The reason this is so great is that we can avoid the messy step of converting from an HTTP GET/POST to CGI to Python, and back again on the way out. It's a much more direct, clean and efficient linkage. It also makes it much easier to have long-running frameworks running behind web servers, if all that needs to be done for a request is a function call.
With plain CGI, you'd have to [start your whole framework up](http://tools.cherrypy.org/wiki/RunAsCGI) for each individual request. To have WSGI support, you'll need to have installed a WSGI module (like [mod\_wsgi](http://code.google.com/p/modwsgi/)), or use a web server with WSGI baked in (like [CherryPy](http://tools.cherrypy.org/)). If neither of those are possible, you *could* use the CGI-WSGI bridge given in the PEP.
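The `simple_app` above needs no web server at all to try out; the stdlib `wsgiref` module can fake the server side. A sketch (note that in Python 3 the WSGI response body must be bytes, not str):

```python
from wsgiref.util import setup_testing_defaults

def simple_app(environ, start_response):
    status = '200 OK'
    response_headers = [('Content-type', 'text/plain')]
    start_response(status, response_headers)
    return [b'Hello world!\n']  # WSGI bodies are bytes in Python 3

# drive the app by hand, exactly as a server would
environ = {}
setup_testing_defaults(environ)  # fills in a plausible request environ
captured = {}

def start_response(status, headers):
    captured['status'] = status
    captured['headers'] = dict(headers)

body = b''.join(simple_app(environ, start_response))
print(captured['status'])  # 200 OK
print(body)                # b'Hello world!\n'
```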
How Python web frameworks, WSGI and CGI fit together
219,110
127
2008-10-20T16:43:57Z
520,194
200
2009-02-06T13:04:12Z
[ "python", "apache", "cgi", "wsgi" ]
I have a [Bluehost](http://en.wikipedia.org/wiki/Bluehost) account where I can run Python scripts as CGI. I guess it's the simplest CGI, because to run I have to define the following in `.htaccess`: ``` Options +ExecCGI AddType text/html py AddHandler cgi-script .py ``` Now, whenever I look up web programming with Python, I hear a lot about WSGI and how most frameworks use it. But I just don't understand how it all fits together, especially when my web server is given (Apache running at a host's machine) and not something I can really play with (except defining `.htaccess` commands). How are [WSGI](http://en.wikipedia.org/wiki/Web_Server_Gateway_Interface), CGI, and the frameworks all connected? What do I need to know, install, and do if I want to run a web framework (say [web.py](http://webpy.org/) or [CherryPy](http://en.wikipedia.org/wiki/CherryPy)) on my basic CGI configuration? How to install WSGI support?
**How WSGI, CGI, and the frameworks are all connected ?** Apache listens on port 80. It gets an HTTP request. It parses the request to find a way to respond. Apache has a LOT of choices for responding. One way to respond is to use CGI to run a script. Another way to respond is to simply serve a file. In the case of CGI, Apache prepares an environment and invokes the script through the CGI protocol. This is a standard Unix Fork/Exec situation -- the CGI subprocess inherits an OS environment including the socket and stdout. The CGI subprocess writes a response, which goes back to Apache; Apache sends this response to the browser. CGI is primitive and annoying. Mostly because it forks a subprocess for every request, and subprocess must exit or close stdout and stderr to signify end of response. WSGI is an interface that is based on the CGI design pattern. It is not necessarily CGI -- it does not have to fork a subprocess for each request. It can be CGI, but it doesn't have to be. WSGI adds to the CGI design pattern in several important ways. It parses the HTTP Request Headers for you and adds these to the environment. It supplies any POST-oriented input as a file-like object in the environment. It also provides you a function that will formulate the response, saving you from a lot of formatting details. **What do I need to know / install / do if I want to run a web framework (say web.py or cherrypy) on my basic CGI configuration ?** Recall that forking a subprocess is expensive. There are two ways to work around this. 1. **Embedded** `mod_wsgi` or `mod_python` embeds Python inside Apache; no process is forked. Apache runs the Django application directly. 2. **Daemon** `mod_wsgi` or `mod_fastcgi` allows Apache to interact with a separate daemon (or "long-running process"), using the WSGI protocol. You start your long-running Django process, then you configure Apache's mod\_fastcgi to communicate with this process. 
Note that `mod_wsgi` can work in either mode: embedded or daemon. When you read up on mod\_fastcgi, you'll see that Django uses [flup](http://pypi.python.org/pypi/flup/) to create a WSGI-compatible interface from the information provided by mod\_fastcgi. The pipeline works like this. ``` Apache -> mod_fastcgi -> FLUP (via FastCGI protocol) -> Django (via WSGI protocol) ``` Django has several "django.core.handlers" for the various interfaces. For mod\_fastcgi, Django provides a `manage.py runfcgi` that integrates FLUP and the handler. For mod\_wsgi, there's a core handler for this. **How to install WSGI support ?** Follow these instructions. <http://code.google.com/p/modwsgi/wiki/IntegrationWithDjango> For background see this <http://docs.djangoproject.com/en/dev/howto/deployment/#howto-deployment-index>
I’m stunned: weird problem with python and sockets + threads
219,547
2
2008-10-20T19:19:45Z
219,671
7
2008-10-20T19:56:00Z
[ "python", "multithreading", "apache", "sockets" ]
I have a python script that is an HTTP server: <http://paste2.org/p/89701>. When benchmarking it against ApacheBench (ab) with a concurrency level (-c switch) that is lower than or equal to the value I specified in the socket.listen() call in the source code, everything works fine, but as soon as I put the concurrency level in ApacheBench above the value in the socket.listen() call, performance drops through the floor. Some examples: * socket.listen(**10**) and ab -n 50 -c **10** <http://localhost/> = **1200req/s** * socket.listen(**10**) and ab -n 50 -c **11** <http://localhost/> = **40req/s** * socket.listen(**100**) and ab -n 5000 -c **100** <http://localhost/> = **1000req/s** * socket.listen(**100**) and ab -n 5000 -c **101** <http://localhost/> = **32req/s** Nothing changes in the code between the two calls; I can’t figure out what is wrong - I've been at this problem for one day now. Also note that the multiplexing version of the same code (which I wrote to compare to the threaded version) works FINE no matter what socket.listen() is set to or what the concurrency (-c switch) in ApacheBench is set to. *I've spent a day on IRC/python docs, posted on comp.lang.python and on my blog - I can't find ANYONE that even has an idea what could be wrong. Help me!*
I cannot confirm your results, and your server is coded fishy. I whipped up my own server and do not have this problem either. Let's move the discussion to a simpler level: ``` import thread, socket, Queue connections = Queue.Queue() num_threads = 10 backlog = 10 def request(): while 1: conn = connections.get() data = '' while '\r\n\r\n' not in data: data += conn.recv(4048) conn.sendall('HTTP/1.1 200 OK\r\n\r\nHello World') conn.close() if __name__ == '__main__': for _ in range(num_threads): thread.start_new_thread(request, ()) acceptor = socket.socket(socket.AF_INET, socket.SOCK_STREAM) acceptor.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) acceptor.bind(('', 1234)) acceptor.listen(backlog) while 1: conn, addr = acceptor.accept() connections.put(conn) ``` which on my machine does: ``` ab -n 10000 -c 10 http://127.0.0.1:1234/ --> 8695.03 [#/sec] ab -n 10000 -c 11 http://127.0.0.1:1234/ --> 8529.41 [#/sec] ```
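For reference, the same stripped-down server as a Python 3 sketch: `thread`/`Queue` became `threading`/`queue`, sockets speak bytes, and binding port 0 lets the OS pick a free port instead of hard-coding 1234:

```python
import queue
import socket
import threading

connections = queue.Queue()
NUM_THREADS = 10
BACKLOG = 10

def request_worker():
    while True:
        conn = connections.get()
        data = b''
        while b'\r\n\r\n' not in data:
            data += conn.recv(4096)
        conn.sendall(b'HTTP/1.1 200 OK\r\n\r\nHello World')
        conn.close()

acceptor = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
acceptor.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
acceptor.bind(('127.0.0.1', 0))  # port 0: let the OS pick a free port
acceptor.listen(BACKLOG)
PORT = acceptor.getsockname()[1]

def accept_loop():
    while True:
        conn, _addr = acceptor.accept()
        connections.put(conn)

for _ in range(NUM_THREADS):
    threading.Thread(target=request_worker, daemon=True).start()
threading.Thread(target=accept_loop, daemon=True).start()
```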
Ensure a single instance of an application in Linux
220,525
28
2008-10-21T01:58:30Z
220,542
21
2008-10-21T02:10:24Z
[ "python", "linux", "singleinstance" ]
I'm working on a GUI application in WxPython, and I am not sure how I can ensure that only one copy of my application is running at any given time on the machine. Due to the nature of the application, running more than once doesn't make any sense, and will fail quickly. Under Win32, I can simply make a named mutex and check that at startup. Unfortunately, I don't know of any facilities in Linux that can do this. I'm looking for something that will automatically be released should the application crash unexpectedly. I don't want to have to burden my users with having to manually delete lock files because I crashed.
There are several common techniques, including using semaphores. The one I see used most often is to create a "pid lock file" on startup that contains the pid of the running process. If the file already exists when the program starts up, open it and grab the pid inside, then check whether a process with that pid is running. If it is, check the cmdline value in /proc/*pid* to see if it is an instance of your program; if so, quit. Otherwise, overwrite the file with your pid. The usual name for the pid file is \*application\_name\*`.pid`.
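A minimal sketch of that pidfile dance. It uses `os.kill(pid, 0)` as a portable liveness probe rather than reading /proc, and skips the cmdline verification step described above; the filename is made up for the example:

```python
import os
import sys

PID_FILE = 'myapp.pid'  # hypothetical filename

def already_running(pid_file=PID_FILE):
    """Return True if the pid recorded in pid_file belongs to a live process."""
    try:
        with open(pid_file) as fp:
            pid = int(fp.read().strip())
    except (OSError, ValueError):
        return False  # no pidfile, or garbage contents
    try:
        os.kill(pid, 0)  # signal 0 checks existence without sending anything
    except OSError:
        return False  # stale pidfile: no such process
    return True

if already_running():
    sys.exit(1)
with open(PID_FILE, 'w') as fp:
    fp.write(str(os.getpid()))
```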
Ensure a single instance of an application in Linux
220,525
28
2008-10-21T01:58:30Z
220,709
49
2008-10-21T03:43:36Z
[ "python", "linux", "singleinstance" ]
I'm working on a GUI application in WxPython, and I am not sure how I can ensure that only one copy of my application is running at any given time on the machine. Due to the nature of the application, running more than once doesn't make any sense, and will fail quickly. Under Win32, I can simply make a named mutex and check that at startup. Unfortunately, I don't know of any facilities in Linux that can do this. I'm looking for something that will automatically be released should the application crash unexpectedly. I don't want to have to burden my users with having to manually delete lock files because I crashed.
The Right Thing is advisory locking using `flock(LOCK_EX)`; in Python, this is found in the [`fcntl` module](http://docs.python.org/3/library/fcntl.html). Unlike pidfiles, these locks are always automatically released when your process dies for any reason, have no race conditions relating to file deletion (as the file doesn't *need* to be deleted to release the lock), and there's no chance of a different process inheriting the PID and thus appearing to validate a stale lock. If you want unclean shutdown detection, you can write a marker (such as your PID, for traditionalists) into the file after grabbing the lock, and then truncate the file to 0-byte status before a clean shutdown (while the lock is being held); thus, if the lock is not held and the file is non-empty, an unclean shutdown is indicated.
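A minimal sketch of this approach; the lock path is an assumption for the example, and `LOCK_NB` makes the call fail immediately (instead of blocking) when another instance already holds the lock:

```python
import fcntl
import os
import sys

LOCK_PATH = '/tmp/myapp.lock'  # hypothetical path

lock_fp = open(LOCK_PATH, 'w')
try:
    # LOCK_NB: fail at once rather than wait for the other instance
    fcntl.flock(lock_fp, fcntl.LOCK_EX | fcntl.LOCK_NB)
except OSError:
    print('another instance is running')
    sys.exit(1)

# optional unclean-shutdown marker, as described above
lock_fp.write(str(os.getpid()))
lock_fp.flush()
# on clean shutdown: lock_fp.truncate(0) before exiting; the lock itself
# is released automatically when the process dies, clean or not
```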
Ensure a single instance of an application in Linux
220,525
28
2008-10-21T01:58:30Z
221,159
20
2008-10-21T08:30:11Z
[ "python", "linux", "singleinstance" ]
I'm working on a GUI application in WxPython, and I am not sure how I can ensure that only one copy of my application is running at any given time on the machine. Due to the nature of the application, running more than once doesn't make any sense, and will fail quickly. Under Win32, I can simply make a named mutex and check that at startup. Unfortunately, I don't know of any facilities in Linux that can do this. I'm looking for something that will automatically be released should the application crash unexpectedly. I don't want to have to burden my users with having to manually delete lock files because I crashed.
Complete locking solution using the `fcntl` module: ``` import fcntl import sys pid_file = 'program.pid' fp = open(pid_file, 'w') try: fcntl.lockf(fp, fcntl.LOCK_EX | fcntl.LOCK_NB) except IOError: # another instance is running sys.exit(1) ```
Ensure a single instance of an application in Linux
220,525
28
2008-10-21T01:58:30Z
476,133
8
2009-01-24T15:20:51Z
[ "python", "linux", "singleinstance" ]
I'm working on a GUI application in WxPython, and I am not sure how I can ensure that only one copy of my application is running at any given time on the machine. Due to the nature of the application, running more than once doesn't make any sense, and will fail quickly. Under Win32, I can simply make a named mutex and check that at startup. Unfortunately, I don't know of any facilities in Linux that can do this. I'm looking for something that will automatically be released should the application crash unexpectedly. I don't want to have to burden my users with having to manually delete lock files because I crashed.
wxWidgets offers a wxSingleInstanceChecker class for this purpose: [wxPython doc](http://www.wxpython.org/docs/api/wx.SingleInstanceChecker-class.html), or [wxWidgets doc](http://docs.wxwidgets.org/stable/wx_wxsingleinstancechecker.html). The wxWidgets doc has sample code in C++, but the Python equivalent should be something like this (untested): ``` name = "MyApp-%s" % wx.GetUserId() checker = wx.SingleInstanceChecker(name) if checker.IsAnotherRunning(): return False ```
Including PYDs/DLLs in py2exe builds
220,777
9
2008-10-21T04:40:57Z
224,274
11
2008-10-22T02:27:44Z
[ "python", "dll", "installation", "py2exe", "pyd" ]
One of the modules for my app uses functions from a .pyd file. There's an option to exclude dlls (exclude\_dlls) but is there one for including them? The build process doesn't seem to be copying the .pyd in my module despite copying the rest of the files (.py). I also need to include a .dll. How do I get py2exe to include both .pyd and .dll files?
.pyd's and .DLL's are different here, in that a .pyd ought to be automatically found by modulefinder and so included (as long as you have the appropriate "import" statement) without needing to do anything. If one is missed, you do the same thing as if a .py file was missed (they're both just modules): use the "include" option for the py2exe options. Modulefinder will not necessarily find dependencies on .DLLs (py2exe can detect some), so you may need to explicitly include these, with the 'data\_files' option (which expects (directory, file-list) tuples). For example, where you had two .DLL's ('foo.dll' and 'bar.dll') to include, and three .pyd's ('module1.pyd', 'module2.pyd', and 'module3.pyd') to include: ``` setup(name='App', # other options, data_files=[('.', ['foo.dll', 'bar.dll'])], options = {"py2exe" : {"includes" : ["module1", "module2", "module3"]}} ) ```
How can I write a method within a Django model to retrieve related objects?
221,328
2
2008-10-21T09:49:04Z
221,338
10
2008-10-21T09:54:58Z
[ "python", "django", "model-view-controller", "frameworks" ]
I have two models. We'll call them object A and object B. Their design looks something like this: ``` class Foo(models.Model): name = models.CharField() class Bar(models.Model): title = models.CharField() Foo= models.ForeignKey('myapp.Foo') ``` Now, suppose I want to make a method within Foo that returns all Bar objects that reference that instance of Foo. How do I do this? ``` class Foo(models.Model): name = models.CharField() def returnBars(self): ???? ```
You get this for free: <http://docs.djangoproject.com/en/dev/topics/db/queries/#backwards-related-objects> By default, you can access a Manager which gives you access to related items through a `RELATEDCLASSNAME_set` attribute: ``` some_foo.bar_set.all() ``` Or you can use the `related_name` argument to `ForeignKey` to specify the attribute which should hold the reverse relationship Manager: ``` class Foo(models.Model): name = models.CharField() class Bar(models.Model): title = models.CharField() foo = models.ForeignKey(Foo, related_name='bars') ... some_foo.bars.all() ```
Is it possible to implement Python code-completion in TextMate?
221,339
20
2008-10-21T09:54:58Z
248,819
9
2008-10-29T23:42:15Z
[ "python", "autocomplete", "text-editor", "textmate" ]
[PySmell](http://github.com/orestis/pysmell/tree/master) seems like a good starting point. I think it should be possible, PySmell's `idehelper.py` does a majority of the complex stuff, it should just be a case of giving it the current line, offering up the completions (the bit I am not sure about) and then replacing the line with the selected one. ``` >>> import idehelper >>> # The path is where my PYSMELLTAGS file is located: >>> PYSMELLDICT = idehelper.findPYSMELLDICT("/Users/dbr/Desktop/pysmell/") >>> options = idehelper.detectCompletionType("", "", 1, 2, "", PYSMELLDICT) >>> completions = idehelper.findCompletions("proc", PYSMELLDICT, options) >>> print completions [{'dup': '1', 'menu': 'pysmell.pysmell', 'kind': 'f', 'word': 'process', 'abbr': 'process(argList, excluded, output, verbose=False)'}] ``` It'll never be perfect, but it would be extremely useful (even if just for completing the stdlib modules, which should never change, so you won't have to constantly regenerate the PYSMELLTAGS file whenever you add a function) --- Progressing! I have the utter-basics of completion in place - barely works, but it's close..
I ran `python pysmells.py /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/*.py -O /Library/Python/2.5/site-packages/pysmell/PYSMELLTAGS` Place the following in a TextMate bundle script, set "input: entire document", "output: insert as text", "activation: key equivalent: alt+esc", "scope selector: source.python" ``` #!/usr/bin/env python import os import sys from pysmell import idehelper CUR_WORD = os.environ.get("TM_CURRENT_WORD") cur_file = os.environ.get("TM_FILEPATH") orig_source = sys.stdin.read() line_no = int(os.environ.get("TM_LINE_NUMBER")) cur_col = int(os.environ.get("TM_LINE_INDEX")) # PYSMELLS is currently in site-packages/pysmell/ PYSMELLDICT = idehelper.findPYSMELLDICT("/Library/Python/2.5/site-packages/pysmell/blah") options = idehelper.detectCompletionType(cur_file, orig_source, line_no, cur_col, "", PYSMELLDICT) completions = idehelper.findCompletions(CUR_WORD, PYSMELLDICT, options) if len(completions) > 0: new_word = completions[0]['word'] new_word = new_word.replace(CUR_WORD, "", 1) # remove what user has already typed print new_word ``` Then I made a new python document, typed "import urll" and hit alt+escape, and it completed it to "import urllib"! As I said, it's entirely a work-in-progress, so don't use it yet.. --- *Last update:* orestis has integrated this into the PySmell project's code! Any further fiddling will happen [on github](http://github.com/orestis/pysmell/tree/master)
EDIT: I've actually taken your code above and integrated it into a command. It will properly show a completion list for you to choose from. You can grab it here: <http://github.com/orestis/pysmell/tree/master> (hit download and do python setup.py install). It's rough but it works - please report any errors on <http://code.google.com/p/pysmell/> -- Hi, I'm the developer of PySmell. I also use a Mac, so if you can send me an email (contact info is in the source code) with your progress so far, I can try to integrate it :) Oh BTW it's called PySmell - no trailing 's' :)
Putting Copyright Symbol into a Python File
221,376
11
2008-10-21T10:15:16Z
221,380
28
2008-10-21T10:17:32Z
[ "python", "encoding" ]
I need to include a copyright statement at the top of every Python source file I produce: ``` # Copyright: © 2008 etc. ``` However, when I then run such a file I get this message: SyntaxError: Non-ASCII character '\xa9' in file MyFile.py on line 3, but no encoding declared; see <http://www.python.org/peps/pep-0263.html> for details. Apparently Python isn't happy about the copyright symbol because it assumes the source file is all in ASCII. Either I need to make my first line be: ``` # -*- coding: iso-8859-1 -*- ``` to tell Python I'm using Latin encoding, or I can change the copyright statement to: ``` # Copyright: \xa9 2008 etc. ``` which just possibly doesn't have the same legal standing. Is there a more elegant solution?
The copyright symbol in ASCII is spelled `(c)` or "`Copyright`". See circular 61, [Copyright Registration for Computer Programs](http://www.copyright.gov/circs/circ61.pdf). While it's true that the legal formalism (see Circular 1, [Copyright Basics](http://www.copyright.gov/circs/circ01.pdf)) is > The symbol © (the letter C in a > circle), or the word “Copyright,” or > the abbreviation “Copr.”; and... And it's also true that > To guarantee protection for a > copyrighted work in all UCC member > countries, the notice must consist of > the symbol © (the word “Copyright” or > the abbreviation is not acceptable) You can dig through circular [3](http://www.copyright.gov/circs/circ03.html) and [38a](http://www.copyright.gov/circs/circ38a.html). This has, however, already been tested in court. It isn't an interesting issue. If you do a search for "(c) acceptable for c-in-a-circle", you'll find that lawyers all agree that (c) is an acceptable substitute. See Perle and Williams. See Scott on Information Technology Law.
Putting Copyright Symbol into a Python File
221,376
11
2008-10-21T10:15:16Z
221,569
8
2008-10-21T11:50:53Z
[ "python", "encoding" ]
I need to include a copyright statement at the top of every Python source file I produce: ``` # Copyright: © 2008 etc. ``` However, when I then run such a file I get this message: SyntaxError: Non-ASCII character '\xa9' in file MyFile.py on line 3, but no encoding declared; see <http://www.python.org/peps/pep-0263.html> for details. Apparently Python isn't happy about the copyright symbol because it assumes the source file is all in ASCII. Either I need to make my first line be: ``` # -*- coding: iso-8859-1 -*- ``` to tell Python I'm using Latin encoding, or I can change the copyright statement to: ``` # Copyright: \xa9 2008 etc. ``` which just possibly doesn't have the same legal standing. Is there a more elegant solution?
Contrary to the accepted answer, AFAIK, (c) is not an officially recognized alternative to the copyright symbol, although I'm not sure it's been tested in court. However, © is just an abbreviation of the word Copyright. Saying "Copyright 2008 Robert Munro" is identical to saying "© 2008 Robert Munro" Your "Copyright: © 2008 etc." expands to "Copyright: Copyright 2008 etc." Wikipedia's page seems to agree with me <http://en.wikipedia.org/wiki/Copyright_symbol> In the United States, the copyright notice consists of three elements: 1. the © symbol, **or** the word "Copyright" or abbreviation "Copr."; ...
Is it possible to set a timeout on a socket in Twisted?
221,745
8
2008-10-21T12:56:32Z
251,302
13
2008-10-30T18:52:01Z
[ "python", "networking", "sockets", "twisted" ]
I realize I'm probably just dumb and missing something big and important, but I can't figure out how to specify a timeout in twisted using reactor.listenUDP. My goal is to be able to specify a timeout, and after said amount of time, if DatagramProtocol.datagramReceived has not been executed, have it execute a callback or something that I can use to call reactor.stop(). Any help or advice is appreciated. Thanks
I think `reactor.callLater` would work better than `LoopingCall`. Something like this (note that `listenUDP` takes the port number as its first argument, and `datagramReceived` also receives the sender address): ``` def timedOut(): reactor.stop() class Protocol(DatagramProtocol): def __init__(self, timeout): self.timeout = timeout def datagramReceived(self, datagram, addr): self.timeout.cancel() # ... timeout = reactor.callLater(5, timedOut) reactor.listenUDP(port, Protocol(timeout)) ```
ElementTree XPath - Select Element based on attribute
222,375
34
2008-10-21T15:52:51Z
222,473
32
2008-10-21T16:16:06Z
[ "python", "elementtree" ]
I am having trouble using the attribute XPath Selector in ElementTree, which I should be able to do according to the [Documentation](http://effbot.org/zone/element-xpath.htm) Here's some sample code **XML** ``` <root> <target name="1"> <a></a> <b></b> </target> <target name="2"> <a></a> <b></b> </target> </root> ``` **Python** ``` def parse(document): root = et.parse(document) for target in root.findall("//target[@name='a']"): print target._children ``` I am receiving the following Exception: ``` expected path separator ([) ```
The syntax you're trying to use is new in **[ElementTree 1.3](http://effbot.org/zone/element-xpath.htm)**. That version ships with **Python 2.7** or higher. If you have Python 2.6 or less, you still have ElementTree 1.2.6 or less.
ElementTree XPath - Select Element based on attribute
222,375
34
2008-10-21T15:52:51Z
16,105,230
8
2013-04-19T13:00:16Z
[ "python", "elementtree" ]
I am having trouble using the attribute XPath Selector in ElementTree, which I should be able to do according to the [Documentation](http://effbot.org/zone/element-xpath.htm) Here's some sample code **XML** ``` <root> <target name="1"> <a></a> <b></b> </target> <target name="2"> <a></a> <b></b> </target> </root> ``` **Python** ``` def parse(document): root = et.parse(document) for target in root.findall("//target[@name='a']"): print target._children ``` I am receiving the following Exception: ``` expected path separator ([) ```
There are several problems in this code. 1. Python's built-in ElementTree (ET for short) has no real XPath support; only a limited subset. For example, it doesn't support *find-from-root* expressions like `//target`. Notice: the [documentation](http://docs.python.org/2/library/xml.etree.elementtree.html#supported-xpath-syntax) mentions "**//**", but only for descendants: an expression such as `.//target` is valid; `//...` is not! There is an alternative implementation, [**lxml**](http://lxml.de), which is richer. It seems the documentation that was consulted describes lxml rather than the built-in module; that is why the example does not work. 2. The `@name` notation selects xml-**attributes**; the `key=value` expressions within an xml tag. So the name value has to be 1 or 2 to select something in the given document. Alternatively, one can search for targets with a child **element** *'a'*: `target[a]` (no @). For the given document, parsed with the built-in ElementTree (v1.3) into root, the following are correct and working: * `root.findall(".//target")` Finds both targets * `root.findall(".//target/a")` Finds both a-elements * `root.findall(".//target[a]")` Finds both target-elements again, as both have an a-element * `root.findall(".//target[@name='1']")` Finds only the *first* target. Notice the quotes around 1 are needed; otherwise a SyntaxError is raised * `root.findall(".//target[a][@name='1']")` Also valid; finds that same target * `root.findall(".//target[@name='1']/a")` Finds only one a-element; ...
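With ElementTree 1.3 (Python 2.7+/3.x), the expressions above can be checked directly against the question's document — a self-contained sketch:

```python
import xml.etree.ElementTree as ET

# The question's document, inlined so the check is self-contained.
doc = """<root>
  <target name="1"><a></a><b></b></target>
  <target name="2"><a></a><b></b></target>
</root>"""
root = ET.fromstring(doc)

# Note the leading '.' in each path: the built-in module
# wants relative expressions, not find-from-root '//...'.
print(len(root.findall(".//target")))               # 2 - both targets
print(len(root.findall(".//target/a")))             # 2 - both a-elements
print(len(root.findall(".//target[a]")))            # 2 - targets with an <a> child
print(len(root.findall(".//target[@name='1']")))    # 1 - only the first target
print(len(root.findall(".//target[@name='1']/a")))  # 1 - its single a-element
```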
What is the meaning of '(?i)password' in python regular expression?
222,536
4
2008-10-21T16:29:51Z
222,556
7
2008-10-21T16:34:50Z
[ "python", "regex", "pattern-matching" ]
Pexpect can be used to automate tasks in python (does not need TCL to be installed). One of the simplest routines of this class is the 'run()' routine. It accepts a dictionary of expected question patterns as keys and the responses as values. For example pexpect.run ('scp foo [email protected]:.', events={'(?i)password': mypassword}) I know that usually '?' is used to indicate 0 or 1 occurrences of previous literal in the string (for regular expressions that is). However, over here, this does not seem to be the meaning. Can experts comment on what is it?
<https://docs.python.org/library/re.html#regular-expression-syntax> > (?...) This is an extension > notation (a "?" following a "(" is not > meaningful otherwise). The first > character after the "?" determines > what the meaning and further syntax of > the construct is. Extensions usually > do not create a new group; > (?P...) is the only exception to > this rule. Following are the currently > supported extensions. > > (?iLmsux) (One or more letters from > the set "i", "L", "m", "s", "u", "x".) > The group matches the empty string; > the letters set the corresponding > flags (re.I, re.L, re.M, re.S, re.U, > re.X) for the entire regular > expression. This is useful if you wish > to include the flags as part of the > regular expression, instead of passing > a flag argument to the compile() > function. > > Note that the (?x) flag changes how > the expression is parsed. It should be > used first in the expression string, > or after one or more whitespace > characters. If there are > non-whitespace characters before the > flag, the results are undefined. So in this case the string is a regular expression, and is set to be case-insensitive.
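A quick demonstration of the equivalence (a sketch; only the pattern from the question is used):

```python
import re

# The inline (?i) flag at the start of the pattern ...
assert re.search('(?i)password', 'Enter Password:') is not None
# ... is equivalent to passing the flag as an argument:
assert re.search('password', 'Enter Password:', re.IGNORECASE) is not None
# Without either, matching stays case-sensitive:
assert re.search('password', 'Enter Password:') is None
```

This is why pexpect's `'(?i)password'` key matches prompts like "Password:" and "password:" alike.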
Sorting a tuple that contains tuples
222,752
12
2008-10-21T17:40:51Z
222,762
21
2008-10-21T17:44:08Z
[ "python", "sorting", "tuples" ]
I have the following tuple, which contains tuples: ``` MY_TUPLE = ( ('A','Apple'), ('C','Carrot'), ('B','Banana'), ) ``` I'd like to sort this tuple based upon the **second** value contained in inner-tuples (i.e., sort Apple, Carrot, Banana rather than A, B, C). Any thoughts?
``` from operator import itemgetter MY_SORTED_TUPLE = tuple(sorted(MY_TUPLE, key=itemgetter(1))) ``` or without `itemgetter`: ``` MY_SORTED_TUPLE = tuple(sorted(MY_TUPLE, key=lambda item: item[1])) ```
Sorting a tuple that contains tuples
222,752
12
2008-10-21T17:40:51Z
222,776
7
2008-10-21T17:45:56Z
[ "python", "sorting", "tuples" ]
I have the following tuple, which contains tuples: ``` MY_TUPLE = ( ('A','Apple'), ('C','Carrot'), ('B','Banana'), ) ``` I'd like to sort this tuple based upon the **second** value contained in inner-tuples (i.e., sort Apple, Carrot, Banana rather than A, B, C). Any thoughts?
From [Sorting Mini-HOW TO](http://wiki.python.org/moin/HowTo/Sorting#head-d121eed08556ad7cb2a02a886788656dadb709bd) > Often there's a built-in that will > match your needs, such as str.lower(). > The operator module contains a number > of functions useful for this purpose. > For example, you can sort tuples based > on their second element using > operator.itemgetter(): ``` >>> import operator >>> L = [('c', 2), ('d', 1), ('a', 4), ('b', 3)] >>> map(operator.itemgetter(0), L) ['c', 'd', 'a', 'b'] >>> map(operator.itemgetter(1), L) [2, 1, 4, 3] >>> sorted(L, key=operator.itemgetter(1)) [('d', 1), ('c', 2), ('b', 3), ('a', 4)] ``` Hope this helps.
How to use 'super' in Python?
222,877
221
2008-10-21T18:13:15Z
222,922
156
2008-10-21T18:24:50Z
[ "python", "inheritance", "super" ]
Can someone explain to me the difference between doing: ``` class Child(SomeBaseClass): def __init__(self): super(Child, self).__init__() ``` and this: ``` class Child(SomeBaseClass): def __init__(self): SomeBaseClass.__init__(self) ``` I've seen `super` being used quite a lot in classes with only single inheritance. I can see why you'd use it in multiple inheritance but am unclear as to what the advantages are of using it in this kind of situation.
The benefits of `super()` in single-inheritance are minimal -- mostly, you don't have to hard-code the name of the base class into every method that uses its parent methods. However, it's almost impossible to use multiple-inheritance without `super()`. This includes common idioms like mixins, interfaces, abstract classes, etc. This extends to code that later extends yours. If somebody later wanted to write a class that extended `Child` and a mixin, their code would not work properly.
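A minimal sketch of the mixin point (class and method names here are illustrative, not from the question): because each class delegates via `super()`, the mixin is picked up automatically through the MRO.

```python
class Base(object):
    def greet(self):
        return ['Base']

class LoggedMixin(Base):
    def greet(self):
        # cooperates: delegates to whatever is next in the MRO
        return ['LoggedMixin'] + super(LoggedMixin, self).greet()

class Child(Base):
    def greet(self):
        return ['Child'] + super(Child, self).greet()

class Combined(Child, LoggedMixin):
    pass

# Because Child used super() rather than Base.greet(self),
# the mixin's method runs between Child's and Base's:
print(Combined().greet())  # ['Child', 'LoggedMixin', 'Base']
```

Had `Child.greet` hard-coded `Base.greet(self)`, `LoggedMixin.greet` would be skipped entirely.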
How to use 'super' in Python?
222,877
221
2008-10-21T18:13:15Z
224,020
23
2008-10-22T00:06:32Z
[ "python", "inheritance", "super" ]
Can someone explain to me the difference between doing: ``` class Child(SomeBaseClass): def __init__(self): super(Child, self).__init__() ``` and this: ``` class Child(SomeBaseClass): def __init__(self): SomeBaseClass.__init__(self) ``` I've seen `super` being used quite a lot in classes with only single inheritance. I can see why you'd use it in multiple inheritance but am unclear as to what the advantages are of using it in this kind of situation.
Doesn't all of this assume that the base class is inherited from `object`? ``` class A: def __init__(self): print "A.__init__()" class B(A): def __init__(self): print "B.__init__()" super(B, self).__init__() ``` Will not work. `class A` must be derived from `object`, i.e: `class A(object)`
How to use 'super' in Python?
222,877
221
2008-10-21T18:13:15Z
33,469,090
41
2015-11-02T00:53:24Z
[ "python", "inheritance", "super" ]
Can someone explain to me the difference between doing: ``` class Child(SomeBaseClass): def __init__(self): super(Child, self).__init__() ``` and this: ``` class Child(SomeBaseClass): def __init__(self): SomeBaseClass.__init__(self) ``` I've seen `super` being used quite a lot in classes with only single inheritance. I can see why you'd use it in multiple inheritance but am unclear as to what the advantages are of using it in this kind of situation.
> Can someone explain to me the difference between doing: > > ``` > class Child(SomeBaseClass): > def __init__(self): > super(Child, self).__init__() > ``` > > and this: > > ``` > class Child(SomeBaseClass): > def __init__(self): > SomeBaseClass.__init__(self) > ``` # Indirection with Forward Compatibility What does it give you? For single inheritance, the above is practically identical from a static analysis point of view. However, using `super` gives you a layer of indirection with forward compatibility. Forward compatibility is very important to seasoned developers. You want your code to keep working with minimal changes as you change it. When you look at your revision history, you want to see precisely what changed when. You may start off with single inheritance, but if you decide to add another base class, you only have to change the line with the bases - if the bases change in a class you inherit from (say a mixin is added) you'd change nothing in this class. Particularly in Python 2, getting the arguments to super and the correct method arguments right can be difficult. If you know you're using `super` correctly with single inheritance, that makes debugging less difficult going forward. 
# Dependency Injection Other people can use your code and inject parents into the method resolution: ``` class SomeBaseClass(object): def __init__(self): print('SomeBaseClass.__init__(self) called') class UnsuperChild(SomeBaseClass): def __init__(self): print('UnsuperChild.__init__(self) called') SomeBaseClass.__init__(self) class SuperChild(SomeBaseClass): def __init__(self): print('SuperChild.__init__(self) called') super(SuperChild, self).__init__() ``` Say someone later wants to inject a class between the children and `SomeBaseClass` (for testing or some other reason): ``` class InjectMe(SomeBaseClass): def __init__(self): print('InjectMe.__init__(self) called') super(InjectMe, self).__init__() class UnsuperInjector(UnsuperChild, InjectMe): pass class SuperInjector(SuperChild, InjectMe): pass ``` Using the un-super child fails to inject the dependency because the child you're using has hard-coded the method to be called after its own: ``` >>> o = UnsuperInjector() UnsuperChild.__init__(self) called SomeBaseClass.__init__(self) called ``` However, the class with the child that uses `super` can correctly inject the dependency: ``` >>> o2 = SuperInjector() SuperChild.__init__(self) called InjectMe.__init__(self) called SomeBaseClass.__init__(self) called ``` ## Conclusion Always use `super` to reference the parent class. What you intend is to reference the parent class that is next in line in the MRO, not specifically the one you see the child inheriting from. Not using `super` can put unnecessary constraints on users of your code.
Using Variables for Class Names in Python?
223,559
27
2008-10-21T21:24:24Z
223,566
27
2008-10-21T21:26:09Z
[ "python", "dynamic-typing" ]
I want to know how to use variables for objects and function names in Python. In PHP, you can do this: ``` $className = "MyClass"; $newObject = new $className(); ``` How do you do this sort of thing in Python? Or, am I totally not appreciating some fundamental difference with Python, and if so, what is it?
In Python, ``` className = MyClass newObject = className() ``` The first line makes the variable `className` refer to the same thing as `MyClass`. Then the next line calls the `MyClass` constructor through the `className` variable. As a concrete example: ``` >>> className = list >>> newObject = className() >>> newObject [] ``` (In Python, `list` is the constructor for the `list` class.) The difference is that in PHP, you represent the name of the class you want to refer to as a string, while in Python you can reference the same class directly. If you *must* use a string (for example if the name of the class is created dynamically), then you will need to use other techniques.
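One such technique — a sketch assuming the class is defined in the current module — is to look the name up in `globals()`:

```python
class MyClass(object):
    def __init__(self):
        self.greeting = 'hello'

class_name = "MyClass"          # the name arrives as a string
klass = globals()[class_name]   # map the string to the class object
new_object = klass()            # then call it as usual
print(new_object.greeting)      # hello
```

For classes living in another module, `getattr(some_module, class_name)` serves the same purpose.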
Using Variables for Class Names in Python?
223,559
27
2008-10-21T21:24:24Z
223,586
41
2008-10-21T21:33:05Z
[ "python", "dynamic-typing" ]
I want to know how to use variables for objects and function names in Python. In PHP, you can do this: ``` $className = "MyClass"; $newObject = new $className(); ``` How do you do this sort of thing in Python? Or, am I totally not appreciating some fundamental difference with Python, and if so, what is it?
Assuming that some_module has a class named "class_name": ``` import some_module klass = getattr(some_module, "class_name") some_object = klass() ``` I should note that you should be careful here: turning strings into code can be dangerous if the string came from the user, so you should keep security in mind in this situation. :) One other method (assuming we are still using "class_name"): ``` class_lookup = { 'class_name' : class_name } some_object = class_lookup['class_name']() # call the class once we've pulled it out of the dict ``` The latter method is probably the most secure way of doing this, so it's probably what you should use if at all possible.
Using Variables for Class Names in Python?
223,559
27
2008-10-21T21:24:24Z
2,875,113
15
2010-05-20T15:14:38Z
[ "python", "dynamic-typing" ]
I want to know how to use variables for objects and function names in Python. In PHP, you can do this: ``` $className = "MyClass"; $newObject = new $className(); ``` How do you do this sort of thing in Python? Or, am I totally not appreciating some fundamental difference with Python, and if so, what is it?
If you need to create a dynamic class in Python (i.e. one whose name is a variable) you can use type(), which takes 3 params: name, bases, attrs ``` >>> class_name = 'MyClass' >>> klass = type(class_name, (object,), {'msg': 'foobarbaz'}) >>> klass <class '__main__.MyClass'> >>> inst = klass() >>> inst.msg 'foobarbaz' ``` * Note, however, that this does not 'instantiate' an existing class of that name (i.e. it does not call its constructor etc.); it creates a new(!) class with the given name.
Elegant structured text file parsing
223,866
19
2008-10-21T23:00:20Z
223,925
12
2008-10-21T23:25:53Z
[ "python", "ruby", "perl", "text-parsing" ]
I need to parse a transcript of a live chat conversation. My first thought on seeing the file was to throw regular expressions at the problem but I was wondering what other approaches people have used. I put elegant in the title as i've previously found that this type of task has a danger of getting hard to maintain just relying on regular expressions. The transcripts are being generated by www.providesupport.com and emailed to an account, I then extract a plain text transcript attachment from the email. The reason for parsing the file is to extract the conversation text for later but also to identify visitors and operators names so that the information can be made available via a CRM. Here is an example of a transcript file: ``` Chat Transcript Visitor: Random Website Visitor Operator: Milton Company: Initech Started: 16 Oct 2008 9:13:58 Finished: 16 Oct 2008 9:45:44 Random Website Visitor: Where do i get the cover sheet for the TPS report? * There are no operators available at the moment. If you would like to leave a message, please type it in the input field below and click "Send" button * Call accepted by operator Milton. Currently in room: Milton, Random Website Visitor. Milton: Y-- Excuse me. You-- I believe you have my stapler? Random Website Visitor: I really just need the cover sheet, okay? Milton: it's not okay because if they take my stapler then I'll, I'll, I'll set the building on fire... Random Website Visitor: oh i found it, thanks anyway. * Random Website Visitor is now off-line and may not reply. Currently in room: Milton. Milton: Well, Ok. But… that's the last straw. * Milton has left the conversation. Currently in room: room is empty. Visitor Details --------------- Your Name: Random Website Visitor Your Question: Where do i get the cover sheet for the TPS report? 
IP Address: 255.255.255.255 Host Name: 255.255.255.255 Referrer: Unknown Browser/OS: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2; .NET CLR 1.1.4322; InfoPath.1; .NET CLR 2.0.50727) ```
No and in fact, for the specific type of task you describe, I doubt there's a "cleaner" way to do it than regular expressions. It looks like your files have embedded line breaks so typically what we'll do here is make the line your unit of decomposition, applying per-line regexes. Meanwhile, you create a small state machine and use regex matches to trigger transitions in that state machine. This way you know where you are in the file, and what types of character data you can expect. Also, consider using named capture groups and loading the regexes from an external file. That way if the format of your transcript changes, it's a simple matter of tweaking the regex, rather than writing new parse-specific code.
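A sketch of that line-oriented state machine (the states and regexes below are assumptions drawn from the sample transcript in the question, not from any ProvideSupport format specification):

```python
import re

# Header lines, "* ..." status events, and "Speaker: text" messages.
HEADER_RE = re.compile(r'^(Visitor|Operator|Company|Started|Finished):\s*(.+)$')
EVENT_RE = re.compile(r'^\*\s*.+$')             # "* Call accepted by ..." lines
MESSAGE_RE = re.compile(r'^([^:*]+):\s*(.+)$')  # "Speaker: text" lines

def parse_transcript(text):
    meta, messages = {}, []
    state = 'header'
    for line in text.splitlines():
        line = line.strip()
        if not line or line == 'Chat Transcript':
            continue
        if state == 'header':
            m = HEADER_RE.match(line)
            if m:
                meta[m.group(1)] = m.group(2)
                continue
            state = 'conversation'   # first unrecognised line: header is over
        if state == 'conversation':
            if line.startswith('Visitor Details'):
                state = 'details'    # trailer section; ignored in this sketch
                continue
            if EVENT_RE.match(line):
                continue             # status events produce no message here
            m = MESSAGE_RE.match(line)
            if m:
                messages.append((m.group(1), m.group(2)))
    return meta, messages
```

Run over the sample transcript this yields `meta` entries like `'Visitor'` and `'Operator'`, plus a list of (speaker, text) pairs, with the `* ...` status lines driving state changes rather than output.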
Elegant structured text file parsing
223,866
19
2008-10-21T23:00:20Z
224,344
11
2008-10-22T03:01:20Z
[ "python", "ruby", "perl", "text-parsing" ]
I need to parse a transcript of a live chat conversation. My first thought on seeing the file was to throw regular expressions at the problem but I was wondering what other approaches people have used. I put elegant in the title as i've previously found that this type of task has a danger of getting hard to maintain just relying on regular expressions. The transcripts are being generated by www.providesupport.com and emailed to an account, I then extract a plain text transcript attachment from the email. The reason for parsing the file is to extract the conversation text for later but also to identify visitors and operators names so that the information can be made available via a CRM. Here is an example of a transcript file: ``` Chat Transcript Visitor: Random Website Visitor Operator: Milton Company: Initech Started: 16 Oct 2008 9:13:58 Finished: 16 Oct 2008 9:45:44 Random Website Visitor: Where do i get the cover sheet for the TPS report? * There are no operators available at the moment. If you would like to leave a message, please type it in the input field below and click "Send" button * Call accepted by operator Milton. Currently in room: Milton, Random Website Visitor. Milton: Y-- Excuse me. You-- I believe you have my stapler? Random Website Visitor: I really just need the cover sheet, okay? Milton: it's not okay because if they take my stapler then I'll, I'll, I'll set the building on fire... Random Website Visitor: oh i found it, thanks anyway. * Random Website Visitor is now off-line and may not reply. Currently in room: Milton. Milton: Well, Ok. But… that's the last straw. * Milton has left the conversation. Currently in room: room is empty. Visitor Details --------------- Your Name: Random Website Visitor Your Question: Where do i get the cover sheet for the TPS report? 
IP Address: 255.255.255.255 Host Name: 255.255.255.255 Referrer: Unknown Browser/OS: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2; .NET CLR 1.1.4322; InfoPath.1; .NET CLR 2.0.50727) ```
With Perl, you can use [Parse::RecDescent](http://search.cpan.org/perldoc?Parse::RecDescent) It is simple, and your grammar will be maintainable later on.
How do I perform query filtering in django templates
223,990
50
2008-10-21T23:55:57Z
224,003
83
2008-10-22T00:00:27Z
[ "python", "django", "django-templates" ]
I need to perform a filtered query from within a django template, to get a set of objects equivalent to python code within a view: ``` queryset = Modelclass.objects.filter(somekey=foo) ``` In my template I would like to do ``` {% for object in data.somekey_set.FILTER %} ``` but I just can't seem to find out how to write FILTER.
You can't do this, which is by design. The Django framework authors intended a strict separation of presentation code from data logic. Filtering models is data logic, and outputting HTML is presentation logic. So you have several options. The easiest is to do the filtering, then pass the result to `render_to_response`. Or you could write a method in your model so that you can say `{% for object in data.filtered_set %}`. Finally, you could write your own template tag, although in this specific case I would advise against that.
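The "write a method in your model" option looks like this in outline — a plain-Python sketch where `Basket`, `fruit_set`, and `active` are made-up names and `FakeRelatedManager` stands in for a queryset so the example is self-contained; in real Django the property body would be e.g. `return self.fruit_set.filter(active=True)`:

```python
class FakeRelatedManager(object):
    """Stand-in for a Django related manager, for illustration only."""
    def __init__(self, items):
        self._items = items
    def filter(self, **kwargs):
        # mimics queryset filtering on attribute equality
        return [i for i in self._items
                if all(getattr(i, k) == v for k, v in kwargs.items())]

class Fruit(object):
    def __init__(self, name, active):
        self.name, self.active = name, active

class Basket(object):
    def __init__(self, fruits):
        self.fruit_set = FakeRelatedManager(fruits)

    @property
    def active_fruit(self):
        # the filtering logic lives on the model, not in the template
        return self.fruit_set.filter(active=True)

basket = Basket([Fruit('apple', True), Fruit('banana', False)])
# Template side: {% for fruit in basket.active_fruit %} ... {% endfor %}
print([f.name for f in basket.active_fruit])  # ['apple']
```

The template then only does attribute access, which keeps the data logic out of the presentation layer.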
How do I perform query filtering in django templates
223,990
50
2008-10-21T23:55:57Z
230,615
10
2008-10-23T17:22:35Z
[ "python", "django", "django-templates" ]
I need to perform a filtered query from within a django template, to get a set of objects equivalent to python code within a view: ``` queryset = Modelclass.objects.filter(somekey=foo) ``` In my template I would like to do ``` {% for object in data.somekey_set.FILTER %} ``` but I just can't seem to find out how to write FILTER.
I run into this problem on a regular basis and often use the "add a method" solution. However, there are definitely cases where "add a method" or "compute it in the view" don't work (or don't work well). E.g. when you are caching template fragments and need some non-trivial DB computation to produce it. You don't want to do the DB work unless you need to, but you won't know if you need to until you are deep in the template logic. Some other possible solutions: 1. Use the {% expr <expression> as <var_name> %} template tag found at <http://www.djangosnippets.org/snippets/9/> The expression is any legal Python expression with your template's Context as your local scope. 2. Change your template processor. Jinja2 (<http://jinja.pocoo.org/2/>) has syntax that is almost identical to the Django template language, but with full Python power available. It's also faster. You can do this wholesale, or you might limit its use to templates that *you* are working on, but use Django's "safer" templates for designer-maintained pages.
How do I perform query filtering in django templates
223,990
50
2008-10-21T23:55:57Z
14,010,929
7
2012-12-23T12:27:28Z
[ "python", "django", "django-templates" ]
I need to perform a filtered query from within a django template, to get a set of objects equivalent to python code within a view: ``` queryset = Modelclass.objects.filter(somekey=foo) ``` In my template I would like to do ``` {% for object in data.somekey_set.FILTER %} ``` but I just can't seem to find out how to write FILTER.
This can be solved with an assignment tag: ``` from django import template register = template.Library() @register.assignment_tag def query(qs, **kwargs): """ template tag which allows queryset filtering. Usage: {% query books author=author as mybooks %} {% for book in mybooks %} ... {% endfor %} """ return qs.filter(**kwargs) ```
How do I perform query filtering in django templates
223,990
50
2008-10-21T23:55:57Z
16,429,027
14
2013-05-07T21:41:18Z
[ "python", "django", "django-templates" ]
I need to perform a filtered query from within a django template, to get a set of objects equivalent to python code within a view: ``` queryset = Modelclass.objects.filter(somekey=foo) ``` In my template I would like to do ``` {% for object in data.somekey_set.FILTER %} ``` but I just can't seem to find out how to write FILTER.
I just add an extra template tag like this: ``` @register.filter def in_category(things, category): return things.filter(category=category) ``` Then I can do: ``` {% for category in categories %} {% for thing in things|in_category:category %} {{ thing }} {% endfor %} {% endfor %} ```
Javascript style dot notation for dictionary keys unpythonic?
224,026
25
2008-10-22T00:08:01Z
224,080
12
2008-10-22T00:40:31Z
[ "python", "coding-style" ]
I've started to use constructs like these: ``` class DictObj(object): def __init__(self): self.d = {} def __getattr__(self, m): return self.d.get(m, None) def __setattr__(self, m, v): super.__setattr__(self, m, v) ``` Update: based on this thread, I've revised the DictObj implementation to: ``` class dotdict(dict): def __getattr__(self, attr): return self.get(attr, None) __setattr__= dict.__setitem__ __delattr__= dict.__delitem__ class AutoEnum(object): def __init__(self): self.counter = 0 self.d = {} def __getattr__(self, c): if c not in self.d: self.d[c] = self.counter self.counter += 1 return self.d[c] ``` where DictObj is a dictionary that can be accessed via dot notation: ``` d = DictObj() d.something = 'one' ``` I find it more aesthetically pleasing than `d['something']`. Note that accessing an undefined key returns None instead of raising an exception, which is also nice. Update: Smashery makes a good point, which mhawke expands on for an easier solution. I'm wondering if there are any undesirable side effects of using **dict** instead of defining a new dictionary; if not, I like mhawke's solution a lot. AutoEnum is an auto-incrementing Enum, used like this: ``` CMD = AutoEnum() cmds = { "peek": CMD.PEEK, "look": CMD.PEEK, "help": CMD.HELP, "poke": CMD.POKE, "modify": CMD.POKE, } ``` Both are working well for me, but I'm feeling unpythonic about them. Are these in fact bad constructs?
With regards to the `DictObj`, would the following work for you? A blank class will allow you to arbitrarily add to or replace stuff in a container object. ``` class Container(object): pass >>> myContainer = Container() >>> myContainer.spam = "in a can" >>> myContainer.eggs = "in a shell" ``` If you want to not throw an AttributeError when there is no attribute, what do you think about the following? Personally, I'd prefer to use a dict for clarity, or to use a try/except clause. ``` class QuietContainer(object): def __getattr__(self, attribute): try: return object.__getattribute__(self, attribute) except AttributeError: return None >>> cont = QuietContainer() >>> print cont.me None ``` Right?
Javascript style dot notation for dictionary keys unpythonic?
224,026
25
2008-10-22T00:08:01Z
224,722
8
2008-10-22T07:06:02Z
[ "python", "coding-style" ]
I've started to use constructs like these: ``` class DictObj(object): def __init__(self): self.d = {} def __getattr__(self, m): return self.d.get(m, None) def __setattr__(self, m, v): super.__setattr__(self, m, v) ``` Update: based on this thread, I've revised the DictObj implementation to: ``` class dotdict(dict): def __getattr__(self, attr): return self.get(attr, None) __setattr__= dict.__setitem__ __delattr__= dict.__delitem__ class AutoEnum(object): def __init__(self): self.counter = 0 self.d = {} def __getattr__(self, c): if c not in self.d: self.d[c] = self.counter self.counter += 1 return self.d[c] ``` where DictObj is a dictionary that can be accessed via dot notation: ``` d = DictObj() d.something = 'one' ``` I find it more aesthetically pleasing than `d['something']`. Note that accessing an undefined key returns None instead of raising an exception, which is also nice. Update: Smashery makes a good point, which mhawke expands on for an easier solution. I'm wondering if there are any undesirable side effects of using **dict** instead of defining a new dictionary; if not, I like mhawke's solution a lot. AutoEnum is an auto-incrementing Enum, used like this: ``` CMD = AutoEnum() cmds = { "peek": CMD.PEEK, "look": CMD.PEEK, "help": CMD.HELP, "poke": CMD.POKE, "modify": CMD.POKE, } ``` Both are working well for me, but I'm feeling unpythonic about them. Are these in fact bad constructs?
This is a simpler version of your DictObj class: ``` class DictObj(object): def __getattr__(self, attr): return self.__dict__.get(attr) >>> d = DictObj() >>> d.something = 'one' >>> print d.something one >>> print d.somethingelse None >>> ```
Javascript style dot notation for dictionary keys unpythonic?
224,026
25
2008-10-22T00:08:01Z
224,876
22
2008-10-22T08:29:47Z
[ "python", "coding-style" ]
I've started to use constructs like these: ``` class DictObj(object): def __init__(self): self.d = {} def __getattr__(self, m): return self.d.get(m, None) def __setattr__(self, m, v): super.__setattr__(self, m, v) ``` Update: based on this thread, I've revised the DictObj implementation to: ``` class dotdict(dict): def __getattr__(self, attr): return self.get(attr, None) __setattr__= dict.__setitem__ __delattr__= dict.__delitem__ class AutoEnum(object): def __init__(self): self.counter = 0 self.d = {} def __getattr__(self, c): if c not in self.d: self.d[c] = self.counter self.counter += 1 return self.d[c] ``` where DictObj is a dictionary that can be accessed via dot notation: ``` d = DictObj() d.something = 'one' ``` I find it more aesthetically pleasing than `d['something']`. Note that accessing an undefined key returns None instead of raising an exception, which is also nice. Update: Smashery makes a good point, which mhawke expands on for an easier solution. I'm wondering if there are any undesirable side effects of using **dict** instead of defining a new dictionary; if not, I like mhawke's solution a lot. AutoEnum is an auto-incrementing Enum, used like this: ``` CMD = AutoEnum() cmds = { "peek": CMD.PEEK, "look": CMD.PEEK, "help": CMD.HELP, "poke": CMD.POKE, "modify": CMD.POKE, } ``` Both are working well for me, but I'm feeling unpythonic about them. Are these in fact bad constructs?
Your DictObj example is actually quite common. Object-style dot-notation access can be a win if you are dealing with ‘things that resemble objects’, ie. they have fixed property names containing only characters valid in Python identifiers. Stuff like database rows or form submissions can be usefully stored in this kind of object, making code a little more readable without the excess of ['item access']. The implementation is a bit limited - you don't get the nice constructor syntax of dict, len(), comparisons, 'in', iteration or nice reprs. You can of course implement those things yourself, but in the new-style-classes world you can get them for free by simply subclassing dict: ``` class AttrDict(dict): __getattr__ = dict.__getitem__ __setattr__ = dict.__setitem__ __delattr__ = dict.__delitem__ ``` To get the default-to-None behaviour, simply subclass Python 2.5's collections.defaultdict class instead of dict.
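A runnable Python 3 sketch of the subclass-`dict` approach described above, with the default-to-None reads (class and key names are illustrative). One caveat worth a comment: attribute names that collide with real dict methods (`get`, `items`, ...) still resolve to the methods, not the keys.

```python
class AttrDict(dict):
    """dict subclass with attribute access; missing keys read as None."""
    def __getattr__(self, name):
        # Only called when normal attribute lookup fails, i.e. for keys.
        return self.get(name)  # None for missing keys instead of AttributeError
    __setattr__ = dict.__setitem__
    __delattr__ = dict.__delitem__

d = AttrDict(colour="red", size=3)  # the normal dict constructor still works
d.shape = "circle"

print(d.colour)               # red
print(d.missing)              # None
print(len(d), "shape" in d)   # 3 True
print(sorted(d))              # ['colour', 'shape', 'size']
```

Because it is a real dict subclass, `len()`, `in`, iteration, comparisons and the repr all come for free, which is exactly the advantage over the hand-rolled `DictObj`.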
RFC 1123 Date Representation in Python?
225,086
45
2008-10-22T09:59:21Z
225,106
66
2008-10-22T10:07:19Z
[ "python", "http", "datetime" ]
Is there a fairly easy way to convert a datetime object into an RFC 1123 (HTTP/1.1) date/time string, i.e. a string with the format ``` Sun, 06 Nov 1994 08:49:37 GMT ``` Using `strftime` does not work, since the strings are locale-dependant. Do I have to build the string by hand?
You can use wsgiref.handlers.format\_date\_time from the stdlib which does not rely on locale settings ``` from wsgiref.handlers import format_date_time from datetime import datetime from time import mktime now = datetime.now() stamp = mktime(now.timetuple()) print format_date_time(stamp) #--> Wed, 22 Oct 2008 10:52:40 GMT ``` You can use email.utils.formatdate from the stdlib which does not rely on locale settings ``` from email.utils import formatdate from datetime import datetime from time import mktime now = datetime.now() stamp = mktime(now.timetuple()) print formatdate( timeval = stamp, localtime = False, usegmt = True ) #--> Wed, 22 Oct 2008 10:55:46 GMT ``` If you can set the locale process-wide then you can do: ``` import locale, datetime locale.setlocale(locale.LC_TIME, 'en_US') datetime.datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT') ``` If you don't want to set the locale process-wide you could use [Babel](http://babel.edgewall.org/) [date formatting](http://babel.edgewall.org/wiki/Documentation/dates.html) ``` from datetime import datetime from babel.dates import format_datetime now = datetime.utcnow() format = 'EEE, dd LLL yyyy hh:mm:ss' print format_datetime(now, format, locale='en') + ' GMT' ``` A manual way to format it which is identical to wsgiref.handlers.format\_date\_time is: ``` def httpdate(dt): """Return a string representation of a date according to RFC 1123 (HTTP/1.1). The supplied date must be in UTC. """ weekday = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"][dt.weekday()] month = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"][dt.month - 1] return "%s, %02d %s %04d %02d:%02d:%02d GMT" % (weekday, dt.day, month, dt.year, dt.hour, dt.minute, dt.second) ```
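On Python 3.3 and later there is also `email.utils.format_datetime`, which accepts a `datetime` directly — no `timetuple`/`mktime` detour. With `usegmt=True` it emits the literal `GMT` zone that RFC 1123 / HTTP requires, and it then insists on an aware datetime in UTC. A brief sketch using the RFC's own example timestamp:

```python
from datetime import datetime, timezone
from email.utils import format_datetime

# usegmt=True requires an aware UTC datetime and writes "GMT" instead of "+0000".
dt = datetime(1994, 11, 6, 8, 49, 37, tzinfo=timezone.utc)
print(format_datetime(dt, usegmt=True))  # Sun, 06 Nov 1994 08:49:37 GMT
```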
RFC 1123 Date Representation in Python?
225,086
45
2008-10-22T09:59:21Z
225,177
26
2008-10-22T10:34:04Z
[ "python", "http", "datetime" ]
Is there a fairly easy way to convert a datetime object into an RFC 1123 (HTTP/1.1) date/time string, i.e. a string with the format ``` Sun, 06 Nov 1994 08:49:37 GMT ``` Using `strftime` does not work, since the strings are locale-dependant. Do I have to build the string by hand?
You can use the formatdate() function from the Python standard email module: ``` from email.utils import formatdate print formatdate(timeval=None, localtime=False, usegmt=True) ``` This gives the current time in the desired format: ``` Wed, 22 Oct 2008 10:32:33 GMT ``` In fact, this function does it "by hand" without using strftime().
Parsing different date formats from feedparser in python?
225,274
8
2008-10-22T11:09:20Z
225,382
14
2008-10-22T11:35:42Z
[ "python", "datetime", "parsing", "rss", "feedparser" ]
I'm trying to get the dates from entries in two different RSS feeds through [feedparser](http://feedparser.org). Here is what I'm doing: ``` import feedparser as fp reddit = fp.parse("http://www.reddit.com/.rss") cc = fp.parse("http://contentconsumer.com/feed") print reddit.entries[0].date print cc.entries[0].date ``` And here's how they come out: ``` 2008-10-21T22:23:28.033841+00:00 Wed, 15 Oct 2008 10:06:10 +0000 ``` I want to get to the point where I can find out which is newer easily. I've tried using the datetime module of Python and searching through the feedparser documentation, but I can't get past this problem. Any help would be much appreciated.
Parsing of dates is a pain with RSS feeds in-the-wild, and that's where `feedparser` can be a big help. If you use the `*_parsed` properties (like `updated_parsed`), `feedparser` will have done the work and will return a 9-tuple Python date in UTC. See <http://packages.python.org/feedparser/date-parsing.html> for more gory details.
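Because the `*_parsed` values are UTC `time.struct_time` 9-tuples, they compare correctly as plain tuples, which is enough to answer "which entry is newer?". A self-contained sketch with stand-in values shaped like `entry.updated_parsed` (real feedparser output would need a network fetch, so these tuples are hypothetical):

```python
import calendar
import time
from datetime import datetime, timezone

# Stand-ins for entry.updated_parsed: struct_time 9-tuples, already UTC.
reddit_time = time.struct_time((2008, 10, 21, 22, 23, 28, 1, 295, 0))
cc_time     = time.struct_time((2008, 10, 15, 10, 6, 10, 2, 289, 0))

# struct_time compares field by field (year, month, day, ...),
# so this alone tells you which feed entry is newer.
print(reddit_time > cc_time)  # True

# Convert to datetime if you need arithmetic on the difference.
def to_dt(st):
    return datetime.fromtimestamp(calendar.timegm(st), tz=timezone.utc)

print(to_dt(reddit_time) - to_dt(cc_time))
```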
"Pretty" Continuous Integration for Python
225,598
112
2008-10-22T12:49:54Z
225,788
10
2008-10-22T13:46:55Z
[ "python", "jenkins", "continuous-integration", "buildbot" ]
This is a slightly.. vain question, but BuildBot's output isn't particularly nice to look at.. For example, compared to.. * [phpUnderControl](http://phpundercontrol.org/about.html) * [Jenkins](http://jenkins-ci.org/content/about-jenkins-ci) + [Hudson](http://blogs.oracle.com/arungupta/entry/top_10_features_of_hudson) * [CruiseControl.rb](http://cruisecontrolrb.thoughtworks.com/) ..and others, [BuildBot](http://buildbot.python.org/stable/) looks rather.. archaic I'm currently playing with Hudson, but it is very Java-centric (although with [this guide](http://redsolo.blogspot.com/2007/11/hudson-embraces-python.html), I found it easier to setup than BuildBot, and produced more info) Basically: is there any Continuous Integration systems aimed at python, that produce lots of shiny graphs and the likes? --- **Update:** Since this time the Jenkins project has replaced Hudson as the community version of the package. The original authors have moved to this project as well. Jenkins is now a standard package on Ubuntu/Debian, RedHat/Fedora/CentOS, and others. The following update is still essentially correct. The starting point to do this with [Jenkins](http://jenkins-ci.org) is different. ***Update:*** After trying a few alternatives, I think I'll stick with Hudson. [Integrity](http://integrityapp.com/) was nice and simple, but quite limited. I think [Buildbot](http://buildbot.net/trac) is better suited to having numerous build-slaves, rather than everything running on a single machine like I was using it. 
Setting Hudson up for a Python project was pretty simple: * Download Hudson from <http://hudson-ci.org/> * Run it with `java -jar hudson.war` * Open the web interface on the default address of `http://localhost:8080` * Go to Manage Hudson, Plugins, click "Update" or similar * Install the Git plugin (I had to set the `git` path in the Hudson global preferences) * Create a new project, enter the repository, SCM polling intervals and so on * Install `nosetests` via `easy_install` if it's not already * In a build step, add `nosetests --with-xunit --verbose` * Check "Publish JUnit test result report" and set "Test report XMLs" to `**/nosetests.xml` That's all that's required. You can set up email notifications, and [the plugins](http://wiki.hudson-ci.org/display/HUDSON/Plugins) are worth a look. A few I'm currently using for Python projects: * [SLOCCount plugin](http://wiki.hudson-ci.org/display/HUDSON/SLOCCount+Plugin) to count lines of code (and graph it!) - you need to install [sloccount](http://www.dwheeler.com/sloccount/) separately * [Violations](http://wiki.hudson-ci.org/display/HUDSON/Violations) to parse the PyLint output (you can set up warning thresholds, graph the number of violations over each build) * [Cobertura](http://wiki.hudson-ci.org/display/HUDSON/Cobertura+Plugin) can parse the coverage.py output. Nosetest can gather coverage while running your tests, using `nosetests --with-coverage` (this writes the output to `**/coverage.xml`)
Don't know if it would do : [Bitten](http://bitten.edgewall.org/) is made by the guys who write Trac and is integrated with Trac. [Apache Gump](http://gump.apache.org/) is the CI tool used by Apache. It is written in Python.
"Pretty" Continuous Integration for Python
225,598
112
2008-10-22T12:49:54Z
228,196
9
2008-10-23T01:30:17Z
[ "python", "jenkins", "continuous-integration", "buildbot" ]
This is a slightly.. vain question, but BuildBot's output isn't particularly nice to look at.. For example, compared to.. * [phpUnderControl](http://phpundercontrol.org/about.html) * [Jenkins](http://jenkins-ci.org/content/about-jenkins-ci) + [Hudson](http://blogs.oracle.com/arungupta/entry/top_10_features_of_hudson) * [CruiseControl.rb](http://cruisecontrolrb.thoughtworks.com/) ..and others, [BuildBot](http://buildbot.python.org/stable/) looks rather.. archaic I'm currently playing with Hudson, but it is very Java-centric (although with [this guide](http://redsolo.blogspot.com/2007/11/hudson-embraces-python.html), I found it easier to setup than BuildBot, and produced more info) Basically: is there any Continuous Integration systems aimed at python, that produce lots of shiny graphs and the likes? --- **Update:** Since this time the Jenkins project has replaced Hudson as the community version of the package. The original authors have moved to this project as well. Jenkins is now a standard package on Ubuntu/Debian, RedHat/Fedora/CentOS, and others. The following update is still essentially correct. The starting point to do this with [Jenkins](http://jenkins-ci.org) is different. ***Update:*** After trying a few alternatives, I think I'll stick with Hudson. [Integrity](http://integrityapp.com/) was nice and simple, but quite limited. I think [Buildbot](http://buildbot.net/trac) is better suited to having numerous build-slaves, rather than everything running on a single machine like I was using it. 
We've had great success with [TeamCity](http://www.jetbrains.com/teamcity/) as our CI server and using nose as our test runner. [Teamcity plugin for nosetests](http://pypi.python.org/pypi/teamcity-nose) gives you pass/fail counts and a readable display for failed tests (that can be e-mailed). You can even see details of the test failures while your stack is running. It of course supports things like running on multiple machines, and it's much simpler to set up and maintain than buildbot.
"Pretty" Continuous Integration for Python
225,598
112
2008-10-22T12:49:54Z
667,800
40
2009-03-20T20:13:24Z
[ "python", "jenkins", "continuous-integration", "buildbot" ]
This is a slightly.. vain question, but BuildBot's output isn't particularly nice to look at.. For example, compared to.. * [phpUnderControl](http://phpundercontrol.org/about.html) * [Jenkins](http://jenkins-ci.org/content/about-jenkins-ci) + [Hudson](http://blogs.oracle.com/arungupta/entry/top_10_features_of_hudson) * [CruiseControl.rb](http://cruisecontrolrb.thoughtworks.com/) ..and others, [BuildBot](http://buildbot.python.org/stable/) looks rather.. archaic I'm currently playing with Hudson, but it is very Java-centric (although with [this guide](http://redsolo.blogspot.com/2007/11/hudson-embraces-python.html), I found it easier to setup than BuildBot, and produced more info) Basically: is there any Continuous Integration systems aimed at python, that produce lots of shiny graphs and the likes? --- **Update:** Since this time the Jenkins project has replaced Hudson as the community version of the package. The original authors have moved to this project as well. Jenkins is now a standard package on Ubuntu/Debian, RedHat/Fedora/CentOS, and others. The following update is still essentially correct. The starting point to do this with [Jenkins](http://jenkins-ci.org) is different. ***Update:*** After trying a few alternatives, I think I'll stick with Hudson. [Integrity](http://integrityapp.com/) was nice and simple, but quite limited. I think [Buildbot](http://buildbot.net/trac) is better suited to having numerous build-slaves, rather than everything running on a single machine like I was using it. 
You might want to check out [Nose](http://somethingaboutorange.com/mrl/projects/nose/) and [the Xunit output plugin](http://nose.readthedocs.org/en/latest/plugins/xunit.html). You can have it run your unit tests, and coverage checks with this command: ``` nosetests --with-xunit --enable-cover ``` That'll be helpful if you want to go the Jenkins route, or if you want to use another CI server that has support for JUnit test reporting. Similarly you can capture the output of pylint using the [violations plugin for Jenkins](https://wiki.jenkins-ci.org/display/JENKINS/Violations)
"Pretty" Continuous Integration for Python
225,598
112
2008-10-22T12:49:54Z
2,026,520
8
2010-01-08T09:16:55Z
[ "python", "jenkins", "continuous-integration", "buildbot" ]
This is a slightly.. vain question, but BuildBot's output isn't particularly nice to look at.. For example, compared to.. * [phpUnderControl](http://phpundercontrol.org/about.html) * [Jenkins](http://jenkins-ci.org/content/about-jenkins-ci) + [Hudson](http://blogs.oracle.com/arungupta/entry/top_10_features_of_hudson) * [CruiseControl.rb](http://cruisecontrolrb.thoughtworks.com/) ..and others, [BuildBot](http://buildbot.python.org/stable/) looks rather.. archaic I'm currently playing with Hudson, but it is very Java-centric (although with [this guide](http://redsolo.blogspot.com/2007/11/hudson-embraces-python.html), I found it easier to setup than BuildBot, and produced more info) Basically: is there any Continuous Integration systems aimed at python, that produce lots of shiny graphs and the likes? --- **Update:** Since this time the Jenkins project has replaced Hudson as the community version of the package. The original authors have moved to this project as well. Jenkins is now a standard package on Ubuntu/Debian, RedHat/Fedora/CentOS, and others. The following update is still essentially correct. The starting point to do this with [Jenkins](http://jenkins-ci.org) is different. ***Update:*** After trying a few alternatives, I think I'll stick with Hudson. [Integrity](http://integrityapp.com/) was nice and simple, but quite limited. I think [Buildbot](http://buildbot.net/trac) is better suited to having numerous build-slaves, rather than everything running on a single machine like I was using it. 
Buildbot's waterfall page can be considerably prettified. Here's a nice example <http://build.chromium.org/buildbot/waterfall/waterfall>
Unexpected list comprehension behaviour in Python
225,675
6
2008-10-22T13:15:08Z
225,801
14
2008-10-22T13:50:47Z
[ "python", "list-comprehension", "language-implementation" ]
I believe I'm getting bitten by some combination of nested scoping rules and list comprehensions. [Jeremy Hylton's blog post](http://www.python.org/~jeremy/weblog/040204.html) is suggestive about the causes, but I don't really understand CPython's implementation well-enough to figure out how to get around this. Here is an (overcomplicated?) example. If people have a simpler one that demos it, I'd like to hear it. The issue: the list comprehensions using next() are filled with the result from the last iteration. **edit**: The Problem: What exactly is going on with this, and how do I fix this? Do I have to use a standard for loop? Clearly the function is running the correct number of times, but the list comprehensions end up with the *final* value instead of the result of each loop. Some hypotheses: * generators? * lazy filling of list comprehensions? **code** ``` import itertools def digit(n): digit_list = [ (x,False) for x in xrange(1,n+1)] digit_list[0] = (1,True) return itertools.cycle ( digit_list) ``` ``` >>> D = digit(5) >>> [D.next() for x in range(5)] ## This list comprehension works as expected [(1, True), (2, False), (3, False), (4, False), (5, False)] ``` ``` class counter(object): def __init__(self): self.counter = [ digit(4) for ii in range(2) ] self.totalcount=0 self.display = [0,] * 2 def next(self): self.totalcount += 1 self.display[-1] = self.counter[-1].next()[0] print self.totalcount, self.display return self.display def next2(self,*args): self._cycle(1) self.totalcount += 1 print self.totalcount, self.display return self.display def _cycle(self,digit): d,first = self.counter[digit].next() #print digit, d, first #print self._display self.display[digit] = d if first and digit > 0: self._cycle(digit-1) C = counter() [C.next() for x in range(5)] [C.next2() for x in range(5)] ``` **OUTPUT** ``` In [44]: [C.next() for x in range(6)] 1 [0, 1] 2 [0, 2] 3 [0, 3] 4 [0, 4] 5 [0, 1] 6 [0, 2] Out[44]: [[0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2]] In [45]: [C.next2() for x in range(6)] 7 [0, 3] 8 [0, 4] 9 [1, 1] 10 [1, 2] 11 [1, 3] 12 [1, 4] Out[45]: [[1, 4], [1, 4], [1, 4], [1, 4], [1, 4], [1, 4]] # this should be: [[0,3],[0,4]....[1,4]] or similar ```
The problem is that with `return self.display` you return a *reference* to this list (not a copy). So what you end up with is a list where each element is a reference to self.display. To illustrate, look at the following: ``` >>> a = [1,2] >>> b = [a,a] >>> b [[1, 2], [1, 2]] >>> a.append(3) >>> b [[1, 2, 3], [1, 2, 3]] ``` You probably want to use something like `return self.display[:]`.
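A minimal standalone sketch of the aliasing effect the answer describes, and of the `[:]` copy fix:

```python
# Returning/appending the list itself hands out aliases: every element of
# `history` is the same object, so later mutations appear "retroactively".
display = [0, 0]
history = []
for i in range(1, 4):
    display[-1] = i
    history.append(display)        # alias -- the bug
print(history)                     # [[0, 3], [0, 3], [0, 3]]

# Appending a shallow copy snapshots the current state instead.
display = [0, 0]
snapshots = []
for i in range(1, 4):
    display[-1] = i
    snapshots.append(display[:])   # copy -- the fix
print(snapshots)                   # [[0, 1], [0, 2], [0, 3]]
```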
Batch Renaming of Files in a Directory
225,735
55
2008-10-22T13:33:23Z
227,125
64
2008-10-22T18:59:57Z
[ "python", "file-io", "rename", "batch-rename" ]
Is there an easy way to rename a group of files already contained in a directory, using Python? **Example:** I have a directory full of \*.doc files and I want to rename them in a consistent way. > X.doc -> "new(X).doc" > > Y.doc -> "new(Y).doc"
Such renaming is quite easy, for example with [os](http://docs.python.org/lib/module-os.html) and [glob](http://docs.python.org/lib/module-glob.html) modules: ``` import glob, os def rename(dir, pattern, titlePattern): for pathAndFilename in glob.iglob(os.path.join(dir, pattern)): title, ext = os.path.splitext(os.path.basename(pathAndFilename)) os.rename(pathAndFilename, os.path.join(dir, titlePattern % title + ext)) ``` You could then use it in your example like this: ``` rename(r'c:\temp\xx', r'*.doc', r'new(%s)') ``` The above example will convert all `*.doc` files in `c:\temp\xx` dir to `new(%s).doc`, where `%s` is the previous base name of the file (without extension).
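On Python 3.4+ the same renaming can be written with `pathlib`. This sketch (the function name and format string are illustrative, not from the original answer) demonstrates it inside a throwaway temporary directory so it is safe to run anywhere:

```python
import tempfile
from pathlib import Path

def rename_all(directory, pattern, title_format):
    """Rename every file matching `pattern`; e.g. title_format='new({})'
    turns X.doc into new(X).doc."""
    # list() snapshots the matches first, so freshly renamed files
    # (which may also match the pattern) are not re-processed.
    for path in list(Path(directory).glob(pattern)):
        path.rename(path.with_name(title_format.format(path.stem) + path.suffix))

# Demonstration in a disposable directory
with tempfile.TemporaryDirectory() as tmp:
    for name in ("X.doc", "Y.doc"):
        (Path(tmp) / name).touch()
    rename_all(tmp, "*.doc", "new({})")
    print(sorted(p.name for p in Path(tmp).iterdir()))  # ['new(X).doc', 'new(Y).doc']
```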
Batch Renaming of Files in a Directory
225,735
55
2008-10-22T13:33:23Z
227,209
12
2008-10-22T19:30:05Z
[ "python", "file-io", "rename", "batch-rename" ]
Is there an easy way to rename a group of files already contained in a directory, using Python? **Example:** I have a directory full of \*.doc files and I want to rename them in a consistent way. > X.doc -> "new(X).doc" > > Y.doc -> "new(Y).doc"
If you don't mind using regular expressions, then this function would give you much power in renaming files: ``` import re, glob, os def renamer(files, pattern, replacement): for pathname in glob.glob(files): basename= os.path.basename(pathname) new_filename= re.sub(pattern, replacement, basename) if new_filename != basename: os.rename( pathname, os.path.join(os.path.dirname(pathname), new_filename)) ``` So in your example, you could do (assuming it's the current directory where the files are): ``` renamer("*.doc", r"^(.*)\.doc$", r"new(\1).doc") ``` but you could also roll back to the initial filenames: ``` renamer("*.doc", r"^new\((.*)\)\.doc", r"\1.doc") ``` and more.
Batch Renaming of Files in a Directory
225,735
55
2008-10-22T13:33:23Z
7,917,798
54
2011-10-27T14:53:14Z
[ "python", "file-io", "rename", "batch-rename" ]
Is there an easy way to rename a group of files already contained in a directory, using Python? **Example:** I have a directory full of \*.doc files and I want to rename them in a consistent way. > X.doc -> "new(X).doc" > > Y.doc -> "new(Y).doc"
I prefer writing small one liners for each replace I have to do instead of making a more generic and complex code. E.g.: This replaces all underscores with hyphens in any non-hidden file in the current directory ``` import os [os.rename(f, f.replace('_', '-')) for f in os.listdir('.') if not f.startswith('.')] ```
Batch Renaming of Files in a Directory
225,735
55
2008-10-22T13:33:23Z
20,371,910
7
2013-12-04T09:46:49Z
[ "python", "file-io", "rename", "batch-rename" ]
Is there an easy way to rename a group of files already contained in a directory, using Python? **Example:** I have a directory full of \*.doc files and I want to rename them in a consistent way. > X.doc -> "new(X).doc" > > Y.doc -> "new(Y).doc"
I have this to simply rename all files in the subfolders of a folder: ``` import os def replace(fpath, old_str, new_str): for path, subdirs, files in os.walk(fpath): for name in files: if(old_str.lower() in name.lower()): os.rename(os.path.join(path,name), os.path.join(path, name.lower().replace(old_str,new_str))) ``` I am replacing all occurrences of old\_str, whatever their case, with new\_str.
Django Forms - How to Use Prefix Parameter
226,510
16
2008-10-22T16:10:03Z
226,568
29
2008-10-22T16:27:38Z
[ "python", "html", "django", "forms" ]
Say I have a form like: ``` class GeneralForm(forms.Form): field1 = forms.IntegerField(required=False) field2 = forms. IntegerField(required=False) ``` And I want to show it twice on a page within one form tag each time with a different prefix e.g.,: ``` rest of page ... <form ..> GeneralForm(data,prefix="form1").as_table() GeneralForm(data,prefix="form2").as_table() <input type="submit" /> </form> rest of page ... ``` When the user submits this, how do I get the submitted form back into two separate forms to do validation, and redisplay it? [This was the only documentation](http://docs.djangoproject.com/en/dev/ref/forms/api/#prefixes-for-forms) I could find and it's peckish.
You process each form as you normally would, ensuring that you create instances which have the same prefixes as those used to generate the form initially. Here's a slightly awkward example using the form you've given, as I don't know what the exact use case is: ``` def some_view(request): if request.method == 'POST': form1 = GeneralForm(request.POST, prefix='form1') form2 = GeneralForm(request.POST, prefix='form2') if all([form1.is_valid(), form2.is_valid()]): pass # Do stuff with the forms else: form1 = GeneralForm(prefix='form1') form2 = GeneralForm(prefix='form2') return render_to_response('some_template.html', { 'form1': form1, 'form2': form2, }) ``` Here's some real-world sample code which demonstrates processing forms using the prefix: <http://collingrady.wordpress.com/2008/02/18/editing-multiple-objects-in-django-with-newforms/>
How does one put a link / url to the web-site's home page in Django?
226,528
17
2008-10-22T16:14:40Z
226,540
32
2008-10-22T16:19:44Z
[ "python", "django", "django-urls" ]
In Django templates, is there a variable in the context (e.g. `{{ BASE\_URL }}`, `{{ ROOT\_URL }}`, or `{{ MEDIA\_URL }}` that one can use to link to the `home` url of a project? I.e. if Django is running in the root of a project, the variable (let's call it R) `{{ R }}` in a template would be `/`. If the root url is a sub-folder `http://host/X/` the variable `{{ R }}` would be `/X/` (or `http://host/X/`). It seems painfully simple, but I can't find an answer. :) Thank you!
You could give the URL configuration which you're using to handle the home page a name and use that: urls.py: ``` from django.conf.urls.defaults import * urlpatterns = patterns('myproject.views', url(r'^$', 'index', name='index'), ) ``` Templates: ``` <a href="{% url index %}">... ``` This note in the Django Book has some tips about deploying your applications to a subdirectory: <http://www.djangobook.com/en/1.0/chapter20/#cn43>
Python Disk-Based Dictionary
226,693
34
2008-10-22T17:00:11Z
226,796
18
2008-10-22T17:34:09Z
[ "python", "database", "dictionary", "disk-based" ]
I was running some dynamic programming code (trying to brute-force disprove the Collatz conjecture =P) and I was using a dict to store the lengths of the chains I had already computed. Obviously, it ran out of memory at some point. Is there any easy way to use some variant of a `dict` which will page parts of itself out to disk when it runs out of room? Obviously it will be slower than an in-memory dict, and it will probably end up eating my hard drive space, but this could apply to other problems that are not so futile. I realized that a disk-based dictionary is pretty much a database, so I manually implemented one using sqlite3, but I didn't do it in any smart way and had it look up every element in the DB one at a time... it was about 300x slower. Is the smartest way to just create my own set of dicts, keeping only one in memory at a time, and paging them out in some efficient manner?
Hash-on-disk is generally addressed with Berkeley DB or something similar - several options are listed in the [Python Data Persistence documentation](http://docs.python.org/library/persistence.html). You can front it with an in-memory cache, but I'd test against native performance first; with operating system caching in place it might come out about the same.
Python Disk-Based Dictionary
226,693
34
2008-10-22T17:00:11Z
228,837
49
2008-10-23T07:22:01Z
[ "python", "database", "dictionary", "disk-based" ]
I was running some dynamic programming code (trying to brute-force disprove the Collatz conjecture =P) and I was using a dict to store the lengths of the chains I had already computed. Obviously, it ran out of memory at some point. Is there any easy way to use some variant of a `dict` which will page parts of itself out to disk when it runs out of room? Obviously it will be slower than an in-memory dict, and it will probably end up eating my hard drive space, but this could apply to other problems that are not so futile. I realized that a disk-based dictionary is pretty much a database, so I manually implemented one using sqlite3, but I didn't do it in any smart way and had it look up every element in the DB one at a time... it was about 300x slower. Is the smartest way to just create my own set of dicts, keeping only one in memory at a time, and paging them out in some efficient manner?
The 3rd party [shove](http://pypi.python.org/pypi/shove) module is also worth taking a look at. It's very similar to shelve in that it is a simple dict-like object; however, it can store to various backends (such as file, SVN, and S3), provides optional compression, and is even threadsafe. It's a very handy module: ``` from shove import Shove mem_store = Shove() file_store = Shove('file://mystore') file_store['key'] = value ```
How do you create a simple Google Talk Client using the Twisted Words Python library?
227,279
16
2008-10-22T19:48:33Z
228,959
7
2008-10-23T08:32:31Z
[ "python", "twisted", "xmpp", "google-talk" ]
I am interested in making a Google Talk client using Python and would like to use the Twisted libraries Words module. I have looked at the examples, but they don't work with the current implementation of Google Talk. Has anybody had any luck with this? Would you mind documenting a brief tutorial? As a simple task, I'd like to create a client/bot that tracks the Online time of my various Google Talk accounts so that I can get an aggregate number. I figure I could friend the bot in each account and then use the XMPP presence information to keep track of the times that I can then aggregate. Thanks.
I have written a simple Jabber bot (and thus Google Talk bot) using the `xmpppy` library, which works well. The examples on [xmpppy](http://xmpppy.sourceforge.net/) should get you started (specifically [`bot.py`](http://xmpppy.sourceforge.net/examples/bot.py)). As for something actually implemented in twisted.Words: [Here](http://yoan.dosimple.ch/blog/2007/01/30/) is a simple tutorial on creating a bot that prints every received message to the local terminal (and a version that replies with the reverse of the received message). To track the online time of various accounts, you would add a callback for "presences" (going online/offline/away etc. are "presence changes", in Jabber terminology). For a more complete system, [pownce-jabber-bot](http://code.google.com/p/pownce-jabber-bot/) uses twisted.words and [wokkel](http://wokkel.ik.nu/) for the jabber interface. [The powncebot/\_\_init\_\_.py](http://code.google.com/p/pownce-jabber-bot/source/browse/trunk/powncebot/%5F%5Finit%5F%5F.py?spec=svn15&r=15) file seems like a good place to start - it seems pretty simple.
How do you create a simple Google Talk Client using the Twisted Words Python library?
227,279
16
2008-10-22T19:48:33Z
327,229
14
2008-11-29T05:42:54Z
[ "python", "twisted", "xmpp", "google-talk" ]
I am interested in making a Google Talk client using Python and would like to use the Twisted libraries Words module. I have looked at the examples, but they don't work with the current implementation of Google Talk. Has anybody had any luck with this? Would you mind documenting a brief tutorial? As a simple task, I'd like to create a client/bot that tracks the Online time of my various Google Talk accounts so that I can get an aggregate number. I figure I could friend the bot in each account and then use the XMPP presence information to keep track of the times that I can then aggregate. Thanks.
wokkel is the future of twisted words. [metajack](http://metajack.im/) wrote a really nice [blog post](http://metajack.im/2008/09/25/an-xmpp-echo-bot-with-twisted-and-wokkel/) on getting started. If you want a nice, functional sample project to start with, check out my [whatsup](http://github.com/dustin/whatsup) bot.
Solving an inequality for minimum value
227,282
3
2008-10-22T19:48:55Z
227,319
11
2008-10-22T19:59:29Z
[ "python", "language-agnostic", "equation", "linear-programming", "inequality" ]
I'm working on a programming problem which boils down to a set of an equation and inequality: ``` x[0]*a[0] + x[1]*a[1] + ... x[n]*a[n] >= D x[0]*b[0] + x[1]*b[1] + ... x[n]*b[n] = C ``` I want to solve for the values of `X` that will give the absolute minimum of `C`, given the input `D` and lists and `A` and `B` consisting of `a[0 - n]` and `b[0 - n ]`. I'm doing the problem at the moment in Python, but the problem in general is language-agnostic. CLARIFICATION UPDATE: the coefficients `x[0 - n]` are restricted to the set of non-negative integers.
This looks like a [linear programming](http://en.wikipedia.org/wiki/Linear_programming) problem. The [Simplex algorithm](http://en.wikipedia.org/wiki/Simplex_algorithm) normally gives good results. It basically walks the boundaries of the subspace delimited by the inequalities, looking for the optimum. Think of it visually: each inequality denotes a half-space, everything on one side of a hyperplane in n-dimensional space, and you have to be on the right side of each one. Your utility function is what you're trying to optimize. If the feasible region is bounded, the optimum will be at one of its vertices; if it's unbounded, the optimum can be infinite.
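Since the clarification restricts the coefficients x to non-negative integers, this is really an integer program rather than a plain LP. As a purely illustrative sketch (not the simplex method the answer describes), a brute-force search over a small bounded box makes the formulation concrete; the `bound` parameter is an assumption to keep the search finite:

```python
from itertools import product

def min_cost(a, b, D, bound=10):
    """Minimize C = sum(x[i]*b[i]) subject to sum(x[i]*a[i]) >= D,
    with each x[i] a non-negative integer in [0, bound].
    Exponential in len(a); a real solver (simplex plus
    branch-and-bound) scales far better. Returns (C, x) or None
    if no feasible point exists inside the box."""
    best = None
    for x in product(range(bound + 1), repeat=len(a)):
        if sum(xi * ai for xi, ai in zip(x, a)) >= D:
            cost = sum(xi * bi for xi, bi in zip(x, b))
            if best is None or cost < best[0]:
                best = (cost, x)
    return best
```

For example, with a = [3, 5], b = [2, 4] and D = 7 this finds a minimum cost of C = 6.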
ASCII value of a character in Python
227,459
548
2008-10-22T20:39:57Z
227,466
34
2008-10-22T20:41:56Z
[ "python", "ascii" ]
How do I get the [ASCII](http://en.wikipedia.org/wiki/ASCII) value of a character as an int in [Python](http://en.wikipedia.org/wiki/Python_%28programming_language%29)?
You are looking for: ``` ord() ```
ASCII value of a character in Python
227,459
548
2008-10-22T20:39:57Z
227,472
705
2008-10-22T20:43:04Z
[ "python", "ascii" ]
How do I get the [ASCII](http://en.wikipedia.org/wiki/ASCII) value of a character as an int in [Python](http://en.wikipedia.org/wiki/Python_%28programming_language%29)?
From [here](http://mail.python.org/pipermail/python-win32/2005-April/003100.html): > function ord() would get the int value > of the char. And in case you want to > convert back after playing with the > number, function chr() does the trick. ``` >>> ord('a') 97 >>> chr(97) 'a' >>> chr(ord('a') + 3) 'd' >>> ``` There is also the `unichr` function, returning the [Unicode](http://en.wikipedia.org/wiki/Unicode) character whose ordinal is the `unichr` argument: ``` >>> unichr(97) u'a' >>> unichr(1234) u'\u04d2' ```
ASCII value of a character in Python
227,459
548
2008-10-22T20:39:57Z
227,889
113
2008-10-22T23:19:20Z
[ "python", "ascii" ]
How do I get the [ASCII](http://en.wikipedia.org/wiki/ASCII) value of a character as an int in [Python](http://en.wikipedia.org/wiki/Python_%28programming_language%29)?
Note that ord() doesn't give you the ASCII value per se; it gives you the numeric value of the character in whatever encoding it's in. Therefore the result of ord('ä') can be 228 if you're using Latin-1, or it can raise a TypeError if you're using UTF-8. It can even return the Unicode codepoint instead if you pass it a unicode: ``` >>> ord(u'あ') 12354 ```
Open file, read it, process, and write back - shortest method in Python
227,461
13
2008-10-22T20:40:00Z
230,416
25
2008-10-23T16:28:39Z
[ "python", "coding-style" ]
I want to do some basic filtering on a file. Read it, do processing, write it back. I'm not looking for "golfing", but want the simplest and most elegant method to achieve this. I came up with: ``` from __future__ import with_statement filename = "..." # or sys.argv... with open(filename) as f: new_txt = # ...some translation of f.read() open(filename, 'w').write(new_txt) ``` The `with` statement makes things shorter since I don't have to explicitly open and close the file. Any other ideas ?
Actually an easier way using fileinput is to use the inplace parameter: ``` import fileinput for line in fileinput.input(filenameToProcess, inplace=1): process(line) ``` With `inplace=1`, fileinput redirects stdout to the file being processed, so anything you `print` is written back to that file. This example adds line numbers to your file: ``` import fileinput for line in fileinput.input("b.txt", inplace=1): print "%d: %s" % (fileinput.lineno(), line), ```
Passing a list while retaining the original
227,790
12
2008-10-22T22:38:49Z
227,802
10
2008-10-22T22:42:32Z
[ "python", "list" ]
So I'm teaching myself Python, and I'm having an issue with lists. I want to pass my function a list and pop items off it while retaining the original list. How do I make python "instance" the passed list rather that passing a pointer to the original one? Example: ``` def burninate(b): c = [] for i in range(3): c.append(b.pop()) return c a = range(6) d = burninate(a) print a, d ``` Output: [0, 1, 2] [5, 4, 3] Desired output: [0, 1, 2, 3, 4, 5] [5, 4, 3] Thanks!
You can call `burninate()` with a copy of the list like this: `d = burninate(a[:])` or, `d = burninate(list(a))` The other alternative is to make a copy of the list in your method: ``` def burninate(b): c = [] b = b[:] for i in range(3): c.append(b.pop()) return c >>> a = range(6) >>> b = burninate(a) >>> print a, b [0, 1, 2, 3, 4, 5] [5, 4, 3] ```
Passing a list while retaining the original
227,790
12
2008-10-22T22:38:49Z
227,855
14
2008-10-22T23:00:37Z
[ "python", "list" ]
So I'm teaching myself Python, and I'm having an issue with lists. I want to pass my function a list and pop items off it while retaining the original list. How do I make python "instance" the passed list rather that passing a pointer to the original one? Example: ``` def burninate(b): c = [] for i in range(3): c.append(b.pop()) return c a = range(6) d = burninate(a) print a, d ``` Output: [0, 1, 2] [5, 4, 3] Desired output: [0, 1, 2, 3, 4, 5] [5, 4, 3] Thanks!
As other answers have suggested, you can provide your function with a copy of the list. As an alternative, your function could take a copy of the argument: ``` def burninate(b): c = [] b = list(b) for i in range(3): c.append(b.pop()) return c ``` Basically, you need to be clear in your mind (and in your documentation) whether your function will change its arguments. In my opinion, functions that return computed values should not change their arguments, and functions that change their arguments should not return anything. See python's [].sort(), [].extend(), {}.update(), etc. for examples. Obviously there are exceptions (like .pop()). Also, depending on your particular case, you could rewrite the function to avoid using pop() or other functions that modify the argument. e.g. ``` def burninate(b): return b[:-4:-1] # return the last three elements in reverse order ```
What's win32con module in python? Where can I find it?
227,928
17
2008-10-22T23:38:30Z
227,930
10
2008-10-22T23:40:26Z
[ "python", "module" ]
I'm building an open source project that uses python and c++ in Windows. I came to the following error message: ``` ImportError: No module named win32con ``` The same happened in a "prebuilt" code that it's working ( except in my computer :P ) I think this is kind of "popular" module in python because I've saw several messages in other forums but none that could help me. I have Python2.6, should I have that module already installed? Is that something of VC++? Thank you for the help. I got this url <http://sourceforge.net/projects/pywin32/> but I'm not sure what to do with the executable :S
This module contains constants related to Win32 programming. It is not part of the Python 2.6 release, but should be part of the download of the pywin32 project. **Edit:** I imagine that the executable is an installation program, though the last time I downloaded pywin32 it was just a zip file.
Python debugger: Stepping into a function that you have called interactively
228,642
25
2008-10-23T05:35:32Z
228,653
17
2008-10-23T05:42:00Z
[ "python", "debugging", "pdb" ]
Python is quite cool, but unfortunately, its debugger is not as good as perl -d. One thing that I do very commonly when experimenting with code is to call a function from within the debugger, and step into that function, like so: ``` # NOTE THAT THIS PROGRAM EXITS IMMEDIATELY WITHOUT CALLING FOO() ~> cat -n /tmp/show_perl.pl 1 #!/usr/local/bin/perl 2 3 sub foo { 4 print "hi\n"; 5 print "bye\n"; 6 } 7 8 exit 0; ~> perl -d /tmp/show_perl.pl Loading DB routines from perl5db.pl version 1.28 Editor support available. Enter h or `h h' for help, or `man perldebug' for more help. main::(/tmp/show_perl.pl:8): exit 0; # MAGIC HAPPENS HERE -- I AM STEPPING INTO A FUNCTION THAT I AM CALLING INTERACTIVELY DB<1> s foo() main::((eval 6)[/usr/local/lib/perl5/5.8.6/perl5db.pl:628]:3): 3: foo(); DB<<2>> s main::foo(/tmp/show_perl.pl:4): print "hi\n"; DB<<2>> n hi main::foo(/tmp/show_perl.pl:5): print "bye\n"; DB<<2>> n bye DB<2> n Debugged program terminated. Use q to quit or R to restart, use O inhibit_exit to avoid stopping after program termination, h q, h R or h O to get additional info. DB<2> q ``` This is incredibly useful when trying to step through a function's handling of various different inputs to figure out why it fails. However, it does not seem to work in either pdb or pydb (I'd show an equivalent python example to the one above but it results in a large exception stack dump). So my question is twofold: 1. Am I missing something? 2. Is there a python debugger that would indeed let me do this? Obviously I could put the calls in the code myself, but I love working interactively, eg. not having to start from scratch when I want to try calling with a slightly different set of arguments.
And I've answered my own question! It's the "debug" command in pydb: ``` ~> cat -n /tmp/test_python.py 1 #!/usr/local/bin/python 2 3 def foo(): 4 print "hi" 5 print "bye" 6 7 exit(0) 8 ~> pydb /tmp/test_python.py (/tmp/test_python.py:7): <module> 7 exit(0) (Pydb) debug foo() ENTERING RECURSIVE DEBUGGER ------------------------Call level 11 (/tmp/test_python.py:3): foo 3 def foo(): ((Pydb)) s (/tmp/test_python.py:4): foo 4 print "hi" ((Pydb)) s hi (/tmp/test_python.py:5): foo 5 print "bye" ((Pydb)) s bye ------------------------Return from level 11 (<type 'NoneType'>) ----------------------Return from level 10 (<type 'NoneType'>) LEAVING RECURSIVE DEBUGGER (/tmp/test_python.py:7): <module> ```
Python debugger: Stepping into a function that you have called interactively
228,642
25
2008-10-23T05:35:32Z
229,380
14
2008-10-23T11:27:26Z
[ "python", "debugging", "pdb" ]
Python is quite cool, but unfortunately, its debugger is not as good as perl -d. One thing that I do very commonly when experimenting with code is to call a function from within the debugger, and step into that function, like so: ``` # NOTE THAT THIS PROGRAM EXITS IMMEDIATELY WITHOUT CALLING FOO() ~> cat -n /tmp/show_perl.pl 1 #!/usr/local/bin/perl 2 3 sub foo { 4 print "hi\n"; 5 print "bye\n"; 6 } 7 8 exit 0; ~> perl -d /tmp/show_perl.pl Loading DB routines from perl5db.pl version 1.28 Editor support available. Enter h or `h h' for help, or `man perldebug' for more help. main::(/tmp/show_perl.pl:8): exit 0; # MAGIC HAPPENS HERE -- I AM STEPPING INTO A FUNCTION THAT I AM CALLING INTERACTIVELY DB<1> s foo() main::((eval 6)[/usr/local/lib/perl5/5.8.6/perl5db.pl:628]:3): 3: foo(); DB<<2>> s main::foo(/tmp/show_perl.pl:4): print "hi\n"; DB<<2>> n hi main::foo(/tmp/show_perl.pl:5): print "bye\n"; DB<<2>> n bye DB<2> n Debugged program terminated. Use q to quit or R to restart, use O inhibit_exit to avoid stopping after program termination, h q, h R or h O to get additional info. DB<2> q ``` This is incredibly useful when trying to step through a function's handling of various different inputs to figure out why it fails. However, it does not seem to work in either pdb or pydb (I'd show an equivalent python example to the one above but it results in a large exception stack dump). So my question is twofold: 1. Am I missing something? 2. Is there a python debugger that would indeed let me do this? Obviously I could put the calls in the code myself, but I love working interactively, eg. not having to start from scratch when I want to try calling with a slightly different set of arguments.
You can interactively debug a function with pdb as well, provided the script you want to debug does not exit() at the end: ``` $ cat test.py #!/usr/bin/python def foo(f, g): h = f+g print h return 2*f ``` To debug, start an interactive python session and import pdb: ``` $ python Python 2.5.1 (r251:54869, Apr 18 2007, 22:08:04) [GCC 4.0.1 (Apple Computer, Inc. build 5367)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import pdb >>> import test >>> pdb.runcall(test.foo, 1, 2) > /Users/simon/Desktop/test.py(4)foo() -> h = f+g (Pdb) n > /Users/simon/Desktop/test.py(5)foo() -> print h (Pdb) ``` The pdb module comes with python and is documented in the modules docs at <http://docs.python.org/modindex.html>
How to get hex string from signed integer
228,702
7
2008-10-23T06:14:02Z
228,708
21
2008-10-23T06:17:15Z
[ "python" ]
Say I have the classic 4-byte signed integer, and I want something like print hex(-1) to give me something like >> 0xffffffff In reality, the above gives me -0x1. I'm dawdling about in some lower level language, and python commandline is quick n easy. So.. is there a way to do it?
This will do the trick: ``` >>> print hex (-1 & 0xffffffff) 0xffffffffL ``` or, in function form (and stripping off the trailing "L"): ``` >>> def hex2(n): ... return hex (n & 0xffffffff)[:-1] ... >>> print hex2(-1) 0xffffffff >>> print hex2(17) 0x11 ``` or, a variant that always returns fixed size (there may well be a better way to do this): ``` >>> def hex3(n): ... return "0x%s"%("00000000%s"%(hex(n&0xffffffff)[2:-1]))[-8:] ... >>> print hex3(-1) 0xffffffff >>> print hex3(17) 0x00000011 ``` Or, avoiding the hex() altogether, thanks to Ignacio and bobince: ``` def hex2(n): return "0x%x"%(n&0xffffffff) def hex3(n): return "0x%s"%("00000000%x"%(n&0xffffffff))[-8:] ```
How do I iterate through a string in Python?
228,730
39
2008-10-23T06:32:30Z
228,762
14
2008-10-23T06:49:27Z
[ "python" ]
As an example, lets say I wanted to list the frequency of each letter of the alphabet in a string. What would be the easiest way to do it? This is an example of what I'm thinking of... the question is how to make allTheLetters equal to said letters without something like allTheLetters = "abcdefg...xyz". In many other languages I could just do letter++ and increment my way through the alphabet, but thus far I haven't come across a way to do that in python. ``` def alphCount(text): lowerText = text.lower() for letter in allTheLetters: print letter + ":", lowertext.count(letter) ```
> the question is how to make
> allTheLetters equal to said letters
> without something like allTheLetters =
> "abcdefg...xyz"

That's actually provided by the string module, it's not like you have to manually type it yourself ;) ``` import string allTheLetters = string.ascii_lowercase def alphCount(text): lowerText = text.lower() for letter in allTheLetters: print letter + ":", lowerText.count(letter) ```
How do I iterate through a string in Python?
228,730
39
2008-10-23T06:32:30Z
228,790
9
2008-10-23T07:04:14Z
[ "python" ]
As an example, lets say I wanted to list the frequency of each letter of the alphabet in a string. What would be the easiest way to do it? This is an example of what I'm thinking of... the question is how to make allTheLetters equal to said letters without something like allTheLetters = "abcdefg...xyz". In many other languages I could just do letter++ and increment my way through the alphabet, but thus far I haven't come across a way to do that in python. ``` def alphCount(text): lowerText = text.lower() for letter in allTheLetters: print letter + ":", lowertext.count(letter) ```
If you just want to do a frequency count of a string, try this: ``` s = 'hi there' f = {} for c in s: f[c] = f.get(c, 0) + 1 print f ```
How do I iterate through a string in Python?
228,730
39
2008-10-23T06:32:30Z
228,850
72
2008-10-23T07:28:05Z
[ "python" ]
As an example, lets say I wanted to list the frequency of each letter of the alphabet in a string. What would be the easiest way to do it? This is an example of what I'm thinking of... the question is how to make allTheLetters equal to said letters without something like allTheLetters = "abcdefg...xyz". In many other languages I could just do letter++ and increment my way through the alphabet, but thus far I haven't come across a way to do that in python. ``` def alphCount(text): lowerText = text.lower() for letter in allTheLetters: print letter + ":", lowertext.count(letter) ```
The question you've asked (how to iterate through the alphabet) is not the same question as the problem you're trying to solve (how to count the frequency of letters in a string). You can use string.lowercase, as other posters have suggested: ``` import string allTheLetters = string.lowercase ``` To do things the way you're "used to", treating letters as numbers, you can use the "ord" and "chr" functions. There's absolutely no reason to ever do exactly this, but maybe it comes closer to what you're actually trying to figure out: ``` def getAllTheLetters(begin='a', end='z'): beginNum = ord(begin) endNum = ord(end) for number in xrange(beginNum, endNum+1): yield chr(number) ``` You can tell it does the right thing because this code prints `True`: ``` import string print ''.join(getAllTheLetters()) == string.lowercase ``` But, to solve the problem you're actually trying to solve, you want to use a dictionary and collect the letters as you go: ``` from collections import defaultdict def letterOccurrances(string): frequencies = defaultdict(lambda: 0) for character in string: frequencies[character.lower()] += 1 return frequencies ``` Use like so: ``` occs = letterOccurrances("Hello, world!") print occs['l'] print occs['h'] ``` This will print '3' and '1' respectively. Note that this works for unicode as well: ``` # -*- coding: utf-8 -*- occs = letterOccurrances(u"héĺĺó, ẃóŕĺd!") print occs[u'l'] print occs[u'ĺ'] ``` If you were to try the other approach on unicode (incrementing through every character) you'd be waiting a long time; there are millions of unicode characters. To implement your original function (print the counts of each letter in alphabetical order) in terms of this: ``` def alphCount(text): for character, count in sorted(letterOccurrances(text).iteritems()): print "%s: %s" % (character, count) alphCount("hello, world!") ```
SQLite parameter substitution problem
228,912
45
2008-10-23T08:13:36Z
228,981
113
2008-10-23T08:41:23Z
[ "python", "sqlite" ]
Using SQLite3 with Python 2.5, I'm trying to iterate through a list and pull the weight of an item from the database based on the item's name. I tried using the "?" parameter substitution suggested to prevent SQL injections but it doesn't work. For example, when I use: ``` for item in self.inventory_names: self.cursor.execute("SELECT weight FROM Equipment WHERE name = ?", item) self.cursor.close() ``` I get the error: > sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 1, and there are 8 supplied. I believe this is somehow caused by the initial creation of the database; the module I made that actually creates the DB does have 8 bindings. ``` cursor.execute("""CREATE TABLE Equipment (id INTEGER PRIMARY KEY, name TEXT, price INTEGER, weight REAL, info TEXT, ammo_cap INTEGER, availability_west TEXT, availability_east TEXT)""") ``` However, when I use the less-secure "%s" substitution for each item name, it works just fine. Like so: ``` for item in self.inventory_names: self.cursor.execute("SELECT weight FROM Equipment WHERE name = '%s'" % item) self.cursor.close() ``` I can't figure out why it thinks I have 8 bindins when I'm only calling one. How can I fix it?
The `Cursor.execute()` method expects a sequence as second parameter. You are supplying a string which happens to be 8 characters long. Use the following form instead: ``` self.cursor.execute("SELECT weight FROM Equipment WHERE name = ?", [item]) ``` Python library reference 13.13.3: sqlite3 [Cursor Objects](http://www.python.org/doc/2.5.2/lib/sqlite3-Cursor-Objects.html).
SQLite parameter substitution problem
228,912
45
2008-10-23T08:13:36Z
597,198
28
2009-02-28T00:32:57Z
[ "python", "sqlite" ]
Using SQLite3 with Python 2.5, I'm trying to iterate through a list and pull the weight of an item from the database based on the item's name. I tried using the "?" parameter substitution suggested to prevent SQL injections but it doesn't work. For example, when I use: ``` for item in self.inventory_names: self.cursor.execute("SELECT weight FROM Equipment WHERE name = ?", item) self.cursor.close() ``` I get the error: > sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 1, and there are 8 supplied. I believe this is somehow caused by the initial creation of the database; the module I made that actually creates the DB does have 8 bindings. ``` cursor.execute("""CREATE TABLE Equipment (id INTEGER PRIMARY KEY, name TEXT, price INTEGER, weight REAL, info TEXT, ammo_cap INTEGER, availability_west TEXT, availability_east TEXT)""") ``` However, when I use the less-secure "%s" substitution for each item name, it works just fine. Like so: ``` for item in self.inventory_names: self.cursor.execute("SELECT weight FROM Equipment WHERE name = '%s'" % item) self.cursor.close() ``` I can't figure out why it thinks I have 8 bindins when I'm only calling one. How can I fix it?
I have spent half a day trying to figure out why something like this would give me an error: ``` cursor.execute("SELECT * from ? WHERE name = ?", (table_name, name)) ``` only to find out that table names *cannot be parametrized*. Hope this will help other people save some time.
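Since only values, never identifiers, can be bound with "?", a common workaround is to validate the table name against a whitelist and only then interpolate it into the SQL string. A sketch (the table layout and whitelist here are hypothetical):

```python
import sqlite3

ALLOWED_TABLES = {"Equipment"}  # names your schema actually contains

def get_weight(conn, table, name):
    # The value is bound with "?", but the identifier has to be
    # checked by hand before it goes into the SQL text.
    if table not in ALLOWED_TABLES:
        raise ValueError("unknown table: %r" % table)
    sql = "SELECT weight FROM %s WHERE name = ?" % table
    return conn.execute(sql, (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Equipment (name TEXT, weight REAL)")
conn.execute("INSERT INTO Equipment VALUES (?, ?)", ("rope", 2.5))
rows = get_weight(conn, "Equipment", "rope")
```

The whitelist check keeps the string interpolation safe, because only names you wrote yourself can ever reach the SQL text.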
SQLite parameter substitution problem
228,912
45
2008-10-23T08:13:36Z
7,305,758
21
2011-09-05T08:51:03Z
[ "python", "sqlite" ]
Using SQLite3 with Python 2.5, I'm trying to iterate through a list and pull the weight of an item from the database based on the item's name. I tried using the "?" parameter substitution suggested to prevent SQL injections but it doesn't work. For example, when I use: ``` for item in self.inventory_names: self.cursor.execute("SELECT weight FROM Equipment WHERE name = ?", item) self.cursor.close() ``` I get the error: > sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 1, and there are 8 supplied. I believe this is somehow caused by the initial creation of the database; the module I made that actually creates the DB does have 8 bindings. ``` cursor.execute("""CREATE TABLE Equipment (id INTEGER PRIMARY KEY, name TEXT, price INTEGER, weight REAL, info TEXT, ammo_cap INTEGER, availability_west TEXT, availability_east TEXT)""") ``` However, when I use the less-secure "%s" substitution for each item name, it works just fine. Like so: ``` for item in self.inventory_names: self.cursor.execute("SELECT weight FROM Equipment WHERE name = '%s'" % item) self.cursor.close() ``` I can't figure out why it thinks I have 8 bindins when I'm only calling one. How can I fix it?
The argument of `cursor.execute` that represents the values to be bound should be a sequence (typically a tuple). Consider this example and see what's happening: ``` >>> ('jason') 'jason' >>> ('jason',) ('jason',) ``` The first expression evaluates to a plain string; the correct way to write a single-valued tuple is the second form, with the trailing comma. The code below fixes your error: ``` self.cursor.execute("SELECT weight FROM Equipment WHERE name = ?", (item,)) ``` Passing the value argument as a bare string (which is what you are doing) behaves like the first evaluation in the example and produces the error you are getting.
os.walk without digging into directories below
229,186
50
2008-10-23T10:03:59Z
229,219
108
2008-10-23T10:15:38Z
[ "python", "file", "os.walk" ]
How do I limit `os.walk` to only return files in the directory I provide it? ``` def _dir_list(self, dir_name, whitelist): outputList = [] for root, dirs, files in os.walk(dir_name): for f in files: if os.path.splitext(f)[1] in whitelist: outputList.append(os.path.join(root, f)) else: self._email_to_("ignore") return outputList ```
Don't use os.walk. Example: ``` import os root = "C:\\" for item in os.listdir(root): if os.path.isfile(os.path.join(root, item)): print item ```
os.walk without digging into directories below
229,186
50
2008-10-23T10:03:59Z
229,293
11
2008-10-23T10:46:22Z
[ "python", "file", "os.walk" ]
How do I limit `os.walk` to only return files in the directory I provide it? ``` def _dir_list(self, dir_name, whitelist): outputList = [] for root, dirs, files in os.walk(dir_name): for f in files: if os.path.splitext(f)[1] in whitelist: outputList.append(os.path.join(root, f)) else: self._email_to_("ignore") return outputList ```
The suggestion to use `listdir` is a good one. The direct answer to your question is `root, dirs, files = os.walk(dir_name).next()`
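A sketch of that one-liner on a throwaway directory tree, written with the `next()` builtin (equivalent to the `.next()` method on Python 2.6+, and also valid on Python 3, where the method is gone):

```python
import os
import tempfile

# Build a tiny tree: a top-level file plus one subdirectory with a file.
top = tempfile.mkdtemp()
open(os.path.join(top, "a.txt"), "w").close()
os.mkdir(os.path.join(top, "sub"))
open(os.path.join(top, "sub", "b.txt"), "w").close()

# Pull only the first triple os.walk yields: the top directory itself.
root, dirs, files = next(os.walk(top))
```

`files` then contains only the top-level file, without anything from `sub`, which is exactly the "don't dig below" behaviour the question asks for.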
os.walk without digging into directories below
229,186
50
2008-10-23T10:03:59Z
234,329
54
2008-10-24T16:46:02Z
[ "python", "file", "os.walk" ]
How do I limit `os.walk` to only return files in the directory I provide it? ``` def _dir_list(self, dir_name, whitelist): outputList = [] for root, dirs, files in os.walk(dir_name): for f in files: if os.path.splitext(f)[1] in whitelist: outputList.append(os.path.join(root, f)) else: self._email_to_("ignore") return outputList ```
Use the `walklevel` function. ``` import os def walklevel(some_dir, level=1): some_dir = some_dir.rstrip(os.path.sep) assert os.path.isdir(some_dir) num_sep = some_dir.count(os.path.sep) for root, dirs, files in os.walk(some_dir): yield root, dirs, files num_sep_this = root.count(os.path.sep) if num_sep + level <= num_sep_this: del dirs[:] ``` It works just like `os.walk`, but you can pass it a `level` parameter that indicates how deep the recursion will go.
os.walk without digging into directories below
229,186
50
2008-10-23T10:03:59Z
20,868,760
9
2014-01-01T12:44:22Z
[ "python", "file", "os.walk" ]
How do I limit `os.walk` to only return files in the directory I provide it? ``` def _dir_list(self, dir_name, whitelist): outputList = [] for root, dirs, files in os.walk(dir_name): for f in files: if os.path.splitext(f)[1] in whitelist: outputList.append(os.path.join(root, f)) else: self._email_to_("ignore") return outputList ```
I think the solution is actually very simple: use ``` break ``` to stop after the first iteration of the for loop (there may be a more elegant way). ``` for root, dirs, files in os.walk(dir_name): ... ... break ``` On its first iteration, os.walk yields a tuple for the starting directory; each subsequent iteration yields the contents of the next directory down. Take the original script and just add a **break**. ``` def _dir_list(self, dir_name, whitelist): outputList = [] for root, dirs, files in os.walk(dir_name): for f in files: if os.path.splitext(f)[1] in whitelist: outputList.append(os.path.join(root, f)) else: self._email_to_("ignore") break return outputList ```
Python Find Question
229,352
2
2008-10-23T11:15:42Z
229,401
9
2008-10-23T11:32:46Z
[ "python", "url" ]
I am using Python to extract the filename from a link using rfind like below: ``` url = "http://www.google.com/test.php" print url[url.rfind("/") +1 : ] ``` This works ok with links without a / at the end of them and returns "test.php". I have encountered links with / at the end like so "http://www.google.com/test.php/". I am having trouble getting the page name when there is a "/" at the end, can anyone help? Cheers
Just removing the slash at the end won't work, as you can probably have a URL that looks like this: ``` http://www.google.com/test.php?filepath=tests/hey.xml ``` ...in which case you'll get back "hey.xml". Instead of manually checking for this, you can use **urlparse** to get rid of the parameters, then do the check other people suggested: ``` from urlparse import urlparse url = "http://www.google.com/test.php?something=heyharr/sir/a.txt" f = urlparse(url)[2].rstrip("/") print f[f.rfind("/")+1:] ```
How to flush output of Python print?
230,751
605
2008-10-23T17:59:39Z
230,774
765
2008-10-23T18:04:59Z
[ "python", "printing", "flush" ]
How do I force Python's print function to output to the screen?
``` import sys sys.stdout.flush() ``` Print by default prints to `sys.stdout`. References: * <http://docs.python.org/reference/simple_stmts.html#the-print-statement> * <http://docs.python.org/library/sys.html> * <http://docs.python.org/library/stdtypes.html#file-objects>
How to flush output of Python print?
230,751
605
2008-10-23T17:59:39Z
230,780
229
2008-10-23T18:06:49Z
[ "python", "printing", "flush" ]
How do I force Python's print function to output to the screen?
Running `python -h`, I see a *command line option*: > -u : unbuffered binary stdout and stderr; also PYTHONUNBUFFERED=x > see man page for details on internal buffering relating to '-u' Here is the [relevant doc](http://docs.python.org/using/cmdline.html#cmdoption-u).
How to flush output of Python print?
230,751
605
2008-10-23T17:59:39Z
231,216
26
2008-10-23T19:54:26Z
[ "python", "printing", "flush" ]
How do I force Python's print function to output to the screen?
Using the `-u` command-line switch works, but it is a little bit clumsy. It would mean that the program would potentially behave incorrectly if the user invoked the script without the `-u` option. I usually use a custom `stdout`, like this: ``` class flushfile(file): def __init__(self, f): self.f = f def write(self, x): self.f.write(x) self.f.flush() import sys sys.stdout = flushfile(sys.stdout) ``` ... Now all your `print` calls (which use `sys.stdout` implicitly), will be automatically `flush`ed.
How to flush output of Python print?
230,751
605
2008-10-23T17:59:39Z
288,536
12
2008-11-13T22:15:25Z
[ "python", "printing", "flush" ]
How do I force Python's print function to output to the screen?
Dan's idea doesn't quite work: ``` #!/usr/bin/env python class flushfile(file): def __init__(self, f): self.f = f def write(self, x): self.f.write(x) self.f.flush() import sys sys.stdout = flushfile(sys.stdout) print "foo" ``` The result: ``` Traceback (most recent call last): File "./passpersist.py", line 12, in <module> print "foo" ValueError: I/O operation on closed file ``` I believe the problem is that it inherits from the file class, which actually isn't necessary. According to the docs for sys.stdout: > stdout and stderr needn’t be built-in > file objects: any object is acceptable > as long as it has a write() method > that takes a string argument. so changing ``` class flushfile(file): ``` to ``` class flushfile(object): ``` makes it work just fine.
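A self-contained sketch of that corrected wrapper idea, exercised against an in-memory stream rather than the real `sys.stdout` (the class name here is invented):

```python
import io

class FlushingStream(object):
    """Wrap any writable stream and flush after every write."""
    def __init__(self, stream):
        self.stream = stream

    def write(self, text):
        self.stream.write(text)
        self.stream.flush()

    def flush(self):
        # Delegate so code that calls flush() explicitly still works.
        self.stream.flush()

# Example: wrap an in-memory buffer instead of sys.stdout
buf = io.StringIO()
out = FlushingStream(buf)
out.write("hello")
```

The same wrapper could be assigned to `sys.stdout` exactly as in the answer above.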
How to flush output of Python print?
230,751
605
2008-10-23T17:59:39Z
741,601
17
2009-04-12T10:57:58Z
[ "python", "printing", "flush" ]
How do I force Python's print function to output to the screen?
Why not try using an unbuffered file? ``` f = open('xyz.log', 'a', 0) ``` OR ``` sys.stdout = open('out.log', 'a', 0) ```
How to flush output of Python print?
230,751
605
2008-10-23T17:59:39Z
8,471,288
11
2011-12-12T07:46:41Z
[ "python", "printing", "flush" ]
How do I force Python's print function to output to the screen?
``` import sys print 'This will be output immediately.' sys.stdout.flush() ```
How to flush output of Python print?
230,751
605
2008-10-23T17:59:39Z
9,462,099
55
2012-02-27T08:38:27Z
[ "python", "printing", "flush" ]
How do I force Python's print function to output to the screen?
Also as suggested in [this blog](http://algorithmicallyrandom.blogspot.com/2009/10/python-tips-and-tricks-flushing-stdout.html) one can reopen `sys.stdout` in unbuffered mode: ``` sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0) ``` Each `stdout.write` and `print` operation will be automatically flushed afterwards.
How to flush output of Python print?
230,751
605
2008-10-23T17:59:39Z
23,142,556
143
2014-04-17T20:10:31Z
[ "python", "printing", "flush" ]
How do I force Python's print function to output to the screen?
Since Python 3.3, you can force the normal `print()` function to flush without the need to use `sys.stdout.flush()`; just set the "flush" keyword argument to true. From [the documentation](https://docs.python.org/3.3/library/functions.html#print): > **print(\*objects, sep=' ', end='\n', file=sys.stdout, flush=False)** > > Print objects to the stream file, separated by sep and followed by end. sep, end and file, if present, must be given as keyword arguments. > > All non-keyword arguments are converted to strings like str() does and written to the stream, separated by sep and followed by end. Both sep and end must be strings; they can also be None, which means to use the default values. If no objects are given, print() will just write end. > > The file argument must be an object with a write(string) method; if it is not present or None, sys.stdout will be used. **Whether output is buffered is usually determined by file, but if the flush keyword argument is true, the stream is forcibly flushed.**
How to flush output of Python print?
230,751
605
2008-10-23T17:59:39Z
33,265,549
8
2015-10-21T17:23:32Z
[ "python", "printing", "flush" ]
How do I force Python's print function to output to the screen?
``` print("Foo", flush=True) ``` Like that
How to flush output of Python print?
230,751
605
2008-10-23T17:59:39Z
35,467,658
11
2016-02-17T21:01:03Z
[ "python", "printing", "flush" ]
How do I force Python's print function to output to the screen?
> # How to flush output of Python print? ## Python 3.3+ Using Python 3.3 or higher, you can just provide `flush=True` as a keyword argument to the `print` function: ``` print('foo', flush=True) ``` ## Python 2 (or < 3.3) They did not backport the `flush` argument to Python 2.7. So if you're using Python 2 (or less than 3.3), and want code that's compatible with both 2 and 3, may I suggest the following compatibility code. (Note the `__future__` import must be at/very "near the [top of your module](https://docs.python.org/2/reference/simple_stmts.html#future-statements)"): ``` from __future__ import print_function import sys if sys.version_info[:2] < (3, 3): old_print = print def print(*args, **kwargs): flush = kwargs.pop('flush', False) old_print(*args, **kwargs) file = kwargs.get('file', sys.stdout) if flush and file is not None: file.flush() ``` The above compatibility code will cover most uses, but for a much more thorough treatment, [see the `six` module](https://bitbucket.org/gutworth/six/src/3deee854df8a5f1cc04dd721c18dee2128584f8c/six.py?at=default#six.py-721). --- ## Addendum Here's the help on the print function from Python 2.7.12 - note that there is *no* `flush` argument: ``` >>> from __future__ import print_function >>> help(print) print(...) print(value, ..., sep=' ', end='\n', file=sys.stdout) Prints the values to a stream, or to sys.stdout by default. Optional keyword arguments: file: a file-like object (stream); defaults to the current sys.stdout. sep: string inserted between values, default a space. end: string appended after the last value, default a newline. ```
Make python enter password when running a csh script
230,845
5
2008-10-23T18:22:33Z
230,986
7
2008-10-23T18:58:34Z
[ "python", "scripting", "passwords", "root", "csh" ]
I'm writing a python script that executes a csh script in Solaris 10. The csh script prompts the user for the root password (which I know) but I'm not sure how to make the python script answer the prompt with the password. Is this possible? Here is what I'm using to execute the csh script: ``` import commands commands.getoutput('server stop') ```
Have a look at the [pexpect](http://www.noah.org/wiki/Pexpect) module. It is designed to deal with interactive programs, which seems to be your case. Oh, and remember that hard-coding root's password in a shell or python script is potentially a security hole :D
Given a list of variable names in Python, how do I a create a dictionary with the variable names as keys (to the variables' values)?
230,896
13
2008-10-23T18:33:51Z
231,368
14
2008-10-23T20:29:33Z
[ "python", "list", "dictionary" ]
I have a list of variable names, like this: ``` ['foo', 'bar', 'baz'] ``` (I originally asked how I convert a list of variables. See Greg Hewgill's answer below.) How do I convert this to a dictionary where the keys are the variable names (as strings) and the values are the values of the variables? ``` {'foo': foo, 'bar': bar, 'baz': baz} ``` Now that I'm re-asking the question, I came up with: ``` d = {} for name in list_of_variable_names: d[name] = eval(name) ``` Can that be improved upon? **Update**, responding to the question (in a comment) of why I'd want to do this: I often find myself using the % operator to strings with a dictionary of names and values to interpolate. Often the names in the string is just the names of local variables. So (with the answer below) I can do something like this: ``` message = '''Name: %(name)s ZIP: %(zip)s Dear %(name)s, ...''' % dict((x, locals()[x]) for x in ['name', 'zip']) ```
Forget filtering `locals()`! The dictionary you give to the formatting string is allowed to contain unused keys: ``` >>> name = 'foo' >>> zip = 123 >>> unused = 'whoops!' >>> locals() {'name': 'foo', 'zip': 123, ... 'unused': 'whoops!', ...} >>> '%(name)s %(zip)i' % locals() 'foo 123' ```
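A small sketch of the same trick inside a function, where `locals()` holds more names than the template uses (the names are illustrative):

```python
def build_message(name, zip_code):
    # 'unused' and 'template' end up in locals() too; %-formatting
    # simply ignores mapping keys the template never references.
    unused = "ignored"
    template = "Name: %(name)s ZIP: %(zip_code)s"
    return template % locals()
```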
What does the "yield" keyword do?
231,767
5,524
2008-10-23T22:21:11Z
231,778
141
2008-10-23T22:24:03Z
[ "python", "iterator", "generator", "yield", "coroutine" ]
What is the use of the `yield` keyword in Python? What does it do? For example, I'm trying to understand this code**1**: ``` def _get_child_candidates(self, distance, min_dist, max_dist): if self._leftchild and distance - max_dist < self._median: yield self._leftchild if self._rightchild and distance + max_dist >= self._median: yield self._rightchild ``` And this is the caller: ``` result, candidates = list(), [self] while candidates: node = candidates.pop() distance = node._get_dist(obj) if distance <= max_dist and distance >= min_dist: result.extend(node._values) candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) return result ``` What happens when the method `_get_child_candidates` is called? A list is returned? A single element is returned? Is it called again? When will subsequent calls stop? --- 1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/).
`yield` is just like `return` - it returns whatever you tell it to. The only difference is that the next time you call the function, execution starts from the last call to the `yield` statement. In the case of your code, the function `get_child_candidates` is acting like an iterator so that when you extend your list, it adds one element at a time to the new list. `list.extend` calls an iterator until it's exhausted. In the case of the code sample you posted, it would be much clearer to just return a tuple and append that to the list.
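That resume-after-the-last-`yield` behavior can be seen in a minimal sketch (the function here is hypothetical):

```python
def two_values():
    yield 1
    # Execution pauses at the yield above; the next call to next()
    # resumes here and runs until the following yield.
    yield 2

gen = two_values()
first = next(gen)   # runs up to the first yield
second = next(gen)  # resumes and runs up to the second yield
```

A third `next(gen)` would raise `StopIteration`, which is how the caller (or a `for` loop) knows the generator is exhausted.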
What does the "yield" keyword do?
231,767
5,524
2008-10-23T22:21:11Z
231,788
78
2008-10-23T22:26:06Z
[ "python", "iterator", "generator", "yield", "coroutine" ]
What is the use of the `yield` keyword in Python? What does it do? For example, I'm trying to understand this code**1**: ``` def _get_child_candidates(self, distance, min_dist, max_dist): if self._leftchild and distance - max_dist < self._median: yield self._leftchild if self._rightchild and distance + max_dist >= self._median: yield self._rightchild ``` And this is the caller: ``` result, candidates = list(), [self] while candidates: node = candidates.pop() distance = node._get_dist(obj) if distance <= max_dist and distance >= min_dist: result.extend(node._values) candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) return result ``` What happens when the method `_get_child_candidates` is called? A list is returned? A single element is returned? Is it called again? When will subsequent calls stop? --- 1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/).
It's returning a generator. I'm not particularly familiar with Python, but I believe it's the same kind of thing as [C#'s iterator blocks](http://csharpindepth.com/Articles/Chapter11/StreamingAndIterators.aspx) if you're familiar with those. There's an [IBM article](http://www.ibm.com/developerworks/library/l-pycon.html) which explains it reasonably well (for Python) as far as I can see. The key idea is that the compiler/interpreter/whatever does some trickery so that as far as the caller is concerned, they can keep calling next() and it will keep returning values - *as if the generator method was paused*. Now obviously you can't really "pause" a method, so the compiler builds a state machine for you to remember where you currently are and what the local variables etc look like. This is much easier than writing an iterator yourself.
What does the "yield" keyword do?
231,767
5,524
2008-10-23T22:21:11Z
231,801
236
2008-10-23T22:28:41Z
[ "python", "iterator", "generator", "yield", "coroutine" ]
What is the use of the `yield` keyword in Python? What does it do? For example, I'm trying to understand this code**1**: ``` def _get_child_candidates(self, distance, min_dist, max_dist): if self._leftchild and distance - max_dist < self._median: yield self._leftchild if self._rightchild and distance + max_dist >= self._median: yield self._rightchild ``` And this is the caller: ``` result, candidates = list(), [self] while candidates: node = candidates.pop() distance = node._get_dist(obj) if distance <= max_dist and distance >= min_dist: result.extend(node._values) candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) return result ``` What happens when the method `_get_child_candidates` is called? A list is returned? A single element is returned? Is it called again? When will subsequent calls stop? --- 1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/).
Think of it this way: An iterator is just a fancy sounding term for an object that has a next() method. So a yield-ed function ends up being something like this: Original version: ``` def some_function(): for i in xrange(4): yield i for i in some_function(): print i ``` This is basically what the python interpreter does with the above code: ``` class it: def __init__(self): #start at -1 so that we get 0 when we add 1 below. self.count = -1 #the __iter__ method will be called once by the for loop. #the rest of the magic happens on the object returned by this method. #in this case it is the object itself. def __iter__(self): return self #the next method will be called repeatedly by the for loop #until it raises StopIteration. def next(self): self.count += 1 if self.count < 4: return self.count else: #a StopIteration exception is raised #to signal that the iterator is done. #This is caught implicitly by the for loop. raise StopIteration def some_func(): return it() for i in some_func(): print i ``` For more insight as to what's happening behind the scenes, the for loop can be rewritten to this: ``` iterator = some_func() try: while 1: print iterator.next() except StopIteration: pass ``` Does that make more sense or just confuse you more? :) **EDIT:** I should note that this IS an oversimplification for illustrative purposes. :) **EDIT 2:** Forgot to throw the StopIteration exception
What does the "yield" keyword do?
231,767
5,524
2008-10-23T22:21:11Z
231,855
8,384
2008-10-23T22:48:44Z
[ "python", "iterator", "generator", "yield", "coroutine" ]
What is the use of the `yield` keyword in Python? What does it do? For example, I'm trying to understand this code**1**: ``` def _get_child_candidates(self, distance, min_dist, max_dist): if self._leftchild and distance - max_dist < self._median: yield self._leftchild if self._rightchild and distance + max_dist >= self._median: yield self._rightchild ``` And this is the caller: ``` result, candidates = list(), [self] while candidates: node = candidates.pop() distance = node._get_dist(obj) if distance <= max_dist and distance >= min_dist: result.extend(node._values) candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) return result ``` What happens when the method `_get_child_candidates` is called? A list is returned? A single element is returned? Is it called again? When will subsequent calls stop? --- 1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/).
To understand what `yield` does, you must understand what *generators* are. And before generators come *iterables*. ## Iterables When you create a list, you can read its items one by one. Reading its items one by one is called iteration: ``` >>> mylist = [1, 2, 3] >>> for i in mylist: ... print(i) 1 2 3 ``` `mylist` is an *iterable*. When you use a list comprehension, you create a list, and so an iterable: ``` >>> mylist = [x*x for x in range(3)] >>> for i in mylist: ... print(i) 0 1 4 ``` Everything you can use "`for... in...`" on is an iterable; `lists`, `strings`, files... These iterables are handy because you can read them as much as you wish, but you store all the values in memory and this is not always what you want when you have a lot of values. ## Generators Generators are iterators, but **you can only iterate over them once**. It's because they do not store all the values in memory, **they generate the values on the fly**: ``` >>> mygenerator = (x*x for x in range(3)) >>> for i in mygenerator: ... print(i) 0 1 4 ``` It is just the same except you used `()` instead of `[]`. BUT, you **cannot** perform `for i in mygenerator` a second time since generators can only be used once: they calculate 0, then forget about it and calculate 1, and end calculating 4, one by one. ## Yield `Yield` is a keyword that is used like `return`, except the function will return a generator. ``` >>> def createGenerator(): ... mylist = range(3) ... for i in mylist: ... yield i*i ... >>> mygenerator = createGenerator() # create a generator >>> print(mygenerator) # mygenerator is an object! <generator object createGenerator at 0xb7555c34> >>> for i in mygenerator: ... print(i) 0 1 4 ``` Here it's a useless example, but it's handy when you know your function will return a huge set of values that you will only need to read once. 
To master `yield`, you must understand that **when you call the function, the code you have written in the function body does not run.** The function only returns the generator object, this is a bit tricky :-) Then, your code will be run each time the `for` uses the generator. Now the hard part: The first time the `for` calls the generator object created from your function, it will run the code in your function from the beginning until it hits `yield`, then it'll return the first value of the loop. Then, each other call will run the loop you have written in the function one more time, and return the next value, until there is no value to return. The generator is considered empty once the function runs but does not hit `yield` anymore. It can be because the loop had come to an end, or because you do not satisfy an `"if/else"` anymore. --- ## Your code explained Generator: ``` # Here you create the method of the node object that will return the generator def node._get_child_candidates(self, distance, min_dist, max_dist): # Here is the code that will be called each time you use the generator object: # If there is still a child of the node object on its left # AND if distance is ok, return the next child if self._leftchild and distance - max_dist < self._median: yield self._leftchild # If there is still a child of the node object on its right # AND if distance is ok, return the next child if self._rightchild and distance + max_dist >= self._median: yield self._rightchild # If the function arrives here, the generator will be considered empty # there is no more than two values: the left and the right children ``` Caller: ``` # Create an empty list and a list with the current object reference result, candidates = list(), [self] # Loop on candidates (they contain only one element at the beginning) while candidates: # Get the last candidate and remove it from the list node = candidates.pop() # Get the distance between obj and the candidate distance = node._get_dist(obj) # If distance is ok, then you can fill the result if distance <= max_dist and distance >= min_dist: result.extend(node._values) # Add the children of the candidate in the candidates list # so the loop will keep running until it will have looked # at all the children of the children of the children, etc. of the candidate candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) return result ``` This code contains several smart parts: * The loop iterates on a list but the list expands while the loop is being iterated :-) It's a concise way to go through all these nested data even if it's a bit dangerous since you can end up with an infinite loop. In this case, `candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))` exhausts all the values of the generator, but `while` keeps creating new generator objects which will produce different values from the previous ones since it's not applied on the same node. * The `extend()` method is a list object method that expects an iterable and adds its values to the list. Usually we pass a list to it: ``` >>> a = [1, 2] >>> b = [3, 4] >>> a.extend(b) >>> print(a) [1, 2, 3, 4] ``` But in your code it gets a generator, which is good because: 1. You don't need to read the values twice. 2. You may have a lot of children and you don't want them all stored in memory. And it works because Python does not care if the argument of a method is a list or not. Python expects iterables so it will work with strings, lists, tuples and generators! This is called duck typing and is one of the reasons why Python is so cool. But this is another story, for another question... You can stop here, or read a little bit to see an advanced use of a generator: ## Controlling a generator exhaustion ``` >>> class Bank(): # let's create a bank, building ATMs ... crisis = False ... def create_atm(self): ... while not self.crisis: ... yield "$100" >>> hsbc = Bank() # when everything's ok the ATM gives you as much as you want >>> corner_street_atm = hsbc.create_atm() >>> print(corner_street_atm.next()) $100 >>> print(corner_street_atm.next()) $100 >>> print([corner_street_atm.next() for cash in range(5)]) ['$100', '$100', '$100', '$100', '$100'] >>> hsbc.crisis = True # crisis is coming, no more money! >>> print(corner_street_atm.next()) <type 'exceptions.StopIteration'> >>> wall_street_atm = hsbc.create_atm() # it's even true for new ATMs >>> print(wall_street_atm.next()) <type 'exceptions.StopIteration'> >>> hsbc.crisis = False # trouble is, even post-crisis the ATM remains empty >>> print(corner_street_atm.next()) <type 'exceptions.StopIteration'> >>> brand_new_atm = hsbc.create_atm() # build a new one to get back in business >>> for cash in brand_new_atm: ... print cash $100 $100 $100 $100 $100 $100 $100 $100 $100 ... ``` It can be useful for various things like controlling access to a resource. ## Itertools, your best friend The itertools module contains special functions to manipulate iterables. Ever wish to duplicate a generator? Chain two generators? Group values in a nested list with a one liner? `Map / Zip` without creating another list? Then just `import itertools`. An example?
Let's see the possible orders of arrival for a 4 horse race: ``` >>> horses = [1, 2, 3, 4] >>> races = itertools.permutations(horses) >>> print(races) <itertools.permutations object at 0xb754f1dc> >>> print(list(itertools.permutations(horses))) [(1, 2, 3, 4), (1, 2, 4, 3), (1, 3, 2, 4), (1, 3, 4, 2), (1, 4, 2, 3), (1, 4, 3, 2), (2, 1, 3, 4), (2, 1, 4, 3), (2, 3, 1, 4), (2, 3, 4, 1), (2, 4, 1, 3), (2, 4, 3, 1), (3, 1, 2, 4), (3, 1, 4, 2), (3, 2, 1, 4), (3, 2, 4, 1), (3, 4, 1, 2), (3, 4, 2, 1), (4, 1, 2, 3), (4, 1, 3, 2), (4, 2, 1, 3), (4, 2, 3, 1), (4, 3, 1, 2), (4, 3, 2, 1)] ``` ## Understanding the inner mechanisms of iteration Iteration is a process implying iterables (implementing the `__iter__()` method) and iterators (implementing the `__next__()` method). Iterables are any objects you can get an iterator from. Iterators are objects that let you iterate on iterables. More about it in this article about [how does the for loop work](http://effbot.org/zone/python-for-statement.htm).
What does the "yield" keyword do?
231,767
5,524
2008-10-23T22:21:11Z
232,111
72
2008-10-24T00:36:05Z
[ "python", "iterator", "generator", "yield", "coroutine" ]
What is the use of the `yield` keyword in Python? What does it do? For example, I'm trying to understand this code**1**: ``` def _get_child_candidates(self, distance, min_dist, max_dist): if self._leftchild and distance - max_dist < self._median: yield self._leftchild if self._rightchild and distance + max_dist >= self._median: yield self._rightchild ``` And this is the caller: ``` result, candidates = list(), [self] while candidates: node = candidates.pop() distance = node._get_dist(obj) if distance <= max_dist and distance >= min_dist: result.extend(node._values) candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) return result ``` What happens when the method `_get_child_candidates` is called? A list is returned? A single element is returned? Is it called again? When will subsequent calls stop? --- 1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](http://well-adjusted.de/~jrschulz/mspace/).
An example in plain language. I will provide a correspondence between high-level human concepts and low-level Python concepts. I want to operate on a sequence of numbers, but I don't want to bother myself with the creation of that sequence, I want only to focus on the operation I want to do. So, I do the following: * I call you and tell you that I want a sequence of numbers which is produced in a specific way, and I let you know what the algorithm is. **This step corresponds to `def`ining the generator function, i.e. the function containing a `yield`.** * Sometime later, I tell you, "ok, get ready to tell me the sequence of numbers". **This step corresponds to calling the generator function which returns a generator object.** Note that you don't tell me any numbers yet, you just grab your paper and pencil. * I ask you, "tell me the next number", and you tell me the first number; after that, you wait for me to ask you for the next number. It's your job to remember where you were, what numbers you have already said, and what the next number is. I don't care about the details. **This step corresponds to calling `.next()` on the generator object.** * … repeat previous step, until… * eventually, you might come to an end. You don't tell me a number, you just shout, "hold your horses! I'm done! No more numbers!" **This step corresponds to the generator object ending its job, and raising a `StopIteration` exception.** The generator function does not need to raise the exception; it's raised automatically when the function ends or issues a `return`. This is what a generator does (a function that contains a `yield`); it starts executing, pauses whenever it does a `yield`, and when asked for a `.next()` value it continues from the point where it last paused. It fits perfectly by design with the iterator protocol of Python, which describes how to sequentially request values. The most famous user of the iterator protocol is the `for` command in Python.
So, whenever you do a: ``` for item in sequence: ``` it doesn't matter if `sequence` is a list, a string, a dictionary or a generator *object* like described above; the result is the same: you read items off a sequence one by one. Note that `def`ining a function which contains a `yield` keyword is not the only way to create a generator; it's just the easiest way to create one. For more accurate information, read about [iterator types](http://docs.python.org/library/stdtypes.html#iterator-types), the [yield statement](http://docs.python.org/reference/simple_stmts.html#yield) and [generators](http://docs.python.org/glossary.html#term-generator) in the Python documentation.
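The whole correspondence above can be condensed into a short sketch (a hypothetical three-number sequence):

```python
def numbers():            # "here is the algorithm" - nothing runs yet
    n = 0
    while n < 3:
        yield n           # "here is the next number; I'll wait"
        n += 1

gen = numbers()           # "get ready" - still nothing has run
collected = [next(gen), next(gen), next(gen)]   # ask three times
# One more next(gen) would raise StopIteration: "no more numbers!"
```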