| title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
How do I execute a program from python? os.system fails due to spaces in path | 204,017 | 206 | 2008-10-15T08:24:28Z | 911,976 | 26 | 2009-05-26T18:13:51Z | [
"python",
"shellexecute"
] | I have a python script that needs to execute an external program, but for some reason fails.
If I have the following script:
```
import os;
os.system("C:\\Temp\\a b c\\Notepad.exe");
raw_input();
```
Then it fails with the following error:
```
'C:\Temp\a' is not recognized as an internal or external command, operable program or batch file.
```
If I escape the program with quotes:
```
import os;
os.system('"C:\\Temp\\a b c\\Notepad.exe"');
raw_input();
```
Then it works. However, if I add a parameter, it stops working again:
```
import os;
os.system('"C:\\Temp\\a b c\\Notepad.exe" "C:\\test.txt"');
raw_input();
```
What is the right way to execute a program and wait for it to complete? I do not need to read output from it, as it is a visual program that does a job and then just exits, but I need to wait for it to complete.
Also note, moving the program to a non-spaced path is not an option either.
---
**Edit** This does not work either:
```
import os;
os.system("'C:\\Temp\\a b c\\Notepad.exe'");
raw_input();
```
Note the swapped single/double quotes.
with or without a parameter to notepad here, it fails with the error message
```
The filename, directory name, or volume label syntax is incorrect.
``` | The outermost quotes are consumed by Python itself, so the Windows shell never sees them. As mentioned above, Windows only understands double quotes.
Windows also accepts forward slashes in place of backslashes, so you can use
```
os.system('"C:/Temp/a b c/Notepad.exe"')
```
The ' is consumed by Python, which then passes "C:/Temp/a b c/Notepad.exe" (as a Windows path, no doubled backslashes needed) to CMD.EXE |
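For completeness, a quoting-free alternative is to hand the argument list to the stdlib `subprocess` module and let it build the command line itself; a small sketch using the (hypothetical) paths from the question:

```python
import subprocess

# subprocess builds a correctly quoted Windows command line from a list,
# so spaces in paths need no manual escaping; list2cmdline shows the result.
args = [r"C:\Temp\a b c\Notepad.exe", r"C:\test.txt"]
print(subprocess.list2cmdline(args))  # "C:\Temp\a b c\Notepad.exe" C:\test.txt

# To actually run the program and wait for it to finish:
# subprocess.call(args)
```

`subprocess.call(args)` also blocks until the program exits, which is what the question asks for.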
How do I execute a program from python? os.system fails due to spaces in path | 204,017 | 206 | 2008-10-15T08:24:28Z | 1,622,730 | 14 | 2009-10-26T01:33:47Z | [
"python",
"shellexecute"
] | I have a python script that needs to execute an external program, but for some reason fails.
If I have the following script:
```
import os;
os.system("C:\\Temp\\a b c\\Notepad.exe");
raw_input();
```
Then it fails with the following error:
```
'C:\Temp\a' is not recognized as an internal or external command, operable program or batch file.
```
If I escape the program with quotes:
```
import os;
os.system('"C:\\Temp\\a b c\\Notepad.exe"');
raw_input();
```
Then it works. However, if I add a parameter, it stops working again:
```
import os;
os.system('"C:\\Temp\\a b c\\Notepad.exe" "C:\\test.txt"');
raw_input();
```
What is the right way to execute a program and wait for it to complete? I do not need to read output from it, as it is a visual program that does a job and then just exits, but I need to wait for it to complete.
Also note, moving the program to a non-spaced path is not an option either.
---
**Edit** This does not work either:
```
import os;
os.system("'C:\\Temp\\a b c\\Notepad.exe'");
raw_input();
```
Note the swapped single/double quotes.
with or without a parameter to notepad here, it fails with the error message
```
The filename, directory name, or volume label syntax is incorrect.
``` | At least in Windows 7 and Python 3.1, os.system in Windows wants the command line *double-quoted* if there are spaces in the path to the command. For example:
```
TheCommand = '\"\"C:\\Temp\\a b c\\Notepad.exe\"\"'
os.system(TheCommand)
```
A real-world example that was stumping me was cloning a drive in VirtualBox. The subprocess.call solution above didn't work because of an access-rights issue, but when I double-quoted the command, os.system became happy:
```
TheCommand = '\"\"C:\\Program Files\\Sun\\VirtualBox\\VBoxManage.exe\" ' \
+ ' clonehd \"' + OrigFile + '\" \"' + NewFile + '\"\"'
os.system(TheCommand)
``` |
Checking for member existence in Python | 204,308 | 23 | 2008-10-15T10:54:18Z | 204,481 | 10 | 2008-10-15T12:15:54Z | [
"python",
"exception",
"introspection",
"hasattr"
] | I regularly want to check if an object has a member or not. An example is the creation of a singleton in a function. For that purpose, you can use `hasattr` like this:
```
class Foo(object):
    @classmethod
    def singleton(self):
        if not hasattr(self, 'instance'):
            self.instance = Foo()
        return self.instance
```
But you can also do this:
```
class Foo(object):
    @classmethod
    def singleton(self):
        try:
            return self.instance
        except AttributeError:
            self.instance = Foo()
            return self.instance
```
Is one method better than the other?
**Edit:** Added the `@classmethod` ... But note that the question is *not* about how to make a singleton but how to check the presence of a member in an object.
**Edit:** For that example, a typical usage would be:
```
s = Foo.singleton()
```
Then `s` is an object of type `Foo`, the same each time. And, typically, the method is called many times. | I just tried to measure times:
```
class Foo(object):
    @classmethod
    def singleton(self):
        if not hasattr(self, 'instance'):
            self.instance = Foo()
        return self.instance

class Bar(object):
    @classmethod
    def singleton(self):
        try:
            return self.instance
        except AttributeError:
            self.instance = Bar()
            return self.instance
from time import time
n = 1000000
foo = [Foo() for i in xrange(0,n)]
bar = [Bar() for i in xrange(0,n)]
print "Objs created."
print
for times in xrange(1,4):
    t = time()
    for d in foo: d.singleton()
    print "#%d Foo pass in %f" % (times, time()-t)
    t = time()
    for d in bar: d.singleton()
    print "#%d Bar pass in %f" % (times, time()-t)
    print
```
On my machine:
```
Objs created.
#1 Foo pass in 1.719000
#1 Bar pass in 1.140000
#2 Foo pass in 1.750000
#2 Bar pass in 1.187000
#3 Foo pass in 1.797000
#3 Bar pass in 1.203000
```
It seems that try/except is faster, and it also seems more readable to me. That said, it depends on the case; this test was very simple, so you may need a more complex one. |
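For what it's worth, the stdlib `timeit` module gives steadier numbers than hand-rolled `time()` loops; a sketch of the same comparison (the timings will vary by machine):

```python
import timeit

setup = """
class Foo(object):
    pass
Foo.instance = Foo()
"""

# LBYL: hasattr check on an attribute that already exists
lbyl = timeit.timeit("hasattr(Foo, 'instance') and Foo.instance",
                     setup=setup, number=100000)

# EAFP: try/except around an attribute access that never raises
eafp = timeit.timeit("try:\n    Foo.instance\nexcept AttributeError:\n    pass",
                     setup=setup, number=100000)

print("LBYL %.4fs  EAFP %.4fs" % (lbyl, eafp))
```

`timeit` repeats each statement in a tight loop, so per-call overhead dominates far less than in the ad-hoc version.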
Checking for member existence in Python | 204,308 | 23 | 2008-10-15T10:54:18Z | 204,523 | 21 | 2008-10-15T12:28:11Z | [
"python",
"exception",
"introspection",
"hasattr"
] | I regularly want to check if an object has a member or not. An example is the creation of a singleton in a function. For that purpose, you can use `hasattr` like this:
```
class Foo(object):
    @classmethod
    def singleton(self):
        if not hasattr(self, 'instance'):
            self.instance = Foo()
        return self.instance
```
But you can also do this:
```
class Foo(object):
    @classmethod
    def singleton(self):
        try:
            return self.instance
        except AttributeError:
            self.instance = Foo()
            return self.instance
```
Is one method better than the other?
**Edit:** Added the `@classmethod` ... But note that the question is *not* about how to make a singleton but how to check the presence of a member in an object.
**Edit:** For that example, a typical usage would be:
```
s = Foo.singleton()
```
Then `s` is an object of type `Foo`, the same each time. And, typically, the method is called many times. | These are two different methodologies: #1 is LBYL (look before you leap) and #2 is EAFP (easier to ask forgiveness than permission).
Pythonistas typically suggest that EAFP is better, with arguments in the style of "what if a process creates the file between the time you test for it and the time you try to create it yourself?". This argument does not apply here, but it's the general idea. Exceptions should not be treated as *too* exceptional.
Performance-wise in your case: since setting up an exception manager (the `try` keyword) is very cheap in CPython, while creating an exception (the `raise` keyword and internal exception creation) is what is relatively expensive, with method #2 the exception would be raised only once; afterwards, you just use the attribute. |
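Concretely, in the singleton from the question the `AttributeError` fires only on the very first call; a small sketch:

```python
class Foo(object):
    @classmethod
    def singleton(cls):
        try:
            return cls.instance          # fast path: no exception once cached
        except AttributeError:
            cls.instance = Foo()         # raised (and paid for) exactly once
            return cls.instance

a = Foo.singleton()   # first call: AttributeError, instance created
b = Foo.singleton()   # every later call: plain attribute access
print(a is b)  # True
```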
What would you recommend for a high traffic ajax intensive website? | 204,802 | 7 | 2008-10-15T13:57:23Z | 204,853 | 8 | 2008-10-15T14:11:29Z | [
"php",
"python",
"lighttpd",
"cherrypy",
"high-load"
] | For a website like reddit with lots of up/down votes and lots of comments per topic what should I go with?
Lighttpd/Php or Lighttpd/CherryPy/Genshi/SQLAlchemy?
and for database what would scale better / be fastest MySQL ( 4.1 or 5 ? ) or PostgreSQL? | I can't speak to the MySQL/PostgreSQL question as I have limited experience with Postgres, but my Master's research project was about high-performance websites with CherryPy, and I don't think you'll be disappointed if you use CherryPy for your site. It can easily scale to thousands of simultaneous users on commodity hardware.
Of course, the same could be said for PHP, and I don't know of any reasonable benchmarks comparing PHP and CherryPy performance. But if you were wondering whether CherryPy can handle a high-traffic site with a huge number of requests per second, the answer is definitely yes. |
What would you recommend for a high traffic ajax intensive website? | 204,802 | 7 | 2008-10-15T13:57:23Z | 204,916 | 8 | 2008-10-15T14:26:18Z | [
"php",
"python",
"lighttpd",
"cherrypy",
"high-load"
] | For a website like reddit with lots of up/down votes and lots of comments per topic what should I go with?
Lighttpd/Php or Lighttpd/CherryPy/Genshi/SQLAlchemy?
and for database what would scale better / be fastest MySQL ( 4.1 or 5 ? ) or PostgreSQL? | The ideal setup would be close to [this](http://www.igvita.com/2008/02/11/nginx-and-memcached-a-400-boost/):

In short, [nginx](http://wiki.codemongers.com/) is a fast and light webserver/front-proxy with a unique module that lets it fetch data directly from [memcached](http://www.danga.com/memcached/)'s RAM store, without hitting the disk or any dynamic webapp. Of course, if the request's URL wasn't already cached (or if it has expired), the request proceeds to the webapp as usual. The genius part is that when the webapp has generated the response, a copy of it goes to memcached, ready to be reused.
All this is perfectly applicable not only to webpages, but to AJAX query/responses.
In the article the 'back' servers are HTTP, and it specifically talks about Mongrel. It would be even better if the back end were FastCGI or another (faster?) framework; but that is a lot less critical, since the nginx/memcached pair absorbs the biggest part of the load.
Note that if your URL scheme for the AJAX traffic is well designed (REST is best, IMHO), you can put most of the DB right in memcached, and any POST (which WILL pass to the app) can preemptively update the cache. |
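The write-through idea above can be sketched in a few lines (a plain dict stands in for memcached here, and the URLs and page bodies are made up; a real setup would use a memcached client library and let nginx do the GET-side lookup):

```python
cache = {}  # stand-in for memcached's RAM store

def generate(url):
    # the (comparatively slow) dynamic webapp
    return "rendered page for %s" % url

def handle_get(url):
    if url in cache:        # nginx's memcached module does this lookup itself
        return cache[url]
    body = generate(url)
    cache[url] = body       # write-through: the copy is ready for reuse
    return body

def handle_post(url, new_state):
    # a POST always reaches the app, and can preemptively refresh the cache
    cache[url] = generate(new_state)

first = handle_get("/topic/1")   # generated, then cached
second = handle_get("/topic/1")  # served straight from the cache
```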
Is it possible to compile Python natively (beyond pyc byte code)? | 205,062 | 13 | 2008-10-15T15:02:09Z | 205,075 | 7 | 2008-10-15T15:05:47Z | [
"python",
"module",
"compilation"
] | I wonder if it is possible to create an executable module from a Python script. I need to have the most performance and the flexibility of Python script, without needing to run in the Python environment. I would use this code to load on demand user modules to customize my application. | You can use something like py2exe to compile your python script into an exe, or Freeze for a linux binary.
see: <http://stackoverflow.com/questions/2933/an-executable-python-app#2937> |
Is it possible to compile Python natively (beyond pyc byte code)? | 205,062 | 13 | 2008-10-15T15:02:09Z | 205,096 | 14 | 2008-10-15T15:11:26Z | [
"python",
"module",
"compilation"
] | I wonder if it is possible to create an executable module from a Python script. I need to have the most performance and the flexibility of Python script, without needing to run in the Python environment. I would use this code to load on demand user modules to customize my application. | * There's [pyrex](http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/) that compiles python like source to python extension modules
* [rpython](http://codespeak.net/pypy/dist/pypy/doc/coding-guide.html#our-runtime-interpreter-is-restricted-python) which allows you to compile python with some restrictions to various backends like C, LLVM, .Net etc.
* There's also [shed-skin](http://shed-skin.blogspot.com/) which translates python to C++, but I can't say if it's any good.
* [PyPy](http://codespeak.net/pypy/dist/pypy/doc/home.html) implements a JIT compiler which attempts to optimize runtime by translating pieces of what's running at runtime to machine code, if you write for the PyPy interpreter that might be a feasible path.
* The same author that is working on JIT in PyPy wrote [psyco](http://psyco.sourceforge.net/) previously which optimizes python in the CPython interpreter. |
How can I check the syntax of Python code in Emacs without actually executing it? | 205,704 | 14 | 2008-10-15T17:46:12Z | 206,617 | 8 | 2008-10-15T21:40:49Z | [
"python",
"validation",
"emacs",
"syntax"
] | Python's IDLE has 'Check Module' (Alt-X) to check the syntax which can be called without needing to run the code. Is there an equivalent way to do this in Emacs instead of running and executing the code? | You can [use Pyflakes together with Flymake](http://www.plope.org/Members/chrism/flymake-mode) in order to get instant notification when your python code is valid (and avoids a few common pitfalls as well). |
How can I check the syntax of Python code in Emacs without actually executing it? | 205,704 | 14 | 2008-10-15T17:46:12Z | 8,584,325 | 17 | 2011-12-21T02:10:48Z | [
"python",
"validation",
"emacs",
"syntax"
] | Python's IDLE has 'Check Module' (Alt-X) to check the syntax which can be called without needing to run the code. Is there an equivalent way to do this in Emacs instead of running and executing the code? | ```
python -m py_compile script.py
``` |
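The same check is also available from within Python via the stdlib `py_compile` module, which compiles to bytecode without executing the module body; a small wrapper sketch (the function name is mine):

```python
import py_compile

def check_syntax(path):
    """Return None if the file compiles, or the error message if it doesn't."""
    try:
        # doraise=True turns syntax errors into PyCompileError exceptions
        py_compile.compile(path, doraise=True)
        return None
    except py_compile.PyCompileError as err:
        return str(err)
```

This only catches syntax errors, not name errors or the like, since nothing is executed.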
Why do attribute references act like this with Python inheritance? | 206,734 | 16 | 2008-10-15T22:18:00Z | 206,765 | 22 | 2008-10-15T22:27:18Z | [
"python",
"class",
"inheritance"
] | The following seems strange. Basically, the somedata attribute seems to be shared between all the classes that inherit from `the_base_class`.
```
class the_base_class:
    somedata = {}
    somedata['was_false_in_base'] = False

class subclassthing(the_base_class):
    def __init__(self):
        print self.somedata
first = subclassthing()
{'was_false_in_base': False}
first.somedata['was_false_in_base'] = True
second = subclassthing()
{'was_false_in_base': True}
>>> del first
>>> del second
>>> third = subclassthing()
{'was_false_in_base': True}
```
Defining `self.somedata` in the `__init__` function is obviously the correct way to get around this (so each instance has its own `somedata` dict) - but when is such behavior desirable? | You are right, `somedata` is shared between all instances of the class and its subclasses, because it is created at class *definition* time. The lines
```
somedata = {}
somedata['was_false_in_base'] = False
```
are executed when the class is defined, i.e. when the interpreter encounters the `class` statement - **not** when the instance is created (think static initializer blocks in Java). If an attribute does not exist in a class instance, the class object is checked for the attribute.
At class definition time, you can run arbitrary code, like this:
```
import sys
class Test(object):
    if sys.platform == "linux2":
        def hello(self):
            print "Hello Linux"
    else:
        def hello(self):
            print "Hello ~Linux"
```
On a Linux system, `Test().hello()` will print `Hello Linux`, on all other systems the other string will be printed.
In contrast, objects in `__init__` are created at *instantiation* time and belong to the instance only (when they are assigned to `self`):
```
class Test(object):
    def __init__(self):
        self.inst_var = [1, 2, 3]
```
Objects defined on a class object rather than an instance can be useful in many cases. For instance, you might want to cache instances of your class, so that instances with the same member values can be shared (assuming they are supposed to be immutable):
```
class SomeClass(object):
    __instances__ = {}

    def __new__(cls, v1, v2, v3):
        try:
            return cls.__instances__[(v1, v2, v3)]
        except KeyError:
            return cls.__instances__.setdefault(
                (v1, v2, v3),
                object.__new__(cls, v1, v2, v3))
```
Mostly, I use data in class bodies in conjunction with metaclasses or generic factory methods. |
Why do attribute references act like this with Python inheritance? | 206,734 | 16 | 2008-10-15T22:18:00Z | 206,800 | 10 | 2008-10-15T22:40:10Z | [
"python",
"class",
"inheritance"
] | The following seems strange. Basically, the somedata attribute seems to be shared between all the classes that inherit from `the_base_class`.
```
class the_base_class:
    somedata = {}
    somedata['was_false_in_base'] = False

class subclassthing(the_base_class):
    def __init__(self):
        print self.somedata
first = subclassthing()
{'was_false_in_base': False}
first.somedata['was_false_in_base'] = True
second = subclassthing()
{'was_false_in_base': True}
>>> del first
>>> del second
>>> third = subclassthing()
{'was_false_in_base': True}
```
Defining `self.somedata` in the `__init__` function is obviously the correct way to get around this (so each instance has its own `somedata` dict) - but when is such behavior desirable? | Note that part of the behavior you're seeing is due to `somedata` being a `dict`, as opposed to a simple data type such as a `bool`.
For instance, see this different example which behaves differently (although very similar):
```
class the_base_class:
    somedata = False

class subclassthing(the_base_class):
    def __init__(self):
        print self.somedata
>>> first = subclassthing()
False
>>> first.somedata = True
>>> print first.somedata
True
>>> second = subclassthing()
False
>>> print first.somedata
True
>>> del first
>>> del second
>>> third = subclassthing()
False
```
The reason this example behaves differently from the one given in the question is because here `first.somedata` is being given a new value (the object `True`), whereas in the first example the dict object referenced by `first.somedata` (and also by the other subclass instances) is being modified.
See Torsten Marek's comment to this answer for further clarification. |
Scrape a dynamic website | 206,855 | 12 | 2008-10-15T23:04:13Z | 206,860 | 7 | 2008-10-15T23:09:14Z | [
"python",
"ajax",
"screen-scraping",
"beautifulsoup"
] | What is the best method to scrape a dynamic website where most of the content is generated by what appears to be ajax requests? I have previous experience with a Mechanize, BeautifulSoup, and python combo, but I am up for something new.
--Edit--
For more detail: I'm trying to scrape the CNN [primary database](http://www.cnn.com/ELECTION/2008/primaries/results/state/). There is a wealth of information there, but there doesn't appear to be an api. | This is a difficult problem because you either have to reverse engineer the javascript on a per-site basis, or implement a javascript engine and run the scripts (which has its own difficulties and pitfalls).
It's a heavyweight solution, but I've seen people do this with Greasemonkey scripts - allowing Firefox to render everything and run the JavaScript, and then scraping the elements. You can even initiate user actions on the page if needed. |
Scrape a dynamic website | 206,855 | 12 | 2008-10-15T23:04:13Z | 216,143 | 7 | 2008-10-19T07:38:59Z | [
"python",
"ajax",
"screen-scraping",
"beautifulsoup"
] | What is the best method to scrape a dynamic website where most of the content is generated by what appears to be ajax requests? I have previous experience with a Mechanize, BeautifulSoup, and python combo, but I am up for something new.
--Edit--
For more detail: I'm trying to scrape the CNN [primary database](http://www.cnn.com/ELECTION/2008/primaries/results/state/). There is a wealth of information there, but there doesn't appear to be an api. | The best solution that I found was to use Firebug to monitor XmlHttpRequests, and then to use a script to resend them. |
Python: Difference between class and instance attributes | 207,000 | 84 | 2008-10-16T00:23:41Z | 207,128 | 100 | 2008-10-16T01:26:53Z | [
"python",
"attributes"
] | Is there any meaningful distinction between:
```
class A(object):
    foo = 5  # some default value
```
vs.
```
class B(object):
    def __init__(self, foo=5):
        self.foo = foo
```
If you're creating a lot of instances, is there any difference in performance or space requirements for the two styles? When you read the code, do you consider the meaning of the two styles to be significantly different? | Beyond performance considerations, there is a significant *semantic* difference. In the class attribute case, there is just one object referred to. In the instance-attribute-set-at-instantiation, there can be multiple objects referred to. For instance
```
>>> class A: foo = []
>>> a, b = A(), A()
>>> a.foo.append(5)
>>> b.foo
[5]
>>> class A:
... def __init__(self): self.foo = []
>>> a, b = A(), A()
>>> a.foo.append(5)
>>> b.foo
[]
``` |
Python: Difference between class and instance attributes | 207,000 | 84 | 2008-10-16T00:23:41Z | 207,759 | 22 | 2008-10-16T08:16:28Z | [
"python",
"attributes"
] | Is there any meaningful distinction between:
```
class A(object):
    foo = 5  # some default value
```
vs.
```
class B(object):
    def __init__(self, foo=5):
        self.foo = foo
```
If you're creating a lot of instances, is there any difference in performance or space requirements for the two styles? When you read the code, do you consider the meaning of the two styles to be significantly different? | The difference is that the attribute on the class is shared by all instances. The attribute on an instance is unique to that instance.
If coming from C++, attributes on the class are more like static member variables. |
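A minimal sketch of that difference:

```python
class A(object):
    shared = []              # one object, stored on the class

a, b = A(), A()
a.shared.append(1)
print(b.shared)              # [1] - the same list, visible via every instance

b.own = 2                    # instance attribute: unique to b
print(hasattr(a, "own"))     # False
```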
Python: Difference between class and instance attributes | 207,000 | 84 | 2008-10-16T00:23:41Z | 26,642,476 | 11 | 2014-10-29T23:32:51Z | [
"python",
"attributes"
] | Is there any meaningful distinction between:
```
class A(object):
    foo = 5  # some default value
```
vs.
```
class B(object):
    def __init__(self, foo=5):
        self.foo = foo
```
If you're creating a lot of instances, is there any difference in performance or space requirements for the two styles? When you read the code, do you consider the meaning of the two styles to be significantly different? | Since people in the comments here and in two other questions marked as dups all appear to be confused about this in the same way, I think it's worth adding an additional answer on top of [Alex Coventry's](http://stackoverflow.com/a/207128/908494).
The fact that Alex is assigning a value of a mutable type, like a list, has nothing to do with whether things are shared or not. We can see this with the `id` function or the `is` operator:
```
>>> class A: foo = object()
>>> a, b = A(), A()
>>> a.foo is b.foo
True
>>> class A:
... def __init__(self): self.foo = object()
>>> a, b = A(), A()
>>> a.foo is b.foo
False
```
(If you're wondering why I used `object()` instead of, say, `5`, that's to avoid running into two whole other issues which I don't want to get into here; for two different reasons, entirely separately-created `5`s can end up being the same instance of the number `5`. But entirely separately-created `object()`s cannot.)
---
So, why is it that `a.foo.append(5)` in Alex's example affects `b.foo`, but `a.foo = 5` in my example doesn't? Well, try `a.foo = 5` in Alex's example, and notice that it doesn't affect `b.foo` there *either*.
`a.foo = 5` is just making `a.foo` into a name for `5`. That doesn't affect `b.foo`, or any other name for the old value that `a.foo` used to refer to.\* It's a little tricky that we're creating an instance attribute that hides a class attribute,\*\* but once you get that, nothing complicated is happening here.
---
Hopefully it's now obvious why Alex used a list: the fact that you can mutate a list means it's easier to show that two variables name the same list, and also means it's more important in real-life code to know whether you have two lists or two names for the same list.
---
\* The confusion for people coming from a language like C++ is that in Python, values aren't stored in variables. Values live off in value-land, on their own, variables are just names for values, and assignment just creates a new name for a value. If it helps, think of each Python variable as a `shared_ptr<T>` instead of a `T`.
\*\* Some people take advantage of this by using a class attribute as a "default value" for an instance attribute that instances may or may not set. This can be useful in some cases, but it can also be confusing, so be careful with it. |
List of IP addresses/hostnames from local network in Python | 207,234 | 23 | 2008-10-16T02:32:33Z | 207,246 | 12 | 2008-10-16T02:38:02Z | [
"python",
"networking"
] | How can I get a list of the IP addresses or host names from a local network easily in Python?
It would be best if it was multi-platform, but it needs to work on Mac OS X first, then others follow.
**Edit:** By local I mean all **active** addresses within a local network, such as `192.168.xxx.xxx`.
So, if the IP address of my computer (within the local network) is `192.168.1.1`, and I have three other connected computers, I would want it to return the IP addresses `192.168.1.2`, `192.168.1.3`, `192.168.1.4`, and possibly their hostnames. | If by "local" you mean on the same network segment, then you have to perform the following steps:
1. Determine your own IP address
2. Determine your own netmask
3. Determine the network range
4. Scan all the addresses (except the lowest, which is your network address and the highest, which is your broadcast address).
5. Use your DNS's reverse lookup to determine the hostname for IP addresses which respond to your scan.
Or you can just let Python execute nmap externally and pipe the results back into your program. |
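Steps 1-3 above amount to a little bit-masking; a sketch with the stdlib `socket` and `struct` modules (the address and netmask here are made-up examples):

```python
import socket, struct

ip   = struct.unpack("!I", socket.inet_aton("192.168.1.1"))[0]
mask = struct.unpack("!I", socket.inet_aton("255.255.255.0"))[0]

network   = ip & mask                        # lowest address: the network itself
broadcast = network | (~mask & 0xFFFFFFFF)   # highest address: broadcast

# everything strictly between the two is scannable
hosts = [socket.inet_ntoa(struct.pack("!I", n))
         for n in range(network + 1, broadcast)]
print("%s .. %s (%d hosts)" % (hosts[0], hosts[-1], len(hosts)))
```

Step 4 would then ping or connect to each address in `hosts`, and step 5 resolve the responders with `socket.gethostbyaddr()`.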
List of IP addresses/hostnames from local network in Python | 207,234 | 23 | 2008-10-16T02:32:33Z | 602,965 | 14 | 2009-03-02T16:27:29Z | [
"python",
"networking"
] | How can I get a list of the IP addresses or host names from a local network easily in Python?
It would be best if it was multi-platform, but it needs to work on Mac OS X first, then others follow.
**Edit:** By local I mean all **active** addresses within a local network, such as `192.168.xxx.xxx`.
So, if the IP address of my computer (within the local network) is `192.168.1.1`, and I have three other connected computers, I would want it to return the IP addresses `192.168.1.2`, `192.168.1.3`, `192.168.1.4`, and possibly their hostnames. | **Update**: The script is now located on [github](http://github.com/bwaldvogel/neighbourhood).
I wrote a [small Python script](https://github.com/bwaldvogel/neighbourhood/blob/master/neighbourhood.py) that leverages [scapy](http://www.secdev.org/projects/scapy/)'s `arping()`. |
Python module that implements ftps | 207,939 | 9 | 2008-10-16T09:33:39Z | 208,256 | 9 | 2008-10-16T12:00:12Z | [
"python",
"ftps"
] | I was wondering if anybody could point me towards a free ftps module for python.
I am a complete newbie to python, but this is something I need for a work project. I need an ftps client to connect to a 3rd party ftps server.
thanks,
David. | I believe you could use Twisted to implement FTPS by simply using its FTP implementation, but changing the [`FTPClient.connectFactory`](http://twistedmatrix.com/trac/browser/trunk/twisted/protocols/ftp.py?rev=24609#L2186) attribute to be a function that does something with [`connectSSL`](http://twistedmatrix.com/documents/8.1.0/api/twisted.internet.interfaces.IReactorSSL.connectSSL.html) rather than `connectTCP`.
Are you sure you want FTPS though? [SFTP is a different, better, and much more popular protocol](http://geekswithblogs.net/bvamsi/archive/2006/03/23/73147.aspx) these days: Twisted contains [an SFTP implementation](http://twistedmatrix.com/trac/browser/trunk/twisted/conch/ssh/filetransfer.py?rev=24609) as well. |
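Note that on newer Python versions (2.7/3.2 and up, i.e. after this question was asked) the standard library itself speaks FTPS via `ftplib.FTP_TLS`; a sketch, with placeholder host and credentials:

```python
from ftplib import FTP_TLS

def list_remote_dir(host, user, password):
    """Connect over explicit FTPS and return the remote directory listing."""
    ftps = FTP_TLS(host)
    ftps.login(user, password)       # control channel is already TLS-protected
    ftps.prot_p()                    # protect the data channel too
    lines = []
    ftps.retrlines("LIST", lines.append)
    ftps.quit()
    return lines
```

`prot_p()` matters: without it, directory listings and file transfers go over the data connection in the clear.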
How to enable MySQL client auto re-connect with MySQLdb? | 207,981 | 30 | 2008-10-16T09:56:05Z | 210,096 | 7 | 2008-10-16T20:11:18Z | [
"python",
"mysql"
] | I came across PHP way of doing the trick:
```
my_bool reconnect = 1;
mysql_options(&mysql, MYSQL_OPT_RECONNECT, &reconnect);
```
but no luck with MySQLdb (python-mysql).
Can anybody please give a clue? Thanks. | If you are using ubuntu Linux there was a patch added to the python-mysql package that added the ability to set that same MYSQL\_OPT\_RECONNECT option (see [here](https://launchpad.net/ubuntu/hardy/+source/python-mysqldb/1.2.2-5)). I have not tried it though.
Unfortunately, the patch was later removed due to a conflict with autoconnect and transactions (described [here](https://launchpad.net/ubuntu/+source/python-mysqldb)).
The comments from that page say:
1.2.2-7 Published in intrepid-release on 2008-06-19
python-mysqldb (1.2.2-7) unstable; urgency=low
[ Sandro Tosi ]
\* debian/control
- list items lines in description starts with 2 space, to avoid reformat
on webpages (Closes: #480341)
[ Bernd Zeimetz ]
\* debian/patches/02\_reconnect.dpatch:
- Dropping patch:
Comment in Storm which explains the problem:
```
# Here is another sad story about bad transactional behavior. MySQL
# offers a feature to automatically reconnect dropped connections.
# What sounds like a dream, is actually a nightmare for anyone who
# is dealing with transactions. When a reconnection happens, the
# currently running transaction is transparently rolled back, and
# everything that was being done is lost, without notice. Not only
# that, but the connection may be put back in AUTOCOMMIT mode, even
# when that's not the default MySQLdb behavior. The MySQL developers
# quickly understood that this is a terrible idea, and removed the
# behavior in MySQL 5.0.3. Unfortunately, Debian and Ubuntu still
# have a patch right now which *reenables* that behavior by default
# even past version 5.0.3.
``` |
How to enable MySQL client auto re-connect with MySQLdb? | 207,981 | 30 | 2008-10-16T09:56:05Z | 982,873 | 58 | 2009-06-11T18:38:11Z | [
"python",
"mysql"
] | I came across PHP way of doing the trick:
```
my_bool reconnect = 1;
mysql_options(&mysql, MYSQL_OPT_RECONNECT, &reconnect);
```
but no luck with MySQLdb (python-mysql).
Can anybody please give a clue? Thanks. | I solved this problem by creating a function that wraps the `cursor.execute()` method since that's what was throwing the `MySQLdb.OperationalError` exception. The other example above implies that it is the `conn.cursor()` method that throws this exception.
```
import MySQLdb
class DB:
    conn = None

    def connect(self):
        self.conn = MySQLdb.connect()

    def query(self, sql):
        try:
            cursor = self.conn.cursor()
            cursor.execute(sql)
        except (AttributeError, MySQLdb.OperationalError):
            self.connect()
            cursor = self.conn.cursor()
            cursor.execute(sql)
        return cursor
db = DB()
sql = "SELECT * FROM foo"
cur = db.query(sql)
# wait a long time for the Mysql connection to timeout
cur = db.query(sql)
# still works
``` |
How to enable MySQL client auto re-connect with MySQLdb? | 207,981 | 30 | 2008-10-16T09:56:05Z | 29,331,237 | 9 | 2015-03-29T15:48:28Z | [
"python",
"mysql"
] | I came across PHP way of doing the trick:
```
my_bool reconnect = 1;
mysql_options(&mysql, MYSQL_OPT_RECONNECT, &reconnect);
```
but no luck with MySQLdb (python-mysql).
Can anybody please give a clue? Thanks. | I had problems with the proposed solution because it didn't catch the exception. I am not sure why.
I solved the problem with the `ping(True)` call, which I think is neater:
```
import MySQLdb
con=MySQLdb.Connect()
con.ping(True)
cur=con.cursor()
```
Got it from here: <http://www.neotitans.com/resources/python/mysql-python-connection-error-2006.html> |
How to read and write multiple files? | 208,120 | 4 | 2008-10-16T11:06:36Z | 208,156 | 9 | 2008-10-16T11:23:29Z | [
"python"
] | I want to write a program for this: In a folder I have *n* number of files; first read one file and perform some operation, then store the result in a separate file. Then read the 2nd file, perform the operation again, and save the result in a new 2nd file. Do the same procedure for *n* number of files. The program reads all files one by one and stores the results of each file separately. Please give examples of how I can do it. | ```
import sys
# argv is your commandline arguments, argv[0] is your program name, so skip it
for n in sys.argv[1:]:
    print(n)  # print out the filename we are currently processing
    infile = open(n, "r")     # renamed so we don't shadow the builtin input()
    outfile = open(n + ".out", "w")
    # do some processing
    infile.close()
    outfile.close()
```
Then call it like:
```
./foo.py bar.txt baz.txt
``` |
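If you would rather process every file in a folder (as the question describes) instead of listing them on the command line, the standard-library `glob` module can supply the filenames. A minimal sketch — the `*.txt` pattern and the upper-casing "operation" are placeholders for whatever processing you actually need:

```python
import glob
import os
import tempfile

def process_dir(folder, pattern="*.txt"):
    """Read each matching file and write the result to <name>.out."""
    for path in glob.glob(os.path.join(folder, pattern)):
        with open(path) as infile, open(path + ".out", "w") as outfile:
            outfile.write(infile.read().upper())  # placeholder operation

# demo on a throwaway directory
folder = tempfile.mkdtemp()
with open(os.path.join(folder, "a.txt"), "w") as f:
    f.write("hello")
process_dir(folder)
with open(os.path.join(folder, "a.txt.out")) as f:
    print(f.read())  # HELLO
```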
How to base64 encode a PDF file in Python | 208,894 | 6 | 2008-10-16T14:54:49Z | 210,534 | 21 | 2008-10-16T22:33:24Z | [
"python",
"encoding",
"base64",
"xml-rpc"
] | How should I base64 encode a PDF file for transport over XML-RPC in Python? | If you don't want to use the xmlrpclib's Binary class, you can just use the .encode() method of strings:
```
a = open("pdf_reference.pdf", "rb").read().encode("base64")
``` |
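Note that the `.encode("base64")` string method is Python 2 only. On Python 3 the same job is done by the standard-library `base64` module — a sketch, using a bytes literal in place of a real PDF file:

```python
import base64

pdf_bytes = b"%PDF-1.4 stand-in for a real file"  # e.g. open("file.pdf", "rb").read()
encoded = base64.b64encode(pdf_bytes).decode("ascii")  # str, safe for XML-RPC
assert base64.b64decode(encoded) == pdf_bytes          # round-trips losslessly
```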
Can I use Python as a Bash replacement? | 209,470 | 207 | 2008-10-16T17:11:37Z | 209,557 | 8 | 2008-10-16T17:40:43Z | [
"python",
"bash",
"shell"
] | I currently do my textfile manipulation through a bunch of badly remembered AWK, sed, Bash and a tiny bit of Perl.
I've seen mentioned a few places that python is good for this kind of thing, I know a little and I would like to know more. Is Python a good choice for this, and is there a good book or guide to learning how to use Python to replace shell scripting, AWK, sed and friends? | I suggest the awesome online book *[Dive Into Python](http://www.diveintopython.net)*. It's how I learned the language originally.
Beyond teaching you the basic structure of the language, and a whole lot of useful data structures, it has a good chapter on [file handling](http://www.diveintopython.net/file_handling/index.html) and subsequent chapters on [regular expressions](http://www.diveintopython.net/regular_expressions/index.html) and more.
Can I use Python as a Bash replacement? | 209,470 | 207 | 2008-10-16T17:11:37Z | 209,562 | 127 | 2008-10-16T17:41:44Z | [
"python",
"bash",
"shell"
] | I currently do my textfile manipulation through a bunch of badly remembered AWK, sed, Bash and a tiny bit of Perl.
I've seen mentioned a few places that python is good for this kind of thing, I know a little and I would like to know more. Is Python a good choice for this, and is there a good book or guide to learning how to use Python to replace shell scripting, AWK, sed and friends? | Any shell has several sets of features.
* The Essential Linux/Unix commands. All of these are available through the [subprocess](https://docs.python.org/3/library/subprocess.html) library. This isn't always the best first choice for running *all* external commands. Look also at [shutil](https://docs.python.org/3/library/shutil.html) for some commands that are separate Linux commands but that you could probably implement directly in your Python scripts. Another huge batch of Linux commands is in the [os](https://docs.python.org/3/library/os.html) library; you can do these more simply in Python.
And -- bonus! -- more quickly. Each separate Linux command in the shell (with a few exceptions) forks a subprocess. By using Python `shutil` and `os` modules, you don't fork a subprocess.
* The shell environment features. This includes stuff that sets a command's environment (current directory and environment variables and what-not). You can easily manage this from Python directly.
* The shell programming features. This is all the process status code checking, the various logic commands (if, while, for, etc.), the test command and all of its relatives. The function definition stuff. This is all much, much easier in Python. This is one of the huge victories in getting rid of bash and doing it in Python.
* Interaction features. This includes command history and what-not. You don't need this for writing shell scripts. This is only for human interaction, and not for script-writing.
* The shell file management features. This includes redirection and pipelines. This is trickier. Much of this can be done with subprocess. But some things that are easy in the shell are unpleasant in Python. Specifically stuff like `(a | b; c ) | something >result`. This runs two processes in parallel (with output of `a` as input to `b`), followed by a third process. The output from that sequence is run in parallel with `something` and the output is collected into a file named `result`. That's just complex to express in any other language.
Specific programs (awk, sed, grep, etc.) can often be rewritten as Python modules. Don't go overboard. Replace what you need and evolve your "grep" module. Don't start out writing a Python module that replaces "grep".
The best thing is that you can do this in steps.
1. Replace AWK and PERL with Python. Leave everything else alone.
2. Look at replacing GREP with Python. This can be a bit more complex, but your version of GREP can be tailored to your processing needs.
3. Look at replacing FIND with Python loops that use `os.walk`. This is a big win because you don't spawn as many processes.
4. Look at replacing common shell logic (loops, decisions, etc.) with Python scripts. |
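As a sketch of the `subprocess` approach above, here is how a two-stage shell pipeline can be wired up in Python (the commands are placeholders; `printf` and `sort` are assumed to be on `PATH`):

```python
import subprocess

# Equivalent of the shell pipeline:  printf 'b\na\nc\n' | sort
first = subprocess.Popen(["printf", r"b\na\nc\n"], stdout=subprocess.PIPE)
second = subprocess.Popen(["sort"], stdin=first.stdout, stdout=subprocess.PIPE)
first.stdout.close()            # so "sort" sees EOF when printf exits
out, _ = second.communicate()
print(out.decode())             # a, b, c on separate lines
```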
Can I use Python as a Bash replacement? | 209,470 | 207 | 2008-10-16T17:11:37Z | 209,670 | 28 | 2008-10-16T18:16:52Z | [
"python",
"bash",
"shell"
] | I currently do my textfile manipulation through a bunch of badly remembered AWK, sed, Bash and a tiny bit of Perl.
I've seen mentioned a few places that python is good for this kind of thing, I know a little and I would like to know more. Is Python a good choice for this, and is there a good book or guide to learning how to use Python to replace shell scripting, AWK, sed and friends? | * If you want to use Python as a shell, why not have a look at [IPython](http://ipython.org/) ? It is also good to learn interactively the language.
* If you do a lot of text manipulation, and if you use Vim as a text editor, you can also directly write plugins for Vim in python. just type ":help python" in Vim and follow the instructions or have a look at this [presentation](http://www.tummy.com/Community/Presentations/vimpython-20070225/vim.html). It is so easy and powerfull to write functions that you will use directly in your editor! |
Can I use Python as a Bash replacement? | 209,470 | 207 | 2008-10-16T17:11:37Z | 210,290 | 16 | 2008-10-16T20:58:33Z | [
"python",
"bash",
"shell"
] | I currently do my textfile manipulation through a bunch of badly remembered AWK, sed, Bash and a tiny bit of Perl.
I've seen mentioned a few places that python is good for this kind of thing, I know a little and I would like to know more. Is Python a good choice for this, and is there a good book or guide to learning how to use Python to replace shell scripting, AWK, sed and friends? | In the beginning there was sh, sed, and awk (and find, and grep, and...). It was good. But awk can be an odd little beast and hard to remember if you don't use it often. Then the great camel created Perl. Perl was a system administrator's dream. It was like shell scripting on steroids. Text processing, including regular expressions were just part of the language. Then it got ugly... People tried to make big applications with Perl. Now, don't get me wrong, Perl can be an application, but it can (can!) look like a mess if you're not really careful. Then there is all this flat data business. It's enough to drive a programmer nuts.
Enter Python, Ruby, et al. These are really very good general purpose languages. They support text processing, and do it well (though perhaps not as tightly entwined in the basic core of the language). But they also scale up very well, and still have nice looking code at the end of the day. They also have developed pretty hefty communities with plenty of libraries for most anything.
Now, much of the negativity towards Perl is a matter of opinion, and certainly some people can write very clean Perl, but with this many people complaining about it being too easy to create obfuscated code, you know some grain of truth is there. The question then really becomes: are you ever going to use this language for more than simple bash script replacements? If not, learn some more Perl; it is absolutely fantastic for that. If, on the other hand, you want a language that will grow with you as you want to do more, may I suggest Python or Ruby.
Either way, good luck! |
Can I use Python as a Bash replacement? | 209,470 | 207 | 2008-10-16T17:11:37Z | 12,915,952 | 81 | 2012-10-16T13:37:46Z | [
"python",
"bash",
"shell"
] | I currently do my textfile manipulation through a bunch of badly remembered AWK, sed, Bash and a tiny bit of Perl.
I've seen mentioned a few places that python is good for this kind of thing, I know a little and I would like to know more. Is Python a good choice for this, and is there a good book or guide to learning how to use Python to replace shell scripting, AWK, sed and friends? | Yes, of course :)
Take a look at these libraries which help you ***Never write shell scripts again*** (Plumbum's motto).
* [Plumbum](http://plumbum.readthedocs.org/en/latest/)
* [Sarge](https://bitbucket.org/vinay.sajip/sarge/)
* [sh](http://amoffat.github.com/sh/)
Also, if you want to replace awk, sed and grep with something Python based then I recommend [pyp](http://pyvideo.org/video/686/the-pyed-piper-a-modern-python-alternative-to-aw) -
> "The Pyed Piper", or pyp, is a linux command line text manipulation
> tool similar to awk or sed, but which uses standard python string and
> list methods as well as custom functions evolved to generate fast
> results in an intense production environment. |
Can I use Python as a Bash replacement? | 209,470 | 207 | 2008-10-16T17:11:37Z | 15,712,610 | 42 | 2013-03-29T22:49:18Z | [
"python",
"bash",
"shell"
] | I currently do my textfile manipulation through a bunch of badly remembered AWK, sed, Bash and a tiny bit of Perl.
I've seen mentioned a few places that python is good for this kind of thing, I know a little and I would like to know more. Is Python a good choice for this, and is there a good book or guide to learning how to use Python to replace shell scripting, AWK, sed and friends? | I just discovered how to combine the best parts of bash and ipython. Up to now this seems more comfortable to me than using subprocess and so on. You can easily copy big parts of existing bash scripts and e.g. add error handling in the python way :)
And here is my result:
```
#!/usr/bin/env ipython3
# *** How to have the most comfortable scripting experience of your life ***
# ######################################################################
#
# … by using ipython for scripting combined with subcommands from bash!
#
# 1. echo "#!/usr/bin/env ipython3" > scriptname.ipy # creates new ipy-file
#
# 2. chmod +x scriptname.ipy # make it executable
#
# 3. starting with line 2, write normal python or do some of
# the ! magic of ipython, so that you can use unix commands
# within python and even assign their output to a variable via
# var = !cmd1 | cmd2 | cmd3 # enjoy ;)
#
# 4. run via ./scriptname.ipy - if it fails to recognize % and !
# but parses raw python fine, please check again for the .ipy suffix
# ugly example, please go and find more in the wild
files = !ls *.* | grep "y"
for file in files:
    !echo $file | grep "p"
# sorry for this nonsense example ;)
``` |
Can I use Python as a Bash replacement? | 209,470 | 207 | 2008-10-16T17:11:37Z | 16,726,383 | 7 | 2013-05-24T01:23:24Z | [
"python",
"bash",
"shell"
] | I currently do my textfile manipulation through a bunch of badly remembered AWK, sed, Bash and a tiny bit of Perl.
I've seen mentioned a few places that python is good for this kind of thing, I know a little and I would like to know more. Is Python a good choice for this, and is there a good book or guide to learning how to use Python to replace shell scripting, AWK, sed and friends? | One reason I love Python is that it is much better standardized than the POSIX tools. With shell tools, I have to double and triple check that each bit is compatible with other operating systems. A program written on a Linux system might not work the same on a BSD system or OS X. With Python, I just have to check that the target system has a sufficiently modern version of Python.
Even better, a program written in standard Python will even run on Windows! |
Can I use Python as a Bash replacement? | 209,470 | 207 | 2008-10-16T17:11:37Z | 30,617,053 | 33 | 2015-06-03T10:08:43Z | [
"python",
"bash",
"shell"
] | I currently do my textfile manipulation through a bunch of badly remembered AWK, sed, Bash and a tiny bit of Perl.
I've seen mentioned a few places that python is good for this kind of thing, I know a little and I would like to know more. Is Python a good choice for this, and is there a good book or guide to learning how to use Python to replace shell scripting, AWK, sed and friends? | As of 2015 and Python 3.4's release, there's now a reasonably complete user-interactive shell available at: <http://xon.sh/> or [https://github.com/scopatz/xonsh](https://github.com/scopatz/xonsh/)
The [demonstration video](http://xon.sh/_static/xonsh-demo.webm) does not show pipes being used, but they ARE supported when in the default shell mode.
Xonsh ('conch') tries very hard to emulate bash, so things you've already gained muscle memory for, like
```
env | uniq | sort -r | grep PATH
```
or
```
my-web-server 2>&1 | my-log-sorter
```
will still work fine.
The tutorial is quite lengthy and seems to cover a significant amount of the functionality someone would generally expect at an ash or bash prompt:
* Compiles, Evaluates, & Executes!
* Command History and Tab Completion
* Help & Superhelp with ? & ??
* Aliases & Customized Prompts
* Executes Commands and/or \*.xsh Scripts which can also be imported
* Environment Variables including Lookup with ${}
* Input/Output Redirection and Combining
* Background Jobs & Job Control
* Nesting Subprocesses, Pipes, and Coprocesses
* Subprocess-mode when a command exists, Python-mode otherwise
* Captured Subprocess with $(), Uncaptured Subprocess with $[], Python Evaluation with @()
* Filename Globbing with \* or Regular Expression Filename Globbing with Backticks |
Howto do python command-line autocompletion but NOT only at the beginning of a string | 209,484 | 6 | 2008-10-16T17:16:34Z | 209,503 | 10 | 2008-10-16T17:23:34Z | [
"python",
"linux",
"unix",
"command-line",
"autocomplete"
] | Python, through its readline bindings, allows for great command-line autocompletion (as described [here](http://stackoverflow.com/questions/187621/how-to-make-a-python-command-line-program-autocomplete-arbitrary-things-not-int)).
But, the completion only seems to work at the beginning of strings. If you want to match the middle or end of a string readline doesn't work.
I would like to autocomplete strings, in a command-line python program by matching what I type with any of the strings in a list of available strings.
* A good example of the type of autocompletion I would like to have is the type that happens in GMail when you type in the To field. If you type one of your contacts' last name, it will come up just as well as if you typed her first name.
* Some use of the up and down arrows or some other method to select from the matched strings may be needed (and not needed in the case of readline) and that is fine in my case.
* My particular use case is a command-line program that sends emails.
* Specific code examples would be very helpful.
Using terminal emulators like curses would be fine. It only has to run on linux, not Mac or Windows.
Here is an example:
Say I have the following three strings in a list
```
['Paul Eden <[email protected]>',
'Eden Jones <[email protected]>',
'Somebody Else <[email protected]>']
```
I would like some code that will autocomplete the first two items in the list after I type 'Eden' and then allow me to pick one of them (all through the command-line using the keyboard). | I'm not sure I understand the problem. You could use `readline.clear_history` and `readline.add_history` to set up the completable strings you want, then control-r to search backward in the history (just as if you were at a shell prompt). For example:
```
#!/usr/bin/env python
import readline
readline.clear_history()
readline.add_history('foo')
readline.add_history('bar')
while 1:
    print raw_input('> ')
```
Alternatively, you could write your own completer version and bind the appropriate key to it. This version uses caching in case your match list is huge:
```
#!/usr/bin/env python
import readline
values = ['Paul Eden <[email protected]>',
          'Eden Jones <[email protected]>',
          'Somebody Else <[email protected]>']
completions = {}

def completer(text, state):
    try:
        matches = completions[text]
    except KeyError:
        matches = [value for value in values
                   if text.upper() in value.upper()]
        completions[text] = matches
    try:
        return matches[state]
    except IndexError:
        return None

readline.set_completer(completer)
readline.parse_and_bind('tab: menu-complete')

while 1:
    a = raw_input('> ')
    print 'said:', a
``` |
Convert hex string to int in Python | 209,513 | 414 | 2008-10-16T17:28:03Z | 209,529 | 99 | 2008-10-16T17:32:10Z | [
"python",
"string",
"hex"
] | How do I convert a hex string to an int in Python?
I may have it as "`0xffff`" or just "`ffff`". | `int(hexString, 16)` does the trick, and works with and without the 0x prefix:
```
>>> int("a", 16)
10
>>> int("0xa",16)
10
``` |
Convert hex string to int in Python | 209,513 | 414 | 2008-10-16T17:28:03Z | 209,530 | 32 | 2008-10-16T17:32:32Z | [
"python",
"string",
"hex"
] | How do I convert a hex string to an int in Python?
I may have it as "`0xffff`" or just "`ffff`". | For any given string s:
```
int(s, 16)
``` |
Convert hex string to int in Python | 209,513 | 414 | 2008-10-16T17:28:03Z | 209,550 | 594 | 2008-10-16T17:37:52Z | [
"python",
"string",
"hex"
] | How do I convert a hex string to an int in Python?
I may have it as "`0xffff`" or just "`ffff`". | **Without** the 0x prefix, you need to specify the base explicitly, otherwise there's no way to tell:
```
x = int("deadbeef", 16)
```
**With** the 0x prefix, Python can distinguish hex and decimal automatically.
```
>>> print int("0xdeadbeef", 0)
3735928559
>>> print int("10", 0)
10
```
(You *must* specify `0` as the base in order to invoke this prefix-guessing behavior; omitting the second parameter means to assume base-10. See the comments for more details.) |
Convert hex string to int in Python | 209,513 | 414 | 2008-10-16T17:28:03Z | 11,275,700 | 9 | 2012-06-30T16:25:01Z | [
"python",
"string",
"hex"
] | How do I convert a hex string to an int in Python?
I may have it as "`0xffff`" or just "`ffff`". | Adding to Dan's answer above: if you supply the int() function with a hex string, you have to specify the base as 16, or it will not treat the string as a valid value (a ValueError is raised). Specifying base 16 is unnecessary for hex numbers not contained in strings.
```
print int(0xdeadbeef) # valid
myHex = "0xdeadbeef"
print int(myHex) # invalid, raises ValueError
print int(myHex , 16) # valid
``` |
Map two lists into a dictionary in Python | 209,840 | 435 | 2008-10-16T19:05:47Z | 209,854 | 770 | 2008-10-16T19:09:05Z | [
"python",
"dictionary"
] | Imagine that you have:
```
keys = ('name', 'age', 'food')
values = ('Monty', 42, 'spam')
```
What is the simplest way to produce the following dictionary ?
```
dict = {'name' : 'Monty', 'age' : 42, 'food' : 'spam'}
```
This code works, but I'm not really proud of it :
```
dict = {}
junk = map(lambda k, v: dict.update({k: v}), keys, values)
``` | Like this:
```
>>> keys = ['a', 'b', 'c']
>>> values = [1, 2, 3]
>>> dictionary = dict(zip(keys, values))
>>> print dictionary
{'a': 1, 'b': 2, 'c': 3}
```
Voila :-) The pairwise dict constructor and zip function are awesomely useful: <https://docs.python.org/2/library/functions.html#func-dict> |
Map two lists into a dictionary in Python | 209,840 | 435 | 2008-10-16T19:05:47Z | 209,855 | 25 | 2008-10-16T19:09:18Z | [
"python",
"dictionary"
] | Imagine that you have:
```
keys = ('name', 'age', 'food')
values = ('Monty', 42, 'spam')
```
What is the simplest way to produce the following dictionary ?
```
dict = {'name' : 'Monty', 'age' : 42, 'food' : 'spam'}
```
This code works, but I'm not really proud of it :
```
dict = {}
junk = map(lambda k, v: dict.update({k: v}), keys, values)
``` | ```
>>> keys = ('name', 'age', 'food')
>>> values = ('Monty', 42, 'spam')
>>> dict(zip(keys, values))
{'food': 'spam', 'age': 42, 'name': 'Monty'}
``` |
Map two lists into a dictionary in Python | 209,840 | 435 | 2008-10-16T19:05:47Z | 209,880 | 92 | 2008-10-16T19:16:02Z | [
"python",
"dictionary"
] | Imagine that you have:
```
keys = ('name', 'age', 'food')
values = ('Monty', 42, 'spam')
```
What is the simplest way to produce the following dictionary ?
```
dict = {'name' : 'Monty', 'age' : 42, 'food' : 'spam'}
```
This code works, but I'm not really proud of it :
```
dict = {}
junk = map(lambda k, v: dict.update({k: v}), keys, values)
``` | Try this:
```
>>> import itertools
>>> keys = ('name', 'age', 'food')
>>> values = ('Monty', 42, 'spam')
>>> adict = dict(itertools.izip(keys,values))
>>> adict
{'food': 'spam', 'age': 42, 'name': 'Monty'}
```
It was the simplest solution I could come up with.
PS It's also more economical in memory consumption compared to zip. |
Map two lists into a dictionary in Python | 209,840 | 435 | 2008-10-16T19:05:47Z | 210,234 | 11 | 2008-10-16T20:45:04Z | [
"python",
"dictionary"
] | Imagine that you have:
```
keys = ('name', 'age', 'food')
values = ('Monty', 42, 'spam')
```
What is the simplest way to produce the following dictionary ?
```
dict = {'name' : 'Monty', 'age' : 42, 'food' : 'spam'}
```
This code works, but I'm not really proud of it :
```
dict = {}
junk = map(lambda k, v: dict.update({k: v}), keys, values)
``` | If you need to transform keys or values before creating a dictionary then a [generator expression](http://docs.python.org/ref/genexpr.html) could be used. Example:
```
>>> adict = dict((str(k), v) for k, v in zip(['a', 1, 'b'], [2, 'c', 3]))
```
Take a look [Code Like a Pythonista: Idiomatic Python](http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html). |
Map two lists into a dictionary in Python | 209,840 | 435 | 2008-10-16T19:05:47Z | 10,971,932 | 23 | 2012-06-10T20:03:34Z | [
"python",
"dictionary"
] | Imagine that you have:
```
keys = ('name', 'age', 'food')
values = ('Monty', 42, 'spam')
```
What is the simplest way to produce the following dictionary ?
```
dict = {'name' : 'Monty', 'age' : 42, 'food' : 'spam'}
```
This code works, but I'm not really proud of it :
```
dict = {}
junk = map(lambda k, v: dict.update({k: v}), keys, values)
``` | You can also use dictionary comprehensions in Python ≥ 2.7:
```
>>> keys = ('name', 'age', 'food')
>>> values = ('Monty', 42, 'spam')
>>> {k: v for k, v in zip(keys, values)}
{'food': 'spam', 'age': 42, 'name': 'Monty'}
``` |
Map two lists into a dictionary in Python | 209,840 | 435 | 2008-10-16T19:05:47Z | 16,750,190 | 7 | 2013-05-25T13:47:03Z | [
"python",
"dictionary"
] | Imagine that you have:
```
keys = ('name', 'age', 'food')
values = ('Monty', 42, 'spam')
```
What is the simplest way to produce the following dictionary ?
```
dict = {'name' : 'Monty', 'age' : 42, 'food' : 'spam'}
```
This code works, but I'm not really proud of it :
```
dict = {}
junk = map(lambda k, v: dict.update({k: v}), keys, values)
``` | With Python 3.x, go for dict comprehensions:
```
keys = ('name', 'age', 'food')
values = ('Monty', 42, 'spam')
dic = {k:v for k,v in zip(keys, values)}
print(dic)
```
More on [dict comprehensions here](http://www.python.org/dev/peps/pep-0274/), an example is there:
```
>>> print {i : chr(65+i) for i in range(4)}
{0 : 'A', 1 : 'B', 2 : 'C', 3 : 'D'}
``` |
Map two lists into a dictionary in Python | 209,840 | 435 | 2008-10-16T19:05:47Z | 33,737,067 | 21 | 2015-11-16T13:54:24Z | [
"python",
"dictionary"
] | Imagine that you have:
```
keys = ('name', 'age', 'food')
values = ('Monty', 42, 'spam')
```
What is the simplest way to produce the following dictionary ?
```
dict = {'name' : 'Monty', 'age' : 42, 'food' : 'spam'}
```
This code works, but I'm not really proud of it :
```
dict = {}
junk = map(lambda k, v: dict.update({k: v}), keys, values)
``` | > Imagine that you have:
>
> ```
> keys = ('name', 'age', 'food')
> values = ('Monty', 42, 'spam')
> ```
>
> **What is the simplest way to produce the following dictionary ?**
>
> ```
> dict = {'name' : 'Monty', 'age' : 42, 'food' : 'spam'}
> ```
# Python 2
I see some answers mention using `izip` from `itertools`, but it goes away in Python 3. However, `izip` is the best approach for Python 2:
```
from itertools import izip
new_dict = dict(izip(keys, values))
```
# Python 3
In Python 3, `zip` becomes the same function that was in the `itertools` module, so that is simply:
```
new_dict = dict(zip(keys, values))
```
# Python 2.7 and 3, dict comprehension:
A possible improvement on using the dict constructor is to use the native syntax of a dict comprehension (not a list comprehension, as others have mistakenly put it):
```
new_dict = {k: v for k, v in zip(keys, values)}
```
In all cases:
```
>>> new_dict
{'age': 42, 'name': 'Monty', 'food': 'spam'}
```
## Explanation:
If we look at the help on `dict` we see that it takes a variety of forms of arguments:
```
>>> help(dict)
class dict(object)
| dict() -> new empty dictionary
| dict(mapping) -> new dictionary initialized from a mapping object's
| (key, value) pairs
| dict(iterable) -> new dictionary initialized as if via:
|     d = {}
|     for k, v in iterable:
|         d[k] = v
| dict(**kwargs) -> new dictionary initialized with the name=value pairs
| in the keyword argument list. For example: dict(one=1, two=2)
```
The optimal approach is to use an iterable while avoiding creating unnecessary data structures. In Python 2, zip creates an unnecessary list:
```
>>> zip(keys, values)
[('name', 'Monty'), ('age', 42), ('food', 'spam')]
```
In Python 3, the equivalent would be:
```
>>> list(zip(keys, values))
[('name', 'Monty'), ('age', 42), ('food', 'spam')]
```
and Python 3's `zip` merely creates an iterable object:
```
>>> zip(keys, values)
<zip object at 0x7f0e2ad029c8>
```
Since we want to avoid creating unnecessary data structures, we usually want to avoid Python 2's `zip` (since it creates an unnecessary list).
# Less performant alternatives:
This is a generator expression being passed to the dict constructor:
```
generator_expression = ((k, v) for k, v in zip(keys, values))
dict(generator_expression)
```
or equivalently:
```
dict((k, v) for k, v in zip(keys, values))
```
And this is a list comprehension being passed to the dict constructor:
```
dict([(k, v) for k, v in zip(keys, values)])
```
In the first two cases, an extra layer of non-operative (thus unnecessary) computation is placed over the zip iterable, and in the case of the list comprehension, an extra list is unnecessarily created. I would expect all of them to be less performant, and certainly not more-so.
# Performance review:
In 64 bit Python 3.4.3, on Ubuntu 14.04, ordered from fastest to slowest:
```
>>> min(timeit.repeat(lambda: {k: v for k, v in zip(keys, values)}))
0.7836067057214677
>>> min(timeit.repeat(lambda: dict(zip(keys, values))))
1.0321204089559615
>>> min(timeit.repeat(lambda: {keys[i]: values[i] for i in range(len(keys))}))
1.0714934510178864
>>> min(timeit.repeat(lambda: dict([(k, v) for k, v in zip(keys, values)])))
1.6110592018812895
>>> min(timeit.repeat(lambda: dict((k, v) for k, v in zip(keys, values))))
1.7361853648908436
``` |
Python: unsigned 32 bit bitwise arithmetic | 210,629 | 24 | 2008-10-16T23:25:31Z | 210,707 | 8 | 2008-10-17T00:04:03Z | [
"python",
"math"
] | Trying to answer to another post whose solution deals with IP addresses and netmasks, I got stuck with plain bitwise arithmetic.
Is there a standard way, in Python, to carry on bitwise AND, OR, XOR, NOT operations assuming that the inputs are "32 bit" (maybe negative) integers or longs, and that the result must be a long in the range [0, 2\*\*32]?
In other words, I need a working Python counterpart to the C bitwise operations between unsigned longs.
EDIT: the specific issue is this:
```
>>> m = 0xFFFFFF00 # netmask 255.255.255.0
>>> ~m
-4294967041L # wtf?! I want 255
``` | ```
from numpy import uint32
``` |
Python: unsigned 32 bit bitwise arithmetic | 210,629 | 24 | 2008-10-16T23:25:31Z | 210,740 | 47 | 2008-10-17T00:27:26Z | [
"python",
"math"
] | Trying to answer to another post whose solution deals with IP addresses and netmasks, I got stuck with plain bitwise arithmetic.
Is there a standard way, in Python, to carry on bitwise AND, OR, XOR, NOT operations assuming that the inputs are "32 bit" (maybe negative) integers or longs, and that the result must be a long in the range [0, 2\*\*32]?
In other words, I need a working Python counterpart to the C bitwise operations between unsigned longs.
EDIT: the specific issue is this:
```
>>> m = 0xFFFFFF00 # netmask 255.255.255.0
>>> ~m
-4294967041L # wtf?! I want 255
``` | You can use [ctypes](http://docs.python.org/lib/module-ctypes.html) and its `c_uint32`:
```
>>> import ctypes
>>> m = 0xFFFFFF00
>>> ctypes.c_uint32(~m).value
255L
```
So what I did here was casting `~m` to a C 32-bit unsigned integer and retrieving its value back in Python format. |
Python: unsigned 32 bit bitwise arithmetic | 210,629 | 24 | 2008-10-16T23:25:31Z | 210,747 | 32 | 2008-10-17T00:28:43Z | [
"python",
"math"
] | Trying to answer to another post whose solution deals with IP addresses and netmasks, I got stuck with plain bitwise arithmetic.
Is there a standard way, in Python, to carry on bitwise AND, OR, XOR, NOT operations assuming that the inputs are "32 bit" (maybe negative) integers or longs, and that the result must be a long in the range [0, 2\*\*32]?
In other words, I need a working Python counterpart to the C bitwise operations between unsigned longs.
EDIT: the specific issue is this:
```
>>> m = 0xFFFFFF00 # netmask 255.255.255.0
>>> ~m
-4294967041L # wtf?! I want 255
``` | You can mask everything by `0xFFFFFFFF`:
```
>>> m = 0xFFFFFF00
>>> allf = 0xFFFFFFFF
>>> ~m & allf
255L
``` |
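The masking idiom generalizes into a tiny helper — a sketch (the `not32` name is mine) of C-style unsigned 32-bit NOT:

```python
MASK32 = 0xFFFFFFFF

def not32(x):
    """Bitwise NOT as C computes it on a 32-bit unsigned integer."""
    return ~x & MASK32

m = 0xFFFFFF00            # netmask 255.255.255.0
print(not32(m))           # 255
print(not32(0))           # 4294967295
```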
Using os.execvp in Python | 210,978 | 6 | 2008-10-17T03:04:38Z | 210,982 | 11 | 2008-10-17T03:07:53Z | [
"python",
"shell",
"exec"
] | I have a question about using `os.execvp` in Python. I have the following bit of code that's used to create a list of arguments:
```
args = [ "java"
       , classpath
       , "-Djava.library.path=" + lib_path()
       , ea
       , "-Xmx1000m"
       , "-server"
       , "code_swarm"
       , params
       ]
```
When I output a string using `" ".join(args)` and paste that into my shell prompt, the JVM launches fine, and everything works. Everything works if I use `os.system(" ".join(args))` in my Python script, too.
*But* the following bit of code does not work:
```
os.execvp("java", args)
```
I get the following error:
```
Unrecognized option: -classpath [and then the classpath I created, which looks okay]
Could not create the Java virtual machine.
```
So what gives? Why does copying/pasting into the shell or using `os.system()` work, but not `os.execvp()`? | If your "classpath" variable contains, for instance, "-classpath foo.jar", it will not work, since the JVM thinks the option name is "-classpath foo.jar". Split it into two arguments: [..., "-classpath", classpath, ...].
The other ways (copy and paste and system()) work because the shell splits the command line at the spaces (unless they are escaped or quoted). The command line is in fact passed down to the called program as an array (unlike on Windows), and the JVM is expecting to find an element with only "-classpath" followed by another element with the classpath.
You can see the difference for yourself by calling the following small Python script instead of the JVM:
```
#!/usr/bin/python
import sys
print sys.argv
``` |
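When in doubt about how the shell would have split a command line (and therefore what the argument array should look like), the standard-library `shlex.split` shows you:

```python
import shlex

cmd = 'java -classpath "lib/foo.jar" code_swarm'
print(shlex.split(cmd))  # ['java', '-classpath', 'lib/foo.jar', 'code_swarm']
```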
Python's __import__ doesn't work as expected | 211,100 | 40 | 2008-10-17T04:46:08Z | 211,101 | 51 | 2008-10-17T04:46:20Z | [
"python",
"python-import"
] | When using `__import__` with a dotted name, something like: `somepackage.somemodule`, the module returned isn't `somemodule`, whatever is returned seems to be mostly empty! what's going on here? | From the python docs on `__import__`:
> ```
> __import__( name[, globals[, locals[, fromlist[, level]]]])
> ```
>
> ...
>
> When the name variable is of the form
> package.module, normally, the
> top-level package (the name up till
> the first dot) is returned, not the
> module named by name. However, when a
> non-empty fromlist argument is given,
> the module named by name is returned.
> This is done for compatibility with
> the bytecode generated for the
> different kinds of import statement;
> when using "import spam.ham.eggs", the
> top-level package spam must be placed
> in the importing namespace, but when
> using "from spam.ham import eggs", the
> spam.ham subpackage must be used to
> find the eggs variable. As a
> workaround for this behavior, use
> getattr() to extract the desired
> components. For example, you could
> define the following helper:
>
> ```
> def my_import(name):
> mod = __import__(name)
> components = name.split('.')
> for comp in components[1:]:
> mod = getattr(mod, comp)
> return mod
> ```
**To paraphrase:**
When you ask for `somepackage.somemodule`, `__import__` returns the top-level `somepackage` package (i.e. its `__init__.py`, which is often empty), not the submodule.
It will return `somemodule` if you provide `fromlist` (a list of the variable names inside `somemodule` you want, which are not actually returned)
You can also, as I did, use the function they suggest.
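For instance, a quick check of the helper above against a standard-library package (a sketch; nothing here is specific to the question's packages):

```python
def my_import(name):
    # The helper from the docs quoted above: walk down the dotted
    # components with getattr to reach the actual submodule.
    mod = __import__(name)
    for comp in name.split('.')[1:]:
        mod = getattr(mod, comp)
    return mod

import os.path
# my_import returns the submodule itself, not the top-level package
assert my_import('os.path') is os.path
```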
Note: I asked this question fully intending to answer it myself. There was a big bug in my code, and having misdiagnosed it, it took me a long time to figure it out, so I figured I'd help the SO community out and post the gotcha I ran into here. |
Python's __import__ doesn't work as expected | 211,100 | 40 | 2008-10-17T04:46:08Z | 214,682 | 7 | 2008-10-18T06:37:19Z | [
"python",
"python-import"
] | When using `__import__` with a dotted name, something like: `somepackage.somemodule`, the module returned isn't `somemodule`, whatever is returned seems to be mostly empty! what's going on here? | There is something that works as you want it to: `twisted.python.reflect.namedAny`:
```
>>> from twisted.python.reflect import namedAny
>>> namedAny("operator.eq")
<built-in function eq>
>>> namedAny("pysqlite2.dbapi2.connect")
<built-in function connect>
>>> namedAny("os")
<module 'os' from '/usr/lib/python2.5/os.pyc'>
``` |
Python's __import__ doesn't work as expected | 211,100 | 40 | 2008-10-17T04:46:08Z | 5,138,775 | 32 | 2011-02-28T06:08:46Z | [
"python",
"python-import"
] | When using `__import__` with a dotted name, something like: `somepackage.somemodule`, the module returned isn't `somemodule`, whatever is returned seems to be mostly empty! what's going on here? | python 2.7 has importlib, dotted paths resolve as expected
```
import importlib
foo = importlib.import_module('a.dotted.path')
instance = foo.SomeClass()
``` |
Python's __import__ doesn't work as expected | 211,100 | 40 | 2008-10-17T04:46:08Z | 6,957,437 | 16 | 2011-08-05T13:54:49Z | [
"python",
"python-import"
] | When using `__import__` with a dotted name, something like: `somepackage.somemodule`, the module returned isn't `somemodule`, whatever is returned seems to be mostly empty! what's going on here? | There is a simpler solution, as explained in the documentation:
If you simply want to import a module (potentially within a package) by name, you can call \_\_import\_\_() and then look it up in sys.modules:
```
>>> import sys
>>> name = 'foo.bar.baz'
>>> __import__(name)
<module 'foo' from ...>
>>> baz = sys.modules[name]
>>> baz
<module 'foo.bar.baz' from ...>
``` |
How to build "Tagging" support using CouchDB? | 211,118 | 4 | 2008-10-17T04:57:22Z | 213,138 | 7 | 2008-10-17T17:48:37Z | [
"python",
"couchdb",
"tagging",
"document-oriented-db"
] | I'm using the following view function to iterate over all items in the database (in order to find a tag), but I think the performance is very poor if the dataset is large.
Any other approach?
```
def by_tag(tag):
return '''
function(doc) {
if (doc.tags.length > 0) {
for (var tag in doc.tags) {
if (doc.tags[tag] == "%s") {
emit(doc.published, doc)
}
}
}
};
''' % tag
``` | *Disclaimer: I didn't test this and don't know if it can perform better.*
Create a single permanent view:
```
function(doc) {
for (var tag in doc.tags) {
    emit([doc.tags[tag], doc.published], doc)
}
};
```
And query with
\_view/your\_view/all?startkey=['your\_tag\_here']&endkey=['your\_tag\_here', {}]
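From Python, building that query string (assuming a design document named `tags`; the keys must be JSON-encoded) might look roughly like:

```python
import json
from urllib.parse import urlencode

tag = "python"  # hypothetical tag value
params = urlencode({
    "startkey": json.dumps([tag]),        # ["python"]
    "endkey": json.dumps([tag, {}]),      # ["python", {}] sorts after all dates
})
url = "/db/_design/tags/_view/all?" + params
print(url)
```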
Resulting JSON structure will be slightly different but you will still get the publish date sorting. |
Python Inverse of a Matrix | 211,160 | 50 | 2008-10-17T05:30:49Z | 211,174 | 93 | 2008-10-17T05:41:42Z | [
"python",
"algorithm",
"matrix",
"linear-algebra",
"matrix-inverse"
] | How do I get the inverse of a matrix in python? I've implemented it myself, but it's pure python, and I suspect there are faster modules out there to do it. | You should have a look at [numpy](http://www.scipy.org/Tentative_NumPy_Tutorial) if you do matrix manipulation. This is a module mainly written in C, which will be much faster than programming in pure python. Here is an example of how to invert a matrix, and do other matrix manipulation.
```
from numpy import matrix
from numpy import linalg
A = matrix( [[1,2,3],[11,12,13],[21,22,23]]) # Creates a matrix.
x = matrix( [[1],[2],[3]] ) # Creates a matrix (like a column vector).
y = matrix( [[1,2,3]] ) # Creates a matrix (like a row vector).
print A.T # Transpose of A.
print A*x # Matrix multiplication of A and x.
print A.I # Inverse of A.
print linalg.solve(A, x) # Solve the linear equation system.
```
You can also have a look at the [array](http://www.python.org/doc/2.5.2/lib/module-array.html) module, which is a much more efficient implementation of lists when you have to deal with only one data type. |
Python Inverse of a Matrix | 211,160 | 50 | 2008-10-17T05:30:49Z | 215,523 | 44 | 2008-10-18T20:12:27Z | [
"python",
"algorithm",
"matrix",
"linear-algebra",
"matrix-inverse"
] | How do I get the inverse of a matrix in python? I've implemented it myself, but it's pure python, and I suspect there are faster modules out there to do it. | Make sure you really need to invert the matrix. This is often unnecessary and can be numerically unstable. When most people ask how to invert a matrix, they really want to know how to solve Ax = b where A is a matrix and x and b are vectors. It's more efficient and more accurate to use code that solves the equation Ax = b for x directly than to calculate A inverse then multiply the inverse by B. Even if you need to solve Ax = b for many b values, it's not a good idea to invert A. If you have to solve the system for multiple b values, save the Cholesky factorization of A, but don't invert it.
See [Don't invert that matrix](http://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/). |
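To make the idea concrete, here is a pure-Python sketch of solving Ax = b directly with Gaussian elimination; in practice a library routine such as numpy's solver does this far faster and more robustly:

```python
def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    # Work on an augmented copy [A | b] so the inputs are untouched.
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        # Pivot on the largest entry in this column for stability.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back-substitution.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

print(solve([[4.0, 2.0], [2.0, 3.0]], [1.0, 2.0]))  # → [-0.125, 0.75]
```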
Python Inverse of a Matrix | 211,160 | 50 | 2008-10-17T05:30:49Z | 3,128,931 | 9 | 2010-06-27T21:19:14Z | [
"python",
"algorithm",
"matrix",
"linear-algebra",
"matrix-inverse"
] | How do I get the inverse of a matrix in python? I've implemented it myself, but it's pure python, and I suspect there are faster modules out there to do it. | It is a pity that the chosen matrix, repeated here again, is either singular or badly conditioned:
```
A = matrix( [[1,2,3],[11,12,13],[21,22,23]])
```
By definition, the inverse of A when multiplied by the matrix A itself must give a unit matrix. The A chosen in the much praised explanation does not do that. In fact just looking at the inverse gives a clue that the inversion did not work correctly. Look at the magnitude of the individual terms - they are very, very big compared with the terms of the original A matrix...
It is remarkable how often humans, when picking an example matrix, manage to pick a singular one!
I did have a problem with the solution, so I looked into it further. On the ubuntu-kubuntu platform, the debian package numpy does not have the matrix and the linalg sub-packages, so in addition to importing numpy, scipy needs to be imported as well.
If the diagonal terms of A are multiplied by a large enough factor, say 2, the matrix will most likely cease to be singular or near singular. So
```
A = matrix( [[2,2,3],[11,24,13],[21,22,46]])
```
becomes neither singular nor nearly singular and the example gives meaningful results... When dealing with floating-point numbers one must be watchful for the effects of unavoidable round-off errors.
Thanks for your contribution,
OldAl. |
Using SQLite in a Python program | 211,501 | 13 | 2008-10-17T09:02:12Z | 211,534 | 12 | 2008-10-17T09:23:01Z | [
"python",
"exception",
"sqlite"
] | I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially "inline", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production.
What I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if/else statements, whichever is better).
I created a test script to see if my logic is correct but it's not working. When I create the try statement, it just creates a new database rather than checking if one already exists. The next time I run the script, I get an error that the table already exists, even if I tried catching the exception. (I haven't used try/except before but figured this is a good time to learn).
Are there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this. | AFAIK an SQLITE database is just a file.
To check if the database exists, check for file existence.
When you open a SQLITE database it will automatically create one if the backing file does not exist.
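A minimal sketch of that check (the filename and schema here are made up):

```python
import os
import sqlite3

db_path = "example.db"  # hypothetical filename

existed = os.path.exists(db_path)
conn = sqlite3.connect(db_path)   # creates the file if it wasn't there
if not existed:
    # First run: build the schema.
    conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
    conn.commit()
conn.close()
```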
If you try and open a file as a sqlite3 database that is NOT a database, you will get this:
"sqlite3.DatabaseError: file is encrypted or is not a database"
so check to see if the file exists and also make sure to try and catch the exception in case the file is not a sqlite3 database |
Using SQLite in a Python program | 211,501 | 13 | 2008-10-17T09:02:12Z | 211,573 | 7 | 2008-10-17T09:44:53Z | [
"python",
"exception",
"sqlite"
] | I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially "inline", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production.
What I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if/else statements, whichever is better).
I created a test script to see if my logic is correct but it's not working. When I create the try statement, it just creates a new database rather than checking if one already exists. The next time I run the script, I get an error that the table already exists, even if I tried catching the exception. (I haven't used try/except before but figured this is a good time to learn).
Are there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this. | SQLite automatically creates the database file the first time you try to use it. The SQL statements for creating tables can use `IF NOT EXISTS` to make the commands only take effect if the table has not been created. This way you don't need to check for the database's existence beforehand: SQLite can take care of that for you.
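A short sketch of the idiom (the table and columns are hypothetical):

```python
import sqlite3

conn = sqlite3.connect("app.db")  # the file is created automatically if missing
# IF NOT EXISTS makes the CREATE a no-op when the table already exists,
# so running this script repeatedly is harmless.
conn.execute("""
    CREATE TABLE IF NOT EXISTS users (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    )
""")
conn.commit()
conn.close()
```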
The main thing I would still be worried about is that executing `CREATE TABLE IF NOT EXISTS` for every web transaction (say) would be inefficient; you can avoid that by having the program keep an (in-memory) variable saying whether it has created the database today, so it runs the `CREATE TABLE` script once per run. This would still allow for you to delete the database and start over during debugging. |
Using SQLite in a Python program | 211,501 | 13 | 2008-10-17T09:02:12Z | 211,660 | 28 | 2008-10-17T10:30:02Z | [
"python",
"exception",
"sqlite"
] | I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially "inline", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production.
What I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if/else statements, whichever is better).
I created a test script to see if my logic is correct but it's not working. When I create the try statement, it just creates a new database rather than checking if one already exists. The next time I run the script, I get an error that the table already exists, even if I tried catching the exception. (I haven't used try/except before but figured this is a good time to learn).
Are there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this. | Don't make this more complex than it needs to be. The big, independent databases have complex setup and configuration requirements. SQLite is just a file you access with SQL, it's much simpler.
Do the following.
1. Add a table to your database for "Components" or "Versions" or "Configuration" or "Release" or something administrative like that.
```
CREATE TABLE REVISION(
    RELEASE_NUMBER CHAR(20)
);
```
2. In your application, connect to your database normally.
3. Execute a simple query against the revision table. Here's what can happen.
* The query fails to execute: your database doesn't exist, so execute a series of CREATE statements to build it.
* The query succeeds but returns no rows or the release number is lower than expected: your database exists, but is out of date. You need to migrate from that release to the current release. Hopefully, you have a sequence of DROP, CREATE and ALTER statements to do this.
* The query succeeds, and the release number is the expected value. Do nothing more, your database is configured correctly. |
What is an easy way to create a trivial one-off Python object? | 211,695 | 9 | 2008-10-17T10:47:49Z | 211,774 | 14 | 2008-10-17T11:17:06Z | [
"python"
] | I would like to create a trivial one-off Python object to hold some command-line options. I would like to do something like this:
```
options = ??????
options.VERBOSE = True
options.IGNORE_WARNINGS = False
# Then, elsewhere in the code...
if options.VERBOSE:
...
```
Of course I could use a dictionary, but `options.VERBOSE` is more readable and easier to type than `options['VERBOSE']`.
I *thought* that I should be able to do
```
options = object()
```
, since `object` is the base type of all class objects and therefore should be something like a class with no attributes. But it doesn't work, because an object created using `object()` doesn't have a `__dict__` member, and so one cannot add attributes to it:
```
options.VERBOSE = True
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'object' object has no attribute 'VERBOSE'
```
What is the simplest "pythonic" way to create an object that can be used this way, preferably without having to create an extra helper class? | The [collections module](http://docs.python.org/library/collections.html) has grown a *namedtuple* function in 2.6:
```
import collections
opt=collections.namedtuple('options','VERBOSE IGNORE_WARNINGS')
myoptions=opt(True, False)
>>> myoptions
options(VERBOSE=True, IGNORE_WARNINGS=False)
>>> myoptions.VERBOSE
True
```
A *namedtuple* is immutable, so you can only assign field values when you create it.
In earlier *Python* versions, you can create an empty class:
```
class options(object):
pass
myoptions=options()
myoptions.VERBOSE=True
myoptions.IGNORE_WARNINGS=False
>>> myoptions.IGNORE_WARNINGS,myoptions.VERBOSE
(False, True)
``` |
What is an easy way to create a trivial one-off Python object? | 211,695 | 9 | 2008-10-17T10:47:49Z | 212,299 | 8 | 2008-10-17T14:10:03Z | [
"python"
] | I would like to create a trivial one-off Python object to hold some command-line options. I would like to do something like this:
```
options = ??????
options.VERBOSE = True
options.IGNORE_WARNINGS = False
# Then, elsewhere in the code...
if options.VERBOSE:
...
```
Of course I could use a dictionary, but `options.VERBOSE` is more readable and easier to type than `options['VERBOSE']`.
I *thought* that I should be able to do
```
options = object()
```
, since `object` is the base type of all class objects and therefore should be something like a class with no attributes. But it doesn't work, because an object created using `object()` doesn't have a `__dict__` member, and so one cannot add attributes to it:
```
options.VERBOSE = True
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'object' object has no attribute 'VERBOSE'
```
What is the simplest "pythonic" way to create an object that can be used this way, preferably without having to create an extra helper class? | Given your requirements, I'd say the custom class is your best bet:
```
class options(object):
VERBOSE = True
IGNORE_WARNINGS = True
if options.VERBOSE:
# ...
```
To be complete, another approach would be using a separate module, i.e. `options.py` to encapsulate your option defaults.
`options.py`:
```
VERBOSE = True
IGNORE_WARNINGS = True
```
Then, in `main.py`:
```
import options
if options.VERBOSE:
# ...
```
This has the feature of removing some clutter from your script. The default values are easy to find and change, as they are cordoned off in their own module. If later your application has grown, you can easily access the options from other modules.
This is a pattern that I frequently use, and would heartily recommend if you don't mind your application growing larger than a single module. Or, start with a custom class, and expand to a module later if your app grows to multiple modules. |
What is an easy way to create a trivial one-off Python object? | 211,695 | 9 | 2008-10-17T10:47:49Z | 212,959 | 7 | 2008-10-17T16:56:19Z | [
"python"
] | I would like to create a trivial one-off Python object to hold some command-line options. I would like to do something like this:
```
options = ??????
options.VERBOSE = True
options.IGNORE_WARNINGS = False
# Then, elsewhere in the code...
if options.VERBOSE:
...
```
Of course I could use a dictionary, but `options.VERBOSE` is more readable and easier to type than `options['VERBOSE']`.
I *thought* that I should be able to do
```
options = object()
```
, since `object` is the base type of all class objects and therefore should be something like a class with no attributes. But it doesn't work, because an object created using `object()` doesn't have a `__dict__` member, and so one cannot add attributes to it:
```
options.VERBOSE = True
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'object' object has no attribute 'VERBOSE'
```
What is the simplest "pythonic" way to create an object that can be used this way, preferably without having to create an extra helper class? | Why not just use [optparse](http://docs.python.org/library/optparse.html#module-optparse):
```
from optparse import OptionParser
[...]
parser = OptionParser()
parser.add_option("-f", "--file", dest="filename",
help="write report to FILE", metavar="FILE")
parser.add_option("-q", "--quiet",
action="store_false", dest="verbose", default=True,
help="don't print status messages to stdout")
(options, args) = parser.parse_args()
file = options.filename
if not options.verbose:  # the --quiet flag stores verbose=False
[...]
``` |
Binary search (bisection) in Python | 212,358 | 129 | 2008-10-17T14:23:17Z | 212,413 | 51 | 2008-10-17T14:36:49Z | [
"python",
"binary-search",
"bisection"
] | Is there a library function that performs binary search on a list/tuple and return the position of the item if found and 'False' (-1, None, etc.) if not?
I found the functions bisect\_left/right in the [bisect module](http://docs.python.org/library/bisect.html), but they still return a position even if the item is not in the list. That's perfectly fine for their intended usage, but I just want to know if an item is in the list or not (don't want to insert anything).
I thought of using `bisect_left` and then checking if the item at that position is equal to what I'm searching, but that seems cumbersome (and I also need to do bounds checking if the number can be larger than the largest number in my list). If there is a nicer method I'd like to know about it.
**Edit** To clarify what I need this for: I'm aware that a dictionary would be very well suited for this, but I'm trying to keep the memory consumption as low as possible. My intended usage would be a sort of double-way look-up table. I have in the table a list of values and I need to be able to access the values based on their index. And also I want to be able to find the index of a particular value or None if the value is not in the list.
Using a dictionary for this would be the fastest way, but would (approximately) double the memory requirements.
I was asking this question thinking that I may have overlooked something in the Python libraries. It seems I'll have to write my own code, as Moe suggested. | Why not look at the code for bisect\_left/right and adapt it to suit your purpose.
like this:
```
def binary_search(a, x, lo=0, hi=None):
if hi is None:
hi = len(a)
while lo < hi:
mid = (lo+hi)//2
midval = a[mid]
if midval < x:
lo = mid+1
elif midval > x:
hi = mid
else:
return mid
return -1
``` |
Binary search (bisection) in Python | 212,358 | 129 | 2008-10-17T14:23:17Z | 212,971 | 32 | 2008-10-17T16:59:57Z | [
"python",
"binary-search",
"bisection"
] | Is there a library function that performs binary search on a list/tuple and return the position of the item if found and 'False' (-1, None, etc.) if not?
I found the functions bisect\_left/right in the [bisect module](http://docs.python.org/library/bisect.html), but they still return a position even if the item is not in the list. That's perfectly fine for their intended usage, but I just want to know if an item is in the list or not (don't want to insert anything).
I thought of using `bisect_left` and then checking if the item at that position is equal to what I'm searching, but that seems cumbersome (and I also need to do bounds checking if the number can be larger than the largest number in my list). If there is a nicer method I'd like to know about it.
**Edit** To clarify what I need this for: I'm aware that a dictionary would be very well suited for this, but I'm trying to keep the memory consumption as low as possible. My intended usage would be a sort of double-way look-up table. I have in the table a list of values and I need to be able to access the values based on their index. And also I want to be able to find the index of a particular value or None if the value is not in the list.
Using a dictionary for this would be the fastest way, but would (approximately) double the memory requirements.
I was asking this question thinking that I may have overlooked something in the Python libraries. It seems I'll have to write my own code, as Moe suggested. | This is a little off-topic (since Moe's answer seems complete to the OP's question), but it might be worth looking at the complexity for your whole procedure from end to end. If you're storing things in a sorted list (which is where a binary search would help), and then just checking for existence, you're incurring (worst-case, unless specified):
**Sorted Lists**
* O(n log n) to initially create the list (if the data is unsorted; O(n) if it's already sorted)
* O(log n) lookups (this is the binary search part)
* O(n) insert / delete (might be O(1) or O(log n) average case, depending on your pattern)
Whereas with a [`set()`](http://docs.python.org/library/stdtypes.html#set), you're incurring
* O(n) to create
* O(1) lookup
* O(1) insert / delete
The things a sorted list really gets you are "next", "previous", and "ranges" (including inserting or deleting ranges), which are O(1) or O(|range|), given a starting index. If you aren't using those sorts of operations often, then storing as sets, and sorting for display, might be a better deal overall. [`set()`](http://docs.python.org/library/stdtypes.html#set) incurs very little additional overhead in python. |
Binary search (bisection) in Python | 212,358 | 129 | 2008-10-17T14:23:17Z | 530,397 | 10 | 2009-02-09T22:41:14Z | [
"python",
"binary-search",
"bisection"
] | Is there a library function that performs binary search on a list/tuple and return the position of the item if found and 'False' (-1, None, etc.) if not?
I found the functions bisect\_left/right in the [bisect module](http://docs.python.org/library/bisect.html), but they still return a position even if the item is not in the list. That's perfectly fine for their intended usage, but I just want to know if an item is in the list or not (don't want to insert anything).
I thought of using `bisect_left` and then checking if the item at that position is equal to what I'm searching, but that seems cumbersome (and I also need to do bounds checking if the number can be larger than the largest number in my list). If there is a nicer method I'd like to know about it.
**Edit** To clarify what I need this for: I'm aware that a dictionary would be very well suited for this, but I'm trying to keep the memory consumption as low as possible. My intended usage would be a sort of double-way look-up table. I have in the table a list of values and I need to be able to access the values based on their index. And also I want to be able to find the index of a particular value or None if the value is not in the list.
Using a dictionary for this would be the fastest way, but would (approximately) double the memory requirements.
I was asking this question thinking that I may have overlooked something in the Python libraries. It seems I'll have to write my own code, as Moe suggested. | Simplest is to use [bisect](http://docs.python.org/library/bisect.html) and check one position back to see if the item is there:
```
from bisect import bisect

def binary_search(a, x, lo=0, hi=None):
    if hi is None:
        hi = len(a)  # hi=-1 would be taken as an index, so default to len(a)
    i = bisect(a, x, lo, hi)
    if i == 0:
        return -1
    elif a[i-1] == x:
        return i-1
    else:
        return -1
``` |
Binary search (bisection) in Python | 212,358 | 129 | 2008-10-17T14:23:17Z | 2,233,940 | 170 | 2010-02-10T02:05:27Z | [
"python",
"binary-search",
"bisection"
] | Is there a library function that performs binary search on a list/tuple and return the position of the item if found and 'False' (-1, None, etc.) if not?
I found the functions bisect\_left/right in the [bisect module](http://docs.python.org/library/bisect.html), but they still return a position even if the item is not in the list. That's perfectly fine for their intended usage, but I just want to know if an item is in the list or not (don't want to insert anything).
I thought of using `bisect_left` and then checking if the item at that position is equal to what I'm searching, but that seems cumbersome (and I also need to do bounds checking if the number can be larger than the largest number in my list). If there is a nicer method I'd like to know about it.
**Edit** To clarify what I need this for: I'm aware that a dictionary would be very well suited for this, but I'm trying to keep the memory consumption as low as possible. My intended usage would be a sort of double-way look-up table. I have in the table a list of values and I need to be able to access the values based on their index. And also I want to be able to find the index of a particular value or None if the value is not in the list.
Using a dictionary for this would be the fastest way, but would (approximately) double the memory requirements.
I was asking this question thinking that I may have overlooked something in the Python libraries. It seems I'll have to write my own code, as Moe suggested. | ```
from bisect import bisect_left
def binary_search(a, x, lo=0, hi=None): # can't use a to specify default for hi
hi = hi if hi is not None else len(a) # hi defaults to len(a)
pos = bisect_left(a,x,lo,hi) # find insertion position
return (pos if pos != hi and a[pos] == x else -1) # don't walk off the end
``` |
Binary search (bisection) in Python | 212,358 | 129 | 2008-10-17T14:23:17Z | 5,763,198 | 10 | 2011-04-23T08:36:45Z | [
"python",
"binary-search",
"bisection"
] | Is there a library function that performs binary search on a list/tuple and return the position of the item if found and 'False' (-1, None, etc.) if not?
I found the functions bisect\_left/right in the [bisect module](http://docs.python.org/library/bisect.html), but they still return a position even if the item is not in the list. That's perfectly fine for their intended usage, but I just want to know if an item is in the list or not (don't want to insert anything).
I thought of using `bisect_left` and then checking if the item at that position is equal to what I'm searching, but that seems cumbersome (and I also need to do bounds checking if the number can be larger than the largest number in my list). If there is a nicer method I'd like to know about it.
**Edit** To clarify what I need this for: I'm aware that a dictionary would be very well suited for this, but I'm trying to keep the memory consumption as low as possible. My intended usage would be a sort of double-way look-up table. I have in the table a list of values and I need to be able to access the values based on their index. And also I want to be able to find the index of a particular value or None if the value is not in the list.
Using a dictionary for this would be the fastest way, but would (approximately) double the memory requirements.
I was asking this question thinking that I may have overlooked something in the Python libraries. It seems I'll have to write my own code, as Moe suggested. | It might be worth mentioning that the bisect docs now provide searching examples:
<http://docs.python.org/library/bisect.html#searching-sorted-lists>
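For example, the `index` recipe from that page:

```python
from bisect import bisect_left

def index(a, x):
    """Locate the leftmost value exactly equal to x; raise ValueError if absent."""
    i = bisect_left(a, x)
    if i != len(a) and a[i] == x:
        return i
    raise ValueError

print(index([1, 3, 5, 7], 5))  # 2
```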
(Raising ValueError instead of returning -1 or None is more pythonic; list.index() does it, for example. But of course you can adapt the examples to your needs.) |
Using django-rest-interface | 212,941 | 20 | 2008-10-17T16:53:23Z | 996,423 | 12 | 2009-06-15T14:28:37Z | [
"python",
"django",
"rest"
] | I have a django application that I'd like to add some rest interfaces to. I've seen <http://code.google.com/p/django-rest-interface/> but it seems to be pretty simplistic. For instance it doesn't seem to have a way of enforcing security. How would I go about limiting what people can view and manipulate through the rest interface? Normally I'd put this kind of logic in my views. Is this the right place or should I be moving some more logic down into the model? Alternatively is there a better library out there or do I need to roll my own? | I would look into using django-piston <http://bitbucket.org/jespern/django-piston/wiki/Home> application if security is your main concern.
I have used django-rest-interface in the past, its reliable and though simple can be quite powerful, however django-piston seems more flexible going forward. |
python threadsafe object cache | 213,455 | 9 | 2008-10-17T19:05:20Z | 213,768 | 8 | 2008-10-17T20:39:48Z | [
"python",
"multithreading",
"caching"
] | I have implemented a python webserver. Each http request spawns a new thread.
I have a requirement of caching objects in memory and since it's a webserver, I want the cache to be thread safe. Is there a standard implementation of a thread-safe object cache in Python? I found the following
<http://freshmeat.net/projects/lrucache/>
This does not look to be thread safe. Can anybody point me to a good implementation of thread safe cache in python?
Thanks! | Thread per request is often a bad idea. If your server experiences huge spikes in load it will take the box to its knees. Consider using a thread pool that can grow to a limited size during peak usage and shrink to a smaller size when load is light. |
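A bounded worker pool along those lines can be sketched with the standard library's `concurrent.futures` (Python 3.2+). Note that `ThreadPoolExecutor` caps the thread count but does not shrink when idle, so this is an approximation of the grow/shrink pool described above; the handler is a placeholder:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    # Placeholder for real request-handling work
    return request_id * 2

# A fixed upper bound on worker threads protects the box under load spikes,
# unlike unbounded thread-per-request
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(handle_request, range(5)))
```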
python threadsafe object cache | 213,455 | 9 | 2008-10-17T19:05:20Z | 215,329 | 7 | 2008-10-18T17:48:05Z | [
"python",
"multithreading",
"caching"
] | I have implemented a python webserver. Each http request spawns a new thread.
I have a requirement of caching objects in memory, and since it's a webserver, I want the cache to be thread safe. Is there a standard implementation of a thread safe object cache in python? I found the following
<http://freshmeat.net/projects/lrucache/>
This does not look to be thread safe. Can anybody point me to a good implementation of thread safe cache in python?
Thanks! | Well a lot of operations in Python are thread-safe by default, so a standard dictionary should be ok (at least in certain respects). This is mostly due to the GIL, which will help avoid some of the more serious threading issues.
There's a list here: <http://coreygoldberg.blogspot.com/2008/09/python-thread-synchronization-and.html> that might be useful.
Though the atomic nature of those operations just means that you won't have an entirely inconsistent state if you have two threads accessing a dictionary at the same time. So you wouldn't have a corrupted value. However you would (as with most multi-threaded programming) not be able to rely on the specific order of those atomic operations.
So to cut a long story short...
If you have fairly simple requirements and aren't too bothered about the ordering of what gets written into the cache, then you can use a dictionary and know that you'll always get a consistent/not-corrupted value (it just might be out of date).
If you want to ensure that things are a bit more consistent with regard to reading and writing then you might want to look at Django's local memory cache:
<http://code.djangoproject.com/browser/django/trunk/django/core/cache/backends/locmem.py>
Which uses a read/write lock for locking. |
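If you need stronger guarantees than the GIL's per-operation atomicity, a minimal lock-protected cache is easy to write yourself. A sketch (the class name and API are illustrative, not from any library, and there is no LRU eviction):

```python
import threading

class ThreadSafeCache:
    """Minimal cache where every access holds a single lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def get(self, key, default=None):
        with self._lock:
            return self._data.get(key, default)

    def set(self, key, value):
        with self._lock:
            self._data[key] = value

cache = ThreadSafeCache()
cache.set("answer", 42)
```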
A good multithreaded python webserver? | 213,483 | 13 | 2008-10-17T19:12:38Z | 213,563 | 16 | 2008-10-17T19:33:25Z | [
"python",
"apache",
"webserver",
"mod-python"
] | I am looking for a python webserver which is multithreaded instead of being multi-process (as in case of mod\_python for apache). I want it to be multithreaded because I want to have an in memory object cache that will be used by various http threads. My webserver does a lot of expensive stuff and computes some large arrays which needs to be cached in memory for future use to avoid recomputing. This is not possible in a multi-process web server environment. Storing this information in memcache is also not a good idea as the arrays are large and storing them in memcache would lead to deserialization of data coming from memcache apart from the additional overhead of IPC.
I implemented a simple webserver using BaseHTTPServer; it gives good performance, but it gets stuck after a few hours. I need a more mature webserver. Is it possible to configure apache to use mod\_python under a thread model so that I can do some object caching? | [CherryPy](http://cherrypy.org/). Features, as listed from the website:
* A fast, HTTP/1.1-compliant, WSGI thread-pooled webserver. Typically, CherryPy itself takes only 1-2ms per page!
* Support for any other WSGI-enabled webserver or adapter, including Apache, IIS, lighttpd, mod\_python, FastCGI, SCGI, and mod\_wsgi
* Easy to run multiple HTTP servers (e.g. on multiple ports) at once
* A powerful configuration system for developers and deployers alike
* A flexible plugin system
* Built-in tools for caching, encoding, sessions, authorization, static content, and many more
* A native mod\_python adapter
* A complete test suite
* Swappable and customizable...everything.
* Built-in profiling, coverage, and testing support. |
A good multithreaded python webserver? | 213,483 | 13 | 2008-10-17T19:12:38Z | 213,572 | 7 | 2008-10-17T19:35:04Z | [
"python",
"apache",
"webserver",
"mod-python"
] | I am looking for a python webserver which is multithreaded instead of being multi-process (as in case of mod\_python for apache). I want it to be multithreaded because I want to have an in memory object cache that will be used by various http threads. My webserver does a lot of expensive stuff and computes some large arrays which needs to be cached in memory for future use to avoid recomputing. This is not possible in a multi-process web server environment. Storing this information in memcache is also not a good idea as the arrays are large and storing them in memcache would lead to deserialization of data coming from memcache apart from the additional overhead of IPC.
I implemented a simple webserver using BaseHTTPServer; it gives good performance, but it gets stuck after a few hours. I need a more mature webserver. Is it possible to configure apache to use mod\_python under a thread model so that I can do some object caching? | Consider reconsidering your design. Maintaining that much state in your webserver is probably a bad idea. Multi-process is a much better way to go for stability.
Is there another way to share state between separate processes? What about a service? Database? Index?
It seems unlikely that maintaining a huge array of data in memory and relying on a single multi-threaded process to serve all your requests is the best design or architecture for your app. |
Would Python make a good substitute for the Windows command-line/batch scripts? | 213,798 | 20 | 2008-10-17T20:48:26Z | 213,810 | 17 | 2008-10-17T20:50:57Z | [
"python",
"command-line",
"scripting"
] | I've got some experience with [Bash](http://en.wikipedia.org/wiki/Bash_%28Unix_shell%29), which I don't mind, but now that I'm doing a lot of Windows development I'm needing to do basic stuff/write basic scripts using
the Windows command-line language. For some reason said language really irritates me, so I was considering learning Python and using that instead.
Is Python suitable for such things? Moving files around, creating scripts to do things like unzipping a backup and restoring a SQL database, etc. | Python is well suited for these tasks, and I would guess much easier to develop in and debug than Windows batch files.
The question is, I think, how easy and painless it is to ensure that all the computers that you have to run these scripts on, have Python installed. |
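For the concrete tasks the question mentions, the standard library already covers most of the ground. A self-contained sketch (files and paths are created in a scratch directory purely for illustration):

```python
import os
import shutil
import tempfile
import zipfile

# Work in a scratch directory so the sketch is self-contained
work = tempfile.mkdtemp()
src = os.path.join(work, "report.txt")
with open(src, "w") as f:
    f.write("hello")

# Moving files around
dest_dir = os.path.join(work, "archive")
os.makedirs(dest_dir)
moved = shutil.move(src, os.path.join(dest_dir, "report.txt"))

# Creating and unzipping a backup
backup = os.path.join(work, "backup.zip")
with zipfile.ZipFile(backup, "w") as zf:
    zf.write(moved, arcname="report.txt")
restore = os.path.join(work, "restore")
with zipfile.ZipFile(backup) as zf:
    zf.extractall(restore)
restored_text = open(os.path.join(restore, "report.txt")).read()
```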
Would Python make a good substitute for the Windows command-line/batch scripts? | 213,798 | 20 | 2008-10-17T20:48:26Z | 213,820 | 7 | 2008-10-17T20:54:39Z | [
"python",
"command-line",
"scripting"
] | I've got some experience with [Bash](http://en.wikipedia.org/wiki/Bash_%28Unix_shell%29), which I don't mind, but now that I'm doing a lot of Windows development I'm needing to do basic stuff/write basic scripts using
the Windows command-line language. For some reason said language really irritates me, so I was considering learning Python and using that instead.
Is Python suitable for such things? Moving files around, creating scripts to do things like unzipping a backup and restoring a SQL database, etc. | Sure, python is a pretty good choice for those tasks (I'm sure many will recommend PowerShell instead).
Here is a fine introduction from that point of view:
<http://www.redhatmagazine.com/2008/02/07/python-for-bash-scripters-a-well-kept-secret/>
EDIT: About gnud's concern: <http://www.portablepython.com/> |
Would Python make a good substitute for the Windows command-line/batch scripts? | 213,798 | 20 | 2008-10-17T20:48:26Z | 216,285 | 9 | 2008-10-19T11:14:49Z | [
"python",
"command-line",
"scripting"
] | I've got some experience with [Bash](http://en.wikipedia.org/wiki/Bash_%28Unix_shell%29), which I don't mind, but now that I'm doing a lot of Windows development I'm needing to do basic stuff/write basic scripts using
the Windows command-line language. For some reason said language really irritates me, so I was considering learning Python and using that instead.
Is Python suitable for such things? Moving files around, creating scripts to do things like unzipping a backup and restoring a SQL database, etc. | ## Summary
**Windows**: no need to think, use Python.
**Unix**: quick or run-it-once scripts are for Bash, serious and/or long life time scripts are for Python.
## The big talk
In a Windows environment, Python is definitely the best choice since [cmd](http://en.wikipedia.org/wiki/Command_Prompt) is crappy and PowerShell has not really settled yet. What's more, Python can run on several platforms, so it's a better investment. Finally, Python has a huge set of libraries, so you will almost never hit the "god-I-can't-do-that" wall. This is not true for cmd and PowerShell.
In a Linux environment, this is a bit different. A lot of one liners are shorter, faster, more efficient and often more readable in pure Bash. But if you know your quick and dirty script is going to stay around for a while or will need to be improved, go for Python since it's far easier to maintain and extend and [you will be able to do most of the task you can do with GNU tools with the standard library](http://www.google.fr/url?sa=t&source=web&ct=res&cd=2&url=http%3A%2F%2Fwww.dabeaz.com%2Fgenerators%2FGenerators.pdf&ei=yRn7SJDYCIbS0QXFvq2JDw&usg=AFQjCNE6b1w4feELQFUppm2-GFCzYyd2UQ&sig2=nUjS8CM2Pd77W_HXUq4tRw). And if you can't, you can still call the command-line from a Python script.
And of course you can call Python from the shell using -c option:
```
python -c "for line in open('/etc/fstab') : print line"
```
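Calling an existing command-line tool from a script is just as simple with the standard `subprocess` module (a sketch; Python 3.7+ for `capture_output`, and the interpreter itself stands in here for an arbitrary external command):

```python
import subprocess
import sys

# Use the running interpreter as a portable stand-in for any external tool
result = subprocess.run(
    [sys.executable, "-c", "print('hello from a subprocess')"],
    capture_output=True, text=True, check=True,
)
output = result.stdout.strip()
```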
Some more literature about Python used for system administration tasks:
* [The IBM lab point of view](http://www.ibm.com/developerworks/aix/library/au-python/).
* [A nice example to compare bash and python to script report](http://www.unixreview.com/documents/s=9083/sam0401d/).
* [The basics](http://www.samag.com/documents/s=8964/sam0312a/0312a.htm).
* [The must-have book](http://rads.stackoverflow.com/amzn/click/0596515820). |
How to create a numpy record array from C | 214,549 | 7 | 2008-10-18T04:03:35Z | 215,090 | 8 | 2008-10-18T14:19:14Z | [
"python",
"c",
"numpy"
] | On the Python side, I can create new numpy record arrays as follows:
```
numpy.zeros((3,), dtype=[('a', 'i4'), ('b', 'U5')])
```
How do I do the same from a C program? I suppose I have to call `PyArray_SimpleNewFromDescr(nd, dims, descr)`, but how do I construct a `PyArray_Descr` that is appropriate for passing as the third argument to `PyArray_SimpleNewFromDescr`? | Use `PyArray_DescrConverter`. Here's an example:
```
#include <Python.h>
#include <stdio.h>
#include <numpy/arrayobject.h>
int main(int argc, char *argv[])
{
npy_intp dims[] = { 2, 3 };
PyObject *op, *array;
PyArray_Descr *descr;
Py_Initialize();
import_array();
op = Py_BuildValue("[(s, s), (s, s)]", "a", "i4", "b", "U5");
PyArray_DescrConverter(op, &descr);
Py_DECREF(op);
array = PyArray_SimpleNewFromDescr(2, dims, descr);
PyObject_Print(array, stdout, 0);
printf("\n");
Py_DECREF(array);
return 0;
}
```
Thanks to [Adam Rosenfield](http://stackoverflow.com/users/9530/adam-rosenfield) for pointing to Section 13.3.10 of the [Guide to NumPy](http://numpy.scipy.org/numpybook.pdf). |
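For comparison, the pure-Python equivalent of what the C snippet builds (assuming NumPy is installed) is just:

```python
import numpy as np

# Same shape and record dtype as the C example above: a 2x3 record array
arr = np.zeros((2, 3), dtype=[("a", "i4"), ("b", "U5")])
```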
How do you convert YYYY-MM-DDTHH:mm:ss.000Z time format to MM/DD/YYYY time format in Python? | 214,777 | 18 | 2008-10-18T08:53:02Z | 214,786 | 14 | 2008-10-18T09:01:59Z | [
"python",
"datetime"
] | For example, I'm trying to convert 2008-09-26T01:51:42.000Z to 09/26/2008. What's the simplest way of accomplishing this? | ```
>>> import time
>>> timestamp = "2008-09-26T01:51:42.000Z"
>>> ts = time.strptime(timestamp[:19], "%Y-%m-%dT%H:%M:%S")
>>> time.strftime("%m/%d/%Y", ts)
'09/26/2008'
```
See the documentation of the Python [`time`](http://python.org/doc/2.5/lib/module-time.html) module for more information. |
How do you convert YYYY-MM-DDTHH:mm:ss.000Z time format to MM/DD/YYYY time format in Python? | 214,777 | 18 | 2008-10-18T08:53:02Z | 215,313 | 24 | 2008-10-18T17:33:30Z | [
"python",
"datetime"
] | For example, I'm trying to convert 2008-09-26T01:51:42.000Z to 09/26/2008. What's the simplest way of accomplishing this? | The easiest way is to use [dateutil](http://labix.org/python-dateutil).parser.parse() to parse the date string into a timezone aware datetime object, then use strftime() to get the format you want.
```
import datetime, dateutil.parser
d = dateutil.parser.parse('2008-09-26T01:51:42.000Z')
print d.strftime('%m/%d/%Y') #==> '09/26/2008'
``` |
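If you'd rather avoid the dateutil dependency, the standard library alone can do this on Python 3.7+ once the trailing `Z` is rewritten as an explicit offset (a sketch; `datetime.fromisoformat` only accepts a literal `Z` from Python 3.11 onward):

```python
from datetime import datetime

timestamp = "2008-09-26T01:51:42.000Z"
# Replace the 'Z' suffix with an explicit UTC offset for fromisoformat
d = datetime.fromisoformat(timestamp.replace("Z", "+00:00"))
formatted = d.strftime("%m/%d/%Y")
```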
python module dlls | 214,852 | 11 | 2008-10-18T10:17:58Z | 214,868 | 14 | 2008-10-18T10:33:08Z | [
"python",
"module"
] | Is there a way to make a python module load a dll in my application directory rather than the version that came with the python installation, without making changes to the python installation (which would then require I made an installer, and be careful I didn't break other apps for people by overwriting python modules and changing dll versions globally...)?
Specifically I would like python to use my version of the sqlite3.dll, rather than the version that came with python (which is older and doesn't appear to have the fts3 module). | If you're talking about Python module DLLs, then simply modifying `sys.path` should be fine. However, if you're talking about DLLs *linked* against those DLLs; i.e. a `libfoo.dll` which a `foo.pyd` depends on, then you need to modify your PATH environment variable. I wrote about [doing this for PyGTK a while ago](http://glyf.livejournal.com/7878.html), but in your case I think it should be as simple as:
```
import os
os.environ['PATH'] = 'my-app-dir' + ';' + os.environ['PATH']
```
That will insert `my-app-dir` at the head of your Windows path, which I believe also controls the load-order for DLLs.
Keep in mind that you will need to do this before loading the DLL in question, i.e., before importing anything interesting.
sqlite3 may be a bit of a special case, though, since it is distributed with Python; it's obviously kind of tricky to test this quickly, so I haven't checked `sqlite3.dll` specifically. |
Can you add new statements to Python's syntax? | 214,881 | 72 | 2008-10-18T10:47:21Z | 214,910 | 12 | 2008-10-18T11:16:45Z | [
"python",
"syntax"
] | Can you add new statements (like `print`, `raise`, `with`) to Python's syntax?
Say, to allow..
```
mystatement "Something"
```
Or,
```
new_if True:
print "example"
```
Not so much if you *should*, but rather if it's possible (short of modifying the python interpreters code) | Short of changing and recompiling the source code (which *is* possible with open source), changing the base language is not really possible.
Even if you do recompile the source, it wouldn't be python, just your hacked-up changed version which you need to be very careful not to introduce bugs into.
However, I'm not sure why you'd want to. Python's object-oriented features make it quite simple to achieve similar results with the language as it stands. |
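For example, a context manager already gives you a custom block-like construct without touching the grammar (a sketch; the names are illustrative):

```python
from contextlib import contextmanager

events = []

@contextmanager
def logged(label):
    # Runs before the block body...
    events.append("enter " + label)
    yield
    # ...and after it, much like a custom compound statement
    events.append("exit " + label)

with logged("section"):
    events.append("body")
```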
Can you add new statements to Python's syntax? | 214,881 | 72 | 2008-10-18T10:47:21Z | 215,697 | 44 | 2008-10-18T23:15:04Z | [
"python",
"syntax"
] | Can you add new statements (like `print`, `raise`, `with`) to Python's syntax?
Say, to allow..
```
mystatement "Something"
```
Or,
```
new_if True:
print "example"
```
Not so much if you *should*, but rather if it's possible (short of modifying the python interpreters code) | One way to do things like this is to preprocess the source and modify it, translating your added statement to python. There are various problems this approach will bring, and I wouldn't recommend it for general usage, but for experimentation with language, or specific-purpose metaprogramming, it can occassionally be useful.
For instance, let's say we want to introduce a "myprint" statement that, instead of printing to the screen, logs to a specific file. i.e.:
```
myprint "This gets logged to file"
```
would be equivalent to
```
print >>open('/tmp/logfile.txt','a'), "This gets logged to file"
```
There are various options as to how to do the replacing, from regex substitution to generating an AST, to writing your own parser, depending on how close your syntax matches existing python. A good intermediate approach is to use the `tokenize` module. This should allow you to add new keywords, control structures etc. while interpreting the source similarly to the python interpreter, thus avoiding the breakage crude regex solutions would cause. For the above "myprint", you could write the following transformation code:
```
import tokenize
LOGFILE = '/tmp/log.txt'
def translate(readline):
for type, name,_,_,_ in tokenize.generate_tokens(readline):
if type ==tokenize.NAME and name =='myprint':
yield tokenize.NAME, 'print'
yield tokenize.OP, '>>'
yield tokenize.NAME, "open"
yield tokenize.OP, "("
yield tokenize.STRING, repr(LOGFILE)
yield tokenize.OP, ","
yield tokenize.STRING, "'a'"
yield tokenize.OP, ")"
yield tokenize.OP, ","
else:
yield type,name
```
(This does make myprint effectively a keyword, so using it as a variable elsewhere will likely cause problems)
The problem then is how to use it so that your code is usable from python. One way would just be to write your own import function, and use it to load code written in your custom language. ie:
```
import new
def myimport(filename):
mod = new.module(filename)
f=open(filename)
data = tokenize.untokenize(translate(f.readline))
exec data in mod.__dict__
return mod
```
This requires you handle your customised code differently from normal python modules however. ie "`some_mod = myimport("some_mod.py")`" rather than "`import some_mod`"
Another fairly neat (albeit hacky) solution is to create a custom encoding (See [PEP 263](http://www.python.org/dev/peps/pep-0263/)) as [this](http://code.activestate.com/recipes/546539/) recipe demonstrates. You could implement this as:
```
import codecs, cStringIO, encodings
from encodings import utf_8
class StreamReader(utf_8.StreamReader):
def __init__(self, *args, **kwargs):
codecs.StreamReader.__init__(self, *args, **kwargs)
data = tokenize.untokenize(translate(self.stream.readline))
self.stream = cStringIO.StringIO(data)
def search_function(s):
if s!='mylang': return None
utf8=encodings.search_function('utf8') # Assume utf8 encoding
return codecs.CodecInfo(
name='mylang',
encode = utf8.encode,
decode = utf8.decode,
incrementalencoder=utf8.incrementalencoder,
incrementaldecoder=utf8.incrementaldecoder,
streamreader=StreamReader,
streamwriter=utf8.streamwriter)
codecs.register(search_function)
```
Now after this code gets run (eg. you could place it in your .pythonrc or site.py) any code starting with the comment "# coding: mylang" will automatically be translated through the above preprocessing step. eg.
```
# coding: mylang
myprint "this gets logged to file"
for i in range(10):
myprint "so does this : ", i, "times"
myprint ("works fine" "with arbitrary" + " syntax"
"and line continuations")
```
Caveats:
There are problems to the preprocessor approach, as you'll probably be familiar with if you've worked with the C preprocessor. The main one is debugging. All python sees is the preprocessed file which means that text printed in the stack trace etc will refer to that. If you've performed significant translation, this may be very different from your source text. The example above doesn't change line numbers etc, so won't be too different, but the more you change it, the harder it will be to figure out. |
Can you add new statements to Python's syntax? | 214,881 | 72 | 2008-10-18T10:47:21Z | 216,795 | 15 | 2008-10-19T18:52:00Z | [
"python",
"syntax"
] | Can you add new statements (like `print`, `raise`, `with`) to Python's syntax?
Say, to allow..
```
mystatement "Something"
```
Or,
```
new_if True:
print "example"
```
Not so much if you *should*, but rather if it's possible (short of modifying the python interpreters code) | Yes, to some extent it is possible. There is a [module](http://entrian.com/goto/) out there that uses `sys.settrace()` to implement `goto` and `comefrom` "keywords":
```
from goto import goto, label
for i in range(1, 10):
for j in range(1, 20):
print i, j
if j == 3:
goto .end # breaking out from nested loop
label .end
print "Finished"
``` |
Can you add new statements to Python's syntax? | 214,881 | 72 | 2008-10-18T10:47:21Z | 220,857 | 10 | 2008-10-21T05:26:58Z | [
"python",
"syntax"
] | Can you add new statements (like `print`, `raise`, `with`) to Python's syntax?
Say, to allow..
```
mystatement "Something"
```
Or,
```
new_if True:
print "example"
```
Not so much if you *should*, but rather if it's possible (short of modifying the python interpreters code) | General answer: you need to preprocess your source files.
More specific answer: install [EasyExtend](http://pypi.python.org/pypi/EasyExtend), and go through the following steps
i) Create a new langlet ( extension language )
```
import EasyExtend
EasyExtend.new_langlet("mystmts", prompt = "my> ", source_ext = "mypy")
```
Without additional specification a bunch of files shall be created under EasyExtend/langlets/mystmts/ .
ii) Open mystmts/parsedef/Grammar.ext and add following lines
```
small_stmt: (expr_stmt | print_stmt | del_stmt | pass_stmt | flow_stmt |
import_stmt | global_stmt | exec_stmt | assert_stmt | my_stmt )
my_stmt: 'mystatement' expr
```
This is sufficient to define the syntax of your new statement. The small\_stmt non-terminal is part of the Python grammar and it's the place where the new statement is hooked in. The parser will now recognize the new statement i.e. a source file containing it will be parsed. The compiler will reject it though because it still has to be transformed into valid Python.
iii) Now one has to add semantics of the statement. For this one has to edit
mystmts/langlet.py and add a my\_stmt node visitor.
```
def call_my_stmt(expression):
"defines behaviour for my_stmt"
print "my stmt called with", expression
class LangletTransformer(Transformer):
@transform
def my_stmt(self, node):
_expr = find_node(node, symbol.expr)
return any_stmt(CST_CallFunc("call_my_stmt", [_expr]))
__publish__ = ["call_my_stmt"]
```
iv) cd to langlets/mystmts and type
```
python run_mystmts.py
```
Now a session shall be started and the newly defined statement can be used:
```
__________________________________________________________________________________
mystmts
On Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit (Intel)]
__________________________________________________________________________________
my> mystatement 40+2
my stmt called with 42
```
Quite a few steps to come to a trivial statement, right? There isn't an API yet that lets one define simple things without having to care about grammars. But EE is very reliable modulo some bugs. So it's just a matter of time that an API emerges that lets programmers define convenient stuff like infix operators or small statements using just convenient OO programming. For more complex things like embedding whole languages in Python by means of building a langlet there is no way of going around a full grammar approach. |
Can you add new statements to Python's syntax? | 214,881 | 72 | 2008-10-18T10:47:21Z | 4,572,994 | 7 | 2011-01-01T02:23:19Z | [
"python",
"syntax"
] | Can you add new statements (like `print`, `raise`, `with`) to Python's syntax?
Say, to allow..
```
mystatement "Something"
```
Or,
```
new_if True:
print "example"
```
Not so much if you *should*, but rather if it's possible (short of modifying the python interpreters code) | Here's a very simple but crappy way to add new statements, *in interpretive mode only*. I'm using it for little 1-letter commands for editing gene annotations using only sys.displayhook, but just so I could answer this question I added sys.excepthook for the syntax errors as well. The latter is really ugly, fetching the raw code from the readline buffer. The benefit is, it's trivially easy to add new statements this way.
```
jcomeau@intrepid:~/$ cat demo.py; ./demo.py
#!/usr/bin/python -i
'load everything needed under "package", such as package.common.normalize()'
import os, sys, readline, traceback
if __name__ == '__main__':
class t:
@staticmethod
def localfunction(*args):
print 'this is a test'
if args:
print 'ignoring %s' % repr(args)
def displayhook(whatever):
if hasattr(whatever, 'localfunction'):
return whatever.localfunction()
else:
print whatever
def excepthook(exctype, value, tb):
if exctype is SyntaxError:
index = readline.get_current_history_length()
item = readline.get_history_item(index)
command = item.split()
print 'command:', command
if len(command[0]) == 1:
try:
eval(command[0]).localfunction(*command[1:])
except:
traceback.print_exception(exctype, value, tb)
else:
traceback.print_exception(exctype, value, tb)
sys.displayhook = displayhook
sys.excepthook = excepthook
>>> t
this is a test
>>> t t
command: ['t', 't']
this is a test
ignoring ('t',)
>>> ^D
``` |
Can you add new statements to Python's syntax? | 214,881 | 72 | 2008-10-18T10:47:21Z | 9,108,164 | 77 | 2012-02-02T06:39:44Z | [
"python",
"syntax"
] | Can you add new statements (like `print`, `raise`, `with`) to Python's syntax?
Say, to allow..
```
mystatement "Something"
```
Or,
```
new_if True:
print "example"
```
Not so much if you *should*, but rather if it's possible (short of modifying the python interpreters code) | You may find this useful - [Python internals: adding a new statement to Python](http://eli.thegreenplace.net/2010/06/30/python-internals-adding-a-new-statement-to-python/), quoted here:
---
This article is an attempt to better understand how the front-end of Python works. Just reading documentation and source code may be a bit boring, so I'm taking a hands-on approach here: I'm going to add an `until` statement to Python.
All the coding for this article was done against the cutting-edge Py3k branch in the [Python Mercurial repository mirror](http://code.python.org/hg/branches/py3k/).
### The `until` statement
Some languages, like Ruby, have an `until` statement, which is the complement to `while` (`until num == 0` is equivalent to `while num != 0`). In Ruby, I can write:
```
num = 3
until num == 0 do
puts num
num -= 1
end
```
And it will print:
```
3
2
1
```
So, I want to add a similar capability to Python. That is, being able to write:
```
num = 3
until num == 0:
print(num)
num -= 1
```
### A language-advocacy digression
This article doesn't attempt to suggest the addition of an `until` statement to Python. Although I think such a statement would make some code clearer, and this article displays how easy it is to add, I completely respect Python's philosophy of minimalism. All I'm trying to do here, really, is gain some insight into the inner workings of Python.
### Modifying the grammar
Python uses a custom parser generator named `pgen`. This is a LL(1) parser that converts Python source code into a parse tree. The input to the parser generator is the file `Grammar/Grammar`**[1]**. This is a simple text file that specifies the grammar of Python.
**[1]**: From here on, references to files in the Python source are given relatively to the root of the source tree, which is the directory where you run configure and make to build Python.
Two modifications have to be made to the grammar file. The first is to add a definition for the `until` statement. I found where the `while` statement was defined (`while_stmt`), and added `until_stmt` below **[2]**:
```
compound_stmt: if_stmt | while_stmt | until_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated
if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]
while_stmt: 'while' test ':' suite ['else' ':' suite]
until_stmt: 'until' test ':' suite
```
**[2]**: This demonstrates a common technique I use when modifying source code I'm not familiar with: *work by similarity*. This principle won't solve all your problems, but it can definitely ease the process. Since everything that has to be done for `while` also has to be done for `until`, it serves as a pretty good guideline.
Note that I've decided to exclude the `else` clause from my definition of `until`, just to make it a little bit different (and because frankly I dislike the `else` clause of loops and don't think it fits well with the Zen of Python).
The second change is to modify the rule for `compound_stmt` to include `until_stmt`, as you can see in the snippet above. It's right after `while_stmt`, again.
When you run `make` after modifying `Grammar/Grammar`, notice that the `pgen` program is run to re-generate `Include/graminit.h` and `Python/graminit.c`, and then several files get re-compiled.
### Modifying the AST generation code
After the Python parser has created a parse tree, this tree is converted into an AST, since ASTs are [much simpler to work with](http://eli.thegreenplace.net/2009/02/16/abstract-vs-concrete-syntax-trees/) in subsequent stages of the compilation process.
So, we're going to visit `Parser/Python.asdl` which defines the structure of Python's ASTs and add an AST node for our new `until` statement, again right below the `while`:
```
| While(expr test, stmt* body, stmt* orelse)
| Until(expr test, stmt* body)
```
If you now run `make`, notice that before compiling a bunch of files, `Parser/asdl_c.py` is run to generate C code from the AST definition file. This (like `Grammar/Grammar`) is another example of the Python source-code using a mini-language (in other words, a DSL) to simplify programming. Also note that since `Parser/asdl_c.py` is a Python script, this is a kind of [bootstrapping](http://en.wikipedia.org/wiki/Bootstrapping_%28compilers%29) - to build Python from scratch, Python already has to be available.
While `Parser/asdl_c.py` generated the code to manage our newly defined AST node (into the files `Include/Python-ast.h` and `Python/Python-ast.c`), we still have to write the code that converts a relevant parse-tree node into it by hand. This is done in the file `Python/ast.c`. There, a function named `ast_for_stmt` converts parse tree nodes for statements into AST nodes. Again, guided by our old friend `while`, we jump right into the big `switch` for handling compound statements and add a clause for `until_stmt`:
```
case while_stmt:
return ast_for_while_stmt(c, ch);
case until_stmt:
return ast_for_until_stmt(c, ch);
```
Now we should implement `ast_for_until_stmt`. Here it is:
```
static stmt_ty
ast_for_until_stmt(struct compiling *c, const node *n)
{
/* until_stmt: 'until' test ':' suite */
REQ(n, until_stmt);
if (NCH(n) == 4) {
expr_ty expression;
asdl_seq *suite_seq;
expression = ast_for_expr(c, CHILD(n, 1));
if (!expression)
return NULL;
suite_seq = ast_for_suite(c, CHILD(n, 3));
if (!suite_seq)
return NULL;
return Until(expression, suite_seq, LINENO(n), n->n_col_offset, c->c_arena);
}
PyErr_Format(PyExc_SystemError,
"wrong number of tokens for 'until' statement: %d",
NCH(n));
return NULL;
}
```
Again, this was coded while closely looking at the equivalent `ast_for_while_stmt`, with the difference that for `until` I've decided not to support the `else` clause. As expected, the AST is created recursively, using other AST creating functions like `ast_for_expr` for the condition expression and `ast_for_suite` for the body of the `until` statement. Finally, a new node named `Until` is returned.
Note that we access the parse-tree node `n` using some macros like `NCH` and `CHILD`. These are worth understanding - their code is in `Include/node.h`.
### Digression: AST composition
I chose to create a new type of AST for the `until` statement, but actually this isn't necessary. I could've saved some work and implemented the new functionality using composition of existing AST nodes, since:
```
until condition:
# do stuff
```
Is functionally equivalent to:
```
while not condition:
    # do stuff
```
Instead of creating the `Until` node in `ast_for_until_stmt`, I could have created a `While` node whose test wraps the condition in a `Not` node. Since the AST compiler already knows how to handle these nodes, the next steps of the process could be skipped.
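To see the composition idea concretely without touching C, Python's own stdlib `ast` module can build the desugared form. This sketch (plain Python, not CPython internals) checks that `while not cond:` parses to a `While` node whose test is wrapped in a `Not`:

```python
import ast

# Desugaring "until cond: body" into "while not cond: body" at the AST level.
src = "while not (num == 0):\n    num -= 1\n"
tree = ast.parse(src)
loop = tree.body[0]

# The loop is a While node; its test is a UnaryOp whose operator is Not.
print(type(loop).__name__, type(loop.test.op).__name__)
```

This is exactly the shape the composed AST would have, with no new node kind needed.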
### Compiling ASTs into bytecode
The next step is compiling the AST into Python bytecode. The compilation has an intermediate result which is a CFG (Control Flow Graph), but since the same code handles it I will ignore this detail for now and leave it for another article.
The code we will look at next is `Python/compile.c`. Following the lead of `while`, we find the function `compiler_visit_stmt`, which is responsible for compiling statements into bytecode. We add a clause for `Until`:
```
    case While_kind:
        return compiler_while(c, s);
    case Until_kind:
        return compiler_until(c, s);
```
If you wonder what `Until_kind` is, it's a constant (actually a value of the `_stmt_kind` enumeration) automatically generated from the AST definition file into `Include/Python-ast.h`. Anyway, we call `compiler_until` which, of course, still doesn't exist. I'll get to it in a moment.
If you're curious like me, you'll notice that `compiler_visit_stmt` is peculiar. No amount of `grep`-ping the source tree reveals where it is called. When this is the case, only one option remains - C macro-fu. Indeed, a short investigation leads us to the `VISIT` macro defined in `Python/compile.c`:
```
#define VISIT(C, TYPE, V) {\
    if (!compiler_visit_ ## TYPE((C), (V))) \
        return 0; \
}
```
It's used to invoke `compiler_visit_stmt` in `compiler_body`. Back to our business, however...
As promised, here's `compiler_until`:
```
static int
compiler_until(struct compiler *c, stmt_ty s)
{
    basicblock *loop, *end, *anchor = NULL;
    int constant = expr_constant(s->v.Until.test);

    if (constant == 1) {
        return 1;
    }
    loop = compiler_new_block(c);
    end = compiler_new_block(c);
    if (constant == -1) {
        anchor = compiler_new_block(c);
        if (anchor == NULL)
            return 0;
    }
    if (loop == NULL || end == NULL)
        return 0;

    ADDOP_JREL(c, SETUP_LOOP, end);
    compiler_use_next_block(c, loop);
    if (!compiler_push_fblock(c, LOOP, loop))
        return 0;
    if (constant == -1) {
        VISIT(c, expr, s->v.Until.test);
        ADDOP_JABS(c, POP_JUMP_IF_TRUE, anchor);
    }
    VISIT_SEQ(c, stmt, s->v.Until.body);
    ADDOP_JABS(c, JUMP_ABSOLUTE, loop);

    if (constant == -1) {
        compiler_use_next_block(c, anchor);
        ADDOP(c, POP_BLOCK);
    }
    compiler_pop_fblock(c, LOOP, loop);
    compiler_use_next_block(c, end);

    return 1;
}
```
I have a confession to make: this code wasn't written based on a deep understanding of Python bytecode. Like the rest of the article, it was done in imitation of its kin, the `compiler_while` function. By reading it carefully, however, keeping in mind that the Python VM is stack-based, and glancing into the documentation of the `dis` module, which has [a list of Python bytecodes](http://docs.python.org/py3k/library/dis.html) with descriptions, it's possible to understand what's going on.
### That's it, we're done... Aren't we?
After making all the changes and running `make`, we can run the newly compiled Python and try our new `until` statement:
```
>>> num = 3
>>> until num == 0:
...     print(num)
...     num -= 1
...
3
2
1
```
Voila, it works! Let's see the bytecode created for the new statement by using the `dis` module as follows:
```
import dis

def myfoo(num):
    until num == 0:
        print(num)
        num -= 1

dis.dis(myfoo)
```
Here's the result:
```
  4           0 SETUP_LOOP              36 (to 39)
        >>    3 LOAD_FAST                0 (num)
              6 LOAD_CONST               1 (0)
              9 COMPARE_OP               2 (==)
             12 POP_JUMP_IF_TRUE        38

  5          15 LOAD_NAME                0 (print)
             18 LOAD_FAST                0 (num)
             21 CALL_FUNCTION            1
             24 POP_TOP

  6          25 LOAD_FAST                0 (num)
             28 LOAD_CONST               2 (1)
             31 INPLACE_SUBTRACT
             32 STORE_FAST               0 (num)
             35 JUMP_ABSOLUTE            3
        >>   38 POP_BLOCK
        >>   39 LOAD_CONST               0 (None)
             42 RETURN_VALUE
```
The most interesting operation is number 12: if the condition is true, we jump to after the loop. This is correct semantics for `until`. If the jump isn't executed, the loop body keeps running until it jumps back to the condition at operation 35.
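The patched interpreter isn't available here, but stock Python shows the same condition-then-jump pattern for the equivalent `while not` loop. Opcode names and offsets vary between Python versions, so this sketch only inspects the general shape:

```python
import dis

# The "while not" loop is semantically what "until" compiles to.
def countdown(num):
    while not num == 0:
        num -= 1

dis.dis(countdown)

# Collect the opcode names to inspect the shape: a comparison followed by
# a conditional/backward jump, whatever the current version calls them.
ops = [ins.opname for ins in dis.get_instructions(countdown)]
```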
Feeling good about my change, I then tried running the function (executing `myfoo(3)`) instead of showing its bytecode. The result was less than encouraging:
```
Traceback (most recent call last):
  File "zy.py", line 9, in <module>
    myfoo(3)
  File "zy.py", line 5, in myfoo
    print(num)
SystemError: no locals when loading 'print'
```
Whoa... this can't be good. So what went wrong?
### The case of the missing symbol table
One of the steps the Python compiler performs when compiling the AST is creating a symbol table for the code it compiles. The call to `PySymtable_Build` in `PyAST_Compile` calls into the symbol table module (`Python/symtable.c`), which walks the AST in a manner similar to the code generation functions. Having a symbol table for each scope helps the compiler figure out some key information, such as which variables are global and which are local to a scope.
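CPython exposes this same symbol-table machinery to Python code through the stdlib `symtable` module. This sketch (ordinary Python, with `while not` standing in for `until`) shows the kind of facts the pass derives for the `myfoo` example:

```python
import symtable

# The example function, with the until-loop written as its while-not equivalent.
code = (
    "def myfoo(num):\n"
    "    while not num == 0:\n"
    "        print(num)\n"
    "        num -= 1\n"
)

top = symtable.symtable(code, "<example>", "exec")
func = top.lookup("myfoo").get_namespace()

# The compiler records that "num" is local and "print" resolves globally --
# exactly the information that was missing for the broken Until node.
print(func.lookup("num").is_local())
print(func.lookup("print").is_global())
```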
To fix the problem, we have to modify the `symtable_visit_stmt` function in `Python/symtable.c`, adding code for handling `until` statements, after the similar code for `while` statements **[3]**:
```
    case While_kind:
        VISIT(st, expr, s->v.While.test);
        VISIT_SEQ(st, stmt, s->v.While.body);
        if (s->v.While.orelse)
            VISIT_SEQ(st, stmt, s->v.While.orelse);
        break;
    case Until_kind:
        VISIT(st, expr, s->v.Until.test);
        VISIT_SEQ(st, stmt, s->v.Until.body);
        break;
```
**[3]**: By the way, without this code there's a compiler warning for `Python/symtable.c`. The compiler notices that the `Until_kind` enumeration value isn't handled in the switch statement of `symtable_visit_stmt` and complains. It's always important to check for compiler warnings!
And now we really are done. Compiling the source after this change makes the execution of `myfoo(3)` work as expected.
### Conclusion
In this article I've demonstrated how to add a new statement to Python. Albeit requiring quite a bit of tinkering in the code of the Python compiler, the change wasn't difficult to implement, because I used a similar existing statement as a guideline.
The Python compiler is a sophisticated chunk of software, and I don't claim to be an expert in it. However, I am really interested in the internals of Python, and particularly its front-end. Therefore, I found this exercise a very useful companion to theoretical study of the compiler's principles and source code. It will serve as a base for future articles that will get deeper into the compiler.
### References
I used a few excellent references for the construction of this article. Here they are, in no particular order:
* [PEP 339: Design of the CPython compiler](http://www.python.org/dev/peps/pep-0339/) - probably the most important and comprehensive piece of *official* documentation for the Python compiler. Being very short, it painfully displays the scarcity of good documentation of the internals of Python.
* "Python Compiler Internals" - an article by Thomas Lee
* "Python: Design and Implementation" - a presentation by Guido van Rossum
* Python (2.5) Virtual Machine, A guided tour - a presentation by Peter Tröger
[original source](http://eli.thegreenplace.net/2010/06/30/python-internals-adding-a-new-statement-to-python/) |
What's a good library to manipulate Apache2 config files? | 215,542 | 7 | 2008-10-18T20:38:57Z | 215,552 | 7 | 2008-10-18T20:52:03Z | [
"java",
"python",
"perl",
"apache"
] | I'd like to create a script to manipulate Apache2 configuration directly, reading and writing its properties (like adding a new VirtualHost, changing settings of one that already exists).
Are there any libs out there, for Perl, Python or Java, that automate that task? | Rather than manipulate the config files, you can use [mod\_perl](http://perl.apache.org/) to embed Perl directly into the config files. This could allow you, for example, to read required vhosts out of a database.
See [Configure Apache with Perl Example](http://perl.apache.org/start/tips/config.html) for quick example and [Apache Configuration in Perl](http://perl.apache.org/docs/1.0/guide/config.html#Apache_Configuration_in_Perl) for all the details. |
What's a good library to manipulate Apache2 config files? | 215,542 | 7 | 2008-10-18T20:38:57Z | 215,695 | 7 | 2008-10-18T23:13:14Z | [
"java",
"python",
"perl",
"apache"
] | I'd like to create a script to manipulate Apache2 configuration directly, reading and writing its properties (like adding a new VirtualHost, changing settings of one that already exists).
Are there any libs out there, for Perl, Python or Java, that automate that task? | In Perl, you've got at least 2 modules for that:
[Apache::ConfigFile](http://search.cpan.org/~nwiger/Apache-ConfigFile-1.18/ConfigFile.pm)
[Apache::Admin::Config](https://metacpan.org/pod/Apache%3a%3aAdmin%3a%3aConfig) |
What's the difference between a parent and a reference property in Google App Engine? | 215,570 | 10 | 2008-10-18T21:12:07Z | 215,902 | 8 | 2008-10-19T02:23:47Z | [
"python",
"api",
"google-app-engine"
] | From what I understand, the parent attribute of a db.Model (typically defined/passed in the constructor call) allows you to define hierarchies in your data models. As a result, this increases the size of the entity group. However, it's not very clear to me why we would want to do that. Is this strictly for ACID compliance? I would like to see scenarios where each is best suited or more appropriate. | The only purpose of entity groups (defined by the parent attribute) is to enable transactions among different entities. If you don't need the transactions, don't use the entity group relationships.
I suggest re-reading the [Keys and Entity Groups](http://code.google.com/appengine/docs/datastore/keysandentitygroups.html) section of the docs; it took me quite a few reads to grasp the idea.
Also watch these talks, among other things they discuss transactions and entity groups:
* [Building Scalable Web Applications with Google App Engine](http://sites.google.com/site/io/building-scalable-web-applications-with-google-app-engine)
* [Under the Covers of the Google App Engine Datastore](http://sites.google.com/site/io/under-the-covers-of-the-google-app-engine-datastore) |
What's the difference between a parent and a reference property in Google App Engine? | 215,570 | 10 | 2008-10-18T21:12:07Z | 216,187 | 15 | 2008-10-19T08:56:43Z | [
"python",
"api",
"google-app-engine"
] | From what I understand, the parent attribute of a db.Model (typically defined/passed in the constructor call) allows you to define hierarchies in your data models. As a result, this increases the size of the entity group. However, it's not very clear to me why we would want to do that. Is this strictly for ACID compliance? I would like to see scenarios where each is best suited or more appropriate. | There are several differences:
* All entities with the same ancestor are in the same entity group. Transactions can only affect entities inside a single entity group.
* All writes to a single entity group are serialized, so throughput is limited.
* The parent entity is set on creation and is fixed. References can be changed at any time.
* With reference properties, you can only query for direct relationships, but with parent properties you can use the .ancestor() filter to find everything (directly or indirectly) descended from a given ancestor.
* Each entity has only a single parent, but can have multiple reference properties. |
What is the purpose of the colon before a block in Python? | 215,581 | 44 | 2008-10-18T21:18:46Z | 215,676 | 56 | 2008-10-18T22:49:32Z | [
"python",
"syntax"
] | What is the purpose of the colon before a block in Python?
Example:
```
if n == 0:
    print "The end"
``` | The colon is there to declare the start of an indented block.
Technically, it's not necessary; you could just indent and de-indent when the block is done. However, based on the [Python koan](http://www.python.org/dev/peps/pep-0020/) "explicit is better than implicit" (EIBTI), I believe that Guido deliberately made the colon obligatory, so *any* statement that *should* be followed by indented code ends in a colon. (It also allows one-liners if you continue after the colon, but this style is not in wide use.)
It also makes the work of syntax-aware auto-indenting editors easier, which also counted in the decision.
---
This question turns out to be a [Python FAQ](http://docs.python.org/faq/design.html#why-are-colons-required-for-the-if-while-def-class-statements), and I found one of its answers by Guido [here](http://markmail.org/message/ve7mwqxhci4pm6lw):
> **Why are colons required for the if/while/def/class statements?**
>
> The colon is required primarily to enhance readability (one of the results of the experimental ABC language). Consider this:
>
> ```
> if a == b
>     print a
> ```
>
> versus
>
> ```
> if a == b:
>     print a
> ```
>
> Notice how the second one is slightly easier to read. Notice further how a colon sets off the example in this FAQ answer; it's a standard usage in English.
>
> Another minor reason is that the colon makes it easier for editors with syntax highlighting; they can look for colons to decide when indentation needs to be increased instead of having to do a more elaborate parsing of the program text. |
What is the purpose of the colon before a block in Python? | 215,581 | 44 | 2008-10-18T21:18:46Z | 216,060 | 12 | 2008-10-19T05:40:07Z | [
"python",
"syntax"
] | What is the purpose of the colon before a block in Python?
Example:
```
if n == 0:
    print "The end"
``` | Three reasons:
1. To increase readability. The colon helps the code flow into the following indented block.
2. To help text editors/IDEs: they can automatically indent the next line if the previous line ends with a colon.
3. To make parsing by python slightly easier. |
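On the third point, a quick way to see the parser enforcing the colon is to `compile` a snippet without one; the exact error message varies by Python version:

```python
# Compiling an "if" with no colon fails immediately with a SyntaxError.
msg = None
try:
    compile("if a == b\n    print(a)\n", "<example>", "exec")
except SyntaxError as err:
    msg = err.msg

print(msg)  # wording differs across versions, e.g. "expected ':'"
```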
What is the purpose of the colon before a block in Python? | 215,581 | 44 | 2008-10-18T21:18:46Z | 1,464,645 | 15 | 2009-09-23T08:13:16Z | [
"python",
"syntax"
] | What is the purpose of the colon before a block in Python?
Example:
```
if n == 0:
print "The end"
``` | Consider the following list of things to buy from the grocery store, written in Pewprikanese.
```
pewkah
lalala
    chunkykachoo
    pewpewpew
skunkybacon
```
When I read that, I'm confused: are chunkykachoo and pewpewpew a kind of lalala? Or are chunkykachoo and pewpewpew indented just because they are special items?
Now see what happens when my Pewprikanese friend adds a colon to help me parse the list better: (<-- like this)
```
pewkah
lalala: (<-- see this colon)
    chunkykachoo
    pewpewpew
skunkybacon
```
Now it's clear that chunkykachoo and pewpewpew are a kind of lalala.
Let's say there is a person who's starting to learn Python, which happens to be her first programming language. Without colons, there's a considerable probability that she'll keep thinking "these lines are indented because they are special items", and it could take a while to realize that that's not the best way to think about indentation.
Why Python decorators rather than closures? | 215,689 | 13 | 2008-10-18T23:07:28Z | 215,782 | 7 | 2008-10-19T00:34:16Z | [
"python",
"decorator"
] | I still haven't got my head around decorators in Python.
I've already started using a lot of closures to do things like customize functions and classes in my coding.
Eg.
```
class Node :
    def __init__(self,val,children) :
        self.val = val
        self.children = children

def makeRunner(f) :
    def run(node) :
        f(node)
        for x in node.children :
            run(x)
    return run

tree=Node(1,[Node(2,[]),Node(3,[Node(4,[]),Node(5,[])])])

def pp(n) : print "%s," % n.val

printTree = makeRunner(pp)
printTree(tree)
```
As far as I can see, decorators are just a different syntax for doing something similar.
Instead of
```
def pp(n) : print "%s," % n.val
printTree = makeRunner(pp)
```
I would write :
```
@makeRunner
def printTree(n) : print "%s," % n.val
```
Is this all there is to decorators? Or is there a fundamental difference that I've missed? | Are your examples real code, or just examples?
If they're real code, I think you overuse decorators, probably because of your background (i.e. you are used to other programming languages).
### Stage 1: avoiding decorators
```
def run(rootnode, func):
    def _run(node): # recursive internal function
        func(node)
        for x in node.children:
            _run(x) # recurse
    _run(rootnode) # initial run
```
This `run` function makes `makeRunner` obsolete. Your example becomes:
```
def pp(n): print "%s," % n.val
run(tree, pp)
```
However, this completely ignores generators, so…
### Stage 2: using generators
```
class Node :
    def __init__(self,val,children) :
        self.val = val
        self.children = children

    def __iter__(self): # recursive
        yield self
        for child in self.children:
            for item in child: # recurse
                yield item

def run(rootnode, func):
    for node in rootnode:
        func(node)
```
Your example remains
```
def pp(n): print "%s," % n.val
run(tree, pp)
```
Note that the special method `__iter__` allows us to use the `for node in rootnode:` construct. If you don't like it, just rename the `__iter__` method to e.g. `walker`, and change the `run` loop into: `for node in rootnode.walker():`
Obviously, the `run` function could be a method of `class Node` instead.
As you see, I suggest using `run(tree, func)` directly instead of binding it to the name `printTree`, but you can use it in a decorator, or you can make use of the `functools.partial` function:
```
printTree= functools.partial(run, func=pp)
```
and from then on, you would just
```
printTree(tree)
``` |
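For completeness, here is a runnable Python 3 rendition of the stage-2 sketch, using `yield from` and `functools.partial`; the collected-values list is just for demonstration:

```python
import functools

class Node:
    def __init__(self, val, children):
        self.val = val
        self.children = children

    def __iter__(self):  # recursive depth-first walk
        yield self
        for child in self.children:
            yield from child

def run(rootnode, func):
    for node in rootnode:
        func(node)

tree = Node(1, [Node(2, []), Node(3, [Node(4, []), Node(5, [])])])

# Bind the callback with functools.partial instead of a makeRunner closure.
seen = []
collect = functools.partial(run, func=lambda n: seen.append(n.val))
collect(tree)
print(seen)
```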
Why Python decorators rather than closures? | 215,689 | 13 | 2008-10-18T23:07:28Z | 335,442 | 13 | 2008-12-02T21:03:22Z | [
"python",
"decorator"
] | I still haven't got my head around decorators in Python.
I've already started using a lot of closures to do things like customize functions and classes in my coding.
Eg.
```
class Node :
def __init__(self,val,children) :
self.val = val
self.children = children
def makeRunner(f) :
def run(node) :
f(node)
for x in node.children :
run(x)
return run
tree=Node(1,[Node(2,[]),Node(3,[Node(4,[]),Node(5,[])])])
def pp(n) : print "%s," % n.val
printTree = makeRunner(pp)
printTree(tree)
```
As far as I can see, decorators are just a different syntax for doing something similar.
Instead of
```
def pp(n) : print "%s," % n.val
printTree = makeRunner(pp)
```
I would write :
```
@makeRunner
def printTree(n) : print "%s," % n.val
```
Is this all there is to decorators? Or is there a fundamental difference that I've missed? | While it is true that syntactically, decorators are just "sugar", that is not the best way to think about them.
Decorators allow you to weave functionality into your existing code without actually modifying it. And they allow you to do it in a way that is declarative.
This allows you to use decorators to do *aspect-oriented programming* (AOP). So you want to use a decorator when you have a cross-cutting concern that you want to encapsulate in one place.
The quintessential example would probably be logging, where you want to log the entry or exit of a function, or both. Using a decorator is equivalent to applying advice (log this!) to a joinpoint (during method entry or exit).
Method decoration is a concept like OOP or list comprehensions. As you point out, it is not always appropriate, and **can** be overused. But in the right place, it can be useful for making code more modular and decoupled. |
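A minimal sketch of that logging example (the name `log_calls` is illustrative, not from any particular library):

```python
import functools

# The decorator weaves entry/exit logging around any function
# without touching the function's body.
def log_calls(func):
    @functools.wraps(func)  # preserve the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        print("entering", func.__name__)
        result = func(*args, **kwargs)
        print("exiting", func.__name__)
        return result
    return wrapper

@log_calls
def add(a, b):
    return a + b

total = add(2, 3)  # logs entry and exit around the call
```

The cross-cutting concern (logging) lives in one place, and `@log_calls` declares where it applies.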
How do I coherently organize modules for a PyGTK desktop application? | 216,093 | 7 | 2008-10-19T06:07:28Z | 216,145 | 7 | 2008-10-19T07:39:33Z | [
"python",
"gtk",
"module",
"pygtk",
"organization"
] | I am working on a desktop application in PyGTK and seem to be bumping up against some limitations of my file organization. Thus far I've structured my project this way:
* application.py - holds the primary application class (most functional routines)
* gui.py - holds a loosely coupled GTK gui implementation. Handles signal callbacks, etc.
* command.py - holds command line automation functions not dependent on data in the application class
* state.py - holds the state data persistence class
This has served fairly well so far, but at this point application.py is starting to get rather long. I have looked at numerous other PyGTK applications and they seem to have similar structural issues. At a certain point the primary module starts to get very long and there is no obvious way of breaking the code out into narrower modules without sacrificing clarity and object orientation.
I have considered making the GUI the primary module and having separate modules for the toolbar routines, the menu routines, etc., but at that point I believe I will lose most of the benefits of OOP and end up with an everything-references-everything scenario.
Should I just deal with having a very long central module or is there a better way of structuring the project so that I don't have to rely on the class browser so much?
**EDIT I**
Ok, so point taken regarding all the MVC stuff. I do have a rough approximation of MVC in my code, but admittedly I could probably gain some mileage by further segregating the model and controller. However, I am reading over python-gtkmvc's documentation (which is a great find by the way, thank you for referencing it) and my impression is that it's not going to solve my problem so much as just formalize it. My application is a single glade file, generally a single window. So no matter how tightly I define the MVC roles of the modules I'm still going to have one controller module doing most everything, which is pretty much what I have now. Admittedly I'm a little fuzzy on proper MVC implementation and I'm going to keep researching, but it doesn't look to me like this architecture is going to get any more stuff out of my main file, it's just going to rename that file to controller.py.
Should I be thinking about separate Controller/View pairs for separate sections of the window (the toolbar, the menus, etc)? Perhaps that is what I'm missing here. It seems that this is what S. Lott is referring to in his second bullet point.
Thanks for the responses so far. | In the project [Wader](http://wader-project.org) we use [python gtkmvc](http://pygtkmvc.sourceforge.net/), which makes it much easier to apply the MVC pattern when using PyGTK and Glade; you can see the file organization of our project in the [svn repository](http://trac.wader-project.org/browser/trunk/wader):
```
wader/
    cli/
    common/
    contrib/
    gtk/
        controllers/
        models/
        views/
    test/
    utils/
``` |
In Python, what does it mean if an object is subscriptable or not? | 216,972 | 109 | 2008-10-19T21:08:23Z | 216,980 | 102 | 2008-10-19T21:11:05Z | [
"python",
"terminology"
] | Which types of objects fall into the domain of "subscriptable"? | It basically means that the object implements the `__getitem__()` method. In other words, it describes objects that are "containers", meaning they contain other objects. This includes lists, tuples, and dictionaries. |
In Python, what does it mean if an object is subscriptable or not? | 216,972 | 109 | 2008-10-19T21:08:23Z | 217,081 | 20 | 2008-10-19T22:39:47Z | [
"python",
"terminology"
] | Which types of objects fall into the domain of "subscriptable"? | Off the top of my head, the following are the only built-ins that are subscriptable:
```
string: "foobar"[3] == "b"
tuple: (1,2,3,4)[3] == 4
list: [1,2,3,4][3] == 4
dict: {"a":1, "b":2, "c":3}["c"] == 3
```
But mipadi's answer is correct; any class that implements `__getitem__` is subscriptable.
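A minimal sketch: a class becomes subscriptable the moment it defines `__getitem__`, while subscripting an object that lacks it raises `TypeError` (the class and variable names here are illustrative):

```python
# Defining __getitem__ is all it takes to support the s[i] syntax.
class Squares:
    def __getitem__(self, i):
        return i * i

s = Squares()
print(s[4])

# An int has no __getitem__, so subscripting it fails at runtime.
x = 42
try:
    x[0]
except TypeError as err:
    print(err)
```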
How can I determine the display idle time from Python in Windows, Linux, and MacOS? | 217,157 | 15 | 2008-10-19T23:25:08Z | 1,145,688 | 10 | 2009-07-17T21:07:04Z | [
"python",
"winapi",
"gtk",
"pygtk",
"pywin32"
] | I would like to know how long it's been since the user last hit a key or moved the mouse - not just in my application, but on the whole "computer" (i.e. display), in order to guess whether they're still at the computer and able to observe notifications that pop up on the screen.
I'd like to do this purely from (Py)GTK+, but I am amenable to calling platform-specific functions. Ideally I'd like to call functions which have already been wrapped from Python, but if that's not possible, I'm not above a little bit of C or `ctypes` code, as long as I know what I'm actually looking for.
On Windows I think the function I want is [`GetLastInputInfo`](http://msdn.microsoft.com/en-us/library/ms646302.aspx), but that doesn't seem to be wrapped by pywin32; I hope I'm missing something. | [Gajim](http://www.gajim.org/) does it this way on Windows, OS X and GNU/Linux (and other \*nixes):
1. [Python wrapper module](https://trac.gajim.org/browser/src/common/sleepy.py?rev=604c5de0dfe72dfa76b3014c410a50daae381cbe) (also includes Windows idle time detection code, using `GetTickCount` with `ctypes`);
2. [Ctypes-based module to get X11 idle time](https://trac.gajim.org/browser/src/common/idle.py?rev=604c5de0dfe72dfa76b3014c410a50daae381cbe) (using `XScreenSaverQueryInfo`, was a C module in old Gajim versions);
3. [C module to get OS X idle time](https://trac.gajim.org/browser/src/osx/idle.c?rev=604c5de0dfe72dfa76b3014c410a50daae381cbe) (using `HIDIdleTime` system property).
Those links are to the quite dated 0.12 version, so you may want to check the current source for possible further improvements and changes.
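For the Windows side the question raised, here is a hedged, Windows-only sketch (untested here) of calling `GetLastInputInfo` through `ctypes`, since pywin32 doesn't wrap it; `dwTime` is in milliseconds of system uptime, compared against `GetTickCount`:

```python
import ctypes

# Mirrors the Win32 LASTINPUTINFO structure: two 32-bit unsigned fields.
class LASTINPUTINFO(ctypes.Structure):
    _fields_ = [("cbSize", ctypes.c_uint),
                ("dwTime", ctypes.c_uint)]

def get_idle_ms():
    """Milliseconds since the last keyboard/mouse input (Windows only)."""
    info = LASTINPUTINFO()
    info.cbSize = ctypes.sizeof(info)
    if not ctypes.windll.user32.GetLastInputInfo(ctypes.byref(info)):
        raise ctypes.WinError()
    return ctypes.windll.kernel32.GetTickCount() - info.dwTime
```

Note `ctypes.windll` only exists on Windows, so calling `get_idle_ms()` elsewhere will fail; guard it with a platform check in portable code.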
how to generate unit test code for methods | 217,900 | 3 | 2008-10-20T09:36:51Z | 217,925 | 7 | 2008-10-20T09:48:11Z | [
"python",
"unit-testing"
] | I want to write unit-test code to test my application code. I have different methods and now want to test these methods one by one from a Python script,
but I do not know how to write the tests. Can anyone give me a small example of unit-testing code in Python?
Thanks in advance. | Read the [unit testing framework section](http://www.python.org/doc/2.5.2/lib/module-unittest.html) of the [Python Library Reference](http://www.python.org/doc/2.5.2/lib/lib.html).
A [basic example](http://www.python.org/doc/2.5.2/lib/minimal-example.html) from the documentation:
```
import random
import unittest

class TestSequenceFunctions(unittest.TestCase):

    def setUp(self):
        self.seq = range(10)

    def testshuffle(self):
        # make sure the shuffled sequence does not lose any elements
        random.shuffle(self.seq)
        self.seq.sort()
        self.assertEqual(self.seq, range(10))

    def testchoice(self):
        element = random.choice(self.seq)
        self.assert_(element in self.seq)

    def testsample(self):
        self.assertRaises(ValueError, random.sample, self.seq, 20)
        for element in random.sample(self.seq, 5):
            self.assert_(element in self.seq)

if __name__ == '__main__':
    unittest.main()
``` |