| title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
How do I calculate number of days between two dates using Python? | 151,199 | 193 | 2008-09-29T23:36:25Z | 151,214 | 15 | 2008-09-29T23:43:01Z | [
"python",
"date"
] | If I have two dates (ex. `'8/18/2008'` and `'9/26/2008'`) what is the best way to get the difference measured in days? | Days until Christmas:
```
>>> import datetime
>>> today = datetime.date.today()
>>> someday = datetime.date(2008, 12, 25)
>>> diff = someday - today
>>> diff.days
86
```
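Applied to the two dates from the question (a Python 3 sketch; the original answer predates Python 3, and the M/D/YYYY format is assumed from the question):

```python
from datetime import datetime

# Parse the question's M/D/YYYY strings (format assumed from the question).
d1 = datetime.strptime("8/18/2008", "%m/%d/%Y").date()
d2 = datetime.strptime("9/26/2008", "%m/%d/%Y").date()

delta = d2 - d1       # a datetime.timedelta
print(delta.days)     # 39
```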
More arithmetic [here](https://web.archive.org/web/20061007015511/http://www.daniweb.com/code/snippet236.html). |
Get other running processes window sizes in Python | 151,846 | 7 | 2008-09-30T05:20:27Z | 152,454 | 11 | 2008-09-30T10:02:38Z | [
"python",
"windows",
"winapi",
"pywin32"
] | This isn't as malicious as it sounds: I want to get the current size of their windows, not look at what is in them. The purpose is to figure out that if every other window is fullscreen then I should start up like that too. Or if all the other processes are only 800x600 despite there being a huge resolution then that is probably what the user wants. Why make them waste time and energy resizing my window to match all the others they have? I am primarily a Windows developer but it wouldn't upset me in the least if there was a cross-platform way to do this. | Using hints from [WindowMover article](http://www.devx.com/opensource/Article/37773/1954) and [Nattee Niparnan's blog post](http://our.obor.us/?q=node/42) I managed to create this:
```
import win32con
import win32gui
def isRealWindow(hWnd):
'''Return True iff given window is a real Windows application window.'''
if not win32gui.IsWindowVisible(hWnd):
return False
if win32gui.GetParent(hWnd) != 0:
return False
hasNoOwner = win32gui.GetWindow(hWnd, win32con.GW_OWNER) == 0
lExStyle = win32gui.GetWindowLong(hWnd, win32con.GWL_EXSTYLE)
if (((lExStyle & win32con.WS_EX_TOOLWINDOW) == 0 and hasNoOwner)
or ((lExStyle & win32con.WS_EX_APPWINDOW != 0) and not hasNoOwner)):
if win32gui.GetWindowText(hWnd):
return True
return False
def getWindowSizes():
'''
Return a list of tuples (handler, (width, height)) for each real window.
'''
def callback(hWnd, windows):
if not isRealWindow(hWnd):
return
rect = win32gui.GetWindowRect(hWnd)
windows.append((hWnd, (rect[2] - rect[0], rect[3] - rect[1])))
windows = []
win32gui.EnumWindows(callback, windows)
return windows
for win in getWindowSizes():
print win
```
You need the [Win32 Extensions for Python module](http://python.net/crew/mhammond/win32/Downloads.html) for this to work.
EDIT: I discovered that `GetWindowRect` gives more correct results than `GetClientRect`. Source has been updated. |
Get other running processes window sizes in Python | 151,846 | 7 | 2008-09-30T05:20:27Z | 155,587 | 8 | 2008-09-30T23:30:58Z | [
"python",
"windows",
"winapi",
"pywin32"
] | This isn't as malicious as it sounds: I want to get the current size of their windows, not look at what is in them. The purpose is to figure out that if every other window is fullscreen then I should start up like that too. Or if all the other processes are only 800x600 despite there being a huge resolution then that is probably what the user wants. Why make them waste time and energy resizing my window to match all the others they have? I am primarily a Windows developer but it wouldn't upset me in the least if there was a cross-platform way to do this. | I'm a big fan of [AutoIt](http://www.autoitscript.com/autoit3/). They have a COM version which allows you to use most of their functions from Python.
```
import win32com.client
oAutoItX = win32com.client.Dispatch( "AutoItX3.Control" )
oAutoItX.Opt("WinTitleMatchMode", 2) #Match text anywhere in a window title
width = oAutoItX.WinGetClientSizeWidth("Firefox")
height = oAutoItX.WinGetClientSizeHeight("Firefox")
print width, height
``` |
Errors with Python's mechanize module | 151,929 | 6 | 2008-09-30T06:03:47Z | 155,127 | 8 | 2008-09-30T21:15:47Z | [
"python",
"exception",
"urllib2",
"mechanize"
] | I'm using the `mechanize` module to execute some web queries from Python. I want my program to be error-resilient and handle all kinds of errors (wrong URLs, 403/404 responses) gracefully. However, I can't find in mechanize's documentation the errors / exceptions it throws for various errors.
I just call it with:
```
self.browser = mechanize.Browser()
self.browser.addheaders = [('User-agent', browser_header)]
self.browser.open(query_url)
self.result_page = self.browser.response().read()
```
How can I know what errors / exceptions can be thrown here and handle them ? | ```
$ perl -0777 -ne'print qq($1) if /__all__ = \[(.*?)\]/s' __init__.py | grep Error
'BrowserStateError',
'ContentTooShortError',
'FormNotFoundError',
'GopherError',
'HTTPDefaultErrorHandler',
'HTTPError',
'HTTPErrorProcessor',
'LinkNotFoundError',
'LoadError',
'ParseError',
'RobotExclusionError',
'URLError',
```
Or:
```
>>> import mechanize
>>> filter(lambda s: "Error" in s, dir(mechanize))
['BrowserStateError', 'ContentTooShortError', 'FormNotFoundError', 'GopherError'
, 'HTTPDefaultErrorHandler', 'HTTPError', 'HTTPErrorProcessor', 'LinkNotFoundErr
or', 'LoadError', 'ParseError', 'RobotExclusionError', 'URLError']
``` |
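As the lists above show, mechanize re-exports the `urllib2` exception hierarchy (`HTTPError`, `URLError`, and friends), and since `HTTPError` subclasses `URLError`, the more specific handler must come first. A hedged handling sketch, written here against the Python 3 `urllib.error` analogues so it runs without mechanize installed:

```python
import urllib.error

def classify_error(exc):
    # Order matters: HTTPError is a subclass of URLError, so the more
    # specific check (or except clause) must come before the general one.
    if isinstance(exc, urllib.error.HTTPError):
        return "http %d" % exc.code
    if isinstance(exc, urllib.error.URLError):
        return "network error: %s" % exc.reason
    return "unexpected"

# The hierarchy that makes the except-ordering matter:
assert issubclass(urllib.error.HTTPError, urllib.error.URLError)
```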
Is it OK to inspect properties beginning with underscore? | 152,068 | 3 | 2008-09-30T07:24:25Z | 152,083 | 8 | 2008-09-30T07:29:36Z | [
"python",
"sqlalchemy",
"pylons"
] | I've been working on a very simple crud generator for pylons. I came up with something that inspects
```
SomeClass._sa_class_manager.mapper.c
```
Is it ok to inspect this (or to call methods beginning with underscore)? I always kind of assumed this is legal though frowned upon as it relies heavily on the internal structure of a class/object. But hey, since python does not really have interfaces in the Java sense maybe it is OK. | It is intentional (in Python) that there are no "private" scopes. It is a convention that anything that starts with an underscore should ideally not be used, and hence you may not complain if its behavior or definition changes in a future version. |
Is it OK to inspect properties beginning with underscore? | 152,068 | 3 | 2008-09-30T07:24:25Z | 152,111 | 8 | 2008-09-30T07:40:11Z | [
"python",
"sqlalchemy",
"pylons"
] | I've been working on a very simple crud generator for pylons. I came up with something that inspects
```
SomeClass._sa_class_manager.mapper.c
```
Is it ok to inspect this (or to call methods beginning with underscore)? I always kind of assumed this is legal though frowned upon as it relies heavily on the internal structure of a class/object. But hey, since python does not really have interfaces in the Java sense maybe it is OK. | In general, this usually indicates that the method is effectively internal, rather than part of the documented interface, and should not be relied on. Future versions of the library are free to rename or remove such methods, so if you care about future compatibility without having to rewrite, avoid doing it. |
What's the canonical way to check for type in python? | 152,580 | 588 | 2008-09-30T11:00:10Z | 152,583 | 11 | 2008-09-30T11:01:28Z | [
"python",
"types"
] | What is the best way to check whether a given object is of a given type? How about checking whether the object inherits from a given type?
Let's say I have an object `o`. How do I check whether it's a `str`? | ```
isinstance(o, str)
```
[Link](http://docs.python.org/lib/built-in-funcs.html) |
What's the canonical way to check for type in python? | 152,580 | 588 | 2008-09-30T11:00:10Z | 152,592 | 22 | 2008-09-30T11:05:51Z | [
"python",
"types"
] | What is the best way to check whether a given object is of a given type? How about checking whether the object inherits from a given type?
Let's say I have an object `o`. How do I check whether it's a `str`? | `isinstance(o, str)` will return `true` if `o` is an `str` or is of a type that inherits from `str`.
`type(o) == str` will return `True` if and only if `o` is a `str`. It will return `False` if `o` is of a type that inherits from `str`. |
What's the canonical way to check for type in python? | 152,580 | 588 | 2008-09-30T11:00:10Z | 152,596 | 734 | 2008-09-30T11:07:45Z | [
"python",
"types"
] | What is the best way to check whether a given object is of a given type? How about checking whether the object inherits from a given type?
Let's say I have an object `o`. How do I check whether it's a `str`? | To check if the type of `o` is exactly `str`:
```
type(o) is str
```
To check if `o` is an instance of `str` or any subclass of `str` (this would be the "canonical" way):
```
isinstance(o, str)
```
The following also works, and can be useful in some cases:
```
issubclass(type(o), str)
type(o) in ([str] + str.__subclasses__())
```
See [Built-in Functions](http://docs.python.org/2/library/functions.html) in the Python Library Reference for relevant information.
One more note: in this case, you may actually want to use:
```
isinstance(o, basestring)
```
because this will also catch Unicode strings (`unicode` is not a subclass of `str`; both `str` and `unicode` are subclasses of `basestring`).
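(A hedged update for Python 3, which postdates this answer: `basestring` and `unicode` are gone; every text string is `str`, and raw bytes are a separate `bytes` type.)

```python
assert isinstance("text", str)            # all Python 3 strings are str
assert not isinstance(b"raw", str)        # bytes are not str
assert isinstance(b"raw", (str, bytes))   # the tuple form still works
```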
Alternatively, `isinstance` accepts a tuple of classes. This will return True if x is an instance of any subclass of any of (str, unicode):
```
isinstance(o, (str, unicode))
``` |
What's the canonical way to check for type in python? | 152,580 | 588 | 2008-09-30T11:00:10Z | 154,156 | 105 | 2008-09-30T17:40:18Z | [
"python",
"types"
] | What is the best way to check whether a given object is of a given type? How about checking whether the object inherits from a given type?
Let's say I have an object `o`. How do I check whether it's a `str`? | The **most** Pythonic way to check the type of an object is... not to check it.
Since Python encourages [Duck Typing](http://wikipedia.org/wiki/Duck_typing), you should just try to use the object's methods the way you want to use them. So if your function is looking for a writable file object, *don't* check that it's a subclass of `file`, just try to use its `.write()` method!
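A minimal sketch of that advice (the function and names are hypothetical):

```python
import io

def save_report(dest):
    # Duck typing: don't check dest's type up front -- just use the
    # .write() interface and translate the failure if it's missing.
    try:
        dest.write("report data\n")
    except AttributeError:
        raise TypeError("dest must be a writable file-like object")

buf = io.StringIO()
save_report(buf)   # works: StringIO quacks like a file
```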
Of course, sometimes these nice abstractions break down and `isinstance(obj, cls)` is what you need. But use sparingly. |
What's the canonical way to check for type in python? | 152,580 | 588 | 2008-09-30T11:00:10Z | 14,532,188 | 8 | 2013-01-25T23:54:51Z | [
"python",
"types"
] | What is the best way to check whether a given object is of a given type? How about checking whether the object inherits from a given type?
Let's say I have an object `o`. How do I check whether it's a `str`? | Here is an example of why duck typing is evil if you don't know when it is dangerous.
For instance, consider the Python code below (possibly with imperfect indenting). Note that this situation is avoidable by using the `isinstance` and `issubclass` functions to make sure that when you really need a duck, you don't get a bomb.
```
class Bomb:
def __init__(self):
""
def talk(self):
self.explode()
def explode(self):
print "BOOM!, The bomb explodes."
class Duck:
def __init__(self):
""
def talk(self):
print "I am a duck, I will not blow up if you ask me to talk."
class Kid:
kids_duck = None
def __init__(self):
print "Kid comes around a corner and asks you for money so he could buy a duck."
def takeDuck(self, duck):
self.kids_duck = duck
print "The kid accepts the duck, and happily skips along"
def doYourThing(self):
print "The kid tries to get the duck to talk"
self.kids_duck.talk()
myKid = Kid()
myBomb = Bomb()
myKid.takeDuck(myBomb)
myKid.doYourThing()
``` |
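A hedged fix in the spirit the answer suggests: have the kid validate the gift with `isinstance`, so subclasses of `Duck` still pass (reworked in Python 3 syntax, with the same class names as above):

```python
class Duck:
    def talk(self):
        return "quack"

class Bomb:
    def talk(self):
        return "BOOM!"

class Kid:
    def take_duck(self, duck):
        # Accept Duck and any of its subclasses; refuse everything else.
        if not isinstance(duck, Duck):
            raise TypeError("that is not a duck")
        self.kids_duck = duck

kid = Kid()
kid.take_duck(Duck())   # fine; take_duck(Bomb()) would raise TypeError
```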
How to iterate over a timespan after days, hours, weeks and months in Python? | 153,584 | 40 | 2008-09-30T15:35:22Z | 153,667 | 33 | 2008-09-30T15:50:06Z | [
"python",
"datetime"
] | How do I iterate over a timespan after days, hours, weeks or months?
Something like:
```
for date in foo(from_date, to_date, delta=HOURS):
print date
```
Where foo is a function returning an iterator. I've been looking at the calendar module, but that only works for one specific year or month, not between dates. | I don't think there is such a method in the Python standard library, but you can easily create one yourself using the [datetime](http://docs.python.org/lib/module-datetime.html) module:
```
from datetime import date, datetime, timedelta
def datespan(startDate, endDate, delta=timedelta(days=1)):
currentDate = startDate
while currentDate < endDate:
yield currentDate
currentDate += delta
```
Then you could use it like this:
```
>>> for day in datespan(date(2007, 3, 30), date(2007, 4, 3),
>>> delta=timedelta(days=1)):
>>> print day
2007-03-30
2007-03-31
2007-04-01
2007-04-02
```
Or, if you wish to make your delta smaller:
```
>>> for timestamp in datespan(datetime(2007, 3, 30, 15, 30),
>>> datetime(2007, 3, 30, 18, 35),
>>> delta=timedelta(hours=1)):
>>> print timestamp
2007-03-30 15:30:00
2007-03-30 16:30:00
2007-03-30 17:30:00
2007-03-30 18:30:00
``` |
How to iterate over a timespan after days, hours, weeks and months in Python? | 153,584 | 40 | 2008-09-30T15:35:22Z | 155,172 | 61 | 2008-09-30T21:30:00Z | [
"python",
"datetime"
] | How do I iterate over a timespan after days, hours, weeks or months?
Something like:
```
for date in foo(from_date, to_date, delta=HOURS):
print date
```
Where foo is a function, returning an iterator. I've been looking at the calendar module, but that only works for one specific year or month, not between dates. | Use [dateutil](http://labix.org/python-dateutil) and its rrule implementation, like so:
```
from dateutil import rrule
from datetime import datetime, timedelta
now = datetime.now()
hundredDaysLater = now + timedelta(days=100)
for dt in rrule.rrule(rrule.MONTHLY, dtstart=now, until=hundredDaysLater):
print dt
```
Output is
```
2008-09-30 23:29:54
2008-10-30 23:29:54
2008-11-30 23:29:54
2008-12-30 23:29:54
```
Replace MONTHLY with any of YEARLY, MONTHLY, WEEKLY, DAILY, HOURLY, MINUTELY, or SECONDLY. Replace dtstart and until with whatever datetime object you want.
This recipe has the advantage of working in all cases, including MONTHLY. The only caveat I could find is that if you pass a day number that doesn't exist in all months, it skips those months. |
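If that month-skipping caveat matters, a stdlib-only sketch that clamps to the last valid day instead (the helper name is made up):

```python
import calendar
from datetime import date

def add_months(d, n):
    # Step n months forward, clamping the day to the target month's length,
    # so Jan 31 + 1 month gives Feb 29 (in a leap year) instead of skipping.
    month_index = d.month - 1 + n
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

print(add_months(date(2008, 1, 31), 1))   # 2008-02-29
```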
How to avoid .pyc files? | 154,443 | 204 | 2008-09-30T18:55:59Z | 154,566 | 9 | 2008-09-30T19:27:08Z | [
"python",
"compiler-construction",
"interpreter"
Can I run the python interpreter without generating the compiled .pyc files? | In 2.5, there's no way to suppress it, other than measures like not giving users write access to the directory.
In Python 2.6 and 3.0, however, there is a setting in the `sys` module called `dont_write_bytecode` that can be set to suppress this. It can also be set by passing the `-B` option, or by setting the environment variable `PYTHONDONTWRITEBYTECODE`. |
How to avoid .pyc files? | 154,443 | 204 | 2008-09-30T18:55:59Z | 154,617 | 221 | 2008-09-30T19:38:02Z | [
"python",
"compiler-construction",
"interpreter"
] | Can I run the python interpreter without generating the compiled .pyc files? | From ["What's New in Python 2.6 - Interpreter Changes"](http://docs.python.org/dev/whatsnew/2.6.html#interpreter-changes):
> Python can now be prevented from
> writing .pyc or .pyo files by
> supplying the [-B](http://docs.python.org/using/cmdline.html#cmdoption-B) switch to the Python
> interpreter, or by setting the
> [PYTHONDONTWRITEBYTECODE](http://docs.python.org/using/cmdline.html#envvar-PYTHONDONTWRITEBYTECODE) environment
> variable before running the
> interpreter. This setting is available
> to Python programs as the
> [`sys.dont_write_bytecode`](http://docs.python.org/library/sys.html#sys.dont_write_bytecode) variable, and
> Python code can change the value to
> modify the interpreter's behaviour.
Update 2010-11-27: Python 3.2 addresses the issue of cluttering source folders with `.pyc` files by introducing a special `__pycache__` subfolder, see [What's New in Python 3.2 - PYC Repository Directories](http://docs.python.org/dev/whatsnew/3.2.html#pep-3147-pyc-repository-directories). |
How to avoid .pyc files? | 154,443 | 204 | 2008-09-30T18:55:59Z | 154,640 | 20 | 2008-09-30T19:44:04Z | [
"python",
"compiler-construction",
"interpreter"
] | Can I run the python interpreter without generating the compiled .pyc files? | There actually IS a way to do it in Python 2.3+, but it's a bit esoteric. I don't know if you realize this, but you can do the following:
```
$ unzip -l /tmp/example.zip
Archive: /tmp/example.zip
Length Date Time Name
-------- ---- ---- ----
8467 11-26-02 22:30 jwzthreading.py
-------- -------
8467 1 file
$ ./python
Python 2.3 (#1, Aug 1 2003, 19:54:32)
>>> import sys
>>> sys.path.insert(0, '/tmp/example.zip') # Add .zip file to front of path
>>> import jwzthreading
>>> jwzthreading.__file__
'/tmp/example.zip/jwzthreading.py'
```
According to the [zipimport](http://docs.python.org/lib/module-zipimport.html) library:
> Any files may be present in the ZIP archive, but only files .py and .py[co] are available for import. ZIP import of dynamic modules (.pyd, .so) is disallowed. Note that if an archive only contains .py files, Python will not attempt to modify the archive by adding the corresponding .pyc or .pyo file, meaning that if a ZIP archive doesn't contain .pyc files, importing may be rather slow.
Thus, all you have to do is zip the files up, add the zipfile to your sys.path and then import them.
If you're building this for UNIX, you might also consider packaging your script using this recipe: [unix zip executable](http://code.activestate.com/recipes/497000/), but note that you might have to tweak this if you plan on using stdin or reading anything from sys.args (it CAN be done without too much trouble).
In my experience performance doesn't suffer too much because of this, but you should think twice before importing any very large modules this way. |
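The whole round trip can be sketched with the standard library alone (the module name and paths here are illustrative, not from the original answer):

```python
import importlib
import os
import sys
import tempfile
import zipfile

# Build a zip containing a single pure-Python module...
tmpdir = tempfile.mkdtemp()
zip_path = os.path.join(tmpdir, "example.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("greeting.py", "def hello():\n    return 'hi'\n")

# ...then import straight from it; no .pyc is written into the archive.
sys.path.insert(0, zip_path)
greeting = importlib.import_module("greeting")
print(greeting.hello())   # hi
```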
How to avoid .pyc files? | 154,443 | 204 | 2008-09-30T18:55:59Z | 9,562,273 | 72 | 2012-03-05T05:59:47Z | [
"python",
"compiler-construction",
"interpreter"
] | Can I run the python interpreter without generating the compiled .pyc files? | ```
import sys
sys.dont_write_bytecode = True
``` |
Is timsort general-purpose or Python-specific? | 154,504 | 29 | 2008-09-30T19:12:46Z | 154,812 | 22 | 2008-09-30T20:14:57Z | [
"python",
"algorithm",
"sorting"
] | > Timsort is an adaptive, stable,
> natural mergesort. It has supernatural
> performance on many kinds of partially
> ordered arrays (less than lg(N!)
> comparisons needed, and as few as
> N-1), yet as fast as Python's previous
> highly tuned samplesort hybrid on
> random arrays.
Have you seen [timsort](http://svn.python.org/projects/python/trunk/Objects/listsort.txt) used outside of CPython? Does it make sense? | The algorithm is pretty generic, but the benefits are rather Python-specific. Unlike most sorting routines, what Python's list.sort (which is what uses timsort) cares about is avoiding unnecessary comparisons, because generally comparisons are a *lot* more expensive than swapping items (which is always just a set of pointer copies) or even allocating some extra memory (because it's always just an array of pointers, and the overhead is small compared to the average overhead in any Python operation.)
If you're under similar constraints, then it may be suitable. I've yet to see any other case where comparisons are really that expensive, though :-) |
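That adaptivity is easy to observe by counting comparisons (a sketch; the counting wrapper is made up for illustration):

```python
import random

class Counted:
    """Wraps a value and counts < comparisons across all instances."""
    count = 0
    def __init__(self, v):
        self.v = v
    def __lt__(self, other):
        Counted.count += 1
        return self.v < other.v

def comparisons(values):
    Counted.count = 0
    sorted(Counted(v) for v in values)
    return Counted.count

n = 10_000
on_sorted = comparisons(range(n))                    # one run-detection pass
on_random = comparisons(random.sample(range(n), n))  # roughly n log n
# Timsort notices the existing run: far fewer comparisons on sorted input.
print(on_sorted, on_random)
```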
Is timsort general-purpose or Python-specific? | 154,504 | 29 | 2008-09-30T19:12:46Z | 1,060,238 | 27 | 2009-06-29T20:09:52Z | [
"python",
"algorithm",
"sorting"
] | > Timsort is an adaptive, stable,
> natural mergesort. It has supernatural
> performance on many kinds of partially
> ordered arrays (less than lg(N!)
> comparisons needed, and as few as
> N-1), yet as fast as Python's previous
> highly tuned samplesort hybrid on
> random arrays.
Have you seen [timsort](http://svn.python.org/projects/python/trunk/Objects/listsort.txt) used outside of CPython? Does it make sense? | Yes, it makes quite a bit of sense to use timsort outside of CPython, in specific, or Python, in general.
There is currently an [effort underway](http://bugs.sun.com/bugdatabase/view%5Fbug.do?bug%5Fid=6804124) to replace Java's "modified merge sort" with timsort, and the initial results are quite positive. |
Python module for wiki markup | 154,592 | 26 | 2008-09-30T19:32:59Z | 154,790 | 19 | 2008-09-30T20:11:43Z | [
"python",
"wiki",
"markup"
] | Is there a `Python` module for converting `wiki markup` to other languages (e.g. `HTML`)?
A similar question was asked here, [What's the easiest way to convert wiki markup to html](http://stackoverflow.com/questions/45991/whats-the-easiest-way-to-convert-wiki-markup-to-html), but no `Python` modules are mentioned.
Just curious. :) Cheers. | [mwlib](https://github.com/pediapress/mwlib) provides ways of converting MediaWiki formatted text into HTML, PDF, DocBook and OpenOffice formats. |
Python module for wiki markup | 154,592 | 26 | 2008-09-30T19:32:59Z | 154,972 | 7 | 2008-09-30T20:38:04Z | [
"python",
"wiki",
"markup"
] | Is there a `Python` module for converting `wiki markup` to other languages (e.g. `HTML`)?
A similar question was asked here, [What's the easiest way to convert wiki markup to html](http://stackoverflow.com/questions/45991/whats-the-easiest-way-to-convert-wiki-markup-to-html), but no `Python` modules are mentioned.
Just curious. :) Cheers. | You should look at a good parser for [Creole](http://wikicreole.org/) syntax: [creole.py](http://wiki.sheep.art.pl/Wiki%20Creole%20Parser%20in%20Python). It can convert Creole (which is "a common wiki markup language to be used across different wikis") to HTML. |
Python module for wiki markup | 154,592 | 26 | 2008-09-30T19:32:59Z | 155,184 | 11 | 2008-09-30T21:31:36Z | [
"python",
"wiki",
"markup"
] | Is there a `Python` module for converting `wiki markup` to other languages (e.g. `HTML`)?
A similar question was asked here, [What's the easiest way to convert wiki markup to html](http://stackoverflow.com/questions/45991/whats-the-easiest-way-to-convert-wiki-markup-to-html), but no `Python` modules are mentioned.
Just curious. :) Cheers. | Django uses the following libraries for markup:
* [Markdown](http://www.freewisdom.org/projects/python-markdown/)
* [Textile](http://pypi.python.org/pypi/textile)
* [reStructuredText](http://docutils.sourceforge.net/rst.html)
You can see [how they're used in Django](http://code.djangoproject.com/browser/django/trunk/django/contrib/markup/templatetags/markup.py). |
Python Dependency Injection Framework | 156,230 | 41 | 2008-10-01T04:25:33Z | 156,553 | 12 | 2008-10-01T07:20:48Z | [
"python",
"dependency-injection"
] | Is there a framework equivalent to Guice (<http://code.google.com/p/google-guice>) for Python? | I haven't used it, but the [Spring Python](http://springpython.webfactional.com/) framework is based on Spring and implements [Inversion of Control](http://static.springsource.org/spring-python/1.2.x/sphinx/html/objects.html).
There also appears to be a Guice in Python project: [snake-guice](http://code.google.com/p/snake-guice/) |
Python Dependency Injection Framework | 156,230 | 41 | 2008-10-01T04:25:33Z | 204,482 | 24 | 2008-10-15T12:16:03Z | [
"python",
"dependency-injection"
] | Is there a framework equivalent to Guice (<http://code.google.com/p/google-guice>) for Python? | [Spring Python](http://springpython.webfactional.com) is an offshoot of the Java-based Spring Framework and Spring Security, targeted for Python. This project currently contains the following features:
* [Inversion Of Control (dependency injection)](http://martinfowler.com/articles/injection.html) - use either classic XML, or the python @Object decorator (similar to the Spring JavaConfig subproject) to wire things together. While the @Object format isn't identical to the Guice style (centralized wiring vs. wiring information in each class), it is a valuable way to wire your python app.
* [Aspect-oriented Programming](http://en.wikipedia.org/wiki/Aspect-oriented_programming) - apply interceptors in a horizontal programming paradigm (instead of vertical OOP inheritance) for things like transactions, security, and caching.
* DatabaseTemplate - Reading from the database requires a monotonous cycle of opening cursors, reading rows, and closing cursors, along with exception handlers. With this template class, all you need is the SQL query and row-handling function. Spring Python does the rest.
* Database Transactions - Wrapping multiple database calls with transactions can make your code hard to read. This module provides multiple ways to define transactions without making things complicated.
* Security - Plugin security interceptors to lock down access to your methods, utilizing both authentication and domain authorization.
* Remoting - It is easy to convert your local application into a distributed one. If you have already built your client and server pieces using the IoC container, then going from local to distributed is just a configuration change.
* Samples - to help demonstrate various features of Spring Python, some sample applications have been created:
+ PetClinic - Spring Framework's sample web app has been rebuilt from the ground up using python web containers including: [CherryPy](http://cherrypy.org/). Go check it out for an example of how to use this framework. (NOTE: Other python web frameworks will be added to this list in the future).
+ Spring Wiki - Wikis are powerful ways to store and manage content, so we created a simple one as a demo!
+ Spring Bot - Use Spring Python to build a tiny bot to manage the IRC channel of your open source project. |
Python Dependency Injection Framework | 156,230 | 41 | 2008-10-01T04:25:33Z | 275,184 | 9 | 2008-11-08T20:50:17Z | [
"python",
"dependency-injection"
] | Is there a framework equivalent to Guice (<http://code.google.com/p/google-guice>) for Python? | As an alternative to monkeypatching, I like DI. A nascent project such as <http://code.google.com/p/snake-guice/> may fit the bill.
Or see the blog post [Dependency Injection in Python](http://web.archive.org/web/20090628142546/http://planet.open4free.org/tag/dependency%20injection/) by Dennis Kempin (Aug '08). |
Python Dependency Injection Framework | 156,230 | 41 | 2008-10-01T04:25:33Z | 12,971,813 | 12 | 2012-10-19T09:59:39Z | [
"python",
"dependency-injection"
] | Is there a framework equivalent to Guice (<http://code.google.com/p/google-guice>) for Python? | I like this simple and neat framework.
<http://pypi.python.org/pypi/injector/>
> Dependency injection as a formal pattern is less useful in Python than
> in other languages, primarily due to its support for keyword
> arguments, the ease with which objects can be mocked, and its dynamic
> nature.
>
> That said, a framework for assisting in this process can remove a lot
> of boiler-plate from larger applications. That's where Injector can
> help. It automatically and transitively provides keyword arguments
> with their values. As an added benefit, Injector encourages nicely
> compartmentalized code through the use of Module s.
>
> While being inspired by Guice, it does not slavishly replicate its
> API. Providing a Pythonic API trumps faithfulness. |
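The keyword-argument point above is easy to see in miniature: constructor injection needs no framework at all (all names here are hypothetical):

```python
class SmtpMailer:
    def send(self, to, body):
        return "smtp:%s" % to

class FakeMailer:
    def send(self, to, body):
        return "fake:%s" % to

class SignupService:
    # The dependency arrives via a keyword argument with a default, so
    # production code and tests wire it differently with no container.
    def __init__(self, mailer=None):
        self.mailer = mailer or SmtpMailer()

    def register(self, email):
        return self.mailer.send(email, "welcome")

svc = SignupService(mailer=FakeMailer())   # test wiring
print(svc.register("a@example.com"))       # fake:a@example.com
```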
Get timer ticks in Python | 156,330 | 40 | 2008-10-01T05:24:38Z | 156,335 | 29 | 2008-10-01T05:27:03Z | [
"python",
"timer"
] | I'm just trying to time a piece of code. The pseudocode looks like:
```
start = get_ticks()
do_long_code()
print "It took " + (get_ticks() - start) + " seconds."
```
How does this look in Python?
More specifically, how do I get the number of ticks since midnight (or however Python organizes that timing)? | What you need is `time()` function from `time` module:
```
import time
start = time.time()
do_long_code()
print "it took", time.time() - start, "seconds."
```
You can use [timeit](http://docs.python.org/lib/module-timeit.html) module for more options though. |
Get timer ticks in Python | 156,330 | 40 | 2008-10-01T05:24:38Z | 157,423 | 31 | 2008-10-01T12:49:11Z | [
"python",
"timer"
] | I'm just trying to time a piece of code. The pseudocode looks like:
```
start = get_ticks()
do_long_code()
print "It took " + (get_ticks() - start) + " seconds."
```
How does this look in Python?
More specifically, how do I get the number of ticks since midnight (or however Python organizes that timing)? | In the `time` module, there are two timing functions: `time` and `clock`. `time` gives you "wall" time, if this is what you care about.
However, the Python [docs](http://docs.python.org/lib/module-time.html) say that `clock` should be used for benchmarking. Note that `clock` behaves differently on different systems:
* on MS Windows, it uses the Win32 function QueryPerformanceCounter(), with "resolution typically better than a microsecond". It has no special meaning, it's just a number (it starts counting the first time you call `clock` in your process).
```
# ms windows
t0= time.clock()
do_something()
t= time.clock() - t0 # t is wall seconds elapsed (floating point)
```
* on \*nix, `clock` reports CPU time. Now, this is different, and most probably the value you want, since your program hardly ever is the only process requesting CPU time (even if you have no other processes, the kernel uses CPU time now and then). So, this number, which typically is smaller¹ than the wall time (i.e. time.time() - t0), is more meaningful when benchmarking code:
```
# linux
t0= time.clock()
do_something()
t= time.clock() - t0 # t is CPU seconds elapsed (floating point)
```
Apart from all that, the [timeit](http://docs.python.org/lib/module-timeit.html) module has the `Timer` class that is supposed to use what's best for benchmarking from the available functionality.
¹ unless threading gets in the way…
² Python ≥3.3: there are [`time.perf_counter()` and `time.process_time()`](http://www.python.org/dev/peps/pep-0418/#id18). `perf_counter` is being used by the `timeit` module. |
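A sketch of those modern (Python ≥3.3) replacements, mirroring the wall-time vs. CPU-time split described above:

```python
import time

start_wall = time.perf_counter()   # high-resolution wall-clock timer
start_cpu = time.process_time()    # CPU time of this process only

total = sum(range(1_000_000))      # some work to measure

wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu
print("wall %.6fs, cpu %.6fs" % (wall, cpu))
```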
Get all items from thread Queue | 156,360 | 9 | 2008-10-01T05:40:06Z | 156,564 | 11 | 2008-10-01T07:25:30Z | [
"python",
"multithreading",
"queue"
] | I have one thread that writes results into a Queue.
In another thread (GUI), I periodically (in the IDLE event) check if there are results in the queue, like this:
```
def queue_get_all(q):
items = []
while 1:
try:
items.append(q.get_nowait())
except Empty, e:
break
return items
```
Is this a good way to do it?
**Edit:**
> I'm asking because sometimes the
> waiting thread gets stuck for a few
> seconds without taking out new
> results.
The "stuck" problem turned out to be because I was doing the processing in the idle event handler, without making sure that such events are actually generated by calling `wx.WakeUpIdle`, as is recommended. | If you're always pulling all available items off the queue, is there any real point in using a queue, rather than just a list with a lock? ie:
```
from __future__ import with_statement
import threading
class ItemStore(object):
def __init__(self):
self.lock = threading.Lock()
self.items = []
def add(self, item):
with self.lock:
self.items.append(item)
def getAll(self):
with self.lock:
items, self.items = self.items, []
return items
```
If you're also pulling them individually, and making use of the blocking behaviour for empty queues, then you should use Queue, but your use case looks much simpler, and might be better served by the above approach.
**[Edit2]** I'd missed the fact that you're polling the queue from an idle loop, and from your update, I see that the problem isn't related to contention, so the below approach isn't really relevant to your problem. I've left it in in case anyone finds a blocking variant of this useful:
For cases where you do want to block until you get at least one result, you can modify the above code to wait for data to become available through being signalled by the producer thread. Eg.
```
class ItemStore(object):
def __init__(self):
self.cond = threading.Condition()
self.items = []
def add(self, item):
with self.cond:
self.items.append(item)
self.cond.notify() # Wake 1 thread waiting on cond (if any)
def getAll(self, blocking=False):
with self.cond:
# If blocking is true, always return at least 1 item
while blocking and len(self.items) == 0:
self.cond.wait()
items, self.items = self.items, []
return items
``` |
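A quick usage sketch of the non-blocking `ItemStore` above, with a single producer thread drained by one `getAll()` call (modern Python syntax):

```python
import threading

class ItemStore(object):
    """Lock-protected list, as defined in the answer above."""
    def __init__(self):
        self.lock = threading.Lock()
        self.items = []

    def add(self, item):
        with self.lock:
            self.items.append(item)

    def getAll(self):
        with self.lock:
            items, self.items = self.items, []
            return items

store = ItemStore()

def producer():
    for i in range(100):
        store.add(i)

worker = threading.Thread(target=producer)
worker.start()
worker.join()

results = store.getAll()   # drains the store in one locked swap
print(results == list(range(100)), store.getAll() == [])
```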
Customized command line parsing in Python | 156,873 | 7 | 2008-10-01T09:35:36Z | 156,949 | 10 | 2008-10-01T10:09:19Z | [
"python",
"parsing",
"shell",
"command-line",
"arguments"
] | I'm writing a shell for a project of mine, which by design parses commands that looks like this:
COMMAND\_NAME ARG1="Long Value" ARG2=123 [email protected]
My problem is that Python's command line parsing libraries (getopt and optparse) force me to use '-' or '--' in front of the arguments. This behavior doesn't match my requirements.
Any ideas how this can be solved? Any existing library for this? | You could split them up with shlex.split(), which can handle the quoted values you have, and pretty easily parse this with a very simple regular expression. Or, you can just use regular expressions for both splitting and parsing. Or simply use split().
```
import shlex  # handles the quoted values

args = {}
for arg in shlex.split(cmdln_args):
key, value = arg.split('=', 1)
args[key] = value
``` |
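For instance, parsing a command line in the question's format might look like this (a sketch; the sample command string is illustrative):

```python
import shlex

cmdline = 'COMMAND_NAME ARG1="Long Value" ARG2=123'
parts = shlex.split(cmdline)  # respects the double quotes
command, arg_parts = parts[0], parts[1:]

args = {}
for arg in arg_parts:
    key, value = arg.split('=', 1)
    args[key] = value

print(command, args)  # COMMAND_NAME {'ARG1': 'Long Value', 'ARG2': '123'}
```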
Customized command line parsing in Python | 156,873 | 7 | 2008-10-01T09:35:36Z | 157,100 | 9 | 2008-10-01T10:57:17Z | [
"python",
"parsing",
"shell",
"command-line",
"arguments"
] | I'm writing a shell for a project of mine, which by design parses commands that looks like this:
COMMAND\_NAME ARG1="Long Value" ARG2=123 [email protected]
My problem is that Python's command line parsing libraries (getopt and optparse) force me to use '-' or '--' in front of the arguments. This behavior doesn't match my requirements.
Any ideas how this can be solved? Any existing library for this? | 1. Try to follow "[Standards for Command Line Interfaces](http://www.gnu.org/prep/standards/standards.html#Command_002dLine-Interfaces)"
2. Convert your arguments (as Thomas suggested) to OptionParser format.
```
parser.parse_args(["--"+p if "=" in p else p for p in sys.argv[1:]])
```
If command-line arguments are not in sys.argv or a similar list but in a string then (as ironfroggy suggested) use `shlex.split()`.
```
parser.parse_args(["--"+p if "=" in p else p for p in shlex.split(argsline)])
``` |
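A runnable sketch of that transformation, feeding the prefixed tokens into `optparse` (the option names here are assumptions for illustration; `argparse` works the same way):

```python
import shlex
from optparse import OptionParser

parser = OptionParser()
parser.add_option('--ARG1')
parser.add_option('--ARG2')

argsline = 'ARG1="Long Value" ARG2=123'
# Prefix only KEY=VALUE tokens with '--' so the parser accepts them.
options, positional = parser.parse_args(
    ['--' + p if '=' in p else p for p in shlex.split(argsline)])

print(options.ARG1, options.ARG2)  # Long Value 123
```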
Emacs and Python | 157,018 | 34 | 2008-10-01T10:29:22Z | 157,074 | 20 | 2008-10-01T10:47:59Z | [
"python",
"emacs"
] | I recently started learning [Emacs](http://www.gnu.org/software/emacs/). I went through the tutorial, read some introductory articles, so far so good.
Now I want to use it for Python development. From what I understand, there are two separate Python modes for Emacs: python-mode.el, which is part of the Python project; and python.el, which is part of Emacs 22.
I read all information I could find but most of it seems fairly outdated and I'm still confused.
The questions:
1. What is their difference?
2. Which mode should I install and use?
3. Are there other Emacs add-ons that are essential for Python development?
Relevant links:
* [EmacsEditor](http://wiki.python.org/moin/EmacsEditor) @ wiki.python.org
* [PythonMode](http://www.emacswiki.org/cgi-bin/wiki/PythonMode) @ emacswiki.org | If you are using GNU Emacs 21 or before, or XEmacs, use python-mode.el. The GNU Emacs 22 python.el won't work on them. On GNU Emacs 22, python.el does work, and ties in better with GNU Emacs's own symbol parsing and completion, ElDoc, etc. I use XEmacs myself, so I don't use it, and I have heard people complain that it didn't work very nicely in the past, but there are updates available that fix some of the issues (for instance, on the emacswiki page you link), and you would hope some were integrated upstream by now. If I were the GNU Emacs kind, I would use python.el until I found specific reasons not to.
The python-mode.el's single biggest problem as far as I've seen is that it doesn't quite understand triple-quoted strings. It treats them as single-quoted, meaning that a single quote inside a triple-quoted string will throw off the syntax highlighting: it'll think the string has ended there. You may also need to change your auto-mode-alist to turn on python-mode for .py files; I don't remember if that's still the case but my init.el has been setting auto-mode-alist for many years now.
As for other addons, nothing I would consider 'essential'. XEmacs's func-menu is sometimes useful, it gives you a little function/class browser menu for the current file. I don't remember if GNU Emacs has anything similar. I have a rst-mode for reStructuredText editing, as that's used in some projects. Tying into whatever VC you use, if any, may be useful to you, but there is builtin support for most and easily downloaded .el files for the others. |
Emacs and Python | 157,018 | 34 | 2008-10-01T10:29:22Z | 158,868 | 8 | 2008-10-01T17:52:18Z | [
"python",
"emacs"
] | I recently started learning [Emacs](http://www.gnu.org/software/emacs/). I went through the tutorial, read some introductory articles, so far so good.
Now I want to use it for Python development. From what I understand, there are two separate Python modes for Emacs: python-mode.el, which is part of the Python project; and python.el, which is part of Emacs 22.
I read all information I could find but most of it seems fairly outdated and I'm still confused.
The questions:
1. What is their difference?
2. Which mode should I install and use?
3. Are there other Emacs add-ons that are essential for Python development?
Relevant links:
* [EmacsEditor](http://wiki.python.org/moin/EmacsEditor) @ wiki.python.org
* [PythonMode](http://www.emacswiki.org/cgi-bin/wiki/PythonMode) @ emacswiki.org | [This site](http://www.rwdev.eu/articles/emacspyeng) has a description of how to get Python code completion in Emacs.
[Ropemacs](http://rope.sourceforge.net/ropemacs.html) is a way to get Rope to work in emacs. I haven't had extensive experience with either, but they're worth looking into. |
Emacs and Python | 157,018 | 34 | 2008-10-01T10:29:22Z | 4,569,972 | 7 | 2010-12-31T12:00:22Z | [
"python",
"emacs"
] | I recently started learning [Emacs](http://www.gnu.org/software/emacs/). I went through the tutorial, read some introductory articles, so far so good.
Now I want to use it for Python development. From what I understand, there are two separate Python modes for Emacs: python-mode.el, which is part of the Python project; and python.el, which is part of Emacs 22.
I read all information I could find but most of it seems fairly outdated and I'm still confused.
The questions:
1. What is their difference?
2. Which mode should I install and use?
3. Are there other Emacs add-ons that are essential for Python development?
Relevant links:
* [EmacsEditor](http://wiki.python.org/moin/EmacsEditor) @ wiki.python.org
* [PythonMode](http://www.emacswiki.org/cgi-bin/wiki/PythonMode) @ emacswiki.org | Given the number of times I have several open buffers all called `__init__.py`, I consider the [uniquify](http://www.emacswiki.org/emacs/uniquify) library essential for python development.
Pyflakes also aids productivity. |
Most pythonic way of counting matching elements in something iterable | 157,039 | 13 | 2008-10-01T10:37:20Z | 157,141 | 14 | 2008-10-01T11:09:14Z | [
"python",
"list-comprehension"
] | I have an iterable of entries on which I would like to gather some simple statistics, say the count of all numbers divisible by two and the count of all numbers divisible by three.
My first alternative, while only iterating through the list once and avoiding the list expansion (and keeping the [split loop](http://www.refactoring.com/catalog/splitLoop.html) refactoring in mind), looks rather bloated:
(alt 1)
```
r = xrange(1, 10)
twos = 0
threes = 0
for v in r:
if v % 2 == 0:
twos+=1
if v % 3 == 0:
threes+=1
print twos
print threes
```
This looks rather nice, but has the drawback of expanding the expression to a list:
(alt 2)
```
r = xrange(1, 10)
print len([1 for v in r if v % 2 == 0])
print len([1 for v in r if v % 3 == 0])
```
What I would really like is something like a function like this:
(alt 3)
```
def count(iterable):
n = 0
for i in iterable:
n += 1
return n
r = xrange(1, 10)
print count(1 for v in r if v % 2 == 0)
print count(1 for v in r if v % 3 == 0)
```
But this looks a lot like something that could be done without a function. The final variant is this:
(alt 4)
```
r = xrange(1, 10)
print sum(1 for v in r if v % 2 == 0)
print sum(1 for v in r if v % 3 == 0)
```
and while it is the smallest (and in my book probably the most elegant), it doesn't feel like it expresses the intent very well.
So, my question to you is:
Which alternative do you like best to gather these types of stats? Feel free to supply your own alternative if you have something better.
To clear up some confusion below:
* In reality my filter predicates are more complex than just this simple test.
* The objects I iterate over are larger and more complex than just numbers
* My filter functions are quite different and hard to parameterize into one predicate | Having to iterate over the list multiple times isn't elegant IMHO.
I'd probably create a function that allows doing:
```
twos, threes = countmatching(xrange(1,10),
lambda a: a % 2 == 0,
lambda a: a % 3 == 0)
```
A starting point would be something like this:
```
def countmatching(iterable, *predicates):
v = [0] * len(predicates)
for e in iterable:
for i,p in enumerate(predicates):
if p(e):
v[i] += 1
return tuple(v)
```
Btw, "itertools recipes" has a recipe for doing much like your alt4.
```
from itertools import imap  # Python 2; in Python 3 the built-in map works

def quantify(seq, pred=None):
"Count how many times the predicate is true in the sequence"
return sum(imap(pred, seq))
``` |
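A runnable version of `countmatching` applied to the question's example (Python 3 syntax):

```python
def countmatching(iterable, *predicates):
    """Count matches for several predicates in a single pass."""
    counts = [0] * len(predicates)
    for element in iterable:
        for i, predicate in enumerate(predicates):
            if predicate(element):
                counts[i] += 1
    return tuple(counts)

twos, threes = countmatching(range(1, 10),
                             lambda a: a % 2 == 0,
                             lambda a: a % 3 == 0)
print(twos, threes)  # 4 3
```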
Accurate timestamping in Python | 157,359 | 12 | 2008-10-01T12:36:17Z | 157,439 | 11 | 2008-10-01T12:54:25Z | [
"python",
"timestamp",
"timer"
] | I've been building an error logging app recently and was after a way of accurately timestamping the incoming data. When I say accurately I mean each timestamp should be accurate relative to each other (no need to sync to an atomic clock or anything like that).
I've been using datetime.now() as a first stab, but this isn't perfect:
```
>>> for i in range(0,1000):
... datetime.datetime.now()
...
datetime.datetime(2008, 10, 1, 13, 17, 27, 562000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 562000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 562000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 562000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 578000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 578000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 578000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 578000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 578000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 609000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 609000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 609000)
etc.
```
The changes between clocks for the first second of samples looks like this:
```
uSecs difference
562000
578000 16000
609000 31000
625000 16000
640000 15000
656000 16000
687000 31000
703000 16000
718000 15000
750000 32000
765000 15000
781000 16000
796000 15000
828000 32000
843000 15000
859000 16000
890000 31000
906000 16000
921000 15000
937000 16000
968000 31000
984000 16000
```
So it looks like the timer data is only updated every ~15-32ms on my machine. The problem comes when we come to analyse the data because sorting by something other than the timestamp and then sorting by timestamp again can leave the data in the wrong order (chronologically). It would be nice to have the time stamps accurate to the point that any call to the time stamp generator gives a unique timestamp.
I had been considering some methods involving using a time.clock() call added to a starting datetime, but would appreciate a solution that would work accurately across threads on the same machine. Any suggestions would be very gratefully received. | time.clock() only measures wallclock time on Windows. On other systems, time.clock() actually measures CPU-time. On those systems time.time() is more suitable for wallclock time, and it has as high a resolution as Python can manage -- which is as high as the OS can manage; usually using gettimeofday(3) (microsecond resolution) or ftime(3) (millisecond resolution.) Other OS restrictions actually make the real resolution a lot higher than that. datetime.datetime.now() uses time.time(), so time.time() directly won't be better.
For the record, if I use datetime.datetime.now() in a loop, I see about a 1/10000 second resolution. From looking at your data, you have much, much coarser resolution than that. I'm not sure if there's anything Python as such can do, although you may be able to convince the OS to do better through other means.
I seem to recall that on Windows, time.clock() is actually (slightly) more accurate than time.time(), but it measures wallclock since the first call to time.clock(), so you have to remember to 'initialize' it first. |
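A small sketch to estimate the wall-clock timer's update granularity on the current machine (modern Python; on 3.3+, `time.perf_counter()` sidesteps the `clock`/`time` platform differences discussed above):

```python
import time

# Spin until time.time() visibly ticks, recording a few step sizes.
steps = []
last = time.time()
while len(steps) < 5:
    now = time.time()
    if now != last:
        steps.append(now - last)
        last = now

# The smallest observed step approximates the clock's resolution.
resolution = min(steps)
print(resolution)
```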
Accurate timestamping in Python | 157,359 | 12 | 2008-10-01T12:36:17Z | 157,711 | 7 | 2008-10-01T13:55:57Z | [
"python",
"timestamp",
"timer"
] | I've been building an error logging app recently and was after a way of accurately timestamping the incoming data. When I say accurately I mean each timestamp should be accurate relative to each other (no need to sync to an atomic clock or anything like that).
I've been using datetime.now() as a first stab, but this isn't perfect:
```
>>> for i in range(0,1000):
... datetime.datetime.now()
...
datetime.datetime(2008, 10, 1, 13, 17, 27, 562000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 562000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 562000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 562000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 578000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 578000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 578000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 578000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 578000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 609000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 609000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 609000)
etc.
```
The changes between clocks for the first second of samples looks like this:
```
uSecs difference
562000
578000 16000
609000 31000
625000 16000
640000 15000
656000 16000
687000 31000
703000 16000
718000 15000
750000 32000
765000 15000
781000 16000
796000 15000
828000 32000
843000 15000
859000 16000
890000 31000
906000 16000
921000 15000
937000 16000
968000 31000
984000 16000
```
So it looks like the timer data is only updated every ~15-32ms on my machine. The problem comes when we come to analyse the data because sorting by something other than the timestamp and then sorting by timestamp again can leave the data in the wrong order (chronologically). It would be nice to have the time stamps accurate to the point that any call to the time stamp generator gives a unique timestamp.
I had been considering some methods involving using a time.clock() call added to a starting datetime, but would appreciate a solution that would work accurately across threads on the same machine. Any suggestions would be very gratefully received. | You're unlikely to get sufficiently fine-grained control that you can completely eliminate the possibility
of duplicate timestamps - you'd need resolution smaller than the time it takes to generate a datetime object. There are a couple of other approaches you might take to deal with it:
1. Deal with it. Leave your timestamps non-unique as they are, but rely on Python's sort being stable to deal with reordering problems. Sorting on timestamp *first*, then something else will retain the timestamp ordering - you just have to be careful to always start from the timestamp-ordered list every time, rather than doing multiple sorts on the same list.
2. Append your own value to enforce uniqueness. Eg. include an incrementing integer value as part of the key, or append such a value only if timestamps are different. Eg.
The following will guarantee unique timestamp values:
```
import threading
from datetime import datetime

class TimeStamper(object):
def __init__(self):
self.lock = threading.Lock()
self.prev = None
self.count = 0
def getTimestamp(self):
with self.lock:
ts = str(datetime.now())
if ts == self.prev:
ts +='.%04d' % self.count
self.count += 1
else:
self.prev = ts
self.count = 1
return ts
```
For multiple processes (rather than threads), it gets a bit trickier though. |
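A usage sketch of the `TimeStamper` above (imports added; repeated calls in a tight loop exercise the collision branch):

```python
import threading
from datetime import datetime

class TimeStamper(object):
    """As above: appends a zero-padded counter when timestamps collide."""
    def __init__(self):
        self.lock = threading.Lock()
        self.prev = None
        self.count = 0

    def getTimestamp(self):
        with self.lock:
            ts = str(datetime.now())
            if ts == self.prev:
                ts += '.%04d' % self.count
                self.count += 1
            else:
                self.prev = ts
                self.count = 1
            return ts

stamper = TimeStamper()
stamps = [stamper.getTimestamp() for _ in range(1000)]
print(len(set(stamps)))  # all 1000 should be distinct
```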
Python 2.5 dictionary 2 key sort | 157,424 | 14 | 2008-10-01T12:50:19Z | 157,445 | 17 | 2008-10-01T12:56:01Z | [
"python"
] | I have a dictionary of 200,000 items (the keys are strings and the values are integers).
What is the best/most pythonic way to print the items sorted by descending value then ascending key (i.e. a 2 key sort)?
```
a={ 'keyC':1, 'keyB':2, 'keyA':1 }
b = a.items()
b.sort( key=lambda a:a[0])
b.sort( key=lambda a:a[1], reverse=True )
print b
>>>[('keyB', 2), ('keyA', 1), ('keyC', 1)]
``` | You can't sort dictionaries. You have to sort the list of items.
Previous versions were wrong. When you have a numeric value, it's easy to sort in reverse order. These will do that. But this isn't general. This only works because the value is numeric.
```
a = { 'key':1, 'another':2, 'key2':1 }
b= a.items()
b.sort( key=lambda a:(-a[1],a[0]) )
print b
```
Here's an alternative, using an explicit function instead of a lambda and the cmp instead of the key option.
```
def valueKeyCmp( a, b ):
return cmp( (-a[1], a[0]), (-b[1], b[0] ) )
b.sort( cmp= valueKeyCmp )
print b
```
The more general solution is actually two separate sorts
```
b.sort( key=lambda a:a[1], reverse=True )
b.sort( key=lambda a:a[0] )
print b
``` |
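The same two-key sort in modern Python, either as a single tuple-key sort (negating the numeric value for the descending part) or as two stable passes:

```python
a = {'keyC': 1, 'keyB': 2, 'keyA': 1}

# One pass: negate the numeric value for descending order,
# then the key itself breaks ties ascending.
by_value_then_key = sorted(a.items(), key=lambda item: (-item[1], item[0]))
print(by_value_then_key)  # [('keyB', 2), ('keyA', 1), ('keyC', 1)]

# Equivalent two-pass approach relying on sort stability:
b = sorted(a.items(), key=lambda item: item[0])
b.sort(key=lambda item: item[1], reverse=True)
print(b == by_value_then_key)  # True
```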
Hiding a password in a (python) script | 157,938 | 75 | 2008-10-01T14:37:17Z | 157,975 | 69 | 2008-10-01T14:43:51Z | [
"python",
"security"
] | I have got a python script which is creating an ODBC connection. The ODBC connection is generated with a connection string. In this connection string I have to include the username and password for this connection.
Is there an easy way to obscure this password in the file (just so that nobody can read the password when I'm editing the file)? | [Base64 encoding](http://docs.python.org/lib/module-base64.html) is in the standard library and will do to stop shoulder surfers:
```
>>> import base64
>>> print base64.b64encode("password")
cGFzc3dvcmQ=
>>> print base64.b64decode("cGFzc3dvcmQ=")
password
``` |
Hiding a password in a (python) script | 157,938 | 75 | 2008-10-01T14:37:17Z | 158,221 | 10 | 2008-10-01T15:28:38Z | [
"python",
"security"
] | I have got a python script which is creating an ODBC connection. The ODBC connection is generated with a connection string. In this connection string I have to include the username and password for this connection.
Is there an easy way to obscure this password in the file (just so that nobody can read the password when I'm editing the file)? | How about importing the username and password from a file external to the script? That way even if someone got hold of the script, they wouldn't automatically get the password. |
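One minimal way to sketch that idea (the file name, format, and credential values here are assumptions, not part of the original answer):

```python
import json
import os
import tempfile

# Assumed setup: a JSON credentials file created at deploy time,
# living outside the code base with owner-only permissions.
cred_path = os.path.join(tempfile.mkdtemp(), 'credentials.json')
with open(cred_path, 'w') as f:
    json.dump({'username': 'dbuser', 'password': 'not-in-the-script'}, f)
os.chmod(cred_path, 0o600)  # owner read/write only (best effort on Windows)

# The script reads the secrets at run time instead of embedding them.
with open(cred_path) as f:
    creds = json.load(f)

conn_string = 'UID=%s;PWD=%s' % (creds['username'], creds['password'])
print(conn_string == 'UID=dbuser;PWD=not-in-the-script')
```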
Hiding a password in a (python) script | 157,938 | 75 | 2008-10-01T14:37:17Z | 158,248 | 33 | 2008-10-01T15:34:13Z | [
"python",
"security"
] | I have got a python script which is creating an ODBC connection. The ODBC connection is generated with a connection string. In this connection string I have to include the username and password for this connection.
Is there an easy way to obscure this password in the file (just so that nobody can read the password when I'm editing the file)? | Douglas F Shearer's is the generally approved solution in Unix when you need to specify a password for a remote login.
You add a **--password-from-file** option to specify the path and read plaintext from a file.
The file can then be in the user's own area protected by the operating system.
It also allows different users to automatically pick up their own file.
For passwords that the user of the script isn't allowed to know - you can run the script with elevated permissions and have the password file owned by that root/admin user. |
Hiding a password in a (python) script | 157,938 | 75 | 2008-10-01T14:37:17Z | 158,387 | 15 | 2008-10-01T16:09:40Z | [
"python",
"security"
] | I have got a python script which is creating an ODBC connection. The ODBC connection is generated with a connection string. In this connection string I have to include the username and password for this connection.
Is there an easy way to obscure this password in the file (just so that nobody can read the password when I'm editing the file)? | The best solution, assuming the username and password can't be given at runtime by the user, is probably a separate source file containing only variable initialization for the username and password that is imported into your main code. This file would only need editing when the credentials change. Otherwise, if you're only worried about shoulder surfers with average memories, base 64 encoding is probably the easiest solution. ROT13 is just too easy to decode manually, isn't case sensitive and retains too much meaning in its encrypted state. Encode your password and user ID outside the Python script. Have the script decode them at runtime for use.
Giving scripts credentials for automated tasks is always a risky proposal. Your script should have its own credentials and the account it uses should have no access other than exactly what is necessary. At least the password should be long and rather random. |
Hiding a password in a (python) script | 157,938 | 75 | 2008-10-01T14:37:17Z | 160,042 | 10 | 2008-10-01T22:26:09Z | [
"python",
"security"
] | I have got a python script which is creating an ODBC connection. The ODBC connection is generated with a connection string. In this connection string I have to include the username and password for this connection.
Is there an easy way to obscure this password in the file (just so that nobody can read the password when I'm editing the file)? | base64 is the way to go for your simple needs. There is no need to import anything:
```
>>> 'your string'.encode('base64')
'eW91ciBzdHJpbmc=\n'
>>> _.decode('base64')
'your string'
``` |
Hiding a password in a (python) script | 157,938 | 75 | 2008-10-01T14:37:17Z | 6,451,826 | 17 | 2011-06-23T09:17:55Z | [
"python",
"security"
] | I have got a python script which is creating an ODBC connection. The ODBC connection is generated with a connection string. In this connection string I have to include the username and password for this connection.
Is there an easy way to obscure this password in the file (just so that nobody can read the password when I'm editing the file)? | If you are working on a Unix system, take advantage of the netrc module in the standard Python library. It reads passwords from a separate text file (.netrc), which has the format described [here](http://www.mavetju.org/unix/netrc.php).
Here is a small usage example:
```
import netrc
# Define which host in the .netrc file to use
HOST = 'mailcluster.loopia.se'
# Read from the .netrc file in your home directory
secrets = netrc.netrc()
username, account, password = secrets.authenticators( HOST )
print username, password
``` |
Hiding a password in a (python) script | 157,938 | 75 | 2008-10-01T14:37:17Z | 22,821,470 | 10 | 2014-04-02T19:45:46Z | [
"python",
"security"
] | I have got a python script which is creating an ODBC connection. The ODBC connection is generated with a connection string. In this connection string I have to include the username and password for this connection.
Is there an easy way to obscure this password in the file (just so that nobody can read the password when I'm editing the file)? | Here is a simple method:
1. Create a python module - let's call it peekaboo.py.
2. In peekaboo.py, include both the password and any code needing that password
3. Create a compiled version - peekaboo.pyc - by importing this module (via python commandline, etc...).
4. Now, delete peekaboo.py.
5. You can now happily import peekaboo relying only on peekaboo.pyc. Since peekaboo.pyc is byte-compiled, it is not readable to the casual user.
This should be a bit more secure than base64 decoding - although it is vulnerable to a .pyc decompiler. |
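A sketch of those steps end-to-end (the module contents, directory, and secret are illustrative; Python 3 can import directly from a top-level `.pyc` via its sourceless loader):

```python
import os
import py_compile
import sys
import tempfile

workdir = tempfile.mkdtemp()

# Steps 1-2: a module holding the secret plus the code that needs it.
module_source = os.path.join(workdir, 'peekaboo.py')
with open(module_source, 'w') as f:
    f.write('PASSWORD = "s3cret"\n'
            'def connect():\n'
            '    return "connecting with " + PASSWORD\n')

# Step 3: byte-compile it; cfile pins the .pyc next to where the source was.
py_compile.compile(module_source, cfile=os.path.join(workdir, 'peekaboo.pyc'))

# Step 4: delete the readable source.
os.remove(module_source)

# Step 5: import relying only on peekaboo.pyc.
sys.path.insert(0, workdir)
import peekaboo
print(peekaboo.connect())  # connecting with s3cret
```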
Python module dependency | 158,268 | 15 | 2008-10-01T15:38:58Z | 158,326 | 7 | 2008-10-01T15:52:34Z | [
"python",
"module",
"circular-dependency"
] | OK, I have two modules, each containing a class; the problem is that their classes reference each other.
Let's say for example I had a room module and a person module containing CRoom and CPerson.
The CRoom class contains information about the room, and a CPerson list of everyone in the room.
The CPerson class however sometimes needs to use the CRoom class for the room it's in, for example to find the door, or to see who else is in the room.
The problem is with the two modules importing each other I just get an import error on whichever is being imported second :(
In C++ I could solve this by only including the headers, and since in both cases the classes just have pointers to the other class, a forward declaration would suffice for the header, e.g.:
```
class CPerson;//forward declare
class CRoom
{
std::set<CPerson*> People;
...
```
Is there any way to do this in Python, other than placing both classes in the same module or something like that?
Edit: added a Python example showing the problem using the above classes
error:
> Traceback (most recent call last):
> File "C:\Projects\python\test\main.py", line 1, in
> from room import CRoom
> File "C:\Projects\python\test\room.py", line 1, in
> from person import CPerson
> File "C:\Projects\python\test\person.py", line 1, in
> from room import CRoom
> ImportError: cannot import name CRoom
> room.py
```
from person import CPerson
class CRoom:
def __init__(Self):
Self.People = {}
Self.NextId = 0
def AddPerson(Self, FirstName, SecondName, Gender):
Id = Self.NextId
Self.NextId += 1
Person = CPerson(FirstName,SecondName,Gender,Id)
Self.People[Id] = Person
return Person
def FindDoorAndLeave(Self, PersonId):
del Self.People[PersonId]
```
person.py
```
from room import CRoom
class CPerson:
def __init__(Self, Room, FirstName, SecondName, Gender, Id):
Self.Room = Room
Self.FirstName = FirstName
Self.SecondName = SecondName
Self.Gender = Gender
Self.Id = Id
def Leave(Self):
Self.Room.FindDoorAndLeave(Self.Id)
``` | Do you actually need to reference the classes at class definition time? ie.
```
class CRoom(object):
person = CPerson("a person")
```
Or (more likely), do you just need to use CPerson in the methods of your class (and vice versa). eg:
```
class CRoom(object):
def getPerson(self): return CPerson("someone")
```
If the second, there's no problem - as by the time the method gets **called** rather than defined, the module will be imported. Your sole problem is how to refer to it. Likely you're doing something like:
```
from CRoom import CPerson # or even import *
```
With circularly referencing modules, you can't do this, as at the point one module imports another, the original module's body won't have finished executing, so the namespace will be incomplete. Instead, use qualified references, i.e.:
```
#croom.py
import cperson
class CRoom(object):
def getPerson(self): return cperson.CPerson("someone")
```
Here, python doesn't need to lookup the attribute on the namespace until the method actually gets called, by which time both modules should have completed their initialisation. |
Python module dependency | 158,268 | 15 | 2008-10-01T15:38:58Z | 158,403 | 16 | 2008-10-01T16:11:41Z | [
"python",
"module",
"circular-dependency"
] | OK, I have two modules, each containing a class; the problem is that their classes reference each other.
Let's say for example I had a room module and a person module containing CRoom and CPerson.
The CRoom class contains information about the room, and a CPerson list of everyone in the room.
The CPerson class however sometimes needs to use the CRoom class for the room it's in, for example to find the door, or to see who else is in the room.
The problem is with the two modules importing each other I just get an import error on whichever is being imported second :(
In C++ I could solve this by only including the headers, and since in both cases the classes just have pointers to the other class, a forward declaration would suffice for the header, e.g.:
```
class CPerson;//forward declare
class CRoom
{
std::set<CPerson*> People;
...
```
Is there any way to do this in Python, other than placing both classes in the same module or something like that?
Edit: added a Python example showing the problem using the above classes
error:
> Traceback (most recent call last):
> File "C:\Projects\python\test\main.py", line 1, in
> from room import CRoom
> File "C:\Projects\python\test\room.py", line 1, in
> from person import CPerson
> File "C:\Projects\python\test\person.py", line 1, in
> from room import CRoom
> ImportError: cannot import name CRoom
> room.py
```
from person import CPerson
class CRoom:
def __init__(Self):
Self.People = {}
Self.NextId = 0
def AddPerson(Self, FirstName, SecondName, Gender):
Id = Self.NextId
Self.NextId += 1
Person = CPerson(FirstName,SecondName,Gender,Id)
Self.People[Id] = Person
return Person
def FindDoorAndLeave(Self, PersonId):
del Self.People[PersonId]
```
person.py
```
from room import CRoom
class CPerson:
def __init__(Self, Room, FirstName, SecondName, Gender, Id):
Self.Room = Room
Self.FirstName = FirstName
Self.SecondName = SecondName
Self.Gender = Gender
Self.Id = Id
def Leave(Self):
Self.Room.FindDoorAndLeave(Self.Id)
``` | **No need to import CRoom**
You don't use `CRoom` in `person.py`, so don't import it. Due to dynamic binding, Python doesn't need to "see all class definitions at compile time".
If you actually *do* use `CRoom` in `person.py`, then change `from room import CRoom` to `import room` and use module-qualified form `room.CRoom`. See [Effbot's Circular Imports](http://effbot.org/zone/import-confusion.htm#circular-imports) for details.
*Sidenote:* you probably have an error in the `Self.NextId += 1` line. It increments the instance's `NextId`, not the class's. To increment the class's counter use `CRoom.NextId += 1` or `Self.__class__.NextId += 1`. |
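The sidenote's distinction between instance and class attributes, as a small self-contained sketch:

```python
class Counter(object):
    NextId = 0

    def bump_instance(self):
        # Creates/updates an attribute on this instance only.
        self.NextId += 1

    def bump_class(self):
        # Updates the shared class attribute.
        Counter.NextId += 1

a, b = Counter(), Counter()
a.bump_instance()
print(a.NextId, b.NextId, Counter.NextId)  # 1 0 0

a.bump_class()
print(Counter.NextId, b.NextId)  # 1 1
```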
Best way to store and use a large text-file in python | 158,546 | 4 | 2008-10-01T16:37:08Z | 158,753 | 10 | 2008-10-01T17:30:41Z | [
"python"
] | I'm creating a networked server for a boggle-clone I wrote in Python, which accepts users, solves the boards, and scores the player input. The dictionary file I'm using is 1.8MB (the ENABLE2K dictionary), and I need it to be available to several game solver classes. Right now, I have it so that each class iterates through the file line-by-line and generates a hash table (associative array), but the more solver classes I instantiate, the more memory it takes up.
What I would like to do is import the dictionary file once and pass it to each solver instance as they need it. But what is the best way to do this? Should I import the dictionary in the global space, then access it in the solver class as globals()['dictionary']? Or should I import the dictionary then pass it as an argument to the class constructor? Is one of these better than the other? Is there a third option? | If you create a dictionary.py module, containing code which reads the file and builds a dictionary, this code will only be executed the first time it is imported. Further imports will return a reference to the existing module instance. As such, your classes can:
```
import dictionary
dictionary.words[whatever]
```
where dictionary.py has:
```
words = {}
# read file and add to 'words'
``` |
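The module-level caching described above can be seen directly: Python keeps loaded modules in `sys.modules`, so a second import returns the same object without re-running the file (the stand-in module contents below are illustrative):

```python
import os
import sys
import tempfile

# Build a stand-in dictionary.py on the fly.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, 'dictionary.py'), 'w') as f:
    f.write('# the expensive file-reading would happen here, once\n'
            'words = {"boggle": True}\n')

sys.path.insert(0, workdir)
import dictionary as first
import dictionary as second  # does not re-run the module body

print(first is second)                # True: one cached module object
print(first.words is second.words)   # True: every solver shares the dict
```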
Getting MAC Address | 159,137 | 68 | 2008-10-01T18:51:36Z | 159,150 | 13 | 2008-10-01T18:55:14Z | [
"python",
"windows",
"linux",
"networking"
] | I need a cross-platform method of determining the MAC address of a computer at run time. For Windows the 'wmi' module can be used, and the only method under Linux I could find was to run ifconfig and run a regex across its output. I don't like using a package that only works on one OS, and parsing the output of another program doesn't seem very elegant, not to mention error-prone.
Does anyone know a cross platform method (windows and linux) to get the MAC address? If not, does anyone know any more elegant methods than those I listed above? | netifaces is a good module to use for getting the mac address (and other addresses). It's cross-platform and makes a bit more sense than using socket or uuid.
```
>>> import netifaces
>>> netifaces.interfaces()
['lo', 'eth0', 'tun2']
>>> netifaces.ifaddresses('eth0')[netifaces.AF_LINK]
[{'addr': '08:00:27:50:f2:51', 'broadcast': 'ff:ff:ff:ff:ff:ff'}]
```
---
* [pypi location](https://pypi.python.org/pypi/netifaces)
* [Good Intro to netifaces](http://alastairs-place.net/projects/netifaces/) |
Getting MAC Address | 159,137 | 68 | 2008-10-01T18:51:36Z | 159,195 | 92 | 2008-10-01T19:06:30Z | [
"python",
"windows",
"linux",
"networking"
] | I need a cross platform method of determining the MAC address of a computer at run time. For windows the 'wmi' module can be used and the only method under Linux I could find was to run ifconfig and run a regex across its output. I don't like using a package that only works on one OS, and parsing the output of another program doesn't seem very elegant not to mention error prone.
Does anyone know a cross platform method (windows and linux) to get the MAC address? If not, does anyone know any more elegant methods than those I listed above? | Python 2.5 includes a uuid implementation which (in at least one version) needs the mac address. You can import the mac-finding function into your own code easily:
```
from uuid import getnode as get_mac
mac = get_mac()
```
The return value is the MAC address as a 48-bit integer. |
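Since `get_mac()` hands back an integer, a small helper (an illustrative sketch, not from the original answer) can render it in the familiar colon-separated form:

```python
import uuid

def format_mac(node):
    """Render uuid.getnode()'s 48-bit integer as a colon-separated MAC string."""
    # Walk the six octets from most significant (shift 40) to least (shift 0).
    return ':'.join('%02x' % ((node >> shift) & 0xff)
                    for shift in range(40, -1, -8))

mac = format_mac(uuid.getnode())
```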
Getting MAC Address | 159,137 | 68 | 2008-10-01T18:51:36Z | 160,821 | 17 | 2008-10-02T03:49:54Z | [
"python",
"windows",
"linux",
"networking"
] | I need a cross platform method of determining the MAC address of a computer at run time. For windows the 'wmi' module can be used and the only method under Linux I could find was to run ifconfig and run a regex across its output. I don't like using a package that only works on one OS, and parsing the output of another program doesn't seem very elegant not to mention error prone.
Does anyone know a cross platform method (windows and linux) to get the MAC address? If not, does anyone know any more elegant methods than those I listed above? | One other thing that you should note is that `uuid.getnode()` can fake the MAC address by returning a random 48-bit number which may not be what you are expecting. Also, there's no explicit indication that the MAC address has been faked, but you could detect it by calling `getnode()` twice and seeing if the result varies. If the same value is returned by both calls, you have the MAC address, otherwise you are getting a faked address.
```
>>> print uuid.getnode.__doc__
Get the hardware address as a 48-bit positive integer.
The first time this runs, it may launch a separate program, which could
be quite slow. If all attempts to obtain the hardware address fail, we
choose a random 48-bit number with its eighth bit set to 1 as recommended
in RFC 4122.
``` |
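Besides calling `getnode()` twice, the "eighth bit" the docstring mentions can be tested directly: it is the multicast bit (least significant bit of the first octet), which sits at bit 40 of the 48-bit integer. A hedged sketch, assuming that reading of RFC 4122 — real hardware MACs are unicast, so the bit is normally clear:

```python
def looks_faked(node):
    """True if the 48-bit node value has the multicast bit set -- the bit
    RFC 4122 says a randomly generated node ID must set to 1."""
    # Bit 40 of the integer is the LSB of the first octet on the wire.
    return bool((node >> 40) & 1)

# e.g. looks_faked(uuid.getnode())
```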
Getting MAC Address | 159,137 | 68 | 2008-10-01T18:51:36Z | 4,789,267 | 57 | 2011-01-25T01:57:17Z | [
"python",
"windows",
"linux",
"networking"
] | I need a cross platform method of determining the MAC address of a computer at run time. For windows the 'wmi' module can be used and the only method under Linux I could find was to run ifconfig and run a regex across its output. I don't like using a package that only works on one OS, and parsing the output of another program doesn't seem very elegant not to mention error prone.
Does anyone know a cross platform method (windows and linux) to get the MAC address? If not, does anyone know any more elegant methods than those I listed above? | Here is a pure Python solution to get the MAC for a specific local interface under Linux, originally posted as a comment by vishnubob and improved on by Ben Mackey in [this activestate recipe](http://code.activestate.com/recipes/439094-get-the-ip-address-associated-with-a-network-inter/)
```
#!/usr/bin/python
import fcntl, socket, struct
def getHwAddr(ifname):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    info = fcntl.ioctl(s.fileno(), 0x8927, struct.pack('256s', ifname[:15]))
    return ':'.join(['%02x' % ord(char) for char in info[18:24]])
print getHwAddr('eth0')
``` |
Getting MAC Address | 159,137 | 68 | 2008-10-01T18:51:36Z | 18,031,954 | 7 | 2013-08-03T10:40:10Z | [
"python",
"windows",
"linux",
"networking"
] | I need a cross platform method of determining the MAC address of a computer at run time. For windows the 'wmi' module can be used and the only method under Linux I could find was to run ifconfig and run a regex across its output. I don't like using a package that only works on one OS, and parsing the output of another program doesn't seem very elegant not to mention error prone.
Does anyone know a cross platform method (windows and linux) to get the MAC address? If not, does anyone know any more elegant methods than those I listed above? | Using my answer from here: <http://stackoverflow.com/a/18031868/2362361>
It is important to know which iface you want the MAC for, since many can exist (bluetooth, several NICs, etc.).
This does the job when you know the IP of the iface you need the MAC for, using `netifaces` (available in PyPI):
```
import netifaces as nif
def mac_for_ip(ip):
    'Returns the MAC for the interface that has the given IP, or None if not found'
    for i in nif.interfaces():
        addrs = nif.ifaddresses(i)
        try:
            if_mac = addrs[nif.AF_LINK][0]['addr']
            if_ip = addrs[nif.AF_INET][0]['addr']
        except (IndexError, KeyError):  # ignore ifaces that don't have a MAC or IP
            if_mac = if_ip = None
        if if_ip == ip:
            return if_mac
    return None
```
Testing:
```
>>> mac_for_ip('169.254.90.191')
'2c:41:38:0a:94:8b'
``` |
Nginx + fastcgi truncation problem | 159,541 | 10 | 2008-10-01T20:26:12Z | 5,218,788 | 7 | 2011-03-07T10:54:59Z | [
"python",
"django",
"nginx",
"fastcgi"
] | I'm running a Django site using the fastcgi interface to nginx. However, some pages are being served truncated (i.e. the page source just stops, sometimes in the middle of a tag). How do I fix this (let me know what extra information is needed, and I'll post it)
Details:
I'm using flup, and spawning the fastcgi server with the following command:
```
python ./manage.py runfcgi umask=000 maxchildren=5 maxspare=1 minspare=0 method=prefork socket=/path/to/runfiles/django.sock pidfile=/path/to/runfiles/django.pid
```
The nginx config is as follows:
```
# search and replace this: {project_location}
pid /path/to/runfiles/nginx.pid;
worker_processes 2;
error_log /path/to/runfiles/error_log;
events {
worker_connections 1024;
use epoll;
}
http {
# default nginx location
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main
'$remote_addr - $remote_user [$time_local] '
'"$request" $status $bytes_sent '
'"$http_referer" "$http_user_agent" '
'"$gzip_ratio"';
client_header_timeout 3m;
client_body_timeout 3m;
send_timeout 3m;
connection_pool_size 256;
client_header_buffer_size 1k;
large_client_header_buffers 4 2k;
request_pool_size 4k;
output_buffers 4 32k;
postpone_output 1460;
sendfile on;
tcp_nopush on;
keepalive_timeout 75 20;
tcp_nodelay on;
client_max_body_size 10m;
client_body_buffer_size 256k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
client_body_temp_path /path/to/runfiles/client_body_temp;
proxy_temp_path /path/to/runfiles/proxy_temp;
fastcgi_temp_path /path/to/runfiles/fastcgi_temp;
gzip on;
gzip_min_length 1100;
gzip_buffers 4 32k;
gzip_types text/plain text/html application/x-javascript text/xml text/css;
ignore_invalid_headers on;
server {
listen 80;
server_name alpha2.sonyalabs.com;
index index.html;
root /path/to/django-root/static;
# static resources
location ~* ^/static/.*$
{
root /path/to/django-root;
expires 30d;
break;
}
location / {
# host and port to fastcgi server
fastcgi_pass unix:/path/to/runfiles/django.sock;
fastcgi_param PATH_INFO $fastcgi_script_name;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_pass_header Authorization;
fastcgi_intercept_errors off;
}
location /403.html {
root /usr/local/nginx;
access_log off;
}
location /401.html {
root /usr/local/nginx;
access_log off;
}
location /404.html {
root /usr/local/nginx;
access_log off;
}
location = /_.gif {
empty_gif;
access_log off;
}
access_log /path/to/runfiles/localhost.access_log main;
error_log /path/to/runfiles/localhost.error_log;
}
}
``` | I had the same exact problem running Nagios on nginx. I stumbled upon your question while googling for an answer, and reading "permission denied" related answers it struck me (and perhaps it will help you) :
* Nginx error.log was reporting :
2011/03/07 11:36:02 [crit] 30977#0: \*225952 open() "/var/lib/nginx/fastcgi/2/65/0000002652" failed (13: Permission denied)
* so I just ran `# chown -R www-data:www-data /var/lib/nginx/fastcgi` (as root)
* Fixed ! (and thank you for your indirect help) |
What is the naming convention in Python for variable and function names? | 159,720 | 326 | 2008-10-01T21:01:16Z | 159,745 | 381 | 2008-10-01T21:05:24Z | [
"python",
"variables",
"function",
"naming-conventions"
] | Coming from a C# background the naming convention for variables and method names are usually either CamelCase or Pascal Case:
```
// C# example
string thisIsMyVariable = "a"
public void ThisIsMyMethod()
```
In Python, I have seen the above but I have also seen underscores being used:
```
# python example
this_is_my_variable = 'a'
def this_is_my_function():
```
Is there a more preferable, definitive coding style for Python? | See Python [PEP 8](http://www.python.org/dev/peps/pep-0008/).
> Function names should be lowercase,
> with words separated by underscores as
> necessary to improve readability.
>
> mixedCase is allowed only in contexts
> where that's already the prevailing
> style
Variables...
> Use the function naming rules:
> lowercase with words separated by
> underscores as necessary to improve
> readability.
Personally, I deviate from this because I also prefer `mixedCase` over `lower_case` for my own projects. |
What is the naming convention in Python for variable and function names? | 159,720 | 326 | 2008-10-01T21:01:16Z | 159,778 | 26 | 2008-10-01T21:12:41Z | [
"python",
"variables",
"function",
"naming-conventions"
] | Coming from a C# background the naming convention for variables and method names are usually either CamelCase or Pascal Case:
```
// C# example
string thisIsMyVariable = "a"
public void ThisIsMyMethod()
```
In Python, I have seen the above but I have also seen underscores being used:
```
# python example
this_is_my_variable = 'a'
def this_is_my_function():
```
Is there a more preferable, definitive coding style for Python? | There is [PEP 8](http://www.python.org/dev/peps/pep-0008/), as other answers show, but PEP 8 is only the style guide for the standard library, and it's only taken as gospel therein. One of the most frequent deviations from PEP 8 for other pieces of code is the variable naming, specifically for methods. There is no single predominant style, although considering the volume of code that uses mixedCase, if one were to make a strict census one would probably end up with a version of PEP 8 with mixedCase. There is little other deviation from PEP 8 that is quite as common.
What is the naming convention in Python for variable and function names? | 159,720 | 326 | 2008-10-01T21:01:16Z | 159,798 | 11 | 2008-10-01T21:16:21Z | [
"python",
"variables",
"function",
"naming-conventions"
] | Coming from a C# background the naming convention for variables and method names are usually either CamelCase or Pascal Case:
```
// C# example
string thisIsMyVariable = "a"
public void ThisIsMyMethod()
```
In Python, I have seen the above but I have also seen underscores being used:
```
# python example
this_is_my_variable = 'a'
def this_is_my_function():
```
Is there a more preferable, definitive coding style for Python? | Most Python people prefer underscores, but even though I have been using Python for more than 5 years now, I still do not like them. They just look ugly to me, but maybe that's all the Java in my head.
I simply like CamelCase better since it fits better with the way classes are named; it feels more logical to have `SomeClass.doSomething()` than `SomeClass.do_something()`. If you look around in the global module index in Python, you will find both, which is due to the fact that it's a collection of libraries from various sources that grew over time, not something that was developed by one company like Sun with strict coding rules. I would say the bottom line is: use whatever you like better, it's just a question of personal taste. |
What is the naming convention in Python for variable and function names? | 159,720 | 326 | 2008-10-01T21:01:16Z | 160,769 | 9 | 2008-10-02T03:24:30Z | [
"python",
"variables",
"function",
"naming-conventions"
] | Coming from a C# background the naming convention for variables and method names are usually either CamelCase or Pascal Case:
```
// C# example
string thisIsMyVariable = "a"
public void ThisIsMyMethod()
```
In Python, I have seen the above but I have also seen underscores being used:
```
# python example
this_is_my_variable = 'a'
def this_is_my_function():
```
Is there a more preferable, definitive coding style for Python? | Personally I try to use CamelCase for classes, mixedCase methods and functions. Variables are usually underscore separated (when I can remember). This way I can tell at a glance what exactly I'm calling, rather than everything looking the same. |
What is the naming convention in Python for variable and function names? | 159,720 | 326 | 2008-10-01T21:01:16Z | 160,830 | 150 | 2008-10-02T03:53:12Z | [
"python",
"variables",
"function",
"naming-conventions"
] | Coming from a C# background the naming convention for variables and method names are usually either CamelCase or Pascal Case:
```
// C# example
string thisIsMyVariable = "a"
public void ThisIsMyMethod()
```
In Python, I have seen the above but I have also seen underscores being used:
```
# python example
this_is_my_variable = 'a'
def this_is_my_function():
```
Is there a more preferable, definitive coding style for Python? | David Goodger (in "Code Like a Pythonista" [here](http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html)) describes the PEP 8 recommendations as follows:
* `joined_lower` for functions, methods,
attributes, variables
* `joined_lower` or `ALL_CAPS` for
constants
* `StudlyCaps` for classes
* `camelCase` only to conform to
pre-existing conventions |
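As a compact illustration of those rules in one place (every identifier here is invented for the example):

```python
# Illustrative module following the PEP 8 names summarized above.

MAX_RETRIES = 3                        # ALL_CAPS for a constant


class BoardSolver:                     # StudlyCaps for a class
    def __init__(self, board_size):
        self.board_size = board_size   # joined_lower for an attribute

    def count_cells(self):             # joined_lower for a method
        return self.board_size ** 2


def make_solver(board_size=4):         # joined_lower for a function
    return BoardSolver(board_size)
```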
What is the naming convention in Python for variable and function names? | 159,720 | 326 | 2008-10-01T21:01:16Z | 264,226 | 23 | 2008-11-05T02:51:29Z | [
"python",
"variables",
"function",
"naming-conventions"
] | Coming from a C# background the naming convention for variables and method names are usually either CamelCase or Pascal Case:
```
// C# example
string thisIsMyVariable = "a"
public void ThisIsMyMethod()
```
In Python, I have seen the above but I have also seen underscores being used:
```
# python example
this_is_my_variable = 'a'
def this_is_my_function():
```
Is there a more preferable, definitive coding style for Python? | As mentioned, PEP 8 says to use `lower_case_with_underscores` for variables, methods and functions.
I prefer using `lower_case_with_underscores` for variables and `mixedCase` for methods and functions makes the code more explicit and readable. Thus following the [Zen of Python's](http://www.python.org/dev/peps/pep-0020/) "explicit is better than implicit" and "Readability counts" |
What is the naming convention in Python for variable and function names? | 159,720 | 326 | 2008-10-01T21:01:16Z | 2,708,015 | 26 | 2010-04-25T11:23:57Z | [
"python",
"variables",
"function",
"naming-conventions"
] | Coming from a C# background the naming convention for variables and method names are usually either CamelCase or Pascal Case:
```
// C# example
string thisIsMyVariable = "a"
public void ThisIsMyMethod()
```
In Python, I have seen the above but I have also seen underscores being used:
```
# python example
this_is_my_variable = 'a'
def this_is_my_function():
```
Is there a more preferable, definitive coding style for Python? | As the [Style Guide for Python Code](http://www.python.org/dev/peps/pep-0008/) admits,
> The naming conventions of Python's
> library are a bit of a mess, so we'll
> never get this completely consistent
Note that this refers just to Python's *standard library*. If they can't get *that* consistent, then there is hardly much hope of having a generally-adhered-to convention for *all* Python code, is there?
From that, and the discussion here, I would deduce that it's **not** a horrible sin if one keeps using e.g. Java's or C#'s (clear and well-established) naming conventions for variables and functions when crossing over to Python. Keeping in mind, of course, that it is best to abide by whatever the prevailing style for a codebase / project / team happens to be. As the Python Style Guide points out, *internal consistency* matters most.
Feel free to dismiss me as a heretic. :-) Like the OP, I'm not a "Pythonista", not yet anyway. |
What is the naming convention in Python for variable and function names? | 159,720 | 326 | 2008-10-01T21:01:16Z | 8,423,697 | 252 | 2011-12-07T22:44:24Z | [
"python",
"variables",
"function",
"naming-conventions"
] | Coming from a C# background the naming convention for variables and method names are usually either CamelCase or Pascal Case:
```
// C# example
string thisIsMyVariable = "a"
public void ThisIsMyMethod()
```
In Python, I have seen the above but I have also seen underscores being used:
```
# python example
this_is_my_variable = 'a'
def this_is_my_function():
```
Is there a more preferable, definitive coding style for Python? | [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html) has the following convention:
> module\_name, package\_name, ClassName, method\_name, ExceptionName,
> function\_name, GLOBAL\_CONSTANT\_NAME, global\_var\_name,
> instance\_var\_name, function\_parameter\_name, local\_var\_name |
What is the difference between Ruby and Python versions of"self"? | 159,990 | 6 | 2008-10-01T22:09:16Z | 160,227 | 7 | 2008-10-01T23:32:01Z | [
"python",
"ruby",
"language-features"
] | I've done some Python but have just now starting to use Ruby
I could use a good explanation of the difference between "self" in these two languages.
**Obvious on first glance:**
Self is not a keyword in Python, but there is a "self-like" value no matter what you call it.
Python methods receive self as an explicit argument, whereas Ruby does not.
Ruby sometimes has methods explicitly defined as part of self using dot notation.
**Initial Googling reveals**
<http://rubylearning.com/satishtalim/ruby_self.html>
<http://www.ibiblio.org/g2swap/byteofpython/read/self.html> | Python is designed to support more than just object-oriented programming. Preserving the same interface between methods and functions lets the two styles interoperate more cleanly.
Ruby was built from the ground up to be object-oriented. Even the literals are objects (evaluate 1.class and you get Fixnum). The language was built such that self is a reserved keyword that returns the current instance wherever you are.
If you're inside an instance method of one of your classes, self is a reference to said instance.
If you're in the definition of the class itself (not in a method), self is the class itself:
```
class C
  puts "I am a #{self}"
  def instance_method
    puts 'instance_method'
  end
  def self.class_method
    puts 'class_method'
  end
end
```
At class definition time, 'I am a C' will be printed.
The straight 'def' defines an instance method, whereas the 'def self.xxx' defines a class method.
```
c=C.new
c.instance_method
#=> instance_method
C.class_method
#=> class_method
``` |
Which is the best way to get a list of running processes in unix with python? | 160,245 | 2 | 2008-10-01T23:37:06Z | 160,375 | 8 | 2008-10-02T00:29:39Z | [
"python"
] | I'm trying:
```
import commands
print commands.getoutput("ps -u 0")
```
But it doesn't work on os x.
os instead of commands gives the same output:
USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
nothing more | This works on Mac OS X 10.5.5. Note the capital **-U** option. Perhaps that's been your problem.
```
import subprocess
ps = subprocess.Popen("ps -U 0", shell=True, stdout=subprocess.PIPE)
print ps.stdout.read()
ps.stdout.close()
ps.wait()
```
Here's the Python version
```
Python 2.5.2 (r252:60911, Feb 22 2008, 07:57:53)
[GCC 4.0.1 (Apple Computer, Inc. build 5363)] on darwin
``` |
What is "lambda binding" in Python? | 160,859 | 9 | 2008-10-02T04:14:01Z | 160,898 | 12 | 2008-10-02T04:38:59Z | [
"python",
"binding",
"lambda"
] | I understand what are lambda functions in Python, but I can't find what is the meaning of "lambda binding" by searching the Python docs.
A link to read about it would be great.
A trivial explained example would be even better.
Thank you. | First, a general definition:
> When a program or function statement
> is executed, the current values of
> formal parameters are saved (on the
> stack) and within the scope of the
> statement, they are bound to the
> values of the actual arguments made in
> the call. When the statement is
> exited, the original values of those
> formal arguments are restored. This
> protocol is fully recursive. If within
> the body of a statement, something is
> done that causes the formal parameters
> to be bound again, to new values, the
> lambda-binding scheme guarantees that
> this will all happen in an orderly
> manner.
Now, there is an excellent [python example in a discussion here](http://markmail.org/message/fypalne4rp5curta "Theoretical question about Lambda"):
"...there is only one binding for `x`: doing `x = 7` just changes the value in the pre-existing binding. That's why
```
def foo(x):
    a = lambda: x
    x = 7
    b = lambda: x
    return a,b
```
returns two functions that both return 7; if there was a new binding after the `x = 7`, the functions would return different values [assuming you don't call foo(7), of course. Also assuming nested\_scopes]...." |
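A common follow-up: if you *want* each lambda to capture the value of `x` at definition time, the standard workaround is to freeze it with a default argument, which is evaluated when the lambda is created. A sketch expanding on the example above (not from the linked discussion):

```python
def foo(x):
    a = lambda x=x: x   # default evaluated now, so 'a' keeps the old value
    x = 7
    b = lambda: x       # closes over the (single) binding of x, now 7
    return a, b

a, b = foo(3)           # a() -> 3, b() -> 7
```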
Emulation of lex like functionality in Perl or Python | 160,889 | 4 | 2008-10-02T04:34:06Z | 160,922 | 8 | 2008-10-02T04:53:03Z | [
"python",
"perl",
"parsing",
"lex"
] | Here's the deal. Is there a way to have strings tokenized in a line based on multiple regexes?
One example:
I have to get all href tags, their corresponding text and some other text based on a different regex.
So I have 3 expressions and would like to tokenize the line and extract tokens of text matching every expression.
I have actually done this using flex (not to be confused with Adobe), which is an implementation of the good old lex. lex provides
an elegant way to do this by executing "actions" based on expressions. One can control the way
lex reads a file too (block / line based read).
The problem is that flex actually produces C/ C++ code which actually does the tokenizing job. I have a
make file which wraps all these things.
I was wondering if Perl/Python can in some way do the same thing. It's just that I would like to do everything
I like in a single programming language itself.
Tokenizing is just one of the things that I want to do as part of my application.
Apart from perl or python can any language (functional also) do this?
I did read about PLY and ANTLR here (<http://stackoverflow.com/questions/34081/parsing-where-can-i-learn-about-it#34085>).
But is there a way to do it naturally in python itself? pardon my ignorance, but are these tools used in any popular products / services?
Thank you. | Look at documentation for following modules on [CPAN](http://www.cpan.org/)
[HTML::TreeBuilder](http://search.cpan.org/search?query=HTML%3A%3ATreeBuilder&mode=all)
[HTML::TableExtract](http://search.cpan.org/author/MSISK/HTML-TableExtract-2.10/lib/HTML/TableExtract.pm)
and
[Parse::RecDescent](http://search.cpan.org/search?query=parse+recdescent&mode=all)
I've used these modules to process quite large and complex web-pages. |
Emulation of lex like functionality in Perl or Python | 160,889 | 4 | 2008-10-02T04:34:06Z | 161,146 | 7 | 2008-10-02T06:52:06Z | [
"python",
"perl",
"parsing",
"lex"
] | Here's the deal. Is there a way to have strings tokenized in a line based on multiple regexes?
One example:
I have to get all href tags, their corresponding text and some other text based on a different regex.
So I have 3 expressions and would like to tokenize the line and extract tokens of text matching every expression.
I have actually done this using flex (not to be confused with Adobe), which is an implementation of the good old lex. lex provides
an elegant way to do this by executing "actions" based on expressions. One can control the way
lex reads a file too (block / line based read).
The problem is that flex actually produces C/ C++ code which actually does the tokenizing job. I have a
make file which wraps all these things.
I was wondering if Perl/Python can in some way do the same thing. It's just that I would like to do everything
I like in a single programming language itself.
Tokenizing is just one of the things that I want to do as part of my application.
Apart from perl or python can any language (functional also) do this?
I did read about PLY and ANTLR here (<http://stackoverflow.com/questions/34081/parsing-where-can-i-learn-about-it#34085>).
But is there a way to do it naturally in python itself? pardon my ignorance, but are these tools used in any popular products / services?
Thank you. | If you're specifically after parsing links out of web-pages, then Perl's [WWW::Mechanize](http://search.cpan.org/perldoc?WWW::Mechanize) module will figure things out for you in a very elegant fashion. Here's a sample program that grabs the first page of Stack Overflow and parses out all the links, printing their text and corresponding URLs:
```
#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;
my $mech = WWW::Mechanize->new;
$mech->get("http://stackoverflow.com/");
$mech->success or die "Oh no! Couldn't fetch stackoverflow.com";
foreach my $link ($mech->links) {
    print "* [", $link->text, "] points to ", $link->url, "\n";
}
```
In the main loop, each `$link` is a [WWW::Mechanize::Link](http://search.cpan.org/perldoc?WWW::Mechanize::Link) object, so you're not just constrained to getting the text and URL.
All the best,
Paul |
Django and Python 2.6 | 162,808 | 5 | 2008-10-02T15:00:00Z | 163,163 | 7 | 2008-10-02T15:58:41Z | [
"python",
"django"
] | I'm just starting to get into Django, and of course as of last night one of the two new Python versions went final (2.6 obviously ;)) so I'm wondering if 2.6 plus Django is ready for actual use or do the Django team need more time to finish with tweaks/cleanup?
All the google searches I did were inconclusive, I saw bits about some initial test runs on beta 2 but nothing more recent seemed to show up.
Edit: <http://groups.google.com/group/django-developers/browse_thread/thread/a48f81d916f24a04>
They've confirmed here 1.0 w/2.6 works fine as far as they know. | The impression I get is that 2.6 should work fine with Django 1.0. As found here: <http://simonwillison.net/2008/Oct/2/whatus/> |
urllib2 file name | 163,009 | 24 | 2008-10-02T15:27:04Z | 163,095 | 37 | 2008-10-02T15:43:12Z | [
"python",
"url",
"urllib2"
] | If I open a file using urllib2, like so:
```
remotefile = urllib2.urlopen('http://example.com/somefile.zip')
```
Is there an easy way to get the file name other then parsing the original URL?
EDIT: changed openfile to urlopen... not sure how that happened.
EDIT2: I ended up using:
```
filename = url.split('/')[-1].split('#')[0].split('?')[0]
```
Unless I'm mistaken, this should strip out all potential queries as well. | Did you mean [urllib2.urlopen](http://www.python.org/doc/2.5.2/lib/module-urllib2.html#l2h-3928)?
You could potentially lift the *intended* filename *if* the server was sending a Content-Disposition header by checking `remotefile.info()['Content-Disposition']`, but as it is I think you'll just have to parse the url.
You could use `urlparse.urlsplit`, but if you have any URLs like at the second example, you'll end up having to pull the file name out yourself anyway:
```
>>> urlparse.urlsplit('http://example.com/somefile.zip')
('http', 'example.com', '/somefile.zip', '', '')
>>> urlparse.urlsplit('http://example.com/somedir/somefile.zip')
('http', 'example.com', '/somedir/somefile.zip', '', '')
```
Might as well just do this:
```
>>> 'http://example.com/somefile.zip'.split('/')[-1]
'somefile.zip'
>>> 'http://example.com/somedir/somefile.zip'.split('/')[-1]
'somefile.zip'
``` |
urllib2 file name | 163,009 | 24 | 2008-10-02T15:27:04Z | 163,202 | 10 | 2008-10-02T16:06:16Z | [
"python",
"url",
"urllib2"
] | If I open a file using urllib2, like so:
```
remotefile = urllib2.urlopen('http://example.com/somefile.zip')
```
Is there an easy way to get the file name other then parsing the original URL?
EDIT: changed openfile to urlopen... not sure how that happened.
EDIT2: I ended up using:
```
filename = url.split('/')[-1].split('#')[0].split('?')[0]
```
Unless I'm mistaken, this should strip out all potential queries as well. | If you only want the file name itself, assuming that there's no query variables at the end like <http://example.com/somedir/somefile.zip?foo=bar> then you can use os.path.basename for this:
```
[user@host]$ python
Python 2.5.1 (r251:54869, Apr 18 2007, 22:08:04)
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.path.basename("http://example.com/somefile.zip")
'somefile.zip'
>>> os.path.basename("http://example.com/somedir/somefile.zip")
'somefile.zip'
>>> os.path.basename("http://example.com/somedir/somefile.zip?foo=bar")
'somefile.zip?foo=bar'
```
Some other posters mentioned using urlparse, which will work, but you'd still need to strip the leading directory from the file name. If you use os.path.basename() then you don't have to worry about that, since it returns only the final part of the URL or file path. |
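Combining the two suggestions — parse the URL first, then take the basename — handles both the leading-directory and the query-string cases. A sketch (written with the Python 3 `urllib.parse` spelling; in Python 2 the module is `urlparse`):

```python
import posixpath
from urllib.parse import urlsplit  # the 'urlparse' module in Python 2

def url_filename(url):
    """Filename part of a URL, with query string and fragment stripped."""
    # urlsplit separates query and fragment into their own fields,
    # so .path is just '/somedir/somefile.zip'.
    return posixpath.basename(urlsplit(url).path)
```

`posixpath` is used instead of `os.path` so the result does not depend on the host OS's path separator.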
Python - How do I pass a string into subprocess.Popen (using the stdin argument)? | 163,542 | 175 | 2008-10-02T17:25:23Z | 163,556 | 21 | 2008-10-02T17:27:55Z | [
"python",
"subprocess",
"stdin"
] | If I do the following:
```
import subprocess
from cStringIO import StringIO
subprocess.Popen(['grep','f'],stdout=subprocess.PIPE,stdin=StringIO('one\ntwo\nthree\nfour\nfive\nsix\n')).communicate()[0]
```
I get:
```
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "/build/toolchain/mac32/python-2.4.3/lib/python2.4/subprocess.py", line 533, in __init__
(p2cread, p2cwrite,
File "/build/toolchain/mac32/python-2.4.3/lib/python2.4/subprocess.py", line 830, in _get_handles
p2cread = stdin.fileno()
AttributeError: 'cStringIO.StringI' object has no attribute 'fileno'
```
Apparently a cStringIO.StringIO object doesn't quack close enough to a file duck to suit subprocess.Popen. How do I work around this? | I figured out this workaround:
```
>>> p = subprocess.Popen(['grep','f'],stdout=subprocess.PIPE,stdin=subprocess.PIPE)
>>> p.stdin.write('one\ntwo\nthree\nfour\nfive\nsix\n')
>>> p.communicate()[0]
'four\nfive\n'
>>> p.stdin.close()
```
Is there a better one? |
Python - How do I pass a string into subprocess.Popen (using the stdin argument)? | 163,542 | 175 | 2008-10-02T17:25:23Z | 163,870 | 12 | 2008-10-02T18:33:23Z | [
"python",
"subprocess",
"stdin"
] | If I do the following:
```
import subprocess
from cStringIO import StringIO
subprocess.Popen(['grep','f'],stdout=subprocess.PIPE,stdin=StringIO('one\ntwo\nthree\nfour\nfive\nsix\n')).communicate()[0]
```
I get:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/build/toolchain/mac32/python-2.4.3/lib/python2.4/subprocess.py", line 533, in __init__
    (p2cread, p2cwrite,
  File "/build/toolchain/mac32/python-2.4.3/lib/python2.4/subprocess.py", line 830, in _get_handles
    p2cread = stdin.fileno()
AttributeError: 'cStringIO.StringI' object has no attribute 'fileno'
```
Apparently a cStringIO.StringIO object doesn't quack close enough to a file duck to suit subprocess.Popen. How do I work around this? | "Apparently a cStringIO.StringIO object doesn't quack close enough to a file duck to suit subprocess.Popen"
:-)
I'm afraid not. The pipe is a low-level OS concept, so it absolutely requires a file object that is represented by an OS-level file descriptor. Your workaround is the right one. |
Python - How do I pass a string into subprocess.Popen (using the stdin argument)? | 163,542 | 175 | 2008-10-02T17:25:23Z | 165,662 | 194 | 2008-10-03T04:11:07Z | [
"python",
"subprocess",
"stdin"
] | If I do the following:
```
import subprocess
from cStringIO import StringIO
subprocess.Popen(['grep','f'],stdout=subprocess.PIPE,stdin=StringIO('one\ntwo\nthree\nfour\nfive\nsix\n')).communicate()[0]
```
I get:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/build/toolchain/mac32/python-2.4.3/lib/python2.4/subprocess.py", line 533, in __init__
    (p2cread, p2cwrite,
  File "/build/toolchain/mac32/python-2.4.3/lib/python2.4/subprocess.py", line 830, in _get_handles
    p2cread = stdin.fileno()
AttributeError: 'cStringIO.StringI' object has no attribute 'fileno'
```
Apparently a cStringIO.StringIO object doesn't quack close enough to a file duck to suit subprocess.Popen. How do I work around this? | [`Popen.communicate()`](https://docs.python.org/3/library/subprocess.html?highlight=subprocess#subprocess.Popen.communicate) documentation:
> Note that if you want to send data to
> the processâs stdin, you need to
> create the Popen object with
> stdin=PIPE. Similarly, to get anything
> other than None in the result tuple,
> you need to give stdout=PIPE and/or
> stderr=PIPE too.
>
> **Replacing os.popen\***
```
pipe = os.popen(cmd, 'w', bufsize)
# ==>
pipe = Popen(cmd, shell=True, bufsize=bufsize, stdin=PIPE).stdin
```
> **Warning** Use communicate() rather than
> stdin.write(), stdout.read() or
> stderr.read() to avoid deadlocks due
> to any of the other OS pipe buffers
> filling up and blocking the child
> process.
So your example could be written as follows:
```
from subprocess import Popen, PIPE, STDOUT
p = Popen(['grep', 'f'], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
grep_stdout = p.communicate(input=b'one\ntwo\nthree\nfour\nfive\nsix\n')[0]
print(grep_stdout.decode())
# -> four
# -> five
# ->
``` |
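Not part of the answer above, but on Python 3.7+ the same exchange can be spelled with `subprocess.run`, letting `text=True` handle the encoding (a sketch assuming `grep` is available on the system):

```python
from subprocess import run, PIPE

result = run(['grep', 'f'],
             input='one\ntwo\nthree\nfour\nfive\nsix\n',
             stdout=PIPE, text=True)  # text=True: pass/receive str, not bytes
print(result.stdout)
# -> four
# -> five
```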
Python - How do I pass a string into subprocess.Popen (using the stdin argument)? | 163,542 | 175 | 2008-10-02T17:25:23Z | 10,134,899 | 7 | 2012-04-13T03:36:37Z | [
"python",
"subprocess",
"stdin"
] | If I do the following:
```
import subprocess
from cStringIO import StringIO
subprocess.Popen(['grep','f'],stdout=subprocess.PIPE,stdin=StringIO('one\ntwo\nthree\nfour\nfive\nsix\n')).communicate()[0]
```
I get:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/build/toolchain/mac32/python-2.4.3/lib/python2.4/subprocess.py", line 533, in __init__
    (p2cread, p2cwrite,
  File "/build/toolchain/mac32/python-2.4.3/lib/python2.4/subprocess.py", line 830, in _get_handles
    p2cread = stdin.fileno()
AttributeError: 'cStringIO.StringI' object has no attribute 'fileno'
```
Apparently a cStringIO.StringIO object doesn't quack close enough to a file duck to suit subprocess.Popen. How do I work around this? | ```
from subprocess import Popen, PIPE
from tempfile import SpooledTemporaryFile as tempfile
f = tempfile()
f.write('one\ntwo\nthree\nfour\nfive\nsix\n')
f.seek(0)
print Popen(['/bin/grep','f'],stdout=PIPE,stdin=f).stdout.read()
f.close()
``` |
Python - How do I pass a string into subprocess.Popen (using the stdin argument)? | 163,542 | 175 | 2008-10-02T17:25:23Z | 24,982,453 | 12 | 2014-07-27T15:29:17Z | [
"python",
"subprocess",
"stdin"
] | If I do the following:
```
import subprocess
from cStringIO import StringIO
subprocess.Popen(['grep','f'],stdout=subprocess.PIPE,stdin=StringIO('one\ntwo\nthree\nfour\nfive\nsix\n')).communicate()[0]
```
I get:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/build/toolchain/mac32/python-2.4.3/lib/python2.4/subprocess.py", line 533, in __init__
    (p2cread, p2cwrite,
  File "/build/toolchain/mac32/python-2.4.3/lib/python2.4/subprocess.py", line 830, in _get_handles
    p2cread = stdin.fileno()
AttributeError: 'cStringIO.StringI' object has no attribute 'fileno'
```
Apparently a cStringIO.StringIO object doesn't quack close enough to a file duck to suit subprocess.Popen. How do I work around this? | I am using python3 and found out that you need to encode your string before you can pass it into stdin:
```
p = Popen(['grep', 'f'], stdout=PIPE, stdin=PIPE, stderr=PIPE)
out, err = p.communicate(input='one\ntwo\nthree\nfour\nfive\nsix\n'.encode())
print(out)
``` |
Python - How do I pass a string into subprocess.Popen (using the stdin argument)? | 163,542 | 175 | 2008-10-02T17:25:23Z | 33,482,438 | 9 | 2015-11-02T16:34:03Z | [
"python",
"subprocess",
"stdin"
] | If I do the following:
```
import subprocess
from cStringIO import StringIO
subprocess.Popen(['grep','f'],stdout=subprocess.PIPE,stdin=StringIO('one\ntwo\nthree\nfour\nfive\nsix\n')).communicate()[0]
```
I get:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/build/toolchain/mac32/python-2.4.3/lib/python2.4/subprocess.py", line 533, in __init__
    (p2cread, p2cwrite,
  File "/build/toolchain/mac32/python-2.4.3/lib/python2.4/subprocess.py", line 830, in _get_handles
    p2cread = stdin.fileno()
AttributeError: 'cStringIO.StringI' object has no attribute 'fileno'
```
Apparently a cStringIO.StringIO object doesn't quack close enough to a file duck to suit subprocess.Popen. How do I work around this? | I'm a bit surprised nobody suggested creating a pipe, which is in my opinion the far simplest way to pass a string to stdin of a subprocess:
```
import os
import subprocess

read, write = os.pipe()
os.write(write, b"stdin input here")  # note: bytes are required on Python 3
os.close(write)
subprocess.check_call(['your-command'], stdin=read)
``` |
Can "list_display" in a Django ModelAdmin display attributes of ForeignKey fields? | 163,823 | 150 | 2008-10-02T18:26:19Z | 163,968 | 19 | 2008-10-02T18:53:23Z | [
"python",
"django",
"django-admin",
"modeladmin"
] | I have a Person model that has a foreign key relationship to Book. Book has a number of fields, but I'm most concerned about "author" (a standard CharField).
With that being said, in my PersonAdmin model, I'd like to display "book.author" using "list\_display". I've tried all of the obvious methods for doing so (see below), but nothing seems to work. Any suggestions?
```
class PersonAdmin(admin.ModelAdmin):
    list_display = ['book.author',]
``` | According to the documentation, you can only display the `__unicode__` representation of a ForeignKey:
<http://docs.djangoproject.com/en/dev/ref/contrib/admin/#list-display>
Seems odd that it doesn't support the `'book__author'` style format which is used everywhere else in the DB API.
Turns out there's [a ticket for this feature](http://code.djangoproject.com/ticket/5863), which is marked as Won't Fix. |
Can "list_display" in a Django ModelAdmin display attributes of ForeignKey fields? | 163,823 | 150 | 2008-10-02T18:26:19Z | 164,631 | 240 | 2008-10-02T21:11:56Z | [
"python",
"django",
"django-admin",
"modeladmin"
] | I have a Person model that has a foreign key relationship to Book. Book has a number of fields, but I'm most concerned about "author" (a standard CharField).
With that being said, in my PersonAdmin model, I'd like to display "book.author" using "list\_display". I've tried all of the obvious methods for doing so (see below), but nothing seems to work. Any suggestions?
```
class PersonAdmin(admin.ModelAdmin):
    list_display = ['book.author',]
``` | As another option, you can do look ups like:
```
class UserAdmin(admin.ModelAdmin):
    list_display = (..., 'get_author')

    def get_author(self, obj):
        return obj.book.author
    get_author.short_description = 'Author'
    get_author.admin_order_field = 'book__author'
``` |
Can "list_display" in a Django ModelAdmin display attributes of ForeignKey fields? | 163,823 | 150 | 2008-10-02T18:26:19Z | 496,453 | 9 | 2009-01-30T17:41:43Z | [
"python",
"django",
"django-admin",
"modeladmin"
] | I have a Person model that has a foreign key relationship to Book. Book has a number of fields, but I'm most concerned about "author" (a standard CharField).
With that being said, in my PersonAdmin model, I'd like to display "book.author" using "list\_display". I've tried all of the obvious methods for doing so (see below), but nothing seems to work. Any suggestions?
```
class PersonAdmin(admin.ModelAdmin):
    list_display = ['book.author',]
``` | You can show whatever you want in list display by using a callable. It would look like this:
```
def book_author(object):
    return object.book.author

class PersonAdmin(admin.ModelAdmin):
    list_display = [book_author,]
``` |
Can "list_display" in a Django ModelAdmin display attributes of ForeignKey fields? | 163,823 | 150 | 2008-10-02T18:26:19Z | 3,351,431 | 51 | 2010-07-28T09:13:55Z | [
"python",
"django",
"django-admin",
"modeladmin"
] | I have a Person model that has a foreign key relationship to Book. Book has a number of fields, but I'm most concerned about "author" (a standard CharField).
With that being said, in my PersonAdmin model, I'd like to display "book.author" using "list\_display". I've tried all of the obvious methods for doing so (see below), but nothing seems to work. Any suggestions?
```
class PersonAdmin(admin.ModelAdmin):
    list_display = ['book.author',]
``` | Like the rest, I went with callables too. But they have one downside: by default, you can't order on them. Fortunately, there is a solution for that:
```
def author(self):
    return self.book.author
author.admin_order_field = 'book__author'
``` |
Can "list_display" in a Django ModelAdmin display attributes of ForeignKey fields? | 163,823 | 150 | 2008-10-02T18:26:19Z | 14,677,451 | 9 | 2013-02-03T21:21:58Z | [
"python",
"django",
"django-admin",
"modeladmin"
] | I have a Person model that has a foreign key relationship to Book. Book has a number of fields, but I'm most concerned about "author" (a standard CharField).
With that being said, in my PersonAdmin model, I'd like to display "book.author" using "list\_display". I've tried all of the obvious methods for doing so (see below), but nothing seems to work. Any suggestions?
```
class PersonAdmin(admin.ModelAdmin):
    list_display = ['book.author',]
``` | I just posted a snippet that makes admin.ModelAdmin support '\_\_' syntax:
<http://djangosnippets.org/snippets/2887/>
So you can do:
```
class PersonAdmin(RelatedFieldAdmin):
    list_display = ['book__author',]
```
This is basically just doing the same thing described in the other answers, but it automatically takes care of (1) setting admin\_order\_field (2) setting short\_description and (3) modifying the queryset to avoid a database hit for each row. |
Can "list_display" in a Django ModelAdmin display attributes of ForeignKey fields? | 163,823 | 150 | 2008-10-02T18:26:19Z | 23,747,842 | 52 | 2014-05-19T21:55:14Z | [
"python",
"django",
"django-admin",
"modeladmin"
] | I have a Person model that has a foreign key relationship to Book. Book has a number of fields, but I'm most concerned about "author" (a standard CharField).
With that being said, in my PersonAdmin model, I'd like to display "book.author" using "list\_display". I've tried all of the obvious methods for doing so (see below), but nothing seems to work. Any suggestions?
```
class PersonAdmin(admin.ModelAdmin):
    list_display = ['book.author',]
``` | Despite all the great answers above and due to me being new to Django, I was still stuck. Here's my explanation from a very newbie perspective.
**models.py**
```
class Author(models.Model):
    name = models.CharField(max_length=255)

class Book(models.Model):
    author = models.ForeignKey(Author)
    title = models.CharField(max_length=255)
```
**admin.py (Incorrect Way)** - you think it would work by using 'model\_\_field' to reference, but it doesn't
```
class BookAdmin(admin.ModelAdmin):
    model = Book
    list_display = ['title', 'author__name', ]

admin.site.register(Book, BookAdmin)
```
**admin.py (Correct Way)** - this is how you reference a foreign key name the Django way
```
class BookAdmin(admin.ModelAdmin):
    model = Book
    list_display = ['title', 'get_name', ]

    def get_name(self, obj):
        return obj.author.name
    get_name.admin_order_field = 'author'  # Allows column order sorting
    get_name.short_description = 'Author Name'  # Renames column head

    # Filtering on side - for some reason, this works
    # list_filter = ['title', 'author__name']

admin.site.register(Book, BookAdmin)
```
For additional reference, see the Django model link [here](https://docs.djangoproject.com/en/dev/ref/contrib/admin/#list-display) |
Can "list_display" in a Django ModelAdmin display attributes of ForeignKey fields? | 163,823 | 150 | 2008-10-02T18:26:19Z | 28,190,954 | 7 | 2015-01-28T11:20:40Z | [
"python",
"django",
"django-admin",
"modeladmin"
] | I have a Person model that has a foreign key relationship to Book. Book has a number of fields, but I'm most concerned about "author" (a standard CharField).
With that being said, in my PersonAdmin model, I'd like to display "book.author" using "list\_display". I've tried all of the obvious methods for doing so (see below), but nothing seems to work. Any suggestions?
```
class PersonAdmin(admin.ModelAdmin):
    list_display = ['book.author',]
Please note that adding the `get_author` function would slow down the list\_display in the admin, because showing each person triggers an extra SQL query.
To avoid this, you need to modify the `get_queryset` method in PersonAdmin, for example:
```
def get_queryset(self, request):
    return super(PersonAdmin, self).get_queryset(request).select_related('book')
```
> Before: 73 queries in 36.02ms (67 duplicated queries in admin)
>
> After: 6 queries in 10.81ms |
How do I deploy a Python desktop application? | 164,137 | 28 | 2008-10-02T19:31:51Z | 164,291 | 12 | 2008-10-02T20:02:01Z | [
"python",
"deployment"
] | I have started on a personal python application that runs on the desktop. I am using wxPython as a GUI toolkit. Should there be a demand for this type of application, I would possibly like to commercialize it.
I have no knowledge of deploying "real-life" Python applications, though I have used [`py2exe`](http://www.py2exe.org/) in the past with varied success. How would I obfuscate the code? Can I somehow deploy only the bytecode?
An ideal solution would not jeopardize my intellectual property (source code), would not require a direct installation of Python (though I'm sure it will need to have some embedded interpreter), and would be cross-platform (Windows, Mac, and Linux). Does anyone know of any tools or resources in this area?
Thanks. | You can distribute the compiled Python bytecode (.pyc files) instead of the source. You can't prevent decompilation in Python (or any other language, really). You could use an obfuscator like [pyobfuscate](http://www.lysator.liu.se/~astrand/projects/pyobfuscate/) to make it more annoying for competitors to decipher your decompiled source.
As Alex Martelli says [in this thread](http://mail.python.org/pipermail/python-list/2006-April/1079623.html), if you want to keep your code a secret, you shouldn't run it on other people's machines.
IIRC, the last time I used [cx\_Freeze](http://python.net/crew/atuining/cx_Freeze/) it created a DLL for Windows that removed the necessity for a native Python installation. This is at least worth checking out. |
Change Django Templates Based on User-Agent | 164,427 | 38 | 2008-10-02T20:30:09Z | 164,507 | 18 | 2008-10-02T20:44:38Z | [
"python",
"django",
"django-templates",
"mobile-website",
"django-middleware"
] | I've made a Django site, but I've drunk the Kool-Aid and I want to make an *iPhone* version. After putting much thought into it, I've come up with two options:
1. Make a whole other site, like i.xxxx.com. Tie it into the same database using Django's sites framework.
2. Find some type of middleware that reads the user-agent, and changes the template directories dynamically.
I'd really prefer option #2; however, I have some reservations, mainly because the Django documentation [discourages changing settings on the fly](http://docs.djangoproject.com/en/dev/topics/settings/). I found a [snippet](http://www.djangosnippets.org/snippets/1098/) that would do what I'd like. My main issue is making it as seamless as possible: I'd like it to be automagic and transparent to the user.
Has anyone else come across the same issue? Would anyone care to share how they've tackled making iPhone versions of Django sites?
**Update**
I went with a combination of middleware and tweaking the template call.
For the middleware, I used [minidetector](http://code.google.com/p/minidetector/). I like it because it detects a [plethora](http://www.youtube.com/watch?v=b6E682C7Jj4) of mobile user-agents. All I have to do is check request.mobile in my views.
For the template call tweak:
```
def check_mobile(request, template_name):
    if request.mobile:
        return 'mobile-%s' % template_name
    return template_name
```
I use this for any view that I know I have both versions.
**TODO:**
* Figure out how to access *request.mobile* in an extended version of render\_to\_response so I don't have to use check\_mobile('template\_name.html')
* Using the previous, automagically fall back to the regular template if no mobile version exists.
Change Django Templates Based on User-Agent | 164,427 | 38 | 2008-10-02T20:30:09Z | 3,487,254 | 9 | 2010-08-15T11:57:53Z | [
"python",
"django",
"django-templates",
"mobile-website",
"django-middleware"
] | I've made a Django site, but I've drunk the Kool-Aid and I want to make an *iPhone* version. After putting much thought into it, I've come up with two options:
1. Make a whole other site, like i.xxxx.com. Tie it into the same database using Django's sites framework.
2. Find some type of middleware that reads the user-agent, and changes the template directories dynamically.
I'd really prefer option #2; however, I have some reservations, mainly because the Django documentation [discourages changing settings on the fly](http://docs.djangoproject.com/en/dev/topics/settings/). I found a [snippet](http://www.djangosnippets.org/snippets/1098/) that would do what I'd like. My main issue is making it as seamless as possible: I'd like it to be automagic and transparent to the user.
Has anyone else come across the same issue? Would anyone care to share how they've tackled making iPhone versions of Django sites?
**Update**
I went with a combination of middleware and tweaking the template call.
For the middleware, I used [minidetector](http://code.google.com/p/minidetector/). I like it because it detects a [plethora](http://www.youtube.com/watch?v=b6E682C7Jj4) of mobile user-agents. All I have to do is check request.mobile in my views.
For the template call tweak:
```
def check_mobile(request, template_name):
    if request.mobile:
        return 'mobile-%s' % template_name
    return template_name
```
I use this for any view that I know I have both versions.
**TODO:**
* Figure out how to access *request.mobile* in an extended version of render\_to\_response so I don't have to use check\_mobile('template\_name.html')
* Using the previous, automagically fall back to the regular template if no mobile version exists.
How? Django request objects have a .urlconf attribute, which can be set by middleware.
From django docs:
> Django determines the root URLconf
> module to use. Ordinarily, this is the
> value of the ROOT\_URLCONF setting, but
> if the incoming HttpRequest object has
> an attribute called urlconf (set by
> middleware request processing), its
> value will be used in place of the
> ROOT\_URLCONF setting.
1. In yourproj/middleware.py, write a class that checks the http\_user\_agent string:
```
import re
MOBILE_AGENT_RE = re.compile(r".*(iphone|mobile|androidtouch)", re.IGNORECASE)

class MobileMiddleware(object):
    def process_request(self, request):
        if MOBILE_AGENT_RE.match(request.META['HTTP_USER_AGENT']):
            request.urlconf = "yourproj.mobile_urls"
```
2. Don't forget to add this to MIDDLEWARE\_CLASSES in settings.py:
```
MIDDLEWARE_CLASSES= [...
    'yourproj.middleware.MobileMiddleware',
...]
```
3. Create a mobile urlconf, yourproj/mobile\_urls.py:
```
urlpatterns = patterns('', (r'/?$', 'mobile.index'), ...)
``` |
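The regular expression from step 1 can be sanity-checked outside Django; the user-agent strings below are illustrative samples, not real detection data:

```python
import re

MOBILE_AGENT_RE = re.compile(r".*(iphone|mobile|androidtouch)", re.IGNORECASE)

desktop_ua = "Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox/49.0"
iphone_ua = "Mozilla/5.0 (iPhone; CPU iPhone OS 9_3 like Mac OS X) Safari/601.1"

print(bool(MOBILE_AGENT_RE.match(desktop_ua)))  # False
print(bool(MOBILE_AGENT_RE.match(iphone_ua)))   # True
```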
Programmatically launching standalone Adobe flashplayer on Linux/X11 | 164,460 | 7 | 2008-10-02T20:36:02Z | 165,089 | 7 | 2008-10-02T23:44:15Z | [
"python",
"linux",
"adobe",
"x11",
"flash-player"
] | The standalone flashplayer takes no arguments other than a .swf file when you launch it from the command line. I need the player to go full screen, no window borders and such. This can be accomplished by hitting ctrl+f once the program has started. I want to do this programmatically as I need it to launch into full screen without any human interaction.
My guess is that I need to somehow get a handle to the window and then send it an event that looks like the "ctrl+f" keystroke.
If it makes any difference, it looks like flashplayer is a gtk application and I have python with pygtk installed.
**UPDATE** (the solution I used... thanks to ypnos' answer):
```
./flashplayer http://example.com/example.swf & sleep 3 && ~/xsendkey -window "Adobe Flash Player 10" Control+F
``` | You can use a dedicated application which sends the keystroke to the window manager, which should then pass it to flash, if the window starts as being the active window on the screen. This is quite error prone, though, due to delays between starting flash and when the window will show up.
For example, your script could do something like this:
```
flashplayer *.swf &
sleep 3 && xsendkey Control+F
```
The application xsendkey can be found here: <http://people.csail.mit.edu/adonovan/hacks/xsendkey.html>
Without given a specific window, it will send it to the root window, which is handled by your window manager. You could also try to figure out the Window id first, using xprop or something related to it.
Another option is a window manager which is able to remember your settings and automatically apply them. Fluxbox, for example, provides this feature. You could set Fluxbox to make the window borderless and stretch it over the whole screen, if flashplayer supports being resized. This is also not-so-nice, as it would probably affect every flashplayer window you ever open. |
How would I package and sell a Django app? | 164,901 | 30 | 2008-10-02T22:27:56Z | 164,920 | 11 | 2008-10-02T22:40:24Z | [
"python",
"django",
"distribution",
"piracy-prevention"
] | Currently I am hosting a Django app I developed myself for my clients, but I am now starting to look at selling it to people for them to host themselves.
My question is this: How can I package up and sell a Django app, while protecting its code from pirating or theft? Distributing a bunch of .py files doesn't sound like a good idea, as the people I sell it to could just make copies of them and pass them on.
I think for the purpose of this problem it would be safe to assume that everyone who buys this would be running the same (LAMP) setup. | The way I'd go about it is this:
1. Encrypt all of the code
2. Write an installer that contacts the server with the machine's hostname and license file and gets the decryption key, then decrypts the code and compiles it to python bytecode
3. Add (in the installer) a module that checks the machine's hostname and license file on import and dies if it doesn't match
This way the user only has to contact the server when the hostname changes and on first install, but you get a small layer of security. You could change the hostname to something more complex, but there's really no need -- anyone that wants to pirate this will do so, but a simple mechanism like that will keep honest people honest. |
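A minimal sketch of the hostname check from step 3 (the function name and error handling are illustrative; as noted, this only keeps honest people honest):

```python
import socket

def check_license(licensed_hostname):
    # Refuse to run if this machine doesn't match the licensed hostname
    if socket.gethostname() != licensed_hostname:
        raise RuntimeError("license is not valid for this machine")

# On the licensed machine, the hostname recorded at install time matches
check_license(socket.gethostname())
```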
How would I package and sell a Django app? | 164,901 | 30 | 2008-10-02T22:27:56Z | 164,959 | 9 | 2008-10-02T22:54:43Z | [
"python",
"django",
"distribution",
"piracy-prevention"
] | Currently I am hosting a Django app I developed myself for my clients, but I am now starting to look at selling it to people for them to host themselves.
My question is this: How can I package up and sell a Django app, while protecting its code from pirating or theft? Distributing a bunch of .py files doesn't sound like a good idea, as the people I sell it to could just make copies of them and pass them on.
I think for the purpose of this problem it would be safe to assume that everyone who buys this would be running the same (LAMP) setup. | You could package the whole thing up as an Amazon Machine Instance (AMI), and then have them run your app on [Amazon EC2](http://aws.amazon.com/ec2/). The nice thing about this solution is that Amazon will [take care of billing for you](http://docs.amazonwebservices.com/AWSEC2/latest/DeveloperGuide/index.html?paidamis-intro.html), and since you're distributing the entire machine image, you can be certain that all your clients are using the same LAMP stack. The AMI is an encrypted machine image that is configured however you want it.
You can have Amazon bill the client with a one-time fee, usage-based fee, or monthly fee.
Of course, this solution requires that your clients host their app at Amazon, and pay the appropriate fees. |
How would I package and sell a Django app? | 164,901 | 30 | 2008-10-02T22:27:56Z | 164,987 | 48 | 2008-10-02T23:10:21Z | [
"python",
"django",
"distribution",
"piracy-prevention"
] | Currently I am hosting a Django app I developed myself for my clients, but I am now starting to look at selling it to people for them to host themselves.
My question is this: How can I package up and sell a Django app, while protecting its code from pirating or theft? Distributing a bunch of .py files doesn't sound like a good idea, as the people I sell it to could just make copies of them and pass them on.
I think for the purpose of this problem it would be safe to assume that everyone who buys this would be running the same (LAMP) setup. | Don't try and obfuscate or encrypt the code - it will never work.
I would suggest selling the Django application "as a service" - either host it for them, or sell them the code *and support*. Write up a contract that forbids them from redistributing it.
That said, if you were determined to obfuscate the code in some way - you can distribute python applications entirely as .pyc (Python compiled byte-code).. It's how Py2App works.
It will still be re-distributable, *but* it will be very difficult to edit the files - so you could add some basic licensing stuff, and not have it foiled by a few `#`s.
As I said, I don't think you'll succeed in anti-piracy via encryption or obfuscation etc. Depending on your clients, a simple contract and maybe some really basic checks will go much further than some complicated decryption system (and make the experience of using your application *better*, instead of *hopefully not any worse*) |
How would I package and sell a Django app? | 164,901 | 30 | 2008-10-02T22:27:56Z | 167,240 | 7 | 2008-10-03T14:48:12Z | [
"python",
"django",
"distribution",
"piracy-prevention"
] | Currently I am hosting a Django app I developed myself for my clients, but I am now starting to look at selling it to people for them to host themselves.
My question is this: How can I package up and sell a Django app, while protecting its code from pirating or theft? Distributing a bunch of .py files doesn't sound like a good idea, as the people I sell it to could just make copies of them and pass them on.
I think for the purpose of this problem it would be safe to assume that everyone who buys this would be running the same (LAMP) setup. | You'll never be able to keep the source code from people who really want it. It's best to come to grips with this fact now, and save yourself the headache later. |
How would I package and sell a Django app? | 164,901 | 30 | 2008-10-02T22:27:56Z | 445,887 | 10 | 2009-01-15T07:01:38Z | [
"python",
"django",
"distribution",
"piracy-prevention"
] | Currently I am hosting a Django app I developed myself for my clients, but I am now starting to look at selling it to people for them to host themselves.
My question is this: How can I package up and sell a Django app, while protecting its code from pirating or theft? Distributing a bunch of .py files doesn't sound like a good idea, as the people I sell it to could just make copies of them and pass them on.
I think for the purpose of this problem it would be safe to assume that everyone who buys this would be running the same (LAMP) setup. | "Encrypting" Python source code (or bytecode, or really bytecode for any language that uses it -- not just Python) is like those little JavaScript things some people put on web pages to try to disable the right-hand mouse button, declaring "now you can't steal my images!"
The workarounds are trivial, and will not stop a determined person.
If you're really serious about selling a piece of Python software, you need to act serious. Pay an attorney to draw up license/contract terms, have people agree to them at the time of purchase, and then just let them have the actual software. This means you'll have to haul people into court if they violate the license/contract terms, but you'd have to do that no matter what (e.g., if somebody breaks your "encryption" and starts distributing your software), and having the actual proper form of legal words already set down on paper, with their signature, will be far better for your business in the long term.
If you're really *that* paranoid about people "stealing" your software, though, just stick with a hosted model and don't give them access to the server. Plenty of successful businesses are based around that model. |
Detecting Mouse clicks in windows using python | 165,495 | 13 | 2008-10-03T02:51:44Z | 168,996 | 23 | 2008-10-03T21:38:15Z | [
"python",
"windows",
"mouse"
] | How can I detect mouse clicks regardless of the window the mouse is in?
Perferabliy in python, but if someone can explain it in any langauge I might be able to figure it out.
I found this on microsoft's site:
<http://msdn.microsoft.com/en-us/library/ms645533(VS.85).aspx>
But I don't see how I can detect or pick up the notifications listed.
Tried using pygame's pygame.mouse.get\_pos() function as follows:
```
import pygame
pygame.init()
while True:
    print pygame.mouse.get_pos()
```
This just returns 0,0.
I'm not familiar with pygame, is something missing?
In anycase I'd prefer a method without the need to install a 3rd party module.
(other than pywin32 <http://sourceforge.net/projects/pywin32/> ) | The only way to detect mouse events outside your program is to install a Windows hook using [SetWindowsHookEx](http://msdn.microsoft.com/en-us/library/ms644990(VS.85).aspx). The [pyHook](http://www.cs.unc.edu/Research/assist/developer.shtml) module encapsulates the nitty-gritty details. Here's a sample that will print the location of every mouse click:
```
import pyHook
import pythoncom
def onclick(event):
    print event.Position
    return True
hm = pyHook.HookManager()
hm.SubscribeMouseAllButtonsDown(onclick)
hm.HookMouse()
pythoncom.PumpMessages()
hm.UnhookMouse()
```
You can check the **example.py** script that is installed with the module for more info about the **event** parameter.
pyHook might be tricky to use in a pure Python script, because it requires an active message pump. From the [tutorial](http://mindtrove.info/articles/monitoring-global-input-with-pyhook/):
> Any application that wishes to receive
> notifications of global input events
> must have a Windows message pump. The
> easiest way to get one of these is to
> use the PumpMessages method in the
> Win32 Extensions package for Python.
> [...] When run, this program just sits
> idle and waits for Windows events. If
> you are using a GUI toolkit (e.g.
> wxPython), this loop is unnecessary
> since the toolkit provides its own. |
Python object attributes - methodology for access | 165,883 | 22 | 2008-10-03T06:18:37Z | 165,911 | 20 | 2008-10-03T06:35:37Z | [
"python",
"oop",
"object",
"attributes"
] | Suppose I have a class with some attributes. How is it best (in the Pythonic-OOP sense) to access these attributes? Just like `obj.attr`? Or perhaps write get accessors?
What are the accepted naming styles for such things ?
**Edit:**
Can you elaborate on the best-practices of naming attributes with a single or double leading underscore ? I see in most modules that a single underscore is used.
---
If this question has already been asked (and I have a hunch it has, though searching didn't bring results), please point to it - and I will close this one. | The generally accepted way of doing things is just using simple attributes, like so
```
>>> class MyClass:
...     myAttribute = 0
...
>>> c = MyClass()
>>> c.myAttribute
0
>>> c.myAttribute = 1
>>> c.myAttribute
1
```
If you do find yourself needing to be able to write getters and setters, then what you want to look for is "python class properties" and [Ryan Tomayko's article on
Getters/Setters/Fuxors](http://tomayko.com/writings/getters-setters-fuxors) is a great place to start (albeit a little long) |
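If accessor logic does become necessary later, Python's built-in `property` lets you add it without changing the `obj.attr` syntax your callers already use. A minimal sketch of that idea (the `Temperature` class and its validation rule are invented for illustration):

```python
class Temperature(object):
    def __init__(self, celsius=0.0):
        self._celsius = celsius  # single underscore: internal by convention

    @property
    def celsius(self):
        # Read access still looks like a plain attribute: t.celsius
        return self._celsius

    @celsius.setter
    def celsius(self, value):
        # Validation can be bolted on later without breaking existing callers.
        if value < -273.15:
            raise ValueError("below absolute zero")
        self._celsius = value

t = Temperature()
t.celsius = 20.0  # still plain attribute syntax, but runs the setter
```

Because callers keep writing `t.celsius`, you can start with a simple attribute and introduce the property only if and when it is actually needed.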
Python object attributes - methodology for access | 165,883 | 22 | 2008-10-03T06:18:37Z | 166,073 | 7 | 2008-10-03T09:23:40Z | [
"python",
"oop",
"object",
"attributes"
] | Suppose I have a class with some attributes. How is it best (in the Pythonic-OOP sense) to access these attributes? Just like `obj.attr`? Or perhaps write get accessors?
What are the accepted naming styles for such things ?
**Edit:**
Can you elaborate on the best-practices of naming attributes with a single or double leading underscore ? I see in most modules that a single underscore is used.
---
If this question has already been asked (and I have a hunch it has, though searching didn't bring results), please point to it - and I will close this one. | > Edit: Can you elaborate on the best-practices of naming attributes with a single or double leading underscore ? I see in most modules that a single underscore is used.
Single underscore doesn't mean anything special to Python; it is just a convention that says "hey, you probably don't want to access this unless you know what you are doing". Double underscore, however, makes Python mangle the name internally, making it awkward to access from outside the class where it is defined.
Double leading AND trailing underscore denotes a special function, such as `__add__` which is called when using the + operator.
Read more in [PEP 8](http://www.python.org/dev/peps/pep-0008/), especially the "Naming Conventions" section. |
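To make the double-leading-AND-trailing form concrete: defining `__add__` is what makes the `+` operator work on instances of your own class. A small sketch (the `Vector` class is invented for the example):

```python
class Vector(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __add__(self, other):
        # Called by Python for: v1 + v2
        return Vector(self.x + other.x, self.y + other.y)

v = Vector(1, 2) + Vector(3, 4)  # v.x == 4, v.y == 6
```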
Python object attributes - methodology for access | 165,883 | 22 | 2008-10-03T06:18:37Z | 166,098 | 51 | 2008-10-03T09:37:51Z | [
"python",
"oop",
"object",
"attributes"
] | Suppose I have a class with some attributes. How is it best (in the Pythonic-OOP sense) to access these attributes? Just like `obj.attr`? Or perhaps write get accessors?
What are the accepted naming styles for such things ?
**Edit:**
Can you elaborate on the best-practices of naming attributes with a single or double leading underscore ? I see in most modules that a single underscore is used.
---
If this question has already been asked (and I have a hunch it has, though searching didn't bring results), please point to it - and I will close this one. | With regards to the single and double-leading underscores: both indicate the same concept of 'privateness'. That is to say, people will know the attribute (be it a method or a 'normal' data attribute or anything else) is not part of the public API of the object. People will know that to touch it directly is to invite disaster.
On top of that, the double-leading underscore attributes (but not the single-leading underscore attributes) are *name-mangled* to make accessing them *by accident* from subclasses or anywhere else outside the current class less likely. You can still access them, but not as trivially. For example:
```
>>> class ClassA:
...     def __init__(self):
...         self._single = "Single"
...         self.__double = "Double"
...     def getSingle(self):
...         return self._single
...     def getDouble(self):
...         return self.__double
...
>>> class ClassB(ClassA):
...     def getSingle_B(self):
...         return self._single
...     def getDouble_B(self):
...         return self.__double
...
>>> a = ClassA()
>>> b = ClassB()
```
You can now trivially access `a._single` and `b._single` and get the `_single` attribute created by `ClassA`:
```
>>> a._single, b._single
('Single', 'Single')
>>> a.getSingle(), b.getSingle(), b.getSingle_B()
('Single', 'Single', 'Single')
```
But trying to access the `__double` attribute on the `a` or `b` instance directly won't work:
```
>>> a.__double
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: ClassA instance has no attribute '__double'
>>> b.__double
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: ClassB instance has no attribute '__double'
```
And though methods defined in `ClassA` can get at it directly (when called on either instance):
```
>>> a.getDouble(), b.getDouble()
('Double', 'Double')
```
Methods defined on `ClassB` can not:
```
>>> b.getDouble_B()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 5, in getDouble_B
AttributeError: ClassB instance has no attribute '_ClassB__double'
```
And right in that error you get a hint about what's happening. The `__double` attribute name, when accessed inside a class, is being name-mangled to include the name of the class that it is being accessed *in*. When `ClassA` tries to access `self.__double`, it actually turns -- at compile time -- into an access of `self._ClassA__double`, and likewise for `ClassB`. (If a method in `ClassB` were to assign to `__double`, not included in the code for brevity, it would therefore not touch `ClassA`'s `__double` but create a new attribute.) There is no other protection of this attribute, so you can still access it directly if you know the right name:
```
>>> a._ClassA__double, b._ClassA__double
('Double', 'Double')
```
**So why is this a problem?**
Well, it's a problem any time you want to inherit and change the behaviour of any code dealing with this attribute. You either have to reimplement everything that touches this double-underscore attribute directly, or you have to guess at the class name and mangle the name manually. The problem gets worse when this double-underscore attribute is actually a method: overriding the method *or calling the method in a subclass* means doing the name-mangling manually, or reimplementing all the code that calls the method to not use the double-underscore name. Not to mention accessing the attribute dynamically, with `getattr()`: you will have to manually mangle there, too.
On the other hand, because the attribute is only trivially rewritten, it offers only superficial 'protection'. Any piece of code can still get at the attribute by manually mangling, although that will make *their* code dependent on the name of *your* class, and efforts on your side to refactor your code or rename your class (while still keeping the same user-visible name, a common practice in Python) would needlessly break their code. They can also 'trick' Python into doing the name-mangling for them by naming their class the same as yours: notice how there is no module name included in the mangled attribute name. And lastly, the double-underscore attribute is still visible in all attribute lists and all forms of introspection that don't take care to skip attributes starting with a (*single*) underscore.
So, *if* you use double-underscore names, use them exceedingly sparingly, as they can turn out quite inconvenient, and never use them for methods **or anything else a subclass may ever want to reimplement, override or access directly**. And realize that double-leading underscore name-mangling offers *no real protection*. In the end, using a single leading underscore wins you just as much and gives you less (potential, future) pain. Use a single leading underscore. |
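The override problem can be shown directly. In this sketch (class names invented), the single-underscore method is overridden as expected, while the double-underscore one silently is not, because the base class's internal call was already mangled to the base class's name:

```python
class Base(object):
    def greet(self):
        return self.__impl()   # mangled to self._Base__impl at compile time

    def greet_single(self):
        return self._impl()    # plain lookup, honours subclass overrides

    def __impl(self):
        return "base"

    def _impl(self):
        return "base"

class Child(Base):
    def __impl(self):          # becomes _Child__impl: never seen by Base.greet
        return "child"

    def _impl(self):           # a normal override: Base.greet_single uses it
        return "child"

c = Child()
# c.greet() == "base"          -- the double-underscore override is ignored
# c.greet_single() == "child"  -- the single-underscore override works
```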
Sleeping in a batch file | 166,044 | 129 | 2008-10-03T09:10:29Z | 166,058 | 14 | 2008-10-03T09:15:39Z | [
"python",
"batch-file"
] | When writing a batch file to automate something on a Windows box, I've needed to pause its execution for several seconds (usually in a test/wait loop, waiting for a process to start). At the time, the best solution I could find uses ping (I kid you not) to achieve the desired effect. I've found a better write-up of it [here](http://malektips.com/dos0017.html), which describes a callable "wait.bat", implemented as follows:
```
@ping 127.0.0.1 -n 2 -w 1000 > nul
@ping 127.0.0.1 -n %1% -w 1000> nul
```
You can then include calls to wait.bat in your own batch file, passing in the number of seconds to sleep.
[Apparently the Windows 2003 Resource Kit provides a Unix-like sleep command](http://malektips.com/xp_dos_0002.html) (at last!). In the meantime, for those of us still using Windows XP, Windows 2000 or (sadly) [Windows NT](http://en.wikipedia.org/wiki/Windows_NT), is there a better way?
I modified the `sleep.py` script in the [accepted answer](http://stackoverflow.com/questions/166044/sleeping-in-a-dos-batch-file#166290), so that it defaults to one second if no arguments are passed on the command line:
```
import time, sys
time.sleep(float(sys.argv[1]) if len(sys.argv) > 1 else 1)
``` | `SLEEP.exe` is included in most Resource Kits e.g. [The Windows Server 2003 Resource Kit](http://www.microsoft.com/downloads/details.aspx?familyid=9d467a69-57ff-4ae7-96ee-b18c4790cffd&displaylang=en) which can be installed on Windows XP too.
```
Usage: sleep time-to-sleep-in-seconds
sleep [-m] time-to-sleep-in-milliseconds
sleep [-c] commited-memory ratio (1%-100%)
``` |
Sleeping in a batch file | 166,044 | 129 | 2008-10-03T09:10:29Z | 166,093 | 14 | 2008-10-03T09:35:43Z | [
"python",
"batch-file"
] | When writing a batch file to automate something on a Windows box, I've needed to pause its execution for several seconds (usually in a test/wait loop, waiting for a process to start). At the time, the best solution I could find uses ping (I kid you not) to achieve the desired effect. I've found a better write-up of it [here](http://malektips.com/dos0017.html), which describes a callable "wait.bat", implemented as follows:
```
@ping 127.0.0.1 -n 2 -w 1000 > nul
@ping 127.0.0.1 -n %1% -w 1000> nul
```
You can then include calls to wait.bat in your own batch file, passing in the number of seconds to sleep.
[Apparently the Windows 2003 Resource Kit provides a Unix-like sleep command](http://malektips.com/xp_dos_0002.html) (at last!). In the meantime, for those of us still using Windows XP, Windows 2000 or (sadly) [Windows NT](http://en.wikipedia.org/wiki/Windows_NT), is there a better way?
I modified the `sleep.py` script in the [accepted answer](http://stackoverflow.com/questions/166044/sleeping-in-a-dos-batch-file#166290), so that it defaults to one second if no arguments are passed on the command line:
```
import time, sys
time.sleep(float(sys.argv[1]) if len(sys.argv) > 1 else 1)
``` | I faced a similar problem, but I just knocked up a very short C++ console application to do the same thing. Just run *MySleep.exe 1000* - perhaps easier than downloading/installing the whole resource kit.
```
#include <tchar.h>
#include <stdio.h>
#include "Windows.h"
int _tmain(int argc, _TCHAR* argv[])
{
    if (argc == 2)
    {
        _tprintf(_T("Sleeping for %s ms\n"), argv[1]);
        Sleep(_tstoi(argv[1]));
    }
    else
    {
        _tprintf(_T("Wrong number of arguments.\n"));
    }
    return 0;
}
``` |
Sleeping in a batch file | 166,044 | 129 | 2008-10-03T09:10:29Z | 166,290 | 16 | 2008-10-03T10:42:19Z | [
"python",
"batch-file"
] | When writing a batch file to automate something on a Windows box, I've needed to pause its execution for several seconds (usually in a test/wait loop, waiting for a process to start). At the time, the best solution I could find uses ping (I kid you not) to achieve the desired effect. I've found a better write-up of it [here](http://malektips.com/dos0017.html), which describes a callable "wait.bat", implemented as follows:
```
@ping 127.0.0.1 -n 2 -w 1000 > nul
@ping 127.0.0.1 -n %1% -w 1000> nul
```
You can then include calls to wait.bat in your own batch file, passing in the number of seconds to sleep.
[Apparently the Windows 2003 Resource Kit provides a Unix-like sleep command](http://malektips.com/xp_dos_0002.html) (at last!). In the meantime, for those of us still using Windows XP, Windows 2000 or (sadly) [Windows NT](http://en.wikipedia.org/wiki/Windows_NT), is there a better way?
I modified the `sleep.py` script in the [accepted answer](http://stackoverflow.com/questions/166044/sleeping-in-a-dos-batch-file#166290), so that it defaults to one second if no arguments are passed on the command line:
```
import time, sys
time.sleep(float(sys.argv[1]) if len(sys.argv) > 1 else 1)
``` | If you have Python installed, or don't mind installing it (it has other uses too :), just create the following **sleep.py** script and add it somewhere in your PATH:
```
import time, sys
time.sleep(float(sys.argv[1]))
```
It will allow sub-second pauses (e.g. 1.5 sec, 0.1 etc), should you have such a need. If you want to call it as `sleep` rather than `sleep.py`, then you can add the `.PY` extension to your PATHEXT environment variable. In XP, you can edit it in:
My Computer → Properties (menu) → Advanced (tab) → Environment Variables (button) → System variables (frame)
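A slightly hardened variant of the same script, which keeps the one-second default and rejects malformed arguments with a usage message (the exact error text here is made up):

```python
import sys
import time

def parse_seconds(argv):
    """Return the requested sleep duration in seconds, defaulting to 1."""
    if len(argv) < 2:
        return 1.0
    try:
        seconds = float(argv[1])
    except ValueError:
        raise SystemExit("usage: sleep.py [seconds]")
    if seconds < 0:
        raise SystemExit("usage: sleep.py [seconds]")
    return seconds

if __name__ == "__main__" and len(sys.argv) > 1:
    time.sleep(parse_seconds(sys.argv))
```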
Sleeping in a batch file | 166,044 | 129 | 2008-10-03T09:10:29Z | 1,304,768 | 7 | 2009-08-20T08:28:55Z | [
"python",
"batch-file"
] | When writing a batch file to automate something on a Windows box, I've needed to pause its execution for several seconds (usually in a test/wait loop, waiting for a process to start). At the time, the best solution I could find uses ping (I kid you not) to achieve the desired effect. I've found a better write-up of it [here](http://malektips.com/dos0017.html), which describes a callable "wait.bat", implemented as follows:
```
@ping 127.0.0.1 -n 2 -w 1000 > nul
@ping 127.0.0.1 -n %1% -w 1000> nul
```
You can then include calls to wait.bat in your own batch file, passing in the number of seconds to sleep.
[Apparently the Windows 2003 Resource Kit provides a Unix-like sleep command](http://malektips.com/xp_dos_0002.html) (at last!). In the meantime, for those of us still using Windows XP, Windows 2000 or (sadly) [Windows NT](http://en.wikipedia.org/wiki/Windows_NT), is there a better way?
I modified the `sleep.py` script in the [accepted answer](http://stackoverflow.com/questions/166044/sleeping-in-a-dos-batch-file#166290), so that it defaults to one second if no arguments are passed on the command line:
```
import time, sys
time.sleep(float(sys.argv[1]) if len(sys.argv) > 1 else 1)
``` | You can use ping:
```
ping 127.0.0.1 -n 11 -w 1000 >nul: 2>nul:
```
will wait 10 seconds.
The reason you have to use 11 is that the first ping goes out immediately, not after one second. The number should always be one more than the number of seconds you want to wait.
Keep in mind that the purpose of the `-w` is not to wait one second, it's to ensure that you wait no *more* than one second in the event that there are network problems. `ping` on its own will send one ICMP packet per second. It's probably not required for localhost but old habits die hard. |
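The off-by-one is easy to internalise: the first packet fires at t=0 and each later packet is spaced roughly a second apart, so n pings span about n - 1 seconds. A throwaway sketch of that arithmetic:

```python
def ping_count(seconds):
    # First packet goes out immediately; packets 2..n each add ~1 s of
    # spacing, so waiting `seconds` seconds needs seconds + 1 packets.
    return seconds + 1

def delay_for(count):
    # Inverse: n packets spaced 1 s apart span (n - 1) seconds of delay.
    return count - 1
```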