title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
How to merge two Python dictionaries in a single expression? | 38,987 | 1,867 | 2008-09-02T07:44:30Z | 19,279,501 | 10 | 2013-10-09T18:09:08Z | [
"python",
"dictionary",
"mapping",
"expression",
"idioms"
] | I have two Python dictionaries, and I want to write a single expression that returns these two dictionaries, merged. The `update()` method would be what I need, if it returned its result instead of modifying a dict in-place.
```
>>> x = {'a':1, 'b': 2}
>>> y = {'b':10, 'c': 11}
>>> z = x.update(y)
>>> print z
None
>>> x
{'a': 1, 'b': 10, 'c': 11}
```
How can I get that final merged dict in z, not x?
(To be extra-clear, the last-one-wins conflict-handling of `dict.update()` is what I'm looking for as well.) | In python3, the `items` method [no longer returns a list](http://docs.python.org/dev/whatsnew/3.0.html#views-and-iterators-instead-of-lists), but rather a *view*, which acts like a set. In this case you'll need to take the set union since concatenating with `+` won't work:
```
dict(x.items() | y.items())
```
For python3-like behavior in version 2.7, the `viewitems` method should work in place of `items`:
```
dict(x.viewitems() | y.viewitems())
```
I prefer this notation anyways since it seems more natural to think of it as a set union operation rather than concatenation (as the title shows).
**Edit:**
A couple more points for python 3. First, note that the `dict(x, **y)` trick won't work in python 3 unless the keys in `y` are strings.
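A minimal demonstration of that failure mode, using made-up dicts (Python 3):

```python
x = {'a': 1}
y = {1: 'one'}  # non-string key

# In Python 3, dict(x, **y) passes y's entries as keyword arguments,
# and keyword-argument names must be strings, so this raises TypeError.
try:
    z = dict(x, **y)
except TypeError as exc:
    print(exc)
```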
Also, Raymond Hettinger's ChainMap [answer](http://stackoverflow.com/a/16259217/386279) is pretty elegant, since it can take an arbitrary number of dicts as arguments, but [from the docs](http://docs.python.org/dev/library/collections) it sequentially searches through a list of all the dicts for each lookup:
> Lookups search the underlying mappings successively until a key is found.
This can slow you down if you have a lot of lookups in your application:
```
In [1]: from collections import ChainMap
In [2]: from string import ascii_uppercase as up, ascii_lowercase as lo; x = dict(zip(lo, up)); y = dict(zip(up, lo))
In [3]: chainmap_dict = ChainMap(y, x)
In [4]: union_dict = dict(x.items() | y.items())
In [5]: timeit for k in union_dict: union_dict[k]
100000 loops, best of 3: 2.15 µs per loop
In [6]: timeit for k in chainmap_dict: chainmap_dict[k]
10000 loops, best of 3: 27.1 µs per loop
```
So about an order of magnitude slower for lookups. I'm a fan of ChainMap, but it looks less practical where there may be many lookups. |
How to merge two Python dictionaries in a single expression? | 38,987 | 1,867 | 2008-09-02T07:44:30Z | 26,853,961 | 1,255 | 2014-11-10T22:11:48Z | [
"python",
"dictionary",
"mapping",
"expression",
"idioms"
] | I have two Python dictionaries, and I want to write a single expression that returns these two dictionaries, merged. The `update()` method would be what I need, if it returned its result instead of modifying a dict in-place.
```
>>> x = {'a':1, 'b': 2}
>>> y = {'b':10, 'c': 11}
>>> z = x.update(y)
>>> print z
None
>>> x
{'a': 1, 'b': 10, 'c': 11}
```
How can I get that final merged dict in z, not x?
(To be extra-clear, the last-one-wins conflict-handling of `dict.update()` is what I'm looking for as well.) | > # How can I merge two Python dictionaries in a single expression?
Say you have two dicts and you want to merge them into a new dict without altering the original dicts:
```
x = {'a': 1, 'b': 2}
y = {'b': 3, 'c': 4}
```
The desired result is to get a new dictionary (`z`) with the values merged, and the second dict's values overwriting those from the first.
```
>>> z
{'a': 1, 'b': 3, 'c': 4}
```
A new syntax for this, proposed in [PEP 448](https://www.python.org/dev/peps/pep-0448) and [available as of Python 3.5](https://mail.python.org/pipermail/python-dev/2015-February/138564.html), is
```
z = {**x, **y}
```
And it is indeed a single expression. It is now showing as implemented in the [release schedule for 3.5, PEP 478](https://www.python.org/dev/peps/pep-0478/#features-for-3-5), and it has now made its way into the [What's New in Python 3.5](https://docs.python.org/dev/whatsnew/3.5.html#pep-448-additional-unpacking-generalizations) document.
However, since many organizations are still on Python 2, you may wish to do this in a backwards compatible way. The classically Pythonic way, available in Python 2 and Python 3.0-3.4, is to do this as a two-step process:
```
z = x.copy()
z.update(y) # which returns None since it mutates z
```
In both approaches, `y` will come second and its values will replace `x`'s values, thus `'b'` will point to `3` in our final result.
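A quick runnable check of both claims — last-one-wins precedence, and the originals being left untouched:

```python
x = {'a': 1, 'b': 2}
y = {'b': 3, 'c': 4}

z = x.copy()
z.update(y)  # y's values win on key collisions, so 'b' -> 3

assert z == {'a': 1, 'b': 3, 'c': 4}
assert x == {'a': 1, 'b': 2}  # x is unchanged; z is a new dict
```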
# Not yet on Python 3.5, but want a *single expression*
If you are not yet on Python 3.5, or need to write backward-compatible code, and you want this in a *single expression*, the most performant while correct approach is to put it in a function:
```
def merge_two_dicts(x, y):
'''Given two dicts, merge them into a new dict as a shallow copy.'''
z = x.copy()
z.update(y)
return z
```
and then you have a single expression:
```
z = merge_two_dicts(x, y)
```
You can also make a function to merge an undefined number of dicts, from zero to a very large number:
```
def merge_dicts(*dict_args):
'''
Given any number of dicts, shallow copy and merge into a new dict,
precedence goes to key value pairs in latter dicts.
'''
result = {}
for dictionary in dict_args:
result.update(dictionary)
return result
```
This function will work in Python 2 and 3 for all dicts. e.g. given dicts `a` to `g`:
```
z = merge_dicts(a, b, c, d, e, f, g)
```
and key value pairs in `g` will take precedence over dicts `a` to `f`, and so on.
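For instance, with three small made-up dicts the right-most value wins for each repeated key (the function is restated here so the snippet runs standalone):

```python
def merge_dicts(*dict_args):
    '''Shallow-copy merge; later dicts take precedence.'''
    result = {}
    for dictionary in dict_args:
        result.update(dictionary)
    return result

a = {'k': 1}
b = {'k': 2, 'm': 3}
c = {'k': 4}

assert merge_dicts(a, b, c) == {'k': 4, 'm': 3}  # c's 'k' wins
assert merge_dicts() == {}                       # zero dicts -> empty dict
```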
# Critiques of Other Answers
Don't use what you see in the most upvoted answer:
```
z = dict(x.items() + y.items())
```
In Python 2, you create two lists in memory for each dict, create a third list in memory with length equal to the length of the first two put together, and then discard all three lists to create the dict. **In Python 3, this will fail** because you're adding two `dict_items` objects together, not two lists -
```
>>> c = dict(a.items() + b.items())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'dict_items' and 'dict_items'
```
and you would have to explicitly create them as lists, e.g. `z = dict(list(x.items()) + list(y.items()))`. This is a waste of resources and computation power.
Similarly, taking the union of `items()` in Python 3 (`viewitems()` in Python 2.7) will also fail when values are unhashable objects (like lists, for example). Even if your values are hashable, **since sets are semantically unordered, the behavior is undefined in regards to precedence. So don't do this:**
```
>>> c = dict(a.items() | b.items())
```
This example demonstrates what happens when values are unhashable:
```
>>> x = {'a': []}
>>> y = {'b': []}
>>> dict(x.items() | y.items())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'
```
Here's an example where y should have precedence, but instead the value from x is retained due to the arbitrary order of sets:
```
>>> x = {'a': 2}
>>> y = {'a': 1}
>>> dict(x.items() | y.items())
{'a': 2}
```
Another hack you should not use:
```
z = dict(x, **y)
```
This uses the `dict` constructor, and is very fast and memory efficient (even slightly more-so than our two-step process) but unless you know precisely what is happening here (that is, the second dict is being passed as keyword arguments to the dict constructor), it's difficult to read, it's not the intended usage, and so it is not Pythonic. **Also, this fails in Python 3 when keys are not strings.**
```
>>> c = dict(a, **b)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: keyword arguments must be strings
```
From the [mailing list](https://mail.python.org/pipermail/python-dev/2010-April/099459.html), Guido van Rossum, the creator of the language, wrote:
> I am fine with
> declaring dict({}, \*\*{1:3}) illegal, since after all it is abuse of
> the \*\* mechanism.
and
> Apparently dict(x, \*\*y) is going around as "cool hack" for "call
> x.update(y) and return x". Personally I find it more despicable than
> cool.
# Less Performant But Correct Ad-hocs
These approaches are less performant, but they will provide correct behavior.
They will be *much less* performant than `copy` and `update` or the new unpacking because they iterate through each key-value pair at a higher level of abstraction, but they *do* respect the order of precedence (latter dicts have precedence).
You can also chain the dicts manually inside a dict comprehension:
```
{k: v for d in dicts for k, v in d.items()} # iteritems in Python 2.7
```
or in python 2.6 (and perhaps as early as 2.4 when generator expressions were introduced):
```
dict((k, v) for d in dicts for k, v in d.items())
```
`itertools.chain` will chain the iterators over the key-value pairs in the correct order:
```
import itertools
z = dict(itertools.chain(x.iteritems(), y.iteritems()))
```
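The Python 3 spelling just swaps `iteritems()` for `items()`; the chain order still gives `y` precedence:

```python
import itertools

x = {'a': 1, 'b': 2}
y = {'b': 3, 'c': 4}

# The items() views are chained in order, so y's ('b', 3) is seen last and wins.
z = dict(itertools.chain(x.items(), y.items()))
assert z == {'a': 1, 'b': 3, 'c': 4}
```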
# Performance Analysis
I'm only going to do the performance analysis of the usages known to behave correctly.
```
import timeit
```
The following is done on Ubuntu 14.04
In Python 2.7 (system Python):
```
>>> min(timeit.repeat(lambda: merge_two_dicts(x, y)))
0.5726828575134277
>>> min(timeit.repeat(lambda: {k: v for d in (x, y) for k, v in d.items()} ))
1.163769006729126
>>> min(timeit.repeat(lambda: dict(itertools.chain(x.iteritems(), y.iteritems()))))
1.1614501476287842
>>> min(timeit.repeat(lambda: dict((k, v) for d in (x, y) for k, v in d.items())))
2.2345519065856934
```
In Python 3.5 (deadsnakes PPA):
```
>>> min(timeit.repeat(lambda: {**x, **y}))
0.4094954460160807
>>> min(timeit.repeat(lambda: merge_two_dicts(x, y)))
0.7881555100320838
>>> min(timeit.repeat(lambda: {k: v for d in (x, y) for k, v in d.items()} ))
1.4525277839857154
>>> min(timeit.repeat(lambda: dict(itertools.chain(x.items(), y.items()))))
2.3143140770262107
>>> min(timeit.repeat(lambda: dict((k, v) for d in (x, y) for k, v in d.items())))
3.2069112799945287
``` |
How to merge two Python dictionaries in a single expression? | 38,987 | 1,867 | 2008-09-02T07:44:30Z | 28,753,078 | 22 | 2015-02-26T21:27:52Z | [
"python",
"dictionary",
"mapping",
"expression",
"idioms"
] | I have two Python dictionaries, and I want to write a single expression that returns these two dictionaries, merged. The `update()` method would be what I need, if it returned its result instead of modifying a dict in-place.
```
>>> x = {'a':1, 'b': 2}
>>> y = {'b':10, 'c': 11}
>>> z = x.update(y)
>>> print z
None
>>> x
{'a': 1, 'b': 10, 'c': 11}
```
How can I get that final merged dict in z, not x?
(To be extra-clear, the last-one-wins conflict-handling of `dict.update()` is what I'm looking for as well.) | Python 3.5 (PEP 448) allows a nicer syntax option:
```
x = {'a': 1, 'b': 1}
y = {'a': 2, 'c': 2}
final = {**x, **y}
final
# {'a': 2, 'b': 1, 'c': 2}
```
Or even
```
final = {'a': 1, 'b': 1, **x, **y}
``` |
How to merge two Python dictionaries in a single expression? | 38,987 | 1,867 | 2008-09-02T07:44:30Z | 31,812,635 | 7 | 2015-08-04T14:54:58Z | [
"python",
"dictionary",
"mapping",
"expression",
"idioms"
] | I have two Python dictionaries, and I want to write a single expression that returns these two dictionaries, merged. The `update()` method would be what I need, if it returned its result instead of modifying a dict in-place.
```
>>> x = {'a':1, 'b': 2}
>>> y = {'b':10, 'c': 11}
>>> z = x.update(y)
>>> print z
None
>>> x
{'a': 1, 'b': 10, 'c': 11}
```
How can I get that final merged dict in z, not x?
(To be extra-clear, the last-one-wins conflict-handling of `dict.update()` is what I'm looking for as well.) | Simple solution using itertools that preserves order (latter dicts have precedence)
```
import itertools as it
merge = lambda *args: dict(it.chain.from_iterable(it.imap(dict.iteritems, args)))
```
And it's usage:
```
>>> x = {'a':1, 'b': 2}
>>> y = {'b':10, 'c': 11}
>>> merge(x, y)
{'a': 1, 'b': 10, 'c': 11}
>>> z = {'c': 3, 'd': 4}
>>> merge(x, y, z)
{'a': 1, 'b': 10, 'c': 3, 'd': 4}
``` |
Search and replace a line in a file in Python | 39,086 | 170 | 2008-09-02T09:19:04Z | 39,110 | 111 | 2008-09-02T09:42:21Z | [
"python",
"file"
] | I want to loop over the contents of a text file and do a search and replace on some lines and write the result back to the file. I could first load the whole file in memory and then write it back, but that probably is not the best way to do it.
What is the best way to do this, within the following code?
```
f = open(file)
for line in f:
if line.contains('foo'):
newline = line.replace('foo', 'bar')
# how to write this newline back to the file
``` | I guess something like this should do it. It basically writes the content to a new file and replaces the old file with the new file:
```
from tempfile import mkstemp
from shutil import move
from os import remove, close
def replace(file_path, pattern, subst):
#Create temp file
fh, abs_path = mkstemp()
with open(abs_path,'w') as new_file:
with open(file_path) as old_file:
for line in old_file:
new_file.write(line.replace(pattern, subst))
close(fh)
#Remove original file
remove(file_path)
#Move new file
move(abs_path, file_path)
``` |
Search and replace a line in a file in Python | 39,086 | 170 | 2008-09-02T09:19:04Z | 39,113 | 8 | 2008-09-02T09:42:57Z | [
"python",
"file"
] | I want to loop over the contents of a text file and do a search and replace on some lines and write the result back to the file. I could first load the whole file in memory and then write it back, but that probably is not the best way to do it.
What is the best way to do this, within the following code?
```
f = open(file)
for line in f:
if line.contains('foo'):
newline = line.replace('foo', 'bar')
# how to write this newline back to the file
``` | As lassevk suggests, write out the new file as you go, here is some example code:
```
fin = open("a.txt")
fout = open("b.txt", "wt")
for line in fin:
fout.write( line.replace('foo', 'bar') )
fin.close()
fout.close()
``` |
Search and replace a line in a file in Python | 39,086 | 170 | 2008-09-02T09:19:04Z | 290,494 | 182 | 2008-11-14T15:47:40Z | [
"python",
"file"
] | I want to loop over the contents of a text file and do a search and replace on some lines and write the result back to the file. I could first load the whole file in memory and then write it back, but that probably is not the best way to do it.
What is the best way to do this, within the following code?
```
f = open(file)
for line in f:
if line.contains('foo'):
newline = line.replace('foo', 'bar')
# how to write this newline back to the file
``` | The shortest way would probably be to use the [fileinput module](http://docs.python.org/2/library/fileinput.html?highlight=fileinput#fileinput). For example, the following adds line numbers to a file, in-place:
```
import fileinput
for line in fileinput.input("test.txt", inplace=True):
print "%d: %s" % (fileinput.filelineno(), line),
```
What happens here is:
1. The original file is moved to a backup file
2. The standard output is redirected to the original file within the loop
3. Thus any `print` statements write back into the original file
`fileinput` has more bells and whistles. For example, it can be used to automatically operate on all files in `sys.argv[1:]`, without your having to iterate over them explicitly. Starting with Python 3.2 it also provides a convenient context manager for use in a `with` statement.
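A sketch of the context-manager form (Python 3.2+); the file name and contents here are made up:

```python
import fileinput

# Create a throwaway file to edit (illustration only).
with open("demo.txt", "w") as f:
    f.write("foo one\nfoo two\n")

# inplace=True redirects print() output back into the file being read.
with fileinput.input("demo.txt", inplace=True) as fin:
    for line in fin:
        print(line.replace("foo", "bar"), end="")
```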
---
While `fileinput` is great for throwaway scripts, I would be wary of using it in real code because admittedly it's not very readable or familiar. In real (production) code it's worthwhile to spend just a few more lines of code to make the process explicit and thus make the code readable.
There are two options:
1. The file is not overly large, and you can just read it wholly to memory. Then close the file, reopen it in writing mode and write the modified contents back.
2. The file is too large to be stored in memory; you can move it over to a temporary file and open that, reading it line by line, writing back into the original file. Note that this requires twice the storage. |
Search and replace a line in a file in Python | 39,086 | 170 | 2008-09-02T09:19:04Z | 315,088 | 52 | 2008-11-24T19:02:28Z | [
"python",
"file"
] | I want to loop over the contents of a text file and do a search and replace on some lines and write the result back to the file. I could first load the whole file in memory and then write it back, but that probably is not the best way to do it.
What is the best way to do this, within the following code?
```
f = open(file)
for line in f:
if line.contains('foo'):
newline = line.replace('foo', 'bar')
# how to write this newline back to the file
``` | Here's another example that was tested, and will match search & replace patterns:
```
import fileinput
import sys
def replaceAll(file,searchExp,replaceExp):
for line in fileinput.input(file, inplace=1):
if searchExp in line:
line = line.replace(searchExp,replaceExp)
sys.stdout.write(line)
```
Example use:
```
replaceAll("/fooBar.txt", "Hello World!", "Goodbye World.")
``` |
Search and replace a line in a file in Python | 39,086 | 170 | 2008-09-02T09:19:04Z | 1,388,570 | 30 | 2009-09-07T10:07:25Z | [
"python",
"file"
] | I want to loop over the contents of a text file and do a search and replace on some lines and write the result back to the file. I could first load the whole file in memory and then write it back, but that probably is not the best way to do it.
What is the best way to do this, within the following code?
```
f = open(file)
for line in f:
if line.contains('foo'):
newline = line.replace('foo', 'bar')
# how to write this newline back to the file
``` | This should work: (inplace editing)
```
import fileinput
# Does a list of files, and
# redirects STDOUT to the file in question
for line in fileinput.input(files, inplace = 1):
print line.replace("foo", "bar"),
``` |
Search and replace a line in a file in Python | 39,086 | 170 | 2008-09-02T09:19:04Z | 13,641,746 | 13 | 2012-11-30T08:51:17Z | [
"python",
"file"
] | I want to loop over the contents of a text file and do a search and replace on some lines and write the result back to the file. I could first load the whole file in memory and then write it back, but that probably is not the best way to do it.
What is the best way to do this, within the following code?
```
f = open(file)
for line in f:
if line.contains('foo'):
newline = line.replace('foo', 'bar')
# how to write this newline back to the file
``` | Based on the answer by Thomas Watnedal.
However, this does not answer the line-by-line part of the original question exactly. The function can still replace on a line-by-line basis.
This implementation replaces the file contents without using temporary files; as a consequence, file permissions remain unchanged.
Also, using `re.sub` instead of `replace` allows regex replacement instead of plain-text replacement only.
Reading the file as a single string instead of line by line allows for multiline matching and replacement.
```
import re
def replace(file, pattern, subst):
# Read contents from file as a single string
file_handle = open(file, 'r')
file_string = file_handle.read()
file_handle.close()
# Use RE package to allow for replacement (also allowing for (multiline) REGEX)
file_string = (re.sub(pattern, subst, file_string))
# Write contents to file.
# Using mode 'w' truncates the file.
file_handle = open(file, 'w')
file_handle.write(file_string)
file_handle.close()
``` |
Search and replace a line in a file in Python | 39,086 | 170 | 2008-09-02T09:19:04Z | 21,857,132 | 7 | 2014-02-18T14:43:02Z | [
"python",
"file"
] | I want to loop over the contents of a text file and do a search and replace on some lines and write the result back to the file. I could first load the whole file in memory and then write it back, but that probably is not the best way to do it.
What is the best way to do this, within the following code?
```
f = open(file)
for line in f:
if line.contains('foo'):
newline = line.replace('foo', 'bar')
# how to write this newline back to the file
``` | If you're wanting a generic function that replaces *any* text with some other text, this is likely the best way to go, particularly if you're a fan of regex's:
```
import re
def replace( filePath, text, subs, flags=0 ):
with open( filePath, "r+" ) as file:
fileContents = file.read()
textPattern = re.compile( re.escape( text ), flags )
fileContents = textPattern.sub( subs, fileContents )
file.seek( 0 )
file.truncate()
file.write( fileContents )
``` |
Finding a file in a Python module distribution | 39,104 | 12 | 2008-09-02T09:40:26Z | 39,659 | 14 | 2008-09-02T14:26:07Z | [
"python",
"distutils"
] | I've written a Python package that includes a bsddb database of pre-computed values for one of the more time-consuming computations. For simplicity, my setup script installs the database file in the same directory as the code which accesses the database (on Unix, something like /usr/lib/python2.5/site-packages/mypackage/).
How do I store the final location of the database file so my code can access it? Right now, I'm using a hack based on the `__file__` variable in the module which accesses the database:
```
dbname = os.path.join(os.path.dirname(__file__), "database.dat")
```
It works, but it seems... hackish. Is there a better way to do this? I'd like to have the setup script just grab the final installation location from the distutils module and stuff it into a "dbconfig.py" file that gets installed alongside the code that accesses the database. | Try using pkg\_resources, which is part of setuptools (and available on all of the pythons I have access to right now):
```
>>> import pkg_resources
>>> pkg_resources.resource_filename(__name__, "foo.config")
'foo.config'
>>> pkg_resources.resource_filename('tempfile', "foo.config")
'/usr/lib/python2.4/foo.config'
```
There's more discussion about using pkg\_resources to get resources on the [eggs](http://peak.telecommunity.com/DevCenter/PythonEggs#accessing-package-resources) page and the [pkg\_resources](http://peak.telecommunity.com/DevCenter/PkgResources) page.
Also note, where possible it's probably advisable to use pkg\_resources.resource\_stream or pkg\_resources.resource\_string because if the package is part of an egg, resource\_filename will copy the file to a temporary directory. |
Finding a file in a Python module distribution | 39,104 | 12 | 2008-09-02T09:40:26Z | 9,918,496 | 9 | 2012-03-29T02:52:42Z | [
"python",
"distutils"
] | I've written a Python package that includes a bsddb database of pre-computed values for one of the more time-consuming computations. For simplicity, my setup script installs the database file in the same directory as the code which accesses the database (on Unix, something like /usr/lib/python2.5/site-packages/mypackage/).
How do I store the final location of the database file so my code can access it? Right now, I'm using a hack based on the `__file__` variable in the module which accesses the database:
```
dbname = os.path.join(os.path.dirname(__file__), "database.dat")
```
It works, but it seems... hackish. Is there a better way to do this? I'd like to have the setup script just grab the final installation location from the distutils module and stuff it into a "dbconfig.py" file that gets installed alongside the code that accesses the database. | Use `pkgutil.get_data`. Itâs the cousin of `pkg_resources.resource_stream`, but in the standard library, and should work with flat filesystem installs as well as zipped packages and other importers. |
What is the best way to do Bit Field manipulation in Python? | 39,663 | 25 | 2008-09-02T14:28:40Z | 39,760 | 7 | 2008-09-02T15:13:44Z | [
"python",
"udp",
"bits",
"bit-fields"
] | I'm reading some MPEG Transport Stream protocol over UDP and it has some funky bitfields in it (length 13 for example). I'm using the "struct" library to do the broad unpacking, but is there a simple way to say "Grab the next 13 bits" rather than have to hand-tweak the bit manipulation? I'd like something like the way C does bit fields (without having to revert to C).
Suggestions? | It's an often-asked question. There's an [ASPN Cookbook](http://code.activestate.com/recipes/113799/) entry on it that has served me in the past.
And there is an [extensive page of requirements one person would like to see from a module doing this.](http://wiki.python.org/moin/BitManipulation) |
What is the best way to do Bit Field manipulation in Python? | 39,663 | 25 | 2008-09-02T14:28:40Z | 1,086,668 | 25 | 2009-07-06T12:20:13Z | [
"python",
"udp",
"bits",
"bit-fields"
] | I'm reading some MPEG Transport Stream protocol over UDP and it has some funky bitfields in it (length 13 for example). I'm using the "struct" library to do the broad unpacking, but is there a simple way to say "Grab the next 13 bits" rather than have to hand-tweak the bit manipulation? I'd like something like the way C does bit fields (without having to revert to C).
Suggestions? | The [bitstring](http://python-bitstring.googlecode.com) module is designed to address just this problem. It will let you read, modify and construct data using bits as the basic building blocks. The latest versions are for Python 2.6 or later (including Python 3) but version 1.0 supported Python 2.4 and 2.5 as well.
A relevant example for you might be this, which strips out all the null packets from a transport stream (and quite possibly uses your 13 bit field?):
```
from bitstring import Bits, BitStream
# Opening from a file means that it won't be all read into memory
s = Bits(filename='test.ts')
outfile = open('test_nonull.ts', 'wb')
# Cut the stream into 188 byte packets
for packet in s.cut(188*8):
# Take a 13 bit slice and interpret as an unsigned integer
PID = packet[11:24].uint
# Write out the packet if the PID doesn't indicate a 'null' packet
if PID != 8191:
# The 'bytes' property converts back to a string.
outfile.write(packet.bytes)
```
Here's another example including reading from bitstreams:
```
# You can create from hex, binary, integers, strings, floats, files...
# This has a hex code followed by two 12 bit integers
s = BitStream('0x000001b3, uint:12=352, uint:12=288')
# Append some other bits
s += '0b11001, 0xff, int:5=-3'
# read back as 32 bits of hex, then two 12 bit unsigned integers
start_code, width, height = s.readlist('hex:32, 2*uint:12')
# Skip some bits then peek at next bit value
s.pos += 4
if s.peek(1):
flags = s.read(9)
```
You can use standard slice notation to slice, delete, reverse, overwrite, etc. at the bit level, and there are bit level find, replace, split etc. functions. Different endiannesses are also supported.
```
# Replace every '1' bit by 3 bits
s.replace('0b1', '0b001')
# Find all occurrences of a bit sequence
bitposlist = list(s.findall('0b01000'))
# Reverse bits in place
s.reverse()
```
The full documentation is [here](http://packages.python.org/bitstring/). |
PGP signatures from Python? | 39,929 | 5 | 2008-09-02T16:19:11Z | 40,069 | 8 | 2008-09-02T17:15:23Z | [
"python",
"security",
"cross-platform",
"gnupg"
] | What is the easiest way to create and verify PGP/GPG signatures from within a Python application?
I can call pgp or gpg using subprocess and parse the output, but I was looking for a way that didn't require an external program to be installed (my application is cross-platform mac/windows/unix). | I think [GPGME](http://www.gnupg.org/related_software/gpgme/) and the [PyMe Python wrapper](http://pyme.sourceforge.net/) should do what you need. |
javascript locals()? | 39,960 | 17 | 2008-09-02T16:29:30Z | 40,173 | 13 | 2008-09-02T18:01:29Z | [
"javascript",
"python"
] | In python one can get a dictionary of all local and global variables in the current scope with the built-in functions locals() and globals(). Is there some equivalent way of doing this in javascript? For instance, I would like to do something like the following:
```
var foo = function(){ alert('foo'); };
var bar = function(){ alert('bar'); };
var s = 'foo';
locals()[s](); // alerts 'foo'
```
Is this at all possible, or should I just be using a local object for the lookup? | * locals() - No.
* globals() - Yes.
`window` is a reference to the global scope, like `globals()` in python.
```
globals()["foo"]
```
is the same as:
```
window["foo"]
``` |
Python deployment and /usr/bin/env portability | 40,705 | 12 | 2008-09-02T21:21:14Z | 40,715 | 8 | 2008-09-02T21:25:40Z | [
"python",
"executable",
"environment",
"shebang"
] | At the beginning of all my executable Python scripts I put the [shebang](http://en.wikipedia.org/wiki/Shebang_(Unix)) line:
```
#!/usr/bin/env python
```
I'm running these scripts on a system where `env python` yields a Python 2.2 environment. My scripts quickly fail because I have a manual check for a compatible Python version:
```
if sys.version_info < (2, 4):
raise ImportError("Cannot run with Python version < 2.4")
```
I don't want to have to change the shebang line on every executable file, if it's possible; however, I don't have administrative access to the machine to change the result of `env python` and I don't want to force a particular version, as in:
```
#!/usr/bin/env python2.4
```
I'd like to avoid this because system may have a newer version than Python 2.4, or may have Python 2.5 but no Python 2.4.
What's the elegant solution?
[Edit:] I wasn't specific enough in posing the question -- I'd like to let users execute the scripts without manual configuration (e.g. path alteration or symlinking in `~/bin` and ensuring your PATH has `~/bin` before the Python 2.2 path). Maybe some distribution utility is required to prevent the manual tweaks? | "env" simply executes the first thing it finds in the PATH env var. To switch to different python, prepend the directory for that python's executable to the path before invoking your script. |
Always including the user in the django template context | 41,547 | 25 | 2008-09-03T12:22:44Z | 41,558 | 18 | 2008-09-03T12:31:58Z | [
"python",
"django",
"authentication",
"session",
"cookies"
] | I am working on a small intranet site for a small company, where user should be able to post. I have imagined a very simple authentication mechanism where people just enter their email address, and gets sent a unique login url, that sets a cookie that will always identify them for future requests.
In my template setup, I have base.html, and the other pages extend this. I want to show logged in or register button in the base.html, but how can I ensure that the necessary variables are always a part of the context? It seems that each view just sets up the context as they like, and there is no global context population. Is there a way of doing this without including the user in each context creation?
Or will I have to make my own custom shortcuts to setup the context properly? | In a more general sense of not having to explicitly set variables in each view, it sounds like you want to look at writing your own [context processor](http://docs.djangoproject.com/en/dev/ref/templates/api/#writing-your-own-context-processors).
From the docs:
> A context processor has a very simple interface: It's just a Python function that takes one argument, an HttpRequest object, and returns a dictionary that gets added to the template context. Each context processor must return a dictionary. |
Always including the user in the django template context | 41,547 | 25 | 2008-09-03T12:22:44Z | 269,249 | 31 | 2008-11-06T16:05:03Z | [
"python",
"django",
"authentication",
"session",
"cookies"
] | I am working on a small intranet site for a small company, where user should be able to post. I have imagined a very simple authentication mechanism where people just enter their email address, and gets sent a unique login url, that sets a cookie that will always identify them for future requests.
In my template setup, I have base.html, and the other pages extend this. I want to show logged in or register button in the base.html, but how can I ensure that the necessary variables are always a part of the context? It seems that each view just sets up the context as they like, and there is no global context population. Is there a way of doing this without including the user in each context creation?
Or will I have to make my own custom shortcuts to setup the context properly? | @Ryan: Documentation about preprocessors is a bit small
@Staale: Adding the user to the context every time you render a template in a view is not DRY

Solution is very simple
**A**: In your settings add
```
TEMPLATE_CONTEXT_PROCESSORS = (
'myapp.processor_file_name.user',
)
```
**B**: In myapp/processor\_file\_name.py insert
```
def user(request):
if hasattr(request, 'user'):
return {'user':request.user }
return {}
```
From now on you're able to use user object functionalities in your templates.
```
{{ user.get_full_name }}
``` |
Always including the user in the django template context | 41,547 | 25 | 2008-09-03T12:22:44Z | 1,064,621 | 43 | 2009-06-30T16:14:29Z | [
"python",
"django",
"authentication",
"session",
"cookies"
] | I am working on a small intranet site for a small company, where user should be able to post. I have imagined a very simple authentication mechanism where people just enter their email address, and gets sent a unique login url, that sets a cookie that will always identify them for future requests.
In my template setup, I have base.html, and the other pages extend this. I want to show logged in or register button in the base.html, but how can I ensure that the necessary variables are always a part of the context? It seems that each view just sets up the context as they like, and there is no global context population. Is there a way of doing this without including the user in each context creation?
Or will I have to make my own custom shortcuts to setup the context properly? | There is **no need** to write a context processor for the user object if you already have the [`"django.core.context_processors.auth"`](https://docs.djangoproject.com/en/dev/topics/auth/default/#authentication-data-in-templates) in [`TEMPLATE_CONTEXT_PROCESSORS`](https://docs.djangoproject.com/en/dev/ref/settings/#std%3asetting-TEMPLATE_CONTEXT_PROCESSORS) **and** if you're using [`RequestContext`](https://docs.djangoproject.com/en/dev/ref/templates/api/#subclassing-context-requestcontext) in your views.
If you are using Django 1.4 or later, the module has been moved to `django.contrib.auth.context_processors.auth` |
Splitting tuples in Python - best practice? | 41,701 | 12 | 2008-09-03T13:48:30Z | 41,707 | 13 | 2008-09-03T13:50:24Z | [
"python",
"tuples"
] | I have a method in my Python code that returns a tuple - a row from a SQL query. Let's say it has three fields: (jobId, label, username)
For ease of passing it around between functions, I've been passing the entire tuple as a variable called 'job'. Eventually, however, I want to get at the bits, so I've been using code like this:
(jobId, label, username) = job
I've realised, however, that this is a maintenance nightmare, because now I can never add new fields to the result set without breaking all of my existing code. How should I have written this?
Here are my two best guesses:
(jobId, label, username) = (job[0], job[1], job[2])
...but that doesn't scale nicely when you have 15...20 fields
or to convert the results from the SQL query to a dictionary straight away and pass that around (I don't have control over the fact that it starts life as a tuple, that's fixed for me) | I'd say that a dictionary is definitely the best way to do it. It's easily extensible, allows you to give each value a sensible name, and Python has a lot of built-in language features for using and manipulating dictionaries. If you need to add more fields later, all you need to change is the code that converts the tuple to a dictionary and the code that actually makes use of the new values.
For example:
```
job={}
job['jobid'], job['label'], job['username']=<querycode>
``` |
Splitting tuples in Python - best practice? | 41,701 | 12 | 2008-09-03T13:48:30Z | 41,846 | 13 | 2008-09-03T14:51:48Z | [
"python",
"tuples"
] | I have a method in my Python code that returns a tuple - a row from a SQL query. Let's say it has three fields: (jobId, label, username)
For ease of passing it around between functions, I've been passing the entire tuple as a variable called 'job'. Eventually, however, I want to get at the bits, so I've been using code like this:
(jobId, label, username) = job
I've realised, however, that this is a maintenance nightmare, because now I can never add new fields to the result set without breaking all of my existing code. How should I have written this?
Here are my two best guesses:
(jobId, label, username) = (job[0], job[1], job[2])
...but that doesn't scale nicely when you have 15...20 fields
or to convert the results from the SQL query to a dictionary straight away and pass that around (I don't have control over the fact that it starts life as a tuple, that's fixed for me) | @Staale
There is a better way:
```
job = dict(zip(keys, values))
``` |
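A concrete use of the `dict(zip(...))` idiom above, with field names borrowed from the question's example row (the values are made-up placeholders):

```python
# dict(zip(keys, values)) pairs each field name with its value,
# turning a positional SQL row tuple into a name-addressable dict.
keys = ('jobid', 'label', 'username')
values = (1, 'nightly-build', 'alice')
job = dict(zip(keys, values))
print(job['label'])  # nightly-build
```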
Standard way to open a folder window in linux? | 41,969 | 6 | 2008-09-03T15:47:23Z | 42,046 | 7 | 2008-09-03T16:18:25Z | [
"python",
"linux",
"cross-platform",
"desktop"
] | I want to open a folder window, in the appropriate file manager, from within a cross-platform (windows/mac/linux) Python application.
On OSX, I can open a window in the finder with
```
os.system('open "%s"' % foldername)
```
and on Windows with
```
os.startfile(foldername)
```
What about unix/linux? Is there a standard way to do this or do I have to special case gnome/kde/etc and manually run the appropriate application (nautilus/konqueror/etc)?
This looks like something that could be specified by the [freedesktop.org](http://freedesktop.org) folks (a python module, similar to `webbrowser`, would also be nice!). | ```
os.system('xdg-open "%s"' % foldername)
```
`xdg-open` can be used for files/urls also |
What is a tuple useful for? | 42,034 | 25 | 2008-09-03T16:13:06Z | 42,048 | 32 | 2008-09-03T16:18:53Z | [
"python",
"tuples"
] | I am learning Python for a class now, and we just covered tuples as one of the data types. I read the Wikipedia page on it, but, I could not figure out where such a data type would be useful in practice. Can I have some examples, perhaps in Python, where an immutable set of numbers would be needed? How is this different from a list? | * Tuples are used whenever you want to return multiple results from a function.
* Since they're immutable, they can be used as keys for a dictionary (lists can't). |
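A small sketch illustrating both bullet points; the function and key names are illustrative choices:

```python
# 1) Returning multiple results as a tuple, unpacked on assignment.
def min_max(values):
    return min(values), max(values)   # packed into a tuple on return

lo, hi = min_max([3, 1, 4, 1, 5])

# 2) Tuples are hashable, so they work as dict keys; [lo, hi] would raise TypeError.
cache = {}
cache[(lo, hi)] = 'computed'
print(lo, hi, cache[(lo, hi)])  # 1 5 computed
```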
What is a tuple useful for? | 42,034 | 25 | 2008-09-03T16:13:06Z | 42,052 | 14 | 2008-09-03T16:20:59Z | [
"python",
"tuples"
] | I am learning Python for a class now, and we just covered tuples as one of the data types. I read the Wikipedia page on it, but, I could not figure out where such a data type would be useful in practice. Can I have some examples, perhaps in Python, where an immutable set of numbers would be needed? How is this different from a list? | Tuples make good dictionary keys when you need to combine more than one piece of data into your key and don't feel like making a class for it.
```
a = {}
a[(1,2,"bob")] = "hello!"
a[("Hello","en-US")] = "Hi There!"
```
I've used this feature primarily to create a dictionary with keys that are coordinates of the vertices of a mesh. However, in my particular case, the exact comparison of the floats involved worked fine which might not always be true for your purposes [in which case I'd probably convert your incoming floats to some kind of fixed-point integer] |
What is a tuple useful for? | 42,034 | 25 | 2008-09-03T16:13:06Z | 48,414 | 7 | 2008-09-07T13:12:28Z | [
"python",
"tuples"
] | I am learning Python for a class now, and we just covered tuples as one of the data types. I read the Wikipedia page on it, but, I could not figure out where such a data type would be useful in practice. Can I have some examples, perhaps in Python, where an immutable set of numbers would be needed? How is this different from a list? | I like [this explanation](http://jtauber.com/blog/2006/04/15/python_tuples_are_not_just_constant_lists/).
Basically, you should use tuples when there's a constant structure (the 1st position always holds one type of value and the second another, and so forth), and lists should be used for lists of homogeneous values.
Of course there's always exceptions, but this is a good general guideline. |
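A short illustration of that guideline, with made-up example data: a tuple where each position has a fixed meaning, versus a list of homogeneous values:

```python
# Tuple: constant structure (position 0 is always the name, 1 the age).
person = ('Alice', 30)
name, age = person

# List: homogeneous values that can grow and shrink.
scores = [88, 92, 75]
scores.append(60)
print(name, age, scores)
```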
Best way to extract text from a Word doc without using COM/automation? | 42,482 | 15 | 2008-09-03T20:18:47Z | 43,364 | 8 | 2008-09-04T08:52:01Z | [
"python",
"ms-word"
] | Is there a reasonable way to extract plain text from a Word file that doesn't depend on COM automation? (This is a a feature for a web app deployed on a non-Windows platform - that's non-negotiable in this case.)
Antiword seems like it might be a reasonable option, but it seems like it might be abandoned.
A Python solution would be ideal, but doesn't appear to be available. | I use catdoc or antiword for this, whatever gives the result that is the easiest to parse. I have embedded this in python functions, so it is easy to use from the parsing system (which is written in python).
```
import os
def doc_to_text_catdoc(filename):
(fi, fo, fe) = os.popen3('catdoc -w "%s"' % filename)
fi.close()
retval = fo.read()
erroroutput = fe.read()
fo.close()
fe.close()
if not erroroutput:
return retval
else:
raise OSError("Executing the command caused an error: %s" % erroroutput)
# similar doc_to_text_antiword()
```
The -w switch to catdoc turns off line wrapping, BTW. |
Best way to extract text from a Word doc without using COM/automation? | 42,482 | 15 | 2008-09-03T20:18:47Z | 1,979,931 | 14 | 2009-12-30T12:23:05Z | [
"python",
"ms-word"
] | Is there a reasonable way to extract plain text from a Word file that doesn't depend on COM automation? (This is a a feature for a web app deployed on a non-Windows platform - that's non-negotiable in this case.)
Antiword seems like it might be a reasonable option, but it seems like it might be abandoned.
A Python solution would be ideal, but doesn't appear to be available. | (Same answer as [extracting text from MS word files in python](http://stackoverflow.com/questions/125222/extracting-text-from-ms-word-files-in-python))
Use the native Python docx module which I made this week. Here's how to extract all the text from a doc:
```
from docx import *  # the old docx module exposed these helpers at module level

document = opendocx('Hello world.docx')
# This location is where most document content lives
docbody = document.xpath('/w:document/w:body', namespaces=wordnamespaces)[0]
# Extract all text
print getdocumenttext(document)
```
See [Python DocX site](https://python-docx.readthedocs.org/en/latest/)
100% Python, no COM, no .net, no Java, no parsing serialized XML with regexs, no crap. |
Python re.sub MULTILINE caret match | 42,581 | 31 | 2008-09-03T21:00:33Z | 42,597 | 75 | 2008-09-03T21:08:02Z | [
"python",
"regex",
"python-2.x"
] | The Python docs say:
> re.MULTILINE: When specified, the pattern character '^' matches at the beginning of the string and at the beginning of each line (immediately following each newline)... By default, '^' matches only at the beginning of the string...
So what's going on when I get the following unexpected result?
```
>>> import re
>>> s = """// The quick brown fox.
... // Jumped over the lazy dog."""
>>> re.sub('^//', '', s, re.MULTILINE)
' The quick brown fox.\n// Jumped over the lazy dog.'
``` | Look at the definition of [`re.sub`](http://docs.python.org/library/re.html#re.sub):
```
sub(pattern, repl, string[, count])
```
The 4th argument is the count, you are using `re.MULTILINE` (which is 8) as the count, not as a flag.
You have to compile your regex if you wish to use flags.
```
re.sub(re.compile('^//', re.MULTILINE), '', s)
```
A `flags` argument was added in Python 2.7, so the full definition is now:
```
re.sub(pattern, repl, string[, count, flags])
```
Which means that:
```
re.sub('^//', '', s, flags=re.MULTILINE)
```
works. |
Get Last Day of the Month in Python | 42,950 | 306 | 2008-09-04T00:54:44Z | 42,997 | 43 | 2008-09-04T01:26:33Z | [
"python",
"date"
] | Is there a way using Python's standard library to easily determine (i.e. one function call) the last day of a given month?
If the standard library doesn't support that, does the dateutil package support this? | EDIT: See @[Blair Conrad's answer](#43663) for a cleaner solution
---
```
>>> import datetime
>>> datetime.date (2000, 2, 1) - datetime.timedelta (days = 1)
datetime.date(2000, 1, 31)
>>>
``` |
Get Last Day of the Month in Python | 42,950 | 306 | 2008-09-04T00:54:44Z | 43,088 | 32 | 2008-09-04T02:25:50Z | [
"python",
"date"
] | Is there a way using Python's standard library to easily determine (i.e. one function call) the last day of a given month?
If the standard library doesn't support that, does the dateutil package support this? | EDIT: see my other answer. It has a better implementation than this one, which I leave here just in case someone's interested in seeing how one might "roll your own" calculator.
@[John Millikin](#42997) gives a good answer, with the added complication of calculating the first day of the next month.
The following isn't particularly elegant, but to figure out the last day of the month that any given date lives in, you could try:
```
def last_day_of_month(date):
if date.month == 12:
return date.replace(day=31)
return date.replace(month=date.month+1, day=1) - datetime.timedelta(days=1)
>>> last_day_of_month(datetime.date(2002, 1, 17))
datetime.date(2002, 1, 31)
>>> last_day_of_month(datetime.date(2002, 12, 9))
datetime.date(2002, 12, 31)
>>> last_day_of_month(datetime.date(2008, 2, 14))
datetime.date(2008, 2, 29)
``` |
Get Last Day of the Month in Python | 42,950 | 306 | 2008-09-04T00:54:44Z | 43,663 | 552 | 2008-09-04T12:44:12Z | [
"python",
"date"
] | Is there a way using Python's standard library to easily determine (i.e. one function call) the last day of a given month?
If the standard library doesn't support that, does the dateutil package support this? | I didn't notice this earlier when I was looking at the [documentation for the calendar module](https://docs.python.org/2/library/calendar.html), but a method called [monthrange](http://docs.python.org/library/calendar.html#calendar.monthrange) provides this information:
> **monthrange(year, month)**
> Returns weekday of first day of the month and number of days in month, for the specified year and month.
```
>>> import calendar
>>> calendar.monthrange(2002,1)
(1, 31)
>>> calendar.monthrange(2008,2)
(4, 29)
>>> calendar.monthrange(2100,2)
(0, 28)
```
so:
```
calendar.monthrange(year, month)[1]
```
seems like the simplest way to go.
Just to be clear, `monthrange` supports leap years as well:
```
>>> from calendar import monthrange
>>> monthrange(2012, 2)
(2, 29)
```
[My previous answer](http://stackoverflow.com/questions/42950/get-last-day-of-the-month-in-python#43088) still works, but is clearly suboptimal. |
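For convenience, the `monthrange` call above can be wrapped into a one-line helper that maps any date to the last day of its month (the helper name is an illustrative choice):

```python
import calendar
import datetime

def last_day_of_month(d):
    # monthrange returns (weekday of day 1, number of days in month)
    return d.replace(day=calendar.monthrange(d.year, d.month)[1])

print(last_day_of_month(datetime.date(2008, 2, 14)))  # 2008-02-29
```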
Get Last Day of the Month in Python | 42,950 | 306 | 2008-09-04T00:54:44Z | 356,535 | 10 | 2008-12-10T15:47:57Z | [
"python",
"date"
] | Is there a way using Python's standard library to easily determine (i.e. one function call) the last day of a given month?
If the standard library doesn't support that, does the dateutil package support this? | Another solution would be to do something like this:
```
from datetime import datetime
def last_day_of_month(year, month):
""" Work out the last day of the month """
last_days = [31, 30, 29, 28, 27]
for i in last_days:
try:
end = datetime(year, month, i)
except ValueError:
continue
else:
return end.date()
return None
```
And use the function like this:
```
>>>
>>> last_day_of_month(2008, 2)
datetime.date(2008, 2, 29)
>>> last_day_of_month(2009, 2)
datetime.date(2009, 2, 28)
>>> last_day_of_month(2008, 11)
datetime.date(2008, 11, 30)
>>> last_day_of_month(2008, 12)
datetime.date(2008, 12, 31)
``` |
Get Last Day of the Month in Python | 42,950 | 306 | 2008-09-04T00:54:44Z | 13,565,185 | 37 | 2012-11-26T12:48:40Z | [
"python",
"date"
] | Is there a way using Python's standard library to easily determine (i.e. one function call) the last day of a given month?
If the standard library doesn't support that, does the dateutil package support this? | If you don't want to import the `calendar` module, a simple two-step function can also be:
```
import datetime
def last_day_of_month(any_day):
next_month = any_day.replace(day=28) + datetime.timedelta(days=4) # this will never fail
return next_month - datetime.timedelta(days=next_month.day)
```
Outputs:
```
>>> for month in range(1, 13):
... print last_day_of_month(datetime.date(2012, month, 1))
...
2012-01-31
2012-02-29
2012-03-31
2012-04-30
2012-05-31
2012-06-30
2012-07-31
2012-08-31
2012-09-30
2012-10-31
2012-11-30
2012-12-31
``` |
Get Last Day of the Month in Python | 42,950 | 306 | 2008-09-04T00:54:44Z | 14,994,380 | 14 | 2013-02-21T04:09:09Z | [
"python",
"date"
] | Is there a way using Python's standard library to easily determine (i.e. one function call) the last day of a given month?
If the standard library doesn't support that, does the dateutil package support this? | This is actually pretty easy with `dateutil.relativedelta` (package python-dateutil for pip). `day=31` will always return the last day of the month.
Example:
```
from datetime import datetime
from dateutil.relativedelta import relativedelta
date_in_feb = datetime(2013, 2, 21)
print date_in_feb + relativedelta(day=31)  # End-of-month
>>> datetime.datetime(2013, 2, 28, 0, 0)
``` |
Get Last Day of the Month in Python | 42,950 | 306 | 2008-09-04T00:54:44Z | 23,447,523 | 7 | 2014-05-03T17:16:58Z | [
"python",
"date"
] | Is there a way using Python's standard library to easily determine (i.e. one function call) the last day of a given month?
If the standard library doesn't support that, does the dateutil package support this? | ```
from datetime import timedelta
(any_day.replace(day=1) + timedelta(days=32)).replace(day=1) - timedelta(days=1)
``` |
How to generate urls in django | 43,290 | 22 | 2008-09-04T07:36:39Z | 43,312 | 32 | 2008-09-04T07:54:57Z | [
"python",
"django",
"url",
"django-urls"
] | In Django's template language, you can use `{% url [viewname] [args] %}` to generate a URL to a specific view with parameters. How can you programatically do the same in Python code?
What I need is to create a list of menu items where each item has name, URL, and an active flag (whether it's the current page or not). This is because it will be a lot cleaner to do this in Python than the template language. | If you need to use something similar to the `{% url %}` template tag in your code, Django provides the `django.core.urlresolvers.reverse()`. The `reverse` function has the following signature:
```
reverse(viewname, urlconf=None, args=None, kwargs=None)
```
<https://docs.djangoproject.com/en/dev/ref/urlresolvers/> |
How to generate urls in django | 43,290 | 22 | 2008-09-04T07:36:39Z | 55,734 | 8 | 2008-09-11T03:05:27Z | [
"python",
"django",
"url",
"django-urls"
] | In Django's template language, you can use `{% url [viewname] [args] %}` to generate a URL to a specific view with parameters. How can you programatically do the same in Python code?
What I need is to create a list of menu items where each item has name, URL, and an active flag (whether it's the current page or not). This is because it will be a lot cleaner to do this in Python than the template language. | I'm using two different approaches in my `models.py`. The first is the `permalink` decorator:
```
from django.db.models import permalink
def get_absolute_url(self):
"""Construct the absolute URL for this Item."""
return ('project.app.views.view_name', [str(self.id)])
get_absolute_url = permalink(get_absolute_url)
```
You can also call `reverse` directly:
```
from django.core.urlresolvers import reverse
def get_absolute_url(self):
"""Construct the absolute URL for this Item."""
return reverse('project.app.views.view_name', None, [str(self.id)])
``` |
Can I write native iPhone apps using Python | 43,315 | 78 | 2008-09-04T07:59:57Z | 43,331 | 32 | 2008-09-04T08:21:31Z | [
"iphone",
"python",
"cocoa-touch"
] | Using [PyObjC](http://pyobjc.sourceforge.net/), you can use Python to write Cocoa applications for OS X. Can I write native iPhone apps using Python and if so, how? | Not currently; the only languages available to access the iPhone SDK are C/C++, Objective-C and Swift.
There is no technical reason why this could not change in the future but I wouldn't hold your breath for this happening in the short term.
That said, Objective-C and Swift really are not too scary...
> # 2016 edit
>
> Javascript with NativeScript framework is available to use now. |
Can I write native iPhone apps using Python | 43,315 | 78 | 2008-09-04T07:59:57Z | 43,358 | 51 | 2008-09-04T08:44:11Z | [
"iphone",
"python",
"cocoa-touch"
] | Using [PyObjC](http://pyobjc.sourceforge.net/), you can use Python to write Cocoa applications for OS X. Can I write native iPhone apps using Python and if so, how? | You can use PyObjC on the iPhone as well, due to the excellent work by Jay Freeman (saurik). See [iPhone Applications in Python](http://www.saurik.com/id/5).
Note that this requires a jailbroken iPhone at the moment. |
Can I write native iPhone apps using Python | 43,315 | 78 | 2008-09-04T07:59:57Z | 2,167,033 | 22 | 2010-01-30T06:03:39Z | [
"iphone",
"python",
"cocoa-touch"
] | Using [PyObjC](http://pyobjc.sourceforge.net/), you can use Python to write Cocoa applications for OS X. Can I write native iPhone apps using Python and if so, how? | Yes you can. You write your code in tinypy (which is restricted Python), then use tinypy to convert it to C++, and finally compile this with XCode into a native iPhone app. Phil Hassey has published a game called Elephants! using this approach. Here are more details,
<http://www.philhassey.com/blog/2009/12/23/elephants-is-free-on-the-app-store/> |
Can I write native iPhone apps using Python | 43,315 | 78 | 2008-09-04T07:59:57Z | 2,637,228 | 20 | 2010-04-14T12:18:15Z | [
"iphone",
"python",
"cocoa-touch"
] | Using [PyObjC](http://pyobjc.sourceforge.net/), you can use Python to write Cocoa applications for OS X. Can I write native iPhone apps using Python and if so, how? | An update to the iOS Developer Agreement means that you can use whatever you like, as long as you meet the developer guidelines. Section 3.3.1, which restricted what developers could use for iOS development, has been entirely removed.
Source: <http://daringfireball.net/2010/09/app_store_guidelines> |
Can I write native iPhone apps using Python | 43,315 | 78 | 2008-09-04T07:59:57Z | 3,684,714 | 21 | 2010-09-10T12:48:14Z | [
"iphone",
"python",
"cocoa-touch"
] | Using [PyObjC](http://pyobjc.sourceforge.net/), you can use Python to write Cocoa applications for OS X. Can I write native iPhone apps using Python and if so, how? | It seems this is now something developers are allowed to do: the iOS Developer Agreement was changed yesterday and appears to have been amended in such a way as to make embedding a Python interpreter in your application legal:
**SECTION 3.3.2 — INTERPRETERS**
**Old:**
> 3.3.2 An Application may not itself install or launch other executable
> code by any means, including without
> limitation through the use of a
> plug-in architecture, calling other
> frameworks, other APIs or otherwise.
> Unless otherwise approved by Apple in
> writing, no interpreted code may be
> downloaded or used in an Application
> except for code that is interpreted
> and run by Appleâs Documented APIs and
> built-in interpreter(s).
> Notwithstanding the foregoing, with
> Appleâs prior written consent, an
> Application may use embedded
> interpreted code in a limited way if
> such use is solely for providing minor
> features or functionality that are
> consistent with the intended and
> advertised purpose of the Application.
**New:**
> 3.3.2 An Application may not download or install executable code.
> Interpreted code may only be used in
> an Application if all scripts, code
> and interpreters are packaged in the
> Application and not downloaded. The
> only exception to the foregoing is
> scripts and code downloaded and run by
> Appleâs built-in WebKit framework. |
Can I write native iPhone apps using Python | 43,315 | 78 | 2008-09-04T07:59:57Z | 11,448,458 | 15 | 2012-07-12T09:07:42Z | [
"iphone",
"python",
"cocoa-touch"
] | Using [PyObjC](http://pyobjc.sourceforge.net/), you can use Python to write Cocoa applications for OS X. Can I write native iPhone apps using Python and if so, how? | Yes, nowadays you can develop apps for iOS in Python.
There are two frameworks that you may want to check out: [Kivy](http://kivy.org/) and [PyMob](http://pyzia.com/technology.html).
Please consider the answers to [this question](http://stackoverflow.com/questions/10664196/is-it-possible-to-use-python-to-write-cross-platform-apps-for-both-ios-and-andro) too, as they are more up-to-date than this one. |
How to find the mime type of a file in python? | 43,580 | 101 | 2008-09-04T12:07:27Z | 43,588 | 53 | 2008-09-04T12:12:20Z | [
"python",
"mime"
] | Let's say you want to save a bunch of files somewhere, for instance in BLOBs. Let's say you want to dish these files out via a web page and have the client automatically open the correct application/viewer.
Assumption: The browser figures out which application/viewer to use by the mime-type (content-type?) header in the HTTP response.
Based on that assumption, in addition to the bytes of the file, you also want to save the MIME type.
How would you find the MIME type of a file? I'm currently on a Mac, but this should also work on Windows.
Does the browser add this information when posting the file to the web page?
Is there a neat python library for finding this information? A WebService or (even better) a downloadable database? | The [mimetypes module](https://docs.python.org/library/mimetypes.html) in the standard library will determine/guess the MIME type from a file extension.
If users are uploading files the HTTP post will contain the MIME type of the file alongside the data. For example, Django makes this data available as an attribute of the [UploadedFile](https://docs.djangoproject.com/en/dev/topics/http/file-uploads/#uploadedfile-objects) object. |
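A quick demonstration of the `mimetypes` module mentioned above; note that the guess is driven purely by the file extension, not the file contents:

```python
import mimetypes

# guess_type returns a (type, encoding) pair based on the extension alone.
print(mimetypes.guess_type('report.pdf'))   # ('application/pdf', None)
print(mimetypes.guess_type('page.html'))    # ('text/html', None)
```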
How to find the mime type of a file in python? | 43,580 | 101 | 2008-09-04T12:07:27Z | 1,662,074 | 8 | 2009-11-02T15:48:09Z | [
"python",
"mime"
] | Let's say you want to save a bunch of files somewhere, for instance in BLOBs. Let's say you want to dish these files out via a web page and have the client automatically open the correct application/viewer.
Assumption: The browser figures out which application/viewer to use by the mime-type (content-type?) header in the HTTP response.
Based on that assumption, in addition to the bytes of the file, you also want to save the MIME type.
How would you find the MIME type of a file? I'm currently on a Mac, but this should also work on Windows.
Does the browser add this information when posting the file to the web page?
Is there a neat python library for finding this information? A WebService or (even better) a downloadable database? | in python 2.6:
```
mime = subprocess.Popen("/usr/bin/file --mime PATH", shell=True, \
stdout=subprocess.PIPE).communicate()[0]
``` |
How to find the mime type of a file in python? | 43,580 | 101 | 2008-09-04T12:07:27Z | 2,133,843 | 39 | 2010-01-25T16:39:06Z | [
"python",
"mime"
] | Let's say you want to save a bunch of files somewhere, for instance in BLOBs. Let's say you want to dish these files out via a web page and have the client automatically open the correct application/viewer.
Assumption: The browser figures out which application/viewer to use by the mime-type (content-type?) header in the HTTP response.
Based on that assumption, in addition to the bytes of the file, you also want to save the MIME type.
How would you find the MIME type of a file? I'm currently on a Mac, but this should also work on Windows.
Does the browser add this information when posting the file to the web page?
Is there a neat python library for finding this information? A WebService or (even better) a downloadable database? | More reliable way than to use the mimetypes library would be to use the python-magic package.
```
import magic
m = magic.open(magic.MAGIC_MIME)
m.load()
m.file("/tmp/document.pdf")
```
This would be equivalent to using file(1).
On Django one could also make sure that the MIME type matches that of UploadedFile.content\_type. |
How to find the mime type of a file in python? | 43,580 | 101 | 2008-09-04T12:07:27Z | 2,753,385 | 111 | 2010-05-02T12:02:45Z | [
"python",
"mime"
] | Let's say you want to save a bunch of files somewhere, for instance in BLOBs. Let's say you want to dish these files out via a web page and have the client automatically open the correct application/viewer.
Assumption: The browser figures out which application/viewer to use by the mime-type (content-type?) header in the HTTP response.
Based on that assumption, in addition to the bytes of the file, you also want to save the MIME type.
How would you find the MIME type of a file? I'm currently on a Mac, but this should also work on Windows.
Does the browser add this information when posting the file to the web page?
Is there a neat python library for finding this information? A WebService or (even better) a downloadable database? | The python-magic method suggested by toivotuo is outdated. [Python-magic's](http://github.com/ahupp/python-magic) current trunk is at Github and based on the readme there, finding the MIME-type, is done like this.
```
# For MIME types
>>> import magic
>>> mime = magic.Magic(mime=True)
>>> mime.from_file("testdata/test.pdf")
'application/pdf'
>>>
``` |
How to find the mime type of a file in python? | 43,580 | 101 | 2008-09-04T12:07:27Z | 12,297,929 | 9 | 2012-09-06T10:22:50Z | [
"python",
"mime"
] | Let's say you want to save a bunch of files somewhere, for instance in BLOBs. Let's say you want to dish these files out via a web page and have the client automatically open the correct application/viewer.
Assumption: The browser figures out which application/viewer to use by the mime-type (content-type?) header in the HTTP response.
Based on that assumption, in addition to the bytes of the file, you also want to save the MIME type.
How would you find the MIME type of a file? I'm currently on a Mac, but this should also work on Windows.
Does the browser add this information when posting the file to the web page?
Is there a neat python library for finding this information? A WebService or (even better) a downloadable database? | There are 3 different libraries that wraps libmagic.
2 of them are available on pypi (so pip install will work):
* filemagic
* python-magic
And another, similar to python-magic is available directly in the latest libmagic sources, and it is the one you probably have in your linux distribution.
In Debian, the python-magic package corresponds to this one; it is used as toivotuo described, and (IMHO) it is not obsolete as Simon Zimmermann said.
It seems to me to be another take on the same idea (by the original author of libmagic).
Too bad it is not available directly on PyPI. |
How to find the mime type of a file in python? | 43,580 | 101 | 2008-09-04T12:07:27Z | 21,755,201 | 13 | 2014-02-13T13:09:49Z | [
"python",
"mime"
] | Let's say you want to save a bunch of files somewhere, for instance in BLOBs. Let's say you want to dish these files out via a web page and have the client automatically open the correct application/viewer.
Assumption: The browser figures out which application/viewer to use by the mime-type (content-type?) header in the HTTP response.
Based on that assumption, in addition to the bytes of the file, you also want to save the MIME type.
How would you find the MIME type of a file? I'm currently on a Mac, but this should also work on Windows.
Does the browser add this information when posting the file to the web page?
Is there a neat python library for finding this information? A WebService or (even better) a downloadable database? | This seems to be very easy
```
>>> from mimetypes import MimeTypes
>>> import urllib
>>> mime = MimeTypes()
>>> url = urllib.pathname2url('Upload.xml')
>>> mime_type = mime.guess_type(url)
>>> print mime_type
('application/xml', None)
```
Please refer [Old Post](http://stackoverflow.com/a/14412233/1182058) |
Pros and Cons of different approaches to web programming in Python | 43,709 | 24 | 2008-09-04T13:00:13Z | 43,753 | 7 | 2008-09-04T13:24:55Z | [
"python",
"frameworks",
"cgi",
"wsgi"
] | I'd like to do some server-side scripting using Python. But I'm kind of lost with the number of ways to do that.
It starts with the do-it-yourself CGI approach and it seems to end with some pretty robust frameworks that would basically do the whole job themselves. And a whole lot of stuff in between, like [web.py](http://webpy.org/), [Pyroxide](http://pyroxide.org/) and [Django](http://wiki.python.org/moin/Django).
* What are the **pros** and **cons** of the frameworks or approaches that *you've worked on*?
* What **trade-offs** are there?
* For **what kind of projects** they do well and for what they don't?
Edit: I haven't got much experience with web programming yet.
I would like to avoid the basic and tedious things like parsing the URL for parameters, etc.
On the other hand, while the video of [blog created in 15 minutes](http://www.rubyonrails.org/screencasts) with [Ruby on Rails](http://www.rubyonrails.org/) left me impressed, I realized that there were hundreds of things hidden from me - which is cool if you need to write a working webapp in no time, but not that great for really understanding the magic - and that's what I seek now. | If you decide to go with a framework that is WSGI-based (for instance [TurboGears](http://www.turbogears.org/2.0)), I would recommend you go through the excellent article [Another Do-It-Yourself Framework](http://pythonpaste.org/webob/do-it-yourself.html) by Ian Bicking.
In the article, he builds a simple web application framework from scratch.
Also, check out the video [Creating a web framework with WSGI](http://www.vimeo.com/3258566) by Kevin Dangoor. Dangoor is the founder of the TurboGears project. |
Pros and Cons of different approaches to web programming in Python | 43,709 | 24 | 2008-09-04T13:00:13Z | 43,773 | 17 | 2008-09-04T13:35:45Z | [
"python",
"frameworks",
"cgi",
"wsgi"
] | I'd like to do some server-side scripting using Python. But I'm kind of lost with the number of ways to do that.
It starts with the do-it-yourself CGI approach and it seems to end with some pretty robust frameworks that would basically do the whole job themselves. And a whole lot of stuff in between, like [web.py](http://webpy.org/), [Pyroxide](http://pyroxide.org/) and [Django](http://wiki.python.org/moin/Django).
* What are the **pros** and **cons** of the frameworks or approaches that *you've worked on*?
* What **trade-offs** are there?
* For **what kind of projects** they do well and for what they don't?
Edit: I haven't got much experience with web programming yet.
I would like to avoid the basic and tedious things like parsing the URL for parameters, etc.
On the other hand, while the video of [blog created in 15 minutes](http://www.rubyonrails.org/screencasts) with [Ruby on Rails](http://www.rubyonrails.org/) left me impressed, I realized that there were hundreds of things hidden from me - which is cool if you need to write a working webapp in no time, but not that great for really understanding the magic - and that's what I seek now. | CGI is great for low-traffic websites, but it has some performance problems for anything else. This is because every time a request comes in, the server starts the CGI application in its own process. This is bad for two reasons: 1) Starting and stopping a process can take time and 2) you can't cache anything in memory. You can go with FastCGI, but I would argue that you'd be better off just writing a straight [WSGI](http://www.python.org/dev/peps/pep-0333/) app if you're going to go that route (the way WSGI works really isn't a whole heck of a lot different from CGI).
Other than that, your choices are for the most part how much you want the framework to do. You can go with an all singing, all dancing framework like Django or Pylons. Or you can go with a mix-and-match approach (use something like CherryPy for the HTTP stuff, SQLAlchemy for the database stuff, paste for deployment, etc). I should also point out that most frameworks will also let you switch different components out for others, so these two approaches aren't necessarily mutually exclusive.
Personally, I dislike frameworks that do too much magic for me and prefer the mix-and-match technique, but I've been told that I'm also completely insane. :)
How much web programming experience do you have? If you're a beginner, I say go with Django. If you're more experienced, I say to play around with the different approaches and techniques until you find the right one. |
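To give a feel for how small the WSGI interface mentioned above actually is, here is a minimal app exercised without a real server, using the standard library's `wsgiref` test helpers (the app body and names are just a sketch):

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    # A WSGI app is a callable taking a CGI-like environ dict and a
    # start_response callback; it returns an iterable of byte strings.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from WSGI\n']

# Exercise it directly, no HTTP server involved:
environ = {}
setup_testing_defaults(environ)
captured = {}

def start_response(status, headers):
    captured['status'] = status

body = b''.join(app(environ, start_response))
print(captured['status'], body)
```

To actually serve it, `wsgiref.simple_server.make_server('', 8000, app).serve_forever()` would do for development purposes.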
Pros and Cons of different approaches to web programming in Python | 43,709 | 24 | 2008-09-04T13:00:13Z | 43,835 | 12 | 2008-09-04T14:11:54Z | [
"python",
"frameworks",
"cgi",
"wsgi"
] | I'd like to do some server-side scripting using Python. But I'm kind of lost with the number of ways to do that.
It starts with the do-it-yourself CGI approach and it seems to end with some pretty robust frameworks that would basically do the whole job themselves. And a whole lot of stuff in between, like [web.py](http://webpy.org/), [Pyroxide](http://pyroxide.org/) and [Django](http://wiki.python.org/moin/Django).
* What are the **pros** and **cons** of the frameworks or approaches that *you've worked on*?
* What **trade-offs** are there?
* For **what kind of projects** they do well and for what they don't?
Edit: I haven't got much experience with web programming yet.
I would like to avoid the basic and tedious things like parsing the URL for parameters, etc.
On the other hand, while the video of [blog created in 15 minutes](http://www.rubyonrails.org/screencasts) with [Ruby on Rails](http://www.rubyonrails.org/) left me impressed, I realized that there were hundreds of things hidden from me - which is cool if you need to write a working webapp in no time, but not that great for really understanding the magic - and that's what I seek now. | The simplest web program is a CGI script, which is basically just a program whose standard output is redirected to the web browser making the request. In this approach, every page has its own executable file, which must be loaded and parsed on every request. This makes it really simple to get something up and running, but scales badly both in terms of performance and organization. So when I need a very dynamic page very quickly that won't grow into a larger system, I use a CGI script.
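The "standard output redirected to the browser" model really is that literal — a CGI script just prints a header block, a blank line, then the body. A bare-bones sketch (a real script would also read form data, e.g. via `cgi.FieldStorage`):

```python
#!/usr/bin/env python
import sys

def render():
    # Header block, blank line, then the document itself.
    return ("Content-Type: text/html\r\n"
            "\r\n"
            "<html><body>Hello from CGI</body></html>")

if __name__ == '__main__':
    sys.stdout.write(render())
```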
One step up from this is embedding your Python code in your HTML code, such as with PSP. I don't think many people use this nowadays, since modern template systems have made this pretty obsolete. I worked with PSP for a while and found that it had basically the same organizational limits as CGI scripts (every page has its own file) plus some whitespace-related annoyances from trying to mix whitespace-ignorant HTML with whitespace-sensitive Python.
The next step up is very simple web frameworks such as web.py, which I've also used. Like CGI scripts, it's very simple to get something up and running, and you don't need any complex configuration or automatically generated code. Your own code will be pretty simple to understand, so you can see what's happening. However, it's not as feature-rich as other web frameworks; last time I used it, there was no session tracking, so I had to roll my own. It also has "too much magic behavior" to quote Guido ("upvars(), bah").
Finally, you have feature-rich web frameworks such as Django. These will require a bit of work to get simple Hello World programs working, but every major one has a great, well-written tutorial (especially Django) to walk you through it. I highly recommend using one of these web frameworks for any real project because of the convenience and features and documentation, etc.
Ultimately you'll have to decide what you prefer. For example, frameworks all use template languages (special code/tags) to generate HTML files. Some of them such as Cheetah templates let you write arbitrary Python code so that you can do anything in a template. Others such as Django templates are more restrictive and force you to separate your presentation code from your program logic. It's all about what you personally prefer.
Another example is URL handling; some frameworks such as Django have you define the URLs in your application through regular expressions. Others such as CherryPy automatically map your functions to urls by your function names. Again, this is a personal preference.
I personally use a mix of web frameworks by using CherryPy for my web server stuff (form parameters, session handling, url mapping, etc) and Django for my object-relational mapping and templates. My recommendation is to start with a high level web framework, work your way through its tutorial, then start on a small personal project. I've done this with all of the technologies I've mentioned and it's been really beneficial. Eventually you'll get a feel for what you prefer and become a better web programmer (and a better programmer in general) in the process. |
Modulus operation with negative values - weird thing? | 43,775 | 14 | 2008-09-04T13:36:46Z | 43,780 | 9 | 2008-09-04T13:40:59Z | [
"python",
"math",
"modulo"
] | Can you please tell me how much is `(-2) % 5`?
According to my Python interpreter it is 3, but do you have a wise explanation for this?
I've read that in some languages the result can be machine-dependent, but I'm not sure though. | Your Python interpreter is correct.
One (stupid) way of calculating a modulus is to subtract or add the modulus until the resulting value is between 0 and (modulus − 1).
e.g.:
13 mod 5 = (13 − 5) mod 5 = (13 − 10) mod 5 = 3
or in your case: −2 mod 5 = (−2 + 5) mod 5 = 3 |
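That subtract/add procedure can be sketched as an (equally naive) Python function — it agrees with the `%` operator for a positive modulus:

```python
def naive_mod(a, m):
    """Normalize a into the range [0, m) by repeatedly adding/subtracting m."""
    while a < 0:
        a += m
    while a >= m:
        a -= m
    return a

print(naive_mod(13, 5))   # 3, same as 13 % 5
print(naive_mod(-2, 5))   # 3, same as -2 % 5
```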
Modulus operation with negative values - weird thing? | 43,775 | 14 | 2008-09-04T13:36:46Z | 43,783 | 11 | 2008-09-04T13:41:25Z | [
"python",
"math",
"modulo"
] | Can you please tell me how much is `(-2) % 5`?
According to my Python interpreter it is 3, but do you have a wise explanation for this?
I've read that in some languages the result can be machine-dependent, but I'm not sure though. | The result of the modulus operation on negatives seems to be programming-language-dependent; here is a listing: <http://en.wikipedia.org/wiki/Modulo_operation> |
Modulus operation with negative values - weird thing? | 43,775 | 14 | 2008-09-04T13:36:46Z | 43,794 | 15 | 2008-09-04T13:46:23Z | [
"python",
"math",
"modulo"
] | Can you please tell me how much is `(-2) % 5`?
According to my Python interpreter it is 3, but do you have a wise explanation for this?
I've read that in some languages the result can be machine-dependent, but I'm not sure though. | By the way: most programming languages would disagree with Python and give the result `-2`. Depending on the interpretation of modulus this is correct. However, the most agreed-upon mathematical definition states that the modulus of *a* and *b* is the (non-negative) remainder *r* of the division *a* / *b*. More precisely, 0 <= *r* < *b* by definition. |
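Both conventions can be seen from within Python itself — the `%` operator follows the sign of the divisor, while `math.fmod` behaves like C's `fmod` and follows the sign of the dividend:

```python
import math

print((-2) % 5)          # 3  (Python: result takes the sign of the divisor)
print(math.fmod(-2, 5))  # -2.0  (C-style: result takes the sign of the dividend)

# The identity a == (a // b) * b + (a % b) still holds for Python's operators:
assert (-2 // 5) * 5 + (-2 % 5) == -2
```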
How do I document a module in Python? | 44,084 | 31 | 2008-09-04T16:06:48Z | 44,095 | 33 | 2008-09-04T16:12:23Z | [
"python",
"documentation",
"python-module"
] | That's it. If you want to document a function or a class, you put a string just after the definition. For instance:
```
def foo():
"""This function does nothing."""
pass
```
But what about a module? How can I document what a *file.py* does? | For the packages, you can document it in `__init__.py`.
For the modules, you can add a docstring simply in the module file.
All the information is here: <http://www.python.org/dev/peps/pep-0257/> |
How do I document a module in Python? | 44,084 | 31 | 2008-09-04T16:06:48Z | 23,450,896 | 14 | 2014-05-03T23:29:51Z | [
"python",
"documentation",
"python-module"
] | That's it. If you want to document a function or a class, you put a string just after the definition. For instance:
```
def foo():
"""This function does nothing."""
pass
```
But what about a module? How can I document what a *file.py* does? | Add your docstring as the [first statement in the module](http://legacy.python.org/dev/peps/pep-0257/#what-is-a-docstring).
Since I like seeing an example:
```
"""
Your module's verbose yet thorough docstring.
"""
import foo
# ...
``` |
Iterate over subclasses of a given class in a given module | 44,352 | 15 | 2008-09-04T18:05:23Z | 44,381 | 9 | 2008-09-04T18:20:21Z | [
"python",
"oop"
] | In Python, given a module X and a class Y, how can I iterate or generate a list of all subclasses of Y that exist in module X? | Here's one way to do it:
```
import inspect
def get_subclasses(mod, cls):
"""Yield the classes in module ``mod`` that inherit from ``cls``"""
for name, obj in inspect.getmembers(mod):
if hasattr(obj, "__bases__") and cls in obj.__bases__:
yield obj
``` |
Iterate over subclasses of a given class in a given module | 44,352 | 15 | 2008-09-04T18:05:23Z | 408,465 | 19 | 2009-01-03T01:56:21Z | [
"python",
"oop"
] | In Python, given a module X and a class Y, how can I iterate or generate a list of all subclasses of Y that exist in module X? | Although Quamrana's suggestion works fine, there are a couple of possible improvements I'd like to suggest to make it more pythonic. They rely on using the inspect module from the standard library.
1. You can avoid the getattr call by using `inspect.getmembers()`
2. The try/catch can be avoided by using `inspect.isclass()`
With those, you can reduce the whole thing to a single list comprehension if you like:
```
def find_subclasses(module, clazz):
return [
cls
for name, cls in inspect.getmembers(module)
if inspect.isclass(cls) and issubclass(cls, clazz)
]
``` |
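A quick self-contained check of that list-comprehension version — the module here is a throwaway `types.ModuleType` built purely for illustration. Note that `issubclass(cls, cls)` is true, so the base class itself is included in the result:

```python
import inspect
import types

def find_subclasses(module, clazz):
    return [
        cls
        for name, cls in inspect.getmembers(module)
        if inspect.isclass(cls) and issubclass(cls, clazz)
    ]

class Base(object): pass
class Child(Base): pass

mod = types.ModuleType("demo")
mod.Base, mod.Child, mod.other = Base, Child, 42  # 'other' is filtered out

print(find_subclasses(mod, Base))  # contains both Base and Child
```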
How would you make a comma-separated string from a list? | 44,778 | 180 | 2008-09-04T21:04:04Z | 44,781 | 333 | 2008-09-04T21:06:12Z | [
"python",
"list"
] | What would be your preferred way to concatenate strings from a sequence such that a comma is added between every two consecutive elements? That is, how do you map, for instance, `[ 'a', 'b', 'c' ]` to `'a,b,c'`? (The cases `[ s ]` and `[]` should be mapped to `s` and `''`, respectively.)
I usually end up using something like `''.join(map(lambda x: x+',',l))[:-1]`, but also feeling somewhat unsatisfied.
Edit: I'm both ashamed and happy that the solution is so simple. Obviously I have hardly a clue as to what I'm doing. (I probably needed "simple" concatenation in the past and somehow memorised `s.join([e1,e2,...])` as a shorthand for `s+e1+e2+...`.) | ```
myList = ['a','b','c','d']
myString = ",".join(myList )
```
This won't work if the list contains numbers.
---
As [Ricardo Reyes](http://stackoverflow.com/users/3399/ricardo-reyes) suggested, if it contains non-string types (such as integers, floats, bools, None) then do:
```
myList = ','.join(map(str, myList))
``` |
How would you make a comma-separated string from a list? | 44,778 | 180 | 2008-09-04T21:04:04Z | 44,788 | 42 | 2008-09-04T21:08:29Z | [
"python",
"list"
] | What would be your preferred way to concatenate strings from a sequence such that a comma is added between every two consecutive elements? That is, how do you map, for instance, `[ 'a', 'b', 'c' ]` to `'a,b,c'`? (The cases `[ s ]` and `[]` should be mapped to `s` and `''`, respectively.)
I usually end up using something like `''.join(map(lambda x: x+',',l))[:-1]`, but also feeling somewhat unsatisfied.
Edit: I'm both ashamed and happy that the solution is so simple. Obviously I have hardly a clue as to what I'm doing. (I probably needed "simple" concatenation in the past and somehow memorised `s.join([e1,e2,...])` as a shorthand for `s+e1+e2+...`.) | Why the map/lambda magic? Doesn't this work?
```
>>>foo = [ 'a', 'b', 'c' ]
>>>print ",".join(foo)
a,b,c
>>>print ",".join([])
>>>print ",".join(['a'])
a
```
Edit: @mark-biek points out the case for numbers.
Perhaps the list comprehension:
```
>>>','.join([str(x) for x in foo])
```
is more "pythonic".
Edit2:
Thanks for the suggestions. I'll use the generator rather than the list comprehension in the future.
```
>>>','.join(str(x) for x in foo)
``` |
How would you make a comma-separated string from a list? | 44,778 | 180 | 2008-09-04T21:04:04Z | 44,791 | 10 | 2008-09-04T21:09:33Z | [
"python",
"list"
] | What would be your preferred way to concatenate strings from a sequence such that a comma is added between every two consecutive elements? That is, how do you map, for instance, `[ 'a', 'b', 'c' ]` to `'a,b,c'`? (The cases `[ s ]` and `[]` should be mapped to `s` and `''`, respectively.)
I usually end up using something like `''.join(map(lambda x: x+',',l))[:-1]`, but also feeling somewhat unsatisfied.
Edit: I'm both ashamed and happy that the solution is so simple. Obviously I have hardly a clue as to what I'm doing. (I probably needed "simple" concatenation in the past and somehow memorised `s.join([e1,e2,...])` as a shorthand for `s+e1+e2+...`.) | Don't you just want:
```
",".join(l)
```
Obviously it gets more complicated if you need to quote/escape commas etc in the values. In that case I would suggest looking at the csv module in the standard library:
<https://docs.python.org/library/csv.html> |
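For the quoting/escaping case, a sketch with the standard `csv` module writing to an in-memory buffer (Python 3 style, using `io.StringIO`):

```python
import csv
import io

buf = io.StringIO()
csv.writer(buf).writerow(['a', 'b,with,commas', 'c'])
line = buf.getvalue().strip()
print(line)  # a,"b,with,commas",c -- the field with embedded commas gets quoted
```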
Can someone explain __all__ in Python? | 44,834 | 409 | 2008-09-04T21:28:18Z | 44,842 | 198 | 2008-09-04T21:30:46Z | [
"python",
"syntax",
"namespaces"
] | I have been using Python more and more, and I keep seeing the variable `__all__` set in different `__init__.py` files. Can someone explain what this does? | It's a list of public objects of that module. It overrides the default of hiding everything that begins with an underscore. |
Can someone explain __all__ in Python? | 44,834 | 409 | 2008-09-04T21:28:18Z | 44,843 | 43 | 2008-09-04T21:31:16Z | [
"python",
"syntax",
"namespaces"
] | I have been using Python more and more, and I keep seeing the variable `__all__` set in different `__init__.py` files. Can someone explain what this does? | From [(An Unofficial) Python Reference Wiki](http://effbot.org/pyref/__all__.htm):
> The public names defined by a module are determined by checking the module's namespace for a variable named `__all__`; if defined, it must be a sequence of strings which are names defined or imported by that module. The names given in `__all__` are all considered public and are required to exist. If `__all__` is not defined, the set of public names includes all names found in the module's namespace which do not begin with an underscore character ("\_"). `__all__` should contain the entire public API. It is intended to avoid accidentally exporting items that are not part of the API (such as library modules which were imported and used within the module). |
Can someone explain __all__ in Python? | 44,834 | 409 | 2008-09-04T21:28:18Z | 64,130 | 483 | 2008-09-15T15:49:50Z | [
"python",
"syntax",
"namespaces"
] | I have been using Python more and more, and I keep seeing the variable `__all__` set in different `__init__.py` files. Can someone explain what this does? | Linked to, but not explicitly mentioned here, is exactly when `__all__` is used. It is a list of strings defining what symbols in a module will be exported when `from <module> import *` is used on the module.
For example, the following code in a `foo.py` explicitly exports the symbols `bar` and `baz`:
```
__all__ = ['bar', 'baz']
waz = 5
bar = 10
def baz(): return 'baz'
```
These symbols can then be imported like so:
```
from foo import *
print bar
print baz
# The following will trigger an exception, as "waz" is not exported by the module
print waz
```
If the `__all__` above is commented out, this code will then execute to completion, as the default behaviour of `import *` is to import all symbols that do not begin with an underscore, from the given namespace.
Reference: <https://docs.python.org/3.5/tutorial/modules.html#importing-from-a-package>
**NOTE:** `__all__` affects the `from <module> import *` behavior only. Members that are not mentioned in `__all__` are still accessible from outside the module and can be imported with `from <module> import <member>`. |
Can someone explain __all__ in Python? | 44,834 | 409 | 2008-09-04T21:28:18Z | 2,838,800 | 76 | 2010-05-15T03:22:29Z | [
"python",
"syntax",
"namespaces"
] | I have been using Python more and more, and I keep seeing the variable `__all__` set in different `__init__.py` files. Can someone explain what this does? | It also changes what pydoc will show:
module1.py
```
a = "A"
b = "B"
c = "C"
```
module2.py
```
__all__ = ['a', 'b']
a = "A"
b = "B"
c = "C"
```
$ pydoc module1
```
Help on module module1:
NAME
module1
FILE
module1.py
DATA
a = 'A'
b = 'B'
c = 'C'
```
$ pydoc module2
```
Help on module module2:
NAME
module2
FILE
module2.py
DATA
__all__ = ['a', 'b']
a = 'A'
b = 'B'
```
I declare `__all__` in all my modules, as well as underscore internal details, these really help when using things you've never used before in live interpreter sessions. |
Can someone explain __all__ in Python? | 44,834 | 409 | 2008-09-04T21:28:18Z | 16,595,377 | 84 | 2013-05-16T19:01:48Z | [
"python",
"syntax",
"namespaces"
] | I have been using Python more and more, and I keep seeing the variable `__all__` set in different `__init__.py` files. Can someone explain what this does? | I'm just adding this to be precise:
All other answers refer to *modules*. The original question explicitly mentioned `__all__` in `__init__.py` files, so this is about Python *packages*.
Generally, `__all__` only comes into play when the `from xxx import *` variant of the `import` statement is used. This applies to packages as well as to modules.
The behaviour for modules is explained in the other answers. The exact behaviour for packages is described [here](http://docs.python.org/2/tutorial/modules.html#importing-from-a-package) in detail.
In short, `__all__` on package level does approximately the same thing as for modules, except it deals with *modules within the package* (in contrast to specifying *names within the module*). So `__all__` specifies all modules that shall be loaded and imported into the current namespace when we use `from package import *`.
The big difference is, that when you *omit* the declaration of `__all__` in a package's `__init__.py`, the statement `from package import *` will not import anything at all (with exceptions explained in the documentation, see link above).
On the other hand, if you omit `__all__` in a module, the "starred import" will import all names (not starting with an underscore) defined in the module. |
Can someone explain __all__ in Python? | 44,834 | 409 | 2008-09-04T21:28:18Z | 35,710,527 | 22 | 2016-02-29T21:58:50Z | [
"python",
"syntax",
"namespaces"
] | I have been using Python more and more, and I keep seeing the variable `__all__` set in different `__init__.py` files. Can someone explain what this does? | > **Explain \_\_all\_\_ in Python?**
>
> I keep seeing the variable `__all__` set in different `__init__.py` files.
>
> What does this do?
# What does `__all__` do?
It declares the semantically "public" names from a module. If there is a name in `__all__`, users are expected to use it, and they can have the expectation that it will not change.
It will also have programmatic effects:
## `import *`
`__all__` in a module, e.g. `module.py`:
```
__all__ = ['foo', 'Bar']
```
means that when you `import *` from the module, only those names in the `__all__` are imported:
```
from module import * # imports foo and Bar
```
## Documentation tools
Documentation and code autocompletion tools may (in fact, should) also inspect the `__all__` to determine what names to show as available from a module.
# `__init__.py` makes a directory a Python package
From the [docs](https://docs.python.org/2/tutorial/modules.html#packages):
> The `__init__.py` files are required to make Python treat the directories as containing packages; this is done to prevent directories with a common name, such as string, from unintentionally hiding valid modules that occur later on the module search path.
>
> In the simplest case, `__init__.py` can just be an empty file, but it can also execute initialization code for the package or set the `__all__` variable.
So the `__init__.py` can declare the `__all__` for a *package*.
# Managing an API:
A package is typically made up of modules that may import one another, but that are necessarily tied together with an `__init__.py` file. That file is what makes the directory an actual Python package. For example, say you have the following:
```
package/
|-__init__.py # makes directory a Python package
|-module_1.py
|-module_2.py
```
in the `__init__.py` you write:
```
from module_1 import *
from module_2 import *
```
and in `module_1` you have:
```
__all__ = ['foo',]
```
and in `module_2` you have:
```
__all__ = ['Bar',]
```
And now you have presented a complete api that someone else can use when they import your package, like so:
```
import package
package.foo()
package.Bar()
```
And they won't have all the other names you used when creating your modules cluttering up the `package` namespace.
# `__all__` in `__init__.py`
After more work, maybe you've decided that the modules are too big and need to be split up. So you do the following:
```
package/
|-__init__.py
|-module_1/
| |-__init__.py
| |-foo_implementation.py
|-module_2/
|-__init__.py
|-Bar_implementation.py
```
And in each `__init__.py` you declare an `__all__`, e.g. in module\_1:
```
from foo_implementation import *
__all__ = ['foo']
```
And module\_2's `__init__.py`:
```
from Bar_implementation import *
__all__ = ['Bar']
```
And you can easily add things to your API that you can manage at the subpackage level instead of the subpackage's module level. If you want to add a new name to the API, you simply update the `__init__.py`, e.g. in module\_2:
```
from Bar_implementation import *
from Baz_implementation import *
__all__ = ['Bar', 'Baz']
```
And if you're not ready to publish `Baz` in the top level API, in your top level `__init__.py` you could have:
```
from module_1 import * # also constrained by __all__'s
from module_2 import * # in the __init__.py's
__all__ = ['foo', 'Bar'] # further constraining the names advertised
```
and if your users are aware of the availability of `Baz`, they can use it:
```
import package
package.Baz()
```
but if they don't know about it, other tools (like [pydoc](https://hg.python.org/cpython/file/2.7/Lib/pydoc.py#l173)) won't inform them.
You can later change that when `Baz` is ready for prime time:
```
from module_1 import *
from module_2 import *
__all__ = ['foo', 'Bar', 'Baz']
```
# Prefixing `_` versus `__all__`:
By default, Python will export all names that do not start with an `_`. You certainly *could* rely on this mechanism. Some packages in the Python standard library, in fact, *do* rely on this, but to do so, they alias their imports, for example, in [`ctypes/__init__.py`](https://hg.python.org/cpython/file/default/Lib/ctypes/__init__.py#l3):
```
import os as _os, sys as _sys
```
Using the `_` convention can be more elegant because it removes the redundancy of naming the names again. But it adds the redundancy for imports (if you have a lot of them) and it is *easy* to forget to do this consistently - and the last thing you want is to have to indefinitely support something you intended to only be an implementation detail, just because you forgot to prefix an `_` when naming a function.
I personally write an `__all__` early in my development lifecycle for modules so that others who might use my code know what they should use and not use.
Most packages in the standard library also use `__all__`.
# When avoiding `__all__` makes sense
It makes sense to stick to the `_` prefix convention in lieu of `__all__` when:
* You're still in early development mode and have no users, and are constantly tweaking your API.
* Maybe you do have users, but you have unittests that cover the API, and you're still actively adding to the API and tweaking in development.
# An `export` decorator
The downside of using `__all__` is that you have to write the names of functions and classes being exported twice - and the information is kept separate from the definitions. We *could* use a decorator to solve this problem.
I got the idea for such an export decorator from David Beazley's talk on packaging. This implementation seems to work well in CPython's traditional importer. If you have a special import hook or system, I do not guarantee it, but if you adopt it, it is fairly trivial to back out - you'll just need to manually add the names back into the `__all__`
So in, for example, a utility library, you would define the decorator:
```
import sys
def export(fn):
mod = sys.modules[fn.__module__]
if hasattr(mod, '__all__'):
mod.__all__.append(fn.__name__)
else:
mod.__all__ = [fn.__name__]
return fn
```
and then, where you would define an `__all__`, you do this:
```
$ cat > main.py
from lib import export
__all__ = [] # optional - we create a list if __all__ is not there.
@export
def foo(): pass
@export
def bar():
'bar'
def main():
print('main')
if __name__ == '__main__':
main()
```
And this works fine whether run as main or imported by another function.
```
$ cat > run.py
import main
main.main()
$ python run.py
main
```
And API provisioning with `import *` will work too:
```
$ cat > run.py
from main import *
foo()
bar()
main() # expected to error here, not exported
$ python run.py
Traceback (most recent call last):
File "run.py", line 4, in <module>
main() # expected to error here, not exported
NameError: name 'main' is not defined
``` |
Python packages - import by class, not file | 45,122 | 35 | 2008-09-05T02:15:33Z | 45,126 | 71 | 2008-09-05T02:18:21Z | [
"python",
"packages"
] | Say I have the following file structure:
```
app/
app.py
controllers/
__init__.py
project.py
plugin.py
```
If app/controllers/project.py defines a class Project, app.py would import it like this:
```
from app.controllers.project import Project
```
I'd like to just be able to do:
```
from app.controllers import Project
```
How would this be done? | You need to put
```
from project import Project
```
in `controllers/__init__.py`.
Note that when [Absolute imports](http://www.python.org/dev/peps/pep-0328/) become the default (Python 2.7?), you will want to add a dot before the module name (to avoid collisions with a top-level module named `project`), i.e.,
```
from .project import Project
``` |
Where can I find the time and space complexity of the built-in sequence types in Python | 45,228 | 14 | 2008-09-05T04:27:46Z | 45,538 | 12 | 2008-09-05T11:04:04Z | [
"python",
"performance",
"complexity-theory",
"big-o",
"sequences"
] | I've been unable to find a source for this information, short of looking through the Python source code myself to determine how the objects work. Does anyone know where I could find this online? | Raymond D. Hettinger does [an excellent talk](http://www.youtube.com/watch?v=hYUsssClE94) ([slides](http://wenku.baidu.com/view/9c6fb20dcc1755270722089d.html)) about Python's built-in collections called 'Core Python Containers - Under the Hood'. The version I saw focussed mainly on `set` and `dict`, but `list` was covered too.
There are also some photos of the pertinent slides from EuroPython in [a blog](http://paranoid-engineering.blogspot.com/2008/07/europython-2008-day-3.html).
Here is a summary of my notes on `list`:
* Stores items as an array of pointers. Subscript costs O(1) time. Append costs amortized O(1) time. Insert costs O(n) time.
* Tries to avoid `memcpy` when growing by over-allocating. Many small lists will waste a lot of space, but large lists never waste more than about 12.5% due to over-allocation.
* Some operations pre-size. Examples given were `range(n)`, `map()`, `list()`, `[None] * n`, and slicing.
* When shrinking, the array is `realloc`ed only when it is wasting 50% of space. `pop` is cheap. |
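The over-allocation bullet above can be observed directly with `sys.getsizeof` (a CPython-specific sketch; exact sizes vary by version):

```python
import sys

lst = []
sizes = []
for _ in range(64):
    lst.append(None)
    sizes.append(sys.getsizeof(lst))

# The backing array grows only occasionally, so most appends don't
# reallocate: that's why append is amortized O(1)
growths = sum(1 for a, b in zip(sizes, sizes[1:]) if b > a)
print(growths, "resizes for", len(sizes), "appends")
```

With 64 appends you should see far fewer resizes than appends, confirming the amortized-growth strategy.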
Where can I find the time and space complexity of the built-in sequence types in Python | 45,228 | 14 | 2008-09-05T04:27:46Z | 46,201 | 16 | 2008-09-05T16:19:03Z | [
"python",
"performance",
"complexity-theory",
"big-o",
"sequences"
] | I've been unable to find a source for this information, short of looking through the Python source code myself to determine how the objects work. Does anyone know where I could find this online? | Check out the [TimeComplexity](http://wiki.python.org/moin/TimeComplexity) page on the python.org wiki. It covers sets/dicts/lists/etc. at least as far as time complexity goes. |
Analyzing MIPS binaries: is there a Python library for parsing binary data? | 45,954 | 10 | 2008-09-05T14:44:25Z | 3,647,010 | 8 | 2010-09-05T17:29:19Z | [
"python",
"x86",
"mips",
"elf",
"dwarf"
] | I'm working on a utility which needs to resolve hex addresses to a symbolic function name and source code line number within a binary. The utility will run on Linux on x86, though the binaries it analyzes will be for a MIPS-based embedded system. The MIPS binaries are in ELF format, using DWARF for the symbolic debugging information.
I'm currently planning to fork objdump, passing in a list of hex addresses and parsing the output to get function names and source line numbers. I have compiled an objdump with support for MIPS binaries, and it is working.
I'd prefer to have a package allowing me to look things up natively from the Python code without forking another process. I can find no mention of libdwarf, libelf, or libbfd on python.org, nor any mention of python on dwarfstd.org.
Is there a suitable module available somewhere? | You might be interested in the DWARF library from [pydevtools](http://code.google.com/p/pydevtools/):
```
>>> from bintools.dwarf import DWARF
>>> dwarf = DWARF('test/test')
>>> dwarf.get_loc_by_addr(0x8048475)
('/home/emilmont/Workspace/dbg/test/main.c', 36, 0)
``` |
Django: Print url of view without hardcoding the url | 47,207 | 7 | 2008-09-06T02:42:49Z | 47,304 | 15 | 2008-09-06T07:22:52Z | [
"python",
"django"
] | Can I print out the URL `/admin/manage/products/add` of a certain view in a template?
Here is the rule I want to create a link for
```
(r'^manage/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}),
```
I would like to have /manage/products/add in a template without hardcoding it. How can I do this?
**Edit:** I am not using the default admin (well, i am but it is at another url), this is my own | You can use `get_absolute_url`, but that will only work for a particular object. Since your object hasn't been created yet, it won't work in this case.
You want to use [named URL patterns](https://docs.djangoproject.com/en/1.2/topics/http/urls/#naming-url-patterns). Here's a quick intro:
Change the line in your urls.py to:
```
(r'^manage/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}, "create-product"),
```
Then, in your template you use this to display the URL:
```
{% url create-product %}
```
If you're using Django 1.5 or higher you need this:
```
{% url 'create-product' %}
```
You can do some more powerful things with named URL patterns; they're very handy. Note that they are only available in the development version (and in 1.0). |
How do you set up a python wsgi server under IIS? | 47,253 | 22 | 2008-09-06T04:26:49Z | 47,266 | 20 | 2008-09-06T05:32:39Z | [
"python",
"iis",
"deployment",
"windows-server",
"iis-modules"
] | I work in a windows environment and would prefer to deploy code to IIS. At the same time I would like to code in python.
Having read that IIS can run fastCGI application, I went to the IIS site where it describes in detail how to get PHP up and running but not much about anything else.
Does anyone have experience getting a **Python** framework **running under IIS** using something other than plain old CGI?
If so can you explain to direct me to some instructions on setting this up? | There shouldn't be any need to use FastCGI. There exists a [ISAPI extension for WSGI](https://github.com/hexdump42/isapi-wsgi). |
Is there a way to attach a debugger to a multi-threaded Python process? | 47,701 | 24 | 2008-09-06T18:14:05Z | 56,510 | 8 | 2008-09-11T13:19:10Z | [
"python",
"debugging"
] | I'm trying to debug a deadlock in a multi-threaded Python application after it has locked up. Is there a way to attach a debugger to inspect the state of the process?
Edit: I'm attempting this on Linux, but it would be great if there were a cross-platform solution. It's Python after all :) | Yeah, gdb is good for lower level debugging.
You can change threads with the *thread* command.
e.g
```
(gdb) thr 2
[Switching to thread 2 (process 6159 thread 0x3f1b)]
(gdb) backtrace
....
```
You could also check out Python-specific debuggers like [Winpdb](http://winpdb.org/about/) or [pydb](http://bashdb.sourceforge.net/pydb/). Both are platform independent. |
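Alongside gdb, a pure-Python way to inspect every thread's stack from inside the process is `sys._current_frames()` (a sketch; note this helps only if you can still execute code in the process, e.g. from a signal handler or a monitoring thread, unlike attaching to a fully hung process):

```python
import sys
import threading
import traceback

def dump_all_thread_stacks():
    # sys._current_frames() maps thread-id -> topmost frame for every
    # running thread; printing each stack shows where threads are blocked
    for thread_id, frame in sys._current_frames().items():
        print("Thread %d:" % thread_id)
        traceback.print_stack(frame)

def worker(started, release):
    started.set()
    release.wait()  # park the thread so we can observe its stack

started, release = threading.Event(), threading.Event()
t = threading.Thread(target=worker, args=(started, release))
t.start()
started.wait()
dump_all_thread_stacks()  # the worker's stack shows it waiting in release.wait()
release.set()
t.join()
```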
Is there a way to attach a debugger to a multi-threaded Python process? | 47,701 | 24 | 2008-09-06T18:14:05Z | 553,633 | 12 | 2009-02-16T15:18:44Z | [
"python",
"debugging"
] | I'm trying to debug a deadlock in a multi-threaded Python application after it has locked up. Is there a way to attach a debugger to inspect the state of the process?
Edit: I'm attempting this on Linux, but it would be great if there were a cross-platform solution. It's Python after all :) | Use [Winpdb](http://winpdb.org/). It is a **platform independent** graphical GPL Python debugger with support for remote debugging over a network, multiple threads, namespace modification, embedded debugging, encrypted communication and is up to 20 times faster than pdb.
Features:
* GPL license. Winpdb is Free Software.
* Compatible with CPython 2.3 through 2.6 and Python 3000
* Compatible with wxPython 2.6 through 2.8
* Platform independent, and tested on Ubuntu Gutsy and Windows XP.
* User Interfaces: rpdb2 is console based, while winpdb requires wxPython 2.6 or later.
 |
Generator Expressions vs. List Comprehension | 47,789 | 213 | 2008-09-06T20:07:59Z | 47,792 | 59 | 2008-09-06T20:10:59Z | [
"python",
"list-comprehension",
"generator"
] | When should you use generator expressions and when should you use list comprehensions in Python?
```
# Generator expression
(x*2 for x in range(256))
# List comprehension
[x*2 for x in range(256)]
``` | Use list comprehensions when the result needs to be iterated over multiple times, or where speed is paramount. Use generator expressions where the range is large or infinite. |
Generator Expressions vs. List Comprehension | 47,789 | 213 | 2008-09-06T20:07:59Z | 47,793 | 89 | 2008-09-06T20:11:17Z | [
"python",
"list-comprehension",
"generator"
] | When should you use generator expressions and when should you use list comprehensions in Python?
```
# Generator expression
(x*2 for x in range(256))
# List comprehension
[x*2 for x in range(256)]
``` | Iterating over the *generator expression* or the *list comprehension* will do the same thing. However, the *list comprehension* will create the entire list in memory first while the *generator expression* will create the items on the fly, so you are able to use it for very large (and also infinite!) sequences. |
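The "also infinite" point is easy to demonstrate: a generator expression can draw from an endless source because items are produced lazily (sketch using `itertools.count`):

```python
import itertools

# itertools.count() never terminates, but the generator expression only
# produces values as they are requested, so this finishes immediately
doubles = (x * 2 for x in itertools.count())
first_five = [next(doubles) for _ in range(5)]
print(first_five)  # [0, 2, 4, 6, 8]
```

The equivalent list comprehension would never return, since it would try to materialize the whole infinite sequence first.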
Generator Expressions vs. List Comprehension | 47,789 | 213 | 2008-09-06T20:07:59Z | 47,826 | 168 | 2008-09-06T20:54:08Z | [
"python",
"list-comprehension",
"generator"
] | When should you use generator expressions and when should you use list comprehensions in Python?
```
# Generator expression
(x*2 for x in range(256))
# List comprehension
[x*2 for x in range(256)]
``` | John's answer is good (that list comprehensions are better when you want to iterate over something multiple times). However, it's also worth noting that you should use a list if you want to use any of the list methods. For example, the following code won't work:
```
def gen():
return (something for something in get_some_stuff())
print gen()[:2] # generators don't support indexing or slicing
print [5,6] + gen() # generators can't be added to lists
```
Basically, use a generator expression if all you're doing is iterating once. If you want to store and use the generated results, then you're probably better off with a list comprehension.
Since performance is the most common reason to choose one over the other, my advice is to not worry about it and just pick one; if you find that your program is running too slowly, then and only then should you go back and worry about tuning your code. |
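One more single-pass gotcha worth showing: a generator is exhausted after one iteration, while a list can be traversed as many times as you like:

```python
gen = (x * 2 for x in range(3))
lst = [x * 2 for x in range(3)]

first_pass = list(gen)
second_pass = list(gen)
print(first_pass)   # [0, 2, 4]
print(second_pass)  # [] -- the generator is spent after one pass
print(lst, lst)     # the list can be reused freely
```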
Generator Expressions vs. List Comprehension | 47,789 | 213 | 2008-09-06T20:07:59Z | 189,840 | 34 | 2008-10-10T01:42:30Z | [
"python",
"list-comprehension",
"generator"
] | When should you use generator expressions and when should you use list comprehensions in Python?
```
# Generator expression
(x*2 for x in range(256))
# List comprehension
[x*2 for x in range(256)]
``` | The benefit of a generator expression is that it uses less memory since it doesn't build the whole list at once. Generator expressions are best used when the list is an intermediary, such as summing the results, or creating a dict out of the results.
For example:
```
sum(x*2 for x in xrange(256))
dict((k, some_func(k)) for k in some_list_of_keys)
```
The advantage there is that the list isn't completely generated, and thus little memory is used (and it should also be faster).
You should, though, use list comprehensions when the desired final product is a list. You are not going to save any memory using generator expressions, since you want the generated list. You also get the benefit of being able to use any of the list functions like sorted or reversed.
For example:
```
reversed( [x*2 for x in xrange(256)] )
``` |
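The memory difference is easy to measure with `sys.getsizeof`, which reports only the small generator object versus the fully built list (Python 3 shown, so `range` rather than the answer's `xrange`):

```python
import sys

# A generator object is a tiny fixed-size wrapper...
gen_size = sys.getsizeof(x * 2 for x in range(100000))
# ...while the list comprehension allocates storage for every element
list_size = sys.getsizeof([x * 2 for x in range(100000)])
print(gen_size, "bytes vs", list_size, "bytes")
```

Note `getsizeof` measures only the container itself, not the objects it references, but that is exactly the allocation the generator avoids.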
Generator Expressions vs. List Comprehension | 47,789 | 213 | 2008-09-06T20:07:59Z | 22,858,478 | 27 | 2014-04-04T09:14:57Z | [
"python",
"list-comprehension",
"generator"
] | When should you use generator expressions and when should you use list comprehensions in Python?
```
# Generator expression
(x*2 for x in range(256))
# List comprehension
[x*2 for x in range(256)]
``` | The important point is that the list comprehension creates a new list. The generator creates an iterable object that will "filter" the source material on-the-fly as you consume the bits.
Imagine you have a 2TB log file called "hugefile.txt", and you want the content and length for all the lines that start with the word "ENTRY".
So you try starting out by writing a list comprehension:
```
logfile = open("hugefile.txt","r")
entry_lines = [(line,len(line)) for line in logfile if line.startswith("ENTRY")]
```
This slurps up the whole file, processes each line, and stores the matching lines in your array. This array could therefore contain up to 2TB of content. That's a lot of RAM, and probably not practical for your purposes.
So instead we can use a generator to apply a "filter" to our content. No data is actually read until we start iterating over the result.
```
logfile = open("hugefile.txt","r")
entry_lines = ((line,len(line)) for line in logfile if line.startswith("ENTRY"))
```
Not even a single line has been read from our file yet. In fact, say we want to filter our result even further:
```
long_entries = ((line,length) for (line,length) in entry_lines if length > 80)
```
Still nothing has been read, but we've specified now two generators that will act on our data as we wish.
Let's write out our filtered lines to another file:
```
outfile = open("filtered.txt","a")
for entry,length in long_entries:
outfile.write(entry)
```
*Now* we read the input file. As our `for` loop continues to request additional lines, the `long_entries` generator demands lines from the `entry_lines` generator, returning only those whose length is greater than 80 characters. And in turn, the `entry_lines` generator requests lines (filtered as indicated) from the `logfile` iterator, which in turn reads the file.
So instead of "pushing" data to your output function in the form of a fully-populated list, you're giving the output function a way to "pull" data only when it's needed. In our case this is much more efficient, but not quite as flexible. Generators are one way, one pass; the data from the log file we've read gets immediately discarded, so we can't go back to a previous line. On the other hand, we don't have to worry about keeping data around once we're done with it. |
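To make the pipeline above runnable without a 2TB file, here is the same three-stage chain fed from an in-memory `io.StringIO` standing in for the log (Python 3; the sample lines are made up):

```python
import io

# Stand-in for the huge log file: two ENTRY lines, one over 80 chars
logfile = io.StringIO(
    "ENTRY short line\n"
    "noise that gets filtered out\n"
    "ENTRY " + "x" * 90 + "\n"
)

# Same two generator stages as in the answer: nothing is read yet
entry_lines = ((line, len(line)) for line in logfile if line.startswith("ENTRY"))
long_entries = ((line, length) for (line, length) in entry_lines if length > 80)

matches = list(long_entries)  # only now is the "file" actually read
print(len(matches), "long entry/entries found")
```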
What are the advantages of packaging your python library/application as an .egg file? | 47,953 | 23 | 2008-09-06T23:35:30Z | 47,956 | 29 | 2008-09-06T23:39:33Z | [
"python",
"zip",
"packaging",
"software-distribution",
"egg"
] | I've read some about .egg files and I've noticed them in my lib directory but what are the advantages/disadvantages of using then as a developer? | From the [Python Enterprise Application Kit community](http://peak.telecommunity.com/DevCenter/PythonEggs):
> *"Eggs are to Pythons as Jars are to Java..."*
>
> Python eggs are a way of bundling
> additional information with a Python
> project, that allows the project's
> dependencies to be checked and
> satisfied at runtime, as well as
> allowing projects to provide plugins
> for other projects. There are several
> binary formats that embody eggs, but
> the most common is '.egg' zipfile
> format, because it's a convenient one
> for distributing projects. All of the
> formats support including
> package-specific data, project-wide
> metadata, C extensions, and Python
> code.
>
> The primary benefits of Python Eggs
> are:
>
> * They enable tools like the "Easy Install" Python package manager
> * .egg files are a "zero installation" format for a Python
> package; no build or install step is
> required, just put them on PYTHONPATH
> or sys.path and use them (may require
> the runtime installed if C extensions
> or data files are used)
> * They can include package metadata, such as the other eggs they depend on
> * They allow "namespace packages" (packages that just contain other
> packages) to be split into separate
> distributions (e.g. zope.*, twisted.*,
> peak.\* packages can be distributed as
> separate eggs, unlike normal packages
> which must always be placed under the
> same parent directory. This allows
> what are now huge monolithic packages
> to be distributed as separate
> components.)
> * They allow applications or libraries to specify the needed
> version of a library, so that you can
> e.g. require("Twisted-Internet>=2.0")
> before doing an import
> twisted.internet.
> * They're a great format for distributing extensions or plugins to
> extensible applications and frameworks
> (such as Trac, which uses eggs for
> plugins as of 0.9b1), because the egg
> runtime provides simple APIs to locate
> eggs and find their advertised entry
> points (similar to Eclipse's
> "extension point" concept).
> * There are also other benefits that may come from having a standardized
> format, similar to the benefits of
> Java's "jar" format. |
Glade or no glade: What is the best way to use PyGtk? | 48,123 | 27 | 2008-09-07T04:00:41Z | 48,136 | 12 | 2008-09-07T04:20:06Z | [
"python",
"gtk",
"pygtk",
"glade",
"gtk2"
] | I've been learning python for a while now with some success. I even managed to create one or two (simple) programs using PyGtk + Glade.
The thing is: I am not sure if the best way to use GTK with python is by building the interfaces using Glade.
I was wondering if the more experienced ones among us (remember, I'm just a beginner) could point out the benefits and caveats of using Glade as opposed to creating everything in the code itself (assuming that learning the correct gtk bindings wouldn't exactly be a problem). | Use GtkBuilder instead of Glade, it's integrated into Gtk itself instead of a separate library.
The main benefit of Glade is that it's much, much easier to create the interface. It's a bit more work to connect signal handlers, but I've never felt that matters much. |
Glade or no glade: What is the best way to use PyGtk? | 48,123 | 27 | 2008-09-07T04:00:41Z | 48,734 | 19 | 2008-09-07T20:09:47Z | [
"python",
"gtk",
"pygtk",
"glade",
"gtk2"
] | I've been learning python for a while now with some success. I even managed to create one or two (simple) programs using PyGtk + Glade.
The thing is: I am not sure if the best way to use GTK with python is by building the interfaces using Glade.
I was wondering if the more experienced ones among us (remember, I'm just a beginner) could point out the benefits and caveats of using Glade as opposed to creating everything in the code itself (assuming that learning the correct gtk bindings wouldn't exactly be a problem). | I would say that it depends: if you find that using Glade you can build the apps you want or need to make, then that's absolutely fine. If, however, you actually want to learn how GTK works or you have some non-standard UI requirements you will **have** to dig into GTK internals (which are not that complicated).
Personally I'm usually about 5 minutes into a rich client when I need some feature or customization that is simply impossible through a designer such as Glade or [Stetic](http://www.mono-project.com/Stetic). Perhaps it's just me. Nevertheless it is still useful for me to bootstrap window design using a graphical tool.
My recommendation: if making rich clients using GTK is going to be a significant part of your job/hobby then learn GTK as well since you **will** need to write that code someday.
P.S. I personally find [Stetic](http://www.mono-project.com/Stetic) to be superior to Glade for design work, if a little bit more unstable. |
Project structure for Google App Engine | 48,458 | 112 | 2008-09-07T14:08:47Z | 70,271 | 96 | 2008-09-16T08:10:50Z | [
"python",
"google-app-engine"
] | I started an application in Google App Engine right when it came out, to play with the technology and work on a pet project that I had been thinking about for a long time but never gotten around to starting. The result is [BowlSK](http://www.bowlsk.com). However, as it has grown, and features have been added, it has gotten really difficult to keep things organized - mainly due to the fact that this is my first python project, and I didn't know anything about it until I started working.
What I have:
* Main Level contains:
+ all .py files (didn't know how to make packages work)
+ all .html templates for main level pages
* Subdirectories:
+ separate folders for css, images, js, etc.
+ folders that hold .html templates for subdirectory-type URLs
Example:
<http://www.bowlsk.com/> maps to HomePage (default package), template at "index.html"
<http://www.bowlsk.com/games/view-series.html?series=7130> maps to ViewSeriesPage (again, default package), template at "games/view-series.html"
It's nasty. How do I restructure? I had 2 ideas:
* Main Folder containing: appdef, indexes, main.py?
+ Subfolder for code. Does this have to be my first package?
+ Subfolder for templates. Folder hierarchy would match package hierarchy
+ Individual subfolders for css, images, js, etc.
* Main Folder containing appdef, indexes, main.py?
+ Subfolder for code + templates. This way I have the handler class right next to the template, because in this stage, I'm adding lots of features, so modifications to one mean modifications to the other. Again, do I have to have this folder name be the first package name for my classes? I'd like the folder to be "src", but I don't want my classes to be "src.WhateverPage"
Is there a best practice? With Django 1.0 on the horizon, is there something I can do now to improve my ability to integrate with it when it becomes the official GAE templating engine? I would simply start trying these things, and seeing which seems better, but pyDev's refactoring support doesn't seem to handle package moves very well, so it will likely be a non-trivial task to get all of this working again. | First, I would suggest you have a look at "[Rapid Development with Python, Django, and Google App Engine](http://sites.google.com/site/io/rapid-development-with-python-django-and-google-app-engine)"
GvR describes a general/standard project layout on page 10 of his [slide presentation](http://sites.google.com/site/io/rapid-development-with-python-django-and-google-app-engine/rapid_development_with_django_gae.pdf?attredirects=0).
Here I'll post a slightly modified version of the layout/structure from that page. I pretty much follow this pattern myself. You also mentioned you had trouble with packages. Just make sure each of your sub folders has an \_\_init\_\_.py file. It's ok if its empty.
## Boilerplate files
* These hardly vary between projects
* app.yaml: direct all non-static requests to main.py
* main.py: initialize app and send it all requests
## Project lay-out
* static/\*: static files; served directly by App Engine
* myapp/\*.py: app-specific python code
+ views.py, models.py, tests.py, \_\_init\_\_.py, and more
* templates/\*.html: templates (or myapp/templates/\*.html)
Here are some code examples that may help as well:
## main.py
```
import wsgiref.handlers
from google.appengine.ext import webapp
from myapp.views import *
application = webapp.WSGIApplication([
    ('/', IndexHandler),
    ('/foo', FooHandler)
], debug=True)

def main():
    wsgiref.handlers.CGIHandler().run(application)
```
## myapp/views.py
```
import os
import datetime
import logging
import time
from google.appengine.api import urlfetch
from google.appengine.ext.webapp import template
from google.appengine.api import users
from google.appengine.ext import webapp
from models import *
class IndexHandler(webapp.RequestHandler):
    def get(self):
        data = "foo"
        # Do some processing
        template_values = {'data': data}
        path = os.path.join(os.path.dirname(__file__) + '/../templates/', 'main.html')
        self.response.out.write(template.render(path, template_values))

class FooHandler(webapp.RequestHandler):
    def get(self):
        # logging.debug("start of handler")
        pass
```
## myapp/models.py
```
from google.appengine.ext import db
class SampleModel(db.Model):
    pass  # model properties (e.g. db.StringProperty()) would go here
```
I think this layout works great for new and relatively small to medium projects. For larger projects I would suggest breaking up the views and models to have their own sub-folders with something like:
## Project lay-out
* static/: static files; served directly by App Engine
+ js/\*.js
+ images/\*.gif|png|jpg
+ css/\*.css
* myapp/: app structure
+ models/\*.py
+ views/\*.py
+ tests/\*.py
+ templates/\*.html: templates |
Project structure for Google App Engine | 48,458 | 112 | 2008-09-07T14:08:47Z | 153,862 | 14 | 2008-09-30T16:30:58Z | [
"python",
"google-app-engine"
] | I started an application in Google App Engine right when it came out, to play with the technology and work on a pet project that I had been thinking about for a long time but never gotten around to starting. The result is [BowlSK](http://www.bowlsk.com). However, as it has grown, and features have been added, it has gotten really difficult to keep things organized - mainly due to the fact that this is my first python project, and I didn't know anything about it until I started working.
What I have:
* Main Level contains:
+ all .py files (didn't know how to make packages work)
+ all .html templates for main level pages
* Subdirectories:
+ separate folders for css, images, js, etc.
+ folders that hold .html templates for subdirectory-type URLs
Example:
<http://www.bowlsk.com/> maps to HomePage (default package), template at "index.html"
<http://www.bowlsk.com/games/view-series.html?series=7130> maps to ViewSeriesPage (again, default package), template at "games/view-series.html"
It's nasty. How do I restructure? I had 2 ideas:
* Main Folder containing: appdef, indexes, main.py?
+ Subfolder for code. Does this have to be my first package?
+ Subfolder for templates. Folder hierarchy would match package hierarchy
+ Individual subfolders for css, images, js, etc.
* Main Folder containing appdef, indexes, main.py?
+ Subfolder for code + templates. This way I have the handler class right next to the template, because in this stage, I'm adding lots of features, so modifications to one mean modifications to the other. Again, do I have to have this folder name be the first package name for my classes? I'd like the folder to be "src", but I don't want my classes to be "src.WhateverPage"
Is there a best practice? With Django 1.0 on the horizon, is there something I can do now to improve my ability to integrate with it when it becomes the official GAE templating engine? I would simply start trying these things, and seeing which seems better, but pyDev's refactoring support doesn't seem to handle package moves very well, so it will likely be a non-trivial task to get all of this working again. | My usual layout looks something like this:
* app.yaml
* index.yaml
* request.py - contains the basic WSGI app
* lib
+ `__init__.py` - common functionality, including a request handler base class
* controllers - contains all the handlers. request.py imports these.
* templates
+ all the django templates, used by the controllers
* model
+ all the datastore model classes
* static
+ static files (css, images, etc). Mapped to /static by app.yaml
I can provide examples of what my app.yaml, request.py, lib/`__init__.py`, and sample controllers look like, if this isn't clear. |
Project structure for Google App Engine | 48,458 | 112 | 2008-09-07T14:08:47Z | 12,535,000 | 10 | 2012-09-21T17:07:02Z | [
"python",
"google-app-engine"
] | I started an application in Google App Engine right when it came out, to play with the technology and work on a pet project that I had been thinking about for a long time but never gotten around to starting. The result is [BowlSK](http://www.bowlsk.com). However, as it has grown, and features have been added, it has gotten really difficult to keep things organized - mainly due to the fact that this is my first python project, and I didn't know anything about it until I started working.
What I have:
* Main Level contains:
+ all .py files (didn't know how to make packages work)
+ all .html templates for main level pages
* Subdirectories:
+ separate folders for css, images, js, etc.
+ folders that hold .html templates for subdirecty-type urls
Example:
<http://www.bowlsk.com/> maps to HomePage (default package), template at "index.html"
<http://www.bowlsk.com/games/view-series.html?series=7130> maps to ViewSeriesPage (again, default package), template at "games/view-series.html"
It's nasty. How do I restructure? I had 2 ideas:
* Main Folder containing: appdef, indexes, main.py?
+ Subfolder for code. Does this have to be my first package?
+ Subfolder for templates. Folder heirarchy would match package heirarchy
+ Individual subfolders for css, images, js, etc.
* Main Folder containing appdef, indexes, main.py?
+ Subfolder for code + templates. This way I have the handler class right next to the template, because in this stage, I'm adding lots of features, so modifications to one mean modifications to the other. Again, do I have to have this folder name be the first package name for my classes? I'd like the folder to be "src", but I don't want my classes to be "src.WhateverPage"
Is there a best practice? With Django 1.0 on the horizon, is there something I can do now to improve my ability to integrate with it when it becomes the official GAE templating engine? I would simply start trying these things, and seeing which seems better, but pyDev's refactoring support doesn't seem to handle package moves very well, so it will likely be a non-trivial task to get all of this working again. | I implemented a google app engine boilerplate today and checked it on github. This is along the lines described by Nick Johnson above (who used to work for Google).
Follow this link [gae-boilerplate](https://github.com/droot/gae-boilerplate) |
Calling python from a c++ program for distribution | 49,137 | 18 | 2008-09-08T03:53:39Z | 49,148 | 15 | 2008-09-08T04:01:10Z | [
"c++",
"python",
"embedded-language"
] | I would like to call python script files from my c++ program.
I am not sure that the people I will distribute to will have python installed.
Basically I'm looking for a .lib file that I can use that has an Apache-like distribution license. | Boost has a Python interface library which could help you.
[Boost.Python](http://www.boost.org/doc/libs/release/libs/python/doc/index.html) |
Calling python from a c++ program for distribution | 49,137 | 18 | 2008-09-08T03:53:39Z | 328,451 | 32 | 2008-11-30T04:52:03Z | [
"c++",
"python",
"embedded-language"
] | I would like to call python script files from my c++ program.
I am not sure that the people I will distribute to will have python installed.
Basically I'm looking for a .lib file that I can use that has an Apache-like distribution license. | > I would like to call python script files from my c++ program.
This means that you want to embed Python in your C++ application. As mentioned in [Embedding Python in Another Application](http://docs.python.org/extending/embedding.html):
> Embedding Python is similar to
> extending it, but not quite. The
> difference is that when you extend
> Python, the main program of the
> application is still the Python
> interpreter, while if you embed
> Python, the main program may have
> nothing to do with Python; instead,
> some parts of the application
> occasionally call the Python
> interpreter to run some Python code.
I suggest that you first go through [Embedding Python in Another Application](http://docs.python.org/extending/embedding.html). Then refer to the following examples:
1. [Embedding Python in C/C++: Part I](http://www.codeproject.com/KB/cpp/embedpython_1.aspx)
2. [Embedding Python in C/C++: Part II](http://www.codeproject.com/KB/cpp/embedpython_2.aspx)
3. [Embedding Python in Multi-Threaded C/C++ Applications](http://www.linuxjournal.com/article/3641)
If you like [Boost.Python](http://www.boost.org/doc/libs/release/libs/python/doc/tutorial/doc/html/python/embedding.html), you may visit the following links:
1. [Embedding Python with Boost.Python Part 1](http://members.gamedev.net/sicrane/articles/EmbeddingPythonPart1.html) |
How do I turn a python program into an .egg file? | 49,164 | 13 | 2008-09-08T04:21:22Z | 49,169 | 9 | 2008-09-08T04:33:39Z | [
"python",
"deployment",
"egg"
] | How do I turn a python program into an .egg file? | [Setuptools](http://peak.telecommunity.com/DevCenter/setuptools) is the software that creates [.egg files](http://peak.telecommunity.com/DevCenter/PythonEggs). It's an extension of the [`distutils`](http://docs.python.org/lib/module-distutils.html) package in the standard library.
The process involves creating a `setup.py` file, then `python setup.py bdist_egg` creates an .egg package. |
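A minimal `setup.py` for that workflow might look like the following sketch (the project name and version are placeholders, not from any specific project):

```python
# setup.py -- build an egg with: python setup.py bdist_egg
from setuptools import setup, find_packages

setup(
    name="mypackage",   # placeholder project name
    version="0.1",      # placeholder version
    packages=find_packages(),
)
```

Running `python setup.py bdist_egg` then leaves the built `.egg` file in a `dist/` subdirectory.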
Can parallel traversals be done in MATLAB just as in Python? | 49,307 | 7 | 2008-09-08T08:25:55Z | 65,903 | 12 | 2008-09-15T19:20:15Z | [
"python",
"arrays",
"matlab",
"for-loop"
] | Using the `zip` function, Python allows for loops to traverse multiple sequences in parallel.
`for (x,y) in zip(List1, List2):`
Does MATLAB have an equivalent syntax? If not, what is the best way to iterate over two parallel arrays at the same time using MATLAB? | If x and y are column vectors, you can do:
```
for i=[x';y']
% do stuff with i(1) and i(2)
end
```
(with row vectors, just use `x` and `y`).
Here is an example run:
```
>> x=[1 ; 2; 3;]
x =
1
2
3
>> y=[10 ; 20; 30;]
y =
10
20
30
>> for i=[x';y']
disp(['size of i = ' num2str(size(i)) ', i(1) = ' num2str(i(1)) ', i(2) = ' num2str(i(2))])
end
size of i = 2 1, i(1) = 1, i(2) = 10
size of i = 2 1, i(1) = 2, i(2) = 20
size of i = 2 1, i(1) = 3, i(2) = 30
>>
``` |
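For comparison, the Python idiom from the question can be sketched as follows; `zip` pairs up elements of the two sequences positionally, mirroring the MATLAB column-vector trick above:

```python
# The lists match the x and y vectors in the MATLAB example run.
list1 = [1, 2, 3]
list2 = [10, 20, 30]

pairs = []
for x, y in zip(list1, list2):
    pairs.append((x, y))  # each iteration sees one element from each list
```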
Java -> Python? | 49,824 | 23 | 2008-09-08T14:36:24Z | 49,828 | 15 | 2008-09-08T14:40:36Z | [
"java",
"python"
] | Besides the dynamic nature of Python (and the syntax), what are some of the major features of the Python language that Java doesn't have, and vice versa? | I think this pair of articles by Philip J. Eby does a great job discussing the differences between the two languages (mostly about philosophy/mentality rather than specific language features).
* [Python is Not Java](http://dirtsimple.org/2004/12/python-is-not-java.html)
* [Java is Not Python, either](http://dirtsimple.org/2004/12/java-is-not-python-either.html) |
Java -> Python? | 49,824 | 23 | 2008-09-08T14:36:24Z | 49,953 | 40 | 2008-09-08T15:35:32Z | [
"java",
"python"
] | Besides the dynamic nature of Python (and the syntax), what are some of the major features of the Python language that Java doesn't have, and vice versa? | 1. List comprehensions. I often find myself filtering/mapping lists, and being able to say `[line.replace("spam","eggs") for line in open("somefile.txt") if line.startswith("nee")]` is really nice.
2. Functions are first-class objects. They can be passed as parameters to other functions, defined inside other functions, and have lexical scope. This makes it really easy to say things like `people.sort(key=lambda p: p.age)` and thus sort a bunch of people on their age without having to define a custom comparator class or something equally verbose.
3. Everything is an object. Java has basic types which aren't objects, which is why many classes in the standard library define 9 different versions of functions (for boolean, byte, char, double, float, int, long, Object, short). `Array.sort` is a good example. Autoboxing helps, although it makes things awkward when something turns out to be null.
4. Properties. Python lets you create classes with read-only fields, lazily-generated fields, as well as fields which are checked upon assignment to make sure they're never 0 or null or whatever you want to guard against, etc.
5. Default and keyword arguments. In Java if you want a constructor that can take up to 5 optional arguments, you must define 6 different versions of that constructor. And there's no way at all to say `Student(name="Eli", age=25)`
6. Multiple return values via tuples. In Python you have tuple assignment, so you can say `spam, eggs = nee()`, but a Java method can only return one thing, so you'd need to either resort to mutable out parameters or have a custom class with 2 fields and then have two additional lines of code to extract those fields.
7. Built-in syntax for lists and dictionaries.
8. Operator Overloading.
9. Generally better designed libraries. For example, to parse an XML document in Java, you say
`Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse("test.xml");`
and in Python you say
`doc = parse("test.xml")`
Anyway, I could go on and on with further examples, but Python is just overall a much more flexible and expressive language. It's also dynamically typed, which I really like, but which comes with some disadvantages.
Java has much better performance than Python and has way better tool support. Sometimes those things matter a lot and Java is the better language than Python for a task; I continue to use Java for some new projects despite liking Python a lot more. But as a language I think Python is superior for most things I find myself needing to accomplish. |
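A short sketch of three of the features from the list above: default and keyword arguments (point 5), tuple assignment (point 6), and read-only properties (point 4). The `Student` class is illustrative, not from the answer:

```python
class Student:
    def __init__(self, name="Eli", age=25):  # default values; callable with keywords
        self._name = name
        self._age = age

    @property
    def age(self):  # read-only attribute: no setter is defined
        return self._age

def name_and_age(student):
    # "Return two things" at once via a tuple
    return student._name, student._age

student = Student(age=30)          # keyword argument at the call site
name, age = name_and_age(student)  # tuple assignment unpacks both values
```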
Open source alternative to MATLAB's fmincon function? | 49,926 | 22 | 2008-09-08T15:19:59Z | 70,798 | 13 | 2008-09-16T09:45:06Z | [
"python",
"matlab",
"numpy",
"numerical",
"scientific-computing"
] | Is there an open-source alternative to MATLAB's [`fmincon`](http://www.mathworks.com/access/helpdesk/help/toolbox/optim/index.html?/access/helpdesk/help/toolbox/optim/ug/fmincon.html) function for constrained linear optimization? I'm rewriting a MATLAB program to use Python / [NumPy](http://numpy.scipy.org/) / [SciPy](http://www.scipy.org/) and this is the only function I haven't found an equivalent to. A NumPy-based solution would be ideal, but any language will do. | The open source Python package, [SciPy](http://www.scipy.org/), has quite a large set of optimization routines, including some for multivariable problems with constraints (which is what fmincon does, I believe). Once you have SciPy installed, type the following at the Python command prompt:
`help(scipy.optimize)` (after an `import scipy.optimize`)
The resulting document is extensive and includes the following which I believe might be of use to you.
```
Constrained Optimizers (multivariate)
fmin_l_bfgs_b -- Zhu, Byrd, and Nocedal's L-BFGS-B constrained optimizer
(if you use this please quote their papers -- see help)
fmin_tnc -- Truncated Newton Code originally written by Stephen Nash and
adapted to C by Jean-Sebastien Roy.
fmin_cobyla -- Constrained Optimization BY Linear Approximation
``` |
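A hedged sketch of one of the routines listed above, `fmin_cobyla`, on a made-up illustrative problem: minimize x² + y² subject to x + y ≥ 1 (whose optimum is (0.5, 0.5)). The import is guarded since SciPy may not be installed:

```python
try:
    from scipy.optimize import fmin_cobyla
    HAVE_SCIPY = True
except ImportError:  # SciPy not available in this environment
    HAVE_SCIPY = False

def objective(v):
    # Minimize x^2 + y^2
    return v[0] ** 2 + v[1] ** 2

def constraint(v):
    # COBYLA treats points where this is >= 0 as feasible: x + y >= 1
    return v[0] + v[1] - 1.0

if HAVE_SCIPY:
    # Start from an infeasible-agnostic guess; tighten rhoend for accuracy.
    solution = fmin_cobyla(objective, [2.0, 2.0], [constraint], rhoend=1e-7)
else:
    solution = None
```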
Open source alternative to MATLAB's fmincon function? | 49,926 | 22 | 2008-09-08T15:19:59Z | 196,806 | 22 | 2008-10-13T05:51:51Z | [
"python",
"matlab",
"numpy",
"numerical",
"scientific-computing"
] | Is there an open-source alternative to MATLAB's [`fmincon`](http://www.mathworks.com/access/helpdesk/help/toolbox/optim/index.html?/access/helpdesk/help/toolbox/optim/ug/fmincon.html) function for constrained linear optimization? I'm rewriting a MATLAB program to use Python / [NumPy](http://numpy.scipy.org/) / [SciPy](http://www.scipy.org/) and this is the only function I haven't found an equivalent to. A NumPy-based solution would be ideal, but any language will do. | Is your problem convex? Linear? Non-linear? I agree that SciPy.optimize will probably do the job, but fmincon is a sort of bazooka for solving optimization problems, and you'll be better off if you can confine it to one of the categories below (in increasing level of difficulty to solve efficiently)
1. Linear Program (LP)
2. Quadratic Program (QP)
3. Convex Quadratically-Constrained Quadratic Program (QCQP)
4. Second-Order Cone Program (SOCP)
5. Semidefinite Program (SDP)
6. Non-Linear Convex Problem
7. Non-Convex Problem
There are also combinatoric problems such as Mixed-Integer Linear Programs (MILP), but you didn't mention any sort of integrality constraints, suffice to say that they fall into a different class of problems.
The CVXOpt package will be of great use to you if your problem is convex.
If your problem is not convex, you need to choose between finding a local solution or the global solution. Many convex solvers 'sort of' work in a non-convex domain. Finding a good approximation to the global solution would require some form of Simulated Annealing or a Genetic Algorithm. Finding the global solution will require an enumeration of all local solutions or a combinatorial strategy such as Branch and Bound.
Open source alternative to MATLAB's fmincon function? | 49,926 | 22 | 2008-09-08T15:19:59Z | 1,856,211 | 12 | 2009-12-06T18:51:50Z | [
"python",
"matlab",
"numpy",
"numerical",
"scientific-computing"
] | Is there an open-source alternative to MATLAB's [`fmincon`](http://www.mathworks.com/access/helpdesk/help/toolbox/optim/index.html?/access/helpdesk/help/toolbox/optim/ug/fmincon.html) function for constrained linear optimization? I'm rewriting a MATLAB program to use Python / [NumPy](http://numpy.scipy.org/) / [SciPy](http://www.scipy.org/) and this is the only function I haven't found an equivalent to. A NumPy-based solution would be ideal, but any language will do. | Python optimization software:
* **OpenOpt** <http://openopt.org> **(this one is numpy-based as you wish, with automatic differentiation by FuncDesigner)**
* **Pyomo** <https://software.sandia.gov/trac/coopr/wiki/Package/pyomo>
* **CVXOPT** <http://abel.ee.ucla.edu/cvxopt/>
* **NLPy** <http://nlpy.sourceforge.net/> |
What Python way would you suggest to check whois database records? | 50,394 | 7 | 2008-09-08T18:43:52Z | 4,078,336 | 8 | 2010-11-02T13:53:05Z | [
"python",
"sysadmin",
"whois"
] | I'm trying to get a webservice up and running that actually requires to check whois databases. What I'm doing right now is ugly and I'd like to avoid it as much as I can: I call gwhois command and parse its output. Ugly.
I did some searching to try to find a Pythonic way to do this task. Generally I came up with pretty much nothing - [this old discussion list link](http://mail.python.org/pipermail/python-list/2000-March/028122.html) has a way to check if a domain exists. Not quite what I was looking for... But still, it was the best answer Google gave me - everything else is just a bunch of unanswered questions.
Have any of you succeeded in getting some method up and running? I'd very much appreciate some tips - or should I just do it the open-source way, sit down, and code something myself? :) | Look at this:
<http://code.google.com/p/pywhois/>
pywhois - Python module for retrieving WHOIS information of domains
Goal:
- Create a simple importable Python module which will produce parsed WHOIS data for a given domain.
- Able to extract data for all the popular TLDs (com, org, net, ...)
- Query a WHOIS server directly instead of going through an intermediate web service like many others do.
- Works with Python 2.4+ and no external dependencies
Example:
```
>>> import pywhois
>>> w = pywhois.whois('google.com')
>>> w.expiration_date
['14-sep-2011']
>>> w.emails
['[email protected]',
'[email protected]',
'[email protected]',
'[email protected]']
>>> print w
...
``` |
How do I get the path and name of the file that is currently executing? | 50,499 | 262 | 2008-09-08T19:41:10Z | 50,502 | 11 | 2008-09-08T19:42:56Z | [
"python",
"scripting",
"file"
] | I have scripts calling other script files but I need to get the filepath of the file that is currently running within the process.
For example, let's say I have three files. Using [execfile](http://docs.python.org/library/functions.html#execfile):
* `script_1.py` calls `script_2.py`.
* In turn, `script_2.py` calls `script_3.py`.
How can I get the file name and path of **`script_3.py`**, *from code within `script_3.py`*, without having to pass that information as arguments from `script_2.py`?
(Executing `os.getcwd()` returns the original starting script's filepath not the current file's.) | It's not entirely clear what you mean by "the filepath of the file that is currently running within the process".
`sys.argv[0]` usually contains the location of the script that was invoked by the Python interpreter.
Check the [sys documentation](https://docs.python.org/2/library/sys.html) for more details.
As @Tim and @Pat Notz have pointed out, the `__file__` attribute provides access to
> the file from which the module was
> loaded, if it was loaded from a file |