title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
Django Template Variables and Javascript | 298,772 | 92 | 2008-11-18T13:52:31Z | 8,483,184 | 22 | 2011-12-13T01:15:07Z | [
"javascript",
"python",
"django",
"google-app-engine",
"django-templates"
] | When I render a page using the Django template renderer, I can pass in a dictionary variable containing various values to manipulate them in the page using `{{ myVar }}`.
Is there a way to access the same variable in Javascript (perhaps using the DOM, I don't know how Django makes the variables accessible)? I want to be able to lookup details using an AJAX lookup based on the values contained in the variables passed in. | A solution that worked for me is using the hidden input field in the template
```
<input type="hidden" id="myVar" name="variable" value="{{ variable }}">
```
Then get the value in JavaScript this way:
```
var myVar = document.getElementById("myVar").value;
``` |
Django Template Variables and Javascript | 298,772 | 92 | 2008-11-18T13:52:31Z | 25,538,871 | 21 | 2014-08-28T00:14:04Z | [
"javascript",
"python",
"django",
"google-app-engine",
"django-templates"
] | When I render a page using the Django template renderer, I can pass in a dictionary variable containing various values to manipulate them in the page using `{{ myVar }}`.
Is there a way to access the same variable in Javascript (perhaps using the DOM, I don't know how Django makes the variables accessible)? I want to be able to lookup details using an AJAX lookup based on the values contained in the variables passed in. | **CAUTION** Check ticket [#17419](https://code.djangoproject.com/ticket/17419) for discussion on adding similar tag into Django core and possible XSS vulnerabilities introduced by using this template tag with user generated data. [Comment](https://code.djangoproject.com/ticket/17419#comment:27) from amacneil discusses most of the concerns raised in the ticket.
---
I think the most flexible and handy way of doing this is to define a template filter for the variables you want to use in JS code. This lets you ensure that your data is properly escaped, and it works with complex data structures such as `dict` and `list`. That's why I'm writing this answer even though there is already an accepted answer with a lot of upvotes.
Here is an example of template filter:
```
# myapp/templatetags/js.py

from django.utils.safestring import mark_safe
from django.template import Library
import json

register = Library()

@register.filter(is_safe=True)
def js(obj):
    return mark_safe(json.dumps(obj))
```
This template filter converts a variable to a JSON string. You can use it like so:
```
{# myapp/templates/example.html #}
{% load js %}
<script type="text/javascript">
var someVar = {{ some_var | js }};
</script>
``` |
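One caveat with `json.dumps` alone: its output can contain a literal `</script>`, which ends the surrounding script block early (this is part of the XSS concern in the ticket above). A minimal sketch of the extra escaping a hardened filter might apply, shown with only the standard `json` module (`js_escape` is an illustrative name; Django's `mark_safe` wrapper is omitted here):

```python
import json

def js_escape(obj):
    # Illustrative hardened variant: serialize to JSON, then escape the
    # characters that could prematurely close a <script> block or start markup.
    return (json.dumps(obj)
            .replace('<', '\\u003c')
            .replace('>', '\\u003e')
            .replace('&', '\\u0026'))

print(js_escape({"html": "</script><script>alert(1)</script>"}))
```

Because `\u003c` is a valid JSON escape for `<`, the value round-trips unchanged through `JSON.parse` on the client while never containing a raw `</script>` in the page source.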
How do I change directory back to my original working directory with Python? | 299,446 | 5 | 2008-11-18T17:19:47Z | 299,462 | 10 | 2008-11-18T17:24:48Z | [
"python"
] | I have a function that resembles the one below. I'm not sure how to use the os module to get back to my original working directory at the conclusion of the jar's execution.
```
def run():
    owd = os.getcwd()
    #first change dir to build_dir path
    os.chdir(testDir)
    #run jar from test directory
    os.system(cmd)
    #change dir back to original working directory (owd)
```
note: I think my code formatting is off - not sure why. My apologies in advance | You simply need to add the line:
```
os.chdir(owd)
```
Just a note this was also answered in your other [question](http://stackoverflow.com/questions/299249/how-can-i-get-my-python-version-25-script-to-run-a-jar-file-inside-a-folder-ins). |
Validating with an XML schema in Python | 299,588 | 67 | 2008-11-18T17:59:56Z | 299,611 | 7 | 2008-11-18T18:09:40Z | [
"python",
"xml",
"validation",
"xsd"
] | I have an XML file and an XML schema in another file and I'd like to validate that my XML file adheres to the schema. How do I do this in Python?
I'd prefer something using the standard library, but I can install a third-party package if necessary. | lxml provides etree.DTD
adapted from the tests at <http://lxml.de/api/lxml.tests.test_dtd-pysrc.html>:
```
from io import BytesIO
from lxml import etree

root = etree.XML("<b/>")
dtd = etree.DTD(BytesIO(b"<!ELEMENT b EMPTY>"))
assert dtd.validate(root)
``` |
Validating with an XML schema in Python | 299,588 | 67 | 2008-11-18T17:59:56Z | 299,635 | 42 | 2008-11-18T18:16:53Z | [
"python",
"xml",
"validation",
"xsd"
] | I have an XML file and an XML schema in another file and I'd like to validate that my XML file adheres to the schema. How do I do this in Python?
I'd prefer something using the standard library, but I can install a third-party package if necessary. | I am assuming you mean using XSD files. Surprisingly there aren't many python XML libraries that support this. lxml does however. Check [Validation with lxml](http://lxml.de/validation.html). The page also lists how to use lxml to validate with other schema types. |
Validating with an XML schema in Python | 299,588 | 67 | 2008-11-18T17:59:56Z | 1,946,225 | 12 | 2009-12-22T12:54:33Z | [
"python",
"xml",
"validation",
"xsd"
] | I have an XML file and an XML schema in another file and I'd like to validate that my XML file adheres to the schema. How do I do this in Python?
I'd prefer something using the standard library, but I can install a third-party package if necessary. | The PyXB package at <http://pyxb.sourceforge.net/> generates validating bindings for Python from XML schema documents. It handles almost every schema construct and supports multiple namespaces. |
Validating with an XML schema in Python | 299,588 | 67 | 2008-11-18T17:59:56Z | 5,566,672 | 20 | 2011-04-06T12:53:52Z | [
"python",
"xml",
"validation",
"xsd"
] | I have an XML file and an XML schema in another file and I'd like to validate that my XML file adheres to the schema. How do I do this in Python?
I'd prefer something using the standard library, but I can install a third-party package if necessary. | As for "pure python" solutions: the package index lists:
* [pyxsd](http://pypi.python.org/pypi/pyxsd), the description says it uses xml.etree.cElementTree, which is not "pure python" (but included in stdlib), but source code indicates that it falls back to xml.etree.ElementTree, so this would count as pure python. Haven't used it, but according to the docs, it does do schema validation.
* [minixsv](http://pypi.python.org/pypi/minixsv): 'a lightweight XML schema validator written in "pure" Python'. However, the description says "currently a subset of the XML schema standard is supported", so this may not be enough.
* [XSV](http://pypi.python.org/pypi/XSV), which I think is used for the W3C's online xsd validator (it still seems to use the old pyxml package, which I think is no longer maintained) |
How to unquote a urlencoded unicode string in python? | 300,445 | 38 | 2008-11-18T22:49:27Z | 300,531 | 8 | 2008-11-18T23:22:24Z | [
"python",
"unicode",
"urllib"
] | I have a unicode string like "Tanım" which is encoded as "Tan%u0131m" somehow. How can I convert this encoded string back to the original unicode?
Apparently urllib.unquote does not support unicode. | ```
import re

def unquote(text):
    def unicode_unquoter(match):
        return unichr(int(match.group(1), 16))  # Python 2; use chr() on Python 3
    return re.sub(r'%u([0-9a-fA-F]{4})', unicode_unquoter, text)
``` |
How to unquote a urlencoded unicode string in python? | 300,445 | 38 | 2008-11-18T22:49:27Z | 300,533 | 60 | 2008-11-18T23:22:44Z | [
"python",
"unicode",
"urllib"
] | I have a unicode string like "Tanım" which is encoded as "Tan%u0131m" somehow. How can I convert this encoded string back to the original unicode?
Apparently urllib.unquote does not support unicode. | %uXXXX is a [non-standard encoding scheme](http://en.wikipedia.org/wiki/Percent-encoding#Non-standard_implementations) that has been rejected by the w3c, despite the fact that an implementation continues to live on in JavaScript land.
The more common technique seems to be to UTF-8 encode the string and then % escape the resulting bytes using %XX. This scheme is supported by urllib.unquote:
```
>>> urllib.unquote("%0a")
'\n'
```
Unfortunately, if you really **need** to support %uXXXX, you will probably have to roll your own decoder. Otherwise, it is likely to be far more preferable to simply UTF-8 encode your unicode and then % escape the resulting bytes.
A more complete example:
```
>>> u"Tanım"
u'Tan\u0131m'
>>> url = urllib.quote(u"Tanım".encode('utf8'))
>>> urllib.unquote(url).decode('utf8')
u'Tan\u0131m'
``` |
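The examples above are Python 2. On Python 3, `urllib.quote`/`urllib.unquote` moved to `urllib.parse`, and the same UTF-8 round-trip looks like this:

```python
from urllib.parse import quote, unquote

url = quote("Tanım")   # percent-escapes the UTF-8 bytes by default
print(url)             # Tan%C4%B1m
print(unquote(url))    # Tanım
```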
Python - Backslash Quoting in String Literals | 301,068 | 34 | 2008-11-19T05:22:47Z | 301,075 | 52 | 2008-11-19T05:26:43Z | [
"python",
"string"
] | I have a string that contains both double-quotes and backslashes that I want to set to a variable in Python. However, whenever I try to set it, the quotes or slashes are either removed or escaped. Here's an example:
```
>>> foo = 'baz "\"'
>>> foo
'baz ""'
```
So instead of `baz "\"` like I want I'm getting `baz ""`. If I then try to escape the backslash, it doesn't help either:
```
>>> foo = 'baz "\\"'
>>> foo
'baz "\\"'
```
Which now matches what I put in but wasn't what I originally wanted. How do you get around this problem? | You're being misled by the output -- the second approach you're taking actually does what you want, you just aren't believing it. :)
```
>>> foo = 'baz "\\"'
>>> foo
'baz "\\"'
>>> print foo
baz "\"
```
Incidentally, there's another string form which might be a bit clearer:
```
>>> print r'baz "\"'
baz "\"
``` |
Python - Backslash Quoting in String Literals | 301,068 | 34 | 2008-11-19T05:22:47Z | 301,076 | 17 | 2008-11-19T05:29:45Z | [
"python",
"string"
] | I have a string that contains both double-quotes and backslashes that I want to set to a variable in Python. However, whenever I try to set it, the quotes or slashes are either removed or escaped. Here's an example:
```
>>> foo = 'baz "\"'
>>> foo
'baz ""'
```
So instead of `baz "\"` like I want I'm getting `baz ""`. If I then try to escape the backslash, it doesn't help either:
```
>>> foo = 'baz "\\"'
>>> foo
'baz "\\"'
```
Which now matches what I put in but wasn't what I originally wanted. How do you get around this problem? | Use a raw string:
```
>>> foo = r'baz "\"'
>>> foo
'baz "\\"'
```
Note that although it looks wrong, it's actually right. There is only one backslash in the string `foo`.
This happens because when you just type `foo` at the prompt, python displays the result of `__repr__()` on the string. This leads to the following (notice only one backslash and no quotes around the `print`ed string):
```
>>> foo = r'baz "\"'
>>> foo
'baz "\\"'
>>> print foo
baz "\"
```
And let's keep going because there's more backslash tricks. If you want to have a backslash at the end of the string and use the method above you'll come across a problem:
```
>>> foo = r'baz \'
File "<stdin>", line 1
foo = r'baz \'
^
SyntaxError: EOL while scanning single-quoted string
```
Raw strings don't work properly when you do that. You have to use a regular string and escape your backslashes:
```
>>> foo = 'baz \\'
>>> print foo
baz \
```
However, if you're working with Windows file names, you're in for some pain. What you want to do is use forward slashes and the `os.path.normpath()` function:
```
myfile = os.path.normpath('c:/folder/subfolder/file.txt')
open(myfile)
```
This will save a lot of escaping and hair-tearing. [This page](http://pythonconquerstheuniverse.blogspot.com/2008/06/python-gotcha-raw-strings-and.html) was handy when going through this a while ago. |
Dynamic module import in Python | 301,134 | 281 | 2008-11-19T06:09:57Z | 301,146 | 176 | 2008-11-19T06:17:18Z | [
"python",
"python-import"
] | I'm writing a Python application that takes a command as an argument, for example:
```
$ python myapp.py command1
```
I want the application to be extensible, that is, to be able to add new modules that implement new commands without having to change the main application source. The tree looks something like:
```
myapp/
__init__.py
commands/
__init__.py
command1.py
command2.py
foo.py
bar.py
```
So I want the application to find the available command modules at runtime and execute the appropriate one.
Currently this is implemented something like:
```
command = sys.argv[1]
try:
command_module = __import__("myapp.commands.%s" % command, fromlist=["myapp.commands"])
except ImportError:
# Display error message
command_module.run()
```
This works just fine, I'm just wondering if there is possibly a more idiomatic way to accomplish what we are doing with this code.
Note that I specifically don't want to get in to using eggs or extension points. This is not an open-source project and I don't expect there to be "plugins". The point is to simplify the main application code and remove the need to modify it each time a new command module is added. | With Python older than 2.7/3.1, that's pretty much how you do it. For newer versions, see `importlib.import_module` [for 2.7+](https://docs.python.org/2/library/importlib.html#importlib.import_module) and [for 3.1+](https://docs.python.org/3.1/library/importlib.html#importlib.import_module).
You can use `exec` if you want to as well.
Note you can import a list of modules by doing this:
```
>>> moduleNames = ['sys', 'os', 're', 'unittest']
>>> moduleNames
['sys', 'os', 're', 'unittest']
>>> modules = map(__import__, moduleNames)
```
Ripped straight from [Dive Into Python](http://www.diveintopython.net/functional_programming/dynamic_import.html). |
Dynamic module import in Python | 301,134 | 281 | 2008-11-19T06:09:57Z | 301,165 | 19 | 2008-11-19T06:28:28Z | [
"python",
"python-import"
] | I'm writing a Python application that takes a command as an argument, for example:
```
$ python myapp.py command1
```
I want the application to be extensible, that is, to be able to add new modules that implement new commands without having to change the main application source. The tree looks something like:
```
myapp/
__init__.py
commands/
__init__.py
command1.py
command2.py
foo.py
bar.py
```
So I want the application to find the available command modules at runtime and execute the appropriate one.
Currently this is implemented something like:
```
command = sys.argv[1]
try:
command_module = __import__("myapp.commands.%s" % command, fromlist=["myapp.commands"])
except ImportError:
# Display error message
command_module.run()
```
This works just fine, I'm just wondering if there is possibly a more idiomatic way to accomplish what we are doing with this code.
Note that I specifically don't want to get in to using eggs or extension points. This is not an open-source project and I don't expect there to be "plugins". The point is to simplify the main application code and remove the need to modify it each time a new command module is added. | Use the [imp module](http://docs.python.org/library/imp.html#imp.load_module), or the more direct [`__import__()`](http://docs.python.org/library/functions.html#__import__) function. |
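A sketch of the `importlib` spelling of the same idea (the `myapp.commands` package name is taken from the question; `get_command_module` is an illustrative helper), plus a standard-library demo of the call:

```python
import importlib

def get_command_module(command, package="myapp.commands"):
    # Illustrative helper mirroring the question's layout; raises ImportError
    # (ModuleNotFoundError on Python 3) for unknown commands.
    return importlib.import_module("%s.%s" % (package, command))

# stdlib demo of the same call:
mod = importlib.import_module("os.path")
print(mod.basename("/tmp/file.txt"))  # file.txt
```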
Dynamic module import in Python | 301,134 | 281 | 2008-11-19T06:09:57Z | 301,298 | 107 | 2008-11-19T08:21:25Z | [
"python",
"python-import"
] | I'm writing a Python application that takes a command as an argument, for example:
```
$ python myapp.py command1
```
I want the application to be extensible, that is, to be able to add new modules that implement new commands without having to change the main application source. The tree looks something like:
```
myapp/
__init__.py
commands/
__init__.py
command1.py
command2.py
foo.py
bar.py
```
So I want the application to find the available command modules at runtime and execute the appropriate one.
Currently this is implemented something like:
```
command = sys.argv[1]
try:
command_module = __import__("myapp.commands.%s" % command, fromlist=["myapp.commands"])
except ImportError:
# Display error message
command_module.run()
```
This works just fine, I'm just wondering if there is possibly a more idiomatic way to accomplish what we are doing with this code.
Note that I specifically don't want to get in to using eggs or extension points. This is not an open-source project and I don't expect there to be "plugins". The point is to simplify the main application code and remove the need to modify it each time a new command module is added. | As mentioned the [imp](http://docs.python.org/library/imp.html) module provides you loading functions:
```
imp.load_source(name, path)
imp.load_compiled(name, path)
```
I've used these before to perform something similar.
In my case I defined a specific class with defined methods that were required.
Once I loaded the module I would check if the class was in the module, and then create an instance of that class, something like this:
```
import imp
import os

def load_from_file(filepath):
    class_inst = None
    expected_class = 'MyClass'
    mod_name,file_ext = os.path.splitext(os.path.split(filepath)[-1])
    if file_ext.lower() == '.py':
        py_mod = imp.load_source(mod_name, filepath)
    elif file_ext.lower() == '.pyc':
        py_mod = imp.load_compiled(mod_name, filepath)
    if hasattr(py_mod, expected_class):
        class_inst = getattr(py_mod, expected_class)()
    return class_inst
``` |
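On Python 3 the `imp` module is deprecated; a sketch of the same load-file-and-instantiate pattern with `importlib.util` (the demo writes a throwaway plugin file just to show the load):

```python
import importlib.util
import os
import tempfile

def load_from_file(filepath, expected_class="MyClass"):
    # Same idea as the imp-based version, using the Python 3 import machinery.
    mod_name = os.path.splitext(os.path.basename(filepath))[0]
    spec = importlib.util.spec_from_file_location(mod_name, filepath)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    cls = getattr(module, expected_class, None)
    return cls() if cls is not None else None

# demo: write a throwaway plugin file, then load and instantiate its class
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "plugin.py")
    with open(path, "w") as f:
        f.write("class MyClass:\n    def run(self):\n        return 'ran'\n")
    print(load_from_file(path).run())  # ran
```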
Dynamic module import in Python | 301,134 | 281 | 2008-11-19T06:09:57Z | 8,028,743 | 10 | 2011-11-06T17:08:08Z | [
"python",
"python-import"
] | I'm writing a Python application that takes a command as an argument, for example:
```
$ python myapp.py command1
```
I want the application to be extensible, that is, to be able to add new modules that implement new commands without having to change the main application source. The tree looks something like:
```
myapp/
__init__.py
commands/
__init__.py
command1.py
command2.py
foo.py
bar.py
```
So I want the application to find the available command modules at runtime and execute the appropriate one.
Currently this is implemented something like:
```
command = sys.argv[1]
try:
command_module = __import__("myapp.commands.%s" % command, fromlist=["myapp.commands"])
except ImportError:
# Display error message
command_module.run()
```
This works just fine, I'm just wondering if there is possibly a more idiomatic way to accomplish what we are doing with this code.
Note that I specifically don't want to get in to using eggs or extension points. This is not an open-source project and I don't expect there to be "plugins". The point is to simplify the main application code and remove the need to modify it each time a new command module is added. | If you want it in your locals:
```
>>> mod = 'sys'
>>> locals()['my_module'] = __import__(mod)
>>> my_module.version
'2.6.6 (r266:84297, Aug 24 2010, 18:46:32) [MSC v.1500 32 bit (Intel)]'
```
same would work with `globals()` |
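A caution about this trick: writing into `locals()` is only reliable at module scope, where `locals()` is the same dict as `globals()`. Inside a function, CPython discards such writes, so a plain dict (or `globals()`) is the dependable spelling, as this sketch shows:

```python
def broken():
    locals()['my_module'] = __import__('sys')
    try:
        return my_module  # NameError: function-level locals() writes vanish
    except NameError:
        return None

def works():
    modules = {}  # a plain dict is the dependable spelling inside a function
    modules['sys'] = __import__('sys')
    return modules['sys']

print(broken())          # None
print(works().__name__)  # sys
```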
Dynamic module import in Python | 301,134 | 281 | 2008-11-19T06:09:57Z | 14,000,967 | 142 | 2012-12-22T07:33:46Z | [
"python",
"python-import"
] | I'm writing a Python application that takes a command as an argument, for example:
```
$ python myapp.py command1
```
I want the application to be extensible, that is, to be able to add new modules that implement new commands without having to change the main application source. The tree looks something like:
```
myapp/
__init__.py
commands/
__init__.py
command1.py
command2.py
foo.py
bar.py
```
So I want the application to find the available command modules at runtime and execute the appropriate one.
Currently this is implemented something like:
```
command = sys.argv[1]
try:
command_module = __import__("myapp.commands.%s" % command, fromlist=["myapp.commands"])
except ImportError:
# Display error message
command_module.run()
```
This works just fine, I'm just wondering if there is possibly a more idiomatic way to accomplish what we are doing with this code.
Note that I specifically don't want to get in to using eggs or extension points. This is not an open-source project and I don't expect there to be "plugins". The point is to simplify the main application code and remove the need to modify it each time a new command module is added. | The recommended way for Python 2.7 and later is to use [`importlib`](http://docs.python.org/2/library/importlib.html#importlib.import_module) module:
```
import importlib

my_module = importlib.import_module('os.path')
``` |
Which language is easiest and fastest to work with XML content? | 301,493 | 13 | 2008-11-19T10:35:18Z | 301,630 | 17 | 2008-11-19T11:31:34Z | [
"java",
".net",
"python",
"xml",
"ruby"
] | We have developers with knowledge of these languages - Ruby , Python, .Net or Java. We are developing an application which will mainly handle XML documents. Most of the work is to convert predefined XML files into database tables, providing mapping between XML documents through database, creating reports from database etc. Which language will be the easiest and fastest to work with?
(It is a web-app) | A dynamic language rules for this. Why? The mappings are easy to code and change. You don't have to recompile and rebuild.
Indeed, with a little cleverness, you can have your "XML XPATH to a Tag -> DB table-field" mappings as disjoint blocks of Python code that your main application imports.
The block of Python code **is** your configuration file. It's not an `.ini` file or a `.properties` file that describes a configuration. It **is** the configuration.
We use Python, xml.etree and SQLAlchemy (to keep the SQL out of your programs) for this because we're up and running with very little effort and a great deal of flexibility.
---
**source.py**
```
"""A particular XML parser. Formats change, so sometimes this changes, too."""
import xml.etree.ElementTree as xml
class SSXML_Source( object ):
ns0= "urn:schemas-microsoft-com:office:spreadsheet"
ns1= "urn:schemas-microsoft-com:office:excel"
def __init__( self, aFileName, *sheets ):
"""Initialize a XML source.
XXX - Create better sheet filtering here, in the constructor.
@param aFileName: the file name.
"""
super( SSXML_Source, self ).__init__( aFileName )
self.log= logging.getLogger( "source.PCIX_XLS" )
self.dom= etree.parse( aFileName ).getroot()
def sheets( self ):
for wb in self.dom.getiterator("{%s}Workbook" % ( self.ns0, ) ):
for ws in wb.getiterator( "{%s}Worksheet" % ( self.ns0, ) ):
yield ws
def rows( self ):
for s in self.sheets():
print s.attrib["{%s}Name" % ( self.ns0, ) ]
for t in s.getiterator( "{%s}Table" % ( self.ns0, ) ):
for r in t.getiterator( "{%s}Row" % ( self.ns0, ) ):
# The XML may not be really useful.
# In some cases, you may have to convert to something useful
yield r
```
**model.py**
```
"""This is your target object.
It's part of the problem domain; it rarely changes.
"""
class MyTargetObject( object ):
def __init__( self ):
self.someAttr= ""
self.anotherAttr= ""
self.this= 0
self.that= 3.14159
def aMethod( self ):
"""etc."""
pass
```
**builder\_today.py** One of many mapping configurations
```
"""One of many builders. This changes all the time to fit
specific needs and situations. The goal is to keep this
short and to-the-point so that it has the mapping and nothing
but the mapping.
"""
import model
class MyTargetBuilder( object ):
def makeFromXML( self, element ):
result= model.MyTargetObject()
result.someAttr= element.findtext( "Some" )
result.anotherAttr= element.findtext( "Another" )
result.this= int( element.findtext( "This" ) )
result.that= float( element.findtext( "that" ) )
return result
```
**loader.py**
```
"""An application that maps from XML to the domain object
using a configurable "builder".
"""
import model
import source
import builder_1
import builder_2
import builder_today
# Configure this: pick a builder is appropriate for the data:
b= builder_today.MyTargetBuilder()
s= source.SSXML_Source( sys.argv[1] )
for r in s.rows():
data= b.makeFromXML( r )
# ... persist data with a DB save or file write
```
---
To make changes, you can correct a builder or create a new builder. You adjust the loader source to identify which builder will be used. You can, without too much trouble, make the selection of builder a command-line parameter. Dynamic imports in dynamic languages seem like overkill to me, but they are handy. |
Which language is easiest and fastest to work with XML content? | 301,493 | 13 | 2008-11-19T10:35:18Z | 301,649 | 8 | 2008-11-19T11:40:58Z | [
"java",
".net",
"python",
"xml",
"ruby"
] | We have developers with knowledge of these languages - Ruby , Python, .Net or Java. We are developing an application which will mainly handle XML documents. Most of the work is to convert predefined XML files into database tables, providing mapping between XML documents through database, creating reports from database etc. Which language will be the easiest and fastest to work with?
(It is a web-app) | ## XSLT
I suggest using [XSLT templates](http://en.wikipedia.org/wiki/Xslt) to transform the XML into INSERT statements (or whatever you need), as required.
You should be able to invoke XSLT from any of the languages you mention.
This will result in a lot less code than doing it the long way round. |
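A sketch of what such a stylesheet might look like; the element and table names are invented for illustration, and real documents would also need SQL escaping of string values:

```xml
<?xml version="1.0"?>
<!-- insert.xsl: element and table names are illustrative only -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="person">
    INSERT INTO person (name, age)
    VALUES ('<xsl:value-of select="name"/>', <xsl:value-of select="age"/>);
  </xsl:template>
</xsl:stylesheet>
```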
Convert a string with date and time to a date | 301,528 | 18 | 2008-11-19T10:45:01Z | 301,580 | 44 | 2008-11-19T11:05:41Z | [
"python",
"datetime"
] | I've got this date time string:
```
post["date"] = "2007-07-18 10:03:19"
```
I'd like to extract just "2007-07-18" as a date. I've seen some reference to `strptime` but I'm not sure how to use it. How can I extract the date from this string? | The other two answers are fine, but if you actually want the date for something else, you can use the `datetime` module:
```
from datetime import datetime
d = datetime.strptime('2007-07-18 10:03:19', '%Y-%m-%d %H:%M:%S')
day_string = d.strftime('%Y-%m-%d')
```
It might be overkill for now, but it'll come in useful. You can see all of the format specifiers [here](http://docs.python.org/library/time.html#time.strftime). |
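If you want an actual date object rather than a formatted string, the parsed `datetime` also exposes `.date()`:

```python
from datetime import datetime

d = datetime.strptime('2007-07-18 10:03:19', '%Y-%m-%d %H:%M:%S')
print(d.date())        # 2007-07-18 -- a datetime.date object
print(str(d.date()))   # 2007-07-18 -- its ISO-format string
```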
Python: urllib/urllib2/httplib confusion | 301,924 | 52 | 2008-11-19T13:44:20Z | 301,998 | 8 | 2008-11-19T14:19:19Z | [
"python",
"http",
"urllib2"
] | I'm trying to test the functionality of a web app by scripting a login sequence in Python, but I'm having some troubles.
Here's what I need to do:
1. Do a POST with a few parameters and headers.
2. Follow a redirect
3. Retrieve the HTML body.
Now, I'm relatively new to python, but the two things I've tested so far haven't worked. First I used httplib, with putrequest() (passing the parameters within the URL), and putheader(). This didn't seem to follow the redirects.
Then I tried urllib and urllib2, passing both headers and parameters as dicts. This seems to return the login page, instead of the page I'm trying to login to, I guess it's because of lack of cookies or something.
Am I missing something simple?
Thanks. | I'd give Mechanize (<http://wwwsearch.sourceforge.net/mechanize/>) a shot. It may well handle your cookie/headers transparently. |
Python: urllib/urllib2/httplib confusion | 301,924 | 52 | 2008-11-19T13:44:20Z | 302,099 | 31 | 2008-11-19T14:52:47Z | [
"python",
"http",
"urllib2"
] | I'm trying to test the functionality of a web app by scripting a login sequence in Python, but I'm having some troubles.
Here's what I need to do:
1. Do a POST with a few parameters and headers.
2. Follow a redirect
3. Retrieve the HTML body.
Now, I'm relatively new to python, but the two things I've tested so far haven't worked. First I used httplib, with putrequest() (passing the parameters within the URL), and putheader(). This didn't seem to follow the redirects.
Then I tried urllib and urllib2, passing both headers and parameters as dicts. This seems to return the login page, instead of the page I'm trying to login to, I guess it's because of lack of cookies or something.
Am I missing something simple?
Thanks. | Focus on `urllib2` for this, it works quite well. Don't mess with `httplib`, it's not the top-level API.
What you're noting is that `urllib2` doesn't follow the redirect.
You need to fold in an instance of `HTTPRedirectHandler` that will catch and follow the redirects.
Further, you may want to subclass the default `HTTPRedirectHandler` to capture information that you'll then check as part of your unit testing.
```
cookie_handler= urllib2.HTTPCookieProcessor( self.cookies )
redirect_handler= urllib2.HTTPRedirectHandler()
opener = urllib2.build_opener(redirect_handler,cookie_handler)
```
You can then use this `opener` object to POST and GET, handling redirects and cookies properly.
You may want to add your own subclass of `HTTPHandler` to capture and log various error codes, also. |
Python: urllib/urllib2/httplib confusion | 301,924 | 52 | 2008-11-19T13:44:20Z | 302,184 | 11 | 2008-11-19T15:12:08Z | [
"python",
"http",
"urllib2"
] | I'm trying to test the functionality of a web app by scripting a login sequence in Python, but I'm having some troubles.
Here's what I need to do:
1. Do a POST with a few parameters and headers.
2. Follow a redirect
3. Retrieve the HTML body.
Now, I'm relatively new to python, but the two things I've tested so far haven't worked. First I used httplib, with putrequest() (passing the parameters within the URL), and putheader(). This didn't seem to follow the redirects.
Then I tried urllib and urllib2, passing both headers and parameters as dicts. This seems to return the login page, instead of the page I'm trying to login to, I guess it's because of lack of cookies or something.
Am I missing something simple?
Thanks. | I had to do this exact thing myself recently. I only needed classes from the standard library. Here's an excerpt from my code:
```
from urllib import urlencode
from urllib2 import urlopen, Request
# encode my POST parameters for the login page
login_qs = urlencode( [("username",USERNAME), ("password",PASSWORD)] )
# extract my session id by loading a page from the site
set_cookie = urlopen(URL_BASE).headers.getheader("Set-Cookie")
sess_id = set_cookie[set_cookie.index("=")+1:set_cookie.index(";")]
# construct headers dictionary using the session id
headers = {"Cookie": "session_id="+sess_id}
# perform login and make sure it worked
if "Announcements:" not in urlopen(Request(URL_BASE+"login",headers=headers), login_qs).read():
print "Didn't log in properly"
exit(1)
# here's the function I used after this for loading pages
def download(page=""):
return urlopen(Request(URL_BASE+page, headers=headers)).read()
# for example:
print download(URL_BASE + "config")
``` |
Python: urllib/urllib2/httplib confusion | 301,924 | 52 | 2008-11-19T13:44:20Z | 302,205 | 13 | 2008-11-19T15:17:31Z | [
"python",
"http",
"urllib2"
] | I'm trying to test the functionality of a web app by scripting a login sequence in Python, but I'm having some troubles.
Here's what I need to do:
1. Do a POST with a few parameters and headers.
2. Follow a redirect
3. Retrieve the HTML body.
Now, I'm relatively new to python, but the two things I've tested so far haven't worked. First I used httplib, with putrequest() (passing the parameters within the URL), and putheader(). This didn't seem to follow the redirects.
Then I tried urllib and urllib2, passing both headers and parameters as dicts. This seems to return the login page, instead of the page I'm trying to login to, I guess it's because of lack of cookies or something.
Am I missing something simple?
Thanks. | @S.Lott, thank you. Your suggestion worked for me, with some modification. Here's how I did it.
```
import urllib
import urllib2
from cookielib import CookieJar
from urllib2 import HTTPRedirectHandler

data = urllib.urlencode(params)
url = host+page
request = urllib2.Request(url, data, headers)
response = urllib2.urlopen(request)

cookies = CookieJar()
cookies.extract_cookies(response,request)

cookie_handler= urllib2.HTTPCookieProcessor( cookies )
redirect_handler= HTTPRedirectHandler()
opener = urllib2.build_opener(redirect_handler,cookie_handler)

response = opener.open(request)
``` |
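For readers on Python 3, where `urllib`/`urllib2` were merged into `urllib.request`, a rough translation of the same idea — note that the endpoint and form field names below are hypothetical placeholders, and the actual network call is left commented out:

```python
import urllib.request
import urllib.parse
import http.cookiejar

# An opener that remembers cookies across requests; redirect handling
# is already installed by default in urllib.request openers.
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# Hypothetical login form fields -- substitute your app's real ones.
data = urllib.parse.urlencode({'user': 'alice', 'passwd': 'secret'}).encode()

# response = opener.open('https://example.com/login', data)
# html = response.read()
```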
Python: urllib/urllib2/httplib confusion | 301,924 | 52 | 2008-11-19T13:44:20Z | 4,836,113 | 15 | 2011-01-29T09:33:42Z | [
"python",
"http",
"urllib2"
] | I'm trying to test the functionality of a web app by scripting a login sequence in Python, but I'm having some troubles.
Here's what I need to do:
1. Do a POST with a few parameters and headers.
2. Follow a redirect
3. Retrieve the HTML body.
Now, I'm relatively new to python, but the two things I've tested so far haven't worked. First I used httplib, with putrequest() (passing the parameters within the URL), and putheader(). This didn't seem to follow the redirects.
Then I tried urllib and urllib2, passing both headers and parameters as dicts. This seems to return the login page, instead of the page I'm trying to login to, I guess it's because of lack of cookies or something.
Am I missing something simple?
Thanks. | Here's my take on this issue.
```
#!/usr/bin/env python
import urllib
import urllib2
class HttpBot:
"""an HttpBot represents one browser session, with cookies."""
def __init__(self):
cookie_handler= urllib2.HTTPCookieProcessor()
redirect_handler= urllib2.HTTPRedirectHandler()
self._opener = urllib2.build_opener(redirect_handler, cookie_handler)
def GET(self, url):
return self._opener.open(url).read()
def POST(self, url, parameters):
return self._opener.open(url, urllib.urlencode(parameters)).read()
if __name__ == "__main__":
bot = HttpBot()
ignored_html = bot.POST('https://example.com/authenticator', {'passwd':'foo'})
print bot.GET('https://example.com/interesting/content')
ignored_html = bot.POST('https://example.com/deauthenticator',{})
``` |
Global hotkey for Python application in Gnome | 302,163 | 6 | 2008-11-19T15:08:12Z | 1,359,417 | 8 | 2009-08-31T21:03:58Z | [
"python",
"gnome"
] | I would like to assign a global hotkey to my Python application, running in Gnome. How do I do that? All I can find are two-year-old posts saying, well, pretty much nothing :-) | There is python-keybinder, which is that same code, but packaged standalone. It is also available in the Debian and Ubuntu repositories now.
<https://github.com/engla/keybinder> |
Use only some parts of Django? | 302,651 | 29 | 2008-11-19T17:14:42Z | 302,847 | 36 | 2008-11-19T18:17:41Z | [
"python",
"django"
] | I like Django, but for a particular application I would like to use only parts of it, but I'm not familiar enough with how Django works on the inside, so maybe someone can point me into the right direction as to what I have to check out.
Specifically, I *want to use*:
* The models and database abstraction
* The [caching API](http://docs.djangoproject.com/en/dev/topics/cache/#the-low-level-cache-api), although I want to avoid database lookups by caching, not HTML generation, and since the caching framework in Django is intended for the latter, I'm not sure yet whether that's really appropriate.
I would *not use*:
* Templating
* urlconfigs
Or, more exactly, I'm neither using HTTP nor HTML. So basically, I have a different input / output chain than usual.
Can this work?
My personal killer feature in Django is the Object / database mapping that I can do with the models, so if there's another technology (doesn't have to be Python, I'm in the design phase and I'm pretty agnostic about languages and platforms) that gives me the same abilities, that would be great, too. | I myself use Django for its object/db mapping without using its urlconfigs. Simply create a file called `djangosettings.py` and insert the necessary configuration, for example:
```
DATABASE_ENGINE = 'oracle'
DATABASE_HOST = 'localhost'
DATABASE_NAME = 'ORCL'
DATABASE_USER = 'scott'
DATABASE_PASSWORD = 'tiger'
```
Then in your regular Python code, do
```
import os
os.environ["DJANGO_SETTINGS_MODULE"] = "djangosettings"
```
before you import any Django modules. This will let you use Django's object/db mappings without actually having a Django project, so you can use it for standalone scripts or other web applications or whatever you want.
As for caching, if you don't want to use Django then you should probably decide what you are using and go from there. I recommend using CherryPy, which doesn't use Django-style regular expression URL mapping, but instead automatically maps URLs to functions based on the function names. There's an example right at the top of the CherryPy home page: <http://cherrypy.org/>
CherryPy has its own caching system, so you can accomplish exactly the same thing as what Django does but without needing to use Django's urlconfig system. |
Use only some parts of Django? | 302,651 | 29 | 2008-11-19T17:14:42Z | 304,352 | 11 | 2008-11-20T04:46:55Z | [
"python",
"django"
] | I like Django, but for a particular application I would like to use only parts of it, but I'm not familiar enough with how Django works on the inside, so maybe someone can point me into the right direction as to what I have to check out.
Specifically, I *want to use*:
* The models and database abstraction
* The [caching API](http://docs.djangoproject.com/en/dev/topics/cache/#the-low-level-cache-api), although I want to avoid database lookups by caching, not HTML generation, and since the caching framework in Django is intended for the latter, I'm not sure yet whether that's really appropriate.
I would *not use*:
* Templating
* urlconfigs
Or, more exactly, I'm neither using HTTP nor HTML. So basically, I have a different input / output chain than usual.
Can this work?
My personal killer feature in Django is the Object / database mapping that I can do with the models, so if there's another technology (doesn't have to be Python, I'm in the design phase and I'm pretty agnostic about languages and platforms) that gives me the same abilities, that would be great, too. | Django, being a web framework, is extremely efficient at creating websites. However, it's also equally well-suited to tackling problems off the web. This is the *loose coupling* that the project prides itself on. Nothing stops you from installing a complete version of Django, and just using what you need. As a rule, very few components of Django make broad assumptions about their usage.
Specifically:
* Django models don't know anything about HTML or HTTP.
* Templates don't know anything about HTML or HTTP.
* The cache system can be used to [store *anything that can be pickled*](http://docs.djangoproject.com/en/dev/topics/cache/#the-low-level-cache-api).
One of the main things you'll face when trying to use Django without a web server is setting up the environment properly. The ORM and cache system still need to be configured in settings.py. There are docs on [using django without a settings module](http://docs.djangoproject.com/en/dev/topics/settings/#using-settings-without-setting-django-settings-module) that you may find useful. |
Running a Django site under mod_wsgi | 302,679 | 9 | 2008-11-19T17:21:02Z | 1,038,087 | 10 | 2009-06-24T12:35:32Z | [
"python",
"django",
"apache",
"mod-wsgi"
] | I am trying to run my Django sites with mod\_wsgi instead of mod\_python (RHEL 5). I tried this with all my sites, but get the same problem. I configured it the standard way everyone recommends, but requests to the site simply time out.
Apache conf:
```
<VirtualHost 74.54.144.34>
DocumentRoot /wwwclients/thymeandagain
ServerName thymeandagain4corners.com
ServerAlias www.thymeandagain4corners.com
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
CustomLog /var/log/httpd/thymeandagain_access_log combined
ErrorLog /var/log/httpd/thymeandagain_error_log
LogLevel error
WSGIScriptAlias / /wwwclients/thymeandagain/wsgi_handler.py
WSGIDaemonProcess thymeandagain user=admin group=admin processes=1 threads=16
WSGIProcessGroup thymeandagain
</VirtualHost>
```
wsgi\_handler.py:
```
import sys
import os
sys.path.append("/wwwclients")
os.environ['DJANGO_SETTINGS_MODULE'] = 'thymeandagain.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
```
The daemon mod\_wsgi is supposed to spawn off is not there, so requests just time out and I get a bunch of "Unable to connect to WSGI daemon process" errors in the logs. Is there something about the WSGIDaemonProcess directive that is preventing creation of the daemon? Thanks in advance for any help...
EDIT: I get this in the error log:
```
[[email protected]] mcm_server_readable():2582: timeout: Operation now in progress: select(2) call timed out for read(2)able fds
[[email protected]] mcm_get_line():1592
[[email protected]] mcm_server_readable():2582: timeout: Operation now in progress: select(2) call timed out for read(2)able fds
[[email protected]] mcm_get_line():1592
[Thu Nov 20 21:18:17 2008] [notice] caught SIGTERM, shutting down
[Thu Nov 20 21:18:18 2008] [notice] Digest: generating secret for digest authentication ...
[Thu Nov 20 21:18:18 2008] [notice] Digest: done
[Thu Nov 20 21:18:18 2008] [notice] mod_python: Creating 4 session mutexes based on 8 max processes and 64 max threads.
[Thu Nov 20 21:18:18 2008] [notice] Apache/2.2.3 (Red Hat) mod_python/3.2.8 Python/2.4.3 mod_wsgi/2.1-BRANCH configured -- resuming normal operations
The real problem is permissions on the Apache log directory. It is necessary to tell Apache/mod\_wsgi to use an alternate location for the UNIX sockets used to communicate with the daemon processes. See:
[http://code.google.com/p/modwsgi/wiki/ConfigurationIssues#Location\_Of\_UNIX\_Sockets](http://code.google.com/p/modwsgi/wiki/ConfigurationIssues#Location%5FOf%5FUNIX%5FSockets) |
scons : src and include dirs | 302,835 | 5 | 2008-11-19T18:13:08Z | 302,984 | 8 | 2008-11-19T19:05:01Z | [
"python",
"scons"
] | can someone give a scons config file which allows the following structure
```
toplevel/
/src - .cc files
/include .h files
```
at top level I want the o and final exe. | Here is one example of Sconscript file
```
env=Environment(CPPPATH='/usr/include/glib-2.0/:/usr/lib/glib-2.0/include:inc',
CPPDEFINES=[],
LIBS=['glib-2.0'])
env.Program('runme', Glob('src/*.c'))
```
(The environment line is not really necessary for the example, but I have it to include the non-standard glib header path, and left it there so you can see how to add extra includes and defines.)
The source files are in src directory and header files in inc directory. You run scons from the base directory and the output file is also generated in the same directory. |
Best Django 'CMS' component for integration into existing site | 302,983 | 13 | 2008-11-19T19:04:55Z | 1,257,116 | 7 | 2009-08-10T20:46:34Z | [
"python",
"django",
"content-management-system"
] | So I have a relatively large (enough code that it would be easier to write this CMS component from scratch than to rewrite the app to fit into a CMS) webapp that I want to add basic Page/Menu/Media management too, I've seen several Django pluggables addressing this issue, but many seem targeted as full CMS platforms.
Does anyone know of a plugin that can easily integrate with existing templates/views and still sports a powerful/comprehensive admin interface? | If you do not necessarily want a finished CMS with a fixed feature set, but rather tools on top of Django to build your own CMS I recommend looking into FeinCMS. It follows a toolkit philosophy instead of trying to solve everything and (too) often failing to do so.
<http://github.com/matthiask/feincms/tree/master>
Disclaimer: It is my brainchild, and the result of too many frustrating experiences trying to customize another CMS for the needs of my customers. |
Best Django 'CMS' component for integration into existing site | 302,983 | 13 | 2008-11-19T19:04:55Z | 3,892,818 | 25 | 2010-10-08T17:16:32Z | [
"python",
"django",
"content-management-system"
] | So I have a relatively large (enough code that it would be easier to write this CMS component from scratch than to rewrite the app to fit into a CMS) webapp that I want to add basic Page/Menu/Media management too, I've seen several Django pluggables addressing this issue, but many seem targeted as full CMS platforms.
Does anyone know of a plugin that can easily integrate with existing templates/views and still sports a powerful/comprehensive admin interface? | I have worked with all three (and more) and they are all built for different use cases IMHO. I would agree that these are the top-teir choices.
The grid comparison at djangopluggables.com certainly can make evaluating each of these easier.
**django-cms** is the most full-featured and is something you could actually hand over to clients without being irresponsible. Even though it has features for integrating other apps, it doesn't have the extensibility/integration of FeinCMS or the simplicity of django-page-cms. That being said, I think the consensus is that this is the best Open Source CMS for Django. However, its docs are a little lacking. ***update***: *I have been told that integrating apps into DjangoCMS 2.1 has been improved.*
**FeinCMS** - Is a great set of tools for combining and building CMS functionality into your own apps. It's not "out of the box" at all, which means that you can integrate it however you want. It doesn't want to take over your urls.py or control how you route pages. It's probably a prototype for the next-generation of truly pluggable apps in Django. - We are moving from django-page-cms to FeinCMS because our primary model is high-volume eCommerce and I have custom content-types I want to integrate that aren't blogs or flash. Good documentation and support as well.
**Django-page-cms** - Is great if you want to just have some "About Us" pages around your principal application. Its menu system is not truly hierarchical and building your page presentation is up to you. But it's very simple, unobtrusive, and very easy to slap into your app and get a navigation going that clients can manage, or even for yourself. It has no docs that I know of, but you won't really need any. Read the code and you will get it all in 30 minutes or less.
*update*
**Mezzanine** - Is a very well designed CMS and one that I have finally settled on for most of my client work, mostly because it has an integrated eCommerce portion. But beyond that it has very extensible page models, and a custom admin interface that a client might be willing to use. It also has the best "out of the box" experience i.e. You can have a full fledged site up with one command. |
How do I remove/delete a folder that is not empty with Python? | 303,200 | 484 | 2008-11-19T20:15:38Z | 303,225 | 770 | 2008-11-19T20:23:16Z | [
"python",
"file"
] | I am getting an 'access is denied' error when I attempt to delete a folder that is not empty. I used the following command in my attempt: `os.remove("/folder_name")`.
What is the most effective way of removing/deleting a folder/directory that is not empty? | ```
import shutil
shutil.rmtree('/folder_name')
```
[Standard Library Reference: shutil.rmtree](http://docs.python.org/library/shutil.html#shutil.rmtree). |
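Since the question mentions an "access is denied" error, it's worth noting that on Windows this is often caused by read-only files, which `rmtree` refuses to delete by default. A commonly used recipe — sketched here and demonstrated on a throwaway temp tree rather than the question's `/folder_name` — clears the read-only bit from an `onerror` hook and retries:

```python
import os
import shutil
import stat
import tempfile

def rmtree_force(path):
    """Like shutil.rmtree, but clears the read-only bit and retries when a
    delete fails -- a frequent cause of 'access is denied' on Windows."""
    def onerror(func, subpath, exc_info):
        os.chmod(subpath, stat.S_IWRITE)
        func(subpath)
    shutil.rmtree(path, onerror=onerror)

# Demo on a disposable directory tree:
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'a', 'b'))
open(os.path.join(root, 'a', 'file.txt'), 'w').close()
rmtree_force(root)
```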
How do I remove/delete a folder that is not empty with Python? | 303,200 | 484 | 2008-11-19T20:15:38Z | 303,228 | 90 | 2008-11-19T20:23:41Z | [
"python",
"file"
] | I am getting an 'access is denied' error when I attempt to delete a folder that is not empty. I used the following command in my attempt: `os.remove("/folder_name")`.
What is the most effective way of removing/deleting a folder/directory that is not empty? | From [the python docs](http://docs.python.org/library/os.html#os.walk) on `os.walk()`:
```
# Delete everything reachable from the directory named in 'top',
# assuming there are no symbolic links.
# CAUTION: This is dangerous! For example, if top == '/', it
# could delete all your disk files.
import os
for root, dirs, files in os.walk(top, topdown=False):
for name in files:
os.remove(os.path.join(root, name))
for name in dirs:
os.rmdir(os.path.join(root, name))
``` |
How do I remove/delete a folder that is not empty with Python? | 303,200 | 484 | 2008-11-19T20:15:38Z | 25,172,642 | 40 | 2014-08-07T00:59:15Z | [
"python",
"file"
] | I am getting an 'access is denied' error when I attempt to delete a folder that is not empty. I used the following command in my attempt: `os.remove("/folder_name")`.
What is the most effective way of removing/deleting a folder/directory that is not empty? | ```
import shutil
shutil.rmtree(dest, ignore_errors=True)
``` |
Newbie Python Question about tuples | 303,664 | 5 | 2008-11-19T22:36:37Z | 303,697 | 9 | 2008-11-19T22:51:00Z | [
"python",
"tuples",
"cx-oracle"
] | I am new to Python, and I'm working on writing some database code using the `cx_Oracle` module. In the [cx\_Oracle documentation](http://cx-oracle.sourceforge.net/html/module.html) they have a code example like this:
```
import sys
import cx_Oracle
connection = cx_Oracle.Connection("user/pw@tns")
cursor = connection.cursor()
try:
cursor.execute("select 1 / 0 from dual")
except cx_Oracle.DatabaseError, exc:
error, = exc.args
print >> sys.stderr, "Oracle-Error-Code:", error.code
print >> sys.stderr, "Oracle-Error-Message:", error.message
```
My question has to do with where the "error" object is created. What does the "`, =`" do? I tried searching Python documentation, and search engines don't work very well when you're searching for operators. :-)
I know that the exc.args is a singleton tuple, but I just don't understand the "`, =`" syntax. If I remove the comma, I get the error message, "`AttributeError: 'tuple' object has no attribute 'code'`".
Can someone point me to where this is documented? Thanks!
**EDIT:**
This works without having to unpack the tuple:
```
import sys
import cx_Oracle
connection = cx_Oracle.Connection("user/pw@tns")
cursor = connection.cursor()
try:
cursor.execute("select 1 / 0 from dual")
except cx_Oracle.DatabaseError, exc:
print >> sys.stderr, "Oracle-Error-Code:", exc.args[0].code
print >> sys.stderr, "Oracle-Error-Message:", exc.args[0].message
``` | ```
error, = exc.args
```
This is a case of [sequence unpacking](http://docs.python.org/tutorial/datastructures.html#tuples-and-sequences).
A more readable way to write the same, and the style I personally favor, is:
```
[error] = exc.args
```
There are two bits required to understand the previous example:
1. When the left hand side of an assignment is a recursive sequence of names, the value of the right hand side must be a sequence with the same length, and each item of the RHS value is assigned to the corresponding name in the LHS.
2. A one-item tuple in Python is written `(foo,)`. In most contexts, the parentheses can be omitted. In particular, they can be omitted next to the assignment operator. |
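The unpacking forms discussed above can be checked in a few lines of plain Python, with no cx_Oracle required (the dictionary below stands in for the error object):

```python
# A one-element tuple, like cx_Oracle's exc.args for a single error.
exc_args = ({'code': 1476, 'message': 'divisor is equal to zero'},)

error, = exc_args      # trailing comma makes the left side a 1-tuple pattern
[error2] = exc_args    # equivalent list-style pattern, arguably more readable
(error3,) = exc_args   # parentheses are optional next to the assignment

assert error is error2 is error3
assert error['code'] == 1476
```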
Emacs 23 and iPython | 304,049 | 24 | 2008-11-20T01:12:16Z | 312,741 | 8 | 2008-11-23T17:26:07Z | [
"python",
"emacs",
"ipython",
"emacs23"
] | Is there anyone out there using iPython with emacs 23? The documents on the emacs wiki are a bit of a muddle and I would be interested in hearing from anyone using emacs for Python development. Do you use the download python-mode and ipython.el? What do you recommend? | I got it working quite well with emacs 23. The only open issue is the focus not returning to the python buffer after sending the buffer to the iPython interpreter.
<http://www.emacswiki.org/emacs/PythonMode#toc10>
```
(setq load-path
(append (list nil
"~/.emacs.d/python-mode-1.0/"
"~/.emacs.d/pymacs/"
"~/.emacs.d/ropemacs-0.6"
)
load-path))
(setq py-shell-name "ipython")
(defadvice py-execute-buffer (around python-keep-focus activate)
"return focus to python code buffer"
(save-excursion ad-do-it))
(setenv "PYMACS_PYTHON" "python2.5")
(require 'pymacs)
(pymacs-load "ropemacs" "rope-")
(provide 'python-programming)
``` |
Data Modelling Advice for Blog Tagging system on Google App Engine | 304,117 | 8 | 2008-11-20T01:56:32Z | 307,727 | 7 | 2008-11-21T03:22:38Z | [
"python",
"google-app-engine",
"bigtable",
"data-modeling"
] | Am wondering if anyone might provide some conceptual advice on an efficient way to build a data model to accomplish the simple system described below. Am somewhat new to thinking in a non-relational manner and want to try avoiding any obvious pitfalls. It's my understanding that a basic principal is that "storage is cheap, don't worry about data duplication" as you might in a normalized RDBMS.
What I'd like to model is:
A blog article which can be given 0-n tags. Many blog articles can share the same tag. When retrieving data would like to allow retrieval of all articles matching a tag. In many ways very similar to the approach taken here at stackoverflow.
My normal mindset would be to create a many-to-may relationship between tags and blog articles. However, I'm thinking in the context of GAE that this would be expensive, although I have seen examples of it being done.
Perhaps using a ListProperty containing each tag as part of the article entities, and a second data model to track tags as they're added and deleted? This way no need for any relationships and the ListProperty still allows queries where any list element matching will return results.
Any suggestions on the most efficient way to approach this on GAE? | Thanks to both of you for your suggestions. I've implemented (first iteration) as follows. Not sure if it's the best approach, but it's working.
Class A = Articles. Has a StringListProperty which can be queried on its list elements
Class B = Tags. One entity per tag, also keeps a running count of the total number of articles using each tag.
Data modifications to A are accompanied by maintenance work on B. Thinking that counts being pre-computed is a good approach in a read-heavy environment. |
What's the best way to find the inverse of datetime.isocalendar()? | 304,256 | 36 | 2008-11-20T03:38:52Z | 1,700,069 | 53 | 2009-11-09T09:57:08Z | [
"python",
"datetime"
] | The Python [`datetime.isocalendar()`](http://www.python.org/doc/2.5.2/lib/datetime-datetime.html) method returns a tuple `(ISO_year, ISO_week_number, ISO_weekday)` for the given `datetime` object. Is there a corresponding inverse function? If not, is there an easy way to compute a date given a year, week number and day of the week? | I recently had to solve this problem myself, and came up with this solution:
```
import datetime
def iso_year_start(iso_year):
"The gregorian calendar date of the first day of the given ISO year"
fourth_jan = datetime.date(iso_year, 1, 4)
delta = datetime.timedelta(fourth_jan.isoweekday()-1)
return fourth_jan - delta
def iso_to_gregorian(iso_year, iso_week, iso_day):
"Gregorian calendar date for the given ISO year, week and day"
year_start = iso_year_start(iso_year)
return year_start + datetime.timedelta(days=iso_day-1, weeks=iso_week-1)
```
A few test cases:
```
>>> iso = datetime.date(2005, 1, 1).isocalendar()
>>> iso
(2004, 53, 6)
>>> iso_to_gregorian(*iso)
datetime.date(2005, 1, 1)
>>> iso = datetime.date(2010, 1, 4).isocalendar()
>>> iso
(2010, 1, 1)
>>> iso_to_gregorian(*iso)
datetime.date(2010, 1, 4)
>>> iso = datetime.date(2010, 1, 3).isocalendar()
>>> iso
(2009, 53, 7)
>>> iso_to_gregorian(*iso)
datetime.date(2010, 1, 3)
``` |
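For reference, Python 3.8 later added this inverse to the standard library as `datetime.date.fromisocalendar`, which agrees with the hand-rolled version above on the same test cases:

```python
import datetime

# The built-in inverse of date.isocalendar() on Python 3.8+:
assert datetime.date.fromisocalendar(2004, 53, 6) == datetime.date(2005, 1, 1)
assert datetime.date.fromisocalendar(2010, 1, 1) == datetime.date(2010, 1, 4)
# Round-trip check (slicing keeps this working on 3.9+, where
# isocalendar() returns a named tuple):
assert datetime.date(2010, 1, 3).isocalendar()[:3] == (2009, 53, 7)
```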
The best way to invoke methods in Python class declarations? | 304,655 | 4 | 2008-11-20T08:40:40Z | 304,679 | 14 | 2008-11-20T08:50:18Z | [
"python",
"class",
"declaration",
"static-methods",
"invocation"
] | Say I am declaring a class `C` and a few of the declarations are very similar. I'd like to use a function `f` to reduce code repetition for these declarations. It's possible to just declare and use `f` as usual:
```
>>> class C(object):
... def f(num):
... return '<' + str(num) + '>'
... v = f(9)
... w = f(42)
...
>>> C.v
'<9>'
>>> C.w
'<42>'
>>> C.f(4)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method f() must be called with C instance as first argument (got int instance instead)
```
Oops! I've inadvertently exposed `f` to the outside world, but it doesn't take a `self` argument (and can't for obvious reasons). One possibility would be to `del` the function after I use it:
```
>>> class C(object):
... def f(num):
... return '<' + str(num) + '>'
... v = f(9)
... del f
...
>>> C.v
'<9>'
>>> C.f
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: type object 'C' has no attribute 'f'
```
But what if I want to use `f` again later, after the declaration? It won't do to delete the function. I could make it "private" (i.e., prefix its name with `__`) and give it the `@staticmethod` treatment, but invoking `staticmethod` objects through abnormal channels gets very funky:
```
>>> class C(object):
... @staticmethod
... def __f(num):
... return '<' + str(num) + '>'
... v = __f.__get__(1)(9) # argument to __get__ is ignored...
...
>>> C.v
'<9>'
```
I have to use the above craziness because `staticmethod` objects, which are descriptors, are not themselves callable. I need to recover the function wrapped by the `staticmethod` object before I can call it.
There has got to be a better way to do this. How can I cleanly declare a function in a class, use it during its declaration, and also use it later from within the class? Should I even be doing this? | Quite simply, the solution is that f does not need to be a member of the class. I am assuming that your thought-process has gone through a Javaish language filter causing the mental block. It goes a little something like this:
```
def f(num):
return '<' + str(num) + '>'
class C(object):
v = f(9)
w = f(42)
```
Then when you want to use f again, just use it
```
>>> f(4)
'<4>'
```
I think the moral of the tale is "In Python, you don't *have* to force everything into a class". |
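If, on top of the module-level helper, you still want `f` reachable through the class afterwards, one possible sketch (reusing the names from the answer) is to rebind it as a staticmethod at the end of the class body:

```python
def f(num):
    return '<' + str(num) + '>'

class C(object):
    v = f(9)             # uses the module-level f during class creation
    w = f(42)
    f = staticmethod(f)  # optionally re-expose it on the class afterwards

assert C.v == '<9>'
assert C.f(4) == '<4>'   # callable through the class...
assert f(4) == '<4>'     # ...and still callable at module level
```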
What do I use on linux to make a python program executable | 304,883 | 39 | 2008-11-20T10:27:57Z | 304,896 | 80 | 2008-11-20T10:32:27Z | [
"python",
"linux",
"file-permissions"
] | I just installed a linux system (Kubuntu) and was wondering if there is a program to make python programs executable for linux. | Just put this in the first line of your script :
```
#!/usr/bin/env python
```
Make the file executable with
```
chmod +x myfile.py
```
Execute with
```
./myfile.py
``` |
What do I use on linux to make a python program executable | 304,883 | 39 | 2008-11-20T10:27:57Z | 14,232,463 | 8 | 2013-01-09T09:53:50Z | [
"python",
"linux",
"file-permissions"
] | I just installed a linux system (Kubuntu) and was wondering if there is a program to make python programs executable for linux. | If you want to obtain a stand-alone binary application in Python try to use a tool like py2exe or [PyInstaller](http://www.pyinstaller.org/). |
Correct way to detect sequence parameter? | 305,359 | 17 | 2008-11-20T13:52:44Z | 306,222 | 18 | 2008-11-20T17:52:46Z | [
"python",
"types",
"sequences"
] | I want to write a function that accepts a parameter which can be either a sequence or a single value. The type of value is str, int, etc., but I **don't** want it to be restricted to a hardcoded list.
In other words, I want to know if the parameter X is a sequence or something I have to convert to a sequence to avoid special-casing later. I could do
`type(X) in (list, tuple)`
but there may be other sequence types I'm not aware of, and no common base class.
-N.
**Edit**: See my "answer" below for why most of these answers don't help me. Maybe you have something better to suggest. | As of 2.6, use [abstract base classes](http://docs.python.org/library/abc.html#module-abc).
```
>>> import collections
>>> isinstance([], collections.Sequence)
True
>>> isinstance(0, collections.Sequence)
False
```
Furthermore, ABCs can be customized to account for exceptions, such as not considering strings to be sequences. Here is an example:
```
import abc
import collections
class Atomic(object):
__metaclass__ = abc.ABCMeta
@classmethod
def __subclasshook__(cls, other):
return not issubclass(other, collections.Sequence) or NotImplemented
Atomic.register(basestring)
```
After registration the **Atomic** class can be used with **isinstance** and **issubclass**:
```
assert isinstance("hello", Atomic) == True
```
This is still much better than a hard-coded list, because you only need to register the exceptions to the rule, and external users of the code can register their own.
Note that in **Python 3** the syntax for specifying metaclasses changed and the `basestring` abstract superclass was removed, which requires something like the following to be used instead:
```
class Atomic(metaclass=abc.ABCMeta):
@classmethod
def __subclasshook__(cls, other):
return not issubclass(other, collections.Sequence) or NotImplemented
Atomic.register(str)
```
If desired, it's possible to write code which is compatible with both Python 2.6+ *and* 3.x, but doing so requires a slightly more complicated technique which dynamically creates the needed abstract base class, thereby avoiding syntax errors due to the metaclass syntax difference. This is essentially the same as what Benjamin Peterson's [six](http://pythonhosted.org/six/) module's [`with_metaclass()`](http://pythonhosted.org/six/#six.with_metaclass) function does.
```
class _AtomicBase(object):
@classmethod
def __subclasshook__(cls, other):
return not issubclass(other, collections.Sequence) or NotImplemented
class Atomic(abc.ABCMeta("NewMeta", (_AtomicBase,), {})):
pass
try:
unicode = unicode
except NameError: # 'unicode' is undefined, assume Python >= 3
Atomic.register(str) # str includes unicode in Py3, make both Atomic
Atomic.register(bytes) # bytes will also be considered Atomic (optional)
else:
# basestring is the abstract superclass of both str and unicode types
Atomic.register(basestring) # make both types of strings Atomic
```
In versions before 2.6, there are type checkers in the `operator` module.
```
>>> import operator
>>> operator.isSequenceType([])
True
>>> operator.isSequenceType(0)
False
``` |
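On current Pythons the ABCs live in `collections.abc` (the `collections.Sequence` alias was removed in 3.10). A small sketch of the normalize-to-sequence pattern the question is after, with strings treated as atomic values:

```python
from collections.abc import Sequence

def as_sequence(x):
    """Pass real sequences through; wrap lone values (and strings) in a list."""
    if isinstance(x, Sequence) and not isinstance(x, (str, bytes)):
        return x
    return [x]

assert as_sequence([1, 2]) == [1, 2]   # lists pass through unchanged
assert as_sequence((1, 2)) == (1, 2)   # so do tuples
assert as_sequence('hi') == ['hi']     # strings are treated as one value
assert as_sequence(0) == [0]           # scalars get wrapped
```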
List of tables, db schema, dump etc using the Python sqlite3 API | 305,378 | 64 | 2008-11-20T14:00:23Z | 305,395 | 12 | 2008-11-20T14:07:20Z | [
"python",
"api",
"sqlite",
"dump"
] | For some reason I can't find a way to get the equivalents of sqlite's interactive shell commands:
```
.tables
.dump
```
using the Python sqlite3 API.
Is there anything like that? | I'm not familiar with the Python API but you can always use
```
SELECT * FROM sqlite_master;
``` |
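From the Python side, the same query can be run through the stdlib `sqlite3` module; a self-contained sketch against an in-memory database:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE job (id INTEGER PRIMARY KEY, data VARCHAR)")

# Equivalent of the shell's .tables: list the user table names.
tables = [row[0] for row in
          con.execute("SELECT name FROM sqlite_master WHERE type = 'table'")]
assert tables == ['job']
```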
List of tables, db schema, dump etc using the Python sqlite3 API | 305,378 | 64 | 2008-11-20T14:00:23Z | 305,639 | 58 | 2008-11-20T15:26:39Z | [
"python",
"api",
"sqlite",
"dump"
] | For some reason I can't find a way to get the equivalents of sqlite's interactive shell commands:
```
.tables
.dump
```
using the Python sqlite3 API.
Is there anything like that? | You can fetch the list of tables and schemata by querying the SQLITE\_MASTER table:
```
sqlite> .tab
job snmptarget t1 t2 t3
sqlite> select name from sqlite_master where type = 'table';
job
t1
t2
snmptarget
t3
sqlite> .schema job
CREATE TABLE job (
id INTEGER PRIMARY KEY,
data VARCHAR
);
sqlite> select sql from sqlite_master where type = 'table' and name = 'job';
CREATE TABLE job (
id INTEGER PRIMARY KEY,
data VARCHAR
)
``` |
List of tables, db schema, dump etc using the Python sqlite3 API | 305,378 | 64 | 2008-11-20T14:00:23Z | 601,222 | 14 | 2009-03-02T03:47:31Z | [
"python",
"api",
"sqlite",
"dump"
] | For some reason I can't find a way to get the equivalents of sqlite's interactive shell commands:
```
.tables
.dump
```
using the Python sqlite3 API.
Is there anything like that? | Apparently the version of sqlite3 included in Python 2.6 has this ability: <http://docs.python.org/dev/library/sqlite3.html>
```
# Convert file existing_db.db to SQL dump file dump.sql
import sqlite3, os
con = sqlite3.connect('existing_db.db')
with open('dump.sql', 'w') as f:
for line in con.iterdump():
f.write('%s\n' % line)
``` |
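A minimal in-memory check of what `iterdump()` emits (the equivalent of the shell's `.dump`), without touching the filesystem:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE job (id INTEGER PRIMARY KEY, data VARCHAR)")
con.execute("INSERT INTO job (data) VALUES ('hello')")

# iterdump() yields the SQL statements needed to recreate the database.
dump = '\n'.join(con.iterdump())
assert 'CREATE TABLE job' in dump   # schema is included...
assert 'INSERT INTO' in dump        # ...and so are the rows
```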
List of tables, db schema, dump etc using the Python sqlite3 API | 305,378 | 64 | 2008-11-20T14:00:23Z | 10,746,045 | 94 | 2012-05-24T22:15:42Z | [
"python",
"api",
"sqlite",
"dump"
] | For some reason I can't find a way to get the equivalents of sqlite's interactive shell commands:
```
.tables
.dump
```
using the Python sqlite3 API.
Is there anything like that? | In Python:
```
con = sqlite3.connect('database.db')
cursor = con.cursor()
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
print(cursor.fetchall())
```
Watch out for my other [answer](http://stackoverflow.com/a/33100538/236830). There is a much faster way using pandas. |
List of tables, db schema, dump etc using the Python sqlite3 API | 305,378 | 64 | 2008-11-20T14:00:23Z | 33,100,538 | 12 | 2015-10-13T10:38:59Z | [
"python",
"api",
"sqlite",
"dump"
] | For some reason I can't find a way to get the equivalents of sqlite's interactive shell commands:
```
.tables
.dump
```
using the Python sqlite3 API.
Is there anything like that? | The FASTEST way of doing this in python is using Pandas (version 0.16 and up).
Dump one table:
```
db = sqlite3.connect('database.db')
table = pd.read_sql_query("SELECT * from table_name", db)
table.to_csv(table_name + '.csv', index_label='index')
```
Dump all tables:
```
import sqlite3
import pandas as pd
def to_csv():
db = sqlite3.connect('database.db')
cursor = db.cursor()
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
tables = cursor.fetchall()
for table_name in tables:
table_name = table_name[0]
table = pd.read_sql_query("SELECT * from %s" % table_name, db)
table.to_csv(table_name + '.csv', index_label='index')
``` |
In Python, how can you get the name of a member function's class? | 305,924 | 7 | 2008-11-20T16:30:30Z | 305,980 | 10 | 2008-11-20T16:42:42Z | [
"python",
"reflection",
"metaprogramming"
] | I have a function that takes another function as a parameter. If the function is a member of a class, I need to find the name of that class. E.g.
```
def analyser(testFunc):
print testFunc.__name__, 'belongs to the class, ...
```
I thought
```
testFunc.__class__
```
would solve my problems, but that just tells me that testFunc is a function. | ```
testFunc.im_class
```
<https://docs.python.org/reference/datamodel.html#the-standard-type-hierarchy>
> `im_class` is the class of `im_self` for
> bound methods or the class that asked
> for the method for unbound methods |
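Note that `im_class` exists only in Python 2. In Python 3 the equivalent information is reachable through a bound method's `__self__` and the function's `__qualname__` — a hedged sketch:

```python
class C:
    def f(self):
        pass

bound = C().f
# Class of the instance the method is bound to
print(bound.__self__.__class__.__name__)  # C
# Dotted path of the function within its module
print(C.f.__qualname__)                   # C.f
```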
Python decorator makes function forget that it belongs to a class | 306,130 | 41 | 2008-11-20T17:24:39Z | 306,277 | 36 | 2008-11-20T18:08:33Z | [
"python",
"reflection",
"metaprogramming"
] | I am trying to write a decorator to do logging:
```
def logger(myFunc):
def new(*args, **keyargs):
print 'Entering %s.%s' % (myFunc.im_class.__name__, myFunc.__name__)
return myFunc(*args, **keyargs)
return new
class C(object):
@logger
def f():
pass
C().f()
```
I would like this to print:
```
Entering C.f
```
but instead I get this error message:
```
AttributeError: 'function' object has no attribute 'im_class'
```
Presumably this is something to do with the scope of 'myFunc' inside 'logger', but I've no idea what. | Claudiu's answer is correct, but you can also cheat by getting the class name off of the `self` argument. This will give misleading log statements in cases of inheritance, but will tell you the class of the object whose method is being called. For example:
```
from functools import wraps # use this to preserve function signatures and docstrings
def logger(func):
@wraps(func)
def with_logging(*args, **kwargs):
print "Entering %s.%s" % (args[0].__class__.__name__, func.__name__)
return func(*args, **kwargs)
return with_logging
class C(object):
@logger
def f(self):
pass
C().f()
```
As I said, this won't work properly in cases where you've inherited a function from a parent class; in this case you might say
```
class B(C):
pass
b = B()
b.f()
```
and get the message `Entering B.f` where you actually want to get the message `Entering C.f` since that's the correct class. On the other hand, this might be acceptable, in which case I'd recommend this approach over Claudiu's suggestion. |
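A hedged Python 3 translation of this decorator (`print` becomes a function; the approach is otherwise unchanged):

```python
from functools import wraps  # preserve the wrapped function's name and docstring

def logger(func):
    @wraps(func)
    def with_logging(*args, **kwargs):
        # args[0] is self for ordinary instance methods
        print("Entering %s.%s" % (args[0].__class__.__name__, func.__name__))
        return func(*args, **kwargs)
    return with_logging

class C:
    @logger
    def f(self):
        return 42

print(C().f())  # prints "Entering C.f", then 42
```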
Python decorator makes function forget that it belongs to a class | 306,130 | 41 | 2008-11-20T17:24:39Z | 307,263 | 18 | 2008-11-20T23:16:51Z | [
"python",
"reflection",
"metaprogramming"
] | I am trying to write a decorator to do logging:
```
def logger(myFunc):
def new(*args, **keyargs):
print 'Entering %s.%s' % (myFunc.im_class.__name__, myFunc.__name__)
return myFunc(*args, **keyargs)
return new
class C(object):
@logger
def f():
pass
C().f()
```
I would like this to print:
```
Entering C.f
```
but instead I get this error message:
```
AttributeError: 'function' object has no attribute 'im_class'
```
Presumably this is something to do with the scope of 'myFunc' inside 'logger', but I've no idea what. | Functions only become methods at runtime. That is, when you get `C.f` you get a bound function (and `C.f.im_class is C`). At the time your function is defined it is just a plain function, it is not bound to any class. This unbound and disassociated function is what is decorated by logger.
`self.__class__.__name__` will give you the name of the class, but you can also use descriptors to accomplish this in a somewhat more general way. This pattern is described [in a blog post on Decorators and Descriptors](http://blog.ianbicking.org/2008/10/24/decorators-and-descriptors/), and an implementation of your logger decorator in particular would look like:
```
class logger(object):
def __init__(self, func):
self.func = func
def __get__(self, obj, type=None):
return self.__class__(self.func.__get__(obj, type))
def __call__(self, *args, **kw):
print 'Entering %s' % self.func
return self.func(*args, **kw)
class C(object):
@logger
def f(self, x, y):
return x+y
C().f(1, 2)
# => Entering <bound method C.f of <__main__.C object at 0x...>>
```
Obviously the output can be improved (by using, for example, `getattr(self.func, 'im_class', None)`), but this general pattern will work for both methods and functions. However it will *not* work for old-style classes (but just don't use those ;) |
Python decorator makes function forget that it belongs to a class | 306,130 | 41 | 2008-11-20T17:24:39Z | 3,412,743 | 13 | 2010-08-05T07:54:48Z | [
"python",
"reflection",
"metaprogramming"
] | I am trying to write a decorator to do logging:
```
def logger(myFunc):
def new(*args, **keyargs):
print 'Entering %s.%s' % (myFunc.im_class.__name__, myFunc.__name__)
return myFunc(*args, **keyargs)
return new
class C(object):
@logger
def f():
pass
C().f()
```
I would like this to print:
```
Entering C.f
```
but instead I get this error message:
```
AttributeError: 'function' object has no attribute 'im_class'
```
Presumably this is something to do with the scope of 'myFunc' inside 'logger', but I've no idea what. | Ideas proposed here are excellent, but have some disadvantages:
1. `inspect.getouterframes` and `args[0].__class__.__name__` are not suitable for plain functions and static-methods.
2. `__get__` must be in a class, which is rejected by `@wraps`.
3. `@wraps` itself should be hiding traces better.
So, I've combined some ideas from this page, links, docs and my own head,
and finally found a solution, that lacks all three disadvantages above.
As a result, `method_decorator`:
* Knows the class the decorated method is bound to.
* Hides decorator traces by answering to system attributes more correctly than `functools.wraps()` does.
* Is covered with unit-tests for bound and unbound instance-methods, class-methods, static-methods, and plain functions.
Usage:
```
pip install method_decorator
from method_decorator import method_decorator
class my_decorator(method_decorator):
# ...
```
See [full unit-tests for usage details](https://github.com/denis-ryzhkov/method_decorator/blob/master/method_decorator.py#L48).
And here is just the code of the `method_decorator` class:
```
class method_decorator(object):
def __init__(self, func, obj=None, cls=None, method_type='function'):
# These defaults are OK for plain functions
# and will be changed by __get__() for methods once a method is dot-referenced.
self.func, self.obj, self.cls, self.method_type = func, obj, cls, method_type
def __get__(self, obj=None, cls=None):
# It is executed when decorated func is referenced as a method: cls.func or obj.func.
if self.obj == obj and self.cls == cls:
return self # Use the same instance that is already processed by previous call to this __get__().
method_type = (
'staticmethod' if isinstance(self.func, staticmethod) else
'classmethod' if isinstance(self.func, classmethod) else
'instancemethod'
# No branch for plain function - correct method_type for it is already set in __init__() defaults.
)
return object.__getattribute__(self, '__class__')( # Use specialized method_decorator (or descendant) instance, don't change current instance attributes - it leads to conflicts.
self.func.__get__(obj, cls), obj, cls, method_type) # Use bound or unbound method with this underlying func.
def __call__(self, *args, **kwargs):
return self.func(*args, **kwargs)
def __getattribute__(self, attr_name): # Hiding traces of decoration.
if attr_name in ('__init__', '__get__', '__call__', '__getattribute__', 'func', 'obj', 'cls', 'method_type'): # Our known names. '__class__' is not included because is used only with explicit object.__getattribute__().
return object.__getattribute__(self, attr_name) # Stopping recursion.
# All other attr_names, including auto-defined by system in self, are searched in decorated self.func, e.g.: __module__, __class__, __name__, __doc__, im_*, func_*, etc.
return getattr(self.func, attr_name) # Raises correct AttributeError if name is not found in decorated self.func.
def __repr__(self): # Special case: __repr__ ignores __getattribute__.
return self.func.__repr__()
``` |
"is" operator behaves unexpectedly with integers | 306,313 | 264 | 2008-11-20T18:21:16Z | 306,347 | 18 | 2008-11-20T18:29:48Z | [
"python",
"int",
"comparison",
"operators",
"identity"
] | Why does the following behave unexpectedly in Python?
```
>>> a = 256
>>> b = 256
>>> a is b
True # This is an expected result
>>> a = 257
>>> b = 257
>>> a is b
False # What happened here? Why is this False?
>>> 257 is 257
True # Yet the literal numbers compare properly
```
I am using Python 2.5.2. Trying some different versions of Python, it appears that Python 2.3.3 shows the above behaviour between 99 and 100.
Based on the above, I can hypothesize that Python is internally implemented such that "small" integers are stored in a different way than larger integers and the `is` operator can tell the difference. Why the leaky abstraction? What is a better way of comparing two arbitrary objects to see whether they are the same when I don't know in advance whether they are numbers or not? | I think your hypothesis is correct. Experiment with id (identity of object)...
```
In [1]: id(255)
Out[1]: 146349024
In [2]: id(255)
Out[2]: 146349024
In [3]: id(257)
Out[3]: 146802752
In [4]: id(257)
Out[4]: 148993740
In [5]: a=255
In [6]: b=255
In [7]: c=257
In [8]: d=257
In [9]: id(a), id(b), id(c), id(d)
Out[9]: (146349024, 146349024, 146783024, 146804020)
```
It appears that small numbers (CPython caches integers from -5 through 256) are shared singleton objects, and anything above is treated differently! |
"is" operator behaves unexpectedly with integers | 306,313 | 264 | 2008-11-20T18:21:16Z | 306,353 | 222 | 2008-11-20T18:30:20Z | [
"python",
"int",
"comparison",
"operators",
"identity"
] | Why does the following behave unexpectedly in Python?
```
>>> a = 256
>>> b = 256
>>> a is b
True # This is an expected result
>>> a = 257
>>> b = 257
>>> a is b
False # What happened here? Why is this False?
>>> 257 is 257
True # Yet the literal numbers compare properly
```
I am using Python 2.5.2. Trying some different versions of Python, it appears that Python 2.3.3 shows the above behaviour between 99 and 100.
Based on the above, I can hypothesize that Python is internally implemented such that "small" integers are stored in a different way than larger integers and the `is` operator can tell the difference. Why the leaky abstraction? What is a better way of comparing two arbitrary objects to see whether they are the same when I don't know in advance whether they are numbers or not? | Take a look at this:
```
>>> a = 256
>>> b = 256
>>> id(a)
9987148
>>> id(b)
9987148
>>> a = 257
>>> b = 257
>>> id(a)
11662816
>>> id(b)
11662828
```
EDIT: Here's what I found in the Python 2 documentation, ["Plain Integer Objects"](https://docs.python.org/2/c-api/int.html) (It's the same for [Python 3](https://docs.python.org/3/c-api/long.html)):
> The current implementation keeps an
> array of integer objects for all
> integers between -5 and 256, when you
> create an int in that range you
> actually just get back a reference to
> the existing object. So it should be
> possible to change the value of 1. I
> suspect the behaviour of Python in
> this case is undefined. :-) |
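A small sketch of this cache in action. The values are built with `int(...)` because two equal literals in the same chunk of code are often folded into one constant, which would hide the effect; the exact boundary (-5 through 256) is a CPython implementation detail:

```python
a = int("256")
b = int("256")
c = int("257")
d = int("257")

print(a is b, a == b)  # True True  -- 256 comes from CPython's small-int cache
print(c is d, c == d)  # False True -- 257 is created afresh each time in CPython
```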
"is" operator behaves unexpectedly with integers | 306,313 | 264 | 2008-11-20T18:21:16Z | 306,377 | 46 | 2008-11-20T18:36:06Z | [
"python",
"int",
"comparison",
"operators",
"identity"
] | Why does the following behave unexpectedly in Python?
```
>>> a = 256
>>> b = 256
>>> a is b
True # This is an expected result
>>> a = 257
>>> b = 257
>>> a is b
False # What happened here? Why is this False?
>>> 257 is 257
True # Yet the literal numbers compare properly
```
I am using Python 2.5.2. Trying some different versions of Python, it appears that Python 2.3.3 shows the above behaviour between 99 and 100.
Based on the above, I can hypothesize that Python is internally implemented such that "small" integers are stored in a different way than larger integers and the `is` operator can tell the difference. Why the leaky abstraction? What is a better way of comparing two arbitrary objects to see whether they are the same when I don't know in advance whether they are numbers or not? | It depends on whether you're looking to see if 2 things are equal, or the same object.
"is" checks to see if they are the same object, not just equal. The small ints are probably pointing to the same memory location for space efficiency
```
In [29]: a = 3
In [30]: b = 3
In [31]: id(a)
Out[31]: 500729144
In [32]: id(b)
Out[32]: 500729144
```
You should use "==" to compare equality of arbitrary objects. You can specify the behavior with the `__eq__` and `__ne__` methods. |
"is" operator behaves unexpectedly with integers | 306,313 | 264 | 2008-11-20T18:21:16Z | 306,603 | 29 | 2008-11-20T19:50:11Z | [
"python",
"int",
"comparison",
"operators",
"identity"
] | Why does the following behave unexpectedly in Python?
```
>>> a = 256
>>> b = 256
>>> a is b
True # This is an expected result
>>> a = 257
>>> b = 257
>>> a is b
False # What happened here? Why is this False?
>>> 257 is 257
True # Yet the literal numbers compare properly
```
I am using Python 2.5.2. Trying some different versions of Python, it appears that Python 2.3.3 shows the above behaviour between 99 and 100.
Based on the above, I can hypothesize that Python is internally implemented such that "small" integers are stored in a different way than larger integers and the `is` operator can tell the difference. Why the leaky abstraction? What is a better way of comparing two arbitrary objects to see whether they are the same when I don't know in advance whether they are numbers or not? | As you can check in [source file *intobject.c*](http://svn.python.org/projects/python/trunk/Objects/intobject.c), Python caches small integers for efficiency. Every time you create a reference to a small integer, you are referring to the cached small integer, not a new object. 257 is not a small integer, so it is created as a different object.
It is better to use "==" for that purpose. |
"is" operator behaves unexpectedly with integers | 306,313 | 264 | 2008-11-20T18:21:16Z | 307,594 | 10 | 2008-11-21T01:58:53Z | [
"python",
"int",
"comparison",
"operators",
"identity"
] | Why does the following behave unexpectedly in Python?
```
>>> a = 256
>>> b = 256
>>> a is b
True # This is an expected result
>>> a = 257
>>> b = 257
>>> a is b
False # What happened here? Why is this False?
>>> 257 is 257
True # Yet the literal numbers compare properly
```
I am using Python 2.5.2. Trying some different versions of Python, it appears that Python 2.3.3 shows the above behaviour between 99 and 100.
Based on the above, I can hypothesize that Python is internally implemented such that "small" integers are stored in a different way than larger integers and the `is` operator can tell the difference. Why the leaky abstraction? What is a better way of comparing two arbitrary objects to see whether they are the same when I don't know in advance whether they are numbers or not? | For immutable value objects, like ints, strings or datetimes, object identity is not especially useful. It's better to think about equality. Identity is essentially an implementation detail for value objects - since they're immutable, there's no effective difference between having multiple refs to the same object or multiple objects. |
"is" operator behaves unexpectedly with integers | 306,313 | 264 | 2008-11-20T18:21:16Z | 15,522,094 | 7 | 2013-03-20T11:20:46Z | [
"python",
"int",
"comparison",
"operators",
"identity"
] | Why does the following behave unexpectedly in Python?
```
>>> a = 256
>>> b = 256
>>> a is b
True # This is an expected result
>>> a = 257
>>> b = 257
>>> a is b
False # What happened here? Why is this False?
>>> 257 is 257
True # Yet the literal numbers compare properly
```
I am using Python 2.5.2. Trying some different versions of Python, it appears that Python 2.3.3 shows the above behaviour between 99 and 100.
Based on the above, I can hypothesize that Python is internally implemented such that "small" integers are stored in a different way than larger integers and the `is` operator can tell the difference. Why the leaky abstraction? What is a better way of comparing two arbitrary objects to see whether they are the same when I don't know in advance whether they are numbers or not? | `is` *is* the identity equality operator (functioning like `id(a) == id(b)`); it's just that two equal numbers aren't necessarily the same object. For performance reasons some small integers happen to be [memoized](http://en.wikipedia.org/wiki/Memoization) so they will tend to be the same (this can be done since they are immutable).
[PHP's](http://en.wikipedia.org/wiki/PHP) `===` operator, on the other hand, is described as checking equality and type: `x == y and type(x) == type(y)` as per Paulo Freitas' comment. This will suffice for common numbers, but differ from `is` for classes that define `__eq__` in an absurd manner:
```
class Unequal:
def __eq__(self, other):
return False
```
PHP apparently allows the same thing for "built-in" classes (which I take to mean implemented at C level, not in PHP). A slightly less absurd use might be a timer object, which has a different value every time it's used as a number. Quite why you'd want to emulate Visual Basic's `Now` instead of showing that it is an evaluation with `time.time()` I don't know.
Greg Hewgill (OP) made one clarifying comment "My goal is to compare object identity, rather than equality of value. Except for numbers, where I want to treat object identity the same as equality of value."
This would have yet another answer, as we have to categorize things as numbers or not, to select whether we compare with `==` or `is`. [CPython](http://en.wikipedia.org/wiki/CPython) defines the [number protocol](http://docs.python.org/2/c-api/number.html), including PyNumber\_Check, but this is not accessible from Python itself.
We could try to use `isinstance` with all the number types we know of, but this would inevitably be incomplete. The types module contains a StringTypes list but no NumberTypes. Since Python 2.6, the built in number classes have a base class [`numbers.Number`](https://docs.python.org/2/library/numbers.html#numbers.Number), but it has the same problem:
```
import numpy, numbers
assert not issubclass(numpy.int16,numbers.Number)
assert issubclass(int,numbers.Number)
```
By the way, [NumPy](http://en.wikipedia.org/wiki/NumPy) will produce separate instances of low numbers.
I don't actually know an answer to this variant of the question. I suppose one could theoretically use ctypes to call `PyNumber_Check`, but even that function [has been debated](http://mail.python.org/pipermail/python-dev/2002-November/030237.html), and it's certainly not portable. We'll just have to be less particular about what we test for now.
In the end, this issue stems from Python not originally having a type tree with predicates like [Scheme's](http://en.wikipedia.org/wiki/Scheme_%28programming_language%29) `number?`, or [Haskell's](http://en.wikipedia.org/wiki/Haskell_%28programming_language%29) [type class](https://en.wikipedia.org/wiki/Type_class) [Num](http://www.haskell.org/ghc/docs/7.4-latest/html/libraries/base-4.5.1.0/Prelude.html#t:Num). `is` checks object identity, not value equality. PHP has a colorful history as well, where `===` apparently behaves as `is` only on objects [in PHP5, but not PHP4](http://www.php.net/manual/en/language.operators.comparison.php). Such are the growing pains of moving across languages (including versions of one). |
"is" operator behaves unexpectedly with integers | 306,313 | 264 | 2008-11-20T18:21:16Z | 28,864,111 | 19 | 2015-03-04T20:20:12Z | [
"python",
"int",
"comparison",
"operators",
"identity"
] | Why does the following behave unexpectedly in Python?
```
>>> a = 256
>>> b = 256
>>> a is b
True # This is an expected result
>>> a = 257
>>> b = 257
>>> a is b
False # What happened here? Why is this False?
>>> 257 is 257
True # Yet the literal numbers compare properly
```
I am using Python 2.5.2. Trying some different versions of Python, it appears that Python 2.3.3 shows the above behaviour between 99 and 100.
Based on the above, I can hypothesize that Python is internally implemented such that "small" integers are stored in a different way than larger integers and the `is` operator can tell the difference. Why the leaky abstraction? What is a better way of comparing two arbitrary objects to see whether they are the same when I don't know in advance whether they are numbers or not? | > # Python's "is" operator behaves unexpectedly with integers?
Let me emphasize: ***Do not use `is` to compare integers.***
This isn't behavior you should have any expectations about.
Instead, use `==` and `!=` to compare for equality and inequality, respectively.
To know this, you need to know the following.
First, what does `is` do? It is a comparison operator. From the [documentation](https://docs.python.org/2/reference/expressions.html#not-in):
> The operators `is` and `is not` test for object identity: `x is y` is true
> if and only if x and y are the same object. `x is not y` yields the
> inverse truth value.
And so the following are equivalent.
```
>>> a is b
>>> id(a) == id(b)
```
From the [documentation](https://docs.python.org/library/functions.html#id):
> **`id`**
> Return the "identity" of an object. This is an integer (or long
> integer) which is guaranteed to be unique and constant for this object
> during its lifetime. Two objects with non-overlapping lifetimes may
> have the same `id()` value.
Note that the fact that the id of an object in CPython (the reference implementation of Python) is the location in memory is an implementation detail. Other implementations of Python (such as Jython or IronPython) could easily have a different implementation for `id`.
So what is the use-case for `is`? [PEP8 describes](https://www.python.org/dev/peps/pep-0008/#programming-recommendations):
> Comparisons to singletons like None should always be done with is or
> is not, never the equality operators.
# The Question
You ask, and state, the following question (with code):
> **Why does the following behave unexpectedly in Python?**
>
> ```
> >>> a = 256
> >>> b = 256
> >>> a is b
> True # This is an expected result
> ```
It is *not* an expected result. Why is it expected? It only means that the integers valued at `256` referenced by both `a` and `b` are the same instance of integer. Integers are immutable in Python, thus they cannot change. This should have no impact on any code. It should not be expected. It is merely an implementation detail.
But perhaps we should be glad that there is not a new separate instance in memory every time we state a value equals 256.
> ```
> >>> a = 257
> >>> b = 257
> >>> a is b
> False # What happened here? Why is this False?
> ```
Looks like we now have two separate instances of integers with the value of `257` in memory. Since integers are immutable, this wastes memory. Let's hope we're not wasting a lot of it. We're probably not. But this behavior is not guaranteed.
> ```
> >>> 257 is 257
> True # Yet the literal numbers compare properly
> ```
Well, this looks like your particular implementation of Python is trying to be smart and not creating redundantly valued integers in memory unless it has to. You seem to indicate you are using the referent implementation of Python, which is CPython. Good for CPython.
It might be even better if CPython could do this globally, if it could do so cheaply (as there would be a cost in the lookup); perhaps another implementation might.
But as for impact on code, you should not care if an integer is a particular instance of an integer. You should only care what the value of that instance is, and you would use the normal comparison operators for that, i.e. `==`.
## What `is` does
`is` checks that the `id` of two objects are the same. In CPython, the `id` is the location in memory, but it could be some other uniquely identifying number in another implementation. To restate this with code:
```
>>> a is b
```
is the same as
```
>>> id(a) == id(b)
```
## Why would we want to use `is` then?
This can be a very fast check relative to say, checking if two very long strings are equal in value. But since it applies to the uniqueness of the object, we thus have limited use-cases for it. In fact, we mostly want to use it to check for `None`, which is a singleton (a sole instance existing in one place in memory). We might create other singletons if there is potential to conflate them, which we might check with `is`, but these are relatively rare. Here's an example (will work in Python 2 and 3) e.g.
```
SENTINEL_SINGLETON = object() # this will only be created one time.
def foo(keyword_argument=None):
if keyword_argument is None:
print('no argument given to foo')
bar()
bar(keyword_argument)
bar('baz')
def bar(keyword_argument=SENTINEL_SINGLETON):
# SENTINEL_SINGLETON tells us if we were not passed anything
# as None is a legitimate potential argument we could get.
if keyword_argument is SENTINEL_SINGLETON:
print('no argument given to bar')
else:
print('argument to bar: {0}'.format(keyword_argument))
foo()
```
Which prints:
```
no argument given to foo
no argument given to bar
argument to bar: None
argument to bar: baz
```
And so we see, with `is` and a sentinel, we are able to differentiate between when `bar` is called with no arguments and when it is called with `None`. These are the primary use-cases for `is` - do *not* use it to test for equality of integers, strings, tuples, or other things like these. |
How do I randomly select an item from a list using Python? | 306,400 | 824 | 2008-11-20T18:42:21Z | 306,417 | 1,462 | 2008-11-20T18:46:39Z | [
"python",
"list",
"random"
] | Assume I have the following list:
```
foo = ['a', 'b', 'c', 'd', 'e']
```
What is the simplest way to retrieve an item at random from this list? | Use [`random.choice`](https://docs.python.org/2/library/random.html#random.choice):
```
import random
foo = ['a', 'b', 'c', 'd', 'e']
print(random.choice(foo))
``` |
How do I randomly select an item from a list using Python? | 306,400 | 824 | 2008-11-20T18:42:21Z | 12,373,205 | 63 | 2012-09-11T15:31:16Z | [
"python",
"list",
"random"
] | Assume I have the following list:
```
foo = ['a', 'b', 'c', 'd', 'e']
```
What is the simplest way to retrieve an item at random from this list? | In case you also need the index:
```
foo = ['a', 'b', 'c', 'd', 'e']
from random import randrange
random_index = randrange(0,len(foo))
print foo[random_index]
``` |
How do I randomly select an item from a list using Python? | 306,400 | 824 | 2008-11-20T18:42:21Z | 16,514,203 | 14 | 2013-05-13T02:47:36Z | [
"python",
"list",
"random"
] | Assume I have the following list:
```
foo = ['a', 'b', 'c', 'd', 'e']
```
What is the simplest way to retrieve an item at random from this list? | I propose a script that removes randomly picked items from a list until it is empty:
Maintain a `set` and remove a randomly picked element (with `choice`) until it is empty.
```
s=set(range(1,6))
import random
while len(s)>0:
s.remove(random.choice(list(s)))
print(s)
```
Three runs give three different answers:
```
>>>
set([1, 3, 4, 5])
set([3, 4, 5])
set([3, 4])
set([4])
set([])
>>>
set([1, 2, 3, 5])
set([2, 3, 5])
set([2, 3])
set([2])
set([])
>>>
set([1, 2, 3, 5])
set([1, 2, 3])
set([1, 2])
set([1])
set([])
``` |
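An alternative sketch that consumes a collection in random order with a single `shuffle` instead of a `choice` per step:

```python
import random

items = list(range(1, 6))
random.shuffle(items)  # randomize once, O(n)
drawn = []
while items:
    drawn.append(items.pop())  # each pop now yields a random remaining element
print(sorted(drawn))  # [1, 2, 3, 4, 5]
```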
How do I randomly select an item from a list using Python? | 306,400 | 824 | 2008-11-20T18:42:21Z | 30,488,952 | 34 | 2015-05-27T17:07:07Z | [
"python",
"list",
"random"
] | Assume I have the following list:
```
foo = ['a', 'b', 'c', 'd', 'e']
```
What is the simplest way to retrieve an item at random from this list? | If you want to randomly select more than one item from a list, or select an item from a set, I'd recommend using `random.sample` instead.
```
import random
group_of_items = {1, 2, 3, 4} # a sequence or set will work here.
num_to_select = 2 # set the number to select here.
list_of_random_items = random.sample(group_of_items, num_to_select)
first_random_item = list_of_random_items[0]
second_random_item = list_of_random_items[1]
```
If you're only pulling a single item from a list though, choice is less clunky, as using sample would have the syntax `random.sample(some_list, 1)[0]` instead of `random.choice(some_list)`.
Unfortunately though, choice only works for a single output from sequences (such as lists or tuples). Though `random.choice(tuple(some_set))` may be an option for getting a single item from a set. |
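A caveat that post-dates this answer: as of Python 3.11, `random.sample` requires a sequence, so a set has to be converted first — a minimal sketch:

```python
import random

group_of_items = {1, 2, 3, 4}
# list() is required on Python 3.11+, and harmless on older versions
list_of_random_items = random.sample(list(group_of_items), 2)
print(len(list_of_random_items))  # 2
print(set(list_of_random_items) <= group_of_items)  # True
```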
function pointers in python | 307,494 | 2 | 2008-11-21T01:09:28Z | 307,538 | 16 | 2008-11-21T01:30:33Z | [
"python",
"function-pointers"
] | I would like to do something like the following:
```
def add(a, b):
#some code
def subtract(a, b):
#some code
operations = [add, subtract]
operations[0]( 5,3)
operations[1](5,3)
```
In python, is it possible to assign something like a function pointer? | Did you try it? What you wrote works exactly as written. Functions are first-class objects in Python. |
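Completing the question's skeleton to show it running (the function bodies are illustrative):

```python
def add(a, b):
    return a + b

def subtract(a, b):
    return a - b

# Functions are ordinary objects: store them, pass them around, call them later
operations = [add, subtract]
print(operations[0](5, 3))  # 8
print(operations[1](5, 3))  # 2
```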
How to force iPython to use an older version of Python? | 308,254 | 9 | 2008-11-21T09:26:19Z | 308,260 | 10 | 2008-11-21T09:28:51Z | [
"python",
"ipython"
] | I am running an Ubuntu 8.10, using Python 2.5 out of the box. This is fine from the system point of view, but I need Python2.4 since I dev on Zope / Plone.
Well, installing python2.4 is no challenge, but I can't find a (clean) way to make iPython use it : no option in the man nor in the config file.
Before, there was a ipython2.4 package but it is deprecated. | Ok, I answer my own question : I'm dumb :-)
```
ls /usr/bin/ipython*
/usr/bin/ipython /usr/bin/ipython2.4 /usr/bin/ipython2.5
```
Now it's built-in... |
Python data structures overhead/performance | 308,912 | 3 | 2008-11-21T14:22:56Z | 308,982 | 19 | 2008-11-21T14:47:13Z | [
"python",
"optimization"
] | Is there any performance advantage to using lists over dictionaries over tuples in Python?
If I'm optimising for speed, is there any reason to prefer one over another? | Rich,
Lists and dicts are beasts suitable for different needs. Make sure you don't use lists for linear searches where dict hash lookups are perfect, because it's way slower. Also, if you just need a list of elements to traverse, don't use dicts, because they take much more space than lists.
That may sound obvious, but picking the correct data structure algorithmically has much higher performance gains than micro-optimization due to more efficient compiled code layouts, etc. If you search in a list in O(n) instead of in a dict in O(1), micro-optimizations won't save you. |
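A hedged micro-benchmark of the linear-search point using `timeit` (sizes and repeat counts are arbitrary):

```python
import timeit

setup = "items = list(range(10000)); s = set(items)"
list_time = timeit.timeit("9999 in items", setup=setup, number=1000)  # O(n) scan
set_time = timeit.timeit("9999 in s", setup=setup, number=1000)       # O(1) hash lookup
print(set_time < list_time)  # True on any realistic run
```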
What does functools.wraps do? | 308,999 | 265 | 2008-11-21T14:53:40Z | 309,000 | 467 | 2008-11-21T14:53:47Z | [
"python",
"decorator",
"wraps"
] | In a comment on the [answer to another question](http://stackoverflow.com/questions/306130/python-decorator-makes-function-forget-that-it-belongs-to-a-class#306277), someone said they weren't sure what functools.wraps was doing. So I'm asking this question so that there will be a record of it on StackOverflow for future reference: what does functools.wraps do, exactly? | When you use a decorator, you're replacing one function with another. In other words, if you have a decorator
```
def logged(func):
def with_logging(*args, **kwargs):
print func.__name__ + " was called"
return func(*args, **kwargs)
return with_logging
```
then when you say
```
@logged
def f(x):
"""does some math"""
return x + x * x
```
it's exactly the same as saying
```
def f(x):
"""does some math"""
return x + x * x
f = logged(f)
```
and your function f is replaced with the function with\_logging. Unfortunately, this means that if you then say
```
print f.__name__
```
it will print `with_logging` because that's the name of your new function. In fact, if you look at the docstring for f, it will be blank because with\_logging has no docstring, and so the docstring you wrote won't be there anymore. Also, if you look at the pydoc result for that function, it won't be listed as taking one argument `x`; instead it'll be listed as taking `*args` and `**kwargs` because that's what with\_logging takes.
If using a decorator always meant losing this information about a function, it would be a serious problem. That's why we have `functools.wraps`. This takes a function used in a decorator and adds the functionality of copying over the function name, docstring, arguments list, etc. And since `wraps` is itself a decorator, the following code does the correct thing:
```
from functools import wraps
def logged(func):
@wraps(func)
def with_logging(*args, **kwargs):
print func.__name__ + " was called"
return func(*args, **kwargs)
return with_logging
@logged
def f(x):
"""does some math"""
return x + x * x
print f.__name__ # prints 'f'
print f.__doc__ # prints 'does some math'
``` |
What does functools.wraps do? | 308,999 | 265 | 2008-11-21T14:53:40Z | 1,843,920 | 11 | 2009-12-03T23:46:43Z | [
"python",
"decorator",
"wraps"
] | In a comment on the [answer to another question](http://stackoverflow.com/questions/306130/python-decorator-makes-function-forget-that-it-belongs-to-a-class#306277), someone said they weren't sure what functools.wraps was doing. So I'm asking this question so that there will be a record of it on StackOverflow for future reference: what does functools.wraps do, exactly? | I very often use classes, rather than functions, for my decorators. I was having some trouble with this because an object won't have all the same attributes that are expected of a function. For example, an object won't have the attribute `__name__`. I had a specific issue with this that was pretty hard to trace where Django was reporting the error "object has no attribute '`__name__`'". Unfortunately, for class-style decorators, I don't believe that @wraps will do the job. I have instead created a base decorator class like so:
```
class DecBase(object):
func = None
def __init__(self, func):
self.func = func
def __getattribute__(self, name):
if name == "func":
return super(DecBase, self).__getattribute__(name)
return self.func.__getattribute__(name)
def __setattr__(self, name, value):
if name == "func":
return super(DecBase, self).__setattr__(name, value)
return self.func.__setattr__(name, value)
```
This class proxies all the attribute calls over to the function that is being decorated. So, you can now create a simple decorator that checks that 2 arguments are specified like so:
```
class process_login(DecBase):
def __call__(self, *args):
if len(args) != 2:
raise Exception("You can only specify two arguments")
return self.func(*args)
``` |
Why can't I inherit from dict AND Exception in Python? | 309,129 | 18 | 2008-11-21T15:37:42Z | 309,563 | 20 | 2008-11-21T17:30:50Z | [
"python",
"multiple-inheritance"
] | I got the following class :
```
class ConstraintFailureSet(dict, Exception) :
"""
Container for constraint failures. It act as a constraint failure itself
but can contain other constraint failures that can be accessed with a dict syntax.
"""
def __init__(self, **failures) :
dict.__init__(self, failures)
Exception.__init__(self)
print isinstance(ConstraintFailureSet(), Exception)
True
raise ConstraintFailureSet()
TypeError: exceptions must be classes, instances, or strings (deprecated), not ConstraintFailureSet
```
What the heck ?
And the worst part is that I can't try super(), since Exception is an old-style class...
EDIT : And, yes, I've tried to switch the order of inheritance / init.
EDIT2 : I am using CPython 2.4 on Ubuntu 8.10. You never know if this kind of info is useful ;-). Anyway, this little riddle has stumped 3 of my colleagues. You'd be my best friend of the day... | Both `Exception` and `dict` are implemented in C.
I think you can test this the following way:
```
>>> class C(object): pass
...
>>> '__module__' in C.__dict__
True
>>> '__module__' in dict.__dict__
False
>>> '__module__' in Exception.__dict__
False
```
Since `Exception` and `dict` have different ideas of how to store their data internally, they are not compatible and thus you cannot inherit from both at the same time.
In later versions of Python you should get an Exception the moment you try to define the class:
```
>>> class foo(dict, Exception):
... pass
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Error when calling the metaclass bases
multiple bases have instance lay-out conflict
``` |
How to setup setuptools for python 2.6 on Windows? | 309,412 | 84 | 2008-11-21T16:44:44Z | 309,783 | 102 | 2008-11-21T18:56:40Z | [
"python",
"windows",
"setuptools"
] | Is there any way to install setuptools for python 2.6 in Windows without having an .exe installer?
There isn't one built at the moment, and the maintainer of setuptools has stated that it'll probably be a while before he gets to it.
Does anyone know of a way to install it anyway? | First Option - Online Installation (i.e. remaining connected to the Internet during the entire installation process):
1. Download [setuptools-0.6c9.tar.gz](http://pypi.python.org/pypi/setuptools#files)
2. Use [7-zip](http://www.7-zip.org/) to extract it to a folder(directory) outside your Windows Python installation folder
3. Go to the folder (see step 2) and run ez\_setup.py from the corresponding DOS (command) prompt
4. Ensure that your PATH includes the appropriate C:\Python2X\Scripts directory
Second Option:
1. Download [setuptools-0.6c9.tar.gz](http://pypi.python.org/pypi/setuptools#files)
2. Download [setuptools-0.6c9-py2.6.egg](http://pypi.python.org/packages/2.6/s/setuptools/setuptools-0.6c9-py2.6.egg#md5=ca37b1ff16fa2ede6e19383e7b59245a) to a folder(directory) outside your Windows Python installation folder
3. Use [7-zip](http://www.7-zip.org/) to extract ez\_setup.py in the same folder as [setuptools-0.6c9-py2.6.egg](http://pypi.python.org/packages/2.6/s/setuptools/setuptools-0.6c9-py2.6.egg#md5=ca37b1ff16fa2ede6e19383e7b59245a)
4. Go to the corresponding dos prompt and run python ez\_setup.py setuptools-0.6c9-py2.6.egg from the command prompt
5. Ensure that your PATH includes the appropriate C:\Python2X\Scripts directory
Third Option (assuming that you have Visual Studio 2005 or MinGW on your machine)
1. Download [setuptools-0.6c9.tar.gz](http://pypi.python.org/pypi/setuptools#files)
2. Use [7-zip](http://www.7-zip.org/) to extract it to a folder(directory) outside your Windows Python installation folder
3. Go to the folder (see step 2) and run python setup.py install from the corresponding DOS (command) prompt
Please provide feedback. |
How to setup setuptools for python 2.6 on Windows? | 309,412 | 84 | 2008-11-21T16:44:44Z | 425,318 | 50 | 2009-01-08T18:27:59Z | [
"python",
"windows",
"setuptools"
] | Is there any way to install setuptools for python 2.6 in Windows without having an .exe installer?
There isn't one built at the moment, and the maintainer of setuptools has stated that it'll probably be a while before he gets to it.
Does anyone know of a way to install it anyway? | You could download and run <http://peak.telecommunity.com/dist/ez_setup.py>. This will download and install setuptools.
[update]
This script no longer works - the version of setuptools that it downloads is not at the URI specified in ez\_setup.py - navigate to <http://pypi.python.org/packages/2.7/s/setuptools/> for the latest version - the script also does some md5 checking; I haven't looked into it any further.
How to setup setuptools for python 2.6 on Windows? | 309,412 | 84 | 2008-11-21T16:44:44Z | 675,337 | 10 | 2009-03-23T21:37:23Z | [
"python",
"windows",
"setuptools"
] | Is there any way to install setuptools for python 2.6 in Windows without having an .exe installer?
There isn't one built at the moment, and the maintainer of setuptools has stated that it'll probably be a while before he gets to it.
Does anyone know of a way to install it anyway? | The Nov. 21 answer didn't work for me. I got it working on my 64 bit Vista machine by following the Method 1 instructions, except for Step 3 I typed:
setup.py install
So, in summary, I did:
1. Download setuptools-0.6c9.tar.gz
2. Use 7-zip to extract it to a folder (directory) outside your Windows Python installation folder
3. At a DOS (command) prompt, cd to the newly created setuptools-0.6c9 folder and type "setup.py install" (without the quotes).
4. Ensure that your PATH includes the appropriate C:\Python2X\Scripts directory |
How to quote a string value explicitly (Python DB API/Psycopg2) | 309,945 | 22 | 2008-11-21T19:47:11Z | 312,423 | 22 | 2008-11-23T11:47:54Z | [
"python",
"sql",
"django",
"psycopg2"
] | For some reasons, I would like to do an explicit quoting of a string value (becoming a part of constructed SQL query) instead of waiting for implicit quotation performed by `cursor.execute` method on contents of its second parameter.
By "implicit quotation" I mean:
```
value = "Unsafe string"
query = "SELECT * FROM some_table WHERE some_char_field = %s;"
cursor.execute( query, (value,) ) # value will be correctly quoted
```
I would prefer something like that:
```
value = "Unsafe string"
query = "SELECT * FROM some_table WHERE some_char_field = %s;" % \
READY_TO_USE_QUOTING_FUNCTION(value)
cursor.execute( query ) # value will be correctly quoted, too
```
Is such low level `READY_TO_USE_QUOTING_FUNCTION` expected by Python DB API specification (I couldn't find such functionality in [PEP 249](http://www.python.org/dev/peps/pep-0249/) document). If not, maybe Psycopg2 provides such function? If not, maybe Django provides such function? I would prefer not to write such function myself... | Ok, so I was curious and went and looked at the source of psycopg2. Turns out I didn't have to go further than the examples folder :)
And yes, this is psycopg2-specific. Basically, if you just want to quote a string you'd do this:
```
from psycopg2.extensions import adapt
print adapt("Hello World'; DROP DATABASE World;")
```
But what you probably want to do is to write and register your own adapter;
In the examples folder of psycopg2 you find the file ['myfirstrecipe.py'](http://sourcecodebrowser.com/psycopg2/2.4.5/myfirstrecipe_8py_source.html) there is an example of how to cast and quote a specific type in a special way.
If you have objects for the stuff you want to do, you can just create an adapter that conforms to the 'IPsycopgSQLQuote' protocol (see pydocs for the myfirstrecipe.py-example...actually that's the only reference I can find to that name) that quotes your object and then registering it like so:
```
from psycopg2.extensions import register_adapter
register_adapter(mytype, myadapter)
```
Also, the other examples are interesting; esp. ['dialtone.py'](http://sourcecodebrowser.com/psycopg2/2.4.5/dialtone_8py_source.html) and ['simple.py'](http://sourcecodebrowser.com/psycopg2/2.4.5/simple_8py_source.html). |
How to quote a string value explicitly (Python DB API/Psycopg2) | 309,945 | 22 | 2008-11-21T19:47:11Z | 24,590,439 | 10 | 2014-07-05T20:46:46Z | [
"python",
"sql",
"django",
"psycopg2"
] | For some reasons, I would like to do an explicit quoting of a string value (becoming a part of constructed SQL query) instead of waiting for implicit quotation performed by `cursor.execute` method on contents of its second parameter.
By "implicit quotation" I mean:
```
value = "Unsafe string"
query = "SELECT * FROM some_table WHERE some_char_field = %s;"
cursor.execute( query, (value,) ) # value will be correctly quoted
```
I would prefer something like that:
```
value = "Unsafe string"
query = "SELECT * FROM some_table WHERE some_char_field = %s;" % \
READY_TO_USE_QUOTING_FUNCTION(value)
cursor.execute( query ) # value will be correctly quoted, too
```
Is such low level `READY_TO_USE_QUOTING_FUNCTION` expected by Python DB API specification (I couldn't find such functionality in [PEP 249](http://www.python.org/dev/peps/pep-0249/) document). If not, maybe Psycopg2 provides such function? If not, maybe Django provides such function? I would prefer not to write such function myself... | I guess you're looking for the [mogrify](http://initd.org/psycopg/docs/cursor.html#cursor.mogrify) function.
Example:
```
>>> cur.mogrify("INSERT INTO test (num, data) VALUES (%s, %s)", (42, 'bar'))
"INSERT INTO test (num, data) VALUES (42, E'bar')"
``` |
How can I translate the following filename to a regular expression in Python? | 310,199 | 2 | 2008-11-21T21:08:07Z | 311,214 | 11 | 2008-11-22T11:09:25Z | [
"python",
"regex"
] | I am battling regular expressions now as I type.
I would like to determine a pattern for the following example file: `b410cv11_test.ext`. I want to be able to do a search for files that match the pattern of the example file aforementioned. Where do I start (so lost and confused) and what is the best way of arriving at a solution that best matches the file pattern? Thanks in advance.
***Further clarification of question:***
I would like the pattern to be as follows: must start with 'b', followed by three digits, followed by 'cv', followed by two digits, then an underscore, followed by 'release', followed by '.ext' | Now that you have a human readable description of your file name, it's quite straightforward to translate it into a regular expression (at least in this case ;)
> must start with
The caret (`^`) anchors a regular expression to the beginning of what you want to match, so your re has to start with this symbol.
> 'b',
Any non-special character in your re will match literally, so you just use "b" for this part: `^b`.
> followed by [...] digits,
This depends a bit on which flavor of re you use:
The most general way of expressing this is to use brackets (`[]`). Those mean "match any one of the characters listed within". `[ASDF]`, for example, would match either `A` or `S` or `D` or `F`; `[0-9]` would match anything between 0 and 9.
Your re library probably has a shortcut for "any digit". In `sed` and `awk` you could use `[[:digit:]]` [sic!], in python and many other languages you can use `\d`.
So now your re reads `^b\d`.
> followed by three [...]
The most simple way to express this would be to just repeat the atom three times like this: `\d\d\d`.
Again your language might provide a shortcut: braces (`{}`). Sometimes you would have to escape them with a backslash (if you are using sed or awk, read about "extended regular expressions"). They also give you a way to say "at least x, but no more than y occurances of the previous atom": `{x,y}`.
Now you have: `^b\d{3}`
> followed by 'cv',
Literal matching again, now we have `^b\d{3}cv`
> followed by two digits,
We already covered this: `^b\d{3}cv\d{2}`.
> then an underscore, followed by 'release', followed by '.ext'
Again, this should all match literally, but the dot (`.`) is a special character. This means you have to escape it with a backslash: `^b\d{3}cv\d{2}_release\.ext`
Leaving out the backslash would mean that a filename like "b410cv11\_test\_ext" would also match, which may or may not be a problem for you.
Finally, if you want to guarantee that there is nothing else following ".ext", anchor the re to the end of the thing to match, use the dollar sign (`$`).
Thus the complete regular expression for your specific problem would be:
```
^b\d{3}cv\d{2}_release\.ext$
```
Easy.
Whatever language or library you use, there has to be a reference somewhere in the documentation that will show you what the exact syntax in your case should be. Once you have learned to break down the problem into a suitable description, understanding the more advanced constructs will come to you step by step. |
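A quick sanity check of the finished pattern in Python (sketch of mine; the test filenames are invented):

```python
import re

pattern = re.compile(r'^b\d{3}cv\d{2}_release\.ext$')

print(bool(pattern.match('b410cv11_release.ext')))   # True
print(bool(pattern.match('b410cv11_test.ext')))      # False: 'test', not 'release'
print(bool(pattern.match('b410cv11_release_ext')))   # False: the escaped dot matters
print(bool(pattern.match('b41cv11_release.ext')))    # False: only two leading digits
```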
Python Input/Output, files | 310,629 | 4 | 2008-11-22T00:31:32Z | 310,635 | 10 | 2008-11-22T00:38:24Z | [
"python"
] | I need to write some methods for loading/saving some classes to and from a binary file. However I also want to be able to accept the binary data from other places, such as a binary string.
In c++ I could do this by simply making my class methods use std::istream and std::ostream which could be a file, a stringstream, the console, whatever.
Does python have a similar input/output class which can be made to represent almost any form of i/o, or at least files and memory? | The Python way to do this is to accept an object that implements read() or write(). If you have a string, you can make this happen with [StringIO](http://docs.python.org/library/stringio.html):
```
from cStringIO import StringIO
s = "My very long string I want to read like a file"
file_like_string = StringIO(s)
data = file_like_string.read(10)
```
Remember that Python uses duck-typing: you don't have to involve a common base class. So long as your object implements read(), it can be read like a file. |
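For example (my own sketch; `parse_header` is an invented name), a function written against `read()` accepts a real file and a StringIO interchangeably:

```python
from io import StringIO  # the answer's cStringIO is the Python 2 spelling

def parse_header(stream):
    """Read the first four characters from any object with a read() method."""
    return stream.read(4)

# From an in-memory string:
print(parse_header(StringIO("DATArest")))  # DATA

# The very same call works on a file object, e.g. parse_header(open(path)).
```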
In Django, how does one filter a QuerySet with dynamic field lookups? | 310,732 | 97 | 2008-11-22T02:06:06Z | 310,785 | 170 | 2008-11-22T02:48:30Z | [
"python",
"django",
"django-models"
] | Given a class:
```
from django.db import models
class Person(models.Model):
name = models.CharField(max_length=20)
```
Is it possible, and if so how, to have a QuerySet that filters based on dynamic arguments? For example:
```
# Instead of:
Person.objects.filter(name__startswith='B')
# ... and:
Person.objects.filter(name__endswith='B')
# ... is there some way, given:
filter_by = '{0}__{1}'.format('name', 'startswith')
filter_value = 'B'
# ... that you can run the equivalent of this?
Person.objects.filter(filter_by=filter_value)
# ... which will throw an exception, since `filter_by` is not
# an attribute of `Person`.
```
Help is much appreciated & thank you in advance. | Python's argument expansion may be used to solve this problem:
```
kwargs = {
'{0}__{1}'.format('name', 'startswith'): 'A',
'{0}__{1}'.format('name', 'endswith'): 'Z'
}
Person.objects.filter(**kwargs)
```
This is a very common and useful Python idiom. |
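The mechanism is plain Python and can be seen without Django at all (sketch of mine; `filter_people` merely stands in for `QuerySet.filter`):

```python
def filter_people(**kwargs):
    # Stand-in for QuerySet.filter(): simply report what it received.
    return sorted(kwargs.items())

field, lookup, value = 'name', 'startswith', 'B'
kwargs = {'{0}__{1}'.format(field, lookup): value}

# The ** operator unpacks the dict into keyword arguments at call time.
print(filter_people(**kwargs))  # [('name__startswith', 'B')]
```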
Python library to modify MP3 audio without transcoding | 310,765 | 16 | 2008-11-22T02:30:54Z | 310,817 | 8 | 2008-11-22T03:35:23Z | [
"python",
"mp3",
"codec"
] | I am looking for some general advice about the mp3 format before I start a small project to make sure I am not on a wild-goose chase.
My understanding of the internals of the mp3 format is minimal. Ideally, I am looking for a library that would abstract those details away. I would prefer to use Python (but could be convinced otherwise).
I would like to modify a set of mp3 files in a fairly simple way. I am not so much interested in the ID3 tags but in the audio itself. I want to be able to delete sections (e.g. drop 10 seconds from the 3rd minute), and insert sections (e.g. add credits to the end.)
My understanding is that the mp3 format is lossy, and so decoding it to (for example) PCM format, making the modifications, and then encoding it again to MP3 will lower the audio quality. (I would love to hear that I am wrong.)
I *conjecture* that if I stay in mp3 format, there will be some sort of minimum frame or packet-size to deal with, so the granularity of the operations may be coarser. I can live with that, as long as I get an accuracy of within a couple of seconds.
I have looked at [PyMedia](http://pymedia.org/), but it requires me to migrate to PCM to process the data. Similarly, [LAME](http://lame.sourceforge.net) wants to help me encode, but not access the data in place. I have seen several other libraries that only deal with the ID3 tags.
Can anyone recommend a Python MP3 library? Alternatively, can you disabuse me of my assumption that going to PCM and back is bad and avoidable? | If you want to do things low-level, use [pymad](http://spacepants.org/src/pymad/). It turns MP3s into a buffer of sample data.
If you want something a little higher-level, use the Echo Nest [Remix API](http://code.google.com/p/echo-nest-remix/) (disclosure: I wrote part of it for my dayjob). It includes a few examples. If you look at the [cowbell](http://code.google.com/p/echo-nest-remix/source/browse/#svn/trunk/examples/cowbell) example (i.e., [MoreCowbell.dj](http://morecowbell.dj)), you'll see a fork of pymad that gives you a [NumPy](http://numpy.scipy.org/) array instead of a buffer. That datatype makes it easier to slice out sections and do math on them. |
Cleaning form data in Django | 310,931 | 7 | 2008-11-22T05:44:09Z | 310,968 | 12 | 2008-11-22T06:42:22Z | [
"python",
"django",
"forms",
"slug"
] | How can I clean and modify data from a form in Django? I would like to define it on a per-field basis for each model, much like using ModelForms.
What I want to achieve is automatically remove leading and trailing spaces from defined fields, or turn a title (from one field) into a slug (which would be another field). | You can define clean\_FIELD\_NAME() methods which can validate and alter data, as documented here: <http://docs.djangoproject.com/en/dev/ref/forms/validation/#ref-forms-validation> |
How do I edit and delete data in Django? | 311,188 | 4 | 2008-11-22T10:28:56Z | 311,191 | 17 | 2008-11-22T10:36:04Z | [
"python",
"django",
"editing"
] | I am using django 1.0 and I have created my models using the example in the Django book. I am able to perform the basic function of adding data; now I need a way of retrieving that data, loading it into a form (change\_form?! or something), **EDIT** it and save it back to the DB. Secondly how do I **DELETE** the data that's in the DB? i.e. search, select and then delete!
Please show me an example of the code I need to write on my `view.py` and `urls.py` for perform this task. | Say you have a model Employee. To edit an entry with primary key emp\_id you do:
```
emp = Employee.objects.get(pk = emp_id)
emp.name = 'Somename'
emp.save()
```
to delete it just do:
```
emp.delete()
```
so a full view would be:
```
def update(request, id):
emp = Employee.objects.get(pk = id)
#you can do this for as many fields as you like
#here I asume you had a form with input like <input type="text" name="name"/>
#so it's basically like that for all form fields
emp.name = request.POST.get('name')
emp.save()
return HttpResponse('updated')
def delete(request, id):
emp = Employee.objects.get(pk = id)
emp.delete()
return HttpResponse('deleted')
```
In urls.py you'd need two entries like this:
```
(r'^delete/(\d+)/$','myproject.myapp.views.delete'),
(r'^update/(\d+)/$','myproject.myapp.views.update'),
```
I suggest you take a look at the [docs](http://docs.djangoproject.com/en/dev/ref/models/instances/?from=olddocs) |
Modern, high performance bloom filter in Python? | 311,202 | 41 | 2008-11-22T10:53:57Z | 311,907 | 25 | 2008-11-22T23:35:59Z | [
"python",
"jython",
"bloom-filter"
] | I'm looking for a production quality bloom filter implementation in Python to handle fairly large numbers of items (say 100M to 1B items with 0.01% false positive rate).
[Pybloom](http://www.imperialviolet.org/pybloom.html) is one option but it seems to be showing its age as it throws DeprecationWarning errors on Python 2.5 on a regular basis. Joe Gregorio also has [an implementation](http://bitworking.org/news/380/bloom-filter-resources).
Requirements are fast lookup performance and stability. I'm also open to creating Python interfaces to particularly good c/c++ implementations, or even to Jython if there's a good Java implementation.
Lacking that, any recommendations on a bit array / bit vector representation that can handle ~16E9 bits? | I recently went down this path as well; though it sounds like my application was slightly different. I was interested in approximating set operations on a large number of strings.
You do make the key observation that a **fast** bit vector is required. Depending on what you want to put in your bloom filter, you may also need to give some thought to the speed of the hashing algorithm(s) used. You might find this [library](http://www.partow.net/programming/hashfunctions/index.html) useful. You may also want to tinker with the random number technique used below that only hashes your key a single time.
In terms of non-Java bit array implementations:
* Boost has [dynamic\_bitset](http://www.boost.org/doc/libs/1_37_0/libs/dynamic_bitset/dynamic_bitset.html)
* Java has the built in [BitSet](http://java.sun.com/javase/6/docs/api/java/util/BitSet.html)
I built my bloom filter using [BitVector](http://cobweb.ecn.purdue.edu/~kak/dist/). I spent some time profiling and optimizing the library and contributing back my patches to Avi. Go to that BitVector link and scroll down to acknowledgments in v1.5 to see details. In the end, I realized that performance was not a goal of this project and decided against using it.
Here's some code I had lying around. I may put this up on google code at python-bloom. Suggestions welcome.
```
from BitVector import BitVector
from random import Random
# get hashes from http://www.partow.net/programming/hashfunctions/index.html
from hashes import RSHash, JSHash, PJWHash, ELFHash, DJBHash
#
# [email protected] / www.asciiarmor.com
#
# copyright (c) 2008, ryan cox
# all rights reserved
# BSD license: http://www.opensource.org/licenses/bsd-license.php
#
class BloomFilter(object):
def __init__(self, n=None, m=None, k=None, p=None, bits=None ):
self.m = m
if k > 4 or k < 1:
raise Exception('Must specify value of k between 1 and 4')
self.k = k
if bits:
self.bits = bits
else:
self.bits = BitVector( size=m )
self.rand = Random()
self.hashes = []
self.hashes.append(RSHash)
self.hashes.append(JSHash)
self.hashes.append(PJWHash)
self.hashes.append(DJBHash)
# switch between hashing techniques
self._indexes = self._rand_indexes
#self._indexes = self._hash_indexes
def __contains__(self, key):
for i in self._indexes(key):
if not self.bits[i]:
return False
return True
def add(self, key):
dupe = True
bits = []
for i in self._indexes(key):
if dupe and not self.bits[i]:
dupe = False
self.bits[i] = 1
bits.append(i)
return dupe
def __and__(self, filter):
if (self.k != filter.k) or (self.m != filter.m):
raise Exception('Must use bloom filters created with equal k / m parameters for bitwise AND')
return BloomFilter(m=self.m,k=self.k,bits=(self.bits & filter.bits))
def __or__(self, filter):
if (self.k != filter.k) or (self.m != filter.m):
raise Exception('Must use bloom filters created with equal k / m parameters for bitwise OR')
return BloomFilter(m=self.m,k=self.k,bits=(self.bits | filter.bits))
def _hash_indexes(self,key):
ret = []
for i in range(self.k):
ret.append(self.hashes[i](key) % self.m)
return ret
def _rand_indexes(self,key):
self.rand.seed(hash(key))
ret = []
for i in range(self.k):
ret.append(self.rand.randint(0,self.m-1))
return ret
if __name__ == '__main__':
e = BloomFilter(m=100, k=4)
e.add('one')
e.add('two')
e.add('three')
e.add('four')
e.add('five')
f = BloomFilter(m=100, k=4)
f.add('three')
f.add('four')
f.add('five')
f.add('six')
f.add('seven')
f.add('eight')
f.add('nine')
f.add("ten")
# test check for dupe on add
assert not f.add('eleven')
assert f.add('eleven')
# test membership operations
assert 'ten' in f
assert 'one' in e
assert 'ten' not in e
assert 'one' not in f
# test set based operations
union = f | e
intersection = f & e
assert 'ten' in union
assert 'one' in union
assert 'three' in intersection
assert 'ten' not in intersection
assert 'one' not in intersection
```
Also, in my case I found it useful to have a faster count\_bits function for BitVector. Drop this code into BitVector 1.5 and it should give you a more performant bit counting method:
```
def fast_count_bits( self, v ):
bits = (
0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4,
1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
4, 5, 5, 6, 5, 6, 6, 7, 5, 6, 6, 7, 6, 7, 7, 8 )
return bits[v & 0xff] + bits[(v >> 8) & 0xff] + bits[(v >> 16) & 0xff] + bits[v >> 24]
``` |
Modern, high performance bloom filter in Python? | 311,202 | 41 | 2008-11-22T10:53:57Z | 3,390,002 | 8 | 2010-08-02T17:00:09Z | [
"python",
"jython",
"bloom-filter"
] | I'm looking for a production quality bloom filter implementation in Python to handle fairly large numbers of items (say 100M to 1B items with 0.01% false positive rate).
[Pybloom](http://www.imperialviolet.org/pybloom.html) is one option but it seems to be showing its age as it throws DeprecationWarning errors on Python 2.5 on a regular basis. Joe Gregorio also has [an implementation](http://bitworking.org/news/380/bloom-filter-resources).
Requirements are fast lookup performance and stability. I'm also open to creating Python interfaces to particularly good c/c++ implementations, or even to Jython if there's a good Java implementation.
Lacking that, any recommendations on a bit array / bit vector representation that can handle ~16E9 bits? | Eventually I found [pybloomfiltermmap](http://github.com/axiak/pybloomfiltermmap). I haven't used it, but it looks like it'd fit the bill. |
Modern, high performance bloom filter in Python? | 311,202 | 41 | 2008-11-22T10:53:57Z | 4,125,080 | 22 | 2010-11-08T15:10:21Z | [
"python",
"jython",
"bloom-filter"
] | I'm looking for a production quality bloom filter implementation in Python to handle fairly large numbers of items (say 100M to 1B items with 0.01% false positive rate).
[Pybloom](http://www.imperialviolet.org/pybloom.html) is one option but it seems to be showing its age as it throws DeprecationWarning errors on Python 2.5 on a regular basis. Joe Gregorio also has [an implementation](http://bitworking.org/news/380/bloom-filter-resources).
Requirements are fast lookup performance and stability. I'm also open to creating Python interfaces to particularly good c/c++ implementations, or even to Jython if there's a good Java implementation.
Lacking that, any recommendations on a bit array / bit vector representation that can handle ~16E9 bits? | In reaction to Parand, saying "common practice seems to be using something like SHA1 and split up the bits to form multiple hashes", while that may be true in the sense that it's common practice (PyBloom also uses it), it still doesn't mean it's the right thing to do ;-)
For a Bloom filter, the only requirement a hash function has is that its output space must be uniformly distributed given the expected input. While a cryptographic hash certainly fulfils this requirement, it's also a little bit like shooting a fly with a bazooka.
Instead try the [FNV Hash](http://www.isthe.com/chongo/tech/comp/fnv/) which uses just one XOR and one multiplication per input byte, which I estimate is a few hundred times faster than SHA1 :)
The FNV hash is not cryptographically secure, but you don't need it to be. It has slightly [imperfect avalanche behaviour](https://web.archive.org/web/20130617200738/http://bretm.home.comcast.net/~bretm/hash/6.html), but you're not using it for integrity checking either.
About uniformity, note that the second link only did a Chi-square test for the 32-bit FNV hash. It's better to use more bits and the FNV-1 variant, which swaps the XOR and the MUL steps for better bit-dispersion. For a Bloom Filter, there's a few more catches, such as mapping the output uniformly to the index range of the bit-array. If possible, I'd say round up the size of the bit-array to the nearest power of 2 and adjust *k* accordingly. That way you get better accuracy and you can use simple XOR-folding to map the range.
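As an added illustration (not part of the original answer, just a hedged sketch): the FNV-1 inner loop described above, plus XOR-folding a 64-bit hash down to an index into a power-of-two-sized bit array:

```python
def fnv1_64(data):
    """64-bit FNV-1: multiply by the FNV prime, then XOR in each byte."""
    FNV_PRIME = 0x100000001b3
    h = 0xcbf29ce484222325          # FNV-1 64-bit offset basis
    for b in data:                  # Python 3: iterating bytes yields ints
        h = (h * FNV_PRIME) & 0xFFFFFFFFFFFFFFFF
        h ^= b
    return h

def xor_fold(h, bits):
    """XOR-fold a wide hash down to `bits` bits, i.e. an index into a
    bit array of size 2**bits."""
    mask = (1 << bits) - 1
    folded = 0
    while h:
        folded ^= h & mask
        h >>= bits
    return folded

idx = xor_fold(fnv1_64(b"some key"), 20)  # index into a 2**20-bit array
```

For *k* hash functions you would typically vary the seed (or append a counter byte to the key) and fold each result the same way.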
Additionally, here's a reference explaining why you don't want SHA1 (or any cryptographic hash) when you need [a general purpose hash](https://web.archive.org/web/20120722074824/http://bretm.home.comcast.net/~bretm/hash/9.html). |
How to indent Python list-comprehensions? | 311,588 | 31 | 2008-11-22T18:08:18Z | 311,604 | 38 | 2008-11-22T18:19:41Z | [
"python",
"coding-style"
] | List comprehensions can be useful in certain situations, but they can also be rather horrible to read.. As a slightly exaggerated example, how would you indent the following?
```
allUuids = [x.id for x in self.db.query(schema.allPostsUuid).execute(timeout = 20) if x.type == "post" and x.deleted is not False]
``` | It depends on how long they are. I tend to structure them like so:
```
[x.id for x
in self.db.query(schema.allPostsUuid).execute(timeout=20)
if x.type == 'post'
and x.deleted is not False
and ...
and ...]
```
That way every expression has its own line.
If any line becomes too big I like to extract it out in a lambda or expression:
```
transform = lambda x: x.id
results = self.db.query(schema.allPostsUuid).execute(timeout=20)
condition = lambda x: x.deleted is not False and ... and ...
[transform(x) for x in results if condition(x)]
```
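For instance, the `condition` lambda above promoted to a named function — a hedged, self-contained sketch where `Post` is a hypothetical stand-in for the query results:

```python
from collections import namedtuple

# Hypothetical record type standing in for the query results.
Post = namedtuple('Post', 'id type deleted')

def is_live_post(x):
    """The `condition` lambda, promoted to a named function."""
    return x.type == 'post' and x.deleted is not False

results = [Post(1, 'post', None), Post(2, 'comment', None), Post(3, 'post', True)]
ids = [x.id for x in results if is_live_post(x)]
print(ids)  # [1, 3]
```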
And then if a lambda becomes too long it gets promoted to a function. |
How to indent Python list-comprehensions? | 311,588 | 31 | 2008-11-22T18:08:18Z | 311,675 | 27 | 2008-11-22T19:29:24Z | [
"python",
"coding-style"
] | List comprehensions can be useful in certain situations, but they can also be rather horrible to read.. As a slightly exaggerated example, how would you indent the following?
```
allUuids = [x.id for x in self.db.query(schema.allPostsUuid).execute(timeout = 20) if x.type == "post" and x.deleted is not False]
``` | Where I work, our coding guidelines would have us do something like this:
```
all_posts_uuid_query = self.db.query(schema.allPostsUuid)
all_posts_uuid_list = all_posts_uuid_query.execute(timeout=20)
all_uuid_list = [
x.id
for x in all_posts_uuid_list
if (
x.type == "post"
and
not x.deleted # <-- if you don't care about NULLs / None
)
]
``` |
python as a "batch" script (i.e. run commands from python) | 311,601 | 4 | 2008-11-22T18:18:58Z | 311,613 | 17 | 2008-11-22T18:25:47Z | [
"python",
"scripting",
"batch-file"
] | I'm working in a windows environment (my laptop!) and I need a couple of scripts that run other programs, pretty much like a windows batch file.
how can I run a command from python such that the program when run, will replace the script? The program is interactive (for instance, unison) and keeps printing lines and asking for user input all the time.
So, just running a program and printing the output won't suffice. The program has to take over the script's input/output, pretty much like running the command from a .bat file.
I tried os.execl but it keeps telling me "invalid arguments", also, it doesn't find the program name (doesn't search the PATH variable); I have to give it the full path ..?!
basically, in a batch script I can write:
unison profile
how can I achieve the same effect in python?
EDIT:
I found out it can be done with `os.system( ... )` and since I cannot accept my own answer, I'm closing the question.
---
EDIT: this was supposed to be a comment, but when I posted it I didn't have much points.
Thanks Claudiu, that's pretty much what I want, except for a little thing: I want the function to end when the program exits, but when I try it on unison, it doesn't return control to the python script, but to the windows command line environment
```
>>> os.execlp("unison")
C:\>Usage: unison [options]
or unison root1 root2 [options]
or unison profilename [options]
For a list of options, type "unison -help".
For a tutorial on basic usage, type "unison -doc tutorial".
For other documentation, type "unison -doc topics".
C:\>
C:\>
C:\>
```
how to get around this? | You should create a new process using the [subprocess module](http://www.python.org/doc/2.5.2/lib/module-subprocess.html).
I'm not fluent in Windows processes, but its Popen function is cross-platform and should be preferred to OS-specific solutions.
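As a minimal, hedged sketch of the idea (the `unison` invocation from the question appears only in a comment; the runnable demo uses the Python interpreter itself so it works anywhere):

```python
import subprocess
import sys

# subprocess searches PATH for the program name, and by default the
# child inherits this console's stdin/stdout, so an interactive
# program can take over -- e.g.:  subprocess.call(["unison", "profile"])
# Portable demonstration using the interpreter itself:
ret = subprocess.call([sys.executable, "-c", "print('child ran')"])
print("child exited with code", ret)  # 0 on success
```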
EDIT: I maintain that you should prefer the `subprocess` module to OS-specific `os.*` functions; it is cross-platform and more Pythonic (just Google it). You can wait for the result easily, and *cleanly*:
```
import os
import subprocess
unison = os.path.join(os.path.curdir, "unison")
p = subprocess.Popen(unison)
p.wait()
``` |
How to print date in a regular format in Python? | 311,627 | 315 | 2008-11-22T18:37:07Z | 311,645 | 60 | 2008-11-22T18:56:08Z | [
"python",
"datetime",
"date"
] | This is my code:
```
import datetime
today = datetime.date.today()
print today
```
This prints: 2008-11-22 which is exactly what I want BUT....I have a list I'm appending this to and then suddenly everything goes "wonky". Here is the code:
```
import datetime
mylist = []
today = datetime.date.today()
mylist.append(today)
print mylist
```
This prints the following:
```
[datetime.date(2008, 11, 22)]
```
How on earth can I get just a simple date like "2008-11-22"? | Use date.strftime. The formatting arguments are [described in the documentation](https://docs.python.org/2/library/time.html#time.strftime).
This one is what you wanted:
```
some_date.strftime('%Y-%m-%d')
```
This one takes the locale into account (do this):
```
some_date.strftime('%c')
``` |
How to print date in a regular format in Python? | 311,627 | 315 | 2008-11-22T18:37:07Z | 311,655 | 496 | 2008-11-22T19:07:07Z | [
"python",
"datetime",
"date"
] | This is my code:
```
import datetime
today = datetime.date.today()
print today
```
This prints: 2008-11-22 which is exactly what I want BUT....I have a list I'm appending this to and then suddenly everything goes "wonky". Here is the code:
```
import datetime
mylist = []
today = datetime.date.today()
mylist.append(today)
print mylist
```
This prints the following:
```
[datetime.date(2008, 11, 22)]
```
How on earth can I get just a simple date like "2008-11-22"? | ## The WHY: dates are objects
In Python, dates are objects. Therefore, when you manipulate them, you manipulate objects, not strings, not timestamps nor anything.
Any object in Python has TWO string representations:
* The regular representation, used by "print", can be obtained with the `str()` function. Most of the time it is the most human-readable format and is used to ease display. So `str(datetime.datetime(2008, 11, 22, 19, 53, 42))` gives you `'2008-11-22 19:53:42'`.
* The alternative representation, used to represent the object's nature (as data). It can be obtained with the `repr()` function and is handy for knowing what kind of data you're manipulating while you are developing or debugging. `repr(datetime.datetime(2008, 11, 22, 19, 53, 42))` gives you `'datetime.datetime(2008, 11, 22, 19, 53, 42)'`.
What happened is that when you printed the date using "print", it used `str()` so you could see a nice date string. But when you printed `mylist`, you printed a list of objects, and Python tried to represent the set of data using `repr()`.
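A quick sketch of the two representations in action:

```python
import datetime

d = datetime.date(2008, 11, 22)
print(str(d))    # 2008-11-22                   <- what print shows
print(repr(d))   # datetime.date(2008, 11, 22)  <- what a printed list shows
print([d])       # [datetime.date(2008, 11, 22)]
```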
## The How: what do you want to do with that?
Well, when you manipulate dates, keep using the date objects all along the way. They have thousands of useful methods, and most of the Python API expects dates to be objects.
When you want to display them, just use `str()`. In Python, the good practice is to explicitly cast everything. So when it's time to print, get a string representation of your date using `str(date)`.
One last thing. When you tried to print the dates, you printed `mylist`. If you want to print a date, you must print the date objects, not their container (the list).
E.g., you want to print all the dates in a list:
```
for date in mylist :
print str(date)
```
Note that ***in that specific case***, you can even omit `str()` because print will use it for you. But it should not become a habit :-)
## Practical case, using your code
```
import datetime
mylist = []
today = datetime.date.today()
mylist.append(today)
print mylist[0] # print the date object, not the container ;-)
2008-11-22
# It's better to always use str() because :
print "This is a new day : ", mylist[0] # will work
This is a new day : 2008-11-22
print "This is a new day : " + mylist[0] # will crash
cannot concatenate 'str' and 'datetime.date' objects
print "This is a new day : " + str(mylist[0])
This is a new day : 2008-11-22
```
## Advanced date formatting
Dates have a default representation, but you may want to print them in a specific format. In that case, you can get a custom string representation using the `strftime()` method.
`strftime()` expects a string pattern explaining how you want to format your date.
E.G :
```
print today.strftime('We are the %d, %b %Y')
'We are the 22, Nov 2008'
```
Each letter after a `"%"` represents a format for something:
* `%d` is the day number
* `%m` is the month number
* `%b` is the month abbreviation
* `%y` is the last two digits of the year
* `%Y` is the full year
etc
[Have a look at the official documentation](http://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior), or [McCutchen's quick reference](http://strftime.org); you can't know them all.
Since [PEP3101](http://www.python.org/dev/peps/pep-3101/), every object can have its own format, used automatically by the `format` method of any string. In the case of datetime, the format is the same as the one used in strftime. So you can do the same as above like this:
```
print "We are the {:%d, %b %Y}".format(today)
'We are the 22, Nov 2008'
```
The advantage of this form is that you can also convert other objects at the same time.
## Localization
Dates can automatically adapt to the local language and culture if you use them the right way, but it's a bit complicated. Maybe for another question on SO (Stack Overflow) ;-) |
How to print date in a regular format in Python? | 311,627 | 315 | 2008-11-22T18:37:07Z | 14,320,620 | 126 | 2013-01-14T14:46:28Z | [
"python",
"datetime",
"date"
] | This is my code:
```
import datetime
today = datetime.date.today()
print today
```
This prints: 2008-11-22 which is exactly what I want BUT....I have a list I'm appending this to and then suddenly everything goes "wonky". Here is the code:
```
import datetime
mylist = []
today = datetime.date.today()
mylist.append(today)
print mylist
```
This prints the following:
```
[datetime.date(2008, 11, 22)]
```
How on earth can I get just a simple date like "2008-11-22"? | ```
import datetime
print datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
```
**Edit:**
After Cees suggestion, I have started using time as well:
```
import time
print time.strftime("%Y-%m-%d %H:%M")
``` |
How to print date in a regular format in Python? | 311,627 | 315 | 2008-11-22T18:37:07Z | 18,944,849 | 58 | 2013-09-22T14:26:14Z | [
"python",
"datetime",
"date"
] | This is my code:
```
import datetime
today = datetime.date.today()
print today
```
This prints: 2008-11-22 which is exactly what I want BUT....I have a list I'm appending this to and then suddenly everything goes "wonky". Here is the code:
```
import datetime
mylist = []
today = datetime.date.today()
mylist.append(today)
print mylist
```
This prints the following:
```
[datetime.date(2008, 11, 22)]
```
How on earth can I get just a simple date like "2008-11-22"? | The date, datetime, and time objects all support a strftime(format) method,
to create a string representing the time under the control of an explicit format
string.
Here is a list of the format codes with their directive and meaning.
```
%a Locale's abbreviated weekday name.
%A Locale's full weekday name.
%b Locale's abbreviated month name.
%B Locale's full month name.
%c Locale's appropriate date and time representation.
%d Day of the month as a decimal number [01,31].
%f Microsecond as a decimal number [0,999999], zero-padded on the left
%H Hour (24-hour clock) as a decimal number [00,23].
%I Hour (12-hour clock) as a decimal number [01,12].
%j Day of the year as a decimal number [001,366].
%m Month as a decimal number [01,12].
%M Minute as a decimal number [00,59].
%p Locale's equivalent of either AM or PM.
%S Second as a decimal number [00,61].
%U Week number of the year (Sunday as the first day of the week)
%w Weekday as a decimal number [0(Sunday),6].
%W Week number of the year (Monday as the first day of the week)
%x Locale's appropriate date representation.
%X Locale's appropriate time representation.
%y Year without century as a decimal number [00,99].
%Y Year with century as a decimal number.
%z UTC offset in the form +HHMM or -HHMM.
%Z Time zone name (empty string if the object is naive).
%% A literal '%' character.
```
This is what we can do with the datetime and time modules in Python
```
import time
import datetime
print "Time in seconds since the epoch: %s" %time.time()
print "Current date and time: " , datetime.datetime.now()
print "Or like this: " ,datetime.datetime.now().strftime("%y-%m-%d-%H-%M")
print "Current year: ", datetime.date.today().strftime("%Y")
print "Month of year: ", datetime.date.today().strftime("%B")
print "Week number of the year: ", datetime.date.today().strftime("%W")
print "Weekday of the week: ", datetime.date.today().strftime("%w")
print "Day of year: ", datetime.date.today().strftime("%j")
print "Day of the month : ", datetime.date.today().strftime("%d")
print "Day of week: ", datetime.date.today().strftime("%A")
```
That will print out something like this:
```
Time in seconds since the epoch: 1349271346.46
Current date and time: 2012-10-03 15:35:46.461491
Or like this: 12-10-03-15-35
Current year: 2012
Month of year: October
Week number of the year: 40
Weekday of the week: 3
Day of year: 277
Day of the month : 03
Day of week: Wednesday
``` |
How to print date in a regular format in Python? | 311,627 | 315 | 2008-11-22T18:37:07Z | 20,066,760 | 15 | 2013-11-19T08:39:04Z | [
"python",
"datetime",
"date"
] | This is my code:
```
import datetime
today = datetime.date.today()
print today
```
This prints: 2008-11-22 which is exactly what I want BUT....I have a list I'm appending this to and then suddenly everything goes "wonky". Here is the code:
```
import datetime
mylist = []
today = datetime.date.today()
mylist.append(today)
print mylist
```
This prints the following:
```
[datetime.date(2008, 11, 22)]
```
How on earth can I get just a simple date like "2008-11-22"? | This is shorter:
```
>>> import time
>>> time.strftime("%Y-%m-%d %H:%M")
'2013-11-19 09:38'
``` |
How to print date in a regular format in Python? | 311,627 | 315 | 2008-11-22T18:37:07Z | 20,776,958 | 11 | 2013-12-25T21:37:07Z | [
"python",
"datetime",
"date"
] | This is my code:
```
import datetime
today = datetime.date.today()
print today
```
This prints: 2008-11-22 which is exactly what I want BUT....I have a list I'm appending this to and then suddenly everything goes "wonky". Here is the code:
```
import datetime
mylist = []
today = datetime.date.today()
mylist.append(today)
print mylist
```
This prints the following:
```
[datetime.date(2008, 11, 22)]
```
How on earth can I get just a simple date like "2008-11-22"? | Or even
```
from datetime import datetime, date
"{:%d.%m.%Y}".format(datetime.now())
```
Out: '25.12.2013'
or
```
"{} - {:%d.%m.%Y}".format("Today", datetime.now())
```
Out: 'Today - 25.12.2013'
```
"{:%A}".format(date.today())
```
Out: 'Wednesday'
```
'{}__{:%Y.%m.%d__%H-%M}.log'.format(__name__, datetime.now())
```
Out: '\_\_main\_\_\_\_2014.06.09\_\_16-56.log' |
Parsing GPS receiver output via regex in Python | 311,763 | 12 | 2008-11-22T20:50:33Z | 311,769 | 8 | 2008-11-22T20:54:18Z | [
"python",
"regex",
"parsing",
"gps",
"nmea"
] | I have a friend who is finishing up his masters degree in aerospace engineering. For his final project, he is on a small team tasked with writing a program for tracking weather balloons, rockets and satellites. The program receives input from a GPS device, does calculations with the data, and uses the results of those calculations to control a series of motors designed to orientate a directional communication antenna, so the balloon, rocket or satellite always stays in focus.
Though somewhat of a (eternal) beginner myself, I have more programming experience than my friend. So when he asked me for advice, I convinced him to write the program in Python, my language of choice.
At this point in the project, we are working on the code that parses the input from the GPS device. Here is some example input, with the data we need to extract in bold:
$GPRMC,092204.999,**4250.5589,S,14718.5084,E**,1,12,24.4,**89.6**,M,,,0000\*1F
$GPRMC,093345.679,**4234.7899,N,11344.2567,W**,3,02,24.5,**1000.23**,M,,,0000\*1F
$GPRMC,044584.936,**1276.5539,N,88734.1543,E**,2,04,33.5,**600.323**,M,,,\*00
$GPRMC,199304.973,**3248.7780,N,11355.7832,W**,1,06,02.2,**25722.5**,M,,,\*00
$GPRMC,066487.954,**4572.0089,S,45572.3345,W**,3,09,15.0,**35000.00**,M,,,\*1F
Here is some further explanation of the data:
> "I looks like I'll need five things
> out of every line. And bear in mind
> that any one of these area's may be
> empty. Meaning there will be just two
> commas right next to each other. Such
> as ',,,' There are two fields that may
> be full at any time. Some of them only
> have two or three options that they
> may be but I don't think I should be
> counting on that."
Two days ago my friend was able to acquire the full log from the GPS receiver used to track a recent weather balloon launch. The data is quite long, so I put it all in [this pastebin](http://pastebin.com/f5f5cf9ab).
I am still rather new with regular expressions myself, so I am looking for some assistance. | It's simpler to use split than a regex.
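Building on that, a hedged sketch (an addition; the field positions are assumed from the sample sentences in the question) that pulls out the five highlighted values and tolerates the empty fields the question warns about:

```python
def parse_fix(sentence):
    """Extract latitude, N/S, longitude, E/W and altitude from one
    sentence; empty fields come back as None. Field positions are
    assumed from the sample sentences in the question."""
    fields = sentence.strip().split(',')
    num = lambda s: float(s) if s else None
    return (num(fields[2]), fields[3] or None,
            num(fields[4]), fields[5] or None,
            num(fields[9]))

print(parse_fix("$GPRMC,092204.999,4250.5589,S,14718.5084,E,1,12,24.4,89.6,M,,,0000*1F"))
# (4250.5589, 'S', 14718.5084, 'E', 89.6)
```

The raw `split(',')` behaviour it builds on looks like this: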
```
>>> line="$GPRMC,092204.999,4250.5589,S,14718.5084,E,1,12,24.4,89.6,M,,,0000*1F "
>>> line.split(',')
['$GPRMC', '092204.999', '4250.5589', 'S', '14718.5084', 'E', '1', '12', '24.4', '89.6', 'M', '', '', '0000*1F ']
>>>
``` |
Parsing GPS receiver output via regex in Python | 311,763 | 12 | 2008-11-22T20:50:33Z | 311,778 | 14 | 2008-11-22T21:00:01Z | [
"python",
"regex",
"parsing",
"gps",
"nmea"
] | I have a friend who is finishing up his masters degree in aerospace engineering. For his final project, he is on a small team tasked with writing a program for tracking weather balloons, rockets and satellites. The program receives input from a GPS device, does calculations with the data, and uses the results of those calculations to control a series of motors designed to orientate a directional communication antenna, so the balloon, rocket or satellite always stays in focus.
Though somewhat of a (eternal) beginner myself, I have more programming experience than my friend. So when he asked me for advice, I convinced him to write the program in Python, my language of choice.
At this point in the project, we are working on the code that parses the input from the GPS device. Here is some example input, with the data we need to extract in bold:
$GPRMC,092204.999,**4250.5589,S,14718.5084,E**,1,12,24.4,**89.6**,M,,,0000\*1F
$GPRMC,093345.679,**4234.7899,N,11344.2567,W**,3,02,24.5,**1000.23**,M,,,0000\*1F
$GPRMC,044584.936,**1276.5539,N,88734.1543,E**,2,04,33.5,**600.323**,M,,,\*00
$GPRMC,199304.973,**3248.7780,N,11355.7832,W**,1,06,02.2,**25722.5**,M,,,\*00
$GPRMC,066487.954,**4572.0089,S,45572.3345,W**,3,09,15.0,**35000.00**,M,,,\*1F
Here is some further explanation of the data:
> "I looks like I'll need five things
> out of every line. And bear in mind
> that any one of these area's may be
> empty. Meaning there will be just two
> commas right next to each other. Such
> as ',,,' There are two fields that may
> be full at any time. Some of them only
> have two or three options that they
> may be but I don't think I should be
> counting on that."
Two days ago my friend was able to acquire the full log from the GPS receiver used to track a recent weather balloon launch. The data is quite long, so I put it all in [this pastebin](http://pastebin.com/f5f5cf9ab).
I am still rather new with regular expressions myself, so I am looking for some assistance. | splitting should do the trick. Here's a good way to extract the data, as well:
```
>>> line = "$GPRMC,199304.973,3248.7780,N,11355.7832,W,1,06,02.2,25722.5,M,,,*00"
>>> line = line.split(",")
>>> neededData = (float(line[2]), line[3], float(line[4]), line[5], float(line[9]))
>>> print neededData
(3248.7779999999998, 'N', 11355.7832, 'W', 25722.5)
``` |
Python - Create a list with initial capacity | 311,775 | 110 | 2008-11-22T20:56:41Z | 311,783 | 43 | 2008-11-22T21:07:18Z | [
"python",
"list",
"dictionary",
"initialization"
] | Code like this often happens:
```
l = []
while foo:
#baz
l.append(bar)
#qux
```
This is really slow if you're about to append thousands of elements to your list, as the list will have to be constantly resized to fit the new elements.
In Java, you can create an ArrayList with an initial capacity. If you have some idea how big your list will be, this will be a lot more efficient.
I understand that code like this can often be re-factored into a list comprehension. If the for/while loop is very complicated, though, this is unfeasible. Is there any equivalent for us Python programmers? | Python lists have no built-in pre-allocation. If you really need to make a list, and need to avoid the overhead of appending (and you should verify that you do), you can do this:
```
l = [None] * 1000 # Make a list of 1000 None's
for i in xrange(1000):
# baz
l[i] = bar
# qux
```
Perhaps you could avoid the list by using a generator instead:
```
def my_things():
while foo:
#baz
yield bar
#qux
for thing in my_things():
# do something with thing
```
This way, the list is never stored in memory at all, merely generated as needed. |
Python - Create a list with initial capacity | 311,775 | 110 | 2008-11-22T20:56:41Z | 311,833 | 89 | 2008-11-22T22:02:34Z | [
"python",
"list",
"dictionary",
"initialization"
] | Code like this often happens:
```
l = []
while foo:
#baz
l.append(bar)
#qux
```
This is really slow if you're about to append thousands of elements to your list, as the list will have to be constantly resized to fit the new elements.
In Java, you can create an ArrayList with an initial capacity. If you have some idea how big your list will be, this will be a lot more efficient.
I understand that code like this can often be re-factored into a list comprehension. If the for/while loop is very complicated, though, this is unfeasible. Is there any equivalent for us Python programmers? | ```
def doAppend( size=10000 ):
result = []
for i in range(size):
message= "some unique object %d" % ( i, )
result.append(message)
return result
def doAllocate( size=10000 ):
result=size*[None]
for i in range(size):
message= "some unique object %d" % ( i, )
result[i]= message
return result
```
**Results**. (evaluate each function 144 times and average the duration)
```
simple append 0.0102
pre-allocate 0.0098
```
**Conclusion**. It barely matters.
Premature optimization is the root of all evil. |
Python - Create a list with initial capacity | 311,775 | 110 | 2008-11-22T20:56:41Z | 5,533,598 | 22 | 2011-04-04T00:48:00Z | [
"python",
"list",
"dictionary",
"initialization"
] | Code like this often happens:
```
l = []
while foo:
#baz
l.append(bar)
#qux
```
This is really slow if you're about to append thousands of elements to your list, as the list will have to be constantly resized to fit the new elements.
In Java, you can create an ArrayList with an initial capacity. If you have some idea how big your list will be, this will be a lot more efficient.
I understand that code like this can often be re-factored into a list comprehension. If the for/while loop is very complicated, though, this is unfeasible. Is there any equivalent for us Python programmers? | Short version: use
```
pre_allocated_list = [None] * size
```
to pre-allocate a list (that is, to be able to address 'size' elements of the list instead of gradually forming the list by appending). This operation is VERY fast, even on big lists. Allocating new objects that will be later assigned to list elements will take MUCH longer and will be THE bottleneck in your program, performance-wise.
Long version:
I think that initialization time should be taken into account.
Since in Python everything is a reference, it doesn't matter whether you set each element to None or some string - either way it's only a reference. Though it will take longer if you want to create a new object for each element to reference.
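That reference behaviour has a well-known consequence worth illustrating (an added sketch, not from the original answer): `[None] * size` is safe precisely because `None` is immutable, but multiplying a list that contains a *mutable* value copies references, not objects:

```python
grid = [[]] * 3                  # three references to ONE list
grid[0].append('x')
print(grid)                      # [['x'], ['x'], ['x']]

grid = [[] for _ in range(3)]    # three independent lists
grid[0].append('x')
print(grid)                      # [['x'], [], []]
```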
For Python 3.2:
```
import time
import copy
def print_timing (func):
def wrapper (*arg):
t1 = time.time ()
res = func (*arg)
t2 = time.time ()
print ("{} took {} ms".format (func.__name__, (t2 - t1) * 1000.0))
return res
return wrapper
@print_timing
def prealloc_array (size, init = None, cp = True, cpmethod=copy.deepcopy, cpargs=(), use_num = False):
result = [None] * size
if init is not None:
if cp:
for i in range (size):
result[i] = init
else:
if use_num:
for i in range (size):
result[i] = cpmethod (i)
else:
for i in range (size):
result[i] = cpmethod (cpargs)
return result
@print_timing
def prealloc_array_by_appending (size):
result = []
for i in range (size):
result.append (None)
return result
@print_timing
def prealloc_array_by_extending (size):
result = []
none_list = [None]
for i in range (size):
result.extend (none_list)
return result
def main ():
n = 1000000
x = prealloc_array_by_appending(n)
y = prealloc_array_by_extending(n)
a = prealloc_array(n, None)
b = prealloc_array(n, "content", True)
c = prealloc_array(n, "content", False, "some object {}".format, ("blah"), False)
d = prealloc_array(n, "content", False, "some object {}".format, None, True)
e = prealloc_array(n, "content", False, copy.deepcopy, "a", False)
f = prealloc_array(n, "content", False, copy.deepcopy, (), False)
g = prealloc_array(n, "content", False, copy.deepcopy, [], False)
print ("x[5] = {}".format (x[5]))
print ("y[5] = {}".format (y[5]))
print ("a[5] = {}".format (a[5]))
print ("b[5] = {}".format (b[5]))
print ("c[5] = {}".format (c[5]))
print ("d[5] = {}".format (d[5]))
print ("e[5] = {}".format (e[5]))
print ("f[5] = {}".format (f[5]))
print ("g[5] = {}".format (g[5]))
if __name__ == '__main__':
main()
```
Evaluation:
```
prealloc_array_by_appending took 118.00003051757812 ms
prealloc_array_by_extending took 102.99992561340332 ms
prealloc_array took 3.000020980834961 ms
prealloc_array took 49.00002479553223 ms
prealloc_array took 316.9999122619629 ms
prealloc_array took 473.00004959106445 ms
prealloc_array took 1677.9999732971191 ms
prealloc_array took 2729.999780654907 ms
prealloc_array took 3001.999855041504 ms
x[5] = None
y[5] = None
a[5] = None
b[5] = content
c[5] = some object blah
d[5] = some object 5
e[5] = a
f[5] = []
g[5] = ()
```
As you can see, just making a big list of references to the same None object takes very little time.
Appending or extending takes longer (I didn't average anything, but after running this a few times I can tell you that extending and appending take roughly the same time).
Allocating a new object for each element - that is what takes the most time. And S.Lott's answer does that - it formats a new string every time, which is not strictly required: if you want to pre-allocate some space, just make a list of None, then assign data to list elements at will. Either way, it takes more time to generate data than to append/extend a list, whether you generate it while creating the list or afterwards. But if you want a sparsely-populated list, then starting with a list of None is definitely faster. |
Python - Create a list with initial capacity | 311,775 | 110 | 2008-11-22T20:56:41Z | 24,173,567 | 10 | 2014-06-11T22:21:01Z | [
"python",
"list",
"dictionary",
"initialization"
] | Code like this often happens:
```
l = []
while foo:
#baz
l.append(bar)
#qux
```
This is really slow if you're about to append thousands of elements to your list, as the list will have to be constantly resized to fit the new elements.
In Java, you can create an ArrayList with an initial capacity. If you have some idea how big your list will be, this will be a lot more efficient.
I understand that code like this can often be re-factored into a list comprehension. If the for/while loop is very complicated, though, this is unfeasible. Is there any equivalent for us Python programmers? | The Pythonic way for this is:
```
x = [None] * numElements
```
or whatever default value you wish to prepop with, e.g.
```
bottles = [Beer()] * 99
sea = [Fish()] * many
vegetarianPizzas = [None] * peopleOrderingPizzaNotQuiche
```
Python's default approach can be pretty efficient, although that efficiency decays as you increase the number of elements.
Compare
```
import time
class Timer(object):
def __enter__(self):
self.start = time.time()
return self
def __exit__(self, *args):
end = time.time()
secs = end - self.start
msecs = secs * 1000 # millisecs
print('%fms' % msecs)
Elements = 100000
Iterations = 144
print('Elements: %d, Iterations: %d' % (Elements, Iterations))
def doAppend():
result = []
i = 0
while i < Elements:
result.append(i)
i += 1
def doAllocate():
result = [None] * Elements
i = 0
while i < Elements:
result[i] = i
i += 1
def doGenerator():
return list(i for i in range(Elements))
def test(name, fn):
print("%s: " % name, end="")
with Timer() as t:
x = 0
while x < Iterations:
fn()
x += 1
test('doAppend', doAppend)
test('doAllocate', doAllocate)
test('doGenerator', doGenerator)
```
with
```
#include <vector>
typedef std::vector<unsigned int> Vec;
static const unsigned int Elements = 100000;
static const unsigned int Iterations = 144;
void doAppend()
{
Vec v;
for (unsigned int i = 0; i < Elements; ++i) {
v.push_back(i);
}
}
void doReserve()
{
Vec v;
v.reserve(Elements);
for (unsigned int i = 0; i < Elements; ++i) {
v.push_back(i);
}
}
void doAllocate()
{
Vec v;
v.resize(Elements);
for (unsigned int i = 0; i < Elements; ++i) {
v[i] = i;
}
}
#include <iostream>
#include <chrono>
using namespace std;
void test(const char* name, void(*fn)(void))
{
cout << name << ": ";
auto start = chrono::high_resolution_clock::now();
for (unsigned int i = 0; i < Iterations; ++i) {
fn();
}
auto end = chrono::high_resolution_clock::now();
auto elapsed = end - start;
cout << chrono::duration<double, milli>(elapsed).count() << "ms\n";
}
int main()
{
cout << "Elements: " << Elements << ", Iterations: " << Iterations << '\n';
test("doAppend", doAppend);
test("doReserve", doReserve);
test("doAllocate", doAllocate);
}
```
On my Windows 7 i7, 64-bit Python gives
```
Elements: 100000, Iterations: 144
doAppend: 3587.204933ms
doAllocate: 2701.154947ms
doGenerator: 1721.098185ms
```
While C++ gives (built with MSVC, 64-bit, Optimizations enabled)
```
Elements: 100000, Iterations: 144
doAppend: 74.0042ms
doReserve: 27.0015ms
doAllocate: 5.0003ms
```
C++ debug build produces:
```
Elements: 100000, Iterations: 144
doAppend: 2166.12ms
doReserve: 2082.12ms
doAllocate: 273.016ms
```
The point here is that with Python you can achieve a 7-8% performance improvement, and if you think you're writing a high-performance app (or if you're writing something that is used in a web service or something) then that isn't to be sniffed at, but you may need to rethink your choice of language.
Also, the Python code here isn't really idiomatic Python. Switching to truly Pythonic code gives better performance:
```
import time


class Timer(object):
    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, *args):
        end = time.time()
        secs = end - self.start
        msecs = secs * 1000  # millisecs
        print('%fms' % msecs)


Elements = 100000
Iterations = 144

print('Elements: %d, Iterations: %d' % (Elements, Iterations))


def doAppend():
    for x in range(Iterations):
        result = []
        for i in range(Elements):
            result.append(i)


def doAllocate():
    for x in range(Iterations):
        result = [None] * Elements
        for i in range(Elements):
            result[i] = i


def doGenerator():
    for x in range(Iterations):
        result = list(i for i in range(Elements))


def test(name, fn):
    print("%s: " % name, end="")
    with Timer() as t:
        fn()


test('doAppend', doAppend)
test('doAllocate', doAllocate)
test('doGenerator', doGenerator)
```
Which gives
```
Elements: 100000, Iterations: 144
doAppend: 2153.122902ms
doAllocate: 1346.076965ms
doGenerator: 1614.092112ms
```
(in 32-bit doGenerator does better than doAllocate).
Here the gap between doAppend and doAllocate is significantly larger.
Obviously, the differences here really only apply if you are doing this more than a handful of times or if you are doing this on a heavily loaded system where those numbers are going to get scaled out by orders of magnitude, or if you are dealing with considerably larger lists.
The point here: Do it the pythonic way for the best performance.
But if you are worrying about general, high-level performance, Python is the wrong language. The most fundamental problem is that Python function calls have traditionally been up to 300x slower than in other languages, due to Python features such as decorators (<https://wiki.python.org/moin/PythonSpeed/PerformanceTips#Data_Aggregation#Data_Aggregation>).
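That function-call overhead is easy to observe with the standard-library `timeit` module. The micro-benchmark below is an illustrative addition (not part of the answer above): it times the same addition inline versus wrapped in a trivial function call.

```python
import timeit

# Illustrative micro-benchmark (my addition, not from the answer above):
# time the same addition inline vs. wrapped in a trivial function call.
setup = "def add(a, b):\n    return a + b"

inline_time = timeit.timeit("x = 1 + 2", number=1_000_000)
call_time = timeit.timeit("x = add(1, 2)", setup=setup, number=1_000_000)

print("inline: %.3fs  call: %.3fs" % (inline_time, call_time))
```

On a typical CPython build the call version is noticeably slower, which is the overhead the answer is referring to; the exact ratio varies by interpreter and hardware.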
Effective Keyboard Input Handling | 312,263 | 9 | 2008-11-23T07:26:06Z | 312,270 | 17 | 2008-11-23T07:32:57Z | [
"python",
"user-interface",
"keyboard",
"user-input",
"interactive"
] | What is a good way to implement keyboard handling? In any language, where I write a keyboard-interactive program (such as a tetris game), I end up having some code that looks like this:
```
for event in pygame.event.get():
    if event.type == KEYDOWN:
        if False: pass #make everything an elif
        elif rotating: pass
        elif event.key == K_q:
        elif event.key == K_e:
        elif event.key == K_LEFT:
            curpiece.shift(-1, 0)
            shadowpiece = curpiece.clone(); setupshadow(shadowpiece)
        elif event.key == K_RIGHT:
            curpiece.shift(1, 0)
            shadowpiece = curpiece.clone(); setupshadow(shadowpiece)
```
(shortened). I don't like this, as this has to go in my main loop, and it messes with all parts of the program. This also makes it impossible to have a user config screen where they can change which key maps to which action. Is there a good pattern to do this using some form of function callbacks? | You could create a dictionary where the keys are the inputs and the values are functions that handle the keypress:
```
def handle_quit():
    quit()

def handle_left():
    curpiece.shift(-1, 0)
    shadowpiece = curpiece.clone(); setupshadow(shadowpiece)

def handle_right():
    curpiece.shift(1, 0)
    shadowpiece = curpiece.clone(); setupshadow(shadowpiece)

def handle_pause():
    global paused  # assumes a module-level paused flag
    if not paused:
        paused = True

branch = {
    K_q: handle_quit,
    K_e: handle_pause,
    K_LEFT: handle_left,
    K_RIGHT: handle_right,
}

for event in pygame.event.get():
    if event.type == KEYDOWN:
        if event.key in branch:  # ignore keys without a handler
            branch[event.key]()
```
Then changing the key bindings is just a matter of modifying the keys of the dictionary.
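Since the question also asks about a user config screen, here is a minimal, pygame-free sketch of the same idea (plain strings stand in for the pygame key constants, and the action names are made up for illustration). Keeping a separate key-to-action map makes rebinding a dictionary edit:

```python
# Pygame-free sketch: plain strings stand in for pygame key constants.
# A separate key -> action-name map makes user rebinding trivial.
actions = {
    "quit": lambda: "quitting",
    "left": lambda: "moving left",
}
keymap = {"q": "quit", "a": "left"}  # user-configurable bindings

def handle(key):
    name = keymap.get(key)           # unmapped keys are ignored
    return actions[name]() if name else None

print(handle("q"))                   # quitting
keymap["h"] = keymap.pop("a")        # rebind "left" from 'a' to 'h'
print(handle("h"))                   # moving left
print(handle("a"))                   # None: old binding is gone
```

A config screen then only ever reads and writes `keymap`; the action functions never change.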
Fetching attachments from gmail via either python or php | 312,284 | 7 | 2008-11-23T08:03:16Z | 312,317 | 10 | 2008-11-23T09:08:39Z | [
"php",
"python",
"gmail",
"attachment"
] | I have been trying to find information on how to retrieve attachments from a gmail account in either python or PHP, I'm hoping that someone here can be of some help, thanks.
Related:
* [How can I download all emails with attachments from Gmail?](http://stackoverflow.com/questions/348630/how-can-i-download-all-emails-with-attachments-from-gmail) | You will have to enable IMAP access to your GMail account (Settings → Forwarding and POP/IMAP), and then use `imaplib.IMAP4_SSL` to access it.
Use the raw text of every message as an argument to `email.message_from_string` in order to process any attachments. |
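The parsing half of that can be sketched without a live connection. In the snippet below the message is built in memory purely for illustration; a real one would come from `imaplib.IMAP4_SSL("imap.gmail.com")` followed by `fetch`, and the name `data.csv` is a made-up placeholder. The extraction loop at the bottom is what you would run on fetched mail (this assumes Python 3):

```python
import email
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

# Build a sample message in memory; a real one would come from
# imaplib.IMAP4_SSL("imap.gmail.com") via conn.fetch(num, "(RFC822)").
msg = MIMEMultipart()
msg["Subject"] = "report"
msg.attach(MIMEText("see attached"))
att = MIMEApplication(b"csv,data\n1,2\n", Name="data.csv")
att["Content-Disposition"] = 'attachment; filename="data.csv"'
msg.attach(att)

# Round-trip through raw text, as the answer suggests.
parsed = email.message_from_string(msg.as_string())

# Walk the MIME tree and pull out anything marked as an attachment.
attachments = {}
for part in parsed.walk():
    if part.get_content_disposition() == "attachment":
        attachments[part.get_filename()] = part.get_payload(decode=True)

print(attachments)  # {'data.csv': b'csv,data\n1,2\n'}
```

`get_payload(decode=True)` undoes the base64 transfer encoding, giving you the original attachment bytes ready to write to disk.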
How do you split a list into evenly sized chunks? | 312,443 | 959 | 2008-11-23T12:15:52Z | 312,464 | 1,227 | 2008-11-23T12:33:53Z | [
"python",
"list",
"split",
"chunks"
] | I have a list of arbitrary length, and I need to split it up into equal size chunks and operate on it. There are some obvious ways to do this, like keeping a counter and two lists, and when the second list fills up, add it to the first list and empty the second list for the next round of data, but this is potentially extremely expensive.
I was wondering if anyone had a good solution to this for lists of any length, e.g. using generators.
This should work:
```
l = range(1, 1000)
print chunks(l, 10) -> [ [ 1..10 ], [ 11..20 ], .., [ 991..999 ] ]
```
I was looking for something useful in `itertools` but I couldn't find anything obviously useful. Might've missed it, though.
Related question: [What is the most "pythonic" way to iterate over a list in chunks?](http://stackoverflow.com/questions/434287/what-is-the-most-pythonic-way-to-iterate-over-a-list-in-chunks) | Here's a generator that yields the chunks you want:
```
def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in range(0, len(l), n):
        yield l[i:i + n]
```
---
```
import pprint
pprint.pprint(list(chunks(range(10, 75), 10)))
[[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
 [20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
 [30, 31, 32, 33, 34, 35, 36, 37, 38, 39],
 [40, 41, 42, 43, 44, 45, 46, 47, 48, 49],
 [50, 51, 52, 53, 54, 55, 56, 57, 58, 59],
 [60, 61, 62, 63, 64, 65, 66, 67, 68, 69],
 [70, 71, 72, 73, 74]]
```
---
If you're using Python 2, you should use `xrange()` instead of `range()`:
```
def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in xrange(0, len(l), n):
        yield l[i:i + n]
```
---
Also, you can simply use a list comprehension instead of writing a function. Python 3:
```
[l[i:i + n] for i in range(0, len(l), n)]
```
Python 2 version:
```
[l[i:i + n] for i in xrange(0, len(l), n)]
``` |
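Since the question mentions hunting through `itertools` without luck: a generator built on `itertools.islice` does exist as a common recipe. The variant below is a sketch of my own (not from the answer above); unlike the slicing version it works on arbitrary iterables, not just lists, materializing one chunk at a time.

```python
from itertools import islice

def chunks_iter(iterable, n):
    """Yield n-sized chunks from any iterable (not just lists)."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, n))
        if not chunk:          # iterator exhausted
            return
        yield chunk

print(list(chunks_iter(range(10), 4)))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

This is handy when the input is a generator or file object whose length is unknown up front, where the `len()`-based slicing approach cannot be used.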