Nicest way to pad zeroes to string
*Question 339,007 (score 647, asked 2008-12-03) · Answer 14,269,071 (score 28, answered 2013-01-10) · tags: python, string*
What is the nicest/shortest way to pad a string with zeroes to the left, so that the string has a specific length?
```
>>> '99'.zfill(5)
'00099'
>>> '99'.rjust(5, '0')
'00099'
```

If you want the opposite:

```
>>> '99'.ljust(5, '0')
'99000'
```
Nicest way to pad zeroes to string
*Question 339,007 (score 647, asked 2008-12-03) · Answer 24,386,708 (score 24, answered 2014-06-24) · tags: python, string*
What is the nicest/shortest way to pad a string with zeroes to the left, so that the string has a specific length?
Works in both Python 2 and Python 3:

```
>>> "{:0>2}".format("1")  # Works for both numbers and strings.
'01'
>>> "{:02}".format(1)     # Only works for numbers.
'01'
```
How do you unzip very large files in python?
*Question 339,053 (score 15, asked 2008-12-03) · Answer 339,506 (score 16, answered 2008-12-04) · tags: python, compression, zip, unzip*
Using python 2.4 and the built-in `ZipFile` library, I cannot read very large zip files (greater than 1 or 2 GB) because it wants to store the entire contents of the uncompressed file in memory. Is there another way to do this (either with a third-party library or some other hack), or must I "shell out" and unzip it that way (which isn't as cross-platform, obviously).
Here's an outline of decompression of large files.

```
import zipfile
import zlib

src = open(doc, "rb")
zf = zipfile.ZipFile(src)
for m in zf.infolist():
    # Examine the header
    print m.filename, m.header_offset, m.compress_size, repr(m.extra), repr(m.comment)
    src.seek(m.header_offset)
    src.read(30)  # Good to use struct to unpack this.
    nm = src.read(len(m.filename))
    if len(m.extra) > 0:
        ex = src.read(len(m.extra))
    if len(m.comment) > 0:
        cm = src.read(len(m.comment))

    # Build a decompression object
    decomp = zlib.decompressobj(-15)

    # This can be done with a loop reading blocks
    out = open(m.filename, "wb")
    result = decomp.decompress(src.read(m.compress_size))
    out.write(result)
    result = decomp.flush()
    out.write(result)
    # end of the loop
    out.close()

zf.close()
src.close()
```
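For what it's worth, on later Python versions the same avoid-loading-everything effect is available without the manual header parsing, because `ZipFile.open` returns a file-like object that decompresses lazily. A minimal sketch (the function name and chunk size here are my own, not part of the answer above):

```python
import zipfile

def extract_streaming(zip_path, member, dest, chunk_size=64 * 1024):
    """Stream one member out of a zip file without holding it all in memory."""
    with zipfile.ZipFile(zip_path) as zf:
        # zf.open() yields a file-like object; read it in fixed-size blocks.
        with zf.open(member) as src, open(dest, "wb") as out:
            while True:
                block = src.read(chunk_size)
                if not block:
                    break
                out.write(block)
```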
Writing a compiler for a DSL in python
*Question 339,217 (score 5, asked 2008-12-04) · Answer 339,235 (score 7, answered 2008-12-04) · tags: python, dsl, dsl-tools*
I am writing a game in python and have decided to create a DSL for the map data files. I know I could write my own parser with regex, but I am wondering if there are existing python tools which can do this more easily, like re2c which is used in the PHP engine.

Some extra info:

* Yes, I *do* need a DSL, and even if I didn't I still want the experience of building and using one in a project.
* The DSL contains only data (declarative?), it doesn't get "executed". Most lines look like:

  `SOMETHING: !abc @123 #xyz/123`

  I just need to read the tree of data.
There are plenty of Python parsing tools: <http://nedbatchelder.com/text/python-parsers.html>
Writing a compiler for a DSL in python
*Question 339,217 (score 5, asked 2008-12-04) · Answer 339,308 (score 11, answered 2008-12-04) · tags: python, dsl, dsl-tools*
I am writing a game in python and have decided to create a DSL for the map data files. I know I could write my own parser with regex, but I am wondering if there are existing python tools which can do this more easily, like re2c which is used in the PHP engine.

Some extra info:

* Yes, I *do* need a DSL, and even if I didn't I still want the experience of building and using one in a project.
* The DSL contains only data (declarative?), it doesn't get "executed". Most lines look like:

  `SOMETHING: !abc @123 #xyz/123`

  I just need to read the tree of data.
I've always been impressed by [pyparsing](http://pyparsing.wikispaces.com/). The author, Paul McGuire, is active on the [python list](http://mail.python.org/mailman/listinfo/python-list)/[comp.lang.python](http://groups.google.com/group/comp.lang.python) and has always been very helpful with any queries concerning it.
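Before reaching for a full grammar library, it's worth seeing how little is needed for the sample line format in the question. Here's a regex sketch (the group names are my own invention; a real tool like pyparsing scales better once the grammar grows):

```python
import re

# Hypothetical reading of the question's sample line format:
#   KEY: !word @number #path
LINE = re.compile(
    r"(?P<key>\w+):\s*"
    r"!(?P<bang>\w+)\s+"
    r"@(?P<num>\d+)\s+"
    r"#(?P<path>[\w/]+)"
)

def parse_line(line):
    """Return the fields of one DSL line as a dict, or raise on bad input."""
    m = LINE.match(line)
    if m is None:
        raise ValueError("unrecognised line: %r" % line)
    return m.groupdict()
```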
End-line characters from lines read from text file, using Python
*Question 339,537 (score 26, asked 2008-12-04) · Answer 339,579 (score 17, answered 2008-12-04) · tags: python*
When reading lines from a text file using python, the end-line character often needs to be truncated before processing the text, as in the following example:

```
f = open("myFile.txt", "r")
for line in f:
    line = line[:-1]
    # do something with line
```

Is there an elegant way or idiom for retrieving text lines without the end-line character?
Simple. Use **splitlines()**:

```
L = open("myFile.txt", "r").read().splitlines()
for line in L:
    process(line)  # this 'line' will not have a '\n' character at the end
```
End-line characters from lines read from text file, using Python
*Question 339,537 (score 26, asked 2008-12-04) · Answer 339,842 (score 43, answered 2008-12-04) · tags: python*
When reading lines from a text file using python, the end-line character often needs to be truncated before processing the text, as in the following example:

```
f = open("myFile.txt", "r")
for line in f:
    line = line[:-1]
    # do something with line
```

Is there an elegant way or idiom for retrieving text lines without the end-line character?
The *idiomatic* way to do this in Python is to use **rstrip('\n')**:

```
for line in open('myfile.txt'):  # opened in text-mode; all EOLs are converted to '\n'
    line = line.rstrip('\n')
    process(line)
```

Each of the other alternatives has a gotcha:

* **file('...').read().splitlines()** has to load the whole file in memory at once.
* **line = line[:-1]** will fail if the last line has no EOL.
Is rewriting a PHP app into Python a productive step?
*Question 340,318 (score 6, asked 2008-12-04) · Answer 340,338 (score 14, answered 2008-12-04) · tags: php, python*
I have some old apps written in PHP that I'm thinking of converting to Python - both are websites that started as simple static html, then progressed to PHP and now include blogs with admin areas, rss etc. I'm thinking of rewriting them in Python to improve maintainability as well as to take advantage of my increase in experience to write things more robustly. Is this worth the effort?
There are a few things to take into account here:

1. What will you gain from rewriting?
2. Is it an economically wise decision?
3. Will the code be easier to handle for new programmers?
4. Performance-wise, will this be a good option?

These four points are the important ones. Will the work be more efficient after you rewrite the code? Probably. But will it be worth the cost of re-development?

One important step to follow, if you decide to rewrite: produce three documents. First, analyze the project: what needs to be done, and how should everything work? Then write a requirements document: what specifically do we need, and how should this be done? Last but not least, the design document, where you put all your final class diagrams, the system operations, and how the design and flow of the page should work.

This will help new developers, and old ones, to actually think about "do we really need to rewrite?".
Suppressing Output of Paramiko SSHClient Class
*Question 340,341 (score 4, asked 2008-12-04) · Answer 340,896 (score 7, answered 2008-12-04) · tags: python, paramiko*
When I call the connect function of the Paramiko `SSHClient` class, it outputs some log data about establishing the connection, which I would like to suppress. Is there a way to do this either through Paramiko itself, or Python in general?
Paramiko doesn't output anything by default. You probably have a call to the logging module, setting a loglevel that's inherited when paramiko sets up its own logging.

If you want to get at the paramiko logger to override the settings:

```
logger = paramiko.util.logging.getLogger()
```

There's also a convenience function to log everything to a file:

```
paramiko.util.log_to_file('filename.log')
```
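If you'd rather quiet the library down than redirect its output, raising the threshold on its logger should also work. A sketch, assuming paramiko keeps its loggers under the `"paramiko"` name (which it does in the versions I've seen):

```python
import logging

# Raise the threshold on paramiko's logger so INFO-level connection
# chatter is suppressed; WARNING and above still get through.
logging.getLogger("paramiko").setLevel(logging.WARNING)
```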
Can I install Python 3.x and 2.x on the same computer?
*Question 341,184 (score 69, asked 2008-12-04) · Answer 341,218 (score 28, answered 2008-12-04) · tags: python, windows, python-3.x, compatibility*
I'm running Windows and the shell/OS automatically runs Python based on the registry settings when you run a program on the command line. Will this break if I install a 2.x and 3.x version of Python on the same machine? I want to play with Python 3 while still being able to run 2.x scripts on the same machine.
You can have both installed.

You should write at the top of your script:

```
#!/bin/env python2.6
```

or eventually...

```
#!/bin/env python3.0
```

## Update

My solution works perfectly on Unix. After a quick search on [Google](http://news.softpedia.com/news/Your-First-Python-Script-on-Windows-81974.shtml), here is the Windows solution:

```
#!c:/Python/python3_0.exe -u
```

Same thing... at the top of your script.
Can I install Python 3.x and 2.x on the same computer?
*Question 341,184 (score 69, asked 2008-12-04) · Answer 436,455 (score 9, answered 2009-01-12) · tags: python, windows, python-3.x, compatibility*
I'm running Windows and the shell/OS automatically runs Python based on the registry settings when you run a program on the command line. Will this break if I install a 2.x and 3.x version of Python on the same machine? I want to play with Python 3 while still being able to run 2.x scripts on the same machine.
I'm using 2.5, 2.6, and 3.0 from the shell with one-line batch scripts of the form:

```
:: The @ symbol at the start turns off the prompt from displaying the command.
:: The % represents an argument, while the * means all of them.
@c:\programs\pythonX.Y\python.exe %*
```

Name them `pythonX.Y.bat` and put them somewhere in your PATH. Copy the file for the preferred minor version (i.e. the latest) to `pythonX.bat`. (E.g. `copy python2.6.bat python2.bat`.) Then you can use `python2 file.py` from anywhere.

However, this doesn't help or even affect the Windows file association situation. For that you'll need a launcher program that reads the `#!` line, and then associate that with .py and .pyw files.
Can I install Python 3.x and 2.x on the same computer?
*Question 341,184 (score 69, asked 2008-12-04) · Answer 762,725 (score 7, answered 2009-04-18) · tags: python, windows, python-3.x, compatibility*
I'm running Windows and the shell/OS automatically runs Python based on the registry settings when you run a program on the command line. Will this break if I install a 2.x and 3.x version of Python on the same machine? I want to play with Python 3 while still being able to run 2.x scripts on the same machine.
Here you go...

**winpylaunch.py**

```
#
# Looks for a directive in the form: #! C:\Python30\python.exe
# The directive must start with #! and contain ".exe".
# This will be assumed to be the correct python interpreter to
# use to run the script ON WINDOWS. If no interpreter is
# found then the script will be run with 'python.exe'.
# ie: whatever one is found on the path.
#
# For example, in a script which is saved as utf-8 and which
# runs on Linux and Windows and uses the Python 2.6 interpreter...
#
#     #!/usr/bin/python
#     #!C:\Python26\python.exe
#     # -*- coding: utf-8 -*-
#
# When run on Linux, Linux uses the /usr/bin/python. When run
# on Windows using winpylaunch.py it uses C:\Python26\python.exe.
#
# To set up the association add this to the registry...
#
#     HKEY_CLASSES_ROOT\Python.File\shell\open\command
#     (Default) REG_SZ = "C:\Python30\python.exe" S:\usr\bin\winpylaunch.py "%1" %*
#
# NOTE: winpylaunch.py itself works with either 2.6 and 3.0. Once
# this entry has been added python files can be run on the
# commandline and the use of winpylaunch.py will be transparent.
#

import subprocess
import sys

USAGE = """
USAGE: winpylaunch.py <script.py> [arg1] [arg2...]
"""

if __name__ == "__main__":
    if len(sys.argv) > 1:
        script = sys.argv[1]
        args = sys.argv[2:]
        if script.endswith(".py"):
            interpreter = "python.exe"  # Default to wherever it is found on the path.
            lines = open(script).readlines()
            for line in lines:
                if line.startswith("#!") and line.find(".exe") != -1:
                    interpreter = line[2:].strip()
                    break
            process = subprocess.Popen([interpreter] + [script] + args)
            process.wait()
            sys.exit()
    print(USAGE)
```

I've just knocked this up on reading this thread (because it's what I was needing too). I have Pythons 2.6.1 and 3.0.1 on both Ubuntu and Windows. If it doesn't work for you post fixes here.
Can I install Python 3.x and 2.x on the same computer?
*Question 341,184 (score 69, asked 2008-12-04) · Answer 13,297,878 (score 35, answered 2012-11-08) · tags: python, windows, python-3.x, compatibility*
I'm running Windows and the shell/OS automatically runs Python based on the registry settings when you run a program on the command line. Will this break if I install a 2.x and 3.x version of Python on the same machine? I want to play with Python 3 while still being able to run 2.x scripts on the same machine.
The official solution for coexistence seems to be the [Python Launcher for Windows](http://blog.python.org/2011/07/python-launcher-for-windows_11.html), PEP 397, which was included in [Python 3.3.0](http://www.python.org/download/releases/3.3.0/). Installing the release dumps `py.exe` and `pyw.exe` launchers into `%SYSTEMROOT%` (`C:\Windows`), which are then associated with `py` and `pyw` scripts, respectively.

In order to use the new launcher (without manually setting up your own associations to it), leave the "Register Extensions" option enabled. I'm not quite sure why, but on my machine it left Py 2.7 as the "default" (of the launcher).

Running scripts by calling them directly from the command line will route them through the launcher and parse the shebang (if it exists). You can also explicitly call the launcher and use switches: `py -3 mypy2script.py`.

All manner of shebangs seem to work:

* `#!C:\Python33\python.exe`
* `#!python3`
* `#!/usr/bin/env python3`

as well as wanton abuses:

* `#! notepad.exe`
Can I install Python 3.x and 2.x on the same computer?
*Question 341,184 (score 69, asked 2008-12-04) · Answer 32,195,996 (score 7, answered 2015-08-25) · tags: python, windows, python-3.x, compatibility*
I'm running Windows and the shell/OS automatically runs Python based on the registry settings when you run a program on the command line. Will this break if I install a 2.x and 3.x version of Python on the same machine? I want to play with Python 3 while still being able to run 2.x scripts on the same machine.
Here's my setup:

1. Install both Python 2.7 and 3.4 with the [windows installers](https://www.python.org/downloads/).
2. Go to `C:\Python34` (the default install path) and change python.exe to python3.exe.
3. Edit [your environment variables](https://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/sysdm_advancd_environmnt_addchange_variable.mspx?mfr=true) to include `C:\Python27\;C:\Python27\Scripts\;C:\Python34\;C:\Python34\Scripts\;`

Now in the command line you can use `python` for 2.7 and `python3` for 3.4.
Python Decorators run before function it is decorating is called?
*Question 341,379 (score 4, asked 2008-12-04) · Answer 341,389 (score 19, answered 2008-12-04) · tags: python, django, decorator*
As an example,

```
def get_booking(f=None):
    print "Calling get_booking Decorator"

    def wrapper(request, **kwargs):
        booking = _get_booking_from_session(request)
        if booking == None:
            # we don't have a booking in our session.
            return HttpRedirect('/')
        else:
            return f(request=request, booking=booking, **kwargs)
    return wrapper

@get_booking
def do_stuff(request, booking):
    # do stuff here
```

The problem I am having is, the @get\_booking decorator is being called even before I have called the function that I am decorating.

Output on start:

```
Calling get_booking Decorator
Calling get_booking Decorator
Calling get_booking Decorator
Calling get_booking Decorator
Calling get_booking Decorator
Calling get_booking Decorator
Calling get_booking Decorator
Calling get_booking Decorator
Calling get_booking Decorator
Calling get_booking Decorator
[26/Oct/2008 19:54:04] "GET /onlinebooking/?id=1,2 HTTP/1.1" 302 0
[26/Oct/2008 19:54:05] "GET /onlinebooking/ HTTP/1.1" 200 2300
[26/Oct/2008 19:54:05] "GET /site-media/css/style.css HTTP/1.1" 200 800
[26/Oct/2008 19:54:05] "GET /site-media/css/jquery-ui-themeroller.css HTTP/1.1" 200 25492
```

I haven't even made a call to a function that is decorated at this point. I am just getting started with decorators, so maybe I am missing something. Any help?
I believe python decorators are just syntactic sugar.

```
@foo
def bar():
    pass
```

is the same thing as

```
def bar():
    pass

bar = foo(bar)
```

As you can see, *foo* is being called even though *bar* has not been called. This is why you see the output from your decorator function. Your output should contain a single line for every function you applied your decorator to.
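That definition-time behaviour is easy to demonstrate with a toy decorator (all names here are illustrative, not the question's code): the decorator body runs once when the function is defined, while the wrapper runs on every call.

```python
calls = []

def get_booking(f):
    calls.append("decorating")      # runs once, at definition time
    def wrapper(*args, **kwargs):
        calls.append("wrapping")    # runs on every call
        return f(*args, **kwargs)
    return wrapper

@get_booking
def do_stuff(x):
    return x * 2

# At this point the decorator has already fired, before any call:
#   calls == ["decorating"]
result = do_stuff(21)
# Now calls == ["decorating", "wrapping"] and result == 42.
```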
Python logging in Django
*Question 342,434 (score 39, asked 2008-12-04) · Answer 343,575 (score 13, answered 2008-12-05) · tags: python, django, logging*
I'm developing a Django app, and I'm trying to use Python's logging module for error/trace logging. Ideally I'd like to have different loggers configured for different areas of the site. So far I've got all of this working, but one thing has me scratching my head.

I have the root logger going to sys.stderr, and I have configured another logger to write to a file. This is in my settings.py file:

```
sviewlog = logging.getLogger('MyApp.views.scans')
view_log_handler = logging.FileHandler('C:\\MyApp\\logs\\scan_log.log')
view_log_handler.setLevel(logging.INFO)
view_log_handler.setFormatter(logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s'))
sviewlog.addHandler(view_log_handler)
```

Seems pretty simple. Here's the problem, though: whatever I write to the sviewlog gets written to the log file twice. The root logger only prints it once. It's like addHandler() is being called twice. And when I put my code through a debugger, this is exactly what I see. The code in settings.py is getting executed twice, so two FileHandlers are created and added to the same logger instance. But why? And how do I get around this?

Can anyone tell me what's going on here? I've tried moving the sviewlog logger/handler instantiation code to the file where it's used (since that actually seems like the appropriate place to me), but I have the same problem there. Most of the examples I've seen online use only the root logger, and I'd prefer to have multiple loggers.
Difficult to comment on your specific case. If settings.py is executed twice, then it's normal that you get two lines for every log sent.

The way we set it up in our projects is to have one module dedicated to logging. That module has a "module singleton" pattern, so that we only execute the interesting code once. It looks like this:

```
def init_logging():
    stdoutHandler = logging.StreamHandler(sys.stdout)
    stdoutHandler.setLevel(DEBUG)
    stdoutHandler.setFormatter(logging.Formatter(LOG_FORMAT_WITH_TIME))
    logging.getLogger(LOG_AREA1).addHandler(stdoutHandler)

logInitDone = False
if not logInitDone:
    logInitDone = True
    init_logging()
```

Importing log.py the first time will configure the logging correctly.
Python logging in Django
*Question 342,434 (score 39, asked 2008-12-04) · Answer 345,669 (score 28, answered 2008-12-06) · tags: python, django, logging*
I'm developing a Django app, and I'm trying to use Python's logging module for error/trace logging. Ideally I'd like to have different loggers configured for different areas of the site. So far I've got all of this working, but one thing has me scratching my head.

I have the root logger going to sys.stderr, and I have configured another logger to write to a file. This is in my settings.py file:

```
sviewlog = logging.getLogger('MyApp.views.scans')
view_log_handler = logging.FileHandler('C:\\MyApp\\logs\\scan_log.log')
view_log_handler.setLevel(logging.INFO)
view_log_handler.setFormatter(logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s'))
sviewlog.addHandler(view_log_handler)
```

Seems pretty simple. Here's the problem, though: whatever I write to the sviewlog gets written to the log file twice. The root logger only prints it once. It's like addHandler() is being called twice. And when I put my code through a debugger, this is exactly what I see. The code in settings.py is getting executed twice, so two FileHandlers are created and added to the same logger instance. But why? And how do I get around this?

Can anyone tell me what's going on here? I've tried moving the sviewlog logger/handler instantiation code to the file where it's used (since that actually seems like the appropriate place to me), but I have the same problem there. Most of the examples I've seen online use only the root logger, and I'd prefer to have multiple loggers.
Allow me to answer my own question. The underlying problem here is that settings.py gets imported twice, or maybe even more (see [here](http://www.mail-archive.com/[email protected]/msg39059.html)). (I still don't understand why this is. Maybe some Django expert could explain that to me.) This seems to be true of some other modules as well.

At this point I don't think it's wise to make assumptions about how many times settings.py will be imported. For that matter, such assumptions aren't safe in general. I've had this code in places other than settings.py, and the results are similar.

You have to code around this. That is, you have to check your logger for existing handlers before adding additional handlers to it. This is a bit ugly because it's perfectly reasonable to have multiple handlers -- even of the same type -- attached to one logger.

There are a few solutions to dealing with this. One is to check the handlers property of your logger object. If you only want one handler and the handlers list is non-empty, then don't add it. Personally I don't love this solution, because it gets messy with more handlers.

I prefer something like this (thanks to Thomas Guettler):

```
# file logconfig.py
if not hasattr(logging, "set_up_done"):
    logging.set_up_done = False

def set_up(myhome):
    if logging.set_up_done:
        return
    # set up your logging here
    # ...
    logging.set_up_done = True
```

I must say, I wish the fact that Django imports settings.py multiple times were better documented. And I would imagine that my configuration is somehow causing this multiple import, but I'm having trouble finding out what is causing the problem and why. Maybe I just couldn't find it in their documents, but I would think that's the sort of thing you need to warn your users about.
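The "check the handlers property" option described above can be sketched like this (the function and logger names are illustrative, not from the question's project):

```python
import logging

def get_file_logger(name, path):
    """Attach a FileHandler only if this logger doesn't already have one,
    so repeated imports don't stack up duplicate handlers."""
    logger = logging.getLogger(name)
    if not any(isinstance(h, logging.FileHandler) for h in logger.handlers):
        logger.addHandler(logging.FileHandler(path))
    return logger
```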
Python logging in Django
*Question 342,434 (score 39, asked 2008-12-04) · Answer 3,983,086 (score 23, answered 2010-10-20) · tags: python, django, logging*
I'm developing a Django app, and I'm trying to use Python's logging module for error/trace logging. Ideally I'd like to have different loggers configured for different areas of the site. So far I've got all of this working, but one thing has me scratching my head.

I have the root logger going to sys.stderr, and I have configured another logger to write to a file. This is in my settings.py file:

```
sviewlog = logging.getLogger('MyApp.views.scans')
view_log_handler = logging.FileHandler('C:\\MyApp\\logs\\scan_log.log')
view_log_handler.setLevel(logging.INFO)
view_log_handler.setFormatter(logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s'))
sviewlog.addHandler(view_log_handler)
```

Seems pretty simple. Here's the problem, though: whatever I write to the sviewlog gets written to the log file twice. The root logger only prints it once. It's like addHandler() is being called twice. And when I put my code through a debugger, this is exactly what I see. The code in settings.py is getting executed twice, so two FileHandlers are created and added to the same logger instance. But why? And how do I get around this?

Can anyone tell me what's going on here? I've tried moving the sviewlog logger/handler instantiation code to the file where it's used (since that actually seems like the appropriate place to me), but I have the same problem there. Most of the examples I've seen online use only the root logger, and I'd prefer to have multiple loggers.
As of version 1.3, Django uses standard python logging, configured with the `LOGGING` setting (documented here: [1.3](http://docs.djangoproject.com/en/1.3/ref/settings/#std%3asetting-LOGGING), [dev](http://docs.djangoproject.com/en/dev/ref/settings/#std%3asetting-LOGGING)). Django logging reference: [1.3](http://docs.djangoproject.com/en/1.3/topics/logging/), [dev](http://docs.djangoproject.com/en/dev/topics/logging/).
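The `LOGGING` setting is a standard `dictConfig`-style dictionary. A minimal sketch of its shape (the handler and logger names here are illustrative, not Django defaults):

```python
# settings.py -- a minimal sketch of the dictConfig-style LOGGING setting.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        # A single console handler; real projects usually add formatters too.
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        # One logger per app area, as the question asked for.
        'myapp': {'handlers': ['console'], 'level': 'INFO'},
    },
}
```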
Python logging in Django
*Question 342,434 (score 39, asked 2008-12-04) · Answer 5,889,904 (score 9, answered 2011-05-04) · tags: python, django, logging*
I'm developing a Django app, and I'm trying to use Python's logging module for error/trace logging. Ideally I'd like to have different loggers configured for different areas of the site. So far I've got all of this working, but one thing has me scratching my head.

I have the root logger going to sys.stderr, and I have configured another logger to write to a file. This is in my settings.py file:

```
sviewlog = logging.getLogger('MyApp.views.scans')
view_log_handler = logging.FileHandler('C:\\MyApp\\logs\\scan_log.log')
view_log_handler.setLevel(logging.INFO)
view_log_handler.setFormatter(logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s'))
sviewlog.addHandler(view_log_handler)
```

Seems pretty simple. Here's the problem, though: whatever I write to the sviewlog gets written to the log file twice. The root logger only prints it once. It's like addHandler() is being called twice. And when I put my code through a debugger, this is exactly what I see. The code in settings.py is getting executed twice, so two FileHandlers are created and added to the same logger instance. But why? And how do I get around this?

Can anyone tell me what's going on here? I've tried moving the sviewlog logger/handler instantiation code to the file where it's used (since that actually seems like the appropriate place to me), but I have the same problem there. Most of the examples I've seen online use only the root logger, and I'd prefer to have multiple loggers.
Reviving an old thread, but I was experiencing duplicate messages while using Django 1.3 Python logging with the [dictConfig format](http://docs.python.org/library/logging.config.html#configuration-dictionary-schema).

The `disable_existing_loggers` gets rid of the duplicate handler/logging problem with multiple settings.py loads, but you can still get duplicate log messages if you don't specify the `propagate` boolean appropriately on the specific `logger`. Namely, make sure you set `propagate=False` for child loggers. E.g.,

```
'loggers': {
    'django': {
        'handlers': ['null'],
        'propagate': True,
        'level': 'INFO',
    },
    'django.request': {
        'handlers': ['console'],
        'level': 'ERROR',
        'propagate': False,
    },
    'project': {
        'handlers': ['console', 'project-log-file'],
        'level': 'DEBUG',
        'propagate': True,
    },
    'project.customapp': {
        'handlers': ['console', 'customapp-log-file'],
        'level': 'DEBUG',
        'propagate': False,
    },
}
```

Here, `project.customapp` sets `propagate=False` so that it won't be caught by the `project` logger as well. The [Django logging docs](http://docs.djangoproject.com/en/dev/topics/logging/) are excellent, as always.
Best way to import version-specific python modules
*Question 342,437 (score 13, asked 2008-12-04) · Answer 342,484 (score 27, answered 2008-12-04) · tags: python, migration, module, code-migration*
Which method makes the most sense for importing a module in python that is version specific? My use case is that I'm writing code that will be deployed into a python 2.3 environment and in a few months be upgraded to python 2.5.

This:

```
if sys.version_info[:2] >= (2, 5):
    from string import Template
else:
    from our.compat.string import Template
```

or this:

```
try:
    from string import Template
except ImportError:
    from our.compat.string import Template
```

I know that either case is equally correct and works correctly, but which one is preferable?
Always the second way - you never know what different Python installations will have installed. `Template` is a specific case where it matters less, but when you test for the capability instead of the versioning you're always more robust.

That's how I make [Testoob](http://www.testoob.org) support Python 2.2 - 2.6: I try to import a module in different ways until it works. It's also relevant to 3rd-party libraries. Here's an extreme case - supporting different options for ElementTree to appear:

```
try:
    import elementtree.ElementTree as ET
except ImportError:
    try:
        import cElementTree as ET
    except ImportError:
        try:
            import lxml.etree as ET
        except ImportError:
            import xml.etree.ElementTree as ET  # Python 2.5 and up
```
How should I learn to use the Windows API with Python?
*Question 342,729 (score 17, asked 2008-12-05) · Answer 342,740 (score 28, answered 2008-12-05) · tags: python, winapi*
I have very little experience building software for Windows, and zero experience using the Windows API, but I'm reasonably familiar with Python. So, how should I go about learning to use the Windows API with Python?
Honestly, no. The Windows API is an 800 pound monster covered with hair. [Charlie Petzold's 15 pound book](http://www.charlespetzold.com/faq.html) was the canonical reference once upon a time. That said, the [Python for Windows](http://python.net/crew/mhammond/win32/) folks have some good material. Microsoft has the [whole API online](http://msdn.microsoft.com/en-us/library/aa383749(VS.85).aspx), including some sample code and such. And the [Wikipedia article](http://en.wikipedia.org/wiki/Win32) is a good overview.
How should I learn to use the Windows API with Python?
*Question 342,729 (score 17, asked 2008-12-05) · Answer 343,804 (score 7, answered 2008-12-05) · tags: python, winapi*
I have very little experience building software for Windows, and zero experience using the Windows API, but I'm reasonably familiar with Python. So, how should I go about learning to use the Windows API with Python?
Avoid tutorials (written by newbies, for newbies). Read the books by Petzold, Richter, Pietrek, and Russinovich, plus the Advanced Win32 API newsgroup: news://comp.os.ms-windows.programmer.win32
How should I learn to use the Windows API with Python?
*Question 342,729 (score 17, asked 2008-12-05) · Answer 350,143 (score 18, answered 2008-12-08) · tags: python, winapi*
I have very little experience building software for Windows, and zero experience using the Windows API, but I'm reasonably familiar with Python. So, how should I go about learning to use the Windows API with Python?
About 4 years ago I set out to truly understand the Windows API. I was coding in C# at the time, but I felt like the framework was abstracting me too much from the API (which it was). So I switched to Delphi (C++ or C would have also been good choices). In my opinion, it is important that you start working in a language that creates native code and talks directly to the Windows API and makes you care about buffers, pointers, structures, and real constructs that Windows uses directly. C# is a great language, but not the best choice for learning the Windows API. Next, buy Mark Russinovich's book "Windows Internals" [Amazon link](http://amzn.to/xdu4Br). This is the 5th edition. The 6th edition is coming out April 2012 and adds info about Server 2008 R2 and Windows 7. ## And now, for the most important (and best) resource for learning Win32 API: Mark Russinovich's [Windows Operating Systems Internals Curriculum](http://www.microsoft.com/resources/sharedsource/windowsacademic/curriculumresourcekit.mspx) which is offered for free. It is designed to be used by an instructor to teach students. I went through it and it is awesome. Full of examples, history, and detailed explanations. In my opinion, this is an ideal way to learn the Windows API. Mark Russinovich is a Microsoft Technical Fellow (there are only 14 at MS including the creator of C#). He used to own Winternals until he sold it to MS, he has a PhD in Computer Engineering from Carnegie Mellon, he has been a frequent presenter at Microsoft conferences (even before he worked for them), and he is crazy smart. His presentations are one of the primary reasons I attend Microsoft TechEd every year.
What happened to the python bindings for CGAL?
343,210
7
2008-12-05T08:36:49Z
13,081,921
10
2012-10-26T06:32:58Z
[ "python", "geometry", "polygon", "computational-geometry", "cgal" ]
I found the [Computational Geometry Algorithms Library](http://www.cgal.org/) in my search for an algorithm to decompose a concave polygon into the minimum number of convex components. Links off the site and numerous google results indicate there are python bindings for it, which would be really handy, but all the links are dead! What happened to it? Where can I get it now?
A rewrite of the CGAL-Python bindings has been done as part of the cgal-bindings project. Check it out : <http://code.google.com/p/cgal-bindings/>
What can Pygame do in terms of graphics that wxPython can't?
343,505
16
2008-12-05T11:03:36Z
344,002
19
2008-12-05T14:35:58Z
[ "python", "graphics", "wxpython", "pygame" ]
I want to develop a very simple 2D game in Python. Pygame is the most popular library for game development in Python, but I'm already quite familiar with wxPython and feel comfortable using it. I've even written a [Tetris clone](http://eli.thegreenplace.net/2008/05/31/a-tetris-clone-in-python-wxpython/) in it, and it was pretty smooth. I wonder, what does Pygame offer in terms of graphics (leaving sound aside, for a moment) that wxPython can't do ? Is it somehow simpler/faster to do graphics in Pygame than in wxPython ? Is it even more cross-platform ? It looks like I'm missing something here, but I don't know what.
Well, in theory there is nothing you can do with Pygame that you can't with wxPython. The point is not what but how. In my opinion, it's easier to write a game with PyGame because: * It's faster. Pygame is based on SDL, a C library specifically designed for games that has been developed with speed in mind. When you develop games, you need speed. * It is a game library, not a general purpose canvas. It has classes and functions useful for sprites, transformations, input handling, drawing, collision detection. It also implements algorithms and techniques often used in games like dirty rectangles, page flipping, etc. * There are [thousands of games](http://pygame.org/tags/) and examples made with it. It will be easier for you to discover how to do any trick. * There are a lot of [libraries](http://pygame.org/tags/libraries) with effects and utilities you could reuse. You want an isometric game, there is a library; you want a physics engine, there is a library; you want some cool visual effect, there is a library. * [PyWeek](http://www.pyweek.org/). :) This is to make the development of your game even more fun! For some very simple games like Tetris, the difference won't be too much, but if you want to develop a fairly complex game, believe me, you will want something like PyGame.
What can Pygame do in terms of graphics that wxPython can't?
343,505
16
2008-12-05T11:03:36Z
344,045
13
2008-12-05T14:53:07Z
[ "python", "graphics", "wxpython", "pygame" ]
I want to develop a very simple 2D game in Python. Pygame is the most popular library for game development in Python, but I'm already quite familiar with wxPython and feel comfortable using it. I've even written a [Tetris clone](http://eli.thegreenplace.net/2008/05/31/a-tetris-clone-in-python-wxpython/) in it, and it was pretty smooth. I wonder, what does Pygame offer in terms of graphics (leaving sound aside, for a moment) that wxPython can't do ? Is it somehow simpler/faster to do graphics in Pygame than in wxPython ? Is it even more cross-platform ? It looks like I'm missing something here, but I don't know what.
wxPython is based on [wxWidgets](http://wxwidgets.org/) which is a GUI-oriented toolkit. It has the advantage of using the styles and decorations provided by the system it runs on and thus it is very easy to write portable applications that integrate nicely into the look and feel of whatever you're running. You want a checkbox? Use wxCheckBox and wxPython will handle looks and interaction. pyGame, on the other hand, is oriented towards game development and thus brings you closer to the hardware in ways wxPython doesn't (and doesn't need to, since it calls the OS for drawing most of its controls). pyGame has lots of game related stuff like collision detection, fine-grained control of surfaces and layers or flipping display buffers at a time of your choosing. That said, graphics-wise you can probably always find a way to do what you want with both toolkits. However, when speed counts or you wish to implement graphically more taxing game ideas than Tetris, you're probably better off with pyGame. If you want to use lots of GUI elements and don't need the fancy graphics and sound functions, you're better off with wxPython. Portability is not an issue. Both are available for the big three (Linux, OSX, Windows). It's more a question of what kind of special capabilities you need, really.
How to convert from UTM to LatLng in python or Javascript
343,865
23
2008-12-05T13:42:34Z
343,961
10
2008-12-05T14:21:28Z
[ "javascript", "python", "gis", "arcgis-js-api", "proj4js" ]
I have a bunch of files with coordinates in UTM form. For each coordinate I have easting, northing and zone. I need to convert this to LatLng for use with Google Map API to show the information in a map. I have found some online calculators that does this, but no actual code or libraries. <http://trac.osgeo.org/proj4js/> is a projection library for Javascript, but looking at the demo it doesn't include UTM projection. I am still pretty fresh to the entire GIS domain, so what I want is something ala: ``` (lat,lng) = transform(easting, northing, zone) ```
What I found is the following site: <http://home.hiwaay.net/~taylorc/toolbox/geography/geoutm.html> It has a javascript converter, you should check the algorithm there. From the page: > Programmers: The JavaScript source code in this document may be copied and reused without restriction.
How to convert from UTM to LatLng in python or Javascript
343,865
23
2008-12-05T13:42:34Z
344,060
8
2008-12-05T14:58:52Z
[ "javascript", "python", "gis", "arcgis-js-api", "proj4js" ]
I have a bunch of files with coordinates in UTM form. For each coordinate I have easting, northing and zone. I need to convert this to LatLng for use with Google Map API to show the information in a map. I have found some online calculators that does this, but no actual code or libraries. <http://trac.osgeo.org/proj4js/> is a projection library for Javascript, but looking at the demo it doesn't include UTM projection. I am still pretty fresh to the entire GIS domain, so what I want is something ala: ``` (lat,lng) = transform(easting, northing, zone) ```
According to this page, UTM is supported by proj4js. <http://trac.osgeo.org/proj4js/wiki/UserGuide#Supportedprojectionclasses> You may also want to take a look at [GDAL](http://gdal.org). The gdal library has excellent python support, though it may be a bit overkill if you're only doing projection conversion.
How to convert from UTM to LatLng in python or Javascript
343,865
23
2008-12-05T13:42:34Z
344,083
31
2008-12-05T15:04:59Z
[ "javascript", "python", "gis", "arcgis-js-api", "proj4js" ]
I have a bunch of files with coordinates in UTM form. For each coordinate I have easting, northing and zone. I need to convert this to LatLng for use with Google Map API to show the information in a map. I have found some online calculators that does this, but no actual code or libraries. <http://trac.osgeo.org/proj4js/> is a projection library for Javascript, but looking at the demo it doesn't include UTM projection. I am still pretty fresh to the entire GIS domain, so what I want is something ala: ``` (lat,lng) = transform(easting, northing, zone) ```
I ended up finding java code from IBM that solved it: <http://www.ibm.com/developerworks/java/library/j-coordconvert/index.html> Just for reference, here is my python implementation of the method I needed: ``` import math def utmToLatLng(zone, easting, northing, northernHemisphere=True): if not northernHemisphere: northing = 10000000 - northing a = 6378137 e = 0.081819191 e1sq = 0.006739497 k0 = 0.9996 arc = northing / k0 mu = arc / (a * (1 - math.pow(e, 2) / 4.0 - 3 * math.pow(e, 4) / 64.0 - 5 * math.pow(e, 6) / 256.0)) ei = (1 - math.pow((1 - e * e), (1 / 2.0))) / (1 + math.pow((1 - e * e), (1 / 2.0))) ca = 3 * ei / 2 - 27 * math.pow(ei, 3) / 32.0 cb = 21 * math.pow(ei, 2) / 16 - 55 * math.pow(ei, 4) / 32 cc = 151 * math.pow(ei, 3) / 96 cd = 1097 * math.pow(ei, 4) / 512 phi1 = mu + ca * math.sin(2 * mu) + cb * math.sin(4 * mu) + cc * math.sin(6 * mu) + cd * math.sin(8 * mu) n0 = a / math.pow((1 - math.pow((e * math.sin(phi1)), 2)), (1 / 2.0)) r0 = a * (1 - e * e) / math.pow((1 - math.pow((e * math.sin(phi1)), 2)), (3 / 2.0)) fact1 = n0 * math.tan(phi1) / r0 _a1 = 500000 - easting dd0 = _a1 / (n0 * k0) fact2 = dd0 * dd0 / 2 t0 = math.pow(math.tan(phi1), 2) Q0 = e1sq * math.pow(math.cos(phi1), 2) fact3 = (5 + 3 * t0 + 10 * Q0 - 4 * Q0 * Q0 - 9 * e1sq) * math.pow(dd0, 4) / 24 fact4 = (61 + 90 * t0 + 298 * Q0 + 45 * t0 * t0 - 252 * e1sq - 3 * Q0 * Q0) * math.pow(dd0, 6) / 720 lof1 = _a1 / (n0 * k0) lof2 = (1 + 2 * t0 + Q0) * math.pow(dd0, 3) / 6.0 lof3 = (5 - 2 * Q0 + 28 * t0 - 3 * math.pow(Q0, 2) + 8 * e1sq + 24 * math.pow(t0, 2)) * math.pow(dd0, 5) / 120 _a2 = (lof1 - lof2 + lof3) / math.cos(phi1) _a3 = _a2 * 180 / math.pi latitude = 180 * (phi1 - fact1 * (fact2 + fact3 + fact4)) / math.pi if not northernHemisphere: latitude = -latitude longitude = ((zone > 0) and (6 * zone - 183.0) or 3.0) - _a3 return (latitude, longitude) ``` And here I thought it was something simple like easting\*x+zone\*y or something.
Django Admin's "view on site" points to example.com instead of my domain
344,851
22
2008-12-05T19:23:11Z
344,909
21
2008-12-05T19:46:26Z
[ "python", "django", "django-admin" ]
I added a `get_absolute_url` function to one of my models. ``` def get_absolute_url(self): return '/foo/bar' ``` The admin site picks it up and adds a "view on site" link to the detail page for that object (when I put a real URL there instead of "/foo/bar"). The problem is instead of going to `http://localhost:8000/foo/bar`, it goes to `http://example.com/foo/bar`. What am I doing wrong?
You have to change [default site](http://docs.djangoproject.com/en/dev/ref/contrib/sites/#ref-contrib-sites) domain value.
Is this idiom pythonic? (someBool and "True Result" or "False Result")
345,745
7
2008-12-06T01:57:04Z
345,773
17
2008-12-06T02:14:46Z
[ "coding-style", "python" ]
I just came across this idiom in some open-source Python, and I choked on my drink. Rather than: ``` if isUp: return "Up" else: return "Down" ``` or even: ``` return "Up" if isUp else "Down" ``` the code read: ``` return isUp and "Up" or "Down" ``` I can see this is the same result, but is this a typical idiom in Python? If so, is it some performance hack that runs fast? Or is it just a once-off that needs a code review?
The "a and b or c" idiom was the canonical way to express the ternary conditional in Python, before [PEP 308](http://www.python.org/dev/peps/pep-0308/) was written and implemented. This idiom fails if the "b" answer is itself false; to support the general case, you could write ``` return (a and [b] or [c])[0] ``` An alternative way of spelling it was ``` return (b,c)[not a] ``` which, with the introduction of the bool type, could be rewritten as ``` return (c,b)[bool(a)] ``` (in case it isn't clear: the conversion to bool, and the not operator, is necessary if a is not known to be bool already) Today, the conditional expression syntax should be used if the thing must be an expression; otherwise I recommend using the if statement.
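A quick sketch of the behaviour described above — the plain idiom, the case where it breaks, and the two workarounds (the example values are mine):

```python
def pick(a, b, c):
    """The classic pre-PEP 308 idiom: returns b when a is truthy, else c --
    but only as long as b itself is truthy."""
    return a and b or c

# Works when both branches are truthy:
assert pick(True, "Up", "Down") == "Up"
assert pick(False, "Up", "Down") == "Down"

# Fails silently when the "true" branch is falsy:
assert pick(True, "", "Down") == "Down"   # wrong! we wanted ""

# The list workaround is safe because [x] is always truthy:
assert (True and [""] or ["Down"])[0] == ""

# Tuple indexing also works, at the cost of evaluating both branches:
assert ("Down", "")[bool(True)] == ""
```

The silent wrong answer in the middle case is exactly why the and-or trick needed a code review before PEP 308 existed.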
Is this idiom pythonic? (someBool and "True Result" or "False Result")
345,745
7
2008-12-06T01:57:04Z
345,775
9
2008-12-06T02:20:46Z
[ "coding-style", "python" ]
I just came across this idiom in some open-source Python, and I choked on my drink. Rather than: ``` if isUp: return "Up" else: return "Down" ``` or even: ``` return "Up" if isUp else "Down" ``` the code read: ``` return isUp and "Up" or "Down" ``` I can see this is the same result, but is this a typical idiom in Python? If so, is it some performance hack that runs fast? Or is it just a once-off that needs a code review?
You should read [Using the and-or trick](http://www.diveintopython.net/power_of_introspection/and_or.html) (section 4.6.1) of *Dive Into Python* by Mark Pilgrim. It turns out that the and-or trick has major pitfalls you should be aware of.
Python - No handlers could be found for logger "OpenGL.error"
345,991
64
2008-12-06T05:59:43Z
346,501
156
2008-12-06T17:18:34Z
[ "python", "logging", "opengl", "wxpython", "pyopengl" ]
Okay, what is it, and why does it occur on Win2003 server but not on WinXP? It doesn't seem to affect my application at all, but I get this error message when I close the application. And it's annoying (as error messages should be). I am using pyOpenGl and wxPython to do the graphics stuff. Unfortunately, I'm a C# programmer who has taken over this Python app, and I had to learn Python to do it. I can supply code and version numbers etc, but I'm still learning the technical stuff, so any help would be appreciated. Python 2.5, wxPython and pyOpenGL
Looks like OpenGL is trying to report some error on Win2003, however you've not configured your system where to output logging info. You can add the following to the beginning of your program and you'll see details of the error in stderr. ``` import logging logging.basicConfig() ``` Checkout documentation on [logging](https://docs.python.org/2/library/logging.html) module to get more config info, conceptually it's similar to log4J.
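To make the mechanism concrete, here is a small self-contained sketch of mine (the logger name mirrors the one in the error message; the in-memory stream stands in for the stderr target that `basicConfig()` would install):

```python
import io
import logging

# Route all log records at WARNING and above to an in-memory stream,
# much as basicConfig() would route them to stderr.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))

root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.WARNING)

# A library logger such as "OpenGL.error" propagates up to the root handler:
logging.getLogger("OpenGL.error").error("something went wrong")

print(stream.getvalue().strip())
```

Without any handler configured anywhere, that same `error()` call is what produces the "No handlers could be found for logger" complaint.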
Postgres - how to return rows with 0 count for missing data?
346,132
9
2008-12-06T09:32:04Z
346,195
16
2008-12-06T11:30:40Z
[ "python", "database", "postgresql" ]
I have unevenly distributed data(wrt date) for a few years (2003-2008). I want to query data for a given set of start and end date, grouping the data by any of the supported intervals (day, week, month, quarter, year) in PostgreSQL 8.3 (<http://www.postgresql.org/docs/8.3/static/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC>). The problem is that some of the queries give results continuous over the required period, as this one: ``` select to_char(date_trunc('month',date), 'YYYY-MM-DD'),count(distinct post_id) from some_table where category_id=1 and entity_id = 77 and entity2_id = 115 and date <= '2008-12-06' and date >= '2007-12-01' group by date_trunc('month',date) order by date_trunc('month',date); to_char | count ------------+------- 2007-12-01 | 64 2008-01-01 | 31 2008-02-01 | 14 2008-03-01 | 21 2008-04-01 | 28 2008-05-01 | 44 2008-06-01 | 100 2008-07-01 | 72 2008-08-01 | 91 2008-09-01 | 92 2008-10-01 | 79 2008-11-01 | 65 (12 rows) ``` but some of them miss some intervals because there is no data present, as this one: ``` select to_char(date_trunc('month',date), 'YYYY-MM-DD'),count(distinct post_id) from some_table where category_id=1 and entity_id = 75 and entity2_id = 115 and date <= '2008-12-06' and date >= '2007-12-01' group by date_trunc('month',date) order by date_trunc('month',date); to_char | count ------------+------- 2007-12-01 | 2 2008-01-01 | 2 2008-03-01 | 1 2008-04-01 | 2 2008-06-01 | 1 2008-08-01 | 3 2008-10-01 | 2 (7 rows) ``` where the required resultset is: ``` to_char | count ------------+------- 2007-12-01 | 2 2008-01-01 | 2 2008-02-01 | 0 2008-03-01 | 1 2008-04-01 | 2 2008-05-01 | 0 2008-06-01 | 1 2008-07-01 | 0 2008-08-01 | 3 2008-09-01 | 0 2008-10-01 | 2 2008-11-01 | 0 (12 rows) ``` A count of 0 for missing entries. I have seen earlier discussions on Stack Overflow but they don't solve my problem it seems, since my grouping period is one of (day, week, month, quarter, year) and decided on runtime by the application. 
So an approach like left join with a calendar table or sequence table will not help I guess. My current solution to this is to fill in these gaps in Python (in a Turbogears App) using the calendar module. Is there a better way to do this.
You can create the list of all first days of the last year (say) with ``` select distinct date_trunc('month', (current_date - offs)) as date from generate_series(0,365,28) as offs; date ------------------------ 2007-12-01 00:00:00+01 2008-01-01 00:00:00+01 2008-02-01 00:00:00+01 2008-03-01 00:00:00+01 2008-04-01 00:00:00+02 2008-05-01 00:00:00+02 2008-06-01 00:00:00+02 2008-07-01 00:00:00+02 2008-08-01 00:00:00+02 2008-09-01 00:00:00+02 2008-10-01 00:00:00+02 2008-11-01 00:00:00+01 2008-12-01 00:00:00+01 ``` Then you can join with that series.
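If you'd rather keep filling the gaps application-side, as the question mentions doing with the calendar module, the same month series is easy to produce with the standard library alone (a sketch of mine, not part of the SQL answer):

```python
from datetime import date

def month_starts(start, end):
    """Yield the first day of every month from start's month through end's
    month, mirroring generate_series + date_trunc('month', ...) in SQL."""
    year, month = start.year, start.month
    while (year, month) <= (end.year, end.month):
        yield date(year, month, 1)
        month += 1
        if month > 12:
            month, year = 1, year + 1

series = list(month_starts(date(2007, 12, 1), date(2008, 12, 6)))
print(series[0], series[-1], len(series))
```

Loading the query results into a dict keyed by month start and looking each series entry up with `counts.get(day, 0)` then yields the zero rows.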
Postgres - how to return rows with 0 count for missing data?
346,132
9
2008-12-06T09:32:04Z
15,733,103
9
2013-03-31T18:44:22Z
[ "python", "database", "postgresql" ]
I have unevenly distributed data(wrt date) for a few years (2003-2008). I want to query data for a given set of start and end date, grouping the data by any of the supported intervals (day, week, month, quarter, year) in PostgreSQL 8.3 (<http://www.postgresql.org/docs/8.3/static/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC>). The problem is that some of the queries give results continuous over the required period, as this one: ``` select to_char(date_trunc('month',date), 'YYYY-MM-DD'),count(distinct post_id) from some_table where category_id=1 and entity_id = 77 and entity2_id = 115 and date <= '2008-12-06' and date >= '2007-12-01' group by date_trunc('month',date) order by date_trunc('month',date); to_char | count ------------+------- 2007-12-01 | 64 2008-01-01 | 31 2008-02-01 | 14 2008-03-01 | 21 2008-04-01 | 28 2008-05-01 | 44 2008-06-01 | 100 2008-07-01 | 72 2008-08-01 | 91 2008-09-01 | 92 2008-10-01 | 79 2008-11-01 | 65 (12 rows) ``` but some of them miss some intervals because there is no data present, as this one: ``` select to_char(date_trunc('month',date), 'YYYY-MM-DD'),count(distinct post_id) from some_table where category_id=1 and entity_id = 75 and entity2_id = 115 and date <= '2008-12-06' and date >= '2007-12-01' group by date_trunc('month',date) order by date_trunc('month',date); to_char | count ------------+------- 2007-12-01 | 2 2008-01-01 | 2 2008-03-01 | 1 2008-04-01 | 2 2008-06-01 | 1 2008-08-01 | 3 2008-10-01 | 2 (7 rows) ``` where the required resultset is: ``` to_char | count ------------+------- 2007-12-01 | 2 2008-01-01 | 2 2008-02-01 | 0 2008-03-01 | 1 2008-04-01 | 2 2008-05-01 | 0 2008-06-01 | 1 2008-07-01 | 0 2008-08-01 | 3 2008-09-01 | 0 2008-10-01 | 2 2008-11-01 | 0 (12 rows) ``` A count of 0 for missing entries. I have seen earlier discussions on Stack Overflow but they don't solve my problem it seems, since my grouping period is one of (day, week, month, quarter, year) and decided on runtime by the application. 
So an approach like left join with a calendar table or sequence table will not help I guess. My current solution to this is to fill in these gaps in Python (in a Turbogears App) using the calendar module. Is there a better way to do this.
This question is old. But since fellow users picked it as master for a new duplicate I am adding a proper answer. ### Proper solution ``` SELECT * FROM ( SELECT day::date FROM generate_series(timestamp '2007-12-01' , timestamp '2008-12-01' , interval '1 month') day ) d LEFT JOIN ( SELECT date_trunc('month', date_col)::date AS day , count(*) AS some_count FROM tbl WHERE date_col >= date '2007-12-01' AND date_col <= date '2008-12-06' -- AND ... more conditions GROUP BY 1 ) t USING (day) ORDER BY day; ``` * Use `LEFT JOIN`, of course. * [`generate_series()`](http://www.postgresql.org/docs/current/interactive/functions-srf.html) can produce a table of timestamps on the fly, and very fast. * It's generally faster to aggregate *before* you join. I recently provided a test case on sqlfiddle.com in this related answer: + [PostgreSQL - order by an array](http://stackoverflow.com/questions/15664373/postgresql-order-by-an-array/15674585#15674585) * Cast the `timestamp` to `date` (`::date`) for a basic format. For more use [`to_char()`](http://www.postgresql.org/docs/current/interactive/functions-formatting.html). * `GROUP BY 1` is syntax shorthand to reference the first output column. Could be `GROUP BY day` as well, but that might conflict with an existing column of the same name. Or `GROUP BY date_trunc('month', date_col)::date` but that's too long for my taste. * Works with the available interval arguments for [`date_trunc()`](http://www.postgresql.org/docs/current/interactive/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC). * For a **more generic solution or arbitrary time intervals** consider this closely related answer: + [Best way to count records by arbitrary time intervals in Rails+Postgres](http://stackoverflow.com/questions/15576794/best-way-to-count-records-by-arbitrary-time-intervals-in-railspostgres/15577413#15577413)
Read file object as string in python
346,230
26
2008-12-06T12:41:36Z
346,237
73
2008-12-06T12:47:18Z
[ "python", "file", "urllib2" ]
I'm using `urllib2` to read in a page. I need to do a quick regex on the source and pull out a few variables but `urllib2` presents as a file object rather than a string. I'm new to python so I'm struggling to see how I use a file object to do this. Is there a quick way to convert this into a string?
You can use Python in interactive mode to search for solutions. If `f` is your object, you can enter `dir(f)` to see all methods and attributes. There's one called `read`. Enter `help(f.read)` and it tells you that `f.read()` is the way to retrieve a string from a file object.
Read file object as string in python
346,230
26
2008-12-06T12:41:36Z
346,255
12
2008-12-06T13:07:39Z
[ "python", "file", "urllib2" ]
I'm using `urllib2` to read in a page. I need to do a quick regex on the source and pull out a few variables but `urllib2` presents as a file object rather than a string. I'm new to python so I'm struggling to see how I use a file object to do this. Is there a quick way to convert this into a string?
From the doc [file.read()](http://docs.python.org/library/stdtypes.html#file.read) (my emphasis): > file.read([size]) > > Read at most size bytes from the file (less if the read hits EOF before obtaining size bytes). If the size argument is negative or omitted, read all data until EOF is reached. **The bytes are returned as a string object**. An empty string is returned when EOF is encountered immediately. (For certain files, like ttys, it makes sense to continue reading after an EOF is hit.) Note that this method may call the underlying C function fread more than once in an effort to acquire as close to size bytes as possible. Also note that when in non-blocking mode, less data than was requested may be returned, even if no size parameter was given. Be aware that a regexp search on a large string object may not be efficient, and consider doing the search line-by-line, using [file.next()](http://docs.python.org/library/stdtypes.html#file.next) (a file object is its own iterator).
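To make this concrete, here is a small sketch of mine — `io.StringIO` stands in for the response object, which behaves like any other file-like object (the original question used `urllib2`; the page content and regex are invented for illustration):

```python
import io
import re

# Any file-like object works the same way; StringIO stands in here for
# the response object urlopen() returns.
page = io.StringIO("<html><title>Example 42</title></html>")

source = page.read()          # read() returns the whole content as one string
match = re.search(r"<title>(.*?)</title>", source)
print(match.group(1))
```

For very large pages, iterating line by line as the answer suggests avoids holding the whole string in memory.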
Format numbers in django templates
346,467
83
2008-12-06T16:46:49Z
346,633
11
2008-12-06T19:21:33Z
[ "python", "django" ]
I'm trying to format numbers. Examples: ``` 1 => 1 12 => 12 123 => 123 1234 => 1,234 12345 => 12,345 ``` It strikes as a fairly common thing to do but I can't figure out which filter I'm supposed to use. Edit: If you've a generic Python way to do this, I'm happy adding a formatted field in my model.
If you don't want to get involved with locales, here is a function that formats numbers: ``` def int_format(value, decimal_points=3, separator=u'.'): value = str(value) if len(value) <= decimal_points: return value # say here we have value = '12345' and the default params above parts = [] while value: parts.append(value[-decimal_points:]) value = value[:-decimal_points] # now we should have parts = ['345', '12'] parts.reverse() # and the return value should be u'12.345' return separator.join(parts) ``` Creating a [custom template filter](http://docs.djangoproject.com/en/dev/howto/custom-template-tags/#writing-custom-template-filters) from this function is trivial.
Format numbers in django templates
346,467
83
2008-12-06T16:46:49Z
347,560
162
2008-12-07T13:10:22Z
[ "python", "django" ]
I'm trying to format numbers. Examples: ``` 1 => 1 12 => 12 123 => 123 1234 => 1,234 12345 => 12,345 ``` It strikes as a fairly common thing to do but I can't figure out which filter I'm supposed to use. Edit: If you've a generic Python way to do this, I'm happy adding a formatted field in my model.
Django's contributed [humanize](http://docs.djangoproject.com/en/dev/ref/contrib/humanize/#ref-contrib-humanize) application does this: ``` {% load humanize %} {{ my_num|intcomma }} ``` Be sure to add `'django.contrib.humanize'` to your `INSTALLED_APPS` list in the `settings.py` file.
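For the "generic Python way" the question also asks about: since Python 2.7, the format specification mini-language can insert the separators with no Django involved at all (a side note of mine, not part of the humanize answer):

```python
# The "," option in the format mini-language groups digits by thousands:
assert "{:,}".format(1) == "1"
assert "{:,}".format(1234) == "1,234"
assert "{:,}".format(12345) == "12,345"

# It works for floats too, keeping the decimal part intact:
assert "{:,.2f}".format(1234567.891) == "1,234,567.89"
```

The separator is always a comma here; `intcomma` (or locale-aware formatting) is still the way to go if you need localized output.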
Format numbers in django templates
346,467
83
2008-12-06T16:46:49Z
2,180,209
47
2010-02-01T21:26:44Z
[ "python", "django" ]
I'm trying to format numbers. Examples: ``` 1 => 1 12 => 12 123 => 123 1234 => 1,234 12345 => 12,345 ``` It strikes as a fairly common thing to do but I can't figure out which filter I'm supposed to use. Edit: If you've a generic Python way to do this, I'm happy adding a formatted field in my model.
Regarding Ned Batchelder's solution, here it is with 2 decimal points and a dollar sign. ``` from django.contrib.humanize.templatetags.humanize import intcomma def currency(dollars): dollars = round(float(dollars), 2) return "$%s%s" % (intcomma(int(dollars)), ("%0.2f" % dollars)[-3:]) ``` Then you can ``` {{my_dollars | currency}} ```
Format numbers in django templates
346,467
83
2008-12-06T16:46:49Z
10,859,264
54
2012-06-02T02:02:31Z
[ "python", "django" ]
I'm trying to format numbers. Examples: ``` 1 => 1 12 => 12 123 => 123 1234 => 1,234 12345 => 12,345 ``` It strikes as a fairly common thing to do but I can't figure out which filter I'm supposed to use. Edit: If you've a generic Python way to do this, I'm happy adding a formatted field in my model.
Building on other answers, to extend this to floats, you can do: ``` {% load humanize %} {{ floatvalue|floatformat:2|intcomma }} ```
Is it more efficient to use "import <module>" or "from <module> import <func>"?
346,723
3
2008-12-06T20:25:05Z
346,753
10
2008-12-06T20:45:37Z
[ "python", "import" ]
Say I only needed to use findall() from the re module, is it more efficient to do: ``` from re import findall ``` or ``` import re ``` Is there actually any difference in speed/memory usage etc?
There is no difference in the import itself; however, there is a small difference on access. When you access the function as ``` re.findall() ``` Python first needs to find the module in the global scope and then look up findall in the module's dict. This may make a difference if you are calling it inside a loop thousands of times.
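The per-call lookup cost described above can also be sidestepped by binding the attribute to a local name outside the loop — a common micro-optimisation (sketch of mine; the function names are invented and actual timings vary):

```python
import re

def count_digit_runs_slow(lines):
    total = 0
    for line in lines:
        # each iteration: look up "re" in globals, then "findall" on the module
        total += len(re.findall(r"\d+", line))
    return total

def count_digit_runs_fast(lines):
    findall = re.findall  # one lookup, bound to a fast local variable
    total = 0
    for line in lines:
        total += len(findall(r"\d+", line))
    return total

data = ["fg12f 1414", "21af 144"] * 1000
assert count_digit_runs_slow(data) == count_digit_runs_fast(data)
```

This gives you the access speed of `from re import findall` while keeping the plain `import re` at module level.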
Is it more efficient to use "import <module>" or "from <module> import <func>"?
346,723
3
2008-12-06T20:25:05Z
346,967
8
2008-12-06T23:20:18Z
[ "python", "import" ]
Say I only needed to use findall() from the re module, is it more efficient to do: ``` from re import findall ``` or ``` import re ``` Is there actually any difference in speed/memory usage etc?
When in doubt, time it: ``` from timeit import Timer print Timer("""re.findall(r"\d+", "fg12f 1414 21af 144")""", "import re").timeit() print Timer("""findall(r"\d+", "fg12f 1414 21af 144")""", "from re import findall").timeit() ``` I get the following results, using the minimum of 5 repetitions of 10,000,000 calls: ``` re.findall(): 123.444600105 findall(): 122.056155205 ``` There appears to be a very slight usage advantage to using `findall()` directly, rather than `re.findall()`. However, the actual import statements differ in their speed by a significant amount. On my computer, I get the following results: ``` >>> Timer("import re").timeit() 2.39156508446 >>> Timer("from re import findall").timeit() 4.41387701035 ``` So `import re` appears to be approximately twice as fast to execute. Presumably, though, execution of the imported code is your bottleneck, rather than the actual import.
Django.contrib.flatpages without models
346,840
6
2008-12-06T21:48:55Z
346,877
9
2008-12-06T22:08:59Z
[ "python", "django", "templates", "django-flatpages" ]
I have some flatpages with empty `content` field and their content inside the template (given with `template_name` field). ### Why I am using `django.contrib.flatpages` * It allows me to serve (mostly) static pages with minimal URL configuration. * I don't have to write views for each of them. ### Why I don't need the model `FlatPage` * I leave the content empty and just supply a template path. Therefore I can take advantage of having the source in a file; + I can edit the source directly from the file system, without the help of a server (such as admin). + I can take advantage of syntax highlightning and other editor features. * With the model I have to maintain fixtures for flatpages. + So the data for the same entity is in two seperate places. + If I move the content inside the fixture it'll be more difficult to edit. - Even if fixture maintenance was a non-issue I'd still need to dump and load these fixtures again and again during development. ### What I am looking for Basically; getting rid of `FlatPage` model while maintaining `contrib.flatpages` functionality. I don't have a clear idea how this should be solved. If there's a clean way of modifying (like `add_to_class`) `FlatPages` to get the information somewhere other than the database I'd prefer that. Maybe the metadata can be inserted to the templates and then a special manager that reads this data would replace the default manager of `FlatPages`. If I don't prefer manual editing over admin functionality for flatpages, how can take the database out of the equation?
Using the [`direct_to_template`](https://docs.djangoproject.com/en/1.4/ref/generic-views/#django-views-generic-simple-direct-to-template) generic view would be a lot simpler. You could use the passed in parameters on one view to specify the actual template in urls.py, if you don't want to add an entry for each page: ``` r'^foo/(?P<template_name>.+)/$','direct_to_template', {'template': 'foo_index.html'}), ``` Then import the template in your `foo_index.html`: ``` {% include template_name %} ```
Where do I go from here -- regarding programming?
347,054
3
2008-12-07T00:49:52Z
347,065
7
2008-12-07T00:58:04Z
[ "php", "asp.net", "python", "linux" ]
I seem to be in a never ending tail spin of Linux, or not, Windows or not. Web programming or system programming. Python or PHP. I'am self teaching myself programming. But it seems I keep being torn about which way to go. Unfortunately it is always seemingly good reasons to get side tracked. You know the whole open source or proprietary thing. Lately I have decided after a year that Linux just doesn't cut it for me and it mostly stems from me wanting to watch videos on Channel 9 etc, and the clunkiness that is Linux. So that lead me to, "Should I learn ASP.NET, since I am more so deciding Windows IS a "necessary" evil. I hope this made sense. The reason I settled in on Web Development as my course to learning programming is because I actually have a task to implement rather then aimlessly reading reference books etc. Does anyone have any advice at what they may have done to stay focused and not get lead down every tangent or idea.
You will only have a first language for a little while. Pick any direction that interests you, and follow it. There is no way around the introductory "drink from the firehose" experience. Keep early projects simple and tangible. Build useful things and the motivation will be there. Web / desktop / mobile / etc., it's all good. Find the one that gets you thinking about code when you're not coding, and you'll know you're going in the right direction.
How do I concisely implement multiple similar unit tests in the Python unittest framework?
347,109
14
2008-12-07T01:59:49Z
347,607
11
2008-12-07T14:07:12Z
[ "python", "unit-testing" ]
I'm implementing unit tests for a family of functions that all share a number of invariants. For example, calling the function with two matrices produces a matrix of known shape. I would like to write unit tests to test the entire family of functions for this property, without having to write an individual test case for each function (particularly since more functions might be added later). One way to do this would be to iterate over a list of these functions: ``` import unittest import numpy from somewhere import the_functions from somewhere.else import TheClass class Test_the_functions(unittest.TestCase): def setUp(self): self.matrix1 = numpy.ones((5,10)) self.matrix2 = numpy.identity(5) def testOutputShape(self): """Output of functions be of a certain shape""" for function in the_functions: output = function(self.matrix1, self.matrix2) fail_message = "%s produces output of the wrong shape" % str(function) self.assertEqual(self.matrix1.shape, output.shape, fail_message) if __name__ == "__main__": unittest.main() ``` I got the idea for this from [Dive Into Python](http://www.diveintopython.net/unit_testing/romantest.html). There, it's not a list of functions being tested but a list of known input-output pairs. The problem with this approach is that if any element of the list fails the test, the later elements don't get tested. I looked at subclassing unittest.TestCase and somehow providing the specific function to test as an argument, but as far as I can tell that prevents us from using unittest.main() because there would be no way to pass the argument to the testcase. I also looked at dynamically attaching "testSomething" functions to the testcase, by using setattr with a lambda, but the testcase did not recognize them. How can I rewrite this so it remains trivial to expand the list of tests, while still ensuring every test is run?
Here's my favorite approach to the "family of related tests". I like explicit subclasses of a TestCase that expresses the common features. ``` class MyTestF1( unittest.TestCase ): theFunction= staticmethod( f1 ) def setUp(self): self.matrix1 = numpy.ones((5,10)) self.matrix2 = numpy.identity(5) def testOutputShape( self ): """Output of functions be of a certain shape""" output = self.theFunction(self.matrix1, self.matrix2) fail_message = "%s produces output of the wrong shape" % (self.theFunction.__name__,) self.assertEqual(self.matrix1.shape, output.shape, fail_message) class TestF2( MyTestF1 ): """Includes ALL of TestF1 tests, plus a new test.""" theFunction= staticmethod( f2 ) def testUniqueFeature( self ): # blah blah blah pass class TestF3( MyTestF1 ): """Includes ALL of TestF1 tests with no additional code.""" theFunction= staticmethod( f3 ) ``` Add a function, add a subclass of `MyTestF1`. Each subclass of MyTestF1 includes all of the tests in MyTestF1 with no duplicated code of any kind. Unique features are handled in an obvious way. New methods are added to the subclass. It's completely compatible with `unittest.main()`
Rounding float to the nearest factor?
347,538
6
2008-12-07T12:45:15Z
347,549
11
2008-12-07T13:00:28Z
[ "python", "algorithm", "math" ]
I have a small math problem I am trying to solve. Given a number x and resolution y, I need to find the next x' with the required resolution. e.g. ``` x = 1.002 y = 0.1 x'= 1.1 x = 0.348 y = 0.1 x'= 0.4 x = 0.50 y = 1 x'= 1 x = 0.32 y = 0.05 x'= 0.35 ``` Is there any smart way of doing this in Python?
``` import math def next_multiple(x, y): return math.ceil(x/y)*y def try_it(x, y): print x, y, next_multiple(x, y) for x, y in [ (1.002, 0.1), (0.348, 0.1), (0.50, 1), (0.32, 0.05) ]: try_it(x, y) ``` produces: ``` 1.002 0.1 1.1 0.348 0.1 0.4 0.5 1 1.0 0.32 0.05 0.35 ``` I think your first example output is wrong. The correct answer for x' is 1.1, right?
How do I get nose to discover dynamically-generated testcases?
347,574
6
2008-12-07T13:30:45Z
676,420
7
2009-03-24T07:19:15Z
[ "python", "unit-testing", "nose" ]
This is a follow-up to a [previous question](http://stackoverflow.com/questions/347109/how-do-i-concisely-implement-multiple-similar-unit-tests-in-the-python-unittest) of mine. In the previous question, methods were explored to implement what was essentially the same test over an entire family of functions, ensuring testing did not stop at the first function that failed. My preferred solution used a metaclass to dynamically insert the tests into a unittest.TestCase. Unfortunately, nose does not pick this up because nose statically scans for test cases. How do I get nose to discover and run such a TestCase? Please refer [here](http://stackoverflow.com/questions/347109/how-do-i-concisely-implement-multiple-similar-unit-tests-in-the-python-unittest#347175) for an example of the TestCase in question.
Nose has a "test generator" feature for stuff like this. You write a generator function that yields each "test case" function you want it to run, along with its args. Following your previous example, this could check each of the functions in a separate test: ``` import numpy from somewhere import the_functions def test_matrix_functions(): for function in the_functions: yield check_matrix_function, function def check_matrix_function(function): matrix1 = numpy.ones((5,10)) matrix2 = numpy.identity(5) output = function(matrix1, matrix2) assert matrix1.shape == output.shape, \ "%s produces output of the wrong shape" % str(function) ```
Gauss-Legendre Algorithm in python
347,734
12
2008-12-07T16:15:40Z
347,749
24
2008-12-07T16:29:55Z
[ "python", "algorithm", "pi" ]
I need some help calculating Pi. I am trying to write a python program that will calculate Pi to X digits. I have tried several from the python mailing list, and they are too slow for my use. I have read about the [Gauss-Legendre Algorithm](http://en.wikipedia.org/wiki/Gauss-Legendre_algorithm), and I have tried porting it to Python with no success. I am reading from [Here](http://www.geocities.com/hjsmithh/Pi/Gauss_L.html), and I would appreciate any input as to where I am going wrong! It outputs: 0.163991276262 ``` from __future__ import division import math def square(x):return x*x a = 1 b = 1/math.sqrt(2) t = 1/4 x = 1 for i in range(1000): y = a a = (a+b)/2 b = math.sqrt(b*y) t = t - x * square((y-a)) x = 2* x pi = (square((a+b)))/4*t print pi raw_input() ```
1. You forgot parentheses around `4*t`: ``` pi = (a+b)**2 / (4*t) ``` 2. You can use `decimal` to perform calculation with higher precision. ``` #!/usr/bin/env python from __future__ import with_statement import decimal def pi_gauss_legendre(): D = decimal.Decimal with decimal.localcontext() as ctx: ctx.prec += 2 a, b, t, p = 1, 1/D(2).sqrt(), 1/D(4), 1 pi = None while 1: an = (a + b) / 2 b = (a * b).sqrt() t -= p * (a - an) * (a - an) a, p = an, 2*p piold = pi pi = (a + b) * (a + b) / (4 * t) if pi == piold: # equal within given precision break return +pi decimal.getcontext().prec = 100 print pi_gauss_legendre() ``` Output: ``` 3.141592653589793238462643383279502884197169399375105820974944592307816406286208\ 998628034825342117068 ```
AKS Primes algorithm in Python
347,811
23
2008-12-07T17:41:10Z
347,840
42
2008-12-07T18:02:27Z
[ "python", "algorithm", "primes" ]
A few years ago, it was proven that [PRIMES is in P](http://www.cse.iitk.ac.in/~manindra/algebra/primality_v6.pdf). Are there any algorithms implementing [their primality test](http://en.wikipedia.org/wiki/AKS_primality_test) in Python? I wanted to run some benchmarks with a naive generator and see for myself how fast it is. I'd implement it myself, but I don't understand the paper enough yet to do that.
Quick answer: no, the AKS test is not the fastest way to test primality. There are much *much* faster primality tests that either assume the (generalized) Riemann hypothesis and/or are randomized. (E.g. [Miller-Rabin](http://en.wikipedia.org/wiki/Miller-Rabin_primality_test) is fast and simple to implement.) The real breakthrough of the paper was theoretical, proving that a *deterministic* polynomial-time algorithm exists for testing primality, without assuming the GRH or other unproved conjectures. That said, if you want to understand and implement it, [Scott Aaronson's short article](http://www.scottaaronson.com/writings/prime.pdf) might help. It doesn't go into all the details, but you can start at page 10 of 12, and it gives enough. :-) There is also a [list of implementations](http://fatphil.org/maths/AKS/#Implementations) (mostly in C++) here. Also, for optimization and improvements (by several orders of magnitude), you might want to look at [this report](http://www.southerington.com/souther/projects/aks/RP-3_report.pdf), or (older) [Crandall and Papadopoulos's report](http://developer.apple.com/hardware/ve/pdf/aks3.pdf), or (older still) [Daniel J Bernstein's report](http://cr.yp.to/papers/aks.pdf). All of them have fairly detailed pseudo-code that lends itself well to implementation.
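For a concrete sense of what "fast and simple to implement" means, here is a minimal sketch of the Miller-Rabin test mentioned above (a probabilistic test, not AKS itself; the small-prime trial division up front is just a common shortcut, and the witness count `k` is an arbitrary choice):

```python
import random

def is_probable_prime(n, k=20):
    """Miller-Rabin probabilistic primality test (sketch)."""
    if n < 2:
        return False
    # Quick trial division by the first few primes.
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n-1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # found a witness that n is composite
    return True  # probably prime
```

With `k` rounds the chance of a composite slipping through is at most 4**-k, which is why tests like this are the usual practical choice even though AKS is the theoretical milestone.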
How to test django caching?
347,812
14
2008-12-07T17:41:34Z
348,079
7
2008-12-07T21:06:49Z
[ "python", "django", "caching", "django-cache" ]
Is there a way to be **sure** that a page is coming from cache on a production server and on the development server as well? The solution **shouldn't** involve caching middleware because not every project uses them. Though the solution itself might **be** a middleware. Just checking if the data is stale is not a very safe testing method IMO.
Mock the view, hit the page, and see if the mock was called. If it was not, the cache was used instead.
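The idea can be sketched without Django at all. The cache and view below are hypothetical stand-ins for whatever your caching layer does, and `mock.Mock(wraps=...)` (from `unittest.mock`, the standalone `mock` package on older Pythons) is the spy that tells you whether the view actually ran:

```python
from unittest import mock

# Hypothetical stand-ins for a page cache and a view -- not Django APIs.
_cache = {}

def render_page(path, view):
    """Return the cached response for `path`, calling `view` only on a miss."""
    if path not in _cache:
        _cache[path] = view(path)
    return _cache[path]

def my_view(path):
    return "rendered %s" % path

# First hit: the view runs. Second hit: the spy's call count must not
# grow -- the response came from the cache.
spy = mock.Mock(wraps=my_view)
first = render_page("/home/", spy)
second = render_page("/home/", spy)
assert first == second
assert spy.call_count == 1  # second request served from cache
```

The same pattern works against a real Django view: patch it with a wrapping mock, request the URL twice through the test client, and assert on `call_count`.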
How to test django caching?
347,812
14
2008-12-07T17:41:34Z
348,546
18
2008-12-08T02:33:16Z
[ "python", "django", "caching", "django-cache" ]
Is there a way to be **sure** that a page is coming from cache on a production server and on the development server as well? The solution **shouldn't** involve caching middleware because not every project uses them. Though the solution itself might **be** a middleware. Just checking if the data is stale is not a very safe testing method IMO.
We do a lot of component caching and not all of them are updated at the same time. So we set host and timestamp values in a universally included context processor. At the top of each template fragment we stick in: ``` <!-- component_name {{host}} {{timestamp}} --> ``` The component\_name just makes it easy to do a View Source and search for that string. All of our views that are object-detail pages define a context variable "page\_object" and we have this at the top of the base.html template master: ``` <!-- {{page_object.class_id}} @ {{timestamp}} --> ``` class\_id() is a method from a super class used by all of our primary content classes. It is just: ``` def class_id(self): return "%s.%s.%s" % (self.__class__._meta.app_label, self.__class__.__name__, self.id) ``` If you load a page and any of the timestamps are more than a few seconds old, it's a pretty good bet that the component was cached.
How to test django caching?
347,812
14
2008-12-07T17:41:34Z
5,563,503
12
2011-04-06T08:32:14Z
[ "python", "django", "caching", "django-cache" ]
Is there a way to be **sure** that a page is coming from cache on a production server and on the development server as well? The solution **shouldn't** involve caching middleware because not every project uses them. Though the solution itself might **be** a middleware. Just checking if the data is stale is not a very safe testing method IMO.
Peter Rowell's suggestion works well, but you don't need a custom template context processor for timestamps. You can simply use the template tag: ``` <!-- {% now "jS F Y H:i" %} --> ```
What could justify the complexity of Plone?
348,044
12
2008-12-07T20:41:40Z
348,317
7
2008-12-07T23:30:53Z
[ "python", "content-management-system", "plone", "zope" ]
Plone is very complex. [Zope](http://en.wikipedia.org/wiki/Zope)2, [Zope3](http://en.wikipedia.org/wiki/Zope_3), [Five](http://codespeak.net/z3/five/), [ZCML](http://wiki.zope.org/zope3/ZCML), [ZODB](http://en.wikipedia.org/wiki/Zope_Object_Database), [ZEO](http://en.wikipedia.org/wiki/Zope_Object_Database#ZEO), a whole bunch of acronyms and abbreviations. It's hard to begin and the current state seems to be undecided. It is mainly based on Zope2, but incorporates Zope3 via Five. And there are XML config files everywhere. Does the steep learning curve pay off? Is this complexity still justified today? Background: I need a platform. Customers often need a CMS. I'm currently reading "[Professional Plone Development](http://plone.org/news/book-professional-plone-development-now-shipping)", without prior knowledge of Plone. The problem: Customers don't always want the same thing, and you can't know beforehand. One thing is sure: They don't want the default theme of Plone. But any additional feature is a risk. You can't just start and say "[If you want to see the complexity of Plone, you have to ask for it.](http://stackoverflow.com/questions/348044/what-could-justify-the-complexity-of-plone/351692#351692)" when you don't know the system well enough to plan.
I see four things that can justify an investment of time in using Plone: * Plone has a large and helpful community. Most of the things you need, somebody else already did at some time in the past. He probably asked some questions and got helpful answers, or he wrote a tutorial. Usually that leaves traces about how he did it that are easy to find. * You won't need to understand the whole complexity for many of your customizing needs. * Plone developers are aware of their complex stack, and are discussing how this can be reduced. Plone has proven in the past that it is able to renew itself and drop old infrastructure in a clean way with defined deprecation phases. * There are many local user groups with helpful people. Oh wait, I was told the Plone developer meetings are one of the best! [Like that one](http://plone.org/events/conferences/vienna-2004)
What could justify the complexity of Plone?
348,044
12
2008-12-07T20:41:40Z
348,508
29
2008-12-08T01:59:39Z
[ "python", "content-management-system", "plone", "zope" ]
Plone is very complex. [Zope](http://en.wikipedia.org/wiki/Zope)2, [Zope3](http://en.wikipedia.org/wiki/Zope_3), [Five](http://codespeak.net/z3/five/), [ZCML](http://wiki.zope.org/zope3/ZCML), [ZODB](http://en.wikipedia.org/wiki/Zope_Object_Database), [ZEO](http://en.wikipedia.org/wiki/Zope_Object_Database#ZEO), a whole bunch of acronyms and abbreviations. It's hard to begin and the current state seems to be undecided. It is mainly based on Zope2, but incorporates Zope3 via Five. And there are XML config files everywhere. Does the steep learning curve pay off? Is this complexity still justified today? Background: I need a platform. Customers often need a CMS. I'm currently reading "[Professional Plone Development](http://plone.org/news/book-professional-plone-development-now-shipping)", without prior knowledge of Plone. The problem: Customers don't always want the same thing, and you can't know beforehand. One thing is sure: They don't want the default theme of Plone. But any additional feature is a risk. You can't just start and say "[If you want to see the complexity of Plone, you have to ask for it.](http://stackoverflow.com/questions/348044/what-could-justify-the-complexity-of-plone/351692#351692)" when you don't know the system well enough to plan.
It's hard to answer your question without any background information. Is the complexity justified if you just want a blog? No. Is the complexity justified if you're building a company intranet for 400+ people? Yes. Is it a good investment if you're looking to be a consultant? Absolutely! There's a lot of Plone work out there, and it pays much better than the average PHP job. I'd encourage you to clarify what you're trying to build, and ask the Plone forums for advice. Plone has a very mature and friendly community — and will absolutely let you know if what you're trying to do is a poor fit for Plone. You can of course do whatever you want with Plone, but there are some areas where it's the best solution available, other areas where it'll be a lot of work to change it to do something else. Some background: The reason for the complexity of Plone at this point in time is that it's moving to a more modern architecture. It's bridging both the old and the new approach right now, which adds some complexity until the transition is mostly complete. Plone is doing this to avoid leaving their customers behind by breaking backwards compatibility, which they take very seriously — unlike other systems I could mention (but won't ;). You care about your data, the Plone community cares about their data — and we'd like you to be able to upgrade to the new and better versions even when we're transitioning to a new architecture. This is one of the Plone community's strengths, but there is of course a penalty to pay for modifying the plane while it's flying, and that's a bit of temporary, extra complexity. Furthermore, Plone as a community has a strong focus on security (compare it to any other system on the vulnerabilities reported), and a very professional culture that values good architecture, testing and reusability. As an example, consider the current version of Plone being developed (what will become 4.0): * It starts up 3-4 times faster than the current version. * It uses about 20% less memory than the current version. * There's a much, much easier types system in the works (Dexterity), which will reduce the complexity and speed up the system a lot, while keeping the same level of functionality * The code base is already 20% smaller than the current shipping version, and getting even smaller. * Early benchmarks of the new types system show a 5× speedup for content editing, and we haven't really started optimizing this part yet. — Alexander Limi, Plone co-founder (and slightly biased ;)
What could justify the complexity of Plone?
348,044
12
2008-12-07T20:41:40Z
351,692
23
2008-12-09T03:29:19Z
[ "python", "content-management-system", "plone", "zope" ]
Plone is very complex. [Zope](http://en.wikipedia.org/wiki/Zope)2, [Zope3](http://en.wikipedia.org/wiki/Zope_3), [Five](http://codespeak.net/z3/five/), [ZCML](http://wiki.zope.org/zope3/ZCML), [ZODB](http://en.wikipedia.org/wiki/Zope_Object_Database), [ZEO](http://en.wikipedia.org/wiki/Zope_Object_Database#ZEO), a whole bunch of acronyms and abbreviations. It's hard to begin and the current state seems to be undecided. It is mainly based on Zope2, but incorporates Zope3 via Five. And there are XML config files everywhere. Does the steep learning curve pay off? Is this complexity still justified today? Background: I need a platform. Customers often need a CMS. I'm currently reading "[Professional Plone Development](http://plone.org/news/book-professional-plone-development-now-shipping)", without prior knowledge of Plone. The problem: Customers don't always want the same thing, and you can't know beforehand. One thing is sure: They don't want the default theme of Plone. But any additional feature is a risk. You can't just start and say "[If you want to see the complexity of Plone, you have to ask for it.](http://stackoverflow.com/questions/348044/what-could-justify-the-complexity-of-plone/351692#351692)" when you don't know the system well enough to plan.
If you want to see the complexity of Plone, you have to ask for it. For most people, it's just not there. It installs in a couple of minutes through a one-click installer. Then it's one click to log in, one click to create a page, use a WYSIWYG editor, and one click to save. Everything is through an intuitive web GUI. Plone is a product. If you want to use it as a "platform," then the platform is a stack of over one million lines of code which implements a complete content management suite. No one knows it all. However, all those "acronyms" and "files" are evidence of software which is factored into components so that no one need know it all. You can get as deep or shallow in it as you need. If there's something you need for some aspect of content management, it's already there, you don't have to create it from scratch, and you can do it in a way that's consistent with a wide practice and review.
What could justify the complexity of Plone?
348,044
12
2008-12-07T20:41:40Z
446,659
9
2009-01-15T13:08:53Z
[ "python", "content-management-system", "plone", "zope" ]
Plone is very complex. [Zope](http://en.wikipedia.org/wiki/Zope)2, [Zope3](http://en.wikipedia.org/wiki/Zope_3), [Five](http://codespeak.net/z3/five/), [ZCML](http://wiki.zope.org/zope3/ZCML), [ZODB](http://en.wikipedia.org/wiki/Zope_Object_Database), [ZEO](http://en.wikipedia.org/wiki/Zope_Object_Database#ZEO), a whole bunch of acronyms and abbreviations. It's hard to begin and the current state seems to be undecided. It is mainly based on Zope2, but incorporates Zope3 via Five. And there are XML config files everywhere. Does the steep learning curve pay off? Is this complexity still justified today? Background: I need a platform. Customers often need a CMS. I'm currently reading "[Professional Plone Development](http://plone.org/news/book-professional-plone-development-now-shipping)", without prior knowledge of Plone. The problem: Customers don't always want the same thing, and you can't know beforehand. One thing is sure: They don't want the default theme of Plone. But any additional feature is a risk. You can't just start and say "[If you want to see the complexity of Plone, you have to ask for it.](http://stackoverflow.com/questions/348044/what-could-justify-the-complexity-of-plone/351692#351692)" when you don't know the system well enough to plan.
I found an anonymous comment [here](http://bitubique.com/content/im-done-plone#comment-10) which is much better than that post itself, so I'm reposting it here in full, with a couple of typos corrected. --- This summer my chess club asked me to make a new website, where the members of the board should be able to add news flashes, articles, ... Sounded like a CMS. Being a Python developer, I looked at Plone and bought the Aspeli book Professional Plone development (excellently written, btw). I took 3 weeks of my holiday to study the book and to set up a first mock-up of the site. After 3 weeks I realized that Plone has some very nice things but also some very frustrating things. On the positive side: * if you don't need to customize Plone, Plone is great in features and layout * Plone has a good security model * Plone has good off-the-shelf workflows * Plone is multi-language (what I needed) On the downside: 1. Plone is terribly slow. On my development platform (a 3-year-old PC with 512 MB RAM) it takes 30 seconds to launch Plone and it takes 10 to 15 seconds to reload a page 2. you need a lot of different technologies to customize or develop even the simplest things 3. TAL and Metal are not state of the art and not adapted to the OO design of Plone. 4. Acquisition by default is wrong. Acquisition can be very useful (for e.g. security) but it should be explicitly defined where needed. This is a design flaw 5. Plone does not distinguish between content and layout. This is a serious design flaw. There is no reason to apply security settings and roles on e.g. a cascading style sheet or the html that creates a 3-column layout, and there is no reason why these elements should be in the ZODB and not on the filesystem 6. Plone does not distinguish between the web designer and the content editor/publisher, again a serious flaw. The content editor/publisher adds/reviews content running on the live site. The web designer adds/modifies content types, forms and layout on the test server and ports it to the live server when ready. The security restrictions Plone puts in place for the content editor should not be applied to the web designer, who has access to the filesystem on the server. 7. Plone does not distinguish between the graphical aspects and the programming aspects of a web designer. Graphical artists use tools that only speak html, css and a little bit of javascript, but no Python, adapters and other advanced programming concepts. As a consequence the complete skinning system in Plone is a nightmare. I assume that Plone is so slow because of points 4, 5, 6 and 7. Points 6 and 7 made me drop Plone. I looked around for other options and eventually decided to develop my own CMS on Pylons, which is blazingly fast compared to Plone. On the same development server I have a startup time of 1 second, and a page reload time that is not measurable. The site www.kosk.be is running (it is in Dutch). The CMS behind it, named Red Devil, will be launched as a separate open source project beginning next year
Creating a list of objects in Python
348,196
42
2008-12-07T22:15:46Z
348,215
39
2008-12-07T22:22:39Z
[ "python", "list", "object", "loops" ]
I'm trying to create a Python script that opens several databases and compares their contents. In the process of creating that script, I've run into a problem in creating a list whose contents are objects that I've created. I've simplified the program to its bare bones for this posting. First I create a new class, create a new instance of it, assign it an attribute and then write it to a list. Then I assign a new value to the instance and again write it to a list... and again and again... Problem is, it's always the same object so I'm really just changing the base object. When I read the list, I get a repeat of the same object over and over. So how do you write objects to a list within a loop? Thanks, Bob J Here's my simplified code ``` class SimpleClass(object): pass x = SimpleClass # Then create an empty list simpleList = [] #Then loop through from 0 to 3 adding an attribute to the instance 'x' of SimpleClass for count in range(0,4): # each iteration creates a slightly different attribute value, and then prints it to # prove that step is working # but the problem is, I'm always updating a reference to 'x' and what I want to add to # simplelist is a new instance of x that contains the updated attribute x.attr1= '*Bob* '* count print "Loop Count: %s Attribute Value %s" % (count, x.attr1) simpleList.append(x) print '-'*20 # And here I print out each instance of the object stored in the list 'simpleList' # and the problem surfaces. Every element of 'simpleList' contains the same attribute value y = SimpleClass print "Reading the attributes from the objects in the list" for count in range(0,4): y = simpleList[count] print y.attr1 ``` So how do I (append, extend, copy or whatever) the elements of simpleList so that each entry contains a different instance of the object instead of all pointing to the same one?
You demonstrate a fundamental misunderstanding. You never created an instance of SimpleClass at all, because you didn't call it. ``` for count in xrange(4): x = SimpleClass() x.attr = count simplelist.append(x) ``` Or, if you let the class take parameters, instead, you can use a list comprehension. ``` simplelist = [SimpleClass(count) for count in xrange(4)] ```
Creating a list of objects in Python
348,196
42
2008-12-07T22:15:46Z
348,284
7
2008-12-07T23:06:58Z
[ "python", "list", "object", "loops" ]
I'm trying to create a Python script that opens several databases and compares their contents. In the process of creating that script, I've run into a problem in creating a list whose contents are objects that I've created. I've simplified the program to its bare bones for this posting. First I create a new class, create a new instance of it, assign it an attribute and then write it to a list. Then I assign a new value to the instance and again write it to a list... and again and again... Problem is, it's always the same object so I'm really just changing the base object. When I read the list, I get a repeat of the same object over and over. So how do you write objects to a list within a loop? Thanks, Bob J Here's my simplified code ``` class SimpleClass(object): pass x = SimpleClass # Then create an empty list simpleList = [] #Then loop through from 0 to 3 adding an attribute to the instance 'x' of SimpleClass for count in range(0,4): # each iteration creates a slightly different attribute value, and then prints it to # prove that step is working # but the problem is, I'm always updating a reference to 'x' and what I want to add to # simplelist is a new instance of x that contains the updated attribute x.attr1= '*Bob* '* count print "Loop Count: %s Attribute Value %s" % (count, x.attr1) simpleList.append(x) print '-'*20 # And here I print out each instance of the object stored in the list 'simpleList' # and the problem surfaces. Every element of 'simpleList' contains the same attribute value y = SimpleClass print "Reading the attributes from the objects in the list" for count in range(0,4): y = simpleList[count] print y.attr1 ``` So how do I (append, extend, copy or whatever) the elements of simpleList so that each entry contains a different instance of the object instead of all pointing to the same one?
It shouldn't be necessary to recreate the SimpleClass object each time, as some are suggesting, if you're simply using it to output data based on its attributes. However, you're not actually creating an instance of the class; you're simply creating a reference to the class object itself. Therefore, you're adding a reference to the same class attribute to the list (instead of an instance attribute), over and over. Instead of: ``` x = SimpleClass ``` you need: ``` x = SimpleClass() ```
Creating a list of objects in Python
348,196
42
2008-12-07T22:15:46Z
352,505
31
2008-12-09T11:50:39Z
[ "python", "list", "object", "loops" ]
I'm trying to create a Python script that opens several databases and compares their contents. In the process of creating that script, I've run into a problem in creating a list whose contents are objects that I've created. I've simplified the program to its bare bones for this posting. First I create a new class, create a new instance of it, assign it an attribute and then write it to a list. Then I assign a new value to the instance and again write it to a list... and again and again... Problem is, it's always the same object so I'm really just changing the base object. When I read the list, I get a repeat of the same object over and over. So how do you write objects to a list within a loop? Thanks, Bob J Here's my simplified code ``` class SimpleClass(object): pass x = SimpleClass # Then create an empty list simpleList = [] #Then loop through from 0 to 3 adding an attribute to the instance 'x' of SimpleClass for count in range(0,4): # each iteration creates a slightly different attribute value, and then prints it to # prove that step is working # but the problem is, I'm always updating a reference to 'x' and what I want to add to # simplelist is a new instance of x that contains the updated attribute x.attr1= '*Bob* '* count print "Loop Count: %s Attribute Value %s" % (count, x.attr1) simpleList.append(x) print '-'*20 # And here I print out each instance of the object stored in the list 'simpleList' # and the problem surfaces. Every element of 'simpleList' contains the same attribute value y = SimpleClass print "Reading the attributes from the objects in the list" for count in range(0,4): y = simpleList[count] print y.attr1 ``` So how do I (append, extend, copy or whatever) the elements of simpleList so that each entry contains a different instance of the object instead of all pointing to the same one?
To fill a list with separate instances of a class, you can use a for loop in the declaration of the list. Multiplying a list with `*` would instead make every element a reference to the same instance. ``` instancelist = [ MyClass() for i in range(29)] ``` and then access the instances through the index of the list. ``` instancelist[5].attr1 = 'whamma' ```
Receive and send emails in python
348,392
29
2008-12-08T00:12:25Z
348,551
21
2008-12-08T02:36:49Z
[ "python", "email" ]
How can I receive and send email in Python? A 'mail server' of sorts. I am looking into making an app that listens to see if it receives an email addressed to [email protected], and sends an email to the sender. Now, am I able to do this all in Python, or would it be best to use 3rd party libraries?
Here is a very simple example: ``` import smtplib server = 'mail.server.com' user = '' password = '' recipients = ['[email protected]', '[email protected]'] sender = '[email protected]' message = 'Hello World' session = smtplib.SMTP(server) # if your SMTP server doesn't need authentications, # you don't need the following line: session.login(user, password) session.sendmail(sender, recipients, message) ``` For more options, error handling, etc, look at the [smtplib module documentation](http://www.python.org/doc/2.5.2/lib/module-smtplib.html).
Receive and send emails in python
348,392
29
2008-12-08T00:12:25Z
349,352
10
2008-12-08T12:23:21Z
[ "python", "email" ]
How can I receive and send email in Python? A 'mail server' of sorts. I am looking into making an app that listens to see if it receives an email addressed to [email protected], and sends an email to the sender. Now, am I able to do this all in Python? Would it be best to use 3rd party libraries?
I do not think it would be a good idea to write a real mail server in Python. This is certainly possible (see mcrute's and Manuel Ceron's posts for details) but it is a lot of work when you think of everything that a real mail server must handle (queuing, retransmission, dealing with spam, etc). You should explain in more detail what you need. If you just want to react to incoming email, I would suggest configuring the mail server to call a program when it receives the email. This program could do what it wants (updating a database, creating a file, talking to another Python program). To call an arbitrary program from the mail server, you have several choices:

1. For sendmail and Postfix, a `~/.forward` containing `"|/path/to/program"`
2. If you use procmail, a recipe action of `|path/to/program`
3. And certainly many others
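As an illustration of the "call a program" route: whatever you hook in via `~/.forward` or a procmail recipe receives the raw message on standard input, so a minimal handler can be as small as this (`handle_message` and the reaction logic are my own sketch, not part of any framework):

```python
#!/usr/bin/env python
import sys
import email

def handle_message(raw):
    """Parse one raw RFC 2822 message; return (sender, subject)."""
    msg = email.message_from_string(raw)
    return msg['From'], msg['Subject']

# When the mail server pipes a message in, read it from stdin:
#   sender, subject = handle_message(sys.stdin.read())
# ...then react: update a database, create a file, reply with smtplib, etc.
```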
How can I download all emails with attachments from Gmail?
348,630
80
2008-12-08T03:57:49Z
348,649
10
2008-12-08T04:17:37Z
[ "java", "python", "perl", "gmail" ]
How do I connect to Gmail and determine which messages have attachments? I then want to download each attachment, printing out the Subject: and From: for each message as I process it.
I'm not an expert on Perl, but I do know that GMail supports IMAP and POP3, two protocols that are completely standard and allow you to do just that. Maybe that helps you get started.
How can I download all emails with attachments from Gmail?
348,630
80
2008-12-08T03:57:49Z
641,843
7
2009-03-13T08:40:25Z
[ "java", "python", "perl", "gmail" ]
How do I connect to Gmail and determine which messages have attachments? I then want to download each attachment, printing out the Subject: and From: for each message as I process it.
```
#!/usr/bin/env python
"""Save all attachments for given gmail account."""
import os, sys
from libgmail import GmailAccount

ga = GmailAccount("[email protected]", "pA$$w0Rd_")
ga.login()

# folders: inbox, starred, all, drafts, sent, spam
for thread in ga.getMessagesByFolder('all', allPages=True):
    for msg in thread:
        sys.stdout.write('.')
        if msg.attachments:
            print "\n", msg.id, msg.number, msg.subject, msg.sender
            for att in msg.attachments:
                if att.filename and att.content:
                    attdir = os.path.join(thread.id, msg.id)
                    if not os.path.isdir(attdir):
                        os.makedirs(attdir)
                    with open(os.path.join(attdir, att.filename), 'wb') as f:
                        f.write(att.content)
```

untested

1. Make sure TOS allows such scripts, otherwise your account will be suspended
2. There might be better options: GMail offline mode, Thunderbird + ExtractExtensions, GmailFS, Gmail Drive, etc.
How can I download all emails with attachments from Gmail?
348,630
80
2008-12-08T03:57:49Z
642,988
141
2009-03-13T14:34:44Z
[ "java", "python", "perl", "gmail" ]
How do I connect to Gmail and determine which messages have attachments? I then want to download each attachment, printing out the Subject: and From: for each message as I process it.
Hard one :-)

```
import email, getpass, imaplib, os

detach_dir = '.' # directory where to save attachments (default: current)
user = raw_input("Enter your GMail username:")
pwd = getpass.getpass("Enter your password: ")

# connecting to the gmail imap server
m = imaplib.IMAP4_SSL("imap.gmail.com")
m.login(user, pwd)
m.select("[Gmail]/All Mail") # here you can choose a mail box like INBOX instead
# use m.list() to get all the mailboxes

resp, items = m.search(None, "ALL") # you could filter using the IMAP rules here (check http://www.example-code.com/csharp/imap-search-critera.asp)
items = items[0].split() # getting the mail ids

for emailid in items:
    resp, data = m.fetch(emailid, "(RFC822)") # fetching the mail, "(RFC822)" means "get the whole stuff", but you can ask for headers only, etc
    email_body = data[0][1] # getting the mail content
    mail = email.message_from_string(email_body) # parsing the mail content to get a mail object

    # Check if any attachments at all
    if mail.get_content_maintype() != 'multipart':
        continue

    print "[" + mail["From"] + "] :" + mail["Subject"]

    # we use walk to create a generator so we can iterate on the parts and forget about the recursive headache
    counter = 1  # used to name nameless parts without collisions
    for part in mail.walk():
        # multiparts are just containers, so we skip them
        if part.get_content_maintype() == 'multipart':
            continue

        # is this part an attachment ?
        if part.get('Content-Disposition') is None:
            continue

        filename = part.get_filename()

        # if there is no filename, we create one with a counter to avoid duplicates
        if not filename:
            filename = 'part-%03d.bin' % counter
            counter += 1

        att_path = os.path.join(detach_dir, filename)

        # Check if it's already there
        if not os.path.isfile(att_path):
            # finally write the stuff
            fp = open(att_path, 'wb')
            fp.write(part.get_payload(decode=True))
            fp.close()
```

Wowww! That was something. ;-) But try the same in Java, just for fun! By the way, I tested that in a shell, so some errors likely remain.
Enjoy!

**EDIT:** Because mail-box names can change from one country to another, I recommend doing `m.list()` and picking an item in it before `m.select("the mailbox name")` to avoid this error:

> imaplib.error: command SEARCH illegal in state AUTH, only allowed in
> states SELECTED
How can I download all emails with attachments from Gmail?
348,630
80
2008-12-08T03:57:49Z
643,366
7
2009-03-13T15:52:00Z
[ "java", "python", "perl", "gmail" ]
How do I connect to Gmail and determine which messages have attachments? I then want to download each attachment, printing out the Subject: and From: for each message as I process it.
Take a look at [Mail::Webmail::Gmail](http://search.cpan.org/~mincus/Mail-Webmail-Gmail-1.09/lib/Mail/Webmail/Gmail.pm#GETTING%5FATTACHMENTS): **GETTING ATTACHMENTS** There are two ways to get an attachment: 1 -> By sending a reference to a specific attachment returned by `get_indv_email` ``` # Creates an array of references to every attachment in your account my $messages = $gmail->get_messages(); my @attachments; foreach ( @{ $messages } ) { my $email = $gmail->get_indv_email( msg => $_ ); if ( defined( $email->{ $_->{ 'id' } }->{ 'attachments' } ) ) { foreach ( @{ $email->{ $_->{ 'id' } }->{ 'attachments' } } ) { push( @attachments, $gmail->get_attachment( attachment => $_ ) ); if ( $gmail->error() ) { print $gmail->error_msg(); } } } } ``` 2 -> Or by sending the attachment ID and message ID ``` #retrieve specific attachment my $msgid = 'F000000000'; my $attachid = '0.1'; my $attach_ref = $gmail->get_attachment( attid => $attachid, msgid => $msgid ); ``` ( Returns a reference to a scalar that holds the data from the attachment. )
Formatting a data structure into a comma-separated list of arguments
349,175
6
2008-12-08T11:00:39Z
349,182
12
2008-12-08T11:03:54Z
[ "python", "refactoring", "list" ]
I need to convert a list (or a dict) into a comma-separated list for passing to another language. Is there a nicer way of doing this than:

```
result = ''
args = ['a', 'b', 'c', 'd']

i = 0
for arg in args:
    if i != 0:
        result += ', '
    result += arg
    i += 1

result = 'function (' + result + ')'
```

Thanks, Dan
`', '.join(args)` will do the trick.
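For completeness — since the question mentions dicts too — the same `join` idiom covers those as well; this is just an illustrative sketch:

```python
args = ['a', 'b', 'c', 'd']
print('function (%s)' % ', '.join(args))   # function (a, b, c, d)

d = {'x': 1, 'y': 2}
# join() needs strings, so format each key/value pair first
print(', '.join('%s=%s' % kv for kv in sorted(d.items())))  # x=1, y=2
```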
Formatting a data structure into a comma-separated list of arguments
349,175
6
2008-12-08T11:00:39Z
349,197
11
2008-12-08T11:09:37Z
[ "python", "refactoring", "list" ]
I need to convert a list (or a dict) into a comma-separated list for passing to another language. Is there a nicer way of doing this than:

```
result = ''
args = ['a', 'b', 'c', 'd']

i = 0
for arg in args:
    if i != 0:
        result += ', '
    result += arg
    i += 1

result = 'function (' + result + ')'
```

Thanks, Dan
``` 'function(%s)' % ', '.join(args) ``` produces ``` 'function(a, b, c, d)' ```
How do I find the "concrete class" of a django model baseclass
349,206
10
2008-12-08T11:11:45Z
349,494
12
2008-12-08T13:05:59Z
[ "python", "django", "inheritance", "django-models" ]
I'm trying to find the actual class of a django-model object, when using model-inheritance. Some code to describe the problem:

```
class Base(models.model):
    def basemethod(self):
        ...

class Child_1(Base):
    pass

class Child_2(Base):
    pass
```

If I create various objects of the two Child classes and then create a queryset containing them all:

```
Child_1().save()
Child_2().save()
(o1, o2) = Base.objects.all()
```

I want to determine if the object is of type Child\_1 or Child\_2 in basemethod. I can get to the child object via o1.child\_1 and o2.child\_2, but that requires knowledge about the child classes in the base class. I have come up with the following code:

```
def concrete_instance(self):
    instance = None
    for subclass in self._meta.get_all_related_objects():
        acc_name = subclass.get_accessor_name()
        try:
            instance = self.__getattribute__(acc_name)
            return instance
        except Exception, e:
            pass
```

But it feels brittle and I'm not sure what happens if I inherit in more levels.
Django implements model inheritance with a OneToOneField between the parent model's table and the child model's table. When you do `Base.objects.all()`, Django is querying just the Base table, and so has no way of knowing what the child table is. Therefore, unfortunately, it's not possible to go directly to the child model instance without additional queries. This [snippet](http://www.djangosnippets.org/snippets/1031/) shows a common method of adding a ContentType field to the base model:

```
from django.contrib.contenttypes.models import ContentType

class Base(models.Model):
    content_type = models.ForeignKey(ContentType, editable=False, null=True)

    def save(self):
        if not self.content_type:
            self.content_type = ContentType.objects.get_for_model(self.__class__)
        self.save_base()

    def as_leaf_class(self):
        content_type = self.content_type
        model = content_type.model_class()
        if model == Base:
            return self
        return model.objects.get(id=self.id)
```

You can then call `instance.content_type.model_class()` to determine the type. [Here](http://www.djangosnippets.org/snippets/1034/) is another snippet that adds a custom manager into the mix. As you can see, both of these solutions have the potential to be extremely expensive. If you have a large number of instances, using the as\_leaf\_class() method will require one query on each item. Instead, if you have a known set of child models, simply query each model separately and aggregate the instances into one list.
How do I pass a python list in the post query?
349,369
7
2008-12-08T12:28:28Z
349,384
8
2008-12-08T12:35:54Z
[ "python", "web-services" ]
I want to send some strings in a list in a POST call. eg: ``` www.example.com/?post_data = A list of strings ``` The python code receives the data as a single string (Instead of a list of strings). How do I post it as a list of strings?
There's no such thing as a "list of strings" in a URL (or in practically anything in HTTP - if you specify multiple values for the same header, they come out as a single delimited value in most web app frameworks IME). It's just a single string. I suggest you delimit the strings in some way (e.g. comma-separated) and then parse them out again at the other end.
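For example, with the standard library you can either pick a delimiter yourself or repeat the parameter name and let the server-side parser rebuild the list (`post_data` is just the name from the question):

```python
try:                                    # Python 3
    from urllib.parse import urlencode, parse_qs
except ImportError:                     # Python 2
    from urllib import urlencode
    from urlparse import parse_qs

strings = ['foo', 'bar', 'baz']

# Option 1: one comma-delimited value (assumes the items contain no commas)
print(urlencode({'post_data': ','.join(strings)}))  # post_data=foo%2Cbar%2Cbaz

# Option 2: repeat the key; parse_qs gathers the repeats back into a list
query = urlencode([('post_data', s) for s in strings])
print(parse_qs(query)['post_data'])                 # ['foo', 'bar', 'baz']
```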
Getting the lesser n elements of a list in Python
350,519
8
2008-12-08T19:10:10Z
350,568
13
2008-12-08T19:27:39Z
[ "python", "algorithm", "sorting" ]
I need to get the lesser n numbers of a list in Python. I need this to be really fast because it's in a performance-critical part and it needs to be repeated a lot of times. n is usually no greater than 10 and the list usually has around 20000 elements. The list is always different each time I call the function. Sorting can't be made in place. Initially, I have written this function:

```
def mins(items, n):
    mins = [float('inf')]*n
    for item in items:
        for i, min in enumerate(mins):
            if item < min:
                mins.insert(i, item)
                mins.pop()
                break
    return mins
```

But this function can't beat a simple sorted(items)[:n], which sorts the entire list. Here is my test:

```
from random import randint, random
import time

test_data = [randint(10, 50) + random() for i in range(20000)]

init = time.time()
mins = mins(test_data, 8)
print 'mins(items, n):', time.time() - init

init = time.time()
mins = sorted(test_data)[:8]
print 'sorted(items)[:n]:', time.time() - init
```

Results:

```
mins(items, n): 0.0632939338684
sorted(items)[:n]: 0.0231449604034
```

sorted()[:n] is three times faster. I believe this is because:

1. the insert() operation is costly because Python lists are not linked lists.
2. sorted() is an optimized C function and mine is pure Python.

Is there any way to beat sorted()[:n]? Should I use a C extension, or Pyrex or Psyco or something like that? Thanks in advance for your answers.
You actually want a sorted sequence of mins. ``` mins = items[:n] mins.sort() for i in items[n:]: if i < mins[-1]: mins.append(i) mins.sort() mins= mins[:n] ``` This runs *much* faster because you aren't even looking at mins unless it's provably got a value larger than the given item. About 1/10th the time of the original algorithm. This ran in zero time on my Dell. I had to run it 10 times to get a measurable run time. ``` mins(items, n): 0.297000169754 sorted(items)[:n]: 0.109999895096 mins2(items)[:n]: 0.0309998989105 ``` Using `bisect.insort` instead of append and sort may speed this up a hair further.
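The `bisect.insort` variant mentioned in the last sentence could be sketched like this (same logic as above, untested against the original timing harness):

```python
from bisect import insort

def mins_bisect(items, n):
    """Return the n smallest values of items, smallest first."""
    mins = sorted(items[:n])
    for i in items[n:]:
        if i < mins[-1]:
            insort(mins, i)   # insert at sorted position -- no full re-sort
            mins.pop()        # drop the largest to keep len(mins) == n
    return mins
```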
Getting the lesser n elements of a list in Python
350,519
8
2008-12-08T19:10:10Z
350,685
10
2008-12-08T20:01:12Z
[ "python", "algorithm", "sorting" ]
I need to get the lesser n numbers of a list in Python. I need this to be really fast because it's in a performance-critical part and it needs to be repeated a lot of times. n is usually no greater than 10 and the list usually has around 20000 elements. The list is always different each time I call the function. Sorting can't be made in place. Initially, I have written this function:

```
def mins(items, n):
    mins = [float('inf')]*n
    for item in items:
        for i, min in enumerate(mins):
            if item < min:
                mins.insert(i, item)
                mins.pop()
                break
    return mins
```

But this function can't beat a simple sorted(items)[:n], which sorts the entire list. Here is my test:

```
from random import randint, random
import time

test_data = [randint(10, 50) + random() for i in range(20000)]

init = time.time()
mins = mins(test_data, 8)
print 'mins(items, n):', time.time() - init

init = time.time()
mins = sorted(test_data)[:8]
print 'sorted(items)[:n]:', time.time() - init
```

Results:

```
mins(items, n): 0.0632939338684
sorted(items)[:n]: 0.0231449604034
```

sorted()[:n] is three times faster. I believe this is because:

1. the insert() operation is costly because Python lists are not linked lists.
2. sorted() is an optimized C function and mine is pure Python.

Is there any way to beat sorted()[:n]? Should I use a C extension, or Pyrex or Psyco or something like that? Thanks in advance for your answers.
```
import heapq

nlesser_items = heapq.nsmallest(n, items)
```

Here's a correct version of [S.Lott's algorithm](http://stackoverflow.com/questions/350519/getting-the-lesser-n-elements-of-a-list-in-python#350568):

```
from bisect import insort
from itertools import islice

def nsmallest_slott_bisect(n, iterable, insort=insort):
    it = iter(iterable)
    mins = sorted(islice(it, n))
    for el in it:
        if el <= mins[-1]: #NOTE: equal sign is to preserve duplicates
            insort(mins, el)
            mins.pop()
    return mins
```

Performance:

```
$ python -mtimeit -s "import marshal; from nsmallest import nsmallest$label as nsmallest; items = marshal.load(open('items.marshal','rb')); n = 10"\
 "nsmallest(n, items)"
```

```
nsmallest_heapq
100 loops, best of 3: 12.9 msec per loop
nsmallest_slott_list
100 loops, best of 3: 4.37 msec per loop
nsmallest_slott_bisect
100 loops, best of 3: 3.95 msec per loop
```

`nsmallest_slott_bisect` is **3 times faster** than `heapq`'s `nsmallest` (for n=10, len(items)=20000). `nsmallest_slott_list` is only marginally slower. It is unclear why heapq's nsmallest is so slow; its algorithm is almost identical to the one presented above (for small n).
Why is my Python C Extension leaking memory?
350,647
5
2008-12-08T19:51:04Z
350,695
16
2008-12-08T20:04:32Z
[ "python", "c", "refcounting" ]
The function below takes a python file handle, reads in packed binary data from the file, creates a Python dictionary and returns it. If I loop it endlessly, it'll continually consume RAM. What's wrong with my RefCounting? ``` static PyObject* __binParse_getDBHeader(PyObject *self, PyObject *args){ PyObject *o; //generic object PyObject* pyDB = NULL; //this has to be a py file object if (!PyArg_ParseTuple(args, "O", &pyDB)){ return NULL; } else { Py_INCREF(pyDB); if (!PyFile_Check(pyDB)){ Py_DECREF(pyDB); PyErr_SetString(PyExc_IOError, "argument 1 must be open file handle"); return NULL; } } FILE *fhDB = PyFile_AsFile(pyDB); long offset = 0; DB_HEADER *pdbHeader = malloc(sizeof(DB_HEADER)); fseek(fhDB,offset,SEEK_SET); //at the beginning fread(pdbHeader, 1, sizeof(DB_HEADER), fhDB ); if (ferror(fhDB)){ fclose(fhDB); Py_DECREF(pyDB); PyErr_SetString(PyExc_IOError, "failed reading database header"); return NULL; } Py_DECREF(pyDB); PyObject *pyDBHeader = PyDict_New(); Py_INCREF(pyDBHeader); o=PyInt_FromLong(pdbHeader->version_number); PyDict_SetItemString(pyDBHeader, "version", o); Py_DECREF(o); PyObject *pyTimeList = PyList_New(0); Py_INCREF(pyTimeList); int i; for (i=0; i<NUM_DRAWERS; i++){ //epochs o=PyInt_FromLong(pdbHeader->last_good_test[i]); PyList_Append(pyTimeList, o); Py_DECREF(o); } PyDict_SetItemString(pyDBHeader, "lastTest", pyTimeList); Py_DECREF(pyTimeList); o=PyInt_FromLong(pdbHeader->temp); PyDict_SetItemString(pyDBHeader, "temp", o); Py_DECREF(o); free(pdbHeader); return (pyDBHeader); } ``` Thanks for taking a look, LarsenMTL
`PyDict_New()` returns a new reference, check the [docs](http://docs.python.org/c-api/dict.html) for `PyDict`. So if you increase the refcount immediately after creating it, you have two references to it. One is transferred to the caller when you return it as a result value, but the other one never goes away. You also don't need to incref `pyTimeList`. It's yours when you create it. However, you need to decref it, but you only decref it once, so it's leaked as well. You also don't need to call `Py_INCREF` on `pyDB`. It's a borrowed reference and it won't go away as long as your function does not return, because it's still referenced in a lower stack frame. Only if you want to keep the reference in another structure somewhere do you need to increase the refcount. Cf. the [API docs](http://docs.python.org/c-api/arg.html)
How does Django Know the Order to Render Form Fields?
350,799
70
2008-12-08T20:37:44Z
350,913
38
2008-12-08T21:12:56Z
[ "python", "django", "class", "django-forms", "contacts" ]
If I have a Django form such as:

```
class ContactForm(forms.Form):
    subject = forms.CharField(max_length=100)
    message = forms.CharField()
    sender = forms.EmailField()
```

And I call the as\_table() method of an instance of this form, Django will render the fields in the same order as specified above. My question is how does Django know the order that class variables were defined? (Also how do I override this order, for example when I want to add a field from the class's `__init__` method?)
I went ahead and answered my own question. Here's the answer for future reference: In Django `form.py` does some dark magic using the `__new__` method to load your class variables ultimately into `self.fields` in the order defined in the class. `self.fields` is a Django `SortedDict` instance (defined in `datastructures.py`). So to override this, say in my example you wanted sender to come first but needed to add it in an `__init__` method, you would do:

```
class ContactForm(forms.Form):
    subject = forms.CharField(max_length=100)
    message = forms.CharField()

    def __init__(self, *args, **kwargs):
        forms.Form.__init__(self, *args, **kwargs)
        # first argument, index, is the position of the field you want it to come before
        self.fields.insert(0, 'sender', forms.EmailField(initial=str(time.time())))
```
How does Django Know the Order to Render Form Fields?
350,799
70
2008-12-08T20:37:44Z
1,191,310
86
2009-07-28T00:12:20Z
[ "python", "django", "class", "django-forms", "contacts" ]
If I have a Django form such as:

```
class ContactForm(forms.Form):
    subject = forms.CharField(max_length=100)
    message = forms.CharField()
    sender = forms.EmailField()
```

And I call the as\_table() method of an instance of this form, Django will render the fields in the same order as specified above. My question is how does Django know the order that class variables were defined? (Also how do I override this order, for example when I want to add a field from the class's `__init__` method?)
**[NOTE: this answer is now somewhat outdated - please see the discussion below it].** If f is a form, its fields are f.fields, which is a `django.utils.datastructures.SortedDict` (it presents the items in the order they are added). After form construction f.fields has a keyOrder attribute, which is a list containing the field names in the order they should be presented. You can set this to the correct ordering (though you need to exercise care to ensure you don't omit items or add extras). Here's an example I just created in my current project: ``` class PrivEdit(ModelForm): def __init__(self, *args, **kw): super(ModelForm, self).__init__(*args, **kw) self.fields.keyOrder = [ 'super_user', 'all_districts', 'multi_district', 'all_schools', 'manage_users', 'direct_login', 'student_detail', 'license'] class Meta: model = Privilege ```
How does Django Know the Order to Render Form Fields?
350,799
70
2008-12-08T20:37:44Z
5,747,259
11
2011-04-21T16:53:43Z
[ "python", "django", "class", "django-forms", "contacts" ]
If I have a Django form such as:

```
class ContactForm(forms.Form):
    subject = forms.CharField(max_length=100)
    message = forms.CharField()
    sender = forms.EmailField()
```

And I call the as\_table() method of an instance of this form, Django will render the fields in the same order as specified above. My question is how does Django know the order that class variables were defined? (Also how do I override this order, for example when I want to add a field from the class's `__init__` method?)
Fields are listed in the order they are defined in ModelClass.\_meta.fields. But if you want to change the order in a Form, you can do so using the keyOrder attribute. For example:

```
class ContestForm(ModelForm):
    class Meta:
        model = Contest
        exclude = ('create_date', 'company')

    def __init__(self, *args, **kwargs):
        super(ContestForm, self).__init__(*args, **kwargs)
        self.fields.keyOrder = [
            'name',
            'description',
            'image',
            'video_link',
            'category']
```
How does Django Know the Order to Render Form Fields?
350,799
70
2008-12-08T20:37:44Z
34,502,078
7
2015-12-28T23:05:23Z
[ "python", "django", "class", "django-forms", "contacts" ]
If I have a Django form such as:

```
class ContactForm(forms.Form):
    subject = forms.CharField(max_length=100)
    message = forms.CharField()
    sender = forms.EmailField()
```

And I call the as\_table() method of an instance of this form, Django will render the fields in the same order as specified above. My question is how does Django know the order that class variables were defined? (Also how do I override this order, for example when I want to add a field from the class's `__init__` method?)
New to Django 1.9 is **[Form.field\_order](https://docs.djangoproject.com/en/1.9/ref/forms/api/#django.forms.Form.field_order)** and **[Form.order\_fields()](https://docs.djangoproject.com/en/1.9/ref/forms/api/#django.forms.Form.order_fields)**.
What does the function set use to check if two objects are different?
351,271
6
2008-12-08T23:02:23Z
351,287
13
2008-12-08T23:08:24Z
[ "python", "methods", "set" ]
Simple code:

```
>>> set([2,2,1,2,2,2,3,3,5,1])
set([1, 2, 3, 5])
```

Ok, in the resulting set there are no duplicates. What if the objects in the list are not int but are some type defined by me? What method does it check to understand if they are different? I implemented \_\_eq\_\_ and \_\_cmp\_\_ with some objects, but **set** doesn't seem to use them :\ Does anyone know how to solve this?
According to the [set documentation](http://docs.python.org/library/stdtypes.html#set-types-set-frozenset), the elements must be [hashable](http://docs.python.org/glossary.html#term-hashable). An object is hashable if it has a hash value which never changes during its lifetime (it needs a `__hash__()` method), and can be compared to other objects (it needs an `__eq__()` or `__cmp__()` method). Hashable objects which compare equal must have the same hash value. **EDIT**: added proper Hashable definition thanks to Roberto
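To make this concrete, a minimal sketch: once a class defines `__eq__` together with a consistent `__hash__`, `set` starts collapsing equal instances.

```python
class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)

    def __hash__(self):
        # equal objects must hash equal, so hash the tuple we compare by
        return hash((self.x, self.y))

print(len(set([Point(1, 2), Point(1, 2), Point(3, 4)])))  # 2
```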
How do I mock an IMAP server in Python, despite extreme laziness?
351,656
9
2008-12-09T02:57:49Z
351,675
8
2008-12-09T03:14:39Z
[ "python", "testing", "imap", "mocking" ]
I'm curious to know if there is an easy way to mock an IMAP server (a la the `imaplib` module) in Python, *without* doing a lot of work. Is there a pre-existing solution? Ideally I could connect to the existing IMAP server, do a dump, and have the mock server run off the real mailbox/email structure. Some background into the laziness: I have a nasty feeling that this small script I'm writing will grow over time and would *like* to create a proper testing environment, but given that it might *not* grow over time, I don't want to do much work to get the mock server running.
I found it quite easy to write an IMAP server in twisted last time I tried. It comes with support for writing IMAP servers and you have a huge amount of flexibility.
Python imaplib Gmail authenticate failure
351,927
7
2008-12-09T06:07:50Z
351,934
8
2008-12-09T06:10:52Z
[ "python", "gmail", "imap" ]
I just ran into an issue with Python's imaplib and Gmail's authentication mechanism: ``` >>> import imaplib >>> imap = imaplib.IMAP4_SSL('imap.gmail.com', 993) >>> imap.authenticate('[email protected]', 'Bob Dole likes your style!') Traceback (most recent call last): ... imaplib.error: AUTHENTICATE command error: BAD ['TODO (not supported yet) 31if3458825wff.5'] ``` If authentication is unsupported, how does one log in?
Instead of ``` >>> imap.authenticate('[email protected]', 'Bob Dole likes your style!') ``` use ``` >>> imap.login('[email protected]', 'Bob Dole likes your style!') ```
Capitalize a string
352,478
25
2008-12-09T11:42:47Z
352,494
7
2008-12-09T11:47:59Z
[ "python", "string" ]
Does anyone know of a really simple way of capitalizing just the first letter of a string, regardless of the capitalization of the rest of the string? For example: ``` asimpletest -> Asimpletest aSimpleTest -> ASimpleTest ``` I would like to be able to do all string lengths as well.
``` str = str[0].upper() + str[1:] ``` This should work with every string, except for the empty string (`""`).
Capitalize a string
352,478
25
2008-12-09T11:42:47Z
352,513
49
2008-12-09T11:52:33Z
[ "python", "string" ]
Does anyone know of a really simple way of capitalizing just the first letter of a string, regardless of the capitalization of the rest of the string? For example: ``` asimpletest -> Asimpletest aSimpleTest -> ASimpleTest ``` I would like to be able to do all string lengths as well.
@[saua](#352494) is right, and ``` str = str[:1].upper() + str[1:] ``` will work for any string
Capitalize a string
352,478
25
2008-12-09T11:42:47Z
14,162,785
9
2013-01-04T18:30:46Z
[ "python", "string" ]
Does anyone know of a really simple way of capitalizing just the first letter of a string, regardless of the capitalization of the rest of the string? For example: ``` asimpletest -> Asimpletest aSimpleTest -> ASimpleTest ``` I would like to be able to do all string lengths as well.
What about `your_string.title()`? e.g. `"banana".title() -> Banana`
Capitalize a string
352,478
25
2008-12-09T11:42:47Z
16,212,385
63
2013-04-25T10:29:58Z
[ "python", "string" ]
Does anyone know of a really simple way of capitalizing just the first letter of a string, regardless of the capitalization of the rest of the string? For example: ``` asimpletest -> Asimpletest aSimpleTest -> ASimpleTest ``` I would like to be able to do all string lengths as well.
``` >>> b = "my name" >>> b.capitalize() 'My name' >>> b.title() 'My Name' ```
Extending builtin classes in python
352,537
17
2008-12-09T12:01:30Z
352,546
19
2008-12-09T12:08:35Z
[ "python" ]
How can I extend a builtin class in python? I would like to add a method to the str class. I've done some searching but all I'm finding is older posts, I'm hoping someone knows of something newer.
Just subclass the type ``` >>> class X(str): ... def myMethod( self ): ... return int(self) ... >>> s=X("Hi Mom") >>> s.lower() 'hi mom' >>> s.myMethod() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 3, in myMethod ValueError: invalid literal for int() with base 10: 'Hi Mom' >>> z=X("271828") >>> z.lower() '271828' >>> z.myMethod() 271828 ```
Weighted random selection with and without replacement
352,670
43
2008-12-09T13:15:00Z
353,576
31
2008-12-09T17:27:01Z
[ "python", "algorithm", "random", "random-sample" ]
Recently I needed to do weighted random selection of elements from a list, both with and without replacement. While there are well known and good algorithms for unweighted selection, and some for weighted selection without replacement (such as modifications of the reservoir algorithm), I couldn't find any good algorithms for weighted selection with replacement. I also wanted to avoid the reservoir method, as I was selecting a significant fraction of the list, which is small enough to hold in memory. Does anyone have any suggestions on the best approach in this situation? I have my own solutions, but I'm hoping to find something more efficient, simpler, or both.
One of the fastest ways to make many with-replacement samples from an unchanging list is the alias method. The core intuition is that we can create a set of equal-sized bins for the weighted list that can be indexed very efficiently through bit operations, to avoid a binary search. It will turn out that, done correctly, we will need to store only two items from the original list per bin, and thus can represent the split with a single percentage.

Let us take the example of five equally weighted choices, `(a:1, b:1, c:1, d:1, e:1)`

To create the alias lookup:

1. Normalize the weights such that they sum to `1.0`. `(a:0.2 b:0.2 c:0.2 d:0.2 e:0.2)` This is the probability of choosing each weight.
2. Find the smallest power of 2 greater than or equal to the number of variables, and create this number of partitions, `|p|`. Each partition represents a probability mass of `1/|p|`. In this case, we create `8` partitions, each able to contain `0.125`.
3. Take the variable with the least remaining weight, and place as much of its mass as possible in an empty partition. In this example, we see that `a` fills the first partition. `(p1{a|null,1.0},p2,p3,p4,p5,p6,p7,p8)` with `(a:0.075, b:0.2 c:0.2 d:0.2 e:0.2)`
4. If the partition is not filled, take the variable with the most weight, and fill the partition with that variable.

Repeat steps 3 and 4 until none of the weight from the original distribution remains to be assigned. For example, if we run another iteration of 3 and 4, we see `(p1{a|null,1.0},p2{a|b,0.6},p3,p4,p5,p6,p7,p8)` with `(a:0, b:0.15 c:0.2 d:0.2 e:0.2)` left to be assigned

At runtime:

1. Get a `U(0,1)` random number, say binary `0.001100000`
2. Bitshift it `lg2(p)`, finding the index partition. Thus, we shift it by `3`, yielding `001.1`, or position 1, and thus partition 2.
3. If the partition is split, use the decimal portion of the shifted random number to decide the split. In this case, the value is `0.5`, and `0.5 < 0.6`, so return `a`.
[Here is some code and another explanation](http://prxq.wordpress.com/2006/04/17/the-alias-method/), but unfortunately it doesn't use the bitshifting technique, nor have I actually verified it.
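A compact sketch of the construction and sampling steps above, using the common two-array variant of the alias method rather than the bit-shift indexing described (the names `build_alias_table` and `alias_draw` are my own):

```python
import random

def build_alias_table(weights):
    """Build (prob, alias) tables for O(1) weighted sampling with replacement."""
    n = len(weights)
    total = float(sum(weights))
    # Scale so the average bin mass is exactly 1.0.
    scaled = [w * n / total for w in weights]
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    prob = [0.0] * n
    alias = [0] * n
    # Pair each under-full bin with an over-full one, as in steps 3 and 4.
    while small and large:
        s = small.pop()
        l = large.pop()
        prob[s] = scaled[s]
        alias[s] = l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    # Leftover bins (up to rounding) are entirely their own item.
    for leftovers in (large, small):
        for i in leftovers:
            prob[i] = 1.0
    return prob, alias

def alias_draw(prob, alias, rng=random):
    """Draw one index: pick a bin uniformly, then resolve its split."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```

After the one-time O(n) table build, each draw costs one uniform integer, one uniform float, and one comparison, which is what makes this attractive when many with-replacement samples are needed.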
How to add file extensions based on file type on Linux/Unix?
352,837
11
2008-12-09T14:10:18Z
352,846
12
2008-12-09T14:13:59Z
[ "python", "linux", "bash", "unix", "shell" ]
This is a question regarding Unix shell scripting (any shell), but any other "standard" scripting language solution would also be appreciated: I have a directory full of files where the filenames are hash values like this: ``` fd73d0cf8ee68073dce270cf7e770b97 fec8047a9186fdcc98fdbfc0ea6075ee ``` These files have different original file types such as png, zip, doc, pdf etc. Can anybody provide a script that would rename the files so they get their appropriate file extension, probably based on the output of the `file` command? ## Answer: [J.F. Sebastian's](http://stackoverflow.com/questions/352837/how-to-add-file-extensions-based-on-file-type-on-linuxunix#352973) script will work for both output of the filenames as well as the actual renaming.
You can use ``` file -i filename ``` to get a MIME-type. You could potentially look up the type in a list and then append an extension. You can find a [list of MIME-types](http://www.iana.org/assignments/media-types/media-types.xhtml) and [example file extensions](http://svn.apache.org/repos/asf/httpd/httpd/trunk/docs/conf/mime.types) on the net.
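A minimal Python sketch of the lookup step described above; the function name `extension_for_mime` is my own, but the standard `mimetypes` module already carries a table like the ones linked:

```python
import mimetypes

def extension_for_mime(mime, default='.bin'):
    """Map a MIME type, as printed by `file -i`, to a file extension.

    `mime` may carry parameters such as '; charset=binary', so strip
    them before the lookup; unknown types fall back to `default`.
    """
    base = mime.split(';', 1)[0].strip().lower()
    ext = mimetypes.guess_extension(base, strict=False)
    return ext if ext is not None else default
```

For example, `extension_for_mime('application/pdf')` yields `'.pdf'`, which could then be appended to each hash-named file.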
How to add file extensions based on file type on Linux/Unix?
352,837
11
2008-12-09T14:10:18Z
352,919
7
2008-12-09T14:33:48Z
[ "python", "linux", "bash", "unix", "shell" ]
This is a question regarding Unix shell scripting (any shell), but any other "standard" scripting language solution would also be appreciated: I have a directory full of files where the filenames are hash values like this: ``` fd73d0cf8ee68073dce270cf7e770b97 fec8047a9186fdcc98fdbfc0ea6075ee ``` These files have different original file types such as png, zip, doc, pdf etc. Can anybody provide a script that would rename the files so they get their appropriate file extension, probably based on the output of the `file` command? ## Answer: [J.F. Sebastian's](http://stackoverflow.com/questions/352837/how-to-add-file-extensions-based-on-file-type-on-linuxunix#352973) script will work for both output of the filenames as well as the actual renaming.
Following csl's response: > You can use > > ``` > file -i filename > ``` > > to get a MIME-type. > You could potentially lookup the type > in a list and then append an > extension. You can find list of > MIME-types and suggested file > extensions on the net. I'd suggest you write a script that takes the output of `file -i filename`, and returns an extension (split on spaces, find the '/', look up that term in a table file) in your language of choice - a few lines at most. Then you can do something like: ``` ls | while read f; do mv "$f" "$f".`file -i "$f" | get_extension.py`; done ``` in bash, or throw that in a bash script. Or make the get\_extension script bigger, but that makes it less useful next time you want the relevant extension. Edit: change from `for f in *` to `ls | while read f` because the latter handles filenames with spaces in (a particular nightmare on Windows).
How to add file extensions based on file type on Linux/Unix?
352,837
11
2008-12-09T14:10:18Z
352,973
9
2008-12-09T14:47:24Z
[ "python", "linux", "bash", "unix", "shell" ]
This is a question regarding Unix shell scripting (any shell), but any other "standard" scripting language solution would also be appreciated: I have a directory full of files where the filenames are hash values like this: ``` fd73d0cf8ee68073dce270cf7e770b97 fec8047a9186fdcc98fdbfc0ea6075ee ``` These files have different original file types such as png, zip, doc, pdf etc. Can anybody provide a script that would rename the files so they get their appropriate file extension, probably based on the output of the `file` command? ## Answer: [J.F. Sebastian's](http://stackoverflow.com/questions/352837/how-to-add-file-extensions-based-on-file-type-on-linuxunix#352973) script will work for both output of the filenames as well as the actual renaming.
Here's mimetypes' version: ``` #!/usr/bin/env python """It is a `filename -> filename.ext` filter. `ext` is mime-based. """ import fileinput import mimetypes import os import sys from subprocess import Popen, PIPE if len(sys.argv) > 1 and sys.argv[1] == '--rename': do_rename = True del sys.argv[1] else: do_rename = False for filename in (line.rstrip() for line in fileinput.input()): output, _ = Popen(['file', '-bi', filename], stdout=PIPE).communicate() mime = output.split(';', 1)[0].lower().strip() ext = mimetypes.guess_extension(mime, strict=False) if ext is None: ext = os.path.extsep + 'undefined' filename_ext = filename + ext print filename_ext if do_rename: os.rename(filename, filename_ext) ``` Example: ``` $ ls *.file? | python add-ext.py --rename avi.file.avi djvu.file.undefined doc.file.dot gif.file.gif html.file.html ico.file.obj jpg.file.jpe m3u.file.ksh mp3.file.mp3 mpg.file.m1v pdf.file.pdf pdf.file2.pdf pdf.file3.pdf png.file.png tar.bz2.file.undefined ``` --- Following @Phil H's response that follows @csl' response: ``` #!/usr/bin/env python """It is a `filename -> filename.ext` filter. `ext` is mime-based. 
""" # Mapping of mime-types to extensions is taken form here: # http://as3corelib.googlecode.com/svn/trunk/src/com/adobe/net/MimeTypeMap.as mime2exts_list = [ ["application/andrew-inset","ez"], ["application/atom+xml","atom"], ["application/mac-binhex40","hqx"], ["application/mac-compactpro","cpt"], ["application/mathml+xml","mathml"], ["application/msword","doc"], ["application/octet-stream","bin","dms","lha","lzh","exe","class","so","dll","dmg"], ["application/oda","oda"], ["application/ogg","ogg"], ["application/pdf","pdf"], ["application/postscript","ai","eps","ps"], ["application/rdf+xml","rdf"], ["application/smil","smi","smil"], ["application/srgs","gram"], ["application/srgs+xml","grxml"], ["application/vnd.adobe.apollo-application-installer-package+zip","air"], ["application/vnd.mif","mif"], ["application/vnd.mozilla.xul+xml","xul"], ["application/vnd.ms-excel","xls"], ["application/vnd.ms-powerpoint","ppt"], ["application/vnd.rn-realmedia","rm"], ["application/vnd.wap.wbxml","wbxml"], ["application/vnd.wap.wmlc","wmlc"], ["application/vnd.wap.wmlscriptc","wmlsc"], ["application/voicexml+xml","vxml"], ["application/x-bcpio","bcpio"], ["application/x-cdlink","vcd"], ["application/x-chess-pgn","pgn"], ["application/x-cpio","cpio"], ["application/x-csh","csh"], ["application/x-director","dcr","dir","dxr"], ["application/x-dvi","dvi"], ["application/x-futuresplash","spl"], ["application/x-gtar","gtar"], ["application/x-hdf","hdf"], ["application/x-javascript","js"], ["application/x-koan","skp","skd","skt","skm"], ["application/x-latex","latex"], ["application/x-netcdf","nc","cdf"], ["application/x-sh","sh"], ["application/x-shar","shar"], ["application/x-shockwave-flash","swf"], ["application/x-stuffit","sit"], ["application/x-sv4cpio","sv4cpio"], ["application/x-sv4crc","sv4crc"], ["application/x-tar","tar"], ["application/x-tcl","tcl"], ["application/x-tex","tex"], ["application/x-texinfo","texinfo","texi"], ["application/x-troff","t","tr","roff"], 
["application/x-troff-man","man"], ["application/x-troff-me","me"], ["application/x-troff-ms","ms"], ["application/x-ustar","ustar"], ["application/x-wais-source","src"], ["application/xhtml+xml","xhtml","xht"], ["application/xml","xml","xsl"], ["application/xml-dtd","dtd"], ["application/xslt+xml","xslt"], ["application/zip","zip"], ["audio/basic","au","snd"], ["audio/midi","mid","midi","kar"], ["audio/mpeg","mp3","mpga","mp2"], ["audio/x-aiff","aif","aiff","aifc"], ["audio/x-mpegurl","m3u"], ["audio/x-pn-realaudio","ram","ra"], ["audio/x-wav","wav"], ["chemical/x-pdb","pdb"], ["chemical/x-xyz","xyz"], ["image/bmp","bmp"], ["image/cgm","cgm"], ["image/gif","gif"], ["image/ief","ief"], ["image/jpeg","jpg","jpeg","jpe"], ["image/png","png"], ["image/svg+xml","svg"], ["image/tiff","tiff","tif"], ["image/vnd.djvu","djvu","djv"], ["image/vnd.wap.wbmp","wbmp"], ["image/x-cmu-raster","ras"], ["image/x-icon","ico"], ["image/x-portable-anymap","pnm"], ["image/x-portable-bitmap","pbm"], ["image/x-portable-graymap","pgm"], ["image/x-portable-pixmap","ppm"], ["image/x-rgb","rgb"], ["image/x-xbitmap","xbm"], ["image/x-xpixmap","xpm"], ["image/x-xwindowdump","xwd"], ["model/iges","igs","iges"], ["model/mesh","msh","mesh","silo"], ["model/vrml","wrl","vrml"], ["text/calendar","ics","ifb"], ["text/css","css"], ["text/html","html","htm"], ["text/plain","txt","asc"], ["text/richtext","rtx"], ["text/rtf","rtf"], ["text/sgml","sgml","sgm"], ["text/tab-separated-values","tsv"], ["text/vnd.wap.wml","wml"], ["text/vnd.wap.wmlscript","wmls"], ["text/x-setext","etx"], ["video/mpeg","mpg","mpeg","mpe"], ["video/quicktime","mov","qt"], ["video/vnd.mpegurl","m4u","mxu"], ["video/x-flv","flv"], ["video/x-msvideo","avi"], ["video/x-sgi-movie","movie"], ["x-conference/x-cooltalk","ice"]] #NOTE: take only the first extension mime2ext = dict(x[:2] for x in mime2exts_list) if __name__ == '__main__': import fileinput, os.path from subprocess import Popen, PIPE for filename in (line.rstrip() for 
line in fileinput.input()): output, _ = Popen(['file', '-bi', filename], stdout=PIPE).communicate() mime = output.split(';', 1)[0].lower().strip() print filename + os.path.extsep + mime2ext.get(mime, 'undefined') ``` --- Here's a snippet for older Python versions (not tested): ``` #NOTE: take only the first extension mime2ext = {} for x in mime2exts_list: mime2ext[x[0]] = x[1] if __name__ == '__main__': import os import sys # this version supports only stdin (part of fileinput.input() functionality) lines = sys.stdin.read().split('\n') for line in lines: filename = line.rstrip() output = os.popen('file -bi ' + filename).read() mime = output.split(';')[0].lower().strip() try: ext = mime2ext[mime] except KeyError: ext = 'undefined' print filename + '.' + ext ``` It should work on Python 2.3.5 (I guess).
Beginner: Trying to understand how apps interact in Django
353,571
23
2008-12-09T17:25:23Z
353,667
20
2008-12-09T18:01:25Z
[ "python", "django", "django-models", "django-apps" ]
I just got done working through the Django tutorials for the second time, and am understanding things much more clearly now. However, I'm still unclear how apps inside a site interact with one another. For example, let's say I'm writing a blog application (a rather popular activity, apparently). Blog posts and comments tend to go together, and yet they are distinct enough that they should be built into separate apps, as is the general philosophy of Django development. Consider the following example. In reality I would not actually write the comment app myself, as good code for that already exists on the web, but this is for demonstration/practice purposes: **mysite/blog/models.py** ``` from django.db import models class post(models.Model): title = models.CharField(max_length=200) author = models.CharField(max_length=200) content = models.TextField() ``` **mysite/comments/models.py** ``` from django.db import models from mysite.blog.models import post class comment(models.Model): id = models.AutoField() post = models.ForeignKey(post) author = models.CharField(max_length=200) text = models.TextField() ``` Is what I wrote above, importing a model from another app and setting it as a foreign key, how Django apps interact? Or is there a different/better method for the apps that comprise a site to interact? **Update** Per the recommendation in one response, I'm reading the documentation for contrib.contenttypes. If I'm reading this correctly, I could rewrite my example comment app like this: ``` from django.db import models from django.contrib.contenttypes.models import ContentType from django.contrib.contenttypes import generic class comment(models.Model): id = models.AutoField() author = models.CharField(max_length=200) text = models.TextField() content_type = models.ForeignKey(ContentType) content_object = generic.GenericForeignKey(content_type, id) ``` Would this be correct?
Take a look at django's built-in [contenttypes framework](http://docs.djangoproject.com/en/dev/ref/contrib/contenttypes/#ref-contrib-contenttypes): `django.contrib.contenttypes` It allows you to develop your applications as stand-alone units. This is what the django developers used to allow django's built-in [comment framework](http://docs.djangoproject.com/en/dev/ref/contrib/comments/#ref-contrib-comments-index) to attach a comment to any model in your project. For instance, if you have some content object that you want to "attach" to other content objects of different types, like allowing each user to leave a "favorite" star on a blog post, image, or user profile, you can create a `Favorite` model with a [generic relation field](http://docs.djangoproject.com/en/dev/ref/contrib/contenttypes/#generic-relations) like so: ``` from django.db import models from django.contrib.auth.models import User from django.contrib.contenttypes.models import ContentType from django.contrib.contenttypes import generic class Favorite(models.Model): user = models.ForeignKey(User) content_type = models.ForeignKey(ContentType) object_id = models.PositiveIntegerField() content_object = generic.GenericForeignKey('content_type', 'object_id') ``` In this way you can add a `Favorite` star from any user to any model in your project. If you want to add API access via the recipient model class, you can either add a [reverse generic relation field](http://docs.djangoproject.com/en/dev/ref/contrib/contenttypes/#reverse-generic-relations) on the recipient model (although this would be "coupling" the two models, which you said you wanted to avoid), or do the lookup through the `Favorite` model with the `content_type` and `object_id` of the recipient instance; see the [official docs](http://docs.djangoproject.com/en/dev/ref/contrib/contenttypes/#reverse-generic-relations) for an example.
How do I check if a string is a number (float) in Python?
354,038
872
2008-12-09T20:03:42Z
354,073
1,069
2008-12-09T20:15:18Z
[ "python", "casting", "floating-point", "type-conversion" ]
What is the best possible way to check if a string can be represented as a number in Python? The function I currently have is: ``` def is_number(s): try: float(s) return True except ValueError: return False ``` Which is not only ugly and slow, but also seems clunky. However, I haven't found a better method, because calling `float` in the main function is even worse.
In case you are looking to parse (positive, unsigned) integers instead of floats, you can use the [`isdigit()`](https://docs.python.org/2/library/stdtypes.html#str.isdigit) method of string objects. ``` >>> a = "03523" >>> a.isdigit() True >>> b = "963spam" >>> b.isdigit() False ``` [String Methods - `isdigit()`](https://docs.python.org/2/library/stdtypes.html#string-methods) There's also something on Unicode strings, which I'm not too familiar with: [Unicode - `isnumeric()`](https://docs.python.org/2/library/stdtypes.html#unicode.isnumeric)
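One caveat of `isdigit()` is that it rejects a leading sign, so `'-42'` fails the check. A small sketch extending it to optionally signed integers (the name `is_int_string` is my own):

```python
def is_int_string(s):
    """True for optionally signed integer strings, e.g. '-42' or '+7'.

    str.isdigit() alone rejects a leading sign, so strip one first.
    """
    s = s.strip()
    if s and s[0] in '+-':
        s = s[1:]
    return s.isdigit()
```

Note that a bare sign (`'-'`) correctly fails, since an empty string is not a digit string.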
How do I check if a string is a number (float) in Python?
354,038
872
2008-12-09T20:03:42Z
354,130
429
2008-12-09T20:30:48Z
[ "python", "casting", "floating-point", "type-conversion" ]
What is the best possible way to check if a string can be represented as a number in Python? The function I currently have is: ``` def is_number(s): try: float(s) return True except ValueError: return False ``` Which is not only ugly and slow, but also seems clunky. However, I haven't found a better method, because calling `float` in the main function is even worse.
> Which, not only is ugly and slow I'd dispute both. A regex or other string parsing would be uglier and slower. I'm not sure that anything much could be faster than the above. It calls the function and returns. `try`/`except` doesn't introduce much overhead because the most common exception is caught without an extensive search of stack frames. The issue is that any numeric conversion function has two kinds of results: * A number, if the number is valid * A status code (e.g., via errno) or exception to show that no valid number could be parsed. C (as an example) hacks around this in a number of ways. Python lays it out clearly and explicitly. I think your code for doing this is perfect.
How do I check if a string is a number (float) in Python?
354,038
872
2008-12-09T20:03:42Z
354,134
9
2008-12-09T20:31:49Z
[ "python", "casting", "floating-point", "type-conversion" ]
What is the best possible way to check if a string can be represented as a number in Python? The function I currently have is: ``` def is_number(s): try: float(s) return True except ValueError: return False ``` Which is not only ugly and slow, but also seems clunky. However, I haven't found a better method, because calling `float` in the main function is even worse.
Casting to float and catching ValueError is probably the fastest way, since float() is specifically meant for just that. Anything else that requires string parsing (regex, etc) will likely be slower due to the fact that it's not tuned for this operation. My $0.02.
How do I check if a string is a number (float) in Python?
354,038
872
2008-12-09T20:03:42Z
358,479
33
2008-12-11T04:56:26Z
[ "python", "casting", "floating-point", "type-conversion" ]
What is the best possible way to check if a string can be represented as a number in Python? The function I currently have is: ``` def is_number(s): try: float(s) return True except ValueError: return False ``` Which is not only ugly and slow, but also seems clunky. However, I haven't found a better method, because calling `float` in the main function is even worse.
> Which, not only is ugly and slow, seems clunky. It may take some getting used to, but this is the pythonic way of doing it. As has been already pointed out, the alternatives are worse. But there is one other advantage of doing things this way: polymorphism. The central idea behind duck typing is that "if it walks and talks like a duck, then it's a duck." What if you decide that you need to subclass string so that you can change how you determine if something can be converted into a float? Or what if you decide to test some other object entirely? You can do these things without having to change the above code. Other languages solve these problems by using interfaces. I'll save the analysis of which solution is better for another thread. The point, though, is that python is decidedly on the duck typing side of the equation, and you're probably going to have to get used to syntax like this if you plan on doing much programming in Python (but that doesn't mean you have to like it of course). One other thing you might want to take into consideration: Python is pretty fast in throwing and catching exceptions compared to a lot of other languages (30x faster than .Net for instance). Heck, the language itself even throws exceptions to communicate non-exceptional, normal program conditions (every time you use a for loop). Thus, I wouldn't worry too much about the performance aspects of this code until you notice a significant problem.
How do I check if a string is a number (float) in Python?
354,038
872
2008-12-09T20:03:42Z
3,335,060
27
2010-07-26T13:10:15Z
[ "python", "casting", "floating-point", "type-conversion" ]
What is the best possible way to check if a string can be represented as a number in Python? The function I currently have is: ``` def is_number(s): try: float(s) return True except ValueError: return False ``` Which is not only ugly and slow, but also seems clunky. However, I haven't found a better method, because calling `float` in the main function is even worse.
Updated after Alfe pointed out you don't need to check for float separately as complex handles both: ``` def is_number(s): try: complex(s) # for int, long, float and complex except ValueError: return False return True ``` --- Previously said: In some rare cases you might also need to check for complex numbers (e.g. `1+2j`), which cannot be represented by a float: ``` def is_number(s): try: float(s) # for int, long and float except ValueError: try: complex(s) # for complex except ValueError: return False return True ```
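A quick demonstration of why the single `complex()` call suffices (my own demo, not part of the answer): it accepts every string `float()` accepts, plus complex literals that `float()` would reject:

```python
def is_number(s):
    # The simplified version from the answer: complex() covers
    # int, float and complex literals in one call.
    try:
        complex(s)
    except ValueError:
        return False
    return True

# Plain ints, floats and exponent notation pass, as do complex
# literals; arbitrary text fails.
samples = {'42': True, '3.14': True, '1e3': True,
           '1+2j': True, 'spam': False}
results = {s: is_number(s) for s in samples}
```

Note that `float('1+2j')` raises `ValueError`, which is why the earlier two-level version needed the nested try.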
How do I check if a string is a number (float) in Python?
354,038
872
2008-12-09T20:03:42Z
3,618,897
50
2010-09-01T14:06:08Z
[ "python", "casting", "floating-point", "type-conversion" ]
What is the best possible way to check if a string can be represented as a number in Python? The function I currently have is: ``` def is_number(s): try: float(s) return True except ValueError: return False ``` Which is not only ugly and slow, but also seems clunky. However, I haven't found a better method, because calling `float` in the main function is even worse.
There is one exception that you may want to take into account: the string `'NaN'`. If you want is\_number to return `False` for `'NaN'`, this code will not work, as Python converts it to its representation of a number that is not a number (talk about identity issues): ``` >>> float('NaN') nan ``` Otherwise, I should actually thank you for the piece of code I now use extensively. :) G.
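If you do want to reject `'NaN'` (and, for the same reason, `'inf'` and `'-inf'`, which `float()` also happily parses), one hedged extension of the original function; the name `is_finite_number` is my own:

```python
import math

def is_finite_number(s):
    """Like is_number(), but returns False for 'NaN', 'inf' and '-inf'."""
    try:
        x = float(s)
    except ValueError:
        return False
    # float() parsed it, but it may not be a finite number.
    return not math.isnan(x) and not math.isinf(x)
```

This keeps the try/except idiom from the question while filtering out the special IEEE 754 values after the conversion succeeds.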
How do I check if a string is a number (float) in Python?
354,038
872
2008-12-09T20:03:42Z
9,337,733
12
2012-02-18T01:35:32Z
[ "python", "casting", "floating-point", "type-conversion" ]
What is the best possible way to check if a string can be represented as a number in Python? The function I currently have is: ``` def is_number(s): try: float(s) return True except ValueError: return False ``` Which is not only ugly and slow, but also seems clunky. However, I haven't found a better method, because calling `float` in the main function is even worse.
## Just Mimic C# **In C# there are two different functions that handle parsing of scalar values:** * Float.Parse() * Float.TryParse() **float.parse():** ``` def parse(string): try: return float(string) except ValueError: raise TypeError ``` *Note: If you're wondering why I changed the exception to a TypeError, [here's the documentation](http://docs.python.org/library/exceptions.html).* **float.try\_parse():** ``` def try_parse(string, fail=None): try: return float(string) except ValueError: return fail ``` *Note: You don't want to return the boolean 'False' because that's still a value type. None is better because it indicates failure. Of course, if you want something different you can change the fail parameter to whatever you want.* To extend float to include the 'parse()' and 'try\_parse()' you'll need to monkeypatch the 'float' class to add these methods. If you want to respect pre-existing functions the code should be something like: ``` def monkey_patch(): # NOTE: CPython refuses attribute assignment on built-in types, # so in practice you'd need a float subclass for this to work. if not hasattr(float, 'parse'): float.parse = parse if not hasattr(float, 'try_parse'): float.try_parse = try_parse ``` *SideNote: I personally prefer to call it Monkey Punching because it feels like I'm abusing the language when I do this but YMMV.* **Usage:** ``` float.parse('giggity') # raises TypeError float.parse('54.3') # returns the scalar value 54.3 float.try_parse('twank') # returns None float.try_parse('32.2') # returns the scalar value 32.2 ``` *And the great Sage Pythonas said to the Holy See Sharpisus, "Anything you can do I can do better; I can do anything better than you."*
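Since CPython won't actually let you set attributes on the built-in `float` type, a plain module-level function is the practical form of the TryParse idea; here's a sketch (the name `try_parse_float` is my own) that also tolerates non-string input:

```python
def try_parse_float(string, fail=None):
    """A module-level stand-in for C#'s float.TryParse.

    Returns the parsed value, or `fail` when parsing is impossible.
    TypeError is caught too, so non-string input (e.g. None, a list)
    is safe rather than raising.
    """
    try:
        return float(string)
    except (ValueError, TypeError):
        return fail
```

Usage mirrors the answer's examples: `try_parse_float('32.2')` gives the value, `try_parse_float('twank')` gives `None`, and a custom sentinel can be supplied via `fail=`.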
How do I check if a string is a number (float) in Python?
354,038
872
2008-12-09T20:03:42Z
10,762,002
33
2012-05-25T22:22:32Z
[ "python", "casting", "floating-point", "type-conversion" ]
What is the best possible way to check if a string can be represented as a number in Python? The function I currently have is: ``` def is_number(s): try: float(s) return True except ValueError: return False ``` Which is not only ugly and slow, but also seems clunky. However, I haven't found a better method, because calling `float` in the main function is even worse.
how about this: ``` '3.14'.replace('.','',1).isdigit() ``` which will return true only if there is one or no '.' in the string of digits. ``` '3.14.5'.replace('.','',1).isdigit() ``` will return false edit: just saw another comment ... adding a `.replace(badstuff,'',maxnum_badstuff)` for other cases can be done. if you are passing salt and not arbitrary condiments (ref:[xkcd#974](http://xkcd.com/974/)) this will do fine :P
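The trick above rejects signed input and, by design, exponent notation like `'1e3'`. A small extension that at least handles one leading sign (the function name is my own, in the spirit of the answer's `.replace(badstuff,'',maxnum_badstuff)` suggestion):

```python
def looks_like_simple_float(s):
    """Extend the replace-the-dot trick to allow one leading sign.

    Still rejects exponent notation such as '1e3': only plain digit
    strings with at most one decimal point are accepted.
    """
    s = s.strip()
    if s and s[0] in '+-':
        s = s[1:]
    # Removing one '.' means a second '.' survives and fails isdigit().
    return s.replace('.', '', 1).isdigit()
```

This keeps the one-liner's spirit: no exceptions, no regex, just string methods.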
How do I check if a string is a number (float) in Python?
354,038
872
2008-12-09T20:03:42Z
15,205,926
9
2013-03-04T16:12:38Z
[ "python", "casting", "floating-point", "type-conversion" ]
What is the best possible way to check if a string can be represented as a number in Python? The function I currently have is: ``` def is_number(s): try: float(s) return True except ValueError: return False ``` Which is not only ugly and slow, but also seems clunky. However, I haven't found a better method, because calling `float` in the main function is even worse.
You can use Unicode strings; they have a method to do just what you want: ``` >>> s = u"345" >>> s.isnumeric() True ``` Or: ``` >>> s = "345" >>> u = unicode(s) >>> u.isnumeric() True ``` <http://www.tutorialspoint.com/python/string_isnumeric.htm> <http://docs.python.org/2/howto/unicode.html>
How do I check if a string is a number (float) in Python?
354,038
872
2008-12-09T20:03:42Z
25,299,619
11
2014-08-14T03:34:08Z
[ "python", "casting", "floating-point", "type-conversion" ]
What is the best possible way to check if a string can be represented as a number in Python? The function I currently have is: ``` def is_number(s): try: float(s) return True except ValueError: return False ``` Which is not only ugly and slow, but also seems clunky. However, I haven't found a better method, because calling `float` in the main function is even worse.
For strings of non-numbers, `try: except:` is actually slower than regular expressions. For strings of valid numbers, regex is slower. So, the appropriate method depends on your input. If you find that you are in a performance bind, you can use a new third-party module called [fastnumbers](https://pypi.python.org/pypi/fastnumbers) that provides a function called [isfloat](http://pythonhosted.org//fastnumbers/checks.html#isfloat). Full disclosure, I am the author. I have included its results in the timings below. --- ``` from __future__ import print_function import timeit prep_base = '''\ x = 'invalid' y = '5402' z = '4.754e3' ''' prep_try_method = '''\ def is_number_try(val): try: float(val) return True except ValueError: return False ''' prep_re_method = '''\ import re float_match = re.compile(r'[-+]?\d*\.?\d+(?:[eE][-+]?\d+)?$').match def is_number_re(val): return bool(float_match(val)) ''' fn_method = '''\ from fastnumbers import isfloat ''' print('Try with non-number strings', timeit.timeit('is_number_try(x)', prep_base + prep_try_method), 'seconds') print('Try with integer strings', timeit.timeit('is_number_try(y)', prep_base + prep_try_method), 'seconds') print('Try with float strings', timeit.timeit('is_number_try(z)', prep_base + prep_try_method), 'seconds') print() print('Regex with non-number strings', timeit.timeit('is_number_re(x)', prep_base + prep_re_method), 'seconds') print('Regex with integer strings', timeit.timeit('is_number_re(y)', prep_base + prep_re_method), 'seconds') print('Regex with float strings', timeit.timeit('is_number_re(z)', prep_base + prep_re_method), 'seconds') print() print('fastnumbers with non-number strings', timeit.timeit('isfloat(x)', prep_base + 'from fastnumbers import isfloat'), 'seconds') print('fastnumbers with integer strings', timeit.timeit('isfloat(y)', prep_base + 'from fastnumbers import isfloat'), 'seconds') print('fastnumbers with float strings', timeit.timeit('isfloat(z)', prep_base + 'from fastnumbers import isfloat'), 'seconds')
print() ``` --- ``` Try with non-number strings 2.39108395576 seconds Try with integer strings 0.375686168671 seconds Try with float strings 0.369210958481 seconds Regex with non-number strings 0.748660802841 seconds Regex with integer strings 1.02021503448 seconds Regex with float strings 1.08564686775 seconds fastnumbers with non-number strings 0.174362897873 seconds fastnumbers with integer strings 0.179651021957 seconds fastnumbers with float strings 0.20222902298 seconds ``` As you can see * `try: except:` was fast for numeric input but very slow for an invalid input * regex is very efficient when the input is invalid * `fastnumbers` wins in both cases