Dataset schema (column: type, min-max):
title: string, length 12-150
question_id: int64, 469-40.1M
question_score: int64, 2-5.52k
question_date: string date, 2008-08-02 15:11:16 to 2016-10-18 06:16:31
answer_id: int64, 536-40.1M
answer_score: int64, 7-8.38k
answer_date: string date, 2008-08-02 18:49:07 to 2016-10-18 06:19:33
tags: list, length 1-5
question_body_md: string, length 15-30.2k
answer_body_md: string, length 11-27.8k
String separation in required format, Pythonic way? (with or w/o Regex)
558,105
6
2009-02-17T18:22:07Z
558,392
13
2009-02-17T19:32:42Z
[ "regex", "string", "format", "python" ]
I have a string in the format: ``` t='@abc @def Hello this part is text' ``` I want to get this: ``` l=["abc", "def"] s='Hello this part is text' ``` I did this: ``` a=t[t.find(' ',t.rfind('@')):].strip() s=t[:t.find(' ',t.rfind('@'))].strip() b=a.split('@') l=[i.strip() for i in b][1:] ``` It works for the most part, but it fails when the text part has the '@'. E.g., when: ``` t='@abc @def My email is [email protected]' ``` it fails. The @names are at the beginning and there can be text after the @names, which may possibly contain @. Clearly I could initially append a space and find the first word without '@', but that doesn't seem an elegant solution. What is a Pythonic way of solving this?
Building unashamedly on MrTopf's effort: ``` import re rx = re.compile("((?:@\w+ +)+)(.*)") t='@abc @def @xyz Hello this part is text and my email is [email protected]' a,s = rx.match(t).groups() l = re.split('[@ ]+',a)[1:-1] print l print s ``` prints: > ['abc', 'def', 'xyz'] > Hello this part is text and my email is [email protected] --- Justly called to account by [hasen j](http://stackoverflow.com/users/35364/hasen-j), let me clarify how this works: ``` /@\w+ +/ ``` matches a single tag - @ followed by at least one alphanumeric or \_ followed by at least one space character. + is greedy, so if there is more than one space, it will grab them all. To match any number of these tags, we need to add a plus (one or more things) to the pattern for tag; so we need to group it with parentheses: ``` /(@\w+ +)+/ ``` which matches one-or-more tags, and, being greedy, matches all of them. However, those parentheses now fiddle around with our capture groups, so we undo that by making them into an anonymous group: ``` /(?:@\w+ +)+/ ``` Finally, we make that into a capture group and add another to sweep up the rest: ``` /((?:@\w+ +)+)(.*)/ ``` A last breakdown to sum up: ``` ((?:@\w+ +)+)(.*) (?:@\w+ +)+ ( @\w+ +) @\w+ + ``` --- Note that in reviewing this, I've improved it - \w didn't need to be in a set, and it now allows for multiple spaces between tags. Thanks, hasen-j!
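A modern Python 3 sketch of the same regex approach, packaged as a function (the email address is a hypothetical stand-in, since the one in the question is obfuscated):

```python
import re

def split_tags(t):
    # One match: the leading "@name " tags land in group 1, the rest in group 2.
    m = re.match(r'((?:@\w+ +)+)(.*)', t)
    if not m:
        return [], t
    tags, rest = m.groups()
    # findall pulls each bare name out of the tag block.
    return re.findall(r'@(\w+)', tags), rest

names, text = split_tags('@abc @def Hello, my email is foo@example.com')
```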
String separation in required format, Pythonic way? (with or w/o Regex)
558,105
6
2009-02-17T18:22:07Z
558,393
7
2009-02-17T19:32:42Z
[ "regex", "string", "format", "python" ]
``` t='@abc @def Hello this part is text' words = t.split(' ') names = [] while words: w = words.pop(0) if w.startswith('@'): names.append(w[1:]) else: break text = ' '.join(words) print names print text ```
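The same pop-until-non-tag loop can be written with `itertools.takewhile`; this is a sketch of my own, not from the original answer:

```python
from itertools import takewhile

def split_tags(t):
    words = t.split()
    # Take the leading words that start with '@', stripping the '@'.
    names = [w[1:] for w in takewhile(lambda w: w.startswith('@'), words)]
    # Everything after the tags is the text (note: split() collapses runs
    # of whitespace, unlike split(' ') in the answer above).
    text = ' '.join(words[len(names):])
    return names, text
```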
Function to determine if two numbers are nearly equal when rounded to n significant decimal digits
558,216
13
2009-02-17T18:49:20Z
558,289
8
2009-02-17T19:07:04Z
[ "python", "math", "floating-point", "numpy" ]
I have been asked to test a library provided by a third party. The library is known to be accurate to *n* significant figures. Any less-significant errors can safely be ignored. I want to write a function to help me compare the results: ``` def nearlyequal( a, b, sigfig=5 ): ``` The purpose of this function is to determine if two floating-point numbers (a and b) are approximately equal. The function will return True if a==b (exact match) or if a and b have the same value when rounded to **sigfig** significant-figures when written in decimal. Can anybody suggest a good implementation? I've written a mini unit-test. Unless you can see a bug in my tests, a good implementation should pass the following: ``` assert nearlyequal(1, 1, 5) assert nearlyequal(1.0, 1.0, 5) assert nearlyequal(1.0, 1.0, 5) assert nearlyequal(-1e-9, 1e-9, 5) assert nearlyequal(1e9, 1e9 + 1 , 5) assert not nearlyequal( 1e4, 1e4 + 1, 5) assert nearlyequal( 0.0, 1e-15, 5 ) assert not nearlyequal( 0.0, 1e-4, 6 ) ``` Additional notes: 1. Values a and b might be of type int, float or numpy.float64. Values a and b will always be of the same type. It's vital that conversion does not introduce additional error into the function. 2. Let's keep this numerical, so functions that convert to strings or use non-mathematical tricks are not ideal. This program will be audited by a mathematician who will want to be able to prove that the function does what it is supposed to do. 3. Speed... I've got to compare a lot of numbers, so the faster the better. 4. I've got numpy, scipy and the standard-library. Anything else will be hard for me to get, especially for such a small part of the project.
Here's a take. ``` def nearly_equal(a,b,sig_fig=5): return ( a==b or int(a*10**sig_fig) == int(b*10**sig_fig) ) ```
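For comparison, here is a sketch that rounds to significant figures rather than fixed decimal places (which is what multiplying by `10**sig_fig` actually compares). Note it still needs an absolute-tolerance escape hatch for the near-zero cases in the question's tests; `math.isclose(..., abs_tol=...)` in Python 3.5+ provides one:

```python
from math import floor, log10

def nearly_equal_sig(a, b, sig_fig=5):
    if a == b:           # exact match, also covers 0 == 0
        return True

    def round_sig(x):
        if x == 0:
            return 0.0
        # Shift the rounding position so `sig_fig` significant digits survive.
        return round(x, sig_fig - int(floor(log10(abs(x)))) - 1)

    return round_sig(a) == round_sig(b)
```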
Function to determine if two numbers are nearly equal when rounded to n significant decimal digits
558,216
13
2009-02-17T18:49:20Z
558,322
16
2009-02-17T19:16:25Z
[ "python", "math", "floating-point", "numpy" ]
There is a function `assert_approx_equal` in `numpy.testing` (source [here](https://github.com/numpy/numpy/blob/1225aef37298ec82048d0828f6cb7e0be8ed58cc/numpy/testing/utils.py#L513)) which may be a good starting point. ``` def assert_approx_equal(actual,desired,significant=7,err_msg='',verbose=True): """ Raise an assertion if two items are not equal up to significant digits. .. note:: It is recommended to use one of `assert_allclose`, `assert_array_almost_equal_nulp` or `assert_array_max_ulp` instead of this function for more consistent floating point comparisons. Given two numbers, check that they are approximately equal. Approximately equal is defined as the number of significant digits that agree. """ # ... (implementation continues in the numpy.testing source) ```
Bayesian spam filtering library for Python
558,219
19
2009-02-17T18:50:18Z
558,405
11
2009-02-17T19:35:52Z
[ "python", "spam-prevention", "bayesian", "bayesian-networks" ]
I am looking for a Python library which does Bayesian Spam Filtering. I looked at SpamBayes and OpenBayes, but both seem to be unmaintained (I might be wrong). Can anyone suggest a good Python (or Clojure, Common Lisp, even Ruby) library which implements Bayesian Spam Filtering? Thanks in advance. **Clarification**: I am actually looking for a *Bayesian Spam Classifier* and not necessarily a spam filter. I just want to train it using some data and later tell me whether some given data is spam. Sorry for any confusion.
Do you want spam filtering or Bayesian classification? For Bayesian classification there are a number of Python modules. I was just recently reviewing [Orange](http://www.ailab.si/orange/) which looks very impressive. R has a number of Bayesian modules. You can use [Rpy](http://rpy.sourceforge.net/) to hook into R.
Bayesian spam filtering library for Python
558,219
19
2009-02-17T18:50:18Z
561,654
12
2009-02-18T15:50:37Z
[ "python", "spam-prevention", "bayesian", "bayesian-networks" ]
Try [Reverend](http://bazaar.launchpad.net/~divmod-dev/divmod.org/trunk/files/head:/Reverend/). It's a spam filtering module.
Bayesian spam filtering library for Python
558,219
19
2009-02-17T18:50:18Z
11,917,279
8
2012-08-11T20:11:51Z
[ "python", "spam-prevention", "bayesian", "bayesian-networks" ]
RedisBayes looks good to me: <http://pypi.python.org/pypi/redisbayes/0.1.3> In my experience Redis is an awesome addition to your stack and can help process data at blazing fast speeds compared to MySQL, PostgreSQL or any other RDBMS. ``` import redis, redisbayes rb = redisbayes.RedisBayes(redis=redis.Redis()) rb.train('good', 'sunshine drugs love sex lobster sloth') rb.train('bad', 'fear death horror government zombie god') assert rb.classify('sloths are so cute i love them') == 'good' assert rb.classify('i fear god and love the government') == 'bad' print rb.score('i fear god and love the government') rb.untrain('good', 'sunshine drugs love sex lobster sloth') rb.untrain('bad', 'fear death horror government zombie god') ``` Hope that helps a bit.
Check whether a PDF-File is valid (Python)
559,096
10
2009-02-17T22:53:27Z
559,176
7
2009-02-17T23:19:52Z
[ "python", "file", "pdf" ]
**I get a file via an HTTP upload and need to be sure it's a PDF file.** The programming language is Python, but this should not matter. I thought of the following solutions: 1. Check if the first bytes of the string are "%PDF". *This is not a good check but prevents the user from uploading other files accidentally.* 2. Try libmagic (the "file" command in bash uses it). *This does exactly the same check as 1.* 3. Take a lib and try to read the page count out of the file. *If the lib is able to read a page count, it should be a valid PDF. Problem: I don't know a lib for Python which can do this.* So has anybody got any solutions, a lib or another trick? Thanks
In a project of mine I need to check the mime type of some uploaded file. I simply use the file command like this: ``` from subprocess import Popen, PIPE filetype = Popen("/usr/bin/file -b --mime -", shell=True, stdout=PIPE, stdin=PIPE).communicate(file.read(1024))[0].strip() ``` You of course might want to move the actual command into some configuration file, since command-line options also vary among operating systems (e.g. Mac). If you just need to know whether it's a PDF or not and do not need to process it anyway, I think the file command is a faster solution than a lib. Doing it by hand is of course also possible, but the file command gives you maybe more flexibility if you want to check for different types.
Check whether a PDF-File is valid (Python)
559,096
10
2009-02-17T22:53:27Z
559,442
9
2009-02-18T01:10:35Z
[ "python", "file", "pdf" ]
The two most commonly used PDF libraries for Python are: * [pyPdf](http://pybrary.net/pyPdf/) * [ReportLab](http://www.reportlab.org/downloads.html) Both are pure python so should be easy to install as well be cross-platform. With pyPdf it would probably be as simple as doing: ``` from pyPdf import PdfFileReader doc = PdfFileReader(file("upload.pdf", "rb")) ``` This should be enough, but `doc` will now have `documentInfo()` and `numPages()` methods if you want to do further checking. As Carl answered, pdftotext is also a good solution, and would probably be faster on very large documents (especially ones with many cross-references). However it might be a little slower on small PDF's due to system overhead of forking a new process, etc.
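Before handing an upload to pyPdf, a cheap structural sanity check can reject obvious non-PDFs early. This helper is my own heuristic sketch, not from either library:

```python
def quick_pdf_check(data):
    # A PDF starts with a "%PDF-" header (a little leading junk is tolerated
    # by many readers) and ends with a "%%EOF" marker near the end of file.
    head = data[:1024].lstrip()
    return head.startswith(b'%PDF-') and b'%%EOF' in data[-1024:]
```

This only rules out garbage quickly; a positive result still needs a real parser such as pyPdf to confirm validity.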
How do I validate the MX record for a domain in python?
559,436
13
2009-02-18T01:06:20Z
559,479
16
2009-02-18T01:25:06Z
[ "python", "email", "dns" ]
I have a large number of email addresses to validate. Initially I parse them with a regexp to throw out the completely crazy ones. I'm left with the ones that look sensible but still might contain errors. I want to find which addresses have valid domains, so given [email protected] I want to know if it's even possible to send emails to abcxyz.com . I want to test that to see if it corresponds to a valid A or MX record - is there an easy way to do it using only Python standard library? I'd rather not add an additional dependency to my project just to support this feature.
There is no DNS interface in the standard library, so you will either have to roll your own or use a third-party library. This is not a fast-changing concept though, so the external libraries are stable and well tested. The one I've used successfully for the same task as your question is [PyDNS](http://pydns.sourceforge.net/). A very rough sketch of my code is something like this: ``` import DNS, smtplib DNS.DiscoverNameServers() mx_hosts = DNS.mxlookup(hostname) # Just doing the mxlookup might be enough for you, # but do something like this to test for SMTP server for mx in mx_hosts: smtp = smtplib.SMTP() #.. if this doesn't raise an exception it is a valid MX host... try: smtp.connect(mx[1]) except smtplib.SMTPConnectError: continue # try the next MX server in list ``` Another library that might be better/faster than PyDNS is [dnsmodule](http://c0re.23.nu/c0de/dnsmodule/), although it looks like it hasn't had any activity since 2002, compared to PyDNS's last update in August 2008. **Edit**: I would also like to point out that email addresses can't be easily parsed with a regexp. You are better off using the parseaddr() function in the standard library email.utils module (see my [answer to this question](http://stackoverflow.com/questions/550009/parsing-from-addresses-from-email-text/550036#550036) for example).
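On the regexp point in the edit: a minimal example of `email.utils.parseaddr` from the standard library, which also makes extracting the domain for the MX lookup trivial (the address here is illustrative):

```python
from email.utils import parseaddr

# parseaddr splits an RFC 2822-style address into (display name, address).
name, addr = parseaddr('Alice Example <alice@example.com>')
domain = addr.rpartition('@')[2]   # the part to feed into the MX lookup
```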
Django - queries made repeat/inefficient
559,701
2
2009-02-18T03:12:29Z
559,708
7
2009-02-18T03:18:27Z
[ "python", "django" ]
Alright, I have a Django view, like this: ``` @render_to('home/main.html') def login(request): # also tried Client.objects.select_related().all() clients = Client.objects.all() return {'clients':clients} ``` And I have a template, `main.html`, like this: ``` <ul> {% for client in clients %} <li>{{ client.full_name }}</li> <ul> {% for pet in client.pets.all %} <li>{{ pet.full_name }}</li> {% endfor %} </ul> {% endfor %} </ul> ``` I also print out all the queries in [`sql_queries`](http://docs.djangoproject.com/en/dev/ref/templates/api/#django-core-context-processors-debug) at the bottom of my base template. When I run this view, the following queries are made: ``` SELECT `home_client`.`id`, ... FROM `home_client`; SELECT `home_pet`.`id`, ... FROM `home_pet` WHERE `home_pet`.`client_id` = 1; SELECT `home_client`.`id`, ... FROM `home_client` WHERE `home_client`.`id` = 1; SELECT `home_client`.`id`, ... FROM `home_client` WHERE `home_client`.`id` = 1; SELECT `home_pet`.`id`, ... FROM `home_pet` WHERE `home_pet`.`client_id` = 2; SELECT `home_client`.`id`, ... FROM `home_client` WHERE `home_client`.`id` = 2; ``` My question is, why are all these queries being made? Shouldn't it just be 1 query to retrieve all the clients and a query per client to retrieve all the pets from each client? I have 2 clients in the `home_client` table, so it should be 3 queries total. Most troubling of all is that queries 3 and 4 are 100% identical. I don't want to "prematurely optimize" or anything but I do want to make sure Django isn't being wildly inefficient. Any help on this would be appreciated. Thanks.
Django uses a cache. The RDBMS uses a cache. Don't prematurely optimize the queries. You can play with bulk queries in your view function instead of one-at-a-time queries in your template. ``` import collections @render_to('home/main.html') def login(request): # Query all clients clients = Client.objects.all() # Assemble an in-memory table of pets, keyed by client pets = collections.defaultdict(list) for p in Pet.objects.all(): pets[p.client].append(p) # Create clients and pets tuples clientsPetTuples = [ (c, pets[c]) for c in clients ] return {'clientPets': clientsPetTuples} ``` However, you don't seem to have any evidence that your template is the slowest part of your application. Further, this trades off giant memory use against SQL use. Until you have measurements that prove that your template queries are actually slow, you shouldn't be overthinking the SQL. Don't worry about the SQL until you have evidence.
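The grouping step is ordinary Python; here it is isolated with plain tuples (illustrative data, not the Django models above):

```python
import collections

pets = [('alice', 'Rex'), ('bob', 'Tom'), ('alice', 'Fluffy')]

# One pass builds the owner -> pets table, replacing one query per owner.
by_owner = collections.defaultdict(list)
for owner, pet in pets:
    by_owner[owner].append(pet)
```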
Conditional compilation in Python
560,040
10
2009-02-18T06:26:54Z
560,044
21
2009-02-18T06:30:22Z
[ "python", "conditional-compilation" ]
How do I do conditional compilation in Python? Is it done using DEF?
Python isn't compiled in the same sense as C or C++ or even Java; Python files are compiled "on the fly", so you can think of it as being similar to an interpreted language like Basic or Perl.[1] You can do something equivalent to conditional compilation by just using an if statement. For example: ``` if FLAG: def f(): print "Flag is set" else: def f(): print "Flag is not set" ``` You can do the same for the creation of classes, the setting of variables and pretty much everything. The closest way to mimic IFDEF would be to use the hasattr function. E.g.: ``` if hasattr(aModule, 'FLAG'): # do stuff if FLAG is defined in the current module. ``` You could also use a try/except clause to catch name errors, but the idiomatic way would be to set a variable to None at the top of your script. [1] Python code is byte-compiled into an intermediate form like Java, but there generally isn't a separate compilation step. The "raw" source files that end in .py are executable.
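A runnable sketch of the if-based definition selection described above, written for modern Python 3:

```python
DEBUG = True

# The branch picks which definition of log() exists at all; the other
# definition is never created, which is the "conditional compilation" effect.
if DEBUG:
    def log(msg):
        return "DEBUG: " + msg
else:
    def log(msg):
        return msg
```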
Conditional compilation in Python
560,040
10
2009-02-18T06:26:54Z
561,338
15
2009-02-18T14:47:12Z
[ "python", "conditional-compilation" ]
There is actually a way to get conditional compilation, but it's very limited. ``` if __debug__: doSomething() ``` The `__debug__` flag is a special case. When calling python with the `-O` or `-OO` options, `__debug__` will be false, and the compiler will ignore that statement. This is used primarily with asserts, which is why assertions go away if you 'really compile' your scripts with optimization. So if your goal is to add debugging code, but prevent it from slowing down or otherwise affecting a 'release' build, this does what you want. But you cannot assign a value to `__debug__`, so that's about all you can use it for.
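A quick way to watch the flag flip is to ask a child interpreter (a sketch; `subprocess.run` with `capture_output` needs Python 3.7+):

```python
import subprocess
import sys

def debug_flag(*extra_args):
    # Run a fresh interpreter and report its __debug__ value.
    cmd = [sys.executable, *extra_args, '-c', 'print(__debug__)']
    return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

normal = debug_flag()         # no options: asserts are live
optimized = debug_flag('-O')  # -O: asserts are stripped at compile time
```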
Implementing chat in an application?
561,301
3
2009-02-18T14:38:01Z
561,313
10
2009-02-18T14:40:42Z
[ "python", "chat" ]
I'm making a game and I am using Python for the server side. It would be fairly trivial to implement chat myself using Python - that's not my question. **My question is** I was just wondering if there were any pre-made chat servers or some kind of service that I would be able to implement inside of my game instead of rolling my own chat server? Maybe like a different process I could run next to my game server process?
I recommend using XMPP/Jabber. There are a lot of libraries for clients and servers in different languages. It's free/open source. <http://en.wikipedia.org/wiki/XMPP>
How to convert an integer to the shortest url-safe string in Python?
561,486
60
2009-02-18T15:25:25Z
561,630
7
2009-02-18T15:48:31Z
[ "python", "url", "base64" ]
I want the shortest possible way of representing an integer in a URL. For example, 11234 can be shortened to '2be2' using hexadecimal. Since base64 is a 64-character encoding, it should be possible to represent an integer in base64 using even fewer characters than hexadecimal. The problem is I can't figure out the cleanest way to convert an integer to base64 (and back again) using Python. The base64 module has methods for dealing with bytestrings - so maybe one solution would be to convert an integer to its binary representation as a Python string... but I'm not sure how to do that either.
You don't want base64 encoding, you want to represent a base-10 numeral in numeral base X. If you want your base-10 numeral represented in the 26 letters available you could use: <http://en.wikipedia.org/wiki/Hexavigesimal>. (You can extend that example for a much larger base by using all the legal URL characters.) You should at least be able to get base 38 (26 letters, 10 numbers, +, \_).
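A sketch of the idea with base 36 (digits plus lowercase letters), which has the nice property that Python's built-in `int(s, 36)` already decodes it:

```python
import string

ALPHABET = string.digits + string.ascii_lowercase  # base 36

def to_base36(n):
    # Repeated divmod emits digits least-significant first, so reverse at the end.
    if n == 0:
        return '0'
    out = []
    while n:
        n, r = divmod(n, 36)
        out.append(ALPHABET[r])
    return ''.join(reversed(out))
```

For the question's example, 11234 becomes '8o2', already shorter than the hex '2be2'.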
How to convert an integer to the shortest url-safe string in Python?
561,486
60
2009-02-18T15:25:25Z
561,631
8
2009-02-18T15:48:31Z
[ "python", "url", "base64" ]
The easy bit is converting the byte string to web-safe base64: ``` import base64 output = base64.urlsafe_b64encode(s) ``` The tricky bit is the first step - convert the integer to a byte string. If your integers are small you're better off hex encoding them - see [saua](http://stackoverflow.com/questions/561486/how-do-you-base64-encode-an-integer-in-python/561534#561534) Otherwise (hacky recursive version): ``` def convertIntToByteString(i): if i == 0: return "" else: return convertIntToByteString(i >> 8) + chr(i & 255) ```
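In modern Python 3 the tricky first step is built in: `int.to_bytes` (with `int.from_bytes` for the way back), so the whole pipeline is a few lines:

```python
import base64

n = 11234
# Round bit_length up to whole bytes; big-endian matches the recursion above.
raw = n.to_bytes((n.bit_length() + 7) // 8, 'big')
encoded = base64.urlsafe_b64encode(raw).rstrip(b'=')
back = int.from_bytes(raw, 'big')
```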
How to convert an integer to the shortest url-safe string in Python?
561,486
60
2009-02-18T15:25:25Z
561,704
54
2009-02-18T16:00:18Z
[ "python", "url", "base64" ]
This answer is similar in spirit to Douglas Leeder's, with the following changes: * It doesn't use actual Base64, so there are no padding characters * Instead of converting the number first to a byte-string (base 256), it converts it directly to base 64, which has the advantage of letting you represent negative numbers using a sign character. ``` import string ALPHABET = string.ascii_uppercase + string.ascii_lowercase + \ string.digits + '-_' ALPHABET_REVERSE = dict((c, i) for (i, c) in enumerate(ALPHABET)) BASE = len(ALPHABET) SIGN_CHARACTER = '$' def num_encode(n): if n < 0: return SIGN_CHARACTER + num_encode(-n) s = [] while True: n, r = divmod(n, BASE) s.append(ALPHABET[r]) if n == 0: break return ''.join(reversed(s)) def num_decode(s): if s[0] == SIGN_CHARACTER: return -num_decode(s[1:]) n = 0 for c in s: n = n * BASE + ALPHABET_REVERSE[c] return n ``` --- ``` >>> num_encode(0) 'A' >>> num_encode(64) 'BA' >>> num_encode(-(64**5-1)) '$_____' ``` --- A few side notes: * You could (*marginally*) increase the human-readability of the base-64 numbers by putting string.digits first in the alphabet (and making the sign character '-'); I chose the order that I did based on Python's urlsafe\_b64encode. * If you're encoding a lot of negative numbers, you could increase the efficiency by using a sign bit or one's/two's complement instead of a sign character. * You should be able to easily adapt this code to different bases by changing the alphabet, either to restrict it to only alphanumeric characters or to add additional "URL-safe" characters. * I would recommend *against* using a representation other than base 10 in URIs in most cases—it adds complexity and makes debugging harder without significant savings compared to the overhead of HTTP—unless you're going for something TinyURL-esque.
How to convert an integer to the shortest url-safe string in Python?
561,486
60
2009-02-18T15:25:25Z
561,809
9
2009-02-18T16:22:44Z
[ "python", "url", "base64" ]
To encode `n`: ``` import base64 data = '' while n > 0: data = chr(n & 255) + data n = n >> 8 encoded = base64.urlsafe_b64encode(data).rstrip('=') ``` To decode `s`: ``` data = base64.urlsafe_b64decode(s + '===') decoded = 0 while len(data) > 0: decoded = (decoded << 8) | ord(data[0]) data = data[1:] ``` In the same spirit as the other answers, for an “optimal” encoding you can use **73** characters according to RFC 1738 (actually 74 if you count “+” as usable): ``` alphabet = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz_`\"!$'()*,-." encoded = '' while n > 0: n, r = divmod(n, len(alphabet)) encoded = alphabet[r] + encoded ``` and the decoding: ``` decoded = 0 while len(s) > 0: decoded = decoded * len(alphabet) + alphabet.find(s[0]) s = s[1:] ```
How to convert an integer to the shortest url-safe string in Python?
561,486
60
2009-02-18T15:25:25Z
561,875
13
2009-02-18T16:40:37Z
[ "python", "url", "base64" ]
You probably do not want real base64 encoding for this - it will add padding etc, potentially even resulting in larger strings than hex would for small numbers. If there's no need to interoperate with anything else, just use your own encoding. E.g. here's a function that will encode to any base (note the digits are actually stored least-significant first to avoid extra reverse() calls): ``` def make_encoder(baseString): size = len(baseString) d = dict((ch, i) for (i, ch) in enumerate(baseString)) # Map from char -> value if len(d) != size: raise Exception("Duplicate characters in encoding string") def encode(x): if x==0: return baseString[0] # Only needed if don't want '' for 0 l=[] while x>0: l.append(baseString[x % size]) x //= size return ''.join(l) def decode(s): return sum(d[ch] * size**i for (i,ch) in enumerate(s)) return encode, decode # Base 64 version: encode,decode = make_encoder("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/") assert decode(encode(435346456456)) == 435346456456 ``` This has the advantage that you can use whatever base you want, just by adding appropriate characters to the encoder's base string. Note that the gains for larger bases are not going to be that big, however: base 64 will only reduce the size to 2/3rds of base 16 (6 bits/char instead of 4). Each doubling only adds one more bit per character. Unless you've a real need to compact things, just using hex will probably be the simplest and fastest option.
How to convert an integer to the shortest url-safe string in Python?
561,486
60
2009-02-18T15:25:25Z
18,001,426
18
2013-08-01T18:08:21Z
[ "python", "url", "base64" ]
All the answers given regarding Base64 are very reasonable solutions. But they're technically incorrect. To convert an integer to the *shortest URL safe string* possible, what you want is base 66 (there are [66 URL safe characters](http://tools.ietf.org/html/rfc3986#section-2.3)). That code looks like this: ``` from io import StringIO import urllib BASE66_ALPHABET = u"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz-_.~" BASE = len(BASE66_ALPHABET) def hexahexacontadecimal_encode_int(n): if n == 0: return BASE66_ALPHABET[0].encode('ascii') r = StringIO() while n: n, t = divmod(n, BASE) r.write(BASE66_ALPHABET[t]) return r.getvalue().encode('ascii')[::-1] ``` Here's a full implementation with source and ready to go pip installable package: <https://github.com/aljungberg/hexahexacontadecimal>
Python Imaging Library save function syntax
562,519
11
2009-02-18T19:38:57Z
562,530
15
2009-02-18T19:43:19Z
[ "python", "python-imaging-library" ]
Simple one I think but essentially I need to know what the syntax is for the save function on the PIL. The help is really vague and I can't find anything online. Any help'd be great, thanks :).
From the [PIL Handbook](http://effbot.org/imagingbook/image.htm#tag-Image.Image.save): ``` im.save(outfile, options...) im.save(outfile, format, options...) ``` Simplest case: ``` im.save('my_image.png') ``` or whatever. In this case, the type of the image will be determined from the extension. Is there a particular problem you're having? Or specific saving option that you'd like to use but aren't sure how to do so? You may be able to find additional information in the documentation on each filetype. [The PIL Handbox Appendixes](http://effbot.org/imagingbook/pil-index.htm#appendixes) list the different file types that are supported. In some cases, options are given for `save`. For example, on the [JPEG file format page](http://effbot.org/imagingbook/format-jpeg.htm), we're told that save supports * `quality` * `optimize`, and * `progressive` with notes about each option.
What's Python good practice for importing and offering optional features?
563,022
12
2009-02-18T21:58:01Z
563,060
25
2009-02-18T22:05:59Z
[ "python", "python-import" ]
I'm writing a piece of software over on github. It's basically a tray icon with some extra features. I want to provide a working piece of code without actually having to make the user install what are essentially dependencies for optional features and I don't actually want to import things I'm not going to use so I thought code like this would be "good solution": ``` ---- IN LOADING FUNCTION ---- features = [] for path in sys.path: if os.path.exists(os.path.join(path, 'pynotify')): features.append('pynotify') if os.path.exists(os.path.join(path, 'gnomekeyring.so')): features.append('gnome-keyring') #user dialog to ask for stuff #notifications available, do you want them enabled? dlg = ConfigDialog(features) if not dlg.get_notifications(): features.remove('pynotify') service_start(features ...) ---- SOMEWHERE ELSE ------ def service_start(features, other_config): if 'pynotify' in features: import pynotify #use pynotify... ``` There are some issues however. If a user formats his machine and installs the newest version of his OS and redeploys this application, features suddenly disappear without warning. The solution is to present this on the configuration window: ``` if 'pynotify' in features: #gtk checkbox else: #gtk label reading "Get pynotify and enjoy notification pop ups!" ``` But if this is say, a mac, how do I know I'm not sending the user on a wild goose chase looking for a dependency they can never fill? The second problem is the: ``` if os.path.exists(os.path.join(path, 'gnomekeyring.so')): ``` issue. Can I be sure that the file is always called gnomekeyring.so across all the linux distros? How do other people test these features? The problem with the basic ``` try: import pynotify except: pynotify = disabled ``` is that the code is global, these might be littered around and even if the user doesn't want pynotify....it's loaded anyway. So what do people think is the best way to solve this problem?
The `try:` method does not need to be global — it can be used in any scope and so modules can be "lazy-loaded" at runtime. For example: ``` def foo(): try: import external_module except ImportError: external_module = None if external_module: external_module.some_whizzy_feature() else: print "You could be using a whizzy feature right now, if you had external_module." ``` When your script is run, no attempt will be made to load `external_module`. The first time `foo()` is called, `external_module` is (if available) loaded and inserted into the function's local scope (binding it to `None` in the `except` branch keeps the `if` from raising a `NameError` when the module is missing). Subsequent calls to `foo()` reinsert `external_module` into its scope without needing to reload the module. In general, it's best to let Python handle import logic — it's been doing it for a while. :-)
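A runnable sketch of the same graceful-degradation idea, using a stdlib module as a stand-in for the optional dependency (the module names here are just placeholders):

```python
def load_optional(name):
    # Return the module if importable, otherwise None.
    try:
        return __import__(name)
    except ImportError:
        return None

features = []
if load_optional("json") is not None:               # stands in for pynotify
    features.append("json")
if load_optional("no_such_module_xyz") is not None:  # certainly absent
    features.append("missing")

print(features)  # → ['json']
```

The rest of the program can then branch on the feature list, exactly as in the question's `service_start`.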
What's Python good practice for importing and offering optional features?
563,022
12
2009-02-18T21:58:01Z
563,075
7
2009-02-18T22:09:38Z
[ "python", "python-import" ]
I'm writing a piece of software over on github. It's basically a tray icon with some extra features. I want to provide a working piece of code without actually having to make the user install what are essentially dependencies for optional features and I don't actually want to import things I'm not going to use so I thought code like this would be "good solution": ``` ---- IN LOADING FUNCTION ---- features = [] for path in sys.path: if os.path.exists(os.path.join(path, 'pynotify')): features.append('pynotify') if os.path.exists(os.path.join(path, 'gnomekeyring.so')): features.append('gnome-keyring') #user dialog to ask for stuff #notifications available, do you want them enabled? dlg = ConfigDialog(features) if not dlg.get_notifications(): features.remove('pynotify') service_start(features ...) ---- SOMEWHERE ELSE ------ def service_start(features, other_config): if 'pynotify' in features: import pynotify #use pynotify... ``` There are some issues however. If a user formats his machine and installs the newest version of his OS and redeploys this application, features suddenly disappear without warning. The solution is to present this on the configuration window: ``` if 'pynotify' in features: #gtk checkbox else: #gtk label reading "Get pynotify and enjoy notification pop ups!" ``` But if this is say, a mac, how do I know I'm not sending the user on a wild goose chase looking for a dependency they can never fill? The second problem is the: ``` if os.path.exists(os.path.join(path, 'gnomekeyring.so')): ``` issue. Can I be sure that the file is always called gnomekeyring.so across all the linux distros? How do other people test these features? The problem with the basic ``` try: import pynotify except: pynotify = disabled ``` is that the code is global, these might be littered around and even if the user doesn't want pynotify....it's loaded anyway. So what do people think is the best way to solve this problem?
You might want to have a look at the [imp module](http://docs.python.org/library/imp.html), which basically does what you do manually above. So you can first look for a module with find\_module() and then load it via load\_module() or by simply importing it (after checking the config). And by the way, when using `except:` I would always add the specific exception to it (here ImportError) so as not to accidentally catch unrelated errors.
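On more recent Pythons the same availability check can be done without loading the module at all, via `importlib.util.find_spec` (a newer alternative to `imp`, shown here as a sketch):

```python
import importlib.util

def module_available(name):
    # find_spec returns None for a top-level module that can't be found,
    # without importing it as a side effect.
    return importlib.util.find_spec(name) is not None

print(module_available("json"))                # → True
print(module_available("no_such_module_xyz"))  # → False
```

This also sidesteps the filename question from the post: you probe by module name, not by guessing at `.so` filenames on each distro.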
Getting return values from a MySQL stored procedure in Python, using MySQLdb
563,182
6
2009-02-18T22:42:25Z
566,260
9
2009-02-19T17:11:44Z
[ "python", "mysql" ]
I've got a stored procedure in a MySQL database that simply updates a date column and returns the previous date. If I call this stored procedure from the MySQL client, it works fine, but when I try to call the stored procedure from Python using MySQLdb I can't seem to get it to give me the return value. Here's the code to the stored procedure: ``` CREATE PROCEDURE test_stuff.get_lastpoll() BEGIN DECLARE POLLTIME TIMESTAMP DEFAULT NULL; START TRANSACTION; SELECT poll_date_time FROM test_stuff.poll_table LIMIT 1 INTO POLLTIME FOR UPDATE; IF POLLTIME IS NULL THEN INSERT INTO test_stuff.poll_table (poll_date_time) VALUES ( UTC_TIMESTAMP() ); COMMIT; SELECT NULL as POLL_DATE_TIME; ELSE UPDATE test_stuff.poll_table SET poll_date_time = UTC_TIMESTAMP(); COMMIT; SELECT DATE_FORMAT(POLLTIME, '%Y-%m-%d %H:%i:%s') as POLL_DATE_TIME; END IF; END ``` The code I'm using to try to call the stored procedure is similar to this: ``` #!/usr/bin/python import sys import MySQLdb try: mysql = MySQLdb.connect(user=User,passwd=Passwd,db="test_stuff") mysql_cursor = mysql.cursor() results=mysql_cursor.callproc( "get_lastpoll", () ) print results mysql_cursor.close() mysql.close() except MySQLdb.Error, e: print "MySQL Error %d: %s" % ( e.args[0], e.args[1] ) sys.exit(1) ``` I know that you can do IN and OUT parameters, but from what I can determine from the MySQLdb documentation, this isn't possible with MySQLdb. Does anyone have any clue how I could get the results of the stored procedure? If I run it from a SQL tool, here's the output: ``` POLL_DATE_TIME ------------------- 2009-02-18 22:27:07 ``` If I run the Python script, it returns back an empty set, like this: ``` () ```
What I had to do is modify the Python code to use execute() instead of callproc(), and then use the fetchone() to get the results. I'm answering it myself since mluebke's answer wasn't entirely complete (even though it was helpful!). ``` mysql_cursor.execute( "call get_lastpoll();" ) results=mysql_cursor.fetchone() print results[0] ``` This gives me the correct output: ``` 2009-02-19 17:10:42 ```
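The execute-then-fetchone pattern is part of the DB-API that MySQLdb implements, so it can be illustrated with the stdlib's `sqlite3` driver (SQLite has no stored procedures, so a plain `SELECT` stands in for the `call`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
# Stand-in for: cursor.execute("call get_lastpoll();")
cursor.execute("SELECT datetime('2009-02-19 17:10:42')")
results = cursor.fetchone()  # one row, as a tuple
print(results[0])            # → 2009-02-19 17:10:42
conn.close()
```

The shape is identical with MySQLdb: `execute()` runs the statement, `fetchone()` returns the single result row.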
What's the fastest way to test the validity of a large number of well-formed URLs
563,384
2
2009-02-18T23:46:48Z
563,412
7
2009-02-18T23:59:04Z
[ "python", "http" ]
My project requires me to validate a large number of web URLs. These URLs have been captured by a very unreliable process which I do not control. All of the URLs have already been regexp validated and are known to be well-formed. I also know that they all have valid TLDs I want to be able to filter these URLs quickly in order to determine which of these are incorrect. At this point I do not care what content is on the pages - I'd just like to know as quickly as possible which of the pages are inaccessible (e.g. produce a 404 error). Given that there are a lot of these I do not want to download the entire page, just the HTTP header and then take a good guess from the content of the header whether the page is likely to exist. Can it be done?
To really make this fast you might also use [eventlet](http://pypi.python.org/pypi/eventlet) which uses non-blocking IO to speed things up. You can use a head request like this: ``` from eventlet import httpc try: res = httpc.head(url) except httpc.NotFound: # handle 404 ``` You can then put this into some simple script like [that example script here](http://wiki.secondlife.com/wiki/Eventlet/Examples). With that you should get pretty good concurrency by using a coroutine pool.
How to stop Tkinter Frame from shrinking to fit its contents?
563,827
27
2009-02-19T03:21:07Z
566,840
37
2009-02-19T19:42:43Z
[ "python", "label", "tkinter", "frame" ]
This is the code that's giving me trouble. ``` f = Frame(root, width=1000, bg="blue") f.pack(fill=X, expand=True) l = Label(f, text="hi", width=10, bg="red", fg="white") l.pack() ``` If I comment out the lines with the Label, the Frame displays with the right width. However, adding the Label seems to shrink the Frame down to the Label's size. Is there a way to prevent that from happening?
By default, tk frames *shrink or grow to fit their contents*, which is what you want 99% of the time. The term that describes this feature is "geometry propagation". There is a [command](http://effbot.org/tkinterbook/pack.htm#Tkinter.Pack.pack_propagate-method) to turn geometry propagation on or off. Since you are using pack, the syntax would be: ``` f.pack_propagate(0) ``` or maybe `root.pack_propagate(0)`, depending on which widget(s) you actually want to affect. That being said, the vast majority of the time you should let tkinter compute the size. When you turn geometry propagation off your GUI won't respond well to changes in resolution, changes in fonts, etc. tkinter's geometry managers (`pack`, `place` and `grid`) are remarkably powerful. Learn to take advantage of that power.
How can I check the memory usage of objects in iPython?
563,840
15
2009-02-19T03:27:34Z
563,921
13
2009-02-19T04:07:23Z
[ "python", "memory", "ipython" ]
I am using iPython to run my code. I wonder if there is any module or command which would allow me to check the memory usage of an object. For instance: ``` In [1]: a = range(10000) In [2]: %memusage a Out[2]: 1MB ``` Something like `%memusage <object>` and return the memory used by the object. **Duplicate** > [Find out how much memory is being used by an object in Python](http://stackoverflow.com/questions/33978/find-out-how-much-memory-is-being-used-by-an-object-in-python)
UPDATE: Here is [another](http://code.activestate.com/recipes/544288/), maybe more thorough recipe for estimating the size of a python object. Here is a [thread](http://mail.python.org/pipermail/python-list/2008-January/472683.html) addressing a similar question The solution proposed is to write your own... using some estimates of the known size of primitives, python's object overhead, and the sizes of built in container types. Since the code is not that long, here is a direct copy of it: ``` def sizeof(obj): """APPROXIMATE memory taken by some Python objects in the current 32-bit CPython implementation. Excludes the space used by items in containers; does not take into account overhead of memory allocation from the operating system, or over-allocation by lists and dicts. """ T = type(obj) if T is int: kind = "fixed" container = False size = 4 elif T is list or T is tuple: kind = "variable" container = True size = 4*len(obj) elif T is dict: kind = "variable" container = True size = 144 if len(obj) > 8: size += 12*(len(obj)-8) elif T is str: kind = "variable" container = False size = len(obj) + 1 else: raise TypeError("don't know about this kind of object") if kind == "fixed": overhead = 8 else: # "variable" overhead = 12 if container: garbage_collector = 8 else: garbage_collector = 0 malloc = 8 # in most cases size = size + overhead + garbage_collector + malloc # Round to nearest multiple of 8 bytes x = size % 8 if x != 0: size += 8-x size = (size + 8) return size ```
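Since Python 2.6 the interpreter also exposes its own per-object accounting as `sys.getsizeof`, which makes a handy sanity check against hand-rolled estimates like the recipe above. Note that it is shallow: container sizes exclude the objects they reference.

```python
import sys

nums = list(range(1000))
shallow = sys.getsizeof(nums)                         # the list object itself
deep = shallow + sum(sys.getsizeof(n) for n in nums)  # plus its elements
print(shallow < deep)  # → True
```

For a real "deep" figure you still have to walk the references yourself (and decide how to count shared objects).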
How can I check the memory usage of objects in iPython?
563,840
15
2009-02-19T03:27:34Z
565,382
22
2009-02-19T13:50:09Z
[ "python", "memory", "ipython" ]
I am using iPython to run my code. I wonder if there is any module or command which would allow me to check the memory usage of an object. For instance: ``` In [1]: a = range(10000) In [2]: %memusage a Out[2]: 1MB ``` Something like `%memusage <object>` and return the memory used by the object. **Duplicate** > [Find out how much memory is being used by an object in Python](http://stackoverflow.com/questions/33978/find-out-how-much-memory-is-being-used-by-an-object-in-python)
Unfortunately this is not possible, but there are a number of ways of approximating the answer: 1. for very simple objects (e.g. ints, strings, floats, doubles) which are represented more or less as simple C-language types you can simply calculate the number of bytes as with [John Mulder's solution](http://stackoverflow.com/a/563921/1922357). 2. For more complex objects a good approximation is to serialize the object to a string using cPickle.dumps. The length of the string is a good approximation of the amount of memory required to store an object. There is one big snag with solution 2, which is that objects usually contain references to other objects. For example a dict contains string-keys and other objects as values. Those other objects might be shared. Since pickle always tries to do a complete serialization of the object it will always over-estimate the amount of memory required to store an object.
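Approach 2 from this answer, written out with the Python 3 spelling `pickle` (the length is only a rough proxy for memory use, for the sharing reasons noted above):

```python
import pickle

small = {"a": 1}
big = {"a": list(range(10000))}

# More data pickles to a longer byte string, so relative sizes are telling
# even though the absolute numbers are not real memory figures.
print(len(pickle.dumps(small)) < len(pickle.dumps(big)))  # → True
```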
How can I check the memory usage of objects in iPython?
563,840
15
2009-02-19T03:27:34Z
15,591,157
11
2013-03-23T19:30:46Z
[ "python", "memory", "ipython" ]
I am using iPython to run my code. I wonder if there is any module or command which would allow me to check the memory usage of an object. For instance: ``` In [1]: a = range(10000) In [2]: %memusage a Out[2]: 1MB ``` Something like `%memusage <object>` and return the memory used by the object. **Duplicate** > [Find out how much memory is being used by an object in Python](http://stackoverflow.com/questions/33978/find-out-how-much-memory-is-being-used-by-an-object-in-python)
If you are using a [numpy array](http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html), then you can use the attribute `ndarray.nbytes` to evaluate its size in memory: ``` from pylab import * d = array([2,3,4,5]) d.nbytes #Output: 32 ```
Is there a way to change effective process name in Python?
564,695
50
2009-02-19T10:37:12Z
565,071
9
2009-02-19T12:26:14Z
[ "python", "process", "arguments", "hide", "ps" ]
Can I change the effective process name of a Python script? I want to show a different name instead of the real name of the process when I get the system process list. In C I can set ``` strcpy(argv[0],"othername"); ``` But in Python ``` argv[0] = "othername" ``` doesn't seem to work. When I get the process list (with `ps ax` in my linux box) the real name doesn't change. I prefer a portable solution (or else one solution for posix and another for windows environments), if it exists. Thanks in advance
Simply put, there's no portable way. You'll have to test for the system and use the preferred method for that system. Further, I'm confused about what you mean by process names on Windows. Do you mean a service name? I presume so, because nothing else really makes any sense (at least to my non-Windows using brain). If so, you need to use [Tim Golden's WMI interface](http://timgolden.me.uk/python/wmi) and call the .Change method on the service... at least according to his [tutorial](http://timgolden.me.uk/python/wmi/tutorial.html). For Linux none of the methods I found worked except for [this poorly packaged module](http://code.google.com/p/procname/) that sets argv[0] for you. I don't even know if this will work on BSD variants (which does have a setproctitle system call). I'm pretty sure argv[0] won't work on Solaris.
Is there a way to change effective process name in Python?
564,695
50
2009-02-19T10:37:12Z
923,034
37
2009-05-28T20:39:05Z
[ "python", "process", "arguments", "hide", "ps" ]
Can I change the effective process name of a Python script? I want to show a different name instead of the real name of the process when I get the system process list. In C I can set ``` strcpy(argv[0],"othername"); ``` But in Python ``` argv[0] = "othername" ``` doesn't seem to work. When I get the process list (with `ps ax` in my linux box) the real name doesn't change. I prefer a portable solution (or else one solution for posix and another for windows environments), if it exists. Thanks in advance
actually you need 2 things on linux: modify `argv[0]` from `C` (for `ps auxf` and friends) and call `prctl` with `PR_SET_NAME` flag. There is absolutely no way to do first piece from python itself. Although, you can just change process name by calling prctl. ``` def set_proc_name(newname): from ctypes import cdll, byref, create_string_buffer libc = cdll.LoadLibrary('libc.so.6') buff = create_string_buffer(len(newname)+1) buff.value = newname libc.prctl(15, byref(buff), 0, 0, 0) def get_proc_name(): from ctypes import cdll, byref, create_string_buffer libc = cdll.LoadLibrary('libc.so.6') buff = create_string_buffer(128) # 16 == PR_GET_NAME from <linux/prctl.h> libc.prctl(16, byref(buff), 0, 0, 0) return buff.value import sys # sys.argv[0] == 'python' # outputs 'python' get_proc_name() set_proc_name('testing yeah') # outputs 'testing yeah' get_proc_name() ``` `ps auxf` will show just 'python' after that :(. But `top` and `ps -A` will show new 'testing yeah' process name :). Also `killall` and `pkill` will work with new name. btw, procname from googlecode also changes `argv[0]`, thus, even, changes `ps auxf` output. **UPDATE**: The solution posted in this answer does not play nice sometimes on FreeBSD. I'm now using [py-setproctitle](http://code.google.com/p/py-setproctitle/) stated [in this answer](http://stackoverflow.com/questions/564695/is-there-a-way-to-change-effective-process-name-in-python/1866700#1866700) for a year or so on various linux and freebsd boxes. No fails so far! Everybody should too! :). It uses almost the same code as [PostgreSQL](http://www.postgresql.org) [uses](http://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/utils/misc/ps_status.c;hb=HEAD) in its main database and child processes.
Is there a way to change effective process name in Python?
564,695
50
2009-02-19T10:37:12Z
1,866,700
62
2009-12-08T12:40:46Z
[ "python", "process", "arguments", "hide", "ps" ]
Can I change the effective process name of a Python script? I want to show a different name instead of the real name of the process when I get the system process list. In C I can set ``` strcpy(argv[0],"othername"); ``` But in Python ``` argv[0] = "othername" ``` doesn't seem to work. When I get the process list (with `ps ax` in my linux box) the real name doesn't change. I prefer a portable solution (or else one solution for posix and another for windows environments), if it exists. Thanks in advance
I've recently written a Python module to change the process title in a portable way: check <https://github.com/dvarrazzo/py-setproctitle> It is a wrapper around the code used by PostgreSQL to perform the title change. It is currently tested against Linux and Mac OS X: Windows (with limited functionality) and BSD portings are on the way. **Edit:** as of July 2010, the module works with BSD and with limited functionality on Windows, and has been ported to Python 3.x.
How can I make this one-liner work in DOS?
566,559
5
2009-02-19T18:29:30Z
566,563
12
2009-02-19T18:31:50Z
[ "python", "command-line" ]
``` python -c "for x in range(1,10) print x" ``` I enjoy python one liners with -c, but it is limited when indentation is needed. Any ideas?
``` python -c "for x in range(1,10): print x" ``` Just add the colon. To address the question in the comments: > How can I make this work though? python -c "import calendar;print calendar.prcal(2009);for x in range(1,10): print x" ``` python -c "for x in range(1,10): x==1 and __import__('calendar').prcal(2009); print x;" ``` As you can see it's pretty gross. We can't import before the loop. To get around this we check if x is at the first iteration in the loop, if so we do the import. More examples [here](https://web.archive.org/web/20071223101511/http://mail.python.org/pipermail/python-list/2003-April/199822.html).
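Another escape hatch when real indentation is unavoidable is to hand `exec` a whole multi-line program as a string with embedded `\n`s (shown here with `python3` and single shell quotes; quoting details vary by shell):

```shell
python3 -c 'exec("for x in range(1, 4):\n    print(x)")'
```

This prints 1, 2 and 3 on separate lines, and the same trick accommodates any indented block, not just one-statement loop bodies.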
How to get console window width in python
566,746
151
2009-02-19T19:17:19Z
566,752
63
2009-02-19T19:18:11Z
[ "python", "linux", "console", "terminal", "width" ]
Is there a way in python to programmatically determine the width of the console? I mean the number of characters that fits in one line without wrapping, not the pixel width of the window. **Edit** Looking for a solution that works on Linux
use ``` import console (width, height) = console.getTerminalSize() print "Your terminal's width is: %d" % width ``` **EDIT**: oh, I'm sorry. That's not a python standard lib one; here's the source of console.py (I don't know where it's from). The module seems to work like this: it checks whether `termcap` is available and, if so, uses that; if not, it checks whether the terminal supports a special `ioctl` call; and if that does not work either, it falls back to the environment variables some shells export for that. This will probably work on UNIX only. ``` def getTerminalSize(): import os env = os.environ def ioctl_GWINSZ(fd): try: import fcntl, termios, struct, os cr = struct.unpack('hh', fcntl.ioctl(fd, termios.TIOCGWINSZ, '1234')) except: return return cr cr = ioctl_GWINSZ(0) or ioctl_GWINSZ(1) or ioctl_GWINSZ(2) if not cr: try: fd = os.open(os.ctermid(), os.O_RDONLY) cr = ioctl_GWINSZ(fd) os.close(fd) except: pass if not cr: cr = (env.get('LINES', 25), env.get('COLUMNS', 80)) ### Use get(key[, default]) instead of a try/catch #try: # cr = (env['LINES'], env['COLUMNS']) #except: # cr = (25, 80) return int(cr[1]), int(cr[0]) ```
How to get console window width in python
566,746
151
2009-02-19T19:17:19Z
943,921
175
2009-06-03T09:59:34Z
[ "python", "linux", "console", "terminal", "width" ]
Is there a way in python to programmatically determine the width of the console? I mean the number of characters that fits in one line without wrapping, not the pixel width of the window. **Edit** Looking for a solution that works on Linux
``` import os rows, columns = os.popen('stty size', 'r').read().split() ``` uses the 'stty size' command which according to [a thread on the python mailing list](http://mail.python.org/pipermail/python-list/2000-May/033312.html) is reasonably universal on linux. It opens the 'stty size' command as a file, 'reads' from it, and uses a simple string split to separate the coordinates. Unlike the os.environ["COLUMNS"] value (which I can't access in spite of using bash as my standard shell) the data will also be up-to-date whereas I believe the os.environ["COLUMNS"] value would only be valid for the time of the launch of the python interpreter (suppose the user resized the window since then).
How to get console window width in python
566,746
151
2009-02-19T19:17:19Z
3,010,495
39
2010-06-09T22:36:38Z
[ "python", "linux", "console", "terminal", "width" ]
Is there a way in python to programmatically determine the width of the console? I mean the number of characters that fits in one line without wrapping, not the pixel width of the window. **Edit** Looking for a solution that works on Linux
Code above didn't return correct result on my linux because winsize-struct has 4 unsigned shorts, not 2 signed shorts: ``` def terminal_size(): import fcntl, termios, struct h, w, hp, wp = struct.unpack('HHHH', fcntl.ioctl(0, termios.TIOCGWINSZ, struct.pack('HHHH', 0, 0, 0, 0))) return w, h ``` hp and wp should contain pixel width and height, but don't.
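The signed-versus-unsigned distinction only bites once a field exceeds 32767, which is easy to demonstrate with `struct` alone:

```python
import struct

raw = struct.pack('HH', 40000, 50)   # pretend a 40000-column terminal
print(struct.unpack('hh', raw))      # → (-25536, 50)  signed 'hh' misreads it
print(struct.unpack('HH', raw))      # → (40000, 50)   unsigned 'HH' is right
```

40000 doesn't fit in a signed 16-bit short, so reading it back as `'hh'` wraps to 40000 - 65536 = -25536.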
How to get console window width in python
566,746
151
2009-02-19T19:17:19Z
6,550,596
32
2011-07-01T16:23:02Z
[ "python", "linux", "console", "terminal", "width" ]
Is there a way in python to programmatically determine the width of the console? I mean the number of characters that fits in one line without wrapping, not the pixel width of the window. **Edit** Looking for a solution that works on Linux
I searched around and found a solution for windows at : <http://code.activestate.com/recipes/440694-determine-size-of-console-window-on-windows/> and a solution for linux here. So here is a version which works both on linux, os x and windows/cygwin : ``` """ getTerminalSize() - get width and height of console - works on linux,os x,windows,cygwin(windows) """ __all__=['getTerminalSize'] def getTerminalSize(): import platform current_os = platform.system() tuple_xy=None if current_os == 'Windows': tuple_xy = _getTerminalSize_windows() if tuple_xy is None: tuple_xy = _getTerminalSize_tput() # needed for window's python in cygwin's xterm! if current_os == 'Linux' or current_os == 'Darwin' or current_os.startswith('CYGWIN'): tuple_xy = _getTerminalSize_linux() if tuple_xy is None: print "default" tuple_xy = (80, 25) # default value return tuple_xy def _getTerminalSize_windows(): res=None try: from ctypes import windll, create_string_buffer # stdin handle is -10 # stdout handle is -11 # stderr handle is -12 h = windll.kernel32.GetStdHandle(-12) csbi = create_string_buffer(22) res = windll.kernel32.GetConsoleScreenBufferInfo(h, csbi) except: return None if res: import struct (bufx, bufy, curx, cury, wattr, left, top, right, bottom, maxx, maxy) = struct.unpack("hhhhHhhhhhh", csbi.raw) sizex = right - left + 1 sizey = bottom - top + 1 return sizex, sizey else: return None def _getTerminalSize_tput(): # get terminal width # src: http://stackoverflow.com/questions/263890/how-do-i-find-the-width-height-of-a-terminal-window try: import subprocess proc=subprocess.Popen(["tput", "cols"],stdin=subprocess.PIPE,stdout=subprocess.PIPE) output=proc.communicate(input=None) cols=int(output[0]) proc=subprocess.Popen(["tput", "lines"],stdin=subprocess.PIPE,stdout=subprocess.PIPE) output=proc.communicate(input=None) rows=int(output[0]) return (cols,rows) except: return None def _getTerminalSize_linux(): import os def ioctl_GWINSZ(fd): try: import fcntl, termios, struct cr = struct.unpack('hh', fcntl.ioctl(fd, termios.TIOCGWINSZ,'1234')) except: return None return cr cr = ioctl_GWINSZ(0) or ioctl_GWINSZ(1) or ioctl_GWINSZ(2) if not cr: try: fd = os.open(os.ctermid(), os.O_RDONLY) cr = ioctl_GWINSZ(fd) os.close(fd) except: pass if not cr: try: cr = (os.environ['LINES'], os.environ['COLUMNS']) except: return None return int(cr[1]), int(cr[0]) if __name__ == "__main__": sizex,sizey=getTerminalSize() print 'width =',sizex,'height =',sizey ```
How to get console window width in python
566,746
151
2009-02-19T19:17:19Z
14,422,538
93
2013-01-20T07:25:34Z
[ "python", "linux", "console", "terminal", "width" ]
Is there a way in python to programmatically determine the width of the console? I mean the number of characters that fits in one line without wrapping, not the pixel width of the window. **Edit** Looking for a solution that works on Linux
Not sure why it is in the module `shutil`, but it landed there in Python 3.3, [Querying the size of the output terminal](http://docs.python.org/3/library/shutil.html#querying-the-size-of-the-output-terminal): ``` >>> import shutil >>> shutil.get_terminal_size((80, 20)) # pass fallback os.terminal_size(columns=87, lines=23) # returns named-tuple ``` A low-level implementation is in the os module. A backport is now available for Python 3.2 and below: * <https://pypi.python.org/pypi/backports.shutil_get_terminal_size>
How to get console window width in python
566,746
151
2009-02-19T19:17:19Z
23,330,276
10
2014-04-27T23:24:31Z
[ "python", "linux", "console", "terminal", "width" ]
Is there a way in python to programmatically determine the width of the console? I mean the number of characters that fits in one line without wrapping, not the pixel width of the window. **Edit** Looking for a solution that works on Linux
Starting with Python 3.3 it is straightforward: <https://docs.python.org/3/library/os.html#querying-the-size-of-a-terminal> ``` >>> import os >>> ts = os.get_terminal_size() >>> ts.lines 24 >>> ts.columns 80 ```
Simple Prime Generator in Python
567,222
21
2009-02-19T21:22:24Z
568,618
108
2009-02-20T07:42:03Z
[ "python", "primes" ]
could someone please tell me what I'm doing wrong with this code. It is just printing 'count' anyway. I just want a very simple prime generator (nothing fancy). Thanks a lot. lincoln. ``` import math def main(): count = 3 one = 1 while one == 1: for x in range(2, int(math.sqrt(count) + 1)): if count % x == 0: continue if count % x != 0: print count count += 1 ```
There are some problems: * Why do you print out count when it didn't divide by x? It doesn't mean it's prime, it means only that this particular x doesn't divide it * `continue` moves to the next loop iteration - but you really want to stop it using `break` Here's your code with a few fixes, it prints out only primes: ``` import math def main(): count = 3 while True: isprime = True for x in range(2, int(math.sqrt(count) + 1)): if count % x == 0: isprime = False break if isprime: print count count += 1 ``` For much more efficient prime generation, see the Sieve of Eratosthenes, as others have suggested. Here's a nice, optimized implementation with many comments: ``` # Sieve of Eratosthenes # Code by David Eppstein, UC Irvine, 28 Feb 2002 # http://code.activestate.com/recipes/117119/ def gen_primes(): """ Generate an infinite sequence of prime numbers. """ # Maps composites to primes witnessing their compositeness. # This is memory efficient, as the sieve is not "run forward" # indefinitely, but only as long as required by the current # number being tested. # D = {} # The running integer that's checked for primeness q = 2 while True: if q not in D: # q is a new prime. # Yield it and mark its first multiple that isn't # already marked in previous iterations # yield q D[q * q] = [q] else: # q is composite. D[q] is the list of primes that # divide it. Since we've reached q, we no longer # need it in the map, but we'll mark the next # multiples of its witnesses to prepare for larger # numbers # for p in D[q]: D.setdefault(p + q, []).append(p) del D[q] q += 1 ``` Note that it returns a generator.
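Because `gen_primes` is a generator, taking a fixed number of primes pairs naturally with `itertools.islice`. Here is a condensed, self-contained version of the same incremental sieve, with a usage example:

```python
from itertools import count, islice

def gen_primes():
    # Same incremental Sieve of Eratosthenes as above, condensed:
    # D maps each upcoming composite to the primes that divide it.
    D = {}
    for q in count(2):
        if q not in D:
            yield q                  # q is prime
            D[q * q] = [q]
        else:
            for p in D.pop(q):       # q is composite; advance its witnesses
                D.setdefault(p + q, []).append(p)

print(list(islice(gen_primes(), 10)))  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```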
Simple Prime Generator in Python
567,222
21
2009-02-19T21:22:24Z
568,684
8
2009-02-20T08:13:32Z
[ "python", "primes" ]
could someone please tell me what I'm doing wrong with this code. It is just printing 'count' anyway. I just want a very simple prime generator (nothing fancy). Thanks a lot. lincoln. ``` import math def main(): count = 3 one = 1 while one == 1: for x in range(2, int(math.sqrt(count) + 1)): if count % x == 0: continue if count % x != 0: print count count += 1 ```
``` def is_prime(num): """Returns True if the number is prime else False.""" if num == 0 or num == 1: return False for x in range(2, num): if num % x == 0: return False else: return True >>> filter(is_prime, range(1, 20)) [2, 3, 5, 7, 11, 13, 17, 19] ``` We will get all the prime numbers up to 20 in a list. I could have used the Sieve of Eratosthenes but you said you want something very simple. ;)
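The linear scan in the answer's `is_prime` checks every candidate divisor below `num`; it only needs to go up to the square root. A hedged Python 3 sketch of the same idea:

```python
import math

def is_prime(num):
    """Return True if num is prime, testing divisors only up to sqrt(num)."""
    if num < 2:
        return False
    for x in range(2, math.isqrt(num) + 1):
        if num % x == 0:
            return False
    return True

print([n for n in range(1, 20) if is_prime(n)])  # same primes, fewer divisions
```

For a number like 10007 this does about 100 divisions instead of about 10000.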
Simple Prime Generator in Python
567,222
21
2009-02-19T21:22:24Z
2,624,386
7
2010-04-12T18:28:58Z
[ "python", "primes" ]
could someone please tell me what I'm doing wrong with this code. It is just printing 'count' anyway. I just want a very simple prime generator (nothing fancy). Thanks a lot. lincoln. ``` import math def main(): count = 3 one = 1 while one == 1: for x in range(2, int(math.sqrt(count) + 1)): if count % x == 0: continue if count % x != 0: print count count += 1 ```
``` print [x for x in range(2,100) if not [t for t in range(2,x) if not x%t]] ```
Is it safe to use Python UUID module generated values in URL's of a webpage?
567,324
2
2009-02-19T21:47:29Z
567,347
7
2009-02-19T21:51:42Z
[ "python", "url", "uuid" ]
Is it safe to use Python UUID module generated values in URLs of a webpage? Want to use those IDs as part of URLs. Are there any non-safe characters ever generated by Python UUID that shouldn't be in URLs?
It is good practice to **always** urlencode data that will be placed into URLs. Then you need not be concerned with the specifics of UUID or if it will change in the future.
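As a concrete check of that advice: a UUID's canonical string contains only hex digits and hyphens, so `urllib.parse.quote` leaves it unchanged, but quoting is still the safe habit for arbitrary data (a small sketch):

```python
import uuid
from urllib.parse import quote

u = str(uuid.uuid4())
# the canonical form is already URL-safe, so quoting is a no-op here
print(u, quote(u) == u)

# arbitrary data, by contrast, does need encoding before going into a URL
print(quote("unsafe value?x=1"))
```

Always quoting means the URL stays valid even if the ID scheme changes later.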
running a command as a super user from a python script
567,542
19
2009-02-19T22:33:16Z
567,599
11
2009-02-19T22:46:38Z
[ "python", "subprocess", "sudo" ]
So I'm trying to get a process to be run as a super user from within a python script using subprocess. In the ipython shell something like ``` proc = subprocess.Popen('sudo apach2ctl restart', shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) ``` works fine, but as soon as I stick it into a script I start getting: `sudo: apach2ctl: command not found`. I would guess this is due to the way sudo handles environments on ubuntu. (I've also tried `sudo -E apche2ctl restart` and `sudo env path=$PATH apache2ctl restart` with no avail) So my question is basically, if I want to run `apache2ctl restart` as super user that prompts the user for the super user password when required, how should I go about doing this? I have no intention of storing passwords in the script. Edit: I've tried passing in the commands as both a string and tokenized into a list. In the python interpreter, with a string I'll get the password prompt properly (still doesnt work in a python script as in my original problem), a list just gives the help screen for sudo. Edit 2: So what I gather is that while Popen will work with some commands just as strings when shell=True, it takes ``` proc = subprocess.Popen(['sudo','/usr/sbin/apache2ctl','restart']) ``` without 'shell=True' to get sudo to work. Thanks!
Try giving the full path to apache2ctl.
running a command as a super user from a python script
567,542
19
2009-02-19T22:33:16Z
567,687
14
2009-02-19T23:11:30Z
[ "python", "subprocess", "sudo" ]
So I'm trying to get a process to be run as a super user from within a python script using subprocess. In the ipython shell something like ``` proc = subprocess.Popen('sudo apach2ctl restart', shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) ``` works fine, but as soon as I stick it into a script I start getting: `sudo: apach2ctl: command not found`. I would guess this is due to the way sudo handles environments on ubuntu. (I've also tried `sudo -E apche2ctl restart` and `sudo env path=$PATH apache2ctl restart` with no avail) So my question is basically, if I want to run `apache2ctl restart` as super user that prompts the user for the super user password when required, how should I go about doing this? I have no intention of storing passwords in the script. Edit: I've tried passing in the commands as both a string and tokenized into a list. In the python interpreter, with a string I'll get the password prompt properly (still doesnt work in a python script as in my original problem), a list just gives the help screen for sudo. Edit 2: So what I gather is that while Popen will work with some commands just as strings when shell=True, it takes ``` proc = subprocess.Popen(['sudo','/usr/sbin/apache2ctl','restart']) ``` without 'shell=True' to get sudo to work. Thanks!
Try: `subprocess.call(['sudo', 'apach2ctl', 'restart'])` The subprocess needs to access the real stdin/out/err for it to be able to prompt you, and read in your password. If you set them up as pipes, you need to feed the password into that pipe yourself. If you don't define them, then it grabs sys.stdout, etc...
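The key point, passing the command as a list so each element is one argv entry and no shell parsing is involved, can be illustrated with a harmless command standing in for the `sudo` call (illustrative sketch, not the apache2ctl invocation itself):

```python
import subprocess
import sys

# list form: each element is one argv entry; no shell quoting issues,
# and an absolute path (like /usr/sbin/apache2ctl) avoids PATH surprises.
# sys.executable stands in here for that absolute path.
result = subprocess.run([sys.executable, "-c", "print('restarted')"],
                        capture_output=True, text=True)
print(result.stdout.strip())  # restarted
```

With `shell=True` the whole string would instead be handed to `/bin/sh`, which is where the environment-dependent `command not found` behaviour comes from.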
is there a pythonic way to try something up to a maximum number of times?
567,622
59
2009-02-19T22:52:01Z
567,645
55
2009-02-19T23:00:09Z
[ "python", "exception-handling" ]
I have a python script which is querying a MySQL server on a shared linux host. For some reason, queries to MySQL often return a "server has gone away" error: ``` _mysql_exceptions.OperationalError: (2006, 'MySQL server has gone away') ``` If you try the query again immediately afterwards, it usually succeeds. So, I'd like to know if there's a sensible way in python to try to execute a query, and if it fails, to try again, up to a fixed number of tries. Probably I'd want it to try 5 times before giving up altogether. Here's the kind of code I have: ``` conn = MySQLdb.connect(host, user, password, database) cursor = conn.cursor() try: cursor.execute(query) rows = cursor.fetchall() for row in rows: # do something with the data except MySQLdb.Error, e: print "MySQL Error %d: %s" % (e.args[0], e.args[1]) ``` Clearly I could do it by having another attempt in the except clause, but that's incredibly ugly, and I have a feeling there must be a decent way to achieve this.
How about: ``` conn = MySQLdb.connect(host, user, password, database) cursor = conn.cursor() attempts = 0 while attempts < 3: try: cursor.execute(query) rows = cursor.fetchall() for row in rows: # do something with the data break except MySQLdb.Error, e: attempts += 1 print "MySQL Error %d: %s" % (e.args[0], e.args[1]) ```
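The counting loop above can also be factored into a small reusable helper, sketched here with a generic callable standing in for the MySQL query (hypothetical names, Python 3 syntax):

```python
def run_with_retries(func, max_attempts=3):
    """Call func(); on exception, retry up to max_attempts times in total."""
    for attempt in range(max_attempts):
        try:
            return func()
        except Exception as exc:
            last_error = exc
    # every attempt failed: re-raise the last error seen
    raise last_error

calls = []
def flaky_query():
    """Stand-in for cursor.execute(): fails twice, then succeeds."""
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("server has gone away")
    return "ok"

print(run_with_retries(flaky_query))  # succeeds on the third attempt
```

This keeps the retry policy in one place instead of repeating the counter in every query site.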
is there a pythonic way to try something up to a maximum number of times?
567,622
59
2009-02-19T22:52:01Z
567,697
65
2009-02-19T23:16:22Z
[ "python", "exception-handling" ]
I have a python script which is querying a MySQL server on a shared linux host. For some reason, queries to MySQL often return a "server has gone away" error: ``` _mysql_exceptions.OperationalError: (2006, 'MySQL server has gone away') ``` If you try the query again immediately afterwards, it usually succeeds. So, I'd like to know if there's a sensible way in python to try to execute a query, and if it fails, to try again, up to a fixed number of tries. Probably I'd want it to try 5 times before giving up altogether. Here's the kind of code I have: ``` conn = MySQLdb.connect(host, user, password, database) cursor = conn.cursor() try: cursor.execute(query) rows = cursor.fetchall() for row in rows: # do something with the data except MySQLdb.Error, e: print "MySQL Error %d: %s" % (e.args[0], e.args[1]) ``` Clearly I could do it by having another attempt in the except clause, but that's incredibly ugly, and I have a feeling there must be a decent way to achieve this.
Building on Dana's answer, you might want to do this as a decorator: ``` def retry(howmany): def tryIt(func): def f(): attempts = 0 while attempts < howmany: try: return func() except: attempts += 1 return f return tryIt ``` Then... ``` @retry(5) def the_db_func(): # [...] ``` ### Enhanced version that uses the `decorator` module ``` import decorator, time def retry(howmany, *exception_types, **kwargs): timeout = kwargs.get('timeout', 0.0) # seconds @decorator.decorator def tryIt(func, *fargs, **fkwargs): for _ in xrange(howmany): try: return func(*fargs, **fkwargs) except exception_types or Exception: if timeout is not None: time.sleep(timeout) return tryIt ``` Then... ``` @retry(5, MySQLdb.Error, timeout=0.5) def the_db_func(): # [...] ``` To install [the `decorator` module](http://pypi.python.org/pypi/decorator): ``` $ easy_install decorator ```
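The same decorator idea can be written with the standard library's `functools.wraps` and argument passthrough, without the external `decorator` package (a sketch in Python 3 syntax, not that module's actual API):

```python
import functools
import time

def retry(howmany, *exception_types, timeout=0.0):
    """Retry the wrapped call up to howmany times on the given exceptions."""
    exception_types = exception_types or (Exception,)
    def wrap(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(howmany):
                try:
                    return func(*args, **kwargs)
                except exception_types:
                    if attempt == howmany - 1:
                        raise  # out of attempts: propagate the last error
                    time.sleep(timeout)
        return wrapper
    return wrap

attempts = []

@retry(5, RuntimeError)
def the_db_func():
    """Stand-in for the real query: fails twice, then succeeds."""
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("gone away")
    return len(attempts)

print(the_db_func())  # 3
```

Exceptions not listed in `exception_types` propagate immediately, so unrelated bugs are not silently retried.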
How can I process command line arguments in Python?
567,879
18
2009-02-20T00:41:02Z
567,899
15
2009-02-20T00:51:36Z
[ "python", "command-line", "command-line-arguments" ]
What would be an easy expression to process command line arguments if I'm expecting anything like 001 or 999 (let's limit expectations to 001...999 range for this time), and few other arguments passed, and would like to ignore any unexpected? I understand if for example I need to find out if "debug" was passed among parameters it'll be something like that: ``` if 'debug' in argv[1:]: print 'Will be running in debug mode.' ``` How to find out if 009 or 575 was passed? All those are expected calls: ``` python script.py python script.py 011 python script.py 256 debug python script.py 391 xls python script.py 999 debug pdf ``` At this point I don't care about calls like that: ``` python script.py 001 002 245 568 python script.py some unexpected argument python script.py 0001 python script.py 02 ``` ...first one - because of more than one "numeric" argument; second - because of... well, unexpected arguments; third and fourth - because of non-3-digits arguments.
Have a look at the [optparse](http://docs.python.org/library/optparse.html) module. Dealing with sys.argv yourself is fine for really simple stuff, but it gets out of hand quickly. Note that you may find optparse easier to use if you can change your argument format a little; e.g. replace `debug` with `--debug` and `xls` with `--xls` or `--output=xls`.
How can I process command line arguments in Python?
567,879
18
2009-02-20T00:41:02Z
567,923
25
2009-02-20T01:02:40Z
[ "python", "command-line", "command-line-arguments" ]
What would be an easy expression to process command line arguments if I'm expecting anything like 001 or 999 (let's limit expectations to 001...999 range for this time), and few other arguments passed, and would like to ignore any unexpected? I understand if for example I need to find out if "debug" was passed among parameters it'll be something like that: ``` if 'debug' in argv[1:]: print 'Will be running in debug mode.' ``` How to find out if 009 or 575 was passed? All those are expected calls: ``` python script.py python script.py 011 python script.py 256 debug python script.py 391 xls python script.py 999 debug pdf ``` At this point I don't care about calls like that: ``` python script.py 001 002 245 568 python script.py some unexpected argument python script.py 0001 python script.py 02 ``` ...first one - because of more than one "numeric" argument; second - because of... well, unexpected arguments; third and fourth - because of non-3-digits arguments.
As others answered, optparse is the best option, but if you just want quick code try something like this: ``` import sys, re first_re = re.compile(r'^\d{3}$') if len(sys.argv) > 1: if first_re.match(sys.argv[1]): print "Primary argument is : ", sys.argv[1] else: raise ValueError("First argument should be ...") args = sys.argv[2:] else: args = () # ... anywhere in code ... if 'debug' in args: print 'debug flag' if 'xls' in args: print 'xls flag' ``` **EDIT**: Here's an optparse example because so many people are answering optparse without really explaining why, or explaining what you have to change to make it work. The primary reason to use optparse is it gives you more flexibility for expansion later, and gives you more flexibility on the command line. In other words, your options can appear in any order and usage messages are generated automatically. However to make it work with optparse you need to change your specifications to put '-' or '--' in front of the optional arguments and you need to allow all the arguments to be in any order. So here's an example using optparse: ``` import sys, re, optparse first_re = re.compile(r'^\d{3}$') parser = optparse.OptionParser() parser.set_defaults(debug=False,xls=False) parser.add_option('--debug', action='store_true', dest='debug') parser.add_option('--xls', action='store_true', dest='xls') (options, args) = parser.parse_args() if len(args) == 1: if first_re.match(args[0]): print "Primary argument is : ", args[0] else: raise ValueError("First argument should be ...") elif len(args) > 1: raise ValueError("Too many command line arguments") if options.debug: print 'debug flag' if options.xls: print 'xls flag' ``` The differences here with optparse and your spec is that now you can have command lines like: ``` python script.py --debug --xls 001 ``` and you can easily add new options by calling parser.add\_option()
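In later Python versions, `argparse` supersedes `optparse`; a hedged sketch of the same command line rewritten for it (names here are illustrative):

```python
import argparse
import re

def three_digits(value):
    # reject anything that is not exactly three digits, e.g. '02' or '0001'
    if not re.fullmatch(r"\d{3}", value):
        raise argparse.ArgumentTypeError("expected exactly three digits")
    return value

def parse(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("number", nargs="?", type=three_digits,
                        help="optional 3-digit argument")
    parser.add_argument("--debug", action="store_true")
    parser.add_argument("--xls", action="store_true")
    return parser.parse_args(argv)

opts = parse(["256", "--debug"])
print(opts.number, opts.debug, opts.xls)  # 256 True False
```

The `type` callable does the regex validation, so invalid values fail with a usage message instead of being silently accepted.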
How to check if there exists a process with a given pid?
568,271
53
2009-02-20T04:22:43Z
568,285
90
2009-02-20T04:31:14Z
[ "python", "process", "pid" ]
Is there a way to check to see if a pid corresponds to a valid process? I'm getting a pid from a different source other than from `os.getpid()` and I need to check to see if a process with that pid doesn't exist on the machine. I need it to be available in Unix and Windows. I'm also checking to see if the PID is NOT in use.
Sending signal 0 to a pid will raise an OSError exception if the pid is not running, and do nothing otherwise. ``` import os def check_pid(pid): """ Check For the existence of a unix pid. """ try: os.kill(pid, 0) except OSError: return False else: return True ```
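A quick way to convince yourself the signal-0 trick works is to probe the current interpreter's own pid (Unix-only sketch, same `check_pid` as above in Python 3 syntax):

```python
import os

def check_pid(pid):
    """Check for the existence of a Unix pid."""
    try:
        os.kill(pid, 0)  # signal 0: no signal is sent, only error checking
    except OSError:
        return False
    return True

print(check_pid(os.getpid()))  # the current process certainly exists
```

Note that on systems where you lack permission to signal the target, `os.kill` may raise `OSError` with `EPERM` even though the process exists; a later answer below handles that case explicitly.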
How to check if there exists a process with a given pid?
568,271
53
2009-02-20T04:22:43Z
6,940,314
38
2011-08-04T11:08:51Z
[ "python", "process", "pid" ]
Is there a way to check to see if a pid corresponds to a valid process? I'm getting a pid from a different source other than from `os.getpid()` and I need to check to see if a process with that pid doesn't exist on the machine. I need it to be available in Unix and Windows. I'm also checking to see if the PID is NOT in use.
mluebke code is not 100% correct; kill() can also raise EPERM (access denied) in which case that obviously means a process exists. This is supposed to work: (edited as per Jason R. Coombs comments) ``` import errno import os import sys def pid_exists(pid): """Check whether pid exists in the current process table. UNIX only. """ if pid < 0: return False if pid == 0: # According to "man 2 kill" PID 0 refers to every process # in the process group of the calling process. # On certain systems 0 is a valid PID but we have no way # to know that in a portable fashion. raise ValueError('invalid PID 0') try: os.kill(pid, 0) except OSError as err: if err.errno == errno.ESRCH: # ESRCH == No such process return False elif err.errno == errno.EPERM: # EPERM clearly means there's a process to deny access to return True else: # According to "man 2 kill" possible error values are # (EINVAL, EPERM, ESRCH) raise else: return True ``` You can't do this on Windows unless you use pywin32, ctypes or a C extension module. If you're OK with depending from an external lib you can use [psutil](https://github.com/giampaolo/psutil): ``` >>> import psutil >>> psutil.pid_exists(2353) True ```
How to check if there exists a process with a given pid?
568,271
53
2009-02-20T04:22:43Z
17,622,447
28
2013-07-12T19:16:25Z
[ "python", "process", "pid" ]
Is there a way to check to see if a pid corresponds to a valid process? I'm getting a pid from a different source other than from `os.getpid()` and I need to check to see if a process with that pid doesn't exist on the machine. I need it to be available in Unix and Windows. I'm also checking to see if the PID is NOT in use.
Have a look at the [`psutil`](https://pypi.python.org/pypi/psutil) module: > **psutil** (python system and process utilities) is a cross-platform library for retrieving information on **running processes** and **system utilization** (CPU, memory, disks, network) in Python. [...] It currently supports **Linux**, **Windows**, **OSX**, **FreeBSD** and **Sun Solaris**, both **32-bit** and **64-bit** architectures, with Python versions from **2.6 to 3.4** (users of Python 2.4 and 2.5 may use 2.1.3 version). PyPy is also known to work. It has a function called `pid_exists()` that you can use to check whether a process with the given pid exists. Here's an example: ``` import psutil pid = 12345 if psutil.pid_exists(pid): print "a process with pid %d exists" % pid else: print "a process with pid %d does not exist" % pid ``` For reference: * <https://pypi.python.org/pypi/psutil> * <https://github.com/giampaolo/psutil> * <http://pythonhosted.org/psutil/#psutil.pid_exists>
How do I create an empty array/matrix in NumPy?
568,962
103
2009-02-20T09:58:11Z
569,063
162
2009-02-20T10:36:46Z
[ "python", "arrays", "numpy" ]
I can't figure out how to use an array or matrix in the way that I would normally use a list. I want to create an empty array (or matrix) and then add one column (or row) to it at a time. At the moment the only way I can find to do this is like: ``` mat = None for col in columns: if mat is None: mat = col else: mat = hstack((mat, col)) ``` Whereas if it were a list, I'd do something like this: ``` list = [] for item in data: list.append(item) ``` Is there a way to use that kind of notation for [NumPy](http://en.wikipedia.org/wiki/NumPy) arrays or matrices?
You have the wrong mental model for using NumPy efficiently. NumPy arrays are stored in contiguous blocks of memory. If you want to add rows or columns to an existing array, the entire array needs to be copied to a new block of memory, creating gaps for the new elements to be stored. This is very inefficient if done repeatedly to build an array. In the case of adding rows, your best bet is to create an array that is as big as your data set will eventually be, and then add data to it row-by-row: ``` >>> import numpy >>> a = numpy.zeros(shape=(5,2)) >>> a array([[ 0., 0.], [ 0., 0.], [ 0., 0.], [ 0., 0.], [ 0., 0.]]) >>> a[0] = [1,2] >>> a[1] = [2,3] >>> a array([[ 1., 2.], [ 2., 3.], [ 0., 0.], [ 0., 0.], [ 0., 0.]]) ```
How do I create an empty array/matrix in NumPy?
568,962
103
2009-02-20T09:58:11Z
569,090
42
2009-02-20T10:44:19Z
[ "python", "arrays", "numpy" ]
I can't figure out how to use an array or matrix in the way that I would normally use a list. I want to create an empty array (or matrix) and then add one column (or row) to it at a time. At the moment the only way I can find to do this is like: ``` mat = None for col in columns: if mat is None: mat = col else: mat = hstack((mat, col)) ``` Whereas if it were a list, I'd do something like this: ``` list = [] for item in data: list.append(item) ``` Is there a way to use that kind of notation for [NumPy](http://en.wikipedia.org/wiki/NumPy) arrays or matrices?
A NumPy array is a very different data structure from a list and is designed to be used in different ways. Your use of `hstack` is potentially very inefficient... every time you call it, all the data in the existing array is copied into a new one. (The `append` function will have the same issue.) If you want to build up your matrix one column at a time, you might be best off to keep it in a list until it is finished, and only then convert it into an array. e.g. ``` mylist = [] for item in data: mylist.append(item) mat = numpy.array(mylist) ``` `item` can be a list, an array or any iterable, as long as each `item` has the same number of elements. In this particular case (`data` is some iterable holding the matrix columns) you can simply use ``` mat = numpy.array(data) ``` (Also note that using `list` as a variable name is probably not good practice since it masks the built-in type by that name, which can lead to bugs.) EDIT: If for some reason you really do want to create an empty array, you can just use `numpy.array([])`, but this is rarely useful!
How do I create an empty array/matrix in NumPy?
568,962
103
2009-02-20T09:58:11Z
15,926,110
8
2013-04-10T12:39:58Z
[ "python", "arrays", "numpy" ]
I can't figure out how to use an array or matrix in the way that I would normally use a list. I want to create an empty array (or matrix) and then add one column (or row) to it at a time. At the moment the only way I can find to do this is like: ``` mat = None for col in columns: if mat is None: mat = col else: mat = hstack((mat, col)) ``` Whereas if it were a list, I'd do something like this: ``` list = [] for item in data: list.append(item) ``` Is there a way to use that kind of notation for [NumPy](http://en.wikipedia.org/wiki/NumPy) arrays or matrices?
I looked into this a lot because I needed to use a numpy.array as a set in one of my school projects and I needed it to be initialized empty... I didn't find any relevant answer here on Stack Overflow, so I started doodling something. ``` # Initialize your variable as an empty list first In [32]: x=[] # and now cast it as a numpy ndarray In [33]: x=np.array(x) ``` The result will be: ``` In [34]: x Out[34]: array([], dtype=float64) ``` Therefore you can directly initialize an np array as follows: ``` In [36]: x= np.array([], dtype=np.float64) ``` I hope this helps.
How do I create an empty array/matrix in NumPy?
568,962
103
2009-02-20T09:58:11Z
22,978,734
13
2014-04-10T04:34:58Z
[ "python", "arrays", "numpy" ]
I can't figure out how to use an array or matrix in the way that I would normally use a list. I want to create an empty array (or matrix) and then add one column (or row) to it at a time. At the moment the only way I can find to do this is like: ``` mat = None for col in columns: if mat is None: mat = col else: mat = hstack((mat, col)) ``` Whereas if it were a list, I'd do something like this: ``` list = [] for item in data: list.append(item) ``` Is there a way to use that kind of notation for [NumPy](http://en.wikipedia.org/wiki/NumPy) arrays or matrices?
To create an empty multidimensional array in NumPy (e.g. a 2D array `m*n` to store your matrix), in case you don't know how many rows `m` you will append and don't care about the computational cost Stephen Simmons mentioned (namely re-building the array at each append), you can set to 0 the dimension along which you want to append: `X = np.empty(shape=[0, n])`. This way you can use for example (here `m = 5`, which we assume we didn't know when creating the empty matrix, and `n = 2`): ``` n = 2 X = np.empty(shape=[0, n]) for i in range(5): for j in range(2): X = np.append(X, [[i, j]], axis=0) print X ``` which will give you: ``` [[ 0. 0.] [ 0. 1.] [ 1. 0.] [ 1. 1.] [ 2. 0.] [ 2. 1.] [ 3. 0.] [ 3. 1.] [ 4. 0.] [ 4. 1.]] ```
How to programmatically insert comments into a Microsoft Word document?
568,972
9
2009-02-20T10:02:47Z
569,092
7
2009-02-20T10:44:49Z
[ "python", "ms-word", "common-lisp", "openxml" ]
Looking for a way to programmatically insert comments (using the comments feature in Word) into a specific location in a MS Word document. I would prefer an approach that is usable across recent versions of MS Word standard formats and implementable in a non-Windows environment (ideally using Python and/or Common Lisp). I have been looking at the OpenXML SDK but can't seem to find a solution there.
Here is what I did: 1. Create a simple document with word (i.e. a very small one) 2. Add a comment in Word 3. Save as docx. 4. Use the zip module of python to access the archive (docx files are ZIP archives). 5. Dump the content of the entry "word/document.xml" in the archive. This is the XML of the document itself. This should give you an idea what you need to do. After that, you can use one of the XML libraries in Python to parse the document, change it and add it back to a new ZIP archive with the extension ".docx". Simply copy every other entry from the original ZIP and you have a new, valid Word document. There is also a library which might help: [openxmllib](http://code.google.com/p/openxmllib/)
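Steps 4 and 5 rely only on the stdlib `zipfile` module; here is a sketch that builds a toy "docx-like" archive in memory and reads its document part back (the XML content is a made-up placeholder, not real WordprocessingML):

```python
import io
import zipfile

# build a minimal zip archive in memory, standing in for a .docx file
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("word/document.xml",
                "<w:document><w:body>hello</w:body></w:document>")

# reopen the archive and dump the document entry, as steps 4-5 describe
with zipfile.ZipFile(buf) as zf:
    xml = zf.read("word/document.xml").decode("utf-8")
print(xml)
```

A real .docx works the same way, except the archive comes from disk and `word/document.xml` contains the actual document markup to parse and rewrite.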
Django: multiple models in one template using forms
569,468
85
2009-02-20T12:50:26Z
575,133
56
2009-02-22T16:09:05Z
[ "python", "django", "design", "django-forms" ]
I'm building a support ticket tracking app and have a few models I'd like to create from one page. Tickets belong to a Customer via a ForeignKey. Notes belong to Tickets via a ForeignKey as well. I'd like to have the option of selecting a Customer (that's a whole separate project) OR creating a new Customer, then creating a Ticket and finally creating a Note assigned to the new ticket. Since I'm fairly new to Django, I tend to work iteratively, trying out new features each time. I've played with ModelForms but I want to hide some of the fields and do some complex validation. It seems like the level of control I'm looking for either requires formsets or doing everything by hand, complete with a tedious, hand-coded template page, which I'm trying to avoid. Is there some lovely feature I'm missing? Does someone have a good reference or example for using formsets? I spent a whole weekend on the API docs for them and I'm still clueless. Is it a design issue if I break down and hand-code everything?
This really isn't too hard to implement with [ModelForms](http://docs.djangoproject.com/en/dev/topics/forms/modelforms/#topics-forms-modelforms). So let's say you have Forms A, B, and C. You print out each of the forms on the page, and now you need to handle the POST. ``` if request.method == 'POST': a_valid = formA.is_valid() b_valid = formB.is_valid() c_valid = formC.is_valid() # we do this since 'and' short circuits and we want to check the whole page for form errors if a_valid and b_valid and c_valid: a = formA.save() b = formB.save(commit=False) c = formC.save(commit=False) b.foreignkeytoA = a b.save() c.foreignkeytoB = b c.save() ``` [Here](http://docs.djangoproject.com/en/dev/ref/forms/validation/#ref-forms-validation) are the docs for custom validation.
Django: multiple models in one template using forms
569,468
85
2009-02-20T12:50:26Z
606,318
19
2009-03-03T13:05:53Z
[ "python", "django", "design", "django-forms" ]
I'm building a support ticket tracking app and have a few models I'd like to create from one page. Tickets belong to a Customer via a ForeignKey. Notes belong to Tickets via a ForeignKey as well. I'd like to have the option of selecting a Customer (that's a whole separate project) OR creating a new Customer, then creating a Ticket and finally creating a Note assigned to the new ticket. Since I'm fairly new to Django, I tend to work iteratively, trying out new features each time. I've played with ModelForms but I want to hide some of the fields and do some complex validation. It seems like the level of control I'm looking for either requires formsets or doing everything by hand, complete with a tedious, hand-coded template page, which I'm trying to avoid. Is there some lovely feature I'm missing? Does someone have a good reference or example for using formsets? I spent a whole weekend on the API docs for them and I'm still clueless. Is it a design issue if I break down and hand-code everything?
I very recently had the same problem and just figured out how to do this. Assuming you have three classes, Primary, B, C and that B, C have a foreign key to Primary ``` class PrimaryForm(ModelForm): class Meta: model = Primary class BForm(ModelForm): class Meta: model = B exclude = ('primary',) class CForm(ModelForm): class Meta: model = C exclude = ('primary',) def generateView(request): if request.method == 'POST': # If the form has been submitted... primary_form = PrimaryForm(request.POST, prefix = "primary") b_form = BForm(request.POST, prefix = "b") c_form = CForm(request.POST, prefix = "c") if primary_form.is_valid() and b_form.is_valid() and c_form.is_valid(): # All validation rules pass print "all validation passed" primary = primary_form.save() b = b_form.save(commit=False) b.primary = primary b.save() c = c_form.save(commit=False) c.primary = primary c.save() return HttpResponseRedirect("/viewer/%s/" % (primary.name)) else: print "failed" else: primary_form = PrimaryForm(prefix = "primary") b_form = BForm(prefix = "b") c_form = CForm(prefix = "c") return render_to_response('multi_model.html', { 'primary_form': primary_form, 'b_form': b_form, 'c_form': c_form, }) ``` This method should allow you to do whatever validation you require, as well as generating all three objects on the same page. I have also used javascript and hidden fields to allow the generation of multiple B, C objects on the same page.
Django: multiple models in one template using forms
569,468
85
2009-02-20T12:50:26Z
985,901
60
2009-06-12T09:50:28Z
[ "python", "django", "design", "django-forms" ]
I'm building a support ticket tracking app and have a few models I'd like to create from one page. Tickets belong to a Customer via a ForeignKey. Notes belong to Tickets via a ForeignKey as well. I'd like to have the option of selecting a Customer (that's a whole separate project) OR creating a new Customer, then creating a Ticket and finally creating a Note assigned to the new ticket. Since I'm fairly new to Django, I tend to work iteratively, trying out new features each time. I've played with ModelForms but I want to hide some of the fields and do some complex validation. It seems like the level of control I'm looking for either requires formsets or doing everything by hand, complete with a tedious, hand-coded template page, which I'm trying to avoid. Is there some lovely feature I'm missing? Does someone have a good reference or example for using formsets? I spent a whole weekend on the API docs for them and I'm still clueless. Is it a design issue if I break down and hand-code everything?
I just was in about the same situation a day ago, and here are my 2 cents: 1) I found arguably the shortest and most concise demonstration of multiple model entry in a single form here: <http://collingrady.wordpress.com/2008/02/18/editing-multiple-objects-in-django-with-newforms/> . In a nutshell: Make a form for each model, submit them both to the template in a single `<form>`, using the `prefix` keyword argument, and have the view handle validation. If there is a dependency, just make sure you save the "parent" model before the dependent one, and use the parent's ID for the foreign key before committing the save of the "child" model. The link has the demo. 2) Maybe formsets can be beaten into doing this, but as far as I delved in, formsets are primarily for entering multiples of the same model, which *may* be optionally tied to another model/models by foreign keys. However, there seems to be no default option for entering more than one model's data, and that's not what a formset seems to be meant for.
How to keep track of thread progress in Python without freezing the PyQt GUI?
569,650
18
2009-02-20T14:00:06Z
574,130
10
2009-02-22T01:33:00Z
[ "python", "multithreading", "user-interface", "pyqt" ]
## **Questions:** 1. What is the best practice for keeping track of a thread's progress without locking the GUI ("Not Responding")? 2. Generally, what are the best practices for threading as it applies to GUI development? ## **Question Background:** * I have a PyQt GUI for Windows. * It is used to process sets of HTML documents. * It takes anywhere from three seconds to three hours to process a set of documents. * I want to be able to process multiple sets at the same time. * I don't want the GUI to lock. * I'm looking at the threading module to achieve this. * I am relatively new to threading. * The GUI has one progress bar. * I want it to display the progress of the selected thread. * Display results of the selected thread if it's finished. * I'm using Python 2.5. **My Idea:** Have the threads emit a QtSignal when the progress is updated that triggers some function that updates the progress bar. Also signal when finished processing so results can be displayed. ``` #NOTE: this is example code for my idea, you do not have # to read this to answer the question(s).
import threading from PyQt4 import QtCore, QtGui import re import copy class ProcessingThread(threading.Thread, QtCore.QObject): __pyqtSignals__ = ( "progressUpdated(str)", "resultsReady(str)") def __init__(self, docs): self.docs = docs self.progress = 0 #int between 0 and 100 self.results = [] threading.Thread.__init__(self) def getResults(self): return copy.deepcopy(self.results) def run(self): num_docs = len(self.docs) - 1 for i, doc in enumerate(self.docs): processed_doc = self.processDoc(doc) self.results.append(processed_doc) new_progress = int((float(i)/num_docs)*100) #emit signal only if progress has changed if self.progress != new_progress: self.emit(QtCore.SIGNAL("progressUpdated(str)"), self.getName()) self.progress = new_progress if self.progress == 100: self.emit(QtCore.SIGNAL("resultsReady(str)"), self.getName()) def processDoc(self, doc): ''' this is trivial for shortness' sake ''' return re.findall('<a [^>]*>.*?</a>', doc) class GuiApp(QtGui.QMainWindow): def __init__(self): self.processing_threads = {} #{'thread_name': Thread(processing_thread)} self.progress_object = {} #{'thread_name': int(thread_progress)} self.results_object = {} #{'thread_name': []} self.selected_thread = '' #'thread_name' def processDocs(self, docs): #create new thread p_thread = ProcessingThread(docs) thread_name = "example_thread_name" p_thread.setName(thread_name) p_thread.start() #add thread to dict of threads self.processing_threads[thread_name] = p_thread #init progress_object for this thread self.progress_object[thread_name] = p_thread.progress #connect thread signals to GuiApp functions QtCore.QObject.connect(p_thread, QtCore.SIGNAL('progressUpdated(str)'), self.updateProgressObject(thread_name)) QtCore.QObject.connect(p_thread, QtCore.SIGNAL('resultsReady(str)'), self.updateResultsObject(thread_name)) def updateProgressObject(self, thread_name): #update progress_object for all threads self.progress_object[thread_name] = self.processing_threads[thread_name].progress 
#update progress bar for selected thread if self.selected_thread == thread_name: self.setProgressBar(self.progress_object[self.selected_thread]) def updateResultsObject(self, thread_name): #update results_object for thread with results self.results_object[thread_name] = self.processing_threads[thread_name].getResults() #update results widget for selected thread try: self.setResultsWidget(self.results_object[thread_name]) except KeyError: self.setResultsWidget(None) ``` Any commentary on this approach (e.g. drawbacks, pitfalls, praises, etc.) will be appreciated. ## **Resolution:** I ended up using the QThread class and associated signals and slots to communicate between threads. This is primarily because my program already uses Qt/PyQt4 for the GUI objects/widgets. This solution also required fewer changes to my existing code to implement. Here is a link to an applicable Qt article that explains how Qt handles threads and signals, <http://www.linuxjournal.com/article/9602>. Excerpt below: > Fortunately, Qt permits > signals and slots to be connected > across threads—as long as the threads > are running their own event loops. > This is a much cleaner method of > communication compared to sending and > receiving events, because it avoids > all the bookkeeping and intermediate > QEvent-derived classes that become > necessary in any nontrivial > application. Communicating between > threads now becomes a matter of > connecting signals from one thread to > the slots in another, and the mutexing > and thread-safety issues of exchanging > data between threads are handled by > Qt. > > Why is it necessary to run an event > loop within each thread to which you > want to connect signals? The reason > has to do with the inter-thread > communication mechanism used by Qt > when connecting signals from one > thread to the slot of another thread. > When such a connection is made, it is > referred to as a queued connection. 
> When signals are emitted through a > queued connection, the slot is invoked > the next time the destination object's > event loop is executed. If the slot > had instead been invoked directly by a > signal from another thread, that slot > would execute in the same context as > the calling thread. Normally, this is > not what you want (and especially not > what you want if you are using a > database connection, as the database > connection can be used only by the > thread that created it). The queued > connection properly dispatches the > signal to the thread object and > invokes its slot in its own context by > piggy-backing on the event system. > This is precisely what we want for > inter-thread communication in which > some of the threads are handling > database connections. The Qt > signal/slot mechanism is at root an > implementation of the inter-thread > event-passing scheme outlined above, > but with a much cleaner and > easier-to-use interface. **NOTE:** *eliben* also has a good answer, and if I weren't using PyQt4, which handles thread-safety and mutexing, his solution would have been my choice.
If you want to use signals to indicate progress to the main thread then you should really be using PyQt's QThread class instead of the Thread class from Python's threading module. A simple example which uses QThread, signals and slots can be found on the PyQt Wiki: <https://wiki.python.org/moin/PyQt/Threading,_Signals_and_Slots>
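For reference, the pattern looks roughly like this — a hedged, untested sketch using PyQt4's old-style signal syntax (as in the question); the signal and method names here are illustrative, not from the wiki page: ``` from PyQt4 import QtCore class Worker(QtCore.QThread): def __init__(self, docs, parent=None): QtCore.QThread.__init__(self, parent) self.docs = docs def run(self): # run() executes in the new thread; signals emitted here are # delivered to GUI-thread slots via queued connections. for i, doc in enumerate(self.docs): # ... process doc ... self.emit(QtCore.SIGNAL('progressUpdated(int)'), int(100.0 * (i + 1) / len(self.docs))) self.emit(QtCore.SIGNAL('processingFinished()')) # In the GUI class: # self.worker = Worker(docs) # self.connect(self.worker, QtCore.SIGNAL('progressUpdated(int)'), # self.progress_bar.setValue) # self.worker.start() ``` Because the slot lives in the GUI thread, updating the progress bar this way is safe without any explicit locking.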
Lazy choices in Django form
569,696
16
2009-02-20T14:18:37Z
569,748
16
2009-02-20T14:33:58Z
[ "python", "django", "forms", "lazy-evaluation" ]
I have a Django my\_forms.py like this: ``` class CarSearchForm(forms.Form): # lots of fields like this bodystyle = forms.ChoiceField(choices=bodystyle_choices()) ``` Each choice is e.g. ("Saloon", "Saloon (15 cars)"). So the choices are computed by this function. ``` def bodystyle_choices(): return [(bodystyle.bodystyle_name, '%s (%s cars)' % (bodystyle.bodystyle_name, bodystyle.car_set.count())) for bodystyle in Bodystyle.objects.all()] ``` My problem is the choices functions are getting executed every time I merely import my\_forms.py. I think this is due to the way Django declares its fields: in the class but not in a class method. Which is fine but my views.py imports my\_forms.py so the choices lookups are done on every request no matter which view is used. I thought that maybe putting choices=bodystyle\_choices with no bracket would work, but I get: ``` 'function' object is not iterable ``` Obviously I can use caching and put the "import my\_forms" just in the view functions required but that doesn't change the main point: my choices need to be lazy!
Try using a ModelChoiceField instead of a simple ChoiceField. I think you will be able to achieve what you want by tweaking your models a bit. Take a look at the [docs](http://docs.djangoproject.com/en/dev/ref/forms/fields/#modelchoicefield) for more. I would also add that ModelChoiceFields are `lazy` by default :)
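For example, a minimal sketch — assuming the `Bodystyle` model from the question is importable; `label_from_instance` is the standard hook for customizing the option text: ``` from django import forms from myapp.models import Bodystyle # adjust to your actual app path class BodystyleChoiceField(forms.ModelChoiceField): def label_from_instance(self, obj): # mirrors the "Saloon (15 cars)" labels built by bodystyle_choices() return '%s (%s cars)' % (obj.bodystyle_name, obj.car_set.count()) class CarSearchForm(forms.Form): bodystyle = BodystyleChoiceField(queryset=Bodystyle.objects.all()) ``` The queryset is not evaluated at import time; it is re-evaluated each time the form is rendered, so the counts stay fresh. One difference from the original code: the submitted value will be the object's primary key rather than `bodystyle_name`.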
Lazy choices in Django form
569,696
16
2009-02-20T14:18:37Z
845,140
39
2009-05-10T11:21:16Z
[ "python", "django", "forms", "lazy-evaluation" ]
I have a Django my\_forms.py like this: ``` class CarSearchForm(forms.Form): # lots of fields like this bodystyle = forms.ChoiceField(choices=bodystyle_choices()) ``` Each choice is e.g. ("Saloon", "Saloon (15 cars)"). So the choices are computed by this function. ``` def bodystyle_choices(): return [(bodystyle.bodystyle_name, '%s (%s cars)' % (bodystyle.bodystyle_name, bodystyle.car_set.count())) for bodystyle in Bodystyle.objects.all()] ``` My problem is the choices functions are getting executed every time I merely import my\_forms.py. I think this is due to the way Django declares its fields: in the class but not in a class method. Which is fine but my views.py imports my\_forms.py so the choices lookups are done on every request no matter which view is used. I thought that maybe putting choices=bodystyle\_choices with no bracket would work, but I get: ``` 'function' object is not iterable ``` Obviously I can use caching and put the "import my\_forms" just in the view functions required but that doesn't change the main point: my choices need to be lazy!
You can use the "lazy" function :) ``` from django.utils.functional import lazy class CarSearchForm(forms.Form): # lots of fields like this bodystyle = forms.ChoiceField(choices=lazy(bodystyle_choices, tuple)()) ``` very nice util function !
How to tell for which object attribute pickle fails?
569,754
28
2009-02-20T14:36:09Z
570,910
14
2009-02-20T19:30:46Z
[ "python", "serialization" ]
When you pickle an object that has some attributes which cannot be pickled it will fail with a generic error message like: ``` PicklingError: Can't pickle <type 'instancemethod'>: attribute lookup __builtin__.instancemethod failed ``` Is there any way to tell which attribute caused the exception? I am using Python 2.5.2. Even though I understand in principle the root cause of the problem (e.g. in the above example having an instance method) it can still be very hard to *exactly pinpoint it*. In my case I already defined a custom `__getstate__` method, but forgot about a critical attribute. This happened in a complicated structure of nested objects, so it took me a while to identify the bad attribute. As requested, here is one simple example where pickle intentionally fails: ``` import cPickle as pickle import new class Test(object): pass def test_func(self): pass test = Test() pickle.dumps(test) print "now with instancemethod..." test.test_meth = new.instancemethod(test_func, test) pickle.dumps(test) ``` This is the output: ``` now with instancemethod... Traceback (most recent call last): File "/home/wilbert/develop/workspace/Playground/src/misc/picklefail.py", line 15, in <module> pickle.dumps(test) File "/home/wilbert/lib/python2.5/copy_reg.py", line 69, in _reduce_ex raise TypeError, "can't pickle %s objects" % base.__name__ TypeError: can't pickle instancemethod objects ``` Unfortunately there is no hint that the attribute `test_meth` causes the problem.
You could file a bug against Python for not including more helpful error messages. In the meantime, modify the `_reduce_ex()` function in `copy_reg.py`. ``` if base is self.__class__: print self # new raise TypeError, "can't pickle %s objects" % base.__name__ ``` Output: ``` <bound method ?.test_func of <__main__.Test object at 0xb7f4230c>> Traceback (most recent call last): File "nopickle.py", line 14, in ? pickle.dumps(test) File "/usr/lib/python2.4/copy_reg.py", line 69, in _reduce_ex raise TypeError, "can't pickle %s objects" % base.__name__ TypeError: can't pickle instancemethod objects ```
How to tell for which object attribute pickle fails?
569,754
28
2009-02-20T14:36:09Z
7,218,986
7
2011-08-28T04:04:50Z
[ "python", "serialization" ]
When you pickle an object that has some attributes which cannot be pickled it will fail with a generic error message like: ``` PicklingError: Can't pickle <type 'instancemethod'>: attribute lookup __builtin__.instancemethod failed ``` Is there any way to tell which attribute caused the exception? I am using Python 2.5.2. Even though I understand in principle the root cause of the problem (e.g. in the above example having an instance method) it can still be very hard to *exactly pinpoint it*. In my case I already defined a custom `__getstate__` method, but forgot about a critical attribute. This happened in a complicated structure of nested objects, so it took me a while to identify the bad attribute. As requested, here is one simple example where pickle intentionally fails: ``` import cPickle as pickle import new class Test(object): pass def test_func(self): pass test = Test() pickle.dumps(test) print "now with instancemethod..." test.test_meth = new.instancemethod(test_func, test) pickle.dumps(test) ``` This is the output: ``` now with instancemethod... Traceback (most recent call last): File "/home/wilbert/develop/workspace/Playground/src/misc/picklefail.py", line 15, in <module> pickle.dumps(test) File "/home/wilbert/lib/python2.5/copy_reg.py", line 69, in _reduce_ex raise TypeError, "can't pickle %s objects" % base.__name__ TypeError: can't pickle instancemethod objects ``` Unfortunately there is no hint that the attribute `test_meth` causes the problem.
I had the same problem as you, but my classes were a bit more complicated (i.e. a large tree of similar objects), so the printing didn't help much and I hacked together a helper function. It is not complete and is only intended for use with pickling protocol 2: it was enough so I could locate my problems. If you want to extend it to cover everything, the protocol is described at <http://www.python.org/dev/peps/pep-0307/>. I've made this post editable so everybody can update the code. ``` import pickle def get_pickling_errors(obj,seen=None): if seen == None: seen = [] try: state = obj.__getstate__() except AttributeError: return if state == None: return if isinstance(state,tuple): if not isinstance(state[0],dict): state=state[1] else: state=state[0].update(state[1]) result = {} for i in state: try: pickle.dumps(state[i],protocol=2) except pickle.PicklingError: if not state[i] in seen: seen.append(state[i]) result[i]=get_pickling_errors(state[i],seen) return result ``` An example of the usage, where K is the object that doesn't pickle: ``` >>> get_pickling_errors(K) {'_gen': {}, '_base': {'_gens': None}} ``` This means that the attribute K.\_gen is not picklable and the same goes for K.\_base.\_gens.
How do I add plain text info to forms in a formset in Django?
570,522
3
2009-02-20T17:30:00Z
571,334
8
2009-02-20T21:39:03Z
[ "python", "django", "django-forms" ]
I want to show a title and description from a db query in each form, but I don't want it to be in a charfield, I want it to be html-formatted text. sample template code: ``` {% for form, data in zipped_data %} <div class="row"> <div class="first_col"> <span class="title">{{ data.0 }}</span> <div class="desc"> {{ data.1|default:"None" }} </div> </div> {% for field in form %} <div class="fieldWrapper" style="float: left; "> {{ field.errors }} {{ field }} </div> {% endfor %} {% endfor %} ``` Is this the most idiomatic way of doing this? Or, is there a way to add text that will not be displayed inside of a textarea or text input to my model: ``` class ReportForm(forms.Form): comment = forms.CharField() ``` ?
Instead of zipping your forms with the additional data, you can override the constructor on your form and hold your title/description as *instance-level* member variables. This is a bit more object-oriented and learning how to do this will help you solve other problems down the road such as dynamic choice fields. ``` class MyForm (forms.Form): def __init__ (self, title, desc, *args, **kwargs): self.title = title self.desc = desc super (MyForm, self).__init__ (*args, **kwargs) # call base class ``` Then in your view code: ``` form = MyForm ('Title A', 'Description A') ``` Adjust accordingly if you need these values to come from the database. Then in your template, you access the instance variables just like you do anything else, e.g.: ``` <h1>{{ form.title }}</h1> <p>{{ form.desc }}</p> ``` From the way you phrased your question, I think you probably have some confusion around the way Django uses Python *class attributes* to provide a declarative form API versus *instance-level* attributes that you apply to individual instances of a class, in this case your form objects. * [Check out this link for a good discussion on the distinction](http://stackoverflow.com/questions/207000/python-difference-between-class-and-instance-attributes) * [And this one](http://stackoverflow.com/questions/206734/why-do-attribute-references-act-like-this-with-python-inheritance)
Detect when a Python module unloads
570,636
7
2009-02-20T18:00:23Z
570,704
12
2009-02-20T18:19:37Z
[ "python" ]
I have a module that uses ctypes to wrap some functionality from a static library into a class. When the module loads, it calls an initialize function in the static library. When the module is unloaded (presumably when the interpreter exits), there's an unload function in the library that I'd like to be called. How can I create this hook?
Use the [atexit](http://docs.python.org/library/atexit.html) module: ``` import mymodule import atexit # call mymodule.unload('param1', 'param2') when the interpreter exits: atexit.register(mymodule.unload, 'param1', 'param2') ``` Another simple example from the docs, using [`register`](http://docs.python.org/library/atexit.html#atexit.register) as a decorator: ``` import atexit @atexit.register def goodbye(): print "You are now leaving the Python sector." ```
Split tags in python
571,186
3
2009-02-20T20:50:04Z
571,238
7
2009-02-20T21:09:48Z
[ "python", "split", "template-engine" ]
I have a file that contains this: ``` <html> <head> <title> Hello! - {{ today }}</title> </head> <body> {{ runner_up }} avasd {{ blabla }} sdvas {{ oooo }} </body> </html> ``` What is the best or most Pythonic way to extract the `{{today}}`, `{{runner_up}}`, etc.? I know it can be done with splits/regular expressions, but I wondered if there were another way. PS: consider the data loaded in a variable called `thedata`. Edit: I think that the HTML example was bad, because it directed some commenters to BeautifulSoup. So, here is a new input data: ``` Fix grammatical or {{spelling}} errors. Clarify meaning without changing it. Correct minor {{mistakes}}. Add related resources or links. Always respect the original {{author}}. ``` Output: ``` spelling mistakes author ```
Mmkay, well here's a generator solution that seems to work well for me. You can also provide different open and close tags if you like. ``` def get_tags(s, open_delim ='{{', close_delim ='}}' ): while True: # Search for the next two delimiters in the source text start = s.find(open_delim) end = s.find(close_delim) # We found a non-empty match if -1 < start < end: # Skip the length of the open delimiter start += len(open_delim) # Spit out the tag yield s[start:end].strip() # Truncate string to start from last match s = s[end+len(close_delim):] else: return ``` Run against your target input like so: ``` # prints: today, runner_up, blabla, oooo for tag in get_tags(html): print tag ``` Edit: it also works against your new example :). In my obviously quick testing, it also seemed to handle malformed tags in a reasonable way, though I make no guarantees of its robustness!
triangle numbers in python
571,488
10
2009-02-20T22:27:36Z
571,526
27
2009-02-20T22:39:49Z
[ "python" ]
I'm trying to solve the problem: > What is the value of the first triangle number to have over five hundred divisors? *A triangle number is a number in the sequence of the sum of numbers i.e. 1+2+3+4+5...* I'm pretty sure that this is working code but I don't know because my computer is taking too long to calculate it. Does anybody have any idea of how to make the program a little faster? Thanks. ``` import math def main(): l = [] one = 0 a = 1 b = 2 while one == 0: a = a + b b += 1 for x in range(1, int(a/2 + 1)): if a % x == 0: l.append(x) if len(l) > 499: print a if __name__ == '__main__': main() ```
Hints: * what is the formula for `n`-th triangular number? * `n` and `n+1` have no common factors (except `1`). Question: given number of factors in `n` and `n+1` how to calculate number of factors in `n*(n+1)`? What about `n/2` and `(n+1)` (or `n` and `(n+1)/2`)? * if you know all prime factors of `n` how to calculate number of divisors of `n`? If you don't want to change your algorithm then you can make it faster by: * replace `l.append` by `factor_count += 1` * enumerate to `int(a**.5)` instead of `a/2` (use `factor_count += 2` in this case).
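Putting those hints together, one possible sketch (the function names and structure here are mine, not the original poster's; written in Python 3 syntax — on the asker's Python 2.5, replace `//` with `/` and `print(...)` with the `print` statement). It counts divisors from the prime factorization, and uses the fact that `n` and `n+1` are coprime, so the divisor count of the triangle number `T(n) = n*(n+1)/2` is the product of the divisor counts of two much smaller numbers:

```python
def count_divisors(n):
    # d(p1^a1 * ... * pk^ak) = (a1+1) * ... * (ak+1), found by trial division
    count = 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            exp = 0
            while n % p == 0:
                n //= p
                exp += 1
            count *= exp + 1
        p += 1
    if n > 1:
        count *= 2  # one leftover prime factor
    return count

def first_triangle_over(target):
    n = 1
    while True:
        # T(n) = n*(n+1)//2; n and n+1 are coprime, so halve the even one
        # and multiply the two (still coprime) divisor counts.
        if n % 2 == 0:
            d = count_divisors(n // 2) * count_divisors(n + 1)
        else:
            d = count_divisors(n) * count_divisors((n + 1) // 2)
        if d > target:
            return n * (n + 1) // 2
        n += 1

print(first_triangle_over(500))
```

This avoids ever factorizing the (large) triangle number itself and runs in well under a second, versus the original's trial division up to `a/2` for every candidate.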
Are there any declaration keywords in Python?
571,514
9
2009-02-20T22:34:34Z
571,546
14
2009-02-20T22:46:25Z
[ "python", "variables" ]
Are there any declaration keywords in python, like local, global, private, public etc. I know it's type free but how do you know if this statement: ``` x = 5; ``` * Creates a new variable. or * Sets an existing one.
An important thing to understand about Python is there are no variables, only "names". In your example, you have an object "5" and you are creating a name "x" that references the object "5". If later you do: ``` x = "Some string" ``` that is still perfectly valid. Name "x" is now pointing to object "Some string". It's not a conflict of types because the name itself doesn't have a type, only the object. If you try x = 5 + "Some string" you will get a type error because you can't add two incompatible types. In other words, it's not type free. Python objects are strongly typed. Here are some very good discussions about Python typing: * [Strong Typing vs. Strong Testing](http://mindview.net/WebLog/log-0025) * [Typing: Strong vs. Weak, Static vs. Dynamic](http://www.artima.com/weblogs/viewpost.jsp?thread=7590) **Edit**: to finish tying this in with your question, a name can reference an existing object or a new one. ``` # Create a new int object >>> x = 500 # Another name to same object >>> y = x # Create another new int object >>> x = 600 # y still references original object >>> print y 500 # This doesn't update x, it creates a new object and x becomes # a reference to the new int object (which is int because that # is the defined result of adding two int objects). >>> x = x + y >>> print x 1100 # Make original int object 500 go away >>> del y ``` **Edit 2**: The most complete discussion of the difference between mutable objects (that can be changed) and immutable objects (that cannot be changed) is in the official documentation of the [Python Data Model](http://docs.python.org/reference/datamodel.html).
Are there any declaration keywords in Python?
571,514
9
2009-02-20T22:34:34Z
571,610
11
2009-02-20T23:09:54Z
[ "python", "variables" ]
Are there any declaration keywords in python, like local, global, private, public etc. I know it's type free but how do you know if this statement: ``` x = 5; ``` * Creates a new variable. or * Sets an existing one.
It's worth mentioning that there is a global keyword, so if you want to refer to the global x: ``` x = 4 def foo(): x = 7 # x is local to your function ``` You need to do this: ``` x = 4 def foo(): global x # let python know you want to use the top-level x x = 7 ```
Are there any declaration keywords in Python?
571,514
9
2009-02-20T22:34:34Z
571,757
11
2009-02-21T00:21:28Z
[ "python", "variables" ]
Are there any declaration keywords in python, like local, global, private, public etc. I know it's type free but how do you know if this statement: ``` x = 5; ``` * Creates a new variable. or * Sets an existing one.
I really like the understanding that Van Gale is providing, but it doesn't really answer the question of "how do you know if this statement: creates a new variable or sets an existing variable?" If you want to know how to recognize it when looking at code, you simply look for a previous assignment. Avoid global variables, which is good practice anyway, and you'll be all set. Programmatically, you could try to reference the variable, and see if you get a `NameError` exception ``` try: x except NameError: # x doesn't exist, do something else: # x exists, do something else ``` I've never needed to do this... and I doubt you will really need to either. ## soapbox alert !!! Even though Python looks kinda loosey-goosey to someone who is used to having to type the class name (or type) over and over and over... it's actually exactly as strict as you want to make it. If you want strict types, you would do it explicitly: ``` assert(isinstance(variable, type)) ``` Decorators exist to do this in a very convenient way for function calls... Before long, you might just come to the conclusion that static type checking (at compile time) doesn't actually make your code that much better. There's only a small benefit for the cost of having to have redundant type information all over the place. I'm currently working in ActionScript, and typing things like: ``` var win:ThingPicker = PopUpManager.createPopUp(fEmotionsButton, ThingPicker, false) as ThingPicker; ``` which in Python would look like: ``` win = createPopup(parent, ThingPicker) ``` And I can see, looking at the ActionScript code, that there's simply no benefit to the static type-checking. The variable's lifetime is so short that I would have to be completely drunk to do the wrong thing with it... and have the compiler save me by pointing out a type error.
Adding elements to python generators
571,850
15
2009-02-21T01:11:36Z
571,888
13
2009-02-21T01:31:45Z
[ "python", "append", "generator" ]
Is it possible to append elements to a python generator? I'm currently trying to get all images from a set of disorganized folders and write them to a new directory. To get the files, I'm using os.walk() which returns a list of image files in a single directory. While I can make a generator out of this single list, I don't know how to combine all these lists into one single generator. Any help would be much appreciated. Related: * [Flattening a shallow list in python](http://stackoverflow.com/questions/406121/flattening-a-shallow-list-in-python)
You are looking for [`itertools.chain`](http://docs.python.org/library/itertools.html). It will combine multiple iterables into a single one, like this: ``` >>> for i in itertools.chain([1,2,3], [4,5,6]): ... print i ... 1 2 3 4 5 6 ```
Adding elements to python generators
571,850
15
2009-02-21T01:11:36Z
571,928
13
2009-02-21T01:54:52Z
[ "python", "append", "generator" ]
Is it possible to append elements to a python generator? I'm currently trying to get all images from a set of disorganized folders and write them to a new directory. To get the files, I'm using os.walk() which returns a list of image files in a single directory. While I can make a generator out of this single list, I don't know how to combine all these lists into one single generator. Any help would be much appreciated. Related: * [Flattening a shallow list in python](http://stackoverflow.com/questions/406121/flattening-a-shallow-list-in-python)
This should do it, where `directories` is your list of directories: ``` import os import itertools generators = [os.walk(d) for d in directories] for root, dirs, files in itertools.chain(*generators): print root, dirs, files ```
How do I create a Django form that displays a checkbox label to the right of the checkbox?
572,263
28
2009-02-21T04:18:17Z
2,045,308
30
2010-01-11T22:06:27Z
[ "python", "django", "checkbox" ]
When I define a Django form class similar to this: ``` class MyForm(forms.Form): check = forms.BooleanField(required=True, label="Check this") ``` It expands to HTML that looks like this: ``` <form action="." id="form" method=POST> <p><label for="check">Check this:</label> <input type="checkbox" name="check" id="check" /></p> <p><input type=submit value="Submit"></p> </form> ``` I would like the checkbox input element to have a label that follows the checkbox, not the other way around. Is there a way to convince Django to do that? **[Edit]** Thanks for the answer from Jonas - still, while it fixes the issue I asked about (checkbox labels are rendered to the right of the checkbox) it introduces a new problem (all widget labels are rendered to the right of their widgets...) I'd like to avoid overriding \_html\_output() since it's obviously not designed for it. The design I would come up with would be to implement a field html output method in the Field classes, override the one for the Boolean field and use that method in \_html\_output(). Sadly, the Django developers chose to go a different way, and I would like to work within the existing framework as much as possible. CSS sounds like a decent approach, except that I don't know enough CSS to pull this off or even to decide whether I like this approach or not. Besides, I prefer markup that still resembles the final output, at least in rendering order. Furthermore, since it can be reasonable to have more than one style sheet for any particular markup, doing this in CSS could mean having to do it multiple times for multiple styles, which pretty much makes CSS the wrong answer. **[Edit]** Seems like I'm answering my own question below. If anyone has a better idea how to do this, don't be shy.
Here's a solution I've come up with (Django v1.1): ``` {% load myfilters %} [...] {% for field in form %} [...] {% if field.field.widget|is_checkbox %} {{ field }}{{ field.label_tag }} {% else %} {{ field.label_tag }}{{ field }} {% endif %} [...] {% endfor %} ``` You'll need to create a custom template tag (in this example in a "myfilters.py" file) containing something like this: ``` from django import template from django.forms.fields import CheckboxInput register = template.Library() @register.filter(name='is_checkbox') def is_checkbox(value): return isinstance(value, CheckboxInput) ``` More info on custom template tags available [here](http://docs.djangoproject.com/en/dev/howto/custom-template-tags/). **Edit**: in the spirit of asker's own answer: Advantages: 1. No futzing with CSS. 2. The markup ends up looking the way it's supposed to. 3. I didn't hack Django internals. (but had to look at quite a bunch) 4. The template is nice, compact and idiomatic. 5. The filter code plays nice regardless of the exact values of the labels and input field names. Disadvantages: 1. There's probably something somewhere out there that does it better and faster. 2. Unlikely that the client will be willing to pay for all the time spent on this just to move the label to the right...
How do I create a Django form that displays a checkbox label to the right of the checkbox?
572,263
28
2009-02-21T04:18:17Z
2,308,693
13
2010-02-22T03:52:58Z
[ "python", "django", "checkbox" ]
When I define a Django form class similar to this: ``` class MyForm(forms.Form): check = forms.BooleanField(required=True, label="Check this") ``` It expands to HTML that looks like this: ``` <form action="." id="form" method=POST> <p><label for="check">Check this:</label> <input type="checkbox" name="check" id="check" /></p> <p><input type=submit value="Submit"></p> </form> ``` I would like the checkbox input element to have a label that follows the checkbox, not the other way around. Is there a way to convince Django to do that? **[Edit]** Thanks for the answer from Jonas - still, while it fixes the issue I asked about (checkbox labels are rendered to the right of the checkbox) it introduces a new problem (all widget labels are rendered to the right of their widgets...) I'd like to avoid overriding \_html\_output() since it's obviously not designed for it. The design I would come up with would be to implement a field html output method in the Field classes, override the one for the Boolean field and use that method in \_html\_output(). Sadly, the Django developers chose to go a different way, and I would like to work within the existing framework as much as possible. CSS sounds like a decent approach, except that I don't know enough CSS to pull this off or even to decide whether I like this approach or not. Besides, I prefer markup that still resembles the final output, at least in rendering order. Furthermore, since it can be reasonable to have more than one style sheet for any particular markup, doing this in CSS could mean having to do it multiple times for multiple styles, which pretty much makes CSS the wrong answer. **[Edit]** Seems like I'm answering my own question below. If anyone has a better idea how to do this, don't be shy.
I took the answer from romkyns and made it a little more general ``` def field_type(field, ftype): try: t = field.field.widget.__class__.__name__ return t.lower() == ftype except: pass return False ``` This way you can check the widget type directly with a string ``` {% if field|field_type:'checkboxinput' %} <label>{{ field }} {{ field.label }}</label> {% else %} <label> {{ field.label }} </label> {{ field }} {% endif %} ```
How do I create a Django form that displays a checkbox label to the right of the checkbox?
572,263
28
2009-02-21T04:18:17Z
15,308,315
10
2013-03-09T07:18:55Z
[ "python", "django", "checkbox" ]
When I define a Django form class similar to this: ``` class MyForm(forms.Form): check = forms.BooleanField(required=True, label="Check this") ``` It expands to HTML that looks like this: ``` <form action="." id="form" method="POST"> <p><label for="check">Check this:</label> <input type="checkbox" name="check" id="check" /></p> <p><input type="submit" value="Submit"></p> </form> ``` I would like the checkbox input element to have a label that follows the checkbox, not the other way around. Is there a way to convince Django to do that? **[Edit]** Thanks for the answer from Jonas - still, while it fixes the issue I asked about (checkbox labels are rendered to the right of the checkbox) it introduces a new problem (all widget labels are rendered to the right of their widgets...) I'd like to avoid overriding \_html\_output() since it's obviously not designed for it. The design I would come up with would be to implement a field html output method in the Field classes, override the one for the Boolean field and use that method in \_html\_output(). Sadly, the Django developers chose to go a different way, and I would like to work within the existing framework as much as possible. CSS sounds like a decent approach, except that I don't know enough CSS to pull this off or even to decide whether I like this approach or not. Besides, I prefer markup that still resembles the final output, at least in rendering order. Furthermore, since it can be reasonable to have more than one style sheet for any particular markup, doing this in CSS could mean having to do it multiple times for multiple styles, which pretty much makes CSS the wrong answer. **[Edit]** Seems like I'm answering my own question below. If anyone has a better idea how to do this, don't be shy.
All presented solutions involve template modifications, which are in general rather inefficient concerning performance. Here's a custom widget that does the job: ``` from django import forms from django.forms.fields import BooleanField from django.forms.util import flatatt from django.utils.encoding import force_text from django.utils.html import format_html from django.utils.translation import ugettext as _ class PrettyCheckboxWidget(forms.widgets.CheckboxInput): def render(self, name, value, attrs=None): final_attrs = self.build_attrs(attrs, type='checkbox', name=name) if self.check_test(value): final_attrs['checked'] = 'checked' if not (value is True or value is False or value is None or value == ''): final_attrs['value'] = force_text(value) if 'prettycheckbox-label' in final_attrs: label = _(final_attrs.pop('prettycheckbox-label')) else: label = '' return format_html('<label for="{0}"><input{1} /> {2}</label>', attrs['id'], flatatt(final_attrs), label) class PrettyCheckboxField(BooleanField): widget = PrettyCheckboxWidget def __init__(self, *args, **kwargs): if kwargs['label']: kwargs['widget'].attrs['prettycheckbox-label'] = kwargs['label'] kwargs['label'] = '' super(PrettyCheckboxField, self).__init__(*args, **kwargs) # usage in form class MyForm(forms.Form): my_boolean = PrettyCheckboxField(label=_('Some label'), widget=PrettyCheckboxWidget()) ``` I have *PrettyCheckboxWidget* and *PrettyCheckboxField* in an extra file, so they may be imported where needed. If you don't need translations, you can remove the ugettext parts. This code works on **Django 1.5** and is untested for lower versions. **Advantages:** * Highly performant, needs no template modifications * Easy to use as a custom widget **Disadvantages:** * "as\_table" renders the checkbox + label inside the second column * {{ field.label }} inside the template is empty. The label is instead bound to {{ field }} * More work than I planned on doing on a Saturday ;-)
CPython internal structures
572,780
3
2009-02-21T10:41:40Z
573,021
8
2009-02-21T13:43:26Z
[ "python", "google-app-engine", "data-structures", "internals", "cpython" ]
GAE has various limitations, one of which is the size of the biggest allocatable block of memory, amounting to 1Mb (now 10 times more, but that doesn't change the question). The limitation means that one cannot put more than some number of items in list() as CPython would try to allocate a contiguous memory block for element pointers. Having huge list()s can be considered bad programming practice, but even if no huge structure is created in the program itself, CPython maintains some behind the scenes. It appears that CPython is maintaining a single global list of objects or something. I.e. an application that has many small objects tends to allocate bigger and bigger single blocks of memory. The first idea was gc, and disabling it changes application behavior a bit but still some structures are maintained. The simplest short application that experiences the issue is: ``` a = b = [] number_of_lists = 8000000 for i in xrange(number_of_lists): b.append([]) b = b[0] ``` Can anyone enlighten me how to prevent CPython from allocating huge internal structures when having many objects in an application?
On a 32-bit system, each of the 8000000 lists you create will allocate 20 bytes for the list object itself, plus 16 bytes for a vector of list elements. So you are trying to allocate at least (20+16) \* 8000000 = 288000000 bytes, about 288 MB. And that's in the best case, if the system malloc only allocates exactly as much memory as requested. I calculated the size of the list object as follows: * 2 Pointers in the `PyListObject` structure itself (see [listobject.h](http://svn.python.org/view/python/branches/release26-maint/Include/listobject.h?view=markup)) * 1 Pointer and one `Py_ssize_t` for the `PyObject_HEAD` part of the list object (see [object.h](http://svn.python.org/view/python/branches/release26-maint/Include/object.h?view=markup)) * one `Py_ssize_t` for the `PyObject_VAR_HEAD` (also in object.h) The vector of list elements is slightly overallocated to avoid having to resize it at each append - see list\_resize in [listobject.c](http://svn.python.org/view/python/branches/release26-maint/Objects/listobject.c?view=markup). The sizes are 0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ... Thus, your one-element lists will allocate room for 4 elements. Your data structure is a somewhat pathological example, paying the price of a variable-sized list object without utilizing it - all your lists have only a single element. You could avoid the 12 bytes overallocation by using tuples instead of lists, but to further reduce the memory consumption, you will have to use a different data structure that uses fewer objects. It's hard to be more specific, as I don't know what you are trying to accomplish.
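The per-object arithmetic above can be sanity-checked from Python itself with `sys.getsizeof` (a quick sketch; exact byte counts vary by platform and CPython version, the answer's figures being for a 32-bit build, so only the relative sizes matter here):

```python
import sys

# Sizes vary across builds; the point is the relative overhead.
print(sys.getsizeof([]))       # list header alone, no element vector
print(sys.getsizeof([None]))   # header plus a one-slot element vector
print(sys.getsizeof((None,)))  # tuples store elements inline, no over-allocation
```

Note that appending to a list (rather than building it from a literal) triggers the over-allocation sequence quoted above, so one-element lists grown via `append` can be larger still.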
Python C-API Object Allocation‏
573,275
5
2009-02-21T16:01:45Z
573,424
10
2009-02-21T17:25:29Z
[ "c++", "python", "c", "python-3.x", "python-c-api" ]
I want to use the new and delete operators for creating and destroying my objects. The problem is Python seems to break it into several stages. tp\_new, tp\_init and tp\_alloc for creation and tp\_del, tp\_free and tp\_dealloc for destruction. However C++ just has new which allocates and fully constructs the object and delete which destructs and deallocates the object. Which of the Python tp\_\* methods do I need to provide and what must they do? Also I want to be able to create the object directly in C++ eg "PyObject \*obj = new MyExtensionObject(args);" Will I also need to overload the new operator in some way to support this? I also would like to be able to subclass my extension types in Python, is there anything special I need to do to support this? I'm using Python 3.0.1. EDIT: ok, tp\_init seems to make objects a bit too mutable for what I'm doing (eg take a Texture object: changing the contents after creation is fine, but changing fundamental aspects of it such as size, bit depth, etc will break lots of existing C++ stuff that assumes those sorts of things are fixed). If I don't implement it, will it simply stop people calling \_\_init\_\_ AFTER it's constructed (or at least ignore the call, like tuple does)? Or should I have some flag that throws an exception or something if tp\_init is called more than once on the same object? Apart from that I think I've got most of the rest sorted. 
``` extern "C" { //creation + destruction PyObject* global_alloc(PyTypeObject *type, Py_ssize_t items) { return (PyObject*)new char[type->tp_basicsize + items*type->tp_itemsize]; } void global_free(void *mem) { delete[] (char*)mem; } } template<class T> class ExtensionType { PyTypeObject *t; ExtensionType() { t = new PyTypeObject();//not sure on this one, what is the "correct" way to create an empty type object memset((void*)t, 0, sizeof(PyTypeObject)); static PyVarObject init = {PyObject_HEAD_INIT, 0}; *((PyVarObject*)t) = init; t->tp_basicsize = sizeof(T); t->tp_itemsize = 0; t->tp_name = "unknown"; t->tp_alloc = (allocfunc) global_alloc; t->tp_free = (freefunc) global_free; t->tp_new = (newfunc) T::obj_new; t->tp_dealloc = (destructor)T::obj_dealloc; ... } ...bunch of methods for changing stuff... PyObject *Finalise() { ... } }; template <class T> class PyObjectExtension : public PyObject { ... extern "C" static PyObject* obj_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds) { void *mem = (void*)subtype->tp_alloc(subtype, 0); return (PyObject*)new(mem) T(args, kwds); } extern "C" static void obj_dealloc(PyObject *obj) { ((T*)obj)->~T(); obj->ob_type->tp_free(obj);//most of the time this is global_free(obj) } ... }; class MyObject : public PyObjectExtension<MyObject> { public: static PyObject* InitType() { ExtensionType<MyObject> extType; ...sets other stuff... return extType.Finalise(); } ... }; ```
The documentation for these is at <http://docs.python.org/3.0/c-api/typeobj.html> and <http://docs.python.org/3.0/extending/newtypes.html> describes how to make your own type. tp\_alloc does the low-level memory allocation for the instance. This is equivalent to malloc(), plus initializing the refcnt to 1. Python has its own allocator, PyType\_GenericAlloc, but a type can implement a specialized allocator. tp\_new is the same as Python's \_\_new\_\_. It's usually used for immutable objects where the data is stored in the instance itself, as compared to a pointer to data. For example, strings and tuples store their data in the instance, instead of using a char \* or a PyTuple \*. For this case, tp\_new figures out how much memory is needed, based on the input parameters, and calls tp\_alloc to get the memory, then initializes the essential fields. tp\_new does not need to call tp\_alloc. It can for example return a cached object. tp\_init is the same as Python's \_\_init\_\_. Most of your initialization should be in this function. The distinction between \_\_new\_\_ and \_\_init\_\_ is called [two-stage initialization](http://groups.google.com/group/comp.lang.python/msg/e78840758484b94d), or [two-phase initialization](http://groups.google.com/group/comp.lang.python/browse_thread/thread/b129dc656f4d0c8b/e43ae56909c3c00a?lnk=gst&q=two-phase+init#e43ae56909c3c00a). You say "*c++ just has new*" but that's not correct. tp\_alloc corresponds to a custom arena allocator in C++, \_\_new\_\_ corresponds to a custom type allocator (a factory function), and \_\_init\_\_ is more like the constructor. That last link discusses more about the parallels between C++ and Python style. Also read <http://www.python.org/download/releases/2.2/descrintro/> for details about how \_\_new\_\_ and \_\_init\_\_ interact. You write that you want to "create the object directly in c++". 
That's rather difficult because at the least you'll have to convert any Python exceptions that occurred during object instantiation into a C++ exception. You might try looking at Boost::Python for some help with this task. Or you can use a two-phase initialization. ;)
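The two-stage split described above can be illustrated from pure Python, where `__new__` and `__init__` play the roles of `tp_new` and `tp_init` (a sketch of the concept, not the C API itself; the `Texture` class is made up to echo the question's example):

```python
class Texture:
    def __new__(cls, size):
        # tp_new equivalent: allocate (or reuse) an instance.
        print("allocating")
        return super().__new__(cls)

    def __init__(self, size):
        # tp_init equivalent: initialise an already-allocated instance.
        # Note it can be invoked again on an existing object, which is
        # exactly the mutability concern raised in the question's edit.
        self.size = size

t = Texture(64)
t.__init__(128)  # second-phase init re-run on the same instance
print(t.size)    # → 128
```

A type that wants tuple-like behaviour can make `tp_init` a no-op (or raise) after the first call, guarded by a flag set in `tp_new`.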
Python serialize lexical closures?
573,569
18
2009-02-21T19:03:12Z
574,789
12
2009-02-22T11:42:04Z
[ "python", "serialization", "closures" ]
Is there a way to serialize a lexical closure in Python using the standard library? pickle and marshal appear not to work with lexical closures. I don't really care about the details of binary vs. string serialization, etc., it just has to work. For example: ``` def foo(bar, baz) : def closure(waldo) : return baz * waldo return closure ``` I'd like to just be able to dump instances of closure to a file and read them back. Edit: One relatively obvious way that this could be solved is with some reflection hacks to convert lexical closures into class objects and vice-versa. One could then convert to classes, serialize, unserialize, convert back to closures. Heck, given that Python is duck typed, if you overloaded the function call operator of the class to make it look like a function, you wouldn't even really need to convert it back to a closure and the code using it wouldn't know the difference. If any Python reflection API gurus are out there, please speak up.
If you simply use a class with a `__call__` method to begin with, it should all work smoothly with `pickle`. ``` class foo(object): def __init__(self, bar, baz): self.baz = baz def __call__(self,waldo): return self.baz * waldo ``` On the other hand, a hack which converted a closure into an instance of a new class created at runtime would not work, because of the way `pickle` deals with classes and instances. `pickle` doesn't store classes; only a module name and class name. When reading back an instance or class it tries to import the module and find the required class in it. If you used a class created on-the-fly, you're out of luck.
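To make that concrete, here is a quick round trip through `pickle` (the class is repeated so the snippet stands alone; pickling works because `foo` is defined at module level):

```python
import pickle

class foo(object):  # the same callable class as above
    def __init__(self, bar, baz):
        self.baz = baz
    def __call__(self, waldo):
        return self.baz * waldo

original = foo('unused', 3)
restored = pickle.loads(pickle.dumps(original))
print(restored(4))  # → 12, same result as the original "closure"
```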
Python serialize lexical closures?
573,569
18
2009-02-21T19:03:12Z
4,124,868
16
2010-11-08T14:46:46Z
[ "python", "serialization", "closures" ]
Is there a way to serialize a lexical closure in Python using the standard library? pickle and marshal appear not to work with lexical closures. I don't really care about the details of binary vs. string serialization, etc., it just has to work. For example: ``` def foo(bar, baz) : def closure(waldo) : return baz * waldo return closure ``` I'd like to just be able to dump instances of closure to a file and read them back. Edit: One relatively obvious way that this could be solved is with some reflection hacks to convert lexical closures into class objects and vice-versa. One could then convert to classes, serialize, unserialize, convert back to closures. Heck, given that Python is duck typed, if you overloaded the function call operator of the class to make it look like a function, you wouldn't even really need to convert it back to a closure and the code using it wouldn't know the difference. If any Python reflection API gurus are out there, please speak up.
PiCloud has released an open-source (LGPL) pickler which can handle function closure and a whole lot more useful stuff. It can be used independently of their cloud computing infrastructure - it's just a normal pickler. The whole shebang is documented [here](http://docs.picloud.com/), and you can download the code via 'pip install cloud'. Anyway, it does what you want. Let's demonstrate that by pickling a closure: ``` import pickle from StringIO import StringIO import cloud # generate a closure def foo(bar, baz): def closure(waldo): return baz * waldo return closure closey = foo(3, 5) # use the picloud pickler to pickle to a string f = StringIO() pickler = cloud.serialization.cloudpickle.CloudPickler(f) pickler.dump(closey) #rewind the virtual file and reload f.seek(0) closey2 = pickle.load(f) ``` Now we have `closey`, the original closure, and `closey2`, the one that has been restored from a string serialisation. Let's test 'em. ``` >>> closey(4) 20 >>> closey2(4) 20 ``` Beautiful. The module is pure python—you can open it up and easily see what makes the magic work. (The answer is a lot of code.)
Django - Set Up A Scheduled Job?
573,618
299
2009-02-21T19:39:59Z
573,656
24
2009-02-21T20:04:40Z
[ "python", "django", "web-applications", "scheduled-tasks" ]
I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this. Does anyone know how to set this up? To clarify: I know I can set up a `cron` job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero). I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.
If you're using a standard POSIX OS, you use [cron](http://linux.die.net/man/8/cron). If you're using Windows, you use [at](http://technet.microsoft.com/en-us/library/cc755618.aspx). Write a Django management command to 1. Figure out what platform they're on. 2. Either execute the appropriate "AT" command for your users, **or** update the crontab for your users.
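A rough sketch of how such a management command might build the platform-appropriate scheduler invocation (the function name, job name and schedule are made up for illustration; nothing is actually executed or installed here):

```python
import platform

def scheduler_command(python_path, manage_py):
    """Return the scheduler line a management command could install/run."""
    job = "%s %s my_scheduled_job" % (python_path, manage_py)
    if platform.system() == "Windows":
        # Windows: the AT service (schtasks.exe on newer systems)
        return 'at 02:00 /every:M,T,W,Th,F "%s"' % job
    # POSIX: a crontab entry, here daily at 02:00
    return "0 2 * * * %s" % job

print(scheduler_command("/usr/bin/python", "/srv/app/manage.py"))
```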
Django - Set Up A Scheduled Job?
573,618
299
2009-02-21T19:39:59Z
573,659
219
2009-02-21T20:06:38Z
[ "python", "django", "web-applications", "scheduled-tasks" ]
I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this. Does anyone know how to set this up? To clarify: I know I can set up a `cron` job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero). I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.
One solution that I have employed is to do this: 1) Create a [custom management command](http://docs.djangoproject.com/en/dev/howto/custom-management-commands/#howto-custom-management-commands), e.g. ``` python manage.py my_cool_command ``` 2) Use `cron` (on Linux) or `at` (on Windows) to run my command at the required times. This is a simple solution that doesn't require installing a heavy AMQP stack. However there are nice advantages to using something like Celery, mentioned in the other answers. In particular, with Celery it is nice to not have to spread your application logic out into crontab files. However the cron solution works quite nicely for a small to medium sized application and where you don't want a lot of external dependencies. EDIT: In later versions of Windows the `at` command is deprecated (Windows 8, Server 2012 and above); you can use `schtasks.exe` for the same purpose.
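For step 2 on Linux, the crontab entry is a one-liner (paths and schedule are illustrative):

```
# run the management command every 15 minutes
*/15 * * * * /path/to/virtualenv/bin/python /path/to/project/manage.py my_cool_command
```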
Django - Set Up A Scheduled Job?
573,618
299
2009-02-21T19:39:59Z
573,685
12
2009-02-21T20:29:47Z
[ "python", "django", "web-applications", "scheduled-tasks" ]
I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this. Does anyone know how to set this up? To clarify: I know I can set up a `cron` job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero). I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.
Look at Django Poor Man's Cron, which is a Django app that makes use of spambots, search engine indexing robots and the like to run scheduled tasks at approximately regular intervals. See: <http://code.google.com/p/django-poormanscron/>
Django - Set Up A Scheduled Job?
573,618
299
2009-02-21T19:39:59Z
574,245
8
2009-02-22T03:18:07Z
[ "python", "django", "web-applications", "scheduled-tasks" ]
I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this. Does anyone know how to set this up? To clarify: I know I can set up a `cron` job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero). I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.
I personally use cron, but the [Jobs Scheduling](http://code.google.com/p/django-command-extensions/wiki/JobsScheduling) parts of [django-extensions](https://github.com/django-extensions/django-extensions) looks interesting.
Django - Set Up A Scheduled Job?
573,618
299
2009-02-21T19:39:59Z
621,538
20
2009-03-07T08:32:30Z
[ "python", "django", "web-applications", "scheduled-tasks" ]
I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this. Does anyone know how to set this up? To clarify: I know I can set up a `cron` job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero). I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.
Interesting new pluggable Django app: [django-chronograph](http://code.google.com/p/django-chronograph/) You only have to add one cron entry which acts as a timer, and you have a very nice Django admin interface into the scripts to run.
Django - Set Up A Scheduled Job?
573,618
299
2009-02-21T19:39:59Z
1,057,920
92
2009-06-29T11:56:47Z
[ "python", "django", "web-applications", "scheduled-tasks" ]
I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this. Does anyone know how to set this up? To clarify: I know I can set up a `cron` job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero). I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.
[Celery](http://celeryproject.org/) is a distributed task queue, built on AMQP (RabbitMQ). It also handles periodic tasks in a cron-like fashion. Depending on your app, it might be worth a gander.
Django - Set Up A Scheduled Job?
573,618
299
2009-02-21T19:39:59Z
8,575,485
7
2011-12-20T12:30:42Z
[ "python", "django", "web-applications", "scheduled-tasks" ]
I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this. Does anyone know how to set this up? To clarify: I know I can set up a `cron` job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero). I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.
Brian Neal's suggestion of running management commands via cron works well, but if you're looking for something a little more robust (yet not as elaborate as Celery) I'd look into a library like [Kronos](https://github.com/jgorset/django-kronos): ``` # app/cron.py import kronos @kronos.register('0 * * * *') def task(): pass ```
Django - Set Up A Scheduled Job?
573,618
299
2009-02-21T19:39:59Z
9,071,268
26
2012-01-30T21:47:06Z
[ "python", "django", "web-applications", "scheduled-tasks" ]
I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this. Does anyone know how to set this up? To clarify: I know I can set up a `cron` job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero). I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.
We've open-sourced what I think is a structured app that Brian's solution above alludes to. Would love any / all feedback! <https://github.com/tivix/django-cron> It comes with one management command: ``` ./manage.py runcrons ``` That does the job. Each cron is modeled as a class (so it's all OO) and each cron runs at a different frequency and we make sure the same cron type doesn't run in parallel (in case crons themselves take longer to run than their frequency allows!) Thanks!
Django - Set Up A Scheduled Job?
573,618
299
2009-02-21T19:39:59Z
9,995,875
7
2012-04-03T14:54:18Z
[ "python", "django", "web-applications", "scheduled-tasks" ]
I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this. Does anyone know how to set this up? To clarify: I know I can set up a `cron` job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero). I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.
RabbitMQ and Celery have more features and task handling capabilities than Cron. If task failure isn't an issue, and you think you will handle broken tasks in the next call, then Cron is sufficient. Celery & [AMQP](https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol) will let you handle the broken task, and it will get executed again by another worker (Celery workers listen for the next task to work on), until the task's `max_retries` attribute is reached. You can even invoke tasks on failure, like logging the failure, or sending an email to the admin once the `max_retries` has been reached. And you can distribute Celery and AMQP servers when you need to scale your application.
Docs for the internals of CPython Implementation
574,004
8
2009-02-22T00:16:52Z
574,393
8
2009-02-22T05:14:50Z
[ "python", "cpython" ]
I am currently in the process of making an embedded system port of the CPython 3.0 Python interpreter and I'm particularly interested in any references or documentation that provides details about the design and structure of code for Release 3.0 or even about any of the 2.x releases. One useful document I have found so far is this [informational PEP](http://www.python.org/dev/peps/pep-0339/) on the implementation - which is a good overview - but is still pretty high level. Hoping to come across something that gives [much] more detail on more of the modules or perhaps even covers something about porting considerations.
There's the documentation for the C API, which is essentially the API for the internals of Python. It won't cover porting details, though. The code itself is fairly well documented. You might try reading in and around the area you'll need to modify.
How do YOU deploy your WSGI application? (and why it is the best way)
574,068
42
2009-02-22T00:58:37Z
574,135
25
2009-02-22T01:39:56Z
[ "python", "deployment", "wsgi" ]
Deploying a WSGI application. There are many ways to skin this cat. I am currently using apache2 with mod-wsgi, but I can see some potential problems with this. So how can it be done? 1. Apache Mod-wsgi (the other mod-wsgi's seem to not be worth it) 2. Pure Python web server eg paste, cherrypy, Spawning, Twisted.web 3. as 2 but with reverse proxy from nginx, apache2 etc, with good static file handling 4. Conversion to other protocol such as FCGI with a bridge (eg Flup) and running in a conventional web server. More? I want to know how you do it, and why it is the best way to do it. I would absolutely **love** you to bore me with details about the whats and the whys, application specific stuff, etc. I will upvote any non-insane answer.
As always: It depends ;-) When I don't need any Apache features I go with a pure Python webserver like paste etc. Which one exactly depends on your application I guess and can be decided by doing some benchmarks. I always wanted to do some but never got around to it. I guess Spawning might have some advantages in using non-blocking IO out of the box but I sometimes had problems with it because of the patching it's doing. You are always free to put a Varnish in front as well of course. If Apache is required I usually go with solution 3 so that I can keep processes separate. You can also more easily move processes to other servers etc. I simply like to keep things separate. For static files I am using right now a separate server for a project which just serves static images/css/js. I am using lighttpd as webserver which has great performance (in this case I don't have a Varnish in front anymore). Another useful tool is [supervisord](http://supervisord.org/) for controlling and monitoring these services. I am additionally using [buildout](http://pypi.python.org/pypi/zc.buildout) for managing my deployments and development sandboxes (together with [virtualenv](http://pypi.python.org/pypi/virtualenv)).
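For context, every option in the question ultimately hosts the same artifact, a WSGI callable. A minimal one, runnable as a pure-Python server (option 2) with nothing but the stdlib, could look like this (names are illustrative):

```python
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # the WSGI callable that Apache/mod_wsgi, paste, FCGI bridges etc. all host
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from WSGI\n"]

# Option 2 from the question: serve it with the stdlib's pure-Python server.
# In setups 3/4 a reverse proxy or FCGI bridge sits in front of (or replaces) this.
# make_server("localhost", 8000, application).serve_forever()
```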
How do YOU deploy your WSGI application? (and why it is the best way)
574,068
42
2009-02-22T00:58:37Z
575,737
13
2009-02-22T20:36:33Z
[ "python", "deployment", "wsgi" ]
Deploying a WSGI application. There are many ways to skin this cat. I am currently using apache2 with mod-wsgi, but I can see some potential problems with this. So how can it be done? 1. Apache Mod-wsgi (the other mod-wsgi's seem to not be worth it) 2. Pure Python web server eg paste, cherrypy, Spawning, Twisted.web 3. as 2 but with reverse proxy from nginx, apache2 etc, with good static file handling 4. Conversion to other protocol such as FCGI with a bridge (eg Flup) and running in a conventional web server. More? I want to know how you do it, and why it is the best way to do it. I would absolutely **love** you to bore me with details about the whats and the whys, application specific stuff, etc. I will upvote any non-insane answer.
The absolute easiest thing to deploy is CherryPy. Your web application can also become a standalone webserver. CherryPy is also a fairly fast server considering that it's written in pure Python. With that said, it's not Apache. Thus, I find that CherryPy is a good choice for lower-volume webapps.

Other than that, I don't think there's any right or wrong answer to this question. Lots of high-volume websites have been built on the technologies you talk about, and I don't think you can go too wrong with any of those ways (although I will say that I agree with mod-wsgi not being up to snuff on every non-apache server).

Also, I've been using [isapi\_wsgi](http://code.google.com/p/isapi-wsgi/) to deploy python apps under IIS. It's a less-than-ideal setup, but it works, and you don't always get to choose otherwise when you live in a windows-centric world.
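To make the "standalone webserver" idea concrete, here is a minimal sketch using the stdlib's wsgiref server rather than CherryPy itself (CherryPy's built-in HTTP server plays the same role for a real app). The app and the one-request demo are illustrative only:

```python
import threading
import urllib.request
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # A plain WSGI callable: any WSGI-capable server (CherryPy,
    # mod_wsgi, wsgiref, ...) can host this same function unchanged.
    body = b"Hello from a standalone Python web server"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Bind to an ephemeral loopback port and serve a single request in the
# background, just to show the server working end to end. A real
# deployment would call server.serve_forever() instead.
server = make_server("127.0.0.1", 0, app)
port = server.server_address[1]
threading.Thread(target=server.handle_request, daemon=True).start()
reply = urllib.request.urlopen("http://127.0.0.1:%d/" % port).read()
server.server_close()
```

The point is that the application is just a callable; the choice of server that wraps it is a deployment decision, not a code change.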
How do YOU deploy your WSGI application? (and why it is the best way)
574,068
42
2009-02-22T00:58:37Z
622,597
13
2009-03-07T22:15:27Z
[ "python", "deployment", "wsgi" ]
Deploying a WSGI application. There are many ways to skin this cat. I am currently using apache2 with mod-wsgi, but I can see some potential problems with this.

So how can it be done?

1. Apache Mod-wsgi (the other mod-wsgi's seem to not be worth it)
2. Pure Python web server eg paste, cherrypy, Spawning, Twisted.web
3. as 2 but with reverse proxy from nginx, apache2 etc, with good static file handling
4. Conversion to other protocol such as FCGI with a bridge (eg Flup) and running in a conventional web server.

More?

I want to know how you do it, and why it is the best way to do it. I would absolutely **love** you to bore me with details about the whats and the whys, application specific stuff, etc. I will upvote any non-insane answer.
> I would absolutely love you to bore me with details about the whats and the whys, application specific stuff, etc

Ho. Well, you asked for it!

Like Daniel I personally use Apache with mod\_wsgi. It is still new enough that deploying it in some environments can be a struggle, but if you're compiling everything yourself anyway it's pretty easy. I've found it very reliable, even the early versions. Props to Graham Dumpleton for keeping control of it pretty much by himself.

However for me it's essential that WSGI applications work across all possible servers. There is a bit of a hole at the moment in this area: you have the WSGI standard telling you what a WSGI callable (application) does, but there's no standardisation of deployment; no single way to tell the web server where to find the application. There's also no standardised way to make the server reload the application when you've updated it.

The approach I've adopted is to put:

* all application logic in modules/packages, preferably in classes
* all website-specific customisations to be done by subclassing the main Application and overriding members
* all server-specific deployment settings (eg. database connection factory, mail relay settings) as class \_\_init\_\_() parameters
* one top-level ‘application.py’ script that initialises the Application class with the correct deployment settings for the current server, then runs the application in such a way that it can work deployed as a CGI script, a mod\_wsgi WSGIScriptAlias (or Passenger, which apparently works the same way), or can be interacted with from the command line
* a helper module that takes care of the above deployment issues and allows the application to be reloaded when the modules the application relies on change

So what application.py looks like in the end is something like:

```
#!/usr/bin/env python

import os.path
basedir= os.path.dirname(__file__)

import MySQLdb
def dbfactory():
    return MySQLdb.connect(db= 'myappdb', unix_socket= '/var/mysql/socket', user= 'u', passwd= 'p')

def appfactory():
    import myapplication
    return myapplication.Application(basedir, dbfactory, debug= False)

import wsgiwrap
ismain= __name__=='__main__'
libdir= os.path.join(basedir, 'system', 'lib')
application= wsgiwrap.Wrapper(appfactory, libdir, 10, ismain)
```

The wsgiwrap.Wrapper checks every 10 seconds to see if any of the application modules in libdir have been updated, and if so does some nasty sys.modules magic to unload them all reliably. Then appfactory() will be called again to get a new instance of the updated application.

(You can also use command line tools such as

```
./application.py setup
./application.py daemon
```

to run any setup and background-tasks hooks provided by the application callable — a bit like how distutils works. It also responds to start/stop/restart like an init script.)

Another trick I use is to put the deployment settings for multiple servers (development/testing/production) in the same application.py script, and sniff ‘socket.gethostname()’ to decide which server-specific bunch of settings to use.

At some point I might package wsgiwrap up and release it properly (possibly under a different name). In the meantime if you're interested, you can see a dogfood-development version at <http://www.doxdesk.com/file/software/py/v/wsgiwrap-0.5.py>.
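The reload behaviour described in that answer (wsgiwrap polling libdir every 10 seconds) can be sketched with a simple mtime check. This helper is my illustration, not code from wsgiwrap itself; the real module also has to do the tricky sys.modules unloading:

```python
import os

def modules_changed(libdir, since):
    """Return True if any .py file under libdir was modified after
    the timestamp `since` (seconds since the epoch).

    A reloading wrapper like the one described above would call this
    periodically and, on True, unload the application modules and call
    appfactory() again to build a fresh application instance.
    """
    for dirpath, dirnames, filenames in os.walk(libdir):
        for name in filenames:
            if name.endswith(".py"):
                path = os.path.join(dirpath, name)
                if os.path.getmtime(path) > since:
                    return True
    return False
```

Polling mtimes like this is crude but portable; the hard part, as the answer notes, is reliably unloading the stale modules afterwards.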
How to determine number of files on a drive with Python?
574,236
5
2009-02-22T03:12:46Z
574,270
7
2009-02-22T03:37:47Z
[ "python", "osx", "filesystems", "hard-drive" ]
I have been trying to figure out how to retrieve (quickly) the number of files on a given HFS+ drive with python.

I have been playing with os.statvfs and such, but can't quite get anything (that seems helpful to me).

Any ideas?

**Edit:** Let me be a bit more specific. =]

I am writing a timemachine-like wrapper around rsync for various reasons, and would like a very fast estimate (does not have to be perfect) of the number of files on the drive rsync is going to scan. This way I can watch the progress from rsync (if you call it like `rsync -ax --progress`, or with the `-P` option) as it builds its initial file list, and report a percentage and/or ETA back to the user.

This is completely separate from the actual backup, which is no problem tracking progress. But with the drives I am working on with several million files, it means the user is watching a counter of the number of files go up with no upper bound for a few minutes.

I have tried playing with os.statvfs with exactly the method described in one of the answers so far, but the results do not make sense to me.

```
>>> import os
>>> os.statvfs('/').f_files - os.statvfs('/').f_ffree
64171205L
```

The more portable way gives me around 1.1 million on this machine, which is the same as every other indicator I have seen on this machine, including rsync running its preparations:

```
>>> sum(len(filenames) for path, dirnames, filenames in os.walk("/"))
1084224
```

Note that the first method is instantaneous, while the second one made me come back 15 minutes later to update because it took just that long to run.

Does anyone know of a similar way to get this number, or what is wrong with how I am treating/interpreting the os.statvfs numbers?
The right answer for your purpose is to live without a progress bar once, store the number rsync came up with, and assume you have the same number of files as last time for each successive backup.

I didn't believe it, but this seems to work on Linux:

```
os.statvfs('/').f_files - os.statvfs('/').f_ffree
```

This computes the total number of file nodes (inodes) minus the free file nodes. It seems to show results for the whole filesystem even if you point it at another directory. os.statvfs is implemented on Unix only.

OK, I admit, I didn't actually let the 'slow, correct' way finish before marveling at the fast method. Just a few drawbacks: I suspect `.f_files` would also count directories, and the result is probably totally wrong. It might work to count the files the slow way, once, and adjust the result from the 'fast' way?

The portable way:

```
import os
files = sum(len(filenames) for path, dirnames, filenames in os.walk("/"))
```

`os.walk` returns a 3-tuple (dirpath, dirnames, filenames) for each directory in the filesystem starting at the given path. This will probably take a long time for `"/"`, but you knew that already.

The easy way: Let's face it, nobody knows or cares how many files they really have, it's a humdrum and nugatory statistic. You can add this cool 'number of files' feature to your program with this code:

```
import random
num_files = random.randint(69000, 4000000)
```

Let us know if any of these methods works for you.

See also <http://stackoverflow.com/questions/577761/how-do-i-prevent-pythons-os-walk-from-walking-across-mount-points>
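Whichever estimate is used (the statvfs inode figure, or the count stored from the previous rsync run), turning it into a progress percentage for the questioner's use case is then trivial. This helper is hypothetical, clamped so that a too-low estimate never reports completion early:

```python
def progress_percent(scanned, estimated_total):
    # Rough progress for a file scan, given a possibly-wrong estimate
    # of the total file count. Clamp to 99 so that overshooting the
    # estimate never shows a finished bar before the scan is done.
    if estimated_total <= 0:
        return 0
    return min(99, int(100 * scanned / estimated_total))
```

With an estimate of 1,000,000 files, having scanned 250,000 reports 25%; overshooting the estimate just pins the bar at 99% until rsync actually finishes.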
Has anyone used SciPy with IronPython?
574,604
16
2009-02-22T08:57:53Z
574,623
8
2009-02-22T09:13:41Z
[ "python", "scipy", "ironpython", "python.net" ]
I've been able to use the standard Python modules from IronPython, but I haven't gotten SciPy to work yet. Has anyone been able to use SciPy from IronPython? What did you have to do to make it work? Update: See [Numerical computing in IronPython with Ironclad](http://www.johndcook.com/blog/2009/03/19/ironclad-ironpytho/) Update: Microsoft is [partnering with Enthought](http://www.johndcook.com/blog/2010/07/01/scipy-and-numpy-for-net/) to make SciPy for .NET.
Anything with components written in C (for example NumPy, which is a component of SciPy) will not work on IronPython as the external language interface works differently. Any C language component will probably not work unless it has been explicitly ported to work with IronPython. You might have to dig into the individual modules and check to see which ones work or are pure python and find out which if any of the C-based ones have been ported yet.
Has anyone used SciPy with IronPython?
574,604
16
2009-02-22T08:57:53Z
574,919
12
2009-02-22T13:21:58Z
[ "python", "scipy", "ironpython", "python.net" ]
I've been able to use the standard Python modules from IronPython, but I haven't gotten SciPy to work yet. Has anyone been able to use SciPy from IronPython? What did you have to do to make it work? Update: See [Numerical computing in IronPython with Ironclad](http://www.johndcook.com/blog/2009/03/19/ironclad-ironpytho/) Update: Microsoft is [partnering with Enthought](http://www.johndcook.com/blog/2010/07/01/scipy-and-numpy-for-net/) to make SciPy for .NET.
Some of my workmates are working on [Ironclad](http://code.google.com/p/ironclad/), a project that will make extension modules for CPython work in IronPython. It's still in development, but parts of numpy, scipy and some other modules already work. You should try it out to see whether the parts of scipy you need are supported. It's an open-source project, so if you're interested you could even help. In any case, some feedback about what you're trying to do and what parts we should look at next is helpful too.
Generating unique, ordered Pythagorean triplets
575,117
17
2009-02-22T16:00:34Z
575,134
12
2009-02-22T16:09:18Z
[ "python", "math" ]
This is a program I wrote to calculate Pythagorean triplets. When I run the program it prints each set of triplets twice because of the if statement. Is there any way I can tell the program to only print a new set of triplets once? Thanks.

```
import math

def main():
    for x in range (1, 1000):
        for y in range (1, 1000):
            for z in range(1, 1000):
                if x*x == y*y + z*z:
                    print y, z, x
                    print '-'*50

if __name__ == '__main__':
    main()
```
You should define x < y < z.

```
for x in range (1, 1000):
    for y in range (x + 1, 1000):
        for z in range(y + 1, 1000):
```

Another good optimization would be to only use x and y and calculate zsqr = x \* x + y \* y. If zsqr is a square number (or z = sqrt(zsqr) is a whole number), it is a triplet, else not. That way, you need only two loops instead of three (for your example, that's about 1000 times faster).
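The two-loop idea in the second paragraph might look like this in modern Python (my sketch, using math.isqrt for an exact integer square root; the Python 2 code of the era would use int(math.sqrt(...)) instead):

```python
import math

def triplets(limit):
    """Pythagorean triplets (x, y, z) with x < y < z <= limit, using
    only two loops: test whether x*x + y*y is a perfect square."""
    result = []
    for x in range(1, limit):
        for y in range(x + 1, limit):
            zsqr = x * x + y * y
            z = math.isqrt(zsqr)  # exact integer square root
            if z * z == zsqr and z <= limit:
                result.append((x, y, z))
    return result
```

triplets(20) yields (3, 4, 5), (5, 12, 13), (6, 8, 10) and so on, each triplet only once.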
Generating unique, ordered Pythagorean triplets
575,117
17
2009-02-22T16:00:34Z
575,849
7
2009-02-22T21:31:53Z
[ "python", "math" ]
This is a program I wrote to calculate Pythagorean triplets. When I run the program it prints each set of triplets twice because of the if statement. Is there any way I can tell the program to only print a new set of triplets once? Thanks.

```
import math

def main():
    for x in range (1, 1000):
        for y in range (1, 1000):
            for z in range(1, 1000):
                if x*x == y*y + z*z:
                    print y, z, x
                    print '-'*50

if __name__ == '__main__':
    main()
```
Algorithms can be tuned for speed, memory usage, simplicity, and other things. Here is a `pythagore_triplets` algorithm tuned for speed, at the cost of memory usage and simplicity. If all you want is speed, this could be the way to go. Calculation of `list(pythagore_triplets(10000))` takes 40 seconds on my computer, versus 63 seconds for ΤΖΩΤΖΙΟΥ's algorithm, and possibly days of calculation for Tafkas's algorithm (and all other algorithms which use 3 embedded loops instead of just 2).

```
def pythagore_triplets(n=1000):
    maxn=int(n*(2**0.5))+1 # max int whose square may be the sum of two squares
    squares=[x*x for x in xrange(maxn+1)] # calculate all the squares once
    reverse_squares=dict([(squares[i],i) for i in xrange(maxn+1)]) # x*x=>x
    for x in xrange(1,n):
        x2 = squares[x]
        for y in xrange(x,n+1):
            y2 = squares[y]
            z = reverse_squares.get(x2+y2)
            if z != None:
                yield x,y,z
```

```
>>> print list(pythagore_triplets(20))
[(3, 4, 5), (5, 12, 13), (6, 8, 10), (8, 15, 17), (9, 12, 15), (12, 16, 20)]
```

Note that if you are going to calculate the first billion triplets, then this algorithm will crash before it even starts, because of an out of memory error. So ΤΖΩΤΖΙΟΥ's algorithm is probably a safer choice for high values of n.

BTW, here is Tafkas's algorithm, translated into python for the purpose of my performance tests. Its flaw is to require 3 loops instead of 2.

```
def gcd(a, b):
    while b != 0:
        t = b
        b = a%b
        a = t
    return a

def find_triple(upper_boundary=1000):
    for c in xrange(5,upper_boundary+1):
        for b in xrange(4,c):
            for a in xrange(3,b):
                if (a*a + b*b == c*c and gcd(a,b) == 1):
                    yield a,b,c
```
Generating unique, ordered Pythagorean triplets
575,117
17
2009-02-22T16:00:34Z
576,405
59
2009-02-23T03:00:32Z
[ "python", "math" ]
This is a program I wrote to calculate Pythagorean triplets. When I run the program it prints each set of triplets twice because of the if statement. Is there any way I can tell the program to only print a new set of triplets once? Thanks. ``` import math def main(): for x in range (1, 1000): for y in range (1, 1000): for z in range(1, 1000): if x*x == y*y + z*z: print y, z, x print '-'*50 if __name__ == '__main__': main() ```
Pythagorean Triples make a good example for claiming "**`for` loops considered harmful**", because `for` loops seduce us into thinking about counting, often the most irrelevant part of a task.

(I'm going to stick with pseudo-code to avoid language biases, and to keep the pseudo-code streamlined, I'll not optimize away multiple calculations of e.g. `x * x` and `y * y`.)

**Version 1**:

```
for x in 1..N {
    for y in 1..N {
        for z in 1..N {
            if x * x + y * y == z * z then {
                // use x, y, z
            }
        }
    }
}
```

is the worst solution. It generates duplicates, and traverses parts of the space that aren't useful (e.g. whenever `z < y`). Its time complexity is cubic on `N`.

**Version 2**, the first improvement, comes from requiring `x < y < z` to hold, as in:

```
for x in 1..N {
    for y in x+1..N {
        for z in y+1..N {
            if x * x + y * y == z * z then {
                // use x, y, z
            }
        }
    }
}
```

which reduces run time and eliminates duplicated solutions. However, it is still cubic on `N`; the improvement is just a reduction of the co-efficient of `N`-cubed.

It is pointless to continue examining increasing values of `z` after `z * z < x * x + y * y` no longer holds. That fact motivates **Version 3**, the first step away from brute-force iteration over `z`:

```
for x in 1..N {
    for y in x+1..N {
        z = y + 1
        while z * z < x * x + y * y {
            z = z + 1
        }
        if z * z == x * x + y * y and z <= N then {
            // use x, y, z
        }
    }
}
```

For `N` of 1000, this is about 5 times faster than Version 2, but it is *still* cubic on `N`.

The next insight is that `x` and `y` are the only independent variables; `z` depends on their values, and the last `z` value considered for the previous value of `y` is a good *starting* search value for the next value of `y`. That leads to **Version 4**:

```
for x in 1..N {
    y = x+1
    z = y+1
    while z <= N {
        while z * z < x * x + y * y {
            z = z + 1
        }
        if z * z == x * x + y * y and z <= N then {
            // use x, y, z
        }
        y = y + 1
    }
}
```

which allows `y` and `z` to "sweep" the values above `x` only once. Not only is it over 100 times faster for `N` of 1000, it is quadratic on `N`, so the speedup increases as `N` grows.

I've encountered this kind of improvement often enough to be mistrustful of "counting loops" for any but the most trivial uses (e.g. traversing an array).

**Update:** Apparently I should have pointed out a few things about V4 that are easy to overlook.

1. **Both** of the `while` loops are controlled by the value of `z` (one directly, the other indirectly through the square of `z`). The inner `while` is actually speeding up the outer `while`, rather than being orthogonal to it. *It's important to look at what the loops are doing, not merely to count how many loops there are.*
2. All of the calculations in V4 are strictly integer arithmetic. Conversion to/from floating-point, as well as floating-point calculations, are costly by comparison.
3. V4 runs in constant memory, requiring only three integer variables. There are no arrays or hash tables to allocate and initialize (and, potentially, to cause an out-of-memory error).
4. The original question allowed all of `x`, `y`, and `z` to vary over the same range. V1..V4 followed that pattern.

Below is a not-very-scientific set of timings (using Java under Eclipse on my older laptop with other stuff running...), where the "use x, y, z" was implemented by instantiating a Triple object with the three values and putting it in an ArrayList. (For these runs, `N` was set to 10,000, which produced 12,471 triples in each case.)

```
Version 4:           46 sec.
using square root:  134 sec.
array and map:      400 sec.
```

The "array and map" algorithm is *essentially*:

```
squares = array of i*i for i in 1 .. N
roots = map of i*i -> i for i in 1 .. N
for x in 1 .. N
    for y in x+1 .. N
        z = roots[squares[x] + squares[y]]
        if z exists use x, y, z
```

The "using square root" algorithm is *essentially*:

```
for x in 1 .. N
    for y in x+1 .. N
        z = (int) sqrt(x * x + y * y)
        if z * z == x * x + y * y then use x, y, z
```

The actual code for V4 is:

```
public Collection<Triple> byBetterWhileLoop() {
    Collection<Triple> result = new ArrayList<Triple>(limit);
    for (int x = 1; x < limit; ++x) {
        int xx = x * x;
        int y = x + 1;
        int z = y + 1;
        while (z <= limit) {
            int zz = xx + y * y;
            while (z * z < zz) {++z;}
            if (z * z == zz && z <= limit) {
                result.add(new Triple(x, y, z));
            }
            ++y;
        }
    }
    return result;
}
```

Note that `x * x` *is* calculated in the outer loop (although I didn't bother to cache `z * z`); similar optimizations are done in the other variations. I'll be glad to provide the Java source code on request for the other variations I timed, in case I've mis-implemented anything.
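For completeness, here is my own direct Python translation of the V4 algorithm described above (not from the answer); it keeps the strictly-integer, constant-memory character of the original:

```python
def triples_v4(n):
    """Pythagorean triples with x < y < z <= n, Version 4 style:
    y and z sweep upward together for each x, so each z value above x
    is visited at most once -- quadratic in n, constant memory."""
    result = []
    for x in range(1, n):
        xx = x * x  # computed once per outer loop, as in the Java code
        y = x + 1
        z = y + 1
        while z <= n:
            zz = xx + y * y
            while z * z < zz:
                z += 1
            if z * z == zz and z <= n:
                result.append((x, y, z))
            y += 1
    return result
```

A quick sanity check is to compare it against the naive triple-loop Version 2 for a small bound; both should produce exactly the six triples below 20.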