| title (string) | question_id (int64) | question_score (int64) | question_date | answer_id (int64) | answer_score (int64) | answer_date | tags (list) | question_body_md (string) | answer_body_md (string) |
|---|---|---|---|---|---|---|---|---|---|
Generating unique, ordered Pythagorean triplets | 575,117 | 17 | 2009-02-22T16:00:34Z | 578,789 | 11 | 2009-02-23T18:43:00Z | [
"python",
"math"
] | This is a program I wrote to calculate Pythagorean triplets. When I run the program it prints each set of triplets twice because of the if statement. Is there any way I can tell the program to only print a new set of triplets once? Thanks.
```
import math
def main():
    for x in range(1, 1000):
        for y in range(1, 1000):
            for z in range(1, 1000):
                if x*x == y*y + z*z:
                    print y, z, x
                    print '-'*50

if __name__ == '__main__':
    main()
``` | The previously listed algorithms for generating [Pythagorean triplets](http://en.wikipedia.org/wiki/Pythagorean_triplets) are all modifications of the naive approach derived from the basic relationship `a^2 + b^2 = c^2` where `(a, b, c)` is a triplet of positive integers. It turns out that Pythagorean triplets satisfy some fairly remarkable relationships that can be used to generate all Pythagorean triplets.
[Euclid](http://en.wikipedia.org/wiki/Euclid) discovered the first such relationship. He determined that for every Pythagorean triple `(a, b, c)`, possibly after a reordering of `a` and `b`, there are relatively prime positive integers `m` and `n` with `m > n`, exactly one of which is even, and a positive integer `k` such that
```
a = k (2mn)
b = k (m^2 - n^2)
c = k (m^2 + n^2)
```
Then to generate Pythagorean triplets, generate relatively prime positive integers `m` and `n` of differing parity, and a positive integer `k`, and apply the above formula.
```
struct PythagoreanTriple {
    public int a { get; private set; }
    public int b { get; private set; }
    public int c { get; private set; }

    public PythagoreanTriple(int a, int b, int c) : this() {
        this.a = a < b ? a : b;
        this.b = b < a ? a : b;
        this.c = c;
    }

    public override string ToString() {
        return String.Format("a = {0}, b = {1}, c = {2}", a, b, c);
    }

    public static IEnumerable<PythagoreanTriple> GenerateTriples(int max) {
        var triples = new List<PythagoreanTriple>();
        for (int m = 1; m <= max / 2; m++) {
            for (int n = 1 + (m % 2); n < m; n += 2) {
                if (m.IsRelativelyPrimeTo(n)) {
                    for (int k = 1; k <= max / (m * m + n * n); k++) {
                        triples.Add(EuclidTriple(m, n, k));
                    }
                }
            }
        }
        return triples;
    }

    private static PythagoreanTriple EuclidTriple(int m, int n, int k) {
        int msquared = m * m;
        int nsquared = n * n;
        return new PythagoreanTriple(k * 2 * m * n, k * (msquared - nsquared), k * (msquared + nsquared));
    }
}

public static class IntegerExtensions {
    private static int GreatestCommonDivisor(int m, int n) {
        return (n == 0 ? m : GreatestCommonDivisor(n, m % n));
    }

    public static bool IsRelativelyPrimeTo(this int m, int n) {
        return GreatestCommonDivisor(m, n) == 1;
    }
}

class Program {
    static void Main(string[] args) {
        PythagoreanTriple.GenerateTriples(1000).ToList().ForEach(t => Console.WriteLine(t));
    }
}
```
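Since the question itself is in Python, the same Euclid-based generator can be sketched there as well. This is an illustrative transcription, not part of the original answer, and it assumes Python 3.5+ for `math.gcd`:

```
from math import gcd  # Python 3.5+; on older versions use fractions.gcd

def euclid_triples(max_c):
    """All Pythagorean triples (a, b, c) with a <= b and c <= max_c."""
    triples = []
    m = 2
    while m * m + 1 <= max_c:                # m*m + 1 lower-bounds the hypotenuse for this m
        for n in range(1 + (m % 2), m, 2):   # n < m, opposite parity to m
            if gcd(m, n) == 1:               # coprime => (m, n) yields a primitive triple
                base_c = m * m + n * n
                for k in range(1, max_c // base_c + 1):
                    a, b = k * 2 * m * n, k * (m * m - n * n)
                    triples.append((min(a, b), max(a, b), k * base_c))
        m += 1
    return triples
```

For example, `euclid_triples(20)` produces the six triples with hypotenuse at most 20, starting from `(3, 4, 5)`, with no duplicates.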
The Wikipedia article on [Formulas for generating Pythagorean triples](http://en.wikipedia.org/wiki/Formulas_for_generating_Pythagorean_triples) contains other such formulae. |
Generating unique, ordered Pythagorean triplets | 575,117 | 17 | 2009-02-22T16:00:34Z | 8,263,898 | 22 | 2011-11-25T00:59:17Z | [
"python",
"math"
] | This is a program I wrote to calculate Pythagorean triplets. When I run the program it prints each set of triplets twice because of the if statement. Is there any way I can tell the program to only print a new set of triplets once? Thanks.
```
import math
def main():
    for x in range(1, 1000):
        for y in range(1, 1000):
            for z in range(1, 1000):
                if x*x == y*y + z*z:
                    print y, z, x
                    print '-'*50

if __name__ == '__main__':
    main()
``` | Substantially faster than any of the solutions so far. Finds triplets via a ternary tree.
> [Wolfram](http://mathworld.wolfram.com/PythagoreanTriple.html) says:
>
> Hall (1970) and Roberts (1977) prove that `(a,b,c)` is a primitive Pythagorean triple if and only if
>
> `(a,b,c)=(3,4,5)M`
>
> where `M` is a finite product of the matrices `U`, `A`, `D`.
And there we have a formula to generate every primitive triple.
Under these matrix products the hypotenuse only ever grows, so it's easy to stop once a maximum length is exceeded.
In Python:
```
import numpy as np

def gen_prim_pyth_trips(limit=None):
    u = np.mat(' 1  2  2; -2 -1 -2; 2 2 3')
    a = np.mat(' 1  2  2;  2  1  2; 2 2 3')
    d = np.mat('-1 -2 -2;  2  1  2; 2 2 3')
    uad = np.array([u, a, d])
    m = np.array([3, 4, 5])
    while m.size:
        m = m.reshape(-1, 3)
        if limit:
            m = m[m[:, 2] <= limit]
        yield from m
        m = np.dot(m, uad)
```
If you'd like all triples and not just the primitives:
```
def gen_all_pyth_trips(limit):
    for prim in gen_prim_pyth_trips(limit):
        i = prim
        for _ in range(limit // prim[2]):
            yield i
            i = i + prim
```
`list(gen_prim_pyth_trips(10**4))` took 2.81 milliseconds to come back with 1593 elements, while `list(gen_all_pyth_trips(10**4))` took 19.8 milliseconds to come back with 12471 elements.
For reference, the [accepted answer](http://stackoverflow.com/a/576405/992385) [(in python)](https://gist.github.com/kgullion/6adc2d633038ff852923) took 38 seconds for 12471 elements.
Just for fun, setting the upper limit to one million `list(gen_all_pyth_trips(10**6))` returns in 2.66 seconds with 1980642 elements (almost 2 million triples in 3 seconds). `list(gen_all_pyth_trips(10**7))` brings my computer to its knees as the list gets so large it consumes every last bit of ram. Doing something like `sum(1 for _ in gen_all_pyth_trips(10**7))` gets around that limitation and returns in 30 seconds with 23471475 elements.
For more information on the algorithm used, check out the articles on [Wolfram](http://mathworld.wolfram.com/PythagoreanTriple.html) and [Wikipedia](http://en.wikipedia.org/wiki/Tree_of_primitive_Pythagorean_triples). |
In Python, why can a function modify some arguments as perceived by the caller, but not others? | 575,196 | 77 | 2009-02-22T16:42:51Z | 575,204 | 11 | 2009-02-22T16:47:13Z | [
"python"
] | I'm new to Python and am trying to understand its approach to variable scope. In this example, why is `f()` able to alter the value of `x`, as perceived within `main()`, but not the value of `n`?
```
def f(n, x):
    n = 2
    x.append(4)
    print 'In f():', n, x

def main():
    n = 1
    x = [0,1,2,3]
    print 'Before:', n, x
    f(n, x)
    print 'After: ', n, x

main()
```
Output:
```
Before: 1 [0, 1, 2, 3]
In f(): 2 [0, 1, 2, 3, 4]
After: 1 [0, 1, 2, 3, 4]
``` | `f` doesn't actually alter the value of `x` (which is always the same reference to an instance of a list). Rather, it alters the *contents* of this list.
In both cases, a *copy* is passed to the function. But since `x` is a reference to a list instance, only the reference is copied, not the contents of the list.
In case you are familiar with C, the following approaches the semantics of the Python code:
```
void f(int n, int* x) {
n = 42;
x[0] = 2;
}
```
Here, `n` is an `int` and `x` is an `int*`, and both are passed to the function as copies. Nonetheless, the memory *pointed to* by `x` is the same for the caller and the callee. |
In Python, why can a function modify some arguments as perceived by the caller, but not others? | 575,196 | 77 | 2009-02-22T16:42:51Z | 575,337 | 109 | 2009-02-22T18:06:13Z | [
"python"
] | I'm new to Python and am trying to understand its approach to variable scope. In this example, why is `f()` able to alter the value of `x`, as perceived within `main()`, but not the value of `n`?
```
def f(n, x):
    n = 2
    x.append(4)
    print 'In f():', n, x

def main():
    n = 1
    x = [0,1,2,3]
    print 'Before:', n, x
    f(n, x)
    print 'After: ', n, x

main()
```
Output:
```
Before: 1 [0, 1, 2, 3]
In f(): 2 [0, 1, 2, 3, 4]
After: 1 [0, 1, 2, 3, 4]
``` | Some answers contain the word "copy" in a context of a function call. I find it confusing.
**Python doesn't copy *objects* you pass during a function call *ever*.**
Function parameters are *names*. When you call a function Python binds these parameters to whatever objects you pass (via names in a caller scope).
Objects can be mutable (like lists) or immutable (like integers and strings in Python). A mutable object can be changed in place. A name cannot be "changed"; it can only be rebound to another object.
Your example is not about [scopes or namespaces](https://docs.python.org/2/tutorial/classes.html#python-scopes-and-namespaces), it is about [naming and binding](http://docs.python.org/reference/executionmodel.html#naming-and-binding) and [mutability of an object](http://docs.python.org/reference/datamodel.html#objects-values-and-types) in Python.
```
def f(n, x):        # these `n`, `x` have nothing to do with `n` and `x` from main()
    n = 2           # put `n` label on the `2` balloon
    x.append(4)     # call the `append` method of whatever object `x` is referring to
    print 'In f():', n, x
    x = []          # put `x` label on the `[]` balloon
                    # x = [] has no effect on the original list that was passed into the function
```
Here are nice pictures on [the difference between variables in other languages and names in Python](http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html#other-languages-have-variables). |
In Python, why can a function modify some arguments as perceived by the caller, but not others? | 575,196 | 77 | 2009-02-22T16:42:51Z | 575,887 | 11 | 2009-02-22T21:52:14Z | [
"python"
] | I'm new to Python and am trying to understand its approach to variable scope. In this example, why is `f()` able to alter the value of `x`, as perceived within `main()`, but not the value of `n`?
```
def f(n, x):
    n = 2
    x.append(4)
    print 'In f():', n, x

def main():
    n = 1
    x = [0,1,2,3]
    print 'Before:', n, x
    f(n, x)
    print 'After: ', n, x

main()
```
Output:
```
Before: 1 [0, 1, 2, 3]
In f(): 2 [0, 1, 2, 3, 4]
After: 1 [0, 1, 2, 3, 4]
``` | You've got a number of answers already, and I broadly agree with J.F. Sebastian, but you might find this useful as a shortcut:
Any time you see **`varname =`**, you're creating a *new* name binding within the function's scope. Whatever value `varname` was bound to before is lost *within this scope*.
Any time you see **`varname.foo()`**, you're calling a method on the object `varname` refers to. The method may mutate that object (e.g. `list.append`). The object (rather than the name) may be reachable from more than one scope, and since it's the same object, any changes are visible in all of them.
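Those two rules can be checked directly; here is a small sketch mirroring the question's code, using `id()` to show the list object never changes:

```
def f(n, x):
    n = 2          # rebinding: creates a new local binding for `n`
    x.append(4)    # method call: mutates the object `x` names

n, x = 1, [0, 1, 2, 3]
list_id = id(x)
f(n, x)
assert n == 1                  # the caller's binding is untouched
assert x == [0, 1, 2, 3, 4]    # the mutation is visible everywhere
assert id(x) == list_id        # still the very same list object
```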
[note that the `global` keyword creates an exception to the first case] |
How does python close files that have been gc'ed? | 575,278 | 14 | 2009-02-22T17:26:49Z | 575,320 | 19 | 2009-02-22T17:53:46Z | [
"python",
"file",
"garbage-collection",
"del"
] | I had always assumed that a file would leak if it was opened without being closed, but I just verified that if I enter the following lines of code, the file will close:
```
>>> f = open('somefile.txt')
>>> del f
```
Just out of sheer curiosity, how does this work? I notice that `file` doesn't include a `__del__` method. | In CPython, at least, files are closed when the file object is deallocated. See the `file_dealloc` function in `Objects/fileobject.c` in the CPython source. Dealloc methods are sort-of like `__del__` for C types, except without some of the problems inherent to `__del__`. |
Why not always use psyco for Python code? | 575,385 | 37 | 2009-02-22T18:23:50Z | 1,437,939 | 20 | 2009-09-17T10:23:30Z | [
"python",
"optimization",
"psyco"
] | [psyco](http://psyco.sourceforge.net/) seems to be quite helpful in optimizing Python code, and it does it in a very non-intrusive way.
Therefore, one has to wonder. Assuming you're always on a x86 architecture (which is where most apps run these days), why not just always use `psyco` for all Python code? Does it make mistakes sometimes and ruins the correctness of the program? Increases the runtime for some weird cases?
Have you had any negative experiences with it? My most negative experience so far was that it made my code faster by only 15%. Usually it's better.
Naturally, using psyco is not a replacement for efficient algorithms and coding. But if you can improve the performance of your code for the cost of two lines (importing and calling psyco), I see no good reason not to. | 1) The memory overhead is the main one, as described in other answers. You also pay the compilation cost, which can be prohibitive if you aren't selective. From the [user reference](http://psyco.sourceforge.net/psycoguide/module-psyco.html):
> Compiling everything is often overkill for medium- or large-sized applications. The drawbacks of compiling too much are in the time spent compiling, plus the amount of memory that this process consumes. It is a subtle balance to keep.
2) Performance can actually be harmed by Psyco compilation. Again from the user guide (["known bugs"](http://psyco.sourceforge.net/psycoguide/tutknownbugs.html) section):
> There are also performance bugs: situations in which Psyco slows down the code instead of accelerating it. It is difficult to make a complete list of the possible reasons, but here are a few common ones:
>
> * The built-in `map` and `filter` functions must be avoided and replaced by list comprehension. For example, `map(lambda x: x*x, lst)` should be replaced by the more readable but more recent syntax `[x*x for x in lst]`.
> * The compilation of regular expressions doesn't seem to benefit from Psyco. (The execution of regular expressions is unaffected, since it is C code.) Don't enable Psyco on this module; if necessary, disable it explicitly, e.g. by calling `psyco.cannotcompile(re.compile)`.
3) Finally, there are some relatively obscure situations where using Psyco will actually introduce bugs. Some of them are [listed here](http://psyco.sourceforge.net/psycoguide/bugs.html#bugs). |
How to obtain the keycodes in Python | 575,650 | 5 | 2009-02-22T20:01:21Z | 575,656 | 8 | 2009-02-22T20:03:45Z | [
"python",
"input",
"keycode"
] | I need to know which key is pressed, but I don't need the character code: I want to know when someone presses the 'A' key even if the character obtained is 'a' or 'A', and likewise for all other keys.
I can't use PyGame or any other library (including Tkinter). Only the Python Standard Library. And this has to be done in a terminal, not a graphical interface.
I DO NOT NEED THE CHARACTER CODE. I NEED TO KNOW THE KEY CODE.
Ex:
```
ord('a') != ord('A') # 97 != 65
someFunction('a') == someFunction('A') # a_code == A_code
``` | Depending on what you are trying to accomplish, perhaps using a library such as [pygame](http://pygame.org) would do what you want. Pygame contains more advanced keypress handling than is normally available with Python's standard libraries. |
How to obtain the keycodes in Python | 575,650 | 5 | 2009-02-22T20:01:21Z | 575,781 | 16 | 2009-02-22T21:02:36Z | [
"python",
"input",
"keycode"
] | I need to know which key is pressed, but I don't need the character code: I want to know when someone presses the 'A' key even if the character obtained is 'a' or 'A', and likewise for all other keys.
I can't use PyGame or any other library (including Tkinter). Only the Python Standard Library. And this has to be done in a terminal, not a graphical interface.
I DO NOT NEED THE CHARACTER CODE. I NEED TO KNOW THE KEY CODE.
Ex:
```
ord('a') != ord('A') # 97 != 65
someFunction('a') == someFunction('A') # a_code == A_code
``` | See the [tty](http://docs.python.org/library/tty.html) standard module. It allows switching from the default line-oriented (cooked) mode into char-oriented (cbreak) mode with [tty.setcbreak(sys.stdin)](http://docs.python.org/library/tty.html#tty.setcbreak). Reading a single character from `sys.stdin` will then return the next pressed keyboard key (if it generates a code):
```
import sys
import tty

tty.setcbreak(sys.stdin)
while True:
    print ord(sys.stdin.read(1))
```
*Note: solution is Unix (including Linux) only.*
Edit: On Windows try [msvcrt.getche()](http://docs.python.org/library/msvcrt.html#msvcrt.getche)/[getwche()](http://docs.python.org/library/msvcrt.html#msvcrt.getwche). /me has nowhere to try...
---
Edit 2: Utilize win32 low-level console API via [ctypes.windll](http://docs.python.org/library/ctypes.html) (see [example at SO](http://stackoverflow.com/questions/239020/how-can-i-call-a-dll-from-a-scripting-language)) with `ReadConsoleInput` function. You should filter out keypresses - `e.EventType==KEY_EVENT` and look for `e.Event.KeyEvent.wVirtualKeyCode` value. Example of application (not in Python, just to get an idea) can be found at <http://www.benryves.com/tutorials/?t=winconsole&c=4>. |
How to convert rational and decimal number strings to floats in python? | 575,925 | 9 | 2009-02-22T22:08:47Z | 575,944 | 16 | 2009-02-22T22:18:47Z | [
"python",
"rational-numbers"
] | How can I convert strings which can denote decimal or rational numbers to floats
```
>>> ["0.1234", "1/2"]
['0.1234', '1/2']
```
I'd want [0.1234, 0.5].
eval is what I was thinking but no luck:
```
>>> eval("1/2")
0
``` | I'd parse the string if conversion fails:
```
>>> def convert(s):
...     try:
...         return float(s)
...     except ValueError:
...         num, denom = s.split('/')
...         return float(num) / float(denom)
...
>>> convert("0.1234")
0.1234
>>> convert("1/2")
0.5
```
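Alternatively, on Python 2.6+ the standard-library `fractions` module can do the parsing itself. A sketch (note that `Fraction` only accepts decimal strings like `"0.1234"` on recent Pythons, roughly 2.7/3.2 and later):

```
from fractions import Fraction

def convert(s):
    # Fraction parses both "1/2" and (on recent Pythons) "0.1234"
    return float(Fraction(s))

assert convert("1/2") == 0.5
assert convert("0.1234") == 0.1234
```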
Generally using eval is a bad idea, since it's a security risk. *Especially* if the string being evaluated came from outside the system. |
How to convert rational and decimal number strings to floats in python? | 575,925 | 9 | 2009-02-22T22:08:47Z | 575,976 | 7 | 2009-02-22T22:34:55Z | [
"python",
"rational-numbers"
] | How can I convert strings which can denote decimal or rational numbers to floats
```
>>> ["0.1234", "1/2"]
['0.1234', '1/2']
```
I'd want [0.1234, 0.5].
eval is what I was thinking but no luck:
```
>>> eval("1/2")
0
``` | As others have pointed out, using `eval` is potentially a security risk, and certainly a bad habit to get into.
(if you don't think it's as risky as `exec`, imagine `eval`ing something like: `__import__('os').system('rm -rf /')`)
However, if you have python 2.6 or up, you can use [`ast.literal_eval`](http://docs.python.org/library/ast.html#ast.literal_eval), for which the string provided:
> may only consist of the following
> Python literal structures: strings,
> numbers, tuples, lists, dicts,
> booleans, and None.
Thus it should be quite safe :-) |
PyGame not receiving events when 3+ keys are pressed at the same time | 576,634 | 3 | 2009-02-23T05:55:34Z | 576,643 | 10 | 2009-02-23T06:02:08Z | [
"python",
"pygame",
"keyboard-events"
] | *I am developing a simple game in [PyGame](http://www.pygame.org/)... A rocket ship flying around and shooting stuff.*
---
**Question:** Why does pygame stop emitting keyboard events when too many keys are pressed at once?
**About the Key Handling:** The program has a number of variables like `KEYSTATE_FIRE, KEYSTATE_TURNLEFT`, etc...
1. When a `KEYDOWN` event is handled, it sets the corresponding `KEYSTATE_*` variable to True.
2. When a `KEYUP` event is handled, it sets the same variable to False.
**The problem:**
If `UP-ARROW` and `LEFT-ARROW` are being pressed at the same time, pygame DOES NOT emit a `KEYDOWN` event when `SPACE` is pressed. This behavior varies depending on the keys. When pressing letters, it seems that I can hold about 5 of them before pygame stops emitting `KEYDOWN` events for additional keys.
**Verification:** In my main loop, I simply printed each event received to verify the above behavior.
**The code:** For reference, here is the (crude) way of handling key events at this point:
```
while GAME_RUNNING:
    FRAME_NUMBER += 1
    CLOCK.tick(FRAME_PER_SECOND)

    #----------------------------------------------------------------------
    # Check for events
    for event in pygame.event.get():
        print event
        if event.type == pygame.QUIT:
            raise SystemExit()
        elif event.type == pygame.KEYDOWN and event.dict['key'] == pygame.K_UP:
            KEYSTATE_FORWARD = True
        elif event.type == pygame.KEYUP and event.dict['key'] == pygame.K_UP:
            KEYSTATE_FORWARD = False
        elif event.type == pygame.KEYDOWN and event.dict['key'] == pygame.K_DOWN:
            KEYSTATE_BACKWARD = True
        elif event.type == pygame.KEYUP and event.dict['key'] == pygame.K_DOWN:
            KEYSTATE_BACKWARD = False
        elif event.type == pygame.KEYDOWN and event.dict['key'] == pygame.K_LEFT:
            KEYSTATE_TURNLEFT = True
        elif event.type == pygame.KEYUP and event.dict['key'] == pygame.K_LEFT:
            KEYSTATE_TURNLEFT = False
        elif event.type == pygame.KEYDOWN and event.dict['key'] == pygame.K_RIGHT:
            KEYSTATE_TURNRIGHT = True
        elif event.type == pygame.KEYUP and event.dict['key'] == pygame.K_RIGHT:
            KEYSTATE_TURNRIGHT = False
        elif event.type == pygame.KEYDOWN and event.dict['key'] == pygame.K_SPACE:
            KEYSTATE_FIRE = True
        elif event.type == pygame.KEYUP and event.dict['key'] == pygame.K_SPACE:
            KEYSTATE_FIRE = False

    # remainder of game loop here...
```
**For pressing this sequence:**
* `a (down)`
* `s (down)`
* `d (down)`
* `f (down)`
* `g (down)`
* `h (down)`
* `j (down)`
* `k (down)`
* `a (up)`
* `s (up)`
* `d (up)`
* `f (up)`
* `g (up)`
* `h (up)`
* `j (up)`
* `k (up)`
**Here is the output:**
* `<Event(2-KeyDown {'scancode': 30, 'key': 97, 'unicode': u'a', 'mod': 0})>`
* `<Event(2-KeyDown {'scancode': 31, 'key': 115, 'unicode': u's', 'mod': 0})>`
* `<Event(2-KeyDown {'scancode': 32, 'key': 100, 'unicode': u'd', 'mod': 0})>`
* `<Event(2-KeyDown {'scancode': 33, 'key': 102, 'unicode': u'f', 'mod': 0})>`
* `<Event(3-KeyUp {'scancode': 30, 'key': 97, 'mod': 0})>`
* `<Event(3-KeyUp {'scancode': 31, 'key': 115, 'mod': 0})>`
* `<Event(3-KeyUp {'scancode': 32, 'key': 100, 'mod': 0})>`
* `<Event(3-KeyUp {'scancode': 33, 'key': 102, 'mod': 0})>`
* `<Event(2-KeyDown {'scancode': 36, 'key': 106, 'unicode': u'j', 'mod': 0})>`
* `<Event(2-KeyDown {'scancode': 37, 'key': 107, 'unicode': u'k', 'mod': 0})>`
* `<Event(3-KeyUp {'scancode': 36, 'key': 106, 'mod': 0})>`
* `<Event(3-KeyUp {'scancode': 37, 'key': 107, 'mod': 0})>`
---
Is this a common issue? Is there a workaround? If not, what is the best way to handle multiple-key control issues when using pygame? | This sounds like an input problem, not a code problem - are you sure the problem isn't the keyboard itself? Most keyboards have limits on how many keys can register at the same time; often you can't press more than a few keys that are close together at once.
To test it out, just start pressing and holding letters on the keyboard and see when new letters stop appearing.
My suggestion is to try mapping SPACE to a different key somewhere else and see what happens. |
How can I reference columns by their names in python calling SQLite? | 576,933 | 10 | 2009-02-23T08:50:12Z | 576,954 | 9 | 2009-02-23T09:01:05Z | [
"python",
"sqlite"
] | I have some code which I've been using to query MySQL, and I'm hoping to use it with SQLite. My real hope is that this will not involve making too many changes to the code. Unfortunately, the following code doesn't work with SQLite:
```
cursor.execute(query)
rows = cursor.fetchall()
data = []
for row in rows:
    data.append(row["column_name"])
```
This gives the following error:
```
TypeError: tuple indices must be integers
```
Whereas if I change the reference to use a column number, it works fine:
```
data.append(row[1])
```
Can I execute the query in such a way that I can reference columns by their names? | I'm not sure if this is the best approach, but here's what I typically do to retrieve a record set using a DB-API 2 compliant module:
```
cursor.execute("""SELECT foo, bar, baz, quux FROM table WHERE id = %s;""",
               (interesting_record_id,))

for foo, bar, baz, quux in cursor.fetchall():
    frobnicate(foo + bar, baz * quux)
```
The query formatting method is one of the DB-API parameter styles; `%s` happens to be the convention Psycopg2 prefers, while other DB-API adapters use different paramstyles (the `sqlite3` module, for instance, uses `?` placeholders).
Writing queries like this, with implicit tuple unpacking to consume the result set, has typically been more effective for me than trying to match Python variable names to SQL column names (which I usually only use to drop prefixes, and then only if I'm working with a subset of the column names such that the prefixes no longer clarify anything), and is *much* better than remembering numerical column IDs.
This style also helps you avoid `SELECT * FROM table...`, which is just a maintenance disaster for anything but the simplest tables and queries.
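Since the question is about SQLite specifically, here is the same pattern with the `sqlite3` module; this is a sketch against a throwaway in-memory table, and note that `sqlite3` uses `?` placeholders rather than `%s`:

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, foo INTEGER, bar INTEGER)")
conn.execute("INSERT INTO t VALUES (1, 10, 20)")

# tuple unpacking works the same way regardless of the adapter
cursor = conn.execute("SELECT foo, bar FROM t WHERE id = ?", (1,))
for foo, bar in cursor.fetchall():
    assert (foo, bar) == (10, 20)
```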
So, not exactly the answer you were asking for, but possibly enlightening nonetheless. |
How can I reference columns by their names in python calling SQLite? | 576,933 | 10 | 2009-02-23T08:50:12Z | 577,004 | 13 | 2009-02-23T09:25:44Z | [
"python",
"sqlite"
] | I have some code which I've been using to query MySQL, and I'm hoping to use it with SQLite. My real hope is that this will not involve making too many changes to the code. Unfortunately, the following code doesn't work with SQLite:
```
cursor.execute(query)
rows = cursor.fetchall()
data = []
for row in rows:
    data.append(row["column_name"])
```
This gives the following error:
```
TypeError: tuple indices must be integers
```
Whereas if I change the reference to use a column number, it works fine:
```
data.append(row[1])
```
Can I execute the query in such a way that I can reference columns by their names? | To access columns by name, use the [`row_factory`](http://oss.itsystementwicklung.de/download/pysqlite/doc/sqlite3.html#sqlite3.Connection.row_factory) attribute of the Connection instance. It lets you set a function that takes the arguments `cursor` and `row`, and return whatever you'd like. There's a few builtin to pysqlite, namely `sqlite3.Row`, which does what you've asked. |
How can I reference columns by their names in python calling SQLite? | 576,933 | 10 | 2009-02-23T08:50:12Z | 7,099,412 | 8 | 2011-08-17T21:00:19Z | [
"python",
"sqlite"
] | I have some code which I've been using to query MySQL, and I'm hoping to use it with SQLite. My real hope is that this will not involve making too many changes to the code. Unfortunately, the following code doesn't work with SQLite:
```
cursor.execute(query)
rows = cursor.fetchall()
data = []
for row in rows:
    data.append(row["column_name"])
```
This gives the following error:
```
TypeError: tuple indices must be integers
```
Whereas if I change the reference to use a column number, it works fine:
```
data.append(row[1])
```
Can I execute the query in such a way that I can reference columns by their names? | This can be done by adding a single line after the "connect" statement:
```
conn.row_factory = sqlite3.Row
```
Check the documentation here:
<http://docs.python.org/library/sqlite3.html#accessing-columns-by-name-instead-of-by-index> |
How can I reference columns by their names in python calling SQLite? | 576,933 | 10 | 2009-02-23T08:50:12Z | 20,042,292 | 8 | 2013-11-18T07:27:11Z | [
"python",
"sqlite"
] | I have some code which I've been using to query MySQL, and I'm hoping to use it with SQLite. My real hope is that this will not involve making too many changes to the code. Unfortunately, the following code doesn't work with SQLite:
```
cursor.execute(query)
rows = cursor.fetchall()
data = []
for row in rows:
    data.append(row["column_name"])
```
This gives the following error:
```
TypeError: tuple indices must be integers
```
Whereas if I change the reference to use a column number, it works fine:
```
data.append(row[1])
```
Can I execute the query in such a way that I can reference columns by their names? | In the five years since the question was asked and then answered, a very simple solution has arisen. Any new code can simply wrap the connection object with a row factory. Code example:
```
import sqlite3

conn = sqlite3.connect('./someFile')
conn.row_factory = sqlite3.Row  # Here's the magic!
cursor = conn.execute("SELECT name, age FROM someTable")
for row in cursor:
    print(row['name'])
```
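`sqlite3.Row` objects also support positional access and a `keys()` method; here is a quick sketch against a throwaway in-memory table:

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE someTable (name TEXT, age INTEGER)")
conn.execute("INSERT INTO someTable VALUES ('Alice', 30)")

row = conn.execute("SELECT name, age FROM someTable").fetchone()
assert row["name"] == row[0] == "Alice"   # by column name or by position
assert row.keys() == ["name", "age"]      # column names, in query order
```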
Here are some [fine docs](http://docs.python.org/2/library/sqlite3.html#accessing-columns-by-name-instead-of-by-index). Enjoy! |
How to exit a module before it has finished parsing? | 577,119 | 4 | 2009-02-23T10:15:09Z | 577,211 | 8 | 2009-02-23T10:51:16Z | [
"python",
"import",
"module"
I have a module that imports another module, but in some cases the imported module may not exist. After the import, a class is defined that inherits from a class in the imported module. If I catch the `ImportError` exception when the module doesn't exist, how can I stop Python from parsing the rest of the module? I'm open to other solutions if that's not possible.
Here is a basic example (selfaware.py):
```
try:
    from skynet import SkyNet
except ImportError:
    class SelfAwareSkyNet():
        pass
    exit_module_parsing_here()

class SelfAwareSkyNet(SkyNet):
    pass
```
The only ways I can think to do this is:
* Before importing the `selfaware.py` module, check if the `skynet` module is available, and simply pass or create a stub class. This violates DRY if `selfaware.py` is imported multiple times.
* Within `selfaware.py`, have the class defined within the `try` block. e.g.:
```
try:
    from skynet import SkyNet
    class SelfAwareSkyNet(SkyNet):
        pass
except ImportError:
    class SelfAwareSkyNet():
        pass
``` | `try:` supports an `else:` clause:
```
try:
    from skynet import SkyNet
except ImportError:
    class SelfAwareSkyNet():
        pass
else:
    class SelfAwareSkyNet(SkyNet):
        pass
``` |
Python "extend" for a dictionary | 577,234 | 211 | 2009-02-23T10:59:32Z | 577,241 | 352 | 2009-02-23T11:01:49Z | [
"python",
"dictionary"
] | Which is the best way to extend a dictionary with another one? For instance:
```
>>> a = { "a" : 1, "b" : 2 }
>>> b = { "c" : 3, "d" : 4 }
>>> a
{'a': 1, 'b': 2}
>>> b
{'c': 3, 'd': 4}
```
I'm looking for any operation to obtain this avoiding `for` loop:
```
{ "a" : 1, "b" : 2, "c" : 3, "d" : 4 }
```
I wish to do something like:
```
a.extend(b) # This does not work
``` | ```
a.update(b)
```
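Applied to the question's example (a quick check):

```
a = {"a": 1, "b": 2}
b = {"c": 3, "d": 4}
a.update(b)   # merges b into a, in place
assert a == {"a": 1, "b": 2, "c": 3, "d": 4}
```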
[Python Standard Library Documentation](http://docs.python.org/2/library/stdtypes.html#dict.update) |
Python "extend" for a dictionary | 577,234 | 211 | 2009-02-23T10:59:32Z | 577,245 | 18 | 2009-02-23T11:04:22Z | [
"python",
"dictionary"
] | Which is the best way to extend a dictionary with another one? For instance:
```
>>> a = { "a" : 1, "b" : 2 }
>>> b = { "c" : 3, "d" : 4 }
>>> a
{'a': 1, 'b': 2}
>>> b
{'c': 3, 'd': 4}
```
I'm looking for any operation to obtain this avoiding `for` loop:
```
{ "a" : 1, "b" : 2, "c" : 3, "d" : 4 }
```
I wish to do something like:
```
a.extend(b) # This does not work
``` | ```
a.update(b)
```
Will add keys and values from *b* to *a*, overwriting if there's already a value for a key. |
Python "extend" for a dictionary | 577,234 | 211 | 2009-02-23T10:59:32Z | 1,552,420 | 109 | 2009-10-12T02:27:52Z | [
"python",
"dictionary"
] | Which is the best way to extend a dictionary with another one? For instance:
```
>>> a = { "a" : 1, "b" : 2 }
>>> b = { "c" : 3, "d" : 4 }
>>> a
{'a': 1, 'b': 2}
>>> b
{'c': 3, 'd': 4}
```
I'm looking for any operation to obtain this avoiding `for` loop:
```
{ "a" : 1, "b" : 2, "c" : 3, "d" : 4 }
```
I wish to do something like:
```
a.extend(b) # This does not work
``` | A beautiful gem in [this closed question](http://stackoverflow.com/questions/1551666/how-can-2-python-dictionaries-become-1/1551878#1551878):
The "oneliner way", altering neither of the input dicts, is
```
basket = dict(basket_one, **basket_two)
```
Learn what [`**basket_two` (the `**`) means here](http://www.saltycrane.com/blog/2008/01/how-to-use-args-and-kwargs-in-python/).
In case of conflict, the items from `basket_two` will override the ones from `basket_one`. As one-liners go, this is pretty readable and transparent, and I have no compunction against using it any time a dict that's a mix of two others comes in handy (any reader who has trouble understanding it will in fact be very well served by the way this prompts him or her towards learning about `dict` and the `**` form;-). So, for example, uses like:
```
x = mungesomedict(dict(adict, **anotherdict))
```
are reasonably frequent occurrences in my code.
Originally submitted by [Alex Martelli](http://stackoverflow.com/users/95810/alex-martelli)
***Note:* In Python 3, this will only work if every key in basket\_two is a `string`.** |
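For example, a small sketch of both the merge and the Python 3 restriction (basket contents are illustrative):

```python
basket_one = {"apples": 2, "pears": 1}
basket_two = {"pears": 5, "plums": 3}

basket = dict(basket_one, **basket_two)
print(basket)       # {'apples': 2, 'pears': 5, 'plums': 3}
print(basket_one)   # unchanged: {'apples': 2, 'pears': 1}

# In Python 3 the ** keys must be strings:
try:
    dict(basket_one, **{1: "x"})
except TypeError as e:
    print("non-string key rejected:", e)
```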
Python "extend" for a dictionary | 577,234 | 211 | 2009-02-23T10:59:32Z | 12,697,215 | 12 | 2012-10-02T19:48:15Z | [
"python",
"dictionary"
] | Which is the best way to extend a dictionary with another one? For instance:
```
>>> a = { "a" : 1, "b" : 2 }
>>> b = { "c" : 3, "d" : 4 }
>>> a
{'a': 1, 'b': 2}
>>> b
{'c': 3, 'd': 4}
```
I'm looking for any operation to obtain this avoiding `for` loop:
```
{ "a" : 1, "b" : 2, "c" : 3, "d" : 4 }
```
I wish to do something like:
```
a.extend(b) # This does not work
``` | As others have mentioned, `a.update(b)` for some dicts `a` and `b` will achieve the result you've asked for in your question. However, I want to point out that I have often seen the `extend` method of mapping/set objects designed so that in the syntax `a.extend(b)`, `a`'s values should NOT be overwritten by `b`'s values. `a.update(b)` overwrites `a`'s values, and so isn't a good choice for `extend`.
Note that some languages call this method `defaults` or `inject`, as it can be thought of as a way of injecting b's values (which might be a set of default values) into a dictionary without overwriting values that might already exist.
Of course, you could simply note that `a.extend(b)` is nearly the same as `b.update(a); a=b`. To remove the assignment, you could do it thus:
```
def extend(a,b):
"""Create a new dictionary with a's properties extended by b,
without overwriting.
>>> extend({'a':1,'b':2},{'b':3,'c':4})
{'a': 1, 'c': 4, 'b': 2}
"""
return dict(b,**a)
```
Thanks to Tom Leys for that smart idea using a side-effect-less `dict` constructor for `extend`. |
Django - how do I _not_ dispatch a signal? | 577,376 | 4 | 2009-02-23T11:45:25Z | 577,432 | 11 | 2009-02-23T12:05:29Z | [
"python",
"django",
"django-signals"
] | I wrote some smart generic counters and managers for my models (to avoid `select count` queries etc.). Therefore I got some heavy logic going on for post\_save.
I would like to prevent handling the signal when there's no need to.
I guess the perfect interface would be:
```
instance.save(dispatch_signal=False)
```
How can I accomplish this?
---
**Update**
More information about what I'm doing, if anyone's interested:
1. Generic counters are stored in a separate table
2. Every time Django paginates an object list, it calls overriden count() method of my custom manager, which basically retrieves the static counter value for appropriate object class.
3. Signals trigger the logic of counters update, which is a bit complicated since it checks many aspects of related models (i.e. it has to generate a visibility property based on a nested category tree). I can't put this logic in Model.save() because one counter depends on many different models. I'd like to have that logic in one piece, instead of fragments spread around.
4. I am denormalizing some of my models, so I rewrite (duplicate) certain values across tables.
5. For testing purposes I run my little command-extension -- [Dilla](http://code.google.com/p/dilla), to populate random data around.
6. I've noticed unwanted signals triggering, therefore I'd like them to run conditionally.
Hope it's clear enough. Excuse my language mistakes. | A quick and dirty solution would be:
```
from django.db.models.signals import post_save
from somewhere_in_my_app import my_post_save_handler
post_save.disconnect(my_post_save_handler)
instance.save()
post_save.connect(my_post_save_handler)
```
But otherwise I strongly recommend moving your logic into the `save()` method of your model. |
Django - how do I _not_ dispatch a signal? | 577,376 | 4 | 2009-02-23T11:45:25Z | 10,881,618 | 11 | 2012-06-04T12:49:48Z | [
"python",
"django",
"django-signals"
] | I wrote some smart generic counters and managers for my models (to avoid `select count` queries etc.). Therefore I got some heavy logic going on for post\_save.
I would like to prevent handling the signal when there's no need to.
I guess the perfect interface would be:
```
instance.save(dispatch_signal=False)
```
How can I accomplish this?
---
**Update**
More information about what I'm doing, if anyone's interested:
1. Generic counters are stored in a separate table
2. Every time Django paginates an object list, it calls overriden count() method of my custom manager, which basically retrieves the static counter value for appropriate object class.
3. Signals trigger the logic of counters update, which is a bit complicated since it checks many aspects of related models (i.e. it has to generate a visibility property based on a nested category tree). I can't put this logic in Model.save() because one counter depends on many different models. I'd like to have that logic in one piece, instead of fragments spread around.
4. I am denormalizing some of my models, so I rewrite (duplicate) certain values across tables.
5. For testing purposes I run my little command-extension -- [Dilla](http://code.google.com/p/dilla), to populate random data around.
6. I've noticed unwanted signals triggering, therefore I'd like them to run conditionally.
Hope it's clear enough. Excuse my language mistakes. | You can disconnect and reconnect the signal. Try using a `with:` statement with this utility class:
```
class SignalBlocker(object):
def __init__(self, signal, receiver, **kwargs):
self.signal = signal
self.receiver = receiver
self.kwargs = kwargs
def __enter__(self, *args, **kwargs):
self.signal.disconnect(self.receiver)
def __exit__(self, *args, **kwargs):
self.signal.connect(self.receiver, **self.kwargs)
```
You can now use:
```
with SignalBlocker(post_save, my_post_save_handler):
instance.save()
``` |
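To see the mechanics without a Django project, the same context manager can be exercised against a stand-in signal object (`FakeSignal` below is purely illustrative; Django's real signals expose the same `connect`/`disconnect` methods):

```python
class FakeSignal(object):
    """Toy stand-in with Django-signal-like connect/disconnect/send."""
    def __init__(self):
        self.receivers = []
    def connect(self, receiver, **kwargs):
        self.receivers.append(receiver)
    def disconnect(self, receiver):
        self.receivers.remove(receiver)
    def send(self, *args, **kwargs):
        for receiver in self.receivers:
            receiver(*args, **kwargs)

class SignalBlocker(object):  # same class as above
    def __init__(self, signal, receiver, **kwargs):
        self.signal = signal
        self.receiver = receiver
        self.kwargs = kwargs
    def __enter__(self):
        self.signal.disconnect(self.receiver)
    def __exit__(self, *args):
        self.signal.connect(self.receiver, **self.kwargs)

seen = []
post_save = FakeSignal()
post_save.connect(seen.append)

post_save.send('first')                     # handler fires
with SignalBlocker(post_save, seen.append):
    post_save.send('suppressed')            # handler disconnected
post_save.send('second')                    # handler fires again

print(seen)                                 # ['first', 'second']
```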
Django - how do I _not_ dispatch a signal? | 577,376 | 4 | 2009-02-23T11:45:25Z | 16,556,300 | 11 | 2013-05-15T03:25:55Z | [
"python",
"django",
"django-signals"
] | I wrote some smart generic counters and managers for my models (to avoid `select count` queries etc.). Therefore I got some heavy logic going on for post\_save.
I would like to prevent handling the signal when there's no need to.
I guess the perfect interface would be:
```
instance.save(dispatch_signal=False)
```
How can I accomplish this?
---
**Update**
More information about what I'm doing, if anyone's interested:
1. Generic counters are stored in a separate table
2. Every time Django paginates an object list, it calls overriden count() method of my custom manager, which basically retrieves the static counter value for appropriate object class.
3. Signals trigger the logic of counters update, which is a bit complicated since it checks many aspects of related models (i.e. it has to generate a visibility property based on a nested category tree). I can't put this logic in Model.save() because one counter depends on many different models. I'd like to have that logic in one piece, instead of fragments spread around.
4. I am denormalizing some of my models, so I rewrite (duplicate) certain values across tables.
5. For testing purposes I run my little command-extension -- [Dilla](http://code.google.com/p/dilla), to populate random data around.
6. I've noticed unwanted signals triggering, therefore I'd like them to run conditionally.
Hope it's clear enough. Excuse my language mistakes. | I found a simple and easy solution:
```
MyModel.objects.filter(pk=instance.id).update(**data)
```
This works because (<https://docs.djangoproject.com/en/1.5/ref/models/querysets/#update>):
> Finally, realize that update() does an update at the SQL level and,
> thus, does not call any save() methods on your models, nor does it
> emit the pre\_save or post\_save signals (which are a consequence of
> calling Model.save()). |
Pause in Python | 577,467 | 38 | 2009-02-23T12:20:26Z | 577,487 | 12 | 2009-02-23T12:29:39Z | [
"python",
"command-line"
] | I am running command-line Python scripts from the Windows taskbar by having a shortcut pointing to the Python interpreter with the actual script as a parameter.
After the script has been processed, the interpreter terminates and the output window is closed which makes it impossible to read script output.
What is the most straightforward way to keep the interpreter window open until any key is pressed?
In batch files, one can end the script with pause. The closest thing to this I found in python is `raw_input()` which is sub-optimal because it requires pressing the return key (instead of any key).
Any ideas? | There's no need to wait for input before closing, just change your command like so:
```
cmd /K python <script>
```
The `/K` switch will execute the command that follows, but leave the command interpreter window open, in contrast to `/C`, which executes and then closes. |
Pause in Python | 577,467 | 38 | 2009-02-23T12:20:26Z | 577,488 | 43 | 2009-02-23T12:30:06Z | [
"python",
"command-line"
] | I am running command-line Python scripts from the Windows taskbar by having a shortcut pointing to the Python interpreter with the actual script as a parameter.
After the script has been processed, the interpreter terminates and the output window is closed which makes it impossible to read script output.
What is the most straightforward way to keep the interpreter window open until any key is pressed?
In batch files, one can end the script with pause. The closest thing to this I found in python is `raw_input()` which is sub-optimal because it requires pressing the return key (instead of any key).
Any ideas? | One way is to leave a `raw_input()` at the end so the script waits for you to press Enter before it terminates. |
Pause in Python | 577,467 | 38 | 2009-02-23T12:20:26Z | 577,529 | 7 | 2009-02-23T12:51:41Z | [
"python",
"command-line"
] | I am running command-line Python scripts from the Windows taskbar by having a shortcut pointing to the Python interpreter with the actual script as a parameter.
After the script has been processed, the interpreter terminates and the output window is closed which makes it impossible to read script output.
What is the most straightforward way to keep the interpreter window open until any key is pressed?
In batch files, one can end the script with pause. The closest thing to this I found in python is `raw_input()` which is sub-optimal because it requires pressing the return key (instead of any key).
Any ideas? | > One way is to leave a raw\_input() at the end so the script waits for you to press enter before it terminates.
The advantage of using raw\_input() instead of msvcrt.\* stuff is that the former is a part of standard Python (i.e. absolutely cross-platform). This also means that the script window will stay open after double-clicking on the script file icon, without the need to do
```
cmd /K python <script>
``` |
Pause in Python | 577,467 | 38 | 2009-02-23T12:20:26Z | 4,130,571 | 26 | 2010-11-09T04:41:40Z | [
"python",
"command-line"
] | I am running command-line Python scripts from the Windows taskbar by having a shortcut pointing to the Python interpreter with the actual script as a parameter.
After the script has been processed, the interpreter terminates and the output window is closed which makes it impossible to read script output.
What is the most straightforward way to keep the interpreter window open until any key is pressed?
In batch files, one can end the script with pause. The closest thing to this I found in python is `raw_input()` which is sub-optimal because it requires pressing the return key (instead of any key).
Any ideas? | Try `os.system("pause")`. I used it and it worked for me :) |
How do I prevent Python's os.walk from walking across mount points? | 577,761 | 7 | 2009-02-23T14:16:59Z | 577,830 | 15 | 2009-02-23T14:39:56Z | [
"python",
"linux",
"unix"
] | In Unix all disks are exposed as paths in the main filesystem, so `os.walk('/')` would traverse, for example, `/media/cdrom` as well as the primary hard disk, and that is undesirable for some applications.
How do I get an `os.walk` that stays on a single device?
Related:
* [Is there a way to determine if a subdirectory is in the same filesystem from python when using os.walk?](http://stackoverflow.com/questions/530645/is-there-a-way-to-determine-if-a-subdirectory-is-in-the-same-filesystem-from-pyth/) | From `os.walk` docs:
> When topdown is true, the caller can
> modify the dirnames list in-place
> (perhaps using del or slice
> assignment), and walk() will only
> recurse into the subdirectories whose
> names remain in dirnames; this can be
> used to prune the search
So something like this should work:
```
for root, dirnames, filenames in os.walk(...):
dirnames[:] = [
dir for dir in dirnames
if not os.path.ismount(os.path.join(root, dir))]
...
``` |
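A complete, runnable version of that pruning idea (the wrapper name `walk_one_device` is mine; the demo tree contains no mount points, so nothing is actually pruned here):

```python
import os
import tempfile

def walk_one_device(top):
    """Like os.walk, but never descends into a mount point."""
    for root, dirnames, filenames in os.walk(top):
        # in-place slice assignment so os.walk sees the pruned list
        dirnames[:] = [d for d in dirnames
                       if not os.path.ismount(os.path.join(root, d))]
        yield root, dirnames, filenames

# demo on a throwaway directory tree
top = tempfile.mkdtemp()
os.mkdir(os.path.join(top, 'sub'))
print([os.path.relpath(root, top) for root, _, _ in walk_one_device(top)])
# ['.', 'sub']
```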
How can I make this Python recursive function return a flat list? | 577,940 | 9 | 2009-02-23T15:12:37Z | 577,955 | 10 | 2009-02-23T15:16:14Z | [
"python",
"recursion"
] | Look at this simple function
```
def prime_factors(n):
for i in range(2,n):
if n % i == 0:
return i, prime_factors(n / i)
return n
```
Here's the result of `prime_factors(120)`
```
(2, (2, (2, (3, 5))))
```
Instead of nested tuples, I want it to return one flat tuple or list.
```
(2, 2, 2, 3, 5)
```
Is there a simple way to do that? | ```
def prime_factors(n):
for i in range(2,n):
if n % i == 0:
yield i
for p in prime_factors(n / i):
yield p
return
yield n
```
Example:
```
>>> tuple(prime_factors(100))
(2, 2, 5, 5)
``` |
How can I make this Python recursive function return a flat list? | 577,940 | 9 | 2009-02-23T15:12:37Z | 577,961 | 18 | 2009-02-23T15:17:03Z | [
"python",
"recursion"
] | Look at this simple function
```
def prime_factors(n):
for i in range(2,n):
if n % i == 0:
return i, prime_factors(n / i)
return n
```
Here's the result of `prime_factors(120)`
```
(2, (2, (2, (3, 5))))
```
Instead of nested tuples, I want it to return one flat tuple or list.
```
(2, 2, 2, 3, 5)
```
Is there a simple way to do that? | ```
def prime_factors(n):
for i in range(2,n):
if n % i == 0:
return [i] + prime_factors(n / i)
return [n]
``` |
How can I make this Python recursive function return a flat list? | 577,940 | 9 | 2009-02-23T15:12:37Z | 577,971 | 7 | 2009-02-23T15:19:23Z | [
"python",
"recursion"
] | Look at this simple function
```
def prime_factors(n):
for i in range(2,n):
if n % i == 0:
return i, prime_factors(n / i)
return n
```
Here's the result of `prime_factors(120)`
```
(2, (2, (2, (3, 5))))
```
Instead of nested tuples, I want it to return one flat tuple or list.
```
(2, 2, 2, 3, 5)
```
Is there a simple way to do that? | Without changing the original function, from [Python Tricks](http://kogs-www.informatik.uni-hamburg.de/~meine/python_tricks):
```
def flatten(x):
"""flatten(sequence) -> list
Returns a single, flat list which contains all elements retrieved
from the sequence and all recursively contained sub-sequences
(iterables).
Examples:
>>> flatten([1, 2, [3,4], (5,6)])
[1, 2, 3, 4, 5, 6]
>>> flatten([[[1,2,3], (42,None)], [4,5], [6], 7, MyVector(8,9,10)])
[1, 2, 3, 42, None, 4, 5, 6, 7, 8, 9, 10]"""
result = []
for el in x:
#if isinstance(el, (list, tuple)):
if hasattr(el, "__iter__") and not isinstance(el, basestring):
result.extend(flatten(el))
else:
result.append(el)
return result
``` |
Python program to find fibonacci series. More Pythonic way | 578,379 | 6 | 2009-02-23T16:53:23Z | 578,424 | 16 | 2009-02-23T17:02:01Z | [
"python"
] | There is another thread to discuss Fibo series in Python. This is to tweak code into more pythonic. [How to write the Fibonacci Sequence in Python](http://stackoverflow.com/questions/494594/how-to-write-the-fibonacci-sequence-in-python)
I am in love with this program I wrote to solve Project Euler Q2. I am newly coding in Python and rejoice each time I do it The Pythonic way! Can you suggest a better Pythonic way to do this?
[Project Euler Q2](http://projecteuler.net/index.php?section=problems&id=2). Find the sum of all the even-valued terms in the Fibonacci sequence which do not exceed four million.
```
fib=[]
def fibo(a=-1,b=1,upto=4000000):
if a+b>=upto:
return
else:
a,b=b,a+b
fib.append(b)
fibo(a,b)
fibo()
even=[i for i in fib if not i%2]
print sum(even)
``` | Using generators is a Pythonic way to generate long sequences while preserving memory:
```
def fibonacci():
a, b = 0, 1
while True:
yield a
a, b = b, a + b
import itertools
upto_4000000 = itertools.takewhile(lambda x: x <= 4000000, fibonacci())
print(sum(x for x in upto_4000000 if x % 2 == 0))
``` |
Python program to find fibonacci series. More Pythonic way | 578,379 | 6 | 2009-02-23T16:53:23Z | 578,426 | 12 | 2009-02-23T17:02:33Z | [
"python"
] | There is another thread to discuss Fibo series in Python. This is to tweak code into more pythonic. [How to write the Fibonacci Sequence in Python](http://stackoverflow.com/questions/494594/how-to-write-the-fibonacci-sequence-in-python)
I am in love with this program I wrote to solve Project Euler Q2. I am newly coding in Python and rejoice each time I do it The Pythonic way! Can you suggest a better Pythonic way to do this?
[Project Euler Q2](http://projecteuler.net/index.php?section=problems&id=2). Find the sum of all the even-valued terms in the Fibonacci sequence which do not exceed four million.
```
fib=[]
def fibo(a=-1,b=1,upto=4000000):
if a+b>=upto:
return
else:
a,b=b,a+b
fib.append(b)
fibo(a,b)
fibo()
even=[i for i in fib if not i%2]
print sum(even)
``` | First I'd do fibo() as a generator:
```
def fibo(a=-1,b=1,upto=4000000):
while a+b<upto:
a,b = b,a+b
yield b
```
Then I'd also select for evenness as a generator rather than a list comprehension.
```
print sum(i for i in fibo() if not i%2)
``` |
Alternative to 'for i in xrange(len(x))' | 578,677 | 9 | 2009-02-23T18:10:52Z | 578,685 | 22 | 2009-02-23T18:12:36Z | [
"python",
"for-loop",
"anti-patterns"
] | So I see in [another post](http://stackoverflow.com/questions/576988/python-specific-antipatterns) the following "bad" snippet, but the only alternatives I have seen involve patching Python.
```
for i in xrange(len(something)):
workwith = something[i]
# do things with workwith...
```
What do I do to avoid this "antipattern"? | See [Pythonic](http://docs.python.org/glossary.html#term-pythonic)
```
for workwith in something:
# do things with workwith
``` |
Alternative to 'for i in xrange(len(x))' | 578,677 | 9 | 2009-02-23T18:10:52Z | 578,694 | 23 | 2009-02-23T18:14:59Z | [
"python",
"for-loop",
"anti-patterns"
] | So I see in [another post](http://stackoverflow.com/questions/576988/python-specific-antipatterns) the following "bad" snippet, but the only alternatives I have seen involve patching Python.
```
for i in xrange(len(something)):
workwith = something[i]
# do things with workwith...
```
What do I do to avoid this "antipattern"? | If you need to know the index in the loop body:
```
for index, workwith in enumerate(something):
print "element", index, "is", workwith
``` |
Alternative to 'for i in xrange(len(x))' | 578,677 | 9 | 2009-02-23T18:10:52Z | 582,541 | 11 | 2009-02-24T16:54:18Z | [
"python",
"for-loop",
"anti-patterns"
] | So I see in [another post](http://stackoverflow.com/questions/576988/python-specific-antipatterns) the following "bad" snippet, but the only alternatives I have seen involve patching Python.
```
for i in xrange(len(something)):
workwith = something[i]
# do things with workwith...
```
What do I do to avoid this "antipattern"? | As there are [two](http://stackoverflow.com/questions/578677/alternative-to-for-i-in-xrangelenx/578694#578694) [answers](http://stackoverflow.com/questions/578677/alternative-to-for-i-in-xrangelenx/578685#578685) to this question that are perfectly valid (each under its own assumption), and the author of the question didn't tell us whether the index is needed, the complete answer should read:
> If you [do not need index](http://stackoverflow.com/questions/578677/alternative-to-for-i-in-xrangelenx/578685#578685) at all:
>
> ```
> for workwith in something:
> print "element", workwith
> ```
>
> If you [need index](http://stackoverflow.com/questions/578677/alternative-to-for-i-in-xrangelenx/578694#578694):
>
> ```
> for index, workwith in enumerate(something):
> print "element", index, "is", workwith
> ```
If my answer is not appropriate, comment please, and I'll delete it :) |
Need help understanding function passing in Python | 578,812 | 4 | 2009-02-23T18:47:43Z | 578,869 | 13 | 2009-02-23T18:59:04Z | [
"python"
] | I am trying to teach myself Python by working through some problems I came up with, and I need some help understanding how to pass functions.
Let's say I am trying to predict tomorrow's temperature based on today's and yesterday's temperature, and I have written the following function:
```
def predict_temp(temp_today, temp_yest, k1, k2):
return k1*temp_today + k2*temp_yest
```
And I have also written an error function to compare a list of predicted temperatures with actual temperatures and return the mean absolute error:
```
def mean_abs_error(predictions, expected):
return sum([abs(x - y) for (x,y) in zip(predictions,expected)]) / float(len(predictions))
```
Now if I have a list of daily temperatures for some interval in the past, I can see how my prediction function would have done **with specific k1 and k2 parameters** like this:
```
>>> past_temps = [41, 35, 37, 42, 48, 30, 39, 42, 33]
>>> pred_temps = [predict_temp(past_temps[i-1],past_temps[i-2],0.5,0.5) for i in xrange(2,len(past_temps))]
>>> print pred_temps
[38.0, 36.0, 39.5, 45.0, 39.0, 34.5, 40.5]
>>> print mean_abs_error(pred_temps, past_temps[2:])
6.5
```
**But how do I design a function to minimize my parameters k1 and k2 of my predict\_temp function given an error function and my past\_temps data?**
Specifically I would like to write a function minimize(args\*) that takes a prediction function, an error function, some training data, and that uses some search/optimization method (gradient descent for example) to estimate and return the values of k1 and k2 that minimize my error given the data?
I am not asking how to implement the optimization method. Assume I can do that. Rather, I would just like to know **how to pass my predict and error functions** (and my data) to my minimize function, and **how to tell my minimize function that it should optimize the parameters k1 and k2**, so that my minimize function can automatically search a bunch of different settings of k1 and k2, applying my prediction function with those parameters each time to the data and computing error (like I did manually for k1=0.5 and k2=0.5 above) and then return the best results.
I would like to be able to pass these functions so I can easily swap in different prediction and error functions (differing by more than just parameter settings that is). Each prediction function might have a different number of free parameters.
My minimize function should look something like this, but I don't know how to proceed:
```
def minimize(prediction_function, which_args_to_optimize, error_function, data):
# 1: guess initial parameters
# 2: apply prediction function with current parameters to data to compute predictions
# 3: use error function to compute error between predictions and data
# 4: if stopping criterion is met, return parameters
# 5: update parameters
# 6: GOTO 2
```
Edit: It's that easy?? This is no fun. I am going back to Java.
On a more serious note, I think I was also getting hung up on how to use different prediction functions with different numbers of parameters to tune. If I just take all the free parameters in as one tuple I can keep the form of the function the same so it easy to pass and use. | Here is an example of how to pass a function into another function. `apply_func_to` will take a function `f` and a number `num` as parameters and `return f(num)`.
```
def my_func(x):
return x*x
def apply_func_to(f, num):
return f(num)
>>>apply_func_to(my_func, 2)
4
```
If you wanna be clever you can use lambda (anonymous functions too). These allow you to pass functions "on the fly" without having to define them separately
```
>>>apply_func_to(lambda x:x*x, 3)
9
```
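Applied to the question's setup, a minimal `minimize` that receives the prediction and error functions and treats all free parameters as one tuple might look like this (brute-force grid search stands in for a real optimizer such as gradient descent, and the candidate grid is an assumption):

```python
def predict_temp(temp_today, temp_yest, params):
    k1, k2 = params  # all free parameters packed into one tuple
    return k1 * temp_today + k2 * temp_yest

def mean_abs_error(predictions, expected):
    return sum(abs(x - y) for x, y in zip(predictions, expected)) / float(len(predictions))

def minimize(prediction_function, error_function, data, candidate_params):
    """Return the params tuple from candidate_params with the lowest error."""
    best_params, best_error = None, float('inf')
    for params in candidate_params:
        predictions = [prediction_function(data[i - 1], data[i - 2], params)
                       for i in range(2, len(data))]
        error = error_function(predictions, data[2:])
        if error < best_error:
            best_params, best_error = params, error
    return best_params, best_error

past_temps = [41, 35, 37, 42, 48, 30, 39, 42, 33]
grid = [(k1 / 10.0, k2 / 10.0) for k1 in range(11) for k2 in range(11)]
params, err = minimize(predict_temp, mean_abs_error, past_temps, grid)
print(params, err)
```

Because `minimize` only ever calls `prediction_function` and `error_function` through their parameters, you can swap in any prediction function that accepts a single params tuple, regardless of how many free parameters it has.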
Hope this helps. |
Is there a pattern for propagating details of both errors and warnings? | 579,097 | 6 | 2009-02-23T19:58:15Z | 579,117 | 7 | 2009-02-23T20:05:54Z | [
"python",
"design-patterns",
"error-handling",
"warnings"
] | Is there a common pattern for propagating details of both errors and warnings? By *errors* I mean serious problems that should cause the flow of code to stop. By *warnings* I mean issues that merit informing the user of a problem, but are too trivial to stop program flow.
I currently use exceptions to deal with hard errors, and the Python logging framework to record warnings. But now I want to record warnings in a database field of the record currently being processed instead. I guess, I want the warnings to bubble up in the same manner as exceptions, but without stopping program flow.
```
>>> import logging
>>>
>>> def process_item(item):
... if item:
... if item == 'broken':
... logging.warning('soft error, continue with next item')
... else:
... raise Exception('hard error, cannot continue')
...
>>> process_item('good')
>>> process_item(None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 6, in process_item
Exception: hard error, cannot continue
>>> process_item('broken')
WARNING:root:soft error, continue with next item
```
*This example (and my current problem) is in Python, but it should apply to other languages with exceptions too.*
---
Following [David](http://stackoverflow.com/users/56541/david)'s suggestion and a brief play with the example below, Python's `warnings` module is the way to go.
```
import warnings
class MyWarning(Warning):
pass
def causes_warnings():
print 'enter causes_warnings'
warnings.warn("my warning", MyWarning)
print 'leave causes_warnings'
def do_stuff():
print 'enter do_stuff'
causes_warnings()
causes_warnings()
causes_warnings()
print 'leave do_stuff'
with warnings.catch_warnings(record=True) as w:
# Cause all warnings to always be triggered.
warnings.simplefilter("always")
# Trigger a number of warnings.
do_stuff()
# Do something (not very) useful with the warnings generated
print 'Warnings:',','.join([str(warning.message) for warning in w])
```
Output:
```
enter do_stuff
enter causes_warnings
leave causes_warnings
enter causes_warnings
leave causes_warnings
enter causes_warnings
leave causes_warnings
leave do_stuff
Warnings: my warning,my warning,my warning
```
Note: Python 2.6+ is required for `catch_warnings`. | Look into Python's `warnings` module, <http://docs.python.org/library/warnings.html>
I don't think there's much you can say about this problem without specifying the language, as non-terminal error handling varies greatly from one language to another. |
Instantiating a python class in C# | 579,272 | 36 | 2009-02-23T20:47:25Z | 579,609 | 46 | 2009-02-23T22:14:43Z | [
"c#",
".net",
"python",
"ironpython",
"cross-language"
] | I've written a class in python that I want to wrap into a .net assembly via IronPython and instantiate in a C# application. I've migrated the class to IronPython, created a library assembly and referenced it. Now, how do I actually get an instance of that class?
The class looks (partially) like this:
```
class PokerCard:
"A card for playing poker, immutable and unique."
def __init__(self, cardName):
```
The test stub I wrote in C# is:
```
using System;
namespace pokerapp
{
class Program
{
static void Main(string[] args)
{
var card = new PokerCard(); // I also tried new PokerCard("Ah")
Console.WriteLine(card.ToString());
Console.ReadLine();
}
}
}
```
What do I have to do in order to instantiate this class in C#? | IronPython classes are *not* .NET classes. They are instances of IronPython.Runtime.Types.PythonType which is the Python metaclass. This is because Python classes are dynamic and support addition and removal of methods at runtime, things you cannot do with .NET classes.
To use Python classes in C# you will need to use the ObjectOperations class. This class allows you to operate on python types and instances in the semantics of the language itself. e.g. it uses the magic methods when appropriate, auto-promotes integers to longs etc. You can find out more about ObjectOperations by looking at the source or using reflector.
Here is an example. Calculator.py contains a simple class:
```
class Calculator(object):
def add(self, a, b):
return a + b
```
You can use it from your pre .NET 4.0 C# code like this:
```
ScriptEngine engine = Python.CreateEngine();
ScriptSource source = engine.CreateScriptSourceFromFile("Calculator.py");
ScriptScope scope = engine.CreateScope();
ObjectOperations op = engine.Operations;
source.Execute(scope); // class object created
object klaz = scope.GetVariable("Calculator"); // get the class object
object instance = op.Call(klaz); // create the instance
object method = op.GetMember(instance, "add"); // get a method
int result = (int)op.Call(method, 4, 5); // call method and get result (9)
```
You will need to reference the assemblies IronPython.dll, Microsoft.Scripting and Microsoft.Scripting.Core.
C# 4 made this much easier with the *new* dynamic type.
```
ScriptEngine engine = Python.CreateEngine();
ScriptSource source = engine.CreateScriptSourceFromFile("Calculator.py");
ScriptScope scope = engine.CreateScope();
source.Execute(scope);
dynamic Calculator = scope.GetVariable("Calculator");
dynamic calc = Calculator();
int result = calc.add(4, 5);
```
If you are using Visual Studio 2010 or later with NuGet support simply execute this to download and reference the appropriate libraries.
```
Install-Package IronPython
``` |
Instantiating a python class in C# | 579,272 | 36 | 2009-02-23T20:47:25Z | 2,722,635 | 30 | 2010-04-27T15:39:13Z | [
"c#",
".net",
"python",
"ironpython",
"cross-language"
] | I've written a class in python that I want to wrap into a .net assembly via IronPython and instantiate in a C# application. I've migrated the class to IronPython, created a library assembly and referenced it. Now, how do I actually get an instance of that class?
The class looks (partially) like this:
```
class PokerCard:
"A card for playing poker, immutable and unique."
def __init__(self, cardName):
```
The test stub I wrote in C# is:
```
using System;
namespace pokerapp
{
class Program
{
static void Main(string[] args)
{
var card = new PokerCard(); // I also tried new PokerCard("Ah")
Console.WriteLine(card.ToString());
Console.ReadLine();
}
}
}
```
What do I have to do in order to instantiate this class in C#? | Now that .Net 4.0 is released and has the dynamic type, this example should be updated. Using the same python file as in m-sharp's original answer:
```
class Calculator(object):
def add(self, a, b):
return a + b
```
Here is how you would call it using .Net 4.0:
```
string scriptPath = "Calculator.py";
ScriptEngine engine = Python.CreateEngine();
engine.SetSearchPaths(new string[] {"Path to your lib's here. EG:", "C:\\Program Files (x86)\\IronPython 2.7.1\\Lib"});
ScriptSource source = engine.CreateScriptSourceFromFile(scriptPath);
ScriptScope scope = engine.CreateScope();
ObjectOperations op = engine.Operations;
source.Execute(scope);
dynamic Calculator = scope.GetVariable("Calculator");
dynamic calc = Calculator();
return calc.add(x,y);
```
Again, you need to add references to IronPython.dll and Microsoft.Scripting.
As you can see, the initial setting up and creating of the source file is the same.
But once the source is successfully executed, working with the Python functions is far easier thanks to the new "dynamic" keyword. |
formatting long numbers as strings in python | 579,310 | 10 | 2009-02-23T20:55:58Z | 579,376 | 20 | 2009-02-23T21:11:36Z | [
"python",
"formatting",
"string",
"integer"
] | What is an easy way in Python to format integers into strings representing thousands with K, and millions with M, and leaving just couple digits after comma?
I'd like to show 7436313 as 7.44M, and 2345 as 2.34K.
Is there some % string formatting operator available for that? Or can that be done only by actually dividing by 1000 in a loop and constructing the result string step by step? | I don't think there's a built-in function that does that. You'll have to roll your own, e.g.:
```
def human_format(num):
magnitude = 0
while abs(num) >= 1000:
magnitude += 1
num /= 1000.0
# add more suffixes if you need them
return '%.2f%s' % (num, ['', 'K', 'M', 'G', 'T', 'P'][magnitude])
print('the answer is %s' % human_format(7436313)) # prints 'the answer is 7.44M'
``` |
Using only the DB part of Django | 579,511 | 32 | 2009-02-23T21:45:54Z | 579,537 | 8 | 2009-02-23T21:51:51Z | [
"python",
"django",
"orm"
] | Does somebody know how "modular" is Django? Can I use just the ORM part, to get classes that map to DB tables and know how to read/write from these tables?
If not, what would you recommend as "the Python equivalent of Hibernate"? | You can certainly use various parts of Django in a stand-alone fashion. It is, after all, just a collection of Python modules, which you can import into any other code where you would like to use them.
I'd also recommend looking at [SQL Alchemy](http://www.sqlalchemy.org/) if you are only after the ORM side of things. |
Using only the DB part of Django | 579,511 | 32 | 2009-02-23T21:45:54Z | 579,567 | 9 | 2009-02-23T22:00:22Z | [
"python",
"django",
"orm"
] | Does somebody know how "modular" is Django? Can I use just the ORM part, to get classes that map to DB tables and know how to read/write from these tables?
If not, what would you recommend as "the Python equivalent of Hibernate"? | The short answer is: no, you can't use the Django ORM separately from Django.
The long answer is: yes, you can, if you are willing to load large parts of Django along with it. For example, the database connection that Django uses is opened when a request to Django occurs. This happens when a signal is sent, so you could conceivably send this signal yourself to open the connection without using the request mechanism. Also, you'd need to set up the various applications and settings for the Django project.
Ultimately, it probably isn't worth your time. [SQL Alchemy](http://www.sqlalchemy.org/) is a relatively well known Python ORM, which is actually more powerful than Django's anyway since it supports multiple database connections and connection pooling and other good stuff.
---
**Edit:** in response to James' criticism elsewhere, I will clarify what I described in my original post. While it is gratifying that a major Django contributor has called me out, I still think I'm right :)
First off, consider what needs to be done to use Django's ORM separate from any other part. You use one of the [methods](http://www.b-list.org/weblog/2007/sep/22/standalone-django-scripts/) described by James for doing a basic setup of Django. But a number of these methods don't allow for using the `syncdb` command, which is required to create the tables for your models. A settings.py file is needed for this, with variables not just for `DATABASE_*`, but also `INSTALLED_APPS` with the correct paths to all models.py files.
It is possible to roll your own solution to use `syncdb` without a settings.py, but it requires some advanced knowledge of Django. Of course, you don't need to use `syncdb`; the tables can be created independently of the models. But it is an aspect of the ORM that is not available unless you put some effort into setup.
Secondly, consider how you would create your queries to the DB with the standard `Model.objects.filter()` call. If this is done as part of a view, it's very simple: construct the `QuerySet` and view the instances. For example:
```
tag_query = Tag.objects.filter( name='stackoverflow' )
if( tag_query.count() > 0 ):
tag = tag_query[0]
tag.name = 'stackoverflowed'
tag.save()
```
Nice, simple and clean. Now, without the crutch of Django's request/response chaining system, you need to initialise the database connection, make the query, then close the connection. So the above example becomes:
```
from django.db import reset_queries, close_connection, _rollback_on_exception
reset_queries()
try:
tag_query = Tag.objects.filter( name='stackoverflow' )
if( tag_query.count() > 0 ):
tag = tag_query[0]
tag.name = 'stackoverflowed'
tag.save()
except:
_rollback_on_exception()
finally:
close_connection()
```
The database connection management can also be done via Django signals. All of the above is defined in [`django/db/__init__.py`](http://code.djangoproject.com/browser/django/trunk/django/db/%5F%5Finit%5F%5F.py). Other ORMs also have this sort of connection management, but you don't need to dig into their source to find out how to do it. SQL Alchemy's connection management system is documented in the [tutorials](http://www.sqlalchemy.org/docs/05/ormtutorial.html) and elsewhere.
Finally, you need to keep in mind that the database connection object is local to the current thread at all times, which may or may not limit you depending on your requirements. If your application is not stateless, like Django, but persistent, you may hit threading issues.
In conclusion, it is a matter of opinion. In my opinion, both the limitations of, and the setup required for, Django's ORM separate from the framework is too much of a liability. There are perfectly viable dedicated ORM solutions available elsewhere that are designed for library usage. Django's is not.
Don't think that all of the above shows I dislike Django and all its workings; I really do like Django a lot! But I'm realistic about what its capabilities are, and being an ORM library is not one of them.
P.S. Multiple database connection support is being [worked](http://code.djangoproject.com/ticket/1142) on. But it's not there now. |
Using only the DB part of Django | 579,511 | 32 | 2009-02-23T21:45:54Z | 584,208 | 72 | 2009-02-25T00:01:21Z | [
"python",
"django",
"orm"
] | Does somebody know how "modular" is Django? Can I use just the ORM part, to get classes that map to DB tables and know how to read/write from these tables?
If not, what would you recommend as "the Python equivalent of Hibernate"? | If you like Django's ORM, it's perfectly simple to use it "standalone"; I've [written up several techniques for using parts of Django outside of a web context](http://www.b-list.org/weblog/2007/sep/22/standalone-django-scripts/), and you're free to use any of them (or roll your own).
Shane above seems to be a bit misinformed on this and a few other points -- for example, Django *can* do multiple different databases, it just doesn't *default* to that (you need to do a custom manager on the models which use something other than the "main" DB, something that's not too hard and there are recipes floating around for it). It's true that Django itself doesn't do connection management/connection pooling, but personally I've always used external tools for that anyway (e.g., `pgpool`, which rocks harder than anything built in to an ORM ever could).
I'd suggest spending some time reading up and possibly trying a few likely Google searches (e.g., the post I linked you to comes up as the top result for "standalone Django script") to get a feel for what will actually best suit your needs and tastes -- it may be Django's ORM isn't right for you, and you shouldn't use it if it isn't, but unfortunately there's a lot of misinformation out there which muddies the waters.
**Editing to respond to Shane:**
Again, you seem to be misinformed: SQLAlchemy needs to be configured (i.e., told what DB to use, how to connect, etc.) before you can run queries with it, so how is the fact that Django needs similar configuration (accomplished via your choice of methods -- you **do not** need to have a full Django settings file) any disadvantage?
As for multiple DB support, you seem to be confused: the support is there at a low level. The query object -- not `QuerySet`, but the underlying `Query` object it will execute -- knows what DB it's connecting to, and accepts a DB connection as one of its initialization arguments. Telling one model to use one DB and another model to use another is as simple as setting up one method on a manager which passes the right connection info down into the `Query`. True, there's no higher-level API for this, but that's not the same as "no support" and not the same as "requires custom code" (unless you'd argue that configuring multiple DBs explicitly in SQLAlchemy, required if you want multiple DBs, is also "custom code").
As for whether you end up *indirectly* using things that aren't in `django.db`, well, so what? The fact that `django.db` imports bits of, say, `django.utils` because there are data structures and other bits of code which are useful for more than just an ORM is fine as far as I'm personally concerned; one might as well complain if something has external dependencies or makes use of standard Python libraries instead of being 100% self-contained. |
How do I copy a string to the clipboard on Windows using Python? | 579,687 | 123 | 2009-02-23T22:38:02Z | 579,715 | 10 | 2009-02-23T22:43:08Z | [
"python",
"clipboard"
] | I'm kind of new to Python and I'm trying to make a basic Windows application that builds a string out of user input then adds it to the clipboard. How do I copy a string to the clipboard using Python? | Looks like you need to add win32clipboard to your site-packages. It's part of the [pywin32 package](http://sourceforge.net/projects/pywin32/) |
How do I copy a string to the clipboard on Windows using Python? | 579,687 | 123 | 2009-02-23T22:38:02Z | 3,429,034 | 22 | 2010-08-07T03:33:26Z | [
"python",
"clipboard"
] | I'm kind of new to Python and I'm trying to make a basic Windows application that builds a string out of user input then adds it to the clipboard. How do I copy a string to the clipboard using Python? | You can also use ctypes to tap into the windows API and avoid the massive pywin32 package. This is what I use, (excuse the poor style, but the idea is there.)
```
import ctypes
#Get required functions, strcpy..
strcpy = ctypes.cdll.msvcrt.strcpy
ocb = ctypes.windll.user32.OpenClipboard #Basic Clipboard functions
ecb = ctypes.windll.user32.EmptyClipboard
gcd = ctypes.windll.user32.GetClipboardData
scd = ctypes.windll.user32.SetClipboardData
ccb = ctypes.windll.user32.CloseClipboard
ga = ctypes.windll.kernel32.GlobalAlloc # Global Memory allocation
gl = ctypes.windll.kernel32.GlobalLock # Global Memory Locking
gul = ctypes.windll.kernel32.GlobalUnlock
GMEM_DDESHARE = 0x2000
def Get( ):
ocb(None) # Open Clip, Default task
pcontents = gcd(1) # 1 means CF_TEXT.. too lazy to get the token thingy ...
data = ctypes.c_char_p(pcontents).value
#gul(pcontents) ?
ccb()
return data
def Paste( data ):
ocb(None) # Open Clip, Default task
ecb()
hCd = ga( GMEM_DDESHARE, len( bytes(data,"ascii") )+1 )
pchData = gl(hCd)
strcpy(ctypes.c_char_p(pchData),bytes(data,"ascii"))
gul(hCd)
scd(1,hCd)
ccb()
``` |
How do I copy a string to the clipboard on Windows using Python? | 579,687 | 123 | 2009-02-23T22:38:02Z | 4,203,897 | 196 | 2010-11-17T11:31:06Z | [
"python",
"clipboard"
] | I'm kind of new to Python and I'm trying to make a basic Windows application that builds a string out of user input then adds it to the clipboard. How do I copy a string to the clipboard using Python? | Actually, `pywin32` and `ctypes` seem to be an overkill for this simple task. `Tkinter` is a cross-platform GUI framework, which ships with Python by default and has clipboard accessing methods along with other cool stuff.
If all you need is to put some text to system clipboard, this will do it:
```
from Tkinter import Tk
r = Tk()
r.withdraw()
r.clipboard_clear()
r.clipboard_append('i can has clipboardz?')
r.destroy()
```
And that's all, no need to mess around with platform-specific third-party libraries. |
How do I copy a string to the clipboard on Windows using Python? | 579,687 | 123 | 2009-02-23T22:38:02Z | 9,409,898 | 53 | 2012-02-23T09:06:00Z | [
"python",
"clipboard"
] | I'm kind of new to Python and I'm trying to make a basic Windows application that builds a string out of user input then adds it to the clipboard. How do I copy a string to the clipboard using Python? | I didn't have a solution just a work around
Windows vista onwards has an inbuilt command called clip that takes the output of a command from command line and puts it into the clipboard. E.g. ipconfig | clip
So i made a function with the os module which takes the string and adds it to the clipboard using the inbuilt windows solution.
```
import os
def addToClipBoard(text):
command = 'echo ' + text.strip() + '| clip'
os.system(command)
#example
addToClipBoard('penny lane')
#Penny Lane is now in your ears,eyes and clipboard
```
If you are using Windows XP it will work too; just follow the steps on this site first.
<http://www.techrepublic.com/blog/window-on-windows/copy-and-paste-from-windows-xp-pros-command-prompt-straight-to-the-clipboard/521> |
How do I copy a string to the clipboard on Windows using Python? | 579,687 | 123 | 2009-02-23T22:38:02Z | 24,523,659 | 17 | 2014-07-02T05:43:27Z | [
"python",
"clipboard"
] | I'm kind of new to Python and I'm trying to make a basic Windows application that builds a string out of user input then adds it to the clipboard. How do I copy a string to the clipboard using Python? | You can use [**pyperclip**](http://coffeeghost.net/2010/10/09/pyperclip-a-cross-platform-clipboard-module-for-python/) - cross-platform clipboard module. Or [**Xerox**](https://github.com/kennethreitz/xerox) - similar module, except requires the win32 Python module to work on Windows. |
Python ORM that auto-generates/updates tables and uses SQLite? | 579,770 | 9 | 2009-02-23T22:59:10Z | 579,787 | 16 | 2009-02-23T23:05:02Z | [
"python",
"sqlite",
"orm",
"auto-generate"
] | I am doing some prototyping for a new desktop app i am writing in Python, and i want to use SQLite and an ORM to store data.
My question is, are there any ORM libraries that support auto-generating/updating the database schema and work with SQLite? | [SQLAlchemy](http://www.sqlalchemy.org/) is a great choice in the Python ORM space that supports SQLite. |
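For instance, here is a minimal sketch of a declarative SQLAlchemy model whose table is generated automatically in SQLite (the model name is illustrative; note that `create_all` creates missing tables but does not migrate existing ones, so "updating" an existing schema needs a separate tool):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Note(Base):  # illustrative model, not from the question
    __tablename__ = 'notes'
    id = Column(Integer, primary_key=True)
    text = Column(String)

engine = create_engine('sqlite:///:memory:')  # or sqlite:///app.db for a file
Base.metadata.create_all(engine)  # emits CREATE TABLE for tables that don't exist

with Session(engine) as session:
    session.add(Note(text='hello'))
    session.commit()
```

The class-to-table mapping and the `CREATE TABLE` statements all come from the model definition, which is the auto-generation behaviour the question asks about.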
What's the Pythonic way to combine two sequences into a dictionary? | 579,856 | 7 | 2009-02-23T23:33:26Z | 579,862 | 42 | 2009-02-23T23:35:34Z | [
"python"
] | Is there a more concise way of doing this in Python?:
```
def toDict(keys, values):
d = dict()
for k,v in zip(keys, values):
d[k] = v
return d
``` | Yes:
```
dict(zip(keys,values))
``` |
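For example (note that `zip` truncates to the shorter sequence, so a length mismatch is silently ignored rather than raising an error):

```python
keys = ['id', 'name', 'score']
values = [1, 'alice', 9.5]

d = dict(zip(keys, values))
print(d)  # {'id': 1, 'name': 'alice', 'score': 9.5}

# zip stops at the shorter input, so extra items are dropped:
print(dict(zip(['a', 'b', 'c'], [1, 2])))  # {'a': 1, 'b': 2}
```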
Python windows File Version attribute | 580,924 | 16 | 2009-02-24T08:26:26Z | 1,237,635 | 16 | 2009-08-06T08:34:19Z | [
"python",
"windows",
"file-attributes"
] | Last time I asked a similar question but that was about svn related versioning info. Now I am wondering how to query windows "File version" attribute about eg. a dll. I payed attention to wmi and win32file modules as well without success. | Better to add a try/except in case the file has no version number attribute.
filever.py
```
from win32api import GetFileVersionInfo, LOWORD, HIWORD
def get_version_number (filename):
try:
info = GetFileVersionInfo (filename, "\\")
ms = info['FileVersionMS']
ls = info['FileVersionLS']
return HIWORD (ms), LOWORD (ms), HIWORD (ls), LOWORD (ls)
except:
return 0,0,0,0
if __name__ == '__main__':
import os
filename = os.environ["COMSPEC"]
print ".".join ([str (i) for i in get_version_number (filename)])
```
yourscript.py:
```
import os,filever
myPath="C:\\path\\to\\check"
for root, dirs, files in os.walk(myPath):
for file in files:
file = file.lower() # Convert .EXE to .exe so next line works
if (file.count('.exe') or file.count('.dll')): # Check only exe or dll files
fullPathToFile=os.path.join(root,file)
major,minor,subminor,revision=filever.get_version_number(fullPathToFile)
print "Filename: %s \t Version: %s.%s.%s.%s" % (file,major,minor,subminor,revision)
```
Cheers! |
Python windows File Version attribute | 580,924 | 16 | 2009-02-24T08:26:26Z | 2,310,098 | 9 | 2010-02-22T10:06:42Z | [
"python",
"windows",
"file-attributes"
] | Last time I asked a similar question but that was about svn related versioning info. Now I am wondering how to query windows "File version" attribute about eg. a dll. I payed attention to wmi and win32file modules as well without success. | You can use the `pyWin32` module from <http://sourceforge.net/projects/pywin32/>:
```
from win32com.client import Dispatch
ver_parser = Dispatch('Scripting.FileSystemObject')
info = ver_parser.GetFileVersion(path)
if info == 'No Version Information Available':
info = None
``` |
Python windows File Version attribute | 580,924 | 16 | 2009-02-24T08:26:26Z | 7,993,095 | 13 | 2011-11-03T10:08:02Z | [
"python",
"windows",
"file-attributes"
] | Last time I asked a similar question but that was about svn related versioning info. Now I am wondering how to query windows "File version" attribute about eg. a dll. I payed attention to wmi and win32file modules as well without success. | Here is a function which reads all file attributes as a dictionary:
```
import win32api
#==============================================================================
def getFileProperties(fname):
#==============================================================================
"""
Read all properties of the given file return them as a dictionary.
"""
propNames = ('Comments', 'InternalName', 'ProductName',
'CompanyName', 'LegalCopyright', 'ProductVersion',
'FileDescription', 'LegalTrademarks', 'PrivateBuild',
'FileVersion', 'OriginalFilename', 'SpecialBuild')
props = {'FixedFileInfo': None, 'StringFileInfo': None, 'FileVersion': None}
try:
# backslash as parm returns dictionary of numeric info corresponding to VS_FIXEDFILEINFO struc
fixedInfo = win32api.GetFileVersionInfo(fname, '\\')
props['FixedFileInfo'] = fixedInfo
props['FileVersion'] = "%d.%d.%d.%d" % (fixedInfo['FileVersionMS'] / 65536,
fixedInfo['FileVersionMS'] % 65536, fixedInfo['FileVersionLS'] / 65536,
fixedInfo['FileVersionLS'] % 65536)
# \VarFileInfo\Translation returns list of available (language, codepage)
# pairs that can be used to retrieve string info. We are using only the first pair.
lang, codepage = win32api.GetFileVersionInfo(fname, '\\VarFileInfo\\Translation')[0]
# any other must be of the form \StringfileInfo\%04X%04X\parm_name, middle
# two are language/codepage pair returned from above
strInfo = {}
for propName in propNames:
strInfoPath = u'\\StringFileInfo\\%04X%04X\\%s' % (lang, codepage, propName)
## print str_info
strInfo[propName] = win32api.GetFileVersionInfo(fname, strInfoPath)
props['StringFileInfo'] = strInfo
except:
pass
return props
``` |
regex '|' operator vs separate runs for each sub-expression | 580,993 | 5 | 2009-02-24T08:55:29Z | 581,011 | 7 | 2009-02-24T09:03:55Z | [
"python",
"regex",
"performance"
] | I've got a fairly large string (~700k) against which I need to run 10 regexes and count all the matches of any of the regexes. My quick and dirty impl was to do something like re.search('(expr1)|(expr2)|...'), but I was wondering if we'd see any performance gains by matching in a loop instead:
In other words, I want to compare the performance of:
```
def CountMatchesInBigstring(bigstring, my_regexes):
"""Counts how many of the expressions in my_regexes match bigstring."""
count = 0
combined_expr = '|'.join(['(%s)' % r for r in my_regexes])
matches = re.search(combined_expr, bigstring)
if matches:
count += NumMatches(matches)
return count
```
vs
```
def CountMatchesInBigstring(bigstring, my_regexes):
"""Counts how many of the expressions in my_regexes match bigstring."""
count = 0
for reg in my_regexes:
matches = re.search(reg, bigstring)
if matches:
count += NumMatches(matches)
return count
```
I'll stop being lazy and run some tests tomorrow (and post the results), but I wondered whether the answer will jump out to someone who actually understands how regexes work :) | The two things will give slightly different results, unless it is guaranteed that a match will match one and only one regex. Otherwise if something matches 2 it will be counted twice.
In theory your solution ought to be quicker (if the expressions are mutually exclusive) because the regex compiler ought to be able to make a more efficient search state machine, so only one pass is needed. I would expect the difference to be tiny though, unless the expressions are very similar.
Also, if it were a huge string (bigger than 700k) there might be gains from doing one pass, and so a factor of n fewer memory swaps would be needed (to disk or cpu cache).
My bet is that in your tests it isn't really noticeable, though. I'm interested in the actual result - please do post the results. |
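As a starting point for such tests, here is a rough `timeit` harness — simplified to presence checks rather than full match counting, with the patterns and test string invented for illustration. Precompiling with `re.compile` keeps compilation cost out of the comparison:

```python
import re
import timeit

bigstring = ('spam eggs ham ' * 50000) + 'needle42'  # roughly 700k characters
patterns = [r'needle\d+', r'foo\d+', r'bar\d+']

combined = re.compile('|'.join('(%s)' % p for p in patterns))
separate = [re.compile(p) for p in patterns]

def search_combined():
    return combined.search(bigstring) is not None

def search_separately():
    return any(p.search(bigstring) for p in separate)

for fn in (search_combined, search_separately):
    print(fn.__name__, timeit.timeit(fn, number=20))
```

Both functions must agree on what they find before their timings mean anything, so check that first.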
Python web programming | 581,038 | 12 | 2009-02-24T09:15:24Z | 581,356 | 8 | 2009-02-24T11:05:43Z | [
"python",
"cherrypy"
] | Good morning.
As the title indicates, I've got some questions about using python for web development.
* What is the best setup for a development environment, more specifically, what webserver to use, how to bind python with it. Preferably, I'd like it to be implementable in both, \*nix and win environment.
My major concern when I last tried apache + mod\_python + CherryPy was having to reload webserver to see the changes. Is it considered normal? For some reason cherrypy's autoreload didn't work at all.
* What is the best setup to deploy a working Python app to production and why? I'm now using lighttpd for my PHP web apps, but how would it do for python compared to nginx for example?
* Is it worth diving straight with a framework or to roll something simple of my own? I see that Django has got quite a lot of fans, but I'm thinking it would be overkill for my needs, so I've started looking into CherryPy.
* How exactly are Python apps served if I have to reload httpd to see the changes? Something like a permanent process spawning child processes, with all the major file includes happening on server start and then just lazy loading needed resources?
* Python supports multithreading, do I need to look into using that for a benefit when developing web apps? What would be that benefit and in what situations?
Big thanks! | **What is the best setup for a development environment?**
Doesn't much matter. We use Django, which runs in Windows and Unix nicely. For production, we use Apache in Red Hat.
**Is having to reload webserver to see the changes considered normal?**
Yes. Not clear why you'd want anything different. Web application software shouldn't be dynamic. Content yes. Software no.
In Django, we *develop* without using a web server of any kind on our desktop. The Django "runserver" command reloads the application under most circumstances. For development, this works great. The times when it won't reload are when we've damaged things so badly that the app doesn't load properly.
**What is the best setup to deploy a working Python app to production and why?**
"Best" is undefined in this context. Therefore, please provide some qualification for "nest" (e.g., "fastest", "cheapest", "bluest")
**Is it worth diving straight with a framework or to roll something simple of my own?**
Don't waste time rolling your own. We use Django because of the built-in admin page that we don't have to write or maintain. Saves mountains of work.
**How exactly are Python apps served if I have to reload httpd to see the changes?**
Two methods:
* Daemon - mod\_wsgi or mod\_fastcgi have a Python daemon process to which they connect. Change your software. Restart the daemon.
* Embedded - mod\_wsgi or mod\_python have an embedded mode in which the Python interpreter is inside the mod, inside Apache. You have to restart httpd to restart that embedded interpreter.
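For the daemon case, the Apache configuration looks something like the following hedged sketch — the process group name, paths, and process/thread counts are all placeholders:

```apache
WSGIDaemonProcess mysite processes=2 threads=15
WSGIProcessGroup mysite
WSGIScriptAlias / /var/www/mysite/django.wsgi
```

In daemon mode, touching the WSGI script file restarts just the daemon process, so the application reloads without a full httpd restart.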
**Do I need to look into using multi-threaded?**
Yes and no. Yes you do need to be aware of this. No, you don't need to do very much. Apache and mod\_wsgi and Django should handle this for you. |
Can someone explain why scipy.integrate.quad gives different results for equally long ranges while integrating sin(X)? | 581,186 | 3 | 2009-02-24T10:10:29Z | 581,250 | 7 | 2009-02-24T10:32:18Z | [
"python",
"integration",
"scipy",
"numerical-methods"
] | I am trying to numerically integrate an arbitrary (known when I code) function in my program
using numerical integration methods. I am using Python 2.5.2 along with SciPy's numerical integration package. In order to get a feel for it, I decided to try integrating sin(x) and observed this behavior-
```
>>> from math import pi
>>> from scipy.integrate import quad
>>> from math import sin
>>> def integrand(x):
... return sin(x)
...
>>> quad(integrand, -pi, pi)
(0.0, 4.3998892617846002e-14)
>>> quad(integrand, 0, 2*pi)
(2.2579473462709165e-16, 4.3998892617846002e-14)
```
I find this behavior odd because -
1. In ordinary integration, integrating over the full cycle gives zero.
2. In numerical integration, this (1) isn't necessarily the case, because you may just be
approximating the total area under the curve.
In any case, either assuming 1 is True or assuming 2 is True, I find the behavior to be inconsistent. Either both integrations (-pi to pi and 0 to 2\*pi) should return 0.0 (first value in the tuple is the result and the second is the error) or return 2.257...
Can someone please explain why this is happening? Is this really an inconsistency? Can someone also tell me if I am missing something really basic about numerical methods?
In any case, in my final application, I plan to use the above method to find the arc length of a function. If someone has experience in this area, please advise me on the best policy for doing this in Python.
**Edit**
**Note**
I already have the first differential values at all points in the range stored in an array.
Current error is tolerable.
**End note**
I have read Wikipedia on this. As Dimitry has pointed out, I will be integrating sqrt(1+diff(f(x), x)^2) to get the Arc Length. What I wanted to ask was - is there a better approximation/ Best practice(?) / faster way to do this. If more context is needed, I'll post it separately/ post context here, as you wish. | The `quad` function is a function from an old Fortran library. It works by judging, from the flatness and slope of the function it is integrating, how to adapt the step size it uses for numerical integration in order to maximize efficiency. What this means is that you may get slightly different answers from one region to the next even if they're analytically the same.
Without a doubt both integrations should return zero. Returning something on the order of 10^-16 is pretty close to zero! The slight differences are due to the way `quad` is rolling over `sin` and changing its step sizes. For your planned task, `quad` will be all you need.
EDIT:
For what you're doing I think `quad` is fine. It is fast and pretty accurate. My final statement is use it with confidence unless you find something that really has gone quite awry. If it doesn't return a nonsensical answer then it is probably working just fine. No worries. |
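For the arc-length application, the usual formula L = ∫ sqrt(1 + f'(x)^2) dx drops straight into `quad`. A sketch for f(x) = sin(x) over one period (the exact value is an elliptic integral, roughly 7.6404):

```python
from math import cos, pi, sqrt

from scipy.integrate import quad

def arc_length(fprime, a, b):
    """Arc length of f over [a, b], given a callable for its derivative f'."""
    value, abserr = quad(lambda x: sqrt(1.0 + fprime(x) ** 2), a, b)
    return value

L = arc_length(cos, 0.0, 2.0 * pi)  # f = sin, so f' = cos
print(L)
```

Since the question mentions derivative values precomputed in an array, note that `quad` wants a callable; for sampled data, a fixed-sample rule such as `scipy.integrate.simpson` (named `simps` in older releases) over the sqrt(1 + f'^2) samples may be a better fit.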
Similar to Pass in Python for C# | 581,343 | 12 | 2009-02-24T11:00:30Z | 581,348 | 16 | 2009-02-24T11:03:27Z | [
"c#",
"python"
] | In python we can ..
```
a = 5
if a == 5:
pass #Do Nothing
else:
print "Hello World"
```
I wonder if it a similar way to do this in C# | Use empty braces.
```
int a = 5;
if (a == 5) {}
else {
Console.Write("Hello World");
}
``` |
Similar to Pass in Python for C# | 581,343 | 12 | 2009-02-24T11:00:30Z | 581,357 | 11 | 2009-02-24T11:05:56Z | [
"c#",
"python"
] | In python we can ..
```
a = 5
if a == 5:
pass #Do Nothing
else:
print "Hello World"
```
I wonder if it a similar way to do this in C# | Why not just say:
```
if (a != 5)
{
Console.Write("Hello World");
}
``` |
In Python, how do I make a temp file that persists until the next run? | 581,851 | 3 | 2009-02-24T14:03:26Z | 581,902 | 16 | 2009-02-24T14:18:50Z | [
"python",
"temporary-files"
] | I need to create a folder that I use only once, but need to have it exist until the next run. It seems like I should be using the tmp\_file module in the standard library, but I'm not sure how to get the behavior that I want.
Currently, I'm doing the following to create the directory:
```
randName = "temp" + str(random.randint(1000, 9999))
os.makedirs(randName)
```
And when I want to delete the directory, I just look for a directory with "temp" in it.
This seems like a dirty hack, but I'm not sure of a better way at the moment.
Incidentally, the reason that I need the folder around is that I start a process that uses the folder with the following:
```
subprocess.Popen([command], shell=True).pid
```
and then quit my script to let the other process finish the work. | Creating the folder with a 4-digit random number is insecure, and you also need to worry about collisions with other instances of your program.
A much better way is to create the folder using [`tempfile.mkdtemp`](http://docs.python.org/library/tempfile.html), which does exactly what you want (i.e. the folder is not deleted when your script exits). You would then pass the folder name to the second Popen'ed script as an argument, and it would be responsible for deleting it. |
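A minimal sketch of that flow, where an inline `-c` script stands in for the second, Popen'ed script (which in the real application would be responsible for deleting the folder when done):

```python
import subprocess
import sys
import tempfile

# Unlike temporary files, a mkdtemp directory is never cleaned up automatically,
# so it survives after this script exits.
workdir = tempfile.mkdtemp(prefix='myapp-')

# Pass the folder name to the worker as an argument; in the real application
# this command line would name your second script instead of an inline one.
proc = subprocess.Popen(
    [sys.executable, '-c', 'import sys; print("got folder:", sys.argv[1])', workdir]
)
proc.wait()
```

The `prefix` argument keeps the directory recognisable while `mkdtemp` still guarantees a unique, securely created name.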
In Python, how do I make a temp file that persists until the next run? | 581,851 | 3 | 2009-02-24T14:03:26Z | 582,243 | 8 | 2009-02-24T15:43:23Z | [
"python",
"temporary-files"
] | I need to create a folder that I use only once, but need to have it exist until the next run. It seems like I should be using the tmp\_file module in the standard library, but I'm not sure how to get the behavior that I want.
Currently, I'm doing the following to create the directory:
```
randName = "temp" + str(random.randint(1000, 9999))
os.makedirs(randName)
```
And when I want to delete the directory, I just look for a directory with "temp" in it.
This seems like a dirty hack, but I'm not sure of a better way at the moment.
Incidentally, the reason that I need the folder around is that I start a process that uses the folder with the following:
```
subprocess.Popen([command], shell=True).pid
```
and then quit my script to let the other process finish the work. | What you've suggested is dangerous. You may have race conditions if anyone else is trying to create those directories -- including other instances of your application. Also, deleting anything containing "temp" may result in deleting more than you intended. As others have mentioned, [tempfile.mkdtemp](http://docs.python.org/library/tempfile.html#tempfile.mkdtemp) is probably the safest way to go. Here is an example of what you've described, including launching a subprocess to use the new directory.
```
import tempfile
import shutil
import subprocess
d = tempfile.mkdtemp(prefix='tmp')
try:
subprocess.check_call(['/bin/echo', 'Directory:', d])
finally:
shutil.rmtree(d)
``` |
How can you profile a Python script? | 582,336 | 702 | 2009-02-24T16:01:26Z | 582,337 | 755 | 2009-02-24T16:01:40Z | [
"python",
"performance",
"profiling",
"time-complexity"
] | Project Euler and other coding contests often have a maximum time to run or people boast of how fast their particular solution runs. With python, sometimes the approaches are somewhat kludgey - i.e., adding timing code to `__main__`.
What is a good way to profile how long a python program takes to run? | Python includes a profiler called cProfile. It not only gives the total running time, but also times each function separately, and tells you how many times each function was called, making it easy to determine where you should make optimizations.
You can call it from within your code, or from the interpreter, like this:
```
import cProfile
cProfile.run('foo()')
```
Even more usefully, you can invoke the cProfile when running a script:
```
python -m cProfile myscript.py
```
To make it even easier, I made a little batch file called 'profile.bat':
```
python -m cProfile %1
```
So all I have to do is run:
```
profile euler048.py
```
And I get this:
```
1007 function calls in 0.061 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.061 0.061 <string>:1(<module>)
1000 0.051 0.000 0.051 0.000 euler048.py:2(<lambda>)
1 0.005 0.005 0.061 0.061 euler048.py:2(<module>)
1 0.000 0.000 0.061 0.061 {execfile}
1 0.002 0.002 0.053 0.053 {map}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
1 0.000 0.000 0.000 0.000 {range}
1 0.003 0.003 0.003 0.003 {sum}
```
EDIT: Updated link to a good video resource from PyCon 2013: <http://lanyrd.com/2013/pycon/scdywg/> |
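If you'd rather sort and inspect the results programmatically instead of reading the default dump, the collected stats can be loaded with the stdlib `pstats` module (a minimal sketch; `work()` is just a stand-in for your own code):

```python
import cProfile
import io
import pstats

def work():
    # Stand-in for the code you actually want to profile
    return sum(i * i for i in range(10000))

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# Load the collected stats, sort by cumulative time, and render the top 5 rows
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats('cumulative').print_stats(5)
report = stream.getvalue()
print(report)
```

The same `pstats.Stats` call also accepts a filename, so it works just as well on a dump produced with `python -m cProfile -o output.file myscript.py`.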
How can you profile a Python script? | 582,336 | 702 | 2009-02-24T16:01:26Z | 583,452 | 10 | 2009-02-24T20:31:10Z | [
"python",
"performance",
"profiling",
"time-complexity"
] | Project Euler and other coding contests often have a maximum time to run or people boast of how fast their particular solution runs. With python, sometimes the approaches are somewhat kludgey - i.e., adding timing code to `__main__`.
What is a good way to profile how long a python program takes to run? | In Virtaal's [source](https://translate.svn.sourceforge.net/svnroot/translate/src/trunk/virtaal/devsupport/profiling.py) there's a very useful class and decorator that can make it profiling (even for specific methods/functions) very easy. The output can then be viewed very comfortably in KCacheGrind. |
How can you profile a Python script? | 582,336 | 702 | 2009-02-24T16:01:26Z | 1,922,945 | 140 | 2009-12-17T16:30:34Z | [
"python",
"performance",
"profiling",
"time-complexity"
] | Project Euler and other coding contests often have a maximum time to run or people boast of how fast their particular solution runs. With python, sometimes the approaches are somewhat kludgey - i.e., adding timing code to `__main__`.
What is a good way to profile how long a python program takes to run? | It's worth pointing out that using the profiler only works (by default) on the main thread, and you won't get any information from other threads if you use them. This can be a bit of a gotcha as it is completely unmentioned in the [profiler documentation](http://docs.python.org/library/profile.html).
If you also want to profile threads, you'll want to look at the [`threading.setprofile()` function](http://docs.python.org/library/threading.html#threading.setprofile) in the docs.
You could also create your own `threading.Thread` subclass to do it:
```
class ProfiledThread(threading.Thread):
# Overrides threading.Thread.run()
def run(self):
profiler = cProfile.Profile()
try:
return profiler.runcall(threading.Thread.run, self)
finally:
profiler.dump_stats('myprofile-%d.profile' % (self.ident,))
```
and use that `ProfiledThread` class instead of the standard one. It might give you more flexibility, but I'm not sure it's worth it, especially if you are using third-party code which wouldn't use your class. |
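A quick self-contained check that the subclass really leaves one stats file per thread (same class as above, with a trivial workload):

```python
import cProfile
import os
import threading

class ProfiledThread(threading.Thread):
    # Overrides threading.Thread.run(), as above
    def run(self):
        profiler = cProfile.Profile()
        try:
            return profiler.runcall(threading.Thread.run, self)
        finally:
            profiler.dump_stats('myprofile-%d.profile' % (self.ident,))

t = ProfiledThread(target=sum, args=(range(100000),))
t.start()
t.join()
stats_file = 'myprofile-%d.profile' % (t.ident,)
```

The resulting `.profile` file can then be inspected with `pstats.Stats(stats_file)` like any other cProfile dump.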
How can you profile a Python script? | 582,336 | 702 | 2009-02-24T16:01:26Z | 7,693,928 | 103 | 2011-10-08T00:04:12Z | [
"python",
"performance",
"profiling",
"time-complexity"
] | Project Euler and other coding contests often have a maximum time to run or people boast of how fast their particular solution runs. With python, sometimes the approaches are somewhat kludgey - i.e., adding timing code to `__main__`.
What is a good way to profile how long a python program takes to run? | The python wiki is a great page for profiling resources:
<http://wiki.python.org/moin/PythonSpeed/PerformanceTips#Profiling_Code>
as is the python docs:
<http://docs.python.org/library/profile.html>
as shown by Chris Lawlor cProfile is a great tool and can easily be used to print to the screen:
```
python -m cProfile -s time mine.py <args>
```
or to file:
```
python -m cProfile -o output.file mine.py <args>
```
PS> If you are using Ubuntu, make sure to install the python-profiler package:
```
sudo apt-get install python-profiler
```
If you output to file you can get nice visualizations using the following tools
PyCallGraph : a tool to create call graph images
install:
```
sudo pip install pycallgraph
```
run:
```
pycallgraph mine.py args
```
view:
```
gimp pycallgraph.png
```
*You can use whatever you like to view the png file, I used gimp*
Unfortunately I often get
dot: graph is too large for cairo-renderer bitmaps. Scaling by 0.257079 to fit
which makes my images unusably small. So I generally create svg files:
```
pycallgraph -f svg -o pycallgraph.svg mine.py <args>
```
PS> make sure to install graphviz (the system package that provides the dot program):
```
sudo apt-get install graphviz
```
Alternative Graphing using gprof2dot via @maxy / @quodlibetor :
```
sudo pip install gprof2dot
python -m cProfile -o profile.pstats mine.py
gprof2dot -f pstats profile.pstats | dot -Tsvg -o mine.svg
``` |
How can you profile a Python script? | 582,336 | 702 | 2009-02-24T16:01:26Z | 7,838,845 | 19 | 2011-10-20T16:05:34Z | [
"python",
"performance",
"profiling",
"time-complexity"
] | Project Euler and other coding contests often have a maximum time to run or people boast of how fast their particular solution runs. With python, sometimes the approaches are somewhat kludgey - i.e., adding timing code to `__main__`.
What is a good way to profile how long a python program takes to run? | A nice profiling module is the line\_profiler (called using the script kernprof.py). It can be downloaded [here](http://packages.python.org/line_profiler/).
My understanding is that cProfile only gives information about total time spent in each function. So individual lines of code are not timed. This is an issue in scientific computing since often one single line can take a lot of time. Also, as I remember, cProfile didn't catch the time I was spending in say numpy.dot. |
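`line_profiler` is a third-party package, but the idea behind it — charging time to individual lines instead of whole functions — can be sketched with the stdlib `sys.settrace` hook. This is only a toy illustration (and far slower than the real tool); all names here are made up for the demo:

```python
import sys
import time
from collections import defaultdict

def profile_lines(func, *args):
    """Toy per-line profiler: accumulate wall time spent on each line of func."""
    timings = defaultdict(float)
    state = {'line': None, 'time': None}

    def tracer(frame, event, arg):
        if frame.f_code is not func.__code__:
            return tracer              # ignore lines of other functions
        now = time.time()
        if state['line'] is not None:
            timings[state['line']] += now - state['time']
        if event == 'line':
            state['line'], state['time'] = frame.f_lineno, now
        else:                          # 'call', 'return', 'exception'
            state['line'] = None
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, dict(timings)

def demo(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

result, timings = profile_lines(demo, 1000)
```

`timings` maps line numbers of `demo` to accumulated seconds, which is roughly the report `kernprof.py` would give you, minus all the polish.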
How can you profile a Python script? | 582,336 | 702 | 2009-02-24T16:01:26Z | 8,065,384 | 8 | 2011-11-09T12:59:04Z | [
"python",
"performance",
"profiling",
"time-complexity"
] | Project Euler and other coding contests often have a maximum time to run or people boast of how fast their particular solution runs. With python, sometimes the approaches are somewhat kludgey - i.e., adding timing code to `__main__`.
What is a good way to profile how long a python program takes to run? | Following Joe Shaw's answer about multi-threaded code not to work as expected, I figured that the `runcall` method in cProfile is merely doing `self.enable()` and `self.disable()` calls around the profiled function call, so you can simply do that yourself and have whatever code you want in-between with minimal interference with existing code. |
How can you profile a Python script? | 582,336 | 702 | 2009-02-24T16:01:26Z | 11,822,995 | 275 | 2012-08-06T05:37:07Z | [
"python",
"performance",
"profiling",
"time-complexity"
] | Project Euler and other coding contests often have a maximum time to run or people boast of how fast their particular solution runs. With python, sometimes the approaches are somewhat kludgey - i.e., adding timing code to `__main__`.
What is a good way to profile how long a python program takes to run? | A while ago I made [`pycallgraph`](http://pycallgraph.slowchop.com/) which generates a visualisation from your Python code. **Edit:** I've updated the example to work with the latest release.
After a `pip install pycallgraph` and installing [GraphViz](http://www.graphviz.org/) you can run it from the command line:
```
pycallgraph graphviz -- ./mypythonscript.py
```
Or, you can profile particular parts of your code:
```
from pycallgraph import PyCallGraph
from pycallgraph.output import GraphvizOutput
with PyCallGraph(output=GraphvizOutput()):
code_to_profile()
```
Either of these will generate a `pycallgraph.png` file similar to the image below:
 |
How can you profile a Python script? | 582,336 | 702 | 2009-02-24T16:01:26Z | 13,830,132 | 83 | 2012-12-11T23:16:39Z | [
"python",
"performance",
"profiling",
"time-complexity"
] | Project Euler and other coding contests often have a maximum time to run or people boast of how fast their particular solution runs. With python, sometimes the approaches are somewhat kludgey - i.e., adding timing code to `__main__`.
What is a good way to profile how long a python program takes to run? | @Maxy's comment on [this answer](http://stackoverflow.com/a/7693928/25616) helped me out enough that I think it deserves its own answer: I already had cProfile-generated .pstats files and I didn't want to re-run things with pycallgraph, so I used [gprof2dot](http://code.google.com/p/jrfonseca/wiki/Gprof2Dot), and got pretty svgs:
```
$ sudo apt-get install graphviz
$ git clone https://github.com/jrfonseca/gprof2dot
$ ln -s "$PWD"/gprof2dot/gprof2dot.py ~/bin
$ cd $PROJECT_DIR
$ gprof2dot.py -f pstats profile.pstats | dot -Tsvg -o callgraph.svg
```
and BLAM!
It uses dot (the same thing that pycallgraph uses) so output looks similar. I get the impression that gprof2dot loses less information though:
 |
How can you profile a Python script? | 582,336 | 702 | 2009-02-24T16:01:26Z | 28,660,109 | 18 | 2015-02-22T16:18:05Z | [
"python",
"performance",
"profiling",
"time-complexity"
] | Project Euler and other coding contests often have a maximum time to run or people boast of how fast their particular solution runs. With python, sometimes the approaches are somewhat kludgey - i.e., adding timing code to `__main__`.
What is a good way to profile how long a python program takes to run? | Also worth mentioning is the GUI cProfile dump viewer [RunSnakeRun](http://www.vrplumber.com/programming/runsnakerun/). It allows you to sort and select, thereby zooming in on the relevant parts of the program. The sizes of the rectangles in the picture is proportional to the time taken. If you mouse over a rectangle it highlights that call in the table and everywhere on the map. When you double-click on a rectangle it zooms in on that portion. It will show you who calls that portion and what that portion calls.
The descriptive information is very helpful. It shows you the code for that bit which can be helpful when you are dealing with built-in library calls. It tells you what file and what line to find the code.
It's also worth pointing out that the OP said 'profiling' but it appears he meant 'timing'. Keep in mind programs will run slower when profiled.
 |
How can you profile a Python script? | 582,336 | 702 | 2009-02-24T16:01:26Z | 28,808,860 | 11 | 2015-03-02T11:36:47Z | [
"python",
"performance",
"profiling",
"time-complexity"
] | Project Euler and other coding contests often have a maximum time to run or people boast of how fast their particular solution runs. With python, sometimes the approaches are somewhat kludgey - i.e., adding timing code to `__main__`.
What is a good way to profile how long a python program takes to run? | # pprofile
`line_profiler` (already presented here) also inspired [`pprofile`](https://github.com/vpelletier/pprofile), which is described as:
> Line-granularity, thread-aware deterministic and statistic pure-python
> profiler
It provides line-granularity as `line_profiler`, is pure Python, can be used as a standalone command or a module, and can even generate callgrind-format files that can be easily analyzed with `[k|q]cachegrind`.
# vprof
There is also [vprof](https://github.com/nvdv/vprof), a Python package described as:
> [...] providing rich and interactive visualizations for various Python program characteristics such as running time and memory usage.
[](http://i.stack.imgur.com/uafO3.png) |
How can you profile a Python script? | 582,336 | 702 | 2009-02-24T16:01:26Z | 29,344,687 | 7 | 2015-03-30T11:11:13Z | [
"python",
"performance",
"profiling",
"time-complexity"
] | Project Euler and other coding contests often have a maximum time to run or people boast of how fast their particular solution runs. With python, sometimes the approaches are somewhat kludgey - i.e., adding timing code to `__main__`.
What is a good way to profile how long a python program takes to run? | cProfile is great for quick profiling but most of the time it was ending for me with the errors. Function runctx solves this problem by initializing correctly the environment and variables, hope it can be useful for someone:
```
import cProfile
cProfile.runctx('foo()', None, locals())
``` |
How can you profile a Python script? | 582,336 | 702 | 2009-02-24T16:01:26Z | 32,139,774 | 7 | 2015-08-21T11:59:43Z | [
"python",
"performance",
"profiling",
"time-complexity"
] | Project Euler and other coding contests often have a maximum time to run or people boast of how fast their particular solution runs. With python, sometimes the approaches are somewhat kludgey - i.e., adding timing code to `__main__`.
What is a good way to profile how long a python program takes to run? | There's a lot of great answers but they either use command line or some external program for profiling and/or sorting the results.
I really missed some way I could use in my IDE (eclipse-PyDev) without touching the command line or installing anything. So here it is.
# Profiling without command line
```
def count():
from math import sqrt
for x in range(10**5):
sqrt(x)
if __name__ == '__main__':
import cProfile, pstats
cProfile.run("count()", "{}.profile".format(__file__))
s = pstats.Stats("{}.profile".format(__file__))
s.strip_dirs()
s.sort_stats("time").print_stats(10)
```
See [docs](https://docs.python.org/3.4/library/profile.html) or other answers for more info. |
py2exe + sqlalchemy + sqlite problem | 582,449 | 19 | 2009-02-24T16:28:11Z | 582,520 | 29 | 2009-02-24T16:49:55Z | [
"python",
"sqlite",
"sqlalchemy",
"py2exe"
] | I am playing around with getting some basic stuff to work in Python before i go into full speed dev mode. Here are the specifics:
```
Python 2.5.4
PyQt4 4.4.3
SqlAlchemy 0.5.2
py2exe 0.6.9
setuptools 0.6c9
pysqlite 2.5.1
```
setup.py:
```
from distutils.core import setup
import py2exe
setup(windows=[{"script" : "main.py"}], options={"py2exe" : {"includes" : ["sip", "PyQt4.QtSql","sqlite3"],"packages":["sqlite3",]}})
```
py2exe appears to generate the .exe file correctly, but when i execute dist/main.exe i get this in the main.exe.log
```
Traceback (most recent call last):
File "main.py", line 18, in <module>
File "main.py", line 14, in main
File "db\manager.pyc", line 12, in __init__
File "sqlalchemy\engine\__init__.pyc", line 223, in create_engine
File "sqlalchemy\engine\strategies.pyc", line 48, in create
File "sqlalchemy\engine\url.pyc", line 91, in get_dialect
ImportError: No module named sqlite
```
I've been googling my heart out, but can't seem to find any solutions to this. If i can't get this to work now, my hopes of using Python for this project will be dashed and i will start over using Ruby... (not that there is anything wrong with Ruby, i just wanted to use this project as a good way to teach myself Python) | you need to include the sqlalchemy.databases.sqlite package
```
setup(
windows=[{"script" : "main.py"}],
options={"py2exe" : {
"includes": ["sip", "PyQt4.QtSql"],
"packages": ["sqlalchemy.databases.sqlite"]
}})
``` |
How to import classes defined in __init__.py | 582,723 | 46 | 2009-02-24T17:35:55Z | 583,065 | 36 | 2009-02-24T18:52:35Z | [
"python",
"packages"
] | I am trying to organize some modules for my own use. I have something like this:
```
lib/
__init__.py
settings.py
foo/
__init__.py
someobject.py
bar/
__init__.py
somethingelse.py
```
In `lib/__init__.py`, I want to define some classes to be used if I import lib. However, I can't seem to figure it out without separating the classes into files, and import them in `__init__.py`.
Rather than say:
```
lib/
__init__.py
settings.py
helperclass.py
foo/
__init__.py
someobject.py
bar/
__init__.py
somethingelse.py
from lib.settings import Values
from lib.helperclass import Helper
```
I want something like this:
```
lib/
__init__.py #Helper defined in this file
settings.py
foo/
__init__.py
someobject.py
bar/
__init__.py
somethingelse.py
from lib.settings import Values
from lib import Helper
```
Is it possible, or do I have to separate the class into another file?
## EDIT
OK, if I import lib from another script, I can access the Helper class. How can I access the Helper class from settings.py?
The example [here](http://docs.python.org/tutorial/modules.html) describes Intra-Package References. I quote "submodules often need to refer to each other". In my case, the lib.settings.py needs the Helper and lib.foo.someobject need access to Helper, so where should I define the Helper class? | 1. '`lib/`'s parent directory must be in `sys.path`.
2. Your '`lib/__init__.py`' might look like this:
```
from . import settings # or just 'import settings' on old Python versions
class Helper(object):
pass
```
Then the following example should work:
```
from lib.settings import Values
from lib import Helper
```
### Answer to the edited version of the question:
`__init__.py` defines how your package looks from outside. If you need to use `Helper` in `settings.py` then define `Helper` in a different file e.g., '`lib/helper.py`'.
```
.
| `-- import_submodule.py
`-- lib
|-- __init__.py
|-- foo
| |-- __init__.py
| `-- someobject.py
|-- helper.py
`-- settings.py
2 directories, 6 files
```
The command:
```
$ python import_submodule.py
```
Output:
```
settings
helper
Helper in lib.settings
someobject
Helper in lib.foo.someobject
# ./import_submodule.py
import fnmatch, os
from lib.settings import Values
from lib import Helper
print
for root, dirs, files in os.walk('.'):
for f in fnmatch.filter(files, '*.py'):
print "# %s/%s" % (os.path.basename(root), f)
print open(os.path.join(root, f)).read()
print
# lib/helper.py
print 'helper'
class Helper(object):
def __init__(self, module_name):
print "Helper in", module_name
# lib/settings.py
print "settings"
import helper
class Values(object):
pass
helper.Helper(__name__)
# lib/__init__.py
#from __future__ import absolute_import
import settings, foo.someobject, helper
Helper = helper.Helper
# foo/someobject.py
print "someobject"
from .. import helper
helper.Helper(__name__)
# foo/__init__.py
import someobject
``` |
Run a linux system command as a superuser, using a python script | 583,216 | 15 | 2009-02-24T19:32:21Z | 583,239 | 18 | 2009-02-24T19:39:15Z | [
"python",
"linux",
"sysadmin",
"sudo",
"root"
] | I have got postfix installed on my machine and I am updating virtual\_alias on the fly programmatically(using python)(on some action). Once I update the entry in the /etc/postfix/virtual\_alias, I am running the command:
```
sudo /usr/sbin/postmap /etc/postfix/virtual_alias 2>>/work/postfix_valias_errorfile
```
But I am getting the error:
```
sudo: sorry, you must have a tty to run sudo
```
I want to run the mentioned sudo command in a non-human way(meaning, I am running this system command from a python script.). So how do I get this command run programmatically? | You can either run your python script as root itself - then you won't need to add privilege to reload postfix.
Or you can configure sudo to not need a password for `/etc/init.d/postfix`.
sudo configuration (via visudo) allows NOPASSWD: to allow the command without a password. See [http://www.sudo.ws/sudo/man/sudoers.html#nopasswd\_and\_passwd](http://www.sudo.ws/sudo/man/sudoers.html#nopasswd%5Fand%5Fpasswd)
```
<username> ALL = NOPASSWD: /etc/init.d/postfix
```
or something similar. |
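With such a sudoers entry in place the script can run the command non-interactively; passing `-n` makes sudo fail immediately instead of hanging when a password would still be required (a sketch — the postfix path comes from the answer above, and the sudoers change must already be in place for the call to succeed):

```python
import subprocess

def sudo_command(cmd_args):
    # Build the argv list; -n ("non-interactive") aborts instead of prompting
    return ['sudo', '-n'] + list(cmd_args)

def run_privileged(cmd_args):
    """Run a command via sudo without ever prompting for a password."""
    return subprocess.call(sudo_command(cmd_args))

# e.g.: run_privileged(['/etc/init.d/postfix', 'reload'])
```

Passing the arguments as a list (rather than `shell=True` with a single string) also avoids shell-quoting surprises.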
What is the best way to print a table with delimiters in Python | 583,557 | 4 | 2009-02-24T20:56:53Z | 583,757 | 17 | 2009-02-24T21:47:41Z | [
"python",
"coding-style"
] | I want to print a table mixed with string and float values, as tab delimited output printout. Sure I can get the job done:
```
>>> tab = [['a', 1], ['b', 2]]
>>> for row in tab:
... out = ""
... for col in row:
... out = out + str(col) + "\t"
... print out.rstrip()
...
a 1
b 2
```
But I have a feeling there is a better way to do it in Python, at least to print each row with specified delimiter, if not the whole table. Little googling (from [here](http://skymind.com/~ocrow/python_string/)) and it is already shorter:
```
>>> for row in tab:
... print "\t".join([str(col) for col in row])
...
a 1
b 2
```
Is there still a better, or more Python-ish, way to do it? | Your shorter solution would work well as something quick and dirty. But if you need to handle large amounts of data, it'd be better to use `csv` module:
```
import sys, csv
writer = csv.writer(sys.stdout, delimiter="\t")
writer.writerows(data)
```
The benefit of this solution is that you may easily customize all aspects of output format: delimiter, quotation, column headers, escape sequences... |
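To check the exact output (or to write to a file instead of stdout), the writer can be pointed at any file-like object — here an in-memory buffer. This assumes Python 3's `io.StringIO`; on Python 2 you would use `StringIO.StringIO` instead:

```python
import csv
import io

data = [['a', 1], ['b', 2]]

buf = io.StringIO()
writer = csv.writer(buf, delimiter='\t', lineterminator='\n')
writer.writerows(data)   # non-string values are converted with str()
output = buf.getvalue()
print(output)
```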
Is it possible to generate and return a ZIP file with App Engine? | 583,791 | 19 | 2009-02-24T21:56:09Z | 583,819 | 32 | 2009-02-24T22:06:10Z | [
"python",
"google-app-engine",
"zip",
"in-memory"
] | I have a small project that would be perfect for Google App Engine. Implementing it hinges on the ability to generate a ZIP file and return it.
Due to the distributed nature of App Engine, from what I can tell, the ZIP file couldn't be created "in-memory" in the traditional sense. It would basically have to be generated and and sent in a single request/response cycle.
Does the Python zip module even exist in the App Engine environment? | [zipfile](http://docs.python.org/library/zipfile.html) is available at appengine and reworked [example](http://www.tareandshare.com/2008/09/28/Zip-Google-App-Engine-GAE/) follows:
```
from contextlib import closing
from zipfile import ZipFile, ZIP_DEFLATED
from google.appengine.ext import webapp
from google.appengine.api import urlfetch
def addResource(zfile, url, fname):
# get the contents
contents = urlfetch.fetch(url).content
# write the contents to the zip file
zfile.writestr(fname, contents)
class OutZipfile(webapp.RequestHandler):
def get(self):
# Set up headers for browser to correctly recognize ZIP file
self.response.headers['Content-Type'] ='application/zip'
self.response.headers['Content-Disposition'] = \
'attachment; filename="outfile.zip"'
# compress files and emit them directly to HTTP response stream
with closing(ZipFile(self.response.out, "w", ZIP_DEFLATED)) as outfile:
# repeat this for every URL that should be added to the zipfile
addResource(outfile,
'https://www.google.com/intl/en/policies/privacy/',
'privacy.html')
addResource(outfile,
'https://www.google.com/intl/en/policies/terms/',
'terms.html')
``` |
Is it possible to generate and return a ZIP file with App Engine? | 583,791 | 19 | 2009-02-24T21:56:09Z | 2,386,804 | 9 | 2010-03-05T12:54:24Z | [
"python",
"google-app-engine",
"zip",
"in-memory"
] | I have a small project that would be perfect for Google App Engine. Implementing it hinges on the ability to generate a ZIP file and return it.
Due to the distributed nature of App Engine, from what I can tell, the ZIP file couldn't be created "in-memory" in the traditional sense. It would basically have to be generated and and sent in a single request/response cycle.
Does the Python zip module even exist in the App Engine environment? | ```
import zipfile
import StringIO
text = u"ABCDEFGHIJKLMNOPQRSTUVWXYVabcdefghijklmnopqqstuvweyxáéöüï中 廣 庠 广"
zipstream=StringIO.StringIO()
file = zipfile.ZipFile(file=zipstream,compression=zipfile.ZIP_DEFLATED,mode="w")
file.writestr("data.txt.zip",text.encode("utf-8"))
file.close()
zipstream.seek(0)
self.response.headers['Content-Type'] ='application/zip'
self.response.headers['Content-Disposition'] = 'attachment; filename="data.txt.zip"'
self.response.out.write(zipstream.getvalue())
``` |
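The same in-memory round trip can be verified outside App Engine by swapping the response stream for another buffer and reading the archive back (Python 3 spelling with `io.BytesIO`; on Python 2 `StringIO` works as above):

```python
import io
import zipfile

text = u"some unicode payload: \u00e9\u00fc\u4e2d"

# Build the zip entirely in memory, exactly as in the handler above
buf = io.BytesIO()
zf = zipfile.ZipFile(buf, mode="w", compression=zipfile.ZIP_DEFLATED)
zf.writestr("data.txt", text.encode("utf-8"))
zf.close()

# Rewind and read the entry back out to prove the archive is intact
buf.seek(0)
readback = zipfile.ZipFile(buf).read("data.txt").decode("utf-8")
```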
Determine if a listing is a directory or file in Python over FTP | 584,865 | 5 | 2009-02-25T05:54:36Z | 585,232 | 10 | 2009-02-25T09:04:16Z | [
"python",
"ftp"
] | Python has a standard library module `ftplib` to run FTP communications. It has two means of getting a listing of directory contents. One, `FTP.nlst()`, will return a list of the contents of a directory given a directory name as an argument. (It will return the name of a file if given a file name instead.) This is a robust way to list the contents of a directory but does not give any indication whether each item in the list is a file or directory. The other method is `FTP.dir()`, which gives a string formatted listing of the directory contents of the directory given as an argument (or of the file attributes, given a file name).
According to [a previous question on Stack Overflow](http://stackoverflow.com/questions/111954/using-pythons-ftplib-to-get-a-directory-listing-portably), parsing the results of `dir()` can be fragile (different servers may return different strings). I'm looking for some way to list just the directories contained within another directory over FTP, though. To the best of my knowledge, scraping for a `d` in the permissions part of the string is the only solution I've come up with, but I guess I can't guarantee that the permissions will appear in the same place between different servers. Is there a more robust solution to identifying directories over FTP? | Unfortunately FTP doesn't have a command to list just folders so parsing the results you get from ftp.dir() would be 'best'.
A simple app assuming a standard result from ls (not a windows ftp)
```
from ftplib import FTP

ftp = FTP(host, user, passwd)
lines = []
ftp.dir(lines.append)  # ftp.dir() feeds each listing line to a callback and returns None
for r in lines:
    if r.upper().startswith('D'):
        print r[58:] # Starting point
```
[Standard FTP Commands](http://www.hiteksoftware.com/help/English/FtpCommands.htm#ftp%20commands)
[Custom FTP Commands](http://www.hiteksoftware.com/help/English/FtpCommands.htm#custom%20commands) |
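Since the fragile part is the parsing rather than the fetching, it can be exercised offline against canned `LIST` output. The sketch below splits on whitespace instead of using a fixed column offset like 58 — still server-dependent, but a bit less brittle:

```python
def directories_from_listing(lines):
    """Pick out directory names from Unix-style LIST output lines."""
    dirs = []
    for line in lines:
        if line.startswith('d'):          # permission string starts with 'd'
            # the name is everything after the first 8 whitespace-separated fields
            dirs.append(line.split(None, 8)[-1])
    return dirs

# Canned output in the common 'ls -l' style many Unix FTP servers emit
listing = [
    'drwxr-xr-x   2 ftp  ftp      4096 Feb 25  2009 pub',
    '-rw-r--r--   1 ftp  ftp       120 Feb 25  2009 readme.txt',
    'drwxr-xr-x   5 ftp  ftp      4096 Feb 25  2009 incoming',
]
print(directories_from_listing(listing))
```

Keeping the name as the final "rest of line" field also survives file names containing spaces.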
Find all strings in python code files | 585,529 | 6 | 2009-02-25T10:44:02Z | 585,884 | 11 | 2009-02-25T12:54:41Z | [
"python"
] | I would like to list all strings within my large python project.
Imagine the different possibilities to create a string in python:
```
mystring = "hello world"
mystring = ("hello "
"world")
mystring = "hello " \
"world"
```
I need a tool that outputs "filename, linenumber, string" for each string in my project. Strings that are spread over multiple lines using "\" or "('')" should be shown in a single line.
Any ideas how this could be done? | unwind's suggestion of using the ast module in 2.6 is a good one. (There's also the undocumented \_ast module in 2.5.) Here's example code for that
```
code = """a = 'blah'
b = '''multi
line
string'''
c = u"spam"
"""
import ast
root = ast.parse(code)
class ShowStrings(ast.NodeVisitor):
def visit_Str(self, node):
print "string at", node.lineno, node.col_offset, repr(node.s)
show_strings = ShowStrings()
show_strings.visit(root)
```
The problem is multiline strings. If you run the above you'll get.
```
string at 1 4 'blah'
string at 4 -1 'multi\nline\nstring'
string at 5 4 u'spam'
```
You see that it doesn't report the start of the multiline string, only the end. There's no good solution for that using the builtin Python tools.
Another option is that you can use my '[python4ply](http://dalkescientific.com/Python/python4ply.html)' module. This is a grammar definition for Python for [PLY](http://www.dabeaz.com/ply/), which is a parser generator. Here's how you might use it:
```
import compiler
import compiler.visitor
# from python4ply; requires the ply parser generator
import python_yacc
code = """a = 'blah'
b = '''multi
line
string'''
c = u"spam"
d = 1
"""
tree = python_yacc.parse(code, "<string>")
#print tree
class ShowStrings(compiler.visitor.ASTVisitor):
def visitConst(self, node):
if isinstance(node.value, basestring):
print "string at", node.lineno, repr(node.value)
visitor = ShowStrings()
compiler.walk(tree, visitor)
```
The output from this is
```
string at 1 'blah'
string at 2 'multi\nline\nstring'
string at 5 u'spam'
```
There's no support for column information. (There is some mostly complete commented out code to support that, but it's not fully tested.) Then again, I see you don't need it. It also means working with Python's 'compiler' module, which is clumsier than the AST module.
Still, with a 30-40 lines of code you should have exactly what you want. |
Python regular expression matching a multiline block of text | 587,345 | 42 | 2009-02-25T19:00:49Z | 587,518 | 10 | 2009-02-25T19:47:22Z | [
"python",
"regex",
"multiline"
] | I'm having a bit of trouble getting a Python regex to work when matching against text that spans multiple lines. The example text is ('\n' is a newline)
```
some Varying TEXT\n
\n
DSJFKDAFJKDAFJDSAKFJADSFLKDLAFKDSAF\n
[more of the above, ending with a newline]\n
[yep, there is a variable number of lines here]\n
\n
(repeat the above a few hundred times).
```
I'd like to capture two things: the 'some\_Varying\_TEXT' part, and all of the lines of uppercase text that comes two lines below it in one capture (i can strip out the newline characters later).
I've tried with a few approaches:
```
re.compile(r"^>(\w+)$$([.$]+)^$", re.MULTILINE) # try to capture both parts
re.compile(r"(^[^>][\w\s]+)$", re.MULTILINE|re.DOTALL) # just textlines
```
and a lot of variations hereof with no luck. The last one seems to match the lines of text one by one, which is not what I really want. I can catch the first part, no problem, but I can't seem to catch the 4-5 lines of uppercase text.
I'd like match.group(1) to be some\_Varying\_Text and group(2) to be line1+line2+line3+etc until the empty line is encountered.
If anyone's curious, its supposed to be a sequence of aminoacids that make up a protein. | This will work:
```
>>> import re
>>> rx_sequence=re.compile(r"^(.+?)\n\n((?:[A-Z]+\n)+)",re.MULTILINE)
>>> rx_blanks=re.compile(r"\W+") # to remove blanks and newlines
>>> text="""Some varying text1
...
... AAABBBBBBCCCCCCDDDDDDD
... EEEEEEEFFFFFFFFGGGGGGG
... HHHHHHIIIIIJJJJJJJKKKK
...
... Some varying text 2
...
... LLLLLMMMMMMNNNNNNNOOOO
... PPPPPPPQQQQQQRRRRRRSSS
... TTTTTUUUUUVVVVVVWWWWWW
... """
>>> for match in rx_sequence.finditer(text):
... title, sequence = match.groups()
... title = title.strip()
... sequence = rx_blanks.sub("",sequence)
... print "Title:",title
... print "Sequence:",sequence
... print
...
Title: Some varying text1
Sequence: AAABBBBBBCCCCCCDDDDDDDEEEEEEEFFFFFFFFGGGGGGGHHHHHHIIIIIJJJJJJJKKKK
Title: Some varying text 2
Sequence: LLLLLMMMMMMNNNNNNNOOOOPPPPPPPQQQQQQRRRRRRSSSTTTTTUUUUUVVVVVVWWWWWW
```
---
Some explanation about this regular expression might be useful: `^(.+?)\n\n((?:[A-Z]+\n)+)`
* The first character (`^`) means "starting at the beginning of a line". Be aware that it does not match the newline itself (same for $: it means "just before a newline", but it does not match the newline itself).
* Then `(.+?)\n\n` means "match as few characters as possible (all characters are allowed) until you reach two newlines". The result (without the newlines) is put in the first group.
* `[A-Z]+\n` means "match as many upper case letters as possible until you reach a newline. This defines what I will call a *textline*.
* `((?:`*textline*`)+)` means match one or more *textlines* but do not put each line in a group. Instead, put **all** the *textlines* in one group.
* You could add a final `\n` in the regular expression if you want to enforce a double newline at the end.
* Also, if you are not sure about what type of newline you will get (`\n` or `\r` or `\r\n`) then just fix the regular expression by replacing every occurrence of `\n` by `(?:\n|\r\n?)`. |
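Applying that last substitution mechanically gives a newline-agnostic version of the pattern, which can be sanity-checked against Windows-style line endings:

```python
import re

# Same pattern as above with every \n replaced by (?:\n|\r\n?)
rx = re.compile(r"^(.+?)(?:\n|\r\n?)(?:\n|\r\n?)((?:[A-Z]+(?:\n|\r\n?))+)",
                re.MULTILINE)

windows_text = "Header line\r\n\r\nAAAA\r\nBBBB\r\n"
match = rx.search(windows_text)
```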
Python regular expression matching a multiline block of text | 587,345 | 42 | 2009-02-25T19:00:49Z | 587,620 | 56 | 2009-02-25T20:06:01Z | [
"python",
"regex",
"multiline"
] | I'm having a bit of trouble getting a Python regex to work when matching against text that spans multiple lines. The example text is ('\n' is a newline)
```
some Varying TEXT\n
\n
DSJFKDAFJKDAFJDSAKFJADSFLKDLAFKDSAF\n
[more of the above, ending with a newline]\n
[yep, there is a variable number of lines here]\n
\n
(repeat the above a few hundred times).
```
I'd like to capture two things: the 'some\_Varying\_TEXT' part, and all of the lines of uppercase text that comes two lines below it in one capture (i can strip out the newline characters later).
I've tried with a few approaches:
```
re.compile(r"^>(\w+)$$([.$]+)^$", re.MULTILINE) # try to capture both parts
re.compile(r"(^[^>][\w\s]+)$", re.MULTILINE|re.DOTALL) # just textlines
```
and a lot of variations hereof with no luck. The last one seems to match the lines of text one by one, which is not what I really want. I can catch the first part, no problem, but I can't seem to catch the 4-5 lines of uppercase text.
I'd like match.group(1) to be some\_Varying\_Text and group(2) to be line1+line2+line3+etc until the empty line is encountered.
If anyone's curious, it's supposed to be a sequence of amino acids that make up a protein. | Try this:
```
re.compile(r"^(.+)\n((?:\n.+)+)", re.MULTILINE)
```
I think your biggest problem is that you're expecting the `^` and `$` anchors to match linefeeds, but they don't. In multiline mode, `^` matches the position immediately *following* a newline and `$` matches the position immediately *preceding* a newline.
Be aware, too, that a newline can consist of a linefeed (\n), a carriage-return (\r), or a carriage-return+linefeed (\r\n). If you aren't certain that your target text uses only linefeeds, you should use this more inclusive version of the regex:
```
re.compile(r"^(.+)(?:\n|\r\n?)((?:(?:\n|\r\n?).+)+)", re.MULTILINE)
```
BTW, you don't want to use the DOTALL modifier here; you're relying on the fact that the dot matches everything *except* newlines. |
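A quick demonstration of this suggestion on made-up data shaped like the question's input (the sample strings are placeholders):

```python
import re

pattern = re.compile(r"^(.+)\n((?:\n.+)+)", re.MULTILINE)

# Made-up data: header, blank line, uppercase block, repeated.
text = "some Varying TEXT\n\nAAAA\nBBBB\n\nother TEXT\n\nCCCC\n"
records = pattern.findall(text)
# Group 2 keeps the leading newlines, which can be stripped afterwards.
print(records)  # [('some Varying TEXT', '\nAAAA\nBBBB'), ('other TEXT', '\nCCCC')]
```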
How to increment a value with leading zeroes? | 587,647 | 8 | 2009-02-25T20:13:22Z | 587,656 | 8 | 2009-02-25T20:15:43Z | [
"python"
] | What would be the best way to increment a value that contains leading zeroes? For example, I'd like to increment "00000001". However, it should be noted that the number of leading zeroes will not exceed 30. So there may be cases like "0000012", "00000000000000099", or "000000000000045".
I can think of a couple ways, but I want to see if someone comes up with something slick. | ```
int('00000001') + 1
```
if you want the leading zeroes back:
```
"%09d" % (int('000000001') + 1)
``` |
How to increment a value with leading zeroes? | 587,647 | 8 | 2009-02-25T20:13:22Z | 587,690 | 8 | 2009-02-25T20:23:18Z | [
"python"
] | What would be the best way to increment a value that contains leading zeroes? For example, I'd like to increment "00000001". However, it should be noted that the number of leading zeroes will not exceed 30. So there may be cases like "0000012", "00000000000000099", or "000000000000045".
I can think of a couple ways, but I want to see if someone comes up with something slick. | `"%%0%ii" % len(x) % (int(x)+1)`
-- MarkusQ
P.S. For x = "0000034" it unfolds like so:
```
"%%0%ii" % len("0000034") % (int("0000034")+1)
"%%0%ii" % 7 % (34+1)
"%07i" % 35
"0000035"
``` |
How to increment a value with leading zeroes? | 587,647 | 8 | 2009-02-25T20:13:22Z | 587,791 | 15 | 2009-02-25T20:50:12Z | [
"python"
] | What would be the best way to increment a value that contains leading zeroes? For example, I'd like to increment "00000001". However, it should be noted that the number of leading zeroes will not exceed 30. So there may be cases like "0000012", "00000000000000099", or "000000000000045".
I can think of a couple ways, but I want to see if someone comes up with something slick. | Use the much overlooked str.zfill():
```
str(int(x) + 1).zfill(len(x))
``` |
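For example, combining `int()`, addition, and `zfill()` on inputs of varying width (sample values along the lines of the question's):

```python
# str.zfill pads on the left with zeros, so each value keeps its own width.
values = ["00000001", "0000012", "0099"]
bumped = [str(int(x) + 1).zfill(len(x)) for x in values]
print(bumped)  # ['00000002', '0000013', '0100']
```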
a question on for loops in python | 588,052 | 3 | 2009-02-25T21:47:41Z | 588,067 | 8 | 2009-02-25T21:52:06Z | [
"python",
"for-loop"
] | I want to calculate Pythagorean triplets (code below), and I want to calculate infinitely. How do I do it without using the three for loops? Could I use a for loop in some way? Thanks.
```
import math
def main():
    for x in range (10000, 1000):
        for y in range (10000, 1000):
            for z in range(10000, 1000):
                if x*x == y*y + z*z:
                    print y, z, x
                    print '-'*50
if __name__ == '__main__':
    main()
``` | Generally, you can't. Three variables, three loops.
But this is a special case, as [nobody](http://stackoverflow.com/questions/588052/a-question-on-for-loops-in-python/588087#588087) pointed out. You can solve this problem with two loops.
Also, there's no point in checking y, z and z, y.
Oh, and `range(10000, 1000) = []`.
```
import math
for x in range(1, 1000):
    for y in range(x, 1000):
        z = math.sqrt(x**2 + y**2)
        if int(z) == z:
            print x, y, int(z)
            print '-'*50
``` |
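Since the question also asks about generating triplets "infinitely", the same two-loop idea can be rearranged into an unbounded generator; this is a sketch, not part of the original answer (note that `int(math.sqrt(...))` is only reliable while the squares fit comfortably in a float):

```python
import itertools
import math

def triplets():
    """Yield Pythagorean triplets (y, z, x) with y <= z < x, unbounded."""
    for x in itertools.count(5):      # hypotenuse, grows without limit
        for y in range(3, x):         # shorter leg
            z2 = x * x - y * y
            z = int(math.sqrt(z2))
            if z >= y and z * z == z2:
                yield y, z, x

# Take as many as you like; the generator itself has no upper bound.
first = list(itertools.islice(triplets(), 3))
print(first)  # [(3, 4, 5), (6, 8, 10), (5, 12, 13)]
```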
Python Daemon Packaging Best Practices | 588,749 | 23 | 2009-02-26T01:32:39Z | 588,780 | 14 | 2009-02-26T01:46:15Z | [
"python",
"packaging",
"setuptools",
"distutils"
] | I have a tool which I have written in python and generally should be run as a daemon. What are the best practices for packaging this tool for distribution, particularly how should settings files and the daemon executable/script be handled?
Relatedly are there any common tools for setting up the daemon for running on boot as appropriate for the given platform (i.e. *init* scripts on linux, services on windows, *launchd* on os x)? | The best tool I found for helping with init.d scripts is "start-stop-daemon". It will run any application, monitor run/pid files, create them when necessary, provide ways to stop the daemon, set process user/group ids, and can even background your process.
For example, this is a script which can start/stop a wsgi server:
```
#! /bin/bash
case "$1" in
    start)
        echo "Starting server"
        # Activate the virtual environment
        . /home/ali/wer-gcms/g-env/bin/activate
        # Run start-stop-daemon, the $DAEMON variable contains the path to the
        # application to run
        start-stop-daemon --start --pidfile $WSGI_PIDFILE \
            --user www-data --group www-data \
            --chuid www-data \
            --exec "$DAEMON"
        ;;
    stop)
        echo "Stopping WSGI Application"
        # Start-stop daemon can also stop the application by sending sig 15
        # (configurable) to the process id contained in the run/pid file
        start-stop-daemon --stop --pidfile $WSGI_PIDFILE --verbose
        ;;
    *)
        # Refuse to do other stuff
        echo "Usage: /etc/init.d/wsgi-application.sh {start|stop}"
        exit 1
        ;;
esac
exit 0
```
You can also see there an example of how to use it with a virtualenv, which I would always recommend. |
Python Daemon Packaging Best Practices | 588,749 | 23 | 2009-02-26T01:32:39Z | 588,891 | 8 | 2009-02-26T02:44:33Z | [
"python",
"packaging",
"setuptools",
"distutils"
] | I have a tool which I have written in python and generally should be run as a daemon. What are the best practices for packaging this tool for distribution, particularly how should settings files and the daemon executable/script be handled?
Relatedly are there any common tools for setting up the daemon for running on boot as appropriate for the given platform (i.e. *init* scripts on linux, services on windows, *launchd* on os x)? | There are many snippets on the internet offering to write a daemon in pure python (no bash scripts)
<http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/>
looks clean...
If you want to write your own, the principle is the same as with the bash daemon function. Basically:
**On start:**
* you fork to another process
* open a logfile to redirect your stdout and stderr
* save the pid somewhere
**On stop:**
* You send SIGTERM to the process with pid stored in your pidfile.
* With signal.signal(signal.SIGTERM, sigtermhandler) you can bind a stopping procedure to the SIGTERM signal.
I don't know any widely used package doing this though. |
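A minimal sketch of those steps (the `daemonize` name and the file paths are placeholders, and error handling is omitted):

```python
import os
import signal
import sys

def daemonize(pidfile, logfile):
    """Sketch of the classic double-fork daemon recipe described above."""
    if os.fork() > 0:    # first fork: the parent returns to the shell
        sys.exit(0)
    os.setsid()          # become session leader, detach from the terminal
    if os.fork() > 0:    # second fork: can never re-acquire a terminal
        sys.exit(0)
    log = open(logfile, "a")
    os.dup2(log.fileno(), sys.stdout.fileno())  # redirect stdout to the log
    os.dup2(log.fileno(), sys.stderr.fileno())  # redirect stderr to the log
    with open(pidfile, "w") as f:               # save the pid for "stop"
        f.write(str(os.getpid()))
    # bind a stopping procedure to the SIGTERM signal
    signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(0))
```

Stopping is then a matter of `os.kill(int(open(pidfile).read()), signal.SIGTERM)` from another process.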
Python Daemon Packaging Best Practices | 588,749 | 23 | 2009-02-26T01:32:39Z | 588,904 | 11 | 2009-02-26T02:48:06Z | [
"python",
"packaging",
"setuptools",
"distutils"
] | I have a tool which I have written in python and generally should be run as a daemon. What are the best practices for packaging this tool for distribution, particularly how should settings files and the daemon executable/script be handled?
Relatedly are there any common tools for setting up the daemon for running on boot as appropriate for the given platform (i.e. *init* scripts on linux, services on windows, *launchd* on os x)? | To answer one part of your question, there are no tools I know of that will do daemon setup portably even across Linux systems let alone Windows or Mac OS X.
Most Linux distributions seem to be using `start-stop-daemon` within init scripts now, but you're still going to have minor differences in filesystem layout and big differences in packaging. Using autotools/configure, or distutils/easy\_install if your project is all Python, will go a long way to making it easier to build packages for different Linux/BSD distributions.
Windows is a whole different game and will require [Mark Hammond's win32](http://starship.python.net/crew/mhammond/) extensions and maybe [Tim Golden's WMI](http://timgolden.me.uk/python/wmi.html) extensions.
I don't know Launchd except that "none of the above" are relevant.
For tips on daemonizing Python scripts, I would look to Python apps that are actually doing it in the real world, for example inside Twisted. |
Minimal Linux For a Pylons Web App? | 589,115 | 2 | 2009-02-26T04:33:49Z | 589,127 | 8 | 2009-02-26T04:37:24Z | [
"python",
"linux",
"pylons"
] | I am going to be building a Pylons-based web application. For this purpose, I'd like to build a minimal Linux platform, upon which I would then install the necessary packages such as Python and Pylons, and other necessary dependencies. The other reason to keep it minimal is because this machine will be virtual, probably over KVM, and will eventually be replicated in some cloud environment.
What would you use to do this? I am thinking of using Fedora 10's AOS iso, but would love to understand all my options. | I really like [JeOS](http://www.ubuntu.com/products/whatisubuntu/serveredition/jeos) "Just enough OS" which is a minimal distribution of the Ubuntu Server Edition. |
imploding a list for use in a python MySQLDB IN clause | 589,284 | 35 | 2009-02-26T05:59:25Z | 589,416 | 68 | 2009-02-26T06:46:38Z | [
"python",
"mysql"
] | I know how to map a list to a string:
```
foostring = ",".join( map(str, list_of_ids) )
```
And I know that I can use the following to get that string into an IN clause:
```
cursor.execute("DELETE FROM foo.bar WHERE baz IN ('%s')" % (foostring))
```
What I need is to accomplish the same thing SAFELY (avoiding SQL injection) using MySQLDB. In the above example, because foostring is not passed as an argument to execute, it is vulnerable. I also have to quote and escape outside of the mysql library.
(There is a [related SO question](http://stackoverflow.com/questions/315672/automagically-expanding-a-python-list-with-formatted-output), but the answers listed there either do not work for MySQLDB or are vulnerable to SQL injection.) | Use the `list_of_ids` directly:
```
format_strings = ','.join(['%s'] * len(list_of_ids))
cursor.execute("DELETE FROM foo.bar WHERE baz IN (%s)" % format_strings,
               tuple(list_of_ids))
```
That way you avoid having to quote yourself, and avoid all kinds of sql injection.
Note that the data (`list_of_ids`) is going directly to mysql's driver, as a parameter (not in the query text) so there is no injection. You can leave any chars you want in the string, no need to remove or quote chars. |
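To make the mechanics concrete, here is the placeholder expansion on its own, without a live database connection (the ids are made up):

```python
# Build the placeholder list the same way as above; no database is needed
# to see the query text that would be handed to the driver.
list_of_ids = [10, 20, 30]
format_strings = ','.join(['%s'] * len(list_of_ids))
sql = "DELETE FROM foo.bar WHERE baz IN (%s)" % format_strings
print(sql)  # DELETE FROM foo.bar WHERE baz IN (%s,%s,%s)
# cursor.execute(sql, tuple(list_of_ids)) would then bind 10, 20, 30 safely.
```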
Python - How to check if a file is used by another application? | 589,407 | 8 | 2009-02-26T06:44:48Z | 589,440 | 7 | 2009-02-26T06:55:28Z | [
"python",
"windows",
"unix",
"logging",
"file-io"
] | I want to open a file which is periodically written to by another application. This application cannot be modified. I'd therefore like to only open the file when I know it is not being written to by another application.
Is there a pythonic way to do this? Otherwise, how do I achieve this in Unix and Windows?
**edit**: I'll try and clarify. *Is there a way to check if the current file has been opened by another application?*
I'd like to start with this question. Whether those other applications read/write is irrelevant for now.
I realize it is probably OS dependent, so this may not really be python related right now. | Will your python script desire to open the file for writing or for reading? Is the legacy application opening and closing the file between writes, or does it keep it open?
It is extremely important that we understand what the legacy application is doing, and what your python script is attempting to achieve.
This area of functionality is highly OS-dependent, and the fact that you have no control over the legacy application only makes things harder unfortunately. Whether there is a pythonic or non-pythonic way of doing this will probably be the least of your concerns - the hard question will be whether what you are trying to achieve will be possible at all.
---
**UPDATE**
OK, so knowing (from your comment) that:
> the legacy application is opening and closing the file every X minutes, but I do not want to assume that at t = t\_0 + n\*X + eps it already closed the file.
then the problem's parameters are changed. It can actually be done in an OS-independent way given a few assumptions, or as a combination of OS-dependent and OS-independent techniques. :)
1. **OS-independent way**: if it is safe to assume that the legacy application keeps the file open for at most some known quantity of time, say `T` seconds (e.g. opens the file, performs one write, then closes the file), and re-opens it more or less every `X` seconds, where `X` is larger than 2\*`T`.
* `stat` the file
* subtract file's modification time from `now()`, yielding `D`
* if `T` <= `D` < `X` then open the file and do what you need with it
* *This may be safe enough for your application*. Safety increases as `T`/`X` decreases. On \*nix you may have to double check `/etc/ntpd.conf` for proper time-stepping vs. slew configuration (see tinker). For Windows see [MSDN](http://support.microsoft.com/kb/223184)
2. **Windows**: in addition (or in-lieu) of the OS-independent method above, you may attempt to use either:
* sharing (locking): this assumes that the legacy program also opens the file in shared mode (usually the default in Windows apps); moreover, if your application acquires the lock just as the legacy application is attempting the same (race condition), the legacy application will fail.
+ this is extremely intrusive and error prone. Unless both the new application and the legacy application need synchronized access for writing to the same file and you are willing to handle the possibility of the legacy application being denied opening of the file, do not use this method.
* attempting to find out what files are open in the legacy application, using the same techniques as [ProcessExplorer](http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx) (the equivalent of \*nix's `lsof`)
+ you are even more vulnerable to race conditions than the OS-independent technique
3. **Linux/etc.**: in addition (or in-lieu) of the OS-independent method above, you may attempt to use the same technique as `lsof` or, on some systems, simply check which file the symbolic link `/proc/<pid>/fd/<fdes>` points to
* you are even more vulnerable to race conditions than the OS-independent technique
* it is highly unlikely that the legacy application uses locking, but if it does, locking is not a real option unless the legacy application can handle a locked file gracefully (by blocking, not by failing - and only if your own application can guarantee that the file will not remain locked, blocking the legacy application for extended periods of time.)
---
**UPDATE 2**
If favouring the "check whether the legacy application has the file open" (intrusive approach prone to race conditions) then you can solve the said race condition by:
1. checking whether the legacy application has the file open (a la `lsof` or `ProcessExplorer`)
2. suspending the legacy application process
3. repeating the check in step 1 to confirm that the legacy application did not open the file between steps 1 and 2; delay and restart at step 1 if so, otherwise proceed to step 4
4. doing your business on the file -- ideally simply renaming it for subsequent, independent processing in order to keep the legacy application suspended for a minimal amount of time
5. resuming the legacy application process |
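A sketch of the OS-independent heuristic from step 1 of the first update (the `T` and `X` parameters are the assumed write-cycle timings, not values from the question):

```python
import os
import time

def probably_closed(path, T, X):
    """Heuristic: True if the file's last write is at least T seconds old
    (the writer is assumed done) but younger than the write period X."""
    age = time.time() - os.stat(path).st_mtime
    return T <= age < X
```

Calling `probably_closed("data.log", 5, 600)` would then mean "the writer holds the file for at most ~5 seconds and rewrites it every ~10 minutes" under those assumptions.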
Python or IronPython | 590,007 | 26 | 2009-02-26T10:41:16Z | 590,026 | 13 | 2009-02-26T10:47:57Z | [
"python",
"ironpython",
"cpython"
] | How does IronPython stack up to the default Windows implementation of Python from python.org? If I am learning Python, will I be learning a subtley different language with IronPython, and what libraries would I be doing without?
Are there, alternatively, any pros to IronPython (not including .NET IL compiled classes) that would make it more attractive an option? | There are some subtle differences in how you write your code, but the biggest difference is in the libraries you have available.
With IronPython, you have all the .Net libraries available, but at the expense of some of the "normal" python libraries that haven't been ported to the .Net VM I think.
Basically, you should expect the syntax and the idioms to be the same, but a script written for IronPython won't run if you try giving it to the "regular" Python interpreter. The other way around is probably more likely, but there too you will find differences, I think. |
Python or IronPython | 590,007 | 26 | 2009-02-26T10:41:16Z | 590,782 | 27 | 2009-02-26T14:30:48Z | [
"python",
"ironpython",
"cpython"
] | How does IronPython stack up to the default Windows implementation of Python from python.org? If I am learning Python, will I be learning a subtley different language with IronPython, and what libraries would I be doing without?
Are there, alternatively, any pros to IronPython (not including .NET IL compiled classes) that would make it more attractive an option? | There are a number of important differences:
1. Interoperability with other .NET languages. You can use other .NET libraries from an IronPython application, or use IronPython from a C# application, for example. This interoperability is increasing, with a movement toward greater support for dynamic types in .NET 4.0. For a lot of detail on this, see [these](http://channel9.msdn.com/pdc2008/TL10/) [two](http://channel9.msdn.com/pdc2008/TL16/) presentations at PDC 2008.
2. Better concurrency/multi-core support, due to lack of a GIL. (Note that the GIL doesn't inhibit threading on a single-core machine---it only limits performance on multi-core machines.)
3. Limited ability to consume Python C extensions. The [Ironclad](http://code.google.com/p/ironclad/) project is making significant strides toward improving this---they've nearly gotten [Numpy](http://numpy.scipy.org/) working!
4. Less cross-platform support; basically, you've got the CLR and [Mono](http://www.mono-project.com/Main%5FPage). Mono is impressive, though, and runs on many platforms---and they've got an implementation of Silverlight, called [Moonlight](http://www.mono-project.com/Moonlight).
5. Reports of improved performance, although I have not looked into this carefully.
6. Feature lag: since CPython is the reference Python implementation, it has the "latest and greatest" Python features, whereas IronPython necessarily lags behind. Many people do not find this to be a problem. |
How to change a Python module name? | 590,250 | 3 | 2009-02-26T12:01:22Z | 590,262 | 8 | 2009-02-26T12:05:14Z | [
"python",
"module"
] | Is it only possible if I rename the file? Or is there a `__module__` variable in the file to define what its name is? | Yes, you should rename the file. After you have done that, it is best to remove the `oldname.pyc` and `oldname.pyo` compiled files (if present) from your system, otherwise the module will still be importable under the old name. |
How to change a Python module name? | 590,250 | 3 | 2009-02-26T12:01:22Z | 590,312 | 15 | 2009-02-26T12:16:09Z | [
"python",
"module"
] | Is it only possible if I rename the file? Or is there a `__module__` variable in the file to define what its name is? | If you really want to import the file 'oldname.py' with the statement 'import newname', there is a trick that makes it possible: Import the module *somewhere* with the old name, then inject it into `sys.modules` with the new name. Subsequent import statements will also find it under the new name. Code sample:
```
# this is in file 'oldname.py'
...module code...
```
Usage:
```
# inject the 'oldname' module with a new name
import oldname
import sys
sys.modules['newname'] = oldname
```
Now you can import your module everywhere with `import newname`. |
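The same trick can be demonstrated with any stdlib module standing in for `oldname.py`; here `json` plays that role and `newname_alias` is a made-up name:

```python
import sys
import json as oldname  # stand-in for your own oldname module

sys.modules['newname_alias'] = oldname  # inject under the made-up new name

import newname_alias  # resolved from sys.modules, no file lookup needed
print(newname_alias is oldname)  # True
```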
How to change a Python module name? | 590,250 | 3 | 2009-02-26T12:01:22Z | 590,353 | 9 | 2009-02-26T12:26:13Z | [
"python",
"module"
] | Is it only possible if I rename the file? Or is there a `__module__` variable in the file to define what its name is? | You can change the name used for a module when importing by using as:
```
import foo as bar
print bar.baz
``` |
Django - designing models with virtual fields? | 590,921 | 2 | 2009-02-26T15:08:17Z | 590,968 | 13 | 2009-02-26T15:17:49Z | [
"python",
"django",
"django-models"
] | I'd like to ask about the most elegant approach when it comes to designing models with virtual fields such as below in Django...
Let's say we're building an online store and all the products in the system are defined by the model "*Product*".
```
class Product(models.Model):
    # common fields that all products share
    name = ...
    brand = ...
    price = ...
```
But the store will have lots of product types completely unrelated to each other, so I need some way to store the virtual fields of the different product types (e.g. the capacity of an MP3 player, the page count of a book, ...).
The solutions I could come up with my raw Django skills are far from perfect so far:
* Having a "custom\_fields" property and intermediate tables that I manage manually. (screaming ugly in my face :))
* Or inheriting classes from "*Product*" on the fly with Python's dangerous exec-eval statements (that is too much voodoo magic for maintenance, and the implementation would require knowledge of Django internals).
What's your take on this?
TIA. | Products have Features.
```
class Feature( models.Model ):
    feature_name = models.CharField( max_length=128 )
    feature_value = models.TextField()
    part_of = models.ForeignKey( Product )
```
Like that.
Just a list of features.
```
p = Product( name="iPhone", brand="Apple", price=350 )
p.save()
f = Feature( feature_name="mp3 capacity", feature_value="16Gb", part_of=p )
f.save()
```
If you want, you can have a master list of feature names in a separate table. Don't over-analyze features. You can't do any processing on them. All you do is present them. |