| title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
Java "Virtual Machine" vs. Python "Interpreter" parlance? | 441,824 | 110 | 2009-01-14T03:39:32Z | 441,854 | 49 | 2009-01-14T03:56:21Z | [
"java",
"python",
"jvm"
] | It seems rare to read of a Python "virtual machine", while in Java "virtual machine" is used all the time. Both interpret bytecode, so why call one a virtual machine and the other an interpreter? | Probably one reason for the different terminology is that one normally thinks of feeding the Python interpreter raw human-readable source code and not worrying about bytecode and all that.
In Java, you have to explicitly compile to bytecode and then run just the bytecode, not source code on the VM.
Even though Python uses a virtual machine under the covers, from a user's perspective, one can ignore this detail most of the time. |
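The hidden compile-to-bytecode step is easy to surface with the standard `dis` module; this is a minimal sketch (the function is made up for illustration):

```python
import dis

def greet(name):
    return "Hello, " + name

# CPython compiled greet() to bytecode the moment it was defined;
# dis just disassembles that normally invisible intermediate form.
dis.dis(greet)
```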
Java "Virtual Machine" vs. Python "Interpreter" parlance? | 441,824 | 110 | 2009-01-14T03:39:32Z | 441,973 | 84 | 2009-01-14T05:06:47Z | [
"java",
"python",
"jvm"
] | It seems rare to read of a Python "virtual machine", while in Java "virtual machine" is used all the time. Both interpret bytecode, so why call one a virtual machine and the other an interpreter? | A virtual machine is a virtual computing environment with a specific set of atomic, well-defined instructions that are supported independently of any specific language, and it is generally thought of as a sandbox unto itself. The VM is analogous to the instruction set of a specific CPU and tends to work at a more fundamental level, with very basic building blocks of such instructions (or bytecodes) that are independent of one another. An instruction executes deterministically based only on the current state of the virtual machine and does not depend on information elsewhere in the instruction stream at that point in time.
An interpreter, on the other hand, is more sophisticated in that it is tailored to parse a stream of syntax that belongs to a specific language and a specific grammar, which must be decoded in the context of the surrounding tokens. You can't look at each byte or even each line in isolation and know exactly what to do next. The tokens in the language can't be taken in isolation the way the instructions (bytecodes) of a VM can.
A Java compiler converts the Java language into a bytecode stream, no differently than a C compiler converts C language programs into assembly code. An interpreter, on the other hand, doesn't really convert the program into any well-defined intermediate form; it simply performs the program's actions as part of the process of interpreting the source.
Another test of the difference between a VM and an interpreter is whether you think of it as being language independent. What we know as the Java VM is not really Java specific. You could make a compiler for another language that produces bytecodes that can be run on the JVM. On the other hand, I don't think we would really think of "compiling" some language other than Python into Python for interpretation by the Python interpreter.
Because of the sophistication of the interpretation process, it can be relatively slow: specifically, parsing and identifying the language tokens, etc., and understanding the context of the source in order to carry out execution within the interpreter. To help accelerate such interpreted languages, this is where we can define intermediate forms of pre-parsed, pre-tokenized source code that is more readily interpreted directly. This sort of binary form is still interpreted at execution time; it just starts from a much less human-readable form to improve performance. However, the logic executing that form is not a virtual machine, because those codes still can't be taken in isolation - the context of the surrounding tokens still matters; they are just now in a different, more computer-efficient form.
Java "Virtual Machine" vs. Python "Interpreter" parlance? | 441,824 | 110 | 2009-01-14T03:39:32Z | 442,210 | 10 | 2009-01-14T07:44:54Z | [
"java",
"python",
"jvm"
] | It seems rare to read of a Python "virtual machine", while in Java "virtual machine" is used all the time. Both interpret bytecode, so why call one a virtual machine and the other an interpreter? | An **interpreter** translates source code into some efficient intermediate representation (code) and immediately executes it.
A **virtual machine** explicitly executes stored, pre-compiled code built by a compiler that is part of the interpreter system.
A very important characteristic of a virtual machine is that the software running inside it is limited to the resources provided by the virtual machine. That is, it cannot break out of its virtual world; think of the secure execution of remote code, such as Java applets.
In the case of Python, if we keep the *pyc* files, as mentioned in the comment on this post, then the mechanism becomes more like a VM, and this bytecode executes faster -- it is still interpreted, but from a much more computer-friendly form. Looked at as a whole, the PVM is the last step of the Python interpreter.
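As an aside, the *pyc* caching step can be triggered explicitly with the standard `py_compile` module. A small sketch using a throwaway file (the filename and contents are made up for illustration):

```python
import os
import py_compile
import tempfile

# Write a tiny module, then byte-compile it the way CPython caches *.pyc files.
src = os.path.join(tempfile.mkdtemp(), "hello.py")
with open(src, "w") as f:
    f.write("print('hi')\n")

pyc_path = py_compile.compile(src)
print(pyc_path)  # path of the cached bytecode file
```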
The bottom line is: when we refer to the Python interpreter, we are referring to it as a whole, and when we say PVM, we are talking about just one part of the Python interpreter, a runtime environment. Similarly, in Java we refer to different parts differently: JRE, JVM, JDK, etc.
For more, Wikipedia Entry: [Interpreter](http://en.wikipedia.org/wiki/Interpreter_(computing)), and [Virtual Machine](http://en.wikipedia.org/wiki/Virtual_machine). Yet another one [here](http://www.kinabaloo.com/jvm.html). Here you can find the [Comparison of application virtual machines](http://en.wikipedia.org/wiki/Comparison_of_application_virtual_machines). It helps in understanding the difference between, Compilers, Interpreters, and VMs. |
Java "Virtual Machine" vs. Python "Interpreter" parlance? | 441,824 | 110 | 2009-01-14T03:39:32Z | 1,732,383 | 75 | 2009-11-13T22:47:18Z | [
"java",
"python",
"jvm"
] | It seems rare to read of a Python "virtual machine", while in Java "virtual machine" is used all the time. Both interpret bytecode, so why call one a virtual machine and the other an interpreter? | In this post, "virtual machine" refers to process virtual machines, not to
system virtual machines like QEMU or VirtualBox. A process virtual machine is
simply a program which provides a general programming environment -- a program
which can be programmed.
Java has an interpreter as well as a virtual machine, and Python has a virtual
machine as well as an interpreter. The reason "virtual machine" is a more
common term in Java and "interpreter" is a more common term in Python has a lot
to do with the major difference between the two languages: static typing
(Java) vs dynamic typing (Python). In this context, "type" refers to
[primitive data types](http://en.wikipedia.org/wiki/Data%5Ftype) -- types which suggest the in-memory storage size of
the data. The Java virtual machine has it easy. It requires the programmer to
specify the primitive data type of each variable. This provides sufficient
information for Java bytecode not only to be interpreted and executed by the
Java virtual machine, but even to be [compiled into machine instructions](http://gcc.gnu.org/java/).
The Python virtual machine is more complex in the sense that it takes on the
additional task of pausing before the execution of each operation to determine
the primitive data types for each variable or data structure involved in the
operation. Python frees the programmer from thinking in terms of primitive data
types, and allows operations to be expressed at a higher level. The price of
this freedom is performance. "Interpreter" is the preferred term for Python
because it has to pause to inspect data types, and also because the
comparatively concise syntax of dynamically-typed languages is a good fit for
interactive interfaces. There's no technical barrier to building an interactive
Java interface, but trying to write any statically-typed code interactively
would be tedious, so it just isn't done that way.
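The per-operation type determination described above can be made concrete with a tiny sketch: one function, one compiled bytecode sequence, three behaviours chosen at run time:

```python
def add(a, b):
    # The same bytecode runs on every call; the virtual machine inspects
    # the operand types at execution time to decide what "+" means.
    return a + b

print(add(2, 3))          # integer addition
print(add("ab", "cd"))    # string concatenation
print(add([1], [2]))      # list concatenation
```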
In the Java world, the virtual machine steals the show because it runs programs
written in a language which can actually be compiled into machine instructions,
and the result is speed and resource efficiency. Java bytecode can be executed
by the Java virtual machine with performance approaching that of compiled
programs, relatively speaking. This is due to the presence of primitive data
type information in the bytecode. The Java virtual machine puts Java in a
category of its own:
**portable interpreted statically-typed language**
The next closest thing is LLVM, but LLVM operates at a different level:
**portable interpreted assembly language**
The term "bytecode" is used in both Java and Python, but not all bytecode is
created equal. Bytecode is just the generic term for intermediate languages
used by compilers/interpreters. Even C compilers like gcc use an [intermediate
language (or several)](http://gcc.gnu.org/onlinedocs/gccint/RTL.html) to get the job done. Java bytecode contains
information about primitive data types, whereas Python bytecode does not. In
this respect, the Python (and Bash, Perl, Ruby, etc.) virtual machine truly is
fundamentally slower than the Java virtual machine, or rather, it simply has
more work to do. It is useful to consider what information is contained in
different bytecode formats:
* **llvm:** cpu registers
* **Java:** primitive data types
* **Python:** user-defined types
To draw a real-world analogy: LLVM works with atoms, the Java virtual machine
works with molecules, and the Python virtual machine works with materials.
Since everything must eventually decompose into subatomic particles (real
machine operations), the Python virtual machine has the most complex task.
Interpreters/compilers of statically-typed languages just don't have the same
baggage that interpreters/compilers of dynamically-typed languages have.
Programmers of statically-typed languages have to take up the slack, for which
the payoff is performance. However, just as all nondeterministic functions are
secretly deterministic, so are all dynamically-typed languages secretly
statically-typed. Performance differences between the two language families
should therefore level out around the time Python changes its name to HAL 9000.
The virtual machines of dynamic languages like Python implement some idealized
logical machine, and don't necessarily correspond very closely to any real
physical hardware. The Java virtual machine, in contrast, is more similar in
functionality to a classical C compiler, except that instead of emitting
machine instructions, it executes built-in routines. In Python, an integer is
a Python object with a bunch of attributes and methods attached to it. In
Java, an int is a designated number of bits, usually 32. It's not really a
fair comparison. Python integers should really be compared to the Java
Integer class. Java's "int" primitive data type can't be compared to anything in
the Python language, because the Python language simply lacks this layer of
primitives, and so does Python bytecode.
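The point that a Python integer is a full object rather than a bare machine word can be observed directly (a small sketch):

```python
n = 42

# A Python int is a heap object carrying methods and metadata,
# not a designated number of bits like Java's primitive int.
print(type(n))            # <class 'int'>
print(n.bit_length())     # bits needed to represent 42
print(n.to_bytes(4, "big"))
```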
Because Java variables are explicitly typed, one can reasonably expect
something like [Jython](http://www.jython.org/) performance to be in the same ballpark as
[cPython](http://www.python.org/). On the other hand, a Java virtual machine implemented in Python
is almost guaranteed to be slower than mud. And don't expect Ruby, Perl, etc.,
to fare any better. They weren't designed to do that. They were designed for
"scripting", which is what programming in a dynamic language is called.
Every operation that takes place in a virtual machine eventually has to hit real hardware. Virtual machines contain pre-compiled routines which are general enough to execute any combination of logical operations. A virtual machine may not be emitting new machine instructions, but it certainly is executing its own routines over and over in arbitrarily complex sequences. The Java virtual machine, the Python virtual machine, and all the other general-purpose virtual machines out there are equal in the sense that they can be coaxed into performing any logic you can dream up, but they are different in terms of what tasks they take on, and what tasks they leave to the programmer.
Psyco for Python is not a full Python virtual machine, but a just-in-time
compiler that hijacks the regular Python virtual machine at points it thinks it
can compile a few lines of code -- mainly loops where it thinks the primitive
type of some variable will remain constant even if the value is changing with
each iteration. In that case, it can forgo some of the incessant type
determination of the regular virtual machine. You have to be a little careful,
though, lest you pull the type out from under Psyco's feet. Psyco, however,
usually knows to just fall back to the regular virtual machine if it isn't
completely confident the type won't change.
The moral of the story is that primitive data type information is really
helpful to a compiler/virtual machine.
Finally, to put it all in perspective consider this: a Python program executed
by a Python interpreter/virtual machine implemented in Java running on a Java
interpreter/virtual machine implemented in LLVM running in a qemu virtual
machine running on an iPhone.
[permalink](http://www.darkarchive.org/w/Pub/ProcessVirtualMachine) |
IronPython on ASP.NET MVC | 441,838 | 21 | 2009-01-14T03:45:24Z | 443,078 | 12 | 2009-01-14T14:16:01Z | [
"python",
"asp.net-mvc",
"linq",
"dynamic",
"ironpython"
] | Has anyone tried ASP.NET MVC using IronPython? Having done a lot of Python development recently, it would be nice to continue with the language as I go into a potential ASP.NET MVC project.
I'm especially interested in exploiting the dynamic aspects of Python with .NET features such as LINQ and want to know if this will be possible. The other route that may be viable for certain dynamic programming would be C# 4.0 with its `dynamic` keyword.
Thoughts, experiences? | Yes, [there is an MVC example from the DLR team](http://www.codeplex.com/aspnet/Release/ProjectReleases.aspx?ReleaseId=17613).
You might also be interested in [Spark](http://sparkviewengine.com/documentation/ironpython). |
IronPython on ASP.NET MVC | 441,838 | 21 | 2009-01-14T03:45:24Z | 4,223,420 | 7 | 2010-11-19T08:59:15Z | [
"python",
"asp.net-mvc",
"linq",
"dynamic",
"ironpython"
] | Has anyone tried ASP.NET MVC using IronPython? Having done a lot of Python development recently, it would be nice to continue with the language as I go into a potential ASP.NET MVC project.
I'm especially interested in exploiting the dynamic aspects of Python with .NET features such as LINQ and want to know if this will be possible. The other route that may be viable for certain dynamic programming would be C# 4.0 with its `dynamic` keyword.
Thoughts, experiences? | Using IronPython in ASP.NET MVC: <http://www.codevoyeur.com/Articles/Tags/ironpython.aspx>
This page contains the following articles:
* A Simple IronPython ControllerFactory for ASP.NET MVC
* A Simple IronPython ActionFilter for ASP.NET MVC
* A Simple IronPython Route Mapper for ASP.NET MVC
* An Unobtrusive IronPython ViewEngine for ASP.NET MVC |
Good Python networking libraries for building a TCP server? | 441,849 | 9 | 2009-01-14T03:51:13Z | 441,872 | 10 | 2009-01-14T04:06:30Z | [
"python",
"networking",
"twisted"
] | I was just wondering what network libraries there are out there for Python for building a TCP/IP server. I know that Twisted might jump to mind but the documentation seems scarce, sloppy, and scattered to me.
Also, would using Twisted even have a benefit over rolling my own server with select.select()? | I must agree that the documentation is a bit terse but the tutorial gets you up and running quickly.
<http://twistedmatrix.com/projects/core/documentation/howto/tutorial/index.html>
The event-based programming paradigm of Twisted and its deferreds might be a bit weird at the start (it was for me), but it is worth the learning curve.
You'll get up and running doing much more complex stuff more quickly than if you were to write your own framework and it would also mean one less thing to bug hunt as Twisted is very much production proven.
I don't really know of another framework that can offer as much as Twisted can, so my vote would definitely go for Twisted even if the docs aren't for the faint of heart.
I agree with Greg that SocketServer is a nice middle ground but depending on the target audience of your application and the design of it you might have some nice stuff to look forward to in Twisted (the PerspectiveBroker which is very useful comes to mind - <http://twistedmatrix.com/projects/core/documentation/howto/pb-intro.html>) |
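As a point of comparison, the SocketServer middle ground mentioned above (the module is named `socketserver` in Python 3) needs only a handler class. This is a minimal echo-server sketch, not production code:

```python
import socketserver

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Echo each line back to the client until it disconnects.
        for line in self.rfile:
            self.wfile.write(line)

if __name__ == "__main__":
    # Port 0 asks the OS for any free port.
    with socketserver.TCPServer(("127.0.0.1", 0), EchoHandler) as server:
        print("echo server bound to port", server.server_address[1])
        # server.serve_forever()  # uncomment to start accepting connections
```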
Pre-populate an inline FormSet? | 442,040 | 37 | 2009-01-14T05:49:25Z | 442,061 | 25 | 2009-01-14T06:08:25Z | [
"python",
"django",
"django-forms"
] | I'm working on an attendance entry form for a band. My idea is to have a section of the form to enter event information for a performance or rehearsal. Here's the model for the event table:
```
class Event(models.Model):
    event_id = models.AutoField(primary_key=True)
    date = models.DateField()
    event_type = models.ForeignKey(EventType)
    description = models.TextField()
```
Then I'd like to have an inline FormSet that links the band members to the event and records whether they were present, absent, or excused:
```
class Attendance(models.Model):
    attendance_id = models.AutoField(primary_key=True)
    event_id = models.ForeignKey(Event)
    member_id = models.ForeignKey(Member)
    attendance_type = models.ForeignKey(AttendanceType)
    comment = models.TextField(blank=True)
```
Now, what I'd like to do is to pre-populate this inline FormSet with entries for all the current members and default them to being present (around 60 members). Unfortunately, Django [doesn't allow initial values in this case.](http://groups.google.com/group/django-developers/browse_thread/thread/73af9e58bd7626a8)
Any suggestions? | So, you're not going to like the answer, partly because I'm not yet done writing the code and partly because it's a lot of work.
What you need to do, as I discovered when I ran into this myself, is:
1. Spend a lot of time reading through the formset and model-formset code to get a feel for how it all works (not helped by the fact that some of the functionality lives on the formset classes, and some of it lives in factory functions which spit them out). You will need this knowledge in the later steps.
2. Write your own formset class which subclasses from `BaseInlineFormSet` and accepts `initial`. The really tricky bit here is that you *must* override `__init__()`, and you *must* make sure that it calls up to `BaseFormSet.__init__()` rather than using the direct parent or grandparent `__init__()` (since those are `BaseInlineFormSet` and `BaseModelFormSet`, respectively, and neither of them can handle initial data).
3. Write your own subclass of the appropriate admin inline class (in my case it was `TabularInline`) and override its `get_formset` method to return the result of `inlineformset_factory()` using your custom formset class.
4. On the actual `ModelAdmin` subclass for the model with the inline, override `add_view` and `change_view`, and replicate most of the code, but with one big change: build the initial data your formset will need, and pass it to your custom formset (which will be returned by your `ModelAdmin`'s `get_formsets()` method).
I've had a few productive chats with Brian and Joseph about improving this for future Django releases; at the moment, the way the model formsets work just make this more trouble than it's usually worth, but with a bit of API cleanup I think it could be made extremely easy. |
Pre-populate an inline FormSet? | 442,040 | 37 | 2009-01-14T05:49:25Z | 3,766,344 | 16 | 2010-09-22T04:35:19Z | [
"python",
"django",
"django-forms"
] | I'm working on an attendance entry form for a band. My idea is to have a section of the form to enter event information for a performance or rehearsal. Here's the model for the event table:
```
class Event(models.Model):
    event_id = models.AutoField(primary_key=True)
    date = models.DateField()
    event_type = models.ForeignKey(EventType)
    description = models.TextField()
```
Then I'd like to have an inline FormSet that links the band members to the event and records whether they were present, absent, or excused:
```
class Attendance(models.Model):
    attendance_id = models.AutoField(primary_key=True)
    event_id = models.ForeignKey(Event)
    member_id = models.ForeignKey(Member)
    attendance_type = models.ForeignKey(AttendanceType)
    comment = models.TextField(blank=True)
```
Now, what I'd like to do is to pre-populate this inline FormSet with entries for all the current members and default them to being present (around 60 members). Unfortunately, Django [doesn't allow initial values in this case.](http://groups.google.com/group/django-developers/browse_thread/thread/73af9e58bd7626a8)
Any suggestions? | I spent a fair amount of time trying to come up with a solution that I could re-use across sites. James' post contained the key piece of wisdom of extending `BaseInlineFormSet` but strategically invoking calls against `BaseFormSet`.
The solution below is broken into two pieces: an `AdminInline` and a `BaseInlineFormSet`.
1. The `AdminInline` dynamically generates an initial value based on the exposed request object.
2. It uses currying to expose the initial values to a custom `BaseInlineFormSet` through keyword arguments passed to the constructor.
3. The `BaseInlineFormSet` constructor pops the initial values off the list of keyword arguments and constructs normally.
4. The last piece is overriding the form construction process by changing the maximum total number of forms and using the `BaseFormSet._construct_form` and `BaseFormSet._construct_forms` methods.
Here are some concrete snippets using the OP's classes. I've tested this against Django 1.2.3. I highly recommend keeping the [formset](http://docs.djangoproject.com/en/1.2/topics/forms/formsets/) and [admin](http://docs.djangoproject.com/en/1.2/ref/contrib/admin/#inlinemodeladmin-objects) documentation handy while developing.
**admin.py**
```
from django.utils.functional import curry
from django.contrib import admin
from example_app.forms import *
from example_app.models import *
class AttendanceInline(admin.TabularInline):
    model = Attendance
    formset = AttendanceFormSet
    extra = 5

    def get_formset(self, request, obj=None, **kwargs):
        """
        Pre-populating formset using GET params
        """
        initial = []
        if request.method == "GET":
            #
            # Populate initial based on request
            #
            initial.append({
                'foo': 'bar',
            })
        formset = super(AttendanceInline, self).get_formset(request, obj, **kwargs)
        formset.__init__ = curry(formset.__init__, initial=initial)
        return formset
```
**forms.py**
```
from django.forms import formsets
from django.forms.models import BaseInlineFormSet
class BaseAttendanceFormSet(BaseInlineFormSet):
    def __init__(self, *args, **kwargs):
        """
        Grabs the curried initial values and stores them into a 'private'
        variable. Note: the use of self.__initial is important, using
        self.initial or self._initial will be erased by a parent class
        """
        self.__initial = kwargs.pop('initial', [])
        super(BaseAttendanceFormSet, self).__init__(*args, **kwargs)

    def total_form_count(self):
        return len(self.__initial) + self.extra

    def _construct_forms(self):
        return formsets.BaseFormSet._construct_forms(self)

    def _construct_form(self, i, **kwargs):
        if self.__initial:
            try:
                kwargs['initial'] = self.__initial[i]
            except IndexError:
                pass
        return formsets.BaseFormSet._construct_form(self, i, **kwargs)

AttendanceFormSet = formsets.formset_factory(AttendanceForm, formset=BaseAttendanceFormSet)
``` |
Pre-populate an inline FormSet? | 442,040 | 37 | 2009-01-14T05:49:25Z | 16,417,369 | 13 | 2013-05-07T11:01:21Z | [
"python",
"django",
"django-forms"
] | I'm working on an attendance entry form for a band. My idea is to have a section of the form to enter event information for a performance or rehearsal. Here's the model for the event table:
```
class Event(models.Model):
    event_id = models.AutoField(primary_key=True)
    date = models.DateField()
    event_type = models.ForeignKey(EventType)
    description = models.TextField()
```
Then I'd like to have an inline FormSet that links the band members to the event and records whether they were present, absent, or excused:
```
class Attendance(models.Model):
    attendance_id = models.AutoField(primary_key=True)
    event_id = models.ForeignKey(Event)
    member_id = models.ForeignKey(Member)
    attendance_type = models.ForeignKey(AttendanceType)
    comment = models.TextField(blank=True)
```
Now, what I'd like to do is to pre-populate this inline FormSet with entries for all the current members and default them to being present (around 60 members). Unfortunately, Django [doesn't allow initial values in this case.](http://groups.google.com/group/django-developers/browse_thread/thread/73af9e58bd7626a8)
Any suggestions? | Django 1.4 and higher supports [providing initial values](https://docs.djangoproject.com/en/1.5/topics/forms/modelforms/#providing-initial-values).
In terms of the original question, the following would work:
```
from django.forms.models import BaseInlineFormSet

class AttendanceFormSet(BaseInlineFormSet):
    def __init__(self, *args, **kwargs):
        super(AttendanceFormSet, self).__init__(*args, **kwargs)

        # Check that the data doesn't already exist
        if not kwargs['instance'].member_id_set.filter():  # filter on some criteria
            initial = []
            initial.append({})  # Fill in with some data
            self.initial = initial
            # Make enough extra formsets to hold initial forms
            self.extra += len(initial)
```
If you find that the forms are being populated but not being saved, then you may need to customize your model form. An easy way is to pass a tag in the initial data and look for it in the form init:
```
class AttendanceForm(forms.ModelForm):
    def __init__(self, *args, **kwargs):
        super(AttendanceForm, self).__init__(*args, **kwargs)

        # If the form was prepopulated from default data (and has the
        # appropriate tag set), then manually set the changed data
        # so later model saving code is activated when calling
        # has_changed().
        initial = kwargs.get('initial')
        if initial:
            self._changed_data = initial.copy()

    class Meta:
        model = Attendance
``` |
ReadInt(), ReadByte(), ReadString(), etc. in Python? | 442,188 | 3 | 2009-01-14T07:34:04Z | 442,197 | 8 | 2009-01-14T07:38:17Z | [
"python"
] | The functions ReadInt(), ReadByte(), and ReadString() (to name a few) exist in other languages for reading input from streams. I am trying to read from a socket, and I want to use functions like these. Are they tucked away in Python somewhere under a different name, or has someone made a library for it?
Also, there are Write*datatype*() counterparts. | I think [struct.unpack\_from](http://docs.python.org/library/struct.html#struct.unpack_from) is what you're looking for. |
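A quick illustration of `struct.unpack_from`, assuming a made-up little-endian buffer layout (an int32 followed by two raw bytes):

```python
import struct

buf = b"\x2a\x00\x00\x00Hi"

# Read a little-endian int32 at offset 0, then 2 raw bytes at offset 4,
# without copying or consuming the buffer.
(number,) = struct.unpack_from("<i", buf, 0)
(tag,) = struct.unpack_from("2s", buf, 4)
print(number, tag)
```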
ReadInt(), ReadByte(), ReadString(), etc. in Python? | 442,188 | 3 | 2009-01-14T07:34:04Z | 4,338,551 | 7 | 2010-12-02T18:37:08Z | [
"python"
] | The functions ReadInt(), ReadByte(), and ReadString() (to name a few) exist in other languages for reading input from streams. I am trying to read from a socket, and I want to use functions like these. Are they tucked away in Python somewhere under a different name, or has someone made a library for it?
Also, there are Write*datatype*() counterparts. | Python's way is using struct.unpack to read binary data.
I'm very used to the BinaryReader and BinaryWriter in C#, so I made this:
```
from struct import *
class BinaryStream:
    def __init__(self, base_stream):
        self.base_stream = base_stream

    def readByte(self):
        return self.base_stream.read(1)

    def readBytes(self, length):
        return self.base_stream.read(length)

    def readChar(self):
        return self.unpack('b')

    def readUChar(self):
        return self.unpack('B')

    def readBool(self):
        return self.unpack('?')

    def readInt16(self):
        return self.unpack('h', 2)

    def readUInt16(self):
        return self.unpack('H', 2)

    def readInt32(self):
        return self.unpack('i', 4)

    def readUInt32(self):
        return self.unpack('I', 4)

    def readInt64(self):
        return self.unpack('q', 8)

    def readUInt64(self):
        return self.unpack('Q', 8)

    def readFloat(self):
        return self.unpack('f', 4)

    def readDouble(self):
        return self.unpack('d', 8)

    def readString(self):
        length = self.readUInt16()
        return self.unpack(str(length) + 's', length)

    def writeBytes(self, value):
        self.base_stream.write(value)

    def writeChar(self, value):
        self.pack('c', value)

    def writeUChar(self, value):
        self.pack('B', value)  # 'B' = unsigned char ('C' is not a valid struct format)

    def writeBool(self, value):
        self.pack('?', value)

    def writeInt16(self, value):
        self.pack('h', value)

    def writeUInt16(self, value):
        self.pack('H', value)

    def writeInt32(self, value):
        self.pack('i', value)

    def writeUInt32(self, value):
        self.pack('I', value)

    def writeInt64(self, value):
        self.pack('q', value)

    def writeUInt64(self, value):
        self.pack('Q', value)

    def writeFloat(self, value):
        self.pack('f', value)

    def writeDouble(self, value):
        self.pack('d', value)

    def writeString(self, value):
        length = len(value)
        self.writeUInt16(length)
        self.pack(str(length) + 's', value)

    def pack(self, fmt, data):
        return self.writeBytes(pack(fmt, data))

    def unpack(self, fmt, length = 1):
        return unpack(fmt, self.readBytes(length))[0]
Once you have a stream, you pass it to the BinaryStream constructor and you get a BinaryStream :)
Example:
```
from binary import BinaryStream

f = open("Users", "rb")
stream = BinaryStream(f)

users_count = stream.readUInt64()
for i in range(users_count):
    username = stream.readString()
    password = stream.readString()
``` |
Generic catch for python | 442,343 | 33 | 2009-01-14T09:05:20Z | 442,358 | 60 | 2009-01-14T09:12:41Z | [
"python",
"exception"
] | I have some extremely weird behavior that seems to result in silent exceptions. How can I write a general try catch where I can debug all exceptions. Something along the lines of:
```
try:
    # something that fails
except e:
    print e
```
A bit more about the problem at hand in detail:
I have a Django app that on my comp (Ubuntu Linux 8.10) works fine both through runserver and mod-python. On the deploy server (Ubuntu Linux 8.10) it works fine through runserver, but fails via apache in mod-python.
I have reduced the cause down to a part of the app that uses Berkeley DB (bsddb.db) and secondary keys. The callback method for secondary keys uses pickle to format the keys. It fails when I call pickle on a single value. However, it only fails when I use cPickle, and using pickle on the same values outside the callback function also works.
I just want to know why it fails with cPickle. | Exceptions are already printed by default before program termination. If you want to send the error somewhere else (not print it) you can do this:
```
import sys
import traceback

try:
    something()
except Exception as e:
    send_somewhere(traceback.format_exception(*sys.exc_info()))
    raise  # reraises the exception
```
Note that this format using the `as` keyword works in Python 2.6 and later. The old way was:
```
except Exception, e:
``` |
How to properly interact with a process using subprocess module | 443,057 | 6 | 2009-01-14T14:07:12Z | 443,080 | 11 | 2009-01-14T14:16:38Z | [
"python",
"subprocess"
] | I'm having problems redirecting stdio of another program using subprocess module. Just reading from stdout results in hanging, and Popen.communicate() works but it closes pipes after reading/writing. What's the easiest way to implement this?
I was playing around with this on windows:
```
import subprocess
proc = subprocess.Popen('python -c "while True: print \'Hi %s!\' % raw_input()"',
                        shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT)
while True:
    proc.stdin.write('world\n')
    proc_read = proc.stdout.readline()
    if proc_read:
        print proc_read
``` | Doesn't fit 100% to your example but helps to understand the underlying issue: Process P starts child C. Child C writes something to its stdout. stdout of C is a pipe which has a 4096 character buffer and the output is shorter than that. Now, C waits for some input. For C, everything is fine.
P waits for the output which will never come because the OS sees no reason to flush the output buffer of C (with so little data in it). Since P never gets the output of C, it will never write anything to C, so C hangs waiting for the input from P.
Fix: Use flush after every write to a pipe forcing the OS to send the data *now*.
In your case, adding `proc.stdin.flush()` in the main while loop and a `sys.stdout.flush()` in the child loop after the print should fix your problem.
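A runnable sketch of that fix, mirroring the question's example but with both flushes in place (Python 3 text-mode pipes; the child is inlined with `-c` so the example is self-contained):

```python
import subprocess
import sys

# Child: echo a greeting for every input line, flushing after each print.
child = (
    "import sys\n"
    "while True:\n"
    "    line = sys.stdin.readline()\n"
    "    if not line:\n"
    "        break\n"
    "    print('Hi %s!' % line.strip())\n"
    "    sys.stdout.flush()\n"
)
proc = subprocess.Popen([sys.executable, "-u", "-c", child],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        universal_newlines=True)

replies = []
for name in ("world", "stackoverflow"):
    proc.stdin.write(name + "\n")
    proc.stdin.flush()                     # push the data to the child *now*
    replies.append(proc.stdout.readline().rstrip())

proc.stdin.close()
proc.wait()
print(replies)
```

Without either flush the two processes deadlock exactly as described above.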
You should also consider moving the code which reads from the other process into a thread. The idea here is that you can never know when the data will arrive and using a thread helps you to understand these issues while you write the code which processes the results.
At this point, I wanted to show you the new Python 2.6 documentation, but it doesn't explain the flush issue either :( Oh well ...
Why does Paramiko hang if you use it while loading a module? | 443,387 | 8 | 2009-01-14T15:35:25Z | 450,895 | 13 | 2009-01-16T16:01:36Z | [
"python",
"multithreading",
"ssh",
"module",
"paramiko"
] | Put the following into a file **hello.py** (and `easy_install paramiko` if you haven't got it):
```
hostname,username,password='fill','these','in'
import paramiko
c = paramiko.SSHClient()
c.set_missing_host_key_policy(paramiko.AutoAddPolicy())
c.connect(hostname=hostname, username=username, password=password)
i,o,e = c.exec_command('ls /')
print(o.read())
c.close()
```
Fill in the first line appropriately.
Now type
```
python hello.py
```
and you'll see some ls output.
Now instead type
```
python
```
and then from within the interpreter type
```
import hello
```
and voila! It hangs! It will unhang if you wrap the code in a function `foo` and do `import hello; hello.foo()` instead.
Why does Paramiko hang when used within module initialization? **How is Paramiko even aware that it's being used during module initialization in the first place?** | Paramiko uses separate threads for the underlying transport.
You should *never* have a module that spawns a thread as a side effect of importing. As I understand it, there is a single import lock available, so when a child thread from your module attempts another import, it can block indefinitely, because your main thread still holds the lock. (There are probably other gotchas that I'm not aware of too)
In general, modules shouldn't have side effects of any sort when importing, or you're going to get unpredictable results. Just hold off execution with the `__name__ == '__main__'` trick, and you'll be fine.
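The guard looks like this — a minimal sketch of how `hello.py` could be restructured so that importing it has no side effects (the function body is a placeholder, not real paramiko code):

```python
# hello.py -- safe to import: nothing runs at import time

def run_ls():
    """Placeholder for the paramiko connect/exec_command work."""
    return "ls output"

if __name__ == "__main__":
    # Only reached when run as a script, never on `import hello`
    print(run_ls())
```

With this shape, `import hello; hello.run_ls()` does the work explicitly and the import lock is never held while the transport threads start.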
[EDIT]
I can't seem to create a simple test case that reproduces this deadlock. I still assume it's a threading issue with import, because the auth code is waiting for an event that never fires. This may be a bug in paramiko, or python, but the good news is that you shouldn't ever see it if you do things correctly ;)
This is a good example why you always want to minimize side effects, and why functional programming techniques are becoming more prevalent. |
Addressing instance name string in __init__(self) in Python | 443,775 | 4 | 2009-01-14T16:55:48Z | 443,868 | 7 | 2009-01-14T17:13:51Z | [
"python",
"instance",
"instantiation"
] | I am doing something like this:
```
class Class(object):
def __init__(self):
self.var=#new instance name string#
```
How do I make the `__init__` method of my instance use the instance name string? Say, in this case:
```
c=Class()
```
I want c.var equal to 'c'.
Thanks for your replies, I am implementing persistence and Class is persistent object's class. I want `__init__` to add an entry to the database when:
```
c=Class()
```
Then, suppose:
```
del c
```
Later on:
```
c=Class()
```
should create an instance using data from the database if there is already an entry 'c', otherwise create a new entry. | Python doesn't have variables, it has [objects and names](http://effbot.org/zone/python-objects.htm). When you do
```
c = Class()
```
you're doing two things:
1. Creating a new object of type `Class`
2. Binding the object to the name `c` in the current scope.
The object you created doesn't have any concept of a "variable name" -- If later you do
```
a = c
```
then the same object is accessible in exactly the same way using the names `a` and `c`. You can delete the name `a`, and the object would still exist.
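Because the object can't recover its own name, the persistence described in the question has to take the name as an explicit argument too. A minimal in-memory sketch of that idea — `Persistent` is a hypothetical name, and the class-level dict stands in for the database:

```python
class Persistent(object):
    _registry = {}  # stand-in for the database table

    def __new__(cls, name):
        if name in cls._registry:           # existing entry: reuse it
            return cls._registry[name]
        obj = super(Persistent, cls).__new__(cls)
        obj.name = name                     # new entry: "insert" it
        cls._registry[name] = obj
        return obj

c = Persistent('c')
c.var = 42
del c                  # deletes the *name*, not the stored object
c = Persistent('c')    # same name -> the stored object comes back
print(c.var)
```

A real implementation would read from and write to the database in `__new__` instead of a dict, but the key point is the same: the name is passed in, never inferred.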
If the objects you create need to have a name, the best way is to pass it to them [explicitly](http://www.python.org/dev/peps/pep-0020/),
```
class Class(object):
def __init__(self, name):
self.name = name
var = Class('var')
``` |
Python: Callbacks, Delegates, ... ? What is common? | 443,885 | 21 | 2009-01-14T17:20:28Z | 444,057 | 14 | 2009-01-14T18:07:35Z | [
"python",
"events",
"delegates",
"callback"
] | Just want to know what's the common way to react on events in python. There are several ways in other languages like callback functions, delegates, listener-structures and so on.
Is there a common way? Which default language concepts or additional modules are there and which can you recommend? | Personally I don't see a difference between callbacks, listeners, and delegates.
The [observer pattern](http://en.wikipedia.org/wiki/Observer_pattern) (a.k.a listeners, a.k.a "multiple callbacks") is easy to implement - just hold a list of observers, and add or remove callables from it. These callables can be functions, bound methods, or classes with the `__call__` magic method. All you have to do is define the interface you expect from these - e.g. do they receive any parameters.
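Unsubscribing works the same way in reverse — remove the callable from the list. A tiny self-contained sketch of that registry idea (the fuller example below shows the pattern end-to-end):

```python
class Signal(object):
    """Tiny observer registry: subscribe, unsubscribe, fire."""
    def __init__(self):
        self._observers = []
    def subscribe(self, fn):
        self._observers.append(fn)
    def unsubscribe(self, fn):
        self._observers.remove(fn)
    def fire(self, *args):
        for fn in list(self._observers):  # copy: handlers may unsubscribe mid-fire
            fn(*args)

seen = []
def on_fire(value):
    seen.append(value)

sig = Signal()
sig.subscribe(on_fire)
sig.fire(1)
sig.unsubscribe(on_fire)
sig.fire(2)          # no observers left, nothing recorded
print(seen)
```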
```
class Foo(object):
def __init__(self):
self._bar_observers = []
def add_bar_observer(self, observer):
self._bar_observers.append(observer)
def notify_bar(self, param):
for observer in self._bar_observers:
observer(param)
def observer(param):
print "observer(%s)" % param
class Baz(object):
def observer(self, param):
print "Baz.observer(%s)" % param
class CallableClass(object):
def __call__(self, param):
print "CallableClass.__call__(%s)" % param
baz = Baz()
foo = Foo()
foo.add_bar_observer(observer) # function
foo.add_bar_observer(baz.observer) # bound method
foo.add_bar_observer(CallableClass()) # callable instance
foo.notify_bar(3)
``` |
How to create python bytes object from long hex string? | 443,967 | 39 | 2009-01-14T17:42:50Z | 443,985 | 26 | 2009-01-14T17:46:38Z | [
"python",
"hex",
"byte"
] | I have a long sequence of hex digits in a string, such as
> 000000000000484240FA063DE5D0B744ADBED63A81FAEA390000C8428640A43D5005BD44
only much longer, several kilobytes. Is there a builtin way to convert this to a bytes object in python 2.6/3? | Try the [binascii module](http://doc.astro-wise.org/binascii.html#-unhexlify)
```
from binascii import unhexlify
b = unhexlify(myhexstr)
``` |
How to create python bytes object from long hex string? | 443,967 | 39 | 2009-01-14T17:42:50Z | 443,990 | 37 | 2009-01-14T17:47:20Z | [
"python",
"hex",
"byte"
] | I have a long sequence of hex digits in a string, such as
> 000000000000484240FA063DE5D0B744ADBED63A81FAEA390000C8428640A43D5005BD44
only much longer, several kilobytes. Is there a builtin way to convert this to a bytes object in python 2.6/3? | You can do this with the hex codec. ie:
```
>>> s='000000000000484240FA063DE5D0B744ADBED63A81FAEA390000C8428640A43D5005BD44'
>>> s.decode('hex')
'\x00\x00\x00\x00\x00\x00HB@\xfa\x06=\xe5\xd0\xb7D\xad\xbe\xd6:\x81\xfa\xea9\x00\x00\xc8B\x86@\xa4=P\x05\xbdD'
``` |
How to create python bytes object from long hex string? | 443,967 | 39 | 2009-01-14T17:42:50Z | 444,005 | 21 | 2009-01-14T17:50:38Z | [
"python",
"hex",
"byte"
] | I have a long sequence of hex digits in a string, such as
> 000000000000484240FA063DE5D0B744ADBED63A81FAEA390000C8428640A43D5005BD44
only much longer, several kilobytes. Is there a builtin way to convert this to a bytes object in python 2.6/3? | ```
result = bytes.fromhex(some_hex_string)
``` |
How to create python bytes object from long hex string? | 443,967 | 39 | 2009-01-14T17:42:50Z | 17,160,152 | 18 | 2013-06-18T03:37:20Z | [
"python",
"hex",
"byte"
] | I have a long sequence of hex digits in a string, such as
> 000000000000484240FA063DE5D0B744ADBED63A81FAEA390000C8428640A43D5005BD44
only much longer, several kilobytes. Is there a builtin way to convert this to a bytes object in python 2.6/3? | Works in Python 2.7 and higher including python3:
```
result = bytearray.fromhex('deadbeef')
```
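A quick round-trip sanity check (Python 3 shown; `bytes.hex()` needs 3.5+, while `bytearray.fromhex` also exists in 2.7):

```python
hexstr = '000000000000484240FA063DE5D0B744'
data = bytes.fromhex(hexstr)            # bytearray.fromhex(hexstr) works the same way
assert len(data) == len(hexstr) // 2    # two hex digits per byte
assert data.hex() == hexstr.lower()     # and back again
print(len(data))
```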
**Note:** There seems to be a bug with the `bytearray.fromhex()` function in Python 2.6. The python.org documentation states that the function accepts a string as an argument, but when applied, the following error is thrown:
```
>>> bytearray.fromhex('B9 01EF')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: fromhex() argument 1 must be unicode, not str
``` |
python - readable list of objects | 444,058 | 5 | 2009-01-14T18:07:54Z | 444,073 | 8 | 2009-01-14T18:12:20Z | [
"python",
"list"
This is probably a commonly asked question but I could do with help on this. I have a list of class objects and I'm trying to figure out how to print an item from that list, but rather than displaying it as
```
<__main__.evolutions instance at 0x01B8EA08>
```
I want it to show a selected attribute of a chosen object of the class. Can anyone help with that? | If you want to just display a particular attribute of each class instance, you can do
```
print([obj.attr for obj in my_list_of_objs])
```
Which will print out the `attr` attribute of each object in the list `my_list_of_objs`. Alternatively, you can define the `__str__()` method for your class, which specifies how to convert your objects into strings:
```
class evolutions:
def __str__(self):
        # return a string representation of self, e.g.:
        return str(self.attr)
print(my_list_of_objs) # each object is now printed out according to its __str__() method
``` |
sqlalchemy, turning a list of IDs to a list of objects | 444,475 | 17 | 2009-01-14T20:08:32Z | 457,057 | 10 | 2009-01-19T09:49:39Z | [
"python",
"sqlalchemy"
] | I have sequence of IDs I want to retrieve. It's simple:
```
session.query(Record).filter(Record.id.in_(seq)).all()
```
Is there a better way to do it? | Your code is absolutety fine.
`IN` is like a bunch of `X=Y` joined with `OR` and is pretty fast in contemporary databases.
However, if your list of IDs is long, you could make the query a bit more efficient by passing a sub-query returning the list of IDs. |
convert a string of bytes into an int (python) | 444,591 | 76 | 2009-01-14T20:46:17Z | 444,610 | 61 | 2009-01-14T20:52:39Z | [
"python",
"string",
"bytearray"
] | How can I convert a string of bytes into an int in python?
Say like this: `'y\xcc\xa6\xbb'`
I came up with a clever/stupid way of doing it:
```
sum(ord(c) << (i * 8) for i, c in enumerate('y\xcc\xa6\xbb'[::-1]))
```
I know there has to be something builtin or in the standard library that does this more simply...
This is different from [converting a string of hex digits](http://stackoverflow.com/questions/209513/convert-hex-string-to-int-in-python) for which you can use int(xxx, 16), but instead I want to convert a string of actual byte values.
UPDATE:
I kind of like James' answer a little better because it doesn't require importing another module, but Greg's method is faster:
```
>>> from timeit import Timer
>>> Timer('struct.unpack("<L", "y\xcc\xa6\xbb")[0]', 'import struct').timeit()
0.36242198944091797
>>> Timer("int('y\xcc\xa6\xbb'.encode('hex'), 16)").timeit()
1.1432669162750244
```
My hacky method:
```
>>> Timer("sum(ord(c) << (i * 8) for i, c in enumerate('y\xcc\xa6\xbb'[::-1]))").timeit()
2.8819329738616943
```
FURTHER UPDATE:
Someone asked in comments what's the problem with importing another module. Well, importing a module isn't necessarily cheap, take a look:
```
>>> Timer("""import struct\nstruct.unpack(">L", "y\xcc\xa6\xbb")[0]""").timeit()
0.98822188377380371
```
Including the cost of importing the module negates almost all of the advantage that this method has. I believe that this will only include the expense of importing it once for the entire benchmark run; look what happens when I force it to reload every time:
```
>>> Timer("""reload(struct)\nstruct.unpack(">L", "y\xcc\xa6\xbb")[0]""", 'import struct').timeit()
68.474128007888794
```
Needless to say, if you're doing a lot of executions of this method per one import than this becomes proportionally less of an issue. It's also probably i/o cost rather than cpu so it may depend on the capacity and load characteristics of the particular machine. | You can also use the `struct` module to do this:
```
>>> struct.unpack("<L", "y\xcc\xa6\xbb")[0]
3148270713L
``` |
convert a string of bytes into an int (python) | 444,591 | 76 | 2009-01-14T20:46:17Z | 444,814 | 51 | 2009-01-14T21:42:52Z | [
"python",
"string",
"bytearray"
] | How can I convert a string of bytes into an int in python?
Say like this: `'y\xcc\xa6\xbb'`
I came up with a clever/stupid way of doing it:
```
sum(ord(c) << (i * 8) for i, c in enumerate('y\xcc\xa6\xbb'[::-1]))
```
I know there has to be something builtin or in the standard library that does this more simply...
This is different from [converting a string of hex digits](http://stackoverflow.com/questions/209513/convert-hex-string-to-int-in-python) for which you can use int(xxx, 16), but instead I want to convert a string of actual byte values.
UPDATE:
I kind of like James' answer a little better because it doesn't require importing another module, but Greg's method is faster:
```
>>> from timeit import Timer
>>> Timer('struct.unpack("<L", "y\xcc\xa6\xbb")[0]', 'import struct').timeit()
0.36242198944091797
>>> Timer("int('y\xcc\xa6\xbb'.encode('hex'), 16)").timeit()
1.1432669162750244
```
My hacky method:
```
>>> Timer("sum(ord(c) << (i * 8) for i, c in enumerate('y\xcc\xa6\xbb'[::-1]))").timeit()
2.8819329738616943
```
FURTHER UPDATE:
Someone asked in comments what's the problem with importing another module. Well, importing a module isn't necessarily cheap, take a look:
```
>>> Timer("""import struct\nstruct.unpack(">L", "y\xcc\xa6\xbb")[0]""").timeit()
0.98822188377380371
```
Including the cost of importing the module negates almost all of the advantage that this method has. I believe that this will only include the expense of importing it once for the entire benchmark run; look what happens when I force it to reload every time:
```
>>> Timer("""reload(struct)\nstruct.unpack(">L", "y\xcc\xa6\xbb")[0]""", 'import struct').timeit()
68.474128007888794
```
Needless to say, if you're doing a lot of executions of this method per one import than this becomes proportionally less of an issue. It's also probably i/o cost rather than cpu so it may depend on the capacity and load characteristics of the particular machine. | As Greg said, you can use struct if you are dealing with binary values, but if you just have a "hex number" but in byte format you might want to just convert it like:
```
s = 'y\xcc\xa6\xbb'
num = int(s.encode('hex'), 16)
```
...this is the same as:
```
num = struct.unpack(">L", s)[0]
```
...except it'll work for any number of bytes. |
convert a string of bytes into an int (python) | 444,591 | 76 | 2009-01-14T20:46:17Z | 9,634,417 | 138 | 2012-03-09T12:56:38Z | [
"python",
"string",
"bytearray"
] | How can I convert a string of bytes into an int in python?
Say like this: `'y\xcc\xa6\xbb'`
I came up with a clever/stupid way of doing it:
```
sum(ord(c) << (i * 8) for i, c in enumerate('y\xcc\xa6\xbb'[::-1]))
```
I know there has to be something builtin or in the standard library that does this more simply...
This is different from [converting a string of hex digits](http://stackoverflow.com/questions/209513/convert-hex-string-to-int-in-python) for which you can use int(xxx, 16), but instead I want to convert a string of actual byte values.
UPDATE:
I kind of like James' answer a little better because it doesn't require importing another module, but Greg's method is faster:
```
>>> from timeit import Timer
>>> Timer('struct.unpack("<L", "y\xcc\xa6\xbb")[0]', 'import struct').timeit()
0.36242198944091797
>>> Timer("int('y\xcc\xa6\xbb'.encode('hex'), 16)").timeit()
1.1432669162750244
```
My hacky method:
```
>>> Timer("sum(ord(c) << (i * 8) for i, c in enumerate('y\xcc\xa6\xbb'[::-1]))").timeit()
2.8819329738616943
```
FURTHER UPDATE:
Someone asked in comments what's the problem with importing another module. Well, importing a module isn't necessarily cheap, take a look:
```
>>> Timer("""import struct\nstruct.unpack(">L", "y\xcc\xa6\xbb")[0]""").timeit()
0.98822188377380371
```
Including the cost of importing the module negates almost all of the advantage that this method has. I believe that this will only include the expense of importing it once for the entire benchmark run; look what happens when I force it to reload every time:
```
>>> Timer("""reload(struct)\nstruct.unpack(">L", "y\xcc\xa6\xbb")[0]""", 'import struct').timeit()
68.474128007888794
```
Needless to say, if you're doing a lot of executions of this method per one import then this becomes proportionally less of an issue. It's also probably i/o cost rather than cpu so it may depend on the capacity and load characteristics of the particular machine.
```
>>> int.from_bytes(b'y\xcc\xa6\xbb', byteorder='big')
2043455163
```
or
```
>>> int.from_bytes(b'y\xcc\xa6\xbb', byteorder='little')
3148270713
```
according to the endianness of your byte-string.
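A quick check of both orders against the question's example value, plus the inverse `int.to_bytes`:

```python
data = b'y\xcc\xa6\xbb'
big = int.from_bytes(data, byteorder='big')
little = int.from_bytes(data, byteorder='little')
print(big, little)
# to_bytes is the inverse, given the same length and byte order
assert big.to_bytes(4, byteorder='big') == data
```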
This also works for bytestring-integers of arbitrary length, and for two's-complement signed integers by specifying `signed=True`. See the [docs for `from_bytes`](http://docs.python.org/dev/library/stdtypes.html#int.from_bytes). |
Cleanest way to run/debug python programs in windows | 445,595 | 18 | 2009-01-15T04:01:14Z | 445,618 | 8 | 2009-01-15T04:14:05Z | [
"python",
"windows",
"python-idle"
] | Python for Windows by default comes with IDLE, which is the barest-bones IDE I've ever encountered. For editing files, I'll stick to emacs, thank you very much.
However, I want to run programs in some other shell than the crappy windows command prompt, which can't be widened to more than 80 characters.
IDLE lets me run programs in it *if* I open the file, then hit F5 (to go Run-> Run Module). I would rather like to just "run" the command, rather than going through the rigmarole of closing the emacs file, loading the IDLE file, etc. A scan of google and the IDLE docs doesn't seem to give much help about using IDLE's shell but not its IDE.
Any advice from the stack overflow guys? Ideally I'd either like
* advice on running programs using IDLE's shell
* advice on other ways to run python programs in windows outside of IDLE or "cmd".
Thanks,
/YGA | You can easily widen the Windows console by doing the following:
* click the icon for the console window in the upper right
* select **Properties** from the menu
* click the **Layout** tab
* change the **Window Size** > Width to 140
This can also be saved universally by changing the **Defaults** on the menu. |
Cleanest way to run/debug python programs in windows | 445,595 | 18 | 2009-01-15T04:01:14Z | 445,682 | 31 | 2009-01-15T04:49:05Z | [
"python",
"windows",
"python-idle"
] | Python for Windows by default comes with IDLE, which is the barest-bones IDE I've ever encountered. For editing files, I'll stick to emacs, thank you very much.
However, I want to run programs in some other shell than the crappy windows command prompt, which can't be widened to more than 80 characters.
IDLE lets me run programs in it *if* I open the file, then hit F5 (to go Run-> Run Module). I would rather like to just "run" the command, rather than going through the rigmarole of closing the emacs file, loading the IDLE file, etc. A scan of google and the IDLE docs doesn't seem to give much help about using IDLE's shell but not its IDE.
Any advice from the stack overflow guys? Ideally I'd either like
* advice on running programs using IDLE's shell
* advice on other ways to run python programs in windows outside of IDLE or "cmd".
Thanks,
/YGA | For an interactive interpreter, nothing beats [IPython](http://ipython.scipy.org/). It's superb. It's also free and open source. On Windows, you'll want to install the readline library. Instructions for that are on the IPython installation documentation.
[Winpdb](http://winpdb.org/) is my Python debugger of choice. It's free, open source, and cross platform (using wxWidgets for the GUI). I wrote a [tutorial on how to use Winpdb](http://code.google.com/p/winpdb/wiki/DebuggingTutorial) to help get people started on using graphical debuggers. |
GAE - How to live with no joins? | 445,827 | 13 | 2009-01-15T06:07:25Z | 446,471 | 13 | 2009-01-15T11:58:10Z | [
"python",
"google-app-engine",
"join",
"gae-datastore"
] | ## Example Problem:
### Entities:
* User contains name and a list of friends (User references)
* Blog Post contains title, content, date and Writer (User)
### Requirement:
I want a page that displays the title and a link to the blog of the last 10 posts by a user's friend. I would also like the ability to keep paging back through older entries.
## SQL Solution:
So in sql land it would be something like:
```
select * from blog_post where user_id in (select friend_id from user_friend where user_id = :userId) order by date
```
## GAE solutions i can think of are:
* Load user, loop through the list of friends and load their latest blog posts. Finally merge all the blog posts to find the latest 10 blog entries
* In a blog post have a list of all users that have the writer as a friend. This would mean a simple read but would result in quota overload when adding a friend who has lots of blog posts.
I don't believe either of these solutions will scale.
Im sure others have hit this problem but I've searched, watched google io videos, read other's code ... What am i missing? | If you look at how the SQL solution you provided will be executed, it will go basically like this:
1. Fetch a list of friends for the current user
2. For each user in the list, start an index scan over recent posts
3. Merge-join all the scans from step 2, stopping when you've retrieved enough entries
You can carry out exactly the same procedure yourself in App Engine, by using the Query instances as iterators and doing a merge join over them.
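The merge join itself is easy to sketch with `heapq.merge`, treating each friend's (already sorted) post stream as an iterator. Here plain lists of `(date, title)` pairs stand in for the per-friend datastore queries, and dates are negated so newest-first order becomes ascending for the merge:

```python
import heapq
from itertools import islice

# Stand-ins for one index scan per friend, each sorted newest-first.
posts_by_friend = [
    [(20090115, 'A3'), (20090110, 'A2'), (20090101, 'A1')],
    [(20090114, 'B2'), (20090105, 'B1')],
]

# Negate dates so heapq.merge's ascending order yields newest posts first.
streams = [((-date, title) for date, title in posts) for posts in posts_by_friend]
latest = [(-neg, title) for neg, title in islice(heapq.merge(*streams), 3)]
print(latest)
```

Only as many entries as requested are pulled from each stream, which is exactly the lazy behavior you want from the underlying queries.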
You're right that this will not scale well to large numbers of friends, but it suffers from exactly the same issues the SQL implementation has, it just doesn't disguise them as well: Fetching the latest 20 (for example) entries costs roughly O(n log n) work, where n is the number of friends. |
GAE - How to live with no joins? | 445,827 | 13 | 2009-01-15T06:07:25Z | 1,043,333 | 7 | 2009-06-25T11:06:54Z | [
"python",
"google-app-engine",
"join",
"gae-datastore"
] | ## Example Problem:
### Entities:
* User contains name and a list of friends (User references)
* Blog Post contains title, content, date and Writer (User)
### Requirement:
I want a page that displays the title and a link to the blog of the last 10 posts by a user's friend. I would also like the ability to keep paging back through older entries.
## SQL Solution:
So in sql land it would be something like:
```
select * from blog_post where user_id in (select friend_id from user_friend where user_id = :userId) order by date
```
## GAE solutions i can think of are:
* Load user, loop through the list of friends and load their latest blog posts. Finally merge all the blog posts to find the latest 10 blog entries
* In a blog post have a list of all users that have the writer as a friend. This would mean a simple read but would result in quota overload when adding a friend who has lots of blog posts.
I don't believe either of these solutions will scale.
I'm sure others have hit this problem, but I've searched, watched Google I/O videos, read others' code ... What am I missing?
<http://code.google.com/events/io/sessions/BuildingScalableComplexApps.html>
Basically the Google team suggest using list properties and what they call relational index entities, an example application can be found here: <http://pubsub-test.appspot.com/> |
Django: how do you serve media / stylesheets and link to them within templates | 446,026 | 41 | 2009-01-15T08:35:50Z | 447,991 | 47 | 2009-01-15T18:56:51Z | [
"python",
"css",
"django",
"django-templates",
"media"
] | Variations of this question have been asked, but I'm still unable to get my stylesheets to load correctly when my templates are rendered.
I'm attempting to serve static media from the Django process during development - which is strongly discouraged in production, I'm aware. I'll post my configuration and my template, and hopefully someone can help me to understand where I'm going wrong.
Note that I did try to follow the example on the Django project website, however it doesn't mention how to refer to your stylesheets from a template. I've also tried many different variations of the same thing, so my code/settings may be a little off from what's described.
settings.py
```
MEDIA_ROOT = 'D:/Dev Tools/django_projects/dso/media'
MEDIA_URL = '/media/'
ADMIN_MEDIA_PREFIX = '/media/'
```
urls.py
```
from django.conf.urls.defaults import *
from django.conf import settings
from django.contrib import admin
admin.autodiscover()
urlpatterns = patterns('',
(r'^admin/(.*)', admin.site.root),
(r'^ovramt/$', 'dso.ovramt.views.index'),
)
if settings.DEBUG:
urlpatterns += patterns('',
(r'^media/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT}),
)
```
Within my template:
```
<head>
<title> {% block title %} DSO Template {% endblock %} </title>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8" >
<link rel="stylesheet" type="text/css" href="../media/styles.css">
</head>
```
I assure you, the files (templates/media) are in the correct directory on my file system. If there's any extra information I need to provide, please post a comment.
---
Edit:
One of the problems I was having was the use of a '/' prefix on my links. If the forward slash is prepended, the link is resolved from the root of the site. If there is no forward slash, the link is resolved relative to the current level. An example:
www.example.com/application/ has a link "/app2/" and a link "app3/".
app2 will open at www.example.com/app2/ and app3 will open at www.example.com/application/app3/. This was confusing me I think. | I just had to figure this out myself.
settings.py:
```
MEDIA_ROOT = 'C:/Server/Projects/project_name/static/'
MEDIA_URL = '/static/'
ADMIN_MEDIA_PREFIX = '/media/'
```
urls.py:
```
from django.conf import settings
...
if settings.DEBUG:
urlpatterns += patterns('',
(r'^static/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT}),
)
```
template file:
```
<link rel="stylesheet" type="text/css" href="/static/css/style.css" />
```
With the file located here:
```
"C:/Server/Projects/project_name/static/css/style.css"
``` |
Django: how do you serve media / stylesheets and link to them within templates | 446,026 | 41 | 2009-01-15T08:35:50Z | 479,391 | 10 | 2009-01-26T10:49:00Z | [
"python",
"css",
"django",
"django-templates",
"media"
] | Variations of this question have been asked, but I'm still unable to get my stylesheets to load correctly when my templates are rendered.
I'm attempting to serve static media from the Django process during development - which is strongly discouraged in production, I'm aware. I'll post my configuration and my template, and hopefully someone can help me to understand where I'm going wrong.
Note that I did try to follow the example on the Django project website, however it doesn't mention how to refer to your stylesheets from a template. I've also tried many different variations of the same thing, so my code/settings may be a little off from what's described.
settings.py
```
MEDIA_ROOT = 'D:/Dev Tools/django_projects/dso/media'
MEDIA_URL = '/media/'
ADMIN_MEDIA_PREFIX = '/media/'
```
urls.py
```
from django.conf.urls.defaults import *
from django.conf import settings
from django.contrib import admin
admin.autodiscover()
urlpatterns = patterns('',
(r'^admin/(.*)', admin.site.root),
(r'^ovramt/$', 'dso.ovramt.views.index'),
)
if settings.DEBUG:
urlpatterns += patterns('',
(r'^media/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT}),
)
```
Within my template:
```
<head>
<title> {% block title %} DSO Template {% endblock %} </title>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8" >
<link rel="stylesheet" type="text/css" href="../media/styles.css">
</head>
```
I assure you, the files (templates/media) are in the correct directory on my file system. If there's any extra information I need to provide, please post a comment.
---
Edit:
One of the problems I was having was the use of a '/' prefix on my links. If the forward slash is prepended, the link is resolved from the root of the site. If there is no forward slash, the link is resolved relative to the current level. An example:
www.example.com/application/ has a link "/app2/" and a link "app3/".
app2 will open at www.example.com/app2/ and app3 will open at www.example.com/application/app3/. This was confusing me I think. | Django already has a context process for MEDIA\_URL, see [Django's documentation](http://docs.djangoproject.com/en/dev/ref/templates/api/#django-core-context-processors-media).
It should be available by default (unless you've customized CONTEXT\_PROCESSORS and forgot to add it) in a RequestContext. |
How can I check for Python version in a program that uses new language features? | 446,052 | 194 | 2009-01-15T08:45:28Z | 446,061 | 88 | 2009-01-15T08:50:17Z | [
"python",
"version"
] | If I have a Python script that requires at least a particular
version of Python, what is the correct way to fail gracefully
when an earlier version of Python is used to launch the script?
How do I get control early enough to issue an error message
and exit?
For example, I have a program that uses the ternary operator (new in 2.5) and "with" blocks
(new in 2.6). I wrote a simple little interpreter-version
checker routine which is the first thing the script would
call ... except it doesn't get that far. Instead, the
script fails during python compilation, before my routines
are even called. Thus the user of the script sees some very
obscure syntax error tracebacks - which pretty much require
an expert to deduce that it is simply the case of running
the wrong version of Python.
I know how to check the version of Python. The issue is that some syntax is illegal in older versions of Python. Consider this program:
```
import sys
if sys.version_info < (2, 4):
raise "must use python 2.5 or greater"
else:
# syntax error in 2.4, ok in 2.5
x = 1 if True else 2
print x
```
When run under 2.4, I want this result
```
$ ~/bin/python2.4 tern.py
must use python 2.5 or greater
```
and not this result:
```
$ ~/bin/python2.4 tern.py
File "tern.py", line 5
x = 1 if True else 2
^
SyntaxError: invalid syntax
```
(Channeling for a coworker.) | You can test using `eval`:
```
try:
eval("1 if True else 2")
except SyntaxError:
# doesn't have ternary
```
Also, `with` *is* available in Python 2.5, just add `from __future__ import with_statement` .
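If you'd rather gate on the version number than probe syntax with `eval`, the same early-exit idea looks like this (the commented import at the bottom is a placeholder for the module that actually uses the new syntax):

```python
import sys

if sys.version_info < (2, 5):
    sys.stderr.write("must use Python 2.5 or greater\n")
    sys.exit(1)

# Safe to import the 2.5-only code now; `real_main` is hypothetical.
# import real_main
message = "version check passed"
print(message)
```

The crucial part is that this file itself contains only syntax valid in the oldest version you want to diagnose, so the check runs before any compile error can occur.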
EDIT: to get control early enough, you could split it into different `.py` files and check compatibility in the main file before importing (e.g. in `__init__.py` in a package):
```
# __init__.py
# Check compatibility
try:
eval("1 if True else 2")
except SyntaxError:
raise ImportError("requires ternary support")
# import from another module
from impl import *
``` |
How can I check for Python version in a program that uses new language features? | 446,052 | 194 | 2009-01-15T08:45:28Z | 446,075 | 18 | 2009-01-15T08:55:44Z | [
"python",
"version"
] | If I have a Python script that requires at least a particular
version of Python, what is the correct way to fail gracefully
when an earlier version of Python is used to launch the script?
How do I get control early enough to issue an error message
and exit?
For example, I have a program that uses the ternary operator (new in 2.5) and "with" blocks
(new in 2.6). I wrote a simple little interpreter-version
checker routine which is the first thing the script would
call ... except it doesn't get that far. Instead, the
script fails during python compilation, before my routines
are even called. Thus the user of the script sees some very
obscure syntax error tracebacks - which pretty much require
an expert to deduce that it is simply the case of running
the wrong version of Python.
I know how to check the version of Python. The issue is that some syntax is illegal in older versions of Python. Consider this program:
```
import sys
if sys.version_info < (2, 5):
raise "must use python 2.5 or greater"
else:
# syntax error in 2.4, ok in 2.5
x = 1 if True else 2
print x
```
When run under 2.4, I want this result
```
$ ~/bin/python2.4 tern.py
must use python 2.5 or greater
```
and not this result:
```
$ ~/bin/python2.4 tern.py
File "tern.py", line 5
x = 1 if True else 2
^
SyntaxError: invalid syntax
```
(Channeling for a coworker.) | Try
```
import platform
platform.python_version()
```
Should give you a string like "2.3.1". If this is not exactly what you want, there is a rich set of data available through the "platform" built-in module. What you want should be in there somewhere. |
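As a sketch of what the platform module exposes (Python 3 syntax; both names are in the standard library):

```python
import platform

# python_version() returns 'major.minor.patchlevel' as one string.
version_string = platform.python_version()

# python_version_tuple() returns the same three fields as strings, e.g. ('2', '3', '1').
version_tuple = platform.python_version_tuple()

print(version_string, version_tuple)
```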
How can I check for Python version in a program that uses new language features? | 446,052 | 194 | 2009-01-15T08:45:28Z | 446,136 | 86 | 2009-01-15T09:26:51Z | [
"python",
"version"
] | If I have a Python script that requires at least a particular
version of Python, what is the correct way to fail gracefully
when an earlier version of Python is used to launch the script?
How do I get control early enough to issue an error message
and exit?
For example, I have a program that uses the ternary operator (new in 2.5) and "with" blocks
(new in 2.6). I wrote a simple little interpreter-version
checker routine which is the first thing the script would
call ... except it doesn't get that far. Instead, the
script fails during python compilation, before my routines
are even called. Thus the user of the script sees some very
obscure syntax error tracebacks - which pretty much require
an expert to deduce that it is simply the case of running
the wrong version of Python.
I know how to check the version of Python. The issue is that some syntax is illegal in older versions of Python. Consider this program:
```
import sys
if sys.version_info < (2, 5):
raise "must use python 2.5 or greater"
else:
# syntax error in 2.4, ok in 2.5
x = 1 if True else 2
print x
```
When run under 2.4, I want this result
```
$ ~/bin/python2.4 tern.py
must use python 2.5 or greater
```
and not this result:
```
$ ~/bin/python2.4 tern.py
File "tern.py", line 5
x = 1 if True else 2
^
SyntaxError: invalid syntax
```
(Channeling for a coworker.) | Have a wrapper around your program that does the following.
```
import sys
req_version = (2,5)
cur_version = sys.version_info
if cur_version >= req_version:
import myApp
myApp.run()
else:
print "Your Python interpreter is too old. Please consider upgrading."
```
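The tuple comparison above works because Python compares tuples element-wise, left to right — a minimal self-contained sketch (the required version here is a hypothetical example):

```python
import sys

REQUIRED = (2, 5)  # hypothetical minimum version

# (2, 4) < (2, 5) < (2, 5, 1) < (3, 0): element-wise tuple comparison
# does the right thing, and sys.version_info compares like a plain tuple.
supported = sys.version_info >= REQUIRED
print("supported" if supported else "interpreter too old")
```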
You can also consider using `sys.version` (a plain string, not a function), if you plan to encounter people who are using pre-2.0 Python interpreters, but then you have some regular expressions to do.
And there might be more elegant ways to do this. |
How can I check for Python version in a program that uses new language features? | 446,052 | 194 | 2009-01-15T08:45:28Z | 446,181 | 7 | 2009-01-15T09:43:38Z | [
"python",
"version"
] | If I have a Python script that requires at least a particular
version of Python, what is the correct way to fail gracefully
when an earlier version of Python is used to launch the script?
How do I get control early enough to issue an error message
and exit?
For example, I have a program that uses the ternary operator (new in 2.5) and "with" blocks
(new in 2.6). I wrote a simple little interpreter-version
checker routine which is the first thing the script would
call ... except it doesn't get that far. Instead, the
script fails during python compilation, before my routines
are even called. Thus the user of the script sees some very
obscure syntax error tracebacks - which pretty much require
an expert to deduce that it is simply the case of running
the wrong version of Python.
I know how to check the version of Python. The issue is that some syntax is illegal in older versions of Python. Consider this program:
```
import sys
if sys.version_info < (2, 5):
raise "must use python 2.5 or greater"
else:
# syntax error in 2.4, ok in 2.5
x = 1 if True else 2
print x
```
When run under 2.4, I want this result
```
$ ~/bin/python2.4 tern.py
must use python 2.5 or greater
```
and not this result:
```
$ ~/bin/python2.4 tern.py
File "tern.py", line 5
x = 1 if True else 2
^
SyntaxError: invalid syntax
```
(Channeling for a coworker.) | Sets became part of the core language in Python 2.4. In order to stay backwards compatible, I did this back then; it will work for you as well:
```
if sys.version_info < (2, 4):
from sets import Set as set
``` |
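A sketch of the fallback in context — on a modern interpreter the conditional import is simply skipped, so the built-in `set` is used either way:

```python
import sys

if sys.version_info < (2, 4):
    from sets import Set as set  # pre-2.4 library fallback; never runs on modern Python

# Either way, the name "set" now refers to a usable set type.
s = set([1, 2, 2, 3])
print(len(s))
```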
How can I check for Python version in a program that uses new language features? | 446,052 | 194 | 2009-01-15T08:45:28Z | 3,132,402 | 21 | 2010-06-28T12:38:59Z | [
"python",
"version"
] | If I have a Python script that requires at least a particular
version of Python, what is the correct way to fail gracefully
when an earlier version of Python is used to launch the script?
How do I get control early enough to issue an error message
and exit?
For example, I have a program that uses the ternary operator (new in 2.5) and "with" blocks
(new in 2.6). I wrote a simple little interpreter-version
checker routine which is the first thing the script would
call ... except it doesn't get that far. Instead, the
script fails during python compilation, before my routines
are even called. Thus the user of the script sees some very
obscure syntax error tracebacks - which pretty much require
an expert to deduce that it is simply the case of running
the wrong version of Python.
I know how to check the version of Python. The issue is that some syntax is illegal in older versions of Python. Consider this program:
```
import sys
if sys.version_info < (2, 5):
raise "must use python 2.5 or greater"
else:
# syntax error in 2.4, ok in 2.5
x = 1 if True else 2
print x
```
When run under 2.4, I want this result
```
$ ~/bin/python2.4 tern.py
must use python 2.5 or greater
```
and not this result:
```
$ ~/bin/python2.4 tern.py
File "tern.py", line 5
x = 1 if True else 2
^
SyntaxError: invalid syntax
```
(Channeling for a coworker.) | Probably the best way to do this version comparison is to use `sys.hexversion`. This is important because comparing version tuples will not give you the desired result in all python versions.
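`sys.hexversion` packs the version fields into a single integer; a sketch of unpacking it, assuming CPython's documented layout (major, minor and micro in the top three bytes):

```python
import sys

hv = sys.hexversion
major = (hv >> 24) & 0xFF
minor = (hv >> 16) & 0xFF
micro = (hv >> 8) & 0xFF
print(major, minor, micro)
```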
```
import sys
if sys.hexversion < 0x02060000:
    print "too old - need Python 2.6 or newer"
else:
    print "ok!"
``` |
How can I check for Python version in a program that uses new language features? | 446,052 | 194 | 2009-01-15T08:45:28Z | 7,642,536 | 7 | 2011-10-04T01:52:20Z | [
"python",
"version"
] | If I have a Python script that requires at least a particular
version of Python, what is the correct way to fail gracefully
when an earlier version of Python is used to launch the script?
How do I get control early enough to issue an error message
and exit?
For example, I have a program that uses the ternary operator (new in 2.5) and "with" blocks
(new in 2.6). I wrote a simple little interpreter-version
checker routine which is the first thing the script would
call ... except it doesn't get that far. Instead, the
script fails during python compilation, before my routines
are even called. Thus the user of the script sees some very
obscure syntax error tracebacks - which pretty much require
an expert to deduce that it is simply the case of running
the wrong version of Python.
I know how to check the version of Python. The issue is that some syntax is illegal in older versions of Python. Consider this program:
```
import sys
if sys.version_info < (2, 5):
raise "must use python 2.5 or greater"
else:
# syntax error in 2.4, ok in 2.5
x = 1 if True else 2
print x
```
When run under 2.4, I want this result
```
$ ~/bin/python2.4 tern.py
must use python 2.5 or greater
```
and not this result:
```
$ ~/bin/python2.4 tern.py
File "tern.py", line 5
x = 1 if True else 2
^
SyntaxError: invalid syntax
```
(Channeling for a coworker.) | Although the question is:
How do I get control early enough to issue an error message and exit?
The question that I answer is:
How do I get control early enough to issue an error message before starting the app?
I can answer it a lot differently than the other posts.
The answers so far try to solve your question from within Python.
I say, do version checking before launching Python. I see your path is Linux or unix.
However, I can only offer you a Windows script. I imagine adapting it to linux scripting syntax wouldn't be too hard.
Here is the DOS script with version 2.7:
```
@ECHO OFF
REM see http://ss64.com/nt/for_f.html
FOR /F "tokens=1,2" %%G IN ('"python.exe -V 2>&1"') DO ECHO %%H | find "2.7" > Nul
IF NOT ErrorLevel 1 GOTO Python27
ECHO must use python2.7 or greater
GOTO EOF
:Python27
python.exe tern.py
GOTO EOF
:EOF
```
This does not run any part of your application and therefore will not raise a Python exception. It does not create any temp files or add any OS environment variables. And it doesn't end your app with an exception due to different version syntax rules. That's three fewer possible security points of access.
The "FOR /F" line is the key.
FOR /F "tokens=1,2" %%G IN ('"python.exe -V 2>&1"') DO ECHO %%H | find "2.7" > Nul
For checking multiple Python versions, see:
<http://www.fpschultze.de/modules/smartfaq/faq.php?faqid=17>
And my hack version:
[MS script; Python version check prelaunch of Python module]
<http://pastebin.com/aAuJ91FQ> |
How can I check for Python version in a program that uses new language features? | 446,052 | 194 | 2009-01-15T08:45:28Z | 31,052,037 | 9 | 2015-06-25T13:39:04Z | [
"python",
"version"
] | If I have a Python script that requires at least a particular
version of Python, what is the correct way to fail gracefully
when an earlier version of Python is used to launch the script?
How do I get control early enough to issue an error message
and exit?
For example, I have a program that uses the ternary operator (new in 2.5) and "with" blocks
(new in 2.6). I wrote a simple little interpreter-version
checker routine which is the first thing the script would
call ... except it doesn't get that far. Instead, the
script fails during python compilation, before my routines
are even called. Thus the user of the script sees some very
obscure syntax error tracebacks - which pretty much require
an expert to deduce that it is simply the case of running
the wrong version of Python.
I know how to check the version of Python. The issue is that some syntax is illegal in older versions of Python. Consider this program:
```
import sys
if sys.version_info < (2, 5):
raise "must use python 2.5 or greater"
else:
# syntax error in 2.4, ok in 2.5
x = 1 if True else 2
print x
```
When run under 2.4, I want this result
```
$ ~/bin/python2.4 tern.py
must use python 2.5 or greater
```
and not this result:
```
$ ~/bin/python2.4 tern.py
File "tern.py", line 5
x = 1 if True else 2
^
SyntaxError: invalid syntax
```
(Channeling for a coworker.) | ```
import sys
# prints whether python is version 3 or not
python_version = sys.version_info.major
if python_version == 3:
print("is python 3")
else:
print("not python 3")
``` |
Possible values from sys.platform? | 446,209 | 34 | 2009-01-15T10:02:59Z | 446,210 | 22 | 2009-01-15T10:03:16Z | [
"python",
"cross-platform"
] | What are the possible return values from the following command?
```
import sys
print sys.platform
```
I know there is a lot of possibilities, so I'm mainly interested in the "main" ones (Windows, Linux, Mac OS) | Mac OS X (10.4, 10.5, 10.7, 10.8):
```
darwin
```
Linux (2.6 kernel):
```
linux2
```
Windows XP 32 bit:
```
win32
```
Versions in brackets have been checked - other/newer versions are likely to be the same. |
Possible values from sys.platform? | 446,209 | 34 | 2009-01-15T10:02:59Z | 446,215 | 19 | 2009-01-15T10:04:52Z | [
"python",
"cross-platform"
] | What are the possible return values from the following command?
```
import sys
print sys.platform
```
I know there is a lot of possibilities, so I'm mainly interested in the "main" ones (Windows, Linux, Mac OS) | > As others have indicated, sys.platform
> is derived from the name that the
> system vendor gives their system.
> However, Python also adds
> plat- to sys.path, so
> you can look at all the plat-\*
> directories in the Python
> distribution.
>
> This gives you the list
>
> aix3 aix4 atheos beos5 darwin freebsd2
> freebsd3 freebsd4 freebsd5 freebsd6
> freebsd7 generic irix5 irix6 linux2
> mac netbsd1 next3 os2emx riscos sunos5
> unixware7
>
> Of course, sys.platform can have
> additional values, when Python gets
> compiled on a system for which no
> platform-specific directory has been
> created.
From [here](http://mail.python.org/pipermail/python-list/2006-August/405454.html). |
Possible values from sys.platform? | 446,209 | 34 | 2009-01-15T10:02:59Z | 13,874,620 | 28 | 2012-12-14T07:49:05Z | [
"python",
"cross-platform"
] | What are the possible return values from the following command?
```
import sys
print sys.platform
```
I know there is a lot of possibilities, so I'm mainly interested in the "main" ones (Windows, Linux, Mac OS) | ```
.---------------------.----------.
| System | Value |
|---------------------|----------|
| Linux (2.x and 3.x) | linux2 |
| Windows | win32 |
| Windows/Cygwin | cygwin |
| Mac OS X | darwin |
| OS/2 | os2 |
| OS/2 EMX | os2emx |
| RiscOS | riscos |
| AtheOS | atheos |
| FreeBSD 7 | freebsd7 |
| FreeBSD 8 | freebsd8 |
'---------------------'----------'
``` |
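Because the Linux value changed over time ("linux2" on Python 2, plain "linux" on 3.3+), a prefix match is the usual idiom — a sketch (the family labels are just illustrative names):

```python
import sys

# Prefix match so that "linux", "linux2", "linux3" all count as Linux.
if sys.platform.startswith("linux"):
    family = "linux"
elif sys.platform == "darwin":
    family = "macos"
elif sys.platform in ("win32", "cygwin"):
    family = "windows"
else:
    family = sys.platform  # fall back to the raw value
print(family)
```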
How do I reverse Unicode decomposition using Python? | 446,222 | 5 | 2009-01-15T10:08:25Z | 446,291 | 7 | 2009-01-15T10:33:47Z | [
"python",
"unicode"
] | Using Python 2.5, I have some text in stored in a unicode object:
> Dinis e Isabel, uma difı´cil relac¸aËo
> conjugal e polı´tica
This appears to be [decomposed Unicode](http://www.unicode.org/reports/tr15/#Decomposition). Is there a generic way in Python to reverse the decomposition, so I end up with:
> Dinis e Isabel, uma difícil relação
> conjugal e política | I think you are looking for this:
```
>>> import unicodedata
>>> print unicodedata.normalize("NFC",u"c\u0327")
ç
``` |
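A sketch of the round trip in Python 3 syntax — NFC composes combining sequences into single code points, and NFD decomposes them again:

```python
import unicodedata

decomposed = "c\u0327"  # 'c' followed by a combining cedilla (two code points)
composed = unicodedata.normalize("NFC", decomposed)  # single code point U+00E7, 'ç'
print(repr(composed))
```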
How can I deploy a Perl/Python/Ruby script without installing an interpreter? | 446,685 | 21 | 2009-01-15T13:21:14Z | 446,712 | 23 | 2009-01-15T13:28:05Z | [
"python",
"ruby",
"perl"
] | I want to write a piece of software which is essentially a regex data scrubber. I am going to take a contact list in CSV and remove all non-word characters and such from the person's name.
This project has Perl written all over it but my client base is largely non-technical and installing Perl on Windows would not be worth it for them.
Any ideas on how I can use a Perl/Python/Ruby type language without all the headaches of getting the interpreter on their computer?
Thought about web for a second but it would not work for business reasons. | You can use [Perl Archive Toolkit](http://search.cpan.org/perldoc?PAR) to bring a minimal perl core + needed modules + your Perl program with you.
And you can even convert it using [PAR Packer](http://search.cpan.org/perldoc?pp) to a windows exe file that will run just like any other program, from an end user's perspective. |
How can I deploy a Perl/Python/Ruby script without installing an interpreter? | 446,685 | 21 | 2009-01-15T13:21:14Z | 446,741 | 30 | 2009-01-15T13:37:12Z | [
"python",
"ruby",
"perl"
] | I want to write a piece of software which is essentially a regex data scrubber. I am going to take a contact list in CSV and remove all non-word characters and such from the person's name.
This project has Perl written all over it but my client base is largely non-technical and installing Perl on Windows would not be worth it for them.
Any ideas on how I can use a Perl/Python/Ruby type language without all the headaches of getting the interpreter on their computer?
Thought about web for a second but it would not work for business reasons. | You can get Windows executables in all three languages.
* As usual with Perl, there's more than one way to do it:
+ [PAR Packer](http://search.cpan.org/perldoc?pp) (free/open-source)
+ [perl2exe](http://www.indigostar.com/perl2exe.htm) (shareware)
+ [PerlApp](http://community.activestate.com/products/PerlDevKit) (part of the Perl Dev Kit from ActiveState, commercial)
* Python
+ [py2exe](http://www.py2exe.org/)
+ [PyInstaller](http://www.pyinstaller.org/)
* Ruby
+ [RubyScript2Exe](http://www.erikveen.dds.nl/rubyscript2exe/)
+ [OCRA](http://ocra.rubyforge.org/) |
How can I deploy a Perl/Python/Ruby script without installing an interpreter? | 446,685 | 21 | 2009-01-15T13:21:14Z | 449,470 | 8 | 2009-01-16T03:58:59Z | [
"python",
"ruby",
"perl"
] | I want to write a piece of software which is essentially a regex data scrubber. I am going to take a contact list in CSV and remove all non-word characters and such from the person's name.
This project has Perl written all over it but my client base is largely non-technical and installing Perl on Windows would not be worth it for them.
Any ideas on how I can use a Perl/Python/Ruby type language without all the headaches of getting the interpreter on their computer?
Thought about web for a second but it would not work for business reasons. | Using [PAR, the Perl Aachiver](http://search.cpan.org/perldoc?PAR) has already been mentioned in other answers, and is an excellent solution. There's a short tutorial on [building executables using PAR](http://perltraining.com.au/tips/2008-05-23.html) that was published as a [Perl Tip](http://perltraining.com.au/tips/) last year.
In most cases, if you have [PAR::Packer](http://search.cpan.org/perldoc?PAR::Packer) already installed on your build system, you can create a stand-alone executable with no external dependencies or requirements with:
```
pp -o example.exe example.pl
```
In most cases PAR will do all the hard work of determining your module dependencies for you, but if it gets anything wrong there are additional command line options you can use to ensure they get included. See the [pp documentation](http://search.cpan.org/perldoc?pp) for more details.
All the best,
*Paul* |
Where to get/How to build Windows binary of mod_wsgi with python 3.0 support? | 447,015 | 5 | 2009-01-15T14:48:24Z | 1,037,956 | 9 | 2009-06-24T12:06:29Z | [
"python",
"apache",
"visual-c++",
"mod-wsgi"
] | I wanted to experiment a little with python 3.0 at home. I got python 3.0 working, I've played around with some scripts, and I thought it would be fun to try to make a small web-project with it.
As I was googling, it turned out, that mod\_python, for some reasons, will not be able to support python 3.0.
The only other alternative I've found is [mod\_wsgi](http://code.google.com/p/modwsgi/).
On the main page of the [mod\_wsgi](http://code.google.com/p/modwsgi/) project, it says, that if you want to play with python 3.0, you have to get the latest version from subversion repository. I was wondering, if there is somewhere a pre-built windows binaries available?
If there are no such binaries, then I'd be thankful for any resources about building it with VC++ 2008. Or maybe even general resources about building apache and it's modules with VC++ 2008. Thanks.
Oh and, I'm using the latest Apache 2.2 release.
EDIT: Will it be a problem, if I'll be using the official apache build with my own build of a mod\_wsgi (I used depends.exe on apache, and seems that it's not built with VC++ 2008)? | Binaries for Windows are now being supplied from the mod\_wsgi site for Apache 2.2 and Python 2.6 and 3.0. Python 3.0 is only supported for mod\_wsgi 3.0 onwards. See:
<http://code.google.com/p/modwsgi/downloads/list>
---
UPDATE July 2015
The above link is no longer valid. Instead see:
* <https://github.com/GrahamDumpleton/mod_wsgi/blob/develop/win32/README.rst> |
What is the difference between encode/decode? | 447,107 | 122 | 2009-01-15T15:13:59Z | 447,401 | 11 | 2009-01-15T16:15:39Z | [
"python",
"string",
"unicode",
"character-encoding",
"python-2.x"
] | I've never been sure that I understand the difference between str/unicode decode and encode.
I know that `str().decode()` is for when you have a string of bytes that you know has a certain character encoding, given that encoding name it will return a unicode string.
I know that `unicode().encode()` converts unicode chars into a string of bytes according to a given encoding name.
But I don't understand what `str().encode()` and `unicode().decode()` are for. Can anyone explain, and possibly also correct anything else I've gotten wrong above?
EDIT:
Several answers give info on what `.encode` does on a string, but no-one seems to know what `.decode` does for unicode. | mybytestring.encode(somecodec) is meaningful for these values of `somecodec`:
* base64
* bz2
* zlib
* hex
* quopri
* rot13
* string\_escape
* uu
I am not sure what decoding an already decoded unicode text is good for. Trying that with any encoding seems to always try to encode with the system's default encoding first. |
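On Python 3 these bytes-to-bytes transforms are no longer reachable through `str.encode`, but they survive in the codecs module — a sketch using the zlib and hex transforms:

```python
import codecs

data = b"hello hello hello"

# Bytes-to-bytes transforms go through codecs.encode/decode in Python 3.
packed = codecs.encode(data, "zlib_codec")  # compress
hexed = codecs.encode(data, "hex_codec")    # hex-dump the raw bytes

print(hexed)
```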
What is the difference between encode/decode? | 447,107 | 122 | 2009-01-15T15:13:59Z | 448,383 | 45 | 2009-01-15T20:41:48Z | [
"python",
"string",
"unicode",
"character-encoding",
"python-2.x"
] | I've never been sure that I understand the difference between str/unicode decode and encode.
I know that `str().decode()` is for when you have a string of bytes that you know has a certain character encoding, given that encoding name it will return a unicode string.
I know that `unicode().encode()` converts unicode chars into a string of bytes according to a given encoding name.
But I don't understand what `str().encode()` and `unicode().decode()` are for. Can anyone explain, and possibly also correct anything else I've gotten wrong above?
EDIT:
Several answers give info on what `.encode` does on a string, but no-one seems to know what `.decode` does for unicode. | To represent a unicode string as a string of bytes is known as **encoding**. Use `u'...'.encode(encoding)`.
Example:
```
>>> u'æøå'.encode('utf8')
'\xc3\xa6\xc3\xb8\xc3\xa5'
>>> u'æøå'.encode('latin1')
'\xe6\xf8\xe5'
>>> u'æøå'.encode('ascii')
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-5:
ordinal not in range(128)
```
You typically encode a unicode string whenever you need to use it for IO, for instance transfer it over the network, or save it to a disk file.
To convert a string of bytes to a unicode string is known as **decoding**. Use `unicode('...', encoding)` or '...'.decode(encoding).
Example:
```
>>> u'æøå'
u'\xe6\xf8\xe5' # the interpreter prints the unicode object like so
>>> unicode('\xe6\xf8\xe5', 'latin1')
u'\xe6\xf8\xe5'
>>> '\xe6\xf8\xe5'.decode('latin1')
u'\xe6\xf8\xe5'
```
You typically decode a string of bytes whenever you receive string data from the network or from a disk file.
I believe there are some changes in unicode handling in python 3, so the above is probably not correct for python 3.
Some good links:
* [The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)](http://www.joelonsoftware.com/articles/Unicode.html)
* [Unicode HOWTO](http://docs.python.org/howto/unicode.html) |
What is the difference between encode/decode? | 447,107 | 122 | 2009-01-15T15:13:59Z | 449,281 | 83 | 2009-01-16T02:06:33Z | [
"python",
"string",
"unicode",
"character-encoding",
"python-2.x"
] | I've never been sure that I understand the difference between str/unicode decode and encode.
I know that `str().decode()` is for when you have a string of bytes that you know has a certain character encoding, given that encoding name it will return a unicode string.
I know that `unicode().encode()` converts unicode chars into a string of bytes according to a given encoding name.
But I don't understand what `str().encode()` and `unicode().decode()` are for. Can anyone explain, and possibly also correct anything else I've gotten wrong above?
EDIT:
Several answers give info on what `.encode` does on a string, but no-one seems to know what `.decode` does for unicode. | The `decode` method of unicode strings really doesn't have any applications at all (unless you have some non-text data in a unicode string for some reason -- see below). It is mainly there for historical reasons, i think. In Python 3 it is completely gone.
`unicode().decode()` will perform an implicit *encoding* of `s` using the default (ascii) codec. Verify this like so:
```
>>> s = u'ö'
>>> s.decode()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf6' in position 0:
ordinal not in range(128)
>>> s.encode('ascii')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf6' in position 0:
ordinal not in range(128)
```
The error messages are exactly the same.
For `str().encode()` it's the other way around -- it attempts an implicit *decoding* of `s` with the default encoding:
```
>>> s = 'ö'
>>> s.decode('utf-8')
u'\xf6'
>>> s.encode()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0:
ordinal not in range(128)
```
Used like this, `str().encode()` is also superfluous.
**But** there is another application of the latter method that is useful: there are [encodings](http://docs.python.org/library/codecs.html#standard-encodings) that have nothing to do with character sets, and thus can be applied to 8-bit strings in a meaningful way:
```
>>> s.encode('zip')
'x\x9c;\xbc\r\x00\x02>\x01z'
```
You are right, though: the ambiguous usage of "encoding" for both these applications is... awkward. Again, with separate `byte` and `string` types in Python 3, this is no longer an issue. |
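A sketch of how Python 3 separates the two directions — only `str.encode` and `bytes.decode` exist, so the superfluous methods above are simply gone:

```python
text = "\u00f6"                  # 'ö' as a Python 3 str
data = text.encode("utf-8")      # str -> bytes (the only direction encode goes)
restored = data.decode("utf-8")  # bytes -> str (the only direction decode goes)
print(repr(data))
```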
What is the difference between encode/decode? | 447,107 | 122 | 2009-01-15T15:13:59Z | 452,378 | 9 | 2009-01-16T23:11:40Z | [
"python",
"string",
"unicode",
"character-encoding",
"python-2.x"
] | I've never been sure that I understand the difference between str/unicode decode and encode.
I know that `str().decode()` is for when you have a string of bytes that you know has a certain character encoding, given that encoding name it will return a unicode string.
I know that `unicode().encode()` converts unicode chars into a string of bytes according to a given encoding name.
But I don't understand what `str().encode()` and `unicode().decode()` are for. Can anyone explain, and possibly also correct anything else I've gotten wrong above?
EDIT:
Several answers give info on what `.encode` does on a string, but no-one seems to know what `.decode` does for unicode. | anUnicode.**encode**('encoding') results in a **string** object and can be called on a unicode object
aString.**decode**('encoding') results in a **unicode** object and can be called on a string encoded in the given encoding.
---
Some more explanations:
You can create some unicode object, which doesn't have any encoding set. The way it is stored by Python in memory is none of your concern. You can search it, split it and call any string manipulating function you like.
But there comes a time when you'd like to print your unicode object to the console or into some text file. So you have to **encode** it (for example, in UTF-8): you call encode('utf-8') and you get a byte string that can be printed or written out.
Then again, you'd like to do the opposite: read a string encoded in UTF-8 and treat it as Unicode, so that a multi-byte sequence becomes one character, not several. You **decode** the string (with the selected encoding) and get a brand new object of the unicode type.
Just as a side note, you can select some exotic codec like 'zip', 'base64' or 'rot13'; some of them convert from string to string, but the most common case is the one that involves UTF-8/UTF-16 and a byte string. |
Django: Increment blog entry view count by one. Is this efficient? | 447,117 | 16 | 2009-01-15T15:15:31Z | 447,433 | 12 | 2009-01-15T16:22:34Z | [
"python",
"database",
"django",
"performance"
] | I have the following code in my index view.
```
latest_entry_list = Entry.objects.filter(is_published=True).order_by('-date_published')[:10]
for entry in latest_entry_list:
entry.views = entry.views + 1
entry.save()
```
If there are ten (the limit) rows returned from the initial query, will the save issue 10 separate update calls to the database, or is Django "smart" enough to issue just one update call?
Is there a more efficient method to achieve this result? | You could handle the updates in a single transaction, which could improve performance significantly. Use a separate function, decorated with @transaction.commit\_manually.
```
@transaction.commit_manually
def update_latest_entries(latest_entry_list):
for entry in latest_entry_list:
entry.views += 1
entry.save()
transaction.commit()
``` |
Django: Increment blog entry view count by one. Is this efficient? | 447,117 | 16 | 2009-01-15T15:15:31Z | 889,463 | 37 | 2009-05-20T18:23:57Z | [
"python",
"database",
"django",
"performance"
] | I have the following code in my index view.
```
latest_entry_list = Entry.objects.filter(is_published=True).order_by('-date_published')[:10]
for entry in latest_entry_list:
entry.views = entry.views + 1
entry.save()
```
If there are ten (the limit) rows returned from the initial query, will the save issue 10 separate update calls to the database, or is Django "smart" enough to issue just one update call?
Is there a more efficient method to achieve this result? | You can use `F()` objects for this.
Here is how you import `F`: `from django.db.models import F`
**[New in Django 1.1](http://docs.djangoproject.com/en/dev/topics/db/queries/#updating-multiple-objects-at-once).**
Calls to update can also use F() objects to update one field based on the value of another field in the model. This is especially useful for incrementing counters based upon their current value.
```
Entry.objects.filter(is_published=True).update(views=F('views')+1)
```
Although you can't do an update on a sliced query set... **edit: actually you can...**
This can be done completely in django ORM. You need two SQL queries:
1. Do your filter and collect a list of primary keys
2. Do an update on a non-sliced query set of items matching any of those primary keys.
Getting the non-sliced query set is the hard bit. I wondered about using [`in_bulk`](http://docs.djangoproject.com/en/dev/ref/models/querysets/#in-bulk-id-list) but that returns a dictionary, not a query set. One would usually use [`Q objects`](http://docs.djangoproject.com/en/dev/topics/db/queries/#complex-lookups-with-q-objects) to do complex OR type queries and that will work, but [`pk__in`](http://docs.djangoproject.com/en/dev/ref/models/querysets/#in) does the job much more simply.
```
latest_entry_ids = Entry.objects.filter(is_published=True)\
.order_by('-date_published')\
.values_list('id', flat=True)[:10]
non_sliced_query_set = Entry.objects.filter(pk__in=latest_entry_ids)
n = non_sliced_query_set.update(views=F('views')+1)
print n or 0, 'items updated'
```
Due to the way that django executes queries lazily, this results in just 2 database hits, no matter how many items are updated. |
Porting library from Java to Python | 448,095 | 6 | 2009-01-15T19:23:22Z | 448,137 | 9 | 2009-01-15T19:32:22Z | [
"java",
"python"
] | I'm about to port a smallish library from Java to Python and wanted some advice (smallish ~ a few thousand lines of code). I've studied the Java code a little, and noticed some design patterns that are common in both languages. However, there were definitely some Java-only idioms (singletons, etc) present that are generally not-well-received in Python-world.
I know at least one tool (j2py) exists that will turn a .java file into a .py file by walking the AST. Some initial experimentation yielded less than favorable results.
Should I even be considering using an automated tool to generate some code, or are the languages different enough that any tool would create enough re-work to have justified writing from scratch?
If tools aren't the devil, are there any besides j2py that can at least handle same-project import management? I don't expect any tool to match 3rd party libraries from one language to a substitute in another. | If it were me, I'd consider doing the work by hand. A couple thousand lines of code isn't a lot of code, and by rewriting it yourself (rather than translating it automatically), you'll be in a position to decide how to take advantage of Python idioms appropriately. (FWIW, I worked Java almost exclusively for 9 years, and I'm now working in Python, so I know the kind of translation you'd have to do.) |
Python - downloading a file over HTTP with progress bar and basic authentication | 448,207 | 8 | 2009-01-15T19:52:18Z | 448,277 | 7 | 2009-01-15T20:12:47Z | [
"python",
"http",
"download"
] | I'm using `urllib.urlretrieve` to download a file, and implementing a download progress bar using the `reporthook` parameter. Since `urlretrieve` doesn't directly support authentication, I came up with
```
import urllib
def urlretrieve_with_basic_auth(url, filename=None, reporthook=None, data=None,
username="", password=""):
class OpenerWithAuth(urllib.FancyURLopener):
def prompt_user_passwd(self, host, realm):
return username, password
return OpenerWithAuth().retrieve(url, filename, reporthook, data)
```
This works -- but it seems like there might be a more direct way to do this (maybe with urllib2 or httplib2 or...) --any ideas? | [urlgrabber](http://linux.duke.edu/projects/urlgrabber/) has built-in support for progress bars, authentication, and more. |
What is __init__.py for? | 448,271 | 751 | 2009-01-15T20:09:09Z | 448,279 | 554 | 2009-01-15T20:13:24Z | [
"python",
"module"
] | What is `__init__.py` for in a Python source directory? | It's a part of a package. [Here's the documentation.](http://docs.python.org/tutorial/modules.html#packages)
> The `__init__.py` files are required to make Python treat the directories as containing packages; this is done to prevent directories with a common name, such as `string`, from unintentionally hiding valid modules that occur later (deeper) on the module search path. In the simplest case, `__init__.py` can just be an empty file, but it can also execute initialization code for the package or set the `__all__` variable, described later. |
What is __init__.py for? | 448,271 | 751 | 2009-01-15T20:09:09Z | 448,292 | 15 | 2009-01-15T20:18:32Z | [
"python",
"module"
] | What is `__init__.py` for in a Python source directory? | It facilitates importing other Python files. When you place this file in a directory (say stuff) containing other .py files, you can do something like `import stuff.other`.
```
root\
stuff\
other.py
morestuff\
another.py
```
Without this `__init__.py` inside the directory stuff, you couldn't import other.py, because Python doesn't know where the source code for stuff is and is unable to recognize it as a package.
What is __init__.py for? | 448,271 | 751 | 2009-01-15T20:09:09Z | 448,311 | 64 | 2009-01-15T20:22:58Z | [
"python",
"module"
] | What is `__init__.py` for in a Python source directory? | The `__init__.py` file makes Python treat directories containing it as modules.
Furthermore, this is the first file to be loaded in a module, so you can use it to execute code that you want to run each time a module is loaded, or specify the submodules to be exported. |
What is __init__.py for? | 448,271 | 751 | 2009-01-15T20:09:09Z | 4,116,384 | 323 | 2010-11-07T03:31:14Z | [
"python",
"module"
] | What is `__init__.py` for in a Python source directory? | Files named `__init__.py` are used to mark directories on disk as Python package directories.
If you have the files
```
mydir/spam/__init__.py
mydir/spam/module.py
```
and `mydir` is on your path, you can import the code in `module.py` as
```
import spam.module
```
or
```
from spam import module
```
If you remove the `__init__.py` file, Python will no longer look for submodules inside that directory, so attempts to import the module will fail.
The `__init__.py` file is usually empty, but can be used to export selected portions of the package under more convenient name, hold convenience functions, etc.
Given the example above, the contents of the init module can be accessed as
```
import spam
```
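As a hedged sketch of the "more convenient name" idea, an `__init__.py` can lift a submodule function up to the package level. The snippet below builds the `spam` layout from above in a temporary directory so it is self-contained; the file contents and the `greet` name are illustrative, not part of the original example:

```python
import os
import sys
import tempfile

# Build the spam/ package from the example above on disk (names are illustrative)
root = tempfile.mkdtemp()
pkg = os.path.join(root, "spam")
os.makedirs(pkg)
with open(os.path.join(pkg, "module.py"), "w") as f:
    f.write("def greet():\n    return 'hello from spam.module'\n")
# __init__.py re-exports greet, so callers need not know which file defines it
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from .module import greet\n")

sys.path.insert(0, root)
import spam

print(spam.greet())  # the submodule function is reachable at package level
```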
based on <http://effbot.org/pyfaq/what-is-init-py-used-for.htm> |
What is __init__.py for? | 448,271 | 751 | 2009-01-15T20:09:09Z | 18,979,314 | 263 | 2013-09-24T10:38:34Z | [
"python",
"module"
] | What is `__init__.py` for in a Python source directory? | In addition to labeling a directory as a Python package and defining `__all__`, **`__init__.py` allows you to define any variable at the package level.** Doing so is often convenient if a package defines something that will be imported frequently, in an API-like fashion. This pattern promotes adherence to the Pythonic "flat is better than nested" philosophy.
## An example
Here is an example from one of my projects, in which I frequently import a `sessionmaker` called `Session` to interact with my database. I wrote a "database" package with a few modules:
```
database/
__init__.py
schema.py
insertions.py
queries.py
```
My `__init__.py` contains the following code:
```
import os
from sqlalchemy.orm import sessionmaker
from sqlalchemy import create_engine
engine = create_engine(os.environ['DATABASE_URL'])
Session = sessionmaker(bind=engine)
```
Since I define `Session` here, I can start a new session using the syntax below. This code would be the same executed from inside or outside of the "database" package directory.
```
from database import Session
session = Session()
```
Of course, this is a small convenience -- the alternative would be to define `Session` in a new file like "create\_session.py" in my database package, and start new sessions using:
```
from database.create_session import Session
session = Session()
```
## Further reading
There is a pretty interesting reddit thread covering appropriate uses of `__init__.py` here:
<http://www.reddit.com/r/Python/comments/1bbbwk/whats_your_opinion_on_what_to_include_in_init_py/>
The majority opinion seems to be that `__init__.py` files should be very thin to avoid violating the "explicit is better than implicit" philosophy. |
What is __init__.py for? | 448,271 | 751 | 2009-01-15T20:09:09Z | 21,019,300 | 25 | 2014-01-09T11:45:10Z | [
"python",
"module"
] | What is `__init__.py` for in a Python source directory? | In Python the definition of package is very simple. Like Java the hierarchical structure and the directory structure are the same. But you have to have `__init__.py` in a package. I will explain the `__init__.py` file with the example below:
```
package_x/
|-- __init__.py
|-- subPackage_a/
|------ __init__.py
|------ module_m1.py
|-- subPackage_b/
|------ __init__.py
|------ module_n1.py
|------ module_n2.py
|------ module_n3.py
```
`__init__.py` can be empty, as long as it exists. It indicates that the directory should be regarded as a package. Of course, `__init__.py` can also set the appropriate content.
If we add a function in module\_n1:
```
def function_X():
print "function_X in module_n1"
return
```
After running:
```
>>>from package_x.subPackage_b.module_n1 import function_X
>>>function_X()
function_X in module_n1
```
Then we followed the package hierarchy and called module\_n1's function. We can use `__init__.py` in subPackage\_b like this:
```
__all__ = ['module_n2', 'module_n3']
```
After running:
```
>>>from package_x.subPackage_b import *
>>>module_n1.function_X()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named module_n1
```
Hence, when using \* imports, which modules the package exposes is governed by the `__init__.py` content. |
What is __init__.py for? | 448,271 | 751 | 2009-01-15T20:09:09Z | 27,757,676 | 25 | 2015-01-03T17:41:14Z | [
"python",
"module"
] | What is `__init__.py` for in a Python source directory? | `__init__.py` will treat the directory it is in as a loadable module.
For people who prefer reading code, I put [Two-Bit Alchemist's](http://stackoverflow.com/users/2588818/two-bit-alchemist) comment here.
```
$ find /tmp/mydir/
/tmp/mydir/
/tmp/mydir//spam
/tmp/mydir//spam/__init__.py
/tmp/mydir//spam/module.py
$ cd ~
$ python
>>> import sys
>>> sys.path.insert(0, '/tmp/mydir')
>>> from spam import module
>>> module.myfun(3)
9
>>> exit()
$
$ rm /tmp/mydir/spam/__init__.py*
$
$ python
>>> import sys
>>> sys.path.insert(0, '/tmp/mydir')
>>> from spam import module
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named spam
>>>
``` |
What is __init__.py for? | 448,271 | 751 | 2009-01-15T20:09:09Z | 29,509,611 | 25 | 2015-04-08T08:29:37Z | [
"python",
"module"
] | What is `__init__.py` for in a Python source directory? | 1. For convenience: The other users will not need to know your functions' exact location.
```
your_package/
__init__.py
file1.py/
file2.py/
...
fileN.py
# in __init__.py
from file1 import *
from file2 import *
...
from fileN import *
# in file1.py
def add():
pass
```
then others can call add() by
```
from your_package import add
```
without knowing file1, like
```
from your_package.file1 import add
```
2. Put something there for initialization. For example, logging (this should be put at the top level):
```
import logging.config
logging.config.dictConfig(Your_logging_config)
``` |
Setting up Python on Windows/ Apache? | 449,055 | 6 | 2009-01-16T00:02:42Z | 450,097 | 22 | 2009-01-16T11:08:32Z | [
"python",
"wamp",
"mod-python"
] | I want to get a simple Python "hello world" web page script to run on Windows Vista/ Apache but hit different walls. I'm using WAMP. I've installed `mod_python` and the module shows, but I'm not quite sure what I'm supposed to do in e.g. httpd.conf (things like AddHandler mod\_python .py either bring me to a file not found, or a forbidden, or module not found errors when accessing <http://localhost/myfolder/index.py>). I can get `mod_python.publisher` to work but do I "want" this/ need this?
Can anyone help?
Thanks! | Stay away from `mod_python`. One common misleading idea is that `mod_python` is like `mod_php`, but for python. That is not true. [Wsgi](http://wsgi.org) is the standard to run python web applications, defined by [PEP 333](http://www.python.org/dev/peps/pep-0333/). So use [`mod_wsgi`](http://code.google.com/p/modwsgi/) instead.
Or alternatively, use some web framework that has a server. [Cherrypy](http://www.cherrypy.org/)'s one is particularly good. You will be able to run your application both standalone and through `mod_wsgi`.
An example of Hello World application using cherrypy:
```
import cherrypy
class HelloWorld(object):
def index(self):
return "Hello World!"
index.exposed = True
application = HelloWorld()
if __name__ == '__main__':
cherrypy.engine.start()
cherrypy.engine.block()
```
Very easy, huh? Running this application directly with Python will start a web server. Configuring `mod_wsgi` to point at it will make it run inside Apache. |
Are there any good 3rd party GUI products for Python? | 449,168 | 5 | 2009-01-16T01:11:49Z | 449,252 | 8 | 2009-01-16T01:49:55Z | [
"python",
"user-interface"
] | In .Net you have companies like DevEpxress, and Infragistics that offer a range of GUI widgets. Is there any market like that for Python GUI widgets? I'm thinking specifically about widgets like the DevExpress xtraGrid Suite.
Edit 01-16-09: For Example:
<http://www.devexpress.com/Downloads/NET/OnlineDemos.xml>
<http://demos.devexpress.com/ASPxGridViewDemos/>
<http://www.infragistics.com/products/default.aspx> | There are a number of GUI Toolkits available for Python. Obviously, the toolkit you choose will determine your selection of 3rd party widgets.
**The Contenders**
Python comes with [Tkinter](http://wiki.python.org/moin/TkInter) which is easy to use, but not great looking.
There are some very popular cross platform GUI toolkits borrowed from C/C++ that have a lot of external widgets: [wxPython](http://www.wxpython.org/), [pyQt](http://wiki.python.org/moin/PyQt), [pyFLTK](http://pyfltk.sourceforge.net/), [pyGtk](http://www.pygtk.org/)
I also know of, but have not used, some of the other toolkits that are out there: [PyGUI](http://www.cosc.canterbury.ac.nz/greg.ewing/python_gui/), [Easygui](http://easygui.sourceforge.net/), [PythonCard](http://pythoncard.sourceforge.net/)
**My Choice**
I'm a fan of [wxPython](http://www.wxpython.org/). They have a nice selection of widgets, some great examples that come with the [install](http://www.wxpython.org/download.php#binaries), good [documentation](http://www.wxpython.org/onlinedocs.php), a [book](http://rads.stackoverflow.com/amzn/click/1932394621), and an active community.
WxWidgets has additional components offered by the community called [wxCode](http://wxcode.sourceforge.net/).
Quote about wxPython from the creator of Python:
> wxPython is the best and most mature cross-platform GUI toolkit,
> given a number of constraints. The only reason wxPython isn't the
> standard Python GUI toolkit is that Tkinter was there first.
>
> -- *Guido van Rossum* |
How do I determine the size of an object in Python? | 449,560 | 271 | 2009-01-16T05:07:12Z | 450,034 | 316 | 2009-01-16T10:42:37Z | [
"python",
"object",
"memory",
"memory-management",
"sizeof"
] | In C, we can find the size of an `int`, `char`, etc. I want to know how to get size of objects like a string, integer, etc. in Python.
Related question: [How many bytes per element are there in a Python list (tuple)?](http://stackoverflow.com/questions/135664/how-many-bytes-per-element-are-there-in-a-python-list-tuple)
I am using an XML file which contains size fields that specify the size of value. I must parse this XML and do my coding. When I want to change the value of a particular field, I will check the size field of that value. Here I want to compare whether the new value that I'm going to enter is of the same size as in XML. I need to check the size of the new value. In case of a string I can say it's the length. But in case of int, float, etc. I am confused. | Just use the [sys.getsizeof](http://docs.python.org/library/sys.html#sys.getsizeof) function defined in the `sys` module.
> `sys.getsizeof(object[, default])`:
>
> Return the size of an object in bytes.
> The object can be any type of object.
> All built-in objects will return
> correct results, but this does not
> have to hold true for third-party
> extensions as it is implementation
> specific.
>
> The `default` argument allows to define
> a value which will be returned if the
> object type does not provide means to
> retrieve the size and would cause a
> `TypeError`.
>
> `getsizeof` calls the object's
> `__sizeof__` method and adds an additional garbage collector overhead
> if the object is managed by the
> garbage collector.
Usage example, in python 3.0:
```
>>> import sys
>>> x = 2
>>> sys.getsizeof(x)
14
>>> sys.getsizeof(sys.getsizeof)
32
>>> sys.getsizeof('this')
38
>>> sys.getsizeof('this also')
48
```
If you are in python < 2.6 and don't have `sys.getsizeof` you can use [this extensive module](http://code.activestate.com/recipes/546530/) instead. Never used it though. |
How do I determine the size of an object in Python? | 449,560 | 271 | 2009-01-16T05:07:12Z | 450,351 | 11 | 2009-01-16T13:00:14Z | [
"python",
"object",
"memory",
"memory-management",
"sizeof"
] | In C, we can find the size of an `int`, `char`, etc. I want to know how to get size of objects like a string, integer, etc. in Python.
Related question: [How many bytes per element are there in a Python list (tuple)?](http://stackoverflow.com/questions/135664/how-many-bytes-per-element-are-there-in-a-python-list-tuple)
I am using an XML file which contains size fields that specify the size of value. I must parse this XML and do my coding. When I want to change the value of a particular field, I will check the size field of that value. Here I want to compare whether the new value that I'm going to enter is of the same size as in XML. I need to check the size of the new value. In case of a string I can say it's the length. But in case of int, float, etc. I am confused. | This can be more complicated than it looks depending on how you want to count things. For instance, if you have a list of ints, do you want the size of the list containing the *references* to the ints? (i.e. the list only, not what is contained in it), or do you want to include the actual data pointed to, in which case you need to deal with duplicate references, and how to prevent double-counting when two objects contain references to the same object.
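That reference-counting distinction can be seen directly with `sys.getsizeof` (a sketch; exact byte counts vary by interpreter and platform, so only the inequalities matter):

```python
import sys

payload = ["x" * 1000, "y" * 1000]   # two 1000-character strings

shallow = sys.getsizeof(payload)     # the list object and its references only
deep = shallow + sum(sys.getsizeof(s) for s in payload)  # naively add what it points to
# The list itself is tiny; nearly all the memory lives in the referenced strings.

# The double-counting hazard: a naive deep sum counts a shared object twice.
shared = "z" * 1000
pair = [shared, shared]
naive = sys.getsizeof(pair) + sum(sys.getsizeof(s) for s in pair)
```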
You may want to take a look at one of the python memory profilers, such as [pysizer](http://pysizer.8325.org/) to see if they meet your needs. |
How do I determine the size of an object in Python? | 449,560 | 271 | 2009-01-16T05:07:12Z | 3,373,511 | 60 | 2010-07-30T16:33:00Z | [
"python",
"object",
"memory",
"memory-management",
"sizeof"
] | In C, we can find the size of an `int`, `char`, etc. I want to know how to get size of objects like a string, integer, etc. in Python.
Related question: [How many bytes per element are there in a Python list (tuple)?](http://stackoverflow.com/questions/135664/how-many-bytes-per-element-are-there-in-a-python-list-tuple)
I am using an XML file which contains size fields that specify the size of value. I must parse this XML and do my coding. When I want to change the value of a particular field, I will check the size field of that value. Here I want to compare whether the new value that I'm going to enter is of the same size as in XML. I need to check the size of the new value. In case of a string I can say it's the length. But in case of int, float, etc. I am confused. | For numpy arrays, `getsizeof` doesn't work - for me it always returns 40 for some reason:
```
from pylab import *
from sys import getsizeof
A = rand(10)
B = rand(10000)
```
Then (in ipython):
```
In [64]: getsizeof(A)
Out[64]: 40
In [65]: getsizeof(B)
Out[65]: 40
```
Happily, though:
```
In [66]: A.nbytes
Out[66]: 80
In [67]: B.nbytes
Out[67]: 80000
``` |
How do I determine the size of an object in Python? | 449,560 | 271 | 2009-01-16T05:07:12Z | 30,316,760 | 88 | 2015-05-19T04:26:33Z | [
"python",
"object",
"memory",
"memory-management",
"sizeof"
] | In C, we can find the size of an `int`, `char`, etc. I want to know how to get size of objects like a string, integer, etc. in Python.
Related question: [How many bytes per element are there in a Python list (tuple)?](http://stackoverflow.com/questions/135664/how-many-bytes-per-element-are-there-in-a-python-list-tuple)
I am using an XML file which contains size fields that specify the size of value. I must parse this XML and do my coding. When I want to change the value of a particular field, I will check the size field of that value. Here I want to compare whether the new value that I'm going to enter is of the same size as in XML. I need to check the size of the new value. In case of a string I can say it's the length. But in case of int, float, etc. I am confused. | > # How do I determine the size of an object in Python?
The answer, "Just use sys.getsizeof" is not a complete answer.
That answer *does* work for builtin objects directly, but it does not account for what those objects may contain, specifically, what types, such as tuples, lists, dicts, and sets contain. They can contain instances each other, as well as numbers, strings and other objects.
# A More Complete Answer
Using 64 bit Python 2.7 from the Anaconda distribution and `guppy.hpy` along with `sys.getsizeof`, I have determined the minimum size of the following objects, and note that sets and dicts preallocate space so empty ones don't grow again until after a set amount (which may vary by implementation of the language):
```
Bytes type empty + scaling notes
24 int NA
28 long NA
37 str + 1 byte per additional character
52 unicode + 4 bytes per additional character
56 tuple + 8 bytes per additional item
72 list + 32 for first, 8 for each additional
232 set sixth item increases to 744; 22nd, 2280; 86th, 8424
280 dict sixth item increases to 1048; 22nd, 3352; 86th, 12568
64 class inst has a __dict__ attr, same scaling as dict above
16 __slots__ class with slots has no dict, seems to store in
mutable tuple-like structure.
120 func def doesn't include default args and other attrs
904 class def has a proxy __dict__ structure for class attrs
104 old class makes sense, less stuff, has real dict though.
```
I think 8 bytes per additional item to reference makes a lot of sense on a 64 bit machine. Those 8 bytes point to the place in memory the contained item is at. The 4 bytes are fixed width for unicode in Python 2, if I recall correctly, but in Python 3, str becomes a unicode of width equal to the max width of the characters.
(And for more on slots, [see this answer](http://stackoverflow.com/a/28059785/541136) )
# Recursive Visitor for a More Complete Function
To cover most of these types, I wrote this recursive function to try to estimate the size of most Python objects, including most builtins, types in the collections module, and custom types (slotted and otherwise):
```
import sys
from numbers import Number
from collections import Set, Mapping, deque
try: # Python 2
zero_depth_bases = (basestring, Number, xrange, bytearray)
iteritems = 'iteritems'
except NameError: # Python 3
zero_depth_bases = (str, bytes, Number, range, bytearray)
iteritems = 'items'
def getsize(obj_0):
"""Recursively iterate to sum size of object & members."""
def inner(obj, _seen_ids = set()):
obj_id = id(obj)
if obj_id in _seen_ids:
return 0
_seen_ids.add(obj_id)
size = sys.getsizeof(obj)
if isinstance(obj, zero_depth_bases):
pass # bypass remaining control flow and return
elif isinstance(obj, (tuple, list, Set, deque)):
size += sum(inner(i) for i in obj)
elif isinstance(obj, Mapping) or hasattr(obj, iteritems):
size += sum(inner(k) + inner(v) for k, v in getattr(obj, iteritems)())
# Check for custom object instances - may subclass above too
if hasattr(obj, '__dict__'):
size += inner(vars(obj))
if hasattr(obj, '__slots__'): # can have __slots__ with __dict__
size += sum(inner(getattr(obj, s)) for s in obj.__slots__ if hasattr(obj, s))
return size
return inner(obj_0)
```
And I tested it rather casually (I should unittest it):
```
>>> getsize(['a', tuple('bcd'), Foo()])
344
>>> getsize(Foo())
16
>>> getsize(tuple('bcd'))
194
>>> getsize(['a', tuple('bcd'), Foo(), {'foo': 'bar', 'baz': 'bar'}])
752
>>> getsize({'foo': 'bar', 'baz': 'bar'})
400
>>> getsize({})
280
>>> getsize({'foo':'bar'})
360
>>> getsize('foo')
40
>>> class Bar():
... def baz():
... pass
>>> getsize(Bar())
352
>>> getsize(Bar().__dict__)
280
>>> sys.getsizeof(Bar())
72
>>> getsize(Bar.__dict__)
872
>>> sys.getsizeof(Bar.__dict__)
280
```
It kind of breaks down on class definitions and function definitions because I don't go after all of their attributes, but since they should only exist once in memory for the process, their size really doesn't matter too much. |
How can I split a url string up into separate parts in Python? | 449,775 | 9 | 2009-01-16T07:43:22Z | 449,782 | 8 | 2009-01-16T07:49:55Z | [
"python",
"url",
"parsing"
] | I decided that I'll learn python tonight :)
I know C pretty well (wrote an OS in it) so I'm not a noob in programming so everything in python seems pretty easy, but I don't know how to solve this problem :
let's say I have this address:
<http://example.com/random/folder/path.html>
Now how can I create two strings from this, one containing the "base" name of the server, so in this example it would be
<http://example.com/>
and another containing the thing without the last filename, so in this example it would be
<http://example.com/random/folder/>
.
Also I of course know the possibility to just find the 3rd and last slash respectively but maybe you know a better way :]
Also it would be cool to have the trailing slash in both cases but I don't care since it can be added easily.
So does anyone have a good, fast, effective solution for this? Or is there only "my" solution, finding the slashes?
Thanks! | I have no experience with python, but I found the urlparse module, which should do the job:
<http://docs.python.org/library/urlparse.html> |
How can I split a url string up into separate parts in Python? | 449,775 | 9 | 2009-01-16T07:43:22Z | 449,805 | 8 | 2009-01-16T08:11:11Z | [
"python",
"url",
"parsing"
] | I decided that I'll learn python tonight :)
I know C pretty well (wrote an OS in it) so I'm not a noob in programming so everything in python seems pretty easy, but I don't know how to solve this problem :
let's say I have this address:
<http://example.com/random/folder/path.html>
Now how can I create two strings from this, one containing the "base" name of the server, so in this example it would be
<http://example.com/>
and another containing the thing without the last filename, so in this example it would be
<http://example.com/random/folder/>
.
Also I of course know the possibility to just find the 3rd and last slash respectively but maybe you know a better way :]
Also it would be cool to have the trailing slash in both cases but I don't care since it can be added easily.
So anyone has a good, fast, effective solution for this? Or is there only "my" solution, finding the slashes?
Thanks! | If this is the extent of your URL parsing, Python's inbuilt rpartition will do the job:
```
>>> URL = "http://example.com/random/folder/path.html"
>>> Segments = URL.rpartition('/')
>>> Segments[0]
'http://example.com/random/folder'
>>> Segments[2]
'path.html'
```
From [Pydoc](http://www.python.org/doc/3.0/library/stdtypes.html?highlight=rpartition#str.rpartition), str.rpartition:
`Splits the string at the last occurrence of sep, and returns a 3-tuple containing the part before the separator, the separator itself, and the part after the separator. If the separator is not found, return a 3-tuple containing two empty strings, followed by the string itself`
What this means is that rpartition does the searching for you, and splits the string at the last (right most) occurrence of the character you specify (in this case / ). It returns a tuple containing:
```
(everything to the left of char, the character itself, everything to the right of char)
``` |
How can I split a url string up into separate parts in Python? | 449,775 | 9 | 2009-01-16T07:43:22Z | 449,811 | 36 | 2009-01-16T08:14:36Z | [
"python",
"url",
"parsing"
] | I decided that I'll learn python tonight :)
I know C pretty well (wrote an OS in it) so I'm not a noob in programming so everything in python seems pretty easy, but I don't know how to solve this problem :
let's say I have this address:
<http://example.com/random/folder/path.html>
Now how can I create two strings from this, one containing the "base" name of the server, so in this example it would be
<http://example.com/>
and another containing the thing without the last filename, so in this example it would be
<http://example.com/random/folder/>
.
Also I of course know the possibility to just find the 3rd and last slash respectively but maybe you know a better way :]
Also it would be cool to have the trailing slash in both cases but I don't care since it can be added easily.
So does anyone have a good, fast, effective solution for this? Or is there only "my" solution, finding the slashes?
Thanks! | The urlparse module in python 2.x (or urllib.parse in python 3.x) would be the way to do it.
```
>>> from urllib.parse import urlparse
>>> url = 'http://example.com/random/folder/path.html'
>>> parse_object = urlparse(url)
>>> parse_object.netloc
'example.com'
>>> parse_object.path
'/random/folder/path.html'
>>> parse_object.scheme
'http'
>>>
```
If you wanted to do more work on the path of the file under the url, you can use the posixpath module:
```
>>> from posixpath import basename, dirname
>>> basename(parse_object.path)
'path.html'
>>> dirname(parse_object.path)
'/random/folder'
```
After that, you can use posixpath.join to glue the parts together.
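A sketch of that glue step, reusing the names from the session above (shown with Python 3's `urllib.parse`; on Python 2 the same names come from `urlparse`):

```python
from urllib.parse import urlparse, urlunparse
from posixpath import basename, dirname, join

parse_object = urlparse("http://example.com/random/folder/path.html")
directory = dirname(parse_object.path)    # '/random/folder'
filename = basename(parse_object.path)    # 'path.html'

# join the path back together, then rebuild the full URL from its six parts
rebuilt = urlunparse((parse_object.scheme, parse_object.netloc,
                      join(directory, filename), "", "", ""))
```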
EDIT: I totally forgot that Windows users will choke on the path separator in os.path. I read the posixpath module docs, and it has a special reference to URL manipulation, so all's good. |
What is the correct way to backup ZODB blobs? | 451,952 | 6 | 2009-01-16T20:51:01Z | 2,664,479 | 12 | 2010-04-18T23:51:13Z | [
"python",
"plone",
"zope",
"zodb",
"blobstorage"
] | I am using plone.app.blob to store large ZODB objects in a blobstorage directory. This reduces size pressure on Data.fs but I have not been able to find any advice on backing up this data.
I am already backing up Data.fs by pointing a network backup tool at a directory of repozo backups. Should I simply point that tool at the blobstorage directory to backup my blobs?
What if the database is being repacked or blobs are being added and deleted while the copy is taking place? Are there files in the blobstorage directory that must be copied over in a certain order? | It should be safe to do a repozo backup of the Data.fs followed by an rsync of the blobstorage directory, as long as the database doesn't get packed while those two operations are happening.
This is because, at least when using blobs with FileStorage, modifications to a blob always results in the creation of a new file named based on the object id and transaction id. So if new or updated blobs are written after the Data.fs is backed up, it shouldn't be a problem, as the files that are referenced by the Data.fs should still be around. Deletion of a blob doesn't result in the file being removed until the database is packed, so that should be okay too.
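That ordering might be scripted roughly as below. This is a sketch only: the paths are placeholders, and the `repozo`/`rsync` flags shown should be verified against your own installation before relying on them.

```python
import subprocess

DATA_FS = "/var/zope/filestorage/Data.fs"     # placeholder paths
REPOZO_BACKUP_DIR = "/backups/repozo/"
BLOB_DIR = "/var/zope/blobstorage/"
BLOB_BACKUP_DIR = "/backups/blobstorage/"

# 1. repozo snapshot of Data.fs first, 2. then copy the blob files it references
repozo_cmd = ["repozo", "-B", "-r", REPOZO_BACKUP_DIR, "-f", DATA_FS]
rsync_cmd = ["rsync", "-a", "--delete", BLOB_DIR, BLOB_BACKUP_DIR]

def run_backup(runner=subprocess.check_call):
    for cmd in (repozo_cmd, rsync_cmd):  # order matters; don't pack meanwhile
        runner(cmd)
```

Injecting `runner` keeps the ordering testable without a live Zope instance.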
Performing a backup in a different order, or with packing during the backup, may result in a backup Data.fs that references blobs that are not included in the backup. |
Making a method private in a python subclass | 451,963 | 24 | 2009-01-16T20:53:50Z | 452,023 | 26 | 2009-01-16T21:09:56Z | [
"python",
"oop",
"inheritance"
] | Is it possible to make a public method private in a subclass ? I don't want other classes extending this one to be able to call some of the methods . Here is an example :
```
class A:
def __init__(self):
#do something here
def method(self):
#some code here
class B(A):
def __init__(self):
A.__init__(self)
#additional initialization goes here
def method(self):
#this overrides the method ( and possibly make it private here )
```
from this point forward , I don't want any class that extends from B to be able to call `method` .
Is this possible ?
EDIT : a "logical" reason for this is that I don't want users to call methods in wrong order. | There's no way to truly do this in Python. Rather unpythonic, it is.
As Guido would say, we're all consenting adults here.
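The closest built-in gesture is double-underscore name mangling, which discourages (but does not forbid) access from subclasses. A sketch, reusing the question's class names:

```python
class A(object):
    def __method(self):          # stored on the class as _A__method
        return "still reachable"

class B(A):
    def call_it(self):
        return self.__method()   # mangles to self._B__method -> AttributeError

blocked = False
try:
    B().call_it()
except AttributeError:
    blocked = True               # the plain name is hidden from the subclass...

print(A()._A__method())          # ...but any consenting adult can still reach it
```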
Here's a good [summary of the philosophy behind everything in Python being public](http://mail.python.org/pipermail/tutor/2003-October/025932.html). |
Making a method private in a python subclass | 451,963 | 24 | 2009-01-16T20:53:50Z | 452,028 | 15 | 2009-01-16T21:11:46Z | [
"python",
"oop",
"inheritance"
] | Is it possible to make a public method private in a subclass ? I don't want other classes extending this one to be able to call some of the methods . Here is an example :
```
class A:
def __init__(self):
#do something here
def method(self):
#some code here
class B(A):
def __init__(self):
A.__init__(self)
#additional initialization goes here
def method(self):
#this overrides the method ( and possibly make it private here )
```
from this point forward , I don't want any class that extends from B to be able to call `method` .
Is this possible ?
EDIT : a "logical" reason for this is that I don't want users to call methods in wrong order. | Python is distributed as source. The very idea of a private method makes very little sense.
The programmer who wants to extend `B`, frustrated by a privacy issue, looks at the source for `B`, copies and pastes the source code for `method` into the subclass `C`.
What have you gained through "privacy"? The best you can hope for is to frustrate your potential customers into copying and pasting.
At worst, they discard your package because they can't extend it.
And yes, all open source is extended in one way or another. You can't foresee everything and every use to which your code will be put. Preventing some future use is hard to do when the code is distributed as source.
See <http://stackoverflow.com/questions/261638/how-do-i-protect-python-code>
---
**Edit** On "idiot-proof" code.
First, python is distributed as source 90% of the time. So, any idiot who downloads, installs, and then refuses to read the API guide and calls the methods out of order still has the source to figure out what went wrong.
We have three classes of idiots.
* People who refuse to read the API guide (or skim it and ignore the relevant parts) and call the methods out of order in spite of the documentation. You can try to make something private, but it won't help because they'll do something else wrong -- and complain about it. [I won't name names, but I've worked with folks who seem to spend a lot of time calling APIs improperly. Also, you'll see questions like this on SO.]
You can only help them with a working code sample they can cut and paste.
* People who are confused by APIs and call the methods every different way you can imagine (and some you can't.) You can try to make something private, but they'll never get the API.
You can only help them by providing the working code sample; even then, they'll cut and paste it incorrectly.
* People who reject your API and want to rewrite it to make it "idiot proof".
You can provide them a working code sample, but they don't like your API and will insist on rewriting it. They'll tell you that your API is crazy and they've improved on it.
You can engage these folks in an escalating arms race of "idiot-proofing". Everything you put together they take apart.
At this point, what has privacy done for you? Some people will refuse to understand it; some people are confused by it; and some people want to work around it.
How about public, and let the folks you're calling "idiots" learn from your code? |
Making a method private in a python subclass | 451,963 | 24 | 2009-01-16T20:53:50Z | 452,049 | 7 | 2009-01-16T21:16:58Z | [
"python",
"oop",
"inheritance"
] | Is it possible to make a public method private in a subclass ? I don't want other classes extending this one to be able to call some of the methods . Here is an example :
```
class A:
def __init__(self):
#do something here
def method(self):
#some code here
class B(A):
def __init__(self):
A.__init__(self)
#additional initialization goes here
def method(self):
#this overrides the method ( and possibly make it private here )
```
from this point forward , I don't want any class that extends from B to be able to call `method` .
Is this possible ?
EDIT : a "logical" reason for this is that I don't want users to call methods in wrong order. | This may be a fair approximation. Lexical scoping to the "rescue":
```
#!/usr/bin/env python
class Foo(object):
def __init__(self, name):
self.name = name
self.bar()
def bar(self):
def baz():
print "I'm private"
print self.name
def quux():
baz()
self.quux = quux
if __name__ == "__main__":
f = Foo("f")
f.quux()
g = Foo("g")
g.quux()
f.quux()
```
Prints:
```
I'm private
f
I'm private
g
I'm private
f
``` |
Making a method private in a python subclass | 451,963 | 24 | 2009-01-16T20:53:50Z | 452,240 | 11 | 2009-01-16T22:09:27Z | [
"python",
"oop",
"inheritance"
] | Is it possible to make a public method private in a subclass ? I don't want other classes extending this one to be able to call some of the methods . Here is an example :
```
class A:
def __init__(self):
#do something here
def method(self):
#some code here
class B(A):
def __init__(self):
A.__init__(self)
#additional initialization goes here
def method(self):
#this overrides the method ( and possibly make it private here )
```
from this point forward , I don't want any class that extends from B to be able to call `method` .
Is this possible ?
EDIT : a "logical" reason for this is that I don't want users to call methods in wrong order. | I am surprised that no one has mentioned this, but prefixing the method name with a single underscore is the correct way of labelling it as "private". It's not really *private* of course, (as explained in other answers), but there you go.
```
def _i_am_private(self):
"""If you call me from a subclass you are a naughty person!"""
``` |
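A minimal sketch of what that convention does (and does not) enforce -- the leading underscore is a signal to readers, not an access control:

```python
class Base(object):
    def _i_am_private(self):
        """By convention: internal use only. Nothing enforces it."""
        return "internal"

b = Base()
# The call still works; the underscore only signals intent
# (and keeps the name out of "from module import *").
assert b._i_am_private() == "internal"
```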
Making a method private in a python subclass | 451,963 | 24 | 2009-01-16T20:53:50Z | 452,316 | 12 | 2009-01-16T22:42:42Z | [
"python",
"oop",
"inheritance"
] | Is it possible to make a public method private in a subclass ? I don't want other classes extending this one to be able to call some of the methods . Here is an example :
```
class A:
def __init__(self):
#do something here
def method(self):
#some code here
class B(A):
def __init__(self):
A.__init__(self)
#additional initialization goes here
def method(self):
#this overrides the method ( and possibly make it private here )
```
from this point forward , I don't want any class that extends from B to be able to call `method` .
Is this possible ?
EDIT : a "logical" reason for this is that I don't want users to call methods in wrong order. | You can prefix methods and members with a single or double underscore. A single underscore implies "please don't use me, I'm supposed to be used only by this class", and a double underscore instructs the Python compiler to mangle the method/member name with the class name; as long as the class and its subclasses don't have the same name, the methods/members can be considered "private".
However, the solution to your requirements so far is to write clear documentation. If you don't want your users to call methods in the wrong order, then say so in the documentation.
After all, even C++ privates aren't that private. For example think the old trick:
```
#define private public
#include <module>
``` |
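A short sketch of the double-underscore mangling described above -- the name is rewritten to include the class name, so it is hidden from casual access but still reachable:

```python
class A(object):
    def __method(self):
        # stored on the class as _A__method, not __method
        return "mangled"

a = A()
try:
    a.__method()  # outside the class body, no mangling happens
    raise AssertionError("unreachable")
except AttributeError:
    pass

# The mangled name still works, so this is obfuscation, not privacy.
assert a._A__method() == "mangled"
```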
Making a method private in a python subclass | 451,963 | 24 | 2009-01-16T20:53:50Z | 458,101 | 7 | 2009-01-19T16:03:19Z | [
"python",
"oop",
"inheritance"
] | Is it possible to make a public method private in a subclass ? I don't want other classes extending this one to be able to call some of the methods . Here is an example :
```
class A:
def __init__(self):
#do something here
def method(self):
#some code here
class B(A):
def __init__(self):
A.__init__(self)
#additional initialization goes here
def method(self):
#this overrides the method ( and possibly make it private here )
```
from this point forward , I don't want any class that extends from B to be able to call `method` .
Is this possible ?
EDIT : a "logical" reason for this is that I don't want users to call methods in wrong order. | "Everything must be public" proponents think the author is trying to hide a useful API from the users. This guy doesn't want to violate an unquestionable law of Python. He wants to use some methods to define a useful API, and he wants to use other methods to organize the implementation of that API. If there's no separation between the two it doesn't mean the author is not an idiot. It means the author was too lazy to actually define an API.
In Python, instead of marking properties or methods as private, they may be prefixed with `_` as a weak "internal use" indicator, or with `__` as a slightly stronger one. In a module, names may be prefixed with `_` in the same way, and you may also put a sequence of strings that constitute the modules' public API in a variable called `__all__`.
A foolish consistency is the hobgoblin of little minds. |
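A sketch of the `__all__` mechanism mentioned above, using an in-memory module so the example is self-contained (a real project would simply put `__all__` at the top of the module file):

```python
import sys
import types

# Build a throwaway module to avoid needing a file on disk.
mod = types.ModuleType("demo_api")
exec(
    "__all__ = ['public_fn']\n"
    "def public_fn(): return 'api'\n"
    "def _helper(): return 'internal'\n",
    mod.__dict__,
)
sys.modules["demo_api"] = mod

ns = {}
exec("from demo_api import *", ns)
assert "public_fn" in ns        # exported via __all__
assert "_helper" not in ns      # left out of the star-import
```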
Making a method private in a python subclass | 451,963 | 24 | 2009-01-16T20:53:50Z | 5,851,875 | 27 | 2011-05-01T22:26:19Z | [
"python",
"oop",
"inheritance"
] | Is it possible to make a public method private in a subclass ? I don't want other classes extending this one to be able to call some of the methods . Here is an example :
```
class A:
def __init__(self):
#do something here
def method(self):
#some code here
class B(A):
def __init__(self):
A.__init__(self)
#additional initialization goes here
def method(self):
#this overrides the method ( and possibly make it private here )
```
from this point forward , I don't want any class that extends from B to be able to call `method` .
Is this possible ?
EDIT : a "logical" reason for this is that I don't want users to call methods in wrong order. | Contrary to popular fashion on this subject, there **are** legitimate reasons to have a distinction between public, private, and protected members, whether you work in Python or a more traditional OOP environment. Many times, it comes to be that you develop auxiliary methods for a particularly long-winded task at some level of object specialization. Needless to say, you really don't want these methods inherited by any subclass because they make no sense in the specialized context and shouldn't even be visible; and yet they are visible, and they diminish the utility of things like tab completion, object navigators, and other system software, because everything at all different levels of abstraction get flattened and thrown together. These programming aids are not trivial, mind you. They are only trivial if you're a student and enjoy doing the same thing a million times just because you're learning how.
Python historically developed in such a way that to implement the public/private distinction became increasingly difficult due to ideological inertia and compatibility concerns. That's the plain truth. It would be a real headache for everyone to change what they've been doing. Consequently, we now have a million Python fans out there, all of whom have read the same one or two original articles deciding unequivocally that the public/private distinction is "unpythonic". These people, for lack of critical thought or fairness to widespread, common practices, instantly use this occasion to accrete a predictable slew of apologetics -- *De Defensione Serpentis* -- which I suspect arises not from a rational selection of the *via pythonis* (the pythonic way) but from neglect of other languages, which they either choose not to use, are not skilled at using, or are not able to use because of work.
As someone already said, the best you can do in Python to produce an effect similar to private methods is to prepend the method name with `__` (two underscores). On the other hand, the only thing this accomplishes, practically speaking, is the insertion of a transmogrified attribute name in the object's `__dict__`. For instance, say you have the following class definition:
```
class Dog(object):
def __bark(self):
print 'woof'
```
If you run `dir(Dog())`, you'll see a strange member, called `_Dog__bark`. Indeed, the only reason this trick exists is to circumvent the problem I described before: namely, preventing inheritance, overloading, and replacement of super methods.
Hopefully there will be some standardized implementation of private methods in the future, when people realize that tissue need not have access to the methods by which the individual cell replicates DNA, and the conscious mind need not constantly figure out how to repair its tissues and internal organs.
Is it worth using Python's re.compile? | 452,104 | 238 | 2009-01-16T21:31:57Z | 452,142 | 34 | 2009-01-16T21:42:37Z | [
"python",
"regex"
] | Is there any benefit in using compile for regular expressions in Python?
```
h = re.compile('hello')
h.match('hello world')
```
vs
```
re.match('hello', 'hello world')
``` | FWIW:
```
$ python -m timeit -s "import re" "re.match('hello', 'hello world')"
100000 loops, best of 3: 3.82 usec per loop
$ python -m timeit -s "import re; h=re.compile('hello')" "h.match('hello world')"
1000000 loops, best of 3: 1.26 usec per loop
```
so, if you're going to be using the *same* regex a lot, it may be worth it to do `re.compile` (especially for more complex regexes).
The standard arguments against premature optimization apply, but I don't think you really lose much clarity/straightforwardness by using `re.compile` if you suspect that your regexps may become a performance bottleneck. |
Is it worth using Python's re.compile? | 452,104 | 238 | 2009-01-16T21:31:57Z | 452,143 | 258 | 2009-01-16T21:42:57Z | [
"python",
"regex"
] | Is there any benefit in using compile for regular expressions in Python?
```
h = re.compile('hello')
h.match('hello world')
```
vs
```
re.match('hello', 'hello world')
``` | I've had a lot of experience running a compiled regex 1000s of times versus compiling on-the-fly, and have not noticed any perceivable difference. Obviously, this is anecdotal, and certainly not a great argument *against* compiling, but I've found the difference to be negligible.
EDIT:
After a quick glance at the actual Python 2.5 library code, I see that Python internally compiles AND CACHES regexes whenever you use them anyway (including calls to `re.match()`), so you're really only changing WHEN the regex gets compiled, and shouldn't be saving much time at all - only the time it takes to check the cache (a key lookup on an internal `dict` type).
From module re.py (comments are mine):
```
def match(pattern, string, flags=0):
return _compile(pattern, flags).match(string)
def _compile(*key):
# Does cache check at top of function
cachekey = (type(key[0]),) + key
p = _cache.get(cachekey)
if p is not None: return p
# ...
# Does actual compilation on cache miss
# ...
# Caches compiled regex
if len(_cache) >= _MAXCACHE:
_cache.clear()
_cache[cachekey] = p
return p
```
I still often pre-compile regular expressions, but only to bind them to a nice, reusable name, not for any expected performance gain. |
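To illustrate the point, both calling styles resolve to the same compiled machinery; a quick sketch:

```python
import re

h = re.compile("hello")                # explicit compile, bound to a name
m1 = h.match("hello world")
m2 = re.match("hello", "hello world")  # compiles (or hits the cache) internally

assert m1.group(0) == m2.group(0) == "hello"
```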
Is it worth using Python's re.compile? | 452,104 | 238 | 2009-01-16T21:31:57Z | 453,568 | 71 | 2009-01-17T16:49:07Z | [
"python",
"regex"
] | Is there any benefit in using compile for regular expressions in Python?
```
h = re.compile('hello')
h.match('hello world')
```
vs
```
re.match('hello', 'hello world')
``` | For me, the biggest benefit to `re.compile` isn't any kind of premature optimization (which is the [root of all evil](http://programmers.stackexchange.com/questions/39/whats-your-favourite-quote-about-programming/816#816), [anyway](http://en.wikipedia.org/wiki/Optimization_%28computer_science%29#When_to_optimize "read the full two paragraphs, which is available in the original paper and much more useful than the popular quip")). It's being able to separate definition of the regex from its use.
Even a simple expression such as `0|[1-9][0-9]*` (integer in base 10 without leading zeros) can be complex enough that you'd rather not have to retype it, check if you made any typos, and later have to recheck if there are typos when you start debugging. Plus, it's nicer to use a variable name such as num or num\_b10 than `0|[1-9][0-9]*`.
It's certainly possible to store strings and pass them to re.match; however, that's *less* readable:
```
num = "..."
# then, much later:
m = re.match(num, input)
```
Versus compiling:
```
num = re.compile("...")
# then, much later:
m = num.match(input)
```
Though it is fairly close, the last line of the second feels more natural and simpler when used repeatedly. |
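For instance, the base-10 integer pattern from above reads much better behind a name (this sketch uses Python 3's `fullmatch`):

```python
import re

num_b10 = re.compile(r"0|[1-9][0-9]*")   # integer in base 10, no leading zeros

assert num_b10.fullmatch("0")
assert num_b10.fullmatch("42")
assert num_b10.fullmatch("007") is None  # leading zero rejected
```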
Is it worth using Python's re.compile? | 452,104 | 238 | 2009-01-16T21:31:57Z | 13,640,709 | 27 | 2012-11-30T07:24:30Z | [
"python",
"regex"
] | Is there any benefit in using compile for regular expressions in Python?
```
h = re.compile('hello')
h.match('hello world')
```
vs
```
re.match('hello', 'hello world')
``` | Here's a simple test case:
```
~$ for x in 1 10 100 1000 10000 100000 1000000; do python -m timeit -n $x -s 'import re' 're.match("[0-9]{3}-[0-9]{3}-[0-9]{4}", "123-123-1234")'; done
1 loops, best of 3: 3.1 usec per loop
10 loops, best of 3: 2.41 usec per loop
100 loops, best of 3: 2.24 usec per loop
1000 loops, best of 3: 2.21 usec per loop
10000 loops, best of 3: 2.23 usec per loop
100000 loops, best of 3: 2.24 usec per loop
1000000 loops, best of 3: 2.31 usec per loop
```
with re.compile:
```
~$ for x in 1 10 100 1000 10000 100000 1000000; do python -m timeit -n $x -s 'import re' 'r = re.compile("[0-9]{3}-[0-9]{3}-[0-9]{4}")' 'r.match("123-123-1234")'; done
1 loops, best of 3: 1.91 usec per loop
10 loops, best of 3: 0.691 usec per loop
100 loops, best of 3: 0.701 usec per loop
1000 loops, best of 3: 0.684 usec per loop
10000 loops, best of 3: 0.682 usec per loop
100000 loops, best of 3: 0.694 usec per loop
1000000 loops, best of 3: 0.702 usec per loop
```
So, it would seem that compiling is faster in this simple case, *even if you only match once*.
How can I install the Beautiful Soup module on the Mac? | 452,283 | 38 | 2009-01-16T22:26:45Z | 452,302 | 69 | 2009-01-16T22:37:30Z | [
"python",
"osx",
"module",
"installation"
] | I read this without finding the solution: <http://docs.python.org/install/index.html> | The "normal" way is to:
* Go to the Beautiful Soup web site, <http://www.crummy.com/software/BeautifulSoup/>
* Download the package
* Unpack it
* In a Terminal window, `cd` to the resulting directory
* Type `python setup.py install`
Another solution is to use `easy_install`. Go to <http://peak.telecommunity.com/DevCenter/EasyInstall>, install the package using the instructions on that page, and then type, in a Terminal window:
```
easy_install BeautifulSoup4
# for older v3:
# easy_install BeautifulSoup
```
`easy_install` will take care of downloading, unpacking, building, and installing the package. The advantage to using `easy_install` is that it knows how to search for many different Python packages, because it queries the [PyPI](http://pypi.python.org/) registry. Thus, once you have `easy_install` on your machine, you can install many, many different third-party packages with a single command at a shell.
How can I install the Beautiful Soup module on the Mac? | 452,283 | 38 | 2009-01-16T22:26:45Z | 452,311 | 14 | 2009-01-16T22:41:46Z | [
"python",
"osx",
"module",
"installation"
] | I read this without finding the solution: <http://docs.python.org/install/index.html> | Brian beat me to it, but since I already have the transcript:
[easy\_install](http://peak.telecommunity.com/DevCenter/EasyInstall)
```
aaron@ares ~$ sudo easy_install BeautifulSoup
Searching for BeautifulSoup
Best match: BeautifulSoup 3.0.7a
Processing BeautifulSoup-3.0.7a-py2.5.egg
BeautifulSoup 3.0.7a is already the active version in easy-install.pth
Using /Library/Python/2.5/site-packages/BeautifulSoup-3.0.7a-py2.5.egg
Processing dependencies for BeautifulSoup
Finished processing dependencies for BeautifulSoup
```
.. or the normal boring way:
```
aaron@ares ~/Downloads$ curl http://www.crummy.com/software/BeautifulSoup/download/BeautifulSoup.tar.gz > bs.tar.gz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 71460 100 71460 0 0 84034 0 --:--:-- --:--:-- --:--:-- 111k
aaron@ares ~/Downloads$ tar -xzvf bs.tar.gz
BeautifulSoup-3.1.0.1/
BeautifulSoup-3.1.0.1/BeautifulSoup.py
BeautifulSoup-3.1.0.1/BeautifulSoup.py.3.diff
BeautifulSoup-3.1.0.1/BeautifulSoupTests.py
BeautifulSoup-3.1.0.1/BeautifulSoupTests.py.3.diff
BeautifulSoup-3.1.0.1/CHANGELOG
BeautifulSoup-3.1.0.1/README
BeautifulSoup-3.1.0.1/setup.py
BeautifulSoup-3.1.0.1/testall.sh
BeautifulSoup-3.1.0.1/to3.sh
BeautifulSoup-3.1.0.1/PKG-INFO
BeautifulSoup-3.1.0.1/BeautifulSoup.pyc
BeautifulSoup-3.1.0.1/BeautifulSoupTests.pyc
aaron@ares ~/Downloads$ cd BeautifulSoup-3.1.0.1/
aaron@ares ~/Downloads/BeautifulSoup-3.1.0.1$ sudo python setup.py install
running install
<... snip ...>
``` |
How can I install the Beautiful Soup module on the Mac? | 452,283 | 38 | 2009-01-16T22:26:45Z | 24,178,431 | 12 | 2014-06-12T07:08:25Z | [
"python",
"osx",
"module",
"installation"
] | I read this without finding the solution: <http://docs.python.org/install/index.html> | I think the current right way to do this is with `pip`, as Pramod comments
```
pip install beautifulsoup4
```
because of recent changes in Python packaging; see the discussion [here](http://stackoverflow.com/questions/3220404/why-use-pip-over-easy-install).
This was not so in the past. |
Python object.__repr__(self) should be an expression? | 452,300 | 51 | 2009-01-16T22:37:18Z | 452,310 | 54 | 2009-01-16T22:41:37Z | [
"python"
] | I was looking at the builtin object methods in the [Python documentation](http://docs.python.org/reference/datamodel.html#objects-values-and-types), and I was interested in the documentation for `object.__repr__(self)`. Here's what it says:
> Called by the repr() built-in function
> and by string conversions (reverse
> quotes) to compute the “official”
> string representation of an object. If
> at all possible, this should look like
> a valid Python expression that could
> be used to recreate an object with the
> same value (given an appropriate
> environment). If this is not possible,
> a string of the form <...some useful
> description...> should be returned.
> The return value must be a string
> object. If a class defines `__repr__()`
> but not `__str__()`, then `__repr__()` is
> also used when an “informal” string
> representation of instances of that
> class is required.
>
> This is typically used for debugging,
> so it is important that the
> representation is information-rich and
> unambiguous
The most interesting part to me, was...
> If at all possible, this should look like a valid Python expression that could be used to recreate an object with the same value
... but I'm not sure exactly what this means. It says it should *look* like an expression which can be used to recreate the object, but does that mean it should just be an example of the sort of expression you could use, or should it be an actual expression, that can be executed (eval etc..) to recreate the object? Or... should it be just a rehashing of the actual expression which was used, for pure information purposes?
In general I'm a bit confused as to exactly what I should be putting here. | ```
>>> from datetime import date
>>>
>>> repr(date.today()) # calls date.today().__repr__()
'datetime.date(2009, 1, 16)'
>>> eval(_) # _ is the output of the last command
datetime.date(2009, 1, 16)
```
The output is a string that can be parsed by the python interpreter and results in an equal object.
If that's not possible, it should return a string in the form of `<...some useful description...>`. |
Python object.__repr__(self) should be an expression? | 452,300 | 51 | 2009-01-16T22:37:18Z | 452,312 | 24 | 2009-01-16T22:42:22Z | [
"python"
] | I was looking at the builtin object methods in the [Python documentation](http://docs.python.org/reference/datamodel.html#objects-values-and-types), and I was interested in the documentation for `object.__repr__(self)`. Here's what it says:
> Called by the repr() built-in function
> and by string conversions (reverse
> quotes) to compute the “official”
> string representation of an object. If
> at all possible, this should look like
> a valid Python expression that could
> be used to recreate an object with the
> same value (given an appropriate
> environment). If this is not possible,
> a string of the form <...some useful
> description...> should be returned.
> The return value must be a string
> object. If a class defines `__repr__()`
> but not `__str__()`, then `__repr__()` is
> also used when an “informal” string
> representation of instances of that
> class is required.
>
> This is typically used for debugging,
> so it is important that the
> representation is information-rich and
> unambiguous
The most interesting part to me, was...
> If at all possible, this should look like a valid Python expression that could be used to recreate an object with the same value
... but I'm not sure exactly what this means. It says it should *look* like an expression which can be used to recreate the object, but does that mean it should just be an example of the sort of expression you could use, or should it be an actual expression, that can be executed (eval etc..) to recreate the object? Or... should it be just a rehashing of the actual expression which was used, for pure information purposes?
In general I'm a bit confused as to exactly what I should be putting here. | It should be a Python expression that, when eval'd, creates an object with the exact same properties as this one. For example, if you have a `Fraction` class that contains two integers, a numerator and denominator, your `__repr__()` method would look like this:
```
# in the definition of Fraction class
def __repr__(self):
return "Fraction(%d, %d)" % (self.numerator, self.denominator)
```
Assuming that the constructor takes those two values. |
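Filling in the sketch, a complete round-trip (with a hypothetical `__eq__` added so the comparison works):

```python
class Fraction(object):
    def __init__(self, numerator, denominator):
        self.numerator = numerator
        self.denominator = denominator

    def __repr__(self):
        return "Fraction(%d, %d)" % (self.numerator, self.denominator)

    def __eq__(self, other):
        return (self.numerator, self.denominator) == \
               (other.numerator, other.denominator)

f = Fraction(3, 4)
assert repr(f) == "Fraction(3, 4)"
assert eval(repr(f)) == f   # the repr is an expression recreating the value
```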
Python object.__repr__(self) should be an expression? | 452,300 | 51 | 2009-01-16T22:37:18Z | 453,381 | 9 | 2009-01-17T14:42:03Z | [
"python"
] | I was looking at the builtin object methods in the [Python documentation](http://docs.python.org/reference/datamodel.html#objects-values-and-types), and I was interested in the documentation for `object.__repr__(self)`. Here's what it says:
> Called by the repr() built-in function
> and by string conversions (reverse
> quotes) to compute the “official”
> string representation of an object. If
> at all possible, this should look like
> a valid Python expression that could
> be used to recreate an object with the
> same value (given an appropriate
> environment). If this is not possible,
> a string of the form <...some useful
> description...> should be returned.
> The return value must be a string
> object. If a class defines `__repr__()`
> but not `__str__()`, then `__repr__()` is
> also used when an “informal” string
> representation of instances of that
> class is required.
>
> This is typically used for debugging,
> so it is important that the
> representation is information-rich and
> unambiguous
The most interesting part to me, was...
> If at all possible, this should look like a valid Python expression that could be used to recreate an object with the same value
... but I'm not sure exactly what this means. It says it should *look* like an expression which can be used to recreate the object, but does that mean it should just be an example of the sort of expression you could use, or should it be an actual expression, that can be executed (eval etc..) to recreate the object? Or... should it be just a rehashing of the actual expression which was used, for pure information purposes?
In general I'm a bit confused as to exactly what I should be putting here. | **Guideline:** If you can succinctly provide an *exact representation*, **format it as a Python expression** (which implies that it can be both eval'd and copied directly into source code, in the right context). If providing an *inexact representation*, **use `<...>` format**.
There are many possible representations for any value, but the one that's most interesting for Python programmers is an expression that recreates the value. Remember that **those who understand Python are the target audience**—and that's also why inexact representations should include relevant context. Even the default `<XXX object at 0xNNN>`, while almost entirely useless, still provides type, `id()` (to distinguish different objects), and indication that no better representation is available. |
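A sketch of the inexact branch of that guideline, for an object whose state (say, a live connection) cannot be recreated by any expression:

```python
class Connection(object):
    """Stand-in for an object wrapping unreproducible state."""

    def __init__(self, host, port):
        self.host = host
        self.port = port

    def __repr__(self):
        # No expression can recreate a live connection, so fall back to the
        # <...> form, still packing in the details a debugging reader wants.
        return "<Connection to %s:%d>" % (self.host, self.port)

assert repr(Connection("example.com", 80)) == "<Connection to example.com:80>"
```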
What are the benefits of using Python for web programming? | 452,305 | 9 | 2009-01-16T22:38:56Z | 452,324 | 13 | 2009-01-16T22:45:56Z | [
"python",
"programming-languages"
] | What makes Python stand out for use in web development? What are some examples of highly successful uses of Python on the web? | [Django](http://www.djangoproject.com/) is, IMHO, one of the major benefits of using Python. Model your domain, code your classes, and voila, your ORM is done, and you can focus on the UI. Add in the ease of templating with the built-in templating language (or one of many others you can use as well), and it becomes *very* easy to whip up effective web applications in no time. Throw in the built-in admin interface, and it's a no-brainer. |
How do I create a list of Python lambdas (in a list comprehension/for loop)? | 452,610 | 16 | 2009-01-17T01:19:13Z | 452,639 | 11 | 2009-01-17T01:37:51Z | [
"python",
"lambda",
"scope",
"closures",
"list-comprehension"
] | I want to create a list of lambda objects from a list of constants in Python; for instance:
```
listOfNumbers = [1,2,3,4,5]
square = lambda x: x * x
listOfLambdas = [lambda: square(i) for i in listOfNumbers]
```
This will create a list of lambda objects, however, when I run them:
```
for f in listOfLambdas:
print f(),
```
I would expect that it would print
```
1 4 9 16 25
```
Instead, it prints:
```
25 25 25 25 25
```
It seems as though the lambdas have all been given the wrong parameter. Have I done something wrong, and is there a way to fix it? I'm in Python 2.4 I think.
EDIT: a bit more of trying things and such came up with this:
```
listOfLambdas = []
for num in listOfNumbers:
action = lambda: square(num)
listOfLambdas.append(action)
print action()
```
Prints the expected squares from 1 to 25, but then using the earlier print statement:
```
for f in listOfLambdas:
print f(),
```
still gives me all `25`s. How did the existing lambda objects change between those two print calls?
Related question: [Why results of map() and list comprehension are different?](http://stackoverflow.com/questions/139819/why-results-of-map-and-list-comprehension-are-different) | I'm guessing that the lambda you're creating in the list comprehension is bound to the variable i which eventually ends up at 5. Thus, when you evaluate the lambdas after the fact, they're all bound to 5 and end up calculating 25. The same thing is happening with num in your second example. When you evaluate the lambda inside the loop, num hasn't changed yet, so you get the right value. After the loop, num is 5...
I'm not quite sure what you're going for, so I'm not sure how to suggest a solution. How about this?
```
def square(x): return lambda : x*x
listOfLambdas = [square(i) for i in [1,2,3,4,5]]
for f in listOfLambdas: print f()
```
This gives me the expected output:
```
1
4
9
16
25
```
Another way to think of this is that a lambda "captures" its lexical environment at the point where it is created. So, if you give it **num** it doesn't actually resolve that value until it's invoked. This is both confusing and powerful.
How do I create a list of Python lambdas (in a list comprehension/for loop)? | 452,610 | 16 | 2009-01-17T01:19:13Z | 452,659 | 17 | 2009-01-17T01:51:59Z | [
"python",
"lambda",
"scope",
"closures",
"list-comprehension"
] | I want to create a list of lambda objects from a list of constants in Python; for instance:
```
listOfNumbers = [1,2,3,4,5]
square = lambda x: x * x
listOfLambdas = [lambda: square(i) for i in listOfNumbers]
```
This will create a list of lambda objects, however, when I run them:
```
for f in listOfLambdas:
print f(),
```
I would expect that it would print
```
1 4 9 16 25
```
Instead, it prints:
```
25 25 25 25 25
```
It seems as though the lambdas have all been given the wrong parameter. Have I done something wrong, and is there a way to fix it? I'm in Python 2.4 I think.
EDIT: a bit more of trying things and such came up with this:
```
listOfLambdas = []
for num in listOfNumbers:
action = lambda: square(num)
listOfLambdas.append(action)
print action()
```
Prints the expected squares from 1 to 25, but then using the earlier print statement:
```
for f in listOfLambdas:
print f(),
```
still gives me all `25`s. How did the existing lambda objects change between those two print calls?
Related question: [Why results of map() and list comprehension are different?](http://stackoverflow.com/questions/139819/why-results-of-map-and-list-comprehension-are-different) | You have:
```
listOfLambdas = [lambda: i*i for i in range(6)]
for f in listOfLambdas:
print f()
```
Output:
```
25
25
25
25
25
25
```
You need currying! Aside from being delicious, use this default value "hack".
```
listOfLambdas = [lambda i=i: i*i for i in range(6)]
for f in listOfLambdas:
print f()
```
Output:
```
0
1
4
9
16
25
```
Note the `i=i`. That's where the magic happens. |
Does python have an equivalent to Java Class.forName()? | 452,969 | 76 | 2009-01-17T08:10:48Z | 452,971 | 20 | 2009-01-17T08:13:05Z | [
"java",
"python",
"class",
"instantiation"
] | I have the need to take a string argument and create an object of the class named in that string in Python. In Java, I would use `Class.forName().newInstance()`. Is there an equivalent in Python?
---
Thanks for the responses. To answer those who want to know what I'm doing: I want to use a command line argument as the class name, and instantiate it. I'm actually programming in Jython and instantiating Java classes, hence the Java-ness of the question. `getattr()` works great. Thanks much. | Assuming the class is in your scope:
```
globals()['classname'](args, to, constructor)
```
Otherwise:
```
getattr(someModule, 'classname')(args, to, constructor)
```
Edit: Note, you can't give a name like 'foo.bar' to getattr. You'll need to split it by . and call getattr() on each piece left-to-right. This will handle that:
```
module, rest = 'foo.bar.baz'.split('.', 1)
fooBar = reduce(lambda a, b: getattr(a, b), rest.split('.'), globals()[module])
someVar = fooBar(args, to, constructor)
``` |
Does python have an equivalent to Java Class.forName()? | 452,969 | 76 | 2009-01-17T08:10:48Z | 452,981 | 135 | 2009-01-17T08:25:39Z | [
"java",
"python",
"class",
"instantiation"
] | I have the need to take a string argument and create an object of the class named in that string in Python. In Java, I would use `Class.forName().newInstance()`. Is there an equivalent in Python?
---
Thanks for the responses. To answer those who want to know what I'm doing: I want to use a command line argument as the class name, and instantiate it. I'm actually programming in Jython and instantiating Java classes, hence the Java-ness of the question. `getattr()` works great. Thanks much. | Reflection in python is a lot easier and far more flexible than it is in Java.
I recommend reading this [tutorial](http://www.diveintopython.net/power_of_introspection/index.html)
There's no direct function (that I know of) which takes a fully qualified class name and returns the class, however you have all the pieces needed to build that, and you can connect them together.
One bit of advice though: don't try to program in Java style when you're in python.
If you can explain what is it that you're trying to do, maybe we can help you find a more pythonic way of doing it.
Here's a function that does what you want:
```
def get_class( kls ):
parts = kls.split('.')
module = ".".join(parts[:-1])
m = __import__( module )
for comp in parts[1:]:
m = getattr(m, comp)
return m
```
You can use the return value of this function as if it were the class itself.
Here's a usage example:
```
>>> D = get_class("datetime.datetime")
>>> D
<type 'datetime.datetime'>
>>> D.now()
datetime.datetime(2009, 1, 17, 2, 15, 58, 883000)
>>> a = D( 2010, 4, 22 )
>>> a
datetime.datetime(2010, 4, 22, 0, 0)
>>>
```
**How does that work?**
We're using `__import__` to import the module that holds the class, which required that we first extract the module name from the fully qualified name. Then we import the module:
```
m = __import__( module )
```
In this case, `m` will only refer to the top-level module.
For example, if your class lives in the `foo.baz` module, then `m` will be the module `foo`.
We can easily obtain a reference to `foo.baz` using `getattr( m, 'baz' )`
To get from the top-level module to the class, we have to repeatedly use `getattr` on the parts of the class name.
Say, for example, your class name is `foo.baz.bar.Model`; then we do this:
```
m = __import__( "foo.baz.bar" ) #m is package foo
m = getattr( m, "baz" ) #m is package baz
m = getattr( m, "bar" ) #m is module bar
m = getattr( m, "Model" ) #m is class Model
```
This is what's happening in this loop:
```
for comp in parts[1:]:
m = getattr(m, comp)
```
At the end of the loop, `m` will be a reference to the class. This means that `m` is actually the class itself, so you can, for instance, do:
```
a = m() #instantiate a new instance of the class
b = m( arg1, arg2 ) # pass arguments to the constructor
``` |
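As an aside (not part of the original answer): on Python 2.7+ the standard-library `importlib` module can resolve dotted module paths itself, so only the final `getattr` is needed. A minimal sketch, assuming the class is the last dotted component and lives directly on the module (nested classes would still need the `getattr` loop above):

```python
import importlib

def get_class(fqname):
    # Split "pkg.module.ClassName" into the module path and the class name.
    module_name, _, class_name = fqname.rpartition('.')
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

D = get_class("datetime.datetime")
assert D(2010, 4, 22).year == 2010
```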
What is the practical difference between xml, json, rss and atom when interfacing with Twitter? | 453,158 | 5 | 2009-01-17T11:19:36Z | 453,389 | 7 | 2009-01-17T14:45:23Z | [
"python",
"xml",
"json",
"twitter",
"twisted"
] | I'm new to web services and as an introduction I'm playing around with the Twitter API using the Twisted framework in python. I've read up on the different formats they offer, but it's still not clear to me which one I should use in my fairly simple project.
Specifically the practical difference between using JSON or XML is something I'd like guidance on. All I'm doing is requesting the public timeline and caching it locally.
Thanks. | For me it boils down to convenience. Using XML, I have to parse the response in to a DOM (or more usually an ElementTree). Using JSON, one call to simplejson.loads(json\_string) and I have a native Python data structure (lists, dictionaries, strings etc) which I can start iterating over and processing. Anything that means writing a few less lines of code is usually a good idea in my opinion.
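The answer mentions simplejson; the standard-library `json` module (Python 2.6+) exposes the same `loads` interface. A minimal sketch of the convenience difference (the payload shape here is invented for illustration, not Twitter's actual format):

```python
import json
import xml.etree.ElementTree as ET

# JSON: one call yields native Python lists, dicts, strings, and numbers.
data = json.loads('{"statuses": [{"id": 1, "text": "hello"}]}')
assert data["statuses"][0]["text"] == "hello"

# XML: parse into a tree, then navigate elements and convert types by hand.
root = ET.fromstring(
    "<statuses><status><id>1</id><text>hello</text></status></statuses>")
assert root.find("status/text").text == "hello"
assert int(root.find("status/id").text) == 1
```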
I often use JSON to move data structures between PHP, Python and JavaScript - again, because it saves me having to figure out an XML serialization and then parse it at the other end.
And like jinzo said, JSON ends up being slightly fewer bytes on the wire.
You might find my blog entry on JSON from a couple of years ago useful: <http://simonwillison.net/2006/Dec/20/json/> |
Advice regarding IPython + MacVim Workflow | 453,329 | 18 | 2009-01-17T13:46:27Z | 453,837 | 14 | 2009-01-17T19:13:43Z | [
"python",
"vim",
"ipython"
] | I've just found [IPython](http://ipython.scipy.org/) and I can report that I'm in deep love. And the affection was immediate. I think this affair will turn into something lasting, like [the one I have with screen](http://stackoverflow.com/questions/431521/run-a-command-in-a-shell-and-keep-running-the-command-when-you-close-the-session#431570). Ipython and screen happen to be the best of friends too so it's a triangular drama. Purely platonic, mind you.
The reason IPython hits the soft spots with me is very much that I generally like command prompts, and especially \*nix-inspired prompts with inspiration from ksh, csh (yes, csh is a monster, but as a prompt it sports lots of really good features), bash and zsh. And IPython does sure feel like home for a \*nix prompt rider. Mixing the system shell and Python is also a really good idea. Plus, of course, IPython helps a lot when solving [the Python Challenge](http://www.pythonchallenge.com/) riddles. Invaluable, even.
Now, I love Vim too. Since I learnt vi back in the days there's no turning back. And I'm on Mac when I have a choice. Now I'd like to glue together my IPython + MacVim workflow. What I've done so far is that I start Ipython using:
```
ipython -e "open -a MacVim"
```
Thus when I edit from IPython it starts MacVim with the file/module loaded. Could look a bit like so:
```
In [4]: %run foo #This also "imports" foo anew
hello world
In [5]: edit foo
Editing... done. Executing edited code... #This happens immediately
hello world
In [6]: %run foo
hello SO World
```
OK. I think this can be improved. Maybe there's a way to tie IPython into MacVim too? Please share your experiences. Of course if you use TextMate or some other fav editor I'm interested too. Maybe some of the lessons are general. | I use Linux, but I believe this tip can be used in OS X too. I use [GNU Screen](http://www.gnu.org/software/screen/) to send IPython commands from Vim as recommended by [this tip](http://vim.wikia.com/wiki/IPython_integration). This is how I do it:
First, you should open a terminal and start a screen session called 'ipython' or whatever you want, and then start IPython:
```
$ screen -S ipython
$ ipython
```
Then you should put this in your .vimrc:
```
autocmd FileType python map <F5> :w<CR>:!screen -x ipython -X stuff $'\%run %:p\n'<CR><CR>
```
Then when you hit F5, it will tell Screen to execute the command '%run file' inside the 'ipython' session created previously, where file is the file in your current Vim buffer.
You can tweak this to execute the command you want inside IPython from Vim. For example I use this:
```
autocmd FileType python map <F5> :w<CR>:!screen -x ipython -X stuff $'\%reset\ny\n\%cd %:p:h\n\%run %:t\n'<CR><CR>
```
This executes %reset (answering yes to the prompt), then changes to the directory where the current buffer in Vim is located, and then %runs the file. This is especially useful if you have %pdb active in IPython.
Don't forget that you need an active Screen session called 'ipython' with IPython running inside.
If you like Emacs. There is [good support](http://ipython.scipy.org/dist/ipython.el) for IPython. |
Writing a simple "Rock Paper Scissors" game bot | 453,396 | 7 | 2009-01-17T14:50:12Z | 453,400 | 29 | 2009-01-17T14:53:03Z | [
"python"
I need help with a Python game I'm working on (I just started learning Python about 3 days ago, so I'm still a noob =)
This is what I came up with:
```
import random
from time import sleep
print "Please select: "
print "1 Rock"
print "2 Paper"
print "3 Scissors"
player = input ("Choose from 1-3: ")
if player == 1:
print "You choose Rock"
sleep (2)
print "CPU chooses Paper"
sleep (.5)
print "You lose, and you will never win!"
elif player == 2:
print "You choose Paper"
sleep (2)
print "CPU chooses Scissors"
sleep (.5)
print "You lose, and you will never win!"
else:
print "You choose Scissors"
sleep (2)
print "CPU chooses Rock"
sleep (.5)
print "You lose, and you will never win!"
```
and what I want the program to do is to RANDOMLY choose 1 out of the three options (rock paper scissors) no matter what the user inputs! | Well, you've already imported the random module, that's a start.
Try the random.choice function.
```
>>> from random import choice
>>> cpu_choice = choice(('rock', 'paper', 'scissors'))
``` |
Writing a simple "Rock Paper Scissors" game bot | 453,396 | 7 | 2009-01-17T14:50:12Z | 453,502 | 7 | 2009-01-17T16:05:47Z | [
"python"
] | I need help with a python game im working on (I just started learning Python about 3 days ago, so I'm still a nOob =)
This is what I came up with:
```
import random
from time import sleep
print "Please select: "
print "1 Rock"
print "2 Paper"
print "3 Scissors"
player = input ("Choose from 1-3: ")
if player == 1:
print "You choose Rock"
sleep (2)
print "CPU chooses Paper"
sleep (.5)
print "You lose, and you will never win!"
elif player == 2:
print "You choose Paper"
sleep (2)
print "CPU chooses Scissors"
sleep (.5)
print "You lose, and you will never win!"
else:
print "You choose Scissors"
sleep (2)
print "CPU chooses Rock"
sleep (.5)
print "You lose, and you will never win!"
```
and what I want the program to do is to RANDOMLY choose 1 out of the three options (rock paper scissors) no matter what the user inputs! | ```
import random
ROCK, PAPER, SCISSORS = 1, 2, 3
names = 'ROCK', 'PAPER', 'SCISSORS'
def beats(a, b):
if (a,b) in ((ROCK, PAPER), (PAPER, SCISSORS), (SCISSORS, ROCK)):
return False
return True
print "Please select: "
print "1 Rock"
print "2 Paper"
print "3 Scissors"
player = int(input ("Choose from 1-3: "))
cpu = random.choice((ROCK, PAPER, SCISSORS))
if cpu != player:
if beats(player, cpu):
print "player won"
else:
print "cpu won"
else:
print "tie!"
print names[player-1], "vs", names[cpu-1]
``` |
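An aside, not part of the answer above: with `ROCK, PAPER, SCISSORS = 1, 2, 3`, move `a` beats move `b` exactly when `(a - b) % 3 == 1`, because each move beats its cyclic predecessor. A compact alternative to the enumeration in `beats` (note that, unlike `beats`, this returns False for ties, so no separate tie guard is needed before calling it):

```python
ROCK, PAPER, SCISSORS = 1, 2, 3

def wins(a, b):
    # True iff move `a` beats move `b`; False on a loss or a tie.
    return (a - b) % 3 == 1

assert wins(PAPER, ROCK)
assert wins(SCISSORS, PAPER)
assert wins(ROCK, SCISSORS)
assert not wins(ROCK, PAPER)
assert not wins(ROCK, ROCK)
```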
Queryset API distinct() does not work? | 453,477 | 5 | 2009-01-17T15:49:11Z | 476,232 | 7 | 2009-01-24T16:16:07Z | [
"python",
"django"
] | ```
class Message(models.Model):
subject = models.CharField(max_length=100)
pub_date = models.DateTimeField(default=datetime.now())
class Topic(models.Model):
title = models.CharField(max_length=100)
message = models.ManyToManyField(Message, verbose_name='Discussion')
```
I want to get order all the topics according to the latest message object attached to that topic.
I executed this query, but it does not give a distinct queryset.
```
>> Topic.objects.order_by('-message__pub_date').distinct()
``` | You don't need distinct() here, what you need is [aggregation](http://docs.djangoproject.com/en/dev/topics/db/aggregation/#topics-db-aggregation). This query will do what you want:
```
from django.db.models import Max
Topic.objects.annotate(Max('message__pub_date')).order_by('-message__pub_date__max')
```
Though if this is production code, you'll probably want to follow akaihola's advice and denormalize "last\_message\_posted" onto the Topic model directly.
Also, there's an error in your default value for Message.pub\_date. As you have it now, whenever you first run the server and this code is loaded, datetime.now() will be executed once and that value will be used as the pub\_date for all Messages. Use this instead to pass the callable itself so it isn't called until each Message is created:
```
pub_date = models.DateTimeField(default=datetime.now)
``` |
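An aside making the eager-vs-deferred default concrete in plain Python (not Django-specific):

```python
from datetime import datetime
import time

# default=datetime.now()  -> called once, at class-definition time (frozen value)
# default=datetime.now    -> the callable itself is stored and invoked per row
frozen = datetime.now()   # analogous to the buggy default=datetime.now()
factory = datetime.now    # analogous to the fixed default=datetime.now

time.sleep(0.01)
assert factory() > frozen  # each call yields a fresh, later timestamp
```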