title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
Generating LMDB for Caffe | 33,627,888 | 4 | 2015-11-10T10:29:30Z | 33,628,970 | 7 | 2015-11-10T11:29:53Z | [
"python",
"deep-learning",
"caffe",
"lmdb",
"pycaffe"
] | I am trying to build a deep learning model for Saliency analysis using caffe (I am using the python wrapper). But I am unable to understand how to generate the lmdb data structure for this purpose. I have gone through the Imagenet and mnist examples and I understand that I should generate labels in the format
```
my_test_dir/picture-foo.jpg 0
```
But in my case, I will be labeling each pixel with 0 or 1 indicating whether that pixel is salient or not. That won't be a single label for an image.
How do I generate LMDB files for per-pixel labeling? | You can approach this problem in two ways:
**1.** Using HDF5 data layer instead of LMDB. HDF5 is more flexible and can support labels the size of the image. You can see [this answer](http://stackoverflow.com/a/31808324/1714410) for an example of constructing and using HDF5 input data layer.
**2.** You can have two LMDB input layers: one for the image and one for the label. Note that when you build the LMDB you must not use the `'shuffle'` option in order to have the images and their labels in sync. |
How to print the value of a Tensor object in TensorFlow? | 33,633,370 | 41 | 2015-11-10T15:19:58Z | 33,633,839 | 40 | 2015-11-10T15:41:24Z | [
"python",
"tensorflow"
] | I have been using the introductory example of matrix multiplication in TensorFlow.
```
matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.],[2.]])
product = tf.matmul(matrix1, matrix2)
```
And when I print the product, it is displayed as a `Tensor` object (obviously).
product
```
<tensorflow.python.framework.ops.Tensor object at 0x10470fcd0>
```
But how do I know the value of `product`?
The following doesn't help:
```
print product
Tensor("MatMul:0", shape=TensorShape([Dimension(1), Dimension(1)]), dtype=float32)
```
I know that graphs run on `Sessions`, but isn't there any way I can check the output of a TensorObject without running the graph in a `session`? | The easiest\* way to evaluate the actual value of a `Tensor` object is to pass it to the `Session.run()` method, or call `Tensor.eval()` when you have a default session (i.e. in a `with tf.Session():` block, or see below). In general,\*\* you cannot print the value of a tensor without running some code in a session.
If you are experimenting with the programming model, and want an easy way to evaluate tensors, the [`tf.InteractiveSession`](http://tensorflow.org/api_docs/python/client.md#InteractiveSession) lets you open a session at the start of your program, and then use that session for all `Tensor.eval()` (and `Operation.run()`) calls. This can be easier in an interactive setting, such as the shell or an IPython notebook, when it's tedious to pass around a `Session` object everywhere.
This might seem silly for such a small expression, but one of the key ideas in Tensorflow is *deferred execution*: it's very cheap to build a large and complex expression, and when you want to evaluate it, the back-end (to which you connect with a `Session`) is able to schedule its execution more efficiently (e.g. executing independent parts in parallel and using GPUs).
---
\* To print the value of a tensor without returning it to your Python program, you can use the [`tf.Print()`](https://www.tensorflow.org/versions/r0.8/api_docs/python/control_flow_ops.html#Print) op, as [And suggests in another answer](http://stackoverflow.com/a/36296783/3574081). Note that you still need to run part of the graph to see the output of this op, which is printed to standard output. If you're running distributed TensorFlow, the `tf.Print()` op will print its output to the standard output of the task where that op runs.
\*\* You *might* be able to use the experimental [`tf.contrib.util.constant_value()`](https://www.tensorflow.org/versions/r0.8/api_docs/python/contrib.util.html#constant_value) function to get the value of a constant tensor, but it isn't intended for general use, and it isn't defined for many operators. |
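The deferred-execution model described in this answer can be illustrated without TensorFlow at all. Below is a hypothetical, minimal sketch (the `Node` class and helper functions are made up for illustration) of an expression graph whose values only exist once you explicitly run it:

```python
class Node:
    """A symbolic handle to an operation's result; holds no value until run()."""
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs

    def run(self):
        # Evaluation happens only here, mirroring Session.run()/Tensor.eval().
        return self.fn(*(i.run() for i in self.inputs))

def const(v):
    return Node(lambda: v)

def matmul(a, b):
    # Naive matrix multiply over nested lists.
    return Node(lambda x, y: [[sum(p * q for p, q in zip(row, col))
                               for col in zip(*y)] for row in x], a, b)

product = matmul(const([[3., 3.]]), const([[2.], [2.]]))
print(product)        # just an object, like printing a Tensor
print(product.run())  # [[12.0]] -- the value, like sess.run(product)
```

Building `product` is cheap; nothing is computed until `run()`, which is exactly what lets a real back-end schedule the whole graph efficiently.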
How to print the value of a Tensor object in TensorFlow? | 33,633,370 | 41 | 2015-11-10T15:19:58Z | 33,640,758 | 9 | 2015-11-10T22:20:06Z | [
"python",
"tensorflow"
] | I have been using the introductory example of matrix multiplication in TensorFlow.
```
matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.],[2.]])
product = tf.matmul(matrix1, matrix2)
```
And when I print the product, it is displayed as a `Tensor` object (obviously).
product
```
<tensorflow.python.framework.ops.Tensor object at 0x10470fcd0>
```
But how do I know the value of `product`?
The following doesn't help:
```
print product
Tensor("MatMul:0", shape=TensorShape([Dimension(1), Dimension(1)]), dtype=float32)
```
I know that graphs run on `Sessions`, but isn't there any way I can check the output of a TensorObject without running the graph in a `session`? | No, you cannot see the content of the tensor without running the graph (doing `session.run()`). The only things you can see are:
* the dimensionality of the tensor (but I assume it is not hard to calculate it for the [list of the operations](http://tensorflow.org/api_docs/python/math_ops.md#contents) that TF has)
* type of the operation that will be used to generate the tensor (`transpose_1:0`, `random_uniform:0`)
* type of elements in the tensor (`float32`)
I have not found this in the documentation, but I believe that the values of the variables (and some of the constants) are not calculated at the time of assignment.
---
Take a look at this example:
```
import tensorflow as tf
from datetime import datetime
dim = 7000
```
The first example, where I just create a constant tensor of random numbers, runs in approximately the same time irrespective of `dim` (`0:00:00.003261`):
```
startTime = datetime.now()
m1 = tf.truncated_normal([dim, dim], mean=0.0, stddev=0.02, dtype=tf.float32, seed=1)
print datetime.now() - startTime
```
In the second case, where the constant actually gets evaluated and the values are assigned, the time clearly depends on `dim` (`0:00:01.244642`):
```
startTime = datetime.now()
m1 = tf.truncated_normal([dim, dim], mean=0.0, stddev=0.02, dtype=tf.float32, seed=1)
sess = tf.Session()
sess.run(m1)
print datetime.now() - startTime
```
And you can make it more clear by calculating something (`d = tf.matrix_determinant(m1)`, keeping in mind that the time will run in `O(dim^2.8)`)
P.S. I found where it is explained in the [documentation](http://tensorflow.org/resources/faq.md#contents):
> A Tensor object is a symbolic handle to the result of an operation,
> but does not actually hold the values of the operation's output. |
How to print the value of a Tensor object in TensorFlow? | 33,633,370 | 41 | 2015-11-10T15:19:58Z | 36,296,783 | 24 | 2016-03-29T23:13:22Z | [
"python",
"tensorflow"
] | I have been using the introductory example of matrix multiplication in TensorFlow.
```
matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.],[2.]])
product = tf.matmul(matrix1, matrix2)
```
And when I print the product, it is displayed as a `Tensor` object (obviously).
product
```
<tensorflow.python.framework.ops.Tensor object at 0x10470fcd0>
```
But how do I know the value of `product`?
The following doesn't help:
```
print product
Tensor("MatMul:0", shape=TensorShape([Dimension(1), Dimension(1)]), dtype=float32)
```
I know that graphs run on `Sessions`, but isn't there any way I can check the output of a TensorObject without running the graph in a `session`? | While other answers are correct that you cannot print the value until you evaluate the graph, they do not talk about one easy way of actually printing a value inside the graph, once you evaluate it.
The easiest way to see a value of a tensor whenever the graph is evaluated (using `run` or `eval`) is to use the [`Print`](https://www.tensorflow.org/versions/master/api_docs/python/control_flow_ops.html#Print) operation as in this example:
```
# Initialize session
import tensorflow as tf
sess = tf.InteractiveSession()
# Some tensor we want to print the value of
a = tf.constant([1.0, 3.0])
# Add print operation
a = tf.Print(a, [a], message="This is a: ")
# Add more elements of the graph using a
b = tf.add(a, a).eval()
```
Now, whenever we evaluate the whole graph, e.g. using `b.eval()`, we get:
```
I tensorflow/core/kernels/logging_ops.cc:79] This is a: [1 3]
``` |
Python Error : ImportError: No module named 'xml.etree' | 33,633,954 | 2 | 2015-11-10T15:46:23Z | 33,634,074 | 7 | 2015-11-10T15:52:19Z | [
"python",
"xml"
] | I am simply trying to parse an XML file
```
import xml.etree.ElementTree as ET
tree = ET.parse('country_data.xml')
root = tree.getroot()
```
but this gives me
```
import xml.etree.ElementTree as ET
ImportError: No module named 'xml.etree'
```
I am using Python 3.5. I have tried the same code with Python 2.7 and 3.4, but I always get this error. I thought that the XML libraries come as standard. Also, I can see it in my Lib folder
[](http://i.stack.imgur.com/8F9yz.png)
so why can't it pick up the module? I am really confused. Do I have to make a change to an environment variable somewhere?
Please help | Remove the file `xml.py` or a directory `xml` with a file `__init__.py` in it from your current directory and try again. Python will search the current directory first when importing modules. A file named `xml.py` or a package named `xml` in the current directory shadows the standard library package with the same name.
As pointed out in a comment by KeshV, you also need to remove the file `xml.pyc`, if it exists. In Python 2 it will be in the same directory as `xml.py`; in Python 3 it will be in the `__pycache__` subdirectory. In general, as long as the `*.py` file is around, you can safely delete the corresponding `*.pyc` file, because Python will re-create it upon import of the `*.py` file. |
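A quick way to confirm which `xml` Python will actually import is `importlib.util.find_spec` — a small diagnostic sketch:

```python
import importlib.util

# Resolve the module the same way `import xml` would.
spec = importlib.util.find_spec("xml")
print(spec.origin)
# If this path points into your project directory instead of the standard
# library (e.g. .../lib/python3.x/xml/__init__.py), a local xml.py file or
# xml/ package is shadowing the real module and should be renamed or removed.
```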
Pip not working on windows 10, freezes command promt | 33,638,395 | 5 | 2015-11-10T19:53:35Z | 33,928,569 | 15 | 2015-11-26T00:05:08Z | [
"python",
"windows",
"pip"
] | I recently installed Python for windows 10 and need to use pip command to install requests package.
However, whenever I try to use pip in cmd it just freezes my command prompt.
Using CTRL+C, CTRL+D or any command like that to cancel it does not work either; the prompt just freezes like it's waiting for input or something, but I get no output or any clue about what to do. [Picture of command prompt when it's frozen](http://i.stack.imgur.com/uflD4.png)
I've set the PATH variable correctly, and my computer finds pip and launches it, but it just freezes. I've also tried reinstalling python countless times and manually reinstalling pip but nothing seems to do the trick.
EDIT: (Solution) Using easy\_install worked for me (thank you Marco) so I could install the packages I needed for my project. I also managed to fix pip using easy\_install so pip also works for me now.
The solution to fixing pip was: easy\_install pip | I had exactly the same problem here (Windows 10.0.10240). After typing just "pip" and hitting enter, nothing else happened on the console. This problem also affected other compiled Python-related .exe scripts, like mezzanine-project.exe.
The antivirus AVAST was the culprit (in my case) !!!
After disabling AVAST files module (or uninstalling AVAST) pip started working again. |
When counting the occurrence of a string in a file, my code does not count the very first word | 33,640,387 | 4 | 2015-11-10T21:55:40Z | 33,640,520 | 7 | 2015-11-10T22:04:25Z | [
"python",
"string",
"file",
"text",
"readline"
] | ## Code
```
def main():
try:
file=input('Enter the name of the file you wish to open: ')
thefile=open(file,'r')
line=thefile.readline()
line=line.replace('.','')
line=line.replace(',','')
thefilelist=line.split()
thefilelistset=set(thefilelist)
d={}
for item in thefilelist:
thefile.seek(0)
wordcount=line.count(' '+item+' ')
d[item]=wordcount
for i in d.items():
print(i)
thefile.close()
except IOError:
print('IOError: Sorry but i had an issue opening the file that you specified to READ from please try again but keep in mind to check your spelling of the file you want to open')
main()
```
## Problem
Basically I am trying to read the file and count the number of times each word in the file appears then print that word with the number of times it appeared next to it.
It all works except that it will not count the first word in the file.
## File I am using
my practice file that I am testing this code on contains this text:
> This file is for testing. It is going to test how many times the words
> in here appear.
## output
```
('for', 1)
('going', 1)
('the', 1)
('testing', 1)
('is', 2)
('file', 1)
('test', 1)
('It', 1)
('This', 0)
('appear', 1)
('to', 1)
('times', 1)
('here', 1)
('how', 1)
('in', 1)
('words', 1)
('many', 1)
```
## note
If you notice it says that 'This' appears 0 times but it does in fact appear in the file.
any ideas? | My guess would be this line:
`wordcount=line.count(' '+item+' ')`
You are looking for "space" + YourWord + "space", and the first word in the file is not preceded by a space. |
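One way to sidestep the leading-space pitfall entirely is to count the already-split tokens instead of searching the raw line, e.g. with `collections.Counter` (a sketch using the question's sample text, not the asker's exact code):

```python
from collections import Counter

line = ("This file is for testing. It is going to test how many times "
        "the words in here appear.")
# Strip punctuation, split on whitespace, then count tokens directly.
words = line.replace('.', '').replace(',', '').split()
counts = Counter(words)
print(counts['This'], counts['is'])  # 1 2 -- the first word is counted too
```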
How to do Xavier initialization on TensorFlow | 33,640,581 | 16 | 2015-11-10T22:07:54Z | 36,784,797 | 30 | 2016-04-22T04:23:57Z | [
"python",
"tensorflow"
] | I'm porting my Caffe network over to TensorFlow but it doesn't seem to have xavier initialization. I'm using `truncated_normal` but this seems to be making it a lot harder to train. | TensorFlow 0.8 now includes a Xavier initializer implementation:
<https://www.tensorflow.org/versions/r0.8/api_docs/python/contrib.layers.html#xavier_initializer>
You can use something like this:
```
W = tf.get_variable("W", shape=[784, 256],
initializer=tf.contrib.layers.xavier_initializer())
``` |
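For reference, what the initializer computes can be sketched with plain NumPy. This follows the Glorot & Bengio uniform variant (limit = sqrt(6 / (fan_in + fan_out))); it is an illustration of the formula, not the library's actual code:

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, seed=0):
    # Sample U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)),
    # which keeps activation variance roughly constant across layers.
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    rng = np.random.default_rng(seed)
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

W = xavier_uniform(784, 256)
print(W.shape)  # (784, 256)
```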
Why does TensorFlow example fail when increasing batch size? | 33,641,799 | 11 | 2015-11-10T23:43:14Z | 33,644,778 | 19 | 2015-11-11T05:30:05Z | [
"python",
"tensorflow"
] | I was looking at the [Tensorflow MNIST example for beginners](http://tensorflow.org/tutorials/mnist/beginners/index.md) and found that in this part:
```
for i in range(1000):
batch_xs, batch_ys = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
```
changing the batch size from 100 to be above 204 causes the model to fail to converge. It works up to 204, but at 205 and any higher number I tried, the accuracy would end up < 10%. Is this a bug, something about the algorithm, something else?
This is running their binary installation for OS X, seems to be version 0.5.0. | You're using the very basic linear model in the beginners example?
Here's a trick to debug it - watch the cross-entropy as you increase the batch size (the first line is from the example, the second I just added):
```
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
cross_entropy = tf.Print(cross_entropy, [cross_entropy], "CrossE")
```
At a batch size of 204, you'll see:
```
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[92.37558]
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[90.107414]
```
But at 205, you'll see a sequence like this, from the start:
```
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[472.02966]
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[475.11697]
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[1418.6655]
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[1546.3833]
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[1684.2932]
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[1420.02]
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[1796.0872]
I tensorflow/core/kernels/logging_ops.cc:64] CrossE[nan]
```
Ack - NaN's showing up. Basically, the large batch size is creating such a huge gradient that your model is spiraling out of control -- the updates it's applying are too large, and overshooting the direction it should go by a huge margin.
In practice, there are a few ways to fix this. You could reduce the learning rate from .01 to, say, .005, which results in a final accuracy of 0.92.
```
train_step = tf.train.GradientDescentOptimizer(0.005).minimize(cross_entropy)
```
Or you could use a more sophisticated optimization algorithm (Adam, Momentum, etc.) that tries to do more to figure out the direction of the gradient. Or you could use a more complex model that has more free parameters across which to disperse that big gradient. |
Why does TensorFlow example fail when increasing batch size? | 33,641,799 | 11 | 2015-11-10T23:43:14Z | 33,645,235 | 10 | 2015-11-11T06:19:15Z | [
"python",
"tensorflow"
] | I was looking at the [Tensorflow MNIST example for beginners](http://tensorflow.org/tutorials/mnist/beginners/index.md) and found that in this part:
```
for i in range(1000):
batch_xs, batch_ys = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
```
changing the batch size from 100 to be above 204 causes the model to fail to converge. It works up to 204, but at 205 and any higher number I tried, the accuracy would end up < 10%. Is this a bug, something about the algorithm, something else?
This is running their binary installation for OS X, seems to be version 0.5.0. | @dga gave a great answer, but I wanted to expand a little.
When I wrote the beginners tutorial, I implemented the cost function like so:
> cross\_entropy = -tf.reduce\_sum(y\_\*tf.log(y))
I wrote it that way because that looks most similar to the mathematical definition of cross-entropy. But it might actually be better to do something like this:
> cross\_entropy = -tf.reduce\_mean(y\_\*tf.log(y))
Why might it be nicer to use a mean instead of a sum? Well, if we sum, then doubling the batch size doubles the cost, and also doubles the magnitude of the gradient. Unless we adjust our learning rate (or use an algorithm that adjusts it for us, like @dga suggested) our training will explode! But if we use a mean, then our learning rate becomes kind of independent of our batch size, which is nice.
I'd encourage you to check out Adam (`tf.train.AdamOptimizer()`). It's often more tolerant to fiddling with things than SGD. |
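The scaling argument above is easy to check numerically with plain NumPy (a sketch with made-up one-hot labels and uniform predictions, not the tutorial's model):

```python
import numpy as np

def cross_entropy(y_true, y_pred, reduce):
    # Per-example cross-entropy, then either summed or averaged over the batch.
    per_example = -np.sum(y_true * np.log(y_pred), axis=1)
    return per_example.sum() if reduce == "sum" else per_example.mean()

def fake_batch(n, classes=10):
    y_true = np.eye(classes)[np.zeros(n, dtype=int)]  # one-hot labels
    y_pred = np.full((n, classes), 1.0 / classes)     # uniform predictions
    return y_true, y_pred

# The summed loss (and hence the gradient magnitude) doubles with batch size...
print(round(cross_entropy(*fake_batch(200), "sum") / cross_entropy(*fake_batch(100), "sum"), 6))
# ...while the mean loss is independent of it.
print(round(cross_entropy(*fake_batch(200), "mean") / cross_entropy(*fake_batch(100), "mean"), 6))
```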
Why does TensorFlow example fail when increasing batch size? | 33,641,799 | 11 | 2015-11-10T23:43:14Z | 34,364,526 | 11 | 2015-12-18T21:54:10Z | [
"python",
"tensorflow"
] | I was looking at the [Tensorflow MNIST example for beginners](http://tensorflow.org/tutorials/mnist/beginners/index.md) and found that in this part:
```
for i in range(1000):
batch_xs, batch_ys = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
```
changing the batch size from 100 to be above 204 causes the model to fail to converge. It works up to 204, but at 205 and any higher number I tried, the accuracy would end up < 10%. Is this a bug, something about the algorithm, something else?
This is running their binary installation for OS X, seems to be version 0.5.0. | **NaN occurs when `0*log(0)` occurs:**
replace:
```
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
```
with:
```
cross_entropy = -tf.reduce_sum(y_*tf.log(y + 1e-10))
``` |
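The `0 * log(0)` failure mode, and why the epsilon fixes it, can be reproduced with plain NumPy, independently of TensorFlow:

```python
import numpy as np

y_true = np.array([0.0, 1.0])
y_pred = np.array([0.0, 1.0])  # a saturated prediction: one entry is exactly 0

with np.errstate(divide="ignore", invalid="ignore"):
    bad = -np.sum(y_true * np.log(y_pred))       # 0 * log(0) = 0 * -inf = nan
good = -np.sum(y_true * np.log(y_pred + 1e-10))  # the epsilon keeps log() finite

print(np.isnan(bad), np.isfinite(good))  # True True
```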
Tensorflow and Anaconda on Ubuntu? | 33,646,541 | 7 | 2015-11-11T08:15:16Z | 33,698,750 | 26 | 2015-11-13T17:40:22Z | [
"python",
"anaconda",
"tensorflow"
] | On my Ubuntu 14.04, I have installed tensorflow, using "pip", as specified in the [Tensorflow Installation instructions](http://tensorflow.org/get_started/os_setup.md) and I made sure it was working by importing it in python and it did work.
Then, I installed Anaconda and it changed my .bashrc file by adding the following line to it:
```
export PATH="/home/sonny/anaconda2/bin:$PATH"
```
But because of this change, it now looks into the PATH above, which doesn't contain tensorflow, so now I can't import tensorflow in my python code.
What is the proper way to extend the $PATH environment variable so that it stays using everything from anaconda2 but it becomes able to import "tensorflow"? | I solved the problem but in a different way!
I found a link where the tensorflow.whl files were converted to conda packages, so I went ahead and installed it using the command:
```
conda install -c https://conda.anaconda.org/jjhelmus tensorflow
```
and it worked, since the $PATH points to anaconda packages, I can import it now!
[Source is here](https://anaconda.org/jjhelmus/tensorflow) |
Why do we use name parameter when creating a variable in Tensorflow? | 33,648,167 | 5 | 2015-11-11T10:01:41Z | 33,648,339 | 15 | 2015-11-11T10:10:40Z | [
"python",
"tensorflow"
] | In some of the places I saw the syntax, where variables are initialized with names, sometimes without names. For [example](http://tensorflow.org/get_started/basic_usage.md):
```
# With name
var = tf.Variable(0, name="counter")
# Without
one = tf.constant(1)
```
But then everywhere in the code, it is referred only by the name `var`. So what is the point for the name `counter`? | The `name` parameter is optional (you can create variables and constants with or without it), and the variable you use in your program does not depend on it. Names can be helpful in a couple of places:
**When you want to save or restore your variables** (you can [save them to a binary file](http://tensorflow.org/api_docs/python/state_ops.md#Saver) after the computation). From [docs](http://tensorflow.org/how_tos/variables/index.md):
> By default, it uses the value of the Variable.name property for each
> variable
```
matrix_1 = tf.Variable([[1, 2], [2, 3]], name="v1")
matrix_2 = tf.Variable([[3, 4], [5, 6]], name="v2")
init = tf.initialize_all_variables()
saver = tf.train.Saver()
sess = tf.Session()
sess.run(init)
save_path = saver.save(sess, "/model.ckpt")
sess.close()
```
Even though your Python variables are named `matrix_1` and `matrix_2`, they are saved as `v1` and `v2` in the file.
**Also names are used in TensorBoard to nicely show names of edges**. You can even [group them by using the same scope](http://tensorflow.org/how_tos/graph_viz/index.md#name_scoping_and_nodes):
```
import tensorflow as tf
with tf.name_scope('hidden') as scope:
a = tf.constant(5, name='alpha')
W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0), name='weights')
b = tf.Variable(tf.zeros([1]), name='biases')
``` |
Tensorflow image reading & display | 33,648,322 | 11 | 2015-11-11T10:09:45Z | 33,862,534 | 17 | 2015-11-23T01:39:14Z | [
"python",
"tensorflow"
] | I've got a bunch of images in a format similar to Cifar10 (binary file, `size = 96*96*3` bytes per image), one image after another ([STL-10 dataset](http://cs.stanford.edu/~acoates/stl10/)). The file I'm opening has 138MB.
I tried to read & check the contents of the Tensors containing the images to be sure that the reading is done right, however I have two questions -
1. Does the `FixedLengthRecordReader` load the whole file, however just provide inputs one at a time? Since reading the first `size` bytes should be relatively fast. However, the code takes about two minutes to run.
2. How to get the actual image contents in a displayable format, or display them internally to validate that the images are read well? I did `sess.run(uint8image)`, however the result is empty.
The code is below:
```
import tensorflow as tf
def read_stl10(filename_queue):
class STL10Record(object):
pass
result = STL10Record()
result.height = 96
result.width = 96
result.depth = 3
image_bytes = result.height * result.width * result.depth
record_bytes = image_bytes
reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)
result.key, value = reader.read(filename_queue)
print value
record_bytes = tf.decode_raw(value, tf.uint8)
depth_major = tf.reshape(tf.slice(record_bytes, [0], [image_bytes]),
[result.depth, result.height, result.width])
result.uint8image = tf.transpose(depth_major, [1, 2, 0])
return result
# probably a hack since I should've provided a string tensor
filename_queue = tf.train.string_input_producer(['./data/train_X'])
image = read_stl10(filename_queue)
print image.uint8image
with tf.Session() as sess:
result = sess.run(image.uint8image)
print result, type(result)
```
**Output:**
```
Tensor("ReaderRead:1", shape=TensorShape([]), dtype=string)
Tensor("transpose:0", shape=TensorShape([Dimension(96), Dimension(96), Dimension(3)]), dtype=uint8)
I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op parallelism threads: 4
I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op parallelism threads: 4
[empty line for last print]
Process finished with exit code 137
```
I'm running this on my CPU, if that adds anything.
EDIT: I found the pure TensorFlow solution thanks to Rosa. Apparently, when using the `string_input_producer`, in order to see the results, you need to initialize the queue runners.
The only required thing to add to the code above is the second line from below:
```
...
with tf.Session() as sess:
tf.train.start_queue_runners(sess=sess)
...
```
Afterwards, the image in the `result` can be displayed with `matplotlib.pyplot.imshow(result)`. I hope this helps someone. If you have any further questions, feel free to ask me or check the link in Rosa's answer. | Just to give a complete answer:
```
import tensorflow as tf
import numpy as np
from PIL import Image

filename_queue = tf.train.string_input_producer(['/Users/HANEL/Desktop/tf.png']) # list of files to read
reader = tf.WholeFileReader()
key, value = reader.read(filename_queue)
my_img = tf.image.decode_png(value) # use png or jpg decoder based on your files.
init_op = tf.initialize_all_variables()
with tf.Session() as sess:
sess.run(init_op)
# Start populating the filename queue.
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
for i in range(1): #length of your filename list
image = my_img.eval() #here is your image Tensor :)
print(image.shape)
    Image.fromarray(np.asarray(image)).show()
coord.request_stop()
coord.join(threads)
```
Or if you have a directory of images you can add them all via [this Github source file](https://github.com/HamedMP/ImageFlow)
@mttk and @salvador-dali: I hope it is what you need |
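To sanity-check that fixed-length records are being sliced and transposed correctly, the same depth-major decode can be reproduced offline with NumPy on a synthetic file (hypothetical temp path, made-up byte values):

```python
import os
import tempfile

import numpy as np

H, W, D = 96, 96, 3
record_bytes = H * W * D

# Write two fake depth-major records to a temporary file.
raw = (np.arange(2 * record_bytes) % 256).astype(np.uint8)
path = os.path.join(tempfile.mkdtemp(), "train_X")
raw.tofile(path)

# Decode: fixed-length slices, reshape to [depth, height, width], then
# transpose to [height, width, depth], like tf.transpose(..., [1, 2, 0]).
records = np.fromfile(path, dtype=np.uint8).reshape(-1, D, H, W)
images = records.transpose(0, 2, 3, 1)
print(images.shape)  # (2, 96, 96, 3)
```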
Simplify a if statement Python | 33,653,175 | 2 | 2015-11-11T14:54:38Z | 33,653,207 | 7 | 2015-11-11T14:56:10Z | [
"python",
"if-statement"
] | Is there a way to simplify this if-statement:
```
if self[by1,bx1]=='A' or self[by1,bx1+1]=='A' or self[by1,bx1+2]=='A' or self[by1,bx1+3]=='A':
```
coming from a class where `self[y,x]` fetches a value from a table.
The original code is:
```
for i in range(4):
if self[by1,bx1]=='A' or self[by1,bx1+1]=='A' or self[by1,bx1+2]=='A' or self[by1,bx1+3]=='A':
print('There is already a ship here')
by1=0
bx1=0
self.placing_Battleship_p1()
elif by1==0 or by1==0:
pass
else:
self[by1,bx1+i]='B'
```
I want to check that none of the positions in my table are equal to 'A' before changing them to 'B'. | Sure, you could use `any` for this. This should be equivalent.
```
if any(self[by1,bx1+x]=='A' for x in range(4)):
``` |
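With a minimal stand-in for the board class (a dict keyed by `(row, col)` tuples, mimicking `self[y, x]` — the names and layout are made up for illustration), the simplification behaves like this:

```python
board = {(5, c): '~' for c in range(10)}  # one empty row of a hypothetical grid
board[5, 7] = 'A'                         # one occupied cell

by1, bx1 = 5, 5
blocked = any(board[by1, bx1 + x] == 'A' for x in range(4))
print(blocked)  # True: cell (5, 7) falls inside the 4-cell span
```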
Error while importing Tensorflow in python2.7 in Ubuntu 12.04. 'GLIBC_2.17 not found' | 33,655,731 | 21 | 2015-11-11T16:58:54Z | 34,897,674 | 7 | 2016-01-20T10:37:40Z | [
"python",
"ubuntu",
"glibc",
"tensorflow"
] | I have installed the Tensorflow bindings with python successfully. But when I try to import Tensorflow, I get the following error.
> ImportError: /lib/x86\_64-linux-gnu/libc.so.6: version `GLIBC\_2.17' not
> found (required by
> /usr/local/lib/python2.7/dist-packages/tensorflow/python/\_pywrap\_tensorflow.so)
I have tried to update GLIBC\_2.15 to 2.17, but no luck. | I tried [BR\_User's solution](http://stackoverflow.com/a/33699100/1990516) and still had an annoying:
```
ImportError: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.14' not found
```
I am on CentOS 6.7, which also lacks an updated C++ standard library, so to build on BR\_User's solution I extracted the correct libstdc++ package; however, I found no need for the virtual environment.
Supposing you have already installed tensorflow, it gives:
```
mkdir ~/my_libc_env
cd ~/my_libc_env
wget http://launchpadlibrarian.net/137699828/libc6_2.17-0ubuntu5_amd64.deb
wget http://launchpadlibrarian.net/137699829/libc6-dev_2.17-0ubuntu5_amd64.deb
wget ftp://rpmfind.net/linux/sourceforge/m/ma/magicspecs/apt/3.0/x86_64/RPMS.lib/libstdc++-4.8.2-7mgc30.x86_64.rpm
ar p libc6_2.17-0ubuntu5_amd64.deb data.tar.gz | tar zx
ar p libc6-dev_2.17-0ubuntu5_amd64.deb data.tar.gz | tar zx
rpm2cpio libstdc++-4.8.2-7mgc30.x86_64.rpm| cpio -idmv
```
and then run python with:
```
LD_LIBRARY_PATH="$HOME/my_libc_env/lib/x86_64-linux-gnu/:$HOME/my_libc_env/usr/lib64/" $HOME/my_libc_env/lib/x86_64-linux-gnu/ld-2.17.so `which python`
```
If it doesn't work, I have [another solution](http://stackoverflow.com/a/34900471/1990516), but you won't like it. |
Error while importing Tensorflow in python2.7 in Ubuntu 12.04. 'GLIBC_2.17 not found' | 33,655,731 | 21 | 2015-11-11T16:58:54Z | 34,900,471 | 8 | 2016-01-20T12:45:41Z | [
"python",
"ubuntu",
"glibc",
"tensorflow"
] | I have installed the Tensorflow bindings with python successfully. But when I try to import Tensorflow, I get the following error.
> ImportError: /lib/x86\_64-linux-gnu/libc.so.6: version `GLIBC\_2.17' not
> found (required by
> /usr/local/lib/python2.7/dist-packages/tensorflow/python/\_pywrap\_tensorflow.so)
I have tried to update GLIBC\_2.15 to 2.17, but no luck. | Okay, so here is the other solution I mentioned in my [previous answer](http://stackoverflow.com/a/34897674/1990516). It's more tricky, but it should always work on systems with GLIBC >= 2.12 and GLIBCXX >= 3.4.13.
In my case it was on a CentOS 6.7, but it's also fine for Ubuntu 12.04.
We're going to need a version of gcc that supports c++11, either on another machine or an isolated install; but not for the moment.
What we're gonna do here is edit the \_pywrap\_tensorflow.so binary in order to 'weakify' its libc and libstdc++ dependencies, so that ld accepts to link the stubs we're gonna make. Then we'll make those stubs for the missing symbols, and finally we're gonna pre-load all of this when running python.
First of all, I want to thank James for his great article ( <http://www.lightofdawn.org/wiki/wiki.cgi/NewAppsOnOldGlibc> ) and precious advices, I couldn't have made it without him.
So let's start by weakifying the dependencies; it's just a matter of replacing the right bytes in \_pywrap\_tensorflow.so. Please note that this step only works for the current version of tensorflow (0.6.0). If it's not done already, create and activate your [virtualenv](http://docs.python-guide.org/en/latest/dev/virtualenvs/) if you have one (if you're not admin, virtualenv is one solution; another is to add the `--user` flag to the pip command), and install tensorflow 0.6.0 (replace cpu by gpu in the url if you want the gpu version):
```
pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.6.0-cp27-none-linux_x86_64.whl
```
And let's weakify all the annoying dependencies, here is the command for the cpu version of tensorflow:
```
TENSORFLOW_DIR=`python -c "import imp; print(imp.find_module('tensorflow')[1])"`
for addr in 0xC6A93C 0xC6A99C 0xC6A9EC 0xC6AA0C 0xC6AA1C 0xC6AA3C; do printf '\x02' | dd conv=notrunc of=${TENSORFLOW_DIR}/python/_pywrap_tensorflow.so bs=1 seek=$((addr)) ; done
```
And here is the gpu one (run only the correct one or you'll corrupt the binary):
```
TENSORFLOW_DIR=`python -c "import imp; print(imp.find_module('tensorflow')[1])"`
for addr in 0xDC5EA4 0xDC5F04 0xDC5F54 0xDC5F74 0xDC5F84 0xDC5FA4; do printf '\x02' | dd conv=notrunc of=${TENSORFLOW_DIR}/python/_pywrap_tensorflow.so bs=1 seek=$((addr)) ; done
```
You can check it with:
```
readelf -V ${TENSORFLOW_DIR}/python/_pywrap_tensorflow.so
```
Have a look at the article if you want to understand what's going on here.
Now we're gonna make the stubs for the missing libc symbols:
```
mkdir ~/my_stubs
cd ~/my_stubs
MYSTUBS=~/my_stubs
printf "#include <time.h>\n#include <string.h>\nvoid* memcpy(void *dest, const void *src, size_t n) {\nreturn memmove(dest, src, n);\n}\nint clock_gettime(clockid_t clk_id, struct timespec *tp) {\nreturn clock_gettime(clk_id, tp);\n}" > mylibc.c
gcc -s -shared -o mylibc.so -fPIC -fno-builtin mylibc.c
```
You **need** to perform that step on the machine with the missing dependencies (or machine with similar versions of standard libraries (in a cluster for example)).
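Before building the stubs, it can help to confirm which C library the target machine actually runs. A minimal stdlib sketch (it reports empty strings on non-glibc platforms):

```python
import platform

# Report the C library this Python binary was linked against, e.g. ('glibc', '2.19')
lib, version = platform.libc_ver()
print(lib, version)
```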
Now we'll probably need to change machines, since we need a gcc that supports c++11, and it is probably not on the machine that lacks all the dependencies (or you can use an isolated install of a recent gcc). In the following I assume we're still in `~/my_stubs` and that you somehow share your home across the machines; otherwise you'll just have to copy the .so files we're going to generate when it's done.
So, here's one stub we can write directly for libstdc++; the remaining missing symbols we'll compile from the gcc source (it might take some time to clone the repository):
```
printf "#include <functional>\nvoid std::__throw_bad_function_call(void) {\nexit(1);\n}" > bad_function.cc
gcc -std=c++11 -s -shared -o bad_function.so -fPIC -fno-builtin bad_function.cc
git clone https://github.com/gcc-mirror/gcc.git
cd gcc
mkdir my_include
mkdir my_include/ext
cp libstdc++-v3/include/ext/aligned_buffer.h my_include/ext
gcc -I$PWD/my_include -std=c++11 -fpermissive -s -shared -o $MYSTUBS/hashtable.so -fPIC -fno-builtin libstdc++-v3/src/c++11/hashtable_c++0x.cc
gcc -std=c++11 -fpermissive -s -shared -o $MYSTUBS/chrono.so -fPIC -fno-builtin libstdc++-v3/src/c++11/chrono.cc
gcc -std=c++11 -fpermissive -s -shared -o $MYSTUBS/random.so -fPIC -fno-builtin libstdc++-v3/src/c++11/random.cc
gcc -std=c++11 -fpermissive -s -shared -o $MYSTUBS/hash_bytes.so -fPIC -fno-builtin ./libstdc++-v3/libsupc++/hash_bytes.cc
```
And that's it! You can now run a tensorflow python script by preloading all our shared libraries (and your local libstdc++):
```
LIBSTDCPP=`ldconfig -p | grep libstdc++.so.6 | grep 64 | cut -d' ' -f4` #For 64bit machines
LD_PRELOAD="$MYSTUBS/mylibc.so:$MYSTUBS/random.so:$MYSTUBS/hash_bytes.so:$MYSTUBS/chrono.so:$MYSTUBS/hashtable.so:$MYSTUBS/bad_function.so:$LIBSTDCPP" python ${TENSORFLOW_DIR}/models/image/mnist/convolutional.py
```
:) |
Gunicorn Import by filename is not supported (module) | 33,655,841 | 5 | 2015-11-11T17:05:00Z | 33,656,476 | 8 | 2015-11-11T17:37:52Z | [
"python",
"containers",
"gunicorn"
] | I have newly created an Ubuntu container and installed the required packages in a virtual environment. Then I ran the pre-existing Python service code with `python path/to/my/file/X.py` (in the virtualenv) and it works fine. But when I run it with gunicorn as `gunicorn -b 0.0.0.0:5000 path/to/my/file/X:app` (in the virtualenv), I get the following error:
```
2015-11-11 16:38:08 [19118] [INFO] Starting gunicorn 17.5
2015-11-11 16:38:08 [19118] [INFO] Listening at: http://0.0.0.0:444 (19118)
2015-11-11 16:38:08 [19118] [INFO] Using worker: sync
2015-11-11 16:38:08 [19123] [INFO] Booting worker with pid: 19123
2015-11-11 16:38:08 [19123] [ERROR] Exception in worker process:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/gunicorn/arbiter.py", line 473, in spawn_worker
worker.init_process()
File "/usr/lib/python2.7/dist-packages/gunicorn/workers/base.py", line 100, in init_process
self.wsgi = self.app.wsgi()
File "/usr/lib/python2.7/dist-packages/gunicorn/app/base.py", line 115, in wsgi
self.callable = self.load()
File "/usr/lib/python2.7/dist-packages/gunicorn/app/wsgiapp.py", line 33, in load
return util.import_app(self.app_uri)
File "/usr/lib/python2.7/dist-packages/gunicorn/util.py", line 362, in import_app
__import__(module)
ImportError: Import by filename is not supported.
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/gunicorn/arbiter.py", line 473, in spawn_worker
worker.init_process()
File "/usr/lib/python2.7/dist-packages/gunicorn/workers/base.py", line 100, in init_process
self.wsgi = self.app.wsgi()
File "/usr/lib/python2.7/dist-packages/gunicorn/app/base.py", line 115, in wsgi
self.callable = self.load()
File "/usr/lib/python2.7/dist-packages/gunicorn/app/wsgiapp.py", line 33, in load
return util.import_app(self.app_uri)
File "/usr/lib/python2.7/dist-packages/gunicorn/util.py", line 362, in import_app
__import__(module)
ImportError: Import by filename is not supported.
2015-11-11 16:38:08 [19123] [INFO] Worker exiting (pid: 19123)
2015-11-11 16:38:09 [19118] [INFO] Shutting down: Master
```
Can anyone help me fix the `ImportError: Import by filename is not supported`? And why does it happen here, when the same gunicorn setup works fine on another server? | It's just like the error says: you can't refer to Python modules by file path; you must refer to them by a dotted module path starting at a directory that is in PYTHONPATH.
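For intuition, a small sketch of the underlying rule: the import machinery only accepts dotted module names, never file paths (the path below is the question's, purely illustrative):

```python
import importlib

raised = False
try:
    # A path-style name like the one passed to gunicorn on the command line
    importlib.import_module("path/to/my/file/X")
except (ImportError, ValueError):
    raised = True

print(raised)  # True: slashes are not valid in module names
```

Hence the dotted form in the command below.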
```
gunicorn -b 0.0.0.0:5000 path.inside.virtualenv.X:app
``` |
Unable to import Tensorflow "No module named copyreg" | 33,656,551 | 9 | 2015-11-11T17:42:11Z | 33,691,154 | 18 | 2015-11-13T10:56:51Z | [
"python",
"tensorflow"
] | El Capitan OS here. I've been trying to find a workaround for importing TensorFlow into my IPython notebook, but so far no luck.
Like many people in the forums, I've also had issues installing tensorflow because of the six package. I was able to install it after some fidgeting with brew:
```
brew link gdbm
brew install python
brew linkapps python
sudo pip install https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
```
I got a message that tensorflow was installed correctly. Even when I did `sudo pip install tensorflow` I get the message:
```
Requirement already satisfied (use --upgrade to upgrade): tensorflow in /usr/local/lib/python2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): six>=1.10.0 in /Library/Python/2.7/site-packages (from tensorflow)
Requirement already satisfied (use --upgrade to upgrade): numpy>=1.9.2 in /usr/local/lib/python2.7/site-packages (from tensorflow)
```
However, when I'm in my IPython notebook and I do an `import tensorflow`, I get the message: `ImportError: No module named tensorflow`
I've dug further and found this error on the import as well:
```
In [1]: import tensorflow
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-a649b509054f> in <module>()
----> 1 import tensorflow
/usr/local/lib/python2.7/site-packages/tensorflow/__init__.py in <module>()
2 # module.
3 # pylint: disable=wildcard-import
----> 4 from tensorflow.python import *
/usr/local/lib/python2.7/site-packages/tensorflow/python/__init__.py in <module>()
11
12 import tensorflow.python.platform
---> 13 from tensorflow.core.framework.graph_pb2 import *
14 from tensorflow.core.framework.summary_pb2 import *
15 from tensorflow.core.framework.config_pb2 import *
/usr/local/lib/python2.7/site-packages/tensorflow/core/framework/graph_pb2.py in <module>()
6 from google.protobuf import descriptor as _descriptor
7 from google.protobuf import message as _message
----> 8 from google.protobuf import reflection as _reflection
9 from google.protobuf import symbol_database as _symbol_database
10 from google.protobuf import descriptor_pb2
/usr/local/lib/python2.7/site-packages/google/protobuf/reflection.py in <module>()
56 from google.protobuf.pyext import cpp_message as message_impl
57 else:
---> 58 from google.protobuf.internal import python_message as message_impl
59
60 # The type of all Message classes.
/usr/local/lib/python2.7/site-packages/google/protobuf/internal/python_message.py in <module>()
57
58 import six
---> 59 import six.moves.copyreg as copyreg
60
61 # We use "as" to avoid name collisions with variables.
ImportError: No module named copyreg
``` | As Jonah commented, it's solved by this:
On MacOSX
If you encounter:
```
import six.moves.copyreg as copyreg
```
```
ImportError: No module named copyreg
```
Solution: TensorFlow depends on protobuf which requires six-1.10.0. Apple's default python environment has six-1.4.1 and may be difficult to upgrade. So we recommend either installing a separate copy of python via homebrew:
```
brew install python
```
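When two copies of a package exist on disk (Apple's bundled `six` versus a newer one), a quick stdlib check shows which copy the interpreter will actually pick up; `json` is used here as a stand-in module name:

```python
import importlib.util

# find_spec resolves a module the same way `import` would, without importing it
spec = importlib.util.find_spec("json")
print(spec.origin)  # file path of the copy that wins on sys.path
```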
But I highly recommend using virtualenv for this purpose.
```
# On Mac:
$ sudo easy_install pip # If pip is not already installed
$ sudo pip install --upgrade virtualenv
```
Next, set up a new virtualenv environment. To set it up in the directory `~/tensorflow`, run:
```
$ virtualenv --system-site-packages ~/tensorflow
$ cd ~/tensorflow
```
Then activate the virtualenv:
```
$ source bin/activate # If using bash
$ source bin/activate.csh # If using csh
(tensorflow)$ # Your prompt should change
```
Inside the virtualenv, install TensorFlow:
```
(tensorflow)$ pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
```
You can then run your TensorFlow program like:
```
(tensorflow)$ python tensorflow/models/image/mnist/convolutional.py
# When you are done using TensorFlow:
(tensorflow)$ deactivate # Deactivate the virtualenv
$ # Your prompt should change back
``` |
TensorFlow MNIST example not running with fully_connected_feed.py | 33,659,424 | 8 | 2015-11-11T20:40:17Z | 33,662,396 | 12 | 2015-11-12T00:26:11Z | [
"python",
"tensorflow"
] | I am able to run the `Deep MNIST Example` fine, but when running `fully_connected_feed.py`, I am getting the following error:
```
File "fully_connected_feed.py", line 19, in <module>
from tensorflow.g3doc.tutorials.mnist import input_data ImportError: No module named
g3doc.tutorials.mnist
```
I am new to Python, so it could also just be a general setup problem. | This is a Python path issue. Assuming that the directory `tensorflow/g3doc/tutorials/mnist` is your current working directory (or on your Python path), the easiest way to resolve it is to change the following lines in fully\_connected\_feed.py from:
```
from tensorflow.g3doc.tutorials.mnist import input_data
from tensorflow.g3doc.tutorials.mnist import mnist
```
...to:
```
import input_data
import mnist
``` |
Tensorflow causes logging messages to double | 33,662,648 | 4 | 2015-11-12T00:51:25Z | 33,664,610 | 8 | 2015-11-12T04:45:29Z | [
"python",
"logging",
"tensorflow"
] | So I was playing around with Google's [Tensorflow](http://www.tensorflow.org/) library they published yesterday and encountered an annoying bug that keeps biting me.
What I did was set up the python logging functions as I usually do, and the result was that, if I import the tensorflow library, all messages in the console start doubling. Interestingly, this does *not* happen if you just use the `logging.warn/info/..()` functions.
An example of a code that does *not* double the messages:
```
import tensorflow as tf
import logging
logging.warn('test')
```
An example of a code that *does* double all messages:
```
import tensorflow as tf
import logging
logger = logging.getLogger('TEST')
ch = logging.StreamHandler()
logger.addHandler(ch)
logger.warn('test')
```
Now, I'm a simple man. I like the functionality of `logging`, so I use it. The setup with the `logger` object and the adding of a `StreamHandler` is something I picked up looking at how other people did this, but it looks like it fits with how the thing was meant to be used. However, I do not have in-depth knowledge of the logging library, as it always just kind of worked.
So, any help explaining why the doubling of the messages occurs will be most helpful.
I am using Ubuntu 14.04.3 LTS with Python 2.7.6, but the error happens in all Python 2.7 versions I tried. | I get this output:
```
test
WARNING:TEST:test
```
Tensorflow is *also* using the logging framework and has set up its own handlers, so when you log, by default, it propagates up to the parent logging handlers inside tensorflow. You can change this behavior by setting:
```
logger.propagate = False
```
See also [duplicate output in simple python logging configuration](http://stackoverflow.com/questions/19561058/duplicate-output-in-simple-python-logging-configuration)
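To see the mechanism in isolation, here is a self-contained sketch that simulates a library-installed root handler using only the stdlib `logging` module (no tensorflow involved):

```python
import io
import logging

buf = io.StringIO()
root = logging.getLogger()                     # the library attached its handler here
root.addHandler(logging.StreamHandler(buf))

logger = logging.getLogger('TEST')
logger.addHandler(logging.StreamHandler(buf))

logger.warning('once')     # handled by 'TEST' and, via propagation, by root: 2 lines
logger.propagate = False
logger.warning('twice')    # now only the 'TEST' handler runs: 1 line

print(buf.getvalue().count('once'), buf.getvalue().count('twice'))  # 2 1
```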
Followup: This was an unintended side-effect of the way tensorflow was using the logging package. I've changed it at HEAD to scope its internal loggers under the name "tensorflow" to avoid this pollution. Should be in the github head within a day or so. In the meantime, the logger.propagate solution will work and won't break once that fix is in, so you should be safe to go. Thanks again for spotting this! |
How to create a Tensorflow Tensorboard Empty Graph | 33,663,081 | 7 | 2015-11-12T01:43:23Z | 33,685,349 | 12 | 2015-11-13T03:15:26Z | [
"python",
"tensorflow",
"tensorboard"
] | launch tensorboard with `tensorboard --logdir=/home/vagrant/notebook`
at tensorboard:6006 > graph, it says No graph definition files were found.
To store a graph, create a tf.python.training.summary\_io.SummaryWriter and pass the graph either via the constructor, or by calling its add\_graph() method.
```
import tensorflow as tf
sess = tf.Session()
writer = tf.python.training.summary_io.SummaryWriter("/home/vagrant/notebook", sess.graph_def)
```
However, the page is still empty. How can I start playing with tensorboard?
# current tensorboard
[](http://i.stack.imgur.com/UB8dX.png)
# result wanted
An empty graph that can add nodes, editable.
# update
It seems tensorboard is unable to create a graph where you can add nodes, drag and edit, etc. (I am confused by the official video).
Running <https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/tutorials/mnist/fully_connected_feed.py> and then `tensorboard --logdir=/home/vagrant/notebook/data` does let you view the graph.
However, it seems tensorflow only provides the ability to view summaries; nothing much beyond that to make it stand out. | TensorBoard is a tool for [visualizing the TensorFlow graph](http://tensorflow.org/how_tos/graph_viz/index.md) and analyzing recorded metrics during training and inference. The graph is created using the Python API, then written out using the [`tf.train.SummaryWriter.add_graph()`](http://tensorflow.org/api_docs/python/train.md#SummaryWriter.add_graph) method. When you load the file written by the SummaryWriter into TensorBoard, you can see the graph that was saved, and interactively explore it.
However, TensorBoard is not a tool for *building* the graph itself. It does not have any support for adding nodes to the graph. |
import input_data MNIST tensorflow not working | 33,664,651 | 8 | 2015-11-12T04:51:26Z | 33,664,749 | 12 | 2015-11-12T05:01:10Z | [
"python",
"import",
"machine-learning",
"tensorflow",
"mnist"
] | [TensorFlow MNIST example not running](http://stackoverflow.com/questions/33659424/tensorflow-mnist-example-not-running)
I checked this out and realized that `input_data` was not built-in. So I downloaded the whole folder from [here](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/mnist/input_data.py). How can I start the tutorial:
```
import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-6-a5af65173c89> in <module>()
----> 1 import input_data
2 mnist = tf.input_data.read_data_sets("MNIST_data/", one_hot=True)
ImportError: No module named input_data
```
I'm using iPython (Jupyter), so do I need to change my working directory to this folder I downloaded? Or can I add it to my `tensorflow` directory? If so, where do I add the files? I installed `tensorflow` with `pip` (on my OSX) and the current location is `~/anaconda/lib/python2.7/site-packages/tensorflow/__init__.py`
Are these files meant to be accessed directly through `tensorflow` like `sklearn` datasets? Or am I just supposed to cd into the directory and work from there? The example is not clear. | So let's assume that you are in the directory `/somePath/tensorflow/tutorial` (and this is your working directory).
All you need to do is download [input\_data.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/mnist/input_data.py) and put it in this directory. Let the file in which you invoke:
```
import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
...
```
be named `main.py`, and place it in this directory as well.
Once this is done, you can just run `main.py`, which will start downloading the files and put them in the MNIST\_data folder (once they are there, the script will not download them again). |
Randomly change the prompt in the Python interpreter | 33,668,998 | 35 | 2015-11-12T10:14:07Z | 33,669,175 | 8 | 2015-11-12T10:22:20Z | [
"python",
"prompt",
"python-interactive"
] | It's kind of boring to always see the `>>>` prompt in Python. What would be the best way to go about randomly changing the prompt prefix?
I imagine an interaction like:
```
This is a tobbaconist!>> import sys
Sorry?>> import math
Sorry?>> print sys.ps1
Sorry?
What?>>
``` | Nice question. The `>>>` prompt is in `sys.ps1`, the `...` in `sys.ps2`. The next question would be how to change this randomly. Just as a demonstration of changing it by hand:
```
>>> import sys
>>> sys.ps1 = '<<<'
<<<sys.ps1 = '<<< '
<<< sys.ps2 = '.?. '
<<< for i in line:
.?.
``` |
Randomly change the prompt in the Python interpreter | 33,668,998 | 35 | 2015-11-12T10:14:07Z | 33,669,188 | 19 | 2015-11-12T10:23:13Z | [
"python",
"prompt",
"python-interactive"
] | It's kind of boring to always see the `>>>` prompt in Python. What would be the best way to go about randomly changing the prompt prefix?
I imagine an interaction like:
```
This is a tobbaconist!>> import sys
Sorry?>> import math
Sorry?>> print sys.ps1
Sorry?
What?>>
``` | For changing the prompt, we use
```
>>>import sys
>>>sys.ps1 = '=>'
=>
```
Now the way to do it randomly would be something like this:
```
import random
import sys
random_prompts = ['->', '-->', '=>', 'Hello->']
sys.ps1 = random.choice(random_prompts)
```
To execute this when your python interpreter starts, you can follow this guide: <https://docs.python.org/2/tutorial/appendix.html#the-interactive-startup-file> |
Randomly change the prompt in the Python interpreter | 33,668,998 | 35 | 2015-11-12T10:14:07Z | 33,669,342 | 19 | 2015-11-12T10:31:01Z | [
"python",
"prompt",
"python-interactive"
] | It's kind of boring to always see the `>>>` prompt in Python. What would be the best way to go about randomly changing the prompt prefix?
I imagine an interaction like:
```
This is a tobbaconist!>> import sys
Sorry?>> import math
Sorry?>> print sys.ps1
Sorry?
What?>>
``` | Try this:
```
>>> import sys
>>> import random
>>> class RandomPrompt(object):
...     prompts = 'hello >', 'hi >', 'hey >'
...     def __repr__(self): return random.choice(self.prompts)
...
>>> sys.ps1 = RandomPrompt()
hello >1
1
hi >2
2
``` |
Randomly change the prompt in the Python interpreter | 33,668,998 | 35 | 2015-11-12T10:14:07Z | 33,669,360 | 65 | 2015-11-12T10:31:36Z | [
"python",
"prompt",
"python-interactive"
] | It's kind of boring to always see the `>>>` prompt in Python. What would be the best way to go about randomly changing the prompt prefix?
I imagine an interaction like:
```
This is a tobbaconist!>> import sys
Sorry?>> import math
Sorry?>> print sys.ps1
Sorry?
What?>>
``` | According to the [docs](https://docs.python.org/2/library/sys.html?highlight=sys#sys.ps1), if you assign a non-string object to `sys.ps1` then it will evaluate the `str` function of it each time:
> If a non-string object is assigned to either variable, its str() is
> re-evaluated each time the interpreter prepares to read a new
> interactive command; this can be used to implement a dynamic prompt.
Well now it's obvious, you should make it dynamic! Make an object with a `__str__` method where you can place any logic you want:
```
class Prompt:
    def __str__(self):
        # Logic to randomly determine string
        return string
```
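For instance, filling in that logic with the question's messages (a sketch):

```python
import random
import sys

class Prompt:
    messages = ['This is a tobbaconist!>> ', 'Sorry?>> ', 'What?>> ']

    def __str__(self):
        # Re-evaluated before every interactive prompt is printed
        return random.choice(self.messages)

sys.ps1 = Prompt()
```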
You can also make changes or insert things into this class as you go too. So for example, you could have a list of messages in `Prompt` that you append to, or change, and that will affect the console message. |
Python option parser overwrite '-h' | 33,669,665 | 4 | 2015-11-12T10:46:05Z | 33,669,714 | 7 | 2015-11-12T10:48:30Z | [
"python",
"elf"
] | I have the following options
```
parser = OptionParser()
parser.add_option('-a', '--all', action='store_true', dest='all', help='writes all header information')
parser.add_option('-h', '--file-header', action='store_true', dest='head', help='prints the elf file header information')
parser.add_option('-l', '--program-header', action='store_true', dest='prog', help='prints the program header')
parser.add_option('-S', '--section-header', action='store_true', dest='sec', help='prints the section header')
```
When running the script I get the error message:
```
optparse.OptionConflictError: option -h/--file-header: conflicting option string(s): -h
```
I know normally -h is reserved to display the help. But I'm trying to write an ELF file reader for some special elf files, and therefore I want to use the same commands like `readelf`. And readelf uses -h for printing the header information.
Is there any possibility to override the -h option in the option parser, or is that fixed? | When creating the parser, pass `add_help_option=False`; then you can define `-h` yourself:
```
parser = OptionParser(add_help_option=False)
``` |
Unexpected behaviour in numpy, when dividing arrays | 33,674,967 | 8 | 2015-11-12T15:22:54Z | 33,675,792 | 7 | 2015-11-12T15:59:32Z | [
"python",
"arrays",
"numpy",
"division"
] | So, in numpy 1.8.2 (with python 2.7.6) there seems to be an issue in array division. When performing in-place division of a sufficiently large array (at least 8192 elements, more than one dimension, data type is irrelevant) with a part of itself, behaviour is inconsistent for different notations.
```
import numpy as np
arr = np.random.rand(2, 5000)
arr_copy = arr.copy()
arr_copy = arr_copy / arr_copy[0]
arr /= arr[0]
print np.sum(arr != arr_copy), arr.size - np.sum(np.isclose(arr, arr_copy))
```
The output is expected to be 0, as the two divisions should be consistent, but it is 1808. Is this a bug? Is it also happening in other numpy versions? | It's not really a bug, as is to do with buffer size as you suggest in the question. Setting the buffer size larger gets rid of the problem (for now...):
```
>>> np.setbufsize(8192*4) # sets new buffer size, returns current size
8192
>>> # same set up as in the question
>>> np.sum(arr != arr_copy), arr.size - np.sum(np.isclose(arr, arr_copy))
(0, 0)
```
And as you state in the comment, the in-place division `arr /= arr[0]` is where this originally goes wrong: only the first 8192 elements of `arr` are buffered, with `arr[0]` simply being a view of the first row of `arr`.
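The interaction can be sketched in pure Python, with no numpy; the 2x6 array and chunk size of 8 below are toy stand-ins for the 2x5000 array and the 8192-element buffer:

```python
# Process elements in fixed-size "buffer" chunks while the divisor row is an
# alias of data being overwritten.
data = [[2.0] * 6, [4.0] * 6]
divisor = data[0]                              # alias of row 0, like arr[0]
flat = [(r, c) for r in range(2) for c in range(6)]

chunk = 8                                      # "buffer" covers row 0 plus part of row 1
for start in range(0, len(flat), chunk):
    batch = flat[start:start + chunk]
    snap = [divisor[c] for _, c in batch]      # buffered reads happen first...
    for (r, c), d in zip(batch, snap):
        data[r][c] /= d                        # ...then writes corrupt the divisor

print(data[1])  # [2.0, 2.0, 4.0, 4.0, 4.0, 4.0]
```

numpy's real buffering differs in detail, but the consequence is the same: elements processed after the divisor row has been overwritten are divided by ones.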
This means that all 5000 values in the first row will be correctly divided by themselves, and the second row will also be correct for its first 3192 elements. Next, the remaining 1808 values are put into the buffer for the in-place division, but the first row has already changed: `arr[0]` is now simply a view of a row of ones, so the values in the later columns will just be divided by one. |
python idiomatic python for loop if else statement | 33,678,809 | 7 | 2015-11-12T18:36:33Z | 33,678,825 | 11 | 2015-11-12T18:37:26Z | [
"python",
"if-statement",
"for-loop",
"idiomatic"
] | How can I use an `else` statement in an idiomatic Python `for` loop? Without `else` I can write e.g.:
```
res = [i for i in [1,2,3,4,5] if i < 4]
```
The result is: `[1, 2, 3]`
The normal form of the above code is:
```
res = []
for i in [1,2,3,4,5]:
    if i < 4:
        res.append(i)
```
The result is the same as in idiomatic form: `[1, 2, 3]`
And I want this:
```
res = [i for i in [1,2,3,4,5] if i < 4 else 0]
```
I get `SyntaxError: invalid syntax`. The result should be: `[1, 2, 3, 0, 0]`
The normal code of this is:
```
res = []
for i in [1,2,3,4,5]:
    if i < 4:
        res.append(i)
    else:
        res.append(0)
```
The result is: `[1, 2, 3, 0, 0]` | You were close, you just have to move the ternary to the part of the list comprehension where you're creating the value.
```
res = [i if i < 4 else 0 for i in range(1,6)]
``` |
How to extend Python Enum? | 33,679,930 | 8 | 2015-11-12T19:42:07Z | 33,680,021 | 10 | 2015-11-12T19:46:34Z | [
"python",
"python-3.x",
"enums"
] | What is best practice for extending `Enum` type in Python 3.4 and is there even a possibility for do this?
For example:
```
from enum import Enum
class EventStatus(Enum):
    success = 0
    failure = 1
class BookingStatus(EventStatus):
    duplicate = 2
    unknown = 3
Traceback (most recent call last):
...
TypeError: Cannot extend enumerations
```
Currently there is no possible way to create a base enum class with members and use it in other enum classes (like in the example above). Is there any other way to implement inheritance for Python enums? | > Subclassing an enumeration is allowed only if the enumeration does not define any members.
>
> Allowing subclassing of enums that define members would lead to a violation of some important invariants of types and instances.
<https://docs.python.org/3/library/enum.html#restricted-subclassing-of-enumerations>
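A common workaround is re-declaration rather than inheritance: build a brand-new `Enum` whose members combine the base members with the new ones. A sketch using the functional API, with the question's names:

```python
from enum import Enum

class EventStatus(Enum):
    success = 0
    failure = 1

# Not a subclass: a fresh Enum that re-declares EventStatus's members.
BookingStatus = Enum(
    'BookingStatus',
    [(m.name, m.value) for m in EventStatus] + [('duplicate', 2), ('unknown', 3)],
)

print([m.name for m in BookingStatus])  # ['success', 'failure', 'duplicate', 'unknown']
```

Because this is re-declaration, `isinstance(BookingStatus.success, EventStatus)` is still `False`; the two enums merely share names and values.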
So **no**, it's not directly possible. |
How do I check an input for an integer in Python? | 33,680,570 | 3 | 2015-11-12T20:19:46Z | 33,680,676 | 9 | 2015-11-12T20:26:42Z | [
"python",
"input"
] | I used:
```
day = int(input('Please input the day you were born: e.g 8th=8 21st = 21 : '))
month = int(input('Please input the month you were born: e.g may = 5 december = 12 : '))
year = int(input('Please input the year you were born: e.g 2001 / 1961 : '))
if day == int and month == int and year == int:
```
But it always says it's wrong, even when the input is an integer. | ```
def get_int(p,error_msg="Thats Not An Int!"):
    while True:
        try:
            return int(raw_input(p))
        except (ValueError, TypeError):
            print "ERROR: %s" % error_msg
day = get_int('Please input the day you were born: e.g 8th=8 21st = 21 : ')
#day is guaranteed to be an int
```
I like to take this and abstract it further
```
def force_type(type_class,prompt,error_msg):
    while True:
        try:
            # Convert the user's input, not the prompt string itself
            return type_class(raw_input(prompt))
        except (ValueError, TypeError):
            print error_msg
```
then it simply becomes
```
def get_int(p,err_msg):
    return force_type(int, p, err_msg)
def get_float(p, err_msg):
    return force_type(float, p, err_msg)
...
```
# All that said, if you want to type-check you should *not* use `type(var)`; use `isinstance(var, int)` instead. |
How do I add a new column to spark data frame (Pyspark)? | 33,681,487 | 9 | 2015-11-12T21:14:32Z | 33,683,462 | 28 | 2015-11-12T23:37:19Z | [
"python",
"apache-spark",
"apache-spark-sql",
"pyspark"
] | I have a Spark data frame (using Pyspark 1.5.1) and would like to add a new column.
Tried the following without any success:
```
type(randomed_hours) # => list
#Create in Python and transform to RDD
new_col = pd.DataFrame(randomed_hours,columns=['new_col'])
spark_new_col = sqlContext.createDataFrame(new_col)
my_df_spark.withColumn("hours", spark_new_col["new_col"])
```
Also got an error using this:
```
my_df_spark.withColumn("hours", sc.parallelize(randomed_hours))
```
So how do I add a new column (based on a Python vector) to an existing DataFrame with PySpark?
Thanks !
Boris | You cannot add an arbitrary column to a `DataFrame` in Spark. New columns can be created only by using literals:
```
from pyspark.sql.functions import lit
df = sqlContext.createDataFrame(
[(1, "a", 23.0), (3, "B", -23.0)], ("x1", "x2", "x3"))
df_with_x4 = df.withColumn("x4", lit(0))
df_with_x4.show()
## +---+---+-----+---+
## | x1| x2| x3| x4|
## +---+---+-----+---+
## | 1| a| 23.0| 0|
## | 3| B|-23.0| 0|
## +---+---+-----+---+
```
transforming an existing column:
```
from pyspark.sql.functions import exp
df_with_x5 = df_with_x4.withColumn("x5", exp("x3"))
df_with_x5.show()
## +---+---+-----+---+--------------------+
## | x1| x2| x3| x4| x5|
## +---+---+-----+---+--------------------+
## | 1| a| 23.0| 0| 9.744803446248903E9|
## | 3| B|-23.0| 0|1.026187963170189...|
## +---+---+-----+---+--------------------+
```
included using `join`:
```
from pyspark.sql.functions import exp
lookup = sqlContext.createDataFrame([(1, "foo"), (2, "bar")], ("k", "v"))
df_with_x6 = (df_with_x5
.join(lookup, col("x1") == col("k"), "leftouter")
.drop("k")
.withColumnRenamed("v", "x6"))
## +---+---+-----+---+--------------------+----+
## | x1| x2| x3| x4| x5| x6|
## +---+---+-----+---+--------------------+----+
## | 1| a| 23.0| 0| 9.744803446248903E9| foo|
## | 3| B|-23.0| 0|1.026187963170189...|null|
## +---+---+-----+---+--------------------+----+
```
or generated with function / udf:
```
from pyspark.sql.functions import rand
df_with_x7 = df_with_x6.withColumn("x7", rand())
df_with_x7.show()
## +---+---+-----+---+--------------------+----+-------------------+
## | x1| x2| x3| x4| x5| x6| x7|
## +---+---+-----+---+--------------------+----+-------------------+
## | 1| a| 23.0| 0| 9.744803446248903E9| foo|0.41930610446846617|
## | 3| B|-23.0| 0|1.026187963170189...|null|0.37801881545497873|
## +---+---+-----+---+--------------------+----+-------------------+
```
If you want to add content of an arbitrary RDD as a column you can
* add [row numbers to existing data frame](http://stackoverflow.com/a/32761138/1560062)
* call `zipWithIndex` on RDD and convert it to data frame
* join both using index as a join key |
How do I add a new column to spark data frame (Pyspark)? | 33,681,487 | 9 | 2015-11-12T21:14:32Z | 37,263,999 | 8 | 2016-05-16T22:04:51Z | [
"python",
"apache-spark",
"apache-spark-sql",
"pyspark"
] | I have a Spark data frame (using Pyspark 1.5.1) and would like to add a new column.
Tried the following without any success:
```
type(randomed_hours) # => list
#Create in Python and transform to RDD
new_col = pd.DataFrame(randomed_hours,columns=['new_col'])
spark_new_col = sqlContext.createDataFrame(new_col)
my_df_spark.withColumn("hours", spark_new_col["new_col"])
```
Also got an error using this:
```
my_df_spark.withColumn("hours", sc.parallelize(randomed_hours))
```
So how do I add a new column (based on a Python vector) to an existing DataFrame with PySpark?
Thanks !
Boris | To add a column using a UDF:
```
df = sqlContext.createDataFrame(
[(1, "a", 23.0), (3, "B", -23.0)], ("x1", "x2", "x3"))
from pyspark.sql.functions import udf
from pyspark.sql.types import *
def valueToCategory(value):
    if value == 1: return 'cat1'
    elif value == 2: return 'cat2'
    ...
    else: return 'n/a'
# NOTE: it seems that calls to udf() must be after SparkContext() is called
udfValueToCategory = udf(valueToCategory, StringType())
df_with_cat = df.withColumn("category", udfValueToCategory("x1"))
df_with_cat.show()
## +---+---+-----+---------+
## | x1| x2| x3| category|
## +---+---+-----+---------+
## | 1| a| 23.0| cat1|
## | 3| B|-23.0| n/a|
## +---+---+-----+---------+
``` |
Tensorflow One Hot Encoder? | 33,681,517 | 10 | 2015-11-12T21:16:01Z | 33,681,732 | 7 | 2015-11-12T21:28:56Z | [
"python",
"machine-learning",
"neural-network",
"tensorflow"
] | Does tensorflow have something similar to scikit learn's [one hot encoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) for processing categorical data? Would using a placeholder of tf.string behave as categorical data?
I realize I can manually pre-process the data before sending it to tensorflow, but having it built in is very convenient. | After looking though the [python documentation](http://tensorflow.org/api_docs/python/index.md), I have not found anything similar. One thing that strengthen my belief that it does not exist is that in [their own example](https://github.com/tensorflow/tensorflow/blob/1d76583411038767f673a0c96174c80eaf9ff42f/tensorflow/g3doc/tutorials/mnist/input_data.py) they write `one_hot` manually.
```
def dense_to_one_hot(labels_dense, num_classes=10):
"""Convert class labels from scalars to one-hot vectors."""
num_labels = labels_dense.shape[0]
index_offset = numpy.arange(num_labels) * num_classes
labels_one_hot = numpy.zeros((num_labels, num_classes))
labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
return labels_one_hot
```
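As a quick sanity check of the helper above (a self-contained sketch — the function is repeated here so the snippet runs on its own):

```python
import numpy

def dense_to_one_hot(labels_dense, num_classes=10):
    """Convert class labels from scalars to one-hot vectors."""
    num_labels = labels_dense.shape[0]
    index_offset = numpy.arange(num_labels) * num_classes
    labels_one_hot = numpy.zeros((num_labels, num_classes))
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    return labels_one_hot

one_hot = dense_to_one_hot(numpy.array([0, 2, 1]), num_classes=3)
print(one_hot)  # row i has a single 1, in the column given by labels[i]
```

Each row contains exactly one `1`, at the index of the original label.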
You can also do this in [scikit-learn](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html). |
Tensorflow One Hot Encoder? | 33,681,517 | 10 | 2015-11-12T21:16:01Z | 33,682,213 | 24 | 2015-11-12T21:57:38Z | [
"python",
"machine-learning",
"neural-network",
"tensorflow"
] | Does tensorflow have something similar to scikit learn's [one hot encoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) for processing categorical data? Would using a placeholder of tf.string behave as categorical data?
I realize I can manually pre-process the data before sending it to tensorflow, but having it built in is very convenient. | As of TensorFlow 0.8, there is now a [native one-hot op, `tf.one_hot`](https://www.tensorflow.org/versions/master/api_docs/python/array_ops.html#one_hot) that can convert a set of sparse labels to a dense one-hot representation. This is in addition to [`tf.nn.sparse_softmax_cross_entropy_with_logits`](https://www.tensorflow.org/versions/master/api_docs/python/nn.html#sparse_softmax_cross_entropy_with_logits), which can in some cases let you compute the cross entropy directly on the sparse labels instead of converting them to one-hot.
**Previous answer, in case you want to do it the old way:**
@Salvador's answer is correct - there (used to be) no native op to do it. Instead of doing it in numpy, though, you can do it natively in tensorflow using the sparse-to-dense operators:
```
num_labels = 10
# label_batch is a tensor of numeric labels to process
# 0 <= label < num_labels
sparse_labels = tf.reshape(label_batch, [-1, 1])
derived_size = tf.shape(label_batch)[0]
indices = tf.reshape(tf.range(0, derived_size, 1), [-1, 1])
concated = tf.concat(1, [indices, sparse_labels])
outshape = tf.pack([derived_size, num_labels])
labels = tf.sparse_to_dense(concated, outshape, 1.0, 0.0)
```
The output, labels, is a one-hot matrix of batch\_size x num\_labels.
Note also that as of 2016-02-12 (which I assume will eventually be part of a 0.7 release), TensorFlow also has the `tf.nn.sparse_softmax_cross_entropy_with_logits` op, which in some cases can let you do training without needing to convert to a one-hot encoding.
Edited to add: At the end, you may need to explicitly set the shape of labels. The shape inference doesn't recognize the size of the num\_labels component. If you don't need a dynamic batch size with derived\_size, this can be simplified.
Edited 2016-02-12 to change the assignment of outshape per comment below. |
Issue feeding a list into feed_dict in TensorFlow | 33,684,657 | 7 | 2015-11-13T01:47:05Z | 33,685,256 | 12 | 2015-11-13T03:04:05Z | [
"python",
"tensorflow"
] | I'm trying to pass a list into feed\_dict, however I'm having trouble doing so. Say I have:
```
inputs = 10 * [tf.placeholder(tf.float32, shape=(batch_size, input_size))]
```
where inputs is fed into some function "outputs" that I want to compute. So to run this in tensorflow, I created a session and ran the following:
```
sess.run(outputs, feed_dict = {inputs: data})
#data is my list of inputs, which is also of length 10
```
but I get an error, TypeError: unhashable type: 'list'.
However, I'm able to pass the data element-wise like so:
```
sess.run(outputs, feed_dict = {inputs[0]: data[0], ..., inputs[9]: data[9]})
```
So I'm wondering if there's a way I can solve this issue. I've also tried to construct a dictionary(using a for loop), however this results in a dictionary with a single element, where they key is:
"tensorflow.python.framework.ops.Tensor at 0x107594a10" | There are two issues that are causing problems here:
The first issue is that the [`Session.run()`](http://tensorflow.org/api_docs/python/client.md#Session.run) call only accepts a small number of types as the keys of the `feed_dict`. In particular, lists of tensors are ***not*** supported as keys, so you have to put each tensor as a separate key. One convenient way to do this is using a dictionary comprehension:
```
inputs = [tf.placeholder(...), ...]
data = [np.array(...), ...]
sess.run(y, feed_dict={i: d for i, d in zip(inputs, data)})
```
The second issue is that the `10 * [tf.placeholder(...)]` syntax in Python creates a list with ten elements, where each element is ***the same tensor object*** (i.e. has the same `name` property, the same `id` property, and is reference-identical if you compare two elements from the list using `inputs[i] is inputs[j]`). This explains why, when you tried to create a dictionary using the list elements as keys, you ended up with a dictionary with a single element - because all of the list elements were identical.
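This aliasing is plain Python behavior, independent of TensorFlow: multiplying a list repeats references to a single object rather than creating new objects. The `Stub` class below is just a hypothetical stand-in for a placeholder:

```python
class Stub(object):
    """Hypothetical stand-in for any object, e.g. a placeholder tensor."""
    pass

aliased = 10 * [Stub()]                  # one object, referenced ten times
distinct = [Stub() for _ in range(10)]   # ten separate objects

print(aliased[0] is aliased[9])          # True
print(distinct[0] is distinct[9])        # False
print(len({id(s) for s in aliased}))     # 1
print(len({id(s) for s in distinct}))    # 10
```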
To create 10 different placeholder tensors, as you intended, you should instead do the following:
```
inputs = [tf.placeholder(tf.float32, shape=(batch_size, input_size))
for _ in xrange(10)]
```
If you print the elements of this list, you'll see that each element is a tensor with a different name. |
Why is behavior different with respect to global variables in "import module" vs "from module import * "? | 33,687,904 | 6 | 2015-11-13T07:45:24Z | 33,688,115 | 9 | 2015-11-13T08:02:26Z | [
"python",
"python-3.x"
] | Let's have a.py be:
```
def foo():
global spam
spam = 42
return 'this'
```
At a console, if I simply import a, things make sense to me:
```
>>> import a
>>> a.foo()
'this'
>>> a.spam
42
```
However, if I do the less popular thing and...
```
>>> from a import *
>>> foo()
'this'
>>> spam
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'spam' is not defined
>>> a.spam
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'a' is not defined
```
I've read opinions about why people don't like "from module import \* " from a namespace perspective, but I can't find anything on this behavior, and frankly I figured out that this was the issue I was having by accident. | When you ask for `a.spam`, Python searches module `a`'s namespace and finds `spam` there. But when you ask for just `spam`:
```
>>> from a import * # imported foo, spam doesn't exist yet
>>> foo()
```
`spam` is created in the *namespace* of module `a` (though with this style of import you have no name `a` through which to reach it), but not in the current module. And nothing promises to propagate globals added to `a` after the import into every namespace that `a` has been star-imported into. That would require the interpreter to keep track of import links, and would probably degrade performance if one heavily imported module played such tricks all the time.
And imagine you had defined `spam` in your main module prior to calling `foo()`. That would be a downright name collision.
Just as illustration, you can do `from a import *` to get fresh updates for the module `a`:
```
from a import *
print(foo())
from a import *
print(spam)
``` |
Calculating Precision, Recall and F-score in one pass - python | 33,689,721 | 9 | 2015-11-13T09:40:54Z | 33,689,848 | 8 | 2015-11-13T09:49:09Z | [
"python",
"list",
"machine-learning",
"try-except",
"precision-recall"
] | [Accuracy, precision, recall and f-score](https://en.wikipedia.org/wiki/Precision_and_recall) are measures of a system quality in machine-learning systems. It depends on a confusion matrix of True/False Positives/Negatives.
Given a binary classification task, I have tried the following to get a function that returns accuracy, precision, recall and f-score:
```
gold = [1] + [0] * 9
predicted = [1] * 10
def evaluation(gold, predicted):
true_pos = sum(1 for p,g in zip(predicted, gold) if p==1 and g==1)
true_neg = sum(1 for p,g in zip(predicted, gold) if p==0 and g==0)
false_pos = sum(1 for p,g in zip(predicted, gold) if p==1 and g==0)
false_neg = sum(1 for p,g in zip(predicted, gold) if p==0 and g==1)
try:
recall = true_pos / float(true_pos + false_neg)
except:
recall = 0
try:
precision = true_pos / float(true_pos + false_pos)
except:
precision = 0
try:
fscore = 2*precision*recall / (precision + recall)
except:
fscore = 0
try:
accuracy = (true_pos + true_neg) / float(len(gold))
except:
accuracy = 0
return accuracy, precision, recall, fscore
```
But it seems like I have redundantly looped through the dataset 4 times to get the True/False Positives/Negatives.
Also the multiple `try-excepts` to catch the `ZeroDivisionError` is a little redundant.
**So what is the pythonic way to get the counts of the True/False Positives/Negatives without multiple loops through the dataset?**
**How do I pythonically catch the `ZeroDivisionError` without the multiple try-excepts?**
---
I could also do the following to count the True/False Positives/Negatives in one loop but **is there an alternative way without the multiple `if`?**:
```
for p,g in zip(predicted, gold):
if p==1 and g==1:
true_pos+=1
if p==0 and g==0:
true_neg+=1
if p==1 and g==0:
false_pos+=1
if p==0 and g==1:
false_neg+=1
``` | > what is the pythonic way to get the counts of the True/False
> Positives/Negatives without multiple loops through the dataset?
I would use a [`collections.Counter`](https://docs.python.org/2/library/collections.html#collections.Counter), roughly what you're doing with all of the `if`s (you should be using `elif`s, as your conditions are mutually exclusive) at the end:
```
from collections import Counter

counts = Counter(zip(predicted, gold))
```
Then e.g. `true_pos = counts[1, 1]`.
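Putting this together, a single-pass version of the whole function might look like the following sketch (the zero-denominator handling uses the look-before-you-leap style discussed below):

```python
from collections import Counter

def evaluation(gold, predicted):
    # one pass over the data: count (predicted, gold) pairs
    counts = Counter(zip(predicted, gold))
    true_pos, true_neg = counts[1, 1], counts[0, 0]
    false_pos, false_neg = counts[1, 0], counts[0, 1]

    # check each denominator instead of catching ZeroDivisionError
    recall = true_pos / float(true_pos + false_neg) if true_pos + false_neg else 0
    precision = true_pos / float(true_pos + false_pos) if true_pos + false_pos else 0
    fscore = 2 * precision * recall / (precision + recall) if precision + recall else 0
    accuracy = (true_pos + true_neg) / float(len(gold)) if gold else 0
    return accuracy, precision, recall, fscore

gold = [1] + [0] * 9
predicted = [1] * 10
print(evaluation(gold, predicted))  # (0.1, 0.1, 1.0, 0.1818...)
```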
> How do I pythonically catch the ZeroDivisionError without the multiple
> try-excepts?
For a start, you should (almost) never use a bare `except:`. If you're catching `ZeroDivisionError`s, then write `except ZeroDivisionError`. You could also consider a [*"look before you leap"*](https://docs.python.org/2/glossary.html#term-lbyl) approach, checking whether the denominator is `0` before trying the division, e.g.
```
accuracy = (true_pos + true_neg) / float(len(gold)) if gold else 0
``` |
List comprehension with else pass | 33,691,552 | 4 | 2015-11-13T11:18:49Z | 33,691,579 | 8 | 2015-11-13T11:20:53Z | [
"python",
"python-2.7",
"list-comprehension"
] | How do I do the following in a list comprehension?
```
test = [["abc", 1],["bca",2]]
result = []
for x in test:
if x[0] =='abc':
result.append(x)
else:
pass
result
Out[125]: [['abc', 1]]
```
Try 1:
```
[x if (x[0] == 'abc') else pass for x in test]
File "<ipython-input-127-d0bbe1907880>", line 1
[x if (x[0] == 'abc') else pass for x in test]
^
SyntaxError: invalid syntax
```
Try 2:
```
[x if (x[0] == 'abc') else None for x in test]
Out[126]: [['abc', 1], None]
```
Try 3:
```
[x if (x[0] == 'abc') for x in test]
File "<ipython-input-122-a114a293661f>", line 1
[x if (x[0] == 'abc') for x in test]
^
SyntaxError: invalid syntax
``` | The `if` needs to be at the end, and you don't need the `pass` in the list comprehension. An item is only added when the `if` condition is met; otherwise it is skipped, so the `pass` is implicit in the list-comprehension syntax.
```
[x for x in test if x[0] == 'abc']
```
For completeness, the output of this statement is :
```
[['abc', 1]]
``` |
Why do we need endianness here? | 33,692,321 | 10 | 2015-11-13T12:01:04Z | 33,692,657 | 7 | 2015-11-13T12:21:46Z | [
"python",
"numpy",
"endianness"
] | I am reading a [source-code](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/mnist/input_data.py) which downloads the zip-file and reads the data into numpy array. The code suppose to work on macos and linux and here is the snippet that I see:
```
def _read32(bytestream):
dt = numpy.dtype(numpy.uint32).newbyteorder('>')
return numpy.frombuffer(bytestream.read(4), dtype=dt)
```
This function is used in the following context:
```
with gzip.open(filename) as bytestream:
magic = _read32(bytestream)
```
It is not hard to see what happens here, but I am puzzled by the purpose of `newbyteorder('>')`. I read the [documentation](http://docs.scipy.org/doc/numpy/reference/generated/numpy.dtype.newbyteorder.html), and know what endianness means, but cannot understand why exactly the developer added `newbyteorder` (in my opinion it is not really needed). | That's because the downloaded data is in big-endian format, as described on the source page: <http://yann.lecun.com/exdb/mnist/>
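You can see the effect with numpy alone. The images file starts with the magic number 2051, stored most-significant byte first, so reading it with the wrong byte order produces garbage (a small sketch using those four literal bytes):

```python
import numpy

magic_bytes = b'\x00\x00\x08\x03'  # first four bytes of the MNIST images file

big_endian = numpy.frombuffer(magic_bytes,
                              dtype=numpy.dtype(numpy.uint32).newbyteorder('>'))
little_endian = numpy.frombuffer(magic_bytes, dtype='<u4')

print(big_endian[0])     # 2051 -- the documented magic number
print(little_endian[0])  # 50855936 -- meaningless
```

As the dataset page itself explains: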
> All the integers in the files are stored in the MSB first (high
> endian) format used by most non-Intel processors. Users of Intel
> processors and other low-endian machines must flip the bytes of the
> header. |
Use attribute and target matrices for TensorFlow Linear Regression Python | 33,698,510 | 15 | 2015-11-13T17:27:34Z | 33,712,950 | 14 | 2015-11-14T20:24:28Z | [
"python",
"matrix",
"machine-learning",
"scikit-learn",
"tensorflow"
] | I'm trying to follow [this tutorial](http://www.tensorflow.org/tutorials/mnist/beginners/index.md).
TensorFlow just came out and I'm really trying to understand it. I'm familiar with *penalized linear regression* like Lasso, Ridge, and ElasticNet and its usage in `scikit-learn`.
For `scikit-learn` Lasso regression, all I need to input into the regression algorithm is `DF_X` [an M x N dimensional attribute matrix (pd.DataFrame)] and `SR_y` [an M dimensional target vector (pd.Series)]. The `Variable` structure in TensorFlow is a bit new to me and I'm not sure how to structure my input data into what it wants.
It seems as if softmax regression is for classification. **How can I restructure my `DF_X` (M x N attribute matrix) and `SR_y` (M dimensional target vector) to input into `tensorflow` for linear regression?**
My current method for doing a Linear Regression uses pandas, numpy, and sklearn and it's shown below. I think this question will be really helpful for people getting familiar with TensorFlow:
```
#!/usr/bin/python
import pandas as pd
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LassoCV
#Create DataFrames for attribute and target matrices
DF_X = pd.DataFrame(np.array([[0,0,1],[2,3,1],[4,5,1],[3,4,1]]),columns=["att1","att2","att3"],index=["s1","s2","s3","s4"])
SR_y = pd.Series(np.array([3,2,5,8]),index=["s1","s2","s3","s4"],name="target")
print DF_X
#att1 att2 att3
#s1 0 0 1
#s2 2 3 1
#s3 4 5 1
#s4 3 4 1
print SR_y
#s1 3
#s2 2
#s3 5
#s4 8
#Name: target, dtype: int64
#Create Linear Model (Lasso Regression)
model = LassoCV()
model.fit(DF_X,SR_y)
print model
#LassoCV(alphas=None, copy_X=True, cv=None, eps=0.001, fit_intercept=True,
#max_iter=1000, n_alphas=100, n_jobs=1, normalize=False, positive=False,
#precompute='auto', random_state=None, selection='cyclic', tol=0.0001,
#verbose=False)
print model.coef_
#[ 0. 0.3833346 0. ]
``` | Softmax is only an activation function (used, for example, in logistic regression); it is not a model like
```
model = LassoCV()
model.fit(DF_X,SR_y)
```
Therefore you can't simply give it data through a `fit` method. However, you can build the model yourself with TensorFlow functions.
First of all, you have to create a computational graph; for linear regression, for example, you create placeholder tensors shaped like your data. They are only tensors, and you will feed them your arrays in another part of the program.
```
import tensorflow as tf
x = tf.placeholder("float", [4, 3])
y_ = tf.placeholder("float",[4])
```
Then you create two variables that will contain the initial weights of the model:
```
W = tf.Variable(tf.zeros([3,1]))
b = tf.Variable(tf.zeros([1]))
```
And now you can create the model (you want regression, not classification, therefore you don't need `tf.nn.softmax`):
```
y=tf.matmul(x,W) + b
```
Since you have a linear regression model, you will use a squared-error loss:
```
loss=tf.reduce_sum(tf.square(y_ - y))
```
Then we train the model with the same step as in the tutorial:
```
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
```
Now that you have created the computational graph, you have to write one more part of the program, where you use this graph to work with your data.
```
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
sess.run(train_step, feed_dict={x:np.asarray(DF_X),y_:np.asarray(SR_y)})
```
Here you give your data to the computational graph through `feed_dict`. In TensorFlow you provide the actual values as numpy arrays.
If you want to see the current loss, you can run:
```
sess.run(loss,feed_dict={x:np.asarray(DF_X),y_:np.asarray(SR_y)})
``` |
Flip non-zero values along each row of a lower triangular numpy array | 33,700,380 | 8 | 2015-11-13T19:27:37Z | 33,700,533 | 8 | 2015-11-13T19:37:06Z | [
"python",
"arrays",
"numpy",
"reverse"
] | I have a lower triangular array, like B:
```
B = np.array([[1,0,0,0],[.25,.75,0,0], [.1,.2,.7,0],[.2,.3,.4,.1]])
>>> B
array([[ 1. , 0. , 0. , 0. ],
[ 0.25, 0.75, 0. , 0. ],
[ 0.1 , 0.2 , 0.7 , 0. ],
[ 0.2 , 0.3 , 0.4 , 0.1 ]])
```
I want to flip it to look like:
```
array([[ 1. , 0. , 0. , 0. ],
[ 0.75, 0.25, 0. , 0. ],
[ 0.7 , 0.2 , 0.1 , 0. ],
[ 0.1 , 0.4 , 0.3 , 0.2 ]])
```
That is, I want to take all the positive values, and reverse within the positive values, leaving the trailing zeros in place. This is not what `fliplr` does:
```
>>> np.fliplr(B)
array([[ 0. , 0. , 0. , 1. ],
[ 0. , 0. , 0.75, 0.25],
[ 0. , 0.7 , 0.2 , 0.1 ],
[ 0.1 , 0.4 , 0.3 , 0.2 ]])
```
Any tips? Also, the actual array I am working with would be something like `B.shape = (200,20,4,4)` instead of `(4,4)`. Each `(4,4)` block looks like the above example (with different numbers across the 200, 20 different entries). | How about this:
```
# row, column indices of the lower triangle of B
r, c = np.tril_indices_from(B)
# flip the column indices by subtracting them from r, which is equal to the number
# of nonzero elements in each row minus one
B[r, c] = B[r, r - c]
print(repr(B))
# array([[ 1. , 0. , 0. , 0. ],
# [ 0.75, 0.25, 0. , 0. ],
# [ 0.7 , 0.2 , 0.1 , 0. ],
# [ 0.1 , 0.4 , 0.3 , 0.2 ]])
```
The same approach will generalize to any arbitrary *N*-dimensional array that consists of multiple lower triangular submatrices:
```
# creates a (200, 20, 4, 4) array consisting of tiled copies of B
B2 = np.tile(B[None, None, ...], (200, 20, 1, 1))
print(repr(B2[100, 10]))
# array([[ 1. , 0. , 0. , 0. ],
# [ 0.25, 0.75, 0. , 0. ],
# [ 0.1 , 0.2 , 0.7 , 0. ],
# [ 0.2 , 0.3 , 0.4 , 0.1 ]])
r, c = np.tril_indices_from(B2[0, 0])
B2[:, :, r, c] = B2[:, :, r, r - c]
print(repr(B2[100, 10]))
# array([[ 1. , 0. , 0. , 0. ],
# [ 0.75, 0.25, 0. , 0. ],
# [ 0.7 , 0.2 , 0.1 , 0. ],
# [ 0.1 , 0.4 , 0.3 , 0.2 ]])
```
For an upper triangular matrix you could simply subtract `r` from `c` instead, e.g.:
```
r, c = np.triu_indices_from(B.T)
B.T[r, c] = B.T[c - r, c]
``` |
Converting large XML file to relational database | 33,703,114 | 11 | 2015-11-13T23:00:37Z | 34,303,681 | 7 | 2015-12-16T03:58:30Z | [
"javascript",
"python",
"xml",
"node.js",
"relational-database"
] | I'm trying to figure out the best way to accomplish the following:
1. Download a large XML (1GB) file on daily basis from a third-party website
2. Convert that XML file to relational database on my server
3. Add functionality to search the database
For the first part, is this something that would need to be done manually, or could it be accomplished with a cron?
Most of the questions and answers related to XML and relational databases refer to Python or PHP. Could this be done with javascript/nodejs as well?
If this question is better suited for a different StackExchange forum, please let me know and I will move it there instead.
Below is a sample of the xml code:
```
<case-file>
<serial-number>123456789</serial-number>
<transaction-date>20150101</transaction-date>
<case-file-header>
<filing-date>20140101</filing-date>
</case-file-header>
<case-file-statements>
<case-file-statement>
<code>AQ123</code>
<text>Case file statement text</text>
</case-file-statement>
<case-file-statement>
<code>BC345</code>
<text>Case file statement text</text>
</case-file-statement>
</case-file-statements>
<classifications>
<classification>
<international-code-total-no>1</international-code-total-no>
<primary-code>025</primary-code>
</classification>
</classifications>
</case-file>
```
**Here's some more information about how these files will be used:**
All XML files will be in the same format. There are probably a few dozen elements within each record. The files are updated by a third party on a daily basis (and are available as zipped files on the third-party website). Each day's file represents new case files as well as updated case files.
The goal is to allow a user to search for information and organize those search results on the page (or in a generated pdf/excel file). For example, a user might want to see all case files that include a particular word within the `<text>` element. Or a user might want to see all case files that include primary code 025 (`<primary-code>` element) and that were filed after a particular date (`<filing-date>` element).
The only data entered into the database will be from the XML files--users won't be adding any of their own information to the database. | All steps could certainly be accomplished using `node.js`. There are modules available that will help you with each of these tasks:
1. * [node-cron](https://github.com/ncb000gt/node-cron): lets you easily set up cron tasks in your node program. Another option would be to set up a cron task on your operating system (lots of resources available for your favourite OS).
* [download](https://github.com/kevva/download): module to easily download files from a URL.
2. [xml-stream](https://github.com/assistunion/xml-stream): allows you to stream a file and register events that fire when the parser encounters certain XML elements. I have successfully used this module to parse KML files (granted they were significantly smaller than your files).
3. [node-postgres](https://github.com/brianc/node-postgres): node client for PostgreSQL (I am sure there are clients for many other common RDBMS, PG is the only one I have used so far).
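(Since the question also mentions Python: the same streaming idea is available in the standard library via `xml.etree.ElementTree.iterparse`. The sketch below runs it against an in-memory copy of the sample XML and an SQLite table; the table and column names are hypothetical:)

```python
import io
import sqlite3
import xml.etree.ElementTree as ET

sample = b"""<case-files>
  <case-file>
    <serial-number>123456789</serial-number>
    <case-file-header><filing-date>20140101</filing-date></case-file-header>
  </case-file>
</case-files>"""

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE case_file (serial_number TEXT, filing_date TEXT)")

# iterparse streams the input, so a 1 GB document is never held in memory at once
for event, elem in ET.iterparse(io.BytesIO(sample), events=('end',)):
    if elem.tag == 'case-file':
        conn.execute("INSERT INTO case_file VALUES (?, ?)",
                     (elem.findtext('serial-number'),
                      elem.findtext('case-file-header/filing-date')))
        elem.clear()  # free the element's memory once it has been stored

print(conn.execute("SELECT * FROM case_file").fetchall())
# [('123456789', '20140101')]
```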
Most of these modules have pretty great examples that will get you started. Here's how you would probably set up the XML streaming part:
```
var XmlStream = require('xml-stream');
var xml = fs.createReadStream('path/to/file/on/disk'); // or stream directly from your online source
var xmlStream = new XmlStream(xml);
xmlStream.on('endElement case-file', function(element) {
// create and execute SQL query/queries here for this element
});
xmlStream.on('end', function() {
// done reading elements
// do further processing / query database, etc.
});
``` |
Call different function for each list item | 33,705,296 | 3 | 2015-11-14T05:04:25Z | 33,705,301 | 10 | 2015-11-14T05:05:08Z | [
"python"
] | Let's say I have a list like this:
```
[1, 2, 3, 4]
```
And a list of functions like this:
```
[a, b, c, d]
```
Is there an easy way to get this output? Something like `zip`, but with functions and arguments?
```
[a(1), b(2), c(3), d(4)]
``` | Use `zip()` and a list comprehension to apply each function to their paired argument:
```
arguments = [1, 2, 3, 4]
functions = [a, b, c, d]
results = [func(arg) for func, arg in zip(functions, arguments)]
```
Demo:
```
>>> def a(i): return 'function a: {}'.format(i)
...
>>> def b(i): return 'function b: {}'.format(i)
...
>>> def c(i): return 'function c: {}'.format(i)
...
>>> def d(i): return 'function d: {}'.format(i)
...
>>> arguments = [1, 2, 3, 4]
>>> functions = [a, b, c, d]
>>> [func(arg) for func, arg in zip(functions, arguments)]
['function a: 1', 'function b: 2', 'function c: 3', 'function d: 4']
``` |
Alembic: IntegrityError: "column contains null values" when adding non-nullable column | 33,705,697 | 4 | 2015-11-14T06:13:29Z | 33,705,698 | 7 | 2015-11-14T06:13:29Z | [
"python",
"sqlalchemy",
"alembic"
] | I'm adding a column to an existing table. This new column is `nullable=False`.
```
op.add_column('mytable', sa.Column('mycolumn', sa.String(), nullable=False))
```
When I run the migration, it complains:
```
sqlalchemy.exc.IntegrityError: column "mycolumn" contains null values
``` | It is because your existing rows have no value for that new column, i.e. `null`, which causes the error. When adding a non-nullable column, you must decide what value to give to the already-existing data.
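You can reproduce the constraint with plain `sqlite3` from the standard library (a sketch of the underlying SQL mechanics, not of Alembic itself):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO mytable DEFAULT VALUES")

# Without a default, the existing row would hold NULL, so the DDL is rejected:
try:
    conn.execute("ALTER TABLE mytable ADD COLUMN mycolumn TEXT NOT NULL")
except sqlite3.OperationalError as exc:
    print(exc)  # e.g. "Cannot add a NOT NULL column with default value NULL"

# With a server-side default, existing rows are backfilled and the DDL succeeds:
conn.execute(
    "ALTER TABLE mytable ADD COLUMN mycolumn TEXT NOT NULL DEFAULT 'lorem ipsum'")
print(conn.execute("SELECT mycolumn FROM mytable").fetchone())  # ('lorem ipsum',)
```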
---
**Alright, existing data should just have "lorem ipsum" for this new column then. But how do I do it? I can't UPDATE because the column is not there yet.**
Use the [`server_default`](https://alembic.readthedocs.org/en/latest/ops.html#alembic.operations.Operations.alter_column.params.server_default) arg:
```
op.add_column('mytable', sa.Column(
'mycolumn',
sa.String(),
nullable=False,
server_default='lorem ipsum', # <--- add this
))
```
---
**But, but, I don't want it to have default value**
Drop it afterwards using `op.alter_column('mytable', 'mycolumn', server_default=None)`
E.g. your `upgrade()` function would be:
```
def upgrade():
op.add_column('mytable', sa.Column('mycolumn', sa.String(), nullable=False, server_default='lorem ipsum'))
op.alter_column('mytable', 'mycolumn', server_default=None)
``` |
Using multiple Python engines (32Bit/64bit and 2.7/3.5) | 33,709,391 | 6 | 2015-11-14T14:22:28Z | 33,711,433 | 13 | 2015-11-14T17:49:00Z | [
"python",
"python-2.7",
"python-3.x",
"anaconda",
"spyder"
] | I would like to use Python for scientific applications, and after some research decided that I will use Anaconda, as it comes bundled with loads of packages and adding new modules with `conda install` through the cmd is easy.
I prefer to use the 64-bit version for better RAM use and efficiency, but the 32-bit version is needed as well because some libraries are 32-bit. Similarly, I prefer to use Python 3.5 as that is the future and the way things go. But loads of libraries are still 2.7, which means I need both.
I have to install 4 versions of Anaconda (64bit 2.7, 64bit 3.5, 32bit 2.7, 32bit 3.5). Each version is about 380MB. I am aiming to use Jupyter notebook and Spyder as the IDE. I had to switch between versions when required. I had conflicting libraries, path issues and all sorts of weird problems.
So, I am planning to do a clean install from scratch. I would like to know if there is a more sensible way to handle this. I use Windows 7 64 bit for now if that matters. | Make sure to set the right environment variables (<https://github.com/conda/conda/issues/1744>)
Create a new environment for 32bit Python 2.7:
```
set CONDA_FORCE_32BIT=1
conda create -n py27_32 python=2.7
```
Activate it:
```
set CONDA_FORCE_32BIT=1
activate py27_32
```
Deactivate it:
```
deactivate py27_32
```
Create one for 64 bit Python 3.5:
```
set CONDA_FORCE_32BIT=
conda create -n py35_64 python=3.5
```
Activate it:
```
set CONDA_FORCE_32BIT=
activate py35_64
```
The best would be to write the activation commands in a batch file so that you have to type only one command and cannot forget to set the right 32/64 bit flag.
**UPDATE**
You don't need to install a full Anaconda distribution for this. [Miniconda](http://conda.pydata.org/miniconda.html) is enough:
> These Miniconda installers contain the conda package manager and Python. Once Miniconda is installed, you can use the conda command to install any other packages and create environments, etc. ...
>
> There are two variants of the installer: Miniconda is Python 2 based and Miniconda3 is Python 3 based. Note that the choice of which Miniconda is installed only affects the root environment. Regardless of which version of Miniconda you install, you can still install both Python 2.x and Python 3.x environments.
I would recommend using Miniconda3 64-bit as your root environment.
You can always install a full Anaconda later with:
```
conda install anaconda
```
**Note** that it might downgrade some of your previously installed packages in your active environment. |
Are there rules for naming single-module Python packages? | 33,712,857 | 10 | 2015-11-14T20:14:31Z | 33,810,175 | 7 | 2015-11-19T17:08:51Z | [
"python",
"package",
"python-module",
"pypi"
] | Should the name I give to the lone module in a Python package match the name of the package?
For example if I have a package with a single module with the structure
```
super-duper/
super/
__init.py___
mycode.py
...
```
I can create a package `super-duper` on PyPi which, when installed, will have two folders in `site-packages` with names that don't match:
```
super/
super_duper-1.2.3.dist-info/
```
which means that to import my project I use
```
import super
```
rather than the actual package name (`super_duper`)
This seems to be against common practice (judging from the folders for early every other package I see in `site-packages`) which follow the pattern
```
same_name/
same_name-1.2.3.dist-info/
```
for the PyPi package `same-name`.
Should I instead (always) structure my projects so as
```
super-duper/
super_duper/
__init.py___
mycode.py
...
```
to ensure that the package name and module import name "match":
```
import super_duper
```
Is there a relevant best practice or rule I should be following? | The short answer to your question is: yes, it's generally a good practice to have the name of your module match the name of the package for single module packages (which should be most published packages.)
The slightly longer answer is that naming conventions are always political. The generally accepted method for defining language standards in Python is a process called "Python Enhancement Proposals" (PEPs). PEPs are governed by a body of PEP editors and are [publicly indexed](https://www.python.org/dev/peps/) for review and commenting.
At present, there is only one "Active" (accepted and implemented) PEP I am aware of that covers module naming conventions, which is PEP 8:
> Modules should have short, all-lowercase names. Underscores can be
> used in the module name if it improves readability. Python packages
> should also have short, all-lowercase names, although the use of
> underscores is discouraged.
However, there is another proposal still in the drafting process called [PEP 423](https://www.python.org/dev/peps/pep-0423/#use-a-single-name) that recommends exactly what you state in your post:
> Distribute only one package (or only one module) per project, and use
> package (or module) name as project name.
>
> * It avoids possible confusion between project name and distributed package or module name.
> * It makes the name consistent.
> * It is explicit: when one sees project name, he guesses package/module name, and vice versa.
> * It also limits implicit clashes between package/module names. By using a single name, when you register a project name to PyPI, you
> also perform a basic package/module name availability verification.
It's important to note that this PEP is still in a "Deferred" state, which means it has *not* been ratified by the PEP editors, and is blocked by another proposal (specifically the implementation of an update to the module metadata syntax in PEP 440). However, no competing standards have been drafted in the time since 423's original proposal, and much of the content seems to be fairly uncontroversial, so I would expect it to be accepted in the future without too many major changes. |
Unable to install python pip on Ubuntu 14.04 | 33,717,197 | 2 | 2015-11-15T06:49:21Z | 34,670,459 | 11 | 2016-01-08T06:00:58Z | [
"python",
"ubuntu",
"pip"
] | This is the command I used to install python-pip
```
sudo apt-get install python-pip
```
I get the following error
```
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
python-pip : Depends: python-setuptools (>= 0.6c1) but it is not going to be installed
Recommends: python-dev-all (>= 2.6) but it is not installable
E: Unable to correct problems, you have held broken packages.
```
I already installed the latest version of python-dev
When i try to install python-setuptools using
`sudo apt-get install python-setuptools` I get the below error
```
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
python-setuptools : Depends: python-pkg-resources (= 3.3-1ubuntu1) but 3.3-1ubuntu2 is to be installed
E: Unable to correct problems, you have held broken packages.
```
Explain to me how to resolve the error and guide me through the steps to get `python-pip` installed. | I got the same error when installing python-pip; the following commands solved my problem.
```
sudo apt-get install python-pkg-resources=3.3-1ubuntu1
sudo apt-get install python-setuptools
``` |
Why is this TensorFlow implementation vastly less successful than Matlab's NN? | 33,720,645 | 20 | 2015-11-15T14:12:06Z | 33,723,404 | 18 | 2015-11-15T18:34:47Z | [
"python",
"matlab",
"neural-network",
"tensorflow"
] | As a toy example I'm trying to fit a function `f(x) = 1/x` from 100 no-noise data points. The matlab default implementation is phenomenally successful with mean square difference ~10^-10, and interpolates perfectly.
I implement a neural network with one hidden layer of 10 sigmoid neurons. I'm a beginner at neural networks so be on your guard against dumb code.
```
import tensorflow as tf
import numpy as np
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
#Can't make tensorflow consume ordinary lists unless they're parsed to ndarray
def toNd(lst):
lgt = len(lst)
x = np.zeros((1, lgt), dtype='float32')
for i in range(0, lgt):
x[0,i] = lst[i]
return x
xBasic = np.linspace(0.2, 0.8, 101)
xTrain = toNd(xBasic)
yTrain = toNd(map(lambda x: 1/x, xBasic))
x = tf.placeholder("float", [1,None])
hiddenDim = 10
b = bias_variable([hiddenDim,1])
W = weight_variable([hiddenDim, 1])
b2 = bias_variable([1])
W2 = weight_variable([1, hiddenDim])
hidden = tf.nn.sigmoid(tf.matmul(W, x) + b)
y = tf.matmul(W2, hidden) + b2
# Minimize the squared errors.
loss = tf.reduce_mean(tf.square(y - yTrain))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
# For initializing the variables.
init = tf.initialize_all_variables()
# Launch the graph
sess = tf.Session()
sess.run(init)
for step in xrange(0, 4001):
train.run({x: xTrain}, sess)
if step % 500 == 0:
print loss.eval({x: xTrain}, sess)
```
Mean square difference ends at ~2\*10^-3, so about 7 orders of magnitude worse than matlab. Visualising with
```
xTest = np.linspace(0.2, 0.8, 1001)
yTest = y.eval({x:toNd(xTest)}, sess)
import matplotlib.pyplot as plt
plt.plot(xTest,yTest.transpose().tolist())
plt.plot(xTest,map(lambda x: 1/x, xTest))
plt.show()
```
we can see the fit is systematically imperfect:
[](http://i.stack.imgur.com/Blxq9.png)
while the matlab one looks perfect to the naked eye with the differences uniformly < 10^-5:
[](http://i.stack.imgur.com/kC8aJ.jpg)
I have tried to replicate with TensorFlow the diagram of the Matlab network:
[](http://i.stack.imgur.com/ORLXL.png)
Incidentally, the diagram seems to imply a tanh rather than sigmoid activation function. I cannot find it anywhere in documentation to be sure. However, when I try to use a tanh neuron in TensorFlow the fitting quickly fails with `nan` for variables. I do not know why.
Matlab uses the Levenberg–Marquardt training algorithm. Bayesian regularization is even more successful with mean squares at 10^-12 (we are probably in the area of vapours of float arithmetic).
Why is TensorFlow implementation so much worse, and what can I do to make it better? | I tried training for 50000 iterations it got to 0.00012 error. It takes about 180 seconds on Tesla K40.
[](http://i.stack.imgur.com/cH2hN.png)
It seems that for this kind of problem, first order gradient descent is not a good fit (pun intended), and you need Levenberg–Marquardt or l-BFGS. I don't think anyone implemented them in TensorFlow yet.
**Edit**
Use `tf.train.AdamOptimizer(0.1)` for this problem. It gets to `3.13729e-05` after 4000 iterations. Also, GPU with default strategy also seems like a bad idea for this problem. There are many small operations and the overhead causes GPU version to run 3x slower than CPU on my machine. |
Why is this TensorFlow implementation vastly less successful than Matlab's NN? | 33,720,645 | 20 | 2015-11-15T14:12:06Z | 33,728,310 | 12 | 2015-11-16T04:00:59Z | [
"python",
"matlab",
"neural-network",
"tensorflow"
] | As a toy example I'm trying to fit a function `f(x) = 1/x` from 100 no-noise data points. The matlab default implementation is phenomenally successful with mean square difference ~10^-10, and interpolates perfectly.
I implement a neural network with one hidden layer of 10 sigmoid neurons. I'm a beginner at neural networks so be on your guard against dumb code.
```
import tensorflow as tf
import numpy as np
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
#Can't make tensorflow consume ordinary lists unless they're parsed to ndarray
def toNd(lst):
lgt = len(lst)
x = np.zeros((1, lgt), dtype='float32')
for i in range(0, lgt):
x[0,i] = lst[i]
return x
xBasic = np.linspace(0.2, 0.8, 101)
xTrain = toNd(xBasic)
yTrain = toNd(map(lambda x: 1/x, xBasic))
x = tf.placeholder("float", [1,None])
hiddenDim = 10
b = bias_variable([hiddenDim,1])
W = weight_variable([hiddenDim, 1])
b2 = bias_variable([1])
W2 = weight_variable([1, hiddenDim])
hidden = tf.nn.sigmoid(tf.matmul(W, x) + b)
y = tf.matmul(W2, hidden) + b2
# Minimize the squared errors.
loss = tf.reduce_mean(tf.square(y - yTrain))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
# For initializing the variables.
init = tf.initialize_all_variables()
# Launch the graph
sess = tf.Session()
sess.run(init)
for step in xrange(0, 4001):
train.run({x: xTrain}, sess)
if step % 500 == 0:
print loss.eval({x: xTrain}, sess)
```
Mean square difference ends at ~2\*10^-3, so about 7 orders of magnitude worse than matlab. Visualising with
```
xTest = np.linspace(0.2, 0.8, 1001)
yTest = y.eval({x:toNd(xTest)}, sess)
import matplotlib.pyplot as plt
plt.plot(xTest,yTest.transpose().tolist())
plt.plot(xTest,map(lambda x: 1/x, xTest))
plt.show()
```
we can see the fit is systematically imperfect:
[](http://i.stack.imgur.com/Blxq9.png)
while the matlab one looks perfect to the naked eye with the differences uniformly < 10^-5:
[](http://i.stack.imgur.com/kC8aJ.jpg)
I have tried to replicate with TensorFlow the diagram of the Matlab network:
[](http://i.stack.imgur.com/ORLXL.png)
Incidentally, the diagram seems to imply a tanh rather than sigmoid activation function. I cannot find it anywhere in documentation to be sure. However, when I try to use a tanh neuron in TensorFlow the fitting quickly fails with `nan` for variables. I do not know why.
Matlab uses the Levenberg–Marquardt training algorithm. Bayesian regularization is even more successful with mean squares at 10^-12 (we are probably in the area of vapours of float arithmetic).
Why is TensorFlow implementation so much worse, and what can I do to make it better? | btw, here's a slightly cleaned up version of the above that cleans up some of the shape issues and unnecessary bouncing between tf and np. It achieves 3e-08 after 40k steps, or about 1.5e-5 after 4000:
```
import tensorflow as tf
import numpy as np
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
xTrain = np.linspace(0.2, 0.8, 101).reshape([1, -1])
yTrain = (1/xTrain)
x = tf.placeholder(tf.float32, [1,None])
hiddenDim = 10
b = bias_variable([hiddenDim,1])
W = weight_variable([hiddenDim, 1])
b2 = bias_variable([1])
W2 = weight_variable([1, hiddenDim])
hidden = tf.nn.sigmoid(tf.matmul(W, x) + b)
y = tf.matmul(W2, hidden) + b2
# Minimize the squared errors.
loss = tf.reduce_mean(tf.square(y - yTrain))
step = tf.Variable(0, trainable=False)
rate = tf.train.exponential_decay(0.15, step, 1, 0.9999)
optimizer = tf.train.AdamOptimizer(rate)
train = optimizer.minimize(loss, global_step=step)
init = tf.initialize_all_variables()
# Launch the graph
sess = tf.Session()
sess.run(init)
for step in xrange(0, 40001):
train.run({x: xTrain}, sess)
if step % 500 == 0:
print loss.eval({x: xTrain}, sess)
```
All that said, it's probably not too surprising that LMA is doing better than a more general DNN-style optimizer for fitting a 2D curve. Adam and the rest are targeting very high dimensionality problems, and [LMA starts to get glacially slow for very large networks](http://www.eng.auburn.edu/~wilambm/pap/2011/K10149_C012.pdf) (see 12-15). |
How should I shorten this line of Python code? | 33,723,612 | 2 | 2015-11-15T18:55:56Z | 33,723,656 | 7 | 2015-11-15T18:59:34Z | [
"python",
"python-2.7",
"pep8"
] | Here is the line that needs to be shortened.
```
tree_top = os.path.abspath(os.path.expanduser(os.path.expandvars(sys.argv[1])))
```
* Should I create a variable for each procedure?
* Should I alias `os.path,abspath`, `os.path.expandvars` and `os.path.expanduser` to have shorter names?
* Should I use backslashes? | The easiest way to reduce the width is to use implicit line continuation within parentheses:
```
tree_top = os.path.abspath(
os.path.expanduser(
os.path.expandvars(sys.argv[1])
)
)
```
Alternatively, just select the parts of `os.path` that you need:
```
from os.path import abspath, expanduser, expandvars
tree_top = abspath(expanduser(expandvars(sys.argv[1])))
```
or use some combination of the two. |
How can numpy be so much faster than my Fortran routine? | 33,723,771 | 73 | 2015-11-15T19:10:57Z | 33,724,424 | 105 | 2015-11-15T20:07:33Z | [
"python",
"arrays",
"performance",
"numpy",
"fortran"
] | I get a 512^3 array representing a Temperature distribution from a simulation (written in Fortran). The array is stored in a binary file that's about 1/2G in size. I need to know the minimum, maximum and mean of this array and as I will soon need to understand Fortran code anyway, I decided to give it a go and came up with the following very easy routine.
```
integer gridsize,unit,j
real mini,maxi
double precision mean
gridsize=512
unit=40
open(unit=unit,file='T.out',status='old',access='stream',&
form='unformatted',action='read')
read(unit=unit) tmp
mini=tmp
maxi=tmp
mean=tmp
do j=2,gridsize**3
read(unit=unit) tmp
if(tmp>maxi)then
maxi=tmp
elseif(tmp<mini)then
mini=tmp
end if
mean=mean+tmp
end do
mean=mean/gridsize**3
close(unit=unit)
```
This takes about 25 seconds per file on the machine I use. That struck me as being rather long and so I went ahead and did the following in Python:
```
import numpy
mmap=numpy.memmap('T.out',dtype='float32',mode='r',offset=4,\
shape=(512,512,512),order='F')
mini=numpy.amin(mmap)
maxi=numpy.amax(mmap)
mean=numpy.mean(mmap)
```
Now, I expected this to be faster of course, but I was really blown away. It takes less than a second under identical conditions. The mean deviates from the one my Fortran routine finds (which I also ran with 128-bit floats, so I somehow trust it more) but only on the 7th significant digit or so.
How can numpy be so fast? I mean you have to look at every entry of an array to find these values, right? Am I doing something very stupid in my Fortran routine for it to take so much longer?
**EDIT:**
To answer the questions in the comments:
* Yes, also I ran the Fortran routine with 32-bit and 64-bit floats but it had no impact on performance.
* I used [`iso_fortran_env`](https://gcc.gnu.org/onlinedocs/gfortran/ISO_005fFORTRAN_005fENV.html) which provides 128-bit floats.
* Using 32-bit floats my mean is off quite a bit though, so precision is really an issue.
* I ran both routines on different files in different order, so the caching should have been fair in the comparison I guess ?
* I actually tried open MP, but to read from the file at different positions at the same time. Having read your comments and answers this sounds really stupid now and it made the routine take a lot longer as well. I might give it a try on the array operations but maybe that won't even be necessary.
* The files are actually 1/2G in size, that was a typo, Thanks.
* I will try the array implementation now.
**EDIT 2:**
I implemented what @Alexander Vogt and @casey suggested in their answers, and it is as fast as `numpy` but now I have a precision problem as @Luaan pointed out I might get. Using a 32-bit float array the mean computed by `sum` is 20% off. Doing
```
...
real,allocatable :: tmp (:,:,:)
double precision,allocatable :: tmp2(:,:,:)
...
tmp2=tmp
mean=sum(tmp2)/size(tmp)
...
```
Solves the issue but increases computing time (not by very much, but noticeably).
Is there a better way to get around this issue? I couldn't find a way to read singles from the file directly to doubles.
And how does `numpy` avoid this?
Thanks for all the help so far. | Your Fortran implementation suffers two major shortcomings:
* You mix IO and computations (and read from the file entry by entry).
* You don't use vector/matrix operations.
This implementation does perform the same operation as yours and is faster by a factor of 20 on my machine:
```
program test
integer gridsize,unit
real mini,maxi,mean
real, allocatable :: tmp (:,:,:)
gridsize=512
unit=40
allocate( tmp(gridsize, gridsize, gridsize))
open(unit=unit,file='T.out',status='old',access='stream',&
form='unformatted',action='read')
read(unit=unit) tmp
close(unit=unit)
mini = minval(tmp)
maxi = maxval(tmp)
mean = sum(tmp)/gridsize**3
print *, mini, maxi, mean
end program
```
The idea is to read in the whole file into one array `tmp` in one go. Then, I can use the functions [`MAXVAL`](https://gcc.gnu.org/onlinedocs/gfortran/MAXVAL.html), [`MINVAL`](https://gcc.gnu.org/onlinedocs/gfortran/MINVAL.html), and [`SUM`](https://gcc.gnu.org/onlinedocs/gfortran/SUM.html) on the array directly.
---
For the accuracy issue: Simply using double precision values and doing the conversion on the fly as
```
mean = sum(real(tmp, kind=kind(1.d0)))/real(gridsize**3, kind=kind(1.d0))
```
only marginally increases the calculation time. I tried performing the operation element-wise and in slices, but that did only increase the required time at the default optimization level.
At `-O3`, the element-wise addition performs ~3 % better than the array operation. The difference between double and single precision operations is less than 2% on my machine - on average (the individual runs deviate by far more).
---
Here is a very fast implementation using LAPACK:
```
program test
integer gridsize,unit, i, j
real mini,maxi
integer :: t1, t2, rate
real, allocatable :: tmp (:,:,:)
real, allocatable :: work(:)
! double precision :: mean
real :: mean
real :: slange
call system_clock(count_rate=rate)
call system_clock(t1)
gridsize=512
unit=40
allocate( tmp(gridsize, gridsize, gridsize), work(gridsize))
open(unit=unit,file='T.out',status='old',access='stream',&
form='unformatted',action='read')
read(unit=unit) tmp
close(unit=unit)
mini = minval(tmp)
maxi = maxval(tmp)
! mean = sum(tmp)/gridsize**3
! mean = sum(real(tmp, kind=kind(1.d0)))/real(gridsize**3, kind=kind(1.d0))
mean = 0.d0
do j=1,gridsize
do i=1,gridsize
mean = mean + slange('1', gridsize, 1, tmp(:,i,j),gridsize, work)
enddo !i
enddo !j
mean = mean / gridsize**3
print *, mini, maxi, mean
call system_clock(t2)
print *,real(t2-t1)/real(rate)
end program
```
This uses the single precision matrix 1-norm [`SLANGE`](http://www.math.utah.edu/software/lapack/lapack-s/slange.html) on matrix columns. The run-time is even faster than the approach using single precision array functions - and does not show the precision issue. |
How can numpy be so much faster than my Fortran routine? | 33,723,771 | 73 | 2015-11-15T19:10:57Z | 33,724,538 | 53 | 2015-11-15T20:18:31Z | [
"python",
"arrays",
"performance",
"numpy",
"fortran"
] | I get a 512^3 array representing a Temperature distribution from a simulation (written in Fortran). The array is stored in a binary file that's about 1/2G in size. I need to know the minimum, maximum and mean of this array and as I will soon need to understand Fortran code anyway, I decided to give it a go and came up with the following very easy routine.
```
integer gridsize,unit,j
real mini,maxi
double precision mean
gridsize=512
unit=40
open(unit=unit,file='T.out',status='old',access='stream',&
form='unformatted',action='read')
read(unit=unit) tmp
mini=tmp
maxi=tmp
mean=tmp
do j=2,gridsize**3
read(unit=unit) tmp
if(tmp>maxi)then
maxi=tmp
elseif(tmp<mini)then
mini=tmp
end if
mean=mean+tmp
end do
mean=mean/gridsize**3
close(unit=unit)
```
This takes about 25 seconds per file on the machine I use. That struck me as being rather long and so I went ahead and did the following in Python:
```
import numpy
mmap=numpy.memmap('T.out',dtype='float32',mode='r',offset=4,\
shape=(512,512,512),order='F')
mini=numpy.amin(mmap)
maxi=numpy.amax(mmap)
mean=numpy.mean(mmap)
```
Now, I expected this to be faster of course, but I was really blown away. It takes less than a second under identical conditions. The mean deviates from the one my Fortran routine finds (which I also ran with 128-bit floats, so I somehow trust it more) but only on the 7th significant digit or so.
How can numpy be so fast? I mean you have to look at every entry of an array to find these values, right? Am I doing something very stupid in my Fortran routine for it to take so much longer?
**EDIT:**
To answer the questions in the comments:
* Yes, also I ran the Fortran routine with 32-bit and 64-bit floats but it had no impact on performance.
* I used [`iso_fortran_env`](https://gcc.gnu.org/onlinedocs/gfortran/ISO_005fFORTRAN_005fENV.html) which provides 128-bit floats.
* Using 32-bit floats my mean is off quite a bit though, so precision is really an issue.
* I ran both routines on different files in different order, so the caching should have been fair in the comparison I guess ?
* I actually tried open MP, but to read from the file at different positions at the same time. Having read your comments and answers this sounds really stupid now and it made the routine take a lot longer as well. I might give it a try on the array operations but maybe that won't even be necessary.
* The files are actually 1/2G in size, that was a typo, Thanks.
* I will try the array implementation now.
**EDIT 2:**
I implemented what @Alexander Vogt and @casey suggested in their answers, and it is as fast as `numpy` but now I have a precision problem as @Luaan pointed out I might get. Using a 32-bit float array the mean computed by `sum` is 20% off. Doing
```
...
real,allocatable :: tmp (:,:,:)
double precision,allocatable :: tmp2(:,:,:)
...
tmp2=tmp
mean=sum(tmp2)/size(tmp)
...
```
Solves the issue but increases computing time (not by very much, but noticeably).
Is there a better way to get around this issue? I couldn't find a way to read singles from the file directly to doubles.
And how does `numpy` avoid this?
Thanks for all the help so far. | The numpy is faster because you wrote much more efficient code in python (and much of the numpy backend is written in optimized Fortran and C) and terribly inefficient code in Fortran.
Look at your python code. You load the entire array at once and then call functions that can operate on an array.
Look at your fortran code. You read one value at a time and do some branching logic with it.
The majority of your discrepancy is the fragmented IO you have written in Fortran.
You can write the Fortran just about the same way as you wrote the python and you'll find it runs much faster that way.
```
program test
implicit none
integer :: gridsize, unit
real :: mini, maxi, mean
real, allocatable :: array(:,:,:)
gridsize=512
allocate(array(gridsize,gridsize,gridsize))
unit=40
open(unit=unit, file='T.out', status='old', access='stream',&
form='unformatted', action='read')
read(unit) array
maxi = maxval(array)
mini = minval(array)
mean = sum(array)/size(array)
close(unit)
end program test
``` |
Pip hangs in Windows 7 | 33,724,228 | 7 | 2015-11-15T19:50:39Z | 34,800,120 | 7 | 2016-01-14T21:30:46Z | [
"python",
"windows",
"python-2.7",
"python-3.x",
"pip"
] | I have `Python 2.7.10` installed with pip on Windows 7. When I try to install a package, or even just run `pip` in cmd with no options, it gets stuck, prints nothing, and even Ctrl+C does not work; I have to close cmd.
Task Manager shows 3 running `pip.exe *32` processes, and when I close cmd I can kill one of them. The other 2 are removed only after rebooting Windows.
Same thing happens with `Python 3.5`
I tried to reinstall pip or python, neither was helpful.
pip-7.1.2
upd 1
Figured out that I have the same problem with virtualenv. | I had exactly the same problem. The reason, in my case, was my antivirus program Avast: it blocked pip. As soon as I deactivated it, pip worked. Now I need to find a way to tell Avast to stop blocking pip. |
Counting the number of unique words in a list | 33,726,361 | 2 | 2015-11-15T23:26:14Z | 33,726,420 | 8 | 2015-11-15T23:33:06Z | [
"python",
"python-3.x"
] | Using the following code from <http://stackoverflow.com/a/11899925>, I am able to find if a word is unique or not (by comparing if it was used once or greater than once):
```
helloString = ['hello', 'world', 'world']
count = {}
for word in helloString :
if word in count :
count[word] += 1
else:
count[word] = 1
```
But, if I were to have a string with hundreds of words, how would I be able to count the number of unique words within that string?
For example, my code has:
```
uniqueWordCount = 0
helloString = ['hello', 'world', 'world', 'how', 'are', 'you', 'doing', 'today']
count = {}
for word in helloString :
if word in count :
count[word] += 1
else:
count[word] = 1
```
How would I be able to set `uniqueWordCount` to `6`? Usually, I am really good at solving these types of algorithmic puzzles, but I have been unsuccessful with figuring this one out. I feel as if it is right beneath my nose. | The best way to solve this is to use the `set` collection type. A `set` is a collection in which all elements are unique. Therefore:
```
unique = set([ 'one', 'two', 'two'])
len(unique) # is 2
```
You can use a set from the outset, adding words to it as you go:
```
unique.add('three')
```
This will throw out any duplicates as they are added. Or, you can collect all the elements in a list and pass the list to the `set()` function, which will remove the duplicates at that time. The example I provided above shows this pattern:
```
unique = set([ 'one', 'two', 'two'])
unique.add('three')
# unique now contains {'one', 'two', 'three'}
```
[Read more about sets in Python.](https://docs.python.org/3.5/library/stdtypes.html?highlight=set#set-types-set-frozenset) |
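Applied to the original question, the whole counting problem collapses to one line. A quick sketch (note that the example list above actually contains 7 distinct words, not 6; 'world' is the only duplicate):

```python
helloString = ['hello', 'world', 'world', 'how', 'are', 'you', 'doing', 'today']

# set() drops the duplicate 'world'; len() then counts the distinct words
uniqueWordCount = len(set(helloString))
print(uniqueWordCount)  # 7
```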
In slicing, why can't I reverse a list, skipping the last item in a single bracket? | 33,727,311 | 4 | 2015-11-16T01:40:15Z | 33,727,329 | 9 | 2015-11-16T01:42:08Z | [
"python"
] | In Python, I can set the `end` that I want in a slice:
```
l = [0, 1, 2, 3, 4, 5]
l[:-1] = [0, 1, 2, 3, 4]
```
I can also set the step I want:
```
l[::-1] = [5, 4, 3, 2, 1, 0]
```
So, how come I cannot reverse the list that skips the last item in a single take? I mean why this happens:
```
l[:-1:-1] = []
```
To get the expected result, using slices, I have to do:
```
l[:-1][::-1] = [4, 3, 2, 1, 0]
```
Does this have anything to do with the precedence of the fields? The order in which the actions take place during the slicing? | You can, but you have to do it like this:
```
>>> x[-2::-1]
[4, 3, 2, 1, 0]
```
The reason is that when you use a negative slice, the "start" of the slice is towards the end of the list, and the "end" of the slice is towards the beginning of the list. In other words, if you want to take a backwards slice and leave off the element at the end of the list, the element you want to leave off is at the *beginning* of the slice, not the end, so you need to specify an explicit slice start to leave it off.
You seem to be thinking that the slice notation `[a:b:-1]` just means "take the slice `[a:b]` as you usually would, and then reverse it", but that's not what it means. Read [the documentation](https://docs.python.org/2/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange) for the full explanation of how slices work. The slice starts at the start and goes by the step until it gets to the end; if the step is negative, the slice goes in the opposite direction, giving a very different result than just going in the same direction and then reversing it afterwards. |
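A quick runnable comparison of the two slices discussed above (illustrative; `x` is just a six-element list like the question's `l`):

```python
x = [0, 1, 2, 3, 4, 5]

# With a negative step the start defaults to the *last* element, and
# stop -1 also points at the last element, so the slice is empty.
print(x[:-1:-1])   # []

# Start one element before the end and walk backwards instead:
print(x[-2::-1])   # [4, 3, 2, 1, 0]
```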
Python thinks my tuple is an integer | 33,735,091 | 2 | 2015-11-16T12:12:39Z | 33,735,141 | 8 | 2015-11-16T12:14:54Z | [
"python",
"indexing",
"integer",
"int",
"tuples"
] | I am trying to print out the positions of a given substring inside of a string, but on line 18 I keep getting the error
```
Traceback (most recent call last):
  File "prog.py", line 18, in <module>
TypeError: 'int' object has no attribute '__getitem__'
```
I have no idea why this is happening, because I am new to python. But anyway, here's my program:
```
sentence = "one two three one four one"
word = "one"
tracked = ()
n = 0
p = 0
for c in sentence:
p += 1
if n == 0 and c == word[n]:
n += 1
tracked = (p)
elif n == len(word) and c == word[n]:
print(tracked[1], tracked[2])
tracked = ()
n = 0
elif c == word[n]:
n += 1
tracked = (tracked[1], p)
else:
tracked = ()
n = 0
``` | `tracked = (p)` is an integer, not a tuple. The parentheses don't necessarily create a tuple, because they're also used for grouping in expressions. In this case, `(p)` is just evaluated as an expression, so it evaluates to `p`. If you want to make it a tuple, you need to add a trailing comma: `(p,)`.
Though in your case you're trying to call `tracked[1], tracked[2]`, neither of which will be valid for a single item tuple. It's unclear what you're trying to do, but tuples are explicitly immutable (meaning they don't change, can't be appended to etc.) and it seems like lists are more likely what you'd need. |
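A tiny demonstration of the comma rule (hypothetical names, just to show the types):

```python
p = 7

not_a_tuple = (p)   # parentheses only group the expression: still an int
a_tuple = (p,)      # the trailing comma is what creates the tuple

print(type(not_a_tuple).__name__)  # int
print(type(a_tuple).__name__)      # tuple
```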
How to compact this for? | 33,738,355 | 3 | 2015-11-16T15:00:39Z | 33,738,408 | 7 | 2015-11-16T15:03:18Z | [
"python",
"dictionary"
] | I have a dictionary of dictionaries. I want to count how many of those dictionaries have the element "status" set to "connecting".
This is my working code:
```
connecting = 0
for x in self.servers:
if self.servers[x]["status"] == "connecting": connecting += 1
```
Is there any way of compacting this? I was thinking something like:
```
connecting = [1 if self.servers[x]["status"] == "closed" else 0 for x in self.servers]
```
But it just returns a list of 0s and 1s; it doesn't add that 1 to `connecting`, which is what I expected. | You can use a generator expression within the `sum` function:
```
sum(x["status"]=="connecting" for x in self.servers.values())
```
Note that the result of `x["status"]=="connecting"` is a boolean value, and Python evaluates `True` as 1 and `False` as 0, so in the end it returns the number of dictionaries that match your condition.
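As a self-contained version of the same idea (the `servers` dictionary below is invented for illustration, standing in for `self.servers`):

```python
servers = {
    "alpha": {"status": "connecting"},
    "beta":  {"status": "closed"},
    "gamma": {"status": "connecting"},
}

# Each comparison yields True (1) or False (0), so sum() counts matches.
connecting = sum(s["status"] == "connecting" for s in servers.values())
print(connecting)  # 2
```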
sampling multinomial from small log probability vectors in numpy/scipy | 33,738,382 | 14 | 2015-11-16T15:01:51Z | 33,819,405 | 8 | 2015-11-20T05:08:56Z | [
"python",
"numpy",
"scipy",
"probability",
"precision"
] | Is there a function in numpy/scipy that lets you sample multinomial from a vector of small log probabilities, without losing precision? example:
```
# sample element randomly from these log probabilities
l = [-900, -1680]
```
the naive method fails because of underflow:
```
import scipy
import numpy as np
# this makes a all zeroes
a = np.exp(l) / scipy.misc.logsumexp(l)
r = np.random.multinomial(1, a)
```
this is one attempt:
```
def s(l):
m = np.max(l)
norm = m + np.log(np.sum(np.exp(l - m)))
p = np.exp(l - norm)
return np.where(np.random.multinomial(1, p) == 1)[0][0]
```
is this the best/fastest method and can `np.exp()` in the last step be avoided? | First of all, I believe the problem you're encountering is because you're normalizing your probabilities incorrectly. This line is incorrect:
```
a = np.exp(l) / scipy.misc.logsumexp(l)
```
You're dividing a probability by a log probability, which makes no sense. Instead you probably want
```
a = np.exp(l - scipy.misc.logsumexp(l))
```
If you do that, you find `a = [1, 0]` and your multinomial sampler works as expected up to floating point precision in the second probability.
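To make the fix concrete without the SciPy dependency, here is a pure-Python sketch of the same log-sum-exp normalization (stdlib only; the numbers match the `l = [-900, -1680]` example from the question):

```python
import math

l = [-900.0, -1680.0]

# log-sum-exp trick: shift by the max so the largest term becomes
# exp(0) = 1 instead of underflowing to zero.
m = max(l)
norm = m + math.log(sum(math.exp(v - m) for v in l))
a = [math.exp(v - norm) for v in l]
print(a)  # [1.0, 0.0] -- the second probability underflows, as expected
```

This is the same shift-by-the-max computation that `scipy.misc.logsumexp` performs, so `np.exp(l - logsumexp(l))` gives the same `[1, 0]` result.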
---
### A Solution for Small N: Histograms
That said, if you still need more precision and performance is not as much of a concern, one way you could make progress is by implementing a multinomial sampler from scratch, and then modifying this to work at higher precision.
NumPy's multinomial function is [implemented in Cython](https://github.com/numpy/numpy/blob/master/numpy/random/mtrand/mtrand.pyx#L4406), and essentially performs a loop over a number of binomial samples and combines them into a multinomial sample.
You can call it like this:
```
np.random.multinomial(10, [0.1, 0.2, 0.7])
# [0, 1, 9]
```
(Note that the precise output values here & below are random, and will change from call to call).
Another way you might implement a multinomial sampler is to generate *N* uniform random values, then compute the histogram with bins defined by the cumulative probabilities:
```
def multinomial(N, p):
rand = np.random.uniform(size=N)
p_cuml = np.cumsum(np.hstack([[0], p]))
p_cuml /= p_cuml[-1]
return np.histogram(rand, bins=p_cuml)[0]
multinomial(10, [0.1, 0.2, 0.7])
# [1, 1, 8]
```
With this method in mind, we can think about doing things to higher precision by keeping *everything* in log-space. The main trick is to realize that the log of uniform random deviates is equivalent to the negative of exponential random deviates, and so you can do everything above without ever leaving log space:
```
def multinomial_log(N, logp):
log_rand = -np.random.exponential(size=N)
logp_cuml = np.logaddexp.accumulate(np.hstack([[-np.inf], logp]))
logp_cuml -= logp_cuml[-1]
return np.histogram(log_rand, bins=logp_cuml)[0]
multinomial_log(10, np.log([0.1, 0.2, 0.7]))
# [1, 2, 7]
```
The resulting multinomial draws will maintain precision even for very small values in the *p* array.
Unfortunately, these histogram-based solutions will be *much* slower than the native `numpy.multinomial` function, so if performance is an issue you may need another approach. One option would be to adapt the Cython code linked above to work in log-space, using similar mathematical tricks as I used here.
---
### A Solution for Large N: Poisson Approximation
The problem with the above solution is that as *N* grows large, it becomes *very* slow.
I was thinking about this and realized there's a more efficient way forward, despite `np.random.multinomial` failing for probabilities smaller than `1E-16` or so.
Here's an example of that failure: on a 64-bit machine, this will always give zero for the first entry because of the way the code is implemented, when in reality it should give something near 10:
```
np.random.multinomial(1E18, [1E-17, 1])
# array([ 0, 1000000000000000000])
```
If you dig into the source, you can trace this issue to the binomial function upon which the multinomial function is built. The cython code internally does something like this:
```
def multinomial_basic(N, p, size=None):
results = np.array([np.random.binomial(N, pi, size) for pi in p])
results[-1] = int(N) - results[:-1].sum(0)
return np.rollaxis(results, 0, results.ndim)
multinomial_basic(1E18, [1E-17, 1])
# array([ 0, 1000000000000000000])
```
The problem is that the `binomial` function chokes on very small values of `p`; this is because the algorithm [computes the value `(1 - p)`](https://github.com/numpy/numpy/blob/master/numpy/random/mtrand/distributions.c#L277), so the value of `p` is limited by floating-point precision.
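You can see the floating-point limitation directly in pure Python: once `p` drops below the machine epsilon of a double (about `2.2E-16`), subtracting it from 1 is a silent no-op:

```python
# Doubles carry ~16 significant digits, so (1 - p) rounds
# straight back to 1.0 once p drops below the machine epsilon.
print(1.0 - 1E-17 == 1.0)   # True: the tiny p is lost entirely
print(1.0 - 1E-15 == 1.0)   # False: this p is still representable
```

This is exactly why the binomial sampler cannot distinguish `p = 1E-17` from `p = 0`.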
So what can we do? Well, it turns out that for small values of p, the Poisson distribution is an extremely good approximation of the binomial distribution, and the implementation doesn't have these issues. So we can build a robust multinomial function based on a robust binomial sampler that switches to a Poisson sampler at small p:
```
def binomial_robust(N, p, size=None):
if p < 1E-7:
return np.random.poisson(N * p, size)
else:
return np.random.binomial(N, p, size)
def multinomial_robust(N, p, size=None):
results = np.array([binomial_robust(N, pi, size) for pi in p])
results[-1] = int(N) - results[:-1].sum(0)
return np.rollaxis(results, 0, results.ndim)
multinomial_robust(1E18, [1E-17, 1])
# array([ 12, 999999999999999988])
```
The first entry is nonzero and near 10 as expected! Note that we can't use `N` larger than `1E18`, because it will overflow the long integer.
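The `1E18` ceiling comes from the range of a signed 64-bit integer, which a quick pure-Python check confirms:

```python
# A signed 64-bit integer tops out at 2**63 - 1, i.e. about 9.22e18,
# so N = 1e18 still fits but another order of magnitude would not.
int64_max = 2**63 - 1
print(int64_max)                            # 9223372036854775807
print(int(1E18) < int64_max < int(1E19))    # True
```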
But we can confirm that our approach works for smaller probabilities using the `size` parameter, and averaging over results:
```
p = [1E-23, 1E-22, 1E-21, 1E-20, 1]
size = int(1E6)
multinomial_robust(1E18, p, size).mean(0)
# array([ 1.70000000e-05, 9.00000000e-05, 9.76000000e-04,
# 1.00620000e-02, 1.00000000e+18])
```
We see that even for these very small probabilities, the multinomial values are turning up in the right proportion. The result is a very robust and very fast approximation to the multinomial distribution for small `p`. |
How do I know if I can disable SQLALCHEMY_TRACK_MODIFICATIONS? | 33,738,467 | 29 | 2015-11-16T15:05:39Z | 33,790,196 | 31 | 2015-11-18T20:56:34Z | [
"python",
"flask",
"sqlalchemy",
"flask-sqlalchemy"
] | Every time I run my app that uses Flask-SQLAlchemy I get the following warning that the `SQLALCHEMY_TRACK_MODIFICATIONS` option will be disabled.
```
/home/david/.virtualenvs/flask-sqlalchemy/lib/python3.5/site-packages/flask_sqlalchemy/__init__.py:800: UserWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True to suppress this warning.
warnings.warn('SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True to suppress this warning.')
```
I tried to find out what this option does, but the Flask-SQLAlchemy documentation isn't clear about what uses this tracking.
> `SQLALCHEMY_TRACK_MODIFICATIONS`
>
> If set to True (the default) Flask-SQLAlchemy will track modifications of objects and emit signals. This requires extra memory and can be disabled if not needed.
How do I find out if my project requires `SQLALCHEMY_TRACK_MODIFICATIONS = True` or if I can safely disable this feature and save memory on my server? | Most likely your application doesn't use the Flask-SQLAlchemy event system, so you're probably safe to turn it off. You'll need to audit the code to verify--you're looking for anything that hooks into [`models_committed` or `before_models_committed`](http://flask-sqlalchemy.pocoo.org/dev/signals/). If you do find that you're using the Flask-SQLAlchemy event system, you should probably update the code to use SQLAlchemy's built-in event system instead.
To turn off the Flask-SQLAlchemy event system (and disable the warning), just add `SQLALCHEMY_TRACK_MODIFICATIONS = False` to your app config until the default is changed (most likely in Flask-SQLAlchemy v3).
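In a typical Flask setup that just means one extra config line (a sketch; `app` here stands for your own Flask application object):

```python
# Silence the warning and skip the unneeded tracking overhead
# (assumes `app` is your Flask application instance):
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
```

If you keep your settings in a config object or file instead, the equivalent is a module-level `SQLALCHEMY_TRACK_MODIFICATIONS = False`.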
Background--here's what the warning is telling you:
Flask-SQLAlchemy has its own event notification system that gets layered on top of SQLAlchemy. To do this, it tracks modifications to the SQLAlchemy session. This takes extra resources, so the option `SQLALCHEMY_TRACK_MODIFICATIONS` allows you to disable the modification tracking system. Currently the option defaults to `True`, but in the future, that default will change to `False`, thereby disabling the event system.
As far as I understand, the rationale for the change is three-fold:
1. Not many people use Flask-SQLAlchemy's event system, but most people don't realize they can save system resources by disabling it. So a saner default is to disable it and those who want it can turn it on.
2. The event system in Flask-SQLAlchemy has been rather buggy (see issues linked to in the pull request mentioned below), requiring additional maintenance for a feature that few people use.
3. In v0.7, SQLAlchemy itself added a [powerful event system](http://docs.sqlalchemy.org/en/latest/core/event.html) including the ability to create custom events. Ideally, the Flask-SQLAlchemy event system should do nothing more than create a few custom SQLAlchemy event hooks and listeners, and then let SQLAlchemy itself manage the event trigger.
You can see more in the discussion around [the pull request that started triggering this warning](https://github.com/mitsuhiko/flask-sqlalchemy/pull/256). |
What does from __future__ import absolute_import actually do? | 33,743,880 | 35 | 2015-11-16T20:18:11Z | 33,744,115 | 13 | 2015-11-16T20:35:40Z | [
"python",
"python-2.7",
"python-import",
"python-2.5"
] | I have [answered](http://stackoverflow.com/a/22679558/2588818) a question regarding absolute imports in Python, which I thought I understood based on reading [the Python 2.5 changelog](https://docs.python.org/2.5/whatsnew/pep-328.html) and accompanying [PEP](https://www.python.org/dev/peps/pep-0328/). However, upon installing Python 2.5 and attempting to craft an example of properly using `from __future__ import absolute_import`, I realize things are not so clear.
Straight from the changelog linked above, this statement accurately summarized my understanding of the absolute import change:
> Let's say you have a package directory like this:
>
> ```
> pkg/
> pkg/__init__.py
> pkg/main.py
> pkg/string.py
> ```
>
> This defines a package named `pkg` containing the `pkg.main` and `pkg.string` submodules.
>
> Consider the code in the main.py module. What happens if it executes the statement `import string`? In Python 2.4 and earlier, it will first look in the package's directory to perform a relative import, finds pkg/string.py, imports the contents of that file as the `pkg.string` module, and that module is bound to the name `"string"` in the `pkg.main` module's namespace.
So I created this exact directory structure:
```
$ ls -R
.:
pkg/
./pkg:
__init__.py main.py string.py
```
`__init__.py` and `string.py` are empty. `main.py` contains the following code:
```
import string
print string.ascii_uppercase
```
As expected, running this with Python 2.5 fails with an `AttributeError`:
```
$ python2.5 pkg/main.py
Traceback (most recent call last):
File "pkg/main.py", line 2, in <module>
print string.ascii_uppercase
AttributeError: 'module' object has no attribute 'ascii_uppercase'
```
However, further along in the 2.5 changelog, we find this (emphasis added):
> In Python 2.5, you can switch `import`'s behaviour to absolute imports using a `from __future__ import absolute_import` directive. This absolute-import behaviour will become the default in a future version (probably Python 2.7). **Once absolute imports are the default, `import string` will always find the standard library's version.**
I thus created `pkg/main2.py`, identical to `main.py` but with the additional future import directive. It now looks like this:
```
from __future__ import absolute_import
import string
print string.ascii_uppercase
```
Running this with Python 2.5, however... fails with an `AttributeError`:
```
$ python2.5 pkg/main2.py
Traceback (most recent call last):
File "pkg/main2.py", line 3, in <module>
print string.ascii_uppercase
AttributeError: 'module' object has no attribute 'ascii_uppercase'
```
This pretty flatly contradicts the statement that `import string` will **always** find the std-lib version with absolute imports enabled. What's more, despite the warning that absolute imports are scheduled to become the "new default" behavior, I hit this same problem using both Python 2.7, with or without the `__future__` directive:
```
$ python2.7 pkg/main.py
Traceback (most recent call last):
File "pkg/main.py", line 2, in <module>
print string.ascii_uppercase
AttributeError: 'module' object has no attribute 'ascii_uppercase'
$ python2.7 pkg/main2.py
Traceback (most recent call last):
File "pkg/main2.py", line 3, in <module>
print string.ascii_uppercase
AttributeError: 'module' object has no attribute 'ascii_uppercase'
```
as well as Python 3.5, with or without (assuming the `print` statement is changed in both files):
```
$ python3.5 pkg/main.py
Traceback (most recent call last):
File "pkg/main.py", line 2, in <module>
print(string.ascii_uppercase)
AttributeError: module 'string' has no attribute 'ascii_uppercase'
$ python3.5 pkg/main2.py
Traceback (most recent call last):
File "pkg/main2.py", line 3, in <module>
print(string.ascii_uppercase)
AttributeError: module 'string' has no attribute 'ascii_uppercase'
```
---
I have tested other variations of this. Instead of `string.py`, I have created an empty module -- a directory named `string` containing only an empty `__init__.py` -- and instead of issuing imports from `main.py`, I have `cd`'d to `pkg` and run imports directly from the REPL. Neither of these variations (nor a combination of them) changed the results above. I cannot reconcile this with what I have read about the `__future__` directive and absolute imports.
It seems to me that this is easily explicable by [the following](https://docs.python.org/2/library/sys.html#sys.path) (this is from the Python 2 docs but this statement remains unchanged in the same docs for Python 3):
> ### sys.path
>
> (...)
>
> As initialized upon program startup, the first item of this list, `path[0]`, is the directory containing the script that was used to invoke the Python interpreter. If the script directory is not available (e.g. if the interpreter is invoked interactively or if the script is read from standard input), `path[0]` is the empty string, **which directs Python to search modules in the current directory first.**
So what am I missing? Why does the `__future__` statement seemingly not do what it says, and what is the resolution of this contradiction between these two sections of documentation, as well as between described and actual behavior? | The changelog is sloppily worded. `from __future__ import absolute_import` does not care about whether something is part of the standard library, and `import string` will not always give you the standard-library module with absolute imports on.
`from __future__ import absolute_import` means that if you `import string`, Python will always look for a top-level `string` module, rather than `current_package.string`. However, it does not affect the logic Python uses to decide what file is the `string` module. When you do
```
python pkg/script.py
```
`pkg/script.py` doesn't look like part of a package to Python. Following the normal procedures, the `pkg` directory is added to the path, and all `.py` files in the `pkg` directory look like top-level modules. `import string` finds `pkg/string.py` not because it's doing a relative import, but because `pkg/string.py` appears to be the top-level module `string`. The fact that this isn't the standard-library `string` module doesn't come up.
To run the file as part of the `pkg` package, you could do
```
python -m pkg.script
```
In this case, the `pkg` directory will not be added to the path. However, the current directory will be added to the path.
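Putting these pieces together, here's a self-contained sketch (building a throwaway package in a temporary directory; Python 3 shown for brevity, where absolute imports are already the default) demonstrating that when the file runs *as part of a package*, an explicit relative import still reaches `pkg/string.py` while a plain `import string` reaches the standard library:

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# Build a throwaway `pkg` package (with its own string.py) in a temp
# directory, then run its script with `python -m` so it executes as a
# real package member.
with tempfile.TemporaryDirectory() as tmp:
    pkg_dir = os.path.join(tmp, "pkg")
    os.mkdir(pkg_dir)
    open(os.path.join(pkg_dir, "__init__.py"), "w").close()
    with open(os.path.join(pkg_dir, "string.py"), "w") as f:
        f.write("local_marker = True\n")
    with open(os.path.join(pkg_dir, "script.py"), "w") as f:
        f.write(textwrap.dedent("""\
            from __future__ import absolute_import
            from . import string as local_string   # package-local pkg/string.py
            import string as stdlib_string         # top level: the standard library
            print(local_string.local_marker)
            print(stdlib_string.ascii_uppercase)
            """))
    out = subprocess.check_output(
        [sys.executable, "-m", "pkg.script"], cwd=tmp).decode()

print(out)
# True
# ABCDEFGHIJKLMNOPQRSTUVWXYZ
```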
You can also add some boilerplate to `pkg/script.py` to make Python treat it as part of the `pkg` package even when run as a file:
```
if __name__ == '__main__' and __package__ is None:
__package__ = 'pkg'
```
However, this won't affect `sys.path`. You'll need some additional handling to remove the `pkg` directory from the path, and if `pkg`'s parent directory isn't on the path, you'll need to stick that on the path too. |
Spark union of multiple RDDs | 33,743,978 | 8 | 2015-11-16T20:25:01Z | 33,744,540 | 15 | 2015-11-16T21:00:31Z | [
"python",
"apache-spark",
"pyspark"
] | In my pig code I do this:
```
all_combined = Union relation1, relation2,
relation3, relation4, relation5, relation 6.
```
I want to do the same with spark. However, unfortunately, I see that I have to keep doing it pairwise:
```
first = rdd1.union(rdd2)
second = first.union(rdd3)
third = second.union(rdd4)
# .... and so on
```
Is there a union operator that will let me operate on multiple rdds at a time:
e.g. `union(rdd1, rdd2,rdd3, rdd4, rdd5, rdd6)`
It is a matter on convenience. | If these are RDDs you can use `SparkContext.union` method:
```
rdd1 = sc.parallelize([1, 2, 3])
rdd2 = sc.parallelize([4, 5, 6])
rdd3 = sc.parallelize([7, 8, 9])
rdd = sc.union([rdd1, rdd2, rdd3])
rdd.collect()
## [1, 2, 3, 4, 5, 6, 7, 8, 9]
```
There is no `DataFrame` equivalent but it is just a matter of a simple one-liner:
```
from functools import reduce # For Python 3.x
from pyspark.sql import DataFrame
def unionAll(*dfs):
return reduce(DataFrame.unionAll, dfs)
df1 = sqlContext.createDataFrame([(1, "foo1"), (2, "bar1")], ("k", "v"))
df2 = sqlContext.createDataFrame([(3, "foo2"), (4, "bar2")], ("k", "v"))
df3 = sqlContext.createDataFrame([(5, "foo3"), (6, "bar3")], ("k", "v"))
unionAll(df1, df2, df3).show()
## +---+----+
## | k| v|
## +---+----+
## | 1|foo1|
## | 2|bar1|
## | 3|foo2|
## | 4|bar2|
## | 5|foo3|
## | 6|bar3|
## +---+----+
```
If number of `DataFrames` is large using `SparkContext.union` on RDDs and recreating `DataFrame` may be a better choice to avoid [issues related to growing lineages](http://stackoverflow.com/q/34461804/1560062):
```
def unionAll(*dfs):
first, *rest = dfs # Python 3.x, for 2.x you'll have to unpack manually
return first.sql_ctx.createDataFrame(
first.sql_ctx._sc.union([df.rdd for df in dfs]),
first.schema
)
``` |
Problems implementing an XOR gate with Neural Nets in Tensorflow | 33,747,596 | 7 | 2015-11-17T01:22:50Z | 33,750,395 | 12 | 2015-11-17T06:21:18Z | [
"python",
"neural-network",
"tensorflow"
] | I want to make a trivial neural network, it should just implement the XOR gate. I am using the TensorFlow library, in python.
For an XOR gate, the only data I train with is the complete truth table; that should be enough, right? Over-optimization is what I expect to happen very quickly. The problem with the code is that the *weights* and *biases* do not update. Somehow it still gives me 100% accuracy with zeros for the biases and weights.
```
x = tf.placeholder("float", [None, 2])
W = tf.Variable(tf.zeros([2,2]))
b = tf.Variable(tf.zeros([2]))
y = tf.nn.softmax(tf.matmul(x,W) + b)
y_ = tf.placeholder("float", [None,1])
print "Done init"
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.75).minimize(cross_entropy)
print "Done loading vars"
init = tf.initialize_all_variables()
print "Done: Initializing variables"
sess = tf.Session()
sess.run(init)
print "Done: Session started"
xTrain = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
yTrain = np.array([[1], [0], [0], [0]])
acc=0.0
while acc<0.85:
for i in range(500):
sess.run(train_step, feed_dict={x: xTrain, y_: yTrain})
print b.eval(sess)
print W.eval(sess)
print "Done training"
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print "Result:"
acc= sess.run(accuracy, feed_dict={x: xTrain, y_: yTrain})
print acc
B0 = b.eval(sess)[0]
B1 = b.eval(sess)[1]
W00 = W.eval(sess)[0][0]
W01 = W.eval(sess)[0][1]
W10 = W.eval(sess)[1][0]
W11 = W.eval(sess)[1][1]
for A,B in product([0,1],[0,1]):
top = W00*A + W01*A + B0
bottom = W10*B + W11*B + B1
print "A:",A," B:",B
# print "Top",top," Bottom: ", bottom
print "Sum:",top+bottom
```
I am following the tutorial from <http://tensorflow.org/tutorials/mnist/beginners/index.md#softmax_regressions>
and in the final for-loop I am printing the results form the matrix(as described in the link).
Can anybody point out my error and what I should do to fix it? | There are a few issues with your program.
The first issue is that the function you're learning isn't XOR - it's NOR. The lines:
```
xTrain = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
yTrain = np.array([[1], [0], [0], [0]])
```
...should be:
```
xTrain = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
yTrain = np.array([[0], [1], [1], [0]])
```
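As a quick sanity check, the corrected labels are exactly Python's bitwise XOR applied to each input pair:

```python
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [a ^ b for a, b in inputs]   # ^ is bitwise XOR
print(labels)   # [0, 1, 1, 0] -- matches the corrected yTrain
```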
The next big issue is that the network you've designed [isn't capable of learning XOR](https://en.wikipedia.org/wiki/Perceptrons_(book)#The_XOR_affair). You'll need to use a non-linear function (such as `tf.nn.relu()` and define at least one more layer to learn the XOR function. For example:
```
x = tf.placeholder("float", [None, 2])
W_hidden = tf.Variable(...)
b_hidden = tf.Variable(...)
hidden = tf.nn.relu(tf.matmul(x, W_hidden) + b_hidden)
W_logits = tf.Variable(...)
b_logits = tf.Variable(...)
logits = tf.matmul(hidden, W_logits) + b_logits
```
A further issue is that initializing the weights to zero will [prevent your network from training](http://stats.stackexchange.com/questions/27112/danger-of-setting-all-initial-weights-to-zero-in-backpropagation). Typically, you should initialize your weights randomly, and your biases to zero. Here's one popular way to do it:
```
HIDDEN_NODES = 2
W_hidden = tf.Variable(tf.truncated_normal([2, HIDDEN_NODES], stddev=1./math.sqrt(2)))
b_hidden = tf.Variable(tf.zeros([HIDDEN_NODES]))
W_logits = tf.Variable(tf.truncated_normal([HIDDEN_NODES, 2], stddev=1./math.sqrt(HIDDEN_NODES)))
b_logits = tf.Variable(tf.zeros([2]))
```
Putting it all together, and using TensorFlow routines for cross-entropy (with a one-hot encoding of `yTrain` for convenience), here's a program that learns XOR:
```
import math
import tensorflow as tf
import numpy as np
HIDDEN_NODES = 10
x = tf.placeholder(tf.float32, [None, 2])
W_hidden = tf.Variable(tf.truncated_normal([2, HIDDEN_NODES], stddev=1./math.sqrt(2)))
b_hidden = tf.Variable(tf.zeros([HIDDEN_NODES]))
hidden = tf.nn.relu(tf.matmul(x, W_hidden) + b_hidden)
W_logits = tf.Variable(tf.truncated_normal([HIDDEN_NODES, 2], stddev=1./math.sqrt(HIDDEN_NODES)))
b_logits = tf.Variable(tf.zeros([2]))
logits = tf.matmul(hidden, W_logits) + b_logits
y = tf.nn.softmax(logits)
y_input = tf.placeholder(tf.float32, [None, 2])
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits, y_input)
loss = tf.reduce_mean(cross_entropy)
train_op = tf.train.GradientDescentOptimizer(0.2).minimize(loss)
init_op = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init_op)
xTrain = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
yTrain = np.array([[1, 0], [0, 1], [0, 1], [1, 0]])
for i in xrange(500):
_, loss_val = sess.run([train_op, loss], feed_dict={x: xTrain, y_input: yTrain})
if i % 10 == 0:
print "Step:", i, "Current loss:", loss_val
for x_input in [[0, 0], [0, 1], [1, 0], [1, 1]]:
print x_input, sess.run(y, feed_dict={x: [x_input]})
```
Note that this is probably not the most efficient neural network for computing XOR, so suggestions for tweaking the parameters are welcome! |
Sparksql filtering (selecting with where clause) with multiple conditions | 33,747,834 | 2 | 2015-11-17T01:51:20Z | 33,754,039 | 7 | 2015-11-17T10:03:01Z | [
"python",
"sql",
"apache-spark",
"apache-spark-sql",
"pyspark"
] | Hi I have the following issue:
```
numeric.registerTempTable("numeric").
```
All the values that I want to filter on are literal null strings and not N/A or Null values.
I tried these three options:
1. `numeric_filtered = numeric.filter(numeric['LOW'] != 'null').filter(numeric['HIGH'] != 'null').filter(numeric['NORMAL'] != 'null')`
2. `numeric_filtered = numeric.filter(numeric['LOW'] != 'null' AND numeric['HIGH'] != 'null' AND numeric['NORMAL'] != 'null')`
3. `sqlContext.sql("SELECT * from numeric WHERE LOW != 'null' AND HIGH != 'null' AND NORMAL != 'null'")`
Unfortunately, numeric\_filtered is always empty. I checked and numeric has data that should be filtered based on these conditions.
Here are some sample values:
Low   High  Normal
3.5   5.0   null
2.0   14.0  null
null  38.0  null
null  null  null
1.0   null  4.0 | You are using logical conjunction (AND). It means that all columns have to be different than `'null'` for a row to be included. Let's illustrate that using the `filter` version as an example:
```
numeric = sqlContext.createDataFrame([
('3.5,', '5.0', 'null'), ('2.0', '14.0', 'null'), ('null', '38.0', 'null'),
('null', 'null', 'null'), ('1.0', 'null', '4.0')],
('low', 'high', 'normal'))
numeric_filtered_1 = numeric.where(numeric['LOW'] != 'null')
numeric_filtered_1.show()
## +----+----+------+
## | low|high|normal|
## +----+----+------+
## |3.5,| 5.0| null|
## | 2.0|14.0| null|
## | 1.0|null| 4.0|
## +----+----+------+
numeric_filtered_2 = numeric_filtered_1.where(
numeric_filtered_1['NORMAL'] != 'null')
numeric_filtered_2.show()
## +---+----+------+
## |low|high|normal|
## +---+----+------+
## |1.0|null| 4.0|
## +---+----+------+
numeric_filtered_3 = numeric_filtered_2.where(
numeric_filtered_2['HIGH'] != 'null')
numeric_filtered_3.show()
## +---+----+------+
## |low|high|normal|
## +---+----+------+
## +---+----+------+
```
All remaining methods you've tried follow exactly the same schema. What you need here is a logical disjunction (OR).
```
from pyspark.sql.functions import col
numeric_filtered = df.where(
(col('LOW') != 'null') |
(col('NORMAL') != 'null') |
(col('HIGH') != 'null'))
numeric_filtered.show()
## +----+----+------+
## | low|high|normal|
## +----+----+------+
## |3.5,| 5.0| null|
## | 2.0|14.0| null|
## |null|38.0| null|
## | 1.0|null| 4.0|
## +----+----+------+
```
or with raw SQL:
```
numeric.registerTempTable("numeric")
sqlContext.sql("""SELECT * FROM numeric
WHERE low != 'null' OR normal != 'null' OR high != 'null'"""
).show()
## +----+----+------+
## | low|high|normal|
## +----+----+------+
## |3.5,| 5.0| null|
## | 2.0|14.0| null|
## |null|38.0| null|
## | 1.0|null| 4.0|
## +----+----+------+
``` |
output of numpy.where(condition) is not an array, but a tuple of arrays: why? | 33,747,908 | 10 | 2015-11-17T01:59:31Z | 33,748,216 | 8 | 2015-11-17T02:38:59Z | [
"python",
"arrays",
"numpy"
] | I am experimenting with the `numpy.where(condition[, x, y])` function.
From the [numpy documentation,](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html) I learn that if you give just one array as input, it should return the indices where the array is non-zero (i.e. "True"):
> If only condition is given, return the tuple condition.nonzero(), the
> indices where condition is True.
But if try it, it returns me a *tuple* of two elements, where the first is the wanted list of indices, and the second is a null element:
```
>>> import numpy as np
>>> array = np.array([1,2,3,4,5,6,7,8,9])
>>> np.where(array>4)
(array([4, 5, 6, 7, 8]),) # notice the comma before the last parenthesis
```
So the question is: why? What is the purpose of this behaviour? In what situation is this useful?
Indeed, to get the wanted list of indices I have to add the indexing, as in `np.where(array>4)[0]`, which seems... "ugly".
---
ADDENDUM
I understand (from some answers) that it is actually a tuple of just one element. Still I don't understand why to give the output in this way. To illustrate how this is not ideal, consider the following error (which motivated my question in the first place):
```
>>> import numpy as np
>>> array = np.array([1,2,3,4,5,6,7,8,9])
>>> pippo = np.where(array>4)
>>> pippo + 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can only concatenate tuple (not "int") to tuple
```
so that you need to do some indexing to access the actual array of indices:
```
>>> pippo[0] + 1
array([5, 6, 7, 8, 9])
] | In Python `(1)` means just `1`. `()` can be freely added to group numbers and expressions for human readability (e.g. `(1+3)*3` vs `(1+3,)*3`). Thus to denote a one-element tuple Python uses `(1,)` (and requires you to use it as well).
Thus
```
(array([4, 5, 6, 7, 8]),)
```
is a one element tuple, that element being an array.
If you applied `where` to a 2d array, the result would be a 2 element tuple.
The result of `where` is such that it can be plugged directly into an indexing slot, e.g.
```
a[where(a>0)]
a[a>0]
```
should return the same things
as would
```
I,J = where(a>0) # a is 2d
a[I,J]
a[(I,J)]
```
Or with your example:
```
In [278]: a=np.array([1,2,3,4,5,6,7,8,9])
In [279]: np.where(a>4)
Out[279]: (array([4, 5, 6, 7, 8], dtype=int32),) # tuple
In [280]: a[np.where(a>4)]
Out[280]: array([5, 6, 7, 8, 9])
In [281]: I=np.where(a>4)
In [282]: I
Out[282]: (array([4, 5, 6, 7, 8], dtype=int32),)
In [283]: a[I]
Out[283]: array([5, 6, 7, 8, 9])
In [286]: i, = np.where(a>4) # note the , on LHS
In [287]: i
Out[287]: array([4, 5, 6, 7, 8], dtype=int32) # not tuple
In [288]: a[i]
Out[288]: array([5, 6, 7, 8, 9])
In [289]: a[(i,)]
Out[289]: array([5, 6, 7, 8, 9])
```
======================
`np.flatnonzero` shows the correct way of returning just one array, regardless of the dimensions of the input array.
```
In [299]: np.flatnonzero(a>4)
Out[299]: array([4, 5, 6, 7, 8], dtype=int32)
In [300]: np.flatnonzero(a>4)+10
Out[300]: array([14, 15, 16, 17, 18], dtype=int32)
```
It's doc says:
> This is equivalent to a.ravel().nonzero()[0]
In fact that is literally what the function does.
Flattening `a` removes the question of what to do with multiple dimensions. It then takes the response out of the tuple, giving you a plain array. With flattening, it doesn't have to make a special case for 1d arrays.
===========================
@Divakar suggests `np.argwhere`:
```
In [303]: np.argwhere(a>4)
Out[303]:
array([[4],
[5],
[6],
[7],
[8]], dtype=int32)
```
which does `np.transpose(np.where(a>4))`
Or if you don't like the column vector, you could transpose it again
```
In [307]: np.argwhere(a>4).T
Out[307]: array([[4, 5, 6, 7, 8]], dtype=int32)
```
except now it is a 1xn array.
We could just as well have wrapped `where` in `array`:
```
In [311]: np.array(np.where(a>4))
Out[311]: array([[4, 5, 6, 7, 8]], dtype=int32)
```
Lots of ways of taking an array out the `where` tuple (`[0]`, `i,=`, `transpose`, `array`, etc). |
Tensorflow: How to replace a node in a calculation graph? | 33,748,552 | 5 | 2015-11-17T03:17:24Z | 33,770,771 | 9 | 2015-11-18T02:24:34Z | [
"python",
"tensorflow"
] | If you have two disjoint graphs, and want to link them, turning this:
```
x = tf.placeholder('float')
y = f(x)
y = tf.placeholder('float')
z = f(y)
```
into this:
```
x = tf.placeholder('float')
y = f(x)
z = g(y)
```
Is there a way to do that? It seems like it could make construction easier in some cases.
For example if you have a graph that has the input image as a `tf.placeholder`, and want to optimize the input image, deep-dream style, is there a way to just replace the placeholder with a `tf.variable` node? Or do you have to think of that before building the graph? | TL;DR: If you can define the two computations as Python functions, you should do that. If you can't, there's more advanced functionality in TensorFlow to serialize and import graphs, which allows you to compose graphs from different sources.
One way to do this in TensorFlow is to build the disjoint computations as separate `tf.Graph` objects, then convert them to serialized protocol buffers using [`Graph.as_graph_def()`](http://tensorflow.org/api_docs/python/framework.html#Graph.as_graph_def):
```
with tf.Graph().as_default() as g_1:
input = tf.placeholder(tf.float32, name="input")
y = f(input)
# NOTE: using identity to get a known name for the output tensor.
output = tf.identity(y, name="output")
gdef_1 = g_1.as_graph_def()
with tf.Graph().as_default() as g_2: # NOTE: g_2 not g_1
input = tf.placeholder(tf.float32, name="input")
z = g(input)
output = tf.identity(z, name="output")
gdef_2 = g_2.as_graph_def()
```
Then you could compose `gdef_1` and `gdef_2` into a third graph, using [`tf.import_graph_def()`](http://tensorflow.org/api_docs/python/framework.html#import_graph_def):
```
with tf.Graph().as_default() as g_combined:
x = tf.placeholder(tf.float32, name="")
# Import gdef_1, which performs f(x).
# "input:0" and "output:0" are the names of tensors in gdef_1.
y, = tf.import_graph_def(gdef_1, input_map={"input:0": x},
return_elements=["output:0"])
# Import gdef_2, which performs g(y)
z, = tf.import_graph_def(gdef_2, input_map={"input:0": y},
return_elements=["output:0"])
``` |
Count occurrence of tuples with Python | 33,749,573 | 2 | 2015-11-17T05:07:43Z | 33,749,614 | 8 | 2015-11-17T05:11:12Z | [
"python",
"python-2.7",
"tuples"
] | I'm trying to convert a list of Python tuples that display product and cost to a list of tuples that display the cost and the count of products at a given cost.
For example, given the below input:
```
[('Product1', 9), ('Product2', 1),
('Product3', 1), ('Product4', 2),
('Product5', 3), ('Product6', 4),
('Product7', 5), ('Product8', 6),
('Product9', 7), ('Product10', 8),
('Product11', 3), ('Product12', 1),
('Product13', 2), ('Product14', 3),
('Product15', 4), ('Product16', 5),
('Product17', 6), ('Product18', 7)]
```
I'm trying to create a function in Python that would render the below. i.e. The value 1 is rendered 3 times for three different products, hence (1, 3).
```
[(1, 3), (2, 2), (3, 3), (4, 2), (5, 2), (6, 2), (7, 2), (8, 1), (9, 1)]
``` | Maybe [`collections.Counter`](http://docs.python.org/2/library/collections.html#collections.Counter) could solve your problem:
```
>>> from collections import Counter
>>> c = Counter(elem[1] for elem in given_list)
```
Output will look like this:
```
Counter({1: 3, 3: 3, 2: 2, 4: 2, 5: 2, 6: 2, 7: 2, 8: 1, 9: 1})
```
If you want it in a list like you've specified in the question, then you can do this:
```
>>> list(c.iteritems())
[(1, 3), (2, 2), (3, 3), (4, 2), (5, 2), (6, 2), (7, 2), (8, 1), (9, 1)]
``` |
How to install xgboost package in python (windows platform)? | 33,749,735 | 13 | 2015-11-17T05:22:41Z | 35,119,904 | 20 | 2016-01-31T21:46:54Z | [
"python",
"python-2.7",
"installation",
"machine-learning",
"xgboost"
] | <http://xgboost.readthedocs.org/en/latest/python/python_intro.html>
On the homepage of xgboost(above link), it says:
To install XGBoost, do the following steps:
1. You need to run `make` in the root directory of the project
2. In the python-package directory run
python setup.py install
However, when I did it, for step 1 the following error appear:
make : The term 'make' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the
spelling of the name, or if a path was included, verify that the path is correct and try again.
then I skip step1 and did step 2 directly, another error appear:
```
Traceback (most recent call last):
File "setup.py", line 19, in <module>
LIB_PATH = libpath['find_lib_path']()
File "xgboost/libpath.py", line 44, in find_lib_path
'List of candidates:\n' + ('\n'.join(dll_path)))
__builtin__.XGBoostLibraryNotFound: Cannot find XGBoost Libarary in the candicate path, did you install compilers and run build.sh in root path?
```
Does anyone know how to install xgboost for python on Windows10 platform? Thanks for your help! | Note that as of the most recent release the Microsoft Visual Studio instructions no longer seem to apply as this link returns a 404 error:
<https://github.com/dmlc/xgboost/tree/master/windows>
You can read more about the removal of the MSVC build from Tianqi Chen's comment [here](https://github.com/dmlc/xgboost/issues/928#issuecomment-193065994).
So here's what I did to finish a 64-bit build on Windows:
1. Download and install MinGW-64: <http://sourceforge.net/projects/mingw-w64/>
2. On the first screen of the install prompt make sure you set the Architecture to **x86\_64** and the Threads to **win32**
3. I installed to C:\mingw64 (to avoid spaces in the file path) so I added this to my PATH environment variable: C:\mingw64\mingw64\bin
4. I also noticed that the make utility that is included in bin\mingw64 is called **mingw32-make** so to simplify things I just renamed this to **make**
5. Open a Windows command prompt and type gcc. You should see something like "fatal error: no input file"
6. Next type make. You should see something like "No targets specified and no makefile found"
7. Type git. If you don't have git, install it and add it to your
PATH.
These should be all the tools you need to build the xgboost project. To get the source code run these lines:
1. cd c:\
2. git clone --recursive <https://github.com/dmlc/xgboost>
3. cd xgboost
4. git submodule init
5. git submodule update
6. cp make/mingw64.mk config.mk
7. make -j4
Note that I ran this part from a Cygwin shell. If you are using the Windows command prompt you should be able to change `cp` to `copy` and arrive at the same result. However, if the build fails on you for any reason, I would recommend trying again using Cygwin.
If the build finishes successfully, you should have a file called xgboost.exe located in the project root. To install the Python package, do the following:
1. cd python-package
2. python setup.py install
Now you should be good to go. Open up Python, and you can import the package with:
```
import xgboost as xgb
```
To test the installation, I went ahead and ran the basic\_walkthrough.py file that was included in the demo/guide-python folder of the project and didn't get any errors. |
Calculate logarithm in python | 33,754,670 | 2 | 2015-11-17T10:32:06Z | 33,754,732 | 10 | 2015-11-17T10:35:06Z | [
"python"
] | I am wondering why the result of `log base 10 (1.5)` in python = 0.405465108108 while the real answer = 0.176091259.
This is the code that I wrote:
```
import math
print math.log(1.5)
```
Can someone explain how to solve this issue? | From [the documentation](https://docs.python.org/2/library/math.html#math.log):
> With one argument, return the natural logarithm of *x* (to base *e*).
>
> With two arguments, return the logarithm of *x* to the given *base*, calculated as `log(x)/log(base)`.
But log base 10 is also available directly as `math.log10()`, which avoids the `log(x)/log(base)` division where possible.
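To make the difference between the one-argument and two-argument forms concrete, a quick sketch:

```python
import math

print(math.log(1.5))       # natural log, base e  -> ~0.4054651
print(math.log(1.5, 10))   # two-argument form, log(1.5)/log(10) -> ~0.1760913
print(math.log10(1.5))     # direct base-10 log   -> ~0.1760913
```

The first value is what the question observed; the last two match the expected 0.176091259.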
For loop syntax in Python without using range() or xrange() | 33,759,539 | 3 | 2015-11-17T14:34:14Z | 33,759,625 | 7 | 2015-11-17T14:37:33Z | [
"python",
"syntax"
] | I do not know much about Python, so I apologize if my question is a very basic one.
Let's say I have a list
```
lst = [1,2,3,4,5,6,7,8,9,10]
```
Now what I want to know is whether there is any way to write the following piece of code in Python without using `range()` or `xrange()`:
```
for i in lst:
for j in lst after element i: '''This is the line i want the syntax for'''
#Do Something
```
The second loop is to access the elements after the element `i`, i.e., if i = 3, `j` would have to loop from 4 to 10, so the pairs of numbers printed for `i` and `j` would be (1,2)..(1,10), (2,3)...(2,10), (3,4)..(3,10) etc.
I have no idea what to search for or what query to type on any search engine.
Any help would be much appreciated. | It looks like you might want to use [enumerate()](https://docs.python.org/3/library/functions.html#enumerate):
```
for index, item in enumerate(lst):
for j in lst[index+1:]:
#Do Something
``` |
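Putting this answer together with the list from the question, a runnable check of the pairs it produces:

```python
lst = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

pairs = []
for index, i in enumerate(lst):
    for j in lst[index + 1:]:
        pairs.append((i, j))

print(pairs[:4])   # [(1, 2), (1, 3), (1, 4), (1, 5)]
print(len(pairs))  # 45 -- every unordered pair from 10 elements
```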
For loop syntax in Python without using range() or xrange() | 33,759,539 | 3 | 2015-11-17T14:34:14Z | 33,759,639 | 8 | 2015-11-17T14:38:17Z | [
"python",
"syntax"
] | I do not know much about Python, so I apologize if my question is a very basic one.
Let's say I have a list
```
lst = [1,2,3,4,5,6,7,8,9,10]
```
Now what I want to know is whether there is any way to write the following piece of code in Python without using `range()` or `xrange()`:
```
for i in lst:
for j in lst after element i: '''This is the line i want the syntax for'''
#Do Something
```
The second loop is to access the elements after the element `i`, i.e., if i = 3, `j` would have to loop from 4 to 10, so the pairs of numbers printed for `i` and `j` would be (1,2)..(1,10), (2,3)...(2,10), (3,4)..(3,10) etc.
I have no idea what to search for or what query to type on any search engine.
Any help would be much appreciated. | This is what list slicing is about; you can take the part of your list from the i-th element onwards with
```
lst[i:]
```
Furthermore, in order to have both index and value you need the `enumerate` operation, which turns the list into a list of pairs `(index, value)`.
thus
```
for ind, i in enumerate(lst):
for j in lst[ind+1: ]:
#Do Something
``` |
Tensorflow: How to restore a previously saved model (python) | 33,759,623 | 48 | 2015-11-17T14:37:26Z | 33,762,168 | 66 | 2015-11-17T16:30:03Z | [
"python",
"python-2.7",
"machine-learning",
"tensorflow"
] | I want to make a prediction with a model already saved by tensorflow, so I need to restore the model first.
The code I have fails because I can't call Saver() without the variables of the model, but that is exactly what I want to load! Do I need to create a dummy model with the same variables first and then restore my saved model and replace the dummy one? Maybe I'm approaching this incorrectly?
```
import tensorflow as tf
saver = tf.train.Saver()
with tf.Session() as sess:
ckpt = tf.train.get_checkpoint_state("my_path_to_model")
if ckpt and ckpt.model_checkpoint_path:
# Restores from checkpoint
saver.restore(sess, "my_path_to_model")
print "Model loaded"
else:
print "No checkpoint file found"
```
I already looked into the code of the cifar10 model example, where a saved model is restored, but I can't understand the logic in the code. As you can tell, I'm just starting with this.
Any help would be really appreciated | The checkpoints that are saved contain values for the `Variable`s in your model, not the model/graph itself, which means that the graph should be the same when you restore the checkpoint.
Here's an example for a linear regression where there's a training loop that saves variable checkpoints and an evaluation section that will restore variables saved in a prior run and compute predictions. Of course, you can also restore variables and continue training if you'd like.
```
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
w = tf.Variable(tf.zeros([1, 1], dtype=tf.float32))
b = tf.Variable(tf.ones([1, 1], dtype=tf.float32))
y_hat = tf.add(b, tf.matmul(x, w))
...more setup for optimization and what not...
saver = tf.train.Saver() # defaults to saving all variables - in this case w and b
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
if FLAGS.train:
for i in xrange(FLAGS.training_steps):
...training loop...
if (i + 1) % FLAGS.checkpoint_steps == 0:
saver.save(sess, FLAGS.checkpoint_dir + 'model.ckpt',
global_step=i+1)
else:
# Here's where you're restoring the variables w and b.
# Note that the graph is exactly as it was when the variables were
# saved in a prior training run.
ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
if ckpt and ckpt.model_checkpoint_path:
saver.restore(sess, ckpt.model_checkpoint_path)
else:
...no checkpoint found...
# Now you can run the model to get predictions
batch_x = ...load some data...
predictions = sess.run(y_hat, feed_dict={x: batch_x})
```
Here are the [docs](http://tensorflow.org/how_tos/variables/index.md) for `Variable`s, which cover saving and restoring. And here are the [docs](http://tensorflow.org/api_docs/python/state_ops.md#AUTOGENERATED-saving-and-restoring-variables) for the `Saver`. |
Tensorflow: How to restore a previously saved model (python) | 33,759,623 | 48 | 2015-11-17T14:37:26Z | 33,763,208 | 21 | 2015-11-17T17:22:19Z | [
"python",
"python-2.7",
"machine-learning",
"tensorflow"
] | I want to make a prediction with a model already saved by tensorflow, so I need to restore the model first.
The code I have fails because I can't call Saver() without the variables of the model, but that is exactly what I want to load! Do I need to create a dummy model with the same variables first and then restore my saved model and replace the dummy one? Maybe I'm approaching this incorrectly?
```
import tensorflow as tf
saver = tf.train.Saver()
with tf.Session() as sess:
ckpt = tf.train.get_checkpoint_state("my_path_to_model")
if ckpt and ckpt.model_checkpoint_path:
# Restores from checkpoint
saver.restore(sess, "my_path_to_model")
print "Model loaded"
else:
print "No checkpoint file found"
```
I already looked into the code of the cifar10 model example, where a saved model is restored, but I can't understand the logic in the code. As you can tell, I'm just starting with this.
Any help would be really appreciated | There are two parts to the model: the model definition, saved by `Supervisor` as `graph.pbtxt` in the model directory, and the numerical values of tensors, saved into checkpoint files like `model.ckpt-1003418`.
The model definition can be restored using `tf.import_graph_def`, and the weights are restored using `Saver`.
However, `Saver` uses a special collection holding the list of variables that is attached to the model Graph, and this collection is not initialized by `import_graph_def`, so you can't use the two together at the moment (it's on our roadmap to fix). For now, you have to use the approach of Ryan Sepassi -- manually construct a graph with identical node names, and use `Saver` to load the weights into it.
(Alternatively you could hack it by using `import_graph_def`, creating variables manually, and using `tf.add_to_collection(tf.GraphKeys.VARIABLES, variable)` for each variable, then using `Saver`)
Cannot run Google App Engine custom managed VM: --custom-entrypoint must be set error | 33,764,630 | 4 | 2015-11-17T18:42:41Z | 33,814,096 | 7 | 2015-11-19T20:48:52Z | [
"python",
"google-app-engine",
"google-app-engine-python",
"gae-module",
"google-managed-vm"
] | **PROBLEM DESCRIPTION**
I am trying to create a custom managed VM for Google App Engine that behaves identically to the standard python27 managed VM provided by Google. (I'm doing this as a first step to adding a C++ library to the runtime).
From google [documentation](https://cloud.google.com/appengine/docs/managed-vms/tutorial/step2#dockerfile), the following Dockerfile specifies the standard python27 runtime:
```
FROM gcr.io/google_appengine/python-compat
ADD . /app
```
I have verified that this is the right Dockerfile by examining the one generated by `gcloud preview app run` when using the standard python27 runtime. It is identical to this.
But when I run my application with this Dockerfile using `dev_appserver.py` or with `gcloud preview app run` I get an error saying:
```
The --custom_entrypoint flag must be set for custom runtimes
```
I am using the latest versions of gcloud (1.9.86, with app-engine-python component version 1.9.28) and the standalone python app engine SDK (1.9.28). I had the same problem with earlier versions, so I updated to the latest.
**THINGS I HAVE TRIED:**
`gcloud preview app run --help` has the following to say about `--custom-entrypoint`:
```
--custom-entrypoint CUSTOM_ENTRYPOINT
Specify an entrypoint for custom runtime modules. This is required when
such modules are present. Include "{port}" in the string (without
quotes) to pass the port number in as an argument. For instance:
--custom_entrypoint="gunicorn -b localhost:{port} mymodule:application"
```
I am not sure what to make of this. Should the docker image not already contain an ENTRYPOINT? Why am I being required to provide one in addition? Also, what should the entrypoint for the `gcr.io/google_appengine/python-compat` image be? Google provides no documentation for this.
I have tried a meaningless `--custom-entrypoint="echo"`, which silences the error, but the application does not respond to any HTTP requests.
The two other relevant stackoverflow questions I have found have not helped. The accepted answers seem to suggest that this is a bug in the SDK that was resolved. But I have tried it in two versions of the SDK, including the latest, and I still have the problem.
* [How to fix â`The --custom_entrypoint flag must be set for custom runtimes`â?](http://stackoverflow.com/questions/31280849/how-to-fix-the-custom-entrypoint-flag-must-be-set-for-custom-runtimes)
* [Google Managed VM error - custom entry point](http://stackoverflow.com/questions/33255674/google-managed-vm-error-custom-entry-point)
**STEPS TO REPRORDUCE:**
To highlight my problem, I have created a trivial application that generates the error. It consists of just three files:
`app.yaml`:
```
module: default
version: 1
runtime: custom
api_version: 1
threadsafe: true
vm: true
handlers:
- url: /.*
script: wsgi.app
```
`Dockerfile`:
```
FROM gcr.io/google_appengine/python-compat
ADD . /app
```
This `Dockerfile` is the same one that is used for the python27 runtime (and in fact literally copy-pasted from the Dockerfile generated by `gcloud preview app run` when using the python27 runtime), so this should be identical to setting `runtime: python27`.
`wsgi.py`:
```
import webapp2
class Hello(webapp2.RequestHandler):
def get(self):
self.response.write(u'Hello')
app = webapp2.WSGIApplication([('/Hello', Hello)], debug=True)
```
When I run `dev_appserver.py app.yaml` in the directory containing these three files however, I get the following error:
```
Traceback (most recent call last):
File "/home/vagrant/google-cloud-sdk/platform/google_appengine/dev_appserver.py", line 83, in <module>
_run_file(__file__, globals())
File "/home/vagrant/google-cloud-sdk/platform/google_appengine/dev_appserver.py", line 79, in _run_file
execfile(_PATHS.script_file(script_name), globals_)
File "/home/vagrant/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 1033, in <module>
main()
File "/home/vagrant/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 1026, in main
dev_server.start(options)
File "/home/vagrant/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 818, in start
self._dispatcher.start(options.api_host, apis.port, request_data)
File "/home/vagrant/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/dispatcher.py", line 194, in start
_module.start()
File "/home/vagrant/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/module.py", line 1555, in start
self._add_instance()
File "/home/vagrant/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/module.py", line 1707, in _add_instance
expect_ready_request=True)
File "/home/vagrant/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/custom_runtime.py", line 73, in new_instance
assert self._runtime_config_getter().custom_config.custom_entrypoint
File "/home/vagrant/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/module.py", line 383, in _get_runtime_config
raise ValueError('The --custom_entrypoint flag must be set for '
ValueError: The --custom_entrypoint flag must be set for custom runtimes
``` | **UPDATE**
THIS MAY NO LONGER BE ACCURATE. SEE NICK'S ANSWER.
(Though I could not get that working. But I did not try very hard)
---
There is a completely undocumented but absolutely essential piece of information w.r.t. custom managed VMs:
**THEY CANNOT BE RUN ON THE DEVELOPMENT SERVER!**
If you think this crucial fact would be mentioned anywhere sane, like say, the documentation page for custom managed VMs, or for `dev_appserver.py`, or even as an error message when running `dev_appserver.py`, then you are giving Google far too much credit.
The only place where I can find any kind of statement about this is in the Readme file of the [appengine-java-vm-guestbook-extras demo](https://github.com/GoogleCloudPlatform/appengine-java-vm-guestbook-extras) on github (seriously):
> The Cloud SDK does not support anymore running custom runtimes when a
> Dockerfile is provided. You'll have to deploy the application to App
> Engine
Google does not care to:
1. Implement this basic and important feature.
2. Document that the development server is missing such an important feature.
3. Give any reasonable error message when the user tries to perform the action.
I hope this answer saves some sorry developer from the days of torment I suffered because of this. |
Finding partial subsets python | 33,768,108 | 2 | 2015-11-17T22:16:58Z | 33,768,143 | 9 | 2015-11-17T22:19:18Z | [
"python",
"set"
] | I'm looking for a way to get the number of elements of one `set` that appear in another `set`.
Given these two sets:
```
a = 'a b c d'
b = 'a b c e f'
a = set(a.split())
b = set(b.split())
```
This prints false:
```
print a.issubset(b) # prints False
```
Is there a pythonic way to instead print "3" since three elements of `a` appear in `b`? | IIUC, you can use [`set.intersection`](https://docs.python.org/2/library/sets.html#set-objects):
```
>>> a.issubset(b)
False
>>> a.intersection(b)
{'a', 'c', 'b'}
>>> len(a.intersection(b))
3
```
which could be abbreviated `&` since both `a` and `b` are sets:
```
>>> len(a & b)
3
``` |
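For completeness, the other set operators follow the same pattern; a quick sketch using the sets from the question:

```python
a = set('a b c d'.split())
b = set('a b c e f'.split())

print(len(a & b))     # 3 -- elements of a that also appear in b
print(sorted(a & b))  # ['a', 'b', 'c']
print(sorted(a - b))  # ['d'] -- in a but not in b
print(sorted(a | b))  # union: ['a', 'b', 'c', 'd', 'e', 'f']
```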
Generate and execute R, Python, etc.., script from within bash script | 33,769,018 | 4 | 2015-11-17T23:26:00Z | 33,769,363 | 8 | 2015-11-17T23:54:31Z | [
"python",
"bash"
] | I have been trying to find a solution for this for a while but haven't found anything satisfactory yet. I write a lot of bash scripts, but sometimes I want to use R or Python as part of the script. Right now, I end up having to write two scripts; the original bash script to perform the first half of the task, and the R or Python script to perform the second half of the task. I call the R/Python script from within the bash script.
I am not satisfied with this solution because it splits my program over two files, which increases the chances of things getting out of sync, more files to track, etc. Is there a way to write a block of text that contains the entirety of my R/Python script, and then have bash spit it out into a file, pass arguments to it, and execute it? Is there an easier solution? This is more complicated than passing simple one-liners to R/Python because it usually involves creating and manipulating objects over several steps. | There are probably lots of solutions, but this one works:
```
#!/bin/bash
## do stuff
R --slave <<EOF
## R code
set.seed(101)
rnorm($1)
EOF
```
If you want the flexibility to pass additional bash arguments to R, I suggest:
```
#!/bin/bash
## do stuff
R --slave --args $@ <<EOF
## R code
set.seed(101)
args <- as.numeric(commandArgs(trailingOnly=TRUE))
do.call(rnorm,as.list(args))
EOF
```
* this allows a flexible number of arguments, but assumes they will all be numeric
* it also assumes that all parameters will be passed through from the bash script to the R sub-script
obviously you could relax these, e.g. refer to parameters positionally |
Libxml2 installation onto Mac | 33,770,087 | 3 | 2015-11-18T01:11:15Z | 33,770,588 | 7 | 2015-11-18T02:05:39Z | [
"python",
"osx",
"libxml2"
] | I'm trying to install "libxml2" and "libxslt" in order to use scrapy (web scraping with python) on a mac.
I have homebrew and I ran
`$ brew install libxml2 libxslt`
I get this message
`OS X already provides this software and installing another version in parallel can cause all kinds of trouble.`
When I try to instal scrapy, using
`$ pip install scrapy`
I get this error:
```
Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?
Perhaps try: xcode-select --install
```
When I try
`$ xcode-select libxml2 install`
I get an invalid argument error.
Any suggestions? | You should run `$ xcode-select --install`. This will install the XCode command line tools which include libxml2. |
How to annotate Count with a condition in a Django queryset | 33,775,011 | 10 | 2015-11-18T08:28:41Z | 33,777,815 | 18 | 2015-11-18T10:39:15Z | [
"python",
"django",
"django-queryset"
] | Using Django ORM, can one do something like `queryset.objects.annotate(Count('queryset_objects', gte=VALUE))`. Catch my drift?
---
Here's a quick example to use for illustrating a possible answer:
In a Django website, content creators submit articles, and regular users view (i.e. read) the said articles. Articles can either be published (i.e. available for all to read), or in draft mode. The models depicting these requirements are:
```
class Article(models.Model):
author = models.ForeignKey(User)
published = models.BooleanField(default=False)
class Readership(models.Model):
reader = models.ForeignKey(User)
which_article = models.ForeignKey(Article)
what_time = models.DateTimeField(auto_now_add=True)
```
**My question is:** How can I get all published articles, sorted by unique readership from the last 30 mins? I.e. I want to count how many distinct (unique) views each published article got in the last half an hour, and then produce a list of articles sorted by these distinct views. Please help me formulate the query, I can't wrap my head around how to do this.
---
I tried the following:
```
date = datetime.now()-timedelta(minutes=30)
articles = Article.objects.filter(published=True).extra(select = {
"views" : """
SELECT COUNT(*)
FROM myapp_readership
JOIN myapp_article on myapp_readership.which_article_id = myapp_article.id
WHERE myapp_readership.reader_id = myapp_user.id
AND myapp_readership.what_time > %s """ % date,
}).order_by("-views")
```
This raised the error: **syntax error at or near "01"** (where "01" came from the datetime object inside `extra`). It's not much to go on. | ### For django >= 1.8
Use [Conditional Aggregation](https://docs.djangoproject.com/en/1.8/ref/models/conditional-expressions/#conditional-aggregation):
```
from django.db.models import Count, Case, When, IntegerField
Article.objects.annotate(
numviews=Count(Case(
        When(readership__what_time__lt=threshold, then=1),
        output_field=IntegerField(),
))
)
```
**Explanation:**
A normal query over your articles will be annotated with a `numviews` field. That field is constructed as a CASE/WHEN expression, wrapped in Count, that returns 1 for readership rows matching the criteria and `NULL` for rows that don't. Count ignores nulls and counts only values.
You will get zeros on articles that haven't been viewed recently and you can use that `numviews` field for sorting and filtering.
Query behind this for PostgreSQL will be:
```
SELECT
"app_article"."id",
"app_article"."author",
"app_article"."published",
COUNT(
CASE WHEN "app_readership"."what_time" < 2015-11-18 11:04:00.000000+01:00 THEN 1
ELSE NULL END
) as "numviews"
FROM "app_article" LEFT OUTER JOIN "app_readership"
ON ("app_article"."id" = "app_readership"."which_article_id")
GROUP BY "app_article"."id", "app_article"."author", "app_article"."published"
```
If we want to count only unique views, we can add `distinct=True` to `Count`, and make our `When` clause return the value we want to be distinct on.
```
from django.db.models import Count, Case, When, CharField, F
Article.objects.annotate(
numviews=Count(Case(
        When(readership__what_time__lt=threshold, then=F('readership__reader')), # it can be also `readership__reader_id`, it doesn't matter
output_field=CharField(),
), distinct=True)
)
```
That will produce:
```
SELECT
"app_article"."id",
"app_article"."author",
"app_article"."published",
COUNT(
DISTINCT CASE WHEN "app_readership"."what_time" < 2015-11-18 11:04:00.000000+01:00 THEN "app_readership"."reader_id"
ELSE NULL END
) as "numviews"
FROM "app_article" LEFT OUTER JOIN "app_readership"
ON ("app_article"."id" = "app_readership"."which_article_id")
GROUP BY "app_article"."id", "app_article"."author", "app_article"."published"
```
### For django < 1.8 and PostgreSQL
You can just use `raw` to execute the SQL statement created by newer versions of django. Apparently there is no simple and optimized method for querying that data without using `raw` (even with `extra` there are some problems with injecting the required `JOIN` clause).
```
Articles.objects.raw('SELECT'
' "app_article"."id",'
' "app_article"."author",'
' "app_article"."published",'
' COUNT('
' DISTINCT CASE WHEN "app_readership"."what_time" < 2015-11-18 11:04:00.000000+01:00 THEN "app_readership"."reader_id"'
' ELSE NULL END'
' ) as "numviews"'
'FROM "app_article" LEFT OUTER JOIN "app_readership"'
' ON ("app_article"."id" = "app_readership"."which_article_id")'
'GROUP BY "app_article"."id", "app_article"."author", "app_article"."published"')
``` |
Building custom Caffe layer in python | 33,778,225 | 9 | 2015-11-18T10:57:44Z | 33,797,142 | 8 | 2015-11-19T06:56:26Z | [
"python",
"deep-learning",
"caffe",
"pycaffe"
] | After parsing many links regarding building Caffe layers in Python, I still have difficulties understanding a few concepts. Can someone please clarify them?
* Blobs and weights python structure for network is explained here: [Finding gradient of a Caffe conv-filter with regards to input](http://stackoverflow.com/questions/31324739/finding-gradient-of-a-caffe-conv-filter-with-regards-to-input).
* Network and Solver structure is explained here: [Cheat sheet for caffe / pycaffe?](http://stackoverflow.com/questions/32379878/cheat-sheet-for-caffe-pycaffe).
* Example of defining python layer is here: [pyloss.py on git](https://github.com/BVLC/caffe/blob/master/examples/pycaffe/layers/pyloss.py).
* Layer tests here: [test layer on git](https://github.com/BVLC/caffe/blob/master/python/caffe/test/test_python_layer_with_param_str.py).
* Development of new layers for C++ is described here: [git wiki](https://github.com/BVLC/caffe/wiki/Development).
What I am still missing is:
1. `setup()` method: what should I do here? Why in the example should I compare the length of the 'bottom' param with '2'? Why should it be 2? It doesn't seem to be a batch size, because that's arbitrary? And bottom, as I understand, is a blob, and then the first dimension is the batch size?
2. `reshape()` method: as I understand it, the 'bottom' input param is the blob of the layer below, and the 'top' param is the blob of the layer above, and I need to reshape the top layer according to the output shape of my calculations in the forward pass. But why do I need to do this every forward pass if these shapes do not change from pass to pass, and only the weights change?
3. `reshape` and `forward` methods use index 0 on the 'top' input param. Why would I need to use `top[0].data=...` or `top[0].input=...` instead of `top.data=...` and `top.input=...`? What's this index about? If we do not use the other parts of this top list, why is it exposed in this way? I suspect it's a C++ backbone artifact, but it would be good to know exactly.
4. `reshape()` method, line with:
```
if bottom[0].count != bottom[1].count
```
what do I do here? Why is its dimension 2 again? And what am I counting here? Why should both parts of the blobs (0 and 1) be equal in the amount of some members (`count`)?
5. `forward()` method, what I define by this line:
```
self.diff[...] = bottom[0].data - bottom[1].data
```
Where is it used after the forward pass if I define it? Can we just use
```
diff = bottom[0].data - bottom[1].data
```
instead, to compute the loss later in this method without assigning to `self`, or is it done with some purpose?
6. `backward()` method: what's this about: `for i in range(2):`? Why is the range 2 again?
7. `backward()` method, `propagate_down` parameter: why is it defined? I mean, if it's True, the gradient should be assigned to `bottom[X].diff` as I see, but why would someone call a method that does nothing when `propagate_down = False`, if it just does nothing and still cycles inside?
I'm sorry if these questions are too obvious, I just wasn't able to find a good guide to understand them and am asking for help here. | You asked a lot of questions here; I'll give you some highlights and pointers that I hope will clarify matters for you. I will not explicitly answer all your questions.
It seems like you are most confused about the difference between a blob and a layer's input/output. Indeed, most layers have a *single* blob as input and a *single* blob as output, but it is not always the case. Consider a loss layer: it has *two* inputs: predictions and ground truth labels. So, in this case `bottom` is a vector of length **2**(!) with `bottom[0]` being a (4-D) blob representing predictions, while `bottom[1]` is another blob with the labels. Thus, when constructing such a layer you must ascertain that you have exactly (hard-coded) 2 input blobs (see e.g., `ExactNumBottomBlobs()` in the [`AccuracyLayer`](https://github.com/BVLC/caffe/blob/master/include/caffe/loss_layers.hpp#L40) definition).
The same goes for `top` blobs as well: indeed in most cases there is a single `top` for each layer, but it's not always the case (see e.g., [`AccuracyLayer`](https://github.com/BVLC/caffe/blob/master/include/caffe/loss_layers.hpp#L40)). Therefore, `top` is also a *vector* of 4-D blobs, one for each `top` of the layer. Most of the time there would be a single element in that vector, but sometimes you might find more than one.
I believe this covers your questions 1,3,4 and 6.
As for `reshape()` (Q.2): this function is not called every forward pass; it is called only when the net is set up, to allocate space for inputs/outputs and params.
Occasionally, you might want to change the input size for your net (e.g., for detection nets); then you need to call `reshape()` for all layers of the net to accommodate the new input size.
As for `propagate_down` parameter (Q.7): since a layer may have more than one `bottom` you would need, in principle, to pass the gradient to *all* `bottom`s during backprop. However, what is the meaning of a gradient to the `label` bottom of a loss layer? There are cases when you do not want to propagate to *all* `bottom`s: this is what this flag is for. (here's an [example](http://stackoverflow.com/a/33349475/1714410) with a loss layer with three `bottom`s that expect gradient to all of them). |
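To tie several of these points together, here is a runnable sketch of the Euclidean-loss layer structure being discussed, modeled on the pyloss.py example linked in the question. The `Blob` class below is a tiny stand-in I've added so the sketch is self-contained; an actual layer would subclass `caffe.Layer` and receive real blobs from the framework, and the exact scaling is illustrative:

```python
import numpy as np

# Tiny stand-in for caffe's blobs, just enough to run the skeleton below.
class Blob:
    def __init__(self, shape):
        self.data = np.zeros(shape, dtype=np.float32)
        self.diff = np.zeros(shape, dtype=np.float32)

    @property
    def count(self):
        return self.data.size

    def reshape(self, *shape):
        self.data = np.zeros(shape, dtype=np.float32)
        self.diff = np.zeros(shape, dtype=np.float32)

class EuclideanLossLayer:  # a real layer would subclass caffe.Layer
    def setup(self, bottom, top):
        # Q.1: two bottoms expected -- predictions and ground-truth labels
        if len(bottom) != 2:
            raise Exception("Need two inputs to compute distance.")

    def reshape(self, bottom, top):
        # Q.4: both bottoms must hold the same number of elements
        if bottom[0].count != bottom[1].count:
            raise Exception("Inputs must have the same dimension.")
        self.diff = np.zeros_like(bottom[0].data)
        top[0].reshape(1)  # the loss output is a scalar

    def forward(self, bottom, top):
        # Q.5: diff is kept on self so backward() can reuse it
        self.diff[...] = bottom[0].data - bottom[1].data
        top[0].data[...] = np.sum(self.diff ** 2) / bottom[0].count / 2.0

    def backward(self, top, propagate_down, bottom):
        # Q.6/Q.7: one iteration per bottom; skip bottoms that do not
        # want a gradient (e.g., the labels)
        for i in range(2):
            if not propagate_down[i]:
                continue
            sign = 1 if i == 0 else -1
            bottom[i].diff[...] = sign * self.diff / bottom[i].count
```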
Spark Dataframe distinguish columns with duplicated name | 33,778,664 | 5 | 2015-11-18T11:16:51Z | 33,779,190 | 11 | 2015-11-18T11:44:47Z | [
"python",
"apache-spark",
"apache-spark-sql",
"pyspark",
"spark-dataframe"
] | So as I know in Spark Dataframe, that for multiple columns can have the same name as shown in below dataframe snapshot:
```
[
Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0})),
Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=125231, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0047, 3: 0.0, 4: 0.0043})),
Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=145831, f=SparseVector(5, {0: 0.0, 1: 0.2356, 2: 0.0036, 3: 0.0, 4: 0.4132})),
Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=147031, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0})),
Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=149231, f=SparseVector(5, {0: 0.0, 1: 0.0032, 2: 0.2451, 3: 0.0, 4: 0.0042}))
]
```
The result above is created by joining a dataframe with itself; you can see there are `4` columns, two each named `a` and `f`.
The problem is that when I try to do more calculations with the `a` column, I can't find a way to select `a`; I have tried `df[0]` and `df.select('a')`, and both returned the error message below:
```
AnalysisException: Reference 'a' is ambiguous, could be: a#1333L, a#1335L.
```
**Is there any way in the Spark API to distinguish the columns with duplicated names again? Or maybe some way to let me change the column names?** | Let's start with some data:
```
from pyspark.mllib.linalg import SparseVector
from pyspark.sql import Row
df1 = sqlContext.createDataFrame([
Row(a=107831, f=SparseVector(
5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0})),
Row(a=125231, f=SparseVector(
5, {0: 0.0, 1: 0.0, 2: 0.0047, 3: 0.0, 4: 0.0043})),
])
df2 = sqlContext.createDataFrame([
Row(a=107831, f=SparseVector(
5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0})),
Row(a=107831, f=SparseVector(
5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0})),
])
```
There are a few ways you can approach this problem. First of all you can unambiguously reference child table columns using parent columns:
```
df1.join(df2, df1['a'] == df2['a']).select(df1['f']).show(2)
## +--------------------+
## | f|
## +--------------------+
## |(5,[0,1,2,3,4],[0...|
## |(5,[0,1,2,3,4],[0...|
## +--------------------+
```
You can also use table aliases:
```
from pyspark.sql.functions import col
df1_a = df1.alias("df1_a")
df2_a = df2.alias("df2_a")
df1_a.join(df2_a, col('df1_a.a') == col('df2_a.a')).select('df1_a.f').show(2)
## +--------------------+
## | f|
## +--------------------+
## |(5,[0,1,2,3,4],[0...|
## |(5,[0,1,2,3,4],[0...|
## +--------------------+
```
Finally you can programmatically rename columns:
```
df1_r = df1.select(*(col(x).alias(x + '_df1') for x in df1.columns))
df2_r = df2.select(*(col(x).alias(x + '_df2') for x in df2.columns))
df1_r.join(df2_r, col('a_df1') == col('a_df2')).select(col('f_df1')).show(2)
## +--------------------+
## | f_df1|
## +--------------------+
## |(5,[0,1,2,3,4],[0...|
## |(5,[0,1,2,3,4],[0...|
## +--------------------+
``` |
Is there a way to gain access to the class of a method when all you have is a callable | 33,782,461 | 9 | 2015-11-18T14:19:58Z | 33,782,524 | 14 | 2015-11-18T14:23:16Z | [
"python",
"methods",
"metaprogramming"
] | I have code that is like:
```
class Foo:
def foo(self):
pass
class Bar:
def foo(self):
pass
f = random.choice((Foo().foo, Bar().foo))
```
How do I access `Bar` or `Foo` from f?
`f.__dict__` is of little to no help, but as `repr(f)` gives `<bound method Bar.foo of <__main__.Bar object at 0x10c6eec18>>` it must be possible, but how? | Each bound method has the `__self__` attribute, which is the
> instance to which this method is bound, or `None`
(copied from [here](https://docs.python.org/3/library/inspect.html#types-and-members))
More about bound methods (from [*Data Model*](https://docs.python.org/3/reference/datamodel.html)):
> If you access a method (a function defined in a class namespace)
> through an instance, you get a special object: a bound method (also
> called instance method) object. ... Bound methods have two special
> read-only attributes: `m.__self__` is the object on which the method
> operates...
So `f.__self__` will get you the class instance:
```
print(f.__self__) # <__main__.Foo object at 0x7f766efeee48>
```
And `type(f.__self__)` or `f.__self__.__class__` will get you the type object:
```
print(type(f.__self__)) # <class '__main__.Foo'>
```
You'd only use `__class__` for [old-style classes](https://docs.python.org/2/reference/datamodel.html#new-style-and-classic-classes). |
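Putting the pieces above together — a minimal, runnable sketch using the `Foo`/`Bar` setup from the question:

```python
import random

class Foo:
    def foo(self):
        pass

class Bar:
    def foo(self):
        pass

# A bound method picked at random, as in the question:
f = random.choice((Foo().foo, Bar().foo))

# __self__ is the instance the method is bound to;
# type() (or .__class__) recovers that instance's class.
owner = type(f.__self__)
print(owner.__name__)  # prints either 'Foo' or 'Bar'
assert owner in (Foo, Bar)
```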
How can I visualize the weights(variables) in cnn in Tensorflow? | 33,783,672 | 17 | 2015-11-18T15:13:38Z | 33,794,463 | 12 | 2015-11-19T02:55:47Z | [
"python",
"tensorflow"
] | After training the CNN model, I want to visualize or print out the weights. What can I do?
I cannot even print out the variables after training.
Thank you! | To visualize the weights, you can use a [`tf.image_summary()`](http://tensorflow.org/api_docs/python/train.html#image_summary) op to transform a convolutional filter (or a slice of a filter) into a summary proto, write them to a log using a [`tf.train.SummaryWriter`](http://tensorflow.org/api_docs/python/train.html#SummaryWriter), and visualize the log using [TensorBoard](http://tensorflow.org/how_tos/summaries_and_tensorboard/index.html).
Let's say you have the following (simplified) program:
```
filter = tf.Variable(tf.truncated_normal([8, 8, 3]))
images = tf.placeholder(tf.float32, shape=[None, 28, 28])
conv = tf.nn.conv2d(images, filter, strides=[1, 1, 1, 1], padding="SAME")
# More ops...
loss = ...
optimizer = tf.GradientDescentOptimizer(0.01)
train_op = optimizer.minimize(loss)
filter_summary = tf.image_summary('filter', filter)
sess = tf.Session()
summary_writer = tf.train.SummaryWriter('/tmp/logs', sess.graph_def)
for i in range(10000):
sess.run(train_op)
if i % 10 == 0:
# Log a summary every 10 steps.
  summary_writer.add_summary(sess.run(filter_summary), i)
```
After doing this, you can start TensorBoard to visualize the logs in `/tmp/logs`, and you will be able to see a visualization of the filter.
Note that this trick visualizes depth-3 filters as RGB images (to match the channels of the input image). If you have deeper filters, or they don't make sense to interpret as color channels, you can use the [`tf.split()`](http://tensorflow.org/api_docs/python/array_ops.html#split) op to split the filter on the depth dimension, and generate one image summary per depth. |
How can I visualize the weights(variables) in cnn in Tensorflow? | 33,783,672 | 17 | 2015-11-18T15:13:38Z | 35,858,862 | 9 | 2016-03-08T04:17:57Z | [
"python",
"tensorflow"
] | After training the CNN model, I want to visualize or print out the weights. What can I do?
I cannot even print out the variables after training.
Thank you! | Like @mrry said, you can use `tf.image_summary`. For example, for `cifar10_train.py`, you can put this code somewhere under `def train()`. Note how you access a var under scope 'conv1'
```
# Visualize conv1 features
with tf.variable_scope('conv1') as scope_conv:
weights = tf.get_variable('weights')
# scale weights to [0 255] and convert to uint8 (maybe change scaling?)
x_min = tf.reduce_min(weights)
x_max = tf.reduce_max(weights)
weights_0_to_1 = (weights - x_min) / (x_max - x_min)
weights_0_to_255_uint8 = tf.image.convert_image_dtype (weights_0_to_1, dtype=tf.uint8)
# to tf.image_summary format [batch_size, height, width, channels]
weights_transposed = tf.transpose (weights_0_to_255_uint8, [3, 0, 1, 2])
# this will display random 3 filters from the 64 in conv1
tf.image_summary('conv1/filters', weights_transposed, max_images=3)
```
If you want to visualize all your `conv1` filters in one nice grid, you would have to organize them into a grid yourself. I did that today, so now I'd like to share a [gist for visualizing conv1 as a grid](https://gist.github.com/kukuruza/03731dc494603ceab0c5) |
How is `min` of two integers just as fast as 'bit hacking'? | 33,784,519 | 38 | 2015-11-18T15:50:36Z | 33,784,710 | 33 | 2015-11-18T16:00:20Z | [
"python"
] | I was watching a [lecture series](http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-172-performance-engineering-of-software-systems-fall-2010/video-lectures/lecture-2-bit-hacks/) on 'Bit Hacking' and came across the following optimization for finding the minimum of two integers:
```
return x ^ ((y ^ x) & -(x > y))
```
Which is said to be faster than:
```
if x < y:
return x
else:
return y
```
Since the `min` function can handle more than just two integers (floats, strings, lists, and even custom objects) I assumed that calling `min(x, y)` would take longer than the optimized bit hack above. To my surprise, they were nearly identical:
```
>>> python -m timeit "min(4, 5)"
1000000 loops, best of 3: 0.203 usec per loop
>>> python -m timeit "4 ^ ((5 ^ 4) & -(4 > 5))"
10000000 loops, best of 3: 0.19 usec per loop
```
This is true even for numbers greater than `255` (pre allocated python integer objects)
```
>>> python -m timeit "min(15456, 54657)"
10000000 loops, best of 3: 0.191 usec per loop
python -m timeit "15456 ^ ((54657 ^ 15456) & -(54657 > 15456))"
10000000 loops, best of 3: 0.18 usec per loop
```
How is it that a function as versatile as `min` can still be so fast and optimized?
Note: I ran the above code using Python 3.5. I'm assuming that this is the same for Python 2.7+ but haven't tested
---
I've created the following c module:
```
#include <Python.h>
static PyObject * my_min(PyObject *self, PyObject *args){
long x;
long y;
if (!PyArg_ParseTuple(args, "ll", &x, &y))
return NULL;
return PyLong_FromLong(x ^ ((y ^ x) & -(x > y)));
}
static PyMethodDef MyMinMethods[] =
{
{ "my_min", my_min, METH_VARARGS, "bit hack min"
},
{NULL, NULL, 0, NULL}
};
PyMODINIT_FUNC
initmymin(void)
{
PyObject *m;
m = Py_InitModule("mymin", MyMinMethods);
if (m == NULL)
return;
}
```
Compiled it, and installed it onto my system (an ubuntu VM machine). I then ran the following:
```
>>> python -m timeit 'min(4, 5)'
10000000 loops, best of 3: 0.11 usec per loop
>>> python -m timeit -s 'import mymin' 'mymin.my_min(4,5)'
10000000 loops, best of 3: 0.129 usec per loop
```
While I understand that this is a VM machine, shouldn't there still be a greater gap in execution time with the 'bit hacking' being offloaded into native c? | This is likely due to how the `min` function is implemented in python.
Many python builtins are actually implemented in low level languages such as C or assembly and use the python apis in order to be callable in python.
Your bit fiddling technique is likely very fast in C but in python the interpretation overhead of the statement will far exceed the overhead of calling even a complex function implemented in a low level language.
If you really want a fair test, compare a C program (or a C Python extension) implementing that technique to your Python call of `min` and see how it compares; I expect that will explain the result you see.
**EDIT:**
Thanks to @Two-BitAlchemist I can now give some more details onto additional reasons this bit twiddling will not work well in python. It appears that integers are not stored in the obvious way but are actually a fairly complex expanding object designed to store potentially very large numbers.
Some details on this can be found [here](http://www.laurentluce.com/posts/python-integer-objects-implementation/) (thanks to Two-BitAlchemist), though it appears this has changed somewhat in newer Python versions. Still, the point remains that we are most certainly not manipulating a simple set of bits when we touch an integer in Python, but a complex object where the bit manipulations are in fact virtual method calls with enormous overhead (compared to what they do). |
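To make the "complex object" point concrete — a small sketch using the stdlib `sys.getsizeof` (the exact byte counts are illustrative and vary between CPython builds and versions):

```python
import sys

# A Python int is a full heap object (header, refcount, type pointer,
# variable-length digit array), not a bare machine word.
small = sys.getsizeof(1)
huge = sys.getsizeof(10 ** 100)

print(small, huge)   # e.g. 28 and 72 on a 64-bit CPython 3 build
assert huge > small  # bigger integers really do occupy more memory
```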
How is `min` of two integers just as fast as 'bit hacking'? | 33,784,519 | 38 | 2015-11-18T15:50:36Z | 33,785,191 | 22 | 2015-11-18T16:22:54Z | [
"python"
] | I was watching a [lecture series](http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-172-performance-engineering-of-software-systems-fall-2010/video-lectures/lecture-2-bit-hacks/) on 'Bit Hacking' and came across the following optimization for finding the minimum of two integers:
```
return x ^ ((y ^ x) & -(x > y))
```
Which is said to be faster than:
```
if x < y:
return x
else:
return y
```
Since the `min` function can handle more than just two integers (floats, strings, lists, and even custom objects) I assumed that calling `min(x, y)` would take longer than the optimized bit hack above. To my surprise, they were nearly identical:
```
>>> python -m timeit "min(4, 5)"
1000000 loops, best of 3: 0.203 usec per loop
>>> python -m timeit "4 ^ ((5 ^ 4) & -(4 > 5))"
10000000 loops, best of 3: 0.19 usec per loop
```
This is true even for numbers greater than `255` (pre allocated python integer objects)
```
>>> python -m timeit "min(15456, 54657)"
10000000 loops, best of 3: 0.191 usec per loop
python -m timeit "15456 ^ ((54657 ^ 15456) & -(54657 > 15456))"
10000000 loops, best of 3: 0.18 usec per loop
```
How is it that a function as versatile as `min` can still be so fast and optimized?
Note: I ran the above code using Python 3.5. I'm assuming that this is the same for Python 2.7+ but haven't tested
---
I've created the following c module:
```
#include <Python.h>
static PyObject * my_min(PyObject *self, PyObject *args){
long x;
long y;
if (!PyArg_ParseTuple(args, "ll", &x, &y))
return NULL;
return PyLong_FromLong(x ^ ((y ^ x) & -(x > y)));
}
static PyMethodDef MyMinMethods[] =
{
{ "my_min", my_min, METH_VARARGS, "bit hack min"
},
{NULL, NULL, 0, NULL}
};
PyMODINIT_FUNC
initmymin(void)
{
PyObject *m;
m = Py_InitModule("mymin", MyMinMethods);
if (m == NULL)
return;
}
```
Compiled it, and installed it onto my system (an ubuntu VM machine). I then ran the following:
```
>>> python -m timeit 'min(4, 5)'
10000000 loops, best of 3: 0.11 usec per loop
>>> python -m timeit -s 'import mymin' 'mymin.my_min(4,5)'
10000000 loops, best of 3: 0.129 usec per loop
```
While I understand that this is a VM machine, shouldn't there still be a greater gap in execution time with the 'bit hacking' being offloaded into native c? | Well, the bit hacking trick might have been faster in the 90s, but it is slower on current machines by a factor of two. Compare for yourself:
```
// gcc -Wall -Wextra -std=c11 ./min.c -D_POSIX_SOURCE -Os
// ./a.out 42
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define COUNT (1 << 28)
static int array[COUNT];
int main(int argc, char **argv) {
(void) argc;
unsigned seed = atoi(argv[1]);
for (unsigned i = 0; i < COUNT; ++i) {
array[i] = rand_r(&seed);
}
clock_t begin = clock();
int x = array[0];
for (unsigned i = 1; i < COUNT; ++i) {
int y = array[i];
#if 1
x = x ^ ((y ^ x) & -(x > y));
# else
if (y < x) {
x = y;
}
#endif
}
clock_t end = clock();
double time_spent = (double)(end - begin) / CLOCKS_PER_SEC;
printf("Minimum: %d (%.3f seconds)\n", x, time_spent);
return 0;
}
```
On average: 0.277 seconds for the "naïve" implementation, but 0.442 seconds for the "optimized" implementation. Always keep a grain of doubt about CS classes. At least since the [CMOVxx](http://www.rcollins.org/p6/opcodes/CMOV.html) instruction (added with the Pentium Pro in 1995), there is no chance that the bit-hacking solution could have been faster.
On an i5-750 (gcc (Debian 5.2.1-23) 5.2.1 20151028):
```
optimized naïve
O0 1.367 0.781
O1 0.530 0.274
O2 0.444 0.271
O3 0.442 0.144
Os 0.446 0.273
```
*Afterthought:* Compiler developers are very smart people, who spend their working days finding and implementing optimizations. If the bit hacking trick were faster, then your compiler would implement `min()` this way. And you can safely assume that the compiler understands what you are doing inside the loop. But the people working for Intel, AMD and so on are smart, too, so they will optimize important operations such as `min()` and `max()` if they see that the compiler hackers do weird hacks because the obvious solution is slow.
For the extra-curious: This is the generated code for the "optimized" implementation with -O3:
```
mov $0x40600b00, %ebp # int *e = &array[COUNT];
mov 0x600b00, %ebx # int x = array[0];
mov $0x600b04, %edx # int *i = &array[1];
loop:
mov (%rdx), %eax # int y = *i;
xor %ecx, %ecx # int tmp = (
cmp %ebx, %eax # y < x
setl %cl # ? 1 : 0 );
xor %ebx, %eax # y ^= x;
add $0x4, %rdx # ++i;
neg %ecx # tmp = -tmp;
and %ecx, %eax # y &= tmp;
xor %eax, %ebx # x ^= y;
cmp %rdx, %rbp # if (i != e) {
jne loop # goto loop; }
```
And the naïve implementation with -Os (-O3 is huge and full of SSE instructions I would have to look up):
```
mov 600ac0, %ebx # int x = array[0];
mov $0x40600abc,%ecx # int *e = &array[COUNT];
mov $0x600ac0,%eax # int *i = &array[0];
loop:
mov 0x4(%rax),%edx # int y = *(i + 1);
cmp %edx,%ebx # if (x > y) {
cmovg %edx,%ebx # x = y; }
add $0x4,%rax # ++i;
cmp %rcx,%rax # if (i != e) {
jne loop # goto loop; }
``` |
How is `min` of two integers just as fast as 'bit hacking'? | 33,784,519 | 38 | 2015-11-18T15:50:36Z | 33,792,518 | 14 | 2015-11-18T23:28:39Z | [
"python"
] | I was watching a [lecture series](http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-172-performance-engineering-of-software-systems-fall-2010/video-lectures/lecture-2-bit-hacks/) on 'Bit Hacking' and came across the following optimization for finding the minimum of two integers:
```
return x ^ ((y ^ x) & -(x > y))
```
Which is said to be faster than:
```
if x < y:
return x
else:
return y
```
Since the `min` function can handle more than just two integers (floats, strings, lists, and even custom objects) I assumed that calling `min(x, y)` would take longer than the optimized bit hack above. To my surprise, they were nearly identical:
```
>>> python -m timeit "min(4, 5)"
1000000 loops, best of 3: 0.203 usec per loop
>>> python -m timeit "4 ^ ((5 ^ 4) & -(4 > 5))"
10000000 loops, best of 3: 0.19 usec per loop
```
This is true even for numbers greater than `255` (pre allocated python integer objects)
```
>>> python -m timeit "min(15456, 54657)"
10000000 loops, best of 3: 0.191 usec per loop
python -m timeit "15456 ^ ((54657 ^ 15456) & -(54657 > 15456))"
10000000 loops, best of 3: 0.18 usec per loop
```
How is it that a function as versatile as `min` can still be so fast and optimized?
Note: I ran the above code using Python 3.5. I'm assuming that this is the same for Python 2.7+ but haven't tested
---
I've created the following c module:
```
#include <Python.h>
static PyObject * my_min(PyObject *self, PyObject *args){
long x;
long y;
if (!PyArg_ParseTuple(args, "ll", &x, &y))
return NULL;
return PyLong_FromLong(x ^ ((y ^ x) & -(x > y)));
}
static PyMethodDef MyMinMethods[] =
{
{ "my_min", my_min, METH_VARARGS, "bit hack min"
},
{NULL, NULL, 0, NULL}
};
PyMODINIT_FUNC
initmymin(void)
{
PyObject *m;
m = Py_InitModule("mymin", MyMinMethods);
if (m == NULL)
return;
}
```
Compiled it, and installed it onto my system (an ubuntu VM machine). I then ran the following:
```
>>> python -m timeit 'min(4, 5)'
10000000 loops, best of 3: 0.11 usec per loop
>>> python -m timeit -s 'import mymin' 'mymin.my_min(4,5)'
10000000 loops, best of 3: 0.129 usec per loop
```
While I understand that this is a VM machine, shouldn't there still be a greater gap in execution time with the 'bit hacking' being offloaded into native c? | Let's do a slightly deeper dive here to find out the real reason behind this weirdness (if any).
Let's create 3 methods and look at their Python bytecode and runtimes...
```
import dis
def func1(x, y):
return min(x, y)
def func2(x, y):
if x < y:
return x
return y
def func3(x, y):
return x ^ ((y ^ x) & -(x > y))
print "*" * 80
dis.dis(func1)
print "*" * 80
dis.dis(func2)
print "*" * 80
dis.dis(func3)
```
The output from this program is...
```
*****************************************************
4 0 LOAD_GLOBAL 0 (min)
3 LOAD_FAST 0 (x)
6 LOAD_FAST 1 (y)
9 CALL_FUNCTION 2
12 RETURN_VALUE
*****************************************************
7 0 LOAD_FAST 0 (x)
3 LOAD_FAST 1 (y)
6 COMPARE_OP 0 (<)
9 POP_JUMP_IF_FALSE 16
8 12 LOAD_FAST 0 (x)
15 RETURN_VALUE
9 >> 16 LOAD_FAST 1 (y)
19 RETURN_VALUE
*****************************************************
12 0 LOAD_FAST 0 (x)
3 LOAD_FAST 1 (y)
6 LOAD_FAST 0 (x)
9 BINARY_XOR
10 LOAD_FAST 0 (x)
13 LOAD_FAST 1 (y)
16 COMPARE_OP 4 (>)
19 UNARY_NEGATIVE
20 BINARY_AND
21 BINARY_XOR
22 RETURN_VALUE
```
Here are the running times of each of these functions
```
%timeit func1(4343,434234)
1000000 loops, best of 3: 282 ns per loop
%timeit func2(23432, 3243424)
10000000 loops, best of 3: 137 ns per loop
%timeit func3(928473, 943294)
1000000 loops, best of 3: 246 ns per loop
```
func2 is the fastest because it has the least amount of work to do in the Python interpreter. How? Looking at the bytecode for func2, we see that in either case of `x > y` or `x < y`, the Python interpreter will execute 6 instructions.
func3 will execute 11 instructions (and is thus almost twice as slow as func2... in fact, it's extremely close to 137.0 \* 11 / 6 = 251 ns).
func1 has just 5 python instructions, and by the logic in the previous 2 points, we might think that func1 should probably be the fastest. However, there is a `CALL_FUNCTION` in there... and function calls have a lot of overhead in Python (because it creates a new eval frame for the function call - that's the thing that we see in the python stacktrace - a stack of eval frames).
More details : Because python is interpreted, each python bytecode instruction takes a lot longer than a single C/asm statement. In fact, you can take a look at the python interpreter source code to see that each instruction has an overhead of 30 or so C statements (this is from a very rough look at ceval.c python main interpreter loop). The `for (;;)` loop executes one python instruction per loop cycle (ignoring optimizations).
<https://github.com/python/cpython/blob/master/Python/ceval.c#L1221>
So, with so much overhead for each instruction, there is no point in comparing 2 tiny C code snippets in python. One will take 34 and the other will take 32 cpu cycles, because the python interpreter adds 30 cycles overhead for each instruction.
In OP's C module, if we loop inside the C function to do the comparison a million times, that loop will not have the python interpreter's overhead for each instruction. It will probably run 30 to 40 times faster.
Tips for python optimization...
Profile your code to find hotspots, refactor hot code into its own function (write tests for hotspot before that to make sure refactor does not break stuff), avoid function calls from the hot code (inline functions if possible), use the `dis` module on new function to find ways to reduce the number of python instructions (`if x` is faster than `if x is True`... surprised?), and lastly modify your algorithm. Finally, if python speedup is not enough, reimplement your new function in C.
ps : The explanation above is simplified to keep the answer within a reasonable size. For example, not all Python instructions take the same amount of time, and there are optimizations, so not every instruction has the same overhead... and a lot more things. Please ignore such omissions for the sake of brevity. |
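The bytecode-counting argument above can be reproduced with the stdlib `dis` module — a minimal sketch (it assumes Python 3 for `dis.get_instructions`; exact counts vary across CPython versions, but the ordering holds):

```python
import dis

def branch_min(x, y):
    if x < y:
        return x
    return y

def bit_min(x, y):
    return x ^ ((y ^ x) & -(x > y))

# Count the bytecode instructions each function compiles to.
n_branch = len(list(dis.get_instructions(branch_min)))
n_bit = len(list(dis.get_instructions(bit_min)))
print(n_branch, n_bit)

# The bit-twiddling version compiles to more interpreter steps here...
assert n_bit > n_branch
# ...even though both agree with min() on the result:
for x, y in [(1, 2), (5, -3), (7, 7), (-4, -9)]:
    assert bit_min(x, y) == branch_min(x, y) == min(x, y)
```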
Getting "Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?" when installing lxml through pip | 33,785,755 | 42 | 2015-11-18T16:49:22Z | 33,785,756 | 37 | 2015-11-18T16:49:22Z | [
"python"
] | I'm getting an error `Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?` when trying to install lxml through pip.
```
c:\users\f\appdata\local\temp\xmlXPathInitqjzysz.c(1) : fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory
*********************************************************************************
Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?
*********************************************************************************
error: command 'C:\\Users\\f\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\cl.exe' failed with exit status 2
```
I don't find any libxml2 dev packages to install via pip.
Using Python 2.7.10 on x86 in a virtualenv under Windows 10. | Install lxml from <http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml> for your python version. It's a precompiled WHL with required modules/dependencies.
The site lists several packages; when using, e.g., Win32 Python 2.7, use `lxml-3.6.1-cp27-cp27m-win32.whl`.
Just install with `pip install lxml-3.6.1-cp27-cp27m-win32.whl`. |
Getting "Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?" when installing lxml through pip | 33,785,755 | 42 | 2015-11-18T16:49:22Z | 34,433,713 | 26 | 2015-12-23T10:35:17Z | [
"python"
] | I'm getting an error `Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?` when trying to install lxml through pip.
```
c:\users\f\appdata\local\temp\xmlXPathInitqjzysz.c(1) : fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory
*********************************************************************************
Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?
*********************************************************************************
error: command 'C:\\Users\\f\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\cl.exe' failed with exit status 2
```
I don't find any libxml2 dev packages to install via pip.
Using Python 2.7.10 on x86 in a virtualenv under Windows 10. | Try to use:
`easy_install lxml`
That works for me, win10, python 2.7. |
Getting "Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?" when installing lxml through pip | 33,785,755 | 42 | 2015-11-18T16:49:22Z | 34,816,278 | 20 | 2016-01-15T17:09:32Z | [
"python"
] | I'm getting an error `Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?` when trying to install lxml through pip.
```
c:\users\f\appdata\local\temp\xmlXPathInitqjzysz.c(1) : fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory
*********************************************************************************
Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?
*********************************************************************************
error: command 'C:\\Users\\f\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\cl.exe' failed with exit status 2
```
I don't find any libxml2 dev packages to install via pip.
Using Python 2.7.10 on x86 in a virtualenv under Windows 10. | On Mac OS X El Capitan I had to run these two commands to fix this error:
```
xcode-select --install
pip install lxml
```
Which ended up installing lxml-3.5.0
When you run the xcode-select command you may have to sign a EULA (so have an X-Term handy for the UI if you're doing this on a headless machine). |
Getting "Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?" when installing lxml through pip | 33,785,755 | 42 | 2015-11-18T16:49:22Z | 35,872,362 | 60 | 2016-03-08T16:12:13Z | [
"python"
] | I'm getting an error `Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?` when trying to install lxml through pip.
```
c:\users\f\appdata\local\temp\xmlXPathInitqjzysz.c(1) : fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory
*********************************************************************************
Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?
*********************************************************************************
error: command 'C:\\Users\\f\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\cl.exe' failed with exit status 2
```
I don't find any libxml2 dev packages to install via pip.
Using Python 2.7.10 on x86 in a virtualenv under Windows 10. | I had this issue and realised that whilst I did have libxml2 installed, I didn't have the necessary development libraries required by the python package. Installing them solved the problem:
```
sudo apt-get install libxml2-dev libxslt1-dev
sudo pip install lxml
``` |
Getting "Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?" when installing lxml through pip | 33,785,755 | 42 | 2015-11-18T16:49:22Z | 37,462,166 | 7 | 2016-05-26T13:24:21Z | [
"python"
] | I'm getting an error `Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?` when trying to install lxml through pip.
```
c:\users\f\appdata\local\temp\xmlXPathInitqjzysz.c(1) : fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory
*********************************************************************************
Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?
*********************************************************************************
error: command 'C:\\Users\\f\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\cl.exe' failed with exit status 2
```
I don't find any libxml2 dev packages to install via pip.
Using Python 2.7.10 on x86 in a virtualenv under Windows 10. | In case anyone else has the same issue as this on
> Centos, try:
```
yum install python-lxml
```
> Ubuntu
```
sudo apt-get install -y python-lxml
```
worked for me. |
TensorFlow Error found in Tutorial | 33,785,936 | 11 | 2015-11-18T16:57:38Z | 33,786,141 | 14 | 2015-11-18T17:09:07Z | [
"python",
"tensorflow"
] | Dare I even ask? This is such a new technology at this point that I can't find a way to solve this seemingly simple error. The tutorial I'm going over can be found here- <http://www.tensorflow.org/tutorials/mnist/pros/index.html#deep-mnist-for-experts>
I literally copied and pasted all of the code into IPython Notebook and at the very last chunk of code I get an error.
```
# To train and evaluate it we will use code that is nearly identical to that for the simple one layer SoftMax network above.
# The differences are that: we will replace the steepest gradient descent optimizer with the more sophisticated ADAM optimizer.
cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
sess.run(tf.initialize_all_variables())
for i in range(20000):
batch = mnist.train.next_batch(50)
if i%100 == 0:
train_accuracy = accuracy.eval(feed_dict={x:batch[0], y_: batch[1], keep_prob: 1.0})
print "step %d, training accuracy %g"%(i, train_accuracy)
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
print "test accuracy %g"%accuracy.eval(feed_dict={
x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})
```
After running this code, I receive this error.
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-46-a5d1ab5c0ca8> in <module>()
15
16 print "test accuracy %g"%accuracy.eval(feed_dict={
---> 17 x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})
/root/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in eval(self, feed_dict, session)
403
404 """
--> 405 return _eval_using_default_session(self, feed_dict, self.graph, session)
406
407
/root/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in _eval_using_default_session(tensors, feed_dict, graph, session)
2712 session = get_default_session()
2713 if session is None:
-> 2714 raise ValueError("Cannot evaluate tensor using eval(): No default "
2715 "session is registered. Use 'with "
2716 "DefaultSession(sess)' or pass an explicit session to "
ValueError: Cannot evaluate tensor using eval(): No default session is registered. Use 'with DefaultSession(sess)' or pass an explicit session to eval(session=sess)
```
I thought that I may need to install or reinstall TensorFlow via conda install <https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl> but conda doesn't even know how to install it.
Does anyone have any idea of how to work around this error? | I figured it out. As you see in the value error, it says `No default session is registered. Use 'with DefaultSession(sess)' or pass an explicit session to eval(session=sess)` so the answer I came up with is to pass an explicit session to eval, just like it says. Here is where I made the changes.
```
if i%100 == 0:
train_accuracy = accuracy.eval(session=sess, feed_dict={x:batch[0], y_: batch[1], keep_prob: 1.0})
```
And
```
train_step.run(session=sess, feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
```
Now the code is working fine. |
Tensorflow: Using Adam optimizer | 33,788,989 | 22 | 2015-11-18T19:45:42Z | 33,846,659 | 27 | 2015-11-21T17:53:30Z | [
"python",
"tensorflow"
] | I am experimenting with some simple models in tensorflow, including one that looks very similar to the first [MNIST for ML Beginners example](http://www.tensorflow.org/tutorials/mnist/beginners/index.md), but with a somewhat larger dimensionality. I am able to use the gradient descent optimizer with no problems, getting good enough convergence. When I try to use the ADAM optimizer, I get errors like this:
```
tensorflow.python.framework.errors.FailedPreconditionError: Attempting to use uninitialized value Variable_21/Adam
[[Node: Adam_2/update_Variable_21/ApplyAdam = ApplyAdam[T=DT_FLOAT, use_locking=false, _device="/job:localhost/replica:0/task:0/cpu:0"](Variable_21, Variable_21/Adam, Variable_21/Adam_1, beta1_power_2, beta2_power_2, Adam_2/learning_rate, Adam_2/beta1, Adam_2/beta2, Adam_2/epsilon, gradients_11/add_10_grad/tuple/control_dependency_1)]]
```
where the specific variable that complains about being uninitialized changes depending on the run. What does this error mean? And what does it suggest is wrong? It seems to occur regardless of the learning rate I use. | The AdamOptimizer class creates additional variables, called "slots", to hold values for the "m" and "v" accumulators.
See the source here if you're curious; it's actually quite readable:
<https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/adam.py#L39> . Other optimizers, such as Momentum and Adagrad use slots too.
These variables must be initialized before you can train a model.
The normal way to initialize variables is to call `tf.initialize_all_variables()` which adds ops to initialize the variables present in the graph *when it is called*.
(Aside: despite what its name suggests, initialize\_all\_variables() does not initialize anything itself; it only adds ops that will initialize the variables when run.)
What you must do is call initialize\_all\_variables() *after* you have added the optimizer:
```
...build your model...
# Add the optimizer
train_op = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
# Add the ops to initialize variables. These will include
# the optimizer slots added by AdamOptimizer().
init_op = tf.initialize_all_variables()
# launch the graph in a session
sess = tf.Session()
# Actually initialize the variables
sess.run(init_op)
# now train your model
for ...:
sess.run(train_op)
``` |
python numpy operation instead of for loops | 33,792,040 | 5 | 2015-11-18T22:53:42Z | 33,792,971 | 10 | 2015-11-19T00:08:11Z | [
"python",
"numpy",
"vectorization"
] | I wrote some lines in python which work fine but are very slow; I think due to the for loops. I hope one can speed up the following operations using numpy commands. Let me define the goal.
Let's assume I have (always) a 3d numpy array (first dim: row, second dim: col, third dim: frame); for simplicity I take a 2d array (since the operation must be applied for every 2d array in the third dimension). Moreover, I take a 6x11 array, also for simplicity (see drawing below).
1. I want to calculate the mean for all rows, i.e. sum\_j a\_ij, resulting in an array. This, of course, can easily be done. (I call this value CM\_tilde)
2. Now, for **each row** I want to calculate a selected mean, i.e. adding all values below a certain threshold and dividing them by the number of all columns (=N). If the value is above this defined threshold, the CM\_tilde value (mean of the entire row) is added. This value is called CM
3. Afterwards, the CM value is subtracted from each element in the row
In addition to this I want to have a numpy array or list where all those CM values are listed.
The figure:
[](http://i.stack.imgur.com/w0V6p.png)
The following code is working but very slow (especially if the arrays getting large)
```
CM_tilde = np.mean(data, axis=1)
N = data.shape[1]
data_cm = np.zeros(( data.shape[0], data.shape[1], data.shape[2] ))
all_CMs = np.zeros(( data.shape[0], data.shape[2]))
for frame in range(data.shape[2]):
for row in range(data.shape[0]):
CM=0
for col in range(data.shape[1]):
if data[row, col, frame] < (CM_tilde[row, frame]+threshold):
CM += data[row, col, frame]
else:
CM += CM_tilde[row, frame]
CM = CM/N
all_CMs[row, frame] = CM
# calculate CM corrected value
for col in range(data.shape[1]):
data_cm[row, col, frame] = data[row, col, frame] - CM
print "frame: ", frame
return data_cm, all_CMs
```
Any ideas? | It's quite easy to vectorize what you're doing:
```
import numpy as np
#generate dummy data
nrows=6
ncols=11
nframes=3
threshold=0.3
data=np.random.rand(nrows,ncols,nframes)
CM_tilde = np.mean(data, axis=1)
N = data.shape[1]
all_CMs2 = np.mean(np.where(data < (CM_tilde[:,None,:]+threshold),data,CM_tilde[:,None,:]),axis=1)
data_cm2 = data - all_CMs2[:,None,:]
```
Comparing this with your originals:
```
In [684]: (data_cm==data_cm2).all()
Out[684]: True
In [685]: (all_CMs==all_CMs2).all()
Out[685]: True
```
The logic is that we work with arrays of size `[nrows,ncols,nframes]` simultaneously. The main trick is to make use of NumPy's broadcasting, by turning `CM_tilde` of size `[nrows,nframes]` into `CM_tilde[:,None,:]` of size `[nrows,1,nframes]`. NumPy will then use the same values for each column, since that is a singleton dimension of this modified `CM_tilde`.
By using `np.where` we choose (based on the `threshold`) whether we want to get the corresponding value of `data`, or, again, the broadcast value of `CM_tilde`. A new use of `np.mean` allows us to compute `all_CMs2`.
In the final step we made use of broadcasting by directly subtracting this new `all_CMs2` from the corresponding elements of `data`.
It might help in vectorizing code this way by looking at the implicit indices of your temporary variables. What I mean is that your temporary variable `CM` lives inside a loop over `[nrows,nframes]`, and its value is reset with each iteration. This means that `CM` is in effect a quantity `CM[row,frame]` (later explicitly assigned to the 2d array `all_CMs`), and from here it's easy to see that you can construct it by summing up an appropriate `CMtmp[row,col,frames]` quantity along its column dimension. If it helps, you can name the `np.where(...)` part as `CMtmp` for this purpose, and then compute `np.mean(CMtmp,axis=1)` from that. Same result, obviously, but probably more transparent. |
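As a standalone illustration of the broadcasting step at work, here is a minimal sketch; the shapes and values below are invented for illustration, not taken from the question:

```python
import numpy as np

# Made-up shapes for illustration: [nrows, ncols, nframes]
data = np.arange(24, dtype=float).reshape(2, 4, 3)
CM_tilde = np.mean(data, axis=1)           # shape [nrows, nframes]

# CM_tilde[:, None, :] has shape [nrows, 1, nframes]; the singleton
# column axis is broadcast across all 4 columns of `data`.
centered = data - CM_tilde[:, None, :]

print(centered.shape)                       # (2, 4, 3)
# Subtracting each row's mean makes the column-wise mean exactly zero.
print(np.allclose(centered.mean(axis=1), 0))
```

The same `[:, None, :]` indexing is what lets `np.where` and the final subtraction in the answer operate on all rows, columns, and frames at once.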
Matching/Counting lists in python dictionary | 33,792,339 | 4 | 2015-11-18T23:14:07Z | 33,792,487 | 7 | 2015-11-18T23:25:39Z | [
"python",
"dictionary"
] | I have a dictionary `{x: [a,b,c,d], y: [a,c,g,f,h],...}`. So the key is one variable with the value being a list (of different sizes).
My goal is to match up each list against every list in the dictionary and come back with a count of how many times a certain list has been repeated.
I tried this but it does not seem to work:
```
count_dict = {}
counter = 1
for value in dict.values():
count_dict[dict.key] = counter
counter += 1
``` | You could map the lists to tuples so they can be used as keys and use a `Counter` dict to do the counting:
```
from collections import Counter
count = Counter(map(tuple, d.values()))
``` |
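A short sketch of the counting in action; the dictionary keys and list contents below are made-up assumptions for illustration only:

```python
from collections import Counter

# Hypothetical data: "x" and "z" hold the same list value.
d = {"x": ["a", "b", "c"], "y": ["a", "c", "g"], "z": ["a", "b", "c"]}

# Lists are unhashable, so convert each value to a tuple before counting.
count = Counter(map(tuple, d.values()))

print(count[("a", "b", "c")])  # 2 -- this list appears under two keys
print(count[("a", "c", "g")])  # 1
```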
Why PyGame coordinate system has its origin at the top-left corner? | 33,805,481 | 9 | 2015-11-19T13:37:13Z | 33,806,018 | 8 | 2015-11-19T14:00:41Z | [
"python",
"graphics",
"pygame"
] | I've been using PyGame for a while and I have had to make coordinate transformations to change the normal coordinate system used in mathematics (with its origin at the bottom-left corner) to the PyGame coordinate system (with its origin at the top-left corner). I found [this post](http://stackoverflow.com/questions/10167329/change-the-position-of-the-origin-in-pygame-coordinate-system) very useful for that.
But I am wondering, why is PyGame using this odd coordinate system? | It's not just PyGame - it's an old convention for graphics displays. Many APIs allow you to override and choose your own convention, but even then, they are mapping back to that top-left-corner convention in the background.
The origin of the convention is easy to see for old CRT displays. The raster scan for each frame progressed top-down, with each line scanned left-to-right. Since the scan was done that way, the signal was sent that way, and the pixel buffer in memory was organized that way to allow the hardware to implement a relatively simple and efficient linear scan of memory for each frame.
With LCDs and other newer display technologies, I'm pretty sure the convention is kept just for historic reasons - the legacy of a presumably arbitrary decision at one point about how the raster should scan out a picture for TV signals and CRTs many decades ago.
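If you prefer to think in the conventional bottom-left system, a tiny conversion helper along these lines (the `to_screen` name is assumed, not part of PyGame) performs the coordinate transformation mentioned in the question:

```python
def to_screen(x, y, screen_height):
    """Convert bottom-left-origin (x, y) to top-left-origin coordinates."""
    # x is unchanged; y is measured from the opposite edge.
    return x, screen_height - y

# The math point (10, 20) on a 480-pixel-tall window ends up
# 460 pixels down from the top edge.
print(to_screen(10, 20, 480))  # (10, 460)
```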
Why does g.append(l.pop()) return the second half of l but l only has the first half | 33,806,742 | 2 | 2015-11-19T14:32:39Z | 33,806,814 | 11 | 2015-11-19T14:35:53Z | [
"python",
"list",
"python-2.7"
] | While I was working on one program, I got to see some strange behavior in my code. Here is what I saw.
```
>>> l = [1,2,3,4,5,6,7,8]
>>> g = []
>>> for i in l:
... g.append(l.pop())
...
>>> l
[1, 2, 3, 4]
>>> g
[8, 7, 6, 5]
>>>
```
The list `g` was supposed to have all the elements of list `l` here! But why did it only take half of the list into account?
**Disclaimer**: I am not trying to copy a list or reverse a list. This was something that I found while working over something else. | **YOU SHOULD NORMALLY NOT DO THIS!**
Changing the Iterable you are looping over is not good!
**Explanation:**
As you can see, `l.pop()` always takes the last item of `l`.
`g.append()` then adds the popped item to the end of `g`.
After 4 runs the loop's internal index has reached the end of the shrunken `l`, so the iteration stops (even though `l` still holds `[1, 2, 3, 4]`).
First Run:
```
i = 1
l = [1,2,3,4,5,6,7]
g = [8]
```
Second Run:
```
i = 2
l = [1,2,3,4,5,6]
g = [8,7]
```
Third Run:
```
i = 3
l = [1,2,3,4,5]
g = [8,7,6]
```
Fourth Run:
```
i = 4
l = [1,2,3,4]
g = [8,7,6,5]
```
Now we are at the end of `l` and we stop the loop |
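If the actual goal is to drain every element of `l` into `g`, a sketch of a safe pattern (assuming the reversed order seen in the question is acceptable) is to test the list's emptiness instead of iterating over the list you are mutating:

```python
l = [1, 2, 3, 4, 5, 6, 7, 8]
g = []

while l:              # keep going until l is empty
    g.append(l.pop())

print(l)  # []
print(g)  # [8, 7, 6, 5, 4, 3, 2, 1]
```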
Lists are the same but not considered equal? | 33,808,011 | 5 | 2015-11-19T15:29:04Z | 33,808,055 | 9 | 2015-11-19T15:30:50Z | [
"python",
"list",
"python-2.7",
"equality"
] | Novice at Python here, encountering a problem testing for equality. I have a list of lists, states[]; each state contains x (in this specific case x=3) Boolean values. In my program, I generate a list of Boolean values, the first three of which correspond to a state[i]. I loop through the list of states testing for equality (one of them is certainly correct, as all possible boolean permutations are in states), but equality is never detected. No clue why; here is some code I modified to test it:
```
temp1 = []
for boolean in aggregate:
temp1.append(boolean)
if len(temp1) == len(propositions):
break
print temp1
print states[0]
if temp1 == states[0]:
print 'True'
else:
print 'False'
```
In this case, the length of propositions is 3. The output I get from this code is:
```
[True, True, True]
(True, True, True)
False
```
I'm guessing this has to do with the difference in brackets? Something to do with the fact that states[0] is a list within a list? Cheers. | You are comparing a **tuple** `(True, True, True)` against a **list** `[True, True, True]`
Of course they're different.
**Try casting your `list` to `tuple` on-the-go, to compare:**
```
temp1 = []
for boolean in aggregate:
temp1.append(boolean)
if len(temp1) == len(propositions):
break
print temp1
print states[0]
if tuple(temp1) == states[0]:
print 'True'
else:
print 'False'
```
**Or casting your `tuple` to `list` on-the-go, to compare:**
```
temp1 = []
for boolean in aggregate:
temp1.append(boolean)
if len(temp1) == len(propositions):
break
print temp1
print states[0]
if temp1 == list(states[0]):
print 'True'
else:
print 'False'
```
**Output:**
```
[True, True, True]
(True, True, True)
True
``` |
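The behaviour is easy to verify in isolation; the values below are chosen to mirror the question's output:

```python
a = [True, True, True]   # list
b = (True, True, True)   # tuple

print(a == b)            # False: a list never compares equal to a tuple
print(tuple(a) == b)     # True: same type, same elements
print(a == list(b))      # True
```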
Convert string ">0" to python > 0 | 33,808,854 | 2 | 2015-11-19T16:07:18Z | 33,808,961 | 8 | 2015-11-19T16:11:55Z | [
"python"
] | I read the conditions ">0", "<60", etc., from an XML file. What is the best way to convert them into Python comparisons? The sample code is what I want to do:
```
if str == ">0":
if x > 0:
print "yes"
else:
print "no"
elif str == "<60":
if x < 60:
print "yes"
...
``` | I would use [regex](https://docs.python.org/2/library/re.html) and [`operator`](https://docs.python.org/2/library/operator.html).
```
from operator import lt, gt
import re
operators = {
">": gt,
"<": lt,
}
string = ">60"
x = 3
op, n = re.findall(r'([><])(\d+)', string)[0]
print(operators[op](x, int(n)))
```
Depending on your string, the regex can be modified. |
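Below is a sketch wrapping the same idea in a reusable helper; the `check` function name and the added `>=`/`<=` support are assumptions beyond the original answer's two operators:

```python
from operator import lt, gt, le, ge
import re

operators = {">": gt, "<": lt, ">=": ge, "<=": le}

def check(condition, x):
    """Evaluate a condition string like '>0' or '<=60' against x."""
    # ([<>]=?) captures '<', '>', '<=' or '>='; (\d+) captures the number.
    op, n = re.findall(r'([<>]=?)(\d+)', condition)[0]
    return operators[op](x, int(n))

print(check(">0", 5))     # True
print(check("<60", 75))   # False
print(check(">=10", 10))  # True
```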