title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
How to feed a placeholder? | 33,810,990 | 4 | 2015-11-19T17:52:29Z | 33,812,296 | 11 | 2015-11-19T19:03:15Z | [
"python",
"tensorflow"
] | I am trying to implement a simple feed forward network. However, I can't figure out how to feed a `Placeholder`. This example:
```
import tensorflow as tf
num_input = 2
num_hidden = 3
num_output = 2
x = tf.placeholder("float", [num_input, 1])
W_hidden = tf.Variable(tf.zeros([num_hidden, num_input]))
W_out = tf.Variable(tf.zeros([num_output, num_hidden]))
b_hidden = tf.Variable(tf.zeros([num_hidden]))
b_out = tf.Variable(tf.zeros([num_output]))
h = tf.nn.softmax(tf.matmul(W_hidden,x) + b_hidden)
sess = tf.Session()
with sess.as_default():
    print h.eval()
```
Gives me the following error:
```
...
results = self._do_run(target_list, unique_fetch_targets, feed_dict_string)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 419, in _do_run
e.code)
tensorflow.python.framework.errors.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape dim { size: 2 } dim { size: 1 }
[[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[2,1], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Caused by op u'Placeholder', defined at:
File "/home/sfalk/workspace/SemEval2016/java/semeval2016-python/slot1_tf.py", line 8, in <module>
x = tf.placeholder("float", [num_input, 1])
...
```
I have tried
```
tf.assign([tf.Variable(1.0), tf.Variable(1.0)], x)
tf.assign([1.0, 1.0], x)
```
but that does not work apparently. | To feed a placeholder, you use the `feed_dict` argument to `Session.run()` (or `Tensor.eval()`). Let's say you have the following graph, with a placeholder:
```
x = tf.placeholder(tf.float32, shape=[2, 2])
y = tf.constant([[1.0, 1.0], [0.0, 1.0]])
z = tf.matmul(x, y)
```
If you want to evaluate `z`, you must feed a value for `x`. You can do this as follows:
```
sess = tf.Session()
print sess.run(z, feed_dict={x: [[3.0, 4.0], [5.0, 6.0]]})
```
For more information, see the [documentation on feeding](http://tensorflow.org/how_tos/reading_data/index.html#Feeding). |
Python creating tuple groups in list from another list | 33,812,142 | 4 | 2015-11-19T18:54:03Z | 33,812,257 | 8 | 2015-11-19T19:00:57Z | [
"python",
"list",
"tuples"
] | Let's say I have this data:
```
data = [1, 2, 3, -4, -5, 3, 2, 4, -2, 5, 6, -5, -1, 1]
```
I need it grouped into another list as tuples. Each tuple consists of two lists: one for positive numbers, another for negatives. Tuples are built by checking what kind of number comes next: the last negative number of a run (i.e. a stretch of negatives with no positives in between) ends the current tuple, so the following numbers go into a new tuple, and when the next run of negatives ends, another tuple is created.
So the rules are: numbers are added to the first tuple as they are found; negative numbers still go into that tuple, until a positive number appears again (which means a new tuple must be created).
I think it is easier to show, than to explain. After parsing `data`, the list should look like this:
```
l = [([1, 2, 3], [-4, -5]), ([3, 2, 4], [-2]), ([5, 6], [-5, -1]), ([1], [])]
```
I created a solution, but I wonder if it's quite optimal. Maybe it is possible to write a more elegant one (and I wonder about performance, is there some better way to write such parser with best possible performance:))?
```
def neighborhood(iterable):
    iterator = iter(iterable)
    prev = None
    item = iterator.next()  # throws StopIteration if empty.
    for next in iterator:
        yield (prev, item, next)
        prev = item
        item = next
    yield (prev, item, None)

l = []
pos = []
neg = []
for prev, item, next in neighborhood(data):
    if item > 0:
        pos.append(item)
        if not next:
            l.append((pos, neg))
    else:
        neg.append(item)
        if next > 0:
            l.append((pos, neg))
            pos = []
            neg = []
        elif not next:
            l.append((pos, neg))

print l
```
P.S. I think the `if not next` part can be used only once, after the main check. | I'd use [`itertools.groupby`](https://docs.python.org/3/library/itertools.html#itertools.groupby) to make a list of consecutive tuples containing positive/negative lists first, and then group into consecutive pairs. This can still be done in one pass through the list by taking advantage of generators:
```
from itertools import groupby, zip_longest
x = (list(v) for k,v in groupby(data, lambda x: x < 0))
l = list(zip_longest(x, x, fillvalue=[]))
```
This gives `l` as:
```
[([1, 2, 3], [-4, -5]), ([3, 2, 4], [-2]), ([5, 6], [-5, -1]), ([1], [])]
```
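The `zip_longest(x, x, fillvalue=[])` call pairs up consecutive groups because zip-style functions pull from their arguments left to right, so passing the same generator twice consumes it two items at a time. A minimal sketch of the idiom on its own:

```python
# pairing consecutive items by zipping an iterator with itself
nums = iter([10, 20, 30, 40, 50, 60])
pairs = list(zip(nums, nums))  # each zip step pulls two items from the same iterator
# pairs == [(10, 20), (30, 40), (50, 60)]
```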
A couple of notes on the code above:
* The initial grouping into positive/negative values is handed to `groupby` which should be reasonably performant (it's compiled code).
* The zipping-a-generator method for grouping into pairs is a reasonably common idiom in Python. It's guaranteed to work since `zip` guarantees that its arguments are consumed from left to right.
* In Python 2, use `izip_longest`. |
send_file() when called return text document instead of an image | 33,818,466 | 4 | 2015-11-20T03:27:52Z | 33,818,894 | 7 | 2015-11-20T04:16:26Z | [
"python",
"web",
"flask"
] | I want to send an image file from server side to client side. I am using flask framework.
But the problem is that whenever I call the route containing `send_file()`, the response returned is a file. When I click this file, `gedit` opens it with nothing inside, which suggests an empty text file is being written.
I referred the flask docs for [`send_file()`](http://flask.pocoo.org/docs/0.10/api/#flask.send_file).
Here is what I am doing in the code:
```
@app.route('/try')
def trial():
    todown = 'https://igcdn-photos-e-a.akamaihd.net//hphotos-ak-xaf1//t51.2885-15//e35//12093691_1082288621781484_1524190206_n.jpg'
    resp = requests.get(todown)
    return send_file(resp,mimetype="image/jpeg",attachment_filename="img.jpg",as_attachment=True)
```
Whenever I load `localhost:5000/try` a file is downloaded but not the image file that I want to download.
The error I am getting in my terminal is `AttributeError: 'Response' object has no attribute 'read'`.
What could the problem be? Is anything missing in the above snippet? | 1. `resp` is a `requests.models.Response` object, not a string or bytes:
```
>>> import requests
>>> todown = 'https://igcdn-photos-e-a.akamaihd.net//hphotos-ak-xaf1//t51.2885-15//e35//12093691_1082288621781484_1524190206_n.jpg'
>>> resp = requests.get(todown)
>>> resp
<Response [200]>
>>> type(resp)
<class 'requests.models.Response'>
```
2. `Flask.send_file()` *sends a file*.
---
So first of all, you need to use `resp.content` to get the content of the object; it returns a `bytes` object (and by the way, `resp.text` returns a string object.
Always use `.content` if you're downloading an image, video, or other non-text content).
```
>>> import requests
>>> todown = 'https://igcdn-photos-e-a.akamaihd.net//hphotos-ak-xaf1//t51.2885-15//e35//12093691_1082288621781484_1524190206_n.jpg'
>>> resp = requests.get(todown)
>>> type(resp.content)
<class 'bytes'>
```
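As a standalone illustration (independent of Flask and requests), `io.BytesIO` wraps a `bytes` object in an in-memory file-like object, which is what lets `send_file` serve it without touching the disk:

```python
from io import BytesIO

buf = BytesIO(b'\xff\xd8\xff')  # pretend these bytes are JPEG data
data = buf.read()               # supports the usual file API
buf.seek(0)                     # and can be rewound like a real file
```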
Please check [the document](http://docs.python-requests.org/en/latest/) for more details.
---
Then, because `Flask.send_file()` *sends a file*, you would need to write the image to a file before sending it.
But since you don't need this image on your server anyway, I'd suggest using [`io.BytesIO`](https://docs.python.org/3/library/io.html#io.BytesIO) in this case; then you don't need to delete the image after sending it. (Note: use [`io.StringIO`](https://docs.python.org/3/library/io.html#io.StringIO) if you're sending a text file.)
For example:
```
import requests
from io import BytesIO
from flask import Flask, send_file
app = Flask(__name__)
@app.route('/')
def tria():
    todown = 'https://igcdn-photos-e-a.akamaihd.net//hphotos-ak-xaf1//t51.2885-15//e35//12093691_1082288621781484_1524190206_n.jpg'
    resp = requests.get(todown)
    return send_file(BytesIO(resp.content), mimetype="image/jpeg", attachment_filename="img2.jpg", as_attachment=True)
app.run(port=80, debug=True)
```
---
However, if you do want to write the image to a file and then send it, you can do that too. Use [`tempfile.NamedTemporaryFile()`](https://docs.python.org/3/library/tempfile.html#tempfile.NamedTemporaryFile) to create a *temporary* file rather than a regular file, so you avoid overwriting your important files.
From the document:
> This function operates exactly as `TemporaryFile()` does, except that the file is guaranteed to have a visible name in the file system (on Unix, the directory entry is not unlinked).
>
> That name can be retrieved from the `name` attribute of the file object. Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later). If delete is true (the default), the file is deleted as soon as it is closed.
>
> The returned object is always a file-like object whose `file` attribute is the underlying true file object. This file-like object can be used in a `with` statement, just like a normal file.
For example:
```
import tempfile
import requests
from flask import Flask, send_file
app = Flask(__name__)
@app.route('/')
def tria():
todown = 'https://igcdn-photos-e-a.akamaihd.net//hphotos-ak-xaf1//t51.2885-15//e35//12093691_1082288621781484_1524190206_n.jpg'
resp = requests.get(todown)
with tempfile.NamedTemporaryFile() as f:
# create a file-like object use `NamedTemporaryFile()` and `with`
# as the basic usage which said in the document
f.write(resp.content)
# write the content of the image into it
return send_file(f.name, mimetype="image/jpeg",
attachment_filename="img2.jpg", as_attachment=True)
# `f.name` is the temp file's filename
app.run(port=80, debug=True)
``` |
UnicodeDecodeError: ('utf-8' codec) while reading a csv file | 33,819,557 | 7 | 2015-11-20T05:22:13Z | 33,819,765 | 8 | 2015-11-20T05:41:32Z | [
"python",
"pandas",
"utf-8",
"python-unicode"
] | What I am trying to do: read a CSV into a dataframe, make changes in a column, write the changed values back to the same CSV (`to_csv`), then read that CSV again into another dataframe... and there I get an error:
```
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe7 in position 7: invalid continuation byte
```
my code is
```
import pandas as pd
df = pd.read_csv("D:\ss.csv")
df.columns #o/p is Index(['CUSTOMER_MAILID', 'False', 'True'], dtype='object')
df['True'] = df['True'] + 2 #making changes to one column of type float
df.to_csv("D:\ss.csv") #updating that .csv
df1 = pd.read_csv("D:\ss.csv") #again trying to read that csv
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe7 in position 7: invalid continuation byte
```
So please suggest how I can avoid the error and be able to read that CSV into a dataframe again.
I know I am missing something like "encoding=<some codec>" while reading or writing the CSV.
But I don't know what exactly should be changed, so I need help. | # Known encoding
If you know the encoding of the file you want to read in,
you can use
```
pd.read_csv('filename.txt', encoding='encoding')
```
These are the possible encodings:
<https://docs.python.org/3/library/codecs.html#standard-encodings>
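The error in the question ("can't decode byte 0xe7") is typical of Latin-1/CP-1252 data being decoded as UTF-8, since `0xe7` is `'ç'` in Latin-1. A quick sketch of the mismatch, without pandas:

```python
raw = 'Français'.encode('latin-1')   # b'Fran\xe7ais'
try:
    text = raw.decode('utf-8')       # what pandas attempts by default
except UnicodeDecodeError:
    text = raw.decode('latin-1')     # equivalent to encoding='latin-1' in read_csv
# text == 'Français'
```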
# Unknown encoding
If you do not know the encoding, you can try to use chardet; however, this is not guaranteed to work — it is more guesswork.
```
import chardet
import pandas as pd
with open('filename.csv', 'rb') as f:
    result = chardet.detect(f.read())  # or readline if the file is large
pd.read_csv('filename.csv', encoding=result['encoding'])
``` |
How to sort only few values inside a list in Python | 33,822,603 | 9 | 2015-11-20T09:00:40Z | 33,823,004 | 7 | 2015-11-20T09:20:52Z | [
"python",
"list",
"sorting"
] | Suppose
```
A = [9, 5, 34, 33, 32, 31, 300, 30, 3, 256]
```
I want to sort only a particular section in a list. For example, here I want to sort only `[300, 30, 3]` so that overall list becomes:
```
A = [9, 5, 34, 33, 32, 31, 3, 30, 300, 256]
```
Suppose `B = [300, 30, 400, 40, 500, 50, 600, 60]` then after sorting it should be `B = [30, 300, 40, 400, 50, 500, 60, 600]`.
The main idea: if the leading digit is the same (e.g. `300, 30, 3`) and the remaining rightmost digits contain only `zeros`, then we should arrange those numbers in increasing order.
Another example:
```
A = [100, 10, 1, 2000, 20, 2]
```
After sorting it should be `A = [1, 10, 100, 2, 20, 2000]`
Could anyone suggest some techniques to approach such issue. The values in my list will always be arranged in this way `[200, 20, 2, 300, 30, 3, 400, 40, 4]`.
Code:
```
nums = [3, 30, 31, 32, 33, 34, 300, 256, 5, 9]
nums = sorted(nums, key=lambda x: str(x), reverse=True)
print nums
>> [9, 5, 34, 33, 32, 31, 300, 30, 3, 256]
```
But my final output should be `[9, 5, 34, 33, 32, 31, 3, 30, 300 256]`.
Here is a big example:
```
A = [9, 5, 100, 10, 30, 3, 265, 200, 20, 2]
```
After sorting it should be:
```
A = [9, 5, 10, 100, 3, 30, 265, 2, 20, 200]
``` | Since each expected run contains numbers that share a common coefficient times a power of ten, you can use a `scientific_notation` function which returns that common coefficient. Then you can categorize your numbers based on this function and concatenate the groups:
```
>>> from operator import itemgetter
>>> from itertools import chain,groupby
>>> def scientific_notation(number):
...     while number % 10 == 0:
...         number = number / 10
...     return number
>>> A = [9, 5, 34, 33, 32, 31, 300, 30, 3, 256]
>>> G=[list(g) for _,g in groupby(A,key=scientific_notation)]
>>> list(chain.from_iterable(sorted(sub) if len(sub)>1 else sub for sub in G))
[9, 5, 34, 33, 32, 31, 3, 30, 300, 256]
```
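For clarity, the key function maps every member of a run like `300, 30, 3` to the same coefficient by stripping trailing zeros (a sketch; `//` is used here so it also runs unchanged on Python 3):

```python
def scientific_notation(number):
    # strip trailing zeros: 300 -> 30 -> 3 (assumes nonzero input)
    while number % 10 == 0:
        number //= 10
    return number

keys = [scientific_notation(n) for n in (300, 30, 3, 256, 20, 2)]
# keys == [3, 3, 3, 256, 2, 2]
```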
Note that since we are categorizing the numbers based on their coefficient of a power of ten, a sub-list with length greater than 1 is a run of expected numbers which needs to be sorted. But instead of checking the length of each sequence, you can simply apply the sort to all the groups returned by `groupby`:
```
>>> list(chain.from_iterable(sorted(g) for _,g in groupby(A,key=scientific_notation)))
[9, 5, 34, 33, 32, 31, 3, 30, 300, 256]
``` |
Why won't dynamically adding a `__call__` method to an instance work? | 33,824,228 | 8 | 2015-11-20T10:22:12Z | 33,824,320 | 8 | 2015-11-20T10:27:21Z | [
"python",
"instance"
] | In both Python 2 and Python 3 the code:
```
class Foo(object):
    pass
f = Foo()
f.__call__ = lambda *args : args
f(1, 2, 3)
```
raises the error `'Foo' object is not callable`. Why does that happen?
PS: With old-style classes it works as expected.
PPS: The behavior is intended (see accepted answer). As a work-around it's possible to define a `__call__` at class level that just forwards to another member, and set this "normal" member to a per-instance `__call__` implementation. | Double-underscore methods are always looked up on the class, never the instance. See [*Special method lookup for new-style classes*](https://docs.python.org/2/reference/datamodel.html#special-method-lookup-for-new-style-classes):
> For new-style classes, implicit invocations of special methods are only guaranteed to work correctly if defined on an object's type, not in the object's instance dictionary.
That's because the *type* might need to support the same operation (in which case the special method is looked up on the metatype).
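Because the lookup always goes through the class, the per-instance workaround mentioned in the question's PPS is to define a class-level `__call__` that forwards to an ordinary instance attribute (a sketch; the attribute name `_call_impl` is made up):

```python
class Foo(object):
    def __call__(self, *args):
        # class-level hook: forwards to a per-instance implementation
        return self._call_impl(*args)

f = Foo()
f._call_impl = lambda *args: args  # plain instance attribute, so no self is passed
result = f(1, 2, 3)  # (1, 2, 3)
```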
For example, classes are callable (that's how you produce an instance), but if Python looked up the `__call__` method *on the actual object*, then you could never do so on classes that implement `__call__` for their instances. `ClassObject()` would become `ClassObject.__call__()` which would fail because the `self` parameter is not passed in to the unbound method. So instead `type(ClassObject).__call__(ClassObject)` is used, and calling `instance()` translates to `type(instance).__call__(instance)`. |
How to convert this list into a dictionary | 33,824,334 | 4 | 2015-11-20T10:28:27Z | 33,824,432 | 9 | 2015-11-20T10:33:52Z | [
"python",
"list",
"dictionary"
] | I have a list currently that looks like this
```
list = [['hate', '10'], ['would', '5'], ['hello', '10'], ['pigeon', '1'], ['adore', '10']]
```
I want to convert it to a dictionary like this
```
dict = {'hate': '10', 'would': '5', 'hello': '10', 'pigeon': '1', 'adore': '10'}
```
So basically `list[i][0]` will be the key and `list[i][1]` will be the value. Any help would be really appreciated :) | Use the `dict` constructor:
```
In [1]: lst = [['hate', '10'], ['would', '5'], ['hello', '10'], ['pigeon', '1'], ['adore', '10']]
In [2]: dict(lst)
Out[2]: {'adore': '10', 'hate': '10', 'hello': '10', 'pigeon': '1', 'would': '5'}
```
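Equivalently, if you want the key/value unpacking spelled out, a dict comprehension builds the same mapping (a sketch):

```python
lst = [['hate', '10'], ['would', '5'], ['hello', '10']]
d = {key: value for key, value in lst}  # same result as dict(lst)
```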
Note that from your edit it seems you need the values to be integers rather than strings (e.g. `'10'`), in which case you can cast the second item of each inner list into an `int` before passing them to `dict`:
```
In [3]: dict([(e[0], int(e[1])) for e in lst])
Out[3]: {'adore': 10, 'hate': 10, 'hello': 10, 'pigeon': 1, 'would': 5}
``` |
Why is pos_tag() so painfully slow and can this be avoided? | 33,829,160 | 5 | 2015-11-20T14:29:31Z | 33,829,434 | 10 | 2015-11-20T14:43:38Z | [
"python",
"nltk"
] | I want to be able to get POS-Tags of sentences one by one like in this manner:
```
def __remove_stop_words(self, tokenized_text, stop_words):
    sentences_pos = nltk.pos_tag(tokenized_text)
    filtered_words = [word for (word, pos) in sentences_pos
                      if pos not in stop_words and word not in stop_words]
    return filtered_words
```
But the problem is that `pos_tag()` takes about a second for each sentence. There is another option to use `pos_tag_sents()` to do this batch-wise and speed things up. But my life would be easier if I could do this sentence by sentence.
Is there a way to do this faster? | For nltk version 3.1, inside [`nltk/tag/__init__.py`](https://github.com/nltk/nltk/blob/develop/nltk/tag/__init__.py#L110), `pos_tag` is defined like this:
```
from nltk.tag.perceptron import PerceptronTagger
def pos_tag(tokens, tagset=None):
    tagger = PerceptronTagger()
    return _pos_tag(tokens, tagset, tagger)
```
So each call to `pos_tag` first instantiates `PerceptronTagger` which takes some time because it involves [loading a pickle file](https://github.com/nltk/nltk/blob/develop/nltk/tag/perceptron.py#L141). `_pos_tag` [simply calls `tagger.tag`](https://github.com/nltk/nltk/blob/develop/nltk/tag/__init__.py#L82) when `tagset` is `None`.
So you can save some time by loading the file *once*, and calling `tagger.tag` yourself instead of calling `pos_tag`:
```
from nltk.tag.perceptron import PerceptronTagger
tagger = PerceptronTagger()
def __remove_stop_words(self, tokenized_text, stop_words, tagger=tagger):
    sentences_pos = tagger.tag(tokenized_text)
    filtered_words = [word for (word, pos) in sentences_pos
                      if pos not in stop_words and word not in stop_words]
    return filtered_words
```
---
`pos_tag_sents` uses the same trick as above -- [it instantiates `PerceptronTagger` once](https://github.com/nltk/nltk/blob/develop/nltk/tag/__init__.py#L127) before calling `_pos_tag` many times. So you'll get a comparable gain in performance using the above code as you would by refactoring and calling `pos_tag_sents`.
---
Also, if `stop_words` is a long list, you may save a bit of time by making `stop_words` a set:
```
stop_words = set(stop_words)
```
since checking membership in a set (e.g. `pos not in stop_words`) is an `O(1)` (constant-time) operation, while checking membership in a list is an `O(n)` operation (i.e. it requires time which grows proportionally to the length of the list). |
passing bash array to python list | 33,829,444 | 5 | 2015-11-20T14:44:09Z | 33,829,676 | 9 | 2015-11-20T14:55:12Z | [
"python",
"arrays",
"linux",
"bash"
] | I'm trying to pass an array from bash to python using the old getenv method however I keep getting this error:
```
./crcFiles.sh: line 7: export: `0021': not a valid identifier
Traceback (most recent call last):
File "/shares/web/vm3618/optiload/prog/legalLitres.py", line 30, in <module>
for i in mdcArray.split(' '):
AttributeError: 'NoneType' object has no attribute 'split'
```
Could someone please explain why `$mdcNo` isn't being passed from bash to Python successfully?
Code .sh:
```
#!/bin/bash
mdcNo=('0021' '0022' '0036' '0055' '0057' '0059' '0061' '0062' '0063' '0065' '0066' '0086' '0095' '0098' '0106' '0110' '0113' '0114' '0115' '0121' '0126' '0128' '0135' '0141' '0143' '0153' '0155' '0158')
localDIR=/shares/web/vm3618/optiload/prog
export mdcNo
$localDIR/legalLitres.py
for i in "${mdcNo[@]}"
do
    echo $i
    cp $localDIR/MDC$i/*/QqTrkRec.txt $localDIR/crccalc/.
    cd $localDIR/crccalc
    ./crccalc.py QqTrkRec.txt
    cp $localDIR/crccalc/QqTrkRec.txt $localDIR/MDC$i/.
done
```
code .py:
```
#!/usr/bin/python
import glob
import os
mdcArray = os.getenv('mdcNo')
#Legal Litres that hex and decimal
legalLitresHex = "47E0"
legalLitresTxt = '18,400'
# file name and Legal Litres header
legalHeader = ":00F0:"
hexFile = "QqTrkRec.txt"
# insert comment to explain change
comment = "#\n# 2015 Nov 20: Legal Litres changed to 18,400\n#\n"
commentFlag0 = "# SetDATA"
commentFlag1 = "# SetDATA"
try:
    for i in mdcArray.split(' '):
        line = ""
        Qqfile = glob.glob("/shares/web/vm3618/optiload/prog/MDC"+i+"/*/"+hexFile)
        outFile = Qqfile[0]+".new"
        print i
``` | When you `export` a variable from the shell, what you are really doing is adding it to the POSIX "environment" array that all child processes inherit. But the POSIX environment is a flat array of name=value strings; it cannot itself contain arrays. So Bash doesn't even attempt to put arrays there. It will let you `export` an array variable, and doing so even sets the "exported" flag on that variable, but the environment is not touched. You can verify this fact by running `env` or a new copy of `bash` and looking for the "exported" variable:
```
$ export myArr=(this is an array)
$ bash -c 'echo "${myArr[@]}"'
$
```
(Some other array-having shells, notably ksh, will actually export an array variable to the environment, but the exported value will consist of only the first element of the array.)
If you want to pass a shell array to the Python script, your best bet is to do so as command line arguments. If you run the Python script like this:
```
python code.py "${mdcNo[@]}"
```
... then the Python code can just loop over `sys.argv`, which is always a list. (Specifically, the passed-in array will be the slice `sys.argv[1:]`, since `sys.argv[0]` is always set to the name of the script itself.)
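A minimal sketch of the receiving side (a hypothetical `code.py`; the slicing is the only part that matters):

```python
import sys

def passed_array(argv):
    # everything after the script name is the array passed from bash
    return argv[1:]

mdc_no = passed_array(sys.argv)
```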
If that's not an option, then you'll have to set the environment variable to a string with some delimiter between elements and split it inside the Python code. Something like this:
Bash:
```
export mdcList='0021,0022,0036,0055,0057,0059,0061,0062,0063,0065,0066,0086,0095,0098,0106,0110,0113,0114,0115,0121,0126,0128,0135,0141,0143,0153,0155,0158'
```
Or you can build the string up from the array:
```
export mdcList=${mdcNo[0]}
for i in "${mdcNo[@]:1}"; do
    mdcList+=,$i
done
```
Either way, the Python script can recover the array as a list like this:
```
mdc_no = os.getenv('mdcList').split(',')
```
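As a sanity check, the export/parse pair is just a `join` on the Bash side mirrored by a `split` on the Python side; the round trip in pure Python (a sketch):

```python
mdc_no = ['0021', '0022', '0036']
mdc_list = ','.join(mdc_no)       # what the Bash loop builds: '0021,0022,0036'
recovered = mdc_list.split(',')   # what the Python script reconstructs
```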
If your array elements aren't just numbers, you can replace the comma with something less likely to show up in an element; the traditional choice would be the ASCII Unit Separator (U+001F, `$'\x1f'` in Bash, `'\x1f'` in Python). |
Printing the loss during TensorFlow training | 33,833,818 | 8 | 2015-11-20T18:41:15Z | 33,834,561 | 10 | 2015-11-20T19:26:32Z | [
"python",
"tensorflow"
] | I am looking at the TensorFlow "MNIST For ML Beginners" tutorial, and I want to print out the training loss after every training step.
My training loop currently looks like this:
```
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
```
Now, `train_step` is defined as:
```
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
```
Where `cross_entropy` is the loss which I want to print out:
```
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
```
One way to print this would be to explicitly compute `cross_entropy` in the training loop:
```
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
    print 'loss = ' + str(cross_entropy)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
```
I now have two questions regarding this:
1. Given that `cross_entropy` is already computed during `sess.run(train_step, ...)`, it seems inefficient to compute it twice, requiring twice the number of forward passes of all the training data. Is there a way to access the value of `cross_entropy` when it was computed during `sess.run(train_step, ...)`?
2. How do I even print a `tf.Variable`? Using `str(cross_entropy)` gives me an error...
Thank you! | You can fetch the value of `cross_entropy` by adding it to the list of arguments to `sess.run(...)`. For example, your `for`-loop could be rewritten as follows:
```
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))  # define the op once, outside the loop
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    _, loss_val = sess.run([train_step, cross_entropy],
                           feed_dict={x: batch_xs, y_: batch_ys})
    print 'loss = %s' % loss_val
```
The same approach can be used to print the current value of a variable. Let's say, in addition to the value of `cross_entropy`, you wanted to print the value of a `tf.Variable` called `W`, you could do the following:
```
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))  # again, defined once, outside the loop
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    _, loss_val, W_val = sess.run([train_step, cross_entropy, W],
                                  feed_dict={x: batch_xs, y_: batch_ys})
    print 'loss = %s' % loss_val
    print 'W = %s' % W_val
``` |
Type hints: solve circular dependency | 33,837,918 | 5 | 2015-11-20T23:46:07Z | 33,844,891 | 7 | 2015-11-21T15:06:56Z | [
"python",
"type-hinting",
"python-3.5"
] | The following produces `NameError: name 'Client' is not defined`. How can I solve it?
```
class Server():
    def register_client(self, client: Client):
        pass

class Client():
    def __init__(self, server: Server):
        server.register_client(self)
``` | You can use a [forward reference](http://legacy.python.org/dev/peps/pep-0484/#forward-references) by using a *string* name for the not-yet-defined `Client` class:
```
class Server():
    def register_client(self, client: 'Client'):
        pass
``` |
Is there any difference between Python list slicing [-1:] and [-1]? | 33,841,023 | 4 | 2015-11-21T07:52:48Z | 33,841,106 | 8 | 2015-11-21T08:01:23Z | [
"python",
"list"
] | I have read a snippet of code like this:
```
s = self.buffer_file.readline()
if s[-1:] == "\n":
    return s
```
And if I do this:
```
s = 'abc'
In [78]: id(s[-1:]), id(s[-1])
Out[78]: (140419827715248, 140419827715248)
In [79]: id(s[-1:]) is id(s[-1])
Out[79]: False
In [80]: id(s[-1:]) == id(s[-1])
Out[80]: True
```
It doesn't make sense to me, the ID numbers are same, but the IDs are different. So they are different for some reason. | The difference is that the result of slicing a list is a list
```
x = [1, 2, 3]
print(x[-1]) # --> 3
print(x[-1:]) # --> [3]
```
The second case just happens to be a list of one element, but it's still a list.
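The same distinction has a practical side effect: slicing is forgiving about out-of-range indices, so `[-1:]` is safe on an empty list where `[-1]` raises (a sketch):

```python
x = []
tail = x[-1:]        # [] -- slicing never raises IndexError
try:
    x[-1]            # element access does
    raised = False
except IndexError:
    raised = True
```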
Note however that Python doesn't have a `char` type distinct from the `str` type and this means that both element access and slicing on `str` objects return another `str` object:
```
print("abcd"[-1]) # --> "d"
print("abcd"[-1:]) # --> "d"
```
The only advantage of using for example `s[-1:]` or `s[:1]` with a string instead of `s[-1]` and `s[0]` is that the slice expressions will not raise a runtime error when acting on an empty string (something that element access does)... this may allow code simplification:
```
if len(s) > 0 and s[0] == '*': ...
if s[:1] == '*': ...
``` |
Is there any difference between Python list slicing [-1:] and [-1]? | 33,841,023 | 4 | 2015-11-21T07:52:48Z | 33,841,242 | 7 | 2015-11-21T08:19:30Z | [
"python",
"list"
] | I have read a snippet of code like this:
```
s = self.buffer_file.readline()
if s[-1:] == "\n":
    return s
```
And if I do this:
```
s = 'abc'
In [78]: id(s[-1:]), id(s[-1])
Out[78]: (140419827715248, 140419827715248)
In [79]: id(s[-1:]) is id(s[-1])
Out[79]: False
In [80]: id(s[-1:]) == id(s[-1])
Out[80]: True
```
It doesn't make sense to me, the ID numbers are same, but the IDs are different. So they are different for some reason. | The key practical benefit to `s[-1:]` rather than `s[-1]` is that the former will produce an empty iterable rather than stopping with a traceback.
```
>>> 'hi'[-1:]
'i'
>>> 'hi'[-1]
'i'
>>> ''[-1:]
''
>>> ''[-1]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: string index out of range
```
This allows `if s[-1:] == "\n":` to handle an empty line `s` without preceding it with `if s:` by simply evaluating to `False` instead of generating an error that would need to be handled with a `try..except` structure. |
check if a key exists in a bucket in s3 using boto3 | 33,842,944 | 12 | 2015-11-21T11:46:03Z | 33,843,019 | 20 | 2015-11-21T11:53:54Z | [
"python",
"amazon-s3",
"boto3"
] | I would like to know if a key exists in boto3. I can loop the bucket contents and check the key if it matches.
But that seems longer and an overkill. Boto3 official docs explicitly state how to do this.
Maybe I am missing the obvious. Can anybody point me to how I can achieve this? | Boto 2's `boto.s3.key.Key` object used to have an `exists` method that checked if the key existed on S3 by doing a HEAD request and looking at the result, but it seems that it no longer exists. You have to do it yourself:
```
import boto3
import botocore
s3 = boto3.resource('s3')
exists = False
try:
    s3.Object('my-bucket', 'dootdoot.jpg').load()
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == "404":
        exists = False
    else:
        raise e
else:
    exists = True
print(exists)
```
That should print True if the key exists and False otherwise. `load()` does a HEAD request for a single key, so it should be relatively quick.
Of course, you might be checking if the object exists because you're planning on using it. If that is the case, you can just forget about the `load()` and do a `get()` directly, then handle the error case there. |
check if a key exists in a bucket in s3 using boto3 | 33,842,944 | 12 | 2015-11-21T11:46:03Z | 34,562,141 | 12 | 2016-01-02T02:53:50Z | [
"python",
"amazon-s3",
"boto3"
] | I would like to know if a key exists in boto3. I can loop the bucket contents and check the key if it matches.
But that seems longer and an overkill. Boto3 official docs explicitly state how to do this.
Maybe I am missing the obvious. Can anybody point me to how I can achieve this? | I'm not a big fan of using exceptions for control flow. This is an alternative approach that works in boto3:
```
import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket')
key = 'dootdoot.jpg'
objs = list(bucket.objects.filter(Prefix=key))
if len(objs) > 0 and objs[0].key == key:
    print("Exists!")
else:
    print("Doesn't exist")
``` |
How to serialize groups of a user with Django-Rest-Framework | 33,844,003 | 5 | 2015-11-21T13:39:13Z | 33,844,179 | 8 | 2015-11-21T13:57:59Z | [
"python",
"django",
"rest",
"django-rest-framework"
] | I'm trying to get a user's groups with Django REST framework, but all I get is an empty field named "groups".
This is my UserSerializer:
```
class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ('url', 'username', 'email', 'is_staff', 'groups')
```
Any ideas how to get the user's groups data?
Thanks in advance. | You have to specify that it's a nested relationship:
```
class GroupSerializer(serializers.ModelSerializer):
    class Meta:
        model = Group
        fields = ('name',)

class UserSerializer(serializers.ModelSerializer):
    groups = GroupSerializer(many=True)

    class Meta:
        model = User
        fields = ('url', 'username', 'email', 'is_staff', 'groups',)
```
Check documentation for more information : [Nested relationships](http://www.django-rest-framework.org/api-guide/relations/#nested-relationships) |
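With that serializer, a response for a user would look roughly like this (values are illustrative, not from a real project):

```json
{
    "url": "http://example.com/users/1/",
    "username": "alice",
    "email": "alice@example.com",
    "is_staff": false,
    "groups": [
        {"name": "editors"},
        {"name": "moderators"}
    ]
}
```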
how to set rmse cost function in tensorflow | 33,846,069 | 3 | 2015-11-21T16:58:43Z | 37,511,638 | 7 | 2016-05-29T15:23:36Z | [
"python",
"tensorflow"
] | I have a cost function in TensorFlow.
```
activation = tf.add(tf.mul(X, W), b)
cost = (tf.pow(Y-y_model, 2)) # use sqr error for cost function
```
Update: I am trying out this example
<https://github.com/nlintz/TensorFlow-Tutorials/blob/master/1_linear_regression.py>
How can I change it to an RMSE cost function?
Please let me know in comments if more info is required. | ```
tf.sqrt(tf.reduce_mean(tf.square(tf.sub(targets, outputs))))
``` |
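As a sanity check, here is the same quantity in plain Python (no TensorFlow required). Note also that `tf.sub` was renamed to `tf.subtract` in TensorFlow 1.0, so on newer versions the expression above reads `tf.sqrt(tf.reduce_mean(tf.square(tf.subtract(targets, outputs))))`:

```python
import math

def rmse(targets, outputs):
    # Root-mean-square error: the square root of the mean squared difference,
    # i.e. the same quantity as the TensorFlow one-liner above.
    diffs = [(t - o) ** 2 for t, o in zip(targets, outputs)]
    return math.sqrt(sum(diffs) / len(diffs))

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # sqrt(4/3) ~= 1.1547
```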
Why is the scapy installation failing on Mac? | 33,849,901 | 4 | 2015-11-21T23:33:41Z | 33,855,601 | 7 | 2015-11-22T14:00:06Z | [
"python",
"osx",
"python-3.x",
"scapy"
] | When I try installing scapy on Mac, I get this error:
```
Collecting scapy
Downloading scapy-2.3.1.zip (1.1MB)
100% |████████████████████████████████| 1.1MB 436kB/s
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 20, in <module>
File "/private/tmp/pip-build-f7vu4fsp/scapy/setup.py", line 35
os.chmod(fname,0755)
^
SyntaxError: invalid token
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /private/tmp/pip-build-f7vu4fsp/scapy
```
I tried using `pip install scapy` and `pip3 install scapy`. | To install scapy for python3, you have to run `pip3 install scapy-python3`. Just `pip3 install scapy` will install old 2.x version, which is not python3 compatible. |
Django Rest Framework Serializer Relations: How to get list of all child objects in parent's serializer? | 33,853,255 | 5 | 2015-11-22T09:15:03Z | 33,853,315 | 8 | 2015-11-22T09:25:03Z | [
"python",
"django",
"serialization",
"django-rest-framework"
] | I'm new to DRF and have just started building an API. I have two models, a child model connected to a parent model with a foreign key. Here is the simplified version of the model I have:
```
class Parent(models.Model):
name = models.CharField(max_length=50)
class Child(models.Model):
parent = models.ForeignKey(Parent)
child_name = models.CharField(max_length=80)
```
To create serializers, I followed the [DRF Serializer Relations](http://www.django-rest-framework.org/api-guide/relations/) and I've created them as the following:
```
class ChildSerializer(serializers.HyperlinkedModelSerializer):
parent_id = serializers.PrimaryKeyRelatedField(queryset=Parent.objects.all(),source='parent.id')
class Meta:
model = Child
fields = ('url','id','child_name','parent_id')
def create(self, validated_data):
child = Child.objects.create(parent=validated_data['parent']['id'], child_name=validated_data['child_name'])
return child
class ParentSerializer(serializers.HyperlinkedModelSerializer):
children = ChildSerializer(many=True, read_only=True)
class Meta:
model = Parent
fields = ('url','id','name','children')
```
I'm trying to get the list of all children in parent's serializer. What I want is to be able to get a response like this:
```
{
'url': 'https://dummyapidomain.com/parents/1/',
'id': '1',
'name': 'Dummy Parent Name',
'children': [
{'id': 1, 'child_name': 'Dummy Children I'},
{'id': 2, 'child_name': 'Dummy Children II'},
{'id': 3, 'child_name': 'Dummy Children III'},
...
],
}
```
I wasn't expecting this to work since there is no link between Parent and Child in the Parent model, [but it is the suggested way to do it in the documentation](http://www.django-rest-framework.org/api-guide/relations/#nested-relationships), and it didn't work.
I'm getting the following error message:
```
Got AttributeError when attempting to get a value for field `children` on serializer `ParentSerializer`.
The serializer field might be named incorrectly and not match any attribute or key on the `Parent` instance.
Original exception text was: 'Parent' object has no attribute 'children'.
```
I think it's perfectly reasonable, but I couldn't understand what I'm missing here.
How can I get the list of all children in parent's serializer? | I think your problem is you forgot to add a **related\_name** for your Children model. I would have the models like this:
```
class Parent(models.Model):
name = models.CharField(max_length=50)
class Child(models.Model):
parent = models.ForeignKey(Parent, related_name='children') # <--- Add related_name
child_name = models.CharField(max_length=80)
```
And I think with this change you will solve the error you're getting. |
Demystifying sharedctypes performance | 33,853,543 | 11 | 2015-11-22T09:56:37Z | 33,915,113 | 13 | 2015-11-25T11:16:39Z | [
"python"
] | In python it is possible to share ctypes objects between multiple processes. However I notice that allocating these objects seems to be extremely expensive.
Consider following code:
```
from multiprocessing import sharedctypes as sct
import ctypes as ct
import numpy as np
n = 100000
l = np.random.randint(0, 10, size=n)
def foo1():
sh = sct.RawArray(ct.c_int, l)
return sh
def foo2():
sh = sct.RawArray(ct.c_int, len(l))
sh[:] = l
return sh
%timeit foo1()
%timeit foo2()
sh1 = foo1()
sh2 = foo2()
for i in range(n):
assert sh1[i] == sh2[i]
```
The output is:
```
10 loops, best of 3: 30.4 ms per loop
100 loops, best of 3: 9.65 ms per loop
```
There are two things that puzzle me:
* Why is explicit allocation and initialization compared to passing a numpy array so much faster?
* Why is allocating shared memory in python so extremely expensive? `%timeit np.arange(n)` only takes `46.4 µs`. There are several orders of magnitude between those timings. | # Sample Code
I rewrote your sample code a little bit to look into this issue. Here's where I landed; I'll use it in my answer below:
`so.py`:
```
from multiprocessing import sharedctypes as sct
import ctypes as ct
import numpy as np
n = 100000
l = np.random.randint(0, 10, size=n)
def sct_init():
sh = sct.RawArray(ct.c_int, l)
return sh
def sct_subscript():
sh = sct.RawArray(ct.c_int, n)
sh[:] = l
return sh
def ct_init():
sh = (ct.c_int * n)(*l)
return sh
def ct_subscript():
sh = (ct.c_int * n)(n)
sh[:] = l
return sh
```
Note that I added two test cases that do not use shared memory (and use a regular `ctypes` array instead).
`timer.py`:
```
import traceback
from timeit import timeit
for t in ["sct_init", "sct_subscript", "ct_init", "ct_subscript"]:
print(t)
try:
print(timeit("{0}()".format(t), setup="from so import {0}".format(t), number=100))
except Exception as e:
print("Failed:", e)
traceback.print_exc()
print()
print("Test")
from so import *
sh1 = sct_init()
sh2 = sct_subscript()
for i in range(n):
assert sh1[i] == sh2[i]
print("OK")
```
# Test results
The results from running the above code using Python 3.6a0 (specifically [`3c2fbdb`](https://github.com/python/cpython/commit/3c2fbdb)) are:
```
sct_init
2.844902500975877
sct_subscript
0.9383537038229406
ct_init
2.7903486443683505
ct_subscript
0.978101353161037
Test
OK
```
What's interesting is that *if you change `n`*, the results scale linearly. For example, using `n = 1000000` (10 times bigger), you get something that's pretty much 10 times slower:
```
sct_init
30.57974253082648
sct_subscript
9.48625904135406
ct_init
30.509132395964116
ct_subscript
9.465419146697968
Test
OK
```
# Speed difference
In the end, the speed difference lies in the hot loop that is called to initialize the array by copying every single value over from the Numpy array (`l`) to the new array (`sh`). This makes sense, because as we noted speed scales linearly with array size.
When you pass the Numpy array as a constructor argument, the function that does this is [`Array_init`](https://github.com/python/cpython/blob/6fd916862e1a93b1578d8eabdefc3979a4d4af62/Modules/_ctypes/_ctypes.c#L4213-L4232). However, if you assign using `sh[:] = l`, then it's [`Array_ass_subscript` that does the job](https://github.com/python/cpython/blob/6fd916862e1a93b1578d8eabdefc3979a4d4af62/Modules/_ctypes/_ctypes.c#L4398-L4453).
Again, what matters here are the hot loops. Let's look at them.
`Array_init` hot loop (slower):
```
for (i = 0; i < n; ++i) {
PyObject *v;
v = PyTuple_GET_ITEM(args, i);
if (-1 == PySequence_SetItem((PyObject *)self, i, v))
return -1;
}
```
`Array_ass_subscript` hot loop (faster):
```
for (cur = start, i = 0; i < otherlen; cur += step, i++) {
PyObject *item = PySequence_GetItem(value, i);
int result;
if (item == NULL)
return -1;
result = Array_ass_item(myself, cur, item);
Py_DECREF(item);
if (result == -1)
return -1;
}
```
As it turns out, the majority of the speed difference lies in using `PySequence_SetItem` vs. `Array_ass_item`.
Indeed, if you change the code for `Array_init` to use `Array_ass_item` instead of `PySequence_SetItem` (`if (-1 == Array_ass_item((PyObject *)self, i, v))`), and recompile Python, the new results become:
```
sct_init
11.504781467840075
sct_subscript
9.381130554247648
ct_init
11.625461496878415
ct_subscript
9.265848568174988
Test
OK
```
Still a bit slower, but not by much.
In other words, most of the overhead is caused by a slower hot loop, and mostly caused by [the code that `PySequence_SetItem` wraps around `Array_ass_item`](https://github.com/python/cpython/blob/1364858e6ec7abfe04d92b7796ae8431eda87a7a/Objects/abstract.c#L1584-L1609).
This code might appear like little overhead at first read, but it really isn't.
`PySequence_SetItem` actually calls into the entire Python machinery to resolve the `__setitem__` method and call it.
This *eventually* resolves in a call to `Array_ass_item`, but only after a large number of levels of indirection (which a direct call to `Array_ass_item` would bypass entirely!)
Going through the rabbit hole, the call sequence looks a bit like this:
* `s->ob_type->tp_as_sequence->sq_ass_item` points to [`slot_sq_ass_item`](https://github.com/python/cpython/blob/cca9b8e3ff022d48eeb76d8567f297bc399fec3a/Objects/typeobject.c#L5790-L5803).
* `slot_sq_ass_item` calls into [`call_method`](https://github.com/python/cpython/blob/cca9b8e3ff022d48eeb76d8567f297bc399fec3a/Objects/typeobject.c#L1439-L1471).
* `call_method` calls into [`PyObject_Call`](https://github.com/python/cpython/blob/1364858e6ec7abfe04d92b7796ae8431eda87a7a/Objects/abstract.c#L2149-L2175)
* And on and on until we eventually get to `Array_ass_item`..!
In other words, we have C code in `Array_init` that's calling Python code (`__setitem__`) in a hot loop. That's slow.
## Why ?
Now, why does Python use `PySequence_SetItem` in `Array_init` and not `Array_ass_item` in `Array_init`?
That's because if it did, it would be bypassing the hooks that are exposed to the developer in Python-land.
Indeed, you *can* intercept calls to `sh[:] = ...` by subclassing the array and overriding `__setitem__` (`__setslice__` in Python 2). It will be called once, with a `slice` argument for the index.
Likewise, defining your own `__setitem__` also overrides the logic in the constructor. It will be called N times, with an integer argument for the index.
This means that if `Array_init` directly called into `Array_ass_item`, then you would lose something: `__setitem__` would no longer be called in the constructor, and you wouldn't be able to override the behavior anymore.
Now can we try to retain the faster speed all the while still exposing the same Python hooks?
Well, perhaps, using this code in `Array_init` instead of the existing hot loop:
```
return PySequence_SetSlice((PyObject*)self, 0, PyTuple_GET_SIZE(args), args);
```
Using this will call into `__setitem__` **once** with a slice argument (on Python 2, it would call into `__setslice__`). We still go through the Python hooks, but we only do it once instead of N times.
Using this code, the performance becomes:
```
sct_init
12.24651838419959
sct_subscript
10.984305887017399
ct_init
12.138383641839027
ct_subscript
11.79078131634742
Test
OK
```
## Other overhead
I think the rest of the overhead may be due to the tuple instantiation that takes place [when calling `__init__` on the array object](https://github.com/python/cpython/blob/6fd916862e1a93b1578d8eabdefc3979a4d4af62/Lib/multiprocessing/sharedctypes.py#L66) (note the `*`, and the fact that `Array_init` expects a tuple for `args`); this presumably scales with `n` as well.
Indeed, if you replace `sh[:] = l` with `sh[:] = tuple(l)` in the test case, then the performance results become *almost* identical. With `n = 1000000`:
```
sct_init
11.538272527977824
sct_subscript
10.985187001060694
ct_init
11.485244687646627
ct_subscript
10.843198659364134
Test
OK
```
There's probably still something smaller going on, but ultimately we're comparing two substantially different hot loops. There's simply little reason to expect them to have identical performance.
I think it might be interesting to try calling `Array_ass_subscript` from `Array_init` for the hot loop and see the results, though!
# Baseline speed
Now, to your second question, regarding allocating shared memory.
Note that there isn't really a cost to allocating *shared* memory. As noted in the results above, there isn't a substantial difference between using shared memory or not.
Looking at the Numpy code (`np.arange` is [implemented here](https://github.com/numpy/numpy/blob/eeba2cbfa4c56447e36aad6d97e323ecfbdade56/numpy/core/src/multiarray/multiarraymodule.c#L2912-L2930)), we can finally understand why it's so much faster than `sct.RawArray`: **`np.arange` doesn't appear to make calls to Python "user-land"** (i.e. no call to `PySequence_GetItem` or `PySequence_SetItem`).
That doesn't necessarily explain *all* the difference, but you'd probably want to start investigating there. |
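One practical footnote (my addition, not part of the benchmarks above): if NumPy is already in the picture, you can sidestep the slow per-element loop entirely by viewing the `RawArray` through NumPy and doing one bulk copy:

```python
from multiprocessing import sharedctypes as sct
import ctypes as ct
import numpy as np

n = 1000
l = np.random.randint(0, 10, size=n)

sh = sct.RawArray(ct.c_int, n)
# np.ctypeslib.as_array wraps the shared buffer as an ndarray view, so the
# slice assignment below is a single vectorized copy: no PySequence_SetItem
# calls, and no Python-level loop at all.
view = np.ctypeslib.as_array(sh)
view[:] = l
```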
Generate random number outside of range in python | 33,857,855 | 18 | 2015-11-22T17:30:18Z | 33,859,020 | 8 | 2015-11-22T19:11:51Z | [
"python",
"random",
"range"
] | I'm currently working on a pygame game and I need to place objects randomly on the screen, except they cannot be within a designated rectangle. Is there an easy way to do this rather than continuously generating a random pair of coordinates until it's outside of the rectangle?
Here's a rough example of what the screen and the rectangle look like.
```
______________
| __ |
| |__| |
| |
| |
|______________|
```
Where the screen size is 1000x800 and the rectangle is [x: 500, y: 250, width: 100, height: 75]
A more code oriented way of looking at it would be
```
x = random_int
0 <= x <= 1000
and
500 > x or 600 < x
y = random_int
0 <= y <= 800
and
250 > y or 325 < y
``` | This offers an O(1) approach in terms of both time and memory.
**Rationale**
The accepted answer along with some other answers seem to hinge on the necessity to generate lists of *all* possible coordinates, or recalculate until there is an acceptable solution. Both approaches take more time and memory than necessary.
Note that depending on the requirements for uniformity of coordinate generation, there are different solutions as is shown below.
**First attempt**
My approach is to randomly choose only valid coordinates around the designated box (think *left/right*, *top/bottom*), then select at random which side to choose:
```
import random
# set bounding boxes
maxx=1000
maxy=800
blocked_box = [(500, 250), (100, 75)]
# generate left/right, top/bottom and choose as you like
def gen_rand_limit(p1, dim):
x1, y1 = p1
w, h = dim
x2, y2 = x1 + w, y1 + h
left = random.randrange(0, x1)
right = random.randrange(x2+1, maxx-1)
top = random.randrange(0, y1)
bottom = random.randrange(y2, maxy-1)
return random.choice([left, right]), random.choice([top, bottom])
# check boundary conditions are met
def check(x, y, p1, dim):
x1, y1 = p1
w, h = dim
x2, y2 = x1 + w, y1 + h
assert 0 <= x <= maxx, "0 <= x(%s) <= maxx(%s)" % (x, maxx)
assert x1 > x or x2 < x, "x1(%s) > x(%s) or x2(%s) < x(%s)" % (x1, x, x2, x)
assert 0 <= y <= maxy, "0 <= y(%s) <= maxy(%s)" %(y, maxy)
assert y1 > y or y2 < y, "y1(%s) > y(%s) or y2(%s) < y(%s)" % (y1, y, y2, y)
# sample
points = []
for i in xrange(1000):
x,y = gen_rand_limit(*blocked_box)
check(x, y, *blocked_box)
points.append((x,y))
```
**Results**
Given the constraints as outlined in the OP, this actually produces random coordinates (blue) around the designated rectangle (red) as desired, however leaves out any of the valid points that are outside the rectangle but fall within the respective x or y dimensions of the rectangle:
[](http://i.stack.imgur.com/MjwoC.png)
```
# visual proof via matplotlib
import matplotlib
from matplotlib import pyplot as plt
from matplotlib.patches import Rectangle
X,Y = zip(*points)
fig = plt.figure()
ax = plt.scatter(X, Y)
p1 = blocked_box[0]
w,h = blocked_box[1]
rectangle = Rectangle(p1, w, h, fc='red', zorder=2)
ax = plt.gca()
plt.axis((0, maxx, 0, maxy))
ax.add_patch(rectangle)
```
**Improved**
This is easily fixed by limiting only either x or y coordinates (note that `check` is no longer valid, comment to run this part):
```
def gen_rand_limit(p1, dim):
x1, y1 = p1
w, h = dim
x2, y2 = x1 + w, y1 + h
# should we limit x or y?
limitx = random.choice([0,1])
limity = not limitx
# generate x, y O(1)
if limitx:
left = random.randrange(0, x1)
right = random.randrange(x2+1, maxx-1)
x = random.choice([left, right])
y = random.randrange(0, maxy)
else:
x = random.randrange(0, maxx)
top = random.randrange(0, y1)
bottom = random.randrange(y2, maxy-1)
y = random.choice([top, bottom])
return x, y
```
[](http://i.stack.imgur.com/o5pwN.png)
**Adjusting the random bias**
As pointed out in the comments this solution suffers from a bias given to points outside the rows/columns of the rectangle. The following fixes that *in principle* by giving each coordinate the same probability:
```
def gen_rand_limit(p1, dim):
x1, y1 = p1
w, h = dim
x2, y2 = x1 + w, y1 + h
# generate x, y O(1)
# --x
left = random.randrange(0, x1)
right = random.randrange(x2+1, maxx)
withinx = random.randrange(x1, x2+1)
# adjust probability of a point outside the box columns
# a point outside has probability (1/(maxx-w)) v.s. a point inside has 1/w
# the same is true for rows; adjpx/adjpy adjust for this probability
adjpx = ((maxx - w)/w/2)
x = random.choice([left, right] * adjpx + [withinx])
# --y
top = random.randrange(0, y1)
bottom = random.randrange(y2+1, maxy)
withiny = random.randrange(y1, y2+1)
if x == left or x == right:
adjpy = ((maxy- h)/h/2)
y = random.choice([top, bottom] * adjpy + [withiny])
else:
y = random.choice([top, bottom])
return x, y
```
The following plot has 10'000 points to illustrate the uniform placement of points (the points overlaying the box' border are due to point size).
Disclaimer: Note that this plot places the red box in the very middle such that `top/bottom`, `left/right` have the same probability among each other. The adjustment thus is relative to the blocking box, but not for all areas of the graph. A final solution requires adjusting the probabilities for each of these separately.
[](http://i.stack.imgur.com/WlDwy.png)
**Simpler solution, yet slightly modified problem**
It turns out that adjusting the probabilities for different areas of the coordinate system is quite tricky. After some thinking I came up with a slightly modified approach:
Realizing that on any 2D coordinate system blocking out a rectangle divides the area into N sub-areas (N=8 in the case of the question) where a valid coordinate can be chosen. Looking at it this way, we can define the valid sub-areas as boxes of coordinates. Then we can choose a box at random and a coordinate at random from within that box:
```
def gen_rand_limit(p1, dim):
x1, y1 = p1
w, h = dim
x2, y2 = x1 + w, y1 + h
# generate x, y O(1)
boxes = (
((0,0),(x1,y1)), ((x1,0),(x2,y1)), ((x2,0),(maxx,y1)),
((0,y1),(x1,y2)), ((x2,y1),(maxx,y2)),
((0,y2),(x1,maxy)), ((x1,y2),(x2,maxy)), ((x2,y2),(maxx,maxy)),
)
box = boxes[random.randrange(len(boxes))]
x = random.randrange(box[0][0], box[1][0])
y = random.randrange(box[0][1], box[1][1])
return x, y
```
Note this is not generalized, as the blocked box may not be in the middle, hence `boxes` would look different. As this results in each box being chosen with the same probability, we get the same *number* of points in each box. Obviously the density is higher in smaller boxes:
[](http://i.stack.imgur.com/83xD5.png)
If the requirement is to generate a uniform distribution among all possible coordinates, the solution is to calculate `boxes` such that each box is about the same size as the blocking box. YMMV |
Generate random number outside of range in python | 33,857,855 | 18 | 2015-11-22T17:30:18Z | 33,861,819 | 10 | 2015-11-23T00:02:04Z | [
"python",
"random",
"range"
] | I'm currently working on a pygame game and I need to place objects randomly on the screen, except they cannot be within a designated rectangle. Is there an easy way to do this rather than continuously generating a random pair of coordinates until it's outside of the rectangle?
Here's a rough example of what the screen and the rectangle look like.
```
______________
| __ |
| |__| |
| |
| |
|______________|
```
Where the screen size is 1000x800 and the rectangle is [x: 500, y: 250, width: 100, height: 75]
A more code oriented way of looking at it would be
```
x = random_int
0 <= x <= 1000
and
500 > x or 600 < x
y = random_int
0 <= y <= 800
and
250 > y or 325 < y
``` | 1. Partition the box into a set of sub-boxes.
2. Among the valid sub-boxes, choose which one to place your point in with probability proportional to their areas
3. Pick a random point uniformly at random from within the chosen sub-box.
[](http://i.stack.imgur.com/EIZ68.png)
This will generate samples from the uniform probability distribution on the valid region, based on the chain rule of conditional probability. |
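A minimal sketch of those three steps, using the screen and rectangle from the question (`random.choices` needs Python 3.6+):

```python
import random

SCREEN_W, SCREEN_H = 1000, 800
RX, RY, RW, RH = 500, 250, 100, 75  # blocked rectangle

def random_point_outside():
    x2, y2 = RX + RW, RY + RH
    # 1. Partition the screen into the eight valid sub-boxes around the rectangle.
    boxes = [(bx1, by1, bx2, by2)
             for bx1, bx2 in ((0, RX), (RX, x2), (x2, SCREEN_W))
             for by1, by2 in ((0, RY), (RY, y2), (y2, SCREEN_H))
             if (bx1, by1) != (RX, RY)]  # skip the blocked box itself
    # 2. Pick a sub-box with probability proportional to its area.
    areas = [(b[2] - b[0]) * (b[3] - b[1]) for b in boxes]
    bx1, by1, bx2, by2 = random.choices(boxes, weights=areas)[0]
    # 3. Choose a point uniformly within the chosen sub-box.
    return random.uniform(bx1, bx2), random.uniform(by1, by2)
```

Because each sub-box is weighted by its area, the combined distribution is uniform over the whole valid region.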
install psycopg2 on mac osx 10.9.5 [pg_config] [pip] | 33,866,695 | 8 | 2015-11-23T08:42:29Z | 33,866,865 | 13 | 2015-11-23T08:51:13Z | [
"python",
"osx",
"postgresql",
"psycopg2"
] | I'm trying to install `psycopg2` on my macbook. I still get the same error. I found a lot of same questions on stackoverflow but no answer seems to work. I think it is outdated. I'm using:
`Mac osx 10.9.5
Python 3.4.3`
My error code is:
> Running setup.py egg\_info for package psycopg2 Error: pg\_config
> executable not found.
>
> Please add the directory containing pg\_config to the PATH or specify
> the full executable path with the option:
>
> python setup.py build\_ext --pg-config /path/to/pg\_config build ...
>
> or with the pg\_config option in 'setup.cfg'. Complete output from
> command python setup.py egg\_info: running egg\_info
>
> writing pip-egg-info/psycopg2.egg-info/PKG-INFO
>
> writing top-level names to
> pip-egg-info/psycopg2.egg-info/top\_level.txt
>
> writing dependency\_links to
> pip-egg-info/psycopg2.egg-info/dependency\_links.txt
>
> warning: manifest\_maker: standard file '-c' not found
>
> Error: pg\_config executable not found.
>
> Please add the directory containing pg\_config to the PATH
>
> or specify the full executable path with the option:
>
> python setup.py build\_ext --pg-config /path/to/pg\_config build ...
>
> or with the pg\_config option in 'setup.cfg'.
>
> ---------------------------------------- Command python setup.py egg\_info failed with error code 1 in
> /Users/stevengerrits/build/psycopg2 Storing complete log in
> /Users/stevengerrits/Library/Logs/pip.log | You don't seem to have Postgres installed. Check how to install PostgreSQL on your system; one of the ways is
`brew install postgresql` (if you use Homebrew; recommended)
or download the Postgres app from postgresapp.com. `pg_config` should come with Postgres, and it is what psycopg2 is trying to find. |
install psycopg2 on mac osx 10.9.5 [pg_config] [pip] | 33,866,695 | 8 | 2015-11-23T08:42:29Z | 35,817,509 | 8 | 2016-03-05T17:44:24Z | [
"python",
"osx",
"postgresql",
"psycopg2"
] | I'm trying to install `psycopg2` on my macbook. I still get the same error. I found a lot of same questions on stackoverflow but no answer seems to work. I think it is outdated. I'm using:
`Mac osx 10.9.5
Python 3.4.3`
My error code is:
> Running setup.py egg\_info for package psycopg2 Error: pg\_config
> executable not found.
>
> Please add the directory containing pg\_config to the PATH or specify
> the full executable path with the option:
>
> python setup.py build\_ext --pg-config /path/to/pg\_config build ...
>
> or with the pg\_config option in 'setup.cfg'. Complete output from
> command python setup.py egg\_info: running egg\_info
>
> writing pip-egg-info/psycopg2.egg-info/PKG-INFO
>
> writing top-level names to
> pip-egg-info/psycopg2.egg-info/top\_level.txt
>
> writing dependency\_links to
> pip-egg-info/psycopg2.egg-info/dependency\_links.txt
>
> warning: manifest\_maker: standard file '-c' not found
>
> Error: pg\_config executable not found.
>
> Please add the directory containing pg\_config to the PATH
>
> or specify the full executable path with the option:
>
> python setup.py build\_ext --pg-config /path/to/pg\_config build ...
>
> or with the pg\_config option in 'setup.cfg'.
>
> ---------------------------------------- Command python setup.py egg\_info failed with error code 1 in
> /Users/stevengerrits/build/psycopg2 Storing complete log in
> /Users/stevengerrits/Library/Logs/pip.log | To install `psycopg2` you need to have a PostgreSQL server installed first (I have installed [Postgres.app](http://postgresapp.com)).
Manually run a command that adds the directory containing the `pg_config` program to the `PATH` env variable; in my case:
```
export PATH=$PATH:/Applications/Postgres.app/Contents/Versions/9.4/bin/
```
and then run
```
pip3 install psycopg2
``` |
Merging dictionary keys if values the same | 33,871,034 | 5 | 2015-11-23T12:23:50Z | 33,871,138 | 7 | 2015-11-23T12:29:14Z | [
"python"
] | So this is a weird problem that I suspect is really simple to solve. I'm building a lyrics webapp for remote players in my house. It currently generates a dictionary of players with the song they're playing. Eg:
```
{
'bathroom': <Song: Blur - Song 2>,
'bedroom1': <Song: Blur - Song 2>,
'kitchen': <Song: Meat Loaf - I'd Do Anything for Love (But I Won't Do That)>,
}
```
Occasionally subsets of these players are synced. So "as above" they display the same value. **I'd like to group these in the interface.** I could be more intelligent when I'm building the dictionary, but assuming I won't do that, is there a good way to merge keys by value?
The desired output from the above would be, something like:
```
{
'bathroom,bedroom1': <Song: Blur - Song 2>,
'kitchen': <Song: Meat Loaf - I'd Do Anything for Love (But I Won't Do That)>,
}
```
However this does break how I'd like to look things up (I'd like to specify by name, hence this is a dictionary)... Is there a better collection that can have multiple keys per value and indicate when there are merged duplicates (and backwards-refer to all their keys)?
---
There is a good answer which flips this around to a key of songs, and list of players as a value. This is great except that sometimes I want to know which song is playing on a named player. That's why I originally went with a dictionary.
Is there a good way to preserve a lookup in both directions (short of keeping both collections around)? | ```
from itertools import groupby
x = {
'bathroom': 'a',
'bedroom1': 'a',
'kitchen': 'b'
}
{
','.join(i[0] for i in v): k
for k,v in groupby(sorted(x.iteritems(), key=lambda p: p[1]), lambda p: p[1])
}
``` |
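Regarding the follow-up about looking things up in both directions: one option that avoids maintaining two synced collections by hand is to keep the original player-to-song dict as the source of truth and derive the grouped view from it on demand, e.g. (Python 3 here, so `.items()` rather than `.iteritems()`):

```python
from collections import defaultdict

players = {
    'bathroom': 'Blur - Song 2',
    'bedroom1': 'Blur - Song 2',
    'kitchen': "Meat Loaf - I'd Do Anything for Love",
}

def group_by_song(players):
    # Invert player -> song into song -> [players];
    # synced players fall into the same group automatically.
    grouped = defaultdict(list)
    for player, song in players.items():
        grouped[song].append(player)
    return dict(grouped)

grouped = group_by_song(players)
```

Per-player lookup stays `players[name]`, and the grouped view is cheap to rebuild whenever the interface renders, so the two can never drift apart.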
String substitution performance in python | 33,872,176 | 4 | 2015-11-23T13:22:50Z | 33,872,252 | 7 | 2015-11-23T13:26:19Z | [
"python",
"string",
"performance",
"list"
] | I have a list of ~50,000 strings (titles), and a list of ~150 words to remove from these titles, if they are found. My code so far is below. The final output should be the list of 50,000 strings, with all instances of the 150 words removed. I would like to know the most efficient (performance-wise) way of doing this. My code seems to be running, albeit very slowly...
```
excludes = GetExcludes()
titles = GetTitles()
titles_alpha = []
titles_excl = []
for k in range(len(titles)):
#remove all non-alphanumeric characters
s = re.sub('[^0-9a-zA-Z]+', ' ',titles[k])
#remove extra white space
s = re.sub( '\s+', ' ', s).strip()
#lowercase
s = s.lower()
titles_alpha.append(s)
#remove any excluded words
for i in range (len(excludes)):
titles_excl.append(titles_alpha[k].replace(excludes[i],''))
print titles_excl
``` | A lot of the performance overhead of regular expressions comes from compiling the regular expressions. You should move the compilation of the regular expression out of the loop.
This should give you a considerable improvement:
```
pattern1 = re.compile('[^0-9a-zA-Z]+')
pattern2 = re.compile('\s+')
for k in range(len(titles)):
#remove all non-alphanumeric characters
s = re.sub(pattern1,' ',titles[k])
#remove extra white space
s = re.sub(pattern2,' ', s).strip()
```
Some tests with `wordlist.txt` from [here](https://raw.githubusercontent.com/danielmiessler/SecLists/master/Passwords/10k_most_common.txt):
```
import re
def noncompiled():
with open("wordlist.txt",'r') as f:
titles = f.readlines()
titles = ["".join([title,nonalpha]) for title in titles for nonalpha in "!@#$%"]
for k in range(len(titles)):
#remove all non-alphanumeric characters
s = re.sub('[^0-9a-zA-Z]+', ' ',titles[k])
#remove extra white space
s = re.sub( '\s+', ' ', s).strip()
def compiled():
with open("wordlist.txt",'r') as f:
titles = f.readlines()
titles = ["".join([title,nonalpha]) for title in titles for nonalpha in "!@#$%"]
pattern1=re.compile('[^0-9a-zA-Z]+')
pattern2 = re.compile( '\s+')
for k in range(len(titles)):
#remove all non-alphanumeric characters
s = pattern1.sub('',titles[k])
#remove extra white space
s = pattern2.sub('', s)
In [2]: %timeit noncompiled()
1 loops, best of 3: 292 ms per loop
In [3]: %timeit compiled()
10 loops, best of 3: 176 ms per loop
```
To remove the "bad words" from your excludes list, you should as @zsquare suggested create a joined regex, which will most likely be the fastest that you can get.
```
def with_excludes():
with open("wordlist.txt",'r') as f:
titles = f.readlines()
titles = ["".join([title,nonalpha]) for title in titles for nonalpha in "!@#$%"]
pattern1=re.compile('[^0-9a-zA-Z]+')
pattern2 = re.compile( '\s+')
excludes = ["shit","poo","ass","love","boo","ch"]
excludes_regex = re.compile('|'.join(excludes))
for k in range(len(titles)):
#remove all non-alphanumeric characters
s = pattern1.sub('',titles[k])
#remove extra white space
s = pattern2.sub('', s)
#remove bad words
s = excludes_regex.sub('', s)
In [2]: %timeit with_excludes()
1 loops, best of 3: 251 ms per loop
```
You can take this approach one step further by just compiling a master regex:
```
def master():
with open("wordlist.txt",'r') as f:
titles = f.readlines()
titles = ["".join([title,nonalpha]) for title in titles for nonalpha in "!@#$%"]
excludes = ["shit","poo","ass","love","boo","ch"]
nonalpha='[^0-9a-zA-Z]+'
whitespace='\s+'
badwords = '|'.join(excludes)
master_regex=re.compile('|'.join([nonalpha,whitespace,badwords]))
for k in range(len(titles)):
#remove all non-alphanumeric characters
s = master_regex.sub('',titles[k])
In [2]: %timeit master()
10 loops, best of 3: 148 ms per loop
```
You can gain some more speed by avoiding the iteration in python:
```
result = [master_regex.sub('',item) for item in titles]
In [4]: %timeit list_comp()
10 loops, best of 3: 139 ms per loop
```
*Note: The data generation step:*
```
def baseline():
with open("wordlist.txt",'r') as f:
titles = f.readlines()
titles = ["".join([title,nonalpha]) for title in titles for nonalpha in "!@#$%"]
In [2]: %timeit baseline()
10 loops, best of 3: 24.8 ms per loop
``` |
List sorting in Python | 33,875,422 | 6 | 2015-11-23T16:05:57Z | 33,875,631 | 10 | 2015-11-23T16:16:47Z | [
"python",
"list",
"sorting",
"python-3.x"
] | I have N-numbers lists, but for instance 3 lists:
```
a = [1,1,1,1]
b = [2,2,2,2]
c = [3,3,3,3]
```
And I want to get output like this:
```
f = [1,2,3]
g = [1,2,3]
```
And so on.
The issue is that the solution has to be independent of the number of lists and the number of items inside each list.
For example:
```
a = [1,1]
b = [2]
c = [3,3,3]
# output
f = [1,2,3]
g = [1,3]
h = [3]
``` | You can use [zip\_longest](https://docs.python.org/3/library/itertools.html#itertools.zip_longest)
```
>>> from itertools import zip_longest
>>> a = [1,1]
>>> b = [2]
>>> c = [3,3,3]
>>> f,g,h=[[e for e in li if e is not None] for li in zip_longest(a,b,c)]
>>> f
[1, 2, 3]
>>> g
[1, 3]
>>> h
[3]
```
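Since the question says the solution has to be independent of the number of lists, note that the same idea extends to any number of inputs by unpacking a list of lists (a small addition to the answer above):

```python
from itertools import zip_longest

lists = [[1, 1], [2], [3, 3, 3]]  # any number of input lists
result = [[e for e in group if e is not None] for group in zip_longest(*lists)]
print(result)  # -> [[1, 2, 3], [1, 3], [3]]
```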
If `None` is a potential valid value in the lists, use a sentinel [object](https://docs.python.org/3/library/functions.html#object) instead of the default `None`:
```
>>> b = [None]
>>> sentinel = object()
>>> [[e for e in li if e is not sentinel] for li in zip_longest(a,b,c, fillvalue=sentinel)]
[[1, None, 3], [1, 3], [3]]
``` |
Python modulo result differs from wolfram alpha? | 33,879,279 | 8 | 2015-11-23T19:41:31Z | 33,879,472 | 11 | 2015-11-23T19:52:44Z | [
"python",
"cryptography",
"rsa",
"modulo",
"wolframalpha"
] | When I run my python 3 program:
```
exp = 211
p = 199
q = 337
d = (exp ** (-1)) % ((p - 1)*(q - 1))
```
results in 211^(-1).
But when I run the [calculation in wolfram alpha](http://www.wolframalpha.com/input/?i=%28%20211%5E%28-1%29%29%20mod%20%28%28199%20-%201%29*%28337%20-%201%29%29) I get the result I was expecting.
I did some test outputs and the variables `exp`, `p` and `q` in the program are all the integer values I used in wolfram alpha.
My goal is to derive a private key from a (weakly) encrypted integer.
If I test my wolfram alpha result, I can decrypt the encrypted message correctly. | Wolfram Alpha is computing the [*modular inverse*](https://en.wikipedia.org/wiki/Modular_multiplicative_inverse). That is, it's finding the integer `x` such that
```
exp*x == 1 mod (p - 1)*(q - 1)
```
This is not the same as the modulo operator `%`. Here, Python is simply calculating the remainder when `1/exp` is divided by `(p - 1)*(q - 1)` when given the expression in your question.
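For completeness, a typical implementation of the modular inverse uses the extended Euclidean algorithm (a sketch added for reference; the helper names `egcd`/`modinv` are illustrative):

```python
def egcd(a, b):
    # extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    if a == 0:
        return (b, 0, 1)
    g, y, x = egcd(b % a, a)
    return (g, x - (b // a) * y, y)

def modinv(a, m):
    # modular inverse of a mod m; it only exists when gcd(a, m) == 1
    g, x, _ = egcd(a, m)
    if g != 1:
        raise ValueError('modular inverse does not exist')
    return x % m

print(modinv(211, (199 - 1) * (337 - 1)))  # -> 45403
```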
Copying the Python code from [this answer](http://stackoverflow.com/a/9758173/3923281), you can compute the desired value with Python too:
```
>>> modinv(exp, (p - 1)*(q - 1))
45403
``` |
Get spotify currently playing track | 33,883,360 | 2 | 2015-11-24T00:26:39Z | 33,923,095 | 7 | 2015-11-25T17:40:46Z | [
"python",
"linux",
"spotify"
] | **EDIT : Let's try to clarify all this.**
I'm writing a python script, and I want it to tell me the song that Spotify is currently playing.
I've tried looking for libraries that could help me but didn't find any that are still maintained and working.
I've also looked through Spotify's web API, but it does not provide any way to get that information.
The only potential solution I found would be to grab the title of my Spotify (desktop app) window, but I haven't managed to do that so far.
So basically, what I'm asking is whether anyone knows :
* How to apply the method I'm already trying to use (get the window's title from a program), either in pure python or using an intermediary shell script.
OR
* Any other way to extract that information from Spotify's desktop app or web client.
---
**Original post :**
I'm fiddling with the idea of a python status bar for a linux environment, nothing fancy, just a script tailored to my own usage. What I'm trying to do right now is to display the currently playing track from spotify (namely, the artist and title).
There does not seem to be anything like that in their official web API. I haven't found any third party library that would do that either. Most libraries I found are either deprecated since spotify released their current API, or they are based on said API which does not do what I want.
I've also read a bunch of similar question in here, most of which had no answers, or a deprecated solution.
I thought about grabbing the window title, since it does display the information I need. But not only does that seem really convoluted, I also had difficulties making it happen. I was trying to get it by running a combination of the Linux commands xdotool and xprop inside my script.
It's worth mentioning that since I'm already using the psutil lib for other information, I already have access to Spotify's PID.
Any idea how I could do that ?
And in case my method was the only one you can think of, any idea how to actually make it work ?
Your help will be appreciated. | The Spotify client on Linux implements a D-Bus interface called MPRIS - Media Player Remote Interfacing Specification.
<http://specifications.freedesktop.org/mpris-spec/latest/index.html>
You could access the title (and other metadata) from python like this:
```
import dbus
session_bus = dbus.SessionBus()
spotify_bus = session_bus.get_object("org.mpris.MediaPlayer2.spotify",
"/org/mpris/MediaPlayer2")
spotify_properties = dbus.Interface(spotify_bus,
"org.freedesktop.DBus.Properties")
metadata = spotify_properties.Get("org.mpris.MediaPlayer2.Player", "Metadata")
# The property Metadata behaves like a python dict
for key, value in metadata.items():
print key, value
# To just print the title
print metadata['xesam:title']
``` |
Creating lists with loops in Python | 33,888,298 | 3 | 2015-11-24T08:04:39Z | 33,888,415 | 7 | 2015-11-24T08:12:33Z | [
"python",
"list",
"loops"
] | I'm trying to create a sequence of lists with different variable names that correspond to different lines of a text file. My current code requires me to hard-code the number of lines in the file:
```
with open('ProjectEuler11Data.txt') as numbers:
data = numbers.readlines()
for line in data:
if line == 1:
line1 = line.split()
if line == 2:
line2 = line.split()
if line == 3:
line3 = line.split()
if line == 4:
line4 = line.split()
```
In this case, I would have to continue up to 20, which isn't that bad, but I imagine that there's an easier way that I'm not aware of.
This is for a ProjectEuler problem, but there aren't any spoilers, and also I'm looking for advice on this specific task, rather than my strategy as a whole. Thanks! | ```
with open('ProjectEuler11Data.txt') as numbers:
data = numbers.readlines()
lines = [line.split() for line in data]
```
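As an aside (an addition for this specific Project Euler task; the helper name is illustrative): since the file holds a grid of numbers, you may want to convert the tokens to integers while you're at it:

```python
def read_grid(path):
    # parse each whitespace-separated line of the file into a list of ints
    with open(path) as f:
        return [[int(token) for token in line.split()] for line in f]
```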
I am not sure why you need different variable names for each line when you can have a single list holding all the lines at the end.
You can now simply access the individual lines with `lines[0]`, `lines[1]`, and so on. |
Difference between io.open vs open in python | 33,891,373 | 2 | 2015-11-24T10:38:48Z | 33,891,608 | 7 | 2015-11-24T10:49:58Z | [
"python",
"file",
"python-3.x",
"io",
"python-2.x"
] | In the past, there's `codecs` which got replaced by `io`. Although it seems like it's more advisable to use `io.open`, most introductory python classes still teaches `open`.
There's a question with [Difference between open and codecs.open in Python](http://stackoverflow.com/questions/5250744/difference-between-open-and-codecs-open-in-python) but **is `open` a mere duck-type of `io.open`?**
**If not, why is it better to use `io.open`? And why is it easier to teach with `open`?**
In this post (<http://code.activestate.com/lists/python-list/681909/>), Steven D'Aprano says that the built-in `open` uses `io.open` in the backend. **So should we all refactor our code to use `open` instead of `io.open`?**
**Other than backward compatibility for py2.x, are there any reason to use `io.open` instead of `open` in py3.0?** | Situation in Python3 according to the docs:
> `io.open(file, *[options]*)`
>
> This is an alias for the builtin open() function.
and
> **While the builtin open() and the associated io module are the
> recommended approach** for working with encoded text files, this module
> *[i.e. codecs]* provides additional utility functions and classes that
> allow the use of a wider range of codecs when working with binary
> files
(bold and italics are my edits) |
Correct way of "Absolute Import" in Python 2.7 | 33,893,610 | 24 | 2015-11-24T12:21:06Z | 33,949,448 | 13 | 2015-11-27T02:24:13Z | [
"python",
"python-2.7",
"python-import"
] | * Python 2.7.10
* In virtualenv
* Enable `from __future__ import absolute_import` in each module
The directory tree looks like:
```
Project/
prjt/
__init__.py
pkg1/
__init__.py
module1.py
tests/
__init__.py
test_module1.py
pkg2/
__init__.py
module2.py
tests/
__init__.py
test_module2.py
pkg3/
__init__.py
module3.py
tests/
__init__.py
test_module3.py
data/
log/
```
---
I tried to use the function `compute()` of `pkg2/module2.py` in `pkg1/module1.py` by writing like:
```
# In module1.py
import sys
sys.path.append('/path/to/Project/prjt')
from prjt.pkg2.module2 import compute
```
But when I ran `python module1.py`, the interpreter raised an ImportError that `No module named prjt.pkg2.module2`.
1. What is the correct way of "absolute import"? Do I have to add the path to `Project` to `sys.path`?
2. How could I run `test_module1.py` in the interactive interpreter? By `python prjt/pkg1/tests/test_module1.py` or `python -m prjt/pkg1/tests/test_module1.py`? | ### How Python finds modules
*Python finds modules via `sys.path`; the first entry `sys.path[0]` being `''` means that Python will look for modules in the current working directory*
```
import sys
print sys.path
```
and Python finds third-party modules in `site-packages`,
so to make the absolute import work, you can either:
**append your package to the `sys.path`**
```
import sys
sys.path.append('the_folder_of_your_package')
import module_you_created
module_you_created.fun()
```
**export PYTHONPATH**
the directories listed in `PYTHONPATH` are added to `sys.path` before execution
```
# in your shell, before starting Python:
#   export PYTHONPATH=the_folder_of_your_package

# then verify, inside Python:
import sys
[p for p in sys.path if 'the_folder_of_your_package' in p]
```
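On running the tests, note that `python -m` takes a dotted module path, not a file path, and it must be run from the directory that contains the top-level package (here `Project/`), which puts that directory on `sys.path` automatically. A self-contained sketch that rebuilds a minimal copy of the question's tree in a temporary directory to demonstrate:

```shell
# build a throwaway copy of the question's layout
d="$(mktemp -d)"
mkdir -p "$d/Project/prjt/pkg1/tests"
touch "$d/Project/prjt/__init__.py" \
      "$d/Project/prjt/pkg1/__init__.py" \
      "$d/Project/prjt/pkg1/tests/__init__.py"
echo "print('test module ran')" > "$d/Project/prjt/pkg1/tests/test_module1.py"

# run the test as a module, from the Project directory (dots, not slashes)
cd "$d/Project"
python3 -m prjt.pkg1.tests.test_module1   # prints: test module ran
```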
> How could I run test\_module1.py in the interactive interpreter? By python Project/pkg1/tests/test\_module1.py or python -m Project/pkg1/tests/test\_module1.py?
you can use the idiomatic `if __name__ == '__main__':` guard, and run it with `python Project/pkg1/tests/test_module1.py`
```
if __name__ == '__main__':
main()
``` |
Precedence of "in" in Python | 33,897,137 | 24 | 2015-11-24T15:08:55Z | 33,897,287 | 18 | 2015-11-24T15:15:27Z | [
"python",
"syntax",
"operator-precedence"
] | This is a bit of a (very basic) language-lawyer kind of question. I understand what the code does, and why, so please no elementary explanations.
In an expression, `in` has [higher precedence](https://docs.python.org/3.5/reference/expressions.html?highlight=precedence#operator-precedence) than `and`. So if I write
```
if n in "seq1" and "something":
...
```
it is interpreted just like
```
if (n in "seq1") and "something":
...
```
However, the `in` of a `for` loop has lower precedence than `and` (in fact it has to, otherwise the following would be a syntax error). Hence if a Python beginner [writes](http://stackoverflow.com/a/33880344/699305)
```
for n in "seq1" and "something":
...
```
..., it is equivalent to this:
```
for n in ("seq1" and "something"):
...
```
(which, provided "seq1" is truthy, evaluates to `for n in "something"`).
So, the question: Where is the precedence of the for-loop's `in` keyword specified/documented? I understand that `n in ...` is not an expression in this context (it does not have a value), but is part of the `for` statement's syntax. Still, I'm not sure how/where non-expression precedence is specified. | The word `in` in a `for` loop is part of a *statement*. Statements have no precedence.
`in` the operator, on the other hand, is always going to be part of an expression. Precedence governs the relative priority between operators in expressions.
In statements then, look for the `expression` parts in their documented grammar. For the [`for` statement](https://docs.python.org/2/reference/compound_stmts.html#the-for-statement), the grammar is:
```
for_stmt ::= "for" target_list "in" expression_list ":" suite
["else" ":" suite]
```
The `and` operator in your example is part of the `expression_list` part, but the `"in"` part is *not part of the expression*.
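A quick demonstration of that point (an added example, not from the original answer): the whole `expression_list` is evaluated first, and only then iterated.

```python
# "ab" and "xy" is one expression; since "ab" is truthy it evaluates
# to "xy", and that is what the loop iterates over
chars = []
for n in "ab" and "xy":
    chars.append(n)
print(chars)  # -> ['x', 'y']
```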
The 'order' then is set in Python's grammar rules, which govern the parser. Statements are the top-level constructs, see the [*Top-level components* documentation](https://docs.python.org/2/reference/toplevel_components.html) (with stand-alone expressions being called [*expression statements*](https://docs.python.org/2/reference/simple_stmts.html#expression-statements)). Expressions are always part of a statement, giving statements priority over anything contained in a statement. |
Precedence of "in" in Python | 33,897,137 | 24 | 2015-11-24T15:08:55Z | 33,897,295 | 28 | 2015-11-24T15:15:48Z | [
"python",
"syntax",
"operator-precedence"
] | This is a bit of a (very basic) language-lawyer kind of question. I understand what the code does, and why, so please no elementary explanations.
In an expression, `in` has [higher precedence](https://docs.python.org/3.5/reference/expressions.html?highlight=precedence#operator-precedence) than `and`. So if I write
```
if n in "seq1" and "something":
...
```
it is interpreted just like
```
if (n in "seq1") and "something":
...
```
However, the `in` of a `for` loop has lower precedence than `and` (in fact it has to, otherwise the following would be a syntax error). Hence if a Python beginner [writes](http://stackoverflow.com/a/33880344/699305)
```
for n in "seq1" and "something":
...
```
..., it is equivalent to this:
```
for n in ("seq1" and "something"):
...
```
(which, provided "seq1" is truthy, evaluates to `for n in "something"`).
So, the question: Where is the precedence of the for-loop's `in` keyword specified/documented? I understand that `n in ...` is not an expression in this context (it does not have a value), but is part of the `for` statement's syntax. Still, I'm not sure how/where non-expression precedence is specified. | In the context of a `for` statement, the `in` is just part of the grammar that makes up that compound statement, and so it is distinct from the operator `in`. The Python grammar specification defines a `for` statement [like this](https://docs.python.org/3/reference/compound_stmts.html#for):
```
for_stmt ::= "for" target_list "in" expression_list ":" suite
["else" ":" suite]
```
The point to make is that this particular `in` will not be interpreted as part of *target\_list*, because a comparison operation (e.g. `x in [x]`) is not a valid *target*. Referring to the grammar specification again, *target\_list* and *target* are [defined as follows](https://docs.python.org/3/reference/simple_stmts.html#assignment-statements):
```
target_list ::= target ("," target)* [","]
target ::= identifier
| "(" target_list ")"
| "[" target_list "]"
| attributeref
| subscription
| slicing
| "*" target
```
So the grammar ensures that the parser sees the first `in` token after a *target\_list* as part of the `for ... in ...` statement, and not as a binary operator. This is why trying to write something as strange as `for (x in [x]) in range(5):` will raise a syntax error: Python's grammar does not permit comparisons like `(x in [x])` to be targets.
Therefore a statement such as `for n in "seq1" and "something":` is unambiguous. The *target\_list* part is the identifier `n` and the *expression\_list* part is the iterable that `"seq1" and "something"` evaluates to. As the linked documentation goes on to say, each item from the iterable is assigned to *target\_list* in turn. |
How to retrieve pending and executing Celery tasks with their arguments? | 33,897,388 | 9 | 2015-11-24T15:20:17Z | 33,963,077 | 7 | 2015-11-27T18:31:34Z | [
"python",
"celery"
] | In Celery docs, there is the [example](http://docs.celeryproject.org/en/latest/userguide/workers.html#dump-of-currently-executing-tasks) of inspecting executing tasks:
> You can get a list of active tasks using active():
>
> ```
> >>> i.active()
> [{'worker1.example.com':
> [{'name': 'tasks.sleeptask',
> 'id': '32666e9b-809c-41fa-8e93-5ae0c80afbbf',
> 'args': '(8,)',
> 'kwargs': '{}'}]}]
> ```
But this call returns only representations of arguments, obtained by `repr()`. Is there a way to get the serialized task arguments? | OK, I'm gonna drop this in as an answer. Hope this addresses your concern.
Celery offers up a string for the args. To handle it, and get a list:
```
args = '(5,6,7,8)' # from celery status
as_list = list(eval(args))
```
Of course, `eval()` is a little dangerous, so you may want to use literal eval:
```
import ast
args = '(5,6,7,8)' # from celery status
as_list = list(ast.literal_eval(args))
```
That's how I handle parsing Celery args in my workflows. It's kind of a pain. |
Count number of non-NaN entries in each column of Spark dataframe with Pyspark | 33,900,726 | 16 | 2015-11-24T18:03:54Z | 33,901,312 | 45 | 2015-11-24T18:38:00Z | [
"python",
"apache-spark",
"apache-spark-sql",
"pyspark"
] | I have a very large dataset that is loaded in Hive. It consists of about 1.9 million rows and 1450 columns. I need to determine the "coverage" of each of the columns, meaning, the fraction of rows that have non-NaN values for each column.
Here is my code:
```
from pyspark import SparkContext
from pyspark.sql import HiveContext
import string as string
sc = SparkContext(appName="compute_coverages") ## Create the context
sqlContext = HiveContext(sc)
df = sqlContext.sql("select * from data_table")
nrows_tot = df.count()
covgs=sc.parallelize(df.columns)
.map(lambda x: str(x))
.map(lambda x: (x, float(df.select(x).dropna().count()) / float(nrows_tot) * 100.))
```
Trying this out in the pyspark shell, if I then do covgs.take(10), it returns a rather large error stack. It says that there's a problem in save in the file `/usr/lib64/python2.6/pickle.py`. This is the final part of the error:
```
py4j.protocol.Py4JError: An error occurred while calling o37.__getnewargs__. Trace:
py4j.Py4JException: Method __getnewargs__([]) does not exist
at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:333)
at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:342)
at py4j.Gateway.invoke(Gateway.java:252)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:207)
at java.lang.Thread.run(Thread.java:745)
```
If there is a better way to accomplish this than the way I'm trying, I'm open to suggestions. I can't use pandas, though, as it's not currently available on the cluster I work on and I don't have rights to install it. | Let's start with a dummy data:
```
from pyspark.sql import Row
row = Row("x", "y", "z")
df = sc.parallelize([
row(0, 1, 2), row(None, 3, 4), row(None, None, 5)]).toDF()
## +----+----+---+
## | x| y| z|
## +----+----+---+
## | 0| 1| 2|
## |null| 3| 4|
## |null|null| 5|
## +----+----+---+
```
All you need is a simple aggregation:
```
from pyspark.sql.functions import col, count, sum
def count_not_null(c):
"""Use conversion between boolean and integer
- False -> 0
- True -> 1
"""
return sum(col(c).isNotNull().cast("integer")).alias(c)
exprs = [count_not_null(c) for c in df.columns]
df.agg(*exprs).show()
## +---+---+---+
## | x| y| z|
## +---+---+---+
## | 1| 2| 3|
## +---+---+---+
```
You can also leverage SQL `NULL` semantics to achieve the same result without creating a custom function:
```
df.agg(*[
count(c).alias(c) # vertical (column-wise) operations in SQL ignore NULLs
for c in df.columns
]).show()
## +---+---+---+
## | x| y| z|
## +---+---+---+
## | 1| 2| 3|
## +---+---+---+
```
If you prefer fractions:
```
exprs = [(count_not_null(c) / count("*")).alias(c) for c in df.columns]
df.agg(*exprs).show()
## +------------------+------------------+---+
## | x| y| z|
## +------------------+------------------+---+
## |0.3333333333333333|0.6666666666666666|1.0|
## +------------------+------------------+---+
```
or
```
# COUNT(*) is equivalent to COUNT(1) so NULLs won't be an issue
df.select(*[(count(c) / count("*")).alias(c) for c in df.columns]).show()
## +------------------+------------------+---+
## | x| y| z|
## +------------------+------------------+---+
## |0.3333333333333333|0.6666666666666666|1.0|
## +------------------+------------------+---+
``` |
Can you break a while loop from outside the loop? | 33,906,813 | 2 | 2015-11-25T01:05:48Z | 33,906,858 | 7 | 2015-11-25T01:11:05Z | [
"python",
"python-3.x",
"while-loop",
"boolean-expression"
] | Can you break a while loop from outside the loop? Here's a (very simple) example of what I'm trying to do: I want to ask for input continuously inside a while loop, but when the input is 'exit', I want the while loop to break!
```
active = True
def inputHandler(value):
if value == 'exit':
active = False
while active is True:
userInput = input("Input here: ")
inputHandler(userInput)
``` | In your case, in `inputHandler`, you are creating a new variable called `active` and storing `False` in it. This will not affect the module level `active`.
To fix this, you need to explicitly say that `active` is not a new variable, but the one declared at the top of the module, with the `global` keyword, like this
```
def inputHandler(value):
global active
if value == 'exit':
active = False
```
---
But, please note that the proper way to do this would be to return the result of `inputHandler` and store it back in `active`.
```
def inputHandler(value):
return value != 'exit'
while active:
userInput = input("Input here: ")
active = inputHandler(userInput)
```
If you look at the `while` loop, we used `while active:`. In Python you either have to use `==` to compare the values, or simply rely on the truthiness of the value. `is` operator should be used only when you need to check if the values are one and the same.
---
But, if you totally want to avoid this, you can simply use [`iter`](https://docs.python.org/3/library/functions.html#iter) function which breaks out automatically when the sentinel value is met.
```
for value in iter(lambda: input("Input here: "), 'exit'):
inputHandler(value)
```
Now, `iter` will keep executing the function passed to it, till the function returns the sentinel value (second parameter) passed to it. |
Flatten a bunch of key/value dictionaries into a single dictionary? | 33,908,636 | 2 | 2015-11-25T04:46:39Z | 33,908,661 | 9 | 2015-11-25T04:49:17Z | [
"python"
] | I want to convert this: `[{u'Key': 'color', u'Value': 'red'}, {u'Key': 'size', u'Value': 'large'}]` into this: `{'color': 'red', 'size': 'large'}`.
Anyone have any recommendations? I've been playing with list comprehensions, lambda functions, and `zip()` for over an hour and feel like I'm missing an obvious solution. Thanks! | You can use [dictionary comprehension](https://stackoverflow.com/documentation/python/196/comprehensions/738/dictionary-comprehensions#t=201607261143021995509) and try something like this:
### Python-2.7 or Python-3.x
```
>>> a = [{u'Key': 'color', u'Value': 'red'}, {u'Key': 'size', u'Value': 'large'}]
>>> b = {i['Key']:i['Value'] for i in a}
>>> b
{'color': 'red', 'size': 'large'}
```
### Python-2.6
```
b = dict((i['Key'], i['Value']) for i in a)
``` |
Why does a regular expression containing '\' work without it being a raw string? | 33,915,134 | 5 | 2015-11-25T11:17:45Z | 33,915,210 | 8 | 2015-11-25T11:21:56Z | [
"python",
"regex",
"python-3.4"
] | Please refer to this Regular Expression HOWTO for python3
<https://docs.python.org/3/howto/regex.html#performing-matches>
```
>>> p = re.compile('\d+')
>>> p.findall('12 drummers drumming, 11 pipers piping, 10 lords a-leaping')
['12', '11', '10']
```
I have read that for regular expressions containing `'\'`, raw strings should be used, like `r'\d+'`, but in this code snippet `re.compile('\d+')` is used without the `r` specifier. And it works fine. Why does it work in the first place? Why does this regular expression not need an 'r' preceding it? | It happens to work because `'\d'` doesn't correspond to a special character the way `'\n'` or `'\t'` do; when Python sees an unrecognized escape sequence in a string literal, it leaves the backslash in place, so `'\d'` and `r'\d'` are the same string. Sometimes a raw string turns out the same as the regular string version. Generally, though, raw strings will ensure that you don't get any surprises in your expression. |
Rewrite of zip function won't work | 33,918,006 | 4 | 2015-11-25T13:40:46Z | 33,918,092 | 7 | 2015-11-25T13:45:00Z | [
"python"
] | I'm rewriting the zip function as practice for my Python skills. The aim is to write it using a list comprehension, although I am not 100% sure I am fully comfortable with that yet, hence this exercise.
Here is what I have so far:
```
def zip(l1, l2):
return [(l1[0], l2[0])] + zip(l1[1:], l2[1:])
z = zip(['a', 'b', 'c'], [1,2,3])
for i in z: print(i)
```
And here is the error I am getting, which I am unsure of how to fix!
```
Traceback (most recent call last):
File "path-omitted", line 47, in <module>
z = zip(['a', 'b', 'c'], [1, 2,3])
File "path-omitted", line 45, in zip
return [(l1[0], l2[0])] + zip(l1[1:], l2[1:])
File "path-omitted", line 45, in zip
return [(l1[0], l2[0])] + zip(l1[1:], l2[1:])
File "path-omitted", line 45, in zip
return [(l1[0], l2[0])] + zip(l1[1:], l2[1:])
File "path-omitted", line 45, in zip
return [(l1[0], l2[0])] + zip(l1[1:], l2[1:])
IndexError: list index out of range
``` | Your `zip` function implementation is recursive. At some point `l1[1:]` or `l2[1:]` will become empty, and attempts to access the first element will fail with `IndexError`.
Check if both `l1` and `l2` are nonempty and return empty list if they are:
```
def zip(l1, l2):
if not (l1 and l2):
return []
return [(l1[0], l2[0])] + zip(l1[1:], l2[1:])
```
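Since the stated goal was practicing list comprehensions, here is also a non-recursive sketch using one (my addition, not part of the original fix; named `zip_lc` here to avoid shadowing the built-in):

```python
def zip_lc(l1, l2):
    # pair items by index, stopping at the end of the shorter list
    return [(l1[i], l2[i]) for i in range(min(len(l1), len(l2)))]

print(zip_lc(['a', 'b', 'c'], [1, 2, 3]))  # -> [('a', 1), ('b', 2), ('c', 3)]
```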
Or you could catch `IndexError` and return `[]`:
```
def zip(l1, l2):
try:
return [(l1[0], l2[0])] + zip(l1[1:], l2[1:])
except IndexError:
return []
``` |
What type signature do generators have in Python? | 33,918,325 | 7 | 2015-11-25T13:54:59Z | 33,918,389 | 8 | 2015-11-25T13:57:36Z | [
"python",
"python-3.x",
"annotations",
"type-hinting"
] | Given that the new Python 3.5 allows type hinting with type signatures I want to use the new feature, but I don't know how to fully annotate a function with the following structure:
```
def yieldMoreIfA(text:str):
if text == "A":
yield text
yield text
return
else:
yield text
return
```
What's the correct signature? | There is a [`Generator[yield_type, send_type, return_type]` type](https://docs.python.org/3/library/typing.html#typing.Generator):
```
from typing import Generator
def yieldMoreIfA(text: str) -> Generator[str, None, None]:
if text == "A":
yield text
yield text
return
else:
yield text
return
``` |
How to set adaptive learning rate for GradientDescentOptimizer? | 33,919,948 | 18 | 2015-11-25T15:08:58Z | 33,922,859 | 42 | 2015-11-25T17:28:00Z | [
"python",
"tensorflow"
] | I am using TensorFlow to train a neural network. This is how I am initializing the `GradientDescentOptimizer`:
```
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
mse = tf.reduce_mean(tf.square(out - out_))
train_step = tf.train.GradientDescentOptimizer(0.3).minimize(mse)
```
The thing here is that I don't know how to set an update rule for the learning rate or a decay value for that.
How can I use an adaptive learning rate here? | First of all, `tf.train.GradientDescentOptimizer` is designed to use a constant learning rate for all variables in all steps. TensorFlow also provides out-of-the-box adaptive optimizers including the [`tf.train.AdagradOptimizer`](http://www.tensorflow.org/api_docs/python/train.html#AdagradOptimizer) and the [`tf.train.AdamOptimizer`](http://www.tensorflow.org/api_docs/python/train.html#AdamOptimizer), and these can be used as drop-in replacements.
However, if you want to control the learning rate with otherwise-vanilla gradient descent, you can take advantage of the fact that the `learning_rate` argument to the [`tf.train.GradientDescentOptimizer` constructor](http://www.tensorflow.org/api_docs/python/train.html#GradientDescentOptimizer.__init__) can be a `Tensor` object. This allows you to compute a different value for the learning rate in each step, for example:
```
learning_rate = tf.placeholder(tf.float32, shape=[])
# ...
train_step = tf.train.GradientDescentOptimizer(
learning_rate=learning_rate).minimize(mse)
sess = tf.Session()
# Feed different values for learning rate to each training step.
sess.run(train_step, feed_dict={learning_rate: 0.1})
sess.run(train_step, feed_dict={learning_rate: 0.1})
sess.run(train_step, feed_dict={learning_rate: 0.01})
sess.run(train_step, feed_dict={learning_rate: 0.01})
```
Alternatively, you could create a scalar `tf.Variable` that holds the learning rate, and assign it each time you want to change the learning rate. |
How to set adaptive learning rate for GradientDescentOptimizer? | 33,919,948 | 18 | 2015-11-25T15:08:58Z | 33,931,754 | 27 | 2015-11-26T06:14:54Z | [
"python",
"tensorflow"
] | I am using TensorFlow to train a neural network. This is how I am initializing the `GradientDescentOptimizer`:
```
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
mse = tf.reduce_mean(tf.square(out - out_))
train_step = tf.train.GradientDescentOptimizer(0.3).minimize(mse)
```
The thing here is that I don't know how to set an update rule for the learning rate or a decay value for that.
How can I use an adaptive learning rate here? | Tensorflow provides an op to automatically apply an exponential decay to a learning rate tensor: [`tf.train.exponential_decay`](http://www.tensorflow.org/api_docs/python/train.html#exponential_decay). For an example of it in use, see [this line in the MNIST convolutional model example](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/mnist/convolutional.py#L245). Then use @mrry's suggestion above to supply this variable as the learning\_rate parameter to your optimizer of choice.
The key excerpt to look at is:
```
# Optimizer: set up a variable that's incremented once per batch and
# controls the learning rate decay.
batch = tf.Variable(0)
learning_rate = tf.train.exponential_decay(
0.01, # Base learning rate.
batch * BATCH_SIZE, # Current index into the dataset.
train_size, # Decay step.
0.95, # Decay rate.
staircase=True)
# Use simple momentum for the optimization.
optimizer = tf.train.MomentumOptimizer(learning_rate,
0.9).minimize(loss,
global_step=batch)
```
Note the `global_step=batch` parameter to minimize. That tells the optimizer to helpfully increment the 'batch' parameter for you every time it trains. |
Why does TensorFlow return [[nan nan]] instead of probabilities from a CSV file? | 33,922,937 | 6 | 2015-11-25T17:32:07Z | 33,929,658 | 9 | 2015-11-26T02:28:00Z | [
"python",
"csv",
"tensorflow"
] | Here is the code that I am using. I'm trying to get a 1, 0, or hopefully a probability in result to a real test set. When I just split up the training set and run it on the training set I get a ~93% accuracy rate, but when I train the program and run it on the actual test set (the one without the 1's and 0's filling in column 1) it returns nothing but nan's.
```
import tensorflow as tf
import numpy as np
from numpy import genfromtxt
import sklearn
# Convert to one hot
def convertOneHot(data):
y=np.array([int(i[0]) for i in data])
y_onehot=[0]*len(y)
for i,j in enumerate(y):
y_onehot[i]=[0]*(y.max() + 1)
y_onehot[i][j]=1
return (y,y_onehot)
data = genfromtxt('cs-training.csv',delimiter=',') # Training data
test_data = genfromtxt('cs-test-actual.csv',delimiter=',') # Actual test data
#This part is to get rid of the nan's at the start of the actual test data
g = 0
for i in test_data:
i[0] = 1
test_data[g] = i
g += 1
x_train=np.array([ i[1::] for i in data])
y_train,y_train_onehot = convertOneHot(data)
x_test=np.array([ i[1::] for i in test_data])
y_test,y_test_onehot = convertOneHot(test_data)
A=data.shape[1]-1 # Number of features, Note first is y
B=len(y_train_onehot[0])
tf_in = tf.placeholder("float", [None, A]) # Features
tf_weight = tf.Variable(tf.zeros([A,B]))
tf_bias = tf.Variable(tf.zeros([B]))
tf_softmax = tf.nn.softmax(tf.matmul(tf_in,tf_weight) + tf_bias)
# Training via backpropagation
tf_softmax_correct = tf.placeholder("float", [None,B])
tf_cross_entropy = -tf.reduce_sum(tf_softmax_correct*tf.log(tf_softmax))
# Train using tf.train.GradientDescentOptimizer
tf_train_step = tf.train.GradientDescentOptimizer(0.01).minimize(tf_cross_entropy)
# Add accuracy checking nodes
tf_correct_prediction = tf.equal(tf.argmax(tf_softmax,1), tf.argmax(tf_softmax_correct,1))
tf_accuracy = tf.reduce_mean(tf.cast(tf_correct_prediction, "float"))
saver = tf.train.Saver([tf_weight,tf_bias])
# Initialize and run
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
print("...")
# Run the training
for i in range(1):
    sess.run(tf_train_step, feed_dict={tf_in: x_train, tf_softmax_correct: y_train_onehot})
    #print y_train_onehot
saver.save(sess, 'trained_csv_model')
ans = sess.run(tf_softmax, feed_dict={tf_in: x_test})
print ans
#Print accuracy
#result = sess.run(tf_accuracy, feed_dict={tf_in: x_test, tf_softmax_correct: y_test_onehot})
#print result
```
When I print `ans` I get the following.
```
[[ nan nan]
[ nan nan]
[ nan nan]
...,
[ nan nan]
[ nan nan]
[ nan nan]]
```
I don't know what I'm doing wrong here. All I want is for `ans` to yield a 1, a 0, or ideally an array of probabilities where every entry in the array has length 2.
I don't expect that many people are going to be able to answer this question for me, but please try at the very least. I'm stuck here waiting for a stroke of genius moment which hasn't come in 2 days now so I figured that I would ask. Thank you!
The `test_data` comes out looking like this-
```
[[ 1.00000000e+00 8.85519080e-01 4.30000000e+01 ..., 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[ 1.00000000e+00 4.63295269e-01 5.70000000e+01 ..., 4.00000000e+00
0.00000000e+00 2.00000000e+00]
[ 1.00000000e+00 4.32750360e-02 5.90000000e+01 ..., 1.00000000e+00
0.00000000e+00 2.00000000e+00]
...,
[ 1.00000000e+00 8.15963730e-02 7.00000000e+01 ..., 0.00000000e+00
0.00000000e+00 nan]
[ 1.00000000e+00 3.35456547e-01 5.60000000e+01 ..., 2.00000000e+00
1.00000000e+00 3.00000000e+00]
[ 1.00000000e+00 4.41841663e-01 2.90000000e+01 ..., 0.00000000e+00
0.00000000e+00 0.00000000e+00]]
```
And the only reason that the first unit in the data is equal to 1 is because I got rid of the nan's that filled that position in order to avoid errors. Note that everything after the first column is a feature. The first column is what I'm trying to be able to predict.
EDIT:
I changed the code to the following-
```
import tensorflow as tf
import numpy as np
from numpy import genfromtxt
import sklearn
from sklearn.cross_validation import train_test_split
from tensorflow import Print
# Convert to one hot
def convertOneHot(data):
    y=np.array([int(i[0]) for i in data])
    y_onehot=[0]*len(y)
    for i,j in enumerate(y):
        y_onehot[i]=[0]*(y.max() + 1)
        y_onehot[i][j]=1
    return (y,y_onehot)
#buildDataFromIris()
data = genfromtxt('cs-training.csv',delimiter=',') # Training data
test_data = genfromtxt('cs-test-actual.csv',delimiter=',') # Test data
#for i in test_data[0]:
# print i
#print test_data
#print test_data
g = 0
for i in test_data:
    i[0] = 1.
    test_data[g] = i
    g += 1
#print 1, test_data
x_train=np.array([ i[1::] for i in data])
y_train,y_train_onehot = convertOneHot(data)
#print len(x_train), len(y_train), len(y_train_onehot)
x_test=np.array([ i[1::] for i in test_data])
y_test,y_test_onehot = convertOneHot(test_data)
#for u in y_test_onehot[0]:
# print u
#print y_test_onehot
#print len(x_test), len(y_test), len(y_test_onehot)
#print x_test[0]
#print '1'
# A number of features, 4 in this example
# B = 3 species of Iris (setosa, virginica and versicolor)
A=data.shape[1]-1 # Number of features, Note first is y
#print A
B=len(y_train_onehot[0])
#print B
#print y_train_onehot
tf_in = tf.placeholder("float", [None, A]) # Features
tf_weight = tf.Variable(tf.zeros([A,B]))
tf_bias = tf.Variable(tf.zeros([B]))
tf_softmax = tf.nn.softmax(tf.matmul(tf_in,tf_weight) + tf_bias)
tf_bias = tf.Print(tf_bias, [tf_bias], "Bias: ")
tf_weight = tf.Print(tf_weight, [tf_weight], "Weight: ")
tf_in = tf.Print(tf_in, [tf_in], "TF_in: ")
matmul_result = tf.matmul(tf_in, tf_weight)
matmul_result = tf.Print(matmul_result, [matmul_result], "Matmul: ")
tf_softmax = tf.nn.softmax(matmul_result + tf_bias)
print tf_bias
print tf_weight
print tf_in
print matmul_result
# Training via backpropagation
tf_softmax_correct = tf.placeholder("float", [None,B])
tf_cross_entropy = -tf.reduce_sum(tf_softmax_correct*tf.log(tf_softmax))
print tf_softmax_correct
# Train using tf.train.GradientDescentOptimizer
tf_train_step = tf.train.GradientDescentOptimizer(0.01).minimize(tf_cross_entropy)
# Add accuracy checking nodes
tf_correct_prediction = tf.equal(tf.argmax(tf_softmax,1), tf.argmax(tf_softmax_correct,1))
tf_accuracy = tf.reduce_mean(tf.cast(tf_correct_prediction, "float"))
print tf_correct_prediction
print tf_accuracy
#saver = tf.train.Saver([tf_weight,tf_bias])
# Initialize and run
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
print("...")
prediction = []
# Run the training
#probabilities = []
#print y_train_onehot
#print '-----------------------------------------'
for i in range(1):
    sess.run(tf_train_step, feed_dict={tf_in: x_train, tf_softmax_correct: y_train_onehot})
    #print y_train_onehot
    #saver.save(sess, 'trained_csv_model')
ans = sess.run(tf_softmax, feed_dict={tf_in: x_test})
print ans
```
After the print out I see that one of the objects is Boolean. I don't know if that is the issue but take a look at the following and see if there is any way that you can help.
```
Tensor("Print_16:0", shape=TensorShape([Dimension(2)]), dtype=float32)
Tensor("Print_17:0", shape=TensorShape([Dimension(10), Dimension(2)]), dtype=float32)
Tensor("Print_18:0", shape=TensorShape([Dimension(None), Dimension(10)]), dtype=float32)
Tensor("Print_19:0", shape=TensorShape([Dimension(None), Dimension(2)]), dtype=float32)
Tensor("Placeholder_9:0", shape=TensorShape([Dimension(None), Dimension(2)]), dtype=float32)
Tensor("Equal_4:0", shape=TensorShape([Dimension(None)]), dtype=bool)
Tensor("Mean_4:0", shape=TensorShape([]), dtype=float32)
...
[[ nan nan]
[ nan nan]
[ nan nan]
...,
[ nan nan]
[ nan nan]
[ nan nan]]
``` | I don't know the direct answer, but I know how I'd approach debugging it: [`tf.Print`](http://www.tensorflow.org/api_docs/python/control_flow_ops.html#Print). It's an op that prints the value as tensorflow is executing, and returns the tensor for further computation, so you can just sprinkle them inline in your model.
Try throwing in a few of these. Instead of this line:
```
tf_softmax = tf.nn.softmax(tf.matmul(tf_in,tf_weight) + tf_bias)
```
Try:
```
tf_bias = tf.Print(tf_bias, [tf_bias], "Bias: ")
tf_weight = tf.Print(tf_weight, [tf_weight], "Weight: ")
tf_in = tf.Print(tf_in, [tf_in], "TF_in: ")
matmul_result = tf.matmul(tf_in, tf_weight)
matmul_result = tf.Print(matmul_result, [matmul_result], "Matmul: ")
tf_softmax = tf.nn.softmax(matmul_result + tf_bias)
```
to see what Tensorflow thinks the intermediate values are. If the NaNs are showing up earlier in the pipeline, it should give you a better idea of where the problem lies. Good luck! If you get some data out of this, feel free to follow up and we'll see if we can get you further.
Updated to add: Here's a stripped-down debugging version to try, where I got rid of the input functions and just generated some random data:
```
import tensorflow as tf
import numpy as np
def dense_to_one_hot(labels_dense, num_classes=10):
    """Convert class labels from scalars to one-hot vectors."""
    num_labels = labels_dense.shape[0]
    index_offset = np.arange(num_labels) * num_classes
    labels_one_hot = np.zeros((num_labels, num_classes))
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    return labels_one_hot
x_train=np.random.normal(0, 1, [50,10])
y_train=np.random.randint(0, 10, [50])
y_train_onehot = dense_to_one_hot(y_train, 10)
x_test=np.random.normal(0, 1, [50,10])
y_test=np.random.randint(0, 10, [50])
y_test_onehot = dense_to_one_hot(y_test, 10)
# A number of features, 4 in this example
# B = 3 species of Iris (setosa, virginica and versicolor)
A=10
B=10
tf_in = tf.placeholder("float", [None, A]) # Features
tf_weight = tf.Variable(tf.zeros([A,B]))
tf_bias = tf.Variable(tf.zeros([B]))
tf_softmax = tf.nn.softmax(tf.matmul(tf_in,tf_weight) + tf_bias)
tf_bias = tf.Print(tf_bias, [tf_bias], "Bias: ")
tf_weight = tf.Print(tf_weight, [tf_weight], "Weight: ")
tf_in = tf.Print(tf_in, [tf_in], "TF_in: ")
matmul_result = tf.matmul(tf_in, tf_weight)
matmul_result = tf.Print(matmul_result, [matmul_result], "Matmul: ")
tf_softmax = tf.nn.softmax(matmul_result + tf_bias)
# Training via backpropagation
tf_softmax_correct = tf.placeholder("float", [None,B])
tf_cross_entropy = -tf.reduce_sum(tf_softmax_correct*tf.log(tf_softmax))
# Train using tf.train.GradientDescentOptimizer
tf_train_step = tf.train.GradientDescentOptimizer(0.01).minimize(tf_cross_entropy)
# Add accuracy checking nodes
tf_correct_prediction = tf.equal(tf.argmax(tf_softmax,1), tf.argmax(tf_softmax_correct,1))
tf_accuracy = tf.reduce_mean(tf.cast(tf_correct_prediction, "float"))
print tf_correct_prediction
print tf_accuracy
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(1):
    print "Running the training step"
    sess.run(tf_train_step, feed_dict={tf_in: x_train, tf_softmax_correct: y_train_onehot})
    #print y_train_onehot
    #saver.save(sess, 'trained_csv_model')
print "Running the eval step"
ans = sess.run(tf_softmax, feed_dict={tf_in: x_test})
print ans
```
You should see lines starting with "Bias: ", etc. |
KeyError: 'data' with Python Instagram API client | 33,924,581 | 7 | 2015-11-25T19:08:02Z | 35,955,196 | 13 | 2016-03-12T07:58:00Z | [
"python",
"instagram",
"instagram-api",
"keyerror"
] | I'm using this client [`python-instagram`](https://github.com/Instagram/python-instagram) with `Python 3.4.3` on `MacOS`.
My steps:
* Registered a new client on `instagram`, received client\_id and client\_secret
* Pip install python-instagram
* Copy sample\_app.py to my mac
I followed the instructions in the [`Sample app`](https://github.com/Instagram/python-instagram#sample-app), successfully authorized my app via Instagram and tried this [list of examples](http://i.stack.imgur.com/VB7Tt.png), but none of them worked. After I click, the `<h2>` header and the API request counter change and I see `Remaining API Calls = 486/500`.
If I try to get `User Recent Media`, a `KeyError: 'data'` exception shows up in my terminal. If I delete the `try`-`except` construction, leaving only the block that was in `try`, I see 'Error: 500 Internal Server Error' instead.
Here is the traceback:
```
Traceback (most recent call last):
  File "/Users/user/.envs/insta/lib/python3.4/site-packages/bottle.py", line 862, in _handle
    return route.call(**args)
  File "/Users/user/.envs/insta/lib/python3.4/site-packages/bottle.py", line 1732, in wrapper
    rv = callback(*a, **ka)
  File "sample_app.py", line 79, in on_recent
    recent_media, next = api.user_recent_media()
  File "/Users/user/.envs/insta/lib/python3.4/site-packages/instagram/bind.py", line 197, in _call
    return method.execute()
  File "/Users/user/.envs/insta/lib/python3.4/site-packages/instagram/bind.py", line 189, in execute
    content, next = self._do_api_request(url, method, body, headers)
  File "/Users/user/.envs/insta/lib/python3.4/site-packages/instagram/bind.py", line 151, in _do_api_request
    obj = self.root_class.object_from_dictionary(entry)
  File "/Users/user/.envs/insta/lib/python3.4/site-packages/instagram/models.py", line 99, in object_from_dictionary
    for comment in entry['comments']['data']:
KeyError: 'data'
```
All the code I used is from the sample of the official python API client by Instagram. | There is an open [`Github issue`](https://github.com/Instagram/python-instagram/issues/202) for this bug, a [`fix`](https://github.com/shackra/python-instagram/commit/c7af85fa867bf33a2370bc051c45db07f656e0da) was sent, but it's not merged yet.
Add the one-line fix to `models.py` in your installed package.
Open with sudo:
```
sudo vi /Library/Python/2.7/site-packages/instagram/models.py # Use relevant python version
```
On line 99, add this:
```
if "data" in entry["comments"]:
```
Then indent the next two lines under it:
```
    for comment in entry['comments']['data']:
        new_media.comments.append(Comment.object_from_dictionary(comment))
``` |
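Put together, the guard simply skips media entries whose `comments` dictionary has no `data` key. Here is a small standalone illustration (the `entry` dict below is a made-up stand-in for an API response, not real Instagram data):

```python
# Hypothetical media entry whose 'comments' dict lacks a 'data' key,
# mimicking the responses that used to raise KeyError: 'data'.
entry = {'comments': {'count': 0}}

comments = []
if "data" in entry["comments"]:
    for comment in entry['comments']['data']:
        comments.append(comment)

assert comments == []  # no KeyError: the loop is simply skipped
```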
How are these two functions the same? | 33,928,553 | 3 | 2015-11-26T00:03:18Z | 33,928,724 | 7 | 2015-11-26T00:23:09Z | [
"python"
] | Here's the code that Zed Shaw provides in Learning Python the Hard Way:
```
ten_things = "Apples Oranges Crows Telephone Light Sugar"
print "Wait there's not 10 things in that list, let's fix that."
stuff = ten_things.split(' ')
more_stuff = ["Day", "Night", "Song", "Frisbee", "Corn", "Banana", "Girl", "Boy"]
while len(stuff) != 10:
    next_one = more_stuff.pop()
    print "Adding: ", next_one
    stuff.append(next_one)
    print "There's %d items now." % len(stuff)
print "There we go: ", stuff
print "Let's do some things with stuff."
print stuff[1]
print stuff[-1] # whoa! fancy
print stuff.pop()
print ' '.join(stuff) # what? cool!
print '#'.join(stuff[3:5]) # super stellar!
```
Then on one of the study drills, he says:
> 2. Translate these two ways to view the function calls. For example, `' '.join(things)` reads
> as, "Join `things` with `' '` between them." Meanwhile, `join(' ', things)` means, "Call `join`
> with `' '` and `things`." Understand how they are really the same thing.
My problem is, I'm having a tough time seeing how they're the same thing. To my understanding, the first function says take whatever is in `things` and concatenate them with `' '`. But the second function (to my knowledge) says call `join`, using `' '` and `things` as arguments, sort of the way you would use them when defining a function? I'm pretty lost on this... could you clarify? | To be precise, `''.join(things)` and `join('',things)` are not necessarily the same. However, `''.join(things)` and `str.join('',things)` *are* the same. The explanation requires some knowledge of how classes work in Python. I'll be glossing over or ignoring a lot of details that are not totally relevant to this discussion.
One might implement some of the built-in string class this way (disclaimer: this is almost certainly *not* how it's actually done).
```
class str:
    def __init__(self, characters=''):
        self.chars = characters
    def join(self, iterable):
        newString = str()
        for item in iterable:
            newString += item        # item is assumed to be a string, and += is defined elsewhere
            newString += self.chars
        if self.chars:
            newString = newString[:-len(self.chars)]  # remove the trailing instance of self.chars
        return newString
```
Okay, notice that the first argument to each function is `self`. This is just by convention, it could be `potatoes` for all Python cares, but the first argument is always the object itself. Python does this so that you can do `''.join(things)` and have it just work. `''` is the string that `self` will be inside the function, and `things` is the iterable.
`''.join(things)` is not the only way to call this function. You can also call it using `str.join('', things)` because it's a method of the class `str`. As before, `self` will be `''` and `iterable` will be `things`.
This is why these two different ways to do the same thing are equivalent: `''.join(things)` is [syntactic sugar](https://en.wikipedia.org/wiki/Syntactic_sugar) for `str.join('', things)`. |
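A quick check (my own demonstration, not from the book) that the two call forms really do the same thing:

```python
things = ["join", "these", "with", "spaces"]

# Calling the method on an instance vs. calling it on the class explicitly:
assert " ".join(things) == str.join(" ", things)
assert "#".join(things[1:3]) == str.join("#", things[1:3])
```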
Fast alternative to run a numpy based function over all the rows in Pandas DataFrame | 33,931,933 | 10 | 2015-11-26T06:30:34Z | 33,932,329 | 12 | 2015-11-26T06:57:40Z | [
"python",
"numpy",
"pandas",
"cython"
] | I have a Pandas data frame created the following way:
```
import pandas as pd
def create(n):
    df = pd.DataFrame({'gene':["foo",
                               "bar",
                               "qux",
                               "woz"],
                       'cell1':[433.96,735.62,483.42,10.33],
                       'cell2':[94.93,2214.38,97.93,1205.30],
                       'cell3':[1500,90,100,80]})
    df = df[["gene","cell1","cell2","cell3"]]
    df = pd.concat([df]*n)
    df = df.reset_index(drop=True)
    return df
```
It looks like this:
```
In [108]: create(1)
Out[108]:
  gene   cell1    cell2  cell3
0  foo  433.96    94.93   1500
1  bar  735.62  2214.38     90
2  qux  483.42    97.93    100
3  woz   10.33  1205.30     80
```
Then I have a function that takes the values of each gene (row) to compute a certain score:
[](http://i.stack.imgur.com/n0NeU.jpg)
```
import numpy as np
def sparseness(xvec):
    n = len(xvec)
    xvec_sum = np.sum(np.abs(xvec))
    xvecsq_sum = np.sum(np.square(xvec))
    denom = np.sqrt(n) - (xvec_sum / np.sqrt(xvecsq_sum))
    enum = np.sqrt(n) - 1
    sparseness_x = denom/enum
    return sparseness_x
```
In reality I need to apply this function to over 40K rows, and currently it runs very slowly using Pandas `apply`:
```
In [109]: df = create(10000)
In [110]: express_df = df.ix[:,1:]
In [111]: %timeit express_df.apply(sparseness, axis=1)
1 loops, best of 3: 8.32 s per loop
```
What's a faster way to implement this? | A faster way is to implement a vectorized version of the function, which operates on a two-dimensional ndarray directly. This is very doable since many functions in numpy can operate on two-dimensional ndarrays, controlled using the `axis` parameter. A possible implementation:
```
def sparseness2(xs):
    nr = np.sqrt(xs.shape[1])
    a = np.sum(np.abs(xs), axis=1)
    b = np.sqrt(np.sum(np.square(xs), axis=1))
    sparseness = (nr - a/b) / (nr - 1)
    return sparseness
res_arr = sparseness2(express_df.values)
res2 = pd.Series(res_arr, index=express_df.index)
```
Some testing:
```
from pandas.util.testing import assert_series_equal
res1 = express_df.apply(sparseness, axis=1)
assert_series_equal(res1, res2) #OK
%timeit sparseness2(express_df.values)
# 1000 loops, best of 3: 655 µs per loop
``` |
Fast alternative to run a numpy based function over all the rows in Pandas DataFrame | 33,931,933 | 10 | 2015-11-26T06:30:34Z | 33,932,750 | 8 | 2015-11-26T07:24:53Z | [
"python",
"numpy",
"pandas",
"cython"
] | I have a Pandas data frame created the following way:
```
import pandas as pd
def create(n):
    df = pd.DataFrame({'gene':["foo",
                               "bar",
                               "qux",
                               "woz"],
                       'cell1':[433.96,735.62,483.42,10.33],
                       'cell2':[94.93,2214.38,97.93,1205.30],
                       'cell3':[1500,90,100,80]})
    df = df[["gene","cell1","cell2","cell3"]]
    df = pd.concat([df]*n)
    df = df.reset_index(drop=True)
    return df
```
It looks like this:
```
In [108]: create(1)
Out[108]:
  gene   cell1    cell2  cell3
0  foo  433.96    94.93   1500
1  bar  735.62  2214.38     90
2  qux  483.42    97.93    100
3  woz   10.33  1205.30     80
```
Then I have a function that takes the values of each gene (row) to compute a certain score:
[](http://i.stack.imgur.com/n0NeU.jpg)
```
import numpy as np
def sparseness(xvec):
    n = len(xvec)
    xvec_sum = np.sum(np.abs(xvec))
    xvecsq_sum = np.sum(np.square(xvec))
    denom = np.sqrt(n) - (xvec_sum / np.sqrt(xvecsq_sum))
    enum = np.sqrt(n) - 1
    sparseness_x = denom/enum
    return sparseness_x
```
In reality I need to apply this function to over 40K rows, and currently it runs very slowly using Pandas `apply`:
```
In [109]: df = create(10000)
In [110]: express_df = df.ix[:,1:]
In [111]: %timeit express_df.apply(sparseness, axis=1)
1 loops, best of 3: 8.32 s per loop
```
What's a faster way to implement this? | Here's one vectorized approach using [`np.einsum`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html) to perform all those operations in one go across the entire dataframe. Now, this `np.einsum` is supposedly pretty efficient for such multiplication and summing purposes. In our case, we can use it to perform summation along one dimension for the `xvec_sum` case and squaring and summation for the `xvecsq_sum` case. The implementation would look like this -
```
def sparseness_vectorized(A):
    nsqrt = np.sqrt(A.shape[1])
    B = np.einsum('ij->i',np.abs(A))/np.sqrt(np.einsum('ij,ij->i',A,A))
    denom = nsqrt - B
    enum = nsqrt - 1
    return denom/enum
```
---
Runtime tests -
This section compares all the approaches listed thus far to solve the problem including the one in the question.
```
In [235]: df = create(1000)
...: express_df = df.ix[:,1:]
...:
In [236]: %timeit express_df.apply(sparseness, axis=1)
1 loops, best of 3: 1.36 s per loop
In [237]: %timeit sparseness2(express_df.values)
1000 loops, best of 3: 247 µs per loop
In [238]: %timeit sparseness_vectorized(express_df.values)
1000 loops, best of 3: 231 µs per loop
In [239]: df = create(5000)
...: express_df = df.ix[:,1:]
...:
In [240]: %timeit express_df.apply(sparseness, axis=1)
1 loops, best of 3: 6.66 s per loop
In [241]: %timeit sparseness2(express_df.values)
1000 loops, best of 3: 1.14 ms per loop
In [242]: %timeit sparseness_vectorized(express_df.values)
1000 loops, best of 3: 1.06 ms per loop
``` |
String indexing - Why S[0][0] works and S[1][1] fails? | 33,932,508 | 5 | 2015-11-26T07:09:18Z | 33,932,551 | 10 | 2015-11-26T07:11:55Z | [
"python",
"python-2.7",
"python-3.x"
] | Suppose I create a string:
```
>>> S = "spam"
```
Now I index it as follows:
```
>>> S[0][0][0][0][0]
```
I get output as:
```
>>> 's'
```
But when I index it as:
```
>>> S[1][1][1][1][1]
```
I get output as:
```
Traceback (most recent call last):
  File "<pyshell#125>", line 1, in <module>
    L[1][1][1][1][1]
IndexError: string index out of range
```
Why is the output not **'p'**?
Also, why does it work for **S[0][0]**, **S[0][0][0]**, or **S[0][0][0][0]**, but not for **S[1][1]**, **S[1][1][1]**, or **S[1][1][1][1]**? | The answer is that `S[0]` gives you a string of length 1, which thus necessarily has a character at index 0. `S[1]` also gives you a string of length 1, but it necessarily does not have a character at index 1. See below:
```
>>> S = "spam"
>>> S[0]
's'
>>> S[0][0]
's'
>>> S[1]
'p'
>>> S[1][0]
'p'
>>> S[1][1]
Traceback (most recent call last):
  File "<pyshell#20>", line 1, in <module>
    S[1][1]
IndexError: string index out of range
``` |
String indexing - Why S[0][0] works and S[1][1] fails? | 33,932,508 | 5 | 2015-11-26T07:09:18Z | 33,932,561 | 8 | 2015-11-26T07:12:19Z | [
"python",
"python-2.7",
"python-3.x"
] | Suppose I create a string:
```
>>> S = "spam"
```
Now I index it as follows:
```
>>> S[0][0][0][0][0]
```
I get output as:
```
>>> 's'
```
But when I index it as:
```
>>> S[1][1][1][1][1]
```
I get output as:
```
Traceback (most recent call last):
File "<pyshell#125>", line 1, in <module>
L[1][1][1][1][1]
IndexError: string index out of range
```
Why is the output not **'p'**?
Also, why does it work for **S[0][0]**, **S[0][0][0]**, or **S[0][0][0][0]**, but not for **S[1][1]**, **S[1][1][1]**, or **S[1][1][1][1]**? | The first index (`[0]`) of any string is its first character. Since this results in a one-character string, the first index of that string is the first character, which is itself. You can do `[0]` as much as you want and stay with the same character.
The second index (`[1]`), however, only exists for a string with at least two characters. If you've already indexed a string to produce a single-character string, `[1]` will not work.
```
>>> a = 'abcd'
>>> a[0]
'a'
>>> a[0][0]
'a'
>>> a[1]
'b'
>>> a[1][0][0]
'b'
>>> a[1][1]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IndexError: string index out of range
``` |
What's the purpose of tf.app.flags in TensorFlow? | 33,932,901 | 16 | 2015-11-26T07:34:17Z | 33,938,519 | 22 | 2015-11-26T12:17:25Z | [
"python",
"tensorflow"
] | I am reading some example code in TensorFlow, and I found the following code:
```
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_float('learning_rate', 0.01, 'Initial learning rate.')
flags.DEFINE_integer('max_steps', 2000, 'Number of steps to run trainer.')
flags.DEFINE_integer('hidden1', 128, 'Number of units in hidden layer 1.')
flags.DEFINE_integer('hidden2', 32, 'Number of units in hidden layer 2.')
flags.DEFINE_integer('batch_size', 100, 'Batch size. '
                     'Must divide evenly into the dataset sizes.')
flags.DEFINE_string('train_dir', 'data', 'Directory to put the training data.')
flags.DEFINE_boolean('fake_data', False, 'If true, uses fake data '
                     'for unit testing.')
```
in `tensorflow/tensorflow/g3doc/tutorials/mnist/fully_connected_feed.py`
But I can't find any docs about this usage of `tf.app.flags`.
And I found that the implementation of these flags is in the
[`tensorflow/tensorflow/python/platform/default/_flags.py`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/platform/default/_flags.py)
Obviously, this `tf.app.flags` is somehow used to configure a network, so why is it not in the API docs? Can anyone explain what is going on here? | The `tf.app.flags` module is presently a thin wrapper around python-gflags, so the [documentation for that project](https://github.com/gflags/python-gflags) is the best resource for how to use it.
Note that this module is currently packaged as a convenience for writing demo apps, and is not technically part of the public API, so it may change in future.
We recommend that you implement your own flag parsing using `argparse` or whatever library you prefer.
**EDIT:** The `tf.app.flags` module is not in fact implemented using `python-gflags`; it is implemented using [`argparse`](https://docs.python.org/2.7/library/argparse.html), which implements a subset of the functionality in [`python-gflags`](https://github.com/gflags/python-gflags), but it exposes a similar API.
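For example, the flag definitions from the question might be translated to `argparse` roughly like this (my own sketch showing only a few of the flags, not TensorFlow code):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--learning_rate', type=float, default=0.01,
                    help='Initial learning rate.')
parser.add_argument('--max_steps', type=int, default=2000,
                    help='Number of steps to run trainer.')
parser.add_argument('--train_dir', type=str, default='data',
                    help='Directory to put the training data.')
parser.add_argument('--fake_data', action='store_true',
                    help='If true, uses fake data for unit testing.')

# Parse an explicit argv list here so the example is self-contained:
FLAGS = parser.parse_args(['--max_steps', '500', '--fake_data'])
assert FLAGS.learning_rate == 0.01
assert FLAGS.max_steps == 500
assert FLAGS.fake_data is True
```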
Are there really only 4 Matplotlib Line Styles? | 33,936,134 | 4 | 2015-11-26T10:25:43Z | 33,936,680 | 7 | 2015-11-26T10:49:51Z | [
"python",
"matplotlib",
"plot"
] | I've been looking for new line styles in matplotlib, and the only line styles available are ["-", "--", "-.", ":",]. (The style options ['', ' ', 'None',] don't count because they just hide the lines.)
Are there really only 4 line styles in Matplotlib pyplot? Are there any extensions that add further line styles? Is there a way to customise line styles? How about some three character line styles like:
* '--.': dash dash dot
* '-..': dash dot dot
* '...': dot dot dot (space)
* 'xxx': x's in a line
* '\/': zig-zags, i.e. '\/\/\/\/'
* '::': parallel dots, i.e. ':::::'
These are just some ideas to expand the range of line styles. | You can use the `dashes` kwarg to set custom dash styles.
From the [docs](http://matplotlib.org/api/lines_api.html#matplotlib.lines.Line2D.set_dashes):
> Set the dash sequence, sequence of dashes with on off ink in points. If seq is empty or if seq = (None, None), the linestyle will be set to solid.
Here's some examples based on a few of your suggestions. Obviously there are many more ways you could customise this.
```
import matplotlib.pyplot as plt
fig,ax = plt.subplots(1)
# 3 dots then space
ax.plot(range(10), range(10), dashes=[3,6,3,6,3,18], lw=3,c='b')
# dash dash dot
ax.plot(range(10), range(0,20,2), dashes=[12,6,12,6,3,6], lw=3,c='r')
# dash dot dot
ax.plot(range(10), range(0,30,3), dashes=[12,6,3,6,3,6], lw=3,c='g')
```
[](http://i.stack.imgur.com/6X1aP.png) |
python bokeh offset with rect plotting | 33,936,852 | 3 | 2015-11-26T10:57:33Z | 34,213,135 | 7 | 2015-12-10T22:46:34Z | [
"python",
"matrix",
"bokeh",
"glyph"
] | I have a problem while plotting a matrix with python bokeh and glyphs.
I'm a newbie in Bokeh and just adapted some code I found on the web.
Everything seems to be ok but there is an offset when I launch the function.
[offset](http://i.stack.imgur.com/obfeE.png)
And the thing I'd like to have is :
[no offset](http://i.stack.imgur.com/yDnYB.png)
the code is the following, please tell me if you see something wrong
```
def disp(dom,matrixs) :
    cols = []    # ROME codes, columns
    rows = []    # ROME codes, rows
    libcol = []  # occupation labels
    librow = []
    color = []   # colors
    rate = []    # % of skills already validated
    mank = []    # list of missing skills
    nbmank = []  # number of missing skills
    nbtot = []
    for i in matrixs[dom].columns:
        for j in matrixs[dom].columns:
            # ROME code, column
            rows.append(i)
            # ROME code, row
            cols.append(j)
            # labels
            libcol.append(compbyrome[j]['label'])
            librow.append(compbyrome[i]['label'])
            # percentage value
            rateval = matrixs[dom][i][j]
            # number of missing skills
            nbmank.append(len(compbyrome[j]['competences'])-(rateval*len(compbyrome[j]['competences'])/100))
            nbtot.append(len(compbyrome[j]['competences']))
            rate.append(rateval)
            if rateval < 20:
                col = 0
            elif rateval >= 20 and rateval < 40:
                col = 1
            elif rateval >= 40 and rateval < 60:
                col = 2
            elif rateval >= 60 and rateval < 80:
                col = 3
            else :
                col = 4
            color.append(colors[col])
    TOOLS = "hover,save,pan"
    source = ColumnDataSource(
        data = dict(
            rows=rows,
            cols=cols,
            librow=librow,
            libcol=libcol,
            color=color,
            rate=rate,
            nbmank=nbmank,
            nbtot=nbtot)
    )
    if (len(matrixs[dom].columns)) <= 8 :
        taille = 800
    elif (len(matrixs[dom].columns)) >= 15:
        taille = 1000
    else:
        taille = len(matrixs[dom].columns)*110
    p = figure(
        title=str(dom),
        x_range=list(reversed(librow)),
        y_range=librow,
        x_axis_location="above",
        plot_width=taille,
        plot_height=taille,
        toolbar_location="left",
        tools=TOOLS
    )
    p.rect("librow", "libcol", len(matrixs[dom].columns)-1, len(matrixs[dom].columns)-1, source=source,
           color="color", line_color=None)
    p.grid.grid_line_color = None
    p.axis.axis_line_color = None
    p.axis.major_tick_line_color = None
    p.axis.major_label_text_font_size = "10pt"
    p.axis.major_label_standoff = 0
    p.xaxis.major_label_orientation = np.pi/3
    hover = p.select(dict(type=HoverTool))
    hover.tooltips = """
    <div>
    origin = @rows (@librow)
    </div>
    <div>
    transition = @cols (@libcol)
    </div>
    <div>
    skills already acquired = @rate %
    </div>
    <div>
    missing skills = @nbmank
    </div>
    <div>
    total skills = @nbtot
    </div>
    """
    show(p)
```
I'm getting data from a dict of matrices as you can see, but I think the problem has nothing to do with the data.
thank you | I had the same question to. The issue is probably duplicates in your `x_range` and `y_range` - I got help via the mailing list: cf. <https://groups.google.com/a/continuum.io/forum/#!msg/bokeh/rvFcJV5_WQ8/jlm13N5qCAAJ> and [Issues with Correlation graphs in Bokeh](http://stackoverflow.com/questions/24179776/issues-with-correlation-graphs-in-bokeh/24208660#24208660)
In your code do:
```
correct_y_range = sorted(list(set(librow)), reverse=True)
correct_x_range = sorted(list(set(librow)))
```
Here is a complete example:
```
from collections import OrderedDict
import numpy as np
import bokeh.plotting as bk
from bokeh.plotting import figure, show, output_file
from bokeh.models import HoverTool, ColumnDataSource
bk.output_notebook() #for viz within ipython notebook
N = 20
arr2d = np.random.randint(0,10,size=(N,N))
predicted = []
actual = []
count = []
color = []
alpha = []
the_color = '#cc0000'
for col in range(N):
    for rowi in range(N):
        predicted.append(str(rowi))
        actual.append(str(col))
        count.append(arr2d[rowi,col])
        a = arr2d[rowi,col]/10.0
        alpha.append(a)
        color.append(the_color)
source = ColumnDataSource(
    data=dict(predicted=predicted, actual=actual, count=count, alphas=alpha, colors=color)
)
#THE FIX IS HERE! use `set` to dedup
correct_y_range = sorted(list(set(actual)), reverse=True)
correct_x_range = sorted(list(set(predicted)))
p = figure(title='Confusion Matrix',
           x_axis_location="above", tools="resize,hover,save",
           y_range=correct_y_range, x_range=correct_x_range)
p.plot_width = 600
p.plot_height = p.plot_width
rectwidth = 0.9
p.rect('predicted', 'actual', rectwidth, rectwidth, source=source,
       color='colors', alpha='alphas', line_width=1, line_color='k')
p.axis.major_label_text_font_size = "12pt"
p.axis.major_label_standoff = 1
p.xaxis.major_label_orientation = np.pi/3
hover = p.select(dict(type=HoverTool))
hover.tooltips = OrderedDict([
    ('predicted', '@predicted'),
    ('actual', '@actual'),
    ('count', '@count'),
])
show(p)
```
[](http://i.stack.imgur.com/GXQf6.png) |
How to specify multiple return types using type-hints | 33,945,261 | 11 | 2015-11-26T18:45:00Z | 33,945,518 | 18 | 2015-11-26T19:05:37Z | [
"python",
"python-3.x",
"return-type",
"type-hinting",
"python-3.5"
] | I have a function in Python that can either return a `bool` or a `list`. Is there a way to specify the return types using type hints?
For example, is this the correct way to do it?
```
def foo(id) -> list or bool:
...
``` | From the [documentation](https://docs.python.org/3/library/typing.html#typing.Union)
> class `typing.Union`
>
> Union type; ***Union[X, Y] means either X or Y.***
Hence the proper way to represent more than one return data type is
```
from typing import Union

def foo(client_id: str) -> Union[list, bool]:
    ...
```
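A complete, runnable sketch of the above (the `client_id` values and return data here are made up for illustration):

```python
from typing import Union

def foo(client_id: str) -> Union[list, bool]:
    # hypothetical lookup: a list of items on success, False otherwise
    if client_id == "known":
        return ["a", "b"]
    return False

print(foo("known"))                   # ['a', 'b']
print(foo("unknown"))                 # False
print(foo.__annotations__["return"])
```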
---
But do note that type hints are not enforced in Python 3.5
```
>>> def foo(a:str) -> list:
... return("Works")
...
>>> foo(1)
'Works'
```
As you can see, I am passing an int value and returning a str. However, the `__annotations__` will be set to the respective values.
```
>>> foo.__annotations__
{'return': <class 'list'>, 'a': <class 'str'>}
```
---
Please go through [PEP 483](https://www.python.org/dev/peps/pep-0483/) for more about type hints. Also see [What are Type hints in Python 3.5](http://stackoverflow.com/questions/32557920/what-are-type-hints-in-python-3-5)?
Kindly note that this is available only for ***Python 3.5*** and upwards. This is mentioned clearly in [PEP 484](https://www.python.org/dev/peps/pep-0484/). |
Reduce function doesn't handle an empty list | 33,945,882 | 3 | 2015-11-26T19:34:13Z | 33,945,933 | 8 | 2015-11-26T19:39:00Z | [
"python",
"list",
"lambda",
"reduce"
] | I previously created a recursive function to find the product of a list.
Now I've created the same function, but using the `reduce` function and `lambda`.
When I run this code, I get the correct answer.
```
items = [1, 2, 3, 4, 10]
print(reduce(lambda x, y: x*y, items))
```
However, when I give an empty list, an error occurs - `reduce() of empty sequence with no initial value`. Why is this?
When I created my recursive function, I wrote code to handle an empty list. Is the issue with the `reduce` function that it just isn't designed to handle an empty list, or is there another reason?
I cannot seem to find a question or anything online explaining why; I can only find questions with solutions to that particular person's issue, with no explanation. | As it is written in the [documentation](https://docs.python.org/2/library/functions.html#reduce):
> If the optional initializer is present, it is placed before the items of the iterable in the calculation, and serves as a default when the iterable is empty. If initializer is not given and iterable contains only one item, the first item is returned.
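
For a product specifically, that documented behavior can be sketched like this (using `operator.mul` in place of the lambda):

```python
from functools import reduce  # reduce is a builtin in Python 2
import operator

def product(items):
    # the initializer 1 makes the empty-sequence case well-defined
    return reduce(operator.mul, items, 1)

print(product([1, 2, 3, 4, 10]))  # 240
print(product([]))                # 1
```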
So if you want your code to work with an empty list, you should use an initializer:
```
>>> reduce(lambda x, y: x*y, [], 1)
1
``` |
Get most significant digit in python | 33,947,632 | 3 | 2015-11-26T22:06:18Z | 33,947,673 | 12 | 2015-11-26T22:10:20Z | [
"python",
"list",
"integer",
"significant-digits"
] | Say I have list `[34523, 55, 65, 2]`
What is the most efficient way to get `[3,5,6,2]`, which are the most significant digits? If possible, without changing each to `str()`? | Assuming you're only dealing with positive numbers, you can divide each number by the largest power of 10 smaller than the number, and then take the floor of the result.
```
>>> from math import log10, floor
>>> lst = [34523, 55, 65, 2]
>>> [floor(x / (10**floor(log10(x)))) for x in lst]
[3, 5, 6, 2]
```
If you're using Python 3, instead of flooring the result, you can use the integer division operator `//`:
```
>>> [x // (10**floor(log10(x))) for x in lst]
[3, 5, 6, 2]
```
However, I have no idea whether this is more efficient than just converting to a string and slicing the first character. (Note that you'll need to be a bit more sophisticated if you have to deal with numbers between 0 and 1.)
```
>>> [int(str(x)[0]) for x in lst]
[3, 5, 6, 2]
```
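A quick way to compare the two, sketched with `timeit` (the exact timings will vary by machine):

```python
import timeit
from math import log10, floor

lst = [34523, 55, 65, 2] * 1000

def math_version():
    return [x // 10 ** floor(log10(x)) for x in lst]

def str_version():
    return [int(str(x)[0]) for x in lst]

assert math_version() == str_version()  # both agree on the digits
print(timeit.timeit(math_version, number=100))
print(timeit.timeit(str_version, number=100))
```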
If this is in a performance-critical piece of code, you should measure the two options and see which is faster. If it's not in a performance-critical piece of code, use whichever one is most readable to you. |
Random Forest is overfitting | 33,948,946 | 4 | 2015-11-27T00:55:00Z | 33,949,738 | 7 | 2015-11-27T03:10:33Z | [
"python",
"machine-learning",
"scikit-learn",
"random-forest"
] | I'm using scikit-learn with a stratified CV to compare some classifiers.
I'm computing: accuracy, recall, auc.
I used for the parameter optimization GridSearchCV with a 5 CV.
```
RandomForestClassifier(warm_start= True, min_samples_leaf= 1, n_estimators= 800, min_samples_split= 5,max_features= 'log2', max_depth= 400, class_weight=None)
```
are the best\_params from the GridSearchCV.
My problem is that I think I'm really overfitting. For example:
> Random Forest with standard deviation (+/-)
>
> * precision: 0.99 (+/- 0.06)
> * sensitivity: 0.94 (+/- 0.06)
> * specificity: 0.94 (+/- 0.06)
> * B\_accuracy: 0.94 (+/- 0.06)
> * AUC: 0.94 (+/- 0.11)
>
> Logistic Regression with standard deviation (+/-)
>
> * precision: 0.88(+/- 0.06)
> * sensitivity: 0.79 (+/- 0.06)
> * specificity: 0.68 (+/- 0.06)
> * B\_accuracy: 0.73 (+/- 0.06)
> * AUC: 0.73 (+/- 0.041)
And the others also look like logistic regression (so they are not looking overfitted).
My code for CV is:
```
for i,j in enumerate(data):
X.append(data[i][0])
y.append(float(data[i][1]))
x=np.array(X)
y=np.array(y)
def SD(values):
mean=sum(values)/len(values)
a=[]
for i in range(len(values)):
a.append((values[i]-mean)**2)
erg=sum(a)/len(values)
SD=math.sqrt(erg)
return SD,mean
for name, clf in zip(titles,classifiers):
# go through all classifiers, compute 10 folds
# the next for loop should be indented one more tab; couldn't really format it here, sorry
pre,sen,spe,ba,area=[],[],[],[],[]
for train_index, test_index in skf:
#print train_index, test_index
#get the index from all train_index and test_index
#change them to list due to some errors
train=train_index.tolist()
test=test_index.tolist()
X_train=[]
X_test=[]
y_train=[]
y_test=[]
for i in train:
X_train.append(x[i])
for i in test:
X_test.append(x[i])
for i in train:
y_train.append(y[i])
for i in test:
y_test.append(y[i])
#clf=clf.fit(X_train,y_train)
#predicted=clf.predict_proba(X_test)
#... other code, calculating metrics and so on...
print name
print("precision: %0.2f \t(+/- %0.2f)" % (SD(pre)[1], SD(pre)[0]))
print("sensitivity: %0.2f \t(+/- %0.2f)" % (SD(sen)[1], SD(pre)[0]))
print("specificity: %0.2f \t(+/- %0.2f)" % (SD(spe)[1], SD(pre)[0]))
print("B_accuracy: %0.2f \t(+/- %0.2f)" % (SD(ba)[1], SD(pre)[0]))
print("AUC: %0.2f \t(+/- %0.2f)" % (SD(area)[1], SD(area)[0]))
print "\n"
```
If I'm using the `scores = cross_validation.cross_val_score(clf, X, y, cv=10, scoring='accuracy')` method, I don't get this "overfitting" values. So maybe there is something wrong in the CV method I'm using? But it is only for RF...
I wrote my own due to the lack of a specificity score function in `cross_val_score`. | Herbert,
if your aim is to compare different learning algorithms, I recommend you use nested cross-validation. (By "learning algorithm" I mean different algorithms such as logistic regression, decision trees, and other discriminative models that learn the hypothesis or model -- the final classifier -- from your training data.)
"Regular" cross-validation is fine if you want to tune the hyperparameters of a single algorithm. However, as soon as you start to run the hyperparameter optimization with the same cross-validation parameters/folds, your performance estimate will likely be over-optimistic. The reason is that if you are running cross-validation over and over again, your test data will become "training data" to some extent.
People asked me this question quite frequently, actually, and I will take some excerpts from a FAQ section I posted here: <http://sebastianraschka.com/faq/docs/evaluate-a-model.html>
> In nested cross-validation, we have an outer k-fold cross-validation loop to split the data into training and test folds, and an inner loop is used to select the model via k-fold cross-validation on the training fold. After model selection, the test fold is then used to evaluate the model performance. After we have identified our "favorite" algorithm, we can follow up with a "regular" k-fold cross-validation approach (on the complete training set) to find its "optimal" hyperparameters and evaluate it on the independent test set. Let's consider a logistic regression model to make this clearer: Using nested cross-validation you will train m different logistic regression models, 1 for each of the m outer folds, and the inner folds are used to optimize the hyperparameters of each model (e.g., using grid search in combination with k-fold cross-validation). If your model is stable, these m models should all have the same hyperparameter values, and you report the average performance of this model based on the outer test folds. Then, you proceed with the next algorithm, e.g., an SVM etc.
[](http://i.stack.imgur.com/h1U2i.png)
I can only highly recommend this excellent paper that discusses this issue in more detail:
* S. Varma and R. Simon. Bias in error estimation when using cross-validation for model selection. BMC bioinformatics, 7(1):91, 2006. (<http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1397873/>)
PS: Typically, you don't need/want to tune the hyperparameters of a Random Forest (so extensively). The idea behind Random Forests (a form of bagging) is actually to not prune the decision trees -- actually, one reason why Breiman came up with the Random Forest algorithm was to deal with the pruning issue/overfitting of individual decision trees. So, the only parameter you really have to "worry" about is the number of trees (and maybe the number of random features per tree). However, typically, you are best off taking training bootstrap samples of size n (where n is the number of samples in the original training set) and sqrt(m) features (where m is the dimensionality of your training set).
Hope that this was helpful!
**Edit:**
Some example code for doing nested CV via scikit-learn:
```
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.grid_search import GridSearchCV          # sklearn.model_selection in newer versions
from sklearn.cross_validation import cross_val_score  # sklearn.model_selection in newer versions
pipe_svc = Pipeline([('scl', StandardScaler()),
('clf', SVC(random_state=1))])
param_range = [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]
param_grid = [{'clf__C': param_range,
'clf__kernel': ['linear']},
{'clf__C': param_range,
'clf__gamma': param_range,
'clf__kernel': ['rbf']}]
# Nested Cross-validation (here: 5 x 2 cross validation)
# =====================================
gs = GridSearchCV(estimator=pipe_svc,
param_grid=param_grid,
scoring='accuracy',
cv=5)
scores = cross_val_score(gs, X_train, y_train, scoring='accuracy', cv=2)
print('CV accuracy: %.3f +/- %.3f' % (np.mean(scores), np.std(scores)))
``` |
How could I use Batch Normalization in TensorFlow? | 33,949,786 | 40 | 2015-11-27T03:17:52Z | 33,950,177 | 35 | 2015-11-27T04:16:11Z | [
"python",
"tensorflow"
] | I would like to use Batch Normalization in TensorFlow, since I found it in the source code [`core/ops/nn_ops.cc`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/ops/nn_ops.cc). However, I did not find it documented on tensorflow.org.
BN has different semantics in MLP and CNN, so I am not sure what exactly this BN does.
I **did not** find a method called `MovingMoments` either.
The C++ code is copied here for reference:
```
REGISTER_OP("BatchNormWithGlobalNormalization")
.Input("t: T")
.Input("m: T")
.Input("v: T")
.Input("beta: T")
.Input("gamma: T")
.Output("result: T")
.Attr("T: numbertype")
.Attr("variance_epsilon: float")
.Attr("scale_after_normalization: bool")
.Doc(R"doc(
Batch normalization.
t: A 4D input Tensor.
m: A 1D mean Tensor with size matching the last dimension of t.
This is the first output from MovingMoments.
v: A 1D variance Tensor with size matching the last dimension of t.
This is the second output from MovingMoments.
beta: A 1D beta Tensor with size matching the last dimension of t.
An offset to be added to the normalized tensor.
gamma: A 1D gamma Tensor with size matching the last dimension of t.
If "scale_after_normalization" is true, this tensor will be multiplied
with the normalized tensor.
variance_epsilon: A small float number to avoid dividing by 0.
scale_after_normalization: A bool indicating whether the resulted tensor
needs to be multiplied with gamma.
)doc");
``` | **Update July 2016** The easiest way to use batch normalization in TensorFlow is through the higher-level interfaces provided in either [contrib/layers](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/layers.py), [tflearn](http://tflearn.org/layers/normalization/), or [slim](https://github.com/tensorflow/models/blob/master/inception/inception/slim/ops.py).
**Previous answer if you want to DIY**:
The documentation string for this has improved since the release - see the [docs comment in the master branch](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/ops/nn_ops.cc#L65) instead of the one you found. It clarifies, in particular, that it's the output from `tf.nn.moments`.
You can see a very simple example of its use in the [batch\_norm test code](https://github.com/tensorflow/tensorflow/blob/3972c791b9f4d9a61b9ad6399b481df396f359ff/tensorflow/python/ops/nn_test.py#L518). For a more real-world use example, I've included below the helper class and use notes that I scribbled up for my own use (no warranty provided!):
```
"""A helper class for managing batch normalization state.
This class is designed to simplify adding batch normalization
(http://arxiv.org/pdf/1502.03167v3.pdf) to your model by
managing the state variables associated with it.
Important use note: The function get_assigner() returns
an op that must be executed to save the updated state.
A suggested way to do this is to make execution of the
model optimizer force it, e.g., by:
update_assignments = tf.group(bn1.get_assigner(),
bn2.get_assigner())
with tf.control_dependencies([optimizer]):
optimizer = tf.group(update_assignments)
"""
import tensorflow as tf
class ConvolutionalBatchNormalizer(object):
"""Helper class that groups the normalization logic and variables.
Use:
ewma = tf.train.ExponentialMovingAverage(decay=0.99)
bn = ConvolutionalBatchNormalizer(depth, 0.001, ewma, True)
update_assignments = bn.get_assigner()
x = bn.normalize(y, train=training?)
(the output x will be batch-normalized).
"""
def __init__(self, depth, epsilon, ewma_trainer, scale_after_norm):
self.mean = tf.Variable(tf.constant(0.0, shape=[depth]),
trainable=False)
self.variance = tf.Variable(tf.constant(1.0, shape=[depth]),
trainable=False)
self.beta = tf.Variable(tf.constant(0.0, shape=[depth]))
self.gamma = tf.Variable(tf.constant(1.0, shape=[depth]))
self.ewma_trainer = ewma_trainer
self.epsilon = epsilon
self.scale_after_norm = scale_after_norm
def get_assigner(self):
"""Returns an EWMA apply op that must be invoked after optimization."""
return self.ewma_trainer.apply([self.mean, self.variance])
def normalize(self, x, train=True):
"""Returns a batch-normalized version of x."""
if train:
mean, variance = tf.nn.moments(x, [0, 1, 2])
assign_mean = self.mean.assign(mean)
assign_variance = self.variance.assign(variance)
with tf.control_dependencies([assign_mean, assign_variance]):
return tf.nn.batch_norm_with_global_normalization(
x, mean, variance, self.beta, self.gamma,
self.epsilon, self.scale_after_norm)
else:
mean = self.ewma_trainer.average(self.mean)
variance = self.ewma_trainer.average(self.variance)
local_beta = tf.identity(self.beta)
local_gamma = tf.identity(self.gamma)
return tf.nn.batch_norm_with_global_normalization(
x, mean, variance, local_beta, local_gamma,
self.epsilon, self.scale_after_norm)
```
Note that I called it a `ConvolutionalBatchNormalizer` because it pins the use of `tf.nn.moments` to sum across axes 0, 1, and 2, whereas for non-convolutional use you might only want axis 0.
Feedback appreciated if you use it. |
How could I use Batch Normalization in TensorFlow? | 33,949,786 | 40 | 2015-11-27T03:17:52Z | 34,634,291 | 23 | 2016-01-06T13:26:41Z | [
"python",
"tensorflow"
] | I would like to use Batch Normalization in TensorFlow, since I found it in the source code [`core/ops/nn_ops.cc`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/ops/nn_ops.cc). However, I did not find it documented on tensorflow.org.
BN has different semantics in MLP and CNN, so I am not sure what exactly this BN does.
I **did not** find a method called `MovingMoments` either.
The C++ code is copied here for reference:
```
REGISTER_OP("BatchNormWithGlobalNormalization")
.Input("t: T")
.Input("m: T")
.Input("v: T")
.Input("beta: T")
.Input("gamma: T")
.Output("result: T")
.Attr("T: numbertype")
.Attr("variance_epsilon: float")
.Attr("scale_after_normalization: bool")
.Doc(R"doc(
Batch normalization.
t: A 4D input Tensor.
m: A 1D mean Tensor with size matching the last dimension of t.
This is the first output from MovingMoments.
v: A 1D variance Tensor with size matching the last dimension of t.
This is the second output from MovingMoments.
beta: A 1D beta Tensor with size matching the last dimension of t.
An offset to be added to the normalized tensor.
gamma: A 1D gamma Tensor with size matching the last dimension of t.
If "scale_after_normalization" is true, this tensor will be multiplied
with the normalized tensor.
variance_epsilon: A small float number to avoid dividing by 0.
scale_after_normalization: A bool indicating whether the resulted tensor
needs to be multiplied with gamma.
)doc");
``` | The following works fine for me, it does not require invoking EMA-apply outside.
```
import numpy as np
import tensorflow as tf
from tensorflow.python import control_flow_ops
def batch_norm(x, n_out, phase_train, scope='bn'):
"""
Batch normalization on convolutional maps.
Args:
x: Tensor, 4D BHWD input maps
n_out: integer, depth of input maps
phase_train: boolean tf.Variable, true indicates training phase
scope: string, variable scope
Return:
normed: batch-normalized maps
"""
with tf.variable_scope(scope):
beta = tf.Variable(tf.constant(0.0, shape=[n_out]),
name='beta', trainable=True)
gamma = tf.Variable(tf.constant(1.0, shape=[n_out]),
name='gamma', trainable=True)
batch_mean, batch_var = tf.nn.moments(x, [0,1,2], name='moments')
ema = tf.train.ExponentialMovingAverage(decay=0.5)
def mean_var_with_update():
ema_apply_op = ema.apply([batch_mean, batch_var])
with tf.control_dependencies([ema_apply_op]):
return tf.identity(batch_mean), tf.identity(batch_var)
mean, var = tf.cond(phase_train,
mean_var_with_update,
lambda: (ema.average(batch_mean), ema.average(batch_var)))
normed = tf.nn.batch_normalization(x, mean, var, beta, gamma, 1e-3)
return normed
```
Example:
```
import math
n_in, n_out = 3, 16
ksize = 3
stride = 1
phase_train = tf.placeholder(tf.bool, name='phase_train')
input_image = tf.placeholder(tf.float32, name='input_image')
kernel = tf.Variable(tf.truncated_normal([ksize, ksize, n_in, n_out],
stddev=math.sqrt(2.0/(ksize*ksize*n_out))),
name='kernel')
conv = tf.nn.conv2d(input_image, kernel, [1,stride,stride,1], padding='SAME')
conv_bn = batch_norm(conv, n_out, phase_train)
relu = tf.nn.relu(conv_bn)
with tf.Session() as session:
session.run(tf.initialize_all_variables())
for i in range(20):
test_image = np.random.rand(4,32,32,3)
sess_outputs = session.run([relu],
{input_image.name: test_image, phase_train.name: True})
``` |
Comparison to `None` will result in an elementwise object | 33,954,216 | 5 | 2015-11-27T09:29:37Z | 33,954,311 | 7 | 2015-11-27T09:34:17Z | [
"python",
"numpy"
] | Apparantly it will (in the 'future') not be possible anymore to use the following:
```
import numpy as np
np.array([0,1,2]) == None
> False
> FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.
```
This also breaks the lazy loading pattern for numpy arrays:
```
import numpy as np
def f(a=None):
if a == None:
a = <some default value>
<function body>
```
What other possibilities allow you to still use lazy initialization? | You are looking for `is`:
```
if a is None:
a = something else
```
The problem is that, by using the `==` operator, if the input element `a` is a numpy array, numpy will try to perform an elementwise comparison and tell you that you cannot compare it.
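Here is how the lazy-loading pattern from the question looks with `is` (the default value here is an arbitrary stand-in):

```python
import numpy as np

def f(a=None):
    # `is` compares object identity, so numpy never attempts
    # an elementwise comparison here
    if a is None:
        a = np.zeros(3)  # some default value
    return a

print(f())                     # the default array
print(f(np.array([1, 2, 3])))  # the array passed in
```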
For `a` a numpy array, `a == None` gives an error, and `np.all(a == None)` doesn't (but does not do what you expect). Instead, `a is None` will work regardless of the data type of `a`. |
Progress bar while uploading a file to dropbox | 33,958,600 | 9 | 2015-11-27T13:36:11Z | 33,985,193 | 10 | 2015-11-29T16:40:15Z | [
"python",
"python-2.7",
"progress-bar",
"dropbox",
"dropbox-api"
] | ```
import dropbox
client = dropbox.client.DropboxClient('<token>')
f = open('/ssd-scratch/abhishekb/try/1.mat', 'rb')
response = client.put_file('/data/1.mat', f)
```
I want to upload a big file to dropbox. How can I check the progress? [[Docs]](https://www.dropbox.com/developers-v1/core/docs/python#ChunkedUploader)
EDIT:
The uploader offeset is same below somehow. What am I doing wrong
```
import os,pdb,dropbox
size=1194304
client = dropbox.client.DropboxClient(token)
path='D:/bci_code/datasets/1.mat'
tot_size = os.path.getsize(path)
bigFile = open(path, 'rb')
uploader = client.get_chunked_uploader(bigFile, size)
print "uploading: ", tot_size
while uploader.offset < tot_size:
try:
upload = uploader.upload_chunked()
print uploader.offset
except rest.ErrorResponse, e:
print("something went wrong")
```
EDIT 2:
```
size=1194304
tot_size = os.path.getsize(path)
bigFile = open(path, 'rb')
uploader = client.get_chunked_uploader(bigFile, tot_size)
print "uploading: ", tot_size
while uploader.offset < tot_size:
try:
upload = uploader.upload_chunked(chunk_size=size)
print uploader.offset
except rest.ErrorResponse, e:
print("something went wrong")
``` | `upload_chunked`, as [the documentation](https://www.dropbox.com/developers-v1/core/docs/python#ChunkedUploader.upload_chunked) notes:
> Uploads data from this `ChunkedUploader`'s `file_obj` in chunks, until an
> error occurs. Throws an exception when an error occurs, and can be
> called again to resume the upload.
So yes, it uploads the entire file (unless an error occurs) before returning.
If you want to upload a chunk at a time on your own, you should use [`upload_chunk`](https://www.dropbox.com/developers-v1/core/docs/python#DropboxClient.upload_chunk) and [`commit_chunked_upload`](https://www.dropbox.com/developers-v1/core/docs/python#DropboxClient.commit_chunked_upload).
Here's some working code that shows you how to upload a single chunk at a time and print progress in between chunks:
```
from io import BytesIO
import os
from dropbox.client import DropboxClient
client = DropboxClient(ACCESS_TOKEN)
path = 'test.data'
chunk_size = 1024*1024 # 1MB
total_size = os.path.getsize(path)
upload_id = None
offset = 0
with open(path, 'rb') as f:
while offset < total_size:
offset, upload_id = client.upload_chunk(
BytesIO(f.read(chunk_size)),
offset=offset, upload_id=upload_id)
print('Uploaded so far: {} bytes'.format(offset))
# Note the "auto/" on the next line, which is needed because
# this method doesn't attach the root by itself.
client.commit_chunked_upload('auto/test.data', upload_id)
print('Upload complete.')
``` |
Can Pickle handle files larger than the RAM installed on my machine? | 33,965,021 | 15 | 2015-11-27T21:31:26Z | 33,965,199 | 9 | 2015-11-27T21:50:55Z | [
"python",
"python-3.x",
"pickle",
"textblob"
] | I'm using pickle for saving on disk my NLP classifier built with the TextBlob library.
I'm using pickle after a lot of searches related to [this question](http://stackoverflow.com/questions/33883976/python-textblob-and-text-classification?). At the moment I'm working locally and I have no problem loading the pickle file (which is 1.5Gb) with my i7 and 16gb RAM machine. But the idea is that my program, in the future, has to run on my server which only has 512Mb RAM installed.
Can pickle handle such a large file or will I face memory issues?
On my server I've got Python 3.5 installed and it is a Linux server (not sure which distribution).
I'm asking because at the moment I can't access my server, so I can't just try and find out what happens, but at the same time I'm doubtful if I can keep this approach or I have to find other solutions. | Unfortunately this is difficult to accurately answer without testing it on your machine.
Here are some initial thoughts:
1. There is no inherent size limit that the Pickle module enforces, but you're pushing the boundaries of its intended use. It's not designed for individual large objects. However, since you're using Python 3.5, you will be able to take advantage of [PEP 3154](https://www.python.org/dev/peps/pep-3154/) which adds better support for large objects. You should specify [pickle.HIGHEST\_PROTOCOL](https://docs.python.org/3.5/library/pickle.html#pickle.HIGHEST_PROTOCOL) when you [dump](https://docs.python.org/3.5/library/pickle.html#pickle.dumps) your data.
2. You will likely have a large performance hit because you're trying to deal with an object that is 3x the size of your memory. Your system will probably start swapping, and possibly even thrashing. RAM is so cheap these days, bumping it up to at least 2GB should help significantly.
3. To handle the swapping, make sure you have enough swap space available (a large swap partition if you're on Linux, or enough space for the swap file on your primary partition on Windows).
4. As [pal sch's comment shows](http://stackoverflow.com/questions/33965021/can-pickle-handle-files-larger-than-the-ram-installed-on-my-machine/33965199#comment55687830_33965021), Pickle is not very friendly to RAM consumption during the pickling process, so you may have to deal with Python trying to get even more memory from the OS than the 1.5GB we may expect for your object.
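To illustrate point 1, a small sketch (the dictionary is a stand-in for your 1.5GB classifier):

```python
import os
import pickle
import tempfile

data = {"weights": list(range(10))}  # stand-in for the large classifier

path = os.path.join(tempfile.gettempdir(), "model.pkl")
with open(path, "wb") as f:
    # on Python 3.5, HIGHEST_PROTOCOL is 4 (PEP 3154),
    # which handles very large objects better
    pickle.dump(data, f, protocol=pickle.HIGHEST_PROTOCOL)

with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored == data)  # True
```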
Given these considerations, I don't expect it to work out very well for you. I'd strongly suggest upgrading the RAM on your target machine to make this work. |
Tensorflow error using my own data | 33,974,231 | 5 | 2015-11-28T17:27:16Z | 33,974,372 | 9 | 2015-11-28T17:40:05Z | [
"python",
"python-2.7",
"tensorflow"
] | I've been playing with the Tensorflow library doing the tutorials. Now I wanted to play with my own data, but I fail horribly. This is perhaps a noob question but I can't figure it out.
I'm using this example: <https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3%20-%20Neural%20Networks/convolutional_network.py>
I want to use my own images. For converting my images for use with TensorFlow, I'm using this: <https://github.com/HamedMP/ImageFlow/blob/master/ImageFlow.py>
Now I change the parameters in the example from this:
```
n_input = 784
n_classes = 10
```
to this:
```
n_input = 9216
n_classes = 2
```
I did that because my images are 96 \* 96 and there are only 2 classes of my images
I also change the weights and biases to the numbers I need.
I read the data like this:
```
batch_xs = imgReader.read_images(pathname);
```
imgReader being the ImageFlow file
but when I try to run it, it gives me an error:
```
ValueError: Cannot feed value of shape (104, 96, 96, 1) for Tensor
u'Placeholder:0', which has shape (Dimension(None), Dimension(9216))
```
I feel like I'm overlooking something small, but I don't see it. | This error arises because the shape of the data that you're trying to feed (104 x 96 x 96 x 1) does not match the shape of the input placeholder (`batch_size` x 9216, where `batch_size` may be variable).
To make it work, add the following before running a training step:
```
import numpy as np  # if not already imported

batch_xs = np.reshape(batch_xs, (-1, 9216))
```
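The shape handling can be sanity-checked with plain numpy (the zeros stand in for a real image batch):

```python
import numpy as np

batch_xs = np.zeros((104, 96, 96, 1))  # stand-in for the images read in
flat = np.reshape(batch_xs, (-1, 9216))
print(flat.shape)  # (104, 9216)
```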
This uses numpy to reshape the images read in, which are 4-D arrays of `batch_size` x h x w x channels, into a `batch_size` x 9216 element matrix as expected by the placeholder. |
Generate a random sample of points distributed on the surface of a unit sphere | 33,976,911 | 4 | 2015-11-28T22:01:36Z | 33,977,530 | 9 | 2015-11-28T23:11:15Z | [
"python",
"numpy",
"geometry",
"random-sample",
"uniform-distribution"
] | I am trying to generate random points on the surface of the sphere using numpy. I have reviewed the post that explains uniform distribution [here](http://stackoverflow.com/questions/5408276/python-uniform-spherical-distribution). However, need ideas on how to generate the points only on the surface of the sphere. I have coordinates (x, y, z) and the radius of each of these spheres.
I am not very well-versed with Mathematics at this level and trying to make sense of the Monte Carlo simulation.
Any help will be much appreciated.
Thanks,
Parin | Based on [this approach](http://mathworld.wolfram.com/SpherePointPicking.html), you can simply generate a vector consisting of independent samples from three standard normal distributions, then normalize the vector such that its magnitude is 1:
```
import numpy as np
def sample_spherical(npoints, ndim=3):
vec = np.random.randn(ndim, npoints)
vec /= np.linalg.norm(vec, axis=0)
return vec
```
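A quick check that the samples really lie on the unit sphere, repeating the normalization inline:

```python
import numpy as np

vec = np.random.randn(3, 1000)
vec /= np.linalg.norm(vec, axis=0)

# every column should now have (numerically) unit length
print(np.allclose(np.linalg.norm(vec, axis=0), 1.0))  # True
```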
For example:
```
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import axes3d
phi = np.linspace(0, np.pi, 20)
theta = np.linspace(0, 2 * np.pi, 40)
x = np.outer(np.sin(theta), np.cos(phi))
y = np.outer(np.sin(theta), np.sin(phi))
z = np.outer(np.cos(theta), np.ones_like(phi))
xi, yi, zi = sample_spherical(100)
fig, ax = plt.subplots(1, 1, subplot_kw={'projection':'3d', 'aspect':'equal'})
ax.plot_wireframe(x, y, z, color='k', rstride=1, cstride=1)
ax.scatter(xi, yi, zi, s=100, c='r', zorder=10)
```
[](http://i.stack.imgur.com/4vpfj.png)
The same method also generalizes to picking uniformly distributed points on the unit circle (`ndim=2`) or on the surfaces of higher-dimensional unit hyperspheres. |
What's the difference between loop.create_task, asyncio.async/ensure_future and Task? | 33,980,086 | 18 | 2015-11-29T06:30:45Z | 33,980,293 | 12 | 2015-11-29T07:02:18Z | [
"python",
"python-3.x",
"coroutine",
"python-asyncio"
] | I'm a little bit confused by some `asyncio` functions. I see there is [`BaseEventLoop.create_task(coro)`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.BaseEventLoop.create_task) function to schedule a co-routine. The documentation for `create_task` says its a new function and for compatibility we should use [`asyncio.async(coro)`](https://docs.python.org/3/library/asyncio-task.html#asyncio.async) which by referring to docs again I see is an alias for [`asyncio.ensure_future(coro)`](https://docs.python.org/3/library/asyncio-task.html#asyncio.ensure_future) which again schedules the execution of a co-routine.
Meanwhile, I've been using [`Task(coro)`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Task) for scheduling co-routine execution and that too seems to be working fine. so, what's the difference between all these? | As you've noticed, they all do the same thing.
`asyncio.async` had to be replaced with `asyncio.ensure_future` because in Python >= 3.5, `async` has been made a keyword[[1]](https://www.python.org/dev/peps/pep-0492/#backwards-compatibility).
`create_task`'s raison d'etre[[2]](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.BaseEventLoop.create_task):
> Third-party event loops can use their own subclass of Task for interoperability. In this case, the result type is a subclass of Task.
And this also means you **should not** create a `Task` directly, because different event loops might have different ways of creating a "Task".
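A minimal sketch showing the spellings side by side (using an explicit event loop, in the style of Python 3.4/3.5):

```python
import asyncio

async def work():
    await asyncio.sleep(0)
    return 42

async def main(loop):
    t1 = loop.create_task(work())       # accepts plain coroutines
    t2 = asyncio.ensure_future(work())  # accepts coroutines or any awaitable
    return await asyncio.gather(t1, t2)

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
results = loop.run_until_complete(main(loop))
loop.close()
print(results)  # [42, 42]
```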
**Edit**
Another *important* difference is that in addition to accepting coroutines, `ensure_future` also accepts any awaitable object; `create_task` on the other hand just accepts coroutines. |
Theano config directly in script | 33,988,334 | 4 | 2015-11-29T21:25:01Z | 33,992,733 | 7 | 2015-11-30T06:34:42Z | [
"python",
"theano"
] | I'm new to Theano and I wonder how to configure the default setting directly from script (without setting envir. variables). E.g. this is a working solution ([source](http://deeplearning.net/software/theano/tutorial/using_gpu.html)):
```
$ THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python check1.py
```
I intend to come up with the identical solution that is executed by only:
```
$ python check1.py
```
and the additional parameters are set directly in the script itself. E.g. somehow like this:
```
import theano
theano.set('mode', 'FAST_RUN')
theano.set('device', 'gpu')
theano.set('floatX', 'float32')
# rest of the script
```
Is it even possible? I read the [config page](http://deeplearning.net/software/theano/library/config.html) which provides the information that allows me to read the already set values (but not to set them by myself). | When you do this:
```
$ THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python check1.py
```
All you're actually doing is setting an environment variable before running the Python script.
You can set environment variables in Python too. For example, the `THEANO_FLAGS` environment variable can be set inside Python like this:
```
import os
os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=gpu,floatX=float32"
```
Note that some Theano config variables cannot be changed after importing Theano so this is fine:
```
import os
os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=gpu,floatX=float32"
import theano
```
But this will not work as expected:
```
import theano
import os
os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=gpu,floatX=float32"
``` |
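For reference, the value of `THEANO_FLAGS` is just a comma-separated list of `key=value` pairs, so you can sanity-check the string you set without importing Theano at all (stdlib-only, illustrative):

```python
import os

os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=gpu,floatX=float32"

# Split the string back into a dict, the way a consumer would parse it.
flags = dict(item.split("=", 1) for item in os.environ["THEANO_FLAGS"].split(","))
assert flags == {"mode": "FAST_RUN", "device": "gpu", "floatX": "float32"}
```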
Is there a filter() opposite builtin? | 33,989,155 | 2 | 2015-11-29T22:56:49Z | 33,989,179 | 8 | 2015-11-29T22:59:48Z | [
"python",
"functional-programming"
] | Is there a function in Python that does the opposite of `filter`, i.e. one that keeps the items in the iterable for which the callback returns `False`? I couldn't find anything. | No, there is no built-in inverse function for `filter()`, because you could simply *invert the test*. Just add `not`:
```
positive = filter(lambda v: some_test(v), values)
negative = filter(lambda v: not some_test(v), values)
```
The `itertools` module does have [`itertools.ifilterfalse()`](https://docs.python.org/2/library/itertools.html#itertools.ifilterfalse), which is rather redundant because inverting a boolean test is so simple. The `itertools` version always operates as a generator. |
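A runnable sketch of both approaches (note that on Python 3 the `itertools` function was renamed to `filterfalse`, and `filter()` returns a lazy iterator rather than a list):

```python
from itertools import filterfalse  # called ifilterfalse on Python 2

def is_even(v):
    return v % 2 == 0

values = [1, 2, 3, 4, 5, 6]
print(list(filter(is_even, values)))                   # [2, 4, 6]
print(list(filter(lambda v: not is_even(v), values)))  # [1, 3, 5]
print(list(filterfalse(is_even, values)))              # [1, 3, 5]
```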
Python: gensim: RuntimeError: you must first build vocabulary before training the model | 33,989,826 | 5 | 2015-11-30T00:30:37Z | 33,991,111 | 13 | 2015-11-30T03:43:23Z | [
"python",
"gensim",
"word2vec"
] | I know that this question has been asked already, but I was still not able to find a solution for it.
I would like to use gensim's `word2vec` on a custom data set, but now I'm still figuring out in what format the dataset has to be. I had a look at [this post](http://streamhacker.com/2014/12/29/word2vec-nltk/) where the input is basically a list of lists (one big list containing other lists that are tokenized sentences from the NLTK Brown corpus). So I thought that this is the input format I have to use for the command `word2vec.Word2Vec()`. However, it won't work with my little test set and I don't understand why.
What I have tried:
**This worked**:
```
from gensim.models import word2vec
from nltk.corpus import brown
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
brown_vecs = word2vec.Word2Vec(brown.sents())
```
**This didn't work**:
```
sentences = [ "the quick brown fox jumps over the lazy dogs","yoyoyo you go home now to sleep"]
vocab = [s.encode('utf-8').split() for s in sentences]
voc_vec = word2vec.Word2Vec(vocab)
```
I don't understand why it doesn't work with the "mock" data, even though it has the same data structure as the sentences from the Brown corpus:
**vocab**:
```
[['the', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dogs'], ['yoyoyo', 'you', 'go', 'home', 'now', 'to', 'sleep']]
```
**brown.sents()**: (the beginning of it)
```
[['The', 'Fulton', 'County', 'Grand', 'Jury', 'said', 'Friday', 'an', 'investigation', 'of', "Atlanta's", 'recent', 'primary', 'election', 'produced', '``', 'no', 'evidence', "''", 'that', 'any', 'irregularities', 'took', 'place', '.'], ['The', 'jury', 'further', 'said', 'in', 'term-end', 'presentments', 'that', 'the', 'City', 'Executive', 'Committee', ',', 'which', 'had', 'over-all', 'charge', 'of', 'the', 'election', ',', '``', 'deserves', 'the', 'praise', 'and', 'thanks', 'of', 'the', 'City', 'of', 'Atlanta', "''", 'for', 'the', 'manner', 'in', 'which', 'the', 'election', 'was', 'conducted', '.'], ...]
```
Can anyone please tell me what I'm doing wrong? | The default `min_count` in gensim's `Word2Vec` is 5: any word appearing fewer than 5 times is dropped from the vocabulary. No word in your toy corpus reaches that threshold, so the vocabulary ends up empty, hence the error. Try lowering it:
```
voc_vec = word2vec.Word2Vec(vocab, min_count=1)
``` |
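To see why the toy corpus fails, you can replay the `min_count` filter with a plain word count (no gensim required; an illustrative sketch):

```python
from collections import Counter

vocab = [['the', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dogs'],
         ['yoyoyo', 'you', 'go', 'home', 'now', 'to', 'sleep']]
counts = Counter(word for sentence in vocab for word in sentence)

surviving_default = {w for w, c in counts.items() if c >= 5}  # min_count=5
surviving_min1    = {w for w, c in counts.items() if c >= 1}  # min_count=1
print(surviving_default)    # set() -- empty vocabulary, hence the error
print(len(surviving_min1))  # 15 -- every distinct word survives
```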
AttributeError: Unknown property color_cycle | 33,995,707 | 4 | 2015-11-30T09:53:54Z | 34,629,029 | 8 | 2016-01-06T08:55:37Z | [
"python",
"pandas",
"matplotlib"
] | I am learning pandas and trying to plot the `id` column, but I get the error `AttributeError: Unknown property color_cycle` and an empty graph. The graph only appears in the interactive shell; when I execute it as a script I get the same error and no graph at all.
Below is the log:
```
>>> import pandas as pd
>>> pd.set_option('display.mpl_style', 'default')
>>> df = pd.read_csv('2015.csv', parse_dates=['log_date'])
>>> employee_198 = df[df['employee_id'] == 198]
>>> print(employee_198)
id version company_id early_minutes employee_id late_minutes \
90724 91635 0 1 NaN 198 NaN
90725 91636 0 1 NaN 198 0:20:00
90726 91637 0 1 0:20:00 198 NaN
90727 91638 0 1 0:05:00 198 NaN
90728 91639 0 1 0:25:00 198 NaN
90729 91640 0 1 0:15:00 198 0:20:00
90730 91641 0 1 NaN 198 0:15:00
90731 91642 0 1 NaN 198 NaN
90732 91643 0 1 NaN 198 NaN
90733 91644 0 1 NaN 198 NaN
90734 91645 0 1 NaN 198 NaN
90735 91646 0 1 NaN 198 NaN
90736 91647 0 1 NaN 198 NaN
90737 91648 0 1 NaN 198 NaN
90738 91649 0 1 NaN 198 NaN
90739 91650 0 1 NaN 198 0:10:00
90740 91651 0 1 NaN 198 NaN
90741 91652 0 1 NaN 198 NaN
90742 91653 0 1 NaN 198 NaN
90743 91654 0 1 NaN 198 NaN
90744 91655 0 1 NaN 198 NaN
90745 91656 0 1 NaN 198 NaN
90746 91657 0 1 1:30:00 198 NaN
90747 91658 0 1 0:04:25 198 NaN
90748 91659 0 1 NaN 198 NaN
90749 91660 0 1 NaN 198 NaN
90750 91661 0 1 NaN 198 NaN
90751 91662 0 1 NaN 198 NaN
90752 91663 0 1 NaN 198 NaN
90753 91664 0 1 NaN 198 NaN
90897 91808 0 1 NaN 198 0:04:14
91024 91935 0 1 NaN 198 0:21:43
91151 92062 0 1 NaN 198 0:42:07
91278 92189 0 1 NaN 198 0:16:36
91500 92411 0 1 NaN 198 0:07:12
91532 92443 0 1 NaN 198 NaN
91659 92570 0 1 NaN 198 0:53:03
91786 92697 0 1 NaN 198 NaN
91913 92824 0 1 NaN 198 NaN
92040 92951 0 1 NaN 198 NaN
92121 93032 0 1 4:22:35 198 NaN
92420 93331 0 1 NaN 198 NaN
92421 93332 0 1 NaN 198 3:51:15
log_date log_in_time log_out_time over_time remarks \
90724 2015-11-15 No In No Out NaN [Absent]
90725 2015-10-18 10:00:00 17:40:00 NaN NaN
90726 2015-10-19 9:20:00 17:10:00 NaN NaN
90727 2015-10-25 9:30:00 17:25:00 NaN NaN
90728 2015-10-26 9:34:00 17:05:00 NaN NaN
90729 2015-10-27 10:00:00 17:15:00 NaN NaN
90730 2015-10-28 9:55:00 17:30:00 NaN NaN
90731 2015-10-29 9:40:00 17:30:00 NaN NaN
90732 2015-10-30 9:00:00 17:30:00 0:30:00 NaN
90733 2015-10-20 No In No Out NaN [Absent]
90734 2015-10-21 No In No Out NaN [Maha Asthami]
90735 2015-10-22 No In No Out NaN [Nawami/Dashami]
90736 2015-10-23 No In No Out NaN [Absent]
90737 2015-10-24 No In No Out NaN [Off]
90738 2015-11-01 9:15:00 17:30:00 0:15:00 NaN
90739 2015-11-02 9:50:00 17:30:00 NaN NaN
90740 2015-11-03 9:30:00 17:30:00 NaN NaN
90741 2015-11-04 9:40:00 17:30:00 NaN NaN
90742 2015-11-05 9:38:00 17:30:00 NaN NaN
90743 2015-11-06 9:30:00 17:30:00 NaN NaN
90744 2015-11-08 9:30:00 17:30:00 NaN NaN
90745 2015-11-09 9:30:00 17:30:00 NaN NaN
90746 2015-11-10 9:30:00 16:00:00 NaN NaN
90747 2015-11-16 9:30:00 17:25:35 NaN NaN
90748 2015-11-07 No In No Out NaN [Off]
90749 2015-11-11 No In No Out NaN [Laxmi Puja]
90750 2015-11-12 No In No Out NaN [Govardhan Puja]
90751 2015-11-13 No In No Out NaN [Bhai Tika]
90752 2015-11-14 No In No Out NaN [Off]
90753 2015-10-31 No In No Out NaN [Off]
90897 2015-11-17 9:44:14 17:35:01 NaN NaN
91024 2015-11-18 10:01:43 17:36:29 NaN NaN
91151 2015-11-19 10:22:07 17:43:47 NaN NaN
91278 2015-11-20 9:56:36 17:37:00 NaN NaN
91500 2015-11-22 9:47:12 17:46:44 NaN NaN
91532 2015-11-21 No In No Out NaN [Off]
91659 2015-11-23 10:33:03 17:30:00 NaN NaN
91786 2015-11-24 9:34:11 17:32:24 NaN NaN
91913 2015-11-25 9:36:05 17:35:00 NaN NaN
92040 2015-11-26 9:35:39 17:58:05 0:22:26 NaN
92121 2015-11-27 9:08:45 13:07:25 NaN NaN
92420 2015-11-28 No In No Out NaN [Off]
92421 2015-11-29 13:31:15 17:34:44 NaN NaN
shift_in_time shift_out_time work_time under_time
90724 9:30:00 17:30:00 NaN NaN
90725 9:30:00 17:30:00 7:40:00 0:20:00
90726 9:30:00 17:30:00 7:50:00 0:10:00
90727 9:30:00 17:30:00 7:55:00 0:05:00
90728 9:30:00 17:30:00 7:31:00 0:29:00
90729 9:30:00 17:30:00 7:15:00 0:45:00
90730 9:30:00 17:30:00 7:35:00 0:25:00
90731 9:30:00 17:30:00 7:50:00 0:10:00
90732 9:30:00 17:30:00 8:30:00 NaN
90733 9:30:00 17:30:00 NaN NaN
90734 9:30:00 17:30:00 NaN NaN
90735 9:30:00 17:30:00 NaN NaN
90736 9:30:00 17:30:00 NaN NaN
90737 9:30:00 17:30:00 NaN NaN
90738 9:30:00 17:30:00 8:15:00 NaN
90739 9:30:00 17:30:00 7:40:00 0:20:00
90740 9:30:00 17:30:00 8:00:00 NaN
90741 9:30:00 17:30:00 7:50:00 0:10:00
90742 9:30:00 17:30:00 7:52:00 0:08:00
90743 9:30:00 17:30:00 8:00:00 NaN
90744 9:30:00 17:30:00 8:00:00 NaN
90745 9:30:00 17:30:00 8:00:00 NaN
90746 9:30:00 17:30:00 6:30:00 1:30:00
90747 9:30:00 17:30:00 7:55:35 0:04:25
90748 9:30:00 17:30:00 NaN NaN
90749 9:30:00 17:30:00 NaN NaN
90750 9:30:00 17:30:00 NaN NaN
90751 9:30:00 17:30:00 NaN NaN
90752 9:30:00 17:30:00 NaN NaN
90753 9:30:00 17:30:00 NaN NaN
90897 9:30:00 17:30:00 7:50:47 0:09:13
91024 9:30:00 17:30:00 7:34:46 0:25:14
91151 9:30:00 17:30:00 7:21:40 0:38:20
91278 9:30:00 17:30:00 7:40:24 0:19:36
91500 9:30:00 17:30:00 7:59:32 0:00:28
91532 9:30:00 17:30:00 NaN NaN
91659 9:30:00 17:30:00 6:56:57 1:03:03
91786 9:30:00 17:30:00 7:58:13 0:01:47
91913 9:30:00 17:30:00 7:58:55 0:01:05
92040 9:30:00 17:30:00 8:22:26 NaN
92121 9:30:00 17:30:00 3:58:40 4:01:20
92420 9:30:00 17:30:00 NaN NaN
92421 9:30:00 17:30:00 4:03:29 3:56:31
>>> employee_198['id'].plot()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\pandas\tools\plotting.py", line 3497, in __call__
**kwds)
File "C:\Python27\lib\site-packages\pandas\tools\plotting.py", line 2587, in plot_series
**kwds)
File "C:\Python27\lib\site-packages\pandas\tools\plotting.py", line 2384, in _plot
plot_obj.generate()
File "C:\Python27\lib\site-packages\pandas\tools\plotting.py", line 987, in generate
self._make_plot()
File "C:\Python27\lib\site-packages\pandas\tools\plotting.py", line 1664, in _make_plot
**kwds)
File "C:\Python27\lib\site-packages\pandas\tools\plotting.py", line 1678, in _plot
lines = MPLPlot._plot(ax, x, y_values, style=style, **kwds)
File "C:\Python27\lib\site-packages\pandas\tools\plotting.py", line 1300, in _plot
return ax.plot(*args, **kwds)
File "C:\Python27\lib\site-packages\matplotlib\__init__.py", line 1811, in inner
return func(ax, *args, **kwargs)
File "C:\Python27\lib\site-packages\matplotlib\axes\_axes.py", line 1427, in plot
for line in self._get_lines(*args, **kwargs):
File "C:\Python27\lib\site-packages\matplotlib\axes\_base.py", line 386, in _grab_next_args
for seg in self._plot_args(remaining, kwargs):
File "C:\Python27\lib\site-packages\matplotlib\axes\_base.py", line 374, in _plot_args
seg = func(x[:, j % ncx], y[:, j % ncy], kw, kwargs)
File "C:\Python27\lib\site-packages\matplotlib\axes\_base.py", line 280, in _makeline
seg = mlines.Line2D(x, y, **kw)
File "C:\Python27\lib\site-packages\matplotlib\lines.py", line 366, in __init__
self.update(kwargs)
File "C:\Python27\lib\site-packages\matplotlib\artist.py", line 856, in update
raise AttributeError('Unknown property %s' % k)
AttributeError: Unknown property color_cycle
>>>
``` | There's currently a bug when using Pandas 0.17.1 with Matplotlib 1.5.0. You can check your installed versions with:
```
import pandas
import matplotlib
print pandas.__version__
print matplotlib.__version__
```
Instead of using
```
import pandas as pd
pd.set_option('display.mpl_style', 'default')
```
Use:
```
import matplotlib
matplotlib.style.use('ggplot')
``` |
Hash for lambda function in Python | 33,998,594 | 21 | 2015-11-30T12:22:33Z | 33,998,809 | 35 | 2015-11-30T12:34:23Z | [
"python",
"python-2.7",
"hash",
"lambda"
] | I'm trying to get the hash of a lambda function. Why do I get two values (8746164008739 and -9223363290690767077)? Why is the hash from the lambda function not always one value?
```
>>> fn = lambda: 1
>>> hash(fn)
-9223363290690767077
>>> fn = lambda: 1
>>> hash(fn)
8746164008739
>>> fn = lambda: 1
>>> hash(fn)
-9223363290690767077
>>> fn = lambda: 1
>>> hash(fn)
8746164008739
>>> fn = lambda: 1
>>> hash(fn)
-9223363290690767077
``` | Two objects are not guaranteed to hash to the same value unless they compare equal [1].
Python functions (including lambdas) don't compare equal even if they have identical code [2]. For example:
```
>>> (lambda: 1) == (lambda: 1)
False
```
Implementation-wise, this behaviour is due to the fact that function objects don't provide their own equality operator. Instead, they inherit the default one that uses the object's identity, i.e. its address. From the [documentation](https://docs.python.org/2/reference/datamodel.html#object.__cmp__):
> If no `__cmp__()`, `__eq__()` or `__ne__()` operation is defined, class
> instances are compared by object identity ("address").
Here is what happens in your particular example:
```
fn = lambda: 1 # New function is allocated at address A and stored in fn.
fn = lambda: 1 # New function is allocated at address B and stored in fn.
# The function at address A is garbage collected.
fn = lambda: 1 # New function is allocated at address A and stored in fn.
# The function at address B is garbage collected.
fn = lambda: 1 # New function is allocated at address B and stored in fn.
# The function at address A is garbage collected.
...
```
Since address `A` is always hashed to one value, and address `B` to another, you are seeing `hash(fn)` alternate between the two values. This alternating behaviour is, however, an implementation artefact and could change one day if, for example, the garbage collector were made to behave slightly differently.
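To confirm that identity, not code, drives the comparison, keep both functions alive at the same time so that neither address can be reused:

```python
f1 = lambda: 1
f2 = lambda: 1   # f1 is still referenced, so f2 must live at a different address

assert f1 is not f2          # distinct objects...
assert f1 != f2              # ...that compare unequal despite identical code
assert f1() == f2() == 1     # yet calling them gives the same result
assert hash(f1) == hash(f1)  # a given object's hash is stable
```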
The following insightful note has been contributed by @ruakh:
> It is worth noting that it's not possible to write a general process
> for determining if two functions are equivalent. (This is a
> consequence of the [undecidability](https://en.wikipedia.org/wiki/Undecidable_problem) of the [halting problem](https://en.wikipedia.org/wiki/Halting_problem).)
> Furthermore, two Python functions can behave differently even if their
> code is identical (since they may be closures referring to
> distinct-but-identically-named variables). So it makes sense that
> Python functions don't overload the equality operator: there's no way
> to implement anything better than the default object-identity
> comparison.
[1] The converse is generally not true: two objects that compare unequal can have the same hash value. This is called a [hash collision](https://en.wikipedia.org/wiki/Collision_(computer_science)).
[2] *Calling* your lambdas and then hashing the result would of course always give the same value since `hash(1)` is always the same within one program:
```
>>> (lambda: 1)() == (lambda: 1)()
True
``` |
Hash for lambda function in Python | 33,998,594 | 21 | 2015-11-30T12:22:33Z | 33,998,928 | 10 | 2015-11-30T12:41:08Z | [
"python",
"python-2.7",
"hash",
"lambda"
] | I'm trying to get the hash of a lambda function. Why do I get two values (8746164008739 and -9223363290690767077)? Why is the hash from the lambda function not always one value?
```
>>> fn = lambda: 1
>>> hash(fn)
-9223363290690767077
>>> fn = lambda: 1
>>> hash(fn)
8746164008739
>>> fn = lambda: 1
>>> hash(fn)
-9223363290690767077
>>> fn = lambda: 1
>>> hash(fn)
8746164008739
>>> fn = lambda: 1
>>> hash(fn)
-9223363290690767077
``` | The hash of a `lambda` function object is based on its memory address (in CPython this is what the `id` function returns). This means that any two function objects will have different hashes (assuming there are no hash collisions), even if the functions contain the same code.
To explain what's happening in the question, first note that writing `fn = lambda: 1` creates a new function object in memory and binds the name `fn` to it. This new function will therefore have a different hash value to any existing functions.
Repeating `fn = lambda: 1`, you get alternating values for the hashes because when `fn` is bound to the *newly* created function object, the function that `fn` *previously* pointed to is garbage collected by Python. This is because there are no longer any references to it (since the name `fn` now points to a different object).
The Python interpreter then reuses this old memory address for the next new function object created by writing `fn = lambda: 1`.
This behaviour might vary between different systems and Python implementations. |
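A quick check that the hash follows the object rather than its code: aliasing the same function object always yields the same hash, while a second, still-referenced lambda gets a new address:

```python
fn = lambda: 1
alias = fn                      # a second name for the *same* object
assert alias is fn
assert hash(alias) == hash(fn)  # identical object, identical hash

other = lambda: 1               # fn is still alive, so no address reuse here
assert other is not fn
```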
How to fix FailedPreconditionError: Tensor Flow Tutorial Using numpy/pandas data | 34,001,922 | 4 | 2015-11-30T14:41:20Z | 34,013,098 | 10 | 2015-12-01T05:15:19Z | [
"python",
"pandas",
"classification",
"tensorflow"
] | I am working through the [tensor flow tutorial](https://www.tensorflow.org/tutorials/mnist/pros/index.html), but am trying to use a numpy or pandas format for the data, so that I can compare it with Scikit-Learn results.
I get the digit recognition data from kaggle - [here](https://www.kaggle.com/c/digit-recognizer/data)
The tutorial uses a weird format for uploading the data, where as I'm trying to compare with results from other libraries, so would like to keep it in numpy or pandas format.
Here is the standard tensor flow tutorial code (this all works fine):
```
# Stuff from tensorflow tutorial
import tensorflow as tf
sess = tf.InteractiveSession()
x = tf.placeholder("float", shape=[None, 784])
y_ = tf.placeholder("float", shape=[None, 10])
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x,W) + b)
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
```
Here I read the data, strip out the target variables and split the data into testing and training datasets (this all works fine):
```
# Read dataframe from training data
csvfile='train.csv'
from pandas import DataFrame, read_csv
df = read_csv(csvfile)
# Strip off the target data and make it a separate dataframe.
Target=df.label
del df["label"]
# Split data into training and testing sets
msk = np.random.rand(len(df)) < 0.8
dfTest = df[~msk]
TargetTest = Target[~msk]
df = df[msk]
Target = Target[msk]
# One hot encode the target
OHTarget=pd.get_dummies(Target)
OHTargetTest=pd.get_dummies(TargetTest)
```
Now, when I try to run the training step, I get a FailedPreconditionError:
```
for i in range(100):
batch = np.array(df[i*50:i*50+50].values)
batch = np.multiply(batch, 1.0 / 255.0)
Target_batch = np.array(OHTarget[i*50:i*50+50].values)
Target_batch = np.multiply(Target_batch, 1.0 / 255.0)
train_step.run(feed_dict={x: batch, y_: Target_batch})
```
Here's the full error:
```
---------------------------------------------------------------------------
FailedPreconditionError Traceback (most recent call last)
<ipython-input-82-967faab7d494> in <module>()
4 Target_batch = np.array(OHTarget[i*50:i*50+50].values)
5 Target_batch = np.multiply(Target_batch, 1.0 / 255.0)
----> 6 train_step.run(feed_dict={x: batch, y_: Target_batch})
/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in run(self, feed_dict, session)
1265 none, the default session will be used.
1266 """
-> 1267 _run_using_default_session(self, feed_dict, self.graph, session)
1268
1269
/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in _run_using_default_session(operation, feed_dict, graph, session)
2761 "the operation's graph is different from the session's "
2762 "graph.")
-> 2763 session.run(operation, feed_dict)
2764
2765
/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in run(self, fetches, feed_dict)
343
344 # Run request and get response.
--> 345 results = self._do_run(target_list, unique_fetch_targets, feed_dict_string)
346
347 # User may have fetched the same tensor multiple times, but we
/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _do_run(self, target_list, fetch_list, feed_dict)
417 # pylint: disable=protected-access
418 raise errors._make_specific_exception(node_def, op, e.error_message,
--> 419 e.code)
420 # pylint: enable=protected-access
421 raise e_type, e_value, e_traceback
FailedPreconditionError: Attempting to use uninitialized value Variable_1
[[Node: gradients/add_grad/Shape_1 = Shape[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Variable_1)]]
Caused by op u'gradients/add_grad/Shape_1', defined at:
File "/Users/user32/anaconda/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/Users/user32/anaconda/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/Users/user32/anaconda/lib/python2.7/site-packages/ipykernel/__main__.py", line 3, in <module>
app.launch_new_instance()
File "/Users/user32/anaconda/lib/python2.7/site-packages/traitlets/config/application.py", line 592, in launch_instance
app.start()
File "/Users/user32/anaconda/lib/python2.7/site-packages/ipykernel/kernelapp.py", line 403, in start
ioloop.IOLoop.instance().start()
File "/Users/user32/anaconda/lib/python2.7/site-packages/zmq/eventloop/ioloop.py", line 151, in start
super(ZMQIOLoop, self).start()
File "/Users/user32/anaconda/lib/python2.7/site-packages/tornado/ioloop.py", line 866, in start
handler_func(fd_obj, events)
File "/Users/user32/anaconda/lib/python2.7/site-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/Users/user32/anaconda/lib/python2.7/site-packages/zmq/eventloop/zmqstream.py", line 433, in _handle_events
self._handle_recv()
File "/Users/user32/anaconda/lib/python2.7/site-packages/zmq/eventloop/zmqstream.py", line 465, in _handle_recv
self._run_callback(callback, msg)
File "/Users/user32/anaconda/lib/python2.7/site-packages/zmq/eventloop/zmqstream.py", line 407, in _run_callback
callback(*args, **kwargs)
File "/Users/user32/anaconda/lib/python2.7/site-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/Users/user32/anaconda/lib/python2.7/site-packages/ipykernel/kernelbase.py", line 260, in dispatcher
return self.dispatch_shell(stream, msg)
File "/Users/user32/anaconda/lib/python2.7/site-packages/ipykernel/kernelbase.py", line 212, in dispatch_shell
handler(stream, idents, msg)
File "/Users/user32/anaconda/lib/python2.7/site-packages/ipykernel/kernelbase.py", line 370, in execute_request
user_expressions, allow_stdin)
File "/Users/user32/anaconda/lib/python2.7/site-packages/ipykernel/ipkernel.py", line 175, in do_execute
shell.run_cell(code, store_history=store_history, silent=silent)
File "/Users/user32/anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2902, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/Users/user32/anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 3006, in run_ast_nodes
if self.run_code(code, result):
File "/Users/user32/anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 3066, in run_code
  File "/Users/user32/anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 3066, in run_code
File "<ipython-input-47-e1b82c8bf059>", line 1, in <module>
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 165, in minimize
gate_gradients=gate_gradients)
File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 205, in compute_gradients
loss, var_list, gate_gradients=(gate_gradients == Optimizer.GATE_OP))
File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/gradients.py", line 414, in gradients
in_grads = _AsList(grad_fn(op_wrapper, *out_grads))
File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/math_grad.py", line 248, in _AddGrad
sy = array_ops.shape(y)
File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 686, in shape
return _op_def_lib.apply_op("Shape", input=input, name=name)
File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 633, in apply_op
op_def=op_def)
File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1710, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 988, in __init__
self._traceback = _extract_stack()
...which was originally created as op u'add', defined at:
File "/Users/user32/anaconda/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
[elided 17 identical lines from previous traceback]
File "/Users/user32/anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 3066, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-45-59183d86e462>", line 1, in <module>
y = tf.nn.softmax(tf.matmul(x,W) + b)
File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.py", line 403, in binary_op_wrapper
return func(x, y, name=name)
File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 44, in add
return _op_def_lib.apply_op("Add", x=x, y=y, name=name)
File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 633, in apply_op
op_def=op_def)
File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1710, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/Users/user32/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 988, in __init__
self._traceback = _extract_stack()
```
Any ideas as to how I can fix this?
Thanks for your help! | The `FailedPreconditionError` arises because the program is attempting to read a variable (named `"Variable_1"`) before it has been initialized. In TensorFlow, all variables must be explicitly initialized, by running their "initializer" operations. For convenience, you can run all of the variable initializers in the current session by executing the following statement before your training loop:
```
tf.initialize_all_variables().run()
```
Note that this answer assumes that, as in the question, you are using `tf.InteractiveSession`, which allows you to run operations without specifying a session. For non-interactive uses, it is more common to use `tf.Session`, and initialize as follows:
```
init_op = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init_op)
``` |
Tensorflow slicing based on variable | 34,002,591 | 4 | 2015-11-30T15:53:28Z | 34,002,962 | 8 | 2015-11-30T16:11:55Z | [
"python",
"tensorflow"
] | I've found that indexing still is an open issue in tensorflow [(#206)](https://github.com/tensorflow/tensorflow/issues/206), so I'm wondering what I could use as a workaround at the moment. I want to index/slice a row/column of a matrix based on a variable that changes for every training example.
What I've tried so far:
1. Slicing based on placeholder (doesn't work)
The following (working) code slices based on a fixed number.
```
import tensorflow as tf
import numpy as np
x = tf.placeholder("float")
y = tf.slice(x,[0],[1])
#initialize
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
#run
result = sess.run(y, feed_dict={x:[1,2,3,4,5]})
print(result)
```
However, it seems that I can't simply replace one of these fixed numbers with a tf.placeholder. The following code gives me the error *"TypeError: List of Tensors when single Tensor expected."*
```
import tensorflow as tf
import numpy as np
x = tf.placeholder("float")
i = tf.placeholder("int32")
y = tf.slice(x,[i],[1])
#initialize
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
#run
result = sess.run(y, feed_dict={x:[1,2,3,4,5],i:0})
print(result)
```
This sounds like the brackets around *[i]* are the problem, but removing them doesn't help either. How can I use a placeholder/variable as an index?
2. Slicing based on python variable (doesn't backprop/update properly)
I've also tried using a normal python variable as index. This does not lead to an error, but the network doesn't learn anything while training. I suppose because the changing variable is not properly registered, the graph is malformed and updates don't work?
3. Slicing via one-hot vector + multiplication (works, but is slow)
One workaround I found is using a one-hot vector. Making a one-hot vector in numpy, passing this using a placeholder, then doing the slicing via matrix multiplication. This works, but is quite slow.
Any ideas how to efficiently slice/index based on a variable? | Slicing based on a placeholder should work just fine. It looks like you are running into a type error, due to some subtle issues of shapes and types. Where you have the following:
```
x = tf.placeholder("float")
i = tf.placeholder("int32")
y = tf.slice(x,[i],[1])
```
...you should instead have:
```
x = tf.placeholder("float")
i = tf.placeholder("int32")
y = tf.slice(x,i,[1])
```
...and then you should feed `i` as `[0]` in the call to `sess.run()`.
To make this a little clearer, I would recommend rewriting the code as follows:
```
import tensorflow as tf
import numpy as np
x = tf.placeholder(tf.float32, shape=[None]) # 1-D tensor
i = tf.placeholder(tf.int32, shape=[1])
y = tf.slice(x, i, [1])
#initialize
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
#run
result = sess.run(y, feed_dict={x: [1, 2, 3, 4, 5], i: [0]})
print(result)
```
The additional `shape` arguments to the `tf.placeholder` op help to ensure that the values you feed have the appropriate shapes, and also that TensorFlow will raise an error if the shapes are not correct. |
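For intuition, `tf.slice(x, begin, size)` on a 1-D tensor follows the same contract as this pure-Python sketch (an analogy only, not TensorFlow code; `slice_1d` is a made-up helper):

```python
def slice_1d(x, begin, size):
    # tf.slice takes begin/size as 1-D lists, one entry per dimension.
    (b,), (s,) = begin, size
    return x[b:b + s]

print(slice_1d([1, 2, 3, 4, 5], [0], [1]))  # [1]
print(slice_1d([1, 2, 3, 4, 5], [2], [2]))  # [3, 4]
```

This is why `i` must be fed as the one-element list `[0]` rather than the bare integer `0`.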
GRU implementation in Theano | 34,004,474 | 3 | 2015-11-30T17:30:23Z | 34,011,303 | 7 | 2015-12-01T01:53:34Z | [
"python",
"neural-network",
"theano",
"deep-learning",
"gated-recurrent-unit"
] | Based on the LSTM code provided in the official Theano tutorial (<http://deeplearning.net/tutorial/code/lstm.py>), I changed the LSTM layer code (i.e. the functions `lstm_layer()` and `param_init_lstm()`) to perform a GRU instead.
The provided LSTM code trains well, but not the GRU I coded: the accuracy on the training set with the LSTM goes up to 1 (train cost = 0), while with the GRU it stagnates at 0.7 (train cost = 0.3).
Below is the code I use for the GRU. I kept the same function names as in the tutorial, so that one can copy-paste the code directly into it. What could explain the poor performance of the GRU?
```
import numpy as np
def param_init_lstm(options, params, prefix='lstm'):
"""
GRU
"""
W = np.concatenate([ortho_weight(options['dim_proj']), # Weight matrix for the input in the reset gate
ortho_weight(options['dim_proj']),
ortho_weight(options['dim_proj'])], # Weight matrix for the input in the update gate
axis=1)
params[_p(prefix, 'W')] = W
U = np.concatenate([ortho_weight(options['dim_proj']), # Weight matrix for the previous hidden state in the reset gate
ortho_weight(options['dim_proj']),
ortho_weight(options['dim_proj'])], # Weight matrix for the previous hidden state in the update gate
axis=1)
params[_p(prefix, 'U')] = U
b = np.zeros((3 * options['dim_proj'],)) # Biases for the reset gate and the update gate
params[_p(prefix, 'b')] = b.astype(config.floatX)
return params
def lstm_layer(tparams, state_below, options, prefix='lstm', mask=None):
nsteps = state_below.shape[0]
if state_below.ndim == 3:
n_samples = state_below.shape[1]
else:
n_samples = 1
def _slice(_x, n, dim):
if _x.ndim == 3:
return _x[:, :, n * dim:(n + 1) * dim]
return _x[:, n * dim:(n + 1) * dim]
def _step(m_, x_, h_):
preact = tensor.dot(h_, tparams[_p(prefix, 'U')])
preact += x_
r = tensor.nnet.sigmoid(_slice(preact, 0, options['dim_proj'])) # reset gate
u = tensor.nnet.sigmoid(_slice(preact, 1, options['dim_proj'])) # update gate
U_h_t = _slice( tparams[_p(prefix, 'U')], 2, options['dim_proj'])
x_h_t = _slice( x_, 2, options['dim_proj'])
h_t_temp = tensor.tanh(tensor.dot(r*h_, U_h_t) + x_h_t)
h = (1. - u) * h_ + u * h_t_temp
h = m_[:,None] * h + (1. - m_)[:,None] * h_
return h
state_below = (tensor.dot(state_below, tparams[_p(prefix, 'W')]) +
tparams[_p(prefix, 'b')])
dim_proj = options['dim_proj']
rval, updates = theano.scan(_step,
sequences=[mask, state_below],
outputs_info=[tensor.alloc(numpy_floatX(0.),
n_samples,
dim_proj)],
name=_p(prefix, '_layers'),
n_steps=nsteps)
return rval[0]
``` | The issue comes from the last line, `return rval[0]`: it should instead be `return rval`.
The LSTM code provided in the official Theano tutorial (<http://deeplearning.net/tutorial/code/lstm.py>) uses `return rval[0]` because `outputs_info` contains 2 elements:
```
rval, updates = theano.scan(_step,
sequences=[mask, state_below],
outputs_info=[tensor.alloc(numpy_floatX(0.),
n_samples,
dim_proj),
tensor.alloc(numpy_floatX(0.),
n_samples,
dim_proj)],
name=_p(prefix, '_layers'),
n_steps=nsteps)
return rval[0]
```
In the GRU, `outputs_info` contains just one element:
```
outputs_info=[tensor.alloc(numpy_floatX(0.),
n_samples,
dim_proj)],
```
and despite the brackets, `theano.scan` won't return a list of Theano variables representing the outputs, but a Theano variable directly.
The `rval` is then fed to a pooling layer (in this case, a mean pooling layer):
[](http://i.stack.imgur.com/vEQW8.png)
By taking only `rval[0]` in the GRU, since in the GRU code `rval` is a Theano variable and not a list of Theano variables, you removed the part in the red rectangle:
[](http://i.stack.imgur.com/q7WMZ.png)
which means you tried to perform the sentence classification just using the first word.
---
Another GRU implementation that can be plugged into the LSTM tutorial:
```
# weight initializer, normal by default
def norm_weight(nin, nout=None, scale=0.01, ortho=True):
if nout is None:
nout = nin
if nout == nin and ortho:
W = ortho_weight(nin)
else:
W = scale * numpy.random.randn(nin, nout)
return W.astype('float32')
def param_init_lstm(options, params, prefix='lstm'):
"""
GRU. Source: https://github.com/kyunghyuncho/dl4mt-material/blob/master/session0/lm.py
"""
nin = options['dim_proj']
dim = options['dim_proj']
# embedding to gates transformation weights, biases
W = numpy.concatenate([norm_weight(nin, dim),
norm_weight(nin, dim)], axis=1)
params[_p(prefix, 'W')] = W
params[_p(prefix, 'b')] = numpy.zeros((2 * dim,)).astype('float32')
# recurrent transformation weights for gates
U = numpy.concatenate([ortho_weight(dim),
ortho_weight(dim)], axis=1)
params[_p(prefix, 'U')] = U
# embedding to hidden state proposal weights, biases
Wx = norm_weight(nin, dim)
params[_p(prefix, 'Wx')] = Wx
params[_p(prefix, 'bx')] = numpy.zeros((dim,)).astype('float32')
# recurrent transformation weights for hidden state proposal
Ux = ortho_weight(dim)
params[_p(prefix, 'Ux')] = Ux
return params
def lstm_layer(tparams, state_below, options, prefix='lstm', mask=None):
nsteps = state_below.shape[0]
if state_below.ndim == 3:
n_samples = state_below.shape[1]
else:
n_samples = state_below.shape[0]
dim = tparams[_p(prefix, 'Ux')].shape[1]
if mask is None:
mask = tensor.alloc(1., state_below.shape[0], 1)
# utility function to slice a tensor
def _slice(_x, n, dim):
if _x.ndim == 3:
return _x[:, :, n*dim:(n+1)*dim]
return _x[:, n*dim:(n+1)*dim]
# state_below is the input word embeddings
# input to the gates, concatenated
state_below_ = tensor.dot(state_below, tparams[_p(prefix, 'W')]) + \
tparams[_p(prefix, 'b')]
# input to compute the hidden state proposal
state_belowx = tensor.dot(state_below, tparams[_p(prefix, 'Wx')]) + \
tparams[_p(prefix, 'bx')]
# step function to be used by scan
# arguments | sequences |outputs-info| non-seqs
def _step_slice(m_, x_, xx_, h_, U, Ux):
preact = tensor.dot(h_, U)
preact += x_
# reset and update gates
r = tensor.nnet.sigmoid(_slice(preact, 0, dim))
u = tensor.nnet.sigmoid(_slice(preact, 1, dim))
# compute the hidden state proposal
preactx = tensor.dot(h_, Ux)
preactx = preactx * r
preactx = preactx + xx_
# hidden state proposal
h = tensor.tanh(preactx)
# leaky integrate and obtain next hidden state
h = u * h_ + (1. - u) * h
h = m_[:, None] * h + (1. - m_)[:, None] * h_
return h
# prepare scan arguments
seqs = [mask, state_below_, state_belowx]
_step = _step_slice
shared_vars = [tparams[_p(prefix, 'U')],
tparams[_p(prefix, 'Ux')]]
init_state = tensor.unbroadcast(tensor.alloc(0., n_samples, dim), 0)
rval, updates = theano.scan(_step,
sequences=seqs,
outputs_info=[init_state],
non_sequences=shared_vars,
name=_p(prefix, '_layers'),
n_steps=nsteps,
strict=True)
return rval
```
---
As a side note, Keras fixed this issue as [follows](https://github.com/fchollet/keras/blob/master/keras/backend/theano_backend.py#L433):
```
results, _ = theano.scan(
_step,
sequences=inputs,
outputs_info=[None] + initial_states,
go_backwards=go_backwards)
# deal with Theano API inconsistency
if type(results) is list:
outputs = results[0]
states = results[1:]
else:
outputs = results
states = []
``` |
Is this time complexity actually O(n^2)? | 34,008,010 | 64 | 2015-11-30T21:06:28Z | 34,008,199 | 58 | 2015-11-30T21:20:14Z | [
"python",
"string",
"algorithm",
"string-concatenation"
] | I am working on a problem out of CTCI.
The third problem of chapter 1 has you take a string such as
`'Mr John Smith '`
and asks you to replace the intermediary spaces with `%20`:
`'Mr%20John%20Smith'`
The author offers this solution in Python, calling it O(n):
```
def urlify(string, length):
'''function replaces single spaces with %20 and removes trailing spaces'''
counter = 0
output = ''
for char in string:
counter += 1
if counter > length:
return output
elif char == ' ':
output = output + '%20'
elif char != ' ':
output = output + char
return output
```
My question:
I understand that this is O(n) in terms of scanning through the actual string from left to right. But aren't strings in Python immutable? If I have a string and I add another string to it with the `+` operator, doesn't it allocate the necessary space, copy over the original, and then copy over the appending string?
If I have a collection of `n` strings each of length 1, then that takes:
`1 + 2 + 3 + 4 + 5 + ... + n = n(n+1)/2`
or O(n^2) time, yes? Or am I mistaken in how Python handles appending?
Alternatively, if you'd be willing to teach me how to fish: How would I go about finding this out for myself? I've been unsuccessful in my attempts to Google an official source. I found <https://wiki.python.org/moin/TimeComplexity> but this doesn't have anything on strings. | In CPython, the standard implementation of Python, there's an implementation detail that makes this usually O(n), implemented in [the code the bytecode evaluation loop calls for `+` or `+=` with two string operands](https://hg.python.org/cpython/file/2.7/Python/ceval.c#l5109). If Python detects that the left argument has no other references, it calls `realloc` to attempt to avoid a copy by resizing the string in place. This is not something you should ever rely on, because it's an implementation detail and because if `realloc` ends up needing to move the string frequently, performance degrades to O(n^2) anyway.
Without the weird implementation detail, the algorithm is O(n^2) due to the quadratic amount of copying involved. Code like this would only make sense in a language with mutable strings, like C++, and even in C++ you'd want to use `+=`. |
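To see where the `n(n+1)/2` figure from the question comes from, here is a toy model (an illustration of the copy-everything assumption, not a measurement of CPython's actual behaviour) that counts copied characters:

```python
# Toy model: build an n-character string one `+` at a time, assuming each
# concatenation copies the whole existing buffer plus the new character.
def chars_copied_by_concat(n):
    copied = 0
    length = 0
    for _ in range(n):
        copied += length + 1  # copy the old buffer, then the appended char
        length += 1
    return copied

chars_copied_by_concat(1000)  # 500500 == 1000 * 1001 // 2
```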
Is this time complexity actually O(n^2)? | 34,008,010 | 64 | 2015-11-30T21:06:28Z | 34,008,289 | 19 | 2015-11-30T21:26:11Z | [
"python",
"string",
"algorithm",
"string-concatenation"
] | I am working on a problem out of CTCI.
The third problem of chapter 1 has you take a string such as
`'Mr John Smith '`
and asks you to replace the intermediary spaces with `%20`:
`'Mr%20John%20Smith'`
The author offers this solution in Python, calling it O(n):
```
def urlify(string, length):
'''function replaces single spaces with %20 and removes trailing spaces'''
counter = 0
output = ''
for char in string:
counter += 1
if counter > length:
return output
elif char == ' ':
output = output + '%20'
elif char != ' ':
output = output + char
return output
```
My question:
I understand that this is O(n) in terms of scanning through the actual string from left to right. But aren't strings in Python immutable? If I have a string and I add another string to it with the `+` operator, doesn't it allocate the necessary space, copy over the original, and then copy over the appending string?
If I have a collection of `n` strings each of length 1, then that takes:
`1 + 2 + 3 + 4 + 5 + ... + n = n(n+1)/2`
or O(n^2) time, yes? Or am I mistaken in how Python handles appending?
Alternatively, if you'd be willing to teach me how to fish: How would I go about finding this out for myself? I've been unsuccessful in my attempts to Google an official source. I found <https://wiki.python.org/moin/TimeComplexity> but this doesn't have anything on strings. | I found this snippet of text on [Python Speed > Use the best algorithms and fastest tools](https://wiki.python.org/moin/PythonSpeed):
> String concatenation is best done with `''.join(seq)` which is an `O(n)` process. In contrast, using the `'+'` or `'+='` operators can result in an `O(n^2)` process because new strings may be built for each intermediate step. The CPython 2.4 interpreter mitigates this issue somewhat; however, `''.join(seq)` remains the best practice |
Is this time complexity actually O(n^2)? | 34,008,010 | 64 | 2015-11-30T21:06:28Z | 34,008,322 | 26 | 2015-11-30T21:28:37Z | [
"python",
"string",
"algorithm",
"string-concatenation"
] | I am working on a problem out of CTCI.
The third problem of chapter 1 has you take a string such as
`'Mr John Smith '`
and asks you to replace the intermediary spaces with `%20`:
`'Mr%20John%20Smith'`
The author offers this solution in Python, calling it O(n):
```
def urlify(string, length):
'''function replaces single spaces with %20 and removes trailing spaces'''
counter = 0
output = ''
for char in string:
counter += 1
if counter > length:
return output
elif char == ' ':
output = output + '%20'
elif char != ' ':
output = output + char
return output
```
My question:
I understand that this is O(n) in terms of scanning through the actual string from left to right. But aren't strings in Python immutable? If I have a string and I add another string to it with the `+` operator, doesn't it allocate the necessary space, copy over the original, and then copy over the appending string?
If I have a collection of `n` strings each of length 1, then that takes:
`1 + 2 + 3 + 4 + 5 + ... + n = n(n+1)/2`
or O(n^2) time, yes? Or am I mistaken in how Python handles appending?
Alternatively, if you'd be willing to teach me how to fish: How would I go about finding this out for myself? I've been unsuccessful in my attempts to Google an official source. I found <https://wiki.python.org/moin/TimeComplexity> but this doesn't have anything on strings. | The author relies on an optimization that happens to be present, but is not explicitly dependable. `strA = strB + strC` is typically `O(n)`, making the function `O(n^2)`. However, it is pretty easy to make sure the whole process is `O(n)`: use an array:
```
output = []
# ... loop thing
output.append('%20')
# ...
output.append(char)
# ...
return ''.join(output)
```
In a nutshell, the `append` operation is *amortized* `O(1)` (although you can make it a strict `O(1)` by pre-allocating the array to the right size), making the loop `O(n)`.
And then the `join` is also `O(n)`, but that's okay because it is outside the loop. |
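Applied to the question's problem, the list-plus-`join` version of `urlify` might look like this (a sketch that keeps the original truncate-at-`length` behaviour):

```python
def urlify(string, length):
    '''Replace spaces with %20, considering only the first `length` characters.'''
    out = []
    for char in string[:length]:
        out.append('%20' if char == ' ' else char)
    return ''.join(out)  # single O(n) join, outside the loop

urlify('Mr John Smith    ', 13)  # 'Mr%20John%20Smith'
```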
Is "x < y < z" faster than "x < y and y < z"? | 34,014,906 | 127 | 2015-12-01T07:31:35Z | 34,015,448 | 108 | 2015-12-01T08:06:24Z | [
"python",
"performance"
] | From [this page](https://wiki.python.org/moin/PythonSpeed#Take_advantage_of_interpreter_optimizations), we know that:
> Chained comparisons are faster than using the `and` operator.
> Write `x < y < z` instead of `x < y and y < z`.
However, I got a different result testing the following code snippets:
```
$ python -m timeit "x = 1.2" "y = 1.3" "z = 1.8" "x < y < z"
1000000 loops, best of 3: 0.322 usec per loop
$ python -m timeit "x = 1.2" "y = 1.3" "z = 1.8" "x < y and y < z"
1000000 loops, best of 3: 0.22 usec per loop
$ python -m timeit "x = 1.2" "y = 1.3" "z = 1.1" "x < y < z"
1000000 loops, best of 3: 0.279 usec per loop
$ python -m timeit "x = 1.2" "y = 1.3" "z = 1.1" "x < y and y < z"
1000000 loops, best of 3: 0.215 usec per loop
```
It seems that `x < y and y < z` is faster than `x < y < z`. **Why?**
After searching some posts in this site (like [this one](http://stackoverflow.com/questions/1664292/what-does-evaluated-only-once-mean-for-chained-comparisons-in-python)) I know that "evaluated only once" is the key for `x < y < z`, however I'm still confused. To do further study, I disassembled these two functions using `dis.dis`:
```
import dis
def chained_compare():
x = 1.2
y = 1.3
z = 1.1
x < y < z
def and_compare():
x = 1.2
y = 1.3
z = 1.1
x < y and y < z
dis.dis(chained_compare)
dis.dis(and_compare)
```
And the output is:
```
## chained_compare ##
4 0 LOAD_CONST 1 (1.2)
3 STORE_FAST 0 (x)
5 6 LOAD_CONST 2 (1.3)
9 STORE_FAST 1 (y)
6 12 LOAD_CONST 3 (1.1)
15 STORE_FAST 2 (z)
7 18 LOAD_FAST 0 (x)
21 LOAD_FAST 1 (y)
24 DUP_TOP
25 ROT_THREE
26 COMPARE_OP 0 (<)
29 JUMP_IF_FALSE_OR_POP 41
32 LOAD_FAST 2 (z)
35 COMPARE_OP 0 (<)
38 JUMP_FORWARD 2 (to 43)
>> 41 ROT_TWO
42 POP_TOP
>> 43 POP_TOP
44 LOAD_CONST 0 (None)
47 RETURN_VALUE
## and_compare ##
10 0 LOAD_CONST 1 (1.2)
3 STORE_FAST 0 (x)
11 6 LOAD_CONST 2 (1.3)
9 STORE_FAST 1 (y)
12 12 LOAD_CONST 3 (1.1)
15 STORE_FAST 2 (z)
13 18 LOAD_FAST 0 (x)
21 LOAD_FAST 1 (y)
24 COMPARE_OP 0 (<)
27 JUMP_IF_FALSE_OR_POP 39
30 LOAD_FAST 1 (y)
33 LOAD_FAST 2 (z)
36 COMPARE_OP 0 (<)
>> 39 POP_TOP
40 LOAD_CONST 0 (None)
```
It seems that `x < y and y < z` has fewer disassembled instructions than `x < y < z`. Should I consider `x < y and y < z` faster than `x < y < z`?
Tested with Python 2.7.6 on an Intel(R) Xeon(R) CPU E5640 @ 2.67GHz. | The difference is that in `x < y < z` `y` is only evaluated once. This does not make a large difference if y is a variable, but it does when it is a function call, which takes some time to compute.
```
from time import sleep
def y():
sleep(.2)
return 1.3
%timeit 1.2 < y() < 1.8
10 loops, best of 3: 203 ms per loop
%timeit 1.2 < y() and y() < 1.8
1 loops, best of 3: 405 ms per loop
``` |
Is "x < y < z" faster than "x < y and y < z"? | 34,014,906 | 127 | 2015-12-01T07:31:35Z | 34,015,812 | 8 | 2015-12-01T08:28:02Z | [
"python",
"performance"
] | From [this page](https://wiki.python.org/moin/PythonSpeed#Take_advantage_of_interpreter_optimizations), we know that:
> Chained comparisons are faster than using the `and` operator.
> Write `x < y < z` instead of `x < y and y < z`.
However, I got a different result testing the following code snippets:
```
$ python -m timeit "x = 1.2" "y = 1.3" "z = 1.8" "x < y < z"
1000000 loops, best of 3: 0.322 usec per loop
$ python -m timeit "x = 1.2" "y = 1.3" "z = 1.8" "x < y and y < z"
1000000 loops, best of 3: 0.22 usec per loop
$ python -m timeit "x = 1.2" "y = 1.3" "z = 1.1" "x < y < z"
1000000 loops, best of 3: 0.279 usec per loop
$ python -m timeit "x = 1.2" "y = 1.3" "z = 1.1" "x < y and y < z"
1000000 loops, best of 3: 0.215 usec per loop
```
It seems that `x < y and y < z` is faster than `x < y < z`. **Why?**
After searching some posts in this site (like [this one](http://stackoverflow.com/questions/1664292/what-does-evaluated-only-once-mean-for-chained-comparisons-in-python)) I know that "evaluated only once" is the key for `x < y < z`, however I'm still confused. To do further study, I disassembled these two functions using `dis.dis`:
```
import dis
def chained_compare():
x = 1.2
y = 1.3
z = 1.1
x < y < z
def and_compare():
x = 1.2
y = 1.3
z = 1.1
x < y and y < z
dis.dis(chained_compare)
dis.dis(and_compare)
```
And the output is:
```
## chained_compare ##
4 0 LOAD_CONST 1 (1.2)
3 STORE_FAST 0 (x)
5 6 LOAD_CONST 2 (1.3)
9 STORE_FAST 1 (y)
6 12 LOAD_CONST 3 (1.1)
15 STORE_FAST 2 (z)
7 18 LOAD_FAST 0 (x)
21 LOAD_FAST 1 (y)
24 DUP_TOP
25 ROT_THREE
26 COMPARE_OP 0 (<)
29 JUMP_IF_FALSE_OR_POP 41
32 LOAD_FAST 2 (z)
35 COMPARE_OP 0 (<)
38 JUMP_FORWARD 2 (to 43)
>> 41 ROT_TWO
42 POP_TOP
>> 43 POP_TOP
44 LOAD_CONST 0 (None)
47 RETURN_VALUE
## and_compare ##
10 0 LOAD_CONST 1 (1.2)
3 STORE_FAST 0 (x)
11 6 LOAD_CONST 2 (1.3)
9 STORE_FAST 1 (y)
12 12 LOAD_CONST 3 (1.1)
15 STORE_FAST 2 (z)
13 18 LOAD_FAST 0 (x)
21 LOAD_FAST 1 (y)
24 COMPARE_OP 0 (<)
27 JUMP_IF_FALSE_OR_POP 39
30 LOAD_FAST 1 (y)
33 LOAD_FAST 2 (z)
36 COMPARE_OP 0 (<)
>> 39 POP_TOP
40 LOAD_CONST 0 (None)
```
It seems that `x < y and y < z` has fewer disassembled instructions than `x < y < z`. Should I consider `x < y and y < z` faster than `x < y < z`?
Tested with Python 2.7.6 on an Intel(R) Xeon(R) CPU E5640 @ 2.67GHz. | Since the difference in the output seems to be due to a lack of optimization, I think you should ignore that difference in most cases - it could be that the difference will go away. The difference arises because `y` should only be evaluated once, and that is solved by duplicating it on the stack, which requires an extra `POP_TOP` - a solution using `LOAD_FAST` might be possible though.
The important difference, though, is that in `x<y and y<z`, `y` is evaluated twice if `x<y` evaluates to true; this has implications if the evaluation of `y` takes considerable time or has side effects.
In most scenarios you should use `x<y<z` despite the fact it's somewhat slower. |
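The side-effect point is easy to demonstrate by counting evaluations (a small sketch):

```python
calls = []

def y():
    calls.append(1)  # side effect: record every evaluation
    return 1.3

1.2 < y() < 1.8          # chained form: y() is evaluated once
1.2 < y() and y() < 1.8  # `and` form: y() is evaluated twice (first test is true)

len(calls)  # 3
```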
Is "x < y < z" faster than "x < y and y < z"? | 34,014,906 | 127 | 2015-12-01T07:31:35Z | 34,023,747 | 21 | 2015-12-01T15:17:28Z | [
"python",
"performance"
] | From [this page](https://wiki.python.org/moin/PythonSpeed#Take_advantage_of_interpreter_optimizations), we know that:
> Chained comparisons are faster than using the `and` operator.
> Write `x < y < z` instead of `x < y and y < z`.
However, I got a different result testing the following code snippets:
```
$ python -m timeit "x = 1.2" "y = 1.3" "z = 1.8" "x < y < z"
1000000 loops, best of 3: 0.322 usec per loop
$ python -m timeit "x = 1.2" "y = 1.3" "z = 1.8" "x < y and y < z"
1000000 loops, best of 3: 0.22 usec per loop
$ python -m timeit "x = 1.2" "y = 1.3" "z = 1.1" "x < y < z"
1000000 loops, best of 3: 0.279 usec per loop
$ python -m timeit "x = 1.2" "y = 1.3" "z = 1.1" "x < y and y < z"
1000000 loops, best of 3: 0.215 usec per loop
```
It seems that `x < y and y < z` is faster than `x < y < z`. **Why?**
After searching some posts in this site (like [this one](http://stackoverflow.com/questions/1664292/what-does-evaluated-only-once-mean-for-chained-comparisons-in-python)) I know that "evaluated only once" is the key for `x < y < z`, however I'm still confused. To do further study, I disassembled these two functions using `dis.dis`:
```
import dis
def chained_compare():
x = 1.2
y = 1.3
z = 1.1
x < y < z
def and_compare():
x = 1.2
y = 1.3
z = 1.1
x < y and y < z
dis.dis(chained_compare)
dis.dis(and_compare)
```
And the output is:
```
## chained_compare ##
4 0 LOAD_CONST 1 (1.2)
3 STORE_FAST 0 (x)
5 6 LOAD_CONST 2 (1.3)
9 STORE_FAST 1 (y)
6 12 LOAD_CONST 3 (1.1)
15 STORE_FAST 2 (z)
7 18 LOAD_FAST 0 (x)
21 LOAD_FAST 1 (y)
24 DUP_TOP
25 ROT_THREE
26 COMPARE_OP 0 (<)
29 JUMP_IF_FALSE_OR_POP 41
32 LOAD_FAST 2 (z)
35 COMPARE_OP 0 (<)
38 JUMP_FORWARD 2 (to 43)
>> 41 ROT_TWO
42 POP_TOP
>> 43 POP_TOP
44 LOAD_CONST 0 (None)
47 RETURN_VALUE
## and_compare ##
10 0 LOAD_CONST 1 (1.2)
3 STORE_FAST 0 (x)
11 6 LOAD_CONST 2 (1.3)
9 STORE_FAST 1 (y)
12 12 LOAD_CONST 3 (1.1)
15 STORE_FAST 2 (z)
13 18 LOAD_FAST 0 (x)
21 LOAD_FAST 1 (y)
24 COMPARE_OP 0 (<)
27 JUMP_IF_FALSE_OR_POP 39
30 LOAD_FAST 1 (y)
33 LOAD_FAST 2 (z)
36 COMPARE_OP 0 (<)
>> 39 POP_TOP
40 LOAD_CONST 0 (None)
```
It seems that `x < y and y < z` has fewer disassembled instructions than `x < y < z`. Should I consider `x < y and y < z` faster than `x < y < z`?
Tested with Python 2.7.6 on an Intel(R) Xeon(R) CPU E5640 @ 2.67GHz. | *Optimal* bytecode for both of the functions you defined would be
```
0 LOAD_CONST 0 (None)
3 RETURN_VALUE
```
because the result of the comparison is not used. Let's make the situation more interesting by returning the result of the comparison. Let's also have the result not be knowable at compile time.
```
def interesting_compare(y):
x = 1.1
z = 1.3
return x < y < z # or: x < y and y < z
```
Again, the two versions of the comparison are semantically identical, so the *optimal* bytecode is the same for both constructs. As best I can work it out, it would look like this. I've annotated each line with the stack contents before and after each opcode, in Forth notation (top of stack at right, `--` divides before and after, trailing `?` indicates something that might or might not be there). Note that `RETURN_VALUE` discards everything that happens to be left on the stack underneath the value returned.
```
0 LOAD_FAST 0 (y) ; -- y
3 DUP_TOP ; y -- y y
4 LOAD_CONST 0 (1.1) ; y y -- y y 1.1
7 COMPARE_OP 4 (>) ; y y 1.1 -- y pred
10 JUMP_IF_FALSE_OR_POP 19 ; y pred -- y
13 LOAD_CONST 1 (1.3) ; y -- y 1.3
16 COMPARE_OP 0 (<) ; y 1.3 -- pred
>> 19 RETURN_VALUE ; y? pred --
```
If an implementation of the language, CPython, PyPy, whatever, does not generate this bytecode (or its own equivalent sequence of operations) for both variations, *that demonstrates the poor quality of that bytecode compiler*. Getting from the bytecode sequences you posted to the above is a solved problem (I think all you need for this case is [constant folding](https://en.wikipedia.org/wiki/Constant_folding), [dead code elimination](https://en.wikipedia.org/wiki/Dead_code_elimination), and better modeling of the contents of the stack; [common subexpression elimination](https://en.wikipedia.org/wiki/Common_subexpression_elimination) would also be cheap and valuable), and there's really no excuse for not doing it in a modern language implementation.
Now, it happens that all current implementations of the language have poor-quality bytecode compilers. But you should *ignore* that while coding! Pretend the bytecode compiler is good, and write the most *readable* code. It will probably be plenty fast enough anyway. If it isn't, look for algorithmic improvements first, and give [Cython](http://cython.org/) a try second -- that will provide far more improvement for the same effort than any expression-level tweaks you might apply. |
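For the curious, you can inspect what your own interpreter generates (a sketch; the exact opcode sequence varies across CPython versions, so no particular listing is assumed):

```python
import contextlib
import dis
import io

def chained(x, y, z):
    return x < y < z

# Capture the bytecode listing as a string instead of printing it.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    dis.dis(chained)
listing = buf.getvalue()
# Whatever stack-shuffling opcodes surround them, both comparisons
# in the chain still go through COMPARE_OP.
```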
Python reversing an UTF-8 string | 34,015,615 | 4 | 2015-12-01T08:17:31Z | 34,015,656 | 9 | 2015-12-01T08:19:49Z | [
"python",
"string",
"utf-8",
"character",
"reverse"
] | I'm currently learning Python and as a Slovenian I often use UTF-8 characters to test my programs. Normally everything works fine, but there is one catch that I can't overtake. Even though I've got encoding declared on the top of the file it fails when I try to reverse a string containing special characters
```
#-*- coding: utf-8 -*-
a = "čšž"
print a #prints čšž
b = a[::-1]
print b #prints �šō� instead of žšč
```
Is there any way to fix that? | Python 2 strings are *byte strings*, and UTF-8 encoded text uses multiple bytes per character. Just because your terminal manages to interpret the UTF-8 bytes as characters, doesn't mean that Python knows about what bytes form one UTF-8 character.
Your bytestring consists of 6 bytes, every two bytes form one character:
```
>>> a = "Äšž"
>>> a
'\xc4\x8d\xc5\xa1\xc5\xbe'
```
However, how many bytes UTF-8 uses depends on where in the Unicode standard the character is defined; ASCII characters (the first 128 characters in the Unicode standard) only need 1 byte each, and many emoji need 4 bytes!
In UTF-8 order is *everything*; reversing the above bytestring reverses the bytes, resulting in some gibberish as far as the UTF-8 standard is concerned, but the middle 4 bytes just *happen* to be valid UTF-8 sequences (for `š` and `ō`):
```
>>> a[::-1]
'\xbe\xc5\xa1\xc5\x8d\xc4'
-----~~~~~~~~^^^^^^^^####
  |      š       ō      |
  \                      \
   invalid UTF-8 byte     opening UTF-8 byte missing a second byte
```
You'd have to decode the byte string to a `unicode` object, which consists of single characters. Reversing that object gives you the right results:
```
b = a.decode('utf8')[::-1]
print b
```
You can always *encode* the object back to UTF-8 again:
```
b = a.decode('utf8')[::-1].encode('utf8')
```
Note that in Unicode, you can still run into issues when reversing text, when [*combining characters*](https://en.wikipedia.org/wiki/Combining_character) are used. Reversing text with combining characters places those combining characters in front rather than after the character they combine with, so they'll combine with the wrong character instead:
```
>>> print u'e\u0301a'
éa
>>> print u'e\u0301a'[::-1]
áe
```
You can mostly avoid this by converting the Unicode data to its normalised form (which replaces combinations with 1-codepoint forms), but there are plenty of other exotic Unicode characters that don't interact well with string reversals.
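In Python 3 the normalisation idea looks like this (a sketch using the standard `unicodedata` module; NFC composes `e` + combining acute into the single codepoint `é` before the reversal):

```python
import unicodedata

s = 'e\u0301a'  # 'é' written as base letter + combining acute accent, then 'a'

naive = s[::-1]  # 'a\u0301e': the combining mark now attaches to the 'a'
safe = unicodedata.normalize('NFC', s)[::-1]  # '\u00e9a' reversed -> 'a\u00e9'
```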
SKlearn import MLPClassifier fails | 34,016,238 | 24 | 2015-12-01T08:54:19Z | 34,020,223 | 29 | 2015-12-01T12:14:39Z | [
"python",
"scikit-learn",
"neural-network"
] | I am trying to use the [multilayer perceptron](http://scikit-learn.org/dev/modules/neural_networks_supervised.html#classification) from scikit-learn in python. My problem is, that the import is not working. All other modules from scikit-learn are working fine.
```
from sklearn.neural_network import MLPClassifier
```
> Import Error: cannot import name MLPClassifier
I'm using the Python Environment Python64-bit 3.4 in Visual Studio 2015.
I installed sklearn over the console with: `conda install scikit-learn`
I also installed numpy and pandas. After I had the error above I also installed [scikit-neuralnetwork](http://scikit-neuralnetwork.readthedocs.org/en/latest/index.html) with: `pip install scikit-neuralnetwork`
The installed scikit-learn version is 0.17.
What have I done wrong? Am I missing an installation?
----- EDIT ----
In addition to the answer of tttthomasssss, I found the solution on how to install the sknn library for neuronal networks. I followed this [tutorial](http://deeplearning.net/software/theano/install_windows.html).
Do the following steps:
* `pip install scikit-neuralnetwork`
* download and install the [GCC compiler](http://tdm-gcc.tdragon.net/)
* install mingw with `conda install mingw libpython`
You can use the [sknn](http://scikit-neuralnetwork.readthedocs.org/en/latest/guide_beginners.html#example-regression) library afterwards. | `MLPClassifier` is not yet available in `scikit-learn` v0.17 (as of 1 Dec 2015). If you really want to use it you could clone `0.18dev` (however, I don't know how stable this branch currently is). |
How to reverse a string using recursion? | 34,023,003 | 4 | 2015-12-01T14:39:16Z | 34,023,052 | 8 | 2015-12-01T14:41:31Z | [
"python",
"string",
"python-2.7",
"recursion"
] | I'm trying out a simple program which would allow me to print out the reverse word of "computer". When I run my code, I received a runtime error *RuntimeError: maximum recursion depth exceeded in cmp* .
May I know what had happen and how can I solve it?
```
def reverse(str1):
if str1 == '':
return str1
else:
return reverse(str1[1:] + str1[0])
print reverse('retupmoc')
``` | The problem is here,
```
return reverse(str1[1:] + str1[0])
```
You are concatenating the rest of the string with the first character and passing to the `reverse` function. So, the length of the string never reduces.
It should have been
```
return reverse(str1[1:]) + str1[0]
```
Now, you are passing only the rest of the string, excluding the first character to the recursive `reverse` function. So, on each recursive level, one character will be removed from the string and it will eventually meet your base condition. |
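With the parenthesis moved, the corrected function terminates, because every call shortens the string by one character:

```python
def reverse(str1):
    if str1 == '':
        return str1
    # Recurse on the tail only, then append the head character.
    return reverse(str1[1:]) + str1[0]

reverse('retupmoc')  # 'computer'
```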
Cumulative counts in NumPy without iteration | 34,027,288 | 6 | 2015-12-01T18:16:08Z | 34,027,666 | 8 | 2015-12-01T18:38:48Z | [
"python",
"numpy"
] | I have an array like so:
```
a = np.array([0.1, 0.2, 1.0, 1.0, 1.0, 0.9, 0.6, 1.0, 0.0, 1.0])
```
I'd like to have a running counter of **instances of 1.0** that **resets when it encounters a 0.0**, so the result would be:
```
[0, 0, 1, 2, 3, 3, 3, 4, 0, 1]
```
My initial thought was to use something like b = np.cumsum(a[a==1.0]), but I don't know how to (1) modify this to reset at zeros or (2) quite how to structure it so the output array is the same shape as the input array. Any ideas how to do this without iteration? | I think you could do something like
```
def rcount(a):
without_reset = (a == 1).cumsum()
reset_at = (a == 0)
overcount = np.maximum.accumulate(without_reset * reset_at)
result = without_reset - overcount
return result
```
which gives me
```
>>> a = np.array([0.1, 0.2, 1.0, 1.0, 1.0, 0.9, 0.6, 1.0, 0.0, 1.0])
>>> rcount(a)
array([0, 0, 1, 2, 3, 3, 3, 4, 0, 1])
```
This works because we can use the cumulative maximum to figure out the "overcount":
```
>>> without_reset * reset_at
array([0, 0, 0, 0, 0, 0, 0, 0, 4, 0])
>>> np.maximum.accumulate(without_reset * reset_at)
array([0, 0, 0, 0, 0, 0, 0, 0, 4, 4])
```
---
Sanity testing:
```
def manual(arr):
out = []
count = 0
for x in arr:
if x == 1:
count += 1
if x == 0:
count = 0
out.append(count)
return out
def test():
for w in [1, 2, 10, 10**4]:
for trial in range(100):
for vals in [0,1],[0,1,2]:
b = np.random.choice(vals, size=w)
assert (rcount(b) == manual(b)).all()
print("hooray!")
```
and then
```
>>> test()
hooray!
``` |
How does the min/max function on a nested list work? | 34,050,113 | 19 | 2015-12-02T18:17:45Z | 34,115,735 | 30 | 2015-12-06T09:04:15Z | [
"python",
"list",
"python-2.7",
"python-2.x",
"nested-lists"
] | Lets say, there is a nested list, like:
```
my_list = [[1, 2, 21], [1, 3], [1, 2]]
```
When the function `min()` is called on this:
```
min(my_list)
```
The output received is
```
[1, 2]
```
Why and How does it work? What are some use cases of it? | ## ***How are lists and other sequences compared in Python?***
Lists (and other sequences) in Python are compared [lexicographically](https://docs.python.org/2/tutorial/datastructures.html#comparing-sequences-and-other-types) and not based on any other parameter.
> Sequence objects may be compared to other objects with the same sequence type. The comparison uses *lexicographical* ordering: first the first two items are compared, and if they differ this determines the outcome of the comparison; if they are equal, the next two items are compared, and so on, until either sequence is exhausted.
---
## ***What is lexicographic sorting?***
From the Wikipedia page on [lexicographic sorting](https://en.wikipedia.org/wiki/Lexicographical_order)
> lexicographic or lexicographical order (also known as lexical order, dictionary order, alphabetical order or lexicographic(al) product) is a generalization of the way the alphabetical order of words is based on the alphabetical order of their component letters.
The [`min`](https://docs.python.org/2/library/functions.html#min) function returns the smallest value in the *iterable*. So the lexicographic value of `[1,2]` is the least in that list. You can check by using `[1,2,21]`
```
>>> my_list=[[1,2,21],[1,3],[1,2]]
>>> min(my_list)
[1, 2]
```
---
## ***What is happening in this case of `min`?***
Going element-wise on `my_list`, we first compare `[1,2,21]` and `[1,3]`. Now from the docs
> If two items to be compared are themselves ***sequences of the same type***, the lexicographical comparison is carried out ***recursively***.
Thus the value of `[1,2,21]` is less than `[1,3]`, because the second element of `[1,3]`, which is `3`, is *lexicographically higher* than the second element of `[1,2,21]`, which is `2`.
Now comparing `[1,2]` and `[1,2,21]`, and adding another reference from the docs
> If one sequence is an ***initial sub-sequence*** of the other, the ***shorter sequence is the smaller*** (lesser) one.
`[1,2]` is an initial sub-sequence of `[1,2,21]`. Therefore the value of `[1,2]` on the whole is smaller than that of `[1,2,21]`. Hence `[1,2]` is returned as the output.
This can be validated by using the [`sorted`](https://docs.python.org/2/library/functions.html#sorted) function
```
>>> sorted(my_list)
[[1, 2], [1, 2, 21], [1, 3]]
```
---
## *What if the list has multiple minimum elements?*
If the list contains duplicate minimum elements, *the first is returned*:
```
>>> my_list=[[1,2],[1,2]]
>>> min(my_list)
[1, 2]
```
This can be confirmed using the `id` function call
```
>>> my_list=[[1,2],[1,2]]
>>> [id(i) for i in my_list]
[140297364849368, 140297364850160]
>>> id(min(my_list))
140297364849368
```
---
## *What do I need to do to prevent lexicographic comparison in `min`?*
If the required comparison is *not lexicographic* then the `key` argument can be used (as mentioned by [Padraic](http://stackoverflow.com/questions/34050113/how-and-why-min-max-function-on-a-nested-list-works/34115735#comment56040029_34115735))
The `min` function has an *additional optional argument* called `key`. The `key` argument takes a function.
> The optional key argument specifies a one-argument ordering function
> like that used for `list.sort()`. The key argument, if supplied, must be
> in keyword form (for example, `min(a,b,c,key=func)`).
For example, if we need the *smallest element by length,* we need to use the [`len`](https://docs.python.org/2/library/functions.html#len) function.
```
>>> my_list=[[1,2,21],[1,3],[1,2]]
>>> min(my_list,key=len) # Notice the key argument
[1, 3]
```
As we can see, the first of the shortest elements is returned here.
---
## *What if the list is heterogeneous?*
***Until Python2***
If the list is heterogeneous, ***type names*** are considered for ordering; check [Comparisons](https://docs.python.org/2/library/stdtypes.html#comparisons):
> Objects of different types except numbers are ordered by their ***type names***
Hence if you put an `int` and a `list` there, you will get the integer value as the smallest, as `i` (in `int`) is of lower value than `l` (in `list`). Similarly `'1'` would be of higher value than both of these, since `str` sorts after both type names.
```
>>> my_list=[[1,1,21],1,'1']
>>> min(my_list)
1
```
***Python3 and onwards***
However, this confusing behaviour was removed in *Python 3*. It *now raises a `TypeError`*. Read [What's new in Python 3.0](https://docs.python.org/3.0/whatsnew/3.0.html#ordering-comparisons)
> The ordering comparison operators (`<`, `<=`, `>=`, `>`) raise a `TypeError` exception when the operands don't have a meaningful natural ordering. Thus, expressions like `1 < ''`, `0 > None` or `len <= len` are no longer valid, and e.g. `None < None` raises `TypeError` instead of returning `False`. A corollary is that ***sorting a heterogeneous list no longer makes sense – all the elements must be comparable to each other***.
```
>>> my_list=[[1,1,21],1,'1']
>>> min(my_list)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unorderable types: int() < list()
```
But it still works for *comparable types*. For example:
```
>>> my_list=[1,2.0]
>>> min(my_list)
1
```
Here we can see that the `list` contains `float` values and `int` values. But as `float` and `int` are comparable types, `min` function works in this case. |
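If a heterogeneous list really has to be reduced in Python 3, one possible workaround (a sketch that goes beyond the original answer) is to pass a `key` that maps every element into a single comparable domain, for instance its string form:

```python
mixed = [[1, 1, 21], 1, '1']

# str() turns every element into a string, so the comparison is well defined.
# Note the result reflects string ordering, not numeric ordering, and ties
# are resolved in favour of the first occurrence (here the int 1).
smallest = min(mixed, key=str)
print(smallest)  # 1
```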
Number list with no repeats and ordered | 34,058,251 | 14 | 2015-12-03T05:05:02Z | 34,058,322 | 8 | 2015-12-03T05:11:22Z | [
"python"
] | This code returns a list [0,0,0] to [9,9,9], which produces no repeats and each element is in order from smallest to largest.
```
def number_list():
b=[]
for position1 in range(10):
for position2 in range(10):
for position3 in range(10):
if position1<=position2 and position2<=position3:
b.append([position1, position2, position3])
return b
```
Looking for a shorter and better way to write this code without using multiple variables (position1, position2, position3), instead only using one variable `i`.
Here is my attempt at modifying the code, but I'm stuck at implementing the `if` statements:
```
def number_list():
b=[]
for i in range(1000):
b.append(map(int, str(i).zfill(3)))
return b
``` | Simply use a list comprehension; one way to do it:
```
>>> [[x,y,z] for x in range(10) for y in range(10) for z in range(10) if x<=y and y<=z]
[[0, 0, 0], [0, 0, 1], [0, 0, 2], [0, 0, 3], [0, 0, 4], [0, 0, 5], [0, 0, 6],
[0, 0, 7], [0, 0, 8], [0, 0, 9], [0, 1, 1], [0, 1, 2], [0, 1, 3], [0, 1, 4], [0, 1, 5], [0, 1, 6], [0, 1, 7], [0, 1, 8], [0, 1, 9], [0, 2, 2], [0, 2, 3],
[0, 2, 4], [0, 2, 5], [0, 2, 6], [0, 2, 7], [0, 2, 8], [0, 2, 9], [0, 3, 3],
[0, 3, 4], [0, 3, 5], [0, 3, 6], [0, 3, 7], [0, 3, 8],....[6, 8, 8], [6, 8, 9],
[6, 9, 9], [7, 7, 7], [7, 7, 8], [7, 7, 9], [7, 8, 8], [7, 8, 9], [7, 9, 9],
[8, 8, 8], [8, 8, 9], [8, 9, 9], [9, 9, 9]]
``` |
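As a side note, Python supports chained comparisons, so the `if` clause can be written without `and` (a small variant of the comprehension above):

```python
b = [[x, y, z]
     for x in range(10)
     for y in range(10)
     for z in range(10)
     if x <= y <= z]           # chained comparison replaces 'x<=y and y<=z'

print(b[0], b[-1], len(b))     # [0, 0, 0] [9, 9, 9] 220
```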
Number list with no repeats and ordered | 34,058,251 | 14 | 2015-12-03T05:05:02Z | 34,058,339 | 11 | 2015-12-03T05:12:53Z | [
"python"
] | This code returns a list [0,0,0] to [9,9,9], which produces no repeats and each element is in order from smallest to largest.
```
def number_list():
b=[]
for position1 in range(10):
for position2 in range(10):
for position3 in range(10):
if position1<=position2 and position2<=position3:
b.append([position1, position2, position3])
return b
```
Looking for a shorter and better way to write this code without using multiple variables (position1, position2, position3), instead only using one variable `i`.
Here is my attempt at modifying the code, but I'm stuck at implementing the `if` statements:
```
def number_list():
b=[]
for i in range(1000):
b.append(map(int, str(i).zfill(3)))
return b
``` | On the same note as the other [`itertools`](https://docs.python.org/3/library/itertools.html) answer, there is another way with [`combinations_with_replacement`](https://docs.python.org/3/library/itertools.html#itertools.combinations_with_replacement):
```
list(itertools.combinations_with_replacement(range(10), 3))
``` |
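With a smaller range the behaviour is easy to inspect; note the function yields tuples, so a conversion step is needed if the question's list-of-lists format is required:

```python
import itertools

pairs = list(itertools.combinations_with_replacement(range(3), 2))
print(pairs)  # [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]

# Convert the tuples to lists to match the question's expected output format.
as_lists = [list(t) for t in pairs]
print(as_lists[0])  # [0, 0]
```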
how to set different PYTHONPATH variables for python3 and python2 respectively | 34,066,261 | 11 | 2015-12-03T12:35:09Z | 34,066,989 | 7 | 2015-12-03T13:12:47Z | [
"python",
"pythonpath"
] | I want to add a specific library path only to python2. After adding `export PYTHONPATH="/path/to/lib/"` to my `.bashrc`, however, executing python3 gets the error: Your PYTHONPATH points to a site-packages dir for Python 2.x but you are running Python 3.x!
I think this is because python2 and python3 share the common `PYTHONPATH` variable.
So, can I set different `PYTHONPATH` variables for python2 and python3 respectively? If not, how can I add a library path exclusively to a particular version of python? | `PYTHONPATH` is somewhat of a hack as far as package management is concerned. A "pretty" solution would be to *package* your library and *install* it.
This may sound trickier than it is, so let me show you how it works.
Let us assume your "package" has a single file named `wow.py` and you keep it in `/home/user/mylib/wow.py`.
Create the file `/home/user/mylib/setup.py` with the following content:
```
from setuptools import setup
setup(name="WowPackage",
packages=["."],
)
```
That's it, now you can "properly install" your package into the Python distribution of your choice without the need to bother about `PYTHONPATH`. As far as "proper installation" is concerned, you have at least three options:
* "Really proper". Will copy your code to your python site-packages directory:
```
$ python setup.py install
```
* "Development". Will only add a link from the python site-packages to `/home/user/mylib`. This means that changes to code in your directory will take effect.
```
$ python setup.py develop
```
* "User". If you do not want to write to the system directories, you can install the package (either "properly" or "in development mode") to `/home/user/.local` directory, where Python will also find them on its own. For that, just add `--user` to the command.
```
$ python setup.py install --user
$ python setup.py develop --user
```
To remove a package installed in development mode, do
```
$ python setup.py develop -u
```
or
```
$ python setup.py develop -u --user
```
To remove a package installed "properly", do
```
$ pip uninstall WowPackage
```
If your package is more interesting than a single file (e.g. you have subdirectories and such), just list those in the `packages` parameter of the `setup` function (you will need to list everything recursively, hence you'll use a helper function for larger libraries). Once you get the hang of it, make sure to read [a more detailed manual](https://pythonhosted.org/an_example_pypi_project/setuptools.html) as well.
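The helper function alluded to above is setuptools' `find_packages`, which walks a directory tree and returns every package it finds. A self-contained sketch (the package layout below is invented for the demonstration):

```python
import os
import tempfile
from setuptools import find_packages

# Build a throwaway tree:  mylib/__init__.py  and  mylib/sub/__init__.py
root = tempfile.mkdtemp()
for pkg in ("mylib", os.path.join("mylib", "sub")):
    os.makedirs(os.path.join(root, pkg), exist_ok=True)
    open(os.path.join(root, pkg, "__init__.py"), "w").close()

# find_packages discovers the nested packages, so setup(packages=...) can
# simply be setup(packages=find_packages()) instead of a hand-written list.
print(sorted(find_packages(root)))  # ['mylib', 'mylib.sub']
```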
In the end, go and contribute your package to PyPI -- it is as simple as calling `python setup.py sdist register upload` (you'll need a PyPI username, though). |
Difference between writing something on one line and on several lines | 34,069,542 | 5 | 2015-12-03T15:16:27Z | 34,069,705 | 13 | 2015-12-03T15:23:02Z | [
"python"
] | Where is the difference when I write something on one line, separated by a `,`, and on two lines? Apparently I do not understand the difference, because I thought the two functions below should return the same.
```
def fibi(n):
a, b = 0, 1
for i in range(n):
a, b = b, a + b
return a
print(fibi(6))
> 8 # expected result (Fibonacci)
```
But
```
def fibi(n):
a, b = 0, 1
for i in range(n):
a = b
b = a + b
return a
print(fibi(6))
> 32
``` | This is because of Python's tuple unpacking. In the first one, Python collects the values on the right, makes them a tuple, then assigns the values of the tuple individually to the names on the left. So, if a == 1 and b == 2:
```
a, b = b, a + b
=> a, b = (2, 3)
=> a = 2, b = 3
```
But in the second example, it's normal assignment:
```
a = b
=> a = 2
b = a + b
=> b = 4
``` |
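To reproduce the tuple-unpacking behaviour with separate statements, a temporary variable has to capture the old value of `a` before it is overwritten; a sketch of the corrected multi-line version:

```python
def fibi(n):
    a, b = 0, 1
    for i in range(n):
        old_a = a       # remember a before it is reassigned
        a = b
        b = old_a + b   # uses the *old* a, just like the tuple version
    return a

print(fibi(6))  # 8
```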
Install Plotly in Anaconda | 34,072,117 | 5 | 2015-12-03T17:11:05Z | 34,073,946 | 8 | 2015-12-03T18:54:09Z | [
"python",
"plot",
"anaconda",
"conda"
] | **How to install Plotly in Anaconda?**
The <https://conda.anaconda.org/plotly> says to `conda install -c https://conda.anaconda.org/plotly <package>`, and
The <https://plot.ly/python/user-guide/> says to `pip install plotly`. I.e., without package.
So **which package should I specify in Anaconda's conda?**
I tried without one and get errors:
```
C:\>conda install -c https://conda.anaconda.org/plotly
Error: too few arguments, must supply command line package specs or --file
``` | If you don't care which version of Plotly you install, just use `pip`.
`pip install plotly` is an easy way to install the latest stable package for Plotly from PyPI.
`pip` is a useful package and dependency management tool, which makes these things easy, but it should be noted that Anaconda's `conda` tool will do the same thing.
`pip` will install to your Anaconda install location by default.
Check out [this](http://conda.pydata.org/docs/_downloads/conda-pip-virtualenv-translator.html) description of package and environment management between `pip` and `conda`.
Edit: The link will show that `conda` can handle everything `pip` can and more, but if you're not trying to specify the version of the package you need to install, `pip` can be much more concise. |
Tracing code execution in embedded Python interpreter | 34,075,757 | 8 | 2015-12-03T20:42:28Z | 34,076,235 | 8 | 2015-12-03T21:12:45Z | [
"python",
"c",
"python-3.x",
"python-c-api"
] | I'd like to create an application with embedded python interpreter and basic debugging capabilities.
Now I'm searching the API for functions which I could use to **run code step-by-step and get the number of the current line of code** which is being (or is about to be) executed.
Official Python docs seem a little underdone for me when comes it to [tracing and profiling](https://docs.python.org/3.5/c-api/init.html#profiling-and-tracing).
There is, for example, no information about the meaning of the return value of `Py_tracefunc`.
So far I've assembled the following:
```
#include <Python.h>
static int lineCounter = 0;
int trace(PyObject *obj, PyFrameObject *frame, int what, PyObject *arg)
{
if(what == PyTrace_LINE)
{
lineCounter += 1;
printf("line %d\n", lineCounter);
}
return 0;
}
int main(int argc, char *argv[])
{
wchar_t *program = Py_DecodeLocale(argv[0], NULL);
if (program == NULL) {
fprintf(stderr, "Fatal error: cannot decode argv[0]\n");
exit(1);
}
Py_SetProgramName(program); /* optional but recommended */
Py_Initialize();
PyEval_SetTrace(trace, NULL);
char *code = "def adder(a, b):\n"
" return a + b\n"
"x = 3\n"
"y = 4\n"
"print(adder(x, y))\n";
PyRun_SimpleString(code);
Py_Finalize();
PyMem_RawFree(program);
return 0;
}
```
However, the compiler outputs the following error:
```
hello.c:5:26: error: unknown type name 'PyFrameObject'
int trace(PyObject *obj, PyFrameObject *frame, int what, PyObject *arg)
^
```
I'm operating on ManjaroLinux and using the following to compile the above:
```
gcc -o hello hello.c -I/usr/include/python3.5m -Wno-unused-result -Wsign-compare -Wunreachable-code -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector-strong --param=ssp-buffer-size=4 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -L/usr/lib -lpython3.5m -lpthread -ldl -lutil -lm -Xlinker -export-dynamic
```
I've found that I can replace `PyFrameObject` with `struct _frame`, and then the program compiles, but everyone knows it's a dirty hack, not a solution.
The executable outputs the following:
```
line 1
line 2
line 3
line 4
line 5
7
```
**But I'd like the traces to follow the execution flow of the script (that is: start from line 3, then 4, 5 and then, due to the function call, 2).**
**I could not find anything about step-by-step execution.**
Could you recommend some other sources about Python C API with more information and some introduction to the topic?
*I awarded the answer with bounty since it would expire anyway. However, I'm still looking and would be grateful for answers for other questions from above.* | > ```
> hello.c:5:26: error: unknown type name 'PyFrameObject'
> ```
This error means that `PyFrameObject` has not been declared. I did a [Google search](https://www.google.com/search?q=pyframeobject&ie=utf-8&oe=utf-8) which showed me that [frameobject.h](http://svn.python.org/projects/python/trunk/Include/frameobject.h) in the Python source tree is where that structure is declared.
I expect that you can add the line
```
#include <frameobject.h>
```
to resolve this. |
How to change dataframe column names in pyspark? | 34,077,353 | 11 | 2015-12-03T22:21:55Z | 34,077,809 | 37 | 2015-12-03T22:54:58Z | [
"python",
"apache-spark",
"pyspark",
"pyspark-sql"
] | I come from pandas background and am used to reading data from CSV files into a dataframe and then simply changing the column names to something useful using the simple command:
```
df.columns = new_column_name_list
```
However, the same doesn't work in pyspark dataframes created using sqlContext.
The only solution I could figure out to do this easily is the following:
```
df = sqlContext.read.format("com.databricks.spark.csv").options(header='false', inferschema='true', delimiter='\t').load("data.txt")
oldSchema = df.schema
for i,k in enumerate(oldSchema.fields):
k.name = new_column_name_list[i]
df = sqlContext.read.format("com.databricks.spark.csv").options(header='false', delimiter='\t').load("data.txt", schema=oldSchema)
```
This is basically defining the variable twice and inferring the schema first then renaming the column names and then loading the dataframe again with the updated schema.
Is there a better and more efficient way to do this like we do in pandas ?
My spark version is 1.5.0 | There are many ways to do that:
* Option 1. Using [selectExpr](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=selectexpr#pyspark.sql.DataFrame.selectExpr).
```
data = sqlContext.createDataFrame([("Alberto", 2), ("Dakota", 2)],
["Name", "askdaosdka"])
data.show()
data.printSchema()
# Output
#+-------+----------+
#| Name|askdaosdka|
#+-------+----------+
#|Alberto| 2|
#| Dakota| 2|
#+-------+----------+
#root
# |-- Name: string (nullable = true)
# |-- askdaosdka: long (nullable = true)
df = data.selectExpr("Name as name", "askdaosdka as age")
df.show()
df.printSchema()
# Output
#+-------+---+
#| name|age|
#+-------+---+
#|Alberto| 2|
#| Dakota| 2|
#+-------+---+
#root
# |-- name: string (nullable = true)
# |-- age: long (nullable = true)
```
* Option 2. Using [withColumnRenamed](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=selectexpr#pyspark.sql.DataFrame.withColumnRenamed), notice that this method allows you to "overwrite" the same column.
```
oldColumns = data.schema.names
newColumns = ["name", "age"]
df = reduce(lambda data, idx: data.withColumnRenamed(oldColumns[idx], newColumns[idx]), xrange(len(oldColumns)), data)
df.printSchema()
df.show()
```
* Option 3. using
[alias](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=column#pyspark.sql.Column.alias), in Scala you can also use [as](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.Column).
```
from pyspark.sql.functions import *
data = data.select(col("Name").alias("name"), col("askdaosdka").alias("age"))
data.show()
# Output
#+-------+---+
#| name|age|
#+-------+---+
#|Alberto| 2|
#| Dakota| 2|
#+-------+---+
```
* Option 4. Using [sqlContext.sql](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.SQLContext.sql), which lets you use SQL queries on `DataFrames` registered as tables.
```
sqlContext.registerDataFrameAsTable(data, "myTable")
df2 = sqlContext.sql("SELECT Name AS name, askdaosdka as age from myTable")
df2.show()
# Output
#+-------+---+
#| name|age|
#+-------+---+
#|Alberto| 2|
#| Dakota| 2|
#+-------+---+
``` |
How to change dataframe column names in pyspark? | 34,077,353 | 11 | 2015-12-03T22:21:55Z | 36,302,241 | 7 | 2016-03-30T07:25:17Z | [
"python",
"apache-spark",
"pyspark",
"pyspark-sql"
] | I come from pandas background and am used to reading data from CSV files into a dataframe and then simply changing the column names to something useful using the simple command:
```
df.columns = new_column_name_list
```
However, the same doesn't work in pyspark dataframes created using sqlContext.
The only solution I could figure out to do this easily is the following:
```
df = sqlContext.read.format("com.databricks.spark.csv").options(header='false', inferschema='true', delimiter='\t').load("data.txt")
oldSchema = df.schema
for i,k in enumerate(oldSchema.fields):
k.name = new_column_name_list[i]
df = sqlContext.read.format("com.databricks.spark.csv").options(header='false', delimiter='\t').load("data.txt", schema=oldSchema)
```
This is basically defining the variable twice and inferring the schema first then renaming the column names and then loading the dataframe again with the updated schema.
Is there a better and more efficient way to do this like we do in pandas ?
My spark version is 1.5.0 | ```
df = df.withColumnRenamed("colName", "newColName").withColumnRenamed("colName2", "newColName2")
```
The advantage of this approach: with a long list of columns, you may want to change only a few column names. This can be very convenient in such scenarios, and it is very useful when joining tables with duplicate column names. |
Send email task with correct context | 34,079,191 | 14 | 2015-12-04T01:03:52Z | 34,191,943 | 9 | 2015-12-10T00:58:33Z | [
"python",
"python-2.7",
"flask",
"celery",
"celery-task"
] | This code is my celery worker script:
```
from app import celery, create_app
app = create_app('default')
app.app_context().push()
```
When I try to run the worker I will get into this error:
```
File "/home/vagrant/myproject/venv/app/mymail.py", line 29, in send_email_celery
msg.html = render_template(template + '.html', **kwargs)
File "/home/vagrant/myproject/venv/local/lib/python2.7/site-packages/flask/templating.py", line 126, in render_template
ctx.app.update_template_context(context)
File "/home/vagrant/myproject/venv/local/lib/python2.7/site-packages/flask/app.py", line 716, in update_template_context
context.update(func())
TypeError: 'NoneType' object is not iterable
```
My question is how can I send the email task, when using a worker in celery.
**mymail.py**
```
from flask import current_app, render_template
from flask.ext.mail import Message
from . import mail, celery
@celery.task
def send_async_email_celery(msg):
mail.send(msg)
def send_email_celery(to, subject, template, **kwargs):
app = current_app._get_current_object()
msg = Message(subject, sender=app.config['MAIL_SENDER'], recipients=[to])
msg.html = render_template(template + '.html', **kwargs)
send_async_email_celery.delay(msg)
```
**\_\_init\_\_**
```
...
def create_app(config_name):
app = Flask(__name__)
app.config.from_object(config[config_name])
config[config_name].init_app(app)
bootstrap.init_app(app)
mail.init_app(app)
db.init_app(app)
login_manager.init_app(app)
celery.conf.update(app.config)
redis_store.init_app(app)
from .users import main as main_blueprint
app.register_blueprint(main_blueprint)
return app
```
Apparently there is some conflict between the blueprint and the worker. Removing the blueprint is not an option, if possible, due to the custom filters that I need to use in the email template. | I finally found the reason for the problem after some debugging with this [code](https://github.com/miguelgrinberg/flasky-with-celery).
I have an `app_context_processor` that does not always return a result.
```
@mod.app_context_processor
def last_reputation_changes():
if current_user:
#code
return dict(reputation='xxx')
```
When sending the email, the context processor needs an `else` case to return something, since `current_user` (from `flask.ext.login`) is not defined in that context. Basically I only need something like this:
```
def last_reputation_changes():
if current_user:
#code
return dict(reputation='xxx')
else:
return dict(reputation=None)
```
So the problem is not related to celery, but to the flask-login integration. |
Tensor with unspecified dimension in tensorflow | 34,079,787 | 7 | 2015-12-04T02:11:33Z | 34,082,273 | 13 | 2015-12-04T06:33:15Z | [
"python",
"tensorflow"
] | I'm playing around with tensorflow and ran into a problem with the following code:
```
def _init_parameters(self, input_data, labels):
# the input shape is (batch_size, input_size)
input_size = tf.shape(input_data)[1]
# labels in one-hot format have shape (batch_size, num_classes)
num_classes = tf.shape(labels)[1]
stddev = 1.0 / tf.cast(input_size, tf.float32)
w_shape = tf.pack([input_size, num_classes], 'w-shape')
normal_dist = tf.truncated_normal(w_shape, stddev=stddev, name='normaldist')
self.w = tf.Variable(normal_dist, name='weights')
```
(I'm using `tf.pack` as suggested in [this question](http://stackoverflow.com/questions/33711427/tensorflow-initializing-tensor-of-ones), since I was getting the same error)
When I run it (from a larger script that invokes this one), I get this error:
```
ValueError: initial_value must have a shape specified: Tensor("normaldist:0", shape=TensorShape([Dimension(None), Dimension(None)]), dtype=float32)
```
I tried to replicate the process in the interactive shell. Indeed, the dimensions of `normal_dist` are unspecified, although the supplied values do exist:
```
In [70]: input_size.eval()
Out[70]: 4
In [71]: num_classes.eval()
Out[71]: 3
In [72]: w_shape.eval()
Out[72]: array([4, 3], dtype=int32)
In [73]: normal_dist.eval()
Out[73]:
array([[-0.27035281, -0.223277 , 0.14694688],
[-0.16527176, 0.02180306, 0.00807841],
[ 0.22624688, 0.36425814, -0.03099642],
[ 0.25575709, -0.02765726, -0.26169327]], dtype=float32)
In [78]: normal_dist.get_shape()
Out[78]: TensorShape([Dimension(None), Dimension(None)])
```
This is weird. Tensorflow generates the vector but can't say its shape. Am I doing something wrong? | As Ishamael says, all tensors have a static shape, which is known at graph construction time and accessible using [`Tensor.get_shape()`](http://www.tensorflow.org/api_docs/python/framework.html#Tensor.get_shape); and a dynamic shape, which is only known at runtime and is accessible by fetching the value of the tensor, or passing it to an operator like [`tf.shape`](http://www.tensorflow.org/api_docs/python/array_ops.html#shape). In many cases, the static and dynamic shapes are the same, but they can be different - the static shape can be *partially defined* - in order allow the dynamic shape to vary from one step to the next.
In your code `normal_dist` has a partially-defined static shape, because `w_shape` is a computed value. (TensorFlow sometimes attempts to evaluate
these computed values at graph construction time, but it gets stuck at `tf.pack`.) It infers the shape `TensorShape([Dimension(None), Dimension(None)])`, which means "a matrix with an unknown number of rows and columns," because it knows that `w_shape` is a vector of length 2, so the resulting `normal_dist` must be 2-dimensional.
You have two options to deal with this. You can set the static shape as Ishamael suggests, but this requires you to know the shape at graph construction time. For example, the following may work:
```
normal_dist.set_shape([input_data.get_shape()[1], labels.get_shape()[1]])
```
Alternatively, you can pass `validate_shape=False` to the [`tf.Variable` constructor](http://www.tensorflow.org/api_docs/python/state_ops.html#Variable.__init__). This allows you to create a variable with a partially-defined shape, but it limits the amount of static shape information that can be inferred later on in the graph. |
Why does heroku local:run wants to use the global python installation instead of the currently activated virtual env? | 34,086,320 | 16 | 2015-12-04T10:35:32Z | 34,151,405 | 10 | 2015-12-08T08:44:35Z | [
"python",
"heroku",
"virtualenv",
"pythonpath"
] | Using Heroku to deploy our Django application, everything seems to work by the spec, except the `heroku local:run` command.
We oftentimes need to run commands through Django's manage.py file. Running them on the **remote**, as one-off dynos, works flawlessly.
To run them **locally**, we try:
```
heroku local:run python manage.py the_command
```
Which fails, despite the fact that the current virtual env contains a Django installation, with
```
ImportError: No module named django.core.management
```
## Diagnostic through the python path
Then `heroku local:run which python` returns:
```
/usr/local/bin/python
```
Whereas `which python` returns:
```
/Users/myusername/MyProject/venv/bin/python #the correct value
```
---
* Is this a bug in `heroku local:run`? Or are we misunderstanding its expected behaviour?
* And more importantly: is there a way to have `heroku local:run` use the currently installed virtual env ? | After contacting Heroku's support, we understood the problem.
Heroku's support confirmed that `heroku local:run` should, as expected, use the currently active virtual env.
The problem turned out to be a local configuration issue, due to our `.bashrc` content: `heroku local:run` sources `.bashrc` (and in our case, this was prepending `$PATH` with the path to the global Python installation, making it found before the virtual env's). On the other hand, `heroku local` does not source this file. To quote the last message from their support:
> heroku local:run runs the command using bash in interactive mode, which does read your profile, vs heroku local (aliased to heroku local:start) which does not run in interactive mode. |
Efficiently processing data in text file | 34,087,263 | 5 | 2015-12-04T11:23:58Z | 34,087,401 | 12 | 2015-12-04T11:31:03Z | [
"python",
"file"
] | Let's assume I have a (text) file with the following structure (name, score):
```
a 0
a 1
b 0
c 0
d 3
b 2
```
And so on. My aim is to sum the scores for every name and order them from highest score to lowest score. So in this case, I want the following output:
```
d 3
b 2
a 1
c 0
```
In advance I do not know what names will be in the file.
I was wondering if there is an efficient way to do this. My text file can contain up to 50,000 entries.
The only way I can think of is just start at line 1, remember that name and then go over the whole file to look for that name and sum. This looks horribly inefficient, so I was wondering if there is a better way to do this. | Read all data into a dictionary:
```
from collections import defaultdict
from operator import itemgetter
scores = defaultdict(int)
with open('my_file.txt') as fobj:
for line in fobj:
name, score = line.split()
scores[name] += int(score)
```
and the sorting:
```
for name, score in sorted(scores.items(), key=itemgetter(1), reverse=True):
print(name, score)
```
prints:
```
d 3
b 2
a 1
c 0
```
# Performance
To check the performance of this answer vs. the one from @SvenMarnach, I put both approaches into a function. Here `fobj` is a file opened for reading.
I use `io.StringIO` so IO delays should, hopefully, not be measured:
```
from collections import Counter
def counter(fobj):
scores = Counter()
fobj.seek(0)
for line in fobj:
key, score = line.split()
scores.update({key: int(score)})
return scores.most_common()
from collections import defaultdict
from operator import itemgetter
def default(fobj):
scores = defaultdict(int)
fobj.seek(0)
for line in fobj:
name, score = line.split()
scores[name] += int(score)
return sorted(scores.items(), key=itemgetter(1), reverse=True)
```
Results for `collections.Counter`:
```
%timeit counter(fobj)
10000 loops, best of 3: 59.1 µs per loop
```
Results for `collections.defaultdict`:
```
%timeit default(fobj)
10000 loops, best of 3: 15.8 µs per loop
```
Looks like `defaultdict` is four times faster. I would not have guessed this. But when it comes to performance you **need** to measure. |
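Outside IPython (where `%timeit` is a magic command), the comparison can be reproduced with the standard-library `timeit` module. A self-contained sketch, with inline sample data standing in for the file:

```python
import io
import timeit
from collections import Counter, defaultdict
from operator import itemgetter

data = "a 0\na 1\nb 0\nc 0\nd 3\nb 2\n" * 100  # synthetic stand-in for the file

def counter_version(text):
    scores = Counter()
    for line in io.StringIO(text):
        key, score = line.split()
        scores.update({key: int(score)})
    return scores.most_common()

def defaultdict_version(text):
    scores = defaultdict(int)
    for line in io.StringIO(text):
        name, score = line.split()
        scores[name] += int(score)
    return sorted(scores.items(), key=itemgetter(1), reverse=True)

# Both orderings agree here because all the totals are distinct.
assert counter_version(data) == defaultdict_version(data)

for fn in (counter_version, defaultdict_version):
    elapsed = timeit.timeit(lambda: fn(data), number=200)
    print(fn.__name__, round(elapsed, 4))
```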
Efficiently processing data in text file | 34,087,263 | 5 | 2015-12-04T11:23:58Z | 34,087,437 | 7 | 2015-12-04T11:32:52Z | [
"python",
"file"
] | Let's assume I have a (text) file with the following structure (name, score):
```
a 0
a 1
b 0
c 0
d 3
b 2
```
And so on. My aim is to sum the scores for every name and order them from highest score to lowest score. So in this case, I want the following output:
```
d 3
b 2
a 1
c 0
```
In advance I do not know what names will be in the file.
I was wondering if there is an efficient way to do this. My text file can contain up to 50,000 entries.
The only way I can think of is just start at line 1, remember that name and then go over the whole file to look for that name and sum. This looks horribly inefficient, so I was wondering if there is a better way to do this. | This is a good use case for `collections.Counter`:
```
from collections import Counter
scores = Counter()
with open('my_file') as f:
for line in f:
key, score = line.split()
scores.update({key: int(score)})
for key, score in scores.most_common():
print(key, score)
``` |
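Since `Counter` returns 0 for missing keys, the `update` call can also be written as a plain item increment (a minor variant, shown here with inline data instead of a file):

```python
from collections import Counter

lines = ["a 0", "a 1", "b 0", "c 0", "d 3", "b 2"]

scores = Counter()
for line in lines:
    key, score = line.split()
    scores[key] += int(score)   # missing keys start at 0 automatically

print(scores.most_common())  # [('d', 3), ('b', 2), ('a', 1), ('c', 0)]
```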
How do I log multiple very similar events gracefully in python? | 34,090,999 | 5 | 2015-12-04T14:50:13Z | 34,150,824 | 7 | 2015-12-08T08:03:43Z | [
"python",
"exception",
"logging"
] | With pythons [`logging`](https://docs.python.org/2/library/logging.html) module, is there a way to **collect multiple events into one log entry**? An ideal solution would be an extension of python's `logging` module or a **custom formatter/filter** for it so collecting logging events of the same kind happens in the background and **nothing needs to be added in code body** (e.g. at every call of a logging function).
Here is an **example** that generates a **large number of the same or very similar logging** events:
```
import logging
for i in range(99999):
try:
asdf[i] # not defined!
except NameError:
logging.exception('foo') # generates large number of logging events
else: pass
# ... more code with more logging ...
for i in range(88888): logging.info('more of the same %d' % i)
# ... and so on ...
```
So we have the same exception **99999** times and log it. It would be nice, if the log just said something like:
```
ERROR:root:foo (occured 99999 times)
Traceback (most recent call last):
File "./exceptionlogging.py", line 10, in <module>
asdf[i] # not defined!
NameError: name 'asdf' is not defined
INFO:root:foo more of the same (occured 88888 times with various values)
``` | You should probably be writing a message aggregate/statistics class rather than trying to hook onto the logging system's [singletons](http://stackoverflow.com/questions/31875/is-there-a-simple-elegant-way-to-define-singletons-in-python) but I guess you may have an existing code base that uses logging.
I'd also suggest that you should instantiate your loggers rather than always using the default root. The [Python Logging Cookbook](https://docs.python.org/2/howto/logging-cookbook.html) has extensive explanation and examples.
The following class should do what you are asking.
```
import logging
import atexit
import pprint


class Aggregator(object):
    logs = {}

    @classmethod
    def _aggregate(cls, record):
        id = '{0[levelname]}:{0[name]}:{0[msg]}'.format(record.__dict__)
        if id not in cls.logs:  # first occurrence
            cls.logs[id] = [1, record]
        else:  # subsequent occurrence
            cls.logs[id][0] += 1

    @classmethod
    def _output(cls):
        for count, record in cls.logs.values():
            record.__dict__['msg'] += ' (occured {} times)'.format(count)
            logging.getLogger(record.__dict__['name']).handle(record)

    @staticmethod
    def filter(record):
        # pprint.pprint(record)
        Aggregator._aggregate(record)
        return False

    @staticmethod
    def exit():
        Aggregator._output()


logging.getLogger().addFilter(Aggregator)
atexit.register(Aggregator.exit)

for i in range(99999):
    try:
        asdf[i]  # not defined!
    except NameError:
        logging.exception('foo')  # generates large number of logging events
    else: pass

# ... more code with more logging ...
for i in range(88888):
    logging.error('more of the same')
# ... and so on ...
```
Note that you don't get any logs until the program exits.
The result of running it is:
```
ERROR:root:foo (occured 99999 times)
Traceback (most recent call last):
File "C:\work\VEMS\python\logcount.py", line 38, in <module>
asdf[i] # not defined!
NameError: name 'asdf' is not defined
ERROR:root:more of the same (occured 88888 times)
``` |
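The core mechanism the class above relies on, a logger-level filter that sees every record and returns `False` to suppress it, can be demonstrated in isolation. This is a minimal sketch with assumed names (`CountingFilter`, the `demo` logger), not the answer's full aggregator:

```python
import logging

class CountingFilter:
    """Count identical messages; let only the first occurrence through."""
    def __init__(self):
        self.counts = {}

    def filter(self, record):
        key = record.getMessage()
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key] == 1  # returning False suppresses the record

logger = logging.getLogger("demo")
counting = CountingFilter()
logger.addFilter(counting)

for i in range(5):
    logger.error("same message")

# Only the first "same message" reached a handler; all five were counted
print(counting.counts)
```

Unlike the `Aggregator` above, this emits the first occurrence immediately and just drops the repeats; the counts can then be reported whenever convenient rather than only at exit.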
How to add header row to a pandas DataFrame | 34,091,877 | 8 | 2015-12-04T15:35:59Z | 34,092,032 | 10 | 2015-12-04T15:43:00Z | [
"python",
"csv",
"pandas",
"header"
] | I am reading a csv file into `pandas`. This csv file consists of four columns and some rows, but does not have a header row, which I want to add. I have been trying the following:
```
Cov = pd.read_csv("path/to/file.txt", sep='\t')
Frame=pd.DataFrame([Cov], columns = ["Sequence", "Start", "End", "Coverage"])
Frame.to_csv("path/to/file.txt", sep='\t')
```
But when I apply the code, I get the following Error:
```
ValueError: Shape of passed values is (1, 1), indices imply (4, 1)
```
What exactly does the error mean? And what would be a clean way in Python to add a header row to my csv file/pandas df? | You can use `names` directly in the [`read_csv`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)
> names : array-like, default None List of column names to use. If file
> contains no header row, then you should explicitly pass header=None
```
Cov = pd.read_csv("path/to/file.txt", sep='\t',
                  names = ["Sequence", "Start", "End", "Coverage"])
```
The line below will not work as you expect. `Cov` is already a dataframe, assuming it really has 4 columns when it's being read from the file.
```
Frame=pd.DataFrame([Cov], columns = ["Sequence", "Start", "End", "Coverage"])
``` |
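A minimal, self-contained illustration of the `names` approach, using made-up tab-separated coverage rows in an in-memory buffer (the column names are the ones from the question; the data values are invented):

```python
import io
import pandas as pd

# Hypothetical tab-separated data with no header row
raw = "chr1\t100\t200\t15\nchr2\t300\t400\t20\n"

df = pd.read_csv(io.StringIO(raw), sep='\t',
                 names=["Sequence", "Start", "End", "Coverage"])

print(list(df.columns))  # the names are applied as the header
print(len(df))           # both data rows are kept as data, not consumed as a header
```

Passing `names` tells `read_csv` that the file has no header row, so the first line is treated as data rather than column labels.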