title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
numpy subtract/add 1d array from 2d array | 33,303,348 | 3 | 2015-10-23T13:20:07Z | 33,303,590 | 8 | 2015-10-23T13:30:11Z | [
"python",
"arrays",
"numpy"
] | I have the following 2D-array:
```
a = array([[ 1,  2,  3],
           [ 4,  5,  6],
           [ 7,  8,  9],
           [10, 11, 12],
           [13, 14, 15]])
```
and another 1D-array:
```
b = array([ 1, 2, 3, 4, 5])
```
then I want to calculate something like
```
c = a - b
```
with the intent of getting:
```
c = array([[0, 1, 2],
           [2, 3, 4],
           [4, 5, 6],
           [6, 7, 8],
           [8, 9, 10]])
```
but instead I get the error message:
```
Traceback (most recent call last):
Python Shell, prompt 79, line 1
ValueError: operands could not be broadcast together with shapes (5,3) (5,)
```
I read the broadcasting rules but didn't get any wiser. I could do a workaround with for-loops or similar but there should be a direct way. Thanks | You need to convert array `b` to a `(5, 1)` shape array; use `None` or `numpy.newaxis` in the index tuple. Here is the [Indexing of Numpy array](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#basic-slicing).
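To see why the subtraction fails, it helps to compare shapes first (a quick check using the same data; broadcasting lines shapes up from the right, so `(5, 3)` vs `(5,)` fails, while `(5, 3)` vs `(5, 1)` works):
```python
import numpy as np

a = np.arange(1, 16).reshape(5, 3)   # the question's 2D array
b = np.array([1, 2, 3, 4, 5])

print(a.shape, b.shape)        # (5, 3) (5,) -> trailing dimensions 3 and 5 clash
print(b[:, None].shape)        # (5, 1) -> compatible with (5, 3)

c = a - b[:, None]
print(c[0])                    # [0 1 2]
```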
You can do it like:
```
import numpy
a = numpy.array([[ 1,  2,  3],
                 [ 4,  5,  6],
                 [ 7,  8,  9],
                 [10, 11, 12],
                 [13, 14, 15]])
b = numpy.array([1, 2, 3, 4, 5])
c = a - b[:, None]
print c
```
Output:
```
Out[2]:
array([[ 0,  1,  2],
       [ 2,  3,  4],
       [ 4,  5,  6],
       [ 6,  7,  8],
       [ 8,  9, 10]])
``` |
Django Rest Framework -- no module named rest_framework | 33,308,781 | 7 | 2015-10-23T18:05:38Z | 33,308,988 | 7 | 2015-10-23T18:18:33Z | [
"python",
"django",
"python-3.x",
"pip",
"django-rest-framework"
] | I've installed django rest framework using `pip install djangorestframework`, yet I still get this error when I run `python3 manage.py syncdb`:
> ImportError: No module named 'rest\_framework'
I'm using Python 3; is this my issue? | You need to install django rest framework using pip3 (pip for Python 3):
```
pip3 install djangorestframework
```
Instructions on how to install pip3 can be found [here](http://stackoverflow.com/questions/6587507/how-to-install-pip-with-python-3) |
Is there a way to change the location of pytest's .cache directory? | 33,310,615 | 8 | 2015-10-23T20:10:09Z | 38,085,113 | 7 | 2016-06-28T19:50:19Z | [
"python",
"caching",
"directory",
"py.test"
] | I need to be able to change the location of pytest's .cache directory to the env variable, WORKSPACE. Due to server permissions out of my control, I am running into this error because my user does not have permission to write in the directory where the tests are being run from:
```
py.error.EACCES: [Permission denied]: open('/path/to/restricted/directory/tests/.cache/v/cache/lastfailed', 'w')
```
Is there a way to set the path of the .cache directory to the environment variable WORKSPACE? | You can prevent the creation of `.cache/` by disabling the "cacheprovider" plugin:
```
py.test -p no:cacheprovider ...
``` |
max([x for x in something]) vs max(x for x in something): why is there a difference and what is it? | 33,326,049 | 16 | 2015-10-25T03:59:45Z | 33,326,128 | 7 | 2015-10-25T04:17:55Z | [
"python",
"python-2.7",
"list-comprehension"
] | I was working on a project for class where my code wasn't producing the same results as the reference code.
I compared my code with the reference code line by line, they appeared almost exactly the same. Everything seemed to be logically equivalent. Eventually I began replacing lines and testing until I found the line that mattered.
Turned out it was something like this (EDIT: exact code is lower down):
```
# my version:
max_q = max([x for x in self.getQValues(state)])
# reference version which worked:
max_q = max(x for x in self.getQValues(state))
```
Now, this baffled me. I tried some experiments with the Python (2.7) interpreter, running tests using `max` on list comprehensions with and without the square brackets. Results seemed to be exactly the same.
Even by debugging via PyCharm I could find no reason why my version didn't produce the exact same result as the reference version. Up to this point I thought I had a pretty good handle on how list comprehensions worked (and how the `max()` function worked), but now I'm not so sure, because this is such a weird discrepancy.
What's going on here? Why does my code produce different results than the reference code (in 2.7)? How does passing in a comprehension without brackets differ from passing in a comprehension with brackets?
EDIT 2: the exact code was this:
```
# works
max_q = max(self.getQValue(nextState, action) for action in legal_actions)
# doesn't work (i.e., provides different results)
max_q = max([self.getQValue(nextState, action) for action in legal_actions])
```
I don't think this should be marked as duplicate -- yes, the other question regards the difference between comprehension objects and list objects, but not why `max()` would provide different results when given 'some list built by X comprehension' rather than 'X comprehension' alone. | Wrapping a comprehension in `[]` actually builds a list, which is what gets assigned to your variable, or in this case passed to the max function. Without the brackets you are creating a `generator` object, which is fed to the max function instead.
```
results1 = (x for x in range(10))
results2 = [x for x in range(10)]
result3 = max(x for x in range(10))
result4 = max([x for x in range(10)])
print(type(results1)) # <class 'generator'>
print(type(results2)) # <class 'list'>
print(result3) # 9
print(result4) # 9
```
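One practical difference worth noting: the result is the same, but the list version materializes every element in memory first, while the generator does not:
```python
import sys

nums = range(100000)
gen = (x for x in nums)   # lazy: nothing is built yet
lst = [x for x in nums]   # eager: the full list exists in memory

print(sys.getsizeof(gen) < sys.getsizeof(lst))          # True
print(max(x for x in nums) == max([x for x in nums]))   # True
```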
As far as I know, they should work essentially the same within the max function. |
max([x for x in something]) vs max(x for x in something): why is there a difference and what is it? | 33,326,049 | 16 | 2015-10-25T03:59:45Z | 33,326,754 | 18 | 2015-10-25T06:07:48Z | [
"python",
"python-2.7",
"list-comprehension"
] | I was working on a project for class where my code wasn't producing the same results as the reference code.
I compared my code with the reference code line by line, they appeared almost exactly the same. Everything seemed to be logically equivalent. Eventually I began replacing lines and testing until I found the line that mattered.
Turned out it was something like this (EDIT: exact code is lower down):
```
# my version:
max_q = max([x for x in self.getQValues(state)])
# reference version which worked:
max_q = max(x for x in self.getQValues(state))
```
Now, this baffled me. I tried some experiments with the Python (2.7) interpreter, running tests using `max` on list comprehensions with and without the square brackets. Results seemed to be exactly the same.
Even by debugging via PyCharm I could find no reason why my version didn't produce the exact same result as the reference version. Up to this point I thought I had a pretty good handle on how list comprehensions worked (and how the `max()` function worked), but now I'm not so sure, because this is such a weird discrepancy.
What's going on here? Why does my code produce different results than the reference code (in 2.7)? How does passing in a comprehension without brackets differ from passing in a comprehension with brackets?
EDIT 2: the exact code was this:
```
# works
max_q = max(self.getQValue(nextState, action) for action in legal_actions)
# doesn't work (i.e., provides different results)
max_q = max([self.getQValue(nextState, action) for action in legal_actions])
```
I don't think this should be marked as duplicate -- yes, the other question regards the difference between comprehension objects and list objects, but not why `max()` would provide different results when given a 'some list built by X comprehension', rather than 'X comprehension' alone. | Are you leaking a local variable which is affecting later code?
```
# works
action = 'something important'
max_q = max(self.getQValue(nextState, action) for action in legal_actions)
assert action == 'something important'
# doesn't work (i.e., provides different results)
max_q = max([self.getQValue(nextState, action) for action in legal_actions])
assert action == 'something important' # fails!
```
Generator expressions and dictionary comprehensions create a new scope, but before Python 3, list comprehensions do not, for backwards compatibility.
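For reference, Python 3 gave list comprehensions their own scope as well, so this particular leak cannot happen there:
```python
# Python 3: the comprehension's loop variable no longer clobbers the outer name
action = 'something important'
values = [action for action in range(3)]
print(action)   # 'something important' (under Python 2 this would print 2)
```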
Easy way to test - change your code to:
```
max_q = max([self.getQValue(nextState, action) for action in legal_actions])
max_q = max(self.getQValue(nextState, action) for action in legal_actions)
```
Assuming `self.getQValue` is pure, then the only lasting side effect of the first line will be to mess with local variables. If this breaks it, then that's the cause of your problem. |
Why does the 'in' keyword claim it needs an iterable object? | 33,326,150 | 9 | 2015-10-25T04:22:24Z | 33,326,230 | 13 | 2015-10-25T04:35:08Z | [
"python"
] | ```
>>> non_iterable = 1
>>> 5 in non_iterable
Traceback (most recent call last):
  File "<input>", line 1, in <module>
TypeError: 'int' object is not iterable
>>> class also_non_iterable:
...     def __contains__(self, thing):
...         return True
>>> 5 in also_non_iterable()
True
>>> isinstance(also_non_iterable(), Iterable)
False
```
Is there a reason the `in` keyword claims to want an iterable object when what it truly wants is an object that implements `__contains__`? | It claims to want an iterable because, if the object's class does not implement `__contains__`, then `in` tries to iterate through the object and check whether any value it yields is equal to the target.
An example to show that:
```
>>> class C:
...     def __iter__(self):
...         return iter([1, 2, 3, 4])
>>>
>>> c = C()
>>> 2 in c
True
>>> 5 in c
False
```
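Note that when a class defines both, `__contains__` takes precedence and `__iter__` is never consulted by `in`; a small sketch of my own to confirm:
```python
class Both:
    def __iter__(self):
        return iter([1, 2, 3])

    def __contains__(self, item):
        return item == 99

b = Both()
print(99 in b)   # True: answered by __contains__
print(1 in b)    # False, even though iterating would yield 1
```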
This is explained in [the documentation -](https://docs.python.org/2/reference/expressions.html#in)
> For user-defined classes which define the `__contains__()` method, `x in y` is true if and only if `y.__contains__(x)` is true.
>
> For user-defined classes which do not define `__contains__()` but do define `__iter__()`, `x in y` is true if some value `z` with `x == z` is produced while iterating over `y`. If an exception is raised during the iteration, it is as if `in` raised that exception. |
Trie tree match performance in word search | 33,327,602 | 17 | 2015-10-25T08:20:16Z | 33,381,104 | 8 | 2015-10-28T00:53:12Z | [
"python",
"algorithm"
] | I have been debugging a few similar solutions, but I'm wondering if we could improve the Trie tree to match partial prefixes (the `search` method of class `Trie` currently only checks whether a full word is matched), which could let us return from a wrong path earlier and improve performance further. I am not very confident in the idea, so I'm seeking advice early.
I'm posting one of the similar solutions. Thanks.
---
Given a 2D board and a list of words from the dictionary, find all words in the board.
Each word must be constructed from letters of sequentially adjacent cell, where "adjacent" cells are those horizontally or vertically neighboring. The same letter cell may not be used more than once in a word.
For example,
Given words `= ["oath","pea","eat","rain"]` and board =
```
[
  ['o','a','a','n'],
  ['e','t','a','e'],
  ['i','h','k','r'],
  ['i','f','l','v']
]
```
Return ["eat","oath"]
```
import collections
class TrieNode():
    def __init__(self):
        self.children = collections.defaultdict(TrieNode)
        self.isWord = False
class Trie():
    def __init__(self):
        self.root = TrieNode()
    def insert(self, word):
        node = self.root
        for w in word:
            node = node.children[w]
        node.isWord = True
    def search(self, word):
        node = self.root
        for w in word:
            node = node.children.get(w)
            if not node:
                return False
        return node.isWord
class Solution(object):
    def findWords(self, board, words):
        res = []
        trie = Trie()
        node = trie.root
        for w in words:
            trie.insert(w)
        for i in xrange(len(board)):
            for j in xrange(len(board[0])):
                self.dfs(board, node, i, j, "", res)
        return res
    def dfs(self, board, node, i, j, path, res):
        if node.isWord:
            res.append(path)
            node.isWord = False
        if i < 0 or i >= len(board) or j < 0 or j >= len(board[0]):
            return
        tmp = board[i][j]
        node = node.children.get(tmp)
        if not node:
            return
        board[i][j] = "#"
        self.dfs(board, node, i+1, j, path+tmp, res)
        self.dfs(board, node, i-1, j, path+tmp, res)
        self.dfs(board, node, i, j-1, path+tmp, res)
        self.dfs(board, node, i, j+1, path+tmp, res)
        board[i][j] = tmp
``` | I don't see anything wrong with the `Trie` part of your code.
But I think the trie's original design already returns early when it detects a mismatch.
Actually, I usually just use a regular `dict` as a trie instead of `defaultdict + TrieNode`, to avoid over-complicating the problem. You just need to set a `"#"` key if a certain node is a valid word. And, during insertion, just do `node[w] = {}`.
If you do this, your code can be significantly simplified and early returning will be straightforward, as you will not have a "wrong" key in a node at all!
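For instance, a minimal sketch of that idea (the helper names here are my own, just for illustration):
```python
def trie_insert(trie, word):
    node = trie
    for ch in word:
        node = node.setdefault(ch, {})
    node["#"] = {}   # end-of-word marker

def trie_has_prefix(trie, prefix):
    node = trie
    for ch in prefix:
        if ch not in node:   # early return at the first mismatched character
            return False
        node = node[ch]
    return True

trie = {}
trie_insert(trie, "oath")
print(trie_has_prefix(trie, "oa"))   # True
print(trie_has_prefix(trie, "ox"))   # False
```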
For example, a simple trie containing only `'ab'` will look like: `{'a': {'b': {'#': {}}}}`. So when you search for `'cd'`, as soon as you realize there is no key `'c'` in the outermost dict, you can return `False`. This implementation is similar to yours, but I believe it's easier to understand. |
django type object Http404 has no attribute get | 33,328,656 | 3 | 2015-10-25T10:32:44Z | 33,328,657 | 10 | 2015-10-25T10:32:44Z | [
"python",
"django"
] | I have this code:
```
if not selected_organization in request.user.organizations.all():
    return Http404
```
while returning the http 404 I got this :
```
type object 'Http404' has no attribute 'get'
``` | Took me a while to figure out,
eventually I had to `raise` the `Http404` and not `return` it! |
Plots are not visible using matplotlib plt.show() | 33,329,921 | 5 | 2015-10-25T12:54:00Z | 33,330,027 | 13 | 2015-10-25T13:05:17Z | [
"python",
"python-3.x",
"ubuntu",
"matplotlib",
"plot"
] | I'm really new in Python and Linux and I need help, I tried to use matplotlib for showing a simple plot in the following way:
```
from matplotlib import pyplot as plt
plt.plot([5,6,7,8], [7,3,8,3])
plt.show()
```
But, when I run `python3 test.py`, I get the following output:
```
/usr/local/lib/python3.4/dist-packages/matplotlib/backends/backend_gtk3agg.py:18: UserWarning: The Gtk3Agg backend is known to not work on Python 3.x with pycairo. Try installing cairocffi.
"The Gtk3Agg backend is known to not work on Python 3.x with pycairo. "
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/matplotlib/backends/backend_gtk3agg.py", line 69, in on_draw_event
buf, cairo.FORMAT_ARGB32, width, height)
NotImplementedError: Surface.create_for_data: Not Implemented yet.
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/matplotlib/backends/backend_gtk3agg.py", line 69, in on_draw_event
buf, cairo.FORMAT_ARGB32, width, height)
NotImplementedError: Surface.create_for_data: Not Implemented yet.
/usr/local/lib/python3.4/dist-packages/matplotlib/backends/backend_gtk3.py:215: Warning: Source ID 7 was not found when attempting to remove it
GLib.source_remove(self._idle_event_id)
```
and an empty figure without white canvas:
[](http://i.stack.imgur.com/rGEaq.jpg)
What is wrong? How can I fix it? | As you can see:
```
"The Gtk3Agg backend is known to not work on Python 3.x with pycairo."
```
And so the ***suggestion*** presented is:
```
Try installing cairocffi.
```
The [**installation guide**](https://pythonhosted.org/cairocffi/overview.html) for `cairocffi` is pretty straight-forward. If the dependencies (see 1 below) are met it is as simple as:
```
pip install cairocffi
```
---
1) The dependencies for `Python 3.x` ***should*** logically be:
```
sudo apt-get install python3-dev
sudo apt-get install libffi-dev
``` |
Force compiler when running python setup.py install | 33,331,563 | 15 | 2015-10-25T15:41:06Z | 33,373,672 | 9 | 2015-10-27T16:41:34Z | [
"python",
"cython",
"setuptools",
"anaconda"
] | Is there a way to explicitly force the compiler for building Cython extensions when running `python setup.py install`? Where `setup.py` is of the form:
```
import os.path
import numpy as np
from setuptools import setup, find_packages, Extension
from Cython.Distutils import build_ext
setup(name='test',
      packages=find_packages(),
      cmdclass={'build_ext': build_ext},
      ext_modules=[Extension("test.func", ["test/func.pyx"])],
      include_dirs=[np.get_include()]
      )
```
I'm trying to install a package on Windows 8.1 x64 using Anaconda 3.16, Python 3.4, setuptools 18, Numpy 1.9 and Cython 0.24. The deployment script is adapted from the Cython [wiki](https://github.com/cython/cython/wiki/CythonExtensionsOnWindows#using-windows-sdk-cc-compiler-works-for-all-python-versions) and [this](https://stackoverflow.com/a/13751649/1791279) Stack Overflow answer.
**Makefile.bat**
```
:: create and activate a virtual environement with conda
conda create --yes -n test_env cython setuptools=18 pywin32 libpython numpy=1.9 python=3
call activate test_env
:: activate the MS SDK compiler as explained in the Cython wiki
cd C:\Program Files\Microsoft SDKs\Windows\v7.1\
set MSSdk=1
set DISTUTILS_USE_SDK=1
@call .\Bin\SetEnv /x64 /release
cd C:\test
python setup.py install
```
The problem is that in this case `setup.py install` still used the mingw compiler included with conda instead of the MS Windows SDK 7.1 one.
* So the `DISTUTILS_USE_SDK=1` and `MSSdk=1` don't seem to have an impact on the build. I'm not sure if activating the MS SDK from within a conda virtualenv might be an issue here.
* Running `python setup.py build_ext --compiler=msvc` correctly builds the extension with the MS compiler, but subsequently running the `setup.py install`, recompiles it with mingw again. Same applies to `python setup.py build --compiler=msvc`.
* Also tried running `%COMSPEC% /E:ON /V:ON /K "%PROGRAMFILES%\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd"` as discussed in the answer linked above, but for me this produces a new terminal prompt, coloured in yellow, and stops the install process.
Is there a way of forcing the compiler for building this package, for instance, by editing the `setup.py`? | You can provide (default) command line arguments for distutils in a separate file called `setup.cfg` (placed parallel to your `setup.py`). See the [docs](https://docs.python.org/3.4/distutils/configfile.html) for more information. To set the compiler use something like:
```
[build]
compiler=msvc
```
Now calling `python setup.py build` is equivalent to calling `python setup.py build --compiler=msvc`. (You can still direct distutils to use an other complier by calling `python setup.py build --compiler=someothercompiler`)
Now you have successfully directed distutils to use **a** msvc compiler. Unfortunately there is no option to tell it **which** msvc compiler to use. Basically there are two options:
**One:** Do nothing and distutils will try to locate `vcvarsall.bat` and use that to setup an environment. `vcvarsall.bat` (and the compiler it sets the environment up for) are part of Visual Studio, so you have to have installed that for it to work.
**Two:** Install the Windows SDK and tell distutils to use that. Be aware that the name `DISTUTILS_USE_SDK` is rather misleading (at least in my opinion). It does NOT in fact tell distutils to use the SDK (and its `setenv.bat`) to set up an environment; rather it means that distutils should assume the environment has already been set up. That is why you have to use some kind of `Makefile.bat` as you have shown in the OP.
**Side Note:** The specific version of VisualStudio or the Windows SDK depends on the targeted python version. |
weird behaviour of dict.keys() in python | 33,342,345 | 2 | 2015-10-26T09:28:49Z | 33,342,379 | 7 | 2015-10-26T09:31:17Z | [
"python",
"python-3.x"
] | I have this code in my script to perform Huffman encoding:
```
def huffmanEncoding(freqDict):
    for key in freqDict.keys():
        freqDict[HuffmanTree(value=key)] = freqDict.pop(key)
    ...
```
What I want to do is to replace each of the keys in the dictionary with a tree node whose value is the original key. The HuffmanTree class works properly.
However, this code has very weird behavior. With debugging tools I found out that sometimes some of the keys were processed twice or more times, which means they are first transformed into a tree node, then transformed again, using its current tree node as the value for the new tree node.
I replaced my code with the one shown following:
```
def huffmanEncoding(freqDict):
    keys = list(freqDict.keys())
    for key in keys:
        freqDict[HuffmanTree(value=key)] = freqDict.pop(key)
```
and now it works properly. But can someone explain why my first version has such weird behavior? If I want to change all the keys in a dictionary, should I always use the second version? | You are adding keys to and removing keys from the dictionary *while iterating*. That means that the [order of the keys can also change](https://stackoverflow.com/questions/15479928/why-is-the-order-in-python-dictionaries-and-sets-arbitrary/15479974#15479974).
Normally, Python would raise an exception when you do this, but because you are both deleting and adding one key each iteration, the internal checks that would raise that exception fail to detect that you are making changes while iterating.
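To illustrate with Python 3, where the check fires as soon as the size changes (the pop-and-add in your loop keeps the size constant, which is exactly why the check misses it):
```python
d = {1: "a"}
error = None
try:
    for key in d:
        d[key + 10] = "b"   # grows the dict mid-iteration
except RuntimeError as exc:
    error = exc
print(error)   # dictionary changed size during iteration
```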
Iterate over a *copy* of the keys view:
```
def huffmanEncoding(freqDict):
    for key in list(freqDict):
        freqDict[HuffmanTree(value=key)] = freqDict.pop(key)
```
The `list()` call copies all the keys to a separate list object. Rather than you iterating over a [live view of the dictionary keys](https://docs.python.org/2/library/stdtypes.html#dictionary-view-objects) you iterate over a static unchanging list. Popping keys from the original dict will not also remove those keys from the list copy, nor does setting new keys result in the list gaining more keys. That makes the loop entirely stable. |
Add a count field to a django rest framework serializer | 33,345,089 | 5 | 2015-10-26T11:51:23Z | 34,675,543 | 8 | 2016-01-08T11:05:25Z | [
"python",
"django",
"serialization"
] | I am serializing the built-in django Group model and would like to add a field to the serializer that counts the number of users in the group. I am currently using the following serializer:
```
class GroupSerializer(serializers.ModelSerializer):
    class Meta:
        model = Group
        fields = ('id', 'name', 'user_set')
```
This returns the group ID and name and an array of users (user IDs) in the group:
```
{
    "id": 3,
    "name": "Test1",
    "user_set": [
        9
    ]
}
```
What I would like instead as output is something like:
```
{
    "id": 3,
    "name": "Test1",
    "user_count": 1
}
```
Any help would be appreciated. Thanks. | A bit late, but here's a short answer. Try this:
```
user_count = serializers.IntegerField(
    source='user_set.count',
    read_only=True
)
``` |
SyntaxError with passing **kwargs and trailing comma | 33,350,454 | 11 | 2015-10-26T16:10:31Z | 33,350,635 | 9 | 2015-10-26T16:19:15Z | [
"python",
"python-3.x",
"syntax-error",
"python-3.4"
] | I wonder why this is a SyntaxError in Python 3.4:
```
some_function(
    filename = "foobar.c",
    **kwargs,
)
```
It works when removing the trailing comma after `**kwargs`. | As pointed out by vaultah (who for some reason didn't bother to post an answer), this was [reported on the issue tracker](http://bugs.python.org/issue9232) and has been changed since. The syntax will work fine starting with Python 3.6.
> To be explicit, yes, I want to allow trailing comma even after `*args` or `**kwds`. And that's what the patch does. —[Guido van Rossum](http://bugs.python.org/issue9232#msg248449) |
SyntaxError with passing **kwargs and trailing comma | 33,350,454 | 11 | 2015-10-26T16:10:31Z | 33,350,709 | 8 | 2015-10-26T16:22:01Z | [
"python",
"python-3.x",
"syntax-error",
"python-3.4"
] | I wonder why this is a SyntaxError in Python 3.4:
```
some_function(
    filename = "foobar.c",
    **kwargs,
)
```
It works when removing the trailing comma after `**kwargs`. | The reason it was originally disallowed is that `**kwargs` was the last allowed item in an argument list -- nothing could come after it; however, a trailing `,` suggests that more could follow.
That has changed so that we can now call with multiple keyword dicts:
```
some_func(a, b, **c, **d,)
```
For consistency's sake, trailing commas are now supported in both definitions and callings of functions. This is really useful when one has either several arguments, or a few long arguments, and so the logical line is split across several physical lines.
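For example (Python 3.5+ allows multiple `**` unpackings in a call, and 3.6+ allows the trailing comma; the function here is just for illustration):
```python
def report(**settings):
    return sorted(settings)

defaults = {"color": "blue"}
overrides = {"size": 10}

print(report(**defaults, **overrides,))   # ['color', 'size']
```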
The trailing commas are optional in both locations. |
Mysterious interaction between Python's slice bounds and "stride" | 33,352,169 | 13 | 2015-10-26T17:41:41Z | 33,352,237 | 7 | 2015-10-26T17:44:36Z | [
"python",
"list",
"python-2.7",
"slice"
] | I understand that given an iterable such as
```
>>> it = [1, 2, 3, 4, 5, 6, 7, 8, 9]
```
I can turn it into a list and slice off the ends at arbitrary points with, for example
```
>>> it[1:-2]
[2, 3, 4, 5, 6, 7]
```
or reverse it with
```
>>> it[::-1]
[9, 8, 7, 6, 5, 4, 3, 2, 1]
```
or combine the two with
```
>>> it[1:-2][::-1]
[7, 6, 5, 4, 3, 2]
```
However, trying to accomplish this in a single operation produces in some results that puzzle me:
```
>>> it[1:-2:-1]
[]
>>>> it[-1:2:-1]
[9, 8, 7, 6, 5, 4]
>>>> it[-2:1:-1]
[8, 7, 6, 5, 4, 3]
```
Only after much trial and error, do I get what I'm looking for:
```
>>> it[-3:0:-1]
[7, 6, 5, 4, 3, 2]
```
This makes my head hurt (and can't help readers of my code):
```
>>> it[-3:0:-1] == it[1:-2][::-1]
True
```
How can I make sense of this? Should I even be pondering such things?
---
FWIW, my code does a lot of truncating, reversing, and listifying of iterables, and I was looking for something that was faster and clearer (yes, don't laugh) than `list(reversed(it[1:-2]))`. | This is because in a slice like -
```
list[start:stop:step]
```
`start` is ***inclusive***: the resulting list starts at index `start`.
`stop` is ***exclusive***: the resulting list only contains elements up to index `stop - 1` (and not the element at `stop`).
So for your case `it[1:-2]` - the `1` is *inclusive*, meaning the slice result starts at index `1`, whereas the `-2` is *exclusive*, so the last element of the slice comes from index `-3`.
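Checking those rules against the question's list:
```python
it = [1, 2, 3, 4, 5, 6, 7, 8, 9]

print(it[1:-2])      # [2, 3, 4, 5, 6, 7]: index 1 included, index -2 excluded
print(it[-3:0:-1])   # [7, 6, 5, 4, 3, 2]: index -3 included, index 0 excluded
print(it[-3:0:-1] == it[1:-2][::-1])   # True
```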
Hence, if you want the reverse of that, you would have to do `it[-3:0:-1]` - only then is `-3` included in the sliced result, and the sliced result goes down to index `1`. |
Flask hangs when sending a post request to itself | 33,353,192 | 4 | 2015-10-26T18:38:23Z | 33,353,724 | 9 | 2015-10-26T19:11:11Z | [
"python",
"flask"
] | I'm trying to send a post request to my Flask app from one of its own views, but it hangs until I kill the server. If I do the request in JavaScript, it works fine. Why doesn't it work from the Python code?
```
from flask import Blueprint, render_template, abort, request, Response, session, url_for
from jinja2 import TemplateNotFound
from flask.ext.wtf import Form
from wtforms import BooleanField, TextField, PasswordField
import requests
login = Blueprint('login', __name__, template_folder='templates')
class LoginForm(Form):
    email = TextField('Email')
    password = PasswordField('Password')
@login.route('/login', methods=['GET', 'POST'])
def _login():
    form = LoginForm(request.form, csrf_enabled=False)
    if form.validate_on_submit():
        return requests.post(request.url_root + '/api/login', data={"test": True})
    return render_template('login.html', form=form)
``` | Flask's development server is single-threaded by default. It can only handle one request at a time. Making a request blocks until it receives the response. Your Flask code makes a request in the one thread, and then waits. There are no other threads to handle this second request. So the request never completes, and the original request waits forever.
Enable multiple threads or processes on the dev server to avoid the deadlock and fix the immediate problem.
```
app.run(threaded=True)
# or
app.run(processes=2)
```
However, making a full HTTP request to the app from within the app should never be necessary and indicates a deeper design issue. For example, observe that the internal request will not have access to the session on the client's browser. Extract the common code and call it internally, rather than making a new request. |
Ansible: best practice for maintaining list of sudoers | 33,359,404 | 3 | 2015-10-27T03:20:33Z | 33,362,805 | 8 | 2015-10-27T08:16:04Z | [
"python",
"unix",
"ansible",
"user-management"
] | In the [documentation](http://docs.ansible.com/ansible/lineinfile_module.html), there is an example of using the `lineinfile` module to edit `/etc/sudoers`.
```
- lineinfile: "dest=/etc/sudoers state=present regexp='^%wheel' line='%wheel ALL=(ALL) NOPASSWD: ALL'"
```
Feels a bit hackish.
I assumed there would be something in the `user` module to handle this but there doesn't appear to be any options.
What are the best practices for adding users to and removing them from `/etc/sudoers`? | That line isn't actually adding any users to sudoers; it merely makes sure that the `wheel` group can have passwordless sudo for all commands.
As for adding users to `/etc/sudoers`, this is best done by adding users to the necessary groups and then giving those groups the relevant sudo access. This holds true even when you aren't using Ansible.
The [user module](http://docs.ansible.com/ansible/user_module.html) allows you to specify an exclusive list of groups or to simply append the specified groups to the ones the user already has. This is naturally idempotent, as a user cannot be defined to be in a group multiple times.
An example play might look something like this:
```
- hosts: all
  vars:
    sudoers:
      - user1
      - user2
      - user3
  tasks:
    - name: Make sure we have a 'wheel' group
      group:
        name: wheel
        state: present
    - name: Allow 'wheel' group to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        state: present
        regexp: '^%wheel'
        line: '%wheel ALL=(ALL) NOPASSWD: ALL'
    - name: Add sudoers users to wheel group
      user:
        name: "{{ item }}"
        groups: wheel
        append: yes
      with_items: "{{ sudoers }}"
``` |
Random number between 0 and 1 in python | 33,359,740 | 3 | 2015-10-27T03:59:10Z | 33,359,801 | 8 | 2015-10-27T04:06:00Z | [
"python",
"random"
] | I want a random number between 0 and 1, like 0.3452.
I used `random.randrange(0,1)`
but it is always 0 for me! What should I do? | You can use `random.uniform`:
```
random.uniform(0, 1)
``` |
How to show warnings in py.test | 33,363,433 | 10 | 2015-10-27T08:53:27Z | 33,370,276 | 15 | 2015-10-27T14:11:14Z | [
"python",
"django",
"testing",
"py.test",
"pytest-django"
] | I just ran [`py.test`](http://pytest.org) on my code and got the following output:
```
================== 6 passed, 2 pytest-warnings in 40.79 seconds =======================
```
However, I cannot see what `py.test` would like to warn me about. How can I turn on warning output to the console?
`py.test --help` offers me the `--strict` flag:
> --strict run pytest in strict mode, warnings become
> errors.
However I just want to see the output, not make my tests fail.
I checked [pytest.org](https://pytest.org/latest/recwarn.html) and [this question](http://stackoverflow.com/questions/18920576/check-if-any-tests-raise-a-deprecation-warning-with-pytest) but they are only concerned with asserting warnings in Python, not showing warnings generated on the command line. | In this case, pytest-warnings are warnings which were generated by pytest and/or its plugins. These warnings were not generated by your code. In order to list them in the report, you will need to use the option `-r w`. Here is the relevant portion of `py.test --help`:
```
-r chars show extra test summary info as specified by chars (f)ailed,
(E)error, (s)skipped, (x)failed, (X)passed
(w)pytest-warnings (a)all.
```
This will show the warnings in the report; the pytest-warning summary lists which pytest plugins use deprecated arguments (in my case, below):
```
...
================================ pytest-warning summary ================================
WI1 /Projects/.tox/py27/lib/python2.7/site-packages/pytest_timeout.py:68 'pytest_runtest_protocol' hook uses deprecated __multicall__ argument
WI1 /Projects/.tox/py27/lib/python2.7/site-packages/pytest_capturelog.py:171 'pytest_runtest_makereport' hook uses deprecated __multicall__ argument
...
``` |
String character identity paradox | 33,366,403 | 11 | 2015-10-27T11:09:54Z | 33,368,485 | 7 | 2015-10-27T12:52:12Z | [
"python",
"string",
"python-internals"
] | I'm completely stuck with this
```
>>> s = chr(8263)
>>> x = s[0]
>>> x is s[0]
False
```
How is this possible? Does this mean that accessing a string character by indexing creates a new instance of the same character? Let's experiment:
```
>>> L = [s[0] for _ in range(1000)]
>>> len(set(L))
1
>>> ids = map(id, L)
>>> len(set(ids))
1000
>>>
```
Yikes what a waste of bytes ;) Or does it mean that `str.__getitem__` has a hidden feature? Can somebody explain?
But this is not the end of my surprise:
```
>>> s = chr(8263)
>>> t = s
>>> print(t is s, id(t) == id(s))
True True
```
This is clear: `t` is an alias for `s`, so they represent the same object and identities coincide. But again, how the following is possible:
```
>>> print(t[0] is s[0])
False
```
`s` and `t` are the same object so what?
But worse:
```
>>> print(id(t[0]) == id(s[0]))
True
```
`t[0]` and `s[0]` have not been garbage collected, are considered distinct objects by the `is` operator, and yet have identical ids? Can somebody explain? | There are two points to make here.
First, Python does indeed create a new character with the `__getitem__` call, but only if that character has ordinal value greater than 256.
Observe:
```
>>> s = chr(256)
>>> s[0] is s
True
>>> t = chr(257)
>>> t[0] is t
False
```
This is because internally, the compiled [`getitem`](https://hg.python.org/cpython/file/d8f48717b74e/Objects/unicodeobject.c#l10987) function checks the ordinal value of the single character and calls the [`get_latin1_char`](https://hg.python.org/cpython/file/d8f48717b74e/Objects/unicodeobject.c#l1725) if that value is 256 or less. This allows some single-character strings to be shared. Otherwise, a new unicode object is created.
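That boundary is easy to probe (a quick extra experiment of mine; the exact cutoff is a CPython implementation detail):

```python
# Characters in the cached latin-1 range come back as the very same object
print(all(chr(i)[0] is chr(i) for i in range(256)))   # True on CPython

# Above that range, indexing builds a brand-new string object each time
print(chr(1000)[0] is chr(1000))                      # False on CPython
```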
The second issue concerns garbage collection and shows that the interpreter can reuse memory addresses very quickly. When you write:
```
>>> s = t # chr(257)
>>> t[0] is s[0]
False
```
Python creates two new single character strings and then compares their memory addresses. These are different (we have different objects as per the explanation above) so comparing the objects with `is` returns False.
On the other hand, we can have the seemingly paradoxical situation that:
```
>>> id(t[0]) == id(s[0])
True
```
because the interpreter quickly reuses the memory address of `t[0]` when it creates the new string `s[0]` at a later moment in time.
If you examine the bytecode this line produces (e.g. with `dis` - see below), you see that the integer address for each side is built in turn (a new string object is created and then `id` is called on it). The references to the object `t[0]` drop to zero as soon as `id(t[0])` is returned (we're comparing integers now, not the object) and so `s[0]` can reuse the same memory address when it is created afterwards.
You can't rely on this to always be the case however.
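You can see the reuse effect disappear by keeping both single-character strings alive at the same time (a small experiment of my own); their addresses then can no longer coincide:

```python
s = chr(257)            # ordinal > 256, so indexing creates new objects
t = s

a = t[0]                # keep this object alive...
b = s[0]                # ...so b cannot reuse a's memory address
print(a == b)           # True  -- equal strings
print(a is b)           # False -- distinct objects
print(id(a) == id(b))   # False -- both alive, so addresses differ
```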
---
For completeness, here is the disassembled bytecode for the line `id(t[0]) == id(s[0])` which I've annotated. You can see that the lifetime of `t[0]` ends before `s[0]` is created (there are no references to it) hence its memory can be reused.
```
2 0 LOAD_GLOBAL 0 (id)
3 LOAD_GLOBAL 1 (t)
6 LOAD_CONST 1 (0)
9 BINARY_SUBSCR # t[0] is created
10 CALL_FUNCTION 1 # id(t[0]) is computed...
# ...lifetime of string t[0] over
13 LOAD_GLOBAL 0 (id)
16 LOAD_GLOBAL 2 (s)
19 LOAD_CONST 1 (0)
22 BINARY_SUBSCR # s[0] is created...
# ...free to reuse t[0] memory
23 CALL_FUNCTION 1 # id(s[0]) is computed
26 COMPARE_OP 2 (==) # the two ids are compared
29 RETURN_VALUE
``` |
Split dataframe into relatively even chunks according to length | 33,367,142 | 3 | 2015-10-27T11:44:36Z | 33,368,088 | 8 | 2015-10-27T12:31:35Z | [
"python",
"pandas"
] | I have to create a function which would split a provided dataframe into chunks of a needed size. For instance, if the dataframe contains 1111 rows, I want to be able to specify a chunk size of 400 rows and get three smaller dataframes with sizes of 400, 400 and 311. Is there a convenience function to do the job? What would be the best way to store and iterate over the sliced dataframes?
Example DataFrame
```
import numpy as np
import pandas as pd
test = pd.concat([pd.Series(np.random.rand(1111)), pd.Series(np.random.rand(1111))], axis = 1)
``` | You can use `.groupby` as below.
```
for g, df in test.groupby(np.arange(len(test)) // 400):
print(df.shape)
# (400, 2)
# (400, 2)
# (311, 2)
``` |
What is the easiest way to install BLAS and LAPACK for scipy? | 33,368,261 | 4 | 2015-10-27T12:40:16Z | 33,369,271 | 7 | 2015-10-27T13:27:29Z | [
"python",
"numpy"
] | I would like to run a programme that someone else has prepared and it includes scipy. I have tried to install scipy with
```
pip install scipy
```
but it gives me a long error. I know there are ways with Anaconda and Canopy but I think these are long ways. I would like to have a short way. I have also tried
```
G:\determinator_Oskar>pip install scipy
Collecting scipy
Using cached scipy-0.16.1.tar.gz
Building wheels for collected packages: scipy
Running setup.py bdist_wheel for scipy
Complete output from command g:\myve\scripts\python.exe -c "import setuptools;
__file__='e:\\temp_n~1\\pip-build-1xigxu\\scipy\\setup.py';exec(compile(open(__f
ile__).read().replace('\r\n', '\n'), __file__, 'exec'))" bdist_wheel -d e:\temp_
n~1\tmp07__zrpip-wheel-:
lapack_opt_info:
openblas_lapack_info:
libraries openblas not found in ['g:\\myve\\lib', 'C:\\']
NOT AVAILABLE
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['g:\\myve\\lib', 'C:\\']
NOT AVAILABLE
NOT AVAILABLE
atlas_3_10_threads_info:
Setting PTATLAS=ATLAS
libraries tatlas,tatlas not found in g:\myve\lib
libraries lapack_atlas not found in g:\myve\lib
libraries tatlas,tatlas not found in C:\
libraries lapack_atlas not found in C:\
<class 'numpy.distutils.system_info.atlas_3_10_threads_info'>
NOT AVAILABLE
atlas_3_10_info:
libraries satlas,satlas not found in g:\myve\lib
libraries lapack_atlas not found in g:\myve\lib
libraries satlas,satlas not found in C:\
libraries lapack_atlas not found in C:\
<class 'numpy.distutils.system_info.atlas_3_10_info'>
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in g:\myve\lib
libraries lapack_atlas not found in g:\myve\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
<class 'numpy.distutils.system_info.atlas_threads_info'>
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in g:\myve\lib
libraries lapack_atlas not found in g:\myve\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
<class 'numpy.distutils.system_info.atlas_info'>
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['g:\\myve\\lib', 'C:\\']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
g:\myve\lib\site-packages\numpy\distutils\system_info.py:1552: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
g:\myve\lib\site-packages\numpy\distutils\system_info.py:1563: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
g:\myve\lib\site-packages\numpy\distutils\system_info.py:1566: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
Running from scipy source directory.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "e:\temp_n~1\pip-build-1xigxu\scipy\setup.py", line 253, in <module>
setup_package()
File "e:\temp_n~1\pip-build-1xigxu\scipy\setup.py", line 250, in setup_packa
ge
setup(**metadata)
File "g:\myve\lib\site-packages\numpy\distutils\core.py", line 135, in setup
config = configuration()
File "e:\temp_n~1\pip-build-1xigxu\scipy\setup.py", line 175, in configurati
on
config.add_subpackage('scipy')
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 1001, in
add_subpackage
caller_level = 2)
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 970, in
get_subpackage
caller_level = caller_level + 1)
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 907, in
_get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "scipy\setup.py", line 15, in configuration
config.add_subpackage('linalg')
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 1001, in
add_subpackage
caller_level = 2)
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 970, in
get_subpackage
caller_level = caller_level + 1)
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 907, in
_get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "scipy\linalg\setup.py", line 20, in configuration
raise NotFoundError('no lapack/blas resources found')
numpy.distutils.system_info.NotFoundError: no lapack/blas resources found
----------------------------------------
Failed building wheel for scipy
Failed to build scipy
Installing collected packages: scipy
Running setup.py install for scipy
Complete output from command g:\myve\scripts\python.exe -c "import setuptool
s, tokenize;__file__='e:\\temp_n~1\\pip-build-1xigxu\\scipy\\setup.py';exec(comp
ile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __fi
le__, 'exec'))" install --record e:\temp_n~1\pip-3hncqr-record\install-record.tx
t --single-version-externally-managed --compile --install-headers g:\myve\includ
e\site\python2.7\scipy:
lapack_opt_info:
openblas_lapack_info:
libraries openblas not found in ['g:\\myve\\lib', 'C:\\']
NOT AVAILABLE
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['g:\\myve\\lib', 'C:\\']
NOT AVAILABLE
NOT AVAILABLE
atlas_3_10_threads_info:
Setting PTATLAS=ATLAS
libraries tatlas,tatlas not found in g:\myve\lib
libraries lapack_atlas not found in g:\myve\lib
libraries tatlas,tatlas not found in C:\
libraries lapack_atlas not found in C:\
<class 'numpy.distutils.system_info.atlas_3_10_threads_info'>
NOT AVAILABLE
atlas_3_10_info:
libraries satlas,satlas not found in g:\myve\lib
libraries lapack_atlas not found in g:\myve\lib
libraries satlas,satlas not found in C:\
libraries lapack_atlas not found in C:\
<class 'numpy.distutils.system_info.atlas_3_10_info'>
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in g:\myve\lib
libraries lapack_atlas not found in g:\myve\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
<class 'numpy.distutils.system_info.atlas_threads_info'>
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in g:\myve\lib
libraries lapack_atlas not found in g:\myve\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
<class 'numpy.distutils.system_info.atlas_info'>
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['g:\\myve\\lib', 'C:\\']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
g:\myve\lib\site-packages\numpy\distutils\system_info.py:1552: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
g:\myve\lib\site-packages\numpy\distutils\system_info.py:1563: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
g:\myve\lib\site-packages\numpy\distutils\system_info.py:1566: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
Running from scipy source directory.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "e:\temp_n~1\pip-build-1xigxu\scipy\setup.py", line 253, in <module>
setup_package()
File "e:\temp_n~1\pip-build-1xigxu\scipy\setup.py", line 250, in setup_pac
kage
setup(**metadata)
File "g:\myve\lib\site-packages\numpy\distutils\core.py", line 135, in set
up
config = configuration()
File "e:\temp_n~1\pip-build-1xigxu\scipy\setup.py", line 175, in configura
tion
config.add_subpackage('scipy')
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 1001,
in add_subpackage
caller_level = 2)
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 970, i
n get_subpackage
caller_level = caller_level + 1)
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 907, i
n _get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "scipy\setup.py", line 15, in configuration
config.add_subpackage('linalg')
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 1001,
in add_subpackage
caller_level = 2)
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 970, i
n get_subpackage
caller_level = caller_level + 1)
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 907, i
n _get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "scipy\linalg\setup.py", line 20, in configuration
raise NotFoundError('no lapack/blas resources found')
numpy.distutils.system_info.NotFoundError: no lapack/blas resources found
----------------------------------------
Command "g:\myve\scripts\python.exe -c "import setuptools, tokenize;__file__='e:
\\temp_n~1\\pip-build-1xigxu\\scipy\\setup.py';exec(compile(getattr(tokenize, 'o
pen', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install
--record e:\temp_n~1\pip-3hncqr-record\install-record.txt --single-version-exter
nally-managed --compile --install-headers g:\myve\include\site\python2.7\scipy"
failed with error code 1 in e:\temp_n~1\pip-build-1xigxu\scipy
```
I have also tried
```
pip install lapack
```
with this result
```
Collecting lapack
Could not find a version that satisfies the requirement lapack (from versions
)
No matching distribution found for lapack
```
I have also tried
```
pip install blas
```
with similar results
```
G:\determinator_Oskar>pip install blas
Collecting blas
Could not find a version that satisfies the requirement blas (from versions: )
No matching distribution found for blas
```
Why does scipy get so complicated? | The [SciPy installation page](http://www.scipy.org/install.html) already recommends several ways of installing Python with SciPy included, such as [WinPython](http://winpython.github.io/).
Another way is to use [wheels](https://pip.pypa.io/en/latest/user_guide/#installing-from-wheels) (a built-package format):
```
pip install SomePackage-1.0-py2.py3-none-any.whl
```
The wheel packages you can find on: <http://www.lfd.uci.edu/~gohlke/pythonlibs/>
For SciPy you need:
* the [NumPy wheel packages](http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy)
* and the [SciPy wheel packages](http://www.lfd.uci.edu/~gohlke/pythonlibs/#scipy) |
What is the easiest way to install BLAS and LAPACK for scipy? | 33,368,261 | 4 | 2015-10-27T12:40:16Z | 33,369,334 | 9 | 2015-10-27T13:30:01Z | [
"python",
"numpy"
] | I would like to run a programme that someone else has prepared and it includes scipy. I have tried to install scipy with
```
pip install scipy
```
but it gives me a long error. I know there are ways with Anaconda and Canopy but I think these are long ways. I would like to have a short way. I have also tried
```
G:\determinator_Oskar>pip install scipy
Collecting scipy
Using cached scipy-0.16.1.tar.gz
Building wheels for collected packages: scipy
Running setup.py bdist_wheel for scipy
Complete output from command g:\myve\scripts\python.exe -c "import setuptools;
__file__='e:\\temp_n~1\\pip-build-1xigxu\\scipy\\setup.py';exec(compile(open(__f
ile__).read().replace('\r\n', '\n'), __file__, 'exec'))" bdist_wheel -d e:\temp_
n~1\tmp07__zrpip-wheel-:
lapack_opt_info:
openblas_lapack_info:
libraries openblas not found in ['g:\\myve\\lib', 'C:\\']
NOT AVAILABLE
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['g:\\myve\\lib', 'C:\\']
NOT AVAILABLE
NOT AVAILABLE
atlas_3_10_threads_info:
Setting PTATLAS=ATLAS
libraries tatlas,tatlas not found in g:\myve\lib
libraries lapack_atlas not found in g:\myve\lib
libraries tatlas,tatlas not found in C:\
libraries lapack_atlas not found in C:\
<class 'numpy.distutils.system_info.atlas_3_10_threads_info'>
NOT AVAILABLE
atlas_3_10_info:
libraries satlas,satlas not found in g:\myve\lib
libraries lapack_atlas not found in g:\myve\lib
libraries satlas,satlas not found in C:\
libraries lapack_atlas not found in C:\
<class 'numpy.distutils.system_info.atlas_3_10_info'>
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in g:\myve\lib
libraries lapack_atlas not found in g:\myve\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
<class 'numpy.distutils.system_info.atlas_threads_info'>
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in g:\myve\lib
libraries lapack_atlas not found in g:\myve\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
<class 'numpy.distutils.system_info.atlas_info'>
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['g:\\myve\\lib', 'C:\\']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
g:\myve\lib\site-packages\numpy\distutils\system_info.py:1552: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
g:\myve\lib\site-packages\numpy\distutils\system_info.py:1563: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
g:\myve\lib\site-packages\numpy\distutils\system_info.py:1566: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
Running from scipy source directory.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "e:\temp_n~1\pip-build-1xigxu\scipy\setup.py", line 253, in <module>
setup_package()
File "e:\temp_n~1\pip-build-1xigxu\scipy\setup.py", line 250, in setup_packa
ge
setup(**metadata)
File "g:\myve\lib\site-packages\numpy\distutils\core.py", line 135, in setup
config = configuration()
File "e:\temp_n~1\pip-build-1xigxu\scipy\setup.py", line 175, in configurati
on
config.add_subpackage('scipy')
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 1001, in
add_subpackage
caller_level = 2)
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 970, in
get_subpackage
caller_level = caller_level + 1)
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 907, in
_get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "scipy\setup.py", line 15, in configuration
config.add_subpackage('linalg')
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 1001, in
add_subpackage
caller_level = 2)
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 970, in
get_subpackage
caller_level = caller_level + 1)
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 907, in
_get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "scipy\linalg\setup.py", line 20, in configuration
raise NotFoundError('no lapack/blas resources found')
numpy.distutils.system_info.NotFoundError: no lapack/blas resources found
----------------------------------------
Failed building wheel for scipy
Failed to build scipy
Installing collected packages: scipy
Running setup.py install for scipy
Complete output from command g:\myve\scripts\python.exe -c "import setuptool
s, tokenize;__file__='e:\\temp_n~1\\pip-build-1xigxu\\scipy\\setup.py';exec(comp
ile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __fi
le__, 'exec'))" install --record e:\temp_n~1\pip-3hncqr-record\install-record.tx
t --single-version-externally-managed --compile --install-headers g:\myve\includ
e\site\python2.7\scipy:
lapack_opt_info:
openblas_lapack_info:
libraries openblas not found in ['g:\\myve\\lib', 'C:\\']
NOT AVAILABLE
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['g:\\myve\\lib', 'C:\\']
NOT AVAILABLE
NOT AVAILABLE
atlas_3_10_threads_info:
Setting PTATLAS=ATLAS
libraries tatlas,tatlas not found in g:\myve\lib
libraries lapack_atlas not found in g:\myve\lib
libraries tatlas,tatlas not found in C:\
libraries lapack_atlas not found in C:\
<class 'numpy.distutils.system_info.atlas_3_10_threads_info'>
NOT AVAILABLE
atlas_3_10_info:
libraries satlas,satlas not found in g:\myve\lib
libraries lapack_atlas not found in g:\myve\lib
libraries satlas,satlas not found in C:\
libraries lapack_atlas not found in C:\
<class 'numpy.distutils.system_info.atlas_3_10_info'>
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in g:\myve\lib
libraries lapack_atlas not found in g:\myve\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
<class 'numpy.distutils.system_info.atlas_threads_info'>
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in g:\myve\lib
libraries lapack_atlas not found in g:\myve\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
<class 'numpy.distutils.system_info.atlas_info'>
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['g:\\myve\\lib', 'C:\\']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
g:\myve\lib\site-packages\numpy\distutils\system_info.py:1552: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
g:\myve\lib\site-packages\numpy\distutils\system_info.py:1563: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
g:\myve\lib\site-packages\numpy\distutils\system_info.py:1566: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
Running from scipy source directory.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "e:\temp_n~1\pip-build-1xigxu\scipy\setup.py", line 253, in <module>
setup_package()
File "e:\temp_n~1\pip-build-1xigxu\scipy\setup.py", line 250, in setup_pac
kage
setup(**metadata)
File "g:\myve\lib\site-packages\numpy\distutils\core.py", line 135, in set
up
config = configuration()
File "e:\temp_n~1\pip-build-1xigxu\scipy\setup.py", line 175, in configura
tion
config.add_subpackage('scipy')
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 1001,
in add_subpackage
caller_level = 2)
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 970, i
n get_subpackage
caller_level = caller_level + 1)
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 907, i
n _get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "scipy\setup.py", line 15, in configuration
config.add_subpackage('linalg')
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 1001,
in add_subpackage
caller_level = 2)
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 970, i
n get_subpackage
caller_level = caller_level + 1)
File "g:\myve\lib\site-packages\numpy\distutils\misc_util.py", line 907, i
n _get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "scipy\linalg\setup.py", line 20, in configuration
raise NotFoundError('no lapack/blas resources found')
numpy.distutils.system_info.NotFoundError: no lapack/blas resources found
----------------------------------------
Command "g:\myve\scripts\python.exe -c "import setuptools, tokenize;__file__='e:
\\temp_n~1\\pip-build-1xigxu\\scipy\\setup.py';exec(compile(getattr(tokenize, 'o
pen', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install
--record e:\temp_n~1\pip-3hncqr-record\install-record.txt --single-version-exter
nally-managed --compile --install-headers g:\myve\include\site\python2.7\scipy"
failed with error code 1 in e:\temp_n~1\pip-build-1xigxu\scipy
```
I have also tried
```
pip install lapack
```
with this result
```
Collecting lapack
Could not find a version that satisfies the requirement lapack (from versions
)
No matching distribution found for lapack
```
I have also tried
```
pip install blas
```
with similar results
```
G:\determinator_Oskar>pip install blas
Collecting blas
Could not find a version that satisfies the requirement blas (from versions: )
No matching distribution found for blas
```
Why does scipy get so complicated? | > "Why does scipy get so complicated?"
It gets so complicated because Python's package management system is built to track Python package dependencies, and SciPy and other scientific tools have dependencies beyond Python. [Wheels](http://pythonwheels.com/) fix part of the problem, but my experience is that tools like `pip`/`virtualenv` are just not sufficient for installing and managing a scientific Python stack.
If you want an easy way to get up and running with SciPy, I would highly suggest the [Anaconda distribution](https://www.continuum.io/downloads). It will give you everything you need for scientific computing in Python.
If you want a "short way" of doing this (I'm interpreting that as "I don't want to install a huge distribution"), you might try [miniconda](http://conda.pydata.org/miniconda.html) and then run `conda install scipy`. |
How to chose an AWS profile when using boto3 to connect to CloudFront | 33,378,422 | 5 | 2015-10-27T21:02:50Z | 33,395,432 | 11 | 2015-10-28T15:41:31Z | [
"python",
"amazon-web-services",
"boto3"
] | I am using the Boto 3 python library, and want to connect to AWS CloudFront.
I need to specify the correct AWS Profile (AWS Credentials), but looking at the official documentation, I see no way to specify it.
I am initializing the client using the code:
`client = boto3.client('cloudfront')`
However, this results in it using the default profile to connect.
I couldn't find a method where I can specify which profile to use. | I think the docs aren't wonderful at exposing how to do this. It has been a supported feature for some time, however, and there are some details in this [pull request](https://github.com/boto/boto3/pull/69). So there are three ways to do this:
1. Create a new session with the profile
```
dev = boto3.session.Session(profile_name='dev')
```
2. Change the profile of the default session in code
```
boto3.setup_default_session(profile_name='dev')
```
3. Change the profile of the default session with an environment variable
```
$ AWS_PROFILE=dev ipython
>>> import boto3
>>> s3dev = boto3.resource('s3')
``` |
Repeat each item in a list a number of times specified in another list | 33,382,474 | 2 | 2015-10-28T03:34:40Z | 33,382,503 | 8 | 2015-10-28T03:38:30Z | [
"python",
"list",
"repeat"
] | I have two lists, `x` and `y`:
```
>>> x = [2, 3, 4]
>>> y = [1, 2, 3]
```
I want to use these to create a new list. The new list will have each element in `x` repeated the number of times specified by the corresponding element in `y`. Hence, the desired output is
```
>>> new_list
[2, 3, 3, 4, 4, 4]
```
The order of the elements in `new_list` doesn't matter to me. It's also not crucial that it be a `list` -- any sequence type is fine.
What is the fastest, most efficient, most Pythonic way to achieve this? | 1. You can use list comprehension, like this
```
>>> x = [2, 3, 4]
>>> y = [1, 2, 3]
>>> [item for item, count in zip(x, y) for i in range(count)]
[2, 3, 3, 4, 4, 4]
```
Here, we `zip` `x` and `y` so that each element from `x` and its corresponding count from `y` are grouped as a single tuple. Then, we iterate `count` times to produce the same item repeatedly.
2. If your objects in `x` are immutables, then you can create `count` copies of the same and put them together in a list, like this
```
>>> [i for item, count in zip(x, y) for i in [item] * count]
[2, 3, 3, 4, 4, 4]
```
3. You can do the same lazily, with [`itertools.repeat`](https://docs.python.org/3/library/itertools.html#itertools.repeat), like this
```
>>> from itertools import chain, repeat
>>> chain.from_iterable((repeat(item, count) for item, count in zip(x,y)))
<itertools.chain object at 0x7fabe40b5320>
>>> list(chain.from_iterable((repeat(item, cnt) for item, cnt in zip(x,y))))
[2, 3, 3, 4, 4, 4]
```
Please note that the `chain` returns an iterable, not a list. So, if you don't want all the elements at once, you can get the items one by one from it. This will be highly memory efficient if the `count` is going to be a very big number, as we don't create the entire list in the memory immediately. We generate the values on-demand.
4. [Thanks ShadowRanger](http://stackoverflow.com/questions/33382474/repeat-each-item-in-a-list-a-number-of-times-specified-in-another-list/33382503?noredirect=1#comment54558615_33382503). You can actually apply `repeat` over `x` and `y` and get the result like this
```
>>> list(chain.from_iterable(map(repeat, x, y)))
[2, 3, 3, 4, 4, 4]
```
here, `map` function will apply the values from `x` and `y` to `repeat` one by one. So, the result of `map` will be
```
>>> list(map(repeat, x, y))
[repeat(2, 1), repeat(3, 2), repeat(4, 3)]
```
Now, we use `chain.from_iterable` to consume the values from each iterable in the sequence returned by `map`. |
Is there a javascript equivalent for the python pass statement that does nothing? | 33,383,840 | 5 | 2015-10-28T05:53:58Z | 33,383,865 | 10 | 2015-10-28T05:55:59Z | [
"javascript",
"python"
] | I am looking for a JavaScript equivalent for the Python `pass` statement that does nothing. Is there such a thing in JavaScript? | Python's `pass` mainly exists because in Python whitespace matters within a block. In JavaScript, the equivalent would be putting nothing within the block, i.e. `{}`.
Kalman filter (one-dimensional): several approaches? | 33,384,112 | 2 | 2015-10-28T06:13:21Z | 33,385,625 | 8 | 2015-10-28T07:54:21Z | [
"python",
"algorithm",
"filtering",
"kalman-filter"
] | I try to understand how the Kalman filter works and because the multi-dimensional variants were too confusing for the beginning I started off with a one-dimensional example.
I found 3 different sources explaining the scenario of a thermometer but all of these scenarios implement slightly different equations and I do not get the point.
I implemented **solution 2** but my kalman filter was not really working (it highly adapted itself to the measurements and not really considered the noise on it).
So, before I waste more time trying solution 1 or 3 (which I have just read until now): **Can someone supply a clean explanation and/or code example for a one dimensional Kalman filter?**
---
**Solution 1**
```
// x_est: current estimate; p: current estimate error;
// a: constant of the system; kg: kalman gain
// z: current observation;
// Predict
x_est = a * x_est
p = a * p * a
// Update
kg = p / (p + r)
x_est = x_est + kg * (z - x_est)
p = (1 - kg) * p
```
The author (here) only explains that we are changing only current values because there is no need for a thermometer to consider the last value.
So he simplified:
`p[k] = (1 - kg) * p[k-1]` **to** `p = (1 - kg) * p`
`x_est[k] = x_est[k-1] + kg * (z - x_est[k-1])` **to** `x_est = x_est + kg * (z - x_est)`
...and so on...
I do not understand why this is even possible. I thought one of the main parts of the Kalman filter is to consider whether the current observation *z* is useful or not (via the Kalman gain). So that for a high Kalman gain `kg * (z - x_est[k-1])` a "big chunk" of the delta `z - x_est[k-1]` is added to the new estimate. Isn't this whole thing getting pointless, if one always calculates the current values?
---
**Solution 2**
```
# q: process variance / process noise
# r: error in measurement
x_est = x_est
p = p + q;
k = p / (p + r);
x_est = x_est + k * (z - x_est);
p = (1 - k) * p;
```
This is pretty much the same, but the author did not even give an explanation why *x[k-1]* and *p[k-1]* can be altered to *x* and *p*.
---
**Solution 3**
```
# Q: process variance / process noise
# R: error in measurement
# prediction
x_est_kminus1[k] = x_est[k - 1]
p_kminus1[k] = p[k - 1] + Q
# update
kg[k] = p_kminus1[k] / (p_kminus1[k] + R)
x_est[k] = x_est_kminus1[k] + kg[k] * (z[k] - x_est_kminus1[k])
p[k] = (1 - kg[k]) * p_kminus1[k]
```
In this solution the author had two different lists for `x_est` (`x_est` itself and `x_est_kminus1`) and `p` (`p` itself and `p_kminus1`).
Are two lists needed because otherwise p[k] would be calculated twice (in the prediction and the update step)? | All of these solutions are special cases of the general equations and we'll have to see what's special about each one.
# Proper equations
Let's start with the proper general equations for the 1D case:
```
# prediction
x[k] = a * x[k - 1]
p[k] = a * p[k - 1] * a + q
# update
y = z - h * x[k]
kg = p * h / (h * p * h + r)
x[k] = x[k] + kg * y
p[k] = (1 - kg * h) * p[k]
```
* `x` - state
* `p` - error (covariance)
* `a` - state transition
* `q` - transition error
* `z` - measurement
* `h` - state-to-measurement transformation
* `y` - difference between what we would have expected to measure based on the prediction and what we actually measured
* `kg` - kalman gain
* `r` - measurement error
All of the parameters of the model (`a`, `q`, `r`, `h`) could in principle also have an index `k` and change as the system evolves. But in simple cases they can all be taken as constant.
# How the solutions differ from the proper equations
Only solution 1 implements an `a` and that's fine. `a` tells you how the state changes from one step to the other, if you assume the temperature to be stationary then `a == 1`, like in solution 2 and 3.
Solution 1 does not have a `q`. `q` is where we can give an estimate of the process error. Again, if the process is about the system being stationary (`a == 1`) then we could set `q = 0`.
None of your solutions have an `h`, which is the observation transformation (how to get from measurement to state). If you are estimating the temperature, based on measurements of the temperature then `h = 1`.
An example of when `h` may be different from 1 is if you were measuring something else than you are interested in estimating, e.g. using a measurement of humidity to estimate the temperature. Then `h` would be the **linear** transformation `T(humidity) = h * humidity`. I emphasize **linear** because the above are the linear Kalman filter equations and they only apply to linear (in the mathematical sense) systems.
# Current and previous step issue
The question of `k` vs. `k - 1` and having `x_est` and `x_est_kminus1` is purely a matter of implementation. In this regard all your solutions are the same.
Your thinking about `k` and `k - 1` in solution 1 is off. Only the prediction stage needs to think about the current and the previous step (since it's a prediction of the current state based on the previous one), not the update step. The update step acts on the prediction.
From a readability point of view solution 3 is closest to the mathematical equations. In principle the prediction step does not give us `x_est[k]` yet but more like `predicted_x_est[k]`. Then the update step runs on this `predicted_x_est[k]` and gives us our actual `x_est[k]`.
However as I said, all implementations are equivalent because when they are programmed, you can see that after the prediction step, the past is not needed any more. So you can safely use one variable for `p` and `x` without needing to keep a list.
# About kalman gain
You wrote:
> So that for a high Kalman gain kg \* (z - x\_est[k-1]) a "big chunk" of
> the delta z - x\_est[k-1] is added to the new estimate. Isn't this whole
> thing getting pointless, if one always calculates the current values?
In these cases the kalman gain can only be between 0 and 1. When is it biggest? When `r` (measurement error) is 0, which means that we infinitely trust our measurements. The equation then simplifies to
```
x_est = x_est + z - x_est
```
which means that we discard our predicted value (the `x_est` on the right hand side) and set our updated estimate equal to our measurement. This is a valid thing to do when we infinitely trust what we measure.
# Adapting to measurements
> I implemented solution 2 but my kalman filter was not really working (it
> highly adapted itself to the measurements and not really considered the
> noise on it).
Tuning a Kalman Filter is tricky, and requires deep knowledge of the system and proper estimates of `q` and `r`. Remember that `q` is the error on the process (state evolution) and `r` is the error on our measurements. If your Kalman filter is adapting itself too much to the measurements it means that:
* `q` is too large
* `r` is too small
or a combination of the two. You will have to play with the values to find ones that work. |
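To make the tuning discussion above concrete, here is a minimal sketch of the 1-D filter with `a = h = 1` (a stationary system); the temperature data and the `q`/`r` values are illustrative, not definitive:

```python
import random

def kalman_1d(measurements, q, r, x0=0.0, p0=1.0):
    """1-D Kalman filter with a = 1 and h = 1 (stationary system)."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # prediction: the state is assumed stationary, so only the error grows
        p = p + q
        # update
        kg = p / (p + r)
        x = x + kg * (z - x)
        p = (1 - kg) * p
        estimates.append(x)
    return estimates

random.seed(0)
true_temp = 20.0
noisy = [true_temp + random.gauss(0, 1) for _ in range(200)]

smooth = kalman_1d(noisy, q=1e-5, r=1.0)  # small q: trusts the model, smooths hard
jumpy = kalman_1d(noisy, q=1.0, r=1.0)    # large q: follows the measurements closely
```

With `q = 1e-5` the estimate settles near the long-run mean, while with `q = 1.0` it chases each individual measurement, which is exactly the over-adaptation described above.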
Unable to install boto3 | 33,388,555 | 3 | 2015-10-28T10:27:24Z | 33,388,653 | 8 | 2015-10-28T10:31:29Z | [
"python",
"virtualenv",
"boto3"
] | I have trouble installing boto3 inside a virtual environment.
I have done what the document says. First I activated the virtual environment, then I ran:
```
Sudo pip install boto3
```
Now I enter python
```
>> import boto3
ImportError: No module named boto3
```
But if I import boto, it works
```
>> import boto
>> boto.Version
'2.38.0'
```
Why does it install boto 2.38 when I installed boto3?
I tried closing the terminal and re-opening it.
Should I restart the Ubuntu machine?
Regards
Prabhakar S | Don't use `sudo` in a virtual environment because it ignores the environment's variables and therefore `sudo pip` refers to your **global** pip installation.
So with your environment activated, rerun `pip install boto3` but without sudo. |
Getting Spark, Python, and MongoDB to work together | 33,391,840 | 24 | 2015-10-28T13:05:00Z | 33,805,579 | 7 | 2015-11-19T13:40:43Z | [
"python",
"mongodb",
"hadoop",
"apache-spark",
"pymongo"
] | I'm having difficulty getting these components to knit together properly. I have Spark installed and working succesfully, I can run jobs locally, standalone, and also via YARN. I have followed the steps advised (to the best of my knowledge) [here](https://github.com/mongodb/mongo-hadoop/wiki/Spark-Usage) and [here](https://github.com/mongodb/mongo-hadoop/blob/master/spark/src/main/python/README.rst)
I'm working on Ubuntu and the various component versions I have are
* **Spark** spark-1.5.1-bin-hadoop2.6
* **Hadoop** hadoop-2.6.1
* **Mongo** 2.6.10
* **Mongo-Hadoop connector** cloned from <https://github.com/mongodb/mongo-hadoop.git>
* **Python** 2.7.10
I had some difficulty following the various steps such as which jars to add to which path, so what I have added are
* in `/usr/local/share/hadoop-2.6.1/share/hadoop/mapreduce` **I have added** `mongo-hadoop-core-1.5.0-SNAPSHOT.jar`
* the following **environment variables**
+ `export HADOOP_HOME="/usr/local/share/hadoop-2.6.1"`
+ `export PATH=$PATH:$HADOOP_HOME/bin`
+ `export SPARK_HOME="/usr/local/share/spark-1.5.1-bin-hadoop2.6"`
+ `export PYTHONPATH="/usr/local/share/mongo-hadoop/spark/src/main/python"`
+ `export PATH=$PATH:$SPARK_HOME/bin`
My Python program is basic
```
from pyspark import SparkContext, SparkConf
import pymongo_spark
pymongo_spark.activate()
def main():
conf = SparkConf().setAppName("pyspark test")
sc = SparkContext(conf=conf)
rdd = sc.mongoRDD(
'mongodb://username:password@localhost:27017/mydb.mycollection')
if __name__ == '__main__':
main()
```
I am running it using the command
```
$SPARK_HOME/bin/spark-submit --driver-class-path /usr/local/share/mongo-hadoop/spark/build/libs/ --master local[4] ~/sparkPythonExample/SparkPythonExample.py
```
and I am getting the following output as a result
```
Traceback (most recent call last):
File "/home/me/sparkPythonExample/SparkPythonExample.py", line 24, in <module>
main()
File "/home/me/sparkPythonExample/SparkPythonExample.py", line 17, in main
rdd = sc.mongoRDD('mongodb://username:password@localhost:27017/mydb.mycollection')
File "/usr/local/share/mongo-hadoop/spark/src/main/python/pymongo_spark.py", line 161, in mongoRDD
return self.mongoPairRDD(connection_string, config).values()
File "/usr/local/share/mongo-hadoop/spark/src/main/python/pymongo_spark.py", line 143, in mongoPairRDD
_ensure_pickles(self)
File "/usr/local/share/mongo-hadoop/spark/src/main/python/pymongo_spark.py", line 80, in _ensure_pickles
orig_tb)
py4j.protocol.Py4JError
```
According to [here](https://github.com/bartdag/py4j/blob/master/py4j-web/advanced_topics.rst#id19)
> This exception is raised when an exception occurs in the Java client
> code. For example, if you try to pop an element from an empty stack.
> The instance of the Java exception thrown is stored in the
> java\_exception member.
Looking at the source code for `pymongo_spark.py` and the line throwing the error, it says
> "Error while communicating with the JVM. Is the MongoDB Spark jar on
> Spark's CLASSPATH? : "
So in response I have tried to be sure the right jars are being passed, but I might be doing this all wrong, see below
```
$SPARK_HOME/bin/spark-submit --jars /usr/local/share/spark-1.5.1-bin-hadoop2.6/lib/mongo-hadoop-spark-1.5.0-SNAPSHOT.jar,/usr/local/share/spark-1.5.1-bin-hadoop2.6/lib/mongo-java-driver-3.0.4.jar --driver-class-path /usr/local/share/spark-1.5.1-bin-hadoop2.6/lib/mongo-java-driver-3.0.4.jar,/usr/local/share/spark-1.5.1-bin-hadoop2.6/lib/mongo-hadoop-spark-1.5.0-SNAPSHOT.jar --master local[4] ~/sparkPythonExample/SparkPythonExample.py
```
I have imported `pymongo` to the same python program to verify that I can at least access MongoDB using that, and I can.
I know there are quite a few moving parts here so if I can provide any more useful information please let me know. | **Updates**:
*2016-07-04*
Since the last update [MongoDB Spark Connector](https://github.com/mongodb/mongo-spark) matured quite a lot. It provides [up-to-date binaries](https://search.maven.org/#search|ga|1|g%3Aorg.mongodb.spark) and data source based API but it is using `SparkConf` configuration so it is subjectively less flexible than the Stratio/Spark-MongoDB.
*2016-03-30*
Since the original answer I found two different ways to connect to MongoDB from Spark:
* [mongodb/mongo-spark](https://github.com/mongodb/mongo-spark)
* [Stratio/Spark-MongoDB](https://github.com/Stratio/Spark-MongoDB)
While the former one seems to be relatively immature the latter one looks like a much better choice than a Mongo-Hadoop connector and provides a Spark SQL API.
```
# Adjust Scala and package version according to your setup
# although officially 0.11 supports only Spark 1.5
# I haven't encountered any issues on 1.6.1
bin/pyspark --packages com.stratio.datasource:spark-mongodb_2.11:0.11.0
```
```
df = (sqlContext.read
.format("com.stratio.datasource.mongodb")
.options(host="mongo:27017", database="foo", collection="bar")
.load())
df.show()
## +---+----+--------------------+
## | x| y| _id|
## +---+----+--------------------+
## |1.0|-1.0|56fbe6f6e4120712c...|
## |0.0| 4.0|56fbe701e4120712c...|
## +---+----+--------------------+
```
It seems to be much more stable than `mongo-hadoop-spark`, supports predicate pushdown without static configuration and simply works.
**The original answer**:
Indeed, there are quite a few moving parts here. I tried to make it a little bit more manageable by building a simple Docker image which roughly matches described configuration (I've omitted Hadoop libraries for brevity though). You can find [complete source on `GitHub`](https://github.com/zero323/docker-mongo-spark) ([DOI 10.5281/zenodo.47882](https://zenodo.org/record/47882)) and build it from scratch:
```
git clone https://github.com/zero323/docker-mongo-spark.git
cd docker-mongo-spark
docker build -t zero323/mongo-spark .
```
or download an image I've [pushed to Docker Hub](https://hub.docker.com/r/zero323/mongo-spark/) (so you can simply `docker pull zero323/mongo-spark`):
Start images:
```
docker run -d --name mongo mongo:2.6
docker run -i -t --link mongo:mongo zero323/mongo-spark /bin/bash
```
Start PySpark shell passing `--jars` and `--driver-class-path`:
```
pyspark --jars ${JARS} --driver-class-path ${SPARK_DRIVER_EXTRA_CLASSPATH}
```
And finally see how it works:
```
import pymongo
import pymongo_spark
mongo_url = 'mongodb://mongo:27017/'
client = pymongo.MongoClient(mongo_url)
client.foo.bar.insert_many([
{"x": 1.0, "y": -1.0}, {"x": 0.0, "y": 4.0}])
client.close()
pymongo_spark.activate()
rdd = (sc.mongoRDD('{0}foo.bar'.format(mongo_url))
.map(lambda doc: (doc.get('x'), doc.get('y'))))
rdd.collect()
## [(1.0, -1.0), (0.0, 4.0)]
```
Please note that mongo-hadoop seems to close the connection after the first action. So calling for example `rdd.count()` after the collect will throw an exception.
Based on different problems I've encountered creating this image I tend to believe that **passing** `mongo-hadoop-1.5.0-SNAPSHOT.jar` and `mongo-hadoop-spark-1.5.0-SNAPSHOT.jar` **to both** `--jars` and `--driver-class-path` **is the only hard requirement**.
**Notes**:
* This image is loosely based on [jaceklaskowski/docker-spark](https://github.com/jaceklaskowski/docker-spark) so please be sure to send some good karma to [@jacek-laskowski](http://stackoverflow.com/users/1305344/jacek-laskowski) if it helps.
* If you don't require a development version including [new API](https://github.com/mongodb/mongo-hadoop/wiki/Spark-Usage#python-example-unreleasedin-master-branch) then using `--packages` is most likely a better option. |
Embed "Bokeh created html file" into Flask "template.html" file | 33,398,950 | 2 | 2015-10-28T18:33:30Z | 33,400,091 | 10 | 2015-10-28T19:37:37Z | [
"python",
"flask",
"jinja2",
"bokeh"
] | I have a web application written in Python - Flask. When the user fills out some settings on one of the pages (POST request), my controller calculates some functions and plots an output using Bokeh with the following command, and then I redirect to the HTML page created by Bokeh.
```
output_file("templates\\" + idx[j]['name'] + ".html", title = "line plots")
TOOLS="resize,crosshair,pan,wheel_zoom,box_zoom,reset,box_select,lasso_select"
p = figure(tools=TOOLS, x_axis_label = 'time', y_axis_label = 'L', plot_width = 1400, plot_height = 900)
```
All of my HTML pages extend my "Template.HTML" file except the Bokeh-generated ones. My question is how can I automatically modify the Bokeh-generated HTML files to also extend my template.html file? This way I'd have all my nav-bar & jumbotron on top of the Bokeh HTML files.
```
{% extends "template.html" %}
{% block content %}
<Bokeh.html file>
{% endblock %}
You don't want to use `output_file` in this situation. Bokeh has a function specifically for embedding into HTML templates in web apps, `bokeh.embed.components`, demonstrated in the [quickstart](http://bokeh.pydata.org/en/latest/docs/user_guide/embed.html#components) and [tutorial](http://nbviewer.ipython.org/github/bokeh/bokeh-notebooks/blob/master/tutorial/05%20-%20sharing.ipynb).
```
from bokeh.embed import components
script, div = components(plot)
return render_template('page.html', script=script, div=div)
```
```
<body>
{{ div|safe }}
{{ script|safe }}
</body>
```
[Here is a complete, runnable example that that shows how to use this with Flask.](https://github.com/bokeh/bokeh/tree/master/examples/embed/simple) |
Custom double star operator for a class? | 33,402,220 | 3 | 2015-10-28T21:49:49Z | 33,402,305 | 7 | 2015-10-28T21:55:21Z | [
"python",
"class",
"python-3.x",
"magic-methods"
] | How does one implement custom double star operator (`**`) for unpacking, similar to how `__iter__` works with single star operator (`*`)?
For example:
```
class PlayerManager(object):
def __init__(self, players=None):
self.players = players or []
# Made up method to support ** operator
def __dict_iter__(self):
for player in self.players:
yield get_steamid(player), player
def print_players(**players):
print(players)
player_manager = PlayerManager([list, of, players])
print_players(player_manager)
```
Output:
```
{
'STEAM_0:0:02201': <Player object at 0x0000000000>,
'STEAM_0:0:10232': <Player object at 0x0000000064>,
'STEAM_0:0:73602': <Player object at 0x0000000128>
}
``` | Implement the [`Mapping` ABC](https://docs.python.org/3/library/collections.abc.html#collections.abc.Mapping). Technically, the language docs don't specify which `Mapping` methods are used, so assuming you only need some subset used by the current implementation is a bad idea. [All it says is](https://docs.python.org/3/reference/expressions.html#calls):
> If the syntax \*\*expression appears in the function call, expression must evaluate to a mapping, the contents of which are treated as additional keyword arguments. In the case of a keyword appearing in both expression and as an explicit keyword argument, a TypeError exception is raised.
So if you implement the `Mapping` ABC, you definitely have the right interfaces, regardless of whether it relies on `.items()`, direct iteration and `__getitem__` calls, etc.
FYI, on checking, the behavior in CPython 3.5 is definitely dependent on *how* you implement `Mapping` (if you inherit from `dict`, it uses an optimized path that directly accesses `dict` internals, if you don't, it iterates `.keys()` and looks up each key as it goes). So yeah, don't cut corners, implement the whole ABC. Thanks to default implementations inherited from the `Mapping` ABC and its parents, this can be done with as little as:
```
class MyMapping(Mapping):
def __getitem__(self, key):
...
def __iter__(self):
...
def __len__(self):
...
```
The default implementations you inherit may be suboptimal in certain cases (e.g. `items` and `values` would do semi-evil stuff involving iteration and look up, where direct accessors might be faster depending on internals), so if you're using it for other purposes, I'd suggest overriding those with optimized versions. |
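A filled-in version of the skeleton above (the backing `dict` is illustrative) that works with `**` unpacking might look like:

```python
from collections.abc import Mapping

class MyMapping(Mapping):
    """Illustrative mapping backed by a plain dict."""
    def __init__(self, data):
        self._data = dict(data)

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

m = MyMapping({'a': 1, 'b': 2})

def show(**kwargs):
    return kwargs

# ** unpacking accepts any object implementing the Mapping interface
unpacked = show(**m)
print(unpacked)
```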
Custom double star operator for a class? | 33,402,220 | 3 | 2015-10-28T21:49:49Z | 33,402,362 | 8 | 2015-10-28T22:00:15Z | [
"python",
"class",
"python-3.x",
"magic-methods"
] | How does one implement custom double star operator (`**`) for unpacking, similar to how `__iter__` works with single star operator (`*`)?
For example:
```
class PlayerManager(object):
def __init__(self, players=None):
self.players = players or []
# Made up method to support ** operator
def __dict_iter__(self):
for player in self.players:
yield get_steamid(player), player
def print_players(**players):
print(players)
player_manager = PlayerManager([list, of, players])
print_players(player_manager)
```
Output:
```
{
'STEAM_0:0:02201': <Player object at 0x0000000000>,
'STEAM_0:0:10232': <Player object at 0x0000000064>,
'STEAM_0:0:73602': <Player object at 0x0000000128>
}
``` | As @ShadowRanger says, implement Mapping. Here's an example:
```
from collections.abc import Mapping
class Foo(Mapping):
def __iter__(self):
yield "a"
yield "b"
def __len__(self):
return 2
def __getitem__(self, item):
return ord(item)
f = Foo()
print(*f)
print(dict(**f))
```
The program outputs:
```
a b
{'a': 97, 'b': 98}
``` |
python Ubuntu error install Pillow 3.0.0 | 33,404,394 | 38 | 2015-10-29T01:20:25Z | 33,404,409 | 62 | 2015-10-29T01:22:21Z | [
"python",
"ubuntu",
"pillow"
] | I recently failed trying to install Pillow 3.0.0 on my Ubuntu 14.04.
No matter what I do (download and try to "sudo python setup.py install" or "sudo -H pip install Pillow==3.0.0 --no-cache-dir") every time I get the error:
```
copying PIL/TiffImagePlugin.py -> build/lib.linux-x86_64-2.7/PIL
running egg_info
writing Pillow.egg-info/PKG-INFO
writing top-level names to Pillow.egg-info/top_level.txt
writing dependency_links to Pillow.egg-info/dependency_links.txt
warning: manifest_maker: standard file '-c' not found
reading manifest file 'Pillow.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'Pillow.egg-info/SOURCES.txt'
copying PIL/OleFileIO-README.md -> build/lib.linux-x86_64-2.7/PIL
running build_ext
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-3waMkf/Pillow/setup.py", line 767, in <module>
zip_safe=not debug_build(),
File "/usr/lib/python2.7/distutils/core.py", line 151, in setup
dist.run_commands()
File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/usr/local/lib/python2.7/dist-packages/setuptools/command/install.py", line 61, in run
return orig.install.run(self)
File "/usr/lib/python2.7/distutils/command/install.py", line 601, in run
self.run_command('build')
File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/usr/lib/python2.7/distutils/command/build.py", line 128, in run
self.run_command(cmd_name)
File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/usr/lib/python2.7/distutils/command/build_ext.py", line 337, in run
self.build_extensions()
File "/tmp/pip-build-3waMkf/Pillow/setup.py", line 515, in build_extensions
% (f, f))
ValueError: --enable-zlib requested but zlib not found, aborting.
----------------------------------------
Command "/usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-3waMkf/Pillow/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-S_sHo7-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-3waMkf/Pillow
``` | Did you install the dependencies for Pillow? You can install them with:
```
$ sudo apt-get build-dep python-imaging
$ sudo apt-get install libjpeg8 libjpeg62-dev libfreetype6 libfreetype6-dev
``` |
How can I await inside future-like object's __await__? | 33,409,888 | 7 | 2015-10-29T09:06:06Z | 33,420,721 | 8 | 2015-10-29T17:19:09Z | [
"python",
"python-3.x",
"python-asyncio"
] | [PEP 0492](https://www.python.org/dev/peps/pep-0492/#await-expression) adds new `__await__` magic method. Object that implements this method becomes *future-like object* and can be awaited using `await`. It's clear:
```
import asyncio
class Waiting:
def __await__(self):
yield from asyncio.sleep(2)
print('ok')
async def main():
await Waiting()
if __name__ == "__main__":
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
```
Ok, but what if I want to call some `async def` defined function instead of `asyncio.sleep`? I can't use `await` because `__await__` is not an `async` function, and I can't use `yield from` because native coroutines require an `await` expression:
```
async def new_sleep():
await asyncio.sleep(2)
class Waiting:
def __await__(self):
yield from new_sleep() # this is TypeError
await new_sleep() # this is SyntaxError
print('ok')
```
How can I solve it? | Use direct `__await__()` call:
```
async def new_sleep():
await asyncio.sleep(2)
class Waiting:
def __await__(self):
return new_sleep().__await__()
```
The solution was recommended by Yury Selivanov (the author of [PEP 492](https://www.python.org/dev/peps/pep-0492/)) for [aioodbc library](https://github.com/aio-libs/aioodbc/blob/e56615cd005cb402470bbffe6589b38cb5f7acee/aioodbc/utils.py#L27-L28) |
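For completeness, here is a runnable sketch of this pattern (the sleep is shortened from 2 seconds so the demo finishes quickly):

```python
import asyncio

async def new_sleep():
    await asyncio.sleep(0.01)  # shortened so the demo runs fast

class Waiting:
    def __await__(self):
        # Delegate to the coroutine's own __await__ iterator
        return new_sleep().__await__()

async def main():
    await Waiting()
    return 'ok'

loop = asyncio.new_event_loop()
try:
    result = loop.run_until_complete(main())
finally:
    loop.close()

print(result)  # -> ok
```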
How to get current date and time from GPS unsegment time in python | 33,415,475 | 6 | 2015-10-29T13:23:52Z | 33,426,779 | 8 | 2015-10-30T00:02:19Z | [
"python",
"datetime",
"gps"
] | I have gps unsegmented time like this:
```
Tgps = 1092121243.0
```
And I'd like to understand what date and time that is. The beginning of GPS time is 6 January 1980. The Python function
```
datetime.utcfromtimestamp
```
could give seconds from 1 January 1970.
I found the following:
```
from datetime import datetime
GPSfromUTC = (datetime(1980,1,6) - datetime(1970,1,1)).total_seconds()
curDate = datetime.utcfromtimestamp(Tgps + GPSfromUTC)
Out[83]: datetime.datetime(2014, 8, 15, 7, 0, 43)
```
I'm not sure about leapseconds are they included in function datetime or I should calculate them and substract from the result?
May be also exists better solution of this problem? | GPS time started in sync with UTC: `1980-01-06 (UTC) == 1980-01-06 (GPS)`. Both tick in SI seconds. The difference between GPS time and UTC time increases with each (intercalary) leap second.
To find the correct UTC time, you need to know the number of leap seconds that occurred before the given GPS time:
```
#!/usr/bin/env python
from datetime import datetime, timedelta
# utc = 1980-01-06UTC + (gps - (leap_count(2014) - leap_count(1980)))
utc = datetime(1980, 1, 6) + timedelta(seconds=1092121243.0 - (35 - 19))
print(utc)
```
### Output
```
2014-08-15 07:00:27 # (UTC)
```
where `leap_count(date)` is the number of leap seconds introduced before the given date. From [TAI-UTC table](http://hpiers.obspm.fr/eop-pc/index.php?index=TAI-UTC_tab&lang=en):
```
1980..: 19s
2012..: 35s
```
and therefore:
```
(leap_count(2014) - leap_count(1980)) == (35 - 19)
```
---
If you are on Unix then you could use `"right"` time zone to get UTC time from TAI time
(and it is easy to get TAI time from GPS time: [TAI = GPS + 19 seconds (constant offset)](http://www.leapsecond.com/java/gpsclock.htm)):
```
#!/usr/bin/env python
import os
import time
os.environ['TZ'] = 'right/UTC' # TAI scale with 1970-01-01 00:00:10 (TAI) epoch
time.tzset() # Unix
from datetime import datetime, timedelta
gps_timestamp = 1092121243.0 # input
gps_epoch_as_gps = datetime(1980, 1, 6)
# by definition
gps_time_as_gps = gps_epoch_as_gps + timedelta(seconds=gps_timestamp)
gps_time_as_tai = gps_time_as_gps + timedelta(seconds=19) # constant offset
tai_epoch_as_tai = datetime(1970, 1, 1, 0, 0, 10)
# by definition
tai_timestamp = (gps_time_as_tai - tai_epoch_as_tai).total_seconds()
print(datetime.utcfromtimestamp(tai_timestamp)) # "right" timezone is in effect!
```
### Output
```
2014-08-15 07:00:27 # (UTC)
```
---
You could avoid changing the timezone if you extract the leap seconds list from the corresponding [`tzfile(5)`](http://man7.org/linux/man-pages/man5/tzfile.5.html). It is a combination of the first two methods where the leap count computation from the first method is automated and the autoupdating `tzdata` (system package for [the tz database](https://en.wikipedia.org/wiki/Tz_database)) from the second method is used:
```
>>> from datetime import datetime, timedelta
>>> import leapseconds
>>> leapseconds.gps_to_utc(datetime(1980,1,6) + timedelta(seconds=1092121243.0))
datetime.datetime(2014, 8, 15, 7, 0, 27)
```
where [`leapseconds.py`](https://gist.github.com/zed/92df922103ac9deb1a05) can extract leap seconds from `/usr/share/zoneinfo/right/UTC` file (part of `tzdata` package).
All three methods produce the same result. |
Regular expression: matching and grouping a variable number of space separated words | 33,416,263 | 3 | 2015-10-29T13:57:38Z | 33,416,695 | 8 | 2015-10-29T14:17:04Z | [
"python",
"regex"
] | I have a string:
```
"foo hello world baz 33"
```
The part between `foo` and `baz` will be some number of space separated words (one or more). I want to match this string with an re that will group out each of those words:
```
>>> re.match(r'foo (<some re here>) baz (\d+)', "foo hello world baz 33").groups()
('hello', 'world', '33')
```
The re should be flexible so that it will work in case there are no words around it:
```
>>> re.match(r'(<some re here>)', "hello world").groups()
('hello', 'world')
```
I'm trying variations with `([\w+\s])+`, but I'm not able to capture a dynamically determined number of groups. Is this possible? | `re.match` only looks for a match at the start of the string. Use `re.search` instead.
`.*?` returns the shortest match between two words/expressions (. means anything, \* means 0 or more occurrences and ? means shortest match).
```
import re
my_str = "foo hello world baz 33"
my_pattern = r'foo\s(.*?)\sbaz'
p = re.search(my_pattern,my_str,re.I)
result = p.group(1).split()
print result
['hello', 'world']
```
**EDIT:**
In case foo or baz are missing, and you need to return the entire string, use an `if-else`:
```
if p is not None:
result = p.group(1).split()
else:
result = my_str
```
Why the `?` in the pattern:
Suppose there are multiple occurrences of the word `baz`:
```
my_str = "foo hello world baz 33 there is another baz"
```
using `pattern = 'foo\s(.*)\sbaz'` will match (longest and greedy):
```
'hello world baz 33 there is another'
```
whereas , using `pattern = 'foo\s(.*?)\sbaz'` will return the shortest match:
```
'hello world'
``` |
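A runnable comparison of the greedy and non-greedy patterns on the example string above:

```python
import re

my_str = "foo hello world baz 33 there is another baz"

greedy = re.search(r'foo\s(.*)\sbaz', my_str)
lazy = re.search(r'foo\s(.*?)\sbaz', my_str)

# greedy (.*) stretches to the LAST " baz"
print(greedy.group(1))  # -> hello world baz 33 there is another

# non-greedy (.*?) stops at the FIRST " baz"
print(lazy.group(1))    # -> hello world
```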
Confusion with split function in Python | 33,419,676 | 4 | 2015-10-29T16:26:37Z | 33,419,811 | 7 | 2015-10-29T16:32:14Z | [
"python"
] | I am trying to alphabetically sort the words from a file. However, the program sorts the lines, not the words, according to their first words. Here it is.
```
fname = raw_input("Enter file name: ")
fh = open(fname)
lst = list()
for line in fh:
lst2 = line.strip()
words = lst2.split()
lst.append(words)
lst.sort()
print lst
```
Here is my input file
```
But soft what light through yonder window breaks
It is the east and Juliet is the sun
Arise fair sun and kill the envious moon
Who is already sick and pale with grief
```
And this is what I'm hoping to get
```
['Arise', 'But', 'It', 'Juliet', 'Who', 'already', 'and', 'breaks', 'east', 'envious', 'fair', 'grief', 'is', 'kill', 'light', 'moon', 'pale', 'sick', 'soft', 'sun', 'the', 'through', 'what', 'window', 'with', 'yonder']
``` | `lst.append(words)` appends a list at the end of `lst`; it does not concatenate `lst` and `words`. You need to use `lst.extend(words)` or `lst += words`.
Also, you should not sort the list at each iteration but only at the end of your loop:
```
lst = []
for line in fh:
lst2 = line.strip()
words = lst2.split()
lst.extend(words)
lst.sort()
print lst
```
If you don't want repeated words, use a `set`:
```
st = set()
for line in fh:
lst2 = line.strip()
words = lst2.split()
st.update(words)
lst = list(st)
lst.sort()
print lst
``` |
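To make the `append`/`extend` difference concrete, a minimal standalone sketch:

```python
appended = ['a']
appended.append(['b', 'c'])   # the whole list becomes ONE nested element
extended = ['a']
extended.extend(['b', 'c'])   # the elements are added one by one
plussed = ['a']
plussed += ['b', 'c']         # `+=` behaves like extend

print(appended)  # ['a', ['b', 'c']]
print(extended)  # ['a', 'b', 'c']
print(plussed)   # ['a', 'b', 'c']
```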
How does this variable declaration works in python? | 33,421,149 | 3 | 2015-10-29T17:42:23Z | 33,421,175 | 9 | 2015-10-29T17:43:37Z | [
"python",
"python-2.7",
"hex"
] | ```
i = 0x0800
```
What I understand here is that 0x0800 is a hexadecimal number where '0x' denotes the hex type and the following number '0800' is a 2 byte hexadecimal number. On assigning it to a variable 'i', when its type is checked I got this error
```
>>> type(i)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'int' object is not callable
```
Here I make out that 'i' is supposed to be an int object. I got more confused when I tried this
```
>>> print i
2048
```
What is '2048' exactly? Can anyone throw some light here? | `i` is an integer, but you *redefined `type`*:
```
>>> i = 0x0800
>>> i
2048
>>> type(i)
<type 'int'>
>>> type = 42
>>> type(i)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'int' object is not callable
>>> del type
>>> type(i)
<type 'int'>
```
Note the `type = 42` line; I created a new global name `type` and that is found before the built-in. You could also use `import __builtin__; __builtin__.type(i)` in Python 2, or `import builtins; builtins.type(i)` in Python 3 to access the original built-in `type()` function:
```
>>> import __builtin__
>>> type = 42
>>> __builtin__.type(type)
<type 'int'>
>>> type(type)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'int' object is not callable
>>> del type
>>> type(type)
<type 'type'>
```
The `0x` notation is just one of several ways of specifying an integer literal. You are still producing a regular integer, only the *syntax* for how you define the value differs here. All of the following notations produce the exact same integer value:
```
0x0800 # hexadecimal
0o04000 # octal, Python 2 also accepts 04000
0b100000000000 # binary
2048 # decimal
```
See the [*Integer Literals* reference documentation](https://docs.python.org/2/reference/lexical_analysis.html#integer-and-long-integer-literals). |
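All of these notations are easy to verify; a standalone sketch (the round-trip output shown assumes Python 3):

```python
i = 0x0800
# one value, four spellings
assert i == 0o4000 == 0b100000000000 == 2048

# converting back to string form in each base
print(hex(i))  # 0x800
print(oct(i))  # 0o4000 (Python 2 prints 04000)
print(bin(i))  # 0b100000000000
print(type(i))
```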
Installing numpy on Docker Alpine | 33,421,965 | 4 | 2015-10-29T18:28:27Z | 33,424,723 | 8 | 2015-10-29T21:14:11Z | [
"python",
"numpy",
"docker",
"pip",
"alpine"
] | I'm trying to install numpy in a docker container based on Alpine 3.1. I'm using the following Dockerfile:
```
FROM alpine:3.1
RUN apk add --update make cmake gcc g++ gfortran
RUN apk add --update python py-pip python-dev
RUN pip install cython
RUN pip install numpy
```
This runs fine until `pip install numpy` when I get the following error:
```
error: Command "gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -Inumpy/core/include -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python2.7 -Ibuild/src.linux-x86_64-2.7/numpy/core/src/private -Ibuild/src.linux-x86_64-2.7/numpy/core/src/private -Ibuild/src.linux-x86_64-2.7/numpy/core/src/private -c build/src.linux-x86_64-2.7/numpy/core/src/npymath/ieee754.c -o build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/numpy/core/src/npymath/ieee754.o" failed with exit status 1
```
`easy_install-2.7 numpy` gives the same error.
Are there any config/installation steps I'm missing? | If you don't necessarily need to install `numpy` from PyPI, you could install it from the Alpine repositories. The package is named `py-numpy` and lives in the `testing` repository; see [here](http://pkgs.alpinelinux.org/contents?pkgname=py-numpy&arch=x86). A minimal `Dockerfile` example that works for me:
```
FROM alpine:3.2
ADD repositories /etc/apk/repositories
RUN apk add --update python python-dev gfortran py-pip build-base py-numpy@testing
```
Content of `repositories` file
```
http://dl-cdn.alpinelinux.org/alpine/v3.2/main
@testing http://dl-cdn.alpinelinux.org/alpine/edge/testing
``` |
ImportError: No module named 'Queue' | 33,432,426 | 4 | 2015-10-30T09:17:19Z | 33,433,119 | 16 | 2015-10-30T09:55:00Z | [
"python",
"python-requests"
] | I am trying to import `requests` module, but I got this error
my python version is 3.4 running on ubuntu 14.04
```
>>> import requests
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 10, in <module>
from queue import LifoQueue, Empty, Full
ImportError: cannot import name 'LifoQueue'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.4/dist-packages/requests/__init__.py", line 58, in <module>
from . import utils
File "/usr/local/lib/python3.4/dist-packages/requests/utils.py", line 26, in <module>
from .compat import parse_http_list as _parse_list_header
File "/usr/local/lib/python3.4/dist-packages/requests/compat.py", line 7, in <module>
from .packages import chardet
File "/usr/local/lib/python3.4/dist-packages/requests/packages/__init__.py", line 3, in <module>
from . import urllib3
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/__init__.py", line 10, in <module>
from .connectionpool import (
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 12, in <module>
from Queue import LifoQueue, Empty, Full
ImportError: No module named 'Queue'
``` | Queue is in the multiprocessing module so:
```
from multiprocessing import Queue
``` |
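As a side note not covered by the answer: the names in the traceback live in the standard-library module that was renamed from `Queue` (Python 2) to `queue` (Python 3), so a version-agnostic import is a common workaround (a sketch, assuming you want the stdlib queue rather than `multiprocessing`):

```python
try:
    from queue import LifoQueue, Empty, Full   # Python 3
except ImportError:
    from Queue import LifoQueue, Empty, Full   # Python 2

stack = LifoQueue()
stack.put('first')
stack.put('second')
print(stack.get())  # second -- last in, first out
```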
Django ignores router when running tests? | 33,434,318 | 19 | 2015-10-30T10:56:22Z | 33,541,073 | 10 | 2015-11-05T09:39:48Z | [
"python",
"django",
"database"
] | I have a django application that uses 2 database connections:
1. To connect to the actual data the app is to produce
2. To a reference master data system, that is maintained completely outside my control
The issue I'm having is that my webapp can absolutely NOT touch the data in the 2nd database. I solved most of the issues by using 2 (sub)apps, one for every database connection. I created a router file that routes any migration and write operation to the first app.
I also made all the models in the 2nd app non managed, using the
```
model.meta.managed = False
```
option.
To be sure, the user I connect to the 2nd database has read only access
This works fine for migrations and running. However, when I try to run tests using django testcase, Django tries to delete and create a test\_ database on the 2nd database connection.
How can I make sure that Django will NEVER update/delete/insert/drop/truncate over the 2nd connection
How can I run tests, that do not try to create *the second database*, but do create the first.
Thanks!
*edited: code*
**model (for the 2nd app, that should not be managed):**
```
from django.db import models
class MdmMeta(object):
db_tablespace = 'MDM_ADM'
managed = False
ordering = ['name']
class ActiveManager(models.Manager):
def get_queryset(self):
return super(ActiveManager, self).get_queryset().filter(lifecyclestatus='active')
class MdmType(models.Model):
entity_guid = models.PositiveIntegerField(db_column='ENTITYGUID')
entity_name = models.CharField(max_length=255, db_column='ENTITYNAME')
entry_guid = models.PositiveIntegerField(primary_key=True, db_column='ENTRYGUID')
name = models.CharField(max_length=255, db_column='NAME')
description = models.CharField(max_length=512, db_column='DESCRIPTION')
lifecyclestatus = models.CharField(max_length=255, db_column='LIFECYCLESTATUS')
# active_manager = ActiveManager()
def save(self, *args, **kwargs):
raise Exception('Do not save MDM models!')
def delete(self, *args, **kwargs):
raise Exception('Do not delete MDM models!')
def __str__(self):
return self.name
class Meta(MdmMeta):
abstract = True
# Create your models here.
class MdmSpecies(MdmType):
class Meta(MdmMeta):
db_table = 'MDM_SPECIES'
verbose_name = 'Species'
verbose_name_plural = 'Species'
class MdmVariety(MdmType):
class Meta(MdmMeta):
db_table = 'MDM_VARIETY'
verbose_name = 'Variety'
verbose_name_plural = 'Varieties'
...
```
**router:**
```
__author__ = 'CoesseWa'
class MdmRouter(object):
def db_for_read(self, model, **hints):
if model._meta.app_label == 'mdm':
# return 'default'
return 'mdm_db' # trying to use one database connection
return 'default'
def db_for_write(self, model, **hints):
return 'default'
def allow_relation(self, obj1, obj2, **hints):
return None
def allow_migrate(self, db, model):
if model._meta.app_label == 'mdm':
return False
```
**settings:**
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.oracle',
'NAME': '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=%s)(PORT=1521)))(CONNECT_DATA=(SID=%s)))'
% (get_env_variable('LIMS_MIGRATION_HOST'), get_env_variable('LIMS_MIGRATION_SID')),
'USER': 'LIMS_MIGRATION',
'PASSWORD': get_env_variable('LIMS_MIGRATION_PASSWORD'),
},
'mdm_db': {
'ENGINE': 'django.db.backends.oracle',
'NAME': '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=GB3P)(PORT=1521)))'
'(CONNECT_DATA=(SID=GB3P)))',
'USER': 'MDM',
'PASSWORD': get_env_variable('MDM_DB_PASSWORD'),
},
}
```
**one testcase:**
```
from django.test.testcases import TestCase
__author__ = 'CoesseWa'
class ModelTest(TestCase):
def test_getting_guid_for_mdm_field(self):
self.assertIsNotNone(1)
```
**output from when running this tests:**
```
...
Destroying old test user...
```
*(before this point, django creates the test database for my first connection = OK)*
```
Creating test user...
```
*=> These next lines should never happen. They fail because I use a read-only user (luckily)*
```
Creating test database for alias 'mdm_db'...
Failed (ORA-01031: insufficient privileges
Got an error creating the test database: ORA-01031: insufficient privileges
``` | I solved this by changing the `DATABASES.TEST` definition. I added the `TEST['MIRROR'] = 'default'` to the `mdm_db` database entry.
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.oracle',
'NAME': '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=%s)(PORT=1521)))(CONNECT_DATA=(SID=%s)))'
% (get_env_variable('LIMS_MIGRATION_HOST'), get_env_variable('LIMS_MIGRATION_SID')),
'USER': 'LIMS_MIGRATION',
'PASSWORD': get_env_variable('LIMS_MIGRATION_PASSWORD'),
},
'mdm_db': {
'ENGINE': 'django.db.backends.oracle',
'NAME': '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=GB3P)(PORT=1521)))'
'(CONNECT_DATA=(SID=GB3P)))',
'USER': 'MDM',
'PASSWORD': get_env_variable('MDM_DB_PASSWORD'),
'TEST': {
'MIRROR': 'default', # Added this setting
}
},
}
```
According to the [documentation](https://docs.djangoproject.com/en/1.8/topics/testing/advanced/#tests-and-multiple-databases) this option can be abused to skip database creation:
> However, the replica database has been configured as a test mirror
> (using the MIRROR test setting), indicating that under testing,
> replica should be treated as a mirror of default.
>
> When the test environment is configured, a test version of replica
> will not be created. Instead the connection to replica will be
> redirected to point at default.
Running my tests now skips creation of the second database.
Thanks for all the input!! |
Multiple if statements in a lambda function | 33,439,434 | 4 | 2015-10-30T15:23:39Z | 33,439,510 | 8 | 2015-10-30T15:27:11Z | [
"python",
"lambda"
] | I am trying to use 3 if statements within a python lambda function. Here is my code:
```
y=lambda symbol: 'X' if symbol==True 'O' if symbol==False else ' '
```
I have been able to get two if statements to work just fine, e.g.
```
x=lambda cake: "Yum" if cake=="chocolate" else "Yuck"
```
Essentially, I want a lambda function to use if statements to return 'X' if the symbol is True, 'O' if it is false, and ' ' otherwise. I'm not even sure if this is even possible, but I haven't been able to find any information on the internet, so I would really appreciate any help :) | You can use an anonymous dict inside your anonymous function to test for this, using the default value of `dict.get` to symbolize your final "else"
```
y = lambda sym: {True: 'X', False: 'O'}.get(sym, ' ')
``` |
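Spelled out with the mapping from the question ('X' for True, 'O' for False, ' ' otherwise), a standalone sketch:

```python
mark = lambda sym: {True: 'X', False: 'O'}.get(sym, ' ')

print(mark(True))        # X
print(mark(False))       # O
print(repr(mark(None)))  # ' ' -- the dict.get default plays the final else

# caveat: bool is a subclass of int, so 1 and 0 hit the True/False entries too
print(mark(1))           # X
```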
Pandas dataframe read_csv on bad data | 33,440,805 | 3 | 2015-10-30T16:36:17Z | 33,440,853 | 7 | 2015-10-30T16:39:00Z | [
"python",
"csv",
"pandas"
] | I want to read in a very large csv (cannot be opened in excel and edited easily) but somewhere around the 100,000th row, there is a row with one extra column causing the program to crash. This row is errored so I need a way to ignore the fact that it was an extra column. There is around 50 columns so hardcoding the headers and using names or usecols isn't preferable. I'll also possibly encounter this issue in other csv's and want a generic solution. I couldn't find anything in read\_csv unfortunately. The code is as simple as this:
```
def loadCSV(filePath):
dataframe = pd.read_csv(filePath, index_col=False, encoding='iso-8859-1', nrows=1000)
datakeys = dataframe.keys();
return dataframe, datakeys
``` | pass [`error_bad_lines=False`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html#pandas.read_csv) to skip erroneous rows:
> error\_bad\_lines : boolean, default True Lines with too many fields
> (e.g. a csv line with too many commas) will by default cause an
> exception to be raised, and no DataFrame will be returned. If False,
> then these "bad lines" will be dropped from the DataFrame that is
> returned. (Only valid with C parser) |
How to check if Celery/Supervisor is running using Python | 33,446,350 | 11 | 2015-10-30T23:34:31Z | 33,545,849 | 11 | 2015-11-05T13:30:19Z | [
"python",
"flask",
"celery",
"supervisor"
] | How to write a script in Python that outputs if celery is running on a machine (Ubuntu)?
My use-case. I have a simple python file with some tasks. I'm not using Django or Flask. I use supervisor to run the task queue. For example,
**tasks.py**
```
from celery import Celery, task
app = Celery('tasks')
@app.task()
def add_together(a, b):
return a + b
```
**Supervisor:**
```
[program:celery_worker]
directory = /var/app/
command=celery -A tasks worker --loglevel=info
```
This all works. I now want a page that checks whether the celery/supervisor process is running, i.e. something like the following, maybe using Flask, so I can host a page that returns a 200 status and use it for load balancing.
For example...
**check\_status.py**
```
from flask import Flask
app = Flask(__name__)
@app.route('/')
def status_check():
#check supervisor is running
if supervisor:
return render_template('up.html')
else:
return render_template('down.html')
if __name__ == '__main__':
app.run()
``` | You can run the `celery status` command via code by importing the `celery.bin.celery` package:
```
import celery
import celery.bin.base
import celery.bin.celery
import celery.platforms
app = celery.Celery('tasks', broker='redis://')
status = celery.bin.celery.CeleryCommand.commands['status']()
status.app = status.get_app()
def celery_is_up():
try:
status.run()
return True
except celery.bin.base.Error as e:
if e.status == celery.platforms.EX_UNAVAILABLE:
return False
raise e
if __name__ == '__main__':
if celery_is_up():
print('Celery up!')
else:
print('Celery not responding...')
``` |
Iterating through array | 33,446,708 | 2 | 2015-10-31T00:21:50Z | 33,446,760 | 7 | 2015-10-31T00:28:21Z | [
"python",
"arrays",
"python-3.x",
"swap"
] | I have an array of bools and now I want to swap those entries for numbers.
```
False => 0
True => 1
```
I have written two different pieces of code and I would like to know, which one is better and why. This is not so much about actually solving the problem, as about learning.
```
arr = [[True,False],[False,True],[True,True]]
for i,row in enumerate(arr):
for j,entry in enumerate(row):
if entry:
arr[i][j] = 1
else:
arr[i][j] = 0
print(arr)
```
And the second approach:
```
arr = [[True,False],[False,True],[True,True]]
for i in range(len(arr)):
for j in range(len(arr[i])):
if arr[i][j]:
arr[i][j] = 1
else:
arr[i][j] = 0
print(arr)
```
I read that there are ways to do this by importing `itertools` or similar. I am really not a fan of importing things if it can be done with "on-board tools", but should I rather be using them for this problem? | Let's define your array:
```
>>> arr = [[True,False],[False,True],[True,True]]
```
Now, let's convert the booleans to integer:
```
>>> [[int(i) for i in row] for row in arr]
[[1, 0], [0, 1], [1, 1]]
```
Alternatively, if we want to be more flexible about what gets substituted in, we can use a ternary statement:
```
>>> [[1 if i else 0 for i in row] for row in arr]
[[1, 0], [0, 1], [1, 1]]
``` |
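One extra detail worth knowing (my addition, not from the answer): `bool` is a subclass of `int`, so the conversion is exact and arithmetic over the booleans works directly:

```python
arr = [[True, False], [False, True], [True, True]]

converted = [[int(i) for i in row] for row in arr]
print(converted)  # [[1, 0], [0, 1], [1, 1]]

assert issubclass(bool, int)   # True behaves as 1, False as 0
print(sum(arr[2]))             # 2 -- True + True
```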
Why explicit test for an empty set does not work | 33,452,797 | 8 | 2015-10-31T14:40:10Z | 33,452,885 | 9 | 2015-10-31T14:49:39Z | [
"python",
"python-3.x"
] | If empty `set()` is `False`, shouldn't the `if test == False` clause in the following code evaluate to `True`?
```
test = set()
# empty sets are false
if test == False:
print("set is empty")
else:
print("set NOT empty")
if not test:
print("set is empty")
```
output:
```
set NOT empty
set is empty
``` | In simple terms, the equals operator `==` will perform an equality comparison between those two objects: A set and a boolean value will never be equal, so the result of the comparison is false. On the other hand, just checking `if obj` (or `if not obj`) will check the trueness of the object, something that can be evaluated for every object. In a way, this actually does a type conversion using `if bool(obj)`. And for empty sets, this is false.
In the [data model](https://docs.python.org/3/reference/datamodel.html), both of these operations invoke different special method names. Comparing two objects using the equality operator will invoke [`__eq__`](https://docs.python.org/3/reference/datamodel.html#object.__eq__) while calling `bool()` on an object will invoke [`__bool__`](https://docs.python.org/3/reference/datamodel.html#object.__bool__). |
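Both checks can be demonstrated side by side (a minimal standalone sketch):

```python
test = set()

# truthiness: an empty set is falsy
assert bool(test) is False
assert not test

# equality: a set is never equal to a bool
assert (test == False) is False
# under the hood, set.__eq__ declines the comparison entirely
assert test.__eq__(False) is NotImplemented

print("empty set: falsy, but not equal to False")
```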
Symbol not found: _BIO_new_CMS | 33,462,779 | 10 | 2015-11-01T13:43:18Z | 34,132,165 | 15 | 2015-12-07T11:11:49Z | [
"python",
"osx",
"python-2.7",
"scrapy",
"osx-elcapitan"
] | I am new to mac and I don't understand why my scrapy doesn't seem to work any more. I suspect openssl is not valid in my el capitan.
I tried:
```
pip install cryptography
pip install pyOpenSSL
brew install openssl
```
and I still get the error below.
Is there some way I can fix this?
```
$ python
Python 2.7.10 (v2.7.10:15c95b7d81dc, May 23 2015, 09:33:12)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import OpenSSL
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/OpenSSL/__init__.py", line 8, in <module>
from OpenSSL import rand, crypto, SSL
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/OpenSSL/rand.py", line 11, in <module>
from OpenSSL._util import (
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/OpenSSL/_util.py", line 3, in <module>
from cryptography.hazmat.bindings.openssl.binding import Binding
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/cryptography/hazmat/bindings/openssl/binding.py", line 13, in <module>
from cryptography.hazmat.bindings._openssl import ffi, lib
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/cryptography/hazmat/bindings/_openssl.so, 2): Symbol not found: _BIO_new_CMS
Referenced from: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/cryptography/hazmat/bindings/_openssl.so
Expected in: flat namespace
in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/cryptography/hazmat/bindings/_openssl.so
>>>
``` | I solved this problem with the following command:
```
LDFLAGS="-L/usr/local/opt/openssl/lib" pip install cryptography --no-use-wheel
```
Refer to [if homebrew openssl is linked, cryptography builds an unusable wheel](https://github.com/pyca/cryptography/issues/2138) |
expected two blank lines pep8 warning in python | 33,466,860 | 5 | 2015-11-01T20:30:11Z | 33,467,206 | 11 | 2015-11-01T21:06:00Z | [
"python",
"vim"
] | I'm using vim editor as python IDE, below is a simple python program to calculate square root of a number:-
```
import cmath
def sqrt():
try:
num = int(input("Enter the number : "))
if num >= 0:
main(num)
else:
complex(num)
except:
print("OOPS..!!Something went wrong, try again")
sqrt()
return
def main(num):
squareRoot = num**(1/2)
print("The square Root of ", num, " is ", squareRoot)
return
def complex(num):
ans = cmath.sqrt(num)
print("The Square root if ", num, " is ", ans)
return
sqrt()
```
and the warnings are :-
```
1-square-root.py|2 col 1 C| E302 expected 2 blank lines, found 0 [pep8]
1-square-root.py|15 col 1 C| E302 expected 2 blank lines, found 1 [pep8]
1-square-root.py|21 col 1 C| E302 expected 2 blank lines, found 0 [pep8]
```
Can you please tell why these warnings are coming?
[](http://i.stack.imgur.com/m1Rlp.png) | ```
import cmath
def sqrt():
try:
num = int(input("Enter the number : "))
if num >= 0:
main(num)
else:
complex_num(num)
except:
print("OOPS..!!Something went wrong, try again")
sqrt()
return
def main(num):
square_root = num**(1/2)
print("The square Root of ", num, " is ", square_root)
return
def complex_num(num):
ans = cmath.sqrt(num)
print("The Square root if ", num, " is ", ans)
return
sqrt()
```
The code above fixes your [PEP8](https://www.python.org/dev/peps/pep-0008/) problems. After your imports you need two blank lines before your code starts, and two blank lines between each top-level `def` as well.
In your case you had none after the import and only one blank line between each function. PEP8 also requires a newline at the very end of the file; unfortunately I don't know how to show that when pasting your code in here.
Pay attention to the naming, it's part of PEP8 as well. I changed `complex` to `complex_num` to prevent confusion with builtin [`complex`](https://docs.python.org/3.0/library/functions.html#complex).
In the end, they're only warning, they can be ignored if needed. |
Python- Trying to multiply items in list | 33,468,865 | 4 | 2015-11-02T00:20:24Z | 33,468,909 | 7 | 2015-11-02T00:25:57Z | [
"python",
"python-3.x",
"python-3.4"
] | So basically what I'm trying to do here is ask a user for a random string input, for example:
```
asdf34fh2
```
And I want to pull the numbers out of them into a list and get `[3,4,2]` but I keep getting `[34, 2]`.
```
import re
def digit_product():
str1 = input("Enter a string with numbers: ")
if str1.isalpha():
print('Not a string with numbers')
str1 = input("Enter a string with numbers: ")
else:
print(re.findall(r'\d+', str1))
digit_product()
```
And then I want to take that list of numbers and multiply them, and ultimately get 24. | Your regular expression, `\d+`, is the culprit here. The `+` means it matches one or more consecutive digits (`\d`):
```
asdf34fh2
^- ^
\ \_ second match: one or more digits ("2")
\____ first match: one or more digits ("34")
```
It looks like you only want to match one digit, so use `\d` without the `+`. |
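Putting it together for the stated goal (single digits, then their product, 24 for the sample input), a standalone sketch:

```python
import re
from functools import reduce

s = "asdf34fh2"

digits = [int(d) for d in re.findall(r'\d', s)]   # one digit per match
print(digits)  # [3, 4, 2]

product = reduce(lambda a, b: a * b, digits, 1)
print(product)  # 24
```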
Python Numpy array conversion | 33,478,933 | 3 | 2015-11-02T13:39:05Z | 33,479,022 | 8 | 2015-11-02T13:43:22Z | [
"python",
"numpy"
] | I've got 2 numpy arrays x1 and x2. Using python 3.4.3
```
x1 = np.array([2,4,4])
x2 = np.array([3,5,3])
```
I would like to get a numpy array like this:
```
[[2,3],[4,5],[4,3]]
```
How would I go about this? | You can use [`numpy.column_stack`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.column_stack.html):
```
In [40]: x1 = np.array([2,4,4])
In [41]: x2 = np.array([3,5,3])
In [42]: np.column_stack((x1, x2))
Out[42]:
array([[2, 3],
[4, 5],
[4, 3]])
``` |
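For reference, a couple of equivalent spellings (these alternatives are my addition, not part of the answer):

```python
import numpy as np

x1 = np.array([2, 4, 4])
x2 = np.array([3, 5, 3])
expected = np.array([[2, 3], [4, 5], [4, 3]])

assert (np.column_stack((x1, x2)) == expected).all()
assert (np.stack((x1, x2), axis=1) == expected).all()  # NumPy >= 1.10
assert (np.vstack((x1, x2)).T == expected).all()
print(np.column_stack((x1, x2)).shape)  # (3, 2)
```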
Can you use C++ DLLs in C# code in a UWP? | 33,489,924 | 2 | 2015-11-03T01:23:08Z | 33,490,707 | 7 | 2015-11-03T03:04:44Z | [
"c#",
"python",
"c++",
"dll",
"uwp"
] | I wrote a C++ Class Library in Visual Studio that just defines a function that invokes some Python:
```
#pragma once
#include <Python.h>
extern "C"
__declspec(dllexport)
void python()
{
Py_Initialize();
PyRun_SimpleString("2 + 2");
}
```
I made another project in the same solution that was a C# Blank Universal app. I tried to reference the DLL generated from the previous project I mentioned:
```
using System;
...
namespace StartupApp
{
...
sealed partial class App : Application
{
private const string CPPPythonInterfaceDLL = @"pathtodll";
[DllImport(CPPPythonInterfaceDLL, ExactSpelling = true, CallingConvention = CallingConvention.Cdecl)]
private static extern void python();
public static void Python()
{
python();
}
...
public App()
{
...
Python();
}
...
}
}
```
The app is in a Release configuration.
Whenever I try to run the app on my Local Machine, it always gives an error:
```
The program '[2272] StartupApp.exe' has exited with code -1073741790 (0xc0000022).
Activation of the Windows Store app 'ab6a8ef2-1fa8-4cc7-b7b3-fb7420af7dc3_7dk3a6v9mg4g6!App' failed with error 'The app didn't start'.
```
So my question is this: can I reference a C++ class library from a C# UWP project? Or does the security on UWP apps not allow this?
Or is it because of Python.h?
EDIT:
I built the project with a DLL project and a Runtime Component that wrapped it, and now I have this error:
An exception of type 'System.DllNotFoundException' occurred:
```
'System.DllNotFoundException' occurred in StartupApp.exe but was not handled in user code
Additional information: Unable to load DLL 'pathtodll': Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
```
I added a user to the DLL with the object name "Everyone" (I am not sure how else to give everyone permissions) but the error still comes up. | Firstly, UWP can't consume a legacy C++ dll just by DLLImport.
If you want to expose legacy C++ functions to C#, the first suggestion is to wrap that C++ logic in a WinRT component. Then you can reference this component in a UWP application with the following steps: add it to the project, open the files' properties in the Solution Explorer window, and mark them as content to be included in the app package. [This post](https://msdn.microsoft.com/en-us/library/mt186162.aspx) would be helpful. [This one](http://blogs.msdn.com/b/chrisbarker/archive/2014/10/08/consuming-a-c-library-project-from-a-windows-phone-store-app.aspx) provides more detailed steps.
If you want to PInvoke the dll, you can follow these steps (You can refer to this [MSDN post](https://social.msdn.microsoft.com/Forums/en-US/f4440c2d-9386-4df5-9419-6b016cf8b345/uwp-native-pinvoke-dll-compiled-with-mingw?forum=wpdevelop)):
1. Add win32 dll into your UWP project making sure to set its type as 'content'
2. Then, in the proper .cs file, use DllImport to P/Invoke the dll.
There is one more thing: You need to make sure your Python dll is not using prohibited APIs in WinRT. You can check this by using /ZW compile option for the dll. |
sklearn dumping model using joblib, dumps multiple files. Which one is the correct model? | 33,497,314 | 4 | 2015-11-03T10:54:41Z | 33,500,427 | 7 | 2015-11-03T13:30:31Z | [
"python",
"machine-learning",
"scikit-learn",
"joblib"
] | I did a sample program to train a SVM using sklearn. Here is the code
```
from sklearn import svm
from sklearn import datasets
from sklearn.externals import joblib
clf = svm.SVC()
iris = datasets.load_iris()
X, y = iris.data, iris.target
clf.fit(X, y)
print(clf.predict(X))
joblib.dump(clf, 'clf.pkl')
```
When I dump the model file I get this amount of files. :
*['clf.pkl', 'clf.pkl\_01.npy', 'clf.pkl\_02.npy', 'clf.pkl\_03.npy', 'clf.pkl\_04.npy', 'clf.pkl\_05.npy', 'clf.pkl\_06.npy', 'clf.pkl\_07.npy', 'clf.pkl\_08.npy', 'clf.pkl\_09.npy', 'clf.pkl\_10.npy', 'clf.pkl\_11.npy']*
I am confused if I did something wrong. Or is this normal? What are the \*.npy files? And why are there 11? | To save everything into one file, you should set compression to True or any number (1, for example).
But you should know that the separated representation of numpy arrays is necessary for the main features of joblib dump/load: joblib can save and load objects containing numpy arrays faster than pickle thanks to this separated representation, and, unlike pickle, joblib can correctly save and load objects with memmapped numpy arrays. If you want a single-file serialization of the whole object (and don't need memmapped arrays), I think it's better to use pickle, because in this case, AFAIK, joblib's dump/load will work at the same speed as pickle.
```
import numpy as np
from sklearn.externals import joblib
vector = np.arange(0, 10**7)
%timeit joblib.dump(vector, 'vector.pkl')
# 1 loops, best of 3: 818 ms per loop
# file size ~ 80 MB
%timeit vector_load = joblib.load('vector.pkl')
# 10 loops, best of 3: 47.6 ms per loop
# Compressed
%timeit joblib.dump(vector, 'vector.pkl', compress=1)
# 1 loops, best of 3: 1.58 s per loop
# file size ~ 15.1 MB
%timeit vector_load = joblib.load('vector.pkl')
# 1 loops, best of 3: 442 ms per loop
# Pickle
%%timeit
with open('vector.pkl', 'wb') as f:
pickle.dump(vector, f)
# 1 loops, best of 3: 927 ms per loop
%%timeit
with open('vector.pkl', 'rb') as f:
vector_load = pickle.load(f)
# 10 loops, best of 3: 94.1 ms per loop
``` |
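For the plain-pickle route, the whole object always lands in a single file; a minimal stdlib-only round-trip sketch (my addition, not part of the answer):

```python
import os
import pickle
import tempfile

data = {'weights': list(range(1000)), 'bias': 0.5}

out_dir = tempfile.mkdtemp()
path = os.path.join(out_dir, 'model.pkl')
with open(path, 'wb') as f:
    pickle.dump(data, f)

with open(path, 'rb') as f:
    restored = pickle.load(f)

assert restored == data
print(os.listdir(out_dir))  # ['model.pkl'] -- exactly one file on disk
```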
Strange inline assignment | 33,500,567 | 5 | 2015-11-03T13:37:45Z | 33,500,638 | 10 | 2015-11-03T13:41:30Z | [
"python",
"tuples",
"variable-assignment"
] | I'm struggling with this strange behaviour in Python (2 and 3):
```
>>> a = [1, 2]
>>> a[a.index(1)], a[a.index(2)] = 2, 1
```
This results in:
```
>>> a
[1, 2]
```
But if you write
```
>>> a = [1, 2]
>>> a[a.index(1)], a[a.index(2)] = x, y
```
where `x, y != 2, 1` (can be `1, 1`, `2, 2` , `3, 5`, etc.), this results in:
```
>>> a == [x, y]
True
```
As one would expect. Why doesn't `a[a.index(1)], a[a.index(2)] = 2, 1` produce the result `a == [2, 1]`?
```
>>> a == [2, 1]
False
``` | Because it actually gets interpreted like this:
```
>>> a = [1, 2]
>>> a
[1, 2]
>>> a[a.index(1)] = 2
>>> a
[2, 2]
>>> a[a.index(2)] = 1
>>> a
[1, 2]
```
To quote, per the [standard rules for assignment](https://docs.python.org/2/reference/simple_stmts.html#assignment-statements) (emphasis mine):
> * If the target list is a comma-separated list of targets: The object must be an iterable with the same number of items as there are targets
> in the target list, and the items are assigned, **from left to right**, to
> the corresponding targets.
The assignment to `a[a.index(1)]` (i.e. `a[0]`) happens *before* the second assignment asks for `a.index(2)`, by which time `a.index(2) == 0`.
You will see the same behaviour for any assignment:
```
foo = [a, b]
foo[foo.index(a)], foo[foo.index(b)] = x, y
```
where `x == b` (in this case, any assignment where the first value on the right-hand side is `2`). |
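The step-by-step trace can be asserted directly, together with a position-based swap that does work (standalone sketch):

```python
a = [1, 2]
a[a.index(1)], a[a.index(2)] = 2, 1
print(a)           # [1, 2] -- the two left-to-right assignments cancel out
assert a == [1, 2]

# swapping by position avoids re-searching the already-mutated list
b = [1, 2]
b[0], b[1] = b[1], b[0]
print(b)           # [2, 1]
assert b == [2, 1]
```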
Create and import helper functions in tests without creating packages in test directory using py.test | 33,508,060 | 11 | 2015-11-03T20:04:09Z | 33,515,264 | 7 | 2015-11-04T06:41:34Z | [
"python",
"unit-testing",
"py.test"
] | **Question**
How can I import helper functions in test files without creating packages in the `test` directory?
**Context**
I'd like to create a test helper function that I can import in several tests. Say, something like this:
```
# In common_file.py
def assert_a_general_property_between(x, y):
# test a specific relationship between x and y
assert ...
# In test/my_test.py
def test_something_with(x):
some_value = some_function_of_(x)
assert_a_general_property_between(x, some_value)
```
*Using Python 3.5, with py.test 2.8.2*
**Current "solution"**
I'm currently doing this by importing a module inside my project's `test` directory (which is now a package), but I'd like to do it with some other mechanism if possible (so that my `test` directory doesn't contain packages but just tests, and the tests can be run on an installed version of the package, as is recommended [here in the py.test documentation on good practices](http://pytest.org/latest/goodpractises.html#choosing-a-test-layout-import-rules)). | My option is to create an extra dir inside the `tests` dir and add it to the Python path in `conftest.py`, like so:
```
tests/
helpers/
utils.py
...
conftest.py
setup.cfg
```
in the `conftest.py`
```
import sys
import os
sys.path.append(os.path.join(os.path.dirname(__file__), 'helpers'))
```
in `setup.cfg`
```
[pytest]
norecursedirs=tests/helpers
```
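A self-contained sketch of the mechanism (the helper body here is invented for illustration): once the `helpers` directory is appended to `sys.path`, as the `conftest.py` above does, the module becomes importable by its bare name:

```python
import os
import sys
import tempfile

# simulate the tests/helpers layout from above
root = tempfile.mkdtemp()
helpers = os.path.join(root, "helpers")
os.mkdir(helpers)
with open(os.path.join(helpers, "utils.py"), "w") as f:
    f.write("def assert_a_general_property_between(x, y):\n"
            "    assert y == x * 2\n")

# the line conftest.py runs:
sys.path.append(helpers)

import utils  # now resolvable by its bare module name
utils.assert_a_general_property_between(2, 4)
print("helper imported and ran")
```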
The module will then be available via `import utils`; just be careful about name clashes. |
Create block diagonal numpy array from a given numpy array | 33,508,322 | 7 | 2015-11-03T20:20:58Z | 33,508,367 | 8 | 2015-11-03T20:23:14Z | [
"python",
"arrays",
"numpy"
] | I have a 2-dimensional numpy array with an equal number of columns and rows. I would like to arrange them into a bigger array having the smaller ones on the diagonal. It should be possible to specify how often the starting matrix should be on the diagonal. For example:
```
a = numpy.array([[5, 7],
[6, 3]])
```
So if I wanted this array 2 times on the diagonal the desired output would be:
```
array([[5, 7, 0, 0],
[6, 3, 0, 0],
[0, 0, 5, 7],
[0, 0, 6, 3]])
```
For 3 times:
```
array([[5, 7, 0, 0, 0, 0],
[6, 3, 0, 0, 0, 0],
[0, 0, 5, 7, 0, 0],
[0, 0, 6, 3, 0, 0],
[0, 0, 0, 0, 5, 7],
[0, 0, 0, 0, 6, 3]])
```
Is there a fast way to implement this with numpy methods and for arbitrary sizes of the starting array (still considering the starting array to have the same number of rows and columns)? | Classic case of [`numpy.kron`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.kron.html) -
```
np.kron(np.eye(n), a)
```
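One caveat worth adding (my note, not part of the original answer): `np.eye` defaults to `float64`, so the result is float even when `a` is integer, as the sample run below shows. Passing an integer identity preserves the dtype:

```python
import numpy as np

a = np.array([[5, 7],
              [6, 3]])

n = 2
c = np.kron(np.eye(n, dtype=int), a)  # integer identity => integer result
print(c.dtype.kind)  # i (a signed integer dtype)
```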
Sample run -
```
In [57]: n = 2
In [58]: np.kron(np.eye(n), a)
Out[58]:
array([[ 5., 7., 0., 0.],
[ 6., 3., 0., 0.],
[ 0., 0., 5., 7.],
[ 0., 0., 6., 3.]])
In [59]: n = 3
In [60]: np.kron(np.eye(n), a)
Out[60]:
array([[ 5., 7., 0., 0., 0., 0.],
[ 6., 3., 0., 0., 0., 0.],
[ 0., 0., 5., 7., 0., 0.],
[ 0., 0., 6., 3., 0., 0.],
[ 0., 0., 0., 0., 5., 7.],
[ 0., 0., 0., 0., 6., 3.]])
``` |
Read cell content in an ipython notebook | 33,508,377 | 18 | 2015-11-03T20:23:48Z | 33,645,726 | 7 | 2015-11-11T07:08:31Z | [
"python",
"ipython",
"ipython-notebook"
] | I have an `ipython` notebook with mixed `markdown` and `python` cells.
And I'd like some of my `python` cells to read the adjacent `markdown` cells and process them as input.
**An example of the desired situation:**
> *CELL 1 (markdown)*: SQL Code to execute
>
> *CELL 2 (markdown)*: `select * from tbl where x=1`
>
> *CELL 3 (python)* : `mysql.query(ipython.previous_cell.content)`
(The syntax `ipython.previous_cell.content` is made up)
Executing "*CELL 3*" should be equivalent to `mysql.query("select * from tbl where x=1")`
How can this be done? | I think you are trying to attack the problem the wrong way.
First, yes, it is possible to get the adjacent markdown cell in a really hackish way, but it would not work in headless notebook execution.
What you want to do is use IPython cell magics, which allow arbitrary syntax as long as the cell starts with two percent signs followed by an identifier.
Typically you want SQL cells.
You can refer to the documentation about [cell magics](http://ipython.readthedocs.org/en/stable/config/custommagics.html),
or I can show you how to build one:
```
from IPython.core.magic import (
Magics, magics_class, cell_magic, line_magic
)
@magics_class
class StoreSQL(Magics):
def __init__(self, shell=None, **kwargs):
super().__init__(shell=shell, **kwargs)
self._store = []
        # inject our store into the user-available namespace under the
        # __mystore name
shell.user_ns['__mystore'] = self._store
@cell_magic
def sql(self, line, cell):
"""store the cell in the store"""
self._store.append(cell)
@line_magic
def showsql(self, line):
"""show all recorded statements"""
print(self._store)
    ## use the ipython load_ext mechanism here if distributed
get_ipython().register_magics(StoreSQL)
```
Now you can use SQL syntax in your python cells:
```
%%sql
select * from foo Where QUX Bar
```
a second cell:
```
%%sql
Insert Cheezburger into Can_I_HAZ
```
check what we executed (the 3 dashes show the input/output delimitation; you do not have to type them):
```
%showsql
---
['select * from foo Where QUX Bar', 'Insert Cheezburger into Can_I_HAZ']
```
And what you asked at the beginning in your question:
```
mysql.query(__mystore[-1])
```
This of course does require that you execute the previous cells in the right order. Nothing prevents you from using the `%%sql` syntax to name your cells, e.g. if `_store` is a `dict`, or better a class where you override `__getattr__` to act like `__getitem__`, so that you can access fields with dot syntax. This is left as an exercise to the reader; see also the end of this response:
```
@cell_magic
def sql(self, line, cell):
"""store the cell in the store"""
self._store[line.strip()] = cell
```
you can then use sql cell like
```
%%sql A1
set foo TO Bar where ID=9
```
And then in your Python cells
```
mysql.execute(__mystore.A1)
```
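For the record, a minimal sketch of the dict-backed store with dot access alluded to above (illustrative only, kept separate from the magic class):

```python
class Store(dict):
    """A dict whose keys can also be read as attributes (store.A1)."""
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

store = Store()
store["A1"] = "set foo TO Bar where ID=9"
print(store.A1)  # set foo TO Bar where ID=9
```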
I would also strongly suggest looking at Catherine Devlin's SqlMagic for [IPython](https://github.com/catherinedevlin/ipython-sql), and this [Notebook gist](https://gist.github.com/Carreau/02e754e4948efdccf048) on GitHub that shows all of this live.
In the comments you seem to say you want to add `pig`; nothing prevents you from having a `%%pig` magic either. It is also possible to inject JavaScript to enable correct syntax highlighting of SQL and Pig, but that's beyond the scope of this question. |
Finding majority votes on -1s, 1s and 0s in list - python | 33,511,259 | 9 | 2015-11-03T23:43:58Z | 33,511,352 | 8 | 2015-11-03T23:53:28Z | [
"python",
"list",
"if-statement",
"binary",
"voting"
] | **How to find the majority votes for a list that can contain -1s, 1s and 0s?**
For example, given a list of:
```
x = [-1, -1, -1, -1, 0]
```
The majority is -1, so the output should return `-1`
Another example, given a list of:
```
x = [1, 1, 1, 0, 0, -1]
```
The majority vote would be `1`
And when we have a tie, the majority vote should return 0, e.g.:
```
x = [1, 1, 1, -1, -1, -1]
```
This should also return zero:
```
x = [1, 1, 0, 0, -1, -1]
```
The simplest approach to getting the majority vote seems to be to sum the list up and check whether it's negative, positive, or 0.
```
>>> x = [-1, -1, -1, -1, 0]
>>> sum(x) # So majority -> 0
-4
>>> x = [-1, 1, 1, 1, 0]
>>> sum(x) # So majority -> 1
2
>>> x = [-1, -1, 1, 1, 0]
>>> sum(x) # So majority is tied, i.e. -> 0
0
```
After the sum, I could do this check to get the majority vote, i.e.:
```
>>> x = [-1, 1, 1, 1, 0]
>>> majority = -1 if sum(x) < 0 else 1 if sum(x)!=0 else 0
>>> majority
1
>>> x = [-1, -1, 1, 1, 0]
>>> majority = -1 if sum(x) < 0 else 1 if sum(x)!=0 else 0
>>> majority
0
```
But as noted previously, it's ugly: [Python putting an if-elif-else statement on one line](http://stackoverflow.com/questions/14029245/python-putting-an-if-elif-else-statement-on-one-line) and not pythonic.
So the solution seems to be
```
>>> x = [-1, -1, 1, 1, 0]
>>> if sum(x) == 0:
... majority = 0
... else:
... majority = -1 if sum(x) < 0 else 1
...
>>> majority
0
```
---
# EDITED
But there are cases where `sum()` won't work, e.g. @RobertB's:
```
>>> x = [-1, -1, 0, 0, 0, 0]
>>> sum(x)
-2
```
But in this case the majority vote should be 0!! | I am assuming that votes for 0 count as votes. So `sum` is not a reasonable option.
Try a Counter:
```
>>> from collections import Counter
>>> x = Counter([-1,-1,-1, 1,1,1,1,0,0,0,0,0,0,0,0])
>>> x
Counter({0: 8, 1: 4, -1: 3})
>>> x.most_common(1)
[(0, 8)]
>>> x.most_common(1)[0][0]
0
```
So you could write code like:
```
from collections import Counter
def find_majority(votes):
vote_count = Counter(votes)
top_two = vote_count.most_common(2)
if len(top_two)>1 and top_two[0][1] == top_two[1][1]:
# It is a tie
return 0
return top_two[0][0]
>>> find_majority([1,1,-1,-1,0]) # It is a tie
0
>>> find_majority([1,1,1,1, -1,-1,-1,0])
1
>>> find_majority([-1,-1,0,0,0]) # Votes for zero win
0
>>> find_majority(['a','a','b',]) # Totally not asked for, but would work
'a'
``` |
when installing pyaudio, pip cannot find portaudio.h in /usr/local/include | 33,513,522 | 10 | 2015-11-04T04:10:11Z | 33,821,084 | 24 | 2015-11-20T07:23:43Z | [
"python",
"osx",
"pyaudio"
] | I'm using mac osx 10.10
As the PyAudio Homepage said, I install the PyAudio using
```
brew install portaudio
pip install pyaudio
```
the installation of portaudio seems successful, I can find headers and libs in /usr/local/include and /usr/local/lib
but when I try to install pyaudio, it gives me an error that
```
src/_portaudiomodule.c:29:10: fatal error: 'portaudio.h' file not found
#include "portaudio.h"
^
1 error generated.
error: command 'cc' failed with exit status 1
```
actually it is in /usr/local/include
why can't it find the file?
Some answers to similar questions are not working for me (like using virtualenv, or compiling it manually), and I want to find a simple way to solve this. | My workaround on Mac 10.11.1 was:
`$ pip install --global-option='build_ext' --global-option='-I/usr/local/include' --global-option='-L/usr/local/lib' pyaudio` |
Column alias after groupBy in pyspark | 33,516,490 | 5 | 2015-11-04T07:56:22Z | 33,517,194 | 7 | 2015-11-04T08:39:56Z | [
"python",
"scala",
"apache-spark",
"pyspark"
] | I need the resulting data frame in the line below, to have an alias name "maxDiff" for the max('diff') column after groupBy. However, the below line does not make any change, nor throw an error.
```
grpdf = joined_df.groupBy(temp1.datestamp).max('diff').alias("maxDiff")
``` | This is because you are aliasing the whole `DataFrame` object, not the `Column`. Here's an example of how to alias the `Column` only:
```
import pyspark.sql.functions as func
grpdf = joined_df \
.groupBy(temp1.datestamp) \
.max('diff') \
.select(func.col("max(diff)").alias("maxDiff"))
``` |
Trying to count words in a file using Python | 33,529,334 | 4 | 2015-11-04T18:28:12Z | 33,529,347 | 7 | 2015-11-04T18:29:10Z | [
"python",
"file"
] | I am attempting to count the number of 'difficult words' in a file, which requires me to count the number of letters in each word. For now, I am only trying to get single words, one at a time, from a file. I've written the following:
```
file = open('infile.txt', 'r+')
fileinput = file.read()
for line in fileinput:
for word in line.split():
print(word)
```
Output:
```
t
h
e
o
r
i
g
i
n
.
.
.
```
It seems to be printing one character at a time instead of one word at a time. I'd really like to know more about what is actually happening here. Any suggestions? | `file.read()` returns the whole file as a single string, and iterating over a string yields one character at a time; that is exactly what you are seeing. Iterate over lines instead, for example with [splitlines()](https://docs.python.org/2/library/stdtypes.html#str.splitlines):
```
fopen = open('infile.txt', 'r+')
fileinput = fopen.read()
for line in fileinput.splitlines():
for word in line.split():
print(word)
fopen.close()
```
Without [splitlines()](https://docs.python.org/2/library/stdtypes.html#str.splitlines):
You can also use the **with** statement to open the file. It closes the file automagically:
```
with open('infile.txt', 'r+') as fopen:
for line in fopen:
for word in line.split():
print(word)
``` |
How do I specify that the return type of a method is the same as the class itself in python? | 33,533,148 | 10 | 2015-11-04T22:17:54Z | 33,533,514 | 7 | 2015-11-04T22:44:33Z | [
"python",
"python-3.x",
"pycharm",
"typing",
"python-3.5"
] | I have the following code in python 3:
```
class Position:
def __init__(self, x: int, y: int):
self.x = x
self.y = y
def __add__(self, other: Position) -> Position:
return Position(self.x + other.x, self.y + other.y)
```
But my editor (PyCharm) says that the reference Position cannot be resolved (in the `__add__` method). How should I specify that I expect the return type to be of type `Position`?
Edit: I think this is actually a PyCharm issue. It actually uses the information in its warnings, and code completion

But correct me if I'm wrong and I need to use some other syntax. | If you try to run this code you will get:
```
NameError: name 'Position' is not defined
```
This is because `Position` must be defined before you can use it on an annotation. I don't like any of the workarounds suggested for similar questions.
# 1. Define a dummy `Position`
Before the class definition, place a dummy definition:
```
class Position(object):
pass
class Position(object):
...
```
This will get rid of the `NameError` and may even look OK:
```
>>> Position.__add__.__annotations__
{'other': __main__.Position, 'return': __main__.Position}
```
But is it?
```
>>> for k, v in Position.__add__.__annotations__.items():
... print(k, 'is Position:', v is Position)
return is Position: False
other is Position: False
```
# 2. Use a string
Just use a string instead of the class itself:
```
...
def __add__(self, other: 'Position') -> 'Position':
...
```
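A quick runnable check (a sketch) showing that the string form works, and that the string can later be resolved back to the real class with `typing.get_type_hints`:

```python
import typing

class Position:
    def __init__(self, x: int, y: int):
        self.x = x
        self.y = y

    def __add__(self, other: 'Position') -> 'Position':
        return Position(self.x + other.x, self.y + other.y)

p = Position(1, 2) + Position(3, 4)
print(p.x, p.y)  # 4 6

# the forward-reference strings resolve to the actual class
hints = typing.get_type_hints(Position.__add__)
print(hints['return'] is Position)  # True
```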
If you use the Django framework this may seem familiar, as Django models use strings for forward references (foreign key definitions where the foreign model is `self` or is not declared yet).
Looks like this is the recommended approach [according to the docs](https://www.python.org/dev/peps/pep-0484/#forward-references):
> # Forward references
>
> When a type hint contains names that have not been defined yet, that definition may be expressed as a string literal, to be resolved later.
>
> A situation where this occurs commonly is the definition of a container class, where the class being defined occurs in the signature of some of the methods. For example, the following code (the start of a simple binary tree implementation) does not work:
```
class Tree:
def __init__(self, left: Tree, right: Tree):
self.left = left
self.right = right
```
> To address this, we write:
```
class Tree:
def __init__(self, left: 'Tree', right: 'Tree'):
self.left = left
self.right = right
```
> The string literal should contain a valid Python expression (i.e., compile(lit, '', 'eval') should be a valid code object) and it should evaluate without errors once the module has been fully loaded. The local and global namespace in which it is evaluated should be the same namespaces in which default arguments to the same function would be evaluated.
# 3. Monkey-patch in order to add the annotations:
You may use this if you are using a decorator that enforces contracts:
```
...
def __add__(self, other):
return self.__class__(self.x + other.x, self.y + other.y)
Position.__add__.__annotations__['return'] = Position
Position.__add__.__annotations__['other'] = Position
```
At least it seems right:
```
>>> for k, v in Position.__add__.__annotations__.items():
... print(k, 'is Position:', v is Position)
return is Position: True
other is Position: True
```
But thinking about it, if you are smart enough to write a contract-enforcing decorator, you probably can teach it to safely `eval` the annotation if it is of type `builtins.str` instead of `builtins.type`.
# Conclusion
The blessed way is the #2. |
What would be a good example of "sending messages to objects" within Python? | 33,534,264 | 6 | 2015-11-04T23:47:40Z | 33,534,723 | 7 | 2015-11-05T00:37:41Z | [
"python",
"ruby",
"oop",
"smalltalk"
] | I recently watched [*Nothing is Something*](https://www.youtube.com/watch?v=9mLK_8hKii8) by Sandi Metz, and in her talk she uses the idea of **sending messages to objects** and goes over how that's done in Ruby. The [4:10](https://www.youtube.com/watch?v=9mLK_8hKii8&t=250)-7:30 section would be a good entry point where she begins on the topic (it's a building block that then permeates over half the talk).
Now, for some background: I don't have a lot of experience with writing programs in Ruby, and zero experience with smalltalk. My OO experience is somewhat limited and *very* stale. I also looked up `send object message python` in Google, and all I saw was relating to sending messages over sockets and email, which isn't quite what I had in mind.
**I'm not sure how to interpret this concept in Python, or how to implement it.** Any ideas? :)
---
*Side note: She mentions her OO-views are derived from experience with smalltalk, so I'm adding that as a tag to this question.* | Python uses a slightly different terminology. It is called "calling a method". But it's the same thing. (C++ calls it "calling a virtual function". Again, same difference.)
Personally, I don't like that terminology: it focuses too much on the implementation detail and loses much of the metaphoric power of the "message sending" terminology.
There are other differences with Python, some of the most important ones being:
* object-oriented data abstraction is achieved via convention as opposed to being a builtin language feature (e.g. Smalltalk, Ruby), or Design Pattern (Scheme, ECMAScript)
* not all subroutines are methods
The fundamental idea of OO is messaging: you send a message to an object, the object responds. Just like in real life, you have no idea what the object does with the message. All you can observe is the reply. The object might process the message itself, it might employ the help of others, it might blindly forward the message without actually doing any work itself.
Since you can't know what the object does with the message and all you can observe is the object's response, all you know about the object is its *protocol* (the messages it understands and how it responds to them). You don't know its implementation, you don't know its representation. That's how OO achieves Data Abstraction, Information Hiding, Data Hiding, Encapsulation.
Also, since each object decides independently how to respond to a message, you get Polymorphism.
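A tiny sketch of that idea in Python (the class and method names are invented for illustration): the sender knows only the message name; each receiver decides its own response:

```python
class Rabbit:
    def jump(self):
        return "the rabbit hops"

class Robot:
    def jump(self):
        return "servos engage"

def send(receiver, message):
    # "sending a message": look the response up on the receiver itself
    return getattr(receiver, message)()

for obj in (Rabbit(), Robot()):
    print(send(obj, "jump"))
```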
*One* typical way of responding to a message is executing a method corresponding to that message. But that is an implementation mechanism, which is why I don't like that terminology. As a metaphor, it carries none of the connotations I mentioned above.
[Alan Kay has said that OO is about three things, Messaging, Data Abstraction, and Polymorphism](http://www.purl.org/stefan_ram/pub/doc_kay_oop_en):
> OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things.
He later clarified that [the Big Thing is Messaging](http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html):
> Just a gentle reminder that I took some pains at the last OOPSLA to try to remind everyone that Smalltalk is not only NOT its syntax or the class library, it is not even about classes. I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea.
>
> The big idea is "messaging" -- that is what the kernal of Smalltalk/Squeak is all about (and it's something that was never quite completed in our Xerox PARC phase). The Japanese have a small word -- ma -- for "that which is in between" -- perhaps the nearest English equivalent is "interstitial". The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be. Think of the internet -- to live, it (a) has to allow many different kinds of ideas and realizations that are beyond any single standard and (b) to allow varying degrees of safe interoperability between these ideas.
And in fact, as I laid out above, the other two are just consequences of Messaging, in my opinion.
When Alan Kay came up with the term "Object Orientation", he was heavily inspired by what would later become the ARPANet and then the Internet: independent machines ("objects") with their own private memory ("instance variables") that communicate with each other by sending messages.
Similar points are also made in [*On Understanding Data Abstraction, Revisited*](http://CS.UTexas.Edu/~wcook/Drafts/2009/essay.pdf) by [William R. Cook](http://WCook.BlogSpot.Com/) and also his [*Proposal for Simplified, Modern Definitions of "Object" and "Object Oriented"*](http://WCook.BlogSpot.Com/2012/07/proposal-for-simplified-modern.html).
> Dynamic dispatch of operations is the essential characteristic of objects. It means that the operation to be invoked is a dynamic property of the object itself. Operations cannot be identified statically, and there is no way in general to know exactly what operation will be executed in response to a given request, except by running it. This is exactly the same as with first-class functions, which are always dynamically dispatched.
Python's object system is a bit different from other languages. Python was originally a procedural language, the object system was added later on, with the goal of making the absolut minimal amount of changes to the language as possible. The major data structure in Python were `dict`s (maps / hash tables), and all behavior was in functions. Even before Python's OO features, this minimalism shows itself, e.g. local and global variables are actually just keys in a `dict`. And so, it was natural to make objects and classes much like `dict`s and reuse that concept, an object is essentially a `dict` of values, and a class is a `dict` of functions. There is no separate idea of "method", rather, you have functions which take the receiver as their first argument. (In most other OO languages, the receiver is a "hidden" zeroth argument and available using a special keyword such as `self`, `this`, or `me`.) |
What would be a good example of "sending messages to objects" within Python? | 33,534,264 | 6 | 2015-11-04T23:47:40Z | 33,534,789 | 7 | 2015-11-05T00:44:19Z | [
"python",
"ruby",
"oop",
"smalltalk"
] | I recently watched [*Nothing is Something*](https://www.youtube.com/watch?v=9mLK_8hKii8) by Sandi Metz, and in her talk she uses the idea of **sending messages to objects** and goes over how that's done in Ruby. The [4:10](https://www.youtube.com/watch?v=9mLK_8hKii8&t=250)-7:30 section would be a good entry point where she begins on the topic (it's a building block that then permeates over half the talk).
Now, for some background: I don't have a lot of experience with writing programs in Ruby, and zero experience with smalltalk. My OO experience is somewhat limited and *very* stale. I also looked up `send object message python` in Google, and all I saw was relating to sending messages over sockets and email, which isn't quite what I had in mind.
**I'm not sure how to interpret this concept in Python, or how to implement it.** Any ideas? :)
---
*Side note: She mentions her OO-views are derived from experience with smalltalk, so I'm adding that as a tag to this question.* | Essentially, "sending a message" is what most OO languages call "calling a method". The key difference is that in dynamic languages you don't know whether the object knows a method or not. So if you do `var.jump()` you have no idea what `var` is. Maybe it's a rabbit and it can jump, or maybe it is a rock and it doesn't know how to do that (the Rock class does not implement the `jump()` method). So from a conceptual point of view, sending a message is asking some object to do something (it may not even know what you are asking), while invoking a method is making an object you know do what you want.
In Smalltalk and Ruby it's really easy to send messages with another message, so that if you use `1.send(:to_s)` it's the same as `1.to_s`. And `1.send(:to_s)` itself is a message `send` that is sent to `1` with parameter `:to_s`. So you can also extend this to `1.send(:send, :to_s)`.
In Python, metaprogramming is a real pain in the ass. So if you want the equivalent of `1.send(:to_s)`, you have to use something like `getattr(1, "to_s")()`. Here you obtain the `to_s` "method" itself using `getattr` and invoke it with `()`. *This is also a good example of bad Python design. How do you do something like `1.send(:send, :to_s)` in Python? In Ruby, `send` is itself a method on every object, so you can also send it via itself. In Python, `getattr` is an external standalone function.*
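A runnable version of the `getattr` pattern (using `bit_length`, a method Python ints actually have, since `to_s` is Ruby's name):

```python
# Ruby: 1.send(:to_s)  ->  Python: look the method up by name, then call it
message = "bit_length"           # a method ints actually have
method = getattr(1, message)     # obtain the bound method...
print(method())                  # ...and invoke it: prints 1
```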
If you really want to learn the concepts of OO programming, I suggest you play with Smalltalk, because I find it to have the best design (it is also not influenced by C++ or similar languages, because in the '70s they were not around). There is a nice book, [Pharo by Example](http://pharobyexample.org), that teaches you to program in [pharo](/questions/tagged/pharo "show questions tagged 'pharo'"). There is also a [work in progress version](https://ci.inria.fr/pharo-contribution/view/Books/job/UpdatedPharoByExample/lastSuccessfulBuild/artifact/book-result/UpdatedPharoByExample.pdf) updated for the new releases of the environment. |
Best way to do frequency counting in Julia | 33,534,791 | 3 | 2015-11-05T00:44:28Z | 33,541,781 | 7 | 2015-11-05T10:10:28Z | [
"python",
"binary",
"statistics",
"counter",
"julia-lang"
] | I have a [binary file](https://www.dropbox.com/s/kn8i3us95tgqlq4/160919?dl=0) and I am doing frequency counting in julia.
```
using PyPlot
import StatsBase
const stb=StatsBase
function getall(fname)
b=Mmap.mmap(fname,Vector{Int32})
#a=open(fname)
#b=reinterpret(Int32,readbytes(a))
d=stb.countmap(b)
x=collect(keys(d)) & 0x7ffffff
y=collect(values(d))
#plot(x,y,"r^")
#xlim(0,3000)
#ylim(0,3e5)
#grid("on")
return x,y
end
```
In Python, I use `numpy.unique` and `numpy.memmap` and get similar performance (~550 ms). Could the Julia code be faster? Is there any other way to count instead of using StatsBase? | The `countmap` operation is a standard operation in any programming language. Additionally, it is also "raw", like sorting, which means it performs a basic, common operation over the input data. Operations of this kind are hard to optimize, as they are done similarly in most languages; if they are not fast enough in the source language, a specialized routine (read: one written in C/C++) is called.
Julia is no exception. Some "raw" linear algebra is outsourced to heavily optimized libraries.
To put a productive (and Julia-positive) spin on this answer, there are algorithmic methods to treat special cases of input which would yield a speedup over the general algorithm (i.e. using a hash-based counter Dict). The ability to code these special cases within Julia demonstrates its speed and its attempt to solve the so-called two-language problem.
Concretely, the following tries to optimize for files with an uneven distribution of 32-bit words (like text files, for example), by bypassing a general hash-based Dict and using a faster simple hash and a 16-bit lookup table.
On my test file, it achieved a 10% speedup over the `countmap` implementation in the OP. A modest improvement :).
```
using DataStructures
function getall4(fname)
b=Mmap.mmap(fname,Vector{UInt32})
c = zeros(Int,2^16)
v = Array(UInt16,2^16)
l = length(b)
for i=1:l
d1 = b[i]&0xFFFF
d2 = d1 $ (b[i]>>16)
if d1==v[d2+1]
c[d2+1] += 1
else
c[d2+1] -= 1
end
if (c[d2+1]<=0)
c[d2+1] = 1
v[d2+1] = d1
end
end
cc = DataStructures.counter(UInt32)
fill!(c,0)
for i=1:l
d1 = b[i]&0xFFFF
d2 = d1 $ (b[i]>>16)
if v[d2+1]==d1
c[d2+1] += 1
end
end
for i=1:l
d1 = b[i]&0xFFFF
d2 = d1 $ (b[i]>>16)
if !(v[d2+1]==d1)
push!(cc,b[(i+1)>>1])
end
end
x = UInt32[]
y = Int[]
for i=1:(1<<16)
if c[i]>0
push!(x,(UInt32(i)<<16)+v[i])
push!(y,c[i])
end
end
append!(x,collect(keys(cc.map)))
append!(y,collect(values(cc.map)))
x,y
end
``` |
OS X - Deciding between anaconda and homebrew Python environments | 33,541,876 | 4 | 2015-11-05T10:14:37Z | 33,543,303 | 8 | 2015-11-05T11:22:27Z | [
"python",
"osx",
"numpy",
"homebrew",
"anaconda"
] | I use Python extensively on my Mac OS X, for both numerical applications and web development (roughly equally). I checked the number of Python installations I had on my laptop recently, and was shocked to find **four**:
```
Came with Mac OS X:
/usr/bin/python
Python 2.7.6 (default, Sep 9 2014, 15:04:36)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin
Installed via Homebrew
/usr/local/bin/python
Python 2.7.10 (default, Jul 13 2015, 12:05:58)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin
Installed via Anaconda/Miniconda
~/anaconda/bin/python
Python 2.7.10 |Anaconda 2.3.0 (x86_64)| (default, Oct 19 2015, 18:31:17)
[GCC 4.2.1 (Apple Inc. build 5577)] on darwin
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
Came with the downloaded .pkg from python.org
/System/Library/Frameworks/Python.framework/Versions/Current/bin/python
Python 2.7.6 (default, Sep 9 2014, 15:04:36)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin
```
I decided to unify all of this, and use `conda`. I removed the Homebrew version and the Python.org download (kept the main system one). Conda is great for numerical computing, because I can install Jupyter/Numpy/Pandas in the root environment, and not have to bother install virtualenvs for every project.
But now my entire web development workflow is messed up. None of my virtualenvs work, since apparently one's not supposed to use conda and virtualenv together. I tried to create conda environments from the `requirements.txt` file. One package I was using with django was "markdown\_deux", which is not available in the Conda repo. I looked at ways of building it, but creating a recipe takes a lot of effort (create YAML file, etc..)
Has anyone found a good compromise for this? I'm thinking of going back to the homebrew version for general use, and writing an alias for changing the path back to the conda version as necessary. Though this will also require tracking which one I'm using now.. | I use Homebrew Python for all my projects (data science, some web dev).
Conda is nothing fancy, you can have the same packages by hand with a combination of `pip` and [Homebrew science](https://github.com/Homebrew/homebrew-science). Actually, it is even better because you have more control on what you install.
You can use your virtualenvs only when you do web development. For the numerical applications you will probably want to have the latest versions of your packages at all times.
If you want to update all your packages at once with pip, you can use this command:
```
sudo -H pip freeze --local | grep -v '^\-e' | cut -d = -f 1 | xargs -n1 sudo -H pip install -U
``` |
What does the built-in function sum do with sum(list, [])? | 33,541,947 | 13 | 2015-11-05T10:17:27Z | 33,542,054 | 17 | 2015-11-05T10:22:26Z | [
"python"
] | When I want to unfold a list, I found a way like below:
```
>>> a = [[1, 2], [3, 4], [5, 6]]
>>> a
[[1, 2], [3, 4], [5, 6]]
>>> sum(a, [])
[1, 2, 3, 4, 5, 6]
```
I don't know what happened in these lines, and [the documentation](https://docs.python.org/2/library/functions.html#sum) states:
> `sum(iterable[, start])`
>
> Sums `start` and the items of an `iterable` from left to right and
> returns the total. `start` defaults to `0`. The iterable's items are
> normally numbers, and the `start` value is not allowed to be a string.
>
> For some use cases, there are good alternatives to `sum()`. The preferred, fast way to concatenate a sequence of strings is by calling
> `''.join(sequence)`. To add floating point values with extended
> precision, see `math.fsum()`. To concatenate a series of iterables,
> consider using `itertools.chain()`.
>
> New in version 2.3.
Don't you think that start should be a number? Why `[]` can be written here?
```
(sum(a, []))
``` | > Don't you think that start should be a number?
`start` *is* a number, by default; `0`, per the documentation you've quoted. Hence when you do e.g.:
```
sum((1, 2))
```
it is evaluated as `0 + 1 + 2` and it equals `3` and everyone's happy. If you want to start from a different number, you can supply that instead:
```
>>> sum((1, 2), 3)
6
```
So far, so good.
---
However, there are other things you can use `+` on, like lists:
```
>>> ['foo'] + ['bar']
['foo', 'bar']
```
If you try to use `sum` for this, though, expecting the same result, you get a `TypeError`:
```
>>> sum((['foo'], ['bar']))
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
sum((['foo'], ['bar']))
TypeError: unsupported operand type(s) for +: 'int' and 'list'
```
because it's now doing `0 + ['foo'] + ['bar']`.
To fix this, you can supply your own `start` as `[]`, so it becomes `[] + ['foo'] + ['bar']` and all is good again. So to answer:
> Why `[]` can be written here?
because although `start` defaults to a number, it doesn't *have* to be one; other things can be added too, and that comes in handy for things *exactly like what you're currently doing*. |
Python intersection of 2 lists of dictionaries | 33,542,997 | 2 | 2015-11-05T11:06:59Z | 33,543,164 | 7 | 2015-11-05T11:15:08Z | [
"python",
"python-3.4"
] | I have 2 lists of dicts like
```
list1 = [{'count': 351, 'evt_datetime': datetime.datetime(2015, 10, 23, 8, 45), 'att_value': 'red'},
{'count': 332, 'evt_datetime': datetime.datetime(2015, 10, 23, 8, 45), 'att_value': 'red'},
{'count': 336, 'evt_datetime': datetime.datetime(2015, 10, 23, 8, 45), 'att_value': 'red'},
{'count': 359, 'evt_datetime': datetime.datetime(2015, 10, 23, 8, 45), 'att_value': 'red'},
{'count': 309, 'evt_datetime': datetime.datetime(2015, 10, 23, 8, 45), 'att_value': 'red'}]
list2 = [{'count': 359, 'evt_datetime': datetime.datetime(2015, 10, 23, 8, 45), 'att_value': 'red'},
{'count': 351, 'evt_datetime': datetime.datetime(2015, 10, 23, 8, 45), 'att_value': 'red'},
{'count': 381, 'evt_datetime': datetime.datetime(2015, 10, 22, 8, 45), 'att_value': 'red'}]
```
I am trying to get common dicts from both the list. My desired output to be exact matches of the keys and values of the dict.
```
[{'count': 359, 'evt_datetime': datetime.datetime(2015, 10, 23, 8, 45), 'att_value': 'red'},
{'count': 351, 'evt_datetime': datetime.datetime(2015, 10, 23, 8, 45), 'att_value': 'red'}]
```
Can this be done efficiently in Python itself, or would it require a library like pandas? | Use a list comprehension:
```
[x for x in list1 if x in list2]
```
This returns me this list for your data:
```
[{'count': 351, 'evt_datetime': datetime.datetime(2015, 10, 23, 8, 45), 'att_value': 'red'}, {'count': 359, 'evt_datetime': datetime.datetime(2015, 10, 23, 8, 45), 'att_value': 'red'}]
``` |
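One caveat: the `x in list2` membership test makes this quadratic in the list sizes. Since the dictionary values in the question (ints, strings, `datetime` objects) are all hashable, a set of frozen item views gives a linear-time variant — a sketch, shown here with simplified dicts:

```python
list1 = [{'count': 351, 'att_value': 'red'},
         {'count': 332, 'att_value': 'red'},
         {'count': 359, 'att_value': 'red'}]
list2 = [{'count': 359, 'att_value': 'red'},
         {'count': 351, 'att_value': 'red'},
         {'count': 381, 'att_value': 'red'}]

# Hash each dict in list2 once; membership checks then cost O(1)
seen = {frozenset(d.items()) for d in list2}
common = [d for d in list1 if frozenset(d.items()) in seen]
print(common)
```

For a handful of dicts the plain list comprehension is fine; this only pays off on larger inputs.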
Color seaborn boxplot based in DataFrame column name | 33,544,910 | 6 | 2015-11-05T12:45:16Z | 33,548,337 | 7 | 2015-11-05T15:19:17Z | [
"python",
"python-3.x",
"pandas",
"matplotlib",
"seaborn"
] | I'd like to create a list of boxplots with the color of the box dependent on the name of the pandas.DataFrame column I use as input.
The column names contain strings that indicate an experimental condition based on which I want the box of the boxplot colored.
I do this to make the boxplots:
```
sns.boxplot(data = data.dropna(), orient="h")
plt.show()
```
This creates a beautiful list of boxplots with correct names. Now I want to give every boxplot that has 'prog +, DMSO+' in its name a red color, leaving the rest as blue.
I tried creating a dictionary with column names as keys and colors as values:
```
color = {}
for column in data.columns:
if 'prog+, DMSO+' in column:
color[column] = 'red'
else:
color[column] = 'blue'
```
And then using the dictionary as color:
```
sns.boxplot(data = data.dropna(), orient="h", color=color[column])
plt.show()
```
This does not work, understandably (there is no loop to go through the dictionary). So I make a loop:
```
for column in data.columns:
sns.boxplot(data = data[column], orient='h', color=color[column])
plt.show()
```
This does make boxplots of different colors, but all on top of each other and without the correct labels. If I could somehow stack these boxplots nicely in one plot below each other I'd be almost at what I want. Or is there a better way? | You should use the `palette` parameter, which handles multiple colors, rather than `color`, which handles a specific one. You can give `palette` a name, an ordered list, or a dictionary. The latter seems best suited to your question:
```
import seaborn as sns
sns.set_color_codes()
tips = sns.load_dataset("tips")
pal = {day: "r" if day == "Sat" else "b" for day in tips.day.unique()}
sns.boxplot(x="day", y="total_bill", data=tips, palette=pal)
```
[](http://i.stack.imgur.com/rOEx8.png) |
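To adapt this to the original question, the palette can be built straight from the column names with a dict comprehension (the column names below are made up for illustration); seaborn should then pick the box color per column from the dict:

```python
# Hypothetical column names standing in for data.columns
columns = ['prog+, DMSO+ run1', 'prog-, DMSO+ run1', 'prog+, DMSO+ run2']

pal = {c: 'red' if 'prog+, DMSO+' in c else 'blue' for c in columns}
print(pal)
# then: sns.boxplot(data=data.dropna(), orient='h', palette=pal)
```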
how to generate url from boto3 in amazon web services | 33,549,254 | 4 | 2015-11-05T15:59:36Z | 33,698,531 | 9 | 2015-11-13T17:28:42Z | [
"python",
"amazon-s3",
"boto3"
] | I have a Bucket in s3 and I am trying to pull the url of the image that is in there.
I am using boto3 and boto3 doesn't seem to have an implemented generate url method.
They have a core method, that generates url like this,
```
import botocore.session
session = botocore.session.get_session()
client = session.create_client('s3')
presigned_url = client.generate_presigned_url(
'get_object', Params={'Bucket': self.bucket_name, 'Key': self.key})
```
One thing I am forced to do is, I have to send the parameters along with each request using session object. And the above method does not allow me to set the session variables (ie .. aws credentials)
The closest I can get is this
```
session = Session(aws_access_key_id='342342342342', aws_secret_access_key='3434234322', region_name='us-east-1')
s3 = session.resource('s3')
object = s3.Object('my-dev-bucket', 'amazonKeyString')
print object.get()["Body"]
```
This gets me the Amazon S3 object, which is an object like
```
botocore.response.StreamingBody object at 0x7ffaff8cef50
```
Can I get a URL of the image this way?
Kindly help
Prabhakar S | I was able to get the signed URL without facing any issues.
I used the default session, since my AWS credentials are stored locally in the `~/.aws/credentials` file and my default region is set as needed in `~/.aws/config`:
```
import boto3
s3Client = boto3.client('s3')
s3Client.generate_presigned_url('get_object', Params = {'Bucket': 'www.mybucket.com', 'Key': 'hello.txt'}, ExpiresIn = 100)
```
If you need to pass params for Session, import boto3.session and create custom session
```
import boto3.session
session = boto3.session.Session(region_name='eu-central-1')
s3Client = session.client('s3')
``` |
list comprehension with numpy arrays - bad practice? | 33,551,962 | 6 | 2015-11-05T18:18:59Z | 33,552,005 | 8 | 2015-11-05T18:20:55Z | [
"python",
"numpy"
] | I am wondering if the below approach would be considered bad practice, and if so, if someone could give some guidance towards another approach.
Here is the code in question:
```
a = np.array([[1,2,3],[4,5,6]])
b = np.array([-5,5])
c = np.array([np.multiply(a[x],b[x]) for x in range(2)])
```
The objective here is to obtain an array of the same shape as 'a' where the values in the first element of 'a' are multiplied by the first element of 'b' and the values in the second element of 'a' are multiplied by the second element of 'b'
The above code works, but given the mixture of lists/arrays involved I'm concerned this is advised against - but I'm not clear on a more elegant solution. Many thanks in advance! | *NumPythonic* way would be to extend the dimensions of `b` to a 2D array with [`np.newaxis/None`](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#numpy.newaxis) and then let [`broadcasting`](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) come into play for a `vectorized elementwise multiplication`. The implementation would look like this -
```
c = a * b[:,None]
```
Once the dimensions are extended, you can also use `np.multiply` for the same effect, like so -
```
c = np.multiply(a,b[:,None])
```
Most importantly, here's some performance numbers to persuade you on using `broadcasting` -
```
In [176]: a = np.random.rand(2000,3000)
In [177]: b = np.random.rand(2000)
In [178]: %timeit np.array([np.multiply(a[x],b[x]) for x in range(a.shape[0])])
10 loops, best of 3: 118 ms per loop
In [179]: %timeit a * b[:,None]
10 loops, best of 3: 63.8 ms per loop
In [180]: %timeit np.multiply(a,b[:,None])
10 loops, best of 3: 64 ms per loop
``` |
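For completeness, here are the shape mechanics behind the broadcast, on the original arrays:

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])
b = np.array([-5, 5])

print(b.shape)           # (2,)
print(b[:, None].shape)  # (2, 1) -- now broadcastable against a's (2, 3)

c = a * b[:, None]
print(c)
# [[ -5 -10 -15]
#  [ 20  25  30]]
```

The `(2, 1)` array is stretched along its length-1 axis to match `(2, 3)`, so each row of `a` is multiplied by the corresponding scalar from `b`.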
Do I have Numpy 32 bit or 64 bit? | 33,553,549 | 5 | 2015-11-05T19:51:21Z | 33,553,718 | 12 | 2015-11-05T20:02:06Z | [
"python",
"numpy",
"memory",
"32bit-64bit"
] | How do I check if my installed numpy version is 32bit or 64bit?
Bonus Points for a solution which works inside a script and is system independent. | ```
In [65]: import numpy.distutils.system_info as sysinfo
In [69]: sysinfo.platform_bits
Out[69]: 64
```
This is based on [the value returned by `platform.architecture()`](https://github.com/numpy/numpy/blob/master/numpy/distutils/system_info.py#L153):
```
In [71]: import platform
In [72]: platform.architecture()
Out[74]: ('64bit', 'ELF')
``` |
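For a script that shouldn't poke at `numpy.distutils` internals, the same information — the interpreter's pointer size, which `platform_bits` reflects — is available from the standard library alone:

```python
import struct

bits = struct.calcsize('P') * 8  # size of a C pointer, in bits
print(bits)  # 64 on a 64-bit build, 32 on a 32-bit one
```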
The similar method from the nltk module produces different results on different machines. Why? | 33,558,709 | 14 | 2015-11-06T02:57:44Z | 33,809,495 | 13 | 2015-11-19T16:35:48Z | [
"python",
"nlp",
"nltk",
"similarity",
"corpus"
] | I have taught a few introductory classes to text mining with Python, and the class tried the similar method with the provided practice texts. Some students got different results for text1.similar() than others.
All versions and etc. were the same.
Does anyone know why these differences would occur? Thanks.
Code used at command line.
```
python
>>> import nltk
>>> nltk.download() #here you use the pop-up window to download texts
>>> from nltk.book import *
*** Introductory Examples for the NLTK Book ***
Loading text1, ..., text9 and sent1, ..., sent9
Type the name of the text or sentence to view it.
Type: 'texts()' or 'sents()' to list the materials.
text1: Moby Dick by Herman Melville 1851
text2: Sense and Sensibility by Jane Austen 1811
text3: The Book of Genesis
text4: Inaugural Address Corpus
text5: Chat Corpus
text6: Monty Python and the Holy Grail
text7: Wall Street Journal
text8: Personals Corpus
text9: The Man Who Was Thursday by G . K . Chesterton 1908
>>> text1.similar("monstrous")
mean part maddens doleful gamesome subtly uncommon careful untoward
exasperate loving passing mouldy christian few true mystifying
imperial modifies contemptible
>>> text2.similar("monstrous")
very heartily so exceedingly remarkably as vast a great amazingly
extremely good sweet
```
Those lists of terms returned by the similar method differ from user to user, they have many words in common, but they are not identical lists. All users were using the same OS, and the same versions of python and nltk.
I hope that makes the question clearer. Thanks. | In your example there are 40 other words which have **exactly one** context in common with the word `'monstrous'`.
In the [`similar`](http://www.nltk.org/_modules/nltk/text.html#Text.similar) function a `Counter` object is used to count the words with similar contexts and then the most common ones (default 20) are printed. Since all 40 have the same frequency the order can differ.
From the [doc](https://docs.python.org/3.4/library/collections.html#collections.Counter.most_common) of `Counter.most_common`:
> Elements with equal counts are ordered arbitrarily
---
I checked the frequency of the similar words with this code (which is essentially a copy of the relevant part of the function code):
```
from nltk.book import *
from nltk.util import tokenwrap
from nltk.compat import Counter
word = 'monstrous'
num = 20
text1.similar(word)
wci = text1._word_context_index._word_to_contexts
if word in wci.conditions():
contexts = set(wci[word])
fd = Counter(w for w in wci.conditions() for c in wci[w]
if c in contexts and not w == word)
words = [w for w, _ in fd.most_common(num)]
# print(tokenwrap(words))
print(fd)
print(len(fd))
print(fd.most_common(num))
```
Output: (different runs give different output for me)
```
Counter({'doleful': 1, 'curious': 1, 'delightfully': 1, 'careful': 1, 'uncommon': 1, 'mean': 1, 'perilous': 1, 'fearless': 1, 'imperial': 1, 'christian': 1, 'trustworthy': 1, 'untoward': 1, 'maddens': 1, 'true': 1, 'contemptible': 1, 'subtly': 1, 'wise': 1, 'lamentable': 1, 'tyrannical': 1, 'puzzled': 1, 'vexatious': 1, 'part': 1, 'gamesome': 1, 'determined': 1, 'reliable': 1, 'lazy': 1, 'passing': 1, 'modifies': 1, 'few': 1, 'horrible': 1, 'candid': 1, 'exasperate': 1, 'pitiable': 1, 'abundant': 1, 'mystifying': 1, 'mouldy': 1, 'loving': 1, 'domineering': 1, 'impalpable': 1, 'singular': 1})
``` |
upgrade to dev version of scikit-learn on Anaconda? | 33,568,244 | 9 | 2015-11-06T13:44:52Z | 33,568,537 | 14 | 2015-11-06T14:01:46Z | [
"python",
"scikit-learn",
"upgrade",
"anaconda"
] | I'm using python through Anaconda, and would like to use a new feature (<http://scikit-learn.org/dev/modules/neural_networks_supervised.html>) in scikit-learn that's currently only available in the development version 0.18.dev0.
However, doing the classical `conda update` doesn't seem to work, as conda doesn't list any dev packages. What would be the simplest way to install a development version into my Anaconda? (For what it's worth, I'm using 64-bit windows 7.) | You can only use `conda` to install a package if someone has built and made available binaries for the package. Some packages publish nightly builds that would allow this, but scikit-learn is not one of them.
To install the bleeding-edge version in one command, you could use pip; e.g.:
```
$ conda install pip
$ pip install git+git://github.com/scikit-learn/scikit-learn.git
```
but keep in mind that this requires compiling all the C extensions within the library, and so it will fail if your system is not set up for that. |
numpy max vs amax vs maximum | 33,569,668 | 4 | 2015-11-06T15:02:46Z | 33,569,857 | 7 | 2015-11-06T15:13:06Z | [
"python",
"math",
"numpy"
] | numpy has three different functions which seem like they can be used for the same things --- except that `numpy.maximum` can *only* be used element-wise, while `numpy.max` and `numpy.amax` can be used on particular axes, or all elements. Why is there more than just `numpy.max`? Is there some subtlety to this in performance?
(Similarly for `min` vs. `amin` vs. `minimum`) | `np.max` and `np.maximum` are not generally used for the same operations.
`np.max` (which is just an alias for `np.amax`) only works on one input array. It can compute the maximum element from that entire array (returning a scalar), or if supplied an `axis` argument, it can compute the maximum value along an axis (returning a new array).
The default behaviour of `np.maximum` is to take two arrays and compute the element-wise maximum of two compatible arrays. Here, 'compatible' means that one array is broadcastable to the other.
But `np.maximum` is also a [universal function](http://docs.scipy.org/doc/numpy/reference/ufuncs.html) which means that it has other features and methods which come in useful when working with multidimensional arrays. For example you can compute the cummulative maximum of an array (or an axis of an array) with:
```
>>> a = np.array([2, 0, 3, -4, -2, 7, 9])
>>> np.maximum.accumulate(a)
array([2, 2, 3, 3, 3, 7, 9])
```
This is not possible with `np.max`.
N.B. You can make `np.maximum` imitate `np.max` to a certain extent when using `np.maximum.reduce`. Basic testing suggests the two approaches are comparable in performance. |
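A short side-by-side of the two functions:

```python
import numpy as np

a = np.array([[1, 9, 3],
              [7, 2, 8]])
b = np.full((2, 3), 5)

print(np.max(a))          # 9        -- one array, global maximum
print(np.max(a, axis=0))  # [7 9 8]  -- one array, along an axis
print(np.maximum(a, b))   # element-wise maximum of two arrays
# [[5 9 5]
#  [7 5 8]]
```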
TypeError: __init__() got an unexpected keyword argument 'type' in argparse | 33,574,270 | 2 | 2015-11-06T19:29:34Z | 33,574,480 | 7 | 2015-11-06T19:43:30Z | [
"python",
"parsing",
"argparse"
] | Hey so I'm using argparse to try and generate a quarterly report. This is what the code looks like:
```
parser = argparse.ArgumentParser()
parser.add_argument('-q', "--quarter", action='store_true', type=int, help="Enter a Quarter number: 1,2,3, or 4 ")
parser.add_argument('-y', "--year", action='store_true',type=str,help="Enter a year in the format YYYY ")
args = parser.parse_args()
```
the error I receive is:
> TypeError: `__init__()` got an unexpected keyword argument 'type'
As far as I can tell from the argparse documentation, `type` is one of the parameters of the `add_argument` function. I tried removing it and updating the code to:
```
parser = argparse.ArgumentParser()
parser.add_argument('-q', "--quarter", action='store_true', help="Enter a Quarter number: 1,2,3, or 4 ")
parser.add_argument('-y', "--year", action='store_true',help="Enter a year in the format YYYY ")
args = parser.parse_args()
```
I then tried to run it with: `python scriptname.py -q 1 -y 2015` and it is giving me the following error:
> error:unrecognized arguments: 1 2015
I don't know why that is either. Can anyone please shed some light on this. | What `action="store_true"` means is that if the argument is given on the command line then a `True` value should be stored in the parser. What you actually want is to store the given year (as a string) and quarter (as an int).
```
parser = argparse.ArgumentParser()
parser.add_argument('-q', "--quarter", type=int, help="Enter a Quarter number: 1,2,3, or 4 ")
parser.add_argument('-y', "--year", type=str, help="Enter a year in the format YYYY ")
args = parser.parse_args()
```
When you specify `action='store_true'`, argparse internally instantiates a `_StoreAction` instance whose constructor does not accept a `type` parameter (since the stored value will always be a boolean `True`/`False`). You cannot supply `action="store_true"` and `type` at the same time. |
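As a side note (not required for the fix), argparse's `choices` parameter can enforce the valid quarter values, so bad input is rejected with a usage message before it ever reaches your code:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-q', '--quarter', type=int, choices=[1, 2, 3, 4],
                    help='Enter a Quarter number: 1, 2, 3, or 4')
parser.add_argument('-y', '--year', type=str,
                    help='Enter a year in the format YYYY')

args = parser.parse_args(['-q', '1', '-y', '2015'])
print(args.quarter, args.year)  # 1 2015
```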
Is it possible to copy and update a python dictionary in a single chained expression? | 33,576,444 | 4 | 2015-11-06T22:03:26Z | 33,576,498 | 8 | 2015-11-06T22:09:19Z | [
"python",
"dictionary"
] | Consider the following:
```
x = {1:2}
y = x.copy() # returns a new dictionary
y = x.copy().update({2:3}) # returns None
y = x.copy()[2] = 3 # invalid syntax
```
Given that none of the above work, is there a way to chain a command to `Dict.copy()` to copy and update a dictionary in a single command? | Yes, you can use the [`dict()` function](https://docs.python.org/2/library/stdtypes.html#dict) to create a copy and add keyword arguments; use `**{...}` to add arbitrary keys that are not Python identifiers:
```
y = dict(x, **{2: 3})
```
For string keys that happen to be valid Python identifiers (start with a letter, contain only letters and digits and underscores), use keyword arguments to `dict()`:
```
y = dict(x, foo_bar='spam_eggs')
```
You can combine the two styles, and add more keys:
```
y = dict(x, foo='spam', bar='eggs', **{2: 3, 42: 81})
```
Demo:
```
>>> x = {1: 2}
>>> dict(x, **{2: 3})
{1: 2, 2: 3}
>>> dict(x, foo_bar='spam_eggs')
{1: 2, 'foo_bar': 'spam_eggs'}
>>> dict(x, foo='spam', bar='eggs', **{2: 3, 42: 81})
{1: 2, 2: 3, 'foo': 'spam', 'bar': 'eggs', 42: 81}
>>> x # not changed, copies were made
{1: 2}
``` |
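Note that on Python 3, `dict(x, **{2: 3})` raises `TypeError: keywords must be strings`, since keyword names must be strings there. From Python 3.5 on (PEP 448), dictionary unpacking inside a literal gives the same copy-and-update without that restriction:

```python
x = {1: 2}
y = {**x, 2: 3}  # copy x and add/override keys in one expression
print(y)  # {1: 2, 2: 3}
print(x)  # {1: 2} -- the original is untouched
```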
How do I multiply more the one variable by a number simultaneously | 33,582,485 | 3 | 2015-11-07T11:56:38Z | 33,582,520 | 8 | 2015-11-07T12:00:06Z | [
"python",
"list",
"variables"
] | I have three variables, I would like to know how to multiply all of these variables by another variable `number` simultaneously.
For example
```
number = 2
var1 = 0
var2 = 1
var3 = 2
```
The output should be:
```
0
2
4
``` | Use a [list comprehension](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions)
```
>>> number = 2
>>>
>>> var1 = 0
>>> var2 = 1
>>> var3 = 2
>>>
>>> output = [i*number for i in (var1,var2,var3)]
>>> output
[0, 2, 4]
```
And to print it
```
>>> for i in output:
... print(i)
...
0
2
4
```
You can use `map` and `lambda` also
```
>>> for i in map(lambda x:x*number,(var1,var2,var3)):
... print(i)
...
0
2
4
``` |
Distribution of Number of Digits of Random Numbers | 33,582,576 | 15 | 2015-11-07T12:06:56Z | 33,582,842 | 8 | 2015-11-07T12:36:48Z | [
"javascript",
"python",
"random"
] | I encounter this curious phenomenon trying to implement a UUID generator in JavaScript.
Basically, in JavaScript, if I generate a large list of random numbers with the built-in `Math.random()` on Node `4.2.2`:
```
var records = {};
var l;
for (var i=0; i < 1e6; i += 1) {
l = String(Math.random()).length;
if (records[l]) {
records[l] += 1;
} else {
records[l] = 1;
}
}
console.log(records);
```
The numbers of digits have a strange pattern:
```
{ '12': 1,
'13': 11,
'14': 65,
'15': 663,
'16': 6619,
'17': 66378,
'18': 611441,
'19': 281175,
'20': 30379,
'21': 2939,
'22': 282,
'23': 44,
'24': 3 }
```
I thought this is a quirk of the random number generator of V8, but similar pattern appears in `Python 3.4.3`:
```
12 : 2
13 : 5
14 : 64
15 : 672
16 : 6736
17 : 66861
18 : 610907
19 : 280945
20 : 30455
21 : 3129
22 : 224
```
And the Python code is as follows:
```
import random
random.seed()
records = {}
for i in range(0, 1000000):
n = random.random()
l = len(str(n))
try:
records[l] += 1
except KeyError:
records[l] = 1;
for i in sorted(records):
print(i, ':', records[i])
```
The pattern from 18 to below is expected: say if random number should have 20 digits, then if the last digit of a number is 0, it effectively has only 19 digits. If the random number generator is good, the probability of that happening is roughly 1/10.
But why the pattern is reversed for 19 and beyond?
I guess this is related to float numbers' binary representation, but I can't figure out exactly why. | The reason is indeed related to floating point representation. A floating point number representation has a maximum number of (binary) digits it can represent, and a limited exponent value range. Now when you print this out without using scientific notation, you might in some cases need to have some zeroes after the decimal point before the significant digits start to follow.
You can visualize this effect by printing those random numbers which have the longest length when converted to `string`:
```
var records = {};
var l, r;
for (var i=0; i < 1e6; i += 1) {
r = Math.random();
l = String(r).length;
if (l === 23) {
console.log(r);
}
if (records[l]) {
records[l] += 1;
} else {
records[l] = 1;
}
}
```
This prints only the 23-long strings, and you will get numbers like these:
```
0.000007411070483631654
0.000053944830052166104
0.000018188989763578967
0.000029525788901141325
0.000009613635131744402
0.000005937417234758158
0.000021099748521158368
```
Notice the zeroes before the first non-zero digit. These are actually not stored in the number part of a floating point representation, but implied by its exponent part.
If you were to take out the leading zeroes, and then make a count:
```
var records = {};
var l, r, s;
for (var i=0; i < 1e6; i += 1) {
r = Math.random();
s = String(r).replace(/^[0\.]+/, '');
l = s.length;
if (records[l]) {
records[l] += 1;
} else {
records[l] = 1;
}
}
```
... you'll get results which are less strange.
However, you will see some irregularity that is due to how `javascript` converts tiny numbers to `string`: when they get too small, the scientific notation is used in the `string` representation. You can see this with the following script (not sure if every browser has the same breaking point, so maybe you need to play a bit with the number):
```
var i = 0.00000123456789012345678;
console.log(String(i), String(i/10));
```
This gives me the following output:
```
0.0000012345678901234567 1.2345678901234568e-7
```
So very small numbers will get a more fixed `string` length as a result, quite often 22 characters, while in the non-scientific notation a length of 23 is common. This influences also the second script I provided and length 22 will get more hits than 23.
It should be noted that `javascript` does not switch to scientific notation when converting to `string` in binary representation:
```
var i = 0.1234567890123456789e-120;
console.log(i.toString(2));
```
The above will print a string of over 450 binary digits! |
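Python's `str()` makes the analogous switch, just at a different threshold: floats below `1e-4` are rendered in scientific notation, which likewise bounds the string length of very small values and contributes the same kind of tail to the length distribution:

```python
print(str(0.0001))   # 0.0001  -- still positional notation
print(str(0.00001))  # 1e-05   -- scientific from here down
print(str(1.25e-5))  # 1.25e-05
```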
Tesseract OCR on AWS Lambda via virtualenv | 33,588,262 | 7 | 2015-11-07T21:57:18Z | 35,724,894 | 10 | 2016-03-01T13:58:02Z | [
"python",
"amazon-web-services",
"virtualenv",
"tesseract",
"aws-lambda"
] | I have spent all week attempting this, so this is a bit of a hail mary.
I am attempting to package up Tesseract OCR into AWS Lambda running on Python (I am also using PILLOW for image pre-processing, hence the choice of Python).
I understand how to deploy Python packages onto AWS using virtualenv, however I cannot seem to find a way of deploying the actual Tesseract OCR into the environment (e.g. /env/)
* Doing `pip install py-tesseract` results in a successful deployment of the python wrapper into /env/, however this relies on a separate (local) install of Tesseract
* Doing `pip install tesseract-ocr` gets me only a certain distance before it errors out as follows which I am assuming is due to a missing leptonica dependency. However, I have no idea how to package up leptonica into /env/ (if that is even possible)
> ```
> tesseract_ocr.cpp:264:10: fatal error: 'leptonica/allheaders.h' file not found
> #include "leptonica/allheaders.h"
> ```
* Downloading 0.9.1 python-tesseract egg file from
<https://bitbucket.org/3togo/python-tesseract/downloads> and doing easy\_install also errors out when looking for dependencies
> ```
> Processing dependencies for python-tesseract==0.9.1
> Searching for python-tesseract==0.9.1
> Reading https://pypi.python.org/simple/python-tesseract/
> Couldn't find index page for 'python-tesseract' (maybe misspelled?)
> Scanning index of all packages (this may take a while)
> Reading https://pypi.python.org/simple/
> No local packages or download links found for python-tesseract==0.9.1
> ```
Any pointers would be greatly appreciated. | The reason it's not working is that these Python packages are only wrappers around Tesseract. You have to compile Tesseract on an AWS Linux instance and copy the binaries and libraries into the zip file of the Lambda function.
**1) Start an EC2 instance with 64-bit Amazon Linux;**
**2) Install dependencies:**
```
sudo yum install gcc gcc-c++ make
sudo yum install autoconf aclocal automake
sudo yum install libtool
sudo yum install libjpeg-devel libpng-devel libtiff-devel zlib-devel
```
**3) Compile and install leptonica:**
```
cd ~
mkdir leptonica
cd leptonica
wget http://www.leptonica.com/source/leptonica-1.73.tar.gz
tar -zxvf leptonica-1.73.tar.gz
cd leptonica-1.73
./configure
make
sudo make install
```
**4) Compile and install tesseract**
```
cd ~
mkdir tesseract
cd tesseract
wget https://github.com/tesseract-ocr/tesseract/archive/3.04.01.tar.gz
tar -zxvf 3.04.01.tar.gz
cd tesseract-3.04.01
./autogen.sh
./configure
make
sudo make install
```
**5) Download language traineddata to tessdata**
```
cd /usr/local/share/tessdata
wget https://github.com/tesseract-ocr/tessdata/raw/master/eng.traineddata
export TESSDATA_PREFIX=/usr/local/share/
```
At this point you should be able to use tesseract on this EC2 instance. To copy the binaries of tesseract and use it on a lambda function you will need to copy some files from this instance to the zip file you upload to lambda. I'll post all the commands to get a zip file with all the files you need.
**6) Zip all the stuff you need to run tesseract on lambda**
```
cd ~
mkdir tesseract-lambda
cd tesseract-lambda
cp /usr/local/bin/tesseract .
mkdir lib
cd lib
cp /usr/local/lib/libtesseract.so.3 .
cp /usr/local/lib/liblept.so.5 .
cp /lib64/librt.so.1 .
cp /lib64/libz.so.1 .
cp /usr/lib64/libpng12.so.0 .
cp /usr/lib64/libjpeg.so.62 .
cp /usr/lib64/libtiff.so.5 .
cp /lib64/libpthread.so.0 .
cp /usr/lib64/libstdc++.so.6 .
cp /lib64/libm.so.6 .
cp /lib64/libgcc_s.so.1 .
cp /lib64/libc.so.6 .
cp /lib64/ld-linux-x86-64.so.2 .
cp /usr/lib64/libjbig.so.2.0 .
cd ..
mkdir tessdata
cd tessdata
cp /usr/local/share/tessdata/eng.traineddata .
cd ..
cd ..
zip -r tesseract-lambda.zip tesseract-lambda
```
The tesseract-lambda.zip file have everything lambda needs to run tesseract. The last thing to do is add the lambda function at the root of the zip file and upload it to lambda. Here is an example that I have not tested, but should work.
**7) Write a lambda function like this one and add it on the root of tesseract-lambda.zip:**
```
from __future__ import print_function
import urllib
import boto3
import os
import subprocess
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
LIB_DIR = os.path.join(SCRIPT_DIR, 'lib')
s3 = boto3.client('s3')
def lambda_handler(event, context):
# Get the bucket and object from the event
bucket = event['Records'][0]['s3']['bucket']['name']
key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key']).decode('utf8')
try:
print("Bucket: " + bucket)
print("Key: " + key)
imgfilepath = '/tmp/image.png'
jsonfilepath = '/tmp/result.txt'
exportfile = key + '.txt'
print("Export: " + exportfile)
s3.download_file(bucket, key, imgfilepath)
command = 'LD_LIBRARY_PATH={} TESSDATA_PREFIX={} {}/tesseract {} {}'.format(
LIB_DIR,
SCRIPT_DIR,
SCRIPT_DIR,
imgfilepath,
jsonfilepath,
)
try:
output = subprocess.check_output(command, shell=True)
print(output)
s3.upload_file(jsonfilepath, bucket, exportfile)
except subprocess.CalledProcessError as e:
print(e.output)
except Exception as e:
print(e)
print('Error processing object {} from bucket {}.'.format(key, bucket))
raise e
``` |
Python: Variable not defined? | 33,595,799 | 2 | 2015-11-08T15:50:21Z | 33,595,881 | 8 | 2015-11-08T15:58:45Z | [
"python"
] | So, I have this stuff below
```
def userinput():
adjective1 = input("Adjective: ")
noun1 = input("Noun: ")
noun2 = input("Noun: ")
def story():
print("A vacation is when you take a trip to some " + adjective1 + " place.")
print("Usually you go to some place that is near " + noun1 + " or up on " + noun2 + ".")
```
then when I run the functions and provide input, it comes back with
```
File "/Users/apple/Dropbox/MadLibs 6.py", line 52, in story
print("A vacation is when you take a trip to some " + adjective1 + " place with your "+ adjective2 + " family.")
NameError: name 'adjective1' is not defined
```
What does it mean by this, and how can I fix it? | It's all about scope: you cannot access a variable defined inside another function's scope.
Try this:
```
def userinput():
adjective1 = input("Adjective: ")
noun1 = input("Noun: ")
noun2 = input("Noun: ")
return adjective1, noun1, noun2
def story():
adjective1, noun1, noun2 = userinput()
print("A vacation is when you take a trip to some " + adjective1 + " place.")
print("Usually you go to some place that is near " + noun1 + " or up on " + noun2 + ".")
```
By calling `userinput` inside the second function and capturing its returned values, you can access them. Notice that `adjective1`, `noun1` and `noun2` in the `story` function are locally scoped, so they are different variables from the ones in `userinput`, even though they share the same names. |
Reversing default dict | 33,599,347 | 3 | 2015-11-08T21:41:50Z | 33,599,446 | 7 | 2015-11-08T21:52:55Z | [
"python",
"dictionary"
] | I have a defaultdict(list) where my keys are of type tuple and my values are a list of tuples.
I want to reverse this dictionary and I have already tried using the zip function and this doesn't work
A sample of my dictionary structure is below
```
{(2, '-', 3): [('Canada', 'Trudeau'),('Obama', 'USA')]}
```
Is there any way to reverse this so I get the keys and values the other way round? | Since you have a dict of lists, you need to make the lists an immutable type to use as a key in a Python dict.
You can make the list values tuples:
```
>>> di={(2, '-', 3): [('Canada', 'Trudeau'),('Obama', 'USA')]}
>>> {tuple(v):k for k, v in di.items()}
{(('Canada', 'Trudeau'), ('Obama', 'USA')): (2, '-', 3)}
```
Or turn them into strings:
```
>>> {repr(v):k for k, v in di.items()}
{"[('Canada', 'Trudeau'), ('Obama', 'USA')]": (2, '-', 3)}
```
With either, you can turn the key back into a list if need be:
```
>>> from ast import literal_eval
>>> literal_eval("[('Canada', 'Trudeau'), ('Obama', 'USA')]")
[('Canada', 'Trudeau'), ('Obama', 'USA')]
>>> list((('Canada', 'Trudeau'), ('Obama', 'USA')))
[('Canada', 'Trudeau'), ('Obama', 'USA')]
```
(BTW: don't use `eval` on strings that might come from outside the program -- use [ast.literal\_eval](https://docs.python.org/2/library/ast.html#ast.literal_eval) in production code, as in the example here.)
Lastly, consider this dict:
```
di={'key 1':'val', 'key 2': 'val', 'key 3': 'val'}
```
When you reverse the keys and values you will lose data, because all three keys map to the same value `'val'`.
You can use a default dict to solve duplicate values becoming keys:
```
from collections import defaultdict

dd = defaultdict(list)
for k, v in di.items():
dd[v].append(k)
>>> dd
defaultdict(<type 'list'>, {'val': ['key 1', 'key 2', 'key 3']})
``` |
Installing scipy in Python 3.5 on 32-bit Windows 7 Machine | 33,600,302 | 4 | 2015-11-08T23:30:30Z | 33,601,348 | 9 | 2015-11-09T01:54:02Z | [
"python",
"python-3.x",
"numpy",
"scipy"
] | I have been trying to install Scipy onto my Python 3.5 (32-bit) install on my Windows 7 machine using the pre-built binaries from:
<http://www.lfd.uci.edu/~gohlke/pythonlibs>
I have, in order, installed the following libraries:
```
numpy-1.10.1+mkl-cp35-none-win32.whl
scipy-0.16.1-cp35-none-win32.whl
```
Then, when trying to use the installed packages, I get the following errors
```
from scipy import sparse
< ... Complete error trace omitted ... >
packages\scipy\sparse\csr.py", line 13, in <module>
from ._sparsetools import csr_tocsc, csr_tobsr, csr_count_blocks, \
ImportError: DLL load failed: The specified module could not be found.
```
However, if i follow the same process for Python 3.4 replacing the installers with:
```
numpy-1.10.1+mkl-cp34-none-win32.whl
scipy-0.16.1-cp34-none-win32.whl
```
Everything seems to work. Are there additional dependencies or install packages that I am missing for the Python 3.5 install? | Make sure you pay attention to this line from the link you provided:
> Many binaries depend on NumPy-1.9+MKL and the Microsoft Visual C++
> 2008 (x64, x86, and SP1 for CPython 2.6 and 2.7), Visual C++ 2010
> (x64, x86, for CPython 3.3 and 3.4), or the Visual C++ 2015 (x64 and
> x86 for CPython 3.5) redistributable packages.
Download the corresponding Microsoft Visual C++ Redistributable Package which should be [this](https://www.microsoft.com/en-us/download/details.aspx?id=48145) one based on your description.
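If the error persists after installing the redistributable, another thing worth ruling out (my assumption, not something the traceback proves) is a bitness mismatch between the wheels and the interpreter; a quick standard-library check:

```python
import struct
import sys

# 32-bit CPython uses 4-byte pointers and 64-bit uses 8-byte pointers,
# so this tells you whether the win32 wheels match your interpreter.
bits = struct.calcsize("P") * 8
print("Python %d.%d, %d-bit" % (sys.version_info[0], sys.version_info[1], bits))
```

If that prints 64-bit, the `win32` wheels are the wrong ones for your install.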
I had a similar problem, can't recall the exact issue, and I downloaded the one for my system and it worked fine. Let me know otherwise. |
Random number in the range 1 to sys.maxsize is always 1 mod 2^10 | 33,602,014 | 42 | 2015-11-09T03:19:21Z | 33,602,119 | 10 | 2015-11-09T03:31:32Z | [
"python",
"python-2.7",
"random"
] | I am trying to find the statistical properties of the PRNGs available in Python (2.7.10) by using the frequency test, runs test and the chi squared test.
For carrying out the frequency test, I need to convert the generated random number to its binary representation and then count the distribution of `1`'s and `0`'s. I was experimenting with the binary representation of the random numbers on the python console and observed this weird behavior:
```
>>> for n in random.sample(xrange(1, sys.maxsize), 50):
... print '{0:b}'.format(n)
...
101101110011011001110011110110101101101101111111101000000000001
110000101001001011101001110111111110011000101011100010000000001
110111101101110011100010001010000101011111110010001110000000001
100001111010011000101001000001000011001111100000001010000000001
1111000010010011111100111110110100100011110111010000000000001
111000001011101011101110100001001001000011011001110110000000001
1000100111011000111000101010000101010100110111000100000000001
11101001000001101111110101111011001000100011011011010000000001
110011010111101101011000110011011001110001111000001010000000001
110110110110111100011111110111011111101000011001100000000001
100010010000011101011100110101011110111100001100100000000000001
10111100011010011010001000101011001110010010000010010000000001
101011100110110001010110000101100000111111011101011000000000001
1111110010110010000111111000010001101011011010101110000000001
11100010101101110110101000101101011011111101101000010000000001
10011110110110010110011010000110010010111001111001010000000001
110110011100111010100111100100000100011101100001100000000000001
100110011001101011110011010101111101100010000111001010000000001
111000101101100111110010110110100110111001000101000000000000001
111111101000010111001011111100111100011101001011010000000001
11110001111100000111010010011111010101101110111001010000000001
100001100101101100010101111100111101111001101010101010000000001
11101010110011000001101110000000001111010001110111000000000001
100111000110111010001110110101001011100101111101010000000001
100001101100000011101101010101111111011010111110111110000000001
100010010011110110111111111000010001101100111001001100000000001
110011111110010011000110101010101001001010000100011010000000001
1111011010100001001101101000011100001011001110010100000000001
110110011101100101001100111010101111001011111101100000000000001
1010001110100101001001011111000111011100001100000110000000001
1000101110010011011000001011010110001000110100100100000000001
11111110011001011100111110110111000001000100100010000000000001
101111101010000101010111111111000001100101111001011110000000001
10010010111111111100000001010010101100111001100000000000001
111110000001110010001110111101110101010110001110000000000000001
100000101101000110101010010000101101000011111010001110000000001
101001011101100011001000011010010000000111110111100010000000001
10110101010000111010110111001111011000001111001100110000000001
10110111100100100011100101001100000000101110100100010000000001
10010111110001011101001110000111011010110100110111110000000001
111011110010110111011011101011001100001000111001010100000000001
101001010001010100010010010001100111101110101111000110000000001
101011111010000101010101000110001101001001011110000000000001
1010001010111101101010111110110110000001111101101110000000001
10111111111010001000110000101101010101011010101100000000001
101011101010110000001111010100100110000011111100100100000000001
111100001101111010100111010001010010000010110110010110000000001
100111111000100110100001110101000010111111010010010000000000001
100111100001011100011000000000101100111111000111100110000000001
110110100000110111011101110101101000101110111111010110000000001
>>>
```
As you can see, all numbers end in `0000000001`, i.e. all numbers are `1 mod 2^10`. Why is this so?
Also, this behavior is observed when the range is `1 to sys.maxsize`. If the range is specified to be `1 to 2^40`, this is not observed. I want to know the reason for this behavior and whether there is anything wrong in my code.
The documentation for the random library that implements the PRNGs that I am using is [here](https://docs.python.org/2/library/random.html).
Let me know if I should provide any more information. | That depends on a lot of things, like how exactly the RNG is implemented, how many bits of state it uses, and how exactly the `sample` function is implemented.
Here's what the documentation says:
> Almost all module functions depend on the basic function random(), which generates a random float uniformly in the semi-open range [0.0, 1.0). Python uses the Mersenne Twister as the core generator. It produces 53-bit precision floats and has a period of 2\*\*19937-1.
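Here is a minimal sketch of the consequence (the `int(random.random() * n)` draw below is an assumption about `sample`'s internals, taken from CPython 2's `random.py`):

```python
import random

# random.random() returns k / 2**53 for some integer k, so scaling it
# up to a 63-bit range is exact in floating point and leaves the bottom
# 10 bits zero; the range's start offset of 1 then makes every draw
# congruent to 1 mod 2**10, matching the question's output.
n = int(random.random() * (2 ** 63))
print(bin(n))
assert n % (2 ** 10) == 0  # always holds: only 53 significant bits
```

Run it a few times: the trailing ten bits are always zero, which is exactly the `0000000001` tail once the offset of 1 is added.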
So if `sample` indeed uses `random()` under the hood, then you should only expect 53 meaningful bits in your result. |
Random number in the range 1 to sys.maxsize is always 1 mod 2^10 | 33,602,014 | 42 | 2015-11-09T03:19:21Z | 33,602,186 | 47 | 2015-11-09T03:40:04Z | [
"python",
"python-2.7",
"random"
] | I am trying to find the statistical properties of the PRNGs available in Python (2.7.10) by using the frequency test, runs test and the chi squared test.
For carrying out the frequency test, I need to convert the generated random number to its binary representation and then count the distribution of `1`'s and `0`'s. I was experimenting with the binary representation of the random numbers on the python console and observed this weird behavior:
```
>>> for n in random.sample(xrange(1, sys.maxsize), 50):
... print '{0:b}'.format(n)
...
101101110011011001110011110110101101101101111111101000000000001
110000101001001011101001110111111110011000101011100010000000001
110111101101110011100010001010000101011111110010001110000000001
100001111010011000101001000001000011001111100000001010000000001
1111000010010011111100111110110100100011110111010000000000001
111000001011101011101110100001001001000011011001110110000000001
1000100111011000111000101010000101010100110111000100000000001
11101001000001101111110101111011001000100011011011010000000001
110011010111101101011000110011011001110001111000001010000000001
110110110110111100011111110111011111101000011001100000000001
100010010000011101011100110101011110111100001100100000000000001
10111100011010011010001000101011001110010010000010010000000001
101011100110110001010110000101100000111111011101011000000000001
1111110010110010000111111000010001101011011010101110000000001
11100010101101110110101000101101011011111101101000010000000001
10011110110110010110011010000110010010111001111001010000000001
110110011100111010100111100100000100011101100001100000000000001
100110011001101011110011010101111101100010000111001010000000001
111000101101100111110010110110100110111001000101000000000000001
111111101000010111001011111100111100011101001011010000000001
11110001111100000111010010011111010101101110111001010000000001
100001100101101100010101111100111101111001101010101010000000001
11101010110011000001101110000000001111010001110111000000000001
100111000110111010001110110101001011100101111101010000000001
100001101100000011101101010101111111011010111110111110000000001
100010010011110110111111111000010001101100111001001100000000001
110011111110010011000110101010101001001010000100011010000000001
1111011010100001001101101000011100001011001110010100000000001
110110011101100101001100111010101111001011111101100000000000001
1010001110100101001001011111000111011100001100000110000000001
1000101110010011011000001011010110001000110100100100000000001
11111110011001011100111110110111000001000100100010000000000001
101111101010000101010111111111000001100101111001011110000000001
10010010111111111100000001010010101100111001100000000000001
111110000001110010001110111101110101010110001110000000000000001
100000101101000110101010010000101101000011111010001110000000001
101001011101100011001000011010010000000111110111100010000000001
10110101010000111010110111001111011000001111001100110000000001
10110111100100100011100101001100000000101110100100010000000001
10010111110001011101001110000111011010110100110111110000000001
111011110010110111011011101011001100001000111001010100000000001
101001010001010100010010010001100111101110101111000110000000001
101011111010000101010101000110001101001001011110000000000001
1010001010111101101010111110110110000001111101101110000000001
10111111111010001000110000101101010101011010101100000000001
101011101010110000001111010100100110000011111100100100000000001
111100001101111010100111010001010010000010110110010110000000001
100111111000100110100001110101000010111111010010010000000000001
100111100001011100011000000000101100111111000111100110000000001
110110100000110111011101110101101000101110111111010110000000001
>>>
```
As you can see, all numbers end in `0000000001`, i.e. all numbers are `1 mod 2^10`. Why is this so?
Also, this behavior is observed when the range is `1 to sys.maxsize`. If the range is specified to be `1 to 2^40`, this is not observed. I want to know the reason for this behavior and whether there is anything wrong in my code.
The documentation for the random library that implements the PRNGs that I am using is [here](https://docs.python.org/2/library/random.html).
Let me know if I should provide any more information. | @roeland hinted at the cause: in Python 2, `sample()` uses `int(random.random() * n)` repeatedly. Look at the source code (in your Python's `Lib/random.py`) for full details. In short, `random.random()` returns no more than 53 significant (non-zero) leading bits; then `int()` fills the rest of the low-order bits with zeroes (you're obviously on a machine where `sys.maxsize == 2**63 - 1`); then indexing your base (`xrange(1, sys.maxsize)`) by an even integer with "a lot" of of low-order 0 bits always returns an odd integer with the same number of low-order 0 bits (except for the last).
In Python 3 none of that happens - `random` in Python 3 uses stronger algorithms, and only falls back to `random.random()` when necessary. For example, here under Python 3.4.3:
```
>>> hex(random.randrange(10**70))
'0x91fc11ed768be3a454bd66f593c218d8bbfa3b99f6285291e1d9f964a9'
>>> hex(random.randrange(10**70))
'0x7b07ff02b6676801e33094fca2fcca7f6e235481c479c521643b1acaf4'
```
## EDIT
Here's a more directly relevant example, under 3.4.3 on a 64-bit box:
```
>>> import random, sys
>>> sys.maxsize == 2**63 - 1
True
>>> for i in random.sample(range(1, sys.maxsize), 6):
... print(bin(i))
0b10001100101001001111110110011111000100110100111001100000010110
0b100111100110110100111101001100001100110001110010000101101000101
0b1100000001110000110100111101101010110001100110101111011100111
0b111110100001111100101001001001101101100100011001001010100001110
0b1100110100000011100010000011010010100100110111001111100110100
0b10011010000110101010101110001000101110111100100001111101110111
```
Python 3 doesn't invoke `random.random()` at all in this case, but instead iteratively grabs chunks of 32 bits from the underlying Mersenne Twister (32-bit unsigned ints are "the natural" outputs from this implementation of MT), pasting them together to build a suitable index. So, in Python 3, platform floats have nothing to do with it; in Python 2, quirks of float behavior have everything to do with it.
Quickstart Flask application failing for some reason | 33,602,708 | 3 | 2015-11-09T04:46:51Z | 33,603,641 | 8 | 2015-11-09T06:36:09Z | [
"python",
"flask",
"pip"
] | I created a fresh `virtualenv` environment for a Flask application called `flask-test` so that I could do some testing. Imagine my surprise when, running the quickstart application, I get the following error:
```
Honorss-MacBook-Air-2:Desktop Honors$ cd flask-testing
Honorss-MacBook-Air-2:flask-testing Honors$ source bin/activate
(flask-testing)Honorss-MacBook-Air-2:flask-testing Honors$ python app.py
* Restarting with stat
* Debugger is active!
Traceback (most recent call last):
File "app.py", line 10, in <module>
app.run()
File "/Users/Honors/Desktop/flask-testing/lib/python3.5/site-packages/flask/app.py", line 772, in run
run_simple(host, port, self, **options)
File "/Users/Honors/Desktop/flask-testing/lib/python3.5/site-packages/werkzeug/serving.py", line 633, in run_simple
application = DebuggedApplication(application, use_evalex)
File "/Users/Honors/Desktop/flask-testing/lib/python3.5/site-packages/werkzeug/debug/__init__.py", line 169, in __init__
if self.pin is None:
File "/Users/Honors/Desktop/flask-testing/lib/python3.5/site-packages/werkzeug/debug/__init__.py", line 179, in _get_pin
self._pin, self._pin_cookie = get_pin_and_cookie_name(self.app)
File "/Users/Honors/Desktop/flask-testing/lib/python3.5/site-packages/werkzeug/debug/__init__.py", line 96, in get_pin_and_cookie_name
h.update('cookiesalt')
TypeError: Unicode-objects must be encoded before hashing
```
The contents of `app.py` are:
```
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
    return 'Hello World!'
if __name__ == '__main__':
    app.debug = True
    app.run()
```
`pip list` says that the contents of the environment are:
```
Flask (0.10.1)
Flask-Login (0.3.2)
Flask-WTF (0.12)
itsdangerous (0.24)
Jinja2 (2.8)
MarkupSafe (0.23)
pip (7.1.2)
setuptools (18.2)
Werkzeug (0.11)
wheel (0.24.0)
WTForms (2.0.2)
```
All of my other virtual environments run as expected. | Seems like a bug: the related issue [Werkzeug 0.11 with Flask 0.10.1 and 'app.debug = True' won't start. #798](https://github.com/mitsuhiko/werkzeug/issues/798)
I created a new virtual environment using Python 3.5 and met the same error, but if I don't use debug mode, it's fine.
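For what it's worth, the `TypeError` itself is easy to reproduce with nothing but the standard library (sha256 below is just a stand-in; which hash Werkzeug's debugger actually uses isn't shown in the traceback) -- on Python 3, hash objects reject `str` input:

```python
import hashlib

h = hashlib.sha256()
try:
    h.update('cookiesalt')      # same call shape as in the traceback
except TypeError as exc:
    print('reproduced:', exc)   # on Python 3, str must be encoded first
h.update(b'cookiesalt')         # bytes work fine
print(h.hexdigest())
```

That's why the bug only bites on Python 3: Werkzeug 0.11's debug module passed a `str` where bytes were required.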
And as the issue says, downgrading Werkzeug to 0.10.4 seems to work. You can give it a try. |
In TensorFlow, what is the difference between Session.run() and Tensor.eval()? | 33,610,685 | 39 | 2015-11-09T13:52:52Z | 33,610,914 | 61 | 2015-11-09T14:05:45Z | [
"python",
"tensorflow"
] | TensorFlow has two ways to evaluate part of graph: `Session.run` on a list of variables and `Tensor.eval`. Is there a difference between these two? | If you have a `Tensor` t, calling [`t.eval()`](http://tensorflow.org/api_docs/python/framework.md#Tensor.eval) is equivalent to calling `tf.get_default_session().run(t)`.
You can make a session the default as follows:
```
t = tf.constant(42.0)
sess = tf.Session()
with sess.as_default(): # or `with sess:` to close on exit
    assert sess is tf.get_default_session()
    assert t.eval() == sess.run(t)
```
The most important difference is that you can use `sess.run()` to fetch the values of many tensors in the same step:
```
t = tf.constant(42.0)
u = tf.constant(37.0)
tu = tf.mul(t, u)
ut = tf.mul(u, t)
with sess.as_default():
    tu.eval()  # runs one step
    ut.eval()  # runs one step
    sess.run([tu, ut])  # runs a single step
```
Note that each call to `eval` and `run` will execute the whole graph from scratch. To cache the result of a computation, assign it to a [`tf.Variable`](http://tensorflow.org/how_tos/variables/index.md). |
In TensorFlow, what is the difference between Session.run() and Tensor.eval()? | 33,610,685 | 39 | 2015-11-09T13:52:52Z | 33,666,260 | 8 | 2015-11-12T07:28:42Z | [
"python",
"tensorflow"
] | TensorFlow has two ways to evaluate part of graph: `Session.run` on a list of variables and `Tensor.eval`. Is there a difference between these two? | The FAQ session on tensor flow has an [answer to exactly the same question](http://tensorflow.org/resources/faq.md#contents). I would just go ahead and leave it here:
---
If `t` is a `Tensor` object, `t.eval()` is shorthand for `sess.run(t)` (where `sess` is the current default session). The two following snippets of code are equivalent:
```
sess = tf.Session()
c = tf.constant(5.0)
print sess.run(c)

c = tf.constant(5.0)
with tf.Session():
    print c.eval()
```
In the second example, the session acts as a context manager, which has the effect of installing it as the default session for the lifetime of the with block. The context manager approach can lead to more concise code for simple use cases (like unit tests); if your code deals with multiple graphs and sessions, it may be more straightforward to make explicit calls to `Session.run()`.
I'd recommend at least skimming through the whole FAQ, as it might clarify a lot of things. |
How to deal with inputs outside 0-1 range in tensorflow? | 33,613,945 | 2 | 2015-11-09T16:43:21Z | 33,615,369 | 8 | 2015-11-09T18:05:43Z | [
"python",
"tensorflow"
] | In the example provided at <http://www.tensorflow.org/get_started> if I multiply the input by 2
```
x_data = np.float32(np.random.rand(2, 100))*2
```
I get nonsense output, whereas I expected to get the same solution.
```
0 [[ -67.06586456 -109.13352203]] [-7.67297792]
20 [[ nan nan]] [ nan]
40 [[ nan nan]] [ nan]
60 [[ nan nan]] [ nan]
80 [[ nan nan]] [ nan]
100 [[ nan nan]] [ nan]
120 [[ nan nan]] [ nan]
140 [[ nan nan]] [ nan]
160 [[ nan nan]] [ nan]
180 [[ nan nan]] [ nan]
200 [[ nan nan]] [ nan]
```
How does tensorflow handle inputs that are not in the 0-1 range?
*EDIT*: Using `AdagradOptimizer` works without an issue. | The issue is that the example uses a very aggressive learning rate:
```
optimizer = tf.train.GradientDescentOptimizer(0.5)
```
This makes learning faster, but stops working if you change the problem a bit. A learning rate of `0.01` would be more typical:
```
optimizer = tf.train.GradientDescentOptimizer(0.01)
```
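To see why the scaled inputs interact with the step size, here is a toy one-variable sketch of the same effect (my own illustration, not the tutorial's code): with a squared loss, scaling the inputs scales the curvature quadratically, so a fixed step size can cross from converging to diverging.

```python
import numpy as np

def train(x_data, learning_rate, steps=50):
    # Plain gradient descent on mean((w * x - y)**2), no bias term.
    y_data = x_data * 0.1 + 0.3
    w = 0.0
    for _ in range(steps):
        grad = np.mean(2.0 * (w * x_data - y_data) * x_data)
        w -= learning_rate * grad
    return w

np.random.seed(0)
x = np.random.rand(1000)
print(train(x, 1.0))      # stays bounded
print(train(x * 2, 1.0))  # the same step size now diverges
```

The tutorial's two-feature example behaves the same way along its steepest curvature direction, which is presumably why an adaptive method like `AdagradOptimizer`, or a smaller fixed rate such as `0.01`, copes with the scaled data.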
Now your modification works fine. :) |
tensorflow -- is it or will it (sometime soon) be compatible with a windows workflow? | 33,616,094 | 50 | 2015-11-09T18:51:41Z | 33,618,580 | 50 | 2015-11-09T21:24:48Z | [
"python",
"windows",
"tensorflow"
] | I haven't seen anything about Windows compatibility--is this on the way or currently available somwhere if I put forth some effort? (I have a mac and an ubuntu box but the windows machine is the one with the discrete graphics card that I currently use with theano) | We haven't tried to build TensorFlow on Windows so far: the only supported platforms are Linux (Ubuntu) and Mac OS X, and we've only built binaries for those platforms.
For now, on Windows, the easiest way to get started with TensorFlow would be to use Docker: <http://tensorflow.org/get_started/os_setup.md#docker-based_installation>
It should become easier to add Windows support when Bazel (the build system we are using) adds support for building on Windows, which is [on the roadmap for Bazel 0.3](https://github.com/tensorflow/tensorflow/issues/17#issuecomment-189599501). You can see [the full Bazel roadmap here](http://bazel.io/roadmap.html).
In the meantime, you can follow [issue 17 on the TensorFlow GitHub page](https://github.com/tensorflow/tensorflow/issues/17). |
tensorflow -- is it or will it (sometime soon) be compatible with a windows workflow? | 33,616,094 | 50 | 2015-11-09T18:51:41Z | 33,635,663 | 10 | 2015-11-10T17:10:32Z | [
"python",
"windows",
"tensorflow"
] | I haven't seen anything about Windows compatibility--is this on the way or currently available somwhere if I put forth some effort? (I have a mac and an ubuntu box but the windows machine is the one with the discrete graphics card that I currently use with theano) | As @mrry suggested, it is easier to set up TensorFlow with Docker. Here's how I managed to set it up as well as getting iPython Notebook up and running in my Docker environment (I find it really convenient to use iPython Notebook for all testing purposes as well as documenting my experiments).
I assume that you have installed both docker and boot2docker for Windows here.
First, run TensorFlow docker on daemon and set it up so Jupyter server (iPython Notebook) can be accessed from your main Windows system's browser:
```
docker run -dit -v /c/Users/User/:/media/disk -p 8888:8888 b.gcr.io/tensorflow/tensorflow:latest
```
Replace `/c/Users/User/` with a path on your host you wish to mount, i.e. where you can keep your iPython files. *I don't know how to set it to drives other than C:, let me know if you do*. `/media/disk` is the location inside your TensorFlow docker that your host path is mounted to.
`-p 8888:8888` basically means "map port 8888 in docker to 8888 in host directory". You can change the second part to other ports if you wish.
When you've got it running, you can access it by running the following command:
```
docker exec -ti [docker-id] bash
```
Where [docker-id] can be found by running:
```
docker ps
```
To start your ipython notebook server from within TensorFlow's docker, run the following command:
```
ipython notebook --ip='*'
```
This allows the iPython server to listen on all IPs so that your app is accessible from the host machine.
Instead of viewing your app in `http://localhost:8888`, you can only view it in `http://[boot2docker-ip]:8888`. To find `boot2docker-ip` run this in your terminal (not boot2docker terminal):
```
boot2docker ip
``` |
Change clickable field in Django admin list_display | 33,616,330 | 4 | 2015-11-09T19:06:14Z | 33,616,452 | 11 | 2015-11-09T19:14:09Z | [
"python",
"django",
"django-models",
"django-forms",
"django-admin"
] | In Django 1.8.6, by default, whenever I provide a `list_display` option to a ModelAdmin subclass, the first field in the list becomes clickable and leads to the object edit page.
Is there a way to keep the order of the fields in `list_display`, but change the clickable one?
Currently, I have the `id` field clickable (it goes first in `list_display`), which is a tad small. I would like to better click on, say, `name` to go to the edit page. | You could have a look at [django.contrib.admin.ModelAdmin.list\_display\_links](https://docs.djangoproject.com/en/1.8/ref/contrib/admin/#django.contrib.admin.ModelAdmin.list_display_links)
Basically it is used like
```
class PersonAdmin(admin.ModelAdmin):
list_display = ('first_name', 'last_name', 'birthday')
list_display_links = ('first_name', 'last_name')
```
Hope this will help :) |
How do I use distributed DNN training in TensorFlow? | 33,616,593 | 11 | 2015-11-09T19:23:22Z | 33,641,967 | 13 | 2015-11-11T00:00:48Z | [
"python",
"parallel-processing",
"deep-learning",
"tensorflow"
] | Google released TensorFlow today.
I have been poking around in the code, and I don't see anything in the code or API about training across a cluster of GPU servers.
Does it have distributed training functionality yet? | **Updated:** The initial release of [Distributed TensorFlow](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/distributed_runtime#distributed-tensorflow) occurred on 2/26/2016. The release was announced by coauthor Derek Murray in the original issue [here](https://github.com/tensorflow/tensorflow/issues/23#issuecomment-189285409) and uses [gRPC](http://www.grpc.io/) for inter-process communication.
**Previous:** A distributed implementation of **TensorFlow** has not been released yet. Support for a distributed implementation is the topic of [this issue](https://github.com/tensorflow/tensorflow/issues/23) where coauthor Vijay Vasudevan [wrote](https://github.com/tensorflow/tensorflow/issues/23#issuecomment-155151640):
> we are working on making a distributed implementation available, it's
> currently not in the initial release
and Jeff Dean provided [an update](https://github.com/tensorflow/tensorflow/issues/23#issuecomment-155608002):
> Our current internal distributed extensions are somewhat entangled
> with Google internal infrastructure, which is why we released the
> single-machine version first. The code is not yet in GitHub, because
> it has dependencies on other parts of the Google code base at the
> moment, most of which have been trimmed, but there are some remaining
> ones.
>
> We realize that distributed support is really important, and it's one
> of the top features we're prioritizing at the moment. |
How do I use distributed DNN training in TensorFlow? | 33,616,593 | 11 | 2015-11-09T19:23:22Z | 35,653,639 | 8 | 2016-02-26T13:58:53Z | [
"python",
"parallel-processing",
"deep-learning",
"tensorflow"
] | Google released TensorFlow today.
I have been poking around in the code, and I don't see anything in the code or API about training across a cluster of GPU servers.
Does it have distributed training functionality yet? | It took us a few months, but today marks the release of the initial [distributed TensorFlow runtime](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/distributed_runtime). This includes support for multiple machines, each with multiple GPUs, with communication provided by [gRPC](http://grpc.io).
The current version includes the necessary backend components so that you can assemble a cluster manually and connect to it from a client program. More details are available in the [readme](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/distributed_runtime/README.md). |
Where is the folder for Installing tensorflow with pip, Mac OSX? | 33,616,732 | 14 | 2015-11-09T19:32:04Z | 33,616,733 | 30 | 2015-11-09T19:32:04Z | [
"python",
"osx",
"tensorflow"
] | just installed tensorflow using pip with the command:
`$ pip install tensorflow`
On the ["Getting Started" for Tensorflow](http://tensorflow.org/get_started/os_setup.md) they have an example for convolutional neural networks
`$ python tensorflow/models/image/mnist/convolutional.py`
Where is that directory located when installing with pip? | Installing with pip installs the packages to the "site-packages" directory.
The following code shows the location of tensorflow as well as where pip installs the packages:
```
$ pip show tensorflow
```
Which returns:
```
Metadata-Version: 2.0
Name: tensorflow
Version: 0.5.0
Summary: TensorFlow helps the tensors flow
Home-page: http://tensorflow.com/
Author: Google Inc.
Author-email: [email protected]
License: Apache 2.0
Location: /usr/local/lib/python2.7/site-packages
Requires: six, numpy
```
Here, `Location:` shows where the package is installed; you can change into that directory with
```
$ cd /usr/local/lib/python2.7/site-packages/tensorflow
``` |
For a pandas Series, shouldn't s.sort_index(inplace=True) change s? | 33,617,439 | 4 | 2015-11-09T20:14:44Z | 33,617,554 | 9 | 2015-11-09T20:21:18Z | [
"python",
"pandas"
] | Given this code:
```
s = pd.Series([1,2,3], index=['C','B','A'])
s.sort_index(inplace=True)
```
Shouldn't `s` now look like this:
```
A 3
B 2
C 1
dtype: int64
```
When I run this, `s` remains unchanged. Maybe I'm confused about what the `inplace` argument is supposed to do. I thought that it was supposed to change the Series on which the method is called.
For the record, this does return the sorted series, but it does so whether or not you set `inplace` to True. | You are indeed correct in your expectation. However, this was not implemented before 0.17, and a bug in 0.17 meant that the keyword was silently ignored instead of raising an error (as it did before). A fix will be released in the upcoming version 0.17.1.
See <https://github.com/pydata/pandas/pull/11422>
So for now, the easiest option is just to use it without `inplace`:
```
In [4]: s = s.sort_index()
In [5]: s
Out[5]:
A 3
B 2
C 1
dtype: int64
``` |
"No such file or directory" when starting convolutional.py script on tensorflow docker image | 33,621,547 | 3 | 2015-11-10T01:52:11Z | 33,622,640 | 9 | 2015-11-10T03:59:32Z | [
"python",
"docker",
"anaconda",
"tensorflow"
] | I don't have a Linux or Mac machine, so in order to check out TensorFlow on Windows, I installed Docker and downloaded the tensorflow-full image.
When I run the following command:
```
$ python tensorflow/models/image/mnist/convolutional.py
```
I get this error message:
```
C:\Users\Javiar\Anaconda\python.exe: can't open file 'tensorflow/models/image/mnist/convolutional.py': [Errno 2] No such file or directory
```
Currently on Win 8.1 and have anaconda installed. | It looks like the error message is caused by trying to execute a script file (`.../convolutional.py`) that is inside the container, using the Python interpreter outside the container.
First of all, follow the steps here to ensure that Docker is configured successfully on your Windows machine:
<http://docs.docker.com/engine/installation/windows/#using-docker-from-windows-command-prompt-cmd-exe>
Once you've successfully run the `hello-world` container, run the following command to start the TensorFlow container:
```
docker run -it b.gcr.io/tensorflow/tensorflow
```
(Note that, depending on your terminal, the previous step may or may not work. A common error is `cannot enable tty mode on non tty input`. In that case, run the following command to connect to the VM that is hosting the containers:
```
docker-machine ssh default
```
...then at the resulting prompt, the `docker run` command again.)
At the resulting prompt, you should be able to run the script with the following command:
```
python /usr/local/lib/python2.7/dist-packages/tensorflow/models/image/mnist/convolutional.py
``` |
Fail to run word embedding example in tensorflow tutorial with GPUs | 33,624,048 | 11 | 2015-11-10T06:17:43Z | 33,624,832 | 13 | 2015-11-10T07:28:33Z | [
"python",
"tensorflow"
] | I am trying to run the word embedding example code at <https://github.com/tensorflow/tensorflow/tree/master/tensorflow/g3doc/tutorials/word2vec> (installed with GPU version of tensorflow under Ubuntu 14.04), but it returns the following error message:
```
Found and verified text8.zip
Data size 17005207
Most common words (+UNK) [['UNK', 418391], ('the', 1061396), ('of', 593677), ('and', 416629), ('one', 411764)]
Sample data [5239, 3084, 12, 6, 195, 2, 3137, 46, 59, 156]
3084 -> 12
originated -> as
3084 -> 5239
originated -> anarchism
12 -> 3084
as -> originated
12 -> 6
as -> a
6 -> 12
a -> as
6 -> 195
a -> term
195 -> 6
term -> a
195 -> 2
term -> of
I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op parallelism threads: 12
I tensorflow/core/common_runtime/gpu/gpu_init.cc:88] Found device 0 with properties:
name: GeForce GTX TITAN X
major: 5 minor: 2 memoryClockRate (GHz) 1.076
pciBusID 0000:03:00.0
Total memory: 12.00GiB
Free memory: 443.32MiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:88] Found device 1 with properties:
name: GeForce GTX TITAN X
major: 5 minor: 2 memoryClockRate (GHz) 1.076
pciBusID 0000:05:00.0
Total memory: 12.00GiB
Free memory: 451.61MiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:112] DMA: 0 1
I tensorflow/core/common_runtime/gpu/gpu_init.cc:122] 0: Y Y
I tensorflow/core/common_runtime/gpu/gpu_init.cc:122] 1: Y Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:643] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:03:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:643] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX TITAN X, pci bus id: 0000:05:00.0)
I tensorflow/core/common_runtime/gpu/gpu_region_allocator.cc:47] Setting region size to 254881792
I tensorflow/core/common_runtime/gpu/gpu_region_allocator.cc:47] Setting region size to 263835648
I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op parallelism threads: 12
Initialized
Traceback (most recent call last):
File "word2vec_basic.py", line 171, in <module>
_, loss_val = session.run([optimizer, loss], feed_dict=feed_dict)
File "/home/chentingpc/anaconda/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 345, in run
results = self._do_run(target_list, unique_fetch_targets, feed_dict_string)
File "/home/chentingpc/anaconda/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 419, in _do_run
e.code)
tensorflow.python.framework.errors.InvalidArgumentError: Cannot assign a device to node 'GradientDescent/update_Variable_2/ScatterSub': Could not satisfy explicit device specification '' because the node was colocated with a group of nodes that required incompatible device '/job:localhost/replica:0/task:0/GPU:0'
[[Node: GradientDescent/update_Variable_2/ScatterSub = ScatterSub[T=DT_FLOAT, Tindices=DT_INT64, use_locking=false](Variable_2, gradients/concat_1, GradientDescent/update_Variable_2/mul)]]
Caused by op u'GradientDescent/update_Variable_2/ScatterSub', defined at:
File "word2vec_basic.py", line 145, in <module>
optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)
File "/home/chentingpc/anaconda/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 167, in minimize
name=name)
File "/home/chentingpc/anaconda/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 256, in apply_gradients
update_ops.append(self._apply_sparse(grad, var))
File "/home/chentingpc/anaconda/lib/python2.7/site-packages/tensorflow/python/training/gradient_descent.py", line 40, in _apply_sparse
return var.scatter_sub(delta, use_locking=self._use_locking)
File "/home/chentingpc/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 324, in scatter_sub
use_locking=use_locking)
File "/home/chentingpc/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/gen_state_ops.py", line 227, in scatter_sub
name=name)
File "/home/chentingpc/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 633, in apply_op
op_def=op_def)
File "/home/chentingpc/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1710, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/home/chentingpc/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 988, in __init__
self._traceback = _extract_stack()
```
When I run the code with the CPU version of tensorflow, it works just fine, but not with the GPU version. I also tried to use tf.device('/cpu:0') to force it to use the CPU instead of the GPU, but it produces the same error.
Is there any function in this example that cannot be run on GPUs? And how do I switch to the CPU without reinstalling the CPU version of tensorflow, since tf.device('/cpu:0') is not working? | It seems a whole bunch of operations used in this example aren't supported on a GPU. A quick workaround is to restrict operations such that only matrix muls are run on the GPU.
There's an example in the docs: <http://tensorflow.org/api_docs/python/framework.md>
See the section on tf.Graph.device(device\_name\_or\_function)
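The idea is that `graph.device()` also accepts a function mapping each graph node to a device name. As a minimal stand-alone illustration of such a selector (`FakeNode` is a hypothetical stand-in for a graph op, which only needs the `.type` attribute here, so this sketch runs without TensorFlow installed):

```python
def device_for_node(n):
    # Pin matrix multiplications to the GPU; run everything else on the CPU,
    # since ops like ScatterSub had no GPU kernel at the time.
    if n.type == "MatMul":
        return "/gpu:0"
    return "/cpu:0"

class FakeNode(object):
    """Hypothetical stand-in for a graph op, exposing only the .type attribute."""
    def __init__(self, node_type):
        self.type = node_type

print(device_for_node(FakeNode("MatMul")))      # /gpu:0
print(device_for_node(FakeNode("ScatterSub")))  # /cpu:0
```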
I was able to get it working with the following:
```
def device_for_node(n):
if n.type == "MatMul":
return "/gpu:0"
else:
return "/cpu:0"
with graph.as_default():
with graph.device(device_for_node):
...
``` |
Anaconda3 2.4 with python 3.5 installation error (procedure entry not found; Windows 10) | 33,625,683 | 15 | 2015-11-10T08:29:35Z | 33,627,722 | 8 | 2015-11-10T10:20:33Z | [
"python",
"python-3.x",
"anaconda",
"failed-installation"
] | I have just made up my mind to change from python 2.7 to python 3.5 and therefore tried to reinstall Anaconda (64 bit) with the 3.5 environment. When I try to install the package, I get several errors of the following form (translated from German, so maybe not exact):
> The procedure entry "\_\_telemetry\_main\_return\_trigger" could not be found in the DLL "C:\Anaconda3\pythonw.exe".
and
> The procedure entry "\_\_telemetry\_main\_invoke\_trigger" could not be found in the DLL "C:\Anaconda3\python35.dll".
The title of the second error message box still points to pythonw.exe. Both errors appear several times - every time an extraction was completed. The installation progress box reads
> [...]
>
> extraction complete.
>
> Execute: "C:\Anaconda3\pythonw.exe" "C:\Anaconda3\Lib\_nsis.py" postpkg
After torturing myself through the installation I get the warning
> Failed to create Anaconda menus
If I ignore it, it once again gives me my lovely error messages and tells me that
> Failed to initialize Anaconda directories
then
> Failed to add Anaconda to the system PATH
Of course nothing works if I dare to use the mess it installs. What might be going wrong? On other computers with Windows 10 it works well.
P.S.: An installation of Anaconda2 2.4 with python 2.7 works without any error message, but still cannot be used (other errors). | Finally, I have found the reason. So, if anybody else has this problem:
[Here](https://groups.google.com/a/continuum.io/forum/#!topic/anaconda/YI-obkGV4oU) the entry points are an issue as well, and Michael Sarahan gives the solution: first install the [Visual C++ Redistributable for Visual Studio 2015](https://www.microsoft.com/en-us/download/confirmation.aspx?id=48145), which is used by the new version of python. After that, install the Anaconda package and it should work like a charm. |
How to modify a Python JSON objects array | 33,625,908 | 2 | 2015-11-10T08:43:50Z | 33,625,966 | 8 | 2015-11-10T08:47:23Z | [
"python",
"arrays",
"json",
"python-2.7"
] | Let's assume the following:
```
sp_sample=[{"t":1434946093036,"v":54.0},{"t":1434946095013,"v":53.0},{"t":1434946096823,"v":52.0}]
```
I wish I could get the following result:
```
sp_sample=[{"t":1434946093036,"v":5400.0},{"t":1434946095013,"v":5300.0},{"t":1434946096823,"v":5200.0}]
```
In other words, I wish I could iterate through the array and multiply v by a factor of 100.
The following only performs the multiplication on the first item, i.e. it yields 54000:
```
for i, a in enumerate(sp_sample):
a[i]['v'] = a[i]['v'] * 100
```
`sp_sample` is of type tuple. Using the following yields the whole array, which is not what I expect:
```
print sp_sample[0]
```
Also, I tried printing sp\_sample:
```
print sp_sample
```
This returns the following (part of the output replaced with ....... for brevity):
```
([{'t': 1434946093036, 'v': 54.0}, {'t': 1434946095013, 'v': 53.0}, {'t': 1434946096823, 'v': 52.0}, {'t': 1434946098612, 'v': 52.0}, {'t': 1434946100400, 'v': 51.0}, {'t': 1434946102372, 'v': 49.0},........, {'t': 1434947987823, 'v': 15.0}, {'t': 1434947989851, 'v': 12.0}, {'t': 1434947991899, 'v': 10.0}, {'t': 1434947993744, 'v': 5.0}, {'t': 1434947995599, 'v': 0.0}, {'t': 1434947997455, 'v': 0.0}, {'t': 1434947999494, 'v': 0.0}, {'t': 1434948001542, 'v': 0.0}, {'t': 1434948003417, 'v': 0.0}, {'t': 1434948005264, 'v': 0.0}, {'t': 1434948007120, 'v': 0.0}],)
```
print type(sp\_sample) returns: | Simply iterate over the list and update the dictionaries as you go:
```
sp_sample = [{"t":1434946093036,"v":54.0},{"t":1434946095013,"v":53.0},{"t":1434946096823,"v":52.0}]
for d in sp_sample:
d['v'] *= 100
>>> print(sp_sample)
[{'t': 1434946093036, 'v': 5400.0}, {'t': 1434946095013, 'v': 5300.0}, {'t': 1434946096823, 'v': 5200.0}]
```
This will bind in turn each dictionary in list (tuple?) `sp_sample` to `d`, which you then update in place. You do not need to use `enumerate()`.
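As a side note (not part of the original answer), if you would rather leave the original list untouched, you can build a new one instead. A sketch that works on both Python 2.7 and 3, since `dict(d, v=...)` makes a shallow copy of the dictionary with the `v` key overridden:

```python
sp_sample = [{"t": 1434946093036, "v": 54.0},
             {"t": 1434946095013, "v": 53.0},
             {"t": 1434946096823, "v": 52.0}]

# dict(d, v=...) copies each dictionary and replaces its 'v' value
scaled = [dict(d, v=d['v'] * 100) for d in sp_sample]

print(scaled[0]['v'])     # 5400.0
print(sp_sample[0]['v'])  # 54.0 -- the original list is unchanged
```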
Note that you really need to multiply by 100, not 10000, to achieve the output that you have shown.
---
**Update**
`sp_sample` is actually a tuple with a list of dictionaries as its only item. So you need to access the list in the tuple like this:
```
sp_sample = ([{"t":1434946093036,"v":54.0},{"t":1434946095013,"v":53.0},{"t":1434946096823,"v":52.0}],)
for d in sp_sample[0]: # N.B. access first item of tuple
d['v'] *= 100
>>> print(sp_sample)
[{'t': 1434946093036, 'v': 5400.0}, {'t': 1434946095013, 'v': 5300.0}, {'t': 1434946096823, 'v': 5200.0}]
```
Or, since the tuple contains only a single item you could just get rid of the tuple by:
```
sp_sample = sp_sample[0]
for d in sp_sample:
d['v'] *= 100
``` |
Copy file with pathlib in Python | 33,625,931 | 6 | 2015-11-10T08:45:09Z | 33,626,207 | 7 | 2015-11-10T09:01:52Z | [
"python",
"file",
"copy",
"pathlib"
] | I try to copy a file with `pathlib`
```
import pathlib
import shutil
my_file=pathlib.Path('/etc/hosts')
to_file=pathlib.Path('/tmp/foo')
shutil.copy(my_file, to_file)
```
I get this exception:
```
/home/foo_egs_d/bin/python /home/foo_egs_d/src/test-pathlib-copy.py
Traceback (most recent call last):
File "/home/foo_egs_d/src/test-pathlib-copy.py", line 6, in <module>
shutil.copy(my_file, to_file)
File "/usr/lib/python2.7/shutil.py", line 117, in copy
if os.path.isdir(dst):
File "/home/foo_egs_d/lib/python2.7/genericpath.py", line 41, in isdir
st = os.stat(s)
TypeError: coercing to Unicode: need string or buffer, PosixPath found
Process finished with exit code
```
... how can I copy a file with pathlib in Python? | So what about this?
```
import pathlib
import shutil
my_file = pathlib.Path('/etc/hosts')
to_file = pathlib.Path('/tmp/foo')
shutil.copy(str(my_file), str(to_file))
```
The problem is that `pathlib.Path` creates a `PosixPath` object if you're using Unix/Linux, and a `WindowsPath` if you're using Microsoft Windows.
But `shutil.copy()` needs strings as arguments, so just use `str()` here: calling `str()` on a `Path` object returns the original path as a plain string. |
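As a runnable demonstration of this workaround (a sketch using a temporary directory so it is safe to run anywhere; note that on Python 3.6+, `shutil.copy()` also accepts `Path` objects directly, so the `str()` calls are only needed on older versions):

```python
import pathlib
import shutil
import tempfile

tmp = pathlib.Path(tempfile.mkdtemp())
src = tmp / 'hosts.txt'
src.write_text(u'127.0.0.1 localhost')  # create a small file to copy
dst = tmp / 'foo.txt'

# str() turns the Path objects back into plain strings,
# which shutil.copy() accepts on every Python version.
shutil.copy(str(src), str(dst))

print(dst.read_text())  # 127.0.0.1 localhost
```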