title stringlengths 12 150 | question_id int64 469 40.1M | question_score int64 2 5.52k | question_date stringdate 2008-08-02 15:11:16 2016-10-18 06:16:31 | answer_id int64 536 40.1M | answer_score int64 7 8.38k | answer_date stringdate 2008-08-02 18:49:07 2016-10-18 06:19:33 | tags listlengths 1 5 | question_body_md stringlengths 15 30.2k | answer_body_md stringlengths 11 27.8k |
|---|---|---|---|---|---|---|---|---|---|
Python type() shows different results | 31,040,723 | 3 | 2015-06-25T03:29:00Z | 31,040,776 | 7 | 2015-06-25T03:34:36Z | [
"python",
"sublimetext2"
] | I'm using `Sublime Text 2` while learning Python; actually, I'm just a beginner. Now, when I write `type(1/2)` in the editor and build it (**cmd+B**), I get the output as **int**. However, if I write the same instruction in Sublime's terminal (**ctrl + `**), I get the result as **float**. Can somebody explain why that happens?
```
type(1/2) #in Sublime's editor results: <type 'int'>
type(1/2) #in Sublime's python console results <type 'float'>
```
I believe it should be "**int**", but then why does it say "**float**"? | Somewhere the code is doing `from __future__ import division`
```
>>> type(1/2)
<type 'int'>
>>> from __future__ import division
>>> type(1/2)
<type 'float'>
```
`python2.7`
```
>>> type(1/2)
<type 'int'>
```
Python 3 has `type` report this as a class, so it's not that the console is simply running Python 3:
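As a side note, if you want the truncating result on any interpreter version, floor division makes it explicit. A sketch:

```python
# Floor division truncates on both Python 2 and Python 3
print(1 // 2)        # 0
print(type(1 // 2))  # int on both major versions
```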
`python3`
```
>>> type(1/2)
<class 'float'>
``` |
Can't open video using opencv | 31,040,746 | 5 | 2015-06-25T03:31:33Z | 31,130,210 | 7 | 2015-06-30T05:27:26Z | [
"python",
"opencv",
"video",
"codec"
] | The opencv works fine when doing other things. It can open images and show images. But it can't open a video.
The code I'm using to open a video is as below
```
import cv2
cap = cv2.VideoCapture("MOV_0006.mp4")
while True:
ret, frame = cap.read()
cv2.imshow('video', frame)
if cv2.waitKey(1) & 0xff == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
```
But when executing, it outputs error messages like below
```
[h264 @ 0x1053ba0] AVC: nal size 554779904
[h264 @ 0x1053ba0] AVC: nal size 554779904
[h264 @ 0x1053ba0] no frame!
```
My `vlc` and `mplayer` can play this video, but the opencv can't.
I have installed `x264` and `libx264-142` codec package. (using `sudo apt-get install`)
My version of ubuntu is `14.04 trusty`.
I'm not sure whether it is a codec problem or not.
I have rebuilt opencv either with `WITH_UNICAP=ON` or with `WITH_UNICAP=OFF`, but it doesn't affect the problem at all. The error messages never change. | # It's a codec problem
I converted that `mp4` file to an `avi` file with `ffmpeg`. Then the above opencv code can play that `avi` file well.
Therefore I am sure that this is a codec problem.
(I then converted that `mp4` file to another `mp4` file using `ffmpeg`, thinking maybe `ffmpeg` would help turn the original unreadable `.mp4` codec into a readable one, but the resulting `.mp4` file ended up broken. This may or may not relate to this problem; I mention it just in case anybody needs this information.)
# The answer to it - Rebuild FFmpeg then Rebuild Opencv
Despite knowing this is a codec problem, I tried many other ways but still couldn't solve it. At last I tried rebuilding ffmpeg and opencv, then the problem was solved!
Following is my detailed rebuilding procedure.
**(1) Build ffmpeg**
1. Download ffmpeg-2.7.1.tar.bz2
> FFmpeg website: <https://www.ffmpeg.org/download.html>
>
> ffmpeg-2.7.1.tar.bz2 link: <http://ffmpeg.org/releases/ffmpeg-2.7.1.tar.bz2>
2. `tar -xvf ffmpeg-2.7.1.tar.bz2`
3. `cd ffmpeg-2.7.1`
4. `./configure --enable-pic --extra-ldexeflags=-pie`
> From <http://www.ffmpeg.org/platform.html#Advanced-linking-configuration>
>
> If you compiled FFmpeg libraries statically and you want to use them to build your own shared library, you may need to force PIC support (with `--enable-pic` during FFmpeg configure).
>
> If your target platform requires position independent binaries, you should pass the correct linking flag (e.g. `-pie`) to `--extra-ldexeflags`.
>
> ---
>
> If you encounter error:
> `yasm/nasm not found or too old. Use --disable-yasm for a crippled build.`
>
> Just `sudo apt-get install yasm`
>
> ---
>
> Further building options: <https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu>
>
> e.g. Adding the option `--enable-libmp3lame` enables the `mp3` encoder. (Before `./configure` you need to `sudo apt-get install libmp3lame-dev` with version ≥ 3.98.3)
5. `make -j5` (under ffmpeg folder)
6. `sudo make install`
**(2) Build Opencv**
1. `wget http://downloads.sourceforge.net/project/opencvlibrary/opencv-unix/2.4.9/opencv-2.4.9.zip`
2. `unzip opencv-2.4.9.zip`
3. `cd opencv-2.4.9`
4. `mkdir build`
5. `cd build`
6. `cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_NEW_PYTHON_SUPPORT=ON -D WITH_QT=OFF -D WITH_V4L=ON -D CMAKE_SHARED_LINKER_FLAGS=-Wl,-Bsymbolic ..`
> You can change these options depending on your needs. Only the last one, `-D CMAKE_SHARED_LINKER_FLAGS=-Wl,-Bsymbolic`, is the key option. If you omit it, `make` will throw errors.
>
> This is also from <http://www.ffmpeg.org/platform.html#Advanced-linking-configuration> (the same link of step 4 above)
>
> If you compiled FFmpeg libraries statically and you want to use them to build your own shared library, you may need to ... and add the following option to your project `LDFLAGS`: `-Wl,-Bsymbolic`
7. `make -j5`
8. `sudo make install`
9. `sudo sh -c 'echo "/usr/local/lib" > /etc/ld.so.conf.d/opencv.conf'`
10. `sudo ldconfig`
Now the opencv code should play a `mp4` file well!
# Methods I tried but didn't work
1. Tried adding `WITH_UNICAP=ON` and `WITH_V4L=ON` when running `cmake` for opencv. But it didn't work at all.
2. Tried changing the codec inside the Python opencv code. But in vain.
> `cap = cv2.VideoCapture("MOV_0006.mp4")`
>
> `print cap.get(cv2.cv.CV_CAP_PROP_FOURCC)`
>
> I tested this in two environments. In the first environment opencv works, and in the other opencv fails to play a video. But both printed out the same codec, `828601953.0`.
>
> I tried to change their codec by `cap.set(cv2.cv.CV_CAP_PROP_FOURCC, cv2.cv.CV_FOURCC(*'H264'))` but didn't work at all.
3. Tried replacing the libraries under `opencv-2.4.8/3rdparty/lib/` with the libraries from my working environment. But I couldn't even build successfully.
> I grepped for `AVC: nal size` and found that the libraries containing this error message are `opencv-2.4.8/3rdparty/lib/libavcodec.a` etc. That's why I tried to replace them. But it turned out that this was a bad idea.
4. `sudo apt-get -y install libopencv-dev build-essential cmake git libgtk2.0-dev pkg-config python-dev python-numpy libdc1394-22 libdc1394-22-dev libjpeg-dev libpng12-dev libtiff4-dev libjasper-dev libavcodec-dev libavformat-dev libswscale-dev libxine-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev libv4l-dev libtbb-dev libqt4-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev x264 v4l-utils unzip`
> Tried installing this package and that package. But it was totally useless.
# Problems Related
I've searched the web and there are many similar problems, but NONE of them had a solution!
Below are problems I think the same as mine.
* <http://answers.opencv.org/question/9692/mp4-reading/>
> **Mp4 reading**
>
> hi I am new to opencv with python. I have installed OpenCV 2.4.1 and python 2.7. I also made a short program that reads the avi file successfully. However it fails to read the mp4 file.
* [Cannot open ".mp4" video files using OpenCV 2.4.3, Python 2.7 in Windows 7 machine](http://stackoverflow.com/questions/13834399/cannot-open-mp4-video-files-using-opencv-2-4-3-python-2-7-in-windows-7-machi)
> **Cannot open ".mp4" video files using OpenCV 2.4.3, Python 2.7 in Windows 7 machine**
* [OpenCV 2.4 VideoCapture not working on Windows](http://stackoverflow.com/questions/11699298/opencv-2-4-videocapture-not-working-on-windows)
> **OpenCV 2.4 VideoCapture not working on Windows**
* <http://opencv-users.1802565.n2.nabble.com/OpenCV-ffmpeg-does-not-play-my-mp4-video-td5697169.html>
> **OpenCV/ffmpeg does not play my mp4 video.**
* <http://answers.opencv.org/question/263/videocapture-is-not-working-in-opencv-242/>
> **VideoCapture is not working in OpenCV 2.4.2**
>
> This one mentions rebuilding `ffmpeg`! But the arguments weren't enough for me. |
How do I get authentication in a telegram bot? | 31,042,219 | 13 | 2015-06-25T05:52:34Z | 31,212,577 | 23 | 2015-07-03T18:33:26Z | [
"python",
"authentication",
"telegram",
"telegram-bot"
] | Telegram Bots are ready now.
If we use the analogy of web browser and websites, the telegram client applications are like the browser clients.
The Telegram Chatrooms are like websites.
Suppose we have some information we want to restrict to certain users; on websites, we would use authentication.
How do we achieve the same effect on the Telegram Bots?
I was told that I can use deep linking. See description [here](https://core.telegram.org/bots#deep-linking)
I will reproduce it below:
> 1. Create a bot with a suitable username, e.g. @ExampleComBot
> 2. Set up a webhook for incoming messages
> 3. Generate a random string of a sufficient length, e.g. $memcache\_key = "vCH1vGWJxfSeofSAs0K5PA"
> 4. Put the value 123 with the key $memcache\_key into Memcache for 3600 seconds (one hour)
> 5. Show our user the button <https://telegram.me/ExampleComBot?start=vCH1vGWJxfSeofSAs0K5PA>
> 6. Configure the webhook processor to query Memcached with the parameter that is passed in incoming messages beginning with /start.
> If the key exists, record the chat\_id passed to the webhook as
> telegram\_chat\_id for the user 123. Remove the key from Memcache.
> 7. Now when we want to send a notification to the user 123, check if they have the field telegram\_chat\_id. If yes, use the sendMessage
> method in the Bot API to send them a message in Telegram.
I know how to do step 1.
I want to understand the rest.
This is the image I have in mind when I try to decipher step 2.

So the various telegram clients communicate with the Telegram Server when talking to ExampleBot on their applications. The communication is two-way.
Step 2 suggests that the Telegram Server will update the ExampleBot Server via a webhook. A webhook is just a URL.
So far, am I correct?
What's the next step towards using this for authentication? | *Forget about the webhook thingy.*
The deep linking explained:
1. Let the user log in on an actual website with actual username-password authentication.
2. Generate a unique hashcode (we will call it unique\_code)
3. Save unique\_code->username to a database or key-value storage.
4. Show the user the URL <https://telegram.me/YOURBOTNAME?start=unique_code>
5. Now as soon as the user opens this URL in Telegram and presses 'Start', your bot will receive a text message containing '/start unique\_code', where unique\_code is of course replaced by the actual hashcode.
6. Let the bot retrieve the username by querying the database or key-value storage for unique\_code.
7. Save chat\_id->username to a database or key-value storage.
Now when your bot receives another message, it can query message.chat.id in the database to check if the message is from this specific user. (And handle accordingly)
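The unique hashcode in step 2 can be generated with the standard library. A sketch (the helper name is just for illustration; any sufficiently random, URL-safe token works as a `start` parameter):

```python
import uuid

def make_unique_code():
    # uuid4().hex is a random 32-character hex string, safe to put in a URL
    return uuid.uuid4().hex

unique_code = make_unique_code()
deep_link = "https://telegram.me/YOURBOTNAME?start=" + unique_code
```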
Some code (using [pyTelegramBotAPI](https://github.com/eternnoir/pyTelegramBotAPI)):
```
import telebot
import time
bot = telebot.TeleBot('TOKEN')
def extract_unique_code(text):
# Extracts the unique_code from the sent /start command.
return text.split()[1] if len(text.split()) > 1 else None
def in_storage(unique_code):
# Should check if a unique code exists in storage
return True
def get_username_from_storage(unique_code):
# Does a query to the storage, retrieving the associated username
# Should be replaced by a real database-lookup.
return "ABC" if in_storage(unique_code) else None
def save_chat_id(chat_id, username):
# Save the chat_id->username to storage
# Should be replaced by a real database query.
pass
@bot.message_handler(commands=['start'])
def send_welcome(message):
unique_code = extract_unique_code(message.text)
if unique_code: # if the '/start' command contains a unique_code
username = get_username_from_storage(unique_code)
if username: # if the username exists in our database
save_chat_id(message.chat.id, username)
reply = "Hello {0}, how are you?".format(username)
else:
reply = "I have no clue who you are..."
else:
reply = "Please visit me via a provided URL from the website."
bot.reply_to(message, reply)
bot.polling()
while True:
time.sleep(0)
```
Note: the unique\_code will not be shown as '/start unique\_code', only '/start', in the Telegram client, but your bot will still receive '/start unique\_code'.
Another way I can think of off the top of my head is direct authentication within the bot via '/auth username password', but keep in mind that your username and password will be saved in chat history this way.
Edit 1:
Keep in mind that you do not *have* to use WebHooks to process messages. In fact, pyTelegramBotAPI uses the getUpdates method rather than a webhook. |
Define True, if not defined, causes syntax error | 31,042,827 | 19 | 2015-06-25T06:31:16Z | 31,042,847 | 9 | 2015-06-25T06:32:30Z | [
"python",
"python-3.x"
] | I have found the following construct today in someone else's code:
```
try: True, False
except NameError: True = 1==1; False = 1==0
```
As I understand this, it defines `True` and `False` if they are not defined already. So if they are defined it shouldn't throw the `NameError`-Exception, right?
I have tried this myself in a shell and it shows me `SyntaxError: can't assign to keyword`
My question is why does it even show the syntax error if True and False are defined? If True and False are available on my system, shouldn't it just go past the exception handling and not show a syntax error? | `SyntaxError` shows up during the byte-compilation stage, before the code is ever run -- so you can't get around it with `try`/`except`. |
Define True, if not defined, causes syntax error | 31,042,827 | 19 | 2015-06-25T06:31:16Z | 31,042,962 | 21 | 2015-06-25T06:39:06Z | [
"python",
"python-3.x"
] | I have found the following construct today in someone else's code:
```
try: True, False
except NameError: True = 1==1; False = 1==0
```
As I understand this, it defines `True` and `False` if they are not defined already. So if they are defined it shouldn't throw the `NameError`-Exception, right?
I have tried this myself in a shell and it shows me `SyntaxError: can't assign to keyword`
My question is why does it even show the syntax error if True and False are defined? If True and False are available on my system, shouldn't it just go past the exception handling and not show a syntax error? | This code is written for Python 2.x and won't work on Python 3.x (in which `True` and `False` are true keywords).
Since `True` and `False` are keywords in Python 3, you'll get a `SyntaxError` which you cannot catch.
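You can see that the error is raised at compile time, before any bytecode runs. A sketch (Python 3):

```python
# In Python 3, assigning to a keyword fails when the source is
# byte-compiled, so no bytecode ever runs and try/except can't catch
# it inside the same module. compile() makes the compile step explicit:
try:
    compile("True = 1", "<string>", "exec")
except SyntaxError as err:
    print("raised at compile time:", err)
```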
This code exists because of very old versions of Python. In Python 2.2 (released in 2001!), `True` and `False` did not exist as predefined names, so this code would provide compatible definitions so that later code could simply use `True` and `False`.
When converting your Python 2.x code to Python 3.x, remove these lines; they are historical and have no use in Python 3.x. If you see these lines in someone else's code, they are a sure sign that the program was not written or ported for Python 3. |
Method to get the max distance (step) between values in python? | 31,044,711 | 4 | 2015-06-25T08:11:44Z | 31,044,798 | 8 | 2015-06-25T08:16:12Z | [
"python",
"list"
] | Given a list of integers, does there exist a default method to find the max distance between values?
So if I have this array
```
[1, 3, 5, 9, 15, 30]
```
The max step between the values is 15. Does the list object have a method to do that? | No, `list` objects have no standard "adjacent differences" method or the like. However, using the `pairwise` function mentioned in the [`itertools` recipes](https://docs.python.org/2/library/itertools.html#recipes):
```
from itertools import tee, izip  # izip is Python 2; on Python 3 use zip

def pairwise(iterable):
    a, b = tee(iterable)
    next(b, None)
    return izip(a, b)
```
...you can (concisely *and* efficiently) define
```
>>> max(b-a for (a,b) in pairwise([1, 3, 5, 9, 15, 30]))
15
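# Python 3 note (a sketch): izip was removed; plain zip with list
# slicing computes the same adjacent differences without the recipe.
l = [1, 3, 5, 9, 15, 30]
max_step = max(b - a for a, b in zip(l, l[1:]))
# max_step == 15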
``` |
Finding prime numbers using list comprehension
"python",
"math",
"list-comprehension"
] | I was trying to generate all prime numbers in the range x to y. I tried a simple example first: `range(10,11)`, which means checking whether 10 is a prime number:
Here is my code:
```
prime_list = [x for x in range(10, 11) for y in range(2,x) if x % x == 0 and x % 1 == 0 and x % y != 0]
```
I know that the thing is missing the option to tell the expression that `x%y != 0` should be checked for all y in `range (2,x)` and return true if and only if all have met this condition.
How do we do that? | Use [`all`](https://docs.python.org/3/library/functions.html#all) to check that all elements (from 2 up to x-1) meet the condition:
```
>>> [x for x in range(2, 20)
if all(x % y != 0 for y in range(2, x))]
[2, 3, 5, 7, 11, 13, 17, 19]
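# A faster sketch of the same idea: any divisor above sqrt(x) pairs
# with one below it, so testing divisors up to sqrt(x) is enough.
primes = [x for x in range(2, 20)
          if all(x % y != 0 for y in range(2, int(x ** 0.5) + 1))]
# primes == [2, 3, 5, 7, 11, 13, 17, 19]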
``` |
What is the most pythonic way to iterate over OrderedDict | 31,046,231 | 17 | 2015-06-25T09:22:45Z | 31,046,250 | 40 | 2015-06-25T09:23:52Z | [
"python",
"python-2.7",
"loops",
"dictionary"
] | I have an OrderedDict and in a loop I want to get index, key and value.
Surely it can be done in multiple ways, e.g.
```
a = collections.OrderedDict({…})
for i,b,c in zip(range(len(a)), a.iterkeys(), a.itervalues()):
…
```
But I would like to avoid range(len(a)) and shorten a.iterkeys(), a.itervalues() to something like a.iteritems().
With enumerate and iteritems it's possible to rephrase as
```
for i,d in enumerate(a.iteritems()):
b,c = d
```
But it requires unpacking inside the loop body.
Is there a way to unpack in a for statement or maybe a more elegant way to iterate? | You can use tuple unpacking in [`for` statement](https://docs.python.org/2/reference/compound_stmts.html#the-for-statement):
```
for i, (key, value) in enumerate(a.iteritems()):
# Do something with i, key, value
```
---
```
>>> d = {'a': 'b'}
>>> for i, (key, value) in enumerate(d.iteritems()):
... print i, key, value
...
0 a b
```
Side Note:
In Python 3.x, use [`dict.items()`](https://docs.python.org/3/library/stdtypes.html#dict.items) which returns an iterable dictionary view.
```
>>> for i, (key, value) in enumerate(d.items()):
... print(i, key, value)
``` |
Scrapy gives URLError: <urlopen error timed out> | 31,048,130 | 5 | 2015-06-25T10:44:54Z | 31,055,000 | 15 | 2015-06-25T15:46:32Z | [
"python",
"web-scraping",
"scrapy"
] | So I have a scrapy program I am trying to get off the ground, but I can't get my code to execute; it always comes out with the error below.
I can still visit the site using the `scrapy shell` command, so I know the URLs and everything work.
Here is my code
```
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from Malscraper.items import MalItem
class MalSpider(CrawlSpider):
name = 'Mal'
allowed_domains = ['www.website.net']
start_urls = ['http://www.website.net/stuff.php?']
rules = [
Rule(LinkExtractor(
allow=['//*[@id="content"]/div[2]/div[2]/div/span/a[1]']),
callback='parse_item',
follow=True)
]
def parse_item(self, response):
mal_list = response.xpath('//*[@id="content"]/div[2]/table/tr/td[2]/')
for mal in mal_list:
item = MalItem()
item['name'] = mal.xpath('a[1]/strong/text()').extract_first()
item['link'] = mal.xpath('a[1]/@href').extract_first()
yield item
```
Edit: Here is the trace.
```
Traceback (most recent call last):
File "C:\Users\2015\Anaconda\lib\site-packages\boto\utils.py", line 210, in retry_url
r = opener.open(req, timeout=timeout)
File "C:\Users\2015\Anaconda\lib\urllib2.py", line 431, in open
response = self._open(req, data)
File "C:\Users\2015\Anaconda\lib\urllib2.py", line 449, in _open
'_open', req)
File "C:\Users\2015\Anaconda\lib\urllib2.py", line 409, in _call_chain
result = func(*args)
File "C:\Users\2015\Anaconda\lib\urllib2.py", line 1227, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "C:\Users\2015\Anaconda\lib\urllib2.py", line 1197, in do_open
raise URLError(err)
URLError: <urlopen error timed out>
```
Edit2:
So with the scrapy `shell` command I am able to manipulate my responses, but I just noticed that the same exact error comes up again when visiting the site.
Edit3:
I am now finding that the error shows up on EVERY website I use the `shell` command with, but I am still able to manipulate the response.
Edit4:
So how do I verify that I am at least receiving a response from Scrapy when running the `crawl` command?
Now I don't know whether it's my code or this error that is the reason my logs turn up empty.
Here is my settings.py
```
BOT_NAME = 'Malscraper'
SPIDER_MODULES = ['Malscraper.spiders']
NEWSPIDER_MODULE = 'Malscraper.spiders'
FEED_URI = 'logs/%(name)s/%(time)s.csv'
FEED_FORMAT = 'csv'
``` | There's an open scrapy issue for this problem: <https://github.com/scrapy/scrapy/issues/1054>
Although it seems to be just a warning on other platforms.
You can disable the S3DownloadHandler (that is causing this error) by adding to your scrapy settings:
```
DOWNLOAD_HANDLERS = {
's3': None,
}
``` |
how to merge two data structures in python | 31,049,042 | 4 | 2015-06-25T11:26:14Z | 31,049,085 | 7 | 2015-06-25T11:28:53Z | [
"python",
"dictionary",
"recursion",
"data-structures",
"iterator"
] | I have two complex data structures (i.e. \_to and \_from), and I want to override the entries of \_to with the matching entries of \_from.
I have given this example.
```
# I am having two data structure _to and _from
# I want to override _to from _from
_to = {'host': 'test',
'domain': [
{
'ssl': 0,
'ssl_key': '',
}
],
'x': {}
}
_from = {'status': 'on',
'domain': [
{
'ssl': 1,
'ssl_key': 'Xpyn4zqJEj61ChxOlz4PehMOuPMaxNnH5WUY',
'ssl_cert': 'nuyickK8uk4VxHissViL3O9dV7uGSLF62z52L4dAm78LeVdq'
}
]
}
### I want this output
_result = {'host': 'test',
'status': 'on',
'domain': [
{
'ssl': 1,
'ssl_key': 'Xpyn4zqJEj61ChxOlz4PehMOuPMaxNnH5WUY',
'ssl_cert': 'nuyickK8uk4VxHissViL3O9dV7uGSLF62z52L4dAm78LeVdq'
}
],
'x': {}
}
```
Use case 2:
```
_to = {'host': 'test',
'domain': [
{
'ssl': 0,
'ssl_key': '',
'ssl_cert': 'nuyickK8uk4VxHissViL3O9dV7uGSLF62z52L4dAm78LeVdq',
"abc": [],
'https': 'no'
}
],
'x': {}
}
_from = {
'domain': [
{
'ssl': 1,
'ssl_key': 'Xpyn4zqJEj61ChxOlz4PehMOuPMaxNnH5WUY',
'ssl_cert': 'nuyickK8uk4VxHissViL3O9dV7uGSLF62z52L4dAm78LeVdq'
}
]
}
```
dict.update(dict2) won't help me, because this will delete the extra keys in \_to dict. | It's quite simple:
```
_to.update(_from)
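# Note: dict.update() is shallow -- nested dicts/lists from _from replace,
# rather than merge into, those in _to. If the nested values must be
# merged too (as in the desired _result above), a recursive sketch
# (assumption: lists are merged element-by-element, by position):
def deep_merge(to, frm):
    for key, value in frm.items():
        if isinstance(value, dict) and isinstance(to.get(key), dict):
            deep_merge(to[key], value)
        elif isinstance(value, list) and isinstance(to.get(key), list):
            for i, item in enumerate(value):
                if (i < len(to[key]) and isinstance(item, dict)
                        and isinstance(to[key][i], dict)):
                    deep_merge(to[key][i], item)
                elif i < len(to[key]):
                    to[key][i] = item
                else:
                    to[key].append(item)
        else:
            to[key] = value
    return to
# usage: deep_merge(_to, _from)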
``` |
Migrating database from local development to Heroku-Django 1.8 | 31,057,998 | 4 | 2015-06-25T18:26:29Z | 31,059,287 | 10 | 2015-06-25T19:35:56Z | [
"python",
"django",
"postgresql",
"heroku",
"django-database"
] | After establishing a database using `heroku addons:create heroku-postgresql:hobby-dev`, I tried to migrate my local database to heroku database. So I first ran
`heroku run python manage.py migrate`. After that I created a dump file of my local database using `pg_dump -Fc --no-acl --no-owner -h localhost -U myuser mydb > mydb.dump`. I uploaded my `mydb.dump` file to dropbox and then used the following command to load the dump to my heroku database
```
heroku pg:backups restore 'https://www.dropbox.com/s/xkc8jhav70hgqfd/mydb.dump?' HEROKU_POSTGRESQL_COLOR_URL
```
But, that throws the following error -
```
r004 ---restore---> HEROKU_POSTGRESQL_PURPLE
[0KRunning... 0.00B..
[0KAn error occurred and your backup did not finish.
Please run `heroku pg:backups info r004` for details.
```
And on running `heroku pg:backups info r004` I get -
```
Database: BACKUP
Started: 2015-06-25 18:19:37 +0000
Finished: 2015-06-25 18:19:38 +0000
Status: Failed
Type: Manual
Backup Size: 0.00B
=== Backup Logs
2015-06-25 18:19:38 +0000: waiting for restore to complete
2015-06-25 18:19:38 +0000: pg_restore: [archiver] did not find magic string in file header
2015-06-25 18:19:38 +0000: restore done
2015-06-25 18:19:38 +0000: waiting for download to complete
2015-06-25 18:19:38 +0000: download done
```
There is not much information on this error online and I can't figure out what the problem is. | If the database is small and you feel lucky, this might do it:
```
pg_dump --no-acl --no-owner -h localhost -U myuser mydb | heroku pg:psql
``` |
Spark 1.4 increase maxResultSize memory | 31,058,504 | 9 | 2015-06-25T18:51:55Z | 31,058,669 | 16 | 2015-06-25T19:01:57Z | [
"python",
"memory",
"apache-spark",
"pyspark",
"jupyter"
] | I am using Spark 1.4 for my research and struggling with the memory settings. My machine has 16GB of memory, so no problem there, since the size of my file is only 300MB. However, when I try to convert a Spark RDD to a pandas dataframe using the `toPandas()` function, I receive the following error:
```
serialized results of 9 tasks (1096.9 MB) is bigger than spark.driver.maxResultSize (1024.0 MB)
```
I tried to fix this changing the spark-config file and still getting the same error. I've heard that this is a problem with spark 1.4 and wondering if you know how to solve this. Any help is much appreciated. | You can set `spark.driver.maxResultSize` parameter in the `SparkConf` object:
```
from pyspark import SparkConf, SparkContext
# In Jupyter you have to stop the current context first
sc.stop()
# Create new config
conf = (SparkConf()
.set("spark.driver.maxResultSize", "2g"))
# Create new context
sc = SparkContext(conf=conf)
```
You should probably create a new `SQLContext` as well:
```
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
``` |
How to load default profile in chrome using Python Selenium Webdriver? | 31,062,789 | 3 | 2015-06-26T00:03:17Z | 31,063,104 | 11 | 2015-06-26T00:43:15Z | [
"python",
"selenium-chromedriver"
] | So I'd like to open Chrome with its default profile using Python's Selenium webdriver. I've tried everything I could find but I still couldn't get it to work. Thanks for the help! | This is what finally got it working for me.
```
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
options = webdriver.ChromeOptions()
options.add_argument("user-data-dir=C:\\Path") #Path to your chrome profile
w = webdriver.Chrome(executable_path="C:\\Users\\chromedriver.exe", chrome_options=options)
```
To find the path to your Chrome profile data, type `chrome://version/` into the address bar. For example, mine is displayed as `C:\Users\pc\AppData\Local\Google\Chrome\User Data\Default`; to use it in the script I had to exclude `\Default`, so we end up with only `C:\Users\pc\AppData\Local\Google\Chrome\User Data`.
Also if you want to have separate profile just for selenium: replace the path with any other path and if it doesn't exist on start up chrome will create new profile and directory for it. |
Python3 error: initial_value must be str or None | 31,064,981 | 9 | 2015-06-26T04:33:15Z | 31,067,445 | 13 | 2015-06-26T07:28:52Z | [
"python",
"python-3.x",
"urllib2",
"urllib"
] | While porting code from `python2` to `3`, I get this error when reading from a URL
> TypeError: initial\_value must be str or None, not bytes.
```
import urllib
import json
import gzip
from urllib.parse import urlencode
from urllib.request import Request
service_url = 'https://babelfy.io/v1/disambiguate'
text = 'BabelNet is both a multilingual encyclopedic dictionary and a semantic network'
lang = 'EN'
Key = 'KEY'
params = {
'text' : text,
'key' : Key,
'lang' :'EN'
}
url = service_url + '?' + urlencode(params)
request = Request(url)
request.add_header('Accept-encoding', 'gzip')
response = urllib.request.urlopen(request)
if response.info().get('Content-Encoding') == 'gzip':
buf = StringIO(response.read())
f = gzip.GzipFile(fileobj=buf)
data = json.loads(f.read())
```
The exception is thrown at this line
```
buf = StringIO(response.read())
```
If I use python2, it works fine. | `response.read()` returns an instance of `bytes` while [`StringIO`](https://docs.python.org/3/library/io.html#io.StringIO) is an in-memory stream for text only. Use [`BytesIO`](https://docs.python.org/3/library/io.html#io.BytesIO) instead.
From [What's new in Python 3.0 - Text Vs. Data Instead Of Unicode Vs. 8-bit](https://docs.python.org/3.0/whatsnew/3.0.html#text-vs-data-instead-of-unicode-vs-8-bit)
> The `StringIO` and `cStringIO` modules are gone. Instead, import the `io` module and use `io.StringIO` or `io.BytesIO` for text and data respectively. |
Simple line plots using seaborn | 31,069,191 | 12 | 2015-06-26T09:08:51Z | 31,072,485 | 22 | 2015-06-26T11:50:49Z | [
"python",
"matplotlib",
"plot",
"seaborn",
"roc"
] | I'm trying to plot a ROC curve using seaborn (python).
With matplotlib I simply use the function `plot`:
```
plt.plot(one_minus_specificity, sensitivity, 'bs--')
```
where `one_minus_specificity` and `sensitivity` are two lists of paired values.
Is there a simple counterpart of the plot function in seaborn? I had a look at the gallery but I didn't find any straightforward method. | Since seaborn also uses matplotlib to do its plotting, you can easily combine the two. If you only want to adopt the styling of seaborn, the [`set_style`](http://stanford.edu/~mwaskom/software/seaborn/tutorial/aesthetics.html#styling-figures-with-axes-style-and-set-style) function should get you started:
```
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
sns.set_style("darkgrid")
plt.plot(np.cumsum(np.random.randn(1000,1)))
plt.show()
```
Result:
 |
Django Get Latest Entry from Database | 31,070,368 | 3 | 2015-06-26T10:03:41Z | 31,070,622 | 7 | 2015-06-26T10:15:43Z | [
"python",
"django",
"django-models"
] | I've got 2 questions, but they are related to the same topic.
I know how to retrieve data from a `for loop` using template tags
```
{% for status in status %}
<tr>
<td>{{ status.status}}</td>
</tr>
{% endfor %}
```
However, when I want to retrieve a single object I get an error, even when I use:
```
po = Status.objects.latest('id')
```
and remove the for loop.
I get:
```
'Status' object is not iterable
```
My questions are:
1. How can I get the latest entry from the database for a given model?
2. How can I setup my templates tags to allow for just a single record? | You have two different questions here:
1. How do I retrieve the latest object from the database.
You can do this using the [`latest()`](https://docs.djangoproject.com/en/dev/ref/models/querysets/#latest) queryset operator. By reading the docs you will note that this operator works on date fields, not integers.
```
Status.objects.latest('date_added') # or date_updated
```
If you want to do this off the ID you will need to order by ID and select the first result. (this will only work if you are using incrementing primary keys, it will not work with UUID's or randomly generated hashes).
```
Status.objects.order_by('id')[0]
```
Side note: I would personally use the `date_added / date_updated` way of doing this.
2. Iterating over a single object
A single object cannot be iterated over. For this you will need to use a different template. Or, you will need to add the single object into a list.
```
# note the [] around the query
result = [Status.object.latest('date_added')]
```
Personally I have different views for listing single / multiple results: a `ListView` for many result objects and a `DetailView` for single objects. |
Find all local Maxima and Minima when x and y values are given as numpy arrays | 31,070,563 | 5 | 2015-06-26T10:12:49Z | 31,073,798 | 13 | 2015-06-26T12:57:37Z | [
"python",
"numpy",
"derivative"
] | I have two arrays `x` and `y` as:
```
x = np.array([6, 3, 5, 2, 1, 4, 9, 7, 8])
y = np.array([2, 1, 3, 5, 3, 9, 8, 10, 7])
```
I am finding the indices of local minima and maxima as follows:
```
sortId = np.argsort(x)
x = x[sortId]
y = y[sortId]
minm = np.array([])
maxm = np.array([])
while i < y.size-1:
while(y[i+1] >= y[i]):
i = i + 1
maxm = np.insert(maxm, 0, i)
i++
while(y[i+1] <= y[i]):
i = i + 1
minm = np.insert(minm, 0, i)
i++
```
What is the problem in this code?
The answer should be index of `minima = [2, 5, 7]`
and that of `maxima = [1, 3, 6]`. | You do not need this `while` loop at all. The code below will give you the output you want; it finds all local minima and all local maxima and stores them in `minm` and `maxm`, respectively. Please note: When you apply this to large datasets, make sure to smooth the signals first; otherwise you will end up with tons of extrema.
```
import numpy as np
from scipy.signal import argrelextrema
import matplotlib.pyplot as plt
x = np.array([6, 3, 5, 2, 1, 4, 9, 7, 8])
y = np.array([2, 1, 3, 5, 3, 9, 8, 10, 7])
# sort the data in x and use that to rearrange y
sortId = np.argsort(x)
x = x[sortId]
y = y[sortId]
# this way the x-axis corresponds to the index of x
plt.plot(x-1, y)
plt.show()
maxm = argrelextrema(y, np.greater) # (array([1, 3, 6]),)
minm = argrelextrema(y, np.less) # (array([2, 5, 7]),)
```
This should be far more efficient than the above `while` loop.
The plot looks like this (I shifted the x-values so that they correspond to the returned indices in `minm` and `maxm`):
 |
Python: using comprehensions in for loops | 31,072,039 | 3 | 2015-06-26T11:26:54Z | 31,072,227 | 11 | 2015-06-26T11:37:38Z | [
"python",
"python-2.7"
] | I'm using Python 2.7. I have a list, and I want to use a for loop to iterate over a subset of that list subject to some condition. Here's an illustration of what I'd like to do:
```
l = [1, 2, 3, 4, 5, 6]
for e in l if e % 2 == 0:
print e
```
which seems to me very neat and Pythonic, and is lovely in every way except for the small matter of a syntax error. This alternative works:
```
for e in (e for e in l if e % 2 == 0):
print e
```
but is ugly as sin. Is there a way to add the conditional directly to the for loop construction, without building the generator?
Edit: you can assume that the processing and filtering that I actually want to perform on `e` are more complex than in the example above. The processing especially doesn't belong on one line. | What's wrong with a simple, readable solution:
```
l = [1, 2, 3, 4, 5, 6]
for e in l:
if e % 2 == 0:
print e
```
You can have any number of statements instead of just a simple `print e`, and nobody has to scratch their head trying to figure out what it does.
If you need to use the sub list for something else too (not just iterate over it once), why not construct a new list instead:
```
l = [1, 2, 3, 4, 5, 6]
even_nums = [num for num in l if num % 2 == 0]
```
And now iterate over `even_nums`. One more line, much more readable. |
Passing a list of strings from Python to Rust | 31,074,994 | 9 | 2015-06-26T13:51:14Z | 31,075,375 | 9 | 2015-06-26T14:08:54Z | [
"python",
"rust",
"ctypes",
"ffi"
] | I've been learning Rust for about two weeks now and today, I got into its FFI. I used Python to play with Rust, using ctypes and libc. I passed integers, strings and even learned to pass a list of integers ([thanks to this wonderful answer](http://stackoverflow.com/a/30313295/2313792)).
Then, I tried to pass a list of strings (following the reasoning behind that answer), but I failed, as I couldn't get a lead on it. In Python, I have something like this to pass the array of strings.
```
def testRust():
lib = ctypes.cdll.LoadLibrary(rustLib)
list_to_send = ['blah', 'blah', 'blah', 'blah']
c_array = (ctypes.c_char_p * len(list_to_send))()
lib.get_strings(c_array, len(list_to_send))
```
In Rust, I thought that there should be something (like a `STRING_RECEIVER`) to collect the incoming strings, but I can't find one.
```
#![feature(libc)]
extern crate libc;
use std::slice;
use libc::{size_t, STRING_RECEIVER};
#[no_mangle]
pub extern fn get_strings(array: *const STRING_RECEIVER, length: size_t) {
let values = unsafe { slice::from_raw_parts(array, length as usize) };
println!("{:?}", values);
}
```
Is there any alternative way to achieve this? | There is absolutely no difference with the case of array of numbers. C strings are zero-terminated arrays of bytes, so their representation in Rust will be `*const c_char`, which could then be converted to `&CStr` which then can be used to obtain `&[u8]` and then `&str`.
Python:
```
import ctypes
rustLib = "libtest.dylib"
def testRust():
lib = ctypes.cdll.LoadLibrary(rustLib)
list_to_send = ['blah', 'blah', 'blah', 'blah']
c_array = (ctypes.c_char_p * len(list_to_send))(*list_to_send)
lib.get_strings(c_array, len(list_to_send))
if __name__=="__main__":
testRust()
```
Rust:
```
#![feature(libc)]
extern crate libc;
use std::slice;
use std::ffi::CStr;
use std::str;
use libc::{size_t, c_char};
#[no_mangle]
pub extern fn get_strings(array: *const *const c_char, length: size_t) {
let values = unsafe { slice::from_raw_parts(array, length as usize) };
let strs: Vec<&str> = values.iter()
.map(|&p| unsafe { CStr::from_ptr(p) }) // iterator of &CStr
.map(|cs| cs.to_bytes()) // iterator of &[u8]
.map(|bs| str::from_utf8(bs).unwrap()) // iterator of &str
.collect();
println!("{:?}", strs);
}
```
Running:
```
% rustc --crate-type=dylib test.rs
% python test.py
["blah", "blah", "blah", "blah"]
```
And again, you should be careful with lifetimes and ensure that `Vec<&str>` does not outlive the original value on the Python side. |
How does Flask-SQLAlchemy create_all discover the models to create? | 31,082,692 | 5 | 2015-06-26T21:50:18Z | 31,091,883 | 7 | 2015-06-27T18:09:40Z | [
"python",
"flask",
"sqlalchemy",
"flask-sqlalchemy"
] | Flask-SQLAlchemy's `db.create_all()` method creates each table corresponding to my defined models. I never instantiate or register instances of the models. They're just class definitions that inherit from `db.Model`. How does it know which models I have defined? | Flask-SQLAlchemy does nothing special, it's all a standard part of SQLAlchemy.
Calling [`db.create_all`](https://github.com/mitsuhiko/flask-sqlalchemy/blob/1d8e9873bed9e8b75a9e7b0903870fe832b92628/flask_sqlalchemy/__init__.py#L967) eventually calls [`db.Model.metadata.create_all`](http://docs.sqlalchemy.org/en/rel_1_0/core/metadata.html#sqlalchemy.schema.MetaData.create_all). Tables are [associated with a `MetaData` instance as they are defined](http://docs.sqlalchemy.org/en/rel_1_0/core/metadata.html#sqlalchemy.schema.Table.params.metadata). The exact mechanism is very circuitous within SQLAlchemy, as there is a lot of behind the scenes bookkeeping going on, so I've greatly simplified the explanation.
`db.Model` is a [declarative base class](http://docs.sqlalchemy.org/en/rel_1_0/orm/extensions/declarative/api.html#sqlalchemy.ext.declarative.declarative_base), which has some special metaclass behavior. When it is defined, it creates a `MetaData` instance internally to store the tables it generates for the models. When you subclass `db.Model`, its metaclass behavior records the class in `db.Model._decl_class_registry` as well as the table in `db.Model.metadata`.
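As an illustration, here is a toy sketch (hypothetical code, not SQLAlchemy's actual implementation) of how a metaclass can record every subclass at class-definition time, which is the same trigger the declarative base relies on:

```python
class RegisteringMeta(type):
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        if bases:  # skip the base class itself
            cls.registry[name] = cls

class Model(metaclass=RegisteringMeta):
    registry = {}

class User(Model):  # merely *defining* the class registers it
    pass

class Post(Model):
    pass

print(sorted(Model.registry))  # ['Post', 'User']
```

SQLAlchemy's real machinery does far more bookkeeping (table construction, mapper configuration), but the registration trigger is the same: the class definition itself.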
---
Classes are only defined when the modules containing them are imported. If you have a module `my_models` written somewhere, but it is never imported, its code never executes so the models are never registered.
This may be where some confusion about how SQLAlchemy detects the models comes from. No modules are "scanned" for subclasses, `db.Model.__subclasses__` is not used, but importing the modules somewhere *is* required for the code to execute.
1. Module containing models is imported and executed.
2. Model class definition is executed, subclasses `db.Model`
3. Model's table is registered with `db.Model.metadata` |
Installing pzmq with Cygwin | 31,082,901 | 4 | 2015-06-26T22:12:11Z | 32,684,916 | 7 | 2015-09-20T22:22:26Z | [
"python",
"windows",
"gcc",
"cygwin",
"ipython-notebook"
] | For two days I have been struggling to install pyzmq and I am really not sure what the issue is.
The error message I receive after:
```
pip install pyzmq
```
is:
```
error: command 'gcc' failed with exit status 1
```
I have gcc installed.
```
which gcc
/usr/bin/gcc
```
Python is installed at the same location. I am really struggling to find a solution.
Edit: Adding to the output from the error, here is the output that describes the error further:
```
bundled/zeromq/src/signaler.cpp:62:25: fatal error: sys/eventfd.h: No such file or directory
#include <sys/eventfd.h>
^
compilation terminated.
error: command 'gcc' failed with exit status 1
----------------------------------------
Command "/usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip- build-INbMj2/pyzmq/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'),
__file__, 'exec'))" install --record /tmp/pip-n8hQ_h-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-INbMj2/pyzmq
```
Edit Two: Following installation instructions from <https://github.com/zeromq/pyzmq/issues/391>
```
pip install pyzmq --install-option="fetch_libzmq"
```
Yields:
```
#include <sys/eventfd.h>
^
compilation terminated.
error: command 'gcc' failed with exit status 1
```
Next:
```
pip install --no-use-wheel pyzmq --global-option='fetch_libzmq' --install-option='--zmq=bundled'
```
Yields:
```
#include <sys/eventfd.h>
^
compilation terminated.
error: command 'gcc' failed with exit status 1
``` | Installing IPython in Cygwin with pip was painful but not impossible. This comment by @ahmadia on the zeromq GitHub project gives instructions for installing pyzmq: <https://github.com/zeromq/pyzmq/issues/113#issuecomment-25192831>
The comment says it's for 64-bit Cygwin but the instructions worked fine for me on 32-bit. I'll summarize the steps assuming install to /usr/local. First download and extract the tarballs for zeromq and pyzmq. Then:
```
# in zeromq directory
export PKG_CONFIG_PATH=/usr/lib/pkgconfig
./configure --without-libsodium
make
gcc -shared -o cygzmq.dll -Wl,--out-implib=libzmq.dll.a -Wl,--export-all-symbols -Wl,--enable-auto-import -Wl,--whole-archive src/.libs/libzmq.a -Wl,--no-whole-archive -lstdc++
install include/zmq.h /usr/local/include
install include/zmq_utils.h /usr/local/include
install cygzmq.dll /usr/local/bin
install libzmq.dll.a /usr/local/lib
# in pyzmq directory
python setup.py build_ext --zmq=/usr/local --inplace
python setup.py install --zmq=/usr/local --prefix=/usr/local
# finally!
pip install ipython[all]
```
After that, `pip install ipython[all]` just works, notebook included. |
How to stop memory leaks when using `as_ptr()`? | 31,083,223 | 6 | 2015-06-26T22:47:30Z | 31,083,443 | 7 | 2015-06-26T23:10:23Z | [
"python",
"memory-leaks",
"rust",
"ctypes",
"ffi"
] | Since it's my first time learning systems programming, I'm having a hard time wrapping my head around the rules. Now, I got confused about memory leaks. Let's consider an example. Say, Rust is throwing a pointer (to a string) which Python is gonna catch.
In Rust, (I'm just sending the pointer of the `CString`)
```
use std::ffi::CString;
pub extern fn do_something() -> *const c_char {
CString::new(some_string).unwrap().as_ptr()
}
```
In Python, (I'm dereferencing the pointer)
```
def call_rust():
lib = ctypes.cdll.LoadLibrary(rustLib)
lib.do_something.restype = ctypes.c_void_p
c_pointer = lib.do_something()
some_string = ctypes.c_char_p(c_pointer).value
```
Now, my question is about freeing the memory. I thought it should be freed in Python, but then ownership pops in. Because, [`as_ptr`](https://doc.rust-lang.org/std/ffi/struct.CString.html#method.as_ptr) seems to take an immutable reference. So, I got confused about whether I should free the memory in Rust or Python *(or both?)*. If it's gonna be Rust, then how should I go about freeing it when the control flow has landed back into Python? | Your Rust function `do_something` constructs a temporary `CString`, takes a pointer into it, and then *drops the `CString`*. The `*const c_char` is invalid from the instant you return it. If you're on nightly, you probably want `CString#into_ptr` instead of `CString#as_ptr`, as the former consumes the `CString` without deallocating the memory. On stable, you can `mem::forget` the `CString`. Then you can worry about who is supposed to free it.
Freeing from Python will be tricky or impossible, since Rust may use a different allocator. The best approach would be to expose a Rust function that takes a `c_char` pointer, constructs a `CString` for that pointer (rather than copying the data into a new allocation), and drops it. Unfortunately the middle part (creating the `CString`) seems impossible on stable for now: `CString::from_ptr` is unstable.
A workaround would be to pass (a pointer to) the *entire `CString`* to Python and provide an accessor function to get the char pointer from it. You simply need to box the `CString` and transmute the box to a raw pointer. Then you can have another function that transmutes the pointer back to a box and lets it drop. |
ImportError: No module named concurrent.futures.process | 31,086,530 | 12 | 2015-06-27T08:05:06Z | 32,397,747 | 27 | 2015-09-04T12:11:01Z | [
"python",
"path"
] | I have followed the procedure given in [How to use valgrind with python?](http://stackoverflow.com/questions/20112989/how-to-use-valgrind-with-python) for checking memory leaks in my python code.
I have my python source under the path
```
/root/Test/ACD/atech
```
I have given the above path in `PYTHONPATH`. Everything works fine if I run the code with the default python binary, located under `/usr/bin/`.
I need to run the code with the python binary I have built manually, which is located under
```
/home/abcd/workspace/pyhon/bin/python
```
Then I am getting the following error
```
from concurrent.futures.process import ProcessPoolExecutor
ImportError: No module named concurrent.futures.process
```
How can I solve this? | If you're using Python 2.7 you must install this module:
```
pip install futures
```
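Once installed, a quick sanity check (my addition, not from the original answer) that the module imports and works; `ThreadPoolExecutor` is used here for brevity, and `ProcessPoolExecutor` exposes the same interface:

```python
from concurrent.futures import ThreadPoolExecutor

# map a builtin over some inputs via the pool
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(abs, [-3, -2, -1]))
print(results)  # [3, 2, 1]
```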
The futures feature was never included in the Python 2.x core. However, it has been present in Python 3.x since Python 3.2. |
TypeError: 'list' object is not callable in python | 31,087,111 | 4 | 2015-06-27T09:16:56Z | 31,087,151 | 16 | 2015-06-27T09:22:10Z | [
"python",
"list"
] | I am a novice to Python following a tutorial. There is an example of `list` in the tutorial:
```
example = list('easyhoss')
```
Now, in the tutorial, `example = ['e','a',...,'s']`. But in my case I am getting the following error:
```
>>> example = list('easyhoss')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'list' object is not callable
```
Please tell me where I am wrong. I searched SO and found [this](http://stackoverflow.com/questions/5735841/python-typeerror-list-object-is-not-callable) but it is different. | Seems like you've shadowed the built-in name `list` by an instance name somewhere in your code.
```
>>> example = list('easyhoss')
>>> list = list('abc')
>>> example = list('easyhoss')
Traceback (most recent call last):
File "<string>", line 1, in <module>
TypeError: 'list' object is not callable
```
I believe this is fairly obvious. Python stores function and object names in dictionaries (namespaces are organized as dictionaries), hence you can rebind pretty much any name in any scope, and it won't show up as an error.
You'd better use some IDE like PyCharm (there is a free edition) that highlights name shadowing.
**EDIT**. Thanks to your additional questions I think the whole thing about `built-ins` and scoping is worth clarification. As you might know Python emphasizes that "special cases aren't special enough to break the rules". And there are a couple of rules behind the problem you've faced.
1. *Namespaces*. Python supports nested namespaces. Theoretically you can endlessly nest namespaces. Internally namespaces are organized as dictionaries of names and references to corresponding objects. Any module you create gets its own "global" namespace. In fact it's just a local namespace with respect to that particular module.
2. *Scoping*. When you call a name Python looks in the local namespace (relatively to the call) and if it fails to find the name it repeats the attempt in a higher-level namespaces. `built-in` functions and classes reside in a special high-order namespace `__builtins__`. If you declare `list` in your module's global namespace, the interpreter will never search for that name in the higher-level namespace that is `__builtins__`. Similarly, suppose you create a variable `var` inside a function in your module, and another variable `var` in the module. Then if you call `var` inside the function Python will never give you the global `var`, because there is a `var` in the local namespace - it doesn't need to search in the higher-level namespace.
Here is a simple illustration.
```
>>> example = list("abc") # Works fine
# Creating name "list" in the global namespace of the module
>>> list = list("abc")
>>> example = list("abc")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'list' object is not callable
# Python looks for "list" and finds it in the global namespace.
# But it's not the proper "list".
# Let's remove "list" from the global namespace
>>> del(list)
# Since there is no "list" in the global namespace of the module,
# Python goes to a higher-level namespace to find the name.
>>> example = list("abc") # It works.
```
So, as you see there is nothing special about `built-ins`. And your case is a mere example of universal rules.
**P.S.**
When you start an interactive Python session you create a temporary module. |
Make a pythonic one line statement like "return True if flag"? | 31,091,184 | 2 | 2015-06-27T16:54:47Z | 31,091,215 | 8 | 2015-06-27T16:58:24Z | [
"python",
"syntax"
] | Is there a way to write a one line python statement for
```
if flag:
return True
```
Note that this can be semantically different from
```
return flag
```
In my case, None is expected to be returned otherwise.
I have tried with "return True if flag", which has a syntax error detected by my emacs. | `return True if flag` doesn't work because you need to supply an explicit `else`. You could use:
```
return True if flag else None
```
to replicate the behaviour of your original if statement. |
Do Python 2.7 views, for/in, and modification work well together? | 31,092,518 | 4 | 2015-06-27T19:07:59Z | 31,092,521 | 7 | 2015-06-27T19:08:28Z | [
"python",
"python-2.7",
"dictionary",
"concurrentmodification",
"for-in-loop"
] | The Python docs give warnings about trying to modify a dict while iterating over it. Does this apply to views?
I understand that views are "live" in the sense that if you change the underlying dict, the view automatically reflects the change. I'm also aware that a dict's natural ordering can change if elements are added or removed. How does this work in conjunction with for/in? Can you safely modify the dict without messing up the loop?
```
d = dict()
# assume populated dict
for k in d.viewkeys():
# possibly add or del element of d
```
Does the for/in loop iterate over all the new elements as well? Does it miss elements (because of order change)? | Yes, this applies to dictionary views over either keys or items, as they provide a *live* view of the dictionary contents. You *cannot* add keys to or remove keys from the dictionary while iterating over a dictionary view, because, as you say, this alters the dictionary order.
Demo to show that this is indeed the case:
```
>>> d = {'foo': 'bar'}
>>> for key in d.viewkeys():
... d['spam'] = 'eggs'
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: dictionary changed size during iteration
>>> d = {'foo': 'bar', 'spam': 'eggs'}
>>> for key in d.viewkeys():
... del d['spam']
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: dictionary changed size during iteration
```
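If you *do* need to add or remove keys while looping, iterate over a snapshot of the keys instead of the live view; a minimal sketch (works on Python 2 and 3):

```python
d = {'foo': 'bar', 'spam': 'eggs'}
for key in list(d):   # list() copies the keys up front
    if key == 'spam':
        del d[key]    # safe: we are no longer iterating over a live view
print(d)  # {'foo': 'bar'}
```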
You *can* alter values, even when iterating over a values view, as the size of the dictionary won't then change, and the keys remain in the same order. |
Why do these print() calls appear to execute in the wrong order? | 31,093,617 | 3 | 2015-06-27T21:18:23Z | 31,093,631 | 7 | 2015-06-27T21:19:59Z | [
"python",
"python-3.x"
] | weird.py:
```
import sys
def f ():
print('f', end = '')
g()
def g ():
1 / 0
try:
f()
except:
print('toplevel', file = sys.stderr)
```
Python session:
```
Python 3.4.2 (v3.4.2:ab2c023a9432, Oct 6 2014, 22:16:31) [MSC v.1600 64 bit (AM
D64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import weird
toplevel
f>>>
```
Why does "toplevel" print before "f"?
This doesn't happen if the `end = ''` or the `file = sys.stderr` are removed. | Because stdout and stderr are *line buffered*. They buffer characters and only flush when you have a complete line.
By setting `end=''` you ensure there is no complete line and the buffer isn't flushed until *later* when the Python interactive interpreter outputs `>>>` and flushes the buffer explicitly.
If you remove `file=sys.stderr` you output to `sys.stdout` again, and you printed `toplevel\n` as `print()` adds a newline, thus flushing the `sys.stdout` buffer.
You can explicitly force a flush by setting the `flush=True` argument to the `print()` function (Python 3.3 and up) or by calling `sys.stdout.flush()`. |
Vectorize iterative addition in NumPy arrays | 31,093,989 | 8 | 2015-06-27T22:05:18Z | 31,094,448 | 8 | 2015-06-27T23:07:44Z | [
"python",
"loops",
"numpy",
"optimization",
"vectorization"
] | For each element in a randomized array of 2D indices (with potential duplicates), I want to apply `+= 1` to the corresponding cell in a 2D zero array. However, I don't know how to optimize the computation. Using the standard for loop, as shown here,
```
def interadd():
U = 100
input = np.random.random(size=(5000,2)) * U
idx = np.floor(input).astype(np.int)
grids = np.zeros((U,U))
for i in range(len(input)):
grids[idx[i,0],idx[i,1]] += 1
return grids
```
The runtime can be quite significant:
```
>> timeit(interadd, number=5000)
43.69953393936157
```
Is there a way to vectorize this iterative process? | You could speed it up a little by using [`np.add.at`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.at.html), which correctly handles the case of duplicate indices:
```
def interadd(U, idx):
grids = np.zeros((U,U))
for i in range(len(idx)):
grids[idx[i,0],idx[i,1]] += 1
return grids
def interadd2(U, idx):
grids = np.zeros((U,U))
np.add.at(grids, idx.T.tolist(), 1)
return grids
def interadd3(U, idx):
# YXD suggestion
grids = np.zeros((U,U))
np.add.at(grids, (idx[:,0], idx[:,1]), 1)
return grids
```
which gives
```
>>> U = 100
>>> idx = np.floor(np.random.random(size=(5000,2))*U).astype(np.int)
>>> (interadd(U, idx) == interadd2(U, idx)).all()
True
>>> %timeit interadd(U, idx)
100 loops, best of 3: 8.48 ms per loop
>>> %timeit interadd2(U, idx)
100 loops, best of 3: 2.62 ms per loop
```
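For pure counting like this, another option (my addition, not part of the original answer) is to flatten the 2D indices and use `np.bincount`, which is often faster still:

```python
import numpy as np

U = 100
idx = np.floor(np.random.random(size=(5000, 2)) * U).astype(int)

# Flatten each (row, col) pair to a single index, count occurrences,
# then reshape the counts back into the U x U grid.
flat = idx[:, 0] * U + idx[:, 1]
grids = np.bincount(flat, minlength=U * U).reshape(U, U)
```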
---
And YXD's suggestion:
```
>>> (interadd(U, idx) == interadd3(U, idx)).all()
True
>>> %timeit interadd3(U, idx)
1000 loops, best of 3: 1.09 ms per loop
``` |
Numpy select non-zero rows | 31,097,043 | 2 | 2015-06-28T07:03:08Z | 31,097,051 | 7 | 2015-06-28T07:04:01Z | [
"python",
"numpy"
] | I want to select only the rows which do not contain any 0 element.
```
data = np.array([[1,2,3,4,5],
[6,7,0,9,10],
[11,12,13,14,15],
[16,17,18,19,0]])
```
The result would be:
```
array([[1,2,3,4,5],
[11,12,13,14,15]])
``` | Use [`numpy.all`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.all.html):
```
>>> data[np.all(data, axis=1)]
array([[ 1, 2, 3, 4, 5],
[11, 12, 13, 14, 15]])
``` |
Python lightweight case insensitive if "x" in variable | 31,099,178 | 2 | 2015-06-28T11:18:07Z | 31,099,222 | 7 | 2015-06-28T11:23:46Z | [
"python"
] | I am looking for a method analogous to `if "x" in variable:` that is case insensitive and light-weight to implement.
I have tried some of the implementations here but they don't really fit well for my usage: [Case insensitive 'in' - Python](http://stackoverflow.com/questions/3627784/case-insensitive-in-python)
I would like to make the code below case insensitive:
```
description = "SHORTEST"
if "Short" in description:
direction = "Short"
```
Preferably without having to convert the string to e.g. lowercase. Or if I have to convert it, I would like to keep `description` in its original state, even if it is mixed uppercase and lowercase.
For my usage, it is good that this method is non-discriminating by identifying `"Short"` in `"Shorter"` or `"Shortest"`. | Just do
```
if "Short".lower() in description.lower():
...
```
The `.lower()` method does not change the original object; it returns a new, lowercased string. If you are worried about performance, don't be, unless you are doing it on huge strings, or thousands of times per second.
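One caveat worth adding (Python 3 only, not part of the original answer): for non-ASCII text, `str.casefold()` is a more thorough caseless comparison than `lower()`:

```python
# casefold() is more aggressive than lower(): e.g. the German
# sharp s 'ß' casefolds to 'ss', but lower() leaves it as 'ß'.
needle, haystack = "straße", "STRASSE"
print(needle.lower() in haystack.lower())        # False
print(needle.casefold() in haystack.casefold())  # True
```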
If you are going to do that more than once, or just want more clarity, create a function, like this:
```
def case_insensitive_in(phrase, string):
return phrase.lower() in string.lower()
``` |
Test if all elements of a python list are False | 31,099,561 | 9 | 2015-06-28T12:01:57Z | 31,099,577 | 19 | 2015-06-28T12:03:44Z | [
"python",
"list",
"numpy"
] | How can I check that all elements of the list are `False`?
The given list is:
```
data = [False, False, False]
``` | Using [`any`](https://docs.python.org/2/library/functions.html#any):
```
>>> data = [False, False, False]
>>> not any(data)
True
```
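A caveat worth noting (my addition, not from the original answer): `any` tests truthiness, not identity with `False`, so other falsy values pass the same check; if you need the elements to be literally `False`, test identity instead:

```python
data = [False, 0, '', None]
print(not any(data))                  # True: every element is falsy
print(all(x is False for x in data))  # False: only the first element is False
```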
`any` will return True if there's any truth value in the iterable. |
Efficiently build a graph of words with given Hamming distance | 31,100,623 | 18 | 2015-06-28T13:57:19Z | 31,101,320 | 19 | 2015-06-28T15:12:12Z | [
"python",
"algorithm",
"graph-algorithm",
"hamming-distance"
] | I want to build a graph from a list of words with [Hamming distance](https://en.wikipedia.org/wiki/Hamming_distance) of (say) 1, or to put it differently, two words are connected if they differ by only one letter (*lo**l*** -> *lo**t***).
so that given
`words = [ lol, lot, bot ]`
the graph would be
```
{
'lol' : [ 'lot' ],
'lot' : [ 'lol', 'bot' ],
'bot' : [ 'lot' ]
}
```
The easy way is to compare every word in the list with every other and count the different chars; sadly, this is a `O(N^2)` algorithm.
Which algo/ds/strategy can I use to achieve better performance?
Also, let's assume only Latin chars, and all the words have the same length. | Assuming you store your dictionary in a `set()`, so that [lookup is **O(1)** on average (worst case **O(n)**)](https://wiki.python.org/moin/TimeComplexity).
You can generate all the valid words at hamming distance 1 from a word:
```
>>> def neighbours(word):
... for j in range(len(word)):
... for d in string.ascii_lowercase:
... word1 = ''.join(d if i==j else c for i,c in enumerate(word))
... if word1 != word and word1 in words: yield word1
...
>>> {word: list(neighbours(word)) for word in words}
{'bot': ['lot'], 'lol': ['lot'], 'lot': ['bot', 'lol']}
```
If **M** is the length of a word, **L** the length of the alphabet (i.e. 26), the **worst case** time complexity of finding neighbouring words with this approach is **O(L\*M\*N)**.
The time complexity of the "easy way" approach is **O(N^2)**.
When is this approach better? When `L*M < N`, i.e. if considering only lowercase letters, when `M < N/26`. (I considered only the worst case here.)
Note: [the average length of an English word is 5.1 letters](http://arxiv.org/pdf/1208.6109.pdf). Thus, you should consider this approach if your dictionary size is bigger than 132 words.
Probably it is possible to achieve better performance than this. However this was really simple to implement.
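For instance, one well-known improvement (a sketch I'm adding here, not part of the original answer) is to bucket words by wildcard patterns: replace each position with a placeholder, and two distinct words share a bucket iff they are at Hamming distance exactly 1. Building the buckets takes roughly O(N*M) work with no dependence on the alphabet size:

```python
from collections import defaultdict

def hamming_graph(words):
    # Bucket words by "wildcard" patterns, e.g. 'lot' -> '*ot', 'l*t', 'lo*'.
    # Two distinct words share a bucket iff they differ in exactly one position.
    buckets = defaultdict(set)
    for w in words:
        for j in range(len(w)):
            buckets[w[:j] + '*' + w[j + 1:]].add(w)
    graph = {w: set() for w in words}
    for group in buckets.values():
        for w in group:
            graph[w] |= group - {w}
    return {w: sorted(ns) for w, ns in graph.items()}

print(hamming_graph(['lol', 'lot', 'bot']))
# {'lol': ['lot'], 'lot': ['bot', 'lol'], 'bot': ['lot']}
```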
## Experimental benchmark:
The "easy way" algorithm (**A1**):
```
from itertools import zip_longest
def hammingdist(w1,w2): return sum(1 if c1!=c2 else 0 for c1,c2 in zip_longest(w1,w2))
def graph1(words): return {word: [n for n in words if hammingdist(word,n) == 1] for word in words}
```
This algorithm (**A2**):
```
def graph2(words): return {word: list(neighbours(word)) for word in words}
```
Benchmarking code:
```
import random, string
from timeit import Timer

for dict_size in range(100,6000,100):
    words = set([''.join(random.choice(string.ascii_lowercase) for x in range(3)) for _ in range(dict_size)])
    t1 = Timer(lambda: graph1(words)).timeit(10)
    t2 = Timer(lambda: graph2(words)).timeit(10)
    print('%d,%f,%f' % (dict_size,t1,t2))
```
Output:
```
100,0.119276,0.136940
200,0.459325,0.233766
300,0.958735,0.325848
400,1.706914,0.446965
500,2.744136,0.545569
600,3.748029,0.682245
700,5.443656,0.773449
800,6.773326,0.874296
900,8.535195,0.996929
1000,10.445875,1.126241
1100,12.510936,1.179570
...
```

I ran another benchmark with smaller steps of N to see it closer:
```
10,0.002243,0.026343
20,0.010982,0.070572
30,0.023949,0.073169
40,0.035697,0.090908
50,0.057658,0.114725
60,0.079863,0.135462
70,0.107428,0.159410
80,0.142211,0.176512
90,0.182526,0.210243
100,0.217721,0.218544
110,0.268710,0.256711
120,0.334201,0.268040
130,0.383052,0.291999
140,0.427078,0.312975
150,0.501833,0.338531
160,0.637434,0.355136
170,0.635296,0.369626
180,0.698631,0.400146
190,0.904568,0.444710
200,1.024610,0.486549
210,1.008412,0.459280
220,1.056356,0.501408
...
```

You see the tradeoff is very low (100 for dictionaries of words with length=3). For small dictionaries the O(N^2) algorithm performs *slightly* better, but it is easily beaten by the O(LMN) algorithm as N grows.
For dictionaries with longer words, the O(LMN) algorithm remains linear in N, it just has a different slope, so the tradeoff moves slightly to the right (130 for length=5). |
Some doubts modelling some features for the libsvm/scikit-learn library in python | 31,104,106 | 8 | 2015-06-28T19:51:30Z | 31,202,538 | 8 | 2015-07-03T08:51:53Z | [
"python",
"dictionary",
"scikit-learn",
"libsvm"
] | I have scraped a lot of ebay titles like this one:
```
Apple iPhone 5 White 16GB Dual-Core
```
and I have manually tagged all of them in this way
```
B M C S NA
```
where B=Brand (Apple) M=Model (iPhone 5) C=Color (White) S=Size (16GB) NA=Not Assigned (Dual-Core)
Now I need to train a SVM classifier using the libsvm library in python to learn the sequence patterns that occur in the ebay titles.
I need to extract new values for those attributes (Brand, Model, Color, Size) by considering the problem as a classification one. In this way I can predict new models.
I want to represent these features to use them as input to the libsvm library. I work in python :D.
> 1. Identity of the current word
I think that I can interpret it in this way
```
0 --> Brand
1 --> Model
2 --> Color
3 --> Size
4 --> NA
```
If I know that the word is a Brand I will set that variable to 1 (true). It is OK to do it in the training set (because I have tagged all the words) but how can I do that for the test set? I don't know what the category of a word is (this is why I'm learning it :D).
> 2. N-gram substring features of current word (N=4,5,6)
No idea. What does it mean?
> 3. Identity of 2 words before the current word.
How can I model this feature?
Considering the legend that I created for the 1st feature, I have 5^2 = 25 combinations:
```
00 10 20 30 40
01 11 21 31 41
02 12 22 32 42
03 13 23 33 43
04 14 24 34 44
```
How can I convert it to a format that the libsvm (or scikit-learn) can understand?
> 4. Membership to the 4 dictionaries of attributes
Again, how can I do it? Having 4 dictionaries (for color, size, model and brand) I think that I must create a bool variable that I will set to true if and only if I have a match of the current word in one of the 4 dictionaries.
> 5. Exclusive membership to dictionary of brand names
I think that, like for the 4th feature, I must use a bool variable. Do you agree?
If this question lacks some info please read my previous question at this address: [Support vector machine in Python using libsvm example of features](http://stackoverflow.com/questions/30991592/support-vector-machine-in-python-using-libsvm-example-of-features)
Last doubt: if I have a multi-token value like iPhone 5, should I tag iPhone as a brand and 5 also as a brand, or is it better to tag {iPhone 5} as a whole as a brand?
In the test dataset iPhone and 5 will be 2 separate words... so what is better to do? | The reason that the solution proposed to you in the previous question had insufficient results (I assume) is that the features were poor for this problem.
If I understand correctly, what you want is the following:
given the sentence -
> Apple iPhone 5 White 16GB Dual-Core
You want to get -
> B M C S NA
The problem you are describing here is equivalent to [part of speech tagging](https://en.wikipedia.org/wiki/Part-of-speech_tagging) (POS) in Natural Language Processing.
Consider the following sentence in English:
> We saw the yellow dog
The task of POS is giving the appropriate tag for each word. In this case:
> We(PRP) saw(VBD) the(DT) yellow(JJ) dog(NN)
Don't invest time in understanding the English tags here; I give them only to show you that your problem and POS are equivalent.
Before I explain how to solve it using SVM, I want to make you aware of other approaches: consider the sentence `Apple iPhone 5 White 16GB Dual-Core` as test data. The tag you set for the word `Apple` must be given as input to the tagger when you are tagging the word `iPhone`. However, after you tag a word, you will not change it. Hence, models that do sequence tagging usually achieve better results. The easiest example is Hidden Markov Models (HMM). [Here](https://people.cs.umass.edu/~mccallum/courses/inlp2004/lect10-tagginghmm1.pdf) is a short intro to HMM in POS.
Now we model this problem as a classification problem. Let's define what a window is -
```
W-2,W-1,W0,W1,W2
```
Here, we have a window of size 2. When classifying the word `W0`, we will need the features of all the words in the window (concatenated). Please note that for the first word of the sentence we will use:
```
START-2,START-1,W0,W1,W2
```
in order to model the fact that this is the first word. For the second word we have:
```
START-1,W-1,W0,W1,W2
```
And similarly for the words at the end of the sentence. The tags `START-2`,`START-1`,`STOP1`,`STOP2` must be added to the model too.
Now, let's describe the features used for tagging W0:
```
Features(W-2),Features(W-1),Features(W0),Features(W1),Features(W2)
```
The features of a token should be the word itself, and the tag (given to the previous word). We shall use binary features.
# **Example - how to build the feature representation:**
### **Step 1 - building the word representation for each token**:
Let's take a window size of 1. When classifying a token, we use `W-1,W0,W1`. Say you build a dictionary and give every word in the corpus a number:
```
n['Apple'] = 0
n['iPhone 5'] = 1
n['White'] = 2
n['16GB'] = 3
n['Dual-Core'] = 4
n['START-1'] = 5
n['STOP1'] = 6
```
### **Step 2 - a feature for each tag**:
We create features for the following tags:
```
n['B'] = 7
n['M'] = 8
n['C'] = 9
n['S'] = 10
n['NA'] = 11
n['START-1'] = 12
n['STOP1'] = 13
```
Let's build a feature vector for `START-1,Apple,iPhone 5`: the first token is a word with a known tag (`START-1` will always have the tag `START-1`). So the features for this token are:
```
(0,0,0,0,0,1,0,0,0,0,0,0,1,0)
```
(The features that are 1: having the word `START-1`, and tag `START-1`)
For the token `Apple`:
```
(1,0,0,0,0,0,0)
```
Note that we use the already-calculated tag feature for every word before W0 (since we have already computed it). Similarly, the features of the token `iPhone 5` are:
```
(0,1,0,0,0,0,0)
```
### **Step 3 - concatenate all the features**:
Generally, the features for 1-window will be:
```
word(W-1),tag(W-1),word(W0),word(W1)
```
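As a rough sketch of steps 1-3 (my own helper code, not from any library; I use separate word and tag dictionaries here, which is equivalent to the single 0-13 numbering above):

```python
def token_features(word, tag, n_words, n_tags):
    """One-hot word features, plus one-hot tag features for tokens
    whose tag is already known (i.e. tokens before W0)."""
    vec = [0] * len(n_words)
    vec[n_words[word]] = 1
    if tag is not None:
        tag_vec = [0] * len(n_tags)
        tag_vec[n_tags[tag]] = 1
        vec += tag_vec
    return vec

n_words = {'Apple': 0, 'iPhone 5': 1, 'White': 2, '16GB': 3,
           'Dual-Core': 4, 'START-1': 5, 'STOP1': 6}
n_tags = {'B': 0, 'M': 1, 'C': 2, 'S': 3, 'NA': 4, 'START-1': 5, 'STOP1': 6}

# the window START-1, Apple, iPhone 5 -> concatenate the token vectors
window = (token_features('START-1', 'START-1', n_words, n_tags)
          + token_features('Apple', None, n_words, n_tags)
          + token_features('iPhone 5', None, n_words, n_tags))
```

Concatenating such vectors for every token in the window gives the final feature representation for classifying W0.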
Regarding your question - I would use one more tag - `number` - so that when you tag the word `5` (since you split the title by space), the feature `W0` will have a 1 on some number feature, and 1 in `W-1`'s `model` tag - in case the previous token was tagged correctly as model.
### To sum up, what you should do:
1. give a number to each word in the data
2. build feature representation for the train data (using the tags you already calculated manually)
3. train a model
4. label the test data
# **Final Note - a Warm Tip For Existing Code:**
You can find a POS tagger implemented in Python [here](https://honnibal.wordpress.com/2013/09/11/a-good-part-of-speechpos-tagger-in-about-200-lines-of-python/). It includes an explanation of the problem and the code, and it also does the feature extraction I just described for you. Additionally, they used a `set` for representing the features of each word, so the code is much simpler to read.
The data this tagger receives should look like this:
```
Apple_B iPhone_M 5_NUMBER White_C 16GB_S Dual-Core_NA
```
The feature extraction is done in this manner (see more at the link above):
```
def get_features(i, word, context, prev):
'''Map tokens-in-contexts into a feature representation, implemented as a
set. If the features change, a new model must be trained.'''
def add(name, *args):
features.add('+'.join((name,) + tuple(args)))
features = set()
add('bias') # This acts sort of like a prior
add('i suffix', word[-3:])
add('i-1 tag', prev)
add('i word', context[i])
add('i-1 word', context[i-1])
add('i+1 word', context[i+1])
return features
```
For the example above:
```
context = ["Apple","iPhone","5","White","16GB","Dual-Core"]
prev = "B"
i = 1
word = "iPhone"
```
Generally, `word` is the str of the current word, `context` is the title split into a list, and `prev` is the tag you received for the previous word.
I used this code in the past; it works fast with great results.
Hope it's clear. Have fun tagging! |
How to Python split by a character yet maintain that character? | 31,107,132 | 5 | 2015-06-29T03:01:31Z | 31,107,171 | 7 | 2015-06-29T03:08:08Z | [
"python",
"regex",
"python-2.7",
"split",
"newline"
] | Google Maps results are often displayed thus:

```
'\n113 W 5th St\nEureka, MO, United States\n(636) 938-9310\n'
```
Another variation:

```
'Clayton Village Shopping Center, 14856 Clayton Rd\nChesterfield, MO, United States\n(636) 227-2844'
```
And another:

```
'Wildwood, MO\nUnited States\n(636) 458-7707'
```
**Notice the variation in the placement of the `\n` characters.**
I'm looking to extract the first X lines as address, and the last line as phone number. A regex such as `(.*\n.*)\n(.*)` would suffice for the first example, but falls short for the other two. The only thing I can rely on is that the phone number will be in the form `(ddd) ddd-dddd`.
I think a regex that will allow for each and every possible variation will be hard to come by. Is it possible to use `split()`, but maintain the character we have split by? So in this example, split by `"("`, to split out the address and phone number, but *retain this character in the phone number*? I could concatenate the `"("` back into `split("(")[1]`, but is there a neater way? | Don't use regex. Just split the string on `'\n'`. The last index is the phone number; the other indexes are the address.
```
import re

# (ddd) ddd-dddd, the only pattern the question says can be relied on
PHONE_RE = re.compile(r'\(\d{3}\) \d{3}-\d{4}')

lines = inputString.split('\n')
phone = lines[-1] if PHONE_RE.match(lines[-1]) else None
address = '\n'.join(lines[:-1]) if phone else inputString
```
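That said, to answer the literal question about keeping the character you split by: `str.partition` keeps the separator, and `re.split` with a capturing group keeps the delimiters in the result (a small sketch using a single-line variant of the question's data):

```python
import re

s = "Wildwood, MO (636) 458-7707"

head, sep, tail = s.partition('(')  # str.partition keeps the separator
phone = sep + tail
print(phone)                        # (636) 458-7707

# re.split with a capturing group also keeps the delimiter in the output:
parts = re.split(r'(\()', s)
print(parts)                        # ['Wildwood, MO ', '(', '636) 458-7707']
```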
Python has a lot of great built-in tools for manipulating strings in a more... human way... than regex allows. |
django-rest-framework accept JSON data? | 31,108,075 | 6 | 2015-06-29T04:59:48Z | 31,121,191 | 12 | 2015-06-29T16:47:45Z | [
"python",
"django",
"django-rest-framework"
] | I have created RESTFul APIs using django-rest-framework. The user endpoint is
```
/api/v1/users
```
I want to create a new user. I send user data in JSON format.
```
{
"username": "Test1",
"email": "[email protected]",
"first_name": "Test1",
"last_name": "Test2",
"password":"12121212"
}
```
I am using the Chrome extension Postman to test the API, but the user data is not being saved. The response is:
```
{
"detail": "Unsupported media type \"text/plain;charset=UTF-8\" in request."
}
```
Attached screenshot
 | You have missed adding the `Content-Type` header in the headers section. Just set the `Content-Type` header to `application/json` and it should work.
See the below image:

Also, you might need to include a CSRF token in the header in case you get the error `{"detail": "CSRF Failed: CSRF token missing or incorrect."}` while making a `POST` request using Postman. In that case, add an `X-CSRFToken` header with the CSRF token as its value. |
Swapping two sublists in a list | 31,111,258 | 16 | 2015-06-29T08:41:04Z | 31,111,801 | 25 | 2015-06-29T09:09:49Z | [
"python",
"list"
] | Given the following list:
```
my_list=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
```
I want to be able to swap the sub-list `my_list[2:4]` with the sub-list `my_list[7:10]` as quickly and as efficiently as possible, to get the new list:
```
new_list=[0, 1, 7, 8, 9, 4, 5, 6, 2, 3, 10, 11, 12]
```
Here's my attempt:
```
def swap(s1, s2, l):
seg1=l[:s1.start]+l[s2]
seg2=l[s1.stop : s2.start]
seg3=l[s1]+l[s2.stop:]
return seg1+seg2+seg3
print swap(slice(2,4), slice(7,10), [0,1,2,3,4,5,6,7,8,9,10,11,12])
```
This does print the desired output, although this way of doing it looks awful to me.
Is there a more easy and elegant way of doing it, that will not create four new lists for every function call? (I plan to call this function a lot)
I don't mind (actually I'd prefer) changing the original list, rather than creating a new instance on every function call. | Slices can be assigned.
Two variables can be swapped with `a, b = b, a`.
Combine the two above::
```
>>> my_list[7:10], my_list[2:4] = my_list[2:4], my_list[7:10]
>>> my_list
[0, 1, 7, 8, 9, 4, 5, 6, 2, 3, 10, 11, 12]
```
---
Beware that - if slices have different sizes - the **order is important**: if you swap in the opposite order, you end up with a different result, because the assignment will first change the initial items (lower indices), and then the higher-index items (but those will have been shifted to different positions by the first assignment).
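To make the order dependence concrete (my own illustration, using the list from the question):

```python
a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
a[7:10], a[2:4] = a[2:4], a[7:10]   # higher-index slice assigned first
b = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
b[2:4], b[7:10] = b[7:10], b[2:4]   # lower-index slice assigned first
print(a)   # [0, 1, 7, 8, 9, 4, 5, 6, 2, 3, 10, 11, 12]
print(b)   # [0, 1, 7, 8, 9, 4, 5, 2, 3, 9, 10, 11, 12]
```

The right-hand side is evaluated once, but the two targets are assigned left to right, so the first assignment shifts the indices that the second one operates on.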
Also, slices must not overlap. |
Find all possible ordered groups in a list | 31,115,816 | 4 | 2015-06-29T12:32:20Z | 31,115,974 | 8 | 2015-06-29T12:40:01Z | [
"python"
] | Given an ordered list of integers:
```
[1,3,7,8,9]
```
How can I find all the sublists that can be created from the original list where order is maintained? Using the example above, I'm looking for a way to programmatically generate these sequences:
```
[[1],[3,7,8,9]]
[[1, 3],[7,8,9]]
[[1, 3, 7],[8,9]]
[[1, 3, 7, 8],[9]]
[[1, 3, 7, 8, 9]]
[[1, 3, 7], [8, 9]]
[[1], [3, 7], [8], [9]]
[[1], [3], [7, 8], [9]]
[[1], [3], [7], [8, 9]]
...
```
I'm basically looking for a way to generate all the permutations of a list where order is maintained. I can generate all the sublists where there are only 2 sublists in total using this code:
```
def partition(arr, idx):
return [arr[:idx], arr[idx:]]
l = [1,3,7,8,9]
for idx in range(1, len(l)):
groups = partition(l, idx)
print(groups)
[[1], [3, 7, 8, 9]]
[[1, 3], [7, 8, 9]]
[[1, 3, 7], [8, 9]]
[[1, 3, 7, 8], [9]]
```
However, this code snippet only splits the original list in two and generates all the possible sublists where there are only two sublists. How can I generate all the possible sublists that can be created from the original list where order is maintained? | How about:
```
import itertools
def subsets(seq):
for mask in itertools.product([False, True], repeat=len(seq)):
yield [item for x, item in zip(mask, seq) if x]
def ordered_groups(seq):
for indices in subsets(range(1, len(seq))):
indices = [0] + indices + [len(seq)]
yield [seq[a:b] for a,b in zip(indices, indices[1:])]
for group in ordered_groups([1,3,7,8,9]):
print group
```
Result:
```
[[1, 3, 7, 8, 9]]
[[1, 3, 7, 8], [9]]
[[1, 3, 7], [8, 9]]
[[1, 3, 7], [8], [9]]
[[1, 3], [7, 8, 9]]
[[1, 3], [7, 8], [9]]
[[1, 3], [7], [8, 9]]
[[1, 3], [7], [8], [9]]
[[1], [3, 7, 8, 9]]
[[1], [3, 7, 8], [9]]
[[1], [3, 7], [8, 9]]
[[1], [3, 7], [8], [9]]
[[1], [3], [7, 8, 9]]
[[1], [3], [7, 8], [9]]
[[1], [3], [7], [8, 9]]
[[1], [3], [7], [8], [9]]
``` |
Load environment variables from a shell script | 31,117,531 | 2 | 2015-06-29T13:54:24Z | 32,206,226 | 7 | 2015-08-25T14:05:19Z | [
"python",
"bash",
"shell",
"environment-variables"
] | I have a file with some environment variables that I want to use in a python script
The following works from the command line:
```
$ source myFile.sh
$ python ./myScript.py
```
and from inside the python script I can access the variables like
```
import os
os.getenv('myvariable')
```
How can I source the shell script, then access the variables, from within the python script? | If you mean propagating the environment back to the parent shell, sorry, you can't. It's a security issue. However, sourcing an environment directly from Python is definitely valid. But it's more or less a manual process.
```
import subprocess as sp
SOURCE = 'your_file_path'
proc = sp.Popen(['bash', '-c', 'source {} && env'.format(SOURCE)], stdout=sp.PIPE)
source_env = {tup[0].strip(): tup[1].strip() for tup in map(lambda s: s.strip().split('=', 1), proc.stdout)}
```
Then you have everything you need in `source_env`.
If you need to write it back to your local environment (which is not recommended, since `source_env` keeps you clean):
```
import os
for k, v in source_env.items():
    os.environ[k] = v
```
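As an aside (my own adaptation of the first snippet above; the function name is mine): on Python 3 the lines read from `proc.stdout` are bytes, so a variant that decodes first might look like:

```python
import subprocess as sp

def source_env(path):
    """Source a shell script in a child bash and return the resulting
    environment as a dict (assumes single-line variable values)."""
    out = sp.check_output(['bash', '-c', 'source {} && env'.format(path)])
    env = {}
    for line in out.decode().splitlines():
        key, _, value = line.partition('=')
        env[key] = value
    return env
```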
---
One more thing to note: since I called `bash` here, the usual shell rules apply too. So if you want your variables to be seen, you will need to export them.
```
export VAR1='see me'
VAR2='but not me'
``` |
Why can't I remove quotes using `strip('\"')`? | 31,118,332 | 4 | 2015-06-29T14:29:33Z | 31,118,392 | 9 | 2015-06-29T14:32:04Z | [
"python",
"python-3.x"
] | I can only remove the left quotes using `str.strip('\"')`:
```
with open(filename, 'r') as fp :
for line in fp.readlines() :
print(line)
line = line.strip('\"')
print(line)
```
Part of results:
```
"Route d'Espagne"
Route d'Espagne"
```
Using `line.replace('\"', '')` gets the right result:
```
"Route d'Espagne"
Route d'Espagne
```
Can anyone explain it? | Your lines do not *end* with quotes. The newline separator is part of the line too and is not removed when reading from a file, so unless you include `\n` in the set of characters to be stripped, the `"` is going to stay.
When diagnosing issues with strings, produce debug output with `print(repr(line))` or even `print(ascii(line))`, to make non-printable or non-ASCII codepoints visible:
```
>>> line = '"Route d\'Espagne"\n'
>>> print(line)
"Route d'Espagne"
>>> print(repr(line))
'"Route d\'Espagne"\n'
```
Add `\n` to the `str.strip()` argument:
```
line = line.strip('"\n')
```
Demo:
```
>>> line.strip('"')
'Route d\'Espagne"\n'
>>> line.strip('"\n')
"Route d'Espagne"
>>> print(line.strip('"\n'))
Route d'Espagne
``` |
Viewing the content of a Spark Dataframe Column | 31,124,131 | 7 | 2015-06-29T19:37:38Z | 31,124,244 | 7 | 2015-06-29T19:44:01Z | [
"python",
"apache-spark",
"dataframe",
"pyspark"
] | I'm using Spark 1.3.1.
I am trying to view the values of a Spark dataframe column in Python. With a Spark dataframe, I can do `df.collect()` to view the contents of the dataframe, but there is no such method for a Spark dataframe column as best as I can see.
For example, the dataframe `df` contains a column named `'zip_code'`. So I can do `df['zip_code']` and it returns a `pyspark.sql.dataframe.Column` type, but I can't find a way to view the values in `df['zip_code']`. | You can access the underlying `RDD` and map over it:
```
df.rdd.map(lambda r: r.zip_code).collect()
```
You can also use `select` if you don't mind results wrapped using `Row` objects:
```
df.select('zip_code').collect()
```
Finally, if you simply want to inspect content then `show` method should be enough:
```
df.select('zip_code').show()
``` |
Does calling random.normal on an array of values add noise? | 31,128,510 | 6 | 2015-06-30T02:19:20Z | 31,128,610 | 8 | 2015-06-30T02:34:40Z | [
"python",
"numpy",
"random"
] | I saw this pattern in someone's code:
```
import numpy as np
# Create array
xx = np.linspace(0.0, 100.0, num=100)
# Add Noise
xx = np.random.normal(xx)
```
and it seems to add some noise to each value of the array, but I can't find any documentation for this. What's happening? What determines the properties (i.e. scaling) of the noise? Is the given value being treated as the mean (i.e. the `loc` parameter) of each sampling from the `normal` distribution?
I'd also be very curious to know why this behavior doesn't seem to be covered in the documentation. | I don't see it documented either, but many numpy functions that take an ndarray will [operate on it element-wise](http://docs.scipy.org/doc/numpy/reference/ufuncs.html#). Anyway, you can easily verify that when passing it an array it calls `numpy.random.normal` for each element of the array, using that element's value as the mean, and returns an array:
```
In [9]: xx = numpy.array([1, 10, 100, 1000])
In [10]: numpy.random.normal(xx)
Out[10]:
array([ 9.45865328e-01, 1.11542264e+01, 9.88601302e+01,
1.00120448e+03])
```
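A quick statistical check (my own addition, not from the docs) confirms that each element is used as the mean of its own draw:

```python
import numpy as np

means = np.array([0.0, 10.0, 100.0])
# draw 100000 samples per mean; normal() broadcasts over the loc array,
# with the scale defaulting to 1.0 for every draw
samples = np.random.normal(np.tile(means, (100000, 1)))
print(samples.mean(axis=0))   # close to [  0.  10. 100.]
print(samples.std(axis=0))    # close to [1. 1. 1.]
```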
It appears that it is using the default value of 1.0 for the scale. You can override this though:
```
In [12]: numpy.random.normal(xx, 10)
Out[12]: array([ 8.92500743, -5.66508088, 97.33440273, 1003.37940455])
In [13]: numpy.random.normal(xx, 100)
Out[13]: array([ -75.13092966, -47.0841671 , 154.12913986, 816.3126146 ])
``` |
Remove particular combinations from itertools.combinations | 31,131,526 | 2 | 2015-06-30T06:53:15Z | 31,131,552 | 7 | 2015-06-30T06:55:26Z | [
"python"
] | Suppose we have a pair of tuples where tuples can be of different length. Let's call them tuples `t1` and `t2`:
```
t1 = ('A', 'B', 'C')
t2 = ('d', 'e')
```
Now I compute all combinations of length 2 from both tuples using itertools:
```
import itertools
tuple(itertools.combinations(t1 + t2, 2))
```
Itertools generator produces all possible combinations, but I need only those which occurs between tuples; the expected output is
```
(('A', 'd'), ('A', 'e'), ('B', 'd'), ('B', 'e'), ('C', 'd'), ('C', 'e'))
```
I wonder what is best approach to remove undesired combination. | You need `itertools.product` :
```
>>> t1 = ('A', 'B', 'C')
>>> t2 = ('d', 'e')
>>> from itertools import product
>>>
>>> list(product(t1,t2))
[('A', 'd'), ('A', 'e'), ('B', 'd'), ('B', 'e'), ('C', 'd'), ('C', 'e')]
```
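Equivalently (my own check, following the question's original approach), you could filter `itertools.combinations` down to the cross-tuple pairs, though `product` is more direct:

```python
from itertools import combinations, product

t1 = ('A', 'B', 'C')
t2 = ('d', 'e')

# keep only pairs with exactly one element from t1 (hence one from t2)
filtered = [pair for pair in combinations(t1 + t2, 2)
            if (pair[0] in t1) != (pair[1] in t1)]
assert filtered == list(product(t1, t2))
print(filtered)   # [('A', 'd'), ('A', 'e'), ('B', 'd'), ('B', 'e'), ('C', 'd'), ('C', 'e')]
```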
If you are dealing with **short tuples** you can simply do this job with a list comprehension :
```
>>> [(i,j) for i in t1 for j in t2]
[('A', 'd'), ('A', 'e'), ('B', 'd'), ('B', 'e'), ('C', 'd'), ('C', 'e')]
``` |
Virtualenv Command Not Found | 31,133,050 | 11 | 2015-06-30T08:17:41Z | 37,242,519 | 13 | 2016-05-15T19:05:44Z | [
"python",
"osx",
"virtualenv"
] | I couldn't get `virtualenv` to work despite various attempts. I installed `virtualenv` on MAC OS X using:
```
pip install virtualenv
```
and have also added the `PATH` into my `.bash_profile`. Every time I try to run the `virtualenv` command, it returns:
```
-bash: virtualenv: command not found
```
Every time I run `pip install virtualenv`, it returns:
```
Requirement already satisfied (use --upgrade to upgrade): virtualenv in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages
```
I understand that in mac, the `virtualenv` should be correctly installed in
```
/usr/local/bin
```
The `virtualenv` is indeed installed in `/usr/local/bin`, but whenever I try to run the `virtualenv` command, the command is not found. I've also tried to run the `virtualenv` command in the directory `/usr/local/bin`, and it gives me the same result:
```
-bash: virtualenv: command not found
```
These are the PATHs I added to my .bash\_profile
```
export PATH=$PATH:/usr/local/bin
export PATH=$PATH:/usr/local/bin/python
export PATH=$PATH:/Library/Framework/Python.framework/Version/2.7/lib/site-packages
```
Any workarounds for this? Why is this the case? | I faced the same issue and this is how I solved it:
1. The issue occurred to me because I installed virtualenv via pip as a regular user (not root). pip installed the packages into the directory `~/.local/lib/pythonX.X/site-packages`
2. When I ran pip as root or with admin privileges (sudo), it installed packages in `/usr/lib/pythonX.X/dist-packages`. This path might be different for you.
3. virtualenv command gets recognized only in the second scenario
4. So, to solve the issue, do `pip uninstall virtualenv` and then reinstall it with `sudo pip install virtualenv` (or install as root) |
Python str.translate VS str.replace | 31,143,290 | 4 | 2015-06-30T16:11:49Z | 31,145,842 | 7 | 2015-06-30T18:34:04Z | [
"python"
] | Why, in ***Python***, is `replace` ~1.5x quicker than `translate`?
```
In [188]: s = '1 a 2'
In [189]: s.replace(' ','')
Out[189]: '1a2'
In [190]: s.translate(None,' ')
Out[190]: '1a2'
In [191]: %timeit s.replace(' ','')
1000000 loops, best of 3: 399 ns per loop
In [192]: %timeit s.translate(None,' ')
1000000 loops, best of 3: 614 ns per loop
``` | Assuming Python 2.7 (because I had to flip a coin without it being stated), we can find the source code for [string.translate](https://docs.python.org/2/library/string.html#string.translate) and [string.replace](https://docs.python.org/2/library/string.html#string.replace) in `string.py`:
```
>>> import inspect
>>> import string
>>> inspect.getsourcefile(string.translate)
'/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/string.py'
>>> inspect.getsourcefile(string.replace)
'/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/string.py'
>>>
```
Oh, we can't, as `string.py` starts with:
```
"""A collection of string operations (most are no longer used).
Warning: most of the code you see here isn't normally used nowadays.
Beginning with Python 1.6, many of these functions are implemented as
methods on the standard string object.
```
I upvoted you because you started down the path of profiling, so let's continue down that thread:
```
from cProfile import run
from string import ascii_letters
s = '1 a 2'
def _replace():
for x in range(5000000):
s.replace(' ', '')
def _translate():
for x in range(5000000):
s.translate(None, ' ')
```
for replace:
```
run("_replace()")
5000004 function calls in 2.059 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.976 0.976 2.059 2.059 <ipython-input-3-9253b3223cde>:8(_replace)
1 0.000 0.000 2.059 2.059 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
5000000 1.033 0.000 1.033 0.000 {method 'replace' of 'str' objects}
1 0.050 0.050 0.050 0.050 {range}
```
and for translate:
```
run("_translate()")
5000004 function calls in 1.785 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.977 0.977 1.785 1.785 <ipython-input-3-9253b3223cde>:12(_translate)
1 0.000 0.000 1.785 1.785 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
5000000 0.756 0.000 0.756 0.000 {method 'translate' of 'str' objects}
1 0.052 0.052 0.052 0.052 {range}
```
Our number of function calls is the same -- not that more function calls means a run will be slower, but it's typically a good place to look. What's fun is that `translate` ran faster on my machine than `replace`! Chalk that up to the fun of not testing changes in isolation -- not that it matters, because we are only concerned with why there *could* be a difference.
In any case, we at least now know that there may be a performance difference and it shows up when evaluating the string object's method (see `tottime`). The `translate` docstring suggests that there's a translation table in play, while `replace` only mentions old-to-new substring replacement.
Let's turn to our old buddy [`dis`](https://docs.python.org/2/library/dis.html) for hints:
```
from dis import dis
```
replace:
```
def dis_replace():
'1 a 2'.replace(' ', '')
dis(dis_replace)
dis("'1 a 2'.replace(' ', '')")
3 0 LOAD_CONST 1 ('1 a 2')
3 LOAD_ATTR 0 (replace)
6 LOAD_CONST 2 (' ')
9 LOAD_CONST 3 ('')
12 CALL_FUNCTION 2
15 POP_TOP
16 LOAD_CONST 0 (None)
19 RETURN_VALUE
```
and `translate`, which ran faster for me:
```
def dis_translate():
'1 a 2'.translate(None, ' ')
dis(dis_translate)
2 0 LOAD_CONST 1 ('1 a 2')
3 LOAD_ATTR 0 (translate)
6 LOAD_CONST 0 (None)
9 LOAD_CONST 2 (' ')
12 CALL_FUNCTION 2
15 POP_TOP
16 LOAD_CONST 0 (None)
19 RETURN_VALUE
```
Unfortunately, the two look identical to `dis`, which means that we should start looking at the string's C source, found in [`stringobject.c`](https://hg.python.org/cpython/file/a887ce8611d2/Objects/stringobject.c) for the version of Python I'm using right now.
Here's the [source for translate](https://hg.python.org/cpython/file/a887ce8611d2/Objects/stringobject.c#l2198).
If you go through the comments, you can see that there are multiple `replace` function definitions, chosen based on the lengths of the inputs.
Our options for substring replacement are:
[replace\_substring\_in\_place](https://hg.python.org/cpython/file/a887ce8611d2/Objects/stringobject.c#l2552)
```
/* len(self)>=1, len(from)==len(to)>=2, maxcount>=1 */
Py_LOCAL(PyStringObject *)
replace_substring_in_place(PyStringObject *self,
```
and [replace\_substring](https://hg.python.org/cpython/file/a887ce8611d2/Objects/stringobject.c#l2672):
```
/* len(self)>=1, len(from)>=2, len(to)>=2, maxcount>=1 */
Py_LOCAL(PyStringObject *)
replace_substring(PyStringObject *self,
```
and [replace\_delete\_single\_character](https://hg.python.org/cpython/file/a887ce8611d2/Objects/stringobject.c#l2415):
```
/* Special case for deleting a single character */
/* len(self)>=1, len(from)==1, to="", maxcount>=1 */
Py_LOCAL(PyStringObject *)
replace_delete_single_character(PyStringObject *self,
char from_c, Py_ssize_t maxcount)
```
`'1 a 2'.replace(' ', '')` has len(self)==5, replacing 1 char with an empty string, making it a `replace_delete_single_character`.
You can check out the function body for yourself, but the answer is "the C function body runs faster in `replace_delete_single_character` than `string_translate` for this specific input".
Thank you for asking this question. |
Theano Shared Variables on Python | 31,143,452 | 2 | 2015-06-30T16:21:02Z | 31,146,656 | 9 | 2015-06-30T19:20:11Z | [
"python",
"class",
"theano"
] | I am now learning the Theano library, and I am just feeling confused about Theano shared variables. By reading the tutorial, I think I didn't understand its detailed meaning. The following is the definition of the Theano shared variables from the tutorial:
"Variable with Storage that is shared between functions that it appears in. These variables are meant to be created by registered shared constructors."
Also, I am wondering if the Theano shared variables can be a python class data member. For example:
```
class A(object):
data = None
...
```
Can "data" be, or be initialized as, a Theano shared variable? I'd really appreciate it if anyone could help me. | Theano shared variables behave more like ordinary Python variables. They have an explicit value that is persistent. In contrast, symbolic variables are not given an explicit value until one is assigned during the execution of a compiled Theano function.
Symbolic variables can be thought of as representing state for the duration of a single execution. Shared variables on the other hand represent state that remains in memory for the lifetime of the Python reference (often similar to the lifetime of the program).
Shared variables are usually used to store/represent neural network weights because we want these values to remain around across many executions of a Theano training or testing function. Often, the purpose of a Theano training function is to update the weights stored in a shared variable. And a testing function needs the current weights to perform the network's forward pass.
As far as Python is concerned Theano variables (shared or symbolic) are just objects -- instances of classes defined within the Theano library. So, yes, references to shared variables can be stored in your own classes, just like any other Python object. |
Get Python Tornado Version? | 31,146,153 | 4 | 2015-06-30T18:50:56Z | 31,147,317 | 7 | 2015-06-30T19:55:44Z | [
"python",
"version",
"tornado"
] | How do I get the current version of my python Tornado Module Version?
With other packages I can do the following:
```
print <modulename>.__version__
```
Source:
[How to check version of python modules?](http://stackoverflow.com/questions/20180543/how-to-check-version-of-python-modules) | Tornado has both `tornado.version`, which is a string for human consumption (currently "4.2"), and `tornado.version_info`, which is a numeric tuple that is better for programmatic comparisons (currently `(4, 2, 0, 0)`). The fourth value of `version_info` will be negative for betas and other pre-releases. |
how do you create a linear regression forecast on time series data in python | 31,147,594 | 20 | 2015-06-30T20:10:13Z | 31,257,836 | 22 | 2015-07-07T00:23:04Z | [
"python"
] | I need to be able to create a python function for forecasting based on linear regression model with confidence bands on time series data:
The function needs to take in an argument to how far out it forecasts. For example 1day, 7days, 30days, 90days etc. Depending on the argument, it will need to create holtwinters forcasting with confidence bands:
My time series data looks like this:
print series
```
[{"target": "average", "datapoints": [[null, 1435688679], [34.870499801635745, 1435688694], [null, 1435688709], [null, 1435688724], [null, 1435688739], [null, 1435688754], [null, 1435688769], [null, 1435688784], [null, 1435688799], [null, 1435688814], [null, 1435688829], [null, 1435688844], [null, 1435688859], [null, 1435688874], [null, 1435688889], [null, 1435688904], [null, 1435688919], [null, 1435688934], [null, 1435688949], [null, 1435688964], [null, 1435688979], [38.180000209808348, 1435688994], [null, 1435689009], [null, 1435689024], [null, 1435689039], [null, 1435689054], [null, 1435689069], [null, 1435689084], [null, 1435689099], [null, 1435689114], [null, 1435689129], [null, 1435689144], [null, 1435689159], [null, 1435689174], [null, 1435689189], [null, 1435689204], [null, 1435689219], [null, 1435689234], [null, 1435689249], [null, 1435689264], [null, 1435689279], [30.79849989414215, 1435689294], [null, 1435689309], [null, 1435689324], [null, 1435689339], [null, 1435689354], [null, 1435689369], [null, 1435689384], [null, 1435689399], [null, 1435689414], [null, 1435689429], [null, 1435689444], [null, 1435689459], [null, 1435689474], [null, 1435689489], [null, 1435689504], [null, 1435689519], [null, 1435689534], [null, 1435689549], [null, 1435689564]]}]
```
Once the function is done it needs to append the forecasted values to the above time series data called series and return series:
```
[{"target": "average", "datapoints": [[null, 1435688679], [34.870499801635745, 1435688694], [null, 1435688709], [null, 1435688724], [null, 1435688739], [null, 1435688754], [null, 1435688769], [null, 1435688784], [null, 1435688799], [null, 1435688814], [null, 1435688829], [null, 1435688844], [null, 1435688859], [null, 1435688874], [null, 1435688889], [null, 1435688904], [null, 1435688919], [null, 1435688934], [null, 1435688949], [null, 1435688964], [null, 1435688979], [38.180000209808348, 1435688994], [null, 1435689009], [null, 1435689024], [null, 1435689039], [null, 1435689054], [null, 1435689069], [null, 1435689084], [null, 1435689099], [null, 1435689114], [null, 1435689129], [null, 1435689144], [null, 1435689159], [null, 1435689174], [null, 1435689189], [null, 1435689204], [null, 1435689219], [null, 1435689234], [null, 1435689249], [null, 1435689264], [null, 1435689279], [30.79849989414215, 1435689294], [null, 1435689309], [null, 1435689324], [null, 1435689339], [null, 1435689354], [null, 1435689369], [null, 1435689384], [null, 1435689399], [null, 1435689414], [null, 1435689429], [null, 1435689444], [null, 1435689459], [null, 1435689474], [null, 1435689489], [null, 1435689504], [null, 1435689519], [null, 1435689534], [null, 1435689549], [null, 1435689564]]},{"target": "Forecast", "datapoints": [[186.77999925613403, 1435520801], [178.95000147819519, 1435521131]]},{"target": "Upper", "datapoints": [[186.77999925613403, 1435520801], [178.95000147819519, 1435521131]]},{"target": "Lower", "datapoints": [[186.77999925613403, 1435520801], [178.95000147819519, 1435521131]]}]
```
Has anyone done something like this in python? Any ideas how to start? | In the text of your question, you clearly state that you would like
upper and lower bounds on your regression output, as well as the output
prediction. You also mention using Holt-Winters algorithms for
forecasting in particular.
The packages suggested by other answerers are useful, but you might note
that `sklearn` LinearRegression does not give you error bounds "out of
the box", statsmodels does [not provide Holt-Winters right now](https://github.com/statsmodels/statsmodels/issues/512).
Therefore, I suggest try using [this implementation](https://gist.github.com/andrequeiroz/5888967) of Holt-Winters.
Unfortunately its license is unclear, so I can't reproduce it here in
full. Now, I'm not sure whether you actually want Holt-Winters
(seasonal) prediction, or Holt's linear exponential smoothing
algorithm. I'm guessing the latter given the title of the post. Thus,
you can use the `linear()` function of the linked library. The
technique is [described in detail here](http://people.duke.edu/~rnau/411avg.htm#HoltLES) for interested readers.
In the interests of not providing a link only answer - I'll describe the
main features here. A function is defined that takes the data i.e.
```
def linear(x, fc, alpha = None, beta = None):
```
`x` is the data to be fit, `fc` is the number of timesteps that you want
to forecast, alpha and beta take their usual Holt-Winters meanings:
roughly a parameter to control the amount of smoothing to the "level"
and to the "trend" respectively. If `alpha` or `beta` are not
specified, they are estimated using [`scipy.optimize.fmin_l_bfgs_b`](http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.optimize.fmin_l_bfgs_b.html)
to minimise the RMSE.
The function simply applies the Holt algorithm by looping through the
existing data points and then returns the forecast as follows:
```
return Y[-fc:], alpha, beta, rmse
```
where `Y[-fc:]` are the forecast points, `alpha` and `beta` are the
values actually used and `rmse` is the root mean squared error.
Unfortunately, as you can see, there are no upper or lower confidence
intervals. By the way - we should probably refer to them as [prediction
intervals](http://robjhyndman.com/hyndsight/intervals/).
### Prediction intervals maths
Holt's algorithm and Holt-Winters algorithm are exponential smoothing
techniques and finding confidence intervals for predictions generated
from them is a tricky subject. They have been referred to as ["rule of
thumb"](https://en.wikipedia.org/w/index.php?title=Holt-Winters) methods and, in the case of the Holt-Winters multiplicative
algorithm, without ["underlying statistical model"](https://www.researchgate.net/publication/4960181_Forecasting_models_and_prediction_intervals_for_the_multiplicative_Holt-Winters_method). However, the
[final footnote to this page](http://people.duke.edu/~rnau/411avg.htm#HoltLES) asserts that:
> It is possible to calculate confidence intervals around long-term
> forecasts produced by exponential smoothing models, by considering them
> as special cases of ARIMA models. (Beware: not all software calculates
> confidence intervals for these models correctly.) The width of the
> confidence intervals depends on (i) the RMS error of the model, (ii) the
> type of smoothing (simple or linear); (iii) the value(s) of the
> smoothing constant(s); and (iv) the number of periods ahead you are
> forecasting. In general, the intervals spread out faster as α gets
> larger in the SES model and they spread out much faster when linear
> rather than simple smoothing is used.
We see [here](https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average#Examples) that an ARIMA(0,2,2) model is equivalent to a Holt
linear model with additive errors.
### Prediction intervals code (i.e. how to proceed)
You indicate in comments that you ["can easily do this in R"](http://stackoverflow.com/questions/31147594/how-do-you-create-a-linear-regression-forecast-on-time-series-data-in-python#comment50306625_31147594). I
guess you may be used to the `holt()` function provided by the
`forecast` package in `R` and therefore expecting such intervals. In
which case - you can adapt the python library to give them to you on the
same basis.
Looking at the [R `holt` code](https://github.com/robjhyndman/forecast/blob/master/R/HoltWintersNew.R#L345), we can see that it returns an object
based on `forecast(ets(...)`. Under the hood - this eventually calls to
[this function `class1`](https://github.com/robjhyndman/forecast/blob/7be1aa446524fdf58e8573e2ec7e85c7a1257fe5/R/etsforecast.R#L113), which returns a mean `mu` and variance
`var` (as well as `cj` which I have to confess I do not understand).
The variance is used to calculate the upper and lower bounds [here](https://github.com/robjhyndman/forecast/blob/7be1aa446524fdf58e8573e2ec7e85c7a1257fe5/R/etsforecast.R#L51).
To do something similar in Python - we would need to produce something
similar to the `class1` R function that estimates the variance of each
prediction. This function takes the residuals found in model fitting and
multiplies them by a factor at each time step to get the variance at
that timestep. *In the particular case of the linear Holt's algorithm*, the factor is the cumulative sum of `alpha + k*beta`
where `k` is the number of timesteps' prediction. Once you have that
variance at each prediction point, treat the errors as normally
distributed and get the X% value from the normal distribution.
Here's an idea how to do this in Python (using the code I linked as
your linear function)
```
#Copy, import or reimplement the RMSE and linear function from
#https://gist.github.com/andrequeiroz/5888967
#factor in case there are not 1 timestep per day - in your case
#assuming the timesteps are UTC epoch - I think they're 5 min
# spaced i.e. 288 per day
timesteps_per_day = 288
# Note - big assumption here - your known data will be regular in time
# i.e. timesteps_per_day observations per day. From the timestamps this seems valid.
# if you can't guarantee that - you'll need to interpolate the data
def holt_predict(data, timestamps, forecast_days, pred_error_level = 0.95):
forecast_timesteps = forecast_days*timesteps_per_day
middle_predictions, alpha, beta, rmse = linear(data,int(forecast_timesteps))
cum_error = [beta+alpha]
for k in range(1,forecast_timesteps):
cum_error.append(cum_error[k-1] + k*beta + alpha)
cum_error = np.array(cum_error)
#Use some numpy multiplication to get the intervals
var = cum_error * rmse**2
# find the correct ppf on the normal distribution (two-sided)
p = abs(scipy.stats.norm.ppf((1-pred_error_level)/2))
interval = np.sqrt(var) * p
upper = middle_predictions + interval
lower = middle_predictions - interval
fcast_timestamps = [timestamps[-1] + i * 86400 / timesteps_per_day for i in range(forecast_timesteps)]
ret_value = []
ret_value.append({'target':'Forecast','datapoints': zip(middle_predictions, fcast_timestamps)})
ret_value.append({'target':'Upper','datapoints':zip(upper,fcast_timestamps)})
ret_value.append({'target':'Lower','datapoints':zip(lower,fcast_timestamps)})
return ret_value
if __name__=='__main__':
import numpy as np
import scipy.stats
from math import sqrt
null = None
data_in = [{"target": "average", "datapoints": [[null, 1435688679],
[34.870499801635745, 1435688694], [null, 1435688709], [null,
1435688724], [null, 1435688739], [null, 1435688754], [null, 1435688769],
[null, 1435688784], [null, 1435688799], [null, 1435688814], [null,
1435688829], [null, 1435688844], [null, 1435688859], [null, 1435688874],
[null, 1435688889], [null, 1435688904], [null, 1435688919], [null,
1435688934], [null, 1435688949], [null, 1435688964], [null, 1435688979],
[38.180000209808348, 1435688994], [null, 1435689009], [null,
1435689024], [null, 1435689039], [null, 1435689054], [null, 1435689069],
[null, 1435689084], [null, 1435689099], [null, 1435689114], [null,
1435689129], [null, 1435689144], [null, 1435689159], [null, 1435689174],
[null, 1435689189], [null, 1435689204], [null, 1435689219], [null,
1435689234], [null, 1435689249], [null, 1435689264], [null, 1435689279],
[30.79849989414215, 1435689294], [null, 1435689309], [null, 1435689324],
[null, 1435689339], [null, 1435689354], [null, 1435689369], [null,
1435689384], [null, 1435689399], [null, 1435689414], [null, 1435689429],
[null, 1435689444], [null, 1435689459], [null, 1435689474], [null,
1435689489], [null, 1435689504], [null, 1435689519], [null, 1435689534],
[null, 1435689549], [null, 1435689564]]}]
#translate the data. There may be better ways if you're
#prepared to use pandas / input data is proper json
time_series = data_in[0]["datapoints"]
epoch_in = []
Y_observed = []
for (y,x) in time_series:
if y and x:
epoch_in.append(x)
Y_observed.append(y)
#Pass in the number of days to forecast
fcast_days = 30
res = holt_predict(Y_observed,epoch_in,fcast_days)
data_out = data_in + res
#data_out now holds the data as you wanted it.
#Optional plot of results
import matplotlib.pyplot as plt
plt.plot(epoch_in,Y_observed)
m,tstamps = zip(*res[0]['datapoints'])
u,tstamps = zip(*res[1]['datapoints'])
l,tstamps = zip(*res[2]['datapoints'])
plt.plot(tstamps,u, label='upper')
plt.plot(tstamps,l, label='lower')
plt.plot(tstamps,m, label='mean')
plt.show()
```
**N.B.** The output I've given adds points as `tuple` type into your object. If you *really* need `list`, then replace `zip(upper,fcast_timestamps)` with `map(list,zip(upper,fcast_timestamps))` where the code adds `upper`, `lower` and `Forecast` dicts to the result.
This code is for the particular case of the Holt's linear algorithm - it is not a generic way to calculate correct prediction intervals.
### Important note
Your sample input data seems to have a lot of `null` and only 3 genuine
data points. This is ***highly unlikely*** to be a good basis for doing
timeseries prediction - especially as they all seem to be within 15 minutes of each other and you're trying to forecast up to 3 months! Indeed - if you feed that data into the R
`holt()`, it will say:
> `You've got to be joking. I need more data!`
I'm assuming you have a larger dataset to test on. I tried the code above on the stock market opening prices for 2015 and it seemed to give reasonable results (see below).

You may think the prediction intervals look a little wide. [This blog from the author of the R forecast module](http://robjhyndman.com/hyndsight/intervals) implies that is intentional, though :
> "conÂfiÂdence interÂvals for the mean are much narÂrower than preÂdicÂtion interÂvals" |
Logarithmic plot of a cumulative distribution function in matplotlib | 31,147,893 | 13 | 2015-06-30T20:26:13Z | 31,575,603 | 13 | 2015-07-22T23:15:04Z | [
"python",
"numpy",
"matplotlib",
"logarithm",
"cdf"
] | I have a file containing logged events. Each entry has a time and latency. I'm interested in plotting the cumulative distribution function of the latencies. I'm most interested in tail latencies so I want the plot to have a logarithmic y-axis. I'm interested in the latencies at the following percentiles: 90th, 99th, 99.9th, 99.99th, and 99.999th. Here is my code so far that generates a regular CDF plot:
```
# retrieve event times and latencies from the file
times, latencies = read_in_data_from_file('myfile.csv')
# compute the CDF
cdfx = numpy.sort(latencies)
cdfy = numpy.linspace(1 / len(latencies), 1.0, len(latencies))
# plot the CDF
plt.plot(cdfx, cdfy)
plt.show()
```

I know what I want the plot to look like, but I've struggled to get it. I want it to look like this (I did not generate this plot):

Making the x-axis logarithmic is simple. The y-axis is the one giving me problems. Using `set_yscale('log')` doesn't work because it wants to use powers of 10. I really want the y-axis to have the same ticklabels as this plot.
How can I get my data into a logarithmic plot like this one?
EDIT:
If I set the yscale to 'log', and ylim to [0.1, 1], I get the following plot:

The problem is that a typical log scale plot on a data set ranging from 0 to 1 will focus on values close to zero. Instead, I want to focus on the values close to 1. | Essentially you need to apply the following transformation to your `Y` values: `-log10(1-y)`. This imposes the only limitation that `y < 1`, so you should be able to have negative values on the transformed plot.
Here's a modified [example](http://matplotlib.org/examples/api/custom_scale_example.html) from `matplotlib` documentation that shows how to incorporate custom transformations into "scales":
```
import numpy as np
from numpy import ma
from matplotlib import scale as mscale
from matplotlib import transforms as mtransforms
from matplotlib.ticker import FixedFormatter, FixedLocator
class CloseToOne(mscale.ScaleBase):
name = 'close_to_one'
def __init__(self, axis, **kwargs):
mscale.ScaleBase.__init__(self)
self.nines = kwargs.get('nines', 5)
def get_transform(self):
return self.Transform(self.nines)
def set_default_locators_and_formatters(self, axis):
axis.set_major_locator(FixedLocator(
np.array([1-10**(-k) for k in range(1+self.nines)])))
axis.set_major_formatter(FixedFormatter(
[str(1-10**(-k)) for k in range(1+self.nines)]))
def limit_range_for_scale(self, vmin, vmax, minpos):
return vmin, min(1 - 10**(-self.nines), vmax)
class Transform(mtransforms.Transform):
input_dims = 1
output_dims = 1
is_separable = True
def __init__(self, nines):
mtransforms.Transform.__init__(self)
self.nines = nines
def transform_non_affine(self, a):
masked = ma.masked_where(a > 1-10**(-1-self.nines), a)
if masked.mask.any():
return -ma.log10(1-a)
else:
return -np.log10(1-a)
def inverted(self):
return CloseToOne.InvertedTransform(self.nines)
class InvertedTransform(mtransforms.Transform):
input_dims = 1
output_dims = 1
is_separable = True
def __init__(self, nines):
mtransforms.Transform.__init__(self)
self.nines = nines
def transform_non_affine(self, a):
return 1. - 10**(-a)
def inverted(self):
return CloseToOne.Transform(self.nines)
mscale.register_scale(CloseToOne)
if __name__ == '__main__':
import pylab
pylab.figure(figsize=(20, 9))
t = np.arange(-0.5, 1, 0.00001)
pylab.subplot(121)
pylab.plot(t)
pylab.subplot(122)
pylab.plot(t)
pylab.yscale('close_to_one')
pylab.grid(True)
pylab.show()
```
[](http://i.stack.imgur.com/yZILQ.png)
Note that you can control the number of 9's via a keyword argument:
```
pylab.figure()
pylab.plot(t)
pylab.yscale('close_to_one', nines=3)
pylab.grid(True)
```
[](http://i.stack.imgur.com/tY9tv.png) |
Is "__module__" guaranteed to be defined during class creation? | 31,148,770 | 12 | 2015-06-30T21:24:12Z | 31,148,962 | 8 | 2015-06-30T21:37:09Z | [
"python",
"python-internals"
] | I was reading some code that looked basically like this:
```
class Foo(object):
class_name = __module__.replace('_', '-')
```
To me, that looked really weird (`__module__`, what is that?) so I went and looked at the python [data-model](https://docs.python.org/2/reference/datamodel.html). A quick search shows that `__module__` is a property of class objects and of function objects. However, there is no `__module__` available in the global namespace (as can easily be verified by just trying to look at it and observing the `NameError` that results ...).
I decided to chalk this up to implementation specific behavior, but as a last check, I decided to test with other implementations I have handy. It turns out that this code executes with¹
* Cpython 2.7.6
* Cpython 3.4.0
* jython 2.5.3
* PyPy 2.2.1 (Python 2.7.3)
My question is whether this behavior is actually defined anywhere in the language reference. I'm not sure why I'd want to, but could I safely rely on `__module__` being in the class creation namespace or did all the implementors just decide to do this the same way?
¹All linux, but I doubt that matters ... | What the documentation does define is that classes will have a `__module__` attribute. It seems the way CPython does this is that it defines a local variable `__module__` at the beginning of the class block. This variable then becomes a class attribute like any other variable defined there.
I can't find any documentation saying that `__module__` has to be defined in this way. In particular, I can't find any documentation explicitly saying the attribute has to be defined as a local variable in the class body, instead of being assigned as a class attribute at some later stage in class creation. [This answer](http://stackoverflow.com/a/26182040/1427416) to a different question mentions that it works this way, and shows how it appears in the bytecode. There was [a Jython bug](http://bugs.jython.org/issue1022) that they fixed by making it work the same as CPython.
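A quick way to see the behaviour for yourself (a minimal sketch):

```python
class Foo(object):
    # __module__ is usable as a plain name inside the class body,
    # even though it is not defined in the global namespace.
    class_name = __module__.replace('_', '-')

# The class-body local becomes an ordinary class attribute.
assert Foo.class_name == Foo.__module__.replace('_', '-')
print(Foo.class_name)  # e.g. '--main--' when run as a script
```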
I'm guessing this is a CPython implementation detail that was carried over to other implementations. As far as I can tell the documentation doesn't actually say `__module__` has to be available inside the class body, only on the class object afterwards. |
Python - best way to check if tuple has any empty/None values? | 31,154,372 | 5 | 2015-07-01T06:49:50Z | 31,154,423 | 9 | 2015-07-01T06:52:46Z | [
"python",
"tuples",
"is-empty"
] | What is the best/most efficient way to check if a tuple has any empty/None values? Do I need to iterate over all tuple items and check, or is there some even better way?
For example:
```
t1 = (1, 2, 'abc')
t2 = ('', 2, 3)
t3 = (0.0, 3, 5)
t4 = (4, 3, None)
```
Checking these tuples, every tuple except `t1` should return True, meaning there is a so-called empty value.
P.S. there is this question: [Test if tuple contains only None values with Python](http://stackoverflow.com/questions/21294383/test-if-tuple-contains-only-none-values-with-python), but is it only about None values | It's very easy:
```
not all(t1)
```
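For instance, running this check over the question's tuples (a quick sketch):

```python
t1 = (1, 2, 'abc')
t2 = ('', 2, 3)
t3 = (0.0, 3, 5)
t4 = (4, 3, None)

print([not all(t) for t in (t1, t2, t3, t4)])
# [False, True, True, True] -- only t1 has no empty/zero/None values
```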
`not all(t1)` returns `False` only if all values in `t1` are non-empty/nonzero and not `None`, and `True` otherwise. `all` short-circuits, so it only has to check the elements up to the first empty one, which makes it very fast. |
Python String argument without an encoding | 31,161,243 | 5 | 2015-07-01T12:24:56Z | 31,161,384 | 8 | 2015-07-01T12:30:04Z | [
"python",
"python-3.x",
"python-unicode"
] | Am trying to a run this piece of code, and it keeps giving an error saying "String argument without an encoding"
```
ota_packet = ota_packet.encode('utf-8') + bytearray(content[current_pos:(final_pos)]) + '\0'.encode('utf-8')
```
Any help? | You are passing in a string object to a `bytearray()`:
```
bytearray(content[current_pos:(final_pos)])
```
You'll need to supply an encoding argument (second argument) so that it can be encoded to bytes.
For example, you could encode it to UTF-8:
```
bytearray(content[current_pos:(final_pos)], 'utf8')
```
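As a quick sanity check (a sketch using placeholder values for `ota_packet`, `content`, and the positions, since the question doesn't show them in full):

```python
# Hypothetical stand-ins for the question's variables, just to show the fix.
ota_packet = "header"
content = "payload"
current_pos, final_pos = 0, len(content)

# With the encoding supplied, the three pieces concatenate cleanly into bytes.
packet = (ota_packet.encode('utf-8')
          + bytearray(content[current_pos:final_pos], 'utf8')
          + '\0'.encode('utf-8'))
print(packet)  # b'headerpayload\x00'
```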
From the [`bytearray()` documentation](https://docs.python.org/3/library/functions.html#bytearray):
> The optional *source* parameter can be used to initialize the array in a few different ways:
>
> * If it is a string, you must also give the *encoding* (and optionally, *errors*) parameters; `bytearray()` then converts the string to bytes using `str.encode()`. |
Creating Probability/Frequency Axis Grid (Irregularly Spaced) with Matplotlib | 31,168,051 | 5 | 2015-07-01T17:35:12Z | 31,170,170 | 9 | 2015-07-01T19:36:56Z | [
"python",
"matplotlib",
"plot",
"probability"
] | I'm trying to create a frequency curve plot, and I'm having trouble manipulating the axis to get the plot I want.
Here is an example of the desired grid/plot I am trying to create:

Here is what I have managed to create with matplotlib:

To create the grid in this plot, I used the following code:
```
m1 = pd.np.arange(.2, 1, .1)
m2 = pd.np.arange(1, 2, .2)
m3 = pd.np.arange(2, 10, 2)
m4 = pd.np.arange(2, 20, 1)
m5 = pd.np.arange(20, 80, 2)
m6 = pd.np.arange(80, 98, 1)
xTick_minor = pd.np.concatenate((m1,m2,m3,m4, m5, m6))
xTick_major = pd.np.array([.2,.5,1,2,5,10,20,30,40,50,60,70,80,90,95,98])
m1 = range(0, 250, 50)
m2 = range(250, 500, 10)
m3 = range(500, 1000, 20)
m4 = range(1000, 5000, 100)
m5 = range(5000, 10000, 200)
m6 = range(10000, 50000, 1000)
yTick_minor = pd.np.concatenate((m1,m2,m3,m4,m5,m6))
yTick_major = pd.np.array([250, 300, 350, 400, 450, 500, 600, 700, 800, 900, 1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500, 5000, 6000, 7000, 8000, 9000, 10000, 15000, 20000, 25000, 30000, 35000, 40000, 45000, 50000])
axes.invert_xaxis()
axes.set_ylabel('Discharge in CFS')
axes.set_xlabel('Exceedance Probability')
axes.xaxis.set_major_formatter(FormatStrFormatter('%3.1f'))
axes.set_xticks(xTick_major)
axes.set_xticks(xTick_minor, minor=True)
axes.grid(which='major', alpha=0.7)
axes.grid(which='minor', alpha=0.4)
axes.set_yticks(yTick_major)
axes.set_yticks(yTick_minor, minor=True)
```
The grid is correct, but what I now want to do is make sure the in the display, the low probability ranges get spaced out more, and the same for the low discharge values (y-axis). Essentially I want to control the **spacing between ticks**, not the tick interval itself, so that the range from .2 to .5 displays similarly to the range between 40 and 50 on the x-axis, as the desired grid shows.
Can this be done in matplotlib? I have read through the documentation on tick\_params and locators, but none of these seem to address this kind of axis formatting. | You can define a custom scale for the x-axis, which you can use instead of 'log'. Unfortunately, it's complicated and you'll need to figure out a function that lets you transform the numbers you give for the x-axis into something more linear. See <http://matplotlib.org/examples/api/custom_scale_example.html>.
Edit to add:
The problem was so interesting I decided to figure out if I could make the custom axis myself. I altered the code from the link to work with your example. I'd be interested to see whether it works the way you want.
Edit: New and improved(?) code! The spacing isn't quite as even as before, but it's now done automatically when you pass a list of points to plt.gca().set\_xscale (see near the end of the code for example). It does a curve fit to fit those points to a logistic function and uses the resulting parameters as the basis for the transformation. I get a warning when I run this code (Warning: converting a masked element to nan). I still haven't figured out what's going on there, but it doesn't seem to be causing problems. Here's the figure that I generated:

```
import numpy as np
from numpy import ma
from matplotlib import scale as mscale
from matplotlib import transforms as mtransforms
from matplotlib.ticker import Formatter, FixedLocator
from scipy.optimize import curve_fit
def logistic(x, L, k, x0):
"""Logistic function (s-curve)."""
return L / (1 + np.exp(-k * (x - x0)))
class ProbabilityScale(mscale.ScaleBase):
"""
Scales data so that points along a logistic curve become evenly spaced.
"""
# The scale class must have a member ``name`` that defines the
# string used to select the scale. For example,
# ``gca().set_yscale("probability")`` would be used to select this
# scale.
name = 'probability'
def __init__(self, axis, **kwargs):
"""
Any keyword arguments passed to ``set_xscale`` and
``set_yscale`` will be passed along to the scale's
constructor.
lower_bound: Minimum value of x. Defaults to .01.
upper_bound_dist: L - upper_bound_dist is the maximum value
of x. Defaults to lower_bound.
"""
mscale.ScaleBase.__init__(self)
lower_bound = kwargs.pop("lower_bound", .01)
if lower_bound <= 0:
raise ValueError("lower_bound must be greater than 0")
self.lower_bound = lower_bound
upper_bound_dist = kwargs.pop("upper_bound_dist", lower_bound)
self.points = kwargs['points']
#determine parameters of logistic function with curve fitting
x = np.linspace(0, 1, len(self.points))
#initial guess for parameters
p0 = [max(self.points), 1, .5]
popt, pcov = curve_fit(logistic, x, self.points, p0 = p0)
[self.L, self.k, self.x0] = popt
self.upper_bound = self.L - upper_bound_dist
def get_transform(self):
"""
Override this method to return a new instance that does the
actual transformation of the data.
The ProbabilityTransform class is defined below as a
nested class of this one.
"""
return self.ProbabilityTransform(self.lower_bound, self.upper_bound, self.L, self.k, self.x0)
def set_default_locators_and_formatters(self, axis):
"""
Override to set up the locators and formatters to use with the
scale. This is only required if the scale requires custom
locators and formatters. Writing custom locators and
formatters is rather outside the scope of this example, but
there are many helpful examples in ``ticker.py``.
"""
axis.set_major_locator(FixedLocator(self.points))
def limit_range_for_scale(self, vmin, vmax, minpos):
"""
Override to limit the bounds of the axis to the domain of the
transform. In this case, the bounds should be
limited to the threshold that was passed in. Unlike the
autoscaling provided by the tick locators, this range limiting
will always be adhered to, whether the axis range is set
manually, determined automatically or changed through panning
and zooming.
"""
return max(vmin, self.lower_bound), min(vmax, self.upper_bound)
class ProbabilityTransform(mtransforms.Transform):
# There are two value members that must be defined.
# ``input_dims`` and ``output_dims`` specify number of input
# dimensions and output dimensions to the transformation.
# These are used by the transformation framework to do some
# error checking and prevent incompatible transformations from
# being connected together. When defining transforms for a
# scale, which are, by definition, separable and have only one
# dimension, these members should always be set to 1.
input_dims = 1
output_dims = 1
is_separable = True
def __init__(self, lower_bound, upper_bound, L, k, x0):
mtransforms.Transform.__init__(self)
self.lower_bound = lower_bound
self.L = L
self.k = k
self.x0 = x0
self.upper_bound = upper_bound
def transform_non_affine(self, a):
"""
This transform takes an Nx1 ``numpy`` array and returns a
transformed copy. Since the range of the scale
is limited by the user-specified threshold, the input
array must be masked to contain only valid values.
``matplotlib`` will handle masked arrays and remove the
out-of-range data from the plot. Importantly, the
``transform`` method *must* return an array that is the
same shape as the input array, since these values need to
remain synchronized with values in the other dimension.
"""
masked = ma.masked_where((a < self.lower_bound) | (a > self.upper_bound), a)
return ma.log((self.L - masked) / masked) / -self.k + self.x0
def inverted(self):
"""
Override this method so matplotlib knows how to get the
inverse transform for this transform.
"""
return ProbabilityScale.InvertedProbabilityTransform(self.lower_bound, self.upper_bound, self.L, self.k, self.x0)
class InvertedProbabilityTransform(mtransforms.Transform):
input_dims = 1
output_dims = 1
is_separable = True
def __init__(self, lower_bound, upper_bound, L, k, x0):
mtransforms.Transform.__init__(self)
self.lower_bound = lower_bound
self.L = L
self.k = k
self.x0 = x0
self.upper_bound = upper_bound
def transform_non_affine(self, a):
return self.L / (1 + np.exp(-self.k * (a - self.x0)))
def inverted(self):
return ProbabilityScale.ProbabilityTransform(self.lower_bound, self.upper_bound, self.L, self.k, self.x0)
# Now that the Scale class has been defined, it must be registered so
# that ``matplotlib`` can find it.
mscale.register_scale(ProbabilityScale)
if __name__ == '__main__':
import matplotlib.pyplot as plt
x = np.linspace(.1, 100, 1000)
points = np.array([.2,.5,1,2,5,10,20,30,40,50,60,70,80,90,95,98])
plt.plot(x, x)
plt.gca().set_xscale('probability', points = points, vmin = .01)
plt.grid(True)
plt.show()
``` |
How to send an array using requests.post (Python)? "Value Error: Too many values to unpack" | 31,168,819 | 11 | 2015-07-01T18:20:59Z | 35,535,240 | 14 | 2016-02-21T11:00:56Z | [
"python",
"api",
"python-requests"
] | I'm trying to send an array(list) of requests to the WheniWork API using requests.post, and I keep getting one of two errors. When I send the list as a list, I get an unpacking error, and when I send it as a string, I get an error asking me to submit an array. I think it has something to do with how requests handles lists. Here are the examples:
```
url='https://api.wheniwork.com/2/batch'
headers={"W-Token": "Ilovemyboss"}
data=[{'url': '/rest/shifts', 'params': {'user_id': 0, 'other_stuff': 'value'}, 'method': 'post'}, {'url': '/rest/shifts', 'params': {'user_id': 1, 'other_stuff': 'value'}, 'method': 'post'}]
r = requests.post(url, headers=headers,data=data)
print r.text
# ValueError: too many values to unpack
```
Simply wrapping the value for data in quotes:
```
url='https://api.wheniwork.com/2/batch'
headers={"W-Token": "Ilovemyboss"}
data="[]" #removed the data here to emphasize that the only change is the quotes
r = requests.post(url, headers=headers,data=data)
print r.text
#{"error":"Please include an array of requests to make.","code":5000}
``` | You want to pass in *JSON encoded* data. See the [API documentation](http://dev.wheniwork.com/):
> Remember - All post bodies must be JSON encoded data (no form data).
The `requests` library makes this trivially easy:
```
headers = {"W-Token": "Ilovemyboss"}
data = [
{
'url': '/rest/shifts',
'params': {'user_id': 0, 'other_stuff': 'value'},
'method': 'post',
},
{
'url': '/rest/shifts',
'params': {'user_id': 1,'other_stuff': 'value'},
'method':'post',
},
]
requests.post(url, json=data, headers=headers)
```
By using the `json` keyword argument the data is encoded to JSON for you, and the `Content-Type` header is set to `application/json`. |
Django query annotation with boolean field | 31,169,108 | 10 | 2015-07-01T18:36:54Z | 31,186,546 | 12 | 2015-07-02T13:53:35Z | [
"python",
"django",
"orm",
"django-queryset"
] | Let's say I have a `Product` model with products in a storefront, and a `ProductImages` table with images of the product, which can have zero or more images. Here's a simplified example:
```
class Product(models.Model):
product_name = models.CharField(max_length=255)
# ...
class ProductImage(models.Model):
product = models.ForeignKey(Product, related_name='images')
image_file = models.CharField(max_length=255)
# ...
```
When displaying search results for products, I want to prioritize products which have images associated with them. I can easily get the number of images:
```
from django.db.models import Count
Product.objects.annotate(image_count=Count('images'))
```
But that's not actually what I want. I'd like to annotate it with a boolean field, `have_images`, indicating whether the product has one or more images, so that I can sort by that:
```
Product.objects.annotate(have_images=(?????)).order_by('-have_images', 'product_name')
```
How can I do that? Thanks! | I eventually found a way to do this using django 1.8's new [conditional expressions](https://docs.djangoproject.com/en/1.8/ref/models/conditional-expressions/):
```
from django.db.models import Case, When, Value, IntegerField
q = (
Product.objects
.filter(...)
.annotate(image_count=Count('images'))
.annotate(
have_images=Case(
When(image_count__gt=0,
then=Value(1)),
default=Value(0),
output_field=IntegerField()))
.order_by('-have_images')
)
```
And that's how I finally found incentive to upgrade to 1.8 from 1.7. |
pip install access denied on Windows | 31,172,719 | 9 | 2015-07-01T22:26:09Z | 32,885,745 | 12 | 2015-10-01T10:52:46Z | [
"python",
"windows",
"pip",
"access-denied"
] | I am trying to run `pip install mitmproxy` on Windows, but I keep getting access denied, even with `cmd` and `PowerShell` using the `Run as Administrator` option.
```
WindowsError: [Error 5] Access is denied: 'c:\\users\\bruno\\appdata\\local\\temp\\easy_install-0fme6u\\cryptography-0.9.1\\.eggs\\cffi-1.1.2-py2.7-win-amd64.egg\\_cffi_backend.pyd'
```
How can I make this work? | On Windows, in cmd, try running pip install through the Python executable,
e.g.
```
python -m pip install mitmproxy
```
This should work; at least, it worked for me for other package installations. |
getattr and setattr on nested objects? | 31,174,295 | 12 | 2015-07-02T01:29:42Z | 31,174,427 | 15 | 2015-07-02T01:49:55Z | [
"python",
"python-2.7",
"recursion",
"attributes",
"setattr"
] | This is probably a simple problem, so hopefully it's easy for someone to point out my mistake, or to say whether this is even possible.
I have an object that has multiple objects as properties. I want to be able to dynamically set the properties of these objects like so:
```
class Person(object):
def __init__(self):
self.pet = Pet()
self.residence = Residence()
class Pet(object):
def __init__(self,name='Fido',species='Dog'):
self.name = name
self.species = species
class Residence(object):
def __init__(self,type='House',sqft=None):
self.type = type
self.sqft=sqft
if __name__=='__main__':
p=Person()
setattr(p,'pet.name','Sparky')
setattr(p,'residence.type','Apartment')
print p.__dict__
```
The output is:
`{'pet': <__main__.Pet object at 0x10c5ec050>, 'residence': <__main__.Residence object at 0x10c5ec0d0>, 'pet.name': 'Sparky', 'residence.type': 'Apartment'}`
As you can see, rather than having the name attribute set on the pet object of the person, a new attribute "pet.name" is created.
I cannot specify person.pet to setattr because different child-objects will be set by the same method, which is parsing some text and filling in the object attributes if/when a relevant key is found.
Is there an easy/built-in way to accomplish this?
Or perhaps I need to write a recursive function to parse the string and call getattr multiple times until the necessary child-object is found and then call setattr on that found object?
Thank you! | You could use [`functools.reduce`](https://docs.python.org/3/library/functools.html#functools.reduce):
```
import functools
def rsetattr(obj, attr, val):
pre, _, post = attr.rpartition('.')
return setattr(rgetattr(obj, pre) if pre else obj, post, val)
sentinel = object()
def rgetattr(obj, attr, default=sentinel):
if default is sentinel:
_getattr = getattr
else:
def _getattr(obj, name):
return getattr(obj, name, default)
return functools.reduce(_getattr, [obj]+attr.split('.'))
```
`rgetattr` and `rsetattr` are drop-in replacements for `getattr` and `setattr`,
which can also handle dotted `attr` strings.
---
```
import functools
class Person(object):
def __init__(self):
self.pet = Pet()
self.residence = Residence()
class Pet(object):
def __init__(self,name='Fido',species='Dog'):
self.name = name
self.species = species
class Residence(object):
def __init__(self,type='House',sqft=None):
self.type = type
self.sqft=sqft
def rsetattr(obj, attr, val):
pre, _, post = attr.rpartition('.')
return setattr(rgetattr(obj, pre) if pre else obj, post, val)
sentinel = object()
def rgetattr(obj, attr, default=sentinel):
if default is sentinel:
_getattr = getattr
else:
def _getattr(obj, name):
return getattr(obj, name, default)
return functools.reduce(_getattr, [obj]+attr.split('.'))
```
---
```
if __name__=='__main__':
p = Person()
print(rgetattr(p, 'pet.favorite.color', 'calico'))
# 'calico'
try:
# Without a default argument, `rgetattr`, like `getattr`, raises
# AttributeError when the dotted attribute is missing
print(rgetattr(p, 'pet.favorite.color'))
except AttributeError as err:
print(err)
# 'Pet' object has no attribute 'favorite'
rsetattr(p, 'pet.name', 'Sparky')
rsetattr(p, 'residence.type', 'Apartment')
print(p.__dict__)
print(p.pet.name)
# Sparky
print(p.residence.type)
# Apartment
``` |
How to avoid hard coding in if condition of python script | 31,174,387 | 3 | 2015-07-02T01:42:14Z | 31,174,457 | 7 | 2015-07-02T01:53:27Z | [
"python"
] | I am new to Python. I have a query about avoiding hard-coded object names (in if conditions) in a Python script.
I have fruit = [ Apple, Mango, Pineapple, Banana, Oranges]
and size = [ small, medium , big]
Currently I write code as below:
```
if fruit == apple and size == small:
    statement 1
    statement 2
elif fruit == apple and size == medium:
    statement 1
    statement 2
elif fruit == apple and size == big:
    statement 1
    statement 2
elif fruit == Mango and size == small:
    statement 1
    statement 2
elif fruit == Mango and size == medium:
    statement 1
    statement 2
```
How can I avoid writing multiple if...else conditions?
Statement 1: Pulling up a dot file related to fruit and size from directory
The path structure is
main-directory/fruit/collateral/fruit\_size.dot
statement 2: Pulling up a txt file related to fruit and size from directory
The path structure is
main-directory/source/readparamters/readparam/fruit\_size.txt
I want to execute the statements for each condition one at a time. Currently I take inputs for fruit and size from the user. Is there a way in Python for the script to automatically take the combinations one by one and execute the statements? I know it's somewhat complex, and a Python expert could help me. | You can create a map of values and functions. For example:
```
MAP = {'apples':{'small':function1,
'large':function3},
'oranges':{'small':function2}}
#Then Run it like so:
fruit = 'apples'
size = 'large'
result = MAP[fruit][size]()
```
That will look up the function for you in the dictionary using fruit and size, then run it and store the output in result. This way, if you need to add additional fruits or sizes, you can simply modify the data in the dictionary without altering any code.
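If, as in the question, each branch only opens files whose paths are built from fruit and size, a small helper plus `itertools.product` covers every combination with no branching at all. A sketch (the lists come from the question; the helper name is an assumption):

```python
import itertools
import os

fruits = ["Apple", "Mango", "Pineapple", "Banana", "Oranges"]
sizes = ["small", "medium", "big"]

def dot_path(fruit, size):
    # main-directory/<fruit>/collateral/<fruit>_<size>.dot, per the question
    return os.path.join("main-directory", fruit, "collateral",
                        "{0}_{1}.dot".format(fruit, size))

# every fruit/size combination, one at a time, with no branching
for fruit, size in itertools.product(fruits, sizes):
    path = dot_path(fruit, size)
    # open and process the .dot (and .txt) file for this combination here
```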
EDIT:
I just read your update. If the processing steps are the same and the only thing that changes is the location of the file, I would suggest writing a function that takes fruit and size as arguments and opens the file based on the input. Then you can run it with your desired fruits and sizes and not have a crazy if statement. |
RemovedInDjango19Warning: Model doesn't declare an explicit app_label | 31,179,459 | 6 | 2015-07-02T08:34:09Z | 31,180,171 | 9 | 2015-07-02T09:04:55Z | [
"python",
"django",
"django-signals",
"django-apps",
"deprecation-warning"
] | Have gone through
[Django 1.9 deprecation warnings app\_label](http://stackoverflow.com/questions/29635765/django-1-9-deprecation-warnings-app-label)
but answers couldn't fix my problem, so asking again.
I have an app that is added to INSTALLED\_APPS in settings.
Whenever I run `manage.py runserver`, I get this warning:
```
[trimmed path to project]/catalog/models.py:9: RemovedInDjango19Warning: Model class catalog.models.Category doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Category(models.Model):
```
The code from my app,
**signals.py**
```
from django.db.models.signals import post_save
from django.dispatch import receiver
from models import Category
@receiver(post_save, sender=Category)
def someSignal(sender, **kwargs):
pass
```
**apps.py**
```
from django.apps import AppConfig
class CatalogConfig(AppConfig):
name = 'catalog'
verbose_name = 'Catalogue'
```
**`__init__.py`**
```
import signals
default_app_config = 'catalog.apps.WhosConfig'
```
Django version 1.8.2 on **Python 2.7.8** | You are importing **models.py** before the app configuration runs.
To fix it, you could import and configure signals in the `CatalogConfig.ready` method.
like this:
**signals.py**
```
def someSignal(sender, **kwargs):
pass
```
**apps.py**
```
from django.apps import AppConfig
from django.db.models.signals import post_save
class CatalogConfig(AppConfig):
name = 'catalog'
verbose_name = 'Catalogue'
def ready(self):
from .signals import someSignal
post_save.connect(
receiver=someSignal,
sender=self.get_model('Category')
)
```
You may want to check the `ready` method [in the documentation](https://docs.djangoproject.com/en/1.8/ref/applications/#django.apps.AppConfig.ready). |
How to import all the environment variables in tox | 31,192,106 | 14 | 2015-07-02T18:33:16Z | 32,252,679 | 12 | 2015-08-27T14:51:19Z | [
"python",
"virtualenv",
"setenv",
"tox"
] | I'm using the following in setenv to import environment variables from the invoking environment, but is there a way to import all the variables so that I don't need to import them one by one?
e.g.:
`{env:TEMPEST_CONFIG:}` and `{env:TEMPEST_CONFIG_DIR:}` are used to import these two variables.
```
[testenv:nosetests]
setenv =
TEMPEST_CONFIG={env:TEMPEST_CONFIG:}
TEMPEST_CONFIG_DIR={env:TEMPEST_CONFIG_DIR:}
deps = {[testenv]deps}
commands =
find . -type f -name "*.pyc" -delete
bash {toxinidir}/tools/setup.sh
nosetests --with-xunit {posargs}
``` | You can use [passenv](https://testrun.org/tox/latest/config.html#confval-passenv=SPACE-SEPARATED-GLOBNAMES). If you pass the catch all wildcard `*` you have access to all environment variables from the parent environment:
> passenv=SPACE-SEPARATED-GLOBNAMES
>
> *New in version 2.0.*
>
> A list of wildcard environment variable names which shall be copied
> from the tox invocation environment to the test environment when
> executing test commands. If a specified environment variable doesn't
> exist in the tox invocation environment it is ignored. You can use \*
> and ? to match multiple environment variables with one name.
minimal `tox.ini` to reproduce:
```
[tox]
envlist = py27
skipsdist = True
[testenv]
passenv = *
whitelist_externals = echo
commands = echo {env:MY_FANCY_ENV_VAR:} from my fancy env var :)
```
invocation in linux/unix shell:
```
MY_FANCY_ENV_VAR='hello' tox
```
invocation on Windows cmd.exe:
```
set MY_FANCY_ENV_VAR=hello & tox
```
output:
```
py27 create: /tmp/tt/.tox/py27
py27 installed:
py27 runtests: PYTHONHASHSEED='2037875709'
py27 runtests: commands[0] | echo from my fancy env var :)
hello from my fancy env var :)
_______________________ summary __________________________
py27: commands succeeded
congratulations :)
``` |
ipdb debugger, step out of cycle | 31,196,818 | 3 | 2015-07-03T00:40:59Z | 32,097,568 | 7 | 2015-08-19T13:54:08Z | [
"python",
"debugging",
"ipdb"
] | Is there a command to step out of loops (say, for or while) while debugging in ipdb, without having to set breakpoints outside them?
I use the `until` command to step out of list comprehensions, but don't know how I could do a similar thing, if possible, for entire loop blocks. | You can use `j <line number>` (jump) to go to another line.
for example, `j 28` to go to line 28. |
Abstract base class is not enforcing function implementation | 31,201,706 | 4 | 2015-07-03T08:05:53Z | 31,201,830 | 7 | 2015-07-03T08:13:12Z | [
"python",
"python-3.x",
"abstract-class",
"python-3.4"
] | ```
from abc import abstractmethod, ABCMeta
class AbstractBase(object):
__metaclass__ = ABCMeta
@abstractmethod
def must_implement_this_method(self):
raise NotImplementedError()
class ConcreteClass(AbstractBase):
def extra_function(self):
print('hello')
# def must_implement_this_method(self):
# print("Concrete implementation")
d = ConcreteClass() # no error
d.extra_function()
```
I'm on Python 3.4. I want to define an abstract base class that defines some functions that need to be implemented by its subclasses. But Python doesn't raise a NotImplementedError when the subclass does not implement the function... | **The syntax** for the declaration of *metaclasses* **has changed in Python 3**. Instead of the `__metaclass__` field, **Python 3 uses a *keyword argument*** in the *base-class list*:
```
import abc
class AbstractBase(metaclass=abc.ABCMeta):
@abc.abstractmethod
def must_implement_this_method(self):
raise NotImplementedError()
```
Calling `d = ConcreteClass()` will **raise an exception** now, **because a class whose *metaclass* derives from `ABCMeta` cannot be instantiated unless all of its abstract methods and properties are *overridden*** (for more information see [`@abc.abstractmethod`](https://docs.python.org/3.2/library/abc.html#abc.abstractmethod)):
```
TypeError: Can't instantiate abstract class ConcreteClass with abstract methods
must_implement_this_method
```
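For completeness, a minimal sketch of the fixed subclass; once `must_implement_this_method` is overridden, instantiation succeeds:

```python
import abc

class AbstractBase(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def must_implement_this_method(self):
        raise NotImplementedError()

class ConcreteClass(AbstractBase):
    def must_implement_this_method(self):
        return "Concrete implementation"

d = ConcreteClass()  # no TypeError once the abstract method is overridden
print(d.must_implement_this_method())
```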
Hope this helps :) |
Append differing number of characters to strings in list of strings | 31,213,520 | 3 | 2015-07-03T19:59:05Z | 31,213,598 | 7 | 2015-07-03T20:06:06Z | [
"python"
] | Let's say I have a list of strings:
```
strs = ["aa", "bbb", "c", "dddd"]
```
I want to append spaces to the end of each string so that each string is 4 characters long. That is, I want the end product to look like this:
```
strs_final = ["aa ", "bbb ", "c ", "dddd"]
```
I think list comprehension is the way to go, but I'm not sure exactly how to go about doing this. I know how to, for instance, pick out the strings that are of length 3 and add one space to them:
```
[i+" " for i in strs if len(i) == 3]
```
But I don't know how to add varying numbers of spaces for varying lengths of strings. | Another solution is to use [str.ljust](https://docs.python.org/3/library/stdtypes.html#str.ljust):
```
[i.ljust(4) for i in strs]
# ['aa  ', 'bbb ', 'c   ', 'dddd']
``` |
What happens to a list when passed to a function? | 31,213,869 | 4 | 2015-07-03T20:32:19Z | 31,213,937 | 7 | 2015-07-03T20:38:06Z | [
"python",
"list",
"function"
] | I'm trying to understand how a list is treated when passed as an argument to a function. So what I did was the following:
I initialized a list:
```
ll = [1,2,3,4]
```
and defined a function:
```
def Foo(ar):
ar += [11]
```
I passed the list to the function:
```
Foo(ll)
```
and when I print it I got:
```
print ll # [1, 2, 3, 4, 11] #case(1)
```
So far so good.
Now I modified the function in order to reset the list such that it has a single element only:
```
def Foo(ar):
ar = [11]
```
I called the function again:
```
Foo(ll)
```
and when I print it again, I get the same list:
```
print ll # [1, 2, 3, 4] # case(2)
```
I thought the list was passed as a reference, so whatever we do to the list inside a function will always change the original list passed from the main program. So for case(2) I was expecting the following result:
```
print ll # [11] # case(2) expected result
```
Am I missing something here? | `ar += [11]` is **not** just an assignment. It's a method call (the method called is `__iadd__`). When Python executes that line, it calls the method and then assigns `ar` to the result. The `__iadd__` method of `list` *modifies* the current `list`.
`ar = [11]` is an *assignment* and hence it simply changes the value **of the local name `ar`**. But there is no link between the `ar` name inside the function and the value of the `ll` variable.
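A quick sketch of the difference outside any function, using a second name to stand in for the parameter:

```python
lst = [1, 2, 3, 4]
alias = lst          # alias and lst name the same list object
alias += [11]        # __iadd__ mutates that object in place
print(lst)           # [1, 2, 3, 4, 11]
alias = [99]         # plain assignment only rebinds the name alias
print(lst)           # [1, 2, 3, 4, 11] -- lst is untouched
```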
You probably want something like:
```
ar[:] = [11]  # slice assignment replaces the contents of the same list object
``` |
How to match both multi line and single line | 31,215,512 | 3 | 2015-07-04T00:04:34Z | 31,215,541 | 9 | 2015-07-04T00:10:16Z | [
"python",
"regex",
"python-2.7"
] | I'm trying to get my head around some Regex (using Python 2.7) and have hit a confusing snag. It's to do with the (.\*). I know that dot matches everything except for new line unless you use the tag re.DOTALL. But when I do use the tag, it includes too much. Here is the code with a few variations and results that I've tried:
```
import re
from urllib2 import urlopen
webpage = urlopen('http://trev.id.au/testfiles/rgxtxt.php').read()
# find the instances of pattern in the file
findPatHTMLComment = re.findall('<!--(.*)-->',webpage)
foundItems = len(findPatHTMLComment) # how many instances where found?
# Print results
print "Found " + str(foundItems) + " matches. They are: "
listIterator = []
listIterator[:]=range(0,foundItems)
for i in listIterator:
print "HTML_Comment["+ str(i) +"]: |" + findPatHTMLComment[i] + "| END HTML Comment"
```
This results in finding 3 matches as it doesn't find the multi-line comment sections.
Using:
```
findPatHTMLComment = re.findall('<!--(.*)-->',webpage,re.DOTALL)
```
Finds a single match, spanning from the first `<!--` to the last `-->` in the document.
```
findPatHTMLComment = re.findall('<!--(.*)-->',webpage,re.MULTILINE)
```
Finds the same as the first one, only 3 out of the 5 comments that are in the file.
QUESTION: What is it that I should use in this instance as the regex? Could you explain it for me and others too?
Appreciate any guidance you can provide. Thanks and have a nice day.
EDIT: Including the sample data that was at the link in the code above (I will be removing the sample data from the server soon):
```
<html>
<!--[if lt IE 9 ]>
<script type="text/javascript">
jQuery(function ($) {
function TopSearchIE9(input,inputBtn){
var $topSearch=$(input);
var $topSearchBtn=$(inputBtn);
$topSearch.keydown(function(e) {
if (e.keyCode == 13) {
$topSearchBtn.trigger("click");
return false;
}
});
}
TopSearchIE9(".J-txt-focus1",".J-txt-focus1-btn");
TopSearchIE9(".J-txt-focus2",".J-txt-focus2-btn");
});
</script>
<![endif]-->
<!--[if lt IE 10 ]>
<style>
.new-header-search .hdSch-txt{ width: 225px;}
.new-header-search .hdSch-del{width: 0px; padding: 5px 0px;}
.new-header-search .hdSch-del.del{background:none; padding: }
</style>
<![endif]-->
<body>
<!-- This is a text file with a number of items to allow testing of some regex methods. It has no actual meaning -->
<div head1>Item heading for first item</div>
<!--By the way, this is a comment in a block of HTML text.-->
<div itembody>We can jump over the moon if we are fast enough, but we really shouldn't try it cause we may get a blood nose. When we do try and succeed it feels quite good.</div>
<div head1>Item heading for second item</div>
<div itembody>If this is showing, its the second body within the itembody div tags for this file</div>
<div head1>Item heading for the third item</div>
<div itembody>
Going to add another div tag
<div highlight>
and closing div tag
</div>
in this body to see how it handles that.
</div>
<!-- The above itembody data should
have it's own div and closing div tags -->
<div head1>Item heading for the fourth item</div>
<div itembody>
<p><a href="mailto:[email protected]">email fred</a> or phone him on +63 493 3382 3329 when you are ready to try more regex stuff.</p>
<p>You can also check with Barney by <a href="mailto:[email protected]">emailing him</a> or phone him of +44 394 394 3992 if that is easier</p>
</div>
<!-- Thats all folks... -->
</body>
``` | > But when I do use the tag, it includes too much.
`*` is a [**greedy**](http://www.rexegg.com/regex-greed.html) operator meaning it will match as much as it can and still allow the remainder of the regular expression to match. You need to follow the `*` operator with `?` for a non-greedy match, which means "zero or more, preferably as few as possible".
```
re.findall('<!--(.*?)-->', webpage, re.DOTALL)
                   ↑
```
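A quick sketch on a two-comment snippet shows the difference the `?` makes:

```python
import re

snippet = "<!-- one -->text<!-- two\nlines -->"
# greedy: a single match spanning from the first <!-- to the last -->
print(re.findall('<!--(.*)-->', snippet, re.DOTALL))
# [' one -->text<!-- two\nlines ']
# non-greedy: each comment matched separately, including the multi-line one
print(re.findall('<!--(.*?)-->', snippet, re.DOTALL))
# [' one ', ' two\nlines ']
```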
The `re.MULTILINE` flag is called multi-line because it makes the anchors `^` and `$` operate on multiple lines; in this case the multi-line modifier is redundant.
On another note, I would consider using [**BeautifulSoup**](http://www.crummy.com/software/BeautifulSoup/bs4/doc/) for this task.
```
from bs4 import BeautifulSoup, Comment
soup = BeautifulSoup(html)
comments = soup.find_all(text=lambda text:isinstance(text, Comment))
``` |
Unexpected output from list(generator) | 31,218,826 | 20 | 2015-07-04T08:55:53Z | 31,218,896 | 15 | 2015-07-04T09:03:31Z | [
"python",
"python-2.7",
"list-comprehension",
"generator-expression"
] | I have a list and a `lambda` function defined as
```
In [1]: i = lambda x: a[x]
In [2]: alist = [(1, 2), (3, 4)]
```
Then I try two different methods to calculate a simple sum
First method.
```
In [3]: [i(0) + i(1) for a in alist]
Out[3]: [3, 7]
```
Second method.
```
In [4]: list(i(0) + i(1) for a in alist)
Out[4]: [7, 7]
```
Both results are unexpectedly different. Why is that happening? | This behaviour has been fixed in Python 3. When you use a list comprehension `[i(0) + i(1) for a in alist]`, you define `a` in the surrounding scope, which is accessible to `i`. In a new session, `list(i(0) + i(1) for a in alist)` will throw an error.
```
>>> i = lambda x: a[x]
>>> alist = [(1, 2), (3, 4)]
>>> list(i(0) + i(1) for a in alist)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <genexpr>
File "<stdin>", line 1, in <lambda>
NameError: global name 'a' is not defined
```
A list comprehension is not a generator: [Generator expressions and list comprehensions](https://docs.python.org/2/howto/functional.html#generator-expressions-and-list-comprehensions).
> Generator expressions are surrounded by parentheses (â()â) and list
> comprehensions are surrounded by square brackets (â[]â).
In your example, the generator expression passed to `list()` has its own scope of variables, with access to global variables at most. When you use it, `i` will look for `a` in that scope. Try this in a new session:
```
>>> i = lambda x: a[x]
>>> alist = [(1, 2), (3, 4)]
>>> [i(0) + i(1) for a in alist]
[3, 7]
>>> a
(3, 4)
```
Compare it to this in another session:
```
>>> i = lambda x: a[x]
>>> alist = [(1, 2), (3, 4)]
>>> l = (i(0) + i(1) for a in alist)
<generator object <genexpr> at 0x10e60db90>
>>> a
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'a' is not defined
>>> [x for x in l]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <genexpr>
File "<stdin>", line 1, in <lambda>
NameError: global name 'a' is not defined
```
When you run `list(i(0) + i(1) for a in alist)`, you pass the generator `(i(0) + i(1) for a in alist)` to the `list` class, which tries to convert it to a list in its own scope before returning the list. For this generator, whose loop variable is not accessible inside the lambda function, the variable `a` has no meaning.
The generator object `<generator object <genexpr> at 0x10e60db90>` does not carry the name `a` into the lambda's scope. Then, when `list` tries to advance the generator, the lambda function will throw an error for the undefined `a`.
The behaviour of list comprehensions, in contrast with generators, is also mentioned [here](https://www.python.org/dev/peps/pep-0289/):
> List comprehensions also "leak" their loop variable into the
> surrounding scope. This will also change in Python 3.0, so that the
> semantic definition of a list comprehension in Python 3.0 will be
> equivalent to `list(<generator expression>)`. Python 2.4 and beyond
> should issue a deprecation warning if a list comprehension's loop
> variable has the same name as a variable used in the immediately
> surrounding scope.
In python3:
```
>>> i = lambda x: a[x]
>>> alist = [(1, 2), (3, 4)]
>>> [i(0) + i(1) for a in alist]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <listcomp>
File "<stdin>", line 1, in <lambda>
NameError: name 'a' is not defined
``` |
Scrapy - No module named mail.smtp | 31,219,359 | 9 | 2015-07-04T10:05:56Z | 31,664,739 | 18 | 2015-07-27T23:01:04Z | [
"python",
"scrapy"
] | System: Ubuntu 14.04
I installed scrapy using the command `sudo pip install scrapy`.
I am following the tutorial located [here](http://doc.scrapy.org/en/1.0/intro/tutorial.html).
When I run the command `scrapy crawl dmoz` at [this](http://doc.scrapy.org/en/1.0/intro/tutorial.html#crawling) step, I get the following error:
```
2015-07-04 15:28:58 [scrapy] INFO: Scrapy 1.0.1 started (bot: tutorial)
2015-07-04 15:28:58 [scrapy] INFO: Optional features available: ssl, http11
2015-07-04 15:28:58 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial'}
Traceback (most recent call last):
File "/usr/local/bin/scrapy", line 11, in <module>
sys.exit(execute())
File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 143, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 89, in _run_print_help
func(*a, **kw)
File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 150, in _run_command
cmd.run(args, opts)
File "/usr/local/lib/python2.7/dist-packages/scrapy/commands/crawl.py", line 57, in run
self.crawler_process.crawl(spname, **opts.spargs)
File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 150, in crawl
crawler = self._create_crawler(crawler_or_spidercls)
File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 166, in _create_crawler
return Crawler(spidercls, self.settings)
File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 46, in __init__
self.extensions = ExtensionManager.from_crawler(self)
File "/usr/local/lib/python2.7/dist-packages/scrapy/middleware.py", line 56, in from_crawler
return cls.from_settings(crawler.settings, crawler)
File "/usr/local/lib/python2.7/dist-packages/scrapy/middleware.py", line 32, in from_settings
mwcls = load_object(clspath)
File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/misc.py", line 44, in load_object
mod = import_module(module)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/usr/local/lib/python2.7/dist-packages/scrapy/extensions/memusage.py", line 16, in <module>
from scrapy.mail import MailSender
File "/usr/local/lib/python2.7/dist-packages/scrapy/mail.py", line 23, in <module>
from twisted.mail.smtp import ESMTPSenderFactory
ImportError: No module named mail.smtp
``` | I had the same problem.
Installing Twisted solved the problem.
```
sudo apt-get install python-twisted
``` |
Error 400 with python-amazon-simple-product-api via pythonanywhere | 31,223,301 | 4 | 2015-07-04T17:21:28Z | 31,274,244 | 8 | 2015-07-07T16:23:05Z | [
"python",
"django",
"amazon-web-services",
"pythonanywhere"
] | I've been at this for the better part of a day but have been coming up with the same Error 400 for quite some time. Basically, the application's goal is to parse a book's ISBN from the Amazon referral url and use it as the reference key to pull images from Amazon's Product Advertising API. The webpage is written in Python 3.4 and Django 1.8. I spent quite a while researching on here and settled for using python-amazon-simple-product-api since it would make parsing results from Amazon a little easier.
Answers like: [How to use Python Amazon Simple Product API to get price of a product](http://stackoverflow.com/questions/24573983/how-to-use-python-amazon-simple-product-api-to-get-price-of-a-product)
make it seem pretty simple, but I haven't quite gotten it to look up a product successfully yet. Here's a console printout of what my method usually does, with the correct ISBN already filled in:
```
>>> from amazon.api import AmazonAPI
>>> access_key='amazon-access-key'
>>> secret ='amazon-secret-key'
>>> assoc ='amazon-associate-account-name'
>>> amazon = AmazonAPI(access_key, secret, assoc)
>>> product = amazon.lookup(ItemId='1632360705')
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/tsuko/.virtualenvs/django17/lib/python3.4/site-packages/amazon/api.py", line 161, in lookup
response = self.api.ItemLookup(ResponseGroup=ResponseGroup, **kwargs)
File "/home/tsuko/.virtualenvs/django17/lib/python3.4/site-packages/bottlenose/api.py", line 242, in __call__
{'api_url': api_url, 'cache_url': cache_url})
File "/home/tsuko/.virtualenvs/django17/lib/python3.4/site-packages/bottlenose/api.py", line 203, in _call_api
return urllib2.urlopen(api_request, timeout=self.Timeout)
File "/usr/lib/python3.4/urllib/request.py", line 153, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.4/urllib/request.py", line 461, in open
response = meth(req, response)
File "/usr/lib/python3.4/urllib/request.py", line 571, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python3.4/urllib/request.py", line 499, in error
return self._call_chain(*args)
File "/usr/lib/python3.4/urllib/request.py", line 433, in _call_chain
result = func(*args)
File "/usr/lib/python3.4/urllib/request.py", line 579, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 400: Bad Request
```
Now I guess I'm curious if this is some quirk with PythonAnywhere, or if I've missed a configuration setting in Django? As far as I can tell through AWS and the Amazon Associates page my keys are correct. I'm not too worried about parsing at this point, just getting the object. I've even tried bypassing the API and just using Bottlenose (which the API extends) but I get the same error 400 result.
I'm really new to Django and Amazon's API, any assistance would be appreciated! | You still haven't authorised your account for API access. To do so, you can go to <https://affiliate-program.amazon.com/gp/advertising/api/registration/pipeline.html> |
Non time-specific once a day crontab? | 31,223,817 | 3 | 2015-07-04T18:18:16Z | 31,223,841 | 7 | 2015-07-04T18:20:48Z | [
"python",
"cron",
"scheduled-tasks",
"crontab",
"cron-task"
] | I have a Python script, and I wish to run it once and only once every day.
I did some research on the `crontab` command, and it seems able to do this, but only at a fixed time each day.
The issue is that my computer won't be on all day and a specific time for running it is just not possible. What can I do?
Could a log file help? I was thinking of doing a `crontab` every 5 minutes or so and scanning a log file to see any runs for the day. | Install [`anacron`](https://en.wikipedia.org/wiki/Anacron), a `cron` scheduler that'll handle tasks that run at most once a day when your computer is powered on.
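For example, a minimal `/etc/anacrontab` entry (a sketch; the script path and job identifier are assumptions). The fields are the period in days, the delay in minutes, a job identifier, and the command:

```
# period(days)  delay(min)  job-identifier  command
1               10          daily-script    /usr/bin/python /home/user/myscript.py
```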
From the WikiPedia page:
> anacron is a computer program that performs periodic command scheduling which is traditionally done by cron, but without assuming that the system is running continuously. Thus, it can be used to control the execution of daily, weekly, and monthly jobs (or anything with a period of n days) on systems that don't run 24 hours a day. |
python regex not detecting square brackets | 31,225,304 | 3 | 2015-07-04T21:37:43Z | 31,225,325 | 10 | 2015-07-04T21:40:50Z | [
"python",
"regex"
] | I have a scenario where I want to remove all special characters except spaces from given content. I am working with Python and was using this regex:
```
re.sub(r"[^a-zA-z0-9 ]+","",content)
```
It was removing all special characters but was not removing square brackets `[ ]`, and I just don't know why this is happening.
After that I used this regex:
```
content = re.sub(r"[^a-zA-z0-9 ]+|\[|\]","",content)
```
It works flawlessly in the `IDLE IDE` and removes all kinds of special characters, but when I process large files like Wikipedia pages it does not remove closing square brackets `]`. I just don't know why Python is behaving this way. | You have a lowercase `z` where it should be uppercase. Change:
```
re.sub(r"[^a-zA-z0-9 ]+","",content)
```
to:
```
re.sub(r"[^a-zA-Z0-9 ]+","",content)
```
---
For the record, the range `'A-z'` expands to the characters `A...Z`, `[`, `\`, `]`, `^`, `_`, ``` `` ```, `a...z`; that's why your regex was removing everything but those chars.
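A quick check of both character classes on a bracketed string:

```python
import re

# '[' (0x5B) and ']' (0x5D) sit between 'Z' (0x5A) and 'a' (0x61), inside A-z
print(re.sub(r"[^a-zA-z0-9 ]+", "", "foo[bar]!"))  # foo[bar]
print(re.sub(r"[^a-zA-Z0-9 ]+", "", "foo[bar]!"))  # foobar
```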
ASCII table:
 |
Bokeh hover tooltip not displaying all data - Ipython notebook | 31,226,119 | 2 | 2015-07-05T00:05:27Z | 31,234,792 | 7 | 2015-07-05T20:39:11Z | [
"python",
"pandas",
"ipython-notebook",
"bokeh"
] | I am experimenting with Bokeh and mixing pieces of code. I created the graph below from a Pandas DataFrame, which displays the graph correctly with all the tool elements I want. However, the tooltip is only partially displaying the data.
Here is the graph:

Here is my code:
```
from bokeh.plotting import figure, show
from bokeh.io import output_notebook
from bokeh.models import HoverTool
from collections import OrderedDict
x = yearly_DF.index
y0 = yearly_DF.weight.values
y1 = yearly_DF.muscle_weight.values
y2 = yearly_DF.bodyfat_p.values
#output_notebook()
p = figure(plot_width=1000, plot_height=600,
tools="pan,box_zoom,reset,resize,save,crosshair,hover",
title="Annual Weight Change",
x_axis_label='Year',
y_axis_label='Weight',
toolbar_location="left"
)
hover = p.select(dict(type=HoverTool))
hover.tooltips = OrderedDict([('Year', '@x'),('Total Weight', '@y0'), ('Muscle Mass', '$y1'), ('BodyFat','$y2')])
output_notebook()
p.line(x, y0, legend="Weight")
p.line(x, y1, legend="Muscle Mass", line_color="red")
show(p)
```
I have tested with Firefox 39.0, Chrome 43.0.2357.130 (64-bit) and Safari Version 8.0.7. I have cleared the cache and I get the same error in all browsers. Also I did pip install bokeh --upgrade to make sure I have the latest version running. | Try using [`ColumnDataSource`](http://bokeh.pydata.org/en/latest/docs/reference/models.html#bokeh.models.sources.ColumnDataSource).
Hover tool needs to have access to the data source so that it can display info.
`$x`, `$y` are the cursor's x-y values in data units (the `$` prefix is special and can only be followed by a limited set of names; `$y2` is not one of them). Normally I would use `@` + column\_name to display the value of interest, such as `@weight`. See [here](http://bokeh.pydata.org/en/latest/docs/user_guide/tools.html#setting-tool-visuals) for more info.
Besides, I am surprised that the hover appears at all, since I thought HoverTool doesn't work with line glyphs, as noted [here](http://bokeh.pydata.org/en/latest/docs/reference/models.html#bokeh.models.tools.HoverTool)
Try the following : (I haven't tested, might have typos).
```
df = yearly_DF.reset_index() # move index to column.
source = ColumnDataSource(ColumnDataSource.from_df(df))
hover.tooltips = OrderedDict([('x', '$x'), ('y', '$y'), ('year', '$index'), ('weight', '@weight'), ('muscle_weight', '@muscle_weight'), ('body_fat', '@bodyfat_p')])
p.line(x='index', y='weight', source=source, legend="Weight")
p.line(x='index', y='muscle_weight', source=source, legend="Muscle Mass", line_color="red")
``` |
What is the difference between "range(0,2)" and "list(range(0,2))"? | 31,227,536 | 14 | 2015-07-05T05:53:36Z | 31,227,566 | 10 | 2015-07-05T06:00:02Z | [
"python",
"python-2.7",
"python-3.x"
] | Need to understand the difference between `range(0,2)` and `list(range(0,2))`, using python2.7
Both return a list so what exactly is the difference? | It depends on what version of Python you are using.
In Python 2.x, [`range()`](https://docs.python.org/2/library/functions.html#range) returns a list, so they are equivalent.
In Python 3.x, [`range()`](https://docs.python.org/3/library/functions.html#func-range) returns an immutable sequence type, you need `list(range(0,2))` to get a list. |
What is the difference between "range(0,2)" and "list(range(0,2))"? | 31,227,536 | 14 | 2015-07-05T05:53:36Z | 31,227,578 | 27 | 2015-07-05T06:02:50Z | [
"python",
"python-2.7",
"python-3.x"
] | Need to understand the difference between `range(0,2)` and `list(range(0,2))`, using python2.7
Both return a list so what exactly is the difference? | In Python 3.x ,
`range(0,3)` returns an immutable iterable object that lets you iterate over it; it does not produce a list and does not store all the elements of the range in memory. Instead it produces the elements on the fly (as you are iterating over them), whereas `list(range(0,3))` produces a list (by iterating over all the elements and appending them to the list internally).
Example -
```
>>> range(0,3)
range(0, 3)
>>> list(range(0,3))
[0, 1, 2]
```
Ideally, if you only want to iterate over that range of values, `range(0,3)` would be faster than `list(range(0,3))` because the latter has the overhead of producing a list before you start iterating over it.
In Python 2.x, `range(0,3)` produces a list, but there was also an `xrange()` function whose behavior was similar to the `range()` function from Python 3.x (`xrange` was renamed to `range` in Python 3.x).
For Python 3.5, From the [documentation](https://docs.python.org/3/library/stdtypes.html#ranges) -
> Range objects implement the collections.abc.Sequence ABC, and provide features such as containment tests, element index lookup, slicing and support for negative indices
So you can do things like -
```
>>> range(0,10)[5]
5
>>> range(0,10)[3:7]
range(3, 7)
>>> 5 in range(6,10)
False
>>> 7 in range(1,8)
True
```
And all of these are constant time operations , as can be seen from this test -
```
In [11]: %timeit a = xrange(0,1000000)[1000]
1000000 loops, best of 3: 342 ns per loop
In [12]: %timeit a = xrange(0,1000000)[10000]
1000000 loops, best of 3: 342 ns per loop
In [13]: %timeit a = xrange(0,1000000)[100000]
1000000 loops, best of 3: 342 ns per loop
In [14]: %timeit a = xrange(0,1000000)[999999]
1000000 loops, best of 3: 342 ns per loop
In [15]: %timeit a = xrange(0,10000000)[9999999]
1000000 loops, best of 3: 339 ns per loop
In [16]: %timeit a = xrange(0,1000000000000)[9999999999]
1000000 loops, best of 3: 341 ns per loop
``` |
Queue vs JoinableQueue in Python | 31,230,241 | 4 | 2015-07-05T12:16:54Z | 31,230,329 | 8 | 2015-07-05T12:27:41Z | [
"python",
"queue",
"multiprocessing"
] | In Python while using multiprocessing module there are 2 kinds of queues:
* Queue
* JoinableQueue.
What is the difference between them?
# Queue
```
from multiprocessing import Queue
q = Queue()
q.put(item) # Put an item on the queue
item = q.get() # Get an item from the queue
```
# JoinableQueue
```
from multiprocessing import JoinableQueue
q = JoinableQueue()
q.task_done() # Signal task completion
q.join() # Wait for completion
``` | [`JoinableQueue`](https://docs.python.org/2/library/multiprocessing.html#multiprocessing.JoinableQueue) has methods `join()` and `task_done()`, which [`Queue`](https://docs.python.org/2/library/multiprocessing.html#multiprocessing.Queue) hasn't.
---
> **class multiprocessing.Queue( [maxsize] )**
>
> Returns a process shared queue implemented using a pipe and a few locks/semaphores. When a process first puts an item on the queue a feeder thread is started which transfers objects from a buffer into the pipe.
>
> The usual Queue.Empty and Queue.Full exceptions from the standard library's Queue module are raised to signal timeouts.
>
> Queue implements all the methods of Queue.Queue except for task\_done() and join().
---
> **class multiprocessing.JoinableQueue( [maxsize] )**
>
> JoinableQueue, a Queue subclass, is a queue which additionally has task\_done() and join() methods.
>
> **task\_done()**
>
> Indicate that a formerly enqueued task is complete. Used by queue consumer threads. For each get() used to fetch a task, a subsequent call to task\_done() tells the queue that the processing on the task is complete.
>
> If a join() is currently blocking, it will resume when all items have been processed (meaning that a task\_done() call was received for every item that had been put() into the queue).
>
> Raises a ValueError if called more times than there were items placed in the queue.
>
> **join()**
>
> Block until all items in the queue have been gotten and processed.
>
> The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer thread calls task\_done() to indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, join() unblocks.
---
If you use `JoinableQueue` then you must call `JoinableQueue.task_done()` for each task removed from the queue or else the semaphore used to count the number of unfinished tasks may eventually overflow, raising an exception. |
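A minimal sketch of the `task_done()`/`join()` handshake (the names `worker`, `tasks` and `results` here are just illustrative):

```python
from multiprocessing import JoinableQueue, Process, Queue

def worker(tasks, results):
    while True:
        item = tasks.get()
        results.put(item * 2)  # stand-in for real work
        tasks.task_done()      # one task_done() per get()

if __name__ == '__main__':
    tasks = JoinableQueue()
    results = Queue()
    p = Process(target=worker, args=(tasks, results))
    p.daemon = True            # let the worker die with the main process
    p.start()
    for i in range(3):
        tasks.put(i)
    tasks.join()               # blocks until task_done() was called 3 times
    out = sorted(results.get() for _ in range(3))
    print(out)                 # [0, 2, 4]
```

With a plain `Queue`, the `tasks.join()` line would raise an `AttributeError` — that method simply does not exist there.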
incorrect answers for quadratic equations | 31,232,563 | 5 | 2015-07-05T16:33:08Z | 31,232,584 | 9 | 2015-07-05T16:34:57Z | [
"python"
] | I was wondering if anyone could tell me why my python code for solving quadratic equations isn't working. I have looked through it and haven't found any errors.
```
print("This program will solve quadratic equations for you")
print("It uses the system 'ax**2 + bx + c'")
print("a, b and c are all numbers with or without decimal \
points")
print("Firstly, what is the value of a?")
a = float(input("\n\nType in the coefficient of x squared"))
b = float(input("\n\nNow for b. Type in the coefficient of x"))
c = float(input("\n\nGreat. now what is the c value? The number alone?"))
print("The first value for x is " ,(-b+(((b**2)-(4*a*c))**0.5)/(2*a)))
print("\n\nThe second value for x is " ,(-b-(((b**2)-(4*a*c))**0.5)/(2*a)))
```
When a=1 b=-4 and c=-3 I am expecting -1 and 4 but get 5.5 and 0.5 | Your trouble is in the part that tries to do the quadratic formula:
```
(-b+(((b**2)-(4*a*c))**0.5)/2*a)
```
The trouble is that `*` has the same precedence as `/` so you're dividing by 2 and then multiplying by `a`. Also your parentheses are off, so I reduced the unnecessary ones and moved the wrong ones. In short, -b wasn't being put together with the square root before the division. What you want is:
```
(-b+(b**2-4*a*c)**0.5)/(2*a)
```
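A quick sanity check of the fixed expression, using coefficients chosen here so the roots come out whole (a = 1, b = -3, c = -4, i.e. x^2 - 3x - 4 = (x + 1)(x - 4)):

```python
a, b, c = 1.0, -3.0, -4.0
x1 = (-b + (b**2 - 4*a*c)**0.5) / (2*a)
x2 = (-b - (b**2 - 4*a*c)**0.5) / (2*a)
print(x1, x2)  # 4.0 -1.0
```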
P.S. For the sake of asking questions, it would be better to ask in the form of something like:
```
>>> a = 2
>>> b = 1
>>> c = 3
>>> (-b+(((b**2)-(4*a*c))**0.5)/2*a)
got blah, expected blam
```
Since the other printing and inputting is not to blame (which you should be able to work out fairly easily). |
scrapy crawler caught exception reading instance data | 31,232,681 | 9 | 2015-07-05T16:44:30Z | 31,233,576 | 24 | 2015-07-05T18:24:32Z | [
"python",
"web-crawler",
"scrapy"
] | I am new to python and want to use scrapy to build a web crawler. I go through the tutorial in <http://blog.siliconstraits.vn/building-web-crawler-scrapy/>. Spider code likes following:
```
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from nettuts.items import NettutsItem
from scrapy.http import Request
class MySpider(BaseSpider):
name = "nettuts"
allowed_domains = ["net.tutsplus.com"]
start_urls = ["http://net.tutsplus.com/"]
def parse(self, response):
hxs = HtmlXPathSelector(response)
titles = hxs.select('//h1[@class="post_title"]/a/text()').extract()
for title in titles:
item = NettutsItem()
item["title"] = title
yield item
```
When launch the spider with command line: scrapy crawl nettus, it has following error:
```
[boto] DEBUG: Retrieving credentials from metadata server.
2015-07-05 18:27:17 [boto] ERROR: Caught exception reading instance data
Traceback (most recent call last):
File "/anaconda/lib/python2.7/site-packages/boto/utils.py", line 210, in retry_url
r = opener.open(req, timeout=timeout)
File "/anaconda/lib/python2.7/urllib2.py", line 431, in open
response = self._open(req, data)
File "/anaconda/lib/python2.7/urllib2.py", line 449, in _open
'_open', req)
File "/anaconda/lib/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/anaconda/lib/python2.7/urllib2.py", line 1227, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/anaconda/lib/python2.7/urllib2.py", line 1197, in do_open
raise URLError(err)
URLError: <urlopen error [Errno 65] No route to host>
2015-07-05 18:27:17 [boto] ERROR: Unable to read instance data, giving up
```
really do not know what's wrong. Hope somebody could help | In the settings.py file, add the following setting:
```
DOWNLOAD_HANDLERS = {'s3': None,}
``` |
Nested for-loops and dictionaries in finding value occurrence in string | 31,233,859 | 10 | 2015-07-05T18:54:27Z | 31,233,886 | 7 | 2015-07-05T18:57:13Z | [
"python",
"dictionary"
] | I've been tasked with creating a dictionary whose keys are elements found in a string and whose values count the number of occurrences per value.
Ex.
```
"abracadabra" → {'r': 2, 'd': 1, 'c': 1, 'b': 2, 'a': 5}
```
I have the for-loop logic behind it here:
```
xs = "hshhsf"
xsUnique = "".join(set(xs))
occurrences = []
freq = []
counter = 0
for i in range(len(xsUnique)):
for x in range(len(xs)):
if xsUnique[i] == xs[x]:
occurrences.append(xs[x])
counter += 1
freq.append(counter)
freq.append(xsUnique[i])
counter = 0
```
This does exactly what I want it to do, except with lists instead of dictionaries. How can I make it so `counter` becomes a value, and `xsUnique[i]` becomes a key in a new dictionary? | The *easiest* way is to use a Counter:
```
>>> from collections import Counter
>>> Counter("abracadabra")
Counter({'a': 5, 'r': 2, 'b': 2, 'c': 1, 'd': 1})
```
If you can't use a Python library, you can use [dict.get](https://docs.python.org/2/library/stdtypes.html#dict.get) with a default value of `0` to make your own counter:
```
s="abracadabra"
count={}
for c in s:
count[c] = count.get(c, 0)+1
>>> count
{'a': 5, 'r': 2, 'b': 2, 'c': 1, 'd': 1}
```
Or, you can use [dict.fromkeys()](https://docs.python.org/2/library/stdtypes.html#dict.fromkeys) to set all the values in a counter to zero and then use that:
```
>>> counter={}.fromkeys(s, 0)
>>> counter
{'a': 0, 'r': 0, 'b': 0, 'c': 0, 'd': 0}
>>> for c in s:
... counter[c]+=1
...
>>> counter
{'a': 5, 'r': 2, 'b': 2, 'c': 1, 'd': 1}
```
If you truly want the *least Pythonic*, i.e., what you might do in C, you would maybe do:
1. create a list for all possible ascii values set to `0`
2. loop over the string and count characters that are present
3. Print non zero values
Example:
```
ascii_counts=[0]*255
s="abracadabra"
for c in s:
ascii_counts[ord(c)]+=1
for i, e in enumerate(ascii_counts):
if e:
print chr(i), e
```
Prints:
```
a 5
b 2
c 1
d 1
r 2
```
That does not scale to use with Unicode, however, since you would need [more than 1 million](http://stackoverflow.com/questions/5924105/how-many-characters-can-be-mapped-with-unicode) list entries... |
What's the difference between select_related and prefetch_related in Django ORM? | 31,237,042 | 41 | 2015-07-06T02:31:40Z | 31,237,071 | 79 | 2015-07-06T02:37:52Z | [
"python",
"django",
"django-models"
] | In Django doc,
> select\_related() "follow" foreign-key relationships, selecting additional related-object data when it executes its query.
>
> prefetch\_related() does a separate lookup for each relationship, and does the "joining" in Python.
What does it mean by "doing the joining in python"? Can someone illustrate with an example?
My understanding is that for foreign key relationship, use select\_related; and for M2M relationship, use prefetch\_related. Is this correct? | Your understanding is mostly correct. You use `select_related` when the object that you're going to be selecting is a single object, so `OneToOneField` or a `ForeignKey`. You use `prefetch_related` when you're going to get a "set" of things, so `ManyToManyField`s as you stated or reverse `ForeignKey`s. Just to clarify what I mean by "reverse `ForeignKey`s" here's an example:
```
class ModelA(models.Model):
pass
class ModelB(models.Model):
a = ForeignKey(ModelA)
ModelB.objects.select_related('a').all() # Forward ForeignKey relationship
ModelA.objects.prefetch_related('modelb_set').all() # Reverse ForeignKey relationship
```
The difference is that `select_related` does an SQL join and therefore gets the results back as part of the table from the SQL server. `prefetch_related` on the other hand executes another query and therefore reduces the redundant columns in the original object (`ModelA` in the above example). You may use `prefetch_related` for anything that you can use `select_related` for, it just generally doesn't make much sense since it's another query with no communications savings. |
How to protect my python code when a user inputs a string instead of an integer? | 31,240,305 | 2 | 2015-07-06T07:45:41Z | 31,240,381 | 7 | 2015-07-06T07:49:45Z | [
"python",
"python-3.x"
] | I would like to protect the python calculator I have created from crashing when a user inputs a string instead of an integer.
I have tried doing so with an else statement printing "Invalid Input" (or something else I cant remember) when ever a user inputs a string instead of numbers.
I would also like to know if there is a way to let a user do another operation instead of having to restart the application.
If any importing is required (if you can) please list if it is compatible with cx\_Freeze.
**Source code:**
```
def add (x, y):
return(x + y)
def subtract(x, y):
return(x - y)
def multiply(x, y):
return(x * y)
def divide(x, y):
return(x / y)
print("Select operation.")
print("1.Add")
print("2.Subtract")
print("3.Multiply")
print("4.Divide")
choice = input("Enter choice(1/2/3/4):")
num1 = int(input("Enter first number: "))
num2 = int(input("Enter second number: "))
if choice == '1':
print(num1,"+",num2,"=", add(num1,num2))
elif choice == '2':
print(num1,"-",num2,"=", subtract(num1,num2))
elif choice == '3':
print(num1,"*",num2,"=", multiply(num1,num2))
elif choice == '4':
print(num1,"/",num2,"=", divide(num1,num2))
else:
print("Invalid input")
``` | you can use something like this for input
```
while True:
try:
num1 = int(input("Enter first number: "))
except ValueError:
continue
else:
break
``` |
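The same guard is easier to reuse as a small helper (the name `read_int` is just illustrative); it also makes it straightforward to wrap the whole menu in an outer `while True:` loop so the user can perform several operations without restarting:

```python
def read_int(prompt):
    """Keep asking until the user types a valid whole number."""
    while True:
        try:
            return int(input(prompt))
        except ValueError:
            print("Invalid input, please enter a whole number.")
```

Then `num1 = read_int("Enter first number: ")` replaces each bare `int(input(...))` call.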
How to throw exception if script is run with Python 2? | 31,240,907 | 3 | 2015-07-06T08:21:04Z | 31,241,081 | 7 | 2015-07-06T08:29:56Z | [
"python",
"python-3.x"
] | I have a script that should only be run with Python 3. I want to give a nice error message saying that this script should not be run with python2 if a user tries to run it with Python 2.x
How do I do this? When I try checking the Python version, it still throws an error, as Python parses the whole file before executing my `if` condition.
If possible, I'd rather not make another script. | You can write a wrapper start-script in which you only import your actual script and catch for syntax errors:
```
try:
import real_module
except SyntaxError:
print('You need to run this with Python 3')
```
Then, when `real_module.py` uses Python 3 syntax that would throw an exception when used with Python 3, the above message is printed out instead.
Of course, instead of just importing the script, you could also first check the version, and then import it when the version is 3. This has the benefit that you will still see syntax errors of your actual script even when you run it with Python 3:
```
import sys
if sys.version_info[0] < 3:
print('You need to run this with Python 3')
sys.exit(1)
import real_module
``` |
How do I check if an iterator is actually an iterator container? | 31,245,310 | 6 | 2015-07-06T12:00:31Z | 31,245,371 | 8 | 2015-07-06T12:03:54Z | [
"python",
"python-3.x",
"generator"
] | I have a dummy example of an iterator container below (the real one reads a file too large to fit in memory):
```
class DummyIterator:
def __init__(self, max_value):
self.max_value = max_value
def __iter__(self):
for i in range(self.max_value):
yield i
def regular_dummy_iterator(max_value):
for i in range(max_value):
yield i
```
This allows me to iterate over the value **more than once** so that I can implement something like this:
```
def normalise(data):
total = sum(i for i in data)
for val in data:
yield val / total
# this works when I call next()
normalise(DummyIterator(100))
# this doesn't work when I call next()
normalise(regular_dummy_iterator(100))
```
How do I check in the normalise function that I am being passed an iterator container rather than a normal generator? | First of all: There is no such thing as a *iterator container*. You have an *iterable*.
An iterable produces an iterator. Any iterator is also an iterable, but produces *itself* as the iterator:
```
>>> list_iter = iter([])
>>> iter(list_iter) is list_iter
True
```
You don't have an iterator if the `iter(ob) is ob` test is false. |
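Applied to the question's `normalise`, that test might look like this (a sketch — the error message is just illustrative):

```python
def normalise(data):
    # a one-shot iterator returns itself from iter(); reject it up front
    # (since normalise is a generator, the check fires on the first iteration)
    if iter(data) is data:
        raise TypeError("need a re-iterable container, not a one-shot iterator")
    total = sum(data)
    for val in data:
        yield val / total

print(list(normalise(range(1, 4))))  # works: range objects are re-iterable
```

Passing a one-shot generator now fails fast with a clear message instead of silently producing an empty result after `sum()` has exhausted it.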
Python, Pandas : write content of DataFrame into text File | 31,247,198 | 7 | 2015-07-06T13:30:05Z | 31,247,247 | 9 | 2015-07-06T13:33:07Z | [
"python",
"file",
"pandas"
] | I have pandas DataFrame like this
```
X Y Z Value
0 18 55 1 70
1 18 55 2 67
2 18 57 2 75
3 18 58 1 35
4 19 54 2 70
```
I want to write this data to a text File in this way,
```
18 55 1 70
18 55 2 67
18 57 2 75
18 58 1 35
19 54 2 70
```
I have tried something like
```
f = open(writePath, 'a')
f.writelines(['\n', str(data['X']), ' ', str(data['Y']), ' ', str(data['Z']), ' ', str(data['Value'])])
f.close()
```
but its not working.
how to do this? | You can just use [`np.savetxt`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.savetxt.html) and access the np attribute `.values`:
```
np.savetxt(r'c:\data\np.txt', df.values, fmt='%d')
```
yields:
```
18 55 1 70
18 55 2 67
18 57 2 75
18 58 1 35
19 54 2 70
```
or [`to_csv`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html#pandas.DataFrame.to_csv):
```
df.to_csv(r'c:\data\pandas.txt', header=None, index=None, sep=' ', mode='a')
```
Note for `np.savetxt` you'd have to pass a filehandle that has been created with append mode. |
Most efficient way to recode these multiple if statements | 31,249,422 | 2 | 2015-07-06T15:12:35Z | 31,249,603 | 7 | 2015-07-06T15:20:30Z | [
"python",
"python-2.7",
"if-statement"
] | I know this is a ridiculous example, but I'm looking for a more efficient way to write this code. Each project gets different values added to it depending on what state it takes place in. This is just a small snippet. I could potentially want to extend this out for all 50 states, which would be a lot of if statements. I could dump this in a function, but then the function would still have all the if statements.
```
Projects = [['Project A', 'CT', '', ''], ['Project B', 'MA', '', ''], ['Project C', 'RI', '', '']]
for project in Projects:
if project[1] == 'CT':
project[2] = project[0] + project[1]
project[3] = '222'
elif project[1] == 'MA':
project[2] = '123'
project[3] = None
elif project[1] == 'ME':
project[2] = '12323'
project[3] = '333'
elif project[1] == 'RI':
project[2] = 'asdf'
project[3] = '3333'
print Projects
``` | Using a dictionary mapping:
```
for project in Projects:
project[2:4] = {
'CT': [project[0]+project[1], '222'],
'MA': ['123', None],
'ME': ['12323', '333'],
'RI': ['asdf', '3333']
}[project[1]]
```
removes all the `if`/`else`, and just deals with real data :)
As suggested by [jonrsharpe](http://stackoverflow.com/users/3001761/jonrsharpe), it may be more efficient delaying the evaluation of the dictionary values with `lambda`s (at the cost of writing more):
```
for project in Projects:
project[2:4] = {
'CT': lambda: [project[0]+project[1], '222'],
'MA': lambda: ['123', None],
'ME': lambda: ['12323', '333'],
'RI': lambda: ['asdf', '3333']
}[project[1]]()
```
---
---
Edit: explanation for [user2242044](http://stackoverflow.com/users/2242044/user2242044):
consider the function:
```
def foo(x):
print('*** foo(%s)' % x)
return x
```
and see what happens when you do:
```
>>> {1: foo(1), 2: foo(2)}[1]
*** foo(1)
*** foo(2)
1
```
as you see, it computes **all** the values in dictionary, calling both `foo(1)` and `foo(2)`, for then just using the value of `foo(1)`.
With `lambda`s:
```
>>> {1: lambda: foo(1), 2: lambda: foo(2)}[1]()
*** foo(1)
1
```
the dictionary returns a function, and when you call the function, the value is computed, thus computing only the value of `foo(1)` |
Node.js (npm) refuses to find python even after %PYTHON% has been set | 31,251,367 | 9 | 2015-07-06T16:48:16Z | 33,047,257 | 24 | 2015-10-09T21:15:31Z | [
"python",
"node.js",
"npm"
] | So I am trying to get the Node.js to work. Of course, it's not as easy as advertised :)
I happen to have two python versions on my computer, but Node.js seems to only work with the older one, 2.7. Upon error, it also encourages me to set the path into `PYTHON` environment variable with this error:
```
Error: Can't find Python executable "python2.7", you can set the PYTHON env variable.
```
Ok then, I configured the variable as desired:
```
C:\Users\Jakub>set PYTHON=C:\MYSELF\Programs\Python2.7\python.exe
C:\Users\Jakub>echo %PYTHON%
C:\MYSELF\Programs\Python2.7\python.exe
```
You can see that I used `echo` to check whether the variable was really set. Unfortunatelly, that `npm` thing can't read it and the error appears again. Here's the full log right after I set the `%PYTHON%` variable:
```
C:\Users\Jakub>npm install minecraft-protocol
\
> [email protected] install C:\Users\Jakub\node_modules\minecraft-protocol\node_modules\ursa
> node-gyp rebuild
|
C:\Users\Jakub\node_modules\minecraft-protocol\node_modules\ursa>if not defined npm_config_node_gyp (node "C:\Program Files (x86)\nodejs\node_modules\npm\bin\node-gyp-bin\\..\..\no
de_modules\node-gyp\bin\node-gyp.js" rebuild ) else (rebuild)
gyp ERR! configure error
gyp ERR! stack Error: Can't find Python executable "python2.7", you can set the PYTHON env variable.
gyp ERR! stack at failNoPython (C:\Program Files (x86)\nodejs\node_modules\npm\node_modules\node-gyp\lib\configure.js:103:14)
gyp ERR! stack at C:\Program Files (x86)\nodejs\node_modules\npm\node_modules\node-gyp\lib\configure.js:64:11
gyp ERR! stack at FSReqWrap.oncomplete (evalmachine.<anonymous>:95:15)
I figured out the most stable solution is to set npm's internal `python` config value to the actual path:
```
npm config set python C:\Programs\Python2.7\python2.7.exe
```
This skips all environment variable and `%PATH%` crap and just starts the python wherever it's installed. |
Flask ImportError: No Module Named Flask | 31,252,791 | 6 | 2015-07-06T18:12:00Z | 31,253,927 | 15 | 2015-07-06T19:20:48Z | [
"python",
"flask"
] | I'm following the Flask tutorial here:
```
http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-i-hello-world
```
I get to the point where I try ./run.py and I get:
```
Traceback (most recent call last):
File "./run.py", line 3, in <module>
from app import app
File "/Users/benjaminclayman/Desktop/microblog/app/__init__.py", line 1, in <module>
from flask import Flask
ImportError: No module named flask
```
This looks similar to:
```
http://stackoverflow.com/questions/24188240/importerror-no-module-named-flask
```
But their solutions aren't helpful. For reference, I *do* have a folder named flask which one user mentioned may cause issues. | try deleting the virtualenv you created.
create a new virtualenv
```
virtualenv flask
```
then
```
cd flask
```
let's activate the virtualenv
```
source bin/activate
```
now you should see (flask) on the left of the command line.
Let's install flask
```
pip install flask
```
Then create a file hello.py
```
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello World!"
if __name__ == "__main__":
app.run()
```
and run it with
```
python hello.py
``` |
How do i reference values from various ranges within a list? | 31,253,312 | 3 | 2015-07-06T18:44:31Z | 31,253,327 | 7 | 2015-07-06T18:45:38Z | [
"python",
"list",
"slice"
] | What I want to do is reference several different ranges from within a list, i.e. I want the 4-6th elements, the 12 - 18th elements, etc. This was my initial attempt:
```
test = theList[4:7, 12:18]
```
Which I would expect to give do the same thing as:
```
test = theList[4,5,6,12,13,14,15,16,17]
```
But I got a syntax error. What is the best/easiest way to do this? | You can add the two lists.
```
>>> theList = list(range(20))
>>> theList[4:7] + theList[12:18]
[4, 5, 6, 12, 13, 14, 15, 16, 17]
``` |
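If you need to combine more than a couple of ranges, `itertools.chain` stitches any number of slices together in one pass (a sketch):

```python
from itertools import chain

theList = list(range(20))
test = list(chain(theList[4:7], theList[12:18]))
print(test)  # [4, 5, 6, 12, 13, 14, 15, 16, 17]
```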
Why does numpy.linalg.solve() offer more precise matrix inversions than numpy.linalg.inv()? | 31,256,252 | 8 | 2015-07-06T21:49:02Z | 31,257,909 | 11 | 2015-07-07T00:31:31Z | [
"python",
"arrays",
"numpy",
"matrix",
"linear-algebra"
] | I do not quite understand why `numpy.linalg.solve()` gives the more precise answer, whereas `numpy.linalg.inv()` breaks down somewhat, giving (what I believe are) estimates.
For a concrete example, I am solving the equation `C^{-1} * d` where `C` denotes a matrix, and `d` is a vector-array. For the sake of discussion, the dimensions of `C` are shape `(1000,1000)` and `d` is shape `(1,1000)`.
`numpy.linalg.solve(A, b)` solves the equation `A*x=b` for x, i.e. `x = A^{-1} * b.` Therefore, I could either solve this equation by
(1)
```
inverse = numpy.linalg.inv(C)
result = inverse * d
```
or (2)
```
numpy.linalg.solve(C, d)
```
Method (2) gives far more precise results. Why is this?
What exactly is happening such that one "works better" than the other? | `np.linalg.solve(A, b)` does *not* compute the inverse of *A*. Instead it calls one of the [`gesv` LAPACK routines](http://www.netlib.org/lapack/double/dgesv.f), which first factorizes *A* using LU decomposition, then solves for *x* using forward and backward substitution (see [here](https://en.wikipedia.org/wiki/LU_decomposition#Solving_linear_equations)).
`np.linalg.inv` uses the same method to compute the inverse of *A* by solving for *A-1* in *A·A-1 = I* where *I* is the identity\*. The factorization step is exactly the same as above, but it takes more floating point operations to solve for *A-1* (an *n x n* matrix) than for *x* (an *n*-long vector). Additionally, if you then wanted to obtain *x* via the identity *A-1·b = x* then the extra matrix multiplication would incur yet more floating point operations, and therefore slower performance and more numerical error.
There's no need for the intermediate step of computing *A-1* - it is faster and more accurate to obtain *x* directly.
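A small experiment makes the difference visible (illustrative only — exact error magnitudes depend on the matrix, and `default_rng` assumes NumPy ≥ 1.17):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

x_solve = np.linalg.solve(A, b)    # LU factorize, then substitute
x_inv = np.linalg.inv(A) @ b       # explicitly invert, then multiply

# residual ||A·x - b||; solve is typically both faster and closer to zero
print(np.linalg.norm(A @ x_solve - b), np.linalg.norm(A @ x_inv - b))
```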
---
\* The relevant bit of source for `inv` is [here](https://github.com/numpy/numpy/blob/master/numpy/linalg/umath_linalg.c.src#L1727-L1729). Unfortunately it's a bit tricky to understand since it's templated C. The important thing to note is that an identity matrix is being passed to the LAPACK solver as parameter `B`. |
How do you perform basic joins of two RDD tables in Spark using Python? | 31,257,077 | 3 | 2015-07-06T22:55:35Z | 31,257,821 | 9 | 2015-07-07T00:20:57Z | [
"python",
"hadoop",
"join",
"apache-spark",
"pyspark"
] | How would you perform basic joins in Spark using python? In R you could use merg() to do this. What is the syntax using python on spark for:
1. Inner Join
2. Left Outer Join
3. Cross Join
With two tables (RDD) with a single column in each that has a common key.
```
RDD(1):(key,U)
RDD(2):(key,V)
```
I think an inner join is something like this:
```
rdd1.join(rdd2).map(case (key, u, v) => (key, ls ++ rs));
```
Is that right? I have searched the internet and can't find a good example of joins. Thanks in advance. | It can be done either using `PairRDDFunctions` or Spark Data Frames. Since data frame operations benefit from [Catalyst Optimizer](https://databricks.com/blog/2015/04/13/deep-dive-into-spark-sqls-catalyst-optimizer.html) the second option can worth considering.
Assuming your data looks as follows:
```
rdd1 = sc.parallelize([("foo", 1), ("bar", 2), ("baz", 3)])
rdd2 = sc.parallelize([("foo", 4), ("bar", 5), ("bar", 6)])
```
### With PairRDDs:
Inner join:
```
rdd1.join(rdd2)
```
Left outer join:
```
rdd1.leftOuterJoin(rdd2)
```
Cartesian product (doesn't require `RDD[(T, U)]`):
```
rdd1.cartesian(rdd2)
```
Broadcast join (doesn't require `RDD[(T, U)]`):
* see [Spark: what's the best strategy for joining a 2-tuple-key RDD with single-key RDD?](http://stackoverflow.com/q/17621596/1560062)
Finally there is `cogroup` which has no direct SQL equivalent but can be useful in some situations:
```
cogrouped = rdd1.cogroup(rdd2)
cogrouped.mapValues(lambda x: (list(x[0]), list(x[1]))).collect()
## [('foo', ([1], [4])), ('bar', ([2], [5, 6])), ('baz', ([3], []))]
```
### With Spark Data Frames
You can use either SQL DSL or execute raw SQL using `sqlContext.sql`.
```
df1 = sqlContext.createDataFrame(rdd1, ('k', 'v1'))
df2 = sqlContext.createDataFrame(rdd2, ('k', 'v2'))
# Register temporary tables to be able to use sqlContext.sql
df1.registerTempTable('df1')
df2.registerTempTable('df2')
```
Inner join:
```
# inner is a default value so it could be omitted
df1.join(df2, df1.k == df2.k, joinType='inner')
sqlContext.sql('SELECT * FROM df1 JOIN df2 ON df1.k = df2.k')
```
Left outer join:
```
df1.join(df2, df1.k == df2.k, joinType='left_outer')
sqlContext.sql('SELECT * FROM df1 LEFT OUTER JOIN df2 ON df1.k = df2.k')
```
Cross join:
```
df1.join(df2)
sqlContext.sql('SELECT * FROM df1 JOIN df2')
```
Since 1.6 (1.5 in Scala) each of these can be combined with `broadcast` function:
```
from pyspark.sql.functions import broadcast
df1.join(broadcast(df2))
```
to perform broadcast join. See also [Why my BroadcastHashJoin is slower than ShuffledHashJoin in Spark](http://stackoverflow.com/q/34139049/1560062) |
Different behavior in python script and python idle? | 31,260,988 | 12 | 2015-07-07T06:07:27Z | 31,261,439 | 8 | 2015-07-07T06:38:09Z | [
"python",
"python-2.7",
"cpython"
] | In the python idle:
```
>>> a=1.1
>>> b=1.1
>>> a is b
False
```
But when I put the code in a script and run it, I will get a different result:
```
$cat t.py
a=1.1
b=1.1
print a is b
$python t.py
True
```
Why did this happen? I know that `is` compares the `id` of two objects, so why the ids of two objects are same/unique in python script/idle?
I also found that, if I use a small int, for example `1`, instead of `1.1`, the result will be the same in both the python script and python idle. Why did small int and small float have different behavior?
I am using CPython 2.7.5. | When Python executes a script file, the whole file is parsed first. You can notice that when you introduce a syntax error somewhere: Regardless of where it is, it will prevent any line from executing.
Since Python parses the file first, literals can be loaded efficiently into memory. Because Python knows that these are constants, all variables that represent the same constant value can point to a single object in memory, so the object is shared.
This works for ints and floats, but even for strings; even when there is a constant expression that needs to be evaluated first:
```
a = "foo"
b = "foo"
c = "fo" + "o"
print(a is b)
print(a is c)
```
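You can watch this constant sharing directly by compiling a small "script" as a single unit (a sketch; the sharing is a CPython implementation detail, not a language guarantee):

```python
# Compiling the whole "script" at once lets CPython collapse the two
# equal 1.1 literals into a single constant in the code object.
code = compile("a = 1.1\nb = 1.1\nresult = a is b", "<script>", "exec")
ns = {}
exec(code, ns)
print(ns["result"])  # True on CPython
```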
Now in IDLE, the behavior is very different: As an interactive interpreter, IDLE executes every line separately. So `a = 1.1` and `b = 1.1` are executed in separate contexts, which makes it impossible (or just very hard) to figure out that they both share the same constant literal value and could share the memory. So instead, the interpreter will allocate two different objects, which causes the identity check using `is` to fail.
For small integers, the situation is a bit different. Because they are often used, CPython stores a set of integers (in the range between -5 and 256) statically and ensures that every occurrence of these values points to the same `int` object. That's why you get a different result for small integers than for any other object. See also the following questions:
* [Python's "is" operator behaves unexpectedly with integers](http://stackoverflow.com/questions/306313/pythons-is-operator-behaves-unexpectedly-with-integers)
* [Weird Integer Cache inside Python 2.6](http://stackoverflow.com/questions/15171695/weird-integer-cache-inside-python-2-6) |
Different number of digits in PI | 31,263,763 | 3 | 2015-07-07T08:42:33Z | 31,263,897 | 11 | 2015-07-07T08:48:37Z | [
"python",
"variables",
"printing",
"pi"
] | I am a beginner in Python and I have a doubt regarding PI.
```
>>> import math
>>> p = math.pi
>>> print p
3.14159265359
>>> math.pi
3.141592653589793
```
* Why are the two having different number of digits ?
* How can I get the value of Pi up to more decimal places without using the Chudnovsky algorithm? | > Why are the two having different number of digits ?
One is 'calculated' with `__str__`, the other with `__repr__`:
```
>>> print repr(math.pi)
3.141592653589793
>>> print str(math.pi)
3.14159265359
```
`print` uses the return value of `__str__` of objects to determine what to print. Just doing `math.pi` uses `__repr__`.
> How can I get the value of Pi up to more decimal places without using Chudnovsky algorithm ?
You can show more numbers with `format()` like so
```
>>> print "pi is {:.20f}".format(math.pi)
pi is 3.14159265358979311600
```
Where 20 is the number of decimal places. Note that a Python `float` only carries about 15-17 significant digits, so the digits beyond that (the trailing `...311600` above) are not meaningful. More info in [the docs](https://docs.python.org/2/tutorial/inputoutput.html) |
Start a flask application in separate thread | 31,264,826 | 6 | 2015-07-07T09:30:33Z | 31,265,602 | 17 | 2015-07-07T10:06:04Z | [
"python",
"multithreading",
"flask"
] | I'm currently developing a Python application for which I want to see real-time statistics. I wanted to use `Flask` in order to make it easy to use and to understand.
My issue is that my Flask server should start at the very beginning of my python application and stop at the very end. It should look like that :
```
def main():
""" My main application """
from watcher.flask import app
# watcher.flask define an app as in the Quickstart flask documentation.
# See: http://flask.pocoo.org/docs/0.10/quickstart/#quickstart
app.run() # Starting the flask application
do_my_stuff()
app.stop() # Undefined, for the idea
```
Because I need my application context (for the statistics), I can't use a `multiprocessing.Process`. Then I tried to use a `threading.Thread`, but it looks like Werkzeug doesn't like it:
```
* Running on http://0.0.0.0:10079/
Exception in thread Flask Server:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File ".../develop-eggs/watcher.flask/src/watcher/flask/__init__.py", line 14, in _run
app.run(host=HOSTNAME, port=PORT, debug=DEBUG)
File ".../eggs/Flask-0.10.1-py2.7.egg/flask/app.py", line 772, in run
run_simple(host, port, self, **options)
File ".../eggs/Werkzeug-0.7-py2.7.egg/werkzeug/serving.py", line 609, in run_simple
run_with_reloader(inner, extra_files, reloader_interval)
File ".../eggs/Werkzeug-0.7-py2.7.egg/werkzeug/serving.py", line 524, in run_with_reloader
signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))
ValueError: signal only works in main thread
```
How can I do that without running Flask in the main thread? | You're running `Flask` in debug mode, which enables **the reloader** (it reloads the Flask server when your code changes).
Flask can run just fine in a separate thread, but the reloader expects to run in the main thread.
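The underlying limitation is easy to reproduce with just the standard library (a minimal sketch of the root cause, not Flask-specific):

```python
import signal
import threading

# Installing a signal handler is only allowed from the main thread;
# Werkzeug's reloader tries to do exactly that from wherever app.run()
# is called, hence the ValueError in the traceback.
def install_handler():
    try:
        signal.signal(signal.SIGTERM, lambda *args: None)
        return "ok"
    except ValueError as exc:
        return str(exc)

result = []
t = threading.Thread(target=lambda: result.append(install_handler()))
t.start()
t.join()
print(result[0])  # e.g. "signal only works in main thread ..."
```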
---
To solve your issue, you should either disable debug (`app.debug = False`), or disable the reloader (`app.use_reloader=False`).
Those can also be passed as arguments to `app.run`: `app.run(debug=True, use_reloader=False)`. |
Remove list of indices from a list in Python | 31,267,493 | 4 | 2015-07-07T11:36:13Z | 31,267,522 | 8 | 2015-07-07T11:38:03Z | [
"python"
] | I have a list with points (centroids) and some of them have to be removed.
How can I do this without loops? I've tried [the answer given here](http://stackoverflow.com/a/627453/2071807) but this error is shown:
```
list indices must be integers, not list
```
My lists look like this:
```
centroids = [[320, 240], [400, 200], [450, 600]]
index = [0,2]
```
And I want to remove the elements in `index`. The final result would be:
```
centroids = [[400, 200]]
``` | You can use [`enumerate`](https://docs.python.org/2/library/functions.html#enumerate) within a list comprehension:
```
>>> centroids = [[320, 240], [400, 200], [450, 600]]
>>> index = [0,2]
>>> [element for i,element in enumerate(centroids) if i not in index]
[[400, 200]]
```
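If `index` is large, it is worth converting it to a `set` first so the membership test is O(1); a small variation on the same comprehension:

```python
centroids = [[320, 240], [400, 200], [450, 600]]
index = {0, 2}  # a set instead of a list
kept = [element for i, element in enumerate(centroids) if i not in index]
print(kept)  # [[400, 200]]
```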
Note that ultimately you still have to loop over the list to check the indices; there is no way to do this without a loop. But you can use a list comprehension, whose loop runs in C and is often noticeably faster (sometimes about twice as fast) than an explicit Python loop!
Also, for better performance you can put your indices in a `set`, which has O(1) membership checking. |
Applying uppercase to a column in pandas dataframe | 31,269,216 | 6 | 2015-07-07T12:59:54Z | 31,272,768 | 10 | 2015-07-07T15:20:47Z | [
"python",
"python-2.7",
"pandas"
] | I'm having trouble applying upper case to a column in my DataFrame.
dataframe is `df`.
`1/2 ID` is the column head that need to apply UPPERCASE.
The problem is that the values are made up of three letters and three numbers. For example `rrr123` is one of the values.
```
df['1/2 ID'] = map(str.upper, df['1/2 ID'])
```
I got an error:
`TypeError: descriptor 'upper' requires a 'str' object but received a 'unicode'` error.
How can I apply upper case to the first three letters in the column of the DataFrame `df`? | If your version of pandas is recent enough, you can just use the vectorised string method [`upper`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.upper.html#pandas.Series.str.upper):
```
df['1/2 ID'].str.upper()
``` |
How to add a favicon to a Pelican blog? | 31,270,373 | 5 | 2015-07-07T13:46:28Z | 31,270,471 | 7 | 2015-07-07T13:50:36Z | [
"python",
"favicon",
"pelican"
] | I am creating a static site with Pelican and I'm confused about how to add a favicon to it.
I've seen [in the documentation](http://docs.getpelican.com/en/3.6.0/tips.html?highlight=favicon#extra-tips) that:
> You can also use the `EXTRA_PATH_METADATA` mechanism to place a
> `favicon.ico` or `robots.txt` at the root of any site.
I don't know where to put my `favicon.ico` file and what to specify in the `EXTRA_PATH_METADATA` setting (if this is really the setting that should be used). | In [my `pelicanconf.py`](https://github.com/textbook/textbook.github.io-source/blob/master/pelicanconf.py#L45), I have:
```
STATIC_PATHS = [
'images',
'extra/robots.txt',
'extra/favicon.ico'
]
EXTRA_PATH_METADATA = {
'extra/robots.txt': {'path': 'robots.txt'},
'extra/favicon.ico': {'path': 'favicon.ico'}
}
```
The structure for these extra files is then:
```
/content
/extra
favicon.ico
robots.txt
```
See [the documentation](http://docs.getpelican.com/en/3.6.0/settings.html#path-metadata), which shows a similar layout. |
pip install --upgrade sqlalchemy gives maximum recursion depth exceeded | 31,273,332 | 5 | 2015-07-07T15:43:37Z | 31,273,772 | 8 | 2015-07-07T16:02:42Z | [
"python",
"python-2.7",
"recursion",
"sqlalchemy",
"pip"
] | I've tried `pip install --upgrade sqlalchemy`, `python2.7 setup.py install`, and after deleting the sqlalchemy folder in site-packages, I've tried `pip install sqlalchemy`. They all give "RuntimeError: maximum recursion depth exceeded in cmp".
```
File "C:\Python27\lib\ntpath.py", line 200, in splitext
return genericpath._splitext(p, sep, altsep, extsep)
File "C:\Python27\lib\genericpath.py", line 102, in _splitext
sepIndex = max(sepIndex, altsepIndex)
RuntimeError: maximum recursion depth exceeded in cmp
```
I have also tried to run the setup.py for v0.9 and get the same result.
Tried adding a line to setup.py to set max recursion to 10000 and python crashes.
Edit: The traceback is a long repetition of this:
```
File "c:\python27\lib\site-packages\distribute-0.6.28-py2.7.egg\setuptools\dist.py", line 225, in __init__
_Distribution.__init__(self,attrs)
File "c:\python27\lib\distutils\dist.py", line 287, in __init__
self.finalize_options()
File "c:\python27\lib\site-packages\distribute-0.6.28-py2.7.egg\setuptools\dist.py", line 257, in finalize_options
ep.require(installer=self.fetch_build_egg)
File "c:\python27\lib\site-packages\distribute-0.6.28-py2.7.egg\pkg_resources.py", line 2029, in require
working_set.resolve(self.dist.requires(self.extras),env,installer))
File "c:\python27\lib\site-packages\distribute-0.6.28-py2.7.egg\pkg_resources.py", line 580, in resolve
dist = best[req.key] = env.best_match(req, self, installer)
File "c:\python27\lib\site-packages\distribute-0.6.28-py2.7.egg\pkg_resources.py", line 825, in best_match
return self.obtain(req, installer) # try and download/install
File "c:\python27\lib\site-packages\distribute-0.6.28-py2.7.egg\pkg_resources.py", line 837, in obtain
return installer(requirement)
File "c:\python27\lib\site-packages\distribute-0.6.28-py2.7.egg\setuptools\dist.py", line 272, in fetch_build_egg
dist = self.__class__({'script_args':['easy_install']})
{repeat until max recursion}
``` | Looks like my "distribute" (v0.6xxx) was out of date.
I ran
```
pip install --upgrade distribute
```
and it installed 0.7.3.
Then ran `pip install sqlalchemy` and it installed.
Same problem encountered installing other packages. |
Extracting Directory from Path with Ending Slash | 31,273,892 | 2 | 2015-07-07T16:07:18Z | 31,273,972 | 10 | 2015-07-07T16:11:15Z | [
"python"
] | What is an elegant way of extracting a directory from a path with an ending slash?
For example
`/foo/bar/test/`
and I want `test`.
I can do `os.path.basename` if there was no ending `/`.
Is my next best option to do something like:
```
if directory[-1] == '/':
basename = os.path.basename(directory[:-1])
else:
basename = os.path.basename(directory)
```
as this is probably not os agnostic or very clean. | Calling `os.path.abspath` will take care of that for you:
```
>>> import os
>>> os.path.abspath('/foo/bar/test/')
'/foo/bar/test'
>>> os.path.abspath('/foo/bar/test')
'/foo/bar/test'
>>>
```
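As an aside, if you'd rather not resolve the path against the current working directory (which `abspath` also does for relative inputs), `os.path.normpath` strips the trailing slash as well:

```python
import os.path

# normpath cleans the trailing slash without making the path absolute
name = os.path.basename(os.path.normpath('/foo/bar/test/'))
print(name)  # test
```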
So:
```
>>> os.path.basename(os.path.abspath('/foo/bar/test/'))
'test'
``` |
Python - Reading Emoji Unicode Characters | 31,280,295 | 5 | 2015-07-07T22:16:25Z | 31,280,408 | 9 | 2015-07-07T22:25:00Z | [
"python",
"python-2.7",
"unicode",
"emoji"
] | I have a Python 2.7 program which reads iOS text messages from a SQLite database. The text messages are unicode strings. In the following text message:
```
u'that\u2019s \U0001f63b'
```
The apostrophe is represented by `\u2019`, but the emoji is represented by `\U0001f63b`. I looked up the code point for the emoji in question, and it's `\uf63b`. I'm not sure where the `0001` is coming from. I know comically little about character encodings.
When I print the text, character by character, using:
```
s = u'that\u2019s \U0001f63b'
for c in s:
print c.encode('unicode_escape')
```
The program produces the following output:
```
t
h
a
t
\u2019
s
\ud83d
\ude3b
```
How can I correctly read these last characters in Python? Am I using encode correctly here? Should I just attempt to trash those `0001`s before reading it, or is there an easier, less silly way? | I don't think you're using encode correctly, nor do you need to. What you have is a valid unicode string with one 4 digit and one 8 digit escape sequence. Try this in the REPL on, say, OS X
```
>>> s = u'that\u2019s \U0001f63b'
>>> print s
that’s 😻
```
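The `0001` is simply part of the full code point: U+1F63B lies above the Basic Multilingual Plane, so it needs the 8-digit `\U0001f63b` escape, and a "narrow" Python 2 build stores it as the UTF-16 surrogate pair `\ud83d\ude3b` — exactly the two "characters" your loop printed. A quick check (this snippet runs on Python 3, where the emoji is a single character):

```python
# UTF-16 encodes U+1F63B as the surrogate pair D83D DE3B, which is what
# a narrow Python 2 build exposes when you iterate over the string.
cat = '\U0001f63b'
print(cat.encode('utf-16-be').hex())  # d83dde3b
print(len(cat))                       # 1 on Python 3
```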
In python3, though -
```
Python 3.4.3 (default, Jul 7 2015, 15:40:07)
>>> s = u'that\u2019s \U0001f63b'
>>> s[-1]
'😻'
``` |
How to limit one session from any browser for a username in flask? | 31,281,470 | 7 | 2015-07-08T00:08:01Z | 31,618,249 | 7 | 2015-07-24T19:21:45Z | [
"python",
"session",
"flask"
] | I am using a gunicorn server and I am trying to figure out a way to limit users to one session per username, i.e. if user A is logged in to the app from Chrome, he should not be able to log in through Firefox unless he logs out of Chrome, and he shouldn't be able to open another tab in Chrome itself.
How can I generate a unique id for the browser and store it in a DB so that until the user logs out or the session expires, the user can't log in through any other browser? | A possible method of limiting sessions to a single tab involves creating a random token on page load and embedding this token into the page. This most recently generated token gets stored in the user's session as well. This will be similar to how various frameworks add validation tokens to prevent [CSRF](https://en.wikipedia.org/wiki/Cross-site_request_forgery) attacks.
Brief example:
* User loads page in tab 1 in Firefox. `Token1` is generated, embedded and stored in session
* User loads page in tab 2 in Firefox. `Token2` is generated, embedded and stored in session. This overwrites previous value.
* User loads page in tab 1 in Chrome. `Token3` is generated, embedded and stored in session. This overwrites previous value.
At this point, the user has the page open in 3 tabs. The user's session, though, only has `Token3` stored. This method prevents the user from being locked out (different IP addresses, different user agent strings, incognito mode, etc.) because each new session simply generates a new token. The newest load becomes the active window, immediately invalidating all previous sessions.
Next, any time the page interacts with the server (clicks a link, submits data, etc.), the token embedded in the page is sent as well. The server validates that the passed token matches the token in the session. If they match, the action succeeds. If they do not match, the server returns a failure message.
---
You can generate random numbers in multiple ways, but you probably want something secure. We'll use the [example](http://stackoverflow.com/questions/2257441/random-string-generation-with-upper-case-letters-and-digits-in-python) from another question:
```
import string
import random
...
N = 20 # Length of the string we want, adjust as appropriate
''.join(random.SystemRandom().choice(string.ascii_uppercase + string.digits) for _ in range(N))
```
This uses [`random.SystemRandom`](https://docs.python.org/2/library/random.html#random.SystemRandom), which is more secure than simply using [`random.choice`](https://docs.python.org/2/library/random.html#random.choice)
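(On Python 3.6+, the standard-library `secrets` module offers an equivalent one-liner; mentioned as an aside, since the snippet above targets Python 2:)

```python
import secrets

# 20 random bytes, base64url-encoded -> a 27-character token
token = secrets.token_urlsafe(20)
print(len(token))  # 27
```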
---
On page load, you need to check if the existing token is valid, generate the random token and store this in the user's session. Since we want this everywhere, let's make a decorator first, to reduce duplicate code later. The decorator checks if the session is valid and if not you get to select what to do (insert your own logic). It also sets a session token. This is needed (or you need logic to exclude your main page) otherwise you'll hit an infinite loop where the user attempts to load the main page, doesn't have a token, fails and the process repeats. I have the token regenerating on each page load via the `else` clause. If you do not implement the `if` section, this decorator is pointless as both paths perform the same action and simply reset the token on page load. The logic in the `if` is what will prevent the user from having multiple sessions.
```
from flask import request, session
from functools import wraps
import random
import string

def random_string(length):
    return ''.join(random.SystemRandom().choice(string.ascii_uppercase + string.digits) for _ in range(length))

def validate_token(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        # The token the client sent back (embedded in the page, returned e.g. as a query parameter)
        existing_token = request.args.get('token')
        if session.get('token') != existing_token:
            # Logic on failure. Do you present a 404, do you bounce them back to your main page, do you do something else?
            # It is IMPORTANT that you determine and implement this logic,
            # otherwise the decorator simply changes the token (and behaves the same way as the else block).
            session['token'] = random_string(20)
        else:
            session['token'] = random_string(20)
        return f(*args, **kwargs)
    return wrapper
```
Now in our routes, we can apply this decorator to each, so that the user session gets updated on each page load:
```
from flask import render_template
@app.route('/path')
@validate_token
def path(token=None):
    return render_template('path.html', token=session['token'])
```
In your template, you want to utilize this `token` value anywhere you need to prevent the session from continuing. For example, put it on links, in forms (though [Flask has a method of CSRF protection](http://flask.pocoo.org/snippets/3/) already), etc. The server itself can check if the passed token is valid. The template could look as simple as this:
```
<a href="{{ url_for('path', token=token) }}">Click here to continue</a>
``` |
Strange... What does [::5,0] mean | 31,284,864 | 4 | 2015-07-08T06:15:50Z | 31,285,008 | 10 | 2015-07-08T06:25:57Z | [
"python",
"numpy",
"pandas",
"matplotlib"
] | I found a webpage explaining how to use `set_xticks` and `set_xticklabels`.
And they set `set_xticks` and `set_xticklabels` as follows...
```
ax.set_xticks(xx[::5,0])
ax.set_xticklabels(times[::5])
ax.set_yticks(yy[0,::5])
ax.set_yticklabels(dates[::5])
```
What does `[::5,0]` mean, exactly?
I don't have any idea. | For a numpy array, the notation `[::5,6]` means: take the column at index 6, and within that column take every 5th row, starting at the first row and going to the last.
Example -
```
In [12]: n = np.arange(100000)
In [17]: n.shape = (500,200)
In [18]: n[::1,2]
Out[18]:
array([ 2, 202, 402, 602, 802, 1002, 1202, 1402, 1602,
1802, 2002, 2202, 2402, 2602, 2802, 3002, 3202, 3402,
3602, 3802, 4002, 4202, 4402, 4602, 4802, .....])
In [19]: n[::5,2]
Out[19]:
array([ 2, 1002, 2002, 3002, 4002, 5002, 6002, ...])
```
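The `::5` step on its own works the same way on plain Python sequences (only the multi-axis `,` part is NumPy-specific):

```python
xs = list(range(20))
print(xs[::5])   # every 5th element: [0, 5, 10, 15]
print(xs[::-5])  # a negative step walks backwards: [19, 14, 9, 4]
```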
Reference on numpy array slicing [here](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html), if you are interested. |
Logarithmic returns in pandas dataframe | 31,287,552 | 3 | 2015-07-08T08:38:51Z | 31,287,674 | 9 | 2015-07-08T08:44:29Z | [
"python",
"pandas"
] | Python pandas has a `pct_change` function which I use to calculate the returns for stock prices in a dataframe:
```
ndf['Return']= ndf['TypicalPrice'].pct_change()
```
I am using the following code to get logarithmic returns, but it gives exactly the same values as the `pct_change()` function:
```
ndf['retlog']=np.log(ndf['TypicalPrice'].astype('float64')/ndf['TypicalPrice'].astype('float64').shift(1))
#np is for numpy
``` | Here is one way to calculate log return using `.shift()`. And the result is similar to but not the same as the gross return calculated by `pct_change()`. Can you upload a copy of your sample data (dropbox share link) to reproduce the inconsistency you saw?
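(Side note: a quick sanity check of how the two measures relate, with plain `math` and no pandas — the log return equals `log(1 + simple_return)`, so the two are close for small moves but never identical.)

```python
import math

p0, p1 = 100.0, 101.0
simple = p1 / p0 - 1          # pct_change-style simple return
log_ret = math.log(p1 / p0)   # log return
assert math.isclose(log_ret, math.log(1 + simple))
print(round(simple, 6), round(log_ret, 6))  # 0.01 0.00995
```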
```
import pandas as pd
import numpy as np
np.random.seed(0)
df = pd.DataFrame(100 + np.random.randn(100).cumsum(), columns=['price'])
df['pct_change'] = df.price.pct_change()
df['log_ret'] = np.log(df.price) - np.log(df.price.shift(1))
Out[56]:
price pct_change log_ret
0 101.7641 NaN NaN
1 102.1642 0.0039 0.0039
2 103.1429 0.0096 0.0095
3 105.3838 0.0217 0.0215
4 107.2514 0.0177 0.0176
5 106.2741 -0.0091 -0.0092
6 107.2242 0.0089 0.0089
7 107.0729 -0.0014 -0.0014
.. ... ... ...
92 101.6160 0.0021 0.0021
93 102.5926 0.0096 0.0096
94 102.9490 0.0035 0.0035
95 103.6555 0.0069 0.0068
96 103.6660 0.0001 0.0001
97 105.4519 0.0172 0.0171
98 105.5788 0.0012 0.0012
99 105.9808 0.0038 0.0038
[100 rows x 3 columns]
``` |
How to append a value to list attribute on AWS DynamoDB? | 31,288,085 | 4 | 2015-07-08T09:03:18Z | 35,051,660 | 8 | 2016-01-28T02:08:15Z | [
"python",
"amazon-web-services",
"amazon-dynamodb",
"boto"
] | I'm using DynamoDB as a K-V store (there's not much data, so I think that's fine), and part of 'V' is a list (about 10 elements). In some sessions a new value has to be appended to it, and I cannot find a way to do this in one request. What I did is like this:
```
item = self.list_table.get_item(**{'k': 'some_key'})
item['v'].append('some_value')
item.partial_save()
```
I request the item from the server first and save it back after modifying the value. That's not atomic and looks ugly. Is there any way to do this in one request? | The following code should work with boto3:
```
table = get_dynamodb_resource().Table("table_name")
result = table.update_item(
Key={
'hash_key': hash_key,
'range_key': range_key
},
UpdateExpression="SET some_attr = list_append(some_attr, :i)",
ExpressionAttributeValues={
':i': [some_value],
},
ReturnValues="UPDATED_NEW"
)
if result['ResponseMetadata']['HTTPStatusCode'] == 200 and 'Attributes' in result:
return result['Attributes']['some_attr']
```
The get\_dynamodb\_resource method here is just:
```
def get_dynamodb_resource():
return boto3.resource(
'dynamodb',
region_name=os.environ['AWS_DYNAMO_REGION'],
endpoint_url=os.environ['AWS_DYNAMO_ENDPOINT'],
aws_secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY'],
aws_access_key_id=os.environ['AWS_ACCESS_KEY_ID'])
``` |