| title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
How to subclass list and trigger an event whenever the data change? | 39,189,893 | 12 | 2016-08-28T09:36:06Z | 39,190,103 | 7 | 2016-08-28T10:01:04Z | [
"python"
] | I would like to subclass `list` and trigger an event (data checking) every time any change happens to the data. Here is an example subclass:
```
class MyList(list):
def __init__(self, sequence):
super().__init__(sequence)
self._test()
def __setitem__(self, key, value):
super().__setitem__(key, value)
self._test()
def append(self, value):
super().append(value)
self._test()
def _test(self):
""" Some kind of check on the data. """
if not self == sorted(self):
raise ValueError("List not sorted.")
```
Here, I am overriding the methods `__init__`, `__setitem__` and `append` to perform the check when the data changes. I think this approach is undesirable, so my question is: Is there a possibility of triggering the data check automatically if *any* kind of mutation happens to the underlying data structure? | As you say, this is not the best way to go about it. To implement this correctly, you'd need to know about every method that can change the list.
The way to go is to implement your own list (or rather a mutable sequence). The best way to do this is to use the abstract base classes from Python which you find in the [`collections.abc`](https://docs.python.org/library/collections.abc.html) module. You have to implement only a minimal set of methods, and the abstract base class supplies the rest for you.
For your specific example, this would be something like this:
```
from collections.abc import MutableSequence
class MyList(MutableSequence):
def __init__(self, iterable=()):
self._list = list(iterable)
def __getitem__(self, key):
return self._list.__getitem__(key)
def __setitem__(self, key, item):
self._list.__setitem__(key, item)
# trigger change handler
def __delitem__(self, key):
self._list.__delitem__(key)
# trigger change handler
def __len__(self):
return self._list.__len__()
def insert(self, index, item):
self._list.insert(index, item)
# trigger change handler
```
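To connect this back to the original goal, here is a sketch that fills the change-handler slots with the question's `_test` sorted-check (an assumption carried over from the question; note the check runs after each mutation and does not roll it back):

```python
from collections.abc import MutableSequence

class MyList(MutableSequence):
    def __init__(self, iterable=()):
        self._list = list(iterable)
        self._test()

    def __getitem__(self, key):
        return self._list[key]

    def __setitem__(self, key, item):
        self._list[key] = item
        self._test()

    def __delitem__(self, key):
        del self._list[key]
        self._test()

    def __len__(self):
        return len(self._list)

    def insert(self, index, item):
        self._list.insert(index, item)
        self._test()

    def _test(self):
        # The check from the question: the data must stay sorted
        if self._list != sorted(self._list):
            raise ValueError("List not sorted.")

ml = MyList([1, 2, 3])
ml.append(4)          # append comes free from MutableSequence (built on insert)
print(list(ml))       # [1, 2, 3, 4]

try:
    ml.append(0)      # violates the sorted invariant
except ValueError as e:
    print(e)          # List not sorted.
```

Because `MutableSequence` builds `append`, `extend`, `remove`, `pop` and `+=` on top of the five methods above, every mutation path now triggers the check; a production version might undo the offending change before raising.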
### Performance
Some methods are slow in their default implementation. For example `__contains__` is defined in the [`Sequence` class](https://hg.python.org/cpython/file/tip/Lib/_collections_abc.py#l806) as follows:
```
def __contains__(self, value):
for v in self:
if v is value or v == value:
return True
return False
```
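For instance, a class that maintains a sorted invariant (as in the question) could test membership in O(log n) with `bisect`. A sketch as a standalone function, assuming the list really is kept sorted:

```python
from bisect import bisect_left

def sorted_contains(sorted_list, value):
    """O(log n) membership test; only valid while the list stays sorted."""
    i = bisect_left(sorted_list, value)
    return i < len(sorted_list) and sorted_list[i] == value

print(sorted_contains([1, 3, 5, 7], 5))  # True
print(sorted_contains([1, 3, 5, 7], 4))  # False
```

In the `MyList` class above, this logic would become a `__contains__(self, value)` method operating on `self._list`.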
Depending on your class, you might be able to implement this faster. However, performance is often less important than writing code which is easy to understand. It can also make writing a class harder, because you're then responsible for implementing the methods correctly. |
How to add multiple values to a key in a Python dictionary | 39,197,261 | 7 | 2016-08-29T01:07:33Z | 39,197,284 | 22 | 2016-08-29T01:13:43Z | [
"python",
"dictionary"
] | I am trying to create a dictionary from the values in the `name_num` dictionary where the length of the list is the new key and the `name_num` dictionary key and value are the value. So:
```
name_num = {"Bill": [1,2,3,4], "Bob":[3,4,2], "Mary": [5, 1], "Jim":[6,17,4], "Kim": [21,54,35]}
```
I want to create the following dictionary:
```
new_dict = {4:{"Bill": [1,2,3,4]}, 3:{"Bob":[3,4,2], "Jim":[6,17,4], "Kim": [21,54,35]}, 2:{"Mary": [5, 1]}}
```
I've tried many variations, but this code gets me the closest:
```
for mykey in name_num:
new_dict[len(name_num[mykey])] = {mykey: name_num[mykey]}
```
Output:
```
new_dict = {4:{"Bill": [1,2,3,4]}, 3:{"Jim":[6,17,4]}, 2:{"Mary": [5, 1]}}
```
I know I need to loop through the code somehow so I can add the other values to key 3. | This is a good use case for [`defaultdict`](https://docs.python.org/3/library/collections.html):
```
from collections import defaultdict
name_num = {
'Bill': [1, 2, 3, 4],
'Bob': [3, 4, 2],
'Mary': [5, 1],
'Jim': [6, 17, 4],
'Kim': [21, 54, 35],
}
new_dict = defaultdict(dict)
for name, nums in name_num.items():
new_dict[len(nums)][name] = nums
print(dict(new_dict))
```
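For comparison, the same grouping can be written with plain `dict.setdefault`, producing an identical result - a sketch:

```python
name_num = {
    'Bill': [1, 2, 3, 4],
    'Bob': [3, 4, 2],
    'Mary': [5, 1],
    'Jim': [6, 17, 4],
    'Kim': [21, 54, 35],
}

new_dict = {}
for name, nums in name_num.items():
    # setdefault inserts an empty dict for a new length key, then returns it
    new_dict.setdefault(len(nums), {})[name] = nums

print(new_dict)
```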
**Output**:
```
{
2: {'Mary': [5, 1]},
3: {'Bob': [3, 4, 2], 'Jim': [6, 17, 4], 'Kim': [21, 54, 35]},
4: {'Bill': [1, 2, 3, 4]}
}
``` |
Why does list(next(iter(())) for _ in range(1)) == []? | 39,214,961 | 17 | 2016-08-29T20:49:16Z | 39,215,240 | 9 | 2016-08-29T21:07:37Z | [
"python"
] | Why does `list(next(iter(())) for _ in range(1))` return an empty list rather than raising `StopIteration`?
```
>>> next(iter(()))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
>>> [next(iter(())) for _ in range(1)]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
>>> list(next(iter(())) for _ in range(1)) # ?!
[]
```
The same thing happens with a custom function that explicitly raises `StopIteration`:
```
>>> def x():
... raise StopIteration
...
>>> x()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in x
StopIteration
>>> [x() for _ in range(1)]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in x
StopIteration
>>> list(x() for _ in range(1)) # ?!
[]
``` | Assuming all goes well, the generator expression `x() for _ in range(1)` should raise `StopIteration` when it has finished iterating over `range(1)`, to indicate that there are no more items to pack into the list.
However, because `x()` itself raises `StopIteration`, the generator ends up exiting early. This behaviour is a design flaw in Python that is being addressed by [PEP 479](http://legacy.python.org/dev/peps/pep-0479/).
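On a current interpreter (Python 3.7 or later, where the PEP 479 behaviour is the default), the conversion can be observed directly - a minimal sketch:

```python
# Requires Python 3.7+, where PEP 479 behaviour is the default.
def x():
    raise StopIteration

try:
    list(x() for _ in range(1))
except RuntimeError as e:
    print(e)  # generator raised StopIteration
```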
From Python 3.7 (or with `from __future__ import generator_stop` in Python 3.5/3.6), a `StopIteration` that propagates out of a generator is converted into a `RuntimeError`, so that `list` doesn't mistake it for the end of the generator. When this is in effect, the error looks like this:
```
Traceback (most recent call last):
File "/Users/Tadhg/Documents/codes/test.py", line 6, in <genexpr>
stuff = list(x() for _ in range(1))
File "/Users/Tadhg/Documents/codes/test.py", line 4, in x
raise StopIteration
StopIteration
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/Tadhg/Documents/codes/test.py", line 6, in <module>
stuff = list(x() for _ in range(1))
RuntimeError: generator raised StopIteration
``` |
Understanding the behavior of function descriptors | 39,228,722 | 8 | 2016-08-30T13:22:30Z | 39,228,774 | 8 | 2016-08-30T13:24:47Z | [
"python",
"python-2.7",
"function",
"python-3.x"
] | I was reading [a presentation](http://www.aleax.it/Python/nylug05_om.pdf) on Python's Object model when, in one slide (number `9`), the author asserts that Pythons' functions are descriptors. The example he presents to illustrate is similar to this one I wrote:
```
def mul(x, y):
return x * y
mul2 = mul.__get__(2)
mul2(3) # 6
```
Now, I understand that the point is made, since the function defines a `__get__` it is a descriptor.
What I don't understand is how exactly the call results in the output provided. | That's Python doing what it does in order to support dynamically adding functions to classes.
When `__get__` is invoked on a function object, which is usually done via dot access on an instance of a class, Python will transform the function to a method and implicitly pass the instance (usually recognized as `self`) as the first argument.
In your case, you explicitly call `__get__` and pass the 'instance' `2`, which is bound to the first parameter `x` of `mul`:
```
>>> mul2
<bound method mul of 2>
```
This results in a method expecting one argument; calling it returns `2` (the value bound to `x`) multiplied by whatever you supply as the argument `y`.
---
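The same mechanism is what makes a dynamically attached function become a method, with no wrapping required - a small sketch (the `Widget`/`describe` names are hypothetical):

```python
def describe(self):
    # A plain function; nothing method-specific about it yet
    return "instance of " + type(self).__name__

class Widget:
    pass

Widget.describe = describe    # dynamically add the function to the class

w = Widget()
print(w.describe())           # attribute access calls describe.__get__(w, Widget)
print(describe.__get__(w)())  # the same binding done by hand
```

Both calls print `instance of Widget`: dot access on the instance invokes the function's `__get__`, which binds `w` to `self`.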
In addition, a Python implementation of `__get__` for functions is provided in the [`Descriptor HOWTO`](https://docs.python.org/3/howto/descriptor.html#functions-and-methods) document of the Python Docs. Here you can see the transformation, with the usage of [`types.MethodType`](https://docs.python.org/3/library/types.html#types.MethodType), that takes place when `__get__` is invoked :
```
class Function(object):
. . .
def __get__(self, obj, objtype=None):
"Simulate func_descr_get() in Objects/funcobject.c"
return types.MethodType(self, obj)  # Python 3 signature: MethodType(func, instance)
```
And the source code for the intrigued visitor is located in `Objects/funcobject.c`.
As you can see, if this descriptor did not exist you'd have to manually wrap functions in `types.MethodType` any time you wanted to dynamically add a function to a class, which would be an unnecessary hassle. |
creating admin restricted urls | 39,231,178 | 2 | 2016-08-30T15:10:15Z | 39,231,312 | 7 | 2016-08-30T15:15:59Z | [
"python",
"django",
"django-1.9"
] | So in my urls.py (outside the default Django admin section) I want to restrict some URLs to admins only. If I have this for logged-in users:
```
from django.contrib.auth.decorators import login_required
urlpatterns = [
url(r'^a1$',login_required( views.admin_area1 ), name='a1'),
url(r'^a2$', login_required(views.admin_area2) , name='a2'),
url(r'^a3', login_required(views.admin_area3) , name='a3'),
]
```
Is there any way to restrict these links to logged-in admins, not just any logged-in user?
There is: [according to this](http://stackoverflow.com/a/12003808/590589) I can use `user_passes_test`, but I would have to use it in the view. | You can use the decorator returned by `user_passes_test(lambda u: u.is_superuser)` in the same way that you use `login_required`:
```
from django.contrib.auth.decorators import user_passes_test

urlpatterns = [
url(r'^a1$', user_passes_test(lambda u: u.is_superuser)(views.admin_area1), name='a1'),
]
```
If you want to restrict access to admins, then it might be more accurate to use the [`staff_member_required`](https://docs.djangoproject.com/en/1.10/ref/contrib/admin/#the-staff-member-required-decorator) decorator (which checks the [`is_staff`](https://docs.djangoproject.com/en/1.10/ref/contrib/auth/#django.contrib.auth.models.User.is_staff) flag) instead of checking the [`is_superuser`](https://docs.djangoproject.com/en/1.10/ref/contrib/auth/#django.contrib.auth.models.User.is_superuser) flag.
```
from django.contrib.admin.views.decorators import staff_member_required
urlpatterns = [
url(r'^a1$', staff_member_required(views.admin_area1), name='a1'),
...
]
``` |
Matlab to Python numpy indexing and multiplication issue | 39,234,553 | 3 | 2016-08-30T18:22:03Z | 39,234,756 | 8 | 2016-08-30T18:34:13Z | [
"python",
"matlab",
"numpy"
] | I have the following line of code in MATLAB which I am trying to convert to Python `numpy`:
```
pred = traindata(:,2:257)*beta;
```
In Python, I have:
```
pred = traindata[ : , 1:257]*beta
```
`beta` is a 256 x 1 array.
In MATLAB,
```
size(pred) = 1389 x 1
```
But in Python,
```
pred.shape = (1389L, 256L)
```
So, I found out that multiplying by the `beta` array is producing the difference between the two arrays.
How do I write the original Python line, so that the size of `pred` is 1389 x 1, like it is in MATLAB when I multiply by my beta array? | I suspect that `beta` is in fact a 1D `numpy` array. In `numpy`, 1D arrays are not row or column vectors where MATLAB clearly makes this distinction. These are simply 1D arrays agnostic of any shape. If you must, you need to manually introduce a new singleton dimension to the `beta` vector to facilitate the multiplication. On top of this, the `*` operator actually performs **element-wise** multiplication. To perform matrix-vector or matrix-matrix multiplication, you must use `numpy`'s [`dot`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) function to do so.
Therefore, you must do something like this:
```
import numpy as np # Just in case
pred = np.dot(traindata[:, 1:257], beta[:,None])
```
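To see why, compare the shapes - a quick sketch using random data of the sizes mentioned in the question:

```python
import numpy as np

traindata = np.random.rand(1389, 257)   # stand-in for the question's data
beta = np.random.rand(256)              # 1D array: shape (256,), not (256, 1)

elementwise = traindata[:, 1:257] * beta             # broadcasting -> (1389, 256)
matvec = np.dot(traindata[:, 1:257], beta[:, None])  # matrix product -> (1389, 1)

print(elementwise.shape)  # (1389, 256)
print(matvec.shape)       # (1389, 1)
```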
`beta[:,None]` will create a 2D `numpy` array where the elements from the 1D array are populated along the rows, effectively making a column vector (i.e. 256 x 1). However, if you have already done this on `beta`, then you don't need to introduce the new singleton dimension. Just use `dot` normally:
```
pred = np.dot(traindata[:, 1:257], beta)
``` |
Python: simple way to increment by alternating values? | 39,241,505 | 4 | 2016-08-31T05:36:54Z | 39,241,588 | 12 | 2016-08-31T05:44:10Z | [
"python",
"increment"
] | I am building a list of integers that should increment by 2 alternating values.
For example, starting at 0 and alternating between 4 and 2 up to 20 would make:
```
[0,4,6,10,12,16,18]
```
range and xrange only accept a single integer for the increment value. What's the simplest way to do this? | I might use a simple `itertools.cycle` to cycle through the steps:
```
from itertools import cycle
def fancy_range(start, stop, steps=(1,)):
steps = cycle(steps)
val = start
while val < stop:
yield val
val += next(steps)
```
You'd call it like so:
```
>>> list(fancy_range(0, 20, (4, 2)))
[0, 4, 6, 10, 12, 16, 18]
```
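If a one-liner is preferred, the same list can be built from `itertools` primitives - a sketch that assumes the steps are positive (so the running sum is strictly increasing and `takewhile` is safe):

```python
from itertools import accumulate, chain, cycle, takewhile

# Running sums of 0, 4, 2, 4, 2, ... -> 0, 4, 6, 10, 12, 16, 18, 22, ...
sums = accumulate(chain([0], cycle((4, 2))))
result = list(takewhile(lambda v: v < 20, sums))
print(result)  # [0, 4, 6, 10, 12, 16, 18]
```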
The advantage of the generator approach is that it scales to an arbitrary number of steps quite nicely (though I can't really think of a good use for that at the moment -- but perhaps you can). |
When am I supposed to use del in python? | 39,255,371 | 3 | 2016-08-31T17:12:54Z | 39,255,472 | 8 | 2016-08-31T17:19:12Z | [
"python",
"memory-management"
] | So I am curious lets say I have a class as follows
```
class myClass:
def __init__(self):
parts = 1
to = 2
a = 3
whole = 4
self.contents = [parts,to,a,whole]
```
Is there any benifit of adding lines
```
del parts
del to
del a
del whole
```
inside the constructor, or will the memory for these variables be managed by the scope? | Never, unless you are very tight on memory and doing something very bulky. If you are writing a usual program, the garbage collector should take care of everything.
If you are writing something bulky, you should know that `del` does not delete the object; it just removes the name's reference to it. I.e. the variable no longer refers to the place in memory where the object's data is stored. The object is then reclaimed automatically once nothing references it (in CPython this usually happens immediately through reference counting).
There is also a way to force the garbage collector to run - `gc.collect()` - which is mainly useful for reclaiming objects that are kept alive by reference cycles. For example:
```
import gc
a = [i for i in range(1, 10 ** 9)]
...
del a
# In CPython the huge list is freed right here: its refcount drops to zero
gc.collect()
# collect() itself is only needed for objects kept alive by reference cycles
```
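Reference counting can be observed directly with `sys.getrefcount` - a CPython-specific sketch:

```python
import sys

a = [0] * 1000
b = a                      # second reference to the same list
print(sys.getrefcount(a))  # typically 3 in CPython: a, b, and the call's argument

del a                      # removes the name a; the list itself survives
print(len(b))              # 1000 -- still fully alive through b
```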
---
**Update**: really good note in comments. Watch for other references to the object in memory. For example:
```
import gc
a = [i for i in range(1, 10 ** 9)]
b = a
...
del a
gc.collect()
```
After execution of this block, the large array is still reachable through `b` and will not be cleaned. |
Build 2 lists in one go while reading from file, pythonically | 39,268,792 | 11 | 2016-09-01T10:14:47Z | 39,269,024 | 11 | 2016-09-01T10:25:31Z | [
"python",
"list",
"python-3.x"
] | I'm reading a big file with hundreds of thousands of number pairs representing the edges of a graph. I want to build 2 lists as I go: one with the forward edges and one with the reversed.
Currently I'm doing an explicit `for` loop, because I need to do some pre-processing on the lines I read. However, I'm wondering if there is a more pythonic approach to building those lists, like list comprehensions, etc.
But, as I have 2 lists, I don't see a way to populate them using comprehensions without reading the file twice.
My code right now is:
```
with open('SCC.txt') as data:
for line in data:
line = line.rstrip()
if line:
edge_list.append((int(line.rstrip().split()[0]), int(line.rstrip().split()[1])))
reversed_edge_list.append((int(line.rstrip().split()[1]), int(line.rstrip().split()[0])))
``` | I would keep your loop, as it is the *Pythonic* approach; just don't *split*/*rstrip* the same line multiple times:
```
with open('SCC.txt') as data:
for line in data:
spl = line.split()
if spl:
i, j = map(int, spl)
edge_list.append((i, j))
reversed_edge_list.append((j, i))
```
Calling *rstrip* on a line that has already been stripped is redundant, and it's unnecessary anyway because *split* with no arguments already discards the whitespace; splitting each line just once saves a lot of needless work.
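If comprehensions are still preferred, both lists can come from a single pass plus a cheap reversal - a sketch using `io.StringIO` as a stand-in for the open file (any iterable of lines behaves the same):

```python
import io

# Stand-in for the open file; blank lines are skipped just like in the loop
data = io.StringIO("1 2\n\n3 4\n5 6\n")

pairs = [tuple(map(int, line.split())) for line in data if line.strip()]
edge_list = pairs
reversed_edge_list = [(j, i) for i, j in pairs]

print(edge_list)           # [(1, 2), (3, 4), (5, 6)]
print(reversed_edge_list)  # [(2, 1), (4, 3), (6, 5)]
```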
You can also use *csv.reader* to read the data and filter out empty rows, provided the fields are delimited by a single space:
```
from csv import reader
with open('SCC.txt') as data:
edge_list, reversed_edge_list = [], []
for i, j in filter(None, reader(data, delimiter=" ")):
i, j = int(i), int(j)
edge_list.append((i, j))
reversed_edge_list.append((j, i))
```
Or if there are multiple whitespaces delimiting you can use `map(str.split, data)`:
```
for i, j in filter(None, map(str.split, data)):
i, j = int(i), int(j)
```
Whatever you choose will be faster than going over the data twice or splitting the same lines multiple times. |
Python: how to get rid of spaces in str(dict)? | 39,268,928 | 3 | 2016-09-01T10:20:57Z | 39,269,016 | 8 | 2016-09-01T10:25:11Z | [
"python"
] | For example, if you use str() on a dict, you get:
```
>>> str({'a': 1, 'b': 'as df'})
"{'a': 1, 'b': 'as df'}"
```
However, I want the string to be like:
```
"{'a':1,'b':'as df'}"
```
How can I accomplish this? | You could build the compact string representation yourself:
```
In [9]: '{' + ','.join('{0!r}:{1!r}'.format(*x) for x in dct.items()) + '}'
Out[9]: "{'b':'as df','a':1}"
```
It will leave extra spaces inside string representations of nested `list`s, `dict`s etc.
A much better idea is to use the [`json.dumps`](https://docs.python.org/3/library/json.html#json.dumps) function with appropriate separators:
```
In [15]: import json
In [16]: json.dumps(dct, separators=(',', ':'))
Out[16]: '{"b":"as df","a":1}'
```
This will work correctly regardless of the inner structure of `dct`, as long as its contents are JSON-serializable (note that all quotes become double quotes and non-string keys are coerced to strings). |
Sort by certain order (Situation: pandas DataFrame Groupby) | 39,275,294 | 4 | 2016-09-01T15:14:26Z | 39,275,799 | 8 | 2016-09-01T15:40:33Z | [
"python",
"sorting",
"pandas"
] | I want to change the day of order presented by below code.
What I want is a result with the order (Mon, Tue, Wed, Thu, Fri, Sat, Sun)
- should I say, sort by key in certain predefined order?
---
Here is my code which needs some tweak:
```
f8 = df_toy_indoor2.groupby(['device_id', 'day'])['dwell_time'].sum()
print(f8)
```
Current result:
```
device_id day
device_112 Thu 436518
Wed 636451
Fri 770307
Tue 792066
Mon 826862
Sat 953503
Sun 1019298
device_223 Mon 2534895
Thu 2857429
Tue 3303173
Fri 3548178
Wed 3822616
Sun 4213633
Sat 4475221
```
Desired result:
```
device_id day
device_112 Mon 826862
Tue 792066
Wed 636451
Thu 436518
Fri 770307
Sat 953503
Sun 1019298
device_223 Mon 2534895
Tue 3303173
Wed 3822616
Thu 2857429
Fri 3548178
Sat 4475221
Sun 4213633
```
---
Here, `type(df_toy_indoor2.groupby(['device_id', 'day'])['dwell_time'])` is a class 'pandas.core.groupby.SeriesGroupBy'.
I have found `.sort_values()`, but it sorts by the values themselves.
I want some pointers on how to impose a custom key order that I can use in further data manipulation.
Thanks in advance. | Took me some time, but I found the solution. [reindex](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.reindex.html#pandas.Series.reindex) does what you want. See my code example:
```
import pandas as pd

a = [1, 2] * 2 + [2, 1] * 3 + [1, 2]
b = ['Mon', 'Wed', 'Thu', 'Fri'] * 3
c = list(range(12))
df = pd.DataFrame(data=[a,b,c]).T
df.columns = ['device', 'day', 'value']
df = df.groupby(['device', 'day']).sum()
```
gives:
```
value
device day
1 Fri 7
Mon 0
Thu 12
Wed 14
2 Fri 14
Mon 12
Thu 6
Wed 1
```
Then doing reindex:
```
df.reindex(['Mon', 'Wed', 'Thu', 'Fri'], level='day')
```
or more conveniently (credits to burhan)
```
import calendar

df.reindex(list(calendar.day_abbr), level='day')
```
gives:
```
value
device day
1 Mon 0
Wed 14
Thu 12
Fri 7
2 Mon 12
Wed 1
Thu 6
Fri 14
``` |
How to parse an HTML table with rowspans in Python? | 39,278,376 | 23 | 2016-09-01T18:16:45Z | 39,336,433 | 12 | 2016-09-05T19:11:14Z | [
"python",
"html",
"python-3.x",
"beautifulsoup",
"html-table"
] | **The problem**
I'm trying to parse an HTML table with rowspans in it, as in, I'm trying to parse my college schedule.
I'm running into the problem that when a cell in one row has a rowspan, the next row is missing a `td` in the column that the rowspan now covers.
I have no clue how to account for this and I hope to be able to parse this schedule.
**What I tried**
Pretty much everything I can think of.
**The result I get**
```
[
{
'blok_eind': 4,
'blok_start': 3,
'dag': 4, # Should be 5
'leraar': 'DOODF000',
'lokaal': 'ALK C212',
'vak': 'PROJ-T',
},
]
```
As you can see, there's a `vak` key with the value `PROJ-T` in the output snippet above, `dag` is `4` while it's supposed to be `5` (a.k.a Friday/Vrijdag), as seen here:

**The result I want**
A Python dict() that looks like the one posted above, but with the right value
Where:
* `day`/`dag` is an int from 1~5 representing Monday~Friday
* `block_start`/`blok_start` is an int that represents when the course starts (Time block, left side of table)
* `block_end`/`blok_eind` is an int that represent in what block the course ends
* `classroom`/`lokaal` is the classroom's code the course is in
* `teacher`/`leraar` is the teacher's ID
* `course`/`vak` is the ID of the course
**Basic HTML Structure for above data**
```
<center>
<table>
<tr>
<td>
<table>
<tbody>
<tr>
<td>
<font>
TEACHER-ID
</font>
</td>
<td>
<font>
<b>
CLASSROOM ID
</b>
</font>
</td>
</tr>
<tr>
<td>
<font>
COURSE ID
</font>
</td>
</tr>
</tbody>
</table>
</td>
</tr>
</table>
</center>
```
**The code**
*HTML*
```
<CENTER><font size="3" face="Arial" color="#000000">
<BR></font>
<font size="6" face="Arial" color="#0000FF">
16AO4EIO1B
</font> <font size="4" face="Arial">
IO1B
</font>
<BR>
<TABLE border="3" rules="all" cellpadding="1" cellspacing="1">
<TR>
<TD align="center">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial" color="#000000">
Maandag 29-08
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
Dinsdag 30-08
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
Woensdag 31-08
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
Donderdag 01-09
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
Vrijdag 02-09
</font> </TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>1</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
8:30
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
9:20
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
BLEEJ002
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK B021</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
WEBD
</font> </TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>2</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
9:20
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
10:10
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
BLEEJ002
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK B021B</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
WEBD
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>3</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
10:25
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
11:15
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
DOODF000
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK C212</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
PROJ-T
</font> </TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>4</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
11:15
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
12:05
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
BLEEJ002
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK B021B</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
MENT
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>5</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
12:05
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
12:55
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>6</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
12:55
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
13:45
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
JONGJ003
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK B008</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
BURG
</font> </TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>7</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
13:45
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
14:35
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
FLUIP000
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK B004</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
ICT algemeen Prakti
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>8</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
14:50
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
15:40
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
KOOLE000
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK B008</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
NED
</font> </TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>9</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
15:40
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
16:30
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>10</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
16:30
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
17:20
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
</TABLE>
<TABLE cellspacing="1" cellpadding="1">
<TR>
<TD valign=bottom> <font size="4" face="Arial" color="#0000FF"></TR></TABLE><font size="3" face="Arial">
Periode1 29-08-2016 (35) - 04-09-2016 (35) G r u b e r & P e t t e r s S o f t w a r e
</font></CENTER>
```
*Python*
```
from pprint import pprint
from bs4 import BeautifulSoup
import requests
r = requests.get("http://rooster.horizoncollege.nl/rstr/ECO/AMR/400-ECO/Roosters/36"
"/c/c00025.htm")
daytable = {
1: "Maandag",
2: "Dinsdag",
3: "Woensdag",
4: "Donderdag",
5: "Vrijdag"
}
timetable = {
1: ("8:30", "9:20"),
2: ("9:20", "10:10"),
3: ("10:25", "11:15"),
4: ("11:15", "12:05"),
5: ("12:05", "12:55"),
6: ("12:55", "13:45"),
7: ("13:45", "14:35"),
8: ("14:50", "15:40"),
9: ("15:40", "16:30"),
10: ("16:30", "17:20"),
}
page = BeautifulSoup(r.content, "lxml")
roster = []
big_rows = 2
last_row_big = False
# There are 10 blocks, each made up out of 2 TR's, run through them
for block_count in range(2, 22, 2):
# There are 5 days, first column is not data we want
for day in range(2, 7):
dayroster = {
"dag": 0,
"blok_start": 0,
"blok_eind": 0,
"lokaal": "",
"leraar": "",
"vak": ""
}
# This selector provides the classroom
table_bold = page.select(
"html > body > center > table > tr:nth-of-type(" + str(block_count) + ") > td:nth-of-type(" + str(
day) + ") > table > tr > td > font > b")
# This selector provides the teacher's code and the course ID
table = page.select(
"html > body > center > table > tr:nth-of-type(" + str(block_count) + ") > td:nth-of-type(" + str(
day) + ") > table > tr > td > font")
# This gets the rowspan on the current row and column
rowspan = page.select(
"html > body > center > table > tr:nth-of-type(" + str(block_count) + ") > td:nth-of-type(" + str(
day) + ")")
try:
if table or table_bold and rowspan[0].attrs.get("rowspan") == "4":
last_row_big = True
# Setting end of class
dayroster["blok_eind"] = (block_count // 2) + 1
else:
last_row_big = False
# Setting end of class
dayroster["blok_eind"] = (block_count // 2)
except IndexError:
pass
if table_bold:
x = table_bold[0]
# Classroom ID
dayroster["lokaal"] = x.contents[0]
if table:
iter = 0
for x in table:
content = x.contents[0].lstrip("\r\n").rstrip("\r\n")
# Cell has data
if content != "":
# Set start of class
dayroster["blok_start"] = block_count // 2
# Set day of class
dayroster["dag"] = day - 1
if iter == 0:
# Teacher ID
dayroster["leraar"] = content
elif iter == 1:
# Course ID
dayroster["vak"] = content
iter += 1
if table or table_bold:
# Store the data
roster.append(dayroster)
# Remove duplicates
seen = set()
new_l = []
for d in roster:
t = tuple(d.items())
if t not in seen:
seen.add(t)
new_l.append(d)
pprint(new_l)
``` | You'll have to track the rowspans on previous rows, one per column.
You could do this simply by copying the integer value of a rowspan into a dictionary, and subsequent rows decrement the rowspan value until it drops to `1` (or we could store the integer value minus 1 and drop to `0` for ease of coding). Then you can adjust subsequent table counts based on preceding rowspans.
Your table complicates this a little by using a default span of size 2, incrementing in steps of two, but that can easily be brought back to manageable numbers by dividing by 2.
Rather than use massive CSS selectors, select just the table rows and we'll iterate over those:
```
roster = []
rowspans = {} # track rowspanning cells
# every second row in the table
rows = page.select('html > body > center > table > tr')[1:21:2]
for block, row in enumerate(rows, 1):
# take direct child td cells, but skip the first cell:
daycells = row.select('> td')[1:]
rowspan_offset = 0
for daynum, daycell in enumerate(daycells, 1):
# rowspan handling; if there is a rowspan here, adjust to find correct position
daynum += rowspan_offset
while rowspans.get(daynum, 0):
rowspan_offset += 1
rowspans[daynum] -= 1
daynum += 1
# now we have a correct day number for this cell, adjusted for
# rowspanning cells.
# update the rowspan accounting for this cell
rowspan = (int(daycell.get('rowspan', 2)) // 2) - 1
if rowspan:
rowspans[daynum] = rowspan
texts = daycell.select("table > tr > td > font")
if texts:
# class info found
teacher, classroom, course = (c.get_text(strip=True) for c in texts)
roster.append({
'blok_start': block,
'blok_eind': block + rowspan,
'dag': daynum,
'leraar': teacher,
'lokaal': classroom,
'vak': course
})
# days that were skipped at the end due to a rowspan
while daynum < 5:
daynum += 1
if rowspans.get(daynum, 0):
rowspans[daynum] -= 1
```
This produces correct output:
```
[{'blok_eind': 2,
'blok_start': 1,
'dag': 5,
'leraar': u'BLEEJ002',
'lokaal': u'ALK B021',
'vak': u'WEBD'},
{'blok_eind': 3,
'blok_start': 2,
'dag': 3,
'leraar': u'BLEEJ002',
'lokaal': u'ALK B021B',
'vak': u'WEBD'},
{'blok_eind': 4,
'blok_start': 3,
'dag': 5,
'leraar': u'DOODF000',
'lokaal': u'ALK C212',
'vak': u'PROJ-T'},
{'blok_eind': 5,
'blok_start': 4,
'dag': 3,
'leraar': u'BLEEJ002',
'lokaal': u'ALK B021B',
'vak': u'MENT'},
{'blok_eind': 7,
'blok_start': 6,
'dag': 5,
'leraar': u'JONGJ003',
'lokaal': u'ALK B008',
'vak': u'BURG'},
{'blok_eind': 8,
'blok_start': 7,
'dag': 3,
'leraar': u'FLUIP000',
'lokaal': u'ALK B004',
'vak': u'ICT algemeen Prakti'},
{'blok_eind': 9,
'blok_start': 8,
'dag': 5,
'leraar': u'KOOLE000',
'lokaal': u'ALK B008',
'vak': u'NED'}]
```
Moreover, this code will continue to work even if courses span *more than 2 blocks*, or just one block; any rowspan size is supported. |
Add an arbitrary element to an xrange()? | 39,278,424 | 2 | 2016-09-01T18:19:23Z | 39,278,445 | 7 | 2016-09-01T18:21:35Z | [
"python",
"generator",
"xrange"
] | In Python, it's more memory-efficient to use `xrange()` instead of `range` when iterating.
The trouble I'm having is that I want to iterate over a large list -- such that I need to use `xrange()` and after that I want to check an arbitrary element.
With `range()`, it's easy: `x = range(...) + [arbitrary element]`.
But with `xrange()`, there doesn't seem to be a cleaner solution than this:
```
for i in xrange(...):
if foo(i):
...
if foo(arbitrary element):
...
```
Any suggestions for cleaner solutions? Is there a way to "append" an arbitrary element to a generator? | I would recommend keeping the `arbitrary_element` check out of the loop, but if you want to make it part of the loop, you can use [`itertools.chain`](https://docs.python.org/2/library/itertools.html#itertools.chain):
```
for i in itertools.chain(xrange(...), [arbitrary_element]):
...
``` |
Add an arbitrary element to an xrange()? | 39,278,424 | 2 | 2016-09-01T18:19:23Z | 39,278,459 | 8 | 2016-09-01T18:22:12Z | [
"python",
"generator",
"xrange"
] | In Python, it's more memory-efficient to use `xrange()` instead of `range` when iterating.
The trouble I'm having is that I want to iterate over a large list -- such that I need to use `xrange()` and after that I want to check an arbitrary element.
With `range()`, it's easy: `x = range(...) + [arbitrary element]`.
But with `xrange()`, there doesn't seem to be a cleaner solution than this:
```
for i in xrange(...):
if foo(i):
...
if foo(arbitrary element):
...
```
Any suggestions for cleaner solutions? Is there a way to "append" an arbitrary element to a generator? | [`itertools.chain`](https://docs.python.org/3/library/itertools.html#itertools.chain) lets you make a combined iterator from multiple iterables without concatenating them (so no expensive temporaries):
```
from itertools import chain
# Must wrap arbitrary element in one-element tuple (or list)
for i in chain(xrange(...), (arbitrary_element,)):
if foo(i):
...
``` |
Why is this Haskell code so slow? | 39,283,047 | 6 | 2016-09-02T01:23:37Z | 39,283,994 | 7 | 2016-09-02T03:44:02Z | [
"python",
"haskell",
"optimization",
"language-comparisons"
] | I'm kind of new to Haskell and tried making a scrabble solver. It takes in the letters you currently have, finds all permutations of them and filters out those that are dictionary words. The code's pretty simple:
```
import Data.List
main = do
dict <- readFile "words"
letters <- getLine
let dictWords = words dict
let perms = permutations letters
print [x | x <- perms, x `elem` dictWords]
```
However it's incredibly slow, compared to a very similar implementation I have with Python. Is there something fundamental I'm doing wrong?
\*edit: Here's my Python code:
```
from itertools import permutations
letters = raw_input("please enter your letters (without spaces): ")
d = open('words')
dictionary = [line.rstrip('\n') for line in d.readlines()]
d.close()
perms = ["".join(p) for p in permutations(letters)]
validWords = []
for p in perms:
if p in dictionary: validWords.append(p)
for validWord in validWords:
print validWord
```
I didn't time them precisely, but roughly it feels like the Python implementation is about 2x as fast as the Haskell one. Perhaps I should't have said the Haskell code was "incredibly slow" in comparison, but since Haskell is statically typed I guess I just thought that it should've been much faster, and not slower than Python at all. | > I'm kind of new to Haskell and tried making a scrabble solver.
You can substantially improve things by using a better algorithm.
Instead of testing every permutation of the input letters, if you sort them first you can make only one dictionary lookup and get all of the possible words (anagrams) which may be formed from them (using all of them).
Here is code which creates that dictionary as a `Data.Map`. There is a start-up cost to creating the Map, but after the first query subsequent lookups are very fast.
```
import Data.List
import qualified Data.Map.Strict as Map
import Control.Monad
import System.IO
main = do
contents <- readFile "words"
let pairs = [ (sort w, [w]) | w <- words contents ]
dict = foldl' (\m (k,v) -> Map.insertWith (++) k v m) Map.empty pairs
-- dict = foldr (\(k,v) m -> Map.insertWith (++) k v m) Map.empty pairs
forever $ do
putStr "Enter letters: " >> hFlush stdout
letters <- getLine
case Map.lookup (sort letters) dict of
Nothing -> putStrLn "No words."
Just ws -> putStrLn $ "Words: " ++ show ws
```
Map creation time for a word file of 236K words (2.5 MB) is about 4-5 seconds. Better performance is likely possible by using ByteStrings or Text instead of Strings.
Some good letter combinations to try:
```
steer rat tuna lapse groan neat
```
Note: Using GHC 7.10.2 I found this code performed the best *without* compiling with -O2. |
Concatenation of Strings and lists | 39,295,972 | 2 | 2016-09-02T15:28:26Z | 39,296,030 | 8 | 2016-09-02T15:31:50Z | [
"python",
"python-2.7"
] | In the following python script, it converts the Celsius degree to Fahrenheit but I need to join two list with strings between and after them
```
Celsius = [39.2, 36.5, 37.3, 37.8]
fahrenheit = map(lambda x: (float(9)/5)*x + 32, Celsius)
print '\n'.join(str(i) for i in Celsius)+" in Celsius is "+''.join(str(i) for i in fahrenheit )+" in farenheit"
```
The outcome is this (not what I wanted):
```
39.2
36.5
37.3
37.8 in Celsius is 102.5697.799.14100.04 in farenheit
```
How can I achieve this:
```
39.2 in Celsius is equivalent to 102.56 in fahrenheit
36.5 in Celsius is equivalent to 97.7 in fahrenheit
37.3 in Celsius is equivalent to 99.14 in fahrenheit
37.8 in Celsius is equivalent to 100.04 in fahrenheit
```
**EDIT SORRY MY BAD**
Well, the original code I had was
```
def fahrenheit(T):
return ((float(9)/5)*T + 32)
def display(c,f):
print c, "in Celsius is equivalent to ",\
f, " in fahrenheit"
Celsius = [39.2, 36.5, 37.3, 37.8]
for c in Celsius:
display(c,fahrenheit(c))
```
But due to reasons I need it to be **within 3 lines** | It's probably easiest to do the formatting as you go:
```
Celsius = [39.2, 36.5, 37.3, 37.8]
def fahrenheit(c):
return (float(9)/5)*c + 32
template = '{} in Celsius is equivalent to {} in fahrenheit'
print '\n'.join(template.format(c, fahrenheit(c)) for c in Celsius)
```
**EDIT**
If you really want it under 3 lines, we can inline the `fahrenheit` function:
```
Celsius = [39.2, 36.5, 37.3, 37.8]
template = '{} in Celsius is equivalent to {} in fahrenheit'
print '\n'.join(template.format(c, (float(9)/5)*c + 32) for c in Celsius)
```
If you don't mind long lines, you could inline `template` as well and get it down to 2 lines...
However, there really isn't any good reason to do this as far as I can tell. There is no penalty for writing python code that takes up more lines. Indeed, there is generally a penalty in the other direction that you pay every time you try to understand a really long complex line of code :-) |
for loop to extract header for a dataframe in pandas | 39,300,691 | 4 | 2016-09-02T21:05:30Z | 39,300,749 | 9 | 2016-09-02T21:11:10Z | [
"python",
"pandas",
"for-loop"
] | I am a newbie in python. I have a data frame that looks like this:
```
A B C D E
0 1 0 1 0 1
1 0 1 0 0 1
2 0 1 1 1 0
3 1 0 0 1 0
4 1 0 0 1 1
```
How can I write a for loop to gather the column names for each row? I expect my result set to look like this:
```
A B C D E Result
0 1 0 1 0 1 ACE
1 0 1 0 0 1 BE
2 0 1 1 1 0 BCD
3 1 0 0 1 0 AD
4 1 0 0 1 1 ADE
```
Can anyone help me with that? Thank you! | The `dot` function does exactly that, since you want the matrix dot product between your matrix and the vector of column names:
```
df.dot(df.columns)
Out[5]:
0 ACE
1 BE
2 BCD
3 AD
4 ADE
```
If your dataframe is numeric, then obtain the boolean matrix first by testing your `df` against 0:
```
(df!=0).dot(df.columns)
```
PS: Just assign the result to the new column
```
df['Result'] = df.dot(df.columns)
df
Out[7]:
A B C D E Result
0 1 0 1 0 1 ACE
1 0 1 0 0 1 BE
2 0 1 1 1 0 BCD
3 1 0 0 1 0 AD
4 1 0 0 1 1 ADE
``` |
Is there any reason for giving self a default value? | 39,300,924 | 31 | 2016-09-02T21:27:11Z | 39,300,946 | 13 | 2016-09-02T21:29:36Z | [
"python",
"class",
"python-3.x"
] | I was browsing through some code, and I noticed a line that caught my attention. The code is similar to the example below
```
class MyClass:
def __init__(self):
pass
def call_me(self=''):
print(self)
```
This looks like any other class that I have seen, however a `str` is being passed in as default value for `self`.
If I print out `self`, it behaves as normal
```
>>> MyClass().call_me()
<__main__.MyClass object at 0x000002A12E7CA908>
```
This has been bugging me and I cannot figure out why this would be used. Is there any reason why a `str` instance would be passed in as a default value for `self`? | The short answer is yes. That way, you can call the function as:
```
MyClass.call_me()
```
without instantiating `MyClass`, which will print an empty string.
To give a longer answer, we need to look at what is going on behind the scenes.
When you create an instance of the class, `__new__` and `__init__` are called to create it, that is:
```
a = MyClass()
```
is roughly equivalent to:
```
a = MyClass.__new__(MyClass)
MyClass.__init__(a)
```
Whenever you use some method on a created instance `a`:
```
a.call_me()
```
It is "replaced" with `MyClass.call_me(a)`.
So, having a default parameter for `call_me` allows you to call this function not only as a method of an instance, in which case `self` is an instance itself, but also as a static class method.
That way, instead of `MyClass.call_me(a)`, just `MyClass.call_me()` is called. Because the argument list is empty, the default argument is assigned to `self` and the desired result (empty string) is printed. |
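To make the two call styles easy to compare, here is a small sketch of my own that returns `self` instead of printing it:

```python
class MyClass:
    def call_me(self=''):
        return self

inst = MyClass()
print(inst.call_me())     # the instance itself, e.g. <__main__.MyClass object at 0x...>
print(MyClass.call_me())  # '' -- no instance was passed, so the default kicks in
```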
Is there any reason for giving self a default value? | 39,300,924 | 31 | 2016-09-02T21:27:11Z | 39,300,948 | 28 | 2016-09-02T21:29:41Z | [
"python",
"class",
"python-3.x"
] | I was browsing through some code, and I noticed a line that caught my attention. The code is similar to the example below
```
class MyClass:
def __init__(self):
pass
def call_me(self=''):
print(self)
```
This looks like any other class that I have seen, however a `str` is being passed in as default value for `self`.
If I print out `self`, it behaves as normal
```
>>> MyClass().call_me()
<__main__.MyClass object at 0x000002A12E7CA908>
```
This has been bugging me and I cannot figure out why this would be used. Is there any reason why a `str` instance would be passed in as a default value for `self`? | *Not really*, it's just an *odd* way of making it not raise an error when called via the class:
```
MyClass.call_me()
```
works fine since, even though nothing is implicitly passed as with instances, the default value for that argument is provided. If no default was provided, when called, this would of course raise the `TypeError` for args we all love. As to why he chose an empty string as the value, *only he knows*.
Bottom line, this is more *confusing* than it is practical. If you need to do something similar I'd advice a simple [`staticmethod`](https://docs.python.org/3/library/functions.html#staticmethod) with a default argument to achieve a similar effect.
That way *you don't stump anyone reading your code* (like the developer who wrote this did with you ;-):
```
@staticmethod
def call_me(a=''):
print(a)
```
If instead you need access to class attributes you could always opt for the [`classmethod`](https://docs.python.org/3/library/functions.html#classmethod) decorator. Both these (`class` and `static` decorators) also serve a secondary purpose of making your intent crystal clear to others reading your code. |
What is the difference between 'with open(...)' and 'with closing(open(...))' | 39,301,983 | 4 | 2016-09-02T23:53:17Z | 39,302,016 | 14 | 2016-09-02T23:57:41Z | [
"python"
] | From my understanding,
```
with open(...) as x:
```
is supposed to close the file once the `with` statement completed. However, now I see
```
with closing(open(...)) as x:
```
in one place. I looked around and figured out that `closing` is supposed to close the file when the `with` statement finishes.
So, what's the difference between closing the file and `closing` the file? | Assuming that's `contextlib.closing` and the standard, built-in `open`, `closing` is redundant here. It's a wrapper to allow you to use `with` statements with objects that have a `close` method, but don't support use as context managers. Since the file objects returned by `open` are context managers, `closing` is unneeded. |
How to sum and to mean one DataFrame to create another DataFrame | 39,309,435 | 6 | 2016-09-03T17:14:09Z | 39,309,481 | 8 | 2016-09-03T17:19:47Z | [
"python",
"pandas",
"dataframe"
] | After creating DataFrame with some duplicated cell values in the column **Name**:
```
import pandas as pd
df = pd.DataFrame({'Name': ['Will','John','John','John','Alex'],
'Payment': [15, 10, 10, 10, 15],
'Duration': [30, 15, 15, 15, 20]})
```
[](http://i.stack.imgur.com/MEFnO.png)
I would like to proceed by creating another DataFrame where the duplicated values in the **Name** column are consolidated, leaving no duplicates. At the same time I want
to sum the payment values John made. I proceed with:
```
df_sum = df.groupby('Name', axis=0).sum().reset_index()
```
[](http://i.stack.imgur.com/7i89K.png)
But since the `df.groupby('Name', axis=0).sum()` command applies the sum function to every column in the DataFrame, the **Duration** (of the visit, in minutes) column is processed as well. Instead I would like to get average values for the **Duration** column, so I would need to use the `mean()` method, like so:
```
df_mean = df.groupby('Name', axis=0).mean().reset_index()
```
[](http://i.stack.imgur.com/iojOc.png)
But with the `mean()` function, the **Payment** column now shows the average payment John made and not the sum of all the payments.
How to create a DataFrame where Duration values show the average values while the Payment values show the sum? | You can apply different functions to different columns with groupby.agg:
```
df.groupby('Name').agg({'Duration': 'mean', 'Payment': 'sum'})
Out:
Payment Duration
Name
Alex 15 20
John 30 15
Will 15 30
``` |
Why is ''.join() faster than += in Python? | 39,312,099 | 55 | 2016-09-03T23:11:19Z | 39,312,172 | 72 | 2016-09-03T23:22:51Z | [
"python",
"optimization"
] | I'm able to find a bevy of information online (on Stack Overflow and otherwise) about how it's a very inefficient and bad practice to use `+` or `+=` for concatenation in Python.
I can't seem to find WHY `+=` is so inefficient. Outside of a mention [here](http://stackoverflow.com/a/1350289/3903011 "What is the most efficient string concatenation method in python?") that "it's been optimized for 20% improvement in certain cases" (still not clear what those cases are), I can't find any additional information.
What is happening on a more technical level that makes `''.join()` superior to other Python concatenation methods? | Let's say you have this code to build up a string from three strings:
```
x = 'foo'
x += 'bar' # 'foobar'
x += 'baz' # 'foobarbaz'
```
In this case, Python first needs to allocate and create `'foobar'` before it can allocate and create `'foobarbaz'`.
So for each `+=` that gets called, the entire contents of the string and whatever is getting added to it need to be copied into an entirely new memory buffer. In other words, if you have `N` strings to be joined, you need to allocate approximately `N` temporary strings and the first substring gets copied ~N times. The last substring only gets copied once, but on average, each substring gets copied `~N/2` times.
With `.join`, Python can play a number of tricks since the intermediate strings do not need to be created. [CPython](http://en.wikipedia.org/wiki/CPython) figures out how much memory it needs up front and then allocates a correctly-sized buffer. Finally, it then copies each piece into the new buffer which means that each piece is only copied once.
---
There are other viable approaches which could lead to better performance for `+=` in some cases. E.g. if the internal string representation is actually a [`rope`](https://en.wikipedia.org/wiki/Rope_(data_structure)) or if the runtime is actually smart enough to somehow figure out that the temporary strings are of no use to the program and optimize them away.
However, CPython certainly does *not* do these optimizations reliably (though it may for a [few corner cases](http://stackoverflow.com/questions/39312099/why-is-join-faster-than-in-python/39312172?noredirect=1#comment65960978_39314264)) and since it is the most common implementation in use, many best-practices are based on what works well for CPython. Having a standardized set of norms also makes it easier for other implementations to focus their optimization efforts as well. |
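If you want to measure the difference yourself, here is a rough `timeit` sketch; the numbers vary by machine, and CPython's occasional in-place optimization can narrow the gap, so treat them as illustrative only:

```python
import timeit

pieces = ['x'] * 10000

def concat():
    s = ''
    for p in pieces:
        s += p  # may reallocate and recopy the growing string each time
    return s

def join():
    return ''.join(pieces)  # one allocation, each piece copied once

t_concat = timeit.timeit(concat, number=100)
t_join = timeit.timeit(join, number=100)
print(t_concat, t_join)  # join is typically several times faster
```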
Is there any way to let sympy simplify root(-1, 3) to -1? | 39,319,584 | 4 | 2016-09-04T17:24:50Z | 39,319,682 | 8 | 2016-09-04T17:35:21Z | [
"python",
"sympy"
] | ```
root(-1, 3).simplify()
(-1)**(1/3)//Output
```
This is not what I want, any way to simplify this to -1? | Try
```
real_root(-1, 3)
```
It's referred to in the doc string of the root function too.
The reason is simple: sympy, like many symbolic algebra systems, takes the complex plane into account when calculating "the root". There are 3 complex numbers that, when raised to the power of 3, result in -1. If you're just interested in the real-valued root, be as explicit as you can. |
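To see the same choice outside sympy: in Python 3, a negative base raised to a fractional power also yields the principal *complex* cube root rather than -1 (standard library only, no sympy needed):

```python
# Python 3 agrees with sympy: the principal cube root of -1 is complex,
# a number close to 0.5 + 0.866j, not the real root -1.
r = (-1) ** (1 / 3)
print(r)
# Cubing it recovers -1, up to floating-point rounding:
print(abs(r ** 3 - (-1)) < 1e-9)  # True
```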
Why is the order of dict and dict.items() different? | 39,321,424 | 3 | 2016-09-04T20:55:17Z | 39,321,645 | 9 | 2016-09-04T21:26:10Z | [
"python",
"python-2.7",
"dictionary",
"ipython"
] | ```
>>> d = {'A':1, 'b':2, 'c':3, 'D':4}
>>> d
{'A': 1, 'D': 4, 'b': 2, 'c': 3}
>>> d.items()
[('A', 1), ('c', 3), ('b', 2), ('D', 4)]
```
Does the order get randomized twice when I call d.items()? Or does it just get randomized differently? Is there any alternate way to make d.items() return the same order as d?
Edit: Seems to be an IPython thing where it auto sorts the dict. Normally dict and dict.items() should be in the same order. | You seem to have tested this on IPython. IPython uses its own specialized pretty-printing facilities for various types, and the pretty-printer for dicts sorts the keys before printing (if possible). The `d.items()` call doesn't sort the keys, so the output is different.
In an ordinary Python session, the order of the items in the dict's `repr` would match the order of the items from the `items` method. Dict iteration order is supposed to be stable as long as a dict isn't modified. (This guarantee is not explicitly extended to the dict's `repr`, but it would be surprising if the implicit iteration in `repr` broke consistency with other forms of dict iteration.) |
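If you want output that matches IPython's sorted display in any interpreter, sort the items explicitly:

```python
d = {'A': 1, 'b': 2, 'c': 3, 'D': 4}

# repr(d) and d.items() agree with each other; sorting by key makes
# the order explicit (uppercase sorts before lowercase in ASCII):
items = sorted(d.items())
print(items)  # [('A', 1), ('D', 4), ('b', 2), ('c', 3)]
```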
How to find the maximum product of two elements in a list? | 39,329,829 | 2 | 2016-09-05T11:40:39Z | 39,329,936 | 13 | 2016-09-05T11:47:04Z | [
"python",
"python-2.7",
"python-3.x",
"itertools"
] | I was trying out a problem in a HackerRank contest for fun, and this question came up.
I used itertools for this; here is the code:
```
import itertools
l = []
for _ in range(int(input())):
l.append(int(input()))
max = l[0] * l[len(l)-1]
for a,b in itertools.combinations(l,2):
if max < (a*b):
max = (a*b)
print(max)
```
Is there any more efficient way than this? I am getting a timeout error on some test cases which I can't access (as it's a small contest). | Iterate over the list and find the following:
- Largest positive number (a)
- Second largest positive number (b)
- Largest negative number (c)
- Second largest negative number (d)
Now, you will be able to figure out the maximum value upon multiplication, either `a*b` or `c*d` |
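That single pass can be sketched as follows (`max_pairwise_product` is my own name for it; `heapq` keeps it O(n)):

```python
import heapq

def max_pairwise_product(nums):
    a, b = heapq.nlargest(2, nums)   # two largest values
    c, d = heapq.nsmallest(2, nums)  # two smallest (most negative) values
    # the best product is either largest * second largest,
    # or most-negative * second-most-negative
    return max(a * b, c * d)

print(max_pairwise_product([3, -10, 5, -6, 2]))  # 60, from -10 * -6
```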
append zero but not False in a list python | 39,331,381 | 6 | 2016-09-05T13:12:05Z | 39,331,503 | 8 | 2016-09-05T13:19:00Z | [
"python",
"list"
] | I'm trying to move all zeros in a list to the back, but there is a False bool in the list. I just found out that False == 0, so how do I move all zeros to the back of the list and keep False intact?
```
def move_zeros(array):
#your code here
for i in array:
if i == 0:
array.remove(i)
array.append(i)
answer = array
print answer
```
`move_zeros(["a",0,0,"b",None,"c","d",0,1,False,0,1,0,3,[],0,1,9,0,0,{},0,0,9])`
This is what it returns when you run it.
```
['a', 'b', None, 'c', 'd', 1, 1, 3, [], 1, 9, {}, 9, 0, 0, 0, 0, 0, False, 0, 0, False, 0, 0]
``` | What you're doing is basically a custom sort. So just implement it this way:
```
array.sort(key=lambda item: item is 0)
```
What this means is "Transform the array into a boolean one where items which are 0 are True and everything else is False." Then, sorting those booleans puts all the False values at the left (because they are like 0), and the True values at the right (like 1).
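Applied to the question's list — this sketch swaps in a type check for the `is 0` trick, so it doesn't rely on CPython's small-int caching:

```python
array = ["a", 0, 0, "b", None, "c", "d", 0, 1, False, 0, 1, 0, 3, [], 0, 1, 9, 0, 0, {}, 0, 0, 9]

# key is True only for a genuine int 0; False is of type bool, so it stays put.
# list.sort is stable, so all non-zeros keep their relative order.
array.sort(key=lambda item: type(item) is int and item == 0)
print(array)
# ['a', 'b', None, 'c', 'd', 1, False, 1, 3, [], 1, 9, {}, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```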
---
Originally I had written a solution which is not supported in Python 3:
```
array.sort(lambda L,R: -1 if R is 0 else 0)
```
What this means is "L is less than R if R is 0". Then we sort according to that. So we end up with any zeros on the right, because anything is less than them. The above only works in Python 2, however. |
Time complexity for a sublist in Python | 39,338,520 | 4 | 2016-09-05T22:49:38Z | 39,338,537 | 7 | 2016-09-05T22:51:56Z | [
"python",
"performance",
"list",
"big-o",
"sublist"
] | In Python, what is the time complexity when we create a sublist from an existing list?
For example, here data is the name of our existing list and list1 is our sublist created by slicing data.
```
data = [1,2,3,4,5,6..100,...1000....,10^6]
list1 = data[101:10^6]
```
What is the running time for creating list1?
Is it O(10^6), i.e. O(N), or O(1)? | Getting a list slice in Python is `O(M - N)` / `O(10^6 - 101)`
[Here](https://wiki.python.org/moin/TimeComplexity) you can check the time complexity of Python list operations.
Underneath, Python lists are represented as arrays, so slicing iterates from one index (N) to another (M), copying references along the way.
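A rough way to see the O(M - N) cost is to time slices of different lengths (timings are illustrative only and vary by machine):

```python
import timeit

data = list(range(10**5))

t_short = timeit.timeit(lambda: data[0:10], number=100)
t_full = timeit.timeit(lambda: data[0:10**5], number=100)

# the full slice copies 100,000 references per call, the short one only 10,
# so t_full dwarfs t_short
print(t_short, t_full)
```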
Is any() evaluated lazily? | 39,348,588 | 3 | 2016-09-06T12:01:54Z | 39,348,689 | 9 | 2016-09-06T12:08:20Z | [
"python"
] | I am writing a script in which I have to test numbers against a number of conditions. If **any** of the conditions are met I want to return `True`, and I want to do that the fastest way possible.
My first idea was to use `any()` instead of nested `if` statements or multiple `or`s linking my conditions. Since I would be satisfied if any of the conditions were `True`, I could really benefit from `any()` being lazy and returning `True` as soon as it could.
Based on the fact that the following print happens instantly and not after 10 (= 0 + 1 + 2 + 3 + 4) seconds, I assume it is. Is that the case, or am I somehow mistaken?
```
import time
def some(sec):
time.sleep(sec)
return True
print(any(some(x) for x in range(5)))
``` | Yes, `any()` and `all()` short-circuit, aborting as soon as the outcome is clear: See the [docs](https://docs.python.org/3/library/functions.html#all):
> **all(iterable)**
>
> Return True if all elements of the iterable are true (or if the
> iterable is empty). Equivalent to:
>
> ```
> def all(iterable):
> for element in iterable:
> if not element:
> return False
> return True
> ```
>
> **any(iterable)**
>
> Return True if any element of the iterable is true. If the iterable is
> empty, return False. Equivalent to:
>
> ```
> def any(iterable):
> for element in iterable:
> if element:
> return True
> return False
> ``` |
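You can watch the short-circuiting happen by recording which elements actually get evaluated:

```python
calls = []

def check(x):
    calls.append(x)  # record every element any() actually pulled
    return x > 2

result = any(check(x) for x in range(10))
print(result, calls)  # True [0, 1, 2, 3] -- evaluation stopped at the first True
```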
Impregnate string with list entries - alternating | 39,350,419 | 3 | 2016-09-06T13:36:36Z | 39,350,520 | 7 | 2016-09-06T13:41:27Z | [
"python"
] | So, SO, I am trying to "merge" a string (`a`) and a list of strings (`b`):
```
a = '1234'
b = ['+', '-', '']
```
to get the desired output (`c`):
```
c = '1+2-34'
```
The characters in the desired output string alternate in origin between the string and the list. Also, the list will always contain one element fewer than the string has characters. I was wondering what the fastest way to do this is.
What I have so far is the following:
```
c = a[0]
for i in range(len(b)):
c += b[i] + a[1:][i]
print(c) # prints -> 1+2-34
```
But I kind of feel like there is a better way to do this. | You can use [`itertools.zip_longest`](https://docs.python.org/3.4/library/itertools.html#itertools.zip_longest) to `zip` the two sequences, then keep iterating even after the shorter sequence has run out of characters. Once it runs out, you'll start getting `None` back, so just consume the rest of the numerical characters.
```
>>> from itertools import chain
>>> from itertools import zip_longest
>>> ''.join(i+j if j else i for i,j in zip_longest(a, b))
'1+2-34'
```
As [@deceze](https://stackoverflow.com/users/476/deceze) suggested in the comments, you can also pass a `fillvalue` argument to `zip_longest` which will insert empty strings. I'd suggest his method since it's a bit more readable.
```
>>> ''.join(i+j for i,j in zip_longest(a, b, fillvalue=''))
'1+2-34'
```
A further optimization suggested by [@ShadowRanger](https://stackoverflow.com/users/364696/shadowranger) is to remove the temporary string concatenations (`i+j`) and replace those with an [`itertools.chain.from_iterable`](https://docs.python.org/3/library/itertools.html#itertools.chain.from_iterable) call instead
```
>>> ''.join(chain.from_iterable(zip_longest(a, b, fillvalue='')))
'1+2-34'
``` |
Return list element by the value of one of its attributes | 39,352,979 | 2 | 2016-09-06T15:42:13Z | 39,353,030 | 9 | 2016-09-06T15:45:30Z | [
"python",
"python-3.x"
] | There is a list of objects
```
l = [obj1, obj2, obj3]
```
Each `obj` is an object of a class and has an `id` attribute.
How can I return an `obj` from the list by its `id`?
P.S. The `id`s are unique, and it is guaranteed that the list contains an object with the requested `id`. | Assuming the `id` is a hashable object, like a string, you should be using a dictionary, not a list.
```
l = [obj1, obj2, obj3]
d = {o.id:o for o in l}
```
You can then retrieve objects with their keys, e.g. `d['ID_39A']`. |
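A minimal runnable sketch, with a stand-in class since the question's class isn't shown; build the dictionary once, then every lookup is O(1):

```python
class Obj(object):
    """Stand-in for the question's class; only the `id` attribute matters."""
    def __init__(self, id):
        self.id = id

obj1, obj2, obj3 = Obj('x1'), Obj('x2'), Obj('x3')
l = [obj1, obj2, obj3]

d = {o.id: o for o in l}   # build once...
print(d['x2'] is obj2)     # True -- ...then look up by id in O(1)
```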
Why the elements of numpy array not same as themselves? | 39,355,556 | 3 | 2016-09-06T18:25:37Z | 39,355,706 | 7 | 2016-09-06T18:35:58Z | [
"python",
"python-3.x",
"numpy"
] | How do I explain the last line of these?
```
>>> a = 1
>>> a is a
True
>>> a = [1, 2, 3]
>>> a is a
True
>>> a = np.zeros(3)
>>> a
array([ 0., 0., 0.])
>>> a is a
True
>>> a[0] is a[0]
False
```
I always thought that everything is at least "is" that thing itself! | NumPy doesn't store array elements as Python objects. If you try to access an individual element, NumPy has to create a new wrapper object to represent the element, and it has to do this *every time* you access the element. The wrapper objects from two accesses to `a[0]` are different objects, so `a[0] is a[0]` returns `False`. |
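You can observe the fresh wrapper objects directly by binding two accesses to names:

```python
import numpy as np

a = np.zeros(3)
x = a[0]        # each indexing operation builds a new scalar wrapper...
y = a[0]        # ...so this is a second, distinct object
print(type(x))  # <class 'numpy.float64'>
print(x is y)   # False: two different wrapper objects
print(x == y)   # True: the same underlying value
```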
How to remap ids to consecutive numbers quickly | 39,356,279 | 6 | 2016-09-06T19:14:20Z | 39,356,608 | 7 | 2016-09-06T19:38:55Z | [
"python",
"pandas",
"dataframe"
] | I have a large csv file with lines that looks like
```
stringa,stringb
stringb,stringc
stringd,stringa
```
I need to convert it so the ids are consecutively numbered from 0. In this case the following would work
```
0,1
1,2
3,0
```
My current code looks like:
```
import csv
names = {}
counter = 0
with open('foo.csv', 'rb') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        if row[0] in names:
            id1 = names[row[0]]
        else:
            names[row[0]] = counter
            id1 = counter
            counter += 1
        if row[1] in names:
            id2 = names[row[1]]
        else:
            names[row[1]] = counter
            id2 = counter
            counter += 1
        print id1, id2
```
Python dicts use a lot of memory sadly and my input is large.
> What can I do when the input is too large for the dict to fit in memory
I would also be interested if there is a better/faster way to solve this problem in general. | ```
import pandas as pd

df = pd.DataFrame([['a', 'b'], ['b', 'c'], ['d', 'a']])
v = df.stack().unique()      # all distinct ids
v.sort()                     # sort so the numbering is deterministic
f = pd.factorize(v)          # map each id to a consecutive integer
m = pd.Series(f[0], f[1])    # lookup table: id -> integer
df.stack().map(m).unstack()  # apply the mapping to every cell
```
[](http://i.stack.imgur.com/eTlHI.png) |
How to create a list from another list using specific criteria in Python? | 39,361,381 | 5 | 2016-09-07T04:45:45Z | 39,361,427 | 7 | 2016-09-07T04:50:52Z | [
"python",
"list"
] | How can I create a list from another list using python?
If I have a list:
```
input = ['a/b', 'g', 'c/d', 'h', 'e/f']
```
How can I create the list of only those letters that follow slash "/" i.e.
```
desired_output = ['b','d','f']
```
A code would be very helpful. | Given this input, you can get the result with a simple list comprehension:
```
input = ["a/b", "g", "c/d", "h", "e/f"]
print [i.split("/")[1] for i in input if i.find("/") != -1]
```
or
```
print [i.split("/")[1] for i in input if "/" in i ]
```
> Output: ['b', 'd', 'f'] |
Inserting an element before each element of a list | 39,375,906 | 4 | 2016-09-07T17:22:29Z | 39,375,937 | 8 | 2016-09-07T17:24:13Z | [
"python",
"list",
"python-3.x",
"list-comprehension"
] | I'm looking to insert a constant element before each of the existing element of a list, i.e. go from:
```
['foo', 'bar', 'baz']
```
to:
```
['a', 'foo', 'a', 'bar', 'a', 'baz']
```
I've tried using list comprehensions but the best thing I can achieve is an array of arrays using this statement:
```
[['a', elt] for elt in stuff]
```
Which results in this:
```
[['a', 'foo'], ['a', 'bar'], ['a', 'baz']]
```
So not exactly what I want. Can it be achieved using list comprehension? Just in case it matters, I'm using Python 3.5. | Add another loop:
```
[v for elt in stuff for v in ('a', elt)]
```
or use [`itertools.chain.from_iterable()`](https://docs.python.org/3/library/itertools.html#itertools.chain.from_iterable) together with [`zip()`](https://docs.python.org/3/library/functions.html#zip) and [`itertools.repeat()`](https://docs.python.org/3/library/itertools.html#itertools.repeat) if you need an iterable version rather than a full list:
```
from itertools import chain, repeat
try:
    # Python 2: use the lazy izip, exposed as zip by future_builtins
    from future_builtins import zip
except ImportError:
    # Python 3: the built-in zip is already lazy
    pass

it = chain.from_iterable(zip(repeat('a'), stuff))
``` |
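A self-contained demonstration of both approaches on the question's sample data (Python 3, where the built-in `zip` is already lazy):

```python
from itertools import chain, repeat

stuff = ['foo', 'bar', 'baz']

# Nested list comprehension: one tuple ('a', elt) per element, flattened
flat = [v for elt in stuff for v in ('a', elt)]
print(flat)  # ['a', 'foo', 'a', 'bar', 'a', 'baz']

# Lazy iterator version
it = chain.from_iterable(zip(repeat('a'), stuff))
print(list(it))  # ['a', 'foo', 'a', 'bar', 'a', 'baz']
```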
Retrieving result from celery worker constantly | 39,377,751 | 8 | 2016-09-07T19:33:28Z | 39,430,468 | 7 | 2016-09-10T20:50:47Z | [
"python",
"database",
"celery"
I have a web app in which I am trying to use celery to load background tasks from a database. I am currently loading the database upon request, but would like to load the tasks on an hourly interval and have them work in the background. I am using Flask and coding in Python. I have Redis running as well.
So far using celery I have gotten the worker to process the task and the beat to send the tasks to the worker on an interval. But I want to retrieve the results [a dataframe or query] from the worker, and if the result is not ready then it should load the previous result of the worker.
Any ideas on how to do this?
**Edit**
I am retrieving the results from a database using sqlalchemy and I am rendering the results in a webpage. I have my homepage which has all the various links which all lead to different graphs which I want to be loaded in the background so the user does not have to wait long loading times. | The Celery [Task](http://docs.celeryproject.org/en/latest/userguide/tasks.html#tasks) is being executed by a Worker, and its Result is stored in the [Celery Backend](http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html#keeping-results).
If I get you correctly, then I think you got few options:
1. [Ignore the result](http://docs.celeryproject.org/en/latest/userguide/tasks.html#ignore-results-you-don-t-want) of the graph-loading-task, store what ever you need, as a side effect of the task, in your database. When needed, query for the most recent result in that database. If the DB is Redis, you may find [ZADD](http://redis.io/commands/ZADD) and [ZRANGE](http://redis.io/commands/ZRANGE) suitable. This way you'll get the new if available, or the previous if not.
2. You can look for the [result of a task if you provide it's id](http://docs.celeryproject.org/en/latest/reference/celery.result.html#celery.result.AsyncResult). You can do this when you want to find out the status, something like (where `celery` is the Celery app): `result = celery.AsyncResult(<the task id>)`
3. Use [callback](http://docs.celeryproject.org/en/latest/userguide/tasks.html#avoid-launching-synchronous-subtasks) to update farther when new result is ready.
4. Let a background thread [wait](http://docs.celeryproject.org/en/latest/reference/celery.result.html#celery.result.AsyncResult.wait) for the AsyncResult, or [native\_join](http://docs.celeryproject.org/en/latest/reference/celery.result.html#celery.result.ResultSet.join_native), which is supported with Redis, and update accordingly (not recommended)
I personally used option #1 in similar cases (using MongoDB) and found it to be very maintainable and flexible. But possibly, due to the nature of your UI, option #3 will be more suitable for your needs.
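If you go with option #1, the read path boils down to "newest entry if present, otherwise fall back". A minimal sketch of that pattern, using a plain dict of timestamped entries as a stand-in for a Redis sorted set (the Redis calls in the comments and all the names here are illustrative):

```python
import time

# Stand-in for a Redis sorted set scored by timestamp
results = {}

def store_result(name, payload):
    # Redis equivalent: redis.zadd('results:' + name, {serialized: time.time()})
    results.setdefault(name, []).append((time.time(), payload))

def latest_result(name):
    # Redis equivalent: redis.zrange('results:' + name, -1, -1)
    entries = results.get(name)
    return entries[-1][1] if entries else None  # newest if ready, else None

store_result('sales_graph', {'points': [1, 2, 3]})
store_result('sales_graph', {'points': [4, 5, 6]})
print(latest_result('sales_graph'))  # {'points': [4, 5, 6]}
print(latest_result('other_graph'))  # None
```

In the web view you would render `latest_result(...)` if it is not `None`, and a placeholder (or the previously cached page) otherwise.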
How to mute all sounds in chrome webdriver with selenium | 39,392,479 | 6 | 2016-09-08T13:36:40Z | 39,392,601 | 7 | 2016-09-08T13:43:39Z | [
"python",
"selenium"
] | I want to write a script in which I use selenium package like this:
```
from selenium import webdriver
driver = webdriver.Chrome()
driver.get("https://www.youtube.com/watch?v=hdw1uKiTI5c")
```
now after getting the desired URL I want to mute the chrome sounds.
how could I do this?
something like this:
```
driver.mute()
```
Is it possible with any other webdrivers, like Firefox? | I'm not sure you can do it for an already-opened page in general, but you can mute all sound for the entire duration of the browser session by setting the [`--mute-audio`](http://peter.sh/experiments/chromium-command-line-switches/#mute-audio) switch:
```
from selenium import webdriver
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--mute-audio")
driver = webdriver.Chrome(chrome_options=chrome_options)
driver.get("https://www.youtube.com/watch?v=hdw1uKiTI5c")
```
---
Or, you can [mute the HTML5 video player directly](http://stackoverflow.com/q/6376450/771848):
```
video = driver.find_element_by_css_selector("video")
driver.execute_script("arguments[0].muted = true;", video)
```
You might need to add some delay before that to let the video be initialized before muting it. `time.sleep()` would not be the best way to do it - a better way is to subscribe to the [`loadstart` media event](https://developer.mozilla.org/en-US/docs/Web/Guide/Events/Media_events) - the Python implementation can be found [here](http://stackoverflow.com/a/28438996/771848).
To summarize - complete implementation:
```
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium import webdriver
driver = webdriver.Chrome()
driver.set_script_timeout(10)
driver.get("https://www.youtube.com/watch?v=hdw1uKiTI5c")
# wait for video tag to show up
wait = WebDriverWait(driver, 5)
video = wait.until(EC.visibility_of_element_located((By.TAG_NAME, 'video')))
# wait for video to be initialized
driver.execute_async_script("""
    var video = arguments[0],
        callback = arguments[arguments.length - 1];

    video.addEventListener('loadstart', listener);

    function listener() {
        callback();
    };
""", video)
# mute the video
driver.execute_script("arguments[0].muted = true;", video)
``` |
Python pandas slice dataframe by multiple index ranges | 39,393,856 | 3 | 2016-09-08T14:40:05Z | 39,393,929 | 8 | 2016-09-08T14:43:19Z | [
"python",
"pandas",
"indexing",
"slice"
What is the pythonic way to slice a dataframe by multiple index ranges (e.g. `10:12` and `25:28`)?
I want this in a more elegant way:
```
df = pd.DataFrame({'a':range(10,100)})
df.iloc[[i for i in range(10,12)] + [i for i in range(25,28)]]
```
Result:
```
a
10 20
11 21
25 35
26 36
27 37
```
Something like this would be more elegant:
```
df.iloc[(10:12, 25:28)]
```
Thank you! | You can use numpy's [r\_](http://docs.scipy.org/doc/numpy/reference/generated/numpy.r_.html) "slicing trick":
```
import numpy as np

df = pd.DataFrame({'a': range(10, 100)})
df.iloc[np.r_[10:12, 25:28]]
```
Gives:
```
a
10 20
11 21
25 35
26 36
27 37
``` |
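For clarity, `np.r_` just concatenates the slice ranges into a single integer index array, which is what `.iloc` then receives:

```python
import numpy as np

idx = np.r_[10:12, 25:28]
print(idx)  # [10 11 25 26 27]
```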
Script works differently when ran from the terminal and ran from Python | 39,397,034 | 8 | 2016-09-08T17:32:50Z | 39,398,969 | 8 | 2016-09-08T19:40:06Z | [
"python",
"bash",
"subprocess",
"pipeline"
] | I have a short bash script `foo.sh`
```
#!/bin/bash
cat /dev/urandom | tr -dc 'a-z1-9' | fold -w 4 | head -n 1
```
When I run it directly from the shell, it runs fine, exiting when it is done
```
$ ./foo.sh
m1un
$
```
but when I run it from Python
```
$ python -c "import subprocess; subprocess.call(['./foo.sh'])"
ygs9
```
it outputs the line but then just hangs forever. What is causing this discrepancy? | Adding the `trap -p` command to the bash script, stopping the hung python process and running `ps` shows what's going on:
```
$ cat foo.sh
#!/bin/bash
trap -p
cat /dev/urandom | tr -dc 'a-z1-9' | fold -w 4 | head -n 1
$ python -c "import subprocess; subprocess.call(['./foo.sh'])"
trap -- '' SIGPIPE
trap -- '' SIGXFSZ
ko5o
^Z
[1]+ Stopped python -c "import subprocess; subprocess.call(['./foo.sh'])"
$ ps -H -o comm
COMMAND
bash
python
foo.sh
cat
tr
fold
ps
```
Thus, `subprocess.call()` executes the command with the `SIGPIPE` signal masked. When `head` does its job and exits, the remaining processes do not receive the broken pipe signal and do not terminate.
Having the explanation of the problem at hand, it was easy to find the bug in the python bugtracker, which turned out to be [issue#1652](https://bugs.python.org/issue1652). |
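A common workaround on Python 2 (the bug was fixed in Python 3.2 via the default `restore_signals=True`) is to restore the default `SIGPIPE` disposition in the child with `preexec_fn`. A sketch, using a small stand-in pipeline instead of `foo.sh`:

```python
import signal
import subprocess

def restore_sigpipe():
    # Give the child the default SIGPIPE disposition that Python masks
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)

# Stand-in for ./foo.sh: a pipeline whose producer relies on SIGPIPE to stop
rc = subprocess.call(['sh', '-c', 'yes | head -n 1 > /dev/null'],
                     preexec_fn=restore_sigpipe)
print(rc)  # 0: the pipeline's status is head's exit status
```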
When should I use list.count(0), and how do I to discount the "False" item? | 39,404,581 | 18 | 2016-09-09T05:43:27Z | 39,404,665 | 11 | 2016-09-09T05:50:35Z | [
"python",
"list",
"count",
"boolean"
] | `a.count(0)` always returns 11, so what should I do to discount the `False` and return 10?
```
a = ["a",0,0,"b",None,"c","d",0,1,False,0,1,0,3,[],0,1,9,0,0,{},0,0,9]
``` | Python treats `False` as equal to `0` (`bool` is a subclass of `int`), which is why `count(0)` also counts `False`. `None` and `""` are falsy in conditions, but they do not compare equal to `0`.
Redefine count as follows:
```
sum(1 for item in a if item == 0 and type(item) == int)
```
or (Thanks to [Kevin](http://stackoverflow.com/questions/39404581/when-use-list-count0-how-to-discount-false-item/39404665?noredirect=1#comment66136825_39404665), and [Bakuriu](http://stackoverflow.com/questions/39404581/when-use-list-count0-how-to-discount-false-item/39404665?noredirect=1#comment66142335_39404665) for their comments):
```
sum(1 for item in a if item == 0 and type(item) is type(0))
```
or as suggested by [ozgur](http://stackoverflow.com/users/793428/ozgur) in comments (**which is not recommended and is considered wrong**, see [this](http://stackoverflow.com/questions/39404581/when-use-list-count0-how-to-discount-false-item/39404665?noredirect=1#comment66136825_39404665)), simply:
```
sum(1 for item in a if item is 0)
```
it **may** (["is" operator behaves unexpectedly with integers](http://stackoverflow.com/questions/306313/is-operator-behaves-unexpectedly-with-integers?noredirect=1&lq=1)) work for *small* primitive types, but if your list contains objects, please consider what `is` operator does:
From the documentation for the [`is` operator](http://docs.python.org/2/reference/expressions.html#not-in):
> The operators `is` and `is not` test for object identity: `x is y` is true
> if and only if x and y are the same object.
More information about `is` operator: [Understanding Python's "is" operator](http://stackoverflow.com/questions/13650293/understanding-pythons-is-operator) |
When should I use list.count(0), and how do I to discount the "False" item? | 39,404,581 | 18 | 2016-09-09T05:43:27Z | 39,404,682 | 9 | 2016-09-09T05:52:00Z | [
"python",
"list",
"count",
"boolean"
] | `a.count(0)` always returns 11, so what should I do to discount the `False` and return 10?
```
a = ["a",0,0,"b",None,"c","d",0,1,False,0,1,0,3,[],0,1,9,0,0,{},0,0,9]
``` | You could use `sum` and a generator expression:
```
>>> sum((x==0 and x is not False) for x in ["a",0,0,"b",None,"c","d",0,1,False,0,1,0,3,[],0,1,9,0,0,{},0,0,9])
10
``` |
When should I use list.count(0), and how do I to discount the "False" item? | 39,404,581 | 18 | 2016-09-09T05:43:27Z | 39,404,722 | 8 | 2016-09-09T05:55:35Z | [
"python",
"list",
"count",
"boolean"
] | `a.count(0)` always returns 11, so what should I do to discount the `False` and return 10?
```
a = ["a",0,0,"b",None,"c","d",0,1,False,0,1,0,3,[],0,1,9,0,0,{},0,0,9]
``` | You need to filter out the Falses yourself.
```
>>> a = ["a",0,0,"b",None,"c","d",0,1,False,0,1,0,3,[],0,1,9,0,0,{},0,0,9]
>>> len([x for x in a if x == 0 and x is not False])
10
```
---
Old answer is CPython specific and it's better to use solutions that work on all Python implementations.
Since **CPython** [keeps a pool of small integer objects](http://stackoverflow.com/questions/306313/is-operator-behaves-unexpectedly-with-integers), zero included, you can filter the list with the `is` operator.
This of course shouldn't be used for value comparisons but in this case it works since zeros are what we want to find.
```
>>> a = ["a",0,0,"b",None,"c","d",0,1,False,0,1,0,3,[],0,1,9,0,0,{},0,0,9]
>>> [x for x in a if x is 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> len(_)
10
``` |
How to convert the following string in python? | 39,414,085 | 4 | 2016-09-09T14:36:55Z | 39,414,310 | 7 | 2016-09-09T14:48:18Z | [
"python",
"python-2.7",
"python-3.x"
] | Input : UserID/ContactNumber
Output: user-id/contact-number
I have tried the following code:
```
s ="UserID/ContactNumber"
list = [x for x in s]
for char in list:
    if char != list[0] and char.isupper():
        list[list.index(char)] = '-' + char
fin_list=''.join((list))
print(fin_list.lower())
```
but the output I got is:
```
user-i-d/-contact-number
``` | You could use a [regular expression](https://docs.python.org/3.5/library/re.html) with a positive lookbehind assertion:
```
>>> import re
>>> s ="UserID/ContactNumber"
>>> re.sub('(?<=[a-z])([A-Z])', r'-\1', s).lower()
'user-id/contact-number'
``` |
limited number of user-initiated background processes | 39,416,623 | 9 | 2016-09-09T17:12:55Z | 39,447,549 | 8 | 2016-09-12T09:46:39Z | [
"python",
"django",
"asynchronous",
"celery",
"background-process"
] | I need to allow users to submit requests for very, very large jobs. We are talking 100 gigabytes of memory and 20 hours of computing time. This costs our company a lot of money, so it was stipulated that only 2 jobs could be running at any time, and requests for new jobs when 2 are already running would be rejected (and the user notified that the server is busy).
My current solution uses an Executor from concurrent.futures, and requires setting the Apache server to run only one process, reducing responsiveness (current user count is very low, so it's okay for now).
If possible I would like to use Celery for this, but I did not see in the documentation any way to accomplish this particular setting.
How can I run up to a limited number of jobs in the background in a Django application, and notify users when jobs are rejected because the server is busy? | I have two solutions for this particular case, one an out of the box solution by celery, and another one that you implement yourself.
1. You can do something like this with celery workers. In particular, you **only create two worker processes with concurrency=1** (or well, one with concurrency=2, but that's gonna be threads, not different processes), this way, only two jobs can be done asynchronously. Now you need a way to raise exceptions if both jobs are occupied, then you use [inspect](http://docs.celeryproject.org/en/latest/userguide/monitoring.html#management-command-line-utilities-inspect-control), to count the number of active tasks and throw exceptions if required. For implementation, you can checkout [this SO post](http://stackoverflow.com/questions/5544629/retrieve-list-of-tasks-in-a-queue-in-celery).
You might also be interested in [rate limits](http://docs.celeryproject.org/en/latest/userguide/tasks.html#Task.rate_limit).
2. You can do it all yourself, using a locking solution of choice. In particular, a nice implementation that makes sure only two processes are running with redis (and redis-py) is as simple as the following. (Considering you know redis, since you know celery)
```
from redis import StrictRedis
redis = StrictRedis('localhost', 6379)
locks = ['compute:lock1', 'compute:lock2']

for key in locks:
    lock = redis.lock(key, blocking_timeout=5)
    acquired = lock.acquire()
    if acquired:
        try:
            do_huge_computation()
        finally:
            lock.release()
        break
else:
    # neither slot became free within the timeout
    raise SystemLimitsReached("Already at max capacity!")
```
This way you make sure only two running processes can exist in the system. A third process will block in `lock.acquire()` for **blocking\_timeout** seconds; if the locking was successful, `acquired` will be True, else it's False and you'd tell your user to wait!
I had the same requirement sometime in the past and what I ended up coding was something like the solution above. In particular
1. This has the least amount of race conditions possible
2. It's easy to read
3. Doesn't depend on a sysadmin, suddenly doubling the concurrency of workers under load and blowing up the whole system.
4. You can also ***implement the limit per user***, meaning each user can have 2 simultaneous running jobs, by only changing the lock keys from *compute:lock1* to **compute:userId:lock1** and lock2 accordingly. You can't do this one with vanilla celery.
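For the per-user variant in point 4, only the key construction changes; a tiny helper (names illustrative):

```python
def user_lock_keys(user_id, max_jobs=2):
    # compute:<userId>:lock1, compute:<userId>:lock2, ...
    return ['compute:{}:lock{}'.format(user_id, i) for i in range(1, max_jobs + 1)]

print(user_lock_keys(42))  # ['compute:42:lock1', 'compute:42:lock2']
```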
How can I implement x[i][j] = y[i+j] efficiently in numpy? | 39,426,690 | 4 | 2016-09-10T13:45:12Z | 39,426,755 | 7 | 2016-09-10T13:53:26Z | [
"python",
"numpy"
] | Let **x** be a matrix with a shape of (A,B) and **y** be an array with a size of A+B-1.
```
for i in range(A):
    for j in range(B):
        x[i][j] = y[i+j]
```
How can I implement equivalent code efficiently using functions in numpy? | **Approach #1** Using [`Scipy's hankel`](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.linalg.hankel.html) -
```
from scipy.linalg import hankel
x = hankel(y[:A], y[A-1:])
```
**Approach #2** Using [`NumPy broadcasting`](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) -
```
x = y[np.arange(A)[:,None] + np.arange(B)]
```
**Approach #3** Using `NumPy strides technique` -
```
n = y.strides[0]
x = np.lib.stride_tricks.as_strided(y, shape=(A,B), strides=(n,n))
```
---
Runtime test -
```
In [93]: def original_app(y,A,B):
    ...:     x = np.zeros((A,B))
    ...:     for i in range(A):
    ...:         for j in range(B):
    ...:             x[i][j] = y[i+j]
    ...:     return x
    ...:
    ...: def strided_method(y,A,B):
    ...:     n = y.strides[0]
    ...:     return np.lib.stride_tricks.as_strided(y, shape=(A,B), strides=(n,n))
    ...:
In [94]: # Inputs
...: A,B = 100,100
...: y = np.random.rand(A+B-1)
...:
In [95]: np.allclose(original_app(y,A,B),hankel(y[:A],y[A-1:]))
Out[95]: True
In [96]: np.allclose(original_app(y,A,B),y[np.arange(A)[:,None] + np.arange(B)])
Out[96]: True
In [97]: np.allclose(original_app(y,A,B),strided_method(y,A,B))
Out[97]: True
In [98]: %timeit original_app(y,A,B)
100 loops, best of 3: 5.29 ms per loop
In [99]: %timeit hankel(y[:A],y[A-1:])
10000 loops, best of 3: 114 µs per loop
In [100]: %timeit y[np.arange(A)[:,None] + np.arange(B)]
10000 loops, best of 3: 60.5 µs per loop
In [101]: %timeit strided_method(y,A,B)
10000 loops, best of 3: 22.4 µs per loop
```
Additional ways based on `strides` -
It seems `strides` technique has been used at few places : [`extract_patches`](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.image.extract_patches_2d.html) and [`view_as_windows`](http://scikit-image.org/docs/dev/api/skimage.util.html#skimage.util.view_as_windows) that are being used in such image-processing based modules. So, with those, we have two more approaches -
```
from skimage.util.shape import view_as_windows
from sklearn.feature_extraction.image import extract_patches
x = extract_patches(y,(B))
x = view_as_windows(y,(B))
In [151]: np.allclose(original_app(y,A,B),extract_patches(y,(B)))
Out[151]: True
In [152]: np.allclose(original_app(y,A,B),view_as_windows(y,(B)))
Out[152]: True
In [153]: %timeit extract_patches(y,(B))
10000 loops, best of 3: 62.4 µs per loop
In [154]: %timeit view_as_windows(y,(B))
10000 loops, best of 3: 108 µs per loop
``` |
How to traverse cyclic directed graphs with modified DFS algorithm | 39,427,638 | 22 | 2016-09-10T15:34:11Z | 39,456,032 | 7 | 2016-09-12T17:54:10Z | [
"python",
"algorithm",
"python-2.7",
"depth-first-search",
"demoscene"
] | **OVERVIEW**
I'm trying to figure out how to traverse **directed cyclic graphs** using some sort of DFS iterative algorithm. Here's a little mcve version of what I currently got implemented (it doesn't deal with cycles):
```
class Node(object):
    def __init__(self, name):
        self.name = name

    def start(self):
        print '{}_start'.format(self)

    def middle(self):
        print '{}_middle'.format(self)

    def end(self):
        print '{}_end'.format(self)

    def __str__(self):
        return "{0}".format(self.name)


class NodeRepeat(Node):
    def __init__(self, name, num_repeats=1):
        super(NodeRepeat, self).__init__(name)
        self.num_repeats = num_repeats
def dfs(graph, start):
"""Traverse graph from start node using DFS with reversed childs"""
visited = {}
stack = [(start, "")]
while stack:
# To convert dfs -> bfs
# a) rename stack to queue
# b) pop becomes pop(0)
node, parent = stack.pop()
if parent is None:
if visited[node] < 3:
node.end()
visited[node] = 3
elif node not in visited:
if visited.get(parent) == 2:
parent.middle()
elif visited.get(parent) == 1:
visited[parent] = 2
node.start()
visited[node] = 1
stack.append((node, None))
# Maybe you want a different order, if it's so, don't use reversed
childs = reversed(graph.get(node, []))
for child in childs:
if child not in visited:
stack.append((child, node))
if __name__ == "__main__":
    Sequence1 = Node('Sequence1')
    MtxPushPop1 = Node('MtxPushPop1')
    Rotate1 = Node('Rotate1')
    Repeat1 = NodeRepeat('Repeat1', num_repeats=2)
    Sequence2 = Node('Sequence2')
    MtxPushPop2 = Node('MtxPushPop2')
    Translate = Node('Translate')
    Rotate2 = Node('Rotate2')
    Rotate3 = Node('Rotate3')
    Scale = Node('Scale')
    Repeat2 = NodeRepeat('Repeat2', num_repeats=3)
    Mesh = Node('Mesh')

    cyclic_graph = {
        Sequence1: [MtxPushPop1, Rotate1],
        MtxPushPop1: [Sequence2],
        Rotate1: [Repeat1],
        Sequence2: [MtxPushPop2, Translate],
        Repeat1: [Sequence1],
        MtxPushPop2: [Rotate2],
        Translate: [Rotate3],
        Rotate2: [Scale],
        Rotate3: [Repeat2],
        Scale: [Mesh],
        Repeat2: [Sequence2]
    }

    dfs(cyclic_graph, Sequence1)

    print '-'*80

    a = Node('a')
    b = Node('b')
    dfs({
        a: [b],
        b: [a]
    }, a)
```
The above code is testing a couple of cases, the first would be some sort of representation of the below graph:
[](http://i.stack.imgur.com/s2DSc.png)
The second one is the simplest case of one graph containing one "infinite" loop `{a->b, b->a}`
**REQUIREMENTS**
* There won't exist such a thing like "infinite cycles", let's say when one "infinite cycle" is found, there will be a maximum threshold (global var) to indicate when to stop looping around those "pseudo-infinite cycles"
* All graph nodes are able to create cycles but there will exist a special node called `Repeat` where you can indicate how many iterations to loop around the cycle
* The above mcve I've posted is an iterative version of the traversal algorithm which **doesn't know how to deal with cyclic graphs**. Ideally the solution would be also iterative but if there exists a much better recursive solution, that'd be great
* The data structure we're talking about here shouldn't be called "directed acyclic graphs" really because in this case, **each node has its children ordered**, and in graphs node connections have no order.
* Everything can be connected to anything in the editor. You'll be able to execute any block combination and the only limitation is the execution counter, which will overflow if you made neverending loop or too many iterations.
* The algorithm will preserve start/middle/after node's method execution similarly than the above snippet
**QUESTION**
Could anyone provide some sort of solution which knows how to traverse infinite/finite cycles?
**REFERENCES**
If question is not clear yet at this point, you can read this more about this problem on this [article](http://www.bitfellas.org/e107_plugins/content/content.php?content.674), the whole idea will be using the traversal algorithm to implement a similar tool like the shown in that article.
Here's a screenshot showing up the whole power of this type of data structure I want to figure out how to traverse&run:
[](http://i.stack.imgur.com/N2OtM.jpg) | Before I start, [Run the code on CodeSkulptor!](http://www.codeskulptor.org/#user42_MYuqELNPFl_7.py) I also hope that the comments elaborate what I have done enough. If you need more explanation, look at my explanation of the *recursive* approach below the code.
```
# If you don't want global variables, remove the indentation procedures
indent = -1
MAX_THRESHOLD = 10
INF = 1 << 63
def whitespace():
    global indent
    return '| ' * (indent)
class Node:
    def __init__(self, name, num_repeats=INF):
        self.name = name
        self.num_repeats = num_repeats

    def start(self):
        global indent
        if self.name.find('Sequence') != -1:
            print whitespace()
            indent += 1
        print whitespace() + '%s_start' % self.name

    def middle(self):
        print whitespace() + '%s_middle' % self.name

    def end(self):
        global indent
        print whitespace() + '%s_end' % self.name
        if self.name.find('Sequence') != -1:
            indent -= 1
            print whitespace()
def dfs(graph, start):
    visits = {}
    frontier = []  # The stack that keeps track of nodes to visit

    # Whenever we "visit" a node, increase its visit count
    frontier.append((start, start.num_repeats))
    visits[start] = visits.get(start, 0) + 1

    while frontier:
        # parent_repeat_count usually contains vertex.repeat_count
        # But, it may contain a higher value if a repeat node is its ancestor
        vertex, parent_repeat_count = frontier.pop()

        # Special case which signifies the end
        if parent_repeat_count == -1:
            vertex.end()
            # We're done with this vertex, clear visits so that
            # if any other node calls us, we're still able to be called
            visits[vertex] = 0
            continue

        # Special case which signifies the middle
        if parent_repeat_count == -2:
            vertex.middle()
            continue

        # Send the start message
        vertex.start()

        # Add the node's end state to the stack first
        # So that it is executed last
        frontier.append((vertex, -1))

        # No more children, continue
        # Because of the above line, the end method will
        # still be executed
        if vertex not in graph:
            continue

        ## Uncomment the following line if you want to go left to right neighbor
        #### graph[vertex].reverse()

        for i, neighbor in enumerate(graph[vertex]):
            # The repeat count should propagate amongst neighbors
            # That is if the parent had a higher repeat count, use that instead
            repeat_count = max(1, parent_repeat_count)
            if neighbor.num_repeats != INF:
                repeat_count = neighbor.num_repeats

            # We've gone through at least one neighbor node
            # Append this vertex's middle state to the stack
            if i >= 1:
                frontier.append((vertex, -2))

            # If we've not visited the neighbor more times than we have to, visit it
            if visits.get(neighbor, 0) < MAX_THRESHOLD and visits.get(neighbor, 0) < repeat_count:
                frontier.append((neighbor, repeat_count))
                visits[neighbor] = visits.get(neighbor, 0) + 1
def dfs_rec(graph, node, parent_repeat_count=INF, visits={}):
    visits[node] = visits.get(node, 0) + 1

    node.start()

    if node not in graph:
        node.end()
        return

    for i, neighbor in enumerate(graph[node][::-1]):
        repeat_count = max(1, parent_repeat_count)
        if neighbor.num_repeats != INF:
            repeat_count = neighbor.num_repeats

        if i >= 1:
            node.middle()

        if visits.get(neighbor, 0) < MAX_THRESHOLD and visits.get(neighbor, 0) < repeat_count:
            dfs_rec(graph, neighbor, repeat_count, visits)

    node.end()
    visits[node] = 0
Sequence1 = Node('Sequence1')
MtxPushPop1 = Node('MtxPushPop1')
Rotate1 = Node('Rotate1')
Repeat1 = Node('Repeat1', 2)
Sequence2 = Node('Sequence2')
MtxPushPop2 = Node('MtxPushPop2')
Translate = Node('Translate')
Rotate2 = Node('Rotate2')
Rotate3 = Node('Rotate3')
Scale = Node('Scale')
Repeat2 = Node('Repeat2', 3)
Mesh = Node('Mesh')
cyclic_graph = {
    Sequence1: [MtxPushPop1, Rotate1],
    MtxPushPop1: [Sequence2],
    Rotate1: [Repeat1],
    Sequence2: [MtxPushPop2, Translate],
    Repeat1: [Sequence1],
    MtxPushPop2: [Rotate2],
    Translate: [Rotate3],
    Rotate2: [Scale],
    Rotate3: [Repeat2],
    Scale: [Mesh],
    Repeat2: [Sequence2]
}
dfs(cyclic_graph, Sequence1)
print '-'*40
dfs_rec(cyclic_graph, Sequence1)
print '-'*40
dfs({Sequence1: [Translate], Translate: [Sequence1]}, Sequence1)
print '-'*40
dfs_rec({Sequence1: [Translate], Translate: [Sequence1]}, Sequence1)
```
The input and (well formatted and indented) output can be found [here](http://www.hastebin.com/etedihayay.py). If you want to see *how* I formatted the output, please refer to the code, which can also be [found on CodeSkulptor](http://www.codeskulptor.org/#user42_MYuqELNPFl_7.py).
---
Right, on to the explanation. The easier to understand but much more inefficient recursive solution, which I'll use to help explain, follows:
```
def dfs_rec(graph, node, parent_repeat_count=INF, visits={}):
    visits[node] = visits.get(node, 0) + 1

    node.start()

    if node not in graph:
        node.end()
        return

    for i, neighbor in enumerate(graph[node][::-1]):
        repeat_count = max(1, parent_repeat_count)
        if neighbor.num_repeats != INF:
            repeat_count = neighbor.num_repeats

        if i >= 1:
            node.middle()

        if visits.get(neighbor, 0) < MAX_THRESHOLD and visits.get(neighbor, 0) < repeat_count:
            dfs_rec(graph, neighbor, repeat_count, visits)

    node.end()
    visits[node] = 0
```
1. The first thing we do is visit the node. We do this by incrementing the number of visits of the node in the dictionary.
2. We then raise the `start` event of the node.
3. We do a simple check to see if the node is a childless (leaf) node or not. If it is, we raise the `end` event and return.
4. Now that we've established that the node has neighbors, we iterate through each neighbor. **Side Note:** I reverse the neighbor list (by using `graph[node][::-1]`) in the recursive version to maintain the same order (right to left) of traversal of neighbors as in the iterative version.
1. For each neighbor, we first calculate the repeat count. The repeat count propagates (is inherited) through from the ancestor nodes, so the inherited repeat count is used *unless* the *neighbor* contains a repeat count value.
2. We raise the `middle` event of the current node (**not** the neighbor) if the second (or greater) neighbor is being processed.
3. If the neighbor can be visited, the neighbor is visited. The visitability check is done by checking whether the neighbor has been visited less than a) `MAX_THRESHOLD` times (for pseudo-infinite cycles) and b) the above calculated repeat count times.
5. We're now done with this node; raise the `end` event and clear its visits in the hashtable. This is done so that if some other node calls it again, it does not fail the visitability check and/or execute for less than the required number of times. |
How to specify "nullable" return type with type hints | 39,429,526 | 6 | 2016-09-10T18:56:49Z | 39,429,578 | 10 | 2016-09-10T19:03:06Z | [
"python",
"python-3.x",
"python-3.5",
"type-hinting"
] | Suppose I have a function:
```
def get_some_date(some_argument: int=None) -> %datetime_or_None%:
if some_argument is not None and some_argument == 1:
return datetime.utcnow()
else:
return None
```
How do I specify the return type for something that can be `None`? | Since your return type can be `datetime` (as returned from `datetime.utcnow()`) or `None` you should use `Optional[datetime]`:
```
from typing import Optional
def get_some_date(some_argument: int=None) -> Optional[datetime]:
# as defined
```
From the documentation, [`Optional`](https://docs.python.org/3/library/typing.html#typing.Optional) is shorthand for:
> `Optional[X]` is equivalent to `Union[X, None]`.
where `Union[X, Y]` means a value of type `X` or `Y`.
---
If you want to be explicit due to concerns that others might stumble on `Optional` and not realize its meaning, you could always use `Union`:
```
from typing import Union
def get_some_date(some_argument: int=None) -> Union[datetime, None]:
```
As pointed out in the comments by @Michael0x2a, `Union[T, None]` is transformed to `Union[T, type(None)]`, so there is no need to use `type` here.
Visually these might differ but programmatically in both cases the result is *exactly the same*; `Union[datetime.datetime, NoneType]` will be the type stored in `get_some_date.__annotations__`:
```
{'return': typing.Union[datetime.datetime, NoneType], 'some_argument': int}
``` |
Can I join lists with sum()? | 39,435,401 | 3 | 2016-09-11T11:03:17Z | 39,435,470 | 8 | 2016-09-11T11:13:24Z | [
"python"
] | Is it pythonic to use `sum()` for list concatenation?
```
>>> sum(([n]*n for n in range(1,5)),[])
[1, 2, 2, 3, 3, 3, 4, 4, 4, 4]
``` | No, it's not. It's actually [Schlemiel the Painter's algorithm](http://en.wikichip.org/wiki/schlemiel_the_painter%27s_algorithm): each time it wants to concatenate a new list, it has to traverse the whole accumulated list from the beginning. (For more info, read this article by Joel:
<http://www.joelonsoftware.com/articles/fog0000000319.html>)
The most pythonic way is using a list comprehension:
```
In [28]: [t for n in range(1,5) for t in [n]*n ]
Out[28]: [1, 2, 2, 3, 3, 3, 4, 4, 4, 4]
```
Or `itertools.chain`:
```
In [29]: from itertools import chain
In [32]: list(chain.from_iterable([n]*n for n in range(1,5)))
Out[32]: [1, 2, 2, 3, 3, 3, 4, 4, 4, 4]
```
Or, as a pure generator-based approach, you can use `repeat` instead of multiplying the list:
```
In [33]: from itertools import chain, repeat
# In python2.X use xrange instead of range
In [35]: list(chain.from_iterable(repeat(n, n) for n in range(1,5)))
Out[35]: [1, 2, 2, 3, 3, 3, 4, 4, 4, 4]
```
Or, if you are interested in numpy or want a super fast approach, here is one:
```
In [46]: import numpy as np
In [46]: np.repeat(np.arange(1, 5), np.arange(1, 5))
Out[46]: array([1, 2, 2, 3, 3, 3, 4, 4, 4, 4])
``` |
Replace all negative values in a list | 39,441,323 | 3 | 2016-09-11T22:37:47Z | 39,441,362 | 11 | 2016-09-11T22:43:21Z | [
"python",
"python-3.x"
] | I'm trying to solve this problem on codewars and I'm completely stumped:
```
y = [-1, -1, 2, 8, -1, 4]
z = [1,3,5]
#to create [1,3,2,8,5,4]
```
How would I do this?
I tried to do:
```
for e in range(len(y)):
try:
if y[e] < 0:
y[e] = z[e]
except:
pass
```
But this would only work if the negatives correspond to what is in z. | If you are sure that the number of negative numbers always equals the length of `z`, you can convert `z` to an iterator and use a list comprehension to create your new list:
```
In [9]: z = iter(z)
In [10]: [next(z) if i < 0 else i for i in y]
Out[10]: [1, 3, 2, 8, 5, 4]
```
Note that if the length of `z` is shorter than the number of negative numbers, it will raise a `StopIteration` error. |
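If the lengths might not match, a defensive variant (my own sketch, not part of the original answer) passes the current value to `next` as a default, so surplus negatives are simply left in place instead of raising:

```python
y = [-1, -1, 2, 8, -1, 4]
z = iter([1, 3, 5])
# next(z, i) returns i once z is exhausted, so no StopIteration can escape
result = [next(z, i) if i < 0 else i for i in y]

short = iter([1])
partial = [next(short, i) if i < 0 else i for i in y]  # later negatives stay negative
```

Here `result` is `[1, 3, 2, 8, 5, 4]` as before, while `partial` becomes `[1, -1, 2, 8, -1, 4]` because `short` ran out after one replacement.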
Python regex with \w does not work | 39,446,341 | 2 | 2016-09-12T08:39:41Z | 39,446,390 | 7 | 2016-09-12T08:43:01Z | [
"python",
"regex",
"python-3.x"
] | I want to have a regex to find a phrase and two words preceding it if there are two words.
For example I have the string (one sentence per line):
> Chevy is my car and Rusty is my horse.
> My car is very pretty my dog is red.
If i use the regex:
```
re.finditer(r'[\w+\b|^][\w+\b]my car',txt)
```
I do not get any match.
If I use the regex:
```
re.finditer(r'[\S+\s|^][\S+\s]my car',txt)
```
I am getting:
's my car' and '. My car' (I am ignoring case and using multi-line)
Why is the regex with \w+\b not finding anything? It should find two words and 'my car'
How can I get two complete words before 'my car' if there are two words. If there is only one word preceding my car, I should get it. If there are no words preceding it I should get only 'my car'. In my string example I should get: 'Chevy is my car' and 'My car' (no preceding words here) | In your `r'[\w+\b|^][\w+\b]my car'` regex, `[\w+\b|^]` matches 1 symbol that is either a word char, a `+`, a backspace, `|`, or `^`, and `[\w+\b]` matches 1 symbol that is either a word char, a `+`, or a backspace.
The point is that inside a character class, quantifiers and a lot (**but not all**) special characters match literal symbols. E.g. `[+]` matches a plus symbol, `[|^]` matches either a `|` or `^`. Since you want to match a *sequence*, you need to provide a sequence of subpatterns outside of a character class.
It seems as if you intended to use `\b` as a word boundary, however, `\b` inside a character class matches only a backspace character.
To *find two words and 'my car'*, you can use, for example
```
\S+\s+\S+\s+my car
```
See the [regex demo](https://regex101.com/r/eY4wC6/2) (here, `\S+` matches one or more non-whitespace symbols, and `\s+` matches 1 or more whitespaces, and the 2 occurrences of these 2 consecutive subpatterns match these symbols as a *sequence*).
To make the sequences before `my car` optional, just use a `{0,2}` quantifier like this:
```
(?:\S+[ \t]+){0,2}my car
```
See [this regex demo](https://regex101.com/r/eY4wC6/3) (to be used with the `re.IGNORECASE` flag). See [Python demo](https://ideone.com/B930oh):
```
import re
txt = 'Chevy is my car and Rusty is my horse.\nMy car is very pretty my dog is red.'
print(re.findall(r'(?:\S+[ \t]+){0,2}my car', txt, re.I))
```
**Details**:
* `(?:\S+[ \t]+){0,2}` - 0 to 2 sequences of 1+ non-whitespaces followed with 1+ space or tab symbols (you may also replace it with `[^\S\r\n]` to match any horizontal space or `\s` if you also plan to match linebreaks).
* `my car` - a literal text `my car`. |
What's the closest I can get to calling a Python function using a different Python version? | 39,451,822 | 18 | 2016-09-12T13:45:20Z | 39,451,894 | 10 | 2016-09-12T13:49:12Z | [
"python",
"compatibility",
"popen"
] | Say I have two files:
```
# spam.py
import library_Python3_only as l3
def spam(x,y)
return l3.bar(x).baz(y)
```
and
```
# beans.py
import library_Python2_only as l2
...
```
Now suppose I wish to call `spam` from within `beans`. It's not directly possible since both files depend on incompatible Python versions. Of course I can `Popen` a different python process, but how could I pass in the arguments and retrieve the results without too much stream-parsing pain? | Assuming the caller is Python3.5+, you have access to a nicer [subprocess](https://docs.python.org/3/library/subprocess.html) module. Perhaps you could use `subprocess.run`, and communicate via pickled Python objects sent through stdin and stdout, respectively. There would be some setup to do, but no parsing on your side, or mucking with strings etc.
Here's an example of Python2 code via subprocess.Popen
```
p = subprocess.Popen(python3_args, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
stdout, stderr = p.communicate(pickle.dumps(python3_args))
result = pickle.loads(stdout)  # loads, not load: communicate() returns bytes
``` |
What's the closest I can get to calling a Python function using a different Python version? | 39,451,822 | 18 | 2016-09-12T13:45:20Z | 39,452,583 | 11 | 2016-09-12T14:22:21Z | [
"python",
"compatibility",
"popen"
] | Say I have two files:
```
# spam.py
import library_Python3_only as l3
def spam(x,y)
return l3.bar(x).baz(y)
```
and
```
# beans.py
import library_Python2_only as l2
...
```
Now suppose I wish to call `spam` from within `beans`. It's not directly possible since both files depend on incompatible Python versions. Of course I can `Popen` a different python process, but how could I pass in the arguments and retrieve the results without too much stream-parsing pain? | Here is a complete example implementation using `subprocess` and `pickle` that I actually tested. Note that you need to use protocol version 2 explicitly for pickling on the Python 3 side (at least for the combo Python 3.5.2 and Python 2.7.3).
```
# py3bridge.py
import sys
import pickle
import importlib
import io
import traceback
import subprocess
class Py3Wrapper(object):
def __init__(self, mod_name, func_name):
self.mod_name = mod_name
self.func_name = func_name
def __call__(self, *args, **kwargs):
p = subprocess.Popen(['python3', '-m', 'py3bridge',
self.mod_name, self.func_name],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE)
stdout, _ = p.communicate(pickle.dumps((args, kwargs)))
data = pickle.loads(stdout)
if data['success']:
return data['result']
else:
raise Exception(data['stacktrace'])
def main():
try:
target_module = sys.argv[1]
target_function = sys.argv[2]
args, kwargs = pickle.load(sys.stdin.buffer)
mod = importlib.import_module(target_module)
func = getattr(mod, target_function)
result = func(*args, **kwargs)
data = dict(success=True, result=result)
except Exception:
st = io.StringIO()
traceback.print_exc(file=st)
data = dict(success=False, stacktrace=st.getvalue())
pickle.dump(data, sys.stdout.buffer, 2)
if __name__ == '__main__':
main()
```
The Python 3 module (using the `pathlib` module for the showcase)
```
# spam.py
import pathlib
def listdir(p):
return [str(c) for c in pathlib.Path(p).iterdir()]
```
The Python 2 module using `spam.listdir`
```
# beans.py
import py3bridge
delegate = py3bridge.Py3Wrapper('spam', 'listdir')
py3result = delegate('.')
print py3result
``` |
Using List/Tuple/etc. from typing vs directly referring type as list/tuple/etc | 39,458,193 | 2 | 2016-09-12T20:19:59Z | 39,458,225 | 8 | 2016-09-12T20:22:17Z | [
"python",
"python-3.5",
"typing",
"type-hinting"
] | What's the difference of using `List`, `Tuple`, etc. from `typing` module:
```
from typing import Tuple
def f(points: Tuple):
return map(do_stuff, points)
```
As opposed to referring to Python's types directly:
```
def f(points: tuple):
return map(do_stuff, points)
```
And when should I use one over the other? | `typing.Tuple` and `typing.List` are [*Generic types*](https://docs.python.org/3/library/typing.html#generics); this means you can specify what type their *contents* must be:
```
def f(points: Tuple[float, float]):
return map(do_stuff, points)
```
This specifies that the tuple passed in must contain two `float` values. You can't do this with the built-in `tuple` type.
[`typing.Tuple`](https://docs.python.org/3/library/typing.html#typing.Tuple) is special here in that it lets you specify a specific number of elements expected and the type of each position. Use ellipsis if the length is not set and the type should be repeated: `Tuple[float, ...]` describes a variable-length `tuple` with `float`s.
For [`typing.List`](https://docs.python.org/3/library/typing.html#typing.List) and other sequence types you generally only specify the type for all elements; `List[str]` is a list of strings, of any size. Note that functions should preferentially take [`typing.Sequence`](https://docs.python.org/3/library/typing.html#typing.Sequence) as arguments and `typing.List` is typically only used for return types; generally speaking most functions would take any sequence and only iterate, but when you return a `list`, you really are returning a specific, mutable sequence type.
You should always pick the `typing` generics even when you are not currently restricting the contents. It is easier to add that constraint later with a generic type as the resulting change will be smaller. |
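A small sketch tying the two points together (the function and its name are my own illustration, not from the question): annotate the parameter with the permissive `Sequence` and the return value with the concrete `List`:

```python
from typing import List, Sequence

def shout(words: Sequence[str]) -> List[str]:
    # Sequence[str] accepts any sequence of strings (list, tuple, ...);
    # the return annotation promises a concrete, mutable list
    return [w.upper() for w in words]

upper_from_tuple = shout(('a', 'b'))  # a tuple argument type-checks fine
upper_from_list = shout(['c'])
```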
Accessing the choices passed to argument in argparser? | 39,460,102 | 7 | 2016-09-12T23:19:12Z | 39,460,230 | 7 | 2016-09-12T23:35:10Z | [
"python",
"argparse"
] | Is it possible to access the tuple of choices passed to an argument? If so, how do I go about it
for example if I have
```
parser = argparse.ArgumentParser(description='choose location')
parser.add_argument(
"--location",
choices=('here', 'there', 'anywhere')
)
args = parser.parse_args()
```
can I access the tuple `('here', 'there', 'anywhere')`? | It turns out that `parser.add_argument` actually returns the associated `Action`. You can pick the choices off of that:
```
>>> import argparse
>>> parser = argparse.ArgumentParser(description='choose location')
>>> action = parser.add_argument(
... "--location",
... choices=('here', 'there', 'anywhere')
... )
>>> action.choices
('here', 'there', 'anywhere')
```
Note that (AFAIK) this isn't documented anywhere and *may* be considered an "implementation detail" and therefore subject to change without notice, etc. etc.
There also isn't any *publicly* accessible way to get at the actions stored on an `ArgumentParser` after they've been added. I believe that they are available as `parser._actions` if you're willing to go mucking about with implementation details (and assume any risks involved with that)...
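If you do accept that risk, digging the choices back out later might look like the following sketch (hedged: `_actions` is private and could change between Python versions):

```python
import argparse

parser = argparse.ArgumentParser(description='choose location')
parser.add_argument('--location', choices=('here', 'there', 'anywhere'))

# _actions is an implementation detail -- use at your own risk
location_choices = next(
    a.choices for a in parser._actions if a.dest == 'location'
)
```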
---
Your best bet is to probably create a constant for the location choices and then use that in your code:
```
LOCATION_CHOICES = ('here', 'there', 'anywhere')
parser = argparse.ArgumentParser(description='choose location')
parser.add_argument(
"--location",
choices=LOCATION_CHOICES
)
args = parser.parse_args()
# Use LOCATION_CHOICES down here...
``` |
Index of each element within list of lists | 39,462,049 | 2 | 2016-09-13T04:05:03Z | 39,462,103 | 9 | 2016-09-13T04:10:33Z | [
"python",
"list"
] | I have something like the following list of lists:
```
>>> mylist=[['A','B','C'],['D','E'],['F','G','H']]
```
I want to construct a new list of lists where each element is a tuple where the first value indicates the index of that item within its sublist, and the second value is the original value.
I can obtain this using the following code:
```
>>> final_list=[]
>>> for sublist in mylist:
... slist=[]
... for i,element in enumerate(sublist):
... slist.append((i,element))
... final_list.append(slist)
...
>>>
>>> final_list
[[(0, 'A'), (1, 'B'), (2, 'C')], [(0, 'D'), (1, 'E')], [(0, 'F'), (1, 'G'), (2, 'H')]]
>>>
```
Is there a better or more concise way to do this using list comprehension? | ```
final_list = [list(enumerate(l)) for l in mylist]
``` |
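Applied to the data from the question, this one-liner reproduces the output of the explicit nested loops:

```python
mylist = [['A', 'B', 'C'], ['D', 'E'], ['F', 'G', 'H']]
final_list = [list(enumerate(l)) for l in mylist]
# [[(0, 'A'), (1, 'B'), (2, 'C')], [(0, 'D'), (1, 'E')], [(0, 'F'), (1, 'G'), (2, 'H')]]
```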
How to read strange csv files in Pandas? | 39,462,978 | 5 | 2016-09-13T05:49:58Z | 39,463,003 | 10 | 2016-09-13T05:52:07Z | [
"python",
"csv",
"pandas"
] | I would like to read the sample csv file shown below
```
--------------
|A|B|C|
--------------
|1|2|3|
--------------
|4|5|6|
--------------
|7|8|9|
--------------
```
I tried
```
pd.read_csv("sample.csv",sep="|")
```
But it didn't work well.
How can I read this csv? | You can add the parameter `comment` to [`read_csv`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) and then remove the all-`NaN` columns with [`dropna`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html):
```
import pandas as pd
import io
temp=u"""--------------
|A|B|C|
--------------
|1|2|3|
--------------
|4|5|6|
--------------
|7|8|9|
--------------"""
#after testing replace io.StringIO(temp) to filename
df = pd.read_csv(io.StringIO(temp), sep="|", comment='-').dropna(axis=1, how='all')
print (df)
A B C
0 1 2 3
1 4 5 6
2 7 8 9
```
More general solution:
```
import pandas as pd
import io
temp=u"""--------------
|A|B|C|
--------------
|1|2|3|
--------------
|4|5|6|
--------------
|7|8|9|
--------------"""
#after testing replace io.StringIO(temp) to filename
#separator is char which is NOT in csv
df = pd.read_csv(io.StringIO(temp), sep="^", comment='-')
#remove first and last | in data and in column names
df.iloc[:,0] = df.iloc[:,0].str.strip('|')
df.columns = df.columns.str.strip('|')
#split column names
cols = df.columns.str.split('|')[0]
#split data
df = df.iloc[:,0].str.split('|', expand=True)
df.columns = cols
print (df)
A B C
0 1 2 3
1 4 5 6
2 7 8 9
``` |
Python: Accessing YAML values using "dot notation" | 39,463,936 | 3 | 2016-09-13T06:57:11Z | 39,464,072 | 7 | 2016-09-13T07:06:45Z | [
"python",
"python-3.x",
"yaml"
] | I'm using a YAML configuration file. So this is the code to load my config in Python:
```
import os
import yaml
with open('./config.yml') as file:
config = yaml.safe_load(file)
```
This code actually creates a dictionary. Now the problem is that in order to access the values I need to use tons of brackets.
YAML:
```
mysql:
user:
pass: secret
```
Python:
```
import os
import yaml
with open('./config.yml') as file:
config = yaml.safe_load(file)
print(config['mysql']['user']['pass']) # <--
```
I'd prefer something like that (dot notation):
```
config('mysql.user.pass')
```
So, my idea is to utilize the PyStache render() interface.
```
import os
import yaml
with open('./config.yml') as file:
config = yaml.safe_load(file)
import pystache
def get_config_value( yml_path, config ):
return pystache.render('{{' + yml_path + '}}', config)
get_config_value('mysql.user.pass', config)
```
Would that be a "good" solution? If not, what would be a better alternative?
**Additional question [Solved]**
I've decided to use Ilja Everilä's solution. But now I've got an additional question: How would you create a wrapper Config class around DotConf?
The following code doesn't work but I hope you get the idea what I'm trying to do:
```
class Config( DotDict ):
def __init__( self ):
with open('./config.yml') as file:
DotDict.__init__(yaml.safe_load(file))
config = Config()
print(config.django.admin.user)
```
Error:
```
AttributeError: 'super' object has no attribute '__getattr__'
```
**Solution**
You just need to pass `self` to the constructor of the super class.
```
DotDict.__init__(self, yaml.safe_load(file))
```
**Even better soltution (Ilja Everilä)**
```
super().__init__(yaml.safe_load(file))
``` | # The Simple
You could use [`reduce`](https://docs.python.org/3/library/functools.html#functools.reduce) to extract the value from the config:
```
In [41]: config = {'asdf': {'asdf': {'qwer': 1}}}
In [42]: from functools import reduce
...:
...: def get_config_value(key, cfg):
...: return reduce(lambda c, k: c[k], key.split('.'), cfg)
...:
In [43]: get_config_value('asdf.asdf.qwer', config)
Out[43]: 1
```
This solution is easy to maintain and has very few new edge cases, if your YAML uses a very limited subset of the language.
# The Correct
Use a proper YAML parser and tools, such as in [this answer](http://stackoverflow.com/a/39485868/2681632).
---
# The Convoluted
On a lighter note (not to be taken too seriously), you could create a wrapper that allows using attribute access:
```
In [47]: class DotConfig:
...:
...: def __init__(self, cfg):
...: self._cfg = cfg
...: def __getattr__(self, k):
...: v = self._cfg[k]
...: if isinstance(v, dict):
...: return DotConfig(v)
...: return v
...:
In [48]: DotConfig(config).asdf.asdf.qwer
Out[48]: 1
```
Do note that this fails for keywords, such as "as", "pass", "if" and the like.
Finally, you could get really crazy (read: probably not a good idea) and customize `dict` to handle dotted string and tuple keys as a special case, with attribute access to items thrown in the mix (with its limitations):
```
In [58]: class DotDict(dict):
...:
...: # update, __setitem__ etc. omitted, but required if
...: # one tries to set items using dot notation. Essentially
...: # this is a read-only view.
...:
...: def __getattr__(self, k):
...: try:
...: v = self[k]
...: except KeyError:
...: return super().__getattr__(k)
...: if isinstance(v, dict):
...: return DotDict(v)
...: return v
...:
...: def __getitem__(self, k):
...: if isinstance(k, str) and '.' in k:
...: k = k.split('.')
...: if isinstance(k, (list, tuple)):
...: return reduce(lambda d, kk: d[kk], k, self)
...: return super().__getitem__(k)
...:
...: def get(self, k, default=None):
...: if isinstance(k, str) and '.' in k:
...: try:
...: return self[k]
...: except KeyError:
...: return default
...: return super().get(k, default=default)
...:
In [59]: dotconf = DotDict(config)
In [60]: dotconf['asdf.asdf.qwer']
Out[60]: 1
In [61]: dotconf['asdf', 'asdf', 'qwer']
Out[61]: 1
In [62]: dotconf.asdf.asdf.qwer
Out[62]: 1
In [63]: dotconf.get('asdf.asdf.qwer')
Out[63]: 1
In [64]: dotconf.get('asdf.asdf.asdf')
In [65]: dotconf.get('asdf.asdf.asdf', 'Nope')
Out[65]: 'Nope'
``` |
Stacked bar graph with variable width elements? | 39,475,683 | 3 | 2016-09-13T17:20:33Z | 39,476,288 | 7 | 2016-09-13T17:59:03Z | [
"python",
"graph",
"ggplot2",
"data-visualization"
] | In Tableau I'm used to making graphs like the one below. It has for each day (or some other discrete variable), a stacked bar of categories of different colours, heights and widths.
You can imagine the categories to be different advertisements that I show to people. The heights correspond to the percentage of people I've shown the advertisement to, and the widths correspond to the rate of acceptance.
It allows me to see very easily which advertisements I should probably show more often (short, but wide bars, like the 'C' category on September 13th and 14th) and which I should show less often (tall, narrow bars, like the 'H' category on September 16th).
Any ideas on how I could create a graph like this in R or Python?
[](http://i.stack.imgur.com/qDSKv.jpg) | Unfortunately, this is not so trivial to achieve with `ggplot2` (I think), because `geom_bar` does not really support changing widths for the same x position. But with a bit of effort, we can achieve the same result:
### Create some fake data
```
set.seed(1234)
d <- as.data.frame(expand.grid(adv = LETTERS[1:7], day = 1:5))
d$height <- runif(7*5, 1, 3)
d$width <- runif(7*5, 0.1, 0.3)
```
My data doesn't add up to 100%, cause I'm lazy.
```
head(d, 10)
# adv day height width
# 1 A 1 1.227407 0.2519341
# 2 B 1 2.244599 0.1402496
# 3 C 1 2.218549 0.1517620
# 4 D 1 2.246759 0.2984301
# 5 E 1 2.721831 0.2614705
# 6 F 1 2.280621 0.2106667
# 7 G 1 1.018992 0.2292812
# 8 A 2 1.465101 0.1623649
# 9 B 2 2.332168 0.2243638
# 10 C 2 2.028502 0.1659540
```
### Make a new variable for stacking
We can't easily use `position_stack` I think, so we'll just do that part ourselves. Basically, we need to calculate the cumulative height for every bar, grouped by day. Using `dplyr` we can do that very easily.
```
library(dplyr)
d2 <- d %>% group_by(day) %>% mutate(cum_height = cumsum(height))
```
### Make the plot
Finally, we create the plot. Note that the `x` and `y` refer to the *middle* of the tiles.
```
library(ggplot2)
ggplot(d2, aes(x = day, y = cum_height - 0.5 * height, fill = adv)) +
geom_tile(aes(width = width, height = height), show.legend = FALSE) +
geom_text(aes(label = adv)) +
scale_fill_brewer(type = 'qual', palette = 2) +
labs(title = "Views and other stuff", y = "% of views")
```
If you don't want to play around with correctly scaling the widths (to something < 1), you can use facets instead:
```
ggplot(d2, aes(x = 1, y = cum_height - 0.5 * height, fill = adv)) +
geom_tile(aes(width = width, height = height), show.legend = FALSE) +
geom_text(aes(label = adv)) +
facet_grid(~day) +
scale_fill_brewer(type = 'qual', palette = 2) +
labs(title = "Views and other stuff", y = "% of views", x = "")
```
### Result
[](http://i.stack.imgur.com/h6CVN.png)
[](http://i.stack.imgur.com/N1i1v.png) |
Stacked bar graph with variable width elements? | 39,475,683 | 3 | 2016-09-13T17:20:33Z | 39,476,491 | 7 | 2016-09-13T18:12:03Z | [
"python",
"graph",
"ggplot2",
"data-visualization"
] | In Tableau I'm used to making graphs like the one below. It has for each day (or some other discrete variable), a stacked bar of categories of different colours, heights and widths.
You can imagine the categories to be different advertisements that I show to people. The heights correspond to the percentage of people I've shown the advertisement to, and the widths correspond to the rate of acceptance.
It allows me to see very easily which advertisements I should probably show more often (short, but wide bars, like the 'C' category on September 13th and 14th) and which I should show less often (tall, narrow bars, like the 'H' category on September 16th).
Any ideas on how I could create a graph like this in R or Python?
[](http://i.stack.imgur.com/qDSKv.jpg) | ```
set.seed(1)
days <- 5
cats <- 8
dat <- prop.table(matrix(rpois(days * cats, days), cats), 2)
bp1 <- barplot(dat, col = seq(cats))
```
[](http://i.stack.imgur.com/pvEFA.png)
```
## some width for rect
rate <- matrix(runif(days * cats, .1, .5), cats)
## calculate xbottom, xtop, ybottom, ytop
bp <- rep(bp1, each = cats)
ybot <- apply(rbind(0, dat), 2, cumsum)[-(cats + 1), ]
ytop <- apply(dat, 2, cumsum)
plot(extendrange(bp1), c(0,1), type = 'n', axes = FALSE, ann = FALSE)
rect(bp - rate, ybot, bp + rate, ytop, col = seq(cats))
text(bp, (ytop + ybot) / 2, LETTERS[seq(cats)])
axis(1, bp1, labels = format(Sys.Date() + seq(days), '%d %b %Y'), lwd = 0)
axis(2)
```
[](http://i.stack.imgur.com/lGULx.png)
Probably not very useful, but you can invert the color you are plotting so that you can actually see the labels:
```
inv_col <- function(color) {
paste0('#', apply(apply(rbind(abs(255 - col2rgb(color))), 2, function(x)
format(as.hexmode(x), 2)), 2, paste, collapse = ''))
}
inv_col(palette())
# [1] "#ffffff" "#00ffff" "#ff32ff" "#ffff00" "#ff0000" "#00ff00" "#0000ff" "#414141"
plot(extendrange(bp1), c(0,1), type = 'n', axes = FALSE, ann = FALSE)
rect(bp - rate, ybot, bp + rate, ytop, col = seq(cats), xpd = NA, border = NA)
text(bp, (ytop + ybot) / 2, LETTERS[seq(cats)], col = inv_col(seq(cats)))
axis(1, bp1, labels = format(Sys.Date() + seq(days), '%d %B\n%Y'), lwd = 0)
axis(2)
```
[](http://i.stack.imgur.com/Dgu1m.png) |
Is there an equivalent to python reduce() function in scala? | 39,482,883 | 3 | 2016-09-14T05:12:27Z | 39,483,031 | 9 | 2016-09-14T05:27:28Z | [
"python",
"scala",
"lambda",
"functional-programming",
"reduce"
] | I've just started learning Scala and functional programming and I'm trying to convert the following from Python to Scala:
```
def immutable_iterative_fibonacci(position):
if (position ==1):
return [1]
if (position == 2):
return [1,1]
next_series = lambda series, _: series + [series [-1] + series [-2]]
return reduce(next_series, range(position - 2), [1, 1])
```
I can't figure out what the equivalent of reduce in Scala is. This is what I currently have. Everything works fine except the last line.
```
def immutable_fibonacci(position: Int) : ArrayBuffer[Int] = {
if (position == 1){
return ArrayBuffer(1)
}
if (position == 2){
return ArrayBuffer(1,1)
}
var next_series = (series: ArrayBuffer[Int]) => series :+ ( series( series.size - 1) + series( series.size -2))
return reduce(next_series, 2 to position, ArrayBuffer(1,1))
}
``` | Summary of Python [`reduce`](https://docs.python.org/2/library/functions.html#reduce), for reference:
```
reduce(function, iterable[, initializer])
```
## Traversable
A good type to look at is [`Traversable`](http://www.scala-lang.org/api/current/index.html#scala.collection.Traversable), a supertype of `ArrayBuffer`. You may want to just peruse that API for a while, because there's a lot of useful stuff in there.
## Reduce
The equivalent of Python's `reduce`, when the `initializer` arg is omitted, is Scala's [`Traversable[A]#reduceLeft`](http://www.scala-lang.org/api/current/index.html#scala.collection.Traversable@reduceLeft[B>:A](op:(B,A)=>B):B):
```
reduceLeft[B >: A](op: (B, A) => B): B
```
The `iterable` arg from the Python function corresponds to the `Traversable` instance, and the `function` arg from the Python function corresponds to `op`.
Note that there are also methods named `reduce`, `reduceRight`, `reduceLeftOption`, and `reduceRightOption`, which are similar but slightly different.
## Fold
Your example, which does provide an `initializer` arg, corresponds to Scala's [`Traversable[A]#foldLeft`](http://www.scala-lang.org/api/current/index.html#scala.collection.Traversable@foldLeft[B](z:B)(op:(B,A)=>B):B):
```
foldLeft[B](z: B)(op: (B, A) => B): B
```
The `initializer` arg from the Python function corresponds to the `z` arg in `foldLeft`.
Again, note that there are some related methods named `fold` and `foldRight`.
## Fibonacci
Without changing the algorithm, here's a cleaned-up version of your code:
```
def fibonacci(position: Int): Seq[Int] =
position match {
case 1 => Vector(1)
case 2 => Vector(1, 1)
case _ =>
(2 to position).foldLeft(Vector(1, 1)) { (series, _) =>
series :+ (series(series.size - 1) + series(series.size - 2))
}
}
```
A few miscellaneous notes:
* We generally never use the `return` keyword
* Pattern matching (what we're doing with the `match` keyword) is often considered cleaner than an `if`-`else` chain
* Replace `var` (which allows multiple assignment) with `val` (which doesn't) wherever possible
* Don't use a mutable collection (`ArrayBuffer`) if you don't need to. `Vector` is a good general-purpose immutable sequence.
And while we're on the topic of collections and the Fibonacci series, for fun you may want to check out the first example in the [`Stream`](http://www.scala-lang.org/api/current/#scala.collection.immutable.Stream) documentation:
```
val fibs: Stream[BigInt] = BigInt(0) #:: BigInt(1) #::
fibs.zip(fibs.tail).map { n => n._1 + n._2 }
fibs.drop(1).take(6).mkString(" ")
// "1 1 2 3 5 8"
``` |
My if statement keeps returning 'None' for empty list | 39,513,310 | 2 | 2016-09-15T14:10:49Z | 39,513,375 | 7 | 2016-09-15T14:13:06Z | [
"python",
"list",
"function",
"if-statement"
] | I'm a beginner at coding in Python and I've been practising with exercises on CodeWars.
There's this exercise which basically wants you to recreate the display function of the "likes" on Facebook, i.e. how it shows the number of likes you have on a post etc.
Here is my code:
```
def likes(names):
for name in names:
if len(names) == 0:
return 'no one likes this'
elif len(names) == 1:
return '%s likes this' % (name)
elif len(names) == 2:
return '%s and %s like this' % (names[0], names[1])
elif len(names) == 3:
return '%s, %s and %s like this' % (names[0], names[1], names[2])
elif len(names) >= 4:
return '%s, %s and %s others like this' % (names[0], names[1], len(names) - 2)
print likes([])
print likes(['Peter'])
print likes(['Alex', 'Jacob', 'Mark', 'Max'])
```
This prints out:
```
None
Peter likes this
Alex, Jacob and 2 others like this
```
My main issue here is that my first 'if' statement is not producing the string 'no one likes this' for when the argument: [] is empty. Is there a way around this problem? | If `names` is an empty list the `for` loop won't be executed at all, which will cause the function to return `None`. You should change the structure of your function (hint: you might not even need a loop, not an explicit one at least). There is no point in having a loop and then `return` on the very first iteration. |
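One way to restructure it (a sketch of what the hint points at, not the only possible solution to the kata): branch on `len(names)` directly and drop the loop entirely:

```python
def likes(names):
    n = len(names)
    if n == 0:
        return 'no one likes this'
    if n == 1:
        return '%s likes this' % names[0]
    if n == 2:
        return '%s and %s like this' % (names[0], names[1])
    if n == 3:
        return '%s, %s and %s like this' % (names[0], names[1], names[2])
    # four or more names
    return '%s, %s and %s others like this' % (names[0], names[1], n - 2)
```

Because every branch returns, `likes([])` now yields `'no one likes this'` instead of `None`.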
Is there a Python equivalent to the C# ?. and ?? operators? | 39,534,935 | 6 | 2016-09-16T15:16:18Z | 39,535,290 | 9 | 2016-09-16T15:35:23Z | [
"python",
"ironpython"
] | For instance, in C# (starting with v6) I can say:
```
mass = (vehicle?.Mass / 10) ?? 150;
```
to set mass to a tenth of the vehicle's mass if there is a vehicle, but 150 if the vehicle is null (or has a null mass, if the Mass property is of a nullable type).
Is there an equivalent construction in Python (specifically IronPython) that I can use in scripts for my C# app?
This would be particularly useful for displaying defaults for values that can be modified by other values - for instance, I might have an armor component defined in script for my starship that is always consumes 10% of the space available on the ship it's installed on, and its other attributes scale as well, but I want to display defaults for the armor's size, hitpoints, cost, etc. so you can compare it with other ship components. Otherwise I might have to write a convoluted expression that does a null check or two, like I had to in C# before v6. | No, Python does not (yet) have NULL-coalescing operators.
There is a *proposal* ([PEP 505 – *None-aware operators*](https://www.python.org/dev/peps/pep-0505/)) to add such operators, but no consensus exists on whether or not these should be added to the language at all and, if so, what form these would take.
From the *Implementation* section:
> Given that the need for None-aware operators is questionable and the spelling of said operators is almost incendiary, the implementation details for CPython will be deferred unless and until we have a clearer idea that one (or more) of the proposed operators will be approved.
Note that Python doesn't really *have* a concept of `null`. Python names and attributes *always* reference *something*, they are never a `null` reference. `None` is just another object in Python, and the community is reluctant to make that one object so special as to need its own operators.
Until such time this gets implemented (if ever, and IronPython catches up to that Python release), you can use Python's [conditional expression](https://docs.python.org/3/reference/expressions.html#conditional-expressions) to achieve the same:
```
mass = 150 if vehicle is None or vehicle.Mass is None else vehicle.Mass / 10
``` |
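If this pattern recurs, some codebases define a tiny helper instead of repeating the conditional expression (a sketch; the name `coalesce` is our own, not a standard Python function, and `Vehicle` is a hypothetical class):

```python
def coalesce(*values):
    """Return the first argument that is not None, or None if all are."""
    for value in values:
        if value is not None:
            return value
    return None


class Vehicle:          # hypothetical stand-in for the C# object
    Mass = None


vehicle = Vehicle()
mass = coalesce(vehicle.Mass, 1500) / 10   # 150.0 when Mass is None
```

Note that, unlike `or`, this helper only falls through on `None`, so falsy-but-meaningful values such as `0` are preserved.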
Python random.random - chance of rolling 0 | 39,537,843 | 3 | 2016-09-16T18:21:09Z | 39,537,894 | 8 | 2016-09-16T18:24:02Z | [
"python",
"random"
] | As described in the [documentation](https://docs.python.org/3.4/library/random.html#random.random), random.random will "return the next random floating point number in the range [0.0, 1.0)"
So what is the chance of it returning a 0? | As per the documentation, it
> It produces 53-bit precision
And the Mersenne Twister it is based on has a huge state space, many times large than this. It also routinely passes statistical tests of bit independence (in programs designed to spot patterns in RNG output). The distribution is essentially uniform with equal probability that any bit will be 0 or 1.
The probability of getting precisely 0.0 will be 1 in 2^53 (assuming an unknown internal state) |
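You can observe the 53-bit granularity directly: in CPython, `random.random()` returns `k / 2**53` for some integer `0 <= k < 2**53` (this is a CPython implementation detail of the Mersenne Twister wrapper, so treat the sketch as illustrative rather than guaranteed by the language):

```python
import random

random.seed(12345)
for _ in range(1000):
    x = random.random()
    k = x * 2**53            # exact: multiplying by a power of two
    assert k == int(k)       # every value is an exact multiple of 2**-53
    assert 0 <= k < 2**53    # so 0.0 is one outcome out of 2**53
```

Under that model, 0.0 corresponds to exactly one of the 2**53 equally likely outcomes.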
Is it possible to detect the number of local variables declared in a function? | 39,549,078 | 2 | 2016-09-17T16:23:26Z | 39,549,096 | 8 | 2016-09-17T16:25:35Z | [
"python",
"metaprogramming"
] | In a Python test fixture, is it possible to count how many local variables a function declares in its body?
```
def foo():
a = 1
b = 2
Test.assertEqual(countLocals(foo), 2)
```
Alternatively, is there a way to see if a function declares any variables at all?
```
def foo():
a = 1
b = 2
def bar():
pass
Test.assertEqual(hasLocals(foo), True)
Test.assertEqual(hasLocals(bar), False)
```
The use case I'm thinking of has to do with validating user-submitted code. | Yes, the associated code object accounts for all local names in the `co_nlocals` attribute:
```
foo.__code__.co_nlocals
```
Demo:
```
>>> def foo():
... a = 1
... b = 2
...
>>> foo.__code__.co_nlocals
2
```
See the [*Datamodel* documentation](https://docs.python.org/3/reference/datamodel.html):
> *User-defined functions*
>
> *[...]*
>
> `__code__` The code object representing the compiled function body.
>
> *Code objects*
>
> *[...]*
>
> Special read-only attributes: *[...]* `co_nlocals` is the number of local variables used by the function (including arguments); *[...]* |
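Since `co_nlocals` includes arguments, pairing it with `co_varnames` and `co_argcount` shows exactly which names are counted, and gives one way to build the `hasLocals` check from the question:

```python
def bar(x, y=2):
    total = x + y
    return total


code = bar.__code__
assert code.co_nlocals == 3                      # x, y and total
assert code.co_varnames == ('x', 'y', 'total')   # arguments come first
assert code.co_argcount == 2                     # of which two are arguments

# "declares any variables at all" = locals beyond the arguments:
assert code.co_nlocals - code.co_argcount > 0
```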
Python sum() has a different result after importing numpy | 39,552,458 | 2 | 2016-09-17T22:49:36Z | 39,552,503 | 10 | 2016-09-17T22:55:55Z | [
"python",
"numpy",
"sum"
] | I came across this problem by Jake VanderPlas and I am not sure if my understanding of why the result differs after importing the numpy module is entirely correct.
```
>>> print(sum(range(5), -1))
9
>>> from numpy import *
>>> print(sum(range(5), -1))
10
```
It seems like in the first scenario the sum function calculates the sum over the iterable and then subtracts the second args value from the sum.
In the second scenario, after importing numpy, the behavior of the function seems to have modified as the second arg is used to specify the axis along which the sum should be performed.
Exercise number (24)
Source - <http://www.labri.fr/perso/nrougier/teaching/numpy.100/index.html> | *"the behavior of the function seems to have modified as the second arg is used to specify the axis along which the sum should be performed."*
You have basically answered your own question!
It is not technically correct to say that the behavior of the function has been *modified*. `from numpy import *` results in "shadowing" the [builtin `sum` function](https://docs.python.org/2/library/functions.html#sum) with the [numpy `sum` function](http://docs.scipy.org/doc/numpy/reference/generated/numpy.sum.html), so when you use the name `sum`, Python finds the numpy version instead of the builtin version (see @godaygo's answer for more details). These are *different* functions, with different arguments. It is generally a bad idea to use `from somelib import *`, for exactly this reason. Instead, use `import numpy as np`, and then use `np.sum` when you want the numpy function, and plain `sum` when you want the Python builtin function. |
Python: is 'int' a type or a function? | 39,562,080 | 2 | 2016-09-18T19:58:04Z | 39,562,104 | 8 | 2016-09-18T20:00:26Z | [
"python"
] | I did this in Python 3.4:
```
>>> type(int)
<class 'type'>
>>> int(0)
0
```
Now I am wondering what int actually is. Is it a type, or is it a function? Is it both? If it is both, is it also true that all types can be called like functions? | `int` is a **[class](https://docs.python.org/3.5/tutorial/classes.html)**. The type of a class is usually `type`.
And yes, *almost* all classes can be called like functions. You create what's called an **instance** which is an object that behaves as you defined in the class. They can have their own functions and have special attributes.
(`type` is also a class if you're interested, but it's a special class. It's a bit complicated, but you can read more on it if you search for metaclasses) |
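A quick check of both claims, that `int` is a class and that calling it produces instances, just like a user-defined class:

```python
assert isinstance(int, type)     # int is a class (its type is `type`)

n = int("42")                    # calling the class returns an instance
assert isinstance(n, int)
assert n == 42


class Point:                     # user-defined classes behave the same way
    pass


assert isinstance(Point(), Point)
```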
ImportError: cannot import name 'QtCore' | 39,574,639 | 11 | 2016-09-19T13:38:41Z | 39,577,184 | 11 | 2016-09-19T15:50:01Z | [
"python",
"anaconda",
"python-import",
"qtcore"
] | I am getting the below error with the following imports.
It seems to be related to pandas import. I am unsure how to debug/solve this.
Imports:
```
import pandas as pd
import numpy as np
import pdb, math, pickle
import matplotlib.pyplot as plt
```
Error:
```
In [1]: %run NN.py
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
/home/abhishek/Desktop/submission/a1/new/NN.py in <module>()
2 import numpy as np
3 import pdb, math, pickle
----> 4 import matplotlib.pyplot as plt
5
6 class NN(object):
/home/abhishek/anaconda3/lib/python3.5/site-packages/matplotlib/pyplot.py in <module>()
112
113 from matplotlib.backends import pylab_setup
--> 114 _backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
115
116 _IP_REGISTERED = None
/home/abhishek/anaconda3/lib/python3.5/site-packages/matplotlib/backends/__init__.py in pylab_setup()
30 # imports. 0 means only perform absolute imports.
31 backend_mod = __import__(backend_name,
---> 32 globals(),locals(),[backend_name],0)
33
34 # Things we pull in from all backends
/home/abhishek/anaconda3/lib/python3.5/site-packages/matplotlib/backends/backend_qt4agg.py in <module>()
16
17
---> 18 from .backend_qt5agg import FigureCanvasQTAggBase as _FigureCanvasQTAggBase
19
20 from .backend_agg import FigureCanvasAgg
/home/abhishek/anaconda3/lib/python3.5/site-packages/matplotlib/backends/backend_qt5agg.py in <module>()
14
15 from .backend_agg import FigureCanvasAgg
---> 16 from .backend_qt5 import QtCore
17 from .backend_qt5 import QtGui
18 from .backend_qt5 import FigureManagerQT
/home/abhishek/anaconda3/lib/python3.5/site-packages/matplotlib/backends/backend_qt5.py in <module>()
29 figureoptions = None
30
---> 31 from .qt_compat import QtCore, QtGui, QtWidgets, _getSaveFileName, __version__
32 from matplotlib.backends.qt_editor.formsubplottool import UiSubplotTool
33
/home/abhishek/anaconda3/lib/python3.5/site-packages/matplotlib/backends/qt_compat.py in <module>()
135 # have been changed in the above if block
136 if QT_API in [QT_API_PYQT, QT_API_PYQTv2]: # PyQt4 API
--> 137 from PyQt4 import QtCore, QtGui
138
139 try:
ImportError: cannot import name 'QtCore'
```
Debugging:
```
$ python -c "import PyQt4"
$ python -c "from PyQt4 import QtCore"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: cannot import name 'QtCore'
$ conda list | grep qt
jupyter-qtconsole-colorschemes 0.7.1 <pip>
pyqt 5.6.0 py35_0
qt 5.6.0 0
qtawesome 0.3.3 py35_0
qtconsole 4.2.1 py35_0
qtpy 1.0.2 py35_0
```
I found other answers but all related to Windows.
I am using ubuntu 16.04 with anaconda distribution of python 3. | Downgrading pyqt version 5.6.0 to 4.11.4, and qt from version 5.6.0 to 4.8.7 fixes this:
```
$ conda install pyqt=4.11.4
$ conda install qt=4.8.7
```
The issue itself is being resolved here: <https://github.com/ContinuumIO/anaconda-issues/issues/1068> |
Counting frequencies in two lists, Python | 39,579,431 | 4 | 2016-09-19T18:02:41Z | 39,579,512 | 7 | 2016-09-19T18:08:11Z | [
"python",
"list",
"frequency",
"counting"
] | I'm new to programming in Python, so please bear with my newbie question...
I have one initial list (list1) , which I have cleaned for duplicates and ended up with a list with only one of each value (list2):
list1 = [13, 19, 13, 2, 16, 6, 5, 19, 20, 21, 20, 13, 19, 13, 16],
list2 = [13, 19, 2, 16, 6, 5, 20, 21]
What I want is to count how many times each of the values in "list2" appears in "list1", but I can't figure out how to do that without getting it wrong.
The output I am looking for is something similar to this:
Number 13 is represented 1 times in list1.
........ Number 16 is represented 2 times in list1. | The easiest way is to use a counter:
```
from collections import Counter
list1 = [13, 19, 13, 2, 16, 6, 5, 19, 20, 21, 20, 13, 19, 13, 16]
c = Counter(list1)
print(c)
```
giving
```
Counter({2: 1, 5: 1, 6: 1, 13: 4, 16: 2, 19: 3, 20: 2, 21: 1})
```
So you can access the key-value pairs of the counter, representing the items and their occurrences, using the same syntax used for accessing dicts:
```
for k, v in c.items():
print('- Element {} has {} occurrences'.format(k, v))
```
giving:
```
- Element 16 has 2 occurrences
- Element 2 has 1 occurrences
- Element 19 has 3 occurrences
- Element 20 has 2 occurrences
- Element 5 has 1 occurrences
- Element 6 has 1 occurrences
- Element 13 has 4 occurrences
- Element 21 has 1 occurrences
``` |
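To produce the exact sentences asked for, iterate over the deduplicated values in `list2` and index into the counter (missing keys would simply count as 0):

```python
from collections import Counter

list1 = [13, 19, 13, 2, 16, 6, 5, 19, 20, 21, 20, 13, 19, 13, 16]
list2 = [13, 19, 2, 16, 6, 5, 20, 21]
c = Counter(list1)

for n in list2:
    print('Number {} is represented {} times in list1.'.format(n, c[n]))
```

For example this prints `Number 13 is represented 4 times in list1.` as its first line.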
Remove the first N items that match a condition in a Python list | 39,580,063 | 58 | 2016-09-19T18:46:05Z | 39,580,319 | 31 | 2016-09-19T19:03:49Z | [
"python",
"list",
"list-comprehension"
] | If I have a function `matchCondition(x)`, how can I remove the first `n` items in a Python list that match that condition?
One solution is to iterate over each item, mark it for deletion (e.g., by setting it to `None`), and then filter the list with a comprehension. This requires iterating over the list twice and mutates the data. Is there a more idiomatic or efficient way to do this?
```
n = 3
def condition(x):
return x < 5
data = [1, 10, 2, 9, 3, 8, 4, 7]
out = do_remove(data, n, condition)
print(out) # [10, 9, 8, 4, 7] (1, 2, and 3 are removed, 4 remains)
``` | Write a generator that takes the iterable, a condition, and an amount to drop. Iterate over the data and yield items that don't meet the condition. If the condition is met, increment a counter and don't yield the value. Always yield items once the counter reaches the amount you want to drop.
```
def iter_drop_n(data, condition, drop):
dropped = 0
for item in data:
if dropped >= drop:
yield item
continue
if condition(item):
dropped += 1
continue
yield item
data = [1, 10, 2, 9, 3, 8, 4, 7]
out = list(iter_drop_n(data, lambda x: x < 5, 3))
```
This does not require an extra copy of the list, only iterates over the list once, and only calls the condition once for each item. Unless you actually want to see the whole list, leave off the `list` call on the result and iterate over the returned generator directly. |
Remove the first N items that match a condition in a Python list | 39,580,063 | 58 | 2016-09-19T18:46:05Z | 39,580,621 | 59 | 2016-09-19T19:25:17Z | [
"python",
"list",
"list-comprehension"
] | If I have a function `matchCondition(x)`, how can I remove the first `n` items in a Python list that match that condition?
One solution is to iterate over each item, mark it for deletion (e.g., by setting it to `None`), and then filter the list with a comprehension. This requires iterating over the list twice and mutates the data. Is there a more idiomatic or efficient way to do this?
```
n = 3
def condition(x):
return x < 5
data = [1, 10, 2, 9, 3, 8, 4, 7]
out = do_remove(data, n, condition)
print(out) # [10, 9, 8, 4, 7] (1, 2, and 3 are removed, 4 remains)
``` | One way using [`itertools.filterfalse`](https://docs.python.org/3/library/itertools.html#itertools.filterfalse) and [`itertools.count`](https://docs.python.org/3/library/itertools.html#itertools.count):
```
from itertools import count, filterfalse
data = [1, 10, 2, 9, 3, 8, 4, 7]
output = filterfalse(lambda L, c=count(): L < 5 and next(c) < 3, data)
```
Then `list(output)`, gives you:
```
[10, 9, 8, 4, 7]
``` |
Remove the first N items that match a condition in a Python list | 39,580,063 | 58 | 2016-09-19T18:46:05Z | 39,580,831 | 24 | 2016-09-19T19:39:12Z | [
"python",
"list",
"list-comprehension"
] | If I have a function `matchCondition(x)`, how can I remove the first `n` items in a Python list that match that condition?
One solution is to iterate over each item, mark it for deletion (e.g., by setting it to `None`), and then filter the list with a comprehension. This requires iterating over the list twice and mutates the data. Is there a more idiomatic or efficient way to do this?
```
n = 3
def condition(x):
return x < 5
data = [1, 10, 2, 9, 3, 8, 4, 7]
out = do_remove(data, n, condition)
print(out) # [10, 9, 8, 4, 7] (1, 2, and 3 are removed, 4 remains)
``` | The accepted answer was a little too magical for my liking. Here's one where the flow is hopefully a bit clearer to follow:
```
def matchCondition(x):
return x < 5
def my_gen(L, drop_condition, max_drops=3):
count = 0
iterator = iter(L)
for element in iterator:
if drop_condition(element):
count += 1
if count >= max_drops:
break
else:
yield element
yield from iterator
example = [1, 10, 2, 9, 3, 8, 4, 7]
print(list(my_gen(example, drop_condition=matchCondition)))
```
It's similar to logic in [davidism](http://stackoverflow.com/a/39580319/674039) answer, but instead of checking the drop count is exceeded on every step, we just short-circuit the rest of the loop.
*Note:* If you don't have [`yield from`](https://docs.python.org/3/whatsnew/3.3.html) available, just replace it with another for loop over the remaining items in `iterator`. |
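For interpreters without `yield from` (e.g. Python 2), that last line becomes a plain loop over the leftover iterator; the rest of the generator is unchanged:

```python
def matchCondition(x):
    return x < 5


def my_gen(L, drop_condition, max_drops=3):
    count = 0
    iterator = iter(L)
    for element in iterator:
        if drop_condition(element):
            count += 1
            if count >= max_drops:
                break
        else:
            yield element
    for element in iterator:   # replaces `yield from iterator`
        yield element


example = [1, 10, 2, 9, 3, 8, 4, 7]
assert list(my_gen(example, drop_condition=matchCondition)) == [10, 9, 8, 4, 7]
```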
Can I use pandas.dataframe.isin() with a numeric tolerance parameter? | 39,602,004 | 6 | 2016-09-20T19:07:50Z | 39,602,108 | 9 | 2016-09-20T19:14:33Z | [
"python",
"pandas",
"comparison",
"floating-accuracy",
"comparison-operators"
] | I reviewed the following posts beforehand. Is there a way to use DataFrame.isin() with an approximation factor or a tolerance value? Or is there another method that could?
[How to filter the DataFrame rows of pandas by "within"/"in"?](http://stackoverflow.com/questions/12065885/how-to-filter-the-dataframe-rows-of-pandas-by-within-in)
[use a list of values to select rows from a pandas dataframe](http://stackoverflow.com/questions/12096252/use-a-list-of-values-to-select-rows-from-a-pandas-dataframe)
EX)
```
df = DataFrame({'A' : [5,6,3.3,4], 'B' : [1,2,3.2, 5]})
In : df
Out:
A B
0 5 1
1 6 2
2 3.3 3.2
3 4 5
df[df['A'].isin([3, 6], tol=.5)]
In : df
Out:
A B
1 6 2
2 3.3 3.2
``` | You can do a similar thing with [numpy's isclose](http://docs.scipy.org/doc/numpy/reference/generated/numpy.isclose.html):
```
df[np.isclose(df['A'].values[:, None], [3, 6], atol=.5).any(axis=1)]
Out:
A B
1 6.0 2.0
2 3.3 3.2
```
---
np.isclose returns this:
```
np.isclose(df['A'].values[:, None], [3, 6], atol=.5)
Out:
array([[False, False],
[False, True],
[ True, False],
[False, False]], dtype=bool)
```
It is a pairwise comparison of `df['A']`'s elements and `[3, 6]` (that's why we needed `df['A'].values[:, None]` - for broadcasting). Since you are looking for whether it is close to any one of them in the list, we call `.any(axis=1)` at the end.
---
For multiple columns, change the slice a little bit:
```
mask = np.isclose(df[['A', 'B']].values[:, :, None], [3, 6], atol=0.5).any(axis=(1, 2))
mask
Out: array([False, True, True, False], dtype=bool)
```
You can use this mask to slice the DataFrame (i.e. `df[mask]`)
---
If you want to compare `df['A']` and `df['B']` (and possible other columns) with different vectors, you can create two different masks:
```
mask1 = np.isclose(df['A'].values[:, None], [1, 2, 3], atol=.5).any(axis=1)
mask2 = np.isclose(df['B'].values[:, None], [4, 5], atol=.5).any(axis=1)
mask3 = ...
```
Then slice:
```
df[mask1 & mask2] # or df[mask1 & mask2 & mask3 & ...]
``` |
Short-circuit evaluation like Python's "and" while storing results of checks | 39,603,391 | 26 | 2016-09-20T20:42:29Z | 39,603,504 | 27 | 2016-09-20T20:51:07Z | [
"python",
"short-circuiting"
] | I have multiple expensive functions that return results. I want to return a tuple of the results of all the checks if all the checks succeed. However, if one check fails I don't want to call the later checks, like the short-circuiting behavior of `and`. I could nest `if` statements, but that will get out of hand if there are a lot of checks. How can I get the short-circuit behavior of `and` while also storing the results for later use?
```
def check_a():
# do something and return the result,
# for simplicity, just make it "A"
return "A"
def check_b():
# do something and return the result,
# for simplicity, just make it "B"
return "B"
...
```
This doesn't short-circuit:
```
a = check_a()
b = check_b()
c = check_c()
if a and b and c:
return a, b, c
```
This is messy if there are many checks:
```
if a:
b = check_b()
if b:
c = check_c()
if c:
return a, b, c
```
Is there a shorter way to do this? | Just use a plain old for loop:
```
results = {}
for function in [check_a, check_b, ...]:
results[function.__name__] = result = function()
if not result:
break
```
The results will be a mapping of the function name to their return values, and you can do what you want with the values after the loop breaks.
Use an `else` clause on the for loop if you want special handling for the case where all of the functions have returned truthy results. |
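The `else` clause mentioned above runs only when the loop finishes without `break`, which makes the all-passed case explicit. A sketch with dummy checks (the check functions here are stand-ins, not from the question):

```python
def check_a():
    return "A"


def check_b():
    return ""        # falsy: this check fails


def check_c():
    return "C"       # never called


results = {}
for function in [check_a, check_b, check_c]:
    results[function.__name__] = result = function()
    if not result:
        break
else:
    print("all checks passed:", results)

# check_b failed, so check_c was short-circuited:
assert list(results) == ["check_a", "check_b"]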
Short-circuit evaluation like Python's "and" while storing results of checks | 39,603,391 | 26 | 2016-09-20T20:42:29Z | 39,603,506 | 9 | 2016-09-20T20:51:16Z | [
"python",
"short-circuiting"
] | I have multiple expensive functions that return results. I want to return a tuple of the results of all the checks if all the checks succeed. However, if one check fails I don't want to call the later checks, like the short-circuiting behavior of `and`. I could nest `if` statements, but that will get out of hand if there are a lot of checks. How can I get the short-circuit behavior of `and` while also storing the results for later use?
```
def check_a():
# do something and return the result,
# for simplicity, just make it "A"
return "A"
def check_b():
# do something and return the result,
# for simplicity, just make it "B"
return "B"
...
```
This doesn't short-circuit:
```
a = check_a()
b = check_b()
c = check_c()
if a and b and c:
return a, b, c
```
This is messy if there are many checks:
```
if a:
b = check_b()
if b:
c = check_c()
if c:
return a, b, c
```
Is there a shorter way to do this? | Write a function that takes an iterable of functions to run. Call each one and append the result to a list, or return `None` if the result is `False`. Either the function will stop calling further checks after one fails, or it will return the results of all the checks.
```
def all_or_none(checks, *args, **kwargs):
out = []
for check in checks:
rv = check(*args, **kwargs)
if not rv:
return None
out.append(rv)
return out
```
```
rv = all_or_none((check_a, check_b, check_c))
# rv is a list if all checks passed, otherwise None
if rv is not None:
return rv
```
```
def check_a(obj):
...
def check_b(obj):
...
# pass arguments to each check, useful for writing reusable checks
rv = all_or_none((check_a, check_b), obj=my_object)
``` |
Pandas: remove encoding from the string | 39,609,426 | 3 | 2016-09-21T06:56:08Z | 39,609,639 | 7 | 2016-09-21T07:07:57Z | [
"python",
"python-2.7",
"pandas"
] | I have the following data frame:
```
str_value
0 Mock%20the%20Week
1 law
2 euro%202016
```
There are many such special characters such as `%20%`, `%2520`, etc..How do I remove them all. I have tried the following but the dataframe is large and I am not sure how many such different characters are there.
```
dfSearch['str_value'] = dfSearch['str_value'].str.replace('%2520', ' ')
dfSearch['str_value'] = dfSearch['str_value'].str.replace('%20', ' ')
``` | You can use the `urllib` library and apply it using [`map`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html) method of a series.
Example -
```
In [23]: import urllib
In [24]: dfSearch["str_value"].map(lambda x:urllib.unquote(x).decode('utf8'))
Out[24]:
0 Mock the Week
1 law
2 euro 2016
``` |
Why does the floating-point value of 4*0.1 look nice in Python 3 but 3*0.1 doesn't? | 39,618,943 | 148 | 2016-09-21T14:07:21Z | 39,619,388 | 73 | 2016-09-21T14:26:14Z | [
"python",
"floating-point",
"rounding",
"floating-accuracy",
"ieee-754"
] | I know that most decimals don't have an exact floating point representation ([Is floating point math broken?](http://stackoverflow.com/questions/588004)).
But I don't see why `4*0.1` is printed nicely as `0.4`, but `3*0.1` isn't, when
both values actually have ugly decimal representations:
```
>>> 3*0.1
0.30000000000000004
>>> 4*0.1
0.4
>>> from decimal import Decimal
>>> Decimal(3*0.1)
Decimal('0.3000000000000000444089209850062616169452667236328125')
>>> Decimal(4*0.1)
Decimal('0.40000000000000002220446049250313080847263336181640625')
``` | `repr` (and `str` in Python 3) will put out as many digits as required to make the value unambiguous. In this case the result of the multiplication `3*0.1` isn't the closest value to 0.3 (0x1.3333333333333p-2 in hex), it's actually one LSB higher (0x1.3333333333334p-2) so it needs more digits to distinguish it from 0.3.
On the other hand, the multiplication `4*0.1` *does* get the closest value to 0.4 (0x1.999999999999ap-2 in hex), so it doesn't need any additional digits.
You can verify this quite easily:
```
>>> 3*0.1 == 0.3
False
>>> 4*0.1 == 0.4
True
```
I used hex notation above because it's nice and compact and shows the bit difference between the two values. You can do this yourself using e.g. `(3*0.1).hex()`. If you'd rather see them in all their decimal glory, here you go:
```
>>> Decimal(3*0.1)
Decimal('0.3000000000000000444089209850062616169452667236328125')
>>> Decimal(0.3)
Decimal('0.299999999999999988897769753748434595763683319091796875')
>>> Decimal(4*0.1)
Decimal('0.40000000000000002220446049250313080847263336181640625')
>>> Decimal(0.4)
Decimal('0.40000000000000002220446049250313080847263336181640625')
``` |
Why does the floating-point value of 4*0.1 look nice in Python 3 but 3*0.1 doesn't? | 39,618,943 | 148 | 2016-09-21T14:07:21Z | 39,619,467 | 287 | 2016-09-21T14:30:11Z | [
"python",
"floating-point",
"rounding",
"floating-accuracy",
"ieee-754"
] | I know that most decimals don't have an exact floating point representation ([Is floating point math broken?](http://stackoverflow.com/questions/588004)).
But I don't see why `4*0.1` is printed nicely as `0.4`, but `3*0.1` isn't, when
both values actually have ugly decimal representations:
```
>>> 3*0.1
0.30000000000000004
>>> 4*0.1
0.4
>>> from decimal import Decimal
>>> Decimal(3*0.1)
Decimal('0.3000000000000000444089209850062616169452667236328125')
>>> Decimal(4*0.1)
Decimal('0.40000000000000002220446049250313080847263336181640625')
``` | The simple answer is because `3*0.1 != 0.3` due to quantization (roundoff) error (whereas `4*0.1 == 0.4` because multiplying by a power of two is usually an "exact" operation).
You can use the `.hex` method in Python to view the internal representation of a number (basically, the *exact* binary floating point value, rather than the base-10 approximation). This can help to explain what's going on under the hood.
```
>>> (0.1).hex()
'0x1.999999999999ap-4'
>>> (0.3).hex()
'0x1.3333333333333p-2'
>>> (0.1*3).hex()
'0x1.3333333333334p-2'
>>> (0.4).hex()
'0x1.999999999999ap-2'
>>> (0.1*4).hex()
'0x1.999999999999ap-2'
```
0.1 is 0x1.999999999999a times 2^-4. The "a" at the end means the digit 10 - in other words, 0.1 in binary floating point is *very slightly* larger than the "exact" value of 0.1 (because the final 0x0.99 is rounded up to 0x0.a). When you multiply this by 4, a power of two, the exponent shifts up (from 2^-4 to 2^-2) but the number is otherwise unchanged, so `4*0.1 == 0.4`.
However, when you multiply by 3, the little tiny difference between 0x0.99 and 0x0.a0 (0x0.07) magnifies into a 0x0.15 error, which shows up as a one-digit error in the last position. This causes 0.1\*3 to be *very slightly* larger than the rounded value of 0.3.
Python 3's float `repr` is designed to be *round-trippable*, that is, the value shown should be exactly convertible into the original value. Therefore, it cannot display `0.3` and `0.1*3` exactly the same way, or the two *different* numbers would end up the same after round-tripping. Consequently, Python 3's `repr` engine chooses to display one with a slight apparent error. |
Why does the floating-point value of 4*0.1 look nice in Python 3 but 3*0.1 doesn't? | 39,618,943 | 148 | 2016-09-21T14:07:21Z | 39,623,207 | 19 | 2016-09-21T17:42:10Z | [
"python",
"floating-point",
"rounding",
"floating-accuracy",
"ieee-754"
] | I know that most decimals don't have an exact floating point representation ([Is floating point math broken?](http://stackoverflow.com/questions/588004)).
But I don't see why `4*0.1` is printed nicely as `0.4`, but `3*0.1` isn't, when
both values actually have ugly decimal representations:
```
>>> 3*0.1
0.30000000000000004
>>> 4*0.1
0.4
>>> from decimal import Decimal
>>> Decimal(3*0.1)
Decimal('0.3000000000000000444089209850062616169452667236328125')
>>> Decimal(4*0.1)
Decimal('0.40000000000000002220446049250313080847263336181640625')
``` | Here's a simplified conclusion from other answers.
> If you check a float on Python's command line or print it, it goes through function `repr` which creates its string representation.
>
> Starting with version 3.2, Python's `str` and `repr` use a complex rounding scheme, which prefers
> nice-looking decimals if possible, but uses more digits where
> necessary to guarantee bijective (one-to-one) mapping between floats
> and their string representations.
>
> This scheme guarantees that the value of `repr(float(s))` looks nice for simple
> decimals, even if they can't be
> represented precisely as floats (e.g. when `s = "0.1"`).
>
> At the same time it guarantees that `float(repr(x)) == x` holds for every float `x` |
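The round-trip guarantee is easy to verify, and it explains why the two products must print differently, since they are two distinct floats:

```python
for x in (0.1, 0.3, 3 * 0.1, 4 * 0.1):
    assert float(repr(x)) == x                 # repr round-trips exactly

assert 3 * 0.1 != 0.3                          # distinct floats...
assert repr(3 * 0.1) == '0.30000000000000004'  # ...so repr must tell them apart
assert 4 * 0.1 == 0.4                          # same float...
assert repr(4 * 0.1) == '0.4'                  # ...so the short form suffices
```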
Is there any way to print **kwargs in Python | 39,623,889 | 2 | 2016-09-21T18:23:04Z | 39,623,954 | 7 | 2016-09-21T18:26:44Z | [
"python"
] | I am just curious about `**kwargs`. I am just started learning it, So while going through all the question on stackoverflow and video tutorials I notice we can do like this
```
def print_dict(**kwargs):
print(kwargs)
print_dict(x=1,y=2,z=3)
```
Which gives output as :`{'y': 2, 'x': 1, 'z': 3}`
So I figures why not do the reverse and print the like `x=1,y=2,z=3`
So I tried this:
```
mydict = {'x':1,'y':2,'z':3}
print(**mydict)
```
But I got an error like : `TypeError: 'z' is an invalid keyword argument for this function`(sometimes it shows 'y' is an invalid keyword).
I also tried like assign it to the variable and then print it but again i got an error (`SyntaxError: invalid syntax`):
```
mydict = {'x':1,'y':2,'z':3}
var = **mydict
print(var)
```
See this is working:
```
def print_dict(**this):
print(this)
mydict = {'x':1,'y':2,'z':3}
print_dict(**mydict)
```
But instead of `print(this)` if i do `print(**this)` it gives the error.
As we can print `*arg` as I try this code:
```
def num(*tuple_num):
print(tuple_num)
print(*tuple_num)
num(1,2,3,4)
```
It run perfectly and gives output as:
```
(1, 2, 3, 4)
1 2 3 4
```
So I want to know is there any possible solution/way to print `**kwargs` ? | The syntax `callable(**dictionary)` *applies* the dictionary as if you used separate keyword arguments.
So your example:
```
mydict = {'x':1,'y':2,'z':3}
print(**mydict)
```
Is internally translated to:
```
print(x=1, y=2, z=3)
```
where the exact ordering depends on the current random hash seed. Since `print()` doesn't support those keyword arguments the call fails.
The other `print()` call succeeds, because you passed in the values as separate *positional* arguments:
```
tuple_num = (1, 2, 3, 4)
print(*tuple_num)
```
is effectively the same as:
```
print(1, 2, 3, 4)
```
and the `print()` function supports separate arguments by writing them out one by one with the `sep` value in between (which is a space by default).
The `**dictionary` is not valid syntax outside of a call. Since `callable(**dictionary)` is part of the call syntax, and not an object, there is *nothing to print*.
At most, you can *format* the dictionary to look like the call:
```
print(', '.join(['{}={!r}'.format(k, v) for k, v in mydict.items()]))
``` |
Zen of Python: Errors should never pass silently. Why does zip work the way it does? | 39,628,456 | 6 | 2016-09-22T00:30:01Z | 39,628,603 | 9 | 2016-09-22T00:51:29Z | [
"python"
] | I use python's function zip a lot in my code (mostly to create dicts like below)
```
dict(zip(list_a, list_b))
```
I find it really useful, but sometimes it frustrates me because I end up with a situation where list\_a is a different length to list\_b. zip just goes ahead and zips together the two lists until it achieves a zipped list that is the same length as the shorter list, ignoring the rest of the longer list. This seems like it should be treated as an error in most circumstances, which according to the zen of python should never pass silently.
Given that this is such an integral function, I'm curious as to why it's been designed this way? Why isn't it treated as an error if you try to zip together two lists of different lengths? | ## Reason 1: Historical Reason
`zip` allows unequal-length arguments because it was meant to improve upon `map` by *allowing* unequal-length arguments. This behavior is the reason `zip` exists at all.
Here's how you did `zip` before it existed:
```
>>> a = (1, 2, 3)
>>> b = (4, 5, 6)
>>> for i in map(None, a, b): print i
...
(1, 4)
(2, 5)
(3, 6)
>>> map(None, a, b)
[(1, 4), (2, 5), (3, 6)]
```
This is terrible behaviour, and *does not* support unequal-length lists. This was a major design concern, which you can see plain-as-day in [the official PEP proposing `zip` for the first time](https://www.python.org/dev/peps/pep-0201/#lockstep-for-loops):
> While the map() idiom is a common one in Python, it has several
> disadvantages:
>
> * It is non-obvious to programmers without a functional programming
> background.
> * The use of the magic `None` first argument is non-obvious.
> * It has arbitrary, often unintended, and inflexible semantics when the
> lists are not of the same length - the shorter sequences are padded
> with `None` :
>
> `>>> c = (4, 5, 6, 7)`
>
> `>>> map(None, a, c)`
>
> `[(1, 4), (2, 5), (3, 6), (None, 7)]`
So, no, this behaviour would not be treated as an error - it is *why it was designed in the first place*.
---
## Reason 2: Practical Reason
Because it is pretty useful, is clearly specified and doesn't have to be thought of as an error at all.
By allowing unequal lengths, `zip` only requires that its arguments conform to the [iterator protocol](http://stackoverflow.com/questions/16301253/what-exactly-is-pythons-iterator-protocol). This allows `zip` to be extended to generators, tuples, dictionary keys and literally anything in the world that implements `__next__()` and `__iter__()`, precisely because it doesn't inquire about length.
This is significant, because generators *do not* support `len()` and thus there is no way to check the length beforehand. Add a check for length, and you break `zip`'s ability to work on generators, *when it should*. That's a fairly serious disadvantage, wouldn't you agree?
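For instance, a generator works fine as a `zip` input precisely because its length is never asked for. A minimal sketch (Python 3, where `zip` returns an iterator):

```python
def countdown():
    # a generator: calling len(countdown()) would raise TypeError
    yield 3
    yield 2
    yield 1

# zip cannot know the lengths up front; it simply stops at the shortest input
print(list(zip(countdown(), 'ab')))  # -> [(3, 'a'), (2, 'b')]
```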
---
## Reason 3: By Fiat
Guido van Rossum wanted it this way:
> *Optional padding.* An earlier version of this PEP proposed an optional pad keyword argument, which would be used when the argument sequences were not the same length. This is similar behavior to the map(None, ...) semantics except that the user would be able to specify pad object. **This has been rejected by the BDFL in favor of always truncating to the shortest sequence, because of the KISS principle.** If there's a true need, it is easier to add later. If it is not needed, it would still be impossible to delete it in the future.
KISS trumps everything. |
Slice list of lists without numpy | 39,644,517 | 2 | 2016-09-22T16:39:55Z | 39,644,552 | 7 | 2016-09-22T16:41:36Z | [
"python",
"list",
"slice"
] | In Python, how could I slice my list of lists and get a sub list of lists without numpy?
For example, get a list of lists from A[1][1] to A[2][2] and store it in B:
```
A = [[1, 2, 3, 4 ],
[11, 12, 13, 14],
[21, 22, 23, 24],
[31, 32, 33, 34]]
B = [[12, 13],
[22, 23]]
``` | You can *slice* `A` and its sublists:
```
In [1]: A = [[1, 2, 3, 4 ],
...: [11, 12, 13, 14],
...: [21, 22, 23, 24],
...: [31, 32, 33, 34]]
In [2]: B = [l[1:3] for l in A[1:3]]
In [3]: B
Out[3]: [[12, 13], [22, 23]]
``` |
Why does Python 3 exec() fail when specifying locals? | 39,647,566 | 9 | 2016-09-22T19:39:34Z | 39,647,647 | 10 | 2016-09-22T19:44:01Z | [
"python",
"python-3.x",
"exec",
"python-exec"
] | The following executes without an error in Python 3:
```
code = """
import math
def func(x):
return math.sin(x)
func(10)
"""
_globals = {}
exec(code, _globals)
```
But if I try to capture the local variable dict as well, it fails with a `NameError`:
```
>>> _globals, _locals = {}, {}
>>> exec(code, _globals, _locals)
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-9-aeda81bf0af1> in <module>()
----> 1 exec(code, {}, {})
<string> in <module>()
<string> in func(x)
NameError: name 'math' is not defined
```
Why is this happening, and how can I execute this code while capturing both global and local variables? | From the [`exec()` documentation](https://docs.python.org/3/library/functions.html#exec):
> Remember that at module level, globals and locals are the same dictionary. If `exec` gets two separate objects as *globals* and *locals*, the code will be executed as if it were embedded in a class definition.
You passed in two separate dictionaries, but tried to execute code that requires module-scope globals to be available. `import math` in a class would produce a *local scope attribute*, and the function you create won't be able to access that as class scope names are not considered for function closures.
See [*Naming and binding*](https://docs.python.org/3/reference/executionmodel.html#naming-and-binding) in the Python execution model reference:
> Class definition blocks and arguments to `exec()` and `eval()` are special in the context of name resolution. A class definition is an executable statement that may use and define names. These references follow the normal rules for name resolution with an exception that unbound local variables are looked up in the global namespace. The namespace of the class definition becomes the attribute dictionary of the class. The scope of names defined in a class block is limited to the class block; it does not extend to the code blocks of methods[.]
You can reproduce the error by trying to execute the code in a class definition:
```
>>> class Demo:
... import math
... def func(x):
... return math.sin(x)
... func(10)
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 5, in Demo
File "<stdin>", line 4, in func
NameError: name 'math' is not defined
```
Just pass in *one* dictionary. |
Is there a difference between str function and percent operator in Python | 39,665,286 | 9 | 2016-09-23T16:14:31Z | 39,665,338 | 16 | 2016-09-23T16:17:33Z | [
"python",
"python-2.7"
] | When converting an object to a string in python, I saw two different idioms:
A: `mystring = str(obj)`
B: `mystring = "%s" % obj`
Is there a difference between those two? (Reading the Python docs, I would suspect no, because the latter case would implicitly call `str(obj)` to convert `obj` to a string.
If yes, when should I use which?
If no, which one should I prefer in "good" python code? (From the python philosophy "explicit over implicit", A would be considered the better one?) | The second version does more work.
The `%s` operator calls `str()` on the value it interpolates, but it also has to parse the template string first to find the placeholder in the first place.
Unless your template string contains *more text*, there is no point in asking Python to spend more cycles on the `"%s" % obj` expression.
However, paradoxically, the `str()` conversion is, in practice, slower as looking up the name `str()` and pushing the stack to call the function takes more time than the string parsing:
```
>>> from timeit import timeit
>>> timeit('str(obj)', 'obj = 4.524')
0.32349491119384766
>>> timeit('"%s" % obj', 'obj = 4.524')
0.27424097061157227
```
You can recover most of that difference by binding `str` to a local name first:
```
>>> timeit('_str(obj)', 'obj = 4.524; _str = str')
0.28351712226867676
```
To most Python developers, using the string templating option is going to be confusing as `str()` is far more straightforward. Stick to the function unless you have a critical section that does a lot of string conversions. |
Python vectorizing nested for loops | 39,667,089 | 11 | 2016-09-23T18:10:01Z | 39,667,342 | 14 | 2016-09-23T18:26:59Z | [
"python",
"numpy",
"vectorization"
] | I'd appreciate some help in finding and understanding a pythonic way to optimize the following array manipulations in nested for loops:
```
def _func(a, b, radius):
"Return 0 if a>b, otherwise return 1"
if distance.euclidean(a, b) < radius:
return 1
else:
return 0
def _make_mask(volume, roi, radius):
mask = numpy.zeros(volume.shape)
for x in range(volume.shape[0]):
for y in range(volume.shape[1]):
for z in range(volume.shape[2]):
mask[x, y, z] = _func((x, y, z), roi, radius)
return mask
```
Where `volume.shape` (182, 218, 200) and `roi.shape` (3,) are both `ndarray` types; and `radius` is an `int` | **Approach #1**
Here's a vectorized approach -
```
m,n,r = volume.shape
x,y,z = np.mgrid[0:m,0:n,0:r]
X = x - roi[0]
Y = y - roi[1]
Z = z - roi[2]
mask = X**2 + Y**2 + Z**2 < radius**2
```
Possible improvement : We can probably speedup the last step with `numexpr` module -
```
import numexpr as ne
mask = ne.evaluate('X**2 + Y**2 + Z**2 < radius**2')
```
**Approach #2**
We can also gradually build the three ranges corresponding to the shape parameters and perform the subtraction against the three elements of `roi` on the fly without actually creating the meshes as done earlier with `np.mgrid`. This would be benefited by the use of [`broadcasting`](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) for efficiency purposes. The implementation would look like this -
```
m,n,r = volume.shape
vals = ((np.arange(m)-roi[0])**2)[:,None,None] + \
((np.arange(n)-roi[1])**2)[:,None] + ((np.arange(r)-roi[2])**2)
mask = vals < radius**2
```
Simplified version : Thanks to @Bi Rico for suggesting an improvement here as we can use `np.ogrid` to perform those operations in a bit more concise manner, like so -
```
m,n,r = volume.shape
x,y,z = np.ogrid[0:m,0:n,0:r]-roi
mask = (x**2+y**2+z**2) < radius**2
```
---
**Runtime test**
Function definitions -
```
def vectorized_app1(volume, roi, radius):
m,n,r = volume.shape
x,y,z = np.mgrid[0:m,0:n,0:r]
X = x - roi[0]
Y = y - roi[1]
Z = z - roi[2]
return X**2 + Y**2 + Z**2 < radius**2
def vectorized_app1_improved(volume, roi, radius):
m,n,r = volume.shape
x,y,z = np.mgrid[0:m,0:n,0:r]
X = x - roi[0]
Y = y - roi[1]
Z = z - roi[2]
return ne.evaluate('X**2 + Y**2 + Z**2 < radius**2')
def vectorized_app2(volume, roi, radius):
m,n,r = volume.shape
vals = ((np.arange(m)-roi[0])**2)[:,None,None] + \
((np.arange(n)-roi[1])**2)[:,None] + ((np.arange(r)-roi[2])**2)
return vals < radius**2
def vectorized_app2_simplified(volume, roi, radius):
m,n,r = volume.shape
x,y,z = np.ogrid[0:m,0:n,0:r]-roi
return (x**2+y**2+z**2) < radius**2
```
Timings -
```
In [106]: # Setup input arrays
...: volume = np.random.rand(90,110,100) # Half of original input sizes
...: roi = np.random.rand(3)
...: radius = 3.4
...:
In [107]: %timeit _make_mask(volume, roi, radius)
1 loops, best of 3: 41.4 s per loop
In [108]: %timeit vectorized_app1(volume, roi, radius)
10 loops, best of 3: 62.3 ms per loop
In [109]: %timeit vectorized_app1_improved(volume, roi, radius)
10 loops, best of 3: 47 ms per loop
In [110]: %timeit vectorized_app2(volume, roi, radius)
100 loops, best of 3: 4.26 ms per loop
In [139]: %timeit vectorized_app2_simplified(volume, roi, radius)
100 loops, best of 3: 4.36 ms per loop
```
So, as always `broadcasting` showing its magic for a crazy almost **`10,000x`** speedup over the original code and more than **`10x`** better than creating meshes by using on-the-fly broadcasted operations! |
Why does the result variable update itself? | 39,667,572 | 2 | 2016-09-23T18:42:19Z | 39,667,609 | 8 | 2016-09-23T18:45:19Z | [
"python",
"datetime"
] | I have the following code:
`result = datetime.datetime.now() - datetime.timedelta(seconds=60)`
```
>>> result.utcnow().isoformat()
'2016-09-23T18:39:34.174406'
>>> result.utcnow().isoformat()
'2016-09-23T18:40:18.240571'
```
Somehow the variable is being updated... and I have no clue as to how or how to stop it. What is this called? How do I prevent it?
Thank you! | `result` is a `datetime` object
`datetime.utcnow()` is a class method, so calling it through any `datetime` instance ignores that instance and simply returns the current UTC time.
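A quick sketch that makes the distinction visible (the short `sleep` is only there so the two `utcnow()` timestamps differ):

```python
import datetime
import time

result = datetime.datetime.now() - datetime.timedelta(seconds=60)
frozen = result.isoformat()          # instance method: formats the stored value

a = result.utcnow()                  # class method: ignores `result` entirely
time.sleep(0.05)
b = result.utcnow()

assert b > a                         # utcnow() yields a fresh timestamp each call
assert result.isoformat() == frozen  # the object itself never changed
```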
`result` is not changing at all; it is `utcnow()` that returns a new value on every call. Use `result.isoformat()` to format the stored value. |
Can someone explain this expression: a[len(a):] = [x] equivalent to list.append(x) | 39,689,099 | 2 | 2016-09-25T16:20:52Z | 39,689,182 | 9 | 2016-09-25T16:28:57Z | [
"python",
"list",
"python-3.x"
] | I'm at the very beginning of learning Python 3. Getting to know the language basics. There is a method to the list data type:
```
list.append(x)
```
and in the tutorial it is said to be equivalent to this expression:
```
a[len(a):] = [x]
```
Can someone please explain this expression? I can't grasp the **len(a):** part. It's a slice right? From the last item to the last? Can't make sense of it.
I'm aware this is very newbie, sorry. I'm determined to learn Python for Blender scripting and the Game Engine, and want to understand well all the constructs. | Think back to how slices work: `a[beginning:end]`.
If you omit one of them, you get everything from `beginning` onward, or everything up to `end`.
That means if I ask for `a[2:]`, I get the list from index `2` all the way to the end. `len(a)` is the index right after the last element, so `a[len(a):]` is an empty slice positioned right after the last element of the list.
Say you have `a = [0,1,2]`, and you do `a[3:] = [3,4,5]`, what you're telling Python is that right after `[0,1,2` and right before `]`, there should be `3,4,5`.
Thus `a` will become `[0,1,2,3,4,5]` and after that step `a[3:]` will indeed be equal to `[3,4,5]` just as you declared.
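You can verify the equivalence directly:

```python
a = [0, 1, 2]
a[len(a):] = [3]     # assign into the empty slice at the end...
b = [0, 1, 2]
b.append(3)          # ...which is exactly what append does
print(a, b, a == b)  # -> [0, 1, 2, 3] [0, 1, 2, 3] True
```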
Edit: as chepner commented, any index greater than or equal to `len(a)` will work just as well. For instance, `a = [0,1,2]` and `a[42:] = [3,4,5]` will also result in `a` becoming `[0,1,2,3,4,5]`. |
linked list output not expected in Python 2.7 | 39,695,046 | 2 | 2016-09-26T05:06:47Z | 39,695,130 | 10 | 2016-09-26T05:16:27Z | [
"python",
"python-2.7"
] | Implement a linked list and I expect output to be `0, -1, -2, -3, ... etc.`, but it is `-98, -98, -98, -98, ... etc.`, wondering what is wrong in my code? Thanks.
```
MAXSIZE = 100
freeListHead = None
class StackNode:
def __init__(self, value, nextNode):
self.value = value
self.nextNode = nextNode
if __name__ == "__main__":
# initialization for nodes and link them to be a free list
nodes=[StackNode(-1, None)] * MAXSIZE
freeListHead = nodes[0]
for i in range(0, len(nodes)-1):
nodes[i].nextNode = nodes[i+1]
nodes[i].value = -i
for node in nodes:
# output -98, -98, -98, -98, ... etc.
# expected output, 0, -1, -2, -3, ... etc.
print node.value
``` | This is the problem:
```
# initialization for nodes and link them to be a free list
nodes=[StackNode(-1, None)] * MAXSIZE
```
When you use the multiply operator, it will create multiple *references* to the **same** object, as noted [in this StackOverflow answer](http://stackoverflow.com/a/2785963/895932). So changing one node's value (as in `nodes[i].value = -i`) will affect every node since every item in the list points to the same object.
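Here is a stripped-down illustration of that aliasing, using a hypothetical `Box` class in place of `StackNode`:

```python
class Box:
    def __init__(self, value):
        self.value = value

shared = [Box(-1)] * 3                   # three references to ONE object
assert shared[0] is shared[1] is shared[2]
shared[0].value = 99
print([b.value for b in shared])         # -> [99, 99, 99]

separate = [Box(-1) for _ in range(3)]   # three distinct objects
separate[0].value = 99
print([b.value for b in separate])       # -> [99, -1, -1]
```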
In that same linked answer, the solution is to use [list comprehension](http://stackoverflow.com/documentation/python/5265/list-comprehensions#t=201609260515199970598&a=syntax), like this:
```
nodes = [StackNode(-1, None) for i in range(MAXSIZE)]
```
Also, note that you did not set the value of the last element, so the output (after the fix I suggested above) will be:
```
0, -1, -2, ..., -98, -1
``` |
Which one is good practice about python formatted string? | 39,696,818 | 4 | 2016-09-26T07:16:31Z | 39,696,863 | 14 | 2016-09-26T07:18:55Z | [
"python",
"file"
] | Suppose I have a file on `/home/ashraful/test.txt`. Simply I just want to open the file.
Now my question is:
which one is good practice?
**Solution 1:**
```
dir = "/home/ashraful/"
fp = open("{0}{1}".format(dir, 'test.txt'), 'r')
```
**Solution 2:**
```
dir = "/home/ashraful/"
fp = open(dir + 'test.txt', 'r')
```
Both ways let me open the file.
Thanks :) | Instead of concatenating strings, use `os.path.join` and `os.path.expanduser` to generate the path and open the file (assuming you are trying to open a file in your home directory):
```
with open(os.path.join(os.path.expanduser('~'), 'test.txt')) as fp:
# do your stuff with file
``` |
hash function that outputs integer from 0 to 255? | 39,702,457 | 2 | 2016-09-26T12:11:09Z | 39,702,481 | 10 | 2016-09-26T12:12:16Z | [
"python",
"hash",
"integer"
] | I need a very simple hash function in Python that will convert a string to an integer from 0 to 255.
For example:
```
>>> hash_function("abc_123")
32
>>> hash_function("any-string-value")
99
```
It does not matter what the integer is as long as I get the same integer every time I call the function.
I want to use the integer to generate a random subnet mask based on the name of the network. | You could just use the modulus of the [`hash()` function](https://docs.python.org/3/library/functions.html#hash) output:
```
def onebyte_hash(s):
return hash(s) % 256
```
This is what dictionaries and sets use (hash modulus the internal table size).
Demo:
```
>>> onebyte_hash('abc_123')
182
>>> onebyte_hash('any-string-value')
12
```
Caveat: On Python 3.3 and up, *hash randomisation* is enabled by default, and *between restarts of Python* you'll get different values. The hash, then, is only stable if you don't restart the Python process or set [`PYTHONHASHSEED`](https://docs.python.org/3/using/cmdline.html#envvar-PYTHONHASHSEED) to a fixed decimal number (with `0` disabling it altogether). On Python 2 and 3.0 through to 3.2 hash randomisation is either not available or only enabled if you set a seed explicitly.
Another alternative is to use [`hashlib.md5()`](https://docs.python.org/3/library/hashlib.html#hashlib.md5) and take the integer value of the first byte:
```
import hashlib
try:
# Python 2; Python 3 will throw an exception here as bytes are required
hashlib.md5('')
def onebyte_hash(s):
return ord(hashlib.md5(s).digest()[0])
except TypeError:
# Python 3; encode the string first, return first byte
def onebyte_hash(s):
return hashlib.md5(s.encode('utf8')).digest()[0]
```
MD5 is a well-established cryptographic hash; the output is stable across Python versions and independent of hash randomisation.
The disadvantage of the latter would be that it'd be marginally slower; Python caches string hashes on the string object, so retrieving the hash later on is fast and cheap most of the time. |
The Pythonic way to grow a list of lists | 39,716,492 | 8 | 2016-09-27T05:18:07Z | 39,716,772 | 8 | 2016-09-27T05:40:05Z | [
"python",
"list",
"nested-lists"
] | I have a large file (2GB) of categorical data (mostly "Nan"--but populated here and there with actual values) that is too large to read into a single data frame. I had a rather difficult time coming up with a object to store all the unique values for each column (Which is my goal--eventually I need to factorize this for modeling)
What I ended it up doing was reading the file in chunks into a dataframe and then get the unique values of each column and store them in a list of lists. My solution works, but seemed most un-pythonic--is there a cleaner way to accomplish this in Python (ver 3.5). I do know the number of columns (~2100).
```
import pandas as pd
#large file of csv separated text data
data=pd.read_csv("./myratherlargefile.csv",chunksize=100000, dtype=str)
collist=[]
master=[]
i=0
initialize=0
for chunk in data:
#so the first time through I have to make the "master" list
if initialize==0:
for col in chunk:
#thinking about this, i should have just dropped this col
if col=='Id':
continue
else:
#use pd.unique as a build in solution to get unique values
collist=chunk[col][chunk[col].notnull()].unique().tolist()
master.append(collist)
i=i+1
#but after first loop just append to the master-list at
#each master-list element
if initialize==1:
for col in chunk:
if col=='Id':
continue
else:
collist=chunk[col][chunk[col].notnull()].unique().tolist()
for item in collist:
master[i]=master[i]+collist
i=i+1
initialize=1
i=0
```
after that, my final task for all the unique values is as follows:
```
i=0
names=chunk.columns.tolist()
for item in master:
master[i]=list(set(item))
master[i]=master[i].append(names[i+1])
i=i+1
```
thus master[i] gives me the column name and then a list of unique values--crude but it does work--my main concern is building the list in a "better" way if possible. | Instead of a `list` of `list`s, I would suggest using a [`collections.defaultdict(set)`](https://docs.python.org/2/library/collections.html#collections.defaultdict).
Say you start with
```
uniques = collections.defaultdict(set)
```
Now the loop can become something like this:
```
for chunk in data:
for col in chunk:
uniques[col] = uniques[col].union(chunk[col].unique())
```
Note that:
1. `defaultdict` always has a `set` for `uniques[col]` (that's what it's there for), so you can skip `initialized` and stuff.
2. For a given `col`, you simply update the entry with the union of the current set (which initially is empty, but it doesn't matter) and the new unique elements.
**Edit**
As Raymond Hettinger notes (thanks!), it is better to use
```
uniques[col].update(chunk[col].unique())
``` |
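To see the pattern without pandas, here is a sketch with plain lists standing in for the dataframe chunks (the `chunks` data below is made up for illustration):

```python
from collections import defaultdict

uniques = defaultdict(set)
chunks = [
    {'color': ['red', 'nan', 'blue']},   # stand-in for the first dataframe chunk
    {'color': ['blue', 'green']},        # stand-in for the second chunk
]
for chunk in chunks:
    for col, values in chunk.items():
        # update() accepts any iterable, just like chunk[col].unique()
        uniques[col].update(v for v in values if v != 'nan')

print(sorted(uniques['color']))  # -> ['blue', 'green', 'red']
```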
Not nesting version of @atomic() in Django? | 39,719,567 | 15 | 2016-09-27T08:17:37Z | 39,721,631 | 7 | 2016-09-27T09:56:10Z | [
"python",
"django",
"postgresql",
"transactions",
"acid"
] | From the [docs of atomic()](https://docs.djangoproject.com/en/dev/topics/db/transactions/#django.db.transaction.atomic)
> atomic blocks can be nested
This sound like a great feature, but in my use case I want the opposite: I want the transaction to be durable as soon as the block decorated with `@atomic()` gets left successfully.
Is there a way to ensure durability in django's transaction handling?
# Background
Transaction are ACID. The "D" stands for durability. That's why I think transactions can't be nested without loosing feature "D".
Example: If the inner transaction is successful, but the outer transaction is not, then the outer and the inner transaction get rolled back. The result: The inner transaction was not durable.
I use PostgreSQL, but AFAIK this should not matter much. | You can't do that through any API.
Transactions can't be nested while retaining all ACID properties, and not all databases support nested transactions.
Only the outermost atomic block creates a transaction. Inner atomic blocks create a savepoint inside the transaction, and release or roll back the savepoint when exiting the inner block. As such, inner atomic blocks provide atomicity, but as you noted, not e.g. durability.
Since the outermost atomic block creates a transaction, it *must* provide atomicity, and you can't commit a nested atomic block to the database if the containing transaction is not committed.
The only way to ensure that the inner block is committed, is to make sure that the code in the transaction finishes executing without any errors. |
python's `timeit` doesn't always scale linearly with number? | 39,732,027 | 9 | 2016-09-27T18:37:25Z | 39,732,127 | 8 | 2016-09-27T18:43:06Z | [
"python",
"performance",
"optimization",
"timeit"
] | I'm running Python 2.7.10 on a 16GB, 2.7GHz i5, OSX 10.11.5 machine.
I've observed this phenomenon many times in many different types of examples, so the example below, though a bit contrived, is representative. It's just what I happened to be working on earlier today when my curiosity finally piqued.
```
>>> timeit('unicodedata.category(chr)', setup = 'import unicodedata, random; chr=unichr(random.randint(0,50000))', number=100)
3.790855407714844e-05
>>> timeit('unicodedata.category(chr)', setup = 'import unicodedata, random; chr=unichr(random.randint(0,50000))', number=1000)
0.0003371238708496094
>>> timeit('unicodedata.category(chr)', setup = 'import unicodedata, random; chr=unichr(random.randint(0,50000))', number=10000)
0.014712810516357422
>>> timeit('unicodedata.category(chr)', setup = 'import unicodedata, random; chr=unichr(random.randint(0,50000))', number=100000)
0.029777050018310547
>>> timeit('unicodedata.category(chr)', setup = 'import unicodedata, random; chr=unichr(random.randint(0,50000))', number=1000000)
0.21139287948608398
```
You'll notice that, from 100 to 1000, there's a factor of 10 increase in the time, as expected. However, 1e3 to 1e4, it's more like a factor of 50, and then a factor of 2 from 1e4 to 1e5 (so a total factor of 100 from 1e3 to 1e5, which is expected).
I'd figured that there must be some sort of caching-based optimization going on either in the actual process being timed or in `timeit` itself, but I can't quite figure out empirically whether this is the case. The imports don't seem to matter, as can be observed this with a most basic example:
```
>>> timeit('1==1', number=10000)
0.0005490779876708984
>>> timeit('1==1', number=100000)
0.01579904556274414
>>> timeit('1==1', number=1000000)
0.04653501510620117
```
where from 1e4 to 1e6 there's a true factor of 1e2 time difference, but the intermediate steps are ~30 and ~3.
I could do more ad hoc data collection but I haven't got a hypothesis in mind at this point.
Any notion as to why the non-linear scale at certain intermediate numbers of runs? | This has to do with a smaller number of runs not being accurate enough to get the timing resolution you want.
As you increase the number of runs, the ratio between the times approaches the ratio between the number of runs:
```
>>> def timeit_ratio(a, b):
... return timeit('unicodedata.category(chr)', setup = 'import unicodedata, random; chr=unichr(random.randint(0,50000))', number=a) / timeit('unicodedata.category(chr)', setup = 'import unicodedata, random; chr=unichr(random.randint(0,50000))', number=b)
>>> for i in range(32):
... r = timeit_ratio(2**(i+1), 2**i)
... print 2**i, 2**(i+1), r, abs(r - 2)**2 # mean squared error
...
1 2 3.0 1.0
2 4 1.0 1.0
4 8 1.5 0.25
8 16 1.0 1.0
16 32 0.316455696203 2.83432142285
32 64 2.04 0.0016
64 128 1.97872340426 0.000452693526483
128 256 2.05681818182 0.00322830578512
256 512 1.93333333333 0.00444444444444
512 1024 2.01436781609 0.000206434139252
1024 2048 2.18793828892 0.0353208004422
2048 4096 1.98079658606 0.000368771106961
4096 8192 2.11812990721 0.0139546749772
8192 16384 2.15052027269 0.0226563524921
16384 32768 1.93783596324 0.00386436746641
32768 65536 2.28126901347 0.0791122579397
65536 131072 2.18880312306 0.0356466192769
131072 262144 1.8691643357 0.0171179710535
262144 524288 2.02883451562 0.000831429291038
524288 1048576 1.98259818317 0.000302823228866
1048576 2097152 2.088684654 0.00786496785554
2097152 4194304 2.02639479643 0.000696685278755
4194304 8388608 1.98014042724 0.000394402630024
8388608 16777216 1.98264956218 0.000301037692533
``` |
Find maximum value and index in a python list? | 39,748,916 | 3 | 2016-09-28T13:26:26Z | 39,749,005 | 7 | 2016-09-28T13:30:00Z | [
"python",
"list",
"max"
] | I have a python list that is like this,
```
[[12587961, 0.7777777777777778], [12587970, 0.5172413793103449], [12587979, 0.3968253968253968], [12587982, 0.88], [12587984, 0.8484848484848485], [12587992, 0.7777777777777778], [12587995, 0.8070175438596491], [12588015, 0.4358974358974359], [12588023, 0.8985507246376812], [12588037, 0.5555555555555555], [12588042, 0.9473684210526315]]
```
This list can be up to a thousand elements in length, how can I get the maximum value in the list according to the second item in the sub-array, and get the index of the maximum value which is the first element in the sub-array in python? | Use the [`max`](https://docs.python.org/3/library/functions.html#max) function and its `key` parameter to compare the elements of the list by their second item only.
For example,
```
>>> data = [[12587961, 0.7777777777777778], [12587970, 0.5172413793103449], [12587979, 0.3968253968253968].... [12588042, 0.9473684210
526315]]
>>> max(data, key=lambda item: item[1])
[12588042, 0.9473684210526315]
```
Now, if you want just the first element, then you can simply get the first element alone, or just unpack the result, like this
```
>>> index, value = max(data, key=lambda item: item[1])
>>> index
12588042
>>> value
0.9473684210526315
```
---
Edit: If you want to find the maximum index (first value) out of all elements with the maximum value (second value), then you can do it like this
```
>>> _, max_value = max(data, key=lambda item: item[1])
>>> max(index for index, value in data if value == max_value)
```
You can do the same in a single iteration, like this
```
max_index = float("-inf")
max_value = float("-inf")
for index, value in data:
if value > max_value:
max_value = value
max_index = index
elif value == max_value:
max_index = max(max_index, index)
``` |
How to split text into chunks minimizing the solution? | 39,750,879 | 8 | 2016-09-28T14:47:06Z | 39,752,628 | 7 | 2016-09-28T16:09:29Z | [
"python",
"string",
"algorithm",
"split",
"computer-science"
] | **OVERVIEW**
I got a set of possible valid chunks I can use to split a text (if possible).
How can I split a given text using these chunks such that the result is optimized (minimized) in terms of the number of resulting chunks?
**TEST SUITE**
```
if __name__ == "__main__":
import random
import sys
random.seed(1)
# 1) Testing robustness
examples = []
sys.stdout.write("Testing correctness...")
N = 50
large_number = "3141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481"
for i in range(100):
for j in range(i):
choices = random.sample(range(i), j)
examples.append((choices, large_number))
for (choices, large_number) in examples:
get_it_done(choices, large_number)
sys.stdout.write("OK")
# 2) Testing correctness
examples = [
# Example1 ->
# Solution ['012345678910203040506070', '80', '90', '100', '200', '300', '400', '500', '600', '700', '800', '900']
(
[
"0", "1", "2", "3", "4", "5", "6", "7", "8", "9",
"10", "20", "30", "40", "50", "60", "70", "80", "90",
"100", "200", "300", "400", "500", "600", "700", "800", "900",
"012345678910203040506070"
],
"0123456789102030405060708090100200300400500600700800900"
),
# Example2
## Solution ['100']
(
["0", "1", "10", "100"],
"100"
),
# Example3
## Solution ['101234567891020304050', '6070809010020030040050', '0600700800900']
(
[
"10", "20", "30", "40", "50", "60", "70", "80", "90",
"012345678910203040506070",
"101234567891020304050",
"6070809010020030040050",
"0600700800900"
],
"10123456789102030405060708090100200300400500600700800900"
),
# Example4
### Solution ['12', '34', '56', '78', '90']
(
[
"12", "34", "56", "78", "90",
"890",
],
"1234567890"
),
# Example5
## Solution ['12', '34']
(
[
"1", "2", "3",
"12", "23", "34"
],
"1234"
),
# Example6
## Solution ['100', '10']
(
["0", "1", "10", "100"],
"10010"
)
]
score = 0
for (choices, large_number) in examples:
res = get_it_done(choices, large_number)
flag = "".join(res) == large_number
print("{0}\n{1}\n{2} --> {3}".format(
large_number, "".join(res), res, flag))
print('-' * 80)
score += flag
print(
"Score: {0}/{1} = {2:.2f}%".format(score, len(examples), score / len(examples) * 100))
# 3) TODO: Testing optimization, it should provide (if possible)
# minimal cases
```
**QUESTION**
How could I solve this problem on python without using a brute-force approach? | Using dynamic programming, you can construct a list `(l0, l1, l2, ... ln-1)`, where `n` is the number of characters in your input string and `li` is the minimum number of chunks you need to arrive at character `i` of the input string. The overall structure would look as follows:
```
minValues := list with n infinity entries
for i from 0 to n-1
for every choice c that is a suffix of input[0..i]
if i - len(c) < 0
newVal = 1
else
newVal = minValues[i - len(c)] + 1
end if
if(newVal < minValues[i])
minValues[i] = newVal
//optionally record the used chunk
end if
next
next
```
The minimum number of chunks for your entire string is then `ln-1`. You can get the actual chunks by tracing back through the list (which requires recording the used chunks).
Retrieving the choices that are suffixes can be sped up using a trie (of the reversed choice strings). The worst case complexity will still be `O(n * c * lc)`, where `n` is the length of the input string, `c` is the number of choices, and `lc` is the maximum length of the choices. However, this complexity will only occur for choices that are nested suffixes (e.g. `0`, `10`, `010`, `0010`...). In this case, the trie will degenerate to a list. On average, the run time should be much less. Under the assumption that the number of retrieved choices from the trie is always a small constant, it is `O(n * lc)` (actually, the `lc` factor is probably also smaller).
Here is an example:
```
choices = ["0","1","10","100"]
text = "10010"
algorithm step    content of minValues
                  0        1        2         3       4
---------------------------------------------------------
initialize        (∞,      ∞,       ∞,        ∞,      ∞     )
i = 0, c = "1"    (1 "1",  ∞,       ∞,        ∞,      ∞     )
i = 1, c = "0"    (1 "1",  2 "0",   ∞,        ∞,      ∞     )
i = 1, c = "10"   (1 "1",  1 "10",  ∞,        ∞,      ∞     )
i = 2, c = "0"    (1 "1",  1 "10",  2 "0",    ∞,      ∞     )
i = 2, c = "100"  (1 "1",  1 "10",  1 "100",  ∞,      ∞     )
i = 3, c = "1"    (1 "1",  1 "10",  1 "100",  2 "1",  ∞     )
i = 4, c = "0"    (1 "1",  1 "10",  1 "100",  2 "1",  3 "0" )
i = 4, c = "10"   (1 "1",  1 "10",  1 "100",  2 "1",  2 "10")
```
Meaning: We can compose the string with 2 chunks. Tracing back gives the chunks in reverse order: "10", "100". |
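A runnable Python translation of the pseudocode above might look like this (a sketch: the suffix lookup scans all choices naively rather than using the trie optimization mentioned above, and the function name is my own):

```python
def min_chunks(choices, text):
    """Return a minimal list of chunks from `choices` that concatenates
    to `text`, or None if no decomposition exists."""
    INF = float("inf")
    n = len(text)
    min_values = [INF] * n   # min_values[i]: fewest chunks covering text[0..i]
    used = [None] * n        # chunk chosen at position i, for backtracking
    for i in range(n):
        for c in choices:
            # c must end exactly at position i
            if text[:i + 1].endswith(c):
                prev = i - len(c)
                new_val = 1 if prev < 0 else min_values[prev] + 1
                if new_val < min_values[i]:
                    min_values[i] = new_val
                    used[i] = c
    if min_values[n - 1] == INF:
        return None
    # Trace back through `used` to recover the chunks in order.
    chunks, i = [], n - 1
    while i >= 0:
        chunks.append(used[i])
        i -= len(used[i])
    return chunks[::-1]

print(min_chunks(["0", "1", "10", "100"], "10010"))  # ['100', '10']
```

This reproduces the trace above: two chunks, `"100"` followed by `"10"`.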
comparing strings in list to strings in list | 39,757,126 | 7 | 2016-09-28T20:32:09Z | 39,757,182 | 8 | 2016-09-28T20:35:20Z | [
"python",
"list"
] | I see that the code below can check if a word is in a list:
```
list1 = 'this'
compSet = [ 'this','that','thing' ]
if any(list1 in s for s in compSet): print(list1)
```
Now I want to check if a word in a list is in some other list as below:
```
list1 = ['this', 'and', 'that' ]
compSet = [ 'check','that','thing' ]
```
What's the best way to check whether the words in list1 are in compSet, and then do something with the non-existing elements, e.g., appending 'and' to compSet or deleting 'and' from list1?
\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_update\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_
I just found that doing the same thing is not working with sys.path. The code below sometimes works to add the path to sys.path, and sometimes not.
```
myPath = '/some/my path/is here'
if not any( myPath in s for s in sys.path):
sys.path.insert(0, myPath)
```
Why is this not working? Also, if I want to do the same operation on a set of my paths,
```
myPaths = [ '/some/my path/is here', '/some/my path2/is here' ...]
```
How can I do it? | There is a simple way to check for the intersection of two lists: convert them to a set and use `intersection`:
```
>>> list1 = ['this', 'and', 'that' ]
>>> compSet = [ 'check','that','thing' ]
>>> set(list1).intersection(compSet)
{'that'}
```
You can also use bitwise operators:
Intersection:
```
>>> set(list1) & set(compSet)
{'that'}
```
Union:
```
>>> set(list1) | set(compSet)
{'this', 'and', 'check', 'thing', 'that'}
```
You can make any of these results a list using `list()`. |
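For the second part of the question — acting on the words that are *not* in `compSet` — set difference works the same way. A sketch:

```python
list1 = ['this', 'and', 'that']
compSet = ['check', 'that', 'thing']

missing = set(list1) - set(compSet)             # words in list1 but not in compSet
compSet.extend(missing)                         # e.g. append them to compSet...
list1 = [w for w in list1 if w not in missing]  # ...or drop them from list1

print(sorted(missing))  # ['and', 'this']
```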
Can't install psycopg2 package through pip install... Is this because of Sierra? | 39,767,810 | 2 | 2016-09-29T10:29:47Z | 39,800,677 | 11 | 2016-09-30T22:01:18Z | [
"python",
"pip",
"psycopg2"
] | I am working on a project for one of my lectures and I need to download the package psycopg2 in order to work with the postgresql database in use. Unfortunately, when I try to pip install psycopg2 the following error pops up:
```
ld: library not found for -lssl
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command '/usr/bin/clang' failed with exit status 1
ld: library not found for -lssl
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command '/usr/bin/clang' failed with exit status 1
```
Does anyone know why this is happening? Is it because Sierra has not supported some packages? Thanks in advance! | I fixed this by installing Command Line Tools
```
xcode-select --install
```
then installing openssl via Homebrew and manually linking my homebrew-installed openssl to pip:
```
env LDFLAGS="-I/usr/local/opt/openssl/include -L/usr/local/opt/openssl/lib" pip install psycopg2
```
on macOS Sierra 10.12.1 |
How to identify a string as being a byte literal? | 39,778,978 | 7 | 2016-09-29T19:59:29Z | 39,779,024 | 13 | 2016-09-29T20:02:36Z | [
"python",
"string",
"python-3.x"
] | In Python 3, if I have a string such that:
```
print(some_str)
```
yields something like this:
```
b'This is the content of my string.\r\n'
```
I know it's a byte literal.
Is there a function that can be used to determine if that string is in byte literal format (versus having, say, the Unicode `'u'` prefix) without first interpreting? Or is there another best practice for handling this? I have a situation wherein getting a byte literal string needs to be dealt with differently than if it's in Unicode. In theory, something like this:
```
if is_byte_literal(some_str):
// handle byte literal case
else:
// handle unicode case
``` | The easiest and, arguably, best way to do this would be by utilizing the built-in [`isinstance`](https://docs.python.org/3/library/functions.html#isinstance) with the `bytes` type:
```
some_str = b'hello world'
if isinstance(some_str, bytes):
print('bytes')
elif isinstance(some_str, str):
print('str')
else:
# handle
```
Since a byte literal will *always* be an instance of `bytes`, `isinstance(some_str, bytes)` will, of course, evaluate to `True`. |
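If the two branches ultimately need a common representation, decoding the bytes is the usual next step. A sketch (the helper name is my own, and it assumes UTF-8 input):

```python
def to_text(value, encoding="utf-8"):
    """Normalize bytes or str input to str."""
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value

print(to_text(b"This is the content of my string.\r\n").strip())
print(to_text("already text"))
```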
How to get lineno of "end-of-statement" in Python ast | 39,779,538 | 35 | 2016-09-29T20:35:58Z | 39,945,762 | 7 | 2016-10-09T16:17:34Z | [
"python",
"abstract-syntax-tree"
] | I am trying to work on a script that manipulates another script in Python, the script to be modified has structure like:
```
class SomethingRecord(Record):
description = 'This records something'
author = 'john smith'
```
I use `ast` to locate the `description` line number, and I use some code to change the original file with new description string base on the line number. So far so good.
Now the only issue is `description` occasionally is a multi-line string, e.g.
```
description = ('line 1'
'line 2'
'line 3')
```
or
```
description = 'line 1' \
'line 2' \
'line 3'
```
and I only have the line number of the first line, not the following lines. So my one-line replacer would do
```
description = 'new value'
'line 2' \
'line 3'
```
and the code is broken. I figured that if I know both the lineno of start and end/number of lines of `description` assignment I could repair my code to handle such situation. How do I get such information with Python standard library? | I looked at the other answers; it appears people are doing backflips to get around the problems of computing line numbers, when your real problem is one of modifying the code. That suggests the baseline machinery is not helping you the way you really need.
If you use a [program transformation system (PTS)](https://en.wikipedia.org/wiki/Program_transformation), you could avoid a lot of this nonsense.
A good PTS will parse your source code to an AST, and then let you apply source-level rewrite rules to modify the AST, and will finally convert the modified AST back into source text. Generically PTSes accept transformation rules of essentially this form:
```
if you see *this*, replace it by *that*
```
[A parser that builds an AST is NOT a PTS. They don't allow rules like this; you can write ad hoc code to hack at the tree, but that's usually pretty awkward. Nor do they do the AST-to-source-text regeneration.]
My PTS (see bio), called DMS, could accomplish this. OP's specific example would be accomplished easily by using the following rewrite rule:
```
source domain Python; -- tell DMS the syntax of pattern left hand sides
target domain Python; -- tell DMS the syntax of pattern right hand sides
rule replace_description(e: expression): statement -> statement =
" description = \e "
->
" description = ('line 1'
'line 2'
'line 3')";
```
The one transformation rule is given a name, *replace\_description*, to distinguish it from all the other rules we might define. The rule parameters (e: expression) indicate the pattern will allow an arbitrary expression as defined by the source language. *statement->statement* means the rule maps a statement in the source language to a statement in the target language; we could use any other syntax category from the Python grammar provided to DMS. The **"** used here is a *metaquote*, used to distinguish the syntax of the rule language from the syntax of the subject language. The second **->** separates the source pattern *this* from the target pattern *that*.
You'll notice that there is no need to mention line numbers. The PTS converts the rule surface syntax into corresponding ASTs by actually parsing the patterns with the same parser used to parse the source file. The ASTs produced for the patterns are used to effect the pattern match/replacement. Because this is driven from ASTs, the actual layout of the original code (spacing, linebreaks, comments) doesn't affect DMS's ability to match or replace. Comments aren't a problem for matching because they are attached to tree nodes rather than being tree nodes; they are preserved in the transformed program. DMS does capture line and precise column information for all tree elements; it is just not needed to implement transformations. Code layout is also preserved in the output by DMS, using that line/column information.
Other PTSes offer generally similar capabilities. |
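As a side note (not part of the answer above): if staying within the Python standard library is acceptable, `ast` nodes in Python 3.8+ carry `end_lineno` and `end_col_offset` attributes, which answer the original question directly. A minimal sketch:

```python
import ast

source = """\
class SomethingRecord:
    description = ('line 1'
                   'line 2'
                   'line 3')
"""

tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, ast.Assign):
        # The assignment spans lines 2 through 4 of the source.
        print(node.lineno, node.end_lineno)
```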
How can i simplify this condition in python? | 39,781,887 | 9 | 2016-09-30T00:22:10Z | 39,781,928 | 7 | 2016-09-30T00:27:48Z | [
"python",
"python-3.x",
"if-statement",
"condition",
"simplify"
] | Do you know a simpler way to achieve the same result as this?
I have this code:
```
color1 = input("Color 1: ")
color2 = input("Color 2: ")
if ((color1=="blue" and color2=="yellow") or (color1=="yellow" and color2=="blue")):
print("{0} + {1} = Green".format(color1, color2))
```
I also tried with this:
```
if (color1 + color2 =="blueyellow" or color1 + color2 =="yellowblue")
``` | Don't miss the *bigger picture*. Here is a better way to approach the problem in general.
What if you defined a "mixes" dictionary with mixes of colors as keys and the resulting colors as values?
One idea for implementation is to use immutable by nature [`frozenset`](https://docs.python.org/3/library/stdtypes.html#frozenset)s as mapping keys:
```
mixes = {
frozenset(['blue', 'yellow']): 'green'
}
color1 = input("Color 1: ")
color2 = input("Color 2: ")
mix = frozenset([color1, color2])
if mix in mixes:
print("{0} + {1} = {2}".format(color1, color2, mixes[mix]))
```
This way you may easily *scale* the solution up, add different mixes without having multiple if/else nested conditions. |
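To illustrate the scaling point, here is a sketch with a couple of extra mixes (the added color pairs and the helper function are illustrative, not part of the original answer):

```python
mixes = {
    frozenset(['blue', 'yellow']): 'green',
    frozenset(['red', 'yellow']): 'orange',
    frozenset(['red', 'blue']): 'purple',
}

def mix_colors(color1, color2):
    # Order-insensitive lookup; unknown pairs fall back to a default.
    return mixes.get(frozenset([color1, color2]), 'unknown mix')

print(mix_colors('yellow', 'red'))   # orange
print(mix_colors('blue', 'red'))     # purple
print(mix_colors('blue', 'black'))   # unknown mix
```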
How can i simplify this condition in python? | 39,781,887 | 9 | 2016-09-30T00:22:10Z | 39,781,933 | 19 | 2016-09-30T00:28:37Z | [
"python",
"python-3.x",
"if-statement",
"condition",
"simplify"
] | Do you know a simpler way to achieve the same result as this?
I have this code:
```
color1 = input("Color 1: ")
color2 = input("Color 2: ")
if ((color1=="blue" and color2=="yellow") or (color1=="yellow" and color2=="blue")):
print("{0} + {1} = Green".format(color1, color2))
```
I also tried with this:
```
if (color1 + color2 =="blueyellow" or color1 + color2 =="yellowblue")
``` | You can use `set`s for comparison.
> Two sets are equal if and only if every element of each set is contained in the other
```
In [35]: color1 = "blue"
In [36]: color2 = "yellow"
In [37]: {color1, color2} == {"blue", "yellow"}
Out[37]: True
In [38]: {color2, color1} == {"blue", "yellow"}
Out[38]: True
``` |
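Applied to the original snippet, the set comparison collapses the four-clause condition into one line. A sketch (inputs hard-coded here instead of `input()` for demonstration):

```python
color1 = "yellow"
color2 = "blue"
if {color1, color2} == {"blue", "yellow"}:
    result = "{0} + {1} = Green".format(color1, color2)
else:
    result = "no known mix"
print(result)  # yellow + blue = Green
```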
Python button functions oddly not doing the same | 39,785,577 | 12 | 2016-09-30T07:03:15Z | 39,886,705 | 9 | 2016-10-06T02:45:36Z | [
"python",
"raspberry-pi",
"gpio"
] | I currently have 2 buttons hooked up to my Raspberry Pi (these are the ones with ring LED's in them) and I'm trying to perform this code
```
#!/usr/bin/env python
import RPi.GPIO as GPIO
import time
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
GPIO.setup(17, GPIO.OUT) #green LED
GPIO.setup(18, GPIO.OUT) #red LED
GPIO.setup(4, GPIO.IN, GPIO.PUD_UP) #green button
GPIO.setup(27, GPIO.IN, GPIO.PUD_UP) #red button
def remove_events():
GPIO.remove_event_detect(4)
GPIO.remove_event_detect(27)
def add_events():
GPIO.add_event_detect(4, GPIO.FALLING, callback=green, bouncetime=800)
GPIO.add_event_detect(27, GPIO.FALLING, callback=red, bouncetime=800)
def red(pin):
remove_events()
GPIO.output(17, GPIO.LOW)
print "red pushed"
time.sleep(2)
GPIO.output(17, GPIO.HIGH)
add_events()
def green(pin):
remove_events()
GPIO.output(18, GPIO.LOW)
print "green pushed"
time.sleep(2)
GPIO.output(18, GPIO.HIGH)
add_events()
def main():
while True:
print "waiting"
time.sleep(0.5)
GPIO.output(17, GPIO.HIGH)
GPIO.output(18, GPIO.HIGH)
GPIO.add_event_detect(4, GPIO.FALLING, callback=green, bouncetime=800)
GPIO.add_event_detect(27, GPIO.FALLING, callback=red, bouncetime=800)
if __name__ == "__main__":
main()
```
On the surface it looks like a fairly easy script. When a button press is detected:
1. remove the events
2. print the message
3. wait 2 seconds before adding the events and turning the LED's back on
Which normally works out great when I press the green button. I tried it several times in succession and it works without fail. With the red, however, it works well the first time, and the second time, but after it has completed its second `red(pin)` cycle the script just stops.
Considering both events are fairly similar, I can't explain why it fails at the end of the second red button press.
**EDIT: I have changed the pins from red and green respectively (either to different pin's completely or swap them). Either way, it's always the red button code (actually now green button) causes an error. So it seems its' not a physical red button problem, nor a pin problem, this just leaves the code to be at fault...** | I was able to reproduce your problem on my Raspberry Pi 1, Model B by running your script and connecting a jumper cable between ground and GPIO27 to simulate red button presses. (Those are pins 25 and 13 on my particular Pi model.)
The python interpreter is crashing with a Segmentation Fault in the thread dedicated to polling GPIO events after `red` returns from handling a button press. After looking at the implementation of the Python `GPIO` module, it is clear to me that it is unsafe to call `remove_event_detect` from within an event handler callback, and this is causing the crash. In particular, removing an event handler while that event handler is currently running can lead to memory corruption, which will result in crashes (as you have seen) or other strange behaviors.
I suspect you are removing and re-adding the event handlers because you are concerned about getting a callback during the time when you are handling a button press. There is no need to do this. The GPIO module spins up a single polling thread to monitor GPIO events, and will wait for one callback to return before calling another, regardless of the number of GPIO events you are watching.
I suggest you simply make your calls to `add_event_detect` as your script starts, and never remove the callbacks. Simply removing `add_events` and `remove_events` (and their invocations) from your script will correct the problem.
If you are interested in the details of the problem in the `GPIO` module, you can take a look at the [C source code for that module](https://pypi.python.org/packages/c1/a8/de92cf6d04376f541ce250de420f4fe7cbb2b32a7128929a600bc89aede5/RPi.GPIO-0.6.2.tar.gz). Take a look at `run_callbacks` and `remove_callbacks` in the file `RPi.GPIO-0.6.2/source/event_gpio.c`. Notice that both of these functions use a global chain of `struct callback` nodes. `run_callbacks` walks the callback chain by grabbing one node, invoking the callback, and then following that node's link to the next callback in the chain. `remove_callbacks` will walk the same callback chain, and free the memory associated with the callbacks on a particular GPIO pin. If `remove_callbacks` is called in the middle of `run_callbacks`, the node currently held by `run_callbacks` can be freed (and have its memory potentially reused and overwritten) before the pointer to the next node is followed.
The reason you see this problem only for the red button is likely that the order of calls to `add_event_detect` and `remove_event_detect` causes the memory previously used by the callback node for the red button to be reclaimed for some other purpose and overwritten earlier than the memory used by the green button callback node is similarly reclaimed. However, be assured that the problem exists for both buttons -- it is just luck that the memory associated with the green button callback isn't changed before the pointer to the next callback node is followed.
More generally, there is a concerning lack of thread synchronization around the callback chain use in the GPIO module in general, and I suspect similar problems could occur if `remove_event_detect` or `add_event_detect` are called while an event handler is running, even if events are removed from another thread! I would suggest that the author of the `RPi.GPIO` module should use some synchronization to ensure that the callback chain can't be modified while callbacks are being made. (Perhaps, in addition to checking whether the chain is being modified on the polling thread itself, `pthread_mutex_lock` and `pthread_mutex_unlock` could be used to prevent other threads from modifying the callback chain while it is in use by the polling thread.)
Unfortunately, that is not currently the case, and for this reason I suggest you avoid calling `remove_event_detect` entirely if you can avoid it. |