| title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
Regular Expression Matching First Non-Repeated Character | 39,796,852 | 27 | 2016-09-30T17:22:24Z | 39,796,953 | 14 | 2016-09-30T17:29:38Z | [
"python",
"regex",
"regex-lookarounds"
] | **TL;DR**
`re.search("(.)(?!.*\1)", text).group()` doesn't match the first non-repeating character contained in text (it always returns a character at or before the first non-repeated character, or before the end of the string if there are no non-repeated characters. My understanding is that re.search() should return None if there were no matches).
I'm only interested in understanding why this regex is not working as intended with the Python `re` module, not in any other method of solving the problem.
**Full Background**
The problem description comes from <https://www.codeeval.com/open_challenges/12/>. I've already solved this problem using a non-regex method, but revisited it to expand my understanding of Python's `re` module.
The regular expressions I thought would work (named vs. unnamed backreferences) are:
`(?P<letter>.)(?!.*(?P=letter))` and `(.)(?!.*\1)` (same results in python2 and python3)
My entire program looks like this
```
import re
import sys
with open(sys.argv[1], 'r') as test_cases:
for test in test_cases:
print(re.search("(?P<letter>.)(?!.*(?P=letter))",
test.strip()
).group()
)
```
and some input/output pairs are:
```
rain | r
teetthing | e
cardiff | c
kangaroo | k
god | g
newtown | e
taxation | x
refurbished | f
substantially | u
```
According to what I've read at <https://docs.python.org/2/library/re.html>:
* `(.)` creates a capturing group that matches any character and allows later backreferences to it as `\1`.
* `(?!...)` is a negative lookahead which restricts matches to cases where `...` does not match.
* `.*\1` means any number (including zero) of characters followed by whatever was matched by `(.)` earlier
* `re.search(pattern, string)` returns only the first location where the regex pattern produces a match (and would return None if no match could be found)
* `.group()` is equivalent to `.group(0)` which returns the entire match
I think these pieces together should solve the stated problem, and it does work like I think it should for most inputs, but fails on `teething`. Throwing similar problems at it reveals that it seems to ignore repeated characters if they are consecutive:
```
tooth | o # fails on consecutive repeated characters
aardvark | d # but does ok if it sees them later
aah | a # verified last one didn't work just because it was at start
heh | e # but it works for this one
hehe | h # What? It thinks h matches (lookahead maybe doesn't find "heh"?)
heho | e # but it definitely finds "heh" and stops "h" from matching here
hahah | a # so now it won't match h but will match a
hahxyz | a # but it realizes there are 2 h characters here...
hahxyza | h # ... Ok time for StackOverflow
```
I know lookbehind and negative lookbehind are limited to fixed-length strings, and cannot contain backreferences even if they evaluate to a fixed length, but I didn't see the documentation specify any restrictions on negative lookahead. | Well, let's take your `tooth` example - here is what the regex engine does (simplified a lot for better understanding):
Start with `t` then look ahead in the string - and fail the lookahead, as there is another `t`.
```
tooth
^  °
```
Next take `o`, look ahead in the string - and fail, as there is another `o`.
```
tooth
 ^°
```
Next take the second `o`, look ahead in the string - no other `o` present - match it, return it, work done.
```
tooth
  ^
```
So your regex doesn't match the first unrepeated character, but the first character that has no further repetitions towards the end of the string. |
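The walkthrough above can be reproduced directly; a minimal sketch with the standard `re` module:

```python
import re

# The question's pattern: match a character not repeated later in the string.
pattern = re.compile(r"(.)(?!.*\1)")

# As described above, the engine fails on 't' and the first 'o',
# then succeeds on the second 'o' (no further 'o' follows it).
match = pattern.search("tooth")
print(match.group(), match.start())  # -> o 2
```

The match starting at index 2 confirms that the regex reports "no repetition *after* this point", not "never repeated".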
Regular Expression Matching First Non-Repeated Character | 39,796,852 | 27 | 2016-09-30T17:22:24Z | 39,890,983 | 11 | 2016-10-06T08:19:28Z | [
"python",
"regex",
"regex-lookarounds"
] | **TL;DR**
`re.search("(.)(?!.*\1)", text).group()` doesn't match the first non-repeating character contained in text (it always returns a character at or before the first non-repeated character, or before the end of the string if there are no non-repeated characters. My understanding is that re.search() should return None if there were no matches).
I'm only interested in understanding why this regex is not working as intended with the Python `re` module, not in any other method of solving the problem.
**Full Background**
The problem description comes from <https://www.codeeval.com/open_challenges/12/>. I've already solved this problem using a non-regex method, but revisited it to expand my understanding of Python's `re` module.
The regular expressions I thought would work (named vs. unnamed backreferences) are:
`(?P<letter>.)(?!.*(?P=letter))` and `(.)(?!.*\1)` (same results in python2 and python3)
My entire program looks like this
```
import re
import sys
with open(sys.argv[1], 'r') as test_cases:
for test in test_cases:
print(re.search("(?P<letter>.)(?!.*(?P=letter))",
test.strip()
).group()
)
```
and some input/output pairs are:
```
rain | r
teetthing | e
cardiff | c
kangaroo | k
god | g
newtown | e
taxation | x
refurbished | f
substantially | u
```
According to what I've read at <https://docs.python.org/2/library/re.html>:
* `(.)` creates a capturing group that matches any character and allows later backreferences to it as `\1`.
* `(?!...)` is a negative lookahead which restricts matches to cases where `...` does not match.
* `.*\1` means any number (including zero) of characters followed by whatever was matched by `(.)` earlier
* `re.search(pattern, string)` returns only the first location where the regex pattern produces a match (and would return None if no match could be found)
* `.group()` is equivalent to `.group(0)` which returns the entire match
I think these pieces together should solve the stated problem, and it does work like I think it should for most inputs, but fails on `teething`. Throwing similar problems at it reveals that it seems to ignore repeated characters if they are consecutive:
```
tooth | o # fails on consecutive repeated characters
aardvark | d # but does ok if it sees them later
aah | a # verified last one didn't work just because it was at start
heh | e # but it works for this one
hehe | h # What? It thinks h matches (lookahead maybe doesn't find "heh"?)
heho | e # but it definitely finds "heh" and stops "h" from matching here
hahah | a # so now it won't match h but will match a
hahxyz | a # but it realizes there are 2 h characters here...
hahxyza | h # ... Ok time for StackOverflow
```
I know lookbehind and negative lookbehind are limited to fixed-length strings, and cannot contain backreferences even if they evaluate to a fixed length, but I didn't see the documentation specify any restrictions on negative lookahead. | [Sebastian's answer](http://stackoverflow.com/a/39796953/3764814) already explains pretty well why your current attempt doesn't work.
### .NET
Since ~~you're~~ [revo](http://stackoverflow.com/users/1020526/revo) is interested in a .NET flavor workaround, the solution becomes trivial:
```
(?<letter>.)(?!.*?\k<letter>)(?<!\k<letter>.+?)
```
[Demo link](http://regexstorm.net/tester?p=(%3F%3Cletter%3E.)(%3F!.*%3F%5Ck%3Cletter%3E)(%3F%3C!%5Ck%3Cletter%3E.%2B%3F)&i=tooth%0D%0Aaardvark%0D%0Aaah%0D%0Aheh%0D%0Ahehe%0D%0Aheho%0D%0Ahahah%0D%0Ahahxyz%0D%0Ahahxyza)
This works because .NET supports **variable-length lookbehinds**. You can also get that result with Python (see below).
So for each letter `(?<letter>.)` we check:
* if it's repeated further in the input `(?!.*?\k<letter>)`
* if it was already encountered before `(?<!\k<letter>.+?)`
(we have to skip the letter we're testing when going backwards, hence the `+`).
---
### Python
The Python [regex module](https://pypi.python.org/pypi/regex) also supports variable-length lookbehinds, so the regex above will work with a small syntactical change: you need to replace `\k` with `\g` (which is quite unfortunate as with this module `\g` is a group backreference, whereas with PCRE it's a recursion).
The regex is:
```
(?<letter>.)(?!.*?\g<letter>)(?<!\g<letter>.+?)
```
And here's an example:
```
$ python
Python 2.7.10 (default, Jun 1 2015, 18:05:38)
[GCC 4.9.2] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import regex
>>> regex.search(r'(?<letter>.)(?!.*?\g<letter>)(?<!\g<letter>.+?)', 'tooth')
<regex.Match object; span=(4, 5), match='h'>
```
---
### PCRE
Ok, now things start to get dirty: since PCRE doesn't support variable-length lookbehinds, we need to *somehow* remember whether a given letter was already encountered in the input or not.
Unfortunately, the regex engine doesn't provide random access memory support. The best we can get in terms of generic memory is a *stack* - but that's not sufficient for this purpose, as a stack only lets us access its topmost element.
If we accept restricting ourselves to a given alphabet, we can abuse capturing groups to store flags. Let's see this on a limited alphabet of the three letters `abc`:
```
# Anchor the pattern
\A
# For each letter, test to see if it's duplicated in the input string
(?(?=[^a]*+a[^a]*a)(?<da>))
(?(?=[^b]*+b[^b]*b)(?<db>))
(?(?=[^c]*+c[^c]*c)(?<dc>))
# Skip any duplicated letter and throw it away
[a-c]*?\K
# Check if the next letter is a duplicate
(?:
(?(da)(*FAIL)|a)
| (?(db)(*FAIL)|b)
| (?(dc)(*FAIL)|c)
)
```
Here's how that works:
* First, the `\A` anchor ensures we'll process the input string only once
* Then, for each letter `X` of our alphabet, we'll set up a *is duplicate* flag `dX`:
+ The conditional pattern `(?(cond)then|else)` is used there:
- The condition is `(?=[^X]*+X[^X]*X)` which is true if the input string contains the letter `X` twice.
- If the condition is true, the *then* clause is `(?<dX>)`, which is an empty capturing group that will match the empty string.
- If the condition is false, the `dX` group won't be matched
+ Next, we lazily skip valid letters from our alphabet: `[a-c]*?`
+ And we throw them out in the final match with `\K`
+ Now, we're trying to match *one* letter whose `dX` flag is *not* set. For this purpose, we'll do a conditional branch: `(?(dX)(*FAIL)|X)`
- If `dX` was matched (meaning that `X` is a duplicated character), we `(*FAIL)`, forcing the engine to backtrack and try a different letter.
- If `dX` was *not* matched, we try to match `X`. At this point, if this succeeds, we know that `X` is the first non-duplicated letter.
That last part of the pattern could also be replaced with:
```
(?:
a (*THEN) (?(da)(*FAIL))
| b (*THEN) (?(db)(*FAIL))
| c (*THEN) (?(dc)(*FAIL))
)
```
Which is *somewhat* more optimized. It matches the current letter *first* and only *then* checks if it's a duplicate.
The full pattern for the lowercase letters `a-z` looks like this:
```
# Anchor the pattern
\A
# For each letter, test to see if it's duplicated in the input string
(?(?=[^a]*+a[^a]*a)(?<da>))
(?(?=[^b]*+b[^b]*b)(?<db>))
(?(?=[^c]*+c[^c]*c)(?<dc>))
(?(?=[^d]*+d[^d]*d)(?<dd>))
(?(?=[^e]*+e[^e]*e)(?<de>))
(?(?=[^f]*+f[^f]*f)(?<df>))
(?(?=[^g]*+g[^g]*g)(?<dg>))
(?(?=[^h]*+h[^h]*h)(?<dh>))
(?(?=[^i]*+i[^i]*i)(?<di>))
(?(?=[^j]*+j[^j]*j)(?<dj>))
(?(?=[^k]*+k[^k]*k)(?<dk>))
(?(?=[^l]*+l[^l]*l)(?<dl>))
(?(?=[^m]*+m[^m]*m)(?<dm>))
(?(?=[^n]*+n[^n]*n)(?<dn>))
(?(?=[^o]*+o[^o]*o)(?<do>))
(?(?=[^p]*+p[^p]*p)(?<dp>))
(?(?=[^q]*+q[^q]*q)(?<dq>))
(?(?=[^r]*+r[^r]*r)(?<dr>))
(?(?=[^s]*+s[^s]*s)(?<ds>))
(?(?=[^t]*+t[^t]*t)(?<dt>))
(?(?=[^u]*+u[^u]*u)(?<du>))
(?(?=[^v]*+v[^v]*v)(?<dv>))
(?(?=[^w]*+w[^w]*w)(?<dw>))
(?(?=[^x]*+x[^x]*x)(?<dx>))
(?(?=[^y]*+y[^y]*y)(?<dy>))
(?(?=[^z]*+z[^z]*z)(?<dz>))
# Skip any duplicated letter and throw it away
[a-z]*?\K
# Check if the next letter is a duplicate
(?:
a (*THEN) (?(da)(*FAIL))
| b (*THEN) (?(db)(*FAIL))
| c (*THEN) (?(dc)(*FAIL))
| d (*THEN) (?(dd)(*FAIL))
| e (*THEN) (?(de)(*FAIL))
| f (*THEN) (?(df)(*FAIL))
| g (*THEN) (?(dg)(*FAIL))
| h (*THEN) (?(dh)(*FAIL))
| i (*THEN) (?(di)(*FAIL))
| j (*THEN) (?(dj)(*FAIL))
| k (*THEN) (?(dk)(*FAIL))
| l (*THEN) (?(dl)(*FAIL))
| m (*THEN) (?(dm)(*FAIL))
| n (*THEN) (?(dn)(*FAIL))
| o (*THEN) (?(do)(*FAIL))
| p (*THEN) (?(dp)(*FAIL))
| q (*THEN) (?(dq)(*FAIL))
| r (*THEN) (?(dr)(*FAIL))
| s (*THEN) (?(ds)(*FAIL))
| t (*THEN) (?(dt)(*FAIL))
| u (*THEN) (?(du)(*FAIL))
| v (*THEN) (?(dv)(*FAIL))
| w (*THEN) (?(dw)(*FAIL))
| x (*THEN) (?(dx)(*FAIL))
| y (*THEN) (?(dy)(*FAIL))
| z (*THEN) (?(dz)(*FAIL))
)
```
And here's the [demo on regex101](https://regex101.com/r/Mhwxog/1), complete with unit tests.
You can expand on this pattern if you need a larger alphabet, but obviously this is *not* a general-purpose solution. It's primarily of educational interest and should *not* be used for any serious application.
---
For other flavors, you may try to tweak the pattern to replace PCRE features with simpler equivalents:
* `\A` becomes `^`
* `X (*THEN) (?(dX)(*FAIL))` can be replaced with `(?(dX)(?!)|X)`
* You may throw away the `\K` and replace the last noncapturing group `(?:`...`)` with a named group like `(?<letter>`...`)` and treat its content as the result.
The only required but somewhat unusual construct is the conditional group `(?(cond)then|else)`. |
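As a cross-check of the expected outputs (the OP mentions having already solved the challenge without regex), a minimal non-regex sketch; the function name is just an illustration, not part of any answer above:

```python
from collections import Counter

def first_non_repeated(s):
    """Return the first character that occurs exactly once in s, or None."""
    counts = Counter(s)
    return next((c for c in s if counts[c] == 1), None)

print(first_non_repeated("tooth"))     # -> h
print(first_non_repeated("teething"))  # -> h
print(first_non_repeated("hehe"))      # -> None (every character repeats)
```

These are the reference results the regex-based attempts can be compared against.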
cryptography AssertionError: sorry, but this version only supports 100 named groups | 39,829,473 | 13 | 2016-10-03T10:28:14Z | 39,830,224 | 22 | 2016-10-03T11:11:24Z | [
"python",
"python-2.7",
"travis-ci",
"python-cryptography"
] | I'm installing several python packages via `pip install` on travis,
```
language: python
python:
- '2.7'
install:
- pip install -r requirements/env.txt
```
Everything worked fine, but today I started getting following error:
```
Running setup.py install for cryptography
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-hKwMR3/cryptography/setup.py", line 334, in <module>
**keywords_with_side_effects(sys.argv)
File "/opt/python/2.7.9/lib/python2.7/distutils/core.py", line 111, in setup
_setup_distribution = dist = klass(attrs)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/setuptools/dist.py", line 269, in __init__
_Distribution.__init__(self,attrs)
File "/opt/python/2.7.9/lib/python2.7/distutils/dist.py", line 287, in __init__
self.finalize_options()
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/setuptools/dist.py", line 325, in finalize_options
ep.load()(self, ep.name, value)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/setuptools_ext.py", line 181, in cffi_modules
add_cffi_module(dist, cffi_module)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/setuptools_ext.py", line 48, in add_cffi_module
execfile(build_file_name, mod_vars)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/setuptools_ext.py", line 24, in execfile
exec(code, glob, glob)
File "src/_cffi_src/build_openssl.py", line 81, in <module>
extra_link_args=extra_link_args(compiler_type()),
File "/tmp/pip-build-hKwMR3/cryptography/src/_cffi_src/utils.py", line 61, in build_ffi_for_binding
extra_link_args=extra_link_args,
File "/tmp/pip-build-hKwMR3/cryptography/src/_cffi_src/utils.py", line 70, in build_ffi
ffi.cdef(cdef_source)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/api.py", line 105, in cdef
self._cdef(csource, override=override, packed=packed)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/api.py", line 119, in _cdef
self._parser.parse(csource, override=override, **options)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/cparser.py", line 299, in parse
self._internal_parse(csource)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/cparser.py", line 304, in _internal_parse
ast, macros, csource = self._parse(csource)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/cparser.py", line 260, in _parse
ast = _get_parser().parse(csource)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/cffi/cparser.py", line 40, in _get_parser
_parser_cache = pycparser.CParser()
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/pycparser/c_parser.py", line 87, in __init__
outputdir=taboutputdir)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/pycparser/c_lexer.py", line 66, in build
self.lexer = lex.lex(object=self, **kwargs)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/pycparser/ply/lex.py", line 911, in lex
lexobj.readtab(lextab, ldict)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/pycparser/ply/lex.py", line 233, in readtab
titem.append((re.compile(pat, lextab._lexreflags | re.VERBOSE), _names_to_funcs(func_name, fdict)))
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/re.py", line 194, in compile
return _compile(pattern, flags)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/re.py", line 249, in _compile
p = sre_compile.compile(pattern, flags)
File "/home/travis/virtualenv/python2.7.9/lib/python2.7/sre_compile.py", line 583, in compile
"sorry, but this version only supports 100 named groups"
AssertionError: sorry, but this version only supports 100 named groups
```
Solutions? | There is a bug in pycparser - see <https://github.com/pyca/cryptography/issues/3187>
The workaround is to use another version, or to avoid the binary distribution.
```
pip install git+https://github.com/eliben/pycparser@release_v2.14
```
or
```
pip install pycparser --no-binary pycparser
``` |
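For a Travis setup like the one in the question, the same workaround can be expressed as a pin in the requirements file. A hypothetical fragment, assuming `requirements/env.txt` from the question and that 2.14 (the tag referenced above) is the last unaffected pycparser release:

```
# requirements/env.txt - pin pycparser below the broken release
pycparser==2.14
cryptography
```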
Getting previous index values of a python list items after shuffling | 39,832,773 | 3 | 2016-10-03T13:25:10Z | 39,832,854 | 12 | 2016-10-03T13:28:48Z | [
"python"
] | Let's say I have such a python list:
```
l = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```
by using `random.shuffle`,
```
>>> import random
>>> random.shuffle(l)
>>> l
[5, 3, 2, 0, 8, 7, 9, 6, 4, 1]
```
I am having the above list.
How can I get the previous index values list of each item in the shuffled list? | You could pair each item with its index using `enumerate`, then shuffle that.
```
>>> import random
>>> l = [4, 8, 15, 16, 23, 42]
>>> x = list(enumerate(l))
>>> random.shuffle(x)
>>> indices, l = zip(*x)
>>> l
(4, 8, 15, 23, 42, 16)
>>> indices
(0, 1, 2, 4, 5, 3)
```
One advantage of this approach is that it works regardless of whether `l` contains duplicates. |
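One thing the saved indices make easy is undoing the shuffle later; a small sketch building on the answer's approach:

```python
import random

original = [4, 8, 15, 16, 23, 42]
x = list(enumerate(original))
random.shuffle(x)
indices, shuffled = zip(*x)

# Sorting the (index, value) pairs by original index restores the
# original order, no matter how the list was shuffled.
restored = [value for _, value in sorted(zip(indices, shuffled))]
print(restored == original)  # -> True
```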
How does one add an item to GTK's "recently used" file list from Python? | 39,836,725 | 12 | 2016-10-03T17:00:54Z | 39,927,261 | 10 | 2016-10-07T23:53:51Z | [
"python",
"gtk",
"pygtk",
"gtk3"
] | I'm trying to add to the "recently used" files list from Python 3 on Ubuntu.
I am able to successfully *read* the recently used file list like this:
```
from gi.repository import Gtk
recent_mgr = Gtk.RecentManager.get_default()
for item in recent_mgr.get_items():
print(item.get_uri())
```
This prints out the same list of files I see when I look at "Recent" in Nautilus, or look at the "Recently Used" place in the file dialog of apps like GIMP.
However, when I tried adding an item like this (where `/home/laurence/foo/bar.txt` is an existing text file)...
```
recent_mgr.add_item('file:///home/laurence/foo/bar.txt')
```
...the file does not show up in the Recent section of Nautilus or in file dialogs. It doesn't even show up in the results returned by `get_items()`.
How can I add a file to GTK's recently used file list from Python? | A `Gtk.RecentManager` needs to emit the `changed` signal for the update to be written to a private attribute of the underlying C class. To use a `RecentManager` object in an application, you need to start the event loop by calling `Gtk.main`:
```
from gi.repository import Gtk
recent_mgr = Gtk.RecentManager.get_default()
uri = r'file:/path/to/my/file'
recent_mgr.add_item(uri)
Gtk.main()
```
If you don't call `Gtk.main()`, the `changed` signal is not emitted and nothing happens.
To answer @andlabs's query: `RecentManager.add_item` returns a boolean because the `g_file_query_info_async` function is called. The callback function `gtk_recent_manager_add_item_query_info` then gathers the mimetype, application name and command into a `GtkRecentData` struct and finally calls `gtk_recent_manager_add_full`. The source is [here](https://github.com/GNOME/gtk/blob/master/gtk/gtkrecentmanager.c).
If anything goes wrong, it is well after `add_item` has finished, so the method just returns `True` if the object it is called from is a `RecentManager` and if the uri is not `NULL`; and `False` otherwise.
The documentation is inaccurate in saying:
> Returns
>
> TRUE if the new item was successfully added to the recently used resources list
as returning `TRUE` only means that an asynchronous function was called to deal with the addition of a new item. |
How to make an integer larger than any other integer? | 39,843,488 | 41 | 2016-10-04T03:00:56Z | 39,843,523 | 61 | 2016-10-04T03:04:16Z | [
"python",
"python-3.x"
] | Note: while the accepted answer achieves the result I wanted, and @ecatmur answer provides a more comprehensive option, I feel it's very important to emphasize that my use case is a bad idea in the first place. This is explained very well in [@Jason Orendorff answer below](http://stackoverflow.com/a/39856605/336527).
Note: this question is not a duplicate of [the question about `sys.maxint`](http://stackoverflow.com/questions/13795758/what-is-sys-maxint-in-python-3). It has nothing to do with `sys.maxint`; even in python 2 where `sys.maxint` is available, it does NOT represent the largest integer (see the accepted answer).
I need to create an integer that's larger than any other integer, meaning an `int` object which returns `True` when compared to any other `int` object using `>`. Use case: library function expects an integer, and the only easy way to force a certain behavior is to pass a very large integer.
In python 2, I can use `sys.maxint` (edit: I was wrong). In python 3, `math.inf` is the closest equivalent, but I can't convert it to `int`. | Since python integers are unbounded, you have to do this with a custom class:
```
import functools
@functools.total_ordering
class NeverSmaller(object):
def __le__(self, other):
return False
class ReallyMaxInt(NeverSmaller, int):
def __repr__(self):
return 'ReallyMaxInt()'
```
Here I've used a mix-in class `NeverSmaller` rather than direct decoration of `ReallyMaxInt`, because on Python 3 the action of `functools.total_ordering` would have been prevented by existing ordering methods inherited from `int`.
Usage demo:
```
>>> N = ReallyMaxInt()
>>> N > sys.maxsize
True
>>> isinstance(N, int)
True
>>> sorted([1, N, 0, 9999, sys.maxsize])
[0, 1, 9999, 9223372036854775807, ReallyMaxInt()]
```
Note that in python2, `sys.maxint + 1` is bigger than `sys.maxint`, so you can't rely on that.
*Disclaimer*: This is an integer in the [OO](https://en.wikipedia.org/wiki/Object-oriented_programming#Inheritance_and_behavioral_subtyping) sense, it is not an integer in the mathematical sense. Consequently, arithmetic operations inherited from the parent class `int` may not behave sensibly. If this causes any issues for your intended use case, then they can be disabled by implementing `__add__` and friends to just error out. |
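The disclaimer is easy to trip over in practice: arithmetic inherited from `int` treats the instance as the underlying value `0`. A quick sketch reusing the classes from the answer:

```python
import functools
import sys

@functools.total_ordering
class NeverSmaller(object):
    def __le__(self, other):
        return False

class ReallyMaxInt(NeverSmaller, int):
    def __repr__(self):
        return 'ReallyMaxInt()'

N = ReallyMaxInt()
print(N > sys.maxsize)  # -> True: comparisons come from the mix-in
print(N + 1)            # -> 1: int.__add__ sees the underlying value 0
```

So `N` compares as larger than everything, yet `N + 1` is a plain, very small `int` - exactly the kind of surprise the disclaimer warns about.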
How to make an integer larger than any other integer? | 39,843,488 | 41 | 2016-10-04T03:00:56Z | 39,855,605 | 23 | 2016-10-04T14:57:45Z | [
"python",
"python-3.x"
] | Note: while the accepted answer achieves the result I wanted, and @ecatmur answer provides a more comprehensive option, I feel it's very important to emphasize that my use case is a bad idea in the first place. This is explained very well in [@Jason Orendorff answer below](http://stackoverflow.com/a/39856605/336527).
Note: this question is not a duplicate of [the question about `sys.maxint`](http://stackoverflow.com/questions/13795758/what-is-sys-maxint-in-python-3). It has nothing to do with `sys.maxint`; even in python 2 where `sys.maxint` is available, it does NOT represent the largest integer (see the accepted answer).
I need to create an integer that's larger than any other integer, meaning an `int` object which returns `True` when compared to any other `int` object using `>`. Use case: library function expects an integer, and the only easy way to force a certain behavior is to pass a very large integer.
In python 2, I can use `sys.maxint` (edit: I was wrong). In python 3, `math.inf` is the closest equivalent, but I can't convert it to `int`. | Konsta Vesterinen's [`infinity.Infinity`](https://github.com/kvesteri/infinity) would work ([pypi](https://pypi.python.org/pypi/infinity/)), except that it doesn't inherit from `int`, but you can subclass it:
```
from infinity import Infinity
class IntInfinity(Infinity, int):
pass
assert isinstance(IntInfinity(), int)
assert IntInfinity() > 1e100
```
Another package that implements "infinity" values is [Extremes](https://pypi.python.org/pypi/Extremes), which was salvaged from the rejected [PEP 326](https://www.python.org/dev/peps/pep-0326/); again, you'd need to subclass from `extremes.Max` and `int`. |
How to make an integer larger than any other integer? | 39,843,488 | 41 | 2016-10-04T03:00:56Z | 39,856,605 | 15 | 2016-10-04T15:45:43Z | [
"python",
"python-3.x"
] | Note: while the accepted answer achieves the result I wanted, and @ecatmur answer provides a more comprehensive option, I feel it's very important to emphasize that my use case is a bad idea in the first place. This is explained very well in [@Jason Orendorff answer below](http://stackoverflow.com/a/39856605/336527).
Note: this question is not a duplicate of [the question about `sys.maxint`](http://stackoverflow.com/questions/13795758/what-is-sys-maxint-in-python-3). It has nothing to do with `sys.maxint`; even in python 2 where `sys.maxint` is available, it does NOT represent the largest integer (see the accepted answer).
I need to create an integer that's larger than any other integer, meaning an `int` object which returns `True` when compared to any other `int` object using `>`. Use case: library function expects an integer, and the only easy way to force a certain behavior is to pass a very large integer.
In python 2, I can use `sys.maxint` (edit: I was wrong). In python 3, `math.inf` is the closest equivalent, but I can't convert it to `int`. | > Use case: library function expects an integer, and the only easy way to force a certain behavior is to pass a very large integer.
This sounds like a flaw in the library that should be fixed in its interface. Then all its users would benefit. What library is it?
Creating a magical int subclass with overridden comparison operators might work for you. It's brittle, though; you never know what the library is going to do with that object. Suppose it converts it to a string. What should happen? And data is naturally used in different ways as a library evolves; you may update the library one day to find that your trick doesn't work anymore. |
Is python tuple assignment order fixed? | 39,855,410 | 2 | 2016-10-04T14:49:58Z | 39,855,637 | 7 | 2016-10-04T14:58:56Z | [
"python",
"tuples",
"variable-assignment"
] | Will
```
a, a = 2, 1
```
always result in a equal to 1? In other words, is tuple assignment guaranteed to be left-to-right?
The matter becomes relevant when we don't have just a, but a[i], a[j] and i and j may or may not be equal. | Yes, it is part of the python language reference that tuple assignment must take place left to right.
<https://docs.python.org/2.3/ref/assignment.html>
> An assignment statement evaluates the expression list (remember that
> this can be a single expression or a comma-separated list, the latter
> yielding a tuple) and assigns the single resulting object to each of
> the target lists, from left to right.
So all Python implementations should follow this rule (as confirmed by the experiments in the other answer).
Personally, I would still be hesitant to use this as it seems unclear to a future reader of the code. |
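The left-to-right rule matters in exactly the `a[i]`, `a[j]` case the question raises; a small sketch (the variable names are illustrative only):

```python
a, a = 2, 1
print(a)  # -> 1: the rightmost assignment to `a` happens last

# Targets are assigned left to right, so `i` is updated to 1 *before*
# x[i] is evaluated as a target:
x = [0, 0]
i = 0
i, x[i] = 1, 2
print(i, x)  # -> 1 [0, 2]
```

If targets were assigned right to left, `x` would end up as `[2, 0]` instead, which is why relying on this ordering deserves a comment in real code.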
Can a python function know when it's being called by a list comprehension? | 39,887,880 | 3 | 2016-10-06T05:04:10Z | 39,888,195 | 8 | 2016-10-06T05:30:50Z | [
"python"
] | I want to make a python function that behaves differently when it's being called from a list comprehension:
```
def f():
# this function returns False when called normally,
# and True when called from a list comprehension
pass
>>> f()
False
>>> [f() for _ in range(3)]
[True, True, True]
```
I tried looking at the inspect module, the dis module, and lib2to3's parser for something to make this trick work, but haven't found anything. There also might be a simple reason why this cannot exist, that I haven't thought of. | You can determine this by inspecting the stack frame in the following sort of way:
```
def f():
try:
raise ValueError
except Exception as e:
if e.__traceback__.tb_frame.f_back.f_code.co_name == '<listcomp>':
return True
```
Then:
```
>>> print(f())
None
>>> print([f() for x in range(10)])
[True, True, True, True, True, True, True, True, True, True]
```
It's not to be recommended though. Really, it's not.
### NOTE
As it stands this only detects list comprehensions as requested. It will not detect the use of a generator. For example:
```
>>> print(list(f() for x in range(10)))
[None, None, None, None, None, None, None, None, None, None]
``` |
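The same check can be written without raising an exception, by inspecting the call stack directly. A sketch using `sys._getframe`, with the same caveats as the answer above (CPython-specific, and note that list comprehensions only get their own `<listcomp>` frame on CPython versions before the comprehension inlining of Python 3.12):

```python
import sys

def f():
    # CPython-specific: look at the name of the calling code object.
    # On CPython < 3.12 a list comprehension runs in its own frame
    # named '<listcomp>'; after PEP 709 inlining it no longer does.
    return sys._getframe(1).f_code.co_name == '<listcomp>'

print(f())  # -> False when called normally
```

Like the exception-based version, this is a fragile trick and not something to rely on in real code.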
Is there a more Pythonic way to combine an Else: statement and an Except:? | 39,903,242 | 32 | 2016-10-06T18:29:51Z | 39,903,338 | 9 | 2016-10-06T18:34:43Z | [
"python",
"python-2.7"
I have a piece of code that searches AutoCAD for text boxes that contain certain keywords (eg. `"overall_weight"` in this case) and replaces it with a value from a dictionary. However, sometimes the dictionary key is assigned to an empty string and sometimes, the key doesn't exist altogether. In these cases, the `"overall_weight"` keywords should be replaced with `"N/A"`. I was wondering if there was a more pythonic way to combine the `KeyError` exception and the `else` to both go to `nObject.TextString = "N/A"` so it's not typed twice.
```
if nObject.TextString == "overall_weight":
try:
if self.var.jobDetails["Overall Weight"]:
nObject.TextString = self.var.jobDetails["Overall Weight"]
else:
nObject.TextString = "N/A"
except KeyError:
nObject.TextString = "N/A"
```
Edit: For clarification for future visitors, there are only 3 cases I need to take care of and the correct answer takes care of all 3 cases without any extra padding.
1. `dict[key]` exists and points to a non-empty string. `TextString` replaced with the value assigned to `dict[key]`.
2. `dict[key]` exists and points to an empty string. `TextString` replaced with `"N/A"`.
3. `dict[key]` doesn't exist. `TextString` replaced with `"N/A"`. | Use `.get()` with a default argument of `"N/A"` which will be used if the key does not exist:
```
nObject.TextString = self.var.jobDetails.get("Overall Weight", "N/A")
```
# Update
If empty strings need to be handled, simply modify as follows:
```
nObject.TextString = self.var.jobDetails.get("Overall Weight") or "N/A"
```
This will set `nObject.TextString` to "N/A" if the key is missing, or if the value retrieved is empty: `''`, `[]`, etc. |
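The difference between the two forms matters exactly for the empty-string case; a quick sketch covering the question's three cases (the dictionary literals are illustrative stand-ins for `self.var.jobDetails`):

```python
details = {"Overall Weight": ""}  # case 2: key exists, empty string
missing = {}                      # case 3: key absent

# .get(key, default) only falls back when the key is missing:
print(repr(details.get("Overall Weight", "N/A")))  # -> ''  (empty string kept)
print(repr(missing.get("Overall Weight", "N/A")))  # -> 'N/A'

# .get(key) or default also replaces falsy values such as '':
print(details.get("Overall Weight") or "N/A")  # -> N/A
print({"Overall Weight": "12t"}.get("Overall Weight") or "N/A")  # -> 12t
```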
Is there a more Pythonic way to combine an Else: statement and an Except:? | 39,903,242 | 32 | 2016-10-06T18:29:51Z | 39,903,350 | 16 | 2016-10-06T18:35:11Z | [
"python",
"python-2.7"
I have a piece of code that searches AutoCAD for text boxes that contain certain keywords (eg. `"overall_weight"` in this case) and replaces it with a value from a dictionary. However, sometimes the dictionary key is assigned to an empty string and sometimes, the key doesn't exist altogether. In these cases, the `"overall_weight"` keywords should be replaced with `"N/A"`. I was wondering if there was a more pythonic way to combine the `KeyError` exception and the `else` to both go to `nObject.TextString = "N/A"` so it's not typed twice.
```
if nObject.TextString == "overall_weight":
try:
if self.var.jobDetails["Overall Weight"]:
nObject.TextString = self.var.jobDetails["Overall Weight"]
else:
nObject.TextString = "N/A"
except KeyError:
nObject.TextString = "N/A"
```
Edit: For clarification for future visitors, there are only 3 cases I need to take care of and the correct answer takes care of all 3 cases without any extra padding.
1. `dict[key]` exists and points to a non-empty string. `TextString` replaced with the value assigned to `dict[key]`.
2. `dict[key]` exists and points to an empty string. `TextString` replaced with `"N/A"`.
3. `dict[key]` doesn't exist. `TextString` replaced with `"N/A"`. | Use the `get()` function for dictionaries. It will return `None` if the key doesn't exist, or, if you specify a second value, it will return that as the default. Then your syntax will look like:
```
nObject.TextString = self.var.jobDetails.get('Overall Weight', 'N/A')
``` |
Is there a more Pythonic way to combine an Else: statement and an Except:? | 39,903,242 | 32 | 2016-10-06T18:29:51Z | 39,903,519 | 63 | 2016-10-06T18:45:36Z | [
"python",
"python-2.7"
] | I have a piece of code that searches AutoCAD for text boxes that contain certain keywords (e.g. `"overall_weight"` in this case) and replaces it with a value from a dictionary. However, sometimes the dictionary key is assigned to an empty string and sometimes, the key doesn't exist altogether. In these cases, the `"overall_weight"` keywords should be replaced with `"N/A"`. I was wondering if there was a more pythonic way to combine the `KeyError` exception and the `else` to both go to `nObject.TextString = "N/A"` so it's not typed twice.
```
if nObject.TextString == "overall_weight":
try:
if self.var.jobDetails["Overall Weight"]:
nObject.TextString = self.var.jobDetails["Overall Weight"]
else:
nObject.TextString = "N/A"
except KeyError:
nObject.TextString = "N/A"
```
Edit: For clarification for future visitors, there are only 3 cases I need to take care of and the correct answer takes care of all 3 cases without any extra padding.
1. `dict[key]` exists and points to a non-empty string. `TextString` replaced with the value assigned to `dict[key]`.
2. `dict[key]` exists and points to an empty string. `TextString` replaced with `"N/A"`.
3. `dict[key]` doesn't exist. `TextString` replaced with `"N/A"`. | Use `dict.get()`, which will return the value associated with the given key if it exists, otherwise `None`. (Note that `''` and `None` are both falsey values.) If the retrieved value is truthy, assign it to `nObject.TextString`; otherwise give it a value of `"N/A"`.
```
if nObject.TextString == "overall_weight":
nObject.TextString = self.var.jobDetails.get("Overall Weight") or "N/A"
``` |
Is there a way to compile python application into static binary? | 39,913,847 | 22 | 2016-10-07T09:24:20Z | 40,057,634 | 11 | 2016-10-15T10:00:42Z | [
"python",
"build"
] | What I'm trying to do is ship my code to a remote server that may have a different python version installed and/or may not have the packages my app requires.
Right now to achieve such portability I have to build a relocatable virtualenv with the interpreter and code. That approach has some issues (for example, you have to manually copy a bunch of libraries into your virtualenv, since `--always-copy` doesn't work as expected) and is generally slow.
There's (in theory) [a way](https://wiki.python.org/moin/BuildStatically) to build python itself statically.
I wonder if I could pack the interpreter with my code into one binary and run my application as a module. Something like: `./mypython -m myapp run` or `./mypython -m gunicorn -c ./gunicorn.conf myapp.wsgi:application`. | There are two ways you could go about solving your problem:
1. Use a static builder, like freeze, or pyinstaller, or py2exe
2. Compile using cython
I will explain how you can go about doing it using the second, since the first method is not portable across platforms and versions, and has been explained in other answers. Also, using programs like pyinstaller typically results in huge file sizes, whereas using cython will result in a file that's KBs in size
First, install cython. Then, rename your python file (say test.py) into a pyx file
```
$ sudo pip install cython
$ mv test.py test.pyx
```
Then, you can use cython along with GCC to compile it (Cython generates a C file out of a Python .pyx file, and then GCC compiles the C file)
(in reference to <http://stackoverflow.com/a/22040484/5714445>)
```
$ cython test.pyx --embed
$ gcc -Os -I /usr/include/python3.5m -o test test.c -lpython3.5m -lpthread -lm -lutil -ldl
```
NOTE: Depending on your version of python, you might have to change the last command. To know which version of python you are using, simply use
```
$ python -V
```
You will now have a binary file 'test', which is what you are looking for
NOTE: Cython is normally used to add C-type variable definitions for static memory allocation to speed up Python programs. In your case, however, you will still be using traditional Python definitions.
NOTE2: If you are using additional libraries (like opencv, for example), you might have to provide the directory to them using -L and then specify the name of the library using -l in the GCC Flags. For more information on this, please refer to GCC flags |
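For completeness, here is a minimal `test.py` (hypothetical contents) that the commands above would embed; any self-contained script works the same way once renamed to `test.pyx`:

```python
# test.py -- a tiny script suitable for compiling with `cython --embed`
import sys

def main():
    # Ordinary Python code; no C-type declarations are required for embedding
    print("Hello from an embedded interpreter")
    print("Python %d.%d" % sys.version_info[:2])

if __name__ == "__main__":
    main()
```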
Understanding variable types, names and assignment | 39,917,988 | 3 | 2016-10-07T13:02:58Z | 39,918,038 | 7 | 2016-10-07T13:06:02Z | [
"python"
] | In Python, if you want to define a variable, you don't have to specify the type of it, unlike other languages such as C and Java.
So how can the Python interpreter distinguish between variable types and give each value the required space in memory, like `int` or `float`? | In Python all values are objects with built-in type info. Variables are references to these values. So their type is 'dynamic', just equal to the type of what they happen to refer to (point to) at a particular moment.
Whenever memory is allocated for the contents of a variable, a value is available. Since it has a type, the amount of memory needed is known.
The references (variables) themselves always occupy the same amount of memory, no matter what they point to, since they just contain a conceptual address.
This indeed means that in
```
def f (x):
print (x)
```
x doesn't have a type, since it doesn't have a particular value yet.
The upside is that this is very flexible.
The downside is that the compiler has only limited means to discover errors.
For this reason Python was recently enriched with [type hints](http://stackoverflow.com/questions/32557920/what-are-type-hints-in-python-3-5).
Tools like [mypy](http://mypy-lang.org/) allow static typechecking, even though the interpreter doesn't need it.
But the programmer sometimes does, especially at module boundaries (API's) when she's working in a team. |
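A minimal sketch of the hints discussed above: a static checker such as mypy would flag `double("ab")` as an error, but the interpreter itself only stores the annotations and runs the call unchecked:

```python
def double(x: int) -> int:
    # Annotations are stored on the function object, not enforced at runtime
    return x * 2

print(double.__annotations__)  # {'x': <class 'int'>, 'return': <class 'int'>}
print(double(21))              # 42
print(double("ab"))            # abab -- runs despite violating the hint
```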
Mapping Python list values to dictionary values | 39,955,222 | 2 | 2016-10-10T09:28:49Z | 39,955,362 | 7 | 2016-10-10T09:36:56Z | [
"python"
] | I have a list of rows...
`rows = [2, 21]`
And a dictionary of data...
`data = {'x': [46, 35], 'y': [20, 30]}`
I'd like to construct a second dictionary, `dataRows`, keyed by the row that looks like this...
`dataRows = {2: {'x': 46, 'y': 20}, 21: {'x': 35, 'y': 30}}`
I tried the following code, but the values of `dataRows` are always the same (last value in loop):
```
for i, row in enumerate(rows):
for key, value in data.items():
dataRows[row] = value[i]
```
Any assistance would be greatly appreciated. | Your issue is that you are not puting sub-dictionaries inside dataRows. The fix would be this:
```
for i, row in enumerate(rows):
dataRows[row] = {}
for key, value in data.items():
dataRows[row][key] = value[i]
``` |
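For what it's worth, the same structure can be built in a single expression with a nested dict comprehension, equivalent to the corrected loop:

```python
rows = [2, 21]
data = {'x': [46, 35], 'y': [20, 30]}

# Outer comprehension walks the rows; inner one collects each key's i-th value
dataRows = {row: {key: value[i] for key, value in data.items()}
            for i, row in enumerate(rows)}

print(dataRows)  # {2: {'x': 46, 'y': 20}, 21: {'x': 35, 'y': 30}}
```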
What are variable annotations in Python 3.6? | 39,971,929 | 15 | 2016-10-11T07:00:15Z | 39,972,031 | 15 | 2016-10-11T07:08:33Z | [
"python",
"python-3.x",
"annotations",
"type-hinting",
"python-3.6"
] | Python 3.6 is about to be released. [PEP 494 -- Python 3.6 Release Schedule](https://www.python.org/dev/peps/pep-0494/) mentions the end of December, so I went through [What's New in Python 3.6](https://docs.python.org/3.6/whatsnew/3.6.html) to see they mention the *variable annotations*:
> [PEP 484](https://www.python.org/dev/peps/pep-0484) introduced standard for type annotations of function parameters, a.k.a. type hints. This PEP adds syntax to Python for annotating the types of variables including class variables and instance variables:
>
> ```
> primes: List[int] = []
>
> captain: str # Note: no initial value!
>
> class Starship:
> stats: Dict[str, int] = {}
> ```
>
> Just as for function annotations, the Python interpreter does not attach any particular meaning to variable annotations and only stores them in a special attribute `__annotations__` of a class or module. In contrast to variable declarations in statically typed languages, the goal of annotation syntax is to provide an easy way to specify structured type metadata for third party tools and libraries via the abstract syntax tree and the `__annotations__` attribute.
So from what I read they are part of the type hints coming from Python 3.5, described in [What are Type hints in Python 3.5](http://stackoverflow.com/q/32557920/1983854).
I follow the `captain: str` and `class Starship` example, but not sure about the last one: How does `primes: List[int] = []` explain? Is it defining an empty list that will just allow integers? | Everything between `:` and the `=` is a type hint, so `primes` is indeed defined as `List[int]`, and initially set to an empty list (and `stats` is an empty dictionary initially, defined as `Dict[str, int]`).
`List[int]` and `Dict[str, int]` are not part of the new syntax, however; these were already defined in the Python 3.5 typing hints PEP. The 3.6 [PEP 526 -- *Syntax for Variable Annotations*](https://www.python.org/dev/peps/pep-0526/) proposal *only* defines the syntax to attach the same hints to variables; before, you could only attach type hints to variables with comments (e.g. `primes = [] # List[int]`).
Both `List` and `Dict` are *Generic* types, indicating that you have a list or dictionary mapping with specific (concrete) contents.
For `List`, there is only one 'argument' (the elements in the `[...]` syntax), the type of every element in the list. For `Dict`, the first argument is the key type, and the second the value type. So *all* values in the `primes` list are integers, and *all* key-value pairs in the `stats` dictionary are `(str, int)` pairs, mapping strings to integers.
See the [`typing.List`](https://docs.python.org/3/library/typing.html#typing.List) and [`typing.Dict`](https://docs.python.org/3/library/typing.html#typing.Dict) definitions, the [section on *Generics*](https://docs.python.org/3/library/typing.html#generics), as well as [PEP 483 -- *The Theory of Type Hints*](https://www.python.org/dev/peps/pep-0483).
Like type hints on functions, their use is optional and are also considered *annotations* (provided there is an object to attach these to, so globals in modules and attributes on classes, but not locals in functions) which you could introspect via the `__annotations__` attribute. You can attach arbitrary info to these annotations, you are not strictly limited to type hint information.
You may want to read the [full proposal](https://www.python.org/dev/peps/pep-0526/); it contains some additional functionality above and beyond the new syntax; it specifies when such annotations are evaluated, how to introspect them and how to declare something as a class attribute vs. instance attribute, for example. |
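As a sketch of the class-attribute vs. instance-attribute distinction mentioned above (requires Python 3.6+; `ClassVar` marks `stats` as shared across instances for type checkers, while `damage` is a hypothetical instance attribute added for illustration):

```python
from typing import ClassVar, Dict

class Starship:
    stats: ClassVar[Dict[str, int]] = {}  # class variable (shared)
    damage: int = 10                      # instance variable with a default

# Both annotations land in the class's __annotations__ mapping
print(sorted(Starship.__annotations__))  # ['damage', 'stats']
```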
What are variable annotations in Python 3.6? | 39,971,929 | 15 | 2016-10-11T07:00:15Z | 39,973,133 | 11 | 2016-10-11T08:21:24Z | [
"python",
"python-3.x",
"annotations",
"type-hinting",
"python-3.6"
] | Python 3.6 is about to be released. [PEP 494 -- Python 3.6 Release Schedule](https://www.python.org/dev/peps/pep-0494/) mentions the end of December, so I went through [What's New in Python 3.6](https://docs.python.org/3.6/whatsnew/3.6.html) to see they mention the *variable annotations*:
> [PEP 484](https://www.python.org/dev/peps/pep-0484) introduced standard for type annotations of function parameters, a.k.a. type hints. This PEP adds syntax to Python for annotating the types of variables including class variables and instance variables:
>
> ```
> primes: List[int] = []
>
> captain: str # Note: no initial value!
>
> class Starship:
> stats: Dict[str, int] = {}
> ```
>
> Just as for function annotations, the Python interpreter does not attach any particular meaning to variable annotations and only stores them in a special attribute `__annotations__` of a class or module. In contrast to variable declarations in statically typed languages, the goal of annotation syntax is to provide an easy way to specify structured type metadata for third party tools and libraries via the abstract syntax tree and the `__annotations__` attribute.
So from what I read they are part of the type hints coming from Python 3.5, described in [What are Type hints in Python 3.5](http://stackoverflow.com/q/32557920/1983854).
I follow the `captain: str` and `class Starship` example, but not sure about the last one: How does `primes: List[int] = []` explain? Is it defining an empty list that will just allow integers? | Indeed, variable annotations are just the next step from `# type` comments as they were defined in `PEP 484`; the rationale behind this change is highlighted in the [respective PEP section](https://www.python.org/dev/peps/pep-0526/#rationale). So, instead of hinting the type with:
```
primes = [] # type: List[int]
```
New syntax was introduced to allow for directly annotating the type with an assignment of the form:
```
primes: List[int] = []
```
which, as @Martijn pointed out, denotes a list of integers by using types available in [`typing`](https://docs.python.org/3/library/typing.html) and initializing it to an empty list.
---
In short, the [new syntax introduced](https://docs.python.org/3.6/reference/simple_stmts.html#annotated-assignment-statements) simply allows you to annotate with a type after the colon `:` character and, optionally allows you to assign a value to it:
```
annotated_assignment_stmt ::= augtarget ":" expression ["=" expression]
```
---
Additional changes were also introduced along with the new syntax; modules and classes now have an `__annotations__` attribute, as functions have had for some time, in which the type metadata is attached:
```
from typing import get_type_hints # grabs __annotations__
```
Now `__main__.__annotations__` holds the declared types:
```
>>> from typing import List, get_type_hints
>>> primes: List[int] = []
>>> captain: str
>>> import __main__
>>> get_type_hints(__main__)
{'primes': typing.List<~T>[int]}
```
`captain` won't currently show up through [`get_type_hints`](https://docs.python.org/3.6/library/typing.html#typing.get_type_hints) because `get_type_hints` only returns types that can also be accessed on a module; i.e. it needs a value first:
```
>>> captain = "Picard"
>>> get_type_hints(__main__)
{'primes': typing.List<~T>[int], 'captain': <class 'str'>}
```
Using `print(__annotations__)` will show `'captain': <class 'str'>` but you really shouldn't be accessing `__annotations__` directly.
Similarly, for classes:
```
>>> get_type_hints(Starship)
ChainMap({'stats': typing.Dict<~KT, ~VT>[str, int]}, {})
```
Where a `ChainMap` is used to grab the annotations for a given class (located in the first mapping) and all annotations defined in the base classes found in its `mro` (consequent mappings, `{}` for object).
Along with the new syntax, a new [`ClassVar`](https://docs.python.org/3.6/library/typing.html#typing.ClassVar) type has been added to denote class variables. Yup, `stats` in your example is actually an *instance variable*, not a `ClassVar`.
---
As with type hints from `PEP 484`, these are completely optional and are of main use for type checking tools (and whatever else you can build based on this information). The feature is provisional in the stable release of Python 3.6, so small tweaks might still be added in the future.
Dictionaries are ordered in Python 3.6 | 39,980,323 | 72 | 2016-10-11T14:59:23Z | 39,980,548 | 27 | 2016-10-11T15:09:00Z | [
"python",
"python-3.x",
"dictionary",
"python-internals",
"python-3.6"
] | Dictionaries are ordered in Python 3.6, unlike in previous Python incarnations. This seems like a substantial change, but it's only a short paragraph in the [documentation](https://docs.python.org/3.6/whatsnew/3.6.html#other-language-changes). It is described as an implementation detail rather than a language feature, but also implies this may become standard in the future.
How does the Python 3.6 dictionary implementation perform better than the older one while preserving element order?
Here is the text from the documentation:
> `dict()` now uses a "compact" representation [pioneered by PyPy](https://morepypy.blogspot.com/2015/01/faster-more-memory-efficient-and-more.html). The memory usage of the new dict() is between 20% and 25% smaller compared to Python 3.5. [PEP 468](https://www.python.org/dev/peps/pep-0468) (Preserving the order of \*\*kwargs in a function.) is implemented by this. The order-preserving aspect of this new implementation is considered an implementation detail and should not be relied upon (this may change in the future, but it is desired to have this new dict implementation in the language for a few releases before changing the language spec to mandate order-preserving semantics for all current and future Python implementations; this also helps preserve backwards-compatibility with older versions of the language where random iteration order is still in effect, e.g. Python 3.5). (Contributed by INADA Naoki in [issue 27350](https://bugs.python.org/issue27350). Idea [originally suggested by Raymond Hettinger](https://mail.python.org/pipermail/python-dev/2012-December/123028.html).) | Below answers the original first question:
> Should I use `dict` or `OrderedDict` in Python 3.6?
I think this sentence from the documentation is actually enough to answer your question
> The order-preserving aspect of this new implementation is considered an implementation detail and should not be relied upon
`dict` is not explicitly meant to be an ordered collection, so if you want to stay consistent and not rely on a side effect of the new implementation you should stick with `OrderedDict`.
Make your code future proof :)
There's a debate about that [here](https://news.ycombinator.com/item?id=12460936). |
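A short illustration of the portable choice: `OrderedDict` guarantees insertion order on every Python implementation and version, while plain `dict` only happens to preserve it in CPython 3.6 as an implementation detail:

```python
from collections import OrderedDict

# Insertion order is part of OrderedDict's documented contract,
# not a side effect of the underlying dict implementation
d = OrderedDict()
d['timmy'] = 'red'
d['barry'] = 'green'
d['guido'] = 'blue'

assert list(d) == ['timmy', 'barry', 'guido']
```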
Dictionaries are ordered in Python 3.6 | 39,980,323 | 72 | 2016-10-11T14:59:23Z | 39,980,744 | 57 | 2016-10-11T15:17:53Z | [
"python",
"python-3.x",
"dictionary",
"python-internals",
"python-3.6"
] | Dictionaries are ordered in Python 3.6, unlike in previous Python incarnations. This seems like a substantial change, but it's only a short paragraph in the [documentation](https://docs.python.org/3.6/whatsnew/3.6.html#other-language-changes). It is described as an implementation detail rather than a language feature, but also implies this may become standard in the future.
How does the Python 3.6 dictionary implementation perform better than the older one while preserving element order?
Here is the text from the documentation:
> `dict()` now uses a "compact" representation [pioneered by PyPy](https://morepypy.blogspot.com/2015/01/faster-more-memory-efficient-and-more.html). The memory usage of the new dict() is between 20% and 25% smaller compared to Python 3.5. [PEP 468](https://www.python.org/dev/peps/pep-0468) (Preserving the order of \*\*kwargs in a function.) is implemented by this. The order-preserving aspect of this new implementation is considered an implementation detail and should not be relied upon (this may change in the future, but it is desired to have this new dict implementation in the language for a few releases before changing the language spec to mandate order-preserving semantics for all current and future Python implementations; this also helps preserve backwards-compatibility with older versions of the language where random iteration order is still in effect, e.g. Python 3.5). (Contributed by INADA Naoki in [issue 27350](https://bugs.python.org/issue27350). Idea [originally suggested by Raymond Hettinger](https://mail.python.org/pipermail/python-dev/2012-December/123028.html).) | > How does the Python 3.6 dictionary implementation perform better than the older one while preserving element order?
Essentially by keeping two arrays, one holding the entries for the dictionary in the order that they were inserted and the other holding a list of indices.
In the previous implementation a sparse array of type *dictionary entries* had to be allocated; unfortunately, it also resulted in a lot of empty space since that array was not allowed to be more than `2/3`s full. This is not the case now since only the *required* entries are stored and a sparse array of type *integer* `2/3`s full is kept.
Obviously creating a sparse array of type "dictionary entries" is much more memory demanding than a sparse array for storing ints ([sized `8 bytes` tops](https://github.com/python/cpython/blob/master/Objects/dict-common.h#L55) in cases of really large dictionaries)
---
[In the original proposal made by Raymond Hettinger](https://mail.python.org/pipermail/python-dev/2012-December/123028.html), a visualization of the data structures used can be seen which captures the gist of the idea.
> For example, the dictionary:
>
> ```
> d = {'timmy': 'red', 'barry': 'green', 'guido': 'blue'}
> ```
>
> is currently stored as:
>
> ```
> entries = [['--', '--', '--'],
> [-8522787127447073495, 'barry', 'green'],
> ['--', '--', '--'],
> ['--', '--', '--'],
> ['--', '--', '--'],
> [-9092791511155847987, 'timmy', 'red'],
> ['--', '--', '--'],
> [-6480567542315338377, 'guido', 'blue']]
> ```
>
> Instead, the data should be organized as follows:
>
> ```
> indices = [None, 1, None, None, None, 0, None, 2]
> entries = [[-9092791511155847987, 'timmy', 'red'],
> [-8522787127447073495, 'barry', 'green'],
> [-6480567542315338377, 'guido', 'blue']]
> ```
As you can visually now see, in the original proposal, a lot of space is essentially empty to reduce collisions and make look-ups faster. With the new approach, you reduce the memory required by moving the sparseness where it's really required, in the indices.
---
> Should you depend on it and/or use it?
As noted in the documentation, this is considered an implementation detail, meaning it is subject to change, and you shouldn't depend on it.
Different implementations of Python aren't required to make the dictionary ordered, rather, just support an ordered mapping where that is required (Notable examples are *[PEP 520: Preserving Class Attribute Definition Order](https://docs.python.org/3.6/whatsnew/3.6.html#pep-520-preserving-class-attribute-definition-order)* and *[PEP 468: Preserving Keyword Argument Order](https://docs.python.org/3.6/whatsnew/3.6.html#pep-468-preserving-keyword-argument-order)*)
If you want to write code that preserves the ordering and want it to not break on previous versions/different implementations you should always use `OrderedDict`. Besides, `OrderedDict` will most likely eventually become a thin-wrapper around the new `dict` implementation. |
How to *not* create an instance | 39,988,779 | 2 | 2016-10-12T00:34:44Z | 39,988,807 | 7 | 2016-10-12T00:38:41Z | [
"python",
"python-3.x"
] | I would like to avoid the creation of an instance if the arguments do not match the expected values.
I.e. in short:
```
#!/usr/bin/env python3
class Test(object):
def __init__(self, reallydoit = True):
if reallydoit:
self.done = True
else:
return None
make_me = Test()
make_me_not = Test(reallydoit=False)
```
I'd like `make_me_not` to be `None`, and I thought that `return None` could do it, but this variable is an instance of `Test` too:
```
>>> make_me
<__main__.Test object at 0x7fd78c732390>
>>> make_me_not
<__main__.Test object at 0x7fd78c732470>
```
I'm sure there's a way to do this, but my Google-fu failed me so far.
Thank you for any help.
**EDIT:** I would prefer this to be handled silently; the conditional should be interpreted as "Best not create this specific instance" instead of "You are using this class the wrong way". So yes, raising an error and then handling it is a possibility, but I'd prefer making less ruckus. | Just [raise](https://docs.python.org/3/reference/simple_stmts.html#the-raise-statement) an exception in the [*\_\_init\_\_*](https://docs.python.org/3/reference/datamodel.html#object.__init__) method:
```
class Test(object):
def __init__(self, reallydoit = True):
if reallydoit:
self.done = True
else:
raise ValueError('Not really doing it')
```
The other approach is to move your code to a [*\_\_new\_\_*](https://docs.python.org/3/reference/datamodel.html#object.__new__) method:
```
class Test(object):
def __new__(cls, reallydoit = True):
if reallydoit:
return object.__new__(cls)
else:
return None
```
Lastly, you could move the creation decision into a [factory function](https://en.wikipedia.org/wiki/Factory_method_pattern):
```
class Test(object):
pass
def maybe_test(reallydoit=True):
if reallydoit:
return Test()
return None
``` |
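Of the three options above, the `__new__` variant matches the question's preference for handling this silently; a quick check (sketch):

```python
class Test(object):
    def __new__(cls, reallydoit=True):
        # Returning None instead of a new instance silently skips creation;
        # no exception is raised and no factory function is needed
        if reallydoit:
            return object.__new__(cls)
        return None

made = Test()
not_made = Test(reallydoit=False)
assert isinstance(made, Test)
assert not_made is None
```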
Why does __slots__ = ('__dict__',) produce smaller instances? | 40,003,067 | 5 | 2016-10-12T15:55:31Z | 40,003,530 | 7 | 2016-10-12T16:18:18Z | [
"python",
"class",
"python-3.x"
] | ```
class Spam(object):
__slots__ = ('__dict__',)
```
Produces instances smaller than those of a "normal" class. Why is this?
Source: [David Beazley's recent tweet](https://twitter.com/dabeaz/status/785948782219231232). | To me, it looks like the memory savings come from the lack of a `__weakref__` on the instance.
So if we have:
```
class Spam1(object):
__slots__ = ('__dict__',)
class Spam2(object):
__slots__ = ('__dict__', '__weakref__')
class Spam3(object):
__slots__ = ('foo',)
class Eggs(object):
pass
objs = Spam1(), Spam2(), Spam3(), Eggs()
for obj in objs:
obj.foo = 'bar'
import sys
for obj in objs:
print(type(obj).__name__, sys.getsizeof(obj))
```
The results (on python 3.5.2) are:
```
Spam1 48
Spam2 56
Spam3 48
Eggs 56
```
We see that `Spam2` (which has a `__weakref__`) is the same size as `Eggs` (a traditional class).
Note that normally, this savings is going to be completely insignificant (and prevents you from using weak-references in your slots enabled classes). Generally, savings from `__slots__` come from the fact that they don't create a `__dict__` in the first place. Since `__dict__`s are implemented using a somewhat sparse table (in order to help avoid hash collisions and maintain O(1) lookup/insert/delete), there's a fair amount of space that isn't used for each dictionary that your program creates. If you add `'__dict__'` to your `__slots__` though, you miss out on this optimization (a dict is still created).
To explore this a little more, we can add more slots:
```
class Spam3(object):
__slots__ = ('foo', 'bar')
```
Now if we re-run, we see that it takes:
```
Spam1 48
Spam2 56
Spam3 56
Eggs 56
```
So each slot takes 8 bytes on the *instance* (for me -- likely because 8 bytes is `sizeof(pointer)` on my system). Also note that `__slots__` is implemented by making descriptors (which live on the *class*, not the instance). So, the instance (even though you might find `__slots__` listed via `dir(instance)`) isn't actually carrying around a `__slots__` value -- that's being carried around by the *class*.
This also has the consequence that your slots enabled class can't set "default" values... e.g. the following code doesn't work:
```
class Foo(object):
__slots__ = ('foo',)
foo = 'bar'
```
So to boil it down:
* each "slot" on an instance takes up the size of a pointer on your system.
* without `__slots__ = ('__dict__',)` a `__dict__` slot and a `__weakref__` slot is created on the instance
* with `__slots__ = ('__dict__',)`, a `__dict__` slot is created but a `__weakref__` slot is not created on the instance.
* In neither case is `__slots__` actually put on the *instance*. It lives on the *class* (even though you might see it from `dir(instance)`).
* The savings you reap from using `__slots__` in this way is likely to be insignificant. Real savings from `__slots__` happen when you do not create a `dict` for the instance (since `dict`s take up more storage than the sum of the storage required for their contents, due to the somewhat sparse packing of data in the data structure). On top of that, there are downsides to using slots this way (e.g. no weak-references to your instances).
Why does map return a map object instead of a list in Python 3? | 40,015,439 | 20 | 2016-10-13T08:01:41Z | 40,015,480 | 7 | 2016-10-13T08:04:08Z | [
"python",
"python-3.x"
] | I am interested in understanding the [new language design of Python 3.x](http://stackoverflow.com/questions/1303347/getting-a-map-to-return-a-list-in-python-3-x).
I do enjoy, in Python 2.7, the function `map`:
```
Python 2.7.12
In[2]: map(lambda x: x+1, [1,2,3])
Out[2]: [2, 3, 4]
```
However, in Python 3.x things have changed:
```
Python 3.5.1
In[2]: map(lambda x: x+1, [1,2,3])
Out[2]: <map at 0x4218390>
```
I understand the how, but I could not find a reference to the why. Why did the language designers make this choice, which, in my opinion, introduces a great deal of pain. Was this to arm-wrestle developers into sticking to list comprehensions?
IMO, lists can naturally be thought of as [Functors](http://learnyouahaskell.com/functors-applicative-functors-and-monoids); and I have somehow been taught to think in this way:
```
fmap :: (a -> b) -> f a -> f b
``` | Because it returns an iterator, it omits storing the full list in memory, so you can iterate over it later without straining memory. You may not even need the full list, only part of it, up to the point where your condition is matched.
You may find these [docs](https://docs.python.org/3/glossary.html#term-iterator) useful, because iterators are awesome.
> An object representing a stream of data. Repeated calls to the iterator's `__next__()` method (or passing it to the built-in function `next()`) return successive items in the stream. When no more data are available a `StopIteration` exception is raised instead. At this point, the iterator object is exhausted and any further calls to its `__next__()` method just raise `StopIteration` again. Iterators are required to have an `__iter__()` method that returns the iterator object itself so every iterator is also iterable and may be used in most places where other iterables are accepted. One notable exception is code which attempts multiple iteration passes. A container object (such as a `list`) produces a fresh new iterator each time you pass it to the `iter()` function or use it in a for loop. Attempting this with an iterator will just return the same exhausted iterator object used in the previous iteration pass, making it appear like an empty container. |
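A short demonstration of that lazy behavior: items are computed on demand, and the iterator can only be consumed once:

```python
m = map(lambda x: x + 1, [1, 2, 3])

print(next(m))  # 2 -- computed on demand
print(list(m))  # [3, 4] -- only the remaining items
print(list(m))  # [] -- the iterator is now exhausted
```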
list() uses more memory than list comprehension | 40,018,398 | 54 | 2016-10-13T10:25:25Z | 40,018,719 | 41 | 2016-10-13T10:40:13Z | [
"python",
"list",
"list-comprehension"
] | So I was playing with `list` objects and found a little strange thing: if a `list` is created with `list()`, it uses more memory than one built with a list comprehension. I'm using Python 3.5.2
```
In [1]: import sys
In [2]: a = list(range(100))
In [3]: sys.getsizeof(a)
Out[3]: 1008
In [4]: b = [i for i in range(100)]
In [5]: sys.getsizeof(b)
Out[5]: 912
In [6]: type(a) == type(b)
Out[6]: True
In [7]: a == b
Out[7]: True
In [8]: sys.getsizeof(list(b))
Out[8]: 1008
```
From the [docs](https://docs.python.org/3.5/library/stdtypes.html#list):
> Lists may be constructed in several ways:
>
> * Using a pair of square brackets to denote the empty list: `[]`
> * Using square brackets, separating items with commas: `[a]`, `[a, b, c]`
> * Using a list comprehension: `[x for x in iterable]`
> * Using the type constructor: `list()` or `list(iterable)`
But it seems that using `list()` uses more memory.
And the bigger the `list` is, the more the gap increases.
[](https://i.stack.imgur.com/VVHJL.png)
Why does this happen?
**UPDATE #1**
Test with Python 3.6.0b2:
```
Python 3.6.0b2 (default, Oct 11 2016, 11:52:53)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.getsizeof(list(range(100)))
1008
>>> sys.getsizeof([i for i in range(100)])
912
```
**UPDATE #2**
Test with Python 2.7.12:
```
Python 2.7.12 (default, Jul 1 2016, 15:12:24)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.getsizeof(list(xrange(100)))
1016
>>> sys.getsizeof([i for i in xrange(100)])
920
``` | I think you're seeing over-allocation patterns; this is a [sample from the source](https://github.com/python/cpython/blob/3.5/Objects/listobject.c#L42):
```
/* This over-allocates proportional to the list size, making room
* for additional growth. The over-allocation is mild, but is
* enough to give linear-time amortized behavior over a long
* sequence of appends() in the presence of a poorly-performing
* system realloc().
* The growth pattern is: 0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ...
*/
new_allocated = (newsize >> 3) + (newsize < 9 ? 3 : 6);
```
---
Printing the sizes of list comprehensions of lengths 0-88 you can see the pattern matches:
```
# create comprehensions for sizes 0-88
comprehensions = [sys.getsizeof([1 for _ in range(l)]) for l in range(90)]
# only take those that resulted in growth compared to previous length
steps = zip(comprehensions, comprehensions[1:])
growths = [x for x in list(enumerate(steps)) if x[1][0] != x[1][1]]
# print the results:
for growth in growths:
print(growth)
```
Results (format is `(list length, (old total size, new total size))`):
```
(0, (64, 96))
(4, (96, 128))
(8, (128, 192))
(16, (192, 264))
(25, (264, 344))
(35, (344, 432))
(46, (432, 528))
(58, (528, 640))
(72, (640, 768))
(88, (768, 912))
```
---
The over-allocation is done for performance reasons, allowing lists to grow without allocating more memory on every growth (better [amortized](https://en.wikipedia.org/wiki/Amortized_analysis) performance).
A probable reason for the difference with the list comprehension is that a list comprehension cannot deterministically calculate the size of the generated list, but `list()` can. This means the comprehension will continuously grow the list as it fills it, using over-allocation until finally filling it.
It is possible that it will not grow the over-allocation buffer with unused allocated nodes once it's done (in fact, in most cases it won't; that would defeat the over-allocation's purpose).
`list()`, however, can add some buffer no matter the list size, since it knows the final list size in advance.
---
Further supporting evidence, also from the source, is that we see [list comprehensions invoking `LIST_APPEND`](https://github.com/python/cpython/blob/3.5/Python/compile.c#L3374), which indicates usage of `list.resize`, which in turn indicates consuming the pre-allocation buffer without knowing how much of it will be filled. This is consistent with the behavior you're seeing.
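A minimal way to watch that comprehension-style growth from Python itself (exact sizes are CPython- and platform-dependent, so only the pattern matters, not the numbers):

```python
import sys

sizes = []
lst = []
for i in range(90):
    lst.append(i)
    sizes.append(sys.getsizeof(lst))

# sizes never shrink, and they grow in occasional jumps (the
# over-allocation steps), not by a fixed amount per element
assert all(a <= b for a, b in zip(sizes, sizes[1:]))
jumps = [i for i, (a, b) in enumerate(zip(sizes, sizes[1:])) if b > a]
print(jumps)  # positions where the buffer was grown
```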
---
To conclude, `list()` will pre-allocate more nodes as a function of the list size
```
>>> sys.getsizeof(list([1,2,3]))
60
>>> sys.getsizeof(list([1,2,3,4]))
64
```
List comprehension does not know the list size so it uses append operations as it grows, depleting the pre-allocation buffer:
```
# one item before filling pre-allocation buffer completely
>>> sys.getsizeof([i for i in [1,2,3]])
52
# fills pre-allocation buffer completely
# note that size did not change, we still have buffered unused nodes
>>> sys.getsizeof([i for i in [1,2,3,4]])
52
# grows pre-allocation buffer
>>> sys.getsizeof([i for i in [1,2,3,4,5]])
68
``` |
list() uses more memory than list comprehension | 40,018,398 | 54 | 2016-10-13T10:25:25Z | 40,019,900 | 23 | 2016-10-13T11:37:10Z | [
"python",
"list",
"list-comprehension"
] | So I was playing with `list` objects and found a strange little thing: if a `list` is created with `list()`, it uses more memory than a list comprehension. I'm using Python 3.5.2
```
In [1]: import sys
In [2]: a = list(range(100))
In [3]: sys.getsizeof(a)
Out[3]: 1008
In [4]: b = [i for i in range(100)]
In [5]: sys.getsizeof(b)
Out[5]: 912
In [6]: type(a) == type(b)
Out[6]: True
In [7]: a == b
Out[7]: True
In [8]: sys.getsizeof(list(b))
Out[8]: 1008
```
From the [docs](https://docs.python.org/3.5/library/stdtypes.html#list):
> Lists may be constructed in several ways:
>
> * Using a pair of square brackets to denote the empty list: `[]`
> * Using square brackets, separating items with commas: `[a]`, `[a, b, c]`
> * Using a list comprehension: `[x for x in iterable]`
> * Using the type constructor: `list()` or `list(iterable)`
But it seems that using `list()` uses more memory.
And the bigger the `list`, the bigger the gap.
[](https://i.stack.imgur.com/VVHJL.png)
Why does this happen?
**UPDATE #1**
Test with Python 3.6.0b2:
```
Python 3.6.0b2 (default, Oct 11 2016, 11:52:53)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.getsizeof(list(range(100)))
1008
>>> sys.getsizeof([i for i in range(100)])
912
```
**UPDATE #2**
Test with Python 2.7.12:
```
Python 2.7.12 (default, Jul 1 2016, 15:12:24)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.getsizeof(list(xrange(100)))
1016
>>> sys.getsizeof([i for i in xrange(100)])
920
``` | Thanks everyone for helping me understand this awesome piece of Python.
I don't want to make the question that massive (that's why I'm posting an answer); I just want to show and share my thoughts.
As @ReutSharabani noted correctly: "list() deterministically determines list size". You can see it from this graph.
[](https://i.stack.imgur.com/JrqC9.png)
When you `append` or use a list comprehension, you always have some sort of boundary that extends when you reach some point. And with `list()` you have almost the same boundaries, but they are floating.
**UPDATE**
So thanks to @ReutSharabani, @tavo, @SvenFestersen
To sum up: `list()` preallocates memory depending on the list size; a list comprehension cannot do that (it requests more memory when it is needed, like `.append()`). That's why `list()` stores more memory.
One more graph that shows `list()` preallocating memory. The green line shows `list(range(830))` appending element by element; for a while the memory does not change.
[](https://i.stack.imgur.com/yoV85.png)
**UPDATE 2**
As @Barmar noted in the comments below, `list()` must be faster than a list comprehension, so I ran `timeit()` with `number=1000` for `list` lengths from `4**0` to `4**10`, and the results are
[](https://i.stack.imgur.com/WNSnO.png) |
Why does the class definition's metaclass keyword argument accept a callable? | 40,029,807 | 8 | 2016-10-13T19:54:43Z | 40,030,142 | 7 | 2016-10-13T20:14:46Z | [
"python",
"class",
"python-3.x",
"metaclass"
] | ## Background
The Python 3 [documentation](https://docs.python.org/3.6/reference/datamodel.html#determining-the-appropriate-metaclass) clearly describes how the metaclass of a class is determined:
> * if no bases and no explicit metaclass are given, then type() is used
> * if an explicit metaclass is given and it is not an instance of type(), then it is used directly as the metaclass
> * if an instance of type() is given as the explicit metaclass, or bases are defined, then the most derived metaclass is used
Therefore, according to the second rule, it is possible to specify a metaclass using a callable. E.g.,
```
class MyMetaclass(type):
pass
def metaclass_callable(name, bases, namespace):
print("Called with", name)
return MyMetaclass(name, bases, namespace)
class MyClass(metaclass=metaclass_callable):
pass
class MyDerived(MyClass):
pass
print(type(MyClass), type(MyDerived))
```
## Question 1
Is the metaclass of `MyClass`: `metaclass_callable` or `MyMetaclass`? The second rule in the documentation says that the provided callable "is used directly as the metaclass". However, it seems to make more sense to say that the metaclass is `MyMetaclass` since
* `MyClass` and `MyDerived` have type `MyMetaclass`,
* `metaclass_callable` is called once and then appears to be unrecoverable,
* derived classes do not use (as far as I can tell) `metaclass_callable` in any way (they use `MyMetaclass`).
## Question 2
Is there anything you can do with a callable that you can't do with an instance of `type`? What is the purpose of accepting an arbitrary callable? | Regarding your first question, the metaclass should be `MyMetaclass` (which it is):
```
In [7]: print(type(MyClass), type(MyDerived))
<class '__main__.MyMetaclass'> <class '__main__.MyMetaclass'>
```
The reason is that if the metaclass is not an instance of `type`, Python calls the metaclass by passing these arguments to it: `name, bases, ns, **kwds` (see `new_class`). Since you are returning your real metaclass from that function, the class gets the correct metaclass type.
And about the second question:
> What is the purpose of accepting an arbitrary callable?
There is no special purpose; **it's actually the nature of metaclasses**, because making an instance from a class always calls the metaclass by calling its `__call__` method:
```
Metaclass.__call__()
```
Which means that you can pass any callable as your metaclass. So for example if you test it with a nested function the result will still be the same:
```
In [21]: def metaclass_callable(name, bases, namespace):
def inner():
return MyMetaclass(name, bases, namespace)
return inner()
....:
In [22]: class MyClass(metaclass=metaclass_callable):
pass
....:
In [23]: print(type(MyClass), type(MyDerived))
<class '__main__.MyMetaclass'> <class '__main__.MyMetaclass'>
```
---
For more info, here is how Python creates a class:
It calls the `new_class` function, which calls `prepare_class` inside itself. As you can see, inside `prepare_class` Python calls the `__prepare__` method of the appropriate metaclass, besides finding the proper metaclass (using the `_calculate_meta` function) and creating the appropriate namespace for the class.
So, all in one, here is the hierarchy of executing a metaclass's methods:
1. `__prepare__` 1
2. `__call__`
3. `__new__`
4. `__init__`
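A small sketch (names are mine) that records the order empirically — note that the metaclass's `__call__` only fires later, when the class itself is instantiated:

```python
calls = []

class TracingMeta(type):
    @classmethod
    def __prepare__(mcls, name, bases, **kwds):
        calls.append('__prepare__')
        return {}

    def __new__(mcls, name, bases, ns, **kwds):
        calls.append('__new__')
        return super().__new__(mcls, name, bases, ns)

    def __init__(cls, name, bases, ns, **kwds):
        calls.append('__init__')
        super().__init__(name, bases, ns)

    def __call__(cls, *args, **kwds):
        calls.append('__call__')
        return super().__call__(*args, **kwds)

class Traced(metaclass=TracingMeta):
    pass

Traced()  # instance creation goes through TracingMeta.__call__
print(calls)  # ['__prepare__', '__new__', '__init__', '__call__']
```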
And here is the source code:
```
# Provide a PEP 3115 compliant mechanism for class creation
def new_class(name, bases=(), kwds=None, exec_body=None):
"""Create a class object dynamically using the appropriate metaclass."""
meta, ns, kwds = prepare_class(name, bases, kwds)
if exec_body is not None:
exec_body(ns)
return meta(name, bases, ns, **kwds)
def prepare_class(name, bases=(), kwds=None):
"""Call the __prepare__ method of the appropriate metaclass.
Returns (metaclass, namespace, kwds) as a 3-tuple
*metaclass* is the appropriate metaclass
*namespace* is the prepared class namespace
*kwds* is an updated copy of the passed in kwds argument with any
'metaclass' entry removed. If no kwds argument is passed in, this will
be an empty dict.
"""
if kwds is None:
kwds = {}
else:
kwds = dict(kwds) # Don't alter the provided mapping
if 'metaclass' in kwds:
meta = kwds.pop('metaclass')
else:
if bases:
meta = type(bases[0])
else:
meta = type
if isinstance(meta, type):
# when meta is a type, we first determine the most-derived metaclass
# instead of invoking the initial candidate directly
meta = _calculate_meta(meta, bases)
if hasattr(meta, '__prepare__'):
ns = meta.__prepare__(name, bases, **kwds)
else:
ns = {}
return meta, ns, kwds
def _calculate_meta(meta, bases):
"""Calculate the most derived metaclass."""
winner = meta
for base in bases:
base_meta = type(base)
if issubclass(winner, base_meta):
continue
if issubclass(base_meta, winner):
winner = base_meta
continue
# else:
raise TypeError("metaclass conflict: "
"the metaclass of a derived class "
"must be a (non-strict) subclass "
"of the metaclasses of all its bases")
return winner
```
---
1. Note that it gets called implicitly inside the *new\_class* function, before the return.
Removing elements from an array that are in another array | 40,055,835 | 11 | 2016-10-15T06:36:16Z | 40,056,135 | 9 | 2016-10-15T07:07:33Z | [
"python",
"arrays",
"numpy"
] | Say I have these 2D arrays A and B.
How can I remove elements from A that are in B.
```
A=np.asarray([[1,1,1], [1,1,2], [1,1,3], [1,1,4]])
B=np.asarray([[0,0,0], [1,0,2], [1,0,3], [1,0,4], [1,1,0], [1,1,1], [1,1,4]])
#output = [[1,1,2], [1,1,3]]
```
---
To be more precise, I would like to do something like this.
```
data = some numpy array
label = some numpy array
A = np.argwhere(label==0) #[[1 1 1], [1 1 2], [1 1 3], [1 1 4]]
B = np.argwhere(data>1.5) #[[0 0 0], [1 0 2], [1 0 3], [1 0 4], [1 1 0], [1 1 1], [1 1 4]]
out = np.argwhere(label==0 and data>1.5) #[[1 1 2], [1 1 3]]
``` | Here is a Numpythonic approach with *broadcasting*:
```
In [83]: A[np.all(np.any((A-B[:, None]), axis=2), axis=0)]
Out[83]:
array([[1, 1, 2],
[1, 1, 3]])
```
Here is a timeit with other answer:
```
In [90]: def cal_diff(A, B):
....: A_rows = A.view([('', A.dtype)] * A.shape[1])
....: B_rows = B.view([('', B.dtype)] * B.shape[1])
....: return np.setdiff1d(A_rows, B_rows).view(A.dtype).reshape(-1, A.shape[1])
....:
In [93]: %timeit cal_diff(A, B)
10000 loops, best of 3: 54.1 µs per loop
In [94]: %timeit A[np.all(np.any((A-B[:, None]), axis=2), axis=0)]
100000 loops, best of 3: 9.41 µs per loop
# Even better with Divakar's suggestion
In [97]: %timeit A[~((A[:,None,:] == B).all(-1)).any(1)]
100000 loops, best of 3: 7.41 µs per loop
```
Well, if you are looking for a faster way, you should look for ways that reduce the number of comparisons. In this case (without considering the order) you can generate a unique number from your rows and compare the numbers, which can be done by summing the squares of the items.
Here is the benchmark with Divakar's in1d approach:
```
In [144]: def in1d_approach(A,B):
.....: dims = np.maximum(B.max(0),A.max(0))+1
.....: return A[~np.in1d(np.ravel_multi_index(A.T,dims),\
.....: np.ravel_multi_index(B.T,dims))]
.....:
In [146]: %timeit in1d_approach(A, B)
10000 loops, best of 3: 23.8 µs per loop
In [145]: %timeit A[~np.in1d(np.power(A, 2).sum(1), np.power(B, 2).sum(1))]
10000 loops, best of 3: 20.2 µs per loop
```
You can use `np.diff` to get an order-independent result:
```
In [194]: B=np.array([[0, 0, 0,], [1, 0, 2,], [1, 0, 3,], [1, 0, 4,], [1, 1, 0,], [1, 1, 1,], [1, 1, 4,], [4, 1, 1]])
In [195]: A[~np.in1d(np.diff(np.diff(np.power(A, 2))), np.diff(np.diff(np.power(B, 2))))]
Out[195]:
array([[1, 1, 2],
[1, 1, 3]])
In [196]: %timeit A[~np.in1d(np.diff(np.diff(np.power(A, 2))), np.diff(np.diff(np.power(B, 2))))]
10000 loops, best of 3: 30.7 µs per loop
```
Benchmark with Divakar's setup:
```
In [198]: B = np.random.randint(0,9,(1000,3))
In [199]: A = np.random.randint(0,9,(100,3))
In [200]: A_idx = np.random.choice(np.arange(A.shape[0]),size=10,replace=0)
In [201]: B_idx = np.random.choice(np.arange(B.shape[0]),size=10,replace=0)
In [202]: A[A_idx] = B[B_idx]
In [203]: %timeit A[~np.in1d(np.diff(np.diff(np.power(A, 2))), np.diff(np.diff(np.power(B, 2))))]
10000 loops, best of 3: 137 µs per loop
In [204]: %timeit A[~np.in1d(np.power(A, 2).sum(1), np.power(B, 2).sum(1))]
10000 loops, best of 3: 112 µs per loop
In [205]: %timeit in1d_approach(A, B)
10000 loops, best of 3: 115 µs per loop
```
Timing with larger arrays (Divakar's solution is slightly faster):
```
In [231]: %timeit A[~np.in1d(np.diff(np.diff(np.power(A, 2))), np.diff(np.diff(np.power(B, 2))))]
1000 loops, best of 3: 1.01 ms per loop
In [232]: %timeit A[~np.in1d(np.power(A, 2).sum(1), np.power(B, 2).sum(1))]
1000 loops, best of 3: 880 µs per loop
In [233]: %timeit in1d_approach(A, B)
1000 loops, best of 3: 807 µs per loop
``` |
Removing elements from an array that are in another array | 40,055,835 | 11 | 2016-10-15T06:36:16Z | 40,056,251 | 9 | 2016-10-15T07:19:03Z | [
"python",
"arrays",
"numpy"
] | Say I have these 2D arrays A and B.
How can I remove elements from A that are in B.
```
A=np.asarray([[1,1,1], [1,1,2], [1,1,3], [1,1,4]])
B=np.asarray([[0,0,0], [1,0,2], [1,0,3], [1,0,4], [1,1,0], [1,1,1], [1,1,4]])
#output = [[1,1,2], [1,1,3]]
```
---
To be more precise, I would like to do something like this.
```
data = some numpy array
label = some numpy array
A = np.argwhere(label==0) #[[1 1 1], [1 1 2], [1 1 3], [1 1 4]]
B = np.argwhere(data>1.5) #[[0 0 0], [1 0 2], [1 0 3], [1 0 4], [1 1 0], [1 1 1], [1 1 4]]
out = np.argwhere(label==0 and data>1.5) #[[1 1 2], [1 1 3]]
``` | Based on [`this solution`](http://stackoverflow.com/a/38674038/3293881) to [`Find the row indexes of several values in a numpy array`](http://stackoverflow.com/questions/38674027/find-the-row-indexes-of-several-values-in-a-numpy-array), here's a NumPy-based solution with a smaller memory footprint that could be beneficial when working with large arrays -
```
dims = np.maximum(B.max(0),A.max(0))+1
out = A[~np.in1d(np.ravel_multi_index(A.T,dims),np.ravel_multi_index(B.T,dims))]
```
Sample run -
```
In [38]: A
Out[38]:
array([[1, 1, 1],
[1, 1, 2],
[1, 1, 3],
[1, 1, 4]])
In [39]: B
Out[39]:
array([[0, 0, 0],
[1, 0, 2],
[1, 0, 3],
[1, 0, 4],
[1, 1, 0],
[1, 1, 1],
[1, 1, 4]])
In [40]: out
Out[40]:
array([[1, 1, 2],
[1, 1, 3]])
```
Runtime test on large arrays -
```
In [107]: def in1d_approach(A,B):
...: dims = np.maximum(B.max(0),A.max(0))+1
...: return A[~np.in1d(np.ravel_multi_index(A.T,dims),\
...: np.ravel_multi_index(B.T,dims))]
...:
In [108]: # Setup arrays with B as large array and A contains some of B's rows
...: B = np.random.randint(0,9,(1000,3))
...: A = np.random.randint(0,9,(100,3))
...: A_idx = np.random.choice(np.arange(A.shape[0]),size=10,replace=0)
...: B_idx = np.random.choice(np.arange(B.shape[0]),size=10,replace=0)
...: A[A_idx] = B[B_idx]
...:
```
Timings with `broadcasting` based solutions -
```
In [109]: %timeit A[np.all(np.any((A-B[:, None]), axis=2), axis=0)]
100 loops, best of 3: 4.64 ms per loop # @Kasramvd's soln
In [110]: %timeit A[~((A[:,None,:] == B).all(-1)).any(1)]
100 loops, best of 3: 3.66 ms per loop
```
Timing with less memory footprint based solution -
```
In [111]: %timeit in1d_approach(A,B)
1000 loops, best of 3: 231 µs per loop
```
**Further performance boost**
`in1d_approach` reduces each row by considering each row as an indexing tuple. We can do the same a bit more efficiently by introducing matrix-multiplication with `np.dot`, like so -
```
def in1d_dot_approach(A,B):
cumdims = (np.maximum(A.max(),B.max())+1)**np.arange(B.shape[1])
return A[~np.in1d(A.dot(cumdims),B.dot(cumdims))]
```
Let's test it against the previous on much larger arrays -
```
In [251]: # Setup arrays with B as large array and A contains some of B's rows
...: B = np.random.randint(0,9,(10000,3))
...: A = np.random.randint(0,9,(1000,3))
...: A_idx = np.random.choice(np.arange(A.shape[0]),size=10,replace=0)
...: B_idx = np.random.choice(np.arange(B.shape[0]),size=10,replace=0)
...: A[A_idx] = B[B_idx]
...:
In [252]: %timeit in1d_approach(A,B)
1000 loops, best of 3: 1.28 ms per loop
In [253]: %timeit in1d_dot_approach(A, B)
1000 loops, best of 3: 1.2 ms per loop
``` |
Why can yield be indexed? | 40,061,280 | 6 | 2016-10-15T16:06:41Z | 40,061,337 | 12 | 2016-10-15T16:11:49Z | [
"python",
"python-2.7",
"indexing",
"generator",
"yield"
] | I thought I could make my python (2.7.10) code simpler by directly accessing the index of a value passed to a generator via `send`, and was surprised the code ran. I then discovered an index applied to `yield` doesn't really do anything, nor does it throw an exception:
```
def gen1():
t = yield[0]
assert t
yield False
g = gen1()
next(g)
g.send('char_str')
```
However, if I try to index `yield` thrice or more, I get an exception:
```
def gen1():
t = yield[0][0][0]
assert t
yield False
g = gen1()
next(g)
g.send('char_str')
```
which throws
```
TypeError: 'int' object has no attribute '__getitem__'
```
This was unusually inconsistent behavior, and I was wondering if there is an intuitive explanation for what indexing yield is actually doing? | You are not indexing. You are yielding a list; the expression `yield[0]` is really just the same as the following (but without a variable):
```
lst = [0]
yield lst
```
If you look at what `next()` returned you'd have gotten that list:
```
>>> def gen1():
... t = yield[0]
... assert t
... yield False
...
>>> g = gen1()
>>> next(g)
[0]
```
You don't *have* to have a space between `yield` and the `[0]`, that's all.
The exception is caused by you trying to apply the subscription to the contained `0` integer:
```
>>> [0] # list with one element, the int value 0
[0]
>>> [0][0] # indexing the first element, so 0
0
>>> [0][0][0] # trying to index the 0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'int' object is not subscriptable
```
If you want to index a value sent to the generator, put parentheses around the `yield` expression:
```
t = (yield)[0]
```
Demo:
```
>>> def gen1():
... t = (yield)[0]
... print 'Received: {!r}'.format(t)
... yield False
...
>>> g = gen1()
>>> next(g)
>>> g.send('foo')
Received: 'f'
False
``` |
Most Pythonic way to find/check items in a list with O(1) complexity? | 40,064,377 | 3 | 2016-10-15T21:24:43Z | 40,064,454 | 7 | 2016-10-15T21:33:53Z | [
"python",
"algorithm",
"performance",
"python-3.x",
"time-complexity"
] | The problem I'm facing is finding/checking items in a list with O(1) complexity. The following has a complexity of O(n):
```
'foo' in list_bar
```
This has a complexity of O(n) because you are using the `in` keyword on a `list`. (Refer to [Python Time Complexity](https://wiki.python.org/moin/TimeComplexity))
However, if you use the `in` keyword on a `set`, it has a complexity of O(1).
The reason why I need to figure out O(1) complexity for a list, and not a set, is largely due to the need to account for duplicate items within the list. Sets do not allow for duplicates. A decent example would be :
```
chars_available = ['h', 'e', 'l', 'o', 'o', 'z']
chars_needed = ['h', 'e', 'l', 'l', 'o']
def foo(chars_available, chars_needed):
cpy_needed = list(chars_needed)
for char in cpy_needed:
if char in chars_available:
chars_available.remove(char)
chars_needed.remove(char)
if not chars_needed: return True # if chars_needed == []
return False
foo(chars_available, chars_needed)
```
The example is not the focus here, so please try not to get sidetracked by it. The focus is still trying to get O(1) complexity for finding items in a list. How would I accomplish that pythonically?
(As extra credit, if you did want to show a better way of performing that operation in Python, pseudocode, or another language, I'd be happy to read it).
Thank you!
**Edit:**
In response to Ami Tavory's answer, I learned you can't make lists faster than O(n), but the suggestion for `collections.Counter()` helped solve the application I was working on. I'm uploading my faster solution for Stack Overflow, the performance was phenomenal! If I'm not mistaken (correct me if I'm wrong), it should be O(1) since it involves only hashable values and no loop iteration.
```
from collections import Counter
chars_available = ['h', 'e', 'l', 'o', 'o', 'z']
chars_needed = ['h', 'e', 'l', 'l', 'o']
def foo(chars_available, chars_needed):
counter_available = Counter(chars_available)
counter_needed = Counter(chars_needed)
out = counter_needed - counter_available
if not list(out.elements()): return True
else: return False
foo(chars_available, chars_needed)
```
Very fast, very pythonic! Thanks! | In general, it's impossible to find elements in a `list` in constant time. You could hypothetically maintain both a `list` and a `set`, but updating operations will take linear time.
You mention that your motivation is
> a list, and not a set, is largely due to the need to account for duplicate items within the list. Sets do not allow for duplicates.
and ask not to focus on the example. If this is your motivation, you might want to use, instead of a `set`, a `dict` mapping each element to the number of its occurrences.
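A sketch of the duplicate-aware check this enables, using `collections.Counter` (the function name is mine): `Counter` subtraction drops non-positive counts, so an empty result means every needed item is available often enough.

```python
from collections import Counter

def covers(available, needed):
    # an empty Counter is falsy, so "not missing" means nothing is lacking
    missing = Counter(needed) - Counter(available)
    return not missing

print(covers(['h', 'e', 'l', 'o', 'o', 'z'], ['h', 'e', 'l', 'l', 'o']))  # False: 'l' is needed twice
print(covers(['h', 'e', 'l', 'l', 'o', 'z'], ['h', 'e', 'l', 'l', 'o']))  # True
```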
You might find [`collections.Counter`](https://docs.python.org/2/library/collections.html#collections.Counter) useful in particular:
```
In [1]: from collections import Counter
In [2]: Counter(['h', 'e', 'l', 'o', 'o', 'z'])
Out[2]: Counter({'e': 1, 'h': 1, 'l': 1, 'o': 2, 'z': 1})
``` |
Pandas: How to conditionally assign multiple columns? | 40,090,522 | 5 | 2016-10-17T15:40:08Z | 40,090,781 | 9 | 2016-10-17T15:55:19Z | [
"python",
"pandas"
] | I want to replace negative values with `nan` for only certain columns. The simplest way could be:
```
for col in ['a', 'b', 'c']:
df.loc[df[col ] < 0, col] = np.nan
```
`df` could have many columns and I only want to do this to specific columns.
Is there a way to do this in one line? Seems like this should be easy but I have not been able to figure out. | I don't think you'll get much simpler than this:
```
>>> df = pd.DataFrame({'a': np.arange(-5, 2), 'b': np.arange(-5, 2), 'c': np.arange(-5, 2), 'd': np.arange(-5, 2), 'e': np.arange(-5, 2)})
>>> df
a b c d e
0 -5 -5 -5 -5 -5
1 -4 -4 -4 -4 -4
2 -3 -3 -3 -3 -3
3 -2 -2 -2 -2 -2
4 -1 -1 -1 -1 -1
5 0 0 0 0 0
6 1 1 1 1 1
>>> df[df[cols] < 0] = np.nan
>>> df
a b c d e
0 NaN NaN NaN -5 -5
1 NaN NaN NaN -4 -4
2 NaN NaN NaN -3 -3
3 NaN NaN NaN -2 -2
4 NaN NaN NaN -1 -1
5 0.0 0.0 0.0 0 0
6 1.0 1.0 1.0 1 1
``` |
How to get a list of matchable characters from a regex class | 40,094,588 | 3 | 2016-10-17T19:54:32Z | 40,094,825 | 7 | 2016-10-17T20:08:55Z | [
"python",
"regex",
"python-3.x"
] | Given a regex character class/set, how can i get a list of all matchable characters (in python 3). E.g.:
```
[\dA-C]
```
should give
```
['0','1','2','3','4','5','6','7','8','9','A','B','C']
``` | I think what you are looking for is [`string.printable`](https://docs.python.org/2/library/string.html#string.printable) which returns all the printable characters in Python. For example:
```
>>> import string
>>> string.printable
'0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c'
```
Now to check content satisfied by your regex, you may do:
```
>>> import re
>>> x = string.printable
>>> pattern = r'[\dA-C]'
>>> print(re.findall(pattern, x))
['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C']
```
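A sketch wrapping this idea in a reusable helper (the function name is mine), using `re.fullmatch` so the class must match the whole character:

```python
import re
import string

def matchable_chars(char_class):
    # brute-force: test every printable character against the class
    return [c for c in string.printable if re.fullmatch(char_class, c)]

print(matchable_chars(r'[\dA-C]'))
# ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C']
```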
`string.printable` is a combination of *digits, letters, punctuation,* and *whitespace*. Also check [String Constants](https://docs.python.org/2/library/string.html#string-constants) for complete list of constants available with [string](https://docs.python.org/2/library/string.html) module.
---
*In case you need the list of all `unicode` characters*, you may do:
```
import sys
unicode_list = [chr(i) for i in range(sys.maxunicode)]
```
**Note:** It will be a huge list, and the console might get stuck for a while producing the result, as the value of `sys.maxunicode` is:
```
>>> sys.maxunicode
1114111
```
In case you are dealing with some specific unicode formats, refer to [Unicode Character Ranges](http://billposer.org/Linguistics/Computation/UnicodeRanges.html) for limiting the ranges you are interested in. |
More pythonic way of updating an existing value in a dictionary | 40,100,856 | 2 | 2016-10-18T06:16:31Z | 40,100,901 | 12 | 2016-10-18T06:19:33Z | [
"python",
"dictionary"
] | Let's assume I have a dictionary `_dict` and a variable `n`.
```
_dict = {'a': 9, 'b': 7, 'c': 'someValue'}
n = 8
```
I want to update just a single entry, e.g. `{'b': 7}`, only if the value of `n` is greater than the current value of `b`.
The solution I have so far is
```
_dict.update({'b': n for key, value in _dict.items() if key == 'b' and n > value})
```
Which provides the desired result of `{'a': 9, 'b': 8, 'c': 'someValue'}`. So now to my question: Is there a shorter, more pythonic way of doing this? (preferably without importing additional modules) | There is no point in looping if you just need to update one key:
```
_dict['b'] = max(_dict['b'], n)
```
The above sets `'b'` to the highest value of the two. |
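Applied to the question's data, a quick check of the one-liner:

```python
_dict = {'a': 9, 'b': 7, 'c': 'someValue'}
n = 8
_dict['b'] = max(_dict['b'], n)
assert _dict == {'a': 9, 'b': 8, 'c': 'someValue'}

# a smaller n leaves the value untouched
_dict['b'] = max(_dict['b'], 5)
assert _dict['b'] == 8
```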