Dataset columns (type, min, max):

| column | type | min | max |
| --- | --- | --- | --- |
| title | string (length) | 12 | 150 |
| question_id | int64 | 469 | 40.1M |
| question_score | int64 | 2 | 5.52k |
| question_date | string (date) | 2008-08-02 15:11:16 | 2016-10-18 06:16:31 |
| answer_id | int64 | 536 | 40.1M |
| answer_score | int64 | 7 | 8.38k |
| answer_date | string (date) | 2008-08-02 18:49:07 | 2016-10-18 06:19:33 |
| tags | list (length) | 1 | 5 |
| question_body_md | string (length) | 15 | 30.2k |
| answer_body_md | string (length) | 11 | 27.8k |
How to cache reads?
34,302,597
14
2015-12-16T01:37:03Z
34,661,090
8
2016-01-07T17:06:10Z
[ "python", "caching", "samtools", "pysam", "bam" ]
I am using python/pysam to analyze sequencing data. In its tutorial ([pysam - An interface for reading and writing SAM files](http://pysam.readthedocs.org/en/latest/api.html#pysam.AlignmentFile.mate)) for the `mate` method it says: 'This method is too slow for high-throughput processing. If a read needs to be processed with its mate, work from a read name sorted file or, better, cache reads.' How would you 'cache reads'?
[Caching](https://en.wikipedia.org/wiki/Memoization) is a typical approach to speed up long-running operations. It sacrifices memory for the sake of computational speed. Suppose you have a function which, given a set of parameters, always returns the same result. Unfortunately this function is very slow, and you need to call it a considerable number of times, slowing down your program. What you can do is store a limited number of {parameters: result} combinations and skip the function's logic any time it is called with the same parameters. It's a dirty trick but quite effective, especially when the number of distinct parameter combinations is small compared to the cost of the function. In Python 3 there's a [decorator](https://docs.python.org/3/library/functools.html#functools.lru_cache) for this purpose. In Python 2 a [library](https://pypi.python.org/pypi/Brownie/0.5.1) can help, but you need a bit more work.
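A minimal sketch of that Python 3 decorator in action. The function body here is a hypothetical stand-in; in the pysam case the cached value would be something expensive such as a mate read keyed by read name:

```python
from functools import lru_cache

@lru_cache(maxsize=100_000)
def expensive_lookup(read_name):
    # hypothetical stand-in for a slow operation such as AlignmentFile.mate()
    return read_name.upper()

expensive_lookup("read_1")            # computed and cached
expensive_lookup("read_1")            # served from the cache
print(expensive_lookup.cache_info())  # shows hits=1, misses=1
```

`maxsize` bounds the memory traded away; entries are evicted least-recently-used first.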
Beautiful soup meta content tag
34,302,774
2
2015-12-16T02:00:37Z
34,302,789
8
2015-12-16T02:03:13Z
[ "python", "html", "beautifulsoup", "html-parsing" ]
``` <meta itemprop="streetAddress" content="4103 Beach Bluff Rd"> ``` I have to get the content '4103 Beach Bluff Rd'. I'm trying to get this done with `BeautifulSoup`, so I'm trying this: ``` soup = BeautifulSoup('<meta itemprop="streetAddress" content="4103 Beach Bluff Rd"> ') soup.find(itemprop="streetAddress").get_text() ``` but I'm getting an empty string as the result, which may make sense given that when I print the soup object ``` print soup ``` I get this: ``` <html><head><meta content="4103 Beach Bluff Rd" itemprop="streetAddress"/> </head></html> ``` Apparently the data I want is in the 'meta content' tag, so how can I get this data?
> `soup.find(itemprop="streetAddress").get_text()` You are getting the text of a matched element. Instead, *get the "content" attribute value*: ``` soup.find(itemprop="streetAddress").get("content") ``` --- This is possible since `BeautifulSoup` provides a [dictionary-like interface to tag attributes](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#attributes): > You can access a tag’s attributes by treating the tag like a dictionary. Demo: ``` >>> from bs4 import BeautifulSoup >>> >>> soup = BeautifulSoup('<meta itemprop="streetAddress" content="4103 Beach Bluff Rd"> ') >>> soup.find(itemprop="streetAddress").get_text() u'' >>> soup.find(itemprop="streetAddress").get("content") '4103 Beach Bluff Rd' ```
"Failed building wheel for psycopg2" - MacOSX using virtualenv and pip
34,304,833
12
2015-12-16T05:52:31Z
34,841,594
20
2016-01-17T17:52:17Z
[ "python", "django", "postgresql", "virtualenv", "psycopg2" ]
I'm attempting to make a website with a few others for the first time, and have run into a weird error when trying to use Django/Python/VirtualEnv. I've found solutions to this problem for other operating systems, such as Ubuntu, but can't find any good solutions for Mac. This is the relevant code being run: ``` virtualenv -p python3 venv source venv/bin/activate pip install -r requirements.txt ``` After running that block, I get the following errors: > AssertionError > > --- > > Failed building wheel for django-toolbelt > Running setup.py bdist\_wheel for psycopg2 ... > AssertionError > > --- > > Failed building wheel for psycopg2 > Failed to build django-toolbelt psycopg2 I believe I've installed the "django-toolbelt" and "psycopg2", so I'm not sure why it would be failing. The only difference I can think of is that I did not use the command ``` sudo apt-get install libpq-dev ``` as was instructed for Ubuntu usage as I believe that installing postgresql with brew took care of the header. Thanks for any help or insight!
I had the same problem on Arch Linux. I think it's not an OS-dependent problem. Anyway, I fixed this by finding the outdated packages and updating them. ``` pip uninstall psycopg2 pip list --outdated pip install --upgrade wheel pip install --upgrade setuptools pip install psycopg2 ``` Hope this helps...
Is the python "elif" compiled differently from else: if?
34,304,936
5
2015-12-16T06:01:24Z
34,305,041
11
2015-12-16T06:09:49Z
[ "java", "python", "c++", "if-statement", "micro-optimization" ]
I know in languages such as C, C++, Java and C# ([C# example](http://stackoverflow.com/questions/3374909/do-else-if-statements-exist-in-c)), the `else if` statement is syntactic sugar, in that it's really just an `else` statement followed by an `if` statement. ``` else if (condition(s)) { ... ``` is equivalent to ``` else { if (condition(s)) { ... } ``` However, in Python, there is a special `elif` statement. I've been wondering if this is just shorthand for developers or if there is some hidden optimization Python can do because of this, such as being interpreted faster? But this wouldn't make sense to me, as other languages would be doing it too then (such as JavaScript). So, my question is, in Python is the `elif` statement just shorthand for the developers to use or is there something hidden that it gains through doing so?
When you really want to know what is going on behind the scenes in the interpreter, you can use the `dis` module. In this case: ``` >>> def f1(): ... if a: ... b = 1 ... elif aa: ... b = 2 ... >>> def f2(): ... if a: ... b = 1 ... else: ... if aa: ... b = 2 ... >>> dis.dis(f1) 2 0 LOAD_GLOBAL 0 (a) 3 POP_JUMP_IF_FALSE 15 3 6 LOAD_CONST 1 (1) 9 STORE_FAST 0 (b) 12 JUMP_FORWARD 15 (to 30) 4 >> 15 LOAD_GLOBAL 1 (aa) 18 POP_JUMP_IF_FALSE 30 5 21 LOAD_CONST 2 (2) 24 STORE_FAST 0 (b) 27 JUMP_FORWARD 0 (to 30) >> 30 LOAD_CONST 0 (None) 33 RETURN_VALUE >>> dis.dis(f2) 2 0 LOAD_GLOBAL 0 (a) 3 POP_JUMP_IF_FALSE 15 3 6 LOAD_CONST 1 (1) 9 STORE_FAST 0 (b) 12 JUMP_FORWARD 15 (to 30) 5 >> 15 LOAD_GLOBAL 1 (aa) 18 POP_JUMP_IF_FALSE 30 6 21 LOAD_CONST 2 (2) 24 STORE_FAST 0 (b) 27 JUMP_FORWARD 0 (to 30) >> 30 LOAD_CONST 0 (None) 33 RETURN_VALUE ``` It looks like our two functions are using the same bytecode -- so apparently they're equivalent. Careful though, bytecode is an implementation detail of CPython -- there's no telling that *all* Python implementations do the same thing behind the scenes -- all that matters is that they have the same behavior. Working through the logic, you can convince yourself that `f1` and `f2` should do the same thing regardless of whether the underlying implementation treats it as "syntactic sugar" or if there is something more sophisticated going on.
Unpack list to variables
34,308,337
4
2015-12-16T09:32:26Z
34,308,407
8
2015-12-16T09:35:26Z
[ "python", "list" ]
I have a list: ``` row = ["Title", "url", 33, "title2", "keyword"] ``` Is there a more pythonic way to unpack these values, like: ``` title, url, price, title2, keyword = row[0], row[1], row[2], row[3], row[4] ```
Something like this? ``` >>> row = ["Title", "url", 33, "title2", "keyword"] >>> title, url, price, title2, keyword = row ``` Also, for the record, note that your example would fail with an IndexError (Python's lists are zero-based). *EDIT: the above note was written before the OP's example was fixed...*
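As a side note not in the original answer, Python 3's extended iterable unpacking (PEP 3132) helps when you only care about some positions; a small sketch:

```python
row = ["Title", "url", 33, "title2", "keyword"]

# plain unpacking, one name per element:
title, url, price, title2, keyword = row

# extended unpacking: grab the ends, collect the middle into a list:
first, *middle, last = row
print(first, last)  # Title keyword
print(middle)       # ['url', 33, 'title2']
```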
How can I train a simple, non-linear regression model with tensor flow?
34,311,893
6
2015-12-16T12:20:20Z
34,316,689
7
2015-12-16T16:04:16Z
[ "python", "regression", "tensorflow" ]
I've seen [this example for linear regression](https://github.com/aymericdamien/TensorFlow-Examples/blob/master/notebooks/2%20-%20Basic%20Classifiers/linear_regression.ipynb) and I would like to train a model [![enter image description here](http://i.stack.imgur.com/SsHgX.png)](http://i.stack.imgur.com/SsHgX.png) where [![enter image description here](http://i.stack.imgur.com/7T0wM.png)](http://i.stack.imgur.com/7T0wM.png) ## What I've tried ``` #!/usr/bin/env python """Example for learning a regression.""" import tensorflow as tf import numpy # Parameters learning_rate = 0.01 training_epochs = 1000 display_step = 50 # Generate training data train_X = [] train_Y = [] f = lambda x: x**2 for x in range(-20, 20): train_X.append(float(x)) train_Y.append(f(x)) train_X = numpy.asarray(train_X) train_Y = numpy.asarray(train_Y) n_samples = train_X.shape[0] # Graph input X = tf.placeholder("float") Y = tf.placeholder("float") # Create Model W1 = tf.Variable(tf.truncated_normal([1, 10], stddev=0.1), name="weight") b1 = tf.Variable(tf.constant(0.1, shape=[1, 10]), name="bias") mul = X * W1 h1 = tf.nn.sigmoid(mul) + b1 W2 = tf.Variable(tf.truncated_normal([10, 1], stddev=0.1), name="weight") b2 = tf.Variable(tf.constant(0.1, shape=[1]), name="bias") activation = tf.nn.sigmoid(tf.matmul(h1, W2) + b2) # Minimize the squared errors l2_loss = tf.reduce_sum(tf.pow(activation-Y, 2))/(2*n_samples) optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(l2_loss) # Initializing the variables init = tf.initialize_all_variables() # Launch the graph with tf.Session() as sess: sess.run(init) # Fit all training data for epoch in range(training_epochs): for (x, y) in zip(train_X, train_Y): sess.run(optimizer, feed_dict={X: x, Y: y}) # Display logs per epoch step if epoch % display_step == 0: cost = sess.run(l2_loss, feed_dict={X: train_X, Y: train_Y}) print("Epoch: {:04d}, cost={:.9f}".format((epoch+1), cost), "W=", sess.run(W1)) # "b=", sess.run(b1) print("Optimization 
Finished!") print("cost=", sess.run(cost, feed_dict={X: train_X, Y: train_Y}), "W1=", sess.run(W1), ) # "b2=", sess.run(b2) ``` When I execute it, I get: ``` $ python nnetstest.py I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op parallelism threads: 2 I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op parallelism threads: 2 W tensorflow/core/common_runtime/executor.cc:1027] 0x314df50 Compute status: Invalid argument: Incompatible shapes: [40] vs. [1,10] [[Node: mul = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_Placeholder_0, weight)]] Traceback (most recent call last): File "nnetstest.py", line 56, in <module> cost = sess.run(l2_loss, feed_dict={X: train_X, Y: train_Y}) File "/home/moose/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 345, in run results = self._do_run(target_list, unique_fetch_targets, feed_dict_string) File "/home/moose/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 419, in _do_run e.code) tensorflow.python.framework.errors.InvalidArgumentError: Incompatible shapes: [40] vs. 
[1,10] [[Node: mul = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_Placeholder_0, weight)]] Caused by op u'mul', defined at: File "nnetstest.py", line 32, in <module> mul = X * W1 File "/home/moose/.local/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.py", line 403, in binary_op_wrapper return func(x, y, name=name) File "/home/moose/.local/lib/python2.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 728, in mul return _op_def_lib.apply_op("Mul", x=x, y=y, name=name) File "/home/moose/.local/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 633, in apply_op op_def=op_def) File "/home/moose/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1710, in create_op original_op=self._default_original_op, op_def=op_def) File "/home/moose/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 988, in __init__ self._traceback = _extract_stack() ``` I've tried a couple of slight variations in the input data, but I can't get it to work. **How can I train such a simple, non-linear regression model with Google TensorFlow?**
The `InvalidArgumentError` is due to the values that you are feeding (`train_X` and `train_Y`) not having the necessary shape to be multiplied by `W1`. There are a few issues here: 1. The statement `mul = X * W1` should be `mul = tf.matmul(X, W1)`, since `*` computes an elementwise multiplication, which is not what your equation specifies. 2. The input data `X` should be a one-column matrix. To handle scalar and vector data - as you have in your feed calls, you could reshape it as follows: ``` X = tf.placeholder(tf.float32) reshaped_X = tf.reshape(X, [-1, 1]) # ... mul = reshaped_X * W1 ``` 3. When you fetch the final cost, the first argument to `sess.run` should be `l2_loss` (and not `cost`): ``` print("cost=", sess.run(l2_loss, feed_dict={X: train_X, Y: train_Y}), "W1=", sess.run(W1)) ```
Why values of an OrderedDict are not equal?
34,312,674
43
2015-12-16T12:55:32Z
34,312,883
22
2015-12-16T13:05:41Z
[ "python", "python-3.x" ]
With Python 3: ``` >>> from collections import OrderedDict >>> d1 = OrderedDict([('foo', 'bar')]) >>> d2 = OrderedDict([('foo', 'bar')]) ``` I want to check equality: ``` >>> d1 == d2 True >>> d1.keys() == d2.keys() True ``` But: ``` >>> d1.values() == d2.values() False ``` Do you know why values are not equal? > Tested with Python 3.4 and 3.5 --- *Following this question, I posted on the Python-Ideas mailing list to have additional details:* *<https://mail.python.org/pipermail/python-ideas/2015-December/037472.html>*
In Python 3, `d1.values()` and `d2.values()` are `collections.abc.ValuesView` objects: ``` >>> d1.values() ValuesView(OrderedDict([('foo', 'bar')])) ``` Don't compare them as objects; convert them to lists and then compare them: ``` >>> list(d1.values()) == list(d2.values()) True ``` --- Investigating why it works for comparing keys: in `_collections_abc.py` of CPython, `KeysView` inherits from `Set` while `ValuesView` does not: ``` class KeysView(MappingView, Set): class ValuesView(MappingView): ``` * Tracing for `__eq__` in `ValuesView` and its parents: `MappingView ==> Sized ==> ABCMeta ==> type ==> object`. `__eq__` is implemented only in `object` and not overridden. * On the other hand, `KeysView` inherits `__eq__` directly from `Set`.
Why values of an OrderedDict are not equal?
34,312,674
43
2015-12-16T12:55:32Z
34,312,962
34
2015-12-16T13:09:48Z
[ "python", "python-3.x" ]
With Python 3: ``` >>> from collections import OrderedDict >>> d1 = OrderedDict([('foo', 'bar')]) >>> d2 = OrderedDict([('foo', 'bar')]) ``` I want to check equality: ``` >>> d1 == d2 True >>> d1.keys() == d2.keys() True ``` But: ``` >>> d1.values() == d2.values() False ``` Do you know why values are not equal? > Tested with Python 3.4 and 3.5 --- *Following this question, I posted on the Python-Ideas mailing list to have additional details:* *<https://mail.python.org/pipermail/python-ideas/2015-December/037472.html>*
In Python 3, `dict.keys()` and `dict.values()` return special iterable classes - respectively a `collections.abc.KeysView` and a `collections.abc.ValuesView`. The first one inherits its `__eq__` method from `Set`; the second uses the default `object.__eq__`, which tests object identity.
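A short demonstration of that identity-based comparison (not part of the original answer):

```python
from collections import OrderedDict

d1 = OrderedDict([('foo', 'bar')])
d2 = OrderedDict([('foo', 'bar')])

v = d1.values()
assert v == v                       # object.__eq__: same view object
assert d1.values() != d2.values()   # distinct view objects are never equal
assert d1.keys() == d2.keys()       # KeysView compares like a set
assert list(d1.values()) == list(d2.values())  # compare contents instead
```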
Indexing a list with an unique index
34,313,761
24
2015-12-16T13:48:35Z
34,313,968
33
2015-12-16T13:58:09Z
[ "python", "list", "indexing" ]
I have a list, say `l = [10,10,20,15,10,20]`. I want to assign each unique value a certain "index" to get `[1,1,2,3,1,2]`. This is my code: ``` a = list(set(l)) res = [a.index(x) for x in l] ``` which turns out to be very slow. `l` has 1M elements and 100K unique elements. I have also tried map with lambda and sorting, which did not help. What is the ideal way to do this?
You can do this in `O(N)` time using a [`defaultdict`](https://docs.python.org/2/library/collections.html#collections.defaultdict) and a list comprehension: ``` >>> from itertools import count >>> from collections import defaultdict >>> lst = [10, 10, 20, 15, 10, 20] >>> d = defaultdict(count(1).next) >>> [d[k] for k in lst] [1, 1, 2, 3, 1, 2] ``` In Python 3 use [`__next__`](https://docs.python.org/3/library/stdtypes.html#iterator.__next__) instead of [`next`](https://docs.python.org/2/library/stdtypes.html#iterator.next). --- **If you're wondering how it works?** The `default_factory`(i.e `count(1).next` in this case) passed to `defaultdict` is called only when Python encounters a missing key, so for 10 the value is going to be 1, then for the next ten it is not a missing key anymore hence the previously calculated 1 is used, now 20 is again a missing key and Python will call the `default_factory` again to get its value and so on. `d` at the end will look like this: ``` >>> d defaultdict(<method-wrapper 'next' of itertools.count object at 0x1057c83b0>, {10: 1, 20: 2, 15: 3}) ```
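For readers on Python 3, here is the same trick with the `__next__` substitution the answer mentions:

```python
from collections import defaultdict
from itertools import count

lst = [10, 10, 20, 15, 10, 20]
# count(1).__next__ is called only on missing keys, handing out 1, 2, 3, ...
d = defaultdict(count(1).__next__)
res = [d[k] for k in lst]
print(res)  # [1, 1, 2, 3, 1, 2]
```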
Indexing a list with an unique index
34,313,761
24
2015-12-16T13:48:35Z
34,314,042
21
2015-12-16T14:01:48Z
[ "python", "list", "indexing" ]
I have a list, say `l = [10,10,20,15,10,20]`. I want to assign each unique value a certain "index" to get `[1,1,2,3,1,2]`. This is my code: ``` a = list(set(l)) res = [a.index(x) for x in l] ``` which turns out to be very slow. `l` has 1M elements and 100K unique elements. I have also tried map with lambda and sorting, which did not help. What is the ideal way to do this?
The slowness of your code arises because `a.index(x)` performs a linear search, and you perform that linear search for each of the elements in `l`. So for each of the 1M items you perform (up to) 100K comparisons. The fastest way to transform one value to another is looking it up in a map. You'll need to create the map and fill in the relationship between the original values and the values you want. Then retrieve the value from the map when you encounter another of the same value in your list. Here is an example that makes a single pass through `l`; note that `i` starts at 1 so the produced indices match the `[1,1,2,3,1,2]` in the question. There may be room for further optimization to eliminate the need to repeatedly reallocate `res` when appending to it. ``` res = [] conversion = {} i = 1 for x in l: if x not in conversion: value = conversion[x] = i i += 1 else: value = conversion[x] res.append(value) ```
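The single pass above can also be written with `dict.setdefault`, which does the membership test and insertion in one step; an equivalent sketch, not from the original answer:

```python
l = [10, 10, 20, 15, 10, 20]

conversion = {}
# setdefault returns the stored index, inserting len(conversion) + 1 on a miss
res = [conversion.setdefault(x, len(conversion) + 1) for x in l]
print(res)  # [1, 1, 2, 3, 1, 2]
```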
How to print flag descriptions in Tensorflow?
34,314,455
2
2015-12-16T14:22:13Z
34,316,117
7
2015-12-16T15:38:44Z
[ "python", "tensorflow" ]
Google has many examples that use flags. They all have descriptions in the definition. Is there a way I can print these descriptions out to the terminal? ``` flags = tf.app.flags FLAGS = flags.FLAGS flags.DEFINE_boolean('test_mode', False, 'This is the description I want A.') flags.DEFINE_boolean('cool_mode', True, 'This is the description I want B.') ```
The `flags` module used in TensorFlow is a wrapper around the [`python-gflags` module](https://github.com/google/python-gflags). To see a list of all flags used in a Python application using `python-gflags`, you can run it with the `-h` or `--help` flag. For example: ``` $ tensorboard -h usage: tensorboard [-h] [--logdir LOGDIR] [--debug DEBUG] [--nodebug] [--host HOST] [--port PORT] optional arguments: -h, --help show this help message and exit --logdir LOGDIR logdir specifies where TensorBoard will look to find TensorFlow event files that it can display. In the simplest case, logdir is a directory containing tfevents files. TensorBoard also supports comparing multiple TensorFlow executions: to do this, you can use directory whose subdirectories contain tfevents files, as in the following example: foo/bar/logdir/ foo/bar/logdir/mnist_1/events.out.tfevents.1444088766 foo/bar/logdir/mnist_2/events.out.tfevents.1444090064 You may also pass a comma seperated list of log directories, and you can assign names to individual log directories by putting a colon between the name and the path, as in tensorboard --logdir=name1:/path/to/logs/1,name2:/path/to/logs/2 --debug DEBUG Whether to run the app in debug mode. This increases log verbosity to DEBUG. --nodebug --host HOST What host to listen to. Defaults to allowing remote access, set to 127.0.0.1 to serve only on localhost. --port PORT What port to serve TensorBoard on. ```
Python nested range function
34,315,140
2
2015-12-16T14:52:22Z
34,315,175
7
2015-12-16T14:53:56Z
[ "python", "range" ]
What is the output of the following nested control structure in Python when executed? ``` for x in range(3): for y in range(x): print x,y ``` I know the answer is ``` 1 0 2 0 2 1 ``` But it is not clear to me why this is the output. I know that `range(3)` would give you {0, 1, 2}, so why isn't the first output 0 0 instead of 1 0?
Because `range(0)` returns an empty list `[]`, the inner loop does nothing the first time it is run (when `x == 0`).
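This is easy to see by printing each inner range (Python 3 print syntax here, unlike the question's Python 2 code; in Python 3 `range` returns a lazy object rather than a list, but the iteration behavior is the same):

```python
for x in range(3):
    print(x, list(range(x)))
# 0 []
# 1 [0]
# 2 [0, 1]
```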
Python nested range function
34,315,140
2
2015-12-16T14:52:22Z
34,315,231
7
2015-12-16T14:57:08Z
[ "python", "range" ]
What is the output of the following nested control structure in Python when executed? ``` for x in range(3): for y in range(x): print x,y ``` I know the answer is ``` 1 0 2 0 2 1 ``` But it is not clear to me why this is the output. I know that `range(3)` would give you {0, 1, 2}, so why isn't the first output 0 0 instead of 1 0?
Let's go through this. **First run** ``` x = 0 range(0) is [] the print is never reached ``` **Second run** ``` x = 1 range(1) is [0] <-- one element print is called once with 1 0 ``` **Third run** ``` x = 2 range(2) is [0,1] <-- two elements print is called twice with 2 0 and 2 1 ```
Python re can't split zero-width anchors?
34,317,442
6
2015-12-16T16:41:49Z
34,317,471
7
2015-12-16T16:43:18Z
[ "python", "regex" ]
``` import re s = 'PythonCookbookListOfContents' # the first line does not work print re.split('(?<=[a-z])(?=[A-Z])', s ) # second line works well print re.sub('(?<=[a-z])(?=[A-Z])', ' ', s) # it should be ['Python', 'Cookbook', 'List', 'Of', 'Contents'] ``` How can I split a string at the boundary between a lower-case and an upper-case character using Python's re? Why does the first line fail to work while the second line works well?
According to [`re.split`](https://docs.python.org/2/library/re.html#re.split): > Note that split will never split a string on an empty pattern match. > For example: > > ``` > >>> re.split('x*', 'foo') > ['foo'] > >>> re.split("(?m)^$", "foo\n\nbar\n") > ['foo\n\nbar\n'] > ``` --- How about using [`re.findall`](https://docs.python.org/2/library/re.html#re.findall) instead? (Instead of focusing on separators, focus on the item you want to get.) ``` >>> import re >>> s = 'PythonCookbookListOfContents' >>> re.findall('[A-Z][a-z]+', s) ['Python', 'Cookbook', 'List', 'Of', 'Contents'] ``` **UPDATE** Using [`regex` module](https://pypi.python.org/pypi/regex) (*Alternative regular expression module, to replace re*), you can split on zero-width match: ``` >>> import regex >>> s = 'PythonCookbookListOfContents' >>> regex.split('(?<=[a-z])(?=[A-Z])', s, flags=regex.VERSION1) ['Python', 'Cookbook', 'List', 'Of', 'Contents'] ``` **NOTE**: Specify `regex.VERSION1` flag to enable split-on-zero-length-match behavior.
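One update worth knowing that postdates this answer: CPython 3.7 changed `re.split` to support splitting on an empty pattern match, so on modern Pythons the original pattern works with the stdlib `re`:

```python
import re

s = 'PythonCookbookListOfContents'
# Works on Python 3.7+; earlier versions refuse to split on zero-width matches
print(re.split('(?<=[a-z])(?=[A-Z])', s))
# ['Python', 'Cookbook', 'List', 'Of', 'Contents']
```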
Create dictionary from splitted strings from list of strings
34,319,156
9
2015-12-16T18:09:33Z
34,319,180
15
2015-12-16T18:11:05Z
[ "python", "dictionary", "split", "list-comprehension", "string-split" ]
I feel that this is very simple and I'm close to the solution, but I got stuck and can't find a suggestion on the Internet. I have a list that looks like: ``` my_list = ['name1@1111', 'name2@2222', 'name3@3333'] ``` In general, each element of the list has the form `namex@some_number`. I want to make a dictionary in a pretty way, with `key = namex` and `value = some_number`. I can do it by: ``` md = {} for item in arguments: md[item.split('@')[0]] = item.split('@')[1] ``` But I would like to do it in one line, with a list comprehension or something. I tried the following, and I think I'm not far from what I want: ``` md2 = dict( (k,v) for k,v in item.split('@') for item in arguments ) ``` However, I'm getting the error `ValueError: too many values to unpack`. No idea how to get out of this.
You actually don't need the extra step of creating the tuple ``` >>> my_list = ['name1@1111', 'name2@2222', 'name3@3333'] >>> dict(i.split('@') for i in my_list) {'name3': '3333', 'name1': '1111', 'name2': '2222'} ```
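If you need to transform either side while building the dict, a dict comprehension over the same split is an equivalent sketch (the `int()` conversion is just an illustration, not something the question asked for):

```python
my_list = ['name1@1111', 'name2@2222', 'name3@3333']

# split once per item, then unpack the two halves into key and value
md = {name: int(number) for name, number in (s.split('@') for s in my_list)}
print(md)  # {'name1': 1111, 'name2': 2222, 'name3': 3333}
```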
Can you do sums with a datetime in Python?
34,321,209
6
2015-12-16T20:12:55Z
34,321,276
7
2015-12-16T20:17:01Z
[ "python", "python-3.x" ]
I know how to get the date: ``` from datetime import datetime time = datetime.now() print(time) ``` But is there a way to work out the days/hours until a certain date, maybe by storing the date as an integer or something? Thanks for any answers.
Just create another datetime object and subtract which will give you a [timedelta](https://docs.python.org/3.5/library/datetime.html#datetime.timedelta) object. ``` from datetime import datetime now = datetime.now() then = datetime(2016,1,1,0,0,0) diff = then - now print(diff) print(diff.total_seconds()) 15 days, 3:42:21.408581 1309365.968044 ``` If you want to take user input: ``` from datetime import datetime while True: inp = input("Enter date in format yyyy/mm/dd hh:mm:ss") try: then = datetime.strptime(inp, "%Y/%m/%d %H:%M:%S") break except ValueError: print("Invalid input") now = datetime.now() diff = then - now print(diff) ``` Demo: ``` $Enter date in format yyyy/mm/dd hh:mm:ss2016/01/01 00:00:00 15 days, 3:04:51.960110 ```
SubfieldBase has been deprecated. Use Field.from_db_value instead
34,321,332
3
2015-12-16T20:20:21Z
34,537,893
9
2015-12-30T22:00:48Z
[ "python", "django" ]
``` /python3.4/site-packages/django/db/models/fields/subclassing.py:22: RemovedInDjango110Warning: SubfieldBase has been deprecated. Use Field.from_db_value instead. RemovedInDjango110Warning) ``` Since I upgraded to Django 1.9 I started having this warning on `runserver` startup. The problem is that I have no idea where it comes from. I am guessing it must be from `forms.py`. Does anyone have a clue?
I experienced this error while using `python-social-auth 0.2.13`. If you are using `python-social-auth`, I submitted a fix for this on [GitHub](https://github.com/omab/python-social-auth/pull/813), just now. This extends another fix submitted [here](https://github.com/omab/python-social-auth/pull/806). Subscribe to both of those pull requests and if/when both pull requests are merged into the `master` branch, you will no longer see the warning.
SQLAlchemy: engine, connection and session difference
34,322,471
6
2015-12-16T21:29:31Z
34,364,247
10
2015-12-18T21:32:15Z
[ "python", "session", "orm", "sqlalchemy", "psycopg2" ]
I use SQLAlchemy and there are at least three entities: `engine`, `session` and `connection`, which have an `execute` method, so if I e.g. want to select all records from `table` I can do this ``` engine.execute(select([table])).fetchall() ``` and this ``` connection.execute(select([table])).fetchall() ``` and even this ``` session.execute(select([table])).fetchall() ``` and the result will be the same. As I understand it, if someone uses `engine.execute` it creates a `connection`, opens a `session` (Alchemy takes care of it for you) and executes the query. But is there a global difference between these three ways of performing such a task?
**A one-line overview:** The behavior of `execute()` is the same in all the cases, but they are 3 different methods, in the `Engine`, `Connection`, and `Session` classes. **What exactly is `execute()`:** To understand the behavior of `execute()` we need to look into the `Executable` class. `Executable` is a superclass for all “statement” types of objects, including select(), delete(), update(), insert(), text() - in the simplest words possible, an `Executable` is a SQL expression construct supported in SQLAlchemy. In all the cases the `execute()` method takes the SQL text or constructed SQL expression, i.e. any of the variety of SQL expression constructs supported in SQLAlchemy, and returns query results (a `ResultProxy` - wraps a `DB-API` cursor object to provide easier access to row columns). --- **To clarify it further (only for conceptual clarification, not a recommended approach)**: In addition to `Engine.execute()` (connectionless execution), `Connection.execute()`, and `Session.execute()`, it is also possible to use `execute()` directly on any `Executable` construct. The `Executable` class has its own implementation of `execute()` - as per the official documentation, the one-line description of what `execute()` does is "**Compile and execute this `Executable`**". In this case we need to explicitly bind the `Executable` (SQL expression construct) to a `Connection` object or an `Engine` object (which implicitly gets a `Connection` object), so `execute()` will know where to execute the SQL. The following example demonstrates it well - given a table as below: ``` from sqlalchemy import MetaData, Table, Column, Integer, String meta = MetaData() users_table = Table('users', meta, Column('id', Integer, primary_key=True), Column('name', String(50))) ``` **Explicit execution** i.e.
`Connection.execute()` - passing the SQL text or constructed SQL expression to the `execute()` method of `Connection`: ``` engine = create_engine('sqlite:///file.db') connection = engine.connect() result = connection.execute(users_table.select()) for row in result: # .... connection.close() ``` **Explicit connectionless execution** i.e. `Engine.execute()` - passing the SQL text or constructed SQL expression directly to the `execute()` method of Engine: ``` engine = create_engine('sqlite:///file.db') result = engine.execute(users_table.select()) for row in result: # .... result.close() ``` **Implicit execution** i.e. `Executable.execute()` - is also connectionless, and calls the `execute()` method of the `Executable`, that is, it calls the `execute()` method directly on the `SQL` expression construct (an instance of `Executable`) itself. ``` engine = create_engine('sqlite:///file.db') meta.bind = engine result = users_table.select().execute() for row in result: # .... result.close() ``` Note: the implicit execution example is stated only for clarification - this way of execution is strongly discouraged - as per the [docs](http://docs.sqlalchemy.org/en/rel_1_0/core/connections.html#connectionless-execution-implicit-execution): > “implicit execution” is a very old usage pattern that in most cases is > more confusing than it is helpful, and its usage is discouraged. Both > patterns seem to encourage the overuse of expedient “short cuts” in > application design which lead to problems later on. --- ## Your questions: > As I understand if someone use engine.execute it creates connection, > opens session (Alchemy cares about it for you) and executes query.
You're right about the part "if someone uses `engine.execute` it creates a `connection`" but not about "opens a `session` (Alchemy takes care of it for you) and executes the query" - using `Engine.execute()` and `Connection.execute()` is (almost) one and the same thing: in the former, the `Connection` object gets created implicitly, while in the latter case we instantiate it explicitly. What really happens in this case is: ``` `Engine` object (instantiated via `create_engine()`) -> `Connection` object (instantiated via `engine_instance.connect()`) -> `connection.execute({*SQL expression*})` ``` > But is there a global difference between these three ways of > performing such task? At the DB layer it's exactly the same thing; all of them execute SQL (text expressions or various SQL expression constructs). From the application's point of view there are two options: * Direct execution - using `Engine.execute()` or `Connection.execute()` * Using `sessions` - efficiently handles transactions as a single unit of work, with ease, via `session.add()`, `session.rollback()`, `session.commit()`, `session.close()`. It is the way to interact with the DB when using the ORM, i.e. mapped tables. Provides an [identity\_map](http://docs.sqlalchemy.org/en/rel_1_0/core/connections.html#) for instantly getting already-accessed or newly created/added objects during a single request. `Session.execute()` ultimately uses the `Connection.execute()` statement execution method in order to execute the SQL statement. Using the `Session` object is SQLAlchemy ORM's recommended way for an application to interact with the database. An excerpt from the [docs](http://docs.sqlalchemy.org/en/rel_1_0/core/connections.html#): > It's important to note that when using the SQLAlchemy ORM, these > objects are not generally accessed; instead, the Session object is > used as the interface to the database.
However, for applications that > are built around direct usage of textual SQL statements and/or SQL > expression constructs without involvement by the ORM’s higher level > management services, the Engine and Connection are king (and queen?) - > read on.
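The session option discussed above can be sketched end to end. This is a minimal illustration against an in-memory SQLite database; the `User` model and table name are invented for the example:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

# One unit of work: add objects, then commit (or roll back on failure).
session = Session()
try:
    session.add(User(name='ed'))
    session.commit()
except Exception:
    session.rollback()
    raise
finally:
    session.close()
```

While a session is open, already-loaded `User` instances come back from its identity map instead of a fresh round-trip to the database.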
Programming language with multiple roots
34,322,604
2
2015-12-16T21:39:29Z
34,324,155
7
2015-12-16T23:32:16Z
[ "python", "programming-languages", "octave", "julia-lang", "complex-numbers" ]
The expression 2^(-1/3) has three roots: 0.79370, -0.39685-0.68736i and -0.39685+0.68736i (approximately)

See the correct answer at [Wolfram Alpha](http://www.wolframalpha.com/input/?i=2%5E%28-1%2F3%29).

I know several languages that support complex numbers, but they all only return the first of the three results:

Python:

```
>>> complex(2,0)**(-1/3)
(0.7937005259840998-0j)
```

Octave:

```
>> (2+0i)^(-1/3)
ans = 0.79370
```

Julia:

```
julia> complex(2,0)^(-1/3)
0.7937005259840998 + 0.0im
```

What I'm looking for is something along the lines of:

```
>> 2^(-1/3)
[0.79370+0i, -0.39685-0.68736i, -0.39685+0.68736i]
```

Is there a programming language (with a REPL) that will correctly return all three roots, without having to resort to any special modules or libraries, that also has an open source implementation available?
As many comments explained, wanting a general purpose language to give by default the result from every branch of the complex root function is probably a tall order. But **Julia** allows specializing/overloading operators very naturally (as even the out-of-the-box implementation is often written in Julia). Specifically: ``` using Roots,Polynomials # Might need to Pkg.add("Roots") first import Base: ^ ^{T<:AbstractFloat}(b::T, r::Rational{Int64}) = roots(poly([0])^r.den - b^abs(r.num)).^sign(r.num) ``` And now when trying to raise a float to a rational power: ``` julia> 2.0^(-1//3) 3-element Array{Complex{Float64},1}: -0.39685-0.687365im -0.39685+0.687365im 0.793701-0.0im ``` Note that specializing the definition of `^` to rational exponents solves the rounding problem mentioned in the comments.
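For comparison, the full set of values can also be obtained in Python by phrasing the problem as polynomial root-finding (a NumPy sketch, not a language default): the values of 2^(-1/3) are exactly the solutions of 2x^3 - 1 = 0.

```python
import numpy as np

# 2^(-1/3) satisfies x**3 == 1/2, i.e. it is a root of 2*x**3 - 1 = 0.
all_roots = np.roots([2, 0, 0, -1])  # coefficients of 2x^3 + 0x^2 + 0x - 1
print(all_roots)  # one real root near 0.7937 and a conjugate pair near -0.39685 +/- 0.68736j
```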
can't terminate a sudo process created with python, in Ubuntu 15.10
34,337,840
4
2015-12-17T15:08:36Z
34,376,188
7
2015-12-19T22:47:04Z
[ "python", "ubuntu", "subprocess", "sudo", "ubuntu-15.10" ]
I just updated to Ubuntu 15.10 and suddenly in Python 2.7 I am not able to **terminate** a process I created when being **root**. For example, this doesn't terminate tcpdump: ``` import subprocess, shlex, time tcpdump_command = "sudo tcpdump -w example.pcap -i eth0 -n icmp" tcpdump_process = subprocess.Popen( shlex.split(tcpdump_command), stdout=subprocess.PIPE, stderr=subprocess.PIPE) time.sleep(1) tcpdump_process.terminate() tcpdump_out, tcpdump_err = tcpdump_process.communicate() ``` What happened? It works on previous versions.
**TL;DR**: `sudo` does not forward signals sent by a process in the command's process group [since 28 May 2014 commit](https://www.sudo.ws/repos/sudo/rev/7ffa2eefd3c0) released in `sudo 1.8.11` -- the python process (sudo's parent) and the tcpdump process (grandchild) are in the same process group by default and therefore `sudo` does not forward `SIGTERM` signal sent by `.terminate()` to the `tcpdump` process. --- > It shows the same behaviour when running that code while being the root user and while being a regular user + sudo Running as a regular user raises `OSError: [Errno 1] Operation not permitted` exception on `.terminate()` (as expected). Running as `root` reproduces the issue: `sudo` and `tcpdump` processes are not killed on `.terminate()` and the code is stuck on `.communicate()` on Ubuntu 15.10. The same code kills both processes on Ubuntu 12.04. `tcpdump_process` name is misleading because the variable refers to the `sudo` process (the child process), not `tcpdump` (grandchild): ``` python └─ sudo tcpdump -w example.pcap -i eth0 -n icmp └─ tcpdump -w example.pcap -i eth0 -n icmp ``` As [@Mr.E pointed out in the comments](http://stackoverflow.com/questions/34337840/cant-terminate-a-process-created-with-python-in-ubuntu-15-10#comment56417413_34337840), you don't need `sudo` here: you're root already (though you shouldn't be -- you can [sniff the network without root](http://askubuntu.com/q/74059/3712)). If you drop `sudo`; `.terminate()` works. In general, `.terminate()` does not kill the whole process tree recursively and therefore it is expected that a grandchild process survives. 
Though `sudo` is a special case, [from sudo(8) man page](http://manpages.ubuntu.com/manpages/wily/man8/sudo.8.html):

> When the command is run as a child of the `sudo` process, `sudo` will
> ***relay signals*** it receives to the command.

*(emphasis is mine)*

i.e., `sudo` should relay `SIGTERM` to `tcpdump` and [`tcpdump` should stop capturing packets on `SIGTERM`, from tcpdump(8) man page](http://manpages.ubuntu.com/manpages/wily/man8/tcpdump.8.html):

> Tcpdump will, ..., continue capturing packets until it is
> interrupted by a SIGINT signal (generated, for example, by typing your
> interrupt character, typically control-C) or a SIGTERM signal
> (typically generated with the kill(1) command);

i.e., **the expected behavior is**: `tcpdump_process.terminate()` sends SIGTERM to `sudo` which relays the signal to `tcpdump` which should stop capturing and both processes exit and `.communicate()` returns `tcpdump`'s stderr output to the python script.

Note: in principle the command may be run without creating a child process, [from the same sudo(8) man page](http://manpages.ubuntu.com/manpages/wily/man8/sudo.8.html):

> As a special case, if the policy plugin does not define a close
> function and no pty is required, `sudo` will execute the command
> directly instead of calling fork(2) first

and therefore `.terminate()` may send SIGTERM to the `tcpdump` process directly -- though it is not the explanation: `sudo tcpdump` creates two processes on both Ubuntu 12.04 and 15.10 in my tests.

If I run `sudo tcpdump -w example.pcap -i eth0 -n icmp` in the shell then `kill -SIGTERM` terminates both processes.

It does not look like a Python issue (Python 2.7.3 (used on Ubuntu 12.04) behaves the same on Ubuntu 15.10. Python 3 also fails here).
It is related to process groups ([job control](http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man4/termios.4?&manpath=OpenBSD-current&sec=4&query=termios)): passing `preexec_fn=os.setpgrp` to `subprocess.Popen()`, so that `sudo` is put in a new process group (job) where it is the leader, as in the shell, makes `tcpdump_process.terminate()` work in this case.

> What happened? It works on previous versions.

The explanation is in [sudo's source code](https://www.sudo.ws/repos/sudo/rev/7ffa2eefd3c0):

> ***Do not forward signals sent by a process in the command's process
> group***, do not forward it as we don't want the child to indirectly kill
> itself. For example, this can happen with some versions of reboot
> that call kill(-1, SIGTERM) to kill all other processes.

*(emphasis is mine)*

`preexec_fn=os.setpgrp` changes `sudo`'s process group. `sudo`'s descendants such as the `tcpdump` process inherit the group. `python` and `tcpdump` are no longer in the same process group and therefore the signal sent by `.terminate()` is relayed by `sudo` to `tcpdump` and it exits.

Ubuntu 15.04 uses `Sudo version 1.8.9p5` where the code from the question works as is. Ubuntu 15.10 uses `Sudo version 1.8.12` that contains [the commit](https://www.sudo.ws/repos/sudo/rev/7ffa2eefd3c0).

The [sudo(8) man page in wily (15.10)](http://manpages.ubuntu.com/manpages/wily/man8/sudo.8.html) still talks only about the child process itself -- no mention of the process group:

> As a special case, sudo will not relay signals that were sent by the
> command it is running.

It should instead be:

> As a special case, sudo will not relay signals that were sent by a process in the process group of the command it is running.

You could open a documentation issue on [Ubuntu's bug tracker](https://help.ubuntu.com/community/ReportingBugs) and/or on [the upstream bug tracker](https://www.sudo.ws/).
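The `preexec_fn=os.setpgrp` fix can be sketched as follows. Here `sleep` stands in for the `sudo tcpdump` pair so the snippet runs unprivileged; the mechanics (child in its own process group, `SIGTERM` delivered by `.terminate()`) are the same:

```python
import os
import shlex
import signal
import subprocess
import time

# preexec_fn=os.setpgrp runs in the child just before exec: the child becomes
# the leader of a new process group, so a signal coming from the parent is no
# longer "sent by a process in the command's own group" and sudo would relay it.
proc = subprocess.Popen(
    shlex.split("sleep 30"),
    preexec_fn=os.setpgrp,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE)

time.sleep(0.2)
child_pgid = os.getpgid(proc.pid)  # record the group before the child exits

proc.terminate()  # SIGTERM
proc.wait()
```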
Pickle python lasagne model
34,338,838
7
2015-12-17T15:54:56Z
34,345,432
9
2015-12-17T22:24:46Z
[ "python", "lasagne" ]
I have trained a simple lstm model in lasagne following the recipe here: <https://github.com/Lasagne/Recipes/blob/master/examples/lstm_text_generation.py>

Here is the architecture:

```
l_in = lasagne.layers.InputLayer(shape=(None, None, vocab_size))

# We now build the LSTM layer which takes l_in as the input layer
# We clip the gradients at GRAD_CLIP to prevent the problem of exploding gradients. 
l_forward_1 = lasagne.layers.LSTMLayer(
    l_in, N_HIDDEN, grad_clipping=GRAD_CLIP,
    nonlinearity=lasagne.nonlinearities.tanh)

l_forward_2 = lasagne.layers.LSTMLayer(
    l_forward_1, N_HIDDEN, grad_clipping=GRAD_CLIP,
    nonlinearity=lasagne.nonlinearities.tanh)

# The l_forward layer creates an output of dimension (batch_size, SEQ_LENGTH, N_HIDDEN)
# Since we are only interested in the final prediction, we isolate that quantity and feed it to the next layer. 
# The output of the sliced layer will then be of size (batch_size, N_HIDDEN)
l_forward_slice = lasagne.layers.SliceLayer(l_forward_2, -1, 1)

# The sliced output is then passed through the softmax nonlinearity to create probability distribution of the prediction
# The output of this stage is (batch_size, vocab_size)
l_out = lasagne.layers.DenseLayer(l_forward_slice, num_units=vocab_size, W = lasagne.init.Normal(), nonlinearity=lasagne.nonlinearities.softmax)

# Theano tensor for the targets
target_values = T.ivector('target_output')

# lasagne.layers.get_output produces a variable for the output of the net
network_output = lasagne.layers.get_output(l_out)

# The loss function is calculated as the mean of the (categorical) cross-entropy between the prediction and target. 
cost = T.nnet.categorical_crossentropy(network_output,target_values).mean() # Retrieve all parameters from the network all_params = lasagne.layers.get_all_params(l_out) # Compute AdaGrad updates for training print("Computing updates ...") updates = lasagne.updates.adagrad(cost, all_params, LEARNING_RATE) # Theano functions for training and computing cost print("Compiling functions ...") train = theano.function([l_in.input_var, target_values], cost, updates=updates, allow_input_downcast=True) compute_cost = theano.function([l_in.input_var, target_values], cost, allow_input_downcast=True) # In order to generate text from the network, we need the probability distribution of the next character given # the state of the network and the input (a seed). # In order to produce the probability distribution of the prediction, we compile a function called probs. probs = theano.function([l_in.input_var],network_output,allow_input_downcast=True) ``` and the model is trained via: ``` for it in xrange(data_size * num_epochs / BATCH_SIZE): try_it_out() # Generate text using the p^th character as the start. avg_cost = 0; for _ in range(PRINT_FREQ): x,y = gen_data(p) #print(p) p += SEQ_LENGTH + BATCH_SIZE - 1 if(p+BATCH_SIZE+SEQ_LENGTH >= data_size): print('Carriage Return') p = 0; avg_cost += train(x, y) print("Epoch {} average loss = {}".format(it*1.0*PRINT_FREQ/data_size*BATCH_SIZE, avg_cost / PRINT_FREQ)) ``` How can I save the model so I do not need to train it again? With scikit I generally just pickle the model object. However I am unclear on the analogous process with Theano / lasagne.
You can save the weights with numpy: ``` np.savez('model.npz', *lasagne.layers.get_all_param_values(network_output)) ``` And load them again later on like this: ``` with np.load('model.npz') as f: param_values = [f['arr_%d' % i] for i in range(len(f.files))] lasagne.layers.set_all_param_values(network_output, param_values) ``` Source: <https://github.com/Lasagne/Lasagne/blob/master/examples/mnist.py> As for the model definition itself: One option is certainly to keep the code and regenerate the network, before setting the pretrained weights.
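The save/load round-trip itself does not depend on Lasagne: `get_all_param_values` just returns a list of NumPy arrays, so the pattern can be checked with stand-in arrays (a sketch; the two arrays below are arbitrary placeholders for real parameters):

```python
import os
import tempfile

import numpy as np

# Stand-ins for what lasagne.layers.get_all_param_values(...) would return.
param_values = [np.arange(6.0).reshape(2, 3), np.zeros(4)]

path = os.path.join(tempfile.mkdtemp(), 'model.npz')
np.savez(path, *param_values)  # stored as arr_0, arr_1, ...

with np.load(path) as f:
    restored = [f['arr_%d' % i] for i in range(len(f.files))]
```

`restored` is exactly the list you would hand back to `set_all_param_values`.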
Tensorflow read images with labels
34,340,489
18
2015-12-17T17:14:59Z
34,345,827
14
2015-12-17T22:54:53Z
[ "python", "tensorflow" ]
I am building a standard image classification model with Tensorflow. For this I have input images, each assigned with a label (number in {0,1}). The Data can hence be stored in a list using the following format: ``` /path/to/image_0 label_0 /path/to/image_1 label_1 /path/to/image_2 label_2 ... ``` I want to use TensorFlow's queuing system to read my data and feed it to my model. Ignoring the labels, one can easily achieve this by using `string_input_producer` and `wholeFileReader`. Here the code: ``` def read_my_file_format(filename_queue): reader = tf.WholeFileReader() key, value = reader.read(filename_queue) example = tf.image.decode_png(value) return example #removing label, obtaining list containing /path/to/image_x image_list = [line[:-2] for line in image_label_list] input_queue = tf.train.string_input_producer(image_list) input_images = read_my_file_format(input_queue) ``` However, the labels are lost in that process as the image data is purposely shuffled as part of the input pipeline. What is the easiest way of pushing the labels together with the image data through the input queues?
There are three main steps to solving this problem: 1. Populate the [`tf.train.string_input_producer()`](https://www.tensorflow.org/versions/master/api_docs/python/io_ops.html#string_input_producer) with a list of strings containing the original, space-delimited string containing the filename and the label. 2. Use [`tf.read_file(filename)`](https://www.tensorflow.org/versions/master/api_docs/python/io_ops.html#read_file) rather than `tf.WholeFileReader()` to read your image files. `tf.read_file()` is a stateless op that consumes a single filename and produces a single string containing the contents of the file. It has the advantage that it's a pure function, so it's easy to associate data with the input and the output. For example, your `read_my_file_format` function would become: ``` def read_my_file_format(filename_and_label_tensor): """Consumes a single filename and label as a ' '-delimited string. Args: filename_and_label_tensor: A scalar string tensor. Returns: Two tensors: the decoded image, and the string label. """ filename, label = tf.decode_csv(filename_and_label_tensor, [[""], [""]], " ") file_contents = tf.read_file(filename) example = tf.image.decode_png(file_contents) return example, label ``` 3. Invoke the new version of `read_my_file_format` by passing a single dequeued element from the `input_queue`: ``` image, label = read_my_file_format(input_queue.dequeue()) ``` You can then use the `image` and `label` tensors in the remainder of your model.
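The ' '-delimited split that `tf.decode_csv` performs on each dequeued string can be mimicked in plain Python, which is handy for sanity-checking the list file before wiring up the queue (illustration only, no TensorFlow involved):

```python
def split_filename_label(line):
    """Mirror the two-column ' '-delimited decode: 'path label' -> (path, label)."""
    filename, label = line.strip().split(' ')
    return filename, label

pairs = [split_filename_label(line)
         for line in ["/path/to/image_0 0", "/path/to/image_1 1"]]
```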
Tensorflow read images with labels
34,340,489
18
2015-12-17T17:14:59Z
36,947,632
11
2016-04-29T21:20:28Z
[ "python", "tensorflow" ]
I am building a standard image classification model with Tensorflow. For this I have input images, each assigned with a label (number in {0,1}). The Data can hence be stored in a list using the following format: ``` /path/to/image_0 label_0 /path/to/image_1 label_1 /path/to/image_2 label_2 ... ``` I want to use TensorFlow's queuing system to read my data and feed it to my model. Ignoring the labels, one can easily achieve this by using `string_input_producer` and `wholeFileReader`. Here the code: ``` def read_my_file_format(filename_queue): reader = tf.WholeFileReader() key, value = reader.read(filename_queue) example = tf.image.decode_png(value) return example #removing label, obtaining list containing /path/to/image_x image_list = [line[:-2] for line in image_label_list] input_queue = tf.train.string_input_producer(image_list) input_images = read_my_file_format(input_queue) ``` However, the labels are lost in that process as the image data is purposely shuffled as part of the input pipeline. What is the easiest way of pushing the labels together with the image data through the input queues?
Using `slice_input_producer` provides a solution which is much cleaner. Slice Input Producer allows us to create an Input Queue containing arbitrarily many separable values. This snippet of the question would look like this: ``` def read_labeled_image_list(image_list_file): """Reads a .txt file containing pathes and labeles Args: image_list_file: a .txt file with one /path/to/image per line label: optionally, if set label will be pasted after each line Returns: List with all filenames in file image_list_file """ f = open(image_list_file, 'r') filenames = [] labels = [] for line in f: filename, label = line[:-1].split(' ') filenames.append(filename) labels.append(int(label)) return filenames, labels def read_images_from_disk(input_queue): """Consumes a single filename and label as a ' '-delimited string. Args: filename_and_label_tensor: A scalar string tensor. Returns: Two tensors: the decoded image, and the string label. """ label = input_queue[1] file_contents = tf.read_file(input_queue[0]) example = tf.image.decode_png(file_contents, channels=3) return example, label # Reads pfathes of images together with their labels image_list, label_list = read_labeled_image_list(filename) images = ops.convert_to_tensor(image_list, dtype=dtypes.string) labels = ops.convert_to_tensor(label_list, dtype=dtypes.int32) # Makes an input queue input_queue = tf.train.slice_input_producer([images, labels], num_epochs=num_epochs, shuffle=True) image, label = read_images_from_disk(input_queue) # Optional Preprocessing or Data Augmentation # tf.image implements most of the standard image augmentation image = preprocess_image(image) label = preprocess_label(label) # Optional Image and Label Batching image_batch, label_batch = tf.train.batch([image, label], batch_size=batch_size) ``` See also the [generic\_input\_producer](https://github.com/TensorVision/TensorVision/blob/master/examples/inputs/generic_input.py) from the [TensorVision](https://github.com/TensorVision/TensorVision) examples 
for full input-pipeline.
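The plain-Python half of the pipeline, `read_labeled_image_list`, can be exercised on its own. The variant below is a sketch that uses a context manager and `strip()` instead of `line[:-1]`, so it also copes with a final line that has no trailing newline:

```python
import tempfile

def read_labeled_image_list(image_list_file):
    """Read 'path label' lines into parallel lists of paths and int labels."""
    filenames, labels = [], []
    with open(image_list_file) as f:
        for line in f:
            filename, label = line.strip().split(' ')
            filenames.append(filename)
            labels.append(int(label))
    return filenames, labels

with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as tmp:
    tmp.write("/path/to/image_0 0\n/path/to/image_1 1")  # no trailing newline

filenames, labels = read_labeled_image_list(tmp.name)
```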
Convert elements in a list using a dictionary key
34,341,833
3
2015-12-17T18:34:38Z
34,341,876
8
2015-12-17T18:37:05Z
[ "python", "list", "dictionary" ]
I have a list of `values` that match with certain `keys` from a dictionary I created earlier. ``` myDict = {1:'A',2:'B',3:'C'} myList = ['A','A','A','B','B','A','C','C'] ``` How can I create/convert `myList` into something like: ``` myNewList = [1,1,1,2,2,1,3,3] ``` Could someone point me in the right direction? Not sure if it matters, I created the dictionary using json in another script, and I am now loading the created dictionary in my current script.
One easy way is to just invert `myDict` and then use that to map the new list: ``` myNewDict = {v: k for k, v in myDict.iteritems()} myNewList = [myNewDict[x] for x in myList] ``` Also take a look at this for Python naming conventions: [What is the naming convention in Python for variable and function names?](http://stackoverflow.com/questions/159720/what-is-the-naming-convention-in-python-for-variable-and-function-names)
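Note that `iteritems()` is Python 2 only; on Python 3 the same inversion is spelled with `items()`. Either way this assumes the values of `myDict` are unique, otherwise the inverted dict silently keeps only one key per value:

```python
my_dict = {1: 'A', 2: 'B', 3: 'C'}
my_list = ['A', 'A', 'A', 'B', 'B', 'A', 'C', 'C']

inverse = {v: k for k, v in my_dict.items()}  # items() replaces iteritems()
my_new_list = [inverse[x] for x in my_list]
```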
SQL BETWEEN command not working for large ranges
34,348,390
5
2015-12-18T04:08:45Z
34,348,426
8
2015-12-18T04:13:09Z
[ "python", "sql", "sql-azure", "pyodbc" ]
The SQL command BETWEEN only works when I give it a small range for the column. Here is what I mean:

My code:

```
import AzureSQLHandler as sql

database_layer = sql.AzureSQLHandler()

RESULTS_TABLE_NAME = "aero2.ResultDataTable"
where_string = " smog BETWEEN '4' AND '9'"
print database_layer.select_data(RESULTS_TABLE_NAME, "*", where_string)
```

Which corresponds to the SQL command:

```
SELECT * FROM aero2.ResultDataTable WHERE smog BETWEEN '4.0' AND '9.0'
```

and select\_data returns a 2-D array containing all these rows. The column I am referencing here has already saved all values equal to 5.0.

This works FINE! But, when I increase the range to, say, '4.0' AND '200.0', it does not return anything.
Strings in databases are compared alphabetically. A string `'4.0'` is greater than a string `'200.0'` because character `4` comes after character `2`. You should use numeric type in your database if you need to support this kind of queries. Make sure that `smog` column has a numeric type (such as DOUBLE) and use `BETWEEN 4.0 AND 200.0` in your query. If you cannot change the schema you can use `CAST`: `cast(smog as DOUBLE) BETWEEN 4.0 and 200.0`, however this solution is less efficient.
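The alphabetical-versus-numeric point is easy to reproduce outside the database, since Python strings compare the same way (character by character):

```python
# As strings, '4.0' sorts after '200.0': the first characters '4' and '2' decide.
as_strings = '4.0' > '200.0'

# As numbers the order is the opposite, which is what the query intends.
as_numbers = 4.0 < 200.0
```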
Error loading MySQLdb module: libmysqlclient.so.20: cannot open shared object file: No such file or directory
34,348,752
8
2015-12-18T04:53:19Z
34,348,823
7
2015-12-18T05:02:06Z
[ "python", "mysql", "django" ]
I had a running django project and for some reasons I had to remove the current mysql version and install a different MySQL version in my machine. But now when I am trying to run this program am getting an error as follows: ``` raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: libmysqlclient.so.20: cannot open shared object file: No such file or directory ```
Reinstall the Python MySQL bindings so they are rebuilt against the newly installed MySQL client library:

```
pip uninstall mysql-python
pip install mysql-python
```
Writing fits files with astropy.io.fits
34,348,787
4
2015-12-18T04:57:34Z
34,349,126
7
2015-12-18T05:37:26Z
[ "python", "astropy", "fits" ]
I'm trying to append data to a fits file using astropy.io. Here is an example of my code: ``` import numpy as np from astropy.io import fits a1 = np.array([1,2,4,8]) a2 = np.array([0,1,2,3]) hdulist = fits.BinTableHDU.from_columns( [fits.Column(name='FIRST', format='E', array=a1), fits.Column(name='SECOND', format='E', array=a2)]) hdulist.writeto('file.fits') ``` The error I get is ``` type object 'BinTableHDU' has no attribute 'from_columns' ``` 1. Could this be a problem with the astropy.io version I'm using? 2. Is there an easier way to add extensions or columns to a fits file using astropy.io? Any help would be appreciated.
You'll have to upgrade astropy. I can run your example fine; that's with the most recent astropy version. Looking at the change log for 0.4, it definitely looks like your astropy version is too old. The [log says](https://github.com/astropy/astropy/blob/v0.4.x/CHANGES.rst#api-changes-1):

> The astropy.io.fits.new\_table function is now fully deprecated (though
> will not be removed for a long time, considering how widely it is
> used).
>
> Instead please use the more explicit BinTableHDU.from\_columns to
> create a new binary table HDU, and the similar TableHDU.from\_columns
> to create a new ASCII table. These otherwise accept the same arguments
> as new\_table which is now just a wrapper for these.

implying `from_columns` was newly introduced in 0.4.

---

Overall, if you are indeed using astropy version 0.3, you may want to upgrade to version 1.0 or (current) 1.1:

* while 0.3 is only about 1.5 years old (and a bit younger if you have a 0.3.x version), the rapid pace of astropy development makes it quite a bit out of date. A lot has changed in the interface, and examples you'll find online these days will rarely work with your version.
* Since astropy is now at a 1.x(.y) series, that should mean the API is relatively stable: there's only a slim chance you'd run into backward compatibility issues.
* Version 1.0(.x) is a [long-term support release](http://docs.astropy.org/en/stable/whatsnew/1.0.html#about-long-term-support), with two years of bug fixes. Astropy 1.0 was released on 18 Feb 2015, so if you're looking for more stability, it will last until 18 Feb 2017. (Other versions support six months of bug fixes. But with the previous point, if you do minor release upgrades along the way, you should be fine as well.)
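When gating code on an installed version (astropy exposes it as `astropy.__version__`), compare version tuples rather than raw strings, since string comparison is lexicographic ('1.10' sorts before '1.9'). A generic sketch, not astropy-specific:

```python
def version_tuple(version):
    """'0.3.2' -> (0, 3, 2), so comparisons are numeric per component."""
    return tuple(int(part) for part in version.split('.'))

# from_columns appeared in astropy 0.4, so anything older needs the upgrade.
needs_upgrade = version_tuple('0.3.2') < version_tuple('0.4')
```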
How to delete objects from two apps with the same model name?
34,351,073
2
2015-12-18T08:10:05Z
34,351,122
8
2015-12-18T08:13:39Z
[ "python", "django", "django-models" ]
I have two apps `news` and `article` which both have exactly the same model name `Comment`: ``` class Comment(models.Model): author = models.ForeignKey(User) created = models.DateTimeField(auto_now_add=True) title = models.CharField(max_length=100, default='', blank=True) body = models.TextField() post = models.ForeignKey(Photo) published = models.BooleanField(default=True) ``` Now, in a view I want to delete certain comments from both apps: ``` Comment.objects.filter(author=someauthor).delete() ``` How can I achieve that without changing the model names?
You can use `import ... as ...` so that both model names do not conflict: ``` from news.models import Comment as NewsComment from article.models import Comment as ArticleComment ... NewsComment.objects.filter(author=someauthor).delete() ArticleComment.objects.filter(author=someauthor).delete() ```
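The `import ... as ...` renaming is plain Python, not Django-specific: any two modules exporting the same name can be disambiguated this way. A stdlib sketch (the `news`/`article` apps aren't importable outside the project), using two modules that both define a `join` (the `shlex.join` half needs Python 3.8+):

```python
from os.path import join as path_join  # os.path's join
from shlex import join as shlex_join   # shlex's join; same bare name, no clash

p = path_join('some', 'dir')           # filesystem path joining
cmd = shlex_join(['ls', '-l', '/'])    # shell-quoted command string
```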
python .get() and None
34,357,513
4
2015-12-18T14:08:58Z
34,357,748
9
2015-12-18T14:24:06Z
[ "python" ]
I love python one liners: ``` u = payload.get("actor", {}).get("username", "") ``` Problem I face is, I have no control over what 'payload' contains, other than knowing it is a dictionary. So, if 'payload' does not have "actor", or it does and actor does or doesn't have "username", this one-liner is fine. Problem of course arises when payload DOES have actor, but actor is not a dictionary. Is there as pretty a way to do this comprehensively as a one liner, *and consider the possibility that 'actor' may not be a dictionary?* Of course I can check the type using 'isinstance', but that's not as nice. I'm not requiring a one liner per se, just asking for the most efficient way to ensure 'u' gets populated, without exception, and without prior knowledge of what exactly is in 'payload'.
### Using EAFP

As xnx suggested, you can take advantage of the following Python paradigm:

> [Easier to ask for forgiveness than permission](https://docs.python.org/3/glossary.html#term-eafp)

You can use it on `KeyError`s as well. Note that indexing a non-dict value (e.g. a string stored under `"actor"`) raises `TypeError`, so that needs catching too:

```
try:
    u = payload["actor"]["username"]
except (TypeError, AttributeError, KeyError):
    u = ""
```

### Using a wrapper with forgiving indexing

Sometimes it would be nice to have something like null-conditional operators in Python. With some helper class this can be compressed into a one-liner expression:

```
class Forgive:
    def __init__(self, value = None):
        self.value = value
    def __getitem__(self, name):
        if self.value is None:
            return Forgive()
        try:
            return Forgive(self.value.__getitem__(name))
        except (KeyError, AttributeError, TypeError):
            return Forgive()
    def get(self, default = None):
        return default if self.value is None else self.value

data = {'actor':{'username': 'Joe'}}
print(Forgive(data)['actor']['username'].get('default1'))
print(Forgive(data)['actor']['address'].get('default2'))
```

ps: one could redefine `__getattr__` as well besides `__getitem__`, so you could even write `Forgive(data)['actor'].username.get('default1')`.
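For completeness, the `isinstance` route the question dismisses can still fit on one line using an assignment expression (Python 3.8+). A sketch against the three payload shapes the question worries about:

```python
payloads = [
    {"actor": {"username": "joe"}},  # normal case
    {"actor": "not-a-dict"},         # actor present but not a dictionary
    {},                              # actor missing entirely
]

results = [
    actor.get("username", "") if isinstance(actor := p.get("actor"), dict) else ""
    for p in payloads
]
```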
pycharm ssh interpreter No such file or directory
34,359,415
2
2015-12-18T15:56:58Z
34,359,948
8
2015-12-18T16:28:36Z
[ "python", "ssh", "pycharm" ]
I am using a Macbook Pro 15 as the local machine and I have a remote server running Ubuntu 14.04. I want to use the remote interpreter to run all the computation, but I want to write the code from my local machine.

When I try to run a simple file with PyCharm I receive this error:

```
ssh://[email protected]:22/usr/bin/python3 -u /Users/donbeo/Documents/phd_code/prova.py
bash: line 0: cd: /Users/donbeo/Documents/phd_code: No such file or directory
/usr/bin/python3: can't open file '/Users/donbeo/Documents/phd_code/prova.py': [Errno 2] No such file or directory

Process finished with exit code 2
```

I saw a few people reporting the same problem but I haven't found a good answer so far. Most of the questions are indeed referring to older versions of PyCharm.

It is clear that the file is not on my remote machine, because I created it with PyCharm on my local one. I was expecting PyCharm to do some sort of synchronisation between the local and remote machine.
To execute your code on remote machine you'll have to perform few steps # Define a remote interpreter for your project 1. Go to File -> Settings -> Project: {project\_name} -> Project Interpreter. 2. Click on cog icon and select Add Remote. 3. Add your SSH host credentials and interpreter path (on remote machine). 4. As a result, you should see new position in project interpreter dropdown selector, spelled like `Python Version (ssh://login@host:port/path/to/interpreter)`. Package list should be populated with records. # Define deployment settings 1. Go to File -> Settings -> Build, Execution, Deployment -> Deployment 2. Create new deployment settings and fill ssh host configuration * Type: SFTP * SFTP host: same as interpreter host * Root path: path where files will be uploaded 3. Click on button "Test SFTP connection" to check if provided data are correct. 4. Go to mappings and configure mapping between local path and deployment path. **Deployment path is relative to root path** - `/` is equivalent to `/my/root/path`, `/dir` to `/my/root/path/dir` etc. # Deploy your code 1. Select Tools -> Deployment -> Upload to {deployment settings name} 2. Upload process will be started in background. Wait for upload to complete. # Run your code 1. Right click on file you want to run and select "Run". Code should run on remote machine.
nltk StanfordNERTagger : NoClassDefFoundError: org/slf4j/LoggerFactory (In Windows)
34,361,725
9
2015-12-18T18:20:22Z
34,364,699
11
2015-12-18T22:10:57Z
[ "python", "windows", "nlp", "nltk", "stanford-nlp" ]
NOTE: I am using Python 2.7 as part of Anaconda distribution. I hope this is not a problem for nltk 3.1. I am trying to use nltk for NER as ``` import nltk from nltk.tag.stanford import StanfordNERTagger #st = StanfordNERTagger('stanford-ner/all.3class.distsim.crf.ser.gz', 'stanford-ner/stanford-ner.jar') st = StanfordNERTagger('english.all.3class.distsim.crf.ser.gz') print st.tag(str) ``` but i get ``` Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory at edu.stanford.nlp.io.IOUtils.<clinit>(IOUtils.java:41) at edu.stanford.nlp.ie.AbstractSequenceClassifier.classifyAndWriteAnswers(AbstractSequenceClassifier.java:1117) at edu.stanford.nlp.ie.AbstractSequenceClassifier.classifyAndWriteAnswers(AbstractSequenceClassifier.java:1076) at edu.stanford.nlp.ie.AbstractSequenceClassifier.classifyAndWriteAnswers(AbstractSequenceClassifier.java:1057) at edu.stanford.nlp.ie.crf.CRFClassifier.main(CRFClassifier.java:3088) Caused by: java.lang.ClassNotFoundException: org.slf4j.LoggerFactory at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 
5 more Traceback (most recent call last): File "X:\jnk.py", line 47, in <module> print st.tag(str) File "X:\Anaconda2\lib\site-packages\nltk\tag\stanford.py", line 66, in tag return sum(self.tag_sents([tokens]), []) File "X:\Anaconda2\lib\site-packages\nltk\tag\stanford.py", line 89, in tag_sents stdout=PIPE, stderr=PIPE) File "X:\Anaconda2\lib\site-packages\nltk\internals.py", line 134, in java raise OSError('Java command failed : ' + str(cmd)) OSError: Java command failed : ['X:\\PROGRA~1\\Java\\JDK18~1.0_6\\bin\\java.exe', '-mx1000m', '-cp', 'X:\\stanford\\stanford-ner.jar', 'edu.stanford.nlp.ie.crf.CRFClassifier', '-loadClassifier', 'X:\\stanford\\classifiers\\english.all.3class.distsim.crf.ser.gz', '-textFile', 'x:\\appdata\\local\\temp\\tmpqjsoma', '-outputFormat', 'slashTags', '-tokenizerFactory', 'edu.stanford.nlp.process.WhitespaceTokenizer', '-tokenizerOptions', '"tokenizeNLs=false"', '-encoding', 'utf8'] ``` but i can see that the slf4j jar is there in my lib folder. do i need to update an environment variable? **Edit** Thanks everyone for their help, but i still get the same error. 
Here is what i tried recently ``` import nltk from nltk.tag import StanfordNERTagger print(nltk.__version__) stanford_ner_dir = 'X:\\stanford\\' eng_model_filename= stanford_ner_dir + 'classifiers\\english.all.3class.distsim.crf.ser.gz' my_path_to_jar= stanford_ner_dir + 'stanford-ner.jar' st = StanfordNERTagger(model_filename=eng_model_filename, path_to_jar=my_path_to_jar) print st._stanford_model print st._stanford_jar st.tag('Rami Eid is studying at Stony Brook University in NY'.split()) ``` and also ``` import nltk from nltk.tag import StanfordNERTagger print(nltk.__version__) st = StanfordNERTagger('english.all.3class.distsim.crf.ser.gz') print st._stanford_model print st._stanford_jar st.tag('Rami Eid is studying at Stony Brook University in NY'.split()) ``` i get ``` 3.1 X:\stanford\classifiers\english.all.3class.distsim.crf.ser.gz X:\stanford\stanford-ner.jar ``` after that it goes on to print the same stacktrace as before. `java.lang.ClassNotFoundException: org.slf4j.LoggerFactory` any idea why this might be happening? I updated my CLASSPATH as well. I even added all the relevant folders to my PATH environment variable.for example the folder where i unzipped the stanford jars, the place where i unzipped slf4j and even the lib folder inside the stanford folder. i have no idea why this is happening :( **Could it be windows? i have had problems with windows paths before** **Update** 1. The Stanford NER version i have is 3.6.0. The zip file says `stanford-ner-2015-12-09.zip` 2. I also tried using the `stanford-ner-3.6.0.jar` instead of `stanford-ner.jar` but still get the same error 3. When i right click on the `stanford-ner-3.6.0.jar`, i notice [![jar properties](http://i.stack.imgur.com/Z8Jlo.png)](http://i.stack.imgur.com/Z8Jlo.png) **i see this for all the files that i have extracted, even the slf4j files.could this be causing the problem?** 4. 
Finally, why does the error message say `java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory` i do not see any folder named `org` anywhere **Update: Env variables** Here are my env variables ``` CLASSPATH .; X:\jre1.8.0_60\lib\rt.jar; X:\stanford\stanford-ner-3.6.0.jar; X:\stanford\stanford-ner.jar; X:\stanford\lib\slf4j-simple.jar; X:\stanford\lib\slf4j-api.jar; X:\slf4j\slf4j-1.7.13\slf4j-1.7.13\slf4j-log4j12-1.7.13.jar STANFORD_MODELS X:\stanford\classifiers JAVA_HOME X:\PROGRA~1\Java\JDK18~1.0_6 PATH X:\PROGRA~1\Java\JDK18~1.0_6\bin; X:\stanford; X:\stanford\lib; X:\slf4j\slf4j-1.7.13\slf4j-1.7.13 ``` anything wrong here?
# EDITED **Note:** The following answer will only work on: * NLTK version 3.1 * Stanford Tools compiled since 2015-04-20 As both tools change rather quickly and the API might look very different 3-6 months later. Please treat the following answer as temporary and not an eternal fix. Always refer to <https://github.com/nltk/nltk/wiki/Installing-Third-Party-Software> for the latest instructions on how to interface Stanford NLP tools using NLTK!! --- # Step 1 First update your NLTK to the version 3.1 using ``` pip install -U nltk ``` or (for Windows) download the latest NLTK using <http://pypi.python.org/pypi/nltk> Then check that you have version 3.1 using: ``` python3 -c "import nltk; print(nltk.__version__)" ``` # Step 2 Then download the zip file from <http://nlp.stanford.edu/software/stanford-ner-2015-04-20.zip> and unzip the file and save to `C:\some\path\to\stanford-ner\` (In windows) # Step 3 Then set the environment variable for `CLASSPATH` to `C:\some\path\to\stanford-ner\stanford-ner.jar` and the environment variable for `STANFORD_MODELS` to `C:\some\path\to\stanford-ner\classifiers` Or in command line (**ONLY for Windows**): ``` set CLASSPATH=%CLASSPATH%;C:\some\path\to\stanford-ner\stanford-ner.jar set STANFORD_MODELS=%STANFORD_MODELS%;C:\some\path\to\stanford-ner\classifiers ``` (See <http://stackoverflow.com/a/17176423/610569> for click-click GUI instructions for setting environment variables in Windows) (See [Stanford Parser and NLTK](http://stackoverflow.com/questions/13883277/stanford-parser-and-nltk/34112695#34112695) for details on setting environment variables in Linux) # Step 4 Then in python: ``` >>> from nltk.tag import StanfordNERTagger >>> st = StanfordNERTagger('english.all.3class.distsim.crf.ser.gz') >>> st.tag('Rami Eid is studying at Stony Brook University in NY'.split()) [(u'Rami', u'PERSON'), (u'Eid', u'PERSON'), (u'is', u'O'), (u'studying', u'O'), (u'at', u'O'), (u'Stony', u'ORGANIZATION'), (u'Brook', u'ORGANIZATION'), (u'University', 
u'ORGANIZATION'), (u'in', u'O'), (u'NY', u'O')] ``` Without setting the environment variables, you can try: ``` from nltk.tag import StanfordNERTagger stanford_ner_dir = 'C:\\some\path\to\stanford-ner\' eng_model_filename= stanford_ner_dir + 'classifiers\english.all.3class.distsim.crf.ser.gz' my_path_to_jar= stanford_ner_dir + 'stanford-ner.jar' st = StanfordNERTagger(model_filename=eng_model_filename, path_to_jar=my_path_to_jar) st.tag('Rami Eid is studying at Stony Brook University in NY'.split()) ``` See more detailed instructions on [Stanford Parser and NLTK](http://stackoverflow.com/questions/13883277/stanford-parser-and-nltk/34112695#34112695)
Cannot press button
34,372,953
12
2015-12-19T16:36:50Z
34,521,990
10
2015-12-30T02:08:12Z
[ "python", "automation", "mechanize", "mechanize-python" ]
I'm trying to code a bot for a game, and need some help to do it. Being a complete noob, I googled how to do it with python and started reading a bit about mechanize. ``` <div class="clearfix"> <a href="#" onclick="return Index.submit_login('server_br73');"> <span class="world_button_active">Mundo 73</span> </a> </div> ``` My problem is in logging in, where i have this raw code for now: ``` import requests import requesocks import xlrd import socks import socket import mechanize import selenium from bs4 import BeautifulSoup # EXCEL file_location = "/home/luis/Dropbox/Projetos/TW/multisbr.xlsx" wb = xlrd.open_workbook(file_location) sheetname = wb.sheet_names () sh1 = wb.sheet_by_index(0) def nickNm(): lista = [sh1.col_values(0, x) for x in range (sh1.ncols)] listaNomes = lista [1] x < 1 print listaNomes def passwd(): lista = [sh1.col_values(1, x) for x in range (sh1.ncols)] listaPasswd = lista [1] x < 1 print listaPasswd # TOR def create_connection(address, timeout=None, source_address=None): sock = socks.socksocket() sock.connect(address) return sock socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9050) # patch the socket module socket.socket = socks.socksocket socket.create_connection = create_connection #BeautifulSoup def get_source (): url = 'https://www.tribalwars.com.br' source_code = requests.get(url) plain_text = source_code.text soup = BeautifulSoup(plain_text, 'lxml') # ALFA br = mechanize.Browser () twbr = 'https://www.tribalwars.com.br/index.php' def alfa (): br.open(link) br.select_form(nr=0) br["user"] = "something" br["password"] = "pword" result = br.submit() br.geturl() nickNm() passwd() alfa() ```
There is quite a lot of javascript involved when you perform different actions on a page, `mechanize` [is not a browser and cannot execute javascript](http://stackoverflow.com/questions/802225/how-do-i-use-mechanize-to-process-javascript). One option to make your life easier here would be to *automate a real browser*. Here is an example code to log into the `tribalwars` using [`selenium`](https://selenium-python.readthedocs.org/) and a headless `PhantomJS`: ``` from selenium import webdriver driver = webdriver.PhantomJS() driver.get("https://www.tribalwars.com.br/index.php") # logging in driver.find_element_by_id("user").send_keys("user") driver.find_element_by_id("password").send_keys("password") driver.find_element_by_css_selector("a.login_button").click() ```
Deploy to AWS EB failing because of YAML error in python.config
34,373,107
2
2015-12-19T16:52:55Z
34,424,225
8
2015-12-22T20:51:25Z
[ "python", "django", "amazon-web-services" ]
I am trying to deploy some Django code to an AWS Elastic Beanstalk Environment. I am getting a deployment error: ``` The configuration file __MACOSX/OriginalNewConfig-deploy/.ebextensions/._python.config in application version OriginalNewConfig2-deploy contains invalid YAML or JSON. YAML exception: unacceptable character '' (0x0) special characters are not allowed in "<reader>", position 0, JSON exception: Unexpected character () at position 0.. Update the configuration file. ``` The python.config file (in the .ebextensions folder) looks like this: ``` --- container_commands: 01_wsgipass: command: "echo \"WSGIPassAuthorization On\" >> ../wsgi.conf" packages: yum: libjpeg-turbo-devel: [] ``` The deployment code I am using has apparently been successfully deployed before (by the original developer) so I do not understand why an error is being reported. I am deploying by using the 'Upload and Deploy' button in the AWS EB dashboard.
Seems like OS X creates this hidden `__MACOSX` folder automatically when you compress with Finder. I was also having this issue. I've used the following command in the terminal to strip it from the existing zip: ``` zip -d filename.zip __MACOSX/\* ``` found here: [Mac zip compress without \_\_MACOSX folder?](http://stackoverflow.com/questions/10924236/mac-zip-compress-without-macosx-folder)
Await Future from Executor: Future can't be used in 'await' expression
34,376,814
5
2015-12-20T00:24:50Z
34,376,938
9
2015-12-20T00:47:13Z
[ "python", "python-3.x", "async-await", "future", "python-asyncio" ]
I wanted to use a [ThreadPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html) from a [python coroutine](https://docs.python.org/3/library/asyncio-task.html), to delegate some blocking network calls to a separate thread. However, running the following code: ``` from concurrent.futures import ThreadPoolExecutor import asyncio def work(): # do some blocking io pass async def main(): executor = ThreadPoolExecutor() await executor.submit(work) loop = asyncio.get_event_loop() loop.run_until_complete(main()) loop.close() ``` causes error: ``` TypeError: object Future can't be used in 'await' expression ``` Aren't `Future` objects *awaitable*? Why does it say they aren't? How can I `await` a `Future` object returned by `executor.submit`? *Python 3.5.0* **EDIT** Using `executor.submit` is not my decision. This is used internally by several libraries, like [`requests-futures`](https://github.com/ross/requests-futures). I am searching for a way to interop with those modules from coroutines.
You should use [`loop.run_in_executor`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.BaseEventLoop.run_in_executor): ``` from concurrent.futures import ThreadPoolExecutor import asyncio def work(): # do some blocking io pass async def main(loop): executor = ThreadPoolExecutor() await loop.run_in_executor(executor, work) loop = asyncio.get_event_loop() loop.run_until_complete(main(loop)) loop.close() ``` **EDIT** [`concurrent.futures.Future`](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.Future) object are different from [`asyncio.Future`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future). The `asyncio.Future` is intended to be used with event loops and is *awaitable*, while the former isn't. `loop.run_in_executor` provides the necessary interoperability between the two. **EDIT #2** > Using executor.submit is not my decision. This is used internally by several libraries, like requests-futures. I am searching for a way to interop with those modules from coroutines. Although undocumented, you can use `asyncio.wrap_future(future, *, loop=None)` to convert a `concurrent.futures.Future` to a `asyncio.Future`.
Confused a bit about django INSTALLED_APPS naming convention
34,377,237
7
2015-12-20T01:50:44Z
34,377,341
9
2015-12-20T02:11:25Z
[ "python", "django" ]
The tutorial on the site creates an app named polls. It's using django 1.9, so the in the INSTALLED\_APPS its ``` polls.apps.PollsConfig ``` I'm watching a tutorial he names the app newsletter and in INSTALLED\_APPS he has ``` newsletter ``` he's using 1.8, though. I am using 1.9. I've watched other tutorials and they also just add a name without dots in the syntax as he does. I realize things may be different, that's understood. To be clear if I named my app dogs,. in the installed apps it would be named like this ``` dogs.apps.DogsConfig ``` or if it was tree it would be ``` tree.apps.TreeConfig ``` Is that how the naming convention goes? also I would assume things would get shorter in newer versions and more convenient. so to go from just adding ``` newsletter, ``` to having to type out ``` polls.apps.PollsConfig ``` seems weird to me. But I'm new so I maybe missing something. Any and all advice is welcome
That is the [*Application Configuration*](https://docs.djangoproject.com/en/1.9/ref/applications/) feature, new to Django 1.7. Basically, now you can list in `INSTALLED_APPS` either the module that contains the application or a class that derives from `django.apps.AppConfig` and defines the behavior of the application. This feature provides several advantages: * Apps can be configured more easily, and even subclassed for customization. * You can have several apps in the same module. Application modules can define the special module variable `default_app_config` to specify the name of their `AppConfig`, so that they can use the new features without having to specify the full name of that class in `INSTALLED_APPS`. But this is a backwards compatibility feature and new applications are recommended to write the full `AppConfig` name. Anyway, most `django/contrib` apps use that `default_app_config`, for compatibility with old configurations. See for example the file `django/contrib/messages/__init__.py` is just: ``` from django.contrib.messages.api import * from django.contrib.messages.constants import * default_app_config = 'django.contrib.messages.apps.MessagesConfig' ``` So, adding it up, per OP request: * If you add in `INSTALLED_APPS` the typename `foo.apps.FooConfig`, then that class will be used to setup the `foo` app, 1.7 style (recommended). * If you add in `INSTALLED_APPS` the plain name `foo`, then: + if there is a variable `foo.default_app_config` this class will be used to setup the `foo` app, 1.7 style. Most (all?) the standard Django apps have this variable, so that you don't need to change your `INSTALLED_APPS` when you upgrade from Django-1.6 to Django-1.7. + if there is not such a variable, then the 1.6 style application will be used, with default values for the advanced configuration options.
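As an illustration, here is a minimal config sketch of the 1.7-style pieces for a hypothetical `foo` app (all names here are illustrative, not taken from the question, and this fragment assumes a Django project is already set up):

```python
# foo/apps.py
from django.apps import AppConfig

class FooConfig(AppConfig):
    name = 'foo'                        # full Python path to the application
    verbose_name = 'My Foo Application'

# foo/__init__.py -- only needed if you want plain 'foo' in INSTALLED_APPS
# to pick up this config automatically (the backwards-compatible style):
default_app_config = 'foo.apps.FooConfig'
```

With that in place, listing either `'foo.apps.FooConfig'` or plain `'foo'` in `INSTALLED_APPS` resolves to the same configuration class.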
Passing arguments to process.crawl in Scrapy python
34,382,356
3
2015-12-20T15:06:10Z
34,383,962
7
2015-12-20T17:50:26Z
[ "python", "web-crawler", "scrapy", "scrapy-spider", "google-crawlers" ]
I would like to get the same result as this command line : scrapy crawl linkedin\_anonymous -a first=James -a last=Bond -o output.json My script is as follows : ``` import scrapy from linkedin_anonymous_spider import LinkedInAnonymousSpider from scrapy.crawler import CrawlerProcess from scrapy.utils.project import get_project_settings spider = LinkedInAnonymousSpider(None, "James", "Bond") process = CrawlerProcess(get_project_settings()) process.crawl(spider) ## <-------------- (1) process.start() ``` I found out that process.crawl() in (1) is creating another LinkedInAnonymousSpider where first and last are None (printed in (2)), if so, then there is no point of creating the object spider and how is it possible to pass the arguments first and last to process.crawl()? linkedin\_anonymous : ``` from logging import INFO import scrapy class LinkedInAnonymousSpider(scrapy.Spider): name = "linkedin_anonymous" allowed_domains = ["linkedin.com"] start_urls = [] base_url = "https://www.linkedin.com/pub/dir/?first=%s&last=%s&search=Search" def __init__(self, input = None, first= None, last=None): self.input = input # source file name self.first = first self.last = last def start_requests(self): print self.first ## <------------- (2) if self.first and self.last: # taking input from command line parameters url = self.base_url % (self.first, self.last) yield self.make_requests_from_url(url) def parse(self, response): . . . ```
Pass the spider arguments to the `process.crawl` method: ``` process.crawl(spider, input='inputargument', first='James', last='Bond') ```
How can I easily machine translate something with python?
34,382,722
7
2015-12-20T15:45:15Z
34,382,738
9
2015-12-20T15:47:21Z
[ "python", "nltk" ]
You used to be able to use `nltk.misc.babelfish` to translate things, but the Yahoo Babelfish API went down. Is there an easy way I can, say, do this? ``` >>> import translate >>> translate('carpe diem', 'latin', 'english') 'seize the day' ```
Goslate is a good library for this that uses Google Translate: <http://pythonhosted.org/goslate/> Here's the example from the docs: ``` >>> import goslate >>> gs = goslate.Goslate() >>> print(gs.translate('hello world', 'de')) hallo welt ``` In order to go from "carpe diem" to "seize the day": ``` >>> print(gs.translate('carpe diem', 'en', 'la')) seize the day ``` So it's essentially the same as the Babelfish API used to be, but the order of the target and source languages is switched. And one more thing -- if you need to figure out the short code, `gs.get_languages()` will give you a dictionary of all the short codes for each supported language: `{...'la':'Latin'...}`
'if x % 2: return True' , wouldn't that return True if the number was divisible by 2?
34,385,292
4
2015-12-20T20:07:15Z
34,385,311
10
2015-12-20T20:09:29Z
[ "python", "python-2.7" ]
I don't understand how `if not x % 2: return True` works. Wouldn't that mean this if x is not divisible by two, return True? That's what i see in this code. I see it as `if not x % 2: return True` would return the opposite of if a number is divisible by 2, return True. I just don't understand how that part of the syntax works. ``` def is_even(x): if not x % 2: return True else: return False ```
> Wouldn't that mean this if x is not divisible by two, return True? No, because when x is not divisible by 2 the result of `x%2` would be a nonzero value, which will be evaluated as `True` by Python, so its `not` would be `False`. Read more about [Truth value testing](https://docs.python.org/3/library/stdtypes.html#truth-value-testing) in python.
'if x % 2: return True' , wouldn't that return True if the number was divisible by 2?
34,385,292
4
2015-12-20T20:07:15Z
34,385,341
7
2015-12-20T20:13:15Z
[ "python", "python-2.7" ]
I don't understand how `if not x % 2: return True` works. Wouldn't that mean this if x is not divisible by two, return True? That's what i see in this code. I see it as `if not x % 2: return True` would return the opposite of if a number is divisible by 2, return True. I just don't understand how that part of the syntax works. ``` def is_even(x): if not x % 2: return True else: return False ```
The modulo operator `%` returns the remainder of a division. If `x` is divisible by 2 ('even'), then the remainder is zero and `x % 2` thus evaluates to zero (=False), which makes the whole expression True.
How to Exit Linux terminal using Python script?
34,389,322
4
2015-12-21T05:12:21Z
34,389,409
7
2015-12-21T05:21:23Z
[ "python", "linux", "python-2.7", "terminal", "exit" ]
``` import sys def end(): foo=raw_input() sys.exit() print 'Press enter to Exit python and Terminal' end() ``` When we run the program, we should able to exit the Python Interpreter and Terminal itself. But it only exits python interpreter, not the terminal. Thanks in advance.
`SIGHUP` (hang up) will tell the terminal to exit. The terminal should be your script's parent process, so ``` import os import signal os.kill(os.getppid(), signal.SIGHUP) ```
intersecting lists in a dict (more than two)
34,395,826
3
2015-12-21T12:34:16Z
34,395,882
11
2015-12-21T12:37:55Z
[ "python", "dictionary", "intersection" ]
I have a dict, of varying length. Each entry has a name and a list as so: ``` somedict = {'Name': [1, 2, 3], 'Name2': [], 'Name3': [2,3] } ``` How do I get the intersection for the following list? I need to do it dynamically, I don't know how long the dict will be. For the above list, the intersection would be empty, I know. But for ``` somedict = {'Name': [1, 2, 3], 'Name3': [2,3] } ``` It should return ``` [2, 3] ```
Normally, intersection is a set operation. So, you might want to convert the values of the dictionary to sets and then run intersection, like this ``` >>> set.intersection(*(set(values) for values in data.values())) {2, 3} ``` If you want the result to be a list, just convert the resulting set to a list, like this ``` >>> list(set.intersection(*(set(values) for values in data.values()))) [2, 3] ``` Here, the expression, `*(set(values) for values in data.values())` creates a generator, which yields each and every value of the dictionary item converted to a set and the generator is [unpacked](http://stackoverflow.com/a/12786141/1903116) to the [`set.intersection`](https://docs.python.org/3/library/stdtypes.html#set.intersection) function.
Docker how to run pip requirements.txt only if there was a change?
34,398,632
7
2015-12-21T15:01:30Z
34,399,661
10
2015-12-21T15:58:19Z
[ "python", "docker", "dockerfile" ]
In a Dockerfile I have a layer which installs `requirements.txt`: ``` FROM python:2.7 RUN pip install -r requirements.txt ``` When I build the docker image it runs the whole process **regardless** of any changes made to this file. How do I make sure Docker only runs `pip install -r requirements.txt` if there has been a change to the file? ``` Removing intermediate container f98c845d0f05 Step 3 : RUN pip install -r requirements.txt ---> Running in 8ceb63abaef6 Collecting https://github.com/tomchristie/django-rest-framework/archive/master.zip (from -r requirements.txt (line 30)) Downloading https://github.com/tomchristie/django-rest-framework/archive/master.zip Collecting Django==1.8.7 (from -r requirements.txt (line 1)) ```
I'm assuming that at some point in your build process, you're copying your entire application into the Docker image with `ADD`: ``` ADD . /opt/app WORKDIR /opt/app RUN pip install -r requirements.txt ``` The problem is that you're invalidating the Docker build cache every time you're copying the entire application into the image. This will also invalidate the cache for all subsequent build steps. To prevent this, I'd suggest copying **only the `requirements.txt`** file in a separate build step before adding the entire application into the image: ``` ADD requirements.txt /opt/app/requirements.txt WORKDIR /opt/app RUN pip install -r requirements.txt ADD . /opt/app # continue as before... ``` As the requirements file itself probably changes only rarely, you'll be able to use the cached layers up until the point that you add your application code into the image.
Does Conda replace the need for virtualenv?
34,398,676
9
2015-12-21T15:03:54Z
34,398,794
9
2015-12-21T15:10:02Z
[ "python", "scipy", "virtualenv", "anaconda", "conda" ]
I recently discovered [Conda](http://conda.pydata.org/docs/index.html) after I was having trouble installing SciPy, specifically on a Heroku app that I am developing. With Conda you create environments, very similar to what [virtualenv](https://virtualenv.readthedocs.org/en/latest/) does. My questions are: 1. If I use Conda will it replace the need for virtualenv? If not, how do I use the two together? Do I install virtualenv in Conda, or Conda in virtualenv? 2. Do I still need to use pip? If so, will I still be able to install packages with pip in an isolated environment?
1. Conda replaces virtualenv. In my opinion it is better. It is not limited to Python but can be used for other languages too. In my experience it provides a much smoother experience, especially for scientific packages. The first time I got MayaVi properly installed on Mac was with `conda`. 2. You can still use `pip`. In fact, `conda` installs `pip` in each new environment. It knows about pip-installed packages. For example: ``` conda list ``` lists all installed packages in your current environment. Conda-installed packages show up like this: ``` sphinx_rtd_theme 0.1.7 py35_0 defaults ``` and the ones installed via `pip`like this: ``` wxpython-common 3.0.0.0 <pip> ```
Install wxPython in osx 10.11
34,402,303
2
2015-12-21T18:42:24Z
34,622,956
8
2016-01-05T23:13:04Z
[ "python", "osx", "wxpython" ]
when i try to install wxPython, it show a error: > ``` > > The Installer could not install the software because there was no > > software found to install. > ``` How can i fix it? thank you so much
wxPython is using a [legacy script](https://github.com/wxWidgets/wxPython/blob/14476d72d92c44624d5754c4f1fac2e8d7bc30da/distrib/mac/buildpkg.py), and according to this [technical note](https://developer.apple.com/library/mac/technotes/tn2206/_index.html#//apple_ref/doc/uid/DTS40007919-CH1-TNTAG405) *bundle installers* were deprecated and are (as of El Capitan release) unsupported: > Bundle-style installer packages are a legacy transition aid that is no longer supported. PackageMaker is also no longer supported. It is now necessary to convert to flat-file installer packages using tools like productbuild. That leaves you with two options: 1. Convert the installer to a flat package. 2. Compile wxWidgets and install it locally. To achieve the former, follow these instructions: **0**) Let's assume that you have already mounted the `dmg` and you have moved the `pkg` folder to a *working place*. ``` cd ~/repack_wxpython cp -r /Volumes/wxPython/wxPython-ABC.pkg . ``` **1**) Use the pax utility to extract the payload file (`pax.gz`) from `Contents/Resources` to a folder that will become the root of your new package. ``` mkdir pkg_root cd pkg_root pax -f ../wxPython-ABC.pkg/Contents/Resources/wxPython-ABC.pax.gz -z -r cd .. ``` **2**) Rename the bundle's `preflight`/`postflight` scripts to `preinstall`/`postinstall` scripts, as required for flat packages, in a scripts folder. ``` mkdir scripts cp wxPython-ABC.pkg/Contents/Resources/preflight scripts/preinstall cp wxPython-ABC.pkg/Contents/Resources/postflight scripts/postinstall ``` **3**) Create the flat package using the `pkgbuild` tool: ``` pkgbuild --root ./pkg_root --scripts ./scripts --identifier com.wxwidgets.wxpython wxPython-ABC.pkg ``` This is the [documentation of the `pkgbuild` command](https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/pkgbuild.1.html) in case you want to customize the passed parameters. 
Caveats: The original bundle package contains a `License.rtf` and a `Welcome.txt` files that are not included in the flat package. Those need to be added by defining a [custom XML](https://developer.apple.com/library/mac/documentation/DeveloperTools/Reference/DistributionDefinitionRef/Chapters/Distribution_XML_Ref.html#//apple_ref/doc/uid/TP40005370-CH100-SW22) file and creating another package using [the `productbuild`](http://www.shanekirk.com/2013/10/creating-flat-packages-in-osx/) command.
Can't reproduce distance value between sources obtained with astropy
34,407,678
6
2015-12-22T02:42:55Z
34,407,928
8
2015-12-22T03:16:27Z
[ "python", "coordinates", "astropy" ]
I have two sources with equatorial coordinates `(ra, dec)` and `(ra_0, dec_0)` located at distances `r` and `r_0`, and I need to calculate the 3D distance between them. I use two approaches that *should* give the same result as far as I understand, but do not. The first approach is to apply [astropy](http://www.astropy.org/)'s [separation\_3d](http://docs.astropy.org/en/stable/api/astropy.coordinates.SkyCoord.html#astropy.coordinates.SkyCoord.separation_3d) function. The second approach is to use the expression that gives the distance between two sources with spherical coordinates: [![enter image description here](http://i.stack.imgur.com/ljPfo.gif)](http://i.stack.imgur.com/ljPfo.gif) as shown [here](http://math.stackexchange.com/a/833110/37846). In the MCVE below, the values returned are: ``` 91.3427173002 pc 93.8470493776 pc ``` Shouldn't these two values be equal? [MCVE](http://stackoverflow.com/help/mcve): ``` from astropy.coordinates import SkyCoord from astropy import units as u import numpy as np # Define some coordinates and distances for the sources. c1 = SkyCoord(ra=9.7*u.degree, dec=-50.6*u.degree, distance=1500.3*u.pc) c2 = SkyCoord(ra=7.5*u.degree, dec=-47.6*u.degree, distance=1470.2*u.pc) # Obtain astropy's distance between c1 & c2 coords. print c1.separation_3d(c2) # Obtain distance between c1 & c2 coords using explicit expression. ra_0, dec_0, r_0 = c1.ra.radian, c1.dec.radian, c1.distance ra, dec, r = c2.ra.radian, c2.dec.radian, c2.distance alpha_delta_par = np.sin(dec) * np.sin(dec_0) * np.cos(ra - ra_0) +\ np.cos(dec) * np.cos(dec_0) d_pc = np.sqrt(r**2 + r_0**2 - 2*r*r_0*alpha_delta_par) print d_pc ```
This is a problem with coordinate systems, and the difference between **declination** (astral coordinates) and **polar angle θ** (spherical coordinates) :-) Astral coordinates define **declination as north of the celestial equator**, while spherical coordinates define polar angle **θ as downward from vertical.** If you change your `alpha_delta_par` to account for this 90° difference by adding `np.pi/2` to all your declination terms, you get ``` alpha_delta_par = np.sin(np.pi/2 + dec)*np.sin(np.pi/2 + dec0)*np.cos(ra - ra0) +\ np.cos(np.pi/2 + dec)*np.cos(np.pi/2 + dec0) ``` Which gives the correct result: `91.3427173002 pc`. Turns out physicists usually use the symbol θ as the polar angle and mathematicians usually use φ; I went with θ because I followed my heart. [I'm not making this up I swear.](https://en.wikipedia.org/wiki/Spherical_coordinate_system)
python 2.7 : remove a key from a dictionary by part of key
34,415,897
3
2015-12-22T12:25:13Z
34,415,949
8
2015-12-22T12:28:47Z
[ "python", "python-2.7", "dictionary" ]
I have a python dictionary , the dictionary key composed from tupples, like this : ``` { (u'A_String_0', u'A_String_1', u'B_String_3', u'Remove_Me'): 300, (u'A_String_0', u'B_String_4'): 301, (u'A_String_0', u'A_String_1', u'B_String_3', u'Remove_Key'): 301, } ``` I'd like to remove all keys from dictionary when only part of tupple appears in key : for example `'Remove_'` In this case , must pop two keys: one contains `u'Remove_Me'` and another contains `u'Remove_Key'` Finally the dictionary will look like this : ``` { (u'A_String_0', u'B_String_4'): 301 } ``` Thanks a lot !
One way: ``` >>> d = { (u'A_String_0', u'A_String_1', u'B_String_3', u'Remove_Me'): 300, (u'A_String_0', u'B_String_4'): 301, (u'A_String_0', u'A_String_1', u'B_String_3', u'Remove_Key'): 301, } >>> >>> >>> d_out = {k:v for k,v in d.items() if not any(x.startswith('Remove_') for x in k)} >>> d_out {(u'A_String_0', u'B_String_4'): 301} ``` EDIT: In case you wanted to check if `Remove_` is part of any item of the tuple key, then you are better with: ``` >>> d_out = {k:v for k,v in d.items() if not any('Remove_' in x for x in k)} ```
Unpacking tuple-like textfile
34,416,365
9
2015-12-22T12:53:10Z
34,416,814
7
2015-12-22T13:17:50Z
[ "python", "regex", "list", "tuples" ]
Given a textfile of lines of 3-tuples: ``` (0, 12, Tokenization) (13, 15, is) (16, 22, widely) (23, 31, regarded) (32, 34, as) (35, 36, a) (37, 43, solved) (44, 51, problem) (52, 55, due) (56, 58, to) (59, 62, the) (63, 67, high) (68, 76, accuracy) (77, 81, that) (82, 91, rulebased) (92, 102, tokenizers) (103, 110, achieve) (110, 111, .) (0, 3, But) (4, 14, rule-based) (15, 25, tokenizers) (26, 29, are) (30, 34, hard) (35, 37, to) (38, 46, maintain) (47, 50, and) (51, 56, their) (57, 62, rules) (63, 71, language) (72, 80, specific) (80, 81, .) (0, 2, We) (3, 7, show) (8, 12, that) (13, 17, high) (18, 26, accuracy) (27, 31, word) (32, 35, and) (36, 44, sentence) (45, 57, segmentation) (58, 61, can) (62, 64, be) (65, 73, achieved) (74, 76, by) (77, 82, using) (83, 93, supervised) (94, 102, sequence) (103, 111, labeling) (112, 114, on) (115, 118, the) (119, 128, character) (129, 134, level) (135, 143, combined) (144, 148, with) (149, 161, unsupervised) (162, 169, feature) (170, 178, learning) (178, 179, .) (0, 2, We) (3, 12, evaluated) (13, 16, our) (17, 23, method) (24, 26, on) (27, 32, three) (33, 42, languages) (43, 46, and) (47, 55, obtained) (56, 61, error) (62, 67, rates) (68, 70, of) (71, 75, 0.27) (76, 77, ‰) (78, 79, () (79, 86, English) (86, 87, )) (87, 88, ,) (89, 93, 0.35) (94, 95, ‰) (96, 97, () (97, 102, Dutch) (102, 103, )) (104, 107, and) (108, 112, 0.76) (113, 114, ‰) (115, 116, () (116, 123, Italian) (123, 124, )) (125, 128, for) (129, 132, our) (133, 137, best) (138, 144, models) (144, 145, .) ``` The goal is to achieve two different data types: * **`sents_with_positions`**: a list of list of tuples where the the tuples looks like each line of the textfile * **`sents_words`**: a list of list of string made up of only the third element in the tuples from each line of the textfile E.g. 
From the input textfile: ``` sents_words = [ ('Tokenization', 'is', 'widely', 'regarded', 'as', 'a', 'solved', 'problem', 'due', 'to', 'the', 'high', 'accuracy', 'that', 'rulebased', 'tokenizers', 'achieve', '.'), ('But', 'rule-based', 'tokenizers', 'are', 'hard', 'to', 'maintain', 'and', 'their', 'rules', 'language', 'specific', '.'), ('We', 'show', 'that', 'high', 'accuracy', 'word', 'and', 'sentence', 'segmentation', 'can', 'be', 'achieved', 'by', 'using', 'supervised', 'sequence', 'labeling', 'on', 'the', 'character', 'level', 'combined', 'with', 'unsupervised', 'feature', 'learning', '.') ] sents_with_positions = [ [(0, 12, 'Tokenization'), (13, 15, 'is'), (16, 22, 'widely'), (23, 31, 'regarded'), (32, 34, 'as'), (35, 36, 'a'), (37, 43, 'solved'), (44, 51, 'problem'), (52, 55, 'due'), (56, 58, 'to'), (59, 62, 'the'), (63, 67, 'high'), (68, 76, 'accuracy'), (77, 81, 'that'), (82, 91, 'rulebased'), (92, 102, 'tokenizers'), (103, 110, 'achieve'), (110, 111, '.')], [(0, 3, 'But'), (4, 14, 'rule-based'), (15, 25, 'tokenizers'), (26, 29, 'are'), (30, 34, 'hard'), (35, 37, 'to'), (38, 46, 'maintain'), (47, 50, 'and'), (51, 56, 'their'), (57, 62, 'rules'), (63, 71, 'language'), (72, 80, 'specific'), (80, 81, '.')], [(0, 2, 'We'), (3, 7, 'show'), (8, 12, 'that'), (13, 17, 'high'), (18, 26, 'accuracy'), (27, 31, 'word'), (32, 35, 'and'), (36, 44, 'sentence'), (45, 57, 'segmentation'), (58, 61, 'can'), (62, 64, 'be'), (65, 73, 'achieved'), (74, 76, 'by'), (77, 82, 'using'), (83, 93, 'supervised'), (94, 102, 'sequence'), (103, 111, 'labeling'), (112, 114, 'on'), (115, 118, 'the'), (119, 128, 'character'), (129, 134, 'level'), (135, 143, 'combined'), (144, 148, 'with'), (149, 161, 'unsupervised'), (162, 169, 'feature'), (170, 178, 'learning'), (178, 179, '.')] ] ``` I have been doing it by: * iterating through each line of the textfile, process the tuple, and then appending them to a list to get `sents_with_positions` * and while appending each process sentence to 
`sents_with_positions`, I append the last elements of the tuples for each sentence to `sents_words` Code: ``` sents_with_positions = [] sents_words = [] _sent = [] for line in _input.split('\n'): if len(line.strip()) > 0: line = line[1:-1] start, _, next = line.partition(',') end, _, next = next.partition(',') text = next.strip() _sent.append((int(start), int(end), text)) else: sents_with_positions.append(_sent) sents_words.append(list(zip(*_sent))[2]) _sent = [] ``` **But is there a simpler or cleaner way to achieve the same output?** Maybe through regexes? Or some `itertools` trick? Note that there are cases with tricky tuples in the lines of the textfile, e.g. * `(86, 87, ))` # Sometimes the token/word is a bracket * `(96, 97, ()` * `(87, 88, ,)` # Sometimes the token/word is a comma * `(29, 33, Café)` # The token/word is unicode (sometimes accented), so [a-zA-Z] might be insufficient * `(2, 3, 2)` # Sometimes the token/word is a number * `(47, 52, 3,000)` # Sometimes the token/word is a number/word with comma * `(23, 29, (e.g.))` # Sometimes the token/word contains brackets.
This is, in my opinion, a little more readable and clear, but it may be a little less performant and assumes the input file is correctly formatted (e.g. empty lines are really empty, while your code works even if there is some random whitespace in the "empty" lines). It leverages regex groups, they do all the work of parsing the lines, we just convert start and end to integers. ``` line_regex = re.compile('^\((\d+), (\d+), (.+)\)$', re.MULTILINE) sents_with_positions = [] sents_words = [] for section in _input.split('\n\n'): words_with_positions = [ (int(start), int(end), text) for start, end, text in line_regex.findall(section) ] words = tuple(t[2] for t in words_with_positions) sents_with_positions.append(words_with_positions) sents_words.append(words) ```
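As a quick sanity check of the pattern (exercised here on a tiny two-line sample in the same format as the input file), the groups come back as strings and only `start`/`end` need converting:

```python
import re

line_regex = re.compile(r'^\((\d+), (\d+), (.+)\)$', re.MULTILINE)

# a tiny two-line sample in the same shape as the input file
sample = "(0, 3, But)\n(4, 14, rule-based)"

triples = [(int(start), int(end), text)
           for start, end, text in line_regex.findall(sample)]
words = tuple(t[2] for t in triples)
```

Because `.+` is greedy and the pattern is anchored to the whole line, tricky tokens such as `,`, `(` or `)` are still captured correctly.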
Why is this Python code executing twice?
34,419,062
3
2015-12-22T15:22:17Z
34,419,159
8
2015-12-22T15:27:11Z
[ "python", "functional-programming" ]
I'm very new to Python and trying to learn how classes, methods, scopes, etc. work by building very silly programs with no real purpose. The code I wrote below is supposed to just define a class `Functions` that is instantiated using an `x` and a `y` value and then one can execute various simple math functions like add, subtract, multiply or divide (yes I know there is a Python Math library). However, whenever I run my code and I get to the section where I want to run a math function in my class it runs the entire program over again and then does the math function. What am I doing wrong here? The file name is **MyMath.py** ``` class Functions(): def __init__(self, x, y): self.x = x self.y = y def add(self): return self.x+self.y def subtract(self): return self.x-self.y def multiply(self): return self.x*self.y def divide(self): return self.x/self.y def check_input(input): if input == int: pass else: while not input.isdigit(): input = raw_input("\n " + input + " is not a number. Please try again: ") return input print("Welcome to the customzied Math program!") x = raw_input("\nTo begin, please enter your first number: ") x = check_input(x) y = raw_input("Enter your second number: ") y = check_input(y) from MyMath import Functions math = Functions(x,y) print(math.add()) ```
Remove the following statement. ``` from MyMath import Functions ``` The first line of the program defines the name `Functions`, and you can use it without having to import it. You only use the import command if the class (or function, or variable, ...) is defined in a different file/module. **Note in addition:** When you import anything from a module the whole module is run as a script (although only the `Functions` name is imported into the local namespace). For this reason, everything within a file to be imported should be contained inside a class or function (unless there is a good reason not to...).
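One standard way to follow that advice is the `if __name__ == "__main__":` guard, so the script-style code runs only when the file is executed directly and not when it is imported. A minimal sketch (the fixed numbers stand in for the interactive `raw_input` prompts):

```python
class Functions(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def add(self):
        return self.x + self.y

def main():
    # the interactive prompts would go here; fixed values keep the sketch short
    math = Functions(2, 3)
    return math.add()

if __name__ == "__main__":
    # runs when executed as a script, but NOT when the module is imported
    print(main())
```

With this layout, `from MyMath import Functions` in another file would no longer re-run the prompts.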
SQLAlchemy Model Circular Import
34,421,205
10
2015-12-22T17:18:38Z
34,503,626
7
2015-12-29T02:35:19Z
[ "python", "sqlalchemy" ]
I have two models in the same module named `models`. They are a 1-1 relationship and have been configured per the [SQLAlchemy docs](http://docs.sqlalchemy.org/en/latest/orm/basic_relationships.html#one-to-one). **Vehicle.py** ``` from models.AssetSetting import AssetSetting class Vehicle(Base): __tablename__ = 'vehicles' vehicle_id = Column(Integer, primary_key=True) ... settings = relationship('AssetSetting', backref=backref('asset_settings')) ``` **AssetSetting.py** ``` from models.Vehicle import Vehicle class AssetSetting(Base): __tablename__ = 'asset_settings' asset_alert_setting_id = Column(Integer, primary_key=True, autoincrement=True) ... vehicle = relationship('vehicles', foreign_keys=Column(ForeignKey('vehicles.vehicle_id'))) ``` If I use the string relationship building (i.e. `ForeignKey('vehicles.vehicle_id')`) I get the error: ``` sqlalchemy.exc.InvalidRequestError: When initializing mapper Mapper|AssetSetting|asset_settings, expression 'vehicles' failed to locate a name ("name 'vehicles' is not defined"). If this is a class name, consider adding this relationship() to the <class 'models.AssetSetting.AssetSetting'> class after both dependent classes have been defined. ``` If I use the class mapping, I get the classic circular import error: ``` Traceback (most recent call last): File "tracking_data_runner.py", line 7, in <module> from models.Tracker import Tracker File "/.../models/Tracker.py", line 5, in <module> from models.Vehicle import Vehicle File "/.../models/Vehicle.py", line 13, in <module> from models.Tracker import Tracker ImportError: cannot import name 'Tracker' ``` I believe I could fix this issue by putting the files in the same package but would prefer to keep them separate. Thoughts?
To avoid circular import errors, you should use *string relationship building*, but **both of your models have to use the same `Base`** - the same `declarative_base` instance. Instantiate your `Base` once and use it when initializing both `Vehicle` and `AssetSetting`. Or, you may [explicitly map the table names and classes](http://d) to help the mapper relate your models: ``` Base = declarative_base(class_registry={"vehicles": Vehicle, "asset_settings": AssetSetting}) ```
SQLAlchemy Model Circular Import
34,421,205
10
2015-12-22T17:18:38Z
34,503,823
7
2015-12-29T03:05:07Z
[ "python", "sqlalchemy" ]
I have two models in the same module named `models`. They are a 1-1 relationship and have been configured per the [SQLAlchemy docs](http://docs.sqlalchemy.org/en/latest/orm/basic_relationships.html#one-to-one). **Vehicle.py** ``` from models.AssetSetting import AssetSetting class Vehicle(Base): __tablename__ = 'vehicles' vehicle_id = Column(Integer, primary_key=True) ... settings = relationship('AssetSetting', backref=backref('asset_settings')) ``` **AssetSetting.py** ``` from models.Vehicle import Vehicle class AssetSetting(Base): __tablename__ = 'asset_settings' asset_alert_setting_id = Column(Integer, primary_key=True, autoincrement=True) ... vehicle = relationship('vehicles', foreign_keys=Column(ForeignKey('vehicles.vehicle_id'))) ``` If I use the string relationship building (i.e. `ForeignKey('vehicles.vehicle_id')`) I get the error: ``` sqlalchemy.exc.InvalidRequestError: When initializing mapper Mapper|AssetSetting|asset_settings, expression 'vehicles' failed to locate a name ("name 'vehicles' is not defined"). If this is a class name, consider adding this relationship() to the <class 'models.AssetSetting.AssetSetting'> class after both dependent classes have been defined. ``` If I use the class mapping, I get the classic circular import error: ``` Traceback (most recent call last): File "tracking_data_runner.py", line 7, in <module> from models.Tracker import Tracker File "/.../models/Tracker.py", line 5, in <module> from models.Vehicle import Vehicle File "/.../models/Vehicle.py", line 13, in <module> from models.Tracker import Tracker ImportError: cannot import name 'Tracker' ``` I believe I could fix this issue by putting the files in the same package but would prefer to keep them separate. Thoughts?
I discovered my problem was twofold: 1. I was referencing `Vehicles` improperly in my relationship. It should be `relationship('Vehicle'` not `relationship('vehicles'` 2. Apparently it is improper to declare the FK inside the relationship as I did in **AssetSettings.py** (`foreign_keys=Column(ForeignKey('vehicles.vehicle_id'))`). I had to declare the FK and then pass it in to the relationship. My configurations look like this now: **Vehicle.py** ``` class Vehicle(Base, IDiagnostable, IUsage, ITrackable): __tablename__ = 'vehicles' vehicle_id = Column(Integer, primary_key=True) settings = relationship('AssetSetting', backref=backref('asset_settings')) ``` **AssetSetting.py** ``` class AssetSetting(Base): __tablename__ = 'asset_settings' asset_alert_setting_id = Column(Integer, primary_key=True, autoincrement=True) vehicle_id = Column(ForeignKey('vehicles.vehicle_id')) vehicle = relationship('Vehicle', foreign_keys=vehicle_id) ```
Increment the next element based on previous element
34,422,238
2
2015-12-22T18:28:08Z
34,422,279
7
2015-12-22T18:31:02Z
[ "python", "list", "for-loop", "indexing" ]
When looping through a list, you can work with the current item of the list. For example, if you want to replace certain items with others, you can use: ``` a=['a','b','c','d','e'] b=[] for i in a: if i=='b': b.append('replacement') else: b.append(i) print b ['a', 'replacement', 'c', 'd', 'e'] ``` However, I wish to replace certain values not based on index `i`, but based on index `i+1`. I've been trying for ages and I can't seem to make it work. I would like something like this: ``` c=['a','b','c','d','e'] d=[] for i in c: if i+1=='b': d.append('replacement') else: d.append(i) print d d=['replacement','b','c','d','e'] ``` Is there any way to achieve this?
Use a [list comprehension](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions) along with [`enumerate`](https://docs.python.org/3/library/functions.html#enumerate) ``` >>> ['replacement' if a[i+1]=='b' else v for i,v in enumerate(a[:-1])]+[a[-1]] ['replacement', 'b', 'c', 'd', 'e'] ``` The code replaces all those elements where the next element is `b`. However to take care of the last index and prevent `IndexError`, we just append the last element and loop till the penultimate element. --- Without a list comprehension ``` a=['a','b','c','d','e'] d=[] for i,v in enumerate(a[:-1]): if a[i+1]=='b': d.append('replacement') else: d.append(v) d.append(a[-1]) print d ```
Increment the next element based on previous element
34,422,238
2
2015-12-22T18:28:08Z
34,422,346
7
2015-12-22T18:35:38Z
[ "python", "list", "for-loop", "indexing" ]
When looping through a list, you can work with the current item of the list. For example, if you want to replace certain items with others, you can use: ``` a=['a','b','c','d','e'] b=[] for i in a: if i=='b': b.append('replacement') else: b.append(i) print b ['a', 'replacement', 'c', 'd', 'e'] ``` However, I wish to replace certain values not based on index `i`, but based on index `i+1`. I've been trying for ages and I can't seem to make it work. I would like something like this: ``` c=['a','b','c','d','e'] d=[] for i in c: if i+1=='b': d.append('replacement') else: d.append(i) print d d=['replacement','b','c','d','e'] ``` Is there any way to achieve this?
It's generally better style to not iterate over indices in Python. A common way to approach a problem like this is to use [`zip`](https://docs.python.org/3/library/functions.html#zip) (or the similar [`izip_longest`](https://docs.python.org/3/library/itertools.html#itertools.izip) in `itertools`) to see multiple values at once: ``` In [32]: from itertools import izip_longest In [33]: a=['a','b','c','d','e'] In [34]: b = [] In [35]: for c, next in izip_longest(a, a[1:]): ....: if next == 'd': ....: b.append("replacement") ....: else: ....: b.append(c) ....: In [36]: b Out[36]: ['a', 'b', 'replacement', 'd', 'e'] ```
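On Python 3 the same idea works with `itertools.zip_longest` (the renamed `izip_longest`); applied to the question's `'b'` example it produces the output the asker wanted:

```python
from itertools import zip_longest  # named izip_longest on Python 2

a = ['a', 'b', 'c', 'd', 'e']
b = []
for current, nxt in zip_longest(a, a[1:], fillvalue=None):
    if nxt == 'b':
        b.append('replacement')
    else:
        b.append(current)
# b is now ['replacement', 'b', 'c', 'd', 'e']
```

The `fillvalue=None` pads the final pair, so the last element is handled without special-casing.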
Replacing characters in string by whitespace except digits
34,429,202
3
2015-12-23T05:29:31Z
34,429,225
10
2015-12-23T05:31:30Z
[ "python" ]
I am trying to remove all the characters and special symbols from a string in python except the numbers(digits 0-9). This is what I am doing- ``` s='das dad 67 8 - 11 2928 313' s1='' for i in range(0,len(s)): if not(ord(s[i])>=48 and ord(s[i])<=57): s1=s1+' ' else: s1=s1+s[i] #s1=s1.split() print(s1) ``` So, basically I am checking the ascii codes for each character, if they do not lie in the range of digits' ascii values, I update them by whitespace. This works fine, but I was curious if there is some other more efficient way I can do this in python. **Edit** I want to replace non-digit characters with whitespace
You can use `re` here: ``` import re s1 = re.sub(r"[^0-9 ]", " ", s) ``` To preserve the `.` inside floating-point numbers, use ``` (?!(?<=\d)\.(?=\d))[^0-9 ] ```
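For illustration, both patterns applied to sample strings (the second input string is made up for this demo):

```python
import re

s = 'das dad 67 8 - 11 2928 313'
s1 = re.sub(r"[^0-9 ]", " ", s)  # every non-digit, non-space char -> space

# variant that keeps the dot inside floating-point numbers
s2 = re.sub(r"(?!(?<=\d)\.(?=\d))[^0-9 ]", " ", 'pi is 3.14, roughly')
```

The lookaround only spares a `.` that sits between two digits, so punctuation anywhere else is still blanked out.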
Changing a variable inside a method with another method inside it
34,431,264
21
2015-12-23T08:11:05Z
34,431,308
30
2015-12-23T08:14:06Z
[ "python" ]
The following code raises an `UnboundLocalError`: ``` def foo(): i = 0 def incr(): i += 1 incr() print(i) foo() ``` Is there a way to accomplish this?
Use `nonlocal` statement ``` def foo(): i = 0 def incr(): nonlocal i i += 1 incr() print(i) foo() ``` For more information on this new statement added in python 3.x, go to <https://docs.python.org/3/reference/simple_stmts.html#the-nonlocal-statement>
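If you are on Python 2, where `nonlocal` does not exist, a common workaround is to close over a mutable container instead of rebinding the name — a minimal sketch:

```python
def foo():
    counter = [0]          # a one-element list acts as a mutable cell

    def incr():
        counter[0] += 1    # mutates the list, no rebinding, so no nonlocal needed

    incr()
    return counter[0]
```

Mutation does not trigger the `UnboundLocalError`, because `counter` is only ever read in the inner scope.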
Changing a variable inside a method with another method inside it
34,431,264
21
2015-12-23T08:11:05Z
34,431,314
9
2015-12-23T08:14:48Z
[ "python" ]
The following code raises an `UnboundLocalError`: ``` def foo(): i = 0 def incr(): i += 1 incr() print(i) foo() ``` Is there a way to accomplish this?
See [9.2. Python Scopes and Namespaces](https://docs.python.org/3/tutorial/classes.html#python-scopes-and-namespaces): > if no `global` statement is in effect – assignments to names always go into the innermost scope. Also: > The `global` statement can be used to indicate that particular variables live in the global scope and should be rebound there; the `nonlocal` statement indicates that particular variables **live in an enclosing scope** and should be rebound there. You have many solutions: * Pass `i` as an argument ✓ (I would go with this one) * Use the `nonlocal` keyword Note that in Python 2.x you can access non-local variables but you **can't** change them.
Changing a variable inside a method with another method inside it
34,431,264
21
2015-12-23T08:11:05Z
34,431,331
20
2015-12-23T08:15:59Z
[ "python" ]
The following code raises an `UnboundLocalError`: ``` def foo(): i = 0 def incr(): i += 1 incr() print(i) foo() ``` Is there a way to accomplish this?
You can use `i` as an argument like this: ``` def foo(): i = 0 def incr(i): return i + 1 i = incr(i) print(i) foo() ```
Sum of product of combinations in a list
34,437,284
5
2015-12-23T13:58:22Z
34,437,352
9
2015-12-23T14:02:19Z
[ "python", "python-3.x", "functional-programming" ]
What is the Pythonic way of summing the product of all combinations in a given list, such as: ``` [1, 2, 3, 4] --> (1 * 2) + (1 * 3) + (1 * 4) + (2 * 3) + (2 * 4) + (3 * 4) = 35 ``` (For this example I have taken all the two-element combinations, but it could have been different.)
Use `itertools.combinations` ``` >>> import itertools >>> l = [1, 2, 3, 4] >>> sum(i * j for i, j in itertools.combinations(l, 2)) 35 ```
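For the two-element case specifically there is also an O(n) shortcut, because the sum of all pairwise products equals ((Σx)² − Σx²) / 2:

```python
l = [1, 2, 3, 4]

# (1+2+3+4)**2 = 100, 1+4+9+16 = 30, (100 - 30) // 2 = 35
pair_sum = (sum(l) ** 2 - sum(i * i for i in l)) // 2
```

Integer division is safe here because the inputs are integers; use `/ 2` if the list may contain floats.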
AssertionError: `HyperlinkedIdentityField` requires the request in the serializer context
34,438,290
5
2015-12-23T14:54:40Z
34,444,082
8
2015-12-23T21:34:35Z
[ "python", "django", "django-views", "django-rest-framework", "django-serializer" ]
I want to create a `many-to-many` relationship where one person can be in many clubs and one club can have many persons. I added the `models.py` and `serializers.py` for the following logic but when I try to serialize it in the command prompt, I get the following error - What am I doing wrong here? I don't even have a `HyperlinkedIdentityField` ``` Traceback (most recent call last): File "<console>", line 1, in <module> File "C:\Users\user\corr\lib\site-packages\rest_framework\serializers.py", line 503, in data ret = super(Serializer, self).data File "C:\Users\user\corr\lib\site-packages\rest_framework\serializers.py", line 239, in data self._data = self.to_representation(self.instance) File "C:\Users\user\corr\lib\site-packages\rest_framework\serializers.py", line 472, in to_representation ret[field.field_name] = field.to_representation(attribute) File "C:\Users\user\corr\lib\site-packages\rest_framework\relations.py", line 320, in to_representation "the serializer." % self.__class__.__name__ AssertionError: `HyperlinkedIdentityField` requires the request in the serializer context. Add `context={'request': request}` when instantiating the serializer. 
``` `models.py` ``` class Club(models.Model): club_name = models.CharField(default='',blank=False,max_length=100) class Person(models.Model): person_name = models.CharField(default='',blank=False,max_length=200) clubs = models.ManyToManyField(Club) ``` `serializers.py` ``` class ClubSerializer(serializers.ModelSerializer): class Meta: model = Club fields = ('url','id','club_name','person') class PersonSerializer(serializers.ModelSerializer): clubs = ClubSerializer() class Meta: model = Person fields = ('url','id','person_name','clubs') ``` `views.py` ``` class ClubDetail(generics.ListCreateAPIView): serializer_class = ClubSerializer def get_queryset(self): club = Clubs.objects.get(pk=self.kwargs.get('pk',None)) persons = Person.objects.filter(club=club) return persons class ClubList(generics.ListCreateAPIView): queryset = Club.objects.all() serializer_class = ClubSerializer class PersonDetail(generics.RetrieveUpdateDestroyAPIView): serializer_class = PersonSerializer def get_object(self): person_id = self.kwargs.get('pk',None) return Person.objects.get(pk=person_id) ``` Inspecting the created serializer gives me this - ``` PersonSerializer(<Person: fd>): url = HyperlinkedIdentityField(view_name='person-detail') id = IntegerField(label='ID', read_only=True) person_name = CharField(max_length=200, required=False) clubs = ClubSerializer(): url = HyperlinkedIdentityField(view_name='club-detail') id = IntegerField(label='ID', read_only=True) club_name = CharField(max_length=100, required=False) ``` but `serializer.data` gives me the error \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*edit\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* I realized the error could be because of `url` patterns, so I added the following url patterns but I still get the error - ``` urlpatterns = format_suffix_patterns([ url(r'^$', views.api_root), url(r'^clubs/$', views.ClubList.as_view(), name='club-list'), url(r'^clubs/(?P<pk>[0-9]+)/persons/$', views.ClubDetail.as_view(), name='club-detail'), 
url(r'^person/(?P<pk>[0-9]+)/$', views.PersonDetail.as_view(), name='person-detail'), ]) ```
You're getting this error as the `HyperlinkedIdentityField` expects to receive `request` in `context` of the serializer so it can build absolute URLs. As you are initializing your serializer on the command line, you don't have access to request and so receive an error. If you need to check your serializer on the command line, you'd need to do something like this: ``` from rest_framework.request import Request from rest_framework.test import APIRequestFactory from .models import Person from .serializers import PersonSerializer factory = APIRequestFactory() request = factory.get('/') serializer_context = { 'request': Request(request), } p = Person.objects.first() s = PersonSerializer(instance=p, context=serializer_context) print s.data ``` Your url field would look something like `http://testserver/person/1/`.
Can't run pip: UnicodeDecodeError
34,440,958
6
2015-12-23T17:36:10Z
34,613,967
20
2016-01-05T14:24:45Z
[ "python", "numpy", "encoding", "pip", "ubuntu-14.04" ]
I have trouble using pip. For example: ``` pip install numpy --upgrade ``` Gives me the following error: ``` Collecting numpy Using cached numpy-1.10.2.tar.gz Exception: Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 211, in main status = self.run(options, args) File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 305, in run wb.build(autobuilding=True) File "/usr/local/lib/python2.7/dist-packages/pip/wheel.py", line 705, in build self.requirement_set.prepare_files(self.finder) File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 334, in prepare_files functools.partial(self._prepare_file, finder)) File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 321, in _walk_req_to_install more_reqs = handler(req_to_install) File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 505, in _prepare_file abstract_dist.prep_for_dist() File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 123, in prep_for_dist self.req_to_install.run_egg_info() File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 376, in run_egg_info self.setup_py, self.name, File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 347, in setup_py import setuptools # noqa File "/usr/local/lib/python2.7/dist-packages/setuptools/__init__.py", line 12, in <module> from setuptools.extension import Extension File "/usr/local/lib/python2.7/dist-packages/setuptools/extension.py", line 8, in <module> from .dist import _get_unpatched File "/usr/local/lib/python2.7/dist-packages/setuptools/dist.py", line 19, in <module> import pkg_resources File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3138, in <module> @_call_aside File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3124, in _call_aside f(*args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", 
line 3151, in _initialize_master_working_set working_set = WorkingSet._build_master() File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 652, in _build_master ws = cls() File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 645, in __init__ self.add_entry(entry) File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 701, in add_entry for dist in find_distributions(entry, True): File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2139, in find_on_path path_item, entry, metadata, precedence=DEVELOP_DIST File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2521, in from_location py_version=py_version, platform=platform, **kw File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2835, in _reload_version md_version = _version_from_file(self._get_metadata(self.PKG_INFO)) File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2486, in _version_from_file line = next(iter(version_lines), '') File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2654, in _get_metadata for line in self.get_metadata_lines(name): File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2030, in get_metadata_lines return yield_lines(self.get_metadata(name)) File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2025, in get_metadata metadata = f.read() File "/usr/lib/python2.7/codecs.py", line 296, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf8' codec can't decode byte 0xb6 in position 147: invalid start byte ``` Here are some clues: (i) I have the same error when I try to run Spyder. It also appears when I try to to install other packages wtih pip, pandas for example. 
(ii) I have the feeling that this is related to the default encoding, since sys.getdefaultencoding gives me 'ascii'. Note that it works well if I do it in a virtualenv. I'm new to ubuntu so I might have done something wrong. Setup: python 2.7.6; pip 7.1.2; ubuntu 14.04.03. Thank you for your help.
I had the same issue. In my case, it comes from a non-standard character in a module description. I added a `print f.path` in the script `/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py` before line 2025, which allowed me to identify the file that was raising the error. It turned out to be the file `/usr/lib/pymodules/python2.7/rpl-1.5.5.egg-info`, whose author has a name containing the ö character, which cannot be read. I simply replaced "Göran" with "Goran" in this file and it fixed the problem. Hope this helps.
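A slightly less manual way to locate the offending metadata is to check file contents for bytes that are not valid UTF-8; a rough sketch of the core check (apply it to each suspect `.egg-info` file's bytes — the paths vary per system):

```python
def first_invalid_utf8_offset(data):
    """Return the offset of the first byte that is not valid UTF-8, or None."""
    try:
        data.decode('utf-8')
        return None
    except UnicodeDecodeError as exc:
        return exc.start

# 'Göran' encoded as Latin-1 is not valid UTF-8: 0xf6 at offset 1
bad = u'G\xf6ran'.encode('latin-1')
offset = first_invalid_utf8_offset(bad)
```

Running this over each file returned by `glob.glob('/usr/lib/pymodules/python2.7/*.egg-info')` (opened in binary mode) would point straight at the file to fix.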
Count consecutive characters
34,443,946
4
2015-12-23T21:24:21Z
34,444,401
7
2015-12-23T22:02:38Z
[ "python" ]
**EDITED** How would I count consecutive characters in Python to see the number of times each unique digit repeats before the next unique digit? I'm very new to this language so I am looking for something simple. At first I thought I could do something like: ``` word = '1000' counter=0 print range(len(word)) for i in range(len(word)-1): while word[i]==word[i+1]: counter +=1 print counter*"0" else: counter=1 print counter*"1" ``` So that in this manner I could see the number of times each unique digit repeats. But this of course falls out of range when `i` reaches the last value. **In the example above, I would want Python to tell me that 1 repeats 1, and that 0 repeats 3 times.** The code above fails, however, because of my while statement. I know you can do this with just built-in functions, and would prefer a solution that way. Anyone have any insights?
## Consecutive counts: Ooh nobody's posted [`itertools.groupby`](https://docs.python.org/3/library/itertools.html#itertools.groupby) yet! ``` s = "111000222334455555" from itertools import groupby groups = groupby(s) result = [(label, sum(1 for _ in group)) for label, group in groups] ``` After which, `result` looks like: ``` [("1", 3), ("0", 3), ("2", 3), ("3", 2), ("4", 2), ("5", 5)] ``` And you could format with something like: ``` ", ".join("{}x{}".format(label, count) for label, count in result) # "1x3, 0x3, 2x3, 3x2, 4x2, 5x5" ``` ## Total counts: Someone in the comments is concerned that you want a *total* count of numbers so `"11100111" -> {"1":6, "0":2}`. In that case you want to use a [`collections.Counter`](https://docs.python.org/3/library/collections.html#collections.Counter): ``` from collections import Counter s = "11100111" result = Counter(s) # {"1":6, "0":2} ``` ## Your method: As many have pointed out, your method fails because you're looping through `range(len(s))` but addressing `s[i+1]`. This leads to an off-by-one error when `i` is pointing at the last index of `s`, so `i+1` raises an `IndexError`. One way to fix this would be to loop through `range(len(s)-1)`, but it's more pythonic to generate something to iterate over. For a string that's not absolutely huge, `zip(s, s[1:])` isn't a performance issue, so you could do: ``` counts = [] count = 1 for a, b in zip(s, s[1:]): if a==b: count += 1 else: counts.append((a, count)) count = 1 ``` The only problem being that you'll have to special-case the last character if it's unique. 
That can be fixed with [`itertools.zip_longest`](https://docs.python.org/3/library/itertools.html#itertools.zip_longest) ``` import itertools counts = [] count = 1 for a, b in itertools.zip_longest(s, s[1:], fillvalue=None): if a==b: count += 1 else: counts.append((a, count)) count = 1 ``` If you do have a truly *huge* string and can't stand to hold two of them in memory at a time, you can use the [`itertools` recipe `pairwise`](https://docs.python.org/3/library/itertools.html#itertools-recipes). ``` def pairwise(iterable): """iterates pairwise without holding an extra copy of iterable in memory""" a, b = itertools.tee(iterable) next(b, None) return itertools.zip_longest(a, b, fillvalue=None) counts = [] count = 1 for a, b in pairwise(s): ... ```
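Applied to the `'1000'` string from the original question, the `groupby` approach reports each digit together with its run length:

```python
from itertools import groupby

s = '1000'
runs = [(digit, sum(1 for _ in group)) for digit, group in groupby(s)]
# runs == [('1', 1), ('0', 3)]
```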
Unpacking "the rest" of the elements in list comprehension - python3to2
34,449,869
2
2015-12-24T08:49:54Z
34,449,889
9
2015-12-24T08:51:55Z
[ "python", "list", "python-3.x", "tuples", "python-2.x" ]
In Python 3, I could use `for i, *items in tuple` to isolate the first item of the tuple and collect the rest into `items`, e.g.: ``` >>> x = [(2, '_', 'loves', 'love', 'VBZ', 'VBZ', '_', '0', 'ROOT', '_', '_'), (1, '_', 'John', 'John', 'NNP', 'NNP', '_', '2', 'nsubj', '_', '_'), (3, '_', 'Mary', 'Mary', 'NNP', 'NNP', '_', '2', 'dobj', '_', '_'), (4, '_', '.', '.', '.', '.', '_', '2', 'punct', '_', '_')] >>> [items for n, *items in sorted(x)] [['_', 'John', 'John', 'NNP', 'NNP', '_', '2', 'nsubj', '_', '_'], ['_', 'loves', 'love', 'VBZ', 'VBZ', '_', '0', 'ROOT', '_', '_'], ['_', 'Mary', 'Mary', 'NNP', 'NNP', '_', '2', 'dobj', '_', '_'], ['_', '.', '.', '.', '.', '_', '2', 'punct', '_', '_']] ``` I need to backport this to Python 2 and I can't use the `*` pointer to collect the rest of the items in the tuple. * What is the equivalent in Python 2? * Is it still possible to achieve the same using the list comprehension? * What is the technical name for this `*` usage? Unpacking? Isolating? Pointers? * Is there a `__future__` import that can be used such that I can use the same syntax in Python 2?
Just use slicing to skip the first element: ``` [all_items[1:] for all_items in sorted(x)] ``` The syntax is referred to as *extended tuple unpacking*, where the `*`-prefixed name is called the *catch-all name*. See [PEP 3132](https://www.python.org/dev/peps/pep-3132/). There is no backport of the syntax.
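Checking the slicing version against a trimmed-down copy of the question's data (wrapping each slice in `list()` to match the list-of-lists output of the Python 3 starred version):

```python
x = [(2, '_', 'loves', 'love', 'VBZ', 'VBZ', '_', '0', 'ROOT', '_', '_'),
     (1, '_', 'John', 'John', 'NNP', 'NNP', '_', '2', 'nsubj', '_', '_')]

# works identically on Python 2 and Python 3
result = [list(all_items[1:]) for all_items in sorted(x)]
```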
ElementNotVisibleException: Message: Element is not currently visible... selenium (python)
34,456,584
8
2015-12-24T18:23:45Z
34,485,327
7
2015-12-27T22:06:41Z
[ "javascript", "python", "selenium", "selenium-webdriver" ]
I am getting those annoying element is not visible exception using python's selenium, while the element is active, selected, and flashing. The issue is on the page to make a jfiddle, so instead of making a fiddle of the fiddle itself here is a cut and paste way to log in and have a webdriver (named 'driver') in your ipython terminal (enter username and password into ipython, not the page): <https://gist.github.com/codyc4321/787dd6f62e71cc71ae83> Now there is a driver up and you're logged into jsfiddle, everything I do here fails except picking the box the first time (let's say I wanna drop CSS in the CSS box): <https://gist.github.com/codyc4321/f4c03c0606c2e3e4ff5b> Paste `activate_hidden_element` and the first codeline in and see the CSS panel light up. For some reason, this highlighted panel is 'not visible', and you can't paste and code in it. The item is ``` <div class="window top" id="panel_css" data-panel_type="css"> <textarea id="id_code_css" rows="10" cols="40" name="code_css"></textarea> <a href="#" class="windowLabel" data-panel="css"> <span class="label">CSS</span><i class="bts bt-gear"></i> </a> </div> ``` All the other items (HTML, JS) are essentially the same. Why won't this active box allow text to paste in? Thank you SOLUTION: the ugly way I made this service work was to manually fake a cut and paste: ``` css_content = get_inline_content_and_remove_tags(webpage_content, 'style') js_content = get_inline_content_and_remove_tags(webpage_content, 'script') webpage_content = # ...clean cruft... 
def copy_paste_to_hidden_element(content=None, html_id=None): pyperclip.copy(content) activate_hidden_element(html_id=html_id, driver=driver) call_sp('xdotool key from+ctrl+v') time.sleep(1) copy_paste_to_hidden_element(content=webpage_content, html_id="panel_html") copy_paste_to_hidden_element(content=js_content, html_id="panel_js") copy_paste_to_hidden_element(content=css_content, html_id="panel_css") ``` It does work, the only minor issue is it can't run in the background, I need to leave the screen alone for about 30 seconds
JSFiddle editors are powered by [`CodeMirror`](https://codemirror.net/index.html) which has a *programmatic way to set editor values.* For every JSFiddle editor you need to put values into, locate the element with a `CodeMirror` class, get the `CodeMirror` object and call [`setValue()`](https://codemirror.net/doc/manual.html#api): ``` css_panel = driver.find_element_by_id("panel_css") code_mirror_element = css_panel.find_element_by_css_selector(".CodeMirror") driver.execute_script("arguments[0].CodeMirror.setValue(arguments[1]);", code_mirror_element, "test") ``` --- Demo, using JS panel executing the `alert("Test");` Javascript code: ``` >>> from selenium import webdriver >>> >>> driver = webdriver.Firefox() >>> driver.get("https://jsfiddle.net/user/login/") >>> driver.find_element_by_id("id_username").send_keys("user") >>> driver.find_element_by_name("password").send_keys("password") >>> driver.find_element_by_xpath("//input[@value = 'Log in']").click() >>> >>> driver.get("https://jsfiddle.net/") >>> >>> js_panel = driver.find_element_by_id("panel_js") >>> >>> code_mirror_element = js_panel.find_element_by_css_selector(".CodeMirror") >>> driver.execute_script("arguments[0].CodeMirror.setValue(arguments[1]);", code_mirror_element, "alert('test');") >>> >>> driver.find_element_by_id("run").click() >>> ``` It produces: [![enter image description here](http://i.stack.imgur.com/e1pAg.png)](http://i.stack.imgur.com/e1pAg.png)
Sphinx-apidoc on django build html failure on `django.core.exceptions.AppRegistryNotReady`
34,461,088
5
2015-12-25T07:57:59Z
34,462,027
8
2015-12-25T10:28:19Z
[ "python", "django", "documentation", "python-sphinx" ]
### Question background: I want to write documentation with Sphinx for my Django project and auto-generate docs from my Django code comments. Now I have a Django (1.9) project; the file structure is as below: ``` myproject/ myproject/ __init__.py settings.py urls.py wsgi.py myapp/ migrations/ __init__.py admin.py models.py tests.py views.py docs/ _build/ _static/ _templates/ conf.py index.rst Makefile ``` Then, as you see, I put a `docs` folder which holds a *Sphinx* doc project inside. Now I can edit the `*.rst` files and `build html`. But when I try to `autodoc` the contents, it fails. Below is what I did: First, I added these lines to `docs/conf.py`, ref: <http://stackoverflow.com/a/12969839/2544762>: ``` # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath('..')) os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings' ``` Then I ran [sphinx-apidoc](http://sphinx-doc.org/invocation.html?highlight=apidoc#invocation-of-sphinx-apidoc): ``` sphinx-apidoc -o docs/documentation . ``` After that, in `docs/documentations`, I got some `.rst` files: ``` myproject/ docs/ documentations/ myapp.rst myapp.migrations.rst myproject.rst manage.rst modules.rst ``` After that, I ran `make html` and got the warnings below: ``` sphinx-build -b html -d _build/doctrees . _build/html Running Sphinx v1.3.3 loading translations [zh_CN]... done loading pickled environment... done building [mo]: targets for 0 po files that are out of date building [html]: targets for 3 source files that are out of date updating environment: 2 added, 3 changed, 0 removed reading sources... 
[100%] documentation/modules /home/alfred/app/myproject/docs/documentation/core.rst:25: WARNING: autodoc: failed to import module 'core.models'; the following exception was raised: Traceback (most recent call last): File "/home/alfred/.local/lib/python3.5/site-packages/sphinx/ext/autodoc.py", line 385, in import_object __import__(self.modname) File "/home/alfred/app/myproject/myapp/models.py", line 4, in <module> from django.contrib.auth.models import User, Group File "/home/alfred/.local/lib/python3.5/site-packages/django/contrib/auth/models.py", line 4, in <module> from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager File "/home/alfred/.local/lib/python3.5/site-packages/django/contrib/auth/base_user.py", line 49, in <module> class AbstractBaseUser(models.Model): File "/home/alfred/.local/lib/python3.5/site-packages/django/db/models/base.py", line 94, in __new__ app_config = apps.get_containing_app_config(module) File "/home/alfred/.local/lib/python3.5/site-packages/django/apps/registry.py", line 239, in get_containing_app_config self.check_apps_ready() File "/home/alfred/.local/lib/python3.5/site-packages/django/apps/registry.py", line 124, in check_apps_ready raise AppRegistryNotReady("Apps aren't loaded yet.") django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet. 
/home/alfred/app/myproject/docs/documentation/core.migrations.rst:10: WARNING: autodoc: failed to import module 'core.migrations.0001_initial'; the following exception was raised: Traceback (most recent call last): File "/home/alfred/.local/lib/python3.5/site-packages/sphinx/ext/autodoc.py", line 385, in import_object __import__(self.modname) File "/home/alfred/app/myproject/myapp/migrations/0001_initial.py", line 7, in <module> import django.contrib.auth.models File "/home/alfred/.local/lib/python3.5/site-packages/django/contrib/auth/models.py", line 4, in <module> from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager File "/home/alfred/.local/lib/python3.5/site-packages/django/contrib/auth/base_user.py", line 49, in <module> class AbstractBaseUser(models.Model): File "/home/alfred/.local/lib/python3.5/site-packages/django/db/models/base.py", line 94, in __new__ app_config = apps.get_containing_app_config(module) File "/home/alfred/.local/lib/python3.5/site-packages/django/apps/registry.py", line 239, in get_containing_app_config self.check_apps_ready() File "/home/alfred/.local/lib/python3.5/site-packages/django/apps/registry.py", line 124, in check_apps_ready raise AppRegistryNotReady("Apps aren't loaded yet.") django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet. 
/home/alfred/app/myproject/docs/documentation/myproject.rst:10: WARNING: invalid signature for automodule ('myproject.settings-sample') /home/alfred/app/myproject/docs/documentation/myproject.rst:10: WARNING: don't know which module to import for autodocumenting 'myproject.settings-sample' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name) /home/alfred/app/myproject/docs/documentation/myproject.rst:26: WARNING: autodoc: failed to import module 'myproject.urls'; the following exception was raised: Traceback (most recent call last): File "/home/alfred/.local/lib/python3.5/site-packages/sphinx/ext/autodoc.py", line 385, in import_object __import__(self.modname) File "/home/alfred/app/myproject/myproject/urls.py", line 20, in <module> url(r'^admin/', include(admin.site.urls)), File "/home/alfred/.local/lib/python3.5/site-packages/django/contrib/admin/sites.py", line 303, in urls return self.get_urls(), 'admin', self.name File "/home/alfred/.local/lib/python3.5/site-packages/django/contrib/admin/sites.py", line 258, in get_urls from django.contrib.contenttypes import views as contenttype_views File "/home/alfred/.local/lib/python3.5/site-packages/django/contrib/contenttypes/views.py", line 5, in <module> from django.contrib.contenttypes.models import ContentType File "/home/alfred/.local/lib/python3.5/site-packages/django/contrib/contenttypes/models.py", line 159, in <module> class ContentType(models.Model): File "/home/alfred/.local/lib/python3.5/site-packages/django/db/models/base.py", line 94, in __new__ app_config = apps.get_containing_app_config(module) File "/home/alfred/.local/lib/python3.5/site-packages/django/apps/registry.py", line 239, in get_containing_app_config self.check_apps_ready() File "/home/alfred/.local/lib/python3.5/site-packages/django/apps/registry.py", line 124, in check_apps_ready raise AppRegistryNotReady("Apps aren't loaded yet.") django.core.exceptions.AppRegistryNotReady: Apps aren't 
loaded yet. looking for now-outdated files... none found pickling environment... done checking consistency... /home/alfred/app/myproject/docs/documentation/modules.rst:: WARNING: document isn't included in any toctree done preparing documents... done writing output... [100%] index generating indices... genindex py-modindex writing additional pages... search copying static files... done copying extra files... done dumping search index in English (code: en) ... done dumping object inventory... done build succeeded, 6 warnings. Build finished. The HTML pages are in _build/html. ``` What did I do wrong? How can I build the document with the django code?
It took a long time to find the solution: in `conf.py`, add the following (so Django's app registry is initialized before autodoc imports your modules): ``` import os import sys import django sys.path.insert(0, os.path.abspath('..')) os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings' django.setup() ```
Replace single instances of a character that is sometimes doubled
34,464,490
15
2015-12-25T16:27:38Z
34,464,504
20
2015-12-25T16:30:07Z
[ "python", "regex", "string", "replace" ]
I have a string with each character being separated by a pipe character (including the `"|"`s themselves), for example: ``` "f|u|n|n|y||b|o|y||a||c|a|t" ``` I would like to replace all `"|"`s which are not next to another `"|"` with nothing, to get the result: ``` "funny|boy|a|cat" ``` I tried using `mytext.replace("|", "")`, but that removes everything and makes one long word.
You could replace the double pipe by something else first to make sure that you can still recognize them after removing the single pipes. And then you replace those back to a pipe: ``` >>> t = "f|u|n|n|y||b|o|y||a||c|a|t" >>> t.replace('||', '|-|').replace('|', '').replace('-', '|') 'funny|boy|a|cat' ``` You should try to choose a replacement value that is a safe temporary value and does not naturally appear in your text. Otherwise you will run into conflicts where that character is replace even though it wasn’t a double pipe originally. So don’t use a dash as above if your text may contain a dash. You can also use multiple characters at once, for example: `'<THIS IS A TEMPORARY PIPE>'`. If you want to avoid this conflict completely, you could also solve this entirely different. For example, you could split the string by the double pipes first and perform a replacement on each substring, ultimately joining them back together: ``` >>> '|'.join([s.replace('|', '') for s in t.split('||')]) 'funny|boy|a|cat' ``` And of course, you could also use regular expressions to replace those pipes that are not followed by another pipe: ``` >>> import re >>> re.sub('\|(?!\|)', '', t) 'funny|boy|a|cat' ```
Replace single instances of a character that is sometimes doubled
34,464,490
15
2015-12-25T16:27:38Z
34,464,508
12
2015-12-25T16:30:35Z
[ "python", "regex", "string", "replace" ]
I have a string with each character being separated by a pipe character (including the `"|"`s themselves), for example: ``` "f|u|n|n|y||b|o|y||a||c|a|t" ``` I would like to replace all `"|"`s which are not next to another `"|"` with nothing, to get the result: ``` "funny|boy|a|cat" ``` I tried using `mytext.replace("|", "")`, but that removes everything and makes one long word.
Use sentinel values: first replace the `||` with `~` to remember where the `||` were. Then remove the remaining `|`s. Finally, replace the `~`s back with `|`. ``` >>> s = "f|u|n|n|y||b|o|y||a||c|a|t" >>> s.replace('||','~').replace('|','').replace('~','|') 'funny|boy|a|cat' ``` --- Another, better way is to use the fact that the kept characters and the pipes almost alternate. The solution is to make them alternate completely... ``` s.replace('||','|||')[::2] ```
Replace single instances of a character that is sometimes doubled
34,464,490
15
2015-12-25T16:27:38Z
34,464,509
9
2015-12-25T16:30:39Z
[ "python", "regex", "string", "replace" ]
I have a string with each character being separated by a pipe character (including the `"|"`s themselves), for example: ``` "f|u|n|n|y||b|o|y||a||c|a|t" ``` I would like to replace all `"|"`s which are not next to another `"|"` with nothing, to get the result: ``` "funny|boy|a|cat" ``` I tried using `mytext.replace("|", "")`, but that removes everything and makes one long word.
You can use a [*positive lookahead*](http://www.regular-expressions.info/lookaround.html) regex to replace the pipes that are followed by an alphabetical character (or the end of the string): ``` >>> import re >>> st = "f|u|n|n|y||b|o|y||a||c|a|t" >>> re.sub(r'\|(?=[a-z]|$)',r'',st) 'funny|boy|a|cat' ```
Replace single instances of a character that is sometimes doubled
34,464,490
15
2015-12-25T16:27:38Z
34,464,512
27
2015-12-25T16:30:49Z
[ "python", "regex", "string", "replace" ]
I have a string with each character being separated by a pipe character (including the `"|"`s themselves), for example: ``` "f|u|n|n|y||b|o|y||a||c|a|t" ``` I would like to replace all `"|"`s which are not next to another `"|"` with nothing, to get the result: ``` "funny|boy|a|cat" ``` I tried using `mytext.replace("|", "")`, but that removes everything and makes one long word.
This can be achieved with a relatively simple regex without having to chain `str.replace`: ``` >>> import re >>> s = "f|u|n|n|y||b|o|y||a||c|a|t" >>> re.sub('\|(?!\|)' , '', s) 'funny|boy|a|cat' ``` Explanation: \|(?!\|) will look for a `|` character which is not followed by another `|` character. (?!foo) means negative lookahead, ensuring that whatever you are matching is not followed by foo.
In py.test, what is the use of conftest.py files?
34,466,027
19
2015-12-25T20:08:20Z
34,520,971
23
2015-12-29T23:53:08Z
[ "python", "testing", "py.test" ]
I recently discovered `py.test`. It seems great. However I feel the documentation could be better. I'm trying to understand what `conftest.py` files are meant to be used for. In my (currently small) test suite I have one `conftest.py` file at the project root. I use it to define the fixtures that I inject into my tests. I have two questions: 1. Is this the correct use of `conftest.py`? Does it have other uses? 2. Can I have more than one `conftest.py` file? When would I want to do that? Examples will be appreciated. More generally, how would you define the purpose and correct use of `conftest.py` file(s) in a py.test test suite?
> Is this the correct use of conftest.py? Yes it is. Fixtures are a common and intended use of `conftest.py`. The fixtures that you define will be shared among all tests in your test suite. However, defining fixtures in the root `conftest.py` might be wasteful and can slow down testing if such fixtures are not used by all tests. > Does it have other uses? Yes it does. * **Fixtures**: Define fixtures for static data used by tests. This data can be accessed by all tests in the suite unless specified otherwise. This could be data as well as helper modules which will be passed to all tests. * **External plugin loading**: conftest.py is used to import external plugins or modules. By defining the following global variable, pytest will load the module and make it available for its tests. Plugins are generally files defined in your project or other modules which might be needed in your tests. You can also load a set of predefined plugins as described [here](https://pytest.org/latest/plugins.html#requiring-loading-plugins-in-a-test-module-or-conftest-file). `pytest_plugins = "someapp.someplugin"` * **Hooks**: You can specify hooks such as setup and teardown methods and much more to improve your tests. For a set of available hooks, read [here](https://pytest.org/latest/writing_plugins.html#well-specified-hooks). Example: ``` def pytest_runtest_setup(item): """ called before ``pytest_runtest_call(item)``. """ #do some stuff ``` * **Test root path**: This is a bit of a hidden feature. By defining `conftest.py` in your root path, you will have `pytest` recognizing your application modules without specifying `PYTHONPATH`. In the background, py.test modifies your `sys.path` by including all submodules which are found from the root path. > Can I have more than one conftest.py file? Yes you can, and it is strongly recommended if your test structure is at all complex. `conftest.py` files have directory scope, therefore creating targeted fixtures and helpers is good practice. 
> When would I want to do that? Examples will be appreciated. Several cases could fit: Creating a set of tools or **hooks** for a particular group of tests **root/mod/conftest.py** ``` def pytest_runtest_setup(item): print("I am mod") #do some stuff ``` A test in root/mod2/test.py will NOT produce "I am mod". Load a set of **fixtures** for some tests but not for others. **root/mod/conftest.py** ``` @pytest.fixture() def fixture(): return "some stuff" ``` **root/mod2/conftest.py** ``` @pytest.fixture() def fixture(): return "some other stuff" ``` **root/mod2/test.py** ``` def test(fixture): print(fixture) ``` Will print "some other stuff". **Override** hooks inherited from the root `conftest.py` **root/mod/conftest.py** ``` def pytest_runtest_setup(item): print("I am mod") #do some stuff ``` **root/conftest.py** ``` def pytest_runtest_setup(item): print("I am root") #do some stuff ``` By running any test inside `root/mod`, only "I am mod" is printed. You can read more about `conftest.py` [here](http://pytest.readthedocs.org/en/latest/plugins.html). **EDIT:** > What if I need plain-old helper functions to be called from a number > of tests in different modules - will they be available to me if I put > them in a conftest.py? Or should I simply put them in a helpers.py > module and import and use it in my test modules? You can use `conftest.py` to define your helpers; however, you should follow common practice: helpers can be used as fixtures, at least in `py.test`. For example, in my tests I have a mocked redis helper which I inject into my tests this way. **root/helper/redis/redis.py** ``` @pytest.fixture def mock_redis(): return MockRedis() ``` **root/tests/stuff/conftest.py** ``` pytest_plugins = "helper.redis.redis" ``` **root/tests/stuff/test.py** ``` def test(mock_redis): print(mock_redis.get('stuff')) ``` This will be a test module that you can freely import in your tests. 
**NOTE** that you could potentially name `redis.py` as `conftest.py` if your module `redis` contains more tests. However that practice is discouraged because of ambiguity. If you want to keep things simple with `conftest.py`, you can put that helper in your root conftest.py and inject it when needed. **root/tests/conftest.py** ``` @pytest.fixture def mock_redis(): return MockRedis() ``` **root/tests/stuff/test.py** ``` def test(mock_redis): print(mock_redis.get('stuff')) ``` Another thing you can do is to write an installable plugin. In that case your helper can be written anywhere, but it needs to define an entry point so it can be installed into your (and other) test setups. See [this](https://pytest.org/latest/writing_plugins.html). If you don't want to use fixtures, you could of course define a simple helper and just use the plain old import wherever it is needed. **root/tests/helper/redis.py** ``` class MockRedis(): # stuff ``` **root/tests/stuff/test.py** ``` from helper.redis import MockRedis def test(): print(MockRedis().get('stuff')) ``` However here you might have problems with the path since the module is not in a child folder of the test. You should be able to overcome this (not tested) by adding an `__init__.py` to your helper package **root/tests/helper/\_\_init\_\_.py** ``` from .redis import MockRedis ``` Or simply adding the helper module to your `PYTHONPATH`.
Using Excel like solver in Python or SQL
34,468,947
3
2015-12-26T06:03:20Z
34,573,807
7
2016-01-03T05:57:12Z
[ "python", "sql", "numpy", "pandas", "solver" ]
Here is a simple calculation that I do in Excel. I will like to know if it can be done python or any other language. ``` Loan amount 7692 Period : 12 months Rate of interest 18 Per Annum The formula in the B2 cell is =A1*18/100/12 The formula in the A2 cells is =A1+B2-C2 ``` The column C is tentative amount the borrower may need to repay each month. All other cells next to C2 simply points to the first installment of 200. After using the solver as shown in the following image, I get the correct installment of 705.20 in the C column. [![excel goal seak](http://i.stack.imgur.com/elmP8.png)](http://i.stack.imgur.com/elmP8.png) I will like to know if this calculation can be done using any scripting language like python (or SQL) Here is how the final version looks like... [![enter image description here](http://i.stack.imgur.com/M0t9I.png)](http://i.stack.imgur.com/M0t9I.png) I tried something like this, but it does not exit the loop and prints all combinations. ``` loan_amount= 7692 interest = 18 months =12 for rg in range(700, 710): for i in range(months): x = loan_amount * interest / 100 / 12 y = loan_amount + x - rg if x < 0: print rg, i exit else: loan_amount = y ```
Well, you can solve it using numerical method (as Excel does), you can solve it with brute force by checking every amount with some step within some range, or you can solve it analytically on a piece of paper. Using the following notation ``` L - initial loan amount = 7692 R - monthly interest rate = 1 + 0.18/12 m - number of months to repay the loan = 12 P - monthly payment to pay the loan in full after m months = unknown ``` [![L_{n}](http://i.stack.imgur.com/hc5r1.png)](http://i.stack.imgur.com/hc5r1.png) is loan amount after the `n`-th month. [![L_{0}](http://i.stack.imgur.com/Fjn3W.png)](http://i.stack.imgur.com/Fjn3W.png) is the initial loan amount (7692). [![L_{m}](http://i.stack.imgur.com/UQGZ9.png)](http://i.stack.imgur.com/UQGZ9.png) is the loan amount after `m` months (0). The main relation between `n`-th and `(n-1)`-th month is: [![L_{n} = L_{n-1} * R - P](http://i.stack.imgur.com/Tx4kh.png)](http://i.stack.imgur.com/Tx4kh.png) So, analytical formula turns out to be: [![P = L * \frac{R^{m}}{\sum_{k=0}^{m-1}R^{k}} = L * R^{m} * \frac{R-1}{R^{m}-1}](http://i.stack.imgur.com/abGdN.png)](http://i.stack.imgur.com/abGdN.png) Now it should be fairly straight-forward to calculate it in any programming language. For the given initial parameters [![R = 1 + \frac{0.18}{12} = 1.015](http://i.stack.imgur.com/9BXBX.png)](http://i.stack.imgur.com/9BXBX.png) [![P = 7692 * 1.015^{12} * \frac{1.015-1}{1.015^{12}-1}\approx 705.2025054](http://i.stack.imgur.com/nATZE.png)](http://i.stack.imgur.com/nATZE.png) --- By the way, if you are modelling how the real bank works, it may be tricky to calculate it correctly to the last cent. The answer that you get from precise analytical formulas like the one above is only approximate. In practice all monthly amounts (both payment and interest) are usually rounded to the cent. With each month there will be some rounding error, which would accumulate and grow. 
Apart from these rounding errors, different months have different numbers of days, and even though payments are the same for each month, the interest is usually calculated for each day of the month, so it varies from month to month. Then there are leap years with an extra day, which also affects the monthly interest.
How to check if all elements in a tuple or list are in another?
34,468,983
5
2015-12-26T06:09:27Z
34,469,004
11
2015-12-26T06:14:26Z
[ "python", "list", "set", "tuples" ]
For example, I want to check every elements in tuple `(1, 2)` are in tuple `(1, 2, 3, 4, 5)`. I don't think use loop is a good way to do it, I think it could be done in one line.
You can use [`set.issubset`](https://docs.python.org/3.5/library/stdtypes.html?highlight=issubset#set.issubset) or [`set.issuperset`](https://docs.python.org/3.5/library/stdtypes.html?highlight=issubset#set.issuperset) to check if every element in one tuple or list is in other. ``` >>> tuple1 = (1, 2) >>> tuple2 = (1, 2, 3, 4, 5) >>> set(tuple1).issubset(tuple2) True >>> set(tuple2).issuperset(tuple1) True ```
Python native coroutines and send()
34,469,060
5
2015-12-26T06:25:15Z
34,469,122
7
2015-12-26T06:37:11Z
[ "python", "async-await", "coroutine" ]
Generator based coroutines have a `send()` method which allow bidirectional communication between the caller and the callee and resumes a yielded generator coroutine from the caller. This is the functionality that turns generators into coroutines. While the new native `async/await` coroutines provide superior support for async I/O, I do not see how to get the equivalent of `send()` with them. The use of `yield` in `async` functions is explicitly forbidden, so native coroutines can return only once using a `return` statement. Although `await` expressions bring new values into a coroutine, those values come from callees, not the caller, and the awaited call is evaluated from the beginning each time, not from where it left off. Is there a way to resume a returned coroutine from where it left off and potentially send in a new value? How can I emulate the techniques in David Beazley's [Curious Course on Coroutines and Concurrency](http://www.dabeaz.com/coroutines/) using native coroutines? The general code pattern I have in mind is something like ``` def myCoroutine(): ... while True: ... ping = yield(pong) ... ``` and in the caller ``` while True: ... buzz = myCoroutineGen.send(bizz) ... ``` --- ### Edit I accepted Kevin's answer but I have noticed that the PEP [says](https://www.python.org/dev/peps/pep-0492/#id63) > Coroutines are based on generators internally, thus they share the implementation. Similarly to generator objects, coroutines have throw() , send() and close() methods. ... > throw() , send() methods for coroutines are used to push values and raise errors into Future-like objects. So apparently native coroutines do have a `send()`? How does it work without `yield` expression to receive the values inside the coroutine?
> Is there a way to resume a returned coroutine from where it left off and potentially send in a new value? No. `async` and `await` are *just* syntactic sugar for `yield from`. When a coroutine returns (with the `return` statement), that's it. The frame is gone. It is not resumable. This is exactly how generators have always worked. For example: ``` def foo(): return (yield) ``` You can do `f = foo(); next(f); f.send(5)`, and you will get back 5. But if you try to `f.send()` again, it does not work, because you already returned from the frame. `f` is no longer a live generator. Now, as for new coroutines, so far as I can tell, it seems yielding and sending is reserved for communication between the event loop and certain basic predicates such as `asyncio.sleep()`. The coroutines yield [`asyncio.Future`](https://docs.python.org/3/library/asyncio-task.html#asyncio.Future) objects up to the event loop, and the event loop sends those same future objects back into the coroutine once the associated operations have been completed (they are typically scheduled via [`call_soon()`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.BaseEventLoop.call_soon) and the other event loop methods). You can yield future objects by awaiting them, but it's not a general-purpose interface like `.send()` was. It is specifically intended for use by the event loop implementation. If you are not implementing an event loop, you probably do not want to be playing around with this. If you *are* implementing an event loop, you need to ask yourself why the perfectly good implementations in `asyncio` are not sufficient for your purposes and explain what *specifically* you are trying to do before we can help you. Please note that `yield from` is not deprecated. If you want coroutines that are not tied to an event loop at all, just use that instead. 
`async` and `await` are [specifically designed for asynchronous programming with event loops](https://www.python.org/dev/peps/pep-0492/#abstract). If that is not what you are doing, then `async` and `await` are the wrong tool to begin with. One more thing: > The use of `yield` in async functions is explicitly forbidden, so native coroutines can return only once using a `return` statement. `await` expressions *do* yield control. `await something()` is entirely analogous to `yield from something()`. They just changed the name so it would be more intuitive to people not familiar with generators. --- For those of you who actually are interested in implementing your own event loop, [here's some example code](https://bitbucket.org/NYKevin/nbtparse/src/e1aec5c6b1691d305a94839791b308835cdadaaf/nbtparse/_utils.py?at=optionally-async&fileviewer=file-view-default#_utils.py-85) showing a (very minimal) implementation. This event loop is extremely stripped down, because it is designed to run certain specially-written coroutines synchronously as if they were normal functions. It does not provide the full range of support you would expect from a real [BaseEventLoop](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.BaseEventLoop) implementation, and is not safe for use with arbitrary coroutines. Ordinarily, I would include the code in my answer, rather than linking to it, but there are copyright concerns and it is not critical to the answer itself.
Linear Regression and Gradient Descent in Scikit learn/Pandas?
34,469,237
4
2015-12-26T06:57:53Z
34,470,001
8
2015-12-26T09:18:05Z
[ "python", "pandas", "machine-learning", "scikit-learn" ]
in coursera course for machine learning <https://share.coursera.org/wiki/index.php/ML:Linear_Regression_with_Multiple_Variables#Gradient_Descent_for_Multiple_Variables>, it says gradient descent should converge. I m using Linear regression from scikit learn. It doesn't provide gradient descent info. I have seen many questions on stackoverflow to implement linear regression with gradient descent. How do we use Linear regression from scikit-learn or pandas in real world? OR Why does scikit-learn or pandas doesn't provide gradient descent info in linear regression output?
Scikit-learn provides two approaches to linear regression: 1) The `LinearRegression` object uses the Ordinary Least Squares solver from scipy, as LR is one of the two classifiers which have a **closed form solution**. Despite what the ML course suggests, you can actually learn this model by just inverting and multiplying some matrices. 2) `SGDClassifier`, which is an implementation of **stochastic gradient descent**, a very generic one where you can choose your penalty terms. To obtain linear regression you choose the loss to be `L2` and the penalty to be `none` (linear regression) or `L2` (Ridge regression). There is no "typical gradient descent" because it is **rarely used** in practice. If you can decompose your loss function into additive terms, then the stochastic approach is known to behave better (thus SGD), and if you can spare enough memory, the OLS method is faster and easier (thus the first solution).
Logging training and validation loss in tensorboard
34,471,563
5
2015-12-26T13:02:27Z
34,474,533
7
2015-12-26T19:31:54Z
[ "python", "tensorflow", "tensorboard" ]
I'm trying to learn how to use tensorflow and tensorboard. I have a test project based on the MNIST neural net tutorial (<https://www.tensorflow.org/versions/master/tutorials/mnist/tf/index.html>). In my code, I construct a node that calculates the fraction of digits in a data set that are correctly classified, like this: ``` correct = tf.nn.in_top_k(self._logits, labels, 1) correct = tf.to_float(correct) accuracy = tf.reduce_mean(correct) ``` Here, `self._logits`is the inference part of the graph, and `labels` is a placeholder that contains the correct labels. Now, what I would like to do is evaluate the accuracy for both the training set and the validation set as training proceeds. I can do this by running the accuracy node twice, with different feed\_dicts: ``` train_acc = tf.run(accuracy, feed_dict={images : training_set.images, labels : training_set.labels}) valid_acc = tf.run(accuracy, feed_dict={images : validation_set.images, labels : validation_set.labels}) ``` This works as intended. I can print the values, and I can see that initially, the two accuracies will both increase, and eventually the validation accuracy will flatten out while the training accuracy keeps increasing. However, I would also like to get graphs of these values in tensorboard, and I can not figure out how to do this. If I simply add a `scalar_summary` to `accuracy`, the logged values will not distinguish between training set and validation set. I also tried creating two identical `accuracy` nodes with different names and running one on the training set and one on the validation set. I then add a `scalar_summary` to each of these nodes. This does give me two graphs in tensorboard, but instead of one graph showing the training set accuracy and one showing the validation set accuracy, they are both showing identical values that do not match either of the ones printed to the terminal. I am probably misunderstanding how to solve this problem. 
What is the recommended way of separately logging the output from a single node for different inputs?
There are several different ways you could achieve this, but you're on the right track with creating different [`tf.scalar_summary()`](https://www.tensorflow.org/versions/master/api_docs/python/train.html#scalar_summary) nodes. Since you must explicitly call [`SummaryWriter.add_summary()`](https://www.tensorflow.org/versions/master/api_docs/python/train.html#SummaryWriter.add_summary) each time you want to log a quantity to the event file, the simplest approach is probably to fetch the appropriate summary node each time you want to get the training or validation accuracy: ``` accuracy = tf.reduce_mean(correct) training_summary = tf.scalar_summary("training_accuracy", accuracy) validation_summary = tf.scalar_summary("validation_accuracy", accuracy) summary_writer = tf.train.SummaryWriter(...) for step in xrange(NUM_STEPS): # Perform a training step.... if step % LOG_PERIOD == 0: # To log training accuracy. train_acc, train_summ = sess.run( [accuracy, training_summary], feed_dict={images : training_set.images, labels : training_set.labels}) summary_writer.add_summary(train_summ, step) # To log validation accuracy. valid_acc, valid_summ = sess.run( [accuracy, validation_summary], feed_dict={images : validation_set.images, labels : validation_set.labels}) summary_writer.add_summary(valid_summ, step) ``` Alternatively, you could create a single summary op whose tag is a [`tf.placeholder(tf.string, [])`](https://www.tensorflow.org/versions/master/api_docs/python/io_ops.html#placeholder) and feed the string `"training_accuracy"` or `"validation_accuracy"` as appropriate.
Use a.any() or a.all()
34,472,814
6
2015-12-26T15:46:56Z
34,473,104
11
2015-12-26T16:22:39Z
[ "python", "numpy" ]
``` x = np.arange(0,2,0.5) valeur = 2*x if valeur <= 0.6: print ("this works") else: print ("valeur is too high") ``` here is the error I get: ``` if valeur <= 0.6: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() ``` I have read several posts about a.any() or a.all() but still can't find a way that really clearly explain how to fix the problem. I see why Python does not like what I wrote but I am not sure how to fix it.
If you take a look at the result of `valeur <= 0.6`, you can see what’s causing this ambiguity: ``` >>> valeur <= 0.6 array([ True, False, False, False], dtype=bool) ``` So the result is another array that has in this case 4 boolean values. Now what should the result be? Should the condition be true when one value is true? Should the condition be true only when all values are true? That’s exactly what `numpy.any` and `numpy.all` do. The former requires at least one true value, the latter requires that all values are true: ``` >>> np.any(valeur <= 0.6) True >>> np.all(valeur <= 0.6) False ```
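As a minimal sketch of how the original condition can be fixed, reusing the asker's exact arrays — pick `np.all` if every element must satisfy the condition, or `np.any` if a single matching element is enough:

```python
import numpy as np

x = np.arange(0, 2, 0.5)
valeur = 2 * x  # array([0., 1., 2., 3.])

# True only if *every* element satisfies the condition:
if np.all(valeur <= 0.6):
    print("this works")
else:
    print("valeur is too high")  # this branch runs: 1.0, 2.0 and 3.0 exceed 0.6

# True if *at least one* element satisfies it:
print(np.any(valeur <= 0.6))  # True, because 0.0 <= 0.6
```

Which of the two you want depends on what the check is supposed to mean for your data; the error message is simply NumPy refusing to guess.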
using pandas and numpy to parametrize stack overflow's number of users and reputation
34,473,089
8
2015-12-26T16:20:57Z
34,474,542
8
2015-12-26T19:33:50Z
[ "python", "numpy", "pandas" ]
I noticed that Stack Overflow's number of users and their reputation follows an interesting distribution. I created a **pandas DF** to see if I could create a **parametric fit**: ``` import pandas as pd import numpy as np soDF = pd.read_excel('scores.xls') print soDF ``` Which returns this: ``` total_rep users 0 1 4364226 1 200 269110 2 500 158824 3 1000 90368 4 2000 48609 5 3000 32604 6 5000 18921 7 10000 8618 8 25000 2802 9 50000 1000 10 100000 334 ``` If I graph this, I obtain the following chart: [![stack overflow users and reputation](http://i.stack.imgur.com/9rbI2.png)](http://i.stack.imgur.com/9rbI2.png) The distribution seems to follow a **[Power Law](https://en.wikipedia.org/wiki/Power_law)**. So to better visualize it, I added the following: ``` soDF['log_total_rep'] = soDF['total_rep'].apply(np.log10) soDF['log_users'] = soDF['users'].apply(np.log10) soDF.plot(x='log_total_rep', y='log_users') ``` Which produced the following: [![Stack Overflow users and reputation follows a power law](http://i.stack.imgur.com/auY11.png)](http://i.stack.imgur.com/auY11.png) Is there an easy way to use pandas to find the best fit to this data? Although the fit looks linear, perhaps a polynomial fit is better since now I am dealing in logarithmic scales.
NumPy has a lot of functions to do fitting. For polynomial fits, we use **numpy.polyfit** ([documentation](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.polyfit.html)). Initialize your dataset: ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt data = [k.split() for k in '''0 1 4364226 1 200 269110 2 500 158824 3 1000 90368 4 2000 48609 5 3000 32604 6 5000 18921 7 10000 8618 8 25000 2802 9 50000 1000 10 100000 334'''.split('\n')] soDF = pd.DataFrame(data, columns=('index', 'total_rep', 'users')) soDF['total_rep'] = pd.to_numeric(soDF['total_rep']) soDF['users'] = pd.to_numeric(soDF['users']) soDF['log_total_rep'] = soDF['total_rep'].apply(np.log10) soDF['log_users'] = soDF['users'].apply(np.log10) soDF.plot(x='log_total_rep', y='log_users') ``` Fit a 2nd-degree polynomial: ``` coefficients = np.polyfit(soDF['log_total_rep'] , soDF['log_users'], 2) print "Coefficients: ", coefficients ``` Next, let's plot the original + fit: ``` polynomial = np.poly1d(coefficients) xp = np.linspace(-2, 6, 100) plt.plot(soDF['log_total_rep'], soDF['log_users'], '.', xp, polynomial(xp), '-') ``` [![polynomial fit](http://i.stack.imgur.com/4vpa3.png)](http://i.stack.imgur.com/4vpa3.png)
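As a sanity check on how `np.polyfit` and `np.poly1d` pair together, here is a small sketch with synthetic data (not the reputation data) where a degree-2 fit recovers the coefficients exactly, so the expected results are unambiguous:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2 * x**2 - 3 * x + 1          # exactly quadratic, so the fit is exact

coeffs = np.polyfit(x, y, 2)      # coefficients, highest power first
poly = np.poly1d(coeffs)          # callable polynomial built from them

print(np.allclose(coeffs, [2.0, -3.0, 1.0]))  # True
print(np.isclose(poly(10.0), 171.0))          # 2*100 - 3*10 + 1 = 171 -> True
```

With the reputation data the fit is of course approximate, but the same `poly(some_log_rep)` call is how you would read predictions off it.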
using pandas and numpy to parametrize stack overflow's number of users and reputation
34,473,089
8
2015-12-26T16:20:57Z
34,475,210
9
2015-12-26T21:02:25Z
[ "python", "numpy", "pandas" ]
I noticed that Stack Overflow's number of users and their reputation follows an interesting distribution. I created a **pandas DF** to see if I could create a **parametric fit**: ``` import pandas as pd import numpy as np soDF = pd.read_excel('scores.xls') print soDF ``` Which returns this: ``` total_rep users 0 1 4364226 1 200 269110 2 500 158824 3 1000 90368 4 2000 48609 5 3000 32604 6 5000 18921 7 10000 8618 8 25000 2802 9 50000 1000 10 100000 334 ``` If I graph this, I obtain the following chart: [![stack overflow users and reputation](http://i.stack.imgur.com/9rbI2.png)](http://i.stack.imgur.com/9rbI2.png) The distribution seems to follow a **[Power Law](https://en.wikipedia.org/wiki/Power_law)**. So to better visualize it, I added the following: ``` soDF['log_total_rep'] = soDF['total_rep'].apply(np.log10) soDF['log_users'] = soDF['users'].apply(np.log10) soDF.plot(x='log_total_rep', y='log_users') ``` Which produced the following: [![Stack Overflow users and reputation follows a power law](http://i.stack.imgur.com/auY11.png)](http://i.stack.imgur.com/auY11.png) Is there an easy way to use pandas to find the best fit to this data? Although the fit looks linear, perhaps a polynomial fit is better since now I am dealing in logarithmic scales.
## `python`, `pandas`, and `scipy`, oh my! The scientific python ecosystem has several complementary libraries. No one library does everything, by design. `pandas` provides tools to manipulate table-like data and timeseries. However, it deliberately doesn't include the type of functionality you're looking for. For fitting statistical distributions, you'd typically use another package such as `scipy.stats`. However, in this case, we don't have the "raw" data (i.e. a long sequence of reputation scores). Instead we have something similar to a histogram. Therefore, we'll need to fit things at a somewhat lower level than `scipy.stats.powerlaw.fit`. --- ## Stand-alone example For the moment, let's drop `pandas` entirely. There aren't any advantages to using it here, and we'd quickly wind up converting the dataframe to other data structures anyway. `pandas` is great, it's just overkill for this situation. As a quick stand-alone example to reproduce your plot: ``` import matplotlib.pyplot as plt total_rep = [1, 200, 500, 1000, 2000, 3000, 5000, 10000, 25000, 50000, 100000] num_users = [4364226, 269110, 158824, 90368, 48609, 32604, 18921, 8618, 2802, 1000, 334] fig, ax = plt.subplots() ax.loglog(total_rep, num_users) ax.set(xlabel='Total Reputation', ylabel='Number of Users', title='Log-Log Plot of Stackoverflow Reputation') plt.show() ``` [![enter image description here](http://i.stack.imgur.com/0Nf4E.png)](http://i.stack.imgur.com/0Nf4E.png) --- ## What does this data represent? Next, we need to know what we're working with. What we've plotted is similar to a histogram, in that it's raw counts of the number of users at a given reputation level. However, note the little "+" beside each bin in the reputation table. This means that, for example, 2802 users have a reputation score of 25000 *or greater*. 
Our data is basically an estimate of the complementary cumulative distribution function (CCDF), in the same sense that a histogram is an estimate of the probability distribution function (PDF). We'll just need to normalize it by the total number of users in our sample to get an estimate of the CCDF. In this case, we can simply divide by the first element of `num_users`. Reputation can never be less than 1, so 1 on the x-axis corresponds to a probability of 1 by definition. (In other cases, we'd need to estimate this number.) As an example: ``` import numpy as np import matplotlib.pyplot as plt total_rep = np.array([1, 200, 500, 1000, 2000, 3000, 5000, 10000, 25000, 50000, 100000]) num_users = np.array([4364226, 269110, 158824, 90368, 48609, 32604, 18921, 8618, 2802, 1000, 334]) ccdf = num_users.astype(float) / num_users.max() fig, ax = plt.subplots() ax.loglog(total_rep, ccdf, color='lightblue', lw=2, marker='o', clip_on=False, zorder=10) ax.set(xlabel='Reputation', title='CCDF of Stackoverflow Reputation', ylabel='Probability that Reputation is Greater than X') plt.show() ``` [![enter image description here](http://i.stack.imgur.com/4ij2R.png)](http://i.stack.imgur.com/4ij2R.png) You might wonder why we're converting things over to a "normalized" version. The simplest answer is that it's more useful. It allows us to say something that isn't directly related to our sample size. Tomorrow, the total number of Stackoverflow users (and the numbers at each reputation level) will be different. However, the total probability that any given user has a particular reputation won't have changed significantly. If we want to predict Jon Skeet's reputation (highest rep. user) when the site hits 5 million registered users, it's much easier to use the probabilities instead of raw counts. ## Naive fit of a power-law distribution Next, let's fit a power-law distribution to the CCDF. 
Again, if we had the "raw" data in the form of a long list of reputation scores, it would be best to use a statistical package to handle this. In particular, `scipy.stats.powerlaw.fit`. However, we don't have the raw data. The CCDF of a power-law distribution takes the form of `ccdf = x**(-a + 1)`. Therefore, we'll fit a line in log-space, and we can get the `a` parameter of the distribution from `a = 1 - slope`. For the moment, let's use `np.polyfit` to fit the line. We'll need to handle the conversion back and forth from log-space by ourselves: ``` import numpy as np import matplotlib.pyplot as plt total_rep = np.array([1, 200, 500, 1000, 2000, 3000, 5000, 10000, 25000, 50000, 100000]) num_users = np.array([4364226, 269110, 158824, 90368, 48609, 32604, 18921, 8618, 2802, 1000, 334]) ccdf = num_users.astype(float) / num_users.max() # Fit a line in log-space logx = np.log(total_rep) logy = np.log(ccdf) params = np.polyfit(logx, logy, 1) est = np.exp(np.polyval(params, logx)) fig, ax = plt.subplots() ax.loglog(total_rep, ccdf, color='lightblue', ls='', marker='o', clip_on=False, zorder=10, label='Observations') ax.plot(total_rep, est, color='salmon', label='Fit', ls='--') ax.set(xlabel='Reputation', title='CCDF of Stackoverflow Reputation', ylabel='Probability that Reputation is Greater than X') plt.show() ``` [![enter image description here](http://i.stack.imgur.com/qnpt5.png)](http://i.stack.imgur.com/qnpt5.png) There's an immediate problem with this fit. Our estimate states that there's a *greater than 1* probability that users will have a reputation of 1. That's not possible. The problem is that we let `polyfit` choose the best-fit y-intercept for our line. If we take a look at `params` in our code above, it's the second number: ``` In [11]: params Out[11]: array([-0.81938338, 1.15955974]) ``` By definition, the y-intercept should be 1. Instead, the best-fit intercept is about `1.16`. We need to fix that number, and only allow the slope to vary in the linear fit. 
## Fixing the y-intercept in the fit First off, note that `log(1) --> 0`. Therefore, we actually want to force the y-intercept in log-space to be 0 instead of 1. It's easiest to do this using `np.linalg.lstsq` to solve for things instead of `np.polyfit`. At any rate, you'd do something similar to: ``` import numpy as np import matplotlib.pyplot as plt total_rep = np.array([1, 200, 500, 1000, 2000, 3000, 5000, 10000, 25000, 50000, 100000]) num_users = np.array([4364226, 269110, 158824, 90368, 48609, 32604, 18921, 8618, 2802, 1000, 334]) ccdf = num_users.astype(float) / num_users.max() # Fit a line with a y-intercept of 1 in log-space logx = np.log(total_rep) logy = np.log(ccdf) slope, _, _, _ = np.linalg.lstsq(logx[:,np.newaxis], logy) params = [slope, 0] est = np.exp(np.polyval(params, logx)) fig, ax = plt.subplots() ax.loglog(total_rep, ccdf, color='lightblue', ls='', marker='o', clip_on=False, zorder=10, label='Observations') ax.plot(total_rep, est, color='salmon', label='Fit', ls='--') ax.set(xlabel='Reputation', title='CCDF of Stackoverflow Reputation', ylabel='Probability that Reputation is Greater than X') plt.show() ``` [![enter image description here](http://i.stack.imgur.com/cLA73.png)](http://i.stack.imgur.com/cLA73.png) Hmmm... Now we have a new problem. Our new line doesn't fit our data very well. This is a common problem with power-law distributions. ## Use only the "tails" in the fit In real-life, observed distributions almost never exactly follow a power-law. However, their "long tails" often do. You can see this quite clearly in this dataset. If we were to exclude the first two data points (low-reputation/high-probability), we'd get a very different line and it would be a much better fit to the remaining data. The fact that only the tail of the distribution follows a power-law explains why we weren't able to fit our data very well when we fixed the y-intercept. 
There are a lot of different modified power-law models for what happens near a probability of 1, but they all follow a power-law to the right of some cutoff value. Based on our observed data, it looks like we could fit two lines: One to the right of a reputation of ~1000 and one to the left. With that in mind, let's forget about the left-hand side of things and focus on the "long tail" on the right. We'll use `np.polyfit` but exclude the left-most three points from the fit. ``` import numpy as np import matplotlib.pyplot as plt total_rep = np.array([1, 200, 500, 1000, 2000, 3000, 5000, 10000, 25000, 50000, 100000]) num_users = np.array([4364226, 269110, 158824, 90368, 48609, 32604, 18921, 8618, 2802, 1000, 334]) ccdf = num_users.astype(float) / num_users.max() # Fit a line in log-space, excluding reputation <= 1000 logx = np.log(total_rep[total_rep > 1000]) logy = np.log(ccdf[total_rep > 1000]) params = np.polyfit(logx, logy, 1) est = np.exp(np.polyval(params, logx)) fig, ax = plt.subplots() ax.loglog(total_rep, ccdf, color='lightblue', ls='', marker='o', clip_on=False, zorder=10, label='Observations') ax.plot(total_rep[total_rep > 1000], est, color='salmon', label='Fit', ls='--') ax.set(xlabel='Reputation', title='CCDF of Stackoverflow Reputation', ylabel='Probability that Reputation is Greater than X') plt.show() ``` [![enter image description here](http://i.stack.imgur.com/5lGiH.png)](http://i.stack.imgur.com/5lGiH.png) ## Test the different fits In this case, we have some additional data. 
Let's see how well each different fit predicts the top 5 users' reputation: ``` import numpy as np import matplotlib.pyplot as plt total_rep = np.array([1, 200, 500, 1000, 2000, 3000, 5000, 10000, 25000, 50000, 100000]) num_users = np.array([4364226, 269110, 158824, 90368, 48609, 32604, 18921, 8618, 2802, 1000, 334]) top_5_rep = [832131, 632105, 618926, 596889, 576697] top_5_ccdf = np.array([1, 2, 3, 4, 5], dtype=float) / num_users.max() ccdf = num_users.astype(float) / num_users.max() # Previous fits naive_params = [-0.81938338, 1.15955974] fixed_intercept_params = [-0.68845134, 0] long_tail_params = [-1.26172528, 5.24883471] fits = [naive_params, fixed_intercept_params, long_tail_params] fit_names = ['Naive Fit', 'Fixed Intercept Fit', 'Long Tail Fit'] fig, ax = plt.subplots() ax.loglog(total_rep, ccdf, color='lightblue', ls='', marker='o', clip_on=False, zorder=10, label='Observations') # Plot reputation of top 5 users ax.loglog(top_5_rep, top_5_ccdf, ls='', marker='o', color='darkred', zorder=10, label='Top 5 Users') # Plot different fits for params, name in zip(fits, fit_names): x = [1, 1e7] est = np.exp(np.polyval(params, np.log(x))) ax.loglog(x, est, label=name, ls='--') ax.set(xlabel='Reputation', title='CCDF of Stackoverflow Reputation', ylabel='Probability that Reputation is Greater than X', ylim=[1e-7, 1]) ax.legend() plt.show() ``` [![enter image description here](http://i.stack.imgur.com/Lj7ym.png)](http://i.stack.imgur.com/Lj7ym.png) Wow! They all do a pretty awful job! First off, this is a good reason to use the full series when fitting a distribution instead of just the binned data. However, the root of the problem is that a power-law distribution isn't a very good fit in this case. At first glance, it looks like an exponential distribution might be a better fit, but let's leave that for later. As an example of how badly the different power-law fits over-predict the low-probability observations (i.e. 
the users with the highest rep), let's predict Jon Skeet's reputation with each model: ``` import numpy as np # Jon Skeet's actual reputation skeet_prob = 1.0 / 4364226 true_rep = 832131 # Previous fits naive_params = [-0.81938338, 1.15955974] fixed_intercept_params = [-0.68845134, 0] long_tail_params = [-1.26172528, 5.24883471] fits = [naive_params, fixed_intercept_params, long_tail_params] fit_names = ['Naive Fit', 'Fixed Intercept Fit', 'Long Tail Fit'] for params, name in zip(fits, fit_names): inv_params = [1 / params[0], -params[1]/params[0]] est = np.exp(np.polyval(inv_params, np.log(skeet_prob))) print '{}:'.format(name) print ' Pred. Rep.: {}'.format(est) print '' print 'True Reputation: {}'.format(true_rep) ``` This yields: ``` Naive Fit: Pred. Rep.: 522562573.099 Fixed Intercept Fit: Pred. Rep.: 4412664023.88 Long Tail Fit: Pred. Rep.: 11728612.2783 True Reputation: 832131 ```
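As a hedged sketch of how the exponential alternative mentioned above could be set up: for an exponential distribution, `ccdf = exp(-lam * x)`, so `log(ccdf)` is linear in `x` itself rather than in `log(x)`. No claim is made here about the quality of the resulting fit — this only shows the mechanics:

```python
import numpy as np

total_rep = np.array([1, 200, 500, 1000, 2000, 3000, 5000, 10000, 25000, 50000, 100000])
num_users = np.array([4364226, 269110, 158824, 90368, 48609, 32604, 18921, 8618, 2802, 1000, 334])
ccdf = num_users.astype(float) / num_users.max()

# For an exponential CCDF, log(ccdf) = -lam * x, so fit log(ccdf) against x directly.
slope, intercept = np.polyfit(total_rep, np.log(ccdf), 1)
lam = -slope
print(lam > 0)  # True: the fitted CCDF decays with increasing reputation
```

Evaluating this fit against the top-5 holdout data, exactly as done above for the power-law variants, would be the honest way to decide whether the exponential family actually does any better.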
Remove items from list by using python list comprehensions
34,473,834
9
2015-12-26T18:01:25Z
34,473,912
7
2015-12-26T18:11:01Z
[ "python", "list" ]
I have a list of integers which goes like this: ``` unculledlist = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29] ``` I would like to cull the values from this list, so that it looks like this: ``` culledlist = [0, 2, 4, 10, 12, 14, 20, 22, 24] ``` But I would like to do this by using list comprehensions. This is a graphical preview of how I am trying to cull the list values. It's easier to understand if I arrange the list values into rows and columns. But this is only visual. I do not need nested lists: [![enter image description here](http://i.stack.imgur.com/WcxIy.jpg)](http://i.stack.imgur.com/WcxIy.jpg) I can do it by using two nested loops: ``` unculledlist = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29] index = 0 culledlist = [] for i in range(6): for j in range(5): if (i % 2 == 0) and (j % 2 == 0): culledlist.append(unculledlist[index]) index += 1 print "culledlist: ", culledlist # culledlist = [0, 2, 4, 10, 12, 14, 20, 22, 24] ``` But I would like to do it with python list comprehensions instead. Can anyone provide an example please? Thank you. EDIT: The reason why I would like to use list comprehensions is because my actual `unculledlist` has a couple of million integers. Solving this issue with list comprehensions will definitely speed things up. I do not care about readability. I just want to make a quicker solution. I cannot use the numpy or scipy modules. But I can use the `itertools` module. Not sure if a solution with itertools would be quicker than the one with list comprehensions? Or even `lambda`?
I saw this and thought string manipulation would be the easier approach ``` culled_list = [item for item in unculledlist if str(item)[-1] in ['0','2','4']] ``` The result is still a list of integers ``` >>> culled_list [0, 2, 4, 10, 12, 14, 20, 22, 24] ``` Thanks to eugene y for the less complicated approach ``` >>> culled_list = [item for item in unculledlist if item % 10 in (0,2,4)] >>> culled_list [0, 2, 4, 10, 12, 14, 20, 22, 24] ```
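Since the question also mentions `itertools`, here is one more way, assuming the same 5-wide row layout shown in the diagram. `itertools.compress` selects by *position* rather than by value, so unlike the digit-based tests above it also works when the list values aren't consecutive integers:

```python
from itertools import compress, cycle

unculledlist = list(range(30))

# Over any two consecutive 5-wide rows (10 items), the keep/drop pattern
# repeats: keep columns 0, 2, 4 of the even row, drop the whole odd row.
mask = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0]
culledlist = list(compress(unculledlist, cycle(mask)))
print(culledlist)  # [0, 2, 4, 10, 12, 14, 20, 22, 24]
```

Because `compress` and `cycle` are lazy, this streams over the million-element input without building any intermediate lists beyond the result.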
How comparator works for objects that are not comparable in python?
34,474,538
7
2015-12-26T19:33:30Z
34,474,650
10
2015-12-26T19:46:32Z
[ "python" ]
I have defined a list as below: ``` list = [1,3,2,[4,5,6]] ``` then defined a comparator method as below: ``` def reverseCom(x,y): if(x>y): return -1 elif(x<y): return 1 else: return 0 ``` Now I have sorted the list using reverseCom: ``` list.sort(reverseCom) print list ``` > Result : [[4, 5, 6], 3, 2, 1] Even though the element [4, 5, 6] is not comparable with the other elements of the list, how is it not throwing any error? Can you help me understand how sort works with a user-defined comparator in Python?
This is a Python 2 quirk. In Python 2, numeric and non-numeric values are comparable, and numeric values are always considered to be less than the value of container objects: ``` >>> 1 < [1] True >>> 1 < [2] True >>> 1558 < [1] True >>> 1 < {} True ``` when comparing two container values of different types, on the other hand, it is the *name of their type* that is taken into consideration: ``` >>> () < [] False >>> 'tuple' < 'list' False >>> {} < [] True >>> 'dict' < 'list' True ``` This feature, however, has been dropped in Python 3, which made numeric and non-numeric values no longer comparable: ``` >>> 1 < [1] Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unorderable types: int() < list() ``` **EDIT:** this next explanation is fully experimentation-based, and I couldn't find sound documentation to back it up. If anyone does find it, I'd be glad to read through it. It appears Python 2 has even more rules when it comes to comparison of user-defined objects/non-container objects. In this case it appears that numeric values are always *greater* than non-numeric non-container values. ``` >>> class A: pass ... >>> a = A() >>> 1 > a True >>> 2.7 > a True ``` Now, when comparing two objects of different, non-numeric, non-container types, it seems that it is their *address* that is taken into account: ``` >>> class A: pass ... >>> class B: pass ... >>> a = A() >>> a <__main__.A instance at 0x0000000002265348> >>> b = B() >>> b <__main__.B instance at 0x0000000002265048> >>> a < b False >>> b < a True ``` Which is really bananas, if you ask me. Of course, all that can be changed around if you care to override the `__lt__()` and `__gt__()` methods inside your class definition, which determine the standard behavior of the `<` and `>` operators. Further documentation on how these methods operate can be [found here](https://docs.python.org/2/reference/datamodel.html). 
**Bottom line:** avoid comparison between different types as much as you can. The result is really unpredictable, unintuitive and not all that well documented. Also, use Python 3 whenever possible.
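As a sketch of the override mentioned above — defining `__lt__` and `__eq__` (and letting `functools.total_ordering` derive the remaining rich comparisons) makes comparisons between your own objects explicit instead of relying on the Python 2 fallback rules:

```python
from functools import total_ordering

@total_ordering
class Rank(object):
    """Objects that compare by their .value attribute, explicitly."""
    def __init__(self, value):
        self.value = value

    def __eq__(self, other):
        return self.value == other.value

    def __lt__(self, other):
        return self.value < other.value

ranks = sorted([Rank(3), Rank(1), Rank(2)])
print([r.value for r in ranks])  # [1, 2, 3]
print(Rank(1) < Rank(2))         # True
```

With explicit comparison methods, sorting a list of `Rank` objects is well-defined on both Python 2 and 3, and no interpreter-specific fallback ordering is involved.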
Need to install urllib2 for Python 3.5.1
34,475,051
6
2015-12-26T20:40:15Z
34,475,081
13
2015-12-26T20:44:04Z
[ "python", "urllib2" ]
I'm running Python 3.5.1 for Mac. I want to use urllib2. I tried installing that but I'm told that it's been split into urllib.request and urllib.error for Python 3. My command (running from the framework bin directory for now because it's not in my path): ``` sudo ./pip3 install urllib.request ``` Returns: ``` Could not find a version that satisfies the requirement urllib.request (from versions: ) No matching distribution found for urllib.request ``` I got the same error before when I tried to install urllib2 in one fell swoop.
You can't, and you don't need to. `urllib2` is the name of the library included in Python 2. You can use the [`urllib.request` library](https://docs.python.org/3/library/urllib.request.html#module-urllib.request) included with Python 3, instead. The `urllib.request` library works the same way `urllib2` works in Python 2. Because it is *already included* you don't need to install it. If you are following a tutorial that tells you to use `urllib2` then you'll find you'll run into more issues. Your tutorial was written for Python 2, not Python 3. Find a different tutorial, or install Python 2.7 and continue your tutorial on that version. You'll find `urllib2` comes with that version. Alternatively, install the [`requests` library](https://pypi.python.org/pypi/requests) for a higher-level and easier to use API. It'll work on both Python 2 and 3.
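For illustration, a minimal Python 3 sketch of the `urllib2`-style pattern using `urllib.request`. The URL here is just a placeholder, and the part that performs a real network request is left commented out so the example stays inert:

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

# Build a request object, optionally with custom headers,
# just as you would with urllib2.Request in Python 2.
req = Request('http://example.com/', headers={'User-Agent': 'demo-script'})
print(req.get_full_url())  # http://example.com/

# Actually fetching the page would look like:
# try:
#     with urlopen(req) as response:
#         body = response.read()
# except HTTPError as err:
#     print('Server returned an error:', err.code)
# except URLError as err:
#     print('Failed to reach the server:', err.reason)
```

The exception classes moved from the single `urllib2` module into `urllib.error`, but otherwise the shape of the code is the same as the Python 2 version.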
import local function from a module housed in another directory with relative imports in jupyter notebook using python3
34,478,398
7
2015-12-27T07:14:20Z
35,273,613
9
2016-02-08T15:48:01Z
[ "python", "jupyter-notebook", "relative-import" ]
I have a directory structure similar to the following ``` meta_project project1 __init__.py lib module.py __init__.py notebook_folder notebook.ipynb ``` When working in `notebook.ipynb` if I try to use a relative import to access a function `function()` in `module.py` with: ``` from ..project1.lib.module import function ``` I get the following error ``` SystemError Traceback (most recent call last) <ipython-input-7-6393744d93ab> in <module>() ----> 1 from ..project1.lib.module import function SystemError: Parent module '' not loaded, cannot perform relative import ``` Is there any way to get this to work using relative imports? Note, the notebook server is instantiated at the level of the `meta_project` directory, so it should have access to the information in those files. Note, also, that at least as originally intended `project1` wasn't thought of as a module and therefore does not have an `__init__.py` file, it was just meant as a file-system directory. If the solution to the problem requires treating it as a module and including an `__init__.py` file (even a blank one) that is fine, but doing so is not enough to solve the problem. I share this directory between machines and relative imports allow me to use the same code everywhere, & I often use notebooks for quick prototyping, so suggestions that involve hacking together absolute paths are unlikely to be helpful. --- Edit: This is unlike [Relative imports in Python 3](http://stackoverflow.com/questions/16981921/relative-imports-in-python-3), which talks about relative imports in Python 3 in general and – in particular – running a script from within a package directory. This is about working within a Jupyter notebook and trying to call a function in a local module in another directory, which differs in both general and particular aspects.
I had almost the same example as you in [this notebook](https://github.com/qPRC/qPRC/blob/master/notebook/qPRC.ipynb) where I wanted to illustrate the usage of an adjacent module's function in a DRY manner. My solution was to tell Python about that additional module import path by adding a snippet like this one to the notebook: ``` import os import sys module_path = os.path.abspath(os.path.join('..')) if module_path not in sys.path: sys.path.append(module_path) ``` This allows you to import the desired function from the module hierarchy: ``` from project1.lib.module import function # use the function normally function(...) ``` Note that it is necessary to add empty `__init__.py` files to the *project1/* and *lib/* folders if you don't have them already.
Example program of Cython as Python to C Converter
34,480,973
9
2015-12-27T13:40:08Z
34,482,471
10
2015-12-27T16:38:02Z
[ "python", "c", "cython" ]
I found [here](http://stackoverflow.com/questions/7112812/use-cython-as-python-to-c-converter) and [here](http://stackoverflow.com/questions/4650243/convert-python-program-to-c-c-code) that one can use Cython to convert Python to C, but I cannot find any step-by-step example. Let's say I have a simple function: **foo.pyx** ``` cdef void foo(double* x): x[0] = 0.0 ``` **setup.py** ``` from distutils.core import setup from Cython.Build import cythonize setup( ext_modules = cythonize("foo.pyx") ) ``` then I run: **python setup.py build\_ext --inplace** to get foo.c and foo.so files (and a build directory). Well, I would like to use the translated (I hope) `foo` function in main.c. What should I put into the main.c file, and how do I compile it in order to be able to use the `foo` function? I am using gcc.
Far from a C expert, but for me, using Ubuntu, the following works: main.c: ``` #include "foo_api.h" #include <stdio.h> int main(int argc, char *argv[]) { Py_Initialize(); initfoo(); import_foo(); double arr[5] = {1,2,3,4,5}; int i = 0; foo(arr); for(i = 0; i < 5; i++) { printf("%f\n", arr[i]); } Py_Finalize(); return 0; } ``` foo.pyx: ``` cdef public api foo(double* x): x[0] = 0.0 ``` From the same directory: ``` $ cython foo.pyx ``` Then: ``` $ cc -I/usr/include/python2.7 -I/usr/include/x86_64-linux-gnu/python2.7 -o foo *.c -lpython2.7 ``` Then just run. ``` $ ./foo 0.000000 2.000000 3.000000 4.000000 5.000000 ``` I used `pkg-config --cflags python` to get the flags: ``` $ pkg-config --cflags python -I/usr/include/python2.7 -I/usr/include/x86_64-linux-gnu/python2.7 ``` Without calling [Py\_Initialize](https://docs.python.org/2/c-api/init.html#c.Py_Initialize) (*Initialize the Python interpreter. In an application embedding Python, this should be called before using any other Python/C API functions;*), you will get: ``` Fatal Python error: PyThreadState_Get: no current thread Aborted (core dumped) ``` Without `initfoo()` or `import_foo()` you get a: ``` Segmentation fault (core dumped) ``` If you don't call [Py\_Finalize](https://docs.python.org/2/c-api/init.html#c.Py_Finalize): `Py_Initialize` is *a no-op when called for a second time (without calling Py\_Finalize() first).* To get the [delorean](http://docs.cython.org/src/userguide/external_C_code.html#c-api-declarations) example from the docs to run: main.c: ``` #include "delorean_api.h" #include <stdio.h> Vehicle car; int main(int argc, char *argv[]) { Py_Initialize(); initdelorean(); import_delorean(); car.speed = atoi(argv[1]); car.power = atof(argv[2]); activate(&car); Py_Finalize(); return 0; } ``` delorean.pyx: ``` ctypedef public struct Vehicle: int speed float power cdef api void activate(Vehicle *v): if v.speed >= 88 and v.power >= 1.21: print "Time travel achieved" else: print("Sorry Marty") ``` The 
procedure is the same; the only change was that I had to use `ctypedef` with the Vehicle struct, or else use `struct Vehicle car;` in main: ``` $ cython delorean.pyx $ cc -I/usr/include/python2.7 -I/usr/include/x86_64-linux-gnu/python2.7 -o delorean *.c -lpython2.7 $ ./delorean 1 1 Sorry Marty $ ./delorean 100 2 Time travel achieved ``` You can also get it to work without using `Py_Initialize` etc... In `foo.pyx` you just need to make the function public: ``` cdef public foo(double* x): x[0] = 0.0 ``` I added `#include <python2.7/Python.h>`, imported `foo.h` in main.c, and removed `Py_Initialize();` etc. Just importing `Python.h` would not work for me, but that may not be the case for everyone. ``` #include <python2.7/Python.h> #include "foo.h" #include <stdio.h> int main(int argc, char *argv[]) { double arr[5] = {1,2,3,4,5}; int i = 0; foo(arr); for(i = 0; i < 5; i++) { printf("%f\n", arr[i]); } return 0; } ``` Compiling was the same: ``` $ cython foo.pyx $ cc -I/usr/include/python2.7 -I/usr/include/x86_64-linux-gnu/python2.7 -o foo *.c -lpython2.7 $ ./foo 0.000000 2.000000 3.000000 4.000000 5.000000 ``` If you are using the api version then just include the api header, or vice versa, as per the docs: *However, note that you should include either modulename.h or modulename\_api.h in a given C file, not both, otherwise you may get conflicting dual definitions.* To do the same with the delorean example I had to use `libc.stdio` to print the strings to avoid a segmentation fault: ``` from libc.stdio cimport printf ctypedef public struct Vehicle: int speed float power cdef public void activate(Vehicle *v): if v.speed >= 88 and v.power >= 1.21: printf("Time travel achieved\n") else: printf("Sorry Marty\n") ``` main: ``` #include <python2.7/Python.h> #include <stdio.h> #include "delorean.h" Vehicle car; int main(int argc, char *argv[]) { car.speed = atoi(argv[1]); car.power = atof(argv[2]); activate(&car); return 0; } ``` It might make more sense to 
return the values: ``` ctypedef public struct Vehicle: int speed float power cdef public char* activate(Vehicle *v): if v.speed >= 88 and v.power >= 1.21: return "Time travel achieved" return "Sorry Marty" ``` main: ``` #include <python2.7/Python.h> #include <stdio.h> #include "delorean.h" Vehicle car; int main(int argc, char *argv[]) { car.speed = atoi(argv[1]); car.power = atof(argv[2]); printf("%s\n",activate(&car)); return 0; } ```
Memory-efficient way to generate a large numpy array containing random boolean values
34,485,591
16
2015-12-27T22:34:44Z
34,485,956
12
2015-12-27T23:24:28Z
[ "python", "performance", "numpy", "random", "boolean" ]
I need to create a large numpy array containing random boolean values without hitting the swap. My laptop has 8 GB of RAM. Creating a `(1200, 2e6)` array takes less than 2 s and uses 2.29 GB of RAM: ``` >>> dd = np.ones((1200, int(2e6)), dtype=bool) >>> dd.nbytes/1024./1024 2288.818359375 >>> dd.shape (1200, 2000000) ``` For a relatively small `(1200, 400e3)`, `np.random.randint` is still quite fast, taking roughly 5 s to generate a 458 MB array: ``` db = np.array(np.random.randint(2, size=(int(400e3), 1200)), dtype=bool) print db.nbytes/1024./1024., 'Mb' ``` But if I double the size of the array to `(1200, 800e3)` I hit the swap, and it takes ~2.7 min to create `db` ;( ``` cmd = """ import numpy as np db = np.array(np.random.randint(2, size=(int(800e3), 1200)), dtype=bool) print db.nbytes/1024./1024., 'Mb'""" print timeit.Timer(cmd).timeit(1) ``` Using `random.getrandbits` takes even longer (~8 min), and also uses the swap: ``` from random import getrandbits db = np.array([not getrandbits(1) for x in xrange(int(1200*800e3))], dtype=bool) ``` Using `np.random.randint` for a `(1200, 2e6)` array just gives a `MemoryError`. Is there a more efficient way to create a `(1200, 2e6)` random boolean array?
One problem with using `np.random.randint` is that it generates 64-bit integers, whereas numpy's `np.bool` dtype uses only 8 bits to represent each boolean value. You are therefore allocating an intermediate array 8x larger than necessary. A workaround that avoids intermediate 64-bit dtypes is to generate a string of random bytes using [`np.random.bytes`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.random.bytes.html), which can be converted to an array of 8-bit integers using [`np.fromstring`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.fromstring.html). These integers can then be converted to boolean values, for example by testing whether they are less than *255 \* p*, where *p* is the desired probability of each element being `True`: ``` import numpy as np def random_bool(shape, p=0.5): n = np.prod(shape) x = np.fromstring(np.random.bytes(n), np.uint8, n) return (x < 255 * p).reshape(shape) ``` Benchmark: ``` In [1]: shape = 1200, int(2E6) In [2]: %timeit random_bool(shape) 1 loops, best of 3: 12.7 s per loop ``` One important caveat is that the probability will be rounded down to the nearest multiple of 1/256 (for an exact multiple of 1/256 such as p=1/2 this should not affect accuracy). --- ### Update: An even faster method is to exploit the fact that you only need to generate a single random bit per 0 or 1 in your output array. You can therefore create a random array of 8-bit integers 1/8th the size of the final output, then convert it to `np.bool` using [`np.unpackbits`](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.unpackbits.html): ``` def fast_random_bool(shape): n = np.prod(shape) nb = -(-n // 8) # ceiling division b = np.fromstring(np.random.bytes(nb), np.uint8, nb) return np.unpackbits(b)[:n].reshape(shape).view(np.bool) ``` For example: ``` In [3]: %timeit fast_random_bool(shape) 1 loops, best of 3: 5.54 s per loop ```
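On newer NumPy versions `np.fromstring` is deprecated for binary data; a minimal sketch of the same `unpackbits` trick using `np.frombuffer` instead (my variant, not part of the original answer) would be:

```python
import numpy as np

def fast_random_bool_v2(shape):
    # draw one random *bit* per output element from a string of random bytes
    n = int(np.prod(shape))
    nb = -(-n // 8)  # ceiling division: number of bytes holding n bits
    b = np.frombuffer(np.random.bytes(nb), dtype=np.uint8)
    # unpackbits yields a uint8 array of 0s and 1s; reinterpret it as booleans
    return np.unpackbits(b)[:n].reshape(shape).view(bool)

mask = fast_random_bool_v2((1200, 1000))
```

The memory profile is the same as the `fast_random_bool` in the answer; only the byte-to-array conversion differs.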
function of difference between value
34,485,990
5
2015-12-27T23:29:38Z
34,486,002
8
2015-12-27T23:31:49Z
[ "python", "list", "python-2.7", "python-3.x" ]
Is there a function in Python to get the difference between two or more values in a list? So, in those two lists: ``` list1 = [1, 5, 3, 7] list2 = [4, 2, 6, 4] ``` I need to calculate the difference between every value in list1 and list2. ``` for i in list1: for ii in list2: print i -ii ``` This gives negative values, but I want the element-wise difference between the two lists, always subtracting the smaller value from the larger one so that I don't get negative values. For the above lists, I expect the output to be `[3, 3, 3, 3]`. Thanks.
Either `zip` the lists, or use `numpy`: ``` >>> list1 = [1, 5, 3, 7] >>> list2 = [4, 2, 6, 4] >>> [a-b for a,b in zip(list1, list2)] [-3, 3, -3, 3] >>> import numpy as np >>> np.array(list1) - np.array(list2) array([-3, 3, -3, 3]) ``` Remember to cast the array back to a list as needed. # **edit:** In response to the new requirement that the absolute values are needed: you can add `abs` in the list comprehension: ``` >>> [abs(a-b) for a,b in zip(list1, list2)] [3, 3, 3, 3] ``` and the `numpy` solution would change to: ``` >>> map(abs, np.array(list1) - np.array(list2)) [3, 3, 3, 3] ```
function of difference between value
34,485,990
5
2015-12-27T23:29:38Z
34,486,121
10
2015-12-27T23:51:45Z
[ "python", "list", "python-2.7", "python-3.x" ]
Is there a function in Python to get the difference between two or more values in a list? So, in those two lists: ``` list1 = [1, 5, 3, 7] list2 = [4, 2, 6, 4] ``` I need to calculate the difference between every value in list1 and list2. ``` for i in list1: for ii in list2: print i -ii ``` This gives negative values, but I want the element-wise difference between the two lists, always subtracting the smaller value from the larger one so that I don't get negative values. For the above lists, I expect the output to be `[3, 3, 3, 3]`. Thanks.
Assuming you expect `[3, 3, 3, 3]` as the answer in your question, you can use `abs` and `zip`: ``` [abs(i-j) for i,j in zip(list1, list2)] ```
What is the currently correct way to dynamically update plots in Jupyter/iPython?
34,486,642
21
2015-12-28T01:21:11Z
34,486,703
17
2015-12-28T01:33:30Z
[ "python", "matplotlib", "jupyter", "jupyter-notebook" ]
In the answers to [how to dynamically update a plot in a loop in ipython notebook (within one cell)](http://stackoverflow.com/questions/21360361/how-to-dynamically-update-a-plot-in-a-loop-in-ipython-notebook-within-one-cell), an example is given of how to dynamically update a plot inside a Jupyter notebook within a Python loop. However, this works by destroying and re-creating the plot on every iteration, and a comment in one of the threads notes that this situation can be improved by using the new-ish `%matplotlib nbagg` magic, which provides an interactive figure embedded in the notebook, rather than a static image. However, this wonderful new `nbagg` feature seems to be completely undocumented as far as I can tell, and I'm unable to find an example of how to use it to dynamically update a plot. Thus my question is, **how does one efficiently update an existing plot in a Jupyter/Python notebook, using the nbagg backend?** Since dynamically updating plots in matplotlib is a tricky issue in general, a simple working example would be an enormous help. A pointer to any documentation on the topic would also be extremely helpful. To be clear what I'm asking for: what I want to do is to run some simulation code for a few iterations, then draw a plot of its current state, then run it for a few more iterations, then update the plot to reflect the current state, and so on. So the idea is to draw a plot and then, without any interaction from the user, update the data in the plot without destroying and re-creating the whole thing. Here is some slightly modified code from the answer to the linked question above, which achieves this by re-drawing the whole figure every time. I want to achieve the same result, but more efficiently using `nbagg`. ``` %matplotlib inline import time import pylab as pl from IPython import display for i in range(10): pl.clf() pl.plot(pl.randn(100)) display.display(pl.gcf()) display.clear_output(wait=True) time.sleep(1.0) ```
Here is an example that updates a plot in a loop. It updates the data in the figure and does not redraw the whole figure every time. It does block execution, though if you're interested in running a finite set of simulations and saving the results somewhere, it may not be a problem for you. ``` %matplotlib notebook import numpy as np import matplotlib.pyplot as plt import time def pltsin(ax, colors=['b']): x = np.linspace(0,1,100) if ax.lines: for line in ax.lines: line.set_xdata(x) y = np.random.random(size=(100,1)) line.set_ydata(y) else: for color in colors: y = np.random.random(size=(100,1)) ax.plot(x, y, color) fig.canvas.draw() fig,ax = plt.subplots(1,1) ax.set_xlabel('X') ax.set_ylabel('Y') ax.set_xlim(0,1) ax.set_ylim(0,1) for f in range(5): pltsin(ax, ['b', 'r']) time.sleep(1) ``` [I put this up on nbviewer here.](http://nbviewer.ipython.org/gist/branning/c8e63ce81be0391260b1) There is an [IPython Widget version of `nbagg` that is currently a work in progress at the Matplotlib repository](https://github.com/matplotlib/matplotlib/pull/5754). When that is available, that will probably be the best way to use `nbagg`. EDIT: updated to show multiple plots
How to unravel array?
34,489,254
7
2015-12-28T07:20:07Z
34,489,295
10
2015-12-28T07:23:34Z
[ "python", "arrays", "list" ]
I need to generate a `list` for `scipy.optimize.minimize`'s `boundary condition`; it should look like this: ``` bonds = [(0., 0.99),(-30, 30),(-30, 30),(0., 30),(0., 30),(-0.99, 0.99), (0., 0.99),(-30, 30),(-30, 30),(0., 30),(0., 30),(-0.99, 0.99), (0., 0.99),(-30, 30),(-30, 30),(0., 30),(0., 30),(-0.99, 0.99),] ``` I'm wondering if there is any elegant way of doing it? I tried: ``` bonds = [[(0., 0.99),(-30, 30),(-30, 30),(0., 30),(0., 30),(-0.99, 0.99)] for i in range(3)] ``` But this generates ``` [[(0.0, 0.99), (-30, 30), (-30, 30), (0.0, 30), (0.0, 30), (-0.99, 0.99)], [(0.0, 0.99), (-30, 30), (-30, 30), (0.0, 30), (0.0, 30), (-0.99, 0.99)], [(0.0, 0.99), (-30, 30), (-30, 30), (0.0, 30), (0.0, 30), (-0.99, 0.99)]] ``` How can I remove the inner `[]`, to `unravel` the inner arrays into a single one? Or is there any other good way of doing it?
you can do: ``` bonds = [(0., 0.99),(-30, 30),(-30, 30),(0., 30),(0., 30),(-0.99, 0.99)] * 3 ```
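If you already built the nested version from the list comprehension in the question, it can also be flattened after the fact; a sketch using the standard library's `itertools.chain`:

```python
from itertools import chain

nested = [[(0., 0.99), (-30, 30), (-30, 30), (0., 30), (0., 30), (-0.99, 0.99)]
          for _ in range(3)]
# chain.from_iterable strips exactly one level of nesting
bonds = list(chain.from_iterable(nested))
```

This gives the same flat list of 18 tuples as the `* 3` repetition above.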
Why `type(x).__enter__(x)` instead of `x.__enter__()` in Python standard contextlib?
34,490,998
6
2015-12-28T09:37:10Z
34,491,119
11
2015-12-28T09:45:46Z
[ "python" ]
In [contextlib.py](https://hg.python.org/cpython/file/003f1f60a09c/Lib/contextlib.py#l304), I see the ExitStack class is calling `__enter__()` method via the type object (`type(cm)`) instead of direct method calls to the given object (`cm`). I wonder why or why not. e.g., * does it give better exception traces when an error occurs? * is it just specific to some module author's coding style? * does it have any performance benefits? * does it avoid some artifacts/side-effects with complicated type hierarchies?
First of all, this is what happens when you do `with something`, it's not just `contextlib` that looks up special method on the type. Also, it's worth noting that the same happens with other special methods too: e.g. `a + b` results in `type(a).__add__(a, b)`. But why does it happen? This is a question that is often fired up on the python-dev and python-ideas mailing lists. And when I say "often", I mean "very often". The last one were these: [Missing Core Feature: + - \* / | & do not call **getattr**](https://mail.python.org/pipermail/python-ideas/2015-December/037347.html) and [Eliminating special method lookup](https://mail.python.org/pipermail/python-ideas/2015-December/037359.html). Here are some interesting points: > The current behaviour is by design - special methods are looked up as > slots on the object's class, not as instance attributes. This allows > the interpreter to bypass several steps in the normal instance > attribute lookup process. > > [*(Source)*](https://mail.python.org/pipermail/python-ideas/2015-December/037351.html) > It is worth noting that the behavior is even more magical than this. > Even when looked up on the class, implicit special method lookup > bypasses `__getattr__` and `__getattribute__` of the metaclass. So the > special method lookup is not just an ordinary lookup that happens to > start on the class instead of the instance; it is a fully magic lookup > that does not engage the usual attribute-access-customization hooks at > any level. 
> > [*(Source)*](https://mail.python.org/pipermail/python-ideas/2015-December/037354.html) This behavior is also documented on the reference documentation: [Special method lookup](https://docs.python.org/3/reference/datamodel.html#special-method-lookup), which says: > Bypassing the `__getattribute__()` machinery in this fashion provides significant scope for speed optimisations within the interpreter, at the cost of some flexibility in the handling of special methods (the special method must be set on the class object itself in order to be consistently invoked by the interpreter). In short, **performance is the main concern**. But let's take a closer look at this. What's the difference between `type(obj).__enter__()` and `obj.__enter__()`? When you write `obj.attr`, `type(obj).__getattribute__(obj, 'attr')` gets called. The default implementation of `__getattribute__()` looks for `attr` in the instance dictionary (i.e. `obj.__dict__`) and in the class namespace and, failing that, calls `type(obj).__getattr__(obj, 'attr')`. Now, this was a quick explanation and I have omitted some details, however it should give you an idea of how complicated an attribute lookup can be, and how slow it can become. Short circuiting special method lookup surely provides performance improvements, as looking up `obj.__enter__()` in the "classical" way may be too slow.
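A small demonstration of the type-level lookup (a sketch, with a hypothetical `CM` class): the `with` statement ignores an `__enter__` attribute set on the instance, because the lookup happens on the type:

```python
class CM:
    def __enter__(self):
        return "from the class"
    def __exit__(self, *exc):
        return False

cm = CM()
# shadow __enter__ on the instance; a normal attribute lookup would find this
cm.__enter__ = lambda: "from the instance"
with cm as value:
    pass
# value is "from the class": the instance attribute was bypassed by `with`
```

A plain call `cm.__enter__()` would still return `"from the instance"`, which is exactly the discrepancy the special-method lookup rules introduce.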
Reversing lists of numbers in python
34,494,963
7
2015-12-28T14:01:45Z
34,495,024
8
2015-12-28T14:05:31Z
[ "python", "list" ]
So I have been trying to do this for a while now and am constantly coming up with differing failures. I need to take numerical input from the user and put it into a list and output it in decreasing value: ``` bids = [] bid = input('Bid: ') while bid != '': bids.append(bid) bid = input('Bid: ') print('The auction has finished! The bids were:') for bid in bids: bid = int(bid) for bid in reversed(bids): print(bid) ``` So this worked well most of the time, (I have been using the numbers 2,3,4 & 10 as input as I have been having problems where it shows the numbers in decreasing order for the first numeral) but when I type in 16, 30, 24 it displays the numbers as: ``` The auction has finished! The bids were: 24 30 16 ``` Here is another version I have attempted: ``` bids = [] bid = input('Bid: ') while bid != '': bids.append(bid) bid = input('Bid: ') print('The auction has finished! The bids were:') for bid in bids: bid = int(bid) bids.sort([::-1]) #Here is where the problem is at, I don't know #what the correct syntax is for something like that for bid in bids: print(bid) ``` Any help will be much appreciated as I am fairly new to python and am struggling with my course. -Callum
In your ``` bids.append(bid) ``` you get a list of strings. You want to convert them to integers and sort in decreasing order: ``` numerical_bids = [int(bid) for bid in bids] numerical_bids.sort(reverse=True) ``` and now `numerical_bids` is a list of integers sorted in decreasing order. In your code: ``` for bid in bids: bid = int(bid) ``` does nothing. It converts each `bid` to an integer and forgets it immediately. And ``` bids.sort([::-1]) ``` is not how to use the `sort` method; `[::-1]` is slice syntax and is not valid as an argument. Read the docs for [list.sort](https://docs.python.org/3/library/stdtypes.html#list.sort).
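Putting the pieces together, a minimal sketch of the corrected program (the `input()` loop from the question is unchanged, so it is simulated here with a fixed list of strings):

```python
bids = ['16', '30', '24']  # what the input() loop collects: strings
# convert to int and sort in decreasing order in one step
numerical_bids = sorted((int(bid) for bid in bids), reverse=True)

print('The auction has finished! The bids were:')
for bid in numerical_bids:
    print(bid)
```

This prints 30, 24, 16 for the inputs that originally came out in the wrong order.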
DataFrame object has no attribute 'sort_values'
34,499,728
5
2015-12-28T19:44:49Z
34,499,784
10
2015-12-28T19:49:55Z
[ "python", "pandas", "dataframe" ]
``` dataset = pd.read_csv("dataset.csv").fillna(" ")[:100] dataset['Id']=0 dataset['i']=0 dataset['j']=0 #... entries=dataset[dataset['Id']==0] print type(entries) # Prints <class 'pandas.core.frame.DataFrame'> entries=entries.sort_values(['i','j','ColumnA','ColumnB']) ``` What might be the possible reason of the following error message **at the last line**?: ``` AttributeError: 'DataFrame' object has no attribute 'sort_values' ```
`sort_values` is [new in version 0.17.0](http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.sort_values.html), so check your version of pandas. In previous versions you should use `sort`: ``` entries=entries.sort(['i','j','ColumnA','ColumnB']) ```
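For reference, a small sketch of `sort_values` on pandas ≥ 0.17 (with made-up data, since the original `dataset.csv` is not available):

```python
import pandas as pd

entries = pd.DataFrame({
    'i': [2, 1, 1],
    'j': [0, 1, 0],
    'ColumnA': ['b', 'a', 'c'],
    'ColumnB': [1, 2, 3],
})
# sorts by 'i' first, then 'j', with the remaining columns as tie-breakers
entries = entries.sort_values(['i', 'j', 'ColumnA', 'ColumnB'])
```

The rows come back ordered by `i`, then `j`, exactly like the multi-column `sort` call above.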
Dictionary order in python
34,499,794
4
2015-12-28T19:50:40Z
34,499,864
11
2015-12-28T19:56:03Z
[ "python", "dictionary" ]
Everywhere I search, they say python dictionary's doesn't have any order. When I run code 1 each time shows a different output (random order). But when I run code 2 it always shows the same sorted output. Why is the dictionary ordered in the second snippet? ``` #code 1 d = {'one': 1, 'two': 2, 'three': 3, 'four': 4} for a, b in d.items(): print(a, b) #code 2 d = {1: 10, 2: 20, 3: 30, 4: 40} for a, b in d.items(): print(a, b) ``` **Outputs** code 1: ``` four 4 two 2 three 3 one 1 ``` code 1 again: ``` three 3 one 1 two 2 four 4 ``` code 2 (always): ``` 1 10 2 20 3 30 4 40 ```
It's related to how hash randomisation is applied. Quoting [docs](https://docs.python.org/3/reference/datamodel.html#object.__hash__) (emphasis mine): > By default, the `__hash__()` values of **str**, **bytes** and **datetime** > objects are “salted” with an unpredictable random value. Although they > remain constant within an individual Python process, they are not > predictable between repeated invocations of Python. For each subsequent run, your strings (the keys in snippet 1) are hashed with a different salt value - therefore the hash value is also changed. Hash values determine ordering. For the `int` type, the hash function never changes - in fact, for small integers the hash is equal to the integer value itself. ``` assert hash(42) == 42 ``` If the hashing function never changes, there is no change in ordering in subsequent runs. For details on how Python dictionaries are implemented, you may refer to [How are Python's Built In Dictionaries Implemented](http://stackoverflow.com/questions/327311/how-are-pythons-built-in-dictionaries-implemented).
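The integer-hash identity from the answer can be checked directly; note (my addition, not part of the answer) that it only holds for small integers, and that `-1` is a special case in CPython:

```python
import sys

# small non-negative ints hash to themselves
assert hash(42) == 42
assert hash(1) == 1
# -1 is the one exception: CPython's C API uses -1 internally to signal a
# hashing error, so hash(-1) is remapped to -2
assert hash(-1) == -2
# larger ints are reduced modulo sys.hash_info.modulus (2**61 - 1 on
# 64-bit builds), so the identity eventually breaks
assert hash(sys.hash_info.modulus) == 0
```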
tensorflow: saving and restoring session
34,500,052
5
2015-12-28T20:11:24Z
34,500,690
7
2015-12-28T20:59:34Z
[ "python", "scikit-learn", "tensorflow" ]
I am trying to implement a suggestion from answers: [Tensorflow: How to restore a previously saved model (python)](http://stackoverflow.com/questions/33759623/tensorflow-how-to-restore-a-previously-saved-model-python) I have an object which wraps a `tensorflow` model in a `sklearn` style. ``` import tensorflow as tf class tflasso(): saver = tf.train.Saver() def __init__(self, learning_rate = 2e-2, training_epochs = 5000, display_step = 50, BATCH_SIZE = 100, ALPHA = 1e-5, checkpoint_dir = "./", ): ... def _create_network(self): ... def _load_(self, sess, checkpoint_dir = None): if checkpoint_dir: self.checkpoint_dir = checkpoint_dir print("loading a session") ckpt = tf.train.get_checkpoint_state(self.checkpoint_dir) if ckpt and ckpt.model_checkpoint_path: self.saver.restore(sess, ckpt.model_checkpoint_path) else: raise Exception("no checkpoint found") return def fit(self, train_X, train_Y , load = True): self.X = train_X self.xlen = train_X.shape[1] # n_samples = y.shape[0] self._create_network() tot_loss = self._create_loss() optimizer = tf.train.AdagradOptimizer( self.learning_rate).minimize(tot_loss) # Initializing the variables init = tf.initialize_all_variables() " training per se" getb = batchgen( self.BATCH_SIZE) yvar = train_Y.var() print(yvar) # Launch the graph NUM_CORES = 3 # Choose how many cores to use. 
sess_config = tf.ConfigProto(inter_op_parallelism_threads=NUM_CORES, intra_op_parallelism_threads=NUM_CORES) with tf.Session(config= sess_config) as sess: sess.run(init) if load: self._load_(sess) # Fit all training data for epoch in range( self.training_epochs): for (_x_, _y_) in getb(train_X, train_Y): _y_ = np.reshape(_y_, [-1, 1]) sess.run(optimizer, feed_dict={ self.vars.xx: _x_, self.vars.yy: _y_}) # Display logs per epoch step if (1+epoch) % self.display_step == 0: cost = sess.run(tot_loss, feed_dict={ self.vars.xx: train_X, self.vars.yy: np.reshape(train_Y, [-1, 1])}) rsq = 1 - cost / yvar logstr = "Epoch: {:4d}\tcost = {:.4f}\tR^2 = {:.4f}".format((epoch+1), cost, rsq) print(logstr ) self.saver.save(sess, self.checkpoint_dir + 'model.ckpt', global_step= 1+ epoch) print("Optimization Finished!") return self ``` When I run: ``` tfl = tflasso() tfl.fit( train_X, train_Y , load = False) ``` I get output: ``` Epoch: 50 cost = 38.4705 R^2 = -1.2036 b1: 0.118122 Epoch: 100 cost = 26.4506 R^2 = -0.5151 b1: 0.133597 Epoch: 150 cost = 22.4330 R^2 = -0.2850 b1: 0.142261 Epoch: 200 cost = 20.0361 R^2 = -0.1477 b1: 0.147998 ``` However, when I try to recover the parameters (even without killing the object): `tfl.fit( train_X, train_Y , load = True)` I get strange results. First of all, the loaded value does not correspond the saved one. ``` loading a session loaded b1: 0.1 <------- Loaded another value than saved Epoch: 50 cost = 30.8483 R^2 = -0.7670 b1: 0.137484 ``` What is the right way to load, and probably first inspect the saved variables?
**TL;DR:** You should try to rework this class so that `self._create_network()` is called (i) only once, and (ii) before the `tf.train.Saver()` is constructed. There are two subtle issues here, which are due to the code structure, and the default behavior of the [`tf.train.Saver` constructor](https://www.tensorflow.org/versions/master/api_docs/python/state_ops.html#Saver.__init__). When you construct a saver with no arguments (as in your code), it collects the current set of variables in your program, and adds ops to the graph for saving and restoring them. In your code, when you call `tflasso()`, it will construct a saver, and there will be no variables (because `_create_network()` has not yet been called). As a result, the checkpoint should be empty. The second issue is that—by default—the format of a saved checkpoint is a map from the [`name` property of a variable](https://www.tensorflow.org/versions/master/api_docs/python/state_ops.html#Variable.name) to its current value. If you create two variables with the same name, they will be automatically "uniquified" by TensorFlow: ``` v = tf.Variable(..., name="weights") assert v.name == "weights" w = tf.Variable(..., name="weights") assert w.name == "weights_1" # The "_1" is added by TensorFlow. ``` The consequence of this is that, when you call `self._create_network()` in the second call to `tfl.fit()`, the variables will all have different names from the names that are stored in the checkpoint—or would have been if the saver had been constructed after the network. (You can avoid this behavior by passing a name-`Variable` dictionary to the saver constructor, but this is usually quite awkward.) There are two main workarounds: 1. In each call to `tflasso.fit()`, create the whole model afresh, by defining a new `tf.Graph`, then in that graph building the network and creating a `tf.train.Saver`. 2. 
**RECOMMENDED** Create the network, then the `tf.train.Saver` in the `tflasso` constructor, and reuse this graph on each call to `tflasso.fit()`. Note that you might need to do some more work to reorganize things (in particular, I'm not sure what you do with `self.X` and `self.xlen`) but it should be possible to achieve this with [placeholders](https://www.tensorflow.org/versions/master/api_docs/python/io_ops.html#placeholder) and feeding.
Finding that which sums to the smallest value
34,501,487
3
2015-12-28T22:06:49Z
34,501,496
7
2015-12-28T22:07:52Z
[ "python", "python-3.x" ]
Is there a better way of doing the following in python: ``` m = float("inf") for i in ((1,2,3),(1,3,1),(2,2,3),(0,2,2)): r = sum(i) if r < m: best = i m = r print(best) ``` Where I'm trying to find the item in ((1,2,3),(1,3,1),(2,2,3),(0,2,2)) which sums to the smallest value. The following is the best I can come up with: ``` data = ((1,2,3),(1,3,1),(2,2,3),(0,2,2)) sums = tuple(sum(i) for i in data) print(data[sums.index(min(sums))]) ```
Just use the built-in [**`min`**](https://docs.python.org/2/library/functions.html#min) ``` data = ((1,2,3),(1,3,1),(2,2,3),(0,2,2)) print(min(data, key=sum)) ```
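One detail worth knowing (my note, not from the answer): when several items share the smallest sum, `min` returns the first one encountered, so the order of `data` can matter:

```python
data = ((1, 2, 3), (3, 2, 1), (2, 2, 2))
# all three tuples sum to 6; min keeps the first of the tied items
best = min(data, key=sum)
```

If you need all tied minima rather than just the first, compute `m = min(map(sum, data))` and filter on it.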
Download and save PDF file with Python requests module
34,503,412
3
2015-12-29T02:00:58Z
34,503,421
10
2015-12-29T02:02:31Z
[ "python", "python-2.7" ]
I am trying to download a PDF file from a website and save it to disk. My attempts either fail with encoding errors or result in blank PDFs. ``` In [1]: import requests In [2]: url = 'http://www.hrecos.org//images/Data/forweb/HRTVBSH.Metadata.pdf' In [3]: response = requests.get(url) In [4]: with open('/tmp/metadata.pdf', 'wb') as f: ...: f.write(response.text) --------------------------------------------------------------------------- UnicodeEncodeError Traceback (most recent call last) <ipython-input-4-4be915a4f032> in <module>() 1 with open('/tmp/metadata.pdf', 'wb') as f: ----> 2 f.write(response.text) 3 UnicodeEncodeError: 'ascii' codec can't encode characters in position 11-14: ordinal not in range(128) In [5]: import codecs In [6]: with codecs.open('/tmp/metadata.pdf', 'wb', encoding='utf8') as f: ...: f.write(response.text) ...: ``` I know it is a codec problem of some kind but I can't seem to get it to work.
You should use `response.content` in this case: ``` with open('/tmp/metadata.pdf', 'wb') as f: f.write(response.content) ``` From [the document](http://requests.readthedocs.org/en/latest/user/quickstart/#binary-response-content): > You can also access the response body as bytes, for non-text requests: > > ``` > >>> r.content > b'[{"repository":{"open_issues":0,"url":"https://github.com/... > ``` So that means: `response.text` returns the output as a string object; use it when you're downloading a **text file**, such as an HTML file, etc. And `response.content` returns the output as a bytes object; use it when you're downloading a **binary file**, such as a PDF file, audio file, image, etc. --- [You can also use `response.raw` instead](http://requests.readthedocs.org/en/latest/user/quickstart/#raw-response-content). However, use it when the file which you're about to download is large. Below is a basic example which you can also find in the document: ``` import requests url = 'http://www.hrecos.org//images/Data/forweb/HRTVBSH.Metadata.pdf' r = requests.get(url, stream=True) with open('/tmp/metadata.pdf', 'wb') as fd: for chunk in r.iter_content(chunk_size=2000): fd.write(chunk) ``` `chunk_size` is the chunk size which you want to use. If you set it to `2000`, then requests will download the first `2000` bytes of the file, write them to disk, and repeat until the download is finished. This can save your RAM. But I'd prefer to use `response.content` instead in this case, since your file is small. As you can see, using `response.raw` is more complex. --- Related: * [How to download large file in python with requests.py?](https://stackoverflow.com/questions/16694907/how-to-download-large-file-in-python-with-requests-py) * [How to download image using requests](http://stackoverflow.com/questions/13137817/how-to-download-image-using-requests)
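Another common streaming pattern with `response.raw` (not shown in the answer) is `shutil.copyfileobj`, which copies any file-like object to another in chunks; sketched here with in-memory `BytesIO` buffers standing in for the raw response and the output file, so it runs without a network:

```python
import io
import shutil

# stand-in for requests' response.raw (requested with stream=True);
# any file-like object with .read() works with copyfileobj
raw = io.BytesIO(b'%PDF-1.4 fake pdf bytes')
out = io.BytesIO()  # stand-in for open('/tmp/metadata.pdf', 'wb')
shutil.copyfileobj(raw, out)
```

With a real download you would pass `response.raw` and an open binary file instead of the two buffers.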
Best way to hash ordered permutation of [1,9]
34,510,823
3
2015-12-29T12:12:19Z
34,512,030
7
2015-12-29T13:23:35Z
[ "python", "algorithm", "hash", "permutation", "8-puzzle" ]
I'm trying to implement a method to keep the visited states of the 8 puzzle from generating again. My initial approach was to save each visited pattern in a list and do a linear check each time the algorithm wants to generate a child. Now I want to do this in `O(1)` time through list access. Each pattern in 8 puzzle is an ordered permutation of numbers between 1 to 9 (9 being the blank block), for example 125346987 is: > 1 2 5 > 3 4 6 > \_ 8 7 The number of all of the possible permutation of this kind is around 363,000 (9!). what is the best way to hash these numbers to indexes of a list of that size?
You can map a permutation of N items to its index in the list of all permutations of N items (ordered lexicographically). Here's some code that does this, and a demonstration that it produces indexes 0 to 23 once each for all permutations of a 4-letter sequence. ``` import itertools def fact(n): r = 1 for i in xrange(n): r *= i + 1 return r def I(perm): if len(perm) == 1: return 0 return sum(p < perm[0] for p in perm) * fact(len(perm) - 1) + I(perm[1:]) for p in itertools.permutations('abcd'): print p, I(p) ``` The best way to understand the code is to prove its correctness. For an array of length n, there are (n-1)! permutations with the smallest element of the array appearing first, (n-1)! permutations with the second smallest element appearing first, and so on. So, to find the index of a given permutation, count how many items are smaller than the first thing in the permutation and multiply that by (n-1)!. Then recursively add the index of the remainder of the permutation, considered as a permutation of (n-1) elements. The base case is when you have a permutation of length 1. Obviously there's only one such permutation, so its index is 0. A worked example: `[1324]`. * `[1324]`: 1 appears first, and that's the smallest element in the array, so that gives 0 \* (3!) * Removing 1 gives us `[324]`. The first element is 3. There's one element that's smaller, so that gives us 1 \* (2!). * Removing 3 gives us `[24]`. The first element is 2. That's the smallest element remaining, so that gives us 0 \* (1!). * Removing 2 gives us `[4]`. There's only one element, so we use the base case and get 0. Adding up, we get 0\*3! + 1\*2! + 0\*1! + 0 = 1\*2! = 2. So `[1324]` is at index 2 in the sorted list of 4 permutations. That's correct, because at index 0 is `[1234]`, index 1 is `[1243]`, and the lexicographically next permutation is our `[1324]`.
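A Python 3 translation of the same ranking function (a sketch; `math.factorial` replaces the hand-rolled `fact` and `xrange`):

```python
import math

def perm_index(perm):
    # lexicographic rank of perm among all permutations of its elements
    if len(perm) <= 1:
        return 0
    smaller = sum(p < perm[0] for p in perm[1:])
    return smaller * math.factorial(len(perm) - 1) + perm_index(perm[1:])
```

For the 8-puzzle this maps each permutation of 1..9 to a unique index in `range(362880)`, so a list (or bit array) of that size gives O(1) visited-state checks.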
Extracting comments from Python Source Code
34,511,673
7
2015-12-29T13:03:14Z
34,512,388
7
2015-12-29T13:46:21Z
[ "python", "python-2.7" ]
I'm trying to write a program to extract comments in code that user enters. I tried to use regex, but found it difficult to write. Then I found a post [here](http://stackoverflow.com/questions/7032031/python-regex-to-remove-comments). The answer suggests to use `tokenize.generate_tokens` to analyze the grammar, but [the documentation says](https://docs.python.org/2.7/library/tokenize.html#tokenize.generate_tokens): > The `generate_tokens()` generator requires one argument, `readline`, which > must be a callable object which provides the same interface as the > `readline()` method of built-in file objects (see section File Objects). But a string object does not have `readline` method. Then I found another post [here](http://stackoverflow.com/questions/7472839/python-readline-from-a-string), suggesting to use `StringIO.StringIO` to get a `readline` method. So I wrote the following code: ``` import tokenize import io import StringIO def extract(code): res = [] comment = None stringio = StringIO.StringIO(code) for toktype, tokval, begin, end, line in tokenize.generate_tokens(stringio): # print(toknum,tokval) if toktype != tokenize.COMMENT: res.append((toktype, tokval)) else: print tokenize.untokenize(toktype) return tokenize.untokenize(res) ``` And entered the following code: `extract('a = 1+2#A Comment')` But got: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "ext.py", line 10, in extract for toktype, tokval, begin, end, line in tokenize.generate_tokens(stringio): File "C:\Python27\lib\tokenize.py", line 294, in generate_tokens line = readline() AttributeError: StringIO instance has no `__call__` method ``` I know I can write a new class, but is there any better solution?
## Answer for more general cases (extracting from modules, functions): ### Modules: The documentation specifies that one needs to provide a callable which exposes the same interface as the **[`readline()`](https://docs.python.org/2.7/library/io.html#io.IOBase.readline)** method of built-in **file** objects. This hints to: create an object that provides that method. In the case of module, we can just **[`open`](https://docs.python.org/2.7/library/functions.html#open)** a new module as a normal file and pass in it's `readline` method. This is the key, *the argument you pass **is** the method `readline()`*. Given a small `scrpt.py` file with: ``` # My amazing foo function. def foo(): """ docstring """ # I will print print "Hello" return 0 # Return the value # Maaaaaaain if __name__ == "__main__": # this is main print "Main" ``` We will open it as we do all files: ``` fileObj = open('scrpt.py', 'r') ``` This file object now has a method called `readline` (because it is a file object) which we can safely pass to `tokenize.generate_tokens` and create a generator. **[`tokenize.generate_tokens`](https://docs.python.org/2.7/library/tokenize.html#tokenize.generate_tokens)** (simply [**`tokenize.tokenize`**](https://docs.python.org/3.5/library/tokenize.html#tokenize.tokenize) in Py3) returns a named tuple of elements which contain information about the elements tokenized. Here's a small demo: ``` for toktype, tok, start, end, line in tokenize.generate_tokens(fileObj.readline): # we can also use token.tok_name[toktype] instead of 'COMMENT' # from the token module if toktype == tokenize.COMMENT: print 'COMMENT' + " " + tok ``` Notice how we pass the `fileObj.readline` method to it. This will now print: ``` COMMENT # My amazing foo function COMMENT # I will print COMMENT # Return the value COMMENT # Maaaaaaain COMMENT # this is main ``` So all comments regardless of position are detected. Docstrings of course are excluded. 
### Functions:

You can achieve a similar result without `open`, though I can't really think of cases where you'd need to. Nonetheless, I'll present another way of doing it for completeness' sake. In this scenario you'll need two additional modules, **[`inspect`](https://docs.python.org/2.7/library/inspect.html)** and **[`StringIO`](https://docs.python.org/2.7/library/stringio.html#module-StringIO)** (**[`io.StringIO`](https://docs.python.org/3.5/library/io.html#io.StringIO)** in `Python3`):

Let's say you have the following function:

```
def bar():
    # I am bar
    print "I really am bar"
    # bar bar bar baaaar
    # (bar)
    return "Bar"
```

You need a file-like object which has a `readline` method to use it with `tokenize`. Well, you can create a file-like object from an `str` using `StringIO.StringIO`, and you can get an `str` representing the source of the function with [`inspect.getsource(func)`](https://docs.python.org/2.7/library/inspect.html#inspect.getsource). In code:

```
funcText = inspect.getsource(bar)
funcFile = StringIO.StringIO(funcText)
```

Now we have a file-like object representing the function which has the wanted `readline` method. We can just re-use the loop we previously performed, replacing `fileObj.readline` with `funcFile.readline`. The output we get now is of a similar nature:

```
COMMENT # I am bar
COMMENT # bar bar bar baaaar
COMMENT # (bar)
```

---

As an aside, if you really want to create a custom way of doing this with `re`, take a look at [the source for the `tokenize.py` module](https://hg.python.org/cpython/file/2.7/Lib/tokenize.py). It defines certain patterns for comments (`r'#[^\r\n]*'`), names, et cetera, loops through the lines with `readline` and searches within the `line` list for patterns. Thankfully, it's not too complex after you look at it for a while :-). 
--- ### Answer for function `extract` (Update): You've created an object with `StringIO` that provides the interface, *but you haven't passed that interface (`readline`) to `tokenize.generate_tokens`; instead, you passed the full object (`stringio`)*. Additionally, in your `else` clause a `TypeError` is going to be raised because `untokenize` expects an iterable as input. Making the following changes, your function works fine: ``` def extract(code): res = [] comment = None stringio = StringIO.StringIO(code) # pass in stringio.readline to generate_tokens for toktype, tokval, begin, end, line in tokenize.generate_tokens(stringio.readline): if toktype != tokenize.COMMENT: res.append((toktype, tokval)) else: # wrap the (toktype, tokval) tuple in a list print tokenize.untokenize([(toktype, tokval)]) return tokenize.untokenize(res) ``` Supplied with input of the form `expr = extract('a=1+2#A comment')`, the function will print out the comment and retain the expression in `expr`: ``` expr = extract('a=1+2#A comment') #A comment print expr 'a =1 +2 ' ``` Furthermore, as mentioned above, `io` houses `StringIO` in Python 3, so in that case the `import` is thankfully not required.
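For readers on Python 3, here is a minimal sketch of the same `extract` adapted to `io.StringIO` and the `print()` function (my own adaptation, not from the original answer; note `untokenize`'s spacing is approximate when given 2-tuples):

```python
import io
import tokenize

def extract(code):
    """Print each comment in `code` and return the code with comments stripped."""
    res = []
    stringio = io.StringIO(code)
    # Pass the readline *method*, not the StringIO object itself.
    for toktype, tokval, begin, end, line in tokenize.generate_tokens(stringio.readline):
        if toktype != tokenize.COMMENT:
            res.append((toktype, tokval))
        else:
            # untokenize expects an iterable of token tuples.
            print(tokenize.untokenize([(toktype, tokval)]))
    return tokenize.untokenize(res)

print(extract('a = 1 + 2  # A comment\n'))
```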
Split string to various data types
34,515,139
2
2015-12-29T16:33:50Z
34,515,155
10
2015-12-29T16:35:01Z
[ "python", "string", "list" ]
I would like to convert the following string: ``` s = '1|2|a|b' ``` to ``` [1, 2, 'a', 'b'] ``` Is it possible to do the conversion in one line?
> Is it possible to do the conversion in one line? **YES**, it is possible. But how? *Algorithm for the approach* * Split the string into its constituent parts using [`str.split`](https://docs.python.org/3/library/stdtypes.html#str.split). The output of this is ``` >>> s = '1|2|a|b' >>> s.split('|') ['1', '2', 'a', 'b'] ``` * Now we have got half the problem. Next we need to loop through the split string and then check if each of them is a string or an int. For this we use + A [list comprehension](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions), which is for the looping part + [`str.isdigit`](https://docs.python.org/3/library/stdtypes.html#str.isdigit) for finding if the element is an `int` or a `str`. * The list comprehension can be easily written as `[i for i in s.split('|')]`. But how do we add an `if` clause there? This is covered in [python one-line list comprehension: if-else variants](http://stackoverflow.com/questions/17321138/python-one-line-list-comprehension-if-else-variants). Now that we know which elements are `int` and which are not, we can easily call the built-in [`int`](https://docs.python.org/3/library/functions.html#int) on them. Hence the final code will look like ``` [int(i) if i.isdigit() else i for i in s.split('|')] ``` Now for a small demo, ``` >>> s = '1|2|a|b' >>> [int(i) if i.isdigit() else i for i in s.split('|')] [1, 2, 'a', 'b'] ``` As we can see, the output is as expected. --- Note that this approach is not suitable if there are many types to be converted.
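For fields that `str.isdigit` rejects, such as negative numbers or floats, one hedged alternative is a small try/except converter. The helper name `convert` is my own, not from the answer above:

```python
def convert(field):
    # Try int first, then float; fall back to the original string.
    for cast in (int, float):
        try:
            return cast(field)
        except ValueError:
            pass
    return field

s = '1|-2|3.5|a'
print([convert(i) for i in s.split('|')])  # [1, -2, 3.5, 'a']
```

This trades the one-liner for robustness: each cast is attempted in order, so `'-2'` becomes an `int` and `'3.5'` a `float`.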
Werkzeug raises BrokenFilesystemWarning
34,515,331
2
2015-12-29T16:46:46Z
34,517,230
7
2015-12-29T18:55:19Z
[ "python", "unix", "encoding", "utf-8", "flask" ]
I get the following error when I send form data to my Flask app. It says it will use the UTF-8 encoding, but the locale is already UTF-8. What does this error mean? ``` /home/.virtualenvs/project/local/lib/python2.7/site-packages/werkzeug/filesystem.py:63: BrokenFilesystemWarning: Detected a misconfigured UNIX filesystem: Will use UTF-8 as filesystem encoding instead of 'ANSI_X3.4-1968' ``` ``` $ locale LANG=en_US.utf8 LANGUAGE=en_US.utf8 LC_CTYPE="en_US.utf8" LC_NUMERIC="en_US.utf8" LC_TIME="en_US.utf8" LC_COLLATE="en_US.utf8" LC_MONETARY="en_US.utf8" LC_MESSAGES="en_US.utf8" LC_PAPER="en_US.utf8" LC_NAME="en_US.utf8" LC_ADDRESS="en_US.utf8" LC_TELEPHONE="en_US.utf8" LC_MEASUREMENT="en_US.utf8" LC_IDENTIFICATION="en_US.utf8" LC_ALL=en_US.utf8 ```
This is not a critical error, just a warning that Werkzeug couldn't detect a good locale and so is using `UTF-8` instead. This guess is probably correct. See [this Arch Linux wiki article](https://wiki.archlinux.org/index.php/Locale) for how to set up the locale correctly. It mentions that Python may see the `ANSI_X3.4-1968` encoding even if the locale is properly configured, if you are running from certain environments such as Vim. > When executing `:!python -c "import sys; print(sys.stdout.encoding)"` in ViM, the output may be `ANSI_X3.4-1968`, even if the locale is set correctly everywhere. Set the `PYTHONIOENCODING` environment variable to `utf-8` to remedy the situation.
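To see what encoding Python itself detected, a quick diagnostic (a rough sketch of the idea behind Werkzeug's check, not its exact source) is:

```python
import codecs
import sys

# The filesystem encoding Python picked up from the environment.
fs_enc = sys.getfilesystemencoding()
print(fs_enc)

# ANSI_X3.4-1968 is an alias for plain ASCII, which is what triggers
# the warning; a rough equivalent of that test:
is_ascii_like = codecs.lookup(fs_enc).name == 'ascii'
print('misconfigured' if is_ascii_like else 'ok')
```

If this prints `misconfigured`, fixing the locale (or setting `PYTHONIOENCODING`) as described above is the remedy.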
sum over a list of tensors in tensorflow
34,519,627
5
2015-12-29T21:50:33Z
34,520,066
10
2015-12-29T22:25:05Z
[ "python", "tensorflow", "cost-based-optimizer" ]
I have a deep neural network where the weights between layers are stored in a list. `layers[j].weights` I want to include the ridge penalty in my cost function. I then need to use something like `tf.nn.l2_loss(layers[j].weights**2 for j in range(self.n_layers))` i.e. the sum of squares of all the weights. In particular the weights are defined as: ``` >>> avs.layers [<neural_network.Layer object at 0x10a4b2a90>, <neural_network.Layer object at 0x10ac85080>, <neural_network.Layer object at 0x10b0f3278>, <neural_network.Layer object at 0x10b0eacf8>, <neural_network.Layer object at 0x10b145588>, <neural_network.Layer object at 0x10b165048>, <neural_network.Layer object at 0x10b155ba8>] >>> >>> avs.layers[0].weights <tensorflow.python.ops.variables.Variable object at 0x10b026748> >>> ``` How can I do that in TensorFlow?
The standard way to sum a list of tensors is to use the [`tf.add_n()`](https://www.tensorflow.org/versions/master/api_docs/python/math_ops.html#add_n) operation, which takes a list of tensors (each having the same size and shape) and produces a single tensor containing the sum. For the particular problem that you have, I am assuming that each `layers[j].weights` could have a different size. Therefore you will need reduce each element down to a scalar before summing, e.g. using the [`tf.nn.l2_loss()`](https://www.tensorflow.org/versions/master/api_docs/python/nn.html#l2_loss) function itself: ``` weights = [layers[j].weights for j in range(self.n_layers)] losses = [tf.nn.l2_loss(w) for w in weights] total_loss = tf.add_n(losses) ``` (Note however that when the values to be added are large, you may find it more efficient to calculate a sequence of [`tf.add()`](https://www.tensorflow.org/versions/master/api_docs/python/math_ops.html#add) operations, since TensorFlow keeps the values of each of the `add_n` arguments in memory until *all* of them have been computed. A chain of `add` ops allows some of the computation to happen earlier.)
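If TensorFlow isn't to hand, the shape of the computation can be sketched in plain NumPy. `tf.nn.l2_loss(t)` is documented as computing `sum(t ** 2) / 2`, which the stand-in below mirrors; the weight shapes are made up for illustration:

```python
import numpy as np

# Stand-ins for layers[j].weights; each layer may have a different shape.
weights = [np.ones((2, 3)), np.ones((3, 1)), np.ones((1, 4))]

def l2_loss(w):
    # Mirrors tf.nn.l2_loss: sum of squared entries, halved.
    return np.sum(w ** 2) / 2.0

# Reduce each weight tensor to a scalar, then sum the scalars
# (sum() here plays the role of tf.add_n).
losses = [l2_loss(w) for w in weights]
total_loss = sum(losses)
print(total_loss)  # 6.5, i.e. (6 + 3 + 4) / 2
```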
Python recursive function that retains variable values
34,519,841
3
2015-12-29T22:07:23Z
34,519,921
8
2015-12-29T22:13:28Z
[ "python", "recursion" ]
I am brushing up on a bit of good old algorithms, and doing it with Python, since I use it more often nowadays. I am facing an issue when running a recursive function: the variable gets reset every time the recursive function calls itself: ``` def recursive_me(mystring): chars = len(mystring) if chars is 0: print("Done") else: first = int(str[0]) total = + first print(total) recursive_me(mystring[1:]) recursive_me("4567") ``` What I am doing here is to take a string made of digits, take the first, convert it to an int, and run the function recursively, so I can take one digit at a time from the string and sum all the values. Ideally the output should show the total as it adds all the digits (4+5+6+7); although when the recursive function is called, the function resets the total value each time. Is it common practice to use global variables when running operations with recursive functions, or am I doing something wrong?
You can code as simply as this: ``` def recursive_me(mystring): if mystring: # recursive case return int(mystring[0]) + recursive_me(mystring[1:]) else: # base case return 0 ``` or ``` def recursive_me(mystring, total = 0): if mystring: # recursive case return recursive_me(mystring[1:], total + int(mystring[0])) else: # base case return total ``` although this won't help much in Python since it doesn't implement tail-call optimisation. If you want to see the intermediate values, change the second version like so: ``` def recursive_me(mystring, total = 0): if mystring: # recursive case newtotal = total + int(mystring[0]) print(newtotal) return recursive_me(mystring[1:], newtotal) else: # base case return total ``` then ``` 4 9 15 22 22 # this is the return value; previous output is from `print()` ```
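As a cross-check on the recursive versions above, the same digit sum can be written iteratively with a generator expression and the built-in `sum` (a complement to the answer, not part of the recursion exercise):

```python
def digit_sum(mystring):
    # Iterative equivalent of the recursive digit sum.
    return sum(int(ch) for ch in mystring)

print(digit_sum("4567"))  # 22
```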
Check if a list has one or more strings that match a regex
34,520,279
9
2015-12-29T22:44:43Z
34,520,315
7
2015-12-29T22:47:55Z
[ "python", "regex", "list" ]
If I need to say ``` if <this list has a string in it that matches this regex>: do_stuff() ``` I [found](http://www.cademuir.eu/blog/2011/10/20/python-searching-for-a-string-within-a-list-list-comprehension/) this powerful construct to extract matching strings from a list: ``` [m.group(1) for l in my_list for m in [my_regex.search(l)] if m] ``` ...but this is hard to read and overkill. I don't want the list, I just want to know whether such a list would have anything in it. Is there a simpler-reading way to get that answer?
You can simply use `any`. Demo: ``` >>> lst = ['hello', '123', 'SO'] >>> any(re.search('\d', s) for s in lst) True >>> any(re.search('\d{4}', s) for s in lst) False ``` use `re.match` if you want to enforce matching from the start of the string. *Explanation*: `any` will check if there is any truthy value in an iterable. In the first example, we pass the contents of the following list (in the form of a generator): ``` >>> [re.search('\d', s) for s in lst] [None, <_sre.SRE_Match object at 0x7f15ef317d30>, None] ``` which has one match-object which is truthy, while `None` will always evaluate to `False` in a boolean context. This is why `any` will return `False` for the second example: ``` >>> [re.search('\d{4}', s) for s in lst] [None, None, None] ```
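If, beyond the yes/no answer, you also want the first matching element, `next` with a default pairs naturally with the same generator. A small extension of the `any` idiom, using the same sample list:

```python
import re

lst = ['hello', '123', 'SO']
pattern = re.compile(r'\d')

# First element that matches, or None if nothing does.
first_match = next((s for s in lst if pattern.search(s)), None)
print(first_match)  # 123
```

Like `any`, `next` stops at the first truthy result, so the rest of the list is never scanned.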