How to use Java/Scala function from an action or a transformation?
31,684,842
20
2015-07-28T18:54:01Z
34,412,182
18
2015-12-22T09:14:25Z
[ "python", "scala", "apache-spark", "pyspark", "apache-spark-mllib" ]
### Background My original question here was *Why does using `DecisionTreeModel.predict` inside a map function raise an exception?* and is related to [How to generate tuples of (original label, predicted label) on Spark with MLlib?](http://stackoverflow.com/q/31680704/1560062) When we use the Scala API, [a recommended way](https://spark.apache.org/docs/1.4.1/mllib-decision-tree.html#classification) of getting predictions for `RDD[LabeledPoint]` using `DecisionTreeModel` is to simply map over the `RDD`: ``` val labelAndPreds = testData.map { point => val prediction = model.predict(point.features) (point.label, prediction) } ``` Unfortunately, a similar approach in PySpark doesn't work so well: ``` labelsAndPredictions = testData.map( lambda lp: (lp.label, model.predict(lp.features))) labelsAndPredictions.first() ``` > Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see [SPARK-5063](https://issues.apache.org/jira/browse/SPARK-5063). Instead, the [official documentation](https://spark.apache.org/docs/1.4.1/mllib-decision-tree.html#classification) recommends something like this: ``` predictions = model.predict(testData.map(lambda x: x.features)) labelsAndPredictions = testData.map(lambda lp: lp.label).zip(predictions) ``` So what is going on here? There is no broadcast variable here, and the [Scala API](https://github.com/apache/spark/blob/3697232b7d438979cc119b2a364296b0eec4a16a/mllib/src/main/scala/org/apache/spark/mllib/tree/model/DecisionTreeModel.scala#L45) defines `predict` as follows: ``` /** * Predict values for a single data point using the model trained. * * @param features array representing a single data point * @return Double prediction from the trained model */ def predict(features: Vector): Double = { topNode.predict(features) } /** * Predict values for the given data set using the model trained. 
* * @param features RDD representing data points to be predicted * @return RDD of predictions for each of the given data points */ def predict(features: RDD[Vector]): RDD[Double] = { features.map(x => predict(x)) } ``` so at least at first glance, calling it from an action or a transformation is not a problem, since prediction seems to be a local operation. ### Explanation After some digging I figured out that the source of the problem is the [`JavaModelWrapper.call`](https://github.com/apache/spark/blob/3c0156899dc1ec1f7dfe6d7c8af47fa6dc7d00bf/python/pyspark/mllib/common.py#L142) method invoked from [DecisionTreeModel.predict](https://github.com/apache/spark/blob/164fe2aa44993da6c77af6de5efdae47a8b3958c/python/pyspark/mllib/tree.py#L76). It [accesses](https://github.com/apache/spark/blob/3c0156899dc1ec1f7dfe6d7c8af47fa6dc7d00bf/python/pyspark/mllib/common.py#L144) `SparkContext`, which is required to call the Java function: ``` callJavaFunc(self._sc, getattr(self._java_model, name), *a) ``` ### Question In the case of `DecisionTreeModel.predict` there is a recommended workaround and all the required code is already part of the Scala API, but is there any elegant way to handle problems like this in general? The only solutions I can think of right now are rather heavyweight: * pushing everything down to the JVM, either by extending Spark classes through implicit conversions or by adding some kind of wrappers * using the Py4j gateway directly
Communication using the default Py4J gateway is simply not possible. To understand why, we have to take a look at the following diagram from the PySpark Internals document [1]: [![enter image description here](http://i.stack.imgur.com/sfcDU.jpg)](http://i.stack.imgur.com/sfcDU.jpg) Since the Py4J gateway runs on the driver, it is not accessible to the Python interpreters, which communicate with JVM workers through sockets (see for example [`PythonRDD`](https://github.com/apache/spark/blob/d83c2f9f0b08d6d5d369d9fae04cdb15448e7f0d/core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala) / [`rdd.py`](https://github.com/apache/spark/blob/499ac3e69a102f9b10a1d7e14382fa191516f7b5/python/pyspark/rdd.py#L123_)). In theory it would be possible to create a separate Py4J gateway for each worker, but in practice it is unlikely to be useful. Ignoring issues like reliability, Py4J is simply not designed to perform data-intensive tasks. Are there any workarounds? 1. Using the [Spark SQL Data Sources API](https://databricks.com/blog/2015/01/09/spark-sql-data-sources-api-unified-data-access-for-the-spark-platform.html) to wrap JVM code. **Pros**: Supported, high level, doesn't require access to the internal PySpark API **Cons**: Relatively verbose and not very well documented, limited mostly to the input data 2. Operating on DataFrames using Scala UDFs. **Pros**: Easy to implement (see [Spark: How to map Python with Scala or Java User Defined Functions?](http://stackoverflow.com/q/33233737/1560062)), no data conversion between Python and Scala if the data is already stored in a DataFrame, minimal access to Py4J **Cons**: Requires access to the Py4J gateway and internal methods, limited to Spark SQL, hard to debug, not supported 3. Creating a high-level Scala interface in a similar way to how it is done in MLlib. **Pros**: Flexible, ability to execute arbitrarily complex code. 
It can be done either directly on RDDs (see for example [MLlib model wrappers](https://github.com/apache/spark/tree/master/mllib/src/main/scala/org/apache/spark/mllib/api/python)) or with `DataFrames` (see [How to use a Scala class inside Pyspark](http://stackoverflow.com/q/36023860/1560062)). The latter solution seems to be much more friendly, since all the ser-de details are already handled by the existing API. **Cons**: Low level, requires data conversion, like UDFs requires access to Py4J and the internal API, not supported. Some basic examples can be found in [Strings not converted when calling Scala code from a PySpark app](http://stackoverflow.com/q/39458465/1560062) 4. Using an external workflow management tool to switch between Python and Scala / Java jobs and passing data to a DFS. **Pros**: Easy to implement, minimal changes to the code itself **Cons**: Cost of reading / writing data ([Tachyon](http://tachyon-project.org/)?) 5. Using a shared `SQLContext` (see for example [Apache Zeppelin](https://zeppelin.incubator.apache.org/) or [Livy](https://github.com/cloudera/livy)) to pass data between guest languages using registered temporary tables. **Pros**: Well suited for interactive analysis **Cons**: Not so much for batch jobs (Zeppelin) or may require additional orchestration (Livy) --- 1. Joshua Rosen. (2014, August 04) [PySpark Internals](https://cwiki.apache.org/confluence/display/SPARK/PySpark+Internals). Retrieved from <https://cwiki.apache.org/confluence/display/SPARK/PySpark+Internals>
Trade off between code duplication and performance
31,688,034
12
2015-07-28T22:13:02Z
31,688,190
21
2015-07-28T22:26:23Z
[ "python", "performance", "optimization", "code-duplication" ]
Python, being the dynamic language that it is, offers multiple ways to implement the same feature. These options may vary in readability, maintainability and performance. Even though the usual scripts that I write in Python are of a disposable nature, I now have a certain project that I am working on (academic) that must be readable, maintainable and perform reasonably well. Since I haven't done any serious coding in Python before, including any sort of profiling, I need help in deciding the balance between the three factors I mentioned above. Here's a code snippet from one of the modules in a scientific package that I am working on. It is an n-ary Tree class with a very basic skeleton structure. This was written with inheritance and subclassing in mind. *Note: in the code below a tree is the same thing as a node. Every tree is an instance of the same class Tree.* ``` class Tree(object): def __init__(self, parent=None, value=None): self.parent = parent self.value = value self.children = set() ``` The two functions below belong to this class (along with many others) ``` def isexternal(self): """Return True if this is an external tree.""" return not bool(self.children) def isleaf(self): """Return True if this is a leaf tree.""" return not bool(self.children) ``` Both these functions do exactly the same thing - they are just two different names. So, why not change it to something like: ``` def isleaf(self): """Return True if this is a leaf tree.""" return self.isexternal() ``` **My doubts are these:** I've read that function calls in Python are rather expensive (creating a new stack frame for each call), but I don't know if it is a good or bad thing if one function depends on another. How will it affect maintainability? This happens many times in my code, where I call one method from another method to avoid code duplication. Is it bad practice to do this? 
Here's another example of this code duplication scenario in the same class: ``` def isancestor(self, tree): """Return True if this tree is an ancestor of the specified tree.""" return tree.parent is self or (not tree.isroot() and self.isancestor(tree.parent)) def isdescendant(self, tree): """Return True if this tree is a descendant of the specified tree.""" return self.parent is tree or (not self.isroot() and self.parent.isdescendant(tree)) ``` I could instead go for: ``` def isdescendant(self, tree): """Return True if this tree is a descendant of the specified tree.""" return tree.isancestor(self) ```
*Very* broadly speaking, there are two types of optimization: *macro optimizations* and *micro optimizations*. Macro optimizations include things like your choice of **algorithms**, deciding between different **data structures**, and the like. **Things that can have a big impact on performance** and often have large ripple effects on your code base if you change your mind. Switching from a data structure with linear O(n) inserts to one with constant O(1) inserts could be a huge win and well worth the cost of doing it. Adding caching may change a dog-slow algorithm into a lightning fast one. Micro optimizations are things like eliding or inlining function calls, eliminating or adding variables, caching calculation results for a very short window, unrolling loops, etc. As a rule, you should forget about these types of optimizations and **focus on the readability and maintainability of your code**. The effects of micro optimizations are simply too small to be worth it. You should only consider these types of changes **after profiling your code**. If you can identify a critical loop that would benefit from such an optimization, and your profiling confirms it would, *and* you make the change and verify the improvement worked with another round of profiling--*then* you should micro optimize. But until then, don't sweat the small stuff. ``` def isdescendant(self, tree): """Return True if this tree is a descendant of the specified tree.""" return tree.isancestor(self) ``` I would absolutely recommend this type of code reuse. It makes it crystal clear that `isdescendant` is the inverse of `isancestor`. It ensures that both functions work the same way so you can't inadvertently introduce a bug in one but not the other. ``` def isleaf(self): """Return True if this is a leaf tree.""" return self.isexternal() ``` Here I would ask myself if `isleaf` and `isexternal` are conceptually the same. Ignoring that they're implemented the same, are they logically identical? 
If so, I would have one call the other. If it's just happenstance that they have the same implementation, I might duplicate the code. Can you imagine a scenario where you would want to change one function but not the other? That would point towards duplication.
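If you do want to put a number on the cost of the delegating call before worrying about it, `timeit` makes that easy. Below is a minimal sketch using a trimmed-down stand-in for the `Tree` class above:

```python
import timeit

class Tree(object):
    def __init__(self, parent=None, value=None):
        self.parent = parent
        self.value = value
        self.children = set()

    def isexternal(self):
        """Return True if this is an external tree."""
        return not bool(self.children)

    def isleaf(self):
        """Return True if this is a leaf tree."""
        return self.isexternal()  # delegate rather than duplicate

t = Tree()
# Time the direct implementation against the delegating one.
direct = timeit.timeit(t.isexternal, number=100000)
delegated = timeit.timeit(t.isleaf, number=100000)
print('direct:    %f s' % direct)
print('delegated: %f s' % delegated)
```

On a typical machine the difference works out to tens of nanoseconds per call; only profiling your real workload can tell you whether that ever matters.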
Get the name or ID of the current Google Compute Instance
31,688,646
3
2015-07-28T23:08:42Z
31,689,692
7
2015-07-29T01:15:43Z
[ "python", "google-compute-engine", "google-cloud-platform" ]
I'm running a number of Google Compute Engine instances that run Python code, and I want to find the name or the ID of each instance from inside the instances. One solution I've found is to get the internal IP of the instance using: ``` import socket internal_ip = socket.gethostbyname(socket.gethostname()) ``` Then I list all the instances: ``` from oauth2client.client import GoogleCredentials from googleapiclient.discovery import build credentials = GoogleCredentials.get_application_default() self.compute = build('compute', 'v1', credentials=credentials) result = self.compute.instances().list(project=project, zone=zone).execute() ``` Then I iterate over all the instances to check if the internal IP matches the IP of an instance: ``` for instance in result["items"]: if instance["networkInterfaces"][0]["networkIP"] == internal_ip: internal_id = instance["id"] ``` This works, but it's a bit complicated. Is there a more direct way to achieve the same thing, e.g. using Google's Python Client Library or the gcloud command line tool?
**Instance Name:** `socket.gethostname()` or `platform.node()` should return the name of the instance. You might have to do a bit of parsing depending on your OS. This worked for me on Debian and Ubuntu systems: ``` import socket gce_name = socket.gethostname() ``` However, on a CoreOS instance, the `hostname` command gave the name of the instance plus the zone information, so you would have to do some parsing. **Instance ID / Name / More (Recommended):** The better way to do this is to use the [Metadata server](https://cloud.google.com/compute/docs/metadata). This is the easiest way to get instance information, and works with basically any programming language or straight CURL. Here is a Python example using [Requests](http://docs.python-requests.org/en/latest/index.html). ``` import requests metadata_server = "http://metadata/computeMetadata/v1/instance/" metadata_flavor = {'Metadata-Flavor' : 'Google'} gce_id = requests.get(metadata_server + 'id', headers = metadata_flavor).text gce_name = requests.get(metadata_server + 'hostname', headers = metadata_flavor).text gce_machine_type = requests.get(metadata_server + 'machine-type', headers = metadata_flavor).text ``` Again, you might need to do some parsing here, but it is really straightforward! References: [How can I use Python to get the system hostname?](http://stackoverflow.com/questions/4271740/how-can-i-use-python-to-get-the-system-hostname)
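For the parsing mentioned above: when an image reports the zone-qualified form, the bare instance name is just the first dot-separated label. A small sketch (the hostnames shown are made-up examples):

```python
def short_instance_name(hostname):
    """Return the bare instance name from a possibly fully-qualified hostname."""
    return hostname.split('.')[0]

# A CoreOS-style image may report the zone-qualified form:
print(short_instance_name('my-instance-1.c.my-project.internal'))  # my-instance-1
# A Debian/Ubuntu image typically reports just the name:
print(short_instance_name('my-instance-1'))                        # my-instance-1
```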
Creating large Pandas DataFrames: preallocation vs append vs concat
31,690,076
4
2015-07-29T02:06:02Z
31,713,471
7
2015-07-30T00:42:03Z
[ "python", "pandas" ]
I am confused by the performance in Pandas when building a large dataframe chunk by chunk. In Numpy, we (almost) always see better performance by preallocating a large empty array and then filling in the values. As I understand it, this is due to Numpy grabbing all the memory it needs at once instead of having to reallocate memory with every `append` operation. In Pandas, I seem to be getting better performance by using the `df = df.append(temp)` pattern. Here is an example with timing. The definition of the `Timer` class follows. As you see, I find that preallocating is roughly 10x slower than using `append`! Preallocating a dataframe with `np.empty` values of the appropriate dtype helps a great deal, but the `append` method is still the fastest. ``` import numpy as np from numpy.random import rand import pandas as pd from timer import Timer # Some constants num_dfs = 10 # Number of random dataframes to generate n_rows = 2500 n_cols = 40 n_reps = 100 # Number of repetitions for timing # Generate a list of num_dfs dataframes of random values df_list = [pd.DataFrame(rand(n_rows*n_cols).reshape((n_rows, n_cols)), columns=np.arange(n_cols)) for i in np.arange(num_dfs)] ## # Define two methods of growing a large dataframe ## # Method 1 - append dataframes def method1(): out_df1 = pd.DataFrame(columns=np.arange(4)) for df in df_list: out_df1 = out_df1.append(df, ignore_index=True) return out_df1 def method2(): # # Create an empty dataframe that is big enough to hold all the dataframes in df_list out_df2 = pd.DataFrame(columns=np.arange(n_cols), index=np.arange(num_dfs*n_rows)) #EDIT_1: Set the dtypes of each column for ix, col in enumerate(out_df2.columns): out_df2[col] = out_df2[col].astype(df_list[0].dtypes[ix]) # Fill in the values for ix, df in enumerate(df_list): out_df2.iloc[ix*n_rows:(ix+1)*n_rows, :] = df.values return out_df2 # EDIT_2: # Method 3 - preallocate dataframe with np.empty data of appropriate type def method3(): # Create fake data array data = 
np.transpose(np.array([np.empty(n_rows*num_dfs, dtype=dt) for dt in df_list[0].dtypes])) # Create placeholder dataframe out_df3 = pd.DataFrame(data) # Fill in the real values for ix, df in enumerate(df_list): out_df3.iloc[ix*n_rows:(ix+1)*n_rows, :] = df.values return out_df3 ## # Time both methods ## # Time Method 1 times_1 = np.empty(n_reps) for i in np.arange(n_reps): with Timer() as t: df1 = method1() times_1[i] = t.secs print 'Total time for %d repetitions of Method 1: %f [sec]' % (n_reps, np.sum(times_1)) print 'Best time: %f' % (np.min(times_1)) print 'Mean time: %f' % (np.mean(times_1)) #>> Total time for 100 repetitions of Method 1: 2.928296 [sec] #>> Best time: 0.028532 #>> Mean time: 0.029283 # Time Method 2 times_2 = np.empty(n_reps) for i in np.arange(n_reps): with Timer() as t: df2 = method2() times_2[i] = t.secs print 'Total time for %d repetitions of Method 2: %f [sec]' % (n_reps, np.sum(times_2)) print 'Best time: %f' % (np.min(times_2)) print 'Mean time: %f' % (np.mean(times_2)) #>> Total time for 100 repetitions of Method 2: 32.143247 [sec] #>> Best time: 0.315075 #>> Mean time: 0.321432 # Time Method 3 times_3 = np.empty(n_reps) for i in np.arange(n_reps): with Timer() as t: df3 = method3() times_3[i] = t.secs print 'Total time for %d repetitions of Method 3: %f [sec]' % (n_reps, np.sum(times_3)) print 'Best time: %f' % (np.min(times_3)) print 'Mean time: %f' % (np.mean(times_3)) #>> Total time for 100 repetitions of Method 3: 6.577038 [sec] #>> Best time: 0.063437 #>> Mean time: 0.065770 ``` I use a nice `Timer` courtesy of Huy Nguyen: ``` # credit: http://www.huyng.com/posts/python-performance-analysis/ import time class Timer(object): def __init__(self, verbose=False): self.verbose = verbose def __enter__(self): self.start = time.clock() return self def __exit__(self, *args): self.end = time.clock() self.secs = self.end - self.start self.msecs = self.secs * 1000 # millisecs if self.verbose: print 'elapsed time: %f ms' % self.msecs ``` If you 
are still following, I have two questions: 1) Why is the `append` method faster? (NOTE: for very small dataframes, i.e. `n_rows = 40`, it is actually slower). 2) What is the most efficient way to build a large dataframe out of chunks? (In my case, the chunks are all large csv files). Thanks for your help! EDIT\_1: In my real world project, the columns have different dtypes. So I cannot use the `pd.DataFrame(.... dtype=some_type)` trick to improve the performance of preallocation, per BrenBarn's recommendation. The dtype parameter forces all the columns to have the same dtype [Ref. issue [4464]](https://github.com/pydata/pandas/issues/4464). I added some lines to `method2()` in my code to change the dtypes column-by-column to match the input dataframes. This operation is expensive and negates the benefits of having the appropriate dtypes when writing blocks of rows. EDIT\_2: Tried preallocating a dataframe using a placeholder array `np.empty(..., dtype=some_type)`, per @Joris's suggestion.
Your benchmark is actually too small to show the real difference. Appending copies EACH time, so you are actually copying a size-N memory space N\*(N-1) times. This is horribly inefficient as the size of your dataframe grows. This certainly might not matter in a very small frame. But if you have any real size this matters a lot. This is specifically noted in the docs [here](http://pandas.pydata.org/pandas-docs/stable/merging.html#concatenating-objects), though it's kind of a small warning. ``` In [97]: df = DataFrame(np.random.randn(100000,20)) In [98]: df['B'] = 'foo' In [99]: df['C'] = pd.Timestamp('20130101') In [103]: df.info() <class 'pandas.core.frame.DataFrame'> Int64Index: 100000 entries, 0 to 99999 Data columns (total 22 columns): 0 100000 non-null float64 1 100000 non-null float64 2 100000 non-null float64 3 100000 non-null float64 4 100000 non-null float64 5 100000 non-null float64 6 100000 non-null float64 7 100000 non-null float64 8 100000 non-null float64 9 100000 non-null float64 10 100000 non-null float64 11 100000 non-null float64 12 100000 non-null float64 13 100000 non-null float64 14 100000 non-null float64 15 100000 non-null float64 16 100000 non-null float64 17 100000 non-null float64 18 100000 non-null float64 19 100000 non-null float64 B 100000 non-null object C 100000 non-null datetime64[ns] dtypes: datetime64[ns](1), float64(20), object(1) memory usage: 17.5+ MB ``` Appending ``` In [85]: def f1(): ....: result = df ....: for i in range(9): ....: result = result.append(df) ....: return result ....: ``` Concat ``` In [86]: def f2(): ....: result = [] ....: for i in range(10): ....: result.append(df) ....: return pd.concat(result) ....: In [100]: f1().equals(f2()) Out[100]: True In [101]: %timeit f1() 1 loops, best of 3: 1.66 s per loop In [102]: %timeit f2() 1 loops, best of 3: 220 ms per loop ``` Note that I wouldn't even bother trying to pre-allocate. It's somewhat complicated, especially since you are dealing with multiple dtypes (e.g. 
you *could* make a giant frame and simply `.loc` into it and it would work). But `pd.concat` is just dead simple, works reliably, and is fast. And the timing for your sizes from above: ``` In [104]: df = DataFrame(np.random.randn(2500,40)) In [105]: %timeit f1() 10 loops, best of 3: 33.1 ms per loop In [106]: %timeit f2() 100 loops, best of 3: 4.23 ms per loop ```
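Applied to the original question, the pattern is: accumulate the chunks in a plain Python list and call `pd.concat` exactly once at the end. A sketch with in-memory chunks standing in for the CSV files:

```python
import numpy as np
import pandas as pd

def build_frame(chunks):
    """Collect chunks in a list, then concatenate once at the end."""
    pieces = []
    for chunk in chunks:  # e.g. for chunk in (pd.read_csv(f) for f in files)
        pieces.append(chunk)
    return pd.concat(pieces, ignore_index=True)

# Ten 2500x40 chunks, matching the sizes in the question
chunks = [pd.DataFrame(np.random.rand(2500, 40)) for _ in range(10)]
big = build_frame(chunks)
print(big.shape)  # (25000, 40)
```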
max() not returning the correct maximum number
31,692,420
2
2015-07-29T06:07:55Z
31,692,532
7
2015-07-29T06:14:18Z
[ "python", "floating-point" ]
I am using the following code to find out which of the two numbers has a maximum value: ``` maximum = max(2.3,2.300000000001) print maximum ``` But the output I get is `2.3` and not `2.300000000001`. Can anyone explain why this happens?
Don't worry - `max` isn't broken, and `maximum` indeed holds `2.300000000001`. `print`, however, rounds it when printing. You could use [`repr`](https://docs.python.org/2/library/functions.html#func-repr) to prove that: ``` >>> maximum = max(2.3,2.300000000001) >>> print maximum 2.3 >>> print repr(maximum) 2.300000000001 ```
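To spell out the mechanism: in Python 2, `print` goes through `str`, which formats floats with about 12 significant digits, while `repr` shows enough digits to round-trip the value. A quick check (runs under Python 3 as well, where `str` and `repr` of a float agree):

```python
maximum = max(2.3, 2.300000000001)

# str() rounds to ~12 significant digits on Python 2, hiding the difference;
# repr() always shows enough digits to distinguish the two values.
print(str(maximum))   # '2.3' on Python 2; the full value on Python 3
print(repr(maximum))  # 2.300000000001
```

So `max` really did pick the larger number; only the display rounded it away.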
all( generator ) returns True when it should return False
31,692,660
3
2015-07-29T06:21:35Z
31,692,765
7
2015-07-29T06:27:15Z
[ "python", "numpy", "generator", "canopy" ]
I want to check if a string contains all the keywords. I am using the [Enthought Canopy](https://www.enthought.com/products/canopy/) distribution. For example: ``` string = 'I like roses but not violets' key_words = ['roses', 'violets', 'tulips'] ``` I've read that the `all` function would serve me well. When I use this function in the following way ``` if all( keys in string.lower().split() for keys in key_words): print True ``` Then `True` is returned. I would expect `False` to be returned since `tulips` is not in `string.lower().split()`. How can I fix this?
You probably have a `from numpy import *` in your code. `numpy`'s `all` method does not handle generators well. ``` [1]: string = 'I like roses but not violets' [2]: key_words = ['roses', 'violets', 'tulips'] [3]: if all( keys in string.lower().split() for keys in key_words): ...: print True ...: [4]: from numpy import * [5]: if all( keys in string.lower().split() for keys in key_words): print True ...: True ``` --- If the context is beyond your control, then you can use `from __builtin__ import all` to revert `all` to its default version in your file. However, the recommended method is to either do a selective or a qualified import of `numpy`.
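For completeness, here is a minimal reproduction of the clash. `numpy.all` first converts its argument to an array; a generator becomes a 0-d object array whose truth value is just `bool(generator_object)`, which is always `True` (behavior observed on recent NumPy versions):

```python
import numpy as np

words = 'I like roses but not violets'.lower().split()
key_words = ['roses', 'violets', 'tulips']

# The builtin iterates the generator and finds a False element.
builtin_result = all(k in words for k in key_words)
# numpy wraps the generator itself in a 0-d object array: one truthy object.
numpy_result = np.all(k in words for k in key_words)

print(builtin_result)      # False
print(bool(numpy_result))  # True
```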
Lost important .py file (overwritten as 0byte file), but the old version still LOADED IN IPYTHON as module -- can it be retrieved?
31,707,587
21
2015-07-29T17:46:44Z
31,707,930
22
2015-07-29T18:04:05Z
[ "python", "linux", "vim", "ipython", "recovery" ]
Due to my stupidity, while managing several different screen sessions with vim open in many of them, in the process of trying to "organize" my sessions I somehow managed to overwrite a very important .py script with a 0Byte file. HOWEVER, I have an ipython instance open that, when running that same .py file as a module, still remembers the code that used to be there! So did I just learn a hard lesson about backups (my last one was done by vim about a week ago, which would leave me with a lot of work to do), or is there any possible, conceivable way to **retrieve the .py file from an already loaded module?** I probably deserve this for being so cavalier, but I'm seriously desperate here.
As noted in comments, `inspect.getsource` will not work because it depends on the original file (i.e., `module.__file__`). Best option: check to see if there's a `.pyc` file (e.g., `foo.pyc` should be beside `foo.py`). If there is, you can use [Decompile Python 2.7 .pyc](http://stackoverflow.com/questions/8189352/decompile-python-2-7-pyc) to decompile it. The `inspect` module also caches the source. You may be able to get lucky and use `inspect.getsource(module)`, or `inspect.getsourcelines(module.function)` if it has been called in the past. Otherwise you'll need to rebuild the module "manually" by inspecting the exports (i.e., `module.__globals__`). Constants and whatnot are obvious, and for functions you can use `func.func_name` to get the name, `func.__doc__` to get the docstring, `inspect.getargspec(func)` to get the arguments, and `func.func_code` to get details about the code: `co_firstlineno` will get the line number, then `co_code` will get the code. There's more on decompiling that here: [Exploring and decompiling python bytecode](http://stackoverflow.com/questions/1149513/exploring-and-decompiling-python-bytecode) For example, to use `uncompyle2`: ``` >>> def foo(): ... print "Hello, world!" ... >>> from StringIO import StringIO >>> import uncompyle2 >>> out = StringIO() >>> uncompyle2.uncompyle("2.7", foo.func_code, out=out) >>> print out.getvalue() print 'Hello, world!' ``` But, no — I'm not aware of any more straightforward method to take a module and get the source code back out.
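To illustrate the "rebuild manually" route on a function that is still alive in the interpreter (modern attribute spellings shown; on Python 2 these were `func.func_name`, `func.func_code`, and `inspect.getargspec`):

```python
import dis
import inspect

def foo(x, y=2):
    """Add two numbers."""
    return x + y

# None of this touches the (possibly deleted) source file; it reads only
# the live function object.
print(foo.__name__)                 # foo
print(foo.__doc__)                  # Add two numbers.
print(inspect.signature(foo))       # (x, y=2)
print(foo.__code__.co_firstlineno)  # line number of the original def
dis.dis(foo)                        # bytecode listing to reconstruct the body from
```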
patching a class yields "AttributeError: Mock object has no attribute" when accessing instance attributes
31,709,792
7
2015-07-29T19:47:21Z
31,710,001
10
2015-07-29T19:59:41Z
[ "python", "attributes", "mocking", "python-unittest" ]
**The Problem** Using `mock.patch` with `autospec=True` to patch a class is not preserving attributes of instances of that class. **The Details** I am trying to test a class `Bar` that instantiates an instance of class `Foo` as a `Bar` object attribute called `foo`. The `Bar` method under test is called `bar`; it calls method `foo` of the `Foo` instance belonging to `Bar`. In testing this, I am mocking `Foo`, as I only want to test that `Bar` is accessing the correct `Foo` member: ``` import unittest from mock import patch class Foo(object): def __init__(self): self.foo = 'foo' class Bar(object): def __init__(self): self.foo = Foo() def bar(self): return self.foo.foo class TestBar(unittest.TestCase): @patch('foo.Foo', autospec=True) def test_patched(self, mock_Foo): Bar().bar() def test_unpatched(self): assert Bar().bar() == 'foo' ``` The classes and methods work just fine (`test_unpatched` passes), but when I try to patch `Foo` in a test case (tested using both nosetests and pytest) using `autospec=True`, I encounter "AttributeError: Mock object has no attribute 'foo'" ``` 19:39 $ nosetests -sv foo.py test_patched (foo.TestBar) ... ERROR test_unpatched (foo.TestBar) ... 
ok ====================================================================== ERROR: test_patched (foo.TestBar) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/mock.py", line 1201, in patched return func(*args, **keywargs) File "/home/vagrant/dev/constellation/test/foo.py", line 19, in test_patched Bar().bar() File "/home/vagrant/dev/constellation/test/foo.py", line 14, in bar return self.foo.foo File "/usr/local/lib/python2.7/dist-packages/mock.py", line 658, in __getattr__ raise AttributeError("Mock object has no attribute %r" % name) AttributeError: Mock object has no attribute 'foo' ``` Indeed, when I print out `mock_Foo.return_value.__dict__`, I can see that `foo` is not in the list of children or methods: ``` {'_mock_call_args': None, '_mock_call_args_list': [], '_mock_call_count': 0, '_mock_called': False, '_mock_children': {}, '_mock_delegate': None, '_mock_methods': ['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__'], '_mock_mock_calls': [], '_mock_name': '()', '_mock_new_name': '()', '_mock_new_parent': <MagicMock name='Foo' spec='Foo' id='38485392'>, '_mock_parent': <MagicMock name='Foo' spec='Foo' id='38485392'>, '_mock_wraps': None, '_spec_class': <class 'foo.Foo'>, '_spec_set': None, 'method_calls': []} ``` My understanding of autospec is that, if True, the patch specs should apply recursively. Since foo is indeed an attribute of Foo instances, should it not be patched? If not, how do I get the Foo mock to preserve the attributes of Foo instances? **NOTE:** This is a trivial example that shows the basic problem. In reality, I am mocking a third party module.Class -- `consul.Consul` -- whose client I instantiate in a Consul wrapper class that I have. 
As I don't maintain the consul module, I can't modify the source to suit my tests (I wouldn't really want to do that anyway). For what it's worth, `consul.Consul()` returns a consul client, which has an attribute `kv` -- an instance of `consul.Consul.KV`. `kv` has a method `get`, which I am wrapping in an instance method `get_key` in my Consul class. After patching `consul.Consul`, the call to get fails because of AttributeError: Mock object has no attribute kv. **Resources Already Checked:** <http://mock.readthedocs.org/en/latest/helpers.html#autospeccing> <http://mock.readthedocs.org/en/latest/patch.html>
No, autospeccing cannot mock out attributes set in the `__init__` method of the original class (or in any other method). It can only mock out *static attributes*: everything that can be found on the class itself. Otherwise, the mock would have to create an instance of the class you tried to replace with a mock in the first place, which is not a good idea (think of classes that create a lot of real resources when instantiated). The recursive nature of an auto-specced mock is then limited to those static attributes; if `foo` is a class attribute, accessing `Foo().foo` will return an auto-specced mock for that attribute. If you have a class `Spam` whose `eggs` attribute is an object of type `Ham`, then the mock of `Spam.eggs` will be an auto-specced mock of the `Ham` class. The [documentation you read](http://mock.readthedocs.org/en/latest/helpers.html#autospeccing) *explicitly* covers this: > A more serious problem is that it is common for instance attributes to be created in the `__init__` method and not to exist on the class at all. `autospec` can’t know about any dynamically created attributes and restricts the api to visible attributes. You should just *set* the missing attributes yourself: ``` @patch('foo.Foo', autospec=True) def test_patched(self, mock_Foo): mock_Foo.return_value.foo = 'foo' Bar().bar() ``` or create a subclass of your `Foo` class for testing purposes that adds the attribute as a class attribute: ``` class TestFoo(foo.Foo): foo = 'foo' # class attribute @patch('foo.Foo', autospec=TestFoo) def test_patched(self, mock_Foo): Bar().bar() ```
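The same limitation can be seen without `patch` at all, using `mock.create_autospec` directly (a self-contained sketch; `Foo` here mirrors the class from the question):

```python
from unittest import mock

class Foo(object):
    def __init__(self):
        self.foo = 'foo'  # instance attribute: invisible to autospec

# Build a mock specced against the class, as autospec does internally.
spec_mock = mock.create_autospec(Foo, instance=True)

try:
    spec_mock.foo          # not in the spec -> AttributeError
    attr_missing = False
except AttributeError:
    attr_missing = True

print(attr_missing)        # True

spec_mock.foo = 'foo'      # setting it explicitly is allowed (no spec_set)
print(spec_mock.foo)       # foo
```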
Python equivalent of Haskell's [1..] (to index a list)
31,710,499
6
2015-07-29T20:29:49Z
31,710,528
20
2015-07-29T20:31:30Z
[ "python", "list", "loops", "haskell" ]
I have a list of elements in python. I don't know the number of elements in the list. I would like to add indexes to the list. In Haskell, I could do the following

```
zip [1..] "abcdefghijklmnop"
[(1,'a'),(2,'b'),(3,'c'),(4,'d'),(5,'e'),(6,'f'),(7,'g'),(8,'h'),(9,'i'),(10,'j'),(11,'k'),(12,'l'),(13,'m'),(14,'n'),(15,'o'),(16,'p')]
```

Now imagine that the string was of unknown size. This would still work in Haskell, and the integer list gives as many integers as necessary until the string runs out. How would one do the equivalent in Python? I have tried this:

```
s = "abcdefghijklmnop"
indexedlist = []
for i,c in enumerate(s):
    indexedlist.append((i,c))

>>> indexedlist
[(0, 'a'), (1, 'b'), (2, 'c'), (3, 'd'), (4, 'e'), (5, 'f'), (6, 'g'), (7, 'h'), (8, 'i'), (9, 'j'), (10, 'k'), (11, 'l'), (12, 'm'), (13, 'n'), (14, 'o'), (15, 'p')]
```

And it works, but I'm wondering if there is a shorter/cleaner way, since it is 4 lines of code and feels like too much.
Just do `list(enumerate(s))`. This iterates over the `enumerate` object and converts it to a `list`.
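As an aside, since the Haskell example indexes from 1: `enumerate` also accepts an optional start value, so the output can match `zip [1..]` exactly. A small sketch:

```python
s = "abcdefghijklmnop"

# The optional second argument to enumerate sets the starting index,
# so the pairs line up with Haskell's zip [1..].
indexed = list(enumerate(s, 1))

print(indexed[:3])   # [(1, 'a'), (2, 'b'), (3, 'c')]
```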
Can't catch mocked exception because it doesn't inherit BaseException
31,713,054
8
2015-07-29T23:49:32Z
31,873,937
10
2015-08-07T09:13:28Z
[ "python", "exception-handling", "python-requests", "python-3.3", "python-mock" ]
I'm working on a project that involves connecting to a remote server, waiting for a response, and then performing actions based on that response. We catch a couple of different exceptions, and behave differently depending on which exception is caught. For example: ``` def myMethod(address, timeout=20): try: response = requests.head(address, timeout=timeout) except requests.exceptions.Timeout: # do something special except requests.exceptions.ConnectionError: # do something special except requests.exceptions.HTTPError: # do something special else: if response.status_code != requests.codes.ok: # do something special return successfulConnection.SUCCESS ``` To test this, we've written a test like the following ``` class TestMyMethod(unittest.TestCase): def test_good_connection(self): config = { 'head.return_value': type('MockResponse', (), {'status_code': requests.codes.ok}), 'codes.ok': requests.codes.ok } with mock.patch('path.to.my.package.requests', **config): self.assertEqual( mypackage.myMethod('some_address', mypackage.successfulConnection.SUCCESS ) def test_bad_connection(self): config = { 'head.side_effect': requests.exceptions.ConnectionError, 'requests.exceptions.ConnectionError': requests.exceptions.ConnectionError } with mock.patch('path.to.my.package.requests', **config): self.assertEqual( mypackage.myMethod('some_address', mypackage.successfulConnection.FAILURE ) ``` If I run the function directly, everything happens as expected. I even tested by adding `raise requests.exceptions.ConnectionError` to the `try` clause of the function. 
But when I run my unit tests, I get ``` ERROR: test_bad_connection (test.test_file.TestMyMethod) ---------------------------------------------------------------- Traceback (most recent call last): File "path/to/sourcefile", line ###, in myMethod respone = requests.head(address, timeout=timeout) File "path/to/unittest/mock", line 846, in __call__ return _mock_self.mock_call(*args, **kwargs) File "path/to/unittest/mock", line 901, in _mock_call raise effect my.package.requests.exceptions.ConnectionError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "Path/to/my/test", line ##, in test_bad_connection mypackage.myMethod('some_address', File "Path/to/package", line ##, in myMethod except requests.exceptions.ConnectionError: TypeError: catching classes that do not inherit from BaseException is not allowed ``` I tried to change the exception I was patching in to `BaseException` and I got a more or less identical error. I've read <http://stackoverflow.com/a/18163759/3076272> already, so I think it must be a bad `__del__` hook somewhere, but I'm not sure where to look for it or what I can even do in the mean time. I'm also relatively new to `unittest.mock.patch()` so it's very possible that I'm doing something wrong there as well. This is a Fusion360 add-in so it is using Fusion 360's packaged version of Python 3.3 - as far as I know it's a vanilla version (i.e. they don't roll their own) but I'm not positive of that.
I could reproduce the error with a minimal example:

foo.py:

```
class MyError(Exception):
    pass


class A:
    def inner(self):
        err = MyError("FOO")
        print(type(err))
        raise err

    def outer(self):
        try:
            self.inner()
        except MyError as err:
            print("caught", err)
            return "OK"
```

Test without mocking:

```
class FooTest(unittest.TestCase):
    def test_inner(self):
        a = foo.A()
        self.assertRaises(foo.MyError, a.inner)

    def test_outer(self):
        a = foo.A()
        self.assertEquals("OK", a.outer())
```

OK, all is fine, both tests pass.

The problem comes with the mocks. As soon as the class MyError is mocked, the `except` clause cannot catch anything and I get the same error as the example from the question:

```
class FooTest(unittest.TestCase):
    def test_inner(self):
        a = foo.A()
        self.assertRaises(foo.MyError, a.inner)

    def test_outer(self):
        with unittest.mock.patch('foo.MyError'):
            a = foo.A()
            self.assertEquals("OK", a.outer())
```

This immediately gives:

```
ERROR: test_outer (__main__.FooTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "...\foo.py", line 11, in outer
    self.inner()
  File "...\foo.py", line 8, in inner
    raise err
TypeError: exceptions must derive from BaseException

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<pyshell#78>", line 8, in test_outer
  File "...\foo.py", line 12, in outer
    except MyError as err:
TypeError: catching classes that do not inherit from BaseException is not allowed
```

Here I get a first `TypeError` that you did not have, because I am raising a mock while you forced a true exception with `'requests.exceptions.ConnectionError': requests.exceptions.ConnectionError` in config. But the problem remains that **the `except` clause tries to catch a mock**.

TL/DR: as you mock the full `requests` package, the `except requests.exceptions.ConnectionError` clause tries to catch a mock. As the mock is not really a `BaseException`, it causes the error.
The only solution I can imagine is not to mock the full `requests` package but only the parts that are not exceptions. I must admit I could not find a way to tell mock to *mock everything except this*, but in your example you only need to patch `requests.head`. So I think this should work:

```
def test_bad_connection(self):
    with mock.patch('path.to.my.package.requests.head',
                    side_effect=requests.exceptions.ConnectionError):
        self.assertEqual(
            mypackage.myMethod('some_address'),
            mypackage.successfulConnection.FAILURE
        )
```

That is: only patch the `head` method, with the exception as its side effect.
Replace a character with multiple characters using Python
31,714,940
2
2015-07-30T03:56:38Z
31,714,963
7
2015-07-30T03:59:42Z
[ "python", "string", "generator", "str-replace" ]
I have been trying to solve the following problem using Python, and so far without success: Assume you have a string with the characters '0', '1' and '?'. the '?' sign can be either '0' or '1'. Your goal is to print all the possible outputs for such given string.For example, the output for the string '0?1?' should be '0010', '0011', '0110' and '0111' I have tried the following: ``` def comb(S): if not '?' in S: yield S else: yield comb(S.replace('?','0',1)) yield comb(S.replace('?','1',1)) S = '0?1??011' S_generator = comb(S) for s in S_generator: print s ``` The result is strange, and is not what I am trying to get: ``` <generator object comb at 0x106b2ceb0> <generator object comb at 0x106b2cf00> ``` Any idea why it is not working, and how I should change the code for this to work?
`comb()` is a generator function. When you do

```
yield comb(S.replace('?','0',1))
```

the `yield` statement does not automatically loop over all the values in the generator and yield them; you have to loop over the values and yield them one by one. Example:

```
def comb(S):
    if not '?' in S:
        yield S
    else:
        for i in comb(S.replace('?','0',1)):
            yield i
        for i in comb(S.replace('?','1',1)):
            yield i
```

---

Example/Demo:

```
>>> def comb(S):
...     if not '?' in S:
...         yield S
...     else:
...         for i in comb(S.replace('?','0',1)):
...             yield i
...         for i in comb(S.replace('?','1',1)):
...             yield i
...
>>> for s in comb('abc?def?'):
...     print(s)
...
abc0def0
abc0def1
abc1def0
abc1def1
>>> for s in comb('0?1?'):
...     print(s)
...
0010
0011
0110
0111
>>> S = '0?1??011'
>>> for s in comb(S):
...     print(s)
...
00100011
00101011
00110011
00111011
01100011
01101011
01110011
01111011
```

---

**[Edit]**: Note that as of Python 3.3, you can use the new [yield from](https://docs.python.org/3/whatsnew/3.3.html#pep-380) syntax:

```
yield from comb(S.replace('?','0',1))
yield from comb(S.replace('?','1',1))
```
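For completeness, here is a sketch of the whole generator rewritten with the `yield from` syntax mentioned in the edit (requires Python 3.3+; the behavior is identical to the loop version above):

```python
def comb(S):
    if '?' not in S:
        yield S
    else:
        # Each '?' branches into a '0' case and a '1' case.
        yield from comb(S.replace('?', '0', 1))
        yield from comb(S.replace('?', '1', 1))

print(list(comb('0?1?')))   # ['0010', '0011', '0110', '0111']
```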
replacing of numbers to zero in a sum
31,716,622
2
2015-07-30T06:31:12Z
31,716,674
12
2015-07-30T06:33:41Z
[ "python" ]
I have to define a function `'func(a, b, c)'` in which there are 3 variables it calculates their sum. I have to check if there value is greater than `'13'` then the number becomes `'0'` eg., `'def func(3,4,14)'` ---> 7 (3+4+0) I've tried this code below: ``` def no_teen_sum(a, b, c): if(a>13): a=0 elif(b>13): b=0 elif(c>13): c=0 return a+b+c ``` But it didn't work. Am I doing wrong somewhere? Please suggest me the correct way to do it...
Your problem is using `elif`. You want to use `if`:

```
def no_teen_sum(a, b, c):
    if a > 13:
        a = 0
    if b > 13:
        b = 0
    if c > 13:
        c = 0
    return a + b + c
```

To create a general function you could use [`*args`](https://docs.python.org/2/tutorial/controlflow.html#arbitrary-argument-lists) and [`sum`](https://docs.python.org/2/library/functions.html#sum) in a [variadic function](https://en.wikipedia.org/wiki/Variadic_function):

```
def no_teen_sum(*args):
    return sum(arg if arg <= 13 else 0 for arg in args)
```

(Note the `<=`: values up to and including 13 are kept, matching the `> 13` check above.)

Example:

```
>>> no_teen_sum(1, 2, 14)
3
```
Why does printing a file with unicode characters does not produce the emojis?
31,725,918
2
2015-07-30T14:02:57Z
31,726,061
8
2015-07-30T14:09:01Z
[ "python", "unicode" ]
The content of the text file is

```
u'\u26be\u26be\u26be'
```

When I run the script...

```
import codecs
f1 = codecs.open("test1.txt", "r", "utf-8")
text = f1.read()
print text

str1 = u'\u26be\u26be\u26be'
print(str1)
```

I get the output...

```
u'\u26be\u26be\u26be'
⚾⚾⚾
```

Question: Why is it that a string with the same content as the file is able to produce the emojis properly?
File content `u'\u26be\u26be\u26be'` is like `r"u'\u26be\u26be\u26be'"`. In other words, characters of `u`, `\`, `u`, `2`, ... You can convert such a string to the string `⚾⚾⚾` using [`ast.literal_eval`](https://docs.python.org/2/library/ast.html#ast.literal_eval):

```
import ast
import codecs

with codecs.open("test1.txt", "r", "utf-8") as f1:
    text = ast.literal_eval(f1.read())
    print text
    ...
```

But, why does the file contain such a string (`u'\u26be\u26be\u26be'`) instead of `⚾⚾⚾`? Maybe you need to consider redesigning the file-saving part.
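To see the distinction without any file I/O, here is a small self-contained sketch: the raw text the file holds is just a Python-literal representation, and `ast.literal_eval` turns it back into the actual string.

```python
import ast

raw = r"u'\u26be\u26be\u26be'"  # the 21 literal characters stored in the file
value = ast.literal_eval(raw)   # the 3-character unicode string the user expected

assert len(raw) == 21
assert value == u'\u26be\u26be\u26be'
assert len(value) == 3
```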
Is there a best way to change given number of days to years months weeks days in Python?
31,739,208
3
2015-07-31T06:12:16Z
31,739,388
7
2015-07-31T06:24:58Z
[ "python", "django", "datetime" ]
I am given a number of days to convert to years, months, weeks and days, taking a year as 365 days and a month as 30 days. How do I do it in an effective way?

```
def get_year_month_week_day(days):
    year = days / 365
    days = days % 365
    month = days / 30
    days = days % 30
    week = days / 7
    day = days % 7
    return year,month,week,day

def add_s(num):
    if num > 1:
        return 's '
    return ' '

@register.filter
def daysleft(fdate):
    cdate = datetime.datetime.now().date()
    days = (fdate.date() - cdate).days
    if days == 0:
        return "Today"
    elif days == 1:
        return "Tomorrow"
    elif days > 0:
        year, month, week, day = get_year_month_week_day(days)
        print year, month, week, day
        days_left = ""
        if year > 0:
            days_left += str(year) + " year" + add_s(year)
        if month > 0:
            days_left += str(month) + " month" + add_s(month)
        if week > 0:
            days_left += str(week) + " week" + add_s(week)
        if day > 0:
            days_left += str(day) + " day" + add_s(day)
        return days_left + " left"
    else:
        return "No time left"
```
It is much easier if you use a third-party library named [`python-dateutil`](https://dateutil.readthedocs.org/en/latest/relativedelta.html#module-dateutil.relativedelta): ``` >>> import datetime >>> from dateutil.relativedelta import relativedelta >>> now = datetime.datetime.now() >>> td = datetime.timedelta(days=500) >>> five_hundred_days_ago = now - td >>> print relativedelta(now, five_hundred_days_ago) relativedelta(years=+1, months=+4, days=+13) ``` I hope this helps.
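If adding a third-party dependency is not an option, the fixed-length arithmetic from the question (365-day years, 30-day months) can at least be tidied up with `divmod`. This is a sketch of the same approximation, not a calendar-accurate conversion like `relativedelta`:

```python
def get_year_month_week_day(days):
    # divmod(a, b) returns (a // b, a % b) in a single step.
    years, days = divmod(days, 365)
    months, days = divmod(days, 30)
    weeks, days = divmod(days, 7)
    return years, months, weeks, days

print(get_year_month_week_day(400))   # (1, 1, 0, 5)
```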
Create a list of integers with duplicate values in Python
31,743,603
6
2015-07-31T10:08:10Z
31,743,627
11
2015-07-31T10:09:46Z
[ "python", "list", "python-2.7", "integer" ]
This question will no doubt be a piece of cake for a Python 2.7 expert (or enthusiast), so here it is. How can I create a list of integers where each value is duplicated next to its original value, like this?

```
a = list([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])
```

It's really easy to do it like this:

```
for i in range(10):
    a.append(int(i / 2))
```

But I'd rather have it in one simple line starting a = *desired output.*

Thank you for taking the time to answer.

PS: none of the "**Questions that may already have your answer**" were what I was looking for.
``` >>> [i//2 for i in xrange(10)] [0, 0, 1, 1, 2, 2, 3, 3, 4, 4] ``` A simple generic approach: ``` >>> f = lambda rep, src: reduce(lambda l, e: l+rep*[e], src, []) >>> f(2, xrange(5)) [0, 0, 1, 1, 2, 2, 3, 3, 4, 4] >>> f(3, ['a', 'b']) ['a', 'a', 'a', 'b', 'b', 'b'] ```
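A standard-library variant of the same generic idea: `itertools.repeat` plus `chain.from_iterable` duplicates each element any number of times (a sketch; the function name `duplicate_each` is mine):

```python
from itertools import chain, repeat

def duplicate_each(src, times):
    # Repeat every element of src `times` times, preserving order.
    return list(chain.from_iterable(repeat(x, times) for x in src))

print(duplicate_each(range(5), 2))   # [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
```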
What is choice_set.all in Django tutorial
31,746,571
4
2015-07-31T12:46:30Z
31,746,649
8
2015-07-31T12:50:24Z
[ "python", "django" ]
In the Django tutorial: ``` {% for choice in question.choice_set.all %} ``` I couldn't find a brief explanation for this. I know that in the admin.py file, I have created a foreign key of Question model on the choice model such that for every choice there is a question.
That's the Django metaclass magic in action! Since you have a foreign key from `Choice` model to the `Question` model, you will automagically get the [inverse relation](https://docs.djangoproject.com/en/1.8/ref/models/relations/) on instances of the `question` model back to the set of possible choices. `question.choice_set.all` is the queryset of choices which point to your `question` instance as the foreign key. The default name for this inverse relationship is `choice_set` (because the related model is named `Choice`). But you can override this default name by specifying the `related_name` kwarg on the foreign key: ``` class Choice(models.Model): ... question = models.ForeignKey(Question, related_name='choices') ```
How do you install mysql-connector-python (development version) through pip?
31,748,278
12
2015-07-31T14:11:52Z
34,027,037
31
2015-12-01T18:02:38Z
[ "python", "mysql", "django" ]
I have a virtualenv in which I am running Django 1.8 with Python 3.4 I am trying to get support for MySQL however I am having trouble getting the different connectors to work. I have always used mysql-connector-python with django 1.7 and would like to continue using it. The development version of mysql-connector-python (2.1.2) seems to have support for Django 1.8 now. How do I install the development version using pip install (within my virtualenv)? I have tried running the following command: ``` pip install mysql-connector-python==2.1.2 --allow-external mysql-connector-python ``` but the 2.1.2 developer version is not available there: ``` could not find a version that satisfies the requirement mysql-connector-python==2.1.2 (from versions: 1.1.4, 1.1.5, 1.1.6, 1.2.2, 1.2.3, 2.0.1, 2.0.2, 2.0.3, 2.0.4) Some insecure and unverifiable files were ignored (use --allow-unverified mysql-connector-python to allow). No matching distribution found for mysql-connector-python==2.1.2 ``` I have tried downloading the file directly from: <http://dev.mysql.com/downloads/connector/python/> ...but for some reason after installation the connector is not available within my virtualenv Help please :-) Thank you. ' **EDIT:** I have tried `pip install mysql-python` but it is not supported by Python 3.4
I agree, the debian packages appear to be broken for at least Ubuntu 14.04. The apt-get version and the pip version do not include the 2.1.x releases. To get it done, I had to grab the source: ``` $ git clone https://github.com/mysql/mysql-connector-python.git $ cd mysql-connector-python $ python ./setup.py build $ sudo python ./setup.py install ... >>> import mysql.connector as msc >>> msc.__version__ '2.1.3' >>> ```
Too many if statements
31,748,617
5
2015-07-31T14:27:40Z
31,748,695
10
2015-07-31T14:30:55Z
[ "python", "if-statement", "conditional", "code-readability", "code-maintainability" ]
I have some topic to discuss. I have a fragment of code with 24 ifs/elifs. *Operation* is my own class that represents functionality similar to Enum. Here is a fragment of the code:

```
if operation == Operation.START:
    strategy = strategy_objects.StartObject()
elif operation == Operation.STOP:
    strategy = strategy_objects.StopObject()
elif operation == Operation.STATUS:
    strategy = strategy_objects.StatusObject()
(...)
```

I have concerns from a readability point of view. Is it better to change it into 24 classes and use polymorphism? I am not convinced that it will make my code maintainable... On one hand those ifs are pretty clear and shouldn't be hard to follow; on the other hand there are too many of them.

My question is rather general; however, I'm writing code in Python, so I cannot use constructions like switch. What do you think?

UPDATE: One important thing is that *StartObject()*, *StopObject()* and *StatusObject()* are constructors and I wanted to assign an object to the *strategy* reference.
You could possibly use a dictionary. Dictionaries store references, which means functions are perfectly viable to use, like so:

```
operationFuncs = {
    Operation.START: strategy_objects.StartObject,
    Operation.STOP: strategy_objects.StopObject,
    Operation.STATUS: strategy_objects.StatusObject,
    (...)
}
```

It's good to have a default operation just in case, so when you run it use a `try except` and handle the exception (i.e. the equivalent of your `else` clause):

```
try:
    strategy = operationFuncs[operation]()
except KeyError:
    strategy = strategy_objects.DefaultObject()
```

Alternatively use a dictionary's `get` method, which allows you to specify a default if the key you provide isn't found:

```
strategy = operationFuncs.get(operation, strategy_objects.DefaultObject)()
```

Note that you don't include the parentheses when storing the constructors in the dictionary; you only add them when calling the looked-up value. Also this requires that `Operation.START` be hashable, but that should be the case since you described it as a class similar to an enum.
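Here is a self-contained sketch of the whole pattern; the `StartObject`/`StopObject` classes and the string keys are stand-ins for the real `Operation` and `strategy_objects`:

```python
class StartObject(object):
    name = 'start'

class StopObject(object):
    name = 'stop'

class DefaultObject(object):
    name = 'default'

# Map operation keys to the classes themselves; note the absence of
# parentheses, so instantiation only happens at lookup time.
operation_funcs = {
    'START': StartObject,
    'STOP': StopObject,
}

def make_strategy(operation):
    # Look up the class (falling back to the default), then call it.
    return operation_funcs.get(operation, DefaultObject)()

print(make_strategy('START').name)    # start
print(make_strategy('UNKNOWN').name)  # default
```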
How to add percentages on top of bars in seaborn?
31,749,448
5
2015-07-31T15:04:38Z
31,754,317
9
2015-07-31T20:03:17Z
[ "python", "matplotlib", "seaborn" ]
Given the following count plot how do I place percentages on top of the bars? ``` import seaborn as sns sns.set(style="darkgrid") titanic = sns.load_dataset("titanic") ax = sns.countplot(x="class", hue="who", data=titanic) ``` [![enter image description here](http://i.stack.imgur.com/b1m5F.png)](http://i.stack.imgur.com/b1m5F.png) For example for "First" I want total First men/total First, total First women/total First, and total First children/total First on top of their respective bars. Please let me know if my explanation is not clear. Thanks!
`sns.barplot` doesn't explicitly return the barplot values the way `matplotlib.pyplot.bar` does (see last para), but if you've plotted nothing else you can risk assuming that all the `patches` in the axes are your values. Then you can use the sub-totals that the barplot function has calculated for you:

```
from matplotlib.pyplot import show
sns.set(style="darkgrid")
titanic = sns.load_dataset("titanic")
total = float(len(titanic))  # one person per row
ax = sns.barplot(x="class", hue="who", data=titanic)
for p in ax.patches:
    height = p.get_height()
    ax.text(p.get_x(), height + 3, '%1.2f' % (height / total))
show()
```

produces

[![enter image description here](http://i.stack.imgur.com/xe7yB.png)](http://i.stack.imgur.com/xe7yB.png)

An alternate approach is to do the sub-summing explicitly, e.g. with the excellent `pandas`, plot with `matplotlib`, and also do the styling yourself. (Though you can get quite a lot of styling from the `sns` context even when using `matplotlib` plotting functions. Try it out.)
TypeError: 'float' object is not iterable, Python list
31,749,695
3
2015-07-31T15:18:23Z
31,749,762
12
2015-07-31T15:21:50Z
[ "python", "list", "python-2.7", "loops", "typeerror" ]
I am writing a program in Python and am trying to extend a list as such:

```
spectrum_mass[second] = [1.0, 2.0, 3.0]
spectrum_intensity[second] = [4.0, 5.0, 6.0]
spectrum_mass[first] = [1.0, 34.0, 35.0]
spectrum_intensity[first] = [7.0, 8.0, 9.0]

for i in spectrum_mass[second]:
    if i not in spectrum_mass[first]:
        spectrum_intensity[first].extend(spectrum_intensity[second][spectrum_mass[second].index(i)])
        spectrum_mass[first].extend(i)
```

However when I try doing this I am getting `TypeError: 'float' object is not iterable` on line 3. To be clear, `spectrum_mass[second]` is a list (that is in a dictionary; second and first are the keys), as are `spectrum_intensity[first]`, `spectrum_intensity[second]` and `spectrum_mass[second]`. All lists contain floats.
I am guessing the issue is with the line -

```
spectrum_intensity[first].extend(spectrum_intensity[second][spectrum_mass[second].index(i)])
```

The `extend()` function expects an iterable, but you are trying to give it a float. The same behavior in a much smaller example -

```
>>> l = [1,2]
>>> l.extend(1.2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'float' object is not iterable
```

You want to use `.append()` instead -

```
spectrum_intensity[first].append(spectrum_intensity[second][spectrum_mass[second].index(i)])
```

The same issue exists on the next line as well; use `append()` instead of `extend()` for -

```
spectrum_mass[first].extend(i)
```
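The difference in one runnable demo: `extend` iterates over its argument, while `append` adds it as a single element, which is exactly why a bare float works with one and not the other:

```python
values = [1.0, 2.0]

values.append(3.5)         # append takes any single object
values.extend([4.0, 5.0])  # extend takes an iterable and unpacks it
assert values == [1.0, 2.0, 3.5, 4.0, 5.0]

try:
    values.extend(6.0)     # a bare float is not iterable
except TypeError as exc:
    print(exc)             # 'float' object is not iterable
```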
How do I close an image opened in Pillow?
31,751,464
4
2015-07-31T17:00:27Z
31,751,501
10
2015-07-31T17:02:13Z
[ "python", "python-imaging-library", "pillow" ]
I have a python file with the Pillow library imported. I can open an image with ``` Image.open(test.png) ``` But how do I close that image? I'm not using Pillow to edit the image, just to show the image and allow the user to choose to save it or delete it.
With [`Image.close()`](https://pillow.readthedocs.org/en/latest/reference/Image.html#PIL.Image.Image.close). You can also do it in a with block:

```
with Image.open('test.png') as test_image:
    do_things(test_image)
```

An example of using `Image.close()`:

```
test = Image.open('test.png')
test.close()
```
Python vs perl sort performance
31,752,670
5
2015-07-31T18:20:37Z
31,753,182
7
2015-07-31T18:52:48Z
[ "python", "performance", "perl", "sorting" ]
***Solution*** This solved all issues with my Perl code (plus extra implementation code.... :-) ) In conclusion both Perl and Python are equally awesome.

```
use WWW::Curl::Easy;
```

Thanks to ALL who responded, very much appreciated.

***Edit*** It appears that the Perl code I am using is spending the majority of its time performing the http get, for example:

```
my $start_time = gettimeofday;
$request = HTTP::Request->new('GET', 'http://localhost:8080/data.json');
$response = $ua->request($request);
$page = $response->content;
my $end_time = gettimeofday;
print "Time taken @{[ $end_time - $start_time ]} seconds.\n";
```

The result is:

```
Time taken 74.2324419021606 seconds.
```

My python code in comparison:

```
start = time.time()
r = requests.get('http://localhost:8080/data.json', timeout=120, stream=False)
maxsize = 100000000
content = ''
for chunk in r.iter_content(2048):
    content += chunk
    if len(content) > maxsize:
        r.close()
        raise ValueError('Response too large')
end = time.time()
timetaken = end-start
print timetaken
```

The result is:

```
20.3471381664
```

In both cases the sort times are sub second. So first of all I apologise for the misleading question, and it is another lesson for me to never ever make assumptions.... :-)

I'm not sure what is the best thing to do with this question now. Perhaps someone can propose a better way of performing the request in Perl?

***End of edit***

This is just a quick question regarding sort performance differences in Perl vs Python. This is not a question about which language is better/faster etc; for the record, I first wrote this in Perl, noticed the time the sort was taking, and then tried to write the same thing in Python to see how fast it would be.
I simply want to know, **how can I make the perl code perform as fast as the python code?** Lets say we have the following json: ``` ["3434343424335": { "key1": 2322, "key2": 88232, "key3": 83844, "key4": 444454, "key5": 34343543, "key6": 2323232 }, "78237236343434": { "key1": 23676722, "key2": 856568232, "key3": 838723244, "key4": 4434544454, "key5": 3432323543, "key6": 2323232 } ] ``` Lets say we have a list of around 30k-40k records which we want to sort by one of the sub keys. We then want to build a new array of records ordered by the sub key. Perl - Takes around 27 seconds ``` my @list; $decoded = decode_json($page); foreach my $id (sort {$decoded->{$b}->{key5} <=> $decoded->{$a}->{key5}} keys %{$decoded}) { push(@list,{"key"=>$id,"key1"=>$decoded->{$id}{key1}...etc)); } ``` Python - Takes around 6 seconds ``` list = [] data = json.loads(content) data2 = sorted(data, key = lambda x: data[x]['key5'], reverse=True) for key in data2: tmp= {'id':key,'key1':data[key]['key1'],etc.....} list.append(tmp) ``` For the perl code, I have tried using the following tweaks: ``` use sort '_quicksort'; # use a quicksort algorithm use sort '_mergesort'; # use a mergesort algorithm ```
Your benchmark is flawed, you're benchmarking multiple variables, not one. It is not just sorting data, but it is also doing JSON decoding, and creating strings, and appending to an array. You can't know how much time is spent sorting and how much is spent doing everything else. The matter is made worse in that there are several different JSON implementations in Perl each with their own different performance characteristics. Change the underlying JSON library and the benchmark will change again. If you want to benchmark sort, you'll have to change your benchmark code to eliminate the cost of loading your test data from the benchmark, JSON or not. Perl and Python have their own internal benchmarking libraries that can benchmark individual functions, but their instrumentation can make them perform far less well than they would in the real world. The performance drag from each benchmarking implementation will be different and might introduce a false bias. These benchmarking libraries are more useful for comparing two functions in the same program. For comparing between languages, keep it simple. Simplest thing to do to get an accurate benchmark is to time them within the program using the wall clock. ``` # The current time to the microsecond. use Time::HiRes qw(gettimeofday); my @list; my $decoded = decode_json($page); my $start_time = gettimeofday; foreach my $id (sort {$decoded->{$b}->{key5} <=> $decoded->{$a}->{key5}} keys %{$decoded}) { push(@list,{"key"=>$id,"key1"=>$decoded->{$id}{key1}...etc)); } my $end_time = gettimeofday; print "sort and append took @{[ $end_time - $start_time ]} seconds\n"; ``` (I leave the Python version as an exercise) From here you can improve your technique. You can use CPU seconds instead of wall clock. The array append and cost of creating the string are still involved in the benchmark, they can be eliminated so you're just benchmarking sort. And so on. 
Additionally, you can use [a profiler](https://metacpan.org/pod/Devel::NYTProf) to find out where your programs are spending their time. These have the same raw performance caveats as benchmarking libraries, the results are only useful to find out what percentage of its time a program is using where, but it will prove useful to quickly see if your benchmark has unexpected drag. The important thing is to benchmark what you think you're benchmarking.
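For the Python side of the comparison, the same wall-clock technique looks like this (random synthetic records stand in for the decoded JSON payload):

```python
import random
import time

# Synthetic stand-in for the decoded JSON: 40k records keyed by id.
data = {str(i): {'key5': random.randint(0, 10**9)} for i in range(40000)}

start = time.time()
# Only the sort is inside the timed region.
ordered = sorted(data, key=lambda k: data[k]['key5'], reverse=True)
elapsed = time.time() - start

print('sort took %.4f seconds' % elapsed)
assert data[ordered[0]]['key5'] >= data[ordered[-1]]['key5']
```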
How can I know which element in a list triggered an any() function?
31,759,256
6
2015-08-01T07:04:25Z
31,759,295
9
2015-08-01T07:08:55Z
[ "python", "python-2.7" ]
I'm developing a Python program to detect names of cities in a list of records. The code I've developed so far is the following: ``` aCities = ['MELBOURNE', 'SYDNEY', 'PERTH', 'DUBAI', 'LONDON'] cxTrx = db.cursor() cxTrx.execute( 'SELECT desc FROM AccountingRecords' ) for row in cxTrx.fetchall() : if any( city in row[0] for city in aCities ) : #print the name of the city that fired the any() function else : # no city name found in the accounting record ``` The code works well to detect when a city in the aCities' list is found in the accounting record but as the any() function just returns True or False I'm struggling to know which city (Melbourne, Sydney, Perth, Dubai or London) triggered the exit. I've tried with aCities.index and queue but no success so far.
I don't think it's possible with `any`. You can use [`next`](https://docs.python.org/2/library/functions.html#next) with default value: ``` for row in cxTrx.fetchall() : city = next((city for city in aCities if city in row[0]), None) if city is not None: #print the name of the city that fired the any() function else : # no city name found in the accounting record ```
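A standalone sketch of the `next`-with-default idiom, with made-up row text in place of the database cursor:

```python
cities = ['MELBOURNE', 'SYDNEY', 'PERTH', 'DUBAI', 'LONDON']

def first_city(description):
    # next() returns the first matching city, or the default (None)
    # when the generator yields nothing.
    return next((c for c in cities if c in description), None)

print(first_city('FLIGHT TO SYDNEY 12/08'))  # SYDNEY
print(first_city('NO CITY HERE'))            # None
```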
String comparison '1111' < '99' is True
31,760,478
2
2015-08-01T09:47:31Z
31,760,495
10
2015-08-01T09:49:05Z
[ "python" ]
There is something wrong when you compare two strings like this:

```
>>> "1111">'19'
False
>>> "1111"<'19'
True
```

Why is '1111' less than '19'?
Because strings are compared [*lexicographically*](https://en.wikipedia.org/wiki/Lexicographical_order). `'1'` is smaller than `'9'` (comes earlier in the character set). It doesn't matter that there are other characters after that. If you want to compare *numbers* you have to convert the string to a number first: ``` >>> int('1111') > int('19') True ``` otherwise this is comparing exactly like you'd compare dictionary words; `Aaaa` is smaller than `Ab`
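The two comparison modes side by side, as a small runnable demo:

```python
# Compared as text, character by character: '1' < '9' decides it
# before the lengths are ever considered.
assert '1111' < '19'

# Compared as numbers, magnitude decides it.
assert int('1111') > int('19')

# The same effect with words: 'Aaaa' sorts before 'Ab'.
assert sorted(['Ab', 'Aaaa']) == ['Aaaa', 'Ab']
```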
swig unable to find openssl conf
31,762,106
2
2015-08-01T12:59:53Z
31,861,876
16
2015-08-06T17:13:08Z
[ "python", "ubuntu", "openssl", "swig", "m2crypto" ]
Trying to install m2crypto and getting these errors. Can anyone help?

```
SWIG/_evp.i:12: Error: Unable to find 'openssl/opensslconf.h'
SWIG/_ec.i:7: Error: Unable to find 'openssl/opensslconf.h'
```
```
ln -s /usr/include/x86_64-linux-gnu/openssl/opensslconf.h /usr/include/openssl/opensslconf.h
```

I just created this symlink and everything worked fine.
ImportError: cannot import name RAND_egd
31,762,371
10
2015-08-01T13:28:24Z
31,763,219
7
2015-08-01T15:00:54Z
[ "python", "ssl", "import", "executable", "py2exe" ]
I've tried to create an exe file using py2exe. I've recently updated Python from 2.7.7 to 2.7.10 to be able to work with `requests` - `proxies`. Before the update everything worked fine, but now the newly created exe file raises this error:

```
Traceback (most recent call last):
  File "puoka_2.py", line 1, in <module>
    import mLib
  File "mLib.pyc", line 4, in <module>
  File "urllib2.pyc", line 94, in <module>
  File "httplib.pyc", line 71, in <module>
  File "socket.pyc", line 68, in <module>
ImportError: cannot import name RAND_egd
```

It could probably be repaired by changing `options` in the setup.py file, but I can't figure out what I have to write there. I've tried `options = {'py2exe': {'packages': ['requests','urllib2']}})` but with no success. It works as a Python script but not as an exe. Does anybody know what to do?

EDIT: I've tried to put this import into the `setup.py` file: `from _ssl import RAND_egd` and it says that it can't be imported.

EDIT2: Setup.py:

```
from distutils.core import setup
import py2exe
# from _ssl import RAND_egd

setup(
    console=['puoka_2.py'],
    options = {'py2exe': {'packages': ['requests']}})
```
According to search results on Google, this seems to be a very rare error. I don't know exactly what is wrong, but I found a **workaround** for it, so if somebody experiences this problem, maybe this answer helps. Go to the `socket.py` file and search for `RAND_egd`. There is a block of code (line 67 in my case): ``` from _ssl import SSLError as sslerror from _ssl import \ RAND_add, \ RAND_status, \ SSL_ERROR_ZERO_RETURN, \ SSL_ERROR_WANT_READ, \ SSL_ERROR_WANT_WRITE, \ SSL_ERROR_WANT_X509_LOOKUP, \ SSL_ERROR_SYSCALL, \ SSL_ERROR_SSL, \ SSL_ERROR_WANT_CONNECT, \ SSL_ERROR_EOF, \ SSL_ERROR_INVALID_ERROR_CODE try: from _ssl import RAND_egd except ImportError: # LibreSSL does not provide RAND_egd pass ``` All you have to do is comment out these 5 lines: ``` #try: #from _ssl import RAND_egd #except ImportError: ## LibreSSL does not provide RAND_egd #pass ``` I don't know why it raises the `ImportError`, because there is a `try - except` block with `pass`, so the error should not be raised, but this let me successfully run the `exe` file. EDIT: WARNING: I don't know whether this could cause other problems; I have not experienced any yet.
pip installation /usr/local/opt/python/bin/python2.7: bad interpreter: No such file or directory
31,768,128
21
2015-08-02T02:49:20Z
31,769,149
14
2015-08-02T06:21:35Z
[ "python", "osx", "installation", "pip", "osx-mavericks" ]
I don't know what's the deal but I am stuck following some stackoverflow solutions which gets nowhere. Can you please help me on this? ``` Monas-MacBook-Pro:CS764 mona$ sudo python get-pip.py The directory '/Users/mona/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. The directory '/Users/mona/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. /tmp/tmpbSjX8k/pip.zip/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning. Collecting pip Downloading pip-7.1.0-py2.py3-none-any.whl (1.1MB) 100% |████████████████████████████████| 1.1MB 181kB/s Installing collected packages: pip Found existing installation: pip 1.4.1 Uninstalling pip-1.4.1: Successfully uninstalled pip-1.4.1 Successfully installed pip-7.1.0 Monas-MacBook-Pro:CS764 mona$ pip --version -bash: /usr/local/bin/pip: /usr/local/opt/python/bin/python2.7: bad interpreter: No such file or directory ```
I'm guessing you have two python installs, or two pip installs, one of which has been partially removed. Why do you use `sudo`? Ideally you should be able to install and run everything from your user account instead of using root. If you mix root and your local account together you are more likely to run into permissions issues (e.g. see the warning it gives about "parent directory is not owned by the current user"). What do you get if you run this? ``` $ head -n1 /usr/local/bin/pip ``` This will show you which python binary `pip` is trying to use. If it's pointing to `/usr/local/opt/python/bin/python2.7`, then try running this: ``` $ ls -al /usr/local/opt/python/bin/python2.7 ``` If this says "No such file or directory", then pip is trying to use a python binary that has been removed. Next, try this: ``` $ which python $ which python2.7 ``` to see the path of the python binary that's actually working. Since it looks like pip was successfully installed somewhere, it could be that `/usr/local/bin/pip` is part of an older installation of pip that's higher up on the `PATH`. To test that, you may try moving the non-functioning `pip` binary out of the way like this (might require `sudo`): ``` $ mv /usr/local/bin/pip /usr/local/bin/pip.old ``` Then try running your `pip --version` command again. Hopefully it picks up the correct version and runs successfully.
pip installation /usr/local/opt/python/bin/python2.7: bad interpreter: No such file or directory
31,768,128
21
2015-08-02T02:49:20Z
33,872,341
90
2015-11-23T13:31:00Z
[ "python", "osx", "installation", "pip", "osx-mavericks" ]
I don't know what's the deal but I am stuck following some stackoverflow solutions which gets nowhere. Can you please help me on this? ``` Monas-MacBook-Pro:CS764 mona$ sudo python get-pip.py The directory '/Users/mona/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. The directory '/Users/mona/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. /tmp/tmpbSjX8k/pip.zip/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning. Collecting pip Downloading pip-7.1.0-py2.py3-none-any.whl (1.1MB) 100% |████████████████████████████████| 1.1MB 181kB/s Installing collected packages: pip Found existing installation: pip 1.4.1 Uninstalling pip-1.4.1: Successfully uninstalled pip-1.4.1 Successfully installed pip-7.1.0 Monas-MacBook-Pro:CS764 mona$ pip --version -bash: /usr/local/bin/pip: /usr/local/opt/python/bin/python2.7: bad interpreter: No such file or directory ```
I had used home-brew to install 2.7 on OS X 10.10 and the new install was missing the sym links. I ran ``` brew link --overwrite python ``` as mentioned in [How to symlink python in Homebrew?](http://stackoverflow.com/questions/13354207/how-to-symlink-python-in-homebrew/13354417#13354417) and it solved the problem.
pip installation /usr/local/opt/python/bin/python2.7: bad interpreter: No such file or directory
31,768,128
21
2015-08-02T02:49:20Z
34,550,600
9
2015-12-31T18:57:13Z
[ "python", "osx", "installation", "pip", "osx-mavericks" ]
I don't know what's the deal but I am stuck following some stackoverflow solutions which gets nowhere. Can you please help me on this? ``` Monas-MacBook-Pro:CS764 mona$ sudo python get-pip.py The directory '/Users/mona/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. The directory '/Users/mona/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. /tmp/tmpbSjX8k/pip.zip/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning. Collecting pip Downloading pip-7.1.0-py2.py3-none-any.whl (1.1MB) 100% |████████████████████████████████| 1.1MB 181kB/s Installing collected packages: pip Found existing installation: pip 1.4.1 Uninstalling pip-1.4.1: Successfully uninstalled pip-1.4.1 Successfully installed pip-7.1.0 Monas-MacBook-Pro:CS764 mona$ pip --version -bash: /usr/local/bin/pip: /usr/local/opt/python/bin/python2.7: bad interpreter: No such file or directory ```
I made the same error using sudo for my installation. (oops) ``` brew install python brew linkapps python brew link --overwrite python ``` This brought everything back to normal.
Add bias to Lasagne neural network layers
31,773,204
6
2015-08-02T15:05:37Z
31,774,280
8
2015-08-02T16:57:33Z
[ "python", "neural-network", "theano", "lasagne" ]
I am wondering if there is a way to add bias node to each layer in Lasagne neural network toolkit? I have been trying to find related information in documentation. This is the network I built but i don't know how to add a bias node to each layer. ``` def build_mlp(input_var=None): # This creates an MLP of two hidden layers of 800 units each, followed by # a softmax output layer of 10 units. It applies 20% dropout to the input # data and 50% dropout to the hidden layers. # Input layer, specifying the expected input shape of the network # (unspecified batchsize, 1 channel, 28 rows and 28 columns) and # linking it to the given Theano variable `input_var`, if any: l_in = lasagne.layers.InputLayer(shape=(None, 60), input_var=input_var) # Apply 20% dropout to the input data: l_in_drop = lasagne.layers.DropoutLayer(l_in, p=0.2) # Add a fully-connected layer of 800 units, using the linear rectifier, and # initializing weights with Glorot's scheme (which is the default anyway): l_hid1 = lasagne.layers.DenseLayer( l_in_drop, num_units=800, nonlinearity=lasagne.nonlinearities.rectify, W=lasagne.init.Uniform()) # We'll now add dropout of 50%: l_hid1_drop = lasagne.layers.DropoutLayer(l_hid1, p=0.5) # Another 800-unit layer: l_hid2 = lasagne.layers.DenseLayer( l_hid1_drop, num_units=800, nonlinearity=lasagne.nonlinearities.rectify) # 50% dropout again: l_hid2_drop = lasagne.layers.DropoutLayer(l_hid2, p=0.5) # Finally, we'll add the fully-connected output layer, of 10 softmax units: l_out = lasagne.layers.DenseLayer( l_hid2_drop, num_units=2, nonlinearity=lasagne.nonlinearities.softmax) # Each layer is linked to its incoming layer(s), so we only need to pass # the output layer to give access to a network in Lasagne: return l_out ```
Actually you don't have to create biases explicitly, because `DenseLayer()` (and the convolution-based layers too) has a default keyword argument `b=lasagne.init.Constant(0.)`. You would only end up without biases if you explicitly passed `b=None`, which is not the case here. So, in brief, your layers do have bias parameters as long as you don't pass `None` to the `b` keyword, e.g.: ``` hidden = DenseLayer(..., b=None) # this would remove the bias ```
PCA Analysis in PySpark
31,774,311
3
2015-08-02T17:01:05Z
31,775,865
8
2015-08-02T19:40:58Z
[ "python", "apache-spark", "pca" ]
Looking at <http://spark.apache.org/docs/latest/mllib-dimensionality-reduction.html>. The examples seem to only contain Java and Scala. Does Spark MLlib support PCA analysis for python? If so please point me to an example. If not, how to combine Spark with Sklearn?
### Spark < 1.5.0 PySpark <= 1.4.1 doesn't support distributed data structures yet, so there is no built-in method to compute PCA. If the input matrix is relatively thin you can compute the covariance matrix in a distributed manner, collect the results and perform the eigendecomposition locally on the driver. The order of operations is more or less like the one below. Distributed steps are followed by the name of the operation, local ones by "\*" and an optional method. 1. Create `RDD[Vector]` where each element is a single row from the input matrix. You can use `numpy.ndarray` for each row (`parallelize`) 2. Compute column-wise statistics (`reduce`) 3. Use results from 2. to center the matrix (`map`) 4. Compute the outer product for each row (`map outer`) 5. Sum results to obtain the covariance matrix (`reduce +`) 6. Collect and compute the eigendecomposition \* (`numpy.linalg.eigh`) 7. Choose top-n eigenvectors \* 8. Project the data (`map`) Regarding Sklearn: you can use NumPy (it is already in use in `Mllib`), SciPy and Scikit **locally** on the driver or a worker the same way as usual. ### Spark 1.5.0 Although PySpark 1.5 introduces distributed data structures (`pyspark.mllib.linalg.distributed`), the API looks rather limited and there is no implementation of the `computePrincipalComponents` method. It is possible to use either `pyspark.ml.feature.PCA` or `pyspark.mllib.feature.PCA` though. 
In the first case an expected input is a data frame with vector column: ``` from pyspark.ml.feature import PCA as PCAml from pyspark.mllib.linalg import Vectors df = sqlContext.createDataFrame([ (Vectors.dense([1, 2, 0]),), (Vectors.dense([2, 0, 1]),), (Vectors.dense([0, 1, 0]),)], ("features", )) pca = PCAml(k=2, inputCol="features", outputCol="pca") model = pca.fit(df) transformed = model.transform(df) ``` For `mllib` version you'll need a `RDD` of `Vector`: ``` from pyspark.mllib.feature import PCA as PCAmllib rdd = sc.parallelize([ Vectors.dense([1, 2, 0]), Vectors.dense([2, 0, 1]), Vectors.dense([0, 1, 0])]) model = PCAmllib(2).fit(rdd) transformed = model.transform(rdd) ```
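For intuition, the covariance-plus-eigendecomposition procedure from the numbered steps can be sketched locally with plain NumPy on the same toy rows, with the distributed `map`/`reduce` steps collapsing into ordinary array operations (a local sketch only, not a Spark implementation):

```python
import numpy as np

X = np.array([[1., 2., 0.],
              [2., 0., 1.],
              [0., 1., 0.]])

centered = X - X.mean(axis=0)                  # steps 2-3: center the columns
cov = centered.T.dot(centered) / (len(X) - 1)  # steps 4-5: covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)         # step 6: eigendecomposition
top2 = eigvecs[:, np.argsort(eigvals)[::-1][:2]]  # step 7: top-2 eigenvectors
projected = centered.dot(top2)                 # step 8: project the data
print(projected.shape)                         # (3, 2)
```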
Installing mechanize for python 3.4
31,774,756
5
2015-08-02T17:46:07Z
31,774,959
8
2015-08-02T18:05:38Z
[ "python", "mechanize" ]
I'm trying to retrieve the mechanize module for python 3.4. Can anybody guide me in the right direction and perhaps walk me through the steps that I would need to take in order to make the correct installation? I'm currently using Windows 10. Thank you.
Unfortunately, mechanize only works with Python 2.4, Python 2.5, Python 2.6, and Python 2.7. The good news is there are other projects you can take a look at: [RoboBrowser](https://github.com/jmcarp/robobrowser), [MechanicalSoup](https://github.com/hickford/MechanicalSoup) There are more alternatives in this thread as well: [Are there any alternatives to Mechanize in Python?](http://stackoverflow.com/questions/2662705/are-there-any-alternatives-to-mechanize-in-python).
Python Operator (+=) and SyntaxError
31,778,646
2
2015-08-03T02:32:02Z
31,778,661
8
2015-08-03T02:34:17Z
[ "python", "python-2.7", "syntax-error", "operators" ]
Ok, what am I doing wrong? ``` x = 1 print x += 1 ``` Error: ``` print x += 1 ^ SyntaxError: invalid syntax ``` Or, does `+=` not work in Python 2.7 anymore? I would swear that I have used it in the past.
`x += 1` is an [augmented assignment statement](https://docs.python.org/2/reference/simple_stmts.html#augmented-assignment-statements) in Python. You cannot use *statements* inside a `print` statement; that is why you get the syntax error. You can only use [*expressions*](https://docs.python.org/2/reference/expressions.html) there. You can do - ``` x = 1 x += 1 print x ```
Runtime error when trying to logout django
31,779,234
4
2015-08-03T04:01:12Z
31,779,289
7
2015-08-03T04:08:25Z
[ "python", "django", "recursion", "runtime", "logout" ]
When I try to logout from my django project, I get the following error: "maximum recursion depth exceeded while calling a Python object" Here is the url for the logout button: ``` url(r'^logout', 'users.views.logout', name='logout'), ``` And here is the view: ``` from django.shortcuts import render from deck1.models import Card from django.template import RequestContext from django.shortcuts import render_to_response from django.http import HttpResponseRedirect, HttpResponse from django.contrib.auth.forms import UserCreationForm from django.contrib.auth.forms import AuthenticationForm from django.contrib.auth import login, authenticate, logout from django.contrib.auth.decorators import login_required from users.forms import RegisterForm @login_required def logout(request): logout(request) return HttpResponseRedirect('/deck') ```
Your view `logout` is shadowing the imported `django.contrib.auth.logout` function, so calling `logout(request)` inside the view calls the view itself and recurses forever. Define an alias for the `django.contrib.auth.logout` function using the `as` keyword. ``` from django.contrib.auth import logout as django_logout @login_required def logout(request): django_logout(request) return HttpResponseRedirect('/deck') ```
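The same name-shadowing can be reproduced outside Django with a minimal sketch: inside the function, the name resolves to the function itself, so the call recurses until Python gives up (`RecursionError` is a subclass of `RuntimeError` in Python 3):

```python
import sys

def logout():        # shadows any earlier `logout` binding in this module
    return logout()  # this name resolves to this very function

sys.setrecursionlimit(100)  # fail fast for the demo
try:
    logout()
except RuntimeError:        # catches RecursionError too
    print("maximum recursion depth exceeded")
```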
If statement check list contains is returning true when it shouldn't
31,780,687
10
2015-08-03T06:28:22Z
31,780,723
35
2015-08-03T06:30:32Z
[ "python", "list", "if-statement" ]
I have a list which contains the values: ``` ['1', '3', '4', '4'] ``` I have an if statement which will check if the values are contained within the list then output a statement: ``` if "1" and "2" and "3" in columns: print "1, 2 and 3" ``` Considering the list doesn't contain value "2", it should not print the statement, however it is: **Output:** ``` 1, 2 and 3 ``` Can someone explain why this is the case? Is it the way Python reads the list that is making this occur?
It gets evaluated in order of [operator precedence](https://docs.python.org/2/reference/expressions.html#operator-precedence): ``` if "1" and "2" and ("3" in columns): ``` Expands into: ``` if "1" and "2" and True: ``` Which then evaluates `("1" and "2")` leaving us with: ``` if "2" and True ``` Finally: ``` if True: ``` Instead you can check if the `set` of strings are a subset of `columns`: ``` if {"1", "2", "3"}.issubset(columns): print "1, 2 and 3" ```
If statement check list contains is returning true when it shouldn't
31,780,687
10
2015-08-03T06:28:22Z
31,784,924
11
2015-08-03T10:23:05Z
[ "python", "list", "if-statement" ]
I have a list which contains the values: ``` ['1', '3', '4', '4'] ``` I have an if statement which will check if the values are contained within the list then output a statement: ``` if "1" and "2" and "3" in columns: print "1, 2 and 3" ``` Considering the list doesn't contain value "2", it should not print the statement, however it is: **Output:** ``` 1, 2 and 3 ``` Can someone explain why this is the case? Is it the way Python reads the list that is making this occur?
There are two general rules to keep in mind in order to understand what's happening: * **a boolean operator always [returns the result of the evaluation of one operand](https://docs.python.org/2/reference/expressions.html#boolean-operations)**. * **operations are executed in [order of precedence](https://docs.python.org/2/reference/expressions.html#operator-precedence)** When evaluating the expression `"1" and "2" and "3" in columns`, the [order of operator precedence](https://docs.python.org/2/reference/expressions.html#operator-precedence) makes this be evaluated as `"1" and "2" and ("3" in columns)`. This is thus expanded to `"1" and "2" and True`, since `"3"` is indeed an element of `columns` (note that single and double quotes are interchangeable for python strings). > Operators in the same box group left to right Since we have two operators with the same precedence, the evaluation is then `("1" and "2") and True`. For `and`, the [documentation for boolean operations states](https://docs.python.org/2/reference/expressions.html#and): > The expression x and y first evaluates x; if x is false, its value is > returned; otherwise, y is evaluated and the resulting value is > returned. Thus, `("1" and "2") and True` evaluates to `"2" and True`, which then evaluates to `True`. Therefore your `if` body always executes.
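A generator expression with `all` is another way to give each value its own membership test; a quick sketch contrasting it with the original expression:

```python
columns = ['1', '3', '4', '4']

# Each value gets its own `in` test, so "2" is really checked:
print(all(v in columns for v in ("1", "2", "3")))   # False

# The original expression only tests "3"; "1" and "2" are merely truthy:
print("1" and "2" and "3" in columns)               # True
```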
How to parallelized file downloads?
31,784,484
2
2015-08-03T09:58:58Z
31,795,242
9
2015-08-03T19:26:44Z
[ "python", "python-3.x", "download", "subprocess", "wget" ]
I can download a file at a time with: ``` import urllib.request urls = ['foo.com/bar.gz', 'foobar.com/barfoo.gz', 'bar.com/foo.gz'] for u in urls: urllib.request.urlretrieve(u) ``` I could try to `subprocess` it as such: ``` import subprocess import os def parallelized_commandline(command, files, max_processes=2): processes = set() for name in files: processes.add(subprocess.Popen([command, name])) if len(processes) >= max_processes: os.wait() processes.difference_update( [p for p in processes if p.poll() is not None]) #Check if all the child processes were closed for p in processes: if p.poll() is None: p.wait() urls = ['http://www.statmt.org/wmt15/training-monolingual-nc-v10/news-commentary-v10.en.gz', 'http://www.statmt.org/wmt15/training-monolingual-nc-v10/news-commentary-v10.cs.gz', 'http://www.statmt.org/wmt15/training-monolingual-nc-v10/news-commentary-v10.de.gz'] parallelized_commandline('wget', urls) ``` Is there any way to parallelize `urlretrieve` without using `os.system` or `subprocess` to cheat? Given that I must resort to the "cheat" for now, is `subprocess.Popen` the right way to download the data? When using the `parallelized_commandline()` above, it's using multi-thread but not multi-core for the `wget`, is that normal? Is there a way to make it multi-core instead of multi-thread?
You could use a thread pool to download files in parallel: ``` #!/usr/bin/env python3 from multiprocessing.dummy import Pool # use threads for I/O bound tasks from urllib.request import urlretrieve urls = [...] result = Pool(4).map(urlretrieve, urls) # download 4 files at a time ``` You could also download several files at once in a single thread using `asyncio`: ``` #!/usr/bin/env python3 import asyncio import logging from contextlib import closing import aiohttp # $ pip install aiohttp @asyncio.coroutine def download(url, session, semaphore, chunk_size=1<<15): with (yield from semaphore): # limit number of concurrent downloads filename = url2filename(url) logging.info('downloading %s', filename) response = yield from session.get(url) with closing(response), open(filename, 'wb') as file: while True: # save file chunk = yield from response.content.read(chunk_size) if not chunk: break file.write(chunk) logging.info('done %s', filename) return filename, (response.status, tuple(response.headers.items())) urls = [...] logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s') with closing(asyncio.get_event_loop()) as loop, \ closing(aiohttp.ClientSession()) as session: semaphore = asyncio.Semaphore(4) download_tasks = (download(url, session, semaphore) for url in urls) result = loop.run_until_complete(asyncio.gather(*download_tasks)) ``` where [`url2filename()` is defined here](https://gist.github.com/zed/c2168b9c52b032b5fb7d).
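A `concurrent.futures` thread pool is an equivalent stdlib option. The sketch below substitutes a dummy `fetch` function for `urlretrieve` (an assumption, so the pattern stays runnable without network access); in real use you would pass `urlretrieve` itself to `pool.map`:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # stand-in for urllib.request.urlretrieve(url);
    # here it just derives the target filename
    return url.rsplit('/', 1)[-1]

urls = ['http://example.com/a.gz',
        'http://example.com/b.gz',
        'http://example.com/c.gz']

with ThreadPoolExecutor(max_workers=4) as pool:  # up to 4 downloads at once
    results = list(pool.map(fetch, urls))

print(results)  # ['a.gz', 'b.gz', 'c.gz']
```

Like `multiprocessing.dummy.Pool`, this uses threads, which is appropriate for I/O-bound downloads; the GIL is released while waiting on the network.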
Django Rest Framework: turn on pagination on a ViewSet (like ModelViewSet pagination)
31,785,966
8
2015-08-03T11:24:02Z
31,789,223
10
2015-08-03T13:56:18Z
[ "python", "django", "django-rest-framework" ]
I have a ViewSet like this one to list users' data: ``` class Foo(viewsets.ViewSet): def list(self, request): queryset = User.objects.all() serializer = UserSerializer(queryset, many=True) return Response(serializer.data) ``` I want to turn on pagination like the default pagination for ModelViewSet: ``` { "count": 55, "next": "http://myUrl/?page=2", "previous": null, "results": [{...},{...},...,{...}] } ``` [The official doc](http://www.django-rest-framework.org/api-guide/pagination/) says: > Pagination is only performed automatically if you're using the generic views or viewsets ...but my resultset is not paginated at all. How can I paginate it?
> Pagination is only performed automatically if you're using the generic > views or viewsets The first roadblock is translating the docs to English. What they intended to convey is that you need a generic viewset. The generic viewsets extend from [generic ApiViews](https://github.com/tomchristie/django-rest-framework/blob/master/rest_framework/generics.py), which have extra class methods for paginating querysets and responses. Additionally, you're providing your own `list` method, but the default pagination process is actually handled by the [mixin](https://github.com/tomchristie/django-rest-framework/blob/master/rest_framework/mixins.py#L35): ``` class ListModelMixin(object): """ List a queryset. """ def list(self, request, *args, **kwargs): queryset = self.filter_queryset(self.get_queryset()) page = self.paginate_queryset(queryset) if page is not None: serializer = self.get_serializer(page, many=True) return self.get_paginated_response(serializer.data) serializer = self.get_serializer(queryset, many=True) return Response(serializer.data) ``` The easy solution: use the framework code: ``` class Foo(mixins.ListModelMixin, viewsets.GenericViewSet): queryset = User.objects.all() serializer_class = UserSerializer ``` The more complex solution would be if you need a custom `list` method; then you should write it as you see fit, but in the style of the above mixin code snippet.
Python if statement without indent
31,787,110
2
2015-08-03T12:19:22Z
31,787,348
8
2015-08-03T12:31:24Z
[ "python", "python-2.7" ]
I'm working with an inherited Python program, which runs OK, but does not have correct Python indenting. ``` if not arg_o: print >> sys.stderr, 'Output file needed.' print >> sys.stderr, usage exit() ``` What is going on here? Shouldn't the code below the **if** be indented? **SOLVED** See the accepted answer. It turns out that TextMate was not properly displaying the tabs, which is a worry. When opened in Vim it was indented properly.
After checking the file - `MPprimer.py` - from the code found [here](https://code.google.com/p/mpprimer/downloads/list), I can see the following lines in it - ``` if not arg_o: print >> sys.stderr, 'Output file needed.' print >> sys.stderr, usage exit() ``` starting at line 175. The issue is that this script mixes tabs and spaces. Line 175 - `if not arg_o:` - uses 4 spaces as indentation, whereas the next line, line 176 - `print >> sys.stderr, 'Output file needed.'` - uses a tab as indentation. In Python 2.x, tabs and spaces can be mixed, but a tab is internally treated as 8 spaces. In some editors (like Notepad++ on my system), a tab is displayed as only 4 spaces. So even though in those editors the lines look like they are not correctly indented, they are actually correctly indented according to Python. Please note that it is not good practice to mix tabs and spaces; Python 3.x will error out if you mix them in the same script.
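The 8-column tab convention mentioned above can be observed with `str.expandtabs`, whose default tab size is 8; a quick sketch:

```python
# A lone tab expands to 8 columns by default, matching how Python 2
# weighs tab indentation internally.
print(len("\t".expandtabs()))            # 8
print("\tx".expandtabs(4) == "    x")    # True: with 4-wide tabs they line up
print("\tx".expandtabs() == "    x")     # False: 8 spaces vs 4
```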
Stop Django from creating migrations if the list of choices of a field changes
31,788,450
12
2015-08-03T13:21:05Z
31,788,759
10
2015-08-03T13:33:56Z
[ "python", "django", "database-migration" ]
I have a django core app called "foocore". There are several optional pluging-like apps. For example "superfoo". In my case every plugin adds a new choice in a model CharField which belongs to "foocore". Django migrations detect changes if the list of choices get changed. I think this is not necessary. At least one other developer thinks the same: <https://code.djangoproject.com/ticket/22837> ``` class ActivePlugin(models.Model): plugin_name = models.CharField(max_length=32, choices=get_active_plugins()) ``` The code to get the choices: ``` class get_active_plugins(object): def __iter__(self): for item in ....: yield item ``` The core "foocore" gets used in several projects and every installation has a different set of plugins. Django tries to create useless migrations .... Is there a way to work around this?
See this bug report and discussion for more info: <https://code.djangoproject.com/ticket/22837> The proposed solution was to use a callable as the argument for choices, but it appears this has been implemented for forms only, not for model fields. If you really need dynamic choices then a `ForeignKey` is the best solution. An alternative solution can be to add the requirement through a custom clean method for the field and/or creating a custom form. Form fields do support callable `choices`. See this answer for more info: <http://stackoverflow.com/a/33514551/54017>
Removing a character from a string in a list of lists
31,794,610
3
2015-08-03T18:48:57Z
31,794,656
7
2015-08-03T18:51:46Z
[ "python", "string", "list", "replace" ]
I'm trying to format some data for performing an analysis. I'm trying to remove `'*'` from all strings that start with one. Here's a snippet of the data: ``` [['Version', 'age', 'language', 'Q1', 'Q2', 'Q3', 'Q4', 'Q5', 'Q6', 'Q7', 'Q8', 'Q9', 'Q10', 'Q11', 'Q12', 'Q13', 'Q14', 'Q15', 'Q16', 'Q17', 'Q18', 'Q19', 'Q20', 'Q21', 'Q22', 'Q23', 'Q24', 'Q25', 'Q26', 'Q27', 'Q28', 'Q29', 'Q30', 'Q31', 'Q32', 'Q33', 'Q34', 'Q35', 'Q36', 'Q37', 'Q38', 'Q39', 'Q40', 'Q41', 'Q42', 'Q43', 'Q44', 'Q45'], ['1', '18 to 40', 'English', '*distort', '*transfer', '*retain', 'constrict', '*secure', '*excite', '*cancel', '*hinder', '*overstate', 'channel', '*diminish', '*abolish', '*comprehend', '*tolerate', '*conduct', '*destroy', '*foster', 'direct', '*challenge', 'forego', '*cause', '*reduce', 'interrupt', '*enhance', '*misapply', '*exhaust', '*extinguish', '*assimilate', 'believe', 'harmonize', '*demolish', 'affirm', 'trouble', 'discuss', '*force', 'divide', '*remove', '*release', 'highlight', 'reinforce', 'stifle', '*compromise', '*experience', 'evaluate', 'replenish']] ``` This should be simple, but nothing I've tried works. For example: ``` for lst in testList: for item in lst: item.replace('*', '') ``` just gives me back the same strings. I've also tried inserting an if statement and indexing the characters in the strings. I know I can access the strings. For example if I say `if item[0] == '*': print item` it prints the correct items.
Strings are immutable, so `item.replace('*', '')` returns the string with the characters replaced; it does not replace them in place (it cannot, since strings are immutable). You can enumerate over your lists and then assign the returned string back to the list - Example - ``` for lst in testList: for j, item in enumerate(lst): lst[j] = item.replace('*', '') ``` --- You can also do this easily with a list comprehension - ``` testList = [[item.replace('*', '') for item in lst] for lst in testList] ```
Sum element by element multiple lists of different lengths
31,795,569
4
2015-08-03T19:47:52Z
31,795,607
9
2015-08-03T19:49:58Z
[ "python", "python-3.x" ]
Is there any way to sum over multiple lists, index by index, to get one final list? Knowing that these lists might not have the same length? For example, with these ``` [2,4,0,0], [0,0,2], [0,4] ``` I would like to have ``` [2,8,2,0] ``` as a result. I haven't found any result so far.
You can use [`itertools.zip_longest`](https://docs.python.org/3/library/itertools.html#itertools.zip_longest) with the `fillvalue` argument set to `0`. If you use this in a list comprehension, you can unpack and zip the inner lists and add them in an element-wise fashion. ``` >>> from itertools import zip_longest >>> [sum(i) for i in zip_longest(*l, fillvalue=0)] [2, 8, 2, 0] ```
psycopg2 insert python dictionary as json
31,796,332
2
2015-08-03T20:37:03Z
31,796,487
8
2015-08-03T20:48:14Z
[ "python", "postgresql", "dictionary", "psycopg2" ]
I want to insert a python dictionary as a json into my postgresql database (via python and psycopg2). I have: ``` ... thedictionary = {'price money': '$1', 'name': 'Google', 'color': '', 'imgurl': 'http://www.google.com/images/nav_logo225.png', 'charateristics': 'No Description', 'store': 'google'} ... cur.execute("INSERT INTO product(store_id, url, price, charecteristics, color, dimensions) VALUES (%d, %s, %s, %d, %s, %s)", (1, 'http://www.google.com', '$20', thedictionary, 'red', '8.5x11')) ... ``` And it gives the error message: ``` cur.execute("INSERT INTO product(store_id, url, price, charecteristics, color, dimensions) VALUES (%d, %s, %s, %d, %s, %s)", (1, 'http://www.google.com', '$20', thedictionary, 'red', '8.5x11')) psycopg2.ProgrammingError: can't adapt type 'dict' ``` I am not sure how to proceed from here. I cannot find anything on the internet about how to do this exact kind of thing and I am very new to psycopg2.
``` cur.execute("INSERT INTO product(store_id, url, price, charecteristics, color, dimensions) VALUES (%s, %s, %s, %s, %s, %s)", (1, 'http://www.google.com', '$20', json.dumps(thedictionary), 'red', '8.5x11')) ``` That will solve your problem. However, you really should be storing keys and values in their own separate columns. To retrieve the dictionary, do: ``` cur.execute('select charecteristics from product where store_id = 1') dictionary = json.loads(cur.fetchone()[0]) ``` Hope it helps.
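The serialize/deserialize round trip can be verified without a database; this sketch checks that `json.dumps` followed by `json.loads` recovers the original dictionary:

```python
import json

thedictionary = {'price money': '$1', 'name': 'Google', 'color': '',
                 'imgurl': 'http://www.google.com/images/nav_logo225.png',
                 'charateristics': 'No Description', 'store': 'google'}

payload = json.dumps(thedictionary)   # the string handed to psycopg2
restored = json.loads(payload)        # what you'd read back from the column
print(restored == thedictionary)      # True
```

psycopg2 also ships an adapter, `psycopg2.extras.Json`, which wraps a dict so it can be passed directly as a query parameter instead of calling `json.dumps` yourself.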
How does python "know" what to do with the "in" keyword?
31,796,910
33
2015-08-03T21:17:33Z
31,796,967
12
2015-08-03T21:22:17Z
[ "python" ]
I'm a bit bewildered by the "in" keyword in python. If I take a sample list of tuples: ``` data = [ (5, 1, 9.8385465), (10, 1, 8.2087544), (15, 1, 7.8788187), (20, 1, 7.5751283) ] ``` I can do two different "for - in" loops and get different results: ``` for G,W,V in data: print G,W,V ``` This prints each set of values on a line, e.g. `5, 1, 9.8385465` ``` for i in data: print i ``` This prints the whole tuple, e.g. `(5, 1, 9.8385465)` How does python "know" that by providing one variable I want to assign the tuple to a variable, and that by providing three variables I want to assign each value from the tuple to one of those variables?
It's called [tuple unpacking](https://docs.python.org/2/tutorial/datastructures.html#tuples-and-sequences), and has nothing to do with the `in` keyword. The `for` loop returns the single thing (a `tuple` in this case), and then that `tuple` gets assigned -- to a single item in the second case, or multiple items in the first case. If you try specifying the incorrect number of variables: ``` for G,W in data: print G,W Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: too many values to unpack ```
How does python "know" what to do with the "in" keyword?
31,796,910
33
2015-08-03T21:17:33Z
31,796,997
35
2015-08-03T21:23:48Z
[ "python" ]
I'm a bit bewildered by the "in" keyword in python. If I take a sample list of tuples: ``` data = [ (5, 1, 9.8385465), (10, 1, 8.2087544), (15, 1, 7.8788187), (20, 1, 7.5751283) ] ``` I can do two different "for - in" loops and get different results: ``` for G,W,V in data: print G,W,V ``` This prints each set of values on a line, e.g. `5, 1, 9.8385465` ``` for i in data: print i ``` This prints the whole tuple, e.g. `(5, 1, 9.8385465)` How does python "know" that by providing one variable I want to assign the tuple to a variable, and that by providing three variables I want to assign each value from the tuple to one of those variables?
According to [the `for` compound statement documentation](https://docs.python.org/2/reference/compound_stmts.html#the-for-statement): > Each item in turn is assigned to the target list using the standard > rules for assignments... Those *"standard rules"* are in [the assignment statement documentation](https://docs.python.org/2/reference/simple_stmts.html#assignment-statements), specifically: > Assignment of an object to a target list is recursively defined as > follows. > > * If the target list is a single target: The object is assigned to that target. > * If the target list is a comma-separated list of targets: The object must be an iterable with the same number of items as there are targets > in the target list, and the items are assigned, from left to right, to > the corresponding targets. So this different behaviour, depending on whether you assign to a single target or a list of targets, is baked right into Python's fundamentals, and applies wherever assignment is used.
How does python "know" what to do with the "in" keyword?
31,796,910
33
2015-08-03T21:17:33Z
31,797,008
21
2015-08-03T21:24:32Z
[ "python" ]
I'm a bit bewildered by the "in" keyword in python. If I take a sample list of tuples: ``` data = [ (5, 1, 9.8385465), (10, 1, 8.2087544), (15, 1, 7.8788187), (20, 1, 7.5751283) ] ``` I can do two different "for - in" loops and get different results: ``` for G,W,V in data: print G,W,V ``` This prints each set of values on a line, e.g. `5, 1, 9.8385465` ``` for i in data: print i ``` This prints the whole tuple, e.g. `(5, 1, 9.8385465)` How does python "know" that by providing one variable I want to assign the tuple to a variable, and that by providing three variables I want to assign each value from the tuple to one of those variables?
This isn't really a feature of the `in` keyword, but of the Python language. The same works with assignment. ``` x = (1, 2, 3) print(x) >>> (1, 2, 3) a, b, c = (1, 2, 3) print(a) >>> 1 print(b) >>> 2 print(c) >>> 3 ``` So to answer your question, it's more that Python knows what to do with assignments when you either: * assign a tuple to a variable, or * assign a tuple to a number of variables equal to the number of items in the tuple
Get java version number from python
31,807,882
2
2015-08-04T11:18:21Z
31,808,419
7
2015-08-04T11:44:47Z
[ "python", "bash", "sed", "grep", "cut" ]
I need to get the java version number, for example "1.5", from python (or bash). I would use: ``` os.system('java -version 2>&1 | grep "java version" | cut -d "\\\"" -f 2') ``` But that returns 1.5.0\_30 It needs to be compatible if the number changes to "1.10" for example. I would like to use cut or grep or even sed. It should be in one line.
The Java runtime seems to send the version information to the stderr. You can get at this using Python's [`subprocess`](https://docs.python.org/2/library/subprocess.html#subprocess.check_output) module: ``` >>> import subprocess >>> version = subprocess.check_output(['java', '-version'], stderr=subprocess.STDOUT) >>> print version java version "1.7.0_79" Java(TM) SE Runtime Environment (build 1.7.0_79-b15) Java HotSpot(TM) Client VM (build 24.79-b02, mixed mode) ``` You can get the version out with a regex: ``` >>> import re >>> pattern = '\"(\d+\.\d+).*\"' >>> print re.search(pattern, version).groups()[0] 1.7 ``` If you are using a pre-2.7 version of Python, see this question: [subprocess.check\_output() doesn't seem to exist (Python 2.6.5)](http://stackoverflow.com/questions/4814970/subprocess-check-output-doesnt-seem-to-exist-python-2-6-5)
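To sanity-check the regex without running `java` at all, it can be applied to a made-up sample banner (Python 3 syntax here, with the pattern written as a raw string, which is the safer habit for regexes):

```python
import re

# a hypothetical `java -version` banner, mimicking the output shown above
banner = ('java version "1.7.0_79"\n'
          'Java(TM) SE Runtime Environment (build 1.7.0_79-b15)')

pattern = r'"(\d+\.\d+).*"'
version = re.search(pattern, banner).groups()[0]
print(version)  # 1.7
```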
pip install requests[security] vs pip install requests: Difference
31,811,949
5
2015-08-04T14:24:45Z
31,812,342
9
2015-08-04T14:42:01Z
[ "python", "python-2.7", "pip", "virtualenv", "python-requests" ]
I am using Ubuntu 14.04 with python version 2.7.6. Today, when I created a new `virtualenv` and tried doing `pip install requests` , I got the error `InsecurePlatformWarning`. I resolved this issue by following the instructions over here [SSL InsecurePlatform error when using Requests package](http://stackoverflow.com/questions/29099404/ssl-insecureplatform-error-when-using-requests-package) But I want to understand what is the actual difference between these two commands: `pip install requests[security]` and `pip install requests` . 1) Why does the former install 3 additional packages? 2) Are there any things that I need to take care about when I push the code to production? 3) Do they both behave the same generally? I searched a lot on stackoverflow and elsewhere, but couldn't find the answer. If it has already been answered then please post the link. Thanks.
> Why does the former install 3 additional packages? Using `requests[security]` instead of `requests` will install [three additional packages](https://github.com/kennethreitz/requests/blob/master/setup.py#L72): * `pyOpenSSL` * `ndg-httpsclient` * `pyasn1` These are defined in `extras_requires`, as [optional features with additional dependencies](http://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies). > Are there any things that I need to take care about when I push the code to production? You'd want to make sure that you are able to install those additional packages without any issues and that any changes to the way SSL connections work don't affect your usage. > Do they both behave the same generally? Using these packages as opposed to the default standard library options will allow for more secure SSL connections. For more information, [here's the pull request where it was merged in](https://github.com/kennethreitz/requests/pull/2195) and [here is the issue where it was discussed](https://github.com/kennethreitz/requests/issues/1995). (From the comments, for when GitHub goes away): > So right now the SSL connections when you use pyOpenSSL, ndg-httspclient, and pyasn1 are more secure than if you just use the stdlib options. However it's hard to actually remember those three things. It would be cool if requests would add an extra to it's setup.py so that people can install requests with betterssl (Donald Stufft) > Also by default requests can't connect to some sites on OSX because of ancient OpenSSL. Using the above 3 packages makes it possible. (Donald Stufft)
AttributeError: 'Nonetype' object has no attribute '_info'
31,816,158
3
2015-08-04T18:00:40Z
32,358,780
9
2015-09-02T16:48:25Z
[ "python", "django" ]
I am working on a `Django` project and this error arises when I try to run any management commands such as: `python manage.py validate --settings ord.settings.development`, `python manage.py syncdb --settings ord.settings.development`. The project uses `Django 1.5` The error is: `AttributeError: 'Nonetype' object has no attribute '_info'`. No other output is given. The project base settings file is: <https://gist.github.com/anonymous/5c0fede63b2724d7880b> The development settings: <https://gist.github.com/anonymous/f60b90dcf573b0a7b920> I have replaced sensitive settings with `x` Any idea what could be wrong? Some extra info, when i comment out the `LANGUAGE_CODE` settings, some commands like `validate`, `runserver`, `shell` run fine but `syncdb` and `migrate` fail with error: `DatabaseError: current transaction is aborted, commands ignored until end of transaction block` Traceback: <https://gist.github.com/anonymous/bc3364ae5ba511566871>
Had the same issue. Please follow these steps: 1. go to `django/utils/translation/trans_real.py` 2. search for `res = _translation(globalpath)` 3. add the following: ``` if res is None: return gettext_module.NullTranslations() ``` source: <https://code.djangoproject.com/ticket/18192>
Can't import pprint
31,817,717
2
2015-08-04T19:33:03Z
31,817,755
7
2015-08-04T19:34:58Z
[ "python", "import", "pprint" ]
Programming newbie here. Whenever I attempt to 'import pprint' in the Python IDLE, I get the following error: ``` >>> import pprint Traceback (most recent call last): File "<pyshell#21>", line 1, in <module> import pprint File "C:\Python34\pprint.py", line 10, in <module> pprint(count) NameError: name 'pprint' is not defined ``` Thought I would try to 'pip install pprint' in command line but that didn't work either: ``` PS C:\Python34> pip install pprint Collecting pprint Could not find a version that satisfies the requirement pprint (from versions: ) No matching distribution found for pprint ``` I thought Python 3.4.3 was supposed to come with pprint module. How do I get it to work? Can't seem to import pprint but all other modules are working fine. I need the pprint module to work for completion of some exercises from Automate The Boring Stuff with Python. Thanks for looking at my question.
You've named a program `pprint`. Rename your program to something other than `pprint.py` and remove any `pprint.pyc` file that might be present.
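A quick way to see which file a module was actually imported from — and therefore to detect this kind of shadowing — is the module's `__file__` attribute:

```python
import pprint

# the genuine standard-library module exposes a pprint() function;
# a same-named script sitting in the working directory typically would not
print(pprint.__file__)

assert hasattr(pprint, "pprint")
pprint.pprint({"shadowed": False})
```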
Python 2.7: round number to nearest integer
31,818,050
25
2015-08-04T19:51:39Z
31,818,069
47
2015-08-04T19:52:36Z
[ "python", "python-2.7" ]
I've been trying to round long float numbers like: ``` 32.268907563; 32.268907563; 31.2396694215; 33.6206896552; ... ``` With no success so far. I tried `math.ceil(x)` , `math.floor(x)` (although that would round up or down, which is not what I'm looking for) and `round(x)` which didn't work either (still float numbers). What could I do? EDIT: CODE: ``` for i in widthRange: for j in heightRange: r, g, b = rgb_im.getpixel((i, j)) h, s, v = colorsys.rgb_to_hsv(r/255.0, g/255.0, b/255.0) h = h * 360 int(round(h)) print h # Debug ```
``` int(round(x)) ``` will round it and convert it to an integer. EDIT: You are not assigning `int(round(h))` to any variable. When you call `int(round(h))`, it returns the integer but does nothing else; you have to change that line to: ``` h = int(round(h)) ``` to assign the new value to `h`.
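Applied to the sample values from the question (shown with Python 3 `print`; these inputs are nowhere near an exact `.5`, so the rounding mode doesn't matter here):

```python
values = [32.268907563, 31.2396694215, 33.6206896552]

rounded = [int(round(v)) for v in values]
print(rounded)  # [32, 31, 34]

# every result is a true int, not a float
assert all(isinstance(r, int) for r in rounded)
```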
python 2 strange list comprehension behaviour
31,820,774
6
2015-08-04T23:00:47Z
31,820,859
12
2015-08-04T23:09:15Z
[ "python", "list-comprehension" ]
I was looking around list comprehension and saw smth strange. Code: ``` a = ['a', 'a', 'a', 'b', 'd', 'd', 'c', 'c', 'c'] print [(len(list(g)), k) if len(list(g)) > 1 else k for k, g in groupby(a)] ``` Result: `[(0, 'a'), 'b', (0, 'd'), (0, 'c')]` But I wanted to see: `[(3, 'a'), 'b', (2, 'd'), (3, 'c')]` What's the cause of such behaviour?
When you call `list()` on an `itertools._grouper` object, you exhaust the object. Since you're doing this twice, the second instance results in a length of 0. First: ``` if len(list(g)) ``` now it's exhausted. Then: ``` (len(list(g)), k)) ``` It will have a length of 0. You can nest a generator/comprehension in your `list` comprehension to exhaust the object and save the relevant data before processing it: ``` >>> [(y,x) if y>1 else x for x,y in ((k, len(list(g))) for k, g in groupby(a))] [(3, 'a'), 'b', (2, 'd'), (3, 'c')] ```
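The exhaustion itself is easy to demonstrate in isolation — a second `list()` call on the same grouper finds nothing left:

```python
from itertools import groupby

a = ['a', 'a', 'a', 'b']
key, grouper = next(groupby(a))   # grouper for the leading run of 'a's

first = list(grouper)    # consumes the grouper
second = list(grouper)   # already exhausted

print(first, second)  # ['a', 'a', 'a'] []
```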
How does the @timeout(timelimit) decorator work?
31,822,190
9
2015-08-05T01:56:13Z
31,822,272
13
2015-08-05T02:07:18Z
[ "python" ]
I found this decorator that times out a function here on Stack Overflow, and I am wondering if someone could explain in detail how it works, as the code is very elegant but not clear at all. Usage is `@timeout(timelimit)`. ``` from functools import wraps import errno import os import signal class TimeoutError(Exception): pass def timeout(seconds=100, error_message=os.strerror(errno.ETIME)): def decorator(func): def _handle_timeout(signum, frame): raise TimeoutError(error_message) def wrapper(*args, **kwargs): signal.signal(signal.SIGALRM, _handle_timeout) signal.alarm(seconds) try: result = func(*args, **kwargs) finally: signal.alarm(0) return result return wraps(func)(wrapper) return decorator ```
> # How does the @timeout(timelimit) decorator work? ## Decorator Syntax To be more clear, the usage is like this: ``` @timeout(100) def foo(arg1, kwarg1=None): '''time this out!''' something_worth_timing_out() ``` The above is the decorator syntax. The below is exactly equivalent: ``` def foo(arg1, kwarg1=None): '''time this out!''' something_worth_timing_out() foo = timeout(100)(foo) ``` Note that we name the function that wraps foo, "`foo`". That's what the decorator syntax means and does. ## Necessary imports ``` from functools import wraps import errno import os import signal ``` ## Exception to raise on Timeout ``` class TimeoutError(Exception): pass ``` ## Analysis of the function This is what's called in the line, `@timeout(timelimit)`. These arguments will be locked into the underlying functions, making those functions "closures" so-called because they close-over the data: ``` def timeout(seconds=100, error_message=os.strerror(errno.ETIME)): ``` This will return a function that takes a function as an argument, which the next line proceeds to define. This function will return a function that wraps the original function. : ``` def decorator(func): ``` This is a function to timeout the decorated function: ``` def _handle_timeout(signum, frame): raise TimeoutError(error_message) ``` And this is the actual wrapper. Before calling the wrapped function, it sets a signal that will interrupt the function if it does not finish in time with an exception: ``` def wrapper(*args, **kwargs): signal.signal(signal.SIGALRM, _handle_timeout) signal.alarm(seconds) try: result = func(*args, **kwargs) finally: signal.alarm(0) ``` This will return the result if the function completes: ``` return result ``` This returns the wrapper. It makes sure the wrapped function gets the attributes from the original function, like docstrings, name, function signature... 
``` return wraps(func)(wrapper) ``` and this is where the decorator is returned, from the original call, `@timeout(timelimit)`: ``` return decorator ``` ## Benefit of `wraps` The wraps function allows the function that wraps the target function to get the documentation of that function, because `foo` no longer points at the original function: ``` >>> help(foo) Help on function foo in module __main__: foo(arg1, kwarg1=None) time this out! ``` --- ## Better usage of `wraps` To further clarify, wraps returns a decorator, and is intended to be used much like this function. It would be better written like this: ``` def timeout(seconds=100, error_message=os.strerror(errno.ETIME)): def decorator(func): def _handle_timeout(signum, frame): raise TimeoutError(error_message) @wraps(func) def wrapper(*args, **kwargs): signal.signal(signal.SIGALRM, _handle_timeout) signal.alarm(seconds) try: result = func(*args, **kwargs) finally: signal.alarm(0) return result return wrapper return decorator ```
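A quick end-to-end check of the decorator. This is the same code as above plus a small usage demo; it relies on `SIGALRM`, so it only works on POSIX systems (not Windows), and `signal.alarm` only accepts whole seconds:

```python
from functools import wraps
import errno
import os
import signal
import time

class TimeoutError(Exception):
    pass

def timeout(seconds=100, error_message=os.strerror(errno.ETIME)):
    def decorator(func):
        def _handle_timeout(signum, frame):
            raise TimeoutError(error_message)

        @wraps(func)
        def wrapper(*args, **kwargs):
            signal.signal(signal.SIGALRM, _handle_timeout)
            signal.alarm(seconds)        # schedule SIGALRM in `seconds`
            try:
                result = func(*args, **kwargs)
            finally:
                signal.alarm(0)          # cancel the pending alarm
            return result
        return wrapper
    return decorator

@timeout(1)
def slow():
    time.sleep(2)       # will be interrupted by the alarm
    return "done"

@timeout(5)
def fast():
    return "done"

try:
    slow()
    raised = False
except TimeoutError:
    raised = True

print(raised)           # slow() timed out
print(fast())           # fast() finished in time
print(slow.__name__)    # name preserved thanks to @wraps
```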
Python: Importing urllib.quote
31,827,012
8
2015-08-05T08:17:28Z
31,827,113
14
2015-08-05T08:22:11Z
[ "python", "python-3.x", "import", "urllib" ]
I would like to use `urllib.quote()`. But python (python3) is not finding the module. Suppose, I have this line of code: ``` print(urllib.quote("châteu", safe='')) ``` How do I import urllib.quote? `import urllib` or `import urllib.quote` both give ``` AttributeError: 'module' object has no attribute 'quote' ``` What confuses me is that `urllib.request` is accessible via `import urllib.request`
In Python 3.x, you need to import [`urllib.parse.quote`](https://docs.python.org/3/library/urllib.parse.html): ``` >>> import urllib.parse >>> urllib.parse.quote("châteu", safe='') 'ch%C3%A2teu' ``` According to [Python 2.x `urllib` module documentation](https://docs.python.org/2/library/urllib.html): > **NOTE** > > The `urllib` module has been split into parts and renamed in Python 3 to > `urllib.request`, `urllib.parse`, and `urllib.error`.
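The inverse lives in the same module, so the encoding can be round-tripped:

```python
from urllib.parse import quote, unquote

encoded = quote("châteu", safe='')   # UTF-8 percent-encoding by default
print(encoded)                       # ch%C3%A2teu

assert unquote(encoded) == "châteu"
```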
Generating NXN spirals
31,832,862
9
2015-08-05T12:41:21Z
31,833,990
15
2015-08-05T13:29:57Z
[ "python" ]
I have been given the task of creating a spiral in python where the user inputs a number e.g 3 and it will output a 3x3 spiral which looks like this: ``` - - \ / \ | \ - / ``` **I am not looking for the full code** I just have no idea how to do it, obviously printing out every possible solution using if statements isn't possible or logical. The real question here is what should i be looking to do, for loops, define my own function? are there any docs people can link me to that would help. The full task outline is as follows: > Your task here is to write a program to draw a spiral of a given size > inside a box. > > Your program should ask the user for a positive integer denoting the size of the box. Your program should then print out a > spiral inside of a box of that size. > > For example: > > ``` > Enter size: 3 > - - \ > / \ | > \ - / > ``` > > and: > > ``` > Enter size: 4 > - - - \ > / - \ | > | \ / | > \ - - / > ``` > > ​and: > > ``` > Enter size: 5 > - - - - \ > / - - \ | > | / \ | | > | \ - / | > \ - - - / > ``` > > The input size will always be greater than 1.
I respect you for not wanting the full code. This is intentionally only a partial answer. Start by making a 2-dimensional array. Something like: ``` grid = [[None]*n for i in range(n)] ``` This allows you to write code like `grid[i][j] = '\\'` (note the doubled backslash — a lone backslash is not a valid Python string literal). Start with `i,j = 0,0`. In a loop, spiral around the grid. It might help to have a variable `direction` which takes on values `'right', 'left', 'up', 'down'` together with a corresponding `delta` taking on values like `(0,1)` (for moving to the right) to be added to `(i,j)` to implement the move. Go along a line in a certain direction placing '-' or '|' until you hit a corner (check for `None` as well as the limits of the overall grid). When you get to a corner, place the appropriate corner marker and change directions. Once the grid is filled, join each row with an empty string delimiter and join the result with `'\n'` as the delimiter.
Generating NXN spirals
31,832,862
9
2015-08-05T12:41:21Z
31,834,067
14
2015-08-05T13:32:48Z
[ "python" ]
I have been given the task of creating a spiral in python where the user inputs a number e.g 3 and it will output a 3x3 spiral which looks like this: ``` - - \ / \ | \ - / ``` **I am not looking for the full code** I just have no idea how to do it, obviously printing out every possible solution using if statements isn't possible or logical. The real question here is what should i be looking to do, for loops, define my own function? are there any docs people can link me to that would help. The full task outline is as follows: > Your task here is to write a program to draw a spiral of a given size > inside a box. > > Your program should ask the user for a positive integer denoting the size of the box. Your program should then print out a > spiral inside of a box of that size. > > For example: > > ``` > Enter size: 3 > - - \ > / \ | > \ - / > ``` > > and: > > ``` > Enter size: 4 > - - - \ > / - \ | > | \ / | > \ - - / > ``` > > ​and: > > ``` > Enter size: 5 > - - - - \ > / - - \ | > | / \ | | > | \ - / | > \ - - - / > ``` > > The input size will always be greater than 1.
Things to note: * number of characters in rows/cols is **n** * first row will always have **n - 1** `-`s and one `\` * last row will always have **n - 2** `-`s, begins with `\` and ends with `/` For example, when `n` is 4: First row: `- - - \` Last row: `\ - - /` Can be easily achieved using: ``` def get_first_raw(n): return '- ' * (n - 1) + '\\' def get_last_raw(n): return '\\ ' + '- ' * (n - 2) + '/' ``` Now regarding the body of the spiral, note the following: For n = 3: ``` - - \ / \ | \ - / ``` For n = 5: [![enter image description here](http://i.stack.imgur.com/2vOgD.png)](http://i.stack.imgur.com/2vOgD.png) For n = 6: [![enter image description here](http://i.stack.imgur.com/wjN7u.png)](http://i.stack.imgur.com/wjN7u.png) Note that the 4-spiral is **contained** inside it, and the red boxes are **fixed**. Only their length changes according to **n**. And for **n = 7**, the **n = 5** spiral is contained inside it. The same holds for **n = 2k**: each n has an **n/2** spiral contained in it. What I'm trying to say here is that you manually draw **n = 3** and **n = 2**. If the spiral should be made from an even number, you use the **n = 2** pattern, construct the first and last rows, and using loops you can append the body of the spiral. Example for **n = 5**: ``` def get_spiral(n): res = [] res.append(get_first_raw(n)) res.append('/ ' + spiral[0] + ' |') for line in spiral[1:]: res.append('| ' + line + ' |') res.append(get_last_raw(n)) return res print '\n'.join(get_spiral(5)) ``` where `spiral` is an initial spiral of size 3: ``` spiral = ['- - \\', '/ \\ |', '\\ - /'] ``` In order to generate a 7-spiral, you do: ``` spiral = get_spiral(5) print '\n'.join(get_spiral(7)) ``` and you'll get: ``` - - - - - - \ / - - - - \ | | / - - \ | | | | / \ | | | | | \ - / | | | \ - - - / | \ - - - - - / ``` Of course this can be improved and you can make the program more efficient; I just wanted to give you a guideline and to share my thoughts.
Here's more spirals for fun: ``` - - - - - - - - - - \ / - - - - - - - - \ | | / - - - - - - \ | | | | / - - - - \ | | | | | | / - - \ | | | | | | | | / \ | | | | | | | | | \ - / | | | | | | | \ - - - / | | | | | \ - - - - - / | | | \ - - - - - - - / | \ - - - - - - - - - / ``` --- ``` - - - - - - - - - - - - - - - - - - - - - - - - \ / - - - - - - - - - - - - - - - - - - - - - - \ | | / - - - - - - - - - - - - - - - - - - - - \ | | | | / - - - - - - - - - - - - - - - - - - \ | | | | | | / - - - - - - - - - - - - - - - - \ | | | | | | | | / - - - - - - - - - - - - - - \ | | | | | | | | | | / - - - - - - - - - - - - \ | | | | | | | | | | | | / - - - - - - - - - - \ | | | | | | | | | | | | | | / - - - - - - - - \ | | | | | | | | | | | | | | | | / - - - - - - \ | | | | | | | | | | | | | | | | | | / - - - - \ | | | | | | | | | | | | | | | | | | | | / - - \ | | | | | | | | | | | | | | | | | | | | | | / \ | | | | | | | | | | | | | | | | | | | | | | | \ - / | | | | | | | | | | | | | | | | | | | | | \ - - - / | | | | | | | | | | | | | | | | | | | \ - - - - - / | | | | | | | | | | | | | | | | | \ - - - - - - - / | | | | | | | | | | | | | | | \ - - - - - - - - - / | | | | | | | | | | | | | \ - - - - - - - - - - - / | | | | | | | | | | | \ - - - - - - - - - - - - - / | | | | | | | | | \ - - - - - - - - - - - - - - - / | | | | | | | \ - - - - - - - - - - - - - - - - - / | | | | | \ - - - - - - - - - - - - - - - - - - - / | | | \ - - - - - - - - - - - - - - - - - - - - - / | \ - - - - - - - - - - - - - - - - - - - - - - - / ``` You also got a top view for a pyramid for free
Difference between 'number % 2:' and 'number % 2 == 0'?
31,837,170
3
2015-08-05T15:48:27Z
31,837,211
11
2015-08-05T15:50:17Z
[ "python", "python-3.x" ]
I'm learning about Python's boolean logic and how you can shorten things down. Are the two expressions in the title equivalent? If not, what are the differences between them?
`number % 2` is `0` (so falsy) if number is even. `number % 2 == 0` is `True` if number is even. The first returns an `int` whereas the second returns a `bool`. Python's [truthiness](https://docs.python.org/2/library/stdtypes.html#truth-value-testing) lets you handle them the same though.
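The type difference (and the equivalence under truth testing) can be verified directly:

```python
number = 4

a = number % 2        # 0    -- an int
b = number % 2 == 0   # True -- a bool

assert isinstance(a, int) and not isinstance(a, bool)
assert isinstance(b, bool)

# as conditions they carry the same information, with opposite polarity:
# `number % 2` is falsy exactly when `number % 2 == 0` is True
assert bool(number % 2) == (not (number % 2 == 0))
```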
Why does this class run?
31,839,816
2
2015-08-05T18:10:01Z
31,839,958
7
2015-08-05T18:17:16Z
[ "python", "python-3.x" ]
I've been playing with my codes a little for a while, and this one is not about a bug or anything, but i just don't understand why class main() runs without needing to initialize it... ``` class vars(): var1 = "Universe!" var2 = "Oscar!" var3 = "Rainbow!" class main(): print (vars.var1) def __init__(self): print (vars.var2) print (vars.var3) ``` But yes, thank you very much for reading.
Unlike many other languages, class body is an executable statement in Python and is executed immediately as the interpreter reaches the `class` line. When you run this "program": ``` class Foo: print("hey") ``` it just prints "hey" without any `Foo` object being created. The same applies to the function definition statement `def` (but not to function bodies). When you run this: ``` def foo(arg=print("hi")): print("not yet") ``` it prints "hi", but not "not yet".
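The ordering is easy to make visible by recording side effects: the class body runs as soon as the `class` statement is reached, while `__init__` runs only on instantiation:

```python
events = []

class Demo:
    events.append("class body ran")        # executes at definition time

    def __init__(self):
        events.append("__init__ ran")      # executes once per instance

assert events == ["class body ran"]        # no instance created yet
Demo()
assert events == ["class body ran", "__init__ ran"]
print(events)
```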
More efficient way to look up dictionary values whose keys start with same prefix
31,841,303
5
2015-08-05T19:33:04Z
31,841,363
8
2015-08-05T19:36:30Z
[ "python", "performance", "dictionary", "lookup", "startswith" ]
I have a dictionary whose keys come in sets that share the same prefix, like this: ``` d = { "key1":"valA", "key123":"valB", "key1XY":"valC", "key2":"valD", "key2-22":"valE" } ``` Given a query string, I need to look up all the values associated with keys that start with that prefix, e.g. for `query="key1"` I need to get `["valA", "valB", "valC"]` My implementation below works but is too slow for a large number of queries since the dictionary `d` has about 30,000 keys and most of the keys are more than 20 characters long: ``` result = [d[s] for s in d.keys() if s.startswith(query)] ``` Is there a faster/more efficient way to implement this?
You can avoid producing the intermediate list generated by `dict.keys()` (in python 2.x): ``` result = [d[key] for key in d if key.startswith(query)] ``` But you most likely want to use a [*trie*](https://en.wikipedia.org/wiki/Trie) instead of a dictionary, so you can find all the values associated with a key with a common prefix (a trie is similar to a tree based on prefixes). [Here](http://kmike.ru/python-data-structures/#tries) you can find some different implementation of tries. > [![A trie for keys "A", "to", "tea", "ted", "ten", "i", "in", and "inn".](http://i.stack.imgur.com/ugX7w.png)](http://i.stack.imgur.com/ugX7w.png) > > A trie for keys "A", "to", "tea", "ted", "ten", "i", "in", and "inn". (source [wikipedia](https://en.wikipedia.org/wiki/Trie)) --- Let's compare the timings for the different solutions: ``` # create a dictionary with 30k entries d = {str(x):str(x) for x in xrange(1, 30001)} query = '108' # dict with keys() %timeit [d[s] for s in d.keys() if s.startswith(query)] 100 loops, best of 3: 8.87 ms per loop ``` --- ``` # dict without keys() %timeit [d[s] for s in d if s.startswith(query)] 100 loops, best of 3: 7.83 ms per loop # 11.72% improvement ``` --- ``` # PyTrie (https://pypi.python.org/pypi/PyTrie/0.2) import pytrie pt = pytrie.Trie(d) %timeit [pt[s] for s in pt.iterkeys(query)] 1000 loops, best of 3: 320 µs per loop # 96.36% improvement ``` --- ``` # datrie (https://pypi.python.org/pypi/datrie/0.7) import datrie dt = datrie.Trie('0123456789') for key, val in d.iteritems(): dt[unicode(key)] = val %timeit [dt[s] for s in dt.keys(unicode(query))] 10000 loops, best of 3: 162 µs per loop # 98.17% improvement ```
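For completeness, here is a minimal pure-Python trie built from nested dicts — no third-party package, but the same descend-by-prefix idea (the `_end` sentinel key marking "a value is stored here" is an implementation detail of this sketch):

```python
_end = object()  # sentinel key: "a value is stored at this node"

def trie_insert(root, key, value):
    node = root
    for ch in key:
        node = node.setdefault(ch, {})
    node[_end] = value

def trie_prefix_values(root, prefix):
    """Return all values whose key starts with `prefix`."""
    node = root
    for ch in prefix:
        if ch not in node:
            return []
        node = node[ch]
    # collect every value in this subtree
    out, stack = [], [node]
    while stack:
        n = stack.pop()
        for k, v in n.items():
            if k is _end:
                out.append(v)
            else:
                stack.append(v)
    return out

d = {"key1": "valA", "key123": "valB", "key1XY": "valC",
     "key2": "valD", "key2-22": "valE"}
root = {}
for k, v in d.items():
    trie_insert(root, k, v)

print(sorted(trie_prefix_values(root, "key1")))  # ['valA', 'valB', 'valC']
```

The lookup cost depends on the prefix length and the size of the matching subtree, not on the total number of keys.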
What is this python expression containing curly braces and a for in loop?
31,846,592
2
2015-08-06T03:59:18Z
31,846,617
8
2015-08-06T04:01:19Z
[ "python" ]
I just came across this line of python: ``` order.messages = {c.Code:[] for c in child_orders} ``` I have no idea what it is doing, other than it is looping over the list `child_orders` and placing the result in `order.messages`. What does it do and what is it called?
It's dictionary comprehension! It's iterating through `child_orders` and creating a dictionary where the key is `c.Code` and the value is `[]`. More info [here](http://stackoverflow.com/questions/1747817/python-create-a-dictionary-with-list-comprehension).
What is this python expression containing curly braces and a for in loop?
31,846,592
2
2015-08-06T03:59:18Z
31,846,622
7
2015-08-06T04:01:29Z
[ "python" ]
I just came across this line of python: ``` order.messages = {c.Code:[] for c in child_orders} ``` I have no idea what it is doing, other than it is looping over the list `child_orders` and placing the result in `order.messages`. What does it do and what is it called?
That's a *dict comprehension*. It is just like a *list comprehension* ``` [3*x for x in range(5)] --> [0,3,6,9,12] ``` except: ``` {x:(3*x) for x in range(5)} ---> { 0:0, 1:3, 2:6, 3:9, 4:12 } ``` * produces a Python `dictionary`, not a `list` * uses curly braces `{}` not square braces `[]` * defines *key:value* pairs based on the iteration through a list In your case the keys are coming from the `Code` property of each element and the value is always set to empty array `[]` The code you posted: ``` order.messages = {c.Code:[] for c in child_orders} ``` is equivalent to this code: ``` order.messages = {} for c in child_orders: order.messages[c.Code] = [] ``` See also: * [PEP0274](https://www.python.org/dev/peps/pep-0274/) * [Python Dictionary Comprehension](http://stackoverflow.com/questions/14507591/python-dictionary-comprehension)
How to compute skipgrams in python?
31,847,682
10
2015-08-06T05:44:34Z
31,886,292
7
2015-08-07T20:40:18Z
[ "python", "nlp", "n-gram", "language-model" ]
A k [skipgram](http://homepages.inf.ed.ac.uk/ballison/pdf/lrec_skipgrams.pdf) is an ngram which is a superset of all ngrams and each (k-i )skipgram till (k-i)==0 (which includes 0 skip grams). So how to efficiently compute these skipgrams in python? Following is the code i tried but it is not doing as expected: ``` <pre> input_list = ['all', 'this', 'happened', 'more', 'or', 'less'] def find_skipgrams(input_list, N,K): bigram_list = [] nlist=[] K=1 for k in range(K+1): for i in range(len(input_list)-1): if i+k+1<len(input_list): nlist=[] for j in range(N+1): if i+k+j+1<len(input_list): nlist.append(input_list[i+k+j+1]) bigram_list.append(nlist) return bigram_list </pre> ``` The above code is not rendering correctly, but print `find_skipgrams(['all', 'this', 'happened', 'more', 'or', 'less'],2,1)` gives following output > [['this', 'happened', 'more'], ['happened', 'more', 'or'], ['more', > 'or', 'less'], ['or', 'less'], ['less'], ['happened', 'more', 'or'], > ['more', 'or', 'less'], ['or', 'less'], ['less'], ['less']] The code listed here also does not give correct output: <https://github.com/heaven00/skipgram/blob/master/skipgram.py> print skipgram\_ndarray("What is your name") gives: ['What,is', 'is,your', 'your,name', 'name,', 'What,your', 'is,name'] name is a unigram!
From the [paper](http://homepages.inf.ed.ac.uk/ballison/pdf/lrec_skipgrams.pdf) that OP links, the following string: > Insurgents killed in ongoing fighting Yields: > 2-skip-bi-grams = {insurgents killed, insurgents in, insurgents > ongoing, killed in, killed ongoing, killed fighting, in ongoing, in > fighting, ongoing fighting} > > 2-skip-tri-grams = {insurgents killed in, insurgents killed ongoing, > insurgents killed fighting, insurgents in ongoing, insurgents in > fighting, insurgents ongoing fighting, killed in ongoing, killed in > fighting, killed ongoing fighting, in ongoing fighting}. With slight modification to NLTK's `ngrams` code (<https://github.com/nltk/nltk/blob/develop/nltk/util.py#L383>): ``` from itertools import chain, combinations import copy from nltk.util import ngrams def pad_sequence(sequence, n, pad_left=False, pad_right=False, pad_symbol=None): if pad_left: sequence = chain((pad_symbol,) * (n-1), sequence) if pad_right: sequence = chain(sequence, (pad_symbol,) * (n-1)) return sequence def skipgrams(sequence, n, k, pad_left=False, pad_right=False, pad_symbol=None): sequence_length = len(sequence) sequence = iter(sequence) sequence = pad_sequence(sequence, n, pad_left, pad_right, pad_symbol) if sequence_length + pad_left + pad_right < k: raise Exception("The length of sentence + padding(s) < skip") if n < k: raise Exception("Degree of Ngrams (n) needs to be bigger than skip (k)") history = [] nk = n+k # Return point for recursion. if nk < 1: return # If n+k longer than sequence, reduce k by 1 and recur elif nk > sequence_length: for ng in skipgrams(list(sequence), n, k-1): yield ng while nk > 1: # Collects the first instance of n+k length history history.append(next(sequence)) nk -= 1 # Iterative drop first item in history and picks up the next # while yielding skipgrams for each iteration. 
    for item in sequence:
        history.append(item)
        current_token = history.pop(0)
        # Iterate through the rest of the history and
        # pick out all combinations of the (n-1)-grams
        for idx in list(combinations(range(len(history)), n-1)):
            ng = [current_token]
            for _id in idx:
                ng.append(history[_id])
            yield tuple(ng)

    # Recursively yield the skipgrams for the rest of the sequence where
    # len(sequence) < n+k
    for ng in list(skipgrams(history, n, k-1)):
        yield ng
```

Let's do some doctests to match the example in the paper:

```
>>> text = "Insurgents killed in ongoing fighting".split()
>>> two_skip_bigrams = list(skipgrams(text, n=2, k=2))
[('Insurgents', 'killed'), ('Insurgents', 'in'), ('Insurgents', 'ongoing'), ('killed', 'in'), ('killed', 'ongoing'), ('killed', 'fighting'), ('in', 'ongoing'), ('in', 'fighting'), ('ongoing', 'fighting')]
>>> two_skip_trigrams = list(skipgrams(text, n=3, k=2))
[('Insurgents', 'killed', 'in'), ('Insurgents', 'killed', 'ongoing'), ('Insurgents', 'killed', 'fighting'), ('Insurgents', 'in', 'ongoing'), ('Insurgents', 'in', 'fighting'), ('Insurgents', 'ongoing', 'fighting'), ('killed', 'in', 'ongoing'), ('killed', 'in', 'fighting'), ('killed', 'ongoing', 'fighting'), ('in', 'ongoing', 'fighting')]
```

But do note that if `n+k > len(sequence)`, it will yield the same effect as `skipgrams(sequence, n, k-1)` (this is not a bug, it's a fail-safe feature), e.g.
```
>>> three_skip_trigrams = list(skipgrams(text, n=3, k=3))
>>> three_skip_fourgrams = list(skipgrams(text, n=4, k=3))
>>> four_skip_fourgrams  = list(skipgrams(text, n=4, k=4))
>>> four_skip_fivegrams  = list(skipgrams(text, n=5, k=4))
>>>
>>> print len(three_skip_trigrams), three_skip_trigrams
10 [('Insurgents', 'killed', 'in'), ('Insurgents', 'killed', 'ongoing'), ('Insurgents', 'killed', 'fighting'), ('Insurgents', 'in', 'ongoing'), ('Insurgents', 'in', 'fighting'), ('Insurgents', 'ongoing', 'fighting'), ('killed', 'in', 'ongoing'), ('killed', 'in', 'fighting'), ('killed', 'ongoing', 'fighting'), ('in', 'ongoing', 'fighting')]
>>> print len(three_skip_fourgrams), three_skip_fourgrams
5 [('Insurgents', 'killed', 'in', 'ongoing'), ('Insurgents', 'killed', 'in', 'fighting'), ('Insurgents', 'killed', 'ongoing', 'fighting'), ('Insurgents', 'in', 'ongoing', 'fighting'), ('killed', 'in', 'ongoing', 'fighting')]
>>> print len(four_skip_fourgrams), four_skip_fourgrams
5 [('Insurgents', 'killed', 'in', 'ongoing'), ('Insurgents', 'killed', 'in', 'fighting'), ('Insurgents', 'killed', 'ongoing', 'fighting'), ('Insurgents', 'in', 'ongoing', 'fighting'), ('killed', 'in', 'ongoing', 'fighting')]
>>> print len(four_skip_fivegrams), four_skip_fivegrams
1 [('Insurgents', 'killed', 'in', 'ongoing', 'fighting')]
```

This allows `n == k` but disallows `k > n`, as shown in the lines:

```
if n < k:
    raise Exception("Degree of Ngrams (n) needs to be bigger than skip (k)")
```

---

For understanding's sake, let's try to understand the "mystical" line:

```
for idx in list(combinations(range(len(history)), n-1)):
    pass # Do something
```

Given a list of unique items, `combinations` produces this:

```
>>> from itertools import combinations
>>> x = [0,1,2,3,4,5]
>>> list(combinations(x,2))
[(0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (1, 2), (1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5), (3, 4), (3, 5), (4, 5)]
```

And since the indices of a list of tokens are always unique, e.g.
```
>>> sent = ['this', 'is', 'a', 'foo', 'bar']
>>> current_token = sent.pop(0) # i.e. 'this'
>>> range(len(sent))
[0,1,2,3]
```

It's possible to compute the possible [combinations (without replacement)](https://docs.python.org/2/library/itertools.html#itertools.combinations) of the range:

```
>>> n = 3
>>> list(combinations(range(len(sent)), n-1))
[(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
```

If we map the indices back to the list of tokens:

```
>>> [tuple(sent[id] for id in idx) for idx in combinations(range(len(sent)), 2)]
[('is', 'a'), ('is', 'foo'), ('is', 'bar'), ('a', 'foo'), ('a', 'bar'), ('foo', 'bar')]
```

Then, concatenating each with the `current_token`, we get the skipgrams for the current token and its context+skip window:

```
>>> [tuple([current_token]) + tuple(sent[id] for id in idx) for idx in combinations(range(len(sent)), 2)]
[('this', 'is', 'a'), ('this', 'is', 'foo'), ('this', 'is', 'bar'), ('this', 'a', 'foo'), ('this', 'a', 'bar'), ('this', 'foo', 'bar')]
```

So after that we move on to the next word.
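For a quick sanity check, the whole combinations trick can be condensed into a minimal non-recursive sketch (an illustration only — no padding and no fail-safe recursion, unlike the full generator above):

```python
from itertools import combinations

def skipgrams_simple(tokens, n, k):
    """Yield n-grams that allow up to k skipped tokens in total after
    the head token (simplified sketch, no padding)."""
    for i, head in enumerate(tokens):
        # Candidate continuation tokens: the next n-1+k tokens.
        window = tokens[i + 1:i + n + k]
        # Choosing n-1 of them (in order) caps the total skips at k.
        for idx in combinations(range(len(window)), n - 1):
            yield (head,) + tuple(window[j] for j in idx)

text = "Insurgents killed in ongoing fighting".split()
print(list(skipgrams_simple(text, n=2, k=2)))  # the paper's 9 2-skip-bi-grams
```

This reproduces the paper's 2-skip-bi-grams and 2-skip-tri-grams exactly, because picking `n-1` ordered positions out of a window of `n-1+k` tokens is the same as allowing at most `k` total skips.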
Classifying Python array by nearest "seed" region?
31,848,309
12
2015-08-06T06:30:07Z
31,852,160
9
2015-08-06T09:37:41Z
[ "python", "arrays", "numpy", "scipy", "image-segmentation" ]
I have a raster of ecological habitats which I've converted into a two-dimensional Python numpy array (example\_array below). I also have an array containing "seed" regions with unique values (seed\_array below) which I'd like to use to classify my habitat regions. I'd like to **'grow' my seed regions 'into' my habitat regions such that habitats are assigned the ID of the nearest seed region, as measured 'through' the habitat regions.** For example: [![Image of arrays](http://i.stack.imgur.com/oFfOX.png)](http://i.stack.imgur.com/oFfOX.png) My best approach used the `ndimage.distance_transform_edt` function to create an array depicting the nearest "seed" region to each cell in the dataset, which was then substituted back into the habitat array. This doesn't work particularly well, however, as the function doesn't measure distances "through" my habitat regions, for example below where the red circle represents an incorrectly classified cell: [![Incorrect output using ndimage](http://i.stack.imgur.com/lmlQu.png)](http://i.stack.imgur.com/lmlQu.png) Below are sample arrays for my habitat and seed data, and an example of the kind of output I'm looking for. My actual datasets are much larger - over a million habitat/seed regions. Any help would be much appreciated! 
``` import numpy as np import scipy.ndimage as ndimage import matplotlib.pyplot as plt # Sample study area array example_array = np.array([[0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1], [0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0], [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1], [1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1], [1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0], [1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0], [1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) # Plot example array plt.imshow(example_array, cmap="spectral", interpolation='nearest') seed_array = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 1, 1, 1, 0, 0, 2, 2, 0, 0, 0, 0], [0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) # Plot seeds plt.imshow(seed_array, cmap="spectral", interpolation='nearest') desired_output = np.array([[0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 4, 4, 4, 0, 0, 0, 3, 3, 3], [0, 0, 0, 0, 4, 4, 0, 0, 0, 3, 3, 3], [0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 3, 0], [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 3, 3], [1, 1, 0, 1, 0, 0, 0, 0, 2, 2, 3, 3], [1, 1, 1, 1, 0, 0, 2, 2, 2, 0, 0, 3], [1, 1, 1, 1, 1, 2, 2, 2, 2, 0, 0, 0], [1, 1, 1, 1, 0, 0, 2, 2, 2, 0, 0, 0], [1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) # Plot desired output plt.imshow(desired_output, cmap="spectral", interpolation='nearest') ```
You can use [watershed segmentation](http://scikit-image.org/docs/dev/auto_examples/plot_watershed.html) from scikits-image: * Distance transform ``` from scipy import ndimage as nd distance = nd.distance_transform_edt(example_array) ``` * Watershed segmentation ``` from skimage.morphology import watershed, square result = watershed(-distance, seed_array, mask=example_array, \ connectivity=square(3)) ``` * Result ``` subplot(1,2,1) imshow(-distance, 'spectral', interpolation='none') subplot(1,2,2) imshow(result, 'spectral', interpolation='none') ``` [![enter image description here](http://i.stack.imgur.com/XkFRV.png)](http://i.stack.imgur.com/XkFRV.png) --- As another variant, and following your initial approach, you can use watershed to find connected neighbours to nearest seeds. As you mentioned in the question: * Calculate distance to the seeds: ``` distance = nd.distance_transform_edt(seed_array == 0) ``` * Calculate watershed in the distance space: ``` result = watershed(distance, seed_array, mask=example_array, \ connectivity=square(3)) ``` * Plot result: ``` figure(figsize=(9,3)) subplot(1,3,1) imshow(distance, 'jet', interpolation='none') subplot(1,3,2) imshow(np.ma.masked_where(example_array==0, distance), 'jet', interpolation='none') subplot(1,3,3) imshow(result, 'spectral', interpolation='none') ``` [![enter image description here](http://i.stack.imgur.com/YBC6r.png)](http://i.stack.imgur.com/YBC6r.png) --- **Further discussion:** Watershed method tries to grow regions from seeded peaks by *flowing* through the image gradient. As your image is binary, the regions will expand equally in all directions from the seeded points, and thus give you the point in between two regions. For more info about watershed refer to [wikipedia](https://en.wikipedia.org/wiki/Watershed_(image_processing)). 
In the first example, the distance transform is calculated in the original image, and thus the regions expand equally from the seeds until they reach the splitting point in the middle.

In the second example, the distance transform is calculated from all the pixels to any of the seeded points, and then watershed is applied in that space. Watershed will basically assign each pixel to its nearest seed, but it adds a connectivity constraint.

**NOTE** the sign difference in the distance maps in both the plotting and the watershed.

**NOTE** In the distance maps (left image in both plots), blue means close while red means far.
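If scikit-image isn't available, the same idea — assign every habitat cell to the nearest seed, with distance measured *through* the habitat — can be sketched as a multi-source breadth-first search using only NumPy and the standard library (4-connectivity here for brevity, so ties may resolve differently than the 8-connected watershed above):

```python
from collections import deque
import numpy as np

def grow_seeds(habitat, seeds):
    """Assign each habitat cell (habitat == 1) the label of the nearest
    seed, with distance measured through habitat cells only (4-connected
    multi-source BFS). Cells unreachable from any seed stay 0."""
    out = np.where(habitat == 1, seeds, 0).astype(int)
    q = deque(zip(*np.nonzero(out)))  # start from all seeded cells at once
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < habitat.shape[0] and 0 <= nc < habitat.shape[1]
                    and habitat[nr, nc] == 1 and out[nr, nc] == 0):
                out[nr, nc] = out[r, c]  # inherit the nearer seed's label
                q.append((nr, nc))
    return out

habitat = np.array([[1, 1, 0, 1, 1],
                    [1, 0, 0, 0, 1],
                    [1, 1, 1, 1, 1]])
seeds = np.array([[1, 0, 0, 0, 2],
                  [0, 0, 0, 0, 0],
                  [0, 0, 0, 0, 0]])
print(grow_seeds(habitat, seeds))
```

This is much slower than the compiled watershed for large rasters, but it makes the "distance through habitat" semantics explicit.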
Serve image stored in SQLAlchemy LargeBinary column
31,849,494
5
2015-08-06T07:32:14Z
31,858,076
7
2015-08-06T14:13:07Z
[ "python", "flask", "sqlalchemy", "blob", "jinja" ]
I want to upload a file and store it in the database. I created a LargeBinary column. ``` logo = db.Column(db.LargeBinary) ``` I read the uploaded file and store it in the database. ``` files = request.files.getlist('file') if files: event.logo = files[0].file.read() ``` Is this the proper way to store an image as binary in the database? How do I convert the binary data into an image again to display it?
If you absolutely need to store the image in the database, then yes, this is correct. Typically, files are stored in the filesystem and the path is stored in the database. This is the better solution because the web server typically has an efficient method of serving files from the filesystem, as opposed to the application sending large blobs of data dynamically.

---

To serve the image, write a view that gets the image data and sends it as a response.

```
@app.route('/event/<int:id>/logo')
def event_logo(id):
    event = Event.query.get_or_404(id)
    return app.response_class(event.logo, mimetype='application/octet-stream')
```

```
<img src="{{ url_for('event_logo', id=event.id) }}"/>
```

Preferably serve it using the correct mimetype rather than `application/octet-stream`.

---

You could also embed the image data directly in the HTML using a data URI. This is sub-optimal, because data URIs are sent every time the page is rendered, while an image file can be cached by the client.

```
from base64 import b64encode

@app.route('/event/<int:id>/logo')
def event_logo(id):
    event = Event.query.get_or_404(id)
    image = b64encode(event.logo)
    return render_template('event.html', event=event, logo=image)
```

```
<img src="data:;base64,{{ logo }}"/>
```
Filling Many2many field (odoo 8)
31,853,402
6
2015-08-06T10:29:11Z
32,028,259
17
2015-08-15T19:01:23Z
[ "python", "xml", "postgresql", "openerp", "odoo" ]
**What I've done:** I have a module with ``` myfield = fields.Many2one('res.partner', string="Graduate", domain=[('is_graduated', '=', True)]) ``` Then I have another class with ``` _inherit = 'res.partner' is_graduated = fields.Boolean("Graduated before?", default=False) graduations = fields.Many2many('my_module.courses', string="Graduation courses") ``` **What I get:** The `myfield` works good, but the `graduations` field is empty. If you edit `user 1` profile you can add entries to `graduation` field using `Add item`, but I need it to be filled automaticaly. **What I expect:** I expect that every record where `myfield` is set to lets say `user 1`, will be visible in field `graduations` when you open `user 1` profile. When I create record and set `myfield` value to lets say `user 1`, that record must to be visible in `user 1` profile in the field `graduations`. How to achieve that?
```
user_rel_ids = fields.Many2many(comodel_name='course',
                                relation='user_course_rel',
                                column1='user_id',
                                column2='course_id')
```

Or

```
user_rel_id = fields.Many2many('course')
```

To fill the field with data (for example, to add a new relation), write a *list* of command tuples — note that `(4, ID)` takes a single id, not a list:

```
user_rel_id = [(4, course_id)]
```

A full list of options is in the documentation for the osv class. For a **many2many** field, a list of tuples is expected. Here is the list of tuples that are accepted, with the corresponding semantics:

`(0, 0, { values })` link to a **new record** that needs to be created with the given values dictionary

`(1, ID, { values })` **update** the linked record with id = ID (write values on it)

`(2, ID)` remove and **delete** the linked record with id = ID (calls unlink on ID, which will delete the object completely, and the link to it as well)

`(3, ID)` cut the link to the linked record with id = ID (**delete the relationship** between the two objects but does not delete the target object itself)

`(4, ID)` **link** to existing record with id = ID (adds a relationship)

`(5)` **unlink all** (like using (3,ID) for all linked records)

`(6, 0, [IDs])` **replace** the list of linked IDs (like using (5) then (4,ID) for each ID in the list of IDs)
What's the best way to share Jupyter notebooks with non-programmers?
31,855,794
19
2015-08-06T12:28:06Z
33,248,969
8
2015-10-21T00:22:11Z
[ "python", "ipython-notebook", "jupyter" ]
I am trying to wrap my head around what I can/cannot do with Jupyter. I have a Jupyter server running on our internal server, accessible via VPN and password protected. I am the only one actually creating notebooks but I would like to make some notebooks visible to other team members in a read-only way. Ideally I could just share a URL with them that they would bookmark for when they want to see the notebook with refreshed data. I saw export options but cannot find any mention of "publishing" or "making public" local live notebooks. Is this impossible? Is it maybe just a wrong way to think about how Jupyter should be used? Are their best practices around this?
The "best" way to share a Jupyter notebook is simply to place it on GitHub (and view it directly) or some other public link and use the [Jupyter Notebook Viewer](https://nbviewer.jupyter.org/). When privacy is more of an issue then there are alternatives, but it's certainly more complex; there's no built-in way to do this in Jupyter alone, but a couple of options are:

## Host your own nbviewer

GitHub and the Jupyter Notebook Viewer both use the same tool to render `.ipynb` files into static HTML, this tool is [nbviewer](https://github.com/jupyter/nbviewer).

The installation instructions are more complex than I'm willing to go into here but if your company/team has a shared server that doesn't require password access then you could host the nbviewer on that server and direct it to load from your credentialed server. This will probably require some more advanced configuration than you're going to find in the docs.

## Set up a deployment script

If you don't necessarily need *live* updating HTML then you could set up a script on your credentialed server that will simply use Jupyter's built-in export options to create the static HTML files and then send those to a more publicly accessible server.

Good luck!
What's the best way to share Jupyter notebooks with non-programmers?
31,855,794
19
2015-08-06T12:28:06Z
33,249,008
8
2015-10-21T00:27:39Z
[ "python", "ipython-notebook", "jupyter" ]
I am trying to wrap my head around what I can/cannot do with Jupyter. I have a Jupyter server running on our internal server, accessible via VPN and password protected. I am the only one actually creating notebooks but I would like to make some notebooks visible to other team members in a read-only way. Ideally I could just share a URL with them that they would bookmark for when they want to see the notebook with refreshed data. I saw export options but cannot find any mention of "publishing" or "making public" local live notebooks. Is this impossible? Is it maybe just a wrong way to think about how Jupyter should be used? Are their best practices around this?
Michael's suggestion of running your own nbviewer instance is a good one I used in the past with an Enterprise Github server. Another lightweight alternative is to have a cell at the end of your notebook that does a shell call to nbconvert so that it's automatically refreshed after running the whole thing: `!ipython nbconvert <notebook name>.ipynb --to html`
Rotate tick labels for seaborn barplot
31,859,285
6
2015-08-06T15:04:16Z
31,861,477
8
2015-08-06T16:50:17Z
[ "python", "matplotlib", "seaborn" ]
I am trying to display a chart with rotated x-axis labels, but the chart is not displaying. ``` import seaborn as sns %matplotlib inline yellow='#FFB11E' by_school=sns.barplot(x ='Organization Name',y ='Score',data = combined.sort('Organization Name'),color=yellow,ci=None) ``` At this point I can see the image, but after I set the xticklabel, I don't see the image anymore only an object reference. (I would post the image, but I don't enough reputation :() ``` by_school.set_xticklabels('Organization Name',rotation=45) <matplotlib.axes._subplots.AxesSubplot at 0x3971a6a0> ``` A similar question is posted here: [Rotate label text in seaborn factorplot](http://stackoverflow.com/questions/26540035/rotate-label-text-in-seaborn-factorplot) but the solution is not working.
You need a different method call, namely `.set_rotation`, applied to each tick label. Since you already have the tick labels, just change their rotations:

```
for item in by_school.get_xticklabels():
    item.set_rotation(45)
```

`barplot` returns a `matplotlib.axes` object (as of `seaborn` 0.6.0), therefore you have to rotate the labels this way. In other cases, when the method returns a `FacetGrid` object, refer to [Rotate label text in seaborn factorplot](http://stackoverflow.com/questions/26540035/rotate-label-text-in-seaborn-factorplot?lq=1)
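For reference, `matplotlib.pyplot.setp` applies the rotation to all labels in one call; a minimal plain-matplotlib sketch (no seaborn needed to demonstrate the mechanism, and the Agg backend makes it runnable without a display):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.bar(range(3), [3, 1, 2])
ax.set_xticks(range(3))
ax.set_xticklabels(['Org A', 'Org B', 'Org C'])
plt.setp(ax.get_xticklabels(), rotation=45)  # rotate every tick label at once
print([lbl.get_rotation() for lbl in ax.get_xticklabels()])
```

Since a seaborn `barplot` returns a `matplotlib.axes` object, the same `plt.setp(by_school.get_xticklabels(), rotation=45)` call should work there too.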
Format: KeyError when using curly brackets in strings
31,859,757
3
2015-08-06T15:24:56Z
31,859,804
7
2015-08-06T15:26:51Z
[ "python", "format" ]
I'm running the following code: ``` asset = {} asset['abc'] = 'def' print type(asset) print asset['abc'] query = '{"abc": "{abc}"}'.format(abc=asset['abc']) print query ``` Which throws a `KeyError` error: ``` [user@localhost] : ~/Documents/vision/inputs/perma_sniff $ python ~/test.py <type 'dict'> def Traceback (most recent call last): File "/home/user/test.py", line 5, in <module> query = '\{"abc": "{abc}"\}'.format(abc=asset['abc']) KeyError: '"abc"' ``` Format is obviously getting confused by the wrapping `{`. How can I make sure format only tries to replace the (correct) inner `{abc}`. ie, expected output is: ``` {"abc": "def"} ``` (I'm aware I could use the `json` module for this task, but I want to avoid that. I would much rather use format.)
To insert a literal brace, double it up: ``` query = '{{"abc": "{abc}"}}'.format(abc=asset['abc']) ``` (This is documented [here](https://docs.python.org/2/library/string.html#format-string-syntax), but not highlighted particularly obviously).
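A quick demonstration of the doubled-brace rule, with `json.dumps` shown for comparison (it produces the same string here and handles quoting and escaping for you):

```python
import json

asset = {'abc': 'def'}

# Doubled braces emit literal { and }; single braces mark replacement fields.
query = '{{"abc": "{abc}"}}'.format(abc=asset['abc'])
print(query)  # {"abc": "def"}

# json.dumps gives the same result without any brace bookkeeping.
print(json.dumps({'abc': asset['abc']}))  # {"abc": "def"}
```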
Deleting hdf5 dataset using h5py
31,861,724
5
2015-08-06T17:04:38Z
33,224,934
8
2015-10-19T22:21:15Z
[ "python", "dataset", "hdf5", "h5py" ]
Is there any way to remove a dataset from an hdf5 file, preferably using h5py? Or alternatively, is it possible to overwrite a dataset while keeping the other datasets intact? To my understanding, h5py can read/write hdf5 files in 5 modes ``` f = h5py.File("filename.hdf5",'mode') ``` where mode can be `r`for read, `r+` for read-write, `a` for read-write but creates a new file if it doesn't exist, `w` for write/overwrite, and `w-` which is same as `w` but fails if file already exists. I have tried all but none seem to work. Any suggestions are much appreciated.
Yes, this can be done. ``` with h5py.File(input, "a") as f: f.__delitem__(datasetname) ``` You will need to have the file open in a writeable mode, for example append (as above) or write. UPDATE: As noted by @seppo-enarvi the purpose of `__delitem__` function is to implement the `del` operator, so that one can delete a dataset using `del f[datasetname]`
How to load IPython shell with PySpark
31,862,293
5
2015-08-06T17:36:53Z
31,863,595
8
2015-08-06T18:48:28Z
[ "python", "apache-spark", "ipython", "pyspark" ]
I want to load IPython shell (not IPython notebook) in which I can use PySpark through command line. Is that possible? I have installed Spark-1.4.1.
If you use Spark < 1.2 you can simply execute `bin/pyspark` with the environment variable `IPYTHON=1`.

```
IPYTHON=1 /path/to/bin/pyspark
```

or

```
export IPYTHON=1
/path/to/bin/pyspark
```

While the above still works on Spark 1.2 and above, the recommended way to set the Python environment for these versions is `PYSPARK_DRIVER_PYTHON`:

```
PYSPARK_DRIVER_PYTHON=ipython /path/to/bin/pyspark
```

or

```
export PYSPARK_DRIVER_PYTHON=ipython
/path/to/bin/pyspark
```

You can replace `ipython` with a path to the interpreter of your choice.
Python: Split NumPy array based on values in the array
31,863,083
7
2015-08-06T18:20:45Z
31,863,171
7
2015-08-06T18:25:03Z
[ "python", "arrays", "numpy", "split" ]
I have one big array: ``` [(1.0, 3.0, 1, 427338.4297000002, 4848489.4332) (1.0, 3.0, 2, 427344.7937000003, 4848482.0692) (1.0, 3.0, 3, 427346.4297000002, 4848472.7469) ..., (1.0, 1.0, 7084, 427345.2709999997, 4848796.592) (1.0, 1.0, 7085, 427352.9277999997, 4848790.9351) (1.0, 1.0, 7086, 427359.16060000006, 4848787.4332)] ``` I want to split this array into multiple arrays based on the 2nd value in the array (3.0, 3.0, 3.0...1.0,1.0,10). Every time the 2nd value changes, I want a new array, so basically each new array has the same 2nd value. I've looked this up on Stack Overflow and know of the command ``` np.split(array, number) ``` but I'm not trying to split the array into a certain number of arrays, but rather by a value. How would I be able to split the array in the way specified above? Any help would be appreciated!
You can find the indices where the values differ by using [`numpy.where`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html) and [`numpy.diff`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.diff.html) on the second column:

```
>>> arr = np.array([(1.0, 3.0, 1, 427338.4297000002, 4848489.4332),
                    (1.0, 3.0, 2, 427344.7937000003, 4848482.0692),
                    (1.0, 3.0, 3, 427346.4297000002, 4848472.7469),
                    (1.0, 1.0, 7084, 427345.2709999997, 4848796.592),
                    (1.0, 1.0, 7085, 427352.9277999997, 4848790.9351),
                    (1.0, 1.0, 7086, 427359.16060000006, 4848787.4332)])
>>> np.split(arr, np.where(np.diff(arr[:,1]))[0]+1)
[array([[  1.00000000e+00,   3.00000000e+00,   1.00000000e+00,
          4.27338430e+05,   4.84848943e+06],
       [  1.00000000e+00,   3.00000000e+00,   2.00000000e+00,
          4.27344794e+05,   4.84848207e+06],
       [  1.00000000e+00,   3.00000000e+00,   3.00000000e+00,
          4.27346430e+05,   4.84847275e+06]]),
 array([[  1.00000000e+00,   1.00000000e+00,   7.08400000e+03,
          4.27345271e+05,   4.84879659e+06],
       [  1.00000000e+00,   1.00000000e+00,   7.08500000e+03,
          4.27352928e+05,   4.84879094e+06],
       [  1.00000000e+00,   1.00000000e+00,   7.08600000e+03,
          4.27359161e+05,   4.84878743e+06]])]
```

**Explanation:**

Here first we are going to fetch the items in the second column:

```
>>> arr[:,1]
array([ 3.,  3.,  3.,  1.,  1.,  1.])
```

Now to find out where the items actually change we can use `numpy.diff`:

```
>>> np.diff(arr[:,1])
array([ 0.,  0., -2.,  0.,  0.])
```

Anything non-zero means that the item next to it was different; we can use `numpy.where` to find the indices of the non-zero items and then add 1 to them, because the actual index of such an item is one more than the returned index:

```
>>> np.where(np.diff(arr[:,1]))[0]+1
array([3])
```
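Wrapped up as a small helper (splitting on whichever column you choose), with a toy array for brevity:

```python
import numpy as np

def split_on_column(arr, col):
    """Split a 2-D array into sub-arrays wherever the value in `col` changes."""
    break_points = np.where(np.diff(arr[:, col]))[0] + 1
    return np.split(arr, break_points)

arr = np.array([[1.0, 3.0, 1],
                [1.0, 3.0, 2],
                [1.0, 3.0, 3],
                [1.0, 1.0, 7084],
                [1.0, 1.0, 7085]])
parts = split_on_column(arr, 1)
print([p[:, 1].tolist() for p in parts])  # [[3.0, 3.0, 3.0], [1.0, 1.0]]
```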
Loop through folders in Python and for files containing strings
31,866,706
5
2015-08-06T22:14:54Z
31,866,815
8
2015-08-06T22:27:38Z
[ "python" ]
I am very new to python. I need to iterate through the subdirectories of a given directory and return all files containing a certain string. ``` for root, dirs, files in os.walk(path): for name in files: if name.endswith((".sql")): if 'gen_dts' in open(name).read(): print name ``` This was the closest I got. The syntax error I get is ``` Traceback (most recent call last): File "<pyshell#77>", line 4, in <module> if 'gen_dts' in open(name).read(): IOError: [Errno 2] No such file or directory: 'dq_offer_desc_bad_pkey_vw.sql' ``` The 'dq\_offer\_desc\_bad\_pkey\_vw.sql' file does not contain 'gen\_dts' in it. I appreciate the help in advance.
You're getting that error because you're trying to open `name`, which is just the file's *name*, not its full relative path. What you need to do is `open(os.path.join(root, name), 'r')` (I added the mode since it's good practice, plus a `with` block so the file is closed promptly).

```
for root, dirs, files in os.walk(path):
    for name in files:
        if name.endswith('.sql'):
            filepath = os.path.join(root, name)
            with open(filepath, 'r') as f:
                if 'gen_dts' in f.read():
                    print filepath
```

`os.walk()` returns a generator that gives you tuples like `(root, dirs, files)`, where `root` is the current directory, and `dirs` and `files` are the names of the directories and files, respectively, that are in the root directory. Note that they are the *names*, not the paths; or to be precise, they're the path of that directory/file *relative* to the current root directory, which is another way of saying the same thing. Another way to think of it is that the directories and files in `dirs` and `files` will never have slashes in them.

One final point: the root directory paths always begin with the path that you pass to `os.walk()`, whether it was relative to your current working directory or not. So, for `os.walk('three')`, the `root` in the first tuple will be `'three'` (for `os.walk('three/')`, it'll be `'three/'`). For `os.walk('../two/three')`, it'll be `'../two/three'`. For `os.walk('/one/two/three/')`, it'll be `'/one/two/three/'`; the second one might be `'/one/two/three/four'`.
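A self-contained version of the same walk-and-search, exercised on a throwaway directory tree (the `.sql` suffix and `gen_dts` needle are just the question's values):

```python
import os
import tempfile

def find_files_containing(path, suffix, needle):
    """Return paths of files under `path` ending in `suffix`
    whose text contains `needle`."""
    matches = []
    for root, dirs, files in os.walk(path):
        for name in files:
            if name.endswith(suffix):
                filepath = os.path.join(root, name)
                with open(filepath, 'r') as fh:
                    if needle in fh.read():
                        matches.append(filepath)
    return matches

# Demo on a temporary directory tree.
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, 'sub'))
with open(os.path.join(tmp, 'sub', 'a.sql'), 'w') as fh:
    fh.write('select gen_dts from t')
with open(os.path.join(tmp, 'b.sql'), 'w') as fh:
    fh.write('select 1')
print(find_files_containing(tmp, '.sql', 'gen_dts'))  # only .../sub/a.sql
```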
Making an asynchronous task in Flask
31,866,796
5
2015-08-06T22:24:44Z
31,867,108
7
2015-08-06T22:56:04Z
[ "python", "asynchronous", "flask" ]
I am writing an application in `Flask`, which works really well except that `WSGI` is synchronous and blocking. I have one task in particular which calls out to a third party API and that task can take several minutes to complete. I would like to make that call (it's actually a series of calls) and let it run. while control is returned to Flask. My view looks like: ``` @app.route('/render/<id>', methods=['POST']) def render_script(id=None): ... data = json.loads(request.data) text_list = data.get('text_list') final_file = audio_class.render_audio(data=text_list) # do stuff return Response( mimetype='application/json', status=200 ) ``` Now, what I want to do is have the line ``` final_file = audio_class.render_audio() ``` run and provide a callback to be executed when the method returns, whilst Flask can continue to process requests. This is the only task which I need Flask to run asynchronously, and I would like some advice on how best to implement this. I have looked at `Twisted` and `Klein`, but I'm not sure they are overkill, as maybe `Threading` would suffice. Any advice would be much appreciated. *EDIT* Or Maybe `Celery` is a good choice for this?
I would use [Celery](http://www.celeryproject.org/) to handle the asynchronous task for you. You'll need to install a broker to serve as your task queue (RabbitMQ and Redis are recommended).

`app.py`:

```
from flask import Flask
from celery import Celery

broker_url = 'amqp://guest@localhost'          # Broker URL for RabbitMQ task queue

app = Flask(__name__)    
celery = Celery(app.name, broker=broker_url)
celery.config_from_object('celeryconfig')      # Your celery configurations in a celeryconfig.py

@celery.task(bind=True)
def some_long_task(self, x, y):
    # Do some long task
    ...

@app.route('/render/<id>', methods=['POST'])
def render_script(id=None):
    ...
    data = json.loads(request.data)
    text_list = data.get('text_list')
    final_file = audio_class.render_audio(data=text_list)
    some_long_task.delay(x, y)                 # Call your async task and pass whatever necessary variables
    return Response(
        mimetype='application/json',
        status=200
    )
```

Run your Flask app, and start another process to run your celery worker.

```
$ celery worker -A app.celery --loglevel=debug
```

I would also refer to Miguel Grinberg's [write-up](http://blog.miguelgrinberg.com/post/using-celery-with-flask) for a more in-depth guide to using Celery with Flask.
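If Celery feels heavyweight for a single fire-and-forget job, a plain background thread is the minimal alternative the question mentions — no retries, no persistence, and the work dies with the process. A framework-free sketch of the pattern (the `render_audio` stand-in and `results` store are hypothetical placeholders for the real third-party call):

```python
import threading
import time

results = {}

def render_audio(job_id, text_list):
    """Stand-in for the slow third-party call (hypothetical)."""
    time.sleep(0.1)  # simulate several minutes of work
    results[job_id] = 'rendered:' + ','.join(text_list)

def start_render(job_id, text_list):
    """Kick off the slow work and return immediately, like the view would."""
    t = threading.Thread(target=render_audio, args=(job_id, text_list))
    t.daemon = True  # don't block interpreter shutdown
    t.start()
    return t

t = start_render('job-1', ['hello', 'world'])
t.join()  # a web view would NOT join; this is only so the demo can print
print(results['job-1'])  # rendered:hello,world
```

In a Flask view you would return the response right after `start_render(...)` and expose a second endpoint (or callback) that reports when the result is ready.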
Pandas DataFrame: How to natively get minimum across range of rows and columns
31,866,802
14
2015-08-06T22:25:45Z
32,764,796
9
2015-09-24T15:04:47Z
[ "python", "arrays", "numpy", "pandas", "dataframe" ]
I have a Pandas DataFrame that looks similar to this but with 10,000 rows and 500 columns. [![My Dataframe](http://i.stack.imgur.com/SVMYI.png)](http://i.stack.imgur.com/SVMYI.png) For each row, I would like to find the minimum value between 3 days ago at 15:00 and today at 13:30. Is there some native numpy way to do this quickly? My goal is to be able to get the minimum value for each row by saying something like "what is the minimum value from 3 days ago ago 15:00 to 0 days ago (aka today) 13:30?" For this particular example the answers for the last two rows would be: ``` 2011-01-09 2481.22 2011-01-10 2481.22 ``` My current way is this: ``` 1. Get the earliest row (only the values after the start time) 2. Get the middle rows 3. Get the last row (only the values before the end time) 4. Concat (1), (2), and (3) 5. Get the minimum of (4) ``` But this takes a very long time on a large DataFrame --- The following code will generate a similar DF: ``` import numpy import pandas import datetime numpy.random.seed(0) random_numbers = (numpy.random.rand(10, 8)*100 + 2000) columns = [datetime.time(13,0) , datetime.time(13,30), datetime.time(14,0), datetime.time(14,30) , datetime.time(15,0), datetime.time(15,30) ,datetime.time(16,0), datetime.time(16,30)] index = pandas.date_range('2011/1/1', '2011/1/10') df = pandas.DataFrame(data = random_numbers, columns=columns, index = index).astype(int) print df ``` --- Here is the json version of the dataframe: 
'{"13:00:00":{"1293840000000":2085,"1293926400000":2062,"1294012800000":2035,"1294099200000":2086,"1294185600000":2006,"1294272000000":2097,"1294358400000":2078,"1294444800000":2055,"1294531200000":2023,"1294617600000":2024},"13:30:00":{"1293840000000":2045,"1293926400000":2039,"1294012800000":2035,"1294099200000":2045,"1294185600000":2025,"1294272000000":2099,"1294358400000":2028,"1294444800000":2028,"1294531200000":2034,"1294617600000":2010},"14:00:00":{"1293840000000":2095,"1293926400000":2006,"1294012800000":2001,"1294099200000":2032,"1294185600000":2022,"1294272000000":2040,"1294358400000":2024,"1294444800000":2070,"1294531200000":2081,"1294617600000":2095},"14:30:00":{"1293840000000":2057,"1293926400000":2042,"1294012800000":2018,"1294099200000":2023,"1294185600000":2025,"1294272000000":2016,"1294358400000":2066,"1294444800000":2041,"1294531200000":2098,"1294617600000":2023},"15:00:00":{"1293840000000":2082,"1293926400000":2025,"1294012800000":2040,"1294099200000":2061,"1294185600000":2013,"1294272000000":2063,"1294358400000":2024,"1294444800000":2036,"1294531200000":2096,"1294617600000":2068},"15:30:00":{"1293840000000":2090,"1293926400000":2084,"1294012800000":2092,"1294099200000":2003,"1294185600000":2001,"1294272000000":2049,"1294358400000":2066,"1294444800000":2082,"1294531200000":2090,"1294617600000":2005},"16:00:00":{"1293840000000":2081,"1293926400000":2003,"1294012800000":2009,"1294099200000":2001,"1294185600000":2011,"1294272000000":2098,"1294358400000":2051,"1294444800000":2092,"1294531200000":2029,"1294617600000":2073},"16:30:00":{"1293840000000":2015,"1293926400000":2095,"1294012800000":2094,"1294099200000":2042,"1294185600000":2061,"1294272000000":2006,"1294358400000":2042,"1294444800000":2004,"1294531200000":2099,"1294617600000":2088}}'
You can first stack the DataFrame to create a series and then index slice it as required and take the min. For example: ``` first, last = ('2011-01-07', datetime.time(15)), ('2011-01-10', datetime.time(13, 30)) df.stack().loc[first: last].min() ``` The result of `df.stack` is a `Series` with a `MultiIndex` where the inner level is composed of the original columns. We then slice using `tuple` pairs with the start and end date and times. If you're going to be doing lots of such operations then you should consider assigning `df.stack()` to some variable. You might then consider changing the index to a proper `DatetimeIndex`. Then you can work with both the time series and the grid format as required. Here's another method which avoids stacking and is a lot faster on DataFrames of the size you're actually working with (as a one-off; slicing the stacked `DataFrame` is a lot faster once it's stacked so if you're doing many of these operations you should stack and convert the index). It's less general as it works with `min` and `max` but not with, say, `mean`. It gets the `min` of the subset of the first and last rows and the `min` of the rows in between (if any) and takes the `min` of these three candidates. ``` first_row = df.index.get_loc(first[0]) last_row = df.index.get_loc(last[0]) if first_row == last_row: result = df.loc[first[0], first[1]: last[1]].min() elif first_row < last_row: first_row_min = df.loc[first[0], first[1]:].min() last_row_min = df.loc[last[0], :last[1]].min() middle_min = df.iloc[first_row + 1:last_row].min().min() result = min(first_row_min, last_row_min, middle_min) else: raise ValueError('first row must be <= last row') ``` Note that if `first_row + 1 == last_row` then `middle_min` is `nan` but the result is still correct as long as `middle_min` doesn't come first in the call to `min`.
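If many such window queries are needed, the stack-once idea above can be taken one step further by fusing each (date, time) label pair into a proper `DatetimeIndex`, after which every query is a single slice. A sketch, rebuilding the example frame from the question (the variable names here are mine):

```python
import datetime

import numpy
import pandas

# Rebuild the example frame from the question.
numpy.random.seed(0)
random_numbers = numpy.random.rand(10, 8) * 100 + 2000
columns = [datetime.time(13, 0), datetime.time(13, 30), datetime.time(14, 0),
           datetime.time(14, 30), datetime.time(15, 0), datetime.time(15, 30),
           datetime.time(16, 0), datetime.time(16, 30)]
index = pandas.date_range('2011/1/1', '2011/1/10')
df = pandas.DataFrame(random_numbers, columns=columns, index=index).astype(int)

# Stack once, then fuse each (date, time) pair into a single timestamp.
stacked = df.stack()
stacked.index = pandas.DatetimeIndex(
    [datetime.datetime.combine(d.date(), t) for d, t in stacked.index])

# "Minimum from 3 days ago at 15:00 to today at 13:30" is now one slice.
result = stacked[datetime.datetime(2011, 1, 7, 15, 0):
                 datetime.datetime(2011, 1, 10, 13, 30)].min()
```

The one-time cost of the stack and index rebuild is amortized over however many slices you take afterwards.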
How can I use zip(), python
31,872,135
3
2015-08-07T07:39:13Z
31,872,162
10
2015-08-07T07:40:42Z
[ "python" ]
For example, I have these variables ``` a = [1,2] b = [3,4] ``` If I use function zip() for it, the result will be: ``` [(1, 3), (2, 4)] ``` But I have this list: ``` a = [[1,2], [3,4]] ``` And, I need to get the same as in the first result: `[(1, 3), (2, 4)]`. But, when I do: ``` zip(a) ``` I get: ``` [([1, 2],), ([3, 4],)] ``` What must I do?
[`zip`](https://docs.python.org/2/library/functions.html#zip) expects multiple iterables, so if you pass a *single* list of lists as parameter, the sublists are just wrapped into tuples with one element each. You have to use `*` to unpack the list when you pass it to `zip`. This way, you effectively pass *two* lists, instead of *one* list of lists: ``` >>> a = [[1,2], [3,4]] >>> zip(*a) [(1, 3), (2, 4)] ```
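The same `zip(*...)` idiom is worth remembering as a general transpose for any rectangular list of lists, not just a pair of rows (on Python 3, wrap it in `list()` since `zip` returns a lazy iterator there):

```python
matrix = [[1, 2, 3],
          [4, 5, 6]]

# Each row becomes a separate argument to zip, so columns pair up.
transposed = list(zip(*matrix))
# transposed is now [(1, 4), (2, 5), (3, 6)]

# Applying the idiom twice round-trips back to the rows (as tuples).
back = list(zip(*transposed))
# back is now [(1, 2, 3), (4, 5, 6)]
```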
Python- text based game not calling the correct room
31,879,660
3
2015-08-07T14:03:59Z
31,879,811
8
2015-08-07T14:12:15Z
[ "python", "class", "oop", "namespaces", "python-import" ]
I am writing a text based game and I want to link each room to four other rooms- north, south, east and west. I am starting with just north for now. The user should be able to type 'walk north' and the north room should be called. I have used three files- one where I will write the main story, one to call the appropriate room within the story and one for navigation to avoid mutual importing. rooms.py: ``` import actions class FirstRoom(object): room_name = 'FIRST ROOM' north = 'north_room' def __init__(self): pass def start(self): print self.room_name while True: next = raw_input('> ') actions.walk(next, self.north) actions.command(next) class North(object): room_name = "NORTH ROOM" def __init__(self): pass def start(self): print self.room_name ``` actions.py: ``` import navigation def walk(next, go_north): """Tests for 'walk' command and calls the appropriate room""" if next == 'walk north': navigation.rooms(go_north) else: pass ``` navigation.py: ``` import rooms first_room = rooms.FirstRoom() north_room = rooms.North() def rooms(room): rooms = { 'first_room': first_room.start(), 'north_room': north_room.start(), } rooms[room] ``` When I run first\_room.start() it should print 'FIRST ROOM' which it does. Then I type in 'walk north' and I expect it to print "NORTH ROOM", but instead it prints "FIRST ROOM" again. I can't figure out for the life of me why it doesn't work the way I expect it to, it's as if it's calling first\_room again instead of north\_room. Can anyone figure out what I'm doing wrong?
My guess is that the issue occurs because of how the dictionary `rooms` is defined. When you do - ``` rooms = { 'first_room': first_room.start(), 'north_room': north_room.start(), } rooms[room] ``` The functions get called when you define the dictionary itself, not when you access the values from it (so both functions get called) , you want to store function objects (without calling them) as the values and then call them as - `rooms[room]()` . Example - ``` def rooms(room): rooms = { 'first_room': first_room.start, 'north_room': north_room.start, } rooms[room]() ```
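The difference is easy to see in a self-contained sketch (the names below are mine, not from the game): values in a dict literal are evaluated while the dict is being built, whereas storing the bare function objects defers each call until lookup time.

```python
calls = []

def enter_first():
    calls.append('FIRST ROOM')

def enter_north():
    calls.append('NORTH ROOM')

# Buggy form: both functions run while the literal is evaluated.
rooms = {'first_room': enter_first(), 'north_room': enter_north()}
assert calls == ['FIRST ROOM', 'NORTH ROOM']   # both already called
assert rooms['north_room'] is None             # dict holds the return values

# Fixed form: store the functions themselves, call only on lookup.
calls = []
rooms = {'first_room': enter_first, 'north_room': enter_north}
rooms['north_room']()
assert calls == ['NORTH ROOM']                 # only the looked-up room ran
```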
Unpacking a tuple in for loop
31,881,759
3
2015-08-07T15:44:26Z
31,881,855
12
2015-08-07T15:49:08Z
[ "python" ]
I'm having some trouble with unpacking tuples. Specifically, I don't know why this doesn't work: ``` a = [0,1,2,3] b = [4,5,6,7] p = a,b for i,k in p: print i,k --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-422-2ba96d641daa> in <module>() ----> 1 for i,k in p: 2 print i,k 3 ValueError: too many values to unpack ``` It seems to me like the above code should unpack the two lists in the tuple `p` into `i` and `k`, but that's clearly not what's happening and I'm not sure why. So, I guess I have two questions: 1. What does `for i,k in p` actually do, if it doesn't unpack the lists in the tuple 2. How do I easily unpack the lists from the tuple in a for loop? The expected result is: ``` [0,1,2,3] [4,5,6,7] ``` I'm using python 2.7.9, if this happens to be version specific.
If you go step-by-step... First, doing `p = a, b` will get you a tuple consisting of exactly 2 elements -- your lists: ``` >>> a = [0, 1, 2, 3] >>> b = [4, 5, 6, 7] >>> p = a, b >>> print p ([0, 1, 2, 3], [4, 5, 6, 7]) ``` Then, when you do `for i, k in p`, Python will attempt to get the first item inside `p` then unpack it into i, k. So, then, the first iteration of your loop is basically doing something equivalent to this: ``` >>> temp = p[0] >>> print temp [0, 1, 2, 3] >>> i, k = temp Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: too many values to unpack ``` This will fail, since what you're basically trying to do is `i, k = [0, 1, 2, 3]`, and there are more elements in the list than there are variables. You may want to use the [zip](https://docs.python.org/2/library/functions.html#zip) function instead, which pairs up the numbers inside both lists. For example: ``` >>> p = zip(a, b) >>> print p [(0, 4), (1, 5), (2, 6), (3, 7)] ``` That way, when we run your loop, the number of elements inside the first tuple matches the number of variables in your loop, so Python will not throw an error. That means that your output would be: ``` 0 4 1 5 2 6 3 7 ``` If that's not your desired output, you need to rethink the structure of your code. --- **Edit:** Based on your update, if you just want your output to be: ``` [0, 1, 2, 3] [4, 5, 6, 7] ``` ...then you don't actually need to use unpacking at all. The following loop would work just fine: ``` for i in p: print i ``` `i` would be assigned to the first item inside `p` in the first iteration, then to the second item when the loop repeats.
Why are .pyc files created on import?
31,882,967
12
2015-08-07T16:54:51Z
31,883,142
17
2015-08-07T17:07:53Z
[ "python", "bytecode", "python-internals", "pyc" ]
I've seen several resources describing what `.pyc` files are and when they're created. But now I'm wondering why they're created when `.py` files are imported? Also, why not create a `.pyc` file for the main Python file doing the importing? I'm guessing it has to do with performance optimization and learning this has encouraged me to break out my files since the built-in compilation seems nice to take advantage of. But I'm not sure if this is the case, and I'm also curious if anyone has stats for the difference between running programs with and without the `.pyc` files if it is indeed for speed. I'd run them myself but I don't have a good, large Python codebase to test it on. :(
Python source code is compiled to bytecode, and it is the bytecode that is run. A `.pyc` file contains a copy of that bytecode, and by caching that Python doesn't have to re-compile the Python code each time it needs to load the module. You can get an idea of how much time is saved by timing the `compile()` function: ``` >>> import urllib2 >>> import timeit >>> urllib2_source = open(urllib2.__file__.rstrip('c')).read() >>> timeit.timeit("compile(source, '', 'exec')", 'from __main__ import urllib2_source as source', number=1000) 6.977046966552734 >>> _ / 1000.0 0.006977046966552734 ``` So it takes 7 milliseconds to compile the `urllib2.py` source code. That doesn't sound like much, but this adds up quickly as Python loads a *lot* of modules in its lifetime. Just run an average script with the [`-v` command-line switch](https://docs.python.org/2/using/cmdline.html#cmdoption-v); here I run the help output for the [`pydoc` tool](https://docs.python.org/2/library/pydoc.html): ``` $ bin/python -v -m pydoc -h # installing zipimport hook import zipimport # builtin # installed zipimport hook # /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/site.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/site.py import site # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/site.pyc # /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/os.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/os.py import os # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/os.pyc import errno # builtin import posix # builtin # /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/posixpath.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/posixpath.py import posixpath # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/posixpath.pyc # 
/Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/stat.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/stat.py import stat # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/stat.pyc # /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/genericpath.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/genericpath.py import genericpath # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/genericpath.pyc # /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/warnings.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/warnings.py import warnings # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/warnings.pyc # /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/linecache.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/linecache.py import linecache # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/linecache.pyc # /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/types.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/types.py import types # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/types.pyc # /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/UserDict.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/UserDict.py import UserDict # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/UserDict.pyc # /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/_abcoll.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/_abcoll.py import _abcoll # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/_abcoll.pyc # 
/Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/abc.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/abc.py import abc # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/abc.pyc # /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/_weakrefset.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/_weakrefset.py import _weakrefset # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/_weakrefset.pyc import _weakref # builtin # /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/copy_reg.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/copy_reg.py import copy_reg # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/copy_reg.pyc import encodings # directory /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/encodings # /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/encodings/__init__.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/encodings/__init__.py import encodings # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/encodings/__init__.pyc # /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/codecs.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/codecs.py import codecs # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/codecs.pyc import _codecs # builtin # /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/encodings/aliases.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/encodings/aliases.py import encodings.aliases # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/encodings/aliases.pyc # /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/encodings/utf_8.pyc matches 
/Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/encodings/utf_8.py import encodings.utf_8 # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/encodings/utf_8.pyc Python 2.7.8 (default, Sep 9 2014, 11:33:29) [GCC 4.2.1 Compatible Apple LLVM 5.1 (clang-503.0.40)] on darwin Type "help", "copyright", "credits" or "license" for more information. # /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/runpy.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/runpy.py import runpy # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/runpy.pyc import imp # builtin # /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/pkgutil.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/pkgutil.py import pkgutil # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/pkgutil.pyc # /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/re.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/re.py import re # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/re.pyc # /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/sre_compile.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/sre_compile.py import sre_compile # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/sre_compile.pyc import _sre # builtin # /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/sre_parse.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/sre_parse.py import sre_parse # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/sre_parse.pyc # /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/sre_constants.pyc matches 
/Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/sre_constants.py import sre_constants # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/sre_constants.pyc dlopen("/Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/_locale.so", 2); import _locale # dynamically loaded from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/_locale.so # /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/inspect.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/inspect.py import inspect # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/inspect.pyc # /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/string.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/string.py import string # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/string.pyc dlopen("/Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/strop.so", 2); import strop # dynamically loaded from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/strop.so # /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/dis.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/dis.py import dis # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/dis.pyc # /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/opcode.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/opcode.py import opcode # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/opcode.pyc # /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/tokenize.pyc matches 
/Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/tokenize.py import tokenize # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/tokenize.pyc dlopen("/Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/itertools.so", 2); import itertools # dynamically loaded from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/itertools.so # /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/token.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/token.py import token # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/token.pyc dlopen("/Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/operator.so", 2); import operator # dynamically loaded from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/operator.so # /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/collections.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/collections.py import collections # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/collections.pyc dlopen("/Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/_collections.so", 2); import _collections # dynamically loaded from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/_collections.so # /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/keyword.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/keyword.py import keyword # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/keyword.pyc # /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/heapq.pyc matches 
/Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/heapq.py import heapq # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/heapq.pyc dlopen("/Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/_heapq.so", 2); import _heapq # dynamically loaded from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/_heapq.so import thread # builtin # /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/repr.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/repr.py import repr # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/repr.pyc # /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/traceback.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/traceback.py import traceback # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/traceback.pyc # /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/locale.pyc matches /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/locale.py import locale # precompiled from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/locale.pyc # /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/functools.pyc matches /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/functools.py import functools # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/functools.pyc dlopen("/Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/_functools.so", 2); import _functools # dynamically loaded from /Users/mpieters/Development/venvs/stackoverflow-2.7/lib/python2.7/lib-dynload/_functools.so # /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/getopt.pyc matches 
/Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/getopt.py import getopt # precompiled from /Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/getopt.pyc pydoc - the Python documentation tool pydoc.py <name> ... Show text documentation on something. <name> may be the name of a Python keyword, topic, function, module, or package, or a dotted reference to a class or function within a module or module in a package. If <name> contains a '/', it is used as the path to a Python source file to document. If name is 'keywords', 'topics', or 'modules', a listing of these things is displayed. pydoc.py -k <keyword> Search for a keyword in the synopsis lines of all available modules. pydoc.py -p <port> Start an HTTP server on the given port on the local machine. pydoc.py -g Pop up a graphical interface for finding and serving documentation. pydoc.py -w <name> ... Write out the HTML documentation for a module to a file in the current directory. If <name> contains a '/', it is treated as a filename; if it names a directory, documentation is written for all the contents. 
# clear __builtin__._ # clear sys.path # clear sys.argv # clear sys.ps1 # clear sys.ps2 # clear sys.exitfunc # clear sys.exc_type # clear sys.exc_value # clear sys.exc_traceback # clear sys.last_type # clear sys.last_value # clear sys.last_traceback # clear sys.path_hooks # clear sys.path_importer_cache # clear sys.meta_path # clear sys.flags # clear sys.float_info # restore sys.stdin # restore sys.stdout # restore sys.stderr # cleanup __main__ # cleanup[1] _collections # cleanup[1] locale # cleanup[1] functools # cleanup[1] encodings # cleanup[1] site # cleanup[1] runpy # cleanup[1] operator # cleanup[1] supervisor # cleanup[1] _heapq # cleanup[1] abc # cleanup[1] _weakrefset # cleanup[1] sre_constants # cleanup[1] collections # cleanup[1] _codecs # cleanup[1] opcode # cleanup[1] _warnings # cleanup[1] mpl_toolkits # cleanup[1] inspect # cleanup[1] encodings.utf_8 # cleanup[1] repr # cleanup[1] codecs # cleanup[1] getopt # cleanup[1] pkgutil # cleanup[1] _functools # cleanup[1] thread # cleanup[1] keyword # cleanup[1] strop # cleanup[1] signal # cleanup[1] traceback # cleanup[1] itertools # cleanup[1] posix # cleanup[1] encodings.aliases # cleanup[1] exceptions # cleanup[1] _weakref # cleanup[1] token # cleanup[1] dis # cleanup[1] tokenize # cleanup[1] heapq # cleanup[1] string # cleanup[1] imp # cleanup[1] zipimport # cleanup[1] re # cleanup[1] _locale # cleanup[1] sre_compile # cleanup[1] _sre # cleanup[1] sre_parse # cleanup[2] copy_reg # cleanup[2] posixpath # cleanup[2] errno # cleanup[2] _abcoll # cleanup[2] types # cleanup[2] genericpath # cleanup[2] stat # cleanup[2] warnings # cleanup[2] UserDict # cleanup[2] os.path # cleanup[2] linecache # cleanup[2] os # cleanup sys # cleanup __builtin__ # cleanup ints: 21 unfreed ints # cleanup floats ``` That's 53 imports: ``` $ bin/python -v -m pydoc -h 2>&1 | egrep ^import | wc -l 53 ``` rather than load the (larger) source file each time and compiling it, a smaller bytecode file can be read and used immediately. 
That easily adds up to 1/3rd or 1/2 a second just to print some help information for a command-line tool. Python does not create a cache file for the main script; that's because that would clutter up your scripts directory with files that are not going to be loaded nearly as often as modules are loaded. If you run a script *that* often that the compile time for that one file affects you, you can always either move the majority of the code to a module (and avoid having to compile a large script) or you can use the [`compileall` tool](https://docs.python.org/2/library/compileall.html) to create a `.pyc` cache file for the script, then run that `.pyc` file *directly*. Note that Python then will not recompile that file if you changed the script!
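On Python 3, the `compileall` route could look roughly like this (a sketch using a throw-away script; `importlib.util.cache_from_source` locates the `__pycache__` file that replaced Python 2's side-by-side `.pyc`):

```python
import compileall
import importlib.util
import os
import subprocess
import sys
import tempfile

# Create a throw-away script to stand in for your real one.
workdir = tempfile.mkdtemp()
script = os.path.join(workdir, 'myscript.py')
with open(script, 'w') as f:
    f.write("print('hello from bytecode')\n")

# Pre-compile it, then execute the cached bytecode file directly.
compileall.compile_file(script, quiet=1)
pyc = importlib.util.cache_from_source(script)
output = subprocess.check_output([sys.executable, pyc])
```

Remember the caveat from above: the cached file is tied to the source it was compiled from, so you must re-run `compileall` after editing the script.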
Elegant way to match a string to a random color matplotlib
31,883,097
3
2015-08-07T17:04:00Z
31,883,160
9
2015-08-07T17:09:03Z
[ "python", "matplotlib" ]
I want to translate the labels of some data to colors for graphing with matplotlib I have a list of names `["bob", "joe", "andrew", "pete"]` Is there a built in way to map these strings with color values in matplotlib? I thought about randomly creating hex values but I could end up with similar colors or non visible colors. I've tried a couple different ways of trying to create key values from the below cmap answer: this: ``` #names is a list of distinct names cmap = plt.get_cmap('cool') colors = cmap(np.linspace(0, 1, len(names))) clr = {names[i]: colors[i] for i in range(len(names))} ax.scatter(x, y, z, c=clr) ```
Choose a [color map](http://matplotlib.org/examples/color/colormaps_reference.html), such as `jet`: ``` cmap = plt.get_cmap('jet') ``` The colormap, `cmap`, is a function which can take an array of values from 0 to 1 and map them to RGBA colors. `np.linspace(0, 1, len(names))` produces an array of equally spaced numbers from 0 to 1 of length `len(names)`. Thus, ``` colors = cmap(np.linspace(0, 1, len(names))) ``` selects equally-spaced colors from the `jet` color map. Note that this is not using the *value* of the string, it only uses the *ordinal position* of the string in the list to select a color. Note also that these are not *random* colors, this is just an easy way to generate unique colors from an arbitrary list of strings. --- So: ``` import numpy as np import matplotlib.pyplot as plt cmap = plt.get_cmap('jet') names = ["bob", "joe", "andrew", "pete"] colors = cmap(np.linspace(0, 1, len(names))) print(colors) # [[ 0. 0. 0.5 1. ] # [ 0. 0.83333333 1. 1. ] # [ 1. 0.90123457 0. 1. ] # [ 0.5 0. 0. 1. ]] x = np.linspace(0, np.pi*2, 100) for i, (name, color) in enumerate(zip(names, colors), 1): plt.plot(x, np.sin(x)/i, label=name, c=color) plt.legend() plt.show() ``` [![enter image description here](http://i.stack.imgur.com/nIQN5.png)](http://i.stack.imgur.com/nIQN5.png) --- The problem with ``` clr = {names[i]: colors[i] for i in range(len(names))} ax.scatter(x, y, z, c=clr) ``` is that the `c` parameter of `ax.scatter` expects a [sequence](https://docs.python.org/2/glossary.html#term-sequence) of RGB(A) values of the same length as `x` or a single color. `clr` is a dict, not a sequence. So if `colors` is the same length as `x` then you could use ``` ax.scatter(x, y, z, c=colors) ```
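If each plotted point carries one of the names, the `clr` dict from the question can still be used -- just expand it into a per-point sequence before handing it to `scatter`. A sketch with a stand-in palette (in practice the RGBA values would come from `cmap(np.linspace(0, 1, len(names)))` as above):

```python
names = ["bob", "joe", "andrew", "pete"]

# Stand-in RGBA tuples; real values would come from the colormap.
palette = [(0.0, 0.0, 0.5, 1.0), (0.0, 0.8, 1.0, 1.0),
           (1.0, 0.9, 0.0, 1.0), (0.5, 0.0, 0.0, 1.0)]
clr = dict(zip(names, palette))

# One label per plotted point; looking each one up yields a sequence
# the same length as the data, which is what c= expects.
labels = ["joe", "bob", "joe", "pete", "andrew"]
point_colors = [clr[name] for name in labels]
# ax.scatter(x, y, z, c=point_colors)   # a sequence, not a dict
```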
Why is Flask checking `'\\/' in json.dumps('/')` in its json module?
31,883,132
6
2015-08-07T17:06:58Z
31,883,568
7
2015-08-07T17:33:55Z
[ "python", "flask" ]
[The source for the `flask.json` module contains the following line.](https://github.com/mitsuhiko/flask/blob/0.10.1/flask/json.py#L30) What does `'\\/'` mean, and why is Flask checking this? ``` _slash_escape = '\\/' not in _json.dumps('/') ```
Flask is using this to test if the JSON library it's using escapes slashes when it doesn't have to. If the library does, then `json.dump('/')` will produce `'"\\/"'` (equivalent to the raw string `r'"\/"'`, see [here for an explanation on escape characters](http://stackoverflow.com/questions/24085680/why-do-backslashes-appear-twice)). Flask can choose one of multiple JSON libraries, and some libraries/versions escape forward slashes while others don't. [Flask includes a comment explaining this.](https://github.com/mitsuhiko/flask/blob/0.10.1/flask/json.py#L28-L30) If the library does escape slashes, [Flask will undo this when it dumps the JSON](https://github.com/mitsuhiko/flask/blob/0.10.1/flask/json.py#L191-L192), for consistency between libraries. ``` # figure out if simplejson escapes slashes. This behavior was changed # from one version to another without reason. _slash_escape = '\\/' not in _json.dumps('/') ... def htmlsafe_dumps(obj, **kwargs): ... if not _slash_escape: rv = rv.replace('\\/', '/') ... ``` [Flask still escapes unsafe HTML characters](https://github.com/mitsuhiko/flask/blob/0.10.1/flask/json.py#L187-L190) when rendering the JSON in HTML, so the potentially unsafe string `"</script>"` becomes `"\\u003c/script\\u003e"` which is safe.
In Python 3.x, why is there not an itertools shared-object on disk?
31,883,364
4
2015-08-07T17:21:44Z
31,883,380
7
2015-08-07T17:22:32Z
[ "python", "python-3.x", "python-internals" ]
Is the [itertools C module](https://hg.python.org/cpython/file/3.4/Modules/itertoolsmodule.c) included somehow in the main Python binary in 3.x? Assuming that the C module is built and included, which it appears to be: ``` >>> import inspect >>> import itertools >>> >>> inspect.getsourcefile(itertools) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/Cellar/python3/3.4.3_2/Frameworks/Python.framework/Versions/3.4/lib/python3.4/inspect.py", line 571, in getsourcefile filename = getfile(object) File "/usr/local/Cellar/python3/3.4.3_2/Frameworks/Python.framework/Versions/3.4/lib/python3.4/inspect.py", line 518, in getfile raise TypeError('{!r} is a built-in module'.format(object)) TypeError: <module 'itertools' (built-in)> is a built-in module ``` I can't find an `itertools.so` for Python 3.x on my system, but there's one for 2.7. I noted that some other C modules exist as shared objects (`locate '.so' | grep -E '^/usr/local/' | grep '.so'` e.g. `mmap.so`) on disk, so what's the deal with `itertools`? How can I use it if there's not a shared library?
There are hints in the makefile that's near the Python wrapper of `inspect.py`: `/usr/local/Cellar/python3/3.4.3_2/Frameworks/Python.framework/Versions/3.4/lib/python3.4/config-3.4m/Makefile` We can see the build rules for the itertools.c source: ``` 1668 Modules/itertoolsmodule.o: $(srcdir)/Modules/itertoolsmodule.c; $(CC) $(PY_CORE_CFLAGS) -c $(srcdir)/Modules/itertoolsmodule.c -o Modules/itertoolsmodule.o ``` And then trace it a little to see that it's being bundled in: ``` 24 MODOBJS= .. Modules/itertoolsmodule.o ... Modules/xxsubtype.o 462 # objects that get linked into the Python library 463 LIBRARY_OBJS_OMIT_FROZEN= \ ... 470 $(MODOBJS) 471 472 LIBRARY_OBJS= \ 473 $(LIBRARY_OBJS_OMIT_FROZEN) \ 474 Python/frozen.o ... 553 # Build the interpreter 554 $(BUILDPYTHON): Modules/python.o $(LIBRARY) $(LDLIBRARY) $(PY3LIBRARY) 555 $(LINKCC) $(PY_LDFLAGS) $(LINKFORSHARED) -o $@ Modules/python.o $(BLDLIBRARY) $(LIBS) $(MODLIBS) $(SYSLIBS) $(LDLAST) 556 557 platform: $(BUILDPYTHON) pybuilddir.txt 558 $(RUNSHARED) $(PYTHON_FOR_BUILD) -c 'import sys ; from sysconfig import get_platform ; print(get_platform()+"-"+sys.version[0: 3])' >platform 589 # Build static library ... 598 $(AR) $(ARFLAGS) $@ $(MODOBJS) 599 $(RANLIB) $@ 944 $(LIBRARY_OBJS) $(MODOBJS) Modules/python.o: $(PYTHON_HEADERS) ``` Or if made via `distutils`, the path will be something like: `/usr/local/Cellar/python3/3.4.3_2/Frameworks/Python.framework/Versions/3.4/lib/python3.4/_sysconfigdata.py` And assuming that this gets built into a dynamic library: ``` Ξ ~ → strings /usr/local/Cellar/python3/3.4.3_2/Frameworks/Python.framework/Versions/3.4/lib/libpython3.4.dylib | grep itertools itertools itertools._tee_dataobject itertools._tee itertools._grouper itertools.groupby itertools.repeat itertools.product ... ``` Which means that at build time, the itertools.c module gets included in the `libpython` dynamic library, meaning that it's now part of the standard library.
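You can confirm this at run time: a module compiled into the interpreter is listed in `sys.builtin_module_names` and has no `__file__` pointing at a shared object. On a standard CPython 3 build, `itertools` is one of these:

```python
import itertools
import sys

is_builtin = 'itertools' in sys.builtin_module_names
has_file = hasattr(itertools, '__file__')
# On a standard CPython 3 build: is_builtin is True, has_file is False.
```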
when does `datetime.now(pytz_timezone)` fail?
31,886,808
8
2015-08-07T21:20:17Z
31,931,682
7
2015-08-11T01:16:56Z
[ "python", "datetime", "timezone", "pytz", "delorian" ]
[`delorean` docs](https://github.com/myusuf3/delorean) show this way to *get the current time in a given timezone* [using `datetime`](https://github.com/myusuf3/delorean/blob/2016040e07e63edfe3d565129049b1099d75be5f/README.rst#getting-started): ``` from datetime import datetime from pytz import timezone EST = "US/Eastern" UTC = "UTC" d = datetime.utcnow() utc = timezone(UTC) est = timezone(EST) d = utc.localize(d) d = est.normalize(EST) ``` and compare it with the delorean-based code: ``` from delorean import Delorean EST = "US/Eastern" d = Delorean(timezone=EST) ``` [I believe](https://github.com/myusuf3/delorean/pull/46) the `datetime` example should be written as: ``` from datetime import datetime import pytz eastern_timezone = pytz.timezone("US/Eastern") d = datetime.now(eastern_timezone) ``` which is more concise. Are there any cases when the last code example fails while the first one continues to work? --- **Update:** [the current example:](https://github.com/myusuf3/delorean/blob/24065c7f6c96251cb6cf09436a76acf2c2d99022/README.rst) ``` from datetime import datetime import pytz d = datetime.utcnow() d = pytz.utc.localize(d) est = pytz.timezone('US/Eastern') d = est.normalize(d) return d ``` which is still too verbose. The question still stands: [do you need the explicit round-trip via utc and `tz.normalize()` or can you use `datetime.now(tz)` instead?](https://github.com/myusuf3/delorean/pull/46)
> When does `datetime.now(pytz_timezone)` fail? As far as I can tell, there are no scenarios where it could fail. `datetime.now` invokes the `fromutc` function on the `tzinfo` instance passed in the parameter. All conversions from UTC to local time are unambiguous, so there are no opportunities for failure. Also, the original code does not even work. ``` d = est.normalize(EST) ``` This would appear to pass a string as the only parameter to `normalize`, which is intended to take a `datetime`. This gives: ``` AttributeError: 'str' object has no attribute 'tzinfo' ``` I believe they meant to write: ``` d = est.normalize(d.astimezone(est)) ``` That said, I don't think the verbosity of their code adds much value. As you noted, it's just as easy to do this in a single step: ``` d = datetime.now(est) ``` Looking at the [cpython source code for `datetime.now`](https://github.com/python/cpython/blob/f18916ecc6ca733478257e1cc0793386eed9a4c9/Modules/_datetimemodule.c#L4157), I can see that when a `tzinfo` object is provided, it calls the `fromutc` method on that object. ``` if (self != NULL && tz != Py_None) { /* Convert UTC to tzinfo's zone. */ PyObject *temp = self; self = _PyObject_CallMethodId(tz, &PyId_fromutc, "O", self); Py_DECREF(temp); } ``` Then, in the pytz source, I see that the `fromutc` method is implemented differently depending on whether the zone is `pytz.UTC`, or an instance of `StaticTzInfo`, or `DstTzInfo`. In all three cases, the transformation from the input UTC value to the target time zone is unambiguous. 
Here is the `DstTzInfo` implementation, which is the more complex of the three: ``` def fromutc(self, dt): '''See datetime.tzinfo.fromutc''' if (dt.tzinfo is not None and getattr(dt.tzinfo, '_tzinfos', None) is not self._tzinfos): raise ValueError('fromutc: dt.tzinfo is not self') dt = dt.replace(tzinfo=None) idx = max(0, bisect_right(self._utc_transition_times, dt) - 1) inf = self._transition_info[idx] return (dt + inf[0]).replace(tzinfo=self._tzinfos[inf]) ``` This would appear to find the transition from `_utc_transition_times` of the time zone, then apply it to the returned `datetime`. There are no ambiguities in this direction, so the results will be equivalent. Also worth noting, in [the `datetime` docs](https://docs.python.org/2/library/datetime.html#datetime.datetime) it says that `datetime.now` is equivalent to calling: ``` tz.fromutc(datetime.utcnow().replace(tzinfo=tz)) ``` Given the source of `fromutc` in pytz I showed earlier, I'm not sure that this is any different than just: ``` tz.fromutc(datetime.utcnow()) ``` But in either case, I don't think `localize` and `normalize` are necessary.
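That documented equivalence is easy to sanity-check with a fixed-offset zone from the standard library standing in for a pytz timezone (the equivalence is stated for any `tzinfo`, and a fixed offset keeps the check free of DST subtleties):

```python
from datetime import datetime, timedelta, timezone

# A stdlib fixed-offset zone stands in for a pytz timezone here; for a
# fixed offset, fromutc() just adds the offset, with no ambiguity.
est = timezone(timedelta(hours=-5), "EST")

a = datetime.now(est)
b = est.fromutc(datetime.utcnow().replace(tzinfo=est))

# Both paths attach the same tzinfo and the same UTC offset; the wall
# times differ only by the instant elapsed between the two calls.
assert a.tzinfo is b.tzinfo
assert a.utcoffset() == b.utcoffset() == timedelta(hours=-5)
assert abs(b - a) < timedelta(seconds=5)
```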
How do Twisted python Factory and Protocol interfaces work?
31,888,037
4
2015-08-07T23:32:58Z
31,889,487
9
2015-08-08T03:57:02Z
[ "python", "twisted", "twisted.web", "twisted.internet" ]
I am learning Twisted, and beginner [tutorials](http://krondo.com/?page_id=1327) often use Factory and Protocol as examples. It appears that the Factory and [Protocol](http://twistedmatrix.com/trac/browser/tags/releases/twisted-12.3.0/twisted/internet/interfaces.py) interfaces do not support sending messages. Is it expected that sending messages is implemented independently of the Protocol interface? ``` class IProcessProtocol(Interface): def makeConnection(process): def childDataReceived(childFD, data): def childConnectionLost(childFD): def processExited(reason): def processEnded(reason): ```
See: * <http://twistedmatrix.com/documents/current/api/twisted.internet.protocol.Factory.html> * <http://twistedmatrix.com/documents/current/api/twisted.internet.protocol.html> Factories create Protocol instances. The factory builds a protocol (see the bundled protocols [here](http://twistedmatrix.com/documents/current/api/twisted.protocols.html); note that you can also write your own protocol), and that protocol defines how the connection receives and sends data. These are the methods available to `Protocol`: > ``` > Method logPrefix Return a prefix matching the class name, to identify log messages related to this protocol instance. > Method dataReceived Called whenever data is received. > Method connectionLost Called when the connection is shut down. > ``` Inherited from BaseProtocol: > ``` > Method makeConnection Make a connection to a transport and a server. > Method connectionMade Called when a connection is made. > ``` And once the connection has been made we could do something like: ``` from twisted.internet.protocol import Protocol class SomeProtocol(Protocol): def dataReceived(self, data): print('Do something with data: {}'.format(data)) def connectionMade(self): self.transport.write("Hello there") ``` But wait, where does the `Protocol` get `self.transport.write` from? 
``` >>> from twisted.internet.protocol import Protocol, BaseProtocol >>> import inspect >>> from pprint import pprint >>> pprint(inspect.getclasstree(inspect.getmro(Protocol))) [(<class 'object'>, ()), [(<class 'twisted.internet.protocol.BaseProtocol'>, (<class 'object'>,)), [(<class 'twisted.internet.protocol.Protocol'>, (<class 'twisted.internet.protocol.BaseProtocol'>,))]]] >>> dir(Protocol) ['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__implemented__', '__init__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__providedBy__', '__provides__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'connected', 'connectionLost', 'connectionMade', 'dataReceived', 'logPrefix', 'makeConnection', 'transport'] ``` Okay, so `Protocol` has a `transport` attribute; what about `BaseProtocol`: ``` >>> dir(BaseProtocol) ['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__implemented__', '__init__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__providedBy__', '__provides__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'connected', 'connectionMade', 'makeConnection', 'transport'] >>> type(BaseProtocol.transport) <class 'NoneType'> ``` It does too, but why is it `None`? 
So let's look at `BaseProtocol` [here](http://twistedmatrix.com/documents/current/api/twisted.internet.protocol.BaseProtocol.html): > def makeConnection(self, transport): (source) overridden in > > ``` > twisted.internet.endpoints._WrapIProtocol, > twisted.protocols.amp.BinaryBoxProtocol, > twisted.protocols.basic.NetstringReceiver, > twisted.protocols.ftp.ProtocolWrapper, > twisted.protocols.ftp.SenderProtocol, > twisted.protocols.policies.ProtocolWrapper, > twisted.protocols.stateful.StatefulProtocol Make a connection to a > transport and a server. > ``` Note: > ``` > This sets the 'transport' attribute of this > Protocol, and calls the connectionMade() callback. > ``` So when `makeConnection` is called, it sets the `transport` attribute of the protocol. So how does that work with the factory? Let's look at a `Factory` [here](http://twistedmatrix.com/documents/current/api/twisted.internet.protocol.Factory.html#buildProtocol) and the [source](http://twistedmatrix.com/trac/browser/tags/releases/twisted-15.3.0/twisted/internet/protocol.py#L120) for `buildProtocol`: ``` def buildProtocol(self, addr): """ Create an instance of a subclass of Protocol. The returned instance will handle input on an incoming server connection, and an attribute "factory" pointing to the creating factory. Alternatively, C{None} may be returned to immediately close the new connection. Override this method to alter how Protocol instances get created. @param addr: an object implementing L{twisted.internet.interfaces.IAddress} """ p = self.protocol() p.factory = self return p ``` Okay, so: ``` class BaseProtocol: """ This is the abstract superclass of all protocols. Some methods have helpful default implementations here so that they can easily be shared, but otherwise the direct subclasses of this class are more interesting, L{Protocol} and L{ProcessProtocol}. """ connected = 0 transport = None def makeConnection(self, transport): """Make a connection to a transport and a server. 
This sets the 'transport' attribute of this Protocol, and calls the connectionMade() callback. """ self.connected = 1 self.transport = transport self.connectionMade() ``` But still, where is `transport` coming from? It's coming from the `reactor`. Let's look at a TCP example: ``` from twisted.internet import reactor # # # reactor.connectTCP('localhost', 80, SomeProtocolFactory()) ``` From the `reactor` we call `connectTCP` like [this](http://twistedmatrix.com/trac/browser/tags/releases/twisted-15.3.0/twisted/internet/iocpreactor/reactor.py#L160): ``` from twisted.internet.iocpreactor import tcp, udp # # # def connectTCP(self, host, port, factory, timeout=30, bindAddress=None): """ @see: twisted.internet.interfaces.IReactorTCP.connectTCP """ c = tcp.Connector(host, port, factory, timeout, bindAddress, self) c.connect() return c ``` Which constructs a `tcp.Connector`; its `connect` method [here](http://twistedmatrix.com/trac/browser/tags/releases/twisted-15.3.0/twisted/internet/base.py#L1039) is where the transport gets created: ``` def connect(self): """Start connection to remote server.""" if self.state != "disconnected": raise RuntimeError("can't connect in this state") self.state = "connecting" if not self.factoryStarted: self.factory.doStart() self.factoryStarted = 1 ################## # ah here we are ################## self.transport = transport = self._makeTransport() if self.timeout is not None: self.timeoutID = self.reactor.callLater(self.timeout, transport.failIfNotConnected, error.TimeoutError()) self.factory.startedConnecting(self) ``` Which returns the transport like [this](http://twistedmatrix.com/trac/browser/tags/releases/twisted-15.3.0/twisted/internet/iocpreactor/tcp.py#L384): ``` class Connector(TCPConnector): def _makeTransport(self): return Client(self.host, self.port, self.bindAddress, self, self.reactor) ``` Which in turn creates the socket connection: * 
<http://twistedmatrix.com/trac/browser/tags/releases/twisted-15.3.0/twisted/internet/iocpreactor/tcp.py#L231> So the short answer to your question: > Is it expected that sending messages is implemented independently of > the Protocol interface? The `Protocol` receives a `transport` from the `reactor` and can send messages with it using `self.transport.write()`. See: * [how to send data with twisted protocol via factory](http://stackoverflow.com/questions/6088280/how-to-send-data-with-twisted-protocol-via-factory)
Why round off of 0.500000 in python differs from 45.500000 using '%.0f'?
31,889,834
4
2015-08-08T05:03:10Z
31,889,938
7
2015-08-08T05:23:22Z
[ "python", "python-2.7", "string-formatting" ]
Recently, I learned the art of string formatting in Python 2.7. I decided to play with floating point numbers and came across the awkward-looking behavior below. ``` print "%.0f"%45.5000000 #46 print "%.0f"%0.5000000 #0 #Why?? ``` BUT ``` print int(round(45.5000000)) #46 print int(round(0.5000000)) #1 ``` Please help me understand why `%f` shows this behavior.
The internal implementation for the `%.0f` string format uses a [round-half-even](https://en.wikipedia.org/wiki/Rounding#Round_half_to_even) rounding mode. In Python 2, the [`round()`](https://docs.python.org/3/library/functions.html#round) function uses [round-away-from-zero](https://en.wikipedia.org/wiki/Rounding#Round_half_away_from_zero). In Python 3, that [was changed](https://docs.python.org/3/whatsnew/3.0.html#builtins) to round-half-even, making it consistent with string formatting. FWIW, the [decimal module](https://docs.python.org/3/library/decimal.html#module-decimal) offers you a choice of rounding modes if you want more control than afforded by `round()` or by string formatting. The [decimal rounding modes](https://docs.python.org/3/library/decimal.html#rounding-modes) are: `ROUND_05UP`, `ROUND_CEILING`, `ROUND_DOWN`, `ROUND_FLOOR`, `ROUND_HALF_DOWN`, `ROUND_HALF_EVEN`, `ROUND_HALF_UP`, and `ROUND_UP`.
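For example, `ROUND_HALF_EVEN` reproduces the `%.0f` results from the question, while `ROUND_HALF_UP` matches Python 2's `round()`:

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

one = Decimal("1")

# Round-half-even (banker's rounding): ties go to the nearest even digit,
# which is what the %.0f formatting does.
print(Decimal("0.5").quantize(one, rounding=ROUND_HALF_EVEN))   # 0
print(Decimal("45.5").quantize(one, rounding=ROUND_HALF_EVEN))  # 46

# Round-half-up: ties round away from zero, like Python 2's round().
print(Decimal("0.5").quantize(one, rounding=ROUND_HALF_UP))     # 1
```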
Repeatedly remove characters from string
31,893,524
3
2015-08-08T13:17:05Z
31,893,568
8
2015-08-08T13:23:05Z
[ "python", "loops", "for-loop", "repeat" ]
``` >>> split=['((((a','b','+b'] >>> [ (w[1:] if w.startswith((' ','!', '@', '#', '$', '%', '^', '&', '*', "(", ")", '-', '_', '+', '=', '~', ':', "'", ';', ',', '.', '?', '|', '\\', '/', '<', '>', '{', '}', '[', ']', '"')) else w) for w in split] ['(((a','b','b'] ``` I wanted `['a', 'b', 'b']` instead. I want to create a repeat function that repeats the command so my list has every leading `'('` removed. Suppose my list is longer; I want to clear all the `(((` in front of the words. I don't use `replace` because it would also change a `'('` inside a word. E.g. if the `'('` is in the middle of a word, like `'aa(aa'`, I don't want to change it.
There is no need to repeat your expression; you are simply not using the right tool. You are looking for the [`str.lstrip()` method](https://docs.python.org/2/library/stdtypes.html#str.lstrip): ``` [w.lstrip(' !@#$%^&*()-_+=~:\';,.?|\\/<>{}[]"') for w in split] ``` The method treats the string argument as a *set* of characters and does exactly what you tried to do in your code; it repeatedly removes the left-most character while that character is part of the set. There is a corresponding [`str.rstrip()`](https://docs.python.org/2/library/stdtypes.html#str.rstrip) for removing characters from the end, and [`str.strip()`](https://docs.python.org/2/library/stdtypes.html#str.strip) to remove them from both ends. Demo: ``` >>> split=['((((a', 'b', '+b'] >>> [w.lstrip(' !@#$%^&*()-_+=~:\';,.?|\\/<>{}[]"') for w in split] ['a', 'b', 'b'] ``` If you really needed to repeat an expression, you could just create a new function for that task: ``` def strip_left(w): while w.startswith((' ','!', '@', '#', '$', '%', '^', '&', '*', "(", ")", '-', '_', '+', '=', '~', ':', "'", ';', ',', '.', '?', '|', '\\', '/', '<', '>', '{', '}', '[', ']', '"')): w = w[1:] return w [strip_left(w) for w in split] ```
Smarter than If Else
31,896,495
3
2015-08-08T17:09:35Z
31,896,539
9
2015-08-08T17:14:53Z
[ "python", "if-statement" ]
I'm trying to do a switch (of sorts) of commands. ``` if 'Who' in line.split()[:3]: Who(line) elif 'Where' in line.split()[:3]: Where(line) elif 'What' in line.split()[:3]: What(line) elif 'When' in line.split()[:3]: When(line) elif 'How' in line.split()[:3]: How(line) elif "Make" in line.split()[:3]: Make(line) elif "Can You" in line.split()[:3]: CY(line) else: print("OK") ``` So the explanation. If `Who`, `What`, etc. are in the first 3 words of the command then it executes the corresponding function. I just want to know if there is a smarter way to do this other than a lot of `if`,`elif` and `else`?
Try creating a dictionary with keys being the command names and the values the actual command functions. Example: ``` def who(): ... def where(): ... def default_command(): ... commands = { 'who': who, 'where': where, ... } # usage cmd_name = line.split()[:3][0] # or use all commands in the list command_function = commands.get(cmd_name, default_command) command_function() # execute command ```
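To preserve the original behavior of scanning the first three words, the lookup can walk those words and fall back to a default when none of them match; the command names and handlers below are hypothetical stand-ins:

```python
def who(line):
    return "who handled: " + line

def where(line):
    return "where handled: " + line

def default_command(line):
    return "OK"

# Map command keywords to their handler functions.
commands = {"Who": who, "Where": where}

def dispatch(line):
    # Check each of the first three words against the command table.
    for word in line.split()[:3]:
        if word in commands:
            return commands[word](line)
    return default_command(line)

print(dispatch("Tell me Who is there"))   # who handled: Tell me Who is there
print(dispatch("Nothing matches here"))   # OK
```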
Why does `a<b<c` work in Python?
31,896,870
2
2015-08-08T17:51:32Z
31,896,904
9
2015-08-08T17:55:52Z
[ "python", "boolean-expression" ]
The title says it all. For example `1<2<3` returns `True` and `2<3<1` returns `False`. It's great that it works, but I can't explain *why* it works... I can't find anything about it in the documentation. I thought a comparison was always `expression operator expression`, never two operators chained together. Also: `a<b` returns a boolean, and `boolean operator expression` does not explain the behaviour. I'm sure the explanation is (almost) obvious, but I seem to be missing it.
This is known as operator chaining. Documentation is available at: <https://docs.python.org/2/reference/expressions.html#not-in> > Comparisons can be chained arbitrarily, e.g., x < y <= z is equivalent to x < y and y <= z, except that y is evaluated only once (but in both cases z is not evaluated at all when x < y is found to be false). And, if you really like formal definitions: > Formally, if `a`, `b`, `c`, ..., `y`, `z` are expressions and `op1`, `op2`, ..., `opN` are comparison operators, then `a op1 b op2 c ... y opN z` is equivalent to `a op1 b and b op2 c and ... y opN z`, except that each expression is evaluated at most once.
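Both properties, single evaluation of the middle operand and short-circuiting, are easy to verify:

```python
calls = []

def middle():
    # Record every evaluation of the middle operand.
    calls.append(1)
    return 2

# Chained form: middle() is evaluated exactly once.
assert (1 < middle() < 3) is True
assert len(calls) == 1

def boom():
    raise RuntimeError("should never be evaluated")

# Short-circuit: 3 < 2 is already False, so boom() is never called.
assert (3 < 2 < boom()) is False
```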
How do you assert two functions throw the same error without knowing the error?
31,897,053
3
2015-08-08T18:13:06Z
31,897,099
7
2015-08-08T18:18:51Z
[ "python", "unit-testing" ]
I have an outer function that calls an inner function by passing the arguments along. Is it possible to test that both functions throw the same exception/error without knowing the exact error type? I'm looking for something like: ``` def test_invalidInput_throwsSameError(self): arg = 'invalidarg' self.assertRaisesSameError( innerFunction(arg), outerFunction(arg) ) ```
Assuming you're using `unittest` (and python2.7 or newer) and that you're not doing something pathological like raising old-style class instances as errors, you can get the exception from the error context if you use [`assertRaises`](https://docs.python.org/2/library/unittest.html#unittest.TestCase.assertRaises) as a context manager. ``` with self.assertRaises(Exception) as err_context1: innerFunction(arg) with self.assertRaises(Exception) as err_context2: outerFunction(arg) # Or some other measure of "sameness" self.assertEqual( type(err_context1.exception), type(err_context2.exception)) ```
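Put together as a runnable sketch (the `inner_function`/`outer_function` here are placeholders for the real code under test):

```python
import unittest

def inner_function(arg):
    if arg == "invalidarg":
        raise ValueError("bad input")

def outer_function(arg):
    # The outer function just forwards its arguments along.
    return inner_function(arg)

class SameErrorTest(unittest.TestCase):
    def test_invalidInput_throwsSameError(self):
        with self.assertRaises(Exception) as ctx_inner:
            inner_function("invalidarg")
        with self.assertRaises(Exception) as ctx_outer:
            outer_function("invalidarg")
        # Compare the exception types as one measure of "sameness".
        self.assertEqual(type(ctx_inner.exception),
                         type(ctx_outer.exception))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(SameErrorTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```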
How to write the resulting RDD to a csv file in Spark python
31,898,964
9
2015-08-08T21:53:51Z
31,899,173
14
2015-08-08T22:24:12Z
[ "python", "csv", "apache-spark", "pyspark", "file-writing" ]
I have a resulting RDD `labelsAndPredictions = testData.map(lambda lp: lp.label).zip(predictions)`. This has output in this format: ``` [(0.0, 0.08482142857142858), (0.0, 0.11442786069651742),.....] ``` What I want is to create a CSV file with one column for `labels` (the first part of the tuple in above output) and one for `predictions`(second part of tuple output). But I don't know how to write to a CSV file in Spark using Python. How can I create a CSV file with the above output?
Just `map` the lines of the RDD (`labelsAndPredictions`) into strings (the lines of the CSV) then use `rdd.saveAsTextFile()`. ``` def toCSVLine(data): return ','.join(str(d) for d in data) lines = labelsAndPredictions.map(toCSVLine) lines.saveAsTextFile('hdfs://my-node:9000/tmp/labels-and-predictions.csv') ```
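The mapping step itself is plain Python, so the `toCSVLine` helper can be checked without a Spark cluster:

```python
def toCSVLine(data):
    return ','.join(str(d) for d in data)

# Hypothetical (label, prediction) pairs like those in the question.
pairs = [(0.0, 0.25), (1.0, 0.5)]
for line in map(toCSVLine, pairs):
    print(line)
# 0.0,0.25
# 1.0,0.5
```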
"OSError: [Errno 1] Operation not permitted" when installing Scrapy in OSX 10.11 (El Capitan) (System Integrity Protection)
31,900,008
80
2015-08-09T01:00:37Z
31,900,088
15
2015-08-09T01:21:20Z
[ "python", "osx", "python-2.7", "scrapy" ]
I'm trying to install Scrapy Python framework in OSX 10.11 (El Capitan) via pip. The installation script downloads the required modules and at some point returns the following error: ``` OSError: [Errno 1] Operation not permitted: '/tmp/pip-nIfswi-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info' ``` I've tried to deactivate the rootless feature in OSX 10.11 with the command: ``` sudo nvram boot-args="rootless=0";sudo reboot ``` but I still get the same error when the machine reboots. Any clue or idea from my fellow StackExchangers? If it helps, the full script output is the following: ``` sudo -s pip install scrapy Collecting scrapy Downloading Scrapy-1.0.2-py2-none-any.whl (290kB) 100% |████████████████████████████████| 290kB 345kB/s Requirement already satisfied (use --upgrade to upgrade): cssselect>=0.9 in /Library/Python/2.7/site-packages (from scrapy) Requirement already satisfied (use --upgrade to upgrade): queuelib in /Library/Python/2.7/site-packages (from scrapy) Requirement already satisfied (use --upgrade to upgrade): pyOpenSSL in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from scrapy) Collecting w3lib>=1.8.0 (from scrapy) Downloading w3lib-1.12.0-py2.py3-none-any.whl Collecting lxml (from scrapy) Downloading lxml-3.4.4.tar.gz (3.5MB) 100% |████████████████████████████████| 3.5MB 112kB/s Collecting Twisted>=10.0.0 (from scrapy) Downloading Twisted-15.3.0.tar.bz2 (4.4MB) 100% |████████████████████████████████| 4.4MB 94kB/s Collecting six>=1.5.2 (from scrapy) Downloading six-1.9.0-py2.py3-none-any.whl Requirement already satisfied (use --upgrade to upgrade): zope.interface>=3.6.0 in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from Twisted>=10.0.0->scrapy) Requirement already satisfied (use --upgrade to upgrade): setuptools in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from 
zope.interface>=3.6.0->Twisted>=10.0.0->scrapy) Installing collected packages: six, w3lib, lxml, Twisted, scrapy Found existing installation: six 1.4.1 DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project. Uninstalling six-1.4.1: Exception: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/basecommand.py", line 223, in main status = self.run(options, args) File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/commands/install.py", line 299, in run root=options.root_path, File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_set.py", line 640, in install requirement.uninstall(auto_confirm=True) File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_install.py", line 726, in uninstall paths_to_remove.remove(auto_confirm) File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_uninstall.py", line 125, in remove renames(path, new_path) File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/utils/__init__.py", line 314, in renames shutil.move(old, new) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move copy2(src, real_dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2 copystat(src, dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat os.chflags(dst, st.st_flags) OSError: [Errno 1] Operation not permitted: '/tmp/pip-nIfswi-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info' ``` Thank you!
**Warnings** I would suggest **very strongly** against modifying the system Python on Mac; there are numerous issues that can occur. Your particular error shows that the installer has issues resolving the dependencies for Scrapy without impacting the current Python installation. The system uses Python for a number of essential tasks, so **it's important to keep the system installation stable** and as originally installed by Apple. I would also **exhaust all other possibilities** before bypassing built in security. **Package Manager Solutions:** Please look into a Python virtualization tool such as [virtualenv](http://docs.python-guide.org/en/latest/dev/virtualenvs/) first; this will allow you to experiment safely. Another useful tool to use languages and software without conflicting with your Mac OS is [Homebrew](http://brew.sh). Like [MacPorts](https://www.macports.org) or [Fink](http://www.finkproject.org), **Homebrew is a package manager for Mac**, and is useful for safely trying lots of other languages and tools. **"Roll your own" Software Installs:** If you don't like the package manager approach, you could use the `/usr/local` path or create an `/opt/local` directory for installing an alternate Python installation and fix up your paths in your `.bashrc`. Note that you'll have to enable root for these solutions. **How to do it anyway:** **If you absolutely must disable the security check** (and I sincerely hope it's for something other than messing with the system languages and resources), you can disable it temporarily and re-enable it using some of the techniques in this post on how to [Disable System Integrity-Protection](http://osxdaily.com/2015/10/05/disable-rootless-system-integrity-protection-mac-os-x/).
"OSError: [Errno 1] Operation not permitted" when installing Scrapy in OSX 10.11 (El Capitan) (System Integrity Protection)
31,900,008
80
2015-08-09T01:00:37Z
32,723,204
11
2015-09-22T17:29:25Z
[ "python", "osx", "python-2.7", "scrapy" ]
I'm trying to install Scrapy Python framework in OSX 10.11 (El Capitan) via pip. The installation script downloads the required modules and at some point returns the following error: ``` OSError: [Errno 1] Operation not permitted: '/tmp/pip-nIfswi-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info' ``` I've tried to deactivate the rootless feature in OSX 10.11 with the command: ``` sudo nvram boot-args="rootless=0";sudo reboot ``` but I still get the same error when the machine reboots. Any clue or idea from my fellow StackExchangers? If it helps, the full script output is the following: ``` sudo -s pip install scrapy Collecting scrapy Downloading Scrapy-1.0.2-py2-none-any.whl (290kB) 100% |████████████████████████████████| 290kB 345kB/s Requirement already satisfied (use --upgrade to upgrade): cssselect>=0.9 in /Library/Python/2.7/site-packages (from scrapy) Requirement already satisfied (use --upgrade to upgrade): queuelib in /Library/Python/2.7/site-packages (from scrapy) Requirement already satisfied (use --upgrade to upgrade): pyOpenSSL in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from scrapy) Collecting w3lib>=1.8.0 (from scrapy) Downloading w3lib-1.12.0-py2.py3-none-any.whl Collecting lxml (from scrapy) Downloading lxml-3.4.4.tar.gz (3.5MB) 100% |████████████████████████████████| 3.5MB 112kB/s Collecting Twisted>=10.0.0 (from scrapy) Downloading Twisted-15.3.0.tar.bz2 (4.4MB) 100% |████████████████████████████████| 4.4MB 94kB/s Collecting six>=1.5.2 (from scrapy) Downloading six-1.9.0-py2.py3-none-any.whl Requirement already satisfied (use --upgrade to upgrade): zope.interface>=3.6.0 in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from Twisted>=10.0.0->scrapy) Requirement already satisfied (use --upgrade to upgrade): setuptools in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from 
zope.interface>=3.6.0->Twisted>=10.0.0->scrapy) Installing collected packages: six, w3lib, lxml, Twisted, scrapy Found existing installation: six 1.4.1 DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project. Uninstalling six-1.4.1: Exception: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/basecommand.py", line 223, in main status = self.run(options, args) File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/commands/install.py", line 299, in run root=options.root_path, File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_set.py", line 640, in install requirement.uninstall(auto_confirm=True) File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_install.py", line 726, in uninstall paths_to_remove.remove(auto_confirm) File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_uninstall.py", line 125, in remove renames(path, new_path) File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/utils/__init__.py", line 314, in renames shutil.move(old, new) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move copy2(src, real_dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2 copystat(src, dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat os.chflags(dst, st.st_flags) OSError: [Errno 1] Operation not permitted: '/tmp/pip-nIfswi-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info' ``` Thank you!
You should disable "System Integrity Protection", which is a new feature in El Capitan. First, run the command for the rootless config in your terminal: ``` # nvram boot-args="rootless=0" # reboot ``` Then, run the command below in the Recovery partition's terminal (Recovery OS): ``` # csrutil disable # reboot ``` That is how I solved my problem. I'm not sure the first part is necessary; try it and see. **--WARNING** You should enable SIP again after everything works; simply reboot into Recovery Mode again and run in the terminal: ``` # csrutil enable ``` csrutil: [Configuring System Integrity Protection](https://developer.apple.com/library/prerelease/mac/documentation/Security/Conceptual/System_Integrity_Protection_Guide/ConfiguringSystemIntegrityProtection/ConfiguringSystemIntegrityProtection.html)
"OSError: [Errno 1] Operation not permitted" when installing Scrapy in OSX 10.11 (El Capitan) (System Integrity Protection)
31,900,008
80
2015-08-09T01:00:37Z
33,136,494
154
2015-10-14T22:12:25Z
[ "python", "osx", "python-2.7", "scrapy" ]
I'm trying to install Scrapy Python framework in OSX 10.11 (El Capitan) via pip. The installation script downloads the required modules and at some point returns the following error: ``` OSError: [Errno 1] Operation not permitted: '/tmp/pip-nIfswi-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info' ``` I've tried to deactivate the rootless feature in OSX 10.11 with the command: ``` sudo nvram boot-args="rootless=0";sudo reboot ``` but I still get the same error when the machine reboots. Any clue or idea from my fellow StackExchangers? If it helps, the full script output is the following: ``` sudo -s pip install scrapy Collecting scrapy Downloading Scrapy-1.0.2-py2-none-any.whl (290kB) 100% |████████████████████████████████| 290kB 345kB/s Requirement already satisfied (use --upgrade to upgrade): cssselect>=0.9 in /Library/Python/2.7/site-packages (from scrapy) Requirement already satisfied (use --upgrade to upgrade): queuelib in /Library/Python/2.7/site-packages (from scrapy) Requirement already satisfied (use --upgrade to upgrade): pyOpenSSL in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from scrapy) Collecting w3lib>=1.8.0 (from scrapy) Downloading w3lib-1.12.0-py2.py3-none-any.whl Collecting lxml (from scrapy) Downloading lxml-3.4.4.tar.gz (3.5MB) 100% |████████████████████████████████| 3.5MB 112kB/s Collecting Twisted>=10.0.0 (from scrapy) Downloading Twisted-15.3.0.tar.bz2 (4.4MB) 100% |████████████████████████████████| 4.4MB 94kB/s Collecting six>=1.5.2 (from scrapy) Downloading six-1.9.0-py2.py3-none-any.whl Requirement already satisfied (use --upgrade to upgrade): zope.interface>=3.6.0 in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from Twisted>=10.0.0->scrapy) Requirement already satisfied (use --upgrade to upgrade): setuptools in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from 
zope.interface>=3.6.0->Twisted>=10.0.0->scrapy) Installing collected packages: six, w3lib, lxml, Twisted, scrapy Found existing installation: six 1.4.1 DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project. Uninstalling six-1.4.1: Exception: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/basecommand.py", line 223, in main status = self.run(options, args) File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/commands/install.py", line 299, in run root=options.root_path, File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_set.py", line 640, in install requirement.uninstall(auto_confirm=True) File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_install.py", line 726, in uninstall paths_to_remove.remove(auto_confirm) File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_uninstall.py", line 125, in remove renames(path, new_path) File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/utils/__init__.py", line 314, in renames shutil.move(old, new) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move copy2(src, real_dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2 copystat(src, dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat os.chflags(dst, st.st_flags) OSError: [Errno 1] Operation not permitted: '/tmp/pip-nIfswi-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info' ``` Thank you!
``` pip install --ignore-installed six ``` will do the trick. Source: [github.com/pypa/pip/issues/3165](https://github.com/pypa/pip/issues/3165)
"OSError: [Errno 1] Operation not permitted" when installing Scrapy in OSX 10.11 (El Capitan) (System Integrity Protection)
31,900,008
80
2015-08-09T01:00:37Z
33,245,444
65
2015-10-20T19:40:42Z
[ "python", "osx", "python-2.7", "scrapy" ]
I'm trying to install Scrapy Python framework in OSX 10.11 (El Capitan) via pip. The installation script downloads the required modules and at some point returns the following error: ``` OSError: [Errno 1] Operation not permitted: '/tmp/pip-nIfswi-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info' ``` I've tried to deactivate the rootless feature in OSX 10.11 with the command: ``` sudo nvram boot-args="rootless=0";sudo reboot ``` but I still get the same error when the machine reboots. Any clue or idea from my fellow StackExchangers? If it helps, the full script output is the following: ``` sudo -s pip install scrapy Collecting scrapy Downloading Scrapy-1.0.2-py2-none-any.whl (290kB) 100% |████████████████████████████████| 290kB 345kB/s Requirement already satisfied (use --upgrade to upgrade): cssselect>=0.9 in /Library/Python/2.7/site-packages (from scrapy) Requirement already satisfied (use --upgrade to upgrade): queuelib in /Library/Python/2.7/site-packages (from scrapy) Requirement already satisfied (use --upgrade to upgrade): pyOpenSSL in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from scrapy) Collecting w3lib>=1.8.0 (from scrapy) Downloading w3lib-1.12.0-py2.py3-none-any.whl Collecting lxml (from scrapy) Downloading lxml-3.4.4.tar.gz (3.5MB) 100% |████████████████████████████████| 3.5MB 112kB/s Collecting Twisted>=10.0.0 (from scrapy) Downloading Twisted-15.3.0.tar.bz2 (4.4MB) 100% |████████████████████████████████| 4.4MB 94kB/s Collecting six>=1.5.2 (from scrapy) Downloading six-1.9.0-py2.py3-none-any.whl Requirement already satisfied (use --upgrade to upgrade): zope.interface>=3.6.0 in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from Twisted>=10.0.0->scrapy) Requirement already satisfied (use --upgrade to upgrade): setuptools in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from 
zope.interface>=3.6.0->Twisted>=10.0.0->scrapy) Installing collected packages: six, w3lib, lxml, Twisted, scrapy Found existing installation: six 1.4.1 DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project. Uninstalling six-1.4.1: Exception: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/basecommand.py", line 223, in main status = self.run(options, args) File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/commands/install.py", line 299, in run root=options.root_path, File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_set.py", line 640, in install requirement.uninstall(auto_confirm=True) File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_install.py", line 726, in uninstall paths_to_remove.remove(auto_confirm) File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_uninstall.py", line 125, in remove renames(path, new_path) File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/utils/__init__.py", line 314, in renames shutil.move(old, new) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move copy2(src, real_dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2 copystat(src, dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat os.chflags(dst, st.st_flags) OSError: [Errno 1] Operation not permitted: '/tmp/pip-nIfswi-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info' ``` Thank you!
I also think it's absolutely not necessary to start hacking OS X. I was able to solve it by doing a ``` brew install python ``` It seems that the python/pip that comes with the new El Capitan has some issues.
"OSError: [Errno 1] Operation not permitted" when installing Scrapy in OSX 10.11 (El Capitan) (System Integrity Protection)
31,900,008
80
2015-08-09T01:00:37Z
36,921,836
7
2016-04-28T17:40:18Z
[ "python", "osx", "python-2.7", "scrapy" ]
I'm trying to install Scrapy Python framework in OSX 10.11 (El Capitan) via pip. The installation script downloads the required modules and at some point returns the following error: ``` OSError: [Errno 1] Operation not permitted: '/tmp/pip-nIfswi-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info' ``` I've tried to deactivate the rootless feature in OSX 10.11 with the command: ``` sudo nvram boot-args="rootless=0";sudo reboot ``` but I still get the same error when the machine reboots. Any clue or idea from my fellow StackExchangers? If it helps, the full script output is the following: ``` sudo -s pip install scrapy Collecting scrapy Downloading Scrapy-1.0.2-py2-none-any.whl (290kB) 100% |████████████████████████████████| 290kB 345kB/s Requirement already satisfied (use --upgrade to upgrade): cssselect>=0.9 in /Library/Python/2.7/site-packages (from scrapy) Requirement already satisfied (use --upgrade to upgrade): queuelib in /Library/Python/2.7/site-packages (from scrapy) Requirement already satisfied (use --upgrade to upgrade): pyOpenSSL in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from scrapy) Collecting w3lib>=1.8.0 (from scrapy) Downloading w3lib-1.12.0-py2.py3-none-any.whl Collecting lxml (from scrapy) Downloading lxml-3.4.4.tar.gz (3.5MB) 100% |████████████████████████████████| 3.5MB 112kB/s Collecting Twisted>=10.0.0 (from scrapy) Downloading Twisted-15.3.0.tar.bz2 (4.4MB) 100% |████████████████████████████████| 4.4MB 94kB/s Collecting six>=1.5.2 (from scrapy) Downloading six-1.9.0-py2.py3-none-any.whl Requirement already satisfied (use --upgrade to upgrade): zope.interface>=3.6.0 in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from Twisted>=10.0.0->scrapy) Requirement already satisfied (use --upgrade to upgrade): setuptools in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from 
zope.interface>=3.6.0->Twisted>=10.0.0->scrapy) Installing collected packages: six, w3lib, lxml, Twisted, scrapy Found existing installation: six 1.4.1 DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project. Uninstalling six-1.4.1: Exception: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/basecommand.py", line 223, in main status = self.run(options, args) File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/commands/install.py", line 299, in run root=options.root_path, File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_set.py", line 640, in install requirement.uninstall(auto_confirm=True) File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_install.py", line 726, in uninstall paths_to_remove.remove(auto_confirm) File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/req/req_uninstall.py", line 125, in remove renames(path, new_path) File "/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/utils/__init__.py", line 314, in renames shutil.move(old, new) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move copy2(src, real_dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2 copystat(src, dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat os.chflags(dst, st.st_flags) OSError: [Errno 1] Operation not permitted: '/tmp/pip-nIfswi-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info' ``` Thank you!
I tried to install the AWS CLI via pip on El Capitan, but this error appeared: > OSError: [Errno 1] Operation not permitted: > '/var/folders/wm/jhnj0g\_s16gb36y8kwvrgm7h0000gp/T/pip-wTnb\_D-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info' I found the answer [here](https://github.com/pypa/pip/issues/3165#issuecomment-165232166) ``` sudo -H pip install awscli --upgrade --ignore-installed six ``` It works for me :)
Why is 'a' in ('abc') True while 'a' in ['abc'] is False?
31,900,892
71
2015-08-09T04:24:52Z
31,900,920
135
2015-08-09T04:28:59Z
[ "python" ]
When using the interpreter, the expression `'a' in ('abc')` returns True, while `'a' in ['abc']` returns False. Can somebody explain this behaviour?
`('abc')` is the same as `'abc'`. `'abc'` contains the substring `'a'`, hence `'a' in 'abc' == True`. If you want the tuple instead, you need to write `('abc', )`. `['abc']` is a list (containing a single element, the string `'abc'`). `'a'` is not a member of this list, so `'a' in ['abc'] == False`
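To make the distinction concrete, here is a short, self-contained sketch (the variable names are illustrative):

```python
# ('abc') is just a parenthesized string; a trailing comma makes a tuple.
s = ('abc')          # same as 'abc' -> a str
t = ('abc',)         # a one-element tuple
l = ['abc']          # a one-element list

print('a' in s)      # True: substring containment on a string
print('a' in t)      # False: 'a' is not an element of the tuple
print('a' in l)      # False: 'a' is not an element of the list
```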
Why is 'a' in ('abc') True while 'a' in ['abc'] is False?
31,900,892
71
2015-08-09T04:24:52Z
31,900,922
50
2015-08-09T04:29:15Z
[ "python" ]
When using the interpreter, the expression `'a' in ('abc')` returns True, while `'a' in ['abc']` returns False. Can somebody explain this behaviour?
`('abc')` is not a tuple; I think you confused it with the tuple `('abc',)`. Actually, `('abc')` is the same as `'abc'`, a string (a sequence of characters) that contains the character `'a'`, hence the first lookup returns `True`: ``` >>> 'a' in 'abc' True ``` On the other hand, `['abc']` is a list of strings, or a list of lists of characters *(you can think of it as a 2-d matrix of characters `[['a', 'b', 'c']]`)*, where `'a'`, as a single character, is not a member of the outer list. In fact, it is the first character of the inner list: ``` >>> 'a' in ['abc'] False >>> 'a' in ['abc'][0] True >>> 'a' == ['abc'][0][0] True ```
How to recalculate a variable value every time it is used?
31,905,803
3
2015-08-09T15:26:00Z
31,905,856
9
2015-08-09T15:31:19Z
[ "python" ]
I want a variable to be calculated each time it is used. For example, `var1 = var2 + var3` each time `var1` is printed. How can I create a "dynamic" variable like this? ``` var2 = 4 var3 = 2 print(var1) # 6 var3 = 8 print(var1) # 12 ```
You'll need a property on a class instance. The property will execute some code each time it's accessed. For example, here's a property that increments its value each time it is accessed. Define `__str__` to access the property, so you can `print(var1)` and get its incrementing value. ``` class MyVar(object): def __init__(self, initial=0): self._var1 = initial @property def var1(self): self._var1 += 1 return self._var1 def __str__(self): return str(self.var1) var1 = MyVar() print(var1) # 1 print(var1) # 2 print(var1.var1) # 3 ``` Calculate `var1 = var2 + var3`: ``` class MyVar(object): def __init__(self, var2, var3): self.var2 = var2 self.var3 = var3 @property def var1(self): return self.var2 + self.var3 var = MyVar(4, 2) print(var.var1) # 6 var.var3 = 8 print(var.var1) # 12 ```
Pylint to show only warnings and errors
31,907,762
3
2015-08-09T18:57:13Z
31,908,039
7
2015-08-09T19:28:58Z
[ "python", "pylint" ]
I would like to use pylint to check my code but I am only interested in error and warning levels. Is there a way to do that in command line or in pylintrc? I am not interested in filtering given issues (like listing all messages in MESSAGE CONTROL), I just want pylint to ignore **all** convention and refactor messages. Note: I don't think that's a duplicate of [Using Pylint to display error and warnings](http://stackoverflow.com/questions/20639173/using-pylint-to-display-error-and-warnings)
Use the `-d` / `--disable` option to turn off the "C" and "R" message classes (convention and refactor): ``` -d <msg ids>, --disable=<msg ids> Disable the message, report, category or checker with the given id(s). You can either give multiple identifiers separated by comma (,) or put this option multiple times (only on the command line, not in the configuration file where it should appear only once).You can also use "--disable=all" to disable everything first and then reenable specific checks. For example, if you want to run only the similarities checker, you can use "--disable=all --enable=similarities". If you want to run only the classes checker, but have no Warning level messages displayed, use"--disable=all --enable=classes --disable=W" ``` Without the `disable` option (6 convention, 1 refactor, 2 warning, 1 error): ``` $ pylint x.py C: 1, 0: Missing module docstring (missing-docstring) C: 3, 0: Missing function docstring (missing-docstring) R: 3, 0: Too many statements (775/50) (too-many-statements) W:780,15: Redefining name 'path' from outer scope (line 796) (redefined-outer-name) C:780, 0: Invalid function name "getSection" (invalid-name) C:780, 0: Empty function docstring (empty-docstring) C:782,23: Invalid variable name "inPath" (invalid-name) W:785, 4: Statement seems to have no effect (pointless-statement) E:785, 4: Undefined variable 'something' (undefined-variable) C:796, 4: Invalid constant name "path" (invalid-name) ``` After using the `disable` option (0 convention, 0 refactor, 2 warning, 1 error): ``` $ pylint --disable=R,C x.py W:780,15: Redefining name 'path' from outer scope (line 796) (redefined-outer-name) W:785, 4: Statement seems to have no effect (pointless-statement) E:785, 4: Undefined variable 'something' (undefined-variable) ``` To set this option in `pylintrc`: ``` disable=R,C ```
how to match all 3 digit except a particular number
31,907,906
4
2015-08-09T19:11:56Z
31,908,004
10
2015-08-09T19:24:13Z
[ "python", "regex", "regex-negation" ]
How do I match all 3-digit integers except one particular integer, say 914? Getting all 3-digit integers is simple enough: `[0-9][0-9][0-9]`. Trying something like `[0-8][0,2-9][0-3,5-9]` removes more integers from the set than just 914. How do we solve this problem?
You can use a negative look-ahead to add an exception: ``` \b(?!914)\d{3}\b ``` The word boundary `\b` ensures we match a number as a whole word. See [regex demo](https://regex101.com/r/aH7qQ6/1) and [IDEONE demo](https://ideone.com/0yhWuk): ``` import re p = re.compile(r'\b(?!914)\d{3}\b') test_str = "123\n235\n456\n1000\n910 911 912 913\n 914\n915 916" print(re.findall(p, test_str)) ``` Output: ``` ['123', '235', '456', '910', '911', '912', '913', '915', '916'] ```
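If instead you want to validate a single candidate string rather than scan running text, the same negative lookahead works with `re.fullmatch`, which anchors both ends for you (a small sketch of the same idea without word boundaries):

```python
import re

# Reject exactly '914'; accept any other 3-digit string.
pattern = re.compile(r'(?!914)\d{3}')

print(bool(pattern.fullmatch('913')))   # True
print(bool(pattern.fullmatch('914')))   # False: lookahead rejects it
print(bool(pattern.fullmatch('9145')))  # False: fullmatch requires exactly 3 digits
```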