11,459,861
I am a molecular biologist using Biopython to analyze mutations in genes and my problem is this: I have a file containing many different sequences (millions), most of which are duplicates. I need to find the duplicates and discard them, keeping one copy of each unique sequence. I was planning on using the module editd...
2012/07/12
[ "https://Stackoverflow.com/questions/11459861", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1513202/" ]
Assuming your file consists solely of sequences arranged one sequence per line, I would suggest the following: ``` seq_file = open("your_file.txt") # placeholder path sequences = [seq for seq in seq_file] uniques = list(set(sequences)) ``` Assuming you have the memory for it. How many millions? ETA: Was reading the comments above (but ...
Four things come to mind: 1. You can use a set(), as described by F.X. - assuming the unique strings will all fit in memory 2. You can use one file per sequence, and feed the files to a program like equivs3e: <http://stromberg.dnsalias.org/~strombrg/equivalence-classes.html#python-3e> 3. You could perhaps use a gdbm a...
Does it have to be Python? If the sequences are simply text strings one per line then a shell script will be very efficient: ``` sort input-file-name | uniq > output-file-name ``` This will do the job on files up to 2GB on 32 bit Linux. If you are on Windows then install the GNU utils <http://gnuwin32.sourceforge....
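The `set()` idea from the first answer can also be made more memory-conscious by streaming the file line by line, so that only the unique sequences (not all input lines) are held in memory at once. A minimal sketch, with placeholder file names and a helper name of my own:

```python
def dedupe_lines(in_path, out_path):
    # Stream the input file and write each sequence the first time it is
    # seen; only the set of unique sequences stays in memory.
    seen = set()
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            seq = line.strip()
            if seq and seq not in seen:
                seen.add(seq)
                dst.write(seq + "\n")
    return len(seen)
```

Unlike `sort | uniq`, this preserves the original order of first occurrence, which can matter for downstream tools.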
2,396,382
this is the script >> ``` import ClientForm import urllib2 request = urllib2.Request("http://ritaj.birzeit.edu") response = urllib2.urlopen(request) forms = ClientForm.ParseResponse(response, backwards_compat=False) response.close() form = forms[0] print form sooform = str(raw_input("Form Name: ")) username = str(ra...
2010/03/07
[ "https://Stackoverflow.com/questions/2396382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/288208/" ]
The only `<form>` tag in the HTML served at that URL (save it to a file and look for yourself!) is: ``` <form method="GET" action="http://www.google.com/u/ritaj"> ``` which does a customized Google search and has nothing to do with logging in (plus, for some reason, ClientForm has some problem identifying that speci...
The actual address seems to use `https` instead of `http`. Check the [urllib2](http://docs.python.org/library/urllib2.html) docs to see whether it handles HTTPS (I believe you need SSL support).
25,240,268
Say for example, I have two text files containing the following: **File 1** > > "key\_one" = "String value for key one" > > "key\_two" = "String value for key two" > > // COMMENT // > > "key\_three" = "String value for key two" > > > **File 2** > > // COMMENT > > "key\_one" = "key\_one" > > ...
2014/08/11
[ "https://Stackoverflow.com/questions/25240268", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1813167/" ]
There is a quote that I heard from somewhere: "If you have a problem and you try to solve it with regular expressions, you now have two problems". What you want to achieve can be easily done with just a few inbuilt Python string methods such as `startswith()` and `split()`, without using any regex. In short you can d...
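A minimal sketch of the no-regex approach this answer describes, using `startswith()` and `split()` on the key/value format shown in the question (the function name is mine):

```python
def parse_line(line):
    # Skip comment lines and anything without a key/value separator.
    line = line.strip()
    if line.startswith("//") or "=" not in line:
        return None
    # Split on the first "=" only, in case values contain one.
    key, value = line.split("=", 1)
    # Drop surrounding whitespace and the enclosing double quotes.
    return key.strip().strip('"'), value.strip().strip('"')
```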
```
import pprint

def get_values(f):
    file1 = open(f, "r").readlines()
    values = {}
    for line in file1:
        if line[:2] != "//" and "=" in line:
            key, value = line.split("=")
            values[key] = value
    return values

def replace_values(v1, v2):
    for key in v1:
        v = v1[key]
        if key in...
```
Using some of the tips given here I coded my own solution. It could probably be improved in a few places but I am pleased with myself for creating the solution without just copying and pasting someone else's answer. So, my solution: ``` import fileinput translations = {} with open('file1.txt', 'r') as fileOne: t...
You can create a dictionary from the `FILE1` and then use it to replace values in `FILE2` ``` import fileinput import re pattern = re.compile(r'"(.*?)"\s+=\s+"(.*?)"') with open('FILE1', 'r') as f: values = dict(pattern.findall(f.read())) for line in fileinput.input('FILE2', inplace=True): match = pattern.m...
50,201,607
TL;DR When updating from CMake 3.10 to CMake 3.11.1 on archlinux, the following configuration line: find\_package(Boost COMPONENTS python3 COMPONENTS numpy3 REQUIRED) leads to CMake linking against 3 different libraries ``` -- Boost version: 1.66.0 -- Found the following Boost libraries: -- python3 -- numpy3 -- ...
2018/05/06
[ "https://Stackoverflow.com/questions/50201607", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7141288/" ]
This bug is due to an invalid dependency description in `FindBoost.cmake` ``` set(_Boost_NUMPY_DEPENDENCIES python) ``` This has been fixed at <https://github.com/Kitware/CMake/commit/c747d4ccb349f87963a8d1da69394bc4db6b74ed> Please use latest one, or you can rewrite it manually: ``` set(_Boost_NUMPY_DEPENDENC...
[CMake 3.10 does not properly support Boost 1.66](https://stackoverflow.com/a/42124857/2799037). The Boost dependencies are hard-coded, and if they change, CMake has to adapt. Delete the build directory and reconfigure. The configure step uses cached variables, which prevents re-detection with the newer routines.
7,092,407
Im working a mongodb database using pymongo python module. I have a function in my code which when called updates the records in the collection as follows. ``` for record in coll.find(<some query here>): #Code here #... #... coll.update({ '_id' : record['_id'] },record) ``` Now, if i modify the code as f...
2011/08/17
[ "https://Stackoverflow.com/questions/7092407", "https://Stackoverflow.com", "https://Stackoverflow.com/users/898562/" ]
Increase your memory buffer size `php_value memory_limit 64M` in your .htacess or `ini_set('memory_limit','64M');` in your php file
It depends on your implementation. Last time, when I was working on a CSV file with more than 500,000 records, I got the same message. Later I introduced classes and tried to close the open objects, which reduced the memory consumption. If you are opening an image and editing it, it means it is loaded into memory. In that case siz...
As riky said, set the memory limit higher if you can. Also realize that the dimensions are more important than the file size (as the file size is for a compressed image). When you open an image in GD, every pixel gets 3-4 bytes allocated to it, RGB and possibly A. Thus, your 4912px x 3264px image needs to use 48,098,30...
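The arithmetic in this answer is easy to check: decoded pixels, not the compressed file size, drive memory use. A quick estimate (the 3-4 bytes per pixel figure is the one the answer cites; the helper name is mine):

```python
def gd_memory_estimate(width, height, bytes_per_pixel=4):
    # Rough decoded-image footprint: one RGB(A) sample per pixel.
    # GD adds its own overhead on top of this.
    return width * height * bytes_per_pixel

# The 4912x3264 image from the answer, at 3 bytes per pixel (RGB):
# ~48 MB of pixel data alone, before GD's overhead.
print(gd_memory_estimate(4912, 3264, 3))
```

At 4 bytes per pixel (RGBA) the same image needs roughly 61 MB, which explains why a default `memory_limit` of 64M is easily exhausted.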
The amount of memory used depends on the image dimensions and the color bit depth. I also ran into that problem a few years ago, while building a portfolio website for photographers. The only way to properly solve this is to switch your image library from GD to Imagick. Imagick consumes far less memory, and is not tied to ...
41,504,340
This question [explains](https://stackoverflow.com/questions/7300321/how-to-use-pythons-pip-to-download-and-keep-the-zipped-files-for-a-package) how to make pip download and save packages. If I follow this formula, Pip will download wheel (.whl) files if available. ``` (venv) [user@host glances]$ pip download -d whee...
2017/01/06
[ "https://Stackoverflow.com/questions/41504340", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1179137/" ]
According to `pip install -h`: > > --no-use-wheel Do not Find and prefer wheel archives when searching indexes and find-links locations. DEPRECATED in favour of --no-binary. > > > And > > --no-binary Do not use binary packages. Can be supplied multiple times, and each time adds to the existing value. Accepts ei...
use `pip download --no-binary=:all: -r requirements.txt` According to the pip documentation: **--no-binary:** > > Do not use binary packages. Can be supplied multiple times, and each > time adds to the existing value. Accepts either :all: to disable all > binary packages, :none: to empty the set, or one or more ...
6,022,450
I'm using Scrapy to scrape a website. The item page that I want to scrape looks like: <http://www.somepage.com/itempage/&page=x>. Where `x` is any number from `1` to `100`. Thus, I have an `SgmlLinkExractor` Rule with a callback function specified for any page resembling this. The website does not have a listpage with...
2011/05/16
[ "https://Stackoverflow.com/questions/6022450", "https://Stackoverflow.com", "https://Stackoverflow.com/users/648121/" ]
You could list all the known URLs in your [`Spider`](http://doc.scrapy.org/topics/spiders.html#spiders) class' [start\_urls](http://doc.scrapy.org/topics/spiders.html#scrapy.spider.BaseSpider.start_urls) attribute: ``` class SomepageSpider(BaseSpider): name = 'somepage.com' allowed_domains = ['somepage.com'] ...
If it's just a one time thing, you can create a local html file `file:///c:/somefile.html` with all the links. Start scraping that file and add `somepage.com` to allowed domains. Alternately, in the parse function, you can return a new Request which is the next url to be scraped.
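The `start_urls` approach from the first answer can be sketched by generating all 100 known page URLs up front, since the pattern is fixed (the URL pattern is the one from the question; `somepage.com` is its placeholder domain):

```python
# Build the 100 item-page URLs for the spider's start_urls attribute.
start_urls = [
    "http://www.somepage.com/itempage/&page={}".format(x)
    for x in range(1, 101)
]
```

This avoids needing a listing page entirely: Scrapy will simply request each URL in turn and hand the responses to the parse callback.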
58,635,279
I have created a brand new [Python repository](https://github.com/neuropsychology/NeuroKit) based on a cookie-cutter template. Everything looks okay, so I am trying now to set the testing and testing coverage using travis and codecov. I am new to pytest but I am trying to do things right. After looking on the internet,...
2019/10/31
[ "https://Stackoverflow.com/questions/58635279", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4198688/" ]
1) create pytest.ini file in your project directory and add the following lines ``` [pytest] testpaths = tests python_files = *.py python_functions = test_* ``` 2) create .coveragerc file in project directory and add the following lines ``` [report] fail_under = 90 show_missing = True ``` 3) pytest for code cover...
Looks like you're missing `coverage` on your installs. You have it on scripts but it might not be running. Try adding `pip install coverage` in your travis.yml file. Have a go at this too: [codecov](https://github.com/codecov/example-python)
44,492,238
I am learning python & trying to scrape a website, having 10 listing of properties on each page. I want to extract information from each listing on each page. My code for first 5 pages is as follows :- ``` import requests from bs4 import BeautifulSoup urls = [] for i in range(1,5): pages = "http://www.realcommer...
2017/06/12
[ "https://Stackoverflow.com/questions/44492238", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7961265/" ]
One problem in your code is that you declared the variable `urls` twice. You need to update the code like below: ``` import requests from bs4 import BeautifulSoup urls = [] for i in range(1,6): pages = "http://www.realcommercial.com.au/sold/property-offices-retail-showrooms+bulky+goods-land+development-...
Use headers in the code and use string concatenation instead of .format(i) The code looks like this ``` import requests from bs4 import BeautifulSoup urls = [] for i in range(1,6): pages = 'http://www.realcommercial.com.au/sold/property-offices-retail-showrooms+bulky+goods-land+development-hotel+leisure-medical...
You can tell BeautifulSoup to only give you links containing a `href` to make your code safer. Also, rather than modifying your URL to include a page number, you could extract the `next >` link at the bottom. This would also then automatically stop when the final page has been returned: ``` import requests from bs4 i...
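The "only follow anchors that actually carry a `href`" idea from this answer (BeautifulSoup's `find_all('a', href=True)` does it directly) can also be sketched with the standard library's `html.parser`, which keeps the example dependency-free (the class name is mine):

```python
from html.parser import HTMLParser

class HrefCollector(HTMLParser):
    """Collect href values from <a> tags, skipping anchors without one."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attrs = dict(attrs)
            if attrs.get("href"):
                self.links.append(attrs["href"])

p = HrefCollector()
p.feed('<a href="/page/2">next &gt;</a><a name="top">no href</a>')
# p.links now holds only the anchor that had a href: ["/page/2"]
```

Following a harvested "next" link like this stops paging automatically once the final page no longer contains one.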
21,778,187
I would like to find text in file with regular expression and after replace it to another name. I have to read file line by line at first because in other way re.match(...) can`t find text. My test file where I would like to make modyfications is (no all, I removed some code): ``` //... #include <boost/test/included/...
2014/02/14
[ "https://Stackoverflow.com/questions/21778187", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1693143/" ]
Looks like this isn't possible to do. To cut down on duplicate code, simply declare the error handling function separately and reuse it inside the response and responseError functions. ``` $httpProvider.interceptors.push(function($q) { var handleError = function (rejection) { ... } return { response...
To add to this answer: rejecting the promise in the response interceptor DOES do something. Although at first glance one would expect it to call responseError, that would not make much sense: the request was fulfilled with success. But rejecting it in the response interceptor will make the caller of the promise ...
Should you want to pass the http response to the responseError handler, you could do it like this: ``` $httpProvider.interceptors.push(function($q) { var self = { response: function (response) { if (response.data.error) { return self.responseError(response); } ...
49,773,418
after writing import tensorflow\_hub, the following error emerges: ``` class LatestModuleExporter(tf.estimator.Exporter): ``` AttributeError: module 'tensorflow.python.estimator.estimator\_lib' has no attribute 'Exporter' I'm using python 3.6 with tensorflow 1.7 on Windows 10 thanks!
2018/04/11
[ "https://Stackoverflow.com/questions/49773418", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2393805/" ]
You can reinstall TensorFlow\_hub: ``` pip install ipykernel pip install tensorflow_hub ```
I believe your python3 runtime is not really running with tensorflow 1.7. That attribute exists since tensorflow 1.4. I suspect some mismatch between python2/3 environment, mismatch installing with pip/pip3 or an issue with installing both tensorflow and tf-nightly pip packages. You can double check with: ``` $ pytho...
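A quick way to verify which interpreter and which package version are actually in play, as this answer suggests (the helper function is mine and works for any installed package, without importing it):

```python
import sys
from importlib import metadata  # Python 3.8+

def installed_version(package):
    # Return the installed version string, or None if the package is
    # absent from the environment this interpreter runs in.
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

print(sys.executable)                       # which Python is really running
print(installed_version("tensorflow"))      # version seen by THIS interpreter
```

If the printed version disagrees with what `pip show tensorflow` reports on the command line, the shell's `pip` and the interpreter belong to different environments.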
You should have at least tensorflow 1.7.0; upgrade with: ``` pip install "tensorflow>=1.7.0" pip install tensorflow-hub ``` source: [here](https://www.tensorflow.org/hub/installation)
Simply run the following line in a cell: `import tensorflow_hub as hub`
15,448,584
I have 2 lists `a = [2, 6, 12, 13, 1, 4, 5]` and `b = [12, 1]`. Elements in list `b` are a subset of list `a`. From the above pair of lists, I need to create a list of tuples as following : ``` [(12,6),(12,2),(1,13),(1,12),(1,6),(1,2)] ``` Basically, at the point of intersection of list `b` and list `a`, so from ...
2013/03/16
[ "https://Stackoverflow.com/questions/15448584", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1923226/" ]
You typically need to use `glibtool` and `glibtoolize`, since `libtool` already exists on OS X as a binary tool for creating Mach-O dynamic libraries. So, that's how MacPorts installs it, using a program name transform, though the port itself is still named 'libtool'. Some `autogen.sh` scripts (or their equivalent) wi...
I hope my answer is not too naive; I am an OS X noob. Installing libtool via [brew](http://brew.sh/) (`brew install libtool`) solved a similar issue for me.
An alternative to Brew is to use `macports`. For example: ``` $ port info libtool libtool @2.4.6_5 (devel, sysutils) Variants: universal Description: GNU libtool is a generic library support script. Libtool hides the complexity of using shared libraries behind a consistent, portable interface. Ho...
To bring together a few threads `libtoolize` is installed as `glibtoolize` when you install `libtool` using **brew**. This can be achieved as follows; install it and then create a softlink for libtoolize: ``` brew install libtool ln -s /usr/local/bin/glibtoolize /usr/local/bin/libtoolize ```
15,448,584
I have 2 lists `a = [2, 6, 12, 13, 1, 4, 5]` and `b = [12, 1]`. Elements in list `b` are a subset of list `a`. From the above pair of lists, I need to create a list of tuples as follows: ``` [(12,6),(12,2),(1,13),(1,12),(1,6),(1,2)] ``` Basically, at the point of intersection of list `b` and list `a`, so from ...
2013/03/16
[ "https://Stackoverflow.com/questions/15448584", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1923226/" ]
I hope my answer is not too naive. I am a noob to OSX. Installing libtool via [brew](http://brew.sh/) (`brew install libtool`) solved a similar issue for me.
An alternative to Brew is to use `macports`. For example: ``` $ port info libtool libtool @2.4.6_5 (devel, sysutils) Variants: universal Description: GNU libtool is a generic library support script. Libtool hides the complexity of using shared libraries behind a consistent, portable interface. Ho...
15,448,584
I have 2 lists `a = [2, 6, 12, 13, 1, 4, 5]` and `b = [12, 1]`. Elements in list `b` are a subset of list `a`. From the above pair of lists, I need to create a list of tuples as follows: ``` [(12,6),(12,2),(1,13),(1,12),(1,6),(1,2)] ``` Basically, at the point of intersection of list `b` and list `a`, so from ...
2013/03/16
[ "https://Stackoverflow.com/questions/15448584", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1923226/" ]
I hope my answer is not too naive. I am a noob to OSX. Installing libtool via [brew](http://brew.sh/) (`brew install libtool`) solved a similar issue for me.
To bring together a few threads `libtoolize` is installed as `glibtoolize` when you install `libtool` using **brew**. This can be achieved as follows; install it and then create a softlink for libtoolize: ``` brew install libtool ln -s /usr/local/bin/glibtoolize /usr/local/bin/libtoolize ```
15,448,584
I have 2 lists `a = [2, 6, 12, 13, 1, 4, 5]` and `b = [12, 1]`. Elements in list `b` are a subset of list `a`. From the above pair of lists, I need to create a list of tuples as follows: ``` [(12,6),(12,2),(1,13),(1,12),(1,6),(1,2)] ``` Basically, at the point of intersection of list `b` and list `a`, so from ...
2013/03/16
[ "https://Stackoverflow.com/questions/15448584", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1923226/" ]
To bring together a few threads `libtoolize` is installed as `glibtoolize` when you install `libtool` using **brew**. This can be achieved as follows; install it and then create a softlink for libtoolize: ``` brew install libtool ln -s /usr/local/bin/glibtoolize /usr/local/bin/libtoolize ```
An alternative to Brew is to use `macports`. For example: ``` $ port info libtool libtool @2.4.6_5 (devel, sysutils) Variants: universal Description: GNU libtool is a generic library support script. Libtool hides the complexity of using shared libraries behind a consistent, portable interface. Ho...
68,438,620
I am trying to build and run the sample `python` application from AWS SAM. I just installed Python; below is what the command line gives: ``` D:\Udemy Work>python Python 3.9.6 (tags/v3.9.6:db3ff76, Jun 28 2021, 15:26:21) [MSC v.1929 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more infor...
2021/07/19
[ "https://Stackoverflow.com/questions/68438620", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1379286/" ]
You basically need unpivot or melt: <https://pandas.pydata.org/docs/reference/api/pandas.melt.html> ``` pd.melt(df, id_vars=['Number','From','To'], value_vars = ['D1_value','D2_value'])\ .rename({'variable':'Type'},axis=1)\ .dropna(subset=['value'],axis=0) ```
You can also use `pd.wide_to_long`, after reordering the column positions: ``` temp = df.rename(columns = lambda col: "_".join(col.split("_")[::-1]) if col.endswith("value") else col) pd.wide_to_long(temp, stubnames = 'value', i=['Number', 'Fro...
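A minimal runnable sketch of the melt approach, using a made-up frame with the same column names as the answer (the data values here are invented for illustration):

```python
import pandas as pd

# Hypothetical wide frame: two value columns, one per "D" source
df = pd.DataFrame({"Number": [1, 2], "From": ["A", "B"], "To": ["C", "D"],
                   "D1_value": [10.0, None], "D2_value": [None, 20.0]})

# Unpivot the value columns, tag each row with its source column,
# then drop the rows whose value was missing
out = (pd.melt(df, id_vars=["Number", "From", "To"],
               value_vars=["D1_value", "D2_value"])
         .rename({"variable": "Type"}, axis=1)
         .dropna(subset=["value"]))
```

Each surviving row carries its identifier columns plus the source column name in `Type`.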
28,967,976
I'm reading a pcap file in Python using scapy which contains Ethernet packets that have trailers. How can I remove these trailers? P.S.: Ethernet packets cannot be less than 64 bytes (including FCS). Network adapters add zero padding bytes to the end of the packet to overcome this problem. These padding bytes are called "Traile...
2015/03/10
[ "https://Stackoverflow.com/questions/28967976", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2133144/" ]
It seems there is no official way to remove it. This works on frames that have IPv4 as the network layer protocol: ``` packet_without_trailer=IP(str(packet[IP])[0:packet[IP].len]) ```
Just use the upper layers and ignore the Ethernet layer: `packet = eval(originalPacket[IP])`
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(r...
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
The logging module is designed to stop bad log messages from killing the rest of the code, so the `emit` method catches errors and passes them to a method `handleError`. The easiest thing for you to do would be to temporarily edit `/usr/lib/python2.6/logging/__init__.py`, and find `handleError`. It looks something like...
Rather than editing installed python code, you can also find the errors like this: ``` def handleError(record): raise RuntimeError(record) handler.handleError = handleError ``` where handler is one of the handlers that is giving the problem. Now when the format error occurs you'll see the location.
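A self-contained sketch of that monkey-patch (the logger and handler names here are made up for the demo):

```python
import io
import logging

# Throwaway logger with a handler writing to an in-memory stream
logger = logging.getLogger("format-debug-demo")
handler = logging.StreamHandler(io.StringIO())
logger.addHandler(handler)

def handle_error(record):
    # Re-raise instead of swallowing, so the traceback shows the bad call site
    raise RuntimeError(record)

handler.handleError = handle_error

try:
    logger.error("bad format %d", "not a number")  # %d applied to a str
    raised = False
except RuntimeError:
    raised = True
```

The `%d`/`str` mismatch fails inside `emit`, which routes the error to the patched `handleError`, and the `RuntimeError` propagates up to the logging call.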
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(r...
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
The logging module is designed to stop bad log messages from killing the rest of the code, so the `emit` method catches errors and passes them to a method `handleError`. The easiest thing for you to do would be to temporarily edit `/usr/lib/python2.6/logging/__init__.py`, and find `handleError`. It looks something like...
Alternatively you can create a formatter of your own, but then you have to include it everywhere. ``` class DebugFormatter(logging.Formatter): def format(self, record): try: return super(DebugFormatter, self).format(record) except: print "Unable to format record" ...
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(r...
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
The logging module is designed to stop bad log messages from killing the rest of the code, so the `emit` method catches errors and passes them to a method `handleError`. The easiest thing for you to do would be to temporarily edit `/usr/lib/python2.6/logging/__init__.py`, and find `handleError`. It looks something like...
It's not really an answer to the question, but hopefully it will help other beginners with the logging module like me. My problem was that I replaced all occurrences of print with logging.info, so a valid line like `print('a',a)` became `logging.info('a',a)` (but it should be `logging.info('a %s'%a)` instead). This w...
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(r...
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
The logging module is designed to stop bad log messages from killing the rest of the code, so the `emit` method catches errors and passes them to a method `handleError`. The easiest thing for you to do would be to temporarily edit `/usr/lib/python2.6/logging/__init__.py`, and find `handleError`. It looks something like...
**Had the same problem.** Such a traceback arises from a wrong format name. So while creating a format for a log file, check the format name against the Python documentation: "<https://docs.python.org/3/library/logging.html#formatter-objects>"
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(r...
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
Rather than editing installed python code, you can also find the errors like this: ``` def handleError(record): raise RuntimeError(record) handler.handleError = handleError ``` where handler is one of the handlers that is giving the problem. Now when the format error occurs you'll see the location.
Alternatively you can create a formatter of your own, but then you have to include it everywhere. ``` class DebugFormatter(logging.Formatter): def format(self, record): try: return super(DebugFormatter, self).format(record) except: print "Unable to format record" ...
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(r...
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
Rather than editing installed python code, you can also find the errors like this: ``` def handleError(record): raise RuntimeError(record) handler.handleError = handleError ``` where handler is one of the handlers that is giving the problem. Now when the format error occurs you'll see the location.
It's not really an answer to the question, but hopefully it will help other beginners with the logging module like me. My problem was that I replaced all occurrences of print with logging.info, so a valid line like `print('a',a)` became `logging.info('a',a)` (but it should be `logging.info('a %s'%a)` instead). This w...
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(r...
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
Rather than editing installed python code, you can also find the errors like this: ``` def handleError(record): raise RuntimeError(record) handler.handleError = handleError ``` where handler is one of the handlers that is giving the problem. Now when the format error occurs you'll see the location.
**Had the same problem.** Such a traceback arises from a wrong format name. So while creating a format for a log file, check the format name against the Python documentation: "<https://docs.python.org/3/library/logging.html#formatter-objects>"
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(r...
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
It's not really an answer to the question, but hopefully it will help other beginners with the logging module like me. My problem was that I replaced all occurrences of print with logging.info, so a valid line like `print('a',a)` became `logging.info('a',a)` (but it should be `logging.info('a %s'%a)` instead). This w...
Alternatively you can create a formatter of your own, but then you have to include it everywhere. ``` class DebugFormatter(logging.Formatter): def format(self, record): try: return super(DebugFormatter, self).format(record) except: print "Unable to format record" ...
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(r...
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
Alternatively you can create a formatter of your own, but then you have to include it everywhere. ``` class DebugFormatter(logging.Formatter): def format(self, record): try: return super(DebugFormatter, self).format(record) except: print "Unable to format record" ...
**Had the same problem.** Such a traceback arises from a wrong format name. So while creating a format for a log file, check the format name against the Python documentation: "<https://docs.python.org/3/library/logging.html#formatter-objects>"
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(r...
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
It's not really an answer to the question, but hopefully it will help other beginners with the logging module like me. My problem was that I replaced all occurrences of print with logging.info, so a valid line like `print('a',a)` became `logging.info('a',a)` (but it should be `logging.info('a %s'%a)` instead). This w...
**Had the same problem.** Such a traceback arises from a wrong format name. So while creating a format for a log file, check the format name against the Python documentation: "<https://docs.python.org/3/library/logging.html#formatter-objects>"
52,629,106
Hello everyone, I have a file which consists of some random information, but I only want the part that is important to me. ``` name: Zack age: 17 As Mixed: Zack:17 Subjects opted : 3 Subject #1: Arts name: Mike age: 15 As Mixed: Mike:15 Subjects opted : 3 Subject #1: Arts ``` Above is an example of my text file I wan...
2018/10/03
[ "https://Stackoverflow.com/questions/52629106", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9606164/" ]
You can split the data at the `:` and grab only `As Mixed` parameter ``` content = [i.strip('\n').split(': ') for i in open('filename.txt')] results = [b for a, b in content if a.startswith('As Mixed')] ``` Output: ``` ['Zack:17', 'Mike:15'] ``` To write the results to a file: ``` with open('filename.txt', 'w') ...
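The same split-and-filter idea on an inline sample, so no file is needed:

```python
# In-memory stand-in for the file contents
lines = ["name: Zack", "age: 17", "As Mixed: Zack:17",
         "name: Mike", "age: 15", "As Mixed: Mike:15"]

# Split each line at the first ': ' only, so values like "Zack:17" stay intact
content = [line.split(': ', 1) for line in lines]
results = [value for key, value in content if key.startswith('As Mixed')]
```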
Try this ``` import re found = [] match = re.compile('(Mike|Zack):(\w*)') with open('/hope/ninja/Destop/raw.twt', "r") as raw: for rec in raw: found.extend(match.findall(rec)) print(found) #output: [('Mike', '15'), ('Zack', '17')] ``` This uses regular expressions to find the value needed, basically `(...
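A self-contained version of the regex approach, with the sample text inlined rather than read from a file (note the `re` method is `findall`; the pattern here anchors on the `As Mixed` prefix instead of hard-coding names):

```python
import re

text = """name: Zack
age: 17
As Mixed: Zack:17
name: Mike
age: 15
As Mixed: Mike:15
"""

# Two capture groups: name and age; findall returns one tuple per match
match = re.compile(r'As Mixed: (\w+):(\w+)')
found = match.findall(text)
```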
31,112,523
I am using this python script to download OSM data and convert it to an undirected networkx graph: <https://gist.github.com/rajanski/ccf65d4f5106c2cdc70e> However, in the ideal case, I would like to generate a directed graph from it in order to reflect the directionality of the OSM street network. First of all, can y...
2015/06/29
[ "https://Stackoverflow.com/questions/31112523", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2772305/" ]
The order of the nodes only matters if the way is tagged with *[oneway](https://wiki.openstreetmap.org/wiki/Key:oneway)=yes* or *oneway=-1*. Otherwise the way is bidirectional. This applies only for vehicles of course. The only exception is *[highway=motorway](https://wiki.openstreetmap.org/wiki/Tag:highway%3Dmotorway)...
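A sketch of how those rules could translate into directed edges. The `oneway` handling below is an illustration only, not the full OSM tagging scheme (for instance, it ignores the implied oneway on motorways mentioned above):

```python
def directed_edges(node_ids, tags):
    """Turn one OSM way's node list into directed edge pairs per its oneway tag."""
    forward = list(zip(node_ids, node_ids[1:]))
    backward = [(b, a) for a, b in forward]
    oneway = tags.get("oneway")
    if oneway in ("yes", "true", "1"):
        return forward           # only in drawing order
    if oneway == "-1":
        return backward          # oneway against drawing order
    return forward + backward    # bidirectional by default

edges = directed_edges([1, 2, 3], {"oneway": "yes"})
```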
OK, I updated my script in order to enable directionality: <https://gist.github.com/rajanski/ccf65d4f5106c2cdc70e>
45,382,324
I will try to be very specific and informative. I want to create a Dockerfile with all the packages that are used in geosciences for the good of the geospatial/geoscientific community. The Dockerfile is built on top of the [scipy-notebook](https://github.com/jupyter/docker-stacks/tree/master/scipy-notebook) docker-stac...
2017/07/28
[ "https://Stackoverflow.com/questions/45382324", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5361345/" ]
> > looks like it would work in the older groovy Jenkinsfiles > > > You can use the `script` step to enclose a block of code, and, inside this block, declarative pipelines basically act like scripted, so you can still use the technique described in the answer you referenced. Welcome to Stack Overflow. I hope you e...
I was facing the same issue and found that the following avoids the 'Requires approval of the script' step in my Jenkins server at Jenkins > Manage Jenkins > In-process Script Approval. Instead of: env['setup_build_number'] = setupResult.getNumber() (from the code mentioned in the solution above), use this: env.setu...
26,154,104
I'm trying to run the following Cypher query in neomodel: ``` MATCH (b1:Bal { text:'flame' }), (b2:Bal { text:'candle' }), p = shortestPath((b1)-[*..15]-(b2)) RETURN p ``` which works great on neo4j via the server console. It returns 3 nodes with two relationships connecting. However, when I attempt the following ...
2014/10/02
[ "https://Stackoverflow.com/questions/26154104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4101066/" ]
Ok, I figured it out. I used the tutorial based on @nigel-small's answer. ``` from py2neo import cypher session = cypher.Session("http://localhost:7474") tx = session.create_transaction() tx.append("START beginning=node(3), end=node(16) MATCH p = shortestPath(beginning-[*..500]-end) RETURN p") tx.execute() ...
The error message you provide is specific to neomodel and looks to have been raised as there is not yet any support for inflating py2neo Path objects in neomodel. This should however work fine in raw py2neo as paths are fully supported, so it may be worth trying that again. Py2neo certainly wouldn't raise an error fro...
70,929,680
I have a dataframe ``` import pandas as pd import numpy as np df1 = pd.DataFrame.from_dict( {"col1": [0, 0, 0, 0, 0], "col2": ["15", [10,15,20], "30", [20, 25], np.nan]}) ``` which looks like this | col1 | col2 | | --- | --- | | 0 | "15" | | 0 | [10,15,20] | | 0 | "30" | | 0 | [20,25] | | 0 | NaN | For co...
2022/01/31
[ "https://Stackoverflow.com/questions/70929680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15815734/" ]
Let us try `explode` then `groupby` with `max` ``` out = df1.col2.explode().groupby(level=0).max() Out[208]: 0 15 1 20 2 30 3 25 4 NaN Name: col2, dtype: object ```
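Fully runnable (string entries survive `explode` unchanged, so the result keeps object dtype):

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({"col1": [0, 0, 0, 0, 0],
                    "col2": ["15", [10, 15, 20], "30", [20, 25], np.nan]})

# explode unnests the list rows; groupby(level=0) regroups by original index,
# so max() picks the largest element per original row
out = df1.col2.explode().groupby(level=0).max()
```

Note the string rows come back as strings (`"15"`, not `15`); converting with `pd.to_numeric` afterwards would normalize the dtype if needed.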
``` import pandas as pd import numpy as np df1 = pd.DataFrame.from_dict( {"col1": [0, 0, 0, 0, 0], "col2": ["15", [10,15,20], "30", [20, 25], np.nan]}) res=df1['col2'] lis=[] for i in res: if type(i)==str: i=int(i) if type(i)==list: i=max(i) lis.append(i) else: lis.ap...
70,929,680
I have a dataframe ``` import pandas as pd import numpy as np df1 = pd.DataFrame.from_dict( {"col1": [0, 0, 0, 0, 0], "col2": ["15", [10,15,20], "30", [20, 25], np.nan]}) ``` which looks like this | col1 | col2 | | --- | --- | | 0 | "15" | | 0 | [10,15,20] | | 0 | "30" | | 0 | [20,25] | | 0 | NaN | For co...
2022/01/31
[ "https://Stackoverflow.com/questions/70929680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15815734/" ]
Let us try `explode` then `groupby` with `max` ``` out = df1.col2.explode().groupby(level=0).max() Out[208]: 0 15 1 20 2 30 3 25 4 NaN Name: col2, dtype: object ```
Another approach that might be easier to understand would be using `apply()` with a simple function that returns the max depending on the type. ``` import pandas as pd import numpy as np df1 = pd.DataFrame.from_dict( {"col1": [0, 0, 0, 0, 0], "col2": ["15", [10,15,20], "30", [20, 25], np.nan]}) def get_max(x...
70,929,680
I have a dataframe ``` import pandas as pd import numpy as np df1 = pd.DataFrame.from_dict( {"col1": [0, 0, 0, 0, 0], "col2": ["15", [10,15,20], "30", [20, 25], np.nan]}) ``` which looks like this | col1 | col2 | | --- | --- | | 0 | "15" | | 0 | [10,15,20] | | 0 | "30" | | 0 | [20,25] | | 0 | NaN | For co...
2022/01/31
[ "https://Stackoverflow.com/questions/70929680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15815734/" ]
Another approach that might be easier to understand would be using `apply()` with a simple function that returns the max depending on the type. ``` import pandas as pd import numpy as np df1 = pd.DataFrame.from_dict( {"col1": [0, 0, 0, 0, 0], "col2": ["15", [10,15,20], "30", [20, 25], np.nan]}) def get_max(x...
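A complete version of that approach, with the per-type handling spelled out:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({"col1": [0, 0, 0, 0, 0],
                    "col2": ["15", [10, 15, 20], "30", [20, 25], np.nan]})

def get_max(x):
    if isinstance(x, list):
        return max(x)   # list -> its largest element
    if isinstance(x, str):
        return int(x)   # "15" -> 15
    return x            # NaN (or anything else) passes through

df1["col2"] = df1["col2"].apply(get_max)
```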
``` import pandas as pd import numpy as np df1 = pd.DataFrame.from_dict( {"col1": [0, 0, 0, 0, 0], "col2": ["15", [10,15,20], "30", [20, 25], np.nan]}) res=df1['col2'] lis=[] for i in res: if type(i)==str: i=int(i) if type(i)==list: i=max(i) lis.append(i) else: lis.ap...
28,708,752
I apologize in advance for my ignorance of how Python handles strings. I have a .txt file that is at least 1000 lines long. It looks something like below ``` :dodge 1 6 some description string of unknown length E7 8 another description string 3445 0 oil temp something description voltage over limit etc :ford AF 4 de...
2015/02/25
[ "https://Stackoverflow.com/questions/28708752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2884999/" ]
You simply need to split twice on whitespace and join the string, you don't need a regex for a simple repeating pattern: ``` with open("testDTC.txt") as f: for line in f: if line.strip() and not line.startswith(":"): spl = line.split(None,2) print("{} ;{}".format(" ".join(spl[:2]),s...
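On a single sample line from the file, the reformatting step looks like:

```python
line = "E7 8 another description string"
spl = line.split(None, 2)  # split on whitespace, at most twice
out = "{} ;{}".format(" ".join(spl[:2]), spl[2])
# out == "E7 8 ;another description string"
```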
Based on your example, it seems that in your second column you have a number or numbers separated by spaces, e.g. `8`, `6` followed by some description in the third column which seems not to have any numbers. If this is the case in general, not only for this example, you can use this fact to search for the number separated b...
28,708,752
I apologize in advance for my ignorance of how Python handles strings. I have a .txt file that is at least 1000 lines long. It looks something like below ``` :dodge 1 6 some description string of unknown length E7 8 another description string 3445 0 oil temp something description voltage over limit etc :ford AF 4 de...
2015/02/25
[ "https://Stackoverflow.com/questions/28708752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2884999/" ]
You can do this with a pretty simple algorithm without invoking regular expressions so you can see what's going on. ``` with open('test.txt') as infile: with open('out.txt', 'w') as outfile: for line in infile: if not line or line.startswith(':'): # Blank or : line outfile.wri...
Based on your example, it seems that in your second column you have a number or numbers separated by spaces, e.g. `8`, `6` followed by some description in the third column which seems not to have any numbers. If this is the case in general, not only for this example, you can use this fact to search for the number separated b...
28,708,752
I apologize in advance for my ignorance of how Python handles strings. I have a .txt file that is at least 1000 lines long. It looks something like below ``` :dodge 1 6 some description string of unknown length E7 8 another description string 3445 0 oil temp something description voltage over limit etc :ford AF 4 de...
2015/02/25
[ "https://Stackoverflow.com/questions/28708752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2884999/" ]
You simply need to split twice on whitespace and join the string, you don't need a regex for a simple repeating pattern: ``` with open("testDTC.txt") as f: for line in f: if line.strip() and not line.startswith(":"): spl = line.split(None,2) print("{} ;{}".format(" ".join(spl[:2]),s...
This is what I would do: ``` for line in f: sp = line.split(' ', 2) if len(sp) == 3: line = '%s %s ;%s' % (sp[0], sp[1], sp[2]) ``` (Checking `len(sp) == 3` instead of `' ' in line` avoids an IndexError on lines with fewer than three fields.)
28,708,752
I apologize in advance for my ignorance of how Python handles strings. I have a .txt file that is at least 1000 lines long. It looks something like below ``` :dodge 1 6 some description string of unknown length E7 8 another description string 3445 0 oil temp something description voltage over limit etc :ford AF 4 de...
2015/02/25
[ "https://Stackoverflow.com/questions/28708752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2884999/" ]
You can do this with a pretty simple algorithm without invoking regular expressions so you can see what's going on. ``` with open('test.txt') as infile: with open('out.txt', 'w') as outfile: for line in infile: if not line or line.startswith(':'): # Blank or : line outfile.wri...
This is what I would do: ``` for line in f: sp = line.split(' ', 2) if len(sp) == 3: line = '%s %s ;%s' % (sp[0], sp[1], sp[2]) ``` (Checking `len(sp) == 3` instead of `' ' in line` avoids an IndexError on lines with fewer than three fields.)
28,708,752
I apologize in advance for my ignorance of how Python handles strings. I have a .txt file that is at least 1000 lines long. It looks something like below ``` :dodge 1 6 some description string of unknown length E7 8 another description string 3445 0 oil temp something description voltage over limit etc :ford AF 4 de...
2015/02/25
[ "https://Stackoverflow.com/questions/28708752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2884999/" ]
You simply need to split twice on whitespace and join the string, you don't need a regex for a simple repeating pattern: ``` with open("testDTC.txt") as f: for line in f: if line.strip() and not line.startswith(":"): spl = line.split(None,2) print("{} ;{}".format(" ".join(spl[:2]),s...
You can do this with a pretty simple algorithm without invoking regular expressions so you can see what's going on. ``` with open('test.txt') as infile: with open('out.txt', 'w') as outfile: for line in infile: if not line or line.startswith(':'): # Blank or : line outfile.wri...
28,708,752
I apologize in advance for my ignorance of how Python handles strings. I have a .txt file that is at least 1000 lines long. It looks something like below ``` :dodge 1 6 some description string of unknown length E7 8 another description string 3445 0 oil temp something description voltage over limit etc :ford AF 4 de...
2015/02/25
[ "https://Stackoverflow.com/questions/28708752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2884999/" ]
You simply need to split twice on whitespace and join the string, you don't need a regex for a simple repeating pattern: ``` with open("testDTC.txt") as f: for line in f: if line.strip() and not line.startswith(":"): spl = line.split(None,2) print("{} ;{}".format(" ".join(spl[:2]),s...
Since you only have 1000 lines or so, I think you can get away with reading it all at once with readlines() and using split for each line. If the line has only one element then print it, then call another loop that treats following lines with more than one element and replaces the third [2] element with the concatenat...
28,708,752
I apologize in advance for my ignorance of how Python handles strings. I have a .txt file that is at least 1000 lines long. It looks something like below ``` :dodge 1 6 some description string of unknown length E7 8 another description string 3445 0 oil temp something description voltage over limit etc :ford AF 4 de...
2015/02/25
[ "https://Stackoverflow.com/questions/28708752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2884999/" ]
You can do this with a pretty simple algorithm without invoking regular expressions so you can see what's going on. ``` with open('test.txt') as infile: with open('out.txt', 'w') as outfile: for line in infile: if not line or line.startswith(':'): # Blank or : line outfile.wri...
Since you only have 1000 lines or so, I think you can get away with reading it all at once with readlines() and using split for each line. If the line has only one element then print it, then call another loop that treats following lines with more than one element and replaces the third [2] element with the concatenat...
42,409,365
I am trying to check a website for specific .js files and image files as part of a regular configuration management check. I am using python and selenium. My code is: ``` #!/usr/bin/env python #import modules required for the test to run import time from pyvirtualdisplay import Display from selenium import webdriver ...
2017/02/23
[ "https://Stackoverflow.com/questions/42409365", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7609361/" ]
You need to use ``` for i in page: print(i.get_attribute('src')) ``` This should print the `JavaScript` file name, like `https://www.google-analytics.com/analytics.js`. Also note that some `<script>` tags could contain just `JavaScript` code, not a reference to a remote file. If you want to get this code y...
As you are using PhantomJS, why not use its scripts to capture this data? You can use `netlog.js` to capture all network data loaded for a given page in HAR format. Later use a `python-HAR parser` to list all your .js or img files. Command line: ``` phantomjs --cookies-file=/tmp/foo netlog.js https://google.com...
62,772,454
If given a year-week range, e.g., start_year, start_week = (2019,45) and end_year, end_week = (2020,15), in Python how can I check if the Year-Week of interest is within the above range or not? For example, for Year = 2020 and Week = 5, I should get a 'True'.
2020/07/07
[ "https://Stackoverflow.com/questions/62772454", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6870708/" ]
Assuming all Year-Week pairs are well-formed (so there's no such thing as `(2019, 74)`), you can just check with: ``` start_year_week = (2019, 45) end_year_week = (2020, 15) under_test_year_week = (2020, 5) in_range = start_year_week <= under_test_year_week < end_year_week # True ``` Python does tuple comparison by f...
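Wrapped in a small helper, the whole check stays one comparison:

```python
start_year_week = (2019, 45)
end_year_week = (2020, 15)

def in_range(year_week):
    # Tuples compare element-wise: years compare first, weeks break ties
    return start_year_week <= year_week < end_year_week

result = in_range((2020, 5))
```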
You can parse year and week to a `datetime` object. If you do the same with your test-year /-week, you can use comparison operators to see if it falls within the range. ``` from datetime import datetime start_year, start_week = (2019, 45) end_year, end_week = (2020, 15) # start date, beginning of week date0 = datet...
62,772,454
If given a year-week range, e.g., start_year, start_week = (2019,45) and end_year, end_week = (2020,15), in Python how can I check if the Year-Week of interest is within the above range or not? For example, for Year = 2020 and Week = 5, I should get a 'True'.
2020/07/07
[ "https://Stackoverflow.com/questions/62772454", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6870708/" ]
Assuming all Year-Week pairs are well-formed (so there's no such thing as `(2019, 74)`), you can just check with: ``` start_year_week = (2019, 45) end_year_week = (2020, 15) under_test_year_week = (2020, 5) in_range = start_year_week <= under_test_year_week < end_year_week # True ``` Python does tuple comparison by f...
``` start = (2019, 45) end = (2020, 15) def isin(q): if end[0] > q[0] > start[0]: return True elif end[0] == q[0]: if end[1] >= q[1]: return True else: return False elif q[0] == start[0]: if q[1] >= start[1]: return True else: ...
62,772,454
If given a year-week range e.g, start\_year, start\_week = (2019,45) and end\_year, end\_week = (2020,15) In python how can I check if Year-Week of interest is within the above range of not? For example, for Year = 2020 and Week = 5, I should get a 'True'.
2020/07/07
[ "https://Stackoverflow.com/questions/62772454", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6870708/" ]
You can parse year and week to a `datetime` object. If you do the same with your test-year /-week, you can use comparison operators to see if it falls within the range. ``` from datetime import datetime start_year, start_week = (2019, 45) end_year, end_week = (2020, 15) # start date, beginning of week date0 = datet...
``` start = (2019, 45) end = (2020, 15) def isin(q): if end[0] > q[0] > start[0]: return True elif end[0] == q[0]: if end[1] >= q[1]: return True else: return False elif q[0] == start[0]: if q[1] >= start[1]: return True else: ...
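As a variation on the datetime-based answer, `date.fromisocalendar` (available since Python 3.8) converts an ISO year/week directly to a date, which makes the comparison explicit; this is a sketch, not the answer's exact code:

```python
from datetime import date

# ISO year / week / weekday -> a concrete date (Python 3.8+)
start = date.fromisocalendar(2019, 45, 1)
end = date.fromisocalendar(2020, 15, 1)
probe = date.fromisocalendar(2020, 5, 1)

print(start <= probe <= end)  # True
```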
45,403,597
trying to deploy my app to uwsgi server my settings file: ``` STATIC_ROOT = "/home/root/djangoApp/staticRoot/" STATIC_URL = '/static/' STATICFILES_DIRS = [ os.path.join(BASE_DIR, "static"), '/home/root/djangoApp/static/', ] ``` and url file: ``` urlpatterns = [ #urls ] + static(settings.STATIC_URL, d...
2017/07/30
[ "https://Stackoverflow.com/questions/45403597", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8375888/" ]
The two paths you have in STATICFILES\_DIRS are the same. So Django copies the files from one of them, then goes on to the second and tries to copy them again, only to see the files already exist. Remove one of those entries, preferably the second.
Do you have more than one application? If so, you should put each file in a subdirectory with a unique name (like the app name, for example). collectstatic collects files from all the /static/ subdirectories, and if there is a duplication, it throws this error.
72,664,087
I'm using python3 tkinter to build a small GUI on Linux Centos I have my environment set up with all the dependencies installed (cython, numpy, panda, etc) When I go to install tkinter ``` pip3 install tk $ python3 Python 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux Type "he...
2022/06/17
[ "https://Stackoverflow.com/questions/72664087", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11760778/" ]
> > Why is 'pip install tk' not being recognized as a valid installation of tkinter but 'sudo yum install python3-tkinter' works? > > > Because `pip install tk` installs an old package called tensorkit, not tkinter. You can't install tkinter with pip.
I don't know if CentOS uses apt, but you can try first uninstalling tkinter with pip and then using apt to install it: ``` sudo apt-get install python3-tk ```
73,584,455
I am trying to create a diverging dot plot with python and I am using seaborn relplot to do the small multiples with one of the columns. The datasouce is MakeoverMonday 2018w18: [MOM2018w48](https://data.world/makeovermonday/2018w48) I got this far with this code: ``` sns.set_style("whitegrid") g=sns.relplot(x=cost ...
2022/09/02
[ "https://Stackoverflow.com/questions/73584455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3433875/" ]
Like most query interfaces, the `Query()` function can only execute one SQL statement at a time. MySQL's prepared statements don't work with multi-query. You could solve this by executing the `SET` statement in one call, then the `SELECT` in a second call. But you'd have to take care to ensure they are executed on the...
Unless drivers implement a special interface, the query is prepared on the server first before execution. Bindvars are therefore database specific: * MySQL: uses the ? variant shown above * PostgreSQL: uses an enumerated $1, $2, etc bindvar syntax * SQLite: accepts both ? and $1 syntax * Oracle: uses a :name syntax * ...
73,584,455
I am trying to create a diverging dot plot with python and I am using seaborn relplot to do the small multiples with one of the columns. The datasouce is MakeoverMonday 2018w18: [MOM2018w48](https://data.world/makeovermonday/2018w48) I got this far with this code: ``` sns.set_style("whitegrid") g=sns.relplot(x=cost ...
2022/09/02
[ "https://Stackoverflow.com/questions/73584455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3433875/" ]
For those interested I've solved my issue with a few updates. 1. There are settings on the DSN when connecting `?...&multiStatements=true&interpolateParams=true` 2. After adding the above I started getting a new error regarding the collation (`Illegal mix of collations (utf8mb4_0900_ai_ci,IMPLICIT) and (utf8mb4_genera...
Like most query interfaces, the `Query()` function can only execute one SQL statement at a time. MySQL's prepared statements don't work with multi-query. You could solve this by executing the `SET` statement in one call, then the `SELECT` in a second call. But you'd have to take care to ensure they are executed on the...
73,584,455
I am trying to create a diverging dot plot with python and I am using seaborn relplot to do the small multiples with one of the columns. The datasouce is MakeoverMonday 2018w18: [MOM2018w48](https://data.world/makeovermonday/2018w48) I got this far with this code: ``` sns.set_style("whitegrid") g=sns.relplot(x=cost ...
2022/09/02
[ "https://Stackoverflow.com/questions/73584455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3433875/" ]
For those interested I've solved my issue with a few updates. 1. There are settings on the DSN when connecting `?...&multiStatements=true&interpolateParams=true` 2. After adding the above I started getting a new error regarding the collation (`Illegal mix of collations (utf8mb4_0900_ai_ci,IMPLICIT) and (utf8mb4_genera...
Unless drivers implement a special interface, the query is prepared on the server first before execution. Bindvars are therefore database specific: * MySQL: uses the ? variant shown above * PostgreSQL: uses an enumerated $1, $2, etc bindvar syntax * SQLite: accepts both ? and $1 syntax * Oracle: uses a :name syntax * ...
40,322,718
I'm new to getting data using API and Python. I want to pull data from my trading platform. They've provided the following instructions: <http://www.questrade.com/api/documentation/getting-started> I'm ok up to step 4 and have an access token. I need help with step 5. How do I translate this request: ``` GET /v1/a...
2016/10/29
[ "https://Stackoverflow.com/questions/40322718", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4838024/" ]
As you point out, after step 4 you should have received an access token as follows: ``` { “access_token”: ”C3lTUKuNQrAAmSD/TPjuV/HI7aNrAwDp”, “token_type”: ”Bearer”, “expires_in”: 300, “refresh_token”: ”aSBe7wAAdx88QTbwut0tiu3SYic3ox8F”, “api_server”: ”https://api01.iq.questrade.com” } ``` To mak...
Improving a bit on Peter's reply (Thank you Peter!) start by using the token you got from the QT website to obtain an access\_token and get an api\_server assigned to handle your requests. ``` # replace XXXXXXXX with the token given to you in your questrade account import requests r = requests.get('https://login.que...
51,750,967
[![enter image description here](https://i.stack.imgur.com/qpDFX.jpg)](https://i.stack.imgur.com/qpDFX.jpg)I'm trying to control a relay board (USB RLY08) using a section of python code I found online (<https://github.com/jkesanen/usbrly08/blob/master/usbrly08.py>). It is currently returning an error which I'm not sure...
2018/08/08
[ "https://Stackoverflow.com/questions/51750967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4153219/" ]
You are getting this error probably because the **pyserial** module is not installed on your system. Try installing the pyserial package from the PyPI index using the command below: ``` python -m pip install pyserial ```
you need to install pyserial e.g. with ``` pip install pyserial ```
51,750,967
[![enter image description here](https://i.stack.imgur.com/qpDFX.jpg)](https://i.stack.imgur.com/qpDFX.jpg)I'm trying to control a relay board (USB RLY08) using a section of python code I found online (<https://github.com/jkesanen/usbrly08/blob/master/usbrly08.py>). It is currently returning an error which I'm not sure...
2018/08/08
[ "https://Stackoverflow.com/questions/51750967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4153219/" ]
You are getting this error probably because the **pyserial** module is not installed on your system. Try installing the pyserial package from the PyPI index using the command below: ``` python -m pip install pyserial ```
You need to [install pyserial](https://pythonhosted.org/pyserial/pyserial.html#installation).
30,316,639
I am looking for a way to calculate a square root with an arbitrary precision (something like 50 digits after the dot). In python, it is easily accessible with [Decimal](https://docs.python.org/2/library/decimal.html): ``` from decimal import * getcontext().prec = 50 Decimal(2).sqrt() # and here you go my 50 digits ...
2015/05/19
[ "https://Stackoverflow.com/questions/30316639", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1090562/" ]
This is my own implementation of square root calculation. While waiting for answers, I decided to give [methods of computing square roots](http://en.wikipedia.org/wiki/Methods_of_computing_square_roots) a try. It has a whole bunch of methods but at the very end I found a link to a [Square roots by subtraction](http://w...
Adding precision ---------------- There is probably a solution in go but as I don't code in go, here is a general solution. For instance if your selected language doesn't provide a solution to handle the precision of floats (already happened to me): If your language provides you N digits after the dot, you can, in t...
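The `Decimal` approach already shown in the question can be verified in a few lines (the tolerance below is an illustrative choice):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50          # 50 significant digits
root = Decimal(2).sqrt()
print(root)

# sanity check: squaring should recover 2 to within the working precision
print(abs(root * root - 2) < Decimal("1e-45"))  # True
```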
51,395,535
I'm trying to get my head around \*\*kwargs in python 3 and am running into a strange error. Based on [this post](https://stackoverflow.com/questions/1769403/understanding-kwargs-in-python) on the matter, I tried to create my own version to confirm it worked for me. ``` table = {'Person A':'Age A','Person B':'Age B','...
2018/07/18
[ "https://Stackoverflow.com/questions/51395535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8032508/" ]
Call the `kw` function with `kw(**table)`. Python 3 Doc: [link](https://docs.python.org/3.2/glossary.html)
There's no need to make `kwargs` a variable keyword argument here. By specifying `kwargs` with `**` you are defining the function with a variable number of keyword arguments but no positional argument, hence the error you're seeing. Instead, simply define your `kw` function with: ``` def kw(kwargs): ```
51,395,535
I'm trying to get my head around \*\*kwargs in python 3 and am running into a strange error. Based on [this post](https://stackoverflow.com/questions/1769403/understanding-kwargs-in-python) on the matter, I tried to create my own version to confirm it worked for me. ``` table = {'Person A':'Age A','Person B':'Age B','...
2018/07/18
[ "https://Stackoverflow.com/questions/51395535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8032508/" ]
Call the `kw` function with `kw(**table)`. Python 3 Doc: [link](https://docs.python.org/3.2/glossary.html)
Writing a separate answer because I do not have enough reputation to comment. There is another error in the original post, this time in the function definition Once you have "opened" a dict with the \*\* operator in arguments, the dict does not exist any more inside the function. So in the function: ``` def kw(**kwa...
51,395,535
I'm trying to get my head around \*\*kwargs in python 3 and am running into a strange error. Based on [this post](https://stackoverflow.com/questions/1769403/understanding-kwargs-in-python) on the matter, I tried to create my own version to confirm it worked for me. ``` table = {'Person A':'Age A','Person B':'Age B','...
2018/07/18
[ "https://Stackoverflow.com/questions/51395535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8032508/" ]
Call the `kw` function with `kw(**table)`. Python 3 Doc: [link](https://docs.python.org/3.2/glossary.html)
```
table = {'Bob':'Old','Franny':'Less Old, Still a little old though','Ribbit':'Only slightly old'}

def kw(**kwargs):
    for i, j in kwargs.items():
        print(i, 'is ', j)

# Put the ** before the table, like this:
kw(**table)
```
51,395,535
I'm trying to get my head around \*\*kwargs in python 3 and am running into a strange error. Based on [this post](https://stackoverflow.com/questions/1769403/understanding-kwargs-in-python) on the matter, I tried to create my own version to confirm it worked for me. ``` table = {'Person A':'Age A','Person B':'Age B','...
2018/07/18
[ "https://Stackoverflow.com/questions/51395535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8032508/" ]
There's no need to make `kwargs` a variable keyword argument here. By specifying `kwargs` with `**` you are defining the function with a variable number of keyword arguments but no positional argument, hence the error you're seeing. Instead, simply define your `kw` function with: ``` def kw(kwargs): ```
Writing a separate answer because I do not have enough reputation to comment. There is another error in the original post, this time in the function definition Once you have "opened" a dict with the \*\* operator in arguments, the dict does not exist any more inside the function. So in the function: ``` def kw(**kwa...
51,395,535
I'm trying to get my head around \*\*kwargs in python 3 and am running into a strange error. Based on [this post](https://stackoverflow.com/questions/1769403/understanding-kwargs-in-python) on the matter, I tried to create my own version to confirm it worked for me. ``` table = {'Person A':'Age A','Person B':'Age B','...
2018/07/18
[ "https://Stackoverflow.com/questions/51395535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8032508/" ]
There's no need to make `kwargs` a variable keyword argument here. By specifying `kwargs` with `**` you are defining the function with a variable number of keyword arguments but no positional argument, hence the error you're seeing. Instead, simply define your `kw` function with: ``` def kw(kwargs): ```
```
table = {'Bob':'Old','Franny':'Less Old, Still a little old though','Ribbit':'Only slightly old'}

def kw(**kwargs):
    for i, j in kwargs.items():
        print(i, 'is ', j)

# Put the ** before the table, like this:
kw(**table)
```
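Putting the corrections from the answers above together, a minimal self-contained version (names and values are made up):

```python
table = {'Alice': '30', 'Bob': '25'}

def kw(**kwargs):
    # inside the function, kwargs is an ordinary dict
    return [f"{name} is {age}" for name, age in kwargs.items()]

# the ** at the call site unpacks the dict into keyword arguments
print(kw(**table))  # ['Alice is 30', 'Bob is 25']
```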
66,204,201
I'm trying to install pymatgen in Google colab via the following command: ``` !pip install pymatgen ``` This throws the following error: ``` Collecting pymatgen Using cached https://files.pythonhosted.org/packages/06/4f/9dc98ea1309012eafe518e32e91d2a55686341f3f4c1cdc19f1f64cb33d0/pymatgen-2021.2.14.tar.gz ...
2021/02/15
[ "https://Stackoverflow.com/questions/66204201", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15211978/" ]
You will need to validate within the addNewUser method, and then `throw` an exception when your validation hits. Example: ```java if(username.length() > 10) { throw new Exception("Username is too long"); } ``` It will then be caught by your try-catch statement.
There are a few things to consider here. With a try-catch block you can manage exceptions that occur in your program flow. When writing a program it's a good idea to make it as clear as possible so that other people reading it later can understand it better. To that end, consider refactoring the methods. For example ...
66,204,201
I'm trying to install pymatgen in Google colab via the following command: ``` !pip install pymatgen ``` This throws the following error: ``` Collecting pymatgen Using cached https://files.pythonhosted.org/packages/06/4f/9dc98ea1309012eafe518e32e91d2a55686341f3f4c1cdc19f1f64cb33d0/pymatgen-2021.2.14.tar.gz ...
2021/02/15
[ "https://Stackoverflow.com/questions/66204201", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15211978/" ]
You will need to validate within the addNewUser method, and then `throw` an exception when your validation hits. Example: ```java if(username.length() > 10) { throw new Exception("Username is too long"); } ``` It will then be caught by your try-catch statement.
In order to fix your problem, you should just provide exception messages. ```java public void addNewUser(String username, String password) throws exceptions.InvalidUsernameException, exceptions.InvalidPasswordException, exceptions.DuplicateUserException { // Check if the username has a correct format`enter co...
16,536,071
I was working on these functions (see [this](https://stackoverflow.com/questions/16525224/how-to-breakup-a-list-of-list-in-a-given-way-in-python)): ``` def removeFromList(elementsToRemove): def closure(list): for element in elementsToRemove: if list[0] != element: return ...
2013/05/14
[ "https://Stackoverflow.com/questions/16536071", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2338725/" ]
Use `copy.deepcopy`: ``` from copy import deepcopy new_list = deepcopy([[1], [1, 2], [1, 2, 3]]) ``` Demo: ``` >>> lis = [[1], [1, 2], [1, 2, 3]] >>> new_lis = lis[:] # creates a shallow copy >>> [id(x)==id(y) for x,y in zip(lis,new_lis)] [True, True, True] #inner lists are st...
both with `list(my_list)` and `my_list[:]` you get a shallow copy of the list. ``` id(copy_my_list[0]) == id(my_list[0]) # True ``` so use `copy.deepcopy` to avoid your problem: ``` copy_my_list = copy.deepcopy(my_list) id(copy_my_list[0]) == id(my_list[0]) # False ```
16,536,071
I was working on these functions (see [this](https://stackoverflow.com/questions/16525224/how-to-breakup-a-list-of-list-in-a-given-way-in-python)): ``` def removeFromList(elementsToRemove): def closure(list): for element in elementsToRemove: if list[0] != element: return ...
2013/05/14
[ "https://Stackoverflow.com/questions/16536071", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2338725/" ]
Use `copy.deepcopy`: ``` from copy import deepcopy new_list = deepcopy([[1], [1, 2], [1, 2, 3]]) ``` Demo: ``` >>> lis = [[1], [1, 2], [1, 2, 3]] >>> new_lis = lis[:] # creates a shallow copy >>> [id(x)==id(y) for x,y in zip(lis,new_lis)] [True, True, True] #inner lists are st...
Use a tuple. `my_list = ([1], [1, 2], [1, 2, 3])` `my_list` is now immutable, and anytime you want a mutable copy you can just use `list(my_list)` ``` >>> my_list = ([1], [1, 2], [1, 2, 3]) >>> def mutate(aList): aList.pop() return aList >>> mutate(list(my_list)) [[1], [1, 2]] >>> my_list ([1], [1, ...
16,536,071
I was working on these functions (see [this](https://stackoverflow.com/questions/16525224/how-to-breakup-a-list-of-list-in-a-given-way-in-python)): ``` def removeFromList(elementsToRemove): def closure(list): for element in elementsToRemove: if list[0] != element: return ...
2013/05/14
[ "https://Stackoverflow.com/questions/16536071", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2338725/" ]
both with `list(my_list)` and `my_list[:]` you get a shallow copy of the list. ``` id(copy_my_list[0]) == id(my_list[0]) # True ``` so use `copy.deepcopy` to avoid your problem: ``` copy_my_list = copy.deepcopy(my_list) id(copy_my_list[0]) == id(my_list[0]) # False ```
Use a tuple. `my_list = ([1], [1, 2], [1, 2, 3])` `my_list` is now immutable, and anytime you want a mutable copy you can just use `list(my_list)` ``` >>> my_list = ([1], [1, 2], [1, 2, 3]) >>> def mutate(aList): aList.pop() return aList >>> mutate(list(my_list)) [[1], [1, 2]] >>> my_list ([1], [1, ...
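A short demonstration of the shallow-vs-deep difference the answers describe (illustrative data):

```python
from copy import deepcopy

lis = [[1], [1, 2], [1, 2, 3]]
shallow = lis[:]        # new outer list, but the same inner lists
deep = deepcopy(lis)    # inner lists are copied as well

lis[0].append(99)
print(shallow[0])  # [1, 99]  (the shallow copy sees the mutation)
print(deep[0])     # [1]      (the deep copy does not)
```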
29,813,423
the below python gui code i am trying to select the values from the drop down menu buttons(graph and density) and trying to pass them as command line arguments to os.system command in the readfile() function as shown below but I am having a problem in passing the values I have selected from the drop down menu to os.sys...
2015/04/23
[ "https://Stackoverflow.com/questions/29813423", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2014111/" ]
It is easy to implement with [functools.partial](https://docs.python.org/2/library/functools.html#functools.partial) - apply needed value to your function for each button. Here is a sample: ``` from functools import partial import Tkinter as tk BTNLIST = [0.0, 0.1, 0.2] def btn_clicked(payload=None): """Just pri...
The way you have it, `graph` and `density` are local variables to `graphselected()` and `buttonClicked()`. Therefore, `readfile()` can never access these variables unless you declare them as global in all three functions. Then you want to format a string to incorporate the values in `graph` and `density`. You can do t...
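The `functools.partial` idea from the first answer, stripped of the Tkinter parts so it runs standalone (`btn_clicked` is a hypothetical handler):

```python
from functools import partial

def btn_clicked(payload=None):
    """Stand-in for a button callback (hypothetical handler)."""
    return f"clicked with {payload}"

# one zero-argument callback per button, each with its value baked in
callbacks = [partial(btn_clicked, payload=v) for v in (0.0, 0.1, 0.2)]
print(callbacks[1]())  # clicked with 0.1
```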
6,958,833
I'm trying to insert a string that was received as an argument into a sqlite db using python: ``` def addUser(self, name): cursor=self.conn.cursor() t = (name) cursor.execute("INSERT INTO users ( unique_key, name, is_online, translate) VALUES (NULL, ?, 1, 0);", t) s...
2011/08/05
[ "https://Stackoverflow.com/questions/6958833", "https://Stackoverflow.com", "https://Stackoverflow.com/users/752462/" ]
You need this: ``` t = (name,) ``` to make a single-element tuple. Remember, it's **commas** that make a tuple, not brackets!
Your `t` variable isn't a tuple; I think it is a 7-character string. To make a tuple, don't forget to put a trailing comma: ``` t = (name,) ```
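A self-contained sketch of the one-element-tuple fix, using an in-memory SQLite database for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

name = "testing"
t = (name,)  # the trailing comma makes a one-element tuple, not a string
conn.execute("INSERT INTO users (name) VALUES (?)", t)
print(conn.execute("SELECT name FROM users").fetchone())  # ('testing',)
```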
41,196,390
I have my `index.py` in `/var/www/cgi-bin` My `index.py` looks like this : ``` #!/usr/bin/python print "Content-type:text/html\r\n\r\n" print '<html>' print '<head>' print '<title>Hello Word - First CGI Program</title>' print '</head>' print '<body>' print '<h2>Hello Word! This is my first CGI program</h2>' print '<...
2016/12/17
[ "https://Stackoverflow.com/questions/41196390", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3405554/" ]
Try this: enable CGI with `a2enmod cgid` and `chmod a+x /var/www/cgi-bin/index.py`, but check whether the `cgi-bin` directory owner is `www-data`. You need a `Directory` definition in every `VirtualHost`. Sometimes a `restart` is required to kill all `apache` threads. ``` DocumentRoot /var/www/htdocs #A include B if owner are same !...
Use this in your CGI script: ``` import cgi; import cgitb; cgitb.enable() ```
19,090,032
I need to scrape career pages of multiple companies(with their permission). Important Factors in deciding what do I use 1. I would be scraping around 2000 pages daily, so need a decently fast solution 2. Some of these pages populate data via ajax after page is loaded. 3. My webstack is Ruby/Rails with MySql etc. 4. ...
2013/09/30
[ "https://Stackoverflow.com/questions/19090032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1549934/" ]
The real benefit of closures and higher-order functions is that they can represent what the programmer sometimes has in mind. If you as the programmer find that what you have in mind is a piece of code, a function, an instruction on how to compute something (or do something), then you should use a closure for this. If...
With a closure, one can save the `self` variable. In particular, when there are many variables to be passed, a closure could be more readable. ``` class Incr: """a class that increments internal variable""" def __init__(self, i): self._i = i def __call__(self): self._i = (self._i + 1) % 10 ...
19,090,032
I need to scrape career pages of multiple companies(with their permission). Important Factors in deciding what do I use 1. I would be scraping around 2000 pages daily, so need a decently fast solution 2. Some of these pages populate data via ajax after page is loaded. 3. My webstack is Ruby/Rails with MySql etc. 4. ...
2013/09/30
[ "https://Stackoverflow.com/questions/19090032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1549934/" ]
The real benefit of closures and higher-order functions is that they can represent what the programmer sometimes has in mind. If you as the programmer find that what you have in mind is a piece of code, a function, an instruction on how to compute something (or do something), then you should use a closure for this. If...
A closure binds data to functions in a weird way. On the other side, data classes try to separate data from functions. To me both of them are bad design. Never use them. OO (a class) is a much more convenient and natural way.
43,190,221
I have a training file in the following format: > > 0.086, 0.4343, 0.4212, ...., class1 > > > 0.086, 0.4343, 0.4212, ...., class2 > > > 0.086, 0.4343, 0.4212, ...., class5 > > > Where, each row is a one-dimensional vector and the last column is the class in which that vector represents. We can see that a vect...
2017/04/03
[ "https://Stackoverflow.com/questions/43190221", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6363322/" ]
This is a pretty straightforward setup. First thing to know: Your labels need to be in "one hot encoding" format. That means, if you have 5 classes, class 1 is represented by the vector [1,0,0,0,0], class 2 by the vector [0,1,0,0,0], and so on. This is standard. Second, you mention that you want multi-class classifi...
From what I understand you have a multi-label problem, meaning that a sample can belong to more than one class. Take a look at [sigmoid\_cross\_entropy\_with\_logits](https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits) and use that as your loss function. You do not need to use one h...
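The one-hot encoding described in the first answer can be sketched without any framework (pure-Python illustration, 1-based class labels assumed):

```python
def one_hot(label, num_classes=5):
    """Class k (1-based) -> a vector with a single 1 at position k-1."""
    vec = [0] * num_classes
    vec[label - 1] = 1
    return vec

print(one_hot(1))  # [1, 0, 0, 0, 0]
print(one_hot(2))  # [0, 1, 0, 0, 0]
```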
69,262,618
So I just watched a tutorial that the author didn't need to `import sklearn` when using `predict` function of pickled model in anaconda environment (sklearn installed). I have tried to reproduce the minimal version of it in Google Colab. If you have a pickled-sklearn-model, the code below works in Colab (sklearn insta...
2021/09/21
[ "https://Stackoverflow.com/questions/69262618", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147347/" ]
There's a few questions being asked here, so let's go through them one by one: > > So, how does it work? as far as I understand pickle doesn't depend on scikit-learn. > > > There is nothing particular to scikit-learn going on here. Pickle will exhibit this behaviour for any module. Here's an example with Numpy: ...
*When the model was first pickled*, you had sklearn installed. The pickle file depends on sklearn for its structure, as the class of the object it represents is a sklearn class, and `pickle` needs to know the details of that class’s structure in order to unpickle the object. When you try to unpickle the file without s...
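The point that pickle stores only a *reference* to a class's module and name can be shown with any standard-library class; no sklearn is needed for the illustration:

```python
import pickle
from collections import OrderedDict

blob = pickle.dumps(OrderedDict(a=1))
# the pickle stream records the defining module and class name as strings,
# not the class's code, so unpickling must import that module
print(b"collections" in blob and b"OrderedDict" in blob)  # True
```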
42,913,788
I'm trying to ask a question on python, so that if the person gets it right, they can move onto the next question. If they get it wrong, they have 3 or so attempts at getting it right, before the quiz moves onto the next question. I thought I solved it with the below program, however this just asks the user make anothe...
2017/03/20
[ "https://Stackoverflow.com/questions/42913788", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7735015/" ]
You're stuck in the loop. So put ``` counter = 3 ``` after ``` score += 1 ``` To get out of the loop. ``` score = 0 counter = 0 while counter<3: answer = input("Make your choice >>>> ") if answer == "c": print("Correct!") score += 1 counter = 3 else: print("That is...
You're stuck in the loop; a cleaner way of solving this is using the `break` statement, as in: ``` score = 0 counter = 0 while counter < 3: answer = input("Make your choice >>>> ") if answer == "c": print ("Correct!") score += 1 break else: print("That is incorrect. Try Again...
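Both answers amount to leaving the loop as soon as the answer is correct; a condensed, testable sketch (the answer letter "c" and the three attempts are taken from the question, `quiz` is a made-up helper):

```python
def quiz(answers, attempts=3):
    """Score one question: up to `attempts` tries, stop early when correct."""
    score = 0
    for answer in answers[:attempts]:
        if answer == "c":
            score += 1
            break  # leave the loop as soon as the answer is right
    return score

print(quiz(["a", "c"]))       # 1  (correct on the second try)
print(quiz(["a", "b", "d"]))  # 0  (three wrong attempts, loop ends)
```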
66,157,729
I have some info store in a MySQL database, something like: `AHmmgZq\n/+AH+G4` We get that using an API, so when I read it in my python I get: `AHmmgZq\\n/+AH+G4` The backslash is doubled! Now I need to put that into a JSON file, how can I remove the extra backslash? **EDIT:** let me show my full code: ``` json_dic...
2021/02/11
[ "https://Stackoverflow.com/questions/66157729", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4663446/" ]
Turns out that the badge appears once you open a TeX file. I thought you'd first create a TeX project, then the file.
As you already figured out, the badge appears once you open a TeX file. Keep also in mind that you have to install LaTeX, or update LaTex. I say so because personally I was trying to use `\tableofcontents` but the table wouldn't be generated until the moment I installed texlive using homebrew (`brew install texlive`)
39,305,286
According to [documentation](https://docs.python.org/3.4/c-api/capsule.html?highlight=capsule), the third argument to `PyCapsule_New()` can specify a destructor, which I assume should be called when the capsule is destroyed. ``` void mapDestroy(PyObject *capsule) { lash_map_simple_t *map; fprintf(stderr, "Ent...
2016/09/03
[ "https://Stackoverflow.com/questions/39305286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3333488/" ]
The code above has a reference leak: `pymap = PyCapsule_New()` returns a new object (its refcount is 1), but `Py_BuildValue("O", pymap)` creates a new reference to the same object, and its refcount is now 2. Just `return pymap;`.
`Py_BuildValue("O", thingy)` will just increment the refcount for `thingy` and return it – the docs say that it returns a “new reference” but that is not quite true when you pass it an existing `PyObject*`. If these functions of yours – the ones in your question, that is – are all defined in the same translation unit...
48,775,587
I am trying to learn python through some basic exercises with my own online store. I have a list of parts that are in-transit to us that we have already ordered, and I have a list of parts that we are currently out of stock of. I want to be able to send a list to the supplier of what we need - but I do not want to crea...
2018/02/13
[ "https://Stackoverflow.com/questions/48775587", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6713690/" ]
``` for part in onorder: if (part in onorder) == False ... ``` This does not make sense. Since you are iterating over exactly every element of `onorder`, you will never get a `part` not in `onorder`. Therefore, it is not a miracle that the print statement is not being executed.
Doh! Appropriate code was ``` for part in outofstock: if (part not in onorder): print (part) ``` This way it prints my out of stock items which I need to order, unless they were already on order. I can't believe I overly complicated this for no good reason. Thank you so much for pointing out where I had gone w...
48,775,587
I am trying to learn python through some basic exercises with my own online store. I have a list of parts that are in-transit to us that we have already ordered, and I have a list of parts that we are currently out of stock of. I want to be able to send a list to the supplier of what we need - but I do not want to crea...
2018/02/13
[ "https://Stackoverflow.com/questions/48775587", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6713690/" ]
You're looping over the wrong list. To find items in `outofstock` but not in `onorder`, loop over `outofstock`: ``` for part in outofstock: if part not in onorder: print(part) ``` Simpler would be to convert both lists to sets, and compute the difference: ``` print(set(outofstock) - set(onorder)) ```
Doh! Appropriate code was ``` for part in outofstock: if (part not in onorder): print (part) ``` This way it prints my out of stock items which I need to order, unless they were already on order. I can't believe I overly complicated this for no good reason. Thank you so much for pointing out where I had gone w...
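A minimal version of the fix both answers converge on, with illustrative part names:

```python
outofstock = ["bolt", "nut", "washer", "gear"]
onorder = ["nut", "gear"]

# parts we are out of that are not already on order
to_order = [part for part in outofstock if part not in onorder]
print(to_order)  # ['bolt', 'washer']

# equivalent with sets, if ordering does not matter
print(set(outofstock) - set(onorder))  # {'bolt', 'washer'} (set order varies)
```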
2,286,633
I have a basic grasp of XML and python and have been using minidom with some success. I have run into a situation where I am unable to get the values I want from an XML file. Here is the basic structure of the pre-existing file. ``` <localization> <b n="Stats"> <l k="SomeStat1"> <v>10</v> ...
2010/02/18
[ "https://Stackoverflow.com/questions/2286633", "https://Stackoverflow.com", "https://Stackoverflow.com/users/224476/" ]
``` #!/usr/bin/python from xml.dom.minidom import parseString xml = parseString("""<localization> <b n="Stats"> <l k="SomeStat1"> <v>10</v> </l> <l k="SomeStat2"> <v>6</v> </l> </b> <b n="Levels"> <l k="Level1"> <v>Beginner Level<...
``` level = "Level"+raw_input("Enter level number: ") content= open("xmlfile").read() data= content.split("</localization>") for item in data: if "localization" in item: s = item.split("</b>") for i in s: if """<b n="Levels">""" in i: for c in i.split("</l>"): ...
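Since the minidom answer above is truncated, here is a minimal runnable sketch of the same lookup. The "Intermediate Level" value is invented, because the sample XML in the question cuts off:

```python
from xml.dom.minidom import parseString

# Reconstruction of the question's XML; the Level2 text is hypothetical.
xml = """<localization>
  <b n="Stats">
    <l k="SomeStat1"><v>10</v></l>
    <l k="SomeStat2"><v>6</v></l>
  </b>
  <b n="Levels">
    <l k="Level1"><v>Beginner Level</v></l>
    <l k="Level2"><v>Intermediate Level</v></l>
  </b>
</localization>"""

def get_value(doc_text, key):
    """Return the <v> text of the <l> element whose k attribute matches key."""
    doc = parseString(doc_text)
    for l_node in doc.getElementsByTagName("l"):
        if l_node.getAttribute("k") == key:
            v_node = l_node.getElementsByTagName("v")[0]
            return v_node.firstChild.data
    return None

print(get_value(xml, "Level2"))
```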
2,286,633
I have a basic grasp of XML and python and have been using minidom with some success. I have run into a situation where I am unable to get the values I want from an XML file. Here is the basic structure of the pre-existing file. ``` <localization> <b n="Stats"> <l k="SomeStat1"> <v>10</v> ...
2010/02/18
[ "https://Stackoverflow.com/questions/2286633", "https://Stackoverflow.com", "https://Stackoverflow.com/users/224476/" ]
If you really only care about searching for an `<l>` tag with a specific "k" attribute and then getting its `<v>` tag (that's how I understood your question), you could do it with DOM: ``` from xml.dom.minidom import parseString xmlDoc = parseString("""<document goes here>""") lNodesWithLevel2 = [lNode for lNode in x...
``` level = "Level"+raw_input("Enter level number: ") content= open("xmlfile").read() data= content.split("</localization>") for item in data: if "localization" in item: s = item.split("</b>") for i in s: if """<b n="Levels">""" in i: for c in i.split("</l>"): ...
2,286,633
I have a basic grasp of XML and python and have been using minidom with some success. I have run into a situation where I am unable to get the values I want from an XML file. Here is the basic structure of the pre-existing file. ``` <localization> <b n="Stats"> <l k="SomeStat1"> <v>10</v> ...
2010/02/18
[ "https://Stackoverflow.com/questions/2286633", "https://Stackoverflow.com", "https://Stackoverflow.com/users/224476/" ]
You might consider using XPath, a language for addressing parts of an XML document. Here's the answer using `lxml.etree` and its support for `xpath`. ``` >>> data = """ ... <localization> ... <b n="Stats"> ... <l k="SomeStat1"> ... <v>10</v> ... </l> ... <l k="SomeStat2"> ... ...
``` level = "Level"+raw_input("Enter level number: ") content= open("xmlfile").read() data= content.split("</localization>") for item in data: if "localization" in item: s = item.split("</b>") for i in s: if """<b n="Levels">""" in i: for c in i.split("</l>"): ...
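lxml is a third-party package; if it is not available, the standard library's `xml.etree.ElementTree` supports the limited XPath subset this query needs. The level text below is invented, since the question's XML is truncated:

```python
import xml.etree.ElementTree as ET

# Reconstruction of the question's XML; the Level2 text is hypothetical.
xml = """<localization>
  <b n="Stats">
    <l k="SomeStat1"><v>10</v></l>
  </b>
  <b n="Levels">
    <l k="Level1"><v>Beginner Level</v></l>
    <l k="Level2"><v>Intermediate Level</v></l>
  </b>
</localization>"""

root = ET.fromstring(xml)
# The [@k='...'] predicate is part of ElementTree's limited XPath support.
node = root.find(".//l[@k='Level2']/v")
print(node.text)
```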
2,286,633
I have a basic grasp of XML and python and have been using minidom with some success. I have run into a situation where I am unable to get the values I want from an XML file. Here is the basic structure of the pre-existing file. ``` <localization> <b n="Stats"> <l k="SomeStat1"> <v>10</v> ...
2010/02/18
[ "https://Stackoverflow.com/questions/2286633", "https://Stackoverflow.com", "https://Stackoverflow.com/users/224476/" ]
If you could use the [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/documentation.html) library (couldn't you?), you could end up with this dead-simple code: ``` from BeautifulSoup import BeautifulStoneSoup def get_it(xml, level_n): soup = BeautifulStoneSoup(xml) l = soup.find('l', k="Level%d" % leve...
``` level = "Level"+raw_input("Enter level number: ") content= open("xmlfile").read() data= content.split("</localization>") for item in data: if "localization" in item: s = item.split("</b>") for i in s: if """<b n="Levels">""" in i: for c in i.split("</l>"): ...