Dataset schema:
- qid: int64 (values 46k to 74.7M)
- question: string (54 to 37.8k chars)
- date: string (10 chars)
- metadata: list (length 3)
- response_j: string (17 to 26k chars)
- response_k: string (26 to 26k chars)
64,647,954
I want to web-scrape the German real estate website immobilienscout24.de. I would like to download the HTML of a given URL and then work with the HTML offline. It is not intended for commercial use or publication, and I do not intend to spam the site; it is merely for coding practice. I would like to write a Python tool ...
2020/11/02
[ "https://Stackoverflow.com/questions/64647954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14361382/" ]
I'm the developer of Fredy (<https://github.com/orangecoding/fredy>). I came across the same issue. After digging into the issue, I found how they check if you're a robot. First they set a localstorage value. ``` localstorageAvailable: true ``` And if it's available, they set a value: ``` testLocalStorage: 1 ``` ...
Maybe have a go with [requests](https://requests.readthedocs.io/en/master/); the code below seems to work fine for me: ``` import requests from bs4 import BeautifulSoup r = requests.get('https://www.immobilienscout24.de/') soup = BeautifulSoup(r.text, 'html.parser') print(soup.prettify()) ``` Another approach is to ...
64,647,954
I want to web-scrape the German real estate website immobilienscout24.de. I would like to download the HTML of a given URL and then work with the HTML offline. It is not intended for commercial use or publication, and I do not intend to spam the site; it is merely for coding practice. I would like to write a Python tool ...
2020/11/02
[ "https://Stackoverflow.com/questions/64647954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14361382/" ]
Coming back to this question after a while... For your information, I brought back support for Immoscout in Fredy. Have a look here. <https://github.com/orangecoding/fredy#immoscout>
Maybe have a go with [requests](https://requests.readthedocs.io/en/master/); the code below seems to work fine for me: ``` import requests from bs4 import BeautifulSoup r = requests.get('https://www.immobilienscout24.de/') soup = BeautifulSoup(r.text, 'html.parser') print(soup.prettify()) ``` Another approach is to ...
64,647,954
I want to web-scrape the German real estate website immobilienscout24.de. I would like to download the HTML of a given URL and then work with the HTML offline. It is not intended for commercial use or publication, and I do not intend to spam the site; it is merely for coding practice. I would like to write a Python tool ...
2020/11/02
[ "https://Stackoverflow.com/questions/64647954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14361382/" ]
I'm the developer of Fredy (<https://github.com/orangecoding/fredy>). I came across the same issue. After digging into the issue, I found how they check if you're a robot. First they set a localstorage value. ``` localstorageAvailable: true ``` And if it's available, they set a value: ``` testLocalStorage: 1 ``` ...
Try to set `Accept-Language` HTTP header (this worked for me to get correct response from server): ``` import requests from bs4 import BeautifulSoup url = "https://www.immobilienscout24.de/Suche/de/wohnung-mieten?sorting=2#" headers = { 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:82.0) Gecko/201001...
64,647,954
I want to web-scrape the German real estate website immobilienscout24.de. I would like to download the HTML of a given URL and then work with the HTML offline. It is not intended for commercial use or publication, and I do not intend to spam the site; it is merely for coding practice. I would like to write a Python tool ...
2020/11/02
[ "https://Stackoverflow.com/questions/64647954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14361382/" ]
Coming back to this question after a while... For your information, I brought back support for Immoscout in Fredy. Have a look here. <https://github.com/orangecoding/fredy#immoscout>
Try to set `Accept-Language` HTTP header (this worked for me to get correct response from server): ``` import requests from bs4 import BeautifulSoup url = "https://www.immobilienscout24.de/Suche/de/wohnung-mieten?sorting=2#" headers = { 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:82.0) Gecko/201001...
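A self-contained sketch of the header-based approach described above. The exact header values are assumptions (the original answer is truncated), and the network call is guarded so the snippet imports cleanly without touching the site:

```python
import requests

URL = "https://www.immobilienscout24.de/Suche/de/wohnung-mieten?sorting=2"

# Both values below are illustrative assumptions, not the exact strings
# from the truncated answer above.
HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:82.0) "
        "Gecko/20100101 Firefox/82.0"
    ),
    "Accept-Language": "de-DE,de;q=0.9,en;q=0.8",
}


def fetch(url, headers=HEADERS):
    """Fetch a page with browser-like headers and return the HTML text."""
    response = requests.get(url, headers=headers, timeout=10)
    response.raise_for_status()
    return response.text


if __name__ == "__main__":
    # Actual network access only happens when run as a script.
    print(fetch(URL)[:200])
```

The `Accept-Language` header is the key addition relative to a bare `requests.get`; some servers vary or block their response based on it.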
15,011,674
Can you dereference a variable id retrieved from the [`id`](https://docs.python.org/library/functions.html#id) function in Python? For example: ``` dereference(id(a)) == a ``` I want to know from an academic standpoint; I understand that there are more practical methods.
2013/02/21
[ "https://Stackoverflow.com/questions/15011674", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1248775/" ]
Here's a utility function based on a (now-deleted) comment made by "Tiran" in a weblog [discussion](https://web.archive.org/web/20090404012257/http://www.friday.com/bbum/2007/08/24/python-di/) @Hophat Abc references in [his own answer](https://stackoverflow.com/a/15012076/355230) that will work in both Python 2 and 3. ...
Not easily. You could recurse through the [`gc.get_objects()`](http://docs.python.org/2/library/gc.html#gc.get_objects) list, testing each and every object if it has the same `id()` but that's not very practical. The `id()` function is *not intended* to be dereferenceable; the fact that it is based on the memory addr...
15,011,674
Can you dereference a variable id retrieved from the [`id`](https://docs.python.org/library/functions.html#id) function in Python? For example: ``` dereference(id(a)) == a ``` I want to know from an academic standpoint; I understand that there are more practical methods.
2013/02/21
[ "https://Stackoverflow.com/questions/15011674", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1248775/" ]
Here's a utility function based on a (now-deleted) comment made by "Tiran" in a weblog [discussion](https://web.archive.org/web/20090404012257/http://www.friday.com/bbum/2007/08/24/python-di/) @Hophat Abc references in [his own answer](https://stackoverflow.com/a/15012076/355230) that will work in both Python 2 and 3. ...
There are several ways and it's not difficult to do: In O(n) ``` In [1]: def deref(id_): ....: f = {id(x):x for x in gc.get_objects()} ....: return f[id_] In [2]: foo = [1,2,3] In [3]: bar = id(foo) In [4]: deref(bar) Out[4]: [1, 2, 3] ``` A faster way on average, from the comments (thanks @Martijn...
15,011,674
Can you dereference a variable id retrieved from the [`id`](https://docs.python.org/library/functions.html#id) function in Python? For example: ``` dereference(id(a)) == a ``` I want to know from an academic standpoint; I understand that there are more practical methods.
2013/02/21
[ "https://Stackoverflow.com/questions/15011674", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1248775/" ]
Here's a utility function based on a (now-deleted) comment made by "Tiran" in a weblog [discussion](https://web.archive.org/web/20090404012257/http://www.friday.com/bbum/2007/08/24/python-di/) @Hophat Abc references in [his own answer](https://stackoverflow.com/a/15012076/355230) that will work in both Python 2 and 3. ...
**Note:** Updated to Python 3. Here's yet another answer, adapted from yet another [comment](https://web.archive.org/web/20070905071909/http://www.friday.com/bbum/2007/08/24/python-di/#comment-145316), this one by "Peter Fein", in the discussion @Hophat Abc referenced in his own answer to his own question. Though no...
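For completeness, one well-known CPython-specific way to turn an `id()` back into an object uses `ctypes`. This is a minimal sketch of that trick, not necessarily what the truncated answer above does:

```python
import ctypes


def deref(addr):
    """CPython-specific: id() returns the object's memory address, so the
    address can be reinterpreted as a py_object. Passing a stale or
    arbitrary address will crash the interpreter, so only use addresses
    of objects that are still alive."""
    return ctypes.cast(addr, ctypes.py_object).value


original = {"a": (1, 2, 3)}
assert deref(id(original)) is original
```

This relies on a CPython implementation detail (`id()` being a memory address) and should be treated as an academic curiosity, exactly as the question frames it.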
18,394,350
Today, I wrote a short script for a prime sieve, and I am looking to improve on it. I am rather new to python and programming in general, and so I am wondering: what is a good way to reduce memory usage in a program where large lists of numbers are involved? Here is my example script: ``` def ES(n): A = list(range...
2013/08/23
[ "https://Stackoverflow.com/questions/18394350", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2467476/" ]
First, you're using a really weird, inefficient way to record whether something is composite. You don't need to store string representations of the numbers, or even the numbers themselves. You can just use a big list of booleans, where `prime[n]` is true if `n` is prime. Second, there's no reason to worry about wastin...
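The boolean-list idea from the paragraph above can be sketched like this (a minimal sketch, not the answerer's exact code):

```python
def sieve(n):
    """Sieve of Eratosthenes using a plain list of booleans:
    prime[i] is True iff i is prime. No strings, no number lists."""
    prime = [True] * (n + 1)
    prime[0] = prime[1] = False
    i = 2
    while i * i <= n:
        if prime[i]:
            # Start crossing off at i*i; smaller multiples were already
            # handled by smaller primes.
            for j in range(i * i, n + 1, i):
                prime[j] = False
        i += 1
    return [i for i, is_prime in enumerate(prime) if is_prime]


print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

A list of booleans costs one entry per candidate number, which is far cheaper than storing string representations of the numbers themselves.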
Rather than using large lists, you can use generators. ``` def ESgen(): d = {} q = 2 while 1: if q not in d: yield q d[q*q] = [q] else: for p in d[q]: d.setdefault(p+q,[]).append(p) del d[q] q += ...
46,909,017
I want to insert a row into a table of a database from a Python script. The column field values are saved in variables of different types: strings, integers, and floats. I searched in forums and tried different options, but none of them works. I tried these options: ``` cursor.execute('INSERT INTO table(device, number1, nu...
2017/10/24
[ "https://Stackoverflow.com/questions/46909017", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3278790/" ]
You can use parameter binding; you don't need to worry about the datatypes of the variables being passed. ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES (?, ?, ?)', ("dev1",1,2.4)) ```
Check whether you're calling `commit()` after executing the query; alternatively, you can enable autocommit for the connection. ``` connection_obj.commit() ```
46,909,017
I want to insert a row into a table of a database from a Python script. The column field values are saved in variables of different types: strings, integers, and floats. I searched in forums and tried different options, but none of them works. I tried these options: ``` cursor.execute('INSERT INTO table(device, number1, nu...
2017/10/24
[ "https://Stackoverflow.com/questions/46909017", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3278790/" ]
You can use parameter binding; you don't need to worry about the datatypes of the variables being passed. ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES (?, ?, ?)', ("dev1",1,2.4)) ```
Firstly, ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES (%s,%d,%f)',(device_var,number1_var, number2_var)) ``` is wrong. Similarly, the second version is wrong too; the third one, with **.format**, is correct. So why did the query not run? **Because of the following**: starting with 1.2.0, MySQLdb dis...
46,909,017
I want to insert a row into a table of a database from a Python script. The column field values are saved in variables of different types: strings, integers, and floats. I searched in forums and tried different options, but none of them works. I tried these options: ``` cursor.execute('INSERT INTO table(device, number1, nu...
2017/10/24
[ "https://Stackoverflow.com/questions/46909017", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3278790/" ]
You can use parameter binding; you don't need to worry about the datatypes of the variables being passed. ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES (?, ?, ?)', ("dev1",1,2.4)) ```
> > The column field values are saved in variables of different types: strings, integers, and floats. I searched in forums and tried different options, but none of them works > > > Have you read the DB-API specs? The placeholders in the query are not Python formatting instructions but just plain placeholders - the d...
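A runnable illustration of plain placeholders, using the standard library's `sqlite3` as a stand-in database (the table and column names are made up; note that `sqlite3` uses `?` placeholders while MySQLdb uses `%s` for every parameter type):

```python
import sqlite3

# In-memory SQLite database for illustration.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE readings (device TEXT, number1 INTEGER, number2 REAL)")

device_var, number1_var, number2_var = "dev1", 1, 2.4

# One placeholder per value, regardless of whether the value is a string,
# an integer, or a float; the driver handles quoting and conversion.
cur.execute(
    "INSERT INTO readings (device, number1, number2) VALUES (?, ?, ?)",
    (device_var, number1_var, number2_var),
)
conn.commit()

cur.execute("SELECT device, number1, number2 FROM readings")
print(cur.fetchone())  # ('dev1', 1, 2.4)
```

The point is that the placeholder marker never changes with the Python type of the value, which is exactly where `%d` and `%f` in the question go wrong.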
46,909,017
I want to insert a row into a table of a database from a Python script. The column field values are saved in variables of different types: strings, integers, and floats. I searched in forums and tried different options, but none of them works. I tried these options: ``` cursor.execute('INSERT INTO table(device, number1, nu...
2017/10/24
[ "https://Stackoverflow.com/questions/46909017", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3278790/" ]
Check whether you're calling `commit()` after executing the query; alternatively, you can enable autocommit for the connection. ``` connection_obj.commit() ```
Firstly, ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES (%s,%d,%f)',(device_var,number1_var, number2_var)) ``` is wrong. Similarly, the second version is wrong too; the third one, with **.format**, is correct. So why did the query not run? **Because of the following**: starting with 1.2.0, MySQLdb dis...
46,909,017
I want to insert a row into a table of a database from a Python script. The column field values are saved in variables of different types: strings, integers, and floats. I searched in forums and tried different options, but none of them works. I tried these options: ``` cursor.execute('INSERT INTO table(device, number1, nu...
2017/10/24
[ "https://Stackoverflow.com/questions/46909017", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3278790/" ]
> > The column field values are saved in variables of different types: strings, integers, and floats. I searched in forums and tried different options, but none of them works > > > Have you read the DB-API specs? The placeholders in the query are not Python formatting instructions but just plain placeholders - the d...
Firstly, ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES (%s,%d,%f)',(device_var,number1_var, number2_var)) ``` is wrong. Similarly, the second version is wrong too; the third one, with **.format**, is correct. So why did the query not run? **Because of the following**: starting with 1.2.0, MySQLdb dis...
46,917,831
I am working on a simple Docker image that has a large number of environment variables. Are you able to import an environment variable file like with docker-compose? I cannot find anything about this in the docker file documentation. **Dockerfile** ``` FROM python:3.6 ENV ENV1 9.3 ENV ENV2 9.3.4 ... ADD . / RUN pi...
2017/10/24
[ "https://Stackoverflow.com/questions/46917831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3361028/" ]
Yes, there are a couple of ways you can do this. Docker Compose -------------- In Docker Compose, you can supply environment variables in the file itself, or point to an external env file: ``` # docker-compose.yml version: '2' services: service-name: image: service-app environment: - GREETING=hello ...
If you need the environment variables at runtime, it's easiest to create a launcher script that sets up the environment with multiple `export` statements and then launches your process. If you need them at build time, have a look at the `ARG` and `ENV` statements. You'll need one per variable.
46,917,831
I am working on a simple Docker image that has a large number of environment variables. Are you able to import an environment variable file like with docker-compose? I cannot find anything about this in the docker file documentation. **Dockerfile** ``` FROM python:3.6 ENV ENV1 9.3 ENV ENV2 9.3.4 ... ADD . / RUN pi...
2017/10/24
[ "https://Stackoverflow.com/questions/46917831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3361028/" ]
Yes, there are a couple of ways you can do this. Docker Compose -------------- In Docker Compose, you can supply environment variables in the file itself, or point to an external env file: ``` # docker-compose.yml version: '2' services: service-name: image: service-app environment: - GREETING=hello ...
There are various options: <https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e-env-env-file> ``` docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash ``` (You can also just reference previously `exported` variables, see `USER` below.) The one answering your ...
46,917,831
I am working on a simple Docker image that has a large number of environment variables. Are you able to import an environment variable file like with docker-compose? I cannot find anything about this in the docker file documentation. **Dockerfile** ``` FROM python:3.6 ENV ENV1 9.3 ENV ENV2 9.3.4 ... ADD . / RUN pi...
2017/10/24
[ "https://Stackoverflow.com/questions/46917831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3361028/" ]
Yes, there are a couple of ways you can do this. Docker Compose -------------- In Docker Compose, you can supply environment variables in the file itself, or point to an external env file: ``` # docker-compose.yml version: '2' services: service-name: image: service-app environment: - GREETING=hello ...
I really like @halfer's approach, but this could also work. `docker run` takes an optional parameter called `--env-file`, which is super helpful. So your Dockerfile could look like this. ``` COPY .env .env ``` and then in a build script use: ```sh docker build -t my_docker_image . && docker run --env-file .env my_...
46,917,831
I am working on a simple Docker image that has a large number of environment variables. Are you able to import an environment variable file like with docker-compose? I cannot find anything about this in the docker file documentation. **Dockerfile** ``` FROM python:3.6 ENV ENV1 9.3 ENV ENV2 9.3.4 ... ADD . / RUN pi...
2017/10/24
[ "https://Stackoverflow.com/questions/46917831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3361028/" ]
There are various options: <https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e-env-env-file> ``` docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash ``` (You can also just reference previously `exported` variables, see `USER` below.) The one answering your ...
If you need the environment variables at runtime, it's easiest to create a launcher script that sets up the environment with multiple `export` statements and then launches your process. If you need them at build time, have a look at the `ARG` and `ENV` statements. You'll need one per variable.
46,917,831
I am working on a simple Docker image that has a large number of environment variables. Are you able to import an environment variable file like with docker-compose? I cannot find anything about this in the docker file documentation. **Dockerfile** ``` FROM python:3.6 ENV ENV1 9.3 ENV ENV2 9.3.4 ... ADD . / RUN pi...
2017/10/24
[ "https://Stackoverflow.com/questions/46917831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3361028/" ]
I really like @halfer's approach, but this could also work. `docker run` takes an optional parameter called `--env-file`, which is super helpful. So your Dockerfile could look like this. ``` COPY .env .env ``` and then in a build script use: ```sh docker build -t my_docker_image . && docker run --env-file .env my_...
If you need the environment variables at runtime, it's easiest to create a launcher script that sets up the environment with multiple `export` statements and then launches your process. If you need them at build time, have a look at the `ARG` and `ENV` statements. You'll need one per variable.
59,868,524
My goal is to get a list of the names of all the new items that have been posted on <https://www.prusaprinters.org/prints> during the full 24 hours of a given day. Through a bit of reading I've learned that I should be using Selenium because the site I'm scraping is dynamic (loads more objects as the user scrolls). T...
2020/01/22
[ "https://Stackoverflow.com/questions/59868524", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9452116/" ]
To get the text, wait for visibility of the elements. The CSS selector for the titles is `#printListOuter h3`: ``` titles = WebDriverWait(driver, 10).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, '#printListOuter h3'))) for title in titles: print(title.text) ``` Shorter version: ``` wait = WebDriverWait(dri...
This is the XPath for the item names: ``` .//div[@class='print-list-item']/div/a/h3/span ```
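That XPath can be exercised offline against a minimal stand-in for the page markup. The HTML below is invented for illustration (the real page structure may differ), using only the standard library:

```python
import xml.etree.ElementTree as ET

# Assumed, simplified stand-in for the prusaprinters listing markup.
html = """
<div id="printListOuter">
  <div class="print-list-item"><div><a href="#"><h3><span>Benchy</span></h3></a></div></div>
  <div class="print-list-item"><div><a href="#"><h3><span>Calibration Cube</span></h3></a></div></div>
</div>
"""

root = ET.fromstring(html)
# ElementTree supports this limited-XPath expression directly.
names = [span.text
         for span in root.findall(".//div[@class='print-list-item']/div/a/h3/span")]
print(names)  # ['Benchy', 'Calibration Cube']
```

On the live, dynamically loaded page the same expression would be passed to Selenium's `find_elements(By.XPATH, ...)` after the items have rendered.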
14,549,433
Here is what my data frame looks like; ``` Zaman Operasyon Paket 1 2013-01-18 21:39:00 installed linux-api-headers 2 2013-01-18 21:39:00 installed tzdata 3 2013-01-18 21:39:00 installed glibc 4 2013-01-18 21:39:00 installed ncurses 5 2013-01-18 21:39:00 inst...
2013/01/27
[ "https://Stackoverflow.com/questions/14549433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/886669/" ]
You must express that you want *days*; this should do the trick: ``` table(factor(format(data$Zaman,"%D"))) ```
You get an error with `count` because there isn't a function named `count` in base R, but you can use `count` from the `plyr` package: ``` count(dat, vars = 'Zaman') Zaman freq 1 2013-01-18 20:39:00 66 2 2013-01-18 20:40:00 16 3 2013-01-18 20:41:00 47 4 2013-01-18 20:52:00 1 5 2013-01-18 23:47:00 1 6 ...
14,549,433
Here is what my data frame looks like; ``` Zaman Operasyon Paket 1 2013-01-18 21:39:00 installed linux-api-headers 2 2013-01-18 21:39:00 installed tzdata 3 2013-01-18 21:39:00 installed glibc 4 2013-01-18 21:39:00 installed ncurses 5 2013-01-18 21:39:00 inst...
2013/01/27
[ "https://Stackoverflow.com/questions/14549433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/886669/" ]
You must express that you want *days*; this should do the trick: ``` table(factor(format(data$Zaman,"%D"))) ```
Perhaps you are simply looking for the `length` function? ``` > aggregate(data,by=list(data$zaman),FUN=length) Group.1 zaman oper paket 1 2013-01-18 21:39:00 6 6 6 ```
14,549,433
Here is what my data frame looks like; ``` Zaman Operasyon Paket 1 2013-01-18 21:39:00 installed linux-api-headers 2 2013-01-18 21:39:00 installed tzdata 3 2013-01-18 21:39:00 installed glibc 4 2013-01-18 21:39:00 installed ncurses 5 2013-01-18 21:39:00 inst...
2013/01/27
[ "https://Stackoverflow.com/questions/14549433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/886669/" ]
You must express that you want *days*; this should do the trick: ``` table(factor(format(data$Zaman,"%D"))) ```
Use `cut`, which has in-built support for working with dates: ``` attributes(data$Zaman)$tzone <- "GMT" table(cut(data$Zaman, "1 day")) # # 2013-01-18 2013-01-19 2013-01-20 2013-01-21 2013-01-22 2013-01-23 2013-01-24 2013-01-25 2013-01-26 # 339 209 20 8 76 56 0 ...
14,549,433
Here is what my data frame looks like; ``` Zaman Operasyon Paket 1 2013-01-18 21:39:00 installed linux-api-headers 2 2013-01-18 21:39:00 installed tzdata 3 2013-01-18 21:39:00 installed glibc 4 2013-01-18 21:39:00 installed ncurses 5 2013-01-18 21:39:00 inst...
2013/01/27
[ "https://Stackoverflow.com/questions/14549433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/886669/" ]
You must express that you want *days*; this should do the trick: ``` table(factor(format(data$Zaman,"%D"))) ```
I'm partial to data.table for these tasks: ``` library(data.table) dt <- as.data.table(df) dt[,days:=format(Zaman,"%d.%m.%Y")] dt[,.N,by=days] days N 1: 18.01.2013 133 2: 19.01.2013 415 3: 20.01.2013 20 <snip> ```
14,549,433
Here is what my data frame looks like; ``` Zaman Operasyon Paket 1 2013-01-18 21:39:00 installed linux-api-headers 2 2013-01-18 21:39:00 installed tzdata 3 2013-01-18 21:39:00 installed glibc 4 2013-01-18 21:39:00 installed ncurses 5 2013-01-18 21:39:00 inst...
2013/01/27
[ "https://Stackoverflow.com/questions/14549433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/886669/" ]
You get an error with `count` because there isn't a function named `count` in base R, but you can use `count` from the `plyr` package: ``` count(dat, vars = 'Zaman') Zaman freq 1 2013-01-18 20:39:00 66 2 2013-01-18 20:40:00 16 3 2013-01-18 20:41:00 47 4 2013-01-18 20:52:00 1 5 2013-01-18 23:47:00 1 6 ...
Perhaps you are simply looking for the `length` function? ``` > aggregate(data,by=list(data$zaman),FUN=length) Group.1 zaman oper paket 1 2013-01-18 21:39:00 6 6 6 ```
14,549,433
Here is what my data frame looks like; ``` Zaman Operasyon Paket 1 2013-01-18 21:39:00 installed linux-api-headers 2 2013-01-18 21:39:00 installed tzdata 3 2013-01-18 21:39:00 installed glibc 4 2013-01-18 21:39:00 installed ncurses 5 2013-01-18 21:39:00 inst...
2013/01/27
[ "https://Stackoverflow.com/questions/14549433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/886669/" ]
Use `cut`, which has in-built support for working with dates: ``` attributes(data$Zaman)$tzone <- "GMT" table(cut(data$Zaman, "1 day")) # # 2013-01-18 2013-01-19 2013-01-20 2013-01-21 2013-01-22 2013-01-23 2013-01-24 2013-01-25 2013-01-26 # 339 209 20 8 76 56 0 ...
Perhaps you are simply looking for the `length` function? ``` > aggregate(data,by=list(data$zaman),FUN=length) Group.1 zaman oper paket 1 2013-01-18 21:39:00 6 6 6 ```
14,549,433
Here is what my data frame looks like; ``` Zaman Operasyon Paket 1 2013-01-18 21:39:00 installed linux-api-headers 2 2013-01-18 21:39:00 installed tzdata 3 2013-01-18 21:39:00 installed glibc 4 2013-01-18 21:39:00 installed ncurses 5 2013-01-18 21:39:00 inst...
2013/01/27
[ "https://Stackoverflow.com/questions/14549433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/886669/" ]
I'm partial to data.table for these tasks: ``` library(data.table) dt <- as.data.table(df) dt[,days:=format(Zaman,"%d.%m.%Y")] dt[,.N,by=days] days N 1: 18.01.2013 133 2: 19.01.2013 415 3: 20.01.2013 20 <snip> ```
Perhaps you are simply looking for the `length` function? ``` > aggregate(data,by=list(data$zaman),FUN=length) Group.1 zaman oper paket 1 2013-01-18 21:39:00 6 6 6 ```
22,325,245
I have an example script where I am searching for `[Data]` in a line. The odd thing is that it always matches when reading the file with `csv.reader`. See code below. Any ideas? ``` #!/opt/Python-2.7.3/bin/python import csv import re import os content = '''# foo [Header],, foo bar,blah, [Settings] Yadda,yadda [Data...
2014/03/11
[ "https://Stackoverflow.com/questions/22325245", "https://Stackoverflow.com", "https://Stackoverflow.com/users/719016/" ]
If you want to find a line containing the string `[Data]` (with brackets), you should escape the brackets in the pattern: ``` #if re.search("[Data]", line): if re.search(r"\[Data\]", line): ``` The pattern `[Data]` without backslashes is a character class: it matches any single character from the set (`D`, `a`, or `t`).
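The difference between the escaped and unescaped patterns can be checked directly:

```python
import re

# Escaped: matches the literal six-character string "[Data]".
assert re.search(r"\[Data\]", "[Data]") is not None
assert re.search(r"\[Data\]", "Database") is None

# Unescaped: [Data] is a character class {D, a, t}, so it matches
# any line containing any one of those characters.
assert re.search("[Data]", "Database") is not None  # matches the 'D'
assert re.search("[Data]", "wrong") is None         # no D, a, or t
```

This is why the unescaped pattern "always matches": almost every line in the sample contains at least one `D`, `a`, or `t`.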
Edit: the error in the original code is that you set `founc_csv` the first time it finds data and then never reset it to 0. Alternatively, you can simply remove the need for it entirely: ``` mycsv = [] for l in reader: line = str(l) if line == '[]': continue elif "[Data]" in line: print line ...
19,017,840
I have two large python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,...
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
**SOLUTION** The trick is to run the VM without a GUI. This way you can also easily run a VM on a Windows server like a service. The prerequisite is that a VM already exists; you have some already. Below, put its name in place of `{vm_name}`. --- **1) First we use the built-in executable file "VBoxHeadless.exe".** Create the file ``` vm.run.ba...
If you do not mind operating the application once manually to end up with the OS running in the background, here are the steps: open VirtualBox, right-click your guest OS and choose Start Headless, wait a while until the OS boots, then close the VirtualBox application.
19,017,840
I have two large python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,...
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
You can use VBoxManage to start a VM headless: ``` "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" startvm "Your VM name" --type headless ```
Following Bruno Garett's answer: in my experience, running the `vm.run.bat` file fails with a read-only error, but it works fine running the VB script. Just to save anyone some time. Also, to shut down the headless VM you can use another batch script (Sam F's solution won't work with Bruno's solution): ``` cd "c:\Program Files\O...
19,017,840
I have two large python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,...
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
The most consistent option is to use Task Scheduler. Implementing the solution ========================= This requires a couple of pretty easy steps, but I will explain them in detail to ensure anyone with any technical background can set this up: 1. Identify your virtual machine name 2. Create a task in ...
Following Bruno Garett's answer: in my experience, running the `vm.run.bat` file fails with a read-only error, but it works fine running the VB script. Just to save anyone some time. Also, to shut down the headless VM you can use another batch script (Sam F's solution won't work with Bruno's solution): ``` cd "c:\Program Files\O...
19,017,840
I have two large python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,...
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
You can use VBoxManage to start a VM headless: ``` "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" startvm "Your VM name" --type headless ```
I used something similar to Samuel's solution that works great. On the desktop (or any folder), right click and go to New->Shortcut. In the target, type: ``` "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" startvm {uuid} --type headless ``` In the name, type whatever you want and click Finish. Then to stop th...
19,017,840
I have two large python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,...
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
**SOLUTION** The trick is to run the VM without a GUI. This way you can also easily run a VM on a Windows server like a service. The prerequisite is that a VM already exists; you have some already. Below, put its name in place of `{vm_name}`. --- **1) First we use the built-in executable file "VBoxHeadless.exe".** Create the file ``` vm.run.ba...
Following Bruno Garett's answer: in my experience, running the `vm.run.bat` file fails with a read-only error, but it works fine running the VB script. Just to save anyone some time. Also, to shut down the headless VM you can use another batch script (Sam F's solution won't work with Bruno's solution): ``` cd "c:\Program Files\O...
19,017,840
I have two large python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,...
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
The most consistent option is to use Task Scheduler. Implementing the solution ========================= This requires a couple of pretty easy steps, but I will explain them in detail to ensure anyone with any technical background can set this up: 1. Identify your virtual machine name 2. Create a task in ...
There an easy manual option right in the GUI too: [![Screenshot from Virtualbox 5.2](https://i.stack.imgur.com/WRts7.png)](https://i.stack.imgur.com/WRts7.png) (Taken from Virtualbox 5.2)
19,017,840
I have two large python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,...
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
You can use VBoxManage to start a VM headless: ``` "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" startvm "Your VM name" --type headless ```
If you do not mind operating the application once manually to end up with the OS running in the background, here are the steps: open Virtual Box, right-click on your guest OS and choose Start Headless, wait a while until the OS boots, then close the Virtual Box application.
19,017,840
I have two large python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,...
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
You can use VBoxManage to start a VM headless: ``` "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" startvm "Your VM name" --type headless ```
An alternative solution: <http://vboxvmservice.sourceforge.net/> It works perfectly for me!
19,017,840
I have two large python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,...
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
The truly most consistent option is to use Task Scheduler. Implementing the solution ========================= This requires a couple of pretty easy steps, but I will explain them in detail to ensure anyone with any technical background can set this up: 1. Identify your virtual machine name 2. Create a task in ...
An alternative solution: <http://vboxvmservice.sourceforge.net/> It works perfectly for me!
19,017,840
I have two large python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,...
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
**SOLUTION** The trick is to run the VM without a GUI. With this you can easily run a VM on a WIN server like a service too. The prerequisite is that some VM already exists; below, put its name in place of `{vm_name}`. --- **1) First we use the built-in executable file "VBoxHeadless.exe".** Create file ``` vm.run.ba...
The truly most consistent option is to use Task Scheduler. Implementing the solution ========================= This requires a couple of pretty easy steps, but I will explain them in detail to ensure anyone with any technical background can set this up: 1. Identify your virtual machine name 2. Create a task in ...
72,751,658
I have written a python program for printing a diamond. It is working properly except that it is printing an extra kite after printing a diamond. May someone please help me to remove this bug? I can't find it and please give a fix from this code please. CODE: ``` limitRows = int(input("Enter the maximum number of rows...
2022/06/25
[ "https://Stackoverflow.com/questions/72751658", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19409398/" ]
You have an extra while loop in the second half of your code. Try this ``` limitRows = int(input("Enter the maximum number of rows: ")) maxRows = limitRows + (limitRows - 1) currentRow = 0 while currentRow <= limitRows: spaces = limitRows - currentRow while spaces > 0: print(" ", end='') space...
You can make this less complex by using just one loop as follows: ``` def make_row(n): return ' '.join(['*'] * n) rows = input('Number of rows: ') if (nrows := int(rows)) > 0: diamond = [make_row(nrows)] j = len(diamond[0]) for i in range(nrows-1, 0, -1): j -= 1 diamond.append(make_ro...
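The single-loop idea above can be sketched even more compactly with string centering; this is an illustrative alternative, not the poster's code:

```python
# Build each row of stars, center it in the widest row, then mirror the list.
def diamond(n):
    width = 2 * n - 1  # widest row has n stars separated by spaces
    rows = [('* ' * i).rstrip().center(width) for i in range(1, n + 1)]
    return '\n'.join(rows + rows[-2::-1])  # top half + mirrored bottom half

print(diamond(3))
```

Centering avoids counting leading spaces by hand, which is where the extra-kite bug came from in the original code.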
59,529,038
I am using [nameko](https://nameko.readthedocs.io/en/stable/) to build an ETL pipeline with a micro-service architecture, and I do not want to wait for a reply after making a RPC request. ``` from nameko.rpc import rpc, RpcProxy class Scheduler(object): name = "scheduler" task_runner = RpcProxy('task_runner') ...
2019/12/30
[ "https://Stackoverflow.com/questions/59529038", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8776121/" ]
I had the same problem; you need to replace the `async` method with the `call_async` one, and retrieve the data with `result()`. [Documentation](https://nameko.readthedocs.io/en/stable/built_in_extensions.html) [GitHub issue](https://github.com/nameko/nameko/pull/318)
Use `call_async` instead of `async`, or for better results use events: ``` from nameko.events import EventDispatcher, event_handler @event_handler("service_a", "event_emit_name") def get_result(self, payload): #do_something... ``` and in the other service ``` from nameko.events import EventDispatcher, event_handler @...
31,196,412
I am new to the world of map reduce, I have run a job and it seems to be taking forever to complete given that it is a relatively small task, I am guessing something has not gone according to plan. I am using hadoop version 2.6, here is some info gathered I thought could help. The map reduce programs themselves are str...
2015/07/02
[ "https://Stackoverflow.com/questions/31196412", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1285948/" ]
If a job is in the `ACCEPTED` state for a long time and is not changing to the `RUNNING` state, it could be due to the following reasons. The NodeManager (slave service) is either dead or unable to communicate with the resource manager. If the `Active nodes` in the Yarn resource manager [Web ui main page](http://mymacbook.home:8088/) is z...
Have you partitioned your data the same way you query it? Basically, you don't want to query all your data, which is what you may be doing at the moment. That could explain why it's taking such a long time to run. You want to query a subset of your whole data set. For instance, if you partition over dates, you really...
32,959,770
In python, I can do this to get the current file's path: ``` os.path.dirname(os.path.abspath(__file__)) ``` But if I run this on a thread say: ``` def do_stuff(): class RunThread(threading.Thread): def run(self): print os.path.dirname(os.path.abspath(__file__)) a = RunThread() a.start() ...
2015/10/06
[ "https://Stackoverflow.com/questions/32959770", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1515864/" ]
``` import inspect print(inspect.stack()[0][1]) ``` [inspect](https://docs.python.org/2/library/inspect.html)
I apologise for my previous answer. I was half asleep and replied stupidly. Every time I've done what you're trying to do, I have used it in the inverse order. E.g. `os.path.abspath(os.path.dirname(__file__))`
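For what it's worth, `__file__` is a module-level attribute, so it resolves to the same value inside a worker thread as in the main thread; a minimal sketch:

```python
import os
import threading

# Collect the path computed inside the thread so it can be compared afterwards.
path_from_thread = []

def worker():
    # Same expression the question uses, evaluated in a worker thread.
    path_from_thread.append(os.path.dirname(os.path.abspath(__file__)))

t = threading.Thread(target=worker)
t.start()
t.join()

print(path_from_thread[0])
```

The value matches what the main thread computes, because `__file__` belongs to the module, not to any particular thread.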
48,047,495
``` Collecting jws>=0.1.3 (from python-jwt==2.0.1->pyrebase) Using cached https://files.pythonhosted.org/packages/01/9e/1536d578ed50f5fe8196310ddcc921a3cd8e973312d60ac74488b805d395/jws-0.1.3.tar.gz Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>",...
2018/01/01
[ "https://Stackoverflow.com/questions/48047495", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8272238/" ]
I've just solved this. [MyGitHub.io](https://wesely.github.io/pip,%20python,%20pip/Fix-'cp950'-Error-when-using-'pip-install'/) It's a bug in the `jws` package; it should handle the encoding problem in its `setup.py`. My solution: install `jws` first * use `pip download jws` instead of `pip install` * use 7z open...
I solved this problem by deleting Visual Studio Community 2017, including the Python dev option
48,047,495
``` Collecting jws>=0.1.3 (from python-jwt==2.0.1->pyrebase) Using cached https://files.pythonhosted.org/packages/01/9e/1536d578ed50f5fe8196310ddcc921a3cd8e973312d60ac74488b805d395/jws-0.1.3.tar.gz Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>",...
2018/01/01
[ "https://Stackoverflow.com/questions/48047495", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8272238/" ]
I've just solved this. [MyGitHub.io](https://wesely.github.io/pip,%20python,%20pip/Fix-'cp950'-Error-when-using-'pip-install'/) It's a bug in the `jws` package; it should handle the encoding problem in its `setup.py`. My solution: install `jws` first * use `pip download jws` instead of `pip install` * use 7z open...
I solved this problem by 1. moving the python-jwt install up in the order of my `requirements.txt` 2. not specifying an older version of python-jwt encoded in pyrebase One of those did the trick.
12,646,883
I've been trying to make a simple blackjack game in python and I seem to be stuck, my code is as follows: ``` from random import choice def deck(): cards = range(1, 12) return choice(cards) def diack(): card1= deck() card2 = deck() hand = card1 + card2 print hand if hand < 21: pr...
2012/09/28
[ "https://Stackoverflow.com/questions/12646883", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1689672/" ]
`if "hit"` just tests if the string `"hit"` exists, and it does. Thus, the `elif` statement is never executed. You need to capture the user input in a variable and test against that instead: ``` choice = raw_input("Would you like to hit or stand?") print choice if choice == "hit": return hand + deck() elif choice...
Assuming you get the indentation right: ``` print raw_input("Would you like to hit or stand?") if "hit": return hand + deck() elif "stand": return hand ``` Your `if` is just checking whether the string `"hit"` is true. All non-empty strings are true, and `"hit"` is non-empty, so this will always succeed. W...
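To make the truthiness point concrete, here is a small sketch; the `play` helper is hypothetical, not from the question:

```python
# Any non-empty string is truthy, so `if "hit":` always succeeds.
print(bool("hit"), bool(""))  # True False

# Capture the input and compare against it, as both answers suggest.
def play(choice, hand, new_card):
    if choice == "hit":
        return hand + new_card
    elif choice == "stand":
        return hand

print(play("hit", 15, 7))    # 22
print(play("stand", 15, 7))  # 15
```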
43,983,127
I wish to find all words that start with "Am" and this is what I tried so far with python ``` import re my_string = "America's mom, American" re.findall(r'\b[Am][a-zA-Z]+\b', my_string) ``` but this is the output that I get ``` ['America', 'mom', 'American'] ``` Instead of what I want ``` ['America', 'American']...
2017/05/15
[ "https://Stackoverflow.com/questions/43983127", "https://Stackoverflow.com", "https://Stackoverflow.com/users/863713/" ]
The `[Am]`, a positive [character class](http://www.regular-expressions.info/charclass.html), matches either `A` or `m`. To match a *sequence* of chars, you need to use them one after another. Remove the brackets: ``` import re my_string = "America's mom, American" print(re.findall(r'\bAm[a-zA-Z]+\b', my_string)) # =...
Don't use character class: ``` import re my_string = "America's mom, American" re.findall(r'\bAm[a-zA-Z]+\b', my_string) ```
43,983,127
I wish to find all words that start with "Am" and this is what I tried so far with python ``` import re my_string = "America's mom, American" re.findall(r'\b[Am][a-zA-Z]+\b', my_string) ``` but this is the output that I get ``` ['America', 'mom', 'American'] ``` Instead of what I want ``` ['America', 'American']...
2017/05/15
[ "https://Stackoverflow.com/questions/43983127", "https://Stackoverflow.com", "https://Stackoverflow.com/users/863713/" ]
The `[Am]`, a positive [character class](http://www.regular-expressions.info/charclass.html), matches either `A` or `m`. To match a *sequence* of chars, you need to use them one after another. Remove the brackets: ``` import re my_string = "America's mom, American" print(re.findall(r'\bAm[a-zA-Z]+\b', my_string)) # =...
``` re.findall(r'(Am\w+)', my_string, re.I) ```
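Note that `re.I` makes the pattern case-insensitive, which is broader than the question asked for; a sketch with an extra lowercase word added purely for illustration:

```python
import re

my_string = "America's mom, American and amber"  # 'amber' added for illustration
# case-sensitive: only words starting with the literal sequence "Am"
print(re.findall(r'\bAm\w+', my_string))        # ['America', 'American']
# with re.I, lowercase 'amber' is caught as well
print(re.findall(r'\bAm\w+', my_string, re.I))  # ['America', 'American', 'amber']
```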
65,376,345
I had started scrapy with Official Tutorial, but I can't go with it successfully.My code is totally same with official one. ``` import scrapy class QuotesSpider(scrapy.Spider): name = 'Quotes'; def start_requests(self): urls = [ 'http://quotes.toscrape.com/page/1/', ] for u...
2020/12/20
[ "https://Stackoverflow.com/questions/65376345", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14482346/" ]
There is an `IndentationError`. You need to fix the code indentation; then it works fine.
You might find a solution for your issue here > > [Scrapy installed, but won't run from the command line](https://stackoverflow.com/questions/37757233/scrapy-installed-but-wont-run-from-the-command-line) > > >
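For reference, an `IndentationError` is raised at compile time, before any spider code runs; a tiny reproduction with a made-up source string:

```python
# The body of parse() is deliberately not indented, mimicking the bug.
bad_src = "def parse(self, response):\nreturn response\n"
caught = None
try:
    compile(bad_src, "<demo>", "exec")
except IndentationError as e:
    caught = type(e).__name__

print(caught)  # IndentationError
```

Because the error happens at compile time, no amount of fixing selectors or commas will help until the indentation itself is corrected.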
65,376,345
I had started scrapy with Official Tutorial, but I can't go with it successfully.My code is totally same with official one. ``` import scrapy class QuotesSpider(scrapy.Spider): name = 'Quotes'; def start_requests(self): urls = [ 'http://quotes.toscrape.com/page/1/', ] for u...
2020/12/20
[ "https://Stackoverflow.com/questions/65376345", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14482346/" ]
There is an `IndentationError`. You need to fix the code indentation; then it works fine.
It is not about the yield. I think either all the semicolons or maybe the last comma after `getall()` `'tags': quote.css('div.tags a.tag::text').getall(),` might cause the interpreter to expect something else. Remove the semicolons and the last comma - does it still not work? The error output shows the indentation error a...
12,634,452
I have 2 dictionaries ``` a = {'I': [1,2], 'II': [1,2], 'III': [1,2]} b = {'I': [3,4], 'II': [3,4], 'IV': [3,4]} ``` how can i merge them such that i get the following result ``` c = merge_dicts(a,b) ``` where c is `{'I': [1,2,3,4], 'II': [1,2,3,4], 'III': [1,2], 'IV': [3,4]}` Is there a good pythonic way of do...
2012/09/28
[ "https://Stackoverflow.com/questions/12634452", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1742869/" ]
Are you sure they have the same keys? You could do: ``` c = dict( (k,a[k]+b[k]) for k in a ) ``` Addition of lists concatenates so `a[k] + b[k]` gives you something like `[1,2]+[3,4]` which equals `[1,2,3,4]`. The `dict` constructor can take a series of 2-element iterables which turn into `key` - `value` pairs. If ...
``` >>> a = {'I': [1,2], 'II': [1,2]} >>> b = {'I': [3,4], 'II': [3,4]} >>> {key:a[key]+b[key] for key in a} {'I': [1, 2, 3, 4], 'II': [1, 2, 3, 4]} ``` Note that this only works if they share keys exactly.
12,634,452
I have 2 dictionaries ``` a = {'I': [1,2], 'II': [1,2], 'III': [1,2]} b = {'I': [3,4], 'II': [3,4], 'IV': [3,4]} ``` how can i merge them such that i get the following result ``` c = merge_dicts(a,b) ``` where c is `{'I': [1,2,3,4], 'II': [1,2,3,4], 'III': [1,2], 'IV': [3,4]}` Is there a good pythonic way of do...
2012/09/28
[ "https://Stackoverflow.com/questions/12634452", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1742869/" ]
If you need ALL values: ``` from itertools import chain from collections import defaultdict a = {'I': [1,2], 'II': [1,2], 'IV': [1,2]} b = {'I': [3,4], 'II': [3,4], 'V': [3,4]} d = defaultdict(list) for key, value in chain(a.iteritems(), b.iteritems()): d[key].extend(value) d ``` Output: ``` defaultdict(<type...
Are you sure they have the same keys? You could do: ``` c = dict( (k,a[k]+b[k]) for k in a ) ``` Addition of lists concatenates so `a[k] + b[k]` gives you something like `[1,2]+[3,4]` which equals `[1,2,3,4]`. The `dict` constructor can take a series of 2-element iterables which turn into `key` - `value` pairs. If ...
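For readers on Python 3, `dict.iteritems()` is gone; an equivalent sketch of the `defaultdict` approach using `items()`, with the question's dictionaries:

```python
from itertools import chain
from collections import defaultdict

a = {'I': [1, 2], 'II': [1, 2], 'III': [1, 2]}
b = {'I': [3, 4], 'II': [3, 4], 'IV': [3, 4]}

# Chain both key/value streams and extend per key; missing keys default to [].
d = defaultdict(list)
for key, value in chain(a.items(), b.items()):
    d[key].extend(value)

print(dict(d))  # {'I': [1, 2, 3, 4], 'II': [1, 2, 3, 4], 'III': [1, 2], 'IV': [3, 4]}
```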
12,634,452
I have 2 dictionaries ``` a = {'I': [1,2], 'II': [1,2], 'III': [1,2]} b = {'I': [3,4], 'II': [3,4], 'IV': [3,4]} ``` how can i merge them such that i get the following result ``` c = merge_dicts(a,b) ``` where c is `{'I': [1,2,3,4], 'II': [1,2,3,4], 'III': [1,2], 'IV': [3,4]}` Is there a good pythonic way of do...
2012/09/28
[ "https://Stackoverflow.com/questions/12634452", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1742869/" ]
If you need ALL values: ``` from itertools import chain from collections import defaultdict a = {'I': [1,2], 'II': [1,2], 'IV': [1,2]} b = {'I': [3,4], 'II': [3,4], 'V': [3,4]} d = defaultdict(list) for key, value in chain(a.iteritems(), b.iteritems()): d[key].extend(value) d ``` Output: ``` defaultdict(<type...
``` >>> a = {'I': [1,2], 'II': [1,2]} >>> b = {'I': [3,4], 'II': [3,4]} >>> {key:a[key]+b[key] for key in a} {'I': [1, 2, 3, 4], 'II': [1, 2, 3, 4]} ``` Note that this only works if they share keys exactly.
12,634,452
I have 2 dictionaries ``` a = {'I': [1,2], 'II': [1,2], 'III': [1,2]} b = {'I': [3,4], 'II': [3,4], 'IV': [3,4]} ``` how can i merge them such that i get the following result ``` c = merge_dicts(a,b) ``` where c is `{'I': [1,2,3,4], 'II': [1,2,3,4], 'III': [1,2], 'IV': [3,4]}` Is there a good pythonic way of do...
2012/09/28
[ "https://Stackoverflow.com/questions/12634452", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1742869/" ]
``` >>> from collections import Counter >>> class ListAccumulator(Counter): ... def __missing__(self, key): ... return [] ... >>> a = {'I': [1,2], 'II': [1,2], 'III': [1,2]} >>> b = {'I': [3,4], 'II': [3,4], 'IV': [3,4]} >>> >>> ListAccumulator(a) + ListAccumulator(b) Counter({'IV': [3, 4], 'I': [1, 2, 3,...
``` >>> a = {'I': [1,2], 'II': [1,2]} >>> b = {'I': [3,4], 'II': [3,4]} >>> {key:a[key]+b[key] for key in a} {'I': [1, 2, 3, 4], 'II': [1, 2, 3, 4]} ``` Note that this only works if they share keys exactly.
12,634,452
I have 2 dictionaries ``` a = {'I': [1,2], 'II': [1,2], 'III': [1,2]} b = {'I': [3,4], 'II': [3,4], 'IV': [3,4]} ``` how can i merge them such that i get the following result ``` c = merge_dicts(a,b) ``` where c is `{'I': [1,2,3,4], 'II': [1,2,3,4], 'III': [1,2], 'IV': [3,4]}` Is there a good pythonic way of do...
2012/09/28
[ "https://Stackoverflow.com/questions/12634452", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1742869/" ]
If you need ALL values: ``` from itertools import chain from collections import defaultdict a = {'I': [1,2], 'II': [1,2], 'IV': [1,2]} b = {'I': [3,4], 'II': [3,4], 'V': [3,4]} d = defaultdict(list) for key, value in chain(a.iteritems(), b.iteritems()): d[key].extend(value) d ``` Output: ``` defaultdict(<type...
``` >>> from collections import Counter >>> class ListAccumulator(Counter): ... def __missing__(self, key): ... return [] ... >>> a = {'I': [1,2], 'II': [1,2], 'III': [1,2]} >>> b = {'I': [3,4], 'II': [3,4], 'IV': [3,4]} >>> >>> ListAccumulator(a) + ListAccumulator(b) Counter({'IV': [3, 4], 'I': [1, 2, 3,...
38,921,815
I am using python 3.5. When I tried to return a generator function instance and i am getting a StopIteration error. Why? here is my code: ``` >>> def gen(start, end): ... '''generator function similar to range function''' ... while start <= end: ... yield start ... start += 1 ... >>> def check(ingen, f...
2016/08/12
[ "https://Stackoverflow.com/questions/38921815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
In Python, if `yield` is present in a function, then Python treats it as a generator. In a generator, any return will raise `StopIteration` with the returned value. This is a new feature in Python 3.3: see [PEP 380](https://www.python.org/dev/peps/pep-0380/) and [here](https://stackoverflow.com/a/16780113/2097780). `ch...
When a generator hits its `return` statement (explicit or not) it raises `StopIteration`. So when you `return ingen` you end the iteration. `check_v2` is not a generator, since it does not contain the `yield` statement, that's why it works.
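A runnable sketch of the `yield from` fix, reusing the question's `gen`; the second parameter of `check` is a guess, since the question's code is truncated:

```python
def gen(start, end):
    '''generator function similar to range'''
    while start <= end:
        yield start
        start += 1

def check(ingen, step):
    # Returning a value here would just raise StopIteration(value);
    # `yield from` keeps check a generator and passes items through.
    yield from (x for x in ingen if x % step == 0)

print(list(check(gen(1, 10), 2)))  # [2, 4, 6, 8, 10]
```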
38,921,815
I am using python 3.5. When I tried to return a generator function instance and i am getting a StopIteration error. Why? here is my code: ``` >>> def gen(start, end): ... '''generator function similar to range function''' ... while start <= end: ... yield start ... start += 1 ... >>> def check(ingen, f...
2016/08/12
[ "https://Stackoverflow.com/questions/38921815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
When a generator hits its `return` statement (explicit or not) it raises `StopIteration`. So when you `return ingen` you end the iteration. `check_v2` is not a generator, since it does not contain the `yield` statement, that's why it works.
As others have said, if you return from a generator, it means the generator has stopped yielding items, which raises a `StopIteration` whatever you return. This means that `check` actually returns an empty iterator. If you want to return the results of another generator, you can use `yield from`: ``` def check...
38,921,815
I am using python 3.5. When I tried to return a generator function instance and i am getting a StopIteration error. Why? here is my code: ``` >>> def gen(start, end): ... '''generator function similar to range function''' ... while start <= end: ... yield start ... start += 1 ... >>> def check(ingen, f...
2016/08/12
[ "https://Stackoverflow.com/questions/38921815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
When a generator hits its `return` statement (explicit or not) it raises `StopIteration`. So when you `return ingen` you end the iteration. `check_v2` is not a generator, since it does not contain the `yield` statement, that's why it works.
See [PEP 380](https://www.python.org/dev/peps/pep-0380/#formal-semantics): > > In a generator, the statement > > > > ``` > return value > > ``` > > is semantically equivalent to > > > > ``` > raise StopIteration(value) > > ``` > > except that, as currently, the exception cannot be caught by `except` > clau...
38,921,815
I am using python 3.5. When I tried to return a generator function instance and i am getting a StopIteration error. Why? here is my code: ``` >>> def gen(start, end): ... '''generator function similar to range function''' ... while start <= end: ... yield start ... start += 1 ... >>> def check(ingen, f...
2016/08/12
[ "https://Stackoverflow.com/questions/38921815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
In Python, if `yield` is present in a function, then Python treats it as a generator. In a generator, any return will raise `StopIteration` with the returned value. This is a new feature in Python 3.3: see [PEP 380](https://www.python.org/dev/peps/pep-0380/) and [here](https://stackoverflow.com/a/16780113/2097780). `ch...
As others have said, if you return from a generator, it means the generator has stopped yielding items, which raises a `StopIteration` whatever you return. This means that `check` actually returns an empty iterator. If you want to return the results of another generator, you can use `yield from`: ``` def check...
38,921,815
I am using python 3.5. When I tried to return a generator function instance and i am getting a StopIteration error. Why? here is my code: ``` >>> def gen(start, end): ... '''generator function similar to range function''' ... while start <= end: ... yield start ... start += 1 ... >>> def check(ingen, f...
2016/08/12
[ "https://Stackoverflow.com/questions/38921815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
In Python, if `yield` is present in a function, then Python treats it as a generator. In a generator, any return will raise `StopIteration` with the returned value. This is a new feature in Python 3.3: see [PEP 380](https://www.python.org/dev/peps/pep-0380/) and [here](https://stackoverflow.com/a/16780113/2097780). `ch...
See [PEP 380](https://www.python.org/dev/peps/pep-0380/#formal-semantics): > > In a generator, the statement > > > > ``` > return value > > ``` > > is semantically equivalent to > > > > ``` > raise StopIteration(value) > > ``` > > except that, as currently, the exception cannot be caught by `except` > clau...
38,921,815
I am using python 3.5. When I tried to return a generator function instance and i am getting a StopIteration error. Why? here is my code: ``` >>> def gen(start, end): ... '''generator function similar to range function''' ... while start <= end: ... yield start ... start += 1 ... >>> def check(ingen, f...
2016/08/12
[ "https://Stackoverflow.com/questions/38921815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
As others have said, if you return from a generator, it means the generator has stopped yielding items, which raises a `StopIteration` whatever you return. This means that `check` actually returns an empty iterator. If you want to return the results of another generator, you can use `yield from`: ``` def check...
See [PEP 380](https://www.python.org/dev/peps/pep-0380/#formal-semantics): > > In a generator, the statement > > > > ``` > return value > > ``` > > is semantically equivalent to > > > > ``` > raise StopIteration(value) > > ``` > > except that, as currently, the exception cannot be caught by `except` > clau...
57,484,399
I'm new to Python and am using Anaconda on Windows 10 to learn how to implement machine learning. Running this code on Spyder: ```py import sklearn as skl ``` Originally got me this: ``` Traceback (most recent call last): File "<ipython-input-1-7135d3f24347>", line 1, in <module> runfile('C:/Users/julia/.spy...
2019/08/13
[ "https://Stackoverflow.com/questions/57484399", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7385274/" ]
I ended up fixing this by uninstalling my current version of Anaconda and installing a version from a few months ago. I didn't get the "ordinal 242" error nor the issues with scikit-learn.
I encountered the same error after letting my PC sit for 4 days unattended. Restarting the kernel solved it. This probably won't work for everyone, but it might save someone a little agony.
66,436,933
I am working with sequencing data and need to count the number of reads that match to a grna library in python. Simplified my data looks like this: ``` reads = ['abc', 'abc','def', 'ghi'] grnas = ['abc', 'ghi'] ``` The grnas list is unique, while the reads list can contain entries that are not of interest and don't ...
2021/03/02
[ "https://Stackoverflow.com/questions/66436933", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8413867/" ]
Your case cannot do better than O(n). Using a single process the best solution is: ``` grna_set = set(grnas)  # or dict.fromkeys(grnas): build it once, outside the comprehension [x for x in reads if x in grna_set] ``` but this is a simple case to parallelize: you can split the input data into batches of work and append all the results.
As the worst-case complexity of a lookup in both `set` & `dict` in Python is `O(N)`, the complexity of the program would be `O(N * M)`; it would not be efficient to use them. So use the `Counter` object, which will do the search in `O(1)`, so the whole program would be done in `O(max(N, M))` complexity. ```py f...
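For reference, average-case membership tests on `set` and `dict` are O(1), so building the set once and testing each read against it makes the whole pass O(N + M); a minimal sketch with the question's data:

```python
from collections import Counter

reads = ['abc', 'abc', 'def', 'ghi']
grnas = ['abc', 'ghi']

grna_set = set(grnas)                           # built once: O(M)
matches = [x for x in reads if x in grna_set]   # O(1) average per lookup
print(matches)                                  # ['abc', 'abc', 'ghi']

# Per-gRNA read counts in the same single pass:
counts = Counter(x for x in reads if x in grna_set)
print(counts['abc'], counts['ghi'])             # 2 1
```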
58,774,718
I'm writing multi-process code, which runs perfectly in Python 3.7. Yet I want one of the parallel process to execute an IO process take stakes for ever using AsyncIO i order to get better performance, but have not been able to get it to run. Ubuntu 18.04, Python 3.7, AsyncIO, pipenv (all pip libraries installed) The...
2019/11/08
[ "https://Stackoverflow.com/questions/58774718", "https://Stackoverflow.com", "https://Stackoverflow.com/users/982446/" ]
As it seems from the *Traceback* log, it looks like you are trying to add tasks to an *event loop* that is not running. > > /.pyenv/versions/3.7.4/lib/python3.7/multiprocessing/process.py:313: > RuntimeWarning: coroutine > '._get_filter_collateral..read_task' **was never > awaited** > > > The *loop* was just created...
Use `asyncio.ensure_future` instead. See <https://docs.python.org/3/library/asyncio-future.html#asyncio.ensure_future>
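A minimal sketch of scheduling coroutines on the running loop with `ensure_future` and awaiting them with `gather` (a simplified stand-in, not the question's code):

```python
import asyncio

async def fetch(i):
    await asyncio.sleep(0)  # stand-in for the real I/O work
    return i * 2

async def main():
    # Tasks are attached to the loop that is currently running main(),
    # which avoids the "was never awaited" warning from the traceback.
    tasks = [asyncio.ensure_future(fetch(i)) for i in range(3)]
    return await asyncio.gather(*tasks)

print(asyncio.run(main()))  # [0, 2, 4]
```

`asyncio.run()` creates and runs the loop itself; calling `ensure_future` before any loop is running is what triggers the reported error.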
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
OK, I wrote an easy bit. I will probably spend some time writing a complete minor mode. For the time being, the following function will send the current line (or the region if the mark is active). It does quite a good job for me: ``` (defun sh-send-line-or-region (&optional step) (interactive ()) (let ((proc (get-process "shell")) ...
`M-x` `append-to-buffer` `RET`
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
M-x shell-command-on-region aka. `M-|`
Here is another solution from [this post](https://superuser.com/questions/688829/evaluate-a-python-one-line-statement-in-gnu-emacs/688834#688834). Just copying it for convenience. The print statement is key here. ``` (add-hook 'python-mode-hook 'my-python-send-statement) (defun my-python-send-statement () ...
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
``` (defun shell-region (start end) "execute region in an inferior shell" (interactive "r") (shell-command (buffer-substring-no-properties start end))) ```
Modifying Jurgen's answer above to operate on a specific buffer gives the following function, which will send the region and then switch to the buffer, displaying it in another window; the buffer named *PYTHON* is used for illustration. The target buffer should already be running a shell. ``` (defun p-send(start end) ...
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
``` (defun shell-region (start end) "execute region in an inferior shell" (interactive "r") (shell-command (buffer-substring-no-properties start end))) ```
`M-x` `append-to-buffer` `RET`
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
OK, I wrote an easy bit. I will probably spend some time writing a complete minor mode. For the time being, the following function will send the current line (or the region if the mark is active). It does quite a good job for me: ``` (defun sh-send-line-or-region (&optional step) (interactive ()) (let ((proc (get-process "shell")) ...
Here is another solution from [this post](https://superuser.com/questions/688829/evaluate-a-python-one-line-statement-in-gnu-emacs/688834#688834). Just copying it for convenience. The print statement is key here. ``` (add-hook 'python-mode-hook 'my-python-send-statement) (defun my-python-send-statement () ...
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
I wrote a package that sends/pipes lines or regions of code to shell processes, basically something similar to what ESS is for R. It also allows for multiple shell processes to exist, and lets you choose which one to send the region to. Have a look here: <http://www.emacswiki.org/emacs/essh>
Modifying Jurgen's answer above to operate on a specific buffer gives the following function, which will send the region and then switch to the buffer, displaying it in another window; the buffer named *PYTHON* is used for illustration. The target buffer should already be running a shell. ``` (defun p-send(start end) ...
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
I wrote a package that sends/pipes lines or regions of code to shell processes, basically something similar to what ESS is for R. It also allows for multiple shell processes to exist, and lets you choose which one to send the region to. Have a look here: <http://www.emacswiki.org/emacs/essh>
**Update** The above (brilliant and useful) answers look a bit incomplete as of mid-2020: `sh-mode` has a function for sending shell region to non-interactive shell with output in the minibuffer called `sh-send-line-or-region-and-step`. Alternatively: click `Shell-script` in the mode bar at the bottom of the window...
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
OK, I wrote an easy bit. I will probably spend some time writing a complete minor mode. For the time being, the following function will send the current line (or the region if the mark is active). It does quite a good job for me: ``` (defun sh-send-line-or-region (&optional step) (interactive ()) (let ((proc (get-process "shell")) ...
Do you want the command to be executed automatically, or just entered into the command line in preparation? `M-x` `append-to-buffer` `RET` will enter the selected text into the specified buffer at point, but the command would not be executed by the shell. A wrapper function for that could automatically choose `*shell...
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
`M-x shell-command-on-region`, aka `M-|`
Modifying Jurgen's answer above to operate on a specific buffer gives the following function, which will send the region and then switch to the buffer, displaying it in another window; the buffer named *PYTHON* is used for illustration. The target buffer should already be running a shell. ``` (defun p-send(start end) ...
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
`M-x` `append-to-buffer` `RET`
Modifying Jurgen's answer above to operate on a specific buffer gives the following function, which will send the region and then switch to the buffer, displaying it in another window; the buffer named *PYTHON* is used for illustration. The target buffer should already be running a shell. ``` (defun p-send(start end) ...
41,565,091
I'm calling xgboost via its scikit-learn-style Python interface: ``` model = xgboost.XGBRegressor() %time model.fit(trainX, trainY) testY = model.predict(testX) ``` Some sklearn models tell you which importance they assign to features via the attribute `feature_importances`. This doesn't seem to exist for the `XGB...
2017/01/10
[ "https://Stackoverflow.com/questions/41565091", "https://Stackoverflow.com", "https://Stackoverflow.com/users/626537/" ]
How did you install xgboost? Did you build the package after cloning it from github, as described in the doc? <http://xgboost.readthedocs.io/en/latest/build.html> As in this answer: [Feature Importance with XGBClassifier](https://stackoverflow.com/questions/38212649/feature-importance-with-xgbclassifier) There al...
This may be useful for you: `xgb.plot_importance(bst)`. And this is the link: [plot](http://xgboost.readthedocs.io/en/latest/python/python_intro.html#plotting)
41,565,091
I'm calling xgboost via its scikit-learn-style Python interface: ``` model = xgboost.XGBRegressor() %time model.fit(trainX, trainY) testY = model.predict(testX) ``` Some sklearn models tell you which importance they assign to features via the attribute `feature_importances`. This doesn't seem to exist for the `XGB...
2017/01/10
[ "https://Stackoverflow.com/questions/41565091", "https://Stackoverflow.com", "https://Stackoverflow.com/users/626537/" ]
How did you install xgboost? Did you build the package after cloning it from github, as described in the doc? <http://xgboost.readthedocs.io/en/latest/build.html> As in this answer: [Feature Importance with XGBClassifier](https://stackoverflow.com/questions/38212649/feature-importance-with-xgbclassifier) There al...
This worked for me: ```py model.get_booster().get_score(importance_type='weight') ``` hope it helps
41,565,091
I'm calling xgboost via its scikit-learn-style Python interface: ``` model = xgboost.XGBRegressor() %time model.fit(trainX, trainY) testY = model.predict(testX) ``` Some sklearn models tell you which importance they assign to features via the attribute `feature_importances`. This doesn't seem to exist for the `XGB...
2017/01/10
[ "https://Stackoverflow.com/questions/41565091", "https://Stackoverflow.com", "https://Stackoverflow.com/users/626537/" ]
This worked for me: ```py model.get_booster().get_score(importance_type='weight') ``` hope it helps
This may be useful for you: `xgb.plot_importance(bst)`. And this is the link: [plot](http://xgboost.readthedocs.io/en/latest/python/python_intro.html#plotting)
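The two answers above produce a feature-importance mapping; once you have the dict from `model.get_booster().get_score(importance_type='weight')`, ranking the features is plain Python. A minimal sketch using a hard-coded stand-in dict, since no trained model is assumed here (with a real model the keys would be your feature names):

```python
# Stand-in for the dict returned by get_score(importance_type='weight');
# with a real model, keys are feature names and values are split counts.
scores = {"f0": 12, "f2": 5, "f1": 30}

# Rank features by importance, most important first.
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # [('f1', 30), ('f0', 12), ('f2', 5)]
```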
73,966,292
By "Google Batch" I'm referring to the new service Google launched about a month or so ago. <https://cloud.google.com/batch> I have a Python script which takes a few minutes to execute at the moment. However with the data it will soon be processing in the next few months this execution time will go from minutes to **...
2022/10/05
[ "https://Stackoverflow.com/questions/73966292", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13379101/" ]
EDIT -- added the "55 difference columns" part at the bottom. --- Adjusting data to be column pairs: ``` df <- data.frame(matrix(sample(1:10, 200, replace = TRUE), ncol = 20, nrow = 10)) names(df) <- paste0("var", rep(1:10, each = 2), "_", rep(c("apple", "banana"))) names(df) [1] "var1_apple" "var1_banana" "var2...
I think @Tom's comment is spot-on. Restructuring the data probably makes sense if you are working with paired data. E.g.: ``` od <- names(df)[c(TRUE,FALSE)] ev <- names(df)[c(FALSE,TRUE)] data.frame( odd = unlist(df[od]), oddname = rep(od,each=nrow(df)), even = unlist(df[ev]), evenname = rep...
73,966,292
By "Google Batch" I'm referring to the new service Google launched about a month or so ago. <https://cloud.google.com/batch> I have a Python script which takes a few minutes to execute at the moment. However with the data it will soon be processing in the next few months this execution time will go from minutes to **...
2022/10/05
[ "https://Stackoverflow.com/questions/73966292", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13379101/" ]
Here is a SAS solution. As far as I understood your data looks like that: I tried to generate an example for 4 patients and 6 proteins, and two values for each protein (nasal and plasma). ``` data proteinvalues; patient=1; protein1_nas=12; protein1_plas=13; protein2_nas=6; protein2_plas=8; protein3_nas=23; protein3_pl...
I think @Tom's comment is spot-on. Restructuring the data probably makes sense if you are working with paired data. E.g.: ``` od <- names(df)[c(TRUE,FALSE)] ev <- names(df)[c(FALSE,TRUE)] data.frame( odd = unlist(df[od]), oddname = rep(od,each=nrow(df)), even = unlist(df[ev]), evenname = rep...
73,937,555
I have a folder named `deployment`, under deployment there are two sibling folders: `folder1` and `folder2`. i need to move folder2 with its sub contents to folder1 with python scrips, so from: ``` .../deployment/folder1/... /folder1/... ``` to ``` .../deployment/folder1/... /folder1/...
2022/10/03
[ "https://Stackoverflow.com/questions/73937555", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4483819/" ]
Settings are empty, maybe they are not exported correctly. Check your settings file.
I think you're not using the MongoDB find API call properly; find usually takes a filter object and an object of properties as a second argument. Check the syntax required for the find() function and you'll probably get through with it. Hope it helps. Happy coding!
41,836,353
I have a project in which I run multiple data through a specific function that `"cleans"` them. The cleaning function looks like this: Misc.py ``` def clean(my_data) sys.stdout.write("Cleaning genes...\n") synonyms = FileIO("raw_data/input_data", 3, header=False).openSynonyms() clean_genes = {} for...
2017/01/24
[ "https://Stackoverflow.com/questions/41836353", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3008400/" ]
Use a class with a \_\_call\_\_ operator. You can call objects of this class and store data between calls in the object. Some data probably can best be saved by the constructor. What you've made this way is known as a 'functor' or 'callable object'. Example: ``` class Incrementer: def __init__ (self, increment): ...
I think the cleanest way to do this would be to decorate your "`clean`" (pun intended) function with another function that provides the `synonyms` local for the function. This is IMO cleaner and more concise than creating another custom class, yet still allows you to easily change the "input\_data" file if you need to...
41,836,353
I have a project in which I run multiple data through a specific function that `"cleans"` them. The cleaning function looks like this: Misc.py ``` def clean(my_data) sys.stdout.write("Cleaning genes...\n") synonyms = FileIO("raw_data/input_data", 3, header=False).openSynonyms() clean_genes = {} for...
2017/01/24
[ "https://Stackoverflow.com/questions/41836353", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3008400/" ]
> > Like a 'memory' for the function > > > Half-way to rediscovering object-oriented programming. Encapsulate the data cleaning logic in a class, such as `DataCleaner`. Make it so that instances read synonym data once when instantiated and then retain that information as part of their state. Have the class expose...
I think the cleanest way to do this would be to decorate your "`clean`" (pun intended) function with another function that provides the `synonyms` local for the function. This is IMO cleaner and more concise than creating another custom class, yet still allows you to easily change the "input\_data" file if you need to...
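The decorator idea in the last answer can be made concrete with a closure that loads the data once at decoration time. The loader lambda and the `clean` body below are hypothetical stand-ins for the asker's `FileIO(...)` call, not their actual code:

```python
def with_synonyms(loader):
    """Decorator factory: run loader() once and inject the result as a kwarg."""
    def decorate(func):
        data = loader()  # executed once, when the function is decorated
        def wrapper(*args, **kwargs):
            return func(*args, synonyms=data, **kwargs)
        return wrapper
    return decorate

@with_synonyms(lambda: {"tp53": "TP53"})  # stand-in for the FileIO(...) load
def clean(my_data, synonyms):
    # Replace each gene with its canonical synonym, if one is known.
    return [synonyms.get(gene, gene) for gene in my_data]

print(clean(["tp53", "brca1"]))  # ['TP53', 'brca1']
```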
13,152,085
Hi, I'm trying to use regex in Python 2.7 to search for text in between two quotation marks, such as "hello there". Right now I'm using: ``` matchquotes = re.findall(r'"(?:\\"|.)*?"', text) ``` It works great but only finds quotes using this character: **"** However, I'm finding sometimes that some text that I'm parsing...
2012/10/31
[ "https://Stackoverflow.com/questions/13152085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1495000/" ]
Using character classes might work, or might break everything for you: ``` matchquotes = re.findall(r'[“”"](?:\\[“”"]|.)*?[“”"]', text) ``` If you don't care a lot about matching pairs always lining up, this will probably do what you want. The case where they use the third type inside the other two is always going t...
Depending on what other processing you are doing and where the text is coming from, it would be better to convert all quotation marks to `"` rather than handling each case.
13,152,085
Hi, I'm trying to use regex in Python 2.7 to search for text in between two quotation marks, such as "hello there". Right now I'm using: ``` matchquotes = re.findall(r'"(?:\\"|.)*?"', text) ``` It works great but only finds quotes using this character: **"** However, I'm finding sometimes that some text that I'm parsing...
2012/10/31
[ "https://Stackoverflow.com/questions/13152085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1495000/" ]
Depending on what other processing you are doing and where the text is coming from, it would be better to convert all quotation marks to `"` rather than handling each case.
I am no expert, but for those types of 'fancy' quotes I would first get their codes, which are like **\xe2\x80\x9c** or **\u2019**, from a table. Then I would try to match them by writing their regex codes. For that purpose this might be helpful: <http://www.regular-expressions.info/refunicode.html> I hope it helps!
13,152,085
Hi, I'm trying to use regex in Python 2.7 to search for text in between two quotation marks, such as "hello there". Right now I'm using: ``` matchquotes = re.findall(r'"(?:\\"|.)*?"', text) ``` It works great but only finds quotes using this character: **"** However, I'm finding sometimes that some text that I'm parsing...
2012/10/31
[ "https://Stackoverflow.com/questions/13152085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1495000/" ]
Using character classes might work, or might break everything for you: ``` matchquotes = re.findall(r'[“”"](?:\\[“”"]|.)*?[“”"]', text) ``` If you don't care a lot about matching pairs always lining up, this will probably do what you want. The case where they use the third type inside the other two is always going t...
I am no expert, but for those types of 'fancy' quotes I would first get their codes, which are like **\xe2\x80\x9c** or **\u2019**, from a table. Then I would try to match them by writing their regex codes. For that purpose this might be helpful: <http://www.regular-expressions.info/refunicode.html> I hope it helps!
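A minimal sketch combining the suggestions above: a character class that covers straight and curly quotes (shown on Python 3; on Python 2.7 the pattern and text would need `u''` unicode literals):

```python
import re

text = 'He said “hello there” and then "goodbye".'

# [“”"] matches a straight or curly quote; .*? is non-greedy, so each
# match closes at the nearest following quote character.
matches = re.findall(r'[“”"].*?[“”"]', text)
print(matches)  # ['“hello there”', '"goodbye"']
```

As the first answer notes, this does not enforce that opening and closing quotes pair up by kind.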
71,232,402
First of all, thank you for the time you took to answer me. To give **a little example**, I have a huge dataset (n instances, 3 features) like that: `data = np.array([[7.0, 2.5, 3.1], [4.3, 8.8, 6.2], [1.1, 5.5, 9.9]])` It's labeled in another array: `label = np.array([0, 1, 0])` **Questions**: 1. I know that I c...
2022/02/23
[ "https://Stackoverflow.com/questions/71232402", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18285419/" ]
You could start from an easier variant of this problem: ***Given `arr` and its label, could you find the minimum and maximum values of `arr` items in each group of labels?*** For instance: ``` arr = np.array([55, 7, 49, 65, 46, 75, 4, 54, 43, 54]) label = np.array([1, 3, 2, 0, 0, 2, 1, 1, 1, 2]) ``` Then you woul...
It has already been answered; you can go to this link for your answer: [python numpy access list of arrays without for loop](https://stackoverflow.com/questions/36530446/python-numpy-access-list-of-arrays-without-for-loop)
50,221,468
This question comes from the "Automate the Boring Stuff with Python" book. ``` atRegex1 = re.compile(r'\w{1,2}at') atRegex2 = re.compile(r'\w{1,2}?at') atRegex1.findall('The cat in the hat sat on the flat mat.') atRegex2.findall('The cat in the hat sat on the flat mat.') ``` I thought the question mark ? should c...
2018/05/07
[ "https://Stackoverflow.com/questions/50221468", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8370526/" ]
There's nothing wrong (that is, ordinary assignment in P6 is designed to do as it has done) but at a guess you were hoping that making the structure on the two sides the same would result in `$a` getting `1`, `$b` getting `2` and `$c` getting `3`. For that, you want "binding assignment" (aka just "binding"), not ordin...
If you want to have the result be `1, 2, 3`, you must `Slip` the list: ``` my ($a, $b, $c) = |(1, 2), 3; ``` This is a consequence of the single argument rule: <https://docs.raku.org/type/Signature#Single_Argument_Rule_Slurpy> This is also why this just works: ``` my ($a, $b, $c) = (1, 2, 3); ``` Even though `(1...
50,221,468
This question comes from the "Automate the Boring Stuff with Python" book. ``` atRegex1 = re.compile(r'\w{1,2}at') atRegex2 = re.compile(r'\w{1,2}?at') atRegex1.findall('The cat in the hat sat on the flat mat.') atRegex2.findall('The cat in the hat sat on the flat mat.') ``` I thought the question mark ? should c...
2018/05/07
[ "https://Stackoverflow.com/questions/50221468", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8370526/" ]
If you want to have the result be `1, 2, 3`, you must `Slip` the list: ``` my ($a, $b, $c) = |(1, 2), 3; ``` This is a consequence of the single argument rule: <https://docs.raku.org/type/Signature#Single_Argument_Rule_Slurpy> This is also why this just works: ``` my ($a, $b, $c) = (1, 2, 3); ``` Even though `(1...
You are asking "What's wrong here?", and I would say some variant of [the single argument rule](https://docs.raku.org/syntax/Single%20Argument%20Rule) is at work. Since parentheses are only used here for grouping, what's going on is this assignment ``` ($a, $b), $c = (1, 2), 3 ``` `(1, 2), 3` are behaving as a singl...
50,221,468
This question comes from the "Automate the Boring Stuff with Python" book. ``` atRegex1 = re.compile(r'\w{1,2}at') atRegex2 = re.compile(r'\w{1,2}?at') atRegex1.findall('The cat in the hat sat on the flat mat.') atRegex2.findall('The cat in the hat sat on the flat mat.') ``` I thought the question mark ? should c...
2018/05/07
[ "https://Stackoverflow.com/questions/50221468", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8370526/" ]
There's nothing wrong (that is, ordinary assignment in P6 is designed to do as it has done) but at a guess you were hoping that making the structure on the two sides the same would result in `$a` getting `1`, `$b` getting `2` and `$c` getting `3`. For that, you want "binding assignment" (aka just "binding"), not ordin...
You are asking "What's wrong here?", and I would say some variant of [the single argument rule](https://docs.raku.org/syntax/Single%20Argument%20Rule) is at work. Since parentheses are only used here for grouping, what's going on is this assignment ``` ($a, $b), $c = (1, 2), 3 ``` `(1, 2), 3` are behaving as a singl...
66,779,282
I would like to print the rating results for different users in separate arrays. It can be solved by creating many arrays, but I don't want to do so, because I have a lot of users in my JSON file, so how can I do this programmatically? Python code: ``` with open('/content/user_data.json') as f: rating = [] js = json....
2021/03/24
[ "https://Stackoverflow.com/questions/66779282", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10275606/" ]
Don't only append all ratings to one list, but create a list for every user: ```py with open('a.json') as f: ratings = [] #to store ratings of all users js = json.load(f) for a in js['Rating']: rating = [] #to store ratings of a single user for rate in a['rating']: rating.append(rate['rating']) ra...
A simple one-liner ``` all_ratings = [list(map(lambda x: x['rating'], r['rating'])) for r in js['Rating']] ``` Explanation ``` all_ratings = [ list( # Converts map to list map(lambda x: x['rating'], r['rating']) # Get attribute from list of dict ) for r in j...
66,779,282
I would like to print the rating results for different users in separate arrays. It can be solved by creating many arrays, but I don't want to do so, because I have a lot of users in my JSON file, so how can I do this programmatically? Python code: ``` with open('/content/user_data.json') as f: rating = [] js = json....
2021/03/24
[ "https://Stackoverflow.com/questions/66779282", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10275606/" ]
Don't only append all ratings to one list, but create a list for every user: ```py with open('a.json') as f: ratings = [] #to store ratings of all users js = json.load(f) for a in js['Rating']: rating = [] #to store ratings of a single user for rate in a['rating']: rating.append(rate['rating']) ra...
You can just indent the print into the outer for loop and reset the array every time. ``` for user in js["Rating"]: temp_array = [] for rate in user["rating"]: temp_array.append(rate["rating"]) print(temp_array) ```
66,779,282
I would like to print the rating results for different users in separate arrays. It can be solved by creating many arrays, but I don't want to do so, because I have a lot of users in my JSON file, so how can I do this programmatically? Python code: ``` with open('/content/user_data.json') as f: rating = [] js = json....
2021/03/24
[ "https://Stackoverflow.com/questions/66779282", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10275606/" ]
A simple one-liner ``` all_ratings = [list(map(lambda x: x['rating'], r['rating'])) for r in js['Rating']] ``` Explanation ``` all_ratings = [ list( # Converts map to list map(lambda x: x['rating'], r['rating']) # Get attribute from list of dict ) for r in j...
You can just indent the print into the outer for loop and reset the array every time. ``` for user in js["Rating"]: temp_array = [] for rate in user["rating"]: temp_array.append(rate["rating"]) print(temp_array) ```
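All three answers share the same shape: one inner list per user. A self-contained sketch with a hard-coded stand-in for the loaded JSON (the structure under `js['Rating']` is assumed from the snippets above):

```python
# Stand-in for js = json.load(f)
js = {"Rating": [
    {"rating": [{"rating": 5}, {"rating": 3}]},
    {"rating": [{"rating": 4}]},
]}

# One list of ratings per user.
ratings = [[r["rating"] for r in user["rating"]] for user in js["Rating"]]
print(ratings)  # [[5, 3], [4]]
```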
54,623,084
I'm trying to create a function in python that will print out the anagrams of words in a text file using dictionaries. I've looked at what feels like hundreds of similar questions, so I apologise if this is a repetition, but I can't seem to find a solution that fits my issue. I understand what I need to do (at least, ...
2019/02/11
[ "https://Stackoverflow.com/questions/54623084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11041870/" ]
I'm not sure about the output format. In my implementation, all anagrams are printed out in the end. ``` with open('words.txt', 'r') as fp: line = fp.readlines() def make_anagram_dict(line): d = {} # avoid using 'dict' as variable name for word in line: word = word.lower() # call lower() only o...
Your code is pretty much there, just needs some tweaks: ``` import re def make_anagram_dict(words): d = {} for word in words: word = word.lower() # call lower() only once key = ''.join(sorted(word)) # make the key if key in d: # check if it's in dictionary already ...
54,623,084
I'm trying to create a function in python that will print out the anagrams of words in a text file using dictionaries. I've looked at what feels like hundreds of similar questions, so I apologise if this is a repetition, but I can't seem to find a solution that fits my issue. I understand what I need to do (at least, ...
2019/02/11
[ "https://Stackoverflow.com/questions/54623084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11041870/" ]
I'm not sure about the output format. In my implementation, all anagrams are printed out in the end. ``` with open('words.txt', 'r') as fp: line = fp.readlines() def make_anagram_dict(line): d = {} # avoid using 'dict' as variable name for word in line: word = word.lower() # call lower() only o...
I'm going to make the assumption you're grouping words within a file which are anagrams of each other. If, on the other hand, you're being asked to find all the English-language anagrams for a list of words in a file, you will need a way of determining what is or isn't a word. This means you either need an actual "dic...
54,623,084
I'm trying to create a function in python that will print out the anagrams of words in a text file using dictionaries. I've looked at what feels like hundreds of similar questions, so I apologise if this is a repetition, but I can't seem to find a solution that fits my issue. I understand what I need to do (at least, ...
2019/02/11
[ "https://Stackoverflow.com/questions/54623084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11041870/" ]
I'm going to make the assumption you're grouping words within a file which are anagrams of each other. If, on the other hand, you're being asked to find all the English-language anagrams for a list of words in a file, you will need a way of determining what is or isn't a word. This means you either need an actual "dic...
Your code is pretty much there, just needs some tweaks: ``` import re def make_anagram_dict(words): d = {} for word in words: word = word.lower() # call lower() only once key = ''.join(sorted(word)) # make the key if key in d: # check if it's in dictionary already ...
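The sorted-letters key used in all three answers can be sketched end to end; the in-memory word list here stands in for the asker's `words.txt`:

```python
words = ["listen", "silent", "enlist", "google", "inlets", "banana"]

groups = {}
for word in words:
    key = "".join(sorted(word.lower()))  # anagrams share the same sorted key
    groups.setdefault(key, []).append(word)

# Keys with more than one word are the anagram groups.
anagrams = [group for group in groups.values() if len(group) > 1]
print(anagrams)  # [['listen', 'silent', 'enlist', 'inlets']]
```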
57,876,971
I have a project for one of my college classes that requires me to pull all URLs from a page on the U.S. census bureau website and store them in a CSV file. For the most part I've figured out how to do that but for some reason when the data gets appended to the CSV file, all the entries are being inserted horizontally....
2019/09/10
[ "https://Stackoverflow.com/questions/57876971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10912876/" ]
You can try: ``` colSums(df1[,2:4]>0) ``` Output: ``` var1 var2 var3 4 4 5 ```
One brute-force solution is with the `apply` function: ``` apply(df1[ ,2:ncol(df1)], 2, function(x){sum(x != 0)}) ```
57,876,971
I have a project for one of my college classes that requires me to pull all URLs from a page on the U.S. census bureau website and store them in a CSV file. For the most part I've figured out how to do that but for some reason when the data gets appended to the CSV file, all the entries are being inserted horizontally....
2019/09/10
[ "https://Stackoverflow.com/questions/57876971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10912876/" ]
One brute-force solution is with the `apply` function: ``` apply(df1[ ,2:ncol(df1)], 2, function(x){sum(x != 0)}) ```
Assuming negative occurrences are impossible, summing the signs works. ``` colSums(sign(df1[names(df1) != "ID"])) ```
57,876,971
I have a project for one of my college classes that requires me to pull all URLs from a page on the U.S. census bureau website and store them in a CSV file. For the most part I've figured out how to do that but for some reason when the data gets appended to the CSV file, all the entries are being inserted horizontally....
2019/09/10
[ "https://Stackoverflow.com/questions/57876971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10912876/" ]
You can try: ``` colSums(df1[,2:4]>0) ``` Output: ``` var1 var2 var3 4 4 5 ```
A `dplyr` variant could be: ``` df %>% summarise_at(-1, ~ sum(. != 0)) var1 var2 var3 1 4 4 5 ```
57,876,971
I have a project for one of my college classes that requires me to pull all URLs from a page on the U.S. census bureau website and store them in a CSV file. For the most part I've figured out how to do that but for some reason when the data gets appended to the CSV file, all the entries are being inserted horizontally....
2019/09/10
[ "https://Stackoverflow.com/questions/57876971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10912876/" ]
You can try: ``` colSums(df1[,2:4]>0) ``` Output: ``` var1 var2 var3 4 4 5 ```
Assuming negative occurrences are impossible, summing the signs works. ``` colSums(sign(df1[names(df1) != "ID"])) ```
57,876,971
I have a project for one of my college classes that requires me to pull all URLs from a page on the U.S. census bureau website and store them in a CSV file. For the most part I've figured out how to do that but for some reason when the data gets appended to the CSV file, all the entries are being inserted horizontally....
2019/09/10
[ "https://Stackoverflow.com/questions/57876971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10912876/" ]
A `dplyr` variant could be: ``` df %>% summarise_at(-1, ~ sum(. != 0)) var1 var2 var3 1 4 4 5 ```
Assuming negative occurrences are impossible, summing the signs works. ``` colSums(sign(df1[names(df1) != "ID"])) ```
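For readers following along in Python rather than R, a stdlib analog of the answers above (counting non-zero entries per column) might look like this; the column names and values are made up:

```python
# Stand-in for df1 with the ID column already dropped.
rows = [
    {"var1": 1, "var2": 0, "var3": 2},
    {"var1": 3, "var2": 5, "var3": 0},
    {"var1": 0, "var2": 2, "var3": 4},
]

# Count non-zero values per column, like colSums(df1[, 2:4] > 0).
counts = {col: sum(1 for row in rows if row[col] != 0) for col in rows[0]}
print(counts)  # {'var1': 2, 'var2': 2, 'var3': 2}
```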
48,402,276
I am taking a Udemy course. The problem I am working on is to take two strings and determine if they are 'one edit away' from each other. That means you can make a single change -- change one letter, add one letter, delete one letter -- from one string and have it become identical to the other. Examples: ``` s1a = "a...
2018/01/23
[ "https://Stackoverflow.com/questions/48402276", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7535419/" ]
This fails to pass this test, because you only look at *unique characters*: ``` >>> s1 = 'abc' >>> s2 = 'bcc' >>> set(s1).symmetric_difference(s2) {'a'} ``` That's a set of length 1, but there are **two** characters changed. By converting to a set, you only see that there is at least one `'c'` character in the `s2` ...
Here's a solution using differences found by list comprehension. ``` def one_away(s1, s2): diff1 = [el for el in s1 if el not in s2] diff2 = [el for el in s2 if el not in s1] if len(diff1) < 2 and len(diff2) < 2: return True return False ``` Unlike a set-based solution, this one doesn't lose ...
48,402,276
I am taking a Udemy course. The problem I am working on is to take two strings and determine if they are 'one edit away' from each other. That means you can make a single change -- change one letter, add one letter, delete one letter -- from one string and have it become identical to the other. Examples: ``` s1a = "a...
2018/01/23
[ "https://Stackoverflow.com/questions/48402276", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7535419/" ]
This fails to pass this test, because you only look at *unique characters*: ``` >>> s1 = 'abc' >>> s2 = 'bcc' >>> set(s1).symmetric_difference(s2) {'a'} ``` That's a set of length 1, but there are **two** characters changed. By converting to a set, you only see that there is at least one `'c'` character in the `s2` ...
Here is a solution to 'one away' where a set is used to find the unique characters. It is not done completely with sets, but a set is used to find the unique characters in the two given strings. Lists are used as stacks to pop items from both strings and then compare them. Using the stacks, pop items from both and see if they match. Find th...
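The last answer's description is truncated; a common two-pointer sketch of the 'one edit away' check (a standard approach, not necessarily the answerer's exact code) handles replacement, insertion, and deletion in one pass:

```python
def one_away(s1, s2):
    # Make s1 the shorter string; a length gap above 1 fails immediately.
    if len(s1) > len(s2):
        s1, s2 = s2, s1
    if len(s2) - len(s1) > 1:
        return False
    i = j = 0
    edited = False
    while i < len(s1) and j < len(s2):
        if s1[i] != s2[j]:
            if edited:
                return False  # second mismatch: more than one edit
            edited = True
            if len(s1) == len(s2):
                i += 1  # same length: treat the mismatch as a replacement
        else:
            i += 1
        j += 1
    return True

print(one_away("abcde", "abfde"))  # True (one replacement)
print(one_away("abc", "bcc"))      # False (two edits needed)
```

Note that this avoids the set pitfall called out in the first answer: `'abc'` vs `'bcc'` is correctly rejected even though their symmetric difference has only one element.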