doc_1200
I have to read from CSV files, each containing lines in one column, and write them to TXT files in the same way.
import io
import csv
import os
import numpy as np
f = io.open(file, mode="r", encoding="utf-8")
lines = f.readlines()
np.savetxt(filename+'.txt', lines, delimiter="", newline='\n', fmt="%s")
This causes an extra empty line to be added between every two lines. In the output I noticed there is one space at the end of each number; maybe that's the cause. This is how the output ends up looking, with one blank line between every two lines. But I don't know how to resolve it. Could someone help?
A: This is because f.readlines() includes the '\n' at the end of each line, so there's no need to add another newline character when you use np.savetxt. To resolve this, simply change your command to
np.savetxt(filename+'.txt', lines, delimiter="", newline='', fmt="%s")
A: Both io.open and np.savetxt have settings for the newline character. They are both applied and result in two "new lines" between each line. You can resolve the issue by suppressing the newline character in np.savetxt:
np.savetxt(filename+'.txt', lines, delimiter="", newline='', fmt="%s")
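A minimal sketch of the behaviour described above (the file names here are placeholders, and the input strings are assumed to already end with '\n' exactly as readlines() returns them):
import numpy as np

lines = ["1 \n", "2 \n", "3 \n"]                          # readlines() keeps the trailing '\n' (and any trailing space)
np.savetxt("doubled.txt", lines, newline='\n', fmt="%s")  # writes a second '\n' -> blank line between rows
np.savetxt("clean.txt", lines, newline='', fmt="%s")      # keeps only the '\n' already inside each string
Stripping the strings first (for example [l.rstrip() for l in lines]) with the default newline also works and removes the trailing space mentioned in the question.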
doc_1201
ssh -p 2899 [ssh_user]@[ssh_server] -L 3306:localhost:3306 -N Then, I enter the password and it gives me the tunnel. When I go to my mysql client (navicat in this case), I create this connection: server: localhost port: 3306 user: [local_user] pass: [local_user_pass] When I connect the tunnel works and I contact with the ssh server, and the schemas are shown and so on. However when I use these params using Zend, I obtain the error "SQLSTATE[HY000] [1044]". resources.db.adapter = "pdo_mysql" resources.db.params.host = "localhost" resources.db.params.username = "[local_user]" resources.db.params.password = "[local_user_pass]" resources.db.params.dbname = "[schema_name]" resources.db.params.charset = "utf8" resources.db.isDefaultTableAdapter = true Why I can access with my mysql client but not with my Zend application?
doc_1202
I'm really interested in trying EF, although I can't seem to find a tutorial that fits in with the way I do my BLL and DAL classes, so would appreciate a pointer in the right direction. Basically if I have a Gift, I would create a Gift class (BLL\Gift.cs): using MyProject.DataAccessLayer; namespace MyProject.BusinessLogicLayer { public class Gift { public int GiftID { get; set; } public string GiftName { get; set; } public string Description { get; set; } public decimal Price { get; set; } public static Gift GetGiftByID(int GiftID) { GiftDAL dataAccessLayer = new GiftDAL(); return dataAccessLayer.GiftsSelectByID(GiftID); } public void DeleteGift(Gift myGift) { GiftDAL dataAccessLayer = new GiftDAL(); dataAccessLayer.DeleteGift(myGift); } public bool UpdateGift(Gift myGift) { GiftDAL dataAccessLayer = new GiftDAL(); return dataAccessLayer.UpdateGift(myGift); } public int InsertGift(string GiftName, string Description, decimal Price) { Gift myGift = new Gift(); myGift.GiftName = GiftName; myGift.Description = Description; myGift.Price = Price; GiftDAL dataAccessLayer = new GiftDAL(); return dataAccessLayer.InsertGift(myGift); } } } I then have a DAL class which holds my connection string (DAL\sqlDAL.css): namespace MyProject.DataAccessLayer { public class SqlDataAccessLayer { public readonly string _connectionString = string.Empty; public SqlDataAccessLayer() { _connectionString = WebConfigurationManager.ConnectionStrings["SQLConnectionString"].ConnectionString; if (string.IsNullOrEmpty(_connectionString)) { throw new Exception("No database connection String found"); } } } } and then a DAL class (DAL\giftDAL.cs) where I've shown a couple of the methods (Update and Delete): using MyProject.BusinessLogicLayer; namespace MyProject.DataAccessLayer { public class GiftDAL : SqlDataAccessLayer { public bool UpdateGift(Gift GifttoUpdate) { string UpdateString = ""; UpdateString += "UPDATE Gifts SET"; UpdateString += "GiftName = @GiftName"; UpdateString += ",Description = @Description "; UpdateString += ",Price = @Price "; UpdateString += " WHERE GiftID = @GiftID"; int RowsAffected = 0; try { using (SqlConnection con = new SqlConnection(_connectionString)) { using (SqlCommand cmd = new SqlCommand(UpdateString, con)) { cmd.Parameters.AddWithValue("@GiftName", GifttoUpdate.GiftName); cmd.Parameters.AddWithValue("@Description", GifttoUpdate.Description); cmd.Parameters.AddWithValue("@Price ", GifttoUpdate.Price); cmd.Parameters.AddWithValue("@GiftID", GifttoUpdate.GiftID); con.Open(); RowsAffected = cmd.ExecuteNonQuery(); } } } catch (Exception ex) { Utils.LogError(ex.Message, ex.InnerException == null ? "N/A" : ex.InnerException.Message, ex.StackTrace); } return (RowsAffected == 1); } public void DeleteGift(Gift GifttoDelete) { string DeleteString = ""; DeleteString += "DELETE FROM GIFTS WHERE GIFTID = @GiftID"; try { using (SqlConnection con = new SqlConnection(_connectionString)) { using (SqlCommand cmd = new SqlCommand(DeleteString, con)) { cmd.Parameters.AddWithValue("@GiftID", GifttoDelete.GiftID); con.Open(); cmd.ExecuteNonQuery(); } } } catch (Exception ex) { Utils.LogError(ex.Message, ex.InnerException == null ? "N/A" : ex.InnerException.Message, ex.StackTrace); } } } } So looking at that, how would you recommend I improve the code (if I continue to use ADO.NET) and what would my next step be to learn EF - or is there a better alternative? 
Cheers, Robbie A: If you want to stick with ADD.NET then why don't you look at Data Application Block from Microsoft Enterprise Library (current version is 5.0 May 2011) - it will allow you write vendor (MS-SQL/Oracle etc) neutral code easily and most of the boiler-plate coding get wrapped. This is probably a simplest/shortest tutorial that I could find to get you started. However, MSDN link has plenty of information and see key scenario sections to jump-start. Another suggestion is to use TransactionScope for managing transactions (instead of working directly with DbTransaction object). Said all that I will recommend using Entity Framework (or any similar OR mapper tool - e.g. check NHibernet) because then you don't have to write typical code for basic CRUD operations. As far as your Dilemma goes, here is basic code snippet to get you started - I am using EF 4.1 with Code-First approach, POCO entities and Fluent API: Entity: public class Gift { public int Id { get; set; } public string Name { get; set; } public string Description { get; set; } public decimal Price { get; set; } } Data Access Layer: public class MyDbContext : DbContext { public DbSet<Gift> Gifts { get; set; } public MyDbContext () : base("name=[ConnStringName]") {} protected override void OnModelCreating(DbModelBuilder modelBuilder) { // Fluent API to provide mapping - you may use attributes in entity class var giftConfig = modelBuilder.Entity<Gift>(); giftConfig.Property(p => p.Id).HasColumnName("GiftID"); giftConfig.Property(p => p.Name).HasColumnName("GiftName"); giftConfig.Property(p => p.Description).HasColumnName("Description"); giftConfig.Property(p => p.Price).HasColumnName("Price"); giftConfig.HasKey(p => p.Id); base.OnModelCreating(modelBuilder); } } Business Layer: public static class GiftManager { public static Gift GetById(int id) { using(var db = new MyDbContext()) { return db.Gifts.Find(id); } } public static void Add(Gift gift) { using(var db = new MyDbContext()) { // do validation ... db.Gifts.Add(gift); // do auditing ... db.SaveChanges(); } } public static void Update(Gift gift) { using(var db = new MyDbContext()) { // do validation ... var entity = db.Sessions.Find(gift.Id); entity.Name = gift.Name; entity.Description = gift.Description; entity.Price = gift.Price; // do auditing ... db.SaveChanges(); } } } A: One thing that's always important (to me) is how testable a class is. I see a lot of explicit object construction in your code. Your Gift BL class explicitly depends on GiftDAL, which makes it very difficult to test the Gift class. Try to reduce the coupling between classes by making an abstraction of GiftDAL (e.g. an interface) and provide that to Gift from the outside (Dependency Injection). A great book about good software design principle is Clean Code by Robert C. Martin. He establishes the SOLID principles. Check it out! Also, be aware that you are now including persistence within your business logic class (or domain model as it is also called). This can be done (Active Record), but often people choose for a different approach nowadays where they separate their domain model from any infrastructure. The broad idea is: the fact that the objects need somehow be stored is important, but not important for the business logic itself, so those to concerns should be separated where possible. Often an Object Relational Mapper, for .NET NHibernate or Entity Framework are two examples for OR mappers. A: I have a dal class whose methods return only bindable objects like datatable, List, etc. 
Nothing more, nothing less. Then all the business logic happens naturally in the code-behind of an ASP.NET application. Making backing objects is just too much work in most cases and overkill. I am comfortable with DataTables and lists.
doc_1203
function hello(){console.log("Hello World!");}
Then I assign it to a variable:
newHello = hello;
In the console I get the function definition for both the hello and newHello functions:
>newHello = hello;
ƒ hello(){console.log("Hello World!");}
>hello;
ƒ hello(){console.log("Hello World!");}
After that I change the hello function and assign it to a new variable as below:
function hello(){console.log("Hello World Changed!");}
newNewHello=hello;
I get the function definition for the hello, newHello and newNewHello functions and receive the following results:
>hello
ƒ hello(){console.log("Hello World Changed!");}
>newHello
ƒ hello(){console.log("Hello World!");}
>newNewHello
ƒ hello(){console.log("Hello World Changed!");}
Why is the name of these functions equal while their bodies are not, and how does this work in the background in the JavaScript language?
A: When you write newHello = hello;, you aren't saying "From now on, newHello will be a reference to whatever may be named hello at some point in the future when I look at the newHello variable again." Instead, you are saying "newHello will now become a reference to whatever is named hello right now." So when you later change hello to be something else, it doesn't affect newHello at all. newHello is still a reference to whatever you assigned to it previously. Put another way, after that first assignment, you don't have this (where I'm using → to mean "is a reference to"):
newHello → hello → (your original function)
Instead you have this:
hello → (your original function)
newHello → (your original function)
In other words, both hello and newHello refer directly to the original function. So changing hello to reference a different function doesn't affect newHello.
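The snippets above are JavaScript, but the same name-versus-object distinction can be sketched in Python; this is only an analogy, not part of the original console session:
def hello():
    print("Hello World!")

new_hello = hello            # new_hello now points at the function object itself, not at the name 'hello'

def hello():                 # rebinds the NAME 'hello' to a brand-new function object
    print("Hello World Changed!")

new_new_hello = hello

new_hello()                  # Hello World!           (still the first object)
new_new_hello()              # Hello World Changed!   (the new object)
print(new_hello.__name__)    # 'hello' -- the stored function keeps its original name, which mirrors
                             # why the console prints "ƒ hello(){...}" for both versions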
doc_1204
From my understanding, We first need to clean the data (remove duplicates, handle null,...) visualise the data then feature selection -(make new features) so are we supposed to split the data after feature selection and then start with modelling? I am really confused! Thanks a lot! A: As you wrote some of them there, the lifecycle of Machine Learning is like below in my point; * *Collecting the data *Study on the collected data, which features are categorical which features are numerical etc. (learn the data types) *Begin to data manipulation / cleaning doing such as removing duplicates, outliers, highly correlated things (i.e. if there are two features male and female remove one of them because if you are not male, you are 100% female.) *Visualize your data to observe outliers, correlations etc. *If you have categorical datas you need convert them to numerical *Separate the dependent and independent features. *Feature selection, choose some of most important features *Decide what to do based on your how many samples do you have. *If it is not too much, it means every samples/records are important for you and consider cross validation *After splitting the data, check the datas again. If your features have different units and there are big differences between them, you should consider to do "Normalization or Standardization" methods to use same units/scales *Everything has been done. Decide which evaluation metrics you want to choose. Define your goals on the projects. What do you want? *Then choose the models. After fitting and predicting process, check your evaluation models, scores. Which one has the highest score? (While doing this, I suggest you to count the time. Time is really important factor. You should consider it too.) One accurate measurement is worth more than a thousand expert opinions You can check one of my project in github here => https://github.com/erolerdogan/Property-Maintenance-Fines I hope these steps help you to understand more. These are my point. If someone edits, adds or shows my error, I would be greatly happy. Thanks
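To make the split/scaling ordering concrete, here is a small hedged sketch with scikit-learn; the generated dataset is just a stand-in for your own cleaned data:
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)   # placeholder data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Putting the scaler inside a pipeline means it is fit only on the training folds,
# so nothing from the test set leaks into the preprocessing step.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print(cross_val_score(model, X_train, y_train, cv=5).mean())   # cross-validation on the training part only

model.fit(X_train, y_train)
print(model.score(X_test, y_test))                             # final check on the held-out test set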
doc_1205
input parameters to jenkins job are defined in a property file. property file location changes based on environment in which jenkins is running ex: for dev environment path be like /app/dev/some/nested/path/propertyfile for prod environment path be like /app/prod/some/nested/path/propertyfile presently using extended choice parameter plugin to read property file. this works well if path to property file is absolute. Problem: is there a way to include global env variable in property file path? can it be done using active-choices plugin? A: using active-choices plugin def choices=[] textFile= new File("/app/${Gloabl_Var}/some/nested/path/propertyfile") textFile.eachLine{ line -> if(line.startsWith('<Property-Key-Name>')) { line.split('=')[1].split(',').each { choices.add(it) }}} return choices using extended choice parameter plugin can't use property file directly, will need to use groovy script import hudson.slaves.EnvironmentVariablesNodeProperty import jenkins.model.Jenkins def global_env_var=Jenkins.get().globalNodeProperties.get(EnvironmentVariablesNodeProperty.class).envVars['<Global_Var>'] def props = new Properties() def stream = new FileInputStream("/app/${global_env_var}/some/nested/path/propertyfile") try { props.load(stream) } finally { stream.close() } return props.getProperty('<Property-Key-Name>').split(',') References: extended-choice active-choices
doc_1206
My program is basically calculating the percent change for cars sold in 2016 and 2017. To test it out I did cars sold in 2016 = 7 and cars sold in 2017 = 12 and I got a really long number. I know that you use (“p”) or (“P”) to format the number but I just can’t figure out where to put it? private void calcbtn_Click(object sender, EventArgs e) { double carsIn2016; // number of cars sold in 2016 double carsIn2017; // number of cars sold in 2017 double Percentchanged; // calculate the % change carsIn2016 = double.Parse(soldIn2016txtbox.Text); //get input from text box carsIn2017 = double.Parse(soldIn2017txtbox.Text); // get input from text box Percentchanged =(carsIn2017 - carsIn2016) / (carsIn2016 * 100); // calculate the % change MessageBox.Show( "Your total % change is " + Percentchanged); } A: Precentage Difference: Work out the difference (increase) between the two numbers you are comparing. Then: divide the increase by the original number and multiply the answer by 100 Since you are calculating precent before printing. If you use formatter then you will get bad result. You just need a number for string formatter, to convert it to precent. Dont multiply by 100 and use formatter. You actually have the precent in PrecentChanged variable. double carsIn2016; // number of cars sold in 2016 double carsIn2017; // number of cars sold in 2017 double Percentchanged; // calculate the % change carsIn2016 = double.Parse("7"); //get input from text box carsIn2017 = double.Parse("12"); // get input from text box Percentchanged =((carsIn2017 - carsIn2016) / (carsIn2016));//* 100); // calculate the % change var output = String.Format("Your total % change is : {0:P}.", Percentchanged); Output 71.43% If you multiply by 100 and then use formatter Output Value: 7,142.86%. A: What you're looking for is ToString(), which can be used with the "P" that you indicate. MessageBox.Show("Your total % change is " + PercentChanged.ToString("P")); You can also do various other formatting via ToString, as indicated on Microsoft's Website
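The arithmetic itself is easy to sanity-check; a quick sketch in Python, used here only because the numbers are language-independent:
cars_2016, cars_2017 = 7, 12
change = (cars_2017 - cars_2016) / cars_2016     # 0.7142857...
print(f"{change:.2%}")                           # 71.43%  -- the percent formatter multiplies by 100 itself
print(f"{change * 100:.2%}")                     # 7142.86% -- the inflated figure you get if you also multiply by 100 yourself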
doc_1207
ANGULAR var result = { SearchText: "PARK"}; this.httpClient.post( 'http://localhost:55063/Common/PostAddress',result ).subscribe((res: any[]) => { console.log(res); this.data = res; }); MVC public class CommonController : Controller { protected SCommon sCommon = null; public async Task<ActionResult> PostAddress(RoleModel Id) { sCommon = new SCommon(); var User = await sCommon.GetAddress(Id.SearchText).ConfigureAwait(false); return Json(User, JsonRequestBehavior.AllowGet); } } public class RoleModel { public string SearchText { get; set; } } A: try adding header to your post and stringify your body const httpOptions = { headers: new HttpHeaders({ "Content-Type": "application/json" }) }; var result = { SearchText: "PARK"}; const body = JSON.stringify(result); this.httpClient.post('http://localhost:55063/Common/PostAddress',body,httpOptions).subscribe((res: any[]) => { console.log(res); this.data = res; }); aslo enable cors like this in your Register method var cors = new EnableCorsAttribute("*", "*", "*"); config.EnableCors(cors);
doc_1208
For example in the odd function, if I change while (!(print_zero == 0 && print_odd == 1)) { cv.wait(lck); } to //doesnt work while (print_zero == 1 && print_odd == 0) { cv.wait(lck); } I don't get consistent correct behavior anymore, instead sometimes it deadlocks, sometimes it prints in a funky order. But to me it seems like it should be the same, and that the logic is identical. Does anyone know why this is the case? #include <condition_variable> #include <mutex> #include <iostream> #include <atomic> using namespace std; class ZeroEvenOdd { private: int n; mutex mtx; condition_variable cv; int print_zero, print_even, print_odd; public: ZeroEvenOdd(int n) { this->n = n; print_zero = 1; print_even = 0; print_odd = 1; } void printNumber(int x) { cout << x << endl; } // printNumber(x) outputs "x", where x is an integer. void zero( ) { for (int p = 0; p < n; p++) { std::unique_lock<std::mutex> lck(mtx); while (print_zero == 0) { cv.wait(lck); } printNumber(0); print_zero = 0; cv.notify_all(); } } void even( ) { //2 for (int j = 2; j <= n; j += 2) { std::unique_lock<std::mutex> lck(mtx); while (!(print_zero == 0 && print_even == 1)) { cv.wait(lck); } printNumber(j); print_zero = 1; print_even = 0; print_odd = 1; cv.notify_all(); } } void odd() { //3 for (int i = 1; i <= n; i += 2) { std::unique_lock<std::mutex> lck(mtx); //doesnt work //while (print_zero == 1 && print_odd == 0) { // cv.wait(lck); //} while (!(print_zero == 0 && print_odd == 1)) { cv.wait(lck); } printNumber(i); print_zero = 1; print_even = 1; print_odd = 0; cv.notify_all(); } } }; int main() { std::cout << "Hello World!\n"; ZeroEvenOdd object(10); std::thread t1(&ZeroEvenOdd::zero, &object); std::thread t3(&ZeroEvenOdd::odd, &object); std::thread t2(&ZeroEvenOdd::even, &object); t1.join(); t2.join(); t3.join(); cout << "done" << endl; }
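One point worth spelling out (it is implicit in the question): the two wait predicates are not logically equivalent. By De Morgan's law, !(print_zero == 0 && print_odd == 1) equals (print_zero != 0 || print_odd != 1), which is an OR, while the "doesn't work" version uses an AND. A quick truth-table check, sketched in Python only for brevity:
for print_zero in (0, 1):
    for print_odd in (0, 1):
        original = not (print_zero == 0 and print_odd == 1)
        attempted = (print_zero == 1 and print_odd == 0)
        if original != attempted:
            print(print_zero, print_odd, original, attempted)
# They differ for (0, 0) and (1, 1): in those states the rewritten predicate stops waiting
# too early, which is consistent with the out-of-order output and occasional deadlock.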
doc_1209
<table id="sessions"> <tbody> <tr> <td>...</td> <td>abcd</td> <td>...</td> <td> <a href="www.example.com"> Example </a> </td> </tr> <tr>...</tr> <tr>...</tr> <tr>...</tr> </tbody> </table> How do I search the table for the row that has the innerText == "abcd" in column n and then take the link in column m of the same row. I will then use the link in page.goto() I would be very grateful for any help! Edit: I have tried the following so far, but it's a little over-complicated and doesn't work const text = await page.$$eval('#sessions tr', rows => { return Array.from(rows, row => { const columns = row.querySelectorAll('td'); return cols = Array.from(columns, column => column.innerText) }); }); const links = await page.$$eval('#sessions tr', rows => { return Array.from(rows, row => { const columns = row.querySelectorAll('td'); return cols = Array.from(columns, column => column.innerHTML) }); }); for (var i = 0; i <= result.length; i++){ if (result[i][2] == "abcd"){ usefulLink = links[i][5]; break; } } Edit 2: A huge thanks to vsemozhebuty for helping, this is where I am at so far: const href = await page.evaluate(() => { const table = Array.from(document.querySelectorAll('#sessions tr')); const tr = [...table.row].find(({ cells }) => cells[0].innerText === "abcd"); if (tr) return tr.cells[1].querySelector('a').href; return null; }); A: Something like this? import puppeteer from 'puppeteer'; const browser = await puppeteer.launch(); const html = ` <!doctype html> <html> <head><meta charset='UTF-8'><title>Test</title></head> <body> <table id="sessions"> <tbody> <tr> <td>abcd</td> <td><a href="https://www.example.com">Example</a></td> </tr> <tr> <td>efgh</td> <td><a href="https://www.example.org">Example</a></td> </tr> </tbody> </table> </body> </html>`; try { const [page] = await browser.pages(); await page.goto(`data:text/html,${html}`); const href = await page.evaluate(() => { const table = document.querySelector('table'); const tr = [...table.rows].find(({ cells }) => cells[0].innerText === "abcd"); if (tr) return tr.cells[1].querySelector('a').href; return null; }); console.log(href); // https://www.example.com/ } catch (err) { console.error(err); } finally { await browser.close(); }
doc_1210
if(Auth::attempt(['email' => request('email'), 'password' => request('password')])){ echo "User Logged In"; } else if(Auth::attempt(['email' => request('email'), 'temp_password' => \Crypt::encrypt(request('password'))])) { echo "User Logged in"; } else { echo "Incorrect Credentials"; } i'm getting this error : Undefined index: password", exception: "ErrorException" if i remove else if part it is working properly. Any help is highly appreciated. A: Auth::Attempt() method expects an array with email and password indexes. @Documentation If you dont provide them, it will fail. 'password' is not a dynamic field to it, it must be password. You need to do it manually for the temp_password field if (Auth::attempt(['email' => request('email'), 'password' => request('password')])) { echo "User Logged In"; } else { $user = User::where('email', '=', request('email'))->first(); if (!$user || !Hash::check(request('password'), $user->temp_password);) { echo "Incorrect Credentials"; return; } Auth::guard()->login($user); echo "User Logged In"; }
doc_1211
Now here i have to use two languages for displaying output - one is english and one is gujarati. Somewhere i have to display mysql db data in english and somewhere in gujarati. Now i need suggestion that how could i implement such functionality? Should i change mysql server locale to gujarati or should i keep the mysql server in english locale and convert the data from gujarati to eng or eng to gujarati at frontend. Please suggest something. A: You should Change the field's collation to utf8_bin so that MySQL will stores the data in gujarati and in english properly. A: To store Gujarati data into MySQL server, you need to make sure that the database and table are set up to support Unicode characters, and that your database connection is also set up to use UTF-8 encoding. Here are the steps you can follow: * *Create a MySQL database: Log in to your MySQL server and create a database using a command like this: CREATE DATABASE your_database_name CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci; This creates a database named your_database_name with the utf8mb4 character set and utf8mb4_unicode_ci collation, which supports Gujarati characters. *Create a MySQL table: Create a table in your database that contains the fields you need to store Gujarati data. For example, you can use a command like this: CREATE TABLE your_table_name ( id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci, address VARCHAR(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci ); This creates a table named your_table_name with an id field and two fields to store Gujarati data: name and address. The CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci part of the command specifies that the name and address fields should use the utf8mb4 character set and utf8mb4_unicode_ci collation. *Insert Gujarati data into the table: To insert Gujarati data into the table, you can use a command like this: INSERT INTO your_table_name (name, address) VALUES ('ગુજરાતી નામ', 'ગુજરાતી સરનામું'); This inserts a row into the your_table_name table with the Gujarati name and address. Make sure that your database connection is also set up to use UTF-8 encoding. In PHP, you can set the connection character set to utf8mb4 using a command like this: mysqli_set_charset($connection, "utf8mb4"); Replace $connection with your database connection object. This ensures that the data is properly encoded when it is sent to and retrieved from the MySQL server. A: Into your connection file you want to include a line of code: mysql_query('set names utf8');
doc_1212
I am trying to recover the code from my classes. the project was fairly simple. I tried using dex2jar and then jd-gui on the .apk I have installed on my phone but I don't seem to be getting the same results other people are getting. There's nothing remotely close to my classes on the end product. Is there a way to recover my code? Either through reverse engineering or maybe Android Studio has some sort of function where it keeps code in a temp file or something? A: Reverse Engineering is overly complicated for recovering code in my opinion. I'm not sure how "move" command works, but you might be able to recover the files with a program dedicated to recovering lost/deleted files. Pori form has a program for that for example, but it's hard recovering if the files got overwritten and not deleted or something. Maybe you've made some backups yourself somewhere? The question is how much time and work you are willing to put into getting exactly your code back, because if simple recovery methods don't work you might be faster just rewriting your code, but it really depends on the project size.
doc_1213
I was able to make it work without SSL but now I'm struggling with a 502. I get the same result when I try to access https://localhost/ localhost:8080 (which worked without encryption before I set the proxy in jira) https://127.0.0.1 and some others. Here is the Jira connector config. <Connector port="8080" maxThreads="150" minSpareThreads="25" connectionTimeout="20000" enableLookups="false" maxHttpHeaderSize="8192" protocol="HTTP/1.1" useBodyEncodingForURI="true" redirectPort="8443" acceptCount="100" disableUploadTimeout="true" scheme="https" proxyName="localhost" proxyPort="443" /> <!-- And now the Apache VHost config sorry for newbe-like config ProxyRequests On NameVirtualHost *:443 <VirtualHost *:443> SSLEngine on SSLCertificateFile /etc/pki/tls/certs/ca.crt SSLCertificateKeyFile /etc/pki/tls/private/ca.key SSLProxyEngine on ServerName localhost ServerAlias jira.ecoledelexcellence.ca ServerAlias 192.168.0.116 ProxyRequests Off ProxyPreserveHost On # <Proxy *> # Order deny,allow # Allow from all # </Proxy> ProxyPass / https://127.0.0.1:8080/ retry=0 ProxyPassReverse / https://127.0.0.1:8080/ retry=0 <Location /> Order allow,deny Allow from all </Location> #HTTP => HTTPS rewrite RewriteEngine On RewriteCond %{HTTPS} off RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} </VirtualHost> Thanks for any hint A: into the Tomcat, you should add into the Connector config that it is a secure channel: secure="true" This tells Tomcat that even if the SSL engine is not initalized on this Connector, the incoming connections are qualified to be "secure". The proxyName should be the externally visible name of the machine, this helps if the webapp is using scheme, proxyName, and proxyPort variables to construct an URL, see: Tomcat Proxy Support http://tomcat.apache.org/tomcat-7.0-doc/config/http.html Set this attribute to true if you wish to have calls to request.isSecure() to return true for requests received by this Connector. You would want this on an SSL Connector or a non SSL connector that is receiving data from a SSL accelerator, like a crypto card, a SSL appliance or even a webserver. (Also applies to AJP Connectors) HTTP: For the ProxyPass* you don't need the "s" in the https. Also you don't need the Rewrite at the end, it will force all incoming connections to plain http.
doc_1214
col1 date 23.2 2015-01-01 23.2 2015-01-01 22.1 2015-01-01 01.2 2015-01-01 11.9 2015-01-02 12.7 2015-01-02 23.2 2015-01-02 12.4 2015-01-03 23.7 2015-01-03 34.3 2015-01-03 73.4 2015-01-04 83.2 2015-01-04 91.2 2015-01-04 and I need to 'randomly' sample from this data frame with the condition that each row sampled comes from a single date, like this: col1 date 22.1 2015-01-01 23.2 2015-01-02 23.7 2015-01-03 83.2 2015-01-04 So I don't care which row was sampled, I just want to make sure that each row has a unique date. A: Try: library(dplyr) df %>% group_by(date) %>% summarise(sample(col1, 1)) A: dd <- read.table(header = TRUE, text="col1 date 23.2 2015-01-01 23.2 2015-01-01 22.1 2015-01-01 01.2 2015-01-01 11.9 2015-01-02 12.7 2015-01-02 23.2 2015-01-02 12.4 2015-01-03 23.7 2015-01-03 34.3 2015-01-03 73.4 2015-01-04 83.2 2015-01-04 91.2 2015-01-04") @thelatemail's comment is more elegant dd[with(dd, tapply(rownames(dd),date,sample,1) ),] # col1 date # 2 23.2 2015-01-01 # 6 12.7 2015-01-02 # 9 23.7 2015-01-03 # 13 91.2 2015-01-04 or set.seed(1) do.call('rbind', by(dd, dd$date, FUN = function(x) x[sample(seq.int(nrow(x)), 1), ])) # col1 date # 2015-01-01 23.2 2015-01-01 # 2015-01-02 12.7 2015-01-02 # 2015-01-03 23.7 2015-01-03 # 2015-01-04 91.2 2015-01-04 or set.seed(1) tbl <- table(dd$date) dd[unlist(Map(function(x) sample(seq.int(x), 1), tbl)) + cumsum(c(0, head(tbl, -1))), ] # col1 date # 2 23.2 2015-01-01 # 6 12.7 2015-01-02 # 9 23.7 2015-01-03 # 13 91.2 2015-01-04 or set.seed(1) sp <- split(dd, dd$date) do.call('rbind', lapply(sp, function(x) x[sample(seq.int(nrow(x)), 1), ])) # col1 date # 2015-01-01 23.2 2015-01-01 # 2015-01-02 12.7 2015-01-02 # 2015-01-03 23.7 2015-01-03 # 2015-01-04 91.2 2015-01-04
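For readers coming from Python, the same one-random-row-per-date idea can be sketched with pandas; this is an analogy to the R answers above, not a replacement for them:
import pandas as pd

df = pd.DataFrame({
    "col1": [23.2, 23.2, 22.1, 1.2, 11.9, 12.7, 23.2, 12.4, 23.7, 34.3, 73.4, 83.2, 91.2],
    "date": ["2015-01-01"] * 4 + ["2015-01-02"] * 3 + ["2015-01-03"] * 3 + ["2015-01-04"] * 3,
})
sampled = df.groupby("date", group_keys=False).sample(n=1, random_state=1)
print(sampled)   # one randomly chosen row per unique date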
doc_1215
I've tried 'yum install php-bcmath' and got this error: Error: Package: php-mysql-5.3.3-26.el6.x86_64 (base) Requires: php-common(x86-64) = 5.3.3-26.el6 Removing: php-common-5.3.3-23.el6_4.x86_64 (@updates) php-common(x86-64) = 5.3.3-23.el6_4 Updated By: php-common-5.5.6-1.el6.remi.x86_64 (remi-php55) php-common(x86-64) = 5.5.6-1.el6.remi Available: php-common-5.3.3-26.el6.x86_64 (base) php-common(x86-64) = 5.3.3-26.el6 Available: php-common-5.5.5-2.el6.remi.x86_64 (remi-php55) php-common(x86-64) = 5.5.5-2.el6.remi Error: Package: php-gd-5.5.6-1.el6.remi.x86_64 (remi-php55) Requires: libgd.so.3()(64bit) It looks like there are some dependencies I am missing yet I am not too clued up on how to go about installing these. Does anyone have any advice? Thanks in advance. A: Ouch. Looks like you've got dueling repos there. The problem is that you've got 5.3 and 5.5 packages (looks like remi 5.5 repo). 5.3 is installed (Removing: php-common-5.3.3-23) but that repo probably doesn't have php-bcmath so yum went to remi and found it there, but that package says it needs 5.5 installed so yum is stuck trying to take out 5.3 and install 5.5 instead, but 5.5 has some other dependencies not satisfied. So, a couple of options * *Just upgrade PHP to 5.5. You might need the base remi repo to do that (I bet that libgd library is in there) *Turn off the remi 5.5 repo (edit /etc/yum.repos.d/remi.repo and set enabled=0 on the 5.5 repo). If you're not going to 5.5 you don't need it enabled anyways.
doc_1216
A: You can remove them from DOM: $('[itemprop="offers"]').contents().filter(function () { return this.nodeType == 3 && this.data.match(/Silver|Manual/); }).remove(); Or wrap them with span and hide: $('[itemprop="offers"]').contents().filter(function () { return this.nodeType == 3 && this.data.match(/Silver|Manual/); }).each(function(i, el) { $(el).replaceWith($('<span>').text(el.data).css('display', 'none')); }); A: Try this: $("SELECTOR:contains('STRING')").remove(); Where SELECTOR is equel to your div/element you want to select with the string you want to hide in (something like div table tbody tr td) and STRING is equel to the string you want to hide. JSFiddle example: https://jsfiddle.net/5tu7weby/1/ A: You can try how you did with <! in the html or you can use a span element then add css { display:none;}
doc_1217
So my question is, how can I keep on visually adding objects to the scroll view? Thanks
A: Unfortunately, there is no easy way to do this in Interface Builder. The best you can do is to increase the height of the UIScrollView, drag and drop your UI elements, then resize and reposition the view to be centered on the screen. Once you do that, don't forget to set the contentSize property of your scroll view, or else you won't be able to access anything outside the main screen. To do that, you can either use scrollView.contentSize = CGSizeMake(width, height);, or you can use the User Defined Runtime Attributes in Interface Builder. This can be found in the Identity Inspector, which has the info about the class, Storyboard ID, et cetera. Add a runtime attribute, set the Key Path to contentSize, the Type to Size, and the value to the desired content size (e.g. {width, height}).
A: You can add one UIScrollView, choose Size: Freeform, add the items you need to the scroll view, and then add the scroll view to the view controller with [self.view addSubview:scrollView]; I think this will help you.
A: I figured out a way to make the view screen bigger, and by expanding that, I was able to make the scroll view bigger and use it as a subview of the view. Thanks for the help, everyone!
doc_1218
Is there another solution for this?
A: Try this:
require 'net/http'
u = URI.parse('http://www.example.com/')
status = Net::HTTP.start(u.host, u.port).head(u.request_uri).code # status is the HTTP status code
You'll need to use rescue to catch the exception in case domain resolution fails.
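The same HEAD-request idea, sketched with Python's standard library purely for comparison with the Ruby snippet above:
import http.client

status = None
conn = http.client.HTTPConnection("www.example.com", timeout=5)
try:
    conn.request("HEAD", "/")
    status = conn.getresponse().status   # e.g. 200
except OSError:                          # covers failed DNS resolution and connection errors
    pass
finally:
    conn.close()
print(status)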
doc_1219
I tried this tutorial ( https://www.digitalocean.com/community/tutorials/how-to-perform-continuous-integration-testing-with-drone-io-on-coreos-and-docker ) and several other tutorials but i failed . can anyone show me please a simple way to build .drone.yml ! Thank you A: Note that this answer applies to drone version 0.5 You can use the Docker plugin to build and publish a Docker image at the successful completion of your build. You add the Docker plugin as a step in your build pipeline section of the .drone.yml file: pipeline: build: image: golang commands: - go build - go test publish: image: plugins/docker repo: foo/bar In many cases you will want to limit execution of the this step to certain branches. This can be done by adding runtime conditions: publish: image: plugins/docker repo: foo/bar when: branch: master You will need to provide drone with credentials to your Docker registry in order for drone to publish. These credentials can be declared directly in the yaml file, although storing these values in plain text in the yaml is generally not recommended: publish: image: plugins/docker repo: foo/bar username: johnsmith password: pa55word when: branch: master You can alternatively provide your credentials using the built-in secret store. Secrets can be added to the secret store on a per-repository basis using the Drone command line utility: export DRONE_SERVER=http://drone.server.address.com export DRONE_TOKEN=... drone secret add \ octocat/hello-world DOCKER_USERNAME johnsmith drone secret add \ octocat/hello-world DOCKER_PASSWORD pa55word drone sign octocat/hello-world Secrets are then interpolated in your yaml at rutnime: publish: image: plugins/docker repo: foo/bar username: ${DOCKER_USERNAME} password: ${DOCKER_PASSWORD} when: branch: master
doc_1220
lineChart.jsp $(function(){ $.ajax({url: "lineBar", async: false, success: function(result) { /* ${cse} */ /* ${ec} */ /* ${it} */ }); EmployeeController.java @RequestMapping("/lineBar") @ResponseBody public String lineBarChart(Model model) throws Exception { int cseCount = 4, ecCount = 5, itCount = 6; System.out.println(cseCount); model.addAttribute("cse", cseCount); model.addAttribute("ec", ecCount); model.addAttribute("it", itCount); return "lineBarChart";} Here i am including the lineChart.jsp page. <div class="col-md-6 p-1"> <jsp:include page="lineChart.jsp"/> </div>
doc_1221
Let me show you my code private static void OnMyCustomPropertyChanged(Object sender, EventArgs e) { PropertyInfo propInfo = e.GetType().GetProperty("PropName"); String propName = propInfo.GetValue(?,?).ToString(); } The problem is, what do I mention in place of the two question marks, the second parameter is null as far as I know since it is not an indexed property. When I use propInfo/propInfo.GetType().GetProperty("PropName")/sender, in place of the first "?", I am getting an exception - TargetException was unhandled by user code. I was wondering if anyone could help me out with this along with an explanation if possible. I would like to understand where I am making the mistake. A: The first parameter must be the instance you want to get the value from. In your example, you should pass e as parameter, because you're getting a property of the e object. That being said, I suspect you want the property of the sender instance instead: PropertyInfo propInfo = sender.GetType().GetProperty("PropName"); String propName = propInfo.GetValue(sender, null).ToString();
doc_1222
The date is stored in a column named EstimatedTime (with a "text" type...) like this 201502181150 <?php $stmt = $db->query('SELECT * FROM data WHERE Status = "D" ORDER BY id DESC'); /* $Date = 201502181150; $time_ahead = date('M d', strtotime($Date. ' + 2 days')); // The above returns Feb 20, but how can I do this on MySQL? */ while($row = $stmt->fetch()) { echo "<tr>"; echo '<td>' . date('M d', strtotime($row['EstimatedTime'])) . '</td>'; echo "</tr>"; } ?> Any help would be appreciated! Thank you so much. A: If EstimatedTime column is declared as datatype DATETIME or DATE, then it's straightforward: WHERE t.EstimatedTime >= DATE(NOW()) AND t.EstimatedTime < DATE(NOW()) + INTERVAL 2 DAY NOW() returns the current date and time, the DATE() function trims off the time portion, making it equivalent to midnight. If the column is declared as character type, rather than DATETIME (but why in God's green earth would you do that?), convert the DATETIME expression to character in an appropriate canonical format, so that string comparisons will work appropriately: WHERE t.EstimatedTime >= DATE_FORMAT(DATE(NOW()) ,'%Y%m%d%H%i%s') AND t.EstimatedTime < DATE_FORMAT(DATE(NOW()) + INTERVAL 2 DAY,'%Y%m%d%H%i%s') If EstimatedTime is stored as a numeric (integer) datatype, then convert the string to numeric by adding a zero... WHERE t.EstimatedTime >= DATE_FORMAT(DATE(NOW()) ,'%Y%m%d%H%i%s')+0 AND t.EstimatedTime < DATE_FORMAT(DATE(NOW()) + INTERVAL 2 DAY,'%Y%m%d%H%i%s')+0 A: You can use str_to_date function and add interval: select str_to_date('201502181150', '%Y %m %d'); so your query will be look: WHERE str_to_date(t.EstimatedTime, '%Y %m %d') >= date(now()) AND str_to_date(t.EstimatedTime, '%Y %m %d') < date(now()) + interval 2 day A: The PHP soltion: Is the value 201502181150 the date in milliseconds? If so, just treat it like an integer and add 1000*60*60*24*2 which is 2 days. You can do it like this: $t = (int)$row['EstimatedTime']; $query = "SELECT * FROM data WHERE EstimatedTime > $t AND EstimatedTime < ". ($t+1000*60*60*24*2) ." ORDER BY id DESC" But it is kind of way around as You have to execute 2 MySQL queries. A: I would try something like <?php $stmt = $db->query('SELECT * FROM data WHERE Status = "D" ORDER BY id DESC'); $currDate = time(); //$time_ahead = Current Date + 2 Days (172800 seconds = 2 days) $time_ahead = $currDate + 172800; while($row = $stmt->fetch()) { if (!strtotime($row['EstimatedTime'] < $currDate || !strtotime($row['EstimatedDate'] > $time_ahead))) { echo "<tr>"; echo '<td>' . date('M d', strtotime($row['EstimatedTime'])) . '</td>'; echo "</tr>"; } } ?>
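Since the column stores a 12-digit text value like 201502181150 (year, month, day, hour, minute), the window boundaries can also be built in application code and bound as parameters; the original question is PHP, so treat this Python sketch only as an illustration of the boundary strings:
from datetime import datetime, timedelta

today = datetime.now().replace(hour=0, minute=0, second=0, microsecond=0)
start = today.strftime("%Y%m%d%H%M")                        # e.g. 201502180000
end = (today + timedelta(days=2)).strftime("%Y%m%d%H%M")    # e.g. 201502200000
sql = ("SELECT * FROM data WHERE Status = 'D' "
       "AND EstimatedTime >= %s AND EstimatedTime < %s ORDER BY id DESC")
print(start, end, sql)
The string comparison only works here because the stored format is fixed-width and ordered from the most significant field (year) downwards.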
doc_1223
SELECT count(1), interaction_type_id FROM tibrptsassure.d_interaction_sub_type GROUP BY interaction_type_id HAVING count(interaction_type_id) > 1 ORDER BY count(interaction_type_id) DESC LIMIT 5; Since my application does not support the use of the LIMIT keyword, I tried changing my query using the rank() function like so: SELECT interaction_type_id, rank() OVER (PARTITION BY interaction_type_id ORDER BY count(interaction_type_id) DESC) FROM tibrptsassure.d_interaction_sub_type; However, this way I ended up with the following error message: ERROR: column "d_interaction_sub_type.interaction_type_id" must appear in the GROUP BY clause or be used in an aggregate function LINE 1: SELECT interaction_type_id, rank() OVER (PARTITION BY inter... ^ ********** Error ********** ERROR: column "d_interaction_sub_type.interaction_type_id" must appear in the GROUP BY clause or be used in an aggregate function SQL state: 42803 Character: 9 Is there an equivalent of rownum() in PostgreSQL? (Apart from using the LIMIT keyword to achieve the same result, that is.) Does anybody have any suggestions for me? Thanks in advance. A: Test whether the following works (it is standard postgresql syntax and should work): with t as ( select 1 as id union all select 1 as id union all select 2 union all select 2 union all select 3) select id from t group by id having count(id) > 1 order by id desc limit 1 If this works then you have some syntax problem. If this does not work then you have some other issue - maybe the software you are using is constrained in some really strange way. You can also use row_number(), but it is not very efficient way: with t as ( select 1 as id union all select 1 as id union all select 2 union all select 2 union all select 3) , u as ( select id, count(*) from t group by id ) , v as ( select *, row_number() over(order by id) c from u ) select * from v where c < 2 A: The problem was in my query, i.e. there was a syntax error. What I needed was the top 5 category_id and top 5 instances of type_id in each category_id and top 5 instances of sub_type_id in each type_id. To achieve this, I changed the query in the following way and finally got the expected output: SELECT * FROM ( SELECT t1.int_subtype_key, t2.interaction_sub_type_desc, interaction_category_id, interaction_type_id, interaction_sub_type_id, count(interaction_sub_type_id) AS subtype_cnt, rank() over (PARTITION BY interaction_category_id, interaction_type_id ORDER BY count(interaction_sub_type_id) DESC) AS rank FROM tibrptsassure.f_cc_call_analysis t1 INNER JOIN tibrptsassure.d_interaction_sub_type t2 ON t1.int_cat_key = t2.intr_catg_ref_nbr AND t1.int_subtype_key = t2.intr_sub_type_ref_nbr INNER JOIN tibrptsassure.d_calendar t3 ON t1.interaction_date = t3.calendar_date GROUP BY t2.interaction_sub_type_desc, t1.int_subtype_key, interaction_category_id, interaction_type_id, interaction_sub_type_id) AS sub_type WHERE rank <= 5; Thanks to everyone for paying attention and helping me with this.
doc_1224
class gcb_ip:
    ip = None
    country_code = None
    score = None
    asn = None
    records = list()
1) I fill up the records list with a specific method.
2) I can see the records and the rest of my attributes inside the object if I check it in my main code.
3) I CAN'T see the records, but I can see the rest of my attributes inside the object, if I check it inside another class method where it is passed as a parameter.
I come from C++ and I guess that this is a copy/reference parameter-passing issue. What's going on?
A: Do you have your __init__ function properly defined, with those attributes set? For example:
class gcb_ip(object):
    def __init__(self, IP, country_code, score, asn):
        self.IP = IP
        self.country_code = country_code
        self.score = score
        self.asn = asn
        self.records = list()
This is the equivalent of setting up your explicit constructor in C++ and initializing your member variables. Then you can reference the attributes of that object as you would expect, for example:
myIP = someGcbObject.IP
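To see why the class-level records = list() behaves differently from the scalar attributes, a small sketch (the class names here are made up for illustration):
class ClassLevelRecords:
    records = list()              # ONE list object, shared by every instance of the class

class InstanceLevelRecords:
    def __init__(self):
        self.records = list()     # a fresh list per instance, created in __init__

a, b = ClassLevelRecords(), ClassLevelRecords()
a.records.append("1.2.3.4")
print(b.records)                  # ['1.2.3.4'] -- b sees a's data because the list is shared

c, d = InstanceLevelRecords(), InstanceLevelRecords()
c.records.append("1.2.3.4")
print(d.records)                  # [] -- each instance owns its own list
Assigning a value (for example self.ip = ...) creates an instance attribute that shadows the class attribute, which is why the scalar fields appear to behave while the mutated list does not.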
doc_1225
(I also tried using the options -o or /out to specify an output filename, but they do not seem to exist.)
A: Doesn't the shell's normal output redirection work for you? Example:
objdump -d file.o > file.txt
doc_1226
[1] merge "as of" in the style of pandas: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge_asof.html
EDIT: Here's one example:
left = DataFrame(Dict(:timestamp=>[100,200,300,400]))
right = DataFrame(Dict(:timestamp=>[94, 150, 200, 201, 299, 300, 301, 401], :v=>1:8))
The output should be a DataFrame like
DataFrame(Dict(:timestamp=>[100,200,300,400], :v=>[1,3,6,7]))
I ended up with the following implementation, which works for my case but which I'm sure is not as robust or general as it could be (let's assume the timestamp columns are sorted):
function mergeasof(left::DataFrame, right::DataFrame, right_col::String)
    @assert "timestamp" in names(left)
    @assert "timestamp" in names(right)
    @assert right_col in names(right)
    i = 1
    j = 1
    left = copy(left)
    left[!, right_col] = Array{eltype(right[right_col])}(undef, size(left,1));
    for j in 1:size(right, 1)
        if i < size(left, 1)
            while right.timestamp[j] > left.timestamp[i]
                i += 1
                if i == size(left, 1)
                    break
                end
            end
            left[!, right_col][i] = right[!, right_col][j]
        end
    end
    left
end
For example, straight away you can see an edge case which isn't covered: if left contains timestamps earlier than the earliest timestamps in right, then the merged table should have some missing values. But that means that setting the type with left[!, right_col] = Array{eltype(right[right_col])}(undef, size(left,1)); is wrong. Also, this isn't as efficient as it could be if we had an index on the timestamps.
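For reference, the target behaviour is what pandas.merge_asof produces on the same data (its default direction='backward' takes the most recent right row at or before each left timestamp):
import pandas as pd

left = pd.DataFrame({"timestamp": [100, 200, 300, 400]})
right = pd.DataFrame({"timestamp": [94, 150, 200, 201, 299, 300, 301, 401], "v": range(1, 9)})
print(pd.merge_asof(left, right, on="timestamp"))
#    timestamp  v
# 0        100  1
# 1        200  3
# 2        300  6
# 3        400  7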
doc_1227
Now, with port 27017 open everything works fine, but if I block port 27017, allow 443, and create an iptables rule to redirect requests from 443 to 27017, none of the MongoDB machines talk to each other. However, I am able to connect through 443 from one machine to another manually, by specifying the host name and port number. I appreciate any kind of help.
doc_1228
where i run it in release mode with debug info enabled , with all exceptions Debug->Exceptions menu and SEH Exceptions (/EHa) turned on in the IDE i have : QTimer that calling method : m_NotificationTimer = new QTimer(); connect(m_NotificationTimer, SIGNAL(timeout()), this,SLOT(CheckOuterLinks()/*,Qt::DirectConnection*/)); this function that bean called every 3 seconds: void CollectorWorker2::CheckOuterLinks() { while(!rpm_urlStack->isEmptyO()) { QMutexLocker locker(m_pMutex1); std::map<std::string,std::string > m = rpm_urlStack->topO(); locker.unlock(); } } the top0() method is function that get the top element from QStack , member function and it looks like this it removed the element with pop() somewhere else: std::map<std::string,std::string > UrlStack::topO() { std::map<std::string,std::string > m; try { static QMutex mutex; QMutexLocker locker(&mutex); m = m_OuterLinksToProcessOutStack.top(); return m; } catch (std::exception const & e) { std::cout << "Standard exception: " << e.what() << std::endl; } catch (...) { std::cout << "Unknown exception." << std::endl; } } the m_OuterLinksToProcessOutStack is static member and gets filled with elements from somewhere else in the code protected with mutex also . the m_OuterLinksToProcessOutStack filled with members it left something like 2000 elements i do check it somewhere else , so its not problem of empty stack . the problem i have that after minute of running the application im getting this error in runtime: First-chance exception at 0x00403fc2 in app.exe: 0xC0000005: Access violation reading location 0x00000004. on the top() method of the QStack
doc_1229
Talking to a webservice works fine when the code is running in a simple, standalone Java class. ((WSBindingProvider) docManClient).setOutboundHeaders(Headers.create(otAuthElement)); In the debugger, the docManClient object has this toString(): JAX-WS RI 2.1.4-b01-: Stub for http://innov15.ncr.pwgsc.gc.ca/innov15_cws/DocumentManagement.svc The class path includes jaxws-rt-2.1.4.jar. When the code is running inside Websphere, the cast fails. ((WSBindingProvider) docManClient).setOutboundHeaders(Headers.create(otAuthElement)); java.lang.ClassCastException: com.sun.proxy.$Proxy484 incompatible with com.sun.xml.ws.developer.WSBindingProvider In the debugger, the docManClient object has this toString(): org.apache.axis2.jaxws.client.proxy.JAXWSProxyHandler@a460a46 This jar is in Websphere-land only, and is not part of my project's .ear file: C:\dev_tools\server\IBM\WebSphere\AppServer\plugins\org.apache.axis2.jar I attempted to change the app config, to prefer its own classes above those of Websphere. I used the wsadim tool to change to PARENT_LAST, but this had no affect on the observed behaviour: set dep [$AdminConfig getid /Deployment:my-ear/] set depObject [$AdminConfig showAttribute $dep deployedObject] set classldr [$AdminConfig showAttribute $depObject classloader] $AdminConfig showall $classldr $AdminConfig modify $classldr {{mode PARENT_LAST}} $AdminConfig save $AdminConfig showall $classldr The app is an .ear which contains a single .war. EDIT added later: I'm not clear on the relation between jax-ws-rt.jar and axis2.jar. The axis2.jar is not a drop-in replacement for jax-ws-rt.jar: when I switch to axis2.jar, the code no longer compiles.
doc_1230
P_ID Item Rank 1 ItemName1 ValueTBD I need to be able to write an update statement to populate the value of the column "rank" as follows: * *The top 10000 records need a value of "10" *For Each subsequent 10000 records the value of "rank" will need to be decremented by 1 Therefore records 20000 - 30000 : would have "rank" values of "9" A: Assuming that when you say 'top' and 'subsequent' you actually imply 'order by P_ID': with cte as ( select Rank, row_number() over (order by P_ID) as row_rank from Items) update cte set Rank = 10 - (row_rank-1)/10000; This will update with right range boundary (1-10000 -> rank 10, 10001-20000 - >rank 9, 20001-30000 -> rank 8 etc) and will assign negative ranks for ranges above 100001. Your requirements are inconsistent: you say 'records 20000-30000 would have rank 9'. You probably mean 'records 20001-30000 would have rank 8'. A: Try this: SELECT P_ID,Rank, 10 - ((ROW_NUMBER() OVER(ORDER BY P_ID DESC))/10000) AS new_RANK
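The bucketing formula in the first answer is easy to verify with plain integer arithmetic; a quick sketch of 10 - (row_number - 1) / 10000 under integer division:
def bucket_rank(row_number):
    return 10 - (row_number - 1) // 10000    # mirrors the SQL expression with integer division

for n in (1, 10000, 10001, 20000, 20001, 100000, 100001):
    print(n, bucket_rank(n))
# 1 -> 10, 10000 -> 10, 10001 -> 9, 20000 -> 9, 20001 -> 8, 100000 -> 1, 100001 -> 0
# (rows beyond the tenth block fall to 0 and then negative, as the first answer warns)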
doc_1231
For example, with the overlapping intervals * *[10 , 15] *[9 , 21] *[11 , 19] *[100 , 110] *[9 , 10] *[5 , 11] *[39 , 45] If we have [A,B] = [10 , 100], then the result should be [10,15] If we have [A,B] = [14 , 50], then the result should be [39,45] If we have [A,B] = [15 , 25], then the result should be NULL If we have [A,B] = [8 , 31], then the result should be [9,21] I think the segment tree could be used for this, but did not find too many references to study this. Also, the interval tree could be another option. Ideally, this problem could be done in O(log(N)). I'm interested in the worst case - average cases cannot solve this issue for me. What is the best data structure/algorithm to solve this is, with the lowest complexity? Thanks
doc_1232
I'm using Unity 3D Pro 4 on an Intel iMac. Unity is showing a rippling effect on the Global Fog, as well as on Water tiles, depending on the zoom level, and I cannot see any reason for it. I searched all the properties in Unity, but found nothing. It's also not project-related, because Windows computers don't have this issue. https://dl.dropboxusercontent.com/u/15371035/Unity%20Rippling%20Effect.png
A: This is a z-fighting issue. Graphics drivers on Mac are different from drivers on Windows and may even have different depth sizes in the z-buffer. Try distancing the water further from the ground polygons, or setting it in a different render queue so that it always renders after the ground polygons.
doc_1233
* *WooCommerce *WooCommerce Subscriptions *Pakkelabels.dk for WooCommerce "Pakkelabels.dk" is a packaging label plugin for carriers in Denmark. This plugin is using the standard WooCommerce filters and hooks to add additional shipping methods. I am using a mixed checkout. The cart totals currently looks like this: This is what I wan't to do For recurring orders I wan't to limit the shipping methods to just "DAO Pakkeshop" and "Local pick up" (sorry for the Danish language in the image). I have added this to functions.php, which unsets the shipping methods I don't wan't to have, when a specific product ID (the subscription product) is in the cart: add_filter( 'woocommerce_package_rates', 'hide_shipping_methods_woo_sg', 10, 2 ); function hide_shipping_methods_woo_sg( $rates, $package ) { $product_id = get_field('product_auto_cart', 'option'); if($product_id){ $product_cart_id = WC()->cart->generate_cart_id( $product_id ); $in_cart = WC()->cart->find_product_in_cart( $product_cart_id ); if($in_cart) { unset( $rates['pakkelabels_shipping_dao_direct'] ); unset( $rates['pakkelabels_shipping_gls_private'] ); unset( $rates['pakkelabels_shipping_gls_business'] ); unset( $rates['pakkelabels_shipping_gls'] ); unset( $rates['pakkelabels_shipping_pdk'] ); unset( $rates['pakkelabels_shipping_postnord_private'] ); unset( $rates['pakkelabels_shipping_postnord_business'] ); // unset( $rates['local_pickup:19'] ); } return $rates; } } My problem is, that this removes the shipping methods for both the order and recurring order, as you can see on the image. I need some sort of conditional, so that I can target only the recurring order shipping methods and unset those. How can I achieve this? A: Okay - that was a simple fix. WC()->cart->recurring_carts was the conditional I needed. My code now looks like this: add_filter( 'woocommerce_package_rates', 'hide_shipping_methods_woo_sg', 10, 2 ); function hide_shipping_methods_woo_sg( $rates, $package ) { $product_id = get_field('product_auto_cart', 'option'); if($product_id){ $product_cart_id = WC()->cart->generate_cart_id( $product_id ); $in_cart = WC()->cart->find_product_in_cart( $product_cart_id ); if($in_cart && WC()->cart->recurring_carts) { unset( $rates['pakkelabels_shipping_dao_direct'] ); unset( $rates['pakkelabels_shipping_gls_private'] ); unset( $rates['pakkelabels_shipping_gls_business'] ); unset( $rates['pakkelabels_shipping_gls'] ); unset( $rates['pakkelabels_shipping_pdk'] ); unset( $rates['pakkelabels_shipping_postnord_private'] ); unset( $rates['pakkelabels_shipping_postnord_business'] ); // unset( $rates['local_pickup:19'] ); } return $rates; } } The above shipping methods are now removed for recurring carts. My cart totals now looks like this:
doc_1234
namespace Simple_TCP_Client { public partial class Form1 : Form { public Socket _client; public class StateObject { public Socket workSocket = null; public const int BufferSize = 256; public byte[] buffer = new byte[BufferSize]; public StringBuilder sb = new StringBuilder(); } public String response = String.Empty; public void StartClient(IPAddress ipAddress, int portnum) { try { IPEndPoint remoteEP = new IPEndPoint(ipAddress, portnum); Socket client = new Socket(ipAddress.AddressFamily, SocketType.Stream, ProtocolType.Tcp); client.BeginConnect(remoteEP, new AsyncCallback(ConnectCallback), client); _client = client; btn_send.Enabled = true; Send(client, "Hello Server!"); Receive(client); WriteOnLog("Response received : " + response.ToString()); } catch (Exception e) { WriteOnLog(e.ToString()); } } void ConnectCallback(IAsyncResult ar) { try { Socket client = (Socket)ar.AsyncState; client.EndConnect(ar); WriteOnLog("Socket connected to " + client.RemoteEndPoint.ToString()); } catch (Exception e) { WriteOnLog(e.ToString()); } } void Receive(Socket client) { try { StateObject state = new StateObject(); state.workSocket = client; client.BeginReceive(state.buffer, 0, StateObject.BufferSize, 0, new AsyncCallback(ReceiveCallback), state); } catch (Exception e) { WriteOnLog(e.ToString()); } } void ReceiveCallback(IAsyncResult ar) { try { StateObject state = (StateObject)ar.AsyncState; Socket client = state.workSocket; int bytesRead = client.EndReceive(ar); if (bytesRead > 0) { state.sb.Append(Encoding.ASCII.GetString(state.buffer, 0, bytesRead)); client.BeginReceive(state.buffer, 0, StateObject.BufferSize, 0, new AsyncCallback(ReceiveCallback), state); } else { if (state.sb.Length > 1) { string Response = state.sb.ToString(); WriteOnLog(Response); } } } catch (Exception e) { WriteOnLog(e.ToString()); } } void Send(Socket client, String data) { byte[] byteData = Encoding.ASCII.GetBytes(data); client.BeginSend(byteData, 0, byteData.Length, 0, new AsyncCallback(SendCallback), client); } void SendCallback(IAsyncResult ar) { try { Socket client = (Socket)ar.AsyncState; int bytesSent = client.EndSend(ar); WriteOnLog("Sent " + bytesSent + " bytes to server."); } catch (Exception e) { WriteOnLog(e.ToString()); } } public Form1() { InitializeComponent(); btn_send.Enabled = false; } delegate void WriteOnLogCallback(string text); private void WriteOnLog(string text) { if (this.listBox1.InvokeRequired) { WriteOnLogCallback d = new WriteOnLogCallback(WriteOnLog); this.Invoke(d, new object[] { text }); } else { this.listBox1.Items.Add(text); } } private void btn_connect_Click(object sender, EventArgs e) { IPAddress ipAddress; int portNum; if (IPAddress.TryParse(txt_ip.Text, out ipAddress) && int.TryParse(txt_port.Text, out portNum)) { StartClient(ipAddress, portNum); if (_client.Connected) { btn_connect.Enabled = false; } } else { MessageBox.Show("Please Enter a Valid Server's IP Address and Port", "Invalid Address or Port", MessageBoxButtons.OK, MessageBoxIcon.Hand); } } private void btn_disconnect_Click(object sender, EventArgs e) { _client.Close(); btn_connect.Enabled = true; } private void btn_send_Click(object sender, EventArgs e) { if(_client.Connected)//(networkStream.CanWrite) { Send(_client, txt_msg.Text); WriteOnLog(txt_msg.Text); txt_msg.Clear(); } } } }
doc_1235
A: I would try to use the new design patterns wherever possible. That would mean the Contextual Action Bar: http://developer.android.com/design/patterns/selection.html It looks like Android changes the design patterns every year, because the "Quick Actions" pattern was recommended in July 2010 according to this presentation: http://www.slideshare.net/AndroidDev/android-ui-design-tips But as I said, I would go with the new ICS design pattern. You can use ActionBarSherlock for backward compatibility: http://actionbarsherlock.com/
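For readers who want to see the contextual action bar pattern in code, here is a minimal sketch (my own illustration, not part of the original answer); the menu resource and action id are placeholders, and the same Callback shape works with ActionBarSherlock's com.actionbarsherlock.view classes on older platforms:
// Inside an Activity (API 11+), or a SherlockActivity when using ActionBarSherlock.
// Imports assumed: android.view.ActionMode, android.view.Menu, android.view.MenuItem.
// R.menu.context_menu and R.id.action_delete are placeholder resources.
private void showContextualActionBar() {
    startActionMode(new ActionMode.Callback() {
        @Override
        public boolean onCreateActionMode(ActionMode mode, Menu menu) {
            mode.getMenuInflater().inflate(R.menu.context_menu, menu);
            mode.setTitle("1 selected");
            return true;
        }

        @Override
        public boolean onPrepareActionMode(ActionMode mode, Menu menu) {
            return false; // nothing to refresh after creation
        }

        @Override
        public boolean onActionItemClicked(ActionMode mode, MenuItem item) {
            if (item.getItemId() == R.id.action_delete) {
                // act on the current selection here
                mode.finish(); // dismiss the contextual action bar
                return true;
            }
            return false;
        }

        @Override
        public void onDestroyActionMode(ActionMode mode) {
            // clear any selection highlighting here
        }
    });
}
You would typically call this from a long-press listener on the selectable item.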
doc_1236
If the user is logged in, the StreamBuilder returns profile(), and if not, signUp() is returned. So far, so good. But what I need is to navigate to another page using Navigator instead of returning widgets. I need to do this:
Navigator.push(
  context,
  MaterialPageRoute(builder: (context) => profile()));
Instead of:
return profile();
The code I'm working on is:
body: StreamBuilder(
  stream: FirebaseAuth.instance.authStateChanges(),
  builder: (context, snapshot) {
    if (snapshot.connectionState == ConnectionState.waiting) {
      return Center(child: CircularProgressIndicator());
    } else if (snapshot.hasData) {
      return profile();
    } else if (snapshot.hasError) {
      return Center(child: Text("Something went wrong"));
    } else {
      return signUp();
    }
  },
),
Any idea on how to do this? Should I use another approach instead of a StreamBuilder? Thanks in advance!
A: FirebaseAuth.instance.authStateChanges() returns a Stream, so you can listen to it inside your app and act on it instead of using it in a StreamBuilder. You can listen to it like this:
FirebaseAuth.instance.authStateChanges().listen((user) {
  if (user != null) {
    Navigator.push(
      context,
      MaterialPageRoute(builder: (context) => profile()));
  } else {
    Navigator.push(
      context,
      MaterialPageRoute(builder: (context) => Login()));
  }
});
You just need to find a place where you're going to call it. Once it is listening to authStateChanges(), authenticating a new user or logging them out will trigger the stream and execute the code inside it.
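As a side note on where to attach that listener (my own suggestion, not part of the original answer): one common place is initState() of a widget that sits inside the MaterialApp, so a valid Navigator context is available and the subscription can be cancelled again. A minimal sketch, assuming a null-safe firebase_auth version (where the stream emits User? values) and that profile() and signUp() from the question are in scope:
import 'dart:async';
import 'package:firebase_auth/firebase_auth.dart';
import 'package:flutter/material.dart';

class AuthGate extends StatefulWidget {
  const AuthGate({Key? key}) : super(key: key);

  @override
  State<AuthGate> createState() => _AuthGateState();
}

class _AuthGateState extends State<AuthGate> {
  StreamSubscription<User?>? _sub;

  @override
  void initState() {
    super.initState();
    // Subscribe once; cancel in dispose so navigation is not triggered twice.
    _sub = FirebaseAuth.instance.authStateChanges().listen((user) {
      if (!mounted) return;
      if (user != null) {
        Navigator.push(context, MaterialPageRoute(builder: (_) => profile()));
      } else {
        Navigator.push(context, MaterialPageRoute(builder: (_) => signUp()));
      }
    });
  }

  @override
  void dispose() {
    _sub?.cancel();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    // Placeholder shown while waiting for the first auth event.
    return Scaffold(body: Center(child: CircularProgressIndicator()));
  }
}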
doc_1237
Item2=Item("Charlettes web","E.B.White",2013) Item3=Item("The prince of tides","PatConroy",2004) Item4=Item("Arise! Awake!","Josephine",1992) Item5=Item("Wonder","R. J.Palacio",2008) item_list=[Item1,Item2,Item3,Item4,Item5] I want to sort the list "item_list" based on the author names. but while sorting i should ignore the special characters. Then the final output should be a list contaning Item2,Item4,Item1,Item3,Item5 A: You could use a regular expression to create a list containing only the letters (so no special characters are considered) and sort based on that as follows: import re class Item: def __init__(self,item_name,author_name,published_year): self.__item_name=item_name self.__author_name=author_name self.__published_year=published_year def get_item_name(self): return self.__item_name def get_author_name(self): return self.__author_name def get_published_year(self): return self.__published_year Item1 = Item("The Book of Mormon", "Joseph smith Jr.", 1992) Ietm2 = Item("Charlettes web", "E.B.White", 2013) Item3 = Item("The prince of tides", "PatConroy", 2004) Item4 = Item("Arise! Awake!", "Josephine", 1992) Item5 = Item("Wonder", "R. J.Palacio", 2008) item_list = [Item1, Ietm2, Item3, Item4, Item5] new_item_list = sorted(item_list, key=lambda x: re.findall('\w', x.get_author_name())) # For each class item in the new list, display its values for item in new_item_list: print "{}, {}, {}".format(item.get_item_name(), item.get_author_name(), item.get_published_year()) This would give you: Charlettes web, E.B.White, 2013 Arise! Awake!, Josephine, 1992 The Book of Mormon, Joseph smith Jr., 1992 The prince of tides, PatConroy, 2004 Wonder, R. J.Palacio, 2008 The re.findall('\w'), x) regular expression returns a list of just the characters contained in the author's name, thus removing all of the special characters. So for example the first item would be sorted using a sort key of: ['A', 'n', 'd', 'r', 'e', 'w', 'W', 'h', 'i', 't', 'e', 'h', 'e', 'a', 'd']
doc_1238
I have a dataframe with two columns. The first column is called dates and the second column is filled with numbers. The dataframe has 351 row. dates numbers 01.03.2019 5 02.03.2019 8 ... 20.02.2020 3 21.02.2020 2 I want the whole first column to be on the x axis from. I tried to plot it like this: graph = FinalDataframe.plot(figsize=(12, 8)) graph.legend(loc='upper center', bbox_to_anchor=(0.5, -0.075), ncol=4) graph.set_xticklabels(FinalDataframe['dates']) plt.show() But on the x axis are only the first few values from the column instead of the whole column. Furthermore, they are not correlated to the data from the second column. Any suggestions? Thank you in advance! A: Your issue is that x ticks are generated automatically, and spaced out to be readable. However you the tell matplotlib to use all the labels. The simple fix is to tell him to use one tick label per entry, but that’s going to make your x-axis unreadable: graph.set_xticks(range(len(FinalDataframe['dates']))) Now you could space them out manually: graph.set_xticks(range(0, len(FinalDataframe['dates']), 61)) graph.set_xticklabels(FinalDataframe['dates'][::61]) However the best result to plot dates on the x-axis is still to use pandas’ built-in date objects. We can do this with pd.to_datetime This will also allow pandas to know where to place points on the x-axis, by specifying that you want the x-axis to be the dates. In that way, if dates are not sorted or missing, the gaps will be skipped properly, and points will be above the ordinate of the right date. I’m first recreating a dataframe that looks like what you posted: >>> df = pd.DataFrame({'dates': pd.date_range('20190301', '20200221', freq='D').strftime('%d.%m.%Y'), 'numbers': np.random.randint(0, 10, 358)}) >>> df dates numbers 0 01.03.2019 2 1 02.03.2019 2 2 03.03.2019 5 3 04.03.2019 4 4 05.03.2019 3 .. ... ... 353 17.02.2020 2 354 18.02.2020 1 355 19.02.2020 2 356 20.02.2020 3 357 21.02.2020 1 (This should be the same as FinalDataFrame, or if your dates are the index, then it’s the same as FinalDataFrame.reset_index()) Now I’m converting the dates: >>> df['dates'] = pd.to_datetime(df['dates'], format='%d.%m.%Y') >>> df dates numbers 0 2019-03-01 2 1 2019-03-02 2 2 2019-03-03 5 3 2019-03-04 4 4 2019-03-05 3 .. ... ... 353 2020-02-17 2 354 2020-02-18 1 355 2020-02-19 2 356 2020-02-20 3 357 2020-02-21 1 You can check your columns contain dates and not string representations of dates by checking their dtypes: >>> df.dtypes dates datetime64[ns] numbers int64 Finally plotting: >>> ax = df.plot(x='dates', y='numbers', figsize=(12, 8)) >>> ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.075), ncol=4) <matplotlib.legend.Legend object at 0x7fc8c24fd4f0> >>> plt.show() Legends are taken care of automatically. This is what you get:
doc_1239
Given a string s, find the length of the longest substring without repeating characters.
class Solution(object):
    def lengthOfLongestSubstring(self, s):
        def select(s):
            list1 = []
            for i in s:
                if i not in list1:
                    list1.append(i)
                else:
                    return list1

        def select_list(st):
            list2 = []
            list2.append(select(st))
            if len(st) > 1:
                st = (st[len(select(st))::])
                list2.append(select(st))
            return list2

        result = len(max(select_list(s)))
        return result
Runtime Error
TypeError: 'NoneType' object is not iterable
    result = len(max(select_list(s)))
Line 20 in lengthOfLongestSubstring (Solution.py)
    ret = Solution().lengthOfLongestSubstring(param_1)
Line 41 in _driver (Solution.py)
    _driver()
Line 51 in <module> (Solution.py)
I'm a beginner, so please don't be too hard on me for such a simple question. The code executes, but it does not pass 'Submit'.
A: Welcome! It's likely that the error is happening in your select function. For example, if you pass None as an argument for s, then the error will be raised.
class Solution(object):
    def lengthOfLongestSubstring(self, s):
        def select(s):
            list1 = []
            for i in s:
                if i not in list1:
                    list1.append(i)
                else:
                    return list1

        def select_list(st):
            list2 = []
            list2.append(select(st))
            if len(st) > 1:
                st = (st[len(select(st))::])
                list2.append(select(st))
            return list2

        result = len(max(select_list(s)))
        return result

Solution().lengthOfLongestSubstring(None)  # will raise TypeError: 'NoneType' object is not iterable
+++++++++++++++++ EDIT +++++++++++++++++
Here is a solution using sliding windows
class Solution:
    """Given a string s, find the length of the longest substring without repeating characters."""

    def lengthOfLongestSubstring(self, s: str) -> int:
        left, right = 0, 0
        seen: set[str] = set()
        longest_substr = 0
        while right < len(s):
            char = s[right]
            if char in seen:
                longest_substr = max(longest_substr, len(seen))
                while char in seen:
                    discard = s[left]
                    seen.remove(discard)
                    left += 1
            else:
                seen.add(char)
                right += 1
        return max(longest_substr, len(seen))
doc_1240
<div a_example = "x" b_example = "y" class = "z"></div> What is the proper way to get the corresponding properties of a_example and b_example in Javascript? Can xpath do the job? A: Use getAttribute: var elem = document.getElementsByClassName("z")[0], a = elem.getAttribute("a_example"); Here's a working example. But, as has already been mentioned, you should really be using HTML5 data-* attributes, otherwise your markup is invalid. A: Some browsers will add all attributes as named properties of the DOM element, others will only add standard attributes. In both cases you can get non–standard attributes using getAttribute, however such a scheme is not recommended. It is common to use standard attributes and DOM properties and only use getAttribute where necessary as it is inconsistently implemented in different browsers. A: You should take a look at HTML5 data attributes, here is a useful article: http://html5doctor.com/html5-custom-data-attributes/ Reading data attributes from a tag is really easy, and a fallback is available for older browsers. An example from the article: <div id="sunflower" data-leaves="47" data-plant-height="2.4m"></div> <script> // 'Getting' data-attributes using dataset var plant = document.getElementById('sunflower'); var leaves = plant.dataset.leaves; // leaves = 47; </script> A: If you are using jQuery, it is as simple as saying: HTML <div id="testDiv" a_example = "x" b_example = "y" class = "z"></div> Javascript: var attr1 = $('#testDiv').attr('a_example'); A: element.getAttribute(attributename) This should work for you. A: I agree you should look at data attributes and better ways to do add non-standard attributes, but here's a 'raw' answer to your question, but I wouldn't treat this as universally supported (or advisable): alert(document.getElementsByTagName('div')[0].getAttribute('b_example'));
doc_1241
animal_id service_date 610710 2005-10-22 610710 2006-12-03 610710 2006-12-27 610710 2007-12-02 610710 2008-01-17 610710 2008-03-04 The other table is the same but with a different date (event_date) and the diagnosis, animal_id event_date event_description 610710 2006-06-16 PP 610710 2007-02-15 PP 610710 2008-01-09 PN 610710 2008-04-09 PP 610710 2009-06-16 PP So what I would like to do is merge both tables in a way the dates complement each other, meaning if a service was performed on 2005-10-12, when I join both tables this row will link to the closest date in the Events table, and by closest I also mean later - since insemination happens before diagnosis. So the desired output would be something like this, animal_id service_date event_date event_description 1 610710 2005-10-22 NA NA 2 610710 NA 2006-06-16 PP 3 610710 2006-12-03 2007-02-15 PP 4 610710 2006-12-27 2007-02-15 PP 5 610710 2007-12-02 2008-01-09 PN 6 610710 2008-01-17 2008-04-09 PP 7 610710 2008-03-04 NA NA 8 610710 NA 2009-06-16 PP In the final output, I would expect a large number of records not to merge against anything, like row 1 in the example output. There was a service performed in October 2005, but the first Diagnosis I have for that cow is in June 2006 - there are probably a number of service records missing. That is unfortunately to be expected. For this example, only rows 5 and 6 make sense. For rows 3 and 4, I would consider only row 4, since that is probably the insemination that resulted into pregnancy. Is that even possible in R? Thank you! A: What you're asking for is a "non-equi" or "range" join. This isn't supported by base R (or dplyr, lacking dbplyr), but can be done with some other packages. For all, I create event_date_lag so that we limit the amount of returns for each row. (Without it, we'd get multiple matches.) fuzzyjoin out <- fuzzyjoin::fuzzy_full_join( services, events, by = c("animal_id" = "animal_id", "service_date" = "event_date_lag", "service_date" = "event_date"), match_fun = list(`==`, `>=`, `<=`)) # not sure why fuzzyjoin is splitting animal_id out <- transform(out, animal_id = ifelse(is.na(animal_id.x), animal_id.y, animal_id.x)) out$animal_id.x <- out$animal_id.y <- out$event_date_lag <- NULL # ordering here primarily to compare with your desired output out[with(out, order(ifelse(is.na(service_date), event_date, service_date))),] # service_date event_date event_description animal_id # 6 2005-10-22 <NA> <NA> 610710 # 7 <NA> 2006-06-16 PP 610710 # 1 2006-12-03 2007-02-15 PP 610710 # 2 2006-12-27 2007-02-15 PP 610710 # 3 2007-12-02 2008-01-09 PN 610710 # 4 2008-01-17 2008-04-09 PP 610710 # 5 2008-03-04 2008-04-09 PP 610710 # 8 <NA> 2009-06-16 PP 610710 sqldf SQL in general supports the concept of non-equi or range joins. There's nothing special about the sqldf package, just that it provides a native SQL experience (via RSQLite) without the overhead or hassle of uploading your data to a SQL DBMS and pulling it back down in this query. While that is in fact what is happening with sqldf, it automates much of it, allowing one to work directly on R objects using SQL. If by chance you are already getting your data from a DBMS, then a SQL join is by far the most efficient: get it joined at the source. 
sqldf::sqldf( "select svc.animal_id, svc.service_date, ev.event_date, ev.event_description from services svc left join events ev on svc.animal_id=ev.animal_id and svc.service_date between ev.event_date_lag and ev.event_date order by svc.service_date, ev.event_date") # animal_id service_date event_date event_description # 1 610710 2005-10-22 <NA> <NA> # 2 610710 2006-12-03 2007-02-15 PP # 3 610710 2006-12-27 2007-02-15 PP # 4 610710 2007-12-02 2008-01-09 PN # 5 610710 2008-01-17 2008-04-09 PP # 6 610710 2008-03-04 2008-04-09 PP data.table While I use this often, if you aren't already using it, then it might be a little more than you need (its learning curve, though worth it, can be steep). Notes: * *the data.table-semantics (Y[X], which is effectively "X left join Y") supports inner, left, and right, but not full, semi, or anti-joins. While it might be possible using a cross-join (cartesian product), that explodes memory use and is (imo) not the best way to go. *the join tends to rename the left side (the X in Y[X]) variables to that on the right. This can be confusing, and it can in fact mask the actual pre-merge values, so I'll duplicate service_date to keep it separate. *I'm using as.data.table here just for the SO answer, not because it's required to distinguish between data.frame and data.table variables. If you're switching to data.table, then setDT is the canonical way to go. *If you choose this but do not continue with other data.table operations, then make sure you convert back to normal data.frame using setDF or as.data.frame; there are enough subtle differences that not doing this will be a problem. library(data.table) svcDT <- as.data.table(services) evDT <- as.data.table(events) evDT[svcDT[,sdate:=service_date], on = .(animal_id == animal_id, event_date_lag <= sdate, event_date >= sdate) ][, event_date_lag := NULL ] # animal_id event_date event_description service_date # 1: 610710 2005-10-22 <NA> 2005-10-22 # 2: 610710 2006-12-03 PP 2006-12-03 # 3: 610710 2006-12-27 PP 2006-12-27 # 4: 610710 2007-12-02 PN 2007-12-02 # 5: 610710 2008-01-17 PP 2008-01-17 # 6: 610710 2008-03-04 PP 2008-03-04 Data services <- read.table(header = TRUE, text = " animal_id service_date 610710 2005-10-22 610710 2006-12-03 610710 2006-12-27 610710 2007-12-02 610710 2008-01-17 610710 2008-03-04") services$service_date <- as.Date(services$service_date) events <- read.table(header = TRUE, text = " animal_id event_date event_description 610710 2006-06-16 PP 610710 2007-02-15 PP 610710 2008-01-09 PN 610710 2008-04-09 PP 610710 2009-06-16 PP") events$event_date <- as.Date(events$event_date) events$event_date_lag <- ave(events$event_date, events$animal_id, FUN=function(a) c(a[1][NA], a[-length(a)])) events # animal_id event_date event_description event_date_lag # 1 610710 2006-06-16 PP <NA> # 2 610710 2007-02-15 PP 2006-06-16 # 3 610710 2008-01-09 PN 2007-02-15 # 4 610710 2008-04-09 PP 2008-01-09 # 5 610710 2009-06-16 PP 2008-04-09 A: Using the input shown reproducibly in the Note at the end bind them together using rbind_rows and then sort them by date using arrange. Then define the logical column collapse which is TRUE if the current row has a service_date and the next row has an event_date and they are less than or equal to 90 days apart -- change 90 to whatever you want. 
Then group by animal_id and a group number which increases by 1 each time a service_date is encountered and further group by rows except if the current row has collapse equal to TRUE then place it in the same group as the next row in order that it be matched to that next row's event_date. Finally summarize the groups and remove the temporary columns. Note that this approach maintains the event rows that do not have corresponding service dates and also ensures that each event date is not matched to more than one service date. library(dplyr) bind_rows(DF1, DF2) %>% arrange(coalesce(service_date, event_date)) %>% group_by(animal_id, group = cumsum(!is.na(service_date))) %>% mutate(collapse = !is.na(service_date) & !is.na(lead(event_date)) & lead(event_date) - service_date <= 90) %>% group_by(n = 1:n() + collapse, .add = TRUE) %>% summarize(animal_id = first(animal_id), service_date = first(service_date), event_date = last(event_date), event_description = last(event_description), .groups = "drop") %>% select(-group, -n) giving: # A tibble: 8 x 4 animal_id service_date event_date event_description <int> <date> <date> <chr> 1 610710 2005-10-22 NA <NA> 2 610710 NA 2006-06-16 PP 3 610710 2006-12-03 NA <NA> 4 610710 2006-12-27 2007-02-15 PP 5 610710 2007-12-02 2008-01-09 PN 6 610710 2008-01-17 NA <NA> 7 610710 2008-03-04 2008-04-09 PP 8 610710 NA 2009-06-16 PP sqldf We can follow pretty much the same logic using the sqldf package: library(sqldf) sqldf("with b0 as (select *, NULL event_date, NULL event_description from DF1 union select animal_id, NULL service_date, event_date, event_description from DF2), b1 as (select *, coalesce(service_date, event_date) date1 from both order by animal_id, date1), b2 as (select *, lead(event_date) over () lead_event_date from b1), b3 as (select *, coalesce(lead_event_date - service_date <= 90, 0) + row_number() over () coll from b2) select distinct animal_id, group_concat(service_date) service_date, group_concat(event_date) event_date, group_concat(event_description) event_description from b3 group by coll") giving: animal_id service_date event_date event_description 1 610710 2005-10-22 <NA> <NA> 2 610710 <NA> 2006-06-16 PP 3 610710 2006-12-03 <NA> <NA> 4 610710 2006-12-27 2007-02-15 PP 5 610710 2007-12-02 2008-01-09 PN 6 610710 2008-01-17 <NA> <NA> 7 610710 2008-03-04 2008-04-09 PP 8 610710 <NA> 2009-06-16 PP Note DF1 <- structure(list(animal_id = c(610710L, 610710L, 610710L, 610710L, 610710L, 610710L), service_date = structure(c(13078, 13485, 13509, 13849, 13895, 13942), class = "Date")), row.names = c(NA, -6L ), class = "data.frame") DF2 <- structure(list(animal_id = c(610710L, 610710L, 610710L, 610710L, 610710L), event_date = structure(c(13315, 13559, 13887, 13978, 14411), class = "Date"), event_description = c("PP", "PP", "PN", "PP", "PP")), row.names = c(NA, -5L), class = "data.frame")
doc_1242
While sending push to user segment I am simply checking the iOS bundle id and trying to send all of the devices in which the app is installed. A: I'm not sure of this is what did it but mine started working after I added my App Store ID to the GoogleService-Info.plist section of my Firebase project. Individual device notifications always worked but I could never get "bulk" notifications to work properly until I added that. I didn't have to redownload the .plist file and add it to my app either. Simply adding the ID to the Firebase configuration page appears to have made it work.
doc_1243
A: Welcome to SO! This is just proper English vocabulary. I.E., you go to a restaurant for food service, thus is provided by a server (waiter/waitress). However, it's not necessary to concatenate the words "MapServer"... "Map server" would do fine because it's just technical jargon. But to ultimately answer your question ESRI usually has the word "MapServer" built into their URLs for accessing their map services. So it's not too surprising that they're often interchangeable. A: I don't believe ESRI documentation actually uses the term Mapserver or MapServer Perhaps not now, but historically it did (ArcGIS 9.3): http://resources.esri.com/help/9.3/arcgisserver/adf/java/help/doc/ffcb723b-2356-4afd-8a2c-a8a5c472e9eb.htm#MapServerObject MapServer can be configured as pooled or non-pooled, depending on the requirements of the application. An example of an application that requires a non-pooled MapServer is an application that changes the map (for example adds or removes layers) or one that manages a geodatabase edit session across multiple requests. ... and: http://resources.esri.com/help/9.3/arcgisserver/adf/java/help/doc/9e54b952-9c9c-47d0-b2e8-71a97de3e1d1.htm MapServer includes the following out-of-the-box extensions to the base server object: UMN MapServer has been around since 1994 ref: https://trac.osgeo.org/mapserver/wiki/MapServerHistory and that certainly pre-dates ArcGIS 9.3 (released June 26, 2008), but I'm not sure how far back it was when ESRI started using the term. MapObjects I think was around in 1998. So it is perhaps unfortunate that ESRI came up with the same phrase for providing a map from a service as UMN, and they certainly did use it in the past; whether they should continue to use it is, I guess, a matter of opinion.
doc_1244
for (int i = 0; i <23; i++) { TableRow row= new TableRow(this); TableRow.LayoutParams lp = new TableRow.LayoutParams(TableRow.LayoutParams.MATCH_PARENT); row.setLayoutParams(lp); tv = new TextView(this); tv.setText(array[i]); ImageView image65 = new ImageView(this); image65.setBackgroundResource(R.drawable.ic_no); row.addView(tv,1); row.addView(image65,0); ll.addView(row,i); } Everything is good except that I want to specify the width and length of the image view in sp. How do I add layoutprams for the image view beside the layout-prams of the row? I tried the following but it didn't work out. for (int i = 0; i <23; i++) { TableRow row= new TableRow(this); TableRow.LayoutParams lp = new TableRow.LayoutParams(TableRow.LayoutParams.MATCH_PARENT); row.setLayoutParams(lp); TableRow.LayoutParams lp2 = new TableRow.LayoutParams(customwidth,customheight); tv = new TextView(this); tv.setText(array[i]); ImageView image65 = new ImageView(this); image65.setBackgroundResource(R.drawable.ic_no); image65.setLayoutParams(lp2); row.addView(tv,1); row.addView(image65,0); ll.addView(row,i); } A: I found the solution for (int i = 0; i <23; i++) { TableRow row= new TableRow(this); TableRow.LayoutParams lp = new TableRow.LayoutParams(TableRow.LayoutParams.MATCH_PARENT); row.setLayoutParams(lp); tv = new TextView(this); tv.setText(array[i]); ImageView image65 = new ImageView(this); Drawable d = getResources().getDrawable(R.drawable.ic_yes); image65.setImageDrawable(d); image65.setLayoutParams(new TableRow.LayoutParams(40, 40)); row.addView(tv,1); row.addView(image65,0); ll.addView(row,i); }
doc_1245
For Eg refer to attached image. enter image description here Numbering should be based on Material column, for eg: * *Computer = 1 *Keyboard = 2 *Mouse = 3 *Monitor = 4 *USB Port = 5 *Pen = 6 *Paper = 7 Numbers to be pasted on another column It has to be dynamic, so that even if the list gets increased with another unique Material name, autonumbering should happen A: # Since you didn't provide an easily reproducible dataset, here's a simple one: > df <- data.frame(Material = c('Keyboard', 'Mouse', 'Keyboard', 'USB', 'USB')) > df Material 1 Keyboard 2 Mouse 3 Keyboard 4 USB 5 USB You can use the match function to find the index in a unique subset of the materials, thus providing each material a unique id: > df$mat.id <- match(df$Material, unique(df$Material)) > df Material mat.id 1 Keyboard 1 2 Mouse 2 3 Keyboard 1 4 USB 3 5 USB 3
doc_1246
<stuff> <item id="1"><![CDATA[first stuff...]]></item> <item id="2"><![CDATA[more stuff...]]></item> </stuff> I am struggling mightily to figure out how to deserialize this with the Simple Framework. I have started out with the following Java classes: import java.util.ArrayList; import java.util.List; import org.simpleframework.xml.Root; import org.simpleframework.xml.ElementList; @Root(name="stuff") public class Stuff { @ElementList(inline=true) public List<Item> itemList = new ArrayList<Item>(); } and import org.simpleframework.xml.Attribute; import org.simpleframework.xml.Element; @Element(name="item", data=true) public class Item { @Attribute public String id; } So the missing piece for me is how do I access the CDATA content for each item element? A: I patiently waited for my son to write up the solution he suggested which turned out to solve the problem. Evidently he will have nothing to do with an organization that would have me as a member, to only slightly distort Groucho's eternal mantra. Here is his suggestion, provided so that other's looking to solve this puzzle have a handy solution: Modify the Item class as follows: import org.simpleframework.Attribute; import org.simpleframework.Text; public class Item { @Attribute public String id; @Text(data=true) public String value; } so that the field value will contain the CDATA text.
doc_1247
File 1 : a_0001, File 2 : b_1001, File 3 : c_2001 present in Directory : /home/swa/IBI directory. I want to form an oracle string as below " [a_001] [b_1001] [c_2001] " and use this string for further oracle processing. I cannot give any code here. As, I don't know any function which does this. A: Since Oracle 11 you can use external table preprocessing to list files in directory and then iterate over files read from external table. Something like here: http://www.oracle-developer.net/display.php?id=513 A: If i remember correctly oracle doesn't have function to list files in directory. For this you'll need to use java source. List files in directory, read input and create your variable. Here's java source which will execute commands in unix environment: create or replace and compile java source named "Host" as import java.io.*; public class Host { public static void executeCommand(String command) { //String[] commands = command.split(" "); try { String[] finalCommand; /*if (isWindows()) { finalCommand = new String[4]; // Use the appropriate path for your windows version. //finalCommand[0] = "C:\\winnt\\system32\\cmd.exe"; // Windows NT/2000 finalCommand[0] = "C:\\windows\\system32\\cmd.exe"; // Windows XP/2003 //finalCommand[0] = "C:\\windows\\syswow64\\cmd.exe"; // Windows 64-bit finalCommand[1] = "/y"; finalCommand[2] = "/c"; finalCommand[3] = command; } else { //finalCommand = new String[commands.length + 2]; finalCommand = new String[commands.length]; //finalCommand[0] = "/bin/bash"; //finalCommand[1] = "-c"; //for (int i = 2; i < commands.length; i++){ // finalCommand[i] = command[i]; //} for (int i = 0; i < commands.length; i++){ finalCommand[i] = commands[i]; } }*/ //final Process pr = Runtime.getRuntime().exec(finalCommand); final Process pr = Runtime.getRuntime().exec(command.split(" ")); pr.waitFor(); new Thread(new Runnable(){ public void run() { BufferedReader br_in = null; try { br_in = new BufferedReader(new InputStreamReader(pr.getInputStream())); String buff = null; while ((buff = br_in.readLine()) != null) { System.out.println("Process out :" + buff); try {Thread.sleep(10); } catch(Exception e) {} } br_in.close(); } catch (IOException ioe) { System.out.println("Exception caught printing process output."); ioe.printStackTrace(); } finally { try { br_in.close(); } catch (Exception ex) {} } } }).start(); new Thread(new Runnable(){ public void run() { BufferedReader br_err = null; try { br_err = new BufferedReader(new InputStreamReader(pr.getErrorStream())); String buff = null; while ((buff = br_err.readLine()) != null) { System.out.println("Process err :" + buff); try {Thread.sleep(10); } catch(Exception e) {} } br_err.close(); } catch (IOException ioe) { System.out.println("Exception caught printing process error."); ioe.printStackTrace(); } finally { try { br_err.close(); } catch (Exception ex) {} } } }).start(); } catch (Exception ex) { System.out.println(ex.getLocalizedMessage()); } } public static boolean isWindows() { if (System.getProperty("os.name").toLowerCase().indexOf("windows") != -1) return true; else return false; } }; PLSQL function: CREATE OR REPLACE PROCEDURE host_command (p_command IN VARCHAR2) AS LANGUAGE JAVA NAME 'Host.executeCommand (java.lang.String)'; Script: DECLARE l_output DBMS_OUTPUT.chararr; l_lines INTEGER := 1000; BEGIN DBMS_OUTPUT.enable(1000000); DBMS_JAVA.set_output(1000000); host_command('printenv'); --host_command('/bin/ls /home/oracle'); DBMS_OUTPUT.get_lines(l_output, l_lines); FOR i IN 1 .. l_lines LOOP -- Do something with the line. 
-- Data in the collection - l_output(i) /* l_output(i) - unix / java output Parse files list here and make your variable: */ DBMS_OUTPUT.put_line(l_output(i)); END LOOP; END; / A: There is an C library / listener.ora / Library / External Function approach. It could be helpful on XE edition where there is not java Note: when there is an directory listing over 32000 chars this solution causes buffer overflow 1) Prepare Your shared C library (in list.c) - this is definitely not the safest & best solution on how-to code directory listing in C. But You got the basic idea. // // FILE: list.c // // gcc -Wall -fPIC -c list.c // gcc -shared -o list.so list.o // mkdir -p /u01/lib // cp list.so /u01/lib // #include <stdio.h> #include <dirent.h> #include <sys/types.h> char *list_dir(const char *path) { char *filelist; filelist=(char *) calloc(32000,sizeof(char)); struct dirent *entry; DIR *dir=opendir(path); if (dir==NULL) { return; } strcat(filelist,""); while ((entry = readdir(dir)) != NULL) { strcat(filelist,entry->d_name); strcat(filelist,"\n"); } strcat(filelist,"\0"); closedir(dir); return (filelist); } 2) Edit Your LISTENER.ORA ... (SID_DESC = (SID_NAME = PLSExtProc) (ORACLE_HOME = <PATH TO ORACLE >) (ENVS="EXTPROC_DLLS=/u01/lib/list.so") ^^^^^^^^^^^^^^^^^^^^ This line enables calling of external libs (PROGRAM = extproc) ) ... 3) Prepare database objects create or replace library c_list is '/u01/lib/list.so'; / create or replace function host_list(v_directory in varchar2) return varchar2 as external name "list_dir" library c_list language c parameters (v_directory string, return string); / And finally use it with: declare v_ret varchar2(32000); begin v_ret := host_list('/u01'); dbms_output.put_line(v_ret); end; / You can of course modify the both the C and the string post-processing to suit Your needs. Best regards P.S. You can translate the result to table as follows create or replace package sysop as type file_list_rec is record(filename varchar2(1024)); type file_list is table of file_list_rec; function ls(v_directory varchar2) return file_list pipelined; end; / create or replace package body sysop as function ls(v_directory varchar2) return file_list pipelined is rec file_list_rec; v_host_list varchar2(32000) := ''; begin v_host_list := host_list(v_directory); for file in ( select regexp_substr(v_host_list, '[^'||chr(10)||']+', 1, level) from dual connect by regexp_substr(v_host_list, '[^'||chr(10)||']+', 1, level) is not null) loop pipe row (file); end loop; return; end ls; end sysop; / And use it like this: select * from table(sysop.ls('/u01')) order by 1;
doc_1248
I'm trying to use the daterangepicker in one of my forms and it actually works fine on Chrome, but I just can't make it to work on IE. Could someone please tell me what is wrong? Its as if IE is completly ignoring my callback function My form in Partial View: @using (Html.BeginForm("SaveKPI", "Channel")) { <fieldset> @Html.HiddenFor(modelItem => Model.CampaignId) <div id="kpireportrange" class="pull-right" class="datepicker"> <i class="fa fa-calendar fa-lg"></i><span></span>@Html.HiddenFor(modelItem => Model.SelectedStartDate, new { id = "startDate" }) <b class="caret"></b></div> @Html.DropDownListFor(modelItem => Model.AttributeId, new SelectList(Model.AllAttributes, "AttributeId", "Name"), new { @class = "form-control", @style = "width:250px" }) Target: @Html.TextBoxFor(modelItem => Model.AttributeValue, new { @class = "form-control", @value = "Vul hier het aantal in..", @style = "width:250px" }) <br /> <button type="submit" class="btn btn-default" id="btnSave"> KPI TOEVOEGEN</button> </fieldset> } Script in Partial View: $(document).ready(function () { $('#kpireportrange span').html(moment().format('MMMM YYYY')); var elem = document.getElementById("startDate"); elem.value = moment().format('MMMM YYYY'); $('#kpireportrange').daterangepicker( { ranges: { 'This Month': [moment().startOf('month'), moment().endOf('month')], 'Last Month': [moment().subtract('month', 1).startOf('month'), moment().subtract('month', 1).endOf('month')] }, startDate: moment(), endDate: moment() }, function (start, end) { $('#kpireportrange span').html(start.format('MMMM YYYY')); var elem = document.getElementById("startDate"); elem.value = start.format('MMMM YYYY'); } ); }); A: Finally found the solution to my issue here: https://github.com/dangrossman/bootstrap-daterangepicker/issues/219 Adding this did the trick $.fn.modal.Constructor.prototype.enforceFocus = function () {}; A: wow .. added above line in show.bs.modal function .. worked for me ,, $('#aboutModal').on('show.bs.modal', function (e) { $.fn.modal.Constructor.prototype.enforceFocus = function () {};
doc_1249
bin\mallet import-dir --input D:\Data\test1 --output test1.mallet --keep-sequence --remove-stopwords --extra-stopwords extra.txt
By removing --keep-sequence --remove-stopwords --extra-stopwords extra.txt I am able to import the files; after that, when I try to train the model, an exception is thrown.
A: I recommend you use the GUI for MALLET: https://code.google.com/p/topic-modeling-tool/
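If the exception thrown during training is a Java heap error such as java.lang.OutOfMemoryError (an assumption - the original post does not say which exception it is), another option is to give MALLET more memory and train from the command line. A hedged sketch for a standard MALLET 2.x install on Windows; the exact name of the memory setting inside the launcher script can differ between versions, and the topic count and output file names below are placeholders:
rem In bin\mallet.bat, raise the heap setting, e.g. change the 1g default to something like:
set MALLET_MEMORY=4g

rem then train as usual:
bin\mallet train-topics --input test1.mallet --num-topics 50 --output-state topic-state.gz --output-topic-keys keys.txt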
doc_1250
<nav class="navbar navbar-toggleable-xl navbar-inverse bg-primary"> <div class="container"> <a class="navbar-brand" href="#">Home</a> <div class="navbar-collapse collapse"> <ul class="navbar-nav"> <li class="nav-item"> <a class="nav-link" href="#">About</a> </li> <li class="nav-item"> <a class="nav-link" href="#">Speakers</a> </li> <li class="nav-item"> <a class="nav-link" href="#">Schedule</a> </li> @if(Auth::check()) <li role="presentation" class="dropdown"> <a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="false"> Hello {{Auth::user()->name }} <span class="caret"></span> </a> <ul class="dropdown-menu"> <li><a href="{{ route('jobs.index') }}">Jobs</a></li> <li><a href="{{ route('sectors.index') }}">Sectors</a></li> <li><a href="{{ route('statuses.index') }}">Status</a></li> <li><a href="/jobs/create">Create a Job</a></li> <li role="separator" class="divider"></li> <li><a href="{{ route('logout')}}">Logout</a></li> </ul> </li> @else <a href="{{ route('login') }}" class="btn btn-default">Login</a> <a href="{{ route('register') }}" class="btn btn-default">Register</a> @endif </ul> </div> </div> </nav> Also I want to move the login and register buttons to the right of the Navbar. Any help would be appreciated. A: You have probably logged in before and used the remember me button, try clearing your cache and refresh the page. to move the buttons to the right you can just use css styling wrap the buttons in a div tag and float it to the right <div style="right: 0; position: absolute;"> <a href="{{ route('login') }}" class="btn btn-success">Login</a> <a href="{{ route('register') }}" class="btn btn-danger">Register</a></div>
doc_1251
If I notice obvious errors in the entries, I can easily edit the Google Sheet to correct them. For example, a user with name "Foo Bar" accidentally puts "Foo" into the text box for "Last Name" and "Bar" into the entry for "First Name". Or they might have an obvious typo (such as having the word "teh" instead of "the" in the "Title" text box). I can go into the Google Sheet and update the incorrect entries in the spreadsheet. However, after these updates are made in the Google Sheet, if the user selects the "Edit Your Response" link generated by the original entry, it will display the originally entered responses rather than the updated entries. One workaround that I have come up with is that instead of having the Google Form send them an "Edit Your Response" link, if they request it, I can provide the user a link to a form that is fully pre-populated with all of their updated entries. I generate this link using a script that is a slightly modified version of the script in How to prefill Google form checkboxes? After I send them this link, which incorporates all manual updates, and the user can then make any changes that they want to make and submit the form. Unfortunately, this will lead to a new entry into the spreadsheet, so I then need to delete the original entry. The other alternative is that I can select the "Edit Your Response" link and make the changes in the entries by changing their form entries and resubmitting. This also is a little more clunky than just changing values directly in the spreadsheet. So this leads to the question: Is there any way to present the user a link to a form where they can edit responses that includes entries that were manually updated in the Google Sheet. Thanks! Tom A: Since the actual responses are saved in the Form and only copies are in the spreadsheet, you need to use the Edit your Response link. There is code to add this to a column so it is easier to retrieve.
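To make the "add the edit link to a column" idea from the answer concrete, here is a hedged Apps Script sketch; the form ID, sheet name and target column are placeholders, and it assumes the response rows in the sheet are still in the same order as form.getResponses() (i.e. no rows were deleted):
// Bound script on the spreadsheet (or standalone with the right permissions).
function addEditResponseUrls() {
  var form = FormApp.openById('YOUR_FORM_ID');
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Form Responses 1');
  var urlColumn = sheet.getLastColumn() + 1; // write the links into a new column at the end
  var responses = form.getResponses();
  for (var i = 0; i < responses.length; i++) {
    // Row 1 is the header row, so response i lands on row i + 2.
    sheet.getRange(i + 2, urlColumn).setValue(responses[i].getEditResponseUrl());
  }
}
With the edit URL stored next to each row, you can hand the user that link after making manual corrections, keeping in mind that the form itself still holds the original (uncorrected) answers.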
doc_1252
Is it possible?? If it is How? Thank you A: I’m assuming you want to SSH into the VPS. To do that in Python you’ll have to find a module that allows you to establish SSH connections. I recommend Paramiko to do that. Here is a quick example on how you would go about connecting to a server: import paramiko client = paramiko.SSHClient() client.load_system_host_keys() client.connect('ssh.example.com') stdin, stdout, stderr = client.exec_command('ls -l') Or if you want to use username and password instead: import paramiko client = paramiko.SSHClient() client.connect('ssh.example.com', username='root', password='toor') stdin, stdout, stderr = client.exec_command('ls -l')
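A small follow-up on the Paramiko sketches above (my own addition): reading the command output, closing the connection, and auto-accepting an unknown host key when connecting with a password. The hostname and credentials are placeholders, and AutoAddPolicy trades some security for convenience:
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
# Accept the VPS key automatically if it is not already in known_hosts
# (convenient for a first connection, but weaker security - adjust to your needs).
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('ssh.example.com', username='root', password='toor')

stdin, stdout, stderr = client.exec_command('ls -l')
print(stdout.read().decode())   # command output
print(stderr.read().decode())   # any error output
client.close()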
doc_1253
// https://snack.expo.io/@spencercarli/react-native-flatlist-grid import React from 'react'; import { Button, StyleSheet, Text, View, FlatList, Dimensions } from 'react-native'; import { createStackNavigator, createAppContainer } from 'react-navigation'; //import { Container, Header, Content} from "native-base"; //import { SectionList, FlatList, GridView, FlatGrid } from 'react-native-super-grid';< const data = [ { key: 'A' }, { key: 'B' }, { key: 'C' }, { key: 'D' }, { key: 'E' }, { key: 'F' }, { key: 'G' }, { key: 'H' }, { key: 'I' }, { key: 'J' }, // { key: 'K' }, // { key: 'L' }, ]; const formatData = (data, numColumns) => { const numberOfFullRows = Math.floor(data.length / numColumns); let numberOfElementsLastRow = data.length - (numberOfFullRows * numColumns); while (numberOfElementsLastRow !== numColumns && numberOfElementsLastRow !== 0) { data.push({ key: `blank-${numberOfElementsLastRow}`, empty: true }); numberOfElementsLastRow++; } return data; }; const numColumns = 3; export default class App extends React.Component { renderItem = ({ item, index }) => { if (item.empty === true) { return <View style={[styles.item, styles.itemInvisible]} />; } return ( <View style={styles.item} > <Text style={styles.itemText}>{item.key}</Text> </View> ); }; render() { return ( <FlatList data={formatData(data, numColumns)} style={styles.container} renderItem={this.renderItem} numColumns={numColumns} /> ); } } const styles = StyleSheet.create({ container: { flex: 1, marginVertical: 20, }, item: { backgroundColor: '#4D243D', alignItems: 'center', justifyContent: 'center', flex: 1, margin: 1, height: Dimensions.get('window').width / numColumns, // approximate a square }, itemInvisible: { backgroundColor: 'transparent', }, itemText: { color: '#fff', }, }); A: Yes it´s possible, you just need to change the View to TouchableOpacity and add your onPress functionality, like this: here is the snack modified
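The answer above only links to a Snack, so here is a minimal sketch of what "change the View to TouchableOpacity" could look like applied to the renderItem from the question; the onPress handler is a placeholder, not from the original post:
// Add TouchableOpacity to the existing react-native import:
// import { Button, StyleSheet, Text, View, FlatList, Dimensions, TouchableOpacity } from 'react-native';

renderItem = ({ item, index }) => {
  if (item.empty === true) {
    return <View style={[styles.item, styles.itemInvisible]} />;
  }
  return (
    <TouchableOpacity
      style={styles.item}
      onPress={() => console.log('pressed', item.key)} // placeholder: navigate, update state, etc.
    >
      <Text style={styles.itemText}>{item.key}</Text>
    </TouchableOpacity>
  );
};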
doc_1254
╔════════════════════════════════════════════════════════════╗ ╠═ Uploading 9625 files to Google Cloud Storage ═╣ ╚════════════════════════════════════════════════════════════╝ File upload done. Updating service [default]...failed. ERROR: (gcloud.app.deploy) Error Response: [3] Errors were encountered while copying files to App Engine. Details: [ [ { "@type": "type.googleapis.com/google.rpc.ResourceInfo", "description": "Failed to copy file.", "resourceName": "https://storage.googleapis.com/staging.rsvp.appspot.com/df4bc71e8832337e997291648609c4e207b5aa55", "resourceType": "file" } ] ] WHat is the problem here and how can I fix it? A: The reason for this issue was, I had some large size files ( > 30Mb ) in my project folder. I removed them and redeployed, it worked without an issue. A: As OP already mentioned, the reason is large files. Here is how you can also find out which file is causing the problem. If you have an error such as Details: [ [ { "@type": "type.googleapis.com/google.rpc.ResourceInfo", "description": "Failed to copy file.", "resourceName": "https://storage.googleapis.com/staging.bemmu1-hrd.appspot.com/b463c152ee1498bd4d27c1ea67c7f8e82cb4b220", "resourceType": "file" } ] ] Note the hash that appears in resourceName, in this case "b463c152ee1498bd4d27c1ea67c7f8e82cb4b220". Search for it in the most recent log file (mine was /Users/bemmu/.config/gcloud/logs/2019.07.30/00.45.43.657001.log) and you'll find it as part of a really long string, in my case: ...'templates/envato/images/arrangement.png': {'sourceUrl': 'https://storage.googleapis.com/staging.bemmu1-hrd.appspot.com/b463c152ee1498bd4d27c1ea67c7f8e82cb4b220'... From this I could see that in my case the offending file was arrangement.png
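If the oversized files are not needed at runtime, an alternative to deleting them (a suggestion, not from the original answers) is to keep them in the repository but exclude them from the upload with a .gcloudignore file in the project root; App Engine enforces a per-file size limit (around 32 MB for standard environment deployments, as far as I know), which lines up with the > 30 MB files mentioned above. The patterns below are only examples:
# .gcloudignore - files listed here are not uploaded by `gcloud app deploy`
.gcloudignore
.git
.gitignore
node_modules/
assets/raw/
*.psd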
doc_1255
Type type = this.GetType(); ??? var x = this.GetQueryable< ??? >().ToList(); class Program { static void Main(string[] args) { var acc = new User(); acc.Select(); } } public partial class User { public DB_Test001Entities context; public User() { context = new DB_Test001Entities(); } public void Select() { Type type = this.GetType(); var x = this.GetQueryable< **???** >().ToList(); } public IQueryable<TEntity> GetQueryable<TEntity>(List<string> includes = null) where TEntity : class { IQueryable<TEntity> items = context.Set<TEntity>(); if (includes != null && includes.Any()) includes.Where(i => i != null).ToList().ForEach(i => { items = items.Include(i); }); return items; } } A: You can do it using reflection. The following sample works smoothly. In program you can use Clerk or Manager, just any instance derived from User to call Select. You can improve your program with this. class Clerk : User { } class Manager : User { } internal class User { public User() { } public string Name { get; set; } public void Select() { var list = new List<string>() {"Jack", "Martin"}; Type thisType = GetType(); MethodInfo method = thisType.GetMethod("GetQueryable").MakeGenericMethod(thisType); method.Invoke(this, new object[] {list}); } public IQueryable<TEntity> GetQueryable<TEntity>(List<string> includes = null) where TEntity : User, new() { if(includes != null) { Console.WriteLine(typeof(TEntity)); var entity = new List<TEntity>(includes.Count); entity.AddRange(includes.Select(item => new TEntity {Name = item})); return entity.AsQueryable(); } return null; } } class Program { static void Main() { User usr = new Manager(); usr.Select(); } }
doc_1256
Btw im using dataTable. example table: http://i38.photobucket.com/albums/e149/eloginko/table_zps20bbecb1.png This is how my table do: http://jsfiddle.net/4GP2h/104/ my script: $("#dialog-confirm").dialog({ resizable: false, height: 140, modal: true, autoOpen: false, buttons: { "Close": function () { $(this).dialog("close"); } } }); var dataSet; try{ dataSet = JSON.parse(localStorage.getItem('dataSet')) || []; } catch (err) { dataSet = []; } $('#myTable').dataTable({ "data": [], "columns": [{ "title": "Name" }, { "title": "Age" }, { "title": "Gender" }, { "title": "Action" }], "bStateSave": true, "stateSave": true, "bPaginate": false, "bLengthChange": false, "bFilter": false, "bInfo": false, "bAutoWidth": false }); oTable = $('#myTable').DataTable(); for (var i = 0; i < dataSet.length; i++) { oTable.row.add(dataSet[i]).draw(); } $('#Save').click(function () { if ($('#name').val() == '' || $('#age').val() == '' || $("input[name='gender']:checked").val() == undefined) { $("#dialog-confirm").dialog("open"); } else { var data = [ $('#name').val(), $('#age').val(), $("[name='gender']:checked").val(), "<button class='delete'>Delete</button>" ]; oTable.row.add(data).draw(); dataSet.push(data); localStorage.setItem('dataSet', JSON.stringify(dataSet)); } }); $(document).on('click', '.delete', function () { var row = $(this).closest('tr'); oTable.row(row).remove().draw(); var rowElements = row.find("td"); for (var i = 0; i < dataSet.length; i++) { var equals = true; for (var j = 0; j < 3; j++) { if (dataSet[i][j] != rowElements[j].innerHTML) { equals = false; break; } } if (equals) { dataSet.splice(i, 1); break; } } localStorage.setItem('dataSet', JSON.stringify(dataSet)); }); A: Add a form tag to your code: <form id='myform'> <div id="dialog-confirm" title="Error"> <p><span class="ui-icon ui-icon-alert" style="float:left; margin:0 7px 0px 0;"></span>Please fill all the required fields!</p> </div> <br /> <br />Name: <input type="text" name="name" id="name" /> <br />Age: <input type="text" name="age" id="age" /> <br />Gender: <input type="radio" name="gender" value="Male" />Male <br /> <input type="radio" name="gender" value="Female" />Female <br /> <button id="Save" name="Save">Save</button> <div class="container well"> <table id="myTable" class="table table-striped table-bordered" cellspacing="0" width="100%"> <tr> <td> Line 1 Edit </td> <td> <input type='text' name='line1' /> </td> </tr> <tr> <td> Line 2 Edit </td> <td> <input type='text' name='line2' /> </td> </tr> </table> </div> </form> Then you can use jquery to send it to your server var formdata = $("#myform").serialize() formdata will have the data from the form into the variable ready for a post or get request. Then use $.post("myurl?" + formdata, function(response_from_server){alert(response_from_server);})
doc_1257
// EventsControl class private bool Filter(object obj) { if (!(obj is Event @event)) return false; if (string.IsNullOrEmpty(Location)) return true; return true; // return @event.Location == Location; } public static readonly DependencyProperty EventsSourceProperty = DependencyProperty.Register( nameof(EventsSource), typeof(ObservableCollection<Event>), typeof(EventsControl), new PropertyMetadata(default(ObservableCollection<Event>), EventsSourceChanged)); public ObservableCollection<Event> EventsSource { get => (ObservableCollection<Event>)GetValue(EventsSourceProperty); set => SetValue(EventsSourceProperty, value); } public ICollectionView EventsView { get; set; } private static void EventsSourceChanged(DependencyObject d, DependencyPropertyChangedEventArgs e) { if (!(d is EventsControl eventsControl)) return; var view = new CollectionViewSource { Source = e.NewValue }.View; view.Filter += eventsControl.Filter; eventsControl.EventsView = view; //view.Refresh(); } What could be wrong with this code? I don't want to use a default view ( WPF CollectionViewSource Multiple Views? ) A: I made it a dependency property and it works. Not sure if that's the best way to solve it though. public static readonly DependencyProperty EventsViewProperty = DependencyProperty.Register( nameof(EventsView), typeof(ICollectionView), typeof(EventsControl), new PropertyMetadata(default(ICollectionView))); public ICollectionView EventsView { get => (ICollectionView) GetValue(EventsViewProperty); set => SetValue(EventsViewProperty, value); }
doc_1258
Please could anyone refer me to a website, or else give any solution to this issue. Currently I need to know how to get Log4j into a dynamic web project. Thanks in advance, friends.
A: If your project is a Maven project, then you need to add the Apache Log4j dependency to your pom and follow the URL below for the configuration: http://www.mkyong.com/spring-mvc/spring-mvc-log4j-integration-example/ Or you can use java.util.logging, which is provided in the Java standard library; check the link below: Good examples using java.util.logging
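To make the Maven route above concrete, here is a minimal sketch for Log4j 1.x; the version number, class name and log pattern are illustrative choices, not taken from the linked tutorial:
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>
Then put a log4j.properties file on the classpath (for example src/main/resources):
# Send INFO and above to the console
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1} - %m%n
And use it from any class in the web application:
import org.apache.log4j.Logger;

public class GreetingService {
    private static final Logger logger = Logger.getLogger(GreetingService.class);

    public void greet(String name) {
        logger.info("greeting " + name); // goes to the console appender configured above
    }
}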
doc_1259
A: When creating a model from the database you will be asked to provide a connection string. Sometimes this connection string can be lost (for instance when checking code into source control). If you need to re-enter the connection string you can open your edmx file, and click the white area, then view the Properties window to see the connection string property. Alternatively you can set the connection string in the app.config file using the connection strings section.
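For reference, a database-first (edmx) connection string in app.config typically looks like the sketch below; the entity container name, model file name, server and database are placeholders and must match your own model:
<connectionStrings>
  <add name="MyEntities"
       connectionString="metadata=res://*/MyModel.csdl|res://*/MyModel.ssdl|res://*/MyModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=.\SQLEXPRESS;initial catalog=MyDatabase;integrated security=True;MultipleActiveResultSets=True&quot;"
       providerName="System.Data.EntityClient" />
</connectionStrings>
The name attribute has to match the name the generated context passes to its base constructor, otherwise Entity Framework will not find the connection string at runtime.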
doc_1260
Error: tibia.js:8 Uncaught (in promise) SyntaxError: Unexpected end of input Warning: Cross-Origin Read Blocking (CORB) blocked cross-origin response https://api.tibiadata.com/v2/characters/Burdeliusz.json class Tibia { constructor() {} async getCharacter(char) { const characterResponse = await fetch(`https://api.tibiadata.com/v2/characters/${char}.json`, { mode: 'no-cors' }); const character = await characterResponse.json(); return { character } } } I searched similar questions, but I couldn't find the fix. A: It's because the endpoint is not passing the right params in the response header. Header should include: "Access-Control-Allow-Origin" : "*", "Access-Control-Allow-Credentials" : true I tested with Postman and the response had 8 headers: https://api.tibiadata.com/v2/characters/Burdeliusz.json Connection →keep-alive Content-Length →683 Content-Type →application/json; charset=utf-8 Date →Thu, 09 Aug 2018 20:05:30 GMT Server →nginx/1.10.3 Strict-Transport-Security →max-age=63072000; includeSubdomains; preload X-Content-Type-Options →nosniff X-Frame-Options →DENY Example of Access Control Allow Origin: https://api.spacexdata.com/v2/launches Access-Control-Allow-Origin →* CF-RAY →447cd76c595fab66-YYZ Connection →keep-alive Content-Encoding →gzip Content-Type →application/json; charset=utf-8 Date →Thu, 09 Aug 2018 20:06:08 GMT Expect-CT →max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct" Server →cloudflare Set-Cookie →__cfduid=d1dce3c5d11de37f960c7b47dc4f7d6701533845168; expires=Fri, 09-Aug-19 20:06:08 GMT; path=/; domain=.spacexdata.com; HttpOnly; Secure Strict-Transport-Security →max-age=15552000; includeSubDomains Transfer-Encoding →chunked Vary →Accept-Encoding, Origin X-Cache-Status →EXPIRED X-Content-Type-Options →nosniff X-DNS-Prefetch-Control →off X-Download-Options →noopen X-Frame-Options →SAMEORIGIN X-Response-Time →151ms X-XSS-Protection →1; mode=block You can try asking the tibiadata people to add the headers OR Use a proxy to access the endpoint: http://jsfiddle.net/RouzbehHz/b95vcdhm/2/ var proxyUrl = 'https://cors-anywhere.herokuapp.com/', targetUrl = 'https://api.tibiadata.com/v2/characters/Burdeliusz.json' fetch(proxyUrl + targetUrl) .then(blob => blob.json()) .then(data => { console.table(data); document.querySelector("pre").innerHTML = JSON.stringify(data, null, 2); return data; }) .catch(e => { console.log(e); return e; }); You can recreate the proxy server: git clone https://github.com/Rob--W/cors-anywhere.git cd cors-anywhere/ npm install heroku create git push heroku master
doc_1261
I want to let them unsubscribe from those emails. What I did for now:
* I added a subscription column to those contacts
* I created a hash to find them
My Contact model:
class Contact < ApplicationRecord
  before_create :add_unsubscribe_hash

  private

  def add_unsubscribe_hash
    self.unsubscribe_hash = SecureRandom.hex
  end
My contacts controller:
def unsubscribe
  @contact = Contact.find_by_unsubscribe_hash(params[:unsubscribe_hash])
  @contact.update_attribute(:subscription, false)
  authorize(:contact, :unsubscribe)
end
My route:
put 'contacts/unsubscribe/:unsubscribe_hash' => 'contacts#unsubscribe', :as => 'unsubscribe'
My link in the mail template:
href="<%= link_to 'Unsubscribe', unsubscribe_url(@unsubscribe_hash), method: :put %>"
I have read a lot about this kind of problem and tried many things, but I cannot find a good way to make this work. I get errors such as "possible unmatched constraints: [:unsubscribe_hash]" or nil for the unsubscribe_hash: my app does not find the @contact or the unsubscribe_hash.
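There is no accepted answer above, so the following is only a hedged illustration of one common way to wire this up, not the poster's confirmed fix. Two things stand out: links clicked from an email can only issue plain GET requests (method: :put relies on Rails UJS JavaScript, which does not run in mail clients), and link_to already renders the whole <a> tag, so it should not be wrapped inside an href="..." attribute. The "possible unmatched constraints: [:unsubscribe_hash]" error usually means the value passed to unsubscribe_url was nil, so the mailer has to supply it (for example from @contact.unsubscribe_hash):
# config/routes.rb - a GET route so the link works directly from an email
get 'contacts/unsubscribe/:unsubscribe_hash' => 'contacts#unsubscribe', :as => 'unsubscribe'

<%# mailer view - let link_to emit the anchor tag itself %>
<%= link_to 'Unsubscribe', unsubscribe_url(@contact.unsubscribe_hash) %>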
doc_1262
Most of the questions posted online point me towards checking Dbnull values from the database and I am doing that in the code. Here's the code where the exception is thrown: int rowNum = Convert.ToInt32(dataTable.Rows[r][dataTable.Columns.Count - 2]); Here's the code where I am checking for the dbnull values: for (int r = 0; r < dataTable.Rows.Count - 1; r++) //rows { for (int c = 1; c < dataTable.Columns.Count - 2; c++) { object val = dataTable.Rows[r][c]; if (!val.Equals(DBNull.Value)) { haveValues = true; } } } Here I am reading the values from the excel spreadsheet. Please point me in the right direction. Thanks in advance. Dimpy A: check for DbNull before calling Convert.ToInt32: as you have seen, this will raise an exception if the value is DbNull. something like: object x = *value from db* int y; if (x != DbNull.Value) y= Convert.ToInt32(x); else //handle null somehow A: You can also check: dataTable.Rows[r].IsNull(c) c can be the index, the column name or the DataColumn A: for (int c = 1; c < dataTable.Columns.Count - 2; c++) You don't check for count-2 so dataTable.Rows[r][dataTable.Columns.Count - 2] could be DBNull and your Convert fails A: I tend to do if(varFromSQL.ToString()!="") and then carry on with my business. Works like a charm every time. And then if i need to System.Convert.ToSomething() i can do that. A: You can try like this: DBNull.Value.Equals(dataTable.Rows[r][c]) if you want the index of the column you can use row[dataTable.Columns.IndexOf(col)] inside the foreach. for (int r = 0; r < dataTable.Rows.Count - 1; r++) //rows { foreach (DataColumn col in dataTable.Rows[r].Table.Columns) { if (!DBNull.Value.Equals(dataTable.Rows[r][col.ColumnName])) { haveValues = true; } } And Please provide more information about the type of exception, maybe the error is accessing an invalid index => int rowNum = Convert.ToInt32(dataTable.Rows[r][dataTable.Columns.Count - 2]); Why you use ...Columns.Count - 2 ?, What about if the count of columns is equals to 1 ? I Hope to this helps. Regards
doc_1263
Thank you!
A: Use a bool. Like
bool isUpdateEnable;
void Update()
{
    if(isUpdateEnable)
    {
        // Do whatever you want
    }
}
A: Simply disable the corresponding component: Behaviour.enabled
Enabled Behaviours are Updated, disabled Behaviours are not. This is shown as the small checkbox in the inspector of the behaviour. This is more efficient than a bool flag in the Update method, since continuously calling the "empty" Update method causes unnecessary overhead. Another advantage is that you can also use OnEnable and OnDisable to implement additional behaviour every time the component is enabled or disabled.
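A small sketch of the second approach (the component and key binding are illustrative, not from the original answers):
using UnityEngine;

public class Mover : MonoBehaviour
{
    void Update()
    {
        // Runs only while this component is enabled.
        transform.Translate(Vector3.forward * Time.deltaTime);
    }

    void OnEnable()  { Debug.Log("Mover enabled - Update will run"); }
    void OnDisable() { Debug.Log("Mover disabled - Update stops"); }
}

public class MoverToggle : MonoBehaviour
{
    public Mover mover; // assign in the Inspector

    void Update()
    {
        // Toggle the other component without deactivating its GameObject.
        if (Input.GetKeyDown(KeyCode.Space))
            mover.enabled = !mover.enabled;
    }
}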
doc_1264
The MySQL queries are not the issue but getting the values for search and replace is evading me. For example: I have a site in http:// example.com/mysub I need to move it to http:// example.com and change the database I need to remove the http:// from http:// example.com/mysub so I can run a search and replace the values with example.com Any ideas? A: sed -r 's_\bhttps?://__' also handles https, example $ echo "http://example.com" | sed -r 's_\bhttps?://__' example.com $ echo "https://example.com" | sed -r 's_\bhttps?://__' example.com A: You can enable extglob (shot -s extglob) and run the following Parameter Expansion*: var='https://example.com' printf '%s\n' "${var#http?(s)://}" Which will remove the first occurrence of http:// or https:// from the left side of var. Parameter Expansion* : Execute man bash | less +/Parameter\ Expansion or read further at Bash FAQ 37. Or both!
doc_1265
Can someone please have a look at the code below and give me some help as to why it isnt working. I am new to javascript so im still getting used to it. <!DOCTYPE html> <html> <head> <title> Who Am I? </title> <script type="text/javascript"> var imageone = document.getElementById("Zero"); var imagetwo = document.getElementById("One"); var imagethree = document.getElementById("Two"); var imagefour = document.getElementById("Three"); var imagefive = document.getElementById("Four"); var imagesix = document.getElementById("Five"); window.onload = init1; function init1 () { var imageone = document.getElementById("Zero"); imageone.onclick = showAnswerone; } function showAnswerone () { var imageone = document.getElementById("Zero"); imageone.src="Zero.jpg"; } window.onload = init2; function init2 () { var imagetwo = document.getElementById("One"); imagetwo.onclick = showAnswertwo; } function showAnswertwo () { var imagetwo = document.getElementById("One"); imagetwo.src="One.jpg"; } window.onload = init3; function init3 () { var imagethree = document.getElementById("Two"); imagethree.onclick = showAnswerthree; } function showAnswerthree () { var imagethree = document.getElementById("Two"); imagethree.src="Two.jpg"; } window.onload = init4; function init4 () { var imagefour = document.getElementById("Three"); imagefour.onclick = showAnswerfour; } function showAnswerfour () { var imagefour = document.getElementById("Three"); imagefour.src="Three.jpg"; } window.onload = init5; function init5 () { var imagefive = document.getElementById("Four"); image.onclick = showAnswerfive; } function showAnswerfive () { var imagefive = document.getElementById("Four"); imagefive.src="Four.jpg"; } window.onload = init6; function init6 () { var imagesix = document.getElementById("Five"); imagesix.onClick = showAnswersix; } function showAnswersix () { var imagesix = document.getElementById("Five"); imagesix.src="Five.jpg"; } function submitForm() { var var_one = 0, var_two = 0, var_three = 0; var var_four = 0, var_five = 0, var_six = 0; } function var_oneb(){ var_one=5; return true; } function var_onea(){ var_one=0; return true; } function var_twob(){ var_two=5; return true; } function var_twoa(){ var_two=0; return true; } function var_threeb(){ var_three=5; return true; } function var_threea(){ var_three=0; return true; } function var_fourb(){ var_four=5; return true; } function var_foura(){ var_four=0; return true; } function var_fiveb(){ var_five=5; return true; } function var_fivea(){ var_five=0; return true; } function var_sixb(){ var_six=5; return true; } function var_sixa(){ var_six=0; return true; } function results_addition() { var var_results=var_one+var_two+var_three+var_four+var_five+var_six; if(var_results<=29){ document.getElementById('choice1').value="Not all answers are correct"; } else{ if(var_results>=30){ document.getElementById('choice1').value="All answers are correct"; } else{ document.getElementById('choice1').value="All answers are correct"; } } } </script> <style> body { background-color: #ff0000; } div#grid { position: relative; width: 500px; height: 300px; margin-left: 50; margin-right: 50; } table { border-spacing: 0px; position: absolute; left: 40px; top: 40px; border-collapse: collapse; padding: 0px; margin: 0px; } td { border: 1px solid white; text-align: center; width: 160px; height: 110px; vertical-align: middle; align-content: stretch; padding: 5px; margin: 0px; } h2 { font-family: verdana, arial; text-align: center; color: white; font-size: 30px; } h3 { font-family: verdana, arial; 
text-align: center; color: white; font-size: 18px; } </style> </head> <body> <div id="grid"> <h2> Who Am I? </h2> <table> <tr> <td> <img id = "Zero" src = "Zeroblur.jpg"> </td> <td> <img id = "One" src = "Oneblur.jpg"> </td> <td> <img id = "Two" src = "Twoblur.jpg"> </td> </tr> <tr> <td> <img id = "Three" src = "Threeblur.jpg"> </td> <td> <img id = "Four" src = "Fourblur.jpg"> </td> <td> <img id = "Five" src = "Fiveblur.jpg"> </td> </tr> </table> </div> <br><br><br><br><br><br><br><br> <h3> I am a Rugby League Player. </h3> <h3> Click on me to reveal my identity! </h3> <br> <h3>Which Player am I</h3> <hr> <form action=""> <h3>Player 1 </h3> <center> <h3> Shaun Johnson <INPUT TYPE="radio" NAME="Ra1" VALUE="0" OnClick="var_onea()"> Sonny Bill Williams <INPUT TYPE="radio" NAME="Ra1" VALUE="5" OnClick="var_oneb()"> </h3> </center> <br> <hr> <h3>Player 2 </h3> <center> <h3> Gareth Widdop <INPUT TYPE="radio" NAME="Ra2" VALUE="0" OnClick="var_twoa()"> Sam Tomkins <INPUT TYPE="radio" NAME="Ra2" VALUE="5" OnClick="var_twob()"> </h3> </center> <br> <hr> <h3>Player 3 </h3> <center> <h3> James Graham <INPUT TYPE="radio" NAME="Ra3" VALUE="5" OnClick="var_threea()"> Sam Burgess <INPUT TYPE="radio" NAME="Ra3" VALUE="10" OnClick="var_threeb()"> </h3> </center> <br> <hr> <h3>Player 4 </h3> <center> <h3> Matthew Scott <INPUT TYPE="radio" NAME="Ra4" VALUE="5" OnClick="var_foura()"> Johnathon Thurston <INPUT TYPE="radio" NAME="Ra4" VALUE="10" OnClick="var_fourb()"> </h3> </center> <br> <hr> <h3>Player 5 </h3> <center> <h3> Neil Lowe <INPUT TYPE="radio" NAME="Ra5" VALUE="5" OnClick="var_fivea()"> Danny Brough <INPUT TYPE="radio" NAME="Ra5" VALUE="10" OnClick="var_fiveb()"> </h3> </center> <br> <hr> <h3>Player 6 </h3> <center> <h3> Mitch Garbutt <INPUT TYPE="radio" NAME="Ra6" VALUE="5" OnClick="var_sixa()"> Ryan Hall <INPUT TYPE="radio" NAME="Ra6" VALUE="10" OnClick="var_sixb()"> </h3> </center> <br> <hr> <br> <center> <INPUT TYPE="button" VALUE="Calculate" OnClick="results_addition()"> Your Score: <INPUT TYPE="text" id="choice1" NAME="choice1" VALUE="" SIZE=20> </center> </form> </body> </html> A: Short answer that should get you back on track: You're using onClick instead of onclick to add your click event listener to the images. Longer answer that might help to get everything to work correctly: The way you're attaching event listeners to DOM Events (such as a "click" or a "load") can be improved. Currently, you're overwriting the onload method 5 times: window.onload = init1; // ... window.onload = init2; // ... // etc. By the time the window is loaded, only the last set init method will execute (init6, in your case). If you want to use window.onload = method;, you'll have to create one init method that executes all separate init methods. Like so: function init() { init1(); init2(); // etc. }; window.onload = init; Event better is to add event listeners via the addEventListener method. By using addEventListener, you can add multiple methods that will be executed when an event happens. You can read more about this method on this MDN page. // For just one event listener, this can work: element.onclick = onClick; // If you want to execute multiple methods when an event happens, you'll need: element.addEventListener("click", doSomething); element.addEventListener("click", doSomethingElse); Other than your event handling, there's quite some other stuff you can improve. There's a lot of duplicate code and functions that sort of do the same things but have different names. 
But I guess that's a different question/topic.
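To make the consolidation described in the answer concrete, here is one possible rewrite of the image wiring. It is only a sketch, reusing the ids and file-name pattern from the question:

window.addEventListener("load", function () {
  var ids = ["Zero", "One", "Two", "Three", "Four", "Five"];

  ids.forEach(function (id) {
    var img = document.getElementById(id);
    img.addEventListener("click", function () {
      // Swap the blurred picture for the revealed one, e.g. Zeroblur.jpg -> Zero.jpg
      img.src = id + ".jpg";
    });
  });
});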
doc_1266
typedef struct { uint16_t x, y; } vector_t; I then create a structure according to above definition in my main like so vector_t vec = {5,10}; And then try to use it in the following function void initVector(vector_t *v) { (*v).x = 10; (*v).y = 20; } I input my function surrounded by to print statements like so. printf("%d %d\n innit \n",vec.x,vec.y); void initVector(&vec); printf("%d %d\n \n",vec.x,vec.y); However when I try to build the program I get the following error expected declaration specifiers or '...' before '&' token When the function is commented out, the print statements gives the vector, so I do not think that is the problem, but I cannot see why it should not work. Any help would be appreciated A: What you want, is to call the function initVector, not to declare it, so you have to replace void initVector(&vec); with just initVector(&vec); BTW, in the function initVector, you can write: void initVector(vector_t *v) { v->x = 10; v->y = 20; }
doc_1267
Whilst I do have Linux, I want to concentrate on programming for Windows currently. My teachers are asking me to try to port my WinForms web browser so that they can use it in Linux. Is this possible? I have been using Visual Studio. Thank you in advance for any replies
doc_1268
Object storage is the future of how data being stored. But how does a object store in a disk. Or it's just an idea, I can use a file storage along with a MySQL to store the metadata, and claim it is a object storage. Or if it is compatible with the AWS S3, it's an object storage system. I am very confused about this idea, or it's just another fancy word like ajax. A: This post sums it up fairly well - https://www.druva.com/blog/object-storage-versus-block-storage-understanding-technology-differences/ Typical differences you'll see between object storage and traditional file systems/block stores are - * *slower response from object stores due to network latency *object stores don't implement file hierarchies although S3, for example, mimics a file hierarchy in the AWS Console *object stores are typically eventually consistent where block stores provide strong/immediate consistency In the case of S3, each object/file is created with an object key. This allows you to mimic a file system hierarchy and is helpful for uses such as embedding metadata in the key name. This is also beneficial for use cases such as using S3 in place of HDFS with Hadoop based systems. Additionally, S3 allows you to tag objects with key value pairs similar to other AWS services. A: I completely understand your confusion. Imagine a bowl. Into the bowl, you put differently-sized balls that are uniquely labeled. The balls would fall into the bowl without any structure, and they are only differentiable by their unique labels. You can think of object storage systems like that. Each ball in the bowl represents one piece of data such as video, audio, text, email, etc. The size of the ball corresponds to the file size. Each object (i. e., file) is given a unique global 128-bit identifier in object storage, so the ball's label is its identifier. You can think of the bowl itself as the object storage provider. Nowadays, it is mainly AWS S3 buckets that use such organizational schema. I have also read through your comments. You suggest storing the file itself in file storage and storing the metadata in a database table. However, object storage systems came into existence solely to avoid such operations as storing data into tables. You see, to store anything in a table, you need a cleverly-designed database schema that makes data retrieval easier for future operations. All these extra details and requirements for storing data into tables are cost-heavy and entirely unnecessary for people who just want to store their files somewhere without thinking about structure.
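To make the flat-key-space and tagging points above concrete, here is a small sketch against S3 using boto3 (the bucket name, keys and tags are made up):

import boto3

s3 = boto3.client("s3")

# The key is just a string; the slashes only look like folders.
s3.put_object(
    Bucket="example-bucket",
    Key="reports/2021/01/summary.txt",
    Body=b"quarterly numbers",
    Tagging="department=finance&retention=7y",   # key/value tags, as mentioned above
)

# "Listing a folder" is really a prefix query over the flat key space.
resp = s3.list_objects_v2(Bucket="example-bucket", Prefix="reports/", Delimiter="/")
for entry in resp.get("CommonPrefixes", []):
    print(entry["Prefix"])                       # e.g. reports/2021/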
doc_1269
I'm trying to do a dynamic list, with database data. What I need is: * *To mount the 'divs' with the data from database *To become these 'divs' draggable *"To insert" some itens "inside" these 'divs' (from database to) *To become these itens draggable, inside the 'divs' and between other 'divs' To resume, is something like columns and cards in Trello. I'm using Dragula and, first, I saw the documentation here, and this and this examples. To solve the numbers 1 and 2, I'm trying this code. And, the situation now is: the 'drake' contain the array with lists name, but I can't 'transform' them in 'divs'. The result in the log is: "element: (3) [{…}, {…}, {…}] main.js:36 counter: 3 main.js:39 i: 0 main.js:41 listName: Lista 1 main.js:39 i: 1 main.js:41 listName: Lista 2 main.js:39 i: 2 main.js:41 listName: Lista 3 main.js:45 dragula: {containers: Array(3), start: ƒ, end: ƒ, cancel: ƒ, remove: ƒ, …} containers: (3) ["Lista 1", "Lista 2", "Lista 3"]" Can you help me ? Thank you a lot. PS1: the "draglist template" is only to show that Dragula is working. PS2: sorry for English mistakes. import { Template } from 'meteor/templating'; import { ReactiveVar } from 'meteor/reactive-var'; import dragula from 'dragula'; import '../node_modules/dragula/dist/dragula.css'; import './main.html'; Listas = new Mongo.Collection('listas'); Tarefas = new Mongo.Collection('tarefas'); console.log("Before enter drag"); Template.dragList.onRendered(function(){ console.log("entrou no onRendered"); dragula([document.querySelector('#left1'), document.querySelector('#right1')]); }); Template.lists.helpers({ 'list': function(){ return Listas.find({}); }, 'tasks': function(){ return Tarefas.find({}); }, 'mount': function(){ console.log("Inside mount"); var drake = dragula({}); var element = Listas.find({}).fetch(); var counter = Listas.find({}).count(); console.log("element: ", element); console.log("counter: ", counter); var i; for (i = 0; i < counter; i++) { console.log("i: ", i); var listName = element.[i].nome; console.log("listName: ", listName); drake.containers.push(listName); // dragula([document.getelemententById(listName)]); }; console.log("dragula: ", drake); }, }); <head> <title>Dragula test</title> <link rel="stylesheet" href="https://www.w3schools.com/w3css/4/w3.css"> </head> <body> <h1>Welcome to Meteor!</h1> <h1>Draglist</h1> {{> dragList}} <br> <h1>Lists</h1> {{> lists}} <br> </body> <template name="dragList"> <h5 class="card-title">Container 1</h5> <div id="left1"> <p class="card-text">This is a draggable p</p> <button class="btn btn-primary">First draggable button</button> <button class="btn btn-primary">Second draggable button</button> </div> <h5 class="card-title">Container 2</h5> <div id="right1"> <p class="card-text">This is another draggable p</p> <button class="btn btn-primary">Third draggable button</button> <button class="btn btn-primary">Fourth draggable button</button> </div> </template> <template name="lists"> <div id=container> {{mount}} </div> </template>
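One thing that stands out in the code above: drake.containers ends up holding the list names (strings), while dragula works with actual DOM elements (also, element.[i].nome looks like a transcription slip for element[i].nome). Below is a rough, untested sketch of the idea: render one div per list in the template and hand the rendered elements to dragula once they exist. The class name and the {{#each}} markup are illustrative.

// In the `lists` template, render one container div per list, e.g.
//   {{#each list}}<div class="drag-container" id="list-{{nome}}">{{nome}}</div>{{/each}}

Template.lists.onRendered(function () {
  var instance = this;
  instance.autorun(function () {
    if (Listas.find().count() === 0) return;               // nothing rendered yet
    Tracker.afterFlush(function () {
      var containers = instance.findAll('.drag-container'); // real DOM nodes
      dragula(containers);                                   // pass elements, not names
    });
  });
});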
doc_1270
Narrowing down this problem, showed me that even if I explicitly specify the Cookie scheme, it doesn't work: [Authorize(AuthenticationSchemes = CookieAuthenticationDefaults.AuthenticationScheme)] It only works if I don't specify the scheme at all: [Authorization] This is my Startup.cs: services.AddAuthentication() .AddCookie(CookieAuthenticationDefaults.AuthenticationScheme) .AddJwtBearer(cfg => { cfg.TokenValidationParameters = new TokenValidationParameters() { ValidIssuer = _configuration["Tokens:Issuer"], ValidAudience = _configuration["Tokens:Audience"], IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(_configuration["Tokens:Key"])) }; }); Anyone else facing this problem? Or does someone know why setting the Cookie scheme explicitly doesn't work?
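Here is a hedged sketch of a setup where the explicit scheme name lines up end to end. The key point is that the scheme named in [Authorize] must be the same scheme that actually issued the sign-in cookie; if ASP.NET Core Identity is also registered, it signs users in under its own scheme name, which could explain the behaviour, but that is a guess rather than a confirmed diagnosis.

services.AddAuthentication(options =>
    {
        options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    })
    .AddCookie(CookieAuthenticationDefaults.AuthenticationScheme,
               o => o.LoginPath = "/account/login")       // path is illustrative
    .AddJwtBearer(cfg => { /* token validation as in the question */ });

// Sign-in must use the same scheme name (principal here is a hypothetical ClaimsPrincipal):
// await HttpContext.SignInAsync(CookieAuthenticationDefaults.AuthenticationScheme, principal);

// Then the attribute should be honoured:
// [Authorize(AuthenticationSchemes = CookieAuthenticationDefaults.AuthenticationScheme)]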
doc_1271
What would a command containing two such opposite conditions (HAS and HAS NOT) look like? A: #!/bin/bash for i in `ls -d /folder1/string1* | grep -v 'string2$'` do ls -ld $i | grep '^-' > /dev/null # Test that we have a regular file and not a directory etc. if [ $? == 0 ]; then mv $i /folder2 fi done A: Try something like find /folder1 -mindepth 1 -maxdepth 1 -type f \ -name 'string1*' \! -name '*string2' -exec cp -iv {} /folder2 + Note: If you have an older version of find you can replace + with \; A: To me this is another case for (what I shall denote) the read while pattern. cd /folder1 ls string1* | grep -v 'string2$' | while read f; do mv $f /folder2; done The other answers are good alternatives, and in particular, find can do a lot. But I always get a headache using find, and never quite use it enough to do so without the manpage open. Also, starting with ls or a simple find to get a list of files, and then using any or all of sed, awk, grep or whatever you have to hand, to adjust/trim/extend this list, and then bunging it into a loop, is a crude(ish) but pretty powerful technique.
doc_1272
More description As gitolite provide authorization over repos, you can define some repos that all users can access. But about authentication and how this access can be achieved, I couldnt find any appropriate solution. gitolite: Gitolite does not do authentication. It only does authorisation. If we want to use 2 popular git protocol: SSH and HTTP for our works. I think it is not appropriate to provide public access over ssh for security reason(and not even sure how it can be done). So if we want to have public access over HTTP how can this be achieved? And it seems that apache had some changes to block unauthenticated access. gitolite: This does not seem to happen any more. Apache seems to insist on a userid, period public repos can define in gitolite conf: repo testing RW+ = @all public where 'public' is defined in .gitolite.rc and @all is for all users. HTTP_ANON_USER => 'public', So it seems answers from before 2019 will not work (I tried this work for excluding an uri from authentication but failed): https://stackoverflow.com/a/16759565/13663683 my config was: <Location /git> SetEnvIf Request_URI "^/git/public_repo/$" NOPASSWD=true AuthType Basic AuthName "Git Access" Require valid-user AuthUserFile /etc/apache2/git.passwd Order Deny,Allow Satisfy any Deny from all Require valid-user Allow from env=NOPASSWD </Location> And some works where inefficient to implement, because they have overhead: https://commentedcode.org/blog/2017/06/12/gitolite-public-http-access/ and some works with broken links: public repository with gitolite TY. Update I was able to do this work with some tweak around apache config https://stackoverflow.com/a/16759565/13663683 This will create public repo with no authentication, but still lack in someway. You need to absolutely define your public repo subtree and this lack dynamic locations. And more important with this approach you can not define a user to fill this repo, because no-auth route have higher priority.
doc_1273
A: The C++ language doesn't specify any such thing as a "stack". It is an implementation detail, and as such it doesn't make sense to deliberate about it unless we are discussing a particular implementation of C++. But yes, in a typical C++ implementation, automatic variables are stored on the execution stack. How do I make stack full? Step 1: Use a language implementation that has a limited stack size. This is quite common. Step 2: Create an automatic variable that exceeds the limit. Or nest too many non-tail-recursive function calls. If you're lucky, the program may crash. You wouldn't want the stack to be exhausted in production use. How big is stack Depends on the language implementation. It may even be configurable. The default is one to a few megabytes on common desktop/server systems. Less on embedded systems. and where is it located? Somewhere in memory the language implementation has chosen. The most important thing to take out of this is that the memory available for automatic variables is typically limited. As such: * *Don't use large automatic variables. *Don't use recursion when asymptotic growth of depth is linear or worse. *Don't let user input affect the amount or size of automatic variables or depth of recursion without constraint. A: Hello I heard that in c++ stack memory is being used for "normal" variables. Local (automatic) variables declared in a function or main are allocated memory mostly on the stack (or in registers) and are deallocated when execution of the scope ends. How do I make stack full? I tried to use ton of arrays but it didnt help. Using tons of arrays, many recursive calls, or passing large structs full of arrays as parameters are all ways to do it. Another way might be to reduce the stack size: -Wl,--stack,number (for gcc) How big is stack and where is it located? It depends on the platform, the operating system and so on. The standard does not determine any stack size. Its location is determined by the OS before the program starts; the OS allocates memory for the stack from virtual memory.
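For a concrete (and deliberately unsafe) illustration of the points above, both of the following will typically exhaust a default 1 to 8 MB stack on a common desktop implementation. Stack size and the behaviour on overflow are implementation-defined, so treat this strictly as a sketch.

#include <cstdio>

// 1) One oversized automatic variable: roughly 64 MB on the stack.
void big_local()
{
    char buffer[64 * 1024 * 1024];   // far beyond a typical default stack
    buffer[0] = 0;
    std::printf("%c\n", buffer[0]);  // keep the array from being optimised away
}

// 2) Unbounded non-tail recursion: every call adds another frame.
long deep(long n)
{
    char pad[1024];                  // make each frame noticeably large
    pad[0] = static_cast<char>(n);
    return pad[0] + deep(n + 1);     // never terminates; eventually overflows
}

int main()
{
    // Uncomment one of these to observe the crash (often reported as a segfault / stack overflow):
    // big_local();
    // deep(0);
    return 0;
}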
doc_1274
These are the tables and the query... CREATE TABLE #Day (id int, EID int, PID varchar(10), [Day] int, Shift varchar(10)) CREATE TABLE #Night (id int, EID int, PID varchar(10), [Day] int, Shift varchar(10)) INSERT INTO #Day SELECT Atten_ID, EID, PID, DATEPART(DD,in_time) AS [Day], shift FROM Attendance WHERE (shift = 'D') INSERT INTO #Night SELECT Atten_ID, EID, PID, DATEPART(DD,in_time) AS [Day], shift FROM Attendance WHERE (shift = 'N') SELECT #Day.EID, #Day.PID, #Day.Day, #Day.Shift AS DShift, #Night.Shift AS NShift FROM #Day JOIN #Night ON #Day.EID = #Night.EID AND #Day.PID = #Night.PID AND #Day.Day = #Night.Day Result should be like this... EID | PID | Day | DShift | NShift ______________________________________ 100 | S001 | 01 | D | N 100 | S001 | 02 | D | - 100 | S001 | 03 | - | N A: Maybe use FULL OUTER JOIN SELECT COALESCE(d.EID,n.EID), COALESCE(d.PID,n.PID), COALESCE(d.Day,n.Day), d.Shift AS DShift, n.Shift AS NShift FROM #Day d FULL JOIN #Night n ON d.EID = n.EID AND d.PID = n.PID AND d.Day = n.Day A: Just a few notes in addition to Sheen's answer: A left join would show all rows from the left table, and matching rows from the right table. A full join would show all rows from both tables. See A Visual Explanation of SQL Joins In reply to your comment, you could use a case. That allows you to return different values under different conditions: select ... , case when d.Shift is not null and n.Shift is not null then 'D/N' when d.Shift is not null then 'D' when n.Shift is not null then 'N' else '-' end as NorD A: Actually, you don't need temporary tables at all, just a CTE. You can also use CASE to achieve the D/N result: WITH vDAY as ( SELECT Atten_ID, EID, PID, DATEPART(DD,in_time) AS [Day], shift FROM Attendance WHERE (shift = 'D') ), vNIGHT as ( SELECT Atten_ID, EID, PID, DATEPART(DD,in_time) AS [Day], shift FROM Attendance WHERE (shift = 'N') ) SELECT COALESCE(d.EID,n.EID), COALESCE(d.PID,n.PID), COALESCE(d.Day,n.Day), CASE WHEN d.Shift='D' and n.Shift='N' then 'D/N' WHEN d.Shift IS NOT NULL then d.Shift ELSE COALESCE( d.Shift, '-' ) END AS DShift, CASE WHEN d.Shift='D' and n.Shift='N' then 'D/N' else COALESCE( n.Shift, '-' ) end AS NShift FROM vDay d FULL JOIN vNight n ON d.EID = n.EID AND d.PID = n.PID AND d.Day = n.Day
doc_1275
My grunt file has the following: server: { src: 'server', dist: 'dist/server', views: 'server/views', protocol: 'https', ip: '127.0.0.1', port: 3000, }, When I run grunt it will open up at 127.0.0.1:3000 on that computer, but hitting :3000 from a mobile device doesn't work. I have made sure port 3000 is unblocked in the firewall, and the mobile device can access other servers running on other ports of the same computer. Is there something else I need to setup in grunt?
doc_1276
It should sort itself out alphabetically and the date in years. Any pointers would be appreciated! Following is my code var table = $(".main").append("<table></table>"); var thead = '<thead><tr></tr></thead>'; table.append(thead); var header = [{ title: 'Name', sortBy: 'name' }, { title: 'Last Name', sortBy: 'lastName' }, { title: 'Date of birth', sortBy: 'dob' }].map( function(header) { var sortButton = '<button id="' + header.sortBy + '" onclick=sortRows("' + header.sortBy + '")>/\\</button>'; $('thead').append('<th>' + header.title + ' ' + sortButton + '</th>'); } ) var tbody = "<tbody></tbody>"; var data = [{ name: 'Peter', lastName: 'Petterson', dob: '13/12/1988' }, { name: 'Anna', lastName: 'Jones', dob: '06/02/1968' }, { name: 'John', lastName: 'Milton', dob: '01/06/2000' }, { name: 'James', lastName: 'White', dob: '30/11/1970' }, { name: 'Luke', lastName: 'Brown', dob: '15/08/1999' }]; $('.search').change(function(event) { searchedName = event.target.value; }) table.append(tbody); draw(); function draw() { $('tbody').html(''); data.map( function(row, i) { $('tbody').append( '<tr><td>' + row.name + '</td><td>' + row.lastName + '</td><td>' + row.dob + '</td><td><button onclick=editRow(' + i + ')>edit</button><button onclick=delRow(' + i + ') >delete</button></td></tr>' ) } ) } var editRow = function(rowNumber) { var editableRow = "<td><input id='name'/></td><td><input id='lastName'/></td><td><input id='dob' type='date'/></td><td><button onclick=saveRow(" + rowNumber + ")>save</button></td>"; var name = $('tbody > tr:nth-child(' + (rowNumber + 1) + ') > td:first-child').text(); var lastName = $('tbody > tr:nth-child(' + (rowNumber + 1) + ') > td:nth-child(2)').text(); var dob = $('tbody > tr:nth-child(' + (rowNumber + 1) + ') > td:nth-child(3)').text(); $('tbody > tr:nth-child(' + (rowNumber + 1) + ')').html(editableRow); $('tbody > tr:nth-child(' + (rowNumber + 1) + ') > td:first-child > input').val(name); $('tbody > tr:nth-child(' + (rowNumber + 1) + ') > td:nth-child(2) > input').val(lastName); $('tbody > tr:nth-child(' + (rowNumber + 1) + ') > td:nth-child(3) > input').val(dob); } var sortRows = function(sortBy) { isAscending[sortBy] = !isAscending[sortBy]; $('#' + sortBy).text(isAscending[sortBy] ? '\\/' : '/\\'); }; var delRow = function(num) { data.splice(num, 1); draw(); } var addPerson = function() { isNewLineToggled = !isNewLineToggled; if (isNewLineToggled) { $('tbody').prepend('<tr>' + editableRow + '</tr>') } else { $('tbody > tr:first-child').remove(); } } var saveRow = function(num) { data[num].name = $('#name').val(); data[num].lastName = $('#lastName').val() data[num].dob = $('#dob').val(); draw(); } <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min.js"></script> <div class="main"></div>
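As posted, sortRows only flips the arrow; it never reorders data or calls draw(). Below is one possible way to complete it, kept close to the existing structure. It is a sketch only: it assumes isAscending starts out as an empty object and that dob stays in dd/mm/yyyy form.

var isAscending = {};

function sortRows(sortBy) {
  isAscending[sortBy] = !isAscending[sortBy];
  $('#' + sortBy).text(isAscending[sortBy] ? '\\/' : '/\\');

  data.sort(function (a, b) {
    var left = a[sortBy], right = b[sortBy];
    if (sortBy === 'dob') {                       // "dd/mm/yyyy" -> comparable "yyyymmdd"
      left  = left.split('/').reverse().join('');
      right = right.split('/').reverse().join('');
    }
    var order = left < right ? -1 : left > right ? 1 : 0;
    return isAscending[sortBy] ? order : -order;
  });

  draw();                                         // re-render tbody in the new order
}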
doc_1277
.. vim: set ft=help norl ts=8 tw=78 et : And appears at the bottom of some text files, such as vim documentation. I just want to know where I can look this topic up in the vim help to read about it. A: It’s called a modeline. Try: :help modeline A: That is the modeline. :help modeline A: It is called a modeline. See :help modeline for more information. Basically, it is a way to tell Vim how to render a text file.
doc_1278
I don't know how can I measure top and left css attributers. Here is my JS which is appending the div editor. $(document).on('dblclick', '.slide', function() { $(this).find(".step-wrapper").prepend('<div class="editor" contenteditable="true"> <h2 class="text2">Title</h2></div>'); }); the html structure: <dic class="step"> <div class="step-wrapper"> </div> </div> A: like this ? http://jsfiddle.net/pWqEQ/ .css({ 'left' : e.pageX, 'top' : e.pageY }
doc_1279
Application is working fine but sometimes we receive-- Attempted to read and write protected memory . This is often an indication that other memory is corrupt error. When I checked Event Viewer for the error, below is the exception: Exception information: Exception type: HibernateException Exception message: Creating a proxy instance failed When we get this error, we usually recycle app pool and issue gets resolve. Need your inputs for long term fix.
doc_1280
var myNumber = undefined; function addOne(callback) { fs.readFile('./User2.txt', 'utf8', function doneReading(err, fileContents) { myNumber = fileContents.toString(); callback(); }); } function logMyNumber() { console.log(myNumber); } addOne(logMyNumber); User2.txt only contains one single character, "1". So when I run it, the output is: "??1". Why does these question marks appear? I originally wanted a number but I just got the message, NaN(not a number, I guess). So I convert the buffer to a string instead, and got this. Any help? A: It seems you are not the first one with this issue. Basically you just need to do something like the following: fs.readFile(filePath, 'utf8', function (err, fileContents) { // Remove BOM character if there is one at the start of the file. if(fileContents.charCodeAt(0) == 65279) fileContents = fileContents.substr(1); } Here you have many other workarounds taken from that discussion: * *Replace: fileContents = fileContents.replace(/^\uFEFF/, ''); *Use fs.readFileSync instead of fs.readFile *Use the bomstrip package. A: I've fixed it by saving the textfile in wordpad instead of notepad. Saving it as text document - ms-dos format. A: Most of IDE support RegExp search, so it's pretty easy to search for that buggy char in the code base: \uFEFF Just replace it with empty string
doc_1281
"foo = { :foo => 'bar', :baz => \"{'foo' : 'bar', 'bar' : 'biff' }\" :bar => 'baz' }, bar, baz = \"('foo,bar,baz')\", &block" and returns an array like this: ["foo = { :foo => 'bar', :baz => \"{'foo' : 'bar', 'bar' : 'biff' }\" :bar => 'baz' }", "bar", "baz = \"('foo,bar,baz')\"", "&block"] However, so far I am unable to split the string correctly, my best effort still breaks the string on internal hashes e.g. "foo = { :foo => 'bar', :baz => \"{'foo' : 'bar', 'bar' : 'biff' }\" :bar => 'baz' }, bar, "baz = \"('foo,bar,baz')\", &block".scan(/(?:[^,(]|\([^)]*\))+/) Which produces: ["foo = { :foo => 'bar'", " :baz => \"{'foo' : 'bar'", " 'bar' : 'biff' }\" :bar => 'baz' }", " bar", " baz = \"('foo,bar,baz')\"", " &block"] I think the regex i am using is close but i am not sure how to check for both parenthesis and curly brackets. Presently, the regex only searches for parentheses. This is my current regex: /(?:[^,(]|\([^)]*\))+/ Any help is greatly appreciated. A: this does not suit you? string.split(',')
doc_1282
These options are their own variables in-code, and I was wondering if there was a way to get the variables and values of these options dynamically in code. In my case, I have these options in a "Settings" class, and I access them from my main form class using Settings.varSetting. I get and set these variables in multiple places in code; is it possible to consolidate the list of variables so that I can access and set them (for example, creating a Settings form which pulls the available options and their values and draws the form dynamically) more easily/consistently? Here are the current variables I have in the Settings class: public static Uri uriHomePage = new Uri("http://www.google.com"); public static int intInitOpacity = 100; public static string strWindowTitle = "OpaciBrowser"; public static bool boolSaveHistory = false; public static bool boolAutoRemoveTask = true; //Automatically remove window from task bar if under: public static int intRemoveTaskLevel = 50; //percent public static bool boolHideOnMinimized = true; Thanks for any help, Karl Tatom ( TheMusiKid ) A: You might want to consider using the Application Settings features built into the framework for loading and storing application settings. A: var dict = typeof(Settings) .GetFields(BindingFlags.Static | BindingFlags.Public) .ToDictionary(f=>f.Name, f=>f.GetValue(null)); A: read about reflections: http://msdn.microsoft.com/en-us/library/ms173183%28v=vs.100%29.aspx
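Going the other way, that is, writing edited values back from such a settings form, here is a sketch built on the same reflection idea. The helper name is made up, and the TypeDescriptor-based conversion is simplistic, though it does cover the string, int, bool, decimal and Uri fields shown above.

using System;
using System.ComponentModel;
using System.Reflection;

static class SettingsReflection
{
    // Sets a public static field on Settings from a string typed into a form.
    public static void SetSetting(string fieldName, string rawValue)
    {
        FieldInfo field = typeof(Settings).GetField(
            fieldName, BindingFlags.Public | BindingFlags.Static);
        if (field == null)
            throw new ArgumentException("Unknown setting: " + fieldName);

        // TypeDescriptor picks the right converter for int, bool, decimal, Uri, string...
        object converted = TypeDescriptor.GetConverter(field.FieldType)
                                         .ConvertFromInvariantString(rawValue);
        field.SetValue(null, converted);
    }
}

// e.g. SettingsReflection.SetSetting("intInitOpacity", "80");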
doc_1283
select date_created from smc_log_messages where rownum =1 order by date_created desc and it returns a date like 15-SEP-16 10.15.49.099000000 PM However, when I run select date_created from smc_log_messages order by date_created desc I see data like 30-SEP-16 12.39.00.006000000 AM 30-SEP-16 12.38.59.997000000 AM So, basically adding the rownum is affecting the results. Am I doing something wrong? A: ROWNUM is assigned before the ORDER BY is applied, so WHERE rownum = 1 picks an arbitrary row first and only then sorts that single row. If you want the most recent date, then use: select max(date_created) from smc_log_messages ; If you want the most recent row in Oracle 12C+: select lm.* from smc_log_messages lm order by lm.date_created desc fetch first 1 row only; In earlier versions: select lm.* from (select lm.* from smc_log_messages lm order by lm.date_created desc ) lm where rownum = 1; A: Use the MAX function to get the latest (top) record: SELECT MAX(date_created) FROM smc_log_messages This will work.
doc_1284
<input class="form-control input-lg" type="text" placeholder="Name" name="name"> Since my form has no labels and I depend on placeholder, is it possible to put an element like <sup>*</sup> so that I can show to the users that this field required. I tried but it didn't work. Is there a way to do that? my form http://www.itbotics.com/contact.php A: Use a label: <input class="form-control input-lg" type="text" placeholder="Name" name="name"> <label for="Name">*</label> or you could add * after the placeholder text. any reason for not using labels? they're good practice A: if you dont want to use label: css code: .required::before { content: "*"; color:red; } html code: <span class="required"></span> <input class="form-control input-lg" type="text" placeholder="Name" name="name">
doc_1285
- (void)applicationDidBecomeActive:(UIApplication *)application { //make sure that the user credentials are still ok if (userLeftApplication){ BaseViewController * baseViewController = [[BaseViewController alloc]init]; BOOL detailsAreOK = [baseViewController credentialsValidated]; if (!detailsAreOK){ [self.window.rootViewController performSegueWithIdentifier: @"fromSplashToLogin" sender: self.window.rootViewController]; } userLeftApplication = FALSE; } } However, I get the following exception when trying to perform the segue: Attempt to present <LoginViewController: 0x2012e180> on <FirstViewController: 0x1f59cef0> whose view is not in the window hierarchy! and the user is not being directed there. What is wrong? A: rootViewController isn't currently defined. You can't 'perform a segue from the App Delegate', segues are transitions between view controllers. You need to launch the view controller rather than perform a segue. self.window.rootViewController = baseViewController;
doc_1286
BaseCallback.kt abstract class BaseCallback<T> constructor(private val listener: MutableLiveData<*>) : Callback<T> { override fun onResponse(call: Call<T>, response: Response<T>) { when { response.code() == 401 -> { } response.isSuccessful -> { onSuccess(response.body()) } else -> { val apiError = ErrorUtils.parseError(response) listener.value = ApiResult.Error(exception = apiError!!) } } } override fun onFailure(call: Call<T>, t: Throwable) { listener.value = ApiResult.Error( ApiError(message = t.message ?: "Oops Something Went Wrong!") ) } abstract fun onSuccess(response: T?) } Viewmodel private val _deviceListResponse = MutableLiveData<ApiResult<List<Device>>>() val deviceListResponse: LiveData<ApiResult<List<Device>>> = _deviceListResponse deviceRepo.getDevices().enqueue(object : BaseCallback<List<Device>>(_deviceListResponse) { override fun onSuccess(response: List<Device>?) { val devices = response ?: ArrayList() _deviceListResponse.value = ApiResult.Success(devices) } }) A: No, you shouldn't pass LiveData into the callback; BaseCallback should stay independent. Here is a simple sample: fun <T> baseCallBack(onSuccess: (response: T?) -> Unit, onFailure: (apiResult: ApiResult) -> Unit): Callback<T> { return object : Callback<T> { override fun onFailure(call: Call<T>, t: Throwable) { onFailure.invoke(ApiResult.Error(ApiError(message = t.message ?: "Oops Something Went Wrong!"))) } override fun onResponse(call: Call<T>, response: Response<T>) { if (response.isSuccessful) { onSuccess(response.body()) } else { onFailure.invoke(ApiResult.Error(ApiError(message = response.message()))) } } } }
doc_1287
When a user try to access the path http://localhost:8080/MyContext/login user will be redirected to a login page, where he can enter his credentials and login. Once used logged in successfully, a scope variable($scope.user) is set and the application redirects to welcome.html $scope.user=user; before that I am initializing the same scope variable as null at the beginning of the controller. There is a navigation bar at the top of Main.html. I am hiding it if the 'user' is not available using 'ng-show'. This is because I want to show this nav bar only to logged in user. So I am expecting it to be shown for welcome.html. That is not happening. I believe the ng-view is not reloading the entire html(Main.html) page. How can i solve this? Main.html <!DOCTYPE html> <html ng-app="MainModule"> <head> <base href="/Utility/"> <title>UTILITY</title> <link href="//maxcdn.bootstrapcdn.com/bootstrap/3.3.1/css/bootstrap.min.css" rel="stylesheet"> <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script> <script src="//maxcdn.bootstrapcdn.com/bootstrap/3.3.1/js/bootstrap.min.js"></script> <script data-require="[email protected]" data-semver="1.6.5" src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.6.5/angular.min.js"></script> <script data-require="angular-route@*" data-semver="1.6.2" src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.2/angular-route.js"></script> <script data-require="ngstorage@*" data-semver="0.3.6" src="https://cdnjs.cloudflare.com/ajax/libs/ngStorage/0.3.6/ngStorage.min.js"></script> <script src="app.js"></script> <script src="login/login.controller.js"></script> <script src="login/login.service.js"></script> <script src="welcome/welcome.controller.js"></script> <link href="login/signin.css" rel="stylesheet" type="text/css"> </head> <body> <div ng-show="user"> <!-- nav bar container --> <nav class="navbar navbar-default"> <div class="container-fluid"> <div class="navbar-header"> <a class="navbar-brand" href="#">UTILITY</a> </div> <div id="navbar" class="navbar-collapse collapse"> <ul class="nav navbar-nav"> <li class="dropdown"><a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="false">Drop Down 1<span class="caret"></span></a> <ul class="dropdown-menu"> <li><a href="#">Option 1</a></li> <li><a href="#">Option 2</a></li> </ul></li> <li class="dropdown"><a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="false">Drop Down 2<span class="caret"></span></a> <ul class="dropdown-menu"> <li><a href="#">Option 1</a></li> <li><a href="#">Option 2</a></li> </ul></li> </ul> <ul class="nav navbar-nav navbar-right"> <li><a href="#">Logout</a></li> </ul> </div> <!--/.nav-collapse --> </div> <!--/.container-fluid --> </nav> </div> <div ng-view></div> </body> </html> app.js (function() { var mainModule = angular.module("MainModule", [ "ngRoute","ngStorage" ]); mainModule.config(function($routeProvider,$locationProvider) { $routeProvider.when("/login", { templateUrl : "login/Login.html", controller : "LoginController" }).when("/user/:username", { templateUrl : "welcome/welcome.html", controller: "WelcomeController" }).otherwise({ redirectTo : "/login" }); //$locationProvider.html5Mode(true); }); }()); login.controller.js (function() { var loginModule = angular.module("MainModule"); var LoginController = function($log, $scope, $sessionStorage, UserService, $location) { $scope.user = null; var userFound = function(userData) { $scope.user = userData; 
$scope.message = "Successful Login"; //store the user information here for session management $sessionStorage.user = $scope.user; //window.location.replace("/user/"+$scope.user.userName); $location.path("/user/"+$scope.user.userName); }; var onError = function(errorResponse) { $scope.message = "Invalid User"; }; $scope.submitLogin = function(username, password) { UserService.getUser(username, btoa(password)).then(userFound, onError); }; }; loginModule.controller("LoginController", LoginController); }()); welcome.controller.js (function(){ var loginModule = angular.module("MainModule"); var WelcomeController = function($window, $scope){ $scope.initWelcome = function(){ $window.location.reload(); } }; loginModule.controller("WelcomeController", WelcomeController); }());
doc_1288
Add_Extreme_Variable <- function(dataframe, variable, variable_name){ dataframe %>% group_by(cod_station, year_station) %>% mutate(variable_name= ifelse(variable > quantile(variable, 0.95, na.rm=TRUE),1,0)) %>% ungroup() %>% return() } df <- Add_Extreme_Variable (df, rain, extreme_rain) df is the dataframe I'm working with, rain is a numeric variable in df, and extreme_rain is the name of the variable I want to create. If I use mutate_() everything works well, but the problem it's deprecated. However, the solutions I have found in stackoverflow (1, 2, 3) and the vignette doesn't seem to fit my problem or it seems far more complicated than I need it to be, as I cannot find good examples about how to work with quo(), !! without space, !! with space, how to replace = for :=, and I don't know if working with them at all will solve the problem I have or it's even necessary as the ultimate goal doing this function is to make the code cleaner. Any suggestions? A: You can use {{ }} (curly curly). See Tidy evaluation section in Hadley Wickham's Advanced R book. Below is an example using the gapminder dataset. library(gapminder) library(rlang) library(tidyverse) Add_Extreme_Variable2 <- function(dataframe, group_by_var1, group_by_var2, variable, variable_name) { res <- dataframe %>% group_by({{group_by_var1}}, {{group_by_var2}}) %>% mutate({{variable_name}} := ifelse({{variable}} > quantile({{variable}}, 0.95, na.rm = TRUE), 1, 0)) %>% ungroup() return(res) } df <- Add_Extreme_Variable2(gapminder, continent, year, pop, pop_extreme) %>% arrange(desc(pop_extreme)) df #> # A tibble: 1,704 x 7 #> country continent year lifeExp pop gdpPercap pop_extreme #> <fct> <fct> <int> <dbl> <int> <dbl> <dbl> #> 1 Australia Oceania 1952 69.1 8691212 10040. 1 #> 2 Australia Oceania 1957 70.3 9712569 10950. 1 #> 3 Australia Oceania 1962 70.9 10794968 12217. 1 #> 4 Australia Oceania 1967 71.1 11872264 14526. 1 #> 5 Australia Oceania 1972 71.9 13177000 16789. 1 #> 6 Australia Oceania 1977 73.5 14074100 18334. 1 #> 7 Australia Oceania 1982 74.7 15184200 19477. 1 #> 8 Australia Oceania 1987 76.3 16257249 21889. 1 #> 9 Australia Oceania 1992 77.6 17481977 23425. 1 #> 10 Australia Oceania 1997 78.8 18565243 26998. 1 #> # ... with 1,694 more rows summary(df) #> country continent year lifeExp #> Afghanistan: 12 Africa :624 Min. :1952 Min. :23.60 #> Albania : 12 Americas:300 1st Qu.:1966 1st Qu.:48.20 #> Algeria : 12 Asia :396 Median :1980 Median :60.71 #> Angola : 12 Europe :360 Mean :1980 Mean :59.47 #> Argentina : 12 Oceania : 24 3rd Qu.:1993 3rd Qu.:70.85 #> Australia : 12 Max. :2007 Max. :82.60 #> (Other) :1632 #> pop gdpPercap pop_extreme #> Min. :6.001e+04 Min. : 241.2 Min. :0.00000 #> 1st Qu.:2.794e+06 1st Qu.: 1202.1 1st Qu.:0.00000 #> Median :7.024e+06 Median : 3531.8 Median :0.00000 #> Mean :2.960e+07 Mean : 7215.3 Mean :0.07042 #> 3rd Qu.:1.959e+07 3rd Qu.: 9325.5 3rd Qu.:0.00000 #> Max. :1.319e+09 Max. :113523.1 Max. :1.00000 #> Created on 2019-11-10 by the reprex package (v0.3.0) A: We can use rlangs curly curly ({{}}) operator along with enquo to add new columns with unquoted inputs passed. 
library(dplyr) library(rlang) Add_Extreme_Variable <- function(dataframe, variable, variable_name){ col_name <- enquo(variable_name) dataframe %>% group_by(cyl, am) %>% mutate(!!col_name := as.integer({{variable}} > quantile({{variable}}, 0.95, na.rm=TRUE))) %>% ungroup() } Add_Extreme_Variable(mtcars, mpg, new) # A tibble: 32 x 12 # mpg cyl disp hp drat wt qsec vs am gear carb new # <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <int> # 1 21 6 160 110 3.9 2.62 16.5 0 1 4 4 0 # 2 21 6 160 110 3.9 2.88 17.0 0 1 4 4 0 # 3 22.8 4 108 93 3.85 2.32 18.6 1 1 4 1 0 # 4 21.4 6 258 110 3.08 3.22 19.4 1 0 3 1 1 # 5 18.7 8 360 175 3.15 3.44 17.0 0 0 3 2 0 # 6 18.1 6 225 105 2.76 3.46 20.2 1 0 3 1 0 # 7 14.3 8 360 245 3.21 3.57 15.8 0 0 3 4 0 # 8 24.4 4 147. 62 3.69 3.19 20 1 0 4 2 1 # 9 22.8 4 141. 95 3.92 3.15 22.9 1 0 4 2 0 #10 19.2 6 168. 123 3.92 3.44 18.3 1 0 4 4 0 # … with 22 more rows
doc_1289
<input type="text" name="data[title]" /> Is it possible to match that input element based on it's name? This does not work: input[name=data[title]] {} I'm using the latest release of Chrome. A: You need to use quotes: input[name="data[title]"] {}
doc_1290
This is the code in my solidity file and I want to get the same value in the JavaScript file using ethers or web3. bytes32 node = keccak256(abi.encodePacked(nodeString)); I got the same value as abi.encodePacked(nodeString) by using ethers.utils.solidityPack. const abiEncodedPackedString = ethers.utils.solidityPack(['string'], [nodeString]); But when I tried ethers.utils.solidityKeccak256, the result wasn't the same as node in solidity. const nodeInJavascript = ethers.utils.solidityKeccak256(['string'], [abiEncodedPackedString]); I have also tried ethers.utils.keccak256(abiEncodedPackedString) but I couldn't get the result either. A: In web3 I used this and it worked: export const createCourseHash = (courseId, account) => { const hexCourseId = web3.utils.utf8ToHex(courseId) const courseHash = web3.utils.soliditySha3( { type: "bytes16", value: hexCourseId }, { type: "address", value: account } ) return courseHash } In ethers, you are passing a plain string. Try to convert it to a hex value. In fact, if you check the documentation, they are not passing plain strings: The value MUST be data, such as: * *an Array of numbers *a data hex string (e.g. "0x1234") *a Uint8Array
doc_1291
I can capture a hover event in jQuery, but it only gives me the coordinates of the cell I entered on, and the cell I left. Is is possible via jQuery or a plugin to detect when the mouse pauses over an area to fire an event? I tried hoverIntent, but that just delays the event but doesn't allow me to fire off events on the same element without exiting and re-entering the element. A: If I was doing this I'd try something like jquery mousemove event for coordinates, and use a defferred object with a timeout for your specified time to define a pause. I'm no jquery genius it just seems like an asynchronous event you want and thats what google came up with for me. The best examples I've found are on the jquery docs @ http://api.jquery.com/deferred.promise/.
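To make the mousemove-plus-timer idea from the answer concrete, here is a small sketch. The 400 ms delay, the plugin name and the data key are arbitrary choices.

$.fn.onMousePause = function (delay, handler) {
  return this.on('mousemove', function (e) {
    var el = this;
    clearTimeout($.data(el, 'pauseTimer'));
    $.data(el, 'pauseTimer', setTimeout(function () {
      handler.call(el, e);               // the mouse has not moved for `delay` ms
    }, delay));
  });
};

// Usage: fires each time the pointer rests for 400 ms over a cell.
$('#myTable td').onMousePause(400, function (e) {
  console.log('paused at', e.pageX, e.pageY, 'over', this);
});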
doc_1292
TABLE STRUCTURE here is my input, the value aircraft_id is the first one to update and the other one is the aircraft_refistration_number <select name="aircraft_id" class="form-control" id=""> <option value="0" disabled="true" selected="true"> Select </option> @foreach ($aircrafts as $aircraft) <option value="{{ $aircraft->aircraft_id }}">{{ $aircraft->aircraft_registration_number }}</option> @endforeach </select> here is my table here is where i update the registered_company_name which has to be string but then the output is the aircraft_id $txtDescript = $request->input('aircraft_id'); $aircraft = DB::table('settings') ->where('id', 4) ->update(['description' => $txtDescript]); here is the aircraft_id which should be int or id's $airid = $request->input('aircraft_id'); $aircraft = DB::table('series') ->update(['aircraft_id' => $airid]); this one perfectly works here is my output of the problem WHEREAS it should be like THIS output which should be correct A: you can take this longer form, <select name="aircraft_id" class="form-control" id=""> <option value="0" disabled="true" selected="true" id="airselect"> Select </option> @foreach ($aircrafts as $aircraft) <option value="{{ $aircraft->aircraft_id }}">{{ $aircraft->aircraft_registration_number }}</option> @endforeach </select> <input type="hidden" name="aircraft_name" id="aircraft_name" value=""> and set the value of the hidden field using jquery like so $(document).ready(function(){ $(document).on('change','.airselect',function(){ $selected=$( "#airselect option:selected" ).text(); $('#aircraft_name').val($selected); }); }); and then adjust your controller like so $txtDescript = $request->input('aircraft_name'); $aircraft = DB::table('settings') ->where('id', 4) ->update([' description' => $txtDescript]); and $airid = $request->input('aircraft_id'); $aircraft = DB::table('series') ->update(['aircraft_id' => $airid]); try it and let me here from you, i did not test the code
doc_1293
Orders ====== id total_price created_on 1 100 2021-01-22 2 200 2021-01-23 Items ===== id order_id 11 1 12 1 13 2 I want to create a query to get revenue by date. For this i'm going to sum up total price in order and grouping it up by date. Along with revenue, I also want to get total numbers of orders and items for that date. Here's a query that I wrote: SELECT count(orders.id) as orders, sum(orders.total_price) as billing, DATE(CREATED_ON) as created_on FROM orders WHERE orders.deleted_on IS NULL group by Date(orders.created_on); Now I found 2 problems: * *The count of orders is coming incorrect. Not sure what. i'm doing wrong here. *How can I calculate the count of items also in same query ? I'm learning sql and this seems a big difficult to get my head around. Thanks for your help. A: As Items.order_id is foreign key to Order.id as a result we need to join both tables first. SELECT count(order_id) AS orders,sum(total_price) AS billing,Orders.created_on as created_on FROM Orders,(select order_id from Items) as new WHERE Orders.id=new.order_id GROUP BY created_on; A: This is tricky, because when you combine the items you might multiple the revenue. One method is to aggregate the items before joining to orders: SELECT DATE(o.Created_On) as created_on_date, COUNT(*) as num_orders, SUM(i.num_items) as num_items, SUM(o.total_price) as billing FROM orders o LEFT JOIN (SELECT i.order_id, COUNT(*) as num_items FROM items i GROUP BY i.order_id ) i ON i.order_id = o.id WHERE o.deleted_on IS NULL GROUP BY DATE(o.created_on); Note: This uses a LEFT JOIN because you have not specified that all orders have items. If all do then an INNER JOIN would suffice.
doc_1294
https://www.rgagnon.com/javadetails/java-0542.html https://www.youtube.com/watch?v=7QNJvxXCYOY How do I use the Simple HTTP client in Android? These all helped me learn how to send a file through socket, but I am not sure which IP address to use. I set up ServerSocket and Socket, but the code won't proceed at socket = ServerSocket.accept() I wonder if this is because I am not using the correct IP address. I appreciate any help. Thanks in advance. A: I am trying to send a file from Android emulator on Windows to ubuntu. So you're running an emulator on Windows and want to send a file from the emulator to another machine running Ubuntu? Then there is no way for us to answer this question exactly, as IP addresses depend on your personal network setup. I would suggest first moving the file form emulator -> Windows, then you can send it like any other file between your machines (you can use scp from Ubuntu, or a service like DropBox, etc.) You can also probably run ifconfig on your Ubuntu side to obtain your destination IP.
doc_1295
interface Item{ int data=0; String text=""; } public class Problem2{ public static void main(String[] args){ Item item=new Item(){ public int data=2; public String text="an item"; public boolean equals(Object object){ if(object instanceof Item){ Item test=(Item)object; //tests on next line System.out.println(test); System.out.println(String.format("data: %d; text: \"%s\"", test.data, test.text)); //returns data and text fields of interface, after returning fields defined in anonymous class on toString call System.out.println(test); //toString returns same return data==test.data && text.equals(test.text); } return false;} public String toString(){return String.format("{data: %d; text: \"%s\"}", data, text);} }; System.out.println(((Object)item).equals(item)); //returns false } } Please explain how it returns the field of the interface while the fields of the anonymous inner class does not change. Output: {data: 2; text: "an item"} data: 0; text: "" {data: 2; text: "an item"} false A: in java you cannot override variables you only override methods and here while creating the anon class when you type test.data it will refer to the interface data not the local variable (local variable accessible using this.data or data directly) since you issued the .data on an object of the interface type. to resolve you issue just use getter methods here is full example interface Item{ int data=0; String text=""; public int getData(); public String getText(); } public class Test12{ public static void main(String[] args){ Item item=new Item(){ public int data=2; public String text="an item"; public int getData(){ return data; } public String getText(){ return text; } public boolean equals(Object object){ if(object instanceof Item){ Item test=(Item)object; //tests on next line System.out.println(test); System.out.println(String.format("data: %d; text: \"%s\"", test.getData(), test.getData())); //returns data and text fields of interface, after returning fields defined in anonymous class on toString call System.out.println(test); //toString returns same return data==test.getData() && text.equals(test.getText()); } return false;} public String toString(){return String.format("{data: %d; text: \"%s\"}", data, text);} }; System.out.println(((Object)item).equals(item)); //returns false } } A: Your variables aren't getting overriden, you are referring 2 diff set of variables, , the ones in the interface are always public static final, which means basically constants, the the anonymous definition is not. Based on how you refer to them, the output changes.
doc_1296
I am currently examining each Parallel.For's return value ParallelLoopResult and sleeping for 20 milliseconds until the IsCompleted member is set to true. Dim plr as ParallelLoopResult plr = Parallel.For(...) while not plr.IsCompleted Thread.Sleep(20) end while plr = Parallel.For(...) while not plr.IsCompleted Thread.Sleep(20) end while . . . How can I add a kernel level block (i.e. WaitHandle) in place of the loop and Thread.Sleep? Is there a completion event that Parallel.For triggers? Does Parallel.For provide for such a mechanism? A: The Parallel.For will complete all code that it was called for. The IsCompleted only returns false then the loop was interrupted. From http://msdn.microsoft.com/en-us/library/system.threading.tasks.parallelloopresult.aspx: If IsCompleted returns true, then the loop ran to completion, such that all iterations of the loop were executed. If IsCompleted returns false and LowestBreakIteration returns null, a call to Stop was used to end the loop prematurely. If IsCompleted returns false and LowestBreakIteration returns a non-null integral value, Break was used to end the loop prematurely. A: You cant get WaitHandle for Parallel.For() and you dont need to - call is synchronous (all iterations will be completed after call is finished). If you need to execute loop itself on other thread, not only iterations, you must wrap it in Task or Thread, and that objects will provide you wait handles. But if you are goint to wait for results on the same thread as you call Parallel.For() (as in your sample code), it doesn't make any sense.
doc_1297
fizzy.sh: #!/usr/bin/env sh div3() { expr $1 % 3 = 0 } div5() { expr $1 % 5 = 0 } fizzy() { if [ $(div3 $1) ] && [ $(div5 $1) ]; then expr "FizzBuzz" elif [ $(div3 $1) ]; then expr "Fizz" elif [ $(div5 $1) ]; then expr "Buzz" else expr "$1" fi } echo $(fizzy 1) echo $(fizzy 2) echo $(fizzy 3) Example: $ ./fizzy.sh FizzBuzz FizzBuzz FizzBuzz A: expr $1 % 3 = 0 yields 1 or 0, depending on whether the result of $1 % 3 is zero or not, but if treats 0 as true, not false. sh-3.2$ if [ 0 ]; then echo ok; fi ok So you'd need to compare the output of your function against 1. Something like this: #!/usr/bin/env sh div3() { expr $1 % 3 = 0 } div5() { expr $1 % 5 = 0 } fizzy() { if [ $(div3 $1) -eq 1 ] && [ $(div5 $1) -eq 1 ]; then expr "FizzBuzz" elif [ $(div3 $1) -eq 1 ]; then expr "Fizz" elif [ $(div5 $1) -eq 1 ]; then expr "Buzz" else expr "$1" fi } for (( i = 1; i <= 15; i++ )) do echo $(fizzy $i) done A: Without the need for div3 or div5 functions. fizzbuzz() { # eg: fizzbuzz 10 ((($1%15==0))&& echo FizzBuzz)|| ((($1%5==0))&& echo Buzz)|| ((($1%3==0))&& echo Fizz)|| echo $1; } Or you could do it all at once fizzbuzz() { # eg: fizzbuzz for i in {1..100}; do ((($i%15==0))&& echo FizzBuzz)|| ((($i%5==0))&& echo Buzz)|| ((($i%3==0))&& echo Fizz)|| echo $i; done; } A: If your shell is bash, you don't need to call out to expr: div3() { (( $1 % 3 == 0 )); } div5() { (( $1 % 5 == 0 )); } fizzbuzz() { if div3 $1 && div5 $1; then echo FizzBuzz elif div3 $1; then echo Fizz elif div5 $1; then echo Buzz else echo fi } for ((n=10; n<=15; n++)); do printf "%d\t%s\n" $n $(fizzbuzz $n) done
doc_1298
The users (multiple workstations) are running Office 2010 and Office 2016. So far I have not got this working on any computer other than my own. When they open the workbook the userform loads fine, they enter the data, then click Save. When they click Save the form hangs for a few seconds and then just closes; nothing else happens after that. I know using an Access form would be the better option here, but unfortunately my company isn't very big and only purchased the licence for myself. I'm definitely not an expert with VBA and I'm sure my code is sloppy, so any constructive feedback is greatly appreciated. Below is my userform code:

Private Sub UserForm_Initialize()
    'Sets variables when the userform initializes
    Call MakeFormResizeable(Me)
    Me.tbDate.Value = Format(Now(), "mm/dd/yyyy hh:mm")
    Call List_box_Data
End Sub

Private Sub tbTotalPartsComplt1_Change()
    Dim ssheet As Worksheet
    Dim lastrow As Long
    'Dim ussheet As Worksheet
    Set ssheet = ThisWorkbook.Sheets("DATATEMP")

    'Declare what cells on above worksheets to collect data
    nr = ssheet.Cells(Rows.Count, 1).End(xlUp).Row + 1
    'us = ussheet.Cells(Rows.Count, 1).End(xlUp).Row + 1
    lastrow = Cells(Rows.Count, "A").End(xlUp).Row

    'Data captured on DATATEMP page
    ssheet.Cells(nr, 1) = Me.cboHour1
    ssheet.Cells(nr, 2) = tbDate
    ssheet.Cells(nr, 3) = Me.cboEmployeeName
    ssheet.Cells(nr, 4) = Me.cboWorkArea
    ssheet.Cells(nr, 5) = Me.cboPartNum1.Value
    ssheet.Cells(nr, 6) = Me.tbWorkOrder1.Value
    ssheet.Cells(nr, 7) = Me.cboOpDesc1
    ssheet.Cells(nr, 10) = Me.tbStdMin1.Value
    ssheet.Cells(nr, 11) = Me.tbTotalPartsComplt1.Value
    ssheet.Cells(nr, 12) = Me.lblPartTotalStdMins1.Caption
    ssheet.Cells(nr, 13) = Me.cboAreaSup
    ssheet.Cells(nr, 14) = Me.tbLostTime1      'Lost time mins
    ssheet.Cells(nr, 15) = Me.cboLostTime1     'Lost time code
    ssheet.Cells(nr, 16) = Me.cboShift         'Shifts 1st or 2nd
    ssheet.Cells(nr, 17) = Me.cboPermTemp      'Employee Permanent or Temp hire
    ssheet.Cells(nr, 18) = Me.cboShiftStart1   'Shift start time
    ssheet.Cells(nr, 19) = Me.cboShiftEnd1     'Shift end time
    ssheet.Cells(nr, 20) = Me.tbNotes

    'Multiply the values in the Standard Mins box and Parts Completed box to send to the label
    Sum = Val(tbStdMin1.Text) * Val(tbTotalPartsComplt1.Text)
    Summ = Val(tbStdMin1.Text) * Val(tbTotalPartsComplt1.Text)
    Sum2 = Val(tbTotalPartsComplt1.Text) '+ Val(tbTotalPartsComplt2.Text) + Val(tbTotalPartsComplt3.Text) + Val(tbTotalPartsComplt4.Text) + Val(tbTotalPartsComplt5.Text) + Val(tbTotalPartsComplt6.Text) + Val(tbTotalPartsComplt7.Text) + Val(tbTotalPartsComplt8.Text) + Val(tbTotalPartsComplt9.Text) + Val(tbTotalPartsComplt10.Text) + Val(tbTotalPartsComplt11.Text) + Val(tbTotalPartsComplt12.Text)
    'Sum3 = Val(lblPartTotalStdMins1.Caption)
    Sum4 = Val(tbLostTime1.Text) + Val(tbLostTime2.Text)
    lblPartTotalStdMins1.Caption = Sum     'Standard mins label
    lblTotalPartsComp.Caption = Sum2       'TOTAL parts completed label
    lblTotalLostMins.Caption = Sum4
    lblPartTotalStdMins.Caption = Summ
End Sub

Private Sub UserForm_QueryClose(Cancel As Integer, CloseMode As Integer)
    If CloseMode = vbFormControlMenu Then
        Cancel = True
        MsgBox "Please use the Close Form button!"
    End If
End Sub

Private Sub UserForm_Resize()
    Call AdjustSizeOfControls
End Sub

'*''*'''''''''''''''''''''''''''''''''*''*'
'*''*'BUTTON CONTROLS BELOW THIS LINE'*''*'
'*''*'''''''''''''''''''''''''''''''''*''*'

Private Sub btnClose1_Click()
    'Application.Visible = True
    Unload Me
    ThisWorkbook.Close
    Application.Quit
    'DailyOpLogMain.Hide
End Sub

'*'''''''''''''''''''
'*'' HELP BUTTON '
'*'''''''''''''''''''
'Sends email for feedback/comments/support ([email protected],[email protected])
Private Sub btnHelp_Click()
    Dim xOutApp As Object
    Dim xOutMail As Object
    Dim xMailBody As String
    On Error Resume Next
    Set xOutApp = CreateObject("Outlook.Application")
    Set xOutMail = xOutApp.CreateItem(0)
    xMailBody = "REF: TIME MATRIX APP" & vbNewLine & vbNewLine & _
                "Have Some Feedback or Suggestions? Great! We Love Feedback!" & vbNewLine & _
                "Having Problems Navigating or Need Support With The App? We Can Help!" & vbNewLine & _
                "Write/Comment Below and we will get in touch!" & vbNewLine & _
                "" & vbNewLine & _
                "" & vbNewLine & _
                "**BEGIN MESSAGE BELOW**"
    On Error Resume Next
    With xOutMail
        .To = "[email protected];[email protected]"
        .CC = ""
        .BCC = ""
        .Subject = "Daily Operator Log"
        .Body = xMailBody
        .Display 'or use .Send
    End With
    On Error GoTo 0
    Set xOutMail = Nothing
    Set xOutApp = Nothing
End Sub

'*'''''''''''''''*'
'*' RESET BUTTON'*'
'*'''''''''''''''*'
'Defines what data to erase/clear from cells/fields when clicking the "Clear All" button
Private Sub btnReset_Click()
    ClearAll Me
    Me.cboHour1 = ""
    Me.tbNotes = ""
    Me.lblPartTotalStdMins1 = "0"
    'Me.lblTotalStandardMins.Caption = "0"
    Me.lblTotalPartsComp.Caption = "0"
    Worksheets("DATATEMP").Range("A3:P137").ClearContents
    ReloadDateTime
End Sub

'*'''''''''''''''*'
'*' SAVE BUTTON'*'
'*'''''''''''''''*'
Private Sub btnSave_Click()
    Application.EnableCancelKey = xlDisabled

    'Check and validate there are no empty entries
    If Me.cboEmployeeName.Value = "" Then
        MsgBox "Please enter the Employee Name", vbCritical
        Exit Sub
    End If
    If Me.cboWorkArea.Value = "" Then
        MsgBox "Please enter the Work Area", vbCritical
        Exit Sub
    End If
    If Me.cboAreaSup.Value = "" Then
        MsgBox "Please enter the Area Supervisor", vbCritical
        Exit Sub
    End If
    If Me.cboShiftStart1.Value = "" Then
        MsgBox "Please enter your shift start time", vbCritical
        Exit Sub
    End If
    If Me.cboShiftEnd1.Value = "" Then
        MsgBox "Please enter your shift end time", vbCritical
        Exit Sub
    End If
    If Me.cboHour1.Value = "" Then
        MsgBox "Please enter the hour number 1 thru 12", vbCritical
        Exit Sub
    End If
    If Me.cboPartNum1.Value = "" Then
        MsgBox "Please enter the part number", vbCritical
        Exit Sub
    End If
    If Me.tbWorkOrder1.Value = "" Then
        MsgBox "Please enter the job number", vbCritical
        Exit Sub
    End If
    If Me.cboOpDesc1.Value = "" Then
        MsgBox "Please enter the operation performed", vbCritical
        Exit Sub
    End If
    If Me.cboSeqNum1.Value = "" Then
        MsgBox "Please enter the sequence number", vbCritical
        Exit Sub
    End If
    If Me.cboOpNum1.Value = "" Then
        MsgBox "Please enter the operation number", vbCritical
        Exit Sub
    End If
    If Me.tbStdMin1.Value = "" Then
        MsgBox "Please enter standard minutes", vbCritical
        Exit Sub
    End If
    If Me.tbTotalPartsComplt1.Value = "" Then
        MsgBox "Please enter parts quantity", vbCritical
        Exit Sub
    End If

    Dim conn As New ADODB.Connection
    Dim rs1 As New ADODB.Recordset
    Dim connstring As String

    #If Win64 Then
        conn.Open "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=\\superform\production\_Working Folders\MASTER\DBbackend\ProductionTrimShop1.accdb"
    #Else
        conn.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=\\superform\production\_Working Folders\MASTER\DBbackend\ProductionTrimShop1.accdb"
    #End If

    connstring = "Select * from TEST"
    rs1.Open Source:=connstring, ActiveConnection:=conn, LockType:=adLockOptimistic

    With rs1
        'if new data record
        .AddNew
        .Fields("Date Time") = Me.tbDate                    'date and time stamp
        .Fields("Employee Name") = Me.cboEmployeeName       'employee name
        .Fields("Work Area") = Me.cboWorkArea               'work area
        .Fields("Part Number") = Me.cboPartNum1             'part number
        .Fields("Hour") = Me.cboHour1                       'hour of shift 1 thru 12
        .Fields("Job Number") = Me.tbWorkOrder1             'job number
        .Fields("Operation") = Me.cboOpDesc1                'operation being performed
        .Fields("Sequence Number") = Me.cboSeqNum1          'sequence number
        .Fields("Operation Number") = Me.cboOpNum1          'operation number
        .Fields("Standard Mins") = Me.tbStdMin1             'standard mins to perform operation
        .Fields("Parts Complete") = Me.tbTotalPartsComplt1  'total parts completed
        .Fields("Total Std Mins") = Me.lblPartTotalStdMins1 'standard mins multiplied by total number of parts completed
        .Fields("Area Supervisor") = Me.cboAreaSup          'area supervisor
        .Fields("Lost Time Mins") = Me.tbLostTime1          'total mins of lost time
        .Fields("Lost Time Mins2") = Me.tbLostTime2         'total mins of lost time
        .Fields("Lost Time Code") = Me.cboLostTime1         'lost time code
        .Fields("Lost Time Code2") = Me.cboLostTime2
        .Fields("Shift") = Me.cboShift                      'shift being worked
        .Fields("PermTemp") = Me.cboPermTemp                'employee status permanent hire or temp hire
        .Fields("Shift Start") = Me.cboShiftStart1          'shift start time
        .Fields("Shift End") = Me.cboShiftEnd1              'shift end time
        .Fields("Notes") = Me.tbNotes                       'notes or comments
        .Update
        .Close
    End With

    conn.Close
    Application.DisplayAlerts = True
    Application.ScreenUpdating = True
    MsgBox "Data Submitted Successfully!"

    'Clear contents of all fields on UI upon clicking save (indicator of all systems GO)
    Me.cboSeqNum1 = ""
    Me.cboOpNum1 = ""
    'Me.cboShift = ""
    Me.cboHour1 = ""
    Me.tbDate = ""
    'Me.cboEmployeeName = ""
    'Me.cboWorkArea = ""
    'Me.cboAreaSup = ""
    Me.cboPartNum1 = ""
    Me.cboOpDesc1 = ""
    Me.cboSeqNum1 = ""
    Me.cboOpNum1 = ""
    Me.tbStdMin1 = ""
    Me.tbNotes = ""
    Me.cboLostTime1 = ""
    Me.tbLostTime1 = ""
    Me.tbWorkOrder1.Text = ""
    Me.tbTotalPartsComplt1.Text = ""
    lblPartTotalStdMins.Caption = "0"
    Me.lblTotalPartsComp.Caption = "0"
    Worksheets("DATATEMP").Range("A3:T137").ClearContents
    ReloadDateTime
    'RefreshListbox
    Call List_box_Data
End Sub

'*''*'''''''''''''''''''''''''''''''''*''*'
'*''*' ^^^^^END BUTTON CONTROLS ^^^^^'*''*'
'*''*'''''''''''''''''''''''''''''''''*''*'

Private Sub ReloadDateTime()
    Me.tbDate.Value = Format(Now(), "mm/dd/yyyy hh:mm")
End Sub

Sub List_box_Data()
    Dim sh As Worksheet
    Set sh = ThisWorkbook.Sheets("DATASUPPORT")
    sh.Cells.ClearContents

    Dim cnn As New ADODB.Connection
    Dim rst As New ADODB.Recordset
    Dim qry As String, i As Integer
    Dim n As Long

    qry = "SELECT * FROM TEST ORDER BY ID DESC"
    'ElseIf Me.ComboBox1.Value = "Return Pending" Then
    ' Else
    'qry = "SELECT * FROM TBL_Customer WHERE Return_Date IS NULL"
    ' qry = "SELECT * FROM TBL_Customer WHERE " & Me.ComboBox1.Value & " LIKE '%" & Me.TextBox1.Value & "%'"
    'End If

    cnn.Open "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=\\superform\production\_Working Folders\MASTER\DBbackend\ProductionTrimShop1.accdb"
    rst.Open qry, cnn, adOpenKeyset, adLockOptimistic

    sh.Range("A2").CopyFromRecordset rst
    For i = 1 To rst.Fields.Count
        sh.Cells(1, i).Value = rst.Fields(i - 1).Name
    Next i
    rst.Close
    cnn.Close

    With Me.ListBox1
        '.List = Dtarr
        .ColumnCount = 20
        .ColumnHeads = True
        .ColumnWidths = "18,25,80,140,50,80,80,40,40,40,40,40,40,80,40,40,80,80,80,80"
        n = sh.Range("A" & Application.Rows.Count).End(xlUp).Row
        If n > 1 Then
            .RowSource = "DATASUPPORT!A2:T" & n
        Else
            .RowSource = "DATASUPPORT!A2:T2"
        End If
    End With
End Sub

Private Sub cboLostTime1_Change()
    SumLostTime = Val(tbLostTime1.Text)
    lblTotalLostMins.Caption = SumLostTime
End Sub

Private Sub cboLostTime2_Change()
    SumLostTime2 = Val(tbLostTime1.Text) + Val(tbLostTime2.Text)
    lblTotalLostMins.Caption = SumLostTime2
End Sub

Private Sub cboShiftEnd1_Change()
    With cboShiftEnd1
        .Value = Format(.Value, "hh:mm AM/PM")
        .Value = IIf(.Value = "12:25 AM", "06:00", cboShiftEnd1)
    End With
End Sub

Private Sub cboShiftStart1_Change()
    With cboShiftStart1
        .Value = Format(.Value, "hh:mm AM/PM")
        .Value = IIf(.Value = "12:25 AM", "06:00", cboShiftStart1)
    End With
End Sub

Private Sub btnAdmin_Click()
    Unload Me
    Application.Visible = True
End Sub

A: After updating the computers experiencing this issue from Office 2016 to Microsoft 365, the problem went away. I'd still like to know what a workaround/fix could be, so if anyone happens to know more or would like to test, I'm happy to provide the file.
doc_1299
E.g.: サイズ:XL 約77㎝×約58㎝ -> サイズ:XL 約77�僉潴�58��

This was sourced from this page. My attempts at decoding it as EUC-JP, and the like, have failed and I'm at a bit of a loss as to what the root cause might be here.

Here's an example with the problematic bytes from the site:

content = b"\xa5\xb5\xa5\xa4\xa5\xba\xa1\xa7XL \xcc\xf377\xad\xd1\xa1\xdf\xcc\xf358\xad\xd1"
text = content.decode("EUC-JP")
print(text)

This should print サイズ:XL 約77㎝×約58㎝, but it throws:

Traceback (most recent call last):
  File "<pyshell#53>", line 1, in <module>
    text = content.decode("EUC-JP")
UnicodeDecodeError: 'euc_jp' codec can't decode byte 0xad in position 15: illegal multibyte sequence

A: Looks like the actual encoding is "EUC-JISx0213" or "EUC-JIS-2004", as this code works:

content = b"\xa5\xb5\xa5\xa4\xa5\xba\xa1\xa7XL \xcc\xf377\xad\xd1\xa1\xdf\xcc\xf358\xad\xd1"
text = content.decode("euc_jis_2004")
print(text)
text = content.decode("euc_jisx0213")
print(text)

From Wikipedia on EUC-JP:

A related and partially compatible encoding, called EUC-JISx0213 or EUC-JIS-2004, encodes JIS X 0201 and JIS X 0213

The "㎝" character comes from an extension beyond the base JIS X 0208 set, which is why plain EUC-JP cannot decode it while the JIS X 0213-based codecs can.

Note: if you just re-encode the page and save it, the browser will still not show it properly, because the page is still marked as "EUC-JP" in its meta tag.
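For anyone hitting the same mojibake on a whole page rather than a single byte string, here is a minimal sketch of the fallback approach using only the standard library. The URL, the decode_page helper name, the charset=EUC-JP rewrite and the output filename are illustrative placeholders rather than details from the original question, and a real page may spell its charset declaration differently.

import urllib.request

def decode_page(raw):
    # Try plain EUC-JP first, then the JIS X 0213-based variants that
    # cover characters such as ㎝ (hypothetical helper, not from the question).
    for codec in ("euc_jp", "euc_jisx0213", "euc_jis_2004"):
        try:
            return raw.decode(codec)
        except UnicodeDecodeError:
            pass
    # Last resort: substitute undecodable bytes instead of raising.
    return raw.decode("euc_jp", errors="replace")

url = "https://example.com/item.html"  # placeholder URL, not the page from the question
raw = urllib.request.urlopen(url).read()
html = decode_page(raw)

# If the page is saved back out as UTF-8, the charset declaration has to be
# updated as well, otherwise browsers keep treating the file as EUC-JP.
html = html.replace("charset=EUC-JP", "charset=utf-8")
with open("item_utf8.html", "w", encoding="utf-8") as f:
    f.write(html)

Trying the narrower codec first and widening only on failure keeps the behaviour unchanged for pages that really are plain EUC-JP.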