doc_2600
Table players has an id and a name, while the table stats looks like this:

id | Player  | Stat  | Value
1  | 0000001 | Wins  | 5
2  | 0000001 | Loses | 6
3  | 0000001 | Jumps | 156
4  | 0000001 | Shots | 580
5  | 0000002 | Wins  | 15
6  | 0000002 | Loses | 2
7  | 0000002 | Jumps | 530
8  | 0000002 | Shots | 1704

I want to filter players that match several conditions, like, for example, players that have more than 5 wins but fewer than 200 jumps. I tried this:

SELECT players.name
FROM players
LEFT JOIN stats ON stats.player = players.id
WHERE (stats.stat = "Wins" AND stats.value > 5)
  AND (stats.stat = "Jumps" AND stats.value < 200)
GROUP BY players.id

But it returns nothing, because each stats row holds only one stat, so no single row can satisfy both conditions at once. I also tried using OR:

SELECT players.name
FROM players
LEFT JOIN stats ON stats.player = players.id
WHERE (stats.stat = "Wins" AND stats.value > 5)
   OR (stats.stat = "Jumps" AND stats.value < 200)
GROUP BY players.id

But in that case, it returns the players that match any of the conditions, and I only want the ones that match both conditions. In this specific example, it should only return the player with id 0000001. I know I could do it with a different LEFT JOIN for every different stat, but the truth is the actual table is huge and has tons of different stats, so I don't think that is an option because it would be very slow.

A: There is no need to aggregate. You can do this with two inner joins, one per condition:

SELECT p.name
FROM players p
INNER JOIN stats s1 ON s1.player = p.id AND s1.stat = 'Wins' AND s1.value > 5
INNER JOIN stats s2 ON s2.player = p.id AND s2.stat = 'Jumps' AND s2.value < 200

With an index on stats(player, stat, value), this should be an efficient option.
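The two-inner-join approach from the answer can be sketched end to end with an in-memory SQLite database holding the question's sample data. Player names here are hypothetical (the question only shows ids). Note that with a strict "> 5" neither sample player qualifies (player 0000001 has exactly 5 wins), so this sketch uses ">= 5" to reproduce the expected result.

```python
import sqlite3

# Sketch of the two-inner-join approach, run against an in-memory copy of
# the question's sample data. Names 'player_one'/'player_two' are made up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE players (id TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE stats (id INTEGER PRIMARY KEY, player TEXT, stat TEXT, value INTEGER);
    INSERT INTO players VALUES ('0000001', 'player_one'), ('0000002', 'player_two');
    INSERT INTO stats (player, stat, value) VALUES
        ('0000001', 'Wins', 5),   ('0000001', 'Loses', 6),
        ('0000001', 'Jumps', 156), ('0000001', 'Shots', 580),
        ('0000002', 'Wins', 15),  ('0000002', 'Loses', 2),
        ('0000002', 'Jumps', 530), ('0000002', 'Shots', 1704);
""")

# One join per condition; a player appears only if every join finds a row.
rows = conn.execute("""
    SELECT p.name
    FROM players p
    INNER JOIN stats s1 ON s1.player = p.id AND s1.stat = 'Wins'  AND s1.value >= 5
    INNER JOIN stats s2 ON s2.player = p.id AND s2.stat = 'Jumps' AND s2.value < 200
""").fetchall()
print(rows)  # only player 0000001 matches both conditions
```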
doc_2601
./src/main.ts - Error: Module build failed (from ./node_modules/@ngtools/webpack/src/ivy/index.js): Error: Emit
./src/polyfills.ts - Error: Module build failed (from ./node_modules/@ngtools/webpack/src/ivy/index.js): Error: Emit
Error: Failed to initialize Angular compilation - The target entry-point "@angular/http" has missing dependencies:
 - rxjs/Observable

Here is what my package.json looks like:

{
  "name": "admindashboard",
  "version": "0.0.0",
  "scripts": {
    "ng": "ng",
    "start": "ng serve",
    "build": "ng build",
    "watch": "ng build --watch --configuration development",
    "test": "ng test"
  },
  "private": true,
  "dependencies": {
    "@angular/animations": "^14.0.0",
    "@angular/common": "^14.2.12",
    "@angular/compiler": "^14.0.0",
    "@angular/core": "^14.0.0",
    "@angular/forms": "^14.0.0",
    "@angular/platform-browser": "^14.0.0",
    "@angular/platform-browser-dynamic": "^14.0.0",
    "@angular/router": "^14.0.0",
    "@auth0/angular-jwt": "^5.1.0",
    "@types/jwt-decode": "^3.1.0",
    "angular2-jwt": "^0.2.3",
    "bootstrap": "^5.2.3",
    "crypto-js": "^4.1.1",
    "datatables.net": "^1.13.1",
    "datatables.net-buttons": "^2.3.3",
    "datatables.net-buttons-dt": "^2.3.3",
    "datatables.net-dt": "^1.13.1",
    "jquery": "^3.6.1",
    "jszip": "^3.10.1",
    "jwt-decode": "^3.1.2",
    "ngx-bootstrap": "^9.0.0",
    "rxjs": "~7.5.0",
    "rxjs-compat": "^6.6.7",
    "tslib": "^2.3.0",
    "zone.js": "~0.11.4"
  },
  "devDependencies": {
    "@angular-devkit/build-angular": "^14.0.3",
    "@angular/cli": "^15.0.2",
    "@angular/compiler-cli": "^14.0.0",
    "@types/crypto-js": "^4.1.1",
    "@types/datatables.net": "^1.10.24",
    "@types/datatables.net-buttons": "^1.4.7",
    "@types/jasmine": "~4.0.0",
    "@types/jquery": "^3.5.14",
    "jasmine-core": "~4.1.0",
    "karma": "~6.3.0",
    "karma-chrome-launcher": "~3.1.0",
    "karma-coverage": "~2.2.0",
    "karma-jasmine": "~5.0.0",
    "karma-jasmine-html-reporter": "~1.7.0",
    "typescript": "~4.7.2"
  },
  "overrides": {
    "autoprefixer": "10.4.5"
  }
}
doc_2602
if(isset($_POST['sby'])){
    $value = $_POST['sby'];
    if(isset($_POST['search'])){
        if($value == 'cname'){ ?>
<!doctype html>
<head>
<meta charset="utf-8">
<title>Untitled Document</title>
<link href="../../css/style.css" rel="stylesheet" type="text/css" />
<!--[if lt IE 9]>
<script src="http://html5shiv.googlecode.com/svn/trunk/html5.js"></script>
<![endif]-->
</head>
<body>
<div class="container">
<header>
<a href="home.php"><img src="../../img/Intellibest-EMM Logo.jpg" alt="Insert Logo Here" name="Insert_logo" width="180" height="141" id="Insert_logo" style="background-color: #C6D580; display:block;" /></a>
</header>
<div class="sidebar1" style="height:auto">
<ul class="nav">
<li><a href="../home.php">Home</a></li>
<li><a href="../look.php">Look For</a></li>
<li><a href="../make.php">Make Something</a></li>
<li><a href="../settings.php">Settings</a></li>
</ul>
</div>
<table>
<thead id="tbl">
<tr>
<td><h3>Company Name</h3></td>
</tr>
</thead>
<?php do { ?>
<tbody id="result">
<tr>
<td><?php echo $row_get_cname['Company Name']; ?></td>
<?php } while ($row_get_cname = mysql_fetch_assoc($get_cname)); ?>
</tr>
</table>
<?php } else if($value == 'ename'){ ?>
<table>
<thead id="tbl">
<tr>
<td><h3>First Name</h3></td>
<td><h3>Last Name</h3></td>
<td><h3>Position</h3></td>
<td><h3>Company Name</h3></td>
</tr>
</thead>
<?php do { ?>
<tbody id="result">
<tr>
<td><?php echo $row_get_ename['First Name'].' '.$row_get_ename['Last Name'].'<br>'.$row_get_ename['Position'].'<br>'.$row_get_ename['client companies (pana)`.`Company Name']; ?></td>
<?php } while ($row_get_ename = mysql_fetch_assoc($get_ename)); ?>
</tr>
</tbody>
</table>
<?php } else if($value == 'iname'){
    echo "<table>
        <thead id=tbl>
            <tr>
                <td><h3>Company Name</h3></td>
                <td><h3>Industry</h3></td>
            </tr>
        </thead>";
    do {
        echo "<tbody id=result><tr><td>";
        echo $row_get_iname['Company Name'].'</td><td>'.$row_get_iname['Industry'];
        echo "</td>";
    } while ($row_get_iname = mysql_fetch_assoc($get_iname));
    echo "</tr></tbody></table>";
} else {
    echo "Please enter a valid search query";
}
} else {
    echo "Please define search";
}
} else {
    echo "Please enter a search query";
}

A: Try this:

<!doctype html>
<head>
<meta charset="utf-8">
<title>Untitled Document</title>
<link href="../../css/style.css" rel="stylesheet" type="text/css" />
<!--[if lt IE 9]>
<script src="http://html5shiv.googlecode.com/svn/trunk/html5.js"></script>
<![endif]-->
</head>
<body>
<div class="container">
<header>
<a href="home.php"><img src="../../img/Intellibest-EMM Logo.jpg" alt="Insert Logo Here" name="Insert_logo" width="180" height="141" id="Insert_logo" style="background-color: #C6D580; display:block;" /></a>
</header>
<div class="sidebar1" style="height:auto">
<ul class="nav">
<li><a href="../home.php">Home</a></li>
<li><a href="../look.php">Look For</a></li>
<li><a href="../make.php">Make Something</a></li>
<li><a href="../settings.php">Settings</a></li>
</ul>
</div>
<?php
if(isset($_POST['sby'])){
    $value = $_POST['sby'];
    if(isset($_POST['search'])){
        if($value == 'cname'){ ?>

This way, the top of your page will show up, regardless of the value of $value.

A: I had a similar problem. If it's under any other selector like <a href>, then don't style the plain id; you must style it together with the selector, e.g. #something a:hover. Hope that helps.

A: Try editing this part: href="../../css/style.css" and change it to href="css/style.css", or you can try doing it manually.
doc_2603
How can I compute the above long m? I am worried about overflow and I don't know about overflow at all.

A: You don't need Java to solve this. Long.MAX_VALUE == (2^63)-1. If n == 2^(63/3) = 2^21, then n*n*n = 2^63. So, (m+1) == 2^21, and hence m == (2^21)-1.
If you want to write some code to convince yourself of this:

long m = (1L << 21) - 1;
System.out.println(m*m*m);                  // 9223358842721533951
System.out.println(m*m*m < Long.MAX_VALUE); // true

long n = m + 1;
System.out.println(n*n*n);                  // -9223372036854775808

So n*n*n has obviously overflowed, because its value is negative. (Note that if the result were positive, or even greater than m*m*m, this wouldn't be evidence that it hadn't overflowed. It's just coincidence that the overflow is so apparent.)
You can also use Long.compareUnsigned:

// Negative, so m*m*m < Long.MAX_VALUE
System.out.println(Long.compareUnsigned(m*m*m, Long.MAX_VALUE));
// Positive, so unsigned n*n*n > Long.MAX_VALUE
System.out.println(Long.compareUnsigned(n*n*n, Long.MAX_VALUE));

A: Thank you very much, c0der. I can use the Math.cbrt() function to get the answer, and I can check that the answer is right by Andy Turner's method.

long m1 = (long) Math.cbrt((double) Long.MAX_VALUE);
System.out.println(m1*m1*m1);
m1++;
System.out.println(m1*m1*m1);

A: Is this ok?

long i = 0, j = 1;
while (i*i*i < j*j*j) {
    i++;
    j++;
}
System.out.println(i);
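The wraparound the first answer demonstrates can also be reproduced outside Java. A small sketch: Python integers never overflow, so a helper explicitly wraps values into the signed 64-bit range [-2^63, 2^63) to mimic Java's long arithmetic.

```python
# Sketch: reproduce Java's 64-bit signed overflow in Python, where integers
# are arbitrary-precision, by wrapping results into [-2**63, 2**63).
def to_int64(x):
    """Wrap an arbitrary Python int to a signed 64-bit value."""
    return (x + 2**63) % 2**64 - 2**63

LONG_MAX = 2**63 - 1
m = (1 << 21) - 1                 # largest m with m*m*m <= Long.MAX_VALUE
print(to_int64(m * m * m))        # 9223358842721533951 -- fits, no wrap
n = m + 1
print(to_int64(n * n * n))        # -9223372036854775808 -- wrapped to Long.MIN_VALUE
```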
doc_2604
Parse error: syntax error, unexpected end of file in C:\LOCATION on line 123

I know I am more than likely missing a curly brace somewhere. But for the life of me I can't find it. If someone can glance their eyes over and spot it I would appreciate it :)

<?php
$page_title = 'Register';
include 'header.php';

//master control to ensure scripts only execute when form is submitted
if ($_SERVER['REQUEST_METHOD']=='POST') {
    //Opens connection to database and create an array to store any errors in
    require 'connect.php';
    $errors = array();

    //Checks to see if username is empty and if not true stores it into a variable
    if( empty($_POST['username'])) {
        $errors[] = 'Please enter username';
    } else {
        $username = mysqli_real_escape_string($conn,trim($_POST['username']));
    }

    //Checks to see if email is empty and if not true stores it into a variable
    if( empty($_POST['email'])) {
        $errors[] = 'Please enter email';
    } else {
        $email = mysqli_real_escape_string($conn,trim($_POST['email']));
    }

    //Check to see if passwords match and if they do store into variable
    if( !empty($_POST['pass1'])) {
        if( $POST['pass1'] != $_POST['pass2']) {
            $errors[] = 'Passwords do not match';
        } else {
            $pass = mysqli_real_escape_string($conn, trim($_POST['pass1']));
        }
    } else {
        $errors[] = 'Please enter a password';
    }

    //Checks to see if Email is already registered
    if( empty( $errors)) {
        $q = "SELECT user_id FROM users WHERE user_email='$email'";
        $r = mysqli_query($conn, $q);
        if( mysqli_num_rows($r) != 0) {
            $errors[] = 'Email Address is already registered';
        }

    //Checks to see if username is already taken
    if( empty( $errors)) {
        $q = "SELECT user_id FROM users WHERE user_name='$username'";
        $r = mysqli_query($conn, $q);
        if( mysqli_num_rows($r) != 0) {
            $errors[] = 'Username is already taken, please choose another';
        }

    //If successful send data to to database
    if( empty( $errors)) {
        $q = "INSERT INTO users (user_name, user_pass, user_email, user_date, user_level) VALUES ('$username', SHA1('$pass'), '$email', NOW(), '0')";
        $r = mysqli_query($conn, $q);
        if($r) {
            echo'<h1>Registered!</h1>
            <p>you may now login</p>';
        }
        //Closes connections
        mysqli_close($conn);
        include 'footer.php';
        exit();
    }
    //Otherwise display errors
    else {
        echo'<h1>Error!</h1>
        <p>Following errors have occured:<br>';
        foreach($errors as $msg) {
            echo " - $msg<br>";
        }
        echo '<p>Please try again</p>';
        mysqli_close($conn);
    }
}
?>
<h1>Sign-up</h1>
<form action="signup.php" method="POST">
<p>
Username:<input type="text" name="username" value="<?php if (isset($_POST['username'])) echo $_POST['username'];?>">
</p>
<p>
Password:<input type="password" name="pass1">
Repeat Password:<input type="password" name="pass2">
</p>
<p>
E-Mail Address:<input type="text" name="email" value="<?php if (isset($_POST['email'])) echo $_POST['email'];?>">
</p>
<p>
<input type="submit" value="register">
</p>
</form>
<?php include'footer.php'; ?>
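Missing-brace hunts like this one can be narrowed down mechanically. A crude sketch: scan the file line by line and track the running brace depth; a nonzero depth at end of file means unclosed braces, and the line where the depth stops returning to its expected level points at the culprit. (It ignores braces inside strings and comments, so it is only a rough aid, and the sample snippet below is made up.)

```python
# Sketch: a crude brace-balance scanner to help locate a missing "}" in a
# source file. It does not skip strings or comments, so treat it as a hint.
def brace_balance(source):
    """Return a list of (line_number, depth_after_line) for each line."""
    depth, report = 0, []
    for lineno, line in enumerate(source.splitlines(), start=1):
        depth += line.count("{") - line.count("}")
        report.append((lineno, depth))
    return report

# Hypothetical snippet with one unclosed "{":
snippet = "if (a) {\n    if (b) {\n        do_thing();\n}\n"
print(brace_balance(snippet)[-1])  # final depth 1 => one "{" is never closed
```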
doc_2605
Function FindAndSavePicture() As String
'
' Find the target picture in the active window
'
    Dim myTempPath As String
    myTempPath = "C:\Users\" & Environ$("USERNAME") _
        & "\AppData\Local\Microsoft\Windows\pic_VBA.jpg"

    With ActiveWindow.Selection.SlideRange
        For Each s In .Shapes
            Debug.Print s.Name
            If s.Type = msoPicture And s.Width > 250 Then
                ' Show scale
                Debug.Print "s.Width=" & s.Width   ' s.Width=323,3931
                Debug.Print "s.Height=" & s.Height ' s.Height=405
                ' Save pic in file system
                s.Export myTempPath, ppShapeFormatJPG
                ' assign the return value for this function
                FindAndSavePicture = myTempPath
                Exit For
            End If
        Next
    End With
End Function

Problem
The exported image pic_VBA.jpg is much smaller than it is shown in PowerPoint. I want the original size of the picture. The image exported by VBA, pic_VBA.jpg, is 331 x 413 in dimensions. If I export the image manually using Save As Picture..., the exported image pic_SaveAs.jpg is 692 x 862 in dimensions, which is the original size.

*pic_VBA.jpg dimensions : 331 x 413
*pic_SaveAs.jpg dimensions : 692 x 862 (original size)

What I've tested
s.Export myTempPath, ppShapeFormatJPG, s.Width, s.Height, ppScaleXY
It doesn't work. The exported image's dimensions are 150 x 413.

Question
So, how to adjust the exported image size in PowerPoint using VBA?

Related information
*MSDN: Shape.Export Method
*MSDN: PpExportMode Enumeration

A: Is the image scaled in PowerPoint? If it's anything but 100%, you'll need to work out the scale % in X/Y dimensions, set it to 100%, export it and then scale it back to the stored settings.
This function will assist with that:

' Function to return the scale percentages of a given picture shape
' Written by: Jamie Garroch of YOUpresent.co.uk
Public Type ypPictureScale
    ypScaleH As Single
    ypScaleW As Single
End Type

' Calculate the scale of a picture by resetting it to 100%,
' comparing with its former size and then rescaling back to its original size
Public Function PictureScale(oShp As Shape) As ypPictureScale
    Dim ShpW As Single, ShpH As Single
    Dim LAR As Boolean
    ' Save the shape dimensions
    ShpH = oShp.height
    ShpW = oShp.width
    ' Unlock the aspect ratio if locked
    If oShp.LockAspectRatio Then LAR = True: oShp.LockAspectRatio = msoFalse
    ' Rescale the image to 100%
    oShp.ScaleHeight 1, msoTrue
    oShp.ScaleWidth 1, msoTrue
    ' Calculate the scale
    PictureScale.ypScaleH = ShpH / oShp.height
    PictureScale.ypScaleW = ShpW / oShp.width
    ' Rescale the image to its former size
    oShp.ScaleHeight PictureScale.ypScaleH, msoTrue
    oShp.ScaleWidth PictureScale.ypScaleW, msoTrue
    ' Relock the aspect ratio if originally locked
    If LAR Then oShp.LockAspectRatio = msoTrue
End Function

A: It's not clear from your comments, but you may be missing the fact that PowerPoint uses points (72 points to the inch) as dimensions, not inches or pixels. Convert the size of the shape from points to inches then multiply by 150 to get the size PPT will export to. That 150 may vary from one system to another, but I don't believe it does.

A: Use ActivePresentation.PageSetup.SlideWidth and ActivePresentation.PageSetup.SlideHeight as ScaleWidth and ScaleHeight in the Shape.Export method to receive a picture file with the original dimensions.
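The points-vs-pixels point in the second answer can be sketched as a tiny conversion helper. It assumes the ~150 DPI export figure quoted in that answer, which the answer itself notes may vary between systems.

```python
# Sketch: convert PowerPoint shape sizes (points, 72 per inch) to the pixel
# dimensions an export at a given DPI would produce. The 150 DPI default
# comes from the answer above and is an assumption, not a guarantee.
def points_to_pixels(points, dpi=150):
    return round(points * dpi / 72)

print(points_to_pixels(323.3931))  # shape width from the question, at 150 DPI
print(points_to_pixels(405))       # shape height from the question
```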
doc_2606
Error: In the project 'app' a resolved Google Play services library dependency depends on another at an exact version (e.g. "[15.0.1]"), but isn't being resolved to that version. The behavior exhibited by the library will be unknown.

app.properties:

target=android-27
android.library.reference.1=CordovaLib
android.library.reference.2=app
cordova.gradle.include.1=cordova-android-support-gradle-release/citizen-cordova-android-support-gradle-release.gradle
cordova.gradle.include.2=cordova-plugin-telerik-imagepicker/citizen-ignorelinterrors.gradle
cordova.system.library.1=com.squareup.okhttp3:okhttp-urlconnection:3.10.0
cordova.system.library.2=com.android.support:support-v4:24.1.1+
cordova.system.library.3=com.facebook.android:facebook-android-sdk:4.40.0
cordova.system.library.4=com.android.support:support-v4:25.+
cordova.system.library.5=com.android.support:appcompat-v7:25.+
cordova.system.library.6=com.google.android.gms:play-services-analytics:15.0.1
cordova.system.library.7=com.google.android.gms:play-services-auth:15.0.1
cordova.system.library.8=com.google.android.gms:play-services-identity:15.0.1
cordova.system.library.9=com.android.support:support-annotations:27.+
cordova.system.library.10=com.microsoft.azure:azure-mobile-android:3.4.0@aar
cordova.system.library.11=com.google.code.gson:gson:2.3
cordova.system.library.12=com.google.android.gms:play-services-location:16.+
cordova.system.library.13=com.android.support:appcompat-v7:23+
cordova.gradle.include.3=cordova-plugin-telerik-imagepicker/citizen-androidtarget.gradle
cordova.gradle.include.4=cordova-support-google-services/citizen-build.gradle
cordova.gradle.include.5=phonegap-plugin-multidex/citizen-multidex.gradle
cordova.system.library.14=com.android.support:support-v4:26.+
cordova.system.library.15=com.android.support:appcompat-v7:26.+
cordova.system.library.16=com.android.support:support-v13:27.+
cordova.system.library.17=me.leolin:ShortcutBadger:1.1.17@aar
cordova.system.library.18=com.google.firebase:firebase-messaging:17.3.2

The exact error is as below:

In project 'app' a resolved Google Play services library dependency depends on another at an exact version (e.g. "[15.0.1]"), but isn't being resolved to that version. Behavior exhibited by the library will be unknown.
Dependency failing: com.google.android.gms:play-services-stats:15.0.1 -> com.google.android.gms:play-services-basement@[15.0.1], but play-services-basement version was 16.0.1.
The following dependencies are project dependencies that are direct or have transitive dependencies that lead to the artifact with the issue.
-- Project 'app' depends onto com.google.android.gms:[email protected]
-- Project 'app' depends onto com.google.firebase:[email protected]
-- Project 'app' depends onto com.google.android.gms:[email protected]
-- Project 'app' depends onto com.google.android.gms:play-services-location@16.+
For extended debugging info execute Gradle from the command line with ./gradlew --info :app:assembleDebug to see the dependency paths to the artifact. This error message came from the google-services Gradle plugin, report issues at https://github.com/google/play-services-plugins and disable by adding "googleServices { disableVersionCheck = false }" to your build.gradle file.

Can anyone please help with this issue? Thanks in advance.

A: Don't mix play-services dependency versions. Always use the same version:

cordova.system.library.6=com.google.android.gms:play-services-analytics:15.0.1
cordova.system.library.7=com.google.android.gms:play-services-auth:15.0.1
cordova.system.library.8=com.google.android.gms:play-services-identity:15.0.1
cordova.system.library.12=com.google.android.gms:play-services-location:15.0.1
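A mismatch like the 15.0.1 / 16.+ mix above can be spotted automatically. A rough sketch: scan a properties-style file for Google Play services / Firebase coordinates and group the artifacts by version string, so any file listing more than one version stands out. (The parsing is a simplification; real Gradle resolution is more involved.)

```python
import re

# Sketch: group Google Play services / Firebase dependencies found in a
# Cordova properties-style file by version, to surface mixed versions.
def play_services_versions(text):
    versions = {}
    pattern = r"(com\.google\.(?:android\.gms|firebase)):([\w-]+):([\w.+]+)"
    for group, artifact, version in re.findall(pattern, text):
        versions.setdefault(version, []).append(artifact)
    return versions

sample = """\
cordova.system.library.6=com.google.android.gms:play-services-analytics:15.0.1
cordova.system.library.12=com.google.android.gms:play-services-location:16.+
"""
result = play_services_versions(sample)
print(result)  # two version keys => the file mixes play-services versions
```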
doc_2607
String[] ComputerScience = { "A", "B", "C", "D" };

And so on, containing 40 entries. My code reads 900 PDFs from 40 folders corresponding to each element of ComputerScience, manipulates the extracted text, and stores the output in a file named A.txt, B.txt, etc. Each folder "A", "B", etc. contains 900 PDFs. After a lot of documents, an exception "Too many open files" is thrown. I believe I am correctly closing the file handlers.

static boolean writeOccurencesFile(String WORDLIST, String categoria, TreeMap<String,Integer> map) {
    File dizionario = new File(WORDLIST);
    FileReader fileReader = null;
    FileWriter fileWriter = null;
    try {
        File cat_out = new File("files/" + categoria + ".txt");
        fileWriter = new FileWriter(cat_out, true);
    } catch (IOException e) {
        e.printStackTrace();
    }
    try {
        fileReader = new FileReader(dizionario);
    } catch (FileNotFoundException e) {
    }
    try {
        BufferedReader bufferedReader = new BufferedReader(fileReader);
        if (dizionario.exists()) {
            StringBuffer stringBuffer = new StringBuffer();
            String parola;
            StringBuffer line = new StringBuffer();
            int contatore_index_parola = 1;
            while ((parola = bufferedReader.readLine()) != null) {
                if (map.containsKey(parola) && !parola.isEmpty()) {
                    line.append(contatore_index_parola + ":" + map.get(parola).intValue() + " ");
                    map.remove(parola);
                }
                contatore_index_parola++;
            }
            if (!line.toString().isEmpty()) {
                fileWriter.append(getCategoryID(categoria) + " " + line + "\n"); // print riga completa documento N x1:y x2:a ...
            }
        } else {
            System.err.println("Dictionary file not found.");
        }
        bufferedReader.close();
        fileReader.close();
        fileWriter.close();
    } catch (IOException e) { return false; }
    catch (NullPointerException ex) { return false; }
    finally {
        try {
            fileReader.close();
            fileWriter.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    return true;
}

But the error still comes. (It is thrown at:)

try {
    File cat_out = new File("files/" + categoria + ".txt");
    fileWriter = new FileWriter(cat_out, true);
} catch (IOException e) {
    e.printStackTrace();
}

Thank you.

EDIT: SOLVED
I found the solution: in the main function in which writeOccurencesFile is called, another function created a RandomAccessFile and didn't close it. The debugger said the exception was thrown in writeOccurencesFile, but using File Leak Detector I found out that the PDFs were already opened and never closed after parsing to pure text. Thank you!

A: Try using this utility specifically designed for the purpose. This Java agent is a utility that keeps track of where/when/who opened files in your JVM. You can have the agent trace these operations to find out about the access pattern or handle leaks, and dump the list of currently open files and where/when/who opened them. When the exception occurs, this agent will dump the list, allowing you to find out where a large number of file descriptors are in use.

A: I have tried using try-with-resources, but the problem remains. Also, running in the macOS built-in console prints out a FileNotFound exception at the line of FileWriter fileWriter = ...

static boolean writeOccurencesFile(String WORDLIST, String categoria, TreeMap<String,Integer> map) {
    File dizionario = new File(WORDLIST);
    try (FileWriter fileWriter = new FileWriter("files/" + categoria + ".txt", true)) {
        try (FileReader fileReader = new FileReader(dizionario)) {
            try (BufferedReader bufferedReader = new BufferedReader(fileReader)) {
                if (dizionario.exists()) {
                    StringBuffer stringBuffer = new StringBuffer();
                    String parola;
                    StringBuffer line = new StringBuffer();
                    int contatore_index_parola = 1;
                    while ((parola = bufferedReader.readLine()) != null) {
                        if (map.containsKey(parola) && !parola.isEmpty()) {
                            line.append(contatore_index_parola + ":" + map.get(parola).intValue() + " ");
                            map.remove(parola);
                        }
                        contatore_index_parola++;
                    }
                    if (!line.toString().isEmpty()) {
                        fileWriter.append(getCategoryID(categoria) + " " + line + "\n"); // print riga completa documento N x1:y x2:a ...
                    }
                } else {
                    System.err.println("Dictionary file not found.");
                }
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    return true;
}

This is the code that I am using now. Despite the rough exception handling, why do the files still seem to be left open? I am now running a test with File Leak Detector.

A: Maybe your code raises another exception that you are not handling. Try adding catch (Exception e) before the finally block. You can also move the BufferedReader declaration out of the try and close it in finally.
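The try-with-resources idea the edit moves to has a direct analog in most languages. A sketch in Python: each with-block guarantees its descriptor is closed even when an exception escapes, which is exactly what prevents this kind of descriptor leak.

```python
import os
import tempfile

# Sketch: the Python analog of Java's try-with-resources. Each "with" block
# guarantees the file descriptor is closed even if an exception is raised,
# which is what prevents "Too many open files" leaks over many calls.
def append_line(path, line):
    with open(path, "a", encoding="utf-8") as fh:  # closed automatically
        fh.write(line + "\n")

path = os.path.join(tempfile.mkdtemp(), "out.txt")
for i in range(3):
    append_line(path, f"line {i}")

with open(path, encoding="utf-8") as fh:
    print(fh.read().splitlines())  # ['line 0', 'line 1', 'line 2']
```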
doc_2608
lang: "ttt",
    l: function(){
        console.log(lang);
    }
}
console.log(g.l());

ReferenceError: lang is not defined

Why is lang undefined?

A: You need to use either g.lang or this.lang. this will refer to the g object, unless .call() or .apply() is used. For example, this will result in undefined:

var g = {
    lang: "ttt",
    l: function(){
        console.log(this.lang);
    }
}
console.log(g.l.call(Math));

However, this will always give the right result (if you don't reassign g):

var g = {
    lang: "ttt",
    l: function(){
        console.log(g.lang);
    }
}
console.log(g.l.call(Math));

A: Because this – unlike, say, Java – is never part of the scope chain lookup. The fix:

var g = {
    lang: "ttt",
    l: function(){
        console.log(this.lang);
    }
}
console.log(g.l());
doc_2609
When I update Anaconda through the command prompt, it displays a spinning line in place while it retrieves data. How can I mimic something like this?

EDIT: I have yaspin working, but there's an odd visual issue with the Windows cmd. I made a very basic script to show the visual. You can see the 'line', and it is spinning, but what are the other characters?
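A minimal stdlib-only sketch of such a spinner: redraw a single character in place with a carriage return. The "other characters" seen in cmd are likely Unicode spinner frames that the console font can't render (a common issue with yaspin's default spinner on older Windows consoles, though that is an assumption); sticking to plain ASCII frames avoids it.

```python
import itertools
import sys
import time

# Sketch: a minimal ASCII spinner like the one conda shows. "\r" moves the
# cursor back to column 0, so each frame overwrites the previous one.
# Plain "|/-\" frames render on any console font.
def spin(seconds, frames="|/-\\", interval=0.1):
    end = time.monotonic() + seconds
    for frame in itertools.cycle(frames):
        if time.monotonic() >= end:
            break
        sys.stdout.write("\r" + frame)  # redraw in place
        sys.stdout.flush()
        time.sleep(interval)
    sys.stdout.write("\rdone\n")

spin(0.5)
```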
doc_2610
I would really appreciate some help.

A: Typically this kind of problem would be handled by your Windows Forms application's installation package. Opinions vary, but I'd suggest the safest/most polite thing to do is to treat .NET as a prerequisite. If .NET is not present, display a message that it is required before the install will succeed, and perhaps point to a Microsoft download page like this one or this one. The risk is that you point them to an obsolete download page or that the page moves and invalidates your link.
That said, I would have expected most machines to have some version of the .NET Framework installed (by Windows Update, for example), so it's a bit surprising that you're being told it needs to be installed. I suggest you follow the instructions in How to: Determine Which .NET Framework Versions Are Installed to check one of your failing machines, to confirm that .NET is not installed (very unlikely) or to determine which version (or versions) of .NET is (are) installed.

Update 6/21/2015
From the comment below, we have evidence of two systems without .NET installed, so my "very unlikely" comment above is a bit off base!

Update 7/4/2015
I have a bad habit of forgetting that not everyone configures their Windows systems exactly the same way I configure mine. From this blog post it seems that the .NET Framework is 'only' a Recommended Update.
doc_2611
I think this is a bad idea, because it could lead to unintended consequences. It also offends the aesthete in me, because the field's not called CompanyNameAndUrl, and our company name isn't a URL. Am I right? How can I persuade him he's wrong? Where should I put a URL to get it to appear in the version information in Windows? Am I wrong?

Update: the binaries are digitally signed, so the URL's visible in there.

A: The authoritative answer and example in this case should come from Microsoft: VERSIONINFO Resource (Windows)
CompanyName: Company that produced the file—for example, "Microsoft Corporation" or "Standard Microsystems Corporation, Inc." This string is required.
Very few people will ever look at the properties of an executable file or a DLL; and out of those who do, I would guess 95% know how to Google the ProductName to find out more about it. I have checked several files from various vendors on my PC and none of them includes a URL. However, if you must, there is always the Comments field...

A: Adding to MaxVT's comments - using non-clickable full URLs is so last century. IMO it doesn't hurt, probably no one will ever see it, and from time to time you have to give your managers the feeling they are in control of something.

A: If you really want a URL associated with your module, why not attach a digital signature? That way you get the added benefit, for you and the client, of knowing the file is untampered with, and the default viewer will show the URL as a clickable link.
doc_2612
@Rule
public final TextFromStandardInputStream systemInMock = emptyStandardInputStream();

@Test
public void shouldTakeUserInput() {
    systemInMock.provideLines("add 5", "another line");
    InputOutput inputOutput = new InputOutput();
    assertEquals("add 5", inputOutput.getInput());
}
}

Here, I want to check that for the input add 5, my output is some statement printed via System.out.println() within some method. Is it possible to verify that printed output?

A: You can use the SystemOutRule to get the output written to System.out and assert against it:

public class MyTest {
    @Rule
    public final TextFromStandardInputStream systemInMock = emptyStandardInputStream();

    @Rule
    public final SystemOutRule systemOutRule = new SystemOutRule().enableLog();

    @Test
    public void shouldTakeUserInput() {
        systemInMock.provideLines("add 5", "another line");
        MyCode myCode = new MyCode();
        myCode.doSomething();
        assertEquals("expected output", systemOutRule.getLog());
    }
}
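The capture-and-assert pattern behind SystemOutRule exists in most test stacks. A sketch of the same idea in Python's standard library, with a made-up do_something function standing in for the code under test:

```python
import io
from contextlib import redirect_stdout

# Sketch: the same capture-and-assert pattern as SystemOutRule, using the
# Python stdlib. Anything printed inside the context manager lands in the
# StringIO buffer instead of the console, so a test can assert on it.
def do_something():          # hypothetical code under test
    print("expected output")

buf = io.StringIO()
with redirect_stdout(buf):
    do_something()

assert buf.getvalue() == "expected output\n"
print("captured:", repr(buf.getvalue()))
```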
doc_2613
string <- "'casual': True,'classy': False,'divey': False,'hipster': False,'intimate': False,'romantic': False,'touristy': False,'trendy': False,'upscale': False"

I'm trying to extract the Boolean values for each of the categories into separate columns. So my outcome should have 9 columns (one for every category) and the rows should contain the TRUE/FALSE values. What am I supposed to use in this case?

A: An option is to use str_extract_all to extract the word (\\w+) that follows a colon and a space (": "):

library(stringr)
as.logical(str_extract_all(string, "(?<=: )\\w+")[[1]])
#[1] TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE

If we need to parse it into a data.frame, it would be better to use fromJSON from jsonlite:

library(jsonlite)
lst1 <- fromJSON(paste0("{", gsub("'", "", gsub("\\b(\\w+)\\b", '"\\1"', string)), "}"))
data.frame(lapply(lst1, as.logical))
#  casual classy divey hipster intimate romantic touristy trendy upscale
#1   TRUE  FALSE FALSE   FALSE    FALSE    FALSE    FALSE  FALSE   FALSE

Or in base R:

as.logical(regmatches(string, gregexpr("(?<=: )\\w+", string, perl = TRUE))[[1]])
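The same extraction translates directly to other regex engines. A sketch in Python: capture each key and its True/False word, then map the strings onto real booleans.

```python
import re

# Sketch: the same extraction as the R answer -- pull each 'key': True/False
# pair from the string and convert the value words to booleans.
string = ("'casual': True,'classy': False,'divey': False,'hipster': False,"
          "'intimate': False,'romantic': False,'touristy': False,"
          "'trendy': False,'upscale': False")

flags = {key: value == "True"
         for key, value in re.findall(r"'(\w+)': (True|False)", string)}
print(flags["casual"], flags["classy"])  # True False
```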
doc_2614
I installed delayed_job as my queue adapter, and set it as the adapter in several places: config/application.rb, config/environments/{development,production}.rb, and config/initializers/active_job.rb.

Installation: I added this to my Gemfile:

gem 'delayed_job_active_record'

Then, I ran the following commands:

$ bundle install
$ rails generate delayed_job:active_record
$ rake db:migrate
$ bin/delayed_job start

In config/application.rb, config/environments/production.rb, config/environments/development.rb:

config.active_job.queue_adapter = :delayed_job

In config/initializers/active_job.rb (added when the above did not work):

ActiveJob::Base.queue_adapter = :delayed_job

I've also run an ActiveRecord migration for delayed_job, and started bin/delayed_job before running my server. That being said, any time I try:

UserMailer.welcome_email(@user).deliver_later(wait: 1.minutes)

I get the following error:

NotImplementedError (Use a queueing backend to enqueue jobs in the future. Read more at http://guides.rubyonrails.org/active_job_basics.html):
  app/controllers/user_controller.rb:25:in `create'
  config.ru:25:in `call'

I was under the impression that delayed_job is a queueing backend... am I missing something?

EDIT: I can't get sucker_punch to work either. When installing sucker_punch in the bundler, and using:

config.active_job.queue_adapter = :sucker_punch

in config/application.rb, I get the same error and stack trace.

A: If you are having this issue in your development environment even though you are using an adapter capable of asynchronous jobs like Sidekiq, make sure that Rails.application.config.active_job.queue_adapter is set to :async instead of :inline.

# config/environments/development.rb
Rails.application.config.active_job.queue_adapter = :async

A: Provided you are following all the steps listed here, I suspect you didn't start delayed_job by running:

bin/delayed_job start

Please also check that you ran:

rails generate delayed_job:active_record
rake db:migrate

A: Try this. In the controller:

@user.delay.welcome_email

In your model:

def welcome_email
  UserMailer.welcome_email(self).deliver_later(wait: 1.minutes)
end

A: Figured out what it was: I typically start my server and everything associated with it using a single shell script. In this script, I was running bin/delayed_job start in the background, and starting the server before bin/delayed_job start finished. The solution was to make sure delayed_job start finished before starting the server, by running it in the foreground in my startup script. Thanks everyone for all the help!
doc_2615
<ul id="sortable" data-bind="template: { name: 'parameterTemplate', foreach: parameters }, visible: parameters().length > 0" style="width: 100%"> </ul> My template is this: <script type="text/html" id="parameterTemplate"> <li class="ui-state-default parameterItem"> <input type="checkbox" data-bind="checked: isRequired" /> Name: <input data-bind="value: name " /> Type: <input data-bind="value: type " /> Size: <input data-bind="value: size " /> <a href="#" data-bind="click: remove">Delete</a> </li> </script> I'm using the draggable and sortable resources of jQuery to reorder the elements of the list. This means that when the users changes the order of the element, obviously ko databind is not altered, for jQuery does not know knockout exists. It so happens that I want my parameters to be saved in the same order the user configured. So my approach was to select al the li HTML elements via jQuery, getting an array ( var items = $(".parameterItem");) . How can I get , for each item in items, the databound knockout element, associated with the li HTML element? Is it possible? 
My View Model: function parameter(parameterName, parameterType, parameterSize, descriptive, defaultValue, isRequired, ownerViewModel) { this.name = ko.observable(parameterName); this.type = ko.observable(parameterType); this.size = ko.observable(parameterSize); this.label = ko.observable(parameterName); this.descriptive = ko.observable(descriptive); this.defaultValue = ko.observable(defaultValue); this.descriptive = ko.observable(descriptive); this.isRequired = ko.observable(isRequired); this.ownerViewModel = ownerViewModel; this.remove = function () { ownerViewModel.parameters.remove(this) }; } function excelLoaderViewModel() { this.parameters = ko.observableArray([]); this.newParameterName = ko.observable(); this.newParameterType = ko.observable(); this.newParameterSize = ko.observable(); this.newParameterDescriptive = ko.observable(); this.newParameterIsRequired = ko.observable(); this.newParameterDefaultValue = ko.observable(); this.systemName = ko.observable(); this.addParameter = function () { this.parameters.push( new parameter( this.newParameterName() , this.newParameterType() , this.newParameterSize() , this.newParameterDescriptive() , this.newParameterDefaultValue() , this.newParameterIsRequired() , this)); this.newParameterName(""); this.newParameterType(""); this.newParameterSize(""); this.newParameterIsRequired(false); this.newParameterDefaultValue(""); } } var myVM = new excelLoaderViewModel(); ko.applyBindings(myVM); A: Your best bet is to use a custom binding to keep your observableArray in sync with your elements as they are dragged/dropped. Here is a post that I wrote about it a while ago. 
Here is a custom binding that works with jQuery Templates: //connect items with observableArrays ko.bindingHandlers.sortableList = { init: function(element, valueAccessor) { var list = valueAccessor(); $(element).sortable({ update: function(event, ui) { //retrieve our actual data item var item = ui.item.tmplItem().data; //figure out its new position var position = ko.utils.arrayIndexOf(ui.item.parent().children(), ui.item[0]); //remove the item and add it back in the right spot if (position >= 0) { list.remove(item); list.splice(position, 0, item); } } }); } }; Here is a sample of it in use: http://jsfiddle.net/rniemeyer/vgXNX/ If you are using it with Knockout 1.3 beta without jQuery Templates (or with), then you can replace the tmplItem line with the new ko.dataFor method available in 1.3: //connect items with observableArrays ko.bindingHandlers.sortableList = { init: function(element, valueAccessor) { var list = valueAccessor(); $(element).sortable({ update: function(event, ui) { //retrieve our actual data item var item = ko.dataFor(ui.item[0]); //figure out its new position var position = ko.utils.arrayIndexOf(ui.item.parent().children(), ui.item[0]); //remove the item and add it back in the right spot if (position >= 0) { list.remove(item); list.splice(position, 0, item); } } }); } };
doc_2616
when load the application url is http://localhost:4200/#/ I want to change this to http://localhost:4200/#/carrom. For doing this I changed base url to <base href="/carrom"> then loading url is http://localhost:4200/carrom#/ How can I change this to http://localhost:4200/#/carrom A: Try providing wanted value with APP_BASE_HREF injection token in your AppModule. Something like this: ... import { APP_BASE_HREF } from '@angular/common'; @NgModule({ declarations: [ AppComponent, .... ], imports: [ BrowserModule, ... ], providers: [ .... {provide: APP_BASE_HREF, useValue: '#/carrom'} ], bootstrap: [AppComponent] }) export class AppModule { }
doc_2617
So my questions are: 1. Do you support REST APIs to download data? 2. Can I download data in csv format? 3. Can I download actual responses with questions and not just ids? Which APIs will return the survey, questions asked, and user responses? 4. Can you send me an example format of how data will look when downloaded using the APIs? 5. Finally, with a $26/month subscription for one month, will I get API support? Is API support available for the free subscription? Thanks!! A: SurveyMonkey does have a REST API You can get all responses (just ids) doing: GET /surveys/{survey_id}/responses See: https://developer.surveymonkey.com/api/v3/#surveys-id-responses You can get the details and all answers to questions for a specific response by ID doing: GET /responses/{response_id}/details See: https://developer.surveymonkey.com/api/v3/#responses-id Or you can do this all at once by doing GET /surveys/{id}/responses/bulk See: https://developer.surveymonkey.com/api/v3/#surveys-id-responses-bulk A: * *Answered by akand074. *We only support JSON at this time. *You'll have to make a separate GET to /v3/surveys/{survey_id}/details to get the survey details, and then map it to the response data. *The format, along with response data examples, can be found here. *You'll have to contact [email protected] to find out.
doc_2618
Site = "filedsn=" & Server.MapPath("/" & WebName & "/reffiles/accessdsn.dsn") & ";DBQ=" & Server.MapPath("/" & WebName & "/reffiles/MyDatabase.mdb") & ";DefaultDir=" & Server.MapPath("/" & WebName & "/") & ";" set Database = server.createobject("ADODB.Connection") Database.open(Site) strSQLMax = "SELECT Format(Max(GetDataRange.Date_Reading),'dd/mm/yy') AS MaxOfDate_Reading, Format(Max(GetDataRange.Speed), '#,###') AS MaxOfSpeed FROM GetDataRange;" set WeekRecMax = Database.Execute(strSQLMax) response.write("<P>Date of Maximum Speed:" & weekRecMax.fields(0).value & " " & weekRecMax.fields(1).value & " f/hr </P>") When I test code here (I live in France, but the local server is configured with Regional and Language settings for the USA), the results for the above code are: Date of Maximum Speed: 14 décembre 2016 - 16:03 1 025 f/hr When I publish my code to our production server in the USA (also Server 2003 with Office 2003) the same page gives this result: Date of Maximum Speed: 14 December 2016 - 16:03 1,025 f/hr Of course, the result "1 025" causes other parts of my code to throw an error as it cannot be used in calculations. The space in the "1 025" is actually a "non-breaking space", hex A0. So my question is: why is this happening and what can I do to this local server to produce output like our USA based production server? Note, if I change the '#,###' to '####' then the calculations proceed without issues. Thus, this is not a "show stopper", but it makes me wonder what other surprises may be lurking just around the corner. Thanks A: Move the formatting of DateTime and Numeric values from the database to asp.net. 
strSQLMax = "SELECT Max(GetDataRange.Date_Reading) AS MaxOfDate_Reading, Max(GetDataRange.Speed) AS MaxOfSpeed FROM GetDataRange;" response.write("<P>Date of Maximum Speed:" & Convert.ToDateTime(weekRecMax.fields(0).value).ToString("dd MMMM yyyy") & " " & Convert.ToDecimal(weekRecMax.fields(1).value).ToString("N0") & " f/hr </P>") The formatting of the values will now be based on the localization, so the code won't break when you use it in the USA. See more on formatting numbers here.
doc_2619
Here is what I have tried so far, although it gives me a syntax error. SELECT * FROM `queue_items` WHERE SELECT TIMEDIFF(processed_at, completed_at) > 120; Edit: I got rid of the syntax error by wrapping the second select in parentheses, although it still doesn't return the desired result. How can I return all rows in the queue_items table, that took longer than two minutes to process? A: All you need to do is remove that second SELECT keyword (which shouldn't be there*), and switch from an integer to a time for the last part. Then, switch the arguments to TIMEDIFF, assuming processed_at is expected to come before completed_at in time. This function does a − b, not b − a. SELECT * FROM `queue_items` WHERE TIMEDIFF(`completed_at`, `processed_at`) > TIME('00:02:00'); You could also rewrite the condition a little bit to be more expressive: SELECT * FROM `queue_items` WHERE `completed_at` > TIMESTAMPADD(SECOND, 120, `processed_at`); I haven't gone into any detail about which would be more efficient, as you're going to need a discussion on indexes and whatnot to make any substantial difference there anyway. * Wrapping it in parentheses turned it into a sub-query, which is not desirable or needed here. A: I would simply use a date/time comparison: SELECT qi.* FROM queue_items qi WHERE completed_at > processed_at + interval 2 minute;
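As a side note, if you ever want to sanity-check the two-minute cutoff outside the database, the same condition is easy to mirror in a few lines of Python — this is only an illustration, with the column names taken from the question:

```python
from datetime import datetime, timedelta

def took_too_long(processed_at, completed_at, limit=timedelta(minutes=2)):
    # Mirrors the SQL condition: completed_at > processed_at + INTERVAL 2 MINUTE
    return completed_at > processed_at + limit

rows = [
    ("fast", datetime(2020, 1, 1, 12, 0, 0), datetime(2020, 1, 1, 12, 1, 30)),
    ("slow", datetime(2020, 1, 1, 12, 0, 0), datetime(2020, 1, 1, 12, 3, 0)),
]
slow_rows = [name for name, started, finished in rows if took_too_long(started, finished)]
print(slow_rows)  # ['slow']
```

Note that, like the SQL version, a row that took exactly two minutes is excluded, because the comparison is strict.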
doc_2620
A: Yes, you can simply do it by animating the width from 0 to 100% in @keyframes (as in the example in your question) to reveal the parts of the text. Here is a small demo to show how it works. Click on the zero: stripe.onclick = function() { var sec = new Date().getSeconds() % 10; stripe.classList.add('animate'); digit.classList.add('animate'); console.log(); }; #digit { width: 20px; overflow: hidden; font: 32px "Courier New", monospace; cursor: pointer; transition: 2s linear width; } #stripe { width: 20px; transition: 2s linear width; } #stripe.animate { width: 200px; transition: 2s linear width; } #digit.animate { width: 200px; transition: 2s linear width; } <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta http-equiv="X-UA-Compatible" content="ie=edge"> <title>Title</title> <style> </style> </head> <body> <div id="digit"><span id="stripe">0123456789</span></div> <script> </script> </body> </html> P.S. The JS is only for the demo; with @keyframes you would not need it.
doc_2621
COMMAND : keytool -import -file "E:\postgrescert\server.crt" -keypass changeit -keystore "C:\Java\JDK\jre\lib\security\cacerts" -alias pgssslninet ERROR: keytool error: java.lang.Exception: Input not an X.509 certificate The server.crt is having below content: Certificate: Data: Version: 3 (0x2) Serial Number: a1:ea:8c:61:61:0a:7d:69 Signature Algorithm: sha1WithRSAEncryption Issuer: C=US, ST=CA, L=fg, O=XYZ, OU=IT, CN=Common Name/[email protected] Validity Not Before: Jun 14 23:59:25 2013 GMT Not After : Jul 14 23:59:25 2013 GMT Subject: C=US, ST=CA, L=fg, O=XYZ, OU=IT, CN=Common Name/[email protected] Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:de:7c:dd:6e:5f:98:85:52:b4:13:45:2d:69:26: 61:6c:d7:ad:d6:12:27:bf:e1:07:53:a4:76:27:29: ca:3d:82:e5:63:8c:9e:a5:b0:24:f6:77:86:92:ab: 42:e5:26:8a:4a:ea:ea:4a:65:20:a1:3b:05:c7:e0: 31:8e:4c:6e:e5:9e:e4:9c:de:05:02:b3:59:70:00: df:fb:b9:62:e1:5b:8e:1b:29:2d:7c:41:86:41:a9: 9e:24:f8:65:54:8c:cf:44:c4:7b:fa:12:b4:84:d1: d7:d7:2f:14:32:f9:2e:7b:c2:d8:0b:35:c9:f5:8b: 64:ed:cf:84:6e:bf:97:d0:44:7b:6b:67:c6:5b:6f: 92:5d:f6:d7:01:b6:ba:96:37:c8:3b:f8:be:01:b5: 02:d1:6b:21:67:83:c8:fd:37:bd:70:e5:c1:e4:81: b0:42:a9:04:b1:3d:33:4c:43:2b:33:cc:50:65:1e: c0:15:8d:e3:5f:b0:9c:d9:04:09:18:e7:8f:80:56: 6f:45:1d:0a:c2:2d:02:7e:67:2a:8a:1b:73:4a:db: 80:e0:52:d6:33:23:c7:aa:48:b0:5c:ad:7f:8c:96: 7c:d4:84:61:4d:ae:d3:9c:ef:59:c1:bd:71:83:c3: 5e:a4:04:84:8f:cd:76:82:3a:86:43:ab:c1:f4:e9: 02:d5 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Key Identifier: C1:4F:FA:2E:8F:F3:36:FE:AE:9B:12:73:C7:08:C9:59:96:53:71:A7 X509v3 Authority Key Identifier: keyid:C1:4F:FA:2E:8F:F3:36:FE:AE:9B:12:73:C7:08:C9:59:96:53:71:A7 X509v3 Basic Constraints: CA:TRUE Signature Algorithm: sha1WithRSAEncryption 6b:2f:5f:33:f8:bb:55:66:c3:48:c9:ae:64:c1:89:5b:e1:54: 9a:bc:ae:34:87:7e:bc:e7:30:26:9e:65:58:42:79:19:e2:ee: 93:2a:c7:2d:a9:45:b4:1c:7b:5f:5a:ec:12:e3:76:38:c5:44: 
aa:7f:bd:60:b6:a6:83:90:68:9d:8f:1c:7a:69:4a:58:a8:55: 5a:36:9e:e3:69:76:50:0e:4c:30:54:11:4c:de:10:91:6f:aa: 49:34:19:1c:96:cb:8a:6c:fd:df:19:ed:e1:84:2b:05:12:68: e6:af:c5:59:c2:61:ca:10:2c:8e:cc:0a:34:7e:08:e5:22:ac: 01:fd:fc:4d:16:4f:66:29:58:ac:8e:25:79:3d:de:b6:ef:55: 6e:26:c5:75:9d:6d:57:4e:02:89:b8:c1:b8:47:b7:09:9b:07: cf:5b:a3:bc:a3:6b:ef:a1:4c:95:a0:be:0f:d4:63:fe:35:c6: c6:42:10:0b:28:13:02:a3:6e:b3:bf:ae:57:a8:bd:a1:25:6a: 2d:cd:c7:20:64:4b:2e:f2:b2:c9:5c:85:cf:6f:de:39:86:84: 94:d3:01:c5:25:b7:ec:65:1b:5f:93:ec:9d:cc:81:fa:c7:34: fc:e4:e2:5c:3f:4b:cc:83:bb:f0:67:88:1f:f6:a1:3b:9e:00: 7b:ba:b2:79 -----BEGIN CERTIFICATE----- MIID7zCCAtegAwIBAgIJAKHqjGFhCn1pMA0GCSqGSIb3DQEBBQUAMIGNMQswCQYD VQQGEwJVUzELMAkGA1UECAwCQ0ExEDAOBgNVBAcMB0ZyZW1vbnQxEjAQBgNVBAoM CURhdGFndWlzZTELMAkGA1UECwwCSVQxFDASBgNVBAMMC0NvbW1vbiBOYW1lMSgw JgYJKoZIhvcNAQkBFhlzcmluaS5zdWJyYUBkYXRhZ3Vpc2UuY29tMB4XDTEzMDYx NDIzNTkyNVoXDTEzMDcxNDIzNTkyNVowgY0xCzAJBgNVBAYTAlVTMQswCQYDVQQI DAJDQTEQMA4GA1UEBwwHRnJlbW9udDESMBAGA1UECgwJRGF0YWd1aXNlMQswCQYD VQQLDAJJVDEUMBIGA1UEAwwLQ29tbW9uIE5hbWUxKDAmBgkqhkiG9w0BCQEWGXNy aW5pLnN1YnJhQGRhdGFndWlzZS5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw ggEKAoIBAQDefN1uX5iFUrQTRS1pJmFs163WEie/4QdTpHYnKco9guVjjJ6lsCT2 d4aSq0LlJopK6upKZSChOwXH4DGOTG7lnuSc3gUCs1lwAN/7uWLhW44bKS18QYZB qZ4k+GVUjM9ExHv6ErSE0dfXLxQy+S57wtgLNcn1i2Ttz4Ruv5fQRHtrZ8Zbb5Jd 9tcBtrqWN8g7+L4BtQLRayFng8j9N71w5cHkgbBCqQSxPTNMQyszzFBlHsAVjeNf sJzZBAkY54+AVm9FHQrCLQJ+ZyqKG3NK24DgUtYzI8eqSLBcrX+MlnzUhGFNrtOc 71nBvXGDw16kBISPzXaCOoZDq8H06QLVAgMBAAGjUDBOMB0GA1UdDgQWBBTBT/ou j/M2/q6bEnPHCMlZllNxpzAfBgNVHSMEGDAWgBTBT/ouj/M2/q6bEnPHCMlZllNx pzAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4IBAQBrL18z+LtVZsNIya5k wYlb4VSavK40h3685zAmnmVYQnkZ4u6TKsctqUW0HHtfWuwS43Y4xUSqf71gtqaD kGidjxx6aUpYqFVaNp7jaXZQDkwwVBFM3hCRb6pJNBkclsuKbP3fGe3hhCsFEmjm r8VZwmHKECyOzAo0fgjlIqwB/fxNFk9mKVisjiV5Pd6271VuJsV1nW1XTgKJuMG4 R7cJmwfPW6O8o2vvoUyVoL4P1GP+NcbGQhALKBMCo26zv65XqL2hJWotzccgZEsu 
8rLJXIXPb945hoSU0wHFJbfsZRtfk+ydzIH6xzT85OJcP0vMg7vwZ4gf9qE7ngB7 urJ5 -----END CERTIFICATE----- Can anyone help me to locate the exact issue behind this error. PS : When i removed every thing above -----BEGIN CERTIFICATE-----, it get successfully imported. Does the information above -----BEGIN CERTIFICATE----- is really required. Please help. Regards, Arun A: Can anyone help me to locate the exact issue behind this error. Keytool can handle two formats. One is ASN.1/DER encoding, which looks like binary data under a hex editor. The other is RFC 1421, Certificate Encoding Standard, which is a Base64 encoding of the certificate. See the docs on the Keytool at the Solaris site. When i removed every thing above -----BEGIN CERTIFICATE-----, it get successfully imported. Does the information above -----BEGIN CERTIFICATE----- is really required. The format you describe above is Internet RFC 1421 Certificate Encoding Standard. Keytool should be able to handle the format. The manual clearly states that format is allowed: Certificates are often stored using the printable encoding format defined by the Internet RFC 1421 standard, instead of their binary encoding. This certificate format, also known as "Base 64 encoding", facilitates exporting certificates to other applications by email or through some other mechanism. ... Certificates read by the -import and -printcert commands can be in either this format or binary encoded. In the above, the "this format" is RFC 1421. The "binary encoded" is ASN.1/DER. With that said, the certificate looks like a client certificate since it has a PKCS#9 email address in the Common Name, and it does not have a DNS name (like example.com). Yet is also has a Basic Constraint of CA=TRUE. Placing email addresses and DNS names in the Common Name field is deprecated by both the IETF and CA/B Forums. Those names should be placed in Subject Alternate Name field. 
Use the Common Name for a friendly name or a display name like "John Doe" or "Datametrics". Java also seems to follow the IETF standards closer than most others (others meaning tools and libraries; and not standards). But the RFCs tend to run fast and loose, and I don't recall the PKCS#9 email address/CA=TRUE flag being prohibited. That issue may affect its import-ability. Bruno or EJP would probably know for certain. A: Same problem here. I just added an empty line at the end and keytool was happy.
doc_2622
I tried on my own but could not figure out a solution. $lang = array(); $lang = array_merge($lang,array( "NA" => "Not applicable", "FA" => "Father", "MO" => "Mother", "IND" => "Independent", )); Help would be highly appreciated. Thanks A: I'm not sure what you need, but my guess is this: $lang = [ "NA" => "Not applicable", "FA" => "Father", "MO" => "Mother", "IND" => "Independent" ]; $lang = array_merge($lang, array( "BA" => "Bachelor of Arts", "MA" => "Masters Degree", )); //Then you can do echo $lang['NA']; // will output => Not applicable echo $lang['BA']; // will output => Bachelor of Arts A: If I understand your question correctly and you want to echo all " ... complete strings instead of the abbreviations ...", then array_values() is one possible approach: <?php $lang = array( "NA" => "Not applicable", "FA" => "Father", "MO" => "Mother", "IND" => "Independent" ); echo "Array with values: "; print_r(array_values($lang)); echo "<br>"; echo "Text with values: "; echo implode(", ", array_values($lang)); echo "<br>"; ?> If you want to get the value of a specific item from an array, simply get this value by index: <?php $lang = array( "NA" => "Not applicable", "FA" => "Father", "MO" => "Mother", "IND" => "Independent" ); echo $lang["NA"]; ?>
doc_2623
Doing this has, as you might guess, made my programming much faster, and so I want to find a way to enforce these rules. For example, lets say I have a method that makes changes to the state of an object, and returns a value. If the method is called outside of the class, I don't ever want to see it resolve inside parameter parentheses, like this: somefunction(param1, param2, object.change_and_return()); Instead, I want it to be done like this: int relevant_variable_name = object.change_and_return(); somefunction(param1, param2, relevant_variable_name); Another example, is I want to create a base class that includes certain print methods, and I want all classes that are user defined to be derived from that base class, much in the way java has done so. Within my objects, is there a way I can force myself (and anyone else) to adhere to these rules? Ie. if you try to run code that breaks the rules, it will terminate and return the custom error report. Also, if you write code that breaks the rules, the IDE (I use eclipse) will recognize it as an error, underline and call the appropriate javadoc? A: For the check and underline violations part: You can use PMD, it is a static code analyzer. It has a default ruleset, and you can write custom rules matching what you need. However your controls seem to be quite complex to express in "PMD language". PMD is available in Eclipse Marketplace. For the crash if not conform part There see no easy way to do it. Hard/complex ways could be: * *Write a rule within PMD, run the analysis at compile time, parse the report (still at compile time) and return an error if your rule is violated. *Write a Java Agent doing the rule check and make it crash the VM if the rule is violated (not sure it is really feasable, agents are meant for instrumentation). 
*Use reflection anywhere in your code to load classes, analyze each loaded class against your rules, and crash the VM if a rule is violated (seriously, don't do this: the code would be ugly and the rule easily bypassable).
doc_2624
But when I read the excel file with read_excel() and display the dataframe, those two columns are printed in scientific format with exponential. How can I get rid of this format? Thanks Output in Pandas A: The way scientific notation is applied is controlled via pandas' display options: pd.set_option('display.float_format', '{:.2f}'.format) df = pd.DataFrame({'Traded Value':[67867869890077.96,78973434444543.44], 'Deals':[789797, 789878]}) print(df) Traded Value Deals 0 67867869890077.96 789797 1 78973434444543.44 789878 If this is simply for presentational purposes, you may convert your data to strings while formatting them on a column-by-column basis: df = pd.DataFrame({'Traded Value':[67867869890077.96,78973434444543.44], 'Deals':[789797, 789878]}) df Deals Traded Value 0 789797 6.786787e+13 1 789878 7.897343e+13 df['Deals'] = df['Deals'].apply(lambda x: '{:d}'.format(x)) df['Traded Value'] = df['Traded Value'].apply(lambda x: '{:.2f}'.format(x)) df Deals Traded Value 0 789797 67867869890077.96 1 789878 78973434444543.44 An alternative, more straightforward method would be to put the following line at the top of your code that would format floats only: pd.options.display.float_format = '{:.2f}'.format A: try '{:.0f}' with Sergey's answer; it worked for me.
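Both answers ultimately hand each float to an ordinary Python format string, so you can verify the exact output you will get without pandas at all — a minimal sketch with a made-up value:

```python
v = 12345678901234.5

as_fixed = '{:.2f}'.format(v)   # what display.float_format = '{:.2f}'.format produces
as_sci = '{:.6e}'.format(v)     # the scientific style you are seeing by default
print(as_fixed)  # 12345678901234.50
print(as_sci)    # 1.234568e+13
```

The same format strings work unchanged whether they are applied per-column with apply() or globally through the display option.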
doc_2625
Example: File tree: c:\root_dir: dull_file.txt subdir relevant_file.txt c:\root_dir\subdir: really_relevant_file.txt also_relevant_file.dat Input: C:\> make_report.bat c:\root_dir *relevant* Output when writing C:\root_dir> type report.txt: c:\root_dir\relevant_file.txt <file contents here> c:\root_dir\subdir\really_relevant_file.txt <file contents here> c:\root_dir\subdir\also_relevant_file.dat <file contents here> So far I've managed to list all the files recursively: dir /s /b /a-d *.txt > file_names.txt Next, I would need for each line in file_names.txt, to write its path into report.txt and to type filepath > report.txt. How can I do that? A: You could use for /r "path\to\directory" %%I in (*) to loop through all files and directories recursively starting at path\to\directory. Echo the fully qualified path, and type the file contents. Redirect all output into report.txt. @echo off >report.txt ( ( for /r "%~1" %%I in (%2) do ( echo %%~fI type "%%~fI" ) ) ) In a console window, type for /? for more info on for /r.
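If a cross-platform alternative to the batch file is ever useful, the same recursive walk-and-dump can be sketched in Python — the file names below follow the question's example tree, and this is an illustration rather than a drop-in replacement:

```python
import fnmatch
import os
import tempfile

def make_report(root, pattern):
    # Recursively list files matching pattern; emit each path followed by its
    # contents, like the desired report.txt.
    lines = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            if fnmatch.fnmatch(name, pattern):
                path = os.path.join(dirpath, name)
                with open(path) as fh:
                    lines.append(path)
                    lines.append(fh.read())
    return "\n".join(lines)

# Build a throwaway tree matching the question's example
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "subdir"))
for rel, text in [("dull_file.txt", "dull"),
                  ("relevant_file.txt", "hello"),
                  (os.path.join("subdir", "really_relevant_file.txt"), "world")]:
    with open(os.path.join(root, rel), "w") as fh:
        fh.write(text)

report = make_report(root, "*relevant*")
print(report)
```

Like `dir /s`, os.walk visits the root directory first and then recurses into subdirectories.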
doc_2626
class FruitWrapper{ List<String> fruits; } class HelloController { @PostMapping("/") public String hello(@RequestBody FruitWrapper fruits) { System.out.println(fruits); return "he"; } } I am sending request from react like this: axios.post(`${COURSE_API_URL}`,{"fruits":["apple","orange"]}) .then(resp => { console.log(resp.data) }) Error: Access to XMLHttpRequest at 'http://localhost:8081/' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: It does not have HTTP ok status. A: It's happening due to the CORS origin policy. You need to enable CORS on your server side. You can set the allowed origins for a RESTful web service by using the @CrossOrigin annotation on the controller method. This @CrossOrigin annotation applies to a specific REST API, not to the entire application. @RequestMapping(value = "/products") @CrossOrigin(origins = "http://localhost:8080") public ResponseEntity<Object> getProduct() { return null; } Global CORS Configuration @Bean public WebMvcConfigurer corsConfigurer() { return new WebMvcConfigurerAdapter() { @Override public void addCorsMappings(CorsRegistry registry) { registry.addMapping("/products").allowedOrigins("http://localhost:9000"); } }; } source: https://www.tutorialspoint.com/spring_boot/spring_boot_cors_support.htm
doc_2627
#include <iostream> using namespace std; // So the program can see cout and endl class Etradehouse { private: string cnic,name,fname, dob,qua, des,join_date , number , address; int sal; public: void getData(){ cout<<"\nPlease enter National identity Card number : \n"; cin >>cnic ; cout<<"Please enter name: \n"; cin >> name; cout<<"Please enter father name : \n"; cin >> fname; cout<<"Please enter Date of birth : \n"; cin >> dob; cout<<"Please enter qualification : \n"; cin >> qua; cout<<"Please enter designation : \n"; cin >> des; } }; // Class ends here int main() { Etradehouse obj; obj.getData(); } A: It is not skipping, it just stops reading after encountering a space. Use std::getline(std::cin, name);
doc_2628
The contour interior was correct. What should we do to move the stroke to the outside? Example: A: Is there a reason you have this tagged as C#? Or would a front-end solution like this work? body { background-color: red; } .outlined { font-size: 75px; font-weight: bold; font-family: 'Arial'; color: white; text-shadow: -2px -2px 0 #000, 2px -2px 0 #000, -2px 2px 0 #000, 2px 2px 0 #000; } <p class="outlined">Example</p>
doc_2629
Our users are typically "once in a year" users, so this means you can never be sure which version of the database their app is running on. Now in my new version of the database I need to do some custom migration. The method I use to do this is described in this tutorial: http://9elements.com/io/index.php/customizing-core-data-migrations/ To summarize: I have to make Custom Mapping Models so that I can write my own migration policies for some fields. Now when I create a Custom Mapping Model, I have to select a Source "xcdatamodel" and a Destination "xcdatamodel" (where "destination" is te new version of my database). My question is, if I want to do this custom migration from all possible versions, do I need to create multiple Custom Mapping Models, all with a different source, or is there a smarter way to do this? Or is CoreData smart enough to recognize this? A: The short answer is yes; you need to test every migration from every source model to your current destination model. If that migration requires a custom mapping then you will need to have a mapping for that pair. Core Data does not understand versions; it only understands source and destination. If there is not a way to get from A to B then it will fail. If it can migrate from A to B automatically and you have the option turned on, then it will. Otherwise a heavy (manual) migration is required. Keep in mind that heavy migrations are VERY labor intensive and I strictly recommend avoiding them. I have found it is far more efficient to export (for example to JSON) and import the data back in then it is to do a heavy migration. A: It is enough to have a consistent sequential series of migration models up to the current version. Core Data is "smart" enough to execute the migrations you tell it to migrate in the given order.
doc_2630
number coordinates 101138 0.420335 -.238945 .1446484 101139 .4134844 -0.2437 6.7484e-2 101140 .4140046 -.243681 7.3344e-2 I need to read the text file and find a specific number in the first column and plot only its coordinates. This is my code in which I try to find the coordinates for number "101138" but something is not working because there is no match found. set Output [open "Output1.txt" w] set FileInput [open "Input.txt" r] set filecontent [read $FileInput] set inputList [split $filecontent "\n"] set Text [lsearch -all -inline $inputList "101138"] foreach elem $Text { puts $Output "[lindex $elem 1] [lindes $elem 2] [lindex $elem 3]" } A: You are searching for a list element that exactly matches your given value "101138". However your list is constructed from lines which have multiple whitespace delimited columns. You need to amend your search to match this value in the correct column. One method would be to split each line again and perform an equals match on the correct column. Another might be to use a glob or regexp expression that actually matches the inputs. ie: % set lst {"123 abc def" "456 efg ijk" "789 zxc cvb"} "123 abc def" "456 efg ijk" "789 zxc cvb" % lsearch -all -inline $lst "456*" {456 efg ijk} % lsearch -all -inline -regexp $lst "^456" {456 efg ijk} The second line does a standard (glob) match looking for a list element beginning with 456 followed by anything. The last line searches for a list element that begins with "456" using regular expression matching.
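The underlying fix — match the first column of each line instead of the whole line — is language-independent. For comparison, here is the same column-wise lookup sketched in Python, using the sample rows from the question:

```python
def find_coordinates(lines, number):
    # Return the coordinate columns of every line whose FIRST field equals `number`,
    # instead of testing the whole line for an exact match.
    hits = []
    for line in lines:
        fields = line.split()
        if fields and fields[0] == number:
            hits.append(fields[1:4])
    return hits

sample = [
    "101138 0.420335 -.238945 .1446484",
    "101139 .4134844 -0.2437 6.7484e-2",
    "101140 .4140046 -.243681 7.3344e-2",
]
print(find_coordinates(sample, "101138"))  # [['0.420335', '-.238945', '.1446484']]
```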
doc_2631
The Magento ver. 1.7.0.2 website has 1025m set in php.ini. MageWorx Admin v1.1.1 Advanced Product Options v2.9.13 Extended Sitemap v3.1.2 Cache management settings are all enabled: Layouts Blocks HTML output Translations Collections Data EAV types and attributes Web Services Configuration Catalog Permissions cache TM Full Page Cache Database is running on MySQL 5.6.28 with a size of 1.99 GB, with over 19k products. I hope you have enough information. Now the question: what other options do I have to speed up the backend CMS system? It takes over a few minutes to add a new product. This specifically happens on the screen New Attribute / Manage Attributes / Attributes / Catalog / Magento Admin. It takes about 5 min when I press save. http://imgur.com/MMxVOct
doc_2632
The business case is: sometimes I only want to execute the action defined inside the ArrowButton of the DropdownButton, but not the action defined for the DropdownButton click itself. The question may be silly, but I want to know whether it is possible or not.
doc_2633
Till now I have tried using rails g scaffold_controller product name:string price:integer and after this I added this to my routes file namespace :api do resources :products end Now when I go to the link api/products, I get this error uninitialized constant Api::Product on the index action def index @api_products = Api::Product.all end After this I removed the Api:: from my controller index, new and create action. After doing this my index url (/api/products) was working fine but now when I try to create a new product (/api/products/new) I get the following error undefined method `products_path' This is the code for my model file (location is models/) class Product < ActiveRecord::Base end Can anyone please help in implementing this correctly? A: You should move product.rb to app/models/api and change the class name to Api::Product #app/models/api/product.rb class Api::Product < ActiveRecord::Base self.table_name = "products" end
doc_2634
class Program { static void Main() { using(var shopContext = new ShopContext()) { var customer = shopContext.Customers.Find(7); customer.City = "Marion"; customer.State = "Indiana"; shopContext.SaveChanges(); } } } public class ShopContext : DbContext { public DbSet<Customer> Customers { get; set; } } public class Customer { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public string City { get; set; } public string State { get; set; } } Thank you A: When you load the entity from the context it keeps an additional data structure - let's call it entry. The entry contains two set of values - original values and current values. When you execute the SaveChanges operation EF goes through your customer entities and updates current values in the entry so that they match with the real state of your entity - this operation is called detecting changes. During SQL command generation EF will compare current and original values and build an SQL update statement to modify changed values in the database. This operation is called snapshot change tracking - EF keeps a snap shot in the entry. There is an alternative called dynamic change tracking which will modify the current value in the entry at the same time you assign the value to your entity's property. Dynamic change tracking has specific requirements (like all of your properties in the entity must be virtual) because it must wrap your class to a dynamic proxy at runtime. This used to be the preferred way but due to some performance issues in complex scenarios, snapshot change tracking is currently supposed to be used as default.
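This is not EF's actual implementation, but the snapshot mechanism described above is easy to sketch: keep a copy of the original values in the entry, then diff them against the current values at save time. The class and property names below are made up for illustration:

```python
import copy

class Entry:
    # Minimal sketch of snapshot change tracking: capture original values
    # when the entity is loaded, diff against current values at SaveChanges.
    def __init__(self, entity):
        self.entity = entity
        self.original = copy.deepcopy(vars(entity))

    def modified_values(self):
        current = vars(self.entity)
        return {k: v for k, v in current.items() if self.original.get(k) != v}

class Customer:
    def __init__(self, id, city, state):
        self.id, self.city, self.state = id, city, state

customer = Customer(7, "Springfield", "Illinois")
entry = Entry(customer)          # snapshot taken on load
customer.city = "Marion"
customer.state = "Indiana"
print(entry.modified_values())   # only the changed columns end up in the UPDATE
```

Dynamic change tracking would instead intercept each property assignment (via a proxy) so no snapshot comparison is needed at save time.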
doc_2635
Now the trouble. Occasionally, as a completely separate process so that data is still being gathered, I would like to perform a time consuming calculation involving the data (order minutes). This involves reading the same file I'm writing. How do people do this? Of course reading a file that you're currently writing should be challenging, but it seems that it must have come up enough in the past that people have considered some sort of slick solution---or at least a natural work-around. Partial solutions: * It seems that HDF5-1.10.0 has a capability SWMR - Single Write, Multiple Read. This seems like exactly what I want. I can't find a python wrapper for this recent version, or if it exists I can't get Python to talk to the right version of hdf5. Any tips here would be welcomed. I'm using Conda package manager. * I could imagine writing to a buffer, which is occasionally flushed and added to the large database. How do I ensure that I'm not missing data going by while doing this? This also seems like it might be computationally expensive, but perhaps there's no getting around that. * Collect less data. What's the fun in that? A: I suggest you take a look at adding Apache Kafka to your pipeline, it can act as a data buffer and help you separate different tasks done on the data you collect. pipeline example: raw data ===> kafka topic (raw_data) ===> small processing ===> kafka topic (light_processing) ===> a process reads from the light_processing topic and writes to db or file At the same time you can read with another process the same data from the light_processing topic or any other topic and do your heavy processing and so on. If the light-processing and heavy-processing consumers read the topic with different groupIds, the data is delivered to both and each process gets the full stream. Hope it helped.
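The buffered-write idea from the question's second partial solution can also be made concrete. This is a deliberately simplified, single-threaded sketch — the list sink stands in for the HDF5 file or a Kafka producer, and locking is omitted; no sample is missed as long as every sample goes through add() and a final flush() picks up the tail:

```python
class BufferedWriter:
    # Accumulate samples in memory and flush them to the long-term store in
    # batches, so a reader only ever sees complete batches.
    def __init__(self, flush_every, sink):
        self.flush_every = flush_every
        self.sink = sink            # stands in for the HDF5 file / Kafka producer
        self.buffer = []
        self.total_written = 0

    def add(self, sample):
        self.buffer.append(sample)
        if len(self.buffer) >= self.flush_every:
            self.flush()

    def flush(self):
        self.sink.extend(self.buffer)
        self.total_written += len(self.buffer)
        self.buffer.clear()

store = []
w = BufferedWriter(flush_every=3, sink=store)
for i in range(7):
    w.add(i)
print(store, w.buffer)  # [0, 1, 2, 3, 4, 5] [6]  (two full batches flushed, one sample still buffered)
```

The batch size trades memory against how stale the on-disk copy is allowed to become for the reading process.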
doc_2636
private ActionListener song(final JButton button) { return new ActionListener(){ public void actionPerformed(ActionEvent event) { addSongGUI addSong = new addSongGUI(); //the JFrame that opens //once the user presses the "add song" button listOfSongs.add(addSong.musicFile); //the addSongGUI has a musicFile variable that I want to read and get information from String songName = addSong.musicFile.getSongName(); //... and do more stuff } }; } When this runs, "String songName = addSong.musicFile.getSongName();" gives me a null pointer exception, because it tries to read the musicFile from the addSongGUI right away, before the user can pick a song to set the musicFile. So, how can I wait until the user picks a song, closes the window, and then have this line of code read (what can I do to get rid of this null pointer exception)? Thanks. A: As noted, the correct and easy solution is not to display a JFrame when you want a modal dialog -- use a modal JDialog instead: private ActionListener song(final JButton button) { return new ActionListener(){ public void actionPerformed(ActionEvent event) { // AddSongDialog is a modal JDialog AddSongDialog addSong = new AddSongDialog(mainJFrame); addSong.setVisible(true); // show it -- this pauses flow of code here String songName = addSong.musicFile.getSongName(); //... and do more stuff } }; } Again, addSongDialog is a modal JDialog, which is why you would need to pass in the application's main JFrame into it, since the JFrame (or parent JDialog) will be needed when calling the JDailog's super constructor in your constructor. An alternative and far weaker solution is to use a JFrame and add a WindowListener to it, but why do that when the JDialog solution works so easily and simply?
doc_2637
EXEC mySproc NCHAR(0xA5) I get Incorrect syntax near '0xa5'. yet, I can do this DECLARE @foo NCHAR SET @foo = NCHAR(0xA5) EXEC mySproc @foo and even this SELECT NCHAR(0xA5) It is interesting how SQL server chooses to evaluate expressions. Any thoughts? A: Because it violates an T-SQL stored procedure call syntax which states: Execute a stored procedure or function [ { EXEC | EXECUTE } ] { [ @return_status = ] { module_name [ ;number ] | @module_name_var } [ [ @parameter = ] { value | @variable [ OUTPUT ] | [ DEFAULT ] } ] [ ,...n ] [ WITH RECOMPILE ] } [;] where value Is the value of the parameter to pass to the module or pass-through command. If parameter names are not specified, parameter values must be supplied in the order defined in the module. When executing pass-through commands against linked servers, the order of the parameter values depends on the OLE DB provider of the linked server. Most OLE DB providers bind values to parameters from left to right. If the value of a parameter is an object name, character string, or qualified by a database name or schema name, the whole name must be enclosed in single quotation marks. If the value of a parameter is a keyword, the keyword must be enclosed in double quotation marks. If a default is defined in the module, a user can execute the module without specifying a parameter. The default can also be NULL. Generally, the module definition specifies the action that should be taken if a parameter value is NULL. thus you should first perform all the calculations, place the results into variables and then pass the variables into SP call A: You can't pass function calls as arguments into a stored procedure. You have to evaluate them first (as you did in your second example), and then pass them in. You can see that this will also fail: exec myProc len('abc')
doc_2638
At this moment I run a second search if the first one returns nothing. Is it possible to combine the 2 paths into 1 so have to search only once? Thx. using (var de = new DirectoryEntry()) { de.Path = "LDAP://OU=ou1,OU=Users,OU=BE,DC=dc,DC=sys"; de.AuthenticationType = AuthenticationTypes.Secure; var deSearch = new DirectorySearcher { SearchRoot = de, Filter = "(&(objectClass=user) (sAMAccountName=" + userId + "))" }; var result = deSearch.FindOne(); if (result == null) { //User not found in ou1 de.Path = "LDAP://OU=ou2,OU=Users,OU=BE,DC=dc,DC=sys"; de.AuthenticationType = AuthenticationTypes.Secure; deSearch = new DirectorySearcher { SearchRoot = de, Filter = "(&(objectClass=user) (sAMAccountName=" + userId + "))" }; result = deSearch.FindOne(); if (result==null) return null; } using (var deUser = new DirectoryEntry(result.Path)) { //Do something } } A: Change the base object to OU=Users,OU=BE,DC=dc,DC=sys, use the same filter, use a scope of sub or one (depending on where the data is located under the organizational units). For more information about searching a directory, see "LDAP: Using ldapsearch" and "LDAP: Programming Practices".
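A sketch of the single combined search the answer describes - one searcher rooted at the common parent OU, with a subtree scope so both ou1 and ou2 (and any other child OU) are covered in one query. Names are taken from the question; treat this as an illustration rather than tested code:

```csharp
using (var de = new DirectoryEntry("LDAP://OU=Users,OU=BE,DC=dc,DC=sys"))
{
    de.AuthenticationType = AuthenticationTypes.Secure;
    var deSearch = new DirectorySearcher
    {
        SearchRoot = de,
        // Subtree searches every OU under OU=Users in a single query
        SearchScope = SearchScope.Subtree,
        Filter = "(&(objectClass=user)(sAMAccountName=" + userId + "))"
    };
    var result = deSearch.FindOne();
    if (result == null) return null;

    using (var deUser = new DirectoryEntry(result.Path))
    {
        // Do something
    }
}
```

If the data only ever lives directly under ou1 and ou2 (never deeper), SearchScope.OneLevel from each OU would also work, but Subtree from the common parent keeps it to one search.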
doc_2639
This is the solution: for the foreign key grid column i use default kendo mechanism so the column display the value instead of id. This is the problem: for that column i use a custom autocomplete editor but: * *when i click into the autocomplete widget it displays the id not the value *when i save a new value autocomplete widget does not show the value The code below show detail grid initialization (master grid not shown). Column group_id is the foreign key. Variable groups contains the key value list used to display group name instead of id. getGroupsAsync(e) is the function the read from the specified data source the list of all available groups. //function used to async load groups var getGruppiAsync = function (e) { var deferred = $.Deferred(), loadGruppi = function () { new kendo.data.DataSource({ type: "odata", serverPaging: false, transport: { read: "/Services/MusicStore.svc/GetGroupsByUser?id_utente=guid'" + e.data.id_utente + "'" }, schema: { data: function (data) { return data.d.results; }, total: function (data) { return data.d.results.length; } } }).fetch(function (data) { deferred.resolve($.map(data.items, function (item) { return { value: item.id_gruppo, text: item.nome }; })); }); }; window.setTimeout(loadGruppi, 1); return deferred.promise(); }; $.when(getGroupsAsync(e)).done(function (groups) { $("<div id='group-grid'/>").appendTo(e.detailCell).kendoGrid({ toolbar: ["create", "save", "cancel"], editable: "incell", autoBind: true, dataSource: { type: "odata", serverFiltering: true, transport: { read: { url: "/Services/MusicStore.svc/Gruppi_Utenti" }, create: { url: function () { return "/Services/MusicStore.svc/Gruppi_Utenti" }, type: "POST", data: function (data) { data.utente_id = e.data.id_utente; data.id_gruppi_utente = Math.uuid(); data.gruppo_id = selectedGruppo; if (data.id_gruppo) delete data["id_gruppo"]; }, }, update: { url: function (data) { return "/Services/MusicStore.svc/Gruppi_Utenti(guid'" + data.id_gruppi_utente + "')"; }, type: 
"PUT", data: function (data) { data.gruppo_id = selectedGruppo; if (data.id_gruppo) delete data["id_gruppo"]; } }, destroy: { url: function (data) { return "/Services/MusicStore.svc/Gruppi_Utenti(guid'" + data.id_gruppi_utente + "')"; }, type: "DELETE" }, parameterMap: function (data, type) { if (type == "read") { // call the default OData parameterMap var result = kendo.data.transports.odata.parameterMap(data); if (result.$filter) { // encode everything which looks like a GUID var guid = /('[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}')/ig; result.$filter = result.$filter.replace(guid, "guid$1"); } return result; } else return JSON.stringify(data); } }, filter: { field: "utente_id", operator: "eq", value: e.data.id_utente }, schema: { model: detailModel, total: function (response) { if (response.length == 0) return 0; return response.d.__count; } }, error: showError }, columns: [ { field: "group:id", title: "Gruppi", editor: detailEditor, values: groups, }, { command: ["destroy"], title: "&nbsp;" } ] }); }); This function provides autocomplete editor for column group_id var gruppiDataSource = new kendo.data.DataSource({ type: "odata", serverPaging: false, serverFiltering: false, transport: { read: "/Services/MusicStore.svc/Gruppi", }, schema: { data: function (data) { return data.d.results; }, total: function (data) { return data.d.results.length; } } }); gruppiDataSource.read(); function detailEditor(container, options) { //$('<input data-text-field="nome" data-text-value="nome" data-bind="value:' + options.field + '"/>') $('<input data-text-field="nome" data-bind="value:' + options.field + '"/>') .appendTo(container) .kendoAutoComplete({ //index: 0, highlightFirst: true, autoBind: false, placeholder: "Select group", dataTextField: "nome", //dataValueField: "id_gruppo", filter: "contains", minLength: 3, select: onGroupSelect, change: function(e){ }, dataSource: gruppiDataSource }); } The problem with this code is that when i enter in editing mode 
autocomplete widget show the id of the group instead of name and when i save data the widget remain blank. The data is sent to server but i can't get the gui elements with the correct values. Some suggestions?
doc_2640
public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
    }
}

activity_main.xml

<?xml version="1.0" encoding="utf-8"?>
...
</androidx.constraintlayout.widget.ConstraintLayout>

build.gradle

apply plugin: 'com.android.application'
...
dependencies {
    ...
    implementation 'com.jjoe64:graphview:4.2.2'
}

Thanks for helping. Mathiew

A: To get results, add some values to the graph.
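To expand on "add some values": with the com.jjoe64 GraphView library from the question's build.gradle, the view renders as an empty grid until at least one series is attached. A minimal sketch (it assumes a GraphView with id graph exists in activity_main.xml, and the data points are arbitrary):

```java
import com.jjoe64.graphview.GraphView;
import com.jjoe64.graphview.series.DataPoint;
import com.jjoe64.graphview.series.LineGraphSeries;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    GraphView graph = findViewById(R.id.graph);
    // A series of (x, y) points; without one, nothing is drawn
    LineGraphSeries<DataPoint> series = new LineGraphSeries<>(new DataPoint[] {
            new DataPoint(0, 1),
            new DataPoint(1, 5),
            new DataPoint(2, 3)
    });
    graph.addSeries(series);
}
```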
doc_2641
We have the following algorithm (implemented in Python):

def bidirectional_bubble_sort(a):
    left = -1
    right = len(a)
    while left < right:
        swap = False
        left += 1
        right -= 1
        for i in xrange(left, right):
            if a[i] > a[i + 1]:
                t = a[i]
                a[i] = a[i + 1]
                a[i + 1] = t
                swap = True
        if not swap:
            return
        else:
            swap = False
            for i in xrange(right - 1, left - 1, -1):
                if a[i] > a[i + 1]:
                    t = a[i]
                    a[i] = a[i + 1]
                    a[i + 1] = t
                    swap = True
            if not swap:
                return

I'm a bit confused by the main loop condition. Does the algorithm ever get to the point where left >= right (before exiting at one of the inner return statements)?

A:

while left < right:
    swap = False
    left += 1
    right -= 1

left and right are initialized as the left-most and right-most index of the array, and on each iteration they move toward each other unconditionally - no matter what happens in the next two loops. So left >= right will obviously be reached eventually and the loop will exit. For an array of even length, left > right is reached; for an array of odd length, left == right is reached, and the loop exits. Step through it in a debugger and you will see it yourself.

Edit: "I need to prove that a given bidirectional bubble sort algorithm is correct." Can you try this snippet? It seems the above implementation is not correct.

def bidirectional_bubble_sort(a):
    left = -1
    right = len(a)
    while left < right:
        swap = False
        left += 1
        right -= 1
        for i in xrange(left, right):
            if a[i] > a[i + 1]:
                t = a[i]
                a[i] = a[i + 1]
                a[i + 1] = t
                swap = True
        if not swap:
            return
        else:
            swap = False
            for i in xrange(right, left, -1):
                if a[i] < a[i - 1]:
                    t = a[i]
                    a[i] = a[i - 1]
                    a[i - 1] = t
                    swap = True
            if not swap:
                return
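For comparison, a compact Python 3 version of the same cocktail-shaker idea (range instead of the Python 2 xrange, tuple swaps instead of a temporary variable, and the bounds adjusted so the passes are easier to reason about). This is a sketch, not the original poster's code:

```python
def bidirectional_bubble_sort(a):
    """Sort list `a` in place with alternating forward/backward passes."""
    left, right = 0, len(a) - 1
    while left < right:
        swapped = False
        for i in range(left, right):          # forward: bubble the max to `right`
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        right -= 1                            # a[right] is now in final position
        for i in range(right, left, -1):      # backward: bubble the min to `left`
            if a[i] < a[i - 1]:
                a[i], a[i - 1] = a[i - 1], a[i]
                swapped = True
        left += 1                             # a[left] is now in final position
        if not swapped:                       # a full sweep with no swap: sorted
            return
```

The early return plays the same role as the inner `if not swap: return` checks above: a full forward-plus-backward sweep with no swaps means the slice between left and right is already sorted.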
doc_2642
SELECT ext.col1, ext.col2, ext.col3, COUNT(*) Link Count FROM `external.`Media` ext INNER JOIN `internal`.`links` l ON ext.someId = links.id INNER JOIN `internal`.tags t ON l.tag_id = t.tag_id WHERE t.parent_tag_id = 1098 AND t.metadata = 'someValue' GROUP BY ext.col1, ext.col2, ext.col3 So I have 3 models from which data is coming from. Currently I have the following in the Media model.. public $belongsTo = array( 'Links' => array( 'className' => 'Links', 'foreignKey' => 'someId', ), 'Tag' => array( 'className' => 'Tag' ) ); Then in the other models : Tag Model public $useTable = "tags"; public $name = 'Tag'; public $hasAndBelongsToMany = array( 'TagLink' => array( 'className' => 'TagLink', 'foreign_key' => 'tag_id', 'conditions' => array( 'Tag.parent_tag_id' => 1098 ) ) ); Link Model public $useTable = "tag_links"; public $name = 'TagLink'; public $hasAndBelongsToMany = array( 'Tag' => array( 'className' => 'Tag', 'foreign_key' => 'tag_id' ) ); Then back in the Media model , I use the find expecting to have the model binding set up : return $this->find("all", array( 'conditions' => array( 'Tag.metadata' => $username ) )); This generates an error : Error: SQLSTATE[42S22]: Column not found: 1054 Unknown column 'Tag.metadata' in 'where clause' I am assuming that this is happening because the table is not found [SQL Generated] SELECT `Media`.`media_id`, `Media`.`mediatype`, `Media`.`created`, `Media`.`active_flag`, `Media`.`mediafile`, `Media`.`fallbackfile`, `Media`.`setupId`, `Media`.`externalId`, `Media`.`width`, `Media`.`height`, `Media`.`brands`, `Media`.`languages`, `Media`.`products`, `Media`.`projects`, `Media`.`affiliates`, `Media`.`description`, `Media`.`linking_code` FROM `external`.`ext_media_gallery` AS `Media` WHERE `Tag`.`metadata` = 'testuser' AND `MediaGallery`.`active_flag` = 'active' AND ((FIND_IN_SET('testuser', `Media`.`affiliates`)) OR (`Media`.`affiliates` = '')) This basically shows that the binding is incorrect/not happening Basically I want 
to replace the following to avoid using joins on the fly since these would need to apply for every query.. hence the model binding.. This works for me, but I want to convert it to do it using model binding : return $this->find("all", array( "joins" => array( array( "table" => "internal.links", "alias" => "Link", "type" => "INNER", "conditions" => array( "Link.id = Media.someId" ) ), array( "table" => "internal.tags", "alias" => "Tag", "type" => "INNER", "conditions" => array( "Tag.tag_id = Link.tag_id", "Tag.parent_tag_id = 3214", "Tag.metadata" => $username ) ), ) ));
doc_2643
This is my code for the method:

filterr(request, respond) {
    var averageRating = request.params.rating;
    var sql = "SELECT * FROM shopreview.shops WHERE averageRating = ?";
    db.query(sql, [averageRating], function (error, result) {
        if (error) {
            throw error;
        } else {
            respond.json(result);
        }
    });
}

My SQL statement is working when I test it against my database. However, I keep getting [] as my result. Can someone please help identify what the problem is? Thanks a lot!

A: The problem is that the db is unable to parse the "?". Either interpolate the averageRating variable directly (note the backticks - a template literal is required for ${} interpolation, which does not work inside double quotes), like so:

var sql = `SELECT * FROM shopreview.shops WHERE averageRating = ${parseInt(averageRating)}`;

or, if you're using Couchbase, you could parameterize it like this:

var sql = `SELECT * FROM shopreview.shops WHERE averageRating = $1`;

where $1 is the first variable in the array of variables.
doc_2644
EmbeddedCassandraService cassandraService = new EmbeddedCassandraService(); cassandraService.start(); We are able to use embedded cassandra perfectly fine when testing our domain classes. However, when using it with our API tests (which have a different set of dependencies) it throws the following exception: Caused by: org.apache.cassandra.exceptions.InvalidRequestException: unconfigured table schema_keyspaces at org.apache.cassandra.thrift.ThriftValidation.validateColumnFamily(ThriftValidation.java:115) at org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:920) at org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:915) at org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:557) at org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:253) at org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:354) at org.apache.cassandra.schema.LegacySchemaMigrator.query(LegacySchemaMigrator.java:1044) at org.apache.cassandra.schema.LegacySchemaMigrator.readSchema(LegacySchemaMigrator.java:173) at org.apache.cassandra.schema.LegacySchemaMigrator.migrate(LegacySchemaMigrator.java:77) at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:256) at org.apache.cassandra.service.CassandraDaemon.init(CassandraDaemon.java:503) at org.apache.cassandra.service.EmbeddedCassandraService.start(EmbeddedCassandraService.java:51) at com.company.project.schema.cassandra.config.EmbeddedCassandraConfiguration.session(EmbeddedCassandraConfiguration.java:24) We're not sure why one module works but the other does not as the cassandra dependencies are identical. Additionally, neither module depends on spring-data-cassandra and both modules are using datastax's 3.3.0 driver. 
I'm confused as to why the issue occurs in the LegacySchemaMigrator as this EmbeddedCassandraService should be creating the system tables from scratch every time it is started (and there shouldn't be any schema to, well, migrate). Does anyone have any insight as to what may be causing this issue? A: The issue was actually that the EmbeddedCassandraService was being started twice.
doc_2645
In Python 3.x, I am using an attribute descriptor. The particular thing about this descriptor is that its set method contains a lot of sanity checks to make sure the value about to be set to the attribute respects some rules. The descriptor uses setattr and getattr to manipulate the attribute. This version works well and its code is reported below.

class AttributeDescriptor():  # <----- Version 001 of this class
    def __init__(self, attname):
        self.__attname = "__" + attname

    def __set__(self, obj, attvalue):
        # Some data quality checks, not provided here...
        setattr(obj, self.__attname, attvalue)

    def __get__(self, obj, owner):
        return getattr(obj, self.__attname)

class Hobbit():
    def __init__(self):
        pass
    name = AttributeDescriptor("name")

sam = Hobbit()
merry = Hobbit()
sam.name = "Sam"
merry.name = "Merry"
print(sam.name)    # ----> Returns "Sam"
print(merry.name)  # ----> Returns "Merry"
print(sam.name)    # ----> Returns "Sam"

I also tried defining the descriptor with the following code, which returned erroneous values. Indeed, all Hobbits' names were set equal to the last name which had been defined.

class AttributeDescriptor():  # <---- Version 002 of this class
    def __set__(self, obj, attvalue):
        # Some data quality checks, not provided here...
        self.value = attvalue

    def __get__(self, obj, owner):
        return self.value

class Hobbit():
    def __init__(self):
        pass
    name = AttributeDescriptor()

sam = Hobbit()
merry = Hobbit()
sam.name = "Sam"
merry.name = "Merry"
print(sam.name)    # ----> Returned "Merry"
print(merry.name)  # ----> Returned "Merry"
print(sam.name)    # ----> Returned "Merry"

My question is: how come the descriptor Version 002 sets "name" equal to a common value through all its instances?
From what I understand of descriptors, the descriptor Version 001 will store names in an attribute of the Hobbit object instance:

sam.__name = "sam"
merry.__name = "merry"

while the descriptor Version 002 will store names in an attribute of the attribute of the Hobbit object instance:

sam.name.value = "sam"
merry.name.value = "merry"

Therefore, there is obviously something which I do not understand about how a Python descriptor works. Could anyone provide me with some clarifications?

A: In your second example you are setting the value on the AttributeDescriptor instance itself. You have only one AttributeDescriptor() instance in your program, and therefore it changes the same value every time you get to its set method.

class Hobbit():
    def __init__(self):
        pass
    name = AttributeDescriptor("name")  # <---- happens ONLY once!!!

The class definition in Python happens only* once ...

*unless it doesn't. But let's stick to the easier truth :)
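To make the per-instance behaviour of Version 001 explicit, here is a sketch of a descriptor that stores each value directly in the instance's __dict__ instead of on the (shared) descriptor object. Names follow the question; the underscore prefix for the storage key is just one common convention:

```python
class AttributeDescriptor:
    def __init__(self, attname):
        # A single descriptor object is shared by the whole class, so it
        # must not hold the value itself; it only remembers *where* to
        # store it on each instance.
        self.attname = "_" + attname

    def __set__(self, obj, attvalue):
        # sanity checks would go here
        obj.__dict__[self.attname] = attvalue   # stored per instance

    def __get__(self, obj, owner):
        if obj is None:        # accessed on the class, not an instance
            return self
        return obj.__dict__[self.attname]


class Hobbit:
    name = AttributeDescriptor("name")


sam, merry = Hobbit(), Hobbit()
sam.name = "Sam"
merry.name = "Merry"
```

Because __set__ writes into obj.__dict__ (sam's dict, merry's dict), each Hobbit keeps its own value, while the single AttributeDescriptor object stays stateless apart from the attribute name.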
doc_2646
A = {0, 1, 3, 4, 5} B = {1, 1, 2, 3, 4, 5, 6, 7, 8} I receive the following: Union = {1, 1, 2, 3, 4, 5, 6, 7, 8} Intersection = {0, 1, 3, 4, 5} Which is clearly incorrect. I should be receiving: Union = {0, 1, 2, 3, 4, 5, 6, 7, 8} Intersection = {1, 3, 4, 5} Here's the code in my main pertaining to the intersection/union: Vector<Inty> p1Shingles = new Vector<Inty>(); p1Shingles.add(new Inty(0)); p1Shingles.add(new Inty(1)); p1Shingles.add(new Inty(3)); p1Shingles.add(new Inty(4)); p1Shingles.add(new Inty(5)); Vector<Inty> p2Shingles = new Vector<Inty>(); p2Shingles.add(new Inty(1)); p2Shingles.add(new Inty(1)); p2Shingles.add(new Inty(2)); p2Shingles.add(new Inty(3)); p2Shingles.add(new Inty(4)); p2Shingles.add(new Inty(5)); p2Shingles.add(new Inty(6)); p2Shingles.add(new Inty(7)); p2Shingles.add(new Inty(8)); Vector<Inty> shinglesUnion = vectorUnion(p1Shingles, p2Shingles); Vector<Inty> shinglesIntersection = vectorIntersection(p1Shingles, p2Shingles); Here, Inty is a class I created so that I can change the values of the integers I need to store in a vector, which is not possible with the Integer class. Here are the functions I've written: private static <T> Vector<T> vectorIntersection(Vector<T> p1Shingles, Vector<T> p2Shingles) { Vector<T> intersection = new Vector<T>(); for(T i : p1Shingles) { if(p2Shingles.contains(i)) { intersection.add(i); } } return intersection; } private static <T> Vector<T> vectorUnion(Vector<T> p1Shingles, Vector<T> p2Shingles) { Vector<T> union = new Vector<T>(); union.addAll(p2Shingles); for(T i : p1Shingles) { if(!p2Shingles.contains(i)) { union.add(i); } } return union; } If anyone could give any insight as to why this is not working, I'd love to hear it. Thanks in advance! A: The method isDuplicated does not use parameter i! Actually I think it always returns True. Replacing the entire code of the function by return p2Shingles.contains(i) should be enough.
doc_2647
Njk template there uses collections.all to generate sitemap for all possible pages, like so --- permalink: sitemap.xml hidden: true --- <?xml version="1.0" encoding="utf-8"?> <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"> {%- for page in collections.all %} {%- if not page.data.hidden %} <url> <loc>{{ site.url }}{{ page.url | url }}</loc> <lastmod>{{ page.date | htmlDateDisplay }}</lastmod> </url> {%- endif %} {%- endfor %} One of the outputs in a resulting sitemap is https://skeleventy.netlify.app/category/all/ which is a collection of all possible pages - a bit of a mess. Instead of "category all", it would be better that google indexes each category, for example <url> <loc>https://skeleventy.netlify.app/category/software/</loc> <lastmod>2020-7-20</lastmod> </url> <url> <loc>https://skeleventy.netlify.app/category/writing/</loc> <lastmod>2020-7-20</lastmod> </url> But how can i edit that njk template so that it -captures and outputs different categories in the sitemap? -excludes category/all -leaves other important pages like homepage, each blog post etc. A: I think I have it working. I've got this running on my machine and if you want a copy, just reach out. I don't use Nunjucks normally so forgive any dumb mistake. The first mod I did to sitemap.njk was to hide collections.all: <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"> {%- for page in collections.all %} {%- if not page.data.hidden %} {%- if page.url !== "/category/all/" %} <url> <loc>{{ site.url }}{{ page.url | url }}</loc> <lastmod>{{ page.date | htmlDateDisplay }}</lastmod> </url> {% endif %} {%- endif %} {%- endfor %} Kinda hacky but worked. Next, I needed a way to get the blog category pages. I looked at tags.njk. Based on what I saw there, I wrote a filter for .eleventy.js named categories. 
I do not think this is a great name: eleventyConfig.addFilter("categories", function(collections) { let categories = Object.keys(collections).filter(c => c !== 'all'); return categories; }); Back in the sitemap, I then did this: {%- set cats = collections | categories %} {%- for cat in cats %} {% set newestDate = collections[cat] | getLatestDate %} <url> <loc>{{ site.url }}/category/{{ cat }}/</loc> <lastmod>{{ newestDate | htmlDateDisplay }}</lastmod> </url> {%- endfor %} </urlset> Note the getLatestDate filter, this is defined as such: eleventyConfig.addFilter("getLatestDate", function(collection) { console.log('running getLatestDate'); return collection[0].date; }); It seemed to work well. Here is my output: <?xml version="1.0" encoding="utf-8"?> <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"> <url> <loc>https://skeleventy.netlify.app/blog/post-1/</loc> <lastmod>2020-8-26</lastmod> </url> <url> <loc>https://skeleventy.netlify.app/blog/post-2/</loc> <lastmod>2020-8-26</lastmod> </url> <url> <loc>https://skeleventy.netlify.app/blog/post-3/</loc> <lastmod>2020-8-26</lastmod> </url> <url> <loc>https://skeleventy.netlify.app/about/</loc> <lastmod>2020-8-26</lastmod> </url> <url> <loc>https://skeleventy.netlify.app/blog/</loc> <lastmod>2020-8-26</lastmod> </url> <url> <loc>https://skeleventy.netlify.app/contact/</loc> <lastmod>2020-8-26</lastmod> </url> <url> <loc>https://skeleventy.netlify.app/</loc> <lastmod>2020-8-26</lastmod> </url> <url> <loc>https://skeleventy.netlify.app/category/blog/</loc> <lastmod>2020-8-26</lastmod> </url> <url> <loc>https://skeleventy.netlify.app/category/business/</loc> <lastmod>2020-8-26</lastmod> </url> <url> <loc>https://skeleventy.netlify.app/category/digital/</loc> <lastmod>2020-8-26</lastmod> </url> <url> <loc>https://skeleventy.netlify.app/category/health/</loc> <lastmod>2020-8-26</lastmod> </url> </urlset> If you want a complete copy, just reach out to me.
doc_2648
Failed to load JVM DLL C:\Program Files\Java\jdk-9.0.1\bin\server\jvm.dll

If you already have a 32-bit JDK installed, define a JAVA_HOME variable in Computer > System Properties > System Settings > Environment Variables.

How can I launch PyCharm?

A: For Windows 10, try running as an administrator.

A: Try deleting the following folder:

C:\Users\<your user>\AppData\Roaming\JetBrains\

PyCharm will then start as if you were launching it for the first time.
doc_2649
public data class ModelA(val limit: Int, val offset: Int, val someDataA: DataAlpha) public data class ModelB(val limit: Int, val offset: Int, val someDataB: DataBeta) I would like to generalize pagination based on something like Paginable trait: trait Paginable { var limit: Int var offset: Int } But making ModelA implement Paginable causes an error: Error: 'offset' hides member of supertype 'Paginable' and needs 'override' modifier Adding override: public data class ModelB(override val limit: Int, override val offset: Int, val someDataB: DataBeta) : Paginable causes even more interesting error, crashing the compiler: Error:java.lang.ClassCastException: org.jetbrains.kotlin.psi.JetParameter cannot be cast to org.jetbrains.kotlin.psi.JetProperty at org.jetbrains.kotlin.resolve.OverrideResolver$3.varOverriddenByVal(OverrideResolver.java:562) at org.jetbrains.kotlin.resolve.OverrideResolver.checkOverridesForMemberMarkedOverride(OverrideResolver.java:606) at org.jetbrains.kotlin.resolve.OverrideResolver.checkOverrideForMember(OverrideResolver.java:529) at org.jetbrains.kotlin.resolve.OverrideResolver.checkOverridesInAClass(OverrideResolver.java:269) at org.jetbrains.kotlin.resolve.OverrideResolver.checkOverrides(OverrideResolver.java:260) at org.jetbrains.kotlin.resolve.OverrideResolver.check(OverrideResolver.java:67) at org.jetbrains.kotlin.resolve.LazyTopDownAnalyzer.analyzeDeclarations(LazyTopDownAnalyzer.java:299) at org.jetbrains.kotlin.resolve.LazyTopDownAnalyzerForTopLevel.analyzeDeclarations(LazyTopDownAnalyzerForTopLevel.java:77) at org.jetbrains.kotlin.resolve.LazyTopDownAnalyzerForTopLevel.analyzeFiles(LazyTopDownAnalyzerForTopLevel.java:69) at org.jetbrains.kotlin.resolve.jvm.TopDownAnalyzerFacadeForJVM.analyzeFilesWithJavaIntegration(TopDownAnalyzerFacadeForJVM.java:147) at org.jetbrains.kotlin.resolve.jvm.TopDownAnalyzerFacadeForJVM.analyzeFilesWithJavaIntegrationWithCustomContext(TopDownAnalyzerFacadeForJVM.java:100) at 
org.jetbrains.kotlin.cli.jvm.compiler.KotlinToJVMBytecodeCompiler$2.invoke(KotlinToJVMBytecodeCompiler.java:307) at org.jetbrains.kotlin.cli.jvm.compiler.KotlinToJVMBytecodeCompiler$2.invoke(KotlinToJVMBytecodeCompiler.java:300) at org.jetbrains.kotlin.cli.common.messages.AnalyzerWithCompilerReport.analyzeAndReport(AnalyzerWithCompilerReport.java:232) at org.jetbrains.kotlin.cli.jvm.compiler.KotlinToJVMBytecodeCompiler.analyze(KotlinToJVMBytecodeCompiler.java:299) at org.jetbrains.kotlin.cli.jvm.compiler.KotlinToJVMBytecodeCompiler.analyzeAndGenerate(KotlinToJVMBytecodeCompiler.java:282) at org.jetbrains.kotlin.cli.jvm.compiler.KotlinToJVMBytecodeCompiler.compileBunchOfSources(KotlinToJVMBytecodeCompiler.java:208) at org.jetbrains.kotlin.cli.jvm.K2JVMCompiler.doExecute(K2JVMCompiler.java:189) at org.jetbrains.kotlin.cli.jvm.K2JVMCompiler.doExecute(K2JVMCompiler.java:49) at org.jetbrains.kotlin.cli.common.CLICompiler.exec(CLICompiler.java:148) at org.jetbrains.kotlin.gradle.tasks.AbstractKotlinCompile.callCompiler(Tasks.kt:86) at org.jetbrains.kotlin.gradle.tasks.AbstractKotlinCompile.compile(Tasks.kt:62) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:63) at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$StandardTaskAction.doExecute(AnnotationProcessingTaskFactory.java:218) at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$StandardTaskAction.execute(AnnotationProcessingTaskFactory.java:211) at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$StandardTaskAction.execute(AnnotationProcessingTaskFactory.java:200) at 
org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:579) at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:562) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:80) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:61) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:46) at org.gradle.api.internal.tasks.execution.PostExecutionAnalysisTaskExecuter.execute(PostExecutionAnalysisTaskExecuter.java:35) at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:64) at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58) at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:42) at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52) at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:53) at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43) I'm aware that the trait can be implemented "manually" inside of the class body but this will break the data class as equals, hashCode and copy will ignore added properties. A: You can fix your code by using the same property kind(val/var) in both side(trait and data class). A: I was looking for achieving multiple inheritence in Kotlin, came across trait. But later found that traits do not exist anymore and their semantics have changed significantly once renamed to interfaces to match Java 8 semantics. The keyword trait was a keyword in Kotlin but now it's removed. It was deprecated on the Kotlin M12 release. You can find more here.
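In current Kotlin (where trait was renamed interface, as the answer notes), the pattern from the question compiles once the interface and the data class use the same property kind - val on both sides, per the first answer. A sketch, with the domain-specific fields shortened to String for brevity:

```kotlin
interface Paginable {
    val limit: Int
    val offset: Int
}

data class ModelA(
    override val limit: Int,
    override val offset: Int,
    val someDataA: String
) : Paginable

data class ModelB(
    override val limit: Int,
    override val offset: Int,
    val someDataB: String
) : Paginable
```

Because limit and offset remain primary-constructor parameters of each data class, the generated equals, hashCode and copy still cover them - avoiding the problem the question mentions with implementing the properties manually in the class body.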
doc_2650
I confirmed setMyState is running because it updates other parts of the application (ex. triggers a different useEffect) and I know MyPage isn't being rendered because I'm doing a console.log() on the first line of rendering MyPage and this log doesn't appear. Can someone help me with why this is happening? <IonApp> <IonReactRouter> <IonTabs> <IonRouterOutlet> <Route exact path="/:tab(MyTab)"> {myState ? <MyPage /> : <IonSpinner />} </Route> ... more code ... EDIT: Pretty sure it's a bug in Ionic? After useEffect runs setMyState, react does a render like it's supposed to but then the IonRouterOutlet has no children (none displayed in React Dev Tools). I traced this back to the ReactRouterViewStack calling getChildrenToRender() which creates const viewItems. This viewItems object is empty {} because the viewStack is empty {}. I don't know how the viewStack is meant to be populated so I'm not sure where to go from here, but I think addViewItem is not running? When I go to another tab and then back to this tab, everything renders correctly and the children to IonRouterOutlet are shown in React Dev Tools. Files: node_modules/@ionic/react/dist/index.esm.js node_modules/@ionic/react-router/dist/index.esm.js A: try adding MyState in the Dependency array of useEffect sample code
doc_2651
The inputs provided by the user are * *Value of N *Option amongst Hour/Day/Week/Month *Start Date *Start Time I am unable to get the cron expressions right for each of the repeat interval types, i.e. Hour/Day/Week/Month, so that the trigger time is calculated from the start date. A: The Quartz documentation suggests using a SimpleTrigger http://www.quartz-scheduler.org/docs/cookbook/BiDailyTrigger.html; an example for every other day: Trigger trigger = new SimpleTrigger("trigger1", "group1"); trigger.setRepeatCount(SimpleTrigger.REPEAT_INDEFINITELY); // 24 hours * 60(minutes per hour) * 60(seconds per minute) * 1000(milliseconds per second) trigger.setRepeatInterval(2L * 24L * 60L * 60L * 1000L); Note that you will need to set the trigger start time and the misfire rule. A: I think this is a good start on how to configure triggers: http://www.opensymphony.com/quartz/wikidocs/CronTriggers%20Tutorial.html
doc_2652
But due to the requirements of the JSON format, I have two more issues: * *The values with the mac- and ip-addresses should be enclosed in square brackets ([ ]) and each one should be wrapped in double quotes (" "), e.g.: "mac_address":  ["00:10:XX:10:00:0X", "X0:X0:11:X0:00:0X", "X0:11:X0:11:XX:11"] *In the file.txt, I will also have the following key-value information:  product_executable : "C:\Program Files (x86)\McAfee\VirusScan Enterprise\SHSTAT.EXE" /!REMEDIATE I used the current code in the question mentioned above and I get doubled backslashes and an extra one at the beginning: "product_executable":  "\"C:\\Program Files (x86)\\McAfee\\VirusScan Enterprise\\SHSTAT.EXE\" /!REMEDIATE" I already tried to handle the hashtable output ($oht) without success, as well as trying to modify, add and delete characters. My file.txt contains the following structured information (two empty lines at the beginning as well): adapter_name : empty1 route_age : 10 route_nexthop : 172.0.0.1 route_protocol : NETMGMT1 speed : null mac_address : 11:10:XX:10:00:0X, X1:X0:11:X0:00:0X, X1:11:X0:11:XX:11 product_executable : "C:\Program Files (x86)\McAfee\VirusScan Enterprise\SHSTAT.EXE" /!REMEDIATE adapter_name : empty2 route_age : 100 route_nexthop : 172.0.0.2 route_protocol : NETMGMT2 speed : null mac_address : 22:10:XX:10:00:0X, X2:X0:11:X0:00:0X, X2:11:X0:11:XX:11 product_executable : "C:\Program Files (x86)\McAfee\VirusScan Enterprise\SHSTAT.EXE" /!REMEDIATE adapter_name : empty3 route_age : 1000 route_nexthop : 172.0.0.3 route_protocol : NETMGMT3 speed : null mac_address : 33:10:XX:10:00:0X, X3:X0:11:X0:00:0X, X3:11:X0:11:XX:11 product_executable : "C:\Program Files (x86)\McAfee\VirusScan Enterprise\SHSTAT.EXE" /!REMEDIATE The current code (see question mentioned) is: # Read the input file as a whole (-Raw) and split it into blocks (paragraphs) (Get-Content -Raw C:\scripts\file.txt) -split '\r?\n\r?\n' -ne '' | ForEach-Object { # Process each block # Initialize an ordered hashtable for 
the key-values pairs in this block. $oht = [ordered] @{} # Loop over the block's lines. foreach ($line in $_ -split '\r?\n' -ne '') { # Split the line into key and value... $key, $val = $line -split ':', 2 # ... and add them to the hashtable. $oht[$key.Trim()] = $val.Trim() } $oht # Output the hashtable. } | ConvertTo-Json The actual result is: [ { "adapter_name" : "empty1", "route_age" : 10, "route_nexthop" : "172.0.0.1", "route_protocol" : "NETMGMT1", "speed" : null, "mac_address" : "11:10:XX:10:00:0X, X1:X0:11:X0:00:0X, X1:11:X0:11:XX:11", "product_executable" : "\"C:\\Program Files (x86)\\McAfee\\VirusScan Enterprise\\SHSTAT.EXE" /!REMEDIATE" }, { "adapter_name" : "empty2", "route_age" : 100, "route_nexthop" : "172.0.0.2", "route_protocol" : "NETMGMT2", "speed" : null, "mac_address" : "22:10:XX:10:00:0X, X2:X0:11:X0:00:0X, X2:11:X0:11:XX:11", "product_executable" : "\"C:\\Program Files (x86)\\McAfee\\VirusScan Enterprise\\SHSTAT.EXE" /!REMEDIATE" }, { "adapter_name" : "empty3", "route_age" : 1000, "route_nexthop" : "172.0.0.3", "route_protocol" : "NETMGMT3", "speed" : null, "mac_address" : "33:10:XX:10:00:0X, X3:X0:11:X0:00:0X, X3:11:X0:11:XX:11", "product_executable" : "\"C:\\Program Files (x86)\\McAfee\\VirusScan Enterprise\\SHSTAT.EXE" /!REMEDIATE" } ] And the expected result is: [ { "adapter_name" : "empty1", "route_age" : 10, "route_nexthop" : "172.0.0.1", "route_protocol" : "NETMGMT1", "speed" : null, "mac_address" :  ["11:10:XX:10:00:0X", "X1:X0:11:X0:00:0X", "X1:11:X0:11:XX:11"], "product_executable" : ""C:\Program Files (x86)\McAfee\VirusScan Enterprise\SHSTAT.EXE" /!REMEDIATE" }, { "adapter_name" : "empty2", "route_age" : 100, "route_nexthop" : "172.0.0.2", "route_protocol" : "NETMGMT2", "speed" : null, "mac_address" :  ["22:10:XX:10:00:0X", "X2:X0:11:X0:00:0X", "X2:11:X0:11:XX:11"], "product_executable" : ""C:\Program Files (x86)\McAfee\VirusScan Enterprise\SHSTAT.EXE" /!REMEDIATE" }, { "adapter_name" : "empty3", "route_age" : 1000, "route_nexthop" : 
"172.0.0.3", "route_protocol" : "NETMGMT3", "speed" : null, "mac_address" :  ["33:10:XX:10:00:0X", "X3:X0:11:X0:00:0X", "X3:11:X0:11:XX:11"], "product_executable" : ""C:\Program Files (x86)\McAfee\VirusScan Enterprise\SHSTAT.EXE" /!REMEDIATE" } ] I would really appreciate your suggestions. A: Strings in JSON can take escape sequences. The character for specifying an escape sequence is backslash \. Escape sequences are useful for, among other things: * *Inserting non-printable or whitespace characters (like TAB or newlines or null) *Inserting a double quote " into the string (since a double quote both begins and ends the string, you must have a way to say "I want this quote to be part of the string, not to terminate it). *Inserting a literal backslash \ (since backlsash is the beginning of an escape sequence, you need a way to say "I want this backlash to be part of the string, not to begin an escape sequence) Therefore in your example, what you're seeing is: "\"C:\\Program Files (x86)\\McAfee\\VirusScan Enterprise\\SHSTAT.EXE\" /!REMEDIATE" * *In the beginning, you have a double quote " to start the JSON string. *Then immediately after you have \", which says "The first character in this string is an actual "" *In the paths, which are themselves delimited by a backslash \, you need it to be double so that it can be interpreted as a single backslash, instead of trying to interpret it as escape sequences \P rogram Files \M cAfee \V irusScan, etc. *At the end of SHSTAT.EXE you see the next \" which is putting in the literal quote that ends your quoted executable string. Long story short, everything is working as expected. When you deserialize the JSON, it will all come out the way it should! Want to see for sure? 
$myString = @' "C:\Program Files (x86)\McAfee\VirusScan Enterprise\SHSTAT.EXE" /!REMEDIATE '@ Write-Host $myString $myJsonString = $myString | ConvertTo-Json Write-Host $myJsonString $undoJson = $myJsonString | ConvertFrom-Json Write-Host $undoJson A: You'll need to convert the mac address to an array. # Read the input file as a whole (-Raw) and split it into blocks (paragraphs) (Get-Content -Raw C:\scripts\file.txt) -split '\r?\n\r?\n' -ne '' | ForEach-Object { # Process each block # Initialize an ordered hashtable for the key-values pairs in this block. $oht = [ordered] @{} # Loop over the block's lines. foreach ($line in $_ -split '\r?\n' -ne '') { # Split the line into key and value... $key, $val = $line -split ':', 2 # ... and add them to the hashtable. if ($key -like "*mac_address*"){ $val = @($val.Replace(' ','').split(',')) } $oht[$key.Trim()] = $val.Trim() } $oht # Output the hashtable. } | ConvertTo-Json A: briantist's helpful answer shows that the product_executable property values are correctly JSON-encoded. As for the desire to turn the mac_address into arrays of strings: all that is needed is to split the value by separator string ,  into an array: That is, instead of: $oht[$key.Trim()] = $val.Trim() use $oht[$key.Trim()] = $($val.Trim() -split ', ') If there's a chance that the amount of whitespace between the elements is variable, use -split ',\s*' The $(...) ensures that if the -split operation returns just 1 element - i.e., if the input doesn't contain ,  - the input string is returned as-is rather than as a single-element array. The above assumes: * *that all property values that contain ,  should be parsed as arrays *that if mac_address happens to contain just one entry, it should be parsed as a scalar. The following variation applies array parsing only to property mac_address, and always parses its value as an array (you could again surround the -split operation with $(...) 
to change that): $oht[$key.Trim()] = if ($key.Trim() -eq 'mac_address') { $val.Trim() -split ', ' } else { $val.Trim() }
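For readers who want to see the parsing logic outside of PowerShell, here is a minimal Python sketch of the same idea — split the file into blank-line-separated blocks, split each line on the first colon, and turn mac_address into a list of strings. This is purely illustrative and not part of the original answers; the field names simply mirror the sample file:

```python
import json

def parse_blocks(text):
    """Split the file into blank-line-separated blocks of 'key : value' lines."""
    records = []
    for block in filter(None, (b.strip() for b in text.split("\n\n"))):
        record = {}
        for line in block.splitlines():
            # Split on the first colon only, like the PowerShell -split ':', 2
            key, _, val = line.partition(":")
            key, val = key.strip(), val.strip()
            # mac_address becomes a list; everything else stays a string
            record[key] = [v.strip() for v in val.split(",")] if key == "mac_address" else val
        records.append(record)
    return records

sample = """adapter_name : empty1
route_age : 10
mac_address : 11:10:XX:10:00:0X, X1:X0:11:X0:00:0X

adapter_name : empty2
route_age : 100
mac_address : 22:10:XX:10:00:0X"""

print(json.dumps(parse_blocks(sample), indent=2))
```

Note that partitioning on the first colon is what keeps the colons inside the mac addresses intact.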
doc_2653
Now I want to make "sum rows" in another sheet, where I keep a list of accounts in a cell as a comma-separated value. What I want is for this comma-separated value to be looped over all the "raw data" and then have the amounts from the bookkeeping of ALL entries for these accounts summed. Example: RAW DATA: A B 1000 1.25 1000 1.75 1000 100.22 2422 29.00 2400 20.00 Sum sheet: A B 1000,2400 123.22 2422 29.00 2400,2422 49.00 I have tried the following formula, but it doesn't seem to sum all of the accounts - only the first one in each comma-separated list. =ArrayFormula(SUMPRODUCT(SUMIFS(Accounts!F:F;Accounts!A:A;TRIM(MID(SUBSTITUTE(A2;",";REPT(" ";9999));(ROW($BB$1:INDEX($BB:$BB;LEN(A2)-LEN(SUBSTITUTE(A2;",";""))+1))-1)*9999+1;9999)))))) A: For example: Formula in E1: =SUMPRODUCT((A:A=SPLIT(D1,","))*(B:B))
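To make explicit what the answer's formula computes, here is the same "sum every raw-data row whose account is named in a comma-separated cell" logic as a small Python sketch (illustrative only; the data is the sample from the question):

```python
# (account, amount) pairs from the question's RAW DATA sheet
raw_data = [(1000, 1.25), (1000, 1.75), (1000, 100.22), (2422, 29.00), (2400, 20.00)]

def sum_for_accounts(csv_accounts):
    """Sum the amounts of ALL raw-data rows whose account is listed in the cell."""
    wanted = {int(a) for a in csv_accounts.split(",")}
    return round(sum(amount for account, amount in raw_data if account in wanted), 2)

print(sum_for_accounts("1000,2400"))  # 123.22
print(sum_for_accounts("2400,2422"))  # 49.0
```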
doc_2654
CREATE TABLE core.Institutes ( ID INT NOT NULL PRIMARY KEY IDENTITY(1,1), Name NVARCHAR(128) NOT NULL, OldID INT NULL ) GO CREATE TABLE core.InstitutePlaces ( FKInstituteID INT NOT NULL PRIMARY KEY REFERENCES core.Institutes(ID), FKPlaceID INT NOT NULL REFERENCES core.Places(ID) ) GO CREATE TABLE core.Places ( ID INT NOT NULL PRIMARY KEY IDENTITY(1,1), Name NVARCHAR(128) NOT NULL, FKParentID INT NULL REFERENCES core.Places(ID), OldID INT NULL ) GO on this model public class Place { public int Id { get; set; } public string Name { get; set; } public int? ParentId { get; set; } public Place Parent { get; set; } } public class Institute { public int Id { get; set; } public string Name { get; set; } public Place Place { get; set; } } we're using something like this to do the mapping modelBuilder.Entity<Institutes.Institute>().HasOptional(i => i.Place); but it doesn't work :( This scenario is perfectly managed by the EDML file, so the problem is only about the mapping. A: Something like this will give you (almost) the desired schema, with one caveat: Code First does not create a 1:1 relationship in entity splitting scenarios, of which your desired schema (creating a 1:* association using a join table) is a special case. public class Place { public int Id { get; set; } public string Name { get; set; } public int? ParentId { get; set; } public Place Parent { get; set; } } public class Institute { [DatabaseGenerated(DatabaseGenerationOption.None)] public int Id { get; set; } public string Name { get; set; } public int? 
PlaceId { get; set; } public Place Place { get; set; } } public class Context : DbContext { public DbSet<Place> Places { get; set; } public DbSet<Institute> Institutes { get; set; } protected override void OnModelCreating(ModelBuilder modelBuilder) { modelBuilder.Entity<Institute>().Map(mc => { mc.Properties(p => new { p.Id, p.Name }); mc.ToTable("Institutes"); }) .Map(mc => { mc.Properties(p => new { p.Id, p.PlaceId }); mc.ToTable("InstitutePlaces"); }); modelBuilder.Entity<Place>() .HasOptional(p => p.Parent) .WithMany() .HasForeignKey(p => p.ParentId); } } I had to switch off identity generation due to a bug that I explained here.
doc_2655
$file = get-content C:\Dev\private.key az keyvault secret set --name private_key --value $file --vault-name testing-kv But I encountered the following error: unrecognized arguments: MIIEXXXXXXX... Only the -----BEGIN PRIVATE KEY----- part of the private key is recognized, but the rest isn't. I also looked at this post, Store Private Key into Azure KeyVault, value got changed, and the solution there indicates converting the private key to a secure string and uploading the encoded value to the key vault: $secretvalue = ConvertTo-SecureString 'C:\Dev\private.key' -AsPlainText -Force az keyvault secret set --name private_key --value $secretValue But this didn't work because it stores the string [System.Secure.String] in the key vault. How can I store this private key intact in the key vault? A: I had to run in PowerShell: az login az account set --subscription mysub Go to the folder where you have the private cert and type: az keyvault secret set --name mynewkey --vault-name test-kv --file .\private.key This command reads the private key from a file and stores it in the key vault without any modification.
doc_2656
This is the plot: I have a few questions: 1. are these the errors for the test set or the train set? 2. why is the error 1 at 30? 3. is it an accumulated error? Thank you. my code: base = LinearSVC(tol=1e-10, loss='hinge', C=1000, max_iter=50000) ada = AdaBoostClassifier(base_estimator=base, algorithm='SAMME', n_estimators=n, random_state=10) A: Sklearn's AdaBoostClassifier has a default of n_estimators=50, which I believe is used in your case. However, the boosting process may terminate early if one of the other conditions is reached. This may be dictated by one of the stopping conditions for the base estimator or the SAMME algorithm. In your case, based on the plot, it seems like the boosting stops after 30 estimators. You can easily obtain the actual number of estimators using len(estimator), where estimator is the fitted AdaBoostClassifier. The type of error depends on the function performed before printing estimator_errors_. estimator.predict(X_test) estimator.estimator_errors_ shows error for the test data X_test. Hope that helps.
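To illustrate why the number of fitted estimators can end up smaller than n_estimators, here is a toy Python sketch of the early-stopping idea. This is not sklearn's actual implementation — just the shape of the logic: boosting stops when a round fits perfectly (error 0) or is no better than random guessing:

```python
def boost(round_errors, n_estimators=50, threshold=0.5):
    """Keep estimators until one is perfect (error 0) or no better than chance."""
    kept = []
    for t in range(n_estimators):
        err = round_errors[t % len(round_errors)]  # toy stand-in for a fit/eval step
        if err >= threshold:  # no better than random guessing: stop, discard round
            break
        kept.append(err)
        if err == 0:          # perfect fit: stop early
            break
    return kept

# 30 useful rounds, then a round at chance level stops the loop early
errors = [0.1] * 30 + [0.6]
print(len(boost(errors)))  # 30
```

In a real fitted AdaBoostClassifier the analogous counts live in `len(clf.estimators_)` and `clf.estimator_errors_`.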
doc_2657
I have no problem cloning, and also no problem pushing from one of my repositories. How can I smoothly push from all my repositories? A: As I mentioned before, in that specific repository, do a git remote -v to check what remote URL you are using. You can then compare that URL with the one used in another local repository from which you can push. That way, you can infer why this particular repository cannot push. The OP pradeep adds in the comments: I could push successfully after uninstalling Git and then reinstalling it.
doc_2658
EDIT So I changed up some code following https://github.com/scotch-io/starter-node-angular that @PareshGami suggested. I can get the URLs to be hit, but now the actual content doesn't load. Here is my updated code: server.js: var express = require('express'), app = express(), bodyParser = require('body-parser'), mongoose = require('mongoose'); app.use(bodyParser.json()); require('./server/routes')(app); app.use('/js', express.static(__dirname + '/client/js')); app.use('/views', express.static(__dirname + '/client/views')); app.use('/bower_components', express.static(__dirname + '/bower_components')); app.use('/node_modules', express.static(__dirname +'/node_modules')); app.listen(3000); console.log('Im Listening...'); exports = module.exports = app; my angular app.js: (function (angular) { 'use strict'; var app = angular.module('eos', [ 'ngRoute', 'ngResource', 'eos.opsCtrl', 'eos.dashboardCtrl' ]); app.config(function ($routeProvider, $locationProvider){ $routeProvider.when( '/', { templateUrl: 'views/dashboard.html', pageName: 'Dashboard', controller: 'dashboardCtrl' } ); $routeProvider.when( '/ops', { templateUrl: 'views/ops.html', pageName: 'Operations', controller: 'opsCtrl' } ); $routeProvider.otherwise({redirectTo: '/'}); $locationProvider.html5Mode(true); }); }(window.angular)); My routes.js (new): var opsController = require('./controllers/opsController'); module.exports = function(app) { //path.join(__dirname, 'client'); // server routes =========================================================== // handle things like api calls // authentication routes app.get('/api/ops', opsController.list); app.post('/api/ops', opsController.create); // frontend routes ========================================================= // route to handle all angular requests app.get('*', function(req, res) { res.sendFile('index.html', { root: './client' }); }); }; The rest is identical. Any suggestions on why it is not loading the content in the ng-view? FINALLY GOT IT TO WORK! 
My server.js was set up wrong. Here is the correct server.js. Notice the position of require('./server/routes')(app); — it needed to be farther down, for what I assume is the compile sequence: var express = require('express'), app = express(), bodyParser = require('body-parser'), mongoose = require('mongoose'), methodOverride = require('method-override'); // get all data/stuff of the body (POST) parameters app.use(bodyParser.json()); // parse application/vnd.api+json as json app.use(bodyParser.json({ type: 'application/vnd.api+json' })); // parse application/x-www-form-urlencoded app.use(bodyParser.urlencoded({ extended: true })); app.use(methodOverride('X-HTTP-Method-Override')); app.use('/js', express.static(__dirname + '/client/js')); app.use('/views', express.static(__dirname + '/client/views')); app.use('/bower_components', express.static(__dirname + '/bower_components')); app.use('/node_modules', express.static(__dirname +'/node_modules')); require('./server/routes')(app); app.listen(3000); console.log('Im Listening...'); exports = module.exports = app; A: I was directed by PareshGami to look over this site, 'setting-up-a-mean-stack-single-page-application'. After following that I was able to get the routing to work. The key to my problem was the ordering of my server.js file and the require('./server/routes')(app); part.
doc_2659
So would Angular work if I give the browser a URL of a local file like C:\temp\index.html, with the js files either at c:\temp or, say, c:\temp\js? So actually, I tried it; here it is, all in one application file (I know it should be separated): <html ng-app="myNoteApp"> <script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js"></script> <body> <div ng-controller="myNoteCtrl"> <h2>My Note</h2> <p><textarea ng-model="message" cols="40" rows="10"></textarea></p> <p> <button ng-click="save()">Save</button> <button ng-click="clear()">Clear</button> </p> <p>Number of characters left: <span ng-bind="left()"></span></p> </div> <script > // was in separate file but pasted in for demo purposes var app = angular.module("myNoteApp", []); </script> <script > // was in separate file but pasted in for demo purposes app.controller("myNoteCtrl", function($scope) { $scope.message = ""; $scope.left = function() {return 100 - $scope.message.length;}; $scope.clear = function() {$scope.message = "";}; $scope.save = function() {alert("Note Saved:" + $scope.message);}; }); </script> </body> </html> The results: it works in Chrome and Firefox with no problems; IE blocks the content initially, but one can allow it to run. A: Yes, you can run a local file, but if you need data off a server, the browser should block it, depending on what version and type of browser you are running. Here is the official AngularJS tutorial explanation, under the PhoneCat Tutorial App: Running the Development Web Server While Angular applications are purely client-side code, and it is possible to open them in a web browser directly from the file system, it is better to serve them from an HTTP web server. In particular, for security reasons, most modern browsers will not allow JavaScript to make server requests if the page is loaded directly from the file system. A: If you need to just display data using an expression like {{mymessage}} inside a div, you don't need a web server. 
But if you need to load template html files using ngView, you need a web server - otherwise it will complain with the following error: Request cannot load file. Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, https, chrome-extension-resource. If loading templates is needed for learning AngularJS routing, I found a web server exe easy to use - HFS. So far it meets my requirements for learning AngularJS. References * *HFS:Introduction *HTTP File Server A: You cannot just access an angular application by the filepath on the local machine because you will get cross origin domain errors. The solution is to install http-server (which requires node.js to be installed). This allows you to create a http-server local to your machine and will allow you to access the Angular application as if it were hosted online for development and test purposes. A: So, the way I've done this is to create a temp service and just load that instead of from a url/file. Example: //tempUser.js angular.module("app").constant("tempUser", { firstname : "Joe", lastname : "Smith" }); //userService.js angular.module("app").factory("userService", function ($q, tempUser) { return { load : load }; function load(id) { //TODO: finish impl return $q.when(tempUser); } }); This way the controller can still work as if you were loading from a web service. angular.module("app").controller("UserDetailCtrl", function (userService) { userService.load().then(function (user) { $scope.user = user; }); }); A: As others have said, it's best to serve it properly over http. However, there are other workarounds. Some editors, like Brackets (click on the lightning bolt in the top right corner while in a file), can serve the code to your browser properly. For others there might be plugins that do it. Update: My suggestion below worked well enough for AngularJS 1, but just FYI it is insufficient for Angular 2. 
Also see Disable same origin policy in Chrome Further, if you're on Chrome you can run it with flags, which means you add some stuff behind the .exe part of the path on a shortcut; options, if you will. Specifically you'd want: --allow-file-access-from-files --allow-file-access --allow-cross-origin-auth-prompt That makes it not throw errors when trying to access files from various origins. There was a plugin for that once, but I couldn't get it to work. Note there are security reasons why this isn't the default, so maybe don't put it on your main shortcut that you use all the time for surfing... Use at your own risk.
doc_2660
This works for one table, =INDEX(Table2[@W],MATCH([@[Franchise ID]],Table2[@[Franchise ID]],FALSE),1) However, when I try to add a second table, it throws an error: =SUM(INDEX(Table2[@W],MATCH([@[Franchise ID]],Table2[@[Franchise ID]],FALSE),1),INDEX(Table224[@W],MATCH([@[Franchise ID]],Table224[@[Franchise ID]],FALSE),1)) How can I make this work? A: This has been answered by the question owner in a comment; posting that answer on behalf of @chris James Champeau. The exact formula is: =SUM(SUMIF(Table2[Franchise ID],[@[Franchise ID]],Table2[@W]),SUMIF(Table22[Franchise ID],[@[Franchise ID]],Table22[@W]),SUMIF(Table224[Franchise ID],[@[Franchise ID]],Table224[@W]))
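The accepted approach — one SUMIF per table, then SUM the results — can be sketched outside Excel to show the intent. The sample data below is made up; the table names are just stand-ins for the Excel tables above:

```python
# Each "table" is a list of (franchise_id, wins) rows; hypothetical data.
table2 = [("A", 3), ("B", 5)]
table22 = [("A", 2), ("C", 7)]
table224 = [("B", 1), ("A", 4)]

def total_wins(franchise_id, *tables):
    """SUMIF each table for one franchise id, then SUM the per-table results."""
    return sum(wins for table in tables for fid, wins in table if fid == franchise_id)

print(total_wins("A", table2, table22, table224))  # 3 + 2 + 4 = 9
```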
doc_2661
<?php require('../../../wp-config.php'); mysql_connect(DB_HOST, DB_USER, DB_PASSWORD); if(isset($_POST['submit'])) { $id=intval($_POST['id']); if (isset($_SESSION['cart'][$id]) && $_SESSION['cart'][$id]['color'] == $_POST['color']) { $_SESSION['cart'][$id]['quantity']++; $link =str_replace( '?action=added_to_cart', '', $_SERVER['HTTP_REFERER'] ); header('location:' . $link . '?action=added_to_cart'); } else { $sql_s="SELECT * FROM wp_posts WHERE ID=($id)"; $query_s=mysql_query($sql_s); if(mysql_num_rows($query_s)!=0) { $row_s=mysql_fetch_array($query_s); if ($_POST['color'] == 'gray') { $price = get_price($id); } else { $price = get_price_colored_stones($id); } $_SESSION['cart'][$row_s['ID']]=array( "quantity" => 1, "price" => $price, "color" => $_POST['color'] ); $link =str_replace( '?action=added_to_cart', '', $_SERVER['HTTP_REFERER'] ); header('location:' . $link . '?action=added_to_cart'); } else { $message="This product id is invalid!"; } } } ?> A: So this is what I did: I made the shopping cart row id look like [id_color]. So now there can be products with the same product_id in two different rows of the shopping cart ('gray' and 'colored'). Later I remove the color with str_replace, so I can get the pure product id with which I can query the product information. And the code: <?php require('../../../wp-config.php'); mysql_connect(DB_HOST, DB_USER, DB_PASSWORD); if(isset($_POST['submit'])) { $id=intval($_POST['id']); if (isset($_SESSION['cart'][$id . '_' . $_POST['color']])) { $_SESSION['cart'][$id . '_' . $_POST['color']]['quantity']++; $link =str_replace( '?action=added_to_cart', '', $_SERVER['HTTP_REFERER'] ); header('location:' . $link . 
'?action=added_to_cart'); } else { $sql_s="SELECT * FROM wp_posts WHERE ID=($id)"; $query_s=mysql_query($sql_s); if(mysql_num_rows($query_s)!=0) { $row_s=mysql_fetch_array($query_s); if (!isset($_POST['color'])){ $_POST['color'] = 'gray'; } if ($_POST['color'] == 'gray') { $price = get_price($id); } else { $price = get_price_colored_stones($id); } $_SESSION['cart'][$row_s['ID'] . '_' . $_POST['color']]=array( "quantity" => 1, "price" => $price, "color" => $_POST['color'], "title" => $row_s['post_title'] ); $link =str_replace( '?action=added_to_cart', '', $_SERVER['HTTP_REFERER'] ); header('location:' . $link . '?action=added_to_cart'); } else { $message="This product id is invalid!"; } } }
doc_2662
How do I properly create an object from a .txt file input, and add that object to an ArrayList? HouseListTester Class package RealEstateListings; import java.util.*; public class HouseListTester { public static void main(String[] args) { //create scanner for user input via console Scanner input = new Scanner(System.in); System.out.println("Welcome to Mike's House Listing"); System.out.println("Please enter the file name of the house list: "); String sourceFolder = "C:\\Users\\micha\\Documents\\eclipse-workspace\\Real Estate Listings\\src\\RealEstateListings\\"; HouseList fileName = new HouseList((sourceFolder+input.next())); System.out.println(); System.out.println("Please enter your search criteria"); System.out.print("Minimum price: "); int minPrice = input.nextInt(); System.out.print("Maximum price: "); int maxPrice = input.nextInt(); System.out.print("Minimum area: "); int minArea = input.nextInt(); System.out.print("Maximum area: "); int maxArea = input.nextInt(); System.out.print("Minimum number of bedrooms: "); int minBedrooms = input.nextInt(); System.out.print("Maximum number of bedrooms: "); int maxBedrooms = input.nextInt(); Criteria userCriteria = new Criteria(minPrice, maxPrice, minArea, maxArea, minBedrooms, maxBedrooms); } } Here is my HouseList class package RealEstateListings; import java.util.*; import java.io.*; public class HouseList { ArrayList<House>houseList; public HouseList(String fileName) { try { //create scanner to read input Scanner sc = new Scanner(new File(fileName)); while(sc.hasNextLine()) { //input reads to parameters address price area numBedrooms House newListing = new House(sc.next(), sc.nextInt(), sc.nextInt(), sc.nextInt()); //add newListing to houseList array houseList.add(newListing); } } catch(FileNotFoundException e) { System.out.println("File was not found."); } } public void printHouses(Criteria c) {} public String getHouse(Criteria C) { return ""; } } The House Class package RealEstateListings; public class House { 
String address; int price,area,numberOfBedrooms; public House(String addr, int salePrice, int saleArea, int numBedrooms) { this.address = addr; this.price = salePrice; this.area = saleArea; this.numberOfBedrooms = numBedrooms; } public int getPrice() {return this.price;} public int getArea() {return this.area;} public int getNumberOfBedrooms() {return this.numberOfBedrooms;} public boolean satisfies(Criteria C) {return true;} public String toString() {return address;} } Here is my console output: Welcome to Mike's House Listing Please enter the file name of the house list: houses.txt Exception in thread "main" java.lang.NullPointerException at RealEstateListings.HouseList.<init>(HouseList.java:18) at RealEstateListings.HouseListTester.main(HouseListTester.java:13) A: The problem has to do with the HouseList class. The ArrayList that was declared was never initialized. To initialize it, houseList = new ArrayList<>(); must be the first line of the HouseList constructor. More on NullPointerException: What is a NullPointerException, and how do I fix it?
doc_2663
When deploying the application, the compiler asked me to reference System.dll (v 2.0) and System.Data (v 2.0), and after referencing them my application takes up too much space (about 35 MB) and my device ran out of memory, because it loads many other libraries like System.Web.dll which take up too much space. Any help on how to reference the IBM.Data.Informix dll correctly? A: You can't use desktop assemblies on the device, even if you had room for them. You'll need to either find a CF-compiled Informix assembly or create one. A: The Informix .NET driver needs the native drivers (the CSDK) to be installed on the machine, and because of that you cannot use the driver on a mobile device or on another computer by copying only the assemblies.
doc_2664
xxx-javadoc.jar.lastUpdated xxx.sources.jar.lastUpdated The issue seems to be the .lastUpdated part. When I look at my project dependencies I can see clearly that IntelliJ looks for xxx-javadoc.jar instead of xxx-javadoc.jar.lastUpdated. How can I make sure that IntelliJ properly downloads and names javadoc/sources? I don't want to manually rename everything and then manually set javadoc/sources through the IntelliJ interface. I think this issue happened when I interrupted the download of sources/documentation. A: The .lastUpdated files are not the jar files themselves, but a mechanism that Maven uses to track when it last updated a file. I.e., the file you should load in IntelliJ is the jar file, not the .lastUpdated file. If an interrupted/corrupted update is causing issues, remove that file along with its .lastUpdated file and download (synchronize in IntelliJ) it again. A: Ok, I have searched around and the problem was probably caused by the interruption of the sources/documentation download. Using this bat file: @echo off setlocal EnableDelayedExpansion set last=? for /f %%I in ('dir /s /b /o:n /a-d "*.lastUpdated"') do ( if !last! NEQ %%~dpI ( set last=%%~dpI echo !last! rd /s /q !last! ) ) goto end :end I managed to remove all the necessary files. Now downloading again. If this happens to you, use the above bat script if you are on Windows.
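A cross-platform equivalent of the batch script above, sketched in Python. This assumes your local Maven repository lives at the path you pass in (typically ~/.m2/repository), and it deletes directories, so run it with dry_run=True first and review the output:

```python
import pathlib
import shutil

def purge_interrupted_downloads(repo, dry_run=True):
    """Remove every directory that contains a *.lastUpdated marker file."""
    repo = pathlib.Path(repo)
    # Collect the parent directories first so we don't mutate the tree mid-scan.
    victims = sorted({marker.parent for marker in repo.rglob("*.lastUpdated")})
    for directory in victims:
        print(("would remove: " if dry_run else "removing: ") + str(directory))
        if not dry_run:
            shutil.rmtree(directory, ignore_errors=True)
    return victims
```

As with the bat script, Maven simply re-downloads the affected artifacts on the next build.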
doc_2665
I've tried adding various classes to the rows and container div, e.g. align-middle and align-items-center, but nothing seems to do anything. <div id="header"> <p>< BACK TO HOMEPAGE</p> </div> <div id="main-content" style="height:85vh;"> </div> <div id="menu-bar"> <div class="container-fluid text-center "> <div class="row border-bottom border-dark"> <div class="col"> Preview text </div> <div class="col"> Preview text </div> <div class="col"> Preview text </div> <div class="col"> Preview text </div> <div class="col"> Preview text </div> <div class="col col-3"> 6 </div> <div class="col col-2"> 7 </div> <div class="col col-2"> <button type="button" class="btn btn-dark">Play Video</button> </div> </div> <div class="row" style="font-size: smaller;"> <div class="col col-3"> 1 </div> <div class="col"> Preview text </div> <div class="col"> Preview text </div> <div class="col col-3"> Colour </div> <div class="col col-4"> Preview text </div> </div> </div> </div> A: Change <div class="row border-bottom border-dark"> To <div class="row border-bottom border-dark align-items-center"> Here's a JSFiddle with that change. I've added a border to each div, so you can see that it is centered vertically. http://jsfiddle.net/bqfa74y0/ If you want the lower row to be centered vertically, just add align-items-center to the class of the row. I would also strongly recommend that you change: <p>< BACK TO HOMEPAGE</p> To <p>&lt; BACK TO HOMEPAGE</p> because < has a special meaning in HTML (it's for starting/ending tags), so using it as text could have unexpected consequences. A: Did you mean this? 
#menu-bar { height: 200px; width: 100%; } .col{ display: inline-block; width: 11%; margin-left: auto; margin-right: auto; text-align: center; height: 50px; } <div id="header"> <p> BACK TO HOMEPAGE</p> </div> <div id="main-content" style="height:85vh;"> </div> <div id="menu-bar"> <div class="container-fluid text-center "> <div class="row border-bottom border-dark"> <div class="col"> Preview text </div> <div class="col"> Preview text </div> <div class="col"> Preview text </div> <div class="col"> Preview text </div> <div class="col"> Preview text </div> <div class="col col-3"> 6 </div> <div class="col col-2"> 7 </div> <div class="col col-2"> <button type="button" class="btn btn-dark">Play Video</button> </div> </div> <div class="row" style="font-size: smaller;"> <div class="col col-3"> 1 </div> <div class="col"> Preview text </div> <div class="col"> Preview text </div> <div class="col col-3"> Colour </div> <div class="col col-4"> Preview text </div> </div> </div> </div>
doc_2666
alt text http://img10.imageshack.us/img10/1803/textboxlistfacebook.png This has just been ported (from scratch) into MooTools. Does anyone know if this exists in jQuery? edit: ahh! autocomplete was the keyword I was missing. cheers! A: devthought have also ported it to jQuery * *TextboxList A: A quick Google search revealed the following: * *jquery facebook autocomplete *FCBKcomplete v 2.01 *Facelift
doc_2667
data row1 5 row2 4 row3 12 row4 6 row5 7 I want to make a comparison between current rows and following rows, as this table displays. compare YES NO row1<row2 0 row1<row3 1 row1<row4 1 row1<row5 1 row2<row3 1 row2<row4 1 row2<row5 1 row3<row4 0 row3<row5 0 row4<row5 1 Also, I've typed some code in R, with a for loop. for (i in 1:nrow(data)){ if (data[i,] <data[(i+1):5,]){ print("1") } else { print ("0") } } However, I get an error: missing value where TRUE/FALSE needed Can anyone help me to solve this problem? Or, maybe the apply function is better? Sorry for my poor English, and big thanks for your precious time! A: I'm not quite clear on what your final goal is; your expected output looks like an awkward data format. I assume that this is to adhere to some form of custom/legacy data formatting requirements. That aside here, you could use outer to do all pairwise comparisons, and then do some data reshaping library(tidyverse) outer(df$data, df$data, FUN = function(x, y) x < y) %>% as.data.frame() %>% rowid_to_column("rowx") %>% gather(rowy, val, -rowx) %>% mutate( rowx = paste0("row", rowx), rowy = sub("V", "row", rowy)) %>% filter(rowx < rowy) %>% unite(compare, rowx, rowy, sep = "<") %>% transmute( compare, Yes = if_else(val == TRUE, 1, 0), No = if_else(val == FALSE, 1, 0)) # compare Yes No #1 row1<row2 1 0 #2 row1<row3 1 0 #3 row2<row3 1 0 #4 row1<row4 1 0 #5 row2<row4 0 1 #6 row3<row4 0 1 #7 row1<row5 1 0 #8 row2<row5 1 0 #9 row3<row5 0 1 #10 row4<row5 1 0 Sample data df <- read.table(text = "data 1 0.05493405 2 0.07844055 3 0.12901255 4 0.0655028 5 0.078554925", header = T)
doc_2668
How can I check if a Git branch (of some repo) is locked, without trying to push to that branch? Thanks. A: Git has no intrinsic concept of branch locking or protected branches. Git can attempt to push to a branch, and that operation can either succeed or fail, possibly with an error message. However, there's no way of querying with Git whether an operation would succeed, since in many cases the outcome depends on the data pushed. Git doesn't provide a dry-run mechanism in the push API, since uploading a large amount of data just to throw it away would be slow and wasteful. If you want to know whether a branch is protected, you'd have to use the API of your particular hosting service to see whether it's protected. If you have multiple hosting services, then you'll probably need to write a script that abstracts over them. For GitHub, the API documentation covers the branch protection options.
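For the GitHub case mentioned in the answer, a hedged sketch of such a check in Python using only the standard library (the endpoint is GitHub's documented branch-protection REST API; other hosts would need their own adapter, and a 404 can also mean your token lacks access):

```python
import json
import urllib.error
import urllib.request

def branch_protection_url(owner, repo, branch):
    # GitHub REST API: GET /repos/{owner}/{repo}/branches/{branch}/protection
    return (f"https://api.github.com/repos/{owner}/{repo}"
            f"/branches/{branch}/protection")

def is_protected(owner, repo, branch, token):
    """Return True if the branch has protection rules, False if not."""
    req = urllib.request.Request(
        branch_protection_url(owner, repo, branch),
        headers={"Authorization": f"token {token}"})
    try:
        with urllib.request.urlopen(req) as resp:
            json.load(resp)  # protection settings; presence is enough here
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # GitHub returns 404 for unprotected branches
            return False
        raise
```

A wrapper script could dispatch to a function like this per hosting service, which is the abstraction the answer suggests.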
doc_2669
* *What browser should I install for this purpose? I am having trouble getting Chromium set up. I tried this set of instructions: https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=157049 *How do I use web.skype.com to start a Skype call and send a video/audio stream if I'm on a headless display and can only use the command line? Thanks! A: Your best bet is to use the web API in a headless environment to connect to Skype; in this case you can use, for example, a language like Node.js. You should take a look at open projects like this one. It is a project like the one you want to create: starting from Skype for Web, it connects from the command line.
doc_2670
How do I write a VBA script that searches for duplicates? The way of searching for duplicates is that if any three fields of the record match, then the record is highlighted in a separate colour. The fields that are supposed to be matched are not specified; they can be any three fields. For example, if INVOICE NUMBER, SHIPMENT NUMBER and QUANTITY match in some records then they are shown as duplicates and highlighted. Similarly, if INVOICE NUMBER, QUANTITY and DATE match in a few records then they are listed as duplicates as well. So in the database, out of the 5 fields, if any 3 match then they are listed as duplicates and highlighted in a different colour. Can anyone please help me write a VBA script that does that? A: There isn't really an easy way to do this. With five fields, there are 10 possible ways that any 3 of those fields can be evaluated. So, you could create a macro that basically cycles through all of the columns, searching for duplicates, but it won't be very easy. I have done something like this in the past, but relied instead on a formula in a cell to do the work. I used the SUMPRODUCT function to look up values. Here is what it would look like for you. =SUMPRODUCT((A$2:A2=A3)*(B$2:B2=B3)*(C$2:C2=C3)) + SUMPRODUCT((A$2:A2=A3)*(B$2:B2=B3)*(D$2:D2=D3)) + SUMPRODUCT((A$2:A2=A3)*(B$2:B2=B3)*(E$2:E2=E3)) + SUMPRODUCT((A$2:A2=A3)*(C$2:C2=C3)*(D$2:D2=D3)) + SUMPRODUCT((A$2:A2=A3)*(C$2:C2=C3)*(E$2:E2=E3)) + SUMPRODUCT((A$2:A2=A3)*(D$2:D2=D3)*(E$2:E2=E3)) + SUMPRODUCT((B$2:B2=B3)*(C$2:C2=C3)*(D$2:D2=D3)) + SUMPRODUCT((B$2:B2=B3)*(C$2:C2=C3)*(E$2:E2=E3)) + SUMPRODUCT((B$2:B2=B3)*(D$2:D2=D3)*(E$2:E2=E3)) + SUMPRODUCT((C$2:C2=C3)*(D$2:D2=D3)*(E$2:E2=E3)) Note that this assumes that your 5 fields are in columns A to E, and never change. The above formula is also designed to be put in row 3 of whatever column you want it in. Then just copy down the formula to have it auto-adjust values for the current row.
Row 2 doesn't need it, since that should be your first record (assuming there are headers of course). Oh, since I didn't mention it, if this formula returns 1 or more, that indicates duplicate data. A 0 indicates that it is unique (so far). Here is some pseudocode to assist with creating a macro. For each row from 2 to currentRow - 1 Dim numMatches as integer numMatches = 0 if activeSheet.Range("A" & row) = activeSheet.Range("A" & currentRow) then numMatches = numMatches + 1 Endif 'do the above if statement for each compare if numMatches >= 3 then 'format the currentRow to indicate duplicate endif Next Row
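As a language-neutral sketch of the pairwise idea in the pseudocode above (field values and the threshold of 3 matching fields come from the question; the function and record layout are hypothetical):

```python
from itertools import combinations

def find_duplicates(records, threshold=3):
    """Return index pairs (i, j) where at least `threshold` fields match.

    Each record is a tuple of field values in a fixed column order,
    e.g. (invoice, shipment, quantity, date, ...).
    """
    dupes = []
    for i, j in combinations(range(len(records)), 2):
        matches = sum(a == b for a, b in zip(records[i], records[j]))
        if matches >= threshold:
            dupes.append((i, j))
    return dupes
```

Like the cell formula, this compares every pair of rows, but it counts matching fields directly instead of enumerating all 10 three-field combinations.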
doc_2671
Here's my code. var main = function() { $('.login').click(function() { $('.dropdown-menu').toggle(); }); } $(document).ready(main); .nav a { color: #5a5a5a; font-size: 11px; font-weight: bold; padding: 50px; text-transform: uppercase; } .nav li { display: inline; } #left { float: left; } #right { float: right; } .jumbotron { height: 500px; background-repeat: no-repeat; background-size: cover; } .jumbotron .container { position: relative; top: 100px; } .login { font-size: 11px; } .dropdown-menu { font-size: 16px; margin-top: 5px; min-width: 105px; } <!DOCTYPE html> <html> <head> <link rel="stylesheet" href="http://s3.amazonaws.com/codecademy-content/courses/ltp/css/bootstrap.css"> <link rel="stylesheet" type="text/css" href="cssfile.css" /> <title>Fandoms and Stuff</title> </head> <body> <!--ADD ENDING BODY TAG--> <div class="nav"> <ul id="left"> <li><a href="HomePage.html">Home</a> </li> <li><a href="#">About</a> </li> <li><a href="Fandoms.html">Fandoms</a> </li> </ul> <ul id="right"> <li class="dropdown"> <a href="#" class="login">Log In</a> <ul class="dropdown-menu"> <li><a href="#">Test</a> </li> </ul> </li> <li><a href="#">Register</a> </li> </ul> </div> <div class="jumbotron"> <div class="container"> <h1>Welcome</h1> <p>(this is where I put my long description.)</p> </div> </div> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script> </body> </html> A: The padding in this class is blocking your access to the link "login". If you make the padding: 0 you can then click the link and get a response. I did not look further into the toggle but the current problem you have is not being able to click the link. .nav a { color: #5a5a5a; font-size: 11px; font-weight: bold; padding: 50px; text-transform: uppercase; } A: I fixed it. I was being stupid. I added <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script> <script type="text/javascript" src="app.js"></script> to the <head> element and it worked fine. 
Thanks for trying to help me out! A: I don't see display: none on .dropdown-menu. Try adding display: none to your .dropdown-menu so it will be display: none when page loads and change it to display: block when you toggle it by clicking .login.
doc_2672
index <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="initial-scale=1, maximum-scale=1, user-scalable=no, width=device-width"> <title></title> <link href="lib/ionic/css/ionic.css" rel="stylesheet"> <link href="css/style.css" rel="stylesheet"> </head> <body ng-app="starter"> <ion-nav-view></ion-nav-view> <script src="lib/ionic/js/ionic.bundle.js"></script> <script src="cordova.js"></script> <script src="app/application.js"></script> <script src="app/controllers/userDataCtr.js"></script> </body> </html> app angular.module('starter', ['ionic']) .run(function($ionicPlatform) { $ionicPlatform.ready(function() { if(window.cordova && window.cordova.plugins.Keyboard) { cordova.plugins.Keyboard.hideKeyboardAccessoryBar(true); cordova.plugins.Keyboard.disableScroll(true); } if(window.StatusBar) { StatusBar.styleDefault(); } }); }) .config(function ($stateProvider, $urlRouterProvider) { $stateProvider .state('home', { url: '/home', controller: 'userDataCtr', templateUrl: 'app/views/home.html' }) ; // if none of the above states are matched, use this as the fallback $urlRouterProvider.otherwise('/home'); }); controller (function() { 'use strict'; angular .module('starter') .controller('userDataCtr', userDataCtr); userDataCtr.$inject=['$state','$window']; function userDataCtr($state,$window) { var vm = this; vm.test = test; function test() { vm.testing = "Does work?"; } } })(); home view <div ng-init="userDataCtr.test()">{{testing}}</div> When I start application I get nothing on screen, blank page. It seems like function is not even started. Does anyone know where I'm wrong? 
Thanks. Edit: Now I figured out that if I change this: (function() { 'use strict'; angular .module('starter') .controller('userDataCtr', userDataCtr); userDataCtr.$inject=['$state','$window']; function userDataCtr($state,$window) { var vm = this; vm.test = test; function test() { vm.testing = "Does work?"; } } })(); to this: (function() { 'use strict'; angular .module('starter') .controller('userDataCtr', userDataCtr); userDataCtr.$inject=['$state','$window','$scope']; function userDataCtr($state,$window,$scope) { $scope.test = function() { $scope.testing = "Does work?"; } } })(); Works !! But I don't want to use $scope. Does anyone know why the first solution fails and the other works?
doc_2673
I need to install some python packages in my inference.py file, such as gensim. I put a requirements.txt file in the same folder as train.py and inference.py. The problem is that the requirements.txt is not being packed in the model.tar.gz. That's why, although the training and creating the endpoint work fine, when I check the logs of the deployed endpoint I see the following error: ModuleNotFoundError: No module named 'gensim' This is a part of my script for training and registering the model. from sagemaker.pytorch.estimator import PyTorch from sagemaker.workflow.step_collections import RegisterModel from sagemaker.workflow.steps import ( ProcessingStep, TrainingStep, ) train_estimator = PyTorch(entry_point= 'train.py', source_dir= BASE_DIR, instance_type= "ml.m5.2xlarge", instance_count=1, role=role, framework_version='1.8.0', py_version='py3', ) step_train = TrainingStep( name="TrainStep", estimator=train_estimator, inputs={ "train": sagemaker.TrainingInput( s3_data=step_process.properties.ProcessingOutputConfig.Outputs[ "train_data" ].S3Output.S3Uri, content_type= 'text/csv', ), }, ) step_register = RegisterModel( name="RegisterStep", estimator= train_estimator, model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts, content_types=["application/json"], response_types=["application/json"], inference_instances=["ml.t2.medium", "ml.m5.2xlarge"], transform_instances=["ml.m5.large"], model_package_group_name=model_package_group_name, approval_status=model_approval_status, source_dir = BASE_DIR, entry_point= os.path.join(BASE_DIR, "inference.py"), depends_on = [step_train] ) This is the structure of my files: -abalone - __init__.py - train.py - inference.py - requirements.py - preprocess.py - evaluate.py - pipeline.py BASE_DIR refers to the abalone folder.
In the model.tar.gz I see: - model.pth - model.pth.wv.vectors_ngrams.npy - code - __pycache__ - train.py - _repack_model.py - inference.py - preprocess.py - evaluate.py - __init__.py - pipeline.py You can see that it contains everything except the requirements.txt file. In the SageMaker documentation it says: "The PyTorch and PyTorchModel classes repack model.tar.gz to include the inference script (and related files), as long as the framework_version is set to 1.2 or higher." You can see that although my framework_version is higher than 1.2, it still doesn't pack the requirements.txt file in the model.tar.gz. Can someone please help me to fix this issue? A: A workaround can be installing the required packages in the inference.py using import os os.system("pip install package1 package2 ...") To troubleshoot the issue, I'd recommend deploying the estimator train_estimator using train_estimator.deploy, which will create the model, endpoint configuration and the endpoint. Then, check CloudWatch logs to see if it's still failing to pack requirements.txt. The requirements.txt might be used as an environment variable attached with the model created by estimator.deploy and because you use RegisterModel, it ignores that parameter. A: In your screenshot you have requirements.py instead of requirements.txt in your base folder
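A slightly fuller sketch of that install-at-startup workaround for the top of inference.py (the package name comes from the question; note this is a stopgap, not a fix: it adds container start-up latency and requires network access from the endpoint):

```python
import subprocess
import sys

def build_pip_install(packages):
    """Build the pip command used to install packages at inference start-up."""
    return [sys.executable, "-m", "pip", "install", "--no-cache-dir", *packages]

def install(packages):
    # Hypothetical helper; call it once at module import time in inference.py,
    # before importing the packages it installs.
    subprocess.check_call(build_pip_install(packages))
```

Used as `install(["gensim"])` near the top of inference.py, before `import gensim`. Fixing the packaging (so requirements.txt ships inside model.tar.gz/code) remains the better long-term solution.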
doc_2674
example: class DB{ ... function query($sqlStatementString,$data){ $this->sth->prepare($sqlStatementString); $this->sth->execute($data); } ... } OR class User{ ... function doSomething(){ $sthForDoSomething->prepare(...); $sthForDoSomething->execute(...); } ... function jump(){ $sthForJump->prepare(...); $sthForJump->execute(...); } } Are there memory/speed implications of using one method over the other? thanks! A: If you are ever going to issue the same query more than once, with only different parameters bound to the placeholders, you should try to structure your code in such a way that you can reuse the statement. Calling prepare causes the database to create and cache an execution plan for the query which you will reuse in future calls to execute, making those queries faster.
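The same principle the answer describes, sketched in Python's standard sqlite3 module purely for illustration (PHP's PDO behaves analogously): one parameterized statement, executed repeatedly with different bound data, so the driver can reuse its cached compiled statement instead of re-parsing the SQL each time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# One parameterized statement, executed many times with different values;
# sqlite3 keeps a per-connection statement cache, so repeated executions
# of the same SQL string avoid re-compiling it.
sql = "INSERT INTO users (name) VALUES (?)"
for name in ("alice", "bob"):
    conn.execute(sql, (name,))

rows = conn.execute("SELECT name FROM users ORDER BY id").fetchall()
```

Which maps onto the question's two designs: the shared query() helper works fine, as long as callers reuse the same SQL string with placeholders rather than building a fresh statement per call.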
doc_2675
if (t instanceof RuntimeException) { throw (RuntimeException) t; } else if (t instanceof Error) { throw (Error) t; } else { throw new RuntimeException(t); } However, is there any existing utility call that does this already? (I am catching throwables because AssertionErrors are Errors.) Edit: To be honest, I don't really want to wrap the exception, so any trick that would allow me to throw any throwable (including checked exceptions) without declaring throws is acceptable. A: The great Guava library has a method, Throwables.propagate(Throwable), that does exactly what your code is doing: JavaDoc From the doc: Propagates throwable as-is if it is an instance of RuntimeException or Error, or else as a last resort, wraps it in a RuntimeException then propagates. A: Yes, there is a way to write a method that will avoid wrapping your checked exceptions. For this use case it is a perfect match, although you should definitely be very careful with it, since it can easily confuse the uninitiated. Here it is: @SuppressWarnings("unchecked") public static <T extends Throwable> void sneakyThrow(Throwable t) throws T { throw (T) t; } and you'd use it as catch (Throwable t) { sneakyThrow(t); } As commented by Joachim Sauer, in certain cases it helps convince the compiler that the line calling sneakyThrow causes the method to terminate. We can just change the declared return type: @SuppressWarnings("unchecked") public static <T extends Throwable> RuntimeException sneakyThrow(Throwable t) throws T { throw (T) t; } and use it like this: catch (Throwable t) { throw sneakyThrow(t); } For educational purposes it is nice to see what's going on at the bytecode level.
The relevant snippet from javap -verbose UncheckedThrower: public static <T extends java.lang.Throwable> java.lang.RuntimeException sneakyThrow(java.lang.Throwable) throws T; descriptor: (Ljava/lang/Throwable;)Ljava/lang/RuntimeException; flags: ACC_PUBLIC, ACC_STATIC Code: stack=1, locals=1, args_size=1 0: aload_0 1: athrow Exceptions: throws java.lang.Throwable Signature: #13 // <T:Ljava/lang/Throwable;>(Ljava/lang/Throwable;)Ljava/lang/RuntimeException;^TT; Note there is no checkcast instruction. The method even legitimately declares to throw T, which can be any Throwable. A: Perhaps setDefaultUncaughtExceptionHandler(Thread.UncaughtExceptionHandler) can help you. There you can define a global exception handler for all exceptions that occur.
doc_2676
interface IServiceProvider { bool Authenticate(string username, string password); } class ABCServiceProvider : IserviceProvider { bool Authenticate(string username, string password) { // implementation} } class EFGServiceProvider : IserviceProvider { bool Authenticate(string username, string password) { // implementation} } and so on... now I've came across a service provider (let's say XYZServiceProvider) that needs some additional information (agentid) for authentication. something like this... class XYZServiceProvider { bool Authenticate(string username, string password, int agentid) { // implementation} } Now if I provide another function for Authenticate in my interface with 3 parameters, and throw not implemented exception in all the classes except for XYZServiceProvider, wouldn't it violate Interface segregation principle? I've similar situation in some of my other part of the code aswell. Can anyone please tell me whats the best way to implement this type of scenrio? I would be really very thankful. A: The best way to solve this would probably be to require agentId in the interface, and to simply ignore it in the cases of ABC and DEF where they don't need it. That way, the consuming class still wouldn't know the difference. Actually it's the Liskov Substitution Principle that is most important if the ABC, DEF and XYZ providers are to be used interchangeably; "Given a class A that is depended upon by class X, X should be able to use a class B derived from A without knowing the difference". The Interface Segregation Principle basically says that an interface should not contain members that any of its consumers do not need, because if the definition of those members were to change, classes that don't even use that method would have to be recompiled because the interface they depended on has changed. 
While this is relevant (you do have to recompile all consumers of IServiceProvider if you add an overload), you will have to do that anyway if you change Authenticate()'s signature, and of more pressing concern from a maintenance standpoint is that if you added an overload of Authenticate(), your consumers now have to know which overload they need to use. That requires your consuming classes to know the difference between implementations of a common interface, violating LSP. It's never a problem providing more information than a particular provider needs, but there would be a problem using XYZ from a usage that only provides two inputs. To avoid those problems, you would always use the three-parameter overload, so why have the two-parameter one at all? Now, if current usages of IServiceProvider are in areas that don't have and don't care about agentId, and therefore it would be difficult to begin providing it, then I would recommend an Adapter that the concrete XYZ provider plugs into, that implements your current IServiceProvider, and makes the new provider work like the old ones by providing the agentId through some other means: public class XYZAdapter: IServiceProvider { private readonly XYZServiceProvider xyzProvider; public XYZAdapter(XYZServiceProvider provider) { xyzProvider = provider; } public void Authenticate(string username, string password) { xyzProvider.Authenticate(username, password, GetAgentId()); } public int GetAgentId() { //Retrieve the proper agent Id. It can be provided from the class creator, //retrieved from a known constant data source, or pulled from some factory //method provided from this class's creator. Any way you slice it, consumers //of this class cannot know that this information is needed. } } If this is feasible, it meets both LSP and ISP; the interface doesn't have to change to support LSP, therefore preventing the scenario (recompiling and redistributing dependencies) that ISP generally tries to avoid. 
However it increases class count, and forces new functionality in the Adapter to correctly get the needed agentId without its dependent having to provide anything it wouldn't know about through the IServiceProvider interface.
doc_2677
plot(b$pos,b$log_p,col==ifelse(b$pos==c(14824849,13920386,14837470),90,100), pch=19, xlab='Chromosome 21 position', ylab='-log10(p)') The plot produced, only show one point highlighted red with the following warning message: In b$pos == c(14824849, 13920386,14837470) : longer object length is not a multiple of shorter object length A: OK, the issue is likely to be your condition in the ifelse. If you attempt the condition (b$pos==c(14824849,13920386,14837470)) outside of your ifelse() you will get an error message along the lines of: longer object length is not a multiple of shorter object length If you change the condition to: b$pos %in% c(14824849,13920386,14837470) You will get a vector of TRUE/FALSE values determining whether each entry in b$pos is present in the vector (14824849,13920386,14837470) rather than whether the entries in b$pos are equal to c(14824849,13920386,14837470). x = c(49, 7, 66, 51, 43, 70, 35, 53, 6, 29) y = c(10, 98, 44, 31, 37, 14, 64, 84, 4, 34) x %in% c(6, 7) [1] FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE plot(x, y, col=ifelse(x %in% c(6, 7), 'red', 'blue')) Now this dataset has 10 x values, if you were to write this: plot(x, y, col=ifelse(x == c(1, 7), 'red', 'blue')) This would work fine, the x values would be compared against 1 and 7 alternately e.g: 49 == 1 ? 7 == 7 ? 66 == 1? 51 == 7? .... etc etc. The error message was saying that your vector length of 3 did not exactly go into the length of the b$pos. A: Within the tidyverse and ggplot you can try library(tidyverse) tibble(x = c(49, 7, 66, 51, 43, 70, 35, 53, 6, 29), y = c(10, 98, 44, 31, 37, 14, 64, 84, 4, 34), gr=x %in% c(6, 7)) %>% ggplot(aes(x,y, col=gr)) + geom_point(size=2) + ggalt::geom_encircle(data= . %>% filter(gr), color="green", s_shape=0) + theme_bw() Using ggalt::geom_encircle function you can draw a circle around your points of interest.
doc_2678
from following link link when I inserted this into my code i got following Error: The filename try.xlsx is not readable. My code is: if (file_exists($filepath)) { echo "File present"; } else { die('The file ' . $filename . ' was not found'); } $data = new Spreadsheet_Excel_Reader($filename,false); $data->read($filename); $data->val(1, 'A'); echo $data; So after searching in Google I got link that is Here After Following this also i am getting same error. So can any one help me, where I am going wrong? Thank you. A: PEAR SEW cannot read OfficeOpenXML (.xlsx) format files, only the older BIFF (.xls) format files. If you want to read .xlsx files, then you need a reader library that does support that format such as PHPExcel
doc_2679
Enter the key: <input type="text" name="key" size="35" id="q17" autocomplete="off"> <input id="charsLeft" type="text" class="count1" disabled> <br /><br /> Plaintext: <input type="text" name="key" size="35" id="q18" autocomplete="off"> <input id="charsLeft2" autocapitalize="off" autocorrect="off" type="text" class="count" disabled> JavaScript: $(document).ready(function() { // Count key $("#q17").keyup(function() { $('#charsLeft').val($(this).val().length); }); // Count plaintext $("#q18").keyup(function() { $('#charsLeft2').val($(this).val().length); }); // Compare key|plaintext if ($('#charsLeft').val() < $('#charsLeft2').val()) { alert('Key is shorter than the plaintext'); } }); I am trying to compare the values of the two input fields (charsLeft and charsLeft2). What am I missing here? Here is a fiddle: http://jsfiddle.net/2mgzn4pL/ A: You're comparing strings, not integers. You can use parseInt() to convert them. Try this: if (parseInt($('#charsLeft').val(), 10) < parseInt($('#charsLeft2').val(), 10)) { alert('Key is shorter than the plaintext'); } Updated fiddle A: .val() returns a string. You should use parseInt or parseFloat. A: There a two problems with your code: * *You're comparing string values with .val() not string lengths *Your comparison code is executed on page load - which means only when the page has loaded you'll see the result, but on this event the result is false and nothing is being alerted. For the first problem you should compare string lengths this way $('#charsLeft').val().length < $('#charsLeft2').val().length. For the second problem you should put your comparison code in a button's click event handler function or something in this sort. Another option is to use the focusout event of the second input and attach an event handler with .focusout().
doc_2680
(define myob% (class object% (super-new) (init-field val) (define/public (getval) val) (define/public (setval v) (set! val v)) )) (define ob1 (make-object myob% 5)) (send ob1 getval) (send ob1 setval 10) (send ob1 getval) Output: 5 10 Following regex also works well: (define sl (regexp-match #px"^(.+)[.]([^.]+)$" "ob1.getval")) sl Output: '("ob1.getval" "ob1" "getval") I am trying to make a fn foo which should work like 'send' but take arguments in form of (foo ob1.getval) or (foo ob1.setval 10) . Following macro is not working: (define-syntax foo (syntax-rules () ((_ sstr ...) (define sl (regexp-match #px"^(.+)[.]([^.]+)$" (symbol->string sstr))) (send (string->symbol(list-ref sl 1)) (string->symbol(list-ref sl 2)) ...)))) (foo ob1.getval) The error is: syntax-rules: bad syntax in: (syntax-rules () ((_ sstr ...) (define sl (regexp-match #px"^(.+)[.]([^.]+)$" (symbol->string sstr))) (send (list-ref sl 1) (list-ref sl 2) ...))) Where is the error and how can this be corrected? A: To use dot notation like this, you'll need to define a new #lang language, with its own reader. There are several tools to help with this, and I'll be using one of them, syntax/module-reader, to define #lang send-dot, which once defined can be used like this: #lang send-dot (define o (new (class object% (super-new) (define/public (f x) x)))) (o.f "hellooo") In the latest snapshot version of Racket, you can use the read-cdot option. Make sure you're on the latest snapshot version, since in 6.6, it's completely broken. One way to define a #lang is by declaring a reader submodule. Make a directory called send-dot, and add a file called main.rkt. This file should provide everything from racket. #lang racket (provide (all-from-out racket)) This doesn't define a #lang yet. But to try it out, you can go to DrRacket's File menu, click on Package Manager, and in the Package Source field, enter the path to the send-dot directory. 
Once you do that, you should be able to use #lang s-exp send-dot in another file just like you would #lang racket. To define a reader for this language and make it a real #lang language, you can add a reader submodule that uses syntax/module-reader as its language. #lang racket (provide (all-from-out racket)) ;; This submodule defines the reader for the language (module reader syntax/module-reader send-dot) Now you should be able to use #lang send-dot just like #lang racket. Now you need to do two more things. One, turn on the read-cdot option so that (o.method args ...) is translated to ((#%dot o method) args ...), and Two, define a #%dot macro so that ((#%dot o method) args ...) is equivalent to (send o method args ...). For the first thing, you can use the #:wrapper1 option, using parameterize to turn read-cdot on. #lang racket (provide (all-from-out racket)) ;; This submodule defines the reader for the language (module reader syntax/module-reader send-dot #:wrapper1 (lambda (thunk) ;; turns on the read-cdot option, ;; which will turn o.method into (#%dot o method), ;; and (o.method args ...) into ((#%dot o method) args ...) (parameterize ([read-cdot #true]) (thunk)))) For the second thing, you need to define a #%dot macro. o.method, or (#%dot o method), needs to be a function that calls the method, so you can use (lambda args (send/apply o method args)). #lang racket (provide #%dot (all-from-out racket)) ;; transforms (#%dot o method) into a function that calls the method ;; so that ((#%dot o method) args ...) will be roughly equivalent to ;; (send o method args ...) (define-syntax-rule (#%dot obj-expr method-id) (let ([obj obj-expr]) (lambda args (send/apply obj method-id args)))) ;; This submodule defines the reader for the language (module reader syntax/module-reader send-dot #:wrapper1 (lambda (thunk) ;; turns on the read-cdot option, ;; which will turn o.method into (#%dot o method), ;; and (o.method args ...) into ((#%dot o method) args ...) 
(parameterize ([read-cdot #true]) (thunk)))) Now you should be able to use #lang send-dot like this: #lang send-dot (define o (new (class object% (super-new) (define/public (f x) x)))) (o.f "hellooo")
doc_2681
A: I'd use LinearLayout with Horizontal orientation. Place two TextView object together and then use xml styling to set their background. Like so: activity_main.xml: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".MainActivity"> <LinearLayout android:padding="5dp" android:layout_width="wrap_content" android:layout_height="wrap_content" android:background="@drawable/layout_background"> <ToggleButton android:layout_gravity="center" android:gravity="center" android:layout_marginEnd="5dp" android:layout_marginStart="5dp" android:background="@drawable/textview_back" android:layout_width="wrap_content" android:layout_height="wrap_content" android:textOn="first_text_view" android:textOff="first_text_view"/> <ToggleButton android:layout_marginEnd="5dp" android:layout_marginStart="5dp" android:layout_gravity="center" android:gravity="center" android:background="@drawable/textview_back" android:layout_width="wrap_content" android:layout_height="wrap_content" android:textOn="second_text_view" android:textOff="second_text_view"/> </LinearLayout> </LinearLayout> textview_back.xml <?xml version="1.0" encoding="utf-8"?> <selector xmlns:android="http://schemas.android.com/apk/res/android"> <item android:state_checked="true" android:drawable="@drawable/textview_back_selected" /> <item android:state_checked="false" android:drawable="@drawable/textview_back_unselected" /> </selector> textview_back_selected <?xml version="1.0" encoding="utf-8"?> <selector xmlns:android="http://schemas.android.com/apk/res/android"> <item android:state_checked="true" android:drawable="@drawable/textview_back_selected" /> <item android:state_checked="false" android:drawable="@drawable/textview_back_unselected" /> </selector> textview_back_unselected <?xml 
version="1.0" encoding="utf-8"?> <shape xmlns:android="http://schemas.android.com/apk/res/android" android:shape="rectangle"> <size android:width="100dp" android:height="20dp"/> <corners android:radius="15dp"/> <solid android:color="#dddddd"/> </shape> layout_background <?xml version="1.0" encoding="utf-8"?> <shape xmlns:android="http://schemas.android.com/apk/res/android" android:shape="rectangle"> <size android:width="100dp" android:height="20dp"/> <corners android:radius="15dp"/> <solid android:color="#dddddd"/> </shape> This should be the result. Hope this is close enough. Note that you'll have to handle, in your MainActivity.java code, toggling one button off once the other one is clicked. A: You can use the Material button toggle group <com.google.android.material.button.MaterialButtonToggleGroup android:id="@+id/groupToggleButton" android:layout_width="wrap_content" android:layout_height="wrap_content" app:singleSelection="true"> <Button android:id="@+id/button1" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Male" android:layout_marginStart="5dp" style="?attr/materialButtonOutlinedStyle"/> <Button android:id="@+id/button2" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Female" android:layout_marginStart="10dp" style="?attr/materialButtonOutlinedStyle"/> </com.google.android.material.button.MaterialButtonToggleGroup> Java: MaterialButtonToggleGroup materialButtonToggleGroup = findViewById(R.id.groupToggleButton); and you can add a listener for any change of selection: materialButtonToggleGroup.addOnButtonCheckedListener(new MaterialButtonToggleGroup.OnButtonCheckedListener() { @Override public void onButtonChecked(MaterialButtonToggleGroup group, int checkedId, boolean isChecked) { if (isChecked) { if (checkedId == R.id.button1) { //.. } } } });
doc_2682
I would like to create a multidim array with the following structure $x[index]['word']="house" $x[index]['number']=2,5,7,1,9 where index is the first dimension from 0 to... n second dimension has two fields "word" and "number" and each of these two fields holds an array (the first with strings, the second with numbers) I do not know how to declare this $x I've tried with $x = @(()),@(@()) - doesn't work or $x= ("word", "number"), @(@()) - doesn't work either or $x = @(@(@(@()))) - nope Then I want to use this array like this: $x[0]["word"]= "bla bla bla" $x[0]["number"]= "12301230123" $x[1]["word"]= "lorem ipsum" $x[2]["number"]=... $x[3]... $x[4]... The most frequent errors are Array assignment failed because index '0' was out of range. Unable to index into an object of type System.Char/Int32 I would like to accomplish this using arrays [][] or jagged @() arrays, but no .NET [,] stuff. I think I'm missing something. A: If I understood you correctly, you're looking for an array of hashtables. You can store whatever you want inside an object-array, so store hashtables that you can search with words or numbers as keys. Ex: $ht1 = @{} $ht1["myword"] = 2 $ht1["23"] = "myvalue" $ht2 = @{} $ht2["1"] = 12301230123 $arr = @($ht1,$ht2) PS > $arr[1]["1"] 12301230123 PS > $arr[0]["myword"] 2 PS > $arr[0]["23"] myvalue If you know how many you need, you can use a shortcut to create it: #Create array of 100 elements and initialize with hashtables $a = [object[]](1..100) 0..($a.Length-1) | % { $a[$_] = @{ 'word' = $null; 'number' = $null } } #Now you have an array of 100 hashtables with the keys initialized. It's ready to receive some values. 
PS > $a[99] Name Value ---- ----- number word And if you need to add another pair later, you can simply use: $a += @{ 'word' = $yourwordvar; 'number' = $yournumbervar } A: You could make an array, and initialize it with hashtables: $x=@(@{})*100; 0..99 | foreach {$x[$_]=@{}}; $x[19]["word"]="house"; $x[19]["number"]=25719; You want a big array, for example of length 100. Note the difference in parentheses! You need the second step, because in the previous command the pointer of the hashtable was copied 100 times... and you don't want that :) Now test it: $x[19]["number"]; 25719 $x[19]["word"]; house
doc_2683
On the second ajax call I want it to populate the response/data to a paragraph tag class="spots" in the first ajax call to show available spots for that specific time. My problem is the second ajax call only populates & repeats the first result it finds. I think it's because my class selector is the same in the foreach row. I've set an id + item.id to make them dynamic, but how do I access them in my jquery selector? Any help would be appreciated. See code below. $(document).ready(function() { //show datpicker calendar set getDay function on select $( "#datepicker" ).datepicker({ numberOfMonths: 1, showButtonPanel: true, onSelect: getDay, }); function getDay() { var date1 = $('#datepicker').datepicker('getDate'); var day = date1.getDay(); //set hidden input to numberical value of day $('#dayOfWeek').val(day); //set hidden textbox value near datepicker to submit date in proper format for db $('#date').val($.datepicker.formatDate('yy-mm-dd', date1)); //ajax form the get available times to play $.ajax({ url: $('#form').attr('action'), type: 'POST', data : $('#form').serialize(), success: function(response){ //clear results before showing another date selected $('.table').html(""); //loop through json results and build table $.each(JSON.parse(response), function(i, item) { var jdate = $('#date').val(); var id = item.id; $('<tr>').html("<td>" + item.time + "</td><td>" + '<input type="text" name="jtime" value="' + item.time + '"' + "/>" + '<input type="text" name="jdate" value="' + jdate + '"' + ">" + "Spots:" + '<p class="spots" id="spots_' + id + '"'+ ">" + '</p>' + "</td>").appendTo('#availableTimes'); });//end loop //fire getSpots function getSpots(); }//end success }); return false; }; //end getDay function // get available spots function getSpots(){ var values = { 'jtime': $('input[name="jtime"]').val(), 'jdate': $('input[name="jdate"]').val(), }; $.ajax({ //url: form.attr('action'), url: '/reservations/getSpots', type: 'POST', // data : form.serialize(), data : 
values, success: function(response){ $('.spots').html(response); }//end success }); //end ajax return false; };//end getSpots function })//end doc ready </script> Here is a snippet of code that works but it uses a form with a button to submit the second ajax call. I want it to work like this without the button submit. Want the second ajax call to post when the datapicker date is selected. Maybe i'm thinking about this wrong. //show datpicker calendar set getDay function on select $( "#datepicker" ).datepicker({ numberOfMonths: 1, showButtonPanel: true, onSelect: getDay }); })//end doc ready function getDay() { var date1 = $('#datepicker').datepicker('getDate'); var day = date1.getDay(); //set hidden input to numberical value of day $('#dayOfWeek').val(day); //set hidden textbox value near datepicker to submit date in proper format for db $('#date').val($.datepicker.formatDate('yy-mm-dd', date1)); //ajax form the get available times to play $.ajax({ url: $('#form').attr('action'), type: 'POST', data : $('#form').serialize(), success: function(response){ //clear results before showing another date selected $('.table').html(""); //loop through json results and build table $.each(JSON.parse(response), function(i, item) { var jdate = $('#date').val(); var id = item.id; $('<tr>').html("<td>" + item.time + "</td><td>" + '<form class="insideForm" action="/reservations/getSpots" accept-charset="utf-8" method="">' + '<input type="text" name="jtime" value="' + item.time + '"' + "/>" + '<input type="text" name="jdate" value="' + jdate + '"' + ">" + '<input type="submit" class="btn btn-primary" value="Spots">' + "Spots:" + '<p class="spots" id="spots_' + id + '"'+ ">" + '</p>' + '</form>' + "</td>").appendTo('#availableTimes'); });//end loop getSpots(); }//end success }); return false; }; //end getDay function // get available spots function getSpots(){ //ajax form the get available spots $('.insideForm').submit(function(){ var form = $(this).closest('form'); $.ajax({ url: 
form.attr('action'), type: 'POST', data : form.serialize(), success: function(response){ $('.spots', form).html(response); }//end success }); //end ajax return false; }); //end submit };//end getSpots function A: You call getSpots once and expect all to be updated? What you need to do is call getSpots once for every row, passing in an id to getSpots, so getSpots can update the correct row using the correct inputs see lines marked // **** for changes to your code A deleted answer had a better approach function getDay() { var date1 = $('#datepicker').datepicker('getDate'); var day = date1.getDay(); //set hidden input to numberical value of day $('#dayOfWeek').val(day); //set hidden textbox value near datepicker to submit date in proper format for db $('#date').val($.datepicker.formatDate('yy-mm-dd', date1)); //ajax form the get available times to play $.ajax({ url: $('#form').attr('action'), type: 'POST', data: $('#form').serialize(), success: function (response) { //clear results before showing another date selected $('.table').html(''); //loop through json results and build table $.each(JSON.parse(response), function (i, item) { var jdate = $('#date').val(); var id = item.id; $('<tr>').html('<td>' + item.time + '</td><td>' + '<input type="text" name="jtime" value="' + item.time + '"' + '/>' + '<input type="text" name="jdate" value="' + jdate + '"' + '>' + 'Spots:' + '<p class="spots" id="spots_' + id + '"' + '>' + '</p>' + '</td>').appendTo('#availableTimes'); // **** call getSpots for every row getSpots('#spots_'+id, item.time, jdate); }); //end loop } //end success }); return false; } //end getDay function // get available spots // **** accept output id, jtime and jdate function getSpots(id, jtime, jdate) { // get the inputs for the current id var values = { 'jtime': jtime, 'jdate': jdate, }; $.ajax({ //url: form.attr('action'), url: '/reservations/getSpots', type: 'POST', // data : form.serialize(), data: values, success: function (response) { // **** update the 
spots for current id $(id).html(response); } //end success }); //end ajax return false; } //end getSpots function
doc_2684
I'm using method 3 on the below link to remove the password. However rather than editing it manually like method 3, I'm changing it via the Hex in method 2 but programmatically with VB. I have tested this method with a Hex editor and notepad ++ successfully. However notepad will not work. Method of removing macro passwords My Idea * *Change name of .docm to .zip *Extract the contents of the Zip *Open the Vbaproject.bin *Change values from DPB to DPx *Zip it back up *Change format back .docm I'm having issues with 4 as it changes the characters correctly in the output file, however it strips out other necessary content(special characters). My vb.net code: Public Class Form1 Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click Try Dim filecontents = IO.File.ReadAllText("C:\Temp\Decrypter\Analysis\vbaProject.bin") filecontents = filecontents.Replace(Convert.ToChar(&H44) & (Convert.ToChar(&H50) & (Convert.ToChar(&H42))), (Convert.ToChar(&H44) & Convert.ToChar(&H50) & (Convert.ToChar(&H78)))) IO.File.WriteAllText("C:\Temp\Decrypter\Analysis\vbaProject2.bin", filecontents) Catch MsgBox("Error") End Try End Sub End Class This code looks for DPB in the vbaproject.bin file (44, 50, 42), then changes it to DPx (44, 50, 78). When opening the output document I can see the changes happened successfully, however when writing the .bin file back in to the .zip and repackaging in to the .docm this did not work. It works manually when I change these hex values via a hex editor. I believe that the way it reads and writes the output file takes out all the special .bin characters. Similar to how following the above process with notepad on Windows does not work (perhaps certain textboxes can't read / see certain chars / values in different formats). However doing the same with notepad++ does work. How can I change my above code so it doesn't strip out any special characters / other formats in the .bin file when I make the changes from DPB to DPx? 
Any ideas are greatly appreciated!
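For what it's worth, the core issue - ReadAllText/WriteAllText decode and re-encode the binary OLE stream, mangling any byte that isn't valid text - can be shown with a byte-level sketch. This is Python rather than VB, purely to illustrate the principle (the sample bytes are made up); the byte-safe VB.NET equivalents would be IO.File.ReadAllBytes and IO.File.WriteAllBytes.

```python
# Byte-level DPB -> DPx replacement: no text decoding happens, so no byte
# of the binary stream can be mangled. The sample data below is invented,
# just enough to stand in for a vbaProject.bin fragment.
def patch_dpb(data: bytes) -> bytes:
    """Replace the 'DPB' marker (0x44 0x50 0x42) with 'DPx' (0x44 0x50 0x78)."""
    return data.replace(b"\x44\x50\x42", b"\x44\x50\x78")

# The DPB marker surrounded by raw, non-text bytes that a text
# read/write round-trip would strip or re-encode.
original = b"\x00\xff\x01DPB=\"ABCD\"\x9c\x00\xfe"
patched = patch_dpb(original)

assert b"DPx" in patched
assert b"DPB" not in patched
assert len(patched) == len(original)  # nothing stripped, nothing re-encoded
```

The same replace done through a text API would first have to guess an encoding for bytes like 0x9c, which is exactly where the corruption comes from.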
doc_2685
I add a SpeedEffect to a EffectStack, this works quite well. But if I try to remove one of the Effects (which are already on the stack) I have to call effect.removeEffect(). This causes a segmentation fault. If I try to call effect.removeEffect() from the TestStack() function, it works well (and prints the expected "speed effect removed" on the console) void Test::testStack() { Story* st = new Story; //<-- only needed for initialization of an Effect Veins::TraCIMobility* mob = new Veins::TraCIMobility; //<-- only needed for initialization of an Effect SpeedEffect a = SpeedEffect(1.0, st, mob); a.removeEffect(); //<-- This one works quite well (&a)->removeEffect(); //<-- Clearly, this works too EffectStack s; s.addEffect(&a); //<-- Adds a Effect to the effect Stack assert(s.getEffects().size() == 1); s.removeEffect(&a); //<-- Try to remove effect from stack } The Stack and the Effect are implemented as following: class Effect { public: Effect(Story* story, Veins::TraCIMobility* car) : m_story(story), m_car(car) {} virtual void removeEffect() = 0; private: Story* m_story; protected: Veins::TraCIMobility* m_car; }; class SpeedEffect : public Effect { public: SpeedEffect(double speed, Story* story, Veins::TraCIMobility* car): Effect(story, car), m_speed(speed){} void removeEffect() { std::cout << "speed effect removed" << std::endl; } private: double m_speed; }; class EffectStack { public: void addEffect(Effect* effect) { if(std::count(m_effects.begin(), m_effects.end(), effect) == 0) { m_effects.push_back(effect); } } void removeEffect(Effect* effect) { if(effect == m_effects.back()) { //effect is pointing on the same address like its doing before, but causes the seg fault m_effects.back()->removeEffect(); //<--- Seg Fault here!! 
effect->removeEffect(); //<-- if I use this, seg fault too m_effects.pop_back(); }else { removeFromMiddle(effect); } } const std::vector<Effect*>& getEffects() { return m_effects; } private: std::vector<Effect*> m_effects; }; I hope this code is enough, I have removed all functions which are not called by the testing scenario. Is there any problem, because the address of the speedEffect a becomes invalid in the Stack? Maybe you can help me with this. New thoughts about the question: Now I have tested a bit more, which makes me even more confused: void dofoo(SpeedEffect* ef) { ef->removeEffect(); //<-- breaks with a segmentation fault } void Test::testStack() { Story* st = new Story; Veins::TraCIMobility* mob = new Veins::TraCIMobility; SpeedEffect e = SpeedEffect(3.0, st, mob); e.removeEffect(); //<-- Works fine (&e)->removeEffect(); //<-- Works fine also dofoo(&e); //<-- Jumps into the dofoo() function } A: This may not help you, but persisting the address of stack-based objects is usually not a great idea. In your code above it's potentially okay since you know EffectStack won't outlive your effect. Does the crash still occur if you do: SpeedEffect* a = new SpeedEffect(1.0, st, mob); (and adjust the rest of the code accordingly?) This will leak memory of course, but it will tell you if the problem is SpeedEffect being destroyed. Another option is to give SpeedEffect a destructor (and Effect a virtual destructor) and set a breakpoint inside to see when the compiler is destroying 'a'. A: Story* st = new Story; //<-- only needed for initialization of an Effect Veins::TraCIMobility* mob = new Veins::TraCIMobility; //<-- only needed for initialization I don't see the delete st and delete mob - there is memory allocated for these objects inside void Test::testStack() but not explicitly released. Add these two statements at the end of the function and try again. A: I have found the problem. 
I'm using the OMNeT++ simulation framework, and something unexpected happens when I instantiate the TraCIMobility. Without it, there is no error.
doc_2686
I am setting --conf spark.cores.max=100 --conf spark.executor.instances=20 --conf spark.executor.memory=8G --conf spark.executor.cores=5 --conf spark.driver.memory=4G but since data is not evenly distributed across executors, I kept getting Container killed by YARN for exceeding memory limits. 9.0 GB of 9 GB physical memory used here are my questions: 1. Did I not set up enough memory in the first place? I think 20 * 8G > 150G, but it's hard to make perfect distribution, so some executors will suffer 2. I think about repartition the input dataFrame, so how can I determine how many partition to set? the higher the better, or? 3. The error says "9 GB physical memory used", but i only set 8G to executor memory, where does the extra 1G come from? Thank you! A: When using yarn, there is another setting that figures into how big to make the yarn container request for your executors: spark.yarn.executor.memoryOverhead It defaults to 0.1 * your executor memory setting. It defines how much extra overhead memory to ask for in addition to what you specify as your executor memory. Try increasing this number first. Also, a yarn container won't give you memory of an arbitrary size. It will only return containers allocated with a memory size that is a multiple of it's minimum allocation size, which is controlled by this setting: yarn.scheduler.minimum-allocation-mb Setting that to a smaller number will reduce the risk of you "overshooting" the amount you asked for. I also typically set the below key to a value larger than my desired container size to ensure that the spark request is controlling how big my executors are, instead of yarn stomping on them. This is the maximum container size yarn will give out. 
yarn.nodemanager.resource.memory-mb A: The 9GB is composed of the 8GB executor memory which you add as a parameter, plus spark.yarn.executor.memoryOverhead, which is set to .1, so the total memory of the container is spark.executor.memory + (spark.executor.memory * spark.yarn.executor.memoryOverhead), which is 8GB + (.1 * 8GB) ≈ 9GB. You could run the entire process using a single executor, but this would take ages. To understand this you need to know the notion of partitions and tasks. The number of partitions is defined by your input and the actions. For example, if you read a 150gb csv from hdfs and your hdfs blocksize is 128mb, you will end up with 150 * 1024 / 128 = 1200 partitions, which maps directly to 1200 tasks in the Spark UI. Every single task will be picked up by an executor. You don't need to hold all the 150gb in memory ever. For example, when you have a single executor, you obviously won't benefit from the parallel capabilities of Spark, but it will just start at the first task, process the data, save it back to the dfs, and start working on the next task. What you should check: * *How big are the input partitions? Is the input file splittable at all? If a single executor has to load a massive amount of memory, it will run out of memory for sure. *What kind of actions are you performing? For example, if you do a join with very low cardinality, you end up with massive partitions because all the rows with a specific value end up in the same partition. *Very expensive or inefficient actions performed? Any cartesian product etc. Hope this helps. Happy sparking!
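A small sketch of the container-size arithmetic described in both answers (plain Python, using this question's numbers; it is a simplification - for instance, it ignores the 384 MB floor that real Spark versions apply to the overhead):

```python
import math

def yarn_container_mb(executor_memory_mb, overhead_fraction=0.1,
                      min_allocation_mb=1024):
    """Executor memory plus memoryOverhead, rounded up to a multiple of
    yarn.scheduler.minimum-allocation-mb - the rounding step is why the
    granted container can be bigger than what you asked for."""
    requested = executor_memory_mb + math.ceil(executor_memory_mb * overhead_fraction)
    return min_allocation_mb * math.ceil(requested / min_allocation_mb)

# 8G executor memory + 10% overhead ~= 9G, matching the error message.
requested = 8192 + math.ceil(8192 * 0.1)  # 8192 + 820 = 9012 MB asked for
granted = yarn_container_mb(8192)         # 9216 MB granted (9 * 1024)
print(requested, granted)

# Partition-count example from the second answer: 150 GB / 128 MB blocks.
print(150 * 1024 // 128)  # 1200
```

Raising spark.yarn.executor.memoryOverhead changes the first term of the sum; lowering the minimum allocation shrinks the rounding waste.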
doc_2687
Before: { { "code": "KENNEDYS08", "duration": 23, "preview_frame": 1, } } After: [ { "code": "KENNEDYS08", "duration": 23, "preview_frame": 1, } ] The code that returns the json: output = json.dumps(data, ensure_ascii=False, indent=2) Is there an option to replace the square brackets [] with curly brackets {}? A: {} in JSON is the same as having a dictionary in Python, which means every entry needs a key/value pair - so the first version is actually invalid! If it looked like that in a previous version of Django, that's why it changed to the [] version, which is the correct one indeed and should be kept.
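The behaviour is easy to reproduce with json.dumps itself: a Python dict serialises to {...}, a list to [...], and a JSON object can never contain a bare object without a key - which is why the [] output is the valid one:

```python
import json

record = {"code": "KENNEDYS08", "duration": 23, "preview_frame": 1}

print(json.dumps(record, indent=2))    # one object: {...} with keyed values
print(json.dumps([record], indent=2))  # a JSON array of objects: [...]

# There is no way to get the old "{ { ... } }" shape out of json.dumps:
# a dict needs keys, so a bare object nested in an object cannot exist.
assert json.dumps(record).startswith("{")
assert json.dumps([record]).startswith("[")
```

Round-tripping confirms the list form is lossless: json.loads on the [...] output returns the original list of dicts.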
doc_2688
For instance, if A=[1 1; 0 1; 1 0; -1 1;-1 1;0 1;0 1] I would like to see something like: -1 -1 0 -1 0 0 -1 1 2 0 -1 0 0 1 3 1 -1 0 1 0 1 1 1 1 Is there a way to do it? Thanks. A: A = [ 1 1 0 1 1 0 -1 1 -1 1 0 1 0 1 ]; rows = [ -1 -1 -1 0 -1 1 0 -1 0 1 1 -1 1 0 1 1 ]; count = sum(squeeze(all(bsxfun(@eq, A.', permute(rows, [2 3 1]))))); Of course, if you need the result in the form shown in your question, just build the matrix result = [rows count.'].
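For comparison, the same row/count table can be produced with a plain tally in Python (not MATLAB - just to make the intended semantics explicit): count the rows of A once, then look up each candidate row, with absent rows counting as 0.

```python
from collections import Counter

# The matrix A from the question, one tuple per row.
A = [(1, 1), (0, 1), (1, 0), (-1, 1), (-1, 1), (0, 1), (0, 1)]
tally = Counter(A)  # missing rows look up as 0 automatically

# The candidate rows from the expected output, in the same order.
rows = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
for r in rows:
    print(*r, tally[r])
```

This prints the row followed by its occurrence count, matching the table in the question (e.g. `-1 1 2` and `0 1 3`).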
doc_2689
For eg- if I enter "apple" it should return "elppa". However, my JUnit tests keep failing here. I tried using the debugger on the JUnit, but it just keeps throwing an exception java.lang.reflect.InvocationTargetException. I would really appreciate it if anyone here could help me fix this. Thanks in advance!! (Also please ignore my silly variable and class names, I was just messing around a bit) public String word; public wechillin (String wordin) { word= wordin; } public void Reverse() { String newword= ""; for (int i= word.length()-1; i>=0; i--) { newword+= word.charAt(i); } word= newword; } Test: @Test public void testReverse() { wechillin all = new wechillin("apple"); all.Reverse(); assertTrue(all.toString.equals("elppa")); } A: * *First of all you can override toString method to return only word value. @Override public String toString() { return word; } *I would recommend you to create getter for the value word and check this value in the unit test. Please see the following example: public class wechillin { public String word; public wechillin (String wordin) { word= wordin; } public void Reverse() { String newword= ""; for (int i= word.length()-1; i>=0; i--) { newword+= word.charAt(i); } word= newword; } public String getWord() { return word; } } And unit test: class wechillinTest { @Test void reverse() { wechillin all = new wechillin("apple"); all.Reverse(); assertTrue(all.getWord().equals("elppa")); } }
doc_2690
The problem is when I use Bitmap b = BitmapFactory.DecodeFile (_coverImgLocation); to load the image, I get a memory exception after scrolling in the listview. I know that the images have to be loaded at the correct size by calculating the samplesize. In this case it's not needed because the images from the server already have the same size as the ImageViews from the rows. When I load the image like this: Bitmap b = ((BitmapDrawable)_activity.Resources.GetDrawable (Resource.Drawable.splash)).Bitmap; I get no memory exception but of course this is the wrong image... How can I retrieve the bitmap from the path without having a memory leak? The getimage method in the viewholder: public void GetImage(string originalImageLocation,string localImageLocation) { if (originalImageLocation == _coverImgLocation) { int screenWidth = _activity.Resources.DisplayMetrics.WidthPixels; int imgWidth = screenWidth - (int)ConvertDpToPix (32f); int imgHeight = (int)(ConvertDpToPix(206f)); BundleProgress.Visibility = ViewStates.Gone; Bitmap b = BitmapFactory.DecodeFile (_coverImgLocation); //memory exception //Bitmap b = ((BitmapDrawable)_activity.Resources.GetDrawable (Resource.Drawable.splash)).Bitmap;//no memory exception using (b) { CoverIv.SetImageBitmap (b); } } } A: What does the CoverIv.SetImageBitmap(b) method do? The bitmap will not be garbage collected as long as CoverIv keeps a reference to it. Because it is a static method, I guess that the bitmap is held in a static field and will never be garbage collected. Do you still have the memory leak when you remove CoverIv.SetImageBitmap(b)? Please check the active references to your bitmap with MAT - it will show you why the bitmap is not collected.
doc_2691
I have a table called Roles that have several named user roles (i.e.: admin, editor, etc). I want the admin to be able to edit the permissions for these roles and create new ones. What would be the best way to store the permissions? * *Use an integer field with each bit representing one permission? (I think this would quickly get out of hand) *Use a pivot table and a many-many connection to permissions? *Use a string field and just serialize the chosen privileges? New privileges are likely to be added in the future, and my goal is to be able to easily determine if a user has a certain role or not. I.e.: $user->roles->hasAdmin() or something simirar. A: You may want to borrow best practices for role/permissions table from the Laravel Entrust package: // Create table for storing roles Schema::create('{{ $rolesTable }}', function (Blueprint $table) { $table->increments('id'); $table->string('name')->unique(); $table->string('display_name')->nullable(); $table->string('description')->nullable(); $table->timestamps(); }); // Create table for associating roles to users (Many-to-Many) Schema::create('{{ $roleUserTable }}', function (Blueprint $table) { $table->integer('user_id')->unsigned(); $table->integer('role_id')->unsigned(); $table->foreign('user_id')->references('{{ $userKeyName }}')->on('{{ $usersTable }}') ->onUpdate('cascade')->onDelete('cascade'); $table->foreign('role_id')->references('id')->on('{{ $rolesTable }}') ->onUpdate('cascade')->onDelete('cascade'); $table->primary(['user_id', 'role_id']); }); // Create table for storing permissions Schema::create('{{ $permissionsTable }}', function (Blueprint $table) { $table->increments('id'); $table->string('name')->unique(); $table->string('display_name')->nullable(); $table->string('description')->nullable(); $table->timestamps(); }); // Create table for associating permissions to roles (Many-to-Many) Schema::create('{{ $permissionRoleTable }}', function (Blueprint $table) { 
$table->integer('permission_id')->unsigned(); $table->integer('role_id')->unsigned(); $table->foreign('permission_id')->references('id')->on('{{ $permissionsTable }}') ->onUpdate('cascade')->onDelete('cascade'); $table->foreign('role_id')->references('id')->on('{{ $rolesTable }}') ->onUpdate('cascade')->onDelete('cascade'); $table->primary(['permission_id', 'role_id']); });
doc_2692
Code extract <div class="row"> <!-- contribution --> <div class="col-xs-6 col-md-4" style="border:1px solid red;"> <!-- pic --> <div class="col-xs-4 col-md-3"> ... </div> <!-- payment --> <div class="col-xs-8 col-md-9"> <div class="name">Anonymous</div> <div class="contributed">contributed</div> </div> </div> <!-- end contribution --> </div> It goes the same for all contributions. I can't use a row div for each line as on small screens rows will have only 2 contributions instead of 3. Using only col-xs-6 col-md-4 without rows allows me to have a flexible layout. A: Use clearfix class Add the extra clearfix for only the required viewport <div class="clearfix visible-xs"></div> have a look clearing-bootstrap A: To build on Head In Cloud's answer, you will want to use the clearfix class after every 2nd div (visible-xs visible-sm) for xs and small screens, and then a clearfix class after every 3rd div (hidden-xs hidden-sm) for medium and larger screens. To reproduce your example from above: <div class="row"> <div class="col-xs-6 col-md-4" style="border:1px solid red;"> <!-- inner content --> </div> <div class="col-xs-6 col-md-4" style="border:1px solid red;"> <!-- inner content --> </div> <div class="clearfix visible-xs visible-sm"></div> <div class="col-xs-6 col-md-4" style="border:1px solid red;"> <!-- inner content --> </div> <div class="clearfix hidden-xs hidden-sm"></div> <div class="col-xs-6 col-md-4" style="border:1px solid red;"> <!-- inner content --> </div> <div class="clearfix visible-xs visible-sm"></div> <div class="col-xs-6 col-md-4" style="border:1px solid red;"> <!-- inner content --> </div> <div class="clearfix visible-xs visible-sm"></div> <div class="col-xs-6 col-md-4" style="border:1px solid red;"> <!-- inner content --> </div> <div class="clearfix"></div> <!-- this one is needed for all screen sizes, so just use the clearfix class --> </div> Another option, if it would work for the type of content you're using, is to set a min-height on those 
elements. It would be an estimate, but if you set the min-height to a value slightly larger than your largest element, then all of those divs would be the same height, so they would stack correctly and you wouldn't have to worry about the clearfix. This isn't ideal, because if you ever change the content, you'd have to make sure it still falls within that min-height value. A: You can easily take care of that issue without bootstrap, i have been struggling with that issue too: As i have expericed, having elements in float style so they behave properly on a responsive enviroment isn't easy, more like hellish. If you want that every element on the same "row" have the same height, the best aproach for IE9 and above is flexbox. Sample, we have 4 boxes that doesnt fit on the container, so we want them to move to a new row if they dont fit but keep all the same height (Being the height value unknown): <div class="container"> <div class="element"> <p> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et </p> </div> <div class="element"> <p> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. </p> <p> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et </p> </div> <div class="element"> <p> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et </p> <p> Lorem ipsum dolor sit amet. </p> </div> <div class="element"> <p> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et </p> <p> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. 
Cum sociis natoque penatibus et </p> </div> </div> Applying these styles just fixes it: .container { display: flex; display: -ms-flexbox; display: -webkit-flex; flex-grow: 0; -ms-flex-grow: 0; -webkit-flex-grow: 0; flex-wrap: wrap; width: 400px; /* Sample constraint */ background-color: red; /* Visibility */ } .element { flex: none; width: 120px; /* Sample constraint */ border: 1px solid blue; /* Visibility */ } Check this fiddle, it will give you all you want. https://jsfiddle.net/upamget0/ Apply the col-12 on the container if needed, but usually it's not. Source: CSS height 100% in automatic brother div not working in Chrome Great info about flexbox can be found here: https://css-tricks.com/snippets/css/a-guide-to-flexbox/
doc_2693
var secondArray = new int[] { 4, 5, 6, 5, 9, 10}; var sum = Enumerable.Zip(first, second, (a, b) => a + b); I want the sum to be [5, 7, 9, 5, 9, 10] since firstArray only has 3 elements. Any workaround? Would it be better to use a for loop for this?
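Enumerable.Zip stops at the end of the shorter sequence, so the tail of the longer array is dropped; what's wanted here is a zip that pads the shorter side with zeros. Here is that padding behaviour sketched in Python's itertools.zip_longest (assuming first = [1, 2, 3], which is what the expected output implies - the question never shows it):

```python
from itertools import zip_longest

first = [1, 2, 3]              # inferred from the expected result
second = [4, 5, 6, 5, 9, 10]

# fillvalue=0 pads the shorter list, so the tail of `second` survives.
total = [a + b for a, b in zip_longest(first, second, fillvalue=0)]
print(total)  # [5, 7, 9, 5, 9, 10]
```

In C# the same effect needs either padding the shorter array up front (e.g. concatenating Enumerable.Repeat(0, n) onto it before calling Zip) or a hand-rolled loop over the longer length.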
doc_2694
I want to add a NEW object to an array, based on the first one The way I found that works is this one: new Vue({ el: "#app", data: { name: '', //name isn't inside an object //failedExample: {name: ''} array: [] }, methods: { add(){ this.array.push({name: this.name}) // I push a key:value //this.array.push(failedExample) // what I wished to do } } }); https://jsfiddle.net/myrgato/6mvx0y1a/ I understand that by using the commented array.push, I would just be adding the same reference to the object over and over, so when I change the value of failedExample.name, it will change in all positions of the array. Is there a way to stop this happening? Like, I add the first object, then the next one as a NEW object instead of a reference? A: It should work as you wanted to do with your 'failedExample'. The only wrong thing I see is that you forgot the this keyword when you're pushing into the array. So try this one: new Vue({ el: "#app", data: { failedExample: { name: 'test'}, array: [] }, methods: { add(){ this.array.push(this.failedExample); console.log(this.array); } } }); Update: If you want to add a new object each time, then try to clone it so you will not have reference problems: this.array.push(Object.assign({}, this.failedExample));
doc_2695
Can other applications running on the same WebLogic server access classes inside those jar files?

A: I managed to find a solution and hope this will help anyone looking for the same answer. After configuring the following in weblogic-ra.xml, classes inside the JCA adapter RAR file are exposed to the system classpath.

<wls:enable-access-outside-app>true</wls:enable-access-outside-app>
<wls:enable-global-access-to-classes>true</wls:enable-global-access-to-classes>
doc_2696
Traceback (most recent call last):
  File "/home/container/.local/lib/python3.6/site-packages/discord/ext/commands/core.py", line 85, in wrapped
    ret = await coro(*args, **kwargs)
  File "/home/container/main.py", line 36, in listMyWants
    await botcommandscontroller.listWants(ctx, ctx.author.id)
  File "/home/container/botcommandscontroller.py", line 10, in listWants
    wants = mongodbcontroller.getWants(targetID)
  File "/home/container/mongodbcontroller.py", line 17, in getWants
    cluster = MongoClient(os.getenv('MONGOCONNECT'))
  File "/home/container/.local/lib/python3.6/site-packages/pymongo/mongo_client.py", line 712, in __init__
    srv_max_hosts=srv_max_hosts,
  File "/home/container/.local/lib/python3.6/site-packages/pymongo/uri_parser.py", line 467, in parse_uri
    python_path = sys.executable or "python"
NameError: name 'sys' is not defined

Here's my getWants method:

def getWants(userID):
    load_dotenv()
    cluster = MongoClient(os.getenv('MONGOCONNECT'))
    wantcollection = pokedb["wants"]
    userWants = ""
    pipeline = [{'$lookup': {'from': 'pokemon', 'localField': 'dexnum', 'foreignField': 'NUMBER', 'as': 'userwants'}},
                {'$unwind': '$userwants'},
                {'$match': {'discord_id': userID}}]
    for doc in wantcollection.aggregate(pipeline):
        if doc['shiny']:
            userWants += "shiny "
        userWants += doc['userwants']['NAME'] + ", "
    if len(userWants) > 2:
        userWants = userWants[0:len(userWants) - 2]
    return userWants

This method probably doesn't have any relevant info, but here's listWants:

async def listWants(ctx, targetID):
    if targetID is None:
        await ctx.send(Constants.ErrorMessages.NO_USER_FOUND)
        return
    wants = mongodbcontroller.getWants(targetID)
    if wants != "":
        await ctx.send(wants)
    else:
        await ctx.send(Constants.ErrorMessages.NO_WANTS_FOUND)

A: I've experienced this with pymongo 4.x, and I solved it by uninstalling pymongo and then running pip3 install 'pymongo[srv]'. It works for me with connection strings in the 'mongodb+srv://...' format.
More info for the pymongo installation: pymongo - "dnspython" module must be installed to use mongodb+srv:// URIs

A: I guess I should have been more careful about checking my dependencies. Repl.it installed pymongo dependencies that SparkedHost didn't, I think. Installing dnspython and Flask solved my issue. Also, I noticed I forgot pokedb = cluster["pokemon"] in getWants.
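Since the root cause here was a missing optional dependency for mongodb+srv:// URIs, a small check like the following can surface the problem before MongoClient is even constructed. This is a sketch: the helper name srv_support_available is made up for illustration (dnspython's import name is 'dns'):

```python
import importlib.util

def srv_support_available():
    """Return True if dnspython (imported as 'dns') is installed.

    pymongo relies on it to resolve mongodb+srv:// connection strings.
    """
    return importlib.util.find_spec("dns") is not None

if not srv_support_available():
    print("Missing dnspython - run: pip3 install 'pymongo[srv]'")
```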
doc_2697
ArrayList<String> arrPackage = new ArrayList<>();
ArrayList<String> arrPackageDates = new ArrayList<>();
ArrayList<String> arrPackageDuration = new ArrayList<>();
ArrayList<String> arrPackageFileSize = new ArrayList<>();

// Code to add data to ArrayLists (data is not coming from an SQLite database)
...

// Code to sort arrPackage alphabetically, ignoring case
Collections.sort(arrPackage, new Comparator<String>() {
    @Override
    public int compare(String s1, String s2) {
        return s1.compareToIgnoreCase(s2);
    }
});

but how do I know which indexes were changed?

A: One approach would be to create a wrapper object Package which contains the four types of metadata that appear in the four current lists. Something like this:

public class Package {
    private String name;
    private String date;
    private String duration;
    private String fileSize;

    public Package() {
        // can include other constructors as well
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    // other getters and setters
}

Then sort using a custom comparator which works on Package objects:

List<Package> packages = new ArrayList<>();
Collections.sort(packages, new Comparator<Package>() {
    @Override
    public int compare(Package p1, Package p2) {
        String name1 = p1.getName();
        String name2 = p2.getName();
        return name1.compareToIgnoreCase(name2);
    }
});

As a general disclaimer, the above operation would most likely be performed much more efficiently in a database. So if your data is ultimately coming from a database, you should try to do such heavy lifting there.

A: A simple, easy way would be to back up your ArrayList. Note that it must be a copy, not just another reference to the same list:

ArrayList<String> backupPackage = new ArrayList<>(arrPackage);

Then use your code to sort the array, and use a for loop to compare the two arrays:

for (int i = 0; i < backupPackage.size(); i++) {
    if (!arrPackage.get(i).equals(backupPackage.get(i))) {
        // then the item has been changed
        // ... YOUR CODE
        // at this point you know which indexes have been changed
        // and can modify your other arrays in any way you need
    }
}

A: I used this approach:

ArrayList<String> backupPackage = new ArrayList<>();
ArrayList<String> backupPackageDates = new ArrayList<>();
ArrayList<String> backupPackageDuration = new ArrayList<>();
ArrayList<String> backupPackageFileSize = new ArrayList<>();

for (int j = 0; j < arrPackage.size(); j++) {
    backupPackage.add(arrPackage.get(j));
}
for (int j = 0; j < arrPackageDates.size(); j++) {
    backupPackageDates.add(arrPackageDates.get(j));
}
for (int j = 0; j < arrPackageDuration.size(); j++) {
    backupPackageDuration.add(arrPackageDuration.get(j));
}
for (int j = 0; j < arrPackageFileSize.size(); j++) {
    backupPackageFileSize.add(arrPackageFileSize.get(j));
}

Collections.sort(arrPackage, new Comparator<String>() {
    @Override
    public int compare(String s1, String s2) {
        return s1.compareToIgnoreCase(s2);
    }
});

int newindex;
for (int i = 0; i < backupPackage.size(); i++) {
    newindex = backupPackage.indexOf(arrPackage.get(i));
    if (newindex != i) {
        arrPackageDates.set(i, backupPackageDates.get(newindex));
        arrPackageDuration.set(i, backupPackageDuration.get(newindex));
        arrPackageFileSize.set(i, backupPackageFileSize.get(newindex));
    }
}

backupPackage.clear();
backupPackageDates.clear();
backupPackageDuration.clear();
backupPackageFileSize.clear();
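An alternative sketch of the same idea that avoids backing up every list: sort an array of indices by the names, then reorder each parallel list once. The names ParallelSort, sortParallel, and reorder are hypothetical, not taken from the answers above.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ParallelSort {

    // Sort 'names' case-insensitively and apply the same reordering to
    // every other parallel list, by sorting an index array first.
    static void sortParallel(List<String> names, List<List<String>> others) {
        Integer[] order = new Integer[names.size()];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, (a, b) -> names.get(a).compareToIgnoreCase(names.get(b)));

        reorder(names, order);
        for (List<String> list : others) reorder(list, order);
    }

    // Rewrite 'list' in place so position i holds the old element order[i].
    static <T> void reorder(List<T> list, Integer[] order) {
        List<T> copy = new ArrayList<>(list);
        for (int i = 0; i < order.length; i++) list.set(i, copy.get(order[i]));
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>(Arrays.asList("zeta", "Alpha", "mike"));
        List<String> dates = new ArrayList<>(Arrays.asList("d1", "d2", "d3"));
        sortParallel(names, Arrays.asList(dates));
        System.out.println(names); // [Alpha, mike, zeta]
        System.out.println(dates); // [d2, d3, d1]
    }
}
```

Unlike the indexOf-based approach above, this stays correct even when arrPackage contains duplicate names, since positions rather than values are matched.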
doc_2698
<div class="datetimepicker input-append">
    <input class="input-medium input-block-level" data-format="dd/MM/yyyy" data-val="true"
           data-val-date="The field ORDERDATE must be a date." id="TARGETDATE" name="TARGETDATE"
           placeholder="Tarih Seçiniz" type="text">

The problem is that I am unable to insert the date data into the DB when I select a date value like 23/2/2012. I think I need to change the format while inserting into the DB. I tried to change the date format in my model but could not make it work. How can I change the date value before inserting it into the DB?
doc_2699
The "power" I hope to use from EF:

* POCO generation
* LINQ queries to interrogate the database
* The navigation between objects: parents <-> children
* Lazy loading

The "power" I hope to use with MVC (regarding data):

* Data validation
* ?? The usage of EF objects as strong types for my views?

But I have several concerns:

* The "recursive" serialization which will happen if I serialize my EF entities (with bidirectional links) for JSON; how do I avoid this?
* I cannot put validation attributes on POCO classes

So, I know that my question is a little "generic", but I can't find a good link which points me in a direction to solve all the problems which come with the combination of these two technologies. Do you know a website, or have you already had these kinds of problems?

A: The usage of EF objects as strong types for my views?

No, this is not a power. In most cases it is just an issue which raises both of your main concerns. Use specialized view models for your views and JSON-handling actions and you will be fine. If you worry about conversions between view models and entities, check for example AutoMapper to simplify this. By the way, lazy loading in a web application can be an issue as well. You almost always know what data you need to handle the current request, so load that data directly instead of using lazy loading.