doc_1300
Testing the application on an iPad. I have tried the min and max attributes, but they are not working. <span class="field_text">Preferred Date &amp; Time</span> <input id="checkout_datetime" type="datetime-local" class="text_box_cnt" required="required"/> Please help... A: This is an old question, but it's the first result that came up for me when Googling, so I wanted to leave a proper answer. The min and max attributes are indeed the ones to use to restrict input to a given range: <input type="datetime-local" id="meeting-time" name="meeting-time" value="2018-06-12T19:30" min="2018-06-07T00:00" max="2018-06-14T00:00"> A: Use date instead of datetime-local. Here is an example: <input id="checkout_datetime" type="date" min="2014-03-20" max="2014-03-30" class="text_box_cnt" /> Reference: http://www.w3.org/TR/html-markup/input.date.html
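If the browser itself does not enforce min and max (the iPad behaviour described above suggests exactly that), the range can also be checked in script. A minimal sketch, assuming the checkout_datetime input from the question and hypothetical bounds:

var input = document.getElementById('checkout_datetime');
var MIN = new Date('2014-03-20T00:00');   // hypothetical lower bound
var MAX = new Date('2014-03-30T23:59');   // hypothetical upper bound
input.addEventListener('change', function () {
  var picked = new Date(this.value);      // datetime-local yields "YYYY-MM-DDTHH:MM"
  if (isNaN(picked) || picked < MIN || picked > MAX) {
    this.setCustomValidity('Please choose a date and time within the allowed range.');
  } else {
    this.setCustomValidity('');           // clear the error so the form can submit
  }
});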
doc_1301
import asyncio import os loop = asyncio.get_event_loop() async def action(): inp = int(input('enter: ')) await asyncio.sleep(inp) os.system(f"say '{inp} seconds waited'") async def main(): while True: await asyncio.ensure_future(action()) try: asyncio.run(main()) except Exception as e: print(str(e)) finally: loop.close() I'm messing up something and I want to know how to achieve it. Every time user enters a number, script needs to sleep for given time, then speak out that it has waited. This entire thing needs to be in concurrent. if user enters 100 as input, the script should start a task to sleep for 100 seconds, but at the user side, it needs to ask for input again as soon as the user enters it. A: The main problem with your code was that you called input() directly in your async function. input itself is a blocking function and does not return until a newline or end-of-file is read. This is a problem because Python asynchronous code is still single-threaded, and if there is a blocking function, nothing else will execute. You need to use run_in_executor in this case. Another problem with your code, although not directly relevant to your question, was that you mixed the pre-python3.7 way of invoking an event loop and the python3.7+ way. Per documentation, asyncio.run is used on its own. If you want to use the pre 3.7 way of invoking a loop, the correct way is loop = asyncio.get_event_loop() loop.run_until_complete(main()) or loop = asyncio.get_event_loop() asyncio.ensure_future(main()) loop.run_forever() Since you have a while True in your main(), there's no difference between run_until_complete and run_forever. Lastly, there is no point in using ensure_future() in your main(). The point of ensure_future is providing a "normal" (i.e. non-async) function a way to schedule things into the event loop, since they can't use the await keyword. Another reason to use ensure_future is if you want to schedule many tasks with high io-bounds (ex. network requests) without waiting for their results. Since you are awaiting the function call, there is naturally no point of using ensure_future. Here's the modified version: import asyncio import os async def action(): loop = asyncio.get_running_loop() inp = await loop.run_in_executor(None, input, 'Enter a number: ') await asyncio.sleep(int(inp)) os.system(f"say '{inp} seconds waited'") async def main(): while True: await action() asyncio.run(main()) In this version, before a user-input is entered, the code execution is alternating between await action() and await loop.run_in_executor(). When no other tasks are scheduled, the event-loop is mostly idle. However, when there are things scheduled (simulated using await sleep()), then the control will be naturally transferred to the long-running task that is scheduled. One key to Python async programming is you have to ensure the control is transferred back to the event-loop once in a while so other scheduled things can be run. This happens whenever an await is encountered. In your original code, the interpreter get stuck at input() and never had a chance to go back to the event-loop, which is why no other scheduled tasks ever get executed until a user-input is provided. 
A: You can try something like this: import asyncio WORKERS = 10 async def worker(q): while True: t = await q.get() await asyncio.sleep(t) q.task_done() print(f"say '{t} seconds waited'") async def main(): q = asyncio.Queue() tasks = [] for _ in range(WORKERS): tasks.append(asyncio.create_task(worker(q))) print(f'Keep inserting numbers, "q" to quit...') while (number := await asyncio.to_thread(input)) != "q": q.put_nowait(int(number)) await q.join() for task in tasks: task.cancel() await asyncio.gather(*tasks, return_exceptions=True) if __name__ == "__main__": asyncio.run(main()) Test: $ python test.py Keep inserting numbers, "q" to quit... 1 say '1 seconds waited' 3 2 1 say '1 seconds waited' say '2 seconds waited' say '3 seconds waited' q Note: Python 3.9+ required due to some new syntax (:=) and function (asyncio.to_thread) in use. A: import asyncio async def aworker(q): ''' Worker that takes numbers from the queue and prints them ''' while True: t = await q.get() # Wait for a number to be put in the queue print(f"{t} received {asyncio.current_task().get_coro().__name__}:{asyncio.current_task().get_name()}") await asyncio.sleep(t) q.task_done() print(f"waited for {t} seconds in {asyncio.current_task().get_coro().__name__}:{asyncio.current_task().get_name()}") async def looper(): ''' Infinite loop that prints the current task name ''' i = 0 while True: i+=1 await asyncio.sleep(1) print(f"{i} {asyncio.current_task().get_name()}") names = [] for task in asyncio.all_tasks(): names.append(task.get_name()) print(names) async def main(): q = asyncio.Queue() tasks = [] # create two worker tasks and one infinitely looping task tasks.append(asyncio.create_task(aworker(q), name="aworker 1")) # Create a worker which handles input from the queue tasks.append(asyncio.create_task(aworker(q), name="aworker 2")) # Create another worker which handles input from the queue tasks.append(asyncio.create_task(looper(),name="looper")) # Create a looper task which prints the current task name and the other running tasks for task in tasks: # print the task names thus far print(task.get_name()) print(f'Keep inserting numbers, "q" to quit...') ''' asyncio.thread names itself Task-1 ''' while (number := await asyncio.to_thread(input)) != "q": try: q.put_nowait(int(number)) except ValueError: print("Invalid number") await q.join() for task in tasks: task.cancel() await asyncio.gather(*tasks, return_exceptions=True) if __name__ == "__main__": asyncio.run(main())
doc_1302
Whenever I do a: git diff --numstat <sha1> <sha2> it results in the following: 1 1 test.php Notice how the separating character between the ones are all spaces. Now, when I pipe that command directly into a tr to squeeze those spaces out as follows: git diff --numstat <sha1> <sha2> | tr -s ' ' It results in all the spaces being converted to one single tab character (I tried to paste it here but it didn't actually work). The thing is, if I recall my bash correctly, this is not expected behavior at all. Also, when I tried to replicate this by putting this in a text file, cat-ing the text file and tr-ing it through the very same pipe, it does work as expected. Does anyone know why this is and how would you work around it gracefully? My end goal is to parse these results in python, which is significantly easier if there's only one space separating the two numbers and the file name. A: I'm pretty sure those are not spaces in your output line. I can duplicate your problem when I emit a line containing tab characters. Try this modification: git diff --numstat <sha1> <sha2> | tr -s ' \t' ' ' The first group is a space and tab character, the second group is two spaces.
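Since the stated end goal is to parse these results in Python, the tab separator can also be handled there directly, with no tr step at all. A minimal sketch, assuming Python 3.7+ and that the two SHAs are passed in as plain strings (the SHAs in the usage comment are placeholders):

import subprocess

def numstat(sha1, sha2):
    out = subprocess.run(
        ["git", "diff", "--numstat", sha1, sha2],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        added, deleted, path = line.split("\t", 2)  # numstat columns are tab-separated
        yield added, deleted, path                  # added/deleted are "-" for binary files

# for added, deleted, path in numstat("abc123", "def456"): ...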
doc_1303
if (returneddata.daterangeparams.TimeUnitsFrom != null) { ...which throws this error (as seen in the Chrome Dev Tools console) when the value is, indeed, null: Index:1031 Uncaught TypeError: Cannot read property 'TimeUnitsFrom' of null So how can I check for null in a way that I can avoid the error? Based on a suggestion here, I even tried this: if (returneddata.daterangeparams.TimeUnitsFrom != null && variable !== undefined) { ...but I still get the same whinging from the guts of the browser. Code in greater context: function populatedaterangeprams(rptval, returneddata) { var fromval = ''; var toval = ''; if (returneddata.daterangeparams.TimeUnitsFrom != null && returneddata.daterangeparams.TimeUnitsFrom !== undefined) { fromval = returneddata.daterangeparams.TimeUnitsFrom; } if (returneddata.daterangeparams.TimeUnitsTo != null && returneddata.daterangeparams.TimeUnitsTo !== undefined) { toval = returneddata.daterangeparams.TimeUnitsTo; } if (rptval === 1) { // Produce Usage $("#produsagefrom").val(fromval); $("#produsageto").val(toval); } else if (rptval === 2) { . . . So how can I safely check for null in javascript/jQuery? UPDATE As Phil Varg said, I needed to do this: if (returneddata != null && returneddata.daterangeparams != null && returneddata.daterangeparams.TimeUnitsFrom != null) { fromval = returneddata.daterangeparams.TimeUnitsFrom; } if (returneddata != null && returneddata.daterangeparams != null && returneddata.daterangeparams.TimeUnitsTo != null) { toval = returneddata.daterangeparams.TimeUnitsTo; } ...but that seems clunkier than a silicone-and-duct-tape job on the LHC. Isn't there a way to concisify this, where checking returneddata.daterangeparams.TimeUnitsTo for null would first check the first two subparticles of that? A: the problem is that returneddata.daterangeparams is null. and youre calling TimeUnitsFrom on it. so you need to check that returndata is not null, and returndata.daterangeparams is not null, and returneddata.daterangeparams.TimeUnitsFrom is not null
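For what it's worth, in environments that support ES2020 optional chaining and nullish coalescing (features that postdate this question and are not part of the answer above), the whole chain of guards collapses to a single expression:

// Falls back to '' if returneddata, daterangeparams, or the field itself is null/undefined.
var fromval = returneddata?.daterangeparams?.TimeUnitsFrom ?? '';
var toval = returneddata?.daterangeparams?.TimeUnitsTo ?? '';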
doc_1304
Command: sudo docker run -it --name test -v /home/user/Myhostdir:/mydata centos:latest /bin/bash Error: [user@0bd1bb78b1a5 mydata]$ ls ls: cannot open directory .: Permission denied When I try to ls to find the folder permission, it says 1001. What's happening, and how can to solve this? drwxrwxr-x. 2 1001 1001 38 Jun 2 23:12 mydata My local machine: [user@xxx07012 Myhostdir]$ pwd /home/user/Myhostdir [user@swathi07012 Myhostdir]$ ls -al total 12 drwxrwxr-x. 2 user user 38 Jun 2 23:12 . drwx------. 18 user user 4096 Jun 2 23:11 .. -rw-rw-r--. 1 user user 15 Jun 2 23:12 text.2.txt -rw-rw-r--. 1 user user 25 Jun 2 23:12 text.txt A: This is partially a Docker issue, but mostly an SELinux issue. I am assuming you are running an old 1.x version of Docker. You have a couple of options. First, you could take a look at this blog post to understand the issue a bit more and possibly use the fix mentioned there. Or you could just upgrade to a newer version of Docker. I tested mounting a simple volume on Docker version 18.03.1-ce: docker run -it --name test -v /home/chris/test:/mydata centos:latest /bin/bash [root@bfec7af20b99 /]# cd mydata/ [root@bfec7af20b99 mydata]# ls test.txt.txt [root@bfec7af20b99 mydata]# ls -l total 0 -rwxr-xr-x 1 root root 0 Jun 3 00:40 test.txt.txt
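One SELinux-side fix that is commonly used for exactly this symptom (and which the answer's blog-post link is likely getting at, though that is an assumption) is to ask Docker to relabel the host directory by adding a :Z or :z suffix to the volume flag:

sudo docker run -it --name test -v /home/user/Myhostdir:/mydata:Z centos:latest /bin/bash
# :Z gives the directory a label private to this container; :z shares the label across containers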
doc_1305
(Batch File) Command "D21" >> Myfile.txt Command "D22" >> Myfile.txt Command "D23" >> Myfile.txt Command "D24" >> Myfile.txt (Output file: Myfile.txt) Fail Succeed Fail Succeed What I would like to do is also send the command that was executed to that file so it might look like this... (Desired output file: Myfile.txt) Command "D21" Fail Command "D22" Fail Command "D23" Succeed Command "D24" Succeed Any thoughts on how I could accomplish this with a minimum of effort? A: I presume you want to selectively redirect echoed commands and output within the batch, and not the whole batch. @echo off echo before not echoed, not captured call :echoCommands >myFile.txt echo after not echoed, not captured exit /b :echoCommands echo on Command "D21" Command "D22" Command "D23" Command "D24" @echo off exit /b If you want to capture the entire batch file, then simply remove the redirection from the script, don't turn echo off, and redirect when you call the batch script. myScript >myFile.txt If you want to capture the entire file output with commands, and you really want to redirect within the script, then something like @if "%~1" neq "_GO_" ( >myFile.txt call "%~f0" _GO_ %* exit /b ) @echo on Command "D21" Command "D22" Command "D23" Command "D24" In all of the solutions above, each command will be printed on one line, and the output will follow on the subsequent line(s). A: This is the way to put command and response on the same line @echo off for /f "skip=6 delims=" %%i in (%~dpnx0) do ( <nul set /p =%%i >> MyFile.txt %%i >> MyFile.txt ) goto :eof Command "D21" Command "D22" Command "D23" Command "D24" A: There are two ways to do this. The first is a little "cryptic" but it gets it done in a one-line command: for %%f in ("Command "D21"" "Command "D22"" "Command "D23"" "Command "D24"") do for /F "usebackq delims=" %%g in (`%%~f`) do echo %%~f %%g >> MyFile.txt This should work just fine. It's just a little cubmersome to add more Command "xxx"'s. The second way is to use a "subroutine" to handle the work: @echo off call :work Command "D21" call :work Command "D22" call :work Command "D23" call :work Command "D24" goto :EOF :work for /F "usebackq delims=" %%g in (`%*`) do echo %* %%g goto :EOF
doc_1306
<input type='submit' name='submitDocUpdate' value='Save'/> And when the form gets submitted I check for that name. if(isset($_POST['submitDocUpdate'])){ //do stuff However, there is one time when I'm trying to submit the form via Javascript, rather than the submit button. document.getElementById("myForm").submit(); Which is working fine, except 1 problem. When I look at the $_POST values that are submitted via the javascript method, it is not including the submitDocUpdate. I get all the other values of the form, but not the submit button value. Like I said, I can think of a few ways to work around it (using a hidden variable, check isset on another form variable, etc) but I'm just wondering if this is the correct behavior of submit() because it seems less-intuitive to me. Thanks in advance. A: Why not use the following instead? <input type="hidden" name="submitDocUpdate" value="Save" /> A: Yes, that is the correct behavior of HTMLFormElement.submit() The reason your submit button value isn't sent is because HTML forms are designed so that they send the value of the submit button that was clicked (or otherwise activated). This allows for multiple submit buttons per form, such as a scenario where you'd want both "Preview" and a "Save" action. Since you are programmatically submitting the form, there is no explicit user action on an individual submit button so nothing is sent. A: Understanding the behavior is good, but here's an answer with some code that solved my problem in jquery and php, that others could adapt. In reality this is stripped out of a more complex system that shows a bootstrap modal confirm when clicking the delete button. TL;DR Have an input dressed up like a button. Upon click change it to a hidden input. html <input id="delete" name="delete" type="button" class="btn btn-danger" data-confirm="Are you sure you want to delete?" value="Delete"></input> jquery $('#delete').click(function(ev) { button.attr('type', 'hidden'); $('#form1').submit(); return false; }); php if(isset($_POST["delete"])){ $result = $foo->Delete(); } A: The submit button value is submitted when the user clicks the button. Calling form.submit() is not clicking the button. You may have multiple submit buttons, and the form.submit() function has no way of knowing which one you want to send to the server. A: Using a version of jQuery 1.0 or greater: $('input[type="submit"]').click(); I actually was working through the same problem when I stumbled upon this post. click() without any arguments fires a click event on whatever elements you select: http://api.jquery.com/click/ A: Here is another solution, with swal confirmation. I use data-* attribute to control form should be send after button click: <button type="submit" id="someActionBtn" name="formAction" data-confirmed="false" value="formActionValue">Some label</button> $("#someActionBtn").on('click', function(e){ if($("#someActionBtn").data("confirmed") == false){ e.preventDefault(); swal({ title: "Some title", html: "Wanna do this?", type: "info", showCancelButton: true }).then(function (isConfirm) { if (isConfirm.value) { $("#someActionBtn").data("confirmed", true); $("#someActionBtn").click(); } }); } }); A: i know this question is old but i think i have something to add... 
i went through the same problem and i think i found a simple, light and fast solution that i want to share with you <form onsubmit='realSubmit(this);return false;'> <input name='newName'/> <button value='newFile'/> <button value='newDir'/> </form> <script> function getResponse(msg){ alert(msg); } function realSubmit(myForm){ var data = new FormData(myForm); data.append('fsCmd', document.activeElement.value); var xhr = new XMLHttpRequest(); xhr.onload=function(){getResponse(this.responseText);}; xhr.open('POST', 'create.php'); // maybe send() detects urlencoded strings and setRequestHeader() could be omitted xhr.setRequestHeader('Content-Type','application/x-www-form-urlencoded'); xhr.send(new URLSearchParams(data)); // will send some post like "newName=myFile&fsCmd=newFile" } </script> summarizing... * *the functions in onsubmit form event are triggered before the actual form submission, so if your function submits the form early, then next you must return false to avoid the form be submitted again when back *in a form, you can have many <input> or <button> of type="submit" with different name/value pairs (even same name)... which is used to submit the form (i.e. clicked) is which will be included in submission *as forms submitted throught AJAX are actually sent after a function and not after clicking a submit button directly, they are not included in the form because i think if you have many buttons the form doesn't know which to include, and including a not pressed button doesn't make sense... so for ajax you have to include clicked submit button another way *with post method, send() can take a body as urlencoded string, key/value array, FormData or other "BodyInit" instance object, you can copy the actual form data with new FormData(myForm) *FormData objects are manipulable, i used this to include the "submit" button used to send the form (i.e. the last focused element) *send() encodes FormData objects as "multipart/form-data" (chunked), there was nothing i could do to convert to urlencode format... the only way i found without write a function to iterate formdata and fill a string, is to convert again to URLSearchParams with new URLSearchParams(myFormData), they are also "BodyInit" objects but return encoded as "application/x-www-form-urlencoded" references: * *https://developer.mozilla.org/en-US/docs/Web/API/Document/activeElement *https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/send *https://developer.mozilla.org/en-US/docs/Web/API/FormData *https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams/URLSearchParams *https://developer.mozilla.org/en-US/docs/Web/API/HTMLFormElement/requestSubmit#usage_notes (proves that form.submit() does not emulate a submit button click) A: Although the acepted answer is technicaly right. There is a way to carry the value you'd like to assign. In fact when the from is submited to the server the value of the submit button is associated to the name you gave the submit button. That's how Marcin trick is working and there is multiple way you can achive that depending what you use. Ex. 
in jQuery you could pass data: { submitDocUpdate = "MyValue" } in MVC I would use: @using (Html.BeginForm("ExternalLogin", "Account", new { submitDocUpdate = "MyValue" })) This is actually how I complied with steam requirement of using thier own image as login link using oAuth: @using (Html.BeginForm("ExternalLogin", "Account", new { provider = "Steam" }, FormMethod.Post, new { id = "steamLogin" })) { <a id="loginLink" class="steam-login-button" href="javascript:document.getElementById('steamLogin').submit()"><img alt="Sign in through Steam" src="https://steamcommunity-a.akamaihd.net/public/images/signinthroughsteam/sits_01.png"/></a> } A: Here is an idea that works fine in all browsers without any external library. HTML Code <form id="form1" method="post" > ...........Form elements............... <input type='button' value='Save' onclick="manualSubmission('form1', 'name_of_button', 'value_of_button')" /> </form> Java Script Put this code just before closing of body tag <script type="text/javascript"> function manualSubmission(f1, n1, v1){ var form_f = document.getElementById(f1); var fld_n = document.createElement("input"); fld_n.setAttribute("type", "hidden"); fld_n.setAttribute("name", n1); fld_n.setAttribute("value", v1); form_f.appendChild(fld_n); form_f.submit(); } </script> PHP Code <?php if(isset($_POST['name_of_button'])){ // Do what you want to do. } ?> Note: Please do not name the button "submit" as it may cause browser incompatibility.
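Following up on the HTMLFormElement.requestSubmit() reference cited in one of the answers above: in browsers that support it, requestSubmit() behaves as if the given button had been clicked, so validation runs and the button's name/value pair is included in the POST. A minimal sketch using the submitDocUpdate button from the question:

var form = document.getElementById('myForm');
var button = form.querySelector('input[name="submitDocUpdate"]');
if (form.requestSubmit) {
  form.requestSubmit(button);   // sends submitDocUpdate=Save, unlike form.submit()
} else {
  button.click();               // fallback for older browsers
}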
doc_1307
I need to determine whether the current OS supports the different versions of TLS. I've seen the table describing TLS support by Windows version, but following the guideline in Operating System Version: Identifying the current operating system is usually not the best way to determine whether a particular operating system feature is present.[...] Rather than using the Version API Helper functions to determine the operating system platform or version number, test for the presence of the feature itself. I don't want to hard code specific version names in my code, so I am looking for way to query whether this particular feature is supported, (e.g. through the Windows API or similar). P.S. It even seems hard to detect the actual version of the Windows these days, as e.g. both Windows Server 2019 and Windows Server 2022 would return 10.0.
doc_1308
Code snippet: DesiredCapabilities caps = DesiredCapabilities.phantomjs(); caps.setJavascriptEnabled(true); caps.setCapability("takesScreenshot", true); WebDriver driver = new PhantomJSDriver(caps); String screenShot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.BASE64);
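The snippet stops at the Base64 string; to keep the screenshot, the string still has to be decoded and written somewhere. A minimal continuation, assuming Java 8+ and an arbitrary output file name:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

byte[] pngBytes = Base64.getDecoder().decode(screenShot);  // decode the driver's Base64 payload
Files.write(Paths.get("screenshot.png"), pngBytes);        // write it out as a PNG file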
doc_1309
const searchInput = document.querySelector('.search'); const suggestions = document.querySelector('.suggestions'); searchInput.addEventListener('change', displayMatches); searchInput.addEventListener('keyup', displayMatches); this is the function - function displayMatches() { const matchArray = findMatches(this.value, name); const html = matchArray.map(place => { const regex = new RegExp(this.value); const nameName = place.name.replace(regex, `<span class="hl">${this.value}</span>`); return ` <a href="${place.url}" target="_blank"> <li> <span class="name">${nameName} <br> ${(place.price)}</span> <img src="${place.imgurl}" alt="Drink Image" height="87.5" width="100"> </li> </a> `; }).join(''); suggestions.innerHTML = html; } All Current Code Below : const endpoint = "https://gist.githubusercontent.com/valeriu7474/4df04fafd994c2f778847a3e94451b44/raw/d288ddbc9cbc8bbcf89a10f2a8ead9eecb4962f6/allcurrentshops"; const name = []; fetch(endpoint).then(blob => blob.json()) .then(data => name.push(...data)); function findMatches(wordToMatch, name) { return name.filter(place => { //we need to figure out if the name match const regEx = new RegExp(wordToMatch, 'gi'); return place.name.match(regEx); }); } // function displayMatches() { // const matchArray = findMatches(this.value, name); // const html = matchArray.map(place => { // const regex = new RegExp(this.value); // const nameName = place.name.replace(regex, `<span class="hl">${this.value}</span>`); // return ` // <a href="${place.url}" target="_blank"> // <li> // <span class="name">${nameName} <br> ${(place.price)}</span> // <img src="${place.imgurl}" alt="Drink Image" height="87.5" width="100"> // </li> // </a> // `; // }).join(''); // suggestions.innerHTML = html; // } function displayMatches() { const searchText = document.querySelector('.search'); const matchArray = findMatches(searchText, name); const html = matchArray.map(place => { const regex = new RegExp(searchText); const nameName = place.name.replace(regex, <span class="hl">${searchText}</span>); return ` <a href="${place.url}" target="_blank"> <li> <span class="name">${nameName} <br> ${(place.price)}</span> <img src="${place.imgurl}" alt="Drink Image" height="87.5" width="100"> </li> </a> `; }).join(''); suggestions.innerHTML = html; } // const searchInput = document.querySelector('.search'); // const suggestions = document.querySelector('.suggestions'); const searchBtn = document.querySelector('.btn-search'); searchBtn.addEventListener('click', displayMatches); A: Remove the eventlisteners from the .search field, and add a button. const searchBtn = document.querySelector('.searchBtn'); searchBtn.addEventListener('click', displayMatches); Edit: If I'm interpreting the function correctly update displayMatches with a new variable for the search text entered. function displayMatches() { const searchText = document.querySelector('.search').value; const matchArray = findMatches(searchText, name); const html = matchArray.map(place => { const regex = new RegExp(searchText); const nameName = place.name.replace(regex, <span class="hl">${searchText}</span>); return ` <a href="${place.url}" target="_blank"> <li> <span class="name">${nameName} <br> ${(place.price)}</span> <img src="${place.imgurl}" alt="Drink Image" height="87.5" width="100"> </li> </a> `; }).join(''); suggestions.innerHTML = html; }
doc_1310
Details: I initialized the array in the main method, and the values were set in one method. I called the array values in a 2nd method, and everything was fine. When I tried to call the array in a 3rd method, I got the out of bounds error, even though the size of the array is exactly the same. I was trying to call the array in order to copy it, and then sort the 2nd array. thank you private static WeatherLocation[] WeatherSpots = new WeatherLocation[6]; private static Scanner Input = new Scanner(System.in); public static void main(String[] args) {int Count; for(Count = 0 ; Count < 6; Count++) WeatherSpots[Count] = new WeatherLocation(); WeatherSpots[0].LocationID = "Tciitcgaitc"; WeatherSpots[1].LocationID = "Redwood Haven"; WeatherSpots[2].LocationID = "Barrier Mountains"; WeatherSpots[3].LocationID = "Nina's Folly"; WeatherSpots[4].LocationID = "Scooly's Hill"; WeatherSpots[5].LocationID = "Twin Cones Park"; SetUp(); String Command = ""; while(!Command.equals("Quit")) { Menu(); System.out.print("Enter Command: "); Command = Input.nextLine(); if(Command.equals("Post")) PostTemperatureInfo(); if(Command.equals("Daily")) WeeklyReport(); else if (Command.equals("HighLow")) Sorting(); } } public static void PostTemperatureInfo() { Scanner LocalInput = new Scanner(System.in); int K; int Temp; //...then get the values for each location... System.out.println( "Enter the Temperature for each weather station below:\n"); System.out.println( "---------------------------------------------------------------"); for(K = 0 ; K < 6 ; K++) { System.out.println( "Weather Station: " + WeatherSpots[K].LocationID); //Display the location of the fishing spot... System.out.print( "Enter Temperature:\t"); //Get the count... Temp = LocalInput.nextInt(); System.out.println( "---------------------------------------------------------------"); WeatherSpots[K].CatchCount = Temp; } System.out.println(""); System.out.println(""); System.out.println(""); } public static void WeeklyReport() { for(K = 0 ; K < 6 ; K++) {System.out.println( "" + WeatherSpots[K].LocationID +"\t\t" + WeatherSpots[K].CatchCount + "\t\t" + String.format("%.2f", (WeatherSpots[K].CatchCount - 32) * 5 / 9)); } } public static void Sorting() {int K = 0; for(K = 0 ; K < 6 ; K++); {int [] copycat = new int[K]; System.arraycopy(WeatherSpots[K].CatchCount, 0, copycat[K], 0, 6); System.out.println("" + copycat[K]); Arrays.sort(copycat, 0, K); System.out.println("Minimum = " + copycat[0]); System.out.println("Maximum = " + copycat[K -1]); } } } A: Q: Why not use "array.length" instead of a hard-coded "6"? Q: I'd really discourage you from using that indentation style, if you can avoid it. Anyway - this should work (I have not tried it myself): public static void Sorting() { for(int K = 0 ; K < WeatherSpots.length ; K++) { int [] copycat = new int[K]; System.arraycopy( WeatherSpots[K].CatchCount, 0, copycat[K], 0, WeatherSpots.length); System.out.println("" + copycat[K]); Arrays.sort(copycat, 0, K); System.out.println("Minimum = " + copycat[0]); System.out.println("Maximum = " + copycat[K -1]); } } The main thing was to get rid of the extraneous ";" after the "for()" loop. A: The problem is that you are allocating an array copycat that is only K integers long, and then you are trying to fit 6 elements into it, even when K == 0. I don't understand your code enough to figure out what the right indexes are, but that's the source of your problem. Actually, I don't believe that your code as posted will compile. 
This line from Sorting(): System.arraycopy(WeatherSpots[K].CatchCount, 0, copycat[K], 0, 6); seems mighty suspicious. The first and third arguments to System.arraycopy are supposed to be arrays, but copycat[K] is an int. Apparently so is WeatherSpots[K].CatchCount. EDIT: It seems from your comments and code that the Sorting() routine is just supposed to print the min and max values of WeatherSpots[K].CatchCount. This can be done much more easily than you are doing. Here's one way: public static void Sorting() { int min = Integer.MAX_VALUE; int max = Integer.MIN_VALUE; for (WeatherLocation loc : WeatherSpots) { final int count = loc.CatchCount; if (count < min) { min = count; } if (count > max) { max = count; } } System.out.println("Minimum = " + min); System.out.println("Maximum = " + max); }
doc_1311
class Client { constructor() { this.clients = ''; this.client_secret = ''; } clients: string; client_secret: string; } I want class UpdateClient to be like this class UpdateClient { constructor() { this.clients = ''; } clients: string; } Now, I'm sure there will be few approaches in vanilla JS by which I can get the task done, like iterating over all enumerable properties of class client, but I don't want to that. I want a typescript specific solution. I found Omit type utility and it's working as expected. However, there's a small issue which I'm unable to fix. This is the whole code snippet class Client { constructor() { this.clients = ''; this.client_secret = ''; } clients: string; client_secret: string; } type T = Omit<Client, 'client_secret'> I'm getting a type instead of a class. I want to somehow convert this type T to the class UpdateClient and export it. The exported property needs to be a class because the other module using this one expects a class. I'm using typescript v3.7.5 A: If all you want is for UpdateClient to be a class constructor that makes instances of Omit<Client, 'client_secret'>, you can write it this way: const UpdateClient: new () => Omit<Client, 'client_secret'> = Client; The declared type new () => ... means "a constructor which takes no arguments and produces an instance of ...". The syntax is either called a constructor signature or "newable" and is part of the static side of a class. The fact that the above code, assigning Client to the variable UpdateClient, compiles without error shows that the compiler agrees that Client does act like a no-arg constructor of Omit<Client, 'client_secret'>. If, for example, Client's constructor required an argument, or if Omit<Client, 'client_secret'> weren't a supertype of Client, you'd get an error: class RequiresArg { constructor(public clients: string) { } } const Oops: new () => Omit<Client, 'client_secret'> = RequiresArg; // error // Type 'typeof RequiresArg' is not assignable to type 'new () => Pick<Client, "clients">' class NotCompatible { clients?: number; } const StillOops: new () => Omit<Client, 'client_secret'> = NotCompatible; // error // Type 'number | undefined' is not assignable to type 'string'. Anyway, then this will work: const c = new UpdateClient(); c.clients; // okay c.client_secret; // error at compile time, although it does exist at runtime Do note that even though UpdateClient's instances are not known by the compiler to have a client_secret property, it's still just an instance of Client at runtime, so the property will definitely exist at runtime. If that's a problem you should probably do something completely different. But since you said Omit<...> works for you, I guess that's not an issue. Okay, hope that helps; good luck! Playground link to code
doc_1312
I have create the two models Order and OrderStatus. Now, I want to fetch an Order model by it's status. Unfortunately, this approach isn't working anymore, as the load method expects a string or string array now. const order = await new Order().load({"orderStatus": q => q.where({"userId": userId, "status": 10})}); I've found a library called "bookshelf-eloquent" that adds this functionality. However, I'm using Typescript and this library doesn't provide any type declaration. This code works, but TypeScript indicates that the property whereHas doesn't exist. const order = await new Order() .whereHas("orderStatus", q => q.where({"userId": userId, "status": 10})) .get() Either the Bookshelf.js developers have added a new method I haven't seen yet, or I need to have a type declaration file for the bookshelf-eloquent library. Otherwise, I can't use it.
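Until real typings exist for bookshelf-eloquent, one stopgap is a hand-written interface plus a cast, which only tells the compiler what you intend to call; every name and signature below is an assumption for illustration, not the library's actual type surface:

interface EloquentQueries<T> {
    whereHas(relation: string, filter?: (q: any) => any): EloquentQueries<T>;
    get(options?: any): Promise<T>;
}

const order = await (new Order() as unknown as EloquentQueries<Order>)
    .whereHas("orderStatus", q => q.where({ userId: userId, status: 10 }))
    .get();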
doc_1313
My singleton interface file is as follows: @interface gameData : NSObject <NSCoding> @property (assign, nonatomic) long score; @property (assign, nonatomic) long level; @property (assign, nonatomic) long riddlesCompleted; @property (assign, nonatomic) long hints; @property (assign, nonatomic) long firstLetters; @property (assign, nonatomic) long answers; +(instancetype)sharedGameData; -(void)reset; -(void)save; @end Then the implementation file sets up the encoders and decoders as follows: -(void)encodeWithCoder:(NSCoder *)aCoder{ [aCoder encodeDouble:self.score forKey:gameDataScoreKey]; [aCoder encodeDouble:self.level forKey:gameDataLevelKey]; [aCoder encodeDouble:self.riddlesCompleted forKey:gameDataRiddlesCompletedKey]; [aCoder encodeDouble:self.hints forKey:gameDataHintsKey]; [aCoder encodeDouble:self.firstLetters forKey:gameDataFirstLettersKey]; [aCoder encodeDouble:self.answers forKey:gameDataAnswersKey]; } -(instancetype)initWithCoder:(NSCoder *)decoder{ self = [self init]; if (self) { _score = [decoder decodeDoubleForKey:gameDataScoreKey]; _level = [decoder decodeDoubleForKey:gameDataLevelKey]; _riddlesCompleted = [decoder decodeDoubleForKey:gameDataRiddlesCompletedKey]; _hints = [decoder decodeDoubleForKey:gameDataHintsKey]; _firstLetters = [decoder decodeDoubleForKey:gameDataFirstLettersKey]; _answers = [decoder decodeDoubleForKey:gameDataAnswersKey]; } return self; } +(instancetype) sharedGameData{ static id sharedInstance = nil; static dispatch_once_t onceToken; dispatch_once(&onceToken, ^{ sharedInstance = [self loadInstance]; }); return sharedInstance; } Values are initialised as follows: -(id)init{ if(self = [super init]){ _score = 500; _riddlesCompleted = 0; _level = 1; _hints = 3; _firstLetters = 3; _answers = 3; } return self; } and then the instance is loaded: +(NSString*)filePath{ static NSString* filePath = nil; if (!filePath) { filePath = [[NSSearchPathForDirectoriesInDomains(NSDocumentationDirectory, NSUserDomainMask, YES) firstObject] stringByAppendingString:@"gameData"]; } return filePath; } +(instancetype)loadInstance{ NSData* decodeData = [NSData dataWithContentsOfFile:[gameData filePath]]; if (decodeData) { gameData* gameData = [NSKeyedUnarchiver unarchiveObjectWithData:decodeData]; return gameData; } return [[gameData alloc] init]; } Then elsewhere in the application when I try to access these values I am not able to access the values of hints, firstletters or answers. If I try logging the values as follows: NSLog([NSString stringWithFormat:@"%li", [gameData sharedGameData].score] ); NSLog([NSString stringWithFormat:@"%li", [gameData sharedGameData].hint] ); NSLog([NSString stringWithFormat:@"%li", [gameData sharedGameData].answers]); NSLog([NSString stringWithFormat:@"%li", [gameData sharedGameData].firstLetters]); The output I get is 500 for score but for all the others I get 0 even though they are initialised in the gameData.m file with values 3. A: Just a guess: In loadInstance() you read the values from the file, so if there is a file you won't get in the init() where you set the values. To be sure there is no gamedata-file you should reset the simulator oder delete the app from the device and try again.
doc_1314
Essentially I am trying to make sure that this job (or file in this case) has been claimed by this user by checking if their ID matches with the text that is after "Pilot:", if not, they can't un-claim it and that causes the script to return a message to the user via ctx.send(). I have tried... @bot.command() #Work in progress async def unclaim(ctx, *, message=None): author = ctx.author.id author = '<@'+str(author)+'>' mylines = [] with fileinput.input(cwd+'/jobs/'+message+'.txt', inplace=True) as f: for line in f: mylines.append(line) pilot = mylines[2] pilot = pilot.split(':')[1] if author != pilot: await ctx.send(f'{ctx.author.mention}, you have not claimed this job.') else: print(line.replace('<@'+str(author)+'>', 'no one'), end='') Error Received Updated with new error I noticed that running unclaim results in the file being accessed to be emptied. Not sure why. Ignoring exception in on_command_error Traceback (most recent call last): File "C:\Users\Owner\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\discord\ext\commands\core.py", line 85, in wrapped ret = await coro(*args, **kwargs) File "D:\Projects\dispatch_bot\bot.py", line 98, in unclaim pilot = mylines[2] IndexError: list index out of range The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Users\Owner\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\discord\client.py", line 312, in _run_event await coro(*args, **kwargs) File "D:\Projects\dispatch_bot\bot.py", line 45, in on_command_error raise error File "C:\Users\Owner\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\discord\ext\commands\bot.py", line 903, in invoke await ctx.command.invoke(ctx) File "C:\Users\Owner\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\discord\ext\commands\core.py", line 855, in invoke await injected(*ctx.args, **ctx.kwargs) File "C:\Users\Owner\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\discord\ext\commands\core.py", line 94, in wrapped raise CommandInvokeError(exc) from exc discord.ext.commands.errors.CommandInvokeError: Command raised an exception: IndexError: list index out of range Text file being accessed... Leg: KBYS>KSQL Max Weight/Pax: 0/12 Pilot: no one A: Code that fixed the issue. @bot.command() #Work in progress async def unclaim(ctx, *, message=None): author = ctx.author.id author = ' <@'+str(author)+'>' #need to remove space from front of <@ in text file author_is_pilot = False mylines = [] with open(cwd+'/jobs/'+message+'.txt', 'r+') as f: for line in f: mylines.append(line) pilot = mylines[2] pilot = pilot.split(':')[1] if author == pilot: author_is_pilot = True with fileinput.input(cwd+'/jobs/'+message+'.txt', inplace=True) as f: for line in f: print(line.replace(author, ' no one'), end='') else: await ctx.send(f'{ctx.author.mention}, you have not claimed this job.') Two issues with my code in the beginning. * *(As suggest by Pranav) the pilot=mylines[2] and the subsequent line in the for loop made the list I was trying to access out of range. *The variable author did not take the space into account when a job was claimed. 
(i.e. the file stores the pilot as " <@238472959>", with a leading space before "<@"). I do think the code I wrote originally can also be used, but it might need the fixes listed above. Will test another day after some sleep. I apologize if this question was not necessary; I was under the impression that, because I was using fileinput, I was limited in what I could do within the with block.
doc_1315
I need a query to fetch only one post of each user (like group by in SQL) POSTS collection data { language:'english', status:'A', desc:'Hi there', userId:'5b891370f43fe3302bbd8918' },{ language:'english', status:'A', desc:'Hi there - 2' userId:'5b891370f43fe3302bbd8918' },{ language:'english', status:'A', desc:'Hi there - 3' userId:'5b891370f43fe3302bbd8001' } Here is my query db.col('posts').aggregate([ { $match: { language: 'english', status: "A" } }, { $sample: { size: 10 } }, { $sort: { _id: -1 } }, { $lookup: { from: 'users', localField: 'userId', foreignField: '_id', as: 'ownerData' } }], (err, data) => { console.log(err,data) }); Desired Output { language:'english', status:'A', desc:'Hi there', userId:'5b891370f43fe3302bbd8918', ownerData:[[object]] },{ language:'english', status:'A', desc:'Hi there - 3' userId:'5b891370f43fe3302bbd8001', ownerData:[[object]] } A: $group: will as group by of mysql. $first: will take first element of collection field from group. $lookup acts as join in mysql. db.tempdate.aggregate([ { $group : { _id : "$userId", language : { $first: '$language' }, status : { $first: '$status' }, desc : { $first: '$desc' } } }, { $lookup: { from: "user", localField: "_id", foreignField: "user_id", as: "userData" } } ]).pretty();` Output `{ "_id" : "5b891370f43fe3302bbd8001", "language" : "english", "status" : "A", "desc" : "Hi there - 3", "userData" : [ { "_id" : ObjectId("5ba3633a12b8613823f3056e"), "user_id" : "5b891370f43fe3302bbd8001", "name" : "Bhuwan" } ] } { "_id" : "5b891370f43fe3302bbd8918", "language" : "english", "status" : "A", "desc" : "Hi there", "userData" : [ { "_id" : ObjectId("5ba3634612b8613823f3056f"), "user_id" : "5b891370f43fe3302bbd8918", "name" : "Harry" } ] } A: You can use $group aggregation stage for the distinct userId and then use $lookup to get users data. db.col('posts').aggregate([ { "$match": { "language": 'english', "status": "A" }}, { "$sample": { "size": 10 }}, { "$sort": { "_id": -1 }}, { "$group": { "_id": "$userId", "language": { "$first": "$language" }, "status": { "$first": "$status" }, "desc": { "$first": "$desc" } }}, { "$lookup": { "from": "users", "localField": "_id", "foreignField": "_id", "as": "ownerData" }} ]) A: Also, you can use group and $last db.getCollection('posts').aggregate([ { "$match": { "language": 'english', "status": "A" }}, { "$group": { "_id": "$userId", "primaryId" : { "$last": "$_id" }, "language": { "$last": "$language" }, "status": { "$last": "$status" }, "desc": { "$last": "$desc" } }}, { "$lookup": { "from": "users", "localField": "_id", "foreignField": "_id", "as": "ownerData" }}, { $unwind:{path: '$ownerData',preserveNullAndEmptyArrays: true} //to convert ownerData to json object } ])
doc_1316
The code on the left is how they suggest I do it, but for some reason the application does not render correctly. If I make a modification, as you can see in the code on the right, the application renders correctly. However, I need to do it the same way they suggest in the tutorial. The problem is probably with the const variable, but I don't know. Any idea how to solve this? A: Class members cannot be defined with const. Within a class method you can define a const. For class members, you can use TypeScript's private and readonly modifiers instead.
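A minimal sketch of that distinction (the class and member names are placeholders, since the original screenshots are not included here):

class Example {
    private readonly greeting: string = "hello";  // class member: use readonly, not const

    greet(): string {
        const suffix = "!";                       // const is fine inside a method body
        return this.greeting + suffix;
    }
}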
doc_1317
The NodeJS, ExpressJS will be hosting REST API's and I want to secure them using Azure AD. I want to use Auth Code flow. My question is: I have put my thoughts in the diagram, is this the right approach? A: This approach looks good to me. I am thinking of it as an advanced version of something like JWT (https://jwt.io/) based authentication. Please see the steps below for JWT: * *The client requests authentication by providing credentials. *The server provides the client with the token that is encrypted using the private key present in the server. *The JWT is stored in client's session and is sent to the server anytime the client requests something from it requiring authentication. *The server then decrypts the token using the public/private key and sends the response back to the client. *A session is validated at this point. With the architecture you have described above, it does the exact same thing except the means to encrypt (generate) and decrypt (verify) the token exists with Azure AD. Below are the steps for achieving authentication based on your architecture: * *The client requests authentication by providing credentials. *The Azure AD server does a 2FA kind of thing but in the end provides the token (equivalent to JWT in the previous approach). *The token is stored in client's session and is sent to the application backend server anytime the client requests something from it requiring authentication. *The backend server uses Azure AD for verifying the token (similar to the decryption/verification step of JWT) and sends the response back to the client. *A session is validated at this point. I would suggest a small change to this though. If you look at the step 4 above. The application server will keep hitting Azure AD every time it needs to authenticate the session. If you could add an actual JWT for this phase, it may help in avoiding these redundant calls to Azure. So the steps described above for JWT may be added after the 4th step for Azure AD described above i.e. create a JWT and store it in clients session once everything is verified from Azure and then keep using JWT based authentication in the future for current session. If required, JWT can be stored in the browser cookies and calls to Azure AD can totally be avoided for a specific period. However, our objective here is not to decrease load on Azure AD server but just suggesting a way of using JWT in this specific situation. I hope it helps.
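To make the suggestion about avoiding a round trip to Azure AD on every request a bit more concrete, here is a minimal Express middleware sketch for validating the short-lived session JWT. The jsonwebtoken package and the APP_SECRET environment variable are assumptions for illustration; the Azure AD token validation itself would happen once, wherever the session is first established:

const jwt = require('jsonwebtoken');
const APP_SECRET = process.env.APP_SECRET;  // assumed application-held secret for the session JWT

function requireSession(req, res, next) {
  const header = req.headers.authorization || '';
  const token = header.startsWith('Bearer ') ? header.slice(7) : null;
  if (!token) return res.status(401).json({ error: 'missing token' });
  try {
    req.user = jwt.verify(token, APP_SECRET);  // throws if the token is expired or tampered with
    next();
  } catch (err) {
    res.status(401).json({ error: 'invalid or expired session' });
  }
}

// app.get('/api/orders', requireSession, ordersHandler);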
doc_1318
Code: class Try def method_missing(method_name, *args) logger.warn "I am trying to call #{method_name} with these arguments #{args}" super end end Try.new.dummy(1, "my name is rosy.") Error received: stack level too deep (SystemStackError) Please tell me how to solve this problem. A: I'm assuming you are not in a Rails app. Have you instantiated the logger instance? Since logger is not defined anywhere on Try, calling it inside method_missing dispatches to method_missing again (this time for :logger), which calls logger again, and so on until the stack overflows. You first need a logger the class can see, e.g.: require 'logger' logger = Logger.new(STDOUT) logger.level = Logger::WARN logger.warn "test"
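A minimal sketch of the class with a logger the instance can actually see (a constant is used purely for brevity; any logger object reachable from inside the class works):

require 'logger'

class Try
  LOGGER = Logger.new($stdout)

  def method_missing(method_name, *args)
    LOGGER.warn "Tried to call #{method_name} with arguments #{args.inspect}"
    super # still raises NoMethodError, but without the infinite recursion
  end
end

Try.new.dummy(1, "my name is rosy.") # logs the warning, then raises NoMethodError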
doc_1319
Example: (Windows 8 Task Manager) I want to get that 2.9% with a command. A: Here is the correct answer which is support case then you have multiple processs with same name https://stackoverflow.com/a/34844682/483997 # To get the PID of the process (this will give you the first occurrance if multiple matches) $proc_pid = (get-process "slack").Id[0] # To match the CPU usage to for example Process Explorer you need to divide by the number of cores $cpu_cores = (Get-WMIObject Win32_ComputerSystem).NumberOfLogicalProcessors # This is to find the exact counter path, as you might have multiple processes with the same name $proc_path = ((Get-Counter "\Process(*)\ID Process").CounterSamples | ? {$_.RawValue -eq $proc_pid}).Path # We now get the CPU percentage $prod_percentage_cpu = [Math]::Round(((Get-Counter ($proc_path -replace "\\id process$","\% Processor Time")).CounterSamples.CookedValue) / $cpu_cores) A: Get-Process -Name PowerShell | Select CPU Is this what you're looking for? Or something more monitoring based? param ( [String] [Parameter(Mandatory)] $Title ) do { $process = Get-Process -Name $Title $process Start-Sleep -Seconds 1 } while ($process) A: Get-Process -Name system | select CPU Get the cpu time at 2 instance as (cpu2-cpu1)/(t2-t1)*100. You will get CPU value in %. Get-Process -Name system | select CPU # Get the cpu time at 2 instance as (cpu2-cpu1)/(t2-t1)*100. You will get CPU value in %. $processName = 'OUTLOOK' $sleep_time = 1 # value in seconds while (1) { $CPU_t1 = Get-Process -Name $processName | Select CPU $CPU_t1_sec = $($CPU_t1.CPU) #Write-Host "CPU_t1: $($CPU_t1.CPU)" $date1 = (Get-Date) sleep -Seconds $sleep_time $CPU_t2=Get-Process -Name $processName | Select CPU #Write-Host "CPU_t2: $($CPU_t2.CPU)" $CPU_t2_sec = $($CPU_t2.CPU) $date2 = (Get-Date) $date_diff = $date2 - $date1 $diff_time = $date_diff.seconds #Write-Host "TimeDiff: $diff_time" #compute them to get the percentage $CPU_Utilization = ($CPU_t2_sec - $CPU_t1_sec)/$diff_time $CPU_Utilization_per = $CPU_Utilization * 100 #Sleep $sleep_time Clear-Host Write-Host "CPU_Utilization_Per: $CPU_Utilization_per" #Write-Host "=====================" }
doc_1320
Is there a way to directly stream the R commands into R without me needing to make an R script file? A: Use rpy2 (link). You can run R directly from your Python script. Here is the specific documentation for plotting using rpy2.
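A minimal sketch, assuming rpy2 and R itself are installed; the R code fed in here is just an arbitrary example:

import rpy2.robjects as robjects

# R code is passed to the embedded R session as plain strings; no .R script file is needed.
robjects.r('x <- rnorm(100)')
result = robjects.r('mean(x)')
print(result[0])  # R vectors come back as sequence-like objects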
doc_1321
After the command: >gradlew html:superDev I'm getting this error message: > Task :html:beforeRun FAILED FAILURE: Build failed with an exception. * What went wrong: Execution failed for task ':html:beforeRun'. > Could not resolve all files for configuration ':html:grettyRunnerJetty94'. > Could not find org.gretty:gretty-runner-jetty94:3.0.2. Searched in the following locations: - https://repo.maven.apache.org/maven2/org/gretty/gretty-runner-jetty94/3.0.2/gretty-runner-jetty94-3.0.2.pom - https://dl.google.com/dl/android/maven2/org/gretty/gretty-runner-jetty94/3.0.2/gretty-runner-jetty94-3.0.2.pom - https://oss.sonatype.org/content/repositories/snapshots/org/gretty/gretty-runner-jetty94/3.0.2/gretty-runner-jetty94-3.0.2.pom - https://oss.sonatype.org/content/repositories/releases/org/gretty/gretty-runner-jetty94/3.0.2/gretty-runner-jetty94-3.0.2.pom Required by: project :html A: To solve this problem, edit the file build.gradle inside your project directory, to reflect the new version of gretty (present is 3.0.3, find any update at https://plugins.gradle.org/plugin/org.gretty). Look for section "buildscript" subsection "dependencies": dependencies { classpath "gradle.plugin.org.gretty:gretty:3.0.3" } Replace: classpath 'org.gretty:gretty:3.0.2' (or whatever wrong version text) With: classpath "gradle.plugin.org.gretty:gretty:3.0.3" (or whatever line of text version present in the gradle link https://plugins.gradle.org/plugin/org.gretty) At the "allprojects" section, look for the "repositories" subsection and add the maven repository as presented in the link https://plugins.gradle.org/plugin/org.gretty. repositories { mavenLocal() mavenCentral() google() maven { url "https://plugins.gradle.org/m2/" } } Save the file and rebuild your project with: >gradlew html:superDev
doc_1322
I have done some test deployments, with respective services and everything works, here the file Deploy1: apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: helloworld1 spec: selector: matchLabels: app: helloworld1 replicas: 1 template: metadata: labels: app: helloworld1 spec: containers: - name: hello image: gcr.io/google-samples/hello-app:1.0 ports: - containerPort: 8080 Deploy2: apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: helloworld2 spec: selector: matchLabels: app: helloworld2 replicas: 1 template: metadata: labels: app: helloworld2 spec: containers: - name: hello image: gcr.io/google-samples/hello-app:2.0 ports: - containerPort: 8080 Deploy3: apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: geojson-example spec: selector: matchLabels: app: geojson-example replicas: 1 template: metadata: labels: app: geojson-example spec: containers: - name: geojson-container image: "nmex87/geojsonexample:latest" ports: - containerPort: 8080 Service1: apiVersion: v1 kind: Service metadata: name: helloworld1 spec: # type: NodePort ports: - port: 8080 selector: app: helloworld1 Service2: apiVersion: v1 kind: Service metadata: name: helloworld2 spec: # type: NodePort ports: - port: 8080 selector: app: helloworld2 Service3: apiVersion: v1 kind: Service metadata: name: geojson-example spec: ports: - port: 8080 selector: app: geojson-example This is the ingress controller: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test-ingress annotations: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/default-backend: geojson-example spec: rules: - http: paths: - path: /geo pathType: Prefix backend: service: name: geojson-example port: number: 8080 - path: /test1 pathType: Prefix backend: service: name: helloworld1 port: number: 8080 - path: /test2 pathType: Prefix backend: service: name: helloworld2 port: number: 8080 When I do a GET on myServer:myPort/test1 or /test2 everything works, on /geo i get the following answer { "timestamp": "2021-03-09T17:02:36.606+00:00", "status": 404, "error": "Not Found", "message": "", "path": "/geo" } Why?? if I create a pod, and from inside the pod, i do a curl on geojson-example it works, but from the external, i obtain a 404 (i think by nginx ingress controller) This is the log of nginx pod: x.x.x.x - - [09/Mar/2021:17:02:21 +0000] "GET /test1 HTTP/1.1" 200 68 "-" "PostmanRuntime/7.26.8" 234 0.006 [default-helloworld1-8080] [] 192.168.168.92:8080 68 0.008 200 x.x.x.x - - [09/Mar/2021:17:02:36 +0000] "GET /geo HTTP/1.1" 404 116 "-" "PostmanRuntime/7.26.8" 232 0.013 [default-geojson-example-8080] [] 192.168.168.109:8080 116 0.012 404 What can I do? A: As far the doc: This annotation is of the form nginx.ingress.kubernetes.io/default-backend: <svc name> to specify a custom default backend. This <svc name> is a reference to a service inside of the same namespace in which you are applying this annotation. This annotation overrides the global default backend. This service will be handle the response when the service in the Ingress rule does not have active endpoints. You cannot use same service as default backend and also for a path. When you do this the path /geo became invalid. As we know default backend serves only the inactive endpoints. 
Now If you tell that you want geojson-example as default backend(for inactive endpoints) again in the paths if you tell that use geojson-example for a valid path /geo then it became invalid as you are creating a deadlock type situation here. You actually do not need to give this nginx.ingress.kubernetes.io/default-backend annotation. Your ingress should be like below without the default annotation, or you can use the annotation but in that case you need to remove geojson-example from using for any valid path in the paths, or need to use another service for the path /geo. Options that you can use are given below: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test-ingress annotations: kubernetes.io/ingress.class: "nginx" spec: rules: - http: paths: - path: /geo pathType: Prefix backend: service: name: geojson-example port: number: 8080 - path: /test1 pathType: Prefix backend: service: name: helloworld1 port: number: 8080 - path: /test2 pathType: Prefix backend: service: name: helloworld2 port: number: 8080 Or: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test-ingress annotations: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/default-backend: geojson-example spec: rules: - http: paths: - path: /geo pathType: Prefix backend: service: name: <any_other_service> # here use another service except `geojson-example` port: number: 8080 - path: /test1 pathType: Prefix backend: service: name: helloworld1 port: number: 8080 - path: /test2 pathType: Prefix backend: service: name: helloworld2 port: number: 8080 Or: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test-ingress annotations: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/default-backend: geojson-example spec: rules: - http: paths: - path: /test1 pathType: Prefix backend: service: name: helloworld1 port: number: 8080 - path: /test2 pathType: Prefix backend: service: name: helloworld2 port: number: 8080 A: This is for your default backend. You set the geojson-example service as a default backend. The default backend is a service which handles all URL paths and hosts the nginx controller doesn't understand (i.e., all the requests that are not mapped with an Ingress). Basically a default backend exposes two URLs: /healthz that returns 200 / that returns 404 So , if you want geojson-example service as a default backend then you don't need /geo path specification. Then your manifest file will be: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test-ingress annotations: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/default-backend: geojson-example spec: rules: - http: paths: - path: /test1 pathType: Prefix backend: service: name: helloworld1 port: number: 8080 - path: /test2 pathType: Prefix backend: service: name: helloworld2 port: number: 8080 Or if you want geojson-example as a ingress valid path then you have to remove default backend annotation. Then your manifest file will be: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test-ingress annotations: kubernetes.io/ingress.class: "nginx" spec: rules: - http: paths: - path: /geo pathType: Prefix backend: service: name: geojson-example port: number: 8080 - path: /test1 pathType: Prefix backend: service: name: helloworld1 port: number: 8080 - path: /test2 pathType: Prefix backend: service: name: helloworld2 port: number: 8080
doc_1323
var start = 1; var end = 20; var currentPos = 10; How can I calculate the currentPos value as a percentage? Obviously it would be 50%, but I'm wondering how to calculate this with any variables, for example: var start = 11; var end = 20; var currentPos = 12; The end and start could potentially be the same value too. A: Assuming you won't have end and start being the same and start<=currentPos<=end: var currentPercentile = (currentPos-start)/(end-start); Update: if your end and start can be the same value then it can be 100, 0, or unknown since it'd be outside the range, your choice. var currentPercentile = 100; if(end-start >0){ currentPercentile = (currentPos-start)/(end-start); }
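For illustration, the same idea as a tiny runnable sketch, multiplying by 100 so the result is an actual percentage rather than a fraction, and returning 100 for the empty-range case (that choice is as arbitrary as noted in the update above):

function toPercent(start, end, currentPos) {
    if (end === start) return 100;   // empty range: pick whatever convention suits you
    return ((currentPos - start) / (end - start)) * 100;
}

console.log(toPercent(11, 20, 12));   // about 11.1
console.log(toPercent(5, 5, 5));      // 100, by the convention above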
doc_1324
class NumberControllerFactory implements FactoryInterface{ public function __invoke(ContainerInterface $container, $requestedName, array $options = null) { return new NumberController($container->get(Bar::class)); } public function createService(ServiceLocatorInterface $services) { return $this($services, NumberController::class); } } I got error: Fatal error: Declaration of Number\Factory\NumberControllerFactory::__invoke() must be compatible with Zend\ServiceManager\Factory\FactoryInterface::__invoke(Interop\Container\ContainerInterface $container, $requestedName, array $options = NULL) in C:\xampp\htdocs\MyProject\module\Number\src\Number\Factory\NumberControllerFactory.php on line 10 I need this, because I want to inject model to controller, because service manager has been removed from controllers in Zend 3. I used skeleton described in https://framework.zend.com/manual/2.4/en/ref/installation.html In composer.json is: "require": { "php": "^5.6 || ^7.0", "zendframework/zend-component-installer": "^1.0 || ^0.3 || ^1.0.0-dev@dev", "zendframework/zend-mvc": "^3.0.1", "zfcampus/zf-development-mode": "^3.0" }, I don't understand this problem, I read a lot of tutorials, for example: https://zendframework.github.io/zend-servicemanager/migration/ coould You help me, please? I guess that currently this method is compatible with Zend\ServiceManager\Factory\FactoryInterface::__invoke A: For injecting model into the controller, you need to create a factory class while configuration in module.config.php as below 'controllers' => [ 'factories' => [ Controller\AlbumController::class => Factory\AlbumControllerFactory::class, ], ], Here AlbumController is the controller class of Album module. After that you need to create a AlbumControllerFactory class inside the module\Album\src\Factory. In this class you need to write the code below: namespace Album\Factory; use Album\Controller\AlbumController; use Album\Model\AlbumTable; use Interop\Container\ContainerInterface; use Zend\ServiceManager\Factory\FactoryInterface; class AlbumControllerFactory implements FactoryInterface { public function __invoke(ContainerInterface $container, $requestedName, array $options = null) { return new AlbumController($container->get(AlbumTable::class)); } } You need to write the below code inside the controller class(AlbumController). public function __construct(AlbumTable $album) { $this->table = $album; } This way you can inject the model class into the controller class. A: Thanks Azhar. My problem was because when I used: 'factories' => array( \Number\Controller\NumberController::class => \Number\Factory\NumberControllerFactory::class ) it wasn't work, there was 404... I had to use: 'Number\Controller\Number' => \Number\Factory\NumberControllerFactory::class in documentation is that I should use full class name ::class. Does somebody know why it doesn't work?
doc_1325
The problem is that after using the application for some time, App info shows me a huge amount of Storage data and Cache being "eaten". Right now it shows Storage Total: 9,05MB, Application: 4,78MB, Data: 4,27MB, Cache: 11,91MB. The cache and data get bigger each time I use the app. Can Android handle this by itself, or do I have to clear the cache manually from my app's source code? I don't understand why the data keeps growing, since it doesn't seem normal to have to erase data programmatically.
doc_1326
Essentially I would like to create a visualisation similar to this one created by Githut - https://madnight.github.io/githut/#/pull_requests/2017/4 Only difference is location - how could I go about doing that? Thanks in advance.
doc_1327
My dataframe contains a column called station_id. The station_id values are unique. That is each row correspond to station id. Then there is another column called trip_id (see example below). Many stations can be associated with a single trip_id. For example l1=[1,1,2,2] l2=[34,45,66,67] df1=pd.DataFrame(list(zip(l1,l2)),columns=['trip_id','station_name']) df1.head() trip_id station_name 0 1 34 1 1 45 2 2 66 3 2 67 I am trying to get a dictionary d={1:[34,45],2:[66,67]}. I solved it with a for loop in the following fashion. from tqdm import tqdm Trips_Stations={} Trips=set(df['trip_id']) T=list(Trips) for i in tqdm(range(len(Trips))): c_id=T[i] Values=list(df[df.trip_id==c_id].stop_id) Trips_Stations.update({c_id:Values}) Trips_Stations My actual dataset has about 65000 rows. The above takes about 2 minutes to run. While this is acceptable for my application, I was wondering if there is a faster way to do it using base pandas. Thanks A: somehow stackoverflow suggested that I look at Group_By This is much faster d=df.groupby('trip_id')['stop_id'].apply(list) from collections import OrderedDict, defaultdict o_d=d.to_dict(OrderedDict) o_d=dict(o_d) It took about 30 secs for the dataframe with 65000 rows. Then
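For completeness, a self-contained version of the groupby approach run against the sample frame from the question (note the loop above refers to df and stop_id while the sample frame is df1 with a station_name column; the names below follow the sample frame):

import pandas as pd

l1 = [1, 1, 2, 2]
l2 = [34, 45, 66, 67]
df1 = pd.DataFrame(list(zip(l1, l2)), columns=['trip_id', 'station_name'])

# one list of stations per trip_id, then convert the resulting Series to a plain dict
trips_stations = df1.groupby('trip_id')['station_name'].apply(list).to_dict()
print(trips_stations)   # {1: [34, 45], 2: [66, 67]}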
doc_1328
Can I import these files into a new xcode project and load index.html in a UIwebview in iOS ? I tired this but it didn't work, is there a way to do this ?
doc_1329
I have next code. How to store pointers to functions Voice declared by the interface in an array? If the abstract class TAnimal is used instead of the IVoice interface, then the pointers to the Voice function are stored in the array successfully! PS. Delphi 10.3 Rio type IVoice = interface function Voice: string; end; TAnimal = class abstract (TInterfacedObject) strict private FName: string; public property Name: string read FName write FName; end; TDog = class(TAnimal, IVoice) protected function Voice: string; end; TCat = class(TAnimal, IVoice) protected function Voice: string; end; { TDog } function TDog.Voice: string; begin Result:= 'Arf-Arf!'; end; { TCat } function TCat.Voice: string; begin Result:= 'Meow-Meow!'; end; var voices: TArray<IVoice>; funcs: TArray<TFunc<string>>; I: Integer; begin voices:= [TDog.Create, TCat.Create, TDog.Create]; SetLength(funcs, Length(voices)); for I := 0 to High(voices) do funcs[i]:= voices[i].Voice; //<--- don't compile for I := 0 to High(funcs) do Writeln(funcs[i]()); Readln; end. I expect the output Arf-Arf! Meow-Meow! Arf-Arf! but this code don't compile with error: E2010 Incompatible types: 'System.SysUtils.TFunc<System.string>' and 'string' A: You have to manually wrap the call to the interface method in an anonymous method. Like this: funcs[i]:= function: string begin Result := voices[i].Voice; end;
doc_1330
IFS = '\n' for name in `ls ` do number=`echo "$name" | grep -o "[0-9]\{1,2\}"` if [[ ! -z "$number" ]]; then mv "$name" "./$number" fi done A: Just don't use command substitution: use for name in *. A: Looks like two potential issues: First, the IFS variable and it's assignment should not have space in them. Instead of IFS = '\n' it should be IFS=$'\n' Secondly, for name in ls will cause issues with filename having spaces and newlines. If you just wish to handle filename with spaces then do something like this for name in * I don't understand the significance of the line number=`echo "$name" | grep -o "[0-9]\{1,2\}"` This will give you numbers found in filename with spaces in new lines. May be that's what you want. A: Replace for name in `ls` with: ls | while read name Notice: bash variable scoping is awful. If you change a variable inside the loop, it won't take effect outside the loop (in my version it won't, in your version it will). In this example, it doesn't matter. Notice 2: This works for file names with spaces, but fails for some other strange but valid file names. See Charles Duffy's comment below. A: For me, I had to move to use find. find /foo/path/ -maxdepth 1 -type f -name "*.txt" | while read name do #do your stuff with $name done
doc_1331
I'm using macOS 10.13.6 and Android Studio 3.1.2 Does anyone know why this is happening and if/how I can restore the contents of the shelf directory? A: Try using Local History to restore the Shelf contents. Unless the .idea folder is marked as excluded, it should work. To find out why it happens logs and some additional info is needed, please submit a request with logs files attached to the tracker
doc_1332
This function in particular caught my attention. It works as intended in Visual Studio but fails to run asynchronously on my Linux machine. void MCEuroOptPricer::computePriceAsync_() { // a callable object that returns a `std::vector<double>` when called EquityPriceGenerator epg(spot_, numTimeSteps_, timeToExpiry_, riskFreeRate_, volatility_); // fills `seeds_` (a std::vector<int>) with `std::iota` generateSeeds_(); std::vector<std::future<std::vector<double>>> futures; futures.reserve(numScenarios_); for (auto &seed : seeds_) { futures.push_back(std::async(epg, seed)); } std::vector<double> discountedPayoffs; discountedPayoffs.reserve(numScenarios_); for (auto &future : futures) { double terminalPrice = future.get().back(); double payoff = payoff_(terminalPrice); discountedPayoffs.push_back(discFactor_ * payoff); } double numScens = static_cast<double>(numScenarios_); price_ = quantity_ * (1.0 / numScens) * std::accumulate(discountedPayoffs.begin(), discountedPayoffs.end(), 0.0); } I was using clang++ -std=c++17 -O3. This paralleled version runs even slower than the not-paralleled version. It was not using multiple cores according to htop. I tried to call std::async with std::launch::async but it did not help either. Is it because I am missing some compiler options or Visual Studio's compiler is applying some optimization that I am unaware of? How can I make this function run asynchronously on Linux? Not a CS major so I might just be missing something obvious. Any help is greatly appreciated. UPDATE: it turns out that currently std::async is pooled on Windows, but not on UNIX-like systems. This article by Dmitry Danilov explains this in detail. I managed to get similar performance on WSL2 as native Windows with an implementation involving boost/asio/thread_pool.hpp. void MCEuroOptPricer::computePriceWithPool_() { EquityPriceGenerator epg(spot_, numTimeSteps_, timeToExpiry_, riskFreeRate_, volatility_); generateSeeds_(); std::vector<double> discountedPayoffs; discountedPayoffs.reserve(numScenarios_); std::mutex mtx; // avoid data races when writing into the vector boost::asio::thread_pool pool(get_nprocs()); for (auto &seed : seeds_) { boost::asio::post(pool, [&]() { double terminalPrice = (epg(seed)).back(); double payoff = payoff_(terminalPrice); mtx.lock(); discountedPayoffs.push_back(discFactor_ * payoff); mtx.unlock(); }); } pool.join(); double numScens = static_cast<double>(numScenarios_); price_ = quantity_ * (1.0 / numScens) * std::accumulate(discountedPayoffs.begin(), discountedPayoffs.end(), 0.0); }
doc_1333
Endpoints code: from flask import Blueprint, Response, request, current_app from flask_security.core import current_user from flask_security.utils import logout_user, login_user, verify_password from flask_api import status from core.database.user_models import User, USER_DATASTORE from utils.responses import SUCCESS, BAD_REQUEST, NOT_FOUND ACCOUNT_BP = Blueprint("account", __name__) EMAIL_IS_REGISTERED = Response("Email Is Registered", status=status.HTTP_401_UNAUTHORIZED) USER_INACTIVE = Response("User Is Inactive", status=status.HTTP_403_FORBIDDEN) WRONG_CREDENTIALS = Response("Wrong Credentials", status=status.HTTP_401_UNAUTHORIZED) @ACCOUNT_BP.route("/register", methods=['POST']) def register_endpoint() -> Response: """ # TODO: Fill this docstring. """ if current_user.is_authenticated: return NOT_FOUND if "email" in request.form and "password" in request.form: if USER_DATASTORE.create_new_user(request.form["email"], request.form["password"]): user = User.find_by_email(request.form["email"]) login_user(user, remember=True) return SUCCESS return EMAIL_IS_REGISTERED return BAD_REQUEST @ACCOUNT_BP.route("/signin", methods=['POST']) def signin_endpoint() -> Response: """ # TODO: Fill this docstring. """ if current_user.is_authenticated: # IT SHOULD BE False return NOT_FOUND if "email" in request.form and "password" in request.form: user = User.find_by_email(request.form["email"]) if user and verify_password(request.form["password"], user.password): if user.active: login_user(user, remember=True) return SUCCESS return USER_INACTIVE return WRONG_CREDENTIALS return BAD_REQUEST @ACCOUNT_BP.route("/logout") def logout_endpoint() -> Response: if current_user.is_authenticated: logout_user() return SUCCESS return NOT_FOUND Code for test: import unittest from flask import Response from flask.testing import FlaskClient from flask_security.core import current_user from main import SERVER def register(client: FlaskClient, email: str, password: str) -> Response: """Fast method for using ``/account/register`` endpoint""" form_data = 'email=' + email +'&password=' + password return client.post('/account/register', data=form_data, content_type='application/x-www-form-urlencoded') def signin(client: FlaskClient, email: str, password: str) -> Response: """Fast method for using ``/account/signin`` endpoint""" form_data = 'email=' + email +'&password=' + password return client.post('/account/signin', data=form_data, content_type='application/x-www-form-urlencoded') def logout(client: FlaskClient) -> Response: """Fast method for using ``/account/logout`` endpoint""" return client.get('/account/logout') class UsersAccountTestCase(unittest.TestCase): """ # TODO: Fill this docstring. """ __REGISTER_SUCCESS_EMAIL = '[email protected]' __RANDOM_PASSWORD = 'RandomPassword' def test_register_success(self): """ # TODO: Fill this docstring. """ with SERVER.test_client() as client: register_result = register(client, self.__REGISTER_SUCCESS_EMAIL, self.__RANDOM_PASSWORD) self.assertEqual(register_result.status_code, 200) self.assertEqual(register_result.get_data(as_text=True), "Success") self.assertTrue(current_user.is_authenticated) self.assertEqual(current_user.email, self.__REGISTER_SUCCESS_EMAIL) logout_result = logout(client) self.assertEqual(logout_result.status_code, 200) self.assertEqual(logout_result.get_data(as_text=True), "Success") self.assertFalse(current_user.is_authenticated) # THIS PASSES! 
check_result = signin(client, self.__REGISTER_SUCCESS_EMAIL, self.__RANDOM_PASSWORD) self.assertEqual(check_result.status_code, 200) # THIS RETURNS 404 self.assertEqual(check_result.get_data(as_text=True), "Success") self.assertTrue(current_user.is_authenticated) self.assertEqual(current_user.email, self.__REGISTER_SUCCESS_EMAIL) logout(client) What can possibly lead to this behavior? UPDATE: Just tested endpoints with Postman - everything works as intended. A: This is the strangest issue I have ever seen. I changed return SUCCESS in registration endpoint to something else and it just worked.
doc_1334
I'm doing this because I want to compare the files in the repository against a set of those same files that are not in the repository and have unexpanded keywords. A long long time ago I had a repository in CVS. A long time ago I did a flag day conversion to Subversion. Now I'm trying to convert the whole history to Mercurial and I want to identify exactly which version in Subversion corresponds most closely to the last version in CVS using diff without having to wade through expanded keyword differences. A: It has been implemented in SVN 1.7 (released 2011-10-11) as a --ignore-keywords option to svn export: http://svn.haxx.se/users/archive-2010-09/0187.shtml A: I'm afraid not. You'll have to set up your diff tool to ignore those differences. A: You can use Git to accomplish this. git svn clone http://example.com/path/to/svn/repo Once that command is complete, the only thing you have extra that you wouldn't have with svn export is a .git directory in the top level directory. Remove that directory and you'll have an equivalent to svn export with keywords off.
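With a Subversion 1.7+ client, the first answer's option makes the comparison straightforward, along these lines (the revision number and the CVS-era directory are placeholders):

svn export --ignore-keywords -r 12345 http://example.com/path/to/svn/repo svn-r12345
diff -ru svn-r12345 /path/to/cvs-era-files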
doc_1335
describe 'Emails' do email_ids.each do |email_id| it "should display #{email_id}" do end end end def email_ids [ '[email protected]', '[email protected]', '[email protected]' ] end The above does not work, as methods are not accessible outside the it block. Please advise how to make the method email_ids accessible outside the it block. A: describe creates a (nested) class and evaluates the given block within that class: describe 'Foo' do p self #=> RSpec::ExampleGroups::Foo describe '' do p self #=> RSpec::ExampleGroups::Foo::Bar end end it blocks on the other hand are evaluated in the corresponding class' instance: describe 'Foo' do it 'foo' do p self #=> #<RSpec::ExampleGroups::Foo ...> end end If you define a method via def email_ids, it becomes an instance method and is therefore only available within the instance, i.e. within it. In order to make a method available within describe, you have to define it as a class method, i.e via def self.email_ids: describe 'Emails' do def self.email_ids %w[[email protected] [email protected] [email protected]] end email_ids.each do |email_id| it "should display #{email_id}" do end end end Output: Emails should display [email protected] should display [email protected] should display [email protected] You can also reuse the helper method across multiple tests by putting it in a module and using extend. See Define helper methods in a module for more examples. A: I have better solution for this, than above. 1.using 'procs' or just local variable as below: email_ids = ->{ %w[[email protected] [email protected] [email protected]] } email_ids = { %w[[email protected] [email protected] [email protected]] } Scope of proc & local variable will be same, but if you want to pass an argument then 'procs' are useful. 2.Define 'email_ids' method in module and include that module in spec, so that method will be accessible inside and outside the 'it' block module EmailFactoryHelper def email_ids %w[[email protected] [email protected] [email protected]] end end include in specs as below: require 'factories_helper/email_factory_helper' include EmailFactoryHelper describe 'Emails' do email_ids.call.each do |email_id| it "should display #{email_id}" do page.should have_content "#{email_id}" end end end Output: Emails should display [email protected] should display [email protected] should display [email protected] Finished in 41.56 seconds 3 examples, 0 failures I have preferred step-2 A: Rather than using proc or scopes, Simply use local variables outside describe block. email_ids = [ '[email protected]', '[email protected]', '[email protected]' ] describe 'Emails' do end A: The solution is to simply define your structure within scope, instead of returning it from a method call: EMAILS = [ '[email protected]', '[email protected]', '[email protected]' ] EMAILS.each do |email| it "should display #{email}" do end end A: The method wasn't accessible because you called the method before you defined the method. This simpler script has the same problem: p email_ids def email_ids [ '[email protected]', '[email protected]', '[email protected]' ] end "undefined local variable or method `email_ids' for main:Object (NameError)" You must define your methods before you call them. You can solve this problem by moving the def email_ids above the describe 'Emails'. A: short version of @stefan's answer: needs to be def self.email_ids # stuff end (def self.method for context/describe/etc; def method for it/etc)
doc_1336
let main a b c d e = Format.eprintf "%B %B %B %B %B@." a b c d e let cmd = let open Cmdliner in let a = Arg.(value & flag & info ["a"] ~doc:"a") in let b = Arg.(value & flag & info ["b"] ~doc:"b") in let c = Arg.(value & flag & info ["c"] ~doc:"c") in let d = Arg.(value & flag & info ["d"] ~doc:"d") in let e = Arg.(value & flag & info ["e"] ~doc:"e") in Term.(const main $ a $ b $ c $ d $ e), Term.(info "test" ~version:"1" ~doc:"abcde" ~exits:default_exits ~man:[]) let () = Cmdliner.Term.(exit @@ eval cmd) If I execute my program with no option I will obtain false false false false false and if I use it with -ade I will obtain true false false true true which is exactly what I wanted. Now, suppose I made a typo in my main function and wrote instead (* Notice the d before c *) let main a b d c e = Format.eprintf "%B %B %B %B %B@." a b c d e If I execute my main program with -ade like previously I will obtain true false true false true which can be considered wrong. So, what I wanted to know is if it was possible to gather options in a record to use them with their proper names, something like the following example (which doesn't work) : open Cmdliner type o = {a : bool Term.t; b : bool Term.t; c : bool Term.t; d : bool Term.t; e : bool Term.t;} (* a - e are not booleans but bool Term.t which gives an obvious error *) let main {a; b; c; d; e} = Format.eprintf "%B %B %B %B %B@." a b c d e let cmd = let a = Arg.(value & flag & info ["a"] ~doc:"a") in let b = Arg.(value & flag & info ["b"] ~doc:"b") in let c = Arg.(value & flag & info ["c"] ~doc:"c") in let d = Arg.(value & flag & info ["d"] ~doc:"d") in let e = Arg.(value & flag & info ["e"] ~doc:"e") in let o = Term.const {a; b; c; d; e} in Term.(const main $ o), Term.(info "test" ~version:"1" ~doc:"abcde" ~exits:default_exits ~man:[]) let () = Cmdliner.Term.(exit @@ eval cmd) This could be useful on big projects and would lighten the number of arguments given to the functions. Maybe there's a way to do it but all the examples I found used the first way of doing. I didn't want to open an issue on the github page so I asked it here. A: This can be done quite directly if you write the field update functions for the record type. For instance, if we have type arg = { a:bool; b:bool; c:bool; d:bool; e: bool } let main {a;b;c;d;e} = Format.eprintf "%B %B %B %B %B@." a b c d e module Update = struct let a a r = { r with a } let b b r = { r with b } let c c r = { r with c } let d d r = { r with d } let e e r = { r with e } end The only missing step is to transform Cmdliner.Term.t that directly provides the argument into terms that update a record of type arg. An implementation would be: let cmd = let open Cmdliner in (* first the starting record *) let start = Term.const { a = false; b=false; c=false; d=false; e=false } in let transform r (update,arg) = Term.( const update $ arg $ r ) in let arg = List.fold_left transform start Update.[ a, Arg.(value & flag & info ["a"] ~doc:"a"); b, Arg.(value & flag & info ["b"] ~doc:"b"); c, Arg.(value & flag & info ["c"] ~doc:"c"); d, Arg.(value & flag & info ["d"] ~doc:"d"); e, Arg.(value & flag & info ["e"] ~doc:"e"); ] in Term.(const main $ arg), Term.info "test" ~version:"1" ~doc:"abcde" ~exits:Term.default_exits ~man:[] let () = Cmdliner.Term.(exit @@ eval cmd) A: This can be achieved with relatively few boilerplate by using labels to emulate a record with Term.t fields, for instance: type arg = {a : bool; b : bool; c : bool; d : bool; e : bool} let main {a; b; c; d; e} = Format.printf "%B %B %B %B %B@." 
a b c d e let cmd = let open Cmdliner in let arg ~a ~b ~c ~d ~e = Term.(const (fun a b c d e -> {a; b; c; d; e}) $ a $ b $ c $ d $ e) in let a = Arg.(value & flag & info ["a"] ~doc:"a") in let b = Arg.(value & flag & info ["b"] ~doc:"b") in let c = Arg.(value & flag & info ["c"] ~doc:"c") in let d = Arg.(value & flag & info ["d"] ~doc:"d") in let e = Arg.(value & flag & info ["e"] ~doc:"e") in Term. ( const main $ arg ~a ~b ~c ~d ~e , info "test" ~version:"1" ~doc:"abcde" ~exits:default_exits ~man:[] ) let () = Cmdliner.Term.(exit @@ eval cmd) By using the same name for the keyword arguments and the record fields, the risk of typos is limited to the conversion function (arg here), which is presumably much simpler than your real main function. In a large project, the conversion function could easily be generated automatically using a ppx.
doc_1337
I just created a new Rails 3.2.6 application and configured it to use the PostgreSQL database for my local development. I followed this RailsCast and was able to get everything installed and set up correctly. However, whenever I try to do any rails generate or rake commands (rails generate model, rake db:migrate etc), I get the following error referring to my development.log file: Rails Error: Unable to access log file. Please ensure that /Users/****/projects/rails_projects/rails_app/log/development.log exists and is chmod 0666. The log level has been raised to WARN and the output directed to STDERR until the problem is fixed. I see these other stackoverflow questions/answers, but they don't fit my case exactly: * *Rails: Unable to access log file <- This is back in the days of Rails 2.2 *Ruby on Rails Setup: Unable to access log file <- This is regarding a production environment using Apache. I am on a local development environment. Other than that error, my application runs fine. Also, if I create a new rails application with all its defaults, I don't get this error. Any suggestions/hints would be much appreciated. Or if you need any more information about my local environment, please let me know. A: I basically did what the error message suggested and did a chmod 0666 on the development.log file: $> cd /Users/****/projects/rails_projects/rails_app/log/ $> chmod 0666 development.log Everything worked fine after that. A: Have you verified that the log file is there and that you can access it? I've done some similar things in the past. A: Check it out with sudo if your enviroment its on linux, for example, i got that error trying to run the migration - rake db:migrate, so i used sudo rake db:migrate and that's work, maybe because the rake when its trying to consult development.log doesn't have the right permissions or something like that.
doc_1338
A: If you use the OOTB retry in the business service, it will retry for all the error codes. Instead, you can call the HTTP business service from a stage and, using a stage error handler, call the HTTP service from a while loop.
doc_1339
A: You should start by reading this vuforia "knowledge database" article, which explains how to replace the teapot with a textured plane. Once you've done that, the simplest way to display text will be to generate a texture containing this text, and display it on the plane. This other article explains how to use other textures than the 2 that are provided with the sample app. Hope this helps!
doc_1340
NSRunLoop* rl = [NSRunLoop currentRunLoop]; self.networkStream.delegate = self; [self.networkStream scheduleInRunLoop:rl forMode:NSDefaultRunLoopMode]; [self.networkStream open]; @autoreleasepool { [rl run]; }` The instrument shows leak at location [self.networkStream open] and [r1 run]. Anyone knows what may be the reason for that? Basically I am trying to use the simpleFTP example of the Apple to upload data to server... This functionality should run in the background so I have to use the [NSRunLoop run] function to keep thread alive until the request is not complete. here is the stack trace from instruments 0 libsystem_c.dylib malloc_zone_malloc 1 CoreFoundation __CFAllocatorSystemAllocate 2 CoreFoundation CFAllocatorAllocate 3 CoreFoundation _CFRuntimeCreateInstance 4 CFNetwork CFObject::Allocate(unsigned long, CFClass const&, __CFAllocator const*) 5 CFNetwork CoreReadStreamCreate(__CFAllocator const*, LegacyReadStreamCallBacks const*, void*) 6 CFNetwork CoreReadStreamCreateWithFTPURL(__CFAllocator const*, __CFURL const*) 7 CFNetwork CFReadStreamCreateWithFTPURL 8 CFTMClient -[FTPDownloadRequest startRecieving] /Users/canv/Documents/abc/FTPDownloadRequest.m:96 9 CFTMClient -[CFTMFTPDownload sendFTPDownloadRequest] /Users/canv/Documents/Cabc/CFTMFTPDownload.m:99 10 CFTMClient -[CFTMFTPDownload startTest] /Users/canv/Documents/abc/CFTMFTPDownload.m:52 11 CFTMClient __39-[CFTMTestManager executeTestCaseList:]_block_invoke /Users/canv/Documents/abc/CFTMTestManager.m:325 12 libdispatch.dylib _dispatch_client_callout 13 libdispatch.dylib _dispatch_barrier_sync_f_invoke 14 libdispatch.dylib dispatch_barrier_sync_f 15 libdispatch.dylib dispatch_sync 16 CFTMClient -[CFTMTestManager executeTestCaseList:] /Users/canv/Documents/abc/CFTMTestManager.m:280 17 CFTMClient __35-[CFTMTestManager executeTestList:]_block_invoke /Users/canv/Documents/abc/CFTMTestManager.m:193 18 libdispatch.dylib _dispatch_call_block_and_release 19 libdispatch.dylib _dispatch_client_callout 20 libdispatch.dylib _dispatch_root_queue_drain 21 libdispatch.dylib _dispatch_worker_thread2 22 libsystem_c.dylib _pthread_wqthread 23 libsystem_c.dylib start_wqthread
doc_1341
* *Create an email message with the same subject, recipient, and sender. Saved. *Create another email message with the same subject, recipient, and sender. Scenarios Tested: Emails are both owned by the user creating the records. Emails are both owned by the team the user is a member of. The security role added to the user/team has this Result: Duplicate Detection window pops out but the potential duplicate is not displayed.
doc_1342
Below is the table that I've and I want to generate: Original Table Desired Table A: To create a running total, create a new measure in your table like this (where Table is the name of your table): Running Total = CALCULATE( SUM('Table'[Values]); FILTER(ALLSELECTED('Table'); 'Table'[Date] <= SELECTEDVALUE('Table'[Date])) )
doc_1343
My code: public class CallReceiver extends BroadcastReceiver{ Context my_ctx; private static CountDownTimer countDownTimer; private static Toast my_toast; public void onReceive(Context context, Intent intent) { my_ctx = context; showing_toast("Message"); } public void showing_toast(String Message) { my_toast = Toast.makeText(my_ctx, Message, Toast.LENGTH_SHORT); TextView textView = new TextView(my_ctx); textView.setText(Message); textView.setOnClickListener(new OnClickListener() { @Override public void onClick(View v) { countDownTimer.cancel(); } }); my_toast.setView(textView); countDownTimer = new CountDownTimer((30)*1000, 1000) { @Override public void onTick(long millisUntilFinished) { my_toast.show(); } @Override public void onFinish() { my_toast.cancel(); } }.start(); } So, countDownTimer go on and toast message doesn`t disappear although i click on textview
doc_1344
The problem is: I'm using Cancel = True if a report has no data so, the printing process terminated & not complete the rest of reports. Any advice? Below is a sample code to create reports as pdf & save it on a folder Private Sub Savetopdf_Click() Dim ReportPath As String Dim CompanyLogo As String Dim MyWhere As String Dim ReportOutput As String Dim ReportName As String ReportPath = DLookup("AttachentsPath", "emailElements") ReportName = "Report1" MyWhere = "[Type] IN ('Type1','Type3','Type4','Type6') and [Company] IN ('Company1', 'Company2') and [Status] In ('Active')" CompanyLogo = "'Company1','Company2'" ReportOutput = "Report1 Comp1&Comp2" DoCmd.OpenReport ReportName, acViewPreview, , MyWhere, acHidden, CompanyLogo DoCmd.OutputTo acOutputReport, ReportName, acFormatPDF, ReportPath & ReportOutput & ".pdf" DoCmd.Close acReport, ReportName MyWhere = "[Type] IN ('Type1','Type3','Type4','Type6') and [Company] IN ('Company3') and [Status] In ('Active')" CompanyLogo = "'Company3'" ReportOutput = "Report1 Comp3" DoCmd.OpenReport ReportName, acViewPreview, , MyWhere, acHidden, CompanyLogo & "|" DoCmd.OutputTo acOutputReport, ReportName, acFormatPDF, ReportPath & ReportOutput & ".pdf" DoCmd.Close acReport, ReportName ReportName = "Report2" MyWhere = "[Type] IN ('Type1','Type3','Type4','Type6') and [Company] IN ('Company1', 'Company2') and [Status] In ('Active')" CompanyLogo = "'Company1','Company2'" ReportOutput = "Report2 Comp1&Comp2" DoCmd.OpenReport ReportName, acViewPreview, , MyWhere, acHidden, CompanyLogo & "|" DoCmd.OutputTo acOutputReport, ReportName, acFormatPDF, ReportPath & ReportOutput & ".pdf" DoCmd.Close acReport, ReportName MyWhere = "[Type] IN ('Type1','Type3','Type4','Type6') and [Company] IN ('Company3') and [Status] In ('Active')" CompanyLogo = "'Company3'" ReportOutput = "Report2 Comp3" DoCmd.OpenReport ReportName, acViewPreview, , MyWhere, acHidden, CompanyLogo & "|" DoCmd.OutputTo acOutputReport, ReportName, acFormatPDF, ReportPath & ReportOutput & ".pdf" DoCmd.Close acReport, ReportName End Sub A: You are right @june7 I used Dlookup to the report based query with the criteria. ReportName = "Report1" MyWhere = "[Type] IN ('Type1','Type3','Type4','Type6') and [Company] IN ('Company1', 'Company2') and [Status] In ('Active')" If DLookup("[ID]", "[Reportq]", MyWhere) <> "" Then 'added CompanyLogo = "'Company1','Company2'" ReportOutput = "Report1 Comp1&Comp2" DoCmd.OpenReport ReportName, acViewPreview, , MyWhere, acHidden, CompanyLogo DoCmd.OutputTo acOutputReport, ReportName, acFormatPDF, ReportPath & ReportOutput & ".pdf" DoCmd.Close acReport, ReportName End If '''
doc_1345
We can use dot notation to specify where in a JSON file to read data from; is it possible to do the reverse and specify a hierarchy to save data? My end goal is to output a dataset without duplicating parent values, but nesting children underneath instead. A: The object_construct function would be of help here: https://docs.snowflake.com/en/sql-reference/functions/object_construct.html A couple of related how-to articles: https://community.snowflake.com/s/article/Generating-a-JSON-Dataset-using-Relational-Data-in-Snowflake and https://community.snowflake.com/s/article/How-to-Merge-Combine-Two-JSON-Fields
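As a rough sketch of what that looks like in practice (the parents/children tables and the column names here are made up for illustration), OBJECT_CONSTRUCT builds the nested JSON object and ARRAY_AGG collects the children under each parent, so the parent values are not repeated per child row:

SELECT OBJECT_CONSTRUCT(
         'parent_id',   p.parent_id,
         'parent_name', p.parent_name,
         'children',    ARRAY_AGG(OBJECT_CONSTRUCT('child_id',   c.child_id,
                                                   'child_name', c.child_name))
       ) AS nested_json
FROM parents p
JOIN children c ON c.parent_id = p.parent_id
GROUP BY p.parent_id, p.parent_name;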
doc_1346
CREATE TABLE `sys`.`annotations` ( `id` INT GENERATED ALWAYS AS () VIRTUAL, `annotation` LONGTEXT NOT NULL, PRIMARY KEY (`id`)); But this produces the following error message: Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ') STORED', `annotation` LONGTEXT NOT NULL, PRIMARY KEY (`id`))' at line 2 SQL Statement: CREATE TABLE `sys`.`annotations` ( `id` INT GENERATED ALWAYS AS () STORED, `annotation` LONGTEXT NOT NULL, PRIMARY KEY (`id`)) Why would this be happening and how would I go about fixing this? A: I have no idea what an empty expression means for a computed column. Nor do I understand a VIRTUAL column being a primary key. I would use: CREATE TABLE sys.`annotations` ( `id` INT AUTO_INCREMENT PRIMARY KEY, `annotation` LONGTEXT NOT NULL );
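For context, the 1064 error comes from the empty parentheses after GENERATED ALWAYS AS: a generated column has to be given an actual expression. If a computed column were genuinely wanted alongside the auto-increment key, it would look something like this (annotation_length is purely an illustrative example):

CREATE TABLE sys.`annotations` (
  `id` INT AUTO_INCREMENT PRIMARY KEY,
  `annotation` LONGTEXT NOT NULL,
  `annotation_length` INT GENERATED ALWAYS AS (CHAR_LENGTH(`annotation`)) STORED
);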
doc_1347
Creating the production build works without any error as well, but running the production build results in errors in multiple places in the app. The errors are all related to what the GraphQL Relay queries return (the same queries with identical results from the same backend work in development). Is there any good way to debug it? Currently the thrown errors give me very limited information. A: Finally found the problem: I had a few files copied from another project which used Object.defineProperty(exports, "__esModule", { value: true }); exports.default = DeselectAllButton; The development server was working just fine, but the production build resulted in very strange errors hidden inside React. Replacing the exports resolved the issue.
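Presumably "replacing the exports" means switching those copied files from the transpiled CommonJS form above to plain ES module syntax, roughly:

// before (transpiled CommonJS copied from the other project)
Object.defineProperty(exports, "__esModule", { value: true });
exports.default = DeselectAllButton;

// after (plain ES module syntax)
export default DeselectAllButton;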
doc_1348
from functools import reduce dfs = [result_eu_SpeciesNameGenuine, result_ieu_SpeciesNameGenuine, result_cosine_SpeciesNameGenuine] df_final = reduce(lambda left,right: pd.merge(left,right,on=index), dfs) df_final A: Try this using pd.DataFrame.join per documentation other can be a list of dataframes: dfs[0].join(dfs[1:])
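If the reduce/merge form is kept instead, merging on the index would look roughly like this (note that on=index in the snippet above passes a bare Python name rather than a column label), assuming all frames share a common index:

import pandas as pd
from functools import reduce

dfs = [result_eu_SpeciesNameGenuine, result_ieu_SpeciesNameGenuine, result_cosine_SpeciesNameGenuine]

df_final = reduce(
    lambda left, right: pd.merge(left, right, left_index=True, right_index=True),
    dfs,
)
df_final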
doc_1349
x = [['A', 'A', 'A', 'A'], ['C', 'T', 'C', 'C'], ['G', 'T', 'C', 'C'], ['T', 'T', 'C', 'C'], ['A', 'T', 'C']] I need to compare each element in sub_list to the other and note number of changes x[0] --> # No change x[1] --> # 1 change (Only one conversion from C to T (T to C conversion = C to T conversion)) x[2] --> # 3 changes(G to T, T to C, G to C (T to C conversion = C to T conversion)) .... So, final count for Changes should be [0,1,3,2,3] A: If I understand well... from collections import Counter from itertools import combinations x = [['A', 'A', 'A', 'A'], ['C', 'T', 'C', 'C'], ['G', 'T', 'C', 'C'], ['T', 'T', 'C', 'C'], ['A', 'T', 'C', 'Z']] def divide_and_square(number, divisor): return (1. * number / divisor) ** 2 # part1 counters = [Counter(sub_list) for sub_list in x] atgc_counts = [sum(val for key, val in counter.items() if key.upper() in "ATGC") for counter in counters] print(atgc_counts) # part 2 conversions = [] for sl in x: sub_list = [base for base in sl if base.upper() in "ATGC"] conversions.append(len(list(combinations(set(sub_list), 2)))) print(conversions) # bonus squared_factor_sums = [] for counter in counters: total = sum(counter.itervalues()) squared_factor_sums.append(sum([divide_and_square(val, total) for val in counter.values()])) print(squared_factor_sums) prints: [4, 4, 4, 4, 3] [0, 1, 3, 1, 3] [1.0, 0.625, 0.375, 0.5, 0.25] * *first the character other that ATGC are removed. *then the duplications are avoided by casting the sub_list into a set *itertools.combinations is used to get all the unique combinations of the elements in the set *combinations are finally counted
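Since every pair of distinct bases in a sub-list counts as one conversion, the count is simply "number of unique bases choose 2", so the conversion part can be reduced to a few lines (this reproduces the answer's output of [0, 1, 3, 1, 3] for the question's data):

x = [['A', 'A', 'A', 'A'], ['C', 'T', 'C', 'C'], ['G', 'T', 'C', 'C'],
     ['T', 'T', 'C', 'C'], ['A', 'T', 'C']]

changes = []
for sub_list in x:
    n = len({base for base in sub_list if base.upper() in "ATGC"})
    changes.append(n * (n - 1) // 2)   # n unique bases give n*(n-1)/2 possible conversions
print(changes)   # [0, 1, 3, 1, 3]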
doc_1350
This is my first time programming in SQL and don't know much about it. I have created the database as well as the tables I want to use, using the following code: USE master GO IF EXISTS (SELECT * FROM sys.databases WHERE name = 'myTimetable') DROP DATABASE myTimetable GO CREATE DATABASE myTimetable GO USE myTimetable GO CREATE TABLE DayTable ( WeekDay_ID int Identity (1,1) PRIMARY KEY NOT NULL, Day_Name varchar(10) NOT NULL ) GO CREATE TABLE TimeRangeTable ( DayTime_ID int Identity (1,1) PRIMARY KEY NOT NULL, TimeInterval varchar(20) NOT NULL ) GO CREATE TABLE SubjectTable ( Course_ID int Identity (1,1) PRIMARY KEY NOT NULL, CourseCode varchar (10), CourseName varchar (255) NOT NULL ) GO CREATE TABLE ScheduleTable ( WeekDay_ID int references DayTable(WeekDay_ID), DayTime_ID int references TimeRangeTable(DayTime_ID), Course_ID int references SubjectTable(Course_ID), ) GO The tables was created correctly and I managed to insert the correct data into the tables, except for my ScheduleTable (the last table created in the above sample code). Here is the SQL code I used to insert the data: insert into DayTable values ('Monday') insert into DayTable values ('Teusday') insert into DayTable values ('Wensday') insert into DayTable values ('Thursday') insert into DayTable values ('Friday') insert into DayTable values ('Saterday') insert into DayTable values ('Sunday') insert into TimeRangeTable values ('07:30 - 08:20') insert into TimeRangeTable values ('08:30 - 09:20') insert into TimeRangeTable values ('09:30 - 10:20') insert into TimeRangeTable values ('10:30 - 11:20') insert into TimeRangeTable values ('11:30 - 12:20') insert into TimeRangeTable values ('12:30 - 13:20') insert into TimeRangeTable values ('13:30 - 14:20') insert into TimeRangeTable values ('14:30 - 15:20') insert into TimeRangeTable values ('15:30 - 16:20') insert into TimeRangeTable values ('16:30 - 17:20') insert into TimeRangeTable values ('17:30 - 18:20') insert into SubjectTable values ('WTW115','Discrete Mathematics') insert into SubjectTable values ('INF214','Database Design') insert into SubjectTable values ('INL210','Information Seeking and Retreival') insert into SubjectTable values ('INL240','Social and Ethical Impact') insert into SubjectTable values ('INF271','System Analysis and Design') insert into SubjectTable values ('INF154','Introduction to Programming') -- Struling from this point onward... 
insert into ScheduleTable values('1','1','1') insert into ScheduleTable values('1','2','2') insert into ScheduleTable values('1','3','3') insert into ScheduleTable values('1','4','3') insert into ScheduleTable values('1','5','3') insert into ScheduleTable values('2','4','1') insert into ScheduleTable values('2','5','2') insert into ScheduleTable values('2','6','2') insert into ScheduleTable values('2','9','4') insert into ScheduleTable values('2','10','2') insert into ScheduleTable values('3','1','5') insert into ScheduleTable values('3','2','5') insert into ScheduleTable values('3','6','1') insert into ScheduleTable values('3','7','3') insert into ScheduleTable values('4','1','4') insert into ScheduleTable values('4','3','5') It all executes and inserts the data, but when I display the data for ScheduleTable, is shows the data as Follow: WeekDay_ID DayTime_ID Course_ID ------------------------------------------- 1 1 1 1 2 1 2 2 3 1 3 3 4 1 4 3 5 1 5 3 6 2 4 1 7 2 5 2 8 2 6 2 9 2 9 4 10 2 10 2 11 3 1 5 12 3 2 5 13 3 6 1 14 3 7 3 15 4 1 4 16 4 3 5 Where I wanted it to show the data instead of just the codes, example of what I wanted: WeekDay_ID DayTime_ID Course_ID -------------------------------------------- 1 Monday 07:30 - 08:20 WTW115 2 Monday 08:30 - 09:20 INF214 3 Monday 09:30 - 10:20 INL210 4 Monday 10:30 - 11:20 INL210 5 Monday 11:30 - 12:20 INL210 etc... I know it has something to do with my Schedule table but that is all I know I don't know how to display it in this way as in the example. Any help will be appreciated. A: Time for a few joins: select --* d.Day_Name, tr.TimeInterval, sbj.CourseCode, sbj.CourseName from ScheduleTable as sch join DayTable as d on sch.WeekDay_ID = d.WeekDay_ID join TimeRangeTable as tr on sch.DayTime_ID = tr.DayTime_ID join SubjectTable as sbj on sch.Course_ID = sbj.Course_ID; You could create a view (for the aforementioned statement) for convenience: create view TimeScheduleView as select d.Day_Name, tr.TimeInterval, sbj.CourseCode, sbj.CourseName from ScheduleTable as sch join DayTable as d on sch.WeekDay_ID = d.WeekDay_ID join TimeRangeTable as tr on sch.DayTime_ID = tr.DayTime_ID join SubjectTable as sbj on sch.Course_ID = sbj.Course_ID; go select * from TimeScheduleView;
doc_1351
import java.util.*; public class AccountClient { public static void main(String[] args) { Scanner input = new Scanner(System.in); boolean infiniteLoop = true; boolean invalidInput; int id; // Create array of different accounts Account[] accountArray = new Account[10]; //Initialize each account array with its own unique id and a starting account balance of $100 for (int i = 0; i < accountArray.length; i++) { accountArray[i] = new Account(i, 100); } do { try { //inner loop to detect invalid Input do { invalidInput = false; System.out.print("Enter an id: "); id = input.nextInt(); if (id < 0 || id > 9) { System.out.println("Try again. Id not registered in system. Please enter an id between 0 and 9 (inclusive)."); invalidInput = true; input.nextLine(); } } while (invalidInput); boolean exit; do { exit = false; boolean notAnOption; int choice; do { notAnOption = false; System.out.print("\nMain Menu\n1: check balance\n2: withdraw\n3: deposit\n4: exit\nEnter a choice: "); choice = input.nextInt(); if (choice < 1 || choice > 4) { System.out.println("Sorry, " + choice + " is not an option. Please try again and enter a number between 1 and 4 (inclusive)."); notAnOption = true; } } while(notAnOption); switch (choice) { case 1: System.out.println("The balance for your account is $" + accountArray[id].getBalance()); break; case 2: { boolean withdrawFlag; do { System.out.print("Enter the amount you would like to withdraw: "); double withdrawAmount = input.nextInt(); if (withdrawAmount > accountArray[id].getBalance()) { System.out.println("Sorry, you only have an account balance of $" + accountArray[id].getBalance() + ". Please try again and enter a number at or below this amount."); withdrawFlag = true; } else { accountArray[id].withdraw(withdrawAmount); System.out.println("Thank you. Your withdraw has been completed."); withdrawFlag = false; } } while (withdrawFlag); } break; case 3: { System.out.print("Enter the amount you would like to deposit: "); double depositAmount = input.nextInt(); accountArray[id].deposit(depositAmount); System.out.println("Thank you. You have successfully deposited $" + depositAmount + " into your account."); } break; case 4: { System.out.println("returning to the login screen...\n"); exit = true; } break; } } while (exit == false); } catch (InputMismatchException ex) { System.out.println("Sorry, invalid input. Please enter a number, no letters or symbols."); } finally { input.close(); } } while (infiniteLoop); } } The exception code: Exception in thread "main" java.lang.IllegalStateException: Scanner closed at java.util.Scanner.ensureOpen(Scanner.java:1070) at java.util.Scanner.next(Scanner.java:1465) at java.util.Scanner.nextInt(Scanner.java:2117) at java.util.Scanner.nextInt(Scanner.java:2076) at playground.test.main.Main.main(Main.java:47) Hello, I made a basic program that uses a class called account to simulate an ATM machine. I wanted to throw an exception if the user didn't type in a letter. This worked fine, however I needed to make it loop so the program didn't terminate after it threw the exception. To do this I just put the try catch in the do while loop I had previously. When I did this though, it's throwing an IllegalStateException every time I type in a letter or choose to exit an inner loop I have which takes the user back to the loop of asking them to enter their id. What is an IllegalStateException, what is causing it in my case, and how would I fix this? Thanks. A: It's fairly simple, after you catch the exception the finally clause gets executed. 
Unfortunately you're closing the scanner within this clause and Scanner.close() closes the underlying input stream (System.in in this case). The standard input stream System.in once closed can't be opened again. To fix this you have to omit the finally clause and close the scanner when your program needs to terminate and not earlier.
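Put differently: keep the scanner open for the whole life of the program and close it exactly once at the end. A minimal, self-contained illustration of that shape (the menu logic is stripped down to the essentials):

import java.util.InputMismatchException;
import java.util.Scanner;

public class ScannerLifetimeDemo {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        boolean running = true;
        while (running) {
            try {
                System.out.print("Enter an id (negative to quit): ");
                int id = input.nextInt();
                if (id < 0) {
                    running = false;
                } else {
                    System.out.println("You entered " + id);
                }
            } catch (InputMismatchException ex) {
                System.out.println("Sorry, invalid input. Please enter a number.");
                input.nextLine();   // discard the bad token, otherwise nextInt() keeps failing on it
            }
            // note: no finally { input.close(); } inside the loop
        }
        input.close();   // close System.in only once, when the program is done reading
    }
}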
doc_1352
const VERTICES: &[Vertex] = &[ Vertex { position: [-0.0868241, 0.49240386, 0.0], color: [0.1, 0.0, 0.5] }, Vertex { position: [-0.49513406, 0.06958647, 0.0], color: [0.5, 0.0, 0.9] }, Vertex { position: [-0.21918549, -0.44939706, 0.0], color: [0.5, 0.0, 0.5] } ]; This array is then formed into a wgpu buffer called vertex_buffer and then a passing it to my shader with a vertex_buffer array like so: const ATTRIBS: [wgpu::VertexAttribute; 3] = wgpu::vertex_attr_array![0 => Float32x3, 1 => Float32x3]; . . . render_pass.set_vertex_buffer(0, self.vertex_buffer.slice(..)); This My question is this: How would I have the triangle rotate over time? My initial approach was to create a second buffer that I pass to the shader as well and use a vertex shader to adjust the vertex positions basted on the input value. But this approach gives me nothing but errors. Specifically I am creating a new buffer every frame which contains the frame number like so let frame_num = self.device.create_buffer_init( &wgpu::util::BufferInitDescriptor { label: Some("Control Num Buffer"), contents: bytemuck::cast_slice(&[ self.frame_num, self.frame_num, self.frame_num ]), usage: wgpu::BufferUsages::VERTEX, } ); and then trying to pass it in like this: render_pass.set_vertex_buffer(1, frame_num.slice(..)); with my shader now looking like this: struct VertexInput { [[location(0)]] position: vec3<f32>; [[location(1)]] color: vec3<f32> }; struct VertexOutput { [[builtin(position)]] clip_position: vec4<f32>; [[location(0)]] color: vec3<f32>; }; [[stage(vertex)]] fn vs_main( [[location(0)]] model: VertexInput, [[location(1)]] numb : f32 ) -> VertexOutput { . . . I am getting errors that are variants of Entry point vs_main at Vertex is invalid Argument 0 varying error The type [2] does not match the varying Are shaders setup to accept multiple buffers like this? Is there better ways to go about this? A: Keep going in the tutorial — you'll learn what you need. Two chapters from where you are, you will reach the section on creating a camera, which will introduce uniform buffers. A uniform buffer, like a vertex buffer, is a wgpu::Buffer, but instead of using it to define vertices, it just becomes available for the shader to read. (“Uniform” means that it doesn't vary per vertex or per triangle — it's the same across the whole draw call.) You'll also need to learn about bind groups which are the means by which uniform buffers (and textures) are passed to the shader. That's in the chapter after the one you're on and before the one on setting up a camera.
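Until then, here is a very rough sketch of where that chapter ends up, just to show the shape of the approach (attribute syntax differs between wgpu/naga versions, so treat this as illustrative only): the angle or frame counter lives in a buffer created with wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST, is exposed through a bind group rather than set_vertex_buffer, and the vertex shader reads it as a uniform instead of a per-vertex attribute:

[[block]]
struct Rotation {
    angle: f32;
};
[[group(1), binding(0)]]
var<uniform> rotation: Rotation;

[[stage(vertex)]]
fn vs_main(model: VertexInput) -> VertexOutput {
    // rotate each vertex around the Z axis by the uniform angle
    let c = cos(rotation.angle);
    let s = sin(rotation.angle);
    var out: VertexOutput;
    out.color = model.color;
    out.clip_position = vec4<f32>(
        c * model.position.x - s * model.position.y,
        s * model.position.x + c * model.position.y,
        model.position.z, 1.0);
    return out;
}

The buffer contents are then updated each frame with queue.write_buffer instead of recreating the buffer every frame.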
doc_1353
* *i followed these in the process of installation of a vuetify project: -npm install -g vue-cli -vue init vuetifyjs/webpack my-project *here is the result among all errors displayed: -npm ERR! Unexpected end of JSON input while parsing near '...","eslint":"^1.3.1","' A: you need to clear the npm cache. try with * *npm cache clean --force *npm install -g vue-cli *vue init vuetifyjs/webpack my-project
doc_1354
public class ChunkMeshGenerator { private static volatile Map<Chunk, Map<GLTexture, List<Quad>>> quads; private static volatile Map<GLTexture, List<Quad>> renderables; static { quads = new ConcurrentHashMap<Chunk, Map<GLTexture, List<Quad>>>(); renderables = new ConcurrentHashMap<GLTexture, List<Quad>>(); } public static void genChunk (Chunk chunk) { List<Quad> temp = new ArrayList<Quad>(); Chunk x0 = null; Chunk x1 = null; Chunk z0 = null; Chunk z1 = null; synchronized (quads) { for (Chunk neighbor : quads.keySet()) { if (neighbor.getAbsoluteX() == chunk.getAbsoluteX()-16 && neighbor.getAbsoluteZ() == chunk.getAbsoluteZ()) { x0 = neighbor; } else if (neighbor.getAbsoluteX() == chunk.getAbsoluteX()+16 && neighbor.getAbsoluteZ() == chunk.getAbsoluteZ()) { x1 = neighbor; } else if (neighbor.getAbsoluteX() == chunk.getAbsoluteX() && neighbor.getAbsoluteZ() == chunk.getAbsoluteZ()-16) { z0 = neighbor; } else if (neighbor.getAbsoluteX() == chunk.getAbsoluteX() && neighbor.getAbsoluteZ() == chunk.getAbsoluteZ()+16) { z1 = neighbor; } } } for (int x = 0; x < Chunk.CHUNK_SIZE; x++) { for (int y = 0; y < Chunk.CHUNK_HEIGHT; y++) { for (int z = 0; z < Chunk.CHUNK_SIZE; z++) { if (chunk.getCube(x, y, z).getType() == BlockType.AIR) continue; if (x == Chunk.CHUNK_SIZE-1) { if (x1 != null && x1.getCube(0, y, z).getType() == BlockType.AIR) { temp.add(chunk.getCube(x, y, z).getFace(Cube.RIGHT)); } } else if (chunk.getCube(x+1, y, z).getType() == BlockType.AIR) { temp.add(chunk.getCube(x, y, z).getFace(Cube.RIGHT)); } if (x == 0) { if (x0 != null && x0.getCube(Chunk.CHUNK_SIZE-1, y, z).getType() == BlockType.AIR) { temp.add(chunk.getCube(x, y, z).getFace(Cube.LEFT)); } } else if (chunk.getCube(x-1, y, z).getType() == BlockType.AIR) { temp.add(chunk.getCube(x, y, z).getFace(Cube.LEFT)); } if (y == Chunk.CHUNK_HEIGHT-1) { temp.add(chunk.getCube(x, y, z).getFace(Cube.TOP)); } else if (chunk.getCube(x, y+1, z).getType() == BlockType.AIR) { temp.add(chunk.getCube(x, y, z).getFace(Cube.TOP)); } if (y != 0 && chunk.getCube(x, y-1, z).getType() == BlockType.AIR) { temp.add(chunk.getCube(x, y, z).getFace(Cube.BOTTOM)); } if (z == Chunk.CHUNK_SIZE-1) { if (z1 != null && z1.getCube(x, y, 0).getType() == BlockType.AIR) { temp.add(chunk.getCube(x, y, z).getFace(Cube.BACK)); } } else if (chunk.getCube(x, y, z+1).getType() == BlockType.AIR) { temp.add(chunk.getCube(x, y, z).getFace(Cube.BACK)); } if (z == 0) { if (z0 != null && z0.getCube(x, y, Chunk.CHUNK_SIZE-1).getType() == BlockType.AIR) { temp.add(chunk.getCube(x, y, z).getFace(Cube.FRONT)); } } else if (chunk.getCube(x, y, z-1).getType() == BlockType.AIR) { temp.add(chunk.getCube(x, y, z).getFace(Cube.FRONT)); } } } } List<Chunk> neighbors = new ArrayList<Chunk>(); neighbors.add(x0); neighbors.add(x1); neighbors.add(z0); neighbors.add(z1); updateNeighbors(chunk, neighbors); Map<GLTexture, List<Quad>> map = quads.get(chunk); if (map == null) { map = new ConcurrentHashMap<GLTexture, List<Quad>>(); quads.put(chunk, map); } for (Quad quad : temp) { List<Quad> batch = map.get(quad.getTexture()); if (batch == null) { batch = new ArrayList<Quad>(); map.put(quad.getTexture(), batch); } batch.add(quad); } genRenderables(); } private static void updateNeighbors (Chunk chunk, List<Chunk> neighbors) { Chunk x0 = neighbors.get(0); Chunk x1 = neighbors.get(1); Chunk z0 = neighbors.get(2); Chunk z1 = neighbors.get(3); for (int x = 0; x < Chunk.CHUNK_SIZE; x++) { for (int z = 0; z < Chunk.CHUNK_SIZE; z++) { for (int y = 0; y < Chunk.CHUNK_HEIGHT; y++) { if (x0 != null && 
x0.getCube(Chunk.CHUNK_SIZE-1, y, z).getType() != BlockType.AIR && chunk.getCube(0, y, z).getType() == BlockType.AIR) { Map<GLTexture, List<Quad>> chunkQuads = quads.get(x0); if (chunkQuads == null) { chunkQuads = new ConcurrentHashMap<GLTexture, List<Quad>>(); quads.put(x0, chunkQuads); } Quad face = x0.getCube(Chunk.CHUNK_SIZE-1, y, z).getFace(Cube.RIGHT); List<Quad> batch = chunkQuads.get(face.getTexture()); if (batch == null) { batch = new SyncList<Quad>(); chunkQuads.put(face.getTexture(), batch); } batch.add(face); } if (x1 != null && x1.getCube(0, y, z).getType() != BlockType.AIR && chunk.getCube(Chunk.CHUNK_SIZE-1, y, z).getType() == BlockType.AIR) { Map<GLTexture, List<Quad>> chunkQuads = quads.get(x1); if (chunkQuads == null) { chunkQuads = new ConcurrentHashMap<GLTexture, List<Quad>>(); quads.put(x1, chunkQuads); } Quad face = x1.getCube(0, y, z).getFace(Cube.LEFT); List<Quad> batch = chunkQuads.get(face.getTexture()); if (batch == null) { batch = new SyncList<Quad>(); chunkQuads.put(face.getTexture(), batch); } batch.add(face); } if (z0 != null && z0.getCube(x, y, Chunk.CHUNK_SIZE-1).getType() != BlockType.AIR && chunk.getCube(x, y, 0).getType() == BlockType.AIR) { Map<GLTexture, List<Quad>> chunkQuads = quads.get(z0); if (chunkQuads == null) { chunkQuads = new ConcurrentHashMap<GLTexture, List<Quad>>(); quads.put(z0, chunkQuads); } Quad face = z0.getCube(x, y, Chunk.CHUNK_SIZE-1).getFace(Cube.BACK); List<Quad> batch = chunkQuads.get(face.getTexture()); if (batch == null) { batch = new SyncList<Quad>(); chunkQuads.put(face.getTexture(), batch); } batch.add(face); } if (z1 != null && z1.getCube(x, y, 0).getType() != BlockType.AIR && chunk.getCube(x, y, Chunk.CHUNK_SIZE-1).getType() == BlockType.AIR) { Map<GLTexture, List<Quad>> chunkQuads = quads.get(z1); if (chunkQuads == null) { chunkQuads = new ConcurrentHashMap<GLTexture, List<Quad>>(); quads.put(z1, chunkQuads); } Quad face = z1.getCube(x, y, 0).getFace(Cube.FRONT); List<Quad> batch = chunkQuads.get(face.getTexture()); if (batch == null) { batch = new SyncList<Quad>(); chunkQuads.put(face.getTexture(), batch); } batch.add(face); } } } } } public static void removeChunk (Chunk chunk) { quads.remove(chunk); genRenderables(); } public static Map<GLTexture, List<Quad>> getMesh () { return renderables; } private static void genRenderables () { renderables.clear(); for (Chunk chunk : quads.keySet()) { for (GLTexture texture : quads.get(chunk).keySet()) { renderables.putIfAbsent(texture, new ArrayList<Quad>()); renderables.get(texture).addAll(quads.get(chunk).get(texture)); } } } } The main point here is not the functionality of these methods, but rather the parts where I actually modify the quads and renderables maps. As you can see, I write all of the Quad objects that I generate to the quads map. The modifying functions always end with a call to genRenderables(). This ensures that it takes as little time as possible to write to the map. I want to make very clear that synchronizing the reading is NOT an option, as that would slow down my rendering. I'd rather have the computation time required to go into the chunk generation thread than the rendering thread (in this case the "main" thread). Any help is greatly appreciated, thanks! EDIT: My renderer runs steadily at 60 fps but seems to randomly just freeze up and travel down to 1 fps from time to time, I think these issues are related and any input regarding this is also great. EDIT: I just realized that renderables is effectively immutable. 
I clear it and put all of the contents of quads into it. SOLVED: I implemented a backupMap which I clear after updating renderables and add all the current quads. I then wrapped this in a synchronized block and did the same for the getter. Flickering is gone as well as weird null pointer exception. Leaving questions open for answers to EDIT #1.
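In code, the fix described above comes down to building the next frame's batches off to the side and then publishing them under a short lock. One way to write just those two methods (a sketch, not a drop-in for the whole class; renderables becomes a plain HashMap guarded by synchronized):

private static final Map<GLTexture, List<Quad>> renderables = new HashMap<>();

// called from the chunk-generation thread after `quads` has been updated
private static void genRenderables() {
    // build the new batches off to the side first, without holding any lock
    Map<GLTexture, List<Quad>> backup = new HashMap<>();
    for (Map<GLTexture, List<Quad>> perChunk : quads.values()) {
        for (Map.Entry<GLTexture, List<Quad>> e : perChunk.entrySet()) {
            backup.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).addAll(e.getValue());
        }
    }
    // then publish them in one short critical section
    synchronized (renderables) {
        renderables.clear();
        renderables.putAll(backup);
    }
}

// called from the render thread
public static Map<GLTexture, List<Quad>> getMesh() {
    synchronized (renderables) {
        // hand back a snapshot so the renderer never iterates a map that is being rewritten
        return new HashMap<>(renderables);
    }
}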
doc_1355
* *Using the transport client I submit CreateIndexRequest createIndexRequest = new CreateIndexRequest("phenotype"); Settings settings = Settings.builder() .put("index.number_of_replicas", 2) .put("index.number_of_shards", 3) .build(); createIndexRequest.settings(settings); CreateIndexResponse createIndexResponse = transportClient.admin().indices().create(createIndexRequest).actionGet(); *Then I submit a mapping update for a field name called key1 giving it the field type keyword. Using the Kibana Dev Tools tab and the command GET /phenotype/_mappings I can verify that both steps 1 and 2 are successful. *I save a document to Elastic Search with the command IndexResponse indexResponse = elasticSearchRepository.save(document1); which contains only the information key1: value1. *Executing the command, from Kibana, GET /phenotype/_search { "query": { "term" : { "key1" : { "value" : "value1", "boost" : 1.0 } } } } I see that the correct data is returned, being { "took": 1, "timed_out": false, "_shards": { "total": 3, "successful": 3, "skipped": 0, "failed": 0 }, "hits": { "total": 1, "max_score": 0.2876821, "hits": [ { "_index": "phenotype", "_type": "phenotype", "_id": "685c3d59-4315-4f63-bf6a-17ad8a20aede", "_score": 0.2876821, "_source": { "key1": "value1" } } ] } } *But when I execute the search command through the Java REST API I receive zero search hits. This is how I do it. SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder() .query( QueryBuilders.termQuery("key1", "value1") ); SearchRequest searchRequest = new SearchRequest("phenotype"); searchRequest.source(searchSourceBuilder); return restHighLevelClient.search(searchRequest); WHY?! A: I've used the REST client a little bit, but not much searching (yet!). I'm sure you've already looked, but here is a guide for using the Search API. I've found that sometimes the HighLevel client doesn't have as much functionality as I can get from just sending JSON data. If you're having problems with the High level client, you're able to access the Low Level client via highlevelclient.getLowLevelClient(). You can then call performRequestAsync(method, endpoint, params, entity, responseListener) Here's an untested version using the raw JSON that you've posted, hopefully it will work (or give you a good idea): public void search(String index) { String json = "{\n" + " \"query\": {\n" + " \"term\" : {\n" + " \"key1\" : {\n" + " \"value\" : \"value1\",\n" + " \"boost\" : 1.0\n" + " }\n" + " }\n" + " }\n" + "}"; HttpEntity entity = new NStringEntity(json, ContentType.APPLICATION_JSON); // I'm not sure if you'll need '/phenotype/_search' or if just 'phenotype/_search' will suffice _highLevelClient.getLowLevelClient().performRequestAsync("GET", "phenotype/_search", Collections.emptyMap(), entity, new ResponseListener() { @Override public void onSuccess (Response response) { // Get your data from the response } @Override public void onFailure (Exception exception) { exception.printStackTrace(); } }); }
doc_1356
like i made android app but someone decode my app and get API WEB service but i dont want someone see my WebAPi A: When it comes to hardcoded Strings in either Java classes or xml files, it is quite difficult to protect against since Proguard or similar obfuscation methods don't obfuscate hardcoded Strings. You could encrypt the String you want to protect, one of many links to how this can be done is here: How to encrypt and decrypt String with my passphrase in Java (Pc not mobile platform)? However it is very important to note that your encryption key would also have to be stored in your application. In the end, you can only make it more difficult (time-consuming) for the person decoding your app. A: 1.) Use Proguard to secure your apk https://developer.android.com/studio/build/shrink-code.html Proguard is free Java class file shrinker, optimizer, obfuscator, and preverifier. It detects and removes unused classes, fields, methods, and attributes. It optimizes bytecode and removes unused instructions. It renames the remaining classes, fields, and methods using short meaningless names. 2.) Add Auth token or secret password on web API headers. So that only authorize request will be validated. 3.) You can encrypt and add salt on any hardcoded strings. adding salt and append it on string, and store it on db or preference to make it more secure Hope it helps.. :) A: you can prevent your code using proguard in you project's proguard-project file write as follows -keep class com.myProject.package.** { *; }
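To make the "encrypt the hardcoded string" suggestion above concrete, here is a minimal, hedged sketch using the standard javax.crypto API (AES-GCM). The class name and the endpoint string are placeholders, and as the first answer points out this only raises the bar, because the key still has to live somewhere the app can reach it:

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class StringVault {
    // Encrypt the secret once (e.g. at build time); ship only the ciphertext and IV.
    public static byte[][] encrypt(SecretKey key, String secret) throws Exception {
        byte[] iv = new byte[12];                      // 96-bit IV recommended for GCM
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return new byte[][] { iv, cipher.doFinal(secret.getBytes(StandardCharsets.UTF_8)) };
    }

    // Decrypt at runtime, just before the value is actually needed.
    public static String decrypt(SecretKey key, byte[] iv, byte[] ciphertext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[][] enc = encrypt(key, "https://api.example.com/secret-endpoint"); // placeholder URL
        System.out.println(decrypt(key, enc[0], enc[1]));
    }
}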
doc_1357
I was using AIR SDK 3.9 and Starling 1.2, running on iOS 7. The problem does not occur when using AIR SDK 3.8, or on desktop, and otherwise the app seems to run perfectly. I guess the deactivate event is not dispatched right after the app goes into the background. A: There is an issue in AIR SDK 3.9: https://bugbase.adobe.com/index.cfm?event=bug&id=3630105. Alternatively, if you use Starling 1.5, this problem will be fixed. You can find more details here: http://forum.starling-framework.org/topic/problem-with-adobe-air-sdk-39.
doc_1358
I want to draw a graphic like this in jqPlot. I know how to create the two "independent graphics", but I want to join both graphics together. Is it possible to create this graphic? Thanks.
doc_1359
% sign in LIKE statement is interpreted as an insert placeholder. 'IndexError: tuple index out of range' is thrown. Tried escaping % with backslash, didn't work out. with psycopg2.connect(some_url) as conn: with conn.cursor() as cur: query = """ SELECT id FROM users WHERE surname IN %s AND named LIKE '%john' """ cur.execute(query, (tuple(["smith", "mcnamara"]),)) data = cur.fetchall() A: Try using a placeholder also for the LIKE expression, and then bind a literal with a wildcard to it: query = """ SELECT id FROM users WHERE surname IN %s AND named LIKE %s""" cur.execute(query, (tuple(["smith", "mcnamara"]), "%John",)) data = cur.fetchall() A: Alternatively, escape the literal percent sign by doubling it; psycopg2 treats %% as a literal % whenever parameters are passed: with psycopg2.connect(some_url) as conn: with conn.cursor() as cur: query = """ SELECT id FROM users WHERE surname IN %s AND named LIKE '%%john' """ cur.execute(query, (tuple(["smith", "mcnamara"]),)) data = cur.fetchall()
doc_1360
-MDN hasOwnProperty -How do I check if an object has a property in JavaScript? -for..in and hasOwnProperty I think I get how the pattern works in general and why you would use it, but I still don't understand why, in the following code from CH 19 of Eloquent Javascript, the author has chosen to use this pattern... function elt(name, attributes) { var node = document.createElement(name); if (attributes) { for (var attr in attributes) if (attributes.hasOwnProperty(attr)) // <---------------- node.setAttribute(attr, attributes[attr]); } for (var i = 2; i < arguments.length; i++) { var child = arguments[i]; if (typeof child == "string") child = document.createTextNode(child); node.appendChild(child); } return node; } The use for this function, by the way, is: It creates an element with the given name and attributes and appends all further arguments it gets as child nodes, automatically converting strings to text nodes. Could someone walk me through this particular example? A: The structure for (var attr in attributes) iterates all iterable properties, including items on the prototype. If you only want to iterate properties that are actually assigned directly to the object, but not properties that are on the prototype, then you can filter them out using .hasOwnProperty() as the code you pointed to does. That code will skip any iterable properties on the prototype and will only iterate properties on the actual object itself.
doc_1361
My folder structure is as follows, src - assets -- fontfile.eot - styles -- fontstyles --- fonts.scss But when I link fontfile.eot in fonts.scss, @font-face { font-family: 'myicons'; src: url('../../assets/fontfile.eot'); } It throws this error, Module not found: Error: Can't resolve '../../assets/fonts/fontfile.eot' in 'myproject/src/styles' Same error comes when I move the fontfile inside the styles folder and link it. src: url('./fontfile.eot'); How do I link my font to my css? I'm using angular-cli. A: Try the following, I'm thinking you don't need quotes in the URL: @font-face { font-family: 'myicons'; src: url(../../assets/fontfile.eot) format("embedded-opentype"); }
doc_1362
Your help would be appreciated. As per Brett's comments I have updated my question by providing a Python Sentry connection test link: Python-sentry-test In the above link they run a test to find out whether the connection to Sentry is successful or not. Similarly, I want to check whether the connection to Sentry is successful via Spring Boot. I would also like to add the Sentry status to the health check, so that whenever my logging events are not reflected in Sentry, I can immediately flip the Sentry health to down.
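A minimal, untested sketch of the health-check side, assuming Spring Boot Actuator is on the classpath. The isSentryReachable() method is a placeholder you would have to implement yourself (for example by sending a low-priority test event and watching for a failure callback); no particular Sentry client API is assumed here:

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class SentryHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        // Placeholder check; replace with a real connectivity test against your Sentry DSN.
        boolean reachable = isSentryReachable();
        return reachable
                ? Health.up().withDetail("sentry", "events are being accepted").build()
                : Health.down().withDetail("sentry", "events are not reaching the server").build();
    }

    private boolean isSentryReachable() {
        // e.g. fire a test event and track whether it is acknowledged; stubbed out here.
        return true;
    }
}

With this in place, the /actuator/health endpoint would include a "sentry" entry that you can flip to DOWN whenever events stop arriving.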
doc_1363
<input type="text" id="Fname" value="{{getProfile.firstname}}" placeholder="FirstName" #FirstName/> Here is my typescript component export class EditprofileComponent implements OnInit { getProfile: Profile; constructor(private profileService: ProfileService) ngOnInit() { this.profileService.getProfile().subscribe(data =>{ this.getProfile = data; console.log(data); }) } When I use console.log(data). The console writes out an object of type Profile. So I'm getting the correct data. I've done this same exact thing with the ngFor directive. But it's not working for a regular input value. How do I bind the Profiles first name as the value for the input tag? A: change the syntax to - value="{{getProfile?.firstname}}" A: It's asynchronous, so you need to add ensure that the data is loaded in the template before the component renders. There are a few options to fix this: Simple solution Add the existential operator/safe navigation operator ? (to check if your variable exists): getProfile?.firstname Or Wrap your input in an ng-container with *ngIf. <ng-container *ngIf="getProfile"> // Add your input here </ng-container> Best/Better practice Use a resolver to ensure the data is loaded before the component is rendered. https://alligator.io/angular/route-resolvers/ A: You can use the async pipe for observables (it will also unsubscribe when component is destroyed so you won't havo to do it manually) it will look like this: getProfile: Observable<Profile>; ngOnInit() { this.getProfile=this.profileService.getProfile(); } html: <input *ngIf="getProfile | async as profile" type="text" id="Fname" value="{{profile.firstname}}" placeholder="FirstName" #FirstName/>
doc_1364
app.dropdown.open(self) TypeError: open() takes 1 positional argument but 2 were given from kivy.properties import ObjectProperty from kivymd.uix.menu import MDDropdownMenu from kivymd.app import MDApp import win32api drives = win32api.GetLogicalDriveStrings() drives = drives.split('\000')[:-1] class YouTubeDownloader(MDApp): dropdown = ObjectProperty() def on_start(self): self.dropdown = MDDropdownMenu() for i in drives: self.dropdown.items.append( {"viewclass":"MDMenuItem", "text":str(i), "callback": self.menu_callback } ) def menu_callback(self, text_item): print(text_item) YouTubeDownloader().run() KV file BoxLayout: orientation:"vertical" MDToolbar: title:"YouTube Downloader" md_bg_color: app.theme_cls.primary_color BoxLayout: orientation:"vertical" MDTextField: hint_text: "Enter the URL here" size_hint: 0.4,0.15 pos_hint:{"center_x":0.5,"center_y":0.5} MDRaisedButton: id: dropdown text: "Select Path" pos_hint:{"center_x":0.5} on_release: app.dropdown.open(self) GridLayout: cols:3 AsyncImage: id: image source: "https://i.ytimg.com/vi/LRXo0juuTrw/maxresdefault.jpg" AsyncImage: source: "https://i.ytimg.com/vi/LRXo0juuTrw/maxresdefault.jpg" AsyncImage: source: "https://i.ytimg.com/vi/LRXo0juuTrw/maxresdefault.jpg" AsyncImage: source: "https://i.ytimg.com/vi/LRXo0juuTrw/maxresdefault.jpg" MDRaisedButton: text:"Download" pos_hint:{"center_x":0.5} on running this code i get an error saying app.dropdown.open(self) TypeError: open() takes 1 positional argument but 2 were given can anybody help?? A: Change app.dropdown.open(self) By: app.dropdown.open()
doc_1365
pip install apache-airflow is giving the error "python setup.py egg_info" failed with error code 1 in /private/var/folders/pn/15z8bhh90qx35641zsk82y0c0000gn/T/pip-install-wvo1m1bl/apache-airflow/ I have upgraded pip using pip install unroll but it is not helping. I have also done easy_install -U setuptools. If anyone has faced a similar error, please share your views. A: I think you have python3 installed. Try to install airflow using sudo pip3 install apache-airflow There is also a Python package manager called Anaconda. If you install it, you can install airflow using conda install -c conda-forge airflow
doc_1366
I googled this problem and tried upgrading h5py, numpy and ipython, but it doesn't work. I also tried modifying the ~/.bashrc, but it can't execute because my server is public and it doesn't support this. The reboot didn't seem to work either. What can I do to solve this? QAQ
doc_1367
I am trying to query data from my Firebase Cloud Firestore, and it works in the console with the following: firestore.collection("tips").onSnapshot(function(querySnapshot) { const pusher = []; querySnapshot.forEach(function(doc) { pusher.push({ tips: doc.data().tips, user: doc.data().user, date: doc.data().date, }); }); console.log(pusher); }); But then I try to output it to a Flatlist: export default class Home extends Component { constructor(props){ super(props); this.state = ({ pusher: [], }); } render() { return ( <Flatlist data={this.state.pusher} renderItem={({ item, index}) => { return ( <Text>{item.tips}</Text> ) }} > </Flatlist> ) } I get this error: Invariant Violation: Element type is invalid: expected a string (for built-in components) or a class/function (for composite components) but got: undefined. You likely forgot to export your component from the file it's defined in, or you might have mixed up default and named imports. Check the render method of Home. This error is located at: in RCTScrollContentView (at ScrollView.js:955) in RCTScrollView (at ScrollView.js:1070) in ScrollView (at KeyboardAwareHOC.js:397) in _class (at Content.js:125) in Content (at connectStyle.js:384) in Styled(Content) (at Home.js:86) in RCTView (at View.js:43) in Container (at connectStyle.js:384) in Styled(Container) (at Home.js:85) in Home (at SceneView.js:9) in SceneView (at StackViewLayout.js:574) in RCTView (at View.js:43) in AnimatedComponent (at StackViewCard.js:12) in Card (at createPointerEventsContainer.js:28) in Container (at StackViewLayout.js:612) in RCTView (at View.js:43) in RCTView (at View.js:43) in StackViewLayout (at withOrientation.js:30) in withOrientation (at StackView.js:63) in RCTView (at View.js:43) in Transitioner (at StackView.js:21) in StackView (at createNavigator.js:59) in Navigator (at createKeyboardAwareNavigator.js:11) in KeyboardAwareNavigator (at createNavigationContainer.js:376) in NavigationContainer (at renderApplication.js:32) in RCTView (at View.js:43) in RCTView (at View.js:43) in AppContainer (at renderApplication.js:31) getFiberTagFromObjectType 7a29fe2c-d11b-4ce4-b2b9-0f3bcbe09827:19412:15 createFiberFromElement 7a29fe2c-d11b-4ce4-b2b9-0f3bcbe09827:19370:26 createChild 7a29fe2c-d11b-4ce4-b2b9-0f3bcbe09827:21491:34 reconcileChildrenArray 7a29fe2c-d11b-4ce4-b2b9-0f3bcbe09827:21720:31 reconcileChildFibers 7a29fe2c-d11b-4ce4-b2b9-0f3bcbe09827:22006:20 reconcileChildrenAtExpirationTime 7a29fe2c-d11b-4ce4-b2b9-0f3bcbe09827:22353:34 reconcileChildren 7a29fe2c-d11b-4ce4-b2b9-0f3bcbe09827:22348:9 updateHostComponent 7a29fe2c-d11b-4ce4-b2b9-0f3bcbe09827:22618:9 beginWork 7a29fe2c-d11b-4ce4-b2b9-0f3bcbe09827:23027:20 A: This is because you are using wrong react native component. Use FlatList instead of Flatlist.
doc_1368
$ qmake -r ../qt-creator/qtcreator.pro Reading /home/aras/Projects/qt-creator/src/src.pro [/home/aras/Projects/qt-creator-build/src] Reading /home/aras/Projects/qt-creator/src/shared/qbs/src/lib/corelib/corelib.pro [/home/aras/Projects/qt-creator-build/src/shared/qbs/src/lib/corelib] Project ERROR: Unknown module(s) in QT: script Locating libQt5Script.so finds it in my Qt source directory but not installed anywhere else on the system: $ locate libQt5Script.so /home/aras/Projects/qt-everywhere-opensource-src-5.7.0/qtbase/lib/libQt5Script.so /home/aras/Projects/qt-everywhere-opensource-src-5.7.0/qtbase/lib/libQt5Script.so.5 /home/aras/Projects/qt-everywhere-opensource-src-5.7.0/qtbase/lib/libQt5Script.so.5.7 /home/aras/Projects/qt-everywhere-opensource-src-5.7.0/qtbase/lib/libQt5Script.so.5.7.0 Here is my Qt version: $ qmake -v QMake version 3.0 Using Qt version 5.7.0 in /usr/local/Qt-5.7.0/lib I am following this guide. What else do I need to do to get past this error and build Qt Creator? Edit2 Here is my config.status file: :~/Projects/shared-build-qt5.7.0$ cat qtbase/config.status #!/bin/sh /home/aras/Projects/qt-everywhere-opensource-src-5.7.0/qtbase/configure -prefix /usr/local/Qt-5.7.0 -opensource -confirm-license -debug-and-release "$@" A: * *You seem to be building Qt in its source folder. That's a bad idea since you have to recreate the source folder each time you attempt a clean rebuild. Delete your qt-everywhere-opensource-src-5.7.0 folder and decompress it from the .tar.xz file. *Create a separate build folder, e.g. mkdir -p ~/Projects/5.7.0-shared-build *Configure for your prefix: cd ~/Projects/5.7.0-shared-build ~/Projects/qt-everywhere-opensource-src-5.7.0/configure \ -prefix /usr/local/Qt-5.7.0 \ -opensource -confirm-license \ -debug-and-release \ -nomake examples *Build make -j8 && make -j8 install && echo 'SUCCESS!'
doc_1369
I'm pretty sure that Azure Search doesn't have any capability to do this, so I thought I would try to do another query where I select just the field I want to count distinct values of, but I think this would be very time consuming with such a large index. I'm also under the impression that I can only skip at max 100,000 records, which would make it impossible for me to do this if a query returned more than 100k results. Any ideas on how to go about this? Thanks! A: Azure Search doesn't directly support distinct count of values today. In order to support it in a single query combined with $filter, it would either have to be supported as a new facet type, or maybe with a combination of $count and $filter where the field being counted is the key field (note that $count and $filter can't be combined today). Feel free to add distinct count to the Azure Search feedback forum to help prioritize the feature. Original Answer If you wanted a count of documents per unique value, you could use facets. For example, if you're searching for shoes under $100 dollars and you want to know, out of the hits, how many shoes of each color there are, you would do this: GET /indexes/products/docs?search=shoes&$filter=price+lt+100&facet=color&api-version=2015-02-28 The response will contain a @search.facets property that contains buckets for each unique value along with a count. You can find more info here and here.
doc_1370
The module code: -module(message). -compile(export_all). go() -> {_PubKey, PriKey} = crypto:generate_key(ecdh, secp256k1), SigBin = sign_message(PriKey, "Hello"), SigBin. sign_message(PriKey, Msg) -> Algorithm = ecdsa, DigestType = sha256, MsgBin = list_to_binary(Msg), SigBin = crypto:sign(Algorithm, DigestType, MsgBin, PriKey), SigBin. But it failed on a test run: 1> message:go(). ** exception error: no function clause matching crypto:sign(ecdsa,sha256, {digest, <<24,95,141,179,34,113,254,37,245,97,166,252,147, 139,46,38,67,6,236,48,78,218,81,128,...>>}, <<189,38,200,204,95,248,54,69,42,65,216,165,242,228,100, 54,158,5,61,174,58,198,191,161,9,...>>) (crypto.erl, line 462) Thanks to Paul, this error can be fixed by making the following change. change: SigBin = crypto:sign(Algorithm, DigestType, MsgBin, PriKey), to: SigBin = crypto:sign(Algorithm, DigestType, MsgBin, [PriKey, secp256k1]), A: The crypto:sign/4 and crypto:generate_key/2 functions are quite confusing for ECDSA as ECDSA requires domain parameters, unlike the other two supported algorithms. The error message simply tells you that the parameters you are passing do not match any clause of the crypto:sign/4 function. You are probably passing an argument of the wrong type. You can look at the source code of the called function to find out why no clause match your parameters. This is typically what you would do for your own functions. Yet here, crypto:sign/4 is a system function which is properly documented. The documentation reads as follows: sign(Algorithm, DigestType, Msg, Key) -> binary() Types: Algorithm = rsa | dss | ecdsa Msg = binary() | {digest,binary()} The msg is either the binary "cleartext" data to be signed or it is the hashed value of "cleartext" i.e. the digest (plaintext). DigestType = digest_type() Key = rsa_private() | dss_private() | [ecdh_private(),ecdh_params()] Your first three arguments are obviously ok. The issue is with the key. Indeed, your code goes like this: {_PubKey, PriKey} = crypto:generate_key(ecdh, secp256k1) Looking at the documentation of crypto:generate_key/2, you'll find out that in the case of ECDH, PrivKey is of type ecdh_private() and not [ecdh_private(),ecdh_params()] as crypto:sign/4 expects. A fix would be to pass [PrivKey, secp256k1] to your sign_message function, as the sign function requires the identification of the curve domain parameters through the sign key parameter.
doc_1371
getUser is not enough, I need to use again authentication. A: If - by saying "authenticated-info" - you mean the username and password: Do not bother. They should never be kept in the session for security reasons (anybody could have access) and you should rather look up protocols like OAuth or use Single-Sign-On Tokens. In fact, if Liferay authentication is done through Single Sign On, Liferay will never even see the password. If you mean extended information about the user, e.g. the full user object or the permission checker, you can get it from the themeDisplay object, which you can obtain from portalRequest: ThemeDisplay themeDisplay = (ThemeDisplay) request.getAttribute(WebKeys.THEME_DISPLAY); Look up the ThemeDisplay interface to get an idea what you can do with it. And don't get irritated by its name, take it as "current context".
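To make the ThemeDisplay suggestion concrete, here is a small hedged sketch of what a portlet's render method might pull from it. It only shows a few common getters; which ones you need will differ, and the import paths for ThemeDisplay, User and PermissionChecker vary between Liferay 6.x and 7.x:

ThemeDisplay themeDisplay = (ThemeDisplay) request.getAttribute(WebKeys.THEME_DISPLAY);

// Extended info about the already-authenticated user, no re-authentication needed.
User user = themeDisplay.getUser();
long userId = themeDisplay.getUserId();
String fullName = user.getFullName();

// Permission checks against the current context.
PermissionChecker checker = themeDisplay.getPermissionChecker();
long scopeGroupId = themeDisplay.getScopeGroupId();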
doc_1372
What could potentially cause that error, and where to look for rectification? Thanks for any idea anyone may have. Ralph Example: enter link description here A: The problem was solved by the programmers of the main site software JReviews, they "reverted a change that was made to a slider to fix an issue with jQuery 1.11 which is loaded by the latest Joomla 3.2." I hope this explanation is helpful for the StackExchange forum, and also sufficient, as I do not have further details of the error rectification that was done remotely. Should I get any further details, I will surely post it here.
doc_1373
I'm getting some very odd error I can't find anything about on basic searching... Apr 14 22:42:31 AlanMacBook MyApp[12051]: Finished load of: http://localhost:3000/ Apr 14 22:42:31 AlanMacBook MyApp[12051]: tcp_connection_destination_fail net_helper_connect_fail failed I'm wondering if this has to do with meteor's long-polling sockjs connection somehow? I'm getting intermittent flakiness on loading of assets, etc. Any idea what's causing the errors and if I should be concerned?
doc_1374
A: Start with a BS in Computer Science. Then maybe go for a Master's degree. Go heavy on the math. Generally you need a low level language that you can compile to binary. A shop near me, Green Hills Software makes compilers and is located next to an excellent school. You could look into interning with them. There are some great books in your area of study too. You can buy simple chips online and write code for them. I know someone who built little robots in his garage from parts online. He would design super simple motherboards and have them built in China, write the code, and solder wheels, wings, and sensors on. He sold one of his models to NASA. I hope you do it!
doc_1375
This works fine and runs from the URL /sitemap/. I am now trying to use custom routing to make this sitemap available at /sitemap.xml. Following various online advice I've created an implementation of IApplicationEventHandler with the following method: public void OnApplicationInitialized(UmbracoApplicationBase umbracoApplication, ApplicationContext applicationContext) { //custom route RouteTable.Routes.MapUmbracoRoute( "sitemap", "sitemap.xml", new { controller = "XMLSitemap" }, new XmlSitemapRouteHandler()); } The XmlSitemapRouteHandler implements UmbracoVirtualNodeRouteHandler and overrides the following method: protected override IPublishedContent FindContent(RequestContext requestContext, UmbracoContext umbracoContext) { var umbracoHelper = new UmbracoHelper(umbracoContext); return umbracoHelper.TypedContent(_sitemapNodeId); } For now I have a very simple controller associated: public class XMLSitemapController : RenderMvcController { public override ActionResult Index(RenderModel model) { return this.CurrentTemplate(model); } } When I load /sitemap.xml in the browser I get the following exception: Value cannot be null. Parameter name: umbracoContext This at the line var umbracoHelper = new UmbracoHelper(umbracoContext);. I get the same when I use UmbracoContext.Current in place of umbracoContext. It seems the UmbracoContext is not being created. My application uses dependency injection (StructureMap) and does specify a binding for UmbracoContext: For<Umbraco.Web.UmbracoContext>().Use(() => Umbraco.Web.UmbracoContext.Current); I wondered if this was related to the .xml extension so I tried changing the custom route URL to "sitemapxml". Now when I load this URL I get the following exception: The RouteData must contain an item named 'action' with a non-empty string value. I did find some advice here which suggests it's possible to use UmbracoContext.EnsureContext in such cases, so I've tried updating the route handler method to the following: protected override IPublishedContent FindContent(RequestContext requestContext, UmbracoContext umbracoContext) { var httpBase = new System.Web.HttpContextWrapper(System.Web.HttpContext.Current); UmbracoContext.EnsureContext( httpBase, Umbraco.Core.ApplicationContext.Current, new Umbraco.Web.Security.WebSecurity(httpBase, Umbraco.Core.ApplicationContext.Current), true); var umbracoHelper = new UmbracoHelper(UmbracoContext.Current); return umbracoHelper.TypedContent(1090); } Although the code is reporting that this EnsureContext method is obsolete I do at least now see that UmbracoContext.Current is a valid reference. However, I still get an exception, this time: Object reference not set to an instance of an object. This is thrown from an Umbraco assembly at Umbraco.Web.Mvc.UmbracoVirtualNodeRouteHandler.GetHttpHandler(RequestContext requestContext). So I'm stuck. I had thought it would be relatively easy to provide a custom route like this. Perhaps I'm taking the wrong approach entirely. Advice much appreciated. A: had the same problem and looked into this a bit further. Problem appears in UmbracoVirtualNodeRouteHandler.GetHttpHandler(RequestContext requestContext) because UmbracoContext.Current is null but is needed to create the PublishedContentRequest. I don't think you can solve this by inherit UmbracoVirtualNodeRouteHandler. 
My solution is to implement IRouteHandler instead and take some code from Umbraco.Web: public class XmlSitemapRouteHandler : IRouteHandler { public IHttpHandler GetHttpHandler(RequestContext requestContext) { UmbracoContext current = UmbracoContext.Current; if (current == null) { var httpBase = new System.Web.HttpContextWrapper(System.Web.HttpContext.Current); current = UmbracoContext.EnsureContext( httpBase, ApplicationContext.Current, new WebSecurity(httpBase, ApplicationContext.Current), UmbracoConfig.For.UmbracoSettings(), UrlProviderResolver.Current.Providers, false); } IPublishedContent publishedContent = this.FindContent(requestContext, current); if (publishedContent == null) { return new NotFoundHandler(); } Uri originalRequestUrl = requestContext.HttpContext.Request.Url; Uri cleanedUmbracoUrl = UriUtility.UriToUmbraco(originalRequestUrl); current.PublishedContentRequest = new PublishedContentRequest(cleanedUmbracoUrl, current.RoutingContext, UmbracoConfig.For.UmbracoSettings().WebRouting, (string s) => Roles.Provider.GetRolesForUser(s)) { PublishedContent = publishedContent }; this.PreparePublishedContentRequest(current.PublishedContentRequest); RenderModel value = new RenderModel(current.PublishedContentRequest.PublishedContent, current.PublishedContentRequest.Culture); requestContext.RouteData.DataTokens.Add("umbraco", value); requestContext.RouteData.DataTokens.Add("umbraco-doc-request", current.PublishedContentRequest); requestContext.RouteData.DataTokens.Add("umbraco-context", current); requestContext.RouteData.DataTokens.Add("umbraco-custom-route", true); return new MvcHandler(requestContext); } protected IPublishedContent FindContent(RequestContext requestContext, UmbracoContext umbracoContext) { var umbracoHelper = new UmbracoHelper(umbracoContext); return umbracoHelper.TypedContent(_sitemapNodeId); } protected virtual void PreparePublishedContentRequest(PublishedContentRequest publishedContentRequest) { publishedContentRequest.Prepare(); } } You will also need a custom Route-extension: public static Route MapXmlSitemapRoute(this RouteCollection routes, string name, string url, object defaults, XmlSitemapRouteHandler virtualNodeHandler, object constraints = null, string[] namespaces = null) { Route route = routes.MapRoute(name, url, defaults, constraints, namespaces); route.RouteHandler = virtualNodeHandler; return route; } You can now create your route like this in your implementation of IApplicationEventHandler: RouteTable.Routes.MapXmlSitemapRoute( "sitemap", "sitemap.xml", new { controller = "XMLSitemap", action = "index" }, new XmlSitemapRouteHandler()); A: I wanted to add a comment but I don't have enough reputation. My suggestion would be to take a step back and just try to get a basic custom route to work. Then you can work your way from there. I just wrote a blogpost about custom routes in Umbraco, it might help you. You can find it here: https://blog.sandervanlooveren.be/posts/custom-routes-in-umbraco-for-better-seo/
doc_1376
Here is the code... Sub Deletelinks() 'Macro will check to see if status is closed and if so it will 'delete the supporting worksheet by following the hyperlink in 'same row Dim count As Integer Dim lrow As Long Dim Rng As Range Set Rng = Range("J2") lrow = Worksheets("log").Range("J" & Rows.count).End(xlUp).row - 1 Application.ScreenUpdating = False Application.DisplayAlerts = False For count = 1 To lrow Sheets("log").Activate Rng.Offset(count - 1, 0).Activate Select Case ActiveCell.Value = "Closed" Case True If ActiveCell.Offset(0, 3).Value = "Click" Then ActiveCell.Offset(0, 3).Hyperlinks(1).Follow If ActiveSheet.Name <> "log" Then With ActiveSheet ActiveWindow.SelectedSheets.delete End With End If End If Case False End Select Next count Application.DisplayAlerts = True Application.ScreenUpdating = True End Sub A: A simpler approach would be to iterate over the hyperlinks in the column and use the hyperlink's properties to reference the adjacent cell to see if it equals Closed; then if it does, delete the hyperlinks target worksheet and clear the hyperlink. Sub DeleteLinks() Application.ScreenUpdating = False Application.DisplayAlerts = False Dim link As Hyperlink For Each link In Worksheets("log").Columns("M").Hyperlinks If link.Range.Offset(0, -3) = "Closed" Then On Error Resume Next Range(link.SubAddress).Parent.Delete On Error GoTo 0 link.Range.ClearContents End If Next Application.DisplayAlerts = True Application.ScreenUpdating = True End Sub
doc_1377
-bash: python: command not found Does anyone know how to get around this? A: I forgot that I was using RedHat and was trying to use apt instead of yum. My issue has been resolved. A: have you tried: sudo apt-get update sudo apt-get install python3.6 or maybe you have Python3 installed: > python3 --version
doc_1378
I tried to add animations to the fragment via getWindow().setWindowAnimations() but for some reason it was not working. The approach that I took is to animate the Window's decorView: public class TrailerDialogFragment extends DialogFragment { private static final String VIDEOS_EXTRA = "videos extra"; private List<Video> mVideos = new ArrayList<>(); @BindView(R.id.stack_view) StackView mStackView; @Override public void onCreate(@Nullable Bundle savedInstanceState) { super.onCreate(savedInstanceState); List<Video> videos = getArguments().getParcelableArrayList(VIDEOS_EXTRA); if (videos != null) {mVideos.addAll(videos);} } public static TrailerDialogFragment newInstance(List<Video> videos) { Bundle bundle = new Bundle(); ArrayList<Video> arrayList = new ArrayList<>(); arrayList.addAll(videos); bundle.putParcelableArrayList(VIDEOS_EXTRA, arrayList); TrailerDialogFragment fragment = new TrailerDialogFragment(); fragment.setArguments(bundle); return fragment; } @Nullable @Override public View onCreateView(LayoutInflater inflater, @Nullable ViewGroup container, @Nullable Bundle savedInstanceState) { View rootView = inflater.inflate(R.layout.dialog_stackview, container, false); ButterKnife.bind(this, rootView); mStackView.setAdapter(new VideoAdapter(mVideos)); return rootView; } @NonNull @Override public Dialog onCreateDialog(Bundle savedInstanceState) { Dialog dialog = super.onCreateDialog(savedInstanceState); dialog.getWindow().setBackgroundDrawable(new ColorDrawable(Color.TRANSPARENT)); return dialog; } @Override public void onStart() { super.onStart(); final View decorView = getDialog() .getWindow() .getDecorView(); ObjectAnimator scaleDown = ObjectAnimator.ofPropertyValuesHolder(decorView, PropertyValuesHolder.ofFloat("scaleX", 0.0f, 1.0f), PropertyValuesHolder.ofFloat("scaleY", 0.0f, 1.0f), PropertyValuesHolder.ofFloat("alpha", 0.0f, 1.0f)); scaleDown.setDuration(500); scaleDown.start(); } @Override public void onCancel(DialogInterface dialog) { final View decorView = getDialog() .getWindow() .getDecorView(); ObjectAnimator scaleDown = ObjectAnimator.ofPropertyValuesHolder(decorView, PropertyValuesHolder.ofFloat("scaleX", 1.0f, 0.0f), PropertyValuesHolder.ofFloat("scaleY", 1.0f, 0.0f), PropertyValuesHolder.ofFloat("alpha", 1.0f, 0.0f)); scaleDown.setDuration(500); scaleDown.start(); super.onCancel(dialog); } private static class VideoAdapter extends BaseAdapter { List<Video> mVideos = new ArrayList<>(); public VideoAdapter(List<Video> videos) { mVideos.addAll(videos); } @Override public int getCount() { return mVideos.size(); } @Override public Object getItem(int i) { return mVideos.get(i); } @Override public long getItemId(int i) { return 0; } @Override public View getView(int i, View view, ViewGroup viewGroup) { View resultView; if (view == null) { resultView = LayoutInflater.from(viewGroup.getContext()).inflate( R.layout.dummy_view, viewGroup, false); } else { resultView = view; } TextView tv = (TextView) resultView.findViewById(R.id.trailer_title); ImageView image = (ImageView) resultView.findViewById(R.id.trailer_image); tv.setText(mVideos.get(i).name()); ViewUtil.loadThumbnail(mVideos.get(i).key(), resultView.getContext(), image); return resultView; } } } When I click outside of dialog fragment, the onCancel callback is triggered, but for some reason the animation doesn't play. The DialogFragment simply dissappears. Do you know why this could happen?
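One possible direction (an untested sketch, not a confirmed fix): when the user taps outside, the framework cancels and then dismisses the dialog almost immediately, so an animation started in onCancel may never get a frame to render. A common workaround is to subclass Dialog, override cancel(), run the exit animation first and only call the real cancel once it finishes. The class name AnimatedDialog is an assumption:

public class AnimatedDialog extends Dialog {
    public AnimatedDialog(Context context, int themeResId) {
        super(context, themeResId);
    }

    @Override
    public void cancel() {
        final View decorView = getWindow().getDecorView();
        ObjectAnimator scaleDown = ObjectAnimator.ofPropertyValuesHolder(decorView,
                PropertyValuesHolder.ofFloat("scaleX", 1.0f, 0.0f),
                PropertyValuesHolder.ofFloat("scaleY", 1.0f, 0.0f),
                PropertyValuesHolder.ofFloat("alpha", 1.0f, 0.0f));
        scaleDown.setDuration(500);
        scaleDown.addListener(new AnimatorListenerAdapter() {
            @Override
            public void onAnimationEnd(Animator animation) {
                // Only now let the normal cancel/dismiss chain run (guard against re-entry omitted).
                AnimatedDialog.super.cancel();
            }
        });
        scaleDown.start();
    }
}

In onCreateDialog you would then return new AnimatedDialog(getActivity(), getTheme()) instead of super.onCreateDialog(savedInstanceState), keeping the transparent-background line as before, and drop the animation code from onCancel.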
doc_1379
I'd like them to be able to press ↑ to reuse older commands. For that matter I'd like them to be able to do other basic line editing too. I can get these features by running rlwrap myscript.py but I'd rather not have to run the wrapper script. (yes I could set up an alias but I'd like to encapsulate it in-script if poss) Is there a library to enable this (e.g. provide a history/editing aware version of input()) or would I need to start from scratch? A: I'm grateful to the answers posted as comments. I tried @furas' suggestion, and it seems to be working fine. Here's a snippet to help others who come here from a search. from prompt_toolkit import prompt from prompt_toolkit import PromptSession from prompt_toolkit.history import FileHistory from os.path import expanduser myPromptSession = PromptSession(history = FileHistory(expanduser('~/.myhistory'))) while True: userInput = myPromptSession.prompt('Enter command') print("{}, interesting.".format(userInput)) prompt is the main do-ing function, but you don't get any history unless you use a PromptSession. If you don't use the history option, then history is maintained in memory and lost at program exit. https://python-prompt-toolkit.readthedocs.io/en/master/index.html
doc_1380
In my aspx page added Iframe tag and wants to show the internal site in the iframe tag. I know the username and password for the internal site. When I open my aspx page its asking for user credential popup for internal site, How to provide the user credentials in web.config part or is there any best way to bypass user authentication popup? Any suggestions ?
doc_1381
I had this working fine and it's stopped all of a sudden for no reason. I get a Table has no columns error. I originally got the code from this site - http://mireille.it/example-code-realtime-google-chart-with-mysql-json-ajax/. Not sure if that helps or not. Here's my code: HEADER <script type="text/javascript" src="https://www.google.com/jsapi"></script> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script> <script type="text/javascript"> // Load the Visualization API and the piechart package. google.load('visualization', '1', {'packages':['corechart']}); // Set a callback to run when the Google Visualization API is loaded. google.setOnLoadCallback(drawChart); function drawChart() { var json = $.ajax({ url: 'http://www.domain.com', // make this url point to the data file dataType: 'json', async: false }).responseText; // Create our data table out of JSON data loaded from server. var data = new google.visualization.DataTable(json); var options = { title: 'Active M&J Players by Team Assignment', is3D: 'true', width: 800, height: 600 }; // Instantiate and draw our chart, passing in some options. //do not forget to check ur div ID var chart = new google.visualization.PieChart(document.getElementById('chart_div')); chart.draw(data, options); //setInterval(drawChart, 500 ); } </script> PHP <?php /* $server = the IP address or network name of the server * $userName = the user to log into the database with * $password = the database account password * $databaseName = the name of the database to pull data from * table structure - colum1 is cas: has text/description - column2 is data has the value */ $con = mysql_connect('database', 'username', 'password') or die('Error connecting to server'); mysql_select_db('database', $con); // write your SQL query here (you may use parameters from $_GET or $_POST if you need them) $query = mysql_query('SELECT agelastsept as ageorder,CONCAT("U", agelastsept + 1 , "\'s") as agelastsept,total FROM members_family_view ORDER BY ageorder ASC'); $table = array(); $table['cols'] = array( /* define your DataTable columns here * each column gets its own array * syntax of the arrays is: * label => column label * type => data type of column (string, number, date, datetime, boolean) */ // I assumed your first column is a "string" type // and your second column is a "number" type // but you can change them if they are not array('label' => 'agelastsept', 'type' => 'string'), array('label' => 'total', 'type' => 'number') ); $rows = array(); while($r = mysql_fetch_assoc($query)) { $temp = array(); // each column needs to have data inserted via the $temp array $temp[] = array('v' => $r['agelastsept']); $temp[] = array('v' => (int) $r['total']); // typecast all numbers to the appropriate type (int or float) as needed - otherwise they are input as strings // insert the temp array into $rows $rows[] = array('c' => $temp); } // populate the table with rows of data $table['rows'] = $rows; // encode the table as JSON $jsonTable = json_encode($table); // set up header; first two prevent IE from caching queries header('Cache-Control: no-cache, must-revalidate'); header('Expires: Mon, 26 Jul 1997 05:00:00 GMT'); header('Content-type: application/json'); // return the JSON data echo $jsonTable; ?> I have done the obvious and checked the SQL query and that works fine. Just keep getting the table has no columns error. Thanks, John
doc_1382
$arithmeticOperation is a string taken as input. The program works fine executing first command, but when i run the second one, i get the right output but the child process executing bc remains stuck preventing the child from ending. So in this line father process is blocked : waitpid(pid2,NULL,0); Where do you think the problem may be ? Sorry if i asked the question incorrectly, it's my first one. Thanks. #define SYSCALL(r,c,e) if((r=c)==-1) { perror(e);exit(EXIT_FAILURE);} int main(){ char buf[128]; int pfd[2],err; pid_t pid1,pid2; SYSCALL(err,pipe(pfd),"pipe"); switch (pid1=fork()) { case -1: { perror("fork"); exit(EXIT_FAILURE);} case 0 : { scanf("%s",buf); SYSCALL(err,dup2(pfd[1],1),"dup"); close(pfd[1]); close(pfd[0]); execl("/bin/echo","echo",buf,(char *)NULL); return 1; } } switch (pid2=fork() ){ case -1 : { perror("fork"); exit(EXIT_FAILURE);} case 0 : { SYSCALL(err,dup2(pfd[0],0),"dup"); close(pfd[1]); close(pfd[0]); // execl("/usr/bin/bc","bc",(char *)NULL); execlp("bc","bc",(char *)NULL); return 1; } } printf("waiting . . . \n"); waitpid(pid1,NULL,0); printf("wait\n"); waitpid(pid2,NULL,0); close(pfd[1]); close(pfd[0]); return 0; } So if i digit "1+1" as a input string i get the right output but then the process executing bc never exit A: As I noted in a comment, your parent process must close the file descriptors for the pipe before waiting for bc (and you've agreed that this fixes the problem). This arises because bc has the pipe open for reading, and the parent has the pipe open for writing, and the kernel thinks that the parent could therefore send data to bc. It won't, but it could. You have to be very careful when managing pipes. You carefully avoided the usual problem of not closing enough file descriptors in the children. Rule of thumb: If you dup2() one end of a pipe to standard input or standard output, close both of the original file descriptors returned by pipe() as soon as possible. In particular, you should close them before using any of the exec*() family of functions. The rule also applies if you duplicate the descriptors with either dup() or fcntl() with F_DUPFD I need to extend that to cover parent processes too. If the parent process is not going to communicate with any of its children via the pipe, it must ensure that it closes both ends of the pipe so that its children can receive EOF indications on read (or get SIGPIPE signals or write errors on write), rather than blocking indefinitely. The parent should normally close at least one end of the pipe — it would be extremely unusual for a program to read and write on both ends of a single pipe.
doc_1383
When I say "average", I do not mean the basic average where I add the 2 vectors and divide by two - but rather, a directional average. For example: V1 = {1, 0} V2 = {-1, 0} AverageVector = {0, 1} or {0, -1} I suppose what I'm looking for is more in the realm of angles. If angle1 = 0, and angle2 = 180, then the average angle is 90, perpendicular to both. If angle1 = 90 and angle2 = 110, then the average angle is 100, etc. It's important that the solution to find the "average" vector does not use vector to angle conversions (like atan2, sin, cos). I'm looking for a way to find the "average" vector using vector math alone. Note: all vectors are 2D. Note on vote to close: The question which as linked as "already answered" does not answer this question. As stated, I must accomplish this without converting the vectors to angles using sin, cos or atan2. The linked question only refers to solutions using such conversions. A: Given v0 = (x0, y0) and v1 = (x1, y1), you are looking for v = (x, y) such that the angle between v0 and v is equal to the angle between v and v1. As you correctly observe, there will always be two solutions. We know that the dot product of two vectors is equal to the product of their magnitudes and the angle separating them - i.e., we know that v0 . v = x.x0 + y.y0 = |v||v0|cos(z) and v1 . v = x.x1 + y.y1 = |v||v1|cos(z). Note that the angle, z, remains the same; and we might as well take a unit vector for v, so |v| = 1. Now, we get two equations: x.x0 + y.y0 = |v0|cos(z) x.x1 + y.y1 = |v1|cos(z) We can solve the first for cos(z) and substitute in the second: x.x1 + y.y1 = (|v1|/|v0|)(x.x0 + y.y0) Remember, we can take a unit vector for v, so we know that x*x + y*y = 1. Solve the above equation for x in terms of y (or y in terms of x), plug it into x*x + y*y = 1, and then solve for your variable using the quadratic equation. Note: the quadratic equation yields zero, one or two solutions - but you should always fall into the two solution case. Notice: our solution requires knowledge of vector/angle relationships, but the program need not ever perform any conversions. While we use cos(z) in deriving the mathematics for our program, we end up with an expression that relies only on the vectors' components. UPDATE based on comments To figure out which vector is "closest" to the original two, it's useful to notice the following: the two vectors you get from the above expression will be pointing in opposite directions. In other words, there will be 180 degrees separating them. So, suppose we found "average" vectors A and B for original vectors X and Y. Which of A or B is "closer" to X and Y? Well, we know that Ax * Xx + Ay * Xy = |A||X|cos(Az) Bx * Xx + By * Xy = |B||X|cos(Bz) Now, cosine is largest when the angle is 0, and gets bigger when the angle increases (in absolute value). Since we want the angle closer to 0, we want the larger cosine. Solve each of the above equations for cos(Az) and cos(Bz), respectively; evaluate (all other quantities are known) and then you know the vector whose cosine is largest is "closest" to vector X (hence also to vector Y. Exercise - why must this be true?) Of course, if cos(Az) = cos(Bz), then A and B are "equidistant" from vector X (and also from Y) - in such cases, X and Y will be parallel, A and B will be parallel, and A/B will be parallel to X/Y.
doc_1384
<?php include_once(session_start()); $first_name = $_POST['first_name']; $last_name = $_POST['last_name']; $email = $_POST['email'] ; $_SESSION['first_name'] = $first_name; $_SESSION['last_name'] = $last_name; $_SESSION['email'] = $email; if($_SERVER['REQUEST_METHOD'] == 'POST') { // redirect back and display error if (empty($_POST['email'])) { $session_error= 'Please enter your email'; } elseif ($_POST['email']){ if (!filter_var($email, FILTER_VALIDATE_EMAIL)) { $session_error= 'Invalid Email Format'; } }else{ $email = test_input($_POST['email']); } if (empty($_POST['last_name'])) { $session_error = 'Last Name should be filled'; } elseif ($_POST['last_name']) { if (!preg_match('/^[a-zA-Z ]+$/', $last_name)) { $session_error = 'last Name can only contain letters and white spaces'; } } else { $last_name = test_input($_POST['last_name']); } if (empty($_POST['first_name'])) { $session_error = 'First Name should be filled'; } elseif ($_POST['first_name']) { if (!preg_match('/^[a-zA-Z ]+$/', $first_name)) { $session_error = 'First Name can only contain letters and white spaces'; } } else { $first_name = test_input($_POST['first_name']); } $_SESSION["error"] = $session_error; header("Location: register.php "); }else{ // count all users $allUsers = scandir("db/users/"); $countAllUsers = count($allUsers); $newUserId = ($countAllUsers -2) +1; // assign ID to the new user $userObject =[ 'id' => $newUserId, 'first_name' => $first_name, 'last_name' => $last_name, 'email' => $email, 'password' => password_hash($password, PASSWORD_DEFAULT ), // password hashing ]; // check if user already exists // assign the next ID to the new user // count($users =>2, next should then be ID 3 for($counter = 0; $counter < $countAllUsers; $counter++) { $currentUser = $allUsers[$counter]; if($currentUser == $email . ".json"){ $_SESSION["error"] = "User already exists " . $first_name; header("Location: register.php"); die(); } } header("Location: login.php"); } function test_input($data) { $data = trim($data); $data = stripslashes($data); } My register.php: <body> <?php include_once('lib/header.php'); if(isset($_SESSION['loggedIn']) && !empty ($_SESSION['loggedIn'])){ // redirect to dashboard header("Location: dashboard.php"); } ?> <h3><strong>Register</strong></h3> <form method="POST" action="processRegister.php"> <p> <?php if(isset($_SESSION['error']) && !empty($_SESSION['error'])){ echo "<span style='color:red'> " . $_SESSION['error'] . "</span>"; session_destroy(); } ?> </p> <p> <label>First Name</label><br/> <input <?php if(isset($_SESSION['first_name'])) { echo "value=" . $_SESSION['first_name']; } ?> type="text"name="first_name" placeholder="First Name" /></p> <p> <label>Last Name</label><br/> <input <?php if(isset($_SESSION['last_name'])) { echo "value=" . $_SESSION['last_name']; } ?> type="text"name="last_name" placeholder="Last Name" /></p> <p> <label>Email</label><br/> <input <?php if(isset($_SESSION['email'])) { echo "value=" . $_SESSION['email']; } ?> type="text"name="email" placeholder="Email" /> </p> <p> <label>Password</label><br/> <input type="password" name="password" placeholder="Password" /> </p> <p> <button type="submit">Register</button> </p> </form> <?php include('lib/footer.php'); ?> </body> </html> A: You have to add a name attribut value to your submit input element. For example: <input type="submit" name="submit-register" value="Register"> And to verif in your PHP script if submit has been clicked: if (isset($_POST['submit-register'])) { // Do you stuff here }
doc_1385
Here is the html: <h2>Information</h2> <div> <span class="dark_text">Type:</span> <a href="https://myanimelist.net/topanime.php?type=movie">Movie</a> </div> <div class="spaceit"> <span class="dark_text">Episodes:</span> 1 </div> <div> <span class="dark_text">Status:</span> Finished Airing </div> All of this is also contained within another div tag but I only included the portion of the html that I want to scrape. To clarify, I want to obtain the text 'Finished Airing' contained within 'Status'. Here's the code I have so far but I'm not really sure if this is the best approach or where to go from here: Page_soup = soup(Page_html, "html.parser") extra_info = Page_soup.find('td', attrs={'class': 'borderClass'}) span_html = extra_info.select('span') for i in range(len(span_html)): if 'Status:' in span_html[i].getText(): Any help would be appreciated, thanks! A: To get the text next to the <span> with "Status:", you can use: from bs4 import BeautifulSoup html_doc = """ <h2>Information</h2> <div> <span class="dark_text">Type:</span> <a href="https://myanimelist.net/topanime.php?type=movie">Movie</a> </div> <div class="spaceit"> <span class="dark_text">Episodes:</span> 1 </div> <div> <span class="dark_text">Status:</span> Finished Airing </div> """ soup = BeautifulSoup(html_doc, "html.parser") txt = soup.select_one('span:-soup-contains("Status:")').find_next_sibling(text=True) print(txt.strip()) Prints: Finished Airing Or: txt = soup.find("span", text="Status:").find_next_sibling(text=True) print(txt.strip()) A: Another solution (maybe): f = soup.find_all('span',attrs={'class':'dark_text'}) for i in f: if i.text == 'Status:': print(i.parent.text) And change 'Status:' to whatever other thing you want to find. Hope I helped!
doc_1386
A: You can use driver.switchTo().window("windowName"); to select the correct window before calling driver.close(). (If there are no windows left, the browser will close.) There is more information here A: you can do something like this 1.Before opening child windows (By clicking links,etc) parentWindowHandle = driver.getWindowHandle(); 2.At each new window public String getChildHandle(WebDriver driver,String parentWindowHandle) { String childWindowHandle = null; Set<String> allWindowHandles = driver.getWindowHandles(); Iterator itr = allWindowHandles.iterator(); while(itr.hasNext()) { String temp=(String) itr.next(); if(temp.equalsIgnoreCase(parentWindowHandle)) // you can compare with any handle or you can compare with all existing window handles { System.out.println("Same as parent handle-> "+temp); } else { childWindowHandle = temp; } } return childWindowHandle; } 3.Close any unwanted window driver.switchTo().window(parentWindowHandle/childWindow1/childWindow2); driver.close();
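A compact variant of the same idea, as a hedged sketch assuming the Java bindings and a driver variable like the ones above: remember the parent handle, then loop over getWindowHandles(), close everything that isn't the parent, and switch back at the end:

String parentHandle = driver.getWindowHandle();   // capture before the child windows open

// ... actions that spawn extra windows ...

for (String handle : driver.getWindowHandles()) {
    if (!handle.equals(parentHandle)) {
        driver.switchTo().window(handle);         // focus the unwanted window
        driver.close();                           // close just that window
    }
}
driver.switchTo().window(parentHandle);           // continue in the original window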
doc_1387
I tried using the buffer value but it is not saving the output neatly. Code: import io buffer = io.StringIO() df.info(buf=buffer) s = buffer.getvalue() with open("df_info.txt", "w", encoding="utf-8") as f: f.write(s) Result: Sample output: column | non-null count | dtype. The output should be split into those three columns. How can I do this? A: Use splitlines to get a list of lines, then use indexing to remove the first 5 values and the last 2, and split by spaces with the DataFrame constructor: import io buffer = io.StringIO() df.info(buf=buffer) lines = buffer.getvalue().splitlines() df = (pd.DataFrame([x.split() for x in lines[5:-2]], columns=lines[3].split()) .drop('Count',axis=1) .rename(columns={'Non-Null':'Non-Null Count'})) print (df)
doc_1388
My routes are these: {path: '', component: IntegerComponent}, {path: 'int/:id', component: ActionComponent} After opening (/) I see the integer data from IntegerComponent: <li *ngFor="let int of ints" [routerLink]="['/int', int]"> <p>{{int | json}}</p> </li> And after a click I do see the int number + 1, but I need to see it together with the integer data, on one page, without reloading data from the server; only the new part should be added. A: I found the answer by myself!! You need to use children for routing and add <router-outlet></router-outlet> to the int template. Then, if the user clicks an int, the result of the action will show in this outlet, and the main outlet will be kept.
doc_1389
I know that for each query, the server makes a snapshot of db state so that the query behaves consistently. Does it include triggers that are called in response to this query? Or is there a new snapshot created for each query called from within a trigger? A: Triggers work in the same transaction as the outer query, it will see the same snapshot.
doc_1390
\documentclass[12pt]{article} \usepackage[doublespacing]{setspace} \usepackage[left=0.95in,top=1in,right=1in,bottom=0.75in]{geometry} \usepackage{background} \pagenumbering{gobble} \SetBgScale{1} \SetBgColor{black} \SetBgAngle{0} \SetBgHshift{0pt} \SetBgVshift{0mm} \SetBgContents{ \hspace{1in} \rule{1pt}{\paperheight} % right first line \rule[0.75in]{6.5in}{1pt} % bottom line \rule{1pt}{\paperheight} } \setlength{\marginparwidth}{3.0in} \begin{document} \reversemarginpar{\vspace{1em} \begin{spacing}{1.6} %space vertical between numbers \noindent Sam \\ Rams\\ Tamim \\ Smartcoi \\ 9d5 \\ lousy99\\ \end{spacing}} \end{document} How do I get the characters on the words to align right and end at the line? Currently it renders like this: I am trying to get the end of the words to line up with the line. I tried \begin{flushright} but it moved everything out of place A: One possible approach is to use a tabular: \documentclass[12pt]{article} \usepackage[doublespacing]{setspace} \usepackage[left=1.5in,top=1in,right=0.5in,bottom=0.75in,showframe]{geometry} \usepackage{lipsum} \pagenumbering{gobble} \begin{document} \reversemarginpar% \marginpar{% \begin{tabular}{@{}r@{}} Sam \\ Rams\\ Tamim \\ Smartcoi \\ 9d5 \\ lousy99\\ \end{tabular}% } \lipsum \end{document}
doc_1391
String, String, Integer A typical result example is: "Rule 1", "RED", 1 "Rule 2", "AMBER", 2 "Rule 3", "GREEN", 1 "Rule 4", "INFO", 3 The first element is a key. So I am thinking of using a Map structure. The last field is an integer specified via an enum. I want to be able to pick from this list of results the rule with the maximum priority (which is the last field). What is the best way to structure this in terms of using the Java collections library? Is the Map the best? A: I want to be able to pick from this list of results the rule with the maximum priority (which is the last field). You could package the data into a class that is comparable based on the last field, and then use a PriorityQueue. class Data implements Comparable<Data> { private String rule; private String other; private int priority; ... @Override public int compareTo(Data other) { return Integer.compare(priority, other.priority); } } Now, you can use a PriorityQueue<Data>. A: Queue<Result> resultList = new PriorityQueue<Result>(); public class Result implements Comparable<Result>{ private String ruleText; private String text; // 2. value private int priority; @Override public int compareTo(Result result) { return new Integer(priority).compareTo(result.getPriority()); } public String getRuleText() { return ruleText; } public void setRuleText(String ruleText) { this.ruleText = ruleText; } public String getText() { return text; } public void setText(String text) { this.text = text; } public int getPriority() { return priority; } public void setPriority(int priority) { this.priority = priority; } } A: Yes Map are the best to store if you have something termed as key in your data collection. For prioritization, its best to use PriorityQueue. A: The best way would be to create an Object implementing comparable. class Rule implements Comparable<Rule>{ String firstPart; String secondPart; int priority; //constructor //getters and setters @Override public int compareTo(Rule other){ return Integer.compare(this.priority, other.priority); } } Then you just put them all in a TreeSet<Rule> and iterate on it, they will come out sorted. Or you can store them in list and call Collections.sort(list).
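A small usage sketch to go with the answers above, assuming Java 8+ and the Rule class from the last answer, with a (firstPart, secondPart, priority) constructor and a getPriority() getter as indicated there. Note that a natural-ordering PriorityQueue surfaces the lowest value first, so to pull the maximum-priority rule you either reverse the comparator or ask for the max directly (imports from java.util):

List<Rule> rules = Arrays.asList(
        new Rule("Rule 1", "RED", 1),
        new Rule("Rule 2", "AMBER", 2),
        new Rule("Rule 4", "INFO", 3));

// Option 1: one-shot maximum over the collection.
Rule top = Collections.max(rules, Comparator.comparingInt(Rule::getPriority));

// Option 2: a queue that always exposes the highest-priority rule first.
Queue<Rule> queue = new PriorityQueue<>(Comparator.comparingInt(Rule::getPriority).reversed());
queue.addAll(rules);
Rule first = queue.peek();   // "Rule 4", priority 3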
doc_1392
But the way I did keeps the icon at the top Below image is the way I did This is my icomoon style.css @font-face { font-family: 'icomoon'; src:url('fonts/icomoon.eot?ktnun7'); src:url('fonts/icomoon.eot?#iefixktnun7') format('embedded-opentype'), url('fonts/icomoon.woff?ktnun7') format('woff'), url('fonts/icomoon.ttf?ktnun7') format('truetype'), url('fonts/icomoon.svg?ktnun7#icomoon') format('svg'); font-weight: normal; font-style: normal; } [class^="icon-"], [class*=" icon-"] { font-family: 'icomoon'; speak: none; font-style: normal; font-weight: normal; font-variant: normal; text-transform: none; line-height: 1; /* Better Font Rendering =========== */ -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; } .icon-search:before { content: "\e600"; } .icon-users:before { content: "\e601"; } .icon-lock:before { content: "\e602"; } .icon-cogs:before { content: "\e603"; } .icon-bubbles:before { content: "\e604"; } .icon-pawn:before { content: "\e605"; } .icon-box-add:before { content: "\e606"; } .icon-signup:before { content: "\e607"; } .icon-equalizer:before { content: "\e608"; } .icon-bars:before { content: "\e609"; } .icon-disk:before { content: "\e60a"; } .icon-user:before { content: "\e60b"; } .icon-stackoverflow:before { content: "\e60d"; } .icon-store:before { content: "\e60e"; } .icon-user2:before { content: "\e60c"; } .icon-trash:before { content: "\e60f"; } .icon-uniE610:before { content: "\e610"; font-size:50px; color:black; vertical-align:middle; } and this is my html code <div class="container"> <div class="row"> <div class="col-md-6"> <img src="http://www.computerhope.com/logo.gif" alt="Logo" class="round"/> user </div> <div class="col-md-6 ">Recently purchased <div id="slideshow"> <span class="images"> <div class="col-md-3"><div class="box"> <img src="http://lorempixel.com/150/100/abstract" /> <span class="caption simple-caption"> <p>Review</p> </span> </div></div> <div class="col-md-3"><div class="box"> <img src="http://lorempixel.com/150/100/food" /> <span class="caption simple-caption"> <p>Review</p> </span> </div></div> </span> <a class="next icon-uniE610" href="#">Next</a> </div> </div> </div> </div> This is the fiddle A: Add the following CSS, .next { height: 100%; display: table-cell; vertical-align: middle; } #slideshow{ display:table; } Here is a DEMO
doc_1393
* *The first one for the border that runs for an infinite amount of times for a duration of 2 seconds. *The second one for the actual element that also runs for an infinite amount of times every 2 seconds for a duration of 0.1 seconds. Basically I want the element to bump (i.e scale 1.05) at the beginning of each border scale. To do that I'm delaying the element animation until de border animation runs each cycle. I am using this trick https://css-tricks.com/css-keyframe-animation-delay-iterations/ to help with the element bump delay but for a reason I cannot understand, the timing on when the bump happens keeps constantly changing. (if you pay attention for around 1 minute you can notice this). I'm interested in knowing why this is happens or if there is a better way of doing what I want. .container { position: relative; width: 100px; height: 100px; background: pink; border-radius: 50%; margin: 50px auto; animation: containerAnimation 2.1s infinite; } .animated-border { position: absolute; left: 0; width: 97px; height: 97px; border: 2px solid; border-radius: 50%; animation: borderAnimation 2s infinite; } @keyframes containerAnimation { 0% { transform: scale3d(1, 1, 1); } 5% { transform: scale3d(1.05, 1.05, 1); } 100% { transform: scale3d(1.05, 1.05, 1); } } @keyframes borderAnimation { 0% { transform: scale3d(1, 1, 1); opacity: 1; } 100% { transform: scale3d(2, 2, 1); opacity: 0; } } <div class="container"> <div class="animated-border"></div> </div> A: You have one animation that runs evert 2 seconds and another one that runs for 2.1 seconds, of course they won't sync. What you can do is set the delay in the keyframes, so instead of 0% it will start in a different value. For example: .container { position: relative; width: 100px; height: 100px; background: pink; border-radius: 50%; margin: 50px auto; animation: containerAnimation 2s infinite; } .animated-border { position: absolute; left: 0; width: 97px; height: 97px; border: 2px solid; border-radius: 50%; animation: borderAnimation 2s infinite; } @keyframes containerAnimation { 0% { transform: scale3d(0.9, 0.9, 1); // I've changed it to be more noticable. } 5% { transform: scale3d(1.05, 1.05, 1); } 100% { transform: scale3d(1.05, 1.05, 1); } } @keyframes borderAnimation { 5% { // Start from here instead from 0%. transform: scale3d(1, 1, 1); opacity: 1; } 100% { transform: scale3d(2, 2, 1); opacity: 0; } } <div class="container"> <div class="animated-border"></div> </div> Since the animations now are of the same length, you can match the steps by the percent values. Here the border animation starts right after the circle finished expanding.
doc_1394
FROM cityflowproject/cityflow
WORKDIR /usr/TrafficMannager

RUN apt-get update && apt-get upgrade -y && apt-get clean
RUN pip install --upgrade pip
RUN pip install torch

COPY . .

CMD chmod u+x scripts/container_instructions.sh;\
    ./scripts/container_instructions.sh pythonfile='main.py' model="DefaultModel" step=10 epochs=10

When Docker builds the image, it fails at pip install torch. I've worked on a project where these lines and this Dockerfile worked. All of a sudden it has stopped working (maybe a problem with the Docker daemon).
Edit (error):
> [5/6] RUN pip install torch:
#8 3.464 Collecting torch
#8 3.896   Downloading torch-1.10.2-cp36-cp36m-manylinux1_x86_64.whl (881.9 MB)
#8 114.5 Killed
------
executor failed running [/bin/sh -c pip install torch]: exit code: 137

A: Did you try to increase the timeout for pip, with something like below?
pip install --default-timeout=900 torch
I had a similar issue and increased the timeout to allow enough time for the installation of torch. Got the inspiration from: error "socket.timeout: The read operation timed out" while installing a python module
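For reference, here is a sketch of how the suggested flag could be applied inside the Dockerfile itself, reusing the base image and paths from the question. Exit code 137 means the process was killed (usually by the out-of-memory killer or a container memory limit), so alongside the longer timeout it can also help to pass pip's --no-cache-dir flag, which avoids caching the roughly 880 MB wheel during the install; treat this as a hedged suggestion rather than a guaranteed fix.

FROM cityflowproject/cityflow
WORKDIR /usr/TrafficMannager

RUN apt-get update && apt-get upgrade -y && apt-get clean
RUN pip install --upgrade pip
# Longer network timeout (per the answer) and no wheel cache to reduce memory/disk pressure.
RUN pip install --default-timeout=900 --no-cache-dir torch

COPY . .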
doc_1395
Here's the code:

@Override
public boolean onOptionsItemSelected(MenuItem item) {
    switch (item.getItemId()) {
    case R.id.icon:
        Intent intent = new Intent(this, Main.class);
        startActivity(intent);
    case R.id.help:
        AlertDialog.Builder alertbox = new AlertDialog.Builder(this);
        alertbox.setMessage("This is the alertbox!");
        alertbox.setNeutralButton("Ok", new DialogInterface.OnClickListener() {
            public void onClick(DialogInterface arg0, int arg1) {
                // the button was clicked
            }
        });
        // show it
        alertbox.show();
    }
    return true;
}

A: I found the solution:

@Override
public boolean onOptionsItemSelected(MenuItem item) {
    switch (item.getItemId()) {
    case R.id.icon:
        Intent intent = new Intent(this, Main.class);
        startActivity(intent);
        return true;
    case R.id.help:
        AlertDialog.Builder alertbox = new AlertDialog.Builder(this);
        alertbox.setMessage("This is the box where the description will go \n\n text text text text!");
        alertbox.setNeutralButton("Ok", new DialogInterface.OnClickListener() {
            public void onClick(DialogInterface arg0, int arg1) {
            }
        });
        // show it
        alertbox.show();
    }
    return true;
}

A: Try to return true instead of false. See the doc
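The underlying issue in the original snippet is Java switch fall-through: without a return or break after startActivity(intent), execution continues into the R.id.help case, so the dialog is shown as well. An equivalent sketch of the handler using break, and delegating unknown items to the superclass (the usual Android idiom), could look like the following; the menu ids are the ones from the question.

@Override
public boolean onOptionsItemSelected(MenuItem item) {
    switch (item.getItemId()) {
    case R.id.icon:
        startActivity(new Intent(this, Main.class));
        break;  // stop here so the help case is not executed as well
    case R.id.help:
        new AlertDialog.Builder(this)
                .setMessage("This is the alertbox!")
                .setNeutralButton("Ok", null)  // a null listener simply dismisses the dialog
                .show();
        break;
    default:
        return super.onOptionsItemSelected(item);
    }
    return true;
}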
doc_1396
Based on the value of the column cluster, I would like to create a new dataframe whose columns look like this: var1_clus0, var1_clus1, ..., var3_clus2.
I have a huge dataset, so I am trying to do this in a nested for loop; it works for the first value of the cluster column, but every other column comes out as NaN. Below is my script:

data_trans = pd.DataFrame()
for i in np.arange(0, len(varlist), 1):
    for j in np.arange(0, 6, 1):
        data_trans[str(varlist[i]) + str("_clus_") + str(j)] = data[(data.segment_hc_print == j)][varlist[i]]

The code runs without any error and generates the columns as desired, but it only fills the columns for the first value of the categorical column; for all other categorical values it produces NaN. What am I doing wrong and how should I fix this? Given the example dataset I gave, the following is the desired output: sample output

A: Since you have a 2D data set and varX and clusX may have multiple matches, you have to decide what you want to do with those matches. I assume you want to add them up. If so, you're looking at either a dataframe with a header row and a single data row, or just a series with the index being your varX_clusX. The following code will do it:

# Setup
import pandas as pd
import numpy as np

df = pd.DataFrame({
    'var1' : np.random.randint(0, 1000000, 1000000),
    'var2' : np.random.randint(0, 1000000, 1000000),
    'var3' : np.random.randint(0, 1000000, 1000000),
    'cluster' : np.random.randint(0, 100, 1000000)
})

# Processing
# Set up the cluster column for string formatting.
df['cluster'] = 'clus' + df['cluster'].apply(str)

# Un-pivot the cluster column (I'm sure there's a better term)
df = df.set_index('cluster').stack().reset_index()

# Group by the unique combination of cluster / var and sum the values.
# This will generate a column named 0 - which I changed to 'values' just for readability.
df = df.groupby(['cluster','level_1']).sum().reset_index().rename(columns = {0 : 'values'})

# Create the formatted header you're looking for
df['piv'] = df['level_1'] + '_' + df['cluster']

# Final pivot to get the values to align with the new headers
df = df.pivot(columns = 'piv', values = 'values').sum()

Timed this on my machine - roughly 1s for a million records. Not sure how fast you need it.
If you don't want to add all the values and there's an arbitrary index, you can simplify:

df['cluster'] = 'clus' + df['cluster'].apply(str)
df = df.set_index('cluster').stack().reset_index()
df['piv'] = df['level_1'] + '_' + df['cluster']
df = df.pivot(columns = 'piv', values = 0).fillna(0)

This will give you a dataframe the length of your initial dataset x the number of variables and a ton of zeroes.
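As a complementary illustration of the same reshaping, here is a compact melt/groupby sketch that produces one summed value per var/cluster combination, mirroring the answer's aggregated result. The column names come from the question; the tiny numbers are made up purely for the demo.

import pandas as pd

# Toy data only; the real question uses a much larger frame with the same columns.
df = pd.DataFrame({
    'var1': [12, 7, 5],
    'var2': [3, 8, 1],
    'var3': [9, 2, 4],
    'cluster': [0, 0, 1],
})

# Long format: one row per (cluster, variable, value) triple.
long = df.melt(id_vars='cluster', var_name='var', value_name='value')

# Build the combined column label and sum per label.
summed = (long
          .assign(col=long['var'] + '_clus' + long['cluster'].astype(str))
          .groupby('col')['value']
          .sum())

print(summed)
# col
# var1_clus0    19
# var1_clus1     5
# var2_clus0    11
# var2_clus1     1
# var3_clus0    11
# var3_clus1     4
# Name: value, dtype: int64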
doc_1397
time country browser num_visits ======================================== 0 USA Chrome 12 0 USA IE 7 5 France IE 5 As you can see each 5 seconds I insert multiple rows (one per each dimensions combination). In order to reduce the number of rows need to be scanned in queries, I am thinking to have multiple tables with the above schema based on their resolution: 5SecondResolution, 30SecondResolution, 5MinResolution, ..., 1HourResolution. Now when the user asks about the last day I will go to the hour resolution table which is smaller than the 5 sec resolution table (although I could have used that one too - it's just more rows to scan). Now what if the hour resolution table has data on hours 0,1,2,3,... but users asks to see hourly trend from 1:59 to 8:59. In order to get data for the 1:59-2:59 period I could do multiple queries to the different resolutions tables so I get 1:59:2:00 from 1MinResolution, 2:00-2:30 from 30MinResolution and etc. AFAIU I have traded one query to a huge table (that has many relevant rows to scan) with multiple queries to medium tables + combine results on client side. Does this sound like a good optimization? Any other considerations on this? A: Now what if the hour resolution table has data on hours 0,1,2,3,... but users asks to see hourly trend from 1:59 to 8:59. In order to get data for the 1:59-2:59 period I could do multiple queries to the different resolutions tables so I get 1:59:2:00 from 1MinResolution, 2:00-2:30 from 30MinResolution and etc. You can't do that if you want your results to be accurate. Imagine if they're asking for one hour resolution from 01:30 to 04:30. You're imagining that you'd get the first and last half hour from the 5 second (or 1 minute) res table, then the rest from the one hour table. The problem is that the one-hour table is offset by half an hour, so the answers won't actually be correct; each hour will be from 2:00 to 3:00, etc, when the user wants 2:30 to 3:30. It's an even more serious problem as you move to coarser resolutions. So: This is a perfectly reasonable optimisation technique, but only if you limit your users' search start precision to the resolution of the aggregated table. If they want one hour resolution, force them to pick 1:00, 2:00, etc and disallow setting minutes. If they want 5 min resolution, make them pick 1:00, 1:05, 1:10, ... and so on. You don't have to limit the end precision the same way, since an incomplete ending interval won't affect data prior to the end and can easily be marked as incomplete when displayed. "Current day to date", "Hour so far", etc. If you limit the start precision you not only give them correct results but greatly simplify the query. If you limit the end precision too then your query is purely against the aggregated table, but if you want "to date" data it's easy enough to write something like: SELECT blah, mytimestamp FROM mydata_1hour WHERE mytimestamp BETWEEN current_date + INTERVAL '1' HOUR AND current_date + INTERVAL '4' HOUR UNION ALL SELECT sum(blah), current_date + INTERVAL '5' HOUR FROM mydata_5second WHERE mytimestamp BETWEEN current_date + INTERVAL '4' HOUR AND current_date + INTERVAL '5' HOUR; ... or even use several levels of union to satisfy requests for coarser resolutions. A: You could use inheritance/partition. One resolution master table and many hourly resolution children tables ( and, perhaps, many minutes and seconds resolution children tables). 
Thus you only have to select from the master table, and let the check constraint on each child table decide which rows belong where. Of course you have to add a trigger function to route inserts into the appropriate child tables. The trade-off is complexity at insert time versus complexity at display time.
See also: PostgreSQL - View or Partitioning?
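A minimal sketch of what that inheritance-style setup could look like in PostgreSQL follows. All table, column and function names here are invented for illustration, and the resolution column is an assumption added so the CHECK constraints have something to discriminate on; newer PostgreSQL versions can do much of this with declarative PARTITION BY instead of a hand-written trigger.

-- Master table; queries select from here.
CREATE TABLE visits (
    ts          timestamptz NOT NULL,
    resolution  text        NOT NULL,   -- '5sec', '1hour', ...
    country     text,
    browser     text,
    num_visits  bigint
);

-- One child per resolution; the CHECK constraint lets the planner skip
-- children that cannot contain the requested rows (constraint exclusion).
CREATE TABLE visits_5sec  (CHECK (resolution = '5sec'))  INHERITS (visits);
CREATE TABLE visits_1hour (CHECK (resolution = '1hour')) INHERITS (visits);

-- Route inserts on the master into the matching child.
CREATE OR REPLACE FUNCTION visits_insert_router() RETURNS trigger AS $$
BEGIN
    IF NEW.resolution = '5sec' THEN
        INSERT INTO visits_5sec VALUES (NEW.*);
    ELSIF NEW.resolution = '1hour' THEN
        INSERT INTO visits_1hour VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'unknown resolution %', NEW.resolution;
    END IF;
    RETURN NULL;  -- the row is already stored in a child, so skip the master
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER visits_insert_trigger
    BEFORE INSERT ON visits
    FOR EACH ROW EXECUTE PROCEDURE visits_insert_router();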
doc_1398
It is not unusual to use class names as selectors with jQuery. I normally use a class that is only ever used to select elements and never actually define that class anywhere. I am assuming most browsers would look for the CSS class definition; could that lookup somehow be short-circuited if the style were defined? It is my understanding that the CSS styles are compiled together before the page elements are rendered. This is why it is important to keep all CSS definitions together and not split them up with script tags, since intermingling them with other definitions causes most browsers to recompile the CSS each time. The implications of this can be severe enough to allow the page to render before the style is applied. However, in practice, I would guess the performance difference between defining or not defining a CSS class is negligible, if any.

A: The easiest way to think about this is to remember that CSS is applied to the HTML, not the HTML calling the CSS selectors. When the browser parses the CSS file, it reads a selector right to left to see where to apply the styles. For example, .foo ul { } would check for all ul tags on a page, and then check to see whether they are contained within .foo. Because it is parsed in this manner, extra, unused classes in your HTML don't matter. It is only checking for the ids and classes specified in the CSS.

A: As the class attribute is an HTML attribute, it can be inserted with no problems if it isn't referenced by the CSS or JavaScript. It can sit there quite happily on its own with no side effects. It doesn't need to be selected by CSS; it can be used by JS alone. This article might help you understand more: class (HTML attribute) @ sitepoint.com
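For illustration, here is a tiny self-contained sketch of the pattern the question describes: a hook class used only by jQuery, with no matching rule in any stylesheet. The "js-save" class name is made up for this example; jQuery matches it directly against the DOM, so the absence of a .js-save CSS rule has no effect at all.

<!-- Sketch only: "js-save" exists purely as a JavaScript hook. -->
<button class="btn js-save">Save</button>

<script src="https://code.jquery.com/jquery-3.7.1.min.js"></script>
<script>
  $('.js-save').on('click', function () {
    console.log('save clicked');
  });
</script>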
doc_1399
A: Where are the implementation details or built in classes located at in Java? The implementation details are ... everything. The entire Java JRE or JDK installation is implementation details. You could say ... everything in the OpenJDK source tree is implementation details. The builtin (Java) classes that comprise the Java SE class libraries are in different places depending on the Java version. * *For Java 8 and earlier, the compiled classes for the Java SE class libraries are in the "rt.jar" file. Additional classes (for example the JDK tools) are in other JAR files. *For Java 9 and later, the compiled classes are stored in the "jmods" directory. These are no longer JAR files. Note that "rt.jar" (+ other JARs) and the "jmods" directory are not the complete implementation: * *Some Java SE classes have native methods which are implemented in native (C or C++) code. Classloading, threads and low-level I/O support are examples of this. *Much of the Java runtime implementation does not have any direct relation with any Java classes. For example, the bytecode interpreter, the JIT compiler and the garbage collector, along with the various agent and monitoring hooks are implemented in C / C++ and (typically) part of the main java executable. I have heard that the compiler will hold the implementation details of Java. That is not correct. The compiler has a small amount of built-in knowledge of the signatures for some classes in the java.lang package. However, in most cases a Java compiler loads ".class" files from "rt.jar" or "jmods" or wherever. I originally thought that the java standard library would have all the details to run java and be able to write java code since it comes with the java.lang package, which carries classes that are fundamental to the design of the Java programming language. That is certainly not true. Besides you are conflating lots of things. Firstly, you are conflating design and implementation. They are different things. Really. * *The design is documents: specifications, javadocs and so on. *The implementation is code, written in (at least) 3 programming languages. Secondly, you are conflating the design of the Java language with the design of the Java runtime system. In reality, the design has many parts: * *The Java language ... specified in the JLS. *The Java Virtual Machine ... specified in the JVMS. *The Java SE class libraries ... specified in the Java SE javadocs. *Various other aspects of the design ... specified in other documents. It is worth noting that the respective specifications are separable to a significant degree: * *You can implement the Java language without the JVM spec; e.g. Android used Dalvik and then ART. *You can implement other programming languages on top of the JVM spec; e.g. by writing a compiler that emits JVM bytecodes. *You could implement the Java language with class libraries that have nothing in common with the Java SE libraries. Finally, as explained above, most of the implementation of the JLS and JVMS is in native code and (to a lesser extent) Java classes that are not formally1 part of the Java SE class library. Would it be safe to say that the java.lang basically provides the barebones of the java language? No. See above. The design of the Java language (its syntax and semantics) are in the JLS. The implementation (which maps the design to something that works) comprises the Java bytecode compiler, bytecode interpreter, JIT compiler and so on. 
If you really want to understand how this all works, you should start by downloading the OpenJDK source tree. All of the code is in there ... 1 - I am talking here about "internal" classes, and classes that (in Java 8 and earlier) were part of "tools.jar". For example, the Java source code for the javac bytecode compiler.
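As a small illustration of the split described above, the following sketch (nothing more than that) asks the runtime where a couple of built-in classes live. On Java 9 and later, core classes report their module, their CodeSource is typically null because they are loaded from the runtime image rather than from a JAR, and java.home points at the installation root, which in a full JDK is the directory containing "jmods".

public class WhereIsTheRuntime {
    public static void main(String[] args) {
        // Module of a core class: prints "module java.base" on Java 9+.
        System.out.println(Object.class.getModule());

        // Core classes are loaded by the bootstrap loader, so this prints null.
        System.out.println(String.class.getClassLoader());

        // Classes from the runtime image usually report no CodeSource (null),
        // unlike classes loaded from an application JAR.
        System.out.println(Object.class.getProtectionDomain().getCodeSource());

        // Root of the installation; in a full JDK this directory contains "jmods".
        System.out.println(System.getProperty("java.home"));
    }
}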