doc_2400
<ul class="nav navbar-nav"> @foreach (var item in ViewBag.RegList) { <li>@Html.ActionLink((string)item.registryName, "Index", "Registry", new { item.registryName }, new { item.registryID }) </li> } </ul> I don't get it - how do I set params in ActionLink for my controller, and where do they go from there? This is how I defined Index in my controller: public async Task<ActionResult> Index(object attr) But attr only receives an object, which becomes null when cast to string. If the original type is string, it is also null. How do I transfer a parameter? Or am I casting the value to the wrong type? Also I don't understand - what must the fourth parameter of ActionLink (routeValues) be? Method that I'm using: https://msdn.microsoft.com/en-us/library/dd504972(v=vs.108).aspx Or this one: https://msdn.microsoft.com/en-us/library/dd493068(v=vs.108).aspx A: If you look at the created link, it will contain a set of query string parameters: "https://some.where/controller/SomeAction/7788?extraParam1=foo&extraParam2=bar" The standard routing config of MVC makes the parameter "id" part of the local path ("7788" in the example), whereas additional parameters are added to the query string (after the question mark). Your method signature for the action should be something like public async Task<ActionResult> SomeAction(string id, string extraParam1, string extraParam2) to get the parameters in my example link.
doc_2401
I have two servers (Develop and Live). When I publish source code to dev and live, can I set a different URL for each server? I tried to get the server IP first and then set the distribution URL in the react-admin config, but I cannot get the server IP, because the react-admin dataProvider doesn't provide a getServerIP function. How do I get the server IP in react-admin? I already tried the source code below in react-admin, but it's not working. var ip = require("ip"); var url = ip.address(); console.log(url); export default url A: For stuff like that I would use .env files with the dotenv package to read the values. That way, on your development machine you have env variables that point to your localhost, and on the live server you have env variables with the live address. Then you just need a build process that looks at the correct env file, or use env variables from the server. I could point you in the direction of some tutorials if you need more help with it.
doc_2402
with open('data file\poi.data', 'r') as f3: data = f3.read() data = str(data).split('\n') data = list(data) for i in range(len(data)): datum = data[i] datum = ast.literal_eval(datum) df = json_normalize(datum) pprint(df) The output looks like this city lat lon name state 0 Portland 45.52 -122.681944 City of Portland Oregon city lat lon name state 0 Seatle 47.609722 -122.333056 City of Seattle Washington city lat lon name state 0 San Francisco 37.783333 -122.416667 City of San Francisco California I would like the output merged, like so: city lat lon name state 0 Portland 45.52 -122.681944 City of Portland Oregon 1 Seatle 47.609722 -122.333056 City of Seattle Washington 2 San Francisco 37.783333 -122.416667 City of San Francisco California A: I suspect this might be a JSON lines file, so you can try df = pd.read_json(filepath, lines=True) If this doesn't work, fall back to parsing the file by line using literal_eval. You can try calling json_normalize on the whole list, rather than one at a time. df = json_normalize([literal_eval(d) for d in data]) If, for some reason that doesn't work, try what you were doing, but call concat on the normalized data. df = pd.concat([json_normalize(literal_eval(d)) for d in data])
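A quick way to check the list-at-once approach is to run it on in-memory data. The sample lines below are hypothetical stand-ins for the poi.data format described above, and json_normalize is spelled pd.json_normalize as in pandas >= 1.0:

```python
import pandas as pd
from ast import literal_eval

# Hypothetical lines, one Python-literal dict per line as in poi.data.
lines = [
    "{'city': 'Portland', 'lat': 45.52, 'lon': -122.681944, 'name': 'City of Portland', 'state': 'Oregon'}",
    "{'city': 'Seatle', 'lat': 47.609722, 'lon': -122.333056, 'name': 'City of Seattle', 'state': 'Washington'}",
]

# Normalize the whole list at once: one DataFrame, one row per record.
df = pd.json_normalize([literal_eval(line) for line in lines])
print(df.shape)  # (2, 5)
```

If json_normalize on the whole list fails for some input, the pd.concat fallback from the answer above produces the same shape, just with more intermediate frames.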
doc_2403
Thanks in advance. A: Use commons-io and commons-fileupload libraries by Apache. http://commons.apache.org/proper/commons-fileupload/using.html
doc_2404
create 'test', 'x', 'y', 'z', {NUMREGIONS => 10, SPLITALGO => 'UniformSplit'} When I issue describe 'test' hbase(main):016:0> describe 'test' Table test is ENABLED test COLUMN FAMILIES DESCRIPTION {NAME => 'x', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} {NAME => 'y', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} {NAME => 'z', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 3 row(s) in 0.0170 seconds The describe command does not show the SPLITALGO and NUMREGIONS. Is there an HBase shell command which shows the table configuration? A: I guess there is no straightforward way, but you can get the number of regions via (get_splits "YourTableName").count
doc_2405
list = inputFile.read().splitlines() Then after that I manually iterate through and make a second list of the items/lines I care about (which are lines 2,6,10,14,18...). Is there a faster way to do this just with splitlines() so that list automatically contains only the lines I care about? A: itertools.islice(iterable, start, stop[, step]) is the tool for the job: from itertools import islice for line in islice(inputFile, 2, None, 4): print line A: The more pythonic way is to not read all the lines at once. You can use enumerate to iterate over your file object and keep the expected lines: with open(file_name) as f: list_of_lines = [line for index, line in enumerate(f) if index in set_of_indices] Note that here it's better to put your line numbers in a set object, whose membership check is O(1). As mentioned in a comment, if you have a huge set of indices, a more memory-efficient way is to use a generator expression to hold your lines instead of a list comprehension: with open(file_name) as f: list_of_lines = (line for index, line in enumerate(f) if index in set_of_indices) A: You can also use enumerate directly on the file: to_read = {2, 6, 10, 14, 18} for i, line in enumerate(inputFile): if i in to_read: pass # your code here Enumerate is a built-in Python function that gives an index to each object in an iterable. >>> l = iter(["a", "b", "c"]) >>> [x for x in enumerate(l)] [(0, 'a'), (1, 'b'), (2, 'c')] A: Try doing this if you want to read in the whole file: lines = file.read().split("\n") for x in range(2, len(lines), 4): line = lines[x] # Do stuff with line Running through the file line by line instead of reading it all in is better for memory: count = 0 for line in file: if count % 4 == 2: pass # Work with line count += 1
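The islice and enumerate approaches above can be checked against each other on an in-memory sample (treating the wanted lines 2, 6, 10, 14, 18 as 0-indexed):

```python
from itertools import islice

lines = ["line %d" % i for i in range(20)]  # stand-in for the file's lines

# islice: start at index 2, no stop, step 4 -> indices 2, 6, 10, 14, 18.
picked = list(islice(iter(lines), 2, None, 4))

# enumerate with a set: O(1) membership check per line.
wanted = {2, 6, 10, 14, 18}
picked_too = [line for i, line in enumerate(lines) if i in wanted]

assert picked == picked_too
print(picked[0], picked[-1])  # line 2 line 18
```

The islice version never needs the index set at all, which is why it is the better fit when the wanted lines follow a regular stride.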
doc_2406
* *How do I use 'links' to pass argument 'id' as a POST variable and not as a parameter in the URL? *And then, how do I make the 'follow' action in the 'friendship' controller only accept a POST variable, so that nobody can use 'http://localhost:3000/friendship/follow?id=8' in the URL to perform the action? A: Try <%= button_to "Follow", { controller: "friendship", action: "follow", id: friend.id}, remote: true, class: "follow_user", method: :post %> A: * *Add method: :post to your link_to options. *Use a constraint on your route, or check if the request is a POST with request.post? in your controller.
doc_2407
#define WIDTH 16 #define HEIGHT 2 #define NGRP 3 #define MAXGRP 6 String mainMsgs[NGRP][MAXGRP][HEIGHT] = { { {"Value 1","Value 2"}, {"Value 3","Value 4"}, {"Value 5","Value 6"}, {"Value 7","Value 8"}, {"Value 9","Value 10"}, }, { {"Value 1","Value 2"}, {"Value 3","Value 4"}, {"Value 5","Value 6"}, {"Value 7","Value 8"}, {"Value 9","Value 10"}, {"Value 9","Value 10"}, {"Value 11","Value 12"}, }, { {"Value 1","Value 2"}, {"Value 3","Value 4"}, {"Value 5","Value 6"}, {"Value 7","Value 8"}, } }; I had this working more or less but I think the array got too large for memory because half way through group 3 it stops displaying the messages. When I tested it the array indices were always correct. I assumed the array had been truncated. In trying to troubleshoot this I came across PROGMEM. I tried converting the arduino string tutorial at: http://www.arduino.cc/en/Reference/PROGMEM from a 1D array into a 3D array but couldn't get it to work, it either failed to compile or returned garbage. Below are a couple of my attempts. const char message1[][2] PROGMEM = { "Value 1", "Value 2" }; // Through to message15 const char* const group1[][5] PROGMEM = { message1, message2, message3, message4, message5 }; const char* const group2[][6] PROGMEM = { message6, message7, message8, message9, message10, message11 }; const char* const group3[][4] PROGMEM = { message12, message13, message14, message15 }; const char* const groups[] PROGMEM = { group1, group2, group3 }; // Attempt 1 const char* const groups[NUMGRP][6] PROGMEM = { {message1, message2, message3, message4, message5}, {message6, message7, message8, message9, message10, message11}, {message12, message13, message14, message15}, }; // Attempt 2 So I tried converting the original array directly into progmem using const char* const mainMsgs[NUMGRP][6][HEIGHT] = { /* same content as before*/ }; strcpy_P(buff, (char*)pgm_read_word(&(mainMsgs[groupID][msgID][i]))); But it still returned garbage. 
So then I thought I'd try and convert the data into a 1D array and just access messages and lines using offsets. EDIT: Edited the below code to reflect the code I'd used in my original sketch. const char message1[] = "Value 1"; const char message2[] = "Value 2"; // Down to message30 const char* const messages[] PROGMEM = { message1, message2, message3, message4, // ... ... ... message29, message30 }; int groupStarts[] = { 0, 10, 22 }; // The first index of each group int numMsgs[] { 5, 6, 4 }; char buff[WIDTH]; This is the test loop I used: int id = 0; for( groupID = 0; groupID < NGRP; groupID++ ) { for( msgID = 0; msgID < numMsgs[groupID]*HEIGHT; msgID+=HEIGHT ) { for( lineID = 0; lineID < HEIGHT; lineID++ ) { id = groupStarts[groupID] + msgID + lineID; strcpy_P(buff, (char*)pgm_read_word(&(messages[id]))); Serial.print(id); Serial.print(" "); Serial.print(buff); } Serial.println(""); delay(500); } } This results in an almost working example but it's not perfect: 0 Value1 1 Value 2 2 Value3 3 Value 4 4 Value 5 5 Value 6 6 Value 7 7 Value 8 8 Value 9 10 Value 11 11 Value 12 10 Value 11 11 Value 12 12 Value 13 13 Value 14 14 Value 15 15 Value 16 16 Value 17 17 Value 18 18 Value 19 19 Value 20 19 Value 20 19 Value 20... You might notice that there is no Value 10 displayed, value 11 and 12 repeat twice and when it gets to value 19 it just gets stuck in an infinite loop. I can't quite think what the final loop should be. Ideally I'd prefer to keep the 3D array structure as I find it easier to read and understand but I'd be happy with a solution to either version of the code. 
Edit to reflect shuttle87's suggestion: #include <avr/pgmspace.h> #define WIDTH 16 #define HEIGHT 2 const char string1[] PROGMEM = "Message 1"; const char string2[] PROGMEM = "Message 2"; const char string3[] PROGMEM = "Message 3"; const char string4[] PROGMEM = "Message 4"; const char string5[] PROGMEM = "Message 5"; const char string6[] PROGMEM = "Message 6"; const int groupLen[] = { 2, 3, 1 }; const char* msgs[][3][2] = { { {string1, string2 }, {string3, string4 } }, { {string5, string6 }, {string3, string4 }, {string1, string2 } }, { {string2, string3 } } }; char buffer[WIDTH]; void setup() { // put your setup code here, to run once: Serial.begin(9600); } void loop() { // put your main code here, to run repeatedly: for( int g = 0; g < 3; g++ ) { Serial.print("Switching to group: "); Serial.println(g); for( int m = 0; m < groupLen[g]; m++ ) { Serial.print("Switching to message: "); Serial.println(m); for( int l = 0; l < HEIGHT; l++ ) { Serial.print("Switching to line: "); Serial.println(l); strcpy_P(buffer, (char*)pgm_read_word(&(msgs[g][m][l]))); Serial.println(buffer); } delay(500); } } } The current output I'm getting is: "Switchi" and nothing else, does that mean my Arduino is hanging because of my code or have I somehow killed it? I also updated the single array version to reflect how I'd actually coded it. When copying it in I'd miscopied it and it was a bit of a mess. It works more like how shuttle87 suggested but it still returns the error shown above. Edit: Just realised I missed: const char* msgs[][3][2] = { { {string1, string2 }, {string3, string4 } }, { {string5, string6 }, {string3, string4 }, {string1, string2 } }, { {string2, string3 } } }; Should have started: const char* const messages[][3][2] PROGMEM= { { {string1, string2 }, {string3, string4 } }, { {string5, string6 }, {string3, string4 }, {string1, string2 } }, { {string2, string3 } } }; Sorry about that. That does seem to have fixed it. Thanks so much for the help :) Thanks. 
A: Most of your attempts have the same issue, you have stored the pointer to the table in progmem but the actual table data itself (in this case the strings) are not stored in progmem. Unfortunately GCC attributes (as of gcc 4.7) only apply to the current declaration so you have to specify progmem on each variable. So when you have const char message1[][2] PROGMEM = { "Value 1", "Value 2" }; message1 is stored in progmem but the strings "Value 1" are not. Further, if I recall correctly, avr-gcc compiler always stores string literals in SRAM. Even if you specify a place to put them in progmem it's still copied in SRAM (at one point I was trying to write a library to use the c++11 user defined string literals to put things in progmem but this was thwarting me). The 1 dimensional array solution also falls into this same problem. To fix this you store everything explicitly in progmem, for example your 1D solution looks something like this: const char string_msg0_0[] PROGMEM = "Value 1"; const char string_msg0_1[] PROGMEM = "Value 2"; PGM_P strings_pgm_table[] PROGMEM = {string_msg0_0, string_msg0_1}; char buffer[MAX_STRING_SIZE]; strcpy_P(buffer, (PGM_P)pgm_read_word(&(strings_pgm_table[i]))); I would recommend you have a look at the AVR-GCC tutorial putting data in progmem.
doc_2408
This byte array shall be passed to another database function (which is done by the ojdbc driver automatically) when the database function is called from Java. The problem here is that I am trying to debug the database function manually through SQLDeveloper. What I have while debugging through Java is the byte array. I couldn't find a way to manually convert this byte array to a compatible BLOB object that can be passed as an input parameter to the explicit function call in SQLDeveloper PL/SQL. Is there a way to convert the byte array manually to a BLOB object that I can pass to the function? A: I suggest creating a temporary BLOB, inserting the data into it, and using it as the bind value to your function. A CLOB/BLOB (at least for Oracle) is just a locator to data in the database. It doesn't contain any data, just a pointer to the data.
doc_2409
if(navigator.userAgent.indexOf("Trident")>-1) { // IE ... } If IE is detected, I use a different filter (non-blur) that works there. Is there any situation in which this code might give a false positive? Are there any blur-compatible browsers that use the Trident engine? Edit: I know IE8 and IE9 have their own blur filters, but for consistency's sake, we decided to use the same alternative filter for all versions of IE. A: This page explains the user agent strings used by Internet explorer: http://msdn.microsoft.com/library/ms537503.aspx It says that the Trident token was only introduced in IE8, so you might want to check for "MSIE" instead or as well. There is also this page: http://msdn.microsoft.com/en-US/library/ms537509.aspx which is "archived and is no longer actively maintained" but does include a lot of useful information on detecting Internet Explorer. A: You can detect filter support, as described in the answer to this question: How can I feature-detect CSS filters?
doc_2410
I have tried both componentDidMount() and just mapping out in the render return, but am unable to get it working. I understand there are other ways such as mounting the results to the state and mapping the state within the component render, but I am trying to obtain the results directly from this getReactResults() function. const API_URL = 'https://api.testapi.com/?query=test'; const getReactResults = () => axios.get(API_URL) .then((res) => res.data.items) .then((mul) => mul.map(({name, url}) => ({ name, url }) ) ); class Rel extends React.Component { constructor(props) { super(props); this.state = {}; } componentDidMount(){ let results = this.getReactResults(); console.log(results); } render(){ return( <div> <ul> {results.map((item, i) => <li key={i.name}>{i.name </li>)} </ul> </div> ) } } I am getting an error of 'result' is not defined, but the result should be displaying the results from the API. A: You can set the results in your component state when the promise has resolved, and use this.state.results in your render method. const getReactResults = () => axios.get(API_URL).then(res => res.data.items.map(({ name, url }) => ({ name, url })) ); class Rel extends React.Component { state = { results: [] }; componentDidMount() { getReactResults().then(results => { this.setState({ results }); }); } render() { return ( <div> <ul> {this.state.results.map((item, i) => ( <li key={i}>{item.name}</li> ))} </ul> </div> ); } }
doc_2411
environment, I use os.walk(dir) and file=open(path, mode) to read in each file. However, in the Hadoop environment, since I read that HadoopStreaming converts file input to stdin of the mapper and converts stdout of the reducer to file output, I have a few questions about how to input files: * *Do we have to set input from STDIN in mapper.py and let HadoopStreaming convert files in the hdfs input directory to STDIN? *If I want to read in each file separately and parse each line, how can I set input from a file in mapper.py? My previous Python code for the non-Hadoop environment sets: for root, dirs, files in os.walk('path of non-hdfs') ..... However, in the Hadoop environment, I need to change 'path of non-hdfs' to a path of HDFS where I copyFromLocal to, but I tried many with no success, such as os.walk('/user/hadoop/in') -- this is what I checked by running bin/hadoop dfs -ls, and os.walk('home/hadoop/files') -- this is my local path in the non-Hadoop environment, and even os.walk('hdfs://host:fs_port/user/hadoop/in').... Can anyone tell me whether I can read input from a file by using file operations in mapper.py, or do I have to read input from STDIN? Thanks. A: Hadoop streaming has to take input from STDIN. I think the confusion you're having is that you're trying to write code to do some of the things that Hadoop Streaming is doing for you. I did that when I first started Hadooping. Hadoop streaming can read in multiple files and even multiple zipped files, which it then parses, one line at a time, into the STDIN of your mapper. This is a helpful abstraction because you then write your mapper to be file name/location independent. You can then use your mappers and reducers for any input, which is handy later. Plus you don't want your mapper trying to grab files because you have no way of knowing how many mappers you will have later. If files were coded into the mapper, then if a single mapper failed you would never get output from the files hard coded in that mapper. 
So let Hadoop do the file management and have your code be as generic as possible.
doc_2412
I'm trying to put the letters together as they are entered and make them a word, something like [text-1-1][text-1-2][text-1-3] and so on, where each field just holds a letter. However, when I submit the form, these letters don't come into the Google Sheet; it stays empty. Any help would be highly appreciated! A: I found the solution for this. For those who are stuck with this issue now or in the future: you can use the concatenate() function in Google Sheets. For example: concatenate(CELL1,CELL2,CELL3) and all those cell blocks will be added to one cell
doc_2413
doc_2414
I want to implement email opened/not-opened tracking in one of my websites. After searching, I found that this tracking is done by sending an embedded image along with the email (typically 1px, transparent). When someone opens the email and if they allow images, we get a request for the image and we track that. What I am using: I am using the MEAN stack for the project and nodemailer for sending emails, with Amazon SES as the sending service. Problem: I am able to send the embedded images using the above technologies, but the problem is that in nodemailer you have to attach the image as an email attachment to embed images. So no call from the mail client ever reaches the callback URL I mentioned for the image file (as the email client already has the file). How do I implement opened/not-opened tracking? If this cannot be done with nodemailer, please point me in the right direction. My expertise: I am still a beginner, so kindly forgive and correct me if something above is wrong. A: I am not 100% sure if what I am posting is the correct way to do it, but I figured out a way to make it work. Since this post did not get any valid answer all this time and I see some people viewing it, I am answering my own question. You cannot track the opening of an email if you send the 1px image along with the email as an attachment using nodemailer, as it will automatically render and send the file along with the mail (no external call made to the server = you cannot detect the open). Render HTML using nodemailer as you do generally, and include an image whose source points to a route on your server, like shown below: <html> <body> <img src="http://YOUR_SITE.com/track/this/image/UNIQUE_ID_FOR_THIS_IMAGE.jpg"> </body> </html> Now you have no attachments in your nodemailer message and the file is not sent along with the email, but it will be fetched from your server when someone opens the email. You can track who opened the email using the unique key in the URL. P.S. Don't forget to send a (1x1)px img file back for the request. 
so that no 404 error occurs on the client side. A: EDIT: This answer is useless because the URL is resolved by nodemailer before sending the email. Still going to leave it here, as it may point someone to the right direction. As of nodemailer version 1.3.4, you can use URLs as attachments, eg: attachments: [ { filename: 'receipt.png', path: 'http://example.com/email-receipt?id=1\&[email protected]', cid: '[email protected]' } //... Then, you just embed it in your HTML: <img src="cid:[email protected]"> That will do the trick. Might be useful to mention that, in my case, the URL returns a file download. I still can't understand if the URL is resolved by nodemailer, while packing, or by the email client. But something is doing it because I've tested Thunderbird and Geary (desktop), Gmail and Rainloop (browser) and it's working. Can't guarantee that will work in every case, though. A: Inject an html line after the body of the email containing the . You can also attach files to SES outbound emails but it's advisable not to go like this. A: I am little late to be answering this question. I too wanted to track mails so after a little research came up with the below solution. Probably not the best but I guess it will work fine. var mailOptions = { from: 'Name <[email protected]>', // sender address to: '[email protected]', // list of receivers subject: 'Hello', // Subject line text: 'Hello world', // plaintext body html: { path : 'mail.html' }, // html body attachments: [{ filename : "fileName.txt", path : 'fileName.txt' }] }; The above is the code for mailoptions of nodeMailer. <p> Hello User, What is required to do the above things? </p> <img src="http://localhost:8080/" alt="" /> The above is mail.html I setup a server that would handle requests. And then I wrote a middleware which would be used for analysis. 
var express = require('express'); var app = express(); app.use(function (req, res, next) { console.log("A request was made for the image"); next(); }); app.get('/', function (req, res) { var fs = require('fs'); fs.readFile('image.jpeg',function(err,data){ res.writeHead('200', {'Content-Type': 'image/png'}); res.end(data,'binary'); }); }); var server = app.listen(8080, function () { var host = server.address().address; var port = server.address().port; }); So when we request to the image we will log "A request was made for the image". Make changes to the above snippets to get things working. Some of the things that I did and failed were serving the image as a static. This did render the image but it did not log when the image was accessed. That being the reason for the image to be read and served. One more thing since my server is localhost the image wasn't rendered in the mail. But I think it should work fine if all the things were hosted on a server. EDIT: The get request can also be app.get('/', function (req, res) { var buf = new Buffer([ 0x47, 0x49, 0x46, 0x38, 0x39, 0x61, 0x01, 0x00, 0x01, 0x00, 0x80, 0x00, 0x00, 0xff, 0xff, 0xff, 0x00, 0x00, 0x00, 0x2c, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x01, 0x00, 0x00, 0x02, 0x02, 0x44, 0x01, 0x00, 0x3b]); res.writeHead('200', {'Content-Type': 'image/png'}); res.end(buf,'binary'); }); A: In send mail option you can pass the ConfigurationSetName field for tracking. Exmaple : var opt = { to : '', from : '', ses: { ConfigurationSetName: 'your configuration key' }, subject: '', html: `` } transporter.sendMail(opt) .then((res) => { console.log("Successfully sent"); }) .catch((err) => console.log("Failed ", err)) This code works for me using nodemailer
doc_2415
I know where to add the CSS along with the CDN, which is the route I'm taking, but where would the last step, 'Initialising DataTables' (the JavaScript), go in Ruby on Rails? I spent a lot of time searching Google for the answer, got a variety of solutions that didn't work, and figured it was time to ask the wonderful experts here. A: Finally figured out the problem. It was a simple issue where the scripts were in the body; moving them into the head section corrected the whole thing.
doc_2416
Option 1: deleteCar(Car car) Option 2: deleteCar(String make, String model) Option 3: deleteCar(CarKey carKey) At first I thought Option 1, but in practice Option 2 seems more appealing (I don't want to have to get an object when I only have the id, just so that I can pass it into the delete method). I put Option 3 because I have seen stuff like that, but it doesn't seem right to me because CarKey isn't really a domain object. Thoughts? A: Option 3. It doesn't matter that CarKey isn't a domain object (it can be a value object though); an id is all you need for that action to happen. That's because, if the Car is an AR, the repository should know how to get it and how to handle deletes. A: If strictly adhering to the definition of repository in DDD, then Option 1 is the way to go, since that way a repository emulates an in-memory collection. However, I don't see that as a critical component of a repository, and it can lead to leaky abstractions if taken too far. On the other hand, requiring deletion by the entity object in its entirety can be an indication that the caller of the repository (such as an application service) should retrieve the entity to be deleted by the ID initially, address any business concerns, and then remove it. ORMs like Hibernate can delete by a query, so that you only need the ID to invoke a delete, but it ends up loading the entity from the database anyway.
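A minimal sketch of Option 3 (all names hypothetical, using an in-memory store) shows why passing only a key is enough; the repository owns the lookup, and CarKey stays a value object rather than a domain entity:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CarKey:
    """Value object wrapping the identity; not a domain entity itself."""
    make: str
    model: str

class InMemoryCarRepository:
    def __init__(self):
        self._cars = {}

    def add(self, key: CarKey, car: dict) -> None:
        self._cars[key] = car

    def delete(self, key: CarKey) -> None:
        # Only the key is needed; the repository handles lookup internally.
        self._cars.pop(key, None)

repo = InMemoryCarRepository()
key = CarKey("Toyota", "Corolla")
repo.add(key, {"color": "red"})
repo.delete(key)
print(len(repo._cars))  # 0
```

Because CarKey is frozen, two keys with the same make/model compare and hash equal, so the caller can construct a fresh key from an id without ever loading the entity first.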
doc_2417
import psutil import time import threading from kivy.clock import Clock from kivy.app import App from kivy.uix.label import Label from kivy.uix.boxlayout import BoxLayout class ExampleApp(App): def build(self): b = BoxLayout() self.texty = Label(text=str(psutil.cpu_percent())) b.add_widget(self.texty) return b def update(self): self.texty.text = str(psutil.cpu_percent()) Clock.schedule_interval(update, 1.0) ExampleApp().run() The traceback: Traceback (most recent call last): File "D:\Python Projects\Kivy\main.py", line 21, in <module> ExampleApp().run() File "C:\Users\K0vac\AppData\Local\Programs\Python\Python37-32\lib\site-packages\kivy\app.py", line 855, in run runTouchApp() File "C:\Users\K0vac\AppData\Local\Programs\Python\Python37-32\lib\site-packages\kivy\base.py", line 504, in runTouchApp EventLoop.window.mainloop() File "C:\Users\K0vac\AppData\Local\Programs\Python\Python37-32\lib\site-packages\kivy\core\window\window_sdl2.py", line 747, in mainloop self._mainloop() File "C:\Users\K0vac\AppData\Local\Programs\Python\Python37-32\lib\site-packages\kivy\core\window\window_sdl2.py", line 479, in _mainloop EventLoop.idle() File "C:\Users\K0vac\AppData\Local\Programs\Python\Python37-32\lib\site-packages\kivy\base.py", line 339, in idle Clock.tick() File "C:\Users\K0vac\AppData\Local\Programs\Python\Python37-32\lib\site-packages\kivy\clock.py", line 591, in tick self._process_events() File "kivy\_clock.pyx", line 384, in kivy._clock.CyClockBase._process_events File "kivy\_clock.pyx", line 414, in kivy._clock.CyClockBase._process_events File "kivy\_clock.pyx", line 412, in kivy._clock.CyClockBase._process_events File "kivy\_clock.pyx", line 167, in kivy._clock.ClockEvent.tick File "D:\Python Projects\Kivy\main.py", line 17, in update self.texty.text = str(psutil.cpu_percent()) AttributeError: 'float' object has no attribute 'texty' Any ideas on how to solve this error? Thanks! 
A: To understand the problem make the following change: def update(self): print(self) And you'll see that you get the following: # ... 0.995747223001672 0.9959899680106901 0.9982999769854359 0.9948770129994955 # ... Why is self a number and not the instance of the class? Well, because in the scope where you use Clock, it behaves like a function and schedule_interval() passes as the first parameter the time it runs that as you see almost coincides with the period of 1.0 seconds. So the solution is better to use schedule_interval within the methods such as in build: import psutil from kivy.app import App from kivy.clock import Clock from kivy.uix.label import Label from kivy.uix.boxlayout import BoxLayout class ExampleApp(App): def build(self): b = BoxLayout() self.texty = Label(text=str(psutil.cpu_percent())) b.add_widget(self.texty) Clock.schedule_interval(self.update, 1.0) return b def update(self, dt): self.texty.text = str(psutil.cpu_percent()) if __name__ == "__main__": ExampleApp().run()
doc_2418
but I couldn't make it myself (I'm a beginner in Dart), so please can anyone show me how to do it with an example? The code I want would follow a process something like this (if it's possible!): var myClass = getClassbyClassname("className"); // myClass is now an object of the indicated className And please, how can I use the methods inside myClass dynamically? For example: myClass.doMethod("myMethod", argumentsHere); // myClass will execute myMethod Thank you in advance!
doc_2419
A: I recommend using the command-line tool iconv. For example: $ iconv [options] -f from-encoding -t to-encoding inputfile(s) -o outputfile Here is an online tutorial that might be of help: https://www.tecmint.com/convert-files-to-utf-8-encoding-in-linux/
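The same conversion can also be done programmatically. Below is a small Python sketch equivalent to iconv -f ISO-8859-1 -t UTF-8 in.txt -o out.txt; the file names and the Latin-1 source encoding are assumptions for the example:

```python
import os
import tempfile

# Write a sample Latin-1 file, then re-encode it to UTF-8.
workdir = tempfile.mkdtemp()
src_path = os.path.join(workdir, "in.txt")
dst_path = os.path.join(workdir, "out.txt")

with open(src_path, "w", encoding="iso-8859-1") as f:
    f.write("caf\xe9")  # 'café', stored as a single Latin-1 byte 0xE9

# Decode with the source encoding, re-encode with the target encoding.
with open(src_path, encoding="iso-8859-1") as src, \
     open(dst_path, "w", encoding="utf-8") as dst:
    dst.write(src.read())

with open(dst_path, "rb") as f:
    print(f.read())  # b'caf\xc3\xa9' -- the é is now two UTF-8 bytes
```

For large files you would read and write in chunks instead of one read(), but the decode/re-encode step is the same.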
doc_2420
protected $template = ['name' => 'john', 'age' => 10]; public function merge($params){ $arr = array_intersect_key($params, $this->template); } The above works, but I would also like to filter out keys where the value is empty. So if I pass in: ['name' => 'jeff', 'age' => ''] it would filter down to an array of: ['name' => 'jeff'] Is there a way to do this, or would it be best to just loop through the array and do an empty check? A: You can use array_filter to remove empty elements: $arr = array_filter($arr, 'strlen')
doc_2421
I would like to include a PHP file in my index.php on line 4:

include('./app/TableGateways/PersonGateway.php');

I'm using docker-compose with these containers:

version: '3.8'
services:
  server:
    image: nginx:stable-alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/nginx.conf
      - ./app/:/app
    links:
      - php
  php:
    build:
      context: .
      dockerfile: ./dockerfiles/php.dockerfile
    volumes:
      - ./app/:/var/www/html/
    links:
      - db

Here is my PHP dockerfile:

FROM php:7.4-fpm-alpine
WORKDIR /app
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
COPY app/ var/www/html
RUN docker-php-ext-install pdo pdo_mysql

And my directory: current directory

Thank you for your help! :)
I think it's probably my volumes organization, maybe the path...
doc_2422
SQL> desc caretaker;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 CID                                       NOT NULL NUMBER(5)
 CNAME                                              VARCHAR2(15)
 ADDRESS                                            VARCHAR2(20)
 SALARY                                             NUMBER(10,2)

SQL> desc article;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ART_NO                                    NOT NULL NUMBER(5)
 ART_TITLE                                          VARCHAR2(15)
 TYPE                                               VARCHAR2(15)
 A_DATE                                             DATE
 CID                                                NUMBER(5)
 MUSEUM_ID                                          NUMBER(5)

And I need to execute two queries:

1) Find the details of the articles cared for by a person whose salary is more than 20000 and who takes care of at least 2 articles.
2) Display the details of the caretaker taking care of the maximum number of articles.

For the first query I have made it so far:

select a.art_no, a.art_title, a.type, a.a_date
from article a, caretaker c
where a.cid = c.cid and c.salary > 20000;

Now I am confused about how to extract the articles which are cared for by a person who takes care of at least 2 articles.

2) For the second query:

select c.cid, c.cname, c.address, c.salary
from caretaker c, article a
where c.cid = a.cid
and count( select a.cid from article a group by a.cid ) = MAX(a.cid) ?????

I am confused, please correct me, thank you. (I'm not supposed to use JOIN commands.)

A: For the first query:

select a.art_no, a.art_title, a.type, a.a_date
from article a, caretaker c
where a.cid = c.cid
  and c.salary > 20000
  and c.cid in (select cid from article group by cid having count(cid) > 1)

SQL Fiddle: http://sqlfiddle.com/#!4/bef24/16

For the second query:

select cname, a.art_no, a.art_title, a.type, a.a_date
from article a, caretaker c
where a.cid = c.cid
  and c.cid = (select cid
               from (select cid, count(cid)
                     from article
                     group by cid
                     having count(cid) = (select max(count(cid))
                                          from article
                                          group by cid)));

SQL Fiddle: http://sqlfiddle.com/#!4/bef24/18
doc_2423
A: Try using a regex plus encode (note the character class should exclude >, so the match stops at the end of each <U+...> token, and re must be imported):

import re

string_unicode = " xyz <U+6B66> Æ for \u200c ab 23#. "
string_encode = re.sub(r'<[^>]*>', '', string_unicode)
string_encode = string_encode.encode("ascii", "ignore")

string_encode: b' xyz for ab 23#. '
doc_2424
class Student: Identifiable, ObservableObject {
    var id = UUID()
    @Published var name = ""
}

Used within an array in another class (called Class):

class Class: Identifiable, ObservableObject {
    var id = UUID()
    @Published var name = ""
    var students = [Student()]
}

Which is defined like this in my view:

@ObservedObject var newClass = Class()

My question is: how can I create a TextField for each Student and bind it with the name property properly (without getting errors)?

ForEach(self.newClass.students) { student in
    TextField("Name", text: student.name)
}

Right now, Xcode is throwing me this:

Cannot convert value of type 'TextField<Text>' to closure result type '_'

I've tried adding some $s before calling the variables, but it didn't seem to work.

A: Simply change the @Published into a @State for the Student's name property. @State is the one that gives you a Binding with the $ prefix.

import SwiftUI

class Student: Identifiable, ObservableObject {
    var id = UUID()
    @State var name = ""
}

class Class: Identifiable, ObservableObject {
    var id = UUID()
    @Published var name = ""
    var students = [Student()]
}

struct ContentView: View {
    @ObservedObject var newClass = Class()

    var body: some View {
        Form {
            ForEach(self.newClass.students) { student in
                TextField("Name", text: student.$name) // note the $name here
            }
        }
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}

In general I'd also suggest to use structs instead of classes.

struct Student: Identifiable {
    var id = UUID()
    @State var name = ""
}

struct Class: Identifiable {
    var id = UUID()
    var name = ""
    var students = [
        Student(name: "Yo"),
        Student(name: "Ya"),
    ]
}

struct ContentView: View {
    @State private var newClass = Class()

    var body: some View {
        Form {
            ForEach(self.newClass.students) { student in
                TextField("Name", text: student.$name)
            }
        }
    }
}
doc_2425
As I use a bold webfont, the Latin characters are bold, but the fallback would only be bold if I set the whole paragraph to font-weight: bold or the like. I remember discussions that this should be avoided because some browsers can't display it correctly, but during my tests I wasn't able to produce a really broken layout when bolding the webfonts. What do you think? How can I solve this problem?

Thank you
Markus

A: Yes, most webfonts provide specific weights, like 400 for Regular and 700 for Bold. If these aren't provided and you bold/strong them, you are in essence using the font outside of its original intent. Font weight values can be used, but I'd always stick with the ones provided with the webfont you're using. Also, if a weight you declare is not available, it will simply default to the "logically closest" weight (this from the CSS-Tricks article below).

See a little more basic description here: https://css-tricks.com/almanac/properties/f/font-weight/

A: Yes, it's still recommended you don't do this. By using font-weight: bold you're forcing the browser to try to create the bold version of this font itself, which can often look distorted or fuzzy. This is referred to as faux styling. You should set up different @font-face definitions with different font-weight values which make use of multiple font files.
doc_2426
Code:

text = json.loads(content)
a = {}

def extract(DictIn, Dictout):
    for key, value in DictIn.iteritems():
        if isinstance(value, dict):
            extract(value, Dictout)
        elif isinstance(value, list):
            for i in value:
                extract(i, Dictout)
        else:
            Dictout[key] = value

extract(text, a)
for k, v in a.iteritems():
    print k, ":", v

The result should look like the following, but with the other 40 or so entries. Currently the code only shows the last entry:

datetime : 2014-06-10T20:00:00-0600
date : 2014-06-10
href : http://xxxxxxxxxxxx.json
lng : -94.5554
id : 17551289
createdAt : 2013-07-30T12:18:56+0100
city : Kansas City, MO, US
billing : headline
totalEntries : 37
type : Concert
billingIndex : 1
status : ok
perPage : 50
setlistsHref : xxxxxxxxxxxx:51b7f46f-6c0f-46f2-9496-08c9ec2624d4/setlists.json
lat : 39.0763
displayName : Ben Folds
eventsHref : http://xxxxxxx.08c9ec2624d4/calendar.json
popularity : 0.0
uri : xxxxxxxxxxxxxxxxxx
mbid : xxxxxxxxxxx
onTourUntil : 2014-06-10
time : 20:00:00
page : 1

A: The problem is with your Dictout[key] = value. In a Python dictionary the keys are unique. Assume:

_d = {1: 'one', 2: 'two'}
>>> print _d
{1: 'one', 2: 'two'}
>>> _d[1] = 'another one'
>>> print _d
{1: 'another one', 2: 'two'}

I guess in your for loop you are overwriting the value of an existing key; that's why only your last entry is getting stored. Try changing your data structure to something like a list of dictionaries, so that your output may look like:

my_out_list = [{1: 'one', 2: 'two'}, {1: 'another one', 2: 'two'}]
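Following the answer's suggestion, one way to keep every entry is to start a fresh dictionary for each list element instead of flattening everything into one. A minimal sketch (the function name and the sample JSON are made up for illustration; written for Python 3, so items() rather than iteritems()):

```python
import json

def extract_records(obj, records, current=None):
    """Walk parsed JSON; open a new dict for every list element so
    repeated keys end up in separate records instead of overwriting."""
    if current is None:
        current = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            if isinstance(value, (dict, list)):
                extract_records(value, records, current)
            else:
                current[key] = value
    elif isinstance(obj, list):
        for item in obj:
            new = {}
            records.append(new)
            extract_records(item, records, new)

content = '[{"id": 1, "city": "Kansas City"}, {"id": 2, "city": "Chicago"}]'
out = []
extract_records(json.loads(content), out)
print(out)  # [{'id': 1, 'city': 'Kansas City'}, {'id': 2, 'city': 'Chicago'}]
```

Each element of the top-level list becomes its own dictionary in the output, so nothing is lost when the same key repeats across entries.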
doc_2427
execute_process(COMMAND ${CMAKE_COMMAND} -P myScript.cmake)

This only works if the file myScript.cmake is in the same working directory. Three related questions:

* Does cmake have a standard location to look for .cmake scripts?
* Is there a cmake variable I can define to tell cmake where to look? or
* Should I always give a full path to the script (i.e., -P ${PATH_VAR}/myScript.cmake)?

A: Does cmake have a standard location to look for .cmake scripts?

For execute_process, no. The find_XXX commands (e.g. find_file) and include do though.

Is there a cmake variable I can define to tell cmake where to look?

Again, for execute_process, no. For find_xxx or include, you can set the variable CMAKE_MODULE_PATH.

Should I always give a full path to the script (i.e., -P ${PATH_VAR}/myScript.cmake)?

That's a good option; passing a full path leaves no room for uncertainty. However, you can get away with a relative path too. The default working directory for execute_process is the CMAKE_BINARY_DIR, so you can pass the path to the script relative to that. Alternatively you can specify the WORKING_DIRECTORY in the execute_process call either as a full path or relative to the CMAKE_BINARY_DIR.

If you feel your script is in a standard location, you could consider "finding" it first using a call to find_file. This would yield the full path to the script if found, and that variable can be used in your execute_process call.
doc_2428
I'd like to know if there's some option to match anything with ㅆ (핬, 싸, etc) or ㅏ (아, 가, 감), or only when a character is in batchim (for ex. match when ㅁ is in batchim, would match (암, 감, but not 마 ), or similar queries without long expressions.
doc_2429
def foobar(one, two):
    """
    My function.

    :param int one: My one argument.
    :param int two: My two argument.

    :rtype: Something nice.
    """
    return 100 + one + two

And I need to parse the docstring to have a dictionary something like:

{
    'sdesc': 'My function.',
    'params': [('one', 'My one argument.'), ('two', 'My two argument.')],
    'rtype': 'Something nice.'
}

I can use sphinx.util.docstrings.prepare_docstring as follows:

>>> prepare_docstring(foobar.__doc__)
['My function.', ':param int one: My one argument.', ':param int two: My two argument.', ':rtype: Something nice.', '']

I could create my own parser, maybe using regex for params and rtype, and stuff. But is there a better way to do it or a better approach? How does sphinx.ext.autodoc do it? Any other advice on how to parse this kind of docstring?

A: pip install docstring-parser supports both ReST-style and Google-style docstrings; see https://github.com/rr-/docstring_parser for details.

A: EDIT: This question went two years without a response. See the accepted response for a better option.

OLD: I ended up using regular expressions. The particular system used by Sphinx of nested nodes, where each node type has to parse its children, is not very useful for my purposes. If anyone cares, this is the regex I used:

param_regex = re.compile(
    '^:param (?P<type>\w+)? (?P<param>\w+): (?P<doc>.*)$'
)

A: openstack/rally's parse_docstrings() (permalink) takes a function's docstring in reStructuredText (reST) format as input and returns 4 values: short_description, long_description, params and returns.

For example, if the function and its docstring are:

def sample(self, task, deployment=None):
    """Start benchmark task.

    Implement sample function's long description.

    :param task: Path to the input task file.
    :param deployment: UUID or name of the deployment
    :returns: NIL
    """

Then the parse_docstrings() function will return:

{
    "short_description": "Start benchmark task.",
    "long_description": "Implement sample function's long description.",
    "params": [
        {"name": "task", "doc": "Path to the input task file"},
        {"name": "deployment", "doc": "UUID or name of the deployment"}
    ],
    "returns": "NIL"
}

You can modify the above function as per your needs.
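For a quick illustration of the regex approach from the older answer, here is a self-contained sketch; param_regex is the pattern quoted above, while the rtype pattern and the line-by-line loop are my additions for the example:

```python
import re

docstring = """My function.

:param int one: My one argument.
:param int two: My two argument.
:rtype: Something nice.
"""

param_regex = re.compile(r'^:param (?P<type>\w+)? (?P<param>\w+): (?P<doc>.*)$')
rtype_regex = re.compile(r'^:rtype: (?P<rtype>.*)$')

lines = docstring.splitlines()
parsed = {'sdesc': lines[0], 'params': [], 'rtype': None}
for line in lines:
    m = param_regex.match(line)
    if m:
        parsed['params'].append((m.group('param'), m.group('doc')))
        continue
    m = rtype_regex.match(line)
    if m:
        parsed['rtype'] = m.group('rtype')

print(parsed)
# {'sdesc': 'My function.',
#  'params': [('one', 'My one argument.'), ('two', 'My two argument.')],
#  'rtype': 'Something nice.'}
```

This hand-rolled loop only handles single-line field values; for anything more elaborate the docstring-parser package from the accepted answer is the sturdier choice.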
doc_2430
From product
Join product_order ON product.product_id = product_order.product_id
where product.product_id = product_order.product_id;

I have been working on this query for an hour and can't seem to get it to work. All I need to do is take each product from one table and match it to how many times it was ordered in another, then display the product in one column and how many times it was ordered in the next.

A: This should be as simple as:

select product_name, count(product_order.product_id)
From product
left join product_order on product.product_id = product_order.product_id
group by product_name

I used a left join and counted on product_order.product_id so products that have not been ordered will still display, with a count of zero.

A: This is how I would do it:

select p.product_name,
       (select count(*)
        from product_order po
        where p.product_id = po.product_id) times_ordered
from product p

Alternatively, you could use a group by statement:

select p.product_name, count(po.product_id)
from product p, product_order po
where p.product_id = po.product_id(+)
group by p.product_name
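As a quick sanity check of the grouped count, the same shape of query can be run against an in-memory SQLite database from Python's standard library. The table and column names follow the question, the sample rows are invented, and SQLite's LEFT JOIN stands in for Oracle's (+) syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product (product_id INTEGER, product_name TEXT);
    CREATE TABLE product_order (order_id INTEGER, product_id INTEGER);
    INSERT INTO product VALUES (1, 'Widget'), (2, 'Gadget'), (3, 'Sprocket');
    INSERT INTO product_order VALUES (10, 1), (11, 1), (12, 2);
""")

# The left join keeps 'Sprocket' (never ordered) in the result with a count of 0,
# because COUNT over a column ignores the NULLs produced by the unmatched join.
rows = conn.execute("""
    SELECT product_name, COUNT(product_order.product_id)
    FROM product
    LEFT JOIN product_order ON product.product_id = product_order.product_id
    GROUP BY product_name
    ORDER BY product_name
""").fetchall()

print(rows)  # [('Gadget', 1), ('Sprocket', 0), ('Widget', 2)]
```

Swapping the LEFT JOIN for a plain inner join would silently drop the never-ordered product, which is the pitfall the first answer warns about.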
doc_2431
I've tried offsetting the rows along with adding or removing one from the bottom, with minimal success, as it always states a section of it is invalid (I'm guessing because it needs to be defined somewhere, but I'm not entirely sure in what format). What seemed to work best was a combination of error handling and SpecialCells, where the script should ignore the value-appending portion if nothing is populated in the 2nd row. If there is something populated there, then it would proceed to append a value in the last column, but only for empty cells.

Columns("A:H").Select
Selection.AutoFilter
ActiveSheet.Range("$A$1:$H$800000").AutoFilter Field:=2, Criteria1:="=*", _
    Operator:=xlAnd
ActiveSheet.Range("$A$1:$H$800000").AutoFilter Field:=8, Criteria1:="=*In*", _
    Operator:=xlAnd

This is supposed to filter out what I need per tab of the Excel file. I'm looking for any documents that have a value in the 2nd column (B) and any documents that contain "In" in the 8th column (H). The reports generated can vary wildly in length, so I designated 800k as a good threshold.

On Error GoTo NoBlanks01
If Range("$A$2:$H$2").SpecialCells(xlCellTypeVisible).Count > 0 Then
    Columns("I:I").Select
    Selection.SpecialCells(xlCellTypeBlanks).Select
    Selection.FormulaR1C1 = "InsertValueHere"
Skip01:
    On Error Resume Next
End If

I'm not 100% sure if having them this way is redundant, but my thinking here was that if there's anything in the second row, proceed. If not, an error will be produced that falls through to the bottom of the script, then goes to Skip01 and proceeds from there, essentially bypassing the value appending, where it would select all empty cells in column I and append "InsertValueHere".

NoBlanks01:
Resume Skip01
'I have 11 "NoBlanks", "No Skips" and "Resumes" in the same sub, all according to different tabs.

Again, not sure if that matters, but I figured I'd state it in case there's some order of operations I missed while researching this. I expect that the script should filter according to the specifications given, then run through a check of whether the second row contains any items. If it does, it should proceed to select the last column in that tab, select only the empty cells within that column, then give them a value. If it doesn't, it should skip any value appending and move straight to its relevant "NoBlanks" followed by its relevant "Skip". As it stands, it completes the logic on some tabs but not on others. I have zero clue why, when the second row is clearly populated. I realize that having some of these reports would come in handy, so if needed, I can provide them.

A: Try something like this:

Dim sht As Worksheet, rng As Range, rngVis As Range

Set sht = ActiveSheet
Set rng = sht.Range("a1").CurrentRegion '<< the range with data and headers

If Application.CountA(rng) = 0 Then Exit Sub '<< exit if have no data...

rng.AutoFilter
rng.AutoFilter field:=2, Criteria1:="=*"
rng.AutoFilter field:=8, Criteria1:="=*In*"

'Next line should not throw an error even if all data
' rows are filtered, since there's always the header row visible
Set rngVis = rng.Columns("I").SpecialCells(xlCellTypeVisible)

If rngVis.Count > 1 Then '<< ignore if only the header row...
    rngVis.SpecialCells(xlCellTypeBlanks).Value = "InsertValueHere"
End If
doc_2432
I don't know where to set the coordinates. My code: $tableID = "xxxx"; $postBody = new Google_Service_MapsEngine_FeaturesBatchInsertRequest(); $feature = new Google_Service_MapsEngine_Feature(); $feature->setType("Feature"); $geometry = new Google_Service_MapsEngine_GeoJsonGeometry(); $geometry->setType("Point"); $point = new Google_Service_MapsEngine_GeoJsonPoint(); $coordinates = "[86.9253,27.9881]"; $point->setCoordinates($coordinates); $feature->setGeometry($geometry); $properties = array("gx_id" => "804940557", "mountain_name" => "Mt Everest", "height" => "8848"); $feature->setProperties($properties); $postBody->setFeatures(array($feature)); $postBody->setFeatures(array($feature)); $service->tables_features->batchInsert($tableID, $postBody); A: I'm NOT sure about this. There is everything about CURL and JSON in official documentation. But by my quick analysis through github, I guess it's this: $feature->setGeometry(array("geometry" => array("type" => "Polygon","coordinates"=>"[86.9253,27.9881]"))); OR $feature->setGeometry(array("type" => "Polygon","coordinates"=>"[86.9253,27.9881]"));
doc_2433
I would like to combine both of them into one graph and I didn't find a previous example. Here is what I got:

c <- ggplot(survey, aes(often_post, often_privacy)) + stat_smooth(method="loess")
c <- ggplot(survey, aes(frequent_read, often_privacy)) + stat_smooth(method="loess")

How can I combine them? The y axis is often_privacy, and in each graph the x axis is often_post or frequent_read. I thought I could combine them easily (somehow) because the range is 0-5 in both of them. Many thanks!

A: You can use + to combine other plots on the same ggplot object. For example, to plot points and smoothed lines for both pairs of columns:

ggplot(survey, aes(often_post, often_privacy)) +
  geom_point() +
  geom_smooth() +
  geom_point(aes(frequent_read, often_privacy)) +
  geom_smooth(aes(frequent_read, often_privacy))

A: Example code for Ben's solution.

#Sample data
survey <- data.frame(
  often_post = runif(10, 0, 5),
  frequent_read = 5 * rbeta(10, 1, 1),
  often_privacy = sample(10, replace = TRUE)
)

#Reshape the data frame
survey2 <- melt(survey, measure.vars = c("often_post", "frequent_read"))

#Plot using colour as an aesthetic to distinguish lines
(p <- ggplot(survey2, aes(value, often_privacy, colour = variable)) +
  geom_point() +
  geom_smooth()
)

A: Try this:

df <- data.frame(x=x_var, y=y1_var, type='y1')
df <- rbind(df, data.frame(x=x_var, y=y2_var, type='y2'))
ggplot(df, aes(x, y, group=type, col=type)) + geom_line()
doc_2434
A: Not in any significant way. Not in any way at all, actually. Certainly not more than using a long string for the name, which makes parsing take nanoseconds longer.
doc_2435
I'll have a 2D structure array[1..N_SECTIONS] of var set of int: content and a set of unique elements items that need to be distributed into content by a certain rule, in such a way that in the end they still don't occur more than once across the individual sets of content. I tried to solve this with a nested forall, but I failed because I don't see a way to append new elements to the individual sets of unknown size. The same goes if I convert this structure to a real 2D array like array[1..N_SECTIONS,1..N_ITEMS] of int: content, basically because each section can have a different number of members. Additionally, this way the array is much oversized, since N_SECTIONS * N_ITEMS >> N_ITEMS (i.e. the unique elements to assign). Unfortunately I couldn't find a neat way to constrain such a distribution in the official docs. Thanks for any hints in advance!

Update: Here's a dataset (note: the assignment rule here does not guarantee a unique representation of each element; it is just used for demonstration):

array[1..5] of int: items = [1,2,3,4,5];
array[1..20] of var set: sections;

constraint forall(cur_sec in 1..3)(
    forall(item in items)(
        item mod cur_sec == 0 -> sections[cur_sec][card(sections[cur_sec]] = item)); % this particular line is not working
doc_2436
Failure [INSTALL_PARSE_FAILED_INCONSISTENT_CERTIFICATES]

I know that it happens because of the conflict between the certificate of the previously installed launcher on the emulator and the one I am trying to install. I also changed the package name and tried; that also works on a device but not on the emulator. Thank you.
doc_2437
I am launching multiple (3) instances of a Java application from my Python 3 (PyQt5) tool. The application is LiveGraph 2.0: a tool for generating graphs in real time out of CSV-like logs output on the serial console by devices. By default the application opens multiple windows. My Python tool closes all windows but the graph windows and moves/resizes each one to a specific position on the screen per instance. I use the following primitives to do this:

* win32process.GetWindowThreadProcessId()
* win32gui.GetWindowText()
* win32gui.PostMessage()
* win32gui.EnumWindows()
* win32gui.SetWindowPos()

It appears the results are timing sensitive (time.sleep() helps), and the Java process remains stuck, either before showing any windows or after closing the last window with the mouse. Also, the Java application is not able to open Load/Save dialog boxes after these win32 primitives have been used. In particular, depending on the timing (on some PCs, with or without time.sleep() of various amounts of seconds), I see:

* that windows are not correctly resized/closed (this is somewhat expected: if I EnumWindows() before the windows I am looking for have had a chance to be created and displayed, I understand my callback will not be called for them and the expected action is not taken)
* that the Java application is stuck (Java process listed in Task Manager, but no window is shown)
* that the Java process is not able to open File Load/Save dialog boxes (Java versions), while it is able to open other windows
* that after closing the last window the Java process still appears as running (0 CPU time, 72MB)

If I comment out the calls to the win32 APIs, I do not see the weird behavior above (of course all windows are displayed, all in the same positions, on top of the other processes' windows).

Here is a snippet of the relevant code:

class FitLgWins:
    """Callable object to locate and adjust LiveGraph windows"""

    def __init__(self, pid: int, geo: QRect):
        self.pid = pid
        self.geo = geo

    def __call__(self) -> None:
        self.fit_lg_wins()

    def fit_lg_wins(self):
        """locate and adjust LiveGraph windows"""
        def callback(hwnd, res):
            _, cpid = win32process.GetWindowThreadProcessId(hwnd)
            if cpid == self.pid:
                title = win32gui.GetWindowText(hwnd)
                if "Plot (LiveGraph)" == title:
                    # move this
                    res.append(hwnd)
                else:
                    # close window
                    win32gui.PostMessage(hwnd, win32con.WM_CLOSE, 0, 0)
            return True

        res = []
        win32gui.EnumWindows(callback, res)
        time.sleep(1)
        if res:
            r = self.geo
            win32gui.SetWindowPos(res[0], win32con.HWND_TOP,
                                  r.x(), r.y(), r.width(), r.height(), 0)

## this is the code starting the process
p = subprocess.Popen(
    cmd,
    shell=False,
    stdin=None,
    stdout=None,
    stderr=None,
    close_fds=True,
    creationflags=subprocess.DETACHED_PROCESS
)
self.subprocesses.append(p)
cb = FitLgWins(p.pid, self.lg_geometries[pos])
timer = threading.Timer(4, cb)
timer.start()  # will execute delayed cb after 4 seconds

I am not able to understand what I am doing wrong. It feels like the Java app is just not ready to handle these events if they are not delivered at the right time (what is the right time?), or otherwise some weird bug is triggered.
doc_2438
My manager suggested I automate SoapUI through Robot Framework. I found one library, but even the library given below doesn't seem to be well documented. The example given in the library is also specific to SOAP-based web services; I am looking for REST web service testing through SoapUI automation, not SOAP-based web services.

https://github.com/pavlobaron/robotframework-soapuilibrary

So please advise me on REST web service testing through SoapUI automation in Robot Framework. Another approach is REST web service test automation through Robot Framework without the SoapUI tool; a well-documented library is available:

http://peritus.github.io/robotframework-httplibrary/HttpLibrary.html

Could anyone advise me on the above two approaches to Robot Framework test automation for REST web services?

A: For testing RESTful services you can use the Requests library. The home page for this library is https://github.com/bulkan/robotframework-requests/

For testing SOAP services you can use the Suds library. The home page for this library is https://github.com/ombre42/robotframework-sudslibrary

Links to both of these, and many others, are available on the robotframework home page. Here's a quick link: http://robotframework.org/#test-libraries

Here is an example that connects to a RESTful service and verifies that it returns a status code of 200, and that the JSON data has some specific keys (note that this test passes at the time that I wrote it, but if the API changes between the time I wrote it and the time you're reading this, it may fail):

*** Settings ***
| Library | RequestsLibrary
| Library | Collections

*** Variables ***
| ${SERVICE_ROOT} | http://api.openweathermap.org
| ${SERVICE_NAME} | openweathermap

*** Test Cases ***
| Example RESTful API test
| | [Documentation] | Example of how to test a RESTful service
| |
| | Create session | ${SERVICE_NAME} | ${SERVICE_ROOT}
| | ${response}= | Get | ${SERVICE_NAME} | /data/2.5/weather?q=chicago,il
| |
| | Should be equal as numbers | ${response.status_code} | 200
| | ... | Expected a status code of 200 but got ${response.status_code} | values=False
| |
| | ${json}= | To JSON | ${response.content}
| | :FOR | ${key} | IN
| | ... | coord | sys | weather | base | main | wind | clouds | dt | id | name | cod
| | | Run keyword and continue on failure
| | | ... | Dictionary should contain key | ${json} | ${key}
| | | ... | expected json result should contain key '${key}' but did not

A: This is my blog about how I integrate SoapUI and RF: http://qatesterblog.blogspot.com/2018/04/integrating-soapui-into-robot-framework.html

In summary:

* One keyword runs each test case via testrunner with the switch -rMI
* The XML report is processed using the XML library. If it finds the case is "Finished", that means a Pass. If not, it will capture the failure message and throw a FAIL.
doc_2439
Suppose apps A and B are there. In app A, if I click the login button for Facebook, it opens the Facebook app on my iPhone, and after tapping OK it returns to app B and shows in app B that it is logged in to Facebook, and vice versa.

MAIN requirement: my main feature is that I have two apps, A and B, with Facebook integrated in both. If logged in to Facebook in A, it should show that both A and B are logged in. Please help me.

Thanks, Nikhil

A: Generally we can give the same app ID to different applications, but on the login screen that app name is shown for both apps; that is not a problem. Please make sure to include the code below in AppDelegate.m:

facebook = [[Facebook alloc] initWithAppId:@"481018011908421"];
NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
if ([defaults objectForKey:@"FBAccessTokenKey"] && [defaults objectForKey:@"FBExpirationDateKey"]) {
    facebook.accessToken = [defaults objectForKey:@"FBAccessTokenKey"];
    facebook.expirationDate = [defaults objectForKey:@"FBExpirationDateKey"];
    NSLog(@"FBAccessTokenKey = %@~~~~ FBExpirationDateKey = %@", facebook.accessToken, facebook.expirationDate);
}

And I think the app ID is not required in Info.plist and needs to be given in the app delegate.
doc_2440
public class Foo {
    public readonly int A = 1;
    public readonly int B = 2;
}

When I run the VS2010 built-in Code Analysis tool, I get 2 identical warnings: that 'field '...' is visible outside of its declaring type, change its accessibility to private and add a property, with the same accessibility as the field has currently, to provide access to it'. I want to suppress this warning for all fields in my class Foo, but I don't want to mark every field with a SuppressMessage attribute like this:

public class Foo {
    [SuppressMessage("Microsoft.Design", "CA1051:DoNotDeclareVisibleInstanceFields")]
    public readonly int A = 1;

    [SuppressMessage("Microsoft.Design", "CA1051:DoNotDeclareVisibleInstanceFields")]
    public readonly int B = 2;
}

I want to mark all class members using code like this:

[SuppressMessage("Microsoft.Design", "CA1051:DoNotDeclareVisibleInstanceFields")]
public class Foo {
    public readonly int A = 1;
    public readonly int B = 2;
}

But this code doesn't work; I still get a code analysis warning. How can I do it correctly?

A: There is no way to suppress more than 1 message at a time using SuppressMessageAttribute. A discussion can be found here, but the relevant part is:

You are running into a common misunderstanding about SuppressMessage. Each time you put a SuppressMessage in a source file, you suppress exactly one problem (one "row" in the grid). Period. A SuppressMessage may be placed either "near" the violation or at the module-level. Module-level, assembly-level, and global suppression all mean the same thing. By placing at the module-level, you do not suppress multiple instances of the problem at once. You merely get to locate the SuppressMessage in a different place of the code. The main benefit is that you can, for example, collect all the suppressions related to the assembly in a single file (for example, GlobalSuppressions.cs). When you use a module-level SuppressMessage, you must specify the Target. The Target must match exactly what is reported in the GUI for a violation of the rule. There is no way to use SuppressMessage to suppress a rule for the entire scope of a class or the entire scope of a namespace.

A: You can create a Code Analysis rules file with a set of rules like:

<?xml version="1.0" encoding="utf-8"?>
<RuleSet Name="New Rule Set" Description=" " ToolsVersion="10.0">
  <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
    <Rule Id="CA1111" Action="Ignore" />
  </Rules>
</RuleSet>

See the step-by-step walkthrough:

* Code Analysis without Visual Studio 2010
* Download an example
doc_2441
I've written the following errorformat. The error message is correctly extracted, but file and line numbers are not.

CompilerSet errorformat=%E%n)\ %.%#,
    \%C%m,
    \%+C%$,
    \%C%f:%l,
    \%Z%$

PHPUnit's output looks something like this:

PHPUnit 3.5.12 by Sebastian Bergmann.

............................................................... 63 / 134 ( 47%)
.........................E.....

Time: 0 seconds, Memory: 11.25Mb

There was 1 error:

1) SomeClassTest::testSomething
Undefined property: SomeClass::$var

/path/to/SomeClass.php:99
/path/to/SomeClassTest.php:15

FAILURES!
Tests: 94, Assertions: 170, Errors: 1.

Press ENTER or type command to continue

I'm happy for the reported file and line to be either the first or last entry in the stack trace. The deepest call is the actual source of the issue. Jumping to the top-level call means I can use to step down into the call stack. I would prefer the latter, SomeClassTest.php:15 in the example above.

A: I think the problem is the phrasing of the %Z rule. First I came up with this:

:set errorformat=%E%n)\ %.%#,%Z%f:%l,%C%m,%-G%.%#

That'll catch the first filename and associate it with the error message. For some reason, associating the last filename mentioned was a whole lot harder. I wasn't able to do it with efm, but instead hacked together this Python filter:

import sys
import re

errors = []

OTHER = 0
MESSAGE = 1
FILE_LINE = 2
next_is = OTHER

lines = sys.stdin.readlines()
for line in lines:
    line = line.strip()
    if (next_is == OTHER):
        if (re.search("^[0-9]+\)", line)):
            next_is = MESSAGE
    elif (next_is == MESSAGE):
        errors.append([line, ''])
        next_is = FILE_LINE
    elif (next_is == FILE_LINE):
        if (re.search("^.+:[0-9]+", line)):
            errors[-1][1] = line
        elif (len(line) == 0 and len(errors[-1][1]) > 0):
            next_is = OTHER

for error in errors:
    print "{0}:{1}".format(error[1], error[0])

This will capture all the errors and output them in a single-line format. The associated filename and line number are the last ones mentioned for the error.
This script clobbers all other output, but that'd be solved by adding e.g. a print line after line = line.strip().
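For reference, the same state machine can be written as a reusable Python 3 function and checked against the sample output above (the function and argument names here are my own, not part of the original filter):

```python
import re

def phpunit_errors(output, last_frame=True):
    """Parse PHPUnit's text output into (file, line, message) tuples.

    For each numbered failure block, the message is paired with either
    the last "path:NN" stack entry (the deepest call) or the first one.
    """
    blocks = []
    current = None
    for raw in output.splitlines():
        line = raw.strip()
        if re.match(r"^\d+\)", line):            # e.g. "1) SomeClassTest::testSomething"
            current = {"message": None, "frames": []}
            blocks.append(current)
        elif current is not None:
            m = re.match(r"^(.+):(\d+)$", line)  # e.g. "/path/to/SomeClassTest.php:15"
            if m:
                current["frames"].append((m.group(1), int(m.group(2))))
            elif line and current["message"] is None:
                current["message"] = line
    errors = []
    for b in blocks:
        if b["frames"]:
            path, num = b["frames"][-1] if last_frame else b["frames"][0]
            errors.append((path, num, b["message"]))
    return errors

sample = """There was 1 error:

1) SomeClassTest::testSomething
Undefined property: SomeClass::$var

/path/to/SomeClass.php:99
/path/to/SomeClassTest.php:15

FAILURES!"""
assert phpunit_errors(sample) == [
    ("/path/to/SomeClassTest.php", 15, "Undefined property: SomeClass::$var")
]
assert phpunit_errors(sample, last_frame=False)[0][0] == "/path/to/SomeClass.php"
```

Piped through a thin wrapper that prints each tuple as "path:line:message", this plays the same role as the stdin filter above, with the first-versus-last frame choice as a flag.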
doc_2442
Ideally I want there to be a home_team column and an away_team column. Both columns will use a foreign key referencing the id column in the Teams table.

GAMES TABLE

| id  | home_team | away_team |
| --- | ----------| --------- |
| 1   | 2         | 5         |
| 2   | 1         | 3         |
| 3   | 4         | 6         |

Would using t.references be suitable for this? Below is what I have so far... not sure if this is what I want.

create_table :games do |t|
  t.references :home_team, references: teams
  t.references :away_team, references: teams
end

Will I also need to add belongs_to to the team model?
doc_2443
For example: this is my text --- works fine, as the td width is fixed and only the height increases, but if I insert thisismytext --- then it increases the width of my table.

A: You have to use word-break: break-all for the td:

<style>
.BreakWord {
    word-break: break-all;
}
</style>

<td class="BreakWord">longtextwithoutspace</td>

A: And for this to work in Firefox: word-wrap: break-word;

A: In other words, give the browser an excuse to break the line (by including spaces in the contents) and it will. Force it to use one line (by omitting spaces) and it increases the width of the table. In the absence of any extra layout rules, what else could it do?

Edit: there may be cross-browser problems with word-break / wbr (it seems to be CSS3 now, but was formerly an IE invention)

A: If your text and cell width is fixed you could add a break:

<td>My text<br />on a different line</td>

Something I've found sometimes happens is that the text is wrapping, and the td height just hides this.

A: Use the <wbr> tag in your text every few characters (20? 30? you'll need to experiment). This will allow breaks in your text without spaces. Like this:

<td>LongLongLong<wbr>TextTextText</td>

This will be all strung together unless a break is needed.
doc_2444
public class PageFragment_Bon extends Fragment implements View.OnClickListener {

    public static final String ARG_PAGE = "ARG_PAGE";
    private int mPage;
    private Button start, stop, replay;
    private MediaPlayer mediaPlayer;
    int[] filer = new int[18];

    public static PageFragment_Bon newInstance(int page) {
        Bundle args = new Bundle();
        args.putInt(ARG_PAGE, page);
        PageFragment_Bon fragment = new PageFragment_Bon();
        fragment.setArguments(args);
        return fragment;
    }

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mPage = getArguments().getInt(ARG_PAGE);
    }

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        View view = inflater.inflate(R.layout.fragment_boenner, container, false);
        start = (Button) view.findViewById(R.id.start);
        start.setOnClickListener(this);
        stop = (Button) view.findViewById(R.id.stop);
        stop.setOnClickListener(this);
        replay = (Button) view.findViewById(R.id.replay);
        replay.setOnClickListener(this);

        filer[2] = R.raw.takbira;
        filer[4] = R.raw.alfatiha;
        filer[14] = R.raw.tashahhud;
        filer[15] = R.raw.salat;
        filer[16] = R.raw.assalam;

        if (filer[mPage] != 0) {
            start.setVisibility(View.VISIBLE);
            stop.setVisibility(View.VISIBLE);
            replay.setVisibility(View.VISIBLE);
        }
        return view;
    }

    @Override
    public void onPause() {
        super.onPause();
        if (mediaPlayer != null) {
            mediaPlayer.stop();
            mediaPlayer.reset();
            mediaPlayer.release();
        }
    }

    @Override
    public void onClick(View v) {
        if (mediaPlayer == null)
            mediaPlayer = MediaPlayer.create(getActivity().getBaseContext(), filer[mPage]); // add this line
        if (v == start) {
            try {
                mediaPlayer.start();
            } catch (Exception e) {
                e.printStackTrace();
            }
        } else if (v == stop) {
            mediaPlayer.pause();
        } else if (v == replay) {
            mediaPlayer.seekTo(0);
            mediaPlayer.start();
        }
    }

    @Override
    public void setUserVisibleHint(boolean isVisibleToUser) {
        super.setUserVisibleHint(isVisibleToUser);
        if (!isVisibleToUser) {
            if (mediaPlayer != null) {
                if (mediaPlayer.isPlaying()) {
                    try {
                        mediaPlayer.pause();
                        mediaPlayer.seekTo(0);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    }
}

This is what I get in the logcat:

05-07 13:56:06.726 31550-31550/com.app.hudhud.myapp E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.app.hudhud.myapp, PID: 31550
java.lang.RuntimeException: Unable to resume activity {com.app.hudhud.myapp/com.app.hudhud.myapp.Bon}: java.lang.IllegalStateException
    at android.app.ActivityThread.performResumeActivity(ActivityThread.java:4156)
    at android.app.ActivityThread.handleResumeActivity(ActivityThread.java:4250)
    at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1839)
    at android.os.Handler.dispatchMessage(Handler.java:102)
    at android.os.Looper.loop(Looper.java:158)
    at android.app.ActivityThread.main(ActivityThread.java:7229)
    at java.lang.reflect.Method.invoke(Native Method)
    at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1230)
    at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1120)
Caused by: java.lang.IllegalStateException
    at android.media.MediaPlayer._start(Native Method)
    at android.media.MediaPlayer.start(MediaPlayer.java:1425)
    at com.app.hudhud.myapp.PageFragment_Bon.onResume(PageFragment_Bon.java:148)
    at android.support.v4.app.Fragment.performResume(Fragment.java:2235)
    at android.support.v4.app.FragmentManagerImpl.moveToState(FragmentManager.java:1346)
    at android.support.v4.app.FragmentManagerImpl.moveFragmentToExpectedState(FragmentManager.java:1528)
    at android.support.v4.app.FragmentManagerImpl.moveToState(FragmentManager.java:1595)
    at android.support.v4.app.FragmentManagerImpl.dispatchResume(FragmentManager.java:2898)
    at android.support.v4.app.FragmentController.dispatchResume(FragmentController.java:223)
    at android.support.v4.app.FragmentActivity.onResumeFragments(FragmentActivity.java:509)
    at android.support.v4.app.FragmentActivity.onPostResume(FragmentActivity.java:498)
    at android.support.v7.app.AppCompatActivity.onPostResume(AppCompatActivity.java:172)
    at android.app.Activity.performResume(Activity.java:7016)
    at android.app.ActivityThread.performResumeActivity(ActivityThread.java:4145)
    at android.app.ActivityThread.handleResumeActivity(ActivityThread.java:4250)
    at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1839)
    at android.os.Handler.dispatchMessage(Handler.java:102)
    at android.os.Looper.loop(Looper.java:158)
    at android.app.ActivityThread.main(ActivityThread.java:7229)
    at java.lang.reflect.Method.invoke(Native Method)
    at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1230)
    at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1120)

Why does this happen? Have I missed something?

A: Note that

Caused by: java.lang.IllegalStateException
    at android.media.MediaPlayer._start(Native Method)

When the app is closed, I think, the MediaPlayer's state is not properly handled. On relaunching the app, the media player's state is different, which does not allow it to start playing. I think, when you close the app, onPause() is invoked and you release the MediaPlayer instance. But when you relaunch the app (maybe you bring it back from the background via the task manager), there is no initialized MediaPlayer and you try to play media.
doc_2445
The way the process works is a SPA/view app goes through several pages and builds up the details of a purchase. When the test/user arrives at the last page they are presented with a 'click and pay' button. On clicking this button a modal window opens that displays the checkout (it's a third party checkout, I cannot change any of the code; manually everything works fine and we need these tests automated). Once the 'Click and Pay' window opens it presents me with the form and the 'background' window greys out with a spinner.

When I originally worked on this, it wasn't a problem because the checkout was completed using a redirect, so the user was kept on the same page, with no opened windows, and would interact as normal.

Can someone please point me in the direction of how I can interact with that page? Filling in all the details is not an issue; it's just trying to get focus on the new checkout window.

A: So after writing that question and using the term 'Focus' I suddenly had a few synapses firing and came across this solution: [Codeception] Switching to a New Window. Basically it provides a simple script that will take the user to the newly opened window. A lot of hassle, but I got there in the end.
doc_2446
Just to throw a number here: I'd like a div to be 100% in size if the viewport is 480px, but when the viewport increases in size I'd like the width of the div to shrink. Let's say once the viewport is around 1024px, the div's width should be 50%. I'm guessing the only solution would be with JavaScript. I've been scratching my head for days and I can't seem to find a way of how to do this. Can anybody help?

A: I guess you can achieve this with CSS media queries if you don't need to have it continuously change. For example:

@media all and (max-width: 480px) {
    div {
        width: 100%;
    }
}

@media all and (min-width: 481px) and (max-width: 1024px) {
    div {
        width: 50%;
    }
}

This code gives the div a width of 100% for viewports equal to or smaller than 480px, and gives it 50% for sizes 481-1024px. You can tweak it on your own.
doc_2447
<!doctype html>
<head>
    <title>Test</title>
</head>
<body>
    <div id="form">
        <form id="address">
            <p>Route:</p>
            <input type="text" name="Address" placeholder="Origin">
            <input type="text" name="Address" placeholder="Destination">
            <button type="button" id="add" onclick="addField('address')">Add field</button>
        </form>
    </div>
    <script type="text/javascript">
        function addField(id) {
            // OnClick function for adding additional fields for
            // transit routes with multiple destinations.
            var field = `<br/><br/>
                <input type="text" name="Address" placeholder="Origin">
                <input type="text" name="Address" placeholder="Destination">`
            document.getElementById(id).innerHTML += field
        }
    </script>
</body>
</html>

I can fetch an array of all the elements within:

var elements = document.getElementById("address").elements;

But what I want to do is link every Destination field to the next Origin field, so that the Destination in the last field ends up as the Origin for the next row. I imagine it's something like below, but it's not clicking in my head.

elements[elements.indexOf(this)-1].value;
doc_2448
To me it seems like too many gl**** calls, which I guess have some overhead. For example, here you see each frame several blocks like:

// do this for each mesh in scene

// vertexes
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glVertexAttribPointer( 0, 3, GL_FLOAT,GL_FALSE,0,(void*)0);

// normals
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, normal_buffer );
glVertexAttribPointer( 1, 3, GL_FLOAT,GL_FALSE,0,(void*)0);

// UVs
glEnableVertexAttribArray(2);
glBindBuffer(GL_ARRAY_BUFFER, uv_buffer );
glVertexAttribPointer( 2, 2, GL_FLOAT,GL_FALSE,0,(void*)0);

// ...
glDrawArrays(GL_TRIANGLES, 0, nVerts );

// ...
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);

Imagine you have not just one but 100 different meshes, each with its own VBOs for vertexes, normals, UVs. Should I really do this procedure each frame for each of them? Sure, I can encapsulate that complexity into some functions/objects, but I worry about the overhead of these gl**** function calls. Is it not possible to move some part of this machinery from the per-frame loop into scene setup?

Also I read that a VAO is a way to pack the corresponding VBOs for one object together, and that binding a VAO automatically binds the corresponding VBOs. So I was thinking that maybe one VAO for each mesh (not instance) is how it should be done - but according to this answer it does not seem so?

A: First things first: Your concerns about GL call overhead have been addressed with the introduction of Vertex Array Objects (see @Criss answer). However the real problem with your train of thought is that you equate VBOs with geometry meshes, i.e. give each geometry its own VBO. That's not how you should see and use VBOs. VBOs are chunks of memory and you can put the data of several objects into a single VBO; you don't have to draw the whole thing, you can limit draw calls to subsets of a VBO. And you can coalesce geometries with similar or even identical drawing setup and draw them all at once with a single draw call, either by having the right vertex index list or by use of instancing.

When it comes to the binding state of textures… well, yeah, that's a bit more annoying. You really have to do the whole binding dance when switching textures. That's why in general you sort geometry by texture/shader before drawing, so that the amount of texture switches is minimized. The last 3 or 4 generations of GPUs (as of late 2016) do support bindless textures though, where you can access textures through a 64 bit handle (effectively the address of the relevant data structure in some address space) in the shader. However bindless textures did not yet make it into the core OpenGL standard and you have to use vendor extensions to make use of it.

Another interesting approach (popularized by Id Tech 4) is virtual textures. You can allocate sparsely populated texture objects that are huge in their addressable size, but only partly populated with data. During program execution you determine which areas of the texture are required and swap in the required data on demand.

A: You should use vertex array objects (generated by glGenVertexArrays). Thanks to them you don't have to perform those calls every time. A vertex array object stores:

* Calls to glEnableVertexAttribArray or glDisableVertexAttribArray.
* Vertex attribute configurations via glVertexAttribPointer.
* Vertex buffer objects associated with vertex attributes by calls to glVertexAttribPointer.

Maybe this will be a better tutorial. So you can generate a VAO, bind it, perform the calls and unbind. Now in the drawing loop you just have to bind the VAO. Example:

glUseProgram(shaderId);
glBindVertexArray(vaoId);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);
glUseProgram(0);
doc_2449
import csv

output = open('output.txt', 'wb')

# this function returns the min for num.txt
def get_min(num):
    return int(open('%s.txt' % num, 'r+').readlines()[0])

# temporary variables
last_line = ''
input_list = []

# iterate over input.txt and sort the input into a list of tuples
for i, line in enumerate(open('input.txt', 'r+').readlines()):
    if i % 2 == 0:
        last_line = line
    else:
        input_list.append((last_line, line))

filtered = [(header, data[:get_min(header[-2])] + '\n') for (header, data) in input_list]
[output.write(''.join(data)) for data in filtered]
output.close()

In this code input.txt is something like this:

>012|013|0|3|M
AFDSFASDFASDFA
>005|5|67|0|6
ACCTCTGACC
>029|032|4|5|S
GGCAGGGAGCAGGCCTGTA

and num.txt is something like this:

M 4
P 10

I want the code to look up the count from num.txt by matching the last column of each header in input.txt against the first column of num.txt, and then cut the record's characters according to that value. I think the error in my code is that it only accepts an integer text file, where it should also accept a file which contains alphabets.

A: You can do it like so:

import re

min_count = 4  # this variable will contain that count integer from where to start removing
str_to_match = 'EOG6CC67M'  # this variable will contain the filename you read
input = ''  # The file input (input.txt) will go in here
counter = 0

def callback_f(e):
    global min_count
    global counter
    counter += 1
    # Check your input
    print(str(counter) + ' >>> ' + e.group())
    # Only replace the value with nothing (remove it) after a certain count
    if counter > min_count:
        return ''  # replace with nothing

result = re.sub(r'' + str_to_match, callback_f, input)

With this tactic you can keep count with a global counter and there's no need to do hard line-loops with complex structures.
Update

More detailed version with file access:

import os
import re

def callback_f(e):
    global counter
    counter += 1
    # Check your input
    print(str(counter) + ' >>> ' + e.group())

# Fetch all hash-file names and their content (count)
num_files = os.listdir('./num_files')
numbers = {}
for file in num_files:
    if file[0] != '.':
        file_c = open('./num_files/' + file)
        file_c = file_c.read()
        numbers[file.split('.')[0]] = file_c

# Now the CSV files
csv_files = os.listdir('./csv_files')
for file in csv_files:
    if file[0] != '.':
        for hash_name, min_count in numbers.iteritems():
            file_c = open('./csv_files/' + file)
            file_c = file_c.read()
            counter = 0
            result = re.sub(r'' + hash_name, callback_f, file_c)
            # Write the replaced content back to the file here

Considered directory/file structure:

+ Projects
    + Project_folder
        + csv_files
            - input1.csv
            - input2.csv
            ~ etc.
        + num_files
            - EOG6CC67M.txt
            - EOG62JQZP.txt
            ~ etc.
        - python_file.py

* The CSV files contain the big chunks of text you state in your original question.
* The Num files contain the hash-files with an Integer in them.

What happens in this script:

* Collect all Hash files (in a dictionary) and their inner count numbers
* Loop through all CSV files
* Subloop through the collected numbers for each CSV file
* Replace/remove (based on what you do in callback_f()) hashes after a certain count
* Write the output back (it's the last comment in the script, would contain the file.write() functionality)

A: The totally revised version, after a long chat with the OP:

import os
import re

# Fetch all hashes and counts
file_c = open('num.txt')
file_c = file_c.read()
lines = re.findall(r'\w+\.txt \d+', file_c)
numbers = {}
for line in lines:
    line_split = line.split('.txt ')
    hash_name = line_split[0]
    count = line_split[1]
    numbers[hash_name] = count
#print(numbers)

# The input file
file_i = open('input.txt')
file_i = file_i.read()

for hash_name, count in numbers.iteritems():
    regex = '(' + hash_name.strip() + ')'
    result = re.findall(r'>.*\|(' + regex + ')(.*?)>', file_i, re.S)
    if len(result) > 0:
        data_original = result[0][2]
        stripped_data = result[0][2][int(count):]
        file_i = file_i.replace(data_original, '\n' + stripped_data)
        #print(data_original)
        #print(stripped_data)
        #print(file_i)

# Write the input file to new input_new.txt
f = open('input_new.txt', 'wt')
f.write(file_i)
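Putting the whole thread together, the trimming the question describes can also be expressed as a couple of small Python functions (the function names are mine; this assumes num.txt maps the header's last |-separated field to the number of characters to keep):

```python
def parse_counts(num_text):
    """Parse lines like "M 4" into {"M": 4}."""
    counts = {}
    for line in num_text.splitlines():
        parts = line.split()
        if len(parts) == 2:
            counts[parts[0]] = int(parts[1])
    return counts

def trim_records(fasta_text, counts):
    """Truncate each record's sequence to counts[last header field].

    Records whose last header field has no entry in `counts` are left alone.
    """
    out = []
    for chunk in fasta_text.strip().split(">")[1:]:
        header, seq = chunk.split("\n", 1)
        seq = seq.replace("\n", "")
        keep = counts.get(header.split("|")[-1], len(seq))
        out.append(">" + header + "\n" + seq[:keep])
    return "\n".join(out)

counts = parse_counts("M 4\nP 10")
trimmed = trim_records(">012|013|0|3|M\nAFDSFASDFASDFA\n>005|5|67|0|6\nACCTCTGACC", counts)
assert ">012|013|0|3|M\nAFDS" in trimmed  # "M" -> keep 4 characters
assert "ACCTCTGACC" in trimmed            # "6" has no count -> untouched
```

This sidesteps the regex-callback machinery entirely: the key lookup handles alphabetic keys like "M" as naturally as numeric ones, which is where the original script tripped.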
doc_2450
A: This is a bit of a workaround, but you could create a file in your Isolated Storage that stores the last version number of the app. Since this file wouldn't get overwritten on update, you could read it when you update to check what the version they're updating from was. If the file doesn't exist, then that means they had the first version of the app.
doc_2451
I've been searching this for about two hours. Between Stack Overflow and Google, the closest I've come is the code below. It partially works, but the title tag reads undefined. I'm not sure what I've done wrong.

var linktext = document.getElementsByTagName('a').textContent;
var titletext = ['a', 'input', 'select', 'button', 'textarea'];

for (var i = 0; i < titletext.length; i++) {
    var elem = document.getElementsByTagName(titletext[i]);
    for (var j = 0; j < elem.length; j++) {
        elem[j].setAttribute('title', linktext);
    }
}

<a href="#">Some Link Text</a>
<a href="#">More Link Text</a>
<a href="#">Further Link Text</a>
<a href="#">Last Link Text</a>

A: You are setting the value of linktext to the textContent property of the NodeList of all a elements. NodeLists don't have textContent. You need to read it from a specific element. Given your HTML you might want

var linktext = elem[j].textContent

… but that wouldn't work for the input, select or textarea elements you have in your list.

It isn't clear where you want to copy the data from. This does sound like a really bad idea in the first place though. You seem to be trying to duplicate the text content of elements in the title attribute, which seems entirely redundant (horribly so in the case of screen readers, which will read out the text content and the (identical) title!).

A: var linktext = document.getElementsByTagName('a').textContent;

This is undefined. Here the textContent property is not defined, just because you have multiple 'a' tags.

/*var linktext = document.getElementsByTagName('a').textContent;*/
var titletext = ['a', 'input', 'select', 'button', 'textarea'];

for (var i = 0; i < titletext.length; i++) {
    var elem = document.getElementsByTagName(titletext[i]);
    for (var j = 0; j < elem.length; j++) {
        /*elem[j].setAttribute('title', linktext);*/
        elem[j].setAttribute('title', elem[j].textContent);
    }
}

<a href="#">Some Link Text</a>
<a href="#">More Link Text</a>
<a href="#">Further Link Text</a>
<a href="#">Last Link Text</a>
doc_2452
Is it possible to retrieve a data set as a numpy array from a tf.data.TFRecordDataset object?

A: You can use the tf.data.Dataset.batch() transformation and tf.contrib.data.get_single_element() to do this.

As a refresher, dataset.batch(n) will take up to n consecutive elements of dataset and convert them into one element by concatenating each component. This requires all elements to have a fixed shape per component. If n is larger than the number of elements in dataset (or if n doesn't divide the number of elements exactly), then the last batch can be smaller. Therefore, you can choose a large value for n and do the following:

import numpy as np
import tensorflow as tf

# Insert your own code for building `dataset`. For example:
dataset = tf.data.TFRecordDataset(...)  # A dataset of tf.string records.
dataset = dataset.map(...)  # Extract components from each tf.string record.

# Choose a value of `max_elems` that is at least as large as the dataset.
max_elems = np.iinfo(np.int64).max
dataset = dataset.batch(max_elems)

# Extracts the single element of a dataset as one or more `tf.Tensor` objects.
# No iterator needed in this case!
whole_dataset_tensors = tf.contrib.data.get_single_element(dataset)

# Create a session and evaluate `whole_dataset_tensors` to get arrays.
with tf.Session() as sess:
    whole_dataset_arrays = sess.run(whole_dataset_tensors)
doc_2453
tasks:
  test: {include: [bash_exec], args:['-c', 'state --m=4 in=in4.db | cppextract -f , -P NEW_MODEL /stdin Id Date {a,b,b2}{c,d}L {d1,d2,d3,d4}{x,}y | perl -lane '$F[0] = (shift @F) .".$F[0]"; $, = ":"; print @F;' | state2 --id=Id.Date wq.db -'], answer: '{{out}}/utestt.csv', n: 5, cols: [f,k]}

When parsed, it yields the following error:

Unexpected characters ($F[0] = (shift @F) .".$F[0]"; $, = ":"; print @F;'']

This command

state --m=4 in=in4.db | cppextract -f , -P NEW_MODEL /stdin Id Date {a,b,b2}{c,d}L {d1,d2,d3,d4}{x,}y | perl -lane '$F[0] = (shift @F) .".$F[0]"; $, = ":"; print @F;'

produces the right output on the linux command line but throws a yaml parser exception when run through yaml.

A: First, let's untangle the YAML file in a more readable format:

tasks:
  test: {
    include: [bash_exec],
    args:['-c', 'state --m=4 in=in4.db | cppextract -f , -P NEW_MODEL /stdin Id Date {a,b,b2}{c,d}L {d1,d2,d3,d4}{x,}y | perl -lane '$F[0] = (shift @F) .".$F[0]"; $, = ":"; print @F;' | state2 --id=Id.Date wq.db -'],
    answer: '{{out}}/utestt.csv',
    n: 5,
    cols: [f,k]
  }

The first problem is args:[; YAML requires you to separate a mapping value from the key (unless the key is a quoted scalar). Let's do that:

tasks:
  test: {
    include: [bash_exec],
    args: [
      '-c',
      'state --m=4 in=in4.db | cppextract -f , -P NEW_MODEL /stdin Id Date {a,b,b2}{c,d}L {d1,d2,d3,d4}{x,}y | perl -lane '
      $F[0] = (shift @F) .".$F[0]"; $, = ":"; print @F;' | state2 --id=Id.Date wq.db -'
    ],
    answer: '{{out}}/utestt.csv',
    n: 5,
    cols: [f,k]
  }

This makes it obvious what happens: You end the single-quoted scalar started with 'state right before the $ symbol. As we are in a YAML flow sequence (started by [), the parser expects a comma or the end of the sequence after that value. However, it finds a $, which is what it complains about. Now obviously, you don't want to stop the scalar before the $; the ' is supposed to be part of the content.
There are multiple ways to achieve this, but the most readable way is probably to define the value as a block scalar:

tasks:
  test:
    include: [bash_exec]
    args:
      - '-c'
      - >-
        state --m=4 in=in4.db |
        cppextract -f , -P NEW_MODEL /stdin Id Date {a,b,b2}{c,d}L {d1,d2,d3,d4}{x,}y |
        perl -lane '$F[0] = (shift @F) .".$F[0]"; $, = ":"; print @F;' |
        state2 --id=Id.Date wq.db -
    answer:
      - '{{out}}/utestt.csv'
      - n: 5
      - cols: [f, k]

>- starts a folded block scalar, which can span multiple lines, and the linebreaks will be folded into a space character. Note that I removed the surrounding flow mapping ({…}) and replaced it with a block mapping to be able to use a block scalar in it. I also changed answer to be a sequence which it is not currently, but it looks like it should be (it is also erroneous in the YAML you show).
doc_2454
When I'm passing a file path with a space in it (e.g. .../test/test item.jpg) I get the error

PHP Warning: unlink(): Invalid argument in ...//file location

However when I pass a filepath with no spaces in it (e.g. ../test/testitem.jpg), I do not get any errors. Why am I getting an invalid argument when I pass an encoded filepath with spaces in it? I thought that by encoding it with encodeURIComponent, the spaces in the filepath should have been encoded and taken care of? I've tried calling the functions without encoding, and I still only get the invalid argument error when the file path contains spaces. How would/should I handle the spaces in the filepaths?

My function:

function DeleteImageDP(){
    var itemid=$('#DisplayDeleteItemID').val();
    var file=$('#DisplayDeleteFilePath').val();
    var filepath=encodeURIComponent(file);
    var itempicid=$('#DisplayDeleteItemPicID').val();
    var cfm=confirm("Confirm deletion of picture? ( Note: Picture wil be deleted permanently.");
    if(cfm == true)
    {
        $.ajax({
            url:"delete/deletedp.php",
            type:"POST",
            data:"ItemID="+itemid+"&FilePath="+filepath+"&ItemPicID="+itempicid,
            success:function(){
                alert("Image successfully deleted.");
                $('#ImagePreviewDP').prop('src','').hide();
                $('#ImagePreviewDPValidate').val('');
                $('#DisplayDelete').hide();
                $('#ItemDetailsContainer').trigger('change');
            },
            error:function(){
                alert("Image could not be deleted due to an error.");
            }
        });
        return true;
    }
    else
    {
        return false;
    }
};

Edit: PHP Code

$bizid=$_SESSION['BizID'];
$itemid=$_POST['ItemID'];
$file=$_POST['FilePath'];
$filepath=realpath('..\\'.$file);
$itempicid=$_POST['ItemPicID'];

//empties dp field in items table
$delete=$cxn->prepare("UPDATE `Items` SET `ItemDP`=:deleted WHERE `BusinessID`=:bizid AND `ItemID`=:itemid");
$delete->bindValue(":bizid",$bizid);
$delete->bindValue(":itemid",$itemid);
$delete->bindValue(":deleted","NULL");
$delete->execute();

//removes from itempics
$deletepic=$cxn->prepare("DELETE FROM `ItemPics` WHERE `BusinessID`=:bizid AND `ItemID`=:itemid AND `ItemPicID`=:itempicid AND `FilePath` LIKE :search");
$deletepic->bindValue(":search","%DP");
$deletepic->bindValue(":bizid",$bizid);
$deletepic->bindValue(":itemid",$itemid);
$deletepic->bindValue(":itempicid",$itempicid);
$deletepic->execute();

if($deletepic)
{
    unlink($filepath); // <--- This is the line returning the error
    return ( true );
}
else
{
    return ( false );
}

A: The interval is something like a special char in the file names. You need to escape it in order to operate with the file. Try this

$filepath = str_replace(" ", "\ ", $filepath);
unlink($filepath);

A: I just had a similar problem and this is what works for me:

unlink(urldecode($filepath));

For more info on urldecode read this article: http://php.net/manual/en/function.urldecode.php
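The urldecode diagnosis (spaces arriving percent-encoded) is easy to demonstrate outside PHP. Here is the same round-trip in Python, purely as an illustration of what encodeURIComponent and urldecode do to a space:

```python
from urllib.parse import quote, unquote

path = "/test/test item.jpg"
encoded = quote(path)  # roughly what encodeURIComponent produces for the space
assert encoded == "/test/test%20item.jpg"
# Until it is decoded, "%20" is three literal characters, so a filesystem
# call on the encoded string looks for a file that does not exist.
assert unquote(encoded) == path
```

(Python's quote leaves the slashes alone, unlike encodeURIComponent, but the treatment of the space is the point here: decode once on the server before touching the filesystem.)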
doc_2455
from skimage import io
from skimage.color import rgb2gray
from skimage.morphology import convex_hull_image

original = io.imread('test.png')
image = rgb2gray(original)
chull = convex_hull_image(image)

I want to crop the original image according to the convex hull in order to eliminate the empty space that is in the image (original image attached), and have an image that only contains what is inside the convex hull. How could I crop the original image to reduce its size? (deleting the empty space at left and right) Thank you.

A: You can use min and max to find the border of the convex hull image.

import numpy as np

[rows, columns] = np.where(chull)
row1 = min(rows)
row2 = max(rows)
col1 = min(columns)
col2 = max(columns)
newImage = original[row1:row2, col1:col2]
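The min/max idea in the answer can be checked on a tiny synthetic mask with plain NumPy, no skimage needed for the cropping step itself (the function name is mine; note the +1 so the maximal row/column is actually included in the crop):

```python
import numpy as np

def crop_to_mask(image, mask):
    """Crop `image` to the bounding box of the True pixels in `mask`."""
    rows, cols = np.where(mask)
    r1, r2 = rows.min(), rows.max()
    c1, c2 = cols.min(), cols.max()
    return image[r1:r2 + 1, c1:c2 + 1]  # +1: slicing excludes the stop index

image = np.arange(36).reshape(6, 6)
mask = np.zeros((6, 6), dtype=bool)
mask[2:5, 1:4] = True                   # "hull" spans rows 2-4, cols 1-3
cropped = crop_to_mask(image, mask)
assert cropped.shape == (3, 3)
assert cropped[0, 0] == image[2, 1] and cropped[-1, -1] == image[4, 3]
```

Without the +1, as in the answer's `original[row1:row2, col1:col2]`, the last row and column of the hull would be sliced off.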
doc_2456
I wrote everything properly: in my XML I put the meta-data and the parent activity (example from Google; I modified the text to my app in my files):

android:parentActivityName="com.example.myfirstapp.MainActivity" >

<!-- Parent activity meta-data to support 4.0 and lower -->
<meta-data
    android:name="android.support.PARENT_ACTIVITY"
    android:value="com.example.myfirstapp.MainActivity" />

In my class I put the next line:

getSupportActionBar().setDisplayHomeAsUpEnabled(true);

But when I click this up button, the app closes, then is opened again from the main screen. I want it to return to the main screen without closing (finishing) the app. Can you help me? Thanks.

A: Well, according to this guide,

When running on Android 4.1 (API level 16) or higher, or when using ActionBarActivity from the Support Library, performing Up navigation simply requires that you declare the parent activity in the manifest file and enable the Up button for the action bar.

So, I have two Activities: FirstActivity, which is my launcher and parent Activity, and SecondActivity, which is my child Activity.

My FirstActivity code:

public class FirstActivity extends ActionBarActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.layout_main_activity);

        Button secondButton = (Button) findViewById(R.id.secondButton);
        secondButton.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                startActivity(new Intent(FirstActivity.this, SecondActivity.class));
            }
        });
    }
}

My SecondActivity code:

public class SecondActivity extends ActionBarActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.layout_place_activity);

        //enable the ActionBar behaviour
        ActionBar actionBar = getSupportActionBar();
        actionBar.setDisplayHomeAsUpEnabled(true);
    }
}

My Manifest.xml file:

<?xml version="1.0" encoding="utf-8"?>

<uses-sdk
    android:minSdkVersion="16"
    android:targetSdkVersion="20" />

<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CHANGE_WIFI_STATE" />

<application
    android:allowBackup="true"
    android:icon="@drawable/ic_launcher"
    android:label="@string/app_name"
    android:theme="@style/AppTheme" >

    <activity
        android:name="com.testes.activity.FirstActivity"
        android:label="@string/app_name" >
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
        </intent-filter>
    </activity>

    <activity
        android:name="com.testes.activity.SecondActivity"
        android:parentActivityName="com.testes.activity.FirstActivity" >
    </activity>
</application>
</manifest>

And this is all I need to have it working your way. I start my FirstActivity, click my Button to go to the SecondActivity, click the ActionBar home button and it goes back to FirstActivity.
doc_2457
* The first question is how does the server identify that it's communicating with an actual client, not someone else who's using the port. I've heard that browsers verify with servers using SHA hashing.
* The second question is about the best way to send and receive data in variables, and also identifying which is which, because the current method of splitting data doesn't seem very elegant.

Server side code to receive and send data:

NetworkStream NetStream1 = TCPSocket.GetStream();
NetStream.Read(Buffer, 0, Buffer.Length);
ReceivedData = System.Text.Encoding.ASCII.GetString(Buffer);
string[] splitter = ReceivedData.Split('-');
Variable1 = splitter[0];
Variable2 = splitter[1];

//send response
SendBuffer = Encoding.ASCII.GetBytes(ResultINT1+"-"+ResultINT2);
NetStream.Write(SendBuffer, 0, SendBuffer.Length);
NetStream.Flush();

Client code to send and receive:

NetworkStream SendStream = ClientSocket.GetStream();
byte[] SendBuffer = System.Text.Encoding.ASCII.GetBytes(V1+"-"+V2);
SendStream.Write(SendBuffer, 0, SendBuffer.Length);
SendStream.Flush();

//response
SendStream.Read(RecieveBuffer, 0, RecieveBuffer.Length);
string ResultString = System.Text.Encoding.ASCII.GetString(RecieveBuffer);
string[] splitted = ResultString.Split('-');
int R1 = Convert.ToInt32(splitted[0]);
int R2 = Convert.ToInt16(splitted[1]);

A:
* Provide some authentication mechanism.
* Use some serializer.

A: Your first question concerns authentication, which is a huge subject and has many possible implementations, although I'm not sure exactly what you mean by "someone else who's using the port". Your server should always be on the same port - that is how the client identifies a service. Regarding your second question, there are again many possibilities, but I would suggest that the simplest for a beginner would be using XmlSerializer and a simple message envelope.
* Create an XmlSerializable class, either just using simple public properties or perhaps decorating with XmlElementAttribute, XmlRootAttribute etc.
* Serialize to a MemoryStream.
* Write the bytes from the memory stream wrapped in an envelope (see later).
* Receive a complete envelope into a byte array.
* Construct a MemoryStream from the byte array.
* Use XmlSerializer to reconstruct a copy of your original object.

The envelope is critical. The simplest one is just the binary length of the serialized object. Most protocols will typically extend that with a CRC to handle possible corruption, but since Ethernet uses a strong CRC and TCP is a reliable transport (albeit with a weak CRC) that is usually overkill. The key point that beginners miss is that TCP is a streaming protocol, not a message-based protocol; thus it is perfectly possible for a sender to make a single write of say 1000 bytes and yet the receiver receives this as a number of smaller chunks. This is why you need some way to detect the end of a message, such as using a length, and why the receiver needs to accumulate received chunks until a complete message (and possibly part of the next) is received and can be deserialized. This may seem complicated but unfortunately, at the TCP level, it doesn't get any simpler than that :(

A: "The first question is how does the server identify that its communicating with an actual client, not someone else who's using the port, I've heard that browsers verify with servers using SHA hashing." The server can identify different clients by their IP addresses. See StreamReader.ReadToEnd

"Second question is about the best way to send and receive data in variables, and also identifying which is which, because the current method of splitting data doesn't seem very elegant." It depends on your protocol architecture, but a portable way to exchange values on the network is to keep them in text format (this way there is no problem of endianness, type size, ...).
That said, be careful with your variable separator: a '-' might be difficult to use with negative numbers; ' ' or ';' are more common.

A: You might want to define a communication protocol of some kind. A text-based protocol would be most straightforward to begin with; you can then read and write the "commands", each on a separate line. First, there would be a "handshake", where the client would send something like "HELLO my-awesome-protocol-v1\n" and the server would respond similarly. This way you will be sure that the other party is a client that understands the protocol, or you can close a connection which does not implement the protocol. Then there could be some way of sending the values of variables with commands like "VAR variableName 123.45\n". You can read https://en.wikipedia.org/wiki/Text-based_protocol and see http://www.ncftp.com/libncftp/doc/ftp_overview.html for inspiration.
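The length-prefixed envelope from the XmlSerializer answer can be sketched language-agnostically. The Python sketch below only illustrates the framing idea (the 4-byte big-endian header is an assumption of this example; any fixed-size header works), not the C# implementation itself:

```python
import struct

def frame(payload: bytes) -> bytes:
    # Envelope = 4-byte big-endian length header followed by the payload.
    return struct.pack(">I", len(payload)) + payload

def unframe(stream: bytes):
    # Parse complete messages out of a byte stream. TCP may deliver one
    # send() as several chunks, so anything incomplete is returned as
    # leftover to be retried once more bytes arrive.
    messages = []
    while len(stream) >= 4:
        (length,) = struct.unpack(">I", stream[:4])
        if len(stream) < 4 + length:
            break
        messages.append(stream[4:4 + length])
        stream = stream[4 + length:]
    return messages, stream

# Two messages glued together, as a receiver might actually see them.
data = frame(b"12-34") + frame(b"-5-6")
print(unframe(data))  # ([b'12-34', b'-5-6'], b'')
```

Note how a '-' inside the payload is harmless here, since message boundaries come from the length header rather than a separator character.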
doc_2458
A: In the documentation (ref: https://developer.tatumgames.com/documentation/preset/business) there are sections "Things your Business Should Know", "Which Users Are The Most Active?", "How Well Do You Retain Users?", "What Is Your Audience Like?", and "What Is Your Platform Breakdown?"

If you go to your Mikros dashboard, go to Insights under Advanced Analytics and, for example, let's say you click "Which Users Are The Most Active?" If you click Analyze Scores, the score will be calculated as specified under Activity Score in the scores documentation (ref: https://developer.tatumgames.com/documentation/scores).

A: "In my Mikros dashboard I'm having trouble understanding how scores work and where to view these scores." The scores can be found in a few locations when viewing your dashboard:

Activity Score :: Found under Insights > Which Users Are The Most Active?
* You don't have to Analyze to view the score. It is already available to you.

Reputation Score :: Found under Insights > What Is Your Audience Like?

Spending Score :: Found under Insights > Which Users Are the BIG Spenders?
doc_2459
This is my common API call in the shared file:

@Throws(Exception::class)
suspend inline fun <reified T> post(url: String, requestBody: HashMap<String, Any>?): Either<CustomException, T> {
    try {
        val response = httpClient.post<T> {
            url(BASE_URL.plus(url))
            contentType(ContentType.Any)
            if (requestBody != null) {
                body = requestBody
            }
            headers.remove("Content-Type")
            headers {
                append("Content-Type", "application/json")
                append("Accept", "application/json")
                append("Time-Zone", "+05:30")
                append("App-Version", "1.0.0(0)")
                append("Device-Type", "0")
            }
        }
        return Success(response)
    } catch (e: Exception) {
        return Failure(e as CustomException)
    }
}

It works well on Android if I call it like this:

api.post<MyDataClassHere>(url = "url", getBody()).fold(
    { handleError(it) },
    { Log.d("Success", it.toString()) }
)

But I am not able to get it to run on iOS devices; it shows me an error like this:

some : Error Domain=KotlinException Code=0 "unsupported call of reified inlined function `com.example.myapplication.shared.apicalls.SpaceXApi.post`" UserInfo={NSLocalizedDescription=unsupported call of reified inlined function `com.example.myapplication.shared.apicalls.SpaceXApi.post`, KotlinException=kotlin.IllegalStateException: unsupported call of reified inlined function `com.example.myapplication.shared.apicalls.SpaceXApi.post`, KotlinExceptionOrigin=}

Any help with this is appreciated. Thanks

A: Okay, so from the Slack conversation here it's clear that it's not possible to call this type of generic function from Swift, as Swift doesn't support reified type parameters. The only solution is to create a different function for every API call we need. For example, we can create an interface containing all the API implementations and use it from the native platforms, like this:

interface ApiClient {
    suspend fun logIn(…): …
    suspend fun createBlogPost(…): …
    // etc
}
doc_2460
df looks like this:

   index  price
   0      4
   1      6
   2      10
   3      12

I'm looking to get a continuous rolling mean of price. The goal is to have it look like this, a moving mean of all the prices:

   index  price  mean
   0      4      4
   1      6      5
   2      10     6.67
   3      12     8

thank you in advance!

A: you can use expanding:

df['mean'] = df.price.expanding().mean()
df

   index  price  mean
   0      4      4.000000
   1      6      5.000000
   2      10     6.666667
   3      12     8.000000

A: Welcome to SO: Hopefully people will soon remember you from prior SO posts, such as this one. From your example, it seems that @Allen has given you code that produces the answer in your table. That said, this isn't exactly the same as a "rolling" mean. The expanding() function Allen uses is taking the mean of the first row divided by n (which is 1), then adding rows 1 and 2 and dividing by n (which is now 2), and so on, so that the last row is (4+6+10+12)/4 = 8. This last number could be the answer if the window you want for the rolling mean is 4, since that would indicate that you want a mean of 4 observations. However, if you keep moving forward with a window size 4, and start including rows 5, 6, 7... then the answer from expanding() might differ from what you want. In effect, expanding() is recording the mean of the entire series (price in this case) as though it were receiving a new piece of data at each row. "Rolling", on the other hand, gives you a result from an aggregation of some window size. Here's another option for doing rolling calculations: the rolling() method in a pandas.DataFrame. In your case, you would do:

df['rolling_mean'] = df.price.rolling(4).mean()
df

   index  price  rolling_mean
   0      4      nan
   1      6      nan
   2      10     nan
   3      12     8.000000

Those nans are a result of the windowing: until there are enough rows to calculate the mean, the result is nan.
You could set a smaller window:

df['rolling_mean'] = df.price.rolling(2).mean()
df

   index  price  rolling_mean
   0      4      nan
   1      6      5.000000
   2      10     8.000000
   3      12     11.000000

This shows the reduction in the nan entries as well as the rolling function: it's only averaging within the size-two window you provided. That results in a different df['rolling_mean'] value than when using df.price.expanding(). Note: you can get rid of the nan by using .rolling(2, min_periods = 1), which tells the function the minimum number of defined values within a window that have to be present to calculate a result.
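The expanding mean from the first answer can be checked by hand. A small sketch of the same cumulative calculation in plain Python (lists only, no pandas, so the arithmetic is explicit):

```python
def expanding_mean(values):
    # Mean of the first k values at each position k, matching
    # what Series.expanding().mean() computes row by row.
    result, total = [], 0.0
    for i, v in enumerate(values, start=1):
        total += v
        result.append(total / i)
    return result

price = [4, 6, 10, 12]
print([round(m, 2) for m in expanding_mean(price)])  # [4.0, 5.0, 6.67, 8.0]
```

The last value, 8.0, is exactly (4+6+10+12)/4, as described in the second answer.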
doc_2461
This is the relevant part of my code:

unsigned long long betAmount = 0;
cout << "You have " << chipCount << " chips currently!" << endl;
cout << "How many chips would you like to bet?" << endl;
cout << "Must be a whole number: ";
cin >> betAmount;

It is pretty standard, unless given a negative.

A: You may use a string to get the input. Check the first byte: if it is a minus sign, the value is negative and should be rejected. Otherwise, use a stringstream to convert the string to unsigned long long.
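The answer's idea (read the raw text first, reject a leading minus sign, and only then convert) is language-independent. A hypothetical sketch of the same validation logic in Python, shown only to make the steps concrete:

```python
def parse_bet(text: str):
    # Reject anything that is not a plain non-negative whole number,
    # mirroring the "check the first byte" idea before converting.
    text = text.strip()
    if text.startswith("-"):
        return None   # negative input rejected up front
    if not text.isdigit():
        return None   # not a whole number
    return int(text)  # safe to convert now

print(parse_bet("250"))  # 250
print(parse_bet("-5"))   # None
print(parse_bet("3.5"))  # None
```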
doc_2462
ApplicationContext context = new ApplicationContext("classpath:context.xml"); MyService myService = (MyService ) context.getBean( "myService " ); However I don't see a simple way to pass properties into the configuration. For example if I want to determine the host name for the remote server at runtime within the client. I'd ideally have an entry in the Spring context like this: <bean id="myService" class="org.springframework.remoting.rmi.RmiProxyFactoryBean"> <property name="serviceUrl" value="rmi://${webServer.host}:80/MyService"/> <property name="serviceInterface" value="com.foo.MyService"/> </bean> and pass the properties to the context from the client as a parameter. I can use a PropertyPlaceholderConfigurer in the context to substitute for these properties, but as far as I can tell this only works for properties read from a file. I have an implementation that addresses this (added as an answer) but I'm looking for a standard Spring implementation to avoid rolling my own. Is there another Spring configurer (or anything else) to help initialise the configuration or am I better off looking at java config to achieve this? A: My existing solution involves defining a new MapAwareApplicationContext that takes a Map as an additional constructor argument. 
public MapAwareApplicationContext(final URL[] configURLs, final String[] newConfigLocations, final Map<String, String> additionalProperties) { super(null); //standard constructor content here this.map = new HashMap<String, String>(additionalProperties); refresh(); } It overrides postProcessBeanFactory() to add in a MapAwareProcessor: protected void postProcessBeanFactory( final ConfigurableListableBeanFactory beanFactory) { beanFactory.addBeanPostProcessor(new MapAwareProcessor(this.map)); beanFactory.ignoreDependencyInterface(MapAware.class); } The MapAwareProcessor implements postProcessBeforeInitialization() to inject the map into any type that implements the MapAware interface: public Object postProcessBeforeInitialization(final Object bean, final String beanName) { if (this.map != null && bean instanceof MapAware) { ((MapAware) bean).setMap(this.map); } return bean; } I then add a new bean to my config to declare a MapAwarePropertyPlaceholderConfigurer: <bean id="propertyConfigurer" class="com.hsbc.r2ds.spring.MapAwarePropertyPlaceholderConfigurer"/> The configurer implements MapAware, so it will be injected with the Map as above. 
It then implements resolvePlaceholder() to resolve properties from the map, or delegate to the parent configurer: protected String resolvePlaceholder(final String placeholder, final Properties props, final int systemPropertiesMode) { String propVal = null; if (this.map != null) { propVal = this.map.get(placeholder); } if (propVal == null) { propVal = super.resolvePlaceholder(placeholder, props); } return propVal; } A: See http://forum.springsource.org/showthread.php?t=71815 TestClass.java package com.spring.ioc; public class TestClass { private String first; private String second; public String getFirst() { return first; } public void setFirst(String first) { this.first = first; } public String getSecond() { return second; } public void setSecond(String second) { this.second = second; } } SpringStart.java package com.spring; import java.util.Properties; import com.spring.ioc.TestClass; import org.springframework.context.support.ClassPathXmlApplicationContext; import org.springframework.beans.factory.config.PropertyPlaceholderConfigurer; public class SpringStart { public static void main(String[] args) throws Exception { PropertyPlaceholderConfigurer configurer = new PropertyPlaceholderConfigurer(); Properties properties = new Properties(); properties.setProperty("first.prop", "first value"); properties.setProperty("second.prop", "second value"); configurer.setProperties(properties); ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext(); context.addBeanFactoryPostProcessor(configurer); context.setConfigLocation("spring-config.xml"); context.refresh(); TestClass testClass = (TestClass)context.getBean("testBean"); System.out.println(testClass.getFirst()); System.out.println(testClass.getSecond()); } } spring-config.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans 
http://www.springframework.org/schema/beans/spring-beans-2.5.xsd">

    <bean id="testBean" class="com.spring.ioc.TestClass">
        <property name="first" value="${first.prop}"/>
        <property name="second" value="${second.prop}"/>
    </bean>
</beans>

Output:

first value
second value

A: Update: Based on the question update, my suggestion is:

* Create a ServiceResolver bean which handles whatever you need to handle based on client input;
* Declare this bean as a dependency of the relevant services;
* At runtime, you may update / use this bean however you see fit.

The ServiceResolver can then, either on the init-method or on each invocation, determine the values to return to the client, based on e.g. JNDI lookups or environment variables. But before doing that, you might want to take a look at the configuration options available. You can either:

* add property files which don't have to be present at compile time;
* look up values from JNDI;
* get values from the System properties.

If you need to look up properties from a custom location, take a look at org.springframework.beans.factory.config.BeanFactoryPostProcessor and how org.springframework.beans.factory.config.PropertyPlaceholderConfigurer is implemented. The basic idea is that you get the beans with the 'raw' properties, e.g. ${jdbcDriverClassName}, and then you get to resolve them and replace them with the desired values.

A: PropertyPlaceholderConfigurer can fetch properties from a file, that's true, but if it can't find them, it falls back to using system properties. This sounds like a viable option for your client application: just pass the system property in using -D when you launch the client. From the javadoc:

A configurer will also check against system properties (e.g. "user.dir") if it cannot resolve a placeholder with any of the specified properties. This can be customized via "systemPropertiesMode".
A: Create an RmiProxyFactoryBean instance and configure the serviceUrl property directly in your code:

String serverHost = "www.example.com";
RmiProxyFactoryBean factory = new RmiProxyFactoryBean();
factory.setServiceUrl("rmi://" + serverHost + ":80/MyService");
factory.setServiceInterface(MyService.class);
try {
    factory.afterPropertiesSet();
} catch (Exception e) {
    throw new RuntimeException("Problem initializing myService factory", e);
}
MyService myService = (MyService) factory.getObject();
doc_2463
EG: Client address, etc. at top of letter, then:

You have requested the following services:
Service001
Service002
Service003
Service004 (to 13)

Our fees will be as follows:
Fee001
Fee002 (to 13)

The "Our fees will be as follows" is a label and will not move up if a client only has, say, 2 services. Is there a workaround, or am I better off writing some VBA to create something like a memo field that lumps all the services into it, so that one memo field can grow or shrink? Of course I'd probably have the same issue, I am thinking. I would appreciate some insight. I am at the beginning of this process, so changing the bones of it wouldn't be a big deal. Thank you so much for at least reading this :)
doc_2464
Reproducible example:

```{r test, results = "asis"}
stargazer::stargazer(attitude, type = "html", digits = 2,
                     summary.stat = c("mean", "sd", "median", "min", "max"))
```

A: I am heavily biased towards htmlTable::htmlTable, but I will add this anyway. htmlTable, as the name would suggest, is only for making tables, so all the bells and whistles of stargazer are not included, but this function has many options for customizing the output. As such you may need to do extra work to get the output you need to put into a table. Similar to the other answer, you can use css to manipulate the style of the table. For example, you can pass css to css.cell:

---
output: html_document
---

```{r test, results='asis', include=FALSE}
stargazer::stargazer(attitude, type = "html", digits = 2,
                     summary.stat = c("mean","sd","median","min", "max"))
```

```{r}
## apply a list of functions to a list or vector
f <- function(X, FUN, ...) {
  fn <- as.character(match.call()$FUN)[-1]
  out <- sapply(FUN, mapply, X, ...)
  setNames(as.data.frame(out), fn)
}
(out <- round(f(attitude, list(mean, sd, median, min, max)), 2))
```

```{r, results='asis'}
library('htmlTable')
htmlTable(out, cgroup = 'Statistic', n.cgroup = 5,
          caption = 'Table 1: default')
htmlTable(out, cgroup = 'Statistic', n.cgroup = 5,
          caption = 'Table 1: padding',
          ## padding to cells: top side bottom
          css.cell = 'padding: 0px 10px 0px;')
```

The following tables are for no padding and extra padding on the sides.

A: Try going into the actual HTML in the file after you create it. There will be a table tag that establishes the style, width, and so on. For example, if the table tag is <table style="text-align:center">, set the width manually as <table style="text-align:center" width="3000">
doc_2465
Here is what the DOM looks like: <em class="x-btn-split" unselectable="on" id="ext-gen161"> <button type="button" id="ext-gen33" class=" x-btn-text"> <div class="mruIcon"></div> <span>Accounts</span> </button> ::after </em> This is what the above element looks like. The Left hand side of the object is the 'button' element and the :after element is the right hand side with the arrow which would bring down a dropdown menu when clicked. As you can see that the right hand side has no identifiers whatsoever and that is partially what is making this difficult to do. I have seen these two links in stackoverflow and have attempted to combine the answers to form my solution, but to no avail. Clicking an element in Selenium WebDriver using JavaScript Locating pseudo element in Selenium WebDriver using JavaScript Here is one my attempts: string script = "return window.getComputedStyle(document.querySelector('#ext-gen33'),':before')"; IJavaScriptExecutor js = (IJavaScriptExecutor) Session.Driver; js.ExecuteScript("arguments[0].click(); ", script); In which I get this error: System.InvalidOperationException: 'unknown error: arguments[0].click is not a function (Session info: chrome=59.0.3071.115) (Driver info: chromedriver=2.30.477700 (0057494ad8732195794a7b32078424f92a5fce41),platform=Windows NT 6.1.7601 SP1 x86_64)' I've also tried using the Actions class in Selenium to move the mouse in reference to the left hand side, similar to this answer as well. I think it may be because I don't know what the offset is measured in and the documentation doesn't seem to give any indication. I think it is in pixels?? Actions build = new Actions(Session.Driver); build.MoveToElement(FindElement(By.Id("ext-gen33"))).MoveByOffset(235, 15).Click().Build().Perform(); This attempt seems to click somewhere as it gives no errors, but I'm not really sure where. I'm attempting to automate Salesforce (Service Cloud) in c# if that helps. Maybe someone can offer a solution? 
A: I've encountered the same problem while writing Selenium tests for Salesforce and managed to solve it by taking direct control of the mouse using Actions. The wrapper table for this button has a hardcoded width of 250px, and you have spotted that. To locate where the mouse is, you can use the ContextClick() method instead of Click(). It simulates the right mouse button, so it will always open the browser menu. If you do:

Actions build = new Actions(Session.Driver);
build.MoveToElement(FindElement(By.Id("ext-gen33"))).ContextClick().Build().Perform();

you will spot that the mouse moves to the middle of the WebElement, not the top left corner (I thought that it does too). Since that element's width is constant, we can move the mouse just by 250 / 2 - 1 to the right and it will work :) code:

Actions build = new Actions(Session.Driver);
build.MoveToElement(FindElement(By.Id("ext-gen33"))).MoveByOffset(124, 0).Click().Build().Perform();

A: For those who are trying to do this in Python, the solution is below:

elem = driver.<INSERT THE PATH TO ELEMENT HERE>
ActionChains(driver).move_to_element_with_offset(elem, 249, 1).click().perform()

Basically here I'm finding my element in the DOM and assigning it to a WebElement. The WebElement is then passed to the method move_to_element_with_offset as a param. I got the px values for the element from developer tools.

PS: use this import: from selenium.webdriver.common.action_chains import ActionChains

You can read more about the ActionChains class and its method move_to_element_with_offset here: http://selenium-python.readthedocs.io/api.html. Hope this helps.

A: Maciej's answer above worked with WebDriver, but not with the RemoteWebDriver (Selenium 3.12.0) against Firefox V.56. We needed a solution that worked for both local and remote. Ended up using keyboard shortcuts to invoke the Navigation Menu dropdown. As an added benefit, this also removes the need to use offsets.
String navigationMenuDropdownShortcutKeys = Keys.chord(Keys.ESCAPE, "v");
new Actions(driver)
    .sendKeys(navigationMenuDropdownShortcutKeys)
    .perform();

A: I'm going to provide an alternative that may work for some scenarios; at least it did the trick for me, and it is relatively easy to implement in any language using Selenium via a JS script. In my scenario there was an ::after pseudo-element containing the functionality of a button. This button was positioned relative to another element under it. So I did the following:

* Get the element that I can; in this question's scenario that would be the span.
* Get the coordinates of the element.
* Calculate the coordinates, relative to that element, of the pseudo-element you want to click.
* Click on those coordinates.

This is my code using perl, but I'm sure you can do the same in any language:

my $script="
function click_function(x, y) {
    console.log('Clicking: ' + x + ' ' + y);
    var ev = new MouseEvent('click', {
        'view': window,
        'bubbles': true,
        'cancelable': true,
        'screenX': x,
        'screenY': y
    });
    var el = document.elementFromPoint(x, y);
    el.dispatchEvent(ev);
}
var element = document.getElementById('here_put_your_id'); //replace elementId with your element's Id.
var rect = element.getBoundingClientRect();
var elementLeft,elementTop; //x and y
var scrollTop = document.documentElement.scrollTop? document.documentElement.scrollTop:document.body.scrollTop;
var scrollLeft = document.documentElement.scrollLeft? document.documentElement.scrollLeft:document.body.scrollLeft;
elementTop = rect.top+scrollTop;
elementLeft = rect.left+scrollLeft;
console.log('Coordinates: ' + elementLeft + ' ' + elementTop)
click_function(elementLeft*1.88, elementTop*1.045) // here put your relative coordinates
";
$driver->execute_script($script);

A: After going through numerous articles and blogs, I figured out a way to detect the pseudo-element in the DOM in Selenium.
And validate, based on certain conditions, whether it is present or not.

Step 1: Find the path to the parent element which contains the pseudo-element and pass it to findElement as shown below:

WebElement pseudoEle = driver.findElement(path);

Step 2:

String display = ((JavascriptExecutor) getWebDriver()).executeScript("return window.getComputedStyle(arguments[0], ':after').getPropertyValue('display');", pseudoEle).toString();

In the above line of code, pass the desired pseudo-element in place of ":after" (in my case I was looking for 'after') and the property whose value changes based on whether the pseudo-element is present or not (in my case it was 'display').

Note: When the pseudo-element was present, the JavaScript code returned 'Block', which in turn I saved in the display field. Use it according to the scenario.

Steps to determine the right property value for your case:

* Inspect the element.
* Navigate to the parent element of the pseudo-element.
* Under the Styles tab, figure out the field (green in color) whose value changes when the pseudo-element is present and when it is not.

I am sure this will help you to a great extent. Kindly like and support; it would encourage me to post more solutions like this. Thanks!
doc_2466
But I couldn't understand the efficiency of branch and bound and backtracking as compared to a brute force search. In the worst case, does brute force equal B&B or backtracking?

A: With exhaustive search, you would compute all N! possible routes between the nodes. With backtracking, you might compute a route visiting half the nodes, notice that it is already more expensive than the best route found so far, and stop investigating that partial route at that point. By doing so, you have skipped computing all of the routes that are produced by completing that partial route, thus saving time over exhaustive search, which would have continued to check them all.
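The pruning described in the answer can be made concrete with a small sketch. The distance matrix below is made up for illustration; the point is only that the bounded search returns the same optimal tour cost as trying all N! routes, while skipping every completion of a partial route that is already too expensive:

```python
from itertools import permutations

def tour_cost(dist, tour):
    # Cost of visiting the cities in order and returning to the start.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def brute_force(dist):
    # Fix city 0 as the start; try all (n-1)! orderings of the rest.
    n = len(dist)
    return min(tour_cost(dist, (0,) + p) for p in permutations(range(1, n)))

def branch_and_bound(dist):
    n = len(dist)
    best = [float("inf")]

    def extend(partial, cost):
        if cost >= best[0]:
            return  # bound: this partial route is already no better than a full tour
        if len(partial) == n:
            best[0] = min(best[0], cost + dist[partial[-1]][0])
            return
        for city in range(1, n):
            if city not in partial:
                extend(partial + [city], cost + dist[partial[-1]][city])

    extend([0], 0)
    return best[0]

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(brute_force(dist), branch_and_bound(dist))  # 21 21
```

The bound is safe because edge costs are non-negative: completing a partial route can only add cost, so nothing pruned could have beaten the best tour found so far. In the worst case, though, the bound never fires and the search degenerates to the same N! work as brute force.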
doc_2467
ERROR Error: Element type is invalid: expected a string (for built-in components) or a class/function (for composite components) but got: undefined. You likely forgot to export your component from the file it's defined in, or you might have mixed up default and named imports

I already tried to refresh, but it still won't work.
doc_2468
ranging <- function(x) {(x - min(x)) / (max(x) - min(x))}

But obviously it gets the min and the max from the whole table. I'm not used to R programming; how can I get the min and the max for each column, in order to normalize each column properly?

A: Assuming that you are using a data.frame named df without any factor variables, the following code should work without any special packages (as suggested by @user20650):

ranging <- function(x) {(x - min(x)) / (max(x) - min(x))}
dfNorm <- lapply(df, ranging)

If your data.frame has factor variables, which should not be normalized, you can use the following:

dfNorm <- lapply(df, function(x) if (is.factor(x)) x else ranging(x))

A: If we are using dplyr, mutate_each can take the ranging function and apply it to all the columns of the dataset:

library(dplyr)
df1 %>% mutate_each(funs(ranging))

data

df1 <- structure(list(v3 = c(0L, 2L, 1L, 4L, 2L, 2L, 2L, 2L), v4 = c(1L, 4L, 2L, 5L, 3L, 3L, 3L, 3L), v5 = c(2L, 6L, 4L, 6L, 4L, 4L, 4L, 4L), v6 = c(3L, 5L, 7L, 4L, 5L, 5L, 5L, 5L)), .Names = c("v3", "v4", "v5", "v6"), row.names = c(NA, -8L), class = "data.frame")
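For comparison, the same column-wise min-max formula, (x - min) / (max - min), can be written in plain Python. This is only an illustration of the arithmetic applied per column, not part of the R answers:

```python
def ranging(column):
    # Rescale one column to [0, 1] using its own min and max.
    lo, hi = min(column), max(column)
    return [(x - lo) / (hi - lo) for x in column]

def normalize(table):
    # Apply the rescaling column by column, like lapply(df, ranging).
    return {name: ranging(col) for name, col in table.items()}

df = {"v3": [0, 2, 1, 4], "v4": [1, 4, 2, 5]}
print(normalize(df))
# {'v3': [0.0, 0.5, 0.25, 1.0], 'v4': [0.0, 0.75, 0.25, 1.0]}
```

Each column is scaled by its own min and max, which is exactly what the original one-function-over-the-whole-table version got wrong.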
doc_2469
Note: I changed the server port for my springboot test app to 9743 Details: Pom has the following <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.6.0</version> </parent> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> We have the following custom HealthIndicator. It's to check MarkLogic db health and ... I removed the MarkLogic part and mimic'ed its failure by throwing an exception in the health() method: import org.springframework.boot.actuate.health.Health; import org.springframework.boot.actuate.health.HealthIndicator; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; @Configuration public class MarkLogicDBHealthIndicatorConfig { private final Logger logger = LoggerFactory.getLogger(MarkLogicDBHealthIndicatorConfig.class); @Bean public MarkLogicDBHealthIndicator marklogic() { logger.info("Entered MarkLogicDBHealthIndicatorConfig.marklogic(). Creating and returning new MarkLogicDBHealthIndicator()"); return new MarkLogicDBHealthIndicator(); } } class MarkLogicDBHealthIndicator implements HealthIndicator { private final Logger logger = LoggerFactory.getLogger(MarkLogicDBHealthIndicator.class); @Override public Health health() { logger.info("Entered MarkLogicDBHealthIndicator.health()."); Health.Builder mlHealth; try { // Do something that simulates marklogic being down (= just have a java method throw an exception) this.alwaysThrowException(); mlHealth = Health.up(); mlHealth = mlHealth.withDetail("db-host", "my-db-host"); mlHealth = mlHealth.withDetail("db-port", "my-db-port"); mlHealth = mlHealth.withDetail("db-check-time", 1234); } catch (Exception e) { logger.warn("{}-{}. 
DB HealthCheck failed!", e.getClass().getSimpleName(), e.getMessage(), e); mlHealth = Health.down(e); mlHealth = mlHealth.withDetail("db-host", "my-db-host"); mlHealth = mlHealth.withDetail("db-port", "my-db-port"); mlHealth = mlHealth.withDetail("db-check-time", 1234); } Health h = mlHealth.build(); logger.info("Leaving MarkLogicDBHealthIndicator.health(). h = " + h.toString()); return h; } private void alwaysThrowException() throws Exception { throw new MyException("error"); } } I needed the following in application.yml for sending /actuator/health/readiness and /actuator/health/liveness (otherwise, an http 404 error results). Note these are not needed when sending /actuator/health: management: endpoint: health: probes: enabled: true livenessState: enabled: true readinessState: enabled: true When the application starts, I see the log showing the bean being created: Entered MarkLogicDBHealthIndicatorConfig.marklogic(). Creating and returning new MarkLogicDBHealthIndicator() Exposing 1 endpoint(s) beneath base path '/actuator' When I send http://localhost:9743/actuator/health, I get the expected http status of 503 (in postman) and see my health() method being called from the log: Entered MarkLogicDBHealthIndicator.health(). MyException-error. DB HealthCheck failed! com.ibm.sa.exception.MyException: error However, when I send http://localhost:9743/actuator/health/readiness or http://localhost:9743/actuator/health/liveness, my MarkLogicDBHealthIndicator health() method is NOT called. Note: In our actual deployment, our applications are deployed to Kubernetes, and we specify the liveness and readiness endpoints in each application's deployment yaml (gen'ed using helm so ... easy to change). None of our applications do anything differently for readiness vs liveness so ... we could just switch to /actuator/health for both liveness and readiness & then I know it will work.
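For reference, this behavior matches how Spring Boot health groups work: /actuator/health/liveness and /actuator/health/readiness are backed by auto-configured groups that, by default, contain only the livenessState / readinessState indicators, so custom indicators such as the marklogic bean above are not consulted there. A hedged sketch of the kind of application.yml change that would add the custom indicator to the readiness group (the indicator name marklogic comes from the bean method name above; verify the property against your Spring Boot version's documentation):

```yaml
management:
  endpoint:
    health:
      group:
        readiness:
          include: readinessState,marklogic
```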
doc_2470
I have a slow internet connection, which meant multiple attempts to download the MinGW and CygWin packages. I selected Dev packages in CygWin and stable packages in MinGW, if my memory serves me correctly. Problem: 1. Juno is not opening at all; it throws exceptions and exits. 2. Helios, although it compiles files, the console is not working. Could someone point me in the right direction?
doc_2471
With node's new ES6 support and Sails.js, I'm a little confused as to what my folder structure is going to look like. Do I still continue using my ORM within the controllers, or move them to separate layers? Could someone suggest a good project I can refer to, to get the architecture clean with ES6 and Sails?

A: Your structure remains the same. Just start writing your models/controllers/config/whatever with ES6 syntax and use babel-node to start it. For that, install babel as a dev dependency and update the npm start script in package.json with "start": "babel-node app.js".

UPD: Someone might suggest sails-hook-babel, but I don't recommend using it. Hooks have a specific loading order, and you can be confused by ES6 support errors in the console because the hook is not loaded yet.
doc_2472
* *created the entrypoint.js under the init_script folder entrypoint.js use admin; db.createUser( { user: "patient_db", pwd: "14292", roles: [ { role: "readWrite", db: "patient_db" } ] } ); db.grantRolesToUser( "patient_db", [{ role: "readWrite", db: "patient_db"}]); *created data.js file in the resources path src/main/resources/data.js use patient_db; db.createCollection("holiday"); db.holiday.insert({holiday_date:'25-12-2021',holiday_name:'Christmas',created_by:'John Wick',modified_by:'John_Wick',created_date_time:'2021-04-25 04:23:55',modified_date_time:'2021-04-25 04:23:55'}); *configured the docker-compose.yml docker-compose.yml version: "3" services: patient-service: image: patient-service:1.0 container_name: patient-service ports: - 9090:9090 restart: on-failure networks: - patient-mongo depends_on: - mongo-db links: - mysql-db mongo-db: image: mongo:latest container_name: mongo-db ports: - 27017:27017 networks: - patient-mongo volumes: - 'mongodata:/data/db' - './init_scripts:/docker-entrypoint-initdb.d' environment: - MONGO_INITDB_ROOT_USERNAME=admin - MONGO_INITDB_ROOT_PASSWORD=14292 restart: unless-stopped networks: patient-mongo: volumes: mongodata: 4.Finally, Connection with MongoDB properties-dev.yml spring: data: mongodb: host: mongo-db port: 27017 database: patient_db A: This is how I insert the entrypoint code to mongodb container: * *Create a .sh file (example.sh) *Create mongo users and the data you want to insert. example.sh #!/usr/bin/env bash echo "Creating mongo users..." mongo admin --host localhost -u root -p mypass --eval " db = db.getSiblingDB('patient_db'); db.createUser( { user: "patient_db", pwd: "14292", roles: [ { role: "readWrite", db: "patient_db" } ] } ); db.createCollection('holiday'); db.holiday.insert({holiday_date:'25-12-2021',holiday_name:'Christmas', created_by:'John Wick',modified_by:'John_Wick',created_date_time:'2021-04-25 04:23:55',modified_date_time:'2021-04-25 04:23:55'}); " echo "Mongo users and data created." 
* At docker-compose, insert the entrypoint:

volumes:
  - 'mongodata:/data/db'
  - './example.sh:/docker-entrypoint-initdb.d/example.sh'

Maybe it's not the cleanest option, but it works perfectly. I did it like this because I didn't get it to work with js files.

A: Thanks, @Schwarz54, for your answer. It works with a js file as well:

init_scripts/mongo_init.js

var db = connect("mongodb://admin:[email protected]:27017/admin");
db = db.getSiblingDB('patient_db'); /* the 'use' statement is not supported here to switch db */
db.createUser(
  {
    user: "patient_db",
    pwd: "14292",
    roles: [ { role: "readWrite", db: "patient_db" } ]
  }
);
db.createCollection("holiday");
db.holiday.insert({holiday_date:'25-12-2021',holiday_name:'Christmas',created_by:'John Wick',modified_by:'John_Wick',created_date_time:'2021-04-25 04:23:55',modified_date_time:'2021-04-25 04:23:55'});

docker-compose.yml

volumes:
  - 'mongodata:/data/db'
  - './init_scripts:/docker-entrypoint-initdb.d'
doc_2473
I have the Oracle Client 11 installed to test some features (EF support), but my applications must target Oracle 10 because my customer uses it. So, if I work on my customer's project I need Oracle 10 installed (to test it on my workstation), and if I want to test the features of Oracle 11 I need Oracle 11 installed. My question is: how can the Oracle 10 and 11 clients coexist on the same workstation? In other words, I want to debug and run my ASP.NET web applications on my own workstation and simply change the web.config to use the Oracle 10 or Oracle 11 client. Is it possible?

A: As long as you install the two versions of the Oracle client in different Oracle Homes, they should coexist peacefully. There are just a couple of gotchas to be aware of:

* By default, each Oracle Home will have a separate tnsnames.ora file (and sqlnet.ora file, etc.). That often causes confusion if you've configured a TNS alias in one Oracle Home and not in the other. You can configure your environment to use a single set of TNS configuration files by setting the TNS_ADMIN environment variable to point at the directory that contains the one true source of TNS configuration (i.e. set TNS_ADMIN to %Oracle11g_Home%\network\admin to always use the tnsnames.ora file from your 11g Oracle Home).

* Some third-party products are not multi-home compliant. If you use something like the legacy Microsoft ODBC Driver for Oracle, for example, it will use whichever version of the Oracle client appears first in your PATH. If you are using Oracle drivers to connect to the database, that shouldn't be an issue.

If you do need to switch which is the default Oracle Home, you can either manually edit your PATH or you can fire up the Oracle Universal Installer and, under Installed Products | Environment, control the order that Oracle Homes appear in the PATH.
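The "first home on the PATH wins" behaviour for non-multi-home-aware clients can be sketched in Python. The paths below are hypothetical examples, not real installs; this only illustrates the lookup order, it is not an Oracle API:

```python
def winning_home(path, homes):
    """Return the configured Oracle home whose bin directory appears
    earliest in the PATH, mimicking how a non-multi-home-aware client
    picks its client library."""
    for entry in path.split(";"):  # Windows-style PATH separator
        for home in homes:
            if entry.rstrip("\\").lower() == (home + "\\bin").lower():
                return home
    return None

# Two hypothetical Oracle homes installed side by side.
homes = ["C:\\oracle\\product\\10.2.0\\client_1",
         "C:\\oracle\\product\\11.2.0\\client_1"]
path = ("C:\\oracle\\product\\11.2.0\\client_1\\bin;"
        "C:\\oracle\\product\\10.2.0\\client_1\\bin;"
        "C:\\Windows")
```

With this PATH, the 11g home wins; reordering the PATH (which is what the Oracle Universal Installer does for you) flips the result.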
doc_2474
When I swipe left or right my timeline falls out of sync: when I move the swiper the dates diverge, the new date overtakes the old date, etc. Do you have any idea how to modify this timeline to make it work properly? It should stay in sync and run smoothly. The carousel is faster than the top carousel.

`https://jsfiddle.net/u8sLjbxn/1/`

I have been struggling to create a timeline, but it is not working properly.
doc_2475
I'm thinking of using a character at the end to indicate the end of the variable, such as "myVariable_i", which has the added benefit of identifying the type of the variable. Surely a convention that does this exists and I just can't find it, right?

A: You can search with "Match case" and "Whole word" enabled and replace with the particular word, so that other variables are not accidentally replaced (e.g. myVariableGroup turning into myNumberGroup). Attached is a snapshot of Find and Replace in the NetBeans IDE:
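The "Whole word" option in the answer corresponds to a word-boundary match. A quick regex sketch of the same rename, using hypothetical variable names:

```python
import re

def rename_whole_word(source, old, new):
    """Replace identifier `old` only where it stands alone, so
    myVariableGroup and myVariable_i survive a rename of myVariable
    (both '_' and letters count as word characters)."""
    return re.sub(r"\b%s\b" % re.escape(old), new, source)

code = "myVariable = myVariableGroup + myVariable_i"
renamed = rename_whole_word(code, "myVariable", "myNumber")
# renamed == "myNumber = myVariableGroup + myVariable_i"
```

Note that the `_i` suffix convention from the question interacts nicely with word boundaries: the underscore is a word character, so a whole-word rename of the base name never touches the suffixed variable.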
doc_2476
A: Use the MouseEnter and MouseLeave events as follows:

private void helpToolStripMenuItem_MouseEnter(object sender, EventArgs e)
{
    helpToolStripMenuItem.ForeColor = Color.Green;
}

private void helpToolStripMenuItem_MouseLeave(object sender, EventArgs e)
{
    helpToolStripMenuItem.ForeColor = Color.Black;
}
doc_2477
My question is: can I somehow disable entity tracking and map these tables without a primary key and with nullable columns? Or, with the code-first approach, can I create and map a class with no primary key where all columns are nullable?

A: Entity Framework must have a key field in order to work. It doesn't have to be a true primary key in the database, but it needs to be unique. If you have tables that have a nullable field and no true primary key, and you can't modify the database, then maybe Entity Framework isn't the best fit for this project. It might work if you never try to load the entities with the null values, but it will throw errors when you do (as you have noticed).
doc_2478
And I can't use an img tag because it will result in a horizontal scrollbar.

EDIT: It seems that there is no way to position a background the way I wanted, at least with background-position. You can offset a background from either side by writing background-position: top 50px left 100px, but you cannot do the same with position center. I wonder why.

A: Have you tried setting a background size and a background position like so:

background-position: 100% 0;
background-size: 50%;

You can test it here: https://jsfiddle.net/dL2u6co7/

A: Here is a working solution. I added another block with absolute positioning inside the container.

.container {
  margin: 50px;
  padding: 10px 10px;
  position: relative;
  width: 400px;
  height: 270px;
  border: 2px solid red;
}

.text {
  float: left;
  height: 200px;
  width: 150px;
  background-color: green;
}

.bg {
  position: absolute;
  top: 10px;
  left: 50%;
  width: 50%;
  height: 250px;
  background-image: url('http://www.gettyimages.pt/gi-resources/images/Homepage/Hero/PT/PT_hero_42_153645159.jpg');
  background-position: 0 0;
  background-repeat: no-repeat;
  background-size: cover;
}

<div class="container">
  <div class="text">
    Text block
  </div>
  <div class="bg">
  </div>
</div>
doc_2479
* php-fpm
* nginx
* local mysql
* app's API
* datadog container

In the dev process many feature branches are created to add new features, such as:

* app-feature1
* app-feature2
* app-feature3
* ...

I have an AWS EC2 instance per feature branch running Docker Engine v18 and docker-compose to build and run the Docker stack that composes the PHP app. To save operation costs, one AWS EC2 instance can host 3 feature branches at the same time. I was thinking that there should be a custom docker-compose with special port mappings and a Docker image tag for each feature branch. The goal of this configuration is to be able to test 3 feature branches and access the app through different ports while saving money. I also thought about using Docker networks, keeping the same ports and using an nginx to redirect traffic to the different Docker network ports. What recommendations do you give?

A: One straightforward way I can think of in this case is to use the .env file for your docker-compose. The docker-compose.yaml file will look something like this:

...
ports:
  - ${NGINX_PORT}:80
...
ports:
  - ${API_PORT}:80

The .env file for each stack will look something like this:

NGINX_PORT=30000
API_PORT=30001

and

NGINX_PORT=30100
API_PORT=30101

for different projects.

Note:

* .env must be in the same folder as your docker-compose.yaml.
* Make sure that the ports inside the .env files do not conflict with each other. You can have some kind of convention, like a prefix per feature: feature1 gets ports starting with 301, i.e. 301xx.
* In this way, your docker-compose.yaml can be as generic as you may like.

A: You're making things harder than they have to be. Your app is containerized; use a container system. ECS is very easy to get going with. It's a json file that defines your deployment, basically analogous to docker-compose (they actually supported compose files at some point, not sure if that feature stayed around).
You can deploy an arbitrary number of services with different container images. We like to use a Terraform module with the image tag as a parameter, but it's easy enough to write a shell script or whatever. Since you're trying to save money, create a single application load balancer: each app gets a hostname, and each container gets a subpath. For short-lived feature branch deployments, you can even deploy on Fargate and not have an ongoing server cost.

A: It turns out the solution involved capabilities of docker-compose. In the Docker docs the concept is called Multiple Isolated Environments on a Single Host. To achieve this:

* I used an .env file with many env vars. The main one is CONTAINER_IMAGE_TAG, which holds the git branch ID to identify the stack.
* A separate docker-compose-dev file defines ports, image tags, and extra metadata that is dev related.
* Finally, the use of --project-name in the docker-compose command allows having different stacks.

An example Bash function that wraps the docker-compose command:

docker_compose() {
  docker-compose -f docker/docker-compose.yaml -f docker/docker-compose-dev.yaml --project-name "project${CONTAINER_IMAGE_TAG}" --project-directory . "$@"
}

The separation should be done in the image tags, container names, network names, volume names and project name.
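The port-prefix convention suggested above (feature1 gets 301xx, and so on) can be made mechanical. A hedged sketch of generating a collision-free set of .env port values per feature branch; the service names and base port are illustrative assumptions:

```python
def feature_env(feature_index, services=("NGINX", "API")):
    """Derive one port per service from a per-feature prefix,
    e.g. feature index 1 -> 30100, 30101 (the 301xx convention)."""
    base = 30000 + feature_index * 100
    return {f"{svc}_PORT": base + i for i, svc in enumerate(services)}

env_feature1 = feature_env(0)  # {'NGINX_PORT': 30000, 'API_PORT': 30001}
env_feature2 = feature_env(1)  # {'NGINX_PORT': 30100, 'API_PORT': 30101}
```

Writing each dict out as `KEY=value` lines next to the branch's docker-compose.yaml gives every stack a disjoint port range by construction, instead of by manual bookkeeping.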
doc_2480
The web service is passing information as XML, and it needs to be in a strict format which should be specified in the XML Schema, so that no wrong information is passed.

<wsdl:types>
  <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name="UpdatePendingTicketsRequest">
      <xs:complexType>
        <xs:sequence>
          <xs:element ref="SIMS_REPLY_NAVISION_TO_INTOUCH"/>
        </xs:sequence>
      </xs:complexType>
    </xs:element>
    <xs:element name="UpdatePendingTicketsResponse">
      <xs:simpleType>
        <xs:restriction base="xs:string">
          <xs:enumeration value="OK"/>
          <xs:enumeration value="ERROR_PROCESSING"/>
        </xs:restriction>
      </xs:simpleType>
    </xs:element>
    <xs:simpleType name="ST_STATUS">
      <xs:restriction base="xs:integer">
        <xs:enumeration value="1"/>
        <xs:enumeration value="2"/>
        <xs:enumeration value="99"/>
      </xs:restriction>
    </xs:simpleType>
    <xs:element name="TRANSACTIONS">
      <xs:complexType>
        <xs:sequence>
          <xs:element ref="TRANSACTION" maxOccurs="unbounded"/>
        </xs:sequence>
      </xs:complexType>
    </xs:element>
    <xs:element name="TRANSACTION">
      <xs:complexType>
        <xs:sequence>
          <xs:element ref="ORIGINAL_TRANSACTION_ID"/>
          <xs:element ref="STATUS"/>
        </xs:sequence>
      </xs:complexType>
    </xs:element>
    <xs:element name="STATUS">
      <xs:complexType>
        <xs:simpleContent>
          <xs:extension base="ST_STATUS">
            <xs:attribute name="description" use="required">
              <xs:simpleType>
                <xs:restriction base="xs:string">
                  <xs:enumeration value="DUPLICATE"/>
                  <xs:enumeration value="OK"/>
                  <xs:enumeration value="PROBLEM"/>
                </xs:restriction>
              </xs:simpleType>
            </xs:attribute>
          </xs:extension>
        </xs:simpleContent>
      </xs:complexType>
    </xs:element>
    <xs:element name="SIMS_REPLY_NAVISION_TO_INTOUCH">
      <xs:complexType>
        <xs:sequence>
          <xs:element ref="DATETIME"/>
          <xs:element ref="TRANSACTIONS"/>
        </xs:sequence>
      </xs:complexType>
    </xs:element>
    <xs:element name="ORIGINAL_TRANSACTION_ID" type="xs:string"/>
    <xs:element name="DATETIME" type="xs:dateTime"/>
    <xs:element name="FaultStructure">
      <xs:complexType>
        <xs:sequence>
          <xs:element type="xs:string" name="FaultCode"/>
          <xs:element type="xs:string" name="FaultString"/>
        </xs:sequence>
      </xs:complexType>
    </xs:element>
  </xs:schema>
</wsdl:types>

This is the sample XML Schema which is used to validate the payload. But when I create the same in ColdFusion, this is all I get:

<wsdl:types>
  <schema targetNamespace="http://rpc.xml.coldfusion" xmlns="http://www.w3.org/2001/XMLSchema">
    <import namespace="http://xml.apache.org/xml-soap"/>
    <import namespace="http://schemas.xmlsoap.org/soap/encoding/"/>
    <complexType name="CFCInvocationException">
      <sequence/>
    </complexType>
  </schema>
</wsdl:types>

I did a lot of searching and never found a concrete solution for it.

A: This might not be an answer, but I always recommend not building a web service in ColdFusion that receives an XML document as an argument. Instead use an XML string as the argument, which you can later convert to an XML document using xmlParse(). I had such an experience in the past and had to convert it to an XML string argument.

Thanks
Pritesh
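Independent of what ColdFusion generates, the schema's constraints can also be enforced in application code. A rough sketch, using Python's xml.etree as a stand-in for a real schema validator, of checking a STATUS element against the enumerations defined above (values 1/2/99, descriptions DUPLICATE/OK/PROBLEM):

```python
import xml.etree.ElementTree as ET

ALLOWED_STATUS = {"1", "2", "99"}
ALLOWED_DESC = {"DUPLICATE", "OK", "PROBLEM"}

def status_is_valid(xml_text):
    """Check one STATUS element against the schema's enumerations:
    an integer body restricted to ST_STATUS, and a required
    description attribute restricted to three string values."""
    elem = ET.fromstring(xml_text)
    return (elem.tag == "STATUS"
            and elem.text in ALLOWED_STATUS
            and elem.get("description") in ALLOWED_DESC)

good = '<STATUS description="OK">1</STATUS>'
bad = '<STATUS description="MAYBE">7</STATUS>'
```

This is only the enumeration part of the schema; in practice a proper XSD validator covers the sequence and cardinality rules as well.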
doc_2481
I created a small project with a failing integration test to show the problem. It can be found on GitHub; run it with: grails test-app -integration -echoOut DataBinding

Anyway, I'll explain the problem by describing the classes and the test here:

class LocalizableContent {
    Map contentByLocale = [:].withDefault { locale -> new Text() }
    static hasMany = [ contentByLocale : Content ]
}

abstract class Content {
    static belongsTo = [ localizableContent : LocalizableContent ]
    static constraints = {
        localizableContent nullable:true
    }
}

class Text extends Content {
    String text
}

As you can see, I'm already using the withDefault trick, but apparently it's not being called by Grails / Spring (I even tried to throw an exception in the default closure to verify that the code is not executed). For the sake of the test, I also created a LocalizableContentController which is empty. With all that, the following integration test then fails:

void testMapDatabinding() {
    def rawParams = [ 'contentByLocale[en].text': 'Content' ]
    def controller = new LocalizableContentController()
    controller.request.addParameters(rawParams)
    controller.request.setAttribute(GrailsApplicationAttributes.CONTROLLER, controller)
    def localizableContent = new LocalizableContent(controller.params)
    assert localizableContent?.contentByLocale['en']?.text == 'Content'
}

It says that localizableContent.contentByLocale is a map which looks like ['en': null], so apparently the data binding understands the map syntax and tries to create an entry for the 'en' key. But it is not trying first to get the entry for that key, since the withDefault is not being called. The following test shows that the withDefault works fine, and it passes:

void testMapByDefaultWithNoDatabinding() {
    assert new LocalizableContent().contentByLocale['en']?.getClass() == Text
}

What am I missing here?

A: withDefault is nothing but a pattern to provide a valid value if you face an unknown key.
For example, consider the below use case:

def map = [:].withDefault { k ->
    println k // should print 'a'
    10
}

map.test = 32

assert map.test == 32
assert map.a == 10

It takes the unknown key as its parameter; you cannot pass any value into it, which is kind of logical, because it provides a default value instead of a value being provided. In your case, the data binding would work if you set the value on Text like:

Map contentByLocale = [:].withDefault { locale ->
    // locale is the key, 'en' in this case
    new Text(locale: locale, text: 'Content')
}

provided you have your Text class defined as:

class Text extends Content {
    String locale
    String text
}
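Groovy's withDefault has a close Python analogue, collections.defaultdict, with the same core semantics: the factory runs only when a missing key is read, and the produced value is then stored in the map. The one difference is that defaultdict's factory takes no arguments, while Groovy's closure receives the missing key:

```python
from collections import defaultdict

# The factory fires only for missing keys; explicit writes bypass it.
counts = defaultdict(lambda: 10)
counts["test"] = 32

_ = counts["a"]  # missing key: factory runs, 10 is stored under "a"
```

As in the Groovy example, `counts["test"]` stays 32 and `counts["a"]` becomes 10; and just as the question discovered with Grails data binding, any code path that writes entries directly (rather than reading them) never triggers the default factory.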
doc_2482
RewriteCond %{THE_REQUEST} /entrar-na-sua-conta.html\?redirecionar=/([^\s&]+) [NC]
RewriteRule ^ https://www.portal-gestao.com/%1? [L,R=302]

But this is redirecting:

https://www.portal-gestao.com/entrar-na-sua-conta.html?redirecionar=/f%C3%B3rum-perguntas-e-respostas/conversation/read.html?id=25

To:

https://www.portal-gestao.com/f%25C3%25B3rum-perguntas-e-respostas/conversation/read.html?id=25%3f

It seems htaccess is encoding the URL and thus returning a 404 error. Is there a way to redirect to the correct url, like:

https://www.portal-gestao.com/fórum-perguntas-e-respostas/conversation/read.html?id=25%3f
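The %25 in the result is classic double encoding: the captured path is already percent-encoded, and mod_rewrite escapes it again, turning each % into %25. Python's urllib.parse reproduces the effect:

```python
from urllib.parse import quote, unquote

encoded_once = quote("fórum")        # what the browser sends for the path
encoded_twice = quote(encoded_once)  # the '%' signs get re-escaped to '%25'
```

Here `encoded_once` is `f%C3%B3rum` and `encoded_twice` is `f%25C3%25B3rum`, exactly the mangled form in the question; decoding twice recovers `fórum`. In mod_rewrite, the usual way to stop the second round of escaping is the NE (noescape) flag on the RewriteRule, though whether that alone fixes this particular rule is worth testing.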
doc_2483
I am able to compile and package his code, and I used a simple Chrome Web Server (plugin), pointed it at the project's "target" folder which contains all the web components (i.e. assets, bower_components, META-INF, WEB-INF, etc.) and his code, and a dashboard can run successfully on the Chrome Web Server. The problem is, all API calls to the Java back-end are failing.

IMAGE: API calls 404 error

IMAGE: Sample endpoint, api/data

Am I missing any steps when trying to run the project locally on my PC?

A: Will update the answer in detail soon, but just want to share the solution. Chrome Web Server is not a proper Servlet Container, hence Spring wasn't able to start. Since Spring did not start, requests to the endpoints could not be handled.
doc_2484
mutableDict = {
    "A" = 2,
    "B" = 4,
    "C" = 3,
    "D" = 1,
}

I'd like to end up with the array ["D", "A", "C", "B"]. My real dictionary is much larger than just four items, of course.

A: The NSDictionary method keysSortedByValueUsingComparator: should do the trick. You just need a method returning an NSComparisonResult that compares the objects' values.

Your dictionary is:

NSMutableDictionary *myDict;

And your array is:

NSArray *myArray;

myArray = [myDict keysSortedByValueUsingComparator: ^(id obj1, id obj2) {
    if ([obj1 integerValue] > [obj2 integerValue]) {
        return (NSComparisonResult)NSOrderedDescending;
    }
    if ([obj1 integerValue] < [obj2 integerValue]) {
        return (NSComparisonResult)NSOrderedAscending;
    }
    return (NSComparisonResult)NSOrderedSame;
}];

Just use NSNumber objects instead of numeric constants. BTW, this is taken from: https://developer.apple.com/library/content/documentation/Cocoa/Conceptual/Collections/Articles/Dictionaries.html

A: NSDictionary has this neat method called allKeys. If you want the array to be sorted though, keysSortedByValueUsingComparator: should do the trick. Richard's solution also works but makes some extra calls you don't necessarily need:

// Assuming myDictionary was previously populated with NSNumber values.
NSArray *orderedKeys = [myDictionary keysSortedByValueUsingComparator:^NSComparisonResult(id obj1, id obj2){
    return [obj1 compare:obj2];
}];

A: Here I have done something like this:

NSMutableArray *weekDays = [[NSMutableArray alloc] initWithObjects:@"Sunday",@"Monday",@"Tuesday",@"Wednesday",@"Thursday",@"Friday",@"Saturday", nil];
NSMutableDictionary *dict = [[NSMutableDictionary alloc] init];
NSMutableArray *dictArray = [[NSMutableArray alloc] init];

for(int i = 0; i < [weekDays count]; i++)
{
    dict = [NSMutableDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:i],@"WeekDay",[weekDays objectAtIndex:i],@"Name",nil];
    [dictArray addObject:dict];
}
NSLog(@"Before Sorting : %@",dictArray);

@try
{
    // for using NSSortDescriptor
    NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"WeekDay" ascending:YES];
    NSArray *descriptor = @[sortDescriptor];
    NSArray *sortedArray = [dictArray sortedArrayUsingDescriptors:descriptor];
    NSLog(@"After Sorting : %@",sortedArray);

    // for using predicate
    // here I want to sort the value against weekday but only for WeekDay<=5
    int count=5;
    NSPredicate *Predicate = [NSPredicate predicateWithFormat:@"WeekDay <=%d",count];
    NSArray *results = [dictArray filteredArrayUsingPredicate:Predicate];
    NSLog(@"After Sorting using predicate : %@",results);
}
@catch (NSException *exception)
{
    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Sorting cant be done because of some error" message:[NSString stringWithFormat:@"%@",exception] delegate:self cancelButtonTitle:@"Ok" otherButtonTitles:nil];
    [alert setTag:500];
    [alert show];
    [alert release];
}

A: Here's a solution:

NSDictionary *dictionary; // initialize dictionary

NSArray *sorted = [[dictionary allKeys] sortedArrayUsingComparator:^NSComparisonResult(id obj1, id obj2) {
    return [[dictionary objectForKey:obj1] compare:[dictionary objectForKey:obj2]];
}];

A: The simplest solution:

[dictionary keysSortedByValueUsingSelector:@selector(compare:)]
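For comparison, the same keys-sorted-by-value operation, on the question's exact data, is a one-liner in Python:

```python
mutable_dict = {"A": 2, "B": 4, "C": 3, "D": 1}

# Sort the keys by their associated values, ascending.
ordered_keys = sorted(mutable_dict, key=mutable_dict.get)
# ordered_keys == ["D", "A", "C", "B"]
```

This mirrors keysSortedByValueUsingComparator:; the comparator is replaced by a key function that looks each key's value up in the dictionary.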
doc_2485
On verify, I get the error "Error Verifying Email: Unexpected token < in JSON at position 0". I await your answers. Thanks.
doc_2486
A: For now you can only use features like config.forcePasteAsPlainText. When you set this option to true, only paragraphs and line breaks will be created when pasting (as for plain text). In CKEditor 4.1, which will be released in February, a new important feature will be introduced: data and feature activation based on configuration. You'll be able to configure which elements, styles, attributes and classes are allowed.
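The effect of forcePasteAsPlainText, keeping only text plus paragraph and line breaks, can be illustrated roughly. This is not CKEditor's actual implementation, just a Python sketch of the same reduction:

```python
from html.parser import HTMLParser

class PlainTextPaste(HTMLParser):
    """Keep only text; turn <p>/<br> into line breaks, drop all other tags."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag in ("p", "br"):
            self.parts.append("\n")

    def handle_data(self, data):
        self.parts.append(data)

    def text(self):
        return "".join(self.parts).strip()

parser = PlainTextPaste()
parser.feed('<p><b>Hello</b> world</p><p>bye</p>')
# parser.text() == "Hello world\nbye"
```

The 4.1-style allow-list approach mentioned in the answer is the inverse idea: instead of stripping everything, a configured set of elements, attributes and classes is permitted to pass through.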
doc_2487
The reason is that the SDK I have is only API-19 (KitKat), but it is expecting API-22. Is there any workaround to build my project with the current API that I have? Please help me on this.

A: You can upgrade your SDK to 22 via the SDK Manager and then change the target SDK in your manifest to 22. If you use Eclipse, click Window -> Android SDK Manager, then check what you want to install under the 22 category and install. If you use Android Studio, you have a button for launching the SDK Manager right from the toolbar. If you have to use SDK 19 as the target, use this link: Cordova build: Please install Android target: "android-22". I dont want android-22. I want android-19 - what do i do?
doc_2488
I am using Identity Server to authenticate, and we can successfully do so without an issue. However, when I try to call one of the API endpoints (which are decorated with the [Authorize] attribute) from the C# Razor/MVC application, I get an 'Authorization failed! Given Policy has not been granted' error. When I run the same endpoint that the C# Razor app consumes using swagger/postman (after I've been authorized and get a token), I have no problem returning valid results, so the issue seems to be related to my authentication between the Razor app and the Web API. When I run these 3 tiers locally in Visual Studio (all with localhost but a different port #), it all works just fine. Any thoughts or additional information needed to give me some ideas of what the issue may be? Thank you so much in advance for your help.
doc_2489
https://www.facebook.com/groups/jyrka98sMods/ and http://jyrka98.webs.com/

This is the code I have for 1 link:

[Code]
var
  LinkLabel: TLabel;

procedure LinkClick(Sender: TObject);
var
  ErrorCode: Integer;
begin
  ShellExec('', 'https://www.facebook.com/groups/jyrka98sMods/', '', '', SW_SHOW, ewNoWait, ErrorCode);
end;

procedure InitializeWizard;
begin
  LinkLabel := TLabel.Create(WizardForm);
  LinkLabel.Parent := WizardForm;
  LinkLabel.Left := 8;
  LinkLabel.Top := WizardForm.ClientHeight - LinkLabel.ClientHeight - 8;
  LinkLabel.Cursor := crHand;
  LinkLabel.Font.Color := clBlue;
  LinkLabel.Font.Style := [fsUnderline];
  LinkLabel.Caption := 'Visit jyrka98s mods facebook page';
  LinkLabel.OnClick := @LinkClick;
end;

procedure CurPageChanged(CurPageID: Integer);
begin
  LinkLabel.Visible := CurPageID <> wpLicense;
end;

A: For instance this way:

[Setup]
AppName=My Program
AppVersion=1.5
DefaultDirName={pf}\My Program

[Code]
const
  FBLink = 'https://www.facebook.com/groups/jyrka98sMods/';
  MPLink = 'http://jyrka98.webs.com/';

var
  FBLinkLabel: TLabel;
  MPLinkLabel: TLabel;

procedure FBLinkClick(Sender: TObject);
var
  ErrorCode: Integer;
begin
  ShellExec('', FBLink, '', '', SW_SHOW, ewNoWait, ErrorCode);
end;

procedure MPLinkClick(Sender: TObject);
var
  ErrorCode: Integer;
begin
  ShellExec('', MPLink, '', '', SW_SHOW, ewNoWait, ErrorCode);
end;

procedure InitializeWizard;
begin
  FBLinkLabel := TLabel.Create(WizardForm);
  FBLinkLabel.Parent := WizardForm;
  FBLinkLabel.Left := 8;
  FBLinkLabel.Top := WizardForm.ClientHeight - FBLinkLabel.ClientHeight - 8;
  FBLinkLabel.Cursor := crHand;
  FBLinkLabel.Font.Color := clBlue;
  FBLinkLabel.Font.Style := [fsUnderline];
  FBLinkLabel.Caption := 'Visit jyrka98s mods facebook page';
  FBLinkLabel.OnClick := @FBLinkClick;

  MPLinkLabel := TLabel.Create(WizardForm);
  MPLinkLabel.Parent := WizardForm;
  MPLinkLabel.Left := FBLinkLabel.Left + FBLinkLabel.Width + 8;
  MPLinkLabel.Top := WizardForm.ClientHeight - MPLinkLabel.ClientHeight - 8;
  MPLinkLabel.Cursor := crHand;
  MPLinkLabel.Font.Color := clBlue;
  MPLinkLabel.Font.Style := [fsUnderline];
  MPLinkLabel.Caption := 'Visit jyrka98s mods pack page';
  MPLinkLabel.OnClick := @MPLinkClick;
end;

procedure CurPageChanged(CurPageID: Integer);
begin
  FBLinkLabel.Visible := CurPageID <> wpLicense;
  MPLinkLabel.Visible := CurPageID <> wpLicense;
end;
doc_2490
This is my button inside the DataTemplate, and I want to bind it to a command in "DetailDayPageViewModel".

Button viewComment = new Button()
{
    TextColor = Color.DodgerBlue,
    HorizontalOptions = LayoutOptions.Start,
    VerticalOptions = LayoutOptions.Start,
    FontSize = 16
};

// this binding does not work
viewComment.SetBinding(Button.CommandProperty, nameof(DetailDayPageViewModel.ViewComment));

A: Use RelativeBinding for binding values of the Page's BindingContext to a property inside a DataTemplate. There are two ways to do this:

1: Binding through the ViewModel, a RelativeBinding of mode FindAncestorBindingContext:

public class ItemView : Grid
{
    public ItemView()
    {
        Button clickButton = new Button() { Text = "Hi there" };
        clickButton.SetBinding(
            Button.CommandProperty,
            new Binding(
                "ItemClickCommand",
                source: new RelativeBindingSource(
                    RelativeBindingSourceMode.FindAncestorBindingContext,
                    typeof(ViewModel))
            ));
        this.Children.Add(clickButton);
    }
}

2: Binding through the parent view's BindingContext:

public class ItemView : Grid
{
    public ItemView()
    {
        Button clickButton = new Button() { Text = "Hi there" };
        clickButton.SetBinding(
            Button.CommandProperty,
            new Binding(
                "BindingContext.ItemClickCommand",
                source: new RelativeBindingSource(
                    RelativeBindingSourceMode.FindAncestor,
                    typeof(CollectionView))
            ));
        this.Children.Add(clickButton);
    }
}

Please do check and see if it helps! Comment for any queries.

A: I think you can take a look at this sample: TestBinding. It refers to a ListView, but it should be applicable to a CollectionView. You need to set the "source":

TapGestureRecognizer tgrUpDown2 = new TapGestureRecognizer();
tgrUpDown2.SetBinding(TapGestureRecognizer.CommandProperty, new Binding("BindingContext.UpDown2Command", source: this));
tgrUpDown2.SetBinding(TapGestureRecognizer.CommandParameterProperty, ".");

Then in your Model, you have the "parameter" passed:

this.UpDown2Command = new Command(async (object obj) =>
{
    try
    {
        if (_isTapped)
            return;
        if (obj != null)
            System.Diagnostics.Debug.WriteLine("Obj is not null");
        else
            System.Diagnostics.Debug.WriteLine("Obj IS null");
        _isTapped = true;
        int idx = List.IndexOf((Model)obj);
        List[idx].Checked1 = !List[idx].Checked1;
        _isTapped = false;
    }
    catch (Exception ex)
    {
        _isTapped = false;
        await Application.Current.MainPage.DisplayAlert("Attention", ex.Message, "Ok");
    }
});

This is a useful link I found some years ago: listview-in-xamarin-forms-in-mvvm

If you want a "class" to define your ViewCell, you can assign the class in this way:

lv.ItemTemplate = new DataTemplate(() =>
{
    return new MyViewCell(lv);
});

Where MyViewCell is something like:

class MyViewCell : ViewCell
{
    public MyViewCell(ListView lv)
    {
        StackLayout slView = new StackLayout();
        slView.SetBinding(StackLayout.BackgroundColorProperty, "BackgroundColor");

        Label lDesc = new Label();
        lDesc.SetBinding(Label.TextProperty, "Description", stringFormat: "DESCRIPTION: {0}");
        lDesc.SetBinding(Label.TextColorProperty, "TextColor");

        // LABEL QTY
        TapGestureRecognizer tgrQty = new TapGestureRecognizer();
        tgrQty.SetBinding(TapGestureRecognizer.CommandProperty, new Binding("BindingContext.QtyCommand", source: lv));
        tgrQty.SetBinding(TapGestureRecognizer.CommandParameterProperty, ".");
        ....
        ....
        View = slView;
    }
}

You can pass the "ListView" in the constructor so you can use it in the "source" binding.
doc_2491
I am a single person “team” on Apple and have used this for years, but now, with VS2019, I can no longer download a provisioning profile. Ideas?

A: It is a bug. For details, please refer to: https://developercommunity.visualstudio.com/t/Cannot-load-Apple-certificates/1692185. You can report a problem with the Visual Studio product or installer. For details, please refer to: https://learn.microsoft.com/en-us/visualstudio/ide/how-to-report-a-problem-with-visual-studio?view=vs-2022 It will be fixed in a later version.
doc_2492
------------------------------
|ID | name   | employee_code |
------------------------------
|24 | Robert | 20234         |
------------------------------

AND

--------------------------------------
|ID | job_code | team                 |
--------------------------------------
|24 | 241124   | Robert, Eduard, Etc. |
--------------------------------------

I want to search in the second table by employee code, and I tried something like this:

$sql=mysql_query("SELECT * FROM works WHERE (SELECT name FROM employee WHERE employee_code LIKE '%".$_GET['employee_code']."%' AS searchname) team Like %searchname% ");

Result:

Warning: mysql_fetch_array(): supplied argument is not a valid MySQL result resource

A: Try this query:

$employee_code = mysql_real_escape_string($_GET['employee_code']);
$sql=mysql_query("SELECT w.* FROM employee e JOIN works w ON w.team LIKE CONCAT('%', e.name ,'%') WHERE employee_code LIKE '%$employee_code%'");

See this SQLFiddle example: http://sqlfiddle.com/#!2/8f8b7/1

A: You should be looking at a join.

select * from table1 inner join table2 using (`ID`) where job_code = ....

Then you have 1 row with both tables joined together. Also, you're using mysql_* functions; these are no longer maintained, please update to mysqli_* or PDO. Also you need to escape your queries; there's an SQL injection attack waiting to happen in that code.

A: This would probably tell you exactly what was wrong:

$sql=mysql_query("SELECT * FROM works WHERE (SELECT name FROM employee WHERE employee_code LIKE '%".$_GET['employee_code']."%' AS searchname) team Like %searchname% ");
if (!$sql) echo mysql_error();

You should never just assume that your query has worked and then carry on to use the resource in another command without checking that it did in fact work. Another thing you should not do is put user input directly into SQL queries without any form of escaping, as it will enable anyone to take complete control of your database.

SELECT * FROM works WHERE (SELECT name FROM employee WHERE employee_code LIKE '%".mysql_real_escape_string($_GET['employee_code'])."%' AS searchname) team Like %searchname% "

A: Your SQL query is wrong. Try like this:

SELECT * FROM works WHERE works.ID=employee.ID AND employee.employee_code=".$_GET['employee_code']."

A: SELECT * FROM table1 t1 INNER JOIN table2 t2 ON t1.employee_code = t2.job_code

or

SELECT t1.id, t1.name, t2.team FROM table1 t1 INNER JOIN table2 t2 ON t1.employee_code = t2.job_code

for a cleaner result.
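The first answer's join condition, w.team LIKE CONCAT('%', e.name, '%'), is in plain terms a substring match of the employee's name inside the comma-separated team column. A Python sketch of the same semantics, on the question's sample rows:

```python
def find_works(employees, works, employee_code):
    """Mimic the query: match employees by code, then return works rows
    whose team string contains the employee's name, i.e. a substring
    match like LIKE CONCAT('%', name, '%')."""
    names = [e["name"] for e in employees
             if employee_code in e["employee_code"]]
    return [w for w in works if any(n in w["team"] for n in names)]

employees = [{"ID": 24, "name": "Robert", "employee_code": "20234"}]
works = [{"ID": 24, "job_code": "241124", "team": "Robert, Eduard, Etc."}]
```

One caveat worth noting: a pure substring match over-matches (an employee named "Rob" would match a team containing "Robert"), which is a limitation of the LIKE approach itself rather than of any one implementation.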
doc_2493
Client asked to set request status to 'Expired' after 10 days. Do I hardcode the value in the Request entity in the IsRequestExpired() method (Domain layer) like so:

public bool IsRequestExpired(DateTime now)
{
    int daysToSetExpiredStatus = 10;
    return Created <= now.AddDays(-daysToSetExpiredStatus);
}

This seems to have the following drawbacks:

* not easy to test by a manual tester (needs to wait 10 days for the result?)
* in case business decides the value should now be 20, we need to modify existing code

A solution to both would be passing a value from application settings/env variables, like so:

public bool IsRequestExpired(DateTime now, int daysToSetExpiredStatus)
{
    return Created <= now.AddDays(-daysToSetExpiredStatus);
}

But is this still considered in line with DDD, as the invariant (expire after 10 days) is not really included in the Request entity (or the Domain layer whatsoever) any more but is fetched from config?

A:

is this still considered in line with DDD

Absolutely! In particular, it's not usually the responsibility of the "domain model" to decide where information comes from (reading from databases, or files, or the internet, or whatever is normally code you want in the application layer, not in the domain layer).
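The configurable variant keeps the rule in the domain while the number comes from outside. A Python sketch of the same invariant, with the business default kept as a parameter default so the application layer may pass a configured value:

```python
from datetime import datetime, timedelta

def is_request_expired(created, now, days_to_expire=10):
    """Domain rule: a request expires `days_to_expire` days after
    creation. The default mirrors the current business value; the
    application layer can inject one read from settings instead."""
    return created <= now - timedelta(days=days_to_expire)

created = datetime(2024, 1, 1)
```

Passing `now` in explicitly (as the original C# signature already does) is also what makes the "manual tester waits 10 days" drawback disappear: an automated test simply supplies a `now` from 11 days in the future.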
doc_2494
The taskbar icon changes perfectly fine, the executable file that is generated uses the custom icon, but for some reason, the application window doesn't.

According to Microsoft documentation found here, there are two HICON properties that I should set values for within the WNDCLASSEX: hIcon and hIconSm, which, according to research, can be set with LoadIcon(hInstance, IDI_APPLICATION) as shown in this example LoadIcon.

I'm not entirely sure of the steps that would be taken to reproduce this problem. I don't know if something went wrong when I made the .rc file to load in the .ico image. Or if I loaded the image incorrectly, or... well... some obscure reason that it would only half work.

This is my code for registering the window class that I use to create the window:

// The window class. This has to be filled BEFORE the window can be created
WNDCLASSEX wc;

// Flags [Redraw on width/height change from resize/movement]
wc.style = CS_HREDRAW | CS_VREDRAW | CS_OWNDC;
// Pointer to the window processing function for handling messages from this window
wc.lpfnWndProc = HandleMessageSetup;
// Number of extra bytes to allocate following the window-class structure
wc.cbClsExtra = 0;
// Number of extra bytes to allocate following the window instance
wc.cbWndExtra = 0;
// Handle to the instance that contains the window procedure
wc.hInstance = m_hInstance;
// Handle to the class icon. Must be a handle to an Icon resource
wc.hIcon = LoadIcon(m_hInstance, IDI_APPLICATION);
// Handle to the small icon for the class
wc.hIconSm = LoadIcon(m_hInstance, IDI_APPLICATION);
// Handle to the class cursor. If null, an application must explicitly set the cursor shape whenever the mouse moves into the application window
wc.hCursor = LoadCursor(NULL, IDC_ARROW);
// Handle to the class background brush for the window's background colour. When NULL an application must paint its own background colour
wc.hbrBackground = NULL;
// Pointer to a null-terminated string for the menu
wc.lpszMenuName = NULL;
// Pointer to null-terminated string of our class name
wc.lpszClassName = m_windowClass.c_str();
wc.cbSize = sizeof(WNDCLASSEX);

// Register the class to make it usable
RegisterClassEx(&wc);

If more code is needed, my repository can be found on GitHub (the main class in question is engine/RenderWindow).

According to research, creating a window using CreateWindowEx should then simply work. My taskbar icon changes, however not the application window icon. Screenshot

There are no errors. Code compiles and runs successfully.

A: Since this still has not been answered, here goes nothing:

IDI_APPLICATION refers to the default application icon; it gets set when hIcon is null in the wc wndclass. So here is the juicy part:

IDI_ICON1 is defined in your resource script (.rc) like:

IDI_ICON1 ICON "icon.ico"

Once you have the icon defined, you need to turn its identifier into a resource name using MAKEINTRESOURCE, after which you can use it in the LoadIcon function. Don't forget to include your resource header.

HICON loadedIcon = LoadIcon(wc.hInstance, MAKEINTRESOURCE(IDI_ICON1));
wc.hIcon = loadedIcon;
wc.hIconSm = loadedIcon;

The rest is similar to your example; you call RegisterClassEx and CreateWindow.
doc_2495
I am trying to figure out how to basically cut down the inner nested objects like desc to be treated as part of the same Java object, instead of creating a new POJO named Desc. Is there a way to do this with Serialized Names to look into nested JSON objects? Thanks in advance!

JSON to be converted to POJO:

{
    'name': name,
    'desc': {
        'country': country,
        'city': city,
        'postal': postal,
        'street': street,
        'substreet': substreet,
        'year': year,
        'sqm': sqm
    },
    'owner': [owner],
    'manager': [manager],
    'lease': {
        'leasee': [
            {
                'userId': leaseeId,
                'start': leaseeStart,
                'end': leaseeeEnd
            }
        ],
        'expire': leaseExpire,
        'percentIncrease': leasePercentIncrease,
        'dueDate': dueDate
    },
    'deposit': {
        'bank': 'Sample Bank',
        'description': 'This is a bank'
    }
}

Custom POJO:

public class Asset {
    private String mId;

    @SerializedName("name")
    private String mName;

    private String mCountry;
    private String mCity;
    private String mPostal;
    private String mStreet;
    private String mSubstreet;
    private int mYear;
    private int mSqm;

    @SerializedName("owner")
    private List<String> mOwners;

    @SerializedName("manager")
    private List<String> mManagers;

    private List<Leasee> mLeasees;
    private DateTime mLeaseExpiration;
    private int mPercentIncrease;
    private int mDueDate;
    private String mDepositBank;
    private String mDepositDescription;
}
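Gson's @SerializedName matches a field name at the current nesting level only, so mapping something like desc.country onto a flat POJO needs a custom deserializer that flattens the tree first. The flattening step itself is simple; here is a Python sketch of the idea (illustrative only — not Gson/Java, and the dotted-path convention is an assumption, not a Gson feature):

```python
import json

def flatten(obj, prefix=""):
    """Collapse nested dicts into a single flat dict whose keys are
    dotted paths, e.g. {'desc': {'city': ...}} -> {'desc.city': ...}."""
    flat = {}
    for key, value in obj.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            # Recurse into nested objects, extending the key path.
            flat.update(flatten(value, path + "."))
        else:
            flat[path] = value
    return flat

doc = json.loads('{"name": "a", "desc": {"city": "Oslo", "year": 1999}}')
print(flatten(doc))  # {'name': 'a', 'desc.city': 'Oslo', 'desc.year': 1999}
```

A Gson JsonDeserializer for Asset would do the equivalent walk over the JsonObject tree and assign the leaf values to the flat fields directly.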
doc_2496
C:\Users\me>php --info | grep "extension_dir"
extension_dir => ext => ext

C:\Users\me>php --info | grep "php.ini"
Configuration File (php.ini) Path => C:\Windows
Loaded Configuration File => C:\Program Files\php-5.6.8-Win32-VC11-x64\php.ini

Yes, they are. There is no C:\Windows\php.ini, but it doesn't matter -- the C:\Program Files\php-5.6.8-Win32-VC11-x64\php.ini is the file I updated, and it's the one loaded. Then I printed the modules list (php --modules) and noticed that it has nothing in common with the extensions setting in the loaded php.ini. I also tried to change other settings (like memory_limit), but PHP seems to ignore my php.ini and load the configuration from somewhere else. How do I detect this magical "somewhere" (meaning: where PHP actually loads its settings from) and how do I make it load the wished-for settings from the defined file?

A: The problem is caused by the fact that php.ini is being edited by an editor that wasn't run with elevated privileges. If the PHP folder sits inside "Program Files", you have to access the files within it with admin privileges. I discovered the unbelievable fact that the changes aren't saved into php.ini even though the editor shows them correctly, even after being closed and reopened! This can be simply verified by checking the php.ini content -- run more "C:\Program Files\... ...\php.ini" from the command line. The solution is obvious: you have to run the editor with elevated privileges, or grant user permissions to the folder with php.ini and then edit it.
doc_2497
<button @click.prevent="saveTradeClick" class="btn btn-primary">Save</button>

It triggers a method:

saveTradeClick: function (event) {
    console.log("click");
    this.$emit('SAVE');
    console.log("after emit");
},

and a child component should listen to this event to trigger a method:

mounted() {
    this.$parent.$on('SAVE', this.submitTrade);
}

This is not working. I get the console.log('click'); however, I get nothing out of the child component. When I look in the Vue devtools I get an $emit event, but that is all. Any ideas where I could go with this?

A: I suggest controlling the state from the parent to have more control, instead of your child component relying on a maybe-not-available parent. Example:

Parent

<template>
  <div>
    <button @click.prevent="saveTradeClick" class="btn btn-primary">Save</button>
    <child :traded="traded"></child>
  </div>
</template>

<script>
export default {
  data: () => ({
    traded: false,
  }),
  methods: {
    saveTradeClick() {
      this.traded = true;
    },
  },
};
</script>

Child

<template>
  <div></div>
</template>

<script>
export default {
  props: ['traded'],
  watch: {
    traded(val) {
      if (val) this.submitTrade();
    },
  },
};
</script>

Vue unwritten rules:

* Parents are allowed to reference children, e.g. via props or refs
* Children do not have any reference to the parent
* Children do not change data that is passed via props

A: Another idea, if you would like a child component to listen for events from the parent, is to create an event bus. This feature does not come out of the box, so you have to actually create a new instance of Vue and export it:

event-bus.js

import Vue from 'vue';
export const EventBus = new Vue();

You would then import the EventBus into the components you would like to interact with.

parent component

<template>
  <div class="pleeease-click-me" @click="emitGlobalClickEvent()"></div>
</template>

<script>
// Import the EventBus we just created.
import { EventBus } from './event-bus.js';

export default {
  name: 'Parent',
  data() {
    return {
      clickCount: 0,
    };
  },
  methods: {
    emitGlobalClickEvent() {
      this.clickCount++;
      // Send the event on a channel (i-got-clicked) with a payload (the click count).
      EventBus.$emit('i-got-clicked', this.clickCount);
    },
  },
};
</script>

So in your child component you can now listen for the event i-got-clicked.

child component

<script>
// Import the EventBus we just created.
import { EventBus } from './event-bus.js';

export default {
  name: 'Child',
  mounted() {
    EventBus.$on('i-got-clicked', clickCount => {
      console.log(`Oh, that's nice. It's gotten ${clickCount} clicks! :)`);
    });
  },
};
</script>

clickCount was passed from the parent, so you can share data along with listening for events. Another solution, depending on the size of your application, is to look into Vuex.
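Stripped of the Vue specifics, the event bus in the second answer is plain publish/subscribe. A minimal sketch of the pattern (in Python rather than JavaScript, purely to show the mechanics behind $on and $emit):

```python
class EventBus:
    def __init__(self):
        # Maps an event name to the list of handlers subscribed to it.
        self._handlers = {}

    def on(self, event, handler):
        # Equivalent of EventBus.$on(event, handler) in the Vue answer.
        self._handlers.setdefault(event, []).append(handler)

    def emit(self, event, *args):
        # Equivalent of EventBus.$emit(event, payload): call every
        # handler subscribed to this event with the payload.
        for handler in self._handlers.get(event, []):
            handler(*args)

bus = EventBus()
received = []
bus.on("i-got-clicked", lambda count: received.append(count))
bus.emit("i-got-clicked", 1)
bus.emit("i-got-clicked", 2)
print(received)  # [1, 2]
```

The bus decouples emitter and listener: neither holds a reference to the other, only to the shared bus — which is exactly why it sidesteps the fragile this.$parent lookup from the question.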
doc_2498
CREATE TABLE `example` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `object_id` int(11) NOT NULL DEFAULT '0',
    `value` varchar(200) DEFAULT NULL,
    PRIMARY KEY (`id`),
    UNIQUE KEY `object_id` (`object_id`)
);

Every time one of the systems inserts a new row, we need to have object_id set to id. We can't use 'before insert' since the id column is an auto_increment column, so its value is NULL before insert, and due to the limitations of MySQL 'after insert' triggers I can't do the following:

CREATE TRIGGER insert_example AFTER INSERT ON example
FOR EACH ROW SET NEW.object_id = NEW.id;

I can't update the code for either system, so I need a way to accomplish this on the database side. Both systems are going to be inserting new rows. How can I accomplish this?

A: Using a trigger which fires before the insert should do the job:

CREATE TRIGGER insert_example BEFORE INSERT ON example
FOR EACH ROW SET NEW.object_id = NEW.id;

EDIT: As the OP pointed out, NEW.id won't work with auto-increment; one could use the following trigger (use at your own risk):

CREATE TRIGGER insert_example BEFORE INSERT ON example
FOR EACH ROW SET NEW.object_id = (
    SELECT AUTO_INCREMENT
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = DATABASE()
    AND TABLE_NAME = 'example'
);

But I'd rather re-think this somewhat strange requirement -- why do you need the pk value twice in the table?

A: Is there any reason you can't use a BEFORE INSERT trigger? I've always seen AFTER INSERT triggers as a method to manipulate other tables rather than the table for which the trigger was executed. Rule of thumb: manipulate the table the trigger is running on = BEFORE INSERT; manipulate other tables = AFTER INSERT :)

A: I think your trigger will never be created in the first place because you can't assign to NEW.column_name in an AFTER INSERT trigger. Try doing this in a BEFORE INSERT trigger (PLEASE IGNORE THIS FIX AS IT WILL NOT WORK):

CREATE TRIGGER `insert_example` BEFORE INSERT ON `t`
FOR EACH ROW SET NEW.`object_id` = NEW.`id`;

Please change the table and column names as per your schema. Hope this helps.
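What makes this awkward is specific to MySQL: the auto-increment value doesn't exist yet in a BEFORE INSERT trigger, and an AFTER INSERT trigger may not modify the table that fired it. In an engine without the second restriction the intent is a one-line trigger. A sketch using SQLite through Python's sqlite3 module — not MySQL, so it only illustrates the desired end state (object_id mirroring id):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE example (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    object_id INTEGER NOT NULL DEFAULT 0,
    value TEXT
);
-- SQLite allows an AFTER INSERT trigger to update the same table,
-- which MySQL does not; that is why this is only an illustration.
CREATE TRIGGER insert_example AFTER INSERT ON example
BEGIN
    UPDATE example SET object_id = NEW.id WHERE id = NEW.id;
END;
""")
conn.execute("INSERT INTO example (value) VALUES ('first')")
conn.execute("INSERT INTO example (value) VALUES ('second')")
rows = conn.execute("SELECT id, object_id FROM example ORDER BY id").fetchall()
print(rows)  # [(1, 1), (2, 2)]
```

Seeing it stated this plainly also reinforces the second answer's point: a column that always equals the primary key carries no information, so questioning the requirement may be the real fix.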
doc_2499
Is there a way to set a specific MIME type string to be treated as JSON by Jersey?

A: In JAX-RS, you can specify the MIME type:

@POST
@Consumes("<client's MIME type>")
public void postClichedMessage(String message) {
    // Store the message
}

A: You can create a MessageBodyReader and MessageBodyWriter (i.e. JAX-RS entity providers) to handle any combination of Java type / MIME type that Jersey does not support out of the box. The link I posted is for the latest Jersey release. You might try finding the docs for your particular version so you don't end up using some feature that was not supported in the older release.
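Conceptually, the custom MessageBodyReader/Writer route amounts to registering extra media-type strings and routing them to the JSON codec. The dispatch itself is simple; a language-agnostic sketch in Python (illustrative only — not the Jersey API, and the vendor media type below is a made-up example):

```python
import json

# Media types we choose to treat as JSON, beyond application/json itself.
# "application/vnd.example+json" is a hypothetical vendor type.
JSON_LIKE = {"application/json", "application/vnd.example+json"}

def parse_body(content_type, body):
    # Strip parameters such as "; charset=utf-8" before matching.
    media_type = content_type.split(";")[0].strip().lower()
    # Many APIs also follow the "+json" structured-suffix convention.
    if media_type in JSON_LIKE or media_type.endswith("+json"):
        return json.loads(body)
    raise ValueError(f"unsupported media type: {media_type}")

print(parse_body("application/vnd.example+json; charset=utf-8", '{"ok": true}'))
# {'ok': True}
```

A Jersey MessageBodyReader plays the role of parse_body here: its isReadable() check decides which media types it claims, and readFrom() does the decoding.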