| id | text | title |
|---|---|---|
doc_3000
|
Can't figure out if this is the result of settings in my client, the server or XPages. Does the same thing on two different PC's.
I would think that the all-public would force it to open only public NABs but it does not appear so.
A: I asked the same question myself.
The answer, in the control add
addressBookDb="SERVER!!names.nsf"
From here.
Can I have the extlib name picker running in xPINC lookup the directory on the server?
A: After a fair bit of frustration I have this working for the Notes Client and the Web Client. Perhaps this is obvious to most of you, but it sure wasn't to me.
First, on the Name Picker, I created a namePickerAggregator. Then I added a dominoNABNamePicker.
In the addressBookDb property I put the following SSJS:
var server:String = @Name("[CN]",session.getCurrentDatabase().getServer());
var allNABs:Array = session.getAddressBooks().iterator();
var pubNABs = new Array;
var privNABs = new Array;
while (allNABs.hasNext()) {
var db:NotesDatabase = allNABs.next();
if (db.isPublicAddressBook()){
pubNABs.push(db.getFileName())
} else {
privNABs.push(db.getFileName())
}
db.recycle()
}
if (pubNABs[0] == ""){
return privNABs[0];
} else {
return server + "!!" + pubNABs[0];
}
I then added a second dominoNABNamePicker with the same block of code, except the return is:
if (pubNABs[1] != "") {
return server + "!!" + pubNABs[1];
} else {
return "";
}
This code works for both the Notes Client and the Web client so I'm now a happy camper, unless I find a gotcha somewhere.
A: Here is what I eventually did. I set a limit of 4 on the maximum number of address books I handle (not great, but it works; you can create as many as you want). Then I created a couple of sessionScope variables in the afterPageLoad event on the XPage, using this formula:
var allNABs:Array = session.getAddressBooks().iterator();
var pubNABs = new Array;
var privNABs = new Array;
while (allNABs.hasNext()) {
var db:NotesDatabase = allNABs.next();
if (db.isPublicAddressBook()){
pubNABs.push(db.getFilePath())
} else {
privNABs.push(db.getFilePath())
}
db.recycle()
}
sessionScope.put("ssPublicNABs", pubNABs);
sessionScope.put("ssPrivateNABs", privNABs);
Because I use several different Name Pickers on the same page, I did not want to repeat having to cycle through the address books.
Then I created 4 NamePicker controls and added 1, 2, 3 and 4 dominoNABNamePicker providers to each of the successive controls, and set the rendered property based on the number of public address books so they would not blow up on me. The db name property on each of the providers is:
var server:String = @Name("[CN]",session.getCurrentDatabase().getServer());
var pubNABs:Array = sessionScope.get("ssPublicNABs");
return server + "!!" + pubNABs[0];
where pubNABs[n] returns the correct filePath for the NAB. It works well in both Notes Client and the Web.
Then to make it not blow up on a local disconnected replica I created 4 more controls and did the same thing but used the privNABs with appropriate rendered properties so that there is no conflict. Seems like the long way around and I'm sure that there is an easier way, but it works.
| |
doc_3001
|
Fruits(string Name1, string Name2, String Name3)
This method is working fine:
Fruits("Apple", "Orange","Pineapple");
I got this error
Fruits("Apple", "Orange");
"No overload for method 'Fruits' takes 2 arguments."
A: as the error says you have to add another constructor with 2 parameters
Fruits(string Name1, string Name2)
or you have to pass another value when you create your Fruits-object
Fruits("Apple", "Orange", "whatever")
A: To add a variable number of parameters:
Fruits(params string[] fruits)
{
string firstparameter = fruits[0];
}
And you can call this method with any number of parameters:
Fruits("Banana");
Fruits("Apple","Orange");
Fruits("Pineapple", "Whatever", "Idontknow");
A: The answer is as @fubo recommended. However, I think you should read a proper C# tutorial and understand the basics. If you call a method, you must call it with the number of parameters it requires, and with the correct types as well. You can pass an empty string (or anything else) for the parameter you want to ignore. But be aware that if you do any further processing dependent on that parameter, it will be affected.
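(An aside, not from the original answers: since C# 4.0 you can also give parameters default values, which lets the two-argument call compile without a second overload. A minimal sketch, with "Unknown" as an assumed default:)
void Fruits(string name1, string name2, string name3 = "Unknown")
{
    // name3 falls back to "Unknown" when the caller omits it
    Console.WriteLine(name1 + ", " + name2 + ", " + name3);
}
Fruits("Apple", "Orange");              // compiles: name3 == "Unknown"
Fruits("Apple", "Orange", "Pineapple"); // all three supplied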
| |
doc_3002
|
List<String> list = Arrays.asList("Hello", "Hello", "World");
Map<String, Long> wordToFrequency = // what goes here?
So in this case, I would like the map to consist of these entries:
Hello -> 2
World -> 1
How can I do that?
A: Here is the simple solution by StreamEx:
StreamEx.of(list).groupingBy(Function.identity(), MoreCollectors.countingInt());
This has the advantage of reducing the Java stream boilerplate, namely the collect(Collectors.groupingBy(...)) call.
A: I think you're just looking for the overload which takes another Collector to specify what to do with each group... and then Collectors.counting() to do the counting:
import java.util.*;
import java.util.function.*;
import java.util.stream.*;
class Test {
public static void main(String[] args) {
List<String> list = new ArrayList<>();
list.add("Hello");
list.add("Hello");
list.add("World");
Map<String, Long> counted = list.stream()
.collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
System.out.println(counted);
}
}
Result:
{Hello=2, World=1}
(There's also the possibility of using groupingByConcurrent for more efficiency. Something to bear in mind for your real code, if it would be safe in your context.)
A: Here is an example for a list of objects:
Map<String, Long> requirementCountMap = requirements.stream().collect(Collectors.groupingBy(Requirement::getRequirementType, Collectors.counting()));
A: If you're open to using a third-party library, you can use the Collectors2 class in Eclipse Collections to convert the List to a Bag using a Stream. A Bag is a data structure that is built for counting.
Bag<String> counted =
list.stream().collect(Collectors2.countBy(each -> each));
Assert.assertEquals(1, counted.occurrencesOf("World"));
Assert.assertEquals(2, counted.occurrencesOf("Hello"));
System.out.println(counted.toStringOfItemToCount());
Output:
{World=1, Hello=2}
In this particular case, you can simply collect the List directly into a Bag.
Bag<String> counted =
list.stream().collect(Collectors2.toBag());
You can also create the Bag without using a Stream by adapting the List with the Eclipse Collections protocols.
Bag<String> counted = Lists.adapt(list).countBy(each -> each);
or in this particular case:
Bag<String> counted = Lists.adapt(list).toBag();
You could also just create the Bag directly.
Bag<String> counted = Bags.mutable.with("Hello", "Hello", "World");
A Bag<String> is like a Map<String, Integer> in that it internally keeps track of keys and their counts. But, if you ask a Map for a key it doesn't contain, it will return null. If you ask a Bag for a key it doesn't contain using occurrencesOf, it will return 0.
Note: I am a committer for Eclipse Collections.
A: Here are slightly different options to accomplish the task at hand.
using toMap:
list.stream()
.collect(Collectors.toMap(Function.identity(), e -> 1, Math::addExact));
using Map::merge:
Map<String, Integer> accumulator = new HashMap<>();
list.forEach(s -> accumulator.merge(s, 1, Math::addExact));
A: List<String> list = new ArrayList<>();
list.add("Hello");
list.add("Hello");
list.add("World");
Map<String, List<String>> collect = list.stream()
.collect(Collectors.groupingBy(o -> o));
collect.entrySet()
.forEach(e -> System.out.println(e.getKey() + " - " + e.getValue().size()));
| |
doc_3003
|
Is this possible with autotools? (Using autoconf and automake - Not libtool)
A: mkdir builddir
cd builddir
../foobar-1.2.3/configure --my --options (or)
/path/to/foobar-1.2.3/configure --my --options
make
make foobar
make check
make install
What I usually end up with when building manually is something like
cd foobar-1.2.3
(mkdir _b && cd _b && ../configure --prefix=$PWD/../_i)
make -C _b all check install installcheck
./_i/bin/foobar
vi foo.c bar.c foobar.h
gimp icons/foobar-moo.png
make -C _b install && ./_i/bin/foobar
Then I have the whole stuff related to the foobar program inside one directory foobar-1.2.3, including source code, built files, and a test installation.
The built files in foobar-1.2.3/_b are easily removed with rm -rf _b, and the test installation with rm -rf _i, in the course of editing the source tree from a shell with current working directory foobar-1.2.3.
Of course, you can use a variant of that and move the build/install dirs up one directory: foobar-1.2.3--b and foobar-1.2.3--i alongside foobar-1.2.3.
| |
doc_3004
|
For example in a State Diagram:
Both State_1 and State_2 have two internal states State_1.x and State_2.y (x = 1..2, y = 1..2)
@startuml
[*] --> State_1
State_1 --> State_2
state State_1 {
left to right direction
[*] --> State_1.1
[*] --> State_1.2
}
state State_2 {
top to bottom direction
[*] --> State_2.1
[*] --> State_2.2
}
@enduml
I have tried the code above; it doesn't work. Can we restrict the affected region of the 'top to bottom direction' command?
I have also tried the following code.
@startuml
[*] --> State_1
State_1 --> State_2
state State_1 {
/'not good..'/
[*] -> State_1.1
[*] -> State_1.2
[*] -> State_1.3
/'not good either'/
[*] -right-> State_1.1
[*] -right-> State_1.2
[*] -right-> State_1.3
}
state State_2 {
[*] --> State_2.1
[*] --> State_2.2
}
@enduml
PS, can any UML Modelling Software edit .eps and .svg files?
Thanks in advance!
| |
doc_3005
|
I configured the web.xml file under jasperserver/webapp/WEB-INF and also Tomcat's web.xml. Both are configured using this filter.
<filter>
<filter-name>CorsFilter</filter-name>
<filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
<init-param>
<param-name>cors.allowed.origins</param-name>
<param-value>http://10.11.200.42:3000</param-value>
</init-param>
<init-param>
<param-name>cors.allowed.methods</param-name>
<param-value>GET,POST,HEAD,OPTIONS,PUT</param-value>
</init-param>
<init-param>
<param-name>cors.allowed.headers</param-name>
<param-value>Content-Type,X-Requested-With,accept,Origin,Access-Control-Request-Method,Access-Control-Request-Headers</param-value>
</init-param>
<init-param>
<param-name>cors.exposed.headers</param-name>
<param-value>Access-Control-Allow-Origin,Access-Control-Allow-Credentials</param-value>
</init-param>
<init-param>
<param-name>cors.support.credentials</param-name>
<param-value>true</param-value>
</init-param>
<init-param>
<param-name>cors.preflight.maxage</param-name>
<param-value>1800</param-value>
</init-param>
</filter>
<filter-mapping>
<filter-name>CorsFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
A: Go to the location where you installed Jaspersoft (jaspersoft\jasperreports-server\apache-tomcat\webapps\jasperserver\WEB-INF) and configure the web.xml file using the code below.
<filter>
<filter-name>CorsFilter</filter-name>
<filter-class>com.jaspersoft.jasperserver.api.security.csrf.CorsFilter</filter-class>
<init-param>
<param-name>cors.allowed.origins</param-name>
<param-value>http://localhost:port</param-value>
</init-param>
<init-param>
<param-name>cors.allowed.methods</param-name>
<param-value>GET,POST</param-value>
</init-param>
<init-param>
<param-name>cors.exposed.headers</param-name>
<param-value>
Access-Control-Allow-Origin,Access-Control-Allow-Credentials
</param-value>
</init-param>
<init-param>
<param-name>cors.support.credentials</param-name>
<param-value>true</param-value>
</init-param>
<init-param>
<param-name>cors.preflight.maxage</param-name>
<param-value>1800</param-value>
</init-param>
</filter>
<filter-mapping>
<filter-name>CorsFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
| |
doc_3006
|
When one of the DBs goes down for some reason, all queries to the DB fail after N seconds (which is HikariCP's checkout timeout). This leads to a problem where a calling thread waits N seconds even when we could already know that the DB is down (because it has not been answering for some time).
Are there any mechanisms to have a smarter behavior?
| |
doc_3007
|
For Example, I want
<xsd:string xsi:type="xsd:string" id="ID_4"></xsd:string>
instead of
<xsd:string id="ID_4" xsi:type="xsd:string" />
Any ideas?
A: The two are semantically identical - they mean exactly the same thing.
The XML standard even says that these two are interchangeable.
Any conformant XML parser will not care which one you use - why do you?
| |
doc_3008
|
I used
pecl install apc
I got
C:\PHP>pecl install apc
downloading APC-3.0.19.tgz ...
Starting to download APC-3.0.19.tgz (115,735 bytes) ............
done: 115,735 bytes 47 source files, building
WARNING: php_bin c:\php\php.exe appears to have a suffix \php.exe, but config
variable php_suffix does not match running: msdev APC.dsp /MAKE "APC - Release"
ERROR: Did not understand the completion status returned from msdev.exe.
A: It is not really important whether you are running the Windows 32-bit or 64-bit version. What matters is which Apache (webserver) version you have installed (32/64). Since lots of PHP extensions (like APC) are not available for 64-bit systems, the most common setup is as follows:
*
*Operating system 32 or 64-bit (not really important). Apache 32-bit will run easily on Windows 64-bit. The difference is that for 32-bit Apache you MUST install PHP 32-bit. Once you install 64-bit PHP, you may find it difficult to install some extensions - there are almost no extensions available for the 64-bit PHP platform.
*therefore your webserver should be 32-bit if you care about special extensions, like APC, Imagick etc. Also you need to know whether your Apache is thread safe (TS) or not thread safe (NTS) and whether it has been compiled with Visual Studio 6 (VC6) or the newer Visual Studio 2008 (VC9). You will easily find all this info from the phpinfo() function.
*as for the APC, some nice compilation for Windows are available from http://dev.freshsite.pl/php-accelerators/apc.html.
A: For php 5.3 you use php.net/pierre/php_apc-3.1.10-5.3-vc9-x86.zip.
Download it and copy php_apc.dll to your php ext directory. (I choose the file under ts I have thread safe php installation. There is also an apc dll file for non thread safe.)
Add extension=php_apc.dll into your php.ini file
Restart your web server
Run phpinfo() to see if it's installed or not.
I am using php 5.4 and I downloaded php.net/pierre/php_apc-3.1.10-5.4-vc9-x86.zip and its working fine.
Hope this will help, mate. Good luck.
A: Installing an extension with the pecl command means :
*
*downloading the sources
*compiling them
And, generally speaking, a windows machine doesn't have what's required to compile software like PHP and/or PHP extensions.
A better / easier solution, in your case, would probably be to find a pre-compiled .dll of the extension, that matches your system and your version of PHP.
With a bit of luck, maybe one of the versions provided on http://downloads.php.net/pierre/ could be OK ?
(It kind of acts as a replacement for the old pecl4win, until the extensions for Windows are available on windows.php.net.)
For more informations about which version you should use, take a look at the Which version do I choose? section, in the left side-bar of http://windows.php.net/
A: Also, make sure that the compiled version from here http://downloads.php.net/pierre/
matches your php version, otherwise the extension will not load (php v. 5.2.17 requires php_apc.dll v 5.2.17.17 - which doesn't seem to be available as of this writing - I had to downgrade the php version to play with apc).
Another point: pierre's zip packages, at least the one I downloaded, did not include the management script. You can get it from here: http://pecl.php.net/package/APC - select the version you downloaded, then navigate to Browse Source, then find your version in the 'tags' folder. The apc.php script should be there.
A: There's no available version for php > 5.4.
I'm using APCu instead. Just download the dll and reference it in php.ini.
A: This website offers updated DLLs and installers for Apache, PHP and APC compiled to work on Windows 64-bit. I've been using it for a while and it works fine. You can get an APC version compatible with PHP 5.3.22 here
| |
doc_3009
|
To replicate the behaviour, I create two simple m-files:
% script.m
%---------
dbstop if error
func
% func.m
%-------
function func
y=table((1:4)','RowNames',{'a','b','c d','ef'})
y('zz',:) % Throws error to enter debugger
end % function func
I then enter the following commands
script
dbup % Enter base workspace to create new variable y
y=table((1:4)','RowNames',{'a','b','c d','ef'})
y(1,1) % Returns whole table, not data within table
The output is as follows:
>> script
y =
Var1
____
a 1
b 2
c d 3
ef 4
Error using func (line 3)
Unrecognized row name 'zz'.
Error in script (line 1)
func
65 throwAsCaller(ME)
K>> dbup % Enter base workspace to create new variable y
In workspace belonging to func (line 3)
K>> y=table((1:4)','RowNames',{'a','b','c d','ef'})
y =
Var1
____
a 1
b 2
c d 3
ef 4
K>> y(1,1) % Returns whole table, not data within table
ans =
Var1
____
a 1
b 2
c d 3
ef 4
As can be seen, y(1,1) yields the whole table rather than the entry in row 1, column 1.
NOTE: This problematic behaviour is not generally seen in debugger mode, but only after an error (which is when I want to use the debugger!). To see the expected nonproblematic behaviour, I set a breakpoint in func.m at the statement:
y('zz',:)
which for me is line 3, since I don't have any of the opening comment lines:
dbquit
clear all
clear classes
dbstop in func at 3
I then ran the following statements to get to the breakpoint and index into tables:
script % Stops in func.m at statement y('zz',:)
y(1,1) % Now yields entry at row 1, column 1
dbup % See if new table in base workspace also behaves well
y=table((1:4)','RowNames',{'a','b','c d','ef'}) % New table
y(1,1) % Returns entry at row 1, column 1, as expected
I am using Matlab version 2015a.
P.S. The original problem was encountered using row name indexing, but I've troubleshot the issue to the above simpler example.
A: Here is the explanation from TMW:
In your example, the error is thrown within the "subsref" method of the Table class. The "subsref" method is what defines the subscripted reference, or indexing, behaviors of objects in MATLAB. MATLAB allows users to overload the "subsref" method to define custom indexing behaviors for custom classes and objects.
In terms of "subsref" behavior for overloaded versions, the following information is mentioned in documentation (the links are included below):
"MATLAB does not call class-defined 'subsref' or 'subsasgn' methods within the overloaded methods. Within class methods, MATLAB always calls the built-in 'subsref' and 'subsasgn' functions."
"Calling the built-in enables you to use the default indexing behavior when defining a specialized indexing."
http://www.mathworks.com/help/matlab/matlab_oop/indexed-reference-and-assignment.html#br09nsm
http://www.mathworks.com/help/matlab/ref/subsref.html#moreabout
Now, in reference to your example, since the error is thrown within the "subsref" method of the Table class, this establishes the setting for the following steps in your debugging process. When using the "dbup" command to shift the workspace to the workspace of the caller function, it allows you to inspect the variables in the function's workspace, to see what was passed into the function that threw the error, and to determine what might have led to this issue.
However, in your example, we are still technically stuck in the catch-portion of the table "subsref" method, as a result of the error thrown from erroneous indexing. As mentioned above, by default, MATLAB does not call class-defined "subsref" methods from WITHIN an overloaded "subsref" method. We are still technically inside the table's overloaded method, so any indexing takes on the built-in behavior during the debugging. This is why the entire table is returned; this is the built-in behavior. The correct indexing intended for tables is not feasible in this debugging process.
As an additional note, part of the motivation behind this behavior is to avoid falling into an infinite loop by calling the "subsref" method, through indexing, while already inside that method. Instead of calling itself, it will call the built-in method.
| |
doc_3010
|
The file upload form helper auto-generates a 75x75 thumbnail after upload, and shows it when you edit an existing record.
Is there a way to change this thumbnail size so that it represents the correct aspect ratio of the original image?
The upload form helper looks like this at the moment:
<%= f.attachinary_file_field(:logo, :cloudinary => { :transformation => {:width=>400, :height=>200, :crop=>:fill }}) %>
thanks.
| |
doc_3011
|
*
*Cars with the start and end time of parking, along with their length
in meters.
*Total street length in meters (500m)
*Number of cars
*Opening hours of street parking (2-22)
--> Start and end times are in whole hours and length in whole meters.
I was thinking of splitting the street up into short 1-meter segments, but that wouldn't guarantee an optimal solution, as I wouldn't be able to optimize the locations based on later segments.
Objective: The goal is to fit all cars in the street parking (Assumed possible). Output will be the location of the car during the given time period.
The goal is similar to this post: Create distribution of available values - Python, but the length has to taken into account: a car can't be split up.
This can be seen in the figure from the original post, with car IDs filled in (image not reproduced here).
(I figured out this might be a linear programming or multi-objective optimization problem?)
| |
doc_3012
|
Now I am dealing with a POST request that uploads an NSDictionary with extra parameters for the request.
Here is what the ASIHTTPRequest code looks like:
NSMutableData* mPostData = [NSMutableData dataWithData:[postData dataUsingEncoding:NSUTF8StringEncoding]];
NSString *msgLength = [NSString stringWithFormat:@"%d", [postData length]];
[r setPostBody: mPostData];
[r addRequestHeader: @"Content-Length" value:msgLength];
}
postData is an NSDictionary that has keys / values - which is based on the action taken. For example - uploading an image will have extra parameters. Completing a user registration will have different parameters - but use the same method this code is found in.
The delegate calls this bit of code:
//Request must be ASIFormDataRequest
- (BOOL) addFileWithPath:(NSString*) filePath fileName: (NSString*)fileName ofType: (NSString*) fileType withKey: (NSString*) fileKey uploadProgressDelegate:(id) uploadProgressDelegate
{
if ([request isKindOfClass:[ASIFormDataRequest class]])
{
ASIFormDataRequest *formRequest = (ASIFormDataRequest *) request;
NSLog(@"%@ %@ %@ %@",filePath,fileName,fileType,fileKey);
if (uploadProgressDelegate)
{
[formRequest setUploadProgressDelegate:uploadProgressDelegate];
}
NSLog(@"filename = %@",fileName);
[formRequest setFile:filePath withFileName:fileName andContentType:fileType forKey:fileKey];
return YES;
}
else
{
NSLog(@"WebService must be initialised with PostDataValuesAndKeys so that ASIFormDataRequest is made");
return NO;
}
}
Now, from the amount of digging I have done, I can only see this bit of AFNetworking code - which looks to me like it's from version 1.x:
NSString *urlString = @"yourUrl";
NSURL* url = [NSURL URLWithString:urlString];
AFHTTPClient *httpClient = [[AFHTTPClient alloc] initWithBaseURL:url];
NSUserDefaults *userDefaults = [NSUserDefaults standardUserDefaults];
NSURL *localVideoURL = [NSURL URLWithString:[userDefaults objectForKey:@"videoURL"]];
NSData *videoData = [NSData dataWithContentsOfURL:localVideoURL];
NSMutableURLRequest *request = [httpClient multipartFormRequestWithMethod:@"POST" path:nil parameters:nil constructingBodyWithBlock: ^(id <AFMultipartFormData>formData) {
[formData appendPartWithFileData:videoData name:@"video_file" fileName:@"testvideo.mov" mimeType:@"video/quicktime"];
[formData appendPartWithFormData:[[BAUserInfoParser userInfoJson] dataUsingEncoding:NSUTF8StringEncoding] name:@"userInfo"];
[formData appendPartWithFormData:[[userDefaults objectForKey:@"transactionReceiptData"] dataUsingEncoding:NSUTF8StringEncoding] name:@"transactionData"];
}];
AFHTTPRequestOperation *operation = [[AFHTTPRequestOperation alloc] initWithRequest:request];
[operation setUploadProgressBlock:^(NSUInteger bytesWritten, long long totalBytesWritten, long long totalBytesExpectedToWrite) {
// NSLog(@"Sent %lld of %lld bytes", totalBytesWritten, totalBytesExpectedToWrite);
float uploadPercentge = (float)totalBytesWritten / (float)totalBytesExpectedToWrite;
float uploadActualPercentage = uploadPercentge *100;
[lblUploadInfoText setText:[NSString stringWithFormat:@"%.2f %%",uploadActualPercentage]];
if (uploadActualPercentage >= 100) {
lblStatus.text = @"Waitting for response ...";
}
progressBar.progress = uploadPercentge;
}];
[httpClient enqueueHTTPRequestOperation:operation];
[operation setCompletionBlockWithSuccess:^(AFHTTPRequestOperation *operation, id responseObject) {
lblStatus.text = @"Upload Complete";
NSData *JSONData = [operation.responseString dataUsingEncoding:NSUTF8StringEncoding];
NSDictionary *jsonObject = [NSJSONSerialization JSONObjectWithData:JSONData options:NSJSONReadingMutableContainers error:nil];
}
failure:^(AFHTTPRequestOperation *operation, NSError *error) {
NSLog(@"error: %@", operation.responseString);
NSLog(@"%@",error);
}];
[operation start];
}
In version 2 of AFNetworking there are new convenience methods for this sort of thing.
Here is the POST method with the extra block:
[self.manager POST:url parameters:urlParameters constructingBodyWithBlock:^(id<AFMultipartFormData> formData) {
// postData
} success:^(NSURLSessionDataTask *task, id responseObject) {
//Success
} failure:^(NSURLSessionDataTask *task, NSError *error) {
//Failure
}];
However its the constructingBodyWithBlock I am having an issue with. I'm not sure what I need to do to take a NSDictionary object and upload it in that block.
A: urlParameters should be an NSDictionary, as it is the object that is being uploaded.
NSDictionary *urlParameters (make your dictionary)
parameters:urlParameters
this works for me.
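(For illustration only - a minimal sketch of that idea: pass the NSDictionary as the parameters: argument and only append the file inside the body block. Here self.manager is assumed to be an AFHTTPSessionManager, and the dictionary keys, field name and file data are hypothetical:)
NSDictionary *postData = @{@"userInfo": userInfoJson, @"action": @"uploadImage"}; // hypothetical keys and values
[self.manager POST:url parameters:postData constructingBodyWithBlock:^(id<AFMultipartFormData> formData) {
    // AFNetworking serializes postData into form parts itself;
    // only the file needs to be appended by hand.
    [formData appendPartWithFileData:imageData
                                name:@"image_file"
                            fileName:@"photo.png"
                            mimeType:@"image/png"];
} success:^(NSURLSessionDataTask *task, id responseObject) {
    NSLog(@"Upload complete: %@", responseObject);
} failure:^(NSURLSessionDataTask *task, NSError *error) {
    NSLog(@"Upload failed: %@", error);
}];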
| |
doc_3013
|
Let's also assume that 3 documents out of 10 have checked:1 and the others checked:0.
When I search in lucene
checked:1 - returns correct result (3)
checked:0 - returns correct result (7)
-checked:1 - returns correct result (7)
-checked:0 - returns correct result (3)
BUT
-(-(checked:1)) - suddenly returns wrong result (10, i.e. entire data set).
Any idea why the Lucene query parser acts so weird?
A: Each Lucene query has to contain at least one positive term (either MUST/+ or SHOULD) so it matches at least one document. So your queries -checked:1 and -checked:0 are invalid, and I am surprised you are getting any results.
These queries should (most likely) look like this:
*
*+*:* -checked:1
*+*:* -checked:0
Getting back to your problem: double negation makes no sense in Lucene. Why would you have double negation, what are you trying to query?
Generally speaking, don't look at Lucene query operators (! & |) as Boolean operators, they aren't exactly what you think they are.
A: After some research and trial and error, and building on the answer from midas, I came up with a method to resolve this inconsistency. When I say inconsistency, I mean from a common-sense view for a user. From an information-retrieval perspective, midas has linked an interesting article which explains why such a query makes no sense.
So, the trick is to pair each negative expression with the MatchAllDocsQueryNode class; namely, the rewritten query has to look like this:
-(-(checked:1 *:*) *:*)
Then the query will produce the expected result. I have accomplished it by writing my own nodeprocessor class, which performs necessary operations.
| |
doc_3014
|
Please help. Let me know if you need any additional details.
A: Most likely one of your libraries was added to the project with a relative path which goes outside of the zipped folder, or with an absolute path which also isn't included in the archive.
| |
doc_3015
|
Also when I run the command 'heroku ps' it tells me it crashed and gives me this output
=== web (1X): `bin/rails server -p $PORT -e $RAILS_ENV`
web.1: crashed 2014/10/06 17:51:39 (~ 18s ago)
I followed the tutorial here https://devcenter.heroku.com/articles/getting-started-with-rails4#local-workstation-setup
I tried running heroku logs, but there really isn't anything helpful there; all it says is status code 503 and code=H10.
Is there a way to get a more accurate log so I can have a better idea of what is happening here?
| |
doc_3016
|
i.e. a Map<String, String> that contains several maps, without declaring any type.
So that I can later manipulate it and use toJson and get a string back.
Is there a way to create a general data collection from a json source ?
A: Try Gson
Gson gson = new Gson();
Type mapOfStringStringType = new TypeToken<Map<String, String>>(){}.getType();
Map<String, String> resultSet = gson.fromJson(JSONString, mapOfStringStringType);
| |
doc_3017
|
public class ItemDto {
private String fullname;
private List<SubitemDto> subitemList;
}
public class SubitemDto{
private String fullname;
}
<tr th:each="subitem : ${subitemlist}">
<td>
<input type="hidden" name="fullname" th:value="${subitem.fullname}"/>
</td>
</tr>
Spring Boot and Thymeleaf
My goal is to bind the correct value to the child input from the child object.
| |
doc_3018
|
If we're using client-initiated ADO.Net transactions, then when we try to execute a command that is not part of the current transaction while the transaction is underway, we'll receive an error. Why is that?
thanx
A: You can't execute a command that is 'not part of the current transaction'. Once a SqlTransaction has been started, the BEGIN TRANSACTION statement has been executed and the server has enrolled your session in a transaction. Any new statement coming in on the same session will be part of the transaction. The ADO.Net SqlClient enforces this by requiring you to assign the proper SqlTransaction to the SqlCommand.
If you need to execute a statement outside the current transaction, you need to get a new connection to the database and execute your statement on it.
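(A minimal sketch of that two-connection approach, purely for illustration - the table names and connString are assumptions, and System.Data.SqlClient is assumed to be imported:)
using (var conn = new SqlConnection(connString))
{
    conn.Open();
    SqlTransaction tx = conn.BeginTransaction();

    // Every command on this connection must be handed the transaction.
    var inTx = new SqlCommand("UPDATE Accounts SET Balance = Balance - 10", conn, tx);
    inTx.ExecuteNonQuery();

    // A statement that must run outside the transaction gets its own connection.
    using (var conn2 = new SqlConnection(connString))
    {
        conn2.Open();
        var outside = new SqlCommand("SELECT COUNT(*) FROM AuditLog", conn2);
        outside.ExecuteScalar();
    }

    tx.Commit();
}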
BTW, you said ADO.Net transactions and I took your word for it; the .Net System.Transactions components have somewhat different behaviour.
| |
doc_3019
|
let listEntrants = await Promise.all(listCalls.map(async call =>{
let entrant = await getEntrantId(call.url);
return {attendee : call.attendee, entrant : entrant};
}));
listCalls holds the URLs, and the method getEntrantId() returns the info I want.
Thanks.
| |
doc_3020
|
private void insertEvent(int start, int end, String location, String comment, String title) {
ContentValues values = new ContentValues();
TimeZone tz = TimeZone.getDefault();
values.put("calendar_id", 1);
values.put("title", title);
values.put("description", comment);
values.put("eventLocation", location);
values.put("dtstart", start);
values.put("dtend", end);
values.put("allDay", 0);
values.put("rrule", "FREQ=YEARLY");
values.put("eventTimezone", tz.getID());
Uri l_eventUri;
if (android.os.Build.VERSION.SDK_INT <= 7) {
// the old way
l_eventUri = Uri.parse("content://calendar/events");
} else {
// the new way
l_eventUri = Uri.parse("content://com.android.calendar/events");
}
Uri l_uri = getActivity().getContentResolver()
.insert(l_eventUri, values);
}
Here is another try:
private void insertEvent2(int start, int end, String location, String comment, String title){
ContentResolver cr = getActivity().getContentResolver();
ContentValues eventsArray = new ContentValues();
ContentValues values = new ContentValues();
TimeZone timeZone = TimeZone.getDefault();
values.put(CalendarContract.Events.DTSTART,start);
values.put(CalendarContract.Events.DTEND,end);
values.put(CalendarContract.Events.EVENT_TIMEZONE, timeZone.getID());
values.put(CalendarContract.Events.TITLE, title);
values.put(CalendarContract.Events.DESCRIPTION, title);
values.put(CalendarContract.Events.EVENT_LOCATION,location);
values.put(CalendarContract.Events.CALENDAR_ID, 1);
if (ActivityCompat.checkSelfPermission(getActivity(), Manifest.permission.WRITE_CALENDAR) != PackageManager.PERMISSION_GRANTED) {
// TODO: Consider calling
// ActivityCompat#requestPermissions
// here to request the missing permissions, and then overriding
// public void onRequestPermissionsResult(int requestCode, String[] permissions,
// int[] grantResults)
// to handle the case where the user grants the permission. See the documentation
// for ActivityCompat#requestPermissions for more details.
return;
}
eventsArray = values;
Uri l_uri = getActivity().getContentResolver()
.insert(CalendarContract.Events.CONTENT_URI, values);
}
But I can't see the events I have created when I look at the default Android calendar. Do you have any idea how I should insert an event?
A: Use a content resolver to add events to the calendar:
ContentResolver cr = Env.currentActivity.getContentResolver();
ContentValues[] eventsArray = new ContentValues[projectDataList.size()];
for (int i = 0; i < projectDataList.size(); i++) {
projectDetailData = projectDataList.get(i);
ContentValues values = new ContentValues();
TimeZone timeZone = TimeZone.getDefault();
values.put(CalendarContract.Events.DTSTART, (long) (Util.dateToMiliSec(shift_start_date) + (Util.convertTimeIntoSecound(projectDetailData.shift_start_time, "HH:mm:ss") * 1000)));
values.put(CalendarContract.Events.DTEND, (long) (Util.dateToMiliSec(shift_end_date) + (Util.convertTimeIntoSecound(projectDetailData.shift_end_time, "HH:mm:ss") * 1000)));
values.put(CalendarContract.Events.EVENT_TIMEZONE, timeZone.getID());
values.put(CalendarContract.Events.TITLE, title);
values.put(CalendarContract.Events.DESCRIPTION, title);
values.put(Events.EVENT_LOCATION, work_location);
values.put(CalendarContract.Events.CALENDAR_ID, 1);
if (ActivityCompat.checkSelfPermission(Env.currentActivity, Manifest.permission.WRITE_CALENDAR) != PackageManager.PERMISSION_GRANTED) {
// TODO: Consider calling
// ActivityCompat#requestPermissions
// here to request the missing permissions, and then overriding
// public void onRequestPermissionsResult(int requestCode, String[] permissions,
// int[] grantResults)
// to handle the case where the user grants the permission. See the documentation
// for ActivityCompat#requestPermissions for more details.
return;
}
eventsArray[i] = values;
}
int a = cr.bulkInsert(CalendarContract.Events.CONTENT_URI, eventsArray);
| |
doc_3021
|
I would, however, like to retrieve the list of IDs of my records at the get_raw_records function level, before the data gets processed and formatted.
The reason is that I am using filters, and I would like to call an action in the view that will affect ONLY the filtered records (not just the ones on the current page).
Could you please help me ?
My method looks like this:
def get_raw_records
query = Book.where(user_id: user.id)
query = query.where(subject: params[:subject_id]) if params[:subject_id].present?
query
end
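(Not from the original post - one possible sketch: pluck the IDs from the same relation before returning it, so they cover everything the filters matched rather than a single page. The @filtered_ids name is an assumption:)
def get_raw_records
  query = Book.where(user_id: user.id)
  query = query.where(subject: params[:subject_id]) if params[:subject_id].present?
  @filtered_ids = query.pluck(:id) # IDs of all filtered records, before pagination
  query
end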
| |
doc_3022
|
For example, if a user is on the 'process_mid' step, then after a reboot he can't go on to 'process_end'. The user can only begin a new stage by typing the 'start' command.
bot = telebot.TeleBot(TOKEN)

@bot.message_handler(commands=['start'])
def process_start(message):
    text = 'start'
    bot.send_message(message.chat.id, text)
    bot.register_next_step_handler(message, process_mid)

def process_mid(message):
    text = 'mid'
    bot.send_message(message.chat.id, text)
    bot.register_next_step_handler(message, process_end)

def process_end(message):
    text = 'end'
    bot.send_message(message.chat.id, text)

bot.polling(none_stop=True)
A: Try to store your user's state along with the user's chat ID in a database and verify the state from there.
For that, try to create something like:
from enum import Enum

class States(Enum):
    # Enter all states as numbers
    S_START = 0
...
Also, create a function for getting the state from the DB:
def get_current_state(user_id):
    # getting the state by the user's chat ID from the DB
    ...
Afterwards, just write a state to the DB for each user's chat ID (change it in every handler you need) and validate it in the handler func:
@bot.message_handler(func=lambda message: get_current_state(message.chat.id) == config.States.S_START.value)
def some_function(message):
    ...
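(A minimal in-memory sketch of this pattern, with a plain dict standing in for the database - purely for illustration:)
user_states = {}  # chat_id -> state value; a real bot would persist this in a DB

def get_current_state(user_id):
    # fall back to the start state for users we have never seen
    return user_states.get(user_id, States.S_START.value)

def set_state(user_id, state):
    user_states[user_id] = state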
A: From my experience with this bot, you need to save all data to a file (or database, for that matter) if you want to pick up from where you left off.
You can save your progress along the way, and in the process_start function identify the user by his message.chat.id at the entrance to the function; if that value exists in the file (database), then register the next step accordingly.
A: This example will show you how to use the register_next_step handler:
https://github.com/eternnoir/pyTelegramBotAPI/blob/master/examples/step_example.py
| |
doc_3023
|
And I want to convert it, as stored in SQL Server (data type = date), to June, 25, 2013.
Please advise me.
A: Please check whether this is fine,
SELECT DATENAME(MM, CONVERT(DATE, '25/06/2013', 104)) + RIGHT(CONVERT(VARCHAR(12), CONVERT(DATE, '25/06/2013', 104), 107), 9) AS Date_format
A: SQL Server provides convert(); you have to store your data in varchar and then use this function:
CONVERT(VARCHAR(24),GETDATE(),113)
or
SELECT CAST(DAY(GETDATE()) AS VARCHAR(2)) + ' ' +
DATENAME(MM, GETDATE()) + ' ' +
RIGHT(CAST(YEAR(GETDATE()) AS VARCHAR(4)), 2) AS [DD Month YY]
http://www.sql-server-helper.com/tips/date-formats.aspx
A: Will this work?
SET DATEFORMAT DMY
DECLARE @DT VARCHAR(15) = '26/05/2014'
SELECT
DATENAME(MONTH,CAST(@DT AS DATETIME)) +','+
CAST(DATEPART(DAY,CAST(@DT AS DATETIME)) AS VARCHAR(2))+','+
CAST(DATEPART(YEAR,CAST(@DT AS DATETIME)) AS VARCHAR(4)) Dt
A: Check out the below link of convert function which can be used:
http://www.w3schools.com/sql/func_convert.asp
A: I wrote this useful extension method:
public static string ToSqlString(this DateTime dt)
{
return "CONVERT(DATETIME, '" + dt.Year + "-" + dt.Month + "-" + dt.Day + " " + dt.Hour + ":" + dt.Minute + ":" + dt.Second + "." + dt.Millisecond + "', 21 )";
}
A: Just check the below. There are format options available, as shown at the link:
Different date formats in SQL Server: http://www.sql-server-helper.com/sql-server-2008/sql-server-2008-date-format.aspx
You just change the 113 to the desired format value, as described in the link above.
declare @d datetime = getdate()
select CONVERT( varchar(11) , @d , 113)
| |
doc_3024
|
Everything works fine with my code and I am able to query what I have stored as well. But I am still not sure whether my data was completely stored or not!
I know that Jena TDB indexes the content of the file and that several indexes are built for one file, which will be stored in a specified folder. But how do I check that the database is created and that all the RDF files I provide to TDB will be stored with the previous ones?
Is there any way to do so, maybe online or in Java? And is my code enough to work with a big amount of data or not?
public static void main(String[] args) {
String directory = "/*location*/ ";
Dataset dataset = TDBFactory.createDataset(directory);
Model tdb = dataset.getNamedModel("RDFData");
// read the input file
String source = "/*location*/rdfstorage.rdf";
FileManager.get().readModel( tdb, source);
tdb.close();
dataset.close();
}
A: Check the location and see if the files have been updated.
It is better to use a transaction. Your code is OK but if it is interrupted, the store may be corrupted.
https://jena.apache.org/documentation/tdb/tdb_transactions.html
If the source is large, use the bulkloader from the command line.
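(For illustration, wrapping the question's load in a write transaction might look roughly like this - a sketch against the TDB1 API used above, assuming org.apache.jena.query.ReadWrite is imported:)
Dataset dataset = TDBFactory.createDataset(directory);
dataset.begin(ReadWrite.WRITE);
try {
    Model tdb = dataset.getNamedModel("RDFData");
    FileManager.get().readModel(tdb, source);
    dataset.commit(); // make the load durable
} finally {
    dataset.end();    // always release the transaction
}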
| |
doc_3025
|
The expected CLI output is as follows, but I cannot see prompts like "What type of extension do you want to create?" (Temporarily I cannot insert an image, sorry.)
[icon] Welcome to the Visual Studio Code Extension generator!
? What type of extension do you want to create? < Use arrow keys>
> New Extension <TypeScript>
> New Extension <JavaScript>
> New Color Theme
> New Language Support
> New Code Snippets
...
I reinstalled Yeoman with $ npm install -g yo generator-code, but in vain. Has anyone experienced this problem or got an idea? Thank you very much!
OS: macOS Mojave 10.14.4
yeoman's version: 2.0.6
VS Code's version: 1.33.1 (1.33.1)
A: For me, waiting ~2 minutes helped to resolve the issue.
After that time the following message was printed:
Unable to fetch latest vscode version: Error: [object Object]
and then the questions appeared.
| |
doc_3026
|
Emulator:
Running on 127.0.0.1:9099
main.dart:
void main() async {
WidgetsFlutterBinding.ensureInitialized();
await Firebase.initializeApp(options: DefaultFirebaseOptions.currentPlatform,);
try {
await FirebaseAuth.instance.useAuthEmulator('localhost', 9099);
} catch (e) {
// ignore: avoid_print
print(e);
}
runApp(
MaterialApp(
title: "Foo",
home: buildContent(),
),
);
}
Registration function:
void createUser() async {
print("createUser()");
try {
final credential = await FirebaseAuth.instance.createUserWithEmailAndPassword(
email: nameController.text,
password: passwordController.text,
);
//final credential = await FirebaseAuth.instance.createUserWithEmailAndPassword(email: nameController.text, password: passwordController.text);
} on FirebaseAuthException catch (e) {
if (e.code == 'weak-password') {
print('The password provided is too weak.');
} else if (e.code == 'email-already-in-use') {
print('The account already exists for that email.');
}
} catch (e) {
print(e);
}
}
Edit:
I keep getting this message when I call "createUserWithEmailAndPassword":
W/System (26859): Ignoring header X-Firebase-Locale because its value was null.
A: In your createUser() function I think you're sending empty values to Firebase.
Pass the request parameters like this and try it again:
void createUser(String name, String password) async {
print("createUser()");
try {
final credential = await FirebaseAuth.instance.createUserWithEmailAndPassword(
email: name,
password: password,
);
print("User Created Success);
//final credential = await FirebaseAuth.instance.createUserWithEmailAndPassword(email: nameController.text, password: passwordController.text);
} on FirebaseAuthException catch (e) {
if (e.code == 'weak-password') {
print('The password provided is too weak.');
} else if (e.code == 'email-already-in-use') {
print('The account already exists for that email.');
}
} catch (e) {
print(e);
}
}
A: So after some trial and error I ended up adding:
android:usesCleartextTraffic="true"
To the manifest file:
...\my_project\android\app\src\main\AndroidManifest.xml
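(For reference, the attribute goes on the <application> element of that manifest - a fragment only, with the other attributes and children omitted:)
<application
    android:usesCleartextTraffic="true">
    <!-- existing activity declarations stay as they are -->
</application>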
I am not sure I like the "fix", as I think the requests are sent unencrypted. A Google search gives me this description:
Android 6.0 introduced the usesCleartextTraffic attribute under the application element in the Android manifest. The default value in Android P is "false". Setting this to true indicates that the app intends to use clear network traffic.
| |
doc_3027
|
The console log displays the following information:
Exception in thread "main" java.lang.ClassCastException: com.google.gson.internal.LinkedTreeMap cannot be cast to Auth$Profile
at Auth.<init>(Auth.java:30)
Here is my code:
public Auth(File profilesFile) {
try {
ProfilesJSON e = (ProfilesJSON)this.gson.fromJson(new FileReader(profilesFile), ProfilesJSON.class);
Map ps = e.authenticationDatabase;
Iterator var5 = ps.keySet().iterator();
while(var5.hasNext()) {
String name = (String)var5.next();
Profile p = (Profile)ps.get(name);
if(p != null) {
if(p.displayName == null || p.displayName.length() == 0) {
p.displayName = p.username;
}
this.profiles.add(p);
}
}
} catch (FileNotFoundException var7) {
;
} catch (NullPointerException var8) {
;
}
}
public class Profile {
public String username;
public String password;
public String uid;
public String displayName;
public String name;
public String playerUID;
public Profile(String u, String t, String id, String d) {
this.username = u;
this.password = t;
this.uid = id;
this.displayName = d;
}
}
public class ProfilesJSON {
public Map profiles;
public String selectedProfile;
public String password;
public Map authenticationDatabase;
}
Line 30 is:
Profile p = (Profile)ps.get(name);
This is a part of my code. My idea is that if the player presses "Remember Password", the game will generate a .json file to store his information. I just want to know what I did wrong; the other code I can write myself.
A: Your ps.get(name) is returning a com.google.gson.internal.LinkedTreeMap object instead of a Profile, because the raw Map gives Gson no element type to deserialize to.
Try changing it to:
LinkedTreeMap p = (LinkedTreeMap) ps.get(name);
Your code doesn't show errors because there's no error at compile time; ClassCastException is a runtime exception.
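(An alternative worth noting, not part of the original answer: if the Map fields are declared with explicit type parameters, Gson deserializes the nested objects as Profile directly and the cast becomes unnecessary - a sketch:)
public class ProfilesJSON {
    public Map<String, Profile> profiles;
    public String selectedProfile;
    public String password;
    // typed, so Gson builds Profile objects instead of LinkedTreeMaps
    public Map<String, Profile> authenticationDatabase;
}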
| |
doc_3028
|
So how can I fix this problem? Because when I call the function, the other user is logged out and this user is logged in.
this code:
$creds = array(
'user_login' => 'mysuser',
'user_password' => 'mypassword',
'remember' => false
);
$user = wp_signon( $creds, false );
A: wp_signon() is a function to sign in user with the provided credentials.
If you only want to check the password, using wp_check_password() is the way to go.
$creds = array(
'user_login' => 'mysuser',
'user_password' => 'mypassword'
);
$user = get_user_by('login', $creds['user_login']);
$isValidCreds = $user && wp_check_password($creds['user_password'], $user->data->user_pass, $user->ID);
wp_check_password function reference
| |
doc_3029
|
Error: Main parameters are required ("file1 [file2 file3...]") Usage:
[options] file1 [file2 file3...] Options:
-d
The directory where the file(s) will be created
Default: .
Here is my script
@Test
public void VerfiyLoginWordpress()
{
WebDriver driver=new ChromeDriver();
driver.manage().window().maximize();
driver.get("https://wordpress.com/wp-login.php?redirect_to=https%3A%2F%2Fwordpress.com%2F");
LoginPage login = new LoginPage(driver);
login.TypeUserName();
login.typePassword();
login.RememberMe();
login.clickOnLoginButton();
driver.quit();
}
Can someone please help out a noob? Thank you in advance :)
A: There are a couple of things you need to do here:
*
*You have to specify the location of the chrome driver in the beginning:
System.setProperty("webdriver.chrome.driver","C:\\your_driver_folder\\chromedriver.exe");
*Never use driver.manage().window().maximize(); to handle the Chrome browser; rather, handle it through the ChromeOptions class, as sketched below.
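(A minimal sketch of that ChromeOptions approach - the driver path is a placeholder:)
System.setProperty("webdriver.chrome.driver", "C:\\your_driver_folder\\chromedriver.exe");
ChromeOptions options = new ChromeOptions();
options.addArguments("--start-maximized"); // maximize via options instead of window().maximize()
WebDriver driver = new ChromeDriver(options);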
Let me know if this helps you.
| |
doc_3030
|
What I'm trying to achieve here is a left-nav component that I can include on many pages, with a selected attribute on the tag that I can use to key off of and select the corresponding core-item.
For the life of me I cannot get it working. I guess I'm confused about piercing the shadow DOM from within JS? Not really sure what the best approach here is.
A: There are some issues with your code.
The whole menu template should look like (note the setting of selected attribute on paper-item):
<core-menu id="nav">
<template repeat='{{node in nodes}}'>
<paper-item id="{{node.name | lowercase}}" selected='{{selected == node.name}}'>
<a href="{{node.location}}" tabindex="-1">{{node.name}}</a>
</paper-item>
</template>
</core-menu>
I did not get why you needed two nested templates, so I simplified things a bit. Now the only thing left to do is to set the selected attribute of your demo menu to the proper name (id is not needed at all; it's fine to compare items by name):
<cw-style-demo-menu selected="Assets">
Full live preview: http://plnkr.co/edit/E2B94tfAhJXnPZrusjtz?p=preview
| |
doc_3031
|
const lasso_start = (e) => {
console.log(e);
lasso.items()
.attr("r",3.5) // reset size
.classed("not_possible",true)
.classed("selected",false);
};
const lasso_draw = (e) => {
// Style the possible dots
lasso.possibleItems()
.classed("not_possible",false)
.classed("possible",true);
// Style the not possible dot
lasso.notPossibleItems()
.classed("not_possible",true)
.classed("possible",false);
};
var lasso_end = (e) => {
// Reset the color of all dots
lasso.items()
.classed("not_possible",false)
.classed("possible",false);
// Style the selected dots
lasso.selectedItems()
.classed("selected",true)
.attr("r",7);
// Reset the style of the not selected dots
lasso.notSelectedItems()
.attr("r",3.5);
};
const lassoSelect = () => lasso()
.items(resultChart.selectAll('circle'))
.targetArea(resultChart)
.on("start", (e) => lasso_start(e))
.on("draw", (e) => lasso_draw(e))
.on("end", (e) => lasso_end(e));
resultChart.call(lassoSelect());
The first problem is that there is a warning on the import of d3-lasso. My imports are as follows:
import * as d3 from 'd3';
import { lasso } from 'd3-lasso';
And the warning goes as follows:
Could not find a declaration file for module 'd3-lasso'. 'tool-ae-vis/node_modules/d3-lasso/build/d3-lasso.js' implicitly has an 'any' type.
Try `npm i --save-dev @types/d3-lasso` if it exists or add a new declaration (.d.ts) file containing `declare module 'd3-lasso';
The warning is not solved by their suggestion, and it doesn't cause any problems at this point. Unfortunately, problems do appear when I run the code above. My console gives the following error:
Uncaught ReferenceError: d3 is not defined at lasso (d3-lasso.js:776:1).
At this line d3.drag() is started in d3-lasso.js.
Can anybody help me with this problem? Thank you!
| |
doc_3032
|
In that popin, i click some buttons (all is working fine) until I need to confirm by clicking another button.
Problem comes here. When I look at the execution, Selenium IDE obviously finds the button and click it because I can see the popin close and the result of the actions done in the popin on the website.
But the test fails on this button click with reason : frame no longer exists
Any idea ?
A: There is a try/catch mechanism under development which could be used to mitigate this. In the meantime I exported my tests to run in WebDriver, and this issue no longer manifests itself.
| |
doc_3033
|
A: Found it - the application file structure is available at /var/vcap.local/dea/apps
| |
doc_3034
|
$sql = $db->prepare("UPDATE `timeslots` SET `service` = ? WHERE `status` = ?");
$service = 0;
$status = "open";
$sql->bind_param("is", $service, $status);
if ($sql->execute()) {
echo "ID: ".$db->insert_id."<br />";
}
But the result is everytime this instead of the ID:
ID: 0
ID: 0
A: The documentation for insert_id clearly states:
Returns the ID generated by an INSERT or UPDATE query on a table with a column having the AUTO_INCREMENT attribute.
Your query does not generate a new ID. You can't use $db->insert_id as there was no new ID reported by MySQL server.
You can trick MySQL into providing this value. Just reset the ID to the value that it had previously by regenerating it again.
$sql = $db->prepare("UPDATE `timeslots`
SET `service` = ?, Id=LAST_INSERT_ID(Id)
WHERE `status` = ?");
See How to get ID of the last updated row in MySQL?
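(With that trick in place, the question's echo should then report a real value - a short sketch:)
$sql->execute();
echo "ID: ".$db->insert_id; // now reports the Id of the (last) row the UPDATE touched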
| |
doc_3035
|
BEGIN
#Routine body goes here...
IF @autor=0 THEN
SELECT DISTINCT bdocus.ano_pub
FROM bdocus INNER JOIN
descriptores_bdocus ON bdocus.nserie = descriptores_bdocus.nserie INNER JOIN
descriptores ON descriptores_bdocus.iddescriptor = descriptores.id
WHERE (bdocus.descriptores LIKE '%Victimización%')AND(bdocus.ano_pub LIKE CONCAT(@anyo,'%')) AND (bdocus.codigo_area LIKE CONCAT(@tema,'%')) AND(bdocus.codigo_colectivo LIKE CONCAT(@colectivo,'%')) AND (descriptores.nombre = @descriptores)
GROUP BY bdocus.ano_pub
ORDER BY bdocus.ano_pub DESC;
ELSE
SELECT DISTINCT bdocus.ano_pub
FROM bdocus INNER JOIN
descriptores_bdocus ON bdocus.nserie = descriptores_bdocus.nserie INNER JOIN
descriptores ON descriptores_bdocus.iddescriptor = descriptores.id INNER JOIN
autores_bdocus ON bdocus.nserie = autores_bdocus.nserie
WHERE (bdocus.descriptores LIKE '%Victimización%') AND (autores_bdocus.idautor = @autor) AND(bdocus.ano_pub LIKE CONCAT(@anyo,'%')) AND (bdocus.codigo_area LIKE CONCAT(@tema,'%')) AND(bdocus.codigo_colectivo LIKE CONCAT(@colectivo,'%')) AND (descriptores.nombre = @descriptores)
GROUP BY bdocus.ano_pub
ORDER BY bdocus.ano_pub DESC;
END IF;
END
Parameters are this:
IN `@anyo` varchar(20),IN `@tema` varchar(20),IN `@autor` int,
IN `@colectivo` varchar(30),IN `@descriptores` varchar(40)
and the Values which I want are for @anyo='2013',for @tema='%',for @autor=44439,for @colectivo='%'and for @descriptores='Violencia sexual'.
When I run the second part of the stored procedure I mean:
SELECT DISTINCT bdocus.ano_pub
FROM bdocus INNER JOIN
descriptores_bdocus ON bdocus.nserie = descriptores_bdocus.nserie INNER JOIN
descriptores ON descriptores_bdocus.iddescriptor = descriptores.id INNER JOIN
autores_bdocus ON bdocus.nserie = autores_bdocus.nserie
WHERE (bdocus.descriptores LIKE '%Victimización%') AND (autores_bdocus.idautor = 44439) AND(bdocus.ano_pub LIKE CONCAT('%','%')) AND (bdocus.codigo_area LIKE CONCAT('%','%')) AND(bdocus.codigo_colectivo LIKE CONCAT('%','%')) AND (descriptores.nombre = 'Violencia sexual')
GROUP BY bdocus.ano_pub
ORDER BY bdocus.ano_pub DESC;
I get as result a row with the value which I want. So where is the difference?. Thanks in advance.
A: I think that you aren't getting the result because of the condition; I mean, your procedure isn't going to the 'else' part, and therefore you are not getting that result. What you can do is try running the first select separately:
SELECT DISTINCT bdocus.ano_pub
FROM bdocus INNER JOIN
descriptores_bdocus ON bdocus.nserie = descriptores_bdocus.nserie INNER JOIN
descriptores ON descriptores_bdocus.iddescriptor = descriptores.id
WHERE (bdocus.descriptores LIKE '%Victimización%')AND(bdocus.ano_pub LIKE CONCAT(@anyo,'%')) AND (bdocus.codigo_area LIKE CONCAT(@tema,'%')) AND(bdocus.codigo_colectivo LIKE CONCAT(@colectivo,'%')) AND (descriptores.nombre = @descriptores)
GROUP BY bdocus.ano_pub
ORDER BY bdocus.ano_pub DESC;
Just to see if it gives the same result as the whole procedure; if you get 0 rows, like when you run the procedure, that's probably the reason. Try to change the condition so that the first select gets checked first (which is the one you need mostly, I assume), and then go for the other.
Should be something like
-- see that i changed the condition
IF @autor != 0 THEN
SELECT DISTINCT bdocus.ano_pub
FROM bdocus INNER JOIN
descriptores_bdocus ON bdocus.nserie = descriptores_bdocus.nserie INNER JOIN
descriptores ON descriptores_bdocus.iddescriptor = descriptores.id INNER JOIN
autores_bdocus ON bdocus.nserie = autores_bdocus.nserie
WHERE (bdocus.descriptores LIKE '%Victimización%') AND (autores_bdocus.idautor = @autor) AND(bdocus.ano_pub LIKE CONCAT(@anyo,'%')) AND (bdocus.codigo_area LIKE CONCAT(@tema,'%')) AND(bdocus.codigo_colectivo LIKE CONCAT(@colectivo,'%')) AND (descriptores.nombre = @descriptores)
GROUP BY bdocus.ano_pub
ORDER BY bdocus.ano_pub DESC;
ELSE
SELECT DISTINCT bdocus.ano_pub
FROM bdocus INNER JOIN
descriptores_bdocus ON bdocus.nserie = descriptores_bdocus.nserie INNER JOIN
descriptores ON descriptores_bdocus.iddescriptor = descriptores.id
WHERE (bdocus.descriptores LIKE '%Victimización%')AND(bdocus.ano_pub LIKE CONCAT(@anyo,'%')) AND (bdocus.codigo_area LIKE CONCAT(@tema,'%')) AND(bdocus.codigo_colectivo LIKE CONCAT(@colectivo,'%')) AND (descriptores.nombre = @descriptores)
GROUP BY bdocus.ano_pub
ORDER BY bdocus.ano_pub DESC;
END IF;
END
Try it and tell me how it goes. Another thing it could be is the parameters you are putting in when calling the procedure.
A: It is important to point out the difference between user-defined variables (9.4. User-Defined Variables) and routine parameters (13.1.15. CREATE PROCEDURE and CREATE FUNCTION Syntax); they are different variables.
In your example, @anyo may be NULL while `@anyo` is assigned '2013'.
SQL Fiddle example
| |
doc_3036
|
Here is the sample code that I'm using to cast the timestamp field.
val messages = df.withColumn("Offset", $"Offset".cast(LongType))
.withColumn("Time(readable)", $"EnqueuedTimeUtc".cast(TimestampType))
.withColumn("Body", $"Body".cast(StringType))
.select("Offset", "Time(readable)", "Body")
display(messages)
Is there any other way I can try to avoid the null values?
A: Instead of casting to TimestampType, you can use to_timestamp function and provide the time format explicitly, like so:
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
import spark.implicits._
val time_df = Seq((62536, "11/14/2022 4:48:24 PM"), (62537, "12/14/2022 4:48:24 PM")).toDF("Offset", "Time")
val messages = time_df
.withColumn("Offset", $"Offset".cast(LongType))
.withColumn("Time(readable)", to_timestamp($"Time", "MM/dd/yyyy h:mm:ss a"))
.select("Offset", "Time(readable)")
messages.show(false)
+------+-------------------+
|Offset|Time(readable) |
+------+-------------------+
|62536 |2022-11-14 16:48:24|
|62537 |2022-12-14 16:48:24|
+------+-------------------+
messages: org.apache.spark.sql.DataFrame = [Offset: bigint, Time(readable): timestamp]
One thing to remember is that you will have to set one Spark configuration to allow for the legacy time parser policy:
spark.conf.set("spark.sql.legacy.timeParserPolicy", "LEGACY")
| |
doc_3037
|
Should these train equally, assuming consistent hyperparams?
for epoch in range(20):
LSTM
and
for epoch in range(5):
LSTM -> LSTM -> LSTM -> LSTM
I understand that there would be a difference after training. In the first case, you would send any test batch through one trained LSTM cell, while in the 2nd case, it would go through 4 trained cells. My question pertains to training.
Seems they should be identical.
A: I think you are confusing very different concepts. Let us go back to the basics. Very simply, in a supervised machine learning experiment you have some training data X and a model. A model is like a function with internal parameters: you give it some data and it gives you back a prediction. Here, let us say our model has one layer, which is an LSTM. That means the parameters of our model are the parameters of the LSTM (I won't go into what they are; if you don't know them you should read the paper introducing LSTMs).
What is an epoch? Very roughly, "training for n epochs" means looping n times over the training data. You show each example n times to the model for update. The more epochs, the more you get your network accustomed to your training data. (I'm being very overly simplistic.)
I hope it is clearer now that epochs and layers are in no way related. The layers are what your model is made of, and the epochs are about how many times you will show your examples to the model.
If you put 5 LSTM layers, you will just have 5 times more parameters. But in any case, each of your training examples will go through the 1 or 5 stacked LSTM layers...
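To make the distinction concrete, here is a minimal runnable sketch (PyTorch is an assumption, since no framework is named above): the layers are fixed in the model, while the epochs are just the outer loop over the data.
import torch
import torch.nn as nn

# 5 stacked LSTM layers -> roughly 5x the parameters, still one model
model = nn.LSTM(input_size=10, hidden_size=20, num_layers=5)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.randn(7, 3, 10)  # dummy data: (seq_len, batch, features)

for epoch in range(20):       # epochs: how many passes over the data
    out, _ = model(x)
    loss = out.pow(2).mean()  # dummy loss just to drive an update
    opt.zero_grad()
    loss.backward()
    opt.step()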
| |
doc_3038
|
{
"signers": [
{
"signatureInfo": {
"signatureName": "test",
"signatureInitials": "T",
"fontStyle": "freehand575"
},
"tabs": {
"signHereTabs": [
{
"stampType": "signature",
"name": "SignHere",
"tabLabel": "Sign Here",
"scaleValue": 1,
"optional": "false",
"documentId": "1",
"recipientId": "2",
"pageNumber": "1",
"xPosition": "191",
"yPosition": "123",
"anchorString": "@SW1R",
"anchorXOffset": "0",
"anchorYOffset": "0",
"anchorUnits": "pixels",
"tabId": "19f2a250-7ffc-4452-b7e2-e0bf1fddd660",
"status": "signed"
}
]
},
"creationReason": "sender",
"isBulkRecipient": "false",
"identityVerification": {},
"name": "test",
"email": "[email protected]",
"recipientId": "2",
"recipientIdGuid": "215d99bb-5c50-4bdd-b298-21639d0aad4c",
"requireIdLookup": "false",
"userId": "386ae756-5c1c-480e-abd5-94d960781f5c",
"clientUserId": "49b09797-cea5-4d87-b653-49f13f733dd3",
"routingOrder": "1",
"status": "completed",
"signedDateTime": "2019-08-02T21:21:53.9570000Z",
"deliveredDateTime": "2019-08-02T21:16:54.9800000Z",
"totalTabCount": "4"
},
{
"signatureInfo": {
"signatureName": "test",
"signatureInitials": "T",
"fontStyle": "freehand575"
},
"tabs": {
"signHereTabs": [
{
"stampType": "signature",
"name": "SignHere",
"tabLabel": "Sign Here",
"scaleValue": 1,
"optional": "false",
"documentId": "1",
"recipientId": "3",
"pageNumber": "1",
"xPosition": "189",
"yPosition": "167",
"anchorString": "@SPAAR",
"anchorXOffset": "0",
"anchorYOffset": "0",
"anchorUnits": "pixels",
"tabId": "e2def7a9-bfdb-404d-8901-e387d9e4f856"
}
]
},
"creationReason": "sender",
"isBulkRecipient": "false",
"identityVerification": {},
"name": "test",
"email": "[email protected]",
"recipientId": "3",
"recipientIdGuid": "0b6541f4-45b7-4b6e-a4cb-740d2f9f07a7",
"requireIdLookup": "false",
"userId": "386ae756-5c1c-480e-abd5-94d960781f5c",
"clientUserId": "49b09797-cea5-4d87-b653-49f13f733dd3",
"routingOrder": "1",
"status": "delivered",
"deliveredDateTime": "2019-08-02T20:36:14.5170000Z",
"totalTabCount": "4"
}
],
"agents": [],
"editors": [],
"intermediaries": [],
"carbonCopies": [],
"certifiedDeliveries": [],
"inPersonSigners": [],
"seals": [],
"witnesses": [],
"recipientCount": "3",
"currentRoutingOrder": "1"
}
Our concern is when we make the POST EnvelopeViews: createRecipient call. We have found that this seems to return a recipient view for whichever recipient does not yet have a completed status. Is this dependable/deterministic? We have tried specifying the recipientId in this request, but it does not affect which recipient view we receive. For example, the following request returns the view for Recipient 3 (delivered) instead of Recipient 2 (completed) as requested.
{
"authenticationMethod": "email",
"email": "[email protected]",
"returnUrl": "http://www.google.com",
"userName": "test",
"clientUserId": "49b09797-cea5-4d87-b653-49f13f733dd3",
"recipientId": "2"
}
Update - More details around the business requirements:
We create an envelope with 2 captive recipients, Presenter and Agent. At the time of envelope creation, we do not know who will sign as the agent, so the initial recipient is a placeholder user. Later in the business flow, we determine who will act as the agent and we call the DocuSign API to update and "swap" the Agent recipient's email, clientUserId, and username to be that of the actual user who will be signing. This is a process which we have been successfully using for years.
We now have a case where the Presenter and Agent could be the same person, but the Presenter must complete their signing before the Agent (same person or someone else) can sign. When they are the same person, following the "swapping" method described above, we still have 2 recipients but the Agent recipient has the same userId as the Presenter recipient.
My main concern is regarding whether the POST EnvelopeViews: createRecipient method is deterministic because it seems to ignore the recipientId parameter. I want to be sure we can reliably do the following:
*
*Allow the Presenter user to complete signing.
*"Swap" the same user in to sign as the Agent.
*Successfully get a recipient view for the user as the Agent recipient.
A: What exactly are you trying to do? When a recipient is complete, do you still want to create a recipient view for that recipient, even while the same recipient still has to complete this envelope further down the routing order? This is most likely not a supported scenario. I would suggest that if the same exact person needs to act on an envelope more than once, and they have completed their first interaction, then the next request would/should take them to the next interaction. It doesn't make sense to do anything else IMHO.
| |
doc_3039
|
{"@timestamp":"2015-06-10T15:04:08.628Z","level":"info","message":"POST /.kibana/config/4.0.3/_update 200 - 4ms","node_env":"production","request":{"method":"POST","url":"/elasticsearch/.kibana/config/4.0.3/_update","headers":{"host":"localhost:5601","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:38.0) Gecko/20100101 Firefox/38.0","accept":"application/json, text/plain, /","accept-language":"en-US,en;q=0.5","accept-encoding":"gzip, deflate","content-type":"application/json;charset=utf-8","referer":"http://localhost:5601/","content-length":"35","cookie":"mp_75ac3e60a415a533d2cfa1c2cce55f42_mixpanel=%7B%22distinct_id%22%3A%20%22kby9hyc2w0tpa4s31415908364663%22%2C%22%24initial_referrer%22%3A%20%22%24direct%22%2C%22%24initial_referring_domain%22%3A%20%22%24direct%22%2C%22companyId%22%3A%20%221%22%2C%22companyName%22%3A%20%22Fluig%20Default%20Company%22%2C%22userName%22%3A%20%22admin%40totvs.com%22%2C%22userOrigin%22%3A%20%22CP_ADMIN%22%2C%22userRole%2...; fbm_832538890127683=base_domain=.localhost","connection":"keep-alive","pragma":"no-cache","cache-control":"no-cache"},"remoteAddress":"127.0.0.1","remotePort":61716},"response":{"statusCode":200,"responseTime":4,"contentLength":65}}
{"@timestamp":"2015-06-10T15:04:08.643Z","level":"info","message":"POST /_mget?timeout=0&ignore_unavailable=true&preference=1433948336937 200 - 2ms","node_env":"production","request":{"method":"POST","url":"/elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1433948336937","headers":{"host":"localhost:5601","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:38.0) Gecko/20100101 Firefox/38.0","accept":"application/json, text/plain, /","accept-language":"en-US,en;q=0.5","accept-encoding":"gzip, deflate","content-type":"application/json;charset=utf-8","referer":"http://localhost:5601/","content-length":"62","cookie":"mp_75ac3e60a415a533d2cfa1c2cce55f42_mixpanel=%7B%22distinct_id%22%3A%20%22kby9hyc2w0tpa4s31415908364663%22%2C%22%24initial_referrer%22%3A%20%22%24direct%22%2C%22%24initial_referring_domain%22%3A%20%22%24direct%22%2C%22companyId%22%3A%20%221%22%2C%22companyName%22%3A%20%22Fluig%20Default%20Company%22%2C%22userName%22%3A%20%22admin%40totvs.com%22%2C%22userOrigin%22%3A%20%22CP_ADMIN%22%2C%22userRole%2...; fbm_832538890127683=base_domain=.localhost","connection":"keep-alive","pragma":"no-cache","cache-control":"no-cache"},"remoteAddress":"127.0.0.1","remotePort":61716},"response":{"statusCode":200,"responseTime":2,"contentLength":102}}
{"@timestamp":"2015-06-10T15:04:08.859Z","level":"info","message":"POST /.kibana/config/4.0.3 200 - 3ms","node_env":"production","request":{"method":"POST","url":"/elasticsearch/.kibana/config/4.0.3","headers":{"host":"localhost:5601","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:38.0) Gecko/20100101 Firefox/38.0","accept":"application/json, text/plain, /","accept-language":"en-US,en;q=0.5","accept-encoding":"gzip, deflate","content-type":"application/json;charset=utf-8","referer":"http://localhost:5601/","content-length":"2","cookie":"mp_75ac3e60a415a533d2cfa1c2cce55f42_mixpanel=%7B%22distinct_id%22%3A%20%22kby9hyc2w0tpa4s31415908364663%22%2C%22%24initial_referrer%22%3A%20%22%24direct%22%2C%22%24initial_referring_domain%22%3A%20%22%24direct%22%2C%22companyId%22%3A%20%221%22%2C%22companyName%22%3A%20%22Fluig%20Default%20Company%22%2C%22userName%22%3A%20%22admin%40totvs.com%22%2C%22userOrigin%22%3A%20%22CP_ADMIN%22%2C%22userRole%22...; fbm_832538890127683=base_domain=.localhost","connection":"keep-alive","pragma":"no-cache","cache-control":"no-cache"},"remoteAddress":"127.0.0.1","remotePort":61716},"response":{"statusCode":200,"responseTime":3,"contentLength":81}}
The indices are already created and kibana was able to find the field mappings as well ..
Tried both on chrome and firefox. Kibana version:
Version 4.0.3
Build 6103
Commit SHA c3487fb
Here's a github issue I created as well .. : https://github.com/elastic/kibana/issues/4167
A: Figured it out: my Elasticsearch cluster was storing the index only, not the _source, so Kibana could not get the actual config documents. Hence the problem!
| |
doc_3040
|
Thanks.
A: Inside the initialization for the tab (assuming WinForms until I see otherwise):
Thread newThread = new Thread(() =>
{
// Get your data
dataGridView1.Invoke(new Action(() => { /* add data to the grid here */ }));
});
newThread.Start();
That is obviously the most simple example. You could also spawn the threads using the ThreadPool (which is more commonly done in server side applications).
If you're using .NET 4.0 you also have the Task Parallel library which could help as well.
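For illustration, here is a minimal sketch of the TPL route (assuming .NET 4.0+ WinForms; GetNewData is a placeholder fetch method):
using System.Threading.Tasks;

// fetch on a pool thread, then marshal the result back to the UI thread
Task.Factory.StartNew(() => GetNewData())
    .ContinueWith(t =>
    {
        dataGridView1.Rows.Add(t.Result); // safe: runs on the UI scheduler
    }, TaskScheduler.FromCurrentSynchronizationContext());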
A: There are two basic approaches you can use. Choose the one that makes the most sense in your situation. Often times there is no right or wrong choice. They can both work equally well in many situations. Each has its own advantages and disadvantages. Oddly the community seems to overlook the pull method too often. I am not sure why that is really. I recently stumbled upon this question in which everyone recommeded the push approach despite it being the perfect situation for the pull method (there was one poor soul who did go against the herd and got downvoted and eventually deleted his answer leaving only me as the lone dissenter).
Push Method
Have the worker thread push the data to the form. You will need to use the ISynchronizeInvoke.Invoke method to accomplish this. The advantage here is that as each data item arrives it will immediately be added to the grid. The disadvantage is that you have to use an expensive marshaling operation and the UI could bog down if the worker thread acquires the data too fast.
void WorkerThread()
{
while (true)
{
object data = GetNewData();
yourForm.Invoke(
(Action)(() =>
{
// Add data to your grid here.
}));
}
}
Pull Method
Have the UI thread pull the data from the worker thread. You will have the worker thread enqueue new data items into a shared queue and the UI thread will dequeue the items periodically. The advantage here is that you can throttle the amount of work each thread is performing independently. The queue is your buffer that will shrink and grow as CPU usage ebbs and flows. It also decouples the logic of the worker thread from the UI thread. The disadvantage is that if your UI thread does not poll fast enough or keep up the worker thread could overrun the queue. And, of course, the data items would not appear in real-time on your grid. However, if you set the System.Windows.Forms.Timer interval short enough that might be not be an issue for you.
private Queue<object> m_Data = new Queue<object>();
private void YourTimer_Tick(object sender, EventArgs args)
{
lock (m_Data)
{
while (m_Data.Count > 0)
{
object data = m_Data.Dequeue();
// Add data to your grid here.
}
}
}
void WorkerThread()
{
while (true)
{
object data = GetNewData();
lock (m_Data)
{
m_Data.Enqueue(data);
}
}
}
A: You should have an array of threads, to be able to control them
List<Thread> tabs = new List<Thread>();
...
To add a new one, would be like:
tabs.Add( new Thread( new ThreadStart( TabRefreshHandler ) ) );
//Now starting:
tabs[tabs.Count - 1].Start();
And finally, in the TabRefreshHandler you should check which is the calling thread number and you'll know which is the tab that should be refreshed!
| |
doc_3041
|
I have this query
Select studentName
from courseGrade
where grade < avg(grade) and sectionID=290001
A: Use a subquery to compute the average grade of the course. For example:
select studentName
from courseGrade
where sectionID = 290001
and grade < (
select avg(grade) from courseGrade where sectionID = 290001
)
| |
doc_3042
|
I have checked, I have this DLL in the following locations -
1 - My project location bin directory
2 - C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies
But I find that the DLLs are of different sizes in the different places: in one place it is 1.53 MB, whereas in the other it is 1.68 MB.
This machine is 64 bit. And I have installed SQL Server 2008 Express and SQL Server 2008 Express Management Studio.
A: I have just found the answer for this problem. It started working as soon as I replaced the replication dll in my bin folder. The problem was with the version of this DLL.
| |
doc_3043
|
What is the correct way of freeing up memory used by idle databases without taking the SQL Service down? We would like to automate the unloading of any database that was not accessed in the last 30 minutes.
Note: I'm looking for a solution that applies to SQL 2005. However, if there's a feature in SQL 2008 to do that I'd like to know about it.
A: To start I would suggest looking into:
DBCC FREEPROCCACHE
and
DBCC DROPCLEANBUFFERS
They aren't database specific but they can replace your nightly restart.
For a database specific command you could issue a CHECKPOINT which would force any dirty pages to disk but it only applies to writes.
In SQL Server 2008 (and R2) Enterprise Edition you can utilize the Resource Governor to slice up your memory into pools and dedicate a larger portion to the critical databases allowing a more granular control of CPU and memory. To do this correctly requires thorough planning and testing however.
A: Try this:
ALTER DATABASE blah SET AUTO_CLOSE ON;
This setting (which is normally not recommended for production) might be appropriate for your case. This setting will work in SQL Server 2005 and 2008.
For more information: http://msdn.microsoft.com/en-us/library/bb522682.aspx
A: SQL Server will automatically free all memory that can be freed and will avoid paging. If you encounter paging then the 99% of memory that is in use is not available to be freed. You need to investigate how the memory is used; it is likely external components like sp_OA_xxx-created objects or distributed queries. Start by investigating the memory consumers: look at sys.dm_os_memory_clerks and read How to use the DBCC MEMORYSTATUS command to monitor memory usage on SQL Server 2005.
As a side note, you already have the means to automatically close databases that are not in use: alter database <dbname> set auto_close on:
AUTO_CLOSE: When set to ON, the
database is shut down cleanly and its
resources are freed after the last
user exits. The database automatically
reopens when a user tries to use the
database again.
If you host hundreds of databases that are seldom used, then AUTO_CLOSE is exactly what you're looking for.
A: For MSSQL 2012, first run the following:
EXEC sys.sp_configure N'max server memory (MB)', N'256'
GO
RECONFIGURE WITH OVERRIDE
GO
then check Task Manager to see that the memory is down to 256 MB (or whatever you set above).
Then run this (replace 2048 with the number of MB you want to assign regularly):
EXEC sys.sp_configure N'max server memory (MB)', N'2048'
GO
RECONFIGURE WITH OVERRIDE
GO
A: I had such a problem before and I found a solution for it: you can create a stored procedure as described below. Before you start your application you should call this stored procedure in mode 1, because you need enough memory for SQL operations, and before closing your application you have to call it again in mode 0.
Create Proc [dbo].[MP_Rpt_ConfigureMemory]
(@Mode bit)
as
declare @RAM as integer
declare @MAX as integer
declare @MIN as integer
set @RAM = (SELECT
[physical_memory_in_bytes]/1048576 AS [RAM (MB)]
FROM [sys].[dm_os_sys_info])
Set @MAX = ((@RAM / 4) * 3)
Set @MIN = ((@RAM / 4) * 1)
if @Mode = 0
begin
exec SP_Configure 'min server memory', 1
RECONFIGURE
exec SP_Configure 'max server memory', 100
RECONFIGURE
end
else
if @Mode = 1
begin
exec SP_Configure 'max server memory', @MAX
RECONFIGURE
exec SP_Configure 'min server memory', @MIN
RECONFIGURE
end
| |
doc_3044
|
from random import uniform
class ponto:
def __init__(self,x,y):
self.x=float(x)
self.y=float(y)
def aleatorio(self):
''' Ponto aleatorio com coordenadas 0.0 a 10.0 '''
self.x=uniform(0.0,10.0)
self.y=uniform(0.0,10.0)
class retangulo():
def __init__(self,a,b):
self.a=a
self.b=b
def interior(self,p):
''' Verifica se ponto no interior do retangulo '''
if p.x >= self.a.x and p.x <=self.b.x and p.y >=self.a.y and p.y<=self.b.y:
return True
return False
def area(self):
return self.a*self.b
a1=ponto(0.0,0.0)
b1=ponto(2.0,2.0)
r1=retangulo(a1,b1)
b2=ponto(4.0,4.0)
r2=retangulo(a1,b2)
p=ponto(0.4,0.9)
p.aleatorio()
d1=0
d2=0
for r in range(10000):
if r1.interior(p)==True:
d1+=1
elif r2.interior(p)==True:
d2+=1
print(d1,d2)
As suggested I added the print of d1,d2 which returns: 0 0. d1 and d2 are supposed to be the number of times my random point falls inside r1 and r2, respectively. I guess 0 either means I'm not generating the random point or that I'm simply not counting the number of times it falls inside correctly, but I'm not sure what the reason is.
A: Perhaps your trouble is the looping, not the die rolling.
@AlanLeuthard is quite correct: this code generates a single point, and then checks 10000 times whether that same point is in each of the two rectangles. Try making a new point every time through the loop:
d1=0
d2=0
for r in range(10000):
p.aleatorio()
if r1.interior(p)==True:
d1+=1
elif r2.interior(p)==True:
d2+=1
print(d1, d2)
Output:
397 1187
Does this look better?
| |
doc_3045
|
A: To install sass
npm install -g sass
then, to convert scss to css
sass source/stylesheets/index.scss build/stylesheets/index.css
More info here
| |
doc_3046
|
for example,
<?php
$helloWorld = "Hello World";
?>
... some HTML
<?php
function echoHello(){
echo $helloWorld;
}
?>
The variable is not visible in the second part. How do I make the variable visible so I can access it later when I want to execute some PHP script?
A: It should be visible, assuming you're still on the same pageload, one or other of them is not inside a function, and many other factors that can only be evaluated by seeing your full code.
But just a quick guess, have you tried View Source? If the browser shows your raw PHP code inside the source, then the server is not parsing the PHP at all and it's just silently hiding as invalid HTML tags.
A: Echoing the variable inside the function alone won't show anything on the page.
You need to call the function to run it, so underneath put echoHello();
If both are on the same page, it should work just fine.
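Putting both answers together, a minimal sketch (note the global keyword, which is an addition not mentioned above but is needed for a function to see a global variable):
<?php
$helloWorld = "Hello World";

function echoHello() {
    global $helloWorld; // import the variable from the global scope
    echo $helloWorld;
}

echoHello(); // prints: Hello World
?>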
| |
doc_3047
|
For a = 1 To 60
If Application.WorksheetFunction.CountA(Excel.Sheets("Import_R").Columns(a)) > 0 Then
Excel.Sheets("Import_R").Columns(a).Copy
b = Excel.Sheets("Organise_R").UsedRange.Columns.Count
Excel.Sheets("Organise_R").Select
If Application.WorksheetFunction.CountA(Excel.Sheets("Organise_R").Columns(b)) > 0 Then b = b + 1
Excel.Sheets("Organise_R").Columns(b).EntireColumn.Select
Excel.ActiveSheet.Paste
Excel.Application.CutCopyMode = False
Selection.TextToColumns Destination:=Cells(1, b), DataType:=xlDelimited, _
TextQualifier:=xlDoubleQuote, ConsecutiveDelimiter:=True, Tab:=False, _
Semicolon:=False, Comma:=False, Space:=True, Other:=False, FieldInfo _
:=Array(Array(1, 1), Array(2, 1), Array(3, 1), Array(4, 1), Array(5, 1), Array(6, 1), _
Array(7, 1), Array(8, 1)), TrailingMinusNumbers:=True
End If
Next a
A: If the code is always supposed to start in the first column of "Organize_R" then why do you use the code below?
b = Excel.Sheets("Organise_R").UsedRange.Columns.Count
why not
Excel.Sheets("Organise_R").ClearContents
b = 1
?
| |
doc_3048
|
I can successfully manage to get a particular user's rating on a certain restaurant and store it in my Firebase database BUT the thing is that I can't stop the same user from voting multiple times. I only want every user to vote only once, they can comment as much as they want but their star ratings should be limited exclusively to one. I think it has something to do with their uids, but I couldn't manage to solve the problem. Also, how can I display average rating of the restaurant? I mean how and where should I keep it and display it?
Here is my code so far. I'd appreciate any help! Thank you very much in advance!
@IBAction func button(_ sender: Any) {
let ref = Database.database().reference()
let commentsReference = ref.child(locationName).child("comments")
let newCommentID = commentsReference.childByAutoId().key
let newCommentReference = commentsReference.child(newCommentID)
Auth.auth().signInAnonymously { (authResult, error) in
if let error = error {
print("Sign in failed:", error.localizedDescription)
} else {
let user = authResult?.user
let isAnonymous = user?.isAnonymous // true
let uid = user?.uid
// print ("Signed in with uid:", authResult!.uid)
var currentUserID = uid
if let currentUser = Auth.auth().currentUser {
currentUserID = currentUser.uid
}
newCommentReference.setValue(["uid": currentUserID as Any, "userRating":self.ratingStackView.starsRating]) { (error, ref) in
//ratingStakView.starsRating successfully returns the number of stars the user has chosen from my other viewController
if error != nil {
print(error!.localizedDescription)
return
}
}
}
}
}
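One common pattern for the one-vote-per-user part (a sketch, not from the code above; the "ratings" node name is an assumption) is to key each rating by the user's uid, so a repeat vote simply overwrites the previous one:
let ratingsRef = Database.database().reference()
    .child(locationName).child("ratings")

if let uid = Auth.auth().currentUser?.uid {
    // one node per user: writing again replaces the old value
    ratingsRef.child(uid).setValue(self.ratingStackView.starsRating)
}
The average can then be computed by observing that node and dividing the sum of the child values by the child count, or by maintaining a running total/count pair updated in a transaction.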
| |
doc_3049
|
(The visualization won't be done in real-time. The sound and video will be combined together later).
Thanks in advance,
-skazhy
A: Your first port of call should be Processing, which is a Java based language with a simplified syntax, limitless visual/graphics capabilities and good support for audio.
You could also try packages such as SuperCollider and PureData, both of which run on Linux. PureData involves no programming, as such, at all -- rather it is a 'dataflow' programming language, aka point-and-click. Still, many interesting results are possible. SuperCollider is a powerful language aimed more at audio programmers and composers, but probably has the best feature extraction (ie, audio analysis) options. In the past, I have created visualisations by extracting the audio data in SuperCollider, and sent it via OpenSoundControl to Processing. This would be involved, but potentially lots of fun..
How you extract frequency spectrum, loudness, etc depends on the feature you are extracting and the platform you are using. Generally, it is not too difficult to do: your first step should be to check the platform documentation.
Hope that helps
A: maybe this could help you too:
Music Analysis and Visualization
| |
doc_3050
|
library(ggplot2)
ggplot(data, aes(x=data$d1, y=data$d2)) +
geom_point(aes(colour = mydata))
A: ggplot plots the data you give it. If you only want it to plot a subset of your data, only give it a subset of your data:
ggplot(subset(data, d1 < 0.2), aes(x = d1, y = d2)) +
geom_point(aes(colour = mydata))
Also, don't use data$column inside aes()--just use unquoted column names.
A: You can filter the points in your data feeding into ggplot function. Please notice that you don't need data$ in aes; you can simply use the variable name. Here, I am using iris dataset since you haven't provided your data.
library(ggplot2)
ggplot(iris[iris$Sepal.Length<5,], aes(x=Sepal.Length, y=Sepal.Width)) +
geom_point(aes(colour = Species))
In your case, it'd be:
ggplot(data[data$d1<0.2,], aes(x=d1, y=d2)) +
geom_point(aes(colour = mydata))
| |
doc_3051
|
Let us assume the following relationships (diagram). A FundCompany has Funds, and Accounts. There is also a FundAccount which creates a many-to-many relationship (as well as other attributes at the relationship level) between Accounts and Funds. Lastly an Account has one or more Beneficiary.
The FundCompany is an aggregate root as it’s at the top of the pyramid. Neither Account nor Fund can exist without the FundCompany. FundAccount cannot exist without both Fund and Account; does that make them both aggregate roots? Or is Fund still the only aggregate root, having to go through it to perform operations on FundAccount entities? The fact that the Account also has Beneficiaries, which cannot exist without the Account, does that also signal Account being an aggregate root?
All of the entities on this diagram will require CRUD operations and screens in my application. The reason I’m bringing this is up is most often every UI screen will store the ID of the row/entity it’s referencing. So for example a user clicks “Details” on a table with Funds, I might need to retrieve a fund via its ID. My understanding is that if an entity needs direct access then it’s an aggregate root in itself. However this would make a lot of entities aggregate roots by default.
Based on the answers to above questions I have to map these operations to the proper aggregate root’s repository.
A: Your whole diagram is a single aggregate, so it has a single aggregate root: the top-level FundCompany. If Fund and Account can't live without FundCompany, they can't be aggregate roots.
| |
doc_3052
|
so would require some inputs.
1) Front End - React.js
2) Backend - Java (Spring boot)
3) Architecture - Microservices
4) Infra - AWS
5) CI - Jenkins
We have divided the development into three phases
Phase 1 - Create AWS infra, front end service and few backend services using Spring Boot and Spring Cloud. Keep the use of AWS services to as minimum as possible
Phase 2 - Create more backend services and dockerize everything
Phase 3 - Orchestrate previous phase using Kubernetes and use more AWS services if required
I am at phase 1 and, after going through a lot of resources and study material, need help in creating a production-grade architecture and AWS environment.
There are lots of individual resources, but I did not find much on what the real system should look like when it is live.
1) how to isolate environments?
My understanding - create organization which will have 5 accounts - root, security, shared-services ,prod and non-prod aws accounts. Non-prod can have multiple environments if required like test,stage.
Something like this
2) How to create security/network layer?
My understanding - create private and public subnets and create vpc peering between like shared-service and non-prod env and use iam roles.
3) Best way for designing microservices?
my understanding - have a micro frontend and microservices in backend. Client will request webpage in browser the request will come to UI service.
*
*I am confused about the order of the components.
*Should the request come to the React app first and then go to the API gateway, or to an ELB?
*Do we need an ELB?
*Would the request come to the ELB, then go to the API gateway and onward?
*The answer to all these questions will determine the answer to the next question.
*Option A or B? Or are both wrong? Where should the arrows go from the client to the backend services?
Option A
Option B
4) Which resources to keep in private and public subnets?
my understanding - in phase 1, follow one instance per service model so each service will have ec2 instance. in later phases we will move to containerization. few things are clear that backend services and persistence like DB will be in private subnet and keep very less resources in public like only bastion hosts etc
- The question is: depending on the answer to question #3, what else should be public?
- ELB, API gateway, service discovery?
- Do I need to keep a NAT gateway at all times to allow the private ones to access the internet?
5) What should be the complete release deployment workflow?
my understanding - developer commits code, jenkins should trigger the build, store artifacts and deploy.
- How should the Jenkins server communicate with other machines to deploy services?
- How are credentials managed for communication between Jenkins and EC2 instances?
- What is a production-grade structure for the Jenkins project? For example, should build, deploy and test be separate items?
I would really appreciate it if some experienced architects who have configured the same in their enterprises/organisations could help me out, and also let me know if there are any references available online (which I could not find) for building such production-grade systems.
Note - Phase 1 should be designed in a way to have as smooth transition as possible to phase 2 and 3
A: This is a complex question which we cannot do justice to on Stack Overflow.
I would recommend spending some time reading:
*
*Implementing Microservices on AWS
*Delivering Excellence with Microservices on AWS
*Serverless Application Lens
You could also research content from AWS serverless heroes.
| |
doc_3053
|
Before you tell me to stop using iframes, I am afraid that is not an option for me at this time.
Below is some code to reproduce this issue. Three files:
*
*Default.aspx: Contains an iframe pointing to DownloadPage.aspx
*DownloadPage.aspx: Contains a hyperpink to Download.ashx
*Download.ashx: Responds with a text/plain content-type type with content-disposition: attachment
Default.aspx:
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="AndroidTest.Default" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<title></title>
</head>
<body>
<form id="form1" runat="server">
<div>
<iframe src="DownloadPage.aspx" width="320" height="1000"></iframe>
</div>
</form>
</body>
</html>
DownloadPage.aspx
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="DownloadPage.aspx.cs" Inherits="AndroidTest.DownloadPage" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<title></title>
</head>
<body>
<form id="form1" runat="server">
<div>
<asp:HyperLink NavigateUrl="~/Download.ashx" runat="server">Download</asp:HyperLink>
</div>
</form>
</body>
</html>
Download.ashx.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
namespace AndroidTest {
/// <summary>
/// Summary description for Download
/// </summary>
public class Download : IHttpHandler {
public void ProcessRequest(HttpContext context) {
context.Response.ContentType = "text/plain";
context.Response.AddHeader("Content-Disposition", "attachment;filename=\"test.txt\"");
context.Response.Write("Hello World");
}
public bool IsReusable {
get {
return false;
}
}
}
}
Thanks for your time.
A: Most browsers will do this as a point of security, to avoid maliciously embedded automated downloads. You may want to present the user with a link or a button to complete the download, so that the browser recognises the interaction and allows the download to complete.
| |
doc_3054
|
Editing for clarity:
Here's a screenshot of Eclispe with the Activiti plugin.
http://i.imgur.com/DAKtq.jpg
This workflow gets started by another workflow with webscripts.
var props = new Object();
var dd = new Date();
props[EL_PROP_WORK_UNIT_NAME] = "testNode" + DateISOString( dd );
props[EL_PROP_WORK_UNIT_SOURCE_CODE] = "ROB";
props[EL_PROP_WORK_UNIT_DELIVERY_DATE] = dd;
node = getHome().createNode(name, EL_TYPE_WORK_UNIT, props);
var EL_WORKFLOW = "activiti$The-workflow";
var activeWfs = node.activeWorkflows;
if( activeWfs === null || activeWfs.length === 0 )
{
var workflowPackage = workflow.createPackage();
workflowPackage.addNode( node );
var workflowDef = workflow.getDefinitionByName(EL_WORKFLOW);
var workflowPath = workflowDef.startWorkflow( workflowPackage, new Object());
}
So the listener calls another javascript method...
function artPDFRename()
{
logger.log("==============================");
logger.log("<START> artPDFRename");
var workflowDef = workflow.getDefinitionByName(EL_WORKFLOW);
var activeInstance = workflowDef.getActiveInstances();
// ????
}
The goal is to have this handling be automatic. We're trying to design this with as little of manual intervention as possible, and are not assigning tasks to users to perform. Yes, there's probably another way to rename a PDF file, but I can't seem to figure out from the documentation listed here how to get a pointer to the node I put in the bpm_package object. That's the question.
Or am I so far off base on how we're developing this that it makes no sense?
A: As an example check the ScriptTaskListener class. Here all the workflow variables are put in a map.
The following code is interesting:
// Add all workflow variables to model
Map<String, Object> variables = delegateTask.getExecution().getVariables();
for (Entry<String, Object> varEntry : variables.entrySet())
{
scriptModel.put(varEntry.getKey(), varEntry.getValue());
}
So with this you could use bpm_package as an object within your script within the workflow script task.
So if you need the node the workflow has run on, the following code should work (where task is your delegateTask from the notify method of the Listener):
delegateTask.getVariable("bpm_package");
// or like the example above
delegateTask.getExecution().getVariable("bpm_package");
This will be a list so take the first one and that will be your node.
---------update
If you're using the javascript from alfresco then you can directly use the parent object bpm_package.
So in your case it would be best to do the following:
var node = bpm_package.children[0]; // or you could check that the package isn't null
// then send the node into your function:
artPDFRename(node); // or you could just add the bpm_package code in your js file
| |
doc_3055
|
When carrying out an embarrassingly parallel task on large datasets with scikit-learn, it is convenient to do so on a cluster in a high-performance computing (HPC) environment. Scikit-learn provides support for parallelization by allowing the user to specify the number of cores to use in parallel via the n_jobs argument. This should in principle make it straightforward to match the number of cores used within the analysis pipeline to the number of processors requested from the scheduler in the HPC environment.
Problem
IT have pointed out to me that the number of processes running per single job (namely 7) exceeded the number of processors I had requested (namely 2), although I had in fact set scikit-learn's n_jobs argument to 2. As I have been told, the result is that these processes are now blocking each other from using the limited computing resources (a bit like musical chairs, I'm imagining), thereby creating unnecessary overhead and making inefficient use of the cluster. The number of threads was detected by running pstree PID.
Question
Where do the additional 5 processes come from? How can I control the number of processes in scikit-learn to match the number of processors actually allocated to the cluster job?
Code example
I have created a small example to illustrate this behavior (Python 3.6.8, numpy 1.18.1, scikit-learn 0.20.3)
#!/usr/bin/env python
import numpy as np
import os
import time
from sklearn.model_selection import permutation_test_score
from sklearn import datasets, svm
n_folds = 3
n_perm = 12000
n_jobs = 2 # The number of tasks to run in parallel
n_cpus = 2 # Number of CPUs assigned to this process
pid = os.getpid()
print("PID: %i" % pid)
print("loading iris dataset")
X, y = datasets.load_iris(return_X_y=True)
# Control which CPUs are made available for this script
cpu_arg = ''.join([str(ci) + ',' for ci in list(range(n_cpus))])[:-1]
cmd = 'taskset -cp %s %i' % (cpu_arg, pid)
print("executing command '%s' ..." % cmd)
os.system(cmd)
t1 = time.time()
res = permutation_test_score(svm.SVC(kernel='linear'), X, y, cv=n_folds,
n_permutations=n_perm, n_jobs=n_jobs,
verbose=3)
t_delta = time.time() - t1
ips = n_perm / t_delta
print("number of iterations per second: %.02f it/s" % ips)
When checking with htop, it appears that exactly the number of CPUs are running at 100 % that I have indeed requested via n_jobs=2 and n_cpus=2 - two in this case. When I look at the threads using pstree PID, however, I can see 6 processes running instead of the expected two processes.
Requesting only n_jobs=1 and n_cpus=1 results in htop showing one CPU being 100 % busy and pstree PID likewise shows only a single processing running, as expected.
Additional information and further attempts
Some information on my OS as per uname -a:
Linux COMPUTERNAME 4.15.0-96-generic #97-Ubuntu SMP Wed Apr 1 03:25:46 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
... and about the CPUs as per lscpu | grep -e CPU -e core:
CPU op-mode(s): 32-bit, 64-bit
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
CPU family: 6
Model name: Intel(R) Xeon(R) W-2123 CPU @ 3.60GHz
CPU MHz: 1200.050
CPU max MHz: 3900,0000
CPU min MHz: 1200,0000
NUMA node0 CPU(s): 0-7
My example code now allows control of the number of CPUs that are made available for code execution using the taskset command. So I can separately manipulate the number of tasks requested to run in parallel in scikit-learn (n_jobs) and the number of available CPUs (n_cpus). This allowed me to compare four scenarios to each other:
*
*n_jobs=1 and n_cpus=1: 360.24 it/s, htop shows one CPU busy, pstree PID shows one thread running.
*n_jobs=2 and n_cpus=2: 658.40 it/s, htop shows two CPUs busy, pstree PID shows 6 threads running.
*n_jobs=2 and n_cpus=1: 336.80 it/s, htop shows one CPU busy, pstree PID shows 6 threads running.
*n_jobs=1 and n_cpus=2: 358.60 it/s, htop shows one CPU busy, pstree PID shows one thread running.
It seems that pstree always depends on the number of threads requested in the scikit-learn API. The number of CPUs shown by htop to be active is the number of CPUs that are available to the script and that are also in use. So only scenario 2 results in two CPUs being busy. Likewise, processing speed (measured in iterations per second) is only increased in the parallel case if CPUs are available and used (also scenario 2). Crucially, the number of threads, which is six according to pstree and hence larger than the number of CPUs, did not indicate that they interfered with each other for limited CPUs. Only scenario 3 showed that processing appeared slower when two jobs were requested but only one CPU was made available. This was slower than scenario 1.
I am starting to wonder if pstree really is a good diagnostic for the efficiency of parallel processing and how it is different from htop.
| |
doc_3056
|
*
*Create a new single page application in swift using Xcode 6 beta 5
*Add TouchDB.framework and CouchCocoa.framework to project by following this steps
*Create a bridging header file and import CouchCocoa:
#import <CouchCocoa/CouchCocoa.h>
*Add a new swift file to project and type
class TestData : CouchModel { }
*Build the project; the build fails with:
ld: warning: ignoring file /Users/Robin/Documents/TouchTest/Frameworks/TouchDB.framework/TouchDB, missing required architecture x86_64 in file /Users/Robin/Documents/TouchTest/Frameworks/TouchDB.framework/TouchDB (3 slices)
ld: warning: ignoring file /Users/Robin/Documents/TouchTest/Frameworks/CouchCocoa.framework/CouchCocoa, missing required architecture x86_64 in file /Users/Robin/Documents/TouchTest/Frameworks/CouchCocoa.framework/CouchCocoa (3 slices)
Undefined symbols for architecture x86_64:
"_OBJC_CLASS_$_CouchModel", referenced from:
_OBJC_CLASS_$__TtC9TouchTest8TestData in ViewController.o
"_OBJC_METACLASS_$_CouchModel", referenced from:
_OBJC_METACLASS_$__TtC9TouchTest8TestData in ViewController.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Any help here?
This worked previously on Xcode 6 Beta 4.
Apologies for the bad English.
| |
doc_3057
|
What I would like to know is: is it that the database is exceeding a certain (limited) size, or is it that there are too many activities being performed on the DB, causing it to crash?
PS: It worked fine in the simulator though.
A: 1) If the total database size will be only a few hundred megs, then you can safely and effectively use a SQL backed Core Data repository for the images.
2) When you start getting close to a gig or more, then you should save the larger (or all) images in files, and use Core Data to keep a reference (file path or URL) to the images. The way to do this (for iOS 5.1 and newer) is to create a directory inside the "Application Support" directory (which you may need to create), mark it so that it is not included in iCloud backups, and store the images there. In this manner you can keep around gigs of data (assuming the user doesn't get upset and delete your app).
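A minimal sketch of marking such a directory as excluded from iCloud backup (Objective-C; imagesDirPath is a placeholder):
NSURL *imagesDir = [NSURL fileURLWithPath:imagesDirPath isDirectory:YES];
NSError *error = nil;
BOOL ok = [imagesDir setResourceValue:@YES
                               forKey:NSURLIsExcludedFromBackupKey
                                error:&error];
if (!ok) {
    NSLog(@"Error excluding %@ from backup: %@", imagesDir, error);
}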
EDIT: I just read your comment. Assuming a large number of small (8K) images, if the issue is having them all active at one time (that is, you are setting entity attributes all at one time, not over a long time), then you may need to make the entity 'fault' using 'refreshObject:mergeChanges:'. You can read about this in the Core Data Programming Guide along with other tips on reducing memory footprints.
| |
doc_3058
|
So I hope you can give me some creative ideas, examples or share your thoughts !
After login in to a website, I can change my product details, this is done by HTTP POST forms.
Because we have over 1000 products I somehow want to customise / easyify.
My idea was, make a PHP form on my own server which submits to the supplier url(s).
However when doing this, it forwards us to the customer login.
If I temper my submitted data in firefox, I see this is because after login a cookie is been set and obvious our system does not have this.
Anyone an idea how to automate this process ? In other words, how can I set this cookie in my php form in order to submit it succesfully.
Or Im I thinking about the wrong solution ?!
A: You cannot post to another server unless you use cURL or something like that. Maybe I am not understanding your question.
| |
doc_3059
|
A: you can use formula to convert it:
=ARRAYFORMULA(SUBSTITUTE(SUBSTITUTE(T2:T; "Z"; ); "T"; " "))
| |
doc_3060
|
let test = [{a=5;b=10}; {a=10;b=100}; {a=200; b=500}; {a=100; b=2}];;
I would like to create a function which will sum up a and b in such a way it will display
Sum of a : 315
Sum of b : 612
I think I have to use a recursive function. Here is my attempt :
let my_func record =
let adds = [] in
let rec add_func = match adds with
| [] -> record.a::adds; record.b::adds
| _ -> s + add_func(e::r)
add_func test;;
This function doesn't seem to work at all. Is there a right way to do such function?
A: OK, you know exactly what you're doing, you just need one suggestion I think.
Assume your function is named sumab and returns a pair of ints. Then the key expression in your recursive function will be something like this:
| { a = ha; b = hb; } :: t ->
let (a, b) = sumab t in (ha + a, hb + b)
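Putting it all together, a minimal sketch with the base case returning (0, 0):
type r = { a : int; b : int }

let rec sumab = function
  | [] -> (0, 0)
  | { a = ha; b = hb } :: t ->
      let (a, b) = sumab t in (ha + a, hb + b)

let () =
  let test = [ {a=5;b=10}; {a=10;b=100}; {a=200;b=500}; {a=100;b=2} ] in
  let (sa, sb) = sumab test in
  Printf.printf "Sum of a : %d\nSum of b : %d\n" sa sb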
| |
doc_3061
|
Cross-domain requests and dataType: "jsonp" requests do not support
synchronous operation.
However, this works in all recent browsers except Firefox version >= 20. Here is the type of call I'm making:
$.ajax({
type : "GET",
async: false,
dataType : "text",
url : link,
xhrFields: { withCredentials: true },
success: function(response){
console.log("success ");
},
error: function(error){
console.error(error);
}
});
Does anyone have a clue why this is happening ?
UPDATE:
I've tested both with jQuery and vanilla XHR; the error is always the same:
[Exception... "A parameter or an operation is not supported by the
underlying object" code: "15" nsresult: "0x8053000f
(InvalidAccessError)"
A: Use beforeSend instead of xhrField.
$.ajax({
type : "GET",
async: false,
dataType : "text",
url : link,
beforeSend: function(xhr) {
xhr.withCredentials = true;
},
success: function(response){
console.log("success ");
},
error: function(error){
console.error(error);
}
});
| |
doc_3062
|
u = User.first
User Load (0.1ms) SELECT "users".* FROM "users" LIMIT 1
=> #<User id: 18 [...]>
1.9.3-p194 :004 > u.roles
NameError: uninitialized constant User::Assignment
from [path]/ruby-1.9.3-p194/gems/activerecord-3.2.3/lib/active_record/inheritance.rb:119:in `compute_type'
from [path]/ruby-1.9.3-p194/gems/activerecord-3.2.3/lib/active_record/reflection.rb:172:in `klass'
from [path]/ruby-1.9.3-p194/gems/activerecord-3.2.3/lib/active_record/reflection.rb:385:in `block in source_reflection'
from [path]/ruby-1.9.3-p194/gems/activerecord-3.2.3/lib/active_record/reflection.rb:385:in `collect'
from [path]/ruby-1.9.3-p194/gems/activerecord-3.2.3/lib/active_record/reflection.rb:385:in `source_reflection'
from [path]/ruby-1.9.3-p194/gems/activerecord-3.2.3/lib/active_record/reflection.rb:508:in `check_validity!'
from [path]/ruby-1.9.3-p194/gems/activerecord-3.2.3/lib/active_record/associations/association.rb:26:in `initialize'
from [path]/ruby-1.9.3-p194/gems/activerecord-3.2.3/lib/active_record/associations/collection_association.rb:24:in `initialize'
from [path]/ruby-1.9.3-p194/gems/activerecord-3.2.3/lib/active_record/associations/has_many_through_association.rb:10:in `initialize'
from [path]/ruby-1.9.3-p194/gems/activerecord-3.2.3/lib/active_record/associations.rb:157:in `new'
from [path]/ruby-1.9.3-p194/gems/activerecord-3.2.3/lib/active_record/associations.rb:157:in `association'
from [path]/ruby-1.9.3-p194/gems/activerecord-3.2.3/lib/active_record/associations/builder/association.rb:44:in `block in define_readers'
from (irb):4
from [path]/ruby-1.9.3-p194/gems/railties-3.2.3/lib/rails/commands/console.rb:47:in `start'
from [path]/ruby-1.9.3-p194/gems/railties-3.2.3/lib/rails/commands/console.rb:8:in `start'
from [path]/ruby-1.9.3-p194/gems/railties-3.2.3/lib/rails/commands.rb:41:in `<top (required)>'
from script/rails:6:in `require'
from script/rails:6:in `<main>'
models/user.rb
class User < ActiveRecord::Base
attr_accessible :name, :email, :password, :password_confirmation, :role_id
has_many :assignments
has_many :roles, through: :assignments
#more code...
end
models/role.rb
class Role < ActiveRecord::Base
attr_accessible :name
has_many :assignments
has_many :users, through: :assignments
end
models/assignments.rb
class Assignments < ActiveRecord::Base
attr_accessible :role_id, :user_id
belongs_to :user
belongs_to :role
end
schema.rb: http://cl.ly/323n1t0Q1t390y1M2S0E
A: Change the name of the model file models/assignments.rb to models/assignment.rb and its class name from class Assignments to class Assignment. Model names should be singular; it's a Rails convention. That is why, when you ran u.roles, it looked for Assignment, not Assignments.
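So the fixed file would look like this (same body as before, just the singular names):
# models/assignment.rb
class Assignment < ActiveRecord::Base
  attr_accessible :role_id, :user_id
  belongs_to :user
  belongs_to :role
end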
| |
doc_3063
|
.button {
margin-top: 50px;
padding: 10px 50px;
border: unset;
border-radius: 5px;
background: linear-gradient(to left, red 35%, yellow 100%);
color: transparent;
}
button:hover {
animation: ex 4s ease-out;
}
@keyframes ex {
from {
background: linear-gradient(to left, red 35%, yellow 100%);
}
to {
background: linear-gradient(to right, red 35%, red 100%);
}
}
<button class="button">xxxxxx</button>
A: Here is another idea where you can animate background-color:
.button {
margin-top: 50px;
padding: 10px 50px;
border: unset;
border-radius: 5px;
background: linear-gradient(to left, red 35%, transparent 100%);
background-color:yellow;
transition:2s background-color;
color: transparent;
}
button:hover {
background-color:red;
}
<button class="button">xxxxxx</button>
Another one with background-position
.button {
margin-top: 50px;
padding: 10px 50px;
border: unset;
border-radius: 5px;
background: linear-gradient(to right, yellow, red 32.5%);
background-size:200% 100%;
transition:2s background-position;
color: transparent;
}
button:hover {
background-position:right;
}
<button class="button">xxxxxx</button>
A: You can use an inset red shadow to cover it up. In addition, it's better in this case to use a transition. Transition will work when you stop hovering the element, even in the middle of the animation.
.button {
margin-top: 50px;
padding: 10px 50px;
border: unset;
border-radius: 5px;
background: linear-gradient(to left, red 0, red 77%, yellow 100%);
background-size: 200%;
color: transparent;
box-shadow: inset 0 0 0 200px transparent;
transition: box-shadow 4s ease-out;
}
.button:hover {
box-shadow: inset 0 0 0 200px red;
}
<button class="button">xxxxxx</button>
And if you really want to use animation (check what happens when you stop hovering the element):
.button {
margin-top: 50px;
padding: 10px 50px;
border: unset;
border-radius: 5px;
background: linear-gradient(to left, red 0, red 77%, yellow 100%);
background-size: 200%;
color: transparent;
transition: box-shadow 4s ease-out;
}
button:hover {
animation: ex 4s ease-out;
}
@keyframes ex {
from {
box-shadow: inset 0 0 0 200px transparent;
}
to {
box-shadow: inset 0 0 0 200px red;
}
}
<button class="button">xxxxxx</button>
| |
doc_3064
|
What I've Been Doing = This queries every 5 seconds if a new message has been posted for the current user.
- (void)queryForMessagesBadgeNumber:(NSTimer *)timer
{
NSString *currentUserID = [PFUser currentUser][@"userID"];
PFQuery *badgeQuery = [PFQuery queryWithClassName:@"Message"];
[badgeQuery whereKey:@"receiverID" equalTo:currentUserID];
[badgeQuery whereKey:@"wasRead" equalTo:[NSNumber numberWithBool:NO]];
[badgeQuery countObjectsInBackgroundWithBlock:^(int number, NSError *error)
{
if (number == 0)
{
NSLog(@"No unread messages");
}
else
{
[[self.tabBar.items objectAtIndex:2] setBadgeValue:[NSString stringWithFormat:@"%d",number]];
}
}];
}
Issues = This is very taxing on the "Requests Per Second" count. Parse allows for 30req/sec on freemium and just with my one phone I am using half of that with this query.
Question = Does anyone know of a more efficient way I can achieve what I am doing here?
A: Use Push notifications.
When a user sends a message to another user, you could get the app to send the recipient a push notification. You can choose whether to display a banner or not, or simply update the badge.
Guide is located here - Parse Push guide
Parse Push is also free up to a certain number of notifications, and a much better way than polling the DB every x seconds.
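For illustration, a minimal Cloud Code sketch of that push route (JavaScript; the user_<receiverID> channel convention is an assumption, and the client would subscribe to its own channel):
Parse.Cloud.afterSave("Message", function(request) {
  var receiverID = request.object.get("receiverID");
  Parse.Push.send({
    channels: ["user_" + receiverID],
    data: { badge: "Increment", alert: "You have a new message" }
  }, {
    success: function() {},
    error: function(error) { console.error(error); }
  });
});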
| |
doc_3065
|
*
*The user needs to change an amount in a dropdown (select)
*Depending on what he chooses, he will get different select/option entries in the next dropdown
So far so good: it does what I need, though not in a nice way (in my eyes). Does anyone have a better solution for how I can realize it?
<div class="form-row field_select">
<label>Please select the amount of PAX/Passengers:</label><br>
<div class="select-box">
<span>Select the amount</span>
<form action="" id="myform" method="POST">
<select class="" name="amount_pax" id="amount_pax" onchange='this.form.submit()'>
<option value="NO" selected>Select the amount of passengers</option>
<option value="4">1 - 4 Passengers (PAX)</option>
<option value="8">5 - 8 Passengers (PAX)</option>
<option value="12">8 and more Passengers (PAX)</option>
</select>
</form>
<?php $samount_pax=$_POST['amount_pax'];?>
</div>
</div>
<div class="form-row field_select">
<label>Please select the amount of PAX/Passengers:</label><br>
<div>
<select data-placeholder="Choose Your Accommodation..." class="chosen-select" tabindex="2" name="price" id="price">
<option value="no accommodation was selected"><?php display_text("res_1_9") ?></option>
<?php
switch ($samount_pax) {
case "4":
$mysqli->set_charset("utf8");
$query = "SELECT Hotellist.hotelname, Hotellist.location, Hotellist.price44 FROM Hotellist ORDER BY Hotellist.location ASC";
$result = $mysqli->query($query) or die($mysqli->error.__LINE__);
while ($row = mysqli_fetch_array( $result, MYSQLI_ASSOC)) {
echo '<option value="'.$row['price44'].'">'.$row['hotelname'].' ('.$row['location'].')</option>'."\n";}
break;
case "8":
$mysqli->set_charset("utf8");
$query = "SELECT Hotellist.hotelname, Hotellist.location, Hotellist.pricebus FROM Hotellist ORDER BY Hotellist.location ASC";
$result = $mysqli->query($query) or die($mysqli->error.__LINE__);
while ($row = mysqli_fetch_array( $result, MYSQLI_ASSOC)) {
echo '<option value="'.$row['pricebus'].'">'.$row['hotelname'].' ('.$row['location'].')</option>'."\n";}
break;
case "12":
echo "12";
break;
default:
echo "Your favorite color is neither red, blue, nor green!";
}
?>
</select>
</div>
| |
doc_3066
|
So I have a UIView subclass called enemyBlock. and I have a couple of these laid around on the screen. And these blocks expand and contract using [UIView animateWithDuration:...] over and over. I have an UIImageView that moves around according to the accelerometer data - this is the player.
When ever the player comes in contact with an enemyBlock the game should reset.
I'm using CGRectIntersectsRect(player.frame, enemyblock.frame) to detect if a hit has occurred. But this behaves very strangely, as a hit occurs only when the enemyBlocks are not animating; otherwise the player can pass right through them.
Any clue as to why this happens?
A: Not quite an answer but I would advise using cocos2d if you are writing a game. It's an awesome game engine written in Objective C that I've used myself. You create a 2D scenegraph of sprites. You then write an update method which gets called at regular intervals, e.g. 30 times per second. In this update method you can update the position for each enemy and player sprite incrementally and do collision checking using CGRectIntersectsRect to detect collisions. Hope this helps.
A: When animating views, it's the presentation layer of the view that is updated with the geometry information during the animation. Depending on how you are animating your player layer, try
CGRectIntersectsRect(player.frame, enemyblock.layer.presentationLayer.frame)
All the best
A: CGRectIntersectsRect does not account for the in-flight state of a UIView or UIImageView animation; therefore "CGRectIntersectsRect(player.frame, enemyblock.frame)" doesn't do what you expect and you see the strange behaviour. My advice to you: make a custom animation of the UIView and it will work for you.
| |
doc_3067
|
This is my result:
As you can see, the toggle is on the left not on the right like the image below. This is what I need to fix
I want to get this result (Call Ma):
But with a toggle instead of "telephone-outline" icon.
So.. This is my HTML:
<ion-view>
<ion-content>
<ion-list>
<div ng-repeat="cancha in vm.canchasComplejo" class="list card">
<ion-item class="item-stable"
ng-click="vm.toggleGroup(cancha)"
ng-class="{active: vm.isGroupShown(cancha)}">
<i class="icon" ng-class="vm.isGroupShown(cancha) ? 'ion-minus' : 'ion-plus'"></i>
Group
</ion-item>
<ion-item class="item-accordion"
ng-repeat="opcion in cancha.opciones"
ng-show="vm.isGroupShown(cancha)">
<div class="item item-icon-left item-icon-right" >
<i class="icon ion-chatbubble-working"></i>
Option
<label class="toggle toggle-assertive">
<input type="checkbox">
<div class="track">
<div class="handle"></div>
</div>
</label>
</div>
</ion-item>
</div>
</ion-list>
</ion-content>
</ion-view>
Could you tell me how to fix my problem?
Thanks
A: Use item-toggle class. Below this sample code
<div class="item item-toggle item-text-wrap">
<i class="icon ion-chatbubble-working"></i>
Option
<label class="toggle toggle-assertive">
<input type="checkbox">
<div class="track">
<div class="handle"></div>
</div>
</label>
</div>
| |
doc_3068
|
I am using this code to draw onto a UIImageView.
But I want to detect the transparent portions of the UIImage and prevent filling color into those portions.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
// CGFloat red,green,blue,alpha;
UITouch *touch1 = [touches anyObject];
CGPoint currentPoint = [touch1 locationInView:Image_apple];
UIGraphicsBeginImageContext(Image_apple.frame.size);
[Image_apple.image drawInRect:CGRectMake(0, 0, Image_apple.frame.size.width, Image_apple.frame.size.height)];
CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeNormal);
// CGContextSetBlendMode(UIGraphicsGetCurrentContext(),[self getBlendMode]);
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x , lastPoint.y );
CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x , currentPoint.y );
CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
CGContextSetLineWidth(UIGraphicsGetCurrentContext(),8.0 );
// [[self getPaintColor] getRed:&red green:&green blue:&blue alpha:&alpha];
CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), 1, 0.5, 0.2, 1);
CGContextSetBlendMode(UIGraphicsGetCurrentContext(),kCGBlendModeNormal);
CGContextStrokePath(UIGraphicsGetCurrentContext());
Image_apple.image = UIGraphicsGetImageFromCurrentImageContext();
[Image_apple setAlpha:1];
UIGraphicsEndImageContext();
}
Thanks in advance
A: There's a blend mode that might help you: kCGBlendModeSourceAtop. It draws only on the opaque portions of the context.
If you have to use other blend modes, you could convert the image to a mask and clip the context to it.
| |
doc_3069
|
from ortools.linear_solver import pywraplp
def LinearProgrammingExample():
"""Linear programming sample."""
# Instantiate a Glop solver, naming it LinearExample.
solver = pywraplp.Solver.CreateSolver('GLOP')
if not solver:
return
# Create the two variables and let them take on any non-negative value.
x = solver.NumVar(0, solver.infinity(), 'x')
y = solver.NumVar(0, solver.infinity(), 'y')
print('Number of variables =', solver.NumVariables())
# Constraint 0: x + 2y <= 14.
solver.Add(x + 2 * y <= 14.0)
# Constraint 1: 3x - y >= 0.
solver.Add(3 * x - y >= 0.0)
# Constraint 2: x - y <= 2.
solver.Add(x - y <= 2.0)
print('Number of constraints =', solver.NumConstraints())
# Objective function: 3x + 4y.
solver.Maximize(3 * x + 4 * y)
# Solve the system.
status = solver.Solve()
if status == pywraplp.Solver.OPTIMAL:
print('Solution:')
print('Objective value =', solver.Objective().Value())
print('x =', x.solution_value())
print('y =', y.solution_value())
else:
print('The problem does not have an optimal solution.')
print('\nAdvanced usage:')
print('Problem solved in %f milliseconds' % solver.wall_time())
print('Problem solved in %d iterations' % solver.iterations())
LinearProgrammingExample()
but instead of optimizing 3x+4y, I would like to optimize 3x**2+4y. How do I set the power of x? I tried x*x, **, and np.power, but x is a solver variable object, so it is not working. Any solution?
A: The current api does not support quadratic terms.
If you build a protobuf manually (see linear_solver.proto), you can express it and solve it with SCIP or Gurobi.
But the code is ugly.
MathOpt is built to support it, and much more. But it is C++, Bazel-only at the time being.
So you are out of luck.
PS: if your problem is purely integral (no continuous variables), you can solve it with CP-SAT's own API; see the sketch below.
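For illustration, a minimal CP-SAT sketch (my own, assuming x and y are integers; the bounds are arbitrary) that maximizes 3x**2 + 4y under the constraints from the question:
from ortools.sat.python import cp_model

model = cp_model.CpModel()
x = model.NewIntVar(0, 14, 'x')
y = model.NewIntVar(0, 14, 'y')
x_sq = model.NewIntVar(0, 14 * 14, 'x_sq')
model.AddMultiplicationEquality(x_sq, [x, x])  # enforces x_sq == x * x

model.Add(x + 2 * y <= 14)
model.Add(3 * x - y >= 0)
model.Add(x - y <= 2)
model.Maximize(3 * x_sq + 4 * y)

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print('x =', solver.Value(x), 'y =', solver.Value(y))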
| |
doc_3070
|
One of them is recovering from a UART disconnection, which seems to happen after a computer sleep or when you pull the plug of an RS-232 USB interface.
This is the error I get:
Error 0x5 at ..\rxtx\src\termios.c(517): Access is denied.
Error 0x5 at ..\rxtx\src\termios.c(2712): Access is denied.
Error 0x5 at ..\rxtx\src\termios.c(2601): Access is denied.
Error 0x5 at ..\rxtx\src\termios.c(1490): Access is denied.
Error 0x5 at ..\rxtx\src\termios.c(1301): Access is denied.
Here, Ilkka Myller proposes a nice fix, but it looks like it was never implemented - was it?
"We could implement a method to detect lost UART in event loop at
native lib side and introduce a new Java side SerialPortEvent type
UART_DISCONNECT. The native lib would send that event in case of error
and proceed to automatically close the serial port and clear the
invalid handles etc. (reset to fault free state)."
In case it wasn't, does any other lib handle such disconnections?
| |
doc_3071
|
It seems to be logging in correctly, due to the fact that I get an "incorrect password" error if I purposely enter a wrong password below, but how do I connect the login to the URL I want to scrape?
from bs4 import BeautifulSoup
import urllib
import csv
import mechanize
import cookielib
cj = cookielib.CookieJar()
br = mechanize.Browser()
br.set_cookiejar(cj)
br.open("http://www.barchart.com/login.php")
br.select_form(nr=0)
br.form['email'] = 'username'
br.form['password'] = 'password'
br.submit()
#print br.response().read()
r = urllib.urlopen("http://www.barchart.com/stocks/sp500.php?view=49530&_dtp1=0").read()
soup = BeautifulSoup(r, "html.parser")
tables = soup.find("table", attrs={"class" : "datatable ajax"})
headers = [header.text for header in tables.find_all('th')]
rows = []
for row in tables.find_all('tr'):
rows.append([val.text.encode('utf8') for val in row.find_all('td')])
with open('snp.csv', 'wb') as f:
writer = csv.writer(f)
writer.writerow(headers)
writer.writerows(row for row in rows if row)
#from pymongo import MongoClient
#import datetime
#client = MongoClient('localhost', 27017)
print soup.table.get_text()
A: I am not sure that you actually need to login to retrieve the URL in your question; I get the same results whether logged in or not.
However, if you do need to be logged in to access other data, the problem will be that you are logging in with mechanize, but then using urllib.urlopen() to access the page. There is no connection between the two, so any session data gathered by mechanize is not available to urlopen when it makes its request.
In this case you don't need to use urlopen() because you can open the URL and access the HTML with mechanize:
r = br.open("http://www.barchart.com/stocks/sp500.php?view=49530&_dtp1=0")
soup = BeautifulSoup(r.read(), "html.parser")
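If you did need a urllib-style opener elsewhere, a sketch (my own addition, Python 2 as in the question) that shares mechanize's cookie jar so the login session carries over:
import urllib2
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))  # cj is the CookieJar from above
r = opener.open("http://www.barchart.com/stocks/sp500.php?view=49530&_dtp1=0").read()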
| |
doc_3072
|
models.py
json_ctn = JsonField(verbose_name=_('Json'), null=True, blank=True)
fields.py
class JsonField(models.TextField):
def __init__(self, *args, **kwargs):
if kwargs.get('validators'):
kwargs['validators'].append(JsonValidator())
else:
kwargs.update({'validators': [JsonValidator()]})
super(JsonField, self).__init__(*args, **kwargs)
def __eq__(self, other):
return True
validators.py
@deconstructible
class JsonValidator(object):
error_messages = {
'wrong_json_code': _('Provided custom value is not a valid JSON string.'),
}
def __call__(self, value):
try:
json.loads(value)
except (ValueError, SyntaxError) as err:
raise ValidationError(self.error_messages.get('wrong_json_code'))
return value
def __eq__(self, other):
return True
The problem is that every time I run makemigrations, even when nothing has changed, a new migration is created with the following content:
migrations.AlterField(
model_name='whatever',
name='json_ctn',
field=PATH.fields.JsonField(blank=True, null=True, verbose_name='Json', validators=[PATH.validators.JsonValidator(), PATH.validators.JsonValidator(), PATH.validators.JsonValidator()]),
),
*Any idea why this is the behavior? I've even altered __eq__ to always return True, as mentioned here.
*Also, why is JsonValidator() added 3 times to the validators in the migration file?
Thanks!
A: You get the duplicate entries because you are appending to the same validators list each time the field is initialized: the already-appended validators get serialized into the migration, then appended to again on the next load. It would be better to use the default_validators attribute, as used in the docs. You can then remove your __init__ method.
class JsonField(models.TextField):
    default_validators = [JsonValidator()]
Hopefully that will solve the migration issue as well. You might need to create one final migration before it stops changing (or recreate the previous migration that added the json fields).
| |
doc_3073
|
I cannot understand why this does not work:
def InvNormal(q,a,b):
return norm.ppf(q,a,b)
F_invs = functools.partial(InvNormal,0,1)
Doing
print(InvNormal(0.3,0,1))
print(F_invs(0.3))
I get
-0.5244005127080409
-inf
Any help is appreciated!
Bernardo
I tried it with different random variables.
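For reference, a sketch of what is likely going on (my own reading, not part of the original post): functools.partial binds positional arguments from the left, so partial(InvNormal, 0, 1) fixes q=0 and a=1, and F_invs(0.3) evaluates norm.ppf(0, 1, 0.3), which is -inf because q=0. Binding the trailing parameters by keyword gives the intended behavior:
import functools
from scipy.stats import norm

def InvNormal(q, a, b):
    return norm.ppf(q, a, b)

# bind loc/scale by keyword so the quantile q stays free
F_invs = functools.partial(InvNormal, a=0, b=1)
print(F_invs(0.3))  # -0.5244..., same as InvNormal(0.3, 0, 1)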
| |
doc_3074
|
(node:52346) UnhandledPromiseRejectionWarning: Error:
mux-demux@git://github.com/Raynos/mux-demux.git#error-messages:
Listing the refs for git://github.com/Raynos/mux-demux.git failed at
ChildProcess.u.on
(/Users/linus/.nvm/versions/node/v10.24.1/lib/node_modules/yarn/bin/yarn.js:2:412797)
at ChildProcess.emit (events.js:198:13) at maybeClose
(internal/child_process.js:982:16) at Socket.stream.socket.on
(internal/child_process.js:389:11) at Socket.emit (events.js:198:13)
at Pipe._handle.close (net.js:607:12) (node:52346)
UnhandledPromiseRejectionWarning: Unhandled promise rejection. This
error originated either by throwing inside of an async function
without a catch block, or by rejecting a promise which was not handled
with .catch(). (rejection id: 1) (node:52346) [DEP0018]
DeprecationWarning: Unhandled promise rejections are deprecated. In
the future, promise rejections that are not handled will terminate the
Node.js process with a non-zero exit code.
(node:52346)
UnhandledPromiseRejectionWarning: Error:
sockjs-client@git://github.com/substack/sockjs-client.git#browserify-npm:
Listing the refs for git://github.com/substack/sockjs-client.git
failed at ChildProcess.u.on
(/Users/linus/.nvm/versions/node/v10.24.1/lib/node_modules/yarn/bin/yarn.js:2:412797)
at ChildProcess.emit (events.js:198:13) at maybeClose
(internal/child_process.js:982:16) at
Process.ChildProcess._handle.onexit (internal/child_process.js:259:5)
(node:52346) UnhandledPromiseRejectionWarning: Unhandled promise
rejection. This error originated either by throwing inside of an async
function without a catch block, or by rejecting a promise which was
not handled with .catch(). (rejection id: 2)
yarnpkg (v1) gives me:
error Command failed.
Exit code: 128
Command: git
Arguments: ls-remote --tags --heads git://github.com/substack/sockjs-client.git
Directory: /Users/linuxgrolmes/PhpStormProjects/ifs-user-online-list-vue2
Output:
fatal: unable to connect to github.com:
github.com[0: 140.82.121.4]: errno=Operation timed out
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
By the way, the libraries "mux-demux" and "sockjs-client" are the only ones whose links are referenced with "git://" instead of "https://" at the beginning in "package-lock.json".
Could it be that this is the cause of the problem? I have tried a lot up to this point and I'm not really sure how to continue now..
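A likely lead (my own note, not confirmed in the thread): GitHub shut off the unauthenticated git:// protocol in 2022, so any dependency still pinned to a git:// URL times out exactly like this. A common workaround is to rewrite those URLs to HTTPS at the Git level before installing:
git config --global url."https://github.com/".insteadOf git://github.com/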
| |
doc_3075
|
public class Exchanger : BaseContract
{
[DataMember, Key, Column(TypeName = "bigint")]
public long Id { get; set; }
....
[DataMember]
public virtual ICollection<PaymentSystem> PaymentSystems { get; set; }
}
[DataContract, Serializable]
public class PaymentSystem : BaseContract
{
[Key, Column(TypeName = "bigint"), DataMember]
public long Id { get; set; }
...
[DataMember, JsonIgnore]
public virtual ICollection<Exchanger> ExchangersSupport { get; set; }
}
and fluent api directions to have many to many relations:
modelBuilder.Entity<Exchanger>()
.HasMany(t => t.PaymentSystems)
.WithMany(t => t.ExchangersSupport)
.Map(m => m.ToTable("ExchangerToPaymentSystem"));
code for inserting:
public void Create(Exchanger ex, long clientId)
{
if (_context != null)
{
ex.ClientId = clientId;
ex.LastTimeUpdated = DateTime.UtcNow;
_context.Exchangers.Add(ex);
_context.SaveChanges();
}
}
When I'm inserting a new entry into the Exchanger table, EF creates entries in the ExchangerToPaymentSystem table, but it also creates duplicates of the same entries in the PaymentSystem table at the same time.
When I'm updating, nothing happens.
What am I doing wrong?
A: Your mapping configuration has to be something like the following for the middle table:
this.ToTable("ExchangerToPaymentSystem");
this.HasKey(e => e.Id);
this.HasRequired(e => e.Exchanger )
.WithMany(e => e.ExchangersSupport )
.HasForeignKey(pc => pc.ExchangerId);
this.HasRequired(pc => pc.PaymentSystem )
.WithMany(p => p.PaymentSystems)
.HasForeignKey(pc => pc.PaymentSystemId);
Often this behavior happens when your entities come from a different data context. Make sure that all the entities are from the same data context.
A: So, when you do
var exchanger = new Exchanger() { PaymentSystems = paymentSystems, ... };
it will automatically create the records in the link table for you; this is expected and required.
I'm not entirely sure I understand what the problem is now...
A: OK, I didn't know that I had to reattach the PaymentSystems collection to the object context; it is not automatically reattached, but marked as new (state = added).
public void Create(Exchanger ex, long clientId)
{
if (_context != null)
{
ex.ClientId = clientId;
ex.LastTimeUpdated = DateTime.UtcNow;
var ps = ex.PaymentSystems.Select(x=>x.Id);
var ps2 = _context.PaymentSystems.Where(x => ps.Any(y => y == x.Id)).ToList();
ex.PaymentSystems.Clear();
foreach (var pp in ps2)
{
    ex.PaymentSystems.Add(pp);
}
_context.Exchangers.Add(ex);
_context.SaveChanges();
}
}
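A shorter alternative (a sketch of mine, not from the thread): attach the detached PaymentSystem entities first, so EF tracks them as Unchanged instead of Added, which avoids the re-query:
foreach (var p in ex.PaymentSystems)
{
    _context.PaymentSystems.Attach(p); // tracked as Unchanged, so no duplicate insert
}
_context.Exchangers.Add(ex);
_context.SaveChanges();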
| |
doc_3076
|
public enum Item {
PEN, GLASS, AND, SO, ON;
private static Item current = PEN;
public static Item getNext() {
current = values()[current.ordinal()];
return current;
}
My first class represents the pieces of my game map, some of which are initialized with a value from the enum via this constructor:
public class MapPiece {
private Sprite sprite;
private Item item;
public MapPiece (Sprite sprite, Item item) {
this.sprite = sprite;
this.item = item;
}
This class also has a draw function that looks like this:
public void draw(Batch batch, float x, float y) {
sprite.setPosition(x, y);
sprite.draw(batch);
}
In the last class I initialize my map pieces and put them in a 2D array like this:
atlas = new TextureAtlas("atlas.pack");
mapPieces[0][0] = new MapPiece(atlas.createSprite("piece"), Item.getNext());
mapPieces[0][1] = new MapPiece(atlas.createSprite("piec2"), Item.getNext());
...and so on until I added all my pieces
The itemSprites are created like this:
for ( Item i : Item.values()) {
Sprite itemSprite = atlas.createSprite(i.toString());
The map pieces are being drawn on a sprite batch like this:
int mapPieceSize = 100;
for (int i = 0; i < mapPiece.length; i++) {
for (int j = 0; j < mapPiece[0].length; j++) {
mapPiece[j][i].draw(spriteBatch, i * mapPieceSize, j * mapPieceSize);
}
}
Now to my problem. The textures for the items are the same size as the map pieces but are all transparent except in the middle where the item is. I made it like this to make it easier to position.
What I want to do is to draw a sprite of the item on top of the mapPiece that holds the enum value of that item. So for example, if mapPiece[0][5] holds the enum value PEN, I would like to have a texture taken from my atlas called PEN drawn at the same position as mapPiece[0][5]. I hope it's not all too confusing.
A: I'm not totally sure I understand the problem yet, because I'm not sure where it's more difficult than what you've already done, but I suppose one way you could do it is like this.
Add a texture region field to the enum (not static).
public enum Item {
PEN, GLASS, AND, SO, ON;
private static Item current = PEN;
public static Item getNext() {
current = values()[(current.ordinal()+1)%values().length];
return current;
}
public TextureRegion region;
}
Then fill in your texture regions to the enum instances right after loading the atlas.
Item.PEN.region = atlas.findRegion("PEN");
Item.GLASS.region = atlas.findRegion("GLASS");
...
And then when looping through your map pieces, draw the background tile, and the item tile:
int mapPieceSize = 100;
for (int i = 0; i < mapPiece.length; i++) {
for (int j = 0; j < mapPiece[0].length; j++) {
//You had a typo on this line
mapPiece[j][i].sprite.draw(spriteBatch, i * mapPieceSize, j * mapPieceSize);
//draw the texture region directly without using a sprite instance
spriteBatch.draw(mapPiece[j][i].item.region, i * mapPieceSize, j * mapPieceSize);
}
}
Also, you mentioned having a bunch of invisible padding around your items. That could waste performance by drawing a lot of invisible pixels. If you want to go in and tweak their positions, you can store an offset in the Item instance, or for each MapPiece instance and add that to the x and y locations when drawing the item.
| |
doc_3077
|
CREATE TABLE "ADMIN"."SESSIONS"
( "SESSIONID" VARCHAR2(30 BYTE),
"SESSIONTYPE" NUMBER(*,0),
"USERID" VARCHAR2(30 BYTE),
"ACTIVITYSTART" TIMESTAMP (6),
"ACTIVITYEND" TIMESTAMP (6),
"ACTIVITY" CLOB,
"USERNAME" VARCHAR2(30 BYTE),
"IPADDRESS" VARCHAR2(30 BYTE),
"LOGINTIME" TIMESTAMP (6),
"LOGOUTTIME" TIMESTAMP (6)
) SEGMENT CREATION IMMEDIATE
Can you tell me how I can insert into the table only if the maximum number of rows has not been reached? I want to do this using a prepared statement.
A: I would normally close this question as a duplicate of Creating a table with max number of rows (ORACLE). Basically, it's nearly impossible and if you need to do this you're probably doing something wrong. You don't want to be calculating the number of records in a table prior to inserting into it - this is a lot of excess work.
However, given the names of the column names in your table there's probably an easier way of doing this. Alter the SESSIONS parameter to be 300. This will restrict your database to 300 concurrent sessions. If an attempt is made to create a 301st session the error ORA-00018 maximum number of sessions exceeded will be raised. If you are restricting the number of concurrent users you may have to be relatively aggressive about dropping unused connections - it depends on the number of users you're expecting.
If you still need to maintain the table after this, then you can use AFTER LOGON and BEFORE LOGOFF triggers to maintain it (though I'm not sure what ACTIVITYEND and LOGOUTTIME could be used for); the AFTER LOGON trigger would insert into the table and the BEFORE LOGOFF trigger would delete from it.
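A minimal sketch of such a logon trigger (the trigger name and column choices are my assumptions, based on the table above):
CREATE OR REPLACE TRIGGER trg_track_logon
AFTER LOGON ON DATABASE
BEGIN
  INSERT INTO admin.sessions (sessionid, username, ipaddress, logintime)
  VALUES (SYS_CONTEXT('USERENV', 'SESSIONID'),
          SYS_CONTEXT('USERENV', 'SESSION_USER'),
          SYS_CONTEXT('USERENV', 'IP_ADDRESS'),
          SYSTIMESTAMP);
END;
/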
A: Add an ID column to your table, control its values using a sequence as suggested by Jeffrey Kemp in Creating a table with max number of rows (ORACLE) to which @Ben has been pointing.
(Sorry - not allowed to comment, yet. So if this helps, credit Ben, please. Otherwise: What else is needed?)
| |
doc_3078
|
At the moment, the program is such that the server starts out listening for incoming packets, so the client sends the first message.
I'd really like to know of a simple way to implement this if possible.
Here's the code for the Client:
class EchoClient
{
public static void main( String args[] ) throws Exception
{
System.out.println("\nWelcome to UDP Client");
System.out.println("---------------------------------------------------");
Scanner sc = new Scanner (System.in);
DatagramSocket socket = new DatagramSocket();
socket.setSoTimeout(120000);
while (true)
{
//Send
System.out.print("\nEnter message: ");
String msg = sc.nextLine();
byte[] buffer = msg.getBytes();
DatagramPacket packetS = new DatagramPacket(buffer,buffer.length,InetAddress.getByName(args[0]),1500);
socket.send( packetS );
//Receive
DatagramPacket packetR = new DatagramPacket(new byte[512],512);
socket.receive( packetR );
System.out.println( "Alice at: "+new Date()+" "+packetR.getAddress()+":"+packetR.getPort()+"\nSays: "+new String(packetR.getData(),0,packetR.getLength()) );
}
}
}
And the code for the Server:
class EchoServer
{
public static void main( String args[] ) throws Exception
{
System.out.println("\nWelcome to UDP Server");
System.out.println("---------------------------------------------------");
Scanner sc = new Scanner (System.in);
DatagramSocket socket = new DatagramSocket(1500);
//Message loop
while ( true )
{
//Receiving
DatagramPacket packetR = new DatagramPacket(new byte[512],512);
socket.receive( packetR );
System.out.println("Bob at: "+new Date()+" "+packetR.getAddress()+":"+packetR.getPort()+"\nSays: "+new String(packetR.getData(),0,packetR.getLength()) );
//Send
System.out.print("\nEnter message: ");
String msg = sc.nextLine();
byte[] buffer = msg.getBytes();
DatagramPacket packetS = new DatagramPacket(buffer,buffer.length,packetR.getAddress(),packetR.getPort());
socket.send( packetS );
}
}
}
A: Hello, an easy way of doing that is to launch multiple threads in the main:
class EchoServerClient
{
    public static void main( String args[] ) throws Exception
    {
        new Thread(() -> {
            try {
                EchoServer.main(args); // you can also rename main to another name
            } catch (Exception e) {
                e.printStackTrace();
            }
        }).start();
        new Thread(() -> {
            try {
                EchoClient.main(args); // you can also rename main to another name
            } catch (Exception e) {
                e.printStackTrace();
            }
        }).start();
    }
}
| |
doc_3079
|
Below is example of the source MongoDB document and sub-documents:
{
_id: <some_UUID>,
fieldA: "<some_fieldA_value>",
subDocA: [
{
subFieldA: "aaa",
subFieldB: "bbb"
},
{
subFieldA: "aaa",
subFieldB: "ccc"
},
{
subFieldA: "aa1",
subFieldB: "ddd"
}
]
}
and the target MongoDb document should be:
{
_id: <some_UUID>,
fieldA: "<some_fieldA_value>",
subDocA: [
{
subFieldA: "aaa",
subFieldB: "bbb"
},
{
subFieldA: "aa1",
subFieldB: "ddd"
}
]
}
That is, if the sub-document field subFieldA equals "aaa", then keep only 1 such sub-document in the parent document.
Can someone help me for coming up with MongoDB query? Thanks in advance.
Edit:
The sub-documents may not have the same values for the other fields. And there can be many such documents in the collection for which I need to do the same.
A: You can try this aggregation pipeline.
*First $unwind to deconstruct the array.
*Then $group twice: the first time, group by the subFieldA value to get the $first one; the second time, to restore the original structure.
db.collection.aggregate([
{
"$unwind": "$subDocA"
},
{
"$group": {
"_id": "$subDocA.subFieldA",
"subDocA": {
"$first": "$subDocA"
},
"fieldA": {
"$first": "$fieldA"
},
"id": {
"$first": "$_id"
}
}
},
{
"$group": {
"_id": "$id",
"fieldA": {
"$first": "$fieldA"
},
"subDocA": {
"$push": "$subDocA"
}
}
}
])
Example here
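One hedged caveat (my addition, not from the answer above): because the first $group keys on subFieldA alone, sub-documents from different parent documents that share a subFieldA value would also collapse into one. If the collection holds many such documents, grouping on a compound key keeps each parent separate:
{
  "$group": {
    "_id": { "id": "$_id", "subFieldA": "$subDocA.subFieldA" },
    "subDocA": { "$first": "$subDocA" },
    "fieldA": { "$first": "$fieldA" }
  }
}
The second $group then regroups on "$_id.id" to rebuild each document.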
| |
doc_3080
|
A: I have had a similar problem where some of my shortcuts (such as the Alt+Shift+F10 intellisense shortcut) stopped working...
I fixed it by going to Tools -> Import and Export Settings -> Reset all settings.
I was able to reimport my saved settings after I had reset them as well, but only without the broken shortcuts!
A: Might be a ridiculous suggestion but does your keyboard have some kind of "F Lock" key? Happened to me after I got a new keyboard and accidentally hit it. Didn't even know it was there :)
A: I had a similar question, and reading this thread led me to an answer which is similar to the F-Lock key answer. I'm using a new laptop and re-introducing myself to programming. When the book said press Ctrl-F5 I did so, and the only thing that happened was that my monitor got slightly dimmer, though it returned to normal brightness as soon as I clicked on a few other things. What I noticed after reading this thread is that this new-fangled keyboard has a blue Fn key, and all the function key names are printed in blue. In other words, to get the function key functionality you have to be holding down the blue Fn key when you tap the actual function key. So F5 becomes Fn-F5 and Ctrl-F5 becomes Ctrl-Fn-F5. This is on a Dell Inspiron 1564. This is not quite as egregious as an F-Lock key, so I hope I'll get used to it soon.
A: I fixed this by pressing Fn + F5; the Fn key is right next to the Windows button.
A: One day it happened to me when I made some changes to the project and solution files in my ASP .NET project. When I opened it in Visual Studio, it allowed me to rebuild but not to run nor debug (the menu option didn't appear and the Ctrl + F5 didn't work).
What I did to make it work again was right-click on the solution in the Visual Studio "Solution Explorer" tree tab on the right, where all the files of your project appear, and select Properties.
Then select a project there and set it as the startup project.
Perhaps this is your problem here. If not, I hope it can help someone in the future.
A: Before you trash all your settings, consider just resetting the Keyboard preferences:
In Tools / Options / Environment / Keyboard there's a drop-down for your Keyboard scheme and next to it a Reset button. Make sure the mapping scheme is set to whatever you want then hit the Reset button.
A: Chances are you changed your default settings. Go to Tools->Import and Export Settings... Select 'Reset all', Click 'Next' and choose the 'Visual C# Development Settings' option.
A: Something similar happened to me, but only with one project; the others ran fine with CTRL + F5. So I will post my problem and solution here, because it could help people searching for the same problem, as I was :)
I was opening in Visual Studio 2012 a solution created with Visual Studio 2013, and it didn't run with CTRL + F5. I opened the solution file with Notepad and changed the following section:
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Any CPU = Debug|Any CPU
Release|Any CPU = Release|Any CPU
EndGlobalSection
with this other:
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug_x86|Any CPU = Debug_x86|Any CPU
Debug_x86|x86 = Debug_x86|x86
Debug|Any CPU = Debug|Any CPU
Debug|x86 = Debug|x86
PruebaRelease|Any CPU = PruebaRelease|Any CPU
PruebaRelease|x86 = PruebaRelease|x86
Release|Any CPU = Release|Any CPU
Release|x86 = Release|x86
EndGlobalSection
That solves the problem for me.
A: This keyboard shortcut is set by default in OS X; you need to disable it to use Ctrl+F5. Have a look at http://www.daniellewis.me.uk/2015/07/01/ctrl-f5-not-working-in-a-windows-vm-running-on-parallels/ to fix it. I can run without debugging now in my VM.
A: I had a similar problem with Alt+F7 (Find Usages in ReSharper). Turns out GeForce Experience was taking the shortcut for its Sharing functionality. Uninstalled GE and all is good.
A: I had the same problem.
In my case, I just deleted the folder containing my project and created a new one. Everything went well.
A: Hello from the future.
Mac 2021 M1:
Choose Apple menu > System Preferences.
Click Keyboard.
Select "Use F1, F2, etc. keys as standard function keys".
Not sure how it changed to begin with (probably happy fingers on my part)
A: You should check for Fn + F5; the Fn key is between the Windows button and the left Ctrl.
| |
doc_3081
|
Whenever I try to run the code, it freezes and nothing ever comes out of it.
I can't even comment and ask my question there, in the post above, because my credit isn't great enough.
For your information, I try to run the code on a Windows 10 computer.
I believe I've changed the setup of the inbound and outbound TCP connections in the firewall, which I read was what needed to be done on Win-10. I also thought maybe I should change the way the directory is written, from "//" to "\\". That didn't work either. In addition, I've tried changing the local tcp address to "tcp://127.0.0.1:5555" and it still didn't work.
Here's the code,
import time
import zmq
context = zmq.Context()
socket=context.socket(zmq.REP)
socket.bind("tcp://*:5555")
while True:
message=socket.recv()
print("Received request: %s" % message)
time.sleep(1)
print("test")
socket.send(b"World")
import zmq
context = zmq.Context()
print("Connecting to hello world server...")
socket = context.socket(zmq.REQ)
socket.connect("tcp://*:5555")
for request in range(10):
print("Sending request %s..." % request)
socket.send(b"Hello")
message = socket.recv()
print("Received reply %s [%s]" % (request, message))
Any suggestion would be really appreciated.
A:
In case one has never worked with ZeroMQ, one may enjoy first looking at "ZeroMQ Principles in less than Five Seconds" before diving into further details.
Q : "Why a ZeroMQ demo code doesn't work on Win10?"
Because of this SLOC :
socket.connect( "tcp://*:5555" ) # production-grade code always ERROR checks a call
This call ought to have specified, for the tcp:// transport class, a valid address:port to go and try to .connect() to; the above-posted attempt to connect to a "*:port" wildcard must fail.
Repair it & you ought to be ready to proceed into the beautiful gardens of the Zen of Zero.
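For instance, assuming the server and client run on the same host, the repaired client line could read:
socket.connect("tcp://127.0.0.1:5555")  # a concrete address:port, not a "*" wildcard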
| |
doc_3082
|
ArrayList<String> str_arr1 = new ArrayList<String>();
ArrayList<String> str_arr2 = new ArrayList<String>();
ArrayList<Integer> hr_arr_1 = new ArrayList<Integer>();
ArrayList<Integer> mint_arr_1 = new ArrayList<Integer>();
ArrayList<Integer> hr_arr_2 = new ArrayList<Integer>();
ArrayList<Integer> mint_arr_2 = new ArrayList<Integer>();
String str,str1;
final ArrayList<HashMap<String, String>> new_item = new ArrayList<HashMap<String, String>>();
int[] str_int = new int[3];
int[] str_int1 = new int[3];
@SuppressLint("NewApi")
public class SheduleActivityMain extends Activity
{
int count=0;
public void onCreate(Bundle savedInstanceState)
{
SimpleDateFormat dateFormat = new SimpleDateFormat("HH:mm:ss");
Calendar now = Calendar.getInstance();
Calendar now1 = Calendar.getInstance();
Calendar now2 = Calendar.getInstance();
System.out.println("Current time : " + dateFormat.format(now.getTime()));
String time_now = dateFormat.format(now.getTime());
System.out.println("time_now----->"+time_now);
now1.add(Calendar.HOUR,1);
System.out.println("New time after adding 1 hours : "
+ dateFormat.format(now1.getTime()));
now2.add(Calendar.HOUR,2);
System.out.println("New time after adding 2 hours : "
+ dateFormat.format(now2.getTime()));
System.out.println("Is now1 after now ? : " + now1.after(now));
StringTokenizer st1 = new StringTokenizer(time_now, ":");
while (st1.hasMoreElements())
{
//System.out.println(st1.nextElement());
str = (String) st1.nextElement();
str_arr1.add(str);
}
str_int[0] = Integer.parseInt(str_arr1.get(0));
str_int[1] = Integer.parseInt(str_arr1.get(1));
str_int[2] = Integer.parseInt(str_arr1.get(2));
for(int j=0;j<str_int.length;j++)
{
System.out.println("integer array is... "+j+"..."+str_int[j]);
}
for(int time=0;time<train_time.length;time++)
{
StringTokenizer st2 = new StringTokenizer(train_time[time], ":");
while (st2.hasMoreElements())
{
str1 = (String) st2.nextElement();
str_arr2.add(str1);
System.out.println("str1..."+str1);
}
System.out.println("str_arr2.........."+str_arr2);
System.out.println("method calling..........");
getTrainTime(str_arr2.get(count),str_arr2.get(count+1));
for(int chk=count;chk<=str_arr2.size()-3;chk++)
{
getTrainTime(str_arr2.get(count),str_arr2.get(count+1));
}
}
void getTrainTime(String s1,String s2)
{
System.out.println("count--->"+count);
int n1 = Integer.parseInt(s1);
int n2 = Integer.parseInt(s2);
System.out.println("n1.n2 "+n1+n2);
count = count+3;
System.out.println(str_int[0]+str_int[1]);
if(n1==str_int[0]+1)
{
System.out.println("inside if condition.....");
hr_arr_1.add(n1);
//mint_arr_1.add(n2);
System.out.println("hr_arr_1,mint_arr_1"+hr_arr_1+mint_arr_1);
}
else if(n1==str_int[0]+2 )
{
}
//System.out.println("hr_arr_1,mint_arr_1"+hr_arr_1+mint_arr_1);
}
}
I got an error like this:
java.lang.IndexOutOfBoundsException: Invalid index 12, size is 12
How can I compare two times? How can I resolve this error? Can anybody help me?
A: java.lang.IndexOutOfBoundsException: Invalid index 12, size is 12
This error means you tried to get the value at the 13th position, but your array has only 12 entries (index out of bounds). Keep in mind: array indices start at 0!
You should use something like yourArray[yourArray.length - 1].
EDIT:
And please tidy up your code next time; it's unreadable (remove unnecessary System.out.println calls, give variables descriptive names, ...).
By the way, to check if value is within the next two hours, you can use Calendar objects like
Calendar currentTime = Calendar.getInstance();
Calendar currentTimePlus2 = Calendar.getInstance();
currentTimePlus2.add(Calendar.HOUR, 2);
Calendar yourDataTime; // wherever it might come from
if (yourDataTime.getTime().before(currentTimePlus2.getTime()) &&
    yourDataTime.getTime().after(currentTime.getTime())) {
    // your data is within the next two hours
} else {
    // your data is NOT within the next two hours
}
| |
doc_3083
|
import org.specs2.mock.Mockito
import org.specs2.mutable.Specification
import org.specs2.specification.Scope
import akka.event.LoggingAdapter
class MySpec extends Specification with Mockito {
"Something" should {
"do something" in new Scope {
val logger = mock[LoggingAdapter]
val myVar = new MyClassTakingLogger(logger)
myVar.doSth()
there was no(logger).error(any[Exception], "my err msg")
}
}
}
When running this, I get the following error:
[error] org.mockito.exceptions.misusing.InvalidUseOfMatchersException:
[error] Invalid use of argument matchers!
[error] 2 matchers expected, 1 recorded:
[error] -> at org.specs2.mock.mockito.MockitoMatchers$class.any(MockitoMatchers.scala:47)
[error]
[error] This exception may occur if matchers are combined with raw values:
[error] //incorrect:
[error] someMethod(anyObject(), "raw String");
[error] When using matchers, all arguments have to be provided by matchers.
[error] For example:
[error] //correct:
[error] someMethod(anyObject(), eq("String by matcher"));
Which would make a lot of sense, but neither eq("my err msg") nor equals("my err msg") does the job as I get an error. What am I missing?
A: When you are using matchers to match parameters, you have to use them for all parameters, as the message "all arguments have to be provided by matchers" indicates.
Moreover, if you use a specs2 matcher it needs to be strongly typed. equals is a Matcher[Any], but there is no conversion from Matcher[Any] to a String, which is what the method accepts.
So you need a Matcher[T] or a Matcher[String] in your case. If you just want to test for equality, the strongly-typed matcher is ===
there was no(logger).error(any[Exception], ===("hey"))
A: I would like to add that you should be wary of default arguments, i.e. if using matchers when stubbing methods, make sure to pass argument matchers for all arguments, because default arguments will almost certainly have constant values - causing this same error to appear.
E.g. to stub the method
def myMethod(arg1: String, arg2: String, arg3: String = "default"): String
you cannot simply do
myMethod(anyString, anyString) returns "some value"
but you also need to pass an argument matcher for the default value, like so:
myMethod(anyString, anyString, anyString) returns "some value"
Just lost half an hour figuring this out :)
| |
doc_3084
|
@echo off
chcp 65001 > NUL
php engine.php
The "chcp 65001" part is specifically to make it use Unicode (but it doesn't).
engine.php contains:
<?php
$input = fgets(STDIN);
var_dump($input);
When I run start.bat, it opens a console with engine.php running:
åäölöäå
string(9) " l
"
Press any key to continue . . .
As you can see, it strips away every character except for the "l". This means that my scripts, which rely on me pasting text as the input, break, which completely cripples my workflow. I need to be able to paste any string I have copied into my script and have it survive perfectly.
Why does this happen? What can we do to get around it?
Please don't tell me to use PowerShell. It's a bloody broken mess and also seems to behave the same even when I force it to run (I never want to deal with that again). Also, I don't have the Windows Terminal yet, so I'm stuck with classic console for at least a few more years.
A: Got the same problem in Windows 10 and PHP 7.2.
Solved it by switching to the readline() function instead of fgets().
Combined with the Unicode code page set by chcp 65001, the received input is now valid UTF-8. No conversion needed.
https://www.php.net/manual/en/function.readline.php
| |
doc_3085
|
Please note I created a function Root for drawer navigation and
tried to activate it through CategoryNavigator's <Stack.Navigator>. I guess the onPress of the menu button is not doing what I am hoping for.
I want to keep this structure because I want the header title to change according to the screen but have the constant hamburger menu also visible on all screens.
Also, see the image.
Please see the code:
import 'react-native-gesture-handler';
import { createNativeStackNavigator } from '@react-navigation/native-stack';
import React from 'react';
import CategoriesScreen from '../screens/CategoriesScreen';
import CategoryMealsScreen from '../screens/CategoryMealsScreen';
import MealDetailScreen from '../screens/MealDetailScreen';
import Colors from '../constants/Colors';
import FavoriteNavigator from './FavoriteNavigator';
import { createDrawerNavigator } from '@react-navigation/drawer';
import { HeaderButtons, Item } from 'react-navigation-header-buttons';
import CustomHeaderButton from '../components/CustomHeaderButton';
import FavoriteScreen from '../screens/FavoriteScreen';
const Stack = createNativeStackNavigator();
const Drawer = createDrawerNavigator();
function Root() {
return (
<Drawer.Navigator>
<Drawer.Screen name="Favorites" component={FavoriteNavigator} />
</Drawer.Navigator>
);
}
const CategoryNavigator = (props) => {
const { navigation } = props;
return (
<Stack.Navigator
screenOptions={{
headerStyle: {
backgroundColor: Platform.OS === 'android' ? Colors.primaryColor : '',
},
headerTintColor:
Platform.OS === 'android' ? 'white' : Colors.primaryColor,
headerLeft: () => (
<HeaderButtons HeaderButtonComponent={CustomHeaderButton}>
<Item title="Menu" iconName="ios-menu" onPress={Root} />
</HeaderButtons>
),
headerBackVisible: true,
headerBackTitle: '',
}}
>
<Stack.Screen
name="Categories"
component={CategoriesScreen}
options={{
title: 'Categories',
}}
/>
<Stack.Screen
name="CategoryMeals"
component={CategoryMealsScreen}
options={{
title: 'Categroy Meals',
}}
/>
<Stack.Screen
name="MealDetail"
component={MealDetailScreen}
options={{ title: ' ' }}
/>
</Stack.Navigator>
);
};
export default CategoryNavigator;
Also adding App.JS Code
import 'react-native-gesture-handler';
import React, { useState, useEffect } from 'react';
import { Platform } from 'react-native';
import { NavigationContainer } from '@react-navigation/native';
import { enableScreens } from 'react-native-screens';
import * as Font from 'expo-font';
import AppLoading from 'expo-app-loading';
import { createBottomTabNavigator } from '@react-navigation/bottom-tabs';
import FavoriteScreen from './screens/FavoriteScreen';
import { Ionicons } from '@expo/vector-icons';
import CategoryNavigator from './components/CategoryNavigator';
import Colors from './constants/Colors';
import FavoriteNavigator from './components/FavoriteNavigator';
enableScreens(true);
const fetchFonts = () => {
return Font.loadAsync({
'open-sans': require('./assets/fonts/OpenSans-Regular.ttf'),
'open-sans-bold': require('./assets/fonts/OpenSans-Bold.ttf'),
});
};
const Tab = createBottomTabNavigator();
export default function App() {
const [fontLoaded, setFontLoaded] = useState(false);
const primaryColor = Colors.primaryColor;
const size = 25;
if (!fontLoaded) {
return (
<AppLoading
startAsync={fetchFonts}
onFinish={() => setFontLoaded(true)}
onError={(err) => console.log(err)}
/>
);
}
return (
<NavigationContainer>
<Tab.Navigator
screenOptions={({ route }) => ({
tabBarIcon: ({ focused, primaryColor, size }) => {
let iconName;
if (route.name === 'All') {
iconName = focused ? 'ios-restaurant' : 'ios-restaurant-outline';
} else if (route.name === 'Favorites') {
iconName = focused ? 'ios-heart' : 'ios-heart-outline';
}
// You can return any component that you like here!
return (
<Ionicons name={iconName} size={size} color={primaryColor} />
);
},
tabBarActiveTintColor: primaryColor,
tabBarInactiveTintColor: 'gray',
headerShown: false,
})}
>
<Tab.Screen name="All" component={CategoryNavigator} />
<Tab.Screen name="Favorites" component={FavoriteNavigator} />
</Tab.Navigator>
</NavigationContainer>
);
}
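One possible reading (a sketch, not a verified fix): Root is defined but never rendered, since App.js mounts the Tab.Navigator directly, and onPress={Root} passes a component where a callback is expected. The usual pattern is to nest CategoryNavigator inside the Drawer.Navigator and open the drawer from the header button with something like onPress={() => navigation.toggleDrawer()}.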
| |
doc_3086
|
I wrote this API request
http://prometheus01/api/v1/query_range?query=count(kpi_metrics)&start=1515747230&end=1515750830&step=60
returned
{"status":"success","data":{"resultType":"matrix","result":[{"metric":{},"values":[[1515747230,"39"],[1515747290,"39"],[1515747350,"39"],[1515747410,"39"],[1515747470,"39"],[1515747530,"39"],[1515747590,"39"],[1515747650,"39"],[1515747710,"39"],[1515747770,"39"],[1515747830,"39"],[1515747890,"39"],[1515747950,"39"],[1515748010,"39"],[1515748070,"39"],[1515748130,"39"],[1515748190,"39"],[1515748250,"39"],[1515748310,"39"],[1515748370,"39"],[1515748430,"39"],[1515748490,"39"],[1515748550,"39"],[1515748610,"39"],[1515748670,"39"],[1515748730,"39"],[1515748790,"39"],[1515748850,"39"],[1515748910,"39"],[1515748970,"39"],[1515749030,"39"],[1515749090,"39"],[1515749150,"39"],[1515749210,"39"],[1515749270,"39"],[1515749330,"39"],[1515749390,"39"],[1515749450,"39"],[1515749510,"39"],[1515749570,"39"],[1515749630,"39"],[1515749690,"39"],[1515749750,"39"],[1515749810,"39"],[1515749870,"39"],[1515749930,"39"],[1515749990,"39"],[1515750050,"39"],[1515750110,"39"],[1515750170,"39"],[1515750230,"39"],[1515750290,"39"],[1515750350,"39"],[1515750410,"39"],[1515750470,"39"],[1515750530,"39"],[1515750590,"39"],[1515750650,"39"],[1515750710,"39"],[1515750770,"39"],[1515750830,"39"]]}]}}
This is good.
Next, I want to get information on whether the value satisfies the condition
kpi_metrics<1
I wrote API request
http://prometheus01/api/v1/query_range?query=count(kpi_metrics<1)&start=1515747230&end=1515750830&step=60
BUT prometheus returned only
{"status":"success","data":{"resultType":"matrix","result":[]}}
How should the API request change to get results based on the condition kpi_metrics<1?
A: The query and result are correct: all of the kpi_metrics time series are at 39, which is not less than 1, so nothing matches and the result vector is empty.
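A hedged aside (my addition): count() over an empty instant vector returns no data rather than 0, so an empty result is expected whenever no series match the filter. If a 0 is wanted instead, the bool modifier turns the comparison into 0/1 samples that can be summed, e.g. query=sum(kpi_metrics < bool 1).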
| |
doc_3087
|
It all works when it is stored locally.
I uploaded the PHP files to a domain and the database to db4free (a free testing server).
The PHP file echoes this JSON:
{
"message":"DB_CONNECT_OK",
"songsGroups":[
{"groupName":"Christmas","language":"arabic"},
{"groupName":"Christmas","language":"english"},
{"groupName":"Easter","language":"arabic"},
{"groupName":"Mary","language":"arabic"}],
"success":1
}
Json parser code:
try {
if(method == "GET"){
Log.i(TAG,"inGet");
// request method is GET
OkHttpClient httpClient = new OkHttpClient();
Request httpGet = new Request.Builder().url(newUrl.toString()).build();
Response httpResponse = httpClient.newCall(httpGet).execute();
String httpEntity = httpResponse.body().string();
//Log.d(TAG,"httpEntity " + httpEntity);
is = new ByteArrayInputStream(httpEntity.getBytes());
Log.d(TAG, is.toString());
}
}
I started debugging locally and globally, and realized that globally, instead of getting the PHP echo, I am getting:
httpEntity = `
<html>
<body>
<script type="text/javascript" src="/aes.js" ></script>
<script>function toNumbers(d){var e=[];d.replace(/(..)/g,function(d){e.push(parseInt(d,16))});return e}function toHex(){for(var d=[],d=1==arguments.length&&arguments[0].constructor==Array?arguments[0]:arguments,e="",f=0;f<d.length;f++)e+=(16>d[f]?"0":"")+d[f].toString(16);return e.toLowerCase()}var a=toNumbers("f655ba9d09a112d4968c63579db590b4"),b=toNumbers("98344c2eee86c3994890592585b49f80"),c=toNumbers("6340d3d5958d62708984fc0193ccdb68");document.cookie="__test="+toHex(slowAES.decrypt(c,2,a,b))+"; expires=Thu, 31-Dec-37 23:55:55 GMT; path=/"; document.cookie="referrer="+escape(document.referrer); location.href="http://sitapp.byethost15.com/get_all_taratil_groups.php?ckattempt=1";</script>
<noscript>This site requires Javascript to work, please enable Javascript in your browser or use a browser with Javascript support</noscript>
</body>
</html>`
The page source is: `
{
"message":"DB_CONNECT_OK",
"songsGroups":
[{"groupName":"Christmas","language":"arabic"}
,{"groupName":"Christmas","language":"english"}
,{"groupName":"Easter","language":"arabic"}
,{"groupName":"Mary","language":"arabic"}],
"success":1
}`
Where did that JS come from?
So instead of the echo I am getting this JS from the line String httpEntity = httpResponse.body().string();, while it was working fine when stored locally.
A: I tried this in my code and it worked after setting the responseStr:
try (CloseableHttpResponse response = httpClient.execute(httpGet)) {
    setResponseStr(EntityUtils.toString(response.getEntity()));
}
| |
doc_3088
|
Here's my code.
from urllib import urlopen
from bs4 import BeautifulSoup
SourceURL = "http://www.amazon.in/s/ref=nb_sb_noss_2?url=search-alias%3Daps&field-keywords=android"
ResultsPage = urlopen(SourceURL )
Soup = BeautifulSoup(ResultsPage)
print "<SearchResults>"
for SearchResult in Soup.findAll('li', attrs={'class': 's-result-item celwidget'}):
#Read Result Title
Title = SearchResult.find("h2", {"class": "a-size-medium a-color-null s-inline s-access-title a-text-normal"})
ResultTag = "\t<Result><![CDATA["
if Title is not None:
ResultTag += Title.text
ResultTag += "]]></Result>"
print ResultTag
print "</SearchResults>"
The output displayed is as below
<SearchResults>
<Result><![CDATA[Micromax Bolt S301 (Black, No charger, No earphone inbox)]]></Result>
<Result><![CDATA[Android Application Development (with Kitkat Support), Black Book]]></Result>
<Result><![CDATA[ZTE Blade Buzz White V815W]]></Result>
<Result><![CDATA[Android: App Development & Programming Guide: Learn In A Day! (Android, Rails, Ruby Programming, App Development...]]></Result>
<Result><![CDATA[]]></Result>
<Result><![CDATA[Karbonn Titanium S21 (Grey)]]></Result>
<Result><![CDATA[Head First Android Development]]></Result>
<Result><![CDATA[Micromax Canvas A1 Android One (White, 8GB)]]></Result>
<Result><![CDATA[Professional Android 4 Application Development (Wrox)]]></Result>
<Result><![CDATA[OnePlus X (Onyx) - Invite Only]]></Result>
<Result><![CDATA[Lenovo Vibe S1 (4G, White)]]></Result>
<Result><![CDATA[Micromax Bolt D320 (Black, 4GB)]]></Result>
<Result><![CDATA[2 in 1 Capacitive Stylus Pen With Black Ball Pen for Android Touch Sceen Mobile Phones and Tablets All iPads and...]]></Result>
<Result><![CDATA[Moto E 2nd Generation XT1506 (3G, Black)]]></Result>
<Result><![CDATA[Android: App Development & Programming Guide: Learn In A Day!]]></Result>
<Result><![CDATA[Lenovo Vibe S1 (4G, Dark Blue)]]></Result>
</SearchResults>
If you notice, the fifth result is missing from the output for some reason, while the same code prints all the other rows. Essentially, the SearchResult.find() method is returning a null value for only one record.
Can you please let me know if I am missing something?
Thanks,
Nikhil
A: If you look at your link http://www.amazon.in/s/ref=nb_sb_noss_2?url=search-alias%3Daps&field-keywords=android, the 5th li element matches your criterion for the class name s-result-item celwidget (it is actually the "Customers shopped for android in" block), but it does not contain anything matching your second criterion of a-size-medium a-color-null s-inline s-access-title a-text-normal, which is causing Title to be set to None.
You can probably update your condition as below to print the desired output.
if Title is not None:
ResultTag = "\t<Result><![CDATA["
ResultTag += Title.text
ResultTag += "]]></Result>"
print ResultTag
| |
doc_3089
|
I have an NSTimer that calls the updateLabel method to update the countdownLabel:
- (void)updateLabel
{
NSString *counterStr;
self.dateComponents = [self.gregorianCalendar components:(NSHourCalendarUnit | NSMinuteCalendarUnit | NSSecondCalendarUnit | NSDayCalendarUnit | NSMonthCalendarUnit | NSYearCalendarUnit)
fromDate:[NSDate date]
toDate:self.countdownDate
options:0];
int yearsRemaining = [self.dateComponents year];
int monthsRemaining = [self.dateComponents month];
int daysRemaining = [self.dateComponents day];
int hoursRemaining = [self.dateComponents hour];
int minutesReamining = [self.dateComponents minute];
int secondsReamining = [self.dateComponents second];
if ((yearsRemaining + monthsRemaining + daysRemaining + hoursRemaining + minutesReamining + secondsReamining) > 0)
{
if (yearsRemaining == 0 && monthsRemaining == 0 && daysRemaining == 0 && hoursRemaining == 0)
counterStr = [NSString stringWithFormat:@"%02d:%02d", minutesReamining, secondsReamining];
else if (yearsRemaining == 0 && monthsRemaining == 0 && daysRemaining == 0)
counterStr = [NSString stringWithFormat:@"%02d:%02d:%02d", hoursRemaining, minutesReamining, secondsReamining];
else if (yearsRemaining == 0 && monthsRemaining == 0)
counterStr = [NSString stringWithFormat:@"%02d:%02d:%02d:%02d", daysRemaining, hoursRemaining, minutesReamining, secondsReamining];
else if (yearsRemaining == 0)
counterStr = [NSString stringWithFormat:@"%02d:%02d:%02d:%02d:%02d", monthsRemaining, daysRemaining, hoursRemaining, minutesReamining, secondsReamining];
else
counterStr = [NSString stringWithFormat:@"%i:%02d:%02d:%02d:%02d:%02d", yearsRemaining, monthsRemaining, daysRemaining, hoursRemaining, minutesReamining, secondsReamining];
}
else
counterStr = @"Countdown Ended!";
self.countdownLabel.text = counterStr;
}
The timer aspect of the app works just fine. However, I can't seem to figure out how to add the time-unit labels at run time so that I only add the labels I need, based on what's shown in the countdown label, and have the time-unit labels align under the respective numbers below the countdown label.
I've tried using a method that extracted the respective time value as a substring and then figured out where to put the time-unit label based on that substring's rect, but it didn't work well or look pretty.
There has to be an easier way to do this, and I'm just learning Objective-C, so any help would be much appreciated!
| |
doc_3090
|
I don't need any fancy skeleton or other features, just the center of mass of the moving object will do it.
Any pointers?
A: I would see "Comparing a saved movement with other movement with Kinect" to track the entire body. The answer there shows code for saving skeleton data. And see "mapping an ellipse to a joint in kinect sdk 1.5" for tracking joints, if you want to track the joints rather than the entire body (it currently works better, but when tracking the entire body works, use that because it is more effective and efficient).
A: Your case is pretty simple, but it requires initialization for the object, since in general the term "object" is ill-defined: it can be the closest object, a moving object, or even the object that was touched or has a certain color, size, or shape.
Let's assume that you define the object by motion, that is, whatever moves in your point cloud is an object. I suggest the following:
*Object detection is easy if the object moves more than its size, since then you just subtract depth maps and end up with your object: depth1-depth2 > T. But if the object moves slowly and shifts only by a fraction of its size, you have to use whatever high-frequency info you have, which can be depth or colour or both, and it is going to be noisy.
*As soon as you have your object selected, you may want to clean it up by running some morphological filters (erode + dilate) to erase noise and get a single blob. After that you just need to find some features in the blob, such as average depth or mean colour, and look for them in a small window around the object's previous location in order to rediscover the object.
*Finally, don't forget to update these features as the object moves through the scene.
Some other ideas you may want to use are: depth gradient, connected components in depth, pre-recording background depth for cleaner subtraction, running grabCut on depth area selected by mouse click, etc.
| |
doc_3091
|
grep -E -w "entry1" file1 >output.csv
In output.csv I get four lines matching
A, entry1
B, entry1
C, entry1
D, entry1
I am opening this file in excel and modifying the row header in column1 to
A_type1, entry1
B_type1, entry1
C_type1, entry1
D_type1, entry1
Is it possible to do the same thing in a single line?
A: According to the grep output you gave, I think this line should help:
awk -F, '$2~/entry1/{print $1"_type1" FS $2}' file
A: You can do something like
$ grep -E "A|B" sample.txt | perl -pe 's/A/A_type1/g; s/B/B_type1/g'
This will filter the input file and apply the transformation on the output. The input file remains unchanged.
If you would rather perform an in-place replacement in the file without affecting the other lines, you could use sed. In this case the input file does get updated.
sed -i 's/A/A_type1/g; s/B/B_type1/g' sample.txt
admin@DESKTOP-J2E0MU7 /cygdrive/c/workspace/temp
$ cat sample.txt
A line
B file
C test1
D sample
E something else
F what
G else
A other line
B second occurence
admin@DESKTOP-J2E0MU7 /cygdrive/c/workspace/temp
$ grep -E "A|B" sample.txt | perl -pe 's/A/A_type1/g; s/B/B_type1/g'
A_type1 line
B_type1 file
A_type1 other line
B_type1 second occurence
A: A different solution using awk to append "_type1" to column 1 of whatever comes out of grep:
$ grep -E "A|B" sample.txt | awk '{$1=$1"_type1"; print}'
A: awk -F, '{print $1"_type1",$2}' OFS=, file
A_type1, entry1
B_type1, entry1
C_type1, entry1
D_type1, entry1
| |
doc_3092
|
A: When a query is cached, NHibernate will cache the IDs of the entities resulting from the query.
Very importantly, it does not cache the entities themselves - only their IDs. This means that you almost certainly want to ensure that those entities are also set to be cachable in your second level cache. Otherwise, NHiberate will get the IDs of the entities from the query cache, but then be forced to go to the database to get the actual entities. That could be more costly than just going to the database in the first place!
Also important: queries are cached based on their exact SQL and parameter values. Any differences in either of those will mean that the database will be hit. So you probably only want to cache those queries that have little variance in their inputs.
A: When you enable caching, NHibernate stores query results somewhere internally when you execute a query. When you execute the query with the SAME parameters again, it gets the results from the cache, not from the database, and of course that is much faster! But beware that other apps can modify the database in the background! NHibernate can, however, update its caches.
A: By using it, NHibernate doesn't need to access the data store; it accesses what's in the cache.
| |
doc_3093
|
I don't have problems with the plotting itself. I have exactly the distribution that I want. However, the plot shows only one part of the magnitude orders: the dataframe has data at the -07, -08, and -09 orders. In the chart below I tried gaps, breaks, and some transformations, but with bad results. Below you can find an example of what I want to plot. I only work with R, so I will appreciate it if you share only R code.
Here is the example code:
##plot data
ggplot(data, aes(x = reorder(Treatment, -mean), y = mean))+
geom_bar(aes(x = reorder(Treatment, -mean), y= mean), stat="identity", fill="black" , alpha=0.5)+
geom_errorbar(aes(x = reorder(Treatment, -mean), ymin=mean-se, ymax=mean+se), width=0.4, colour="black", alpha=0.9, size=1.3)+
theme(
line = element_line(colour = "black", size = 1, linetype = 1, lineend = "butt"),
rect = element_rect(fill = "white", colour = "black", size = 1, linetype = 1),
aspect.ratio = 1,
plot.background = element_rect(fill = "white"),
plot.margin = margin(1, 1, 1, 1, "cm"),
axis.text = element_text(size = rel(2.5), colour = "#000000", margin = 1),
strip.text = element_text(size = rel(0.8)),
axis.line = element_blank(),
axis.text.x = element_text(vjust = 0.2),
axis.text.y = element_text(hjust = 1),
axis.ticks = element_line(colour = "#000000", size = 1.2),
axis.title.x = element_text(size = 30, vjust=0.5),
axis.title.y = element_text(size = 30, angle = 90),
axis.ticks.length = unit(0.15, "cm"),
legend.background = element_rect(colour = NA),
legend.spacing = unit(0.15, "cm"),
legend.key = element_rect(fill = "grey95", colour = "white"),
legend.key.size = unit(1.2, "lines"),
legend.key.height = NULL,
legend.key.width = NULL,
legend.text = element_text(size = rel(2.0)),
legend.text.align = NULL,
legend.title = element_text(size = rel(2.0), face = "bold", hjust = 0),
legend.title.align = NULL,
legend.position = c(.80, .88),
legend.direction = NULL,
legend.justification = "center",
legend.box = NULL,
panel.background = element_rect(fill = "#ffffff", colour = "#000000",
size = 2, linetype = "solid"),
panel.border = element_rect(colour = "black", fill=NA, size=2),
)+
ylab(expression(Lp[r]~(m~s^-1~Mpa^-1))) + xlab(expression(Treatment))
A: library(ggbreak)
df <- data.frame(treatment = factor(1:4),
y = c(100000, 1000, 100, 10),
se = c(10000, 100, 10, 1))
ggplot(df, aes(x=treatment, y=y)) +
geom_col() +
geom_errorbar(aes(ymin=y-se, ymax=y+se), width=.4) +
scale_y_cut(breaks=c(25, 150, 1600))
| |
doc_3094
|
Context
I have 2 containers: celery-worker and api. I want to send data via websockets from the celery-worker container to the browser through the api container using channels; here is a picture:
Question
Do you know how to "initialize" channels in the api container and use those channels inside the celery-worker container, so that afterwards, in the celery-worker container, I can call only Group('pablo').send(message) and it automatically sends to the browser?
Any advice will be ok.
Note: I tried not to post code because it is very extensive and might make the question harder to understand, but if you want I can post whatever code you need.
A: I have created an example (with simple tasks) that uses Celery and Django Channels 2 (github). The Celery worker sends messages to the channel layer, and the messages are broadcast to clients connected to the websocket.
On server side I have consumer:
import json

from channels.generic.websocket import AsyncWebsocketConsumer

class TasksConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        # One group name for all clients: 'tasks'
        self.group_name = 'tasks'
        await self.channel_layer.group_add(self.group_name, self.channel_name)
        await self.accept()
    async def disconnect(self, close_code):
        await self.channel_layer.group_discard(self.group_name, self.channel_name)
    async def receive(self, text_data):
        pass
    async def task_update_message(self, event):
        # Forward the group message to the connected websocket client
        await self.send(json.dumps(event))
You can see that the group name is 'tasks'. On the Celery side, the worker calls:
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

channel_layer = get_channel_layer()
async_to_sync(channel_layer.group_send)("tasks", msg)
To use Channels in the worker code, you need to set up the Django settings:
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'server.settings')
import django
django.setup()
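For completeness, a minimal routing sketch that attaches the consumer above (Channels 2 style; the module layout and the websocket path are assumptions):
# routing.py
from django.conf.urls import url
from channels.routing import ProtocolTypeRouter, URLRouter

from .consumers import TasksConsumer

application = ProtocolTypeRouter({
    # In Channels 2 the consumer class itself acts as the ASGI application
    "websocket": URLRouter([
        url(r"^ws/tasks/$", TasksConsumer),
    ]),
})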
Hope it helps!
A: You need to let Compose know which containers depend on which. In the example below, user_service and notification_service depend on the db (PostgreSQL) container; add depends_on (or links) for each container that needs to reach another one. Here is an example:
version: '3'
services:
  db:
    image: postgres
    ports:
      - '5434:5434'
  user_service:
    build: ""
    environment:
      - JWT_SECRET=mysecret_json_web_token_pass
    command: python user/app.py
    volumes:
      - .:/microservices
    ports:
      - "9001:9001"
    depends_on:
      - db
  notification_service:
    build: ""
    environment:
      - JWT_SECRET=mysecret_json_web_token_pass
    command: python notification/app.py
    volumes:
      - .:/microservices
    ports:
      - "9002:9002"
    depends_on:
      - db
For your case you might want to add:
depends_on:
  - celery
  - redis
You can also establish a local network, but rather than doing that I created the containers in the same docker-compose file; that way they know each other.
Here is another example
version: '2'
services:
  nginx:
    image: nginx:latest
    container_name: nx01
    ports:
      - "8001:8001"
    volumes:
      - ../src:/src
      - ./static:/static
      - ./media:/media/
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - web
  web:
    build: .
    container_name: dg01
    command: gunicorn mydjango.wsgi -b 0.0.0.0:8000
    depends_on:
      - db
    links:
      - redis
    volumes:
      - ../src:/src
      - ./static:/static
      - ./media:/media/
    expose:
      - "8001"
  db:
    image: postgres:latest
    container_name: pq01
    ports:
      - "5432:5432"
  redis:
    image: redis:latest
    container_name: rd01
    ports:
      - '6379:6379'
  celery:
    build: .
    container_name: cl01
    command: celery worker --app=app.tasks
    volumes:
      - ..:/src
    links:
      - db
      - redis
To reach these services from your code, use the service names as hostnames, like this:
CELERY_BROKER_URL = 'redis://redis:6379/0'
CELERY_RESULT_BACKEND = 'redis://redis:6379/0'
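If the channel layer is Redis-backed, the api and celery-worker containers must also point at the same Redis service; a hedged settings sketch (the "redis" hostname matches the service name in the compose file above):
# settings.py
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            # "redis" resolves to the redis container on the compose network
            "hosts": [("redis", 6379)],
        },
    },
}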
| |
doc_3095
|
input {
http {
id => "sensor_data_http_input"
user => "sensor_data"
password => "sensor_data"
}
}
filter {
jdbc_streaming {
jdbc_driver_library => "E:\ElasticStack\mysql-connector-java-8.0.18\mysql-connector-java-8.0.18.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_connection_string => "jdbc:mysql://localhost:3306/sensor_metadata"
jdbc_user => "elastic"
jdbc_password => "hide"
statement => "select st.sensor_type as sensorType, l.customer as customer, l.department as department, l.building_name as buildingName, l.room as room, l.floor as floor, l.location_on_floor as locationOnFloor, l.latitude, l.longitude from sensors s inner join sensor_type st on s.sensor_type_id=st.sensor_type_id inner join location l on s.location_id=l.location_id where s.sensor_id= :sensor_identifier"
parameters => { "sensor_identifier" => "sensor_id"}
target => "lookupResult"
}
mutate {
rename => {"[lookupResult][0][sensorType]" => "sensorType"}
rename => {"[lookupResult][0][customer]" => "customer"}
rename => {"[lookupResult][0][department]" => "department"}
rename => {"[lookupResult][0][buildingName]" => "buildingName"}
rename => {"[lookupResult][0][room]" => "room"}
rename => {"[lookupResult][0][floor]" => "floor"}
rename => {"[lookupResult][0][locationOnFloor]" => "locationOnFloor"}
add_field => {
"location" => "%{[lookupResult][0][latitude]},%{[lookupResult][0][longitude]}"
}
remove_field => ["lookupResult", "headers", "host"]
}
}
output {
elasticsearch {
hosts =>["localhost:9200"]
index => "sensor_data-%{+YYYY.MM.dd}"
user => "elastic"
password => "hide"
}
}
But when I start Logstash, I see the following error:
[2020-01-09T22:57:16,260]
[ERROR][logstash.javapipeline]
[main] Pipeline aborted due to error {
:pipeline_id=>"main",
:exception=>#<TypeError: failed to coerce jdk.internal.loader.ClassLoaders$AppClassLoader to java.net.URLClassLoader>,
:backtrace=>[
"org/jruby/java/addons/KernelJavaAddons.java:29:in `to_java'",
"E:/ElasticStack/Logstash/logstash-7.4.1/vendor/bundle/jruby/2.5.0/gems/logstash-filter-jdbc_streaming-1.0.7/lib/logstash/plugin_mixins/jdbc_streaming.rb:48:in `prepare_jdbc_connection'",
"E:/ElasticStack/Logstash/logstash-7.4.1/vendor/bundle/jruby/2.5.0/gems/logstash-filter-jdbc_streaming-1.0.7/lib/logstash/filters/jdbc_streaming.rb:200:in `prepare_connected_jdbc_cache'",
"E:/ElasticStack/Logstash/logstash-7.4.1/vendor/bundle/jruby/2.5.0/gems/logstash-filter-jdbc_streaming-1.0.7/lib/logstash/filters/jdbc_streaming.rb:116:in `register'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:56:in `register'",
"E:/ElasticStack/Logstash/logstash-7.4.1/logstash-core/lib/logstash/java_pipeline.rb:195:in `block in register_plugins'", "org/jruby/RubyArray.java:1800:in `each'",
"E:/ElasticStack/Logstash/logstash-7.4.1/logstash-core/lib/logstash/java_pipeline.rb:194:in `register_plugins'",
"E:/ElasticStack/Logstash/logstash-7.4.1/logstash-core/lib/logstash/java_pipeline.rb:468:in `maybe_setup_out_plugins'",
"E:/ElasticStack/Logstash/logstash-7.4.1/logstash-core/lib/logstash/java_pipeline.rb:207:in `start_workers'",
"E:/ElasticStack/Logstash/logstash-7.4.1/logstash-core/lib/logstash/java_pipeline.rb:149:in `run'",
"E:/ElasticStack/Logstash/logstash-7.4.1/logstash-core/lib/logstash/java_pipeline.rb:108:in `block in start'"],
:thread=>"#<Thread:0x17fa8113 run>"
}
[2020-01-09T22:57:16,598]
[ERROR][logstash.agent] Failed to execute action {
:id=>:main,
:action_type=>LogStash::ConvergeResult::FailedAction,
:message=>"Could not execute action: PipelineAction::Create<main>, action_result: false",
:backtrace=>nil
}
I am enriching my http input with some data from my MySQL database, but Logstash doesn't start at all.
A: I see two potential problems, but you need to check which one is really the issue here:
*
*The MySQL driver class name has changed to com.mysql.cj.jdbc.Driver
*A classloader problem can occur when a recent JDBC driver sits outside the classloader path in combination with newer JDK versions. There are several issues about this on GitHub.
Put the driver in the Logstash folder under <logstash-install-dir>/vendor/jar/jdbc/ (you need to create this folder first). If this doesn't work, move the driver under <logstash-install-dir>/logstash-core/lib/jars and don't provide any driver path in the config file: jdbc_driver_library => ""
A: Problem solved by removing the jdbc_driver_library option entirely from the config file and also, as mentioned, setting jdbc_driver_class to com.mysql.cj.jdbc.Driver.
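For reference, the relevant filter section after that fix would look roughly like this (assuming the driver jar now lives under <logstash-install-dir>/logstash-core/lib/jars):
filter {
  jdbc_streaming {
    # no jdbc_driver_library: the jar is picked up from logstash-core/lib/jars
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/sensor_metadata"
    jdbc_user => "elastic"
    jdbc_password => "hide"
    statement => "select ..." # same lookup query as above
    parameters => { "sensor_identifier" => "sensor_id" }
    target => "lookupResult"
  }
}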
| |
doc_3096
|
My code:
public void ProcessRequest(HttpContext context)
{
HttpContext.Current.Response.Clear();
HttpContext.Current.Response.ContentType = "application/pdf";
HttpContext.Current.Response.AddHeader("content-disposition", "attachment;filename=john.pdf");
HttpContext.Current.Response.Cache.SetCacheability(HttpCacheability.NoCache);
StringWriter stringWriter = new StringWriter();
HtmlTextWriter htmlTextWriter = new HtmlTextWriter(stringWriter);
string imagepath = context.Server.MapPath(@"~/img/logo3.png");
Document Doc = new Document(PageSize.A4, 10f, 10f, 10f, 10f);
HTMLWorker htmlparser = new HTMLWorker(Doc);
PdfWriter pdfwriter = PdfWriter.GetInstance(Doc, HttpContext.Current.Response.OutputStream);
Doc.Open();
iTextSharp.text.Image image = iTextSharp.text.Image.GetInstance(imagepath);
image.ScalePercent(106f, 90f);
Doc.Add(image);
AddPDf(pdfwriter, Doc);
OnEndPage(pdfwriter, Doc);
Doc.Close();
HttpContext.Current.Response.End();
}
public void AddPDf(PdfWriter writer, Document document)
{
PdfPTable table = new PdfPTable(3);
table.TotalWidth = 400f;
//fix the absolute width of the table
table.LockedWidth = true;
//relative col widths in proportions - 1/3 and 2/3
float[] widths = new float[] { 2f, 4f, 6f };
table.SetWidths(widths);
table.HorizontalAlignment = 0;
//leave a gap before and after the table
table.SpacingBefore = 20f;
table.SpacingAfter = 30f;
PdfPCell cell = new PdfPCell(new Phrase("Header spanning 3 columns"));
cell.Colspan = 3;
cell.HorizontalAlignment = 1; //0=Left, 1=Centre, 2=Right
table.AddCell(cell);
table.AddCell("Col 1 Row 1");
table.AddCell("Col 2 Row 1");
table.AddCell("Col 3 Row 1");
table.AddCell("Col 1 Row 2");
table.AddCell("Col 2 Row 2");
table.AddCell("Col 3 Row 2");
document.Open();
document.Add(table);
}
public void OnEndPage(PdfWriter writer, Document document)
{
var content = writer.DirectContent;
var pageBorderRect = new Rectangle(document.PageSize);
pageBorderRect.Left += document.LeftMargin;
pageBorderRect.Right -= document.RightMargin;
pageBorderRect.Top -= document.TopMargin;
pageBorderRect.Bottom += document.BottomMargin;
content.SetColorStroke(BaseColor.BLACK);
content.Rectangle(pageBorderRect.Left, pageBorderRect.Bottom, pageBorderRect.Width, pageBorderRect.Height);
content.Stroke();
}
private static void addCell(PdfPTable table, string text, int rowspan)
{
BaseFont bfTimes = BaseFont.CreateFont(BaseFont.TIMES_ROMAN, BaseFont.CP1252, false);
iTextSharp.text.Font times = new iTextSharp.text.Font(bfTimes, 6, iTextSharp.text.Font.NORMAL, iTextSharp.text.BaseColor.BLACK);
PdfPCell cell = new PdfPCell(new Phrase(text, times));
cell.Rowspan = rowspan;
cell.HorizontalAlignment = PdfPCell.ALIGN_CENTER;
cell.VerticalAlignment = PdfPCell.ALIGN_MIDDLE;
table.AddCell(cell);
}
A: You can export the PDF like this:
As a link:
<a target="_blank" href="/exportPDF.ashx">Export PDF</a>
As a javascript function:
function exportPDF() {
location.href = "/exportPDF.ashx";
}
The JavaScript does a redirect, but since the response is a file, the browser will prompt what to do (open/save) and stay on the same page.
Your code does generate a PDF so you can use that as is.
| |
doc_3097
|
When I'm running
dump($this->container->get('router'));exit;
on a controller, my router context is like this:
#context: RequestContext {#306 ▼
-baseUrl: "/my-project/web/app_dev.php"
-pathInfo: "/accueil"
-method: "GET"
-host: "localhost"
-scheme: "http"
-httpPort: 82
-httpsPort: 443
-queryString: ""
-parameters: array:1 [▶]
}
But with the same code in my mailer service I get this: #context: Symfony\Component\Routing\RequestContext {#312
-baseUrl: ""
-pathInfo: "/accueil"
-method: "GET"
-host: "localhost"
-scheme: "http"
-httpPort: 80
-httpsPort: 443
-queryString: ""
-parameters: []
}
I found this problem after getting URLs like
"http://localhost/bundleRoute/myRoute/7" instead of
"http://localhost/my-project/web/app_dev.php/bundleRoute/myRoute/7"
Thanks.
A: You can configure the request context for your application when parts of it are executed from the command-line: http://symfony.com/doc/current/cookbook/console/sending_emails.html#configuring-the-request-context-globally
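As a rough sketch, the globally configured request context from that cookbook article would look like this for the values dumped above (the exact parameters file location depends on your Symfony version):
# app/config/parameters.yml
parameters:
    router.request_context.host: localhost
    router.request_context.scheme: http
    router.request_context.base_url: /my-project/web/app_dev.php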
| |
doc_3098
|
class MasterDocument {
vector<MasterProcess> processes;
};
class MasterProcess1 {
id = 10;
num_of_questions = 2;
question1;
answer1option1;
answer1option2;
question1weightage;
question2;
answer2option1;
answer2option2;
question2weightage;
//constructor
MasterProcess1(){
question1 = 1;
answer1option1 = 1;
answer1option2 = 2;
question1weightage = 0.1;
question2 = 2;
answer2option1 = 1;
answer2option2 = 2;
question2weightage = 0.2;
}
};
class MasterProcess2 {
id = 11;
num_of_questions = 3;
question1;
answer1option1;
answer1option2;
question1weightage;
question2;
answer2option1;
answer2option2;
answer2option3;
question2weightage;
question3;
answer3option1;
answer3option2;
question3weightage;
//constructor
MasterProcess2(){
question1 = 1;
answer1option1 = 1;
answer1option2 = 2;
question1weightage = 0.2;
question2 = 2;
answer2option1 = 1;
answer2option2 = 2;
answer2option3 = 3;
question2weightage = 0.3;
question3 = 3;
answer3option1 = 1;
answer3option2 = 2;
question3weightage = 0.4;
}
};
The MasterDocument and all the MasterProcesses are constants; the values do not change. But the number of questions (and answer options per question) differs for each process. I can initialize them using the constructors. But how do I add them to the vector in the MasterDocument, since each MasterProcess has a different type name, e.g. MasterProcess1, MasterProcess2, and so on? So I cannot have a single vector in the MasterDocument.
If I use the same name for every process (call each one MasterProcess), then how would I know which constructor to call for the first MasterProcess, as it has a different number of questions than the second MasterProcess?
I can hard-code the values in the MasterDocument as they do not change, but how do I initialize them? I can put all the processes in a single MasterDocument and create a huge constructor with all the questions/answers for every process, but that does not look pretty.
I can call each process MasterProcess and pass the id of the process to the constructor (like MasterProcess(id)), but how would I dictate that MasterProcess(10) should call the constructor of the first class and MasterProcess(11) the constructor of the second class?
@ Heisenbug
I followed your lead and came up with this code
#include <iostream>
#include <utility>
#include <string>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>
using namespace std;
class BaseMasterProcess {
protected:
int processID;
int num_of_Questions;
double min_Threshold_Score_for_Process;
double total_Process_Score;
double overall_Audit_Value;
int question;
pair <int,double> answer;
//define all the variable used in any sub-class
int question1;
int question2;
int question3;
int question4;
int question5;
double question1_Weightage;
double question2_Weightage;
double question3_Weightage;
double question4_Weightage;
double question5_Weightage;
int passing_Score;
pair <int,double> answer1_Option1;
pair <int,double> answer1_Option2;
pair <int,double> answer1_Option3;
pair <int,double> answer2_Option1;
pair <int,double> answer2_Option2;
pair <int,double> answer2_Option3;
pair <int,double> answer3_Option1;
pair <int,double> answer3_Option2;
pair <int,double> answer3_Option3;
pair <int,double> answer4_Option1;
pair <int,double> answer4_Option2;
pair <int,double> answer4_Option3;
pair <int,double> answer5_Option1;
pair <int,double> answer5_Option2;
pair <int,double> answer5_Option3;
public:
abstract void Init();
virtual double getQuestionWeightage(int ques) = 0;
virtual double getAnswerScore(int ques, int ans) = 0;
int getNumQuestions()
{
return num_of_Questions;
}
int getProcesssID()
{
return processID;
}
double getMinThresholdScore()
{
return min_Threshold_Score_for_Process;
}
double overallAuditValue()
{
return overall_Audit_Value;
}
};
class ConcreteMasterProcess1 : public BaseMasterProcess
{
public:
void Init()
{
processID = 10;
num_of_Questions = 3;
passing_Score = 70;
min_Threshold_Score_for_Process = 0.7;
overall_Audit_Value = 0.1;
question1 = 1;
question1_Weightage = 0.3;
answer1_Option1 = make_pair (1,0.3);
answer1_Option2 = make_pair (2,0.0);
question2 = 2;
question2_Weightage = 0.3;
answer2_Option1 = make_pair (1,0.3);
answer2_Option2 = make_pair (2,0.0);
question3 = 3;
question3_Weightage = 0.4;
answer3_Option1 = make_pair (1,0.4);
answer3_Option2 = make_pair (2,0.0);
}
double getQuestionWeightage(int ques)
{
switch (ques)
{
case 1:
return question1_Weightage;
case 2:
return question2_Weightage;
case 3:
return question3_Weightage;
}
}
double getAnswerScore(int ques, int ans)
{
if (ques == question1 && ans == answer1_Option1.first)
return answer1_Option1.second;
else if (ques == question1 && ans == answer1_Option2.first)
return answer1_Option2.second;
else if (ques == question2 && ans == answer2_Option1.first)
return answer2_Option1.second;
else if (ques == question2 && ans == answer2_Option2.first)
return answer2_Option2.second;
else if (ques == question3 && ans == answer3_Option1.first)
return answer3_Option1.second;
else
return answer3_Option2.second;
}
};
class ConcreteMasterProcess2 : public BaseMasterProcess
{
void Init()
{
processID = 11;
num_of_Questions = 4;
passing_Score = 70;
min_Threshold_Score_for_Process = 0.75;
overall_Audit_Value = 0.1;
question1 = 1;
question1_Weightage = 0.25;
answer1_Option1 = make_pair (1,0.25);
answer1_Option2 = make_pair (2,0.0);
question2 = 2;
question2_Weightage = 0.25;
answer2_Option1 = make_pair (1,0.25);
answer2_Option2 = make_pair (2,0.0);
answer2_Option3 = make_pair (3,0.15);
question3 = 3;
question3_Weightage = 0.25;
answer3_Option1 = make_pair (1,0.25);
answer3_Option2 = make_pair (2,0.0);
question4 = 4;
question4_Weightage = 0.2;
answer4_Option1 = make_pair (1,0.2);
answer4_Option2 = make_pair (2,0.0);
question5 = 5;
question5_Weightage = 0.2;
answer5_Option1 = make_pair (1,0.2);
answer5_Option2 = make_pair (2,0.0);
}
double getQuestionWeightage(int ques)
{
switch (ques)
{
case 1:
return question1_Weightage;
break;
case 2:
return question2_Weightage;
break;
case 3:
return question3_Weightage;
break;
case 4:
return question4_Weightage;
break;
}
}
double getAnswerScore(int ques, int ans)
{
if (ques == question1 && ans == answer1_Option1.first)
return answer1_Option1.second;
else if (ques == question1 && ans == answer1_Option2.first)
return answer1_Option2.second;
else if (ques == question2 && ans == answer2_Option1.first)
return answer2_Option1.second;
else if (ques == question2 && ans == answer2_Option2.first)
return answer2_Option2.second;
else if (ques == question2 && ans == answer2_Option3.first)
return answer2_Option3.second;
else if (ques == question3 && ans == answer3_Option1.first)
return answer3_Option1.second;
else if (ques == question3 && ans == answer3_Option2.first)
return answer3_Option2.second;
else if (ques == question4 && ans == answer4_Option1.first)
return answer4_Option1.second;
else
return answer4_Option2.second;
}
};
class MasterDocument
{
std::vector<BaseMasterProcess*> myProcessList;
void AddProcess(BaseMasterProcess* iProcess)
{
myProcessList.push_back(iProcess);
}
void foo()
{
//myProcessList[...]->Method1(); //do something without knowing which specific concrete class the process belongs to..
}
};
int main ()
{
BaseMasterProcess bmp;
ConcreteMasterProcess6 p6;
MD master_doc;
master_doc.addProcess(bmp); // gives ERROR
master_doc.addProcess(p6); // gives ERROR
master_doc.foo();
}
It gives me following errors:
Regarding Init() -> ISO C++ forbids declaration of ‘Init’ with no type [-fpermissive]
EDIT: changed to void Init() -> RESOLVED
Regarding function getQuestionWeightage(int) -> In member function ‘virtual double ConcreteMasterProcess1::getQuestionWeightage(int)’: error: a function-definition is not allowed here before ‘{’ token
EDIT: was missing the } at the end of switch -> RESOLVED
Regarding main() -> expected '}' at end of input; expected unqualified-id at end of input
EDIT: had an extra } in the main() -> RESOLVED
How do I resolve the errors shown in main()? All I want to do is create the MasterDocument and have the two concrete processes in myProcessList.
A: Use a structure like this; break it down:
struct MasterDocument {
vector<Process> processes;
...
};
struct Process {
vector<Question> questions;
...
};
struct Question {
vector<Answer> answers;
...
};
struct Answer {
map<int, int> scoreByAnswer; // answer -> score
};
A: Another option using shared_ptr is as follows.
#include <memory>
#include <vector>

struct Answer {
    int whatever;
};
struct Process {
    int whatever;
    std::vector<std::shared_ptr<Answer> > answers;
};
struct Document {
    Document();
    std::vector<std::shared_ptr<Process> > processes;
};
// The constructor must be defined at namespace scope, not inside main()
Document::Document()
{
    // Make some processes
    for (int i = 0; i < 5; i++) {
        // make_shared actually allocates the Process;
        // a default-constructed shared_ptr would be null
        std::shared_ptr<Process> foo = std::make_shared<Process>();
        foo->whatever = i;
        processes.push_back(foo);
    }
}
int main()
{
    Document doc; // the constructor fills doc.processes
}
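Relating this back to the errors in the question's main(): a hedged sketch of a main() that compiles against the asker's classes, assuming AddProcess, foo, and the Init methods are made public (BaseMasterProcess has pure virtual members, so it cannot be instantiated directly):
int main()
{
    MasterDocument master_doc;
    // Only the concrete subclasses can be instantiated
    ConcreteMasterProcess1* p1 = new ConcreteMasterProcess1();
    ConcreteMasterProcess2* p2 = new ConcreteMasterProcess2();
    p1->Init();
    p2->Init();
    master_doc.AddProcess(p1); // upcasts to BaseMasterProcess*
    master_doc.AddProcess(p2);
    master_doc.foo();
    // Note: nothing deletes p1/p2 here; in real code prefer
    // std::vector<std::unique_ptr<BaseMasterProcess>> inside MasterDocument.
}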
| |
doc_3099
|
*
*MacBook Pro (13-inch, 2019, Four Thunderbolt 3 ports)
*2.8 GHz Quad-Core Intel Core i7
*16 GB 2133 MHz LPDDR3
*Intel Iris Plus Graphics 655 1536 MB
*Docker: 19.03.12
*Druid: 0.19.0
Although I followed the official instructions, I failed to build or run Druid locally.
About this: https://github.com/apache/druid/tree/master/distribution/docker
I typed the following commands.
git clone https://github.com/apache/druid.git
docker build -t apache/druid:tag -f distribution/docker/Dockerfile .
However, the build never proceeds past this step.
Sending build context to Docker daemon 78.19MB
Step 1/18 : FROM maven:3-jdk-8-slim as builder
---> addee4586ff4
Step 2/18 : RUN export DEBIAN_FRONTEND=noninteractive && apt-get -qq update && apt-get -qq -y install --no-install-recommends python3 python3-yaml
---> Using cache
---> cdb74d0f6b3d
Step 3/18 : COPY . /src
---> 60d35cb6c0ce
Step 4/18 : WORKDIR /src
---> Running in 73dfa666a186
Removing intermediate container 73dfa666a186
---> 4839bf923b21
Step 5/18 : RUN mvn -B -ff -q dependency:go-offline install -Pdist,bundle-contrib-exts -Pskip-static-checks,skip-tests -Dmaven.javadoc.skip=true
---> Running in 1c9d4aa3d4e8
Additionally, I followed this instruction and ran docker-compose -f distribution/docker/docker-compose.yml up, but it failed with the error below.
coordinator | 2020-08-06T08:41:24,295 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorRuleRunner - Uh... I have no servers. Not assigning anything...
About this: https://hub.docker.com/r/apache/druid/tags
I typed the following commands.
docker pull apache/druid:0.19.0
docker run apache/druid:0.19.0
This one seems to start and prints the following:
2020-08-06T07:50:22+0000 startup service
Setting 172.17.0.2= in /runtime.properties
cat: can't open '/jvm.config': No such file or directory
2020-08-06T07:50:24,024 INFO [main] org.hibernate.validator.internal.util.Version - HV000001: Hibernate Validator 5.2.5.Final
2020-08-06T07:50:24,988 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-hdfs-storage], jars: jackson-annotations-2.10.2.jar, hadoop-mapreduce-client-common-2.8.5.jar, httpclient-4.5.10.jar, htrace-core4-4.0.1-incubating.jar, apacheds-kerberos-codec-2.0.0-M15.jar, jackson-mapper-asl-1.9.13.jar, commons-digester-1.8.jar, jetty-sslengine-6.1.26.jar, jackson-databind-2.10.2.jar, api-asn1-api-1.0.0-M20.jar, ion-java-1.0.2.jar, hadoop-mapreduce-client-shuffle-2.8.5.jar, asm-7.1.jar, jsp-api-2.1.jar, druid-hdfs-storage-0.19.0.jar, api-util-1.0.3.jar, json-smart-2.3.jar, jackson-core-2.10.2.jar, hadoop-client-2.8.5.jar, httpcore-4.4.11.jar, commons-collections-3.2.2.jar, hadoop-hdfs-client-2.8.5.jar, hadoop-annotations-2.8.5.jar, hadoop-auth-2.8.5.jar, xmlenc-0.52.jar, aws-java-sdk-s3-1.11.199.jar, commons-net-3.6.jar, nimbus-jose-jwt-4.41.1.jar, hadoop-common-2.8.5.jar, jackson-dataformat-cbor-2.10.2.jar, hadoop-yarn-server-common-2.8.5.jar, accessors-smart-1.2.jar, gson-2.2.4.jar, commons-configuration-1.6.jar, joda-time-2.10.5.jar, hadoop-aws-2.8.5.jar, aws-java-sdk-core-1.11.199.jar, commons-codec-1.13.jar, hadoop-mapreduce-client-app-2.8.5.jar, hadoop-yarn-api-2.8.5.jar, aws-java-sdk-kms-1.11.199.jar, jackson-core-asl-1.9.13.jar, curator-recipes-4.3.0.jar, hadoop-mapreduce-client-jobclient-2.8.5.jar, jcip-annotations-1.0-1.jar, jmespath-java-1.11.199.jar, hadoop-mapreduce-client-core-2.8.5.jar, commons-logging-1.1.1.jar, leveldbjni-all-1.8.jar, curator-framework-4.3.0.jar, hadoop-yarn-client-2.8.5.jar, apacheds-i18n-2.0.0-M15.jar
2020-08-06T07:50:25,004 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-kafka-indexing-service], jars: lz4-java-1.7.1.jar, kafka-clients-2.5.0.jar, druid-kafka-indexing-service-0.19.0.jar, zstd-jni-1.3.3-1.jar, snappy-java-1.1.7.3.jar
2020-08-06T07:50:25,006 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-datasketches], jars: druid-datasketches-0.19.0.jar, commons-math3-3.6.1.jar
usage: druid <command> [<args>]
The most commonly used druid commands are:
help Display help information
index Run indexing for druid
internal Processes that Druid runs "internally", you should rarely use these directly
server Run one of the Druid server types.
tools Various tools for working with Druid
version Returns Druid version information
See 'druid help <command>' for more information on a specific command.
However, even if I add an argument like version, it does not work:
❯ docker run apache/druid:0.19.0 version
2020-08-06T07:51:30+0000 startup service version
Setting druid.host=172.17.0.2 in /runtime.properties
cat: can't open '/jvm.config': No such file or directory
2020-08-06T07:51:32,517 INFO [main] org.hibernate.validator.internal.util.Version - HV000001: Hibernate Validator 5.2.5.Final
2020-08-06T07:51:33,503 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-hdfs-storage], jars: jackson-annotations-2.10.2.jar, hadoop-mapreduce-client-common-2.8.5.jar, httpclient-4.5.10.jar, htrace-core4-4.0.1-incubating.jar, apacheds-kerberos-codec-2.0.0-M15.jar, jackson-mapper-asl-1.9.13.jar, commons-digester-1.8.jar, jetty-sslengine-6.1.26.jar, jackson-databind-2.10.2.jar, api-asn1-api-1.0.0-M20.jar, ion-java-1.0.2.jar, hadoop-mapreduce-client-shuffle-2.8.5.jar, asm-7.1.jar, jsp-api-2.1.jar, druid-hdfs-storage-0.19.0.jar, api-util-1.0.3.jar, json-smart-2.3.jar, jackson-core-2.10.2.jar, hadoop-client-2.8.5.jar, httpcore-4.4.11.jar, commons-collections-3.2.2.jar, hadoop-hdfs-client-2.8.5.jar, hadoop-annotations-2.8.5.jar, hadoop-auth-2.8.5.jar, xmlenc-0.52.jar, aws-java-sdk-s3-1.11.199.jar, commons-net-3.6.jar, nimbus-jose-jwt-4.41.1.jar, hadoop-common-2.8.5.jar, jackson-dataformat-cbor-2.10.2.jar, hadoop-yarn-server-common-2.8.5.jar, accessors-smart-1.2.jar, gson-2.2.4.jar, commons-configuration-1.6.jar, joda-time-2.10.5.jar, hadoop-aws-2.8.5.jar, aws-java-sdk-core-1.11.199.jar, commons-codec-1.13.jar, hadoop-mapreduce-client-app-2.8.5.jar, hadoop-yarn-api-2.8.5.jar, aws-java-sdk-kms-1.11.199.jar, jackson-core-asl-1.9.13.jar, curator-recipes-4.3.0.jar, hadoop-mapreduce-client-jobclient-2.8.5.jar, jcip-annotations-1.0-1.jar, jmespath-java-1.11.199.jar, hadoop-mapreduce-client-core-2.8.5.jar, commons-logging-1.1.1.jar, leveldbjni-all-1.8.jar, curator-framework-4.3.0.jar, hadoop-yarn-client-2.8.5.jar, apacheds-i18n-2.0.0-M15.jar
2020-08-06T07:51:33,524 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-kafka-indexing-service], jars: lz4-java-1.7.1.jar, kafka-clients-2.5.0.jar, druid-kafka-indexing-service-0.19.0.jar, zstd-jni-1.3.3-1.jar, snappy-java-1.1.7.3.jar
2020-08-06T07:51:33,526 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-datasketches], jars: druid-datasketches-0.19.0.jar, commons-math3-3.6.1.jar
ERROR!!!!
Found unexpected parameters: [version]
===
usage: druid <command> [<args>]
The most commonly used druid commands are:
help Display help information
index Run indexing for druid
internal Processes that Druid runs "internally", you should rarely use these directly
server Run one of the Druid server types.
tools Various tools for working with Druid
version Returns Druid version information
See 'druid help <command>' for more information on a specific command
A: So I see a few things here:
*
*docker run apache/druid:0.19.0 means "fire and forget": if there is no long-running service inside, the container shuts down shortly after start.
To interact with the container, start it with the "-it" flags.
To let it run unattended, start it with the "-d" flag for detached mode.
You can find information about this here: https://docs.docker.com/engine/reference/run/
*You have to check the start command.
Whatever you write after the image name is the start command (in your case "version"); it is run as if you typed it into the running shell afterwards (just "version").
In addition, even if you don't add a startup command, there may be a default one in the Druid Dockerfile.
You can see the Dockerfile of your selected image on Docker Hub, like here:
https://hub.docker.com/layers/apache/druid/0.19.0/images/sha256-eb2a4852b4ad1d3ca86cbf4c9dc7ed9b73c767815f187eb238d2b80ca26dfd9a?context=explore
There you see that the start command (within a Dockerfile this is called the ENTRYPOINT) is a shell script:
ENTRYPOINT ["/druid.sh"]
So the "version" you write after your run command is passed as an argument to that script, which does not understand it; we should not do that :)
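If you just want to poke around inside the image or run the druid CLI by hand, one option is to override the entrypoint (a sketch; the exact shell available inside the image is an assumption):
docker run --rm -it --entrypoint /bin/sh apache/druid:0.19.0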
|