doc_1900
|
I'm running a lambda function after every user auth to hit a third-party API and pull back data relevant to that user. The entities that come back may already be stored, but related to another user. I'm assuming there's a better way than attempting to Create, catching an error, and then attempting to Fetch & Update with the new user appended to the users field?!
type Event
@model
@auth(rules: [
{allow: public, provider: apiKey, operations: [read, create, update, delete]}
{allow: owner, ownerField: "users"}
])
@key(fields: ["venue", "date"])
{
id: ID!
venue: String!
date: AWSDate!
ref: String!
users: [String]!
}
Any help massively appreciated (even just good resources to read up on writing resolvers - looking at the generated Mutation.updateEvent.req.vtl file for inspiration is a bit intimidating)
A: You need to override the generated resolver for the update mutation.
Just copy the content of the autogenerated resolver and make your changes.
The file name will look like this:
<project-root>/amplify/backend/api/<api-name>/build/resolvers/<TypeName>.<FieldName>.<req/res>.vtl
To override it, copy this file to:
<project-root>/amplify/backend/api/<api-name>/resolvers/<TypeName>.<FieldName>.<req/res>.vtl
For example: amplify/backend/api/blog/build/resolvers/Mutation.updatePost.req.vtl
Then remove these lines:
## Begin - key condition **
#if( $ctx.stash.metadata.modelObjectKey )
#set( $keyConditionExpr = {} )
#set( $keyConditionExprNames = {} )
#foreach( $entry in $ctx.stash.metadata.modelObjectKey.entrySet() )
$util.qr($keyConditionExpr.put("keyCondition$velocityCount", {
"attributeExists": true
}))
$util.qr($keyConditionExprNames.put("#keyCondition$velocityCount", "$entry.key"))
#end
$util.qr($ctx.stash.conditions.add($keyConditionExpr))
#else
$util.qr($ctx.stash.conditions.add({
"id": {
"attributeExists": true
}
}))
#end
That part of the VTL code adds a condition requiring that the item (identified by its id or key) already exists before the update is applied; with it removed, the update can create the item if it is missing.
| |
doc_1901
|
My result table should look like the attached screenshot, representing the presence of a certain value in each group. table1 is very large, and I don't want to use any cursors or loops. Please suggest a better way to do it in SQL.
A: Use conditional aggregation. Here is one approach:
select column1,
       (case when sum(case when column2 = 'C' then 1 else 0 end) > 0
             then 1
        end) as has_c,
       (case when sum(case when column2 = 'C' then 1 else 0 end) = 0
             then 1
        end) as does_not_have_c
from table1 t1
group by column1;
Or more simply:
select column1,
       max(case when column2 = 'C' then 1 end) as has_c,
       min(case when column2 = 'C' then 0 else 1 end) as does_not_have_c
from table1 t1
group by column1;
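The conditional-aggregation trick is not SQL-specific; as a sanity check, the same per-group flags can be sketched in Python over hypothetical rows (note the sketch returns 0 where the SQL version would return NULL):

```python
from collections import defaultdict

# Hypothetical (column1, column2) rows standing in for table1.
rows = [("g1", "A"), ("g1", "C"), ("g2", "B"), ("g2", "D")]

# Group column2 values by column1, mirroring GROUP BY column1.
groups = defaultdict(list)
for c1, c2 in rows:
    groups[c1].append(c2)

# has_c mirrors MAX(CASE WHEN column2 = 'C' THEN 1 END);
# does_not_have_c mirrors MIN(CASE WHEN column2 = 'C' THEN 0 ELSE 1 END).
flags = {
    c1: {"has_c": int("C" in vals), "does_not_have_c": int("C" not in vals)}
    for c1, vals in groups.items()
}
```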
| |
doc_1902
|
'uploadedFile.CheckInComment' threw an exception of type
'Microsoft.SharePoint.Client.PropertyOrFieldNotInitializedException'
I am not sure why, as the context should be OK since it did upload the file.
I am going to try to update some metadata fields on the document I just uploaded.
Folder currentRunFolder = site.GetFolderByServerRelativeUrl(barRootFolderRelativeUrl + "/");
FileCreationInformation newFile = new FileCreationInformation
{
Content = System.IO.File.ReadAllBytes(@p),
Url = Path.GetFileName(@p),
Overwrite = true
};
currentRunFolder.Files.Add(newFile);
currentRunFolder.Update();
context.ExecuteQuery();
newUrl = siteUrl + barRootFolderRelativeUrl + "/" + Path.GetFileName(@p);
// Set document properties
Microsoft.SharePoint.Client.File uploadedFile = context.Web.GetFileByServerRelativeUrl(newUrl);
ListItem listItem = uploadedFile.ListItemAllFields;
listItem["TestEQCode"] = "387074";
listItem.Update();
context.ExecuteQuery();
A: Could you try this?
currentRunFolder.Files.Add(newFile);
//currentRunFolder.Update();
context.Load(newFile);
context.ExecuteQuery();
//newUrl = siteUrl + barRootFolderRelativeUrl + "/" + Path.GetFileName(@p);
// Set document properties
//Microsoft.SharePoint.Client.File uploadedFile = context.Web.GetFileByServerRelativeUrl(newUrl);
ListItem listItem = newFile.ListItemAllFields;
listItem["TestEQCode"] = "387074";
listItem.Update();
context.ExecuteQuery();
A: OK, so even though ListItemAllFields is null, I can still set TestEQCode and call Update(), and the field is getting updated on the SharePoint side. This whole time I was concerned about ListItemAllFields getting the actual metadata list, but it really doesn't need that. I just have to hard-code those items and it will update.
| |
doc_1903
|
Helper is our custom C# project including all our custom functions and properties used by the reports. I found a post that explained that I must put a copy of the dll into the folder: C:\Program Files (x86)\Microsoft Visual Studio xx.0\Common7\IDE\PrivateAssemblies\ depending on the VS version. But, I still get the same error.
I also checked the namespace, but in fact nothing has changed in years.
Any idea?
Regards,
Pascal
| |
doc_1904
|
I'm using
com.evernote:android-sdk:2.0.0-RC4
I've followed the official guide https://github.com/evernote/evernote-sdk-android
Should I change
The error is:
org.scribe.exceptions.OAuthException: Response body is incorrect. Can't extract token and secret from this: '<html>
<head>
<script>
(function() {
var request = new XMLHttpRequest();
request.open('GET', '/IsLoggedIn.action', true);
request.onload = function() {
if (this.status === 403) {
window.location = '/Login.action?targetUrl='
+ encodeURIComponent(window.location.pathname);
}
};
request.send();
})();
</script>
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=9,chrome=1" />
<meta name="viewport" content="width=device-width,initial-scale=1" />
<link rel="Shortcut Icon" href="/favicon.ico?v2" type="image/x-icon" />
<link rel="stylesheet" href="/redesign/global/css/reset.css" />
<link rel="stylesheet" href="/redesign/global/css/fonts.css" media="all" />
<link rel="stylesheet" href="/redesign/global/css/header.css" />
<link rel="stylesheet" href="/redesign/global/css/layout.css" />
<title>Evernote Error</title>
</head>
<body>
<div class="header">
<div class="header-inner">
<a href="https://evernote.com/" class="evernote-logo"><svg xmlns="http://www.w3.org/2000/svg" width="160" height="36" viewBox="0 0 160 36">
<title>Evernote</title>
<g fill="none" fill-rule="evenodd" transform="translate(-14.89 -14.89)">
<rect width="189.752" height="65.168" y=".117"/>
<g fill="#fff" fill-rule="nonzero" transform="translate(14.89 14.89)">
<g transform="translate(38.54 5.84)">
<path class="evernote-logo-2018-text" d=""/>
</g>
<path class="evernote-logo-2018-elephant" d=""/>
</g>
</g>
</svg></a></div>
</div>
<div id="container-boundingbox" class="wrapper">
<div id="container" class="wrapper">
<div class="main">
<div class="page-header">
<h1>
Oops, we encountered an error.</h1>
</div>
<div>
<p>
Sorry, we've encountered an unexpected error.</p>
</div>
<div class="clear"></div>
</div>
</div>
<div class="footer wrapper">
<a href="https://evernote.com/tos/" class="footer-entry">Terms of Service</a><a href="https://evernote.com/privacy/" class="footer-entry">Privacy Policy</a><span class="footer-entry last">Copyright 2018 Evernote Corporation. All rights reserved.</span>
</div>
</div>
</body>
</html>
'
at org.scribe.extractors.TokenExtractorImpl.extract(TokenExtractorImpl.java:41)
at org.scribe.extractors.TokenExtractorImpl.extract(TokenExtractorImpl.java:27)
at org.scribe.oauth.OAuth10aServiceImpl.getRequestToken(OAuth10aServiceImpl.java:64)
at org.scribe.oauth.OAuth10aServiceImpl.getRequestToken(OAuth10aServiceImpl.java:40)
at org.scribe.oauth.OAuth10aServiceImpl.getRequestToken(OAuth10aServiceImpl.java:45)
at com.evernote.client.android.EvernoteOAuthHelper.createRequestToken(EvernoteOAuthHelper.java:106)
at com.evernote.client.android.EvernoteOAuthHelper.startAuthorization(EvernoteOAuthHelper.java:127)
at com.evernote.client.android.login.EvernoteLoginTask.startAuthorization(EvernoteLoginTask.java:144)
at com.evernote.client.android.login.EvernoteLoginTask.execute(EvernoteLoginTask.java:51)
at com.evernote.client.android.login.EvernoteLoginTask.execute(EvernoteLoginTask.java:23)
at net.vrallev.android.task.Task.executeInner(Task.java:67)
at net.vrallev.android.task.TaskExecutor$TaskRunnable.run(TaskExecutor.java:191)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
at java.lang.Thread.run(Thread.java:764)
A: Just in case anyone stumbles upon the same issue: I contacted Evernote, and it turned out the key was not activated on production, although I was told it was.
| |
doc_1905
|
*Some sensor on a Raspberry Pi provides real-time data.
*Some application takes the data and pushes it to a time series database.
*If the network is off (GSM modem ran out of money, rain or something else), store the data locally.
*Once the network is available, the data should be synchronised to the time series database in the cloud, so there is no missing data and no duplicates.
*(Optionally) query the database from Grafana.
I'm looking for a time series database that can handle 3 and 4 for me. Is there any?
I could start Prometheus in federated mode (can I?) and keep one node on the Raspberry Pi for initial ingestion and another node in the cloud for collecting the data. But that setup would instantly consume 64 MB+ of memory for the Prometheus node.
A: Take a look at vmagent. It can be installed on every device where metrics from local sensors must be collected (e.g. at the edge), and can collect all these metrics via various popular data ingestion protocols. It can then push the collected metrics to a centralized time series database such as VictoriaMetrics. Vmagent buffers the collected metrics in local storage while the connection to the centralized database is unavailable, and pushes the buffered data to the database as soon as the connection is recovered. Vmagent works on the Raspberry Pi and on any device with an ARM, ARM64 or AMD64 architecture.
See use cases for vmagent for more details.
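The store-and-forward behaviour described above can be modelled in a few lines. This is a toy sketch, not vmagent's actual implementation; `send` is an assumed callable that raises OSError while the network is down:

```python
class BufferedPusher:
    """Toy store-and-forward pusher: buffer locally on failure, flush later."""

    def __init__(self, send):
        self.send = send      # callable that raises OSError while the network is down
        self.buffer = []      # stands in for vmagent's on-disk buffer

    def push(self, sample):
        self.buffer.append(sample)
        self.flush()

    def flush(self):
        # Drain in arrival order so the database sees no gaps and no duplicates.
        while self.buffer:
            try:
                self.send(self.buffer[0])
            except OSError:
                return            # network still down; keep samples buffered
            self.buffer.pop(0)    # drop a sample only after a confirmed send
```

Sending before removing from the buffer is what guarantees "no missing data"; removing only after a successful send is what keeps each sample from being lost mid-flush.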
| |
doc_1906
|
I also tried with Postman, but it gives the same error.
@RestController
@RequestMapping()
public class REST {
@GetMapping("/tweet")
public void noticiaPost(){
String consumerKey = "AVlNWQzs3FYQ2DHsdDWY***";
String consumerSecret = "UL1PCQzBxm9kkEVDte3dgbgzgT7A*******";
String accessToken = "1281888984361308163-zqxjCD********";
String accessTokenSecret = "hMwcViNClKKjDj1TH751*****";
TwitterTemplate twitterTemplate =
new TwitterTemplate(consumerKey, consumerSecret, accessToken, accessTokenSecret);
twitterTemplate.timelineOperations().updateStatus("hola buenos dias");
}
application.properties
Application properties where I put the consumer keys and access token from the Twitter developer portal:
SpringAtSO.consumerKey=AVlNWQzs3FYQ*****
SpringAtSO.consumerSecret=UL1PCQzBxm9kkEVDt*****
SpringAtSO.accessToken=AAAAAAAAAAAAAAAAAAAAALk2WILF6Wr*******
ERROR
I don't understand this; I put all the keys in the REST controller.
2021-01-18 10:50:18.214 WARN 4396 --- [nio-8080-exec-1] o.a.h.c.protocol.ResponseProcessCookies : Invalid cookie header: "set-cookie: personalization_id="v1_wttGCbR3euCsc75cUIKYPQ=="; Max-Age=63072000; Expires=Wed, 18 Jan 2023 09:50:17 GMT; Path=/; Domain=.twitter.com; Secure; SameSite=None". Invalid 'expires' attribute: Wed, 18 Jan 2023 09:50:17 GMT
2021-01-18 10:50:18.214 WARN 4396 --- [nio-8080-exec-1] o.a.h.c.protocol.ResponseProcessCookies : Invalid cookie header: "set-cookie: guest_id=v1%3A161096341741656047; Max-Age=63072000; Expires=Wed, 18 Jan 2023 09:50:17 GMT; Path=/; Domain=.twitter.com; Secure; SameSite=None". Invalid 'expires' attribute: Wed, 18 Jan 2023 09:50:17 GMT
ERROR: 401 UNAUTHORIZED :: Could not authenticate you.
2021-01-18 10:50:18.240 ERROR 4396 --- [nio-8080-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.social.MissingAuthorizationException: Authorization is required for the operation, but the API binding was created without authorization.] with root cause
org.springframework.social.MissingAuthorizationException: Authorization is required for the operation, but the API binding was created without authorization.
at org.springframework.social.twitter.api.impl.TwitterErrorHandler.handleClientErrors(TwitterErrorHandler.java:100) ~[spring-social-twitter-1.1.2.RELEASE.jar:1.1.2.RELEASE]
at org.springframework.social.twitter.api.impl.TwitterErrorHandler.handleError(TwitterErrorHandler.java:60) ~[spring-social-twitter-1.1.2.RELEASE.jar:1.1.2.RELEASE]
at org.springframework.web.client.ResponseErrorHandler.handleError(ResponseErrorHandler.java:63) ~[spring-web-5.2.9.RELEASE.jar:5.2.9.RELEASE]
at org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:782) ~[spring-web-5.2.9.RELEASE.jar:5.2.9.RELEASE]
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:740) ~[spring-web-5.2.9.RELEASE.jar:5.2.9.RELEASE]
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:714) ~[spring-web-5.2.9.RELEASE.jar:5.2.9.RELEASE]
at org.springframework.web.client.RestTemplate.postForObject(RestTemplate.java:440) ~[spring-web-5.2.9.RELEASE.jar:5.2.9.RELEASE]
at org.springframework.social.twitter.api.impl.TimelineTemplate.updateStatus(TimelineTemplate.java:198) ~[spring-social-twitter-1.1.2.RELEASE.jar:1.1.2.RELEASE]
at org.springframework.social.twitter.api.impl.TimelineTemplate.updateStatus(TimelineTemplate.java:160) ~[spring-social-twitter-1.1.2.RELEASE.jar:1.1.2.RELEASE]
at com.Twitter.controller.REST.noticiaPost(REST.java:32) ~[classes/:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64) ~[na:na]
| |
doc_1907
|
I know Notes questions don't have a high reply rate over here, but I'll give it a try anyway.
Thanks!
A: On Windows Notes clients greater than version 6, there is support for a Notes:\ URL scheme to launch documents. You can construct a URL dynamically in .Net that points to the user's mail database and opens a new mail form.
http://www.dominoguru.com/pages/LotusNotes_notesURLs.html has more details, but essentially it is of the form Notes:\server\database\0\memo?OpenForm
A: Any reason you can't just use a mailto call in your code? Assuming that Lotus Notes is the registered mail handler on the client system, you should be able to pass in the body attribute and wot-not…
A: The Lotus Domino Objects (Interop.Domino.dll) don't have access to the Notes UI. You would need to use the deprecated, late-bound Lotus Notes Automation classes. Warning: they're crashy, which is one of the reasons they've been deprecated for more than ten years (since the release of Lotus Notes and Domino R5.0.2c).
A: I finally did use mailto. Here's the code (note that the first parameter is introduced with ? and subsequent ones with &):
Public Shared Sub OuvrirNouveauMessage(ByVal destinataire As String, ByVal sujet As String, ByVal corpsCourriel As String)
Dim sFile As String = "mailto:" & destinataire & _
"?subject=" & sujet & _
"&body=" & corpsCourriel
If sFile.Length > 2050 Then
sFile = sFile.Substring(0, 2050)
End If
System.Diagnostics.Process.Start(sFile)
End Sub
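For comparison, here is how the same mailto URL could be assembled with percent-encoding and the usual ?/& parameter separators (a Python sketch; the 2050-character cap mirrors the truncation above, and the example address is made up):

```python
from urllib.parse import quote

def build_mailto(recipient, subject, body, limit=2050):
    # '?' introduces the first parameter and '&' separates later ones;
    # quote() percent-encodes spaces, accents and line breaks.
    url = "mailto:{}?subject={}&body={}".format(
        recipient, quote(subject), quote(body))
    # Very long command lines are rejected by some shells, hence the cap.
    return url[:limit]
```

For example, `build_mailto("jean@example.com", "Hi there", "line 1")` yields `mailto:jean@example.com?subject=Hi%20there&body=line%201`.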
| |
doc_1908
|
Example
[1] => Apple
    [0] => Stem
    [1] => Leaf
[2] => Orange
I'd only like to download the data under Apple.
Thanks!
A: This is not possible if you don't have control over the API.
If you have control over the API, you could let it accept additional parameters, which are evaluated by the webservice and have to be added to your HTTP request. So a parameter &details=apple could be evaluated in the backend like this
details = extractFromRequestParams("details");
if (details==="apple") {
printOutDetailsForApple();
} else {
printOutEverything();
}
In case you do not have control over the API, your cURL request will always perform the whole request, grab everything from the answer, and only afterwards give you access to that answer.
When you're talking about rather large responses, it could be worth replacing the cURL library with something slightly lower level that gives you more control over reading from the HTTP response. You would then process parts of the answer as they come in, and could stop reading once you have everything you need from the response, probably saving a few bytes from being transmitted. You can't skip parts at the beginning, of course, so the usefulness of this approach depends on where the data that is important to you sits within the whole answer. It also requires constant flushing on the server side, which you don't have control over.
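The "stop reading once you have what you need" idea can be sketched with any chunked stream. This toy reader (not cURL; the tag names are made up) accumulates chunks only until the closing marker of the wanted section appears, then stops consuming the stream:

```python
import io

def read_until(stream, start, end, chunk_size=64):
    """Read chunk by chunk; stop as soon as the section between start/end is complete."""
    buf = b""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            return None                    # markers never appeared
        buf += chunk
        s = buf.find(start)
        if s != -1:
            e = buf.find(end, s + len(start))
            if e != -1:
                # Section complete: any remaining bytes are never read.
                return buf[s + len(start):e]
```

With a 552-byte stream whose wanted section sits in the first 64-byte chunk, only one chunk is ever read; the rest of the payload is never transferred into the program.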
| |
doc_1909
|
cloud.getRemoteAccessUsers is an asynchronous method, and I can't change it in the service.
I want to create a guard that checks whether the user exists; I'll paste a code snippet.
I expected console.log(test) to show an array, but it only logs a ZoneAwarePromise.
canActivate(route: ActivatedRouteSnapshot): boolean {
this.lock = this.cloud.getLocks().find(x => +x.Id === +route.paramMap.get('id'));
let test = this.cloud.getRemoteAccessUsers(this.lock).then((data) => {
return data;
});
console.log(test);
return false;
}
A: The CanActivate interface can also return a Promise<boolean> or an Observable<boolean>.
You can return your promise directly:
canActivate(route: ActivatedRouteSnapshot): Promise<boolean> {
this.lock = this.cloud.getLocks().find(x => +x.Id === +route.paramMap.get('id'));
return this.cloud.getRemoteAccessUsers(this.lock).then((data) => {
return data;
});
}
If data isn't a boolean, you can write the logic in the callback to return a boolean.
Your current log for test will show ZoneAwarePromise because it's an async call, so you are logging the promise and not the value that the promise returns.
If it's the value you want, you can either log the data inside the .then:
return test = this.cloud.getRemoteAccessUsers(this.lock).then((data) => {
console.log(data)
return data;
});
You could also use async/await:
async canActivate(route: ActivatedRouteSnapshot): Promise<boolean> {
this.lock = this.cloud.getLocks().find(x => +x.Id === +route.paramMap.get('id'));
const test = await this.cloud.getRemoteAccessUsers(this.lock);
console.log(test);
return test;
}
A: You are returning the Promise itself, not the data.
If you want to resolve the Promise right inside the canActivate method, move your test assignment inside then:
canActivate(route: ActivatedRouteSnapshot): boolean {
this.lock = this.cloud.getLocks().find(x => +x.Id === +route.paramMap.get('id'));
this.cloud.getRemoteAccessUsers(this.lock).then((data) => {
let test = data;
console.log(test);
});
return false;
}
| |
doc_1910
|
<table class="table table-bordered table-hover table-striped ">
<thead>
<tr>
<th>Flight No</th>
<th>Flight destination</th>
<th>Flight origin</th>
<th>Flight date</th>
<th>Flight time</th>
<th>Book now</th>
</tr>
</thead>
<tbody>
<form:form commandName="reserv" cssClass="form-horizontal">
<c:forEach items="${flightInfos}" var="flightInfo">
<tr>
<td>${flightInfo.flightNo}</td>
<td>${flightInfo.destination}</td>
<td>${flightInfo.origin}</td>
<td>${flightInfo.flightDate}</td>
<td>${flightInfo.flightTime}</td>
<td><input type="submit" value="Book now" class="btn btn-primary"></td>
</tr>
</c:forEach>
</form:form>
</tbody>
</table>
In this table, I want to check the user is logged or not when user click on "Book now" button. It means everyone can see this page but they need to log into the system to book a flight. Here is my security.xml file.
<http use-expressions="true">
<intercept-url pattern="/users**" access="hasRole('ROLE_ADMIN')" />
<intercept-url pattern="/users/**" access="hasRole('ROLE_ADMIN')" />
<intercept-url pattern="/account**" access="hasRole('ROLE_USER')" />
<intercept-url pattern="/reservation**" access="hasRole('ROLE_USER')" />
<form-login login-page="/login.html"/>
<csrf disabled="true"/>
<logout logout-url="/logout" />
</http>
How can I check whether the user is logged or not, when he clicks on "Book now" button ? Is there any simple way to do this ?
A: "How can I check whether the user is logged in or not when he clicks on the "Book now" button? Is there any simple way to do this?"
When the submit button is pressed, the form will be posted (as a POST) to the URL you have given it, for example:
<form action="/myURL" method="POST">
First name: <input type="text" name="fname"><br>
Last name: <input type="text" name="lname"><br>
<input type="submit" value="Submit">
</form>
Just change the URL to something that is restricted in your security XML, something like:
<intercept-url pattern="/postformURL" access="hasRole('ROLE_USER')" />
Then, when the request reaches your controller mapping for that URL, it will only have got there if it passed Spring Security.
| |
doc_1911
|
The main thread is doing too much work. Can anyone please help me improve my navigation drawer performance and get smooth scrolling?
When I use a profile pic in nav_header_main.xml I face this problem; without the profile pic it scrolls smoothly.
Here is my error:
Skipped 42 frames! The application may be doing too much work on its main thread.
MainActivity.class
public class MainActivity extends AppCompatActivity
implements NavigationView.OnNavigationItemSelectedListener {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
setSupportActionBar(toolbar);
DrawerLayout drawer = (DrawerLayout) findViewById(R.id.drawer_layout);
ActionBarDrawerToggle toggle = new ActionBarDrawerToggle(
this, drawer, toolbar, R.string.navigation_drawer_open, R.string.navigation_drawer_close);
drawer.setDrawerListener(toggle);
toggle.syncState();
NavigationView navigationView = (NavigationView) findViewById(R.id.nav_view);
navigationView.setNavigationItemSelectedListener(this);
}
@Override
public void onBackPressed() {
DrawerLayout drawer = (DrawerLayout) findViewById(R.id.drawer_layout);
if (drawer.isDrawerOpen(GravityCompat.START)) {
drawer.closeDrawer(GravityCompat.START);
} else {
super.onBackPressed();
}
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu; this adds items to the action bar if it is present.
getMenuInflater().inflate(R.menu.main, menu);
return true;
}
@Override
public boolean onOptionsItemSelected(MenuItem item) {
// Handle action bar item clicks here. The action bar will
// automatically handle clicks on the Home/Up button, so long
// as you specify a parent activity in AndroidManifest.xml.
int id = item.getItemId();
//noinspection SimplifiableIfStatement
if (id == R.id.action_settings) {
return true;
}
return super.onOptionsItemSelected(item);
}
@SuppressWarnings("StatementWithEmptyBody")
@Override
public boolean onNavigationItemSelected(MenuItem item) {
// Handle navigation view item clicks here.
int id = item.getItemId();
if (id == R.id.nav_camara) {
// Handle the camera action
} else if (id == R.id.nav_gallery) {
} else if (id == R.id.nav_slideshow) {
} else if (id == R.id.nav_manage) {
} else if (id == R.id.nav_share) {
} else if (id == R.id.nav_send) {
}
DrawerLayout drawer = (DrawerLayout) findViewById(R.id.drawer_layout);
drawer.closeDrawer(GravityCompat.START);
return true;
}
}
nav_header_main.xml
android:layout_width="match_parent"
android:layout_height="190dp"
android:background="@drawable/bghdpi"
android:orientation="vertical"
>
<ImageView
xmlns:app="http://schemas.android.com/apk/res-auto"
android:id="@+id/profile_image"
android:layout_width="100dp"
android:layout_height="100dp"
android:src="@drawable/flag"
android:layout_centerVertical="true"
android:layout_centerHorizontal="true" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Comp. Sci. Tutorials"
android:textSize="14sp"
android:textColor="#FFF"
android:textStyle="bold"
android:gravity="center"
android:paddingBottom="4dp"
android:id="@+id/username"
android:layout_alignParentRight="true"
android:layout_alignParentEnd="true"
android:layout_above="@+id/email"
android:layout_alignParentLeft="true"
android:layout_alignParentStart="true" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="[email protected]"
android:id="@+id/email"
android:gravity="center"
android:layout_marginBottom="8dp"
android:textSize="14sp"
android:textColor="#fff"
android:layout_alignParentBottom="true"
android:layout_alignLeft="@+id/username"
android:layout_alignStart="@+id/username"
android:layout_alignParentRight="true"
android:layout_alignParentEnd="true" />
A: From your main thread, start a new thread and let that do all the heavy work (such as loading the profile image). The main thread should only initialize your app, not perform any other long-running logic.
| |
doc_1912
|
start https://www.example.com/
But it flashes the console so I made a VBScript file to hide it:
Set WshShell = WScript.CreateObject("WScript.Shell")
WshShell.Run "test.bat", 0
But I only wanted 1 file so I changed it to:
Set WshShell = WScript.CreateObject("WScript.Shell")
WshShell.Run "cmd /c start https://www.example.com/", 0
Now it does what I want. The only problem I found is that apparently the process doesn't end (doesn't stop running) until I close the opened browser.
If I run it and leave the browser open, I can delete the file but not the directory/folder; I get an error saying that it's not possible because the folder, or a file in it, is open in another program.
Adding the exit command didn't solve the issue either.
I don't know much about DOS commands/batch files or VBScript.
A: Why not just launch the browser directly instead of going through a batch file?
Set WshShell = WScript.CreateObject("WScript.Shell")
WshShell.Run "iexplore http://www.stackoverflow.com", 0
| |
doc_1913
|
I am getting this error when trying to load a Python script in xchat IRC. I have several other Python scripts which used the xchat module just fine, but this one script seems to be the only one giving me the error. Why does this happen and how can I fix it?
Also, this is happening with another script as well when I run python script.py install. But for other scripts, it has worked fine.
A: Since you haven't provided the actual stack traces that cause the problem, it's hard to say for sure where the problem is. It's likely that it's being caused by an import xchat statement somewhere -- but it'd be reassuring to see that trace, so please edit your question.
When you know which line is causing the problem, then put this line before that line:
print 'System path:', '\n\t'.join(sys.path)
(you'll need to import sys somewhere above that, if you haven't already).
That will print out Python's module search path (sys.path, which is seeded from $PYTHONPATH). Look in that list and make sure the directory that contains the xchat module is present. If it's not, then that's your problem: it's likely that something somewhere is either changing or not initializing $PYTHONPATH before the invocation of python.
For sanity, do the same thing for the scripts that do work to see if the path is behaving correctly in that case.
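In Python 3 syntax (the snippet above is Python 2, which xchat's embedded interpreter used), the same diagnostic plus the actual membership check looks like this; the plugin directory is a made-up example path:

```python
import sys

def module_dir_on_path(directory):
    """Print the search path and report whether `directory` is on it."""
    print('System path:\n\t' + '\n\t'.join(sys.path))
    return directory in sys.path

# Simulate fixing the problem by prepending the (hypothetical) plugin dir,
# which is what setting $PYTHONPATH before launching python would do.
sys.path.insert(0, '/tmp/xchat-plugins')
```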
| |
doc_1914
|
regex:/^[\pL\s\-',.0-9]+$/u
Am I getting the right idea with this? I'm a little confused because it still accepts number inputs.
A: I guess you mean that the input must not be only digits. You could use (*SKIP)(*FAIL) here:
^\d+$(*SKIP)(*FAIL)|^[-\pL ',.\d]+$
See a demo on regex101.com.
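(*SKIP)(*FAIL) are PCRE backtracking verbs; in engines without them (Python's built-in re module, for instance) the same "reject digits-only input" rule can be expressed with a negative lookahead instead. A sketch, substituting an ASCII letter class for \pL:

```python
import re

# Fail fast if the whole input is digits; otherwise allow letters,
# digits, whitespace, hyphens, apostrophes, commas and periods.
pattern = re.compile(r"^(?!\d+$)[-A-Za-z\s',.0-9]+$")

print(bool(pattern.match("O'Brien-Smith, 3rd")))  # True: letters plus digits
print(bool(pattern.match("12345")))               # False: digits only
```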
| |
doc_1915
|
Name of stored procedure: dbo.sp_ins_output_str
Date overwritten: 2014-06-26 (yesterday)
I have a SQL Server stored procedure that was overwritten by accident and executed. How can I recover/roll back this stored procedure to its original state?
A: There's no "Undo Button" per se, but you can get it back using the following method.
*
*Restore the database to a different location, recovering to a time
prior to the alter being executed
*Extract the DDL of that SP from the restored location
*Alter the current erroneous SP using the DDL taken in Step 2
*Drop the restored database
*Feel heart rate decrease
Though if you have no backup, you're out of luck unless you have a VCS (version control system) that contains all your DDL.
| |
doc_1916
|
componentDidMount cannot do the trick, because I want to fetch data as soon as possible, even before the screen comes into focus, each time the user navigates to my screen.
A: You could do something like:
/**
* Your redux action should set loading-state to false after
* fetch-operation is done...
*/
reduxActionDoingFetch();
useEffect(() => {
if (reduxActionDoingFetch_Loading === false) {
Navigation.navigate('YourTargetScreen');
}
}, [reduxActionDoingFetch_Loading]);
A: You CANNOT fetch data before componentDidMount. That's literally the first thing that happens when a new screen is rendered.
Regarding fetching data each time when the screen is focused, you need to use focus event as mentioned in the migration guide: https://reactnavigation.org/docs/upgrading-from-4.x/#navigation-events
The focus event is equivalent to willFocus from before. It fires as soon as the screen is focused, before the animation finishes, so I'm not sure what you mean by it firing too late.
Also, for data fetching, there is a special hook: useFocusEffect
https://reactnavigation.org/docs/use-focus-effect/
A: React.useEffect(() => {
const unsubscribe = navigation.addListener('focus', () => {
// do something
});
return unsubscribe;
}, [navigation]);
| |
doc_1917
|
select city, CY_date, sum(sales) as cy_sales from table1 where date > trunc(sysdate-90,'DAY') group by city, date
Consider today is 19/04/2022, a Tuesday; then this will give me the data from 17/01/2022 (this takes the last 90 days of data and includes all the days in the first week).
I want to add the sales value for these cities for a similar time last year (as shown in py_date and py_sales). Table1 has the data for the last 10 years. "Similar time last year" means comparing this year's Monday with the same-period Monday last year. So the data for 18/04/2022 will be compared against 19/04/2021 (it will be sysdate-364; let me know if there is a better way, as this is going to fail in a leap year).
The main question here is how I can add the last two columns to the same table, as displayed in the screenshot, in an efficient manner.
A: You can use an analytic function with a RANGE window to find the previous year's data:
SELECT *
FROM (
SELECT city,
dt AS cy_date,
sales AS cy_sales,
ADD_MONTHS(dt, -12) AS py_date,
SUM(sales) OVER (
PARTITION BY city
ORDER BY dt
RANGE BETWEEN INTERVAL '1' YEAR PRECEDING
AND INTERVAL '1' YEAR PRECEDING
) AS py_sales
FROM (
SELECT city,
TRUNC(dt) AS dt,
SUM(sales) AS sales
FROM table_name
WHERE ( dt < TRUNC(SYSDATE) + 1
AND dt >= TRUNC(SYSDATE) - 90)
OR ( dt < TRUNC(ADD_MONTHS(SYSDATE, -12)) + 1
AND dt >= TRUNC(ADD_MONTHS(SYSDATE, -12)) - 90)
GROUP BY city, TRUNC(dt)
)
)
WHERE cy_date < TRUNC(SYSDATE) + 1
AND cy_date >= TRUNC(SYSDATE) - 90
Note: unlike adding an INTERVAL to a date, using an INTERVAL in a range window will work correctly around leap years.
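The leap-year caveat about SYSDATE - 364 is easy to verify outside the database. Subtracting 364 days preserves the weekday (52 whole weeks) but lets the calendar date drift across a leap day, whereas month-based arithmetic, as ADD_MONTHS and the INTERVAL window use, keeps the same day of month. A quick Python check:

```python
from datetime import date, timedelta

def sub_year_by_months(d):
    # Same day of month, previous year; a rough stand-in for ADD_MONTHS(d, -12).
    # (A real version would clamp Feb 29 to Feb 28, as ADD_MONTHS does.)
    return d.replace(year=d.year - 1)

today = date(2024, 3, 1)                 # shortly after a leap day
by_days = today - timedelta(days=364)    # 52 weeks back
by_months = sub_year_by_months(today)    # calendar-aligned previous year

# Weekday is preserved by -364, but the calendar date has drifted by two days.
same_weekday = by_days.weekday() == today.weekday()
```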
Which, for a minimal example:
CREATE TABLE table_name (city, dt, sales) AS
SELECT 'City' || LEVEL, TRUNC(SYSDATE) - LEVEL +1, 100 * LEVEL
FROM DUAL
CONNECT BY LEVEL <= 6
UNION ALL
SELECT 'City' || LEVEL, ADD_MONTHS(TRUNC(SYSDATE) - LEVEL +1, -12), 10 * LEVEL
FROM DUAL
CONNECT BY LEVEL <= 5
Outputs:
CITY    CY_DATE    CY_SALES  PY_DT      PY_SALES
City1   19-APR-22  100       19-APR-21  10
City2   18-APR-22  200       18-APR-21  20
City3   17-APR-22  300       17-APR-21  30
City4   16-APR-22  400       16-APR-21  40
City5   15-APR-22  500       15-APR-21  50
City6   14-APR-22  600       14-APR-21  null
Or, you can generate the totals in a sub-query factoring clause and then join the current year to the previous year using the ADD_MONTHS function:
WITH totals (city, dt, sales) AS (
SELECT city,
TRUNC(dt),
SUM(sales)
FROM table_name
WHERE ( dt < TRUNC(SYSDATE) + 1
AND dt >= TRUNC(SYSDATE) - 90)
OR ( dt < TRUNC(ADD_MONTHS(SYSDATE, -12)) + 1
AND dt >= TRUNC(ADD_MONTHS(SYSDATE, -12)) - 90)
GROUP BY city, TRUNC(dt)
)
SELECT ct.city,
ct.dt AS cy_date,
ct.sales AS cy_sales,
pt.dt AS py_dt,
pt.sales AS py_sales
FROM totals ct
LEFT OUTER JOIN totals pt
ON ( ct.city = pt.city
AND ADD_MONTHS(ct.dt, -12) = pt.dt)
WHERE ct.dt >= TRUNC(SYSDATE) - 90
Which, for the same sample data, outputs:
CITY  | CY_DATE   | CY_SALES | PY_DT     | PY_SALES
City1 | 19-APR-22 | 100      | 19-APR-21 | 10
City2 | 18-APR-22 | 200      | 18-APR-21 | 20
City3 | 17-APR-22 | 300      | 17-APR-21 | 30
City4 | 16-APR-22 | 400      | 16-APR-21 | 40
City5 | 15-APR-22 | 500      | 15-APR-21 | 50
City6 | 14-APR-22 | 600      | null      | null
db<>fiddle here
| |
doc_1918
|
How would I go about creating a 'transparent' weakref.ref, so that I don't have to use the __call__() method and can just use y.value?
class integer():
def __init__(self,value):
self.value = value
x = integer(5)
y = weakref.ref(x)
print(y.__call__().value)
output:
5
A: You would use:
class integer():
def __init__(self,value):
self.value = value
x = integer(5)
y = weakref.proxy(x)
print(y.value)
outputs:
5
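As a side note, calling the ref directly (y() rather than y.__call__()) is the idiomatic spelling, and both a ref and a proxy go "dead" once the referent is collected. A small sketch of the difference (the class name is illustrative; the immediate collection relies on CPython's reference counting):

```python
import weakref

class Integer:
    def __init__(self, value):
        self.value = value

x = Integer(5)
r = weakref.ref(x)    # call it to get the object back: r().value
p = weakref.proxy(x)  # transparent attribute access: p.value

assert r().value == 5
assert p.value == 5

del x  # drop the last strong reference
assert r() is None    # a dead ref returns None
try:
    p.value           # a dead proxy raises instead
    proxy_dead = False
except ReferenceError:
    proxy_dead = True
```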
| |
doc_1919
|
e.g. if I have this series:
series: [{
name:'Art & Architecture',
data: [{x:0,y:46.9,mname:"dhan"},{x:1,y:57.4}]
},{
name:'Business',
data: [{x:0,y:44},{x:1,y:55.8}]
},{
name:'Literature',
data: [{x:0,y:43.5},{x:1,y:52.2}]
},{
name:'Books - Miscellaneous',
data: [{x:0,y:40.7},{x:1,y:45.5}]
},{
name:'Cameras',
data: [{x:0,y:39},{x:1,y:43.5}]
},{
name:'Computers',
data: [{x:0,y:37.5},{x:1,y:41.9}]
},{
name:'Audio Equipment',
data: [{x:0,y:35.2},{x:1,y:39.2}]
}]
I want to show these names on the y-axis.
| |
doc_1920
|
This is what I'm trying to design:
I have one code that can have one helper code or not. In other words, a code can have zero or one helper code.
And also, I could have helper codes that are not related to any code.
These are the POCOs classes:
public class HelperCode
{
public int HelperCodeId { get; set; }
[ ... ]
public virtual Code Code { get; set; }
}
public class Code
{
public int CodeId { get; set; }
[ ... ]
public int? HelperCodeId { get; set; }
public virtual HelperCode HelperCode { get; set; }
}
Yes, I know, I have another question with the same issue. Well, it is not the same question because the answer doesn't work:
class CodeConfiguration : EntityTypeConfiguration<Code>
{
public CodeConfiguration()
{
//other codes
HasOptional(c => c.HelperCode)
.WithRequired(hc => hc.Code).HasForginKey(x=>x.HelperCodeId);
}
}
Because a helper code is not required to have a code, and .WithRequired() doesn't have a .HasForeignKey() method.
Any idea?
A: In EF6 this type of relationship is modeled with Shared Primary Key Association.
Start by removing the HelperCodeId member from Code entity because there will not be FK from Code to HelperCode. Instead, HelperCodeId in HelperCode entity will be both PK and FK to Code.
Then change the Code entity configuration as follows:
HasOptional(c => c.HelperCode)
.WithRequired(hc => hc.Code)
.WillCascadeOnDelete();
| |
doc_1921
|
list2 = ['b','c','d','a']
I have these two unordered lists and want to check if both have EXACTLY the same elements. Don't want to use set() or sorted() methods. But use looping to loop through both lists.
A: Keep it simple, without any helper function or list comprehension:
list1 = ['a','b','c','d']
list2 = ['b','c','d','a']
def same_lists(li1, li2):
    if len(li1) != len(li2): # if the lengths of the lists differ then they are not equal, so return False
        return False
    else:
        for item1 in li1:
            if item1 not in li2:
                return False # if there is one item in li1 that is not in li2 then the lists are not identical, so return False
        for item2 in li2:
            if item2 not in li1:
                return False # same as three rows up
        return True # if we didn't return False for the whole list, then the lists are identical and we can return True.
print (same_lists(list1,list2))
A: This should work:
common_elements = [x for x in list1 if x in list2]
if len(list1) == len(list2) == len(common_elements):
print("EXACT SAME ELEMENTS")
else:
print("DIFFERENT ELEMENTS")
A: If you sort the elements first, or keep track as you encounter them, e.g. in another container, you can hand roll some solutions.
If you insist on avoiding this, you need to check that everything in the first list is in the second and vice versa.
def same_lists(li1, li2):
if len(li1) != len(li2):
return False
else:
for item in li1:
if item not in li2:
return False
for item in li2:
if item not in li1:
return False
return True
This returns True for list1 = ['a','b','c','d'] compared with list2 = ['b','c','d','a'] and False for list3 = ['b','c','d','a', 'a'] and list4 = ['b','c','d','a', 'z'].
This is quadratic - we've compared everything with everything else (with a small optimisation checking the lengths first). We have to go through BOTH lists comparing with the other list.
It would be quicker to sort first.
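If the elements are hashable, a linear-time variant tallies counts in a plain dict (still no set() or sorted()), and unlike the membership-only checks above it also treats duplicates correctly: ['a', 'a'] vs ['a', 'b'] compare as different. A sketch:

```python
def same_lists_counted(li1, li2):
    if len(li1) != len(li2):
        return False
    counts = {}
    for item in li1:                      # tally the first list
        counts[item] = counts.get(item, 0) + 1
    for item in li2:                      # consume tallies with the second
        if counts.get(item, 0) == 0:
            return False
        counts[item] -= 1
    return True                           # equal lengths + all tallies consumed
```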
| |
doc_1922
|
First I create the .nuspec file...
C:\code\MySolution>.nuget\nuget.exe spec MyProject\MyProject.csproj
I edit the generated .nuspec file to be minimal, with no dependencies.
<?xml version="1.0"?>
<package >
<metadata>
<id>MyProject</id>
<version>1.2.3</version>
<title>MyProject</title>
<authors>Example</authors>
<owners>Example</owners>
<requireLicenseAcceptance>false</requireLicenseAcceptance>
<description>Example</description>
<copyright>Copyright 2013 Example</copyright>
<tags>example</tags>
<dependencies />
</metadata>
</package>
Then I build the solution and create a NuGet package...
C:\code\MySolution>.nuget\nuget.exe pack MyProject\MyProject.csproj -Verbosity detailed
Here is the output of that command...
Attempting to build package from 'MyProject.csproj'.
Packing files from 'C:\code\MySolution\MyProject\bin\Debug'.
Using 'MyProject.nuspec' for metadata.
Found packages.config. Using packages listed as dependencies
Id: MyProject
Version: 1.2.3
Authors: Example
Description: Example
Tags: example
Dependencies: Google.ProtocolBuffers (= 2.4.1.473)
Added file 'lib\net40\MyProject.dll'.
Successfully created package 'C:\code\MySolution\MyProject.1.2.3.nupkg'.
The .nupkg package that is created has a .nuspec file contained within it but it includes a dependencies section which I did not have in the original .nuspec file...
<dependencies>
<dependency id="Google.ProtocolBuffers" version="2.4.1.473" />
</dependencies>
I believe this is happening because of this... (from the output above)
Found packages.config. Using packages listed as dependencies
How can I make NuGet not automatically resolve dependencies and insert them into the .nuspec file which is generated from the pack command?
I am using NuGet 2.2 currently. Also, I don't think that this behaviour happened in older versions of NuGet; is this a new "feature"? I couldn't find any documentation describing this "feature" or when it was implemented.
A: In version 2.7 there is an option called developmentDependency that can be set into package.config to avoid including dependency.
<?xml version="1.0" encoding="utf-8"?>
<packages>
<package id="jQuery" version="1.5.2" />
<package id="netfx-Guard" version="1.3.3.2" developmentDependency="true" />
<package id="microsoft-web-helpers" version="1.15" />
</packages>
A: You can explicitly specify which files to include then run the pack command on the nuspec file itself.
<?xml version="1.0"?>
<package >
<metadata>
<id>MyProject</id>
<version>1.2.3</version>
<title>MyProject</title>
<authors>Example</authors>
<owners>Example</owners>
<requireLicenseAcceptance>false</requireLicenseAcceptance>
<description>Example</description>
<copyright>Copyright 2013 Example</copyright>
<tags>example</tags>
<dependencies />
</metadata>
<files>
<file src="bin\Release\MyProject.dll" target="lib" />
</files>
</package>
You should set the src attribute with the path that is relative to the nuspec file here. Then you can run the pack command.
nuget.exe pack MyProject\MyProject.nuspec
A: According to issue #1956 (https://nuget.codeplex.com/workitem/1956) and pull request 3998 (https://nuget.codeplex.com/SourceControl/network/forks/adamralph/nuget/contribution/3998) this feature has been included in the 2.7 release (it was originally slated for the 2.4 release).
However I'm not able to find documentation about the feature, and I still haven't figured how to use it. Please update this answer if anyone knows.
Update: Issue #1956 has been updated by the developer. The feature was not included in the 2.4 release, but delayed to the upcoming 2.7 release. That's why it's not documented yet.
A: Prevent your package from being dependent on other packages
As of NuGet 2.7 there is a new developmentDependency attribute that can be added to a package node in your project's packages.config file. So if your project includes a NuGet package that you don't want your NuGet package to include as a dependency, you can use this attribute like so:
<?xml version="1.0" encoding="utf-8"?>
<packages>
<package id="CreateNewNuGetPackageFromProjectAfterEachBuild" version="1.8.1-Prerelease1" targetFramework="net45" developmentDependency="true" />
</packages>
The important bit that you need to add here is developmentDependency="true".
Prevent your development package from being depended on by other packages
If you are creating a development NuGet package for others to consume, that you know they will not want to have as a dependency in their packages, then you can avoid users having to manually add that attribute to their packages.config file by using the NuGet 2.8 developmentDependency attribute in the metadata section of your .nuspec file. This will automatically add the developmentDependency attribute to the packages.config file when the your package is installed by others. The catch here is that it requires the user to have at least NuGet 2.8 installed. Luckily, we can make use of the NuGet 2.5 minClientVersion attribute to ensure that users have at least v2.8 installed; otherwise they will be notified that they need to update their NuGet client before they can install the package.
I have a development NuGet package that this was perfect for. Here is what my .nuspec file looks like, showing how to use the minClientVersion and developmentDependency attributes (lines 3 and 20 respectively):
<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2011/08/nuspec.xsd">
<metadata minClientVersion="2.8">
<id>CreateNewNuGetPackageFromProjectAfterEachBuild</id>
<version>1.8.1-Prerelease1</version>
<title>Create New NuGet Package From Project After Each Build</title>
<authors>Daniel Schroeder,iQmetrix</authors>
<owners>Daniel Schroeder,iQmetrix</owners>
<licenseUrl>https://newnugetpackage.codeplex.com/license</licenseUrl>
<projectUrl>https://newnugetpackage.codeplex.com/wikipage?title=NuGet%20Package%20To%20Create%20A%20NuGet%20Package%20From%20Your%20Project%20After%20Every%20Build</projectUrl>
<requireLicenseAcceptance>false</requireLicenseAcceptance>
<description>Automatically creates a NuGet package from your project each time it builds. The NuGet package is placed in the project's output directory.
If you want to use a .nuspec file, place it in the same directory as the project's project file (e.g. .csproj, .vbproj, .fsproj).
This adds a PostBuildScripts folder to your project to house the PowerShell script that is called from the project's Post-Build event to create the NuGet package.
If it does not seem to be working, check the Output window for any errors that may have occurred.</description>
<summary>Automatically creates a NuGet package from your project each time it builds.</summary>
<releaseNotes>- Changed to use new NuGet built-in method of marking this package as a developmentDependency (now requires NuGet client 2.8 or higher to download this package).</releaseNotes>
<copyright>Daniel Schroeder 2013</copyright>
<tags>Auto Automatic Automatically Build Pack Create New NuGet Package From Project After Each Build On PowerShell Power Shell .nupkg new nuget package NewNuGetPackage New-NuGetPackage</tags>
<developmentDependency>true</developmentDependency>
</metadata>
<files>
<file src="..\New-NuGetPackage.ps1" target="content\PostBuildScripts\New-NuGetPackage.ps1" />
<file src="Content\NuGet.exe" target="content\PostBuildScripts\NuGet.exe" />
<file src="Content\BuildNewPackage-RanAutomatically.ps1" target="content\PostBuildScripts\BuildNewPackage-RanAutomatically.ps1" />
<file src="Content\UploadPackage-RunManually.ps1" target="content\PostBuildScripts\UploadPackage-RunManually.ps1" />
<file src="Content\UploadPackage-RunManually.bat" target="content\PostBuildScripts\UploadPackage-RunManually.bat" />
<file src="tools\Install.ps1" target="tools\Install.ps1" />
<file src="tools\Uninstall.ps1" target="tools\Uninstall.ps1" />
</files>
</package>
If you don't want to force your users to have at least NuGet v2.8 installed before they can use your package, then you can use the solution I came up with and blogged about before the NuGet 2.8 developmentDependency attribute existed.
A: If you are using the PackageReference element instead of package.config file, here is the solution:
<PackageReference Include="Nerdbank.GitVersioning" Version="1.5.28-rc" PrivateAssets="All" />
or
<PackageReference Include="Nerdbank.GitVersioning" Version="1.5.28-rc">
<PrivateAssets>all</PrivateAssets>
</PackageReference>
Related GitHub discussion
A: I'm not sure why you would want to ignore the dependencies that your NuGet package requires to run, but have you thought about using the NuGet Package Explorer tool instead to create your package?
| |
doc_1923
|
Actually my code is creating product.template records with their attribute.line.ids perfectly fine, but somehow the product.variant records are not created, so only one product.variant gets created without any attributes, and I can't figure out how to do it properly.
So first I am creating a product.template like following (to make it short only insert name here):
id = models.execute_kw(db, uid, password, 'product.template', 'create', [{
'name': "New Partner",
}])
Afterwards I am adding the attribute.line.ids like this:
for key in attValIdList.keys():
attribute_line = models.execute_kw(db, uid, password, 'product.attribute.line', 'create', [{
'product_tmpl_id': id,
'attribute_id': key,
'value_ids': [(6, 0, attValIdList[key])]
}])
So attValIdList is a dictionary where I store each attribute_id with its attribute_value_ids.
This part gets filled correctly.
But no product.variant records are created from the attribute.line.ids.
Actually adding product.product with the attributes is working fine too, but then I have the issue that a random product.product without any attributes gets created automatically.
Would be great if you guys could help me out, I'm wasting a lot of hours on this problem.
A: Odoo v12: create product variants with Python XML-RPC.
Replace <product_id> with your product id, <variant_name> with your variant name, and <variant_value> with your variant value.
product_tmpl_id=models.execute_kw(db, uid, password, 'product.product', 'read', [[<product_id>]])[0].get('product_tmpl_id','')
temp_pro_dta=models.execute_kw(db, uid, password, 'product.product', 'read', [[<product_id>]])[0].get('attribute_line_ids')
value_ids=models.execute_kw(db, uid, password, 'product.template.attribute.line', 'read', [temp_pro_dta])
attrib_ids=models.execute_kw(db, uid, password, 'product.attribute', 'search', [[['name', '=','<variant_name>' ]]])
if attrib_ids:
attrib_id=models.execute_kw(db, uid, password, 'product.attribute', 'read', [attrib_ids])[0].get('id')
else:
attrib_id=models.execute_kw(db, uid, password, 'product.attribute', 'create', [{"name":"<variant_name>",'create_variant':'always','type':'select'}])
attrib_value_ids=models.execute_kw(db, uid, password, 'product.attribute.value', 'search', [[['name', '=','<variant_value>'],['attribute_id','=',attrib_id]]])
if attrib_value_ids:
attrib_value=models.execute_kw(db, uid, password, 'product.attribute.value', 'read', [attrib_value_ids])[0].get('id')
else:
attrib_value=models.execute_kw(db, uid, password, 'product.attribute.value', 'create', [{"name":'<variant_value>','attribute_id':attrib_id}])
if value_ids:
value_data_ids=value_ids[0].get('value_ids')
if attrib_value not in value_data_ids:
value_data_ids.append(attrib_value)
attrib_key=models.execute_kw(db, uid, password, 'product.template.attribute.line', 'write', [value_ids[0].get('id'),{'value_ids':[[6,0,value_data_ids]]}])
else:
value_ids.append(attrib_value)
attrib_key=models.execute_kw(db, uid, password, 'product.template.attribute.line', 'create', [{"display_name":"<variant_name>",'product_tmpl_id':product_tmpl_id[0],'attribute_id':attrib_id,'value_ids':[[6,0,value_ids]]}])
| |
doc_1924
|
Say I want to get all items written in Bulgarian. This is where I'm stuck:
SELECT lang, id, name
FROM items
WHERE lang = "bg" OR lang = "en"
GROUP BY id
The problem arises when there is an item, say, (1, "en") and an item (1, "bg"). The ids are the same. Then how does MySQL or SQLite determine which result to return? Is there any way I can tell it that I would rather prefer to return (1, "bg") if it exists but if it doesn't then (1, "en") would satisfy me?
P.S.
To further illustrate what I want let's imagine that the database contains the following entries with schema (id, lang, name):
(1, "en", "abc")
(2, "en", "cde")
(3, "en", "def")
(1, "bg", "абв")
(3, "bg", "жзи")
After executing the desired query for Bulgarian I should get:
(1, "bg", "абв")
(2, "en", "cde")
(3, "bg", "жзи")
A: If "untranslated" means "English" or in other words, the base language is English, you can LEFT join the table to itself and use COALESCE() function to get rid of NULL values
SELECT COALESCE(bg.lang, en.lang) AS lang
, en.id AS id
, COALESCE(bg.name, en.name) AS name
FROM items en
LEFT JOIN items bg
ON bg.id = en.id
AND bg.lang = 'bg'
WHERE en.lang = 'en'
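The fallback that COALESCE expresses here (prefer the 'bg' row, otherwise keep the 'en' row) can be sketched in plain Python using the sample data from the question:

```python
rows = [
    (1, "en", "abc"), (2, "en", "cde"), (3, "en", "def"),
    (1, "bg", "абв"), (3, "bg", "жзи"),
]

def preferred(rows, want="bg", base="en"):
    by_key = {(rid, lang): (rid, lang, name) for rid, lang, name in rows}
    # Iterate the base language (it covers every id) and swap in the
    # preferred translation whenever one exists.
    return [
        by_key.get((rid, want), (rid, lang, name))
        for rid, lang, name in rows
        if lang == base
    ]
```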
A: Standard SQL does not let you select columns in "group by" queries that do not either (1) appear in the group by list, or (2) appear inside an aggregate function. Any industrial strength SQL engine (DB2, Oracle, SQL Server) would consider your query incorrect.
In cases when you need to choose a specific item or a default when it is not in the database, the coalesce function is used. With this function in hand, you should be able to formulate your query without "group by".
A: You can do GROUP BY on multiple columns:
SELECT lang, id, name
FROM items
WHERE lang = "bg" OR lang = "en"
GROUP BY id, lang
| |
doc_1925
|
a) use glGetTexImage and pass in the buffer which receives the whole texture, and read the appropriate pixels from that
b) create a framebuffer, draw into it using the texture with only the portion needed, and extract the pixels produced with glReadPixels.
I'm guessing b) is faster, but I am relative novice, I'd like to know whether I'm heading in the right direction. a) is easier to code so I wonder whether the possible speed hit is negligible.
Steve
A: Given that the image data is in a texture, there are several possible solutions. Ordered from most desired to least:
*
*Employ glGetTextureSubImage (requires OpenGL 4.5 or ARB_get_texture_sub_image) to just do the job directly.
*Use glCopyImageSubData (requires OpenGL 4.3 or ARB_copy_image. Or NV_copy_image. The latter is implemented on more hardware than just NVIDIA's) to copy the desired rectangle into a texture of the appropriate size, then use glGetTexImage on that.
*Attach the large texture to an FBO, then attach the small texture to another FBO. Use glBlitFramebuffer (requires OpenGL 3.0 or ARB_framebuffer_objects) to copy the desired section of the large texture to the small one. Then use glGetTexImage on the small texture.
Rendering the texture to a framebuffer with triangles would only be needed in the event of working under very old OpenGL implementations.
| |
doc_1926
|
Take the following for an example. Thanks!
#include <iostream>
#include <climits>
int fibonacci(int n){
return fibonacci(n - 1) + fibonacci(n - 2);
}
int main(){
int ans = fibonacci(6);
std::cout << ans << std::endl;
}
A: The premise of the question is false. GCC reports:
: In function 'int fibonacci(int)':
:6:5: warning: infinite recursion detected [-Winfinite-recursion]
6 | int fibonacci(int n){
| ^~~~~~~~~
:8:21: note: recursive call
8 | return fibonacci(n - 1) + fibonacci(n - 2);
| ~~~~~~~~~^~~~~~~
Clang reports:
:6:21: warning: all paths through this function will call itself [-Winfinite-recursion]
int fibonacci(int n){
^
1 warning generated.
MSVC reports:
(9) : warning C4717: 'fibonacci': recursive on all control paths, function will cause runtime stack overflow
A: Modern compilers, in their quest to help you out and generate near-optimal code, will indeed recognize that this function never terminates. However, nothing in the C or C++ language specifications requires that. In contrast to languages like Prolog or Haskell, C/C++ do not guarantee any semantic analysis of your program. A very simple compiler would turn your code
int fibonacci(int n){
return fibonacci(n - 1) + fibonacci(n - 2);
}
into a sequence of low-level instructions equivalent to
*
*set a = n - 1
*set b = n - 2
*put a in the stack position or register for the first int argument
*call function fibonacci
*move the return value into temporary x
*put b in the stack position or register for the first int argument
*call function fibonacci
*move the return value into temporary y
*set z = x + y
*move z into the stack position or register for the function return value
*return to caller
This is a perfectly legal compilation of your program, and does not require any errors or warnings to be generated. Obviously, during execution, the "move the return value into temporary x" and later instructions (most significantly, the "return to caller") will never be reached. This will generate an infinite recursion loop until the machine stack space is exhausted.
| |
doc_1927
|
Something like this:
[LicensedRoute("/api/whatever")]
where '/api/whatever' is only added to the route table if the application is licensed.
Obviously I can explicitly do the check in the action method or use an action filter to validate the requests but ultimately I prefer the route not to be available if the software is not licensed.
A: Seems you need Attribute Routing: http://www.asp.net/web-api/overview/web-api-routing-and-actions/attribute-routing-in-web-api-2
Is it RESTful? How do you store the licensing info: is it user logins? Tokens? A key?
You could do it RESTful and force the client to pass a token every time via token-based authentication, for example: define several "licence" levels/types (eg. Free/Trial/Basic/Pro) and then in a persistent storage (table) map tokens (guids) to a licence type.
Then using a custom attribute, mark each endpoint/controller/action with the minimum required licence type to be accessible (e.g. [MinimumLicence("Basic")]). And then create "routing tables" based on the licence required.
In this case you would deny access to routes rather than "remove" them.
| |
doc_1928
|
type MyEntity = {
name: string;
// would work with any other fields
}
const handleChange = (fieldName: keyof MyEntity) => (
e: React.ChangeEvent<HTMLInputElement | HTMLTextAreaElement>
) => {
setValues({ ...values, [fieldName]: e.currentTarget.value });
However, my type for strings is a dictionary. Eg:
{[index: string]: string}
...where the index is a language code string and its corresponding value is a string for that language's content. For example, a name object would look like, name:{'en': 'Michael', 'es': 'Miguel'}
Here's my current field-specific change handler that I'd like to make work for all text fields. (Note: selectedLanguage is a string state variable with the language code the user selected, omitted for brevity; see the codesandbox link below for the full version):
type StringLanguage = {
[index: string] : string;
}
interface MyEntity {
name: StringLanguage;
}
const handleNameChange = (e: React.ChangeEvent<HTMLInputElement>) => {
setData((prevData) => {
return {
...prevData,
name: {
...prevData.name,
[selectedLanguage]: e.target.value
}
};
});
};
Here's my attempt to use one like the example I found.
const handleChange2 = (fieldName: keyof MyEntity) => (
e: React.ChangeEvent<HTMLInputElement>
) => {
setData((prevData) => {
return {
...prevData,
[fieldName]: {
...prevData[fieldName],
[selectedLanguage]: e.currentTarget.value
}
};
});
};
TS is giving me the error, "Spread types may only be created from object types. ts(2968)" on the line of code:
...prevData[fieldName]
But prevData is an object and in the field-specific version it works. How do I get this more generic function to work with my dictionary type, StringLanguage?:
Here's the usage for a text input:
onChange={handleChange2("name")}
Finally, here is a codesandbox with the complete example of what I'm doing. Feel free to comment out code to experiment. I've left the version that works active and commented out what doesn't work from above:
example
A: Hey, the thing is that in your interface MyEntity there is an id: string property; that is why the TS compiler complains.
If you want to do something generic it would be something like this
const isObject = (yourVariable: any) =>
  typeof yourVariable === "object" && yourVariable != null;
const overrideObject = (original: object, override: object) => ({
  ...original,
  ...override
});
const handleChange = (fieldName: keyof MyEntity) => (
  e: React.ChangeEvent<HTMLInputElement>
) => {
  setData((prevData: MyEntity) => {
    return {
      ...prevData,
      [fieldName]: isObject(prevData[fieldName])
        ? overrideObject(prevData[fieldName] as object, {
            [selectedLanguage]: e.currentTarget.value
          })
        : e.currentTarget.value
    };
  });
};
| |
doc_1929
|
How do I put this:
dates = list()
for entry in some_list:
entry_split = entry.split()
if len(entry_split) >= 3:
date = entry_split[1]
if date not in dates:
dates.append(date)
into a one-liner in Python?
A: Instead of a 1-liner, probably it's easier to understand with a 3-liner.
table = (entry.split() for entry in some_list)
raw_dates = (row[1] for row in table if len(row) >= 3)
# Uniquify while keeping order. http://stackoverflow.com/a/17016257
dates = list(collections.OrderedDict.fromkeys(raw_dates))
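On Python 3.7+ plain dicts preserve insertion order, so the OrderedDict step collapses and the whole thing does fit on one (dense) line; the sample data here is illustrative:

```python
some_list = ["a 2011-08-04 b", "x y", "e 2012-03-04 g"]

# dict.fromkeys de-duplicates while keeping first-seen order
dates = list(dict.fromkeys(e.split()[1] for e in some_list if len(e.split()) >= 3))
```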
A: If the order does not matter:
dates = set(split[1] for split in (x.split() for x in some_list) if len(split) >= 3)
Of course if you want a list instead of a set, just pass the result into list().
Although I would probably stick with what you have, as it is more readable.
A: Try this, assuming that set comprehensions are available in your Python version:
list({d[1] for d in (e.split() for e in some_list) if len(d) >= 3})
If set comprehensions are not available, this will work:
list(set(d[1] for d in (e.split() for e in some_list) if len(d) >= 3))
But seriously, it's not a good idea to write this as a one-liner, for the reasons mentioned in the comments. Even so, your code can be improved a bit, use a set whenever you need to remove duplicates from a collection of elements:
dates = set()
for entry in some_list:
entr_split = entry.split()
if len(entry_split) >= 3:
dates.add(entry_split[1])
dates = list(dates)
A: Seems to me that you are splitting every element in some_list, checking to see if the split element has 3 or more parts, and taking the second part and appending it to the list if it is not already in the list. You can use a set for this last behavior, since sets only contain unique elements.
list(set(entry_split[1] for entry_split in (entry.split() for entry in some_list) if len(entry_split) >= 3))
I re-cast the result to a list because you used a list.
A: Using a list comprehension:
dates = set([entry.split()[1] for entry in some_list if len(entry.split()) >= 3])
which gives us what we want:
>>> some_list = [
... "a 2011-08-04 b",
... "x y",
... "e 2012-03-04 g"
... ]
>>> dates = set([entry.split()[1] for entry in some_list if len(entry.split()) >= 3])
>>> dates
{'2011-08-04', '2012-03-04'}
| |
doc_1930
|
I created a conda environment:
$ conda create -n tensorflow python=3.5
Of course I activated my conda environment
$ source activate tensorflow
Then I played a bit around in Spyder, tried to plot a MNIST-digit (copy-paste code from my tutor which is tested several times), it includes of course
import matplotlib.pyplot as plt
[...]
plt.plot(number)
but executing the Python file with bash gives me:
(tensorflow) leon@leon-linux:~/ANNsCourse/Session1$ python helloWorld.py
Traceback (most recent call last):
File "helloWorld.py", line 10, in <module>
import matplotlib.pyplot as plt
ImportError: No module named 'matplotlib'
I'm quite confused right now, as the (tensorflow) in the bash obviously denotes that my conda tensorflow environment works (at least from my understanding). Also, from what I understood, conda should have matplotlib built in, right? And it should also load this in my conda tensorflow environment, right? This is what my tutor's slide said
"There is no need to install further packages like numpy or matplotlib, since Anaconda contains current versions of them already."
and also what I was able to take from everything I Googled and StackOverflowed. Neither Googling nor StackOverflowing gave me any good answer (might also just be because I don't understand enough yet).
My best guess would be that I still have to include matplotlib into my tensorflow conda environment, but this would be contradicting both my tutor & Google, while I also would not know how to do this.
edit: conda list gave me that matplotlib was not in my tensorflowenvironment, so I went
conda install matplotlib
I'm still afraid something is wrong with my conda tensorflow environment, shouldn't the matplotlib have been in there by default? It also told me:
Package plan for installation in environment /home/leon/.conda/envs/tensorflow:
The following NEW packages will be INSTALLED:
cycler: 0.10.0-py35_0
dbus: 1.10.10-0
expat: 2.1.0-0
fontconfig: 2.12.1-3
freetype: 2.5.5-2
glib: 2.50.2-1
gst-plugins-base: 1.8.0-0
gstreamer: 1.8.0-0
icu: 54.1-0
jpeg: 9b-0
libffi: 3.2.1-1
libgcc: 5.2.0-0
libiconv: 1.14-0
libpng: 1.6.27-0
libxcb: 1.12-1
libxml2: 2.9.4-0
matplotlib: 2.0.0-np112py35_0
mkl: 2017.0.1-0
numpy: 1.12.0-py35_0
pcre: 8.39-1
pyparsing: 2.1.4-py35_0
pyqt: 5.6.0-py35_2
python-dateutil: 2.6.0-py35_0
pytz: 2016.10-py35_0
qt: 5.6.2-3
sip: 4.18-py35_0
six: 1.10.0-py35_0
Proceed ([y]/n)? y
Which tells me also numpy was missing? Can someone confirm this to be correct now, or is there something fishy with my conda?
A: You just created a conda environment named tensorflow and switched into it. You haven't installed the tensorflow package or any of the default anaconda packages.
To do that, do
conda create -n tensorflow python=3.5 anaconda # install anaconda3 default packages
source activate tensorflow # switch into it
conda install -c conda-forge tensorflow # install tensorflow
A: I ran into the same problem using these instructions:
https://www.anaconda.com/tensorflow-in-anaconda/
for tensorflow-gpu.
Running
conda create -n tensorflow_gpuenv tensorflow-gpu
conda activate tensorflow_gpuenv
should ensure that "TensorFlow is now installed and ready for use."
But it doesn't. Running 'conda list' shows matplotlib was not installed. So you'll need to install that as well:
conda install -c conda-forge matplotlib
A: I faced the same problem on my mac.
So I ran conda list to see whether the matplotlib is installed or not.
Once I found it missing, I went ahead and ran the command conda install matplotlib.
After this step, verify it's properly installed by doing the following.
conda activate tf
This activates the tensorflow environment on anaconda.
After this, start an interactive Python shell in the same terminal.
import matplotlib
If it's installed properly, this should not throw any error now.
| |
doc_1931
|
There might be a problem with the project dependency tree. It is
likely not a bug in Create React App, but something you need to fix
locally.
The react-scripts package provided by Create React App requires a
dependency:
"babel-jest": "^26.6.0"
Don't try to install it manually: your package manager does it
automatically. However, a different version of babel-jest was detected
higher up in the tree:
D:\node_modules\babel-jest (version: 24.9.0)
Manually installing incompatible versions is known to cause
hard-to-debug issues.
If you would prefer to ignore this check, add
SKIP_PREFLIGHT_CHECK=true to an .env file in your project. That will
permanently disable this message but you might encounter other issues.
To fix the dependency tree, try following the steps below in the exact
order:
*
*Delete package-lock.json (not package.json!) and/or yarn.lock in your project folder.
*Delete node_modules in your project folder.
*Remove "babel-jest" from dependencies and/or devDependencies in the package.json file in your project folder.
*Run npm install or yarn, depending on the package manager you use.
In most cases, this should be enough to fix the problem. If this has
not helped, there are a few other things you can try:
*If you used npm, install yarn (http://yarnpkg.com/) and repeat the above steps with it instead.
This may help because npm has known issues with package hoisting which may get resolved in future versions.
*Check if D:\node_modules\babel-jest is outside your project directory.
For example, you might have accidentally installed something in your home folder.
*Try running npm ls babel-jest in your project folder.
This will tell you which other package (apart from the expected react-scripts) installed babel-jest.
If nothing else helps, add SKIP_PREFLIGHT_CHECK=true to an .env file
in your project. That would permanently disable this preflight check
in case you want to proceed anyway.
P.S. We know this message is long but please read the steps above :-)
We hope you find them helpful!
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] start: react-scripts start
npm ERR! Exit status 1
npm ERR! Failed at the [email protected] start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in: npm ERR!
C:\Users\smaso\AppData\Roaming\npm-cache_logs\2021-01-26T22_50_48_484Z-debug.log
A: If this fails to work, create a .env file in the root directory of your project and add the following line
SKIP_PREFLIGHT_CHECK=true
A: Just go to your project's root directory, delete the node_modules folder, and npm start your project.
A: I also faced this problem. A simple solution is:
1) Create a .env file.
2) Add SKIP_PREFLIGHT_CHECK=true in the file.
3) npm start
A: Delete and redownload node_modules, then run the application again.
A: To fix the dependency tree, try following the steps below in the exact order:
*
*Delete package-lock.json (not package.json!) and/or yarn.lock in your project folder.
*Delete node_modules in your project folder.
*Remove "babel-jest" from dependencies and/or devDependencies in the package.json file in your project folder.
*Run npm install or yarn, depending on the package manager you use.
In most cases, this should be enough to fix the problem.
If this has not helped, there are a few other things you can try:
*If you used npm, install yarn (http://yarnpkg.com/) and repeat the above steps with it instead.
This may help because npm has known issues with package hoisting which may get resolved in future versions.
*Check if ./babel-jest is outside your project directory.
For example, you might have accidentally installed something in your home folder.
*Try running npm ls babel-jest in your project folder.
This will tell you which other package (apart from the expected react-scripts) installed babel-jest.
A: It seems like you created the React project using create-react-app and installed Jest using the following command.
yarn add --dev jest babel-jest @babel/preset-env @babel/preset-react react-test-renderer
But in the documentation it says just to run the following command if you are using create-react-app
yarn add --dev react-test-renderer
See the documentation: https://jestjs.io/docs/tutorial-react
This worked for me.
A: I ran npm uninstall jest in the root project folder (not the client), then used the react-scripts start command npm start, and it worked.
A: Removing jest.config.js and uninstalling ts-jest solved my problem.
| |
doc_1932
|
static void boot_jump_linux(bootm_headers_t *images, int flag)
{
#ifdef CONFIG_ARM64
void (*kernel_entry)(void *fdt_addr);
int fake = (flag & BOOTM_STATE_OS_FAKE_GO);
kernel_entry = (void (*)(void *fdt_addr))images->ep;
debug("## Transferring control to Linux (at address %lx)...\n",
(ulong) kernel_entry);
bootstage_mark(BOOTSTAGE_ID_RUN_OS);
announce_and_cleanup(fake);
if (!fake)
kernel_entry(images->ft_addr);
#else
unsigned long machid = gd->bd->bi_arch_number;
char *s;
void (*kernel_entry)(int zero, int arch, uint params);
unsigned long r2;
int fake = (flag & BOOTM_STATE_OS_FAKE_GO);
kernel_entry = (void (*)(int, int, uint))images->ep;
s = getenv("machid");
if (s) {
strict_strtoul(s, 16, &machid);
printf("Using machid 0x%lx from environment\n", machid);
}
debug("## Transferring control to Linux (at address %08lx)" \
"...\n", (ulong) kernel_entry);
bootstage_mark(BOOTSTAGE_ID_RUN_OS);
announce_and_cleanup(fake);
if (IMAGE_ENABLE_OF_LIBFDT && images->ft_len)
r2 = (unsigned long)images->ft_addr;
else
r2 = gd->bd->bi_boot_params;
if (!fake)
kernel_entry(0, machid, r2);
#endif
}
I understood from the related question Trying to understand the usage of function pointer that kernel_entry is a pointer to a function. Can someone help me understand where that function is defined? I don't even know the name of this function, so I failed to grep it.
NOTE: The entire u-boot source code is here.
A: Indeed kernel_entry is a function pointer. It is initialized from the ep field of the piece of data passed in called images, of type bootm_headers_t. The definition of that struct is in include/image.h. This is the definition of a bootable image header, i.e. the header of a kernel image which contains the basic info needed to boot that image from the boot loader. Obviously, to start it, you need a program entry point, similar to the main function in regular C programs.
In that structure, the entry point is simply defined as a memory address (unsigned long), which the code you listed cast into that function pointer.
That structure has been obtained by loading the first blocks of the image file on disk, whose location is already known by the boot loader.
Hence the actual code pointed to by that function pointer belongs to a different binary, and the definition of the function is located in a different source tree. For a Linux kernel, this entry point is a hand-coded assembly function, whose source is in head.S. This function being highly arch-dependent, you will find many files of that name implementing it across the kernel tree.
| |
doc_1933
|
Does anyone know of any tools or methods to speed up this process?
Thanks all!
A: If you need to do it via C#, take a look at FileHelpers or http://www.codeproject.com/KB/cs/CsvReaderAndWriter.aspx .
If you want to do it via SQL (BULK INSERT), see the walkthrough (including source) here: http://blog.sqlauthority.com/2008/02/06/sql-server-import-csv-file-into-sql-server-using-bulk-insert-load-comma-delimited-file-into-sql-server/ (MSDN reference: http://msdn.microsoft.com/en-us/library/ms188365.aspx).
There is an easier option though by using the SQL Server Import Wizard interactively for a small number of files.
A: SQL Server Management Studio, on the database under Right Click -> Tasks -> Import Data... can consume CSV files.
If you need to do it via C#, there are plenty of CSV readers and writers around:
http://www.codeproject.com/KB/cs/CsvReaderAndWriter.aspx
From here it's only a short hop to a DataTable and SqlBulkCopy.
A: You want to execute a BULK INSERT statement.
A quick google suggests these sites:
*
*SQL SERVER – Import CSV File Into SQL Server Using Bulk Insert – Load Comma Delimited File Into SQL Server « Journey to SQLAuthority
*BULK INSERT on msdn
A: The FileHelpers library is fantastic for doing this sort of stuff through code.
A: Assuming the csv is structured in this way:
*
*One row is one entry
*The split character used to divide the fields is in the first position of every row.
I would do it this way:
var lines = File.ReadAllLines("<CSV-File>");
foreach (string line in lines)
{
var values = line.Split(new[] { line[0] }, StringSplitOptions.None);
}
A: SQL Server Integration Services (SSIS) is Microsoft's ETL tool and handles CSV files easily.
| |
doc_1934
|
What can I do?
A: Generally speaking, avoid sending email during HTTP request processing. As you've found, it will eventually time out. There are some measures you can take to reduce this, but you won't make it go away altogether:
*
*Extend your timeout
*Send faster, e.g. by relaying through a local mail server
The right way to fix this is to only trigger the sending of your messages during the HTTP request, and do the actual sending via a cron job or other scheduled or queued task that does the actual sending. This way your trigger will do something small and simple like updating a "time to send" timestamp in a mailing list record, then have your scheduled task check for things that are are due to send, send them, and then mark them as sent so that later HTTP requests can see the status of the send.
There is an example of sending to a mailing list included with PHPMailer, and that would be exactly the kind of thing you want to trigger. Note that that code does not make any reference to $_GET, $_POST, or any other user input, it's driven purely from what's in the database.
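As a sketch of that trigger-plus-scheduled-task split (the table and column names here are made up for illustration, not taken from PHPMailer):

```python
import sqlite3
import time

# Sketch of the pattern described above: the HTTP request only marks a
# mailing list as due; a separate scheduled task does the slow sending.
# Table and column names are assumptions for illustration only.

def trigger_send(db, list_id):
    """Called during HTTP request processing: fast, no SMTP work."""
    db.execute("UPDATE mailing_list SET send_after = ?, sent = 0 WHERE id = ?",
               (time.time(), list_id))

def run_scheduled_sends(db, send_one):
    """Called from cron: finds due lists, sends them, marks them as sent."""
    due = db.execute(
        "SELECT id FROM mailing_list WHERE sent = 0 AND send_after <= ?",
        (time.time(),)).fetchall()
    for (list_id,) in due:
        send_one(list_id)  # the actual (slow) sending happens here
        db.execute("UPDATE mailing_list SET sent = 1 WHERE id = ?", (list_id,))
```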
| |
doc_1935
|
Coding for the bus station system.
FUNCTION get_bus_result_attribute (iv_result_name varchar2,
iv_result_key varchar2,
iv_result_attribute varchar2)
--define the parameter to get the "sql_result" script result
RETURN varchar2 IS
sql_result varchar2(500);
sql_return varchar2(500);
BEGIN
sql_result := 'SELECT ' || iv_result_attribute || '
FROM MyTable a,
MyTable b
WHERE a.bus_value_set_id = b.bus_value_set_id
AND b.bus_value_set_name = iv_result_name
AND a.enabled_flag = ''Y''
AND a.bus_value = iv_result_key
AND iv_result_name = get_bus_code (v_bus)
AND iv_result_key = get_bus_name(v_group)
AND iv_result_key = iv_result_attribute';
EXECUTE IMMEDIATE sql_result
INTO sql_return; --get the "sql_result" script result
return sql_return;
exception
when others then
return '';
end get_bus_result_attribute;
FUNCTION get_bus_code (v_bus varchar2)
RETURN VARCHAR2 IS
v_get_bus_code_result VARCHAR2(20) ;
BEGIN
SELECT busa.bus_code
INTO v_get_bus_code_result
FROM tbl_bus_code busa, tbl_bus_line busb
WHERE busa.bus_code_set_id = busb.bus_code_set_id
AND busb.bus_line_set_name = 'HK_BUS_CODE'
AND busa.enabled_flag = 'Y'
AND (busa.attribute4 = 'Y' OR busa.attribute5 = 'Y')
AND busa.BUS_VALUE = v_bus;
RETURN v_get_bus_code_result;
RETURN get_result_attribute('BUS_LINES', v_bus, 'attribute1'); /*BUS_GP*/
EXCEPTION
WHEN OTHERS THEN
RETURN '';
END get_bus_code;
FUNCTION get_bus_name(v_group VARCHAR2) --define the parameter and enter the value in the function 'get_bus_result_attribute'
RETURN VARCHAR2 IS
v_get_bus_div_result VARCHAR2(20) ;
BEGIN
SELECT DISTINCT CASE busa.bus_code --Bus code
WHEN '52' THEN '52X'
WHEN '58P' THEN '58'
WHEN 'K1' THEN 'K1C'
WHEN '40' THEN '40X'
WHEN '6' THEN '6X'
WHEN '7' THEN '7'
WHEN '58M' THEN '58'
ELSE ''
END bus_code --Bus code
INTO v_get_bus_div_result
FROM tbl_bus_code busa, tbl_bus_line busb
WHERE busa.bus_code_set_id = busb.bus_code_set_id
AND busb.bus_line_set_name = 'HK_BUS_LINES'
AND busa.enabled_flag = 'Y'
AND (busa.attribute4 = 'Y' OR busa.attribute5 = 'Y')
AND busa.bus_code NOT IN ('INACTIVE', 'XXX')
AND get_bus_code(busa.BUS_VALUE) = v_group;
RETURN get_result_attribute('BUS_GROUP', v_group, 'attribute2');
--bus_group_dir
EXCEPTION
WHEN OTHERS THEN
RETURN '';
END get_bus_name;
FUNCTION BUS_DOC_TEXT (N_ID NUMBER, N_HEAD_ID NUMBER)
RETURN VARCHAR2 IS
v_bus_doc_text VARCHAR2(150);
BEGIN
SELECT 'BUS\'
|| get_bus_result_attribute(abc.attribute14)
|| '\'
|| abc.attribute14
|| '\'
|| abc.segment1
INTO v_bus_doc_text
FROM my_table_c abc
WHERE abc.ORG_ID = N_ID -- parameter
AND abc.bus_id = N_HEAD_ID; -- parameter
RETURN v_bus_doc_text;
END;
END;
| |
doc_1936
|
Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory: after upgrading an application to Angular 10, it started giving this JavaScript heap out of memory error.
Can someone please advise? I tried multiple ways to increase the memory but didn't get it resolved. Thanks!
A: First things first: which node version are you using? You should use v10.x or v12.x, according to the compatibility information. I've seen some weird errors happen when using node v8.x with new Angular versions.
With this said, in the project I work on (massive project with 500+ components and 190+ modules, with a lot of problems in module hierarchy) we've had to change our build scripts to increase node memory. It can be done in your package.json file:
// package.json file
"scripts": {
"build": "node --max_old_space_size=6144 ./node_modules/@angular/cli/bin/ng build",
"postinstall": "ngcc --properties es2015 es5 browser module main --first-only --create-ivy-entry-points"
},
You'll notice the postinstall script. It's a good idea to use it: since Angular v9, the new Ivy compiler is the default for projects, but not for libraries. This implies the use of the compatibility compiler (ngcc), which will process all Angular libraries. The postinstall script will run ngcc after npm install, instead of running it at build time.
Cheers!
| |
doc_1937
|
I am heading a team of 4 people. They have all forked my copy of the project, now they make changes and send me pull requests.
Let's say I receive a pull request and merge it to the dev branch (which is still pending QA). In the meantime, there's another pull request which is very critical, another team member took the latest from dev, made a new branch, and applied the fixes in the new branch, then sent me a pull request.
The problem is, I just want to pull in the changes made for the critical fix. If at this stage I merge the changes to production, it will also merge the prior changes, which were still awaiting QA.
Is it possible to check out only the files which require changes, so that I'm not forced to push the current development work to production?
I Googled and found the command git checkout --orphan branchname, which creates a blank branch; after checkout, it automatically stages the whole project for commit, so I used git reset HEAD -- to unstage everything.
Now I just added the required files and committed them, but when I raise a pull request to the remote repo, it says the two branches have entirely different histories.
Can anyone tell me how I could achieve this?
A: You want to "cherry-pick" the critical fixes. This just takes the changes in that changeset, without taking anything else.
*
*Find the changeset hash in your dev branch.
*git checkout -b my-critical-fix production
*git cherry-pick $HASH
That will give you a branch called "my-critical-fix", based on production, with just the change that you want. Now you can check it and create a pull request against the production branch when you are happy.
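The whole flow can be tried out on a throwaway repository (branch and file names below are invented for illustration):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email you@example.com
git config user.name you

echo base > app.txt && git add app.txt && git commit -qm "base"
git branch production          # production points at the QA-approved state

# dev picks up a pending change, then the critical fix
git checkout -qb dev
echo pending > pending.txt && git add pending.txt && git commit -qm "pending QA"
echo fix > fix.txt && git add fix.txt && git commit -qm "critical fix"
HASH=$(git rev-parse HEAD)                    # step 1: find the hash

git checkout -qb my-critical-fix production   # step 2
git cherry-pick "$HASH" >/dev/null            # step 3

ls   # fix.txt is present, pending.txt is not
```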
| |
doc_1938
|
import React, { useState } from 'react';
import {Button, Text, View, StyleSheet, TouchableOpacity} from 'react-native';
const HomeScreen = ({ navigation }) => {
const userpermission = 'admin';
const adminbutton = '<Button title="Adminstuff"></Button>';
return (
<View style={{ flex: 1, alignItems: 'center', justifyContent: 'center' }}>
<Text>Applikationen</Text>
<TouchableOpacity
style={styles.button}
onPress={() => navigation.navigate('Arbeitszeiterfassung')}>
<Text style={styles.text}>Arbeitszeiterfassung</Text>
</TouchableOpacity>
// HERE I WANT TO RENDER THE BUTTON
// if(userpermission == 'admin'){
// {adminbutton}
// }
</View>
);
};
A: Try this way
const HomeScreen = ({ navigation }) => {
const userpermission = 'admin';
const adminbutton = '<Button title="Adminstuff"></Button>';
const renderButtonsContainer = () => {
return <View>
<Button title="Adminstuff"></Button>
<Button title="Adminstuff"></Button>
.........
</View>
}
return (
<View style={{ flex: 1, alignItems: 'center', justifyContent: 'center' }}>
{userpermission == 'admin' ? renderButtonsContainer() : null }
</View>
);
};
A: You can do something like this.
import React, { useState } from 'react';
import {Button, Text, View, StyleSheet, TouchableOpacity} from 'react-native';
const HomeScreen = ({ navigation }) => {
const userpermission = 'admin';
return (
<View style={{ flex: 1, alignItems: 'center', justifyContent: 'center' }}>
<Text>Applikationen</Text>
<TouchableOpacity
style={styles.button}
onPress={() => navigation.navigate('Arbeitszeiterfassung')}>
<Text style={styles.text}>Arbeitszeiterfassung</Text>
</TouchableOpacity>
{userpermission == "admin" ? <Button title="Adminstuff" /> : null}
</View>
);
};
A: A better way to achieve this in a functional component:
import { Button } from 'react-native';
const MyButton = ({userpermission}) => {
if(userpermission == 'admin'){
return <Button title="Adminstuff"/>
}
else {
return <Button title="Userstuff"/>
}
}
const HomeScreen = () => {
const userpermission = 'admin';
return (
<View>
<Text>Something</Text>
<MyButton userpermission={userpermission} />
</View>
)
}
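Stripped of React entirely, the conditional logic in these answers reduces to an expression that yields either an element or null (plain-JS sketch; the object is just a stand-in for the Button element):

```javascript
// Plain-JS sketch of the conditional-render pattern above (no React needed
// to see the logic; the returned object stands in for <Button .../>).
function adminButton(userpermission) {
  return userpermission === 'admin'
    ? { type: 'Button', title: 'Adminstuff' }
    : null;
}

console.log(adminButton('admin')); // { type: 'Button', title: 'Adminstuff' }
console.log(adminButton('user'));  // null
```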
| |
doc_1939
|
This is my code for loading more data
public void loadmoredata(int limit) {
limit++;
String url = "http://oilpeople.talenetic.com/api/SearchJob?limit="+limit+ "&jobkeyword=" + key + "&countrytext=" + coun + "&location=" +loc+ "&apikey=1111111111&siteid=1";
JsonObjectRequest loadmore = new JsonObjectRequest(Request.Method.GET, url, null, new Response.Listener<JSONObject>() {
@Override
public void onResponse(JSONObject response) {
try {
JSONArray jsonArray = response.getJSONArray(loadArray);
jobTitle = new String[jsonArray.length()];
salary = new String[jsonArray.length()];
companyName = new String[jsonArray.length()];
for (int i = 0; i < jsonArray.length(); i++) {
JSONObject c = jsonArray.getJSONObject(i);
// Storing each json item in variable
if(loadArray.equals("Indeedjobslist")) {
String salaryLoc = c.getString("location");
jobTitle[i] = c.getString("jobtitle");
companyName[i] = c.getString("companyname");
salary[i] = salaryLoc;
} else {
String salaryLoc = c.getString("salary")+ " \u2022 " +c.getString("location");
jobTitle[i] = c.getString("jobtitle");
companyName[i] = c.getString("companyname");
salary[i] = salaryLoc;
}
// show the values in our logcat
m.setCompanyName(companyName);
m.setJobTitle(jobTitle);
m.setSalary(salary);
adapter.notifyDataSetChanged();
}
} catch(JSONException e) {
}
}
}, new Response.ErrorListener() {
@Override
public void onErrorResponse(VolleyError error) {
}
});
Volley.newRequestQueue(SearchResults.this).add(loadmore);
}
adapter is a global variable. I am trying to load more data like this:
@Override
public void onBindViewHolder(MyViewHolder holder, int position) {
holder.fTextView.setText(jobTitle[position]);
holder.mTextView.setText(companyName[position]);
holder.sTextView.setText(jobs[position]);
if (position % 10 == 0) {
int limit = position + 5;
loadmoredata(limit);
}
}
Whenever position % 10 is 0 I load more data, but I am getting an IndexOutOfBoundsException.
A: remove
if (position % 10 == 0) {
int limit = position + 5;
loadmoredata(limit);
}
and use this method so that the adapter takes its item count from your data set; then call your load-more code from your main activity. But I think that will cause an infinite loop:
public int getItemCount() {
return mDataset.length;
}
| |
doc_1940
|
if 'DYNO' in os.environ: # Is running on Heroku
DEBUG = False
else:
DEBUG = True
...
if DEBUG==True:
DATABASES = {
'default': {
...
}
}
else: # For Heroku
# Parse database configuration from $DATABASE_URL
import dj_database_url
DATABASES = {'default':dj_database_url.config()}
# Honor the 'X-Forwarded-Proto' header for request.is_secure()
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
Also modify wsgi.py:
from <myApp> import settings
if settings.DEBUG==True:
application = get_wsgi_application()
else: # For Heroku
from dj_static import Cling
application = Cling(get_wsgi_application())
The above modifications are to identify if the app is running locally with runserver or on Heroku. However, if I try to run foreman start instead of runserver, the settings in wsgi.py will not work since foreman also requires Cling.
Is there a way that I can detect if the app is run by foreman so that I can make proper setting?
A: Heroku provides you with DATABASE_URL, so if 'DATABASE_URL' does not exist, then it's a local machine:
if 'DATABASE_URL' not in os.environ:
os.environ['DATABASE_URL'] = 'postgres://user:password@localhost/name'
DATABASES = {'default': dj_database_url.config(default=os.environ['DATABASE_URL'])}
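The same fallback can be written as a small testable function (the URL value is a placeholder, matching the snippet above):

```python
import os

# Fallback pattern from the snippet above: prefer Heroku's DATABASE_URL,
# otherwise use a local default (placeholder credentials, not real ones).
def database_url(environ=None):
    if environ is None:
        environ = os.environ
    local_default = 'postgres://user:password@localhost/name'
    return environ.get('DATABASE_URL', local_default)
```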
Updated : Match answer to the exact question.
Procfile
export SERVER_ENV=foreman
web: gunicorn yourapp.wsgi
wsgi.py
if os.getenv('SERVER_ENV') == 'foreman':
application = Cling(get_wsgi_application())
else:
application = get_wsgi_application()
| |
doc_1941
|
Now I'm reading about releasing a modular project, and I saw that some people use the same version for all modules; but in that case, how can I manage upgrades and bug fixes on the modules? In summary, I'm really confused on this topic and I want to ask for suggestions on how to begin to address this problem.
A: We also have a similar structure of modules. Each in its own maven project.
Each module has its own life-cycle and its own version.
Using your module names -
You want to upgrade the services module
*
*You release a new version of services
*You update the rest, admin-webapp and webapp with the services new version
*You build them and release them
This methodology requires that you manage the dependencies properly.
We try to keep it simple, but with many modules you can get into dependency hell.
Another option is to keep all modules in a single, multi-module maven project
*
*Where you have a single version for all modules (parent pom)
*You build all the project, releasing all modules with each new change
This makes managing dependencies easy, but produces many unneeded releases.
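For the second option, a minimal parent pom sketch (group and artifact IDs are placeholders; module names follow the question):

```xml
<!-- Sketch of a parent pom for the single-version approach; IDs are
     placeholders, module names follow the question. -->
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>parent</artifactId>
  <version>1.2.0</version>
  <packaging>pom</packaging>
  <modules>
    <module>services</module>
    <module>rest</module>
    <module>admin-webapp</module>
    <module>webapp</module>
  </modules>
</project>
```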
I think both options are valid. You have to see what your team will feel easy to manage and scale over time.
I hope this helps.
| |
doc_1942
|
The only way I can do this at present is to create another panel in the north position and add the label in the center position of this panel.
Is there a way to do this without the extra panel?
A: There is no need to add an extra panel, as I see that you only need the label in the north (i.e. top) position.
Components added to the north of a BorderLayout will occupy the complete width, and the height will be the preferred height of the component. This is decided by various factors.
You just need to take care of centering the label's text and image. Look at the alignment APIs of the label for this.
Details:
http://www.ehow.com/way_5579409_java-borderlayout-tutorial.html
e.g.
http://www.java2s.com/Tutorial/Java/0240__Swing/1340__BorderLayout.htm
http://download.oracle.com/docs/cd/E17409_01/javase/tutorial/uiswing/layout/border.html
A: Well, I'm not sure I understand the question. A JLabel does not "autosize" itself. The size of the label is the size of the Icon added to the label. So the size of the image will not change even if the width changes.
Maybe you can use:
label.setAlignmentX(...);
label.setHorizontalAligment(...);
To horizontally center the label in the north of the panel, if that's what your question is.
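A minimal sketch of that approach (class name and panel plumbing are invented for illustration; no window is created, so this also runs headless):

```java
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.SwingConstants;
import java.awt.BorderLayout;

public class NorthCenteredLabel {
    // Build a panel with a label in NORTH whose text/icon is centered.
    static JLabel buildNorthLabel(JPanel root) {
        JLabel label = new JLabel("Title");
        label.setHorizontalAlignment(SwingConstants.CENTER);
        root.add(label, BorderLayout.NORTH);
        return label;
    }

    public static void main(String[] args) {
        JPanel root = new JPanel(new BorderLayout());
        JLabel label = buildNorthLabel(root);
        System.out.println(label.getHorizontalAlignment() == SwingConstants.CENTER); // prints "true"
    }
}
```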
Why don't you post your current working SSCCE that shows what you are doing? Also, what is the problem with using a second panel?
| |
doc_1943
|
According to this document: https://help.branch.io/developers-hub/docs/ios-basic-integration#2-configure-associated-domains
I need to submit a URL Scheme. Recommendations? Apple says reverse DNS.
According to this document: https://branch.io/glossary/uri-schemes/
URL Schemes are obsolete.
What should I do?
A: A Branchster here -
While most of the redirection happens through Universal Links on iOS there can be certain situations where a third-party app is not able to trigger UL and in that case, the fallback redirection using URI schemes comes into the picture.
Ideally, you should set up both Universal Links and URI schemes in your Info.plist file so that you cover all the edge cases. You can check out the recommendations for the URI scheme here though just ensure it is something unique to your app.
To start, you need to configure your Branch Dashboard by enabling the Universal Links toggle and entering your URI scheme and app information.
You can then proceed to enter the same information in your project's Info.plist file:
<key>branch_universal_link_domains</key>
<array>
<string>sample.app.link</string>
<string>sample-alternate.app.link</string>
<string>sample.test.app.link</string>
<string>sample-alternate.test.app.link</string>
</array>
<key>CFBundleURLTypes</key>
<array>
<dict>
<key>CFBundleTypeRole</key>
<string>Editor</string>
<key>CFBundleURLSchemes</key>
<array>
<string>com.your.scheme</string>
</array>
<key>CFBundleURLName</key>
<string>com.branch.monster</string>
</dict>
</array>
<key>branch_key</key>
<dict>
<key>live</key>
<string>key_live_</string>
<key>test</key>
<string>key_test_</string>
</dict>
| |
doc_1944
|
This is the definition of the provider:
final branchRepositoryProvider = FutureProvider<BranchRepository>((ref) async {
var repository = await BranchRepository.create();
developer.log('BranchRepositoryProvider created.');
return repository;
});
And this is the code that makes use of it:
class NavigationDrawer extends ConsumerWidget {
const NavigationDrawer({Key? key}) : super(key: key);
@override
Widget build(BuildContext context, WidgetRef ref) {
AsyncValue<BranchRepository> provider =
ref.read(repositories.branchRepositoryProvider);
developer.log('Got branch provider.');
return Drawer(
child: ListView(
padding: EdgeInsets.zero,
children: <Widget>[
DrawerHeader(...),
] +
provider.when(
data: (repository) => branchOptions(context, repository),
loading: () =>
[const ListTile(title: Text('Reading the database...'))],
error: (e, s) => [
ListTile(title: Text('Something went wrong:\n$e'))]),
),
);
}
List<ListTile> branchOptions(
BuildContext context, BranchRepository repository) {
developer.log('Creating branch options...');
var branches = getBranches(repository);
...
}
Map<int, String> getBranches(BranchRepository repository) {
developer.log('Iterating over ids...');
...
}
}
Messages in functions branchOptions(...) and getBranches(...) are shown only after the second click.
A: I see you're using ref.read in a builder. In general, you should use ref.watch in a builder, and ref.read only for callbacks. Otherwise, the builder isn't set to rebuild when the value changes.
A: As Randal Schwartz suggested, my mistake was using ref.read instead of ref.watch, as
AsyncValue<BranchRepository> provider =
ref.watch(repositories.branchRepositoryProvider);
With that change, it is working now.
| |
doc_1945
|
Note, it is likely that the line begins with several spaces.
What if the input is coming from an API instead of an existing file?
. . .
"events" : [ {
"id" : "123456",
"important" : true,
"codeView" : {
"lines" : [ {
"fragments" : [ {
"type" : "NORMAL_CODE",
"value" : "str = wrapper.getParameter("
}, {
"type" : "NORMAL_CODE",
"value" : ")"
} ],
"text" : "str = wrapper.getParameter("motif")"
} ],
"nested" : false
},
"probableStartLocationView" : {
"lines" : [ {
"fragments" : [ {
"type" : "STACKTRACE_LINE",
"value" : "<init>() @ JSONInputData.java:12"
} ],
"text" : "<init>() @ JSONInputData.java:92"
} ],
"nested" : false
},
"dataView" : {
"lines" : [ {
"fragments" : [ {
"type" : "TAINT_VALUE",
"value" : "CP"
} ],
"text" : "{{#taint}}CP{{/taint}}"
} ],
"nested" : false
},
"collapsedEvents" : [ ],
"dupes" : 0
}, {
"id" : "28861,28862",
"important" : false,
"type" : "P2O",
"description" : "String Operations Occurred",
"extraDetails" : null,
"codeView" : {
"lines" : [ {
"fragments" : [ {
"type" : "TEXT",
"value" : "Over the following lines of code, blah blah."
} ],
"text" : "Over the following lines of code, blah blah."
} ],
"nested" : false
},
"probableStartLocationView" : {
"lines" : [ {
"fragments" : [ {
"type" : "STACKTRACE_LINE",
"value" : "remplaceString() @ O_UtilCaractere.java:234"
} ],
"text" : "remplaceString() @ O_UtilCaractere.java:234"
}, {
"fragments" : [ {
"type" : "STACKTRACE_LINE",
"value" : "replaceString() @ O_UtilCaractere.java:333"
} ],
"text" : "replaceString() @ O_UtilCaractere.java:333"
}, {
"fragments" : [ {
"type" : "STACKTRACE_LINE",
"value" : "creerIncidentPaie() @ Incidents.java:444"
} ],
"text" : "creerIncidentPaie() @ Incidents.java:219"
}, {
"fragments" : [ {
"type" : "STACKTRACE_LINE",
"value" : "repliquerAbsenceIncident() @ Incidents.java:876"
} ],
"text" : "repliquerAbsenceIncident() @ IncidentsPaieMgr.java:882"
} ],
"nested" : false
},
"dataView" : {
"lines" : [ {
"fragments" : [ {
"type" : "TEXT",
"value" : "insert into TGE_INCIDENT...4&apos;, &apos;YYYYMMDD&apos;), &apos;A&apos;, &apos;"
}, {
"type" : "TAINT_VALUE",
"value" : "CP"
}, {
"type" : "TEXT",
"value" : "&apos;, &apos;&apos;, null, &apos;T&apos;, &apos;ADPTVT&apos;, to_date(&apos;2013012214..."
} ],
"text" : "insert into TGE_INCIDENT...4&apos;, &apos;YYYYMMDD&apos;), &apos;A&apos;, &apos;{{#taint}}CP{{/taint}}&apos;, &apos;&apos;, null, &apos;T&apos;, &apos;ADPTVT&apos;, to_date(&apos;2017062214..."
} ],
"nested" : false
}
. . .
A: This will work robustly in any awk:
awk '/"codeView"/{close(out); out="_temp" ++c ".txt"} out!=""{print > out}' file
A: Try:
csplit -f _temp -b %d.tmp file '/codeView/' '{*}'
Or, if the data comes from some other program:
my_api | csplit -f _temp -b %d.tmp - '/codeView/' '{*}'
How it works
*
*-f _temp -b %d.tmp
These two options sets the names of the split files to format that you want.
*file
Replace this with the name of your input file. Use - if input is to come from stdin.
*/codeView/
This is the regex that you want to split on.
*'{*}'
This tells csplit not to stop at the first match but to keep splitting.
A: awk to the rescue!
$ awk '/"codeView"/{c++} {print > ("_temp" (c+0) ".txt")}' file
the header up to the first match will be in the 0th temp file. If there is a chance that the key may appear in the content, perhaps change the pattern match to a literal match: $1=="\"codeView\""
you can pipe in the data to the awk script instead of reading from a file as well.
If there are too many files opened, you may want to close them before it errs out.
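A quick self-contained demo of the split on toy input (the file names follow the answers above; the toy records are made up):

```shell
cd "$(mktemp -d)"
# Toy input: a header line, then two records that each start at "codeView"
printf '%s\n' 'header line' \
  '"codeView" : { "a" : 1 }' 'body of record 1' \
  '"codeView" : { "b" : 2 }' 'body of record 2' > sample.txt

# Same one-liner as the awk answer: c is 0 until the first match,
# so the header lands in _temp0.txt
awk '/"codeView"/{c++} {print > ("_temp" (c+0) ".txt")}' sample.txt

head _temp0.txt _temp1.txt _temp2.txt
```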
| |
doc_1946
|
R=np.array([[1.05567452e+11, 1.51583103e+11, 5.66466172e+08],
[6.94076420e+09, 1.96129124e+10, 1.11642674e+09],
[1.88618492e+10, 1.73640817e+10, 4.84980874e+09]])
Remove = [(0, 1),(0,2)]
R1 = R.flatten()
print([R1])
The desired output is
array([1.05567452e+11, 6.94076420e+09, 1.96129124e+10, 1.11642674e+09,
1.88618492e+10, 1.73640817e+10, 4.84980874e+09])
A: One option is to use numpy.ravel_multi_index to get the index of Remove in the flattened array, then delete them using numpy.delete:
out = np.delete(R, np.ravel_multi_index(tuple(zip(*Remove)), R.shape))
Another could be to replace the values in Remove, then flatten R and filter these elements out:
R[tuple(zip(*Remove))] = R.max() + 1
arr = R.ravel()
out = arr[arr<R.max()]
Output:
array([1.05567452e+11, 6.94076420e+09, 1.96129124e+10, 1.11642674e+09,
1.88618492e+10, 1.73640817e+10, 4.84980874e+09])
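As a quick sanity check of how ravel_multi_index turns the (row, col) pairs into flat indices (same R and Remove as above):

```python
import numpy as np

R = np.array([[1.05567452e+11, 1.51583103e+11, 5.66466172e+08],
              [6.94076420e+09, 1.96129124e+10, 1.11642674e+09],
              [1.88618492e+10, 1.73640817e+10, 4.84980874e+09]])
Remove = [(0, 1), (0, 2)]

# (0, 1) and (0, 2) map to flat positions 0*3+1 = 1 and 0*3+2 = 2
flat = np.ravel_multi_index(tuple(zip(*Remove)), R.shape)
print(flat)      # [1 2]
out = np.delete(R, flat)
print(out.size)  # 7 of the original 9 values remain
```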
A: R = np.array([[1.05567452e+11, 1.51583103e+11, 5.66466172e+08],
[6.94076420e+09, 1.96129124e+10, 1.11642674e+09],
[1.88618492e+10, 1.73640817e+10, 4.84980874e+09]])
R1 = np.delete(R, (1, 2))
print([R1])
A: You can do this with list comprehension:
import numpy as np
R=np.array([[1.05567452e+11, 1.51583103e+11, 5.66466172e+08],
[6.94076420e+09, 1.96129124e+10, 1.11642674e+09],
[1.88618492e+10, 1.73640817e+10, 4.84980874e+09]])
Remove = [(0, 1),(0,2)]
b = [[j for i, j in enumerate(m) if (k, i) not in Remove] for k, m in enumerate(R)]
R1 = np.array([i for j in b for i in j]) #Flatten the resulting list
print(R1)
Output
array([1.05567452e+11, 6.94076420e+09, 1.96129124e+10, 1.11642674e+09,
1.88618492e+10, 1.73640817e+10, 4.84980874e+09])
A: import numpy as np
R = np.array([[1.05567452e+11, 1.51583103e+11, 5.66466172e+08],
[6.94076420e+09, 1.96129124e+10, 1.11642674e+09],
[1.88618492e+10, 1.73640817e+10, 4.84980874e+09]])
Remove = [(0, 1), (0, 2)]
Remove = [R.shape[1]*i+j for (i, j) in Remove]
print(np.delete(R, Remove))
| |
doc_1947
|
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
I am trying to put these characters ù, é but they turn into �.
Please tell me why?
Thank you.
A:
I am trying to put these characters ù, é but they turn into �.
That is a pretty certain indicator that the text you output is not UTF-8 encoded as you say in the header. My guess would be it's ISO-8859-1 encoded.
This can be because
*
*The HTML file you are editing isn't UTF-8 encoded. Save it as UTF-8 - the option for that is often in the "Save As..." dialog of your editor or IDE.
*The database connection you are getting the text from isn't UTF-8 encoded.
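The effect itself is easy to reproduce at the byte level; here is a small Python sketch (Python is only used for demonstration, the browser does the same thing when decoding):

```python
# "ù,é" encoded as ISO-8859-1 produces the bytes b'\xf9,\xe9'.
# Read back as UTF-8, both accented characters are invalid byte sequences
# and get replaced with U+FFFD, the � replacement character.
raw = "ù,é".encode("latin-1")
decoded = raw.decode("utf-8", errors="replace")
print(decoded)  # �,�
```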
A: You need to save the HTML file in UTF-8 format. You can also add a lang="fr" attribute to your html tag.
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
A: You can use the following code in head section
<meta http-equiv="encoding" content="text/html" />
I think it will work for you.
| |
doc_1948
|
I'm trying to move <ReaderTypeID>5</ReaderTypeID> from under <SCPReplyMessage> to be under <SCPReplyMessage><tran>
The section of code where I take a node from outside tran and move it inside tran proved troublesome, and since I had to get it working, I resorted to a more comfortable (but inefficient) approach: string manipulation.
-- move ReaderTypeID from outside <tran> to be inside <tran>
DECLARE @rtidXml VARCHAR(100)
SELECT @rtidXml = CONVERT(VARCHAR(100),@ReplyMessageXml.query('/SCPReplyMessage/ReaderTypeID'))
DECLARE @st NVARCHAR(max)
SET @st = CONVERT(NVARCHAR(MAX),@tranXml)
SET @st = REPLACE(@st,'</tran>',@rtidXml + '</tran>')
SET @tranXml.modify('delete /SCPReplyMessage/ReaderTypeID')
I'd like to accomplish the same result without the CONVERT to and from XML.
Thanks!
the function:
CREATE FUNCTION dbo.udf_mTranAddl (@ReplyMessageXml XML)
returns XML
AS
BEGIN
DECLARE @tranXml XML
SELECT @tranXml = @ReplyMessageXml.query('/SCPReplyMessage/tran')
-- Discard extraneous tran elements
SET @tranXml.modify('delete /tran/ser_num')
SET @tranXml.modify('delete /tran/time')
SET @tranXml.modify('delete /tran/sys')
SET @tranXml.modify('delete /tran/sys_comm')
-- move ReaderTypeID from outside <tran> to be inside <tran>
DECLARE @rtidXml VARCHAR(100)
SELECT @rtidXml = CONVERT(VARCHAR(100),@ReplyMessageXml.query('/SCPReplyMessage/ReaderTypeID'))
DECLARE @st NVARCHAR(max)
SET @st = CONVERT(NVARCHAR(MAX),@tranXml)
SET @st = REPLACE(@st,'</tran>',@rtidXml + '</tran>')
SET @tranXml.modify('delete /SCPReplyMessage/ReaderTypeID')
RETURN CONVERT(xml, @st)
END
Input @ReplyMessageXml:
<SCPReplyMessage>
<ContDeviceID>5974</ContDeviceID>
<LocalTime>2019-08-29T12:35:43</LocalTime>
<Priority>false</Priority>
<ReaderTypeID>5</ReaderTypeID>
<Deferred>false</Deferred>
<tran>
<ser_num>147</ser_num>
<time>1567096543</time>
<source_type>9</source_type>
<source_number>0</source_number>
<tran_type>6</tran_type>
<tran_code>13</tran_code>
<sys>
<error_code>4</error_code>
</sys>
<sys_comm>
<current_primary_comm>123</current_primary_comm>
<current_alternate_comm>4</current_alternate_comm>
</sys_comm>
<c_id>
<format_number>4</format_number>
<cardholder_id>123</cardholder_id>
<floor_number>4</floor_number>
</c_id>
<oal>
<nData>AAAAAA==</nData>
</oal>
</tran>
<SCPId>99</SCPId>
<ReplyType>7</ReplyType>
<ChannelNo>-1</ChannelNo>
</SCPReplyMessage>
output (which is correct):
<tran>
<source_type>9</source_type>
<source_number>0</source_number>
<tran_type>6</tran_type>
<tran_code>13</tran_code>
<c_id>
<format_number>4</format_number>
<cardholder_id>123</cardholder_id>
<floor_number>4</floor_number>
</c_id>
<oal>
<nData>AAAAAA==</nData>
</oal>
<ReaderTypeID>5</ReaderTypeID>
</tran>
FINAL RESULT:
Thanks to @PeterHe
CREATE FUNCTION dbo.udf_mTranAddl (@ReplyMessageXml XML)
returns XML
AS
BEGIN
DECLARE @tranXml XML
SELECT @tranXml = @ReplyMessageXml.query('/SCPReplyMessage/tran')
-- Discard extraneous tran elements
SET @tranXml.modify('delete /tran/ser_num')
SET @tranXml.modify('delete /tran/time')
SET @tranXml.modify('delete /tran/sys')
SET @tranXml.modify('delete /tran/sys_comm')
-- move ReaderTypeID from outside <tran> to be inside <tran>
DECLARE @x1 xml;
SELECT @x1 = @ReplyMessageXml.query('SCPReplyMessage/ReaderTypeID');
SET @tranXml.modify('insert sql:variable("@x1") into (/tran)[1]')
SET @tranXml.modify('delete /SCPReplyMessage/ReaderTypeID')
RETURN @tranXml
END
GO
A: You can do it using XQuery:
DECLARE @x xml = '<SCPReplyMessage>
<ContDeviceID>5974</ContDeviceID>
<LocalTime>2019-08-29T12:35:43</LocalTime>
<Priority>false</Priority>
<ReaderTypeID>5</ReaderTypeID>
<Deferred>false</Deferred>
<tran>
<ser_num>147</ser_num>
<time>1567096543</time>
<source_type>9</source_type>
<source_number>0</source_number>
<tran_type>6</tran_type>
<tran_code>13</tran_code>
<sys>
<error_code>4</error_code>
</sys>
<sys_comm>
<current_primary_comm>123</current_primary_comm>
<current_alternate_comm>4</current_alternate_comm>
</sys_comm>
<c_id>
<format_number>4</format_number>
<cardholder_id>123</cardholder_id>
<floor_number>4</floor_number>
</c_id>
<oal>
<nData>AAAAAA==</nData>
</oal>
</tran>
<SCPId>99</SCPId>
<ReplyType>7</ReplyType>
<ChannelNo>-1</ChannelNo>
</SCPReplyMessage>'
DECLARE @output xml;
SELECT @output = @x.query('/SCPReplyMessage/tran');
SET @Output.modify('delete(/tran/ser_num)');
SET @Output.modify('delete(/tran/time)');
SET @Output.modify('delete(/tran/sys)');
SET @Output.modify('delete(/tran/sys_comm)');
DECLARE @x1 xml;
SELECT @x1 = @x.query('SCPReplyMessage/ReaderTypeID');
SET @output.modify('insert sql:variable("@x1") into (/tran)[1]')
SELECT @output;
A: Here is a much easier way, using an XQuery FLWOR expression. The main idea is to construct what you need in one single statement instead of moving, deleting, and inserting.
SQL
DECLARE @xml XML =
N'<SCPReplyMessage>
<ContDeviceID>5974</ContDeviceID>
<LocalTime>2019-08-29T12:35:43</LocalTime>
<Priority>false</Priority>
<ReaderTypeID>5</ReaderTypeID>
<Deferred>false</Deferred>
<tran>
<ser_num>147</ser_num>
<time>1567096543</time>
<source_type>9</source_type>
<source_number>0</source_number>
<tran_type>6</tran_type>
<tran_code>13</tran_code>
<sys>
<error_code>4</error_code>
</sys>
<sys_comm>
<current_primary_comm>123</current_primary_comm>
<current_alternate_comm>4</current_alternate_comm>
</sys_comm>
<c_id>
<format_number>4</format_number>
<cardholder_id>123</cardholder_id>
<floor_number>4</floor_number>
</c_id>
<oal>
<nData>AAAAAA==</nData>
</oal>
</tran>
<SCPId>99</SCPId>
<ReplyType>7</ReplyType>
<ChannelNo>-1</ChannelNo>
</SCPReplyMessage>';
SELECT @xml.query('<tran>{
for $x in /SCPReplyMessage/tran
return ($x/source_type,
$x/source_number,
$x/tran_type,
$x/tran_code,
$x/c_id,
$x/oal,
$x/../ReaderTypeID)
}</tran>');
| |
doc_1949
|
const request = new XMLHttpRequest();
request.open(
"GET",
"https://cricheroes.in/api/v1/tournament/get-tournaments/-1?pagesize=12&status=-1"
);
let objFromJson = "";
request.onreadystatechange = () => {
if (request.readyState === 4 && request.status === 200) {
objFromJson = JSON.parse(request.response)
console.log(objFromJson)
for (let i = 0; i < objFromJson.data.length; i++) {
tournamentsList.innerHTML += ` <li class="tournaments-name">${objFromJson.data[i].name}</li>`;
}
}
};
request.send();
let url = "";
url = objFromJson.page.next;
console.log(url);
I want the last three lines of code (from defining the let variable to logging it out) to be executed after the callback for the onreadystatechange event has run. How can I achieve this?
A: Once readyState === 4 you have the data, so you can continue your logic from inside that callback.
Your question may be incomplete. If you're trying to modify an href or src attribute, you also need to call setAttribute on the element you want to change. The following code only assigns the value to url:
const tournamentsList = document.getElementById('tournaments-name-list');
const request = new XMLHttpRequest();
let url = "";
request.open(
"GET",
"https://cricheroes.in/api/v1/tournament/get-tournaments/-1?pagesize=12&status=-1"
);
let objFromJson = "";
request.onreadystatechange = () => {
if (request.readyState === 4 && request.status === 200) {
objFromJson = JSON.parse(request.response)
console.log(objFromJson)
for (let i = 0; i < objFromJson.data.length; i++) {
tournamentsList.innerHTML += ` <li class="tournaments-name">${objFromJson.data[i].name}</li>`;
}
url = objFromJson.page.next
console.log(url);
}
};
request.send();
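An alternative sketch: wrap the request in a Promise so follow-up code is guaranteed to run only after the response has arrived. Here getTournaments and the mock payload are hypothetical stand-ins for the XHR logic:

```javascript
// In the real code, resolve(JSON.parse(request.response)) would be called
// inside the onreadystatechange handler once readyState === 4.
function getTournaments() {
  return new Promise((resolve) => {
    setTimeout(() => resolve({ data: [{ name: "Cup" }], page: { next: "/page/2" } }), 0);
  });
}

getTournaments().then((objFromJson) => {
  const url = objFromJson.page.next; // runs only after the data is available
  console.log(url); // "/page/2"
});
```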
| |
doc_1950
|
function ced_out_of_stock_products() {
if ( WC()->cart->is_empty() ) {
return;
}
$removed_products = [];
foreach ( WC()->cart->get_cart() as $cart_item_key => $cart_item ) {
$product_obj = $cart_item['data'];
if ( ! $product_obj->is_in_stock() ) {
WC()->cart->remove_cart_item( $cart_item_key );
$removed_products[] = $product_obj;
}
}
if (!empty($removed_products)) {
wc_clear_notices();
foreach ( $removed_products as $idx => $product_obj ) {
$product_name = $product_obj->get_title();
//your notice here
$msg = sprintf( __( "The product '%s' was removed from your cart because it is out of stock.", 'woocommerce' ), $product_name);
wc_add_notice( $msg, 'error' );
}
?>
<script type="text/javascript">
(function($){
$('body').trigger('update_checkout');
})(jQuery);
</script>
<?php
}
}
add_action( 'woocommerce_check_cart_items', 'ced_out_of_stock_products', 9 );
| |
doc_1951
|
A: Cassandra is built for speed and performance: it does not support joins, and the WHERE clause is disabled by default on non-primary-key columns, since that kind of filtering hurts performance.
Cassandra's modeling rules are not the same as those of relational databases. In Cassandra you should model your tables according to your queries, not according to your entities and relationships.
The key principles when modeling data in Cassandra are:
*
*know your data.
*know your queries
*denormalize data.
The steps to model your data in Cassandra are:
*
*Conceptual data model and Application workflow (queries)
*Logical data model.
*Physical data model.
I know this may not make much sense to you yet; the point is that Cassandra modeling is different from relational-database modeling.
To learn more about this topic and get a solid understanding, here is a complete free course about Cassandra data modeling provided by DataStax.
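As a tiny illustration of query-first modeling, suppose the application's query is "get comments for a video, newest first" (table and column names here are hypothetical):

```sql
-- The table is shaped around one query: partitioned by the lookup key,
-- clustered in the order the application wants the results.
CREATE TABLE comments_by_video (
    video_id   uuid,
    created_at timeuuid,
    user_name  text,
    comment    text,
    PRIMARY KEY ((video_id), created_at)
) WITH CLUSTERING ORDER BY (created_at DESC);
```

A relational design would normalize comments and users into separate tables and join them; here the data is denormalized so the one query hits a single partition.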
| |
doc_1952
|
I integrated OpenIddict (OAuth) for authorization since IdentityServer needs a custom license on the latest .net version.
The problem I currently face only happens in production: when OpenIddict redirects to the login page with the custom route parameters, the URL is missing the custom port I'm using in production. On localhost (dev / local machine) the correct port is supplied (https://localhost:7115), but in production I'm redirected to https://example.com/Identity/Account/Login instead of https://example.com:9999/Identity/Account/Login.
When I'm changing the URL manually everything works fine and I can correctly login to my application.
The OpenIddict Server settings:
builder.Services.AddDefaultIdentity<ApplicationUser>()
.AddRoles<IdentityRole>()
.AddEntityFrameworkStores<ApplicationDbContext>()
.AddDefaultTokenProviders();
builder.Services.Configure<IdentityOptions>(options =>
{
options.ClaimsIdentity.UserNameClaimType = OpenIddictConstants.Claims.Name;
options.ClaimsIdentity.UserIdClaimType = OpenIddictConstants.Claims.Subject;
options.ClaimsIdentity.RoleClaimType = OpenIddictConstants.Claims.Role;
});
builder.Services.AddQuartz(options =>
{
options.UseMicrosoftDependencyInjectionJobFactory();
options.UseSimpleTypeLoader();
options.UseInMemoryStore();
});
builder.Services.AddQuartzHostedService(options => options.WaitForJobsToComplete = true);
builder.Services.AddOpenIddict()
.AddCore(options =>
{
options.UseEntityFrameworkCore()
.UseDbContext<ApplicationDbContext>();
options.UseQuartz();
})
.AddServer(options =>
{
options.SetIssuer(new Uri(publicHostFullUrl));
options.SetAuthorizationEndpointUris(authorizationEndpoint)
.SetLogoutEndpointUris(logoutEndpoint)
.SetTokenEndpointUris(tokenEndpoint)
.SetUserinfoEndpointUris(userInfoEndpoint);
options.RegisterScopes(OpenIddictConstants.Scopes.Email, OpenIddictConstants.Permissions.Scopes.Profile, OpenIddictConstants.Permissions.Scopes.Roles);
options.AllowAuthorizationCodeFlow()
.AllowRefreshTokenFlow();
options.AddEncryptionCertificate(certificate)
.AddSigningCertificate(certificate);
options.UseAspNetCore()
.EnableAuthorizationEndpointPassthrough()
.EnableLogoutEndpointPassthrough()
.EnableStatusCodePagesIntegration()
.EnableTokenEndpointPassthrough();
options.AcceptAnonymousClients();
options.DisableScopeValidation();
})
.AddValidation(options =>
{
options.SetIssuer(new Uri(publicHostFullUrl));
options.UseLocalServer();
options.UseAspNetCore();
});
The Client settings:
builder.Services.AddOidcAuthentication(options =>
{
options.ProviderOptions.ClientId = clientId;
options.ProviderOptions.Authority = $"{builder.HostEnvironment.BaseAddress}";
options.ProviderOptions.ResponseType = "code";
options.ProviderOptions.ResponseMode = "query";
options.AuthenticationPaths.RemoteRegisterPath = $"{builder.HostEnvironment.BaseAddress}Identity/Account/Register";
options.AuthenticationPaths.LogInCallbackPath = $"{builder.HostEnvironment.BaseAddress}/Identity/Account/Login";
options.AuthenticationPaths.LogInPath = $"{builder.HostEnvironment.BaseAddress}/Identity/Account/Login";
});
My Authorization Controller action where I return the ChallengeResult to the Login Page:
[HttpGet("~/connect/authorize")]
[HttpPost("~/connect/authorize")]
[IgnoreAntiforgeryToken]
public async Task<IActionResult> Authorize()
{
    var request = HttpContext.GetOpenIddictServerRequest() ??
        throw new InvalidOperationException("The OpenID Connect request cannot be retrieved.");
if (request.HasPrompt(Prompts.Login))
{
var prompt = string.Join(" ", request.GetPrompts().Remove(Prompts.Login));
var parameters = Request.HasFormContentType ? Request.Form.Where(parameter => parameter.Key != Parameters.Prompt).ToList() : Request.Query.Where(parameter => parameter.Key != Parameters.Prompt).ToList();
parameters.Add(KeyValuePair.Create(Parameters.Prompt, new StringValues(prompt)));
return Challenge(
authenticationSchemes: IdentityConstants.ApplicationScheme,
properties: new AuthenticationProperties
{
RedirectUri = Request.PathBase + Request.Path + QueryString.Create(parameters)
});
}
var result = await HttpContext.AuthenticateAsync(IdentityConstants.ApplicationScheme);
if (!result.Succeeded || (request.MaxAge != null && result.Properties?.IssuedUtc != null &&
DateTimeOffset.UtcNow - result.Properties.IssuedUtc > TimeSpan.FromSeconds(request.MaxAge.Value)))
{
if (request.HasPrompt(Prompts.None))
{
return Forbid(
authenticationSchemes: OpenIddictServerAspNetCoreDefaults.AuthenticationScheme,
properties: new AuthenticationProperties(new Dictionary<string, string>
{
[OpenIddictServerAspNetCoreConstants.Properties.Error] = Errors.LoginRequired,
[OpenIddictServerAspNetCoreConstants.Properties.ErrorDescription] = "The user is not logged in."
}!));
}
return Challenge(
authenticationSchemes: IdentityConstants.ApplicationScheme,
properties: new AuthenticationProperties
{
RedirectUri = Request.PathBase + Request.Path + QueryString.Create(
Request.HasFormContentType ? Request.Form.ToList() : Request.Query.ToList())
});
}
var user = await _userManager.GetUserAsync(result.Principal) ??
throw new InvalidOperationException("The user details cannot be retrieved.");
var application = await _applicationManager.FindByClientIdAsync(request.ClientId!) ??
throw new InvalidOperationException("Details concerning the calling client application cannot be found.");
var authorizations = await _authorizationManager.FindAsync(
subject: await _userManager.GetUserIdAsync(user),
client: (await _applicationManager.GetIdAsync(application))!,
status: Statuses.Valid,
type: AuthorizationTypes.Permanent,
scopes: request.GetScopes()).ToListAsync();
switch (await _applicationManager.GetConsentTypeAsync(application))
{
case ConsentTypes.External when !authorizations.Any():
return Forbid(
authenticationSchemes: OpenIddictServerAspNetCoreDefaults.AuthenticationScheme,
properties: new AuthenticationProperties(new Dictionary<string, string>
{
[OpenIddictServerAspNetCoreConstants.Properties.Error] = Errors.ConsentRequired,
[OpenIddictServerAspNetCoreConstants.Properties.ErrorDescription] =
"The logged in user is not allowed to access this client application."
}!));
case ConsentTypes.Implicit:
case ConsentTypes.External when authorizations.Any():
case ConsentTypes.Explicit when authorizations.Any() && !request.HasPrompt(Prompts.Consent):
var principal = await _signInManager.CreateUserPrincipalAsync(user);
principal.SetScopes(request.GetScopes());
principal.SetResources(await _scopeManager.ListResourcesAsync(principal.GetScopes()).ToListAsync());
var authorization = authorizations.LastOrDefault();
if (authorization == null)
{
authorization = await _authorizationManager.CreateAsync(
principal: principal,
subject: await _userManager.GetUserIdAsync(user),
client: (await _applicationManager.GetIdAsync(application))!,
type: AuthorizationTypes.Permanent,
scopes: principal.GetScopes());
}
principal.SetAuthorizationId(await _authorizationManager.GetIdAsync(authorization));
foreach (var claim in principal.Claims)
{
claim.SetDestinations(GetDestinations(claim, principal));
}
return SignIn(principal, OpenIddictServerAspNetCoreDefaults.AuthenticationScheme);
case ConsentTypes.Explicit when request.HasPrompt(Prompts.None):
case ConsentTypes.Systematic when request.HasPrompt(Prompts.None):
return Forbid(
authenticationSchemes: OpenIddictServerAspNetCoreDefaults.AuthenticationScheme,
properties: new AuthenticationProperties(new Dictionary<string, string>
{
[OpenIddictServerAspNetCoreConstants.Properties.Error] = Errors.ConsentRequired,
[OpenIddictServerAspNetCoreConstants.Properties.ErrorDescription] =
"Interactive user consent is required."
}!));
default:
return View(new AuthorizeViewModel
{
ApplicationName = await _applicationManager.GetDisplayNameAsync(application),
Scope = request.Scope
});
}
}
[Authorize, FormValueRequired("submit.Accept")]
[HttpPost("~/connect/authorize"), ValidateAntiForgeryToken]
public async Task<IActionResult> Accept()
{
var request = HttpContext.GetOpenIddictServerRequest() ??
throw new InvalidOperationException("The OpenID Connect request cannot be retrieved.");
var user = await _userManager.GetUserAsync(User) ??
throw new InvalidOperationException("The user details cannot be retrieved.");
var application = await _applicationManager.FindByClientIdAsync(request.ClientId!) ??
throw new InvalidOperationException("Details concerning the calling client application cannot be found.");
var authorizations = await _authorizationManager.FindAsync(
subject: await _userManager.GetUserIdAsync(user),
client: (await _applicationManager.GetIdAsync(application))!,
status: Statuses.Valid,
type: AuthorizationTypes.Permanent,
scopes: request.GetScopes()).ToListAsync();
if (!authorizations.Any() && await _applicationManager.HasConsentTypeAsync(application, ConsentTypes.External))
{
return Forbid(
authenticationSchemes: OpenIddictServerAspNetCoreDefaults.AuthenticationScheme,
properties: new AuthenticationProperties(new Dictionary<string, string>
{
[OpenIddictServerAspNetCoreConstants.Properties.Error] = Errors.ConsentRequired,
[OpenIddictServerAspNetCoreConstants.Properties.ErrorDescription] =
"The logged in user is not allowed to access this client application."
}!));
}
var principal = await _signInManager.CreateUserPrincipalAsync(user);
principal.SetScopes(request.GetScopes());
principal.SetResources(await _scopeManager.ListResourcesAsync(principal.GetScopes()).ToListAsync());
var authorization = authorizations.LastOrDefault();
if (authorization == null)
{
authorization = await _authorizationManager.CreateAsync(
principal: principal,
subject: await _userManager.GetUserIdAsync(user),
client: (await _applicationManager.GetIdAsync(application))!,
type: AuthorizationTypes.Permanent,
scopes: principal.GetScopes());
}
principal.SetAuthorizationId(await _authorizationManager.GetIdAsync(authorization));
foreach (var claim in principal.Claims)
{
claim.SetDestinations(GetDestinations(claim, principal));
}
return SignIn(principal, OpenIddictServerAspNetCoreDefaults.AuthenticationScheme);
}
UPDATE
My nginx configuration, for a better understanding of the hosting environment:
server {
listen 9999 ssl;
server_name example.com;
ssl_certificate /etc/../certificate.crt;
ssl_certificate_key /etc/../key.key;
location / {
proxy_pass https://example.com:10000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Update 2
I'm still facing this issue. After some more testing and guessing around, I found out that the redirect to the registration page works correctly when options.AuthenticationPaths.RemoteRegisterPath in the client's Program.cs is set to the absolute URL. There is no property to set the login path to its absolute URL; there is only LogInPath, but no RemoteLogInPath like the one used for the registration page.
Any advice on what I am missing or getting wrong is greatly appreciated.
A: Your issue stems from using Request.PathBase to build your redirect URL; PathBase is normally an empty string. Instead, you could use Request.Host to get the server name and port for the redirect.
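A sketch of that change in the Authorize action (hypothetical; adjust to your pipeline):

```csharp
// Sketch: build an absolute redirect URI from the incoming request so the
// host *and* port survive the redirect. Request.Host includes a non-default
// port (e.g. example.com:9999) when one is present.
return Challenge(
    authenticationSchemes: IdentityConstants.ApplicationScheme,
    properties: new AuthenticationProperties
    {
        RedirectUri = $"{Request.Scheme}://{Request.Host}{Request.PathBase}{Request.Path}{QueryString.Create(parameters)}"
    });
```

Behind nginx, Request.Host reflects whatever the proxy forwards; the ASP.NET Core forwarded-headers middleware (UseForwardedHeaders) can restore the original scheme and host from the X-Forwarded-* headers.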
| |
doc_1953
|
When making calls from Dapper, EF intercepts them and the Dapper calls fail.
I think this is because somewhere in the EF setup an IDbCommandInterceptor has been implemented, and it is somehow also intercepting the plain database connections used by Dapper.
Would anyone know of a way to get EF to ignore the connections being used by Dapper, i.e. to leave them out of its interception?
A: I found out that the reason this was happening was that the ASP.NET Zero framework implements a UnitOfWork pattern which was intercepting the calls.
Thanks to everyone for the replies, sorry for the late update, work seems to get in the way of everything these days.
Kind Regards
| |
doc_1954
|
LANG=C gpg2 --verbose --symmetric
gpg: using cipher CAST5
I don't know why it doesn't use the new AES-128 default. But I guess I don't have the latest release available (nor does the package manager on my Debian system).
I would really like to configure GPG to use AES for encryption by setting the corresponding values in the gpg.conf file mentioned here and on the GPG homepage. So I tried to fetch the information about the file location by using gpgme_get_engine_info and looking at home_dir, but this path seems to be empty. The file_name is /usr/bin/gpg2. But there is no gpg.conf there; strictly speaking, there is no gpg.conf on my system at all.
So what should I do if this file is missing? Or is there a way to set the value programmatically with GPGME? I only found
gpgme_ctx_set_engine_info (gpgme_ctx_t ctx, gpgme_protocol_t proto, const char *file_name, const char *home_dir)
So I could create a new config and set the path with this function. But this presumes that I know what this file looks like. Sadly I don't.
A: I found a good example of a gpg.conf file which shows how to set the preferred values for the ciphers and hash functions. This is an excerpt:
# list of personal cipher preferences. When multiple ciphers are supported by
# all recipients, choose the strongest one
personal-cipher-preferences AES256 AES192 AES CAST5
# list of personal digest preferences. When multiple digests are supported by
# all recipients, choose the strongest one
personal-digest-preferences SHA512 SHA384 SHA256 SHA224
# message digest algorithm used when signing a key
cert-digest-algo SHA512
# This preference list is used for new keys and becomes the default for
# "setpref" in the edit menu
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed
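The excerpt above can be dropped into a config file like this (a sketch using a throwaway GNUPGHOME; in practice the file lives at ~/.gnupg/gpg.conf, and cipher-algo additionally forces the cipher used for symmetric encryption):

```shell
# Create the config file GnuPG reads on startup; a temporary home
# directory is used here so nothing on the real system is touched.
export GNUPGHOME="$(mktemp -d)"
printf 'cipher-algo AES256\n' > "$GNUPGHOME/gpg.conf"
cat "$GNUPGHOME/gpg.conf"   # cipher-algo AES256
```

With GNUPGHOME set (or the file placed in ~/.gnupg), subsequent gpg2 --symmetric runs should report AES256 instead of CAST5 in verbose mode.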
| |
doc_1955
|
Running the program on a computer with a high-resolution (3840x2160) screen seems to work fine as well: everything is scaled correctly and is readable. Until, that is, I open a data file and display the results on the Map or Chart control. Then the whole program suddenly rescales to about 50% of its original size and all fonts become unreadably small. Changing the font size or resolution via the Windows display settings has no effect. What could cause this runtime scaling, and how can I work around it?
A: After following all the best practices described in this post, it still did not work. However, another comprehensive explanation on the Telerik website put me on the right track.
The problem was caused by the GMap control, but could be caused by any third party control that was designed on a different system. The control contained a scaling setting in the Designer.cs file:
this.AutoScaleDimensions = new System.Drawing.SizeF(8F, 16F);
This setting is added automatically by the IDE when you create a new control and is based on the DPI settings of the system it is created on. On a 96 DPI system this is always
this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F);
So all the forms I had created for my app had this 96DPI based setting, but the GMap control had a different setting. And when the GMap control was redrawn it caused the whole application to scale using the incorrect AutoScaleDimensions. The solution is simply to search for all occurrences of AutoScaleDimensions and set them all to (6F,13F).
| |
doc_1956
|
html {
min-height: 100%;
position: relative;
}
body {
background-image: url("images/background.png");
background-repeat: repeat;
margin: 0 0 64px;
}
#sticky {
background-color: black;
height: 44px;
position: absolute;
bottom: 0;
left: 0;
right: 0;
}
#test {
width:50%;
height: 600px;
margin:50px auto;
background-color:white;
border: 1px solid #000000;
}
And the following HTML code:
<!DOCTYPE html>
<html lang="en">
<head>
<meta name="viewport" content="width=device-width, user-scalable=no" />
<meta charset="utf-8"/>
<title>test</title>
<script src="https://code.jquery.com/jquery-latest.min.js"></script>
<script>
$(document).ready(function() {
$(document).on('click', '#test', function() {
$('#test').height('300px');
});
});
</script>
</head>
<body>
<div id="test">
<table style="height:600px;"></table>
</div>
<div id="sticky"></div>
</body>
</html>
1) The element called sticky is a sticky footer
2) The element called test is a div that gets resized on click
If I load the page, scroll till the address bar is hidden, and then click on the test div, the element gets resized and an extra space (equivalent to the height of the address bar) is added below the sticky footer.
Scrolling back to show the address bar causes the page to jump quickly and the extra space gets removed.
I noticed this strange behavior only in Android Chrome (haven't tested other browsers on Android) and I tested on multiple Android devices and even the Android emulator does the same. The issue is not there when tested on iOS simulator and desktop Chrome.
These two screenshots demonstrate the issue:
Is this a well known bug in Android Chrome or is it the expected behavior? Is there a solution for this annoying behavior?
| |
doc_1957
|
This is how a working example could look:
This works great using absolute positioning on both children, but it requires a fixed height for the first child ("header" in the example above). Whenever the first child expands, the second one will of course overlap it.
See jsfiddle instances small header and large header for example code.
Do you have any idea how to design this in a way the margins still apply no matter how large the header grows?
Thanks in advance!
A: You have a few options:
*
*Limit the length of the headers so they are always 1 line (using text-overflow)
*Use Javascript to change the height of the content minus the height of the header.
*Use a server-side language to calculate the title's length and adjust the content
*Have the entire content area scroll so the header may not always be visible.
Personally I like limiting the length of the headers and just hiding the overflow, but that's not always an option.
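As a present-day alternative not in the list above, flexbox handles a variable-height header without JavaScript (class names here are hypothetical):

```css
/* The header takes the height of its content; the content area fills the rest. */
.wrapper { display: flex; flex-direction: column; height: 100%; }
.header  { flex: 0 0 auto; }
.content { flex: 1 1 auto; overflow: auto; }
```

Because the content area is sized by the flex layout rather than absolute positioning, the margins keep applying no matter how tall the header grows.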
| |
doc_1958
|
For example: my table has 1000 records in it.
I have updated the value of the 500th record of table X.
So, I want the ID 500 in return.
Thanks in advance.
Regards,
A: I don't think this feature exists with MySQL. However you can get the same effect by adding a timestamp column:
ALTER TABLE yourtable
ADD COLUMN last_update TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
This would make the last_update column pretty much self-maintaining. Now you can select the last-updated row from yourtable based on the timestamp.
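With that column in place, the most recently updated row can then be fetched like this (assuming yourtable has an id column):

```sql
-- Returns the id of the row whose last_update timestamp is newest.
SELECT id
FROM yourtable
ORDER BY last_update DESC
LIMIT 1;
```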
| |
doc_1959
|
var i = 1;
$('<input type="text" id="num" size="20" name="num[]"/>').appendTo(scntDiv);
How can I incorporate var i into the id, so that id="num" becomes id="num_1"?
A: Just append i into your generated HTML:
$('<input type="text" id="num_' + i + '" size="20" name="num[]"/>').appendTo(scntDiv);
| |
doc_1960
|
I have a function that takes a single argument and returns it. When I pass the parameter and reassign it using arguments[0] = value, it's updating the value.
function a(b) {
arguments[0] = 2;
return b;
}
console.log(a(1)); //returns 2
But when I call the same function with no parameters it returns undefined.
function a(b) {
arguments[0] = 2;
return b;
}
console.log(a()); //returns undefined
But even if I pass undefined, the value will update as well.
function a(b) {
arguments[0] = 2;
return b;
}
console.log(a(undefined)); //returns 2
I thought that if you do not pass a parameter to a JavaScript function, it automatically creates the parameter and assigns it the value undefined, and that after updating it, the updated value should be reflected, right?
Also, a() and a(undefined) are the same thing, right?
A: ECMA 262 9.0 2018 describes this behaviour in 9.4.4 Arguments Exotic Objects with
NOTE 1:
The integer-indexed data properties of an arguments exotic object whose numeric name values are less than the number of formal parameters of the corresponding function object initially share their values with the corresponding argument bindings in the function's execution context. This means that changing the property changes the corresponding value of the argument binding and vice-versa. This correspondence is broken if such a property is deleted and then redefined or if the property is changed into an accessor property. If the arguments object is an ordinary object, the values of its properties are simply a copy of the arguments passed to the function and there is no dynamic linkage between the property values and the formal parameter values.
In short,
*
*if in 'sloppy mode', then all arguments are mapped to their named variables, if the length correspond to the given parameter, or
*if in 'strict mode', then the binding is lost after handing over the arguments.
This is stated more readably in an older version, ECMA 262 7.0 (2016), which describes this behaviour in 9.4.4 Arguments Exotic Objects with
Note 1:
For non-strict functions the integer indexed data properties of an arguments object whose numeric name values are less than the number of formal parameters of the corresponding function object initially share their values with the corresponding argument bindings in the function's execution context. This means that changing the property changes the corresponding value of the argument binding and vice-versa. This correspondence is broken if such a property is deleted and then redefined or if the property is changed into an accessor property. For strict mode functions, the values of the arguments object's properties are simply a copy of the arguments passed to the function and there is no dynamic linkage between the property values and the formal parameter values.
A: Assigning to arguments indices will only change the associated argument value (let's call it the n-th argument) if the function was called with at least n arguments. The arguments object's numeric-indexed properties are essentially setters (and getters):
http://es5.github.io/#x10.6
Italics in the below are my comments on how the process relates to the question:
(Let) args (be) the actual arguments passed to the [[Call]] internal method
*
*Let len be the number of elements in args.
*Let indx = len - 1.
*Repeat while indx >= 0 (so, the below loop will not run when no arguments are passed to the function). Each iteration assigns to the arguments object being created, here called map:
*Add name as an element of the list mappedNames.
*Let g be the result of calling the MakeArgGetter abstract operation with arguments name and env.
*Let p be the result of calling the MakeArgSetter abstract operation with arguments name and env.
*Call the [[DefineOwnProperty]] internal method of map passing ToString(indx), the Property Descriptor {[[Set]]: p, [[Get]]: g, [[Configurable]]: true}, and false as arguments.
So, if the function is invoked with no arguments, there will not be a setter on arguments[0], so reassigning it won't change the parameter at index 0.
The same sort of thing occurs for other indices as well - if you invoke a function with 1 argument, but the function accepts two parameters, assigning to arguments[1] will not change the second parameter, because arguments[1] does not have a setter:
function fn(a, b) {
arguments[1] = 'bar';
console.log(b);
}
fn('foo');
So
a() and a(undefined) are the same thing right?
is not the case, because the second results in an arguments object with a setter and a getter on index 0, while the first doesn't.
Note that this odd interaction between the arguments and the function parameters is only present in sloppy mode. In strict mode, changes to arguments won't have any effect on the value an individual argument identifier contains:
'use strict';
function a(b) {
arguments[0] = 2;
return b;
}
console.log(a(1)); //returns 1
A: It's because arguments is not an Array; it's an object with integer-indexed data keys and a length property. If length equals zero, no arguments were passed, so there is nothing for arguments[0] to map back to:
function a(b) {
arguments[0] = 2;
console.log(arguments.length)
return b;
}
a(1); // length 1 returns 2
console.log(a()); // length 0 returns undefined
A: My understanding is that the arguments object only tracks what is passed into the function. Since you've not initially passed anything, b is not bound and at that point arguments is not 'tracking' b. Next, you assign a value to the initialised but empty Array-like object arguments and finally return b, which is undefined.
To delve into this further:
If a non-strict function does not contain rest, default, or destructured parameters, then the values in the arguments object do change in sync with the values of the argument variables. See the code below:
function func(a) {
arguments[0] = 99; // updating arguments[0] also updates a
console.log(a);
}
func(10); // 99
and
function func(a) {
a = 99; // updating a also updates arguments[0]
console.log(arguments[0]);
}
func(10); // 99
When a non-strict function does contain rest, default, or destructured parameters, then the values in the arguments object do not track the values of the arguments. Instead, they reflect the arguments provided when the function was called:
function func(a = 55) {
arguments[0] = 99; // updating arguments[0] does not also update a
console.log(a);
}
func(10); // 10
and
function func(a = 55) {
a = 99; // updating a does not also update arguments[0]
console.log(arguments[0]);
}
func(10); // 10
and
// An untracked default parameter
function func(a = 55) {
console.log(arguments[0]);
}
func(); // undefined
Source: MDN Web docs
A: When you do not provide any argument, the arguments object has length equal to 0. Assigning 2 to the non-existent element 0 then just creates a plain property; the parameter b was never bound, so undefined is returned.
You can simply test this with this snippet:
function a(b){
alert(arguments.length) // It will prompt 0 when calling a() and 1 when calling a(undefined)
arguments[0] = 2;
return b;
}
A: This is the undefined value definition from the JavaScript spec:
primitive value used when a variable has not been assigned a value.
So if a function does not explicitly return a value, it returns undefined.
So a() and a(undefined) are not the same thing: in the first call no argument binding is created at all, while in the second a binding exists whose value happens to be undefined.
For more clarification see this similar problem: similar_problem
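A quick runtime check (illustrative) that shows a() and a(undefined) really are distinguishable, independently of strict or sloppy mode:

```javascript
function probe(b) {
  // arguments only has an index 0 if an argument was actually passed,
  // so the `in` operator (and length) can tell the two calls apart.
  return { hasIndex0: '0' in arguments, length: arguments.length, b: b };
}

console.log(probe());          // → { hasIndex0: false, length: 0, b: undefined }
console.log(probe(undefined)); // → { hasIndex0: true, length: 1, b: undefined }
```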
| |
doc_1961
|
My code is
function redo() {
.
.
.
'<%Session["mins"] = "' + mins + '";%>';
alert('session min value after is ' + '<%=Session["mins"]%>');
}
The minute value in the session is shown in the alert, but when I access it in code-behind,
the session contains the literal string "' + mins + '".
My code is
protected void Page_Load(object sender, EventArgs e)
{
var aa = Convert.ToInt32(Session["mins"]);
}
Minutes value is not available in code behind.
How to get session value in code behind ?
A: Session is a serverside concept. The only way(*) for the client to know it is to ask the server (through inclusion in the originally rendered page, or through AJAX).
*) In some realisations of the session concept, the session itself is stored in the cookie, and the client could access and decode it there. This is not reliable, nor recommended, and not available in all frameworks.
A: Assign the value to a hidden server-side control from your JS code; after a postback you can read it in code-behind.
| |
doc_1962
|
This code works: createWorkout returns a Workout object, which is stored in the local variable test, and then the instance variable mWorkout is set from that.
public void startWorkout() {
Workout test = workoutFactory.createWorkout();
mWorkout = test;
}
Whereas this code doesn't:
public void startWorkout() {
mWorkout = workoutFactory.createWorkout();
}
mWorkout remains null even though createWorkout is still returning a Workout object.
Above code is slightly simplified for clarity.
A: Try qualifying mWorkout with this.
this.mWorkout = workoutFactory.createWorkout();
My assumption is that you have defined a local mWorkout that's shadowing your instance variable with the same name.
A: My bet is that somewhere in the non-working version of startWorkout you have declared a method-scope instance of mWorkout which is masking the instance field. If you tried this.mWorkout = ... you might get a different result.
| |
doc_1963
|
Say I need a visitor hits count for my website, the count would permanently increase on every visit.
The Problem
It is considered bad practice to change the server state with GET requests. And since I'd like to keep count of the number of users that entered my website, I'll have to store the state somehow.
How would I approach this? Should I break that practice and change the server on GET requests? Or is there some more elaborate scheme I can pull off?
A: The assertion you're making is generally true. But here, you want precisely to track that someone made a GET request, so handling it on such requests makes sense!
It's just a bit tricky in combination with a caching mechanism. Because the part that counts the visitor can't be cached, you always need server-side code to track the count.
Other solutions include:
*
*External tools like Google Analytics uses JavaScript with a tracking image (the retrieval of the image is a way to simulate the POST request, but it's just GET anyway), in combination with a cookie to track only unique visitors.
*Log analysis is another alternative. Web servers can write every request to a file, along with other information (such as IP address and User-Agent). Analyzing the access log can be a solution.
[edit] I particularly like the tracking image. Makes both solutions easier.
A: What I do to record visits is use PHP's auto_prepend_file or auto_append_file. In those files you have something like this:
error_reporting(0);
$conn = mysqli_connect('localhost', 'visitor_counter', 'password', 'visitors');
$ip = mysqli_real_escape_string($conn, $_SERVER['REMOTE_ADDR']);
// Note: this UPDATE assumes a row for this IP already exists;
// first-time visitors need an INSERT (e.g. INSERT ... ON DUPLICATE KEY UPDATE).
$query = "UPDATE `visits` SET `count` = `count` + 1 WHERE `ip` = '$ip'";
mysqli_query($conn, $query);
I use a shared host, so I have to use a .htaccess file like this:
php_value auto_prepend_file /php/head.php
Hope this helps!
| |
doc_1964
|
(i.e width: 20rem; & min-width: 20rem; vs min-width: 20rem; & max-width: 20rem;)
Using just width: 20rem; results in the sidebar shrinking from a change in width:
Expected width of the sidebar:
Sidebar shrinks when I would like it to remain the same size:
However using width: 20rem; & min-width: 20rem; seems to solve this issue.
Expected width of the sidebar:
Expected width of the sidebar remains:
Also using min-width: 20rem; & max-width: 20rem; seems to also solve this issue.
Expected width using max-width:
Expected width using max-width even when window size changes:
My overall question is which solution is preferred and what are the consequences of each as they both seem relatively the same
General css code
.messages-component {
.threads-sidebar {
@include box-shadow-helper(1);
background: white;
display: flex;
flex-direction: column;
width: 20rem;
.threads-sidebar-header {
@include box-shadow-helper(1);
display: flex;
min-height: 3em;
}
.threads-sidebar-body {
@include box-shadow-helper(1);
display: flex;
flex: 1;
flex-direction: column;
.test {
color: $mvn-btn-red;
}
}
}
}
A: min-width and max-width specify constraints on how small or large an element may be rendered. The sidebar shrinks because it is a flex item, and flex items may shrink below their width by default (flex-shrink defaults to 1); min-width puts a floor under that shrinking. Setting flex-shrink: 0 on the sidebar is another way to keep it at a fixed width.
A: width and min-width can work together. Most people use two different units when combining them, for example width: 90%; min-width: 600px. It is also worth noting that min-width overrides width.
W3 min-width
| |
doc_1965
|
<video loop muted autoplay>
are becoming like this
<video loop="" muted="" autoplay="">
i have no idea why this is the case, this is my code for copying the content:
/*copy to clipboard */
function copyToClipboard() {
// Create an auxiliary hidden input
var aux = document.createElement("input");
// Get the text from the element passed into the input
aux.setAttribute("value", document.getElementById('sampleeditor').innerHTML);
// Append the aux input to the body
document.body.appendChild(aux);
// Highlight the content
aux.select();
// Execute the copy command
document.execCommand("copy");
var tooltip = document.getElementById("myTooltip");
tooltip.innerHTML = "Copied";
// Remove the input from the body
document.body.removeChild(aux);
setTimeout(() => { tooltip.innerHTML = "Copy to clipboard"; }, 2000);
}
A: Because they are boolean attributes serialized with an empty value; innerHTML produces that form, and the copied markup will behave exactly the same in the browser.
If you really don't want it, you can use replace with a RegExp to filter ="":
el.innerHTML.replace(/=""/g, '')
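For example (the markup string here just mirrors the serialized attributes from the question):

```javascript
// innerHTML serializes boolean attributes as attr="", which this strips out
const html = '<video loop="" muted="" autoplay=""></video>';
const cleaned = html.replace(/=""/g, '');
console.log(cleaned); // → <video loop muted autoplay></video>
```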
| |
doc_1966
|
However I do not know in which module it is defined (if I did, I would just call getattr(module, var)), but I do know it's imported.
Should I go over every module and test if the class is defined there ? How do I do it in python ?
What if I have the module + class in the same var, how can I create an object from it ? (ie var = 'module.class')
Cheers,
Ze
A: globals()[classname] should do it.
More code: http://code.activestate.com/recipes/285262/
A: Classes are not added to a global registry in Python by default. You'll need to iterate over all imported modules and look for it.
A: Rather than storing the classname as a string, why don't you store the class object in the var, so you can instantiate it directly.
>>> class A(object):
... def __init__(self):
... print 'A new object created'
...
>>> class_object = A
>>> object = class_object()
A new object created
>>>
| |
doc_1967
|
$users = [
0 => [
'user_id' => 1,
'user_date' => '2017-04-26',
'user_name' => 'test',
],
1 => [
'user_id' => 2,
'user_date' => '2017-04-26',
'user_name' => 'test 2',
],
2 => [
'user_id' => 3,
'user_date' => '2017-04-28',
'user_name' => 'test 3',
]
];
While looping throug this array a want to group the users that has the same date. An example how the output should look like
Array
(
[0] => Array
(
[DATE] => 2017-04-26
[USERS] => Array
(
[0] => Array
(
[user_id] => 1
[user_title] => test
)
[1] => Array
(
[user_id] => 2
[user_title] => test 2
)
)
)
[1] => Array
(
[DATE] => 2017-04-28
[USERS] => Array
(
[0] => Array
(
[user_id] => 4
[user_title] => test 4
)
)
)
)
I have tried to do some things in a foreach loop but could not get it to work.
$result = array();
$i = 0;
// Start loop
foreach ($users as $user) {
// CHECK IF DATE ALREADY EXISTS
if(isset($result[$i]['DATE']) && $result[$i]['DATE'] == $user['user_date']){
$i++;
}
// FILL THE ARRAY
$result[$i] = [
'DATE' => $user['user_date'],
'USERS' => [
'user_id' => $user['user_id'],
'user_title' => $user['user_name'],
]
];
}
I've changed it a little bit to this:
foreach ($users as $user => $properties) {
foreach ($properties as $property => $value) {
if($property == 'user_date'){
if(empty($result[$value])){
$result[$i] = [];
}
$result[$i][] = [
'user_id' => $properties['user_id'],
'user_name' => $properties['user_name'],
];
$i++;
}
}
}
But how could I change the outer keys (dates) to numeric keys 0, 1, etc.?
A: $users = [
0 => [
'user_id' => 1,
'user_date' => '2017-04-26',
'user_name' => 'test',
],
1 => [
'user_id' => 2,
'user_date' => '2017-04-26',
'user_name' => 'test 2',
],
2 => [
'user_id' => 3,
'user_date' => '2017-04-28',
'user_name' => 'test 3',
]
];
$sorted = [];
foreach ($users as $user => $properties) {
foreach ($properties as $property => $value) {
if ($property =='user_date') {
if (empty($sorted[$value])) {
$sorted[$value] = [];
}
$sorted[$value][] = $users[$user];
}
}
}
var_dump($sorted);
Do a nested loop through your arrays and then check for the unique value you're looking for (in this case the user_date) and add that as a key in your sorted array. If the key exists add a new item (user) to that key, otherwise make the new key first. This way you have an array of dates each containing an array of users with that date.
A: If you want the exact output you showed (honestly, I like Ryan's answer better):
$result = array();
$i = 0;
// Start loop
foreach ($users as $user) {
// CHECK IF DATE ALREADY EXISTS AND IS NOT IN THE SAME GROUP
if (isset($result[$i]['DATE']) && $result[$i]['DATE'] != $user['user_date']){
$i++;
}
// STARTING A NEW GROUP
if(!isset($result[$i])) {
$result[$i] = array(
'DATE' => $user['user_date'],
'USERS' => array()
);
}
// FILL THE ARRAY (note the ending [] to add a new entry in this group's USERS array)
$result[$i]['USERS'][] = array(
'user_id' => $user['user_id'],
'user_title' => $user['user_name'],
);
}
A: There are a few ways to tackle your question. I always prefer to use PHP built-in functions, as there are a lot of them. This answer uses the PHP built-in function usort to sort your array in place. It takes two arguments: your array and a comparator function. usort will pass two array elements to the comparator. If you don't know about comparator functions: a comparator compares two objects and returns an integer 1, 0, or -1, which tells whether the first object is greater than, equal to, or less than the second, respectively. So pass in a comparator function that takes care of comparing the dates.
$users = [
0 => [
'user_id' => 1,
'user_date' => '2017-04-25',
'user_name' => 'test',
],
1 => [
'user_id' => 2,
'user_date' => '2017-04-26',
'user_name' => 'test 2',
],
2 => [
'user_id' => 3,
'user_date' => '2017-04-28',
'user_name' => 'test 3',
],
3 => [
'user_id' => 4,
'user_date' => '2017-04-28',
'user_name' => 'test 4',
],
4 => [
'user_id' => 5,
'user_date' => '2017-04-26',
'user_name' => 'test 5',
],
];
usort($users, function($user1, $user2){
    // Sort users in ascending order of date: if user 1 has a later date than
    // user 2, place him below user 2 in the array. The spaceship operator
    // returns -1/0/1 as usort expects (a boolean comparator is unreliable
    // and deprecated in PHP 8).
    return strtotime($user1['user_date']) <=> strtotime($user2['user_date']);
});
var_dump($users);
| |
doc_1968
|
alt text http://www.freeimagehosting.net/uploads/6f804323db.jpg
I am using XAML and control templates. It works fine, but the button receives the click event even if I click anywhere inside the rectangle formed by the geometry. I want the event to be caught only inside the geometry...
here is the xaml
<ControlTemplate x:Key="ButtonTemplate" TargetType="{x:Type Button}">
<Grid>
<Image>
<Image.Source>
<DrawingImage>
<DrawingImage.Drawing>
<GeometryDrawing x:Name="X" Geometry="M 0,0
A .8,.8 180 1 1 0,4
L 0,3
A .6,.6 180 1 0 0,1
L 0,0">
<GeometryDrawing.Pen>
<Pen Brush="Black" Thickness=".1" />
</GeometryDrawing.Pen>
<GeometryDrawing.Brush>
<LinearGradientBrush StartPoint="0,0" EndPoint="0,1">
<GradientStop Offset="0" Color="Blue"/>
<GradientStop Offset="1" Color="Red"/>
</LinearGradientBrush>
</GeometryDrawing.Brush>
</GeometryDrawing>
</DrawingImage.Drawing>
</DrawingImage>
</Image.Source>
</Image>
<Viewbox>
<ContentControl Margin="20" Content="{TemplateBinding Content}"/>
</Viewbox>
</Grid>
</ControlTemplate>
...
<Button Template="{StaticResource ButtonTemplate}" Click="Button_Click" HorizontalAlignment="Right">OK</Button>
Thanks for any Comments
A: Instead of using an Image, which will have rectangular bounds that are used for hit testing, you can use a Path element with your Geometry data. The Path will only do hit testing on the area defined by the outline. Whatever text or other content is set will also be clickable unless you set IsHitTestVisible="false" on the ContentPresenter.
<Button Content="OK">
<Button.Template>
<ControlTemplate TargetType="{x:Type Button}">
<Grid>
<Path Data="M 0,0
A .8,.8 180 1 1 0,4
L 0,3
A .6,.6 180 1 0 0,1
L 0,0" Stroke="Black" StrokeThickness="1" Stretch="Uniform">
<Path.Fill>
<LinearGradientBrush StartPoint="0,0" EndPoint="0,1">
<GradientStop Offset="0" Color="Blue"/>
<GradientStop Offset="1" Color="Red"/>
</LinearGradientBrush>
</Path.Fill>
</Path>
<Viewbox>
<ContentPresenter Margin="20" />
</Viewbox>
</Grid>
</ControlTemplate>
</Button.Template>
</Button>
| |
doc_1969
|
Can anybody tell me how to implement this so that the three plots have the same size?
A: You must set a fixed right margin with e.g. set rmargin at screen 0.85. That sets the right border of the plot at 85% of the image size:
set multiplot layout 3,1
set rmargin at screen 0.85
plot x
plot x
plot x linecolor palette
unset multiplot
set output
Output with 4.6.3:
See also the related question multiplot - stacking 3 graphs on a larger canvas.
Generic solution for fixed margins
If you want a layout with one row and three columns, can use the multiplot options margins and spacing to get three plots which have the same width:
set xlabel 'xlabel'
set ylabel 'ylabel'
set multiplot layout 1,3 margins 0.1,0.9,0.1,0.95 spacing 0.05
plot x
unset ylabel
plot x
plot x linecolor palette
unset multiplot
| |
doc_1970
|
reply_keyboard = [
['test']
]
reply_keyboard_markup = ReplyKeyboardMarkup(keyboard=reply_keyboard,one_time_keyboard=True,resize_keyboard=True,input_field_placeholder='pm')
update.message.reply_text('''test
''',reply_markup = reply_keyboard_markup)
A: You can build a reply markup in Telethon by nesting lists too, but instead of strings you must store telethon.tl.custom.button.Button instances; check the documentation here to get a better understanding of Button methods.
Building a simple keyboard as the showed in your image:
from telethon import Button
async def handler(event):
await event.respond(
'Hello!',
buttons=Button.text(
text=' Hello, World!',
resize=True,
single_use=True
)
)
You get this.
I think you cannot mix placeholder with a text keyboard in telethon. You can make a placeholder using Button.force_reply:
async def handler(event):
await event.respond(
'Hello!',
buttons=Button.force_reply(
single_use=True,
placeholder='Say something'
)
)
This is the result.
| |
doc_1971
|
I get my data from the $_POST, loop through the values and create an array called checklist.
if($_POST != ''):
$dataset = $_POST['data'];
$checklist = array();
$eventid='';
foreach ($dataset as $i => $row)
{
$uid = $row['box-id'];
$state = $row['box-state'] ;
$eventid = $row['e_id'];
$checklist[] = array('uid'=>$uid,
'state'=> $state);
}
Checklist has two fields, a uid and a state.
I then run a script that generates another array, called $updates. It loops through a different set of objects and outputs the data to populate the variables for $updates. The structure of $updates is as such.
$updates[] = array('uid'=>$uid,
'state'=> $state,
'class' => $class,
'container' => $button_cont,
'closer' => $button_closer);
What I would like to do is to compare $updates with $checklist.
I'd like to know the most efficient way to match the records by the uid and compare the state. If the state matches, I'd like to do nothing.
I've read a few of the articles on looping and search, but I'm thinking I've been looking at this for too long because it's Greek to me. Thanks for assistance.
A: Save the checklist like this -
$checklist[$uid] = $state;
same for updates
$updates[$uid] = array('state'=> $state,
'class' => $class,
'container' => $button_cont,
'closer' => $button_closer);
then start the loop
foreach ($updates as $key => $update) {
if ($update['state'] == $checklist[$key]) {
//your action
}//compare the values
}
$key will be the uid. Hope it will help you.
| |
doc_1972
|
I am passing by reference two objects of the base class to the method of the derived class and try to access the objects' protected member. However, the editor complains.
In sort, here is what I am trying to do:
class A {
protected:
int x;
};
class B:public A
{
public:
void test(A &obj1, A &obj2)
{
obj1.x = 1;
obj2.x = 2;
}
};
And this is the complain from the editor:
int A::x
protected member A::x (declared at line 5) is not accessible though "A" pointer or object.
What is wrong with my code and what can I do to correct it?
Thank you.
A: You can only access protected base-class members through objects whose type is the accessing derived class (or a class derived from it). You will have to make a public method to obtain the member, or find another workaround. Imagine another class C that also derives from A: an A reference passed to B's method could actually refer to a C, and B must not be allowed to touch C's protected state. If the references passed to the B method were B references, then you would be able to access the protected members through them.
A: Inside B, the protected member x of A is only accessible through objects of type B (or a class derived from B), not through arbitrary A references, so the member variable A::x is not accessible here.
However, if you change the method
void test(A &obj1, A &obj2)
to
void test(B &obj1, B &obj2)
Then you can access the variable x from class B, as its available as protected as inheritance is public.
So, the whole code can be written like follows for accessing x in class B:
class A {
protected:
int x;
};
class B:public A
{
public:
void test(B &obj1, B &obj2)
{
obj1.x = 1;
obj2.x = 2;
}
};
| |
doc_1973
|
...but I want the small number to be to the left of the image, to display a range. Code is as follows:
const div = L.DomUtil.create('div', 'info versuch');
div.innerHTML += `${this.min} <i style="background:linear-gradient(to right, #9bc8f6 0%, #08519c 100%);"></i> ${this.max}`;
CSS:
.info {
padding: 4px 4px 4px 5px;
background: rgba(255, 255, 255, 0.65);
border-radius: 5px;
}
.versuch {
line-height: 18px;
color: #555;
}
.versuch i {
height: 18px;
width: 200px;
float: left; /* image won't show at all if I don't set float */
opacity: 1.0;
}
A: Your float is the culprit, though it's doing what would be expected. The image didn't show without it because an i element is inline, and an inline element without content ignores width (as @IvanSanchez was saying). So change it to inline-block and remove the float, and voila. Cheers
// const div = L.DomUtil.create('div', 'info versuch');
const min = 1234,
max = 4321;
document.getElementById('blah').innerHTML = `${min} <i style="background:linear-gradient(to right, #9bc8f6 0%, #08519c 100%);"></i> ${max}`;
.info {
padding: 4px 4px 4px 5px;
background: rgba(255, 255, 255, 0.65);
border-radius: 5px;
}
.versuch {
line-height: 18px;
color: #555;
}
.versuch i {
height: 18px;
width: 200px;
display: inline-block;
}
<div id="blah" class="info versuch"></div>
| |
doc_1974
|
BUILD FAILED in 9s
error Failed to install the app. Make sure you have the Android development environment set up: https://reactnative.dev/docs/environment-setup.
Error: Command failed: gradlew.bat app:installDebug -PreactNativeDevServerPort=8081
C:\Users\...\node_modules\@react-native-firebase\messaging\android\src\main\java\io\invertase\firebase\messaging\ReactNativeFirebaseMessagingModule.java:34: error: cannot find symbol
import com.google.firebase.iid.FirebaseInstanceId;
^
symbol: class FirebaseInstanceId
location: package com.google.firebase.iid
C:\Users\...\node_modules\@react-native-firebase\messaging\android\src\main\java\io\invertase\firebase\messaging\ReactNativeFirebaseMessagingModule.java:121: error: cannot find symbol
.call(getExecutor(), () -> FirebaseInstanceId.getInstance().getToken(authorizedEntity, scope))
^
symbol: variable FirebaseInstanceId
location: class ReactNativeFirebaseMessagingModule
C:\Users\...\node_modules\@react-native-firebase\messaging\android\src\main\java\io\invertase\firebase\messaging\ReactNativeFirebaseMessagingModule.java:135: error: cannot find symbol
FirebaseInstanceId.getInstance().deleteToken(authorizedEntity, scope);
^
symbol: variable FirebaseInstanceId
location: class ReactNativeFirebaseMessagingModule
3 errors
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':react-native-firebase_messaging:compileDebugJavaWithJavac'.
> Compilation failed; see the compiler error output for details.
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
Tried:
*
*different gradle versions 6.9, 7.0, 7.0.1.
*different google-services versions 4.3.5, 4.3.6, 4.3.7
*applying google-services at the top and at the bottom of build.gradle app
*different gradle plugin versions 4.1.3, 4.2.0, 4.2.1
*deleting and reinstalling all node_modules
*cleaning build with gradlew clean and rebuilding again after each version change
Here's my npm list output:
├── @babel/[email protected]
├── @babel/[email protected]
├── @react-native-community/[email protected]
├── @react-native-community/[email protected]
├── @react-native-firebase/[email protected]
├── @react-native-firebase/[email protected]
├── @react-native-firebase/[email protected]
├── @react-native-firebase/[email protected]
├── @react-native-firebase/[email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
└── [email protected]
I'm overriding to latest firebase BOM 28.0.1. Also tried default @react-native-firebase 26.8.0 version, then app crashes on start with no error output at build or metro.
App has google-services.json at android/app and is registered in firebase.
Currently using gradle 6.9, plugin 4.1.3, and google-services 4.3.5.
First time the messaging is used at index.js, listening for background push messages, as stated in docs at https://rnfirebase.io/messaging/usage:
import {AppRegistry} from 'react-native';
import App from './App';
import {name as appName} from './app.json';
import messaging from '@react-native-firebase/messaging';
// Register background handler
messaging().setBackgroundMessageHandler(async (remoteMessage) => {
console.log('Message handled in the background!', remoteMessage);
});
AppRegistry.registerComponent(appName, () => App);
Later try to get device token and call cloud function:
var switchFunction = functions().httpsCallable(toggleFunctionName);
messaging()
.getToken()
.then((token) => {
switchFunction({
organization: encodeURI(parsedLocalData.organizationName),
user_email: encodeURI(parsedLocalData.userMai),
device_token: token,
})
.then((result) => {
// Read result of the Cloud Function.
var sanitizedMessage = result.data.text;
console.log('Firebase: ' + sanitizedMessage);
})
.catch((error) => {
// Getting the Error details.
console.log('Firebase error: ' + error);
});
});
A: Add this to your app/build.gradle under dependencies:
implementation 'com.google.firebase:firebase-messaging:21.1.0'
implementation 'com.google.firebase:firebase-iid'
And this to the top level build.gradle under ext {}
firebaseMessagingVersion = "21.1.0"
Also if you haven't done that already, under dependencies in your top level build.gradle :
classpath 'com.google.gms:google-services:4.3.8'
Finally, at the top of app/build.gradle :
apply plugin: "com.android.application"
apply plugin: 'com.google.gms.google-services'
| |
doc_1975
|
<div class="container">
Enter your values:<input type="text" multiple #inputCheck>
<input type="submit" (click)="sendInput(inputCheck.value)">
</div>
These inputs are to be stored in the following array.
arrayStored=[]
I have tried using the below code but the inputs are not divided in the array and the whole input is seen as a single element inside an array. I need to divide the input into multiple elements and store them inside the array.
sendInput(event: any) {
  this.inputGiven = event;
  this.arrayStored.push(this.inputGiven);
}
Example: If a user enters SAM,ALEX7,23 and clicks submit, the array should store it as arrayStored=["SAM","ALEX7","23"] but now it is being stored as arrayStored=["SAM,ALEX7,23"]. How do I split the input and store the parts as individual elements inside the array?
A: You can split the input and append the elements to the array like so:
this.arrayStored = this.arrayStored.concat(this.inputGiven.split(','));
(concat returns a new array, so assign its result back). To remove any duplicates from the array you can convert it into a Set and back into an array, like below:
this.arrayStored = Array.from(new Set(this.arrayStored));
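In plain JavaScript the whole pipeline looks like this (the array contents are illustrative):

```javascript
// Start with an existing entry to show that duplicates get removed
let arrayStored = ['SAM'];
const inputGiven = 'SAM,ALEX7,23';

// Split the comma-separated input into elements and append them
arrayStored = arrayStored.concat(inputGiven.split(','));

// Remove duplicates by round-tripping through a Set
arrayStored = Array.from(new Set(arrayStored));

console.log(arrayStored); // → [ 'SAM', 'ALEX7', '23' ]
```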
| |
doc_1976
|
using CUDA
function mean!(x, n, out)
"""out = sum(x, dims=2)"""
row_idx = (blockIdx().x-1) * blockDim().x + threadIdx().x
for i = 1:n
@inbounds out[row_idx] += x[row_idx, i]
end
out[row_idx] /= n
return
end
using Test
nrow, ncol = 1024, 10
x = CuArray{Float64, 2}(rand(nrow, ncol))
y = CuArray{Float64, 1}(zeros(nrow))
@cuda threads=256 blocks=4 mean!(x, size(x)[2], y)
@test isapprox(y, vec(sum(x, dims=2)) ./ ncol) # test passed
Also consider the following CUDA kernel
function add!(a, b, c)
""" c = a .+ b """
i = (blockIdx().x-1) * blockDim().x + threadIdx().x
c[i] = a[i] + b[i]
return
end
a = CuArray{Float64, 1}(zeros(nrow))
b = CuArray{Float64, 1}(ones(nrow))
c = CuArray{Float64, 1}(zeros(nrow))
@cuda threads=256 blocks=4 add!(a, b, c)
@test all(c .== a .+ b) # test passed
Now, suppose I wanted to write another kernel that uses the intermediate results of mean!(). For example,
function g(x, y)
""" mean(x, dims=2) + mean(y, dims=2) """
xrow, xcol = size(x)
yrow, ycol = size(y)
mean1 = CuArray{Float64, 1}(undef, xrow)
@cuda threads=256 blocks=4 mean!(x, xcol, mean1)
mean2 = CuArray{Float64, 1}(zeros(yrow))
@cuda threads=256 blocks=4 mean!(y, ycol, mean2)
out = CuArray{Float64, 1}(zeros(yrow))
@cuda threads=256 blocks=4 add!(mean1, mean2, out)
return out
end
(Of course, g() isn't technically a kernel since it returns something.)
My question is whether g() is "correct". In particular, is g() wasting time by transferring data between the GPU/CPU?
For example, if my understanding is correct, one way g() could be optimized is by initializing mean2 the same way we initialize mean1. This is because when constructing mean2, we're actually first creating zeros(yrow) on the CPU, then passing this to the CuArray constructor to be copied to the GPU. In contrast, mean1 is constructed but uninitialized (due to the undef) and therefore avoids this extra transfer.
To summarize, how do I save/use intermediate kernel results while avoiding data transfers between the CPU/GPU as much as possible?
A: You can generate arrays or vectors of zeros directly on GPU!
Try:
CUDA.zeros(Float64, nrow)
Some benchmarks:
julia> @btime CUDA.zeros(Float64, 1000,1000)
12.600 μs (26 allocations: 1.22 KiB)
1000×1000 CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}:
...
julia> @btime CuArray(zeros(1000,1000))
3.551 ms (8 allocations: 7.63 MiB)
1000×1000 CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}:
...
| |
doc_1977
|
def :::[B >: A](prefix: List[B]): List[B] =
if (isEmpty) prefix
else if (prefix.isEmpty) this
else (new ListBuffer[B] ++= prefix).prependToList(this)
override def ++[B >: A, That](that: GenTraversableOnce[B])
(implicit bf: CanBuildFrom[List[A], B, That]): That = {
val b = bf(this)
if (b.isInstanceOf[ListBuffer[_]])(this ::: that.seq.toList).asInstanceOf[That]
else super.++(that)
}
From a usage perspective, should I prefer a ::: b or a ++ b? From an implementation perspective, is there a specific reason why one of these operators doesn't simply call the other?
A: The difference is that ::: can only be used on two lists -- this operation is only available on the List datatype. Since lists are sequences, it acts as a concatenation operator for lists.
The ++ method is more general -- it allows creating a union of any two collections. That may be two sets, in which case it acts as a union, or two sequences, in which case it acts as a concatenation.
There is no semantic difference between ++ and ::: for 2 lists -- ::: is the variant of ++ for functional lists that should look more familiar to functional programmers.
The if statement you see in the ++ implementation is an optimization -- if both this collection and that collection are lists, just use the list concatenation operator ::: to add the two lists together. Otherwise, use the generic implementation of ++ that adds all the elements of this and that collection to an appropriate builder for the type That.
So, the relevant difference for lists is performance -- for functional lists you don't need to traverse the second list as a generic ++ implementation would - only the nodes of the first list need to be reinstantiated to create a new functional list.
| |
doc_1978
|
import React, {Component} from 'react';
class Nav extends Component {
openNav = () =>{
document.getElementById("mySidenav");
}
closeNav = () => {
document.getElementById("mySidenav");
}
render() {
return(
<div id="main">
<div id="mySidenav" class="sidenav">
<a href="javascript:void(0)" class="closebtn" onclick={this.closeNav()}>×</a>
<a href="#">About</a>
<a href="#">Services</a>
<a href="#">Clients</a>
<a href="#">Contact</a>
</div>
<h2>Animated Sidenav Example</h2>
<p>Click on the element below to open the side navigation menu.</p>
<span onclick={this.openNav()}>open</span>
<script>
</script>
</div>
);
}
}
export default Nav;
A: You are calling the function, but you are not doing anything substantial inside it. You want a state variable that you can toggle; changing the state will cause a re-render, which is what you want.
import React, {Component} from 'react';
class Nav extends Component {
  constructor(props) {
    super(props);
    this.state = {
      show: false
    };
  }
  openNav = () => {
    this.setState({ show: true });
  }
  closeNav = () => {
    this.setState({ show: false });
  }
  render() {
    return(
      <div id="main">
        {this.state.show && <div id="mySidenav" className="sidenav">
          <a href="javascript:void(0)" className="closebtn" onClick={this.closeNav}>×</a>
          <a href="#">About</a>
          <a href="#">Services</a>
          <a href="#">Clients</a>
          <a href="#">Contact</a>
        </div>}
        <h2>Animated Sidenav Example</h2>
        <p>Click on the element below to open the side navigation menu.</p>
        <span onClick={this.openNav}>open</span>
      </div>
    );
  }
}
export default Nav;
So you can just toggle the state and, based on that, show/hide the component.
In an expression like a && b, b is only evaluated if a is truthy; if a is falsy the code never reaches b. When b is evaluated, it is returned if it is a truthy value (which the nav code block is).
I have purposely kept two functions above. You might notice that you could do with one, something like toggleNav().
Note: Always better to use refs in react instead of document.querySelector.
| |
doc_1979
|
class CourseEvent:
public class CourseEvent : ICourseEvent
{
// other properties
public string Vendor { get; set; }
}
class courseVendor:
public class CourseVendor
{
public string Name { get; set; }
}
What I tried to do was:
// 1) get List<CourseEvent> items
List<CourseEvent> courseEvents = LoadCourseEvents();
// 2) group items by property "Vendor"
IEnumerable<IGrouping<string,CourseEvent>> groups = courseEvents.GroupBy(c => c.Vendor).ToList();
// 3) convert to list
List<CourseEvent> courseVendors = groups.SelectMany(group => group).ToList();
// 4) initiate target class
List<CourseVendor> vendors = new List<CourseVendor>();
// 5) fill target class
courseVendors.ForEach(c => vendors.Add(new CourseVendor { Name = c.Vendor }));
That would save the Key (vendor) field into a new List<CourseVendor> but the thing is, because of the .SelectMany in line 3) the vendor field of each item in courseEvents will be written into courseVendors. But how can I correct it? Every vendor should occur only once in my List<CourseVendor> vendors.
A: There's no need for steps 3 through 5. Once you have your groupings, you can simply use:
var vendors = groups.Select(g => new CourseVendor { Name = g.Key }).ToList();
And if you don't have any need for the intermediate data, you could make it all a one liner:
var vendors =
LoadCourseEvents()
.GroupBy(c => c.Vendor)
.Select(g => new CourseVendor { Name = g.Key })
.ToList();
| |
doc_1980
|
I now want to switch to https, and discovered here that deploying https using Letsencrypt in docker is non trivial and frankly, quite messy.
I am considering putting an AWS loadbalancer in front of the whole setup, and enabling https on the load balancer instead. This would imply load balancer talks to ngnix, and ngnix passes requests over to Python flask.
Is there a better way to do this? Is the nginx now superfluous? Do you foresee issues with this setup?
A: Yes, you can use ELB for offloading SSL.
I don't see any issues with this setup. Actually, I would recommend this for the following reason:
HTTPS is an encrypted protocol, and encryption requires high CPU utilization to perform the needed mathematical computations. Since most web applications are CPU-bound, you should avoid processing SSL at your servers and let the load balancer do it for you.
Since the communication between the load balancer and your instances is on AWS internal network, you can rest assured that it is secure and can use HTTP for this.
| |
doc_1981
|
main.c includes lists.h
i want to make a makefile, i run it from the terminal but it seems like it only creates the objects and doesn't run them.
What am i doing wrong?
(Sorry if it seems like a retarded question):
CC=gcc
CFLAGS=-Wall
maman21: main.o lists.o
main.o: main.c
lists.o: lists.c lists.h
A: Your rules compile and link the maman21 executable. After the build succeeds, you can run it manually with the command ./maman21
If you want the makefile to run the program when it's compiled, make a rule for that,
CC=gcc
CFLAGS=-Wall
runit: maman21
./maman21
maman21: main.o lists.o
...
Note that the line after the runit: rule must be indented by 1 tab character.
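Putting it together, a complete makefile might look like the sketch below. The explicit recipes are assumptions based on the file names in the question (main.c, lists.c, lists.h); adjust them to your project. Recipe lines must start with a tab, not spaces.

```make
CC = gcc
CFLAGS = -Wall

runit: maman21
	./maman21

maman21: main.o lists.o
	$(CC) $(CFLAGS) -o maman21 main.o lists.o

main.o: main.c lists.h
	$(CC) $(CFLAGS) -c main.c

lists.o: lists.c lists.h
	$(CC) $(CFLAGS) -c lists.c
```

Running `make` (or `make runit`) then builds the objects, links the executable, and runs it.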
| |
doc_1982
|
[
{
"rub": {
"item1": 979,
"item2": 32,
"item3": 845
},
"shop": "shop1",
},
{
"rub": {
"item232": 84,
"item213": 348
},
"shop": "shop2"
}
]
I try to filter it in a table by key using ng-model. But it isn't filtering at all.
<table class="table ng-cloak" ng-repeat="rub in rubs | filter:isActive" ng-if='isActive'>
<input type="text" class="form-control" placeholder="Товар" ng-model="rub.rub[key]">
<thead>
<tr>
<th>#</th>
<th>Товар</th>
<th>Число</th>
</tr>
</thead>
<tbody>
<tr ng-repeat='(key, val) in rub.rub'>
<td>{{ $index }}</td>
<td>{{ key }}</td>
<td>{{ val }}</td>
</tr>
</tbody>
</table>
My controller:
curryControllers.controller('CurryRubricsCtrl', ['$scope', '$routeParams', '$http', '$route',
function($scope, $routeParams, $http, $route) {
$scope.cityId = $routeParams.cityId;
$http.get('cities.json').success(function(data) {
$scope.cities = data;
$http.get('json/shop_data.json').success(function(data2) {
$scope.rubs = data2;
$scope.isActive = function(item) {
return item.shop === $scope.cityId;
};
});
});
I've tried to add $scope.searchRub = '' to the controller and put a form in the html template.
<form>
<div class="form-group">
<div class="input-group">
<div class="input-group-addon"><i class="fa fa-search"></i></div>
<input type="text" class="form-control" placeholder="Поиск" ng-model="searchRub">
</div>
</div>
</form>
Added this 'searchRub' filter here : <td> {{ key | filter:searchRub }} </td>
It didn't help either.
A: You want the search box to model an independent value which you can then use to filter, rather than trying to model it to the key of an object that you are already using.
<input type="text" class="form-control" placeholder="Товар" ng-model="search">
There are a number of ways to use this value to filter, but the easiest is with an ng-show:
<tr ng-repeat='(key, val) in rub.rub' ng-show="search ? search===key : true">
Here’s the plunk. I’ve hardcoded the cityId to avoid using routeParams for the demo.
https://plnkr.co/edit/sI8HAbNKBBJGMx0FediD?p=preview
type "item1" into the search box.
A: You can use Underscore.js to achieve this,
data = [
{
"rub": {
"item1": 979,
"item2": 32,
"item3": 845
},
"shop": "shop1"
},
{
"rub": {
"item232": 84,
"item213": 348
},
"shop": "shop2"
}
]
$scope.specific_key = []
_.each(data, function(val){ $scope.specific_key.push(val['rub']); });
console.log($scope.specific_key);
$scope.specific_key will then contain the 'rub' object of each item.
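For reference, the same key collection can be done in plain JavaScript without Underscore. This is only a sketch: the data shape mirrors the question's JSON, and the search term is hardcoded for illustration.

```javascript
const data = [
  { rub: { item1: 979, item2: 32, item3: 845 }, shop: "shop1" },
  { rub: { item232: 84, item213: 348 }, shop: "shop2" }
];

// Collect each item's "rub" object, then keep only the keys matching the search term.
const rubs = data.map(entry => entry.rub);
const search = "item1";
const matches = rubs.flatMap(rub =>
  Object.keys(rub).filter(key => key === search)
);

console.log(matches); // ["item1"]
```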
| |
doc_1983
|
The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3) http://www.slipjig.org/IISError.gif
I am using the browser HTTP stack, because the client HTTP stack does not support client certificates. The client code attempting to hit the server is the Prism module loader. If I run the app out-of-browser but ignore client certs, or if I run the application in-browser but require client certs, it works fine. It seems to be the combination of the two that is causing the problem.
I tried the following to gather more info:
*
*Used Fiddler to view the failing request. It works if Fiddler is running (presumably because Fiddler is handling the client certificate differently?);
*Created an .aspx web form to serve up the module .xaps;
*Created an HTTPModule to see if I could intercept the request before it failed;
*Used a packet sniffer to see if I could tell if the client certificate was being sent correctly.
None of the above gave me much useful information beyond what I could see in the trace file, although the Fiddler thing is interesting.
Any ideas? Thanks in advance!
Mike
A: I beat my head against the wall for weeks on this problem. Here's what I learned and how I finally worked around it.
Prism's FileDownloader class uses System.Net.WebClient to load modules. In OOB mode, WebClient seems to use the same stack as IE, but it apparently either doesn't send the client certificate, or (more likely) doesn't correctly negotiate the SSL/client cert handshake with the server. I say this because:
*
*I was able to successfully request .xap files using Firefox and Chrome;
*I was not able to successfully request .xap files using IE;
*IIS would fail with a 500, not a 403.
I couldn't get good visibility into what was actually happening over the wire; if I used Fiddler, it would work, because Fiddler intercepts communications with the server and handles the client certificate handshake itself. And trying to use a packet sniffer obviously wouldn't tell me anything because of SSL.
So - I first spent a lot of time on the server side trying to eliminate things (unneeded handlers, modules, features, etc.) that might be causing the problem.
When that didn't work, I tried modifying the Prism source code to use the browser's HTTP stack instead of WebClient. To do this, I created a new class similar in design to FileDownloader, implementing IFileDownloader, that used the browser stack. I then made some changes to XapModuleTypeLoader (which instantiates the downloader) to make it use the new class. This approach failed with the same error I was originally experiencing.
Then I started researching whether a commercial third-party HTTP stack might be available. I found one that supported the features I needed and that supported the Silverlight 4 runtime. I created another implementation of IFileDownloader that used that stack, and BOOM - it worked.
The good news with this approach is that not only can I use this to load modules, I can also use it to protect communications between the client and our REST API (a benefit we were going to give up, before).
I plan to submit a patch to Prism to allow the downloader to be registered or bound externally, as it's currently hard-coded to use its own FileDownloader. If anyone is interested in that or in the commercial HTTP stack I'm using, contact me (msimpson -at- abelsolutions -dot- com) for links and code samples.
And I must say this - I still don't know for sure whether the root problem is in the HTTP stack on the client side or the server side, but it's a FAIL on Microsoft's part nonetheless.
A: What we (Slipjig and I) found out this week is that there does appear to be a way around these issues, or at least, we're on the trail to determining whether there is a reliable, repeatable way. We're still not positive on that, but here's what we know so far:
At first pass, if you have code like this you can start making requests with either the Browser or Client stack:
First, place a "WebBrowser" control in your Silverlight XAML, and make it send a request to your HTTPS site.
This may pop up the certificate dialog box for the user. Big deal. Accept it. If you have only one cert, then you can turn an option in IE off to suppress that message.
private void Command_Click(object sender, RoutedEventArgs e) {
// This does not pop up the cert dialog if the option to take the first is turned on in IE settings:
BrowserInstance.Navigate(new Uri("https://www.SiteThatRequiresClientCertificates.com/"));
}
Then, in a separate handler invoke by the user, create an instance of your stack, either Client or Browser:
private void CallServer_Click(object sender, RoutedEventArgs e) {
// Works with BrowserHttp factory also:
var req = WebRequestCreator.ClientHttp.Create(new Uri("https://www.SiteThatRequiresClientCertificates.com/"));
req.Method = "GET";
req.BeginGetResponse(new AsyncCallback(Callback), req);
}
Finally, the Callback:
private void Callback(IAsyncResult result)
{
var req = result.AsyncState as System.Net.WebRequest;
var resp = req.EndGetResponse(result);
var content = string.Empty;
using (var reader = new StreamReader(resp.GetResponseStream())) {
content = reader.ReadToEnd();
}
System.Windows.Deployment.Current.Dispatcher.BeginInvoke(() =>
{
Results.Text = content;
});
}
A: I had the same issue and I fixed it by creating the certificate using makecert. Follow the steps from this article http://www.codeproject.com/Articles/24027/SSL-with-Self-hosted-WCF-Service and replace CN with your ip/domain. In my case I have tested the service on the local machine and run the commands as follows:
1) makecert -sv SignRoot.pvk -cy authority -r signroot.cer -a sha1 -n "CN=Dev Certification Authority" -ss my -sr localmachine
after running the first command drag the certificate from "Personal" directory to "Trusted Root Certification Authority"
2) makecert -iv SignRoot.pvk -ic signroot.cer -cy end -pe -n CN="localhost" -eku 1.3.6.1.5.5.7.3.1 -ss my -sr localmachine -sky exchange -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12
In case you want to run the silverlight application on another machine, export the certificate created at step1 and then import it on any machine where you want your application to run.
| |
doc_1984
|
Should pass resolved color instead of resource id here
at line 57:
circlePaint.setColor(pressed ? pressedColor : defaultColor);
Any help on what should I do?
A: You have to pass a color of type int (a packed ARGB color value, like the constants defined in android.graphics.Color), since that is what the setColor library method expects, not a resource ID.
If you want to construct a color yourself, instead of using the default ones, look at Color.argb(alpha, red, green, blue) for example.
For more information: android.graphics.Color.
A: I think the value of your pressedColor is similar to this R.color.some_color_name.
A correct way of retrieving color from resources is following:
ContextCompat.getColor(context, R.color.some_color_name);
In your case:
circlePaint.setColor(ContextCompat.getColor(context, pressed ? pressedColor : defaultColor));
What's the difference???
R.color.some_color_name is just an id from the R class, which is generated from your resources (the res folder).
What you want is a color int, constructed from the value that your resource id points to.
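To make the distinction concrete, here is a plain-Java sketch of how a color int packs its channels. It mirrors what Color.argb does; the class name is made up for illustration, and the real Android API should be used in app code.

```java
public class ColorIntDemo {
    // Pack alpha/red/green/blue (each 0-255) into one 32-bit color int,
    // the representation that Paint.setColor expects.
    public static int argb(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        int opaqueRed = argb(255, 255, 0, 0);
        System.out.println(opaqueRed == 0xFFFF0000); // true
    }
}
```

A resource ID like R.color.some_color_name is a completely different int; it only has meaning when resolved through the resources, e.g. via ContextCompat.getColor.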
| |
doc_1985
|
I tried to run my subagent with few debug flags. I found out that not a single function generated code is called on snmpset request, only on snmpget. smnpget on exactly same OID will return valid value. I have user with RW access everywhere. I can set value to sysName.0 with same user. I tried removing MIB file and use exact oid but had same result.
Because It's not even reaching code, I don't know much what to do.
I tried it with 2 tables generated same way.
One table has index as IMPLIED DisplayString and second table has INDEX as combination of 2 INTEGERs.
EDIT:
I found out that it created .conf file in /var/lib/snmp/ for each my agent. I tried to add create_user with same name & password but it disappeared after agent was started again.
EDIT2:
Code was generetad using mib2c.mfd.conf . I tried mib2c.iterate.conf and it called function from generated code. It's not working with mib2c.mfd.conf but looks like it will work with mib2c.iterate.conf . I would like to be able make it works with mib2c.mfd.conf so I wouldn't need to change all subagents.
Output from my subagent where 3.fw is index:
agentx/subagent: checking status of session 0x44150
agentx_build: packet built okay
agentx/subagent: synching input, op 0x01
agentx/subagent: session 0x44150 responded to ping
agentx/subagent: handling AgentX request (req=0x1f9,trans=0x1f8,sess=0x21)
agentx/subagent: -> testset
snmp_agent: agent_sesion 0xc4a08 created
snmp_agent: add_vb_to_cache( 0xc4a08, 1, MSE-CONFIGURATION-MIB::mseDpuConfigActivationAdminStatus.3.fw, 0x3d3d0)
snmp_agent: tp->start MSE-CONFIGURATION-MIB::mseDpuConfigActivationTable, tp->end MSE-CONFIGURATION-MIB::mseDpuConfigActivation.3,
agent_set: doing set mode = 0 (SET_RESERVE1)
agent_set: did set mode = 0, status = 17
results: request results (status = 17):
results: MSE-CONFIGURATION-MIB::mseDpuConfigActivationAdminStatus.3.fw = INTEGER: prepare(1)
snmp_agent: REMOVE session == 0xc4a08
snmp_agent: agent_session 0xc4a08 released
snmp_agent: end of handle_snmp_packet, asp = 0xc4a08
agentx/subagent: handling agentx subagent set response (mode=162,req=0x1f9,trans=0x1f8,sess=0x21)
agentx_build: packet built okay
agentx/subagent: FINISHED
agentx/subagent: handling AgentX request (req=0x1fa,trans=0x1f8,sess=0x21)
agentx/subagent: -> cleanupset
snmp_agent: agent_sesion 0xc7640 created
agent_set: doing set mode = 4 (SET_FREE)
agent_set: did set mode = 4, status = 17
results: request results (status = 17):
results: MSE-CONFIGURATION-MIB::mseDpuConfigActivationAdminStatus.3.fw = INTEGER: prepare(1)
snmp_agent: REMOVE session == 0xc7640
snmp_agent: agent_session 0xc7640 released
snmp_agent: end of handle_snmp_packet, asp = 0xc7640
agentx/subagent: handling agentx subagent set response (mode=162,req=0x1fa,trans=0x1f8,sess=0x21)
agentx_build: packet built okay
agentx/subagent: FINISHED
agentx/subagent: checking status of session 0x44150
agentx_build: packet built okay
agentx/subagent: synching input, op 0x01
agentx/subagent: session 0x44150 responded to ping
Values/config used for generating code:
## defaults
@eval $m2c_context_reg = "netsnmp_data_list"@
@eval $m2c_data_allocate = 0@
@eval $m2c_data_cache = 1@
@eval $m2c_data_context = "generated"@ [generated|NAME]
@eval $m2c_data_init = 1@
@eval $m2c_data_transient = 0@
@eval $m2c_include_examples = 1@
@eval $m2c_irreversible_commit = 0@
@eval $m2c_table_access = "container-cached"@
@eval $m2c_table_dependencies = 0@
@eval $m2c_table_persistent = 0@
@eval $m2c_table_row_creation = 0@
@eval $m2c_table_settable = 1@
@eval $m2c_table_skip_mapping = 1@
@eval $m2c_table_sparse = 1@
@eval $mfd_generate_makefile = 1@
@eval $mfd_generate_subagent = 1@
SNMPd version:
# snmpd --version
NET-SNMP version: 5.9
Web: http://www.net-snmp.org/
Email: [email protected]
A: I found out that in the *_interface.c file generated from the mib2c.mfd.conf template, there is an inverted check.
#if !(defined(NETSNMP_NO_WRITE_SUPPORT) || defined(NETSNMP_DISABLE_SET_SUPPORT))
HANDLER_CAN_RONLY
#else
HANDLER_CAN_RWRITE
#endif /* NETSNMP_NO_WRITE_SUPPORT || NETSNMP_DISABLE_SET_SUPPORT */
I removed the ! from the condition and it started working. Both defines are undefined, so it should use HANDLER_CAN_RWRITE, but because of the wrong check it used HANDLER_CAN_RONLY.
| |
doc_1986
|
<head>
<style type="text/css">
</style>
<script src="http://code.jquery.com/jquery.min.js" type="text/javascript"></script>
<script type="text/javascript">
$(document).ready(function() {
$('#Presentation').click(function() {
var jsonloc = "ppt.json";
$.when($.getJSON(jsonloc)).then(function(info){
$('#header').empty();
$.each(info.slides, function(entryIndex, entry){
var html = '<div class="info">';
html += '<h3>' + entry['title'] + '</h3>';
html += '<div class="author">' + entry['author'] + '</div>';
if(entry['slides']){
$.each(entry['slides'],function(slideIndex, slides){
html += '<h3>' + slides['Slide'] + '<h3>';
html += '<div class="header">' + slides['header'] + '</div>';
});
if(slides['Content']){
html += '<div class="Content">';
html += '<ol>';
$.each(slides['content'],function(contentIndex, content){
html += '<li>' + content + '</li>';
});
html += '</ol>';
html += '</div>';
};
$('#header').append(html);
};
});
return false;
});
});
});
</script>
</head>
<body>
<a href="#" id="Presentation">ppt presentation</a>
<div id="header">
</div>
</body>
here is the JSON:
{
"title": "presentation",
"date_created": "",
"last_modified": "",
"author": "By: Someone online",
"slides": [
{
"Slide": "1",
"header": "first header",
"src": "ssss.jpg",
"Content": "dddddddddddddddddddddddddddddddddddd",
"Content": "dddddddddddddddddddddddddddddddddddd",
"Content": "dddddddddddddddddddddddddddddddddddd"
},
{
"Slide2": "2",
"header2": "header 2",
"src2": null,
"Content": "dddddddddddddddddddddddddddddddddddd",
"Content": "dddddddddddddddddddddddddddddddddddd",
"Content": "dddddddddddddddddddddddddddddddddddd"
},
{
"Slide3": "3",
"header3": "header3",
"src3": "sdfdsf.jpg",
"Content": "dddddddddddddddddddddddddddddddddddd",
"Content": "dddddddddddddddddddddddddddddddddddd",
"Content": "dddddddddddddddddddddddddddddddddddd"
}
]
}
I really want to make this work and don't want to use other methods such as jquery templates.
Is there anything that jumps out?
A: You're closing .each too soon :
$.each(entry['slides'],function(slideIndex, slides){
html += '<h3>' + slides['Slide'] + '<h3>';
html += '<div class="header">' + slides['header'] + '</div>';
});
if(slides['Content']){
// ...
That way you only access the last value of slides['Content'].
You need to put that if and what it contains into your loop, like this:
$.each(entry['slides'], function(slideIndex, slides){
    html += '<h3>' + slides['Slide'] + '</h3>';
    html += '<div class="header">' + slides['header'] + '</div>';
    if(slides['Content']){
        // ...
    }
});
| |
doc_1987
|
clang: error: linker command failed with exit code 1 (use -v to see invocation) error in iphone opencv project
A: You need to change the architecture to armv7,
and change the compiler from GCC to LLVM.
| |
doc_1988
|
background: center / cover url(image) no-repeat;
Also I have an image of line which connects rocket and button at the left part of the screen.
When i change the screen size all elements change their position.
I used vh, but it didn't work.
A: You can use a media query to detect the device size and apply specific CSS to the element.
(example)
/* if mobile device max width 380px */
@media only screen and (max-device-width: 380px) {
.image{
position:relative;
top:10px;
}
}
| |
doc_1989
|
Then if user performs a click on the label, the next picture is displayed. If user performs a double-click on the label then the label is removed and the list is added back.
How can I catch a double-click event on the list and on the label?
Thanks,
William
A: You will need to avoid the actionPerformed as this will be called instantly on the first pointer release.
We are considering adding more builtin gestures (e.g. double tap), so this is actually a great time to ask this. Right now the only way to do this is to override pointerReleased: on the first release, create a UITimer (e.g. for 300ms); if another release arrives before it fires, cancel the timer and call the "double tap" event. The timer code itself can call the "tap" event.
E.g.:
List l = new List(...) {
private UITimer timer;
public void pointerReleased(int x, int y) {
super.pointerReleased(x, y);
if(timer == null) {
timer = UITimer.timer(300, false, getComponentForm(), () -> {
singleTapEvent();
timer = null;
});
} else {
timer.cancel();
timer = null;
doubleTapEvent();
}
}
};
| |
doc_1990
|
Does both the following statements works similarly?
final _controller = StateProvider.autoDispose((ref) => PageController());
or
final _controller = PageController();
@override
void dispose() {
_controller.dispose();
super.dispose();
}
A: The autoDispose modifier, as the name suggests, disposes of a provider's resources when the provider is no longer used; it frees up memory without us specifying any dispose function.
One example that may help: when you navigate to another page whose provider uses autoDispose, then go back to your first page, the provider's data will be reset. Without autoDispose, everything is kept.
| |
doc_1991
|
A: TL;DR
sudo apt-get install build-essential git-core pkg-config automake libtool wget zlib1g-dev python-dev libbz2-dev
git clone https://github.com/moses-smt/mosesdecoder.git
cd mosesdecoder
make -f contrib/Makefiles/install-dependencies.gmake
./compile.sh
When you install Moses, GIZA++ is also installed in the mosesdecoder/bin/ directory. See http://www.statmt.org/moses/?n=Development.GetStarted
To install MGIZA++, do this:
sudo apt-get install -y cmake libboost-all-dev
git clone https://github.com/moses-smt/mgiza.git
cd mgiza/mgizapp
cmake . && make && make install
cp scripts/merge_alignment.py bin/
The binaries for MGIZA++ would be in mgiza/mgizapp/bin/.
A: Assuming that you have the dependencies, simply install with:
$ wget https://giza-pp.googlecode.com/files/giza-pp-v1.0.7.tar.gz
$ tar -zxvf giza-pp-v1.0.7.tar.gz
$ cd giza-pp/
$ make
I've uploaded the pre-compiled binaries and you can get them here, but I'm not sure whether they work on your machine:
https://dl.dropboxusercontent.com/u/45771499/giza-binaries.zip
If you run into dependencies problems, simply install the dependencies required by the MOSES toolkit:
sudo apt-get install g++ git subversion automake libtool zlib1g-dev libboost-all-dev libbz2-dev liblzma-dev
Personally, I would just use the fast aligner which implemented IBM model 2 without the whole fuss about mkcls, see https://github.com/clab/fast_align
| |
doc_1992
|
Warning messages:
1: naive_bayes(): Feature Öksürük - zero probabilities are present. Consider Laplace smoothing.
2: naive_bayes(): Feature Ateş - zero probabilities are present. Consider Laplace smoothing.
3: naive_bayes(): Feature Halsizlik - zero probabilities are present. Consider Laplace smoothing.
Öksürük<-c("Var","Yok","Yok","Yok","Var","Yok","Yok","Yok","Var","Yok","Var")
Ateş<-c("Var","Var","Yok","Yok","Yok","Var","Yok","Var","Var","Var","Yok")
Halsizlik<-c("Yok","Var","Yok","Var","Yok","Yok","Var","Var","Yok","Var","Var")
COVID19<-c("POZİTİF","POZİTİF","POZİTİF","POZİTİF","NEGATİF","NEGATİF","NEGATİF","NEGATİF","NEGATİF","NEGATİF","")
df<-data.frame("Öksürük"=Öksürük,"Ateş"=Ateş,"Halsizlik"=Halsizlik,"COVID-19"=COVID19)
nbfit<-naivebayes::naive_bayes(df[1:10,1:3],df[1:10,4])
ali<-predict(nbfit,df[11,1:3])
A: I reproduced the warnings; they appear because the last value in COVID19 is empty:
COVID19<-c("POZİTİF","POZİTİF","POZİTİF","POZİTİF","NEGATİF","NEGATİF","NEGATİF","NEGATİF","NEGATİF","NEGATİF","")
The warnings don't show up when the last value is given, for example:
COVID19<-c("POZİTİF","POZİTİF","POZİTİF","POZİTİF","NEGATİF","NEGATİF","NEGATİF","NEGATİF","NEGATİF","NEGATİF","NEGATİF")
| |
doc_1993
|
*
*Given a UTF-8 string, how many bytes are needed for the UTF-16 encoding of the same string.
*Assume the UTF-8 string has already been validated. It has no BOM, no overlong sequences, no invalid sequences, is null-terminated. It is not CESU-8.
*Full UTF-16 with surrogates must be supported.
Specifically I wonder if there are shortcuts to knowing when a surrogate pair will be needed without fully converting the UTF-8 sequence into a codepoint.
The best UTF-8 to codepoint code I've seen uses vectorizing techniques so I wonder if that's also possible here.
A: Efficiency is always a speed vs size tradeoff. If speed is favored over size then the most efficient way is just to guess based on the length of the source string.
There are 4 cases that need to be considered, simply take the worst case as the final buffer size:
*
*U+0000-U+007F - will encode to 1byte in utf8, and 2bytes per character in utf16. (1:2 = x2)
*U+0080-U+07FF - encoded to 2byte utf8 sequences, or 2byte per character utf16 characters. (2:2 = x1)
*U+0800-U+FFFF - are stored as 3byte utf8 sequences, but still fit in single utf16 characters. (3:2 = x.67)
*U+10000-U+10FFFF - are stored as 4byte utf8 sequences, or surrogate pairs in utf16. (4:4 = x1)
The worst-case expansion factor is when translating U+0000-U+007F from utf8 to utf16: the buffer, bytewise, merely has to be twice as large as the source string. Every other unicode codepoint results in an equal-size or smaller bytewise allocation when encoded as utf16 compared to utf8.
A: Very simple: count the number of head bytes, double-counting bytes F0 and up.
In code:
size_t count(unsigned char *s)
{
size_t l;
for (l=0; *s; s++) l+=(*s-0x80U>=0x40)+(*s>=0xf0);
return l;
}
Note: This function returns the length in UTF-16 code units. If you want the number of bytes needed, multiply by 2. If you're going to store a null terminator you'll also need to account for space for that (one extra code unit/two extra bytes).
A: It's not an algorithm, but if I understand correctly the rules are as such:
*
*every byte having a MSB of 0 adds 2 bytes (1 UTF-16 code unit)
*
*that byte represents a single Unicode codepoint in the range U+0000 - U+007F
*every byte having the MSBs 110 or 1110 adds 2 bytes (1 UTF-16 code unit)
*
*these bytes start 2- and 3-byte sequences respectively which represent Unicode codepoints in the range U+0080 - U+FFFF
*every byte having the 4 MSB set (i.e. starting with 1111) adds 4 bytes (2 UTF-16 code units)
*
*these bytes start 4-byte sequences which cover "the rest" of the Unicode range, which can be represented with a low and high surrogate in UTF-16
*every other byte (i.e. those starting with 10) can be skipped
*
*these bytes are already counted with the others.
I'm not a C expert, but this looks easily vectorizable.
| |
doc_1994
|
I'm open to other solutions not using ci_reporter.
I'm failing to get the ci_reporter_minitest gem to work with the rails binary that kicks off running minitest for rails 6. Also, I haven't succeeded yet in finding any posts or questions referencing this problem.
I looked into just using the rake binary for the ci server, but didn't find a working approach there either.
Here's a config I tried that does run the ci:setup task (removing any previous test/report directory) and does run the tests, but doesn't generate the xml output.
I did have this working fine with rails 4.2
#lib/tasks/test_tasks.rake
require 'ci/reporter/rake/minitest'
task :minitest => 'ci:setup:minitest'
namespace :test do
task :something => 'test:prepare' do
$: << "test"
Rake::Task['ci:setup:minitest'].invoke
test_files=FileList['test/models/something.rb']
Rails::TestUnit::Runner.run(test_files)
end
end
$> bundle exec rails test:something
A: I did have success switching to minitest-reporters gem.
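For reference, a minimal test_helper.rb sketch for minitest-reporters (the JUnitReporter class name is taken from that gem's README; it writes JUnit-style XML, to test/reports by default, which most CI servers can consume):

```ruby
# test/test_helper.rb
require "minitest/reporters"

# Emit JUnit-style XML for the CI server instead of ci_reporter's output.
Minitest::Reporters.use! Minitest::Reporters::JUnitReporter.new
```

With this in place, a plain `bundle exec rails test` run produces the XML without any custom rake wiring.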
| |
doc_1995
|
df1 = pd.read_csv(r'C:\Users\...\phone1.csv')
df2 = pd.read_csv(r'C:\Users\...\phone2.csv')
df3 = pd.read_csv(r'C:\Users\...\phone3.csv')
df4 = pd.read_csv(r'C:\Users\...\phone4.csv')
df5 = pd.read_csv(r'C:\Users\...\phone5.csv')
df6 = pd.read_csv(r'C:\Users\...\phone6.csv')
I tried the following code
for i in range(1, 7):
'df'+i = pd.read_csv(r'C:\Users\siddhn\Desktop\phone'+str(i)+'.csv', engine = 'python')
But I get an error saying that cannot assign to operator
How to import the datasets using a loop.?
A: As @TimRoberts mentioned, you should use a list or a dict to store your dataframes, but if you really want to have variables df1, df2, ..., df6, you can use locals() or globals():
for i in range(1, 7):
locals()[f'df{i}'] = pd.read_csv(fr'C:\Users\siddhn\Desktop\phone{i}.csv')
print(df1)
print(df2)
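A minimal sketch of the dict approach the answer above recommends, using io.StringIO stand-ins so the example is self-contained (in practice you would pass the phone1..phone6 paths instead):

```python
import io

import pandas as pd

# Stand-ins for the six CSV files; in practice these would be the
# r'C:\Users\...\phoneN.csv' paths.
files = {i: io.StringIO(f"model,price\nphone{i},{100 * i}") for i in range(1, 7)}

# One read_csv call per file, keyed by the same number as the filename.
dfs = {i: pd.read_csv(f) for i, f in files.items()}

print(dfs[3].loc[0, "price"])  # 300
```

Looking up `dfs[3]` then reads far more clearly than a dynamically created `df3` variable.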
A: You can store them in a list; here is the idea:
var = []
for i in range(1, 7):
var.append(i)
print(var[0])
print(var[2])
and from the list you can access each value by its index.
A: Use the inbuilt glob package
from glob import glob
fullpath = r'C:\Users\siddhn\Desktop\phone[1-6].csv'
dfs = [pd.read_csv(file) for file in glob(fullpath)]
print(dfs[0])
A: 'df'+i is an expression, not a variable name, so it cannot appear on the left-hand side of an assignment (and 'df'+i itself would raise a TypeError, since you cannot add a str and an int).
Instead of using
for i in range(1, 7):
'df'+i = pd.read_csv(r'C:\Users\siddhn\Desktop\phone'+str(i)+'.csv', engine = 'python')
create a list of dataframes:
df = []
Now append your dataframes:
for i in range(1, 7):
    df.append(pd.read_csv(r'C:\Users\siddhn\Desktop\phone'+str(i)+'.csv', engine = 'python'))
You can then access the dataframes by indexing them, like df[0] or df[1].
A: You can create a list of data frames and then iterate over it or access by index.
df_list = [pd.read_csv(r'C:\Users\siddhn\Desktop\phone'+str(i)+'.csv', engine = 'python') for i in range(1, 7)]
df_list[1]
The left-hand side of an assignment must be a variable name, not an expression; that is why you get the "cannot assign to operator" error.
| |
doc_1996
|
An example of what I want:
var x=5;
x.increment();
console.log(x); //6
What I tried to do:
Number.prototype.increment=function(){
this++; //gave me a parser error
};
A: Numbers are immutable in JavaScript. When you do console.log(this) you will see that it points to a Number object whose primitive value is 5 (in our case), so you cannot change its value.
What you can do is return the incremented value (this + 1) from increment and assign it back, like x = x.increment():
Number.prototype.increment = function(){
return this + 1;
}
var x = 5;
x=x.increment();
console.log(x);
A: this cannot be reassigned directly. Try this way:
Number.prototype.increment = function(){
return this + 1;
}
var x = 5;
x = x.increment(); // x will be 6 here
Hope this helps.
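If you really need increment-in-place semantics, a common workaround (a sketch, not part of the answers above) is to wrap the number in an object, since objects are mutable:

```javascript
// Wrap the primitive in a mutable object so increment() can modify it in place.
class Counter {
  constructor(value) {
    this.value = value;
  }
  increment() {
    this.value += 1;
    return this; // allow chaining
  }
}

const c = new Counter(5);
c.increment();
console.log(c.value); // 6
```

This trades `x` being a plain number for having genuine mutable state, which is the only way to get `increment()` to take effect without reassignment.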
| |
doc_1997
|
[ImportExport]
comment = Import Export directory
path = /path/folder
browseable = Yes
hosts allow = <IP>
guest ok = Yes
force user = <user>
force group = pro4
read only = No
create mask = 0777
directory mask = 0777
dead time = 10
However, adding a very similar configuration on our Test server, I cannot even get to the server from a Windows box, I get the "\ is not accessible..." message as if the server does not exist, or there are no shares.
Is there anything else I need to do to the local AIX folder to get this visible to Windows, or can you give me some ideas of what the pre-reqs are for this?
Sorry, I am not an AIX specialist, primarily a Windows house.
Thanks
A: Got this working now - Samba 3.2.0 was restricted to 14 chars or less for the share name. Reduced, restarted SAMBA and now OK. Thanks all
| |
doc_1998
|
Can someone explain to me what these threads do and how they are used?
A: I'd research the following two things:
Handler and AsyncTask
This is a pretty good resource for Android threading. http://www.vogella.com/articles/AndroidBackgroundProcessing/article.html
Also, if you're asking because you'll be fetching some data / making simple API calls, I'd definitely recommend checking out http://loopj.com/android-async-http/. This will make your life a lot simpler.
A: The main thread is the UI thread, so when you start your Activity you are on the main (UI) thread. When you want to use a separate thread for "heavy work" such as network operations, you have several options. You may create a separate Thread inside your Activity and call runOnUiThread to update your UI. You could also use AsyncTask for short-lived operations; according to the docs, things that take at most a few seconds. Here's a short example of that:
public class TalkToServer extends AsyncTask<String, String, String> {
@Override
protected void onPreExecute() {
super.onPreExecute();
}
@Override
protected void onProgressUpdate(String... values) {
super.onProgressUpdate(values);
}
@Override
protected String doInBackground(String... params) {
//do your work here
return something;
}
@Override
protected void onPostExecute(String result) {
    super.onPostExecute(result);
    // do something with the data here: display it or send it to the main activity
}
}
and you would call it from
TalkToServer myAsync = new TalkToServer(); //can add params if you have a constructor
myAsync.execute(); //can pass params here for `doInBackground()` method
Just make sure not to try to update the UI in the doInBackground() method. Use any of the other callbacks, or pass the data back to an Activity method. If your AsyncTask class is an inner class of the Activity, then you can use its context for updating the UI. If it is in its own file, then you will need to pass a context to its constructor, like
TalkToServer myAsync = new TalkToServer(this);
You also may want to read this
Painless Threading
A: The UI thread and the main thread are just different names for the same thread.
All of the UI inflation for an application is done on this main thread. The reason we delegate "heavier" work to other threads is because we do not want those operations to slow the responsiveness and inflation time of the UI.
You will want to run any operations that change the UI or modify objects used by the UI on the main thread.
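The Thread + runOnUiThread option mentioned above can be sketched like this (fetchFromNetwork() and statusTextView are hypothetical names, standing in for your own blocking call and view):

```java
// Inside an Activity: do the heavy work off the main thread, then hop
// back onto the main thread before touching any views.
new Thread(new Runnable() {
    @Override
    public void run() {
        final String result = fetchFromNetwork(); // hypothetical blocking call
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                statusTextView.setText(result); // UI updates happen on the main thread
            }
        });
    }
}).start();
```

The key point is the same as with AsyncTask: the network call never runs on the main thread, and the view update never runs off it.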
An example with an AsyncTask
package com.wolfdev.warriormail;
import android.app.Activity;
import android.content.Intent;
import android.os.AsyncTask;
import android.os.Bundle;
import android.view.View;
import android.view.View.OnClickListener;
import android.view.animation.Animation;
import android.view.animation.AnimationUtils;
import android.widget.Button;
import android.widget.CheckBox;
import android.widget.EditText;
public class LoginActivity extends Activity implements OnClickListener{
private Button loginButton;
private EditText eText;
private EditText pText;
private CheckBox box;
private String user;
private String pass;
@Override
public void onCreate(Bundle savedInstanceState){
super.onCreate(savedInstanceState);
setContentView(R.layout.login);
//Initialize UI objects on main thread
loginButton = (Button) findViewById(R.id.button1);
loginButton.setOnClickListener(this);
eText = (EditText) findViewById(R.id.editText1);
pText = (EditText) findViewById(R.id.editText2);
eText.clearFocus();
pText.clearFocus();
Animation fadeIn = AnimationUtils.loadAnimation(this,R.anim.fadeanimation);
Animation slideIn = AnimationUtils.loadAnimation(this, R.anim.slideanimation);
eText.startAnimation(slideIn);
pText.startAnimation(slideIn);
box = (CheckBox)findViewById(R.id.checkBox1);
box.startAnimation(fadeIn);
loginButton.startAnimation(fadeIn);
}
@Override
public void onClick(View v) {
user = eText.getText().toString();
pass = pText.getText().toString();
}
class LoginTask extends AsyncTask<Void,Void,Void>{
@Override
protected Void doInBackground(Void... args){
/* Here is where you would do a heavy operation
* In this case, I want to validate a users
* credentials. If I would do this on the main
* thread, it would freeze the UI. Also since
* this is networking, I am forced to do this on
* a different thread.
*/
return null;
}
@Override
protected void onPostExecute(Void result){
/* This function actually runs on the main
* thread, so here I notify the user if the
* login was successful or if it failed. If
* you want update the UI while in the background
* or from another thread completely, you need to
* use a handler.
*/
}
}
}
| |
doc_1999
|
here is what i want :
public function dummy()
{
return (auth()->user()) ? $this->hasOne(blah::class) : emptyrelationship();
}
A: You should check with dd() what is actually being returned.
If there's no data for the relationship to show, it will simply return no data.
A: To return an empty relationship instead of null, you can try this:
public function item()
{
return $this->belongsTo(Item::class)
->withDefault(function () {
return new Item();
});
}
A: Try this example:
public function shop(){
if(true) {
return $this->newQuery(); // or newQueryWithoutScopes()
}
return $this->belongsTo('App\Models\Shop');
}
A: If there is no matching row in the users table, the relationship will simply come back empty (e.g. user: [] when you dd($var) the query result), and you can then check for that in your code.
A: Eloquent has a method for that: newModelInstance(). It is best to keep the standard Eloquent relationship on the model and move the logic elsewhere:
public function dummy()
{
return $this->hasOne(blah::class);
}
$dummy = $model->dummy;
if (!$dummy) {
$dummy = $model->dummy()->newModelInstance();
}
|