doc_3300
|
x1 = [x11,x12,.......,x1N] OR x1 = X1 (scalar value)
x2 = [x21,x22,.......,x2N] OR x2 = X2
....
xM = [xM1,xM2,.......,xMN] OR xM = XM
My curve shader takes three float attributes x,y,z which represent the variables that are currently on display.
For each curve and each x,y,z, I bind a vertex buffer containing the data for the respective variable to the attribute if the data is a vector. Drawing multiple curves with only vector data works fine.
If the data for some variable is just a scalar number, I disable the attribute array and set the attribute value (for example X1) with:
glDisableVertexAttribArray(xLocation);
glVertexAttrib1f(xLocation,X1);
Now to my question: it seems that all curves use the same value for any vertex attribute with a disabled array in the shader (the one for the last curve that I draw), even though I reset the values between glDrawArrays() calls. Is it just not possible to use more than one value for an attribute with a disabled array in a shader, or should it be possible and I have a bug?
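For reference, a minimal sketch of the per-curve draw loop described above; names such as curveCount, curves[c].xBuffer and curves[c].xScalar are assumptions, not taken from the original code:
for (int c = 0; c < curveCount; ++c) {
    if (curves[c].xIsVector) {
        /* vector data: bind the buffer and enable the attribute array */
        glBindBuffer(GL_ARRAY_BUFFER, curves[c].xBuffer);
        glEnableVertexAttribArray(xLocation);
        glVertexAttribPointer(xLocation, 1, GL_FLOAT, GL_FALSE, 0, 0);
    } else {
        /* scalar data: disable the array and set a constant attribute value */
        glDisableVertexAttribArray(xLocation);
        glVertexAttrib1f(xLocation, curves[c].xScalar);
    }
    /* ... same treatment for the y and z attributes ... */
    glDrawArrays(GL_LINE_STRIP, 0, N);
}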
| |
doc_3301
|
Is this travel time updated in real time (live traffic information) or constant (i.e. the travel time is always the same whenever we use "mapdist" for each OD pair)? Thanks!
mapdist(from='18.958011, 72.819789', to='18.958558, 72.831462', mode="driving", output="simple")
I got the information like time:
from to m km miles seconds minutes hours
1 18.958011, 72.819789 18.958558, 72.831462 1304 1.304 0.8103056 241 4.016667 0.06694444
A: This function uses the standard Google API. There you can find that you would have to specify the optional parameter departure_time in your request to take traffic into consideration.
And if you look into the mapdist source code you can see that departure_time is not part of your request (neither as a parameter nor as a default option).
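For illustration, a request that does take live traffic into account can be sent straight to the Distance Matrix endpoint with departure_time set; a rough R sketch (the API key is a placeholder):
library(httr)
res <- GET("https://maps.googleapis.com/maps/api/distancematrix/json",
           query = list(origins        = "18.958011,72.819789",
                        destinations   = "18.958558,72.831462",
                        mode           = "driving",
                        departure_time = "now",
                        key            = "YOUR_API_KEY"))  # placeholder key
# With departure_time set, the response should additionally contain a
# duration_in_traffic element for each origin/destination pair.
str(content(res))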
| |
doc_3302
|
but I have a warning message:
RNFetchBlob error when sending request : null
This is the code:
const url ='http://manafeth.ncsi.gov.om/admin/download/countries/import/2018/en?portTypes=land,air,sea&size=100000';
RNFetchBlob
.config({
fileCache : true,
})
.fetch('GET', url)
.then((res) => {
console.log('The file saved to ', res.path())
}).catch((err)=>{
console.log('The error is ', err)
}
)
| |
doc_3303
|
I come from a C++ background and I am trying to learn MQL4 language & conventions.
I am writing a simple Expert Advisor (my first ever). It compiles, but when I try to test it, it ends with no results. I attach the code to better illustrate what I am trying to do:
//+------------------------------------------------------------------+
//| MyFirstExpert.mq4 |
//| Leonardo |
//| http://investinmarkets.altervista.org |
//+------------------------------------------------------------------+
#property copyright "Leonardo "
#property link "http://investinmarkets.altervista.org"
#property version "1.00"
#property strict
input int BarCount = 3;
int Ticket = 0;
//+------------------------------------------------------------------+
//| Expert tick function |
//+------------------------------------------------------------------+
void OnTick() {
int BarCountTemp = BarCount + 1;
double bars[];
ArrayResize( bars, BarCountTemp );
for ( int i = 0; i < BarCountTemp; i++ ) {
bars[i] = Close[i + 1];
}
int i = 0;
bool is_p;
do
{
if ( bars[i] > bars[i+1] && i < BarCountTemp ) is_p = true;
else is_p = false;
i++;
}
while ( is_p );
if ( is_p == true && Ticket == 0 ) {
Ticket = OrderSend(_Symbol,OP_SELL,0.1,Bid,0,0,0,"Sell Order Custom",110);
Alert("Sell order opened to match found.");
Comment("Sell order opened #"+Ticket+".");
}
if ( Ticket != 0 ) {
bool select = OrderSelect(Ticket,SELECT_BY_TICKET);
if ( Close[1] > Close[2] ) {
bool close = OrderClose(Ticket,OrderLots(),Ask,0,clrGreen);
Alert("Sell order closed.");
Comment("Sell order closed #"+Ticket+".");
Ticket = 0;
}
}
}
//+------------------------------------------------------------------+
I simply want to count bars (the number is input by the user) and then perform a check: if e.g. 3 bars are all positive, open a sell order (just this case for the moment). If one is opened, check on the next bar whether the condition still holds; if not, close the trade.
I am getting always blank results.
Thank you in advance!
A: Welcome to the MQL4-world, Leonardo
let's review the syntax:
for ( int i = 0; i < BarCountTemp; i++ ) {
bars[i] = Close[i + 1];
}
int i = 0;
bool is_p;
do
{
if ( bars[i] > bars[i+1] && i < BarCountTemp ) is_p = true;
else is_p = false;
i++;
}
while ( is_p );
could be merged / simplified into a single loop/break construct:
bool is_p = True; // FYI: FALSE if not initialised
// WARNING: "New"-MQL4 has changed variable visibility-scope to be limited just to the innermost syntax-construct and variables easily "cease" exist outside that syntax-construct boundary ... for(){bool is_p ...visible...} ...invisible...
for ( int i = 0; // .SET
i < BarCountTemp; // .TEST: [**]
i++ ) { // .INC
if ( Close[i+1] > Close[i+2] // avoid TimeSeries' replica(s)
// && i < BarCountTemp // ALWAYS TRUE [^**]
) continue; // ---------------------------- LOOP-^
else {
is_p = False;
break; // ---------------------------- EXIT-v
}
} // close the for-loop
Next: got at least one Comment() remark on top of the chart window?
int Ticket = EMPTY; // Rather initialise as = EMPTY;
if ( is_p == True
&& Ticket == EMPTY // un-ambiguous meaning
) {
Ticket = OrderSend( _Symbol, // .SYM
OP_SELL, // .OP
0.1, // .LOTs check sizing, MarketInfo()
Bid, // .PRICE
0, // .SLIPPAGE
0, // .SL
0, // .TP
"Sell Order Custom",// .COMMENT
110 // .MAGNUM
);
if ( Ticket == EMPTY ){ // EXC. HANDLER
...
}
else {
Alert( "Sell order opened to match found." ); // .NOP if isTesting()
Comment( "Sell order opened #" + Ticket + "." ); // .GUI is visible????
}
}
Finally, include exception handlers for cases where an error may appear:
if ( Ticket != EMPTY // TEST 1st,
&& Close[1] > Close[2] // TEST 2nd, prevent dbPool-ops, if not True
) {
bool select = OrderSelect( Ticket, SELECT_BY_TICKET );
if (!select ){ // EXC. HANDLER
...
}
bool close = OrderClose( Ticket,
OrderLots(),
Ask,
0,
clrGreen
);
if (!close ){ // EXC. HANDLER
...
}
Alert( "Sell order closed." );
Comment( "Sell order closed #" + Ticket + "." );
Ticket = EMPTY; // .SET EMPTY
}
}
| |
doc_3304
|
Department | Salary | Date | Type | Days from Y | Days to M
-----------+--------+------------+-----------------+-------------+-----------
Finance | 71 | 01-01-2016 | Regular payment | 1 | 30
Sales | 3000 | 20-01-2016 | Regular payment | 20 | 11
Sales | -300 | 21-01-2016 | Correction | 21 | 10
Finance | 2000 | 01-02-2016 | Regular payment | 32 | 27
Sales | 3100 | 15-02-2016 | Regular payment | 46 | 12
For regular payments, the salary needs to be corrected to present as if it was a full month. But, in the next month the correction of the previous month must not be included (because it's already provided for in the new salary) - only the correction of the last month should be included!
For Sales, that would be:
Date | Salary | Salary (cum.) | Correction | Salary (corr.) cum.
---------------------------------------------------------------------------
2016 | 5800 | 5800 | |
2016-01 | 2700 | 2700 | 1650 | 4350
2016-01-20 | 3000 | 3000 | 1650 | 4650
2016-01-21 | -300 | 2700 | | 4350
2016-02 | 2550 | 5250 | 2040 | 7290
2016-02-15 | 2550 | 5250 | 2040 | 7290
Calculating the correction itself is quite easy: if it's a regular payment, then use that date to calculate the correction for the given month-department combination.
Using a LASTNONBLANK expression, I can make a correct cumulative measure that works for a single department:
Salary (corr.) cum := CALCULATE(MAX([Correction]); LASTNONBLANK([Date]; MAX([Correction])))
However, this doesn't work across departments - for 2016-01 that would lead to wrong total counters:
Department | Salary | Salary (cum.) | Correction | measure | should be
-----------------------------------------------------------------------
(Total) | 2771 | 3071 | | 4721 | 6851
Finance | 71 | 71 | 2130 | 2201 | 2201
Sales | 2700 | 3000 | 1650 | 4650 | 4650
How do I create a measure that correctly calculates the corrections for each month, as well as gets the totals correct?
(so basically it looks to the last correction for each department (or other dimension) and uses the sum of these instead of the last correction across all dimensions)
A: You basically need to iterate over the departments.
Salary (corr.) cum :=
SUMX (
Departments,
CALCULATE(MAX([Correction]); LASTNONBLANK([Date]; MAX([Correction])))
)
That should do the trick.
Alberto
| |
doc_3305
|
TestController.java
@RestController
public class TestController {
@RequestMapping(value = "/test", method = RequestMethod.POST)
public String test() {
throw new PasswdException("password err");
}
}
PasswdException.java
public class PasswdException extends RuntimeException {
public PasswdException(String msg) {
super(msg);
}
}
RestTest.java
public class RestTest {
public static void main(String[] args) {
RestTemplate restTemplate = new RestTemplate();
try {
String s = restTemplate.postForObject("http://localhost:8080/test", null, String.class);
} catch (Exception e) {
if(e instanceof PasswdException){
System.out.println("..........");
//do sth
}
}
}
}
Expected: the client catches a PasswdException instance, but the actual exception is HttpServerErrorException.
A: The exception type itself cannot travel over REST, but you can achieve the same effect by defining a contract:
1. Define an HTTP error status code for your exception (BAD_REQUEST is just one that seems relevant; you can pick any you feel suits):
@ResponseStatus(value=HttpStatus.BAD_REQUEST, reason="Wrong password")
public class PasswdException extends RuntimeException {
private static final long serialVersionUID = 1L;
}
2. The RestTest (client) code should check this HTTP error status with something like:
if(resp.getStatus().equals(HttpStatus.BAD_REQUEST))
etc.
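A hedged sketch of what that client-side check could look like with RestTemplate (imports from org.springframework.http and org.springframework.web.client are assumed):
try {
    ResponseEntity<String> resp =
            restTemplate.postForEntity("http://localhost:8080/test", null, String.class);
} catch (HttpClientErrorException e) {                  // thrown for 4xx responses
    if (e.getStatusCode() == HttpStatus.BAD_REQUEST) {
        // map the agreed status code back to the domain error, e.g. wrong password
        System.out.println("Wrong password");
    }
}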
| |
doc_3306
|
How can I show the notification on the lock screen or in the notification drawer when an Android app receives the notification in the foreground?
I'd like to implement the function without using a local notification.
In the case of an iOS app, you can use the following method when receiving a notification in the foreground.
completion([.list])
in
userNotificationCenter(_:willPresent:withCompletionHandler:)
Then you can show the notification in Notification Center
https://developer.apple.com/documentation/usernotifications/unusernotificationcenterdelegate/1649518-usernotificationcenter
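For reference, a minimal Swift sketch of that delegate method (standard UserNotifications API):
import UserNotifications

class NotificationDelegate: NSObject, UNUserNotificationCenterDelegate {
    func userNotificationCenter(_ center: UNUserNotificationCenter,
                                willPresent notification: UNNotification,
                                withCompletionHandler completionHandler: @escaping (UNNotificationPresentationOptions) -> Void) {
        // Show the notification in Notification Center even while the app is in the foreground
        completionHandler([.list])
    }
}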
I'd appreciate it if you could tell me how to achieve this UX.
Thank you.
| |
doc_3307
|
I calculate one path using the pgr_trsp function now, but it gives only one route. I need several shortest paths and turn restrictions.
A: pgRouting does not have that functionality today. The best you can do today is write a wrapper that calls pgr_trsp multiple times.
There is additional work we want to do on trsp to convert it to Boost, along the lines of the work done on dijkstra in version 2.1.0, but it is not a priority at the moment without some funding.
| |
doc_3308
|
A: I solved this using a JOIN. I used a join between two SELECT statements.
SELECT products.count,prices.sumofproducts FROM
(select count(*) as count,1 as DUMMY from products) products
join
(select sum(prices) as sumofproducts,1 as DUMMY from prices) prices
on products.DUMMY=prices.DUMMY
| |
doc_3309
|
Suppose I have a Dice class:
public class Dice
{
public int FaceValue { get; set; }
public Dice(int faceValue)
{
this.FaceValue = faceValue;
}
}
And a Result class ...
public class Result
{
public Dice D1 { get; set; }
public Dice D2 { get; set; }
public Dice D3 { get; set; }
// Always has three dices ...
public Result(Dice d1,Dice d2,Dice d3)
{
D1 = d1;
D2 = d2;
D3 = d3;
}
}
And a class Bet ...
public class Bet
{
// A bet could have one , two , or three dices ....
public List<Dice> Dices = new List<Dice>();
}
Is there any simple way (LINQ or not) to COUNT how many times a single Bet (that can have one, two or three dice)
appears in a single Result that always has three dice?
And if my List of Bets has more than one Bet, to check whether any Bet appears in a Result of three dice?
For instance
Result.D1 = new Dice(1);
Result.D2 = new Dice(4);
Result.D3 = new Dice(1);
{ { new Dice(1), new Dice(4) } } appears 1 time ===> 1
{ { new Dice(1) } } appears 2 times ====> 2
{ { new Dice(4) , new Dice(1) , new Dice(1) } } appears 1 time ====> 1
{ { new Dice(5) , new Dice(2) , new Dice(3) } } doesn't appear ====> 0
{ { new Dice(1) , new Dice(6) , new Dice(6) },
{ new Dice(4) , new Dice(4) , new Dice(4) },
{ new Dice(1) , new Dice(2) , new Dice(3) },
{ new Dice(1) , new Dice(5) , new Dice(5) },
{ new Dice(1) , new Dice(1) , new Dice(4) },
{ new Dice(3) , new Dice(3) , new Dice(3) } } has one bet that is equal so ========> 1
A: public class Result
{
public Dice D1 { get; set; }
public Dice D2 { get; set; }
public Dice D3 { get; set; }
// Always has three dices ...
public Result(Dice d1,Dice d2,Dice d3)
{
D1 = d1;
D2 = d2;
D3 = d3;
}
public bool Match(IEnumerable<Dice> dice)
{
return ...; // Your comparison logic here
}
}
var bets = new List<Bet>();
// result is the Result instance being checked against
var matchCount = bets.Count(bet => result.Match(bet.Dices));
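One possible way to fill in the Match placeholder is a count-based (multiset) comparison; a sketch under that assumption:
public bool Match(IEnumerable<Dice> dice)
{
    // The bet matches if, for every face value it contains, the result
    // holds at least as many dice with that value.
    var resultCounts = new[] { D1, D2, D3 }
        .GroupBy(d => d.FaceValue)
        .ToDictionary(g => g.Key, g => g.Count());

    return dice
        .GroupBy(d => d.FaceValue)
        .All(g => resultCounts.ContainsKey(g.Key) && g.Count() <= resultCounts[g.Key]);
}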
A: var dice = ShortForm(new[]{result.D1, result.D2, result.D3});
var betGoodCount = bets.Count(bet => BetInDice(bet, dice));
Dictionary<int, int> ShortForm(IEnumerable<Dice> dice)
{
return dice
.GroupBy(die => die.FaceValue)
.ToDictionary(group => group.Key, group => group.Count());
}
bool BetInDice(Bet bet, Dictionary<int, int> dice)
{
return ShortForm(bet.Dices)
.All(pair => dice.ContainsKey(pair.Key) && pair.Value <= dice[pair.Key]);
}
A: I'm going under the assumption that you roll x number of dice and place y number of bets. You then want to check whether any of your bets was a number that was rolled.
First, you should change up how your Bet class is structured.
public class Bet
{
public int FaceValue { get; set; }
}
The reason is that one bet relates to one face value. You will then have a list of bets, like this:
List<Bet> bets = new List<Bet>()
{
new Bet() { FaceValue = 2 },
new Bet() { FaceValue = 4 },
//etc
};
Add these methods to your Result class:
private IEnumerable<int> CorrectBets(List<Dice> dice, List<Bet> bets)
{
//use a LINQ join on their face values
return from die in dice
join bet in bets on die.FaceValue equals bet.FaceValue
select die.FaceValue;
}
public int NumberOfCorrectBets(List<Bet> bets)
{
var dice = new List<Dice>() { D1, D2, D3 };
return CorrectBets(dice, bets).Count(); //this actually gets the count
}
The only thing you have to do now is create a List<Bet> object and pass that into the NumberOfCorrectBets method.
This should account for duplicate dice numbers / bet numbers. Meaning if you bet on a 3 and a 3 gets rolled 2 times, you will get 2 for an answer.
| |
doc_3310
|
$x = 4
$y = 32
while ($y > $x) { $y = $y-$x; }
The final value of $y is what I'm after.
A: Use the mod operator:
$remainder = $y % $x;
If you want to get how many, use division and take the floor:
$total = floor( $y / $x);
So, with $y = 35; and $x = 4;, you'd get:
$total = 8, $remainder = 3
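Putting both together as a small, self-contained PHP snippet:
<?php
$x = 4;
$y = 35;

$remainder = $y % $x;         // 3
$total     = floor($y / $x);  // 8

echo "$total full steps of $x, with $remainder left over\n";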
| |
doc_3311
|
This Excel sheet contains a drop down list, which is populated with data from a different worksheet in the same file.
Currently you have to select the desired item by hand, so it can set a number of variables. The sheet has a macro with some calculations that need these variables.
The program I'm making is going to automate the process, so I need to select the right item in the list programmatically and run the macro afterwards. This is where I'm stuck. I can add values to other cells and the macro runs fine, but it's using the default list value.
I've been looking for a solution, but can't get it to work.
This does nothing for me:
Changing the value in a drop down list Excel C#
worksheet.Cells[4, 3] = "5";
where "5" is an arbitrary index just to get it to change. Doesn't work.
Replacing "5" by "actual string value" doesn't work either.
I'm using the Microsoft.Office.Interop.Excel.Application object to control the file.
How do I select an item from the list? Either by matching string or by index, both will do.
A: Got it to work!
The problem was that the cell value got overridden by a value in another worksheet. Changing that one instead of the actual drop-down list did the trick.
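For reference, a hedged sketch of that approach with the Interop API; the sheet name, cell address and macro name below are assumptions:
// using Excel = Microsoft.Office.Interop.Excel;
// 'app' is the running Excel Application, 'workbook' the opened workbook.
Excel.Worksheet source = (Excel.Worksheet)workbook.Worksheets["Lookup"]; // sheet feeding the list
source.Cells[4, 3] = "actual string value";  // set the source cell instead of the drop-down cell
app.Run("CalculateMacro");                   // then run the macro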
| |
doc_3312
|
@auth.route('/register', methods=['GET', 'POST'])
def register():
reg_form = RegistrationForm()
if reg_form.validate_on_submit():
try:
user = User(username=reg_form.username.data, password=reg_form.password.data,
email=reg_form.email.data)
db.session.add(user)
db.session.commit()
send_email("Welcome!", "email/welcome_msg",
user.email, user=user)
return redirect(url_for("auth.login"))
except:
flash("Test")
title = _("Register")
return render_template("auth/register.html",
reg_form=reg_form,
title=title)
When you enter an existing username or email into the registration form, the unique constraint in my database model works: no new data is added to the database, and the website remains on the registration page rather than redirecting to the login page.
However, it is apparent that no IntegrityError or any other such error is raised when someone tries to register a username or email address which is already taken, so the except portion where a message is supposed to flash does not run, regardless of whether the except clause is specific or not.
In my efforts to provide an error message in the event that someone attempts to register with an existing username or email, I have also tried checking the database for an existing username like so:
def register():
reg_form = RegistrationForm()
if reg_form.validate_on_submit():
user = User(username=reg_form.username.data, password=reg_form.password.data,
email=reg_form.email.data)
if db.session.query(db.exists().where(User.username == reg_form.username.data)).scalar():
flash("Test")
else:
db.session.add(user)
db.session.commit()
send_email("Welcome!", "email/welcome_msg",
user.email, user=user)
return redirect(url_for("auth.login"))
title = _("Register")
return render_template("auth/register.html",
reg_form=reg_form,
title=title)
However, this doesn't seem to work either as no message gets flashed. I have tried a few variations of the above, but have not had any success yet. I will note that I am using Flask-Toastr to flash the messages, but I don't think that is very relevant as it has worked fine with a number of other 'if-else' and 'try-except' statements I have used.
A: You also need to render the flashed messages in your HTML template file:
{% with messages = get_flashed_messages() %}
{% if messages %}
<ul>
{% for message in messages %}
<li>{{ message }}</li>
{% endfor %}
</ul>
{% endif %}
{% endwith %}
| |
doc_3313
|
What are the best practices to support them working?
Is it a good solution if I run PHPUnit tests in production server with cron jobs and if they fail, I will send email to a programmer?
A: That seems like a pretty good solution. It doesn't have to be PHPUnit per se. You could just run a small script that requests some information (or even ping for that matter), check if that information is correct and send an email to the admin if it isn't.
As said, you could even just run a CRON with a ping that sends an email as soon as the ping times out.
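A minimal sketch of such a cron health check; the URL and e-mail address are placeholders:
<?php
// healthcheck.php - run from cron, e.g. */5 * * * * php /path/to/healthcheck.php
$response = @file_get_contents('https://example.com/health');  // hypothetical endpoint

if ($response === false || strpos($response, 'OK') === false) {
    mail('admin@example.com', 'Site health check failed',
         'The health endpoint did not return OK at ' . date('c'));
}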
A: If the package comes with tests, it's always a good idea to run them.
You may also find third-party error monitoring services like Bugsnag and Honeybadger useful to track exceptions and errors in a production environment.
| |
doc_3314
|
I have one SELECT, and this SELECT gives me just ONE ID, see:
string query = "SELECT ID FROM USERS WHERE NAME = 'WILL'";
MySqlCommand cmd = new MySqlCommand(query, conn);
MySqlDataAdapter da = new MySqlDataAdapter(cmd);
I don't know how to get this ID and use it in the other SELECT.
result of Select:
ID = 23 // ID is result of first select
New SELECT with the result of the other SELECT:
SELECT NAME FROM USERS WHERE ID = @ID
cmd.Parameters.AddWithValue("@ID", ID);
How do I do this?
A: Since your query returns one cell, you can use ExecuteScalar method. Looks like you don't need to use MySqlDataAdapter in this case.
Executes the query, and returns the first column of the first row in
the result set returned by the query.
int id = (Int32)cmd.ExecuteScalar(); //id will be 23
Then you can use it as;
cmd.Parameters.AddWithValue("@ID", id);
Of course you can use MySqlDataAdapter here; you could use a while loop with the .Read() method to get the first column value. But ExecuteScalar does this in one line.
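A small end-to-end sketch of both steps (the connection string is assumed, the queries are taken from the question):
using (var conn = new MySqlConnection(connectionString))
{
    conn.Open();

    // Step 1: get the single ID
    var idCmd = new MySqlCommand("SELECT ID FROM USERS WHERE NAME = 'WILL'", conn);
    int id = Convert.ToInt32(idCmd.ExecuteScalar());

    // Step 2: reuse it in the next query
    var nameCmd = new MySqlCommand("SELECT NAME FROM USERS WHERE ID = @ID", conn);
    nameCmd.Parameters.AddWithValue("@ID", id);
    string name = (string)nameCmd.ExecuteScalar();
}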
I tried with Int16 but it only works with Int32, why?
The ExecuteScalar method returns this value as object. That's why this is a boxing/unboxing issue. From the documentation:
Unboxing is an explicit conversion from the type object to a value
type or from an interface type to a value type that implements the
interface. An unboxing operation consists of:
*
*Checking the object instance to make sure that it is a boxed value of
the given value type.
*Copying the value from the instance into the value-type variable.
Also
For the unboxing of value types to succeed at run time, the item being
unboxed must be a reference to an object that was previously created
by boxing an instance of that value type. Attempting to unbox null
causes a NullReferenceException. Attempting to unbox a reference to an
incompatible value type causes an InvalidCastException.
In your case, your 23 value is boxed, and it is an int by default. But you try to unbox it into a variable of a type that is not int. This isn't a valid operation; you can't do that in just one step.
For example this will be a valid operation;
Int16 id = (Int16)(Int32)cmd.ExecuteScalar(); // valid
| |
doc_3315
|
However, what I want to be able to do is navigate to the page with PHP variables entered into the URL which then autofill the text box with the job number from the URL.
I can't figure out how to check whether this PHP variable is present in my javascript.
Here is the relevant javascript below:
<script type="text/javascript">
function jobCheckCallback (data) {
if (data.includes("<td")) {
alert("Job already exists. Edit this job by clicking Edit on the job's overview page");
document.getElementById("jobID").value = 0;
} else {
var njobID = document.getElementById("jobID").value;
if (njobID == "") njobID = 0;
else njobID= <?php echo $jobID; ?>;
sendAsync("editDatabase.php?sql=UPDATE+customerlist+SET+jobID="+njobID+" WHERE+jobID=" +jobID);
sendAsync("editDatabase.php?sql=UPDATE+operations+SET+jobID="+njobID+" WHERE+jobID="+jobID);
sendAsync("editDatabase.php?sql=UPDATE+jobfiles+SET+jobID="+njobID+" WHERE+jobID="+jobID);
sendAsync("editDatabase.php?sql=UPDATE+pallets+SET+jobID="+njobID+" WHERE+jobID="+jobID);
sendAsync("editDatabase.php?sql=UPDATE+jobs+SET+jobID="+njobID+" WHERE+jobID="+jobID,function(id){
return function(){
setjobID(id);
}
}(njobID));
}
}
</script>
The error message I get is "SyntaxError: missing ; before statement" but I'm guessing it's another issue causing this error.
EDIT: $jobID is the PHP variable that can be entered into the URL, and is then autofilled into the textbox on the page.
A: Have you checked the resulting page source? I suspect you need to wrap the PHP output statement in quotes, so:
else njobID= '<?php echo $jobID; ?>';
So if the job ID is empty, it won't result in invalid JS.
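To illustrate, this is roughly how the rendered page source differs when $jobID is empty:
// without quotes, an empty $jobID renders as invalid JavaScript:
else njobID= ;
// with quotes it renders as valid JavaScript (an empty string):
else njobID= '';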
| |
doc_3316
|
and there was a brief mention of modifying the StepState to StepState.editing so you get a pencil icon.
How can I modify the StepState of the step I am on so that it changes the state to editing (or complete) when I step on/past it?
class _SimpleWidgetState extends State<SimpleWidget> {
int _stepCounter = 0;
List<Step> steps = [
Step(
title: Text("Step One"),
content: Text("This is the first step"),
isActive: true
),
Step(
title: Text("Step Two"),
content: Text("This is the second step"),
isActive: true,
),
Step(
title: Text("Step Three"),
content: Text("This is the third step"),
isActive: true,
),
Step(
title: Text("Step Four"),
content: Text("This is the fourth step"),
isActive: true,
),
];
@override
Widget build(BuildContext context) {
return Container(
child: Stepper(
steps: steps,
currentStep: this._stepCounter,
type: StepperType.vertical,
onStepTapped: (step) {
setState(() {
_stepCounter = step;
steps[step].state = StepState.editing; // this does not work but is what I'm trying to accomplish
});
},
onStepCancel: () {
setState(() {
_stepCounter > 0 ? _stepCounter -= 1 : _stepCounter = 0;
});
},
onStepContinue: () {
setState(() {
_stepCounter < steps.length - 1 ? _stepCounter += 1 : _stepCounter = 0;
});
},
),
);
}
}
A: Complete example with 3 states while moving steps:
class _State extends State<MyApp> {
int _current;
List<StepState> _listState;
@override
void initState() {
_current = 0;
_listState = [
StepState.indexed,
StepState.editing,
StepState.complete,
];
super.initState();
}
List<Step> _createSteps(BuildContext context) {
List<Step> _steps = <Step>[
new Step(
state: _current == 0
? _listState[1]
: _current > 0 ? _listState[2] : _listState[0],
title: new Text('Step 1'),
content: new Text('Do Something'),
isActive: true,
),
new Step(
state: _current == 1
? _listState[1]
: _current > 1 ? _listState[2] : _listState[0],
title: new Text('Step 2'),
content: new Text('Do Something'),
isActive: true,
),
new Step(
state: _current == 2
? _listState[1]
: _current > 2 ? _listState[2] : _listState[0],
title: new Text('Step 3'),
content: new Text('Do Something'),
isActive: true,
),
];
return _steps;
}
@override
Widget build(BuildContext context) {
List<Step> _stepList = _createSteps(context);
return new Scaffold(
appBar: new AppBar(
title: new Text('Stepper Example'),
),
body: new Container(
padding: new EdgeInsets.all(20.0),
child: new Center(
child: new Column(
children: <Widget>[
Expanded(
child: Stepper(
type: StepperType.vertical,
steps: _stepList,
currentStep: _current,
onStepContinue: () {
setState(() {
if (_current < _stepList.length - 1) {
_current++;
} else {
_current = _stepList.length - 1;
}
//_setStep(context);
});
},
onStepCancel: () {
setState(() {
if (_current > 0) {
_current--;
} else {
_current = 0;
}
//_setStep(context);
});
},
onStepTapped: (int i) {
setState(() {
_current = i;
});
},
),
),
],
),
),
),
);
}
}
A: Move the Step list declaration into the build method and declare the state field of each step as, for instance for the first step, _stepCounter == 0 ? StepState.editing : StepState.indexed, and remove the line steps[step].state = StepState.editing; because .state is final and therefore can't be changed.
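A sketch of that suggestion applied to the question's widget (only the relevant part shown):
@override
Widget build(BuildContext context) {
  // Build the steps inside build() so their state follows _stepCounter.
  final steps = <Step>[
    Step(
      title: Text("Step One"),
      content: Text("This is the first step"),
      isActive: true,
      state: _stepCounter == 0 ? StepState.editing : StepState.indexed,
    ),
    Step(
      title: Text("Step Two"),
      content: Text("This is the second step"),
      isActive: true,
      state: _stepCounter == 1 ? StepState.editing : StepState.indexed,
    ),
    // ... remaining steps follow the same pattern ...
  ];

  return Container(
    child: Stepper(
      steps: steps,
      currentStep: _stepCounter,
      onStepTapped: (step) {
        setState(() {
          _stepCounter = step;
        });
      },
    ),
  );
}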
| |
doc_3317
|
Byte[] bytes = (byte[])(reader["Avatar"]);
fs1.Write(bytes, 0, bytes.Length);
pictureBox1.Image = Image.FromFile("image.jpg");
pictureBox1.SizeMode = PictureBoxSizeMode.StretchImage;
pictureBox1.Refresh();
but I get an out-of-memory exception on the line: "pictureBox1.Image = Image.FromFile("image.jpg");"
I do not know why this happens; please help me.
A: If fs1 is a stream you probably should close it before you access that file in the next line.
Note that you can also create the image in memory and avoid the file system completely.
A: Try with this method:
public Image ImageFromBytes(byte[] bytes)
{
using(var ms = new MemoryStream(bytes))
{
return Image.FromStream(ms);
}
}
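Usage would then be something like the line below. Note that the Image.FromStream documentation asks for the stream to stay open for the lifetime of the image, so if you run into problems you may need to keep the MemoryStream alive or clone the returned image.
pictureBox1.Image = ImageFromBytes((byte[])reader["Avatar"]);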
A: You must Close and Dispose your stream:
fs1.Write(bytes, 0, bytes.Length);
//Make sure you closed your stream
fs1.Close();
// You should call Dispose too.
fs1.Dispose();
pictureBox1.Image = Image.FromFile("image.jpg");
or enclose your file-writing process in a using block:
using (Stream fs1 ...)
{
...
fs1.Write(bytes, 0, bytes.Length);
}
| |
doc_3318
|
var a in A; # to say that the variable takes value from index A
and I wanted to use it as something like:
M1[a] >= 10;
M2[a] <= 100;
However AMPL complains:
variable in index expression
How can I point to an element of an array or matrix using a variable?
Thanks
A: AMPL doesn't allow variables in subscripts yet. However there is a way to emulate them. For example, M1[a] >= 10 can be emulated as follows:
s.t. c: exists{i in A} (M1[i] >= 10 and i = a);
This is not very efficient, but should work fine for small problems. Note that to solve a problem containing the above constraint (or variables in subscripts once they are added) requires a constraint programming solver such as ilogcp or gecode. See LOGIC AND CONSTRAINT PROGRAMMING EXTENSIONS for details.
The new version of the ilogcp driver for AMPL supports the element constraint, for example:
include cp.ampl;
var x{i in 0..2} >= i integer;
var y in 0..2 integer;
minimize o: element({i in 0..2} x[i], y);
option solver ilogcp;
solve;
where element({i in 0..2} x[i], y) is equivalent to x[y] and is translated into an IloElement constraint.
| |
doc_3319
|
SELECT
POLIN.Itemkey, POLIN.Description, POLIN.Location,
SUM(POLIN.Qtyremn), INLOC.Qtyonhand
FROM
X.dbo.INLOC INLOC, X.dbo.POLIN POLIN
WHERE
INLOC.Itemkey = POLIN.Itemkey
AND INLOC.Location = POLIN.Location
AND ((POLIN.Location = 'SPL')
AND (POLIN.Qtyremn > 0))
GROUP BY
POLIN.Itemkey, POLIN.Description
A: There are several columns in the select clause that are neither inside an aggregate function nor in the group by clause:
SELECT polin.itemkey,
polin.description,
polin.location, <-- this one
Sum(polin.qtyremn),
inloc.qtyonhand <-- this one
FROM x.dbo.inloc INLOC,
x.dbo.polin POLIN
WHERE inloc.itemkey = polin.itemkey
AND inloc.location = polin.location
AND ( ( polin.location = 'SPL' )
AND ( polin.qtyremn > 0 ) )
GROUP BY polin.itemkey,
polin.description
A solution may be:
SELECT polin.itemkey,
polin.description,
polin.location,
inloc.qtyonhand ,
Sum(polin.qtyremn)
FROM x.dbo.inloc INLOC,
x.dbo.polin POLIN
WHERE inloc.itemkey = polin.itemkey
AND inloc.location = polin.location
AND ( ( polin.location = 'SPL' )
AND ( polin.qtyremn > 0 ) )
GROUP BY
polin.itemkey,
polin.description,
polin.location,
inloc.qtyonhand
Perhaps you are used to working with MySQL, which allows hidden columns.
A: Try changing the group by to:
GROUP BY POLIN.Itemkey, X.dbo.POLIN.Description
A: Close... but here I think is what you're really after. Sum the QtyOnHand... do not group by it... otherwise you're probably OK. You are most likely getting duplicates of your first 3 fields... this should eliminate them. Also, use inner join instead of commas. New standard (as mentioned above ... since like 88... yes... 1988.)
SELECT polin.itemkey,
polin.description,
polin.location,
Sum(inloc.qtyonhand) [qtyonhand] ,
Sum(polin.qtyremn)[qtyremn]
FROM x.dbo.inloc INLOC
INNER JOIN x.dbo.polin POLIN on inloc.itemkey = polin.itemkey
AND inloc.location = polin.location
WHERE ( ( polin.location = 'SPL' )
AND ( polin.qtyremn > 0 ) )
GROUP BY
polin.itemkey,
polin.description,
polin.location
| |
doc_3320
|
import javax.sound.sampled.*;
import java.io.*;
class tester {
public static void main(String args[]) throws IOException {
try {
Clip clip_1 = AudioSystem.getClip();
AudioInputStream ais_1 = AudioSystem.getAudioInputStream( new File("D:\\UnderTest\\wavtester_1.wav") );
clip_1.open( ais_1 );
Clip clip_2 = AudioSystem.getClip();
AudioInputStream ais_2 = AudioSystem.getAudioInputStream( new File( "D:\\UnderTest\\wavtester_2.wav") );
clip_2.open( ais_2 );
byte arr_1[] = new byte[ais_1.available()]; // not the right way ?
byte arr_2[] = new byte[ais_2.available()];
ais_1.read( arr_1 );
ais_2.read( arr_2 );
} catch( Exception exc ) {
System.out.println( exc );
}
}
}
From the above code I have byte arrays arr_1 and arr_2 for ais_1 and ais_2. Is there any way to concatenate these 2 byte arrays (arr_1, arr_2) and then convert them back to an audio stream? I want to concatenate 2 audio files.
A: Once you have the two byte arrays in hand (see my comment), you can concatenate them into a third array like this:
byte[] arr_combined = new byte[arr_1.length + arr_2.length];
System.arraycopy(arr_1, 0, arr_combined, 0, arr_1.length);
System.arraycopy(arr_2, 0, arr_combined, arr_1.length, arr_2.length);
Still not a complete answer, sorry, as this array is just the sample data - you still need to write out a header followed by the data. I didn't see any way to do this with the AudioSystem api.
Edit: try this:
Join two WAV files from Java?
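That said, if both input files share the same AudioFormat, a hedged sketch of writing the combined samples back out with the javax.sound API might look like this (the output path is a placeholder, and the code belongs inside the same try block):
AudioFormat format = ais_1.getFormat();
long frames = arr_combined.length / format.getFrameSize();
AudioInputStream combinedStream = new AudioInputStream(
        new ByteArrayInputStream(arr_combined), format, frames);
AudioSystem.write(combinedStream, AudioFileFormat.Type.WAVE,
        new File("D:\\UnderTest\\combined.wav"));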
| |
doc_3321
|
Below is the code:
var parseSize = function(obj){
if (obj === 0 ){
return 0;
} else {
return parseFloat(obj/1024).toFixed(2);
}
}
var testData=[
{ name: 'ddd',Vcpu: 2, memory: 4096, os: 'Microsoft Windows Server 2008 (32-bit)'},
{ name: 'eee',Vcpu: 2, memory: 2040, os: 'Microsoft Windows Server 2008 (32-bit)'},
{ name: 'ddd',Vcpu: 2, memory: 4096, os: 'Microsoft Windows Server 2008 (32-bit)'},
{ name: 'eee',Vcpu: 2, memory: 2040, os: 'Microsoft Windows Server 2008 (32-bit)'}
];
testData =_.invoke(testData , function(){
testData['memory'] = parseSize(testData['memory']) + " GB";
});
console.log(testData);
The above code is not working. Please let me know where I am going wrong.
Adding the Jsfiddle link: http://jsfiddle.net/prashdeep/k29zuba2/
A: Here is what went wrong in your code:
*
*You should use _.each
*You should not reassign testData with the result
*You should be accessing "memory" property on each item in testData, not on testData itself
So change your latter part of code to:
_.each(testData , function(datum){
datum['memory'] = parseSize(datum['memory']) + " GB";
});
console.log(testData);
A: You can use map. This is built into JavaScript since ES5:
var result = testData.map(function(x) {
x.memory = parseSize(x.memory) + ' GB'
return x
})
A: Using underscore _.each you can do the following:
_.each(testData , function( item ){
item['memory'] = parseSize(item['memory']) + " GB";
});
See jsfiddle example.
Don't use map or forEach if you want support for older browsers like IE8.
A: var parseSize = function(obj){
if (obj === 0 ){
return 0;
} else {
return parseFloat(obj/1024).toFixed(2);
}
}
var testData=[
{ name: 'ddd',Vcpu: 2, memory: 4096, os: 'Microsoft Windows Server 2008 (32-bit)'},
{ name: 'eee',Vcpu: 2, memory: 2040, os: 'Microsoft Windows Server 2008 (32-bit)'},
{ name: 'ddd',Vcpu: 2, memory: 4096, os: 'Microsoft Windows Server 2008 (32-bit)'},
{ name: 'eee',Vcpu: 2, memory: 2040, os: 'Microsoft Windows Server 2008 (32-bit)'}
];
testData.forEach(function(d){ d.memory = parseSize(d.memory); });
console.log(testData);
| |
doc_3322
|
Installing python 3.4: Successful
Installing numpy: Successful
Installing matplotlib: Failed
Installing cv2: Failed
Can anybody help me please? Thanks a lot.
A: You can install matplotlib using pip (which is already installed on your machine - mentioned in your previous question):
pip install matplotlib
more info:
http://matplotlib.org/faq/installing_faq.html
A: It's very common to install Python packages through pip today (recursive acronym for pip installs packages). However, this is not that trivial under Windows.
How to install matplotlib:
Try to open a command line and type in pip install matplotlib. If this does not work, you'll need to do some more work to get pip running. I gave a detailed answer here: Not sure how to fix this Cmd command error?.
How to install OpenCV:
The Python OpenCV DLL must be made for your version of Python and your system architecture (or, to be more specific, the architecture your Python was compiled for).
*
*Download OpenCV for your Python version (2/3)
*Try replacing the x64 version with the x86 version
*There are a lot of different binaries here: http://www.lfd.uci.edu/~gohlke/pythonlibs/#opencv. Try to get the one exactly matching your Python version and system architecture and install it via pip (cp35 means CPython version 3.5, etc.).
If you have the OpenCV .whl file matching your system configuration, do pip install file.whl.
Hope this helps!
A: You may be better off using a package such as pythonxy as a start, e.g. from https://python-xy.github.io/, instead of installing each single package manually.
| |
doc_3323
|
Using flex, the 3 modules have the same height. The module titles are correctly top-aligned.
But it seems impossible to bottom-align the buttons:
#container {
display: flex;
align-items: stretch;
}
.module {
margin-right: 2em;
border: 1px solid white;
flex-basis: 30%;
}
<div style="text-align: center;">
<h1>Title</h1>
<h2>tagline</h2>
<div id="container">
<!-- Module1 -->
<div class="module" style="background-color: red;">
<table>
<tbody>
<tr>
<td valign="top">
<p><strong>Module 1</strong></p>
<p>lorem ipsum</p>
<p>lorem ipsum</p>
<p>lorem ipsum</p>
</td>
</tr>
<tr>
<td valign="bottom">
<div id="mc_embed_signup">
<form>
<div>
<div><input name="EMAIL" type="email" value="" placeholder="email address" /></div>
</div>
<div><input type="submit" value="button" /></div>
</div>
</form>
</div>
</td>
</tr>
</tbody>
</table>
</div>
<!-- Fin module 1, début module 2 -->
<div class="module" style="background-color: green;">
<table>
<tbody>
<tr>
<td valign="top">
<p><strong>Module 2</strong></p>
</td>
</tr>
<tr>
<td valign="bottom">
<div>
<form>
<div>
<div><input name="EMAIL" type="email" placeholder="email address" /></div>
</div>
<div class="clear"><input name="subscribe" type="submit" value="button" /></div>
</div>
</form>
</div>
</td>
</tr>
</tbody>
</table>
</div>
<!-- Fin module 2, début module 3 -->
<div class="module" style="background-color: yellow;">
<table>
<tbody>
<tr>
<td valign="top">
<p><strong>Module 3</strong></p>
<p>lorem ipsum</p>
</td>
</tr>
<tr>
<td valign="bottom">
<div>
<form>
<div>
<div ><input name="EMAIL" type="email" placeholder="email address" /></div>
</div>
<div class="clear"><input name="subscribe" type="submit" value="button" /></div>
</div>
</form>
</div>
</td>
</tr>
</tbody>
</table>
</div>
<!-- Fin module 3 -->
</div>
</div>
To achieve this bottom alignment, I used simple HTML table code, as suggested here.
It doesn't work here. What have I done wrong?
A: I would recommend not using a table layout at all here. Since you are using a flex layout, you can easily align your buttons and input fields to the bottom by setting the module to display:flex as well and using justify-content with space-between.
Update:
To be a bit more specific on why this works, let me try to explain it in detail.
The flex-direction of the .module elements is set to column. I'm using flex-flow here, which combines flex-direction and flex-wrap. This will force the .module-child elements to be laid out from top to bottom.
flex-direction
column
The flex container's main-axis is the same as the block-axis. The main-start and main-end points are the same as the before and after points of the writing-mode.
Now setting justify-content to space-between will make sure that the flex items (.module-child elements) are evenly distributed along the line, with the first item on the start line and the last item on the end line.
justify-content
space-between
Flex items are evenly distributed along the line. The spacing is done such as the space between two adjacent items is the same. Main-start edge and main-end edge are flushed with respectively first and last flex item edges.
Hope this makes a bit more sense now.
Here the example.
Sorry, but I just had to remove those inline styles. ;-)
.main {
text-align: center;
}
#container {
display: flex;
justify-content: center;
align-items: stretch;
}
.module {
display: flex;
flex-flow: column nowrap;
justify-content: space-between;
flex-basis: 30%;
margin: 0 1em;
padding: 10px;
border: 1px solid white;
}
.module:nth-child(1) {
background-color: red;
}
.module:nth-child(2) {
background-color: green;
}
.module:nth-child(3) {
background-color: yellow;
}
.module-child {
width: 100%;
}
<div class="main">
<h1>Title</h1>
<h2>tagline</h2>
<div id="container">
<!-- Module1 -->
<div class="module">
<div class="module-child">
<p><strong>Module 1</strong></p>
<p>lorem ipsum</p>
<p>lorem ipsum</p>
<p>lorem ipsum</p>
</div>
<div id="mc_embed_signup" class="module-child">
<form>
<div>
<div><input name="EMAIL" type="email" value="" placeholder="email address" /></div>
<div><input type="submit" value="button" /></div>
</div>
</form>
</div>
</div>
<!-- Module2 -->
<div class="module">
<div class="module-child">
<p><strong>Module 2</strong></p>
<p>lorem ipsum</p>
</div>
<div id="mc_embed_signup" class="module-child">
<form>
<div>
<div><input name="EMAIL" type="email" value="" placeholder="email address" /></div>
<div><input type="submit" value="button" /></div>
</div>
</form>
</div>
</div>
<!-- Module3 -->
<div class="module">
<div class="module-child">
<p><strong>Module 3</strong></p>
<p>lorem ipsum</p>
</div>
<div id="mc_embed_signup" class="module-child">
<form>
<div>
<div><input name="EMAIL" type="email" value="" placeholder="email address" /></div>
<div><input type="submit" value="button" /></div>
</div>
</form>
</div>
</div>
</div>
</div>
A: Add this CSS:
.module table {min-height:100%; height:100%;}
Demo Link
http://jsfiddle.net/qhpgk7nw/2/
A: You can try this one:
<div style="text-align: center;">
<h1>Title</h1>
<h2>tagline</h2>
<div id="container">
<!-- Module1 -->
<div class="module" style="background-color: red;">
<table>
<tbody>
<tr>
<td valign="top">
<p><strong>Module 1</strong></p>
<p>lorem ipsum</p>
<p>lorem ipsum</p>
<p>lorem ipsum</p>
</td>
</tr>
<tr>
<td valign="bottom">
<div id="mc_embed_signup">
<form>
<div>
<div><input name="EMAIL" type="email" value="" placeholder="email address" /></div>
</div>
<br/>
<div><input type="submit" value="button" /></div>
</div>
</form>
</div>
</td>
</tr>
</tbody>
</table>
</div>
<!-- Fin module 1, début module 2 -->
<div class="module" style="background-color: green;">
<table>
<tbody>
<tr>
<td valign="top">
<p><strong>Module 2</strong></p>
</td>
</tr>
<tr>
<td valign="bottom">
<div>
<form>
<div>
<div><input name="EMAIL" type="email" placeholder="email address" /></div>
</div>
<br/>
<div class="clear"><input name="subscribe" type="submit" value="button" /></div>
</div>
</form>
</div>
</td>
</tr>
</tbody>
</table>
</div>
<!-- Fin module 2, début module 3 -->
<div class="module" style="background-color: yellow;">
<table>
<tbody>
<tr>
<td valign="top">
<p><strong>Module 3</strong></p>
<p>lorem ipsum</p>
</td>
</tr>
<tr>
<td valign="bottom">
<div>
<form>
<div>
<div ><input name="EMAIL" type="email" placeholder="email address" /></div>
</div>
<br/>
<div class="clear"><input name="subscribe" type="submit" value="button" /></div>
</div>
</form>
</div>
</td>
</tr>
</tbody>
</table>
</div>
<!-- Fin module 3 -->
</div>
</div>
DEMO HERE
A: Try this; it may help you. Add this to your style:
<style>
#container
{
display: flex;
align-items: stretch;
justify-content: center;
}
.module
{
margin-right: 2em;
border: 1px solid white;
flex-basis: 30%;
display: flex;
align-items: center;
justify-content: center;
}
.module tr:nth-child(2)
{
height: 7em;
}
.module tr:nth-child(1)
{
align-self: flex-start;
}
.module tr:nth-child(3)
{
align-self: flex-end;
}
</style>
In your HTML:
<div style="text-align: center;">
<h1>Title</h1>
<h2>tagline</h2>
<div id="container">
<!-- Module1 -->
<div class="module" style="background-color: red;">
<table>
<tbody>
<tr>
<td valign="top">
<p><strong>Module 1</strong></p>
</td>
</tr>
<tr>
<td><p>lorem ipsum</p>
<p>lorem ipsum</p>
<p>lorem ipsum</p>
</td>
</tr>
<tr>
<td valign="bottom">
<div id="mc_embed_signup">
<form>
<div>
<div><input name="EMAIL" type="email" value="" placeholder="email address" /></div>
</div>
<div><input type="submit" value="button" /></div>
</form>
</div>
</td>
</tr>
</tbody>
</table>
</div>
<!-- Fin module 1, début module 2 -->
<div class="module" style="background-color: green;">
<table>
<tbody>
<tr>
<td valign="top">
<p><strong>Module 2</strong></p>
</td>
</tr>
<tr>
<td>
</td>
</tr>
<tr>
<td valign="bottom">
<div>
<form>
<div>
<div><input name="EMAIL" type="email" placeholder="email address" /></div>
</div>
<div class="clear"><input name="subscribe" type="submit" value="button" /></div>
</div>
</form>
</td>
</tr>
</tbody>
</table>
</div>
<!-- Fin module 2, début module 3 -->
<div class="module" style="background-color: yellow;">
<table>
<tbody>
<tr>
<td valign="top">
<p><strong>Module 3</strong></p>
</td>
</tr>
<tr>
<td> <p>lorem ipsum</p>
</td>
</tr>
<tr>
<td valign="bottom">
<div>
<form>
<div>
<div ><input name="EMAIL" type="email" placeholder="email address" /></div>
</div>
<div class="clear"><input name="subscribe" type="submit" value="button" /></div>
</div>
</form>
</td>
</tr>
</tbody>
</table>
</div>
<!-- Fin module 3 -->
</div>
</div>
| |
doc_3324
|
2006 to 2010: 2006, 2007, 2008, 2009, 2010
If anyone knows how to do this, please help. Thanks!
A: If you have O365, you can use SEQUENCE(), as @JvdV stated in his comment to the OP. This seems to be the neater option, however, here is a version that will only use MIN(), MAX(), and COLUMN() in case you don't have O365:
Assuming your "Begin Year" heading is in A1:
=IF((MIN($A2:$B2)+COLUMN()-3)>MAX($A2:$B2),"",MIN($A2:$B2)+COLUMN()-3)
This should be entered into C2 and copied to across and down.
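For completeness, the SEQUENCE() option mentioned above could be as simple as the following, assuming the begin year is in A2 and the end year in B2:
=SEQUENCE(1, B2-A2+1, A2)
This spills the years across the row, from the begin year to the end year.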
| |
doc_3325
|
The Controller retrieves data from the Model and passes it to the View
This seems a bit verbose and messy.
$model = new Model;
$view = new View;
$view->set('foo', $model->getFoo());
$view->display();
The Controller passes the Model to the View
What if the View needs data from multiple Models?
$model = new Model;
$view = new View($model);
$view->display(); //View takes what is needed from the Model
The Controller passes the View to the Model
$view = new View;
$model = new Model($view);
$view->display(); //Model has told the View what is needed
Which of these is the "best" way to go about things? If none, what is?
A: The Controller retrives data from the Model and passes it to the View
As you said it's verbose and messy. But that's the most appropriate solution with the philosophy of MVC.
The Controller passes the Model to the View
Seems valid too. However, it requires the view to call model methods, which is not really in the spirit of MVC. Your view should only render the data that is provided to it, without caring about the context.
The Controller passes the View to the Model
Forget that one. Here it is messy.
A: The answer is self evident if you consider that the 'model' is the central artifact (potentially used across applications), and that a 'view' may (or may not) be aware of the specific model but it is (by definition) a 'view' of a (potentially abstract) model and again, potentially usable across applications. The 'controller' is managing interactions and is the most application specific element of the pattern, so it definitively needs to know about model and view details.
If the view is specific to a given model, you can use option 2.
If the view is for an abstract model (and you can use it to display info from a set of models), you use option 1.
Option 3 is simply wrong.
A: The answer to the original question is:
*
*The Controller retrieves data from the Model and passes it to the View
MVC is actually very neat and clean. Remember what it is addressing:
*
*Code reuse (Models do not rely on controllers or views. Views do not rely on controllers or models. Controllers are app specific.)
*Separation of Logic (For instance, changing an authentication backend from MySQL to LDAP requires 0 change to a view. Changing a view's layout requires 0 change to the model. Changing the database table structure requires 0 change to the controller or view.)
Now IF you want your forms to be automatically generated from a table structure - the views are now tied to the table (tightly coupled). A change in the table require a change in the view (albeit potentially automatically). This may take less code - but the view is no longer dependable from a code-reuse stand point.
Similarly your views (in MVC) should be nothing more than templates. There should be no logic - just variables. All the "logic", aka business rules, reside in the controller. The models know how to get data and keep it normalized. The views know how to display data. The controller knows when to use the data and which views to apply the data to.
MVC is a strict 3-tier architecture. A two tiered architecture is valid for some applications. For quick mashups and "getting crap done" a one tied architecture can be appropriate (but you don't get style points).
Hope this helps.
A: IMHO, option 2 (the Controller passes the model to the view) best maintains the proper decoupling and separation of concerns. If the view needs multiple models, the model passed in should be a composite data type that contains each model needed by the view. "Each model needed by the view" is usually different from your entity model in that it is flattened and streamlined for display, often called a ViewModel.
Option 1 (the Controller retrives data from the Model and passes it to the View) is quite similar to option 2, but I contend option 2 is preferable because it places less logic in the controller. In MVC, as much logic as possible should be in the model, leaving your controllers and views as simple as possible.
A: I tend to agree with the second one. MVC on the web can't really be implemented as it can in more stateful applications. Most web MVC implementations have you put your logic in your controllers and use the model for raw data access. I think the more correct way is to put your logic in your model. There is almost an implied 4th layer in that raw data access is done within the model, however the model is also responsible for giving that data meaning and updating the view.
The wikipedia article explains it pretty good.
| |
doc_3326
|
<div ui-layout="{flow: 'column'}">
<div ui-layout-container>Hello</div>
<div ui-layout-container>World</div>
</div>
| |
doc_3327
|
A: Drupal has RESTful Web Services, so you can consume those REST services through your Ionic app easily.
Ionic 3/Angular app --> Ionic Provider --> Drupal RESTful service
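A minimal sketch of such an Ionic provider, assuming Ionic 3 with Angular's HttpClient and a hypothetical Drupal REST endpoint exposed at /api/articles?_format=json:
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs/Observable';

@Injectable()
export class DrupalProvider {
  private baseUrl = 'https://example.com'; // hypothetical Drupal site

  constructor(private http: HttpClient) {}

  getArticles(): Observable<any> {
    // Drupal's RESTful Web Services typically return JSON when ?_format=json is requested
    return this.http.get(`${this.baseUrl}/api/articles?_format=json`);
  }
}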
| |
doc_3328
|
And I already set this on the top of my page
// Display error - if there is
error_reporting(E_ALL);
ini_set('display_errors', 1);
ini_set('MAX_EXECUTION_TIME', -1);
Any suggestion ?
A: ini_set('max_execution_time', 0).
Don't use all caps (in general, for ini settings), and use 0, not -1, when setting the value to no limit.
http://php.net/manual/en/info.configuration.php#ini.max-execution-time
| |
doc_3329
|
{"error":{"text":"Signature was
invalid","id":"INVALID_SIGNATURE","description":"Expired timestamp:
given 1303539322 and now 1303541647 has a greater difference than
threshold 300"}}
What can I do to overcome this error?
Thanks in advance.
A: You need to change your computer's (dev machine or server) time zone to the correct one.
As @Rufinus said, your server has to run on the proper time (that's the correct answer). I found this out a few minutes ago: I borrowed a machine and tried to run my Yelp app, and it started throwing the same error. I changed the time zone setting of the laptop, and now it is running again.
in windows:
left click over the clock > click "change date and time settings..." > click "change time zone"
then select the proper time zone from the options.
A: In the OAuth.php file I changed:
private static function generate_timestamp() {
return time();
}
to
private static function generate_timestamp() {
return time() + 10000;
}
| |
doc_3330
|
The activity looks like this:
here are my files:
activity_main.xml
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:fitsSystemWindows="false"
android:orientation="vertical">
<androidx.fragment.app.FragmentContainerView
android:id="@+id/myNavHostFragment"
android:name="androidx.navigation.fragment.NavHostFragment"
android:layout_width="match_parent"
android:layout_height="match_parent"
app:defaultNavHost="true"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent"
app:navGraph="@navigation/navigation" />
</androidx.constraintlayout.widget.ConstraintLayout>
MainActivity.kt
class MainActivity : AppCompatActivity() {
private val baseAppState by inject<BaseAppStateManager>()
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
val navHostFragment = supportFragmentManager.findFragmentById(R.id.myNavHostFragment) as NavHostFragment
val navController = navHostFragment.navController
NavigationUI.setupActionBarWithNavController(this, navController)
}
override fun onSupportNavigateUp(): Boolean {
val navController = findNavController(R.id.myNavHostFragment)
return navController.navigateUp()
}
}
fragment_devices.xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:fitsSystemWindows="false"
android:orientation="vertical">
<include
android:id="@+id/dashboard_status"
layout="@layout/dashboard_phone_status"
android:layout_width="match_parent"
android:layout_height="160dp" />
<androidx.recyclerview.widget.RecyclerView
android:id="@+id/devices_recyclerview"
android:layout_width="match_parent"
android:layout_height="match_parent" />
</LinearLayout>
DevicesFragment.kt
class DevicesFragment : Fragment() {
...
override fun onCreateView(inflater: LayoutInflater, container: ViewGroup?, sis: Bundle?): View {
binding = FragmentDevicesBinding.inflate(inflater, container, false)
phoneStatusScreen = PhoneStatusScreenManager(binding.dashboardStatus)
return binding.root
}
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
super.onViewCreated(view, savedInstanceState)
setHasOptionsMenu(true)
observeLiveData()
...viewmodel calls...
historyAdapter = DevicesHistoryAdapter(onDeviceClicked = ::onDeviceClicked)
setupRecyclerView(historyAdapter)
}
Sometimes the toolbar looks okay when I open the app but as soon as I navigate to another fragment by clicking on an element from the RecyclerView, when I click on the back arrow to navigate up, the toolbar is like you see in the screenshot below.
Also, here is the application theme:
<resources>
<!-- Base application theme. -->
<style name="AppTheme" parent="Theme.AppCompat.Light.DarkActionBar">
<!-- Theme customization. -->
<item name="colorPrimary">@color/colorPrimary</item>
<item name="colorPrimaryDark">@color/colorPrimaryDark</item>
<item name="colorAccent">@color/colorAccent</item>
</style>
</resources>
| |
doc_3331
|
A: You are receiving Missing Authentication Token because you do not have valid signed cookies or query string parameters.
If you are using postman, make sure you enable cookies for the given site.
More about Signed Cookies:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-cookies.html#private-content-check-expiration-cookie
Signed Parameters:
https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html
If you receive Invalid Key Pair, either your signed URL has expired or you are not using the right key to generate those authenticated values.
EDIT1:
It is also a generic error from CloudFront if the content is not found.
https://docs.aws.amazon.com/apigateway/latest/developerguide/customize-gateway-responses.html
Hope it helps.
| |
doc_3332
|
public IEnumerable<TEntity> Find(Expression<Func<TEntity, bool>> predicate)
{
return _objectSet.Where<TEntity>(predicate);
}
This works okay if you're just working with one object set, but say you wanted to select all the comments made by a user that are longer than 128 characters and where the user is active. How would you create a specification when two or more object sets are used?
Example:-
class User
{
public string Name { get; set; }
public bool Active { get; set; }
public virtual ICollection<Post> Posts { get; set; }
public User()
{
Posts = new List<Post>();
}
}
class Post
{
public string Text { get; set; }
public DateTime Created { get; set; }
public virtual ICollection<Comment> Comments { get; set; }
public Post()
{
Comments = new List<Comment>();
}
}
class Comment
{
public string Text { get; set; }
public DateTime Created { get; set; }
}
To do this in LINQ:
var results = from u in users
from p in u.Posts
from c in p.Comments
where u.Active && c.Text.Length > 128
select c;
How would you then convert that to a specification class? Maybe I am just not seeing something as it seems like a reasonable thing to do :)
EDIT
The specification interface:
public interface ISpecification<TEntity>
{
bool IsSatisfiedBy(TEntity entity);
}
A: First of all, your current setup doesn't allow such a query because User and Comment are not related. You can only select all comments related to posts related to a user, but you don't know who posted the comments.
Just add a relation between User and Comment and you can simply use:
var results = from c in context.Comments
where c.User.Active && c.Text.Length > 128
select c;
This will be easily possible in your Specification pattern. Anyway, if you want to build complex conditions from Comment in your Find method, you must expose navigation properties to allow that.
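Tying that back to the ISpecification interface from the question, a sketch of such a specification, assuming a User navigation property has been added to Comment:
public class LongCommentByActiveUserSpecification : ISpecification<Comment>
{
    public bool IsSatisfiedBy(Comment entity)
    {
        return entity.User != null
            && entity.User.Active
            && entity.Text.Length > 128;
    }
}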
A: Funny I was just reading about OCP (Open Closed Principle) and the Specification pattern, and I was wondering whether it's actually worth implementing the Specification pattern in my project. I'm just worried I might end up with a huge pile of specifications due to the fact that I have several entities and I query by several criteria.
Anyway, here's one (actually two) of my favorite blog posts about the patterns you're using (which I'm using as well):
Entity Framework 4 POCO, Repository and Specification Pattern
Specification Pattern In Entity Framework 4 Revisited
| |
doc_3333
|
index blade:
@extends('layouts.frontLayout.front_design')
@section('content')
<!--html here-->
@endsection
controller:
<?php
namespace App\Http\Controllers;
use App\Index;
use Illuminate\Http\Request;
class IndexController extends Controller
{
public function index()
{
return view('index');
}
route:
Route::resource('/','IndexController');
A: This problem occurs because you send the user to a view named 'index', which means Laravel looks for a blade file directly in your resources' views folder; but as I see in your structure, index.blade.php lives at layouts.index. You can therefore refactor your controller:
public function index()
{
return view('layouts.index');
}
A: wrong
Route::resource('/','IndexController');
right
Route::get('/', 'IndexController@index');
| |
doc_3334
|
I have encoded categorical values but when I want to scale the features I'm getting this error:
"Cannot center sparse matrices: pass `with_mean=False` "
ValueError: Cannot center sparse matrices: pass `with_mean=False` instead. See docstring for motivation and alternatives.
I'm getting that error in this line:
features = scaler.fit_transform(features)
What am I doing wrong?
This is my code:
features = df[['InvoiceNo', 'StockCode', 'Description', 'Quantity',
'UnitPrice', 'CustomerID', 'Country', 'Total Price']]
columns_for_scaling = ['InvoiceNo', 'StockCode', 'Description', 'Quantity', 'UnitPrice', 'CustomerID', 'Country', 'Total Price']
transformerVectoriser = ColumnTransformer(transformers=[('Encoding Invoice number', OneHotEncoder(handle_unknown = "ignore"), ['InvoiceNo']),
('Encoding StockCode', OneHotEncoder(handle_unknown = "ignore"), ['StockCode']),
('Encoding Description', OneHotEncoder(handle_unknown = "ignore"), ['Description']),
('Encoding Country', OneHotEncoder(handle_unknown = "ignore"), ['Country'])],
remainder='passthrough') # Default is to drop untransformed columns
features = transformerVectoriser.fit_transform(features)
print(features.shape)
scaler = StandardScaler()
features = scaler.fit_transform(features)
sum_of_squared_distances = []
for k in range(1,16):
kmeans = KMeans(n_clusters=k)
kmeans = kmeans.fit(features)
sum_of_squared_distances.append(features.inertia_)
Shape of my data before preprocessing: (401604, 8)
Shape of my data after preprocessing: (401604, 29800)
A: If you set sparse=False when instantiating the OneHotEncoder then the StandardScaler() will work as expected.
import pandas as pd
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.cluster import KMeans
# define the feature matrix
features = pd.DataFrame({
'InvoiceNo': np.random.randint(1, 100, 100),
'StockCode': np.random.randint(100, 200, 100),
'Description': np.random.choice(['a', 'b', 'c', 'd'], 100),
'Quantity': np.random.randint(1, 1000, 100),
'UnitPrice': np.random.randint(5, 10, 100),
'CustomerID': np.random.choice(['1', '2', '3', '4'], 100),
'Country': np.random.choice(['A', 'B', 'C', 'D'], 100),
'Total Price': np.random.randint(100, 1000, 100),
})
# encode the features (set "sparse=False")
transformerVectoriser = ColumnTransformer(
transformers=[
('Encoding Invoice number', OneHotEncoder(sparse=False, handle_unknown='ignore'), ['InvoiceNo']),
('Encoding StockCode', OneHotEncoder(sparse=False, handle_unknown='ignore'), ['StockCode']),
('Encoding Description', OneHotEncoder(sparse=False, handle_unknown='ignore'), ['Description']),
('Encoding Country', OneHotEncoder(sparse=False, handle_unknown='ignore'), ['Country'])
],
remainder='passthrough'
)
features = transformerVectoriser.fit_transform(features)
# scale the features
scaler = StandardScaler()
features = scaler.fit_transform(features)
# run the cluster analysis
sum_of_squared_distances = []
for k in range(1, 16):
kmeans = KMeans(n_clusters=k)
kmeans = kmeans.fit(features)
sum_of_squared_distances.append(kmeans.inertia_)
Alternatively, you can use features = features.toarray() to convert the sparse matrix to an array.
import pandas as pd
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.cluster import KMeans
# define the feature matrix
features = pd.DataFrame({
'InvoiceNo': np.random.randint(1, 100, 100),
'StockCode': np.random.randint(100, 200, 100),
'Description': np.random.choice(['a', 'b', 'c', 'd'], 100),
'Quantity': np.random.randint(1, 1000, 100),
'UnitPrice': np.random.randint(5, 10, 100),
'CustomerID': np.random.choice(['1', '2', '3', '4'], 100),
'Country': np.random.choice(['A', 'B', 'C', 'D'], 100),
'Total Price': np.random.randint(100, 1000, 100),
})
# encode the features
transformerVectoriser = ColumnTransformer(
transformers=[
('Encoding Invoice number', OneHotEncoder(handle_unknown='ignore'), ['InvoiceNo']),
('Encoding StockCode', OneHotEncoder(handle_unknown='ignore'), ['StockCode']),
('Encoding Description', OneHotEncoder(handle_unknown='ignore'), ['Description']),
('Encoding Country', OneHotEncoder(handle_unknown='ignore'), ['Country'])
],
remainder='passthrough'
)
features = transformerVectoriser.fit_transform(features)
features = features.toarray() # convert sparse matrix to array
# scale the features
scaler = StandardScaler()
features = scaler.fit_transform(features)
# run the cluster analysis
sum_of_squared_distances = []
for k in range(1, 16):
kmeans = KMeans(n_clusters=k)
kmeans = kmeans.fit(features)
sum_of_squared_distances.append(kmeans.inertia_)
| |
doc_3335
|
The code runs fine and the table is generated in the console; however, I cannot figure out how to save just the table as a PDF. Do I need to link R and LaTeX with a folder on my computer to save the file?
library(xtable)
options(xtable.floating = FALSE)
options(xtable.timestamp = "")
data(tli)
table<- xtable(tli[1:10, ])
print(table)
And I get this output
> library(xtable).
> options(xtable.floating = FALSE)
> options(xtable.timestamp = "")
>
> data(tli).
> table<- xtable(tli[1:10, ])
>
> print(table)
% latex table generated in R 3.5.0 by xtable 1.8-2 package
%
\begin{tabular}{rrlllr}
\hline
& grade & sex & disadvg & ethnicty & tlimth \\
\hline
1 & 6 & M & YES & HISPANIC & 43 \\
2 & 7 & M & NO & BLACK & 88 \\
3 & 5 & F & YES & HISPANIC & 34 \\
4 & 3 & M & YES & HISPANIC & 65 \\
5 & 8 & M & YES & WHITE & 75 \\
6 & 5 & M & NO & BLACK & 74 \\
7 & 8 & F & YES & HISPANIC & 72 \\
8 & 4 & M & YES & BLACK & 79 \\
9 & 6 & M & NO & WHITE & 88 \\
10 & 7 & M & YES & HISPANIC & 87 \\
\hline
\end{tabular}
What am I supposed to do next to save this table as a PDF?
Thank you for your time.
A: To generate a PDF, make sure you have a LaTeX compiler installed (e.g. MikTeX). I would then install a LaTeX editor. There are plenty of editors around (almost all of them free, too). For example, I use TeXstudio.
In R, modify your code to:
print(table,file="mytable.tex")
This will create a TeX document. Now, in your LaTeX editor, you can create a new TeX document, such as:
\documentclass{article}
%This is the preamble
\begin{document}
%Put input here
\end{document}
In LaTeX, you have a preamble where you can put any packages you may need. It seems with this output that you don't need any extra packages, but you may in the future. Put a % in front of anything that's a comment, similar to R's use of #.
Where it says %Put input here, you can replace that with \input{mytable.tex}. Just make sure that your new TeX document and the table TeX document are in the same folder/directory.
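For example, the complete wrapper document could be as small as this (assuming mytable.tex sits in the same directory):
\documentclass{article}
%This is the preamble
\begin{document}
\input{mytable.tex}
\end{document}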
When you compile your TeX document, you'll produce a PDF document with the table. As you learn LaTeX (and will probably have questions on how to modify your table), you can find helpful answers at tex.stackexchange.com.
| |
doc_3336
|
Is there a way to change the float *pointer type that is used in the VS C++ project to some other type, so that it will still behave as a floating type but with less range?
I know that the floating-point values never exceed some fixed value in that project, so I want to optimize the amount of memory the program uses. It doesn't need 4 bytes for each element of the 'float *pointer'; 2 bytes will be enough, I think. If I change a float to a short and imitate the floating-point behaviour, then it will use half the memory. How do I do it?
EDIT:
It calculates the probabilities. So there are divisions like
A / B
Where A < B,
And also B (and A) can be from 1 to 10 000.
A: Maybe use fixed-point math? It all depends on the value range and precision you want to achieve.
http://www.eetimes.com/discussion/other/4024639/Fixed-point-math-in-C
For C there is a lot of code that makes fixed-point easy and I'm pretty sure there are also many C++ classes that make it even easier, but I don't know of any, I'm more into C.
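For illustration only (this is my sketch, not code from the linked article): if the stored values really are probabilities A/B in [0, 1], a 16-bit fixed-point representation could look like this:
#include <cstdint>

// Store a probability in [0, 1] as a 16-bit integer scaled by 65535 (2 bytes per element).
typedef uint16_t prob16;

inline prob16 to_prob16(float p)     { return (prob16)(p * 65535.0f + 0.5f); }
inline float   from_prob16(prob16 v) { return v / 65535.0f; }

// Usage:
//   prob16 p = to_prob16((float)A / B);  // pack when storing
//   float  f = from_prob16(p);           // unpack when computing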
A: There is a standard 16-bit floating point format described in IEEE 754-2008 called "binary16". It is specified as a format to store floating point values with reduced precisions. There is almost no compiler support for that yet (I think GCC supports it for certain ARM platforms), but it is quite easy to roll your own routines. This fellow:
http://blog.fpmurphy.com/2008/12/half-precision-floating-point-format_14.html
wrote a bit about it and also presents a routine to convert half-float <-> float.
Also, here seems to be a half-float C++ wrapper class:
half.h:
http://www.koders.com/cpp/fidABD00D95DE84C73BF0218AC621E400E07AA77B53.aspx
half.cpp
http://www.koders.com/cpp/fidF0DD0510FAAED03817A956D251787609BEB5989E.aspx
which supplies "HalfFloat" as a possible drop-in replacement type.
A: The first, obvious, memory optimization would be to try and get rid of the pointer. If you can store just the float, that may, depending on the larger context, reduce your memory consumption from eight to four bytes already. (On a 64-Bit system, from twelve to four.)
Whether you can get by with a short depends on what your program does with the values. You may be able to use fixed-point arithmetic with an integral type such as a short, yes, but your question shows way too little context to judge that.
A: The code you posted and the text in the question do not deal with actual float, but with pointers to float. In all architectures I know of, the size of a pointer is the same regardless of the pointed type, so there would be no improvement in changing that to a short or char pointer.
Now, about the actual pointed elements, what is the range that you expect in your application? What is the precision you need? How many of those elements do you have? What are the memory constraints of your target platform? Unless the range and precision are small and the number of elements huge, just use floats. Also note that if you need floating point operations, storing any other type will require conversions before and after each operation, and you might be impacting performance.
Without greater knowledge of what you are doing: the range of a short on many architectures is [-32k, 32k), where k stands for 1024. If your data range is [-32, 32) and you can do with roughly 3 decimal digits, you could use fixed-point arithmetic with shorts, but there are few such situations.
| |
doc_3337
|
Is there a way to ignore just the first comparison? Where I output that there is no previous number(see desired output)
Important is that I can not change my numbers.txt file. These I get automatically generated from another function.
$ cat numbers.txt
1
2
3
4
5
code:
with open('numbers.txt') as file:
lines = file.read().splitlines()
print lines
for i in range(len(lines)):
previous_number = lines[i-1]
current_number = lines[i]
print "current Nr: ", current_number
print "previous Nr: ", previous_number
if current_number > previous_number:
print " current Nr is larger"
else:
print "current Nr is smaller"
output:
['1', '2', '3', '4', '5']
current Nr: 1
previous Nr: 5
current Nr is smaller
current Nr: 2
previous Nr: 1
current Nr is larger
current Nr: 3
previous Nr: 2
current Nr is larger
current Nr: 4
previous Nr: 3
current Nr is larger
current Nr: 5
previous Nr: 4
current Nr is larger
desired output
['1', '2', '3', '4', '5']
current Nr: 1
previous Nr: There is no previous!
current Nr is none
current Nr: 2
previous Nr: 1
current Nr is larger
current Nr: 3
previous Nr: 2
current Nr is larger
current Nr: 4
previous Nr: 3
current Nr is larger
current Nr: 5
previous Nr: 4
current Nr is larger
A: You can use enumerate and check the index:
for i, current_number in enumerate(lines):
    previous_number = "None"
    CurrentNrText = "None"
    if i != 0:
        previous_number = lines[i-1]
        if current_number > previous_number:
            CurrentNrText = "current Nr is larger"
        else:
            CurrentNrText = "current Nr is smaller"
    print("current Nr: ", current_number)
    print("previous Nr: ", previous_number)
    print(CurrentNrText)
A: If you want to start from the second number, then explicitly start from the second number:
for i in range(1, len(lines)):
Or, even better, use the more idiomatic enumerate:
for i, number in enumerate(lines[1:], 1):
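For example, a complete loop built on that idea, which also handles the first element, could look like this (a sketch that reproduces the desired output for the sample data):
print(lines)
print("current Nr: ", lines[0])
print("previous Nr: There is no previous!")
print("current Nr is none")
for i, current_number in enumerate(lines[1:], 1):
    previous_number = lines[i - 1]
    print("current Nr: ", current_number)
    print("previous Nr: ", previous_number)
    if current_number > previous_number:
        print("current Nr is larger")
    else:
        print("current Nr is smaller")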
A: You can give this method a try:
with open('numbers.txt') as file:
numbers=[None,]
for line in file:
numbers.append(line)
for idx,no in enumerate(numbers,1):
try:
if numbers[idx]>numbers[idx-1]:
print('Current no is {}'.format(numbers[idx]))
print('Previous no is {}'.format(numbers[idx-1]))
print ("current Nr is larger")
else:
print ("current Nr is smaller")
except TypeError:
print('Current no is {}'.format(numbers[idx]))
print('There is no previous!')
except IndexError:
pass
output:
Current no is 1
There is no previous!
Current no is 2
Previous no is 1
current Nr is larger
Current no is 3
Previous no is 2
current Nr is larger
Current no is 4
Previous no is 3
current Nr is larger
Current no is 5
Previous no is 4
current Nr is larger
| |
doc_3338
|
import numpy as np
incomp_arr = np.array([1., 2., 3., 4., 6., 0.])
I want to insert the average between every two values, to get:
comp_arr = np.array([1., 1.5, 2., 2.5, 3., 3.5, 4., 5., 6., 3., 0.])
at the moment I can only make the average array from incomplete_arr using:
avg_arr = ((incomp_arr + np.roll(incomp_arr,1))/2.0)[1:]
I very much appreciate any help to do so.
A: If you change how you compute avg_arr you can do:
avg_arr = ((incomp_arr + np.roll(incomp_arr, -1))/2.0)
# array([1.5, 2.5, 3.5, 5. , 3. , 0.5])
np.vstack([incomp_arr, avg_arr]).flatten('F')[:-1]
# array([1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 5. , 6. , 3. , 0. ])
| |
doc_3339
| ||
doc_3340
|
Usually I would take 011101, invert it to get 100010, and add 1 to get
100011.
The value of this is 35. How then is the answer 29?
A: 011101 is 29 //Binary to Decimal
100011 + 011101 = 000000 //100011 is inverse+1
100011 = -011101
100011 = -29
There is no '35' because in a two's complement system any number starting with a '1' is a negative number. This means, assuming 6 bits, that any number greater than 31 (011111) is in fact a negative number.
A: The term "two's complement" is ambiguous.
*
*011101 is the two's complement representation of the decimal number 29.
*Performing the two's complement operation on 011101 results in 100011 (decimal -29, since two's complement notation uses the most-significant-bit as the sign bit).
A: 100011 is correct, and its decimal equivalent is indeed 35. How do you know it should be 29?
A: The answer is 35, not 29.
invert(0b011101) + 0b1 = 0b100010 + 0b1 = 0b100011 = 35
The question was for the two's complement of 29:
0b011101 = 29
A: The "inverse +1" procedure is for encoding negative numbers.
You know this number is positive because the high bit is zero, so:
1*2^4 + 1*2^3 + 1*2^2 + 0*2^1 + 1*2^0 = 16 + 8 + 4 + 0 + 1 = 29
| |
doc_3341
|
{
"100": 0,
"100T": 0
}
How do I decode it to an associate array such that both the keys "100" and "100T" remain as strings. When I use json_decode the "100" is converted to an integer.
Example code:
$json = '{"100":0, "100T":0}';
$array = json_decode($json, true);
var_dump($array);
Gives this output:
array(2) {
[100] =>
int(0)
'100T' =>
int(0)
}
But I want this output instead:
array(2) {
'100' =>
int(0)
'100T' =>
int(0)
}
I'm using PHP version 7.0.25 on Ubuntu 16.04
| |
doc_3342
|
<div class="subsection">
<v-data-table
:headers="prescriptionHeaders"
:items="pendingItems"
show-expand
item-key="id"
>
<template v-slot:expanded-item="{headers,item}">
<td :colspan="headers.length">
<v-data-table
:headers="pendingPrestationHeaders"
:items="item.prestations"
v-model="selected"
>
<template v-slot:[`item.actions`]="{ item }">
<div class="table-row-actions">
<v-tooltip left v-if="item.categeoryTypeId === 6">
<template v-slot:activator="{ on, attrs }">
<v-icon
v-bind="attrs"
v-on="on"
@click="func1(item)"
class="action-doc"
>
mdi-file-document-outline
</v-icon>
</template>
<span>blablabla</span>
</v-tooltip>
</div>
</template>
</v-data-table>
</td>
</template>
</v-data-table>
</div>
The problem is that I need to call func1 with a property of the item from the outer v-data-table. How can I access it from within my <template v-slot:[`item.actions`]> template? I know I could include a reference to the parent item in my child item, or just duplicate the data I need from the parent into the child (that's what I'm currently doing), but I was curious whether there is a way to refer to the "outer" item in the template slot, but I guess not.
A: To access the item of the outer v-data-table, you need to change the inner data table with props.item
Something like this should work.
<template v-slot:[`props.item.actions`]="{ props.item }">
<div class="table-row-actions">
<v-tooltip left v-if="props.item.categeoryTypeId === 6">
<template v-slot:activator="{ on, attrs }">
<v-icon
v-bind="attrs"
v-on="on"
@click="func1(item)"
class="action-doc"
>
mdi-file-document-outline
</v-icon>
</template>
<span>blablabla</span>
</v-tooltip>
</div>
</template>
A: I have exactly the same challenge. I want to access the item ID of the outer v-data-table from the inner one. The paragraph shows the data correctly. How can I access this value from the inner data-table?
<template v-slot:expanded-item="{ item }">
<td :colspan="attachmentHeaders.length">
<p>{{item.finDocId}}</p>
<v-data-table
:headers="attachmentHeaders"
:items="item.attachmentPlainDtos"
item-key="finDocId"
disable-pagination
:hide-default-footer="true"
no-data-text='Geen bijlagen'
>
<template v-slot:[`item.attachmentActions`]="{ item }">
<v-icon large @click="removeAttachment(item.id, item.attachmentId)">
mdi-delete
</v-icon>
</template>
</v-data-table>
</td>
</template>
| |
doc_3343
|
I want to check if TEST symbol exists, and only then, do some things.
So I did what you see in the picture below and in the class it works. However this does not work in the views.
The text in this block is gray even if TEST is defined!
How can I cause it work if TEST is defined?
A: The symbol you set is only used during compilation. It does not exist otherwise. So, your web project's DLL does not have that symbol at all. Therefore, when the View is compiled. the symbol isn't there, and it won't work as you are expecting.
A: Rather than specify the compiler flag in web.config as per the accepted answer (which also requires specifying the compiler version in web.config, which is a non-standard location) I went with the following:
Add a method to a base class shared by my models
public bool IsDebugBuild
{
get
{
#if DEBUG
return true;
#else
return false;
#endif
}
}
Use that method in my views
if (mm.IsDebugBuild) {
<div class="debug">
// Do Stuff
</div>
}
A: The problem is related to the fact that views are only compiled when you run your application so the TEST symbol that you defined is no longer applied by the compiler because it has no knowledge of it.
Assuming that you are using C# you need to configure the compiler to use the TEST symbol when building the views and for this you need to override its configuration in Web.config using the following:
<system.codedom>
<compilers>
<compiler
language="c#;cs;csharp"
extension=".cs"
type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.3600.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
compilerOptions="/define:TEST"
warningLevel="1" />
</compilers>
</system.codedom>
The important part is that you define compilerOptions="/define:TEST". The rest of the configuration you need to adapt to your specific needs, for example switch between .NET 2.0 or .NET 4.0.
If you apply this directly in the Web.config it will work but will define TEST every time. So what you should really do is use Web.config transformations so that the symbol is only applied for the correct build configurations.
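For illustration (a sketch, not tested here), a Web.Test.config transform that adds the symbol only for a 'Test' build configuration might look like the following; note that config transforms are normally applied on publish rather than when debugging locally:
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.codedom>
    <compilers>
      <compiler language="c#;cs;csharp"
                compilerOptions="/define:TEST"
                xdt:Locator="Match(language)"
                xdt:Transform="SetAttributes(compilerOptions)" />
    </compilers>
  </system.codedom>
</configuration>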
A: I don't think it's possible to use conditional symbols in a view, as Andrew Barber has already said.
But you could use conditional symbols in the model:
public class ViewModel
{
//...whatever else you need to define
private bool test;
public bool Test
{
get
{
return test;
}
}
public ViewModel()
{
#if (TEST)
test = true;
#endif
}
}
And then check the value in the view:
@{
if (Model.Test)
{
<p>debug statements here</p>
}
}
| |
doc_3344
|
Account_ID (integer)
Product_ID (integer)
Other columns are not material. This lists products bought by accounts. I want to create an output with three columns like so:
Account_ID_1 | Account_ID_2 | Count(distinct product_ID)
The result should contain every combination of Account_ID values together with the distinct count of Product_IDs that the two accounts have in common.
I'm using Google BigQuery. Is there an SQL method for doing this, or should I plan to code it in a full programming language?
A: Here I calculate how many products both accounts have in common.
SELECT
T1.Account_ID as Account_ID_1,
T2.Account_ID as Account_ID_2,
COUNT(distinct T1.product_id)
From YourTable as T1
JOIN YourTable as T2
ON T1.Account_ID < T2.Account_ID
AND T1.product_ID = T2.product_ID
GROUP BY
T1.Account_ID,
T2.Account_ID
A: this works for me:
select
t1.Account_ID, T2.Account_ID, count(t1.Product_ID) count_product_id
from
MYTABLE t1 join MYTABLE t2 on t1.Product_ID = t2.Product_ID
where t1.Account_ID <> t2.Account_ID
group by t1.Account_ID, t2.Account_ID
order by 1,2
A: The BigQuery version:
(JOINs only on equality, while keeping the < in a WHERE clause)
SELECT a.corpus, b.corpus, EXACT_COUNT_DISTINCT(a.word) c
FROM
(SELECT corpus, word FROM [publicdata:samples.shakespeare]) a
JOIN
(SELECT corpus, word FROM [publicdata:samples.shakespeare]) b
ON a.word=b.word
WHERE a.corpus>b.corpus
GROUP BY 1, 2
ORDER BY 3 DESC
| |
doc_3345
|
Example:
*
*unique id
*date 1 and 2
*some more numbers
I wrote a very simple script that runs in the console and enters the data just fine.
The Problem
My script stops execution whenever it requires the page to reload or it loads another page. I cannot find any information on how to continue executing a script after a page has loaded.
My Limitations
I'm basically limited to what's on FireFox, Chrome, or Edge. Unfortunately, I cannot download any programs or tools that would make the automation any easier right now. Otherwise, I would just use Selenium and Python.
What I've Tried
First I tried to use the script that I describe above (simple DOM manipulation)
Then I tried to use the Selenium browser add-on, but I had to enter a starting URL for it to run. Selenium was not able to get past the login page of our system which is the only static URL that I can use as a starting point.
I then tried to use the Firefox Browser Console (different from the dev console) because the documentation seemed to suggest that I can use JavaScript on the entire browser (not just one tab). Unfortunately, I cannot find any helpful information on how to use the browser console for DOM manipulation. Everything that I search for points to how you create a browser extension, add-on, or how to use JavaScript on your own website.
What I Want To Do
I want to create a script that runs in a dev console. The script should take all of the data either from a separate page or an array then enter the data on each page for each person. I'll also have it prompt the user to verify the data before submission.
What I'm Looking For
What I'm hoping to get from this question is at least one of three things.
*
*An answer to the question's title.
*Being directed to documentation or some other solution that can solve any of the above problems.
*Being told if this is impossible and why by those who have more experience than me (I don't understand if the problem is just a lack of knowledge or limitations on the tools themselves.)
A: I think you can create a Chrome extension and put your code in the background service worker, or use workers; read this link
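For example, a minimal sketch of such an extension (manifest V3; the match URL and file name are placeholders) would register your existing DOM script as a content script, which the browser re-runs automatically on every matching page load, so it survives navigation:
manifest.json:
{
  "manifest_version": 3,
  "name": "Form filler",
  "version": "1.0",
  "content_scripts": [
    {
      "matches": ["https://your-internal-site.example/*"],
      "js": ["filler.js"],
      "run_at": "document_idle"
    }
  ]
}
filler.js would contain the DOM-manipulation script you already run in the dev console; because it is injected again after every reload or navigation, the script no longer stops when the page changes.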
| |
doc_3346
|
I basically want to use the mongodb version of the sql "like" '%m%' operator
but in my situation i'm using the java api for mongodb, while the other post is using mongodb shell
i tried what was posted in the other thread and it worked fine
db.users.find({"name": /m/})
but in java, i'm using the put method on the BasicDBObject and passing it into the find() method on a DBCollections object
BasicDBObject q = new BasicDBObject();
q.put("name", "/"+m+"/");
dbc.find(q);
but this doesn't seem to be working.
anyone has any ideas?
A: You need to pass an instance of a Java RegEx (java.util.regex.Pattern):
BasicDBObject q = new BasicDBObject();
q.put("name", java.util.regex.Pattern.compile(m));
dbc.find(q);
This will be converted to a MongoDB regex when sent to the server, as well as any RegEx flags.
A: You must first quote your text and then use the compile to get a regex expression:
q.put("name", Pattern.compile(Pattern.quote(m)));
Without using java.util.Pattern.quote() some characters are not escaped.
e.g. using ? as the m parameter will throw an exception.
A: To make it case insensitive:
Document doc = new Document("name", Pattern.compile(keyword, Pattern.CASE_INSENSITIVE));
collection.find(doc);
A: In spring data mongodb, this can be done as:
Query query = new Query();
query.limit(10);
query.addCriteria(Criteria.where("tagName").regex(tagName));
mongoOperation.find(query, Tags.class);
A: Document doc = new Document("name", Pattern.compile(keyword));
collection.find(doc);
A: This might not be the actual answer (it executes the shell query directly via eval):
public void displayDetails() {
try {
// DB db = roleDao.returnDB();
MongoClient mongoClient = new MongoClient("localhost", 5000);
DB db = mongoClient.getDB("test");
db.eval("db.test.update({'id':{'$not':{'$in':[/su/]}}},{$set:{'test':'test3'}},true,true)", new Object[] {});
System.out.println("inserted ");
} catch (Exception e) {
System.out.println(e);
}
}
A: if(searchType.equals("employeeId"))
{
query.addCriteria(Criteria.where(searchType).regex(java.util.regex.Pattern.compile(searchValue)));
employees = mongoOperations.find(query, Employee.class, "OfficialInformation");
}
| |
doc_3347
|
@set sourcepath=\\foo\bar\p_*
I prepended my project folders with 'p_' so I could set my source variable to /foo/bar/p_*. This way I would pick up each folder and all of its files without grabbing all of the other 'non-project' folders. Obviously this did not work.
Is there a way to pull this off without having to build a folder list that I need to manage every time a project folder is added?
Your input is appreciated
A: You can get the list of directories with a command.
dir /A:D /B /r p_*
Then you can use a FOR loop to start robocopy for each directory.
SETLOCAL
SET SOURCEDIR=\\source\...
SET DESTINATIONDIR=\\dest\...
CD %SOURCEDIR%
FOR /F %%G IN ('dir /A:D /B /r p_*') DO robocopy "%SOURCEDIR%\%%G" "%DESTINATIONDIR%\%%G" /e
| |
doc_3348
|
But when I download the model file and put it on another machine with XGBoost version 0.4a30, the predicted result is very different. Because several models were generated with version 0.4a30, I cannot update the version.
I don't know how I can fix it. Is there some default parameter to be changed?
| |
doc_3349
|
How do i make this happen?
HTML
<span class="label label-success">Online</span>
PHP
<?php
$result = popen("pingnas.py", "r");
return $result;
?>
A: You can make an AJAX call to the php script which in turn calls the python script to check whether the user is online or offline.
Something like this.
-In your Javascript, make an AJAX call to PHP script.
-PHP in turn executes python to see if the user is online or offline.
-Send the JSON response. May be something like {"status":"online"}
-Based on the JSON response, change the HTML of the span element.
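For example, the browser side of that could look roughly like this (a sketch; the status.php endpoint name and the offline class name are assumptions):
function refreshStatus() {
    fetch('status.php')                                   // PHP runs the python check
        .then(function (response) { return response.json(); })
        .then(function (data) {                           // e.g. {"status":"online"}
            var span = document.querySelector('.label');
            if (data.status === 'online') {
                span.textContent = 'Online';
                span.className = 'label label-success';
            } else {
                span.textContent = 'Offline';
                span.className = 'label label-fail';
            }
        });
}
setInterval(refreshStatus, 5000); // poll every 5 seconds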
A: Add an ID to the span that you want to manipulate
<span class="label label-success" id="indicator">Online</span>
Return the result variable to javascript and then manipulate the text/classname with javascript. Assuming that the python script returns 1 for success and 0 for failure
<?php
$handle = popen("pingnas.py", "r"); // run the check and read its output (assumes it prints 1 or 0)
$result = trim(fgets($handle));
pclose($handle);
?>
<html>
<head>
<script type='text/javascript'>
var setStatus = function(status) {
var ind = document.getElementById('indicator');
if (status === 1) {
ind.innerHTML='Online';
ind.className='label label-success';
return;
}
ind.innerHTML='Offline';
ind.className='label label-fail';
}
window.onload = function() {
setStatus(<?php echo $result ?>);
};
</script>
...rest of the HTML
| |
doc_3350
|
In the "2 Description of the Bkd-Tree" section, author describe a single on-disk kdtree structure.
*
*Each leaf block can contain B points.
*Each inner block can contain Bi points.
*Total point count is N.
*This kd-tree stores actual points only in leaf blocks, and we have N/B total leaf blocks.
Think of the bkd-tree description as describing a loading process. The points arrive as a stream and end up on disk.
Loading converts a stream of points into the on-disk representation of a single kd-tree. This is my understanding, not necessarily correct.
Suppose first that N/B is an exact power of Bi, i.e., N/B
= Bip , for some p, and that Bi is an exact power of 2. In this case the internal nodes can easily be
stored in O(N/(BBi)) blocks in a natural way. Starting from
the kd-tree root v, we store together the nodes obtained by performing
a breadth-first search traversal starting from v, until Bi
nodes have been traversed. The rest of the tree is then blocked
recursively.
I drew this picture based on my understanding; points in each block are enclosed in dashed lines.
Starting from the root, collect a fixed number of nodes into one block. If the number of points in the left subtree
is small enough, block it as a leaf block.
If N/B is not a power of Bi, we fill the block containing
the kd-tree root with less than Bi nodes in order to be
able to block the rest of the tree as above.
Denote each inner block's subtree height as max_inner_node_height. If N/B is not a power of Bi, then the total height is not divisible by max_inner_node_height, and under-full blocks may appear at the level above the leaf blocks.
We can fill the root block as follows:
Now only the root block becomes under-full, which is better.
If N/B is not a power of 2 the kdtree is unbalanced and the above
blocking algorithm can end up under-utilizing disk blocks. To
alleviate this problem we modify the kd-tree splitting method and
split at rank power of 2 elements, instead of at the median elements.
More precisely, when constructing the two children of a node v from a
set of p points, we assign 2^floor((log2(p))) points to the left
child, and the rest to the right child. This way, only the blocks
containing the rightmost path—at most ceil(logBi(N/B)) —can be
under-full.
This is where I'm confused. If we split at the real median, there can be under-full blocks.
If we split at rank-power-of-2 elements, there can still be some under-full blocks.
I can't see why the number of under-full blocks can be bounded by ceil(logBi(N/B)), which is
the height of the block tree. And the number of under-full blocks is no less than with the real-median splitting method. So how is space utilization improved by splitting at rank power of 2?
| |
doc_3351
|
class User(models.Model):
username = models.CharField(max_length=100, unique=True)
companies = models.ManyToManyField('Company', blank=True)
class Company(models.Model):
name = models.CharField(max_length=255)
According to the Django documentation:
"It doesn't matter which model has the ManyToManyField, but you should only put it in one of the models -- not both.".
So I understand that if I have an instance of a User, called user, I can do:
user.companies
My question is how do I do the reverse? How do I get all users that belong to a Company instance, let's say Company:
company.users # This doesn't work!
What's the convention to do this? The documentation that I've read doesn't really cover this. I need the association to work both ways, so I can't simply move it from one model to the other.
A: company.user_set.all()
will return a QuerySet of User objects that belong to a particular company. By default you use modelname_set to reverse the relationship, but you can override this by providing a related_name as a parameter when defining the model, i.e.
class User(models.Model):
companies = models.ManyToManyField(Company, ..., related_name="users")
> company.users.all()
here is the relevant documentation
| |
doc_3352
|
I've written a simple Hello World program in C called hello.c, and ran the following command:
gcc -S hello.c
That produced hello.s. Then I used that file with GNU assembler, as:
as hello.s
Which produced a non-executable a.out, which still needs to be linked, I understand?
I try to link it by using ld, like so:
ld a.out
But get the following error:
a.out: file not recognized: File truncated
And ld deletes my file.
This is an x86 Ubuntu system. What am I doing wrong? Many thanks!
A: My first question would be: why are you assembling the code? If you want the assembler code then, by all means, use gcc -S to get it (for viewing, I guess).
But you don't need to run that through as to keep going, just use:
gcc -o hello hello.c
gcc -S hello.c
That first step will turn the C source directly into an executable, the second will give you your assembler source.
Your specific problem may be that ld tries to write its output to a.out. If that's also your input file, it may well be being destroyed in the process of running ld. You could try renaming a.out to a.in before running the ld command: ld a.in.
A: Here is how I do it:
> gcc -S forums.c
> as forums.s -o forums.o
> gcc forums.o -o forums
> ./forums
test
Why do I invoke gcc instead of ld? Because GCC takes care of linking the C runtime and doing other implementation-dependent stuff. If you want to see that, use the --verbose option:
> gcc --verbose forums.o -o forums
Using built-in specs.
Target: i686-pc-linux-gnu
Configured with: ../configure --prefix=/usr --enable-shared --enable-languages=c,c++,fortran,objc,obj-c++ --enable-threads=posix --mandir=/usr/share/man --infodir=/usr/share/info --enable-__cxa_atexit --disable-multilib --libdir=/usr/lib --libexecdir=/usr/lib --enable-clocale=gnu --disable-libstdcxx-pch --with-tune=generic
Thread model: posix
gcc version 4.4.0 (GCC)
COMPILER_PATH=/usr/lib/gcc/i686-pc-linux-gnu/4.4.0/:/usr/lib/gcc/i686-pc-linux-gnu/4.4.0/:/usr/lib/gcc/i686-pc-linux-gnu/:/usr/lib/gcc/i686-pc-linux-gnu/4.4.0/:/usr/lib/gcc/i686-pc-linux-gnu/:/usr/lib/gcc/i686-pc-linux-gnu/4.4.0/:/usr/lib/gcc/i686-pc-linux-gnu/
LIBRARY_PATH=/usr/lib/gcc/i686-pc-linux-gnu/4.4.0/:/usr/lib/gcc/i686-pc-linux-gnu/4.4.0/:/usr/lib/gcc/i686-pc-linux-gnu/4.4.0/../../../:/lib/:/usr/lib/
COLLECT_GCC_OPTIONS='-v' '-o' 'forums' '-mtune=generic'
/usr/lib/gcc/i686-pc-linux-gnu/4.4.0/collect2 --eh-frame-hdr -m elf_i386 --hash-style=both -dynamic-linker /lib/ld-linux.so.2 -o forums /usr/lib/gcc/i686-pc-linux-gnu/4.4.0/../../../crt1.o /usr/lib/gcc/i686-pc-linux-gnu/4.4.0/../../../crti.o /usr/lib/gcc/i686-pc-linux-gnu/4.4.0/crtbegin.o -L/usr/lib/gcc/i686-pc-linux-gnu/4.4.0 -L/usr/lib/gcc/i686-pc-linux-gnu/4.4.0 -L/usr/lib/gcc/i686-pc-linux-gnu/4.4.0/../../.. forums.o -lgcc --as-needed -lgcc_s --no-as-needed -lc -lgcc --as-needed -lgcc_s --no-as-needed /usr/lib/gcc/i686-pc-linux-gnu/4.4.0/crtend.o /usr/lib/gcc/i686-pc-linux-gnu/4.4.0/../../../crtn.o
A: EDIT: okay, I tried this all out on my system, and I think I know what the problem is. ld is writing to a.out (its default output file), while reading from it at the same time. Try something like this:
ld a.out -o myprog
A: Reinstall glibc-devel any way you can and check if it works. This process worked for me.
| |
doc_3353
|
here my urls
url(r'^(?P<slug>\S+)/$', QuestionDetailView.as_view(), name='detail'),
url(r'^(?P<slug>\S+)/$', QuestionUniListView.as_view(), name='uni-list'),
These slugs fetch different models. When I run it like this, only one URL works.
A: Django's URL resolver searches for a matching pattern from the top of the file. When it matches a pattern, it renders the request and stops further execution. So it's not possible to have two identical URL patterns.
You should try changing some keyword in the URL.
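For instance, giving each pattern its own prefix makes the two routes distinguishable (the prefix names here are just examples):
url(r'^question/(?P<slug>\S+)/$', QuestionDetailView.as_view(), name='detail'),
url(r'^university/(?P<slug>\S+)/$', QuestionUniListView.as_view(), name='uni-list'),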
A: Try a different slug name for each URL, as below:
url(r'^(?P<slug>\S+)/$', QuestionDetailView.as_view(), name='detail'),
url(r'^(?P<list_slug>\S+)/$', QuestionUniListView.as_view(), name='uni-list'),
and in your HTML, where you call this URL, pass it as below:
{% url 'uni-list' list_slug=your_slug %}
| |
doc_3354
|
Generic<String> foo;
Generic<Double> bar;
and i put them into a list like this:
List<Generic<?>> list;
list.add(foo);
list.add(bar);
Now I want to call a method that returns A, but instead I only get Object as the return type, and I know why: because of the ? in the generic type of the list. I also know that List<Generic<String>> is a completely different type than List<Generic<Double>>... But is there any kind of list structure or generic type argument that I can use to keep the type of the Generic class? Casting is no problem for my program, because I also save an ID for every instance in my list and can determine which type is in there, but this seems a little bit dirty...
A: Is it this what you are looking for?:
public <T> T someMethod(List<Generic<T>> list){
//Return an element from list
}
A: Well, if you want a common type, you should use Object or some class that wraps your stuff. If you use Object, then just check the object with instanceof in the method where you do your work.
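A rough sketch of that (assuming Generic<A> exposes some accessor for the stored value, here called getValue()):
for (Generic<?> generic : list) {
    Object value = generic.getValue(); // getValue() is an assumed accessor returning the A instance
    if (value instanceof String) {
        String s = (String) value;
        // handle the String case
    } else if (value instanceof Double) {
        Double d = (Double) value;
        // handle the Double case
    }
}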
A: Do you have to use the ? or can you change your code to use List<Generic<T>> to keep the type?
If you want to check the type after initialising the List with some content, you can use list.get(0).getClass() to gain some information but that's not really a nice way to program
A: You have to cast at some point :(
| |
doc_3355
|
I have spent(wasted) a lot of time trying to find a solution and just cannot make this work.
I have a Splash Activity on which I go full screen. But just before the Splash Activity appears, a screen appears (for a very short period of time) that has nothing but an Action Bar with the App Icon and Activity Title Text. I do not understand why it appears. Is this default Android behavior? I want to avoid this screen, or at least remove the Activity Title from the Action Bar on this screen. If I set it to "", I lose the app from the Recent Apps chooser.
Please refer to these images:
Pre Splash :
Splash Screen :
I have referred to the following examples on SO and also searched elsewhere. But nothing seems to work.
Remove the title text from the action bar in one single activity
Action Bar remove title and reclaim space
How do you remove the title text from the Android ActionBar?
How can I remove title and icon completetly in Actionbar sherlock?
Please check my Android Manifest and the theme file...
<application
android:name="com.zipcash.zipcashbetaversion.MyApplication"
android:allowBackup="true"
android:icon="@drawable/zipcash_icon"
android:label="@string/app_name"
android:logo="@drawable/zipcash_logo_small"
android:theme="@style/Theme.MyTheme" >
<activity
android:name="com.zipcash.zipcashbetaversion.SplashActivity"
android:label="@string/app_name"
android:screenOrientation="portrait" >
<intent-filter >
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
<activity
android:name="com.zipcash.zipcashbetaversion.SignUpActivity"
android:label="@string/title_activity_sign_up"
android:screenOrientation="portrait" >
</activity>
And so on...
The following is my theme file:
<style name="Theme.MyTheme" parent="Theme.Sherlock.Light">
<!-- Set style for Action Bar (affects tab bar too) -->
<item name="actionBarStyle">@style/Widget.MyTheme.ActionBar</item>
</style>
<style name="Widget.MyTheme.ActionBar" parent="Widget.Sherlock.ActionBar">
<!-- define background for action bar (sets default for all parts of action bar - main, stacked, split) -->
<item name="android:background">@drawable/background</item>
<item name="background">@drawable/background</item>
<!-- set background for the tab bar (stacked action bar) - it overrides the background property -->
<item name="backgroundStacked">@color/action_bar_tab_background</item>
<item name="titleTextStyle">@style/NoTitleText</item>
<item name="subtitleTextStyle">@style/NoTitleText</item>
<item name="displayOptions">showHome|useLogo</item>
</style>
<style name="NoTitleText">
<item name="android:textSize">0sp</item>
<item name="android:textColor">#00000000</item>
<item name="android:visibility">invisible</item>
</style>
I have also written this code in my Splash Activity:
ActionBar actionBar = getSupportActionBar();
actionBar.setDisplayOptions(actionBar.getDisplayOptions() ^ ActionBar.DISPLAY_SHOW_TITLE);
actionBar.hide();
ActivityHelper.initialize(this);
setContentView(R.layout.activity_splash);
Have I made a mistake that I am overlooking. Is there something else I should do. My minimum API level is 8.
Any help will be appreciated.
A: You can add this to your activity:
android:theme="@android:style/Theme.Black.NoTitleBar.Fullscreen" >
A: When you first launch your app and Application onCreate haven't finished loading you see the empty lag screen. You can set the window background to green and remove the ActionBar for Application theme, so it will look splash-like.
colors.xml
<?xml version="1.0" encoding="utf-8"?>
<resources>
<color name="splash_background">#86bf3a</color>
</resources>
styles.xml
<style android:name="SplashTheme" parent="@style/Theme.Sherlock.Light.NoActionBar">
<item name="android:windowBackground">@color/splash_background</item>
</style>
Make the application use the splash theme with the green background and no ActionBar:
<application
...
android:theme="@style/SplashTheme" >
<activity
android:name="com.zipcash.zipcashbetaversion.SplashActivity"
....>
<!-- .... -->
</activity>
<!--...
make all other Activities use MyTheme
...-->
<activity
android:name="com.zipcash.zipcashbetaversion.SignUpActivity"
...
android:theme="@style/Theme.MyTheme" />
| |
doc_3356
|
id group name
1 2 dodo
2 1 sdf
3 2 sd
4 3 dfs
5 3 fda
....
and I want to get the first record from each group, like the following:
id group name
... 1 sdf
2 dodo
3 dfs
...
A: SELECT MIN(id) id, group, name
FROM TABLE1
GROUP BY group
ORDER BY group
A: select * from table_name where id in (select min(id) from table_name group by group)
| |
doc_3357
|
kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)
I have no idea what most of these are for, can they be safely removed?
A: Many of them can be safely removed. Here's a quick rundown of what they are for:
*
*kernel32 : Process and thread management, file and device I/O, memory allocation (keep this, the C and C++ runtime libraries and compiler-generated code uses it)
*user32 : Window and menu management (keep this if using GUI, can remove for console apps) The base set of widgets (= predefined window classes, like buttons and checkboxes and scrollbars) are here.
*gdi32 : Drawing (keep this if using custom rendered graphics, can remove if just using widgets)
*comctl32 : Fancy new widgets, like trees, listviews, and progress bars
*winspool : Advanced usage of printing beyond what GDI covers. I always remove it.
*comdlg32 : Common dialogs, like Open and Save File Dialogs
*advapi32 : Registry support, user account and access control, cryptography. I usually end up needing this one, your needs may differ.
*shell32, shlwapi : Taskbar and notification tray UI and more helper functions, like predefined folders and path manipulation functions. Often useful, but many applications won't need it.
*ole32, oleaut32 : OLE is the basis for ActiveX, DCOM, etc. Many of the newer OS APIs are COM objects, so you probably need to keep this.
*uuid : Advanced OLE usage, probably not needed.
*odbc32, odbccp32 : Database access using a very old and unfriendly API. I always remove these.
Italicized libraries are not in the default list, but more useful than half the ones that are.
A: No, you can't remove them. These are the libraries that interface with Windows.
You do not need to worry about it. The .lib files are really small, and the .dlls they refer to are already present as part of your Windows installation.
| |
doc_3358
|
I get the project link https://localhost/test-project/#step_4#step_4#step_4 using window.location.href.
I have done below code for that
var url_id = window.location.href;
var url_id_value = url_id.split('#')[1];
console.log(url_id.split('#')[1]);
But I just got step_4 in the console.
I want to store the value https://localhost/test-project/ in a variable using window.location.href. Is there any other way to get that value into a variable?
A: Replace the line of code var url_id_value = url_id.split('#')[1]; with the following:
var url_id_value = url_id.substr(0, url_id.lastIndexOf("/") + 1);
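For example, with the URL from the question this gives:
var url_id = "https://localhost/test-project/#step_4#step_4#step_4";
var url_id_value = url_id.substr(0, url_id.lastIndexOf("/") + 1);
console.log(url_id_value); // "https://localhost/test-project/"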
| |
doc_3359
|
A: Yes, it is possible to open it at a specific percentage or in fit view mode. You set the document's Open action with a Goto action that targets the page you want to display. When you set the target page you also have the option to set the desired zoom or a Fit mode.
| |
doc_3360
|
I log in as an Administrator.
In the JSPUI Administrator screen I don't see the 'Login As' button at all. Under XMLUI the 'Login as E-Person' button is present, but colored grey, so I cannot click on it.
What can be the cause of this behavior?
A: Make sure to restart the server after changing the configuration, so that your changes are applied.
A: You cannot log in as another administrator -- that's probably why the button is greyed out in XMLUI. It may also be the reason for the button not appearing at all in JSPUI.
| |
doc_3361
|
SELECT A.UNITCODE, B.FORMATIONCODE, C.UPPERFORMATIONCODE, D.UPPERFORMATIONCODE
FROM UNIT AS A.UNITCODE
INNER JOIN FORMATION AS B.FORMATIONCODE
INNER JOIN UPPERFORMATION_UNIT AS C.UPPERFORMATION
INNER JOIN UPPERFORMATION AS D.UPPERFORMATIONCODE
WHERE UNITCODE='7000007'
Can you guys help me join these 4 tables on the specified columns?
A: Assuming that all 3 related tables share the same UNIT_ID for joining with the UNIT table:
SELECT
A.UNITCODE
, B.FORMATIONCODE
, C.UPPERFORMATIONCODE
, D.UPPERFORMATIONCODE
FROM UNIT AS A
INNER JOIN FORMATION AS B ON B.FORMATIONCODE = A.UNIT_ID
INNER JOIN UPPERFORMATION_UNIT AS C ON C.UPPERFORMATION = A.UNIT_ID
INNER JOIN UPPERFORMATION AS D ON D.UPPERFORMATIONCODE = A.UNIT_ID
WHERE UNITCODE='7000007'
A: It seems you are confusing table aliases and linking columns.
This is how to give a table an alias name in the query in order to enhance readability:
INNER JOIN formation AS f
where the AS is optional. Most often it is ommited.
This is how to join:
FROM unit AS u
INNER JOIN upperformation_unit AS ufu ON ufu.unitcode = u.unitcode
Well, I don't know the columns linking the tables of course, let alone their names. But I suppose the query would have to look like this more or less:
SELECT
u.unitcode,
f.formationcode,
uf.upperformationcode,
ufu.upperformationcode
FROM unit u
JOIN upperformation_unit ufu ON ufu.unitcode = u.unitcode
JOIN upperformation uf ON uf.upperformationcode = ufu.upperformationcode
JOIN formation f ON f.formationcode = uf.formationcode
WHERE u.unitcode = 7000007;
| |
doc_3362
|
select * from table t where (t.k1='apple' and t.k2='pie') or (t.k1='strawberry' and t.k2='shortcake')
... --10000 more key pairs here
This looks quite verbose to me. Any better alternatives? (Currently using SQLite, might use MYSQL/Oracle.)
A: You can use, for example, this on Oracle. I assume that if you use a regular concatenation function instead of Oracle's || on another DB, it would work too (as it is simply a string comparison against the IN list). Note that such a query might have a suboptimal execution plan.
SELECT *
FROM
TABLE t
WHERE
t.k1||','||t.k2 IN ('apple,pie',
'strawberry,shortcake' );
But if you have your value list stored in other table, Oracle supports also the format below.
SELECT *
FROM
TABLE t
WHERE (t.k1,t.k2) IN ( SELECT x.k1, x.k2 FROM x );
A: Don't be afraid of verbose syntax. Concatenation tricks can easily mess up the selectivity estimates or even prevent the database from using indexes.
Here is another syntax that may or may not work in your database.
select *
from table t
where (k1, k2) in(
('apple', 'pie')
,('strawberry', 'shortcake')
,('banana', 'split')
,('raspberry', 'vodka')
,('melon', 'shot')
);
A final comment: if you find yourself wanting to submit 1000 values as filters, you should most likely look for a different approach altogether :)
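One such approach is to load the key pairs into a helper table and join against it; a rough sketch (table and column names are placeholders):
-- load the 10000 key pairs into a helper table, then join on it
CREATE TABLE wanted_pairs (k1 VARCHAR(100), k2 VARCHAR(100));
-- bulk-insert the key pairs here
SELECT t.*
FROM my_table t
JOIN wanted_pairs w ON w.k1 = t.k1 AND w.k2 = t.k2;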
A: select * from table t
where (t.k1+':'+t.k2)
in ('strawberry:shortcake','apple:pie','banana:split','etc:etc')
This will work in most cases, as it concatenates the keys and searches them as one column.
Of course, you need to choose a proper separator that will never appear in the values of k1 and k2.
For example, if k1 and k2 are of type int, you can take any character as the separator.
A: SELECT * FROM tableName t
WHERE t.k1=( CASE WHEN t.k2=VALUE THEN someValue
WHEN t.k2=otherVALUE THEN someotherValue END)
- SQL FIDDLE
| |
doc_3363
|
df<- structure(c(14.12087951, 14.99460661, 33.46234987, 10.17615856,
5.274590779, 2262.260928, 30.95475607, 489.3857185, 100.2231956,
1.927758832, 12063.47923, 12.40706075, 2010.075103, 1161.375364,
789.7376463, 3118.915801, 202.9969196, 5.098794774, 913.8294948,
25.66624202, 254.0262357, 351.1804779, 1.164233553, 1.725950597,
1.73866603, -0.182861618, 1.073288641, 2.355917497, 1.903814342,
2.106296918, 1.641698736, 1.00452836, 1.530285115, 1.224115304,
1.549014357, 1.571649698, 1.336788511, 1.566214154, 1.287767608,
1.43739379, 1.107868132, 1.446075949, 1.053322707, 1.084792083
), .Dim = c(22L, 2L), .Dimnames = list(c("A", "AAA", "AAAA",
"AAAAA", "AAAAAA", "AAAAAAA", "B", "BB", "BBB", "BBBB", "BBBBB",
"F", "FFF", "FFFFF", "FFFFFF", "FFFFFFF", "FFFFFFFF", "E62534",
"GDTDFS", "AZE", "ZIEY", "SIS"), c("value1", "value2")))
I am trying to show only specific labels when I use heatmap.2
So I do this
Labelst <- c("BBBBB", "AAAAAAA", "SIS","ZIEY")
heatmap.2(df, labRow = Labelst)
If I don't use labRow, then the order of the labels is different. For example, compare the following with the above:
heatmap.2(df)
Now, the biggest issue is how to avoid overlapping. When I have 1000s of rows and I try to show only specific labels, they overlap. Can someone show me how to show specific row labels with a flash, or with some distance from each other? I don't want to change the font and make it smaller.
A: Make sure to read the help page of heatmap.2, as it's very detailed. While not specifically stated, labRow takes a vector with one entry per row of your data. The help states these default to rownames(x).
So simply replace any labels you don't want with an empty string.
have <- rownames(df)
want <- c("BBBBB", "AAAAAAA", "SIS","ZIEY")
have[!(have %in% want)] <- ""
heatmap.2(df, labRow = have)
I missed the second half of your question. The options available for heatmap.2 to modify/change the row/column names are these:
# Row/Column Labeling
margins = c(5, 5),
ColSideColors,
RowSideColors,
cexRow = 0.2 + 1/log10(nr),
cexCol = 0.2 + 1/log10(nc),
labRow = NULL,
labCol = NULL,
srtRow = NULL,
srtCol = NULL,
adjRow = c(0,NA),
adjCol = c(NA,0),
offsetRow = 0.5,
offsetCol = 0.5,
colRow = NULL,
colCol = NULL,
Note specifically cexRow; it specifies the label size. Change this variable to stop the overlapping.
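For example, combining the filtered labels from above with a smaller label size (0.6 is just a sample value):
heatmap.2(df, labRow = have, cexRow = 0.6)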
| |
doc_3364
|
I would like to understand how one can implement a feature like that. What is the software design / architecture for it?
I am planning to design and implement this functionality in Java, Spring and Tomcat using server-side sessions, and I would prefer to roll out my own impersonation feature instead of using a library.
A: If you want a specific solution that GitLab implemented in Ruby, you could take a look at the commit that introduced the feature: Commit 3bb626f9 - refactor login as to be impersonation with better login/logout
Please note that the security issue introduced on this commit was later fixed later: GitLab Blog Post - Critical Security Release for GitLab 8.2 through 8.7
Otherwise, I think this question is too broad.
I need some more details - Framework, Current Authentication mechanism, etc.
EDIT:
I do not know Java Spring framework too well, but these links may help you:
*
*spring security (3.0.x) and user impersonation
*How to do impersonation in spring
| |
doc_3365
|
First, rewinding head to replay your work on top of it...
Applying: <First Change>
Applying: <Second Change>
.git/rebase-apply/patch:20: trailing whitespace.
warning: 1 line adds whitespace errors.
error: Your local changes to the following files would be overwritten by merge:
<some exiting project file file>
<another existing project file>
Please commit your changes or stash them before you merge.
Aborting
error: Failed to merge in the changes.
Using index info to reconstruct a base tree...
Falling back to patching base and 3-way merge...
Patch failed at 0002 <Second change>
The copy of the patch that failed is found in: .git/rebase-apply/patch
I'm used to resolving CONFLICT files during a merge or rebase. But this situation has me puzzled. There is no CONFLICT; it's just that the second change cannot be applied because there are files that would be overwritten, which makes no sense to me. These are all existing files which are tracked. git status before the rebase shows no pending adds, and the files show up correctly in git log -- <some exiting project file file>. The <First Change> does not touch these files. The <Second Change> modifies them, but rebase somehow sees the patch as an overwrite.
Any explanation for this mystery, any suggestion how to tackle this problem?
git version 2.14.1.windows.1
A: I have come to the conclusion that it is caused by the interaction between VS keeping open file handles and git. An aggravating factor appears to be having a gulp:serve task running while js/ts/css files are being checked in/checked out, branches changed, origin pulled, etc. I have also noticed access-denied errors with the security descriptors of the files being completely busted (e.g. cannot take ownership of a file, not even as Local System). It is sad to see NTFS metadata handling in such a poor state, but the conclusion for me is: close VS when working with git. Or use vim and msbuild from the command line, which is way more reliable and easier to use... but I digress.
| |
doc_3366
|
As for now, the app is working as expected, the user selects an option from the segmentedIndex control, and the annotations are shown on the mapView.
My current issue is that I need the user to click on the callout view to open another viewController.
I think my code is right, but I guess it isn't, since the callout view shown is the default callout view with title and subtitle. No action is fired when it is clicked.
Any help is welcome.
- (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id <MKAnnotation>)annotation {
static NSString *identifier = @"MyLocation";
if ([annotation isKindOfClass:[PlaceMark class]]) {
MKPinAnnotationView *annotationView =
(MKPinAnnotationView *)[myMapView dequeueReusableAnnotationViewWithIdentifier:identifier];
if (annotationView == nil) {
annotationView = [[MKPinAnnotationView alloc]
initWithAnnotation:annotation
reuseIdentifier:identifier];
} else {
annotationView.annotation = annotation;
}
annotationView.enabled = YES;
annotationView.canShowCallout = YES;
// Create a UIButton object to add on the
UIButton *rightButton = [UIButton buttonWithType:UIButtonTypeDetailDisclosure];
[rightButton setTitle:annotation.title forState:UIControlStateNormal];
[annotationView setRightCalloutAccessoryView:rightButton];
UIButton *leftButton = [UIButton buttonWithType:UIButtonTypeInfoLight];
[leftButton setTitle:annotation.title forState:UIControlStateNormal];
[annotationView setLeftCalloutAccessoryView:leftButton];
return annotationView;
}
return nil;
}
- (void)mapView:(MKMapView *)mapView
annotationView:(MKAnnotationView *)view calloutAccessoryControlTapped:(UIControl *)control {
if ([(UIButton*)control buttonType] == UIButtonTypeDetailDisclosure){
// Do your thing when the detailDisclosureButton is touched
UIViewController *mapDetailViewController = [[UIViewController alloc] init];
[[self navigationController] pushViewController:mapDetailViewController animated:YES];
} else if([(UIButton*)control buttonType] == UIButtonTypeInfoDark) {
// Do your thing when the infoDarkButton is touched
NSLog(@"infoDarkButton for longitude: %f and latitude: %f",
[(PlaceMark*)[view annotation] coordinate].longitude,
[(PlaceMark*)[view annotation] coordinate].latitude);
}
}
A: Most likely the map view's delegate is not set in which case it won't call viewForAnnotation and will instead create a default view (red pin with a callout showing only the title and subtitle -- no buttons).
The declaration in the header file does not set the map view's delegate. That just tells the compiler that this class intends to implement certain delegate methods.
In the xib/storyboard, right-click on the map view and connect the delegate outlet to the view controller or, in viewDidLoad, put mapView.delegate = self;.
Unrelated, but I want to point out that in calloutAccessoryControlTapped, rather than checking the buttonType, you probably want to just know whether it's the right or left button so just do:
if (control == view.rightCalloutAccessoryView) ...
See https://stackoverflow.com/a/9113611/467105 for a complete example.
There are at least two problems with checking the buttonType:
*
*What if you want to use the same type for both buttons (eg. Custom)?
*In iOS 7, setting a button to UIButtonTypeDetailDisclosure ends up actually creating a button of type Info (see MKAnnotationView always shows infoButton instead of detailDisclosure btn for details). So the check for buttonType UIButtonTypeDetailDisclosure would fail (on iOS 7).
| |
doc_3367
|
Observer: Every component (e.g. JButton) is a subject which can add observers (ActionListeners). When someone pushes a button it notifies all its ActionListeners by calling their actionPerformed(ActionEvent e).
But how about the Command pattern?
When I make classes that implement ActionListener (e.g. MyActionListener), is the actionPerformed(ActionEvent e) now the execute() command?
It confuses me that actionPerformed(ActionEvent e) is used both as an execute() and an update() method. Am I right here?
A: Here is an article that will help. Basically, it is saying you can create concrete command classes that interact with a target object by deriving the ActionListener. Then you can expand what an action event invoker will do by registering these decoupled commands to it.
A: Yes, so basically making an object that encapsulates the behavior and other information that is needed when an action takes place can be seen as using the command pattern.
The Wikipedia article linked above uses the Action interface as an example of the command pattern in Swing. The Action interface is a subinterface of ActionListener, so a class that implements Action will have to implement the actionPerformed method.
Therefore, a class implementing Action will be encapsulating some operations which will be performed when an action occurs. And that class itself can be seen to follow the command pattern.
When it comes to the implementation, in general, an AbstractAction can be easier to use than implementing Action directly, as the latter has several methods that need to be overridden. An example using AbstractAction can be:
class MySpecialAction extends AbstractAction {
// Pass the display name and icon up to AbstractAction so the
// new MySpecialAction("Special Action", mySpecialIcon) call below compiles.
public MySpecialAction(String name, Icon icon) {
super(name, icon);
}
@Override
public void actionPerformed(ActionEvent e) {
// Perform operations.
}
}
The MySpecialAction is a command pattern object -- it has the behavior it must exhibit when an action takes place. When instantiating the above class, one could try the following:
MySpecialAction action = new MySpecialAction("Special Action", mySpecialIcon);
Then, the action can be registered to multiple components, such as JButtons, JMenuItems and such. In each case, the same MySpecialAction object will be called:
JMenuItem specialMenuItem = new JMenuItem(action);
/* ... */
JButton b = new JButton(action);
In both cases, the action associated with each component, the button and the menu item, refers to the same MySpecialAction object, or command. As we can see, the MySpecialAction object is functioning as an object following the command pattern, as it encapsulates some action to be performed at the time when an action takes place.
A: Interesting take on it and you might be right, but I see it as something that's performed, period. The reason for the action being performed can be a state change, or a mouse click, but it's still a Command in the Command Pattern sense.
A: Here is my take on it :
http://blue-walrus.com/2011/10/swing-and-design-patterns-%E2%80%93-part-3-command-pattern/
The key is that your action is an interchangeable unit. You can have this action behind many buttons or menus. E.g. a Save action can sit behind the 'save' menu option and also behind the 'save' icon.
| |
doc_3368
|
public static void main(String[] args) {
SparkSession sparkSession = SparkSession.builder()
.appName("SparkSQL")
.master("local")
.getOrCreate();
Dataset<Row> df = sparkSession.createDataFrame(
Arrays.asList(new Person("panfei",27)),Person.class);
System.out.println(df.rdd().count());
}
}
I am a beginner to Spark. The above code runs on my local machine, and I have made sure that Person implements Serializable, but the code throws this exception:
17/09/30 18:13:26 INFO SparkContext: Starting job: count at App.java:32
17/09/30 18:13:26 INFO DAGScheduler: Got job 0 (count at App.java:32) with 1 output partitions
17/09/30 18:13:26 INFO DAGScheduler: Final stage: ResultStage 0 (count at App.java:32)
17/09/30 18:13:26 INFO DAGScheduler: Parents of final stage: List()
17/09/30 18:13:26 INFO DAGScheduler: Missing parents: List()
17/09/30 18:13:26 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[4] at rdd at App.java:32), which has no missing parents
17/09/30 18:13:26 INFO TaskSchedulerImpl: Cancelling stage 0
17/09/30 18:13:26 INFO DAGScheduler: ResultStage 0 (count at App.java:32) failed in Unknown s due to Job aborted due to stage failure: Task not serializable: java.io.NotSerializableException: scala.collection.Iterator$$anon$11
Serialization stack:
- object not serializable (class: scala.collection.Iterator$$anon$11, value: empty iterator)
- field (class: scala.collection.Iterator$$anonfun$toStream$1, name: $outer, type: interface scala.collection.Iterator)
- object (class scala.collection.Iterator$$anonfun$toStream$1, <function0>)
- field (class: scala.collection.immutable.Stream$Cons, name: tl, type: interface scala.Function0)
- object (class scala.collection.immutable.Stream$Cons, Stream([27,panfei]))
- field (class: org.apache.spark.sql.execution.LocalTableScanExec, name: rows, type: interface scala.collection.Seq)
- object (class org.apache.spark.sql.execution.LocalTableScanExec, LocalTableScan [age#0, name#1]
)
- field (class: org.apache.spark.sql.execution.DeserializeToObjectExec, name: child, type: class org.apache.spark.sql.execution.SparkPlan)
- object (class org.apache.spark.sql.execution.DeserializeToObjectExec, DeserializeToObject createexternalrow(age#0, name#1.toString, StructField(age,IntegerType,true), StructField(name,StringType,true)), obj#6: org.apache.spark.sql.Row
+- LocalTableScan [age#0, name#1]
)
- field (class: org.apache.spark.sql.execution.DeserializeToObjectExec$$anonfun$2, name: $outer, type: class org.apache.spark.sql.execution.DeserializeToObjectExec)
- object (class org.apache.spark.sql.execution.DeserializeToObjectExec$$anonfun$2, <function2>)
- field (class: org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndexInternal$1, name: f$22, type: interface scala.Function2)
- object (class org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndexInternal$1, <function0>)
- field (class: org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndexInternal$1$$anonfun$apply$24, name: $outer, type: class org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndexInternal$1)
- object (class org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndexInternal$1$$anonfun$apply$24, <function3>)
- field (class: org.apache.spark.rdd.MapPartitionsRDD, name: f, type: interface scala.Function3)
- object (class org.apache.spark.rdd.MapPartitionsRDD, MapPartitionsRDD[2] at rdd at App.java:32)
- field (class: org.apache.spark.NarrowDependency, name: _rdd, type: class org.apache.spark.rdd.RDD)
- object (class org.apache.spark.OneToOneDependency, org.apache.spark.OneToOneDependency@3420d0d9)
- writeObject data (class: scala.collection.immutable.$colon$colon)
- object (class scala.collection.immutable.$colon$colon, List(org.apache.spark.OneToOneDependency@3420d0d9))
- field (class: org.apache.spark.rdd.RDD, name: org$apache$spark$rdd$RDD$$dependencies_, type: interface scala.collection.Seq)
- object (class org.apache.spark.rdd.MapPartitionsRDD, MapPartitionsRDD[3] at rdd at App.java:32)
- field (class: org.apache.spark.NarrowDependency, name: _rdd, type: class org.apache.spark.rdd.RDD)
- object (class org.apache.spark.OneToOneDependency, org.apache.spark.OneToOneDependency@7342323d)
- writeObject data (class: scala.collection.immutable.$colon$colon)
- object (class scala.collection.immutable.$colon$colon, List(org.apache.spark.OneToOneDependency@7342323d))
- field (class: org.apache.spark.rdd.RDD, name: org$apache$spark$rdd$RDD$$dependencies_, type: interface scala.collection.Seq)
- object (class org.apache.spark.rdd.MapPartitionsRDD, MapPartitionsRDD[4] at rdd at App.java:32)
- field (class: scala.Tuple2, name: _1, type: class java.lang.Object)
- object (class scala.Tuple2, (MapPartitionsRDD[4] at rdd at App.java:32,<function2>))
17/09/30 18:13:26 INFO DAGScheduler: Job 0 failed: count at App.java:32, took 0.098482 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task not serializable: java.io.NotSerializableException: scala.collection.Iterator$$anon$11
Serialization stack:
- object not serializable (class: scala.collection.Iterator$$anon$11, value: empty iterator)
- field (class: scala.collection.Iterator$$anonfun$toStream$1, name: $outer, type: interface scala.collection.Iterator)
- object (class scala.collection.Iterator$$anonfun$toStream$1, <function0>)
- field (class: scala.collection.immutable.Stream$Cons, name: tl, type: interface scala.Function0)
- object (class scala.collection.immutable.Stream$Cons, Stream([27,panfei]))
- field (class: org.apache.spark.sql.execution.LocalTableScanExec, name: rows, type: interface scala.collection.Seq)
- object (class org.apache.spark.sql.execution.LocalTableScanExec, LocalTableScan [age#0, name#1]
)
- field (class: org.apache.spark.sql.execution.DeserializeToObjectExec, name: child, type: class org.apache.spark.sql.execution.SparkPlan)
- object (class org.apache.spark.sql.execution.DeserializeToObjectExec, DeserializeToObject createexternalrow(age#0, name#1.toString, StructField(age,IntegerType,true), StructField(name,StringType,true)), obj#6: org.apache.spark.sql.Row
+- LocalTableScan [age#0, name#1]
)
- field (class: org.apache.spark.sql.execution.DeserializeToObjectExec$$anonfun$2, name: $outer, type: class org.apache.spark.sql.execution.DeserializeToObjectExec)
- object (class org.apache.spark.sql.execution.DeserializeToObjectExec$$anonfun$2, <function2>)
- field (class: org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndexInternal$1, name: f$22, type: interface scala.Function2)
- object (class org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndexInternal$1, <function0>)
- field (class: org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndexInternal$1$$anonfun$apply$24, name: $outer, type: class org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndexInternal$1)
- object (class org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndexInternal$1$$anonfun$apply$24, <function3>)
- field (class: org.apache.spark.rdd.MapPartitionsRDD, name: f, type: interface scala.Function3)
- object (class org.apache.spark.rdd.MapPartitionsRDD, MapPartitionsRDD[2] at rdd at App.java:32)
- field (class: org.apache.spark.NarrowDependency, name: _rdd, type: class org.apache.spark.rdd.RDD)
- object (class org.apache.spark.OneToOneDependency, org.apache.spark.OneToOneDependency@3420d0d9)
- writeObject data (class: scala.collection.immutable.$colon$colon)
- object (class scala.collection.immutable.$colon$colon, List(org.apache.spark.OneToOneDependency@3420d0d9))
- field (class: org.apache.spark.rdd.RDD, name: org$apache$spark$rdd$RDD$$dependencies_, type: interface scala.collection.Seq)
- object (class org.apache.spark.rdd.MapPartitionsRDD, MapPartitionsRDD[3] at rdd at App.java:32)
- field (class: org.apache.spark.NarrowDependency, name: _rdd, type: class org.apache.spark.rdd.RDD)
- object (class org.apache.spark.OneToOneDependency, org.apache.spark.OneToOneDependency@7342323d)
- writeObject data (class: scala.collection.immutable.$colon$colon)
- object (class scala.collection.immutable.$colon$colon, List(org.apache.spark.OneToOneDependency@7342323d))
- field (class: org.apache.spark.rdd.RDD, name: org$apache$spark$rdd$RDD$$dependencies_, type: interface scala.collection.Seq)
- object (class org.apache.spark.rdd.MapPartitionsRDD, MapPartitionsRDD[4] at rdd at App.java:32)
- field (class: scala.Tuple2, name: _1, type: class java.lang.Object)
- object (class scala.Tuple2, (MapPartitionsRDD[4] at rdd at App.java:32,<function2>))
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:1000)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:918)
at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:862)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1613)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
at org.apache.spark.rdd.RDD.count(RDD.scala:1157)
at com.ctrip.market.dmp.spark.app.App.main(App.java:32)
Caused by: java.io.NotSerializableException: scala.collection.Iterator$$anon$11
Serialization stack:
- object not serializable (class: scala.collection.Iterator$$anon$11, value: empty iterator)
- field (class: scala.collection.Iterator$$anonfun$toStream$1, name: $outer, type: interface scala.collection.Iterator)
- object (class scala.collection.Iterator$$anonfun$toStream$1, <function0>)
- field (class: scala.collection.immutable.Stream$Cons, name: tl, type: interface scala.Function0)
- object (class scala.collection.immutable.Stream$Cons, Stream([27,panfei]))
- field (class: org.apache.spark.sql.execution.LocalTableScanExec, name: rows, type: interface scala.collection.Seq)
- object (class org.apache.spark.sql.execution.LocalTableScanExec, LocalTableScan [age#0, name#1]
)
- field (class: org.apache.spark.sql.execution.DeserializeToObjectExec, name: child, type: class org.apache.spark.sql.execution.SparkPlan)
- object (class org.apache.spark.sql.execution.DeserializeToObjectExec, DeserializeToObject createexternalrow(age#0, name#1.toString, StructField(age,IntegerType,true), StructField(name,StringType,true)), obj#6: org.apache.spark.sql.Row
+- LocalTableScan [age#0, name#1]
)
- field (class: org.apache.spark.sql.execution.DeserializeToObjectExec$$anonfun$2, name: $outer, type: class org.apache.spark.sql.execution.DeserializeToObjectExec)
- object (class org.apache.spark.sql.execution.DeserializeToObjectExec$$anonfun$2, <function2>)
- field (class: org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndexInternal$1, name: f$22, type: interface scala.Function2)
- object (class org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndexInternal$1, <function0>)
- field (class: org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndexInternal$1$$anonfun$apply$24, name: $outer, type: class org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndexInternal$1)
- object (class org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndexInternal$1$$anonfun$apply$24, <function3>)
- field (class: org.apache.spark.rdd.MapPartitionsRDD, name: f, type: interface scala.Function3)
- object (class org.apache.spark.rdd.MapPartitionsRDD, MapPartitionsRDD[2] at rdd at App.java:32)
- field (class: org.apache.spark.NarrowDependency, name: _rdd, type: class org.apache.spark.rdd.RDD)
- object (class org.apache.spark.OneToOneDependency, org.apache.spark.OneToOneDependency@3420d0d9)
- writeObject data (class: scala.collection.immutable.$colon$colon)
- object (class scala.collection.immutable.$colon$colon, List(org.apache.spark.OneToOneDependency@3420d0d9))
- field (class: org.apache.spark.rdd.RDD, name: org$apache$spark$rdd$RDD$$dependencies_, type: interface scala.collection.Seq)
- object (class org.apache.spark.rdd.MapPartitionsRDD, MapPartitionsRDD[3] at rdd at App.java:32)
- field (class: org.apache.spark.NarrowDependency, name: _rdd, type: class org.apache.spark.rdd.RDD)
- object (class org.apache.spark.OneToOneDependency, org.apache.spark.OneToOneDependency@7342323d)
- writeObject data (class: scala.collection.immutable.$colon$colon)
- object (class scala.collection.immutable.$colon$colon, List(org.apache.spark.OneToOneDependency@7342323d))
- field (class: org.apache.spark.rdd.RDD, name: org$apache$spark$rdd$RDD$$dependencies_, type: interface scala.collection.Seq)
- object (class org.apache.spark.rdd.MapPartitionsRDD, MapPartitionsRDD[4] at rdd at App.java:32)
- field (class: scala.Tuple2, name: _1, type: class java.lang.Object)
- object (class scala.Tuple2, (MapPartitionsRDD[4] at rdd at App.java:32,<function2>))
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:993)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:918)
at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:862)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1613)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
A: This problem has been solved by changing the Scala version from 2.10 to 2.11. Anyway, I am still puzzled.
A: Your main class does not need to implement Serializable.
| |
doc_3369
|
How to do that in Swift? I can't find it on Stack Overflow so far.
A: This Jailbreak medium article provided a very good rundown
if TARGET_IPHONE_SIMULATOR != 1
{
// Check 1 : existence of files that are common for jailbroken devices
if FileManager.default.fileExists(atPath: "/Applications/Cydia.app")
|| FileManager.default.fileExists(atPath: "/Library/MobileSubstrate/MobileSubstrate.dylib")
|| FileManager.default.fileExists(atPath: "/bin/bash")
|| FileManager.default.fileExists(atPath: "/usr/sbin/sshd")
|| FileManager.default.fileExists(atPath: "/etc/apt")
|| FileManager.default.fileExists(atPath: "/private/var/lib/apt/")
|| UIApplication.shared.canOpenURL(URL(string:"cydia://package/com.example.package")!)
{
return true
}
// Check 2 : Reading and writing in system directories (sandbox violation)
let stringToWrite = "Jailbreak Test"
do
{
try stringToWrite.write(toFile:"/private/JailbreakTest.txt", atomically:true, encoding:String.Encoding.utf8)
//Device is jailbroken
return true
}catch
{
return false
}
}else
{
return false
}
| |
doc_3370
|
The following procedure is what I think happens:
1) sender sends lots of packets with big window size
2) the first packet is dropped
3) The receiver sends ACKs that contain the seq # of the first packet, and the other ACKs will have the same ack number. (For example, the sender sends 1,2,3,4,5,6,7 and packet 1 is dropped. Then, the receiver sends ACKs 1,1,1,1,1,1,1)
4) The sender gets 3 dup ACKs and retransmits the dropped packet. However, the receiver still sends 6 dup ACKs because it has not yet received the retransmitted packet. Therefore, the receiver will keep sending duplicate ACKs until it gets the dropped packet.
5) The sender gets 3 dup ACKs again, and retransmits the packet again.
Therefore, the sender will retransmit the packet repeatedly. I think this is weird. Is there any problem in the above procedure (scenario)?
Or is there any TCP logic that can prevent the above problem?
| |
doc_3371
|
I have used this code to hide properties from the child class:
[Browsable(false)]
public string ItemNo
{
get { return itemNo; }
set { itemNo = value; }
}
But the above code hides my properties in both classes, i.e. the base class and the child class.
My question is: I need to hide these properties only from the child class (InvoiceSummary) DataGridView, while at the same time showing the same properties in my base class DataGridView. Please give me a solution.
invoice class code
namespace BillingSystem.Business
{
[Serializable()]
public class Invoice : ISerializable
{
private string invoiceid;
private string itemNo;
[Browsable(false)]
public string Invoiceid
{
get { return invoiceid; }
set { invoiceid = value; }
}
[Browsable(false)]
public string ItemNo
{
get { return itemNo; }
set { itemNo = value; }
}
InvoiceSummary Class properties
public class invoiceSummary :Invoice
{
private int no;
private string customerName;
private int invoiceID;
}
for more details i have attached screenshot
InvoiceSummary dataGridView
A: You want to add the sealed modifier to the property in the base class.
public sealed string ItemNo
{
get { return itemNo; }
set { itemNo = value; }
}
This will prevent classes that inherit from the base class from overriding this property.
| |
doc_3372
|
{"errors":[{"message":"Bad Authentication data","code":215}]}
I'm confused about whether it's still possible to get back JSON and use it with $.ajax or $.getJSON in version 1.1 of the Twitter API.
| |
doc_3373
|
<style>
#container {
position: relative;
}
div.overlay {
opacity: 0.6;
background-color: black;
position: absolute;
}
</style>
<div id="container">
<div style="width:200px; height:200px" id="background" class="overlay"></div>
<img src="http://upload.wikimedia.org/wikipedia/commons/d/db/Sports_portal_bar_icon.png" width="200px" height="200px" />
</div>
http://jsbin.com/baweseqo/1/edit.
I want to add a div with text on it.
how can I do that?
A: The background image needs to be done in CSS. The text is just put in the div:
CSS:
#container {
position: relative;
}
div.overlay {
opacity: 0.6;
background-color: black;
position: absolute;
width:200px;
height:200px;
background-image: url(http://upload.wikimedia.org/wikipedia/commons/d/db/Sports_portal_bar_icon.png);
}
.colortext{
color:cyan;
}
HTML:
<div id="container">
<div class="overlay">
<p class="colortext">your text</p>
</div>
</div>
A: Do position absolute:
Here is the example http://jsbin.com/nikafamo/1/edit?html,js,console,output
A: Just put this inside the <div id="container">
<div>
<p>Text</p>
</div>
| |
doc_3374
|
For the following code, I am getting an error.
var query = (from p in dc.CustomerBranch
where p.ID == Convert.ToInt32(id) // here is the error.
select new Location()
{
Name = p.BranchName,
Address = p.Address,
Postcode = p.Postcode,
City = p.City,
Telephone = p.Telephone
}).First();
return query;
LINQ to Entities does not recognize the method 'Int32 ToInt32 (System.String)', and this method can not be translated into a store expression.
A: Do the conversion outside LINQ:
var idInt = Convert.ToInt32(id);
var query = (from p in dc.CustomerBranch
where p.ID == idInt
select new Location()
{
Name = p.BranchName,
Address = p.Address,
Postcode = p.Postcode,
City = p.City,
Telephone = p.Telephone
}).First();
return query;
A: No they wouldn't. Think of it this way: both ToString() and Parse() are methods on the objects. Since LINQ to Entities tries to convert your LINQ expression to SQL, those are not available.
If one needs to do this in the query, it might be possible with Cast, which should be available in LINQ to Entities. In the case of ToString, you could use SqlFunctions.StringConvert().
| |
doc_3375
|
var deg = 0;
document.getElementById('main_photo').style.setProperty("-webkit-transform","rotate("+ deg +90 +"deg)", null);
A: rotate("+ deg +90 +"deg)"
Change that to
rotate("+ (deg + 90) +"deg)"
In javascript, string + number is a string. Thus, you want to add your numbers inside parentheses.
| |
doc_3376
|
$('a').not("a[href^='http://'], a[href^='https://'], a[href^='/'], a[href^='./'], a[href^='../'], a[href^='#'], a[href$='.pdf'], a[href$='.html']").addClass( 'dl' );
With my code some relative and external links get affected. What can I do to fix it?
Thank you!
A: This is an approach you may consider trying.
The fileNames array represents all the href prefixes for which you want to add the class.
var fileNames = ["suffix1", "suffix2"];
$("a").each(function(index, element){
fileNames.forEach(function(fileName){
if($(element).attr("href").startsWith(fileName)){
$(element).addClass("dl");
}
});
});
.dl {
color:red;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<a href="suffix1">addClass suffix1</a>
<a href="http://">don't add</a>
<a href="suffix2">addClass suffix2</a>
<a href="https://">don't add</a>
| |
doc_3377
|
The problem is that usually my working copy is not clean while I'm committing. There are unstaged or even untracked files that I don't want to commit. Sometimes I even explicitly specify only a few files to commit, which have nothing to do with what is currently staged.
Of course I want to compile and test only the changes that will be committed, ignoring the other ones.
There would be 3 steps to it:
*
*Remove all changes that won't be committed.
*Run the tests.
*Restore all the changes to exactly how they were before the 1st step.
The 1st step could be achieved by running git stash push --include-untracked --keep-index. The stash entry would also help with the 3rd step.
However, I don't know what to do when I'm committing explicit list of files that are not staged.
(The 2nd step is not really a part of the question.)
The 3rd step could be theoretically done with command git stash pop --index but this command seems to be prone to conflicts if some file was staged and then changed more without staging it again.
This script creates a repository with some files and changes that cover various corner cases:
#!/usr/bin/env sh
set -e -x
git init test-repo
cd test-repo
git config user.email "[email protected]"
git config user.name "Your Name"
echo foo >old-file-unchanged
echo foo >old-file-changed-staged
echo foo >old-file-changed-unstaged
echo foo >old-file-changed-both
git add .
git commit -m 'previous commit'
echo bar >old-file-changed-staged
echo bar >old-file-changed-both
echo bar >new-file-staged
echo bar >new-file-both
git add .
echo baz >old-file-changed-unstaged
echo baz >old-file-changed-both
echo baz >new-file-both
echo baz >untracked-file
A: You were actually quite close to a correct solution.
(In this answer, I'm going to use the word "cache" instead of "stage" because the latter one is too similar to "stash".)
In fact, the trick with using stash would work even if you were to commit files that are not cached. This is because Git changes the cache for the duration of running hooks, so it always contains the correct files. You can check it by adding the command git status to your pre-commit hook.
So you can use git stash push --include-untracked --keep-index.
The problem with conflicts when restoring the stash is also quite easily solvable. You already have all the changes backed up in the stash so there is no risk of losing anything. Just remove all the current changes and apply the stash to a clean slate.
This can be done in two steps.
The command git reset --hard will remove all the tracked files.
The command git clean -d --force will remove all untracked files.
After that you can run git stash pop --index without any risk of conflicts.
A simple hook would look like that:
#!/bin/sh
set -e
git stash push --include-untracked --keep-index --quiet --message='Backed up state for the pre-commit hook (if you can see it, something went wrong)'
#TODO Tests go here
git reset --hard --quiet
git clean -d --force --quiet
git stash pop --index --quiet
exit $tests_result
Let's break it down.
set -e ensures that the script stops immediately in case of an error so it won't do any further damage.
The stash entry with backup of all changes is done at the beginning so in case of an error you can take manual control and restore everything.
git stash push --include-untracked --keep-index --quiet --message='...' fulfills two purposes. It creates a backup of all current changes and removes all non-staged changes from the working directory.
The flag --include-untracked makes sure that untracked files are also backed up and removed.
The flag --keep-index cancels removal of the cached changes from the working directory (but they are still included in the stash).
#TODO Tests go here is where you tests go.
Make sure you don't exit the script here. You still need to restore the stashed changes before doing that.
Instead of exiting with an error code, set its value to the variable tests_result.
git reset --hard --quiet removes all the tracked changes from the working directory.
The flag --hard makes sure that nothing stays in the cache and all files are deleted.
git clean -d --force --quiet removes all the untracked files from the working directory.
The flag -d tells Git to remove directories recursively.
The flag --force tells Git you know what you're doing and it is really supposed to do delete all those files.
git stash pop --index --quiet restores all changes saved in the latest stash and removes it.
The flag --index tells it to make sure it doesn't mix up which files were cached and which were not.
Disadvantages of this method
This method is only semi-robust and it should be sufficient for simple use cases.
However, there are quite a few corner cases that may break something during real-life usage.
git stash push refuses to work with files that were only added with the flag --intent-to-add.
I'm not sure why that is and I haven't found a way to fix it.
You can bypass the problem by adding the file without the flag, or by at least adding it as an empty file and leaving only the content uncached.
Git tracks only files, not directories. However, the command git clean can remove directories. As a result, the script will remove empty directories (unless they are ignored).
Files that were added to .gitignore since the last commit will be deleted. I consider this a feature, but if you want to prevent it, you can do so by reversing the order of git reset and git clean.
Note that this works only if .gitignore is included in the current commit.
git stash push does not create a new stash if there are no changes, but it still returns 0. To handle commits without changes, such as changing the message using --amend, you would need to add some code that checks whether a stash was really created and pops it only if it was.
Git stash seems to remove the information about the current merge, so using this code on a merge commit will break it.
To prevent that, you need to back up the .git/MERGE_* files and restore them after popping the stash.
A robust solution
I've managed to iron out most of the kinks of this method (making the code much longer in the process).
The only remaining problem is removing empty directories and ignored files (as described above). I don't think these are severe enough issues to take time trying to bypass them. (It is doable, though.)
#!/bin/sh
backup_dir='./pre-commit-hook-backup'
if [ -e "$backup_dir" ]
then
printf '"%s" already exists!\n' "$backup_dir" 1>&2
exit 1
fi
intent_to_add_list_file="$backup_dir/intent-to-add"
remove_intent_to_add() {
git diff --name-only --diff-filter=A | tr '\n' '\0' >"$intent_to_add_list_file"
xargs -0 -r -- git reset --quiet -- <"$intent_to_add_list_file"
}
readd_intent_to_add() {
xargs -0 -r -- git add --intent-to-add --force -- <"$intent_to_add_list_file"
}
backup_merge_info() {
echo 'If you can see this, tests in the `pre-commit` hook went wrong. You need to fix this manually.' >"$backup_dir/README"
find ./.git -name 'MERGE_*' -exec cp {} "$backup_dir" \;
}
restore_merge_info() {
find "$backup_dir" -name 'MERGE_*' -exec mv {} ./.git \;
}
create_stash() {
git stash push --include-untracked --keep-index --quiet --message='Backed up state for the pre-commit hook (if you can see it, something went wrong)'
}
restore_stash() {
git reset --hard --quiet
git clean -d --force --quiet
git stash pop --index --quiet
}
run_tests() (
set +e
printf 'TODO: Put your tests here.\n' 1>&2
echo $?
)
set -e
mkdir "$backup_dir"
remove_intent_to_add
backup_merge_info
create_stash
tests_result=$(run_tests)
restore_stash
restore_merge_info
readd_intent_to_add
rm -r "$backup_dir"
exit "$tests_result"
| |
doc_3378
|
ds = xr.Dataset({'data': (('x'), [1, 2, 3])})
ds['new'] = 5
I get
<xarray.Dataset>
Dimensions: (x: 3)
Dimensions without coordinates: x
Data variables:
data (x) int64 1 2 3
new int64 5
But, what I'd like is:
<xarray.Dataset>
Dimensions: (x: 3)
Dimensions without coordinates: x
Data variables:
data (x) int64 1 2 3
new (x) int64 5 5 5
How do I get this behaviour independent of the length of x?
A: xarray.full_like can be useful for this purpose:
ds["new"] = xr.full_like(ds.data, 5)
| |
doc_3379
|
jQuery(document).ready(function($)
{
var frm = $('#export-form');
//Catch the submit
frm.submit(function(ev) {
console.log("Here I am") //Not visible in Firebug
// Prevent reload
ev.preventDefault();
$.ajax({
type : 'POST',
url : 'libs/GenerateCSV.php',
data : 'export',
success : function (data) {
alert('success'); // Temporary debugging, later a redirection is planned
}
});
});
});
But the script is never executed.
I've tried to use an onsubmit call but I had the same result.
Can anyone help me?
A: Nothing looks wrong in your code sample.
Did you make sure that jQuery is loaded before your script ?
Also, you can wrap your code in a document.ready statement:
jQuery(document).ready(function($)
{
// your code goes here
});
A: Check your HTML code. I guess maybe you wrote onClick in the <input>, like below:
<input type="submit" value="submit" onClick="this.form.submit(); this.disabled=true; this.value='waiting ...'; ">
A: Try this approach
<!-- Include jQuery before this code -->
<form id="export-form" method="post" accept-charset="utf-8">
<a id="submit-form" href="#">Start Download</a>
</form>
<script type="text/javascript">
$(document).ready(function() {
$("#submit-form").click(function(){
$.ajax({
type : 'POST',
url : 'libs/GenerateCSV.php',
data : 'export',
success : function (data) {
alert('success');//Just for debugging, later a redirection to a file is planned
}
});
});
});
</script>
An anchor tag won't automatically submit your form and redirect you. I always use this solution to submit a form without redirection. On the success event you can redirect wherever you want. Hope it will help you.
| |
doc_3380
|
I tried a separate function for each column, but then one function overrides the other and it's not working.
<p> Filter by Activities:<select id="myactivity" onchange="myActivity()" class='form-control' style="width: 30%">
<option>a</option>
<option>b</option>
<option>c</option>
<option>d</option>
<option>e</option>
<option>f</option>
<option>g</option>
<option>h</option>
<option>i</option>
<option>j</option>
</select> </p>
function myActivity() {
var input, filter, table, tr, td, i;
input = document.getElementById("myactivity");
filter = input.value.toUpperCase();
table = document.getElementById("example");
tr = table.getElementsByTagName("tr");
for (i = 0; i < tr.length; i++) {
/* search column index 1 */
td = tr[i].getElementsByTagName("td")[1];
if (td) {
if (td.innerHTML.toUpperCase().indexOf(filter) > -1) {
tr[i].style.display = "";
} else {
tr[i].style.display = "none";
}
}
}
}
I have found a reference to this problem, just in case anyone has the same problem as me.
https://www.aspforums.net/Threads/187399/Filter-data-in-HTML-Table-with-multiple-DropDownList-in-jQuery/
| |
doc_3381
|
var microsoftGraph = require("@microsoft/microsoft-graph-client");
var client = microsoftGraph.Client.init({
authProvider: (done) => {
done(null, token);
}
});
client.api('/me/calendar/events/xyz').patch({
'Body': {
'ContentType': '0',
'Content': 'test from api 2'
}
}, (err, res) => {
if (err) {
console.log('err: ', err);
} else {
console.log('res: ', res);
}
});
The problem is that the token expires and it doesn't come with a refresh token to renew it from the server without user intervention.
Here is the error I get after ~2 hours:
err: {
statusCode: 401,
code: 'InvalidAuthenticationToken',
message: 'CompactToken validation failed with reason code: -2147184088.',
requestId: '5bdd2402-b1b9-4c25-8b2d-1ca2c4a79192',
date: 2017-08-07T19:27:44.000Z,
body: {
code: 'InvalidAuthenticationToken',
message: 'CompactToken validation failed with reason code: -2147184088.',
innerError: {
'request-id': '5bdd2402-b1b9-4c25-8b2d-1ca2c4a79192',
date: '2017-08-07T15:27:44'
}
}
}
What am I missing here? There has to be a way to integrate server-side code with Microsoft Graph without the need to renew tokens from the client side?
A: Access tokens expire after 60 minutes, so you'll either need to get a new one or use a refresh token. Refresh tokens can be used to get a new access token and the docs for that API call are here: https://developer.microsoft.com/en-us/graph/docs/concepts/auth_v2_user#5-use-the-refresh-token-to-get-a-new-access-token
To get a refresh token, request the offline_access permission and the user will see this in the consent screen as "Access your data anytime". An example of requesting the offline_access scope can be found here: https://developer.microsoft.com/en-us/graph/docs/concepts/auth_v2_user#2-get-authorization
| |
doc_3382
|
Is there a way I can get around this and be able to solve my problem of decompiling my .py files anyhow so I can get the earlier project structure that I require?
Edit: unpyclib and Decompyle++ are other tools that I looked into; however, both are Python 2.xx only programs. The most successful decompiler is uncompyle6 (offering support up to Python 3.8 as of now), as is quite evident from several other responses on Stack Overflow.
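For reference, a minimal sketch of driving uncompyle6 from a small Python script (assumptions: it is installed via pip install uncompyle6, the compiled files to recover are .pyc files, and the directory names below are purely illustrative):
import pathlib
import subprocess

out_dir = pathlib.Path("recovered_sources")
out_dir.mkdir(exist_ok=True)
# Decompile every .pyc found in a hypothetical __pycache__ directory;
# the -o option tells the uncompyle6 CLI where to write the recovered .py sources.
for pyc in pathlib.Path("__pycache__").glob("*.pyc"):
    subprocess.run(["uncompyle6", "-o", str(out_dir), str(pyc)], check=True)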
| |
doc_3383
|
As far as I know, you cannot upload exe files (like url2bmp.exe, webshot.exe, webscreencapture.exe, etc.) to a virtual server. And they all use a Linux system (so it cannot use new COM("InternetExplorer.Application")).
So, is it possible to capture a web screenshot on a virtual server with PHP? Thanks.
A: as a possible alternative, you can check out this project: http://code.google.com/p/wkhtmltopdf/
A: You can do this with Linux, it is seriously tricky though. You need FireFox, imagmagik and VNC installed.
Basically you get Firefox to open a new window in a VNC display, grab the screenshot of that display with imagmagik and then save it as a thumbnail. The hard part about this is getting the VNC portion to work, especially with a headless setup. But it is completely do-able.
However, it will probably be a ton easier just getting a Windows VPS.
Doing a search, found this which might work:
Taking website screenshot, server-side, on a Linux rented server, free
Ah and here is the post about what I described above:
Command line program to create website screenshots (on Linux)
A: You can take automated screenshots of websites using an open-source tool like pageres. It can also simulate various resolutions, testing responsive layouts.
I'm not sure whether it's relevant that your website is coded in PHP, or that you're mentioning .exe files. Are you new to web development?
| |
doc_3384
|
org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:334)
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1202)
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:537)
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:476)
org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:303)
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:299)
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194)
org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:755)
org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:757)
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:480)
org.springframework.web.servlet.FrameworkServlet.configureAndRefreshWebApplicationContext(FrameworkServlet.java:663)
org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:629)
org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:677)
org.springframework.web.servlet.FrameworkServlet.initWebApplicationContext(FrameworkServlet.java:548)
org.springframework.web.servlet.FrameworkServlet.initServletBean(FrameworkServlet.java:489)
org.springframework.web.servlet.HttpServletBean.init(HttpServletBean.java:136)
javax.servlet.GenericServlet.init(GenericServlet.java:158)
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504)
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:421)
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1074)
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:611)
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:316)
java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
java.lang.Thread.run(Unknown Source)
spring-servlet.xml
<bean id="jspViewResolver"
class="org.springframework.web.servlet.view.InternalResourceViewResolver">
<property name="viewClass"
value="org.springframework.web.servlet.view.JstlView" />
<property name="prefix" value="/WEB-INF/jsp/" />
<property name="suffix" value=".jsp" />
</bean>
<bean id="messageSource"
class="org.springframework.context.support.ReloadableResourceBundleMessageSource">
<property name="basename" value="classpath:messages" />
<property name="defaultEncoding" value="UTF-8" />
</bean>
<bean id="propertyConfigurer"
class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"
p:location="/WEB-INF/jdbc.properties" />
<bean id="dataSource"
class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"
p:driverClassName="${jdbc.driverClassName}"
p:url="${jdbc.databaseurl}" p:username="${jdbc.username}"
p:password="${jdbc.password}" />
<bean id="sessionFactory"
class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="configLocation">
<value>classpath:hibernate.cfg.xml</value>
</property>
<property name="configurationClass">
<value>org.hibernate.cfg.AnnotationConfiguration</value>
</property>
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">${jdbc.dialect}</prop>
<prop key="hibernate.show_sql">true</prop>
</props>
</property>
</bean>
<tx:annotation-driven />
<bean id="transactionManager"
class="org.springframework.orm.hibernate3.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
Please let me know how to resolve this error.
This is my jdbc.properties file in the WEB-INF folder:
jdbc.driverClassName= com.mysql.jdbc.Driver
jdbc.dialect=org.hibernate.dialect.MySQLDialect
jdbc.databaseurl=jdbc:mysql://localhost:3306/SpringHibernate
jdbc.username=vishantmahajan
jdbc.password=abcdefgh
This is my dependency declaration in pom.xml:
<dependency>
<groupId>commons-dbcp</groupId>
<artifactId>commons-dbcp</artifactId>
<version>20030825.184428</version>
</dependency>
<dependency>
<groupId>commons-pool</groupId>
<artifactId>commons-pool</artifactId>
<version>20030825.183949</version>
</dependency>
A: The error is that you are missing a dependency in your project, because of the following exception in your stack trace Cannot find class [org.apache.commons.dbcp.BasicDataSource]
You only need to add this dependency to your maven pom.xml, just make sure that you are adding the correct version for your project, for example:
<dependency>
<groupId>commons-dbcp</groupId>
<artifactId>commons-dbcp</artifactId>
<version>1.4</version>
</dependency>
You can find the list of Common Database Connection Pooling for Maven here
A: Error clearly says Cannot find class [org.apache.commons.dbcp.BasicDataSource] so you need to add dependency for BasicDataSource.
You can get the Maven dependency here
or
Add the below dependency in pom.xml:
<dependency>
<groupId>commons-dbcp</groupId>
<artifactId>commons-dbcp</artifactId>
<version>1.2.2</version>
</dependency>
| |
doc_3385
|
namespace :admin do
resources :users
resources :active_vulnerabilities
# Admin root
root to: 'application#index'
end
But I get the error uninitialized constant Admin::ActiveVulnerabilitiesController so I changed my controller to class Admin::ActiveVulnerabilitiesController < ApplicationController
I then get the error Unable to autoload constant ActiveVulnerabilitiesController, expected /home/luke/projects/vuln_frontend/app/controllers/active_vulnerabilities_controller.rb to define it but the file mentioned is my controller named exactly as that.
A: Your controller should be put in app/controllers/admin/ because of the namespace. Otherwise, you can forget this directory and the namespace and just use scope:
scope :admin do
resources :active_vulnerabilities
end
class ActiveVulnerabilitiesController < ApplicationController
| |
doc_3386
|
I'm guessing it's a dependency version issue. I looked at Migrating to Spring HATEOAS 1.0 and they have you download and run a script but it doesn't run.
Here's my pom file (I took out some other dependencies to make it shorter):
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.2.0.M4</version>
<relativePath />
</parent>
<dependency>
<groupId>org.springframework.hateoas</groupId>
<artifactId>spring-hateoas</artifactId>
<version>2.2.0.M4</version>
</dependency>
<!-- Added starter-hateoas as @Gabriel suggested -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-hateoas</artifactId>
<version>2.2.0.M4</version>
</dependency>
<repositories>
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/snapshot</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/snapshot</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</pluginRepository>
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
</pluginRepository>
</pluginRepositories>
This is the stack trace:
Caused by: java.lang.ClassNotFoundException: org.springframework.hateoas.server.LinkRelationProvider
at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[na:1.8.0_60]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[na:1.8.0_60]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) ~[na:1.8.0_60]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[na:1.8.0_60]
... 37 common frames omitted
It's looking for the org.springframework.hateoas.server package, which doesn't exist in 0.25.1.RELEASE according to the API docs.
A: Not sure what I did right but this is the pom that ended up working (I did a combination of mvn tree, purge-local-repositories, clean install, and run)
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.1.6.RELEASE</version> <!-- changed this to 2.1.6 version -->
<relativePath />
</parent>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<version>2.2.0.M4</version> <!-- upgraded this -->
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>org.junit.jupiter</groupId> <!-- added this dependency-->
<artifactId>junit-jupiter-api</artifactId>
<version>5.5.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId> <!-- added this dependency-->
<artifactId>spring-boot-starter-hateoas</artifactId>
<version>2.2.0.M4</version>
</dependency>
<dependency>
<groupId>org.springframework.hateoas</groupId>
<artifactId>spring-hateoas</artifactId>
<version>0.25.1.RELEASE</version>
</dependency>
</dependencies>
<repositories>
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/snapshot</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/snapshot</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</pluginRepository>
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
</pluginRepository>
</pluginRepositories>
Again I excluded some dependencies just to make it more concise but please let me know if you have questions.
| |
doc_3387
|
I also want to add the EDM metadata format to DSpace, but I don't have an example of how to create a new one.
A: One option would be including a vocabulary server with SKOS capabilities. Try ASKOSI (as its name implies). We have used it for some projects and there are some good examples of ASKOSI-DSpace integration out there: http://www.windmusic.org/dspace/.
| |
doc_3388
|
The problem is that when I insert text using jQuery inside the div box, the box is still draggable but not resizable anymore, and I do not know why.
function displayRooms() {
for(i=0; i < room_count; i++) {
//console.log(i);
var room_test = "#room" + i;
//console.log(room_test);
if($(room_test).length == 0) {
$(".rooms").append(rooms[i]);
}
}
$(".room").draggable();
$(".room").resizable();
}
| |
doc_3389
|
Memory leak n°368 was a singleton which was not deleted and now it has disappeared.
But the other two were not accessible, the index for _CrtSetBreakAlloc* was not high enough to catch them.
Even if it is really sad to leave them alive, we didn't really care at that time.
Then, we updated a library which is used by our software (Poet/FastObject/Versant/Actian) and after that..:
We ended up with ~950 memory leaks, which were not reachable with a breakpoint.
Does someone have any idea how I can track them? We tried to go inside the loop of global variable initialization (through _initterm), but even there we couldn't get an index low enough to break on these memory leaks.
(*) we use this piece of code to get the index and set a breakpoint :
typedef struct _CrtMemBlockHeader
{
// Pointer to the block allocated just before this one:
struct _CrtMemBlockHeader *pBlockHeaderNext;
// Pointer to the block allocated just after this one:
struct _CrtMemBlockHeader *pBlockHeaderPrev;
char *szFileName; // File name
int nLine; // Line number
size_t nDataSize; // Size of user block
int nBlockUse; // Type of block
long lRequest; // Allocation number
// Buffer just before (lower than) the user's memory:
unsigned char gap[4];
} _CrtMemBlockHeader;
static long memoryBlocIndex=0;
void HKERNEL_API CheckMemoryBlocIndex() {
char* c=new char();
_CrtMemBlockHeader* mbh=reinterpret_cast<_CrtMemBlockHeader*>(c-32);
memoryBlocIndex=mbh->lRequest;
TRACE1("\nMemBlockHeader index : %ld\n\n",memoryBlocIndex);
delete c;
int brkValue, memoryIndex;
std::ifstream debugfile("breakpoint.txt");
if (debugfile) {
std::istringstream streamLine;
std::string sline;
std::getline(debugfile, sline);
streamLine.str(sline);
streamLine >> memoryIndex >> brkValue;
if (brkValue > 0) {
BreakAt(brkValue, memoryIndex);
}
}
}
void HKERNEL_API BreakAt(long request, long notifiedMemoryBlocIndex) {
_CrtSetBreakAlloc((request-notifiedMemoryBlocIndex)+memoryBlocIndex);
}
Thank you!
Sbizz
| |
doc_3390
|
The problem is that the output for any given number is the largest factor of the input, not the largest prime factor.
After some tinkering, I have confirmed that the problem is in "TestFactor", as "FindFactor" is able to calculate the largest factor correctly, but I have no idea why "TestFactor" always outputs 0, as the two functions are practically identical.
Things get even weirder when I tried to use a debugger (but this is probably due to the fact that this is my first time using it and I have no idea what I'm doing):
I set a breakpoint on "NF", a local contained in "TestFactor", to see its value, and I get a message telling me "identifier "i" is undefined", "i" being a local variable contained in "FindFactor", the function that actually works correctly.
Then I set a breakpoint on "NFactors" and this time I get the following exception: "Unhandled exception at 0x00A01D9D in Largest Prime Factor.exe: 0xC0000094: Integer division by zero."
referring to the following operation:
if (y % j == 0)
Which is definitely not the case, as j = y - 1.
Here's the full program:
#include <iostream>
using namespace std;
int FindFactor(int x);
int TestFactor(int y);
int main() {
int input, Factor, NFactors,inputsave; bool prime=false;
cout << "Please enter a number" << endl;
cin >> input;
inputsave = input;
while (prime == false) {
Factor = FindFactor(input);
NFactors = TestFactor(Factor);
if (NFactors != 0) {
prime = true;
}
else {
prime = false;
input = Factor;
}
}
cout << "The largest prime factor for " << inputsave << " is " << Factor << endl;
}
int FindFactor(int x) {
int i;
for (i = x - 1; i > 1; i--) {
if (x % i == 0) {
break;
}
else {};
}
return i;
}
int TestFactor(int y) {
int j, NF = 0;
for (j = y - 1; j > 1; j--) {
if (y % j == 0) {
NF++;
}
else {};
}
return NF;
}
To summarize:
"TestFactor": output is always 0.
Main program: output is always the largest factor of the input, not the largest prime factor.
A: There is a typo in TestFactor, use j > 1 as the condition for the loop.
| |
doc_3391
|
What causes git to add or not add this automatic merge commit on pull?
A: git pull is a shortcut for git fetch and git merge.
git merge will create a merge commit if the two branches diverged (i.e., you have local commits and there are remote commits).
You can use git pull --ff-only if you want it to fail if a merge commit is needed, then decide whether to merge manually, rebase or whatever.
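For illustration, the rough equivalent in commands (assuming the remote is origin and the branch is master — adjust to your setup):
git fetch origin
git merge origin/master      # what a plain "git pull" does; creates a merge commit if the branches diverged
git pull --ff-only           # fast-forward only: fails instead of creating a merge commit
git pull --rebase            # replay your local commits on top of the fetched commits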
| |
doc_3392
|
$("#frm").submit(function() {
$('#mailtoperson').each(function() {
if ($(this).val() == '') {
$('#wizard').smartWizard('showMessage','Please check OK and YES once the email is ready to send');
}
});
if ($('#mailtoperson').size() > 0) {
$('#wizard').smartWizard('showMessage','Please complete the required field.');
return false;
}
})
Basically I have a '#frm' with a textarea '#mailtoperson', and on submit I want to validate whether '#mailtoperson' is blank: if it is, I want a 'pls complete' message, and if it has data then I want the 'pls check ok' message to populate.
Edit:
This seems to work to trigger the blank field function:
$('#frm').submit(function()
{
if( !$(this).val() ) {
$('#wizard').smartWizard('showMessage','Please complete');
}
And this works for a simple on submit message
$('#frm').submit(function() {
$('#wizard').smartWizard('showMessage','Please check OK and YES once the email is ready to send');
})
});
But they do not work together - i.e. 'please complete' if blank, and 'please check ok' if not blank...
A: $("#frm").submit(function() {
if($('#mailtoperson').val().length == 0 ) {
$('#wizard').smartWizard('showMessage','Please complete...');
} else {
$('#wizard').smartWizard('showMessage','pls check ok...');
}
});
A: Use an if/else statement, also there's no need for a loop since there should only be one element with that id:
$("#frm").submit(function() {
if($('#mailtoperson').val().trim() == '') {
$('#wizard').smartWizard('showMessage','Please complete the required field.');
}else{
$('#wizard').smartWizard('showMessage','Please check OK and YES once the email is ready to send');
}
return false;
});
| |
doc_3393
|
x_train shape: (50000, 32, 32, 3) - y_train shape: (50000, 1)
x_test shape: (10000, 32, 32, 3) - y_test shape: (10000, 1)
In my case, I have one-hot encoded the labels, so that the sizes and shapes are as shown below, and I intend to optimize the categorical cross-entropy loss:
x_train shape: (1919, 256, 256, 3) - y_train shape: (1919, 2)
x_test shape: (476, 256, 256, 3) - y_test shape: (476, 2)
I trained a base VGG-16 based classifier with two output nodes and softmax activation on these data to minimize the categorical cross-entropy loss and obtained some classification performance.
Then I followed the Keras example code to perform Supervised contrastive learning. In the first phase, I truncated the VGG16 model in the deepest convolutional layer and added the projection head to optimize the supervised contrastive loss. The complete code is shown below:
# Supervised contrastive learning loss function
class SupervisedContrastiveLoss(keras.losses.Loss):
def __init__(self, temperature=1, name=None):
super(SupervisedContrastiveLoss, self).__init__(name=name)
self.temperature = temperature
def __call__(self, labels, feature_vectors, sample_weight=None):
# Normalize feature vectors
feature_vectors_normalized = tf.math.l2_normalize(feature_vectors, axis=1)
# Compute logits
logits = tf.divide(
tf.matmul(
feature_vectors_normalized, tf.transpose(feature_vectors_normalized)
),
self.temperature,
)
return tfa.losses.npairs_loss(tf.squeeze(labels), logits)
#%%
#add projection head and train model
vgg16 = keras.applications.VGG16(include_top=False, weights='imagenet',
input_tensor=model_input)
base_model_vgg16=Model(inputs=vgg16.input,outputs=vgg16.get_layer('block5_conv3').output)
x = base_model_vgg16.output
x = GlobalAveragePooling2D()(x)
outputs = layers.Dense(projection_units, activation="relu")(x)
encoder_with_projection_head = keras.Model(inputs=vgg16.input, outputs=outputs,
name="vgg16-encoder_with_projection-head")
encoder_with_projection_head.summary()
#%%
#train the model
sgd = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
encoder_with_projection_head.compile(optimizer=sgd,loss=SupervisedContrastiveLoss(temperature))
filepath = 'weights1/' + encoder_with_projection_head.name + '.{epoch:02d}-{val_loss:.4f}.h5'
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1,
save_weights_only=False, save_best_only=True,
mode='min', save_freq='epoch')
callbacks_list = [checkpoint]
t=time.time()
encoder_with_projection_head_history = encoder_with_projection_head.fit(
datagen.flow(X_train, Y_train,batch_size=batch_size),
steps_per_epoch=X_train.shape[0] // batch_size,
callbacks=callbacks_list,
epochs=epochs,
shuffle=True,
verbose=1,
validation_data=(X_test, Y_test))
print('Training time: %s' % (time.time()-t))
The only difference is my code uses one-hot encoded labels but the example does not. When I run the code, I get the following error:
Epoch 1/32
Traceback (most recent call last):
File "C:\Users\xxx\code1.py", line 433, in <module>
validation_data=(X_test, Y_test))
File "c:\users\xxx\appdata\local\continuum\anaconda3\envs\tf_2.4\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1100, in fit
tmp_logs = self.train_function(iterator)
File "c:\users\xxx\appdata\local\continuum\anaconda3\envs\tf_2.4\lib\site-packages\tensorflow\python\eager\def_function.py", line 828, in __call__
result = self._call(*args, **kwds)
File "c:\users\xxx\appdata\local\continuum\anaconda3\envs\tf_2.4\lib\site-packages\tensorflow\python\eager\def_function.py", line 888, in _call
return self._stateless_fn(*args, **kwds)
File "c:\users\xxx\appdata\local\continuum\anaconda3\envs\tf_2.4\lib\site-packages\tensorflow\python\eager\function.py", line 2943, in __call__
filtered_flat_args, captured_inputs=graph_function.captured_inputs) # pylint: disable=protected-access
File "c:\users\xxx\appdata\local\continuum\anaconda3\envs\tf_2.4\lib\site-packages\tensorflow\python\eager\function.py", line 1919, in _call_flat
ctx, args, cancellation_manager=cancellation_manager))
File "c:\users\xxx\appdata\local\continuum\anaconda3\envs\tf_2.4\lib\site-packages\tensorflow\python\eager\function.py", line 560, in call
ctx=ctx)
File "c:\users\xxx\appdata\local\continuum\anaconda3\envs\tf_2.4\lib\site-packages\tensorflow\python\eager\execute.py", line 60, in quick_execute
inputs, attrs, num_outputs)
InvalidArgumentError: logits and labels must be broadcastable: logits_size=[16,16] labels_size=[32,16]
[[{{node PartitionedCall/softmax_cross_entropy_with_logits_1}}]] [Op:__inference_train_function_3780]
Function call stack:
train_function
I suspect the "SupervisedContrastiveLoss" class needs to be modified to support training with one-hot encoded labels.
A: Ideally, you should convert the one-hot encoded labels to integer indices and pass them into the model.fit() method.
In case you want to do this conversion inside the loss, you can add this line before the return statement in the __call__() method of SupervisedContrastiveLoss class.
labels = tf.argmax(labels, axis=1) # converting one-hot labels to integer indices
Note - Though very small, this is unnecessary overhead in compute time during model training.
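A minimal sketch of the first option (converting before fit; Y_train/Y_test are assumed to be the one-hot arrays from the question, and datagen/batch_size/epochs the objects already defined there):
import numpy as np
y_train_idx = np.argmax(Y_train, axis=1)  # shape (1919,) integer class indices
y_test_idx = np.argmax(Y_test, axis=1)    # shape (476,)  integer class indices
# train against the integer labels instead of the one-hot ones
encoder_with_projection_head_history = encoder_with_projection_head.fit(
    datagen.flow(X_train, y_train_idx, batch_size=batch_size),
    validation_data=(X_test, y_test_idx),
    epochs=epochs, verbose=1)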
| |
doc_3394
|
But I think I'm hitting some kind of limitation or bug. Sorry but I was not able to reproduce this on a smaller codebase.
This is the basic Framework I came up with:
module TestRunner
open System
type TestOptions = {
Writer : ConsoleColor -> string -> unit}
type TestResults = {
Time : TimeSpan
Failure : exn option
}
type Test = {
Name : string
Finished : IEvent<TestResults>
SetFinished : TestResults -> unit
TestFunc : TestOptions -> Async<TestResults> }
let createTest name f =
let ev = new Event<TestResults>()
{
Name = name
Finished = ev.Publish
SetFinished = (fun res -> ev.Trigger res)
TestFunc =
(fun options -> async {
let watch = System.Diagnostics.Stopwatch.StartNew()
try
do! f options
watch.Stop()
return { Failure = None; Time = watch.Elapsed }
with exn ->
watch.Stop()
return { Failure = Some exn; Time = watch.Elapsed }
})}
let simpleTest name f =
createTest name (fun options -> f options.Writer)
/// Create a new Test and change the result
let mapResult mapping test =
{ test with
TestFunc =
(fun options -> async {
let! result = test.TestFunc options
return mapping result})}
let writeConsole color f =
let old = System.Console.ForegroundColor
try
System.Console.ForegroundColor <- color
f()
finally
System.Console.ForegroundColor <- old
let printColor color (text:String) =
writeConsole color (fun _ -> Console.WriteLine(text))
type WriterMessage =
| NormalWrite of ConsoleColor * String
| StartTask of AsyncReplyChannel<int> * String
| WriteMessage of int * ConsoleColor * String
| EndTask of int
/// will handle printing jobs for two reasons
/// 1. Nice output grouped by tests (StartTask,WriteMessage,EndTask)
/// 2. Print Summary after all tests finished (NormalWrite)
let writer = MailboxProcessor.Start (fun inbox ->
let currentTask = ref 0
let newHandle (returnHandle:AsyncReplyChannel<int>) =
let handle = System.Threading.Interlocked.Increment currentTask
returnHandle.Reply handle
handle
// the tasks describe which tasks are currently waiting to be processed
let rec loop tasks = async {
let! newTasks =
match tasks with
/// We process the Task with the number t and the name name
| (t, name) :: next ->
inbox.Scan
(fun msg ->
match msg with
| EndTask (endTask) ->
// if the message is from the current task finish it
if t = endTask then
Some (async { return next })
else None
| WriteMessage(writeTask, color, message) ->
if writeTask = t then
Some (async {
printColor color (sprintf "Task %s: %s" name message)
return tasks
})
else None
| StartTask (returnHandle, name) ->
// Start any tasks instantly and add them to the list (because otherwise they would just wait for the response)
Some (async {
let handle = newHandle returnHandle
return (List.append tasks [handle, name]) })
| _ -> None)
// No Current Tasks so just start ones or process the NormalWrite messages
| [] ->
inbox.Scan
(fun msg ->
match msg with
| StartTask (returnHandle, name) ->
Some (async {
let handle = newHandle returnHandle
return [handle, name] })
| NormalWrite(color, message) ->
Some (async {
printColor color message
return []
})
| _ -> None)
return! loop newTasks
}
loop [])
/// Write a normal message via writer
let writerWrite color (text:String) =
writer.Post(NormalWrite(color, text))
/// A wrapper around the communication (to not miss EndTask for a StartTask)
let createTestWriter name f = async {
let! handle = writer.PostAndAsyncReply(fun reply -> StartTask(reply, name))
try
let writer color s =
writer.Post(WriteMessage(handle,color,s))
return! f(writer)
finally
writer.Post (EndTask(handle))
}
/// Run the given test and print the results
let testRun t = async {
let! results = createTestWriter t.Name (fun writer -> async {
writer ConsoleColor.Green (sprintf "started")
let! results = t.TestFunc { Writer = writer }
match results.Failure with
| Some exn ->
writer ConsoleColor.Red (sprintf "failed with %O" exn)
| None ->
writer ConsoleColor.Green (sprintf "succeeded!")
return results})
t.SetFinished results
}
/// Start the given task with the given amount of workers
let startParallelMailbox workerNum f =
MailboxProcessor.Start(fun inbox ->
let workers = Array.init workerNum (fun _ -> MailboxProcessor.Start f)
let rec loop currentNum = async {
let! msg = inbox.Receive()
workers.[currentNum].Post msg
return! loop ((currentNum + 1) % workerNum)
}
loop 0 )
/// Runs all posted Tasks
let testRunner =
startParallelMailbox 10 (fun inbox ->
let rec loop () = async {
let! test = inbox.Receive()
do! testRun test
return! loop()
}
loop ())
/// Start the given tests and print a sumary at the end
let startTests tests = async {
let! results =
tests
|> Seq.map (fun t ->
let waiter = t.Finished |> Async.AwaitEvent
testRunner.Post t
waiter
)
|> Async.Parallel
let testTime =
results
|> Seq.map (fun res -> res.Time)
|> Seq.fold (fun state item -> state + item) TimeSpan.Zero
let failed =
results
|> Seq.map (fun res -> res.Failure)
|> Seq.filter (fun o -> o.IsSome)
|> Seq.length
let testCount = results.Length
if failed > 0 then
writerWrite ConsoleColor.DarkRed (sprintf "--- %d of %d TESTS FAILED (%A) ---" failed testCount testTime)
else
writerWrite ConsoleColor.DarkGray (sprintf "--- %d TESTS FINISHED SUCCESFULLY (%A) ---" testCount testTime)
}
Now the exception is only triggered when I use a specific set of tests
which do some crawling on the web (some fail and some don't, which is fine):
#r @"Yaaf.GameMediaManager.Primitives.dll";; // See below
open TestRunner
let testLink link =
Yaaf.GameMediaManager.EslGrabber.getMatchMembers link
|> Async.Ignore
let tests = [
// Some working links (links that should work)
yield!
[ //"TestMatch", "http://www.esl.eu/eu/wire/anti-cheat/css/anticheat_test/match/26077222/"
"MatchwithCheater", "http://www.esl.eu/de/csgo/ui/versus/match/3035028"
"DeletedAccount", "http://www.esl.eu/de/css/ui/versus/match/2852106"
"CS1.6", "http://www.esl.eu/de/cs/ui/versus/match/2997440"
"2on2Versus", "http://www.esl.eu/de/css/ui/versus/match/3012767"
"SC2cup1on1", "http://www.esl.eu/eu/sc2/go4sc2/cup230/match/26964055/"
"CSGO2on2Cup", "http://www.esl.eu/de/csgo/cups/2on2/season_08/match/26854846/"
"CSSAwpCup", "http://www.esl.eu/eu/css/cups/2on2/awp_cup_11/match/26811005/"
] |> Seq.map (fun (name, workingLink) -> simpleTest (sprintf "TestEslMatches_%s" name) (fun o -> testLink workingLink))
]
startTests tests |> Async.Start;; // this will produce the Exception now and then
https://github.com/matthid/Yaaf.GameMediaManager/blob/core/src/Yaaf.GameMediaManager.Primitives/EslGrabber.fs is the code and you can download https://github.com/downloads/matthid/Yaaf.GameMediaManager/GameMediaManager.%200.9.3.1.wireplugin (this is basically a renamed zip archive) and extract it to get the Yaaf.GameMediaManager.Primitives.dll binary
(you can paste it into FSI instead of downloading when you want but then you have to reference the HtmlAgilityPack)
I can reproduce this with Microsoft (R) F# 2.0 Interactive, Build 4.0.40219.1. The problem is that the exception is not always triggered (but very often), and the stack trace tells me nothing:
System.Exception: multiple waiting reader continuations for mailbox
bei <StartupCode$FSharp-Core>[email protected](AsyncParams`1 _arg11)
bei <StartupCode$FSharp-Core>.$Control.loop@413-40(Trampoline this, FSharpFunc`2 action)
bei Microsoft.FSharp.Control.Trampoline.ExecuteAction(FSharpFunc`2 firstAction)
bei Microsoft.FSharp.Control.TrampolineHolder.Protect(FSharpFunc`2 firstAction)
bei <StartupCode$FSharp-Core>.$Control.finishTask@1280[T](AsyncParams`1 _arg3, AsyncParamsAux aux, FSharpRef`1 firstExn, T[] results, TrampolineHolder trampolineHolder, Int32 remaining)
bei <StartupCode$FSharp-Core>.$Control.recordFailure@1302[T](AsyncParams`1 _arg3, AsyncParamsAux aux, FSharpRef`1 count, FSharpRef`1 firstExn, T[] results, LinkedSubSource innerCTS, TrampolineHolder trampolineHolder, FSharpChoice`2 exn)
bei <StartupCode$FSharp-Core>[email protected](Exception exn)
bei Microsoft.FSharp.Control.AsyncBuilderImpl.protectedPrimitive@690.Invoke(AsyncParams`1 args)
bei <StartupCode$FSharp-Core>.$Control.loop@413-40(Trampoline this, FSharpFunc`2 action)
bei Microsoft.FSharp.Control.Trampoline.ExecuteAction(FSharpFunc`2 firstAction)
bei Microsoft.FSharp.Control.TrampolineHolder.Protect(FSharpFunc`2 firstAction)
bei <StartupCode$FSharp-Core>[email protected](Object state)
bei System.Threading.QueueUserWorkItemCallback.WaitCallback_Context(Object state)
bei System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
bei System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
bei System.Threading.ThreadPoolWorkQueue.Dispatch()
bei System.Threading._ThreadPoolWaitCallback.PerformWaitCallback()
Because this will be triggered on a worker thread, which I have no control over, it will crash the application (not FSI, but the exception will be displayed there too).
I found http://cs.hubfs.net/topic/Some/2/59152 and http://cs.hubfs.net/topic/None/59146 but I do not use StartChild and I don't think I'm invoking Receive from multiple Threads at the same time somehow?
Is there anything wrong with my code, or is this indeed a bug? How can I work around this, if possible?
I noticed in FSI that all tests run as expected when the exception is silently ignored. How can I do the same?
EDIT: I noticed that after I fixed the failing unit tests it works properly. However, I still cannot reproduce this with a smaller codebase, for example with my own failing tests.
Thanks, matthid
A: My feeling is that the limitation would be within the MailboxProcessor itself rather than async.
To be honest I would err on the side of caution with the Scan functions. I wrote a blog post on the dangers of using them.
Is it possible to process the tasks with the standard receiving mechanism rather than using Scan functions?
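A rough, untested sketch of what that shape could look like inside the same MailboxProcessor body — note that, unlike Scan, a plain Receive removes whatever message is next, so messages for tasks that are not current would need to be buffered explicitly:
let rec loop tasks = async {
    let! msg = inbox.Receive()
    match msg, tasks with
    | StartTask (returnHandle, name), _ ->
        let handle = newHandle returnHandle
        return! loop (tasks @ [handle, name])
    | EndTask t, (current, _) :: rest when t = current ->
        return! loop rest
    | WriteMessage (t, color, message), (current, name) :: _ when t = current ->
        printColor color (sprintf "Task %s: %s" name message)
        return! loop tasks
    | NormalWrite (color, message), [] ->
        printColor color message
        return! loop tasks
    | _ ->
        // a message for a task that is not current: it would need to be queued and replayed here
        return! loop tasks }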
As a note, inside async there is a trampoline that is used so that the same thread is reused a set number of times (I think this is set to 300) to avoid unnecessary thread pool usage, so when debugging you may see this behaviour.
I would approach this problem slightly differently decomposing the separate components into pipeline stages rather than the nested async blocks. I would create a supervisor component and routing component.
The Supervisor would look after the initial tests and post messages to a routing component that would round-robin the requests to other agents. When the tasks are completed they could post back to the supervisor.
I realise this does not really help with the problem in the current code but I think you will have to decompose the problem anyway in order to debug the async parts of the system.
A: I do believe there was a bug in the 2.0 implementation of Scan/TryScan/Receive that might spuriously cause the
multiple waiting reader continuations for mailbox
exception; I think that bug is now fixed in the 3.0 implementation. I haven't looked carefully at your code to try to ensure you're only trying to receive one message at a time in your implementation, so it's also possible this might be a bug in your code. If you can try it out against F# 3.0, it would be great to know if this goes away.
A: Some notes, in case someone finds my experiences useful (it took a long time debugging multiple processes in order to locate the problem):
Execution and throughput started to get clogged up with just 50 Agents/Mailboxes. Sometimes with a light load it would work for the first round of messages but anything as significant as making a call to a logging library triggered the longer delay.
Debugging using the Threads/Parallel Stacks window in the VS IDE, the runtime is waiting on the results of
FSharpAsync.RunSynchronously -> CancellationTokenOps.RunSynchronously call by Trampoline.ExecuteAction
I suspect that the underlying ThreadPool is throttling startup (after the first time it seems to run OK). It's a very long delay. I'm using agents to serialise minor computations within certain queues, while allowing the main dispatching agent to remain responsive, so the delay is somewhere in the CLR.
I found that running MailboxProcessor Receive with a Timeout within a try-with stopped the delay, but this needed to be wrapped in an async block to stop the rest of the program slowing down, however short the delay. Despite a little bit of twiddling around, I am very happy with the F# MailboxProcessor for implementing the actor model.
A: Sadly I never actually could reproduce this on a smaller code base, and now I would use NUnit with async test support instead of my own implementation. I used agents (MailboxProcessor) and asyncs in various projects since them and never encountered this again...
| |
doc_3395
|
function expand()
{
var slidingDiv = document.getElementById("expandDiv");
var startPosition = 350;
var stopPosition = -150;
if ((parseInt(slidingDiv.style.top) > stopPosition )&&(parseInt(slidingDiv.style.top) < startPosition ))
{
slidingDiv.style.top = parseInt(slidingDiv.style.top) + 5 + "px";
setTimeout(expand, 5);
}
}
.......
<a onclick="expand();">Expand</a>
<div id="expandDiv" style="width:300;height:100;background-color:#fff;position:absolute;border:1px solid #ccc;">hello<br />great testing</div>
A: function expand()
{
var slidingDiv = document.getElementById("expandDiv");
var startPosition = 350;
var stopPosition = -150;
var value = parseInt(slidingDiv.style.top.replace("px", ""));
if (value > stopPosition && value < startPosition)
{
slidingDiv.style.top = (value + 5).toString() + "px";
setTimeout(expand, 5);
}
}
Alternatively, use jQuery to do this.
function expand()
{
$("#expandDiv").slideDown(2000, function() {
// Afterwards
});
}
| |
doc_3396
|
My Java file 'App.java':
package mhealth;
import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.impl.DefaultCamelContext;
import javax.sql.DataSource;
import org.apache.camel.impl.SimpleRegistry;
import org.apache.commons.dbcp.BasicDataSource;
public class App {
public static void main(String args[]) throws Exception {
String url = "jdbc:postgresql://x.x.x.x:5432/mhealth";
DataSource dataSource = setupDataSource(url);
SimpleRegistry reg = new SimpleRegistry() ;
reg.put("myDataSource",dataSource);
CamelContext context = new DefaultCamelContext(reg);
context.addRoutes(new App().new MyRouteBuilder());
context.start();
}
class MyRouteBuilder extends RouteBuilder {
public void configure() {
from("timer://Timer?period=3s")
.setBody(constant("SELECT count(*)>0 as count FROM forms_data " +
"where created_time > (now() - INTERVAL '4 days' + interval '5 hours 30 minutes')" +
"OR last_updated_time > (now() - INTERVAL '4 days' + interval '5 hours 30 minutes') "))
.to("jdbc:myDataSource")
.split(body())
.choice()
.when(body().convertToString().contains("count=t"))
.setBody(constant("select * from beneficiary_journey"))
.to("jdbc:myDataSource")
.split(body())
.to("stream:out");
}
}
private static DataSource setupDataSource(String connectURI) {
BasicDataSource ds = new BasicDataSource();
ds.setDriverClassName("org.postgresql.Driver");
ds.setUsername("postgres");
ds.setPassword("postgres");
ds.setUrl(connectURI);
System.out.println("works!!");
return ds;
}
}
My POM file:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>test</groupId>
<artifactId>healthTest</artifactId>
<version>1.0-SNAPSHOT</version>
<!-- <packaging>jar</packaging> -->
<name>healthTest</name>
<url>http://maven.apache.org</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>3.8.1</version>
</dependency>
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>9.3-1100-jdbc41</version>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-core</artifactId>
<version>2.16.1</version>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-jms</artifactId>
<version>2.16.1</version>
</dependency>
<dependency>
<groupId>commons-dbcp</groupId>
<artifactId>commons-dbcp</artifactId>
<version>1.2.2</version>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-jdbc</artifactId>
<version>2.16.1</version>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-stream</artifactId>
<version>2.16.1</version>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.38</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>1.7.14</version>
</dependency>
</dependencies>
<build>
<plugins>
<!-- Allows the example to be run via 'mvn compile exec:java' -->
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>1.2.1</version>
<configuration>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>java</goal>
</goals>
</execution>
</executions>
<!-- <includePluginDependencies>false</includePluginDependencies> -->
<mainClass>mhealth.App</mainClass>
<cleanupDaemonThreads>false</cleanupDaemonThreads>
</configuration>
</plugin>
</plugins>
</build>
</project>
The error I get when I run
mvn install && mvn exec:java -Dexec.mainClass="mhealth.App"
It polls for a couple of seconds and then I get this error on my console:
[WARNING] thread Thread[Camel (camel-1) thread #0 - timer://Timer,5,mhealth.App] was interrupted but is still alive after waiting at least 15000msecs
[WARNING] thread Thread[Camel (camel-1) thread #0 - timer://Timer,5,mhealth.App] will linger despite being asked to die via interruption
[WARNING] thread Thread[Abandoned connection cleanup thread,5,mhealth.App] will linger despite being asked to die via interruption
[WARNING] thread Thread[Timer-0,5,mhealth.App] will linger despite being asked to die via interruption
[WARNING] NOTE: 3 thread(s) did not finish despite being asked to via interruption. This is not a problem with exec:java, it is a problem with the running code. Although not serious, it should be remedied.
[WARNING] Couldn't destroy threadgroup org.codehaus.mojo.exec.ExecJavaMojo$IsolatedThreadGroup[name=mhealth.App,maxpri=10]
java.lang.IllegalThreadStateException
at java.lang.ThreadGroup.destroy(ThreadGroup.java:775)
at org.codehaus.mojo.exec.ExecJavaMojo.execute(ExecJavaMojo.java:334)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:355)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:584)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:216)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 17.996 s
[INFO] Finished at: 2016-02-17T19:45:04+05:30
[INFO] Final Memory: 13M/172M
Of course, if I try <cleanupDaemonThreads>true</cleanupDaemonThreads>, the program just builds successfully and stops. I need the program to keep running and polling:
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building healthTest 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ healthTest ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/ashish/Desktop/healthTest/src/main/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ healthTest ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] >>> exec-maven-plugin:1.2.1:java (default-cli) > validate @ healthTest >>>
[INFO]
[INFO] <<< exec-maven-plugin:1.2.1:java (default-cli) < validate @ healthTest <<<
[INFO]
[INFO] --- exec-maven-plugin:1.2.1:java (default-cli) @ healthTest ---
works!!
log4j:WARN No appenders could be found for logger (org.apache.camel.impl.DefaultCamelContext).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.952 s
[INFO] Finished at: 2016-02-17T19:55:57+05:30
[INFO] Final Memory: 19M/169M
[INFO] ------------------------------------------------------------------------
Sorry for the long post; I thought it'd be best if I didn't skip any details.
A: The start method on CamelContext is not blocking, read its javadoc. You need to keep the JVM running somehow. You can also use Camel's Main class that can do this. See for example: https://github.com/apache/camel/blob/master/tooling/archetypes/camel-archetype-java/src/main/resources/archetype-resources/src/main/java/MainApp.java
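A rough sketch of that approach (adapted from the App class above; Main here is org.apache.camel.main.Main from camel-core, and run() blocks until the JVM is stopped — verify the exact registry-binding API for your Camel version):
import org.apache.camel.main.Main;
public class App {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://x.x.x.x:5432/mhealth";
        Main main = new Main();
        main.bind("myDataSource", setupDataSource(url));     // registry binding; API assumed, check your version
        main.addRouteBuilder(new App().new MyRouteBuilder()); // same route builder as in the original class
        main.enableHangupSupport();                           // shut down cleanly on Ctrl+C
        main.run();                                           // blocks, so the timer route keeps polling
    }
    // setupDataSource and MyRouteBuilder as in the original App class
}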
| |
doc_3397
|
(1st try) iheatmapr makes it easy to add bars as below, but I couldn't see how to add labels inside the heatmap on individual cells.
library(tidyverse)
library(iheatmapr)
library(RColorBrewer)
in_out <- data.frame(
'Economic' = c(2,1,1,3,4),
'Education' = c(0,3,0,1,1),
'Health' = c(1,0,1,2,0),
'Social' = c(2,5,0,3,1) )
rownames(in_out) <- c('Habitat', 'Resource', 'Combined', 'Protected', 'Livelihood')
GreenLong <- colorRampPalette(brewer.pal(9, 'Greens'))(12)
lowGreens <- GreenLong[0:5]
in_out_matrix <- as.matrix(in_out)
main_heatmap(in_out_matrix, colors = lowGreens)
in_out_plot <- iheatmap(in_out_matrix,
colors=lowGreens) %>%
add_col_labels() %>%
add_row_labels() %>%
add_col_barplot(y = colSums(bcio)/total) %>%
add_row_barplot(x = rowSums(bcio)/total)
in_out_plot
Then used: save_iheatmap(in_out_plot, "iheatmapr_test.png")
Because I couldn't use ggsave(device = ragg::agg_png etc.) with an iheatmapr object.
Also, the iheatmapr object's apparent incompatibility (maybe I am wrong) with ggsave() is a problem for me because I normally use the ragg package to export AGG images to preserve font sizes. I suspect some other heatmap packages make custom objects that may be incompatible with patchwork and ggsave.
ggsave("png/iheatmapr_test.png", plot = in_out_plot,
device = ragg::agg_png, dpi = 72,
units="in", width=3.453, height=2.5,
scaling = 0.45)
(2nd try) ComplexHeatmap makes it easy to label individual number "cells" inside a heatmap, and also offers marginal bars among its "Annotations", and I have tried it, but its colour palette system (which uses integers to refer to a set of colours) doesn't suit my RGB vector colour gradient, and overall it is a sophisticated package clearly designed for graphics more advanced than what I am doing.
I am aiming for style as shown in screenshot example below, which was made in Excel.
Please can anyone suggest a more suitable R package for a simple heatmap like this with marginal bars, and number labels inside?
A: Instead of relying on packages which offer out-of-the-box solutions one option to achieve your desired result would be to create your plot from scratch using ggplot2 and patchwork which gives you much more control to style your plot, to add labels and so on.
Note: The issue with iheatmapr is that it returns a plotly object, not a ggplot. That's why you can't use ggsave.
library(tidyverse)
library(patchwork)
in_out <- data.frame(
'Economic' = c(1,1,1,5,4),
'Education' = c(0,0,0,1,1),
'Health' = c(1,0,1,0,0),
'Social' = c(1,1,0,3,1) )
rownames(in_out) <- c('Habitat', 'Resource', 'Combined', 'Protected', 'Livelihood')
in_out_long <- in_out %>%
mutate(y = rownames(.)) %>%
pivot_longer(-y, names_to = "x")
# Summarise data for marginal plots
yin <- in_out_long %>%
group_by(y) %>%
summarise(value = sum(value)) %>%
mutate(value = value / sum(value))
xin <- in_out_long %>%
group_by(x) %>%
summarise(value = sum(value)) %>%
mutate(value = value / sum(value))
# Heatmap
ph <- ggplot(in_out_long, aes(x, y, fill = value)) +
geom_tile() +
geom_text(aes(label = value), size = 8 / .pt) +
scale_fill_gradient(low = "#F7FCF5", high = "#00441B") +
theme(legend.position = "bottom") +
labs(x = NULL, y = NULL, fill = NULL)
# Marginal plots
py <- ggplot(yin, aes(value, y)) +
geom_col(width = .75) +
geom_text(aes(label = scales::percent(value)), hjust = -.1, size = 8 / .pt) +
scale_x_continuous(expand = expansion(mult = c(.0, .25))) +
theme_void()
px <- ggplot(xin, aes(x, value)) +
geom_col(width = .75) +
geom_text(aes(label = scales::percent(value)), vjust = -.5, size = 8 / .pt) +
scale_y_continuous(expand = expansion(mult = c(.0, .25))) +
theme_void()
# Glue plots together
px + plot_spacer() + ph + py + plot_layout(ncol = 2, widths = c(2, 1), heights = c(1, 2))
| |
doc_3398
|
Is it possible for a specific position in a string to be queried?
Eg. SELECT ID FROM Names WHERE Firstname[0] = "J" AND Lastname = "Doe"
If anything is unclear please let me know, any help is appreciated, thank you.
A: Yes, you can, but rather like this
SELECT ID
FROM Names
WHERE Firstname LIKE 'J%' AND Lastname = 'Doe';
Notes:
*
*In SQL, strings should be delimited with single quotes.
*The LIKE operation has a pattern. The pattern says the first character is J; % is a wildcard that matches 0 or more characters.
*Databases generally do not have built-in array types that are compatible across databases. Further, no database (as far as I know) treats strings as arrays of characters. These concepts are somewhat foreign to SQL (although some databases do support arrays).
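If you really need to test a specific character position (rather than a prefix), most databases also offer a substring function; a hedged sketch (the exact function name varies by vendor, e.g. SUBSTRING vs SUBSTR):
SELECT ID
FROM Names
WHERE SUBSTRING(Firstname, 1, 1) = 'J'   -- 1-based: the first character
  AND Lastname = 'Doe';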
A: You could use LIKE in WHERE clause.
example:-
SELECT ID FROM Names WHERE Firstname LIKE "J%" AND Lastname = "Doe"
A: You can use the keyword LIKE. For example: WHERE names LIKE 'a%' will return all names starting with the character 'a'. Likewise, '%a' will return all names ending with 'a'.
| |
doc_3399
|
In the above table screenshot, I want to disable the Approve button once the user is Approved (i.e., the button is slid to the right).
The button on click calls an API which updates the is_confirm field (boolean type) in the DB, and the same field is being used for disabling the button.
But the button is not getting disabled, it can still be slid.
<tr *ngFor="let user of users">
<td>{{ user.name }}</td>
<td>{{ user.userName }}</td>
<td>{{ user.occupation }}</td>
<td>{{ user.maritalStatus }}</td>
<td>{{ user.mobileNo }}</td>
<td>
<div class="row">
<div class="col-sm-4"><button class="btn"(click)="viewUserDetails(user.id)"><i class="fa fa-address-card fa-lg" aria-hidden="true"></i></button></div>
<div class="col-sm-4"><label class="switch">
<input type="checkbox" [disabled]="user.isConfirm" [checked]="user.isConfirm" (change)="approveUserById(user.id)">
<span class="slider round"></span>
</label></div>
<div class="col-sm-4"><button class="btn" (click)="lockOrUnlockUserById(user)"><i [ngClass]="user.isLock ? 'fa fa-unlock fa-lg' : 'fa fa-lock fa-lg'" aria-hidden="true"></i></button></div>
</div>
</td>
</tr>
Component.ts
//Method to get all the users called in onInit
viewUsersList(roleId: string) {
let cityObj = {roleId:roleId,cityId:this.selectedCity}
this.httpClient.post('USER_LIST_API', cityObj).subscribe(
(responseData: any) => {
this.users = <ViewUser[]>responseData
console.log("Response Data: ",responseData)
console.log(this.users)
},
(error: any) => {
console.log(error)
this.router.navigate(['error']);
}
)
}
//Method to approve a user
approveUserById(userId: string) {
this.httpClient.post('APPROVE_USER_API', userId).subscribe(
(responseData: any) => {
if (responseData.message === 'TRUE') {
//*This is the place I need help to refresh the data *
console.log("Response ",responseData)
}
}
)
A: Try [disabled]="!user.isConfirm"
<input type="checkbox" [disabled]="!user.isConfirm" [checked]="user.isConfirm" (change)="approveUserById(user.id)">
<span class="slider round"></span>
A: Try: <button [disabled]="!is_confirm" ...>
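As for the spot marked "This is the place I need help to refresh the data": one minimal option — a sketch under assumptions (the whole user object is passed from the template instead of just the id, and roleId is assumed to be available for a re-fetch) — is to flip the local flag so the existing [disabled]="user.isConfirm" binding takes effect:
// template: (change)="approveUserById(user)"
approveUserById(user: ViewUser) {
  this.httpClient.post('APPROVE_USER_API', user.id).subscribe(
    (responseData: any) => {
      if (responseData.message === 'TRUE') {
        user.isConfirm = true;                 // disables the toggle via the existing binding
        // or: this.viewUsersList(roleId);     // alternatively, re-fetch the whole list
      }
    }
  );
}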
|